| text | label | id_ |
|---|---|---|
Since its introduction by {{cite:54f6e1a7482ee32c9461e6d4550660a0ad486456}}, the BIC Bayes factor has been a popular method for estimating evidential value from empirical data. One of the attractive features of the BIC Bayes factor is that it can be computed with a simple calculator. This makes it easy to implement not only for the researcher, but also in the context of a beginning statistics course. However, it is a large-sample approximation, and the discussion above calls into question its computational stability for smaller examples, especially those which might be used in a traditional statistics course.
| m | e02d382e1fc69dedfaeb0f620b6af8e3 |
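For illustration, the BIC Bayes factor alluded to above reduces to a one-line computation from the two models' BIC values (a minimal sketch; the numeric BIC values below are purely illustrative):

```python
import math

def bic_bayes_factor(bic_null: float, bic_alt: float) -> float:
    """BIC approximation to the Bayes factor BF10 in favor of the alternative.

    Uses the standard large-sample identity BF10 ~ exp((BIC0 - BIC1) / 2),
    so a lower BIC for the alternative model yields BF10 > 1.
    """
    return math.exp((bic_null - bic_alt) / 2)

# Two hypothetical models fit to the same data:
bf10 = bic_bayes_factor(bic_null=105.0, bic_alt=100.0)
print(round(bf10, 2))  # exp(2.5) ~ 12.18
```

This is exactly the "simple calculator" computation: two BIC values in, one evidence ratio out.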
In this section, we present experimental results for multi-class classification on the MNIST, CIFAR-10, and CIFAR-100 datasets to validate our theory {{cite:9e100a40f10ba20d481e1266391303f246085f39}}. All our experiments are conducted on a Windows machine with an Intel Core(TM) i9 CPU @ 3.7 GHz, 64 GB RAM, and one NVIDIA RTX 3090 GPU.
| r | bacf7ffec438090e62c12983cdc7c59d |
will prompt the model {{cite:dc1cffc679a5d9037bf872064becd8855364768a}} to perform the task of French-to-English translation, returning:
| i | c6214e4807b9e7d5c1a4df5de49e7813 |
In practice, however, it turns out to be surprisingly difficult to come up with data collection setups that lead to interesting studies of both these aspects of grounding.
Existing tasks still adopt a very rigid interaction protocol, where e.g. an asymmetric interaction between a question asker and a question answerer produces uniform sequences of question-answer pairs (as in the “Visual Dialogue” setting of das2017visual for instance).
Here, it is impossible to model e.g. turn-taking, clarification, collaborative utterance construction, which are typical phenomena of conversational grounding in interaction {{cite:95cf0c9c7099a8d893a6843459f3b4b49d1a38de}}.
Other tasks follow the traditional idea of the reference game {{cite:9262a84761987085e2aa3fe3704fc25b4b0833e4}}, {{cite:9a3bc6c3baba669f8243c1d095bf5bab90ce4a4c}} in some way, but try to set up the game such that the referent can only be established over a sequence of turns.
While this approach leads to goal-oriented dialogue,
the goal is still directly related to reference and visual grounding.
However, realistic, every-day communication between human speakers rarely centers entirely around establishing reference. It has been argued in the literature that reference production radically changes if it is the primary goal of an interactive game, rather than embedded in a dialogue that tries to achieve a more high-level communicative goal {{cite:5d5a14d790c62c436dbf71b58d8f0e75bb2b0710}}.
| i | 81e9066c5c45ee74622d0dc721f5fb39 |
We have found significant discrepancies in the experimental settings used in previous width-based planning papers for evaluating their algorithms on the Atari-2600 games. We believe a clear and consistent evaluation protocol should be set out for planning-based algorithms applied to the Atari-2600 games to facilitate the direct comparison of their results. This could be similar to the evaluation protocol for the Atari-2600 games set out by Machado et al. {{cite:4363a13a08718aad9677c61492bf3d9f0b6fd6ed}}, which was mainly focused on RL agents and included recommendations on episode termination, setting of hyper-parameters, measuring training data, summarising learning performance and injecting stochasticity. However, Machado et al. do not discuss evaluation settings that are vital to the deterministic planning setting we have explored in this paper, such as planning budgets and caching of transitions within lookahead trees. We hope that by having addressed some of the discrepancies in the experimental settings of previous width-based algorithms, such as the size of the action set and the planning budget used, future research on planning agents for the Atari-2600 games can be more easily assessed. We were able to observe interesting patterns in the relative performance of algorithms by segmenting the Atari-2600 games according to their different game characteristics. We are not aware of other works that analyse the performance of agents with regard to the characteristics of specific Atari-2600 games. We believe this taxonomy will provide useful insights into the behaviour of agents on the Atari-2600 games.
| d | 50d5b881bee09581a0f627f6dc2c9f5c |
It is well-known that {{formula:d544207b-626c-4f46-bf4b-22cd8ef08fc6}} for {{formula:46115734-ff7c-4951-b9ad-96f70de05100}} (cf. {{cite:8f776dcf6af4d3129621c4ec25535c257deb4b80}}). On the other hand, for {{formula:476d7015-61a5-4472-863f-0f8bf08ab420}} , we have {{formula:2c1d2a29-8327-4aa2-8ef2-39f2b291f4b9}} by an observation of Lichtenbaum {{cite:0885810b39f29015361d1cb78925f9d78339540c}}. Hence the spectral sequence degenerates to two diagonal lines {{formula:0050f4e8-c280-443c-9f3e-07401059ef67}} and {{formula:0ec384e8-c974-4cef-8f25-fcb880d6700e}} . Consequently, we obtain an identification
{{formula:f6c5e4e0-55cf-4e21-a936-8a8cc77190f8}}
| r | 392d07e15496f74b02d5d8e903b9a5db |
In this section, we give a detailed description of MVSTER. The network architecture is illustrated in Fig. REF . Given a reference image and its corresponding source images, we first extract 2D multi-scale features using a Feature Pyramid Network (FPN) {{cite:88f8871cfe99b1e96039c1399ff81809f38ed081}}. Source image features are then warped into the reference camera frustum to construct source volumes via differentiable homography (Sec. REF ). Subsequently, we leverage the epipolar Transformer to aggregate the source volumes and produce the cost volume, which is regularized by lightweight 3D CNNs to make depth estimates (Sec. REF ). Our pipeline is further built in a cascade structure, propagating the depth map in a coarse-to-fine manner (Sec. REF ). To reduce erroneous depth hypotheses during depth propagation, we formulate depth estimation as a depth-aware classification problem and optimize it with optimal transport. Finally, the network losses are given (Sec. REF ).
| m | 71fa785e18b4d0b849b0f63e2a67bc79 |
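The warping step above uses the standard fronto-parallel plane-sweep homography of learning-based MVS; a plain-numpy sketch with illustrative camera parameters follows (the actual pipeline applies this differentiably to feature maps, not single pixels):

```python
import numpy as np

def plane_sweep_homography(K_ref, K_src, R, t, depth):
    """Homography mapping reference-image pixels to source-image pixels,
    assuming points lie on the fronto-parallel plane at `depth` in the
    reference camera frame. (R, t) take reference-camera coordinates to
    source-camera coordinates."""
    n = np.array([[0.0, 0.0, 1.0]])                      # plane normal (principal axis)
    H = K_src @ (R - t.reshape(3, 1) @ n / depth) @ np.linalg.inv(K_ref)
    return H

# Illustrative pinhole intrinsics and a small sideways camera motion:
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.1, 0.0, 0.0])                            # 10 cm baseline along x

H = plane_sweep_homography(K, K, R, t, depth=2.0)
p = H @ np.array([320.0, 240.0, 1.0])                    # warp the principal point
print(p[:2] / p[2])                                      # shifted by f*B/d = 25 px
```

Sweeping `depth` over a set of hypotheses and stacking the warped features is what builds the source volumes.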
In light of the discussion presented in this chapter, and based on the experimental evidence that decoherence parameters vary with time, the proposed time-varying quantum channel models will be very important when studying and optimizing error correction codes capable of protecting qubit-based quantum computers from the dynamics of the decoherence phenomenon. However, it should be pointed out that the proposed time-varying quantum channel model would be relevant for protocols involving a large number of error correction cycles (e.g., a quantum memory or a long quantum algorithm such as the one in {{cite:e1a0f4e362119525df346990afaf9aa9cd5025ca}}). If a very short algorithm or a QECC is run just once or a few times, the parameters will not fluctuate as much, and the effects of the proposed channel model will not be as noticeable. Even if the current state-of-the-art experimentally implemented QECCs will not be significantly affected by the TVQC (as they are generally short codes such as the {{formula:bf002cab-f795-4d1b-9ca1-d94ee901879b}} toric code discussed above), the proposed channel model will be essential when better QECCs are implemented, such as the {{formula:4a314412-5e3a-4bac-aea3-b2913f6949b3}} and {{formula:64199054-c1e6-4e7b-a142-c5cd1b19f176}} toric codes for the near term, or the more advanced QTCs or QLDPCs for the long term.
| d | d469213224d35207cea7429242192ad7 |
where {{formula:bb1757df-160b-4606-9b02-e7029badc2d0}} are the indicator functions of the sets {{formula:c5dc89d6-7ccb-43fc-9be0-2476dd61ba9a}} and {{formula:69d0d55f-e461-40f9-9f3c-53a85645a9fe}}, respectively; hence, any limit point of the sequence {{formula:0df34de4-1379-482f-a3c0-a5245be50375}} is a global minimizer of (REF ) by {{cite:22744c2c1412d6011219a89f900d1a422b2c6bd3}}. The latter result can be slightly improved by allowing {{formula:68b45335-0df0-4904-ab4e-ea13af79b819}} to be sub-optimal, provided that a constraint qualification is assumed; see {{cite:9a9649c766a3b7a39d27a0f82bcda0b9d0f470c9}} and {{cite:22744c2c1412d6011219a89f900d1a422b2c6bd3}}. Although problem (REF ) is an unconstrained optimization problem, finding an (approximate) global optimal solution of (REF ) is impractical for non-convex problems. However, there are many fast optimization algorithms for finding an (approximate) stationary point of the non-convex problem (REF ); see {{cite:9a9649c766a3b7a39d27a0f82bcda0b9d0f470c9}}, {{cite:2cf47c1958cea3bb58ef7222d4ab9aa9021c4c1a}}, {{cite:d8e51a6861343c29be4e862c49c12f03a7ec2682}}. Motivated by the practicality of obtaining an approximate stationary point for the unconstrained problem (REF ), in this paper we aim to establish a relationship between the approximate stationary points of problems (REF ) and (REF ). More precisely, in Theorem REF , we show that if {{formula:f3bd11f8-9a00-44a4-82a8-44acf7405b7a}} is a {{formula:a29b7990-66e2-4ef5-adda-f72157b73888}} -stationary point of (REF ), i.e., {{formula:372d7c40-bad1-4c68-9bb3-e68d74320cd5}}, then for sufficiently large {{formula:e8c2a1ab-67d5-4743-97f2-b6ab833be7f8}}, {{formula:df7bc5a1-a723-4cb8-ab2a-b92694134f23}} is an approximate stationary point for (REF ), in the sense that there exist so-called approximate Lagrange multipliers {{formula:a43f096e-0866-48cd-973d-97db5c4e2cc3}} and {{formula:d7eae81f-b8ab-4bd6-a4bc-6534a354ad69}} such that
{{formula:91af4737-e016-4391-b1d1-9b8620eea62a}}
| i | b851e663ac71d6df4deb6885bf39242a |
All terms considered relevant to the queried topic are ranked according to their p-values. Key phrases are extracted by scanning abstracts with the RAKE algorithm {{cite:029befcb74dcb734bed69a73bbfd5d0e8ffd0edf}}. Finally, pathways and biological processes are derived from KEGG {{cite:f93e10371135416a95bd4dded32217bd501f36f2}} using significant gene names. Each result is directly linked to the PubMed, Google, Google Scholar and Bing websites to allow further investigation.
{{table:4efaa84d-8145-4d2b-80ac-69d4393e8f13}} | m | 844da3200fe0ae0f956d0186d7ed9457 |
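For context, RAKE splits the token stream into candidate phrases at stopwords and scores each phrase by summing its words' degree-to-frequency ratios. A minimal self-contained sketch (the tiny stopword list is illustrative, not the cited implementation):

```python
import re
from collections import defaultdict

STOPWORDS = {"the", "of", "in", "and", "a", "an", "is", "are", "with", "for", "to"}

def rake_phrases(text):
    """Return candidate phrases ranked by the RAKE degree/frequency score."""
    words = re.findall(r"[a-zA-Z]+", text.lower())
    # Candidate phrases: maximal runs of non-stopword tokens.
    phrases, current = [], []
    for w in words:
        if w in STOPWORDS:
            if current:
                phrases.append(tuple(current))
            current = []
        else:
            current.append(w)
    if current:
        phrases.append(tuple(current))
    # Word scores: degree (co-occurrence within phrases, incl. itself) / frequency.
    freq, degree = defaultdict(int), defaultdict(int)
    for ph in phrases:
        for w in ph:
            freq[w] += 1
            degree[w] += len(ph)
    score = lambda ph: sum(degree[w] / freq[w] for w in ph)
    return sorted(set(phrases), key=score, reverse=True)

print(rake_phrases("differential gene expression in the context of gene regulation")[0])
```

Longer phrases of frequently co-occurring words score highest, which is why RAKE favors multi-word technical terms over isolated common words.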
The signatures in the chromospheric spectra give an indication of what to look for in observations. Complex line profiles as the ones shown in Fig. REF using the 5 keV empty loop model have not been reported in the literature so far, but might suggest heating by non-thermal electrons if observed. The inclusion of the Caii and Hi lines to the diagnostics gives potential for observing small-scale events with ground-based telescopes, such as the Swedish 1-m Solar Telescope {{cite:4beda0020f48478959bb843e4871dda003625b0e}} and the Daniel K. Inouye Solar Telescope {{cite:8e54b298c639c32fb07ec3bdc2edb4d97314c08b}}. With the CHROMIS instrument installed at the SST, it is possible to sample the Caii H and K and H{{formula:4edfe9ed-6c6c-4a56-aeef-79ce2a89785c}} lines with high spectral resolution. Ground-based telescopes generally allow for higher spatial resolution as compared to millimetre observations and (extreme-)UV diagnostics observed from space. This could possibly provide tighter constraints on the spatial dimensions of the strand widths and temporal variability. We also note that Mgii h and Caii K appear to be more sensitive to events at the top of the chromosphere than Caii 854.2 nm, and coordinated observations with SST and IRIS, for instance, would be beneficial to provide more constraints on nanoflares.
A comparison of observations to our numerical results, as well as Bifrost electron beam simulations, will be explored in future work.
{{cite:78a55e119b0a7666c8a67c948cfc7bf17e1cf807}} found that Mgii profiles from RADYN nanoflare models showed similarities to what was observed at the footpoints of AR coronal loops with IRIS. They also found that the Mgii triplet is an important diagnostic for non-thermal electrons due to its sensitivity to heating in the lower atmosphere. We will therefore include the Mgii triplet in future spectral line analysis.
| d | e10d0049de436e80357fa7eeac5ebc72 |
We will prove Theorem REF by means of the following algorithm, generalizing Prop. 1 of {{cite:51e5647e8a757270925ca295ca87188cf7f0e2e4}}.
| r | c23e828bba1564484dc110dc01b47b67 |
Speaker identity is one of the important characteristics of human speech. In voice conversion, the speaker identity is changed from one speaker to another while the linguistic information is preserved {{cite:97b22d75c4d1f68f1c3c460d791ae422d28042ac}}. Traditional voice conversion focused on spectrum mapping using statistical and signal processing methods on parallel training data, i.e., data with the same linguistic content from different speakers; examples include the Gaussian mixture model (GMM) {{cite:70280194f03704c13667677102b717d52dc4c4a2}}, {{cite:bc68c12d65250592b2f6d3e692bb10882acdf040}}, vector quantization (VQ) {{cite:358e234a5d156ce3a25ea4b81d14a60e0a9f359f}}, fuzzy vector quantization {{cite:da59923eaeea65d52bdc392cec78a2f093fca598}}, frequency warping {{cite:6cbc40a44e7b6b53c27e48c87bdb9fcc78c0c134}}, partial least squares regression {{cite:b21feb381e6940553cead5da087e7847b91924b6}} and dynamic kernel partial least squares regression (DKPLS) {{cite:8b7de76ea34d0b7c925cd47214b05a66f8089bac}}.
| i | 97fdef61c4252b1dd2f4920f4fffdfa1 |
We evaluate our approach on the 7-Scenes {{cite:9b30c4237164ca442424f780c7ad86a74343f321}} dataset to check its generalizability. Tab. REF shows that our method also significantly outperforms PlaneRCNN {{cite:71d1f2f1168934792541547f2a612b28846995ac}}, and is better than or comparable with other MVS methods {{cite:78a8b299dee55d0d2f116898b96d09e55962f7ed}}, {{cite:a77c6f8477d4c0dfc9817c7df29f68dce623619a}}, {{cite:b98cc1b22a426ce9f057d84cb5b6f616a53bf8d8}}, {{cite:78e56eece99dfa35626ee5753e7573c9db35e2b4}}. Since PlaneRCNN {{cite:71d1f2f1168934792541547f2a612b28846995ac}} learns plane geometry from single views, its ability to generalize beyond the domain of the training scenes is limited. Our method, in contrast, benefits from multi-view geometry to learn multi-view feature correspondences and thus generalizes better to unseen data. Details of how we finetune on 7-Scenes with only ground-truth depth are left to the supplementary material.
| r | e6dbae21954ebbb03104ece93c8f8481 |
Our proposed method consistently outperforms all other methods on all four datasets at every threshold of {{formula:5dcf7719-f160-4662-ba5d-8c3fbd0b9638}} and {{formula:74abe027-ac2b-4b3a-8361-97d6cdf881b8}}. The improvement is particularly significant on the more challenging UNC+ and G-Ref datasets. It yields 4.12%, 5.64% and 2.22% absolute improvement in overall IoU on the val, testA and testB sets of UNC+, and nearly doubles the performance at {{formula:ed3ce52b-598e-4e64-aeb7-9593450409c3}}, which requires a precise segmentation mask compared with the ground truth. This demonstrates the superior performance of our model in identifying the referred objects from appearance and scene-context words only, since no location words are available in the UNC+ dataset. Furthermore, the substantial gain (6.39% in overall IoU) on the G-Ref dataset, which contains longer and richer query expressions, shows the advantage of our model in capturing long-range dependencies of cross-modal features and identifying the referred objects from such referring expressions. The consistent improvement over the preliminary work CMSA {{cite:295e0c5385456294ec9945b0e19f04fa165ac5d2}} also shows that the word attention-aware pooling in the cross-modal self-attention module summarizes more effective features from word attentions.
{{table:751e07cd-2e19-42ab-8927-0001d1ca652f}}{{table:77f0904d-382c-4fe1-80b1-0a922c4a0f89}} | r | 5fa18cf325645954cd145611397aa150 |
Related work includes a comprehensive study of the stability of graph neural networks (GNNs) in {{cite:4e814696f62e6a46fdc243271d6350c618539c5b}} and {{cite:94a3e89993c1abe55b7305051fbd45b33ba38100}}, which consider absolute and relative perturbations of the graph structure respectively, and the GNN stability analysis in {{cite:3922c0714761aea6f520232a34051976a8c14dbc}}, which focuses on perturbations of the graph spectrum.
More in line with our paper, {{cite:613ed223030af054a7eea00db46c80d584fc4a16}} studies stability of GNNs to perturbations of the underlying graph model, which is assumed to be a graphon. Unlike manifolds, however, graphons can only model dense graphs.
More flexible models such as continuous graph models with tunable sparsity and generic topological spaces have been considered in {{cite:2f636947710da4f99f950738d1f3bbce5ccd2870}} and {{cite:43cf6bf969984a3248eb22df4f72b0b8a4409ee7}} respectively, but these papers focus on the transferability and not on the stability of convolutional neural networks in these domains.
| i | be88e1835431dca56ad682c01246578a |
We also compared the performances of shallow and deep TriAN models.
On datasets such as SQuAD {{cite:8f2787d20dd501691c2e13c294a7ac190c69d6bb}},
deep models typically work better than shallow ones.
Notice that the attention layer in our proposed TriAN model
can be stacked multiple times
if we treat the output vectors of BiLSTMs as new input representations.
{{table:456683c9-6430-47df-bdeb-e57ebbf01916}} | r | 231d286fd72b6015f65ebde092f34b08 |
BERT.
Bidirectional Encoder Representations from Transformers or BERT {{cite:a5f88648489cf10588eab8c9941d9813da70345e}} is a language representation model that processes tokens in relation to all other tokens in the sentence, unlike RNN-based models that process tokens in order, one token at a time. BERT has been used to achieve state-of-the-art results in numerous NLP benchmarks and has been integrated into Google Search, leading to significant improvements in understanding and ranking search queries {{cite:25d8a1e8acdf4a79a4819865ad944b9e848c2d43}}.
| m | 2e294f119f43d18e00d7641e8f33524c |
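The bidirectional processing described above can be made concrete with a single head of scaled dot-product self-attention, in which every token's attention weights span all positions at once (a simplified sketch with Q = K = V, not BERT's actual implementation):

```python
import numpy as np

def self_attention(X):
    """Single-head scaled dot-product self-attention over token embeddings X.

    Every output row is a weighted mix over ALL tokens at once, in contrast to
    an RNN, which consumes tokens sequentially."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                      # token-to-token similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over positions
    return weights @ X, weights

rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 8))                       # 5 tokens, 8-dim embeddings
out, attn = self_attention(tokens)
print(attn.shape, attn.sum(axis=-1))                   # each row spans all 5 tokens
```

In BERT the weights additionally come from learned query/key/value projections and multiple heads, but the all-positions-at-once structure is the same.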
We approximate quantum annealing with the classical path-integral Monte Carlo method described in Section REF .
The SQA procedure assumes a Chimera graph where the total number of qubits is {{formula:4a3fd96d-e204-406e-8200-303ff0ddb674}} .
For the quantum annealing Hamiltonian {{formula:a09d8df6-6e12-4bae-9a23-9359593c25ad}} in (REF ), to find the Boltzmann state of the quantum system we are required to evaluate the canonical partition function {{formula:1ff53601-1167-4102-ae45-7ffd38c63dbe}} for the transverse-field quantum Ising model.
To approximate the partition function, we use the Trotter formula {{cite:ca4e705c52a9aa049acaf4ae8119f16233d8ed4a}}, {{cite:2230dd6570a36fd8717067f32c3cdc9d9ad9d630}}, {{cite:3b3d8c21f4ffb43edc932976f62ae246b1a08762}}.
Specifically, given the annealing schedules {{formula:ccadaae1-980c-40f7-9726-183676c879e3}} and {{formula:33cee843-d052-4b79-a473-8559203eb932}} from quantum annealing, to approximate the partition function of the quantum annealing Hamiltonian {{formula:fdb114d0-a8cc-4925-bac7-33b917810ae4}} at temperature {{formula:d2537730-f1d3-4cd2-afc8-fca460e895aa}}, we replace the temperature parameter by {{formula:5260e004-6194-40f8-83ae-0429a2aaa291}} and the transverse-field parameter by {{formula:39a1844c-3298-45cb-8290-18ea57def302}}.
Such approximation maps the transverse quantum Ising model to a classical {{formula:f9b77f51-a8a5-4dfb-8006-e4f24ef988e8}} -dimensional anisotropic Ising model with temperature {{formula:72f2813c-5649-4733-9b26-0330f08a4295}} and Hamiltonian
{{formula:7e73b01c-16fc-4c6b-89ad-123b19cccaa0}}
| m | 82e536ac2f60ba32aff3ddcdcdba0508 |
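Under the Suzuki–Trotter mapping sketched above, the replicated slices are coupled ferromagnetically along the imaginary-time direction. In one common convention the slice coupling is J⊥ = −(MT/2)·ln tanh(Γ/(MT)), where M is the number of Trotter slices, T the temperature, and Γ the transverse field (prefactors vary across the literature; the parameter values below are illustrative):

```python
import math

def slice_coupling(gamma: float, temperature: float, num_slices: int) -> float:
    """Ferromagnetic coupling between adjacent Trotter slices for the
    transverse-field Ising model: J_perp = -(M*T/2) * ln tanh(Gamma / (M*T)).
    One common convention; prefactors differ between papers."""
    mt = num_slices * temperature
    return -0.5 * mt * math.log(math.tanh(gamma / mt))

# A strong transverse field couples the slices only weakly, and vice versa:
for gamma in (2.0, 0.5, 0.1):
    print(gamma, slice_coupling(gamma, temperature=0.1, num_slices=20))
```

Since tanh(x) < 1, the coupling is always positive, and it shrinks toward zero as Γ grows, which is how the transverse field lets the replicated classical system tunnel between configurations.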
Quantitative Comparisons. In this section, we evaluate
results with PSNR, SSIM {{cite:f43f3067b75b55a99256209c5e71c4c424f0bc7d}}, and FID {{cite:d9222d74bca3f99250e909bd28e0adc8f5b1ea15}}. As discussed in {{cite:d85e0edbdd19df7258dd259da77e636528e1bbf2}}, we find that {{formula:12163b80-72b0-4c9b-b7fd-51a027c0922f}} based metrics such as PSNR and SSIM often contradict human judgment. For example, some meaningless blurring areas will
cause large perceptual deviations but small {{formula:08c1c399-3946-43d9-a420-eefdb2cd4aee}} loss {{cite:d85e0edbdd19df7258dd259da77e636528e1bbf2}}.
Therefore, we pay more attention to the perceptual metric FID in quantitative
comparisons. The results on ShanghaiTech, P2M, and York Urban are
shown in Tab. REF . From the quantitative results, our method outperforms other approaches in PSNR and FID. Especially for the FID, which accords with the human perception, our method
achieves considerable advantages. Besides, our method enjoys good generalization, as it also works properly on P2M, York Urban, and even P2C.
{{figure:46a610e1-e995-41ac-b537-57b33f17972f}} | r | e7681b5731a4ff6fc1040928f4ed867c |
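For reference, PSNR is the pixel-wise fidelity score 10·log10(MAX²/MSE), which is why uniform blurring can score well despite large perceptual deviations; a minimal sketch:

```python
import numpy as np

def psnr(reference: np.ndarray, estimate: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))
noisy = np.clip(img + rng.normal(0, 5, size=img.shape), 0, 255)
print(round(psnr(img, noisy), 1))   # roughly 10*log10(255**2 / 25) ~ 34 dB
```

Because the score depends only on the mean squared error, two images with identical MSE but very different visual quality get the same PSNR, hence the preference for FID here.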
Estimated prevalence {{formula:ab42e6b5-a78b-4ff8-b451-f09ded6aa083}} for each {{formula:6255d181-a0a1-4800-b32c-9cb3219d57a9}}
Calculate discretized predictions on training data: {{formula:fe158e4c-860e-466e-bb36-92f30a84692e}}
Calculate discretized predictions on test data: {{formula:6efd3564-d0cb-4d15-b113-d561d58275a6}}
{{formula:e330e2e8-fac7-476d-9976-660978942439}}
{{formula:c176a716-cbeb-46e7-94b8-2250af9b677c}}
{{formula:9906dfaf-3ba0-4789-a4bd-848d8ba75bbb}} , with {{formula:6e828221-0b7d-4d67-bddd-f89c9b6837f8}} , for {{formula:c13f354a-0990-4291-aa1b-456d7e6ebec3}}
{{formula:7b4484a5-7bca-4c2d-9187-8707369a0f6a}}
{{formula:0e9d81f0-9b45-4520-b8b1-311ea93a15bd}} , where {{formula:8865c8ac-0954-4236-9a65-778e90273424}} denotes the second element of vector {{formula:6334c6f6-5e93-4d21-919a-f45554c33767}}
{{formula:952a87a4-26b8-41a8-a07e-195844bb153f}} for each {{formula:8a8a1203-3765-43e4-9fb4-16cefa6d6623}}
Discretization method for label shift prevalence estimation {{cite:d54813f5235a98d822c8981ec3dcde2d2ef613be}}
| m | ed7250e9d06db4b0a74047cf16c73c78 |
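The key identity behind such discretization-based prevalence estimators is μ = Cπ: the test-set distribution of discretized predictions equals the training-set confusion matrix applied to the unknown test prevalences, so the prevalences follow from a linear solve. A minimal sketch of that step with synthetic numbers (illustrative, not the full procedure listed above):

```python
import numpy as np

def estimate_prevalence(confusion: np.ndarray, test_pred_dist: np.ndarray) -> np.ndarray:
    """Solve C @ pi = mu for the test-class prevalences pi, where
    confusion[i, j] = P(pred = i | true = j) estimated on training data and
    test_pred_dist[i] = empirical P(pred = i) on the test data."""
    pi = np.linalg.solve(confusion, test_pred_dist)
    pi = np.clip(pi, 0, None)           # crude projection back onto the simplex
    return pi / pi.sum()

# Synthetic check: a classifier that is 90% / 80% accurate on two classes,
# with true test prevalences (0.3, 0.7).
C = np.array([[0.9, 0.2],
              [0.1, 0.8]])
true_pi = np.array([0.3, 0.7])
mu = C @ true_pi                        # what we would observe on the test set
print(estimate_prevalence(C, mu))       # recovers ~ [0.3, 0.7]
```

With finite samples both C and μ carry estimation noise, which is why the full method wraps this solve in the additional steps above.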
Using the generalized Burchnall identity and the raising and lowering operators being each other's adjoint on suitable weighted {{formula:683f9614-d294-42a2-aeb1-0386af511655}} -spaces, we obtain an integrated version in Corollary REF . In various cases we can use the integrated version to find orthogonal polynomials with respect to a slightly modified weight, where the modification is governed by an eigenfunction of certain operators occurring in the generalized Leibniz rule. We use this to find the orthogonal polynomials with respect to the orthogonality measure modified by an exponential or a {{formula:037d519a-291d-4bfd-b6cf-7de782d0eab9}} -exponential function. Often it is possible to recognize the modified measure explicitly, typically within the same class. In this way we obtain expressions for orthogonal polynomials in terms of orthogonal polynomials with different parameters. In particular, for the modification by {{formula:ae54455b-af55-4bf7-af29-3e8df2e112d0}} we can understand the orthogonal polynomials for the measure {{formula:0c7ba53d-0fdc-42a8-ae6f-190ee830364a}} , where the measure {{formula:eaaeb4bd-3d42-4ef1-9c83-ca9485c7dcd3}} corresponds to the orthogonality measure for a family of orthogonal polynomials in the Askey scheme. By the work of Flaschka, Moser and others this is related to the Lax pair for the Toda lattice, see, e.g., {{cite:93e00bb4f55fd1309bb291fb61ea824e19e91e50}}, {{cite:8043a2e2c8a7cccc783084e2e39e0ac303c094f3}}, {{cite:13ed5f47d3673ecbcb1dd28620844e9b9e488acc}}, {{cite:4f2c92323e4ad75e1115ded6681b06413b5986eb}}. The recurrence coefficients for the monic orthogonal polynomials {{formula:f6f69f56-8735-4277-9fd2-38a4c6a3f87e}} for the modified weight satisfy the Toda lattice equations
{{formula:02fb19d4-d176-440c-9174-97d0e0f197e3}}
| i | 85378a597d7fdbce9a99a617986dd081 |
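As a hedged reminder, in Flaschka's variables (with a_n, b_n the coefficients of the monic three-term recurrence) one standard normalization of the Toda flow reads as follows; the cited works use varying conventions for signs and scalings:

```latex
\dot{a}_n = a_n\,\bigl(b_{n+1} - b_n\bigr), \qquad
\dot{b}_n = 2\,\bigl(a_n^2 - a_{n-1}^2\bigr).
```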
The different methods were applied to the dataset described in Section REF . For training and testing, we mostly used 5-fold cross-validation across the different classifiers. Each split was trained for 10 or 20 epochs. For transfer learning, we used models pre-trained on the ImageNet dataset. We used a batch size of 16 for training and testing. Most of our DL experiments were run on a Tesla P100-PCIE GPU on Google Colab. The DL models were built using TensorFlow {{cite:48a25a4274b838fca7aa8509128a6183ebe9f408}}.
| r | 0c2aee4775b3bd46deb14baf195fb529 |
To fully reap the passive reflection gain of the IRS, it is crucial to acquire the channel state information (CSI) accurately, which, however, is practically challenging due to the lack of active RF chains at the IRS reflecting elements as well as their large number in practice. In the literature, there are two main approaches to IRS channel estimation based on different IRS configurations, namely the semi-passive IRS and the fully passive IRS {{cite:e7e29139b28a4a63a01b93a547a406ee6149a53c}}, {{cite:36e889c6ab02a634c58ef613f4f59c36081d4f9d}}. In the first case, additional sensing devices (e.g., low-power sensors) equipped with low-cost receive RF chains are integrated into the IRS. By this means, the user-IRS and base station (BS)-IRS channels can be separately estimated for the uplink and downlink communications based on the pilot signals sent by the users and the BS, respectively {{cite:ce9f197b47489154c519988b50b5e14ef8333f8d}}. Although this approach works well in the time-division duplexing (TDD) mode by exploiting channel reciprocity to estimate the CSI from the IRS to the users/BS, it becomes ineffective in the frequency-division duplexing (FDD) mode. (It may be practically possible to estimate the channel from the IRS to the users/BS if active sensors that can both transmit and receive signals are integrated into the IRS, which, however, results in considerably higher hardware cost and power consumption.) In contrast, in the second case with the fully passive IRS, although the BS-IRS and user-IRS channels cannot be estimated individually as with the semi-passive IRS, the cascaded user-IRS-BS or BS-IRS-user channel can be estimated at the BS or each user based on the pilot signals sent by the other, which is applicable to both TDD and FDD modes {{cite:ea2b939a7bd6291cd3c9aa1190b9733d6d74c629}}, {{cite:b5eddedb497c9119b748646c8a703356dbe53d4e}}, {{cite:76121a7970de52c17fcc18a208c3776a27f46e65}}.
However, as compared to the semi-passive IRS, this approach entails substantially more channel parameters to be estimated due to the redundancy in cascaded channels, especially when the number of IRSs, IRS reflecting elements, and/or users becomes large. To reduce the channel training time for the fully passive IRS, assorted methods have been proposed in the literature, including IRS elements grouping {{cite:af5b60831f5d239e1aa95ec76b089b7f6e15c3e3}}, {{cite:40c090d91236d56c4e838e343266ef23d8d567eb}}, {{cite:904a8dbe4eaf58f5b47c74d721f006124cb62eec}}, reference user based channel estimation {{cite:32dcfe6b1236fea908441e1f3c5a277ba4e9a06e}}, {{cite:904a8dbe4eaf58f5b47c74d721f006124cb62eec}}, anchor-aided channel estimation {{cite:75a6683fba28e58b7de10a662a09348904e909f1}}, channel estimation based on channel sparsity {{cite:e7ad31ff9579901f1b58d04a31961d7085440daf}}, {{cite:91d13db0903fc1e7838d9fc47d0687fd1bc1a291}}, and so on (see {{cite:e7e29139b28a4a63a01b93a547a406ee6149a53c}}, {{cite:4c21932a0e91af7cfef49a670fe11ce7aaaa0f90}}, {{cite:dd9ea6634b12bf795dd198da2d03bf7e571d1e80}} and the references therein).
{{figure:83894022-c834-4323-8040-72f930107cc9}} | i | 17823a4067c3699aab3e3d16f3c8a161 |
Due to the capability limitation of a single unmanned aerial vehicle (UAV),
multi-UAV collaboration has attracted increasing attention {{cite:21479817084dce8de36ae2b5d078baed2cb22d5a}}, {{cite:9f2e1564e9f9d5bc03707d71f4afb89b1a01bad8}}, {{cite:37fe6f9c2a265f0d79d7d403dae8063dd2c84e1a}}.
One of the fundamental and challenging problems is
the flocking control of UAVs without collision {{cite:57722c1d6368bb97a3027dd944d8e16f689b64fc}}. Traditional
methods such as model predictive control {{cite:d464356d6f199d55a65e487610f11dcdca64f38d}} and consensus theory {{cite:f27a376c03ee5a82e7fc717e14ddd8a57c194daa}}
usually depend on precise physical models, which
are complex and difficult to obtain in practice.
| i | 3d4a94523c49c6a6bf06795bc7ccd302 |
We will see the existence of a homotopy section of the inclusion {{formula:e7bd2e33-9ff2-4fda-9cb7-cd738c556ca2}} in Section , where {{formula:6f6ffc8a-f5c9-4ffe-b701-942105543c60}} is the Segal–Wilson restricted unitary group {{cite:fde5d14d556f39261d7afcc8fdf5e678effaa84c}} having the homotopy type of {{formula:878a2716-2511-4b0f-ac97-1b3a926eaa58}} .
This implies Theorem REF .
We also show in Section that, for any integer {{formula:a5da55dc-eabc-45d9-a5b7-3e341d06c1b2}} , the inclusion {{formula:f8adf2c2-0464-4643-a275-4ebb6fd007c7}} admits a homotopy section if and only if the homomorphism {{formula:64ca6624-581d-437c-9290-1571cac54fd1}} is surjective.
Since we can see that it is surjective when {{formula:efa8d4d0-cf3e-4b99-8330-99e25594edef}} , Theorems REF (again) and REF follow.
If one could show the surjectivity for {{formula:73108da3-4b96-4e66-8b96-5e8d00c55806}} , then a similar homotopy decomposition will immediately follow.
| i | 88c87a40cf8e7fe302090f4fa7b3c349 |
Another example of how this could be applicable pertains to chondrule formation. Chondrules are molten grains found in meteorites, whose formation requires rapid heating to {{formula:fb4a7505-0063-4ef7-b784-880a4c57de09}} 2000 K {{cite:2ddcb3195aba2fa0d45b8519050a11571bc132aa}}. While there is no consensus on this topic, it has been proposed that their creation takes place in magnetic current sheets in protoplanetary disks, which may reach temperatures {{formula:bf176ad7-88c1-4b8a-980e-781ea705dd97}} K {{cite:fc58d57ce6e9981647a2d65963daa16006b2fb23}}, {{cite:3f36df6f283eae822fde5c0b0dc09febf9b76660}}. Those high temperatures would trigger the thermal ionization of K and Na, leading to a sharp decrease in resistivities. Subsequently, the downward gradient in resistivities may create an instability that would allow the magnetic field to pile up in the current sheet {{cite:949791ead16ff5d9fe79c9af581b96d7e3fe1eb9}}. That phenomenon may be the origin of thunderclaps and extremely localized heating of the grains and gas {{cite:4ff28abb45fb82b76046b4af9de0c155d5e581b7}}, which are necessary conditions for chondrule formation. However, readers should refer to {{cite:95c3a552c3b46b445e1f119db64b78a8228f272a}}, who argue that insufficient alkali metals would evaporate from grains to allow for this instability to act.
| i | 1b50c31134cb38c5452d2a865f993cd6 |
To reduce the variance of the estimate in Eq. (REF ) while keeping it unbiased, the {{formula:54465dc6-0574-46fd-a965-d40d5671b1f8}} function can be replaced with an advantage function, {{formula:df7dfc90-efd2-4126-80fe-6662ad08eeac}} {{cite:7f33d5eaea95131eca4ff8dbca8280fa704533d5}} where {{formula:b7f0ddd7-fd2e-4b5d-93ff-c6fafd15454e}} is the state value function. This approach can be viewed as an actor-critic architecture where the policy is the actor and the advantage function is the critic {{cite:1444092c983df6af43dc1888192a889db8efc8b0}}.
| m | 84698458532d6cb5caed25fdb29b0c75 |
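The variance reduction from replacing the return with an advantage can be seen in a toy REINFORCE estimator; a sketch with a one-parameter Bernoulli policy (illustrative, not the cited formulation):

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 0.3                        # policy logit; pi(a=1) = sigmoid(theta)
p1 = 1.0 / (1.0 + np.exp(-theta))
rewards = np.array([0.0, 1.0])     # deterministic reward per action

actions = rng.random(100_000) < p1                 # sample actions from the policy
score = actions - p1                               # d/dtheta log pi(a)
value = p1 * rewards[1]                            # state value V = E[reward]

r = rewards[actions.astype(int)]
grad_q = score * r                                 # gradient estimate from raw return
grad_adv = score * (r - value)                     # gradient estimate from advantage

# Both are unbiased estimates of the same policy gradient ...
print(grad_q.mean(), grad_adv.mean())
# ... but the advantage (baseline-subtracted) version has much lower variance:
print(grad_q.var(), grad_adv.var())
```

Subtracting the state value shifts the weighting of the score function without changing its expectation, which is exactly the actor-critic variance-reduction argument in the text.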
Other alternatives to inducing points in a VI context have been suggested,
including random feature expansion {{cite:69647763af5056a2674a0773636fd0acd1a1986b}}.
Extending RFE from ordinary GPs to DGPs has been the subject of
several recent papers
{{cite:865a5acf44d87cc5e7bedba1eeafa8437aafd72a}}, {{cite:ab14af92d4bd860c8ba61738cace036fedc35154}}, {{cite:4606439035d4b6ce0bdd42fce52be23c89085abb}}, with some
success. Others have taken the opposite route, keeping inducing points but
swapping out VI for Hamiltonian Monte Carlo
{{cite:9d1105813313347d63c5d5974c521a1536538899}} for DGPs {{cite:35fc52ae63b7eb34f627b181fc205f53b7fb55b1}}.
HMC has an advantage over VI in that hyperparameters can easily be subsumed
into the inferential apparatus without external validation. We show [Section
] that this leads to performance gains on predictive accuracy in
surrogate modeling settings. Nevertheless, DSVI is generally considered the
state-of-the-art inferential method for DGPs in machine learning. We think
this is largely due to computational considerations and prowess in classification tasks.
DSVI's inducing point approximation enables mini-batching to massively
distribute a stochastic gradient descent. This seems to work well
in classification settings where resolution drawbacks are less acute, but
our experience [Section ] suggests this may not extend
to the low-noise regression settings encountered in surrogate
modeling of computer simulations.
| m | c955a66b03525544a2ab45033415a1b8 |
We further verify our approach on general segmentation benchmarks, including Cityscapes {{cite:baa1bbd672418cfb8ad0ab9751552b63a9b3b57f}}, ADE-20k {{cite:1ae01388b462a5936646aa0608d547c899270eee}}, and BDD {{cite:8cf79010bc8b38c008ec2b3fd244199f98afbc7d}}, for verification purposes only. Owing to limited space, we only report the results here. More implementation details and visual results can be found in the supplementary file. We train both our baseline model and the PFnet model on the train datasets and report results on the validation datasets under the same setting.
{{table:319f4113-9c23-475d-9c0e-a452f3954440}} | r | 78287779863c200a4d10d1360ee66651 |
One considers therefore an electron moving in the presence of a twisting electric field
that couples to the spin and leads to a non-homogeneous spin-orbit like effect.
Typically, the models considered have a gapless structure that may be changed by adding
gaps to the system, either by adding a magnetic field (which naturally breaks time
reversal invariance and may lead to spin transport) or by introducing some form of distortion
of the lattice (as in the SSH model {{cite:e5b50579c265ca199b8487398e316fc1d816518a}}), i.e., by adding local potential energy terms that separate the energies of the
various lattice sites inside the unit cell of the helix-like structure.
| i | 7af90b697e2565281c75a3e37eb823d5 |
The neural network approximation algorithm for smooth functions presented in {{cite:5cc74a9690f0c4effbf1acf4b6a70b687724d387}} is based on the approximation of a function {{formula:254548fa-05da-4109-a4d7-0afc041edea2}} , which then leads to an approximation of the product {{formula:81705716-368a-4827-b740-96f1964f4deb}} . The latter then allows one to construct local approximations of Taylor polynomials of a given function. In order to approximate the function {{formula:eb484bd5-609a-4c19-87b0-d93fe60de29c}} , it is noted that for the triangle wave
{{formula:d68d0956-96db-408e-a919-a723a9cd8f03}}
| r | 8defc435f9ecea8441ab0dfdce6bfc5e |
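The triangle-wave construction sketched in this row (in the spirit of Yarotsky's approximation of the squaring function) can be checked numerically. The sketch below assumes the standard form of that construction — a hat function composed with itself, with the m-term truncation x − Σ g_s(x)/4^s approximating x² on [0, 1] with max error 4^−(m+1); the function names and depth are illustrative.

```python
import numpy as np

def hat(x):
    """One period of the triangle ('hat') wave on [0, 1]."""
    return np.where(x < 0.5, 2 * x, 2 * (1 - x))

def approx_square(x, m):
    """Truncated Yarotsky-style series x - sum_{s=1}^m g_s(x)/4**s,
    where g_s is the s-fold composition of the hat function. This
    equals the piecewise-linear interpolant of x**2 on 2**m pieces."""
    out = np.asarray(x, dtype=float).copy()
    g = np.asarray(x, dtype=float)
    for s in range(1, m + 1):
        g = hat(g)
        out = out - g / 4**s
    return out

xs = np.linspace(0.0, 1.0, 1001)
err = np.max(np.abs(approx_square(xs, 6) - xs**2))  # bounded by 4**-(6+1)
```

Each hat composition is realizable by a small ReLU subnetwork, which is what makes this a neural-network approximation scheme.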
One possible route is to place a prior over {{formula:ddbb945c-9cf4-4e70-a79d-f4075f681d74}} (and possibly personalized weights {{formula:aec0e241-2c71-4232-a07a-5c783b6edabd}} ) {{formula:f48beab2-ea56-4bfb-aab6-2300331fca71}} and try to estimate the posterior {{formula:71271b65-1c4c-417d-b82f-5e3d3ea16585}} . Clearly, if {{formula:a2f3a48a-fea7-4e2c-b1c2-6151a1410af0}} is a complex function such as a neural network, {{formula:47414e3a-5dbc-4702-b172-28f98bfe3d3f}} is usually intractable, yet one may hope to extract some samples from it. Here recent advances in approximate posterior sampling such as Stochastic Gradient Langevin Dynamics (SGLD) {{cite:295b08675aff1c8160b05f8927eed1562091d5ea}} or Stein Variational Gradient Descent (SVGD) {{cite:ebaab16d16145520fc991f18ec274b01ae74e6b6}} may be possible solution techniques. Besides the approximate sampling schemes above, hierarchical Bayes may be worth investigating, as IoFT has an inherent hierarchy between the orchestrator and clients. The challenge to be addressed here is how to estimate a hierarchical Bayes model in a decentralized fashion that preserves edge privacy and minimizes communication.
| m | 5e8a8f7ef6dd1b4c2e819a17b2c116a4 |
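As a concrete illustration of the SGLD scheme mentioned above, here is a minimal sketch on a toy one-dimensional target. The step size, chain length, and standard-normal target are illustrative assumptions, not choices from the cited works, and no minibatching of the gradient is shown.

```python
import numpy as np

def sgld_sample(grad_log_post, theta0, step=1e-2, n_steps=20000, seed=0):
    """Langevin update: theta <- theta + (step/2) * grad log p(theta) + N(0, step).
    Returns the full chain; the caller discards burn-in."""
    rng = np.random.default_rng(seed)
    theta = float(theta0)
    chain = np.empty(n_steps)
    for t in range(n_steps):
        noise = rng.normal(0.0, np.sqrt(step))
        theta = theta + 0.5 * step * grad_log_post(theta) + noise
        chain[t] = theta
    return chain

# Toy target: standard normal posterior, so grad log p(theta) = -theta.
chain = sgld_sample(lambda th: -th, theta0=3.0)
burned = chain[5000:]  # drop burn-in; remaining samples approximate N(0, 1)
```

In the federated setting described in the row, `grad_log_post` would be replaced by a stochastic gradient assembled from client updates, which is where the privacy and communication questions arise.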
Even though the Basel Committee on Banking Supervision recommends the use of VaR at high levels (see for example {{cite:d5c4226ec7504012d183511de7ee6eb004f74b74}}), the VaR itself has been criticized several times in the financial literature for two main reasons. First, the VaR only measures the frequency of observations below or above the predictor and not their magnitude: this means that, while it is known that {{formula:14a220ad-6b16-42d7-8d42-5272cce2955d}} of losses will be higher than the VaR {{formula:9330ae49-2a55-40c4-afd1-bf72c6fa6d05}} at level {{formula:9953961b-3dfa-41f4-8d52-7b4372056858}} , the VaR alone cannot give any further information about the size of these large losses. Second, the VaR is not a coherent risk measure in the sense of {{cite:e28ed1b6218d5ecc8727dfd02417659ad480d8e9}}, because it is not sub-additive in general, meaning that it does not abide by the intuitive diversification principle stating that a portfolio built on several financial assets carries less risk than a portfolio solely consisting of one of these assets. These two weaknesses pushed the Basel Committee to also recommend calculating the Expected Shortfall (or Conditional Value-at-Risk) as a complement or alternative to the VaR. In practice, this is hampered by the fact that the Expected Shortfall is not elicitable, which makes it unclear how to develop a simple backtesting methodology for it. This is why we believe that the accurate estimation of extreme VaR is still worth pursuing.
| d | ef13a5d71498919da522eb1f1f9b0f24 |
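The row's first criticism — that VaR measures only the frequency of exceedances while Expected Shortfall also captures their magnitude — is easy to see in a small empirical example. The sample losses and quantile convention below (losses as positive numbers, NumPy's default interpolated quantile) are assumptions of this sketch, not the paper's estimators.

```python
import numpy as np

def var_es(losses, alpha):
    """Empirical Value-at-Risk and Expected Shortfall at level alpha:
    VaR is the (1 - alpha) quantile of the losses, and ES is the
    average loss at or beyond the VaR."""
    losses = np.sort(np.asarray(losses, dtype=float))
    var = np.quantile(losses, 1 - alpha)
    tail = losses[losses >= var]
    return var, tail.mean()

# One extreme loss: VaR at alpha = 0.25 is 4.0 regardless of how large
# the tail loss is, while ES = mean(4, 100) = 52.0 reflects its size.
losses = np.array([1.0, 2.0, 3.0, 4.0, 100.0])
v, es = var_es(losses, alpha=0.25)
```

Replacing 100.0 with any larger value leaves the VaR unchanged but moves the ES, which is exactly the distinction the row draws.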
Remark 6 Lemma 4.3 in {{cite:7eeddb85ad2e658ba0618d2e6a78e8a86925ef7c}} shows that there exists {{formula:3e759fe6-e63b-428c-9e88-d73a09c2ab42}} such that () holds.
In particular, letting the model gradient {{formula:f46dfc8a-0f37-4316-a0c5-9bdd93dc586d}} ,
if {{formula:de9afb6b-b2ec-42f5-b2f7-745815c63601}} , we set {{formula:73c9c74a-956e-470b-8c30-8fb694605ce3}} ; otherwise we may let {{formula:e0430804-0892-4ff9-a255-14ac101ef31b}} be the Cauchy point (that is, the point where the model {{formula:9aca495c-32e2-44e1-8c19-6a217e9a792b}} is minimised in the negative model gradient direction within the trust region), which can be easily computed.
| m | dfaca06c4fe73188840222eaee9309ec |
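The Cauchy point mentioned in the remark can indeed be computed in closed form. The sketch below follows the standard trust-region formula (as in Nocedal & Wright's textbook treatment), with an illustrative gradient, Hessian, and radius; it is not the cited paper's code.

```python
import numpy as np

def cauchy_point(g, B, delta):
    """Minimizer of the quadratic model m(p) = g.p + 0.5 p.B.p along the
    steepest-descent direction -g, restricted to the ball ||p|| <= delta."""
    gnorm = np.linalg.norm(g)
    if gnorm == 0.0:
        return np.zeros_like(g)
    gBg = g @ B @ g
    if gBg <= 0.0:
        tau = 1.0  # model decreases all the way: step to the boundary
    else:
        tau = min(gnorm**3 / (delta * gBg), 1.0)
    return -tau * (delta / gnorm) * g

# With g = (1, 0), B = I, delta = 2 the unconstrained minimizer along -g
# lies inside the region, so tau = 1/2 and the Cauchy point is (-1, 0).
g = np.array([1.0, 0.0])
B = np.eye(2)
p = cauchy_point(g, B, delta=2.0)
```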
Recently, numerous graph-convolution-based models {{cite:62d00c9b7511f12673896388073265f90940b2fc}}, {{cite:202b6607bbc025df294a6eaf3d0914521b3572c9}}, {{cite:5844ef36f3aa6f566e049041e7c60a42dfa74a44}}, {{cite:d8649b8a2cfef58e87b054c156d6f89a41e50f22}}, {{cite:571e5d17ffa84d2a4ffdaab6ea8701c7f3ab8394}}, {{cite:4bb0285c926d19f228e99a16dfd26e4806b33074}}, {{cite:11e6b427ebf43c62957d1edbf8dadbab7dc33413}}, {{cite:9b1de6674a6f5799bd50b007ae4706ccfcaacba1}} have achieved remarkable success in motion prediction by explicitly modeling the inherent body relations and extracting spatio-temporal features {{cite:c1451ebcbfd8a747e621f0ab4e40a3492d51b24c}}, {{cite:c0e700cd2236bcc60b3c8c155cca3a0e6b6d3810}}, {{cite:6d6ab6900e0856fa0a1e40126499cbea75f7a0a8}}. However, further development of graph-based methods encounters two critical issues. First, once the graph structure is given, standard graph convolution only filters features within a limited portion of the graph spectrum and cannot simultaneously preserve much richer bands (e.g., smoothness and difference on the graphs). Yet learning the patterns of human motion requires not only capturing the similarity or consistency of body-joints under spatial constraints (low frequency), but also enhancing their differences for diverse representation learning (high frequency). For example, when the graph edge weights are purely positive or negative, deep graph convolution tends to average distinct body-joints until they become similar, ignoring their specific characteristics.
Second, existing methods usually use a single graph to model the whole body {{cite:62d00c9b7511f12673896388073265f90940b2fc}}, {{cite:571e5d17ffa84d2a4ffdaab6ea8701c7f3ab8394}}, which underestimates diverse movement patterns in different body-parts {{cite:557a8a35a9a4a8818785cc65c350c29d0839611d}}. For example, the upper and lower bodies have
distinct motions, calling for using different graphs to represent them separately.
| i | 405f96558d84dfeb37a6b97a0136fb91 |
Image retrieval has been an important and active research area in computer vision with broad applications, such as person re-identification {{cite:dff74523adc1be9257420e5ce1d58b3045ebec1c}}, remote sensing {{cite:6b5f9846b1ee07bf34c1f52c1f6e4b5c10265cf2}}, medical image search {{cite:c68bda6f3cbfed947c0fb8716c6c720b1da435a9}}, and shopping recommendation {{cite:55e009dd6f8a2e23b89a3c404f58db8d889802c7}}.
In a typical image retrieval task, given a query image, the image retrieval algorithm selects semantically similar images from a large gallery. To conduct efficient retrieval, the high-dimensional images are often encoded into an embedding space by deep neural networks (DNNs). The encoder is expected to cluster semantically similar images while separating dissimilar images.
| i | 4a6f8c07d12baaa09453be101cc2c922 |
The study of mean-field spin glasses has been one of the central objectives in Statistical Physics over the past decades. Based on the replica method, this approach has attained great achievements following Parisi's celebrated ansatz {{cite:8eb613922a3b36fc67cb22dc25f1c9b5b33853c1}}, {{cite:93ae792db1d87bdaff29b46a3d8def193bf75a07}}, {{cite:075e84f2cf64616c731a5b902945315e2cafbfe7}} for the famous Sherrington-Kirkpatrick (SK) model {{cite:21f5f5269102efb1db586c68b7a2bbc3953d3573}} as well as its variants; see the physics literature in {{cite:d129c28639f44ea770d3fc048a54273d4c7eccf2}} and recent mathematical developments in {{cite:846a5135df2db3744283f2426ffbec660422623d}}, {{cite:f2200f5dfea7a3b74c428920596b8eedc9540d87}}, {{cite:f3d9b22c18346fc738abdb03610fd25be18ef7cd}}. In a different direction, Thouless-Anderson-Palmer {{cite:880a0a396d8e2832cda3cc087f2850a0f3d342be}} proposed an approach to investigating the free energy in the SK model by diagrammatically expanding the free energy with respect to an effective magnetization and arrived at a new variational expression in terms of the TAP free energy, which involves a novel correlation energy of the spin fluctuations in addition to maintaining the spirit of the Gibbs variational principle and the mean-field approximation. From the first order optimality of the TAP free energy, they deduced a system of self-consistent equations, known as the TAP equations, whose solutions were predicted to contain crucial information about the landscapes of the TAP free energy as well as the SK Hamiltonian and to have strong connections with Parisi's replica ansatz, see {{cite:d129c28639f44ea770d3fc048a54273d4c7eccf2}}.
| i | 10bbfd66dc1d7842872f920b39051f00 |
First and foremost, there is a large Sim-to-Real gap in RGB images between sim and reality. Because of this, design choices easily overfit to sim. Two representative examples are that (A) policy architectures that directly operate on RGB images don't transfer because they overfit to sim images, and (B) the common practice of training segmentation models on sim data improves performance in sim but hurts in the real world. Improving sim to close this gap would involve increasing RGB photo-realism or providing extensive plug-and-play RGB randomization, both hard open problems {{cite:bfecc26f6f7d42228071fadd53638b842ca36376}}. This leaves us with no choice but to improve practices to work with today's sim. We should prioritize real-world transfer when designing policies: (A) replace policy architectures that directly operate on RGB images with ones leveraging abstractions as is common practice in other domains {{cite:21a2a950179a1b502f0e024446c3624055749d4c}}, {{cite:e679517736ee7d3e0600095712be6140d09baf9e}}, {{cite:9a48963551b9a80ff249023ed09e06f5ca79044b}}, and (B) avoid training a segmentation model on sim data if the policy architecture does not allow easily swapping it for one trained on real-world data at inference time.
| i | 968675250754b5ae942cf7173eac2323 |
which is essentially the {{formula:0a952204-ab00-4994-a70c-98fdd1c5b626}} th
power of the reciprocal of the Dedekind eta function {{cite:f1b68133f80ed32815a0d1381d6d9892a01331fd}}.
Then (REF ) was extended by Chern–Fu–Tang {{cite:f5e9d686bfceef913bbea4a57f982e5aa4e217f2}} to an interesting conjecture:
Suppose {{formula:8e7b9b27-f555-4a7c-8a37-d65dcbb6c3f8}} with {{formula:b5817881-cdd7-420c-85fa-3bca3b8aff72}} and
{{formula:f285c959-1522-4d0c-a40e-d6faeb3e97c0}} . Let {{formula:dab05f00-9ece-4f0b-8378-8b2a63ab47e5}} , then
{{formula:ca55aab9-1f79-4429-94bb-0e27cf47fcc0}}
| r | db803d0be6cc6466e71272be41733e75 |
where {{formula:9b0980da-a0ee-4d62-a441-72547692ae09}} is the exchange energy and {{formula:596ca460-790b-43a6-a2f7-5d49f6754e02}} can be found
in Ref. {{cite:191734f455aae99e0ed4145aee33219844d86533}}. Using linear response theory,
{{cite:191734f455aae99e0ed4145aee33219844d86533}} discussed the exchange energy and electron
chemical potential in the lowest Landau level for a non-uniform
electron gas in a SMF. They analyzed the SES problem in a SMF, and
their results showed that in a SMF only the lowest Landau level is
occupied by electrons under the condition
{{formula:9e7bc09f-0cf9-4bbe-8bb6-c1023fa21984}} or, equivalently,
{{formula:2d07912a-96bb-40a7-8d9b-d01514eb2038}} (A/z){{formula:4b3cd0d3-339e-462c-b613-44ecf2278860}} . The cyclotron
radius of the lowest Landau level orbital is given by
{{formula:a1910e27-b785-4e96-8ed0-82f95f898580}} . FGP used the expression for
{{formula:3a750a5b-bb4a-456d-ab85-4fd8803e411b}} in dealing with {{formula:0144b358-3ade-47d9-8877-41bdea15b04d}} . In the FGP model, they assumed that at high
density the exchange correction is very small and thus neglected
the exchange correction to {{formula:87cc3120-41cf-491e-bba1-f1acc3053c52}} ,
obtaining {{formula:443d4543-2417-4493-9adc-346862a3e6d9}} in a SMF. Owing to these different
treatments of the exchange correction, the SEF of the FGP model differs somewhat
from that of other SES models.
| r | fa5449df11780ee4aac0cae0a124589f |
Adversarial training.
The adversarial training objective is to increase the model robustness by adding adversarial examples into the training set.
The most popular and common approach is adding the adversarial examples with correct labels to the training set, so the trained model correctly predicts the label of future adversarial examples after fine-tuning.
Other popular adversarial training methods generate adversarial examples at each training iteration {{cite:9821a69852fb1ae97a8968b5e539bc5040cbc37a}}: the model's gradients are computed on the obtained examples and used for backpropagation.
| m | a31d7a0ee49f03de85cfddc89496d800 |
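The per-iteration scheme described above can be sketched on a toy model. The sketch below uses FGSM-style sign-of-gradient perturbations on a two-point logistic-regression problem as an illustrative stand-in for the cited methods; the data, step sizes, and perturbation budget are all made up.

```python
import numpy as np

def fgsm(x, y, w, b, eps):
    """FGSM-style perturbation for the logistic loss
    L = log(1 + exp(-y * (w.x + b))), with labels y in {-1, +1}."""
    margin = y * (x @ w + b)
    grad_x = -y * (1.0 / (1.0 + np.exp(margin))) * w  # dL/dx
    return x + eps * np.sign(grad_x)

def adversarial_train(X, Y, eps=0.1, lr=0.1, epochs=200):
    """Toy adversarial training: regenerate adversarial examples for the
    current model at every iteration, then take gradient steps on them."""
    rng = np.random.default_rng(0)
    w, b = rng.normal(size=X.shape[1]), 0.0
    for _ in range(epochs):
        X_adv = np.array([fgsm(x, y, w, b, eps) for x, y in zip(X, Y)])
        for x, y in zip(X_adv, Y):
            s = 1.0 / (1.0 + np.exp(y * (x @ w + b)))  # sigmoid(-margin)
            w += lr * y * s * x
            b += lr * y * s
    return w, b

# Linearly separable toy data whose margin exceeds eps.
X = np.array([[2.0, 0.0], [-2.0, 0.0]])
Y = np.array([1.0, -1.0])
w, b = adversarial_train(X, Y)
```

The resulting classifier still separates the clean points, having been fit only on their perturbed versions.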
We begin by discussing the constraints on the interacting scenario for CMB alone and then gradually investigate the effects of other cosmological probes as they are added to CMB. For CMB alone, we find an indication for a non-cold DM at 68% CL ({{formula:36cb1c73-efc5-4460-9426-1832e916e2fc}} at 68% CL). This supports the decay of the non-cold DM into vacuum (i.e. an energy transfer from non-cold DM to vacuum), because we also notice an indication for a non-zero coupling at 68% CL ({{formula:deb42819-c8cd-4724-8436-f2207644a82f}} at 68% CL). Due to the transfer of energy from non-cold DM to vacuum, we detect a smaller amount of DM, which is clearly reflected in the estimated value of the matter density parameter {{formula:161c6680-4671-4904-8a8f-d4cc18be41a9}} at 68% CL (for CMB alone) compared to the {{formula:9225be35-0246-4ded-b219-c80d2922d6dd}} CDM value obtained from Planck {{cite:b7865bd1e9b56f8038e44b19950f20e281d39ae8}}. It is worthwhile to note that, since we have only an upper limit on {{formula:7223ed1a-da32-43e5-b81d-51809ce3cf0a}} in the CMB-only case because of the interaction, and given the strong correlation with many parameters of the model (see fig:interacting-vacuum-DM), the posteriors for all of them are highly non-Gaussian. In addition, this results in a higher value of the Hubble constant ({{formula:2be9e97f-6357-4f00-adc4-fd5ceaf55fb2}} km/s/Mpc at 68% CL for CMB alone) due to the anti-correlation existing between {{formula:48911db8-55fd-41cc-83d9-579507cc8b46}} and {{formula:205433d2-9378-40b0-874c-bea3ab9a9672}} (as we can see in tab:IVS). We also note that the {{formula:2d8c5e53-649c-46b8-a58b-dcaafefdcea3}} constraint is significantly relaxed due to its very large error bars.
Consequently, the {{formula:3e2206d5-fe09-4754-972f-f670b3a5cf84}} tension on {{formula:4a86bc91-3d46-40e3-9b36-95fc9a647ce2}} between Planck (within the {{formula:6dc851f0-eb7d-496c-b004-76f6d97d95f2}} CDM paradigm) {{cite:b7865bd1e9b56f8038e44b19950f20e281d39ae8}} and the SH0ES (Supernovae and {{formula:91305a47-da31-43b9-8a1b-6a0ac33aad46}} for the Equation of State of dark energy) collaboration ({{formula:ed7701aa-d315-497a-9bcb-2d2f4d0a2527}} km/s/Mpc) {{cite:aab58494fb11297d413f635441fecef9bb421074}} is reduced down within {{formula:b0ae7979-3f9f-45ce-aa0e-0f35ce503c2e}} for this case. Notice that, such alleviation of the {{formula:217008ba-c813-41bc-81d4-6a0ad6cbdaff}} tension depends both on the mean value of {{formula:6188b85c-01e8-42e9-a874-a3f54a17674e}} and on the increased error bars, which is caused by the increase in the volume of the parameter space.
| r | 7d44c863afc6848d08151453b3f5ccb5 |
(7) MoCo + Self-Training:
Here we initialize the model using MoCo learning on the unlabeled data before semi-supervised learning using Self-Training.
A recent work by Chen et al. {{cite:c278c8bca071aa50b14fca1d6b425d72cbef57d3}} has shown this to be a strong semi-supervised learning baseline. The procedure is as follows:
| m | f7db01ad5a090e53b466c7dd2152e321 |
We use six public datasets — three citation networks (Cora, Citeseer, and Pubmed) {{cite:08087cd76baad449fa9e035e1fd0fbb716efd1dc}}, two co-purchase graphs (Amazon Photo and Amazon Computer) {{cite:5b0dcfbf8d57f4c378b8fff178d0d6c69159d22e}}, and one co-authorship graph (Microsoft CS) {{cite:5b0dcfbf8d57f4c378b8fff178d0d6c69159d22e}}.
Results on Amazon Computer and Microsoft CS can be found in the Tables REF and REF .
{{table:d0bcd4d8-3e8d-4f23-88f7-2606737ee179}}{{table:6319530d-e2a7-4faf-9057-2618fe625f1c}} | r | 52ed234d33cf422e14a8e088ce61d1ac |
Five state-of-the-art models proposed for VPR
are selected for comparison:
NetVLAD {{cite:69722d26e9eb710bc095851aeda6caf1643f8e70}} is the seminal generalized VLAD pooling layer.
On top of it,
CRN {{cite:ea8bc7678b549dddb3815fa41407f6c45eec97e4}} introduces an attention layer to estimate local saliency according to semi-global context.
SPENetVLAD {{cite:c50b6bd2b761458be96fa0d4e3c57522736291a6}}
stacks the
regional
VLAD
features to retain the spatial information.
R-MAC {{cite:65d4ecfad71c7027bcab2d01a2997fd8f6a528c9}}
aggregates
the max-pooled activations of multi-scale rigid grids.
APAnet {{cite:705450a173d02fa407fbe7878cdbfa4ac2003e04}}
aggregates
spatial pyramid features
weighted by cascaded attention blocks. Besides, we adapt
GhostVLAD {{cite:4734f3879e27987f5580ace6bd83b21acc3cacc3}}, a similar architecture proposed for face recognition, to the VPR tasks.
{{table:0d3b0608-ec28-4c40-944d-0d107264bc25}} | m | 3515a6b51827bdd0af7b30f39d9e5f28 |
To simulate the IFE, we use the 3D relativistic PIC code EPOCH {{cite:6242f371044019166dc63ca57029d8ca7760486f}}. We assume a fully ionized helium plasma with a super-Gaussian longitudinal shape to mimic the electron density from a gas jet, given by {{formula:0a75ed74-a9d0-447c-85e6-eb383399fb2b}}, with initial electron and ion temperatures of 1 keV and 1 eV, respectively. Using this profile, the plasma ramps from vacuum to {{formula:14fe3ee3-6cbb-4c48-a1da-43dd712782fd}} over roughly 100 {{formula:ec8e2723-ef0b-4352-9cb0-968742f0795d}} m. The laser is polarized along the {{formula:5e3601fe-6959-4b11-84f2-81c68718c714}} axis, has a Gaussian temporal pulse shape, and is focused to the plane {{formula:d180e310-b466-47e9-913f-8f085a374fdb}} {{formula:6f7fe77f-304e-4c0e-90da-b57a8519f352}} m.
| r | fbbfaddeec0a9ff6bbc237d68da2b05c |
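For illustration, a generic super-Gaussian longitudinal profile of the kind described can be evaluated as below. The order, scale, and peak density are placeholders — the exact profile parameters used in the EPOCH setup are not reproduced in this excerpt — but the shape (flat-topped core, steep ramps to vacuum) is the point.

```python
import numpy as np

def super_gaussian(z, n0, scale, order):
    """Generic super-Gaussian density profile n(z) = n0 * exp(-|z/scale|**order).
    Higher `order` gives a flatter top and steeper edges than a Gaussian."""
    return n0 * np.exp(-np.abs(z / scale) ** order)

# Sample the profile over a +/- 300 micron window (all values assumed).
z = np.linspace(-300e-6, 300e-6, 7)          # metres
n = super_gaussian(z, n0=1.0, scale=150e-6, order=6)
```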
Layerwise training, a greedy learning strategy employed to reduce optimization time, has been shown to function via a reparameterization of search parameters {{cite:61f35531d7a4ba989f0f306841d8a4aa60661e5b}} and to help avoid barren plateaus {{cite:9ba2c5b7e9e875f82bd4ffe4ca318465b781a665}}. Although promising, such strategies can become sub-optimal in certain scenarios, e.g., when stacking single layers is not expressive enough and adding more layers per stack is required to minimize a cost function; see the abrupt training transitions in {{cite:326d72a7f900592de24cc81e2834bf3f63d8e394}}.
| i | d324358b74ec595d3efe198153589900 |
Extensions to Other Applications.
Flash-Cosmos can be used to accelerate not only bitwise operations but also any desired operation.
This is because Flash-Cosmos supports a set of bitwise operations that are logically complete, like other processing-using-memory (PuM) substrates that use the operational principles of the memory cells for computation, such as Compute Caches {{cite:7d6930dbd1db92e2f2153c594e5e8abbab1c905d}} (SRAM-based PuM), Ambit {{cite:eb5bfe875b713439bd1b28c463632999dfff9370}} (DRAM-based PuM), and Pinatubo {{cite:75def2e337dc2237da49393987b4a01b406f9a67}} (NVM-based PuM).
Follow-up works (e.g., DualityCache {{cite:9b71513db92cc837171d96e807480f168acf9e9a}}, SIMDRAM {{cite:d25da6ea2ec81cdad0ce28e707410959d45f8246}}, and IMP {{cite:1f90068ad5ca0e74bc1856a1a013764c787e92b9}}) propose frameworks that leverage these substrates and techniques to automate the creation of desired complex operations (e.g., addition and multiplication) to accelerate a broad range of workloads, including graph processing, databases, neural networks and genome analysis.
We leave the development of such a framework for Flash-Cosmos to future work.
| d | 565ee30fdc2ecb44c8f99f838bb1c9f5 |
To further verify the efficacy of the proposed method, we conduct experiments on the MS COCO dataset {{cite:62ae38facc41dccbcd5195844f3c75b8c00125a2}}, which contains many more images and semantic categories than the PASCAL VOC 2012 dataset. The quantitative evaluation results on the MS COCO dataset are presented in Table REF . We observe that Ours-L and Ours-M achieve 36.2% and 36.1% mIoU on the MS COCO {{formula:fa44f8b1-1efc-48d6-a075-d1f37d886946}} dataset, respectively. Both outperform other WSSS methods that use only image-level labels. Moreover, our results also surpass those of recent state-of-the-art WSSS methods that use image-level labels together with extra saliency cues. The strong performance on the MS COCO dataset further demonstrates the efficacy of our methods.
{{table:7f1fdcb6-adfc-4226-855b-7306b8677b9d}} | r | 476d608563fa64e25f42d6108f6b84ca |
There has recently been a concerted effort to tackle the phase recognition problem using machine learning methods
{{cite:af14d6ffcf44b96f5234cf2b6aee14d89d3221f8}}, {{cite:bc442a80ecd74c4064d0e2ab7811542f87e7c36a}}, {{cite:56c3d8c7963f218bda2a24103c44dec377b67b5e}}, {{cite:61254c97f529b94a3bcdaba95c26eb9a50ac09b1}}, {{cite:2b69e9eb7643361d923214793eb20e015b89c346}}, {{cite:c0969838b7a8cac7990a2b2303865f8a71e58588}}.
The multiscale string operators constructed by our circuits — which allow SPT phases to be identified simply by measuring expectation values of certain universal operators — suggest that our circuits may serve as a good reference point from which to benchmark heuristic machine learning approaches.
| d | 8bb1c095103523be57d512bf0b8288a2 |
The examination of frustrated magnetic systems dates back to the 1950s, when it was found that the Ising antiferromagnet on the triangular lattice has quite different properties from ferromagnets or bipartite antiferromagnets {{cite:4a82f6612c401cc4612e1212f8c1d98f0df19612}}, {{cite:cdb58fd1fc571cd915a972f2e52e406a85648952}}. Since that time the frustration phenomenon has been a topic of constant scientific interest. Thanks to the relentless desire of theorists and experimentalists to clarify the unusual and discover the unknown, the area of frustrated magnetism has expanded considerably over the last two decades. Many current studies on frustrated magnetic systems simultaneously address other phenomena with more degrees of freedom, such as magneto-elastic couplings, dilution effects, orbital degrees of freedom, or electron doping (see, e.g., Ref. {{cite:bcd47ed6379bef64aed27210846c6000b79b9928}} and references therein).
| i | 57b9a4014c71e0b2889536e496310b11 |
In this paper, we have performed a first investigation of the spectrum
of non-linear solutions of biadjoint scalar field theory in four
Euclidean dimensions. Our motivation stems from the known double copy
relationships between various field theories, summarised here in
figure REF . It is not known how general this scheme is,
and finding a genuinely non-perturbative incarnation would be a big
step forward in this regard. We found that the spectrum of Euclidean
solutions is rather rich in dimensions other than four, mirroring
similar results that have been obtained previously in Lorentzian
signature {{cite:df2813895df4600b14d92c876137d366a751ba0e}}, {{cite:c5639e606cc0228421cfe64d84a348aac64f0446}}, {{cite:79091660d706d62a040aacf64b14cc0b8d5ff342}}. In
precisely four spacetime dimensions, however, there are no simple
power-like solutions, with a consequent absence of dressed solutions
that screen a power-like divergence at the origin. This can be traced
to the fact that the power-like form that is required is a harmonic
function in {{formula:e02352a8-cce7-429b-9d24-c5d8774bc52d}} , and thus solves the linearised biadjoint field
equation. It can be identified with the zeroth copy of the
Eguchi-Hanson solution, which has previously been considered from a
double copy point of view in
refs. {{cite:3fd4901d2d27902707b8a945090ffe98a32817c4}}, {{cite:58cfb4f2175d24292aa51d08a7d377f15f7b8e86}}. The single and zeroth copies
can be formulated in terms of certain differential operators, obeying
conditions that are similar (but not the same) as the Kerr-Schild
conditions underlying the exact classical double copy of
ref. {{cite:b4b34f8f461d4feb90c86282071d4c915ca01bb3}}.
| d | 74fd3f79444efec9102cee31eef98336 |
Instead of exhaustively searching over the hyperparameter space, a random search samples configurations at random until a user-defined budget is exhausted {{cite:4a1dca0af017b11b2e39c06013640ed3c1e5cc7b}}. This approach is usually able to outperform grid search in cases where some hyperparameters are more important than others, as it may find configurations that are left out when discretizing. In the literature, random search is a standard baseline when comparing HPO methods, which is also why we choose to include a random search in the experimental section of this paper.
| m | 3dcce9db3ee0b3029f320d38a11de26c |
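The baseline just described is short enough to state in full. This is a generic sketch of random search over a box-shaped hyperparameter space with a toy objective in which only one hyperparameter matters — the situation the row cites as where random search tends to beat grid search.

```python
import random

def random_search(objective, space, budget, seed=0):
    """Sample configurations uniformly from `space` (name -> (lo, hi))
    until `budget` evaluations are spent; return the best one found."""
    rng = random.Random(seed)
    best_cfg, best_val = None, float("inf")
    for _ in range(budget):
        cfg = {name: rng.uniform(lo, hi) for name, (lo, hi) in space.items()}
        val = objective(cfg)
        if val < best_val:
            best_cfg, best_val = cfg, val
    return best_cfg, best_val

# Only 'lr' affects the toy objective; 'momentum' is irrelevant, so the
# 500 samples effectively all probe the important dimension.
space = {"lr": (0.0, 1.0), "momentum": (0.0, 1.0)}
obj = lambda c: (c["lr"] - 0.3) ** 2
cfg, val = random_search(obj, space, budget=500)
```

A grid of 500 points over both dimensions would place only about 22 distinct values along `lr`, whereas random search places all 500 there.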
To address the issues with the existing methods, we propose a Gaussian-process-based method for contour estimation.
Gaussian processes (GPs) are nonparametric kernel-based probabilistic models that have been used in several machine learning applications such as regression, prediction, and contour estimation {{cite:6e3d19da2d26765eaba896f77d4e2bcf4de09c63}}, {{cite:ac5e4786655c61c8e24eb84d2fe289629007cdea}}.
In {{cite:10d06702abb74086e72981db27c600984cc8be3c}}, a GP model based on charging curves is proposed for
state-of-health (SOH) estimation of lithium-ion batteries.
In {{cite:601a8dadd5d02bcbbcb6b7faf7a82acf3bfb7223}}, GPs are applied to pattern discovery and extrapolation.
In {{cite:f59b92eafd2340d8793d196c190659890b4d005d}}, a GP-based regression method is proposed for wind power prediction.
| m | 1855060db156e85d13760fc8a8fafe19 |
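The GP regression machinery the cited works build on can be sketched compactly: the posterior mean and variance at test inputs follow from the kernel matrix in closed form. The RBF kernel, length scale, noise level, and toy data below are illustrative assumptions, not the proposed contour-estimation method itself.

```python
import numpy as np

def gp_predict(X_train, y_train, X_test, length_scale=1.0, noise=1e-2):
    """Closed-form GP regression posterior with an RBF kernel:
    mean = K* K^-1 y, var = k** - K* K^-1 K*^T (unit signal variance)."""
    def k(A, B):
        d2 = (A[:, None] - B[None, :]) ** 2
        return np.exp(-0.5 * d2 / length_scale**2)
    K = k(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = k(X_test, X_train)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, var

# Toy 1D data: the posterior mean at a training input nearly matches
# the observed value, and its variance is small.
X = np.array([0.0, 1.0, 2.0])
y = np.sin(X)
mean, var = gp_predict(X, y, np.array([1.0]))
```

For contour estimation, the same posterior mean and variance would be queried on a dense grid to locate the level set of interest with uncertainty attached.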
In this work, we analyzed a large sample of 10,092 {{formula:081ceff1-a022-4bb3-bcfa-05e4a5532dc0}} Scuti stars towards the Galactic bulge from the OGLE-IV survey. Two decades earlier, a catalog of {{formula:3ae5b432-fd7b-4a56-a5cf-51c63f48fbf7}} Scuti stars from different sources was prepared by {{cite:0fdce6c114aed00ad8b2972a4024d94777e7f455}} and contained 636 objects. Unfortunately, the authors did not provide information on the number of HADS and multiperiodicity of stars. {{cite:9efc3566b114036578944dac52c9fe43e55ff5fe}} analyzed 1568 {{formula:fa04ff38-309e-4b50-8fec-a1b57ec93235}} Scuti stars from the Kepler mission. They focused on the mode content and comparison between the theory and observations. The authors did not provide the number of stars classified as HADS, but the number of stars with one dominant signal. Thanks to excellent quality of the Kepler photometry, the stars are classified as objects showing one signal with significantly higher amplitude than others, not necessarily as those having only one signal in the power spectrum. {{cite:9efc3566b114036578944dac52c9fe43e55ff5fe}} identified 160 stars with one dominant frequency in the power spectrum, which corresponds to about 10 per cent of their sample. In our sample, we identified 28 per cent of single-periodic stars. The difference stems from a significantly higher noise level in ground-based photometry.
| d | 988f4e1bc12d80354f1782a90a01527e |
To derive such a framework, we ask ourselves: what are the ideal requirements that ISSL representations should aim to satisfy?
We prove necessary and sufficient requirements to ensure that probes from a specified family, linear or multi-layer perceptron (MLP), perfectly classify any task that is invariant to desired data augmentations.
This complements theoretical work in ISSL {{cite:7d1199c7e96f66f4325bcb863b39f3ac794d925a}}, {{cite:e3864868e651e810e6ac47e9bba45e21a50d9deb}}, {{cite:1ad214c77ac987442d5d2ad20ef93f992511400a}}, {{cite:6af8e553d43ae2c4876e605029d75f06f42316b3}}, {{cite:1df30ac6e1548d6ea44c93b93299c49c42408a04}}, {{cite:7e07b7084961ef5be06cb498920bf49735299708}}, which analyze specific ISSL algorithms.
Our work instead focuses on properties of representations that should serve as a goal for any ISSL algorithm.
These ideal properties are:
| i | 0fc0f5c5b4130af16f02c3b794775e3d |
One of the areas where higher dimensional CVTs have found an application is in the field of evolutionary optimization. Recently introduced, MAP-elites {{cite:0afe96b63aa181a3acdeac39f45a7a6daf329636}} is an algorithm that illuminates search spaces in evolutionary optimization, allowing researchers to understand how interesting attributes of solutions combine to affect performance. To scale up the MAP-elites algorithm, authors in {{cite:12a63d39218f6c9250b99637d9c8860fb426cb55}} employ centroidal Voronoi tessellations, and therein, following {{cite:3de872dc257867dca7bdf906e10e7871a2404ca1}}, they employ MacQueen's method to obtain the CVTs and show the sufficiency of using 5000 centroids. In line with their result, we keep the total number of centroids around the same. Accordingly, we vary {{formula:74707891-3327-485c-af6b-869e602d7041}} such that {{formula:02a055c0-1baa-4795-b899-2d5243f00f1d}} is around 5000. The corresponding results are given in Table REF where we see that the computation time decreases with the increase in dimension {{formula:31faea79-33eb-49a8-9fe5-8ea33f429a25}} . This is because with increase in {{formula:94b9785b-42a0-4752-8d5c-6068fb3108d4}} , we decrease {{formula:1303808e-87dd-4268-a50c-768c45bf53c1}} to keep {{formula:1efde30d-868d-47ca-ad4e-ca919f5b5ffc}} around 5000. Hence, the computation of CVT in one-dimensional spaces with fewer centroids is faster. The low energy of all the tessellations is also worth noting.
{{table:eeb10c7d-00a6-48f8-8210-ca49e123e270}} | r | e22d491c9e26ab1b39db621c4f80cdf4 |
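MacQueen's method, as used in the cited CVT construction, is just online k-means on a stream of uniform samples. The sketch below assumes a unit hypercube domain and uses far fewer centroids and samples than the ~5000-centroid setting discussed above, purely for illustration.

```python
import numpy as np

def macqueen_cvt(n_centroids, dim, n_samples=50000, seed=0):
    """MacQueen's online k-means on uniform samples from [0, 1]^dim.
    Each sample moves its nearest centroid toward the running mean of the
    samples assigned to it, driving the centroids toward a CVT."""
    rng = np.random.default_rng(seed)
    centroids = rng.random((n_centroids, dim))
    counts = np.ones(n_centroids)
    for _ in range(n_samples):
        x = rng.random(dim)
        j = np.argmin(np.sum((centroids - x) ** 2, axis=1))
        counts[j] += 1
        centroids[j] += (x - centroids[j]) / counts[j]  # running-mean update
    return centroids

# In 1D, the CVT of [0, 1] with 5 generators has evenly spaced centroids,
# so the result should approach {0.1, 0.3, 0.5, 0.7, 0.9}.
cvt = macqueen_cvt(n_centroids=5, dim=1)
```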
Gravitational wave production in cosmology was studied in Refs. {{cite:eea70a97b35c15823df8641b670227f784d1bc12}}, {{cite:8fc2b99852a33e5ae9431d6a3a2a9c4ce21b1cc9}}, {{cite:20916b7a93691f36617b935c95ad94dca262c0e3}}.
According to the Parker theorem, see Refs. {{cite:a45a8945099f7ab3f24e56796e67ff925b9ba7af}}, {{cite:1dc5715f2d8fd380a9e0348e87775952871cb9e4}}, the production of massless particles by a conformally flat space-time
metric is forbidden
if the corresponding field equations are conformally invariant. This is true for massless fermions, conformally coupled scalars,
and massless vector fields, up to a possible breaking of conformal invariance by the trace anomaly {{cite:95e23917db90e5e52d31419a71949e76efd82077}}.
It was discovered by L. Grishchuk that gravitons can be produced in conformally flat space-time since their equation of
motion is not conformally invariant {{cite:eea70a97b35c15823df8641b670227f784d1bc12}}.
As shown in the book {{cite:d08d6f5042dc74195e3dc94599e1430ed02e8ab0}}, graviton propagation in empty
but curved space-time is governed by the equation
{{formula:b9db42a9-ae0e-45db-a661-a1c7c4de6856}}
We consider 2D TM halides with formulas MX{{formula:653700a8-ccd7-4749-8645-ca11a4e71298}} and MX{{formula:3839f404-5e4c-473e-bf57-2cf2cb00e04f}}
(M=Ti, V, Cr, Mn, Fe, Co, Ni; X=Cl, Br, I). Fig. REF (a) and
Fig. REF (b) show the side and top views of the crystal structures of MX{{formula:4820905d-9af0-4620-8c01-951c5a9b0f29}}
dihalides and MX{{formula:7dbc3c66-81c1-4277-aceb-3bbdd107e7da}} trihalides, respectively. The lattice of TM dihalides
consists of triangular nets of TM atoms and exhibits geometrical frustration when the
magnetic moments couple antiferromagnetically. On the other hand, the TM atoms form
honeycomb nets in MX{{formula:6f1e816d-a528-4173-aff2-2388a2bbb4be}} trihalide monolayers. The lattice parameters are
taken from Refs. {{cite:98c819139fd8f98eec14c422307f74dbbbb4e341}}, {{cite:55b676f21a09be3b161d751299dace8fcf5a8f79}}, {{cite:307615157f25a3e32080337adc36a60e45d29a18}}. Simulations of the {{formula:54b652c3-7df1-4439-acf5-0ce106d8cec4}} and {{formula:6a573098-2852-4c8b-b6e6-fa53774e4306}}
unit cells, containing one and two formula units, respectively, are based on a slab model
with a 25 Å vacuum layer separating periodic images. For the DFT calculations we use the FLEUR
code {{cite:4b37ee7b7a4cb1d1358de4ea36c6195c9a63d528}}, which is based on the FLAPW method.
For the exchange-correlation functional we use the generalized gradient approximation (GGA)
parametrized by Perdew et al. {{cite:45ee8fa2f1b4d6fdaab5edc3b4058408a9fc5567}} (PBE). An 18{{formula:d00ffc38-b2c3-4eac-b785-8aaf3bac8c5b}} 18{{formula:d687ed38-9fca-4ae7-915f-87fe236bef63}} 1 {{formula:ab0c63d2-a1a4-4013-ad25-c08f2c3ced46}} -point
grid is used for all systems. A linear momentum cutoff of {{formula:0321a9ea-2dc4-4498-8378-56056ec38052}} = 4.5 bohr{{formula:e8876a93-af10-4beb-a6f5-87502d75c54d}} is
chosen for the plane waves. The effective Coulomb interaction parameters are
calculated within the cRPA method {{cite:360af6e443cdc63dd245f2ad82b758c8766d404e}}, {{cite:76fe5e8e2faa6ca89fa892d82457c59c9e68f9ff}}, {{cite:28116314b66be9b81e656b3c544cbd85d597efed}} implemented in the SPEX
code {{cite:d7b788567962cc01836d64becaa72d4867486d0a}} with Wannier orbitals constructed from projection onto localized muffin-tin orbitals {{cite:d014cfdbb3add8d4b3d98d6e994acba7c7bd666e}}. A dense 16{{formula:925d96e5-902f-4ba1-8dd3-a7f6b628dbbd}} 16{{formula:ea721f95-a874-4064-967e-c181ad3356e2}} 1 k-point grid is used for the cRPA
calculations.
{{figure:b8045ed8-c015-49ee-bebd-8953d33cc644}}
Instead of using the entity graph given in existing KGs, such as Wikipedia hyperlinks and YAGO {{cite:4de54cb845a58931cdd9f76ebd7e4b03968c3684}}, we capture entity co-occurrence from texts, because (1) under the open-world assumption, existing KGs are far from complete and may not cover all relevant information about entities, and the missing links and multi-relational data may introduce noise; (2) we are inspired by Word2vec {{cite:c28654d4a7d6eea4697af7223589f7cf2b694cca}}, which exhibits the expected analogy property by modeling co-occurrence; and (3) although TransE {{cite:963818fe2b85be15c12b8321deffca8ab8d75e3d}} can also model the relation between entities as a translation operation in the vector space, similar to Word2vec, the explicitly modeled relation {{formula:1ccede60-d33f-4d0a-9427-039ec1f1a306}} may have a different distribution from that in texts, which is the target source of RE. Nevertheless, we are interested in incorporating KGs as side information in the future.
We begin by summarizing our results. We find that a tight-binding model in
one dimension can host edge states when the phase of the hopping amplitude
is periodically driven in time by applying an oscillating electric field.
The presence of a staggered potential or an on-site Bose-Hubbard
interaction generally enhances the regions where such states appear.
The edge states only appear when the driving frequency is of the order of
the hopping. For frequencies much larger than the hopping, we find that there
are no edge states; the reason for this is explained at the end of Appendix
C 1. Hence we cannot use the Floquet-Magnus expansion {{cite:990068e0508e84870c4372f7eeecdf88ad5414ae}}, {{cite:80b2e3ec1fdf491de35671aabb27489815e3b5f7}},
which is valid at high frequencies, to study the edge states.
We have used a Floquet perturbation theory to show that when the staggered
potential or the interaction strength is much larger than the hopping,
periodic driving can generate states localized at the edges.
The results obtained by this method agree well with those found numerically.
Finally, we have shown that a measurement of the differential conductance
across a periodically driven wire with non-interacting electrons can detect the
edge states; the conductance has peaks when the voltage bias coincides with
the quasienergy of one of the edge states.
Some of the studies mentioned above were actually performed using a version of mean field theory known as the population density approach {{cite:357903c67168ce212d6a1edb8485c67758a2e80d}}, {{cite:57bc1bd2d9e52eaa61aa5a4671ef43fe0ba1352e}}, {{cite:1023554e6c59e24e442858dcf10109337cd07308}}, {{cite:d99275d1dd9d62c1ff978763b4b7ae5152f9f687}}, {{cite:49bc5dff05068e7908df71c790328f15d601da51}}, {{cite:493d1d68672feafac2a784dca0b6df01c95645f1}}, {{cite:020f13217eef627dd34a4f6ea6c8303e656adc47}}, {{cite:d2143ebaaefd761c4702acecf2ea157febd7b82d}}, {{cite:fa9127f341c77aa78ea2233e38faf7954c832702}}, {{cite:3aaf9e91c9ba75aeba3b467d33a6f02759467107}}. In this approach, one obtains not only the stationary distribution of the firing rates, but also the distribution of the membrane potentials. By leveraging perturbative solutions of a Fokker-Planck equation, this approach can help uncover the dynamics emerging from the instability of the asynchronous regime, and it has been used to build complete phase diagrams of networks of spiking neurons {{cite:49bc5dff05068e7908df71c790328f15d601da51}}, {{cite:d2143ebaaefd761c4702acecf2ea157febd7b82d}}. In some special cases, exact results in the thermodynamic limit for the dynamics of both the firing rate and the membrane potential have been obtained {{cite:79a0df98d835831577e7e21c32f1d57e5d382f40}}. Mean field theory, dynamical field theory and the population density approach are complementary approaches, and one may choose one approach or the other according to the problem at hand.
In this paper we have proposed a simple strategy to assess the role that a given network might play
in shaping the relation between two other networks, thus enlarging the paradigm of network similarity beyond classical pairwise comparison. This approach is aligned with a recent endeavour that aims to go beyond dyadic interactions
in the characterisation of complex systems {{cite:f7b0e7d5d0d71960502d454b5ce4831599bf4a96}}, and takes inspiration from the causal mediation literature {{cite:feab7b1c1bc1ca01814a79afec7b9e7f873cb8a9}}, {{cite:56ca4ebdb6e28c81cadd21f483f37f361126c971}}. We make use of a set-theoretic approach to define a similarity metric between a pair of networks and to further explore whether such a relation is independent of, mediated by, or suppressed by a third network, which might be hidden. We introduce simple generative models that, we prove, produce pure mediation and suppression. We then explore the coexistence of mediation and suppression and develop a procedure to disentangle the two indirect effects. The whole methodology is subsequently applied to a range of real-world, 3-layer multiplex networks, and we unveil previously unnoticed mediation and suppression effects in social and brain networks.
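The set-theoretic similarity and mediation test are not spelled out in this excerpt; as a rough stand-in, the sketch below (our own, with assumed function names) uses a Jaccard overlap of edge sets, where `conditional_similarity` removes the edges of a third network C to probe whether C accounts for the A-B overlap:

```python
def edge_set(edges):
    """Normalize an edge list to a set of undirected edges."""
    return {frozenset(e) for e in edges}

def similarity(A, B):
    """Jaccard overlap of the edge sets of two networks."""
    a, b = edge_set(A), edge_set(B)
    return len(a & b) / len(a | b) if a | b else 0.0

def conditional_similarity(A, B, C):
    """Overlap of A and B after discarding edges present in C.

    If this drops toward zero, the A-B relation is (in this crude sense)
    mediated by C; if it rises, C was suppressing the relation.
    """
    c = edge_set(C)
    a, b = edge_set(A) - c, edge_set(B) - c
    return len(a & b) / len(a | b) if a | b else 0.0
```

This is only a schematic probe of mediation, not the paper's disentangling procedure.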
Compared to other methods that use CNN features {{cite:42ffb27b9e004ce48239bea9941ece060701b24d}}, {{cite:17b4edd8bb4b90d544892d0581b76221642afcc8}} pretrained
on external datasets, our COG-based 3D object detector has comparable or better performance even without the contextual cues provided by our cascaded classifier.
Conventional CNNs for 3D detection {{cite:42ffb27b9e004ce48239bea9941ece060701b24d}}, {{cite:17b4edd8bb4b90d544892d0581b76221642afcc8}} are trained to produce weighted confidence
scores for each of multiple object categories, while our first-stage detector is instead tuned to discriminatively localize individual categories in 3D.
Our subsequent cascaded prediction {{cite:2e05893f8591a1dfe126ae1cc59ef6a218b3bddf}} of contextual relationships between object detections
has structural similarities to a multi-stage neural network, but it is trained using (convex) structural SVM loss functions and designed to have a more interpretable, graphical structure.
Interestingly, our overall cascaded approach is more accurate than standard 3D CNNs {{cite:42ffb27b9e004ce48239bea9941ece060701b24d}}, {{cite:17b4edd8bb4b90d544892d0581b76221642afcc8}}, {{cite:785b2349cda7b2f5ba29538cba9a5e482225b3eb}} in the detection of both 10 and 19 object categories.
{{figure:3450c7f9-6463-4c97-a01d-ae00c5e4faa5}}
Such a decomposition is unique; cf. {{cite:dbd98143d8d7d9ffade5c95b9aa3b3d76d2bfca9}}, Ch 1, §4.
Localization is the process of estimating the location of a signal source, which is vital for a variety of applications, including location-aware communications {{cite:69710c895415939acbbb696b635830739e2933af}}, autonomous driving {{cite:ff34bdd80166394b7325bf0e9bdbc8e876d036d7}}, industrial internet of things (IoT) {{cite:d8ac3ff2b8699e7ffe8deeb4cb03e314a267ee41}}, and tactile internet {{cite:7ac8277453360305fbc9aad3146f63f7f45e0428}}. Over the years, a plethora of localization techniques have been proposed. These techniques utilize different signal or measurement types that include ultrasound, visible light, radio frequency (RF), inertial measurements, and hybrid signals {{cite:667575c45a900bc70d3b0f273bd20a08cfedbdee}}. Among these modalities, RF signals are widely used because of their ubiquity in current wireless communication systems, where abundant cellular and wireless local area network (WLAN) infrastructure provide added value to user-oriented services and network management {{cite:2ee7e89ea980c0cb5e6951382ba7600870cb40ab}}.
{{cite:dc28d589d0f8578f9698472fda92cbbb27f8d65c}} pointed out that VDB0-B195D is a particularly
extended object and that the photometric measurements of
{{cite:4183c191c19e73d982dafd4082dce7c33f8050fc}}, {{cite:5acd5acf60284749d59ab2723e92f464f3a19d77}}, {{cite:1c318f13b109d5f1d10a9e483f8405abb6e39882}}, and {{cite:90e0894a1f02223faafdc3c4f5f72f3ab39b9826}} did
not include all of its flux. Therefore, we adopt the photometry of
{{cite:dc28d589d0f8578f9698472fda92cbbb27f8d65c}} to fit the observed SED with theoretical SSPs for our
age determination. The fit yielding the minimum {{formula:1808a50d-3e34-46d8-a9d2-65d8194e7dbc}} value
({{formula:b02b86ee-3470-4193-a0cf-6ac6f1036467}} ) was adopted as the best fit and we adopted the
corresponding age value, {{formula:da1bdfec-8c9a-4a29-a903-3bb0831ea14f}} Myr. In addition, our
best-fitting age estimate of {{formula:533ef01e-ea4d-4071-b2b8-f86d89381b2d}} Myr results from using the
(redder) {{formula:40a6c974-0e81-400d-9050-0844807f2e01}} –{{formula:59636f82-c8c0-457d-bd3a-3ac729f3da4c}} and {{formula:34c7874d-350b-419d-8ec4-8c624f51d5d5}} photometry; using only the blue
part of the cluster's SED ({{formula:20769720-f852-42ef-a530-f42b4ced9bf4}} –{{formula:4de8ca9c-56df-4e20-9636-defe64e82217}} , where any effects caused by
stochasticity may be smaller) yields an age of {{formula:66c3b923-9454-47f2-ac4f-d4b79d393c8c}} Myr.
The uncertainty was estimated using confidence limits. If
{{formula:0d52c9da-d3db-4717-9a75-6502970e35fa}} , the resulting age is within the
68.3% probability range; here, {{formula:56eaca42-c1ad-400e-bb6b-62369de8f7ca}} is the number of free
parameters, i.e., the number of observational data points minus the
number of parameters used in the theoretical model. Therefore, the
accepted age range is derived from those fits that have {{formula:a20dce8f-cf24-4db3-8d08-afc86843a65f}} . The best
reduced-{{formula:514d311a-2c75-436b-a2a8-4fef37c7f41d}} —defined as {{formula:24beeb85-5bd6-4037-9056-3aadb89c5407}} —and age are listed in Table 4. The best fit to the SED
of VDB0-B195D is shown in Fig. 4, where we display the intrinsic
cluster SED (symbols with error bars), as well as the integrated SED
(open circles) and spectrum of the best-fitting model. From Fig. 4, we
note that the observational data in the {{formula:bed4a9d8-4950-497f-bf94-47735fde9a79}} , {{formula:881b2d55-475b-41d6-897a-1ea47c2057a3}} , {{formula:7f27442f-28be-4691-860b-9dfc2a140cb7}} , and {{formula:36dd0294-6480-4313-b9d5-c5ae6e06b8f8}} BATC
filters and in the {{formula:065c3370-8934-4cbf-bbae-426f20b32308}} band do not match the best-fitting
model very well (the difference is approximately 0.3 mag).
Photometric uncertainties in these filters may cause some differences,
although this might not be the main reason for the discrepancy.
As is well known, observed star cluster
SEDs are affected by age, metallicity, and reddening. If the reddening
value and metallicity adopted in this paper are reliable, the
discrepancy between our observations and the best-fitting model may
reflect the difficulty of achieving an appropriate (but formal) fit of
the SED of a single, real cluster with SSP models. However, as we will
see below, the reddening value adopted in this paper may be larger
than the actual reddening of VDB0-B195D. In addition, the
differences between the photometric data and the model in Fig. 4 show
a somewhat systematic behavior with wavelength: in bluer passbands the
cluster seems to be more luminous than predicted by the model, while
in redder passbands it is fainter than the corresponding model
predictions. A blue excess and red deficiency in the observed SED with
respect to the model predictions may indicate a shortage of red giants
(RGs), which can occur when the cluster is either younger or less
massive (or both) than the corresponding best-fitting model suggests.
In other words, IMF discreteness may play a role: due to a relatively
longer main-sequence (MS) phase and shorter RG phase, a random young
cluster is typically bluer than predicted by SSP models. At the same time,
we find that the reddening value adopted affects the fitting result
greatly. In fact, the best fit to the SED of VDB0-B195D improves a great deal when
adopting a smaller reddening value such as {{formula:51bdef09-766c-4696-957b-dee3a26bca1c}} : {{formula:71fcb999-faf9-43a8-8a21-452d1e201962}} ;
the resulting age ({{formula:cdd2e8a2-521d-408e-a524-7b914090cc78}} Myr) is nearly the same as the one ({{formula:79b1828d-1d7b-4341-9123-fc4cafddca55}} Myr)
obtained with {{formula:091d3b6c-8e9c-4992-aa07-9080e5dbb71e}}.
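The χ²-minimization and acceptance-threshold procedure described above can be sketched as follows; the grid of model SEDs, the magnitudes, and the threshold are placeholders (our assumptions), not the paper's data:

```python
import numpy as np

def best_fit_age(obs_mag, obs_err, model_grid, ages):
    """Return (age, chi2) of the SSP model minimizing chi^2 over the grid."""
    chi2 = np.array([np.sum(((obs_mag - m) / obs_err) ** 2) for m in model_grid])
    k = int(np.argmin(chi2))
    return ages[k], float(chi2[k])

def accepted_ages(obs_mag, obs_err, model_grid, ages, chi2_max):
    """Ages whose models fall inside a given chi^2 acceptance threshold."""
    return [a for a, m in zip(ages, model_grid)
            if np.sum(((obs_mag - m) / obs_err) ** 2) <= chi2_max]
```

In the paper the threshold is tied to the 68.3% confidence condition; here `chi2_max` is simply passed in.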
Due to the non-mesh-conforming character of the (isogeometric) finite cell method, it is infeasible to impose Dirichlet boundary conditions by (strongly) constraining functions in the spaces (REF ). Instead, Dirichlet conditions are imposed weakly through Nitsche's method {{cite:3f4ec22551684b60ed8c993c265d6e6df2d4f95e}}, {{cite:b5873f325d71d4301daca060b09f462f4d33bbee}}. By employing a mesh-dependent consistent stabilization term, a well-posed Galerkin problem is obtained:
{{formula:4b82ac25-7b2a-4a05-ac10-48b3c627586e}}
To show the generalization ability of the proposed JPU, we replace EncNet with two popular methods in DilatedFCN, namely DeepLabV3 (ASPP Head) {{cite:1ac32ae180d52b9a0676e1929ac046e1068f7f74}} and PSPNet {{cite:95723330f0dbdb70d5306089616c139543934f29}}.
As shown in Table REF, our methods derived from DeepLabV3 and PSPNet consistently outperform the corresponding original methods.
{{table:1e2804cf-156f-4f38-b7f0-94193062f22a}}{{figure:6862d462-c4e6-4560-8483-6b0c32d93902}}{{table:de83650e-ba34-4ada-b727-0fe43c0d1109}}
Our model of how quarkonium production is modified in nucleus-nucleus collisions
assumes that, indeed, the nuclear suppression depends sequentially on the
binding energy of the quarkonium states.
The starting point is provided by the significant correlation patterns
observed {{cite:758c16afe220c1e48e348fe21feaab9a6ad4da4d}} in the precise and detailed 7 and 13 TeV pp
data {{cite:1e8e9b68f53888ece798439c48be4d9e6b4b93b5}}, {{cite:918734704cb3e884491df808e466db0de6349794}}, {{cite:4bc3c6f6ef2a18a22feda058bc66869a45b9d231}}, {{cite:c55664eda38093e3db07b40f87946aaf8ff486b1}}, {{cite:7f0e3398a5046ad3c1c7a287125a65471ebb848a}}, {{cite:e2c6c171f6ab7da6f192099137ec6ae9cb98f51c}}:
S-wave quarkonium production in pp collisions can be parametrized
assuming that the transition probability from the pre-resonant {{formula:49ec94ad-391f-4885-b734-572dda012819}} state
to the physically observable bound state
is simply proportional to a power law in the binding energy,
{{formula:162d072f-36ea-4c0d-9966-22b6c50e7fbc}} , equal for all states.
We use this parametrization, extended to the P-wave states,
as a reference description of pp quarkonium production,
including the detailed structure of the indirect production via feed-down decays.
The nuclear suppression effect is then modelled by a minimal modification of the pp reference formula,
introducing a threshold mechanism parametrizable with a “penalty” applied to the binding energy:
{{formula:3def5edd-19fe-4f8e-a0a7-91f6d7221684}} ,
with {{formula:e191d6c7-e5e3-4e43-8c63-ae04a0eab345}} for {{formula:e9aa5ece-f897-4f78-a9c4-d9a9a20682b1}} ,
where {{formula:5bbaa7ab-40b3-483f-a616-5124b09a1c46}} depends on collision energy and centrality
but is identical for all {{formula:b7b2c216-f0c7-4f1a-8e0b-e07836932363}} and {{formula:63709d5d-08de-4b35-be9c-f144ef640752}} states.
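A minimal numerical sketch of this threshold parametrization follows; the power-law exponent `gamma` and penalty `delta` are illustrative assumptions, since the fitted values are not given in this excerpt:

```python
def production_weight(E_b, delta=0.0, gamma=2.0):
    """Transition probability ~ (binding energy - penalty)^gamma.

    States with effective binding energy E_b - delta <= 0 are fully
    suppressed, implementing the threshold mechanism.
    """
    E_eff = E_b - delta
    return E_eff ** gamma if E_eff > 0 else 0.0

def nuclear_suppression(E_b, delta, gamma=2.0):
    """Survival factor in nucleus-nucleus collisions relative to pp."""
    return production_weight(E_b, delta, gamma) / production_weight(E_b, 0.0, gamma)
```

By construction, weakly bound states (small `E_b`) are suppressed first, reproducing the sequential pattern assumed in the model.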
I present a system for verifying partial correctness of deterministic
sequential programs. In contrast to Floyd-Hoare logic {{cite:d99c283918b8b69aa8986be6269be0095fee532b}}, {{cite:ca103acaf73b3de889e9775f717044b223a0e624}}, my system does not
use pre-conditions and post-conditions, but rather
pre-programs and post-programs.
An assertion is essentially a means for defining a set of states:
those for which the assertion evaluates to true.
Hence the usual Hoare triple
{{formula:d151c60b-9132-423e-98f7-e82e7d4eb796}} means that if execution of {{formula:c7d33aca-bed1-462a-84f5-5b5d4fd16f4a}} is started in any of the
states that satisfy assertion {{formula:f14a5444-0dfb-4ced-a930-cb03e3675d94}}, then,
upon termination of {{formula:5ecd3c58-2ae3-4d50-be98-472fd236a8d4}} , the resulting state will be some state that
satisfies assertion {{formula:5a82ab38-397e-4554-be10-76ae59c61aa7}}.
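For a finite state space, the meaning of the Hoare triple recalled above can be checked by brute force. This sketch is ours, not the pre-program/post-program system the paper describes, and it assumes a deterministic, terminating program:

```python
def hoare_triple_holds(pre, program, post, states):
    """Brute-force check of {P} C {Q} over a finite state space.

    `program` is a deterministic, terminating function on states;
    `pre` and `post` are predicates on states.
    """
    return all(post(program(s)) for s in states if pre(s))
```

For example, `{s > 0} s := s + 1 {s > 1}` holds on any integer state space, while weakening the precondition to `s >= 0` breaks it.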
To summarize, we study the main reason for the non-{{formula:57329082-4f23-4b95-b948-7e745238e313}} -ray detection of a sample of {{formula:b65c5f69-9ea3-4135-b231-14a7b1fed717}} -ray quiet FSRQs. We find that the synchrotron flux of {{formula:aa4470e1-304f-4afb-a8cc-6e4c60e558c2}} -ray quiet FSRQs is not significantly different from that of {{formula:d8447e83-7580-48e9-a867-a8612a080c39}} -ray loud FSRQs. This suggests that the difference in the {{formula:49dd4a14-4063-4ed1-834c-d5947fc39b55}} -ray band is more likely caused by the location of the dissipation region. In this work, we focus on which physical parameter has a decisive influence, and we therefore fix the remaining parameters. Of course, it is undeniable that a joint influence of multiple parameters is also a possible explanation. Previous studies {{cite:b49049a6937ec562739737fcec0672f3a28bea80}}, {{cite:493415ba4c7a389e2886edad3f077e7c5bc0582e}}, {{cite:e1b7e0fc51783101cc7bca105c0d4287089de443}} suggest that a smaller Doppler factor is the most likely reason why many blazars are not detected in {{formula:ac32682a-87f2-45e5-9892-80f588cbb66c}} -rays. However, if the electron injection luminosity and magnetic field are not significantly enhanced simultaneously, the synchrotron flux will be significantly reduced, as shown by the purple dotted line in Figure REF. Therefore, the non-{{formula:ccc0fe6c-8cb6-498f-a547-298dcd09e8dc}} -ray detection of {{formula:6dd5dd27-4c6e-4c63-a8ad-5b4923940ddb}} -ray quiet FSRQs is unlikely to be caused by the Doppler factor alone; at the least, one also needs to introduce a larger electron injection luminosity or a stronger magnetic field while ensuring that their synchrotron flux remains comparable to that of the {{formula:f09748de-961c-4540-b960-a49b549eb572}} -ray loud FSRQs.
According to the profound classification of Hawking {{cite:fad913a44d6454f48158090524db57849639b6a7}}, a modeling of reality in theoretical physics is good if it contains few adjustable elements, agrees with and explains existing observations, and makes detailed predictions about future observations. For instance, within our personal selection, in Landau's famous treatment of extended, interacting Fermi systems, the determination of the low-energy excitation spectrum requires the introduction of just one adjustable element {{cite:db44b31dbd865a8c9705b17cfe0f1b99ea239019}}, {{cite:061f8eb8f7209cdb8e481cbe05159dbdd0b026ea}}, the quasiparticle effective mass ({{formula:d9f8c3b2-5f76-4da7-a52c-6431d4295d90}} ).
We want to use Proposition REF for the family
{{formula:d7fd67ad-0bac-4f26-b3f2-64e18cf952f0}}. Since every FNE is an NE (see Lemma REF (i)),
for every {{formula:9d700255-0129-400f-a739-47a05180e4ef}} we apply {{cite:6f03e7606cb80d19e09346aa7c9d357fe5d33686}}
to the family {{formula:590da88a-1f46-4b82-b468-e909d23191ea}} and conclude that the
string operator {{formula:b42dbd28-bfb1-499b-9836-2cbe505370a5}} is an NE.
Thus, from (REF ) and the fact that every NE
with a fixed point is a QNE, the family of operators {{formula:e918e6d5-c732-48ed-99ea-ec63fd4bc93b}}
is a finite family of QNEs. This, together with (REF ),
yields, according to Proposition REF , that
{{formula:9fb731f3-6f26-4228-bc4e-d8dad846fc30}}
Theorem REF shows that if the functions {{formula:ed7b65aa-f577-4065-a7d4-a2347bc24eb5}} have minima and the parameter space is restricted to keep production quantities nonnegative, then the collaboration network in the oligopolistic competition model will have a graph corresponding to the arbitrary degree sequence in the prior section.
As a result, this is a particular application of Theorem REF to a general collaboration game. To prove our theorems, we alter the assumptions on the functions {{formula:ba8a433a-7def-44d2-8158-327593ca96a4}} from the marginal cost (Expression REF ). Theorem REF requires convex functions {{formula:f485098f-dc59-4002-94d3-d4eb71085c77}}, whereas Theorem REF requires {{formula:f445ec19-7171-4330-bb86-e47ad0ccac27}} to be a decreasing and convex function.
Lemma 4.3
Suppose we have an oligopoly consisting of {{formula:6345b577-c1c3-45b4-9eb9-81a2ccd16a4c}} firms in which collaboration is defined by the graph {{formula:d5bfba4a-0890-4f53-ae29-484d5519d6a5}} and the profit function (allocation rule) for Firm {{formula:acd56a24-a817-4cb4-b8e9-9c99a8bc65c8}} in that oligopoly is given by:
{{formula:95b90855-4e0f-4e5a-9a2c-452fac9b93dd}}
then the quantity produced for firm {{formula:4268e0ed-13a0-469a-b862-5c5afa23aab2}} is:
{{formula:3a3fd9b8-a3d2-4f45-b580-b307e3a4821e}}
From {{cite:65f35eee6c3e7eb84cc762f7de03fe252f7f1b63}}, for any oligopoly with profit function of the form:
{{formula:c6d4012e-d2d2-490f-bfb9-456bc99aff6d}}
The resulting Cournot equilibrium point on quantities is:
{{formula:94149f78-fdf5-43d1-ac90-174a7e70ddf8}}
In our case, we have:
{{formula:ce0688c5-2fb8-411f-beae-68c4ec19510c}}
Substituting these definitions into Expression (REF ) yields Expression (REF ). This completes the proof.
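Since the paper's expressions are elided in this excerpt, the sketch below instantiates the lemma's closed-form equilibrium for the textbook linear case P = a - bQ with constant marginal costs, plus an assumed linear link between collaboration degree and cost (both of which are our stand-ins, not the paper's exact functional forms):

```python
import numpy as np

def cournot_equilibrium(a, b, costs):
    """Closed-form Cournot quantities for linear inverse demand P = a - b*Q:

        q_i = (a - n*c_i + sum_{j != i} c_j) / (b * (n + 1))
    """
    c = np.asarray(costs, dtype=float)
    n = len(c)
    return (a - n * c + (c.sum() - c)) / (b * (n + 1))

def collaboration_costs(degrees, gamma0, nu):
    """Assumed rule: marginal cost falls linearly with each firm's links."""
    return [gamma0 - nu * d for d in degrees]
```

In the symmetric three-firm case with a = 10, b = 1, and unit costs, each firm produces 9/4; a better-connected (lower-cost) firm produces more than its rivals, which is the mechanism the collaboration graph feeds into.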
Remark 4.4 It is worth noting that when for each firm {{formula:189dd91c-5990-43cd-ae6d-8f8296f79cb4}} , {{formula:e481e9d3-ae2c-49e7-a519-6669710f83c9}} then the cost function (REF ) and induced equilibrium quantity (REF ) is retrieved from Goyal and Joshi.
Corollary 4.5
Suppose that {{formula:929a6b1d-35d6-4dab-8475-958f9e18dfc5}} is a convex function that has a minimum at 0. Further, suppose {{formula:42a07bb1-8f52-4169-ad42-918fefb85063}} where {{formula:6dcd5b36-a9d2-4459-8e81-c146ce8f6dae}} .
If the parameters {{formula:7f6aa0db-e791-47c3-90db-6053edd6c7b4}} and {{formula:ce7c2ac7-dd2b-44f5-8b35-89d8d6a31bb8}} and the function {{formula:1e7395cf-d83c-4006-84a6-8e838678770d}} are such that:
{{formula:3b0262a4-96a1-4d64-ab6f-e72c9cb3fdda}}
and {{formula:b9b1500c-2db9-4441-9742-dabcfb17bc31}} , then the Cournot equilibrium point quantities (REF ) are nonnegative for all firms and for all collaboration graphs and the two inequalities are true:
{{formula:a532db4c-e4ff-4754-915c-9bd9eadd6c19}}
Since {{formula:4da85af2-5a45-4851-91e4-a1ec43735854}} and {{formula:f795c867-327c-4d3f-96e0-33846c2b60ba}} is convex with a minimum at 0, both {{formula:4cddb3d6-94cf-42ad-a0cc-56539af40230}} and {{formula:5f5ed502-97d7-4720-968b-1b58e21c56b0}} are non-negative. Hence, (REF ) and () imply that {{formula:2ccf7cbe-5398-4d37-a6ac-6d234d50eb1b}} is non-negative, so it suffices to show only that (REF ) and () are implied by (REF ).
For all {{formula:b216725d-2e8c-4cd2-a6aa-ef729ec24f8c}} , function {{formula:138ef8d6-4380-477f-bc6c-a1c3c92e6690}} is a convex function of the degree of node {{formula:d8852ef1-16bf-4e1b-8a16-4eafd73716d2}} in the graph {{formula:4c3a6e56-4d99-42e1-8756-799871429b90}} ; the degree of node {{formula:4cf16d6c-6a80-460b-a882-6d4f0f6777bd}} must take an integer value between 0 and {{formula:8f114536-1162-4014-a850-f7640970ee42}} , which due to the convexity of {{formula:6dd7a883-f089-4331-b624-af7c37088ae8}} and the fact that {{formula:55d40a43-14d0-41af-bb31-2d552f269c6b}} implies that the maximum of {{formula:1dbf0752-4fc0-4a64-962a-3c9c69811285}} is equivalent to {{formula:fd23dc88-3e64-4212-a1ca-4444ba3beb6c}} . That is,
{{formula:de36e130-6cca-4700-b27c-2bfb035000d6}}
This means that (REF ) implies:
{{formula:15021ee3-4f19-4d64-8dfb-3f07b4459b93}}
Since, all {{formula:94b4c7dc-9a9b-46a1-8b73-48fa6cb7756f}} , we may add {{formula:17b2d87d-6605-481b-90f1-205a51f5ff7f}} to the left side of (REF ) without harming the inequality, implying:
{{formula:7b22c545-4c7c-4c45-88ec-3bf6f767d4ee}}
Now we divide by {{formula:b95de03c-3a56-4b0c-886c-0d63fe4493de}} :
{{formula:21362912-64d7-462b-9bf8-48df63ff9407}}
The term on the left of (REF ) is {{formula:bb41e0ad-a3d7-4386-baec-51f44900112f}} :
{{formula:945613b6-4bb0-4347-8b37-a5dbf3c86bba}}
Multiply through by two and note:
{{formula:867a4a8f-dc9e-4a8e-aa90-1a30c1b2f867}}
Now (REF ) and () immediately follow.
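The endpoint argument used in the proof — a convex function on an integer interval attains its maximum at an endpoint — can be spot-checked numerically with a small helper (ours, for illustration only):

```python
def convex_max_at_endpoints(f, n):
    """Check that f on {0, ..., n-1} attains its maximum at an endpoint."""
    vals = [f(d) for d in range(n)]
    return max(vals) == max(f(0), f(n - 1))
```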
Remark 4.6 This essentially means that the steeper the function {{formula:5038e626-327a-4fa4-80ee-86b272d35ef8}} is around zero and on the interval {{formula:c4586099-8c1e-4e3b-b646-812be2c66e70}}, the greater the quantity {{formula:c0054ed5-f6b3-46be-bc47-8af19f201261}} needed to ensure the theorem proved later in this section. It is worth pointing out that this bound may often not be tight (i.e., the inequalities may hold true and production quantities may be positive even when the condition is not met).
Theorem 4.7
Suppose that {{formula:c5ab9787-58be-4cd2-a6fb-88b30121a93e}} is a convex function that has a minimum at 0. Further, suppose {{formula:cd184fd8-973c-43af-9244-1b1b044dbb3c}} . Define the change in {{formula:d239fca0-9804-47f9-9839-7b8b0f7466fe}} as {{formula:b4ec5834-588e-41bc-83bd-6bdfe1b19dc9}} and {{formula:10de9f83-6f60-4f51-b149-56c40d8d7dc8}} . Suppose {{formula:4bcf6979-a6a7-47c2-b131-d3e4f02c03e5}} firms compete in an oligopoly with market demand {{formula:59d833e5-97de-4c83-80a1-d4d9ebba23e6}} and marginal costs {{formula:a1917547-b064-40ea-b3cb-d35f5b6870e9}} . If the parameters {{formula:fe1c1f00-08da-4f95-ac14-566cdb0f7b18}} and {{formula:cdb569e1-889f-4389-8b4c-c87ff77e0f36}} and the function {{formula:594766bd-3779-4f2a-8166-0f03d1a934bd}} obey condition (REF ), then the equivalence class of graphs {{formula:eae04408-06ee-4537-8345-66090d6787f4}} such that {{formula:9a36d645-40ff-4427-8313-c8fc694677cf}} is an equivalence class of stable collaboration graphs.
Let {{formula:ad95bc68-beec-4391-856c-e6c21273384d}} be a graph {{formula:0b2bab12-395a-4794-8b0c-dc53998ffdbb}} such that {{formula:18739628-b20a-4142-a8bd-0d841f133d17}} for all firms {{formula:a8ca13c8-d9ff-44c4-bc3d-bd3c6519b227}} . Consider a firm {{formula:fa1818b4-1efe-4ff0-8f46-9e0dd1e8c9d1}} who may consider dropping its link with node {{formula:f87a6d87-1104-49b6-a0f1-281ce48dd096}} . If node {{formula:56a96a53-435c-4afc-967d-aca1b6e79c27}} drops its link with node {{formula:202d4629-dc40-4923-8a5e-277ab36117ad}} leading to graph {{formula:4ec0ff99-d15c-41bd-9bc4-0fbda9ba1706}} , then {{formula:8be3607f-3c0b-4db9-b76d-59c801abaf1a}} and {{formula:25680ad5-cac1-48f8-8d64-83d5facb04ec}} , while {{formula:36e96970-c64e-4f45-b864-3c90be6e90ab}} for {{formula:0c379ec0-68a5-4692-9543-b3f2963ceb66}} . Using Lemma REF
{{formula:e48f1118-63fc-42bb-85ea-46a96ed9c433}}
Calculate:
{{formula:cfe2dcc3-10fc-44e2-8a63-5adf36a704b5}}
It then follows that
{{formula:0a85fc2a-fb0a-4c60-89d6-1bc7d2f00a5b}}
Now, we can calculate {{formula:9edf54e8-a520-4028-a32a-692d7a61af65}} in terms of {{formula:ea6165fc-3f66-4253-b365-736f3007dede}} :
{{formula:690924a8-7765-446a-b369-76aa26b3d877}}
Since {{formula:f189e9ef-f87c-4adf-8089-275daeb28622}} this implies that {{formula:2f2db0bb-8297-41d8-bebf-5470cc5c2e28}} leading to (REF ) and then () and () through algebraic manipulation. Finally, by the assumptions of the theorem and condition (REF ) each of the quantities {{formula:b4d36a3a-f9fd-493f-aa94-05afbb11404c}} , {{formula:5c5c453d-a1a9-48dc-b219-73853c6f39ad}} , and {{formula:0af9422f-9e16-4e82-9cc1-9bb1f2242fde}} are nonnegative implying ().
{{formula:3dba1cb5-13e0-4706-9d3c-ecb29382d1f0}}
This implies that if firm {{formula:36480e0c-647d-4671-bfea-c82e716f0e7d}} attempts to drop link {{formula:bac03639-8e41-4a55-9dd9-31de1c78f7ff}} , then {{formula:90139478-1f37-498e-bd85-0e6d3aab3173}} and thus firm {{formula:545a6f4b-f89e-4f5d-9dfd-fa9bbd413064}} decreases its profit. The same will be true for firm {{formula:ead3436e-8ea6-4050-94f5-e2c3de0c91b2}} . Hence, no firm has an incentive to drop a link from graph {{formula:f821c491-785c-4168-a66b-49e7120f87d2}} .
Now, we will consider the case where firm {{formula:e50fc1d7-3acc-46ba-b512-203252479ef5}} attempts to add a link to the graph {{formula:a1aa6da3-bbc7-4608-a231-242a4eff452e}} , giving {{formula:e5ba1e27-bc2d-4705-8d05-a3cfa49a0bd3}} under the assumption that the link {{formula:62baf52a-7747-4a2b-9368-dda9ec0f3c6d}} does not exist in graph {{formula:39fabe1c-9847-42f5-adc4-320f29a79c89}} . This analysis will follow closely the analysis for {{formula:6f942887-b5f8-4a02-97aa-465fbfa69faa}} . First note that {{formula:d3e018ad-9b1b-45b8-8d42-adf82464b616}} for all firms {{formula:49c1e33c-ad6a-479c-83d3-92ac13fd6fec}} and {{formula:c86ab2ab-bd5d-4251-8558-324f833f2823}} and {{formula:9743482c-2d8a-4393-a873-356bcd7359ee}} , while {{formula:aa9b308c-5e25-4d7e-9482-ebdd701242bc}} for {{formula:5adae961-a9fd-4406-9a3c-70aaeb236d67}} . We define {{formula:c3a9f505-da50-4b21-9459-4e3414f6efe4}} as {{formula:2b5129f7-c814-42f8-952f-f7f62df77113}} ; note the subtle difference from the definition of {{formula:5bc3b7bd-d161-461a-bee4-0b15e3b7a883}} . Again using Lemma REF , we calculate the production quantity for each node in graph {{formula:705ce787-a147-47c8-b83d-655d54e3b903}} :
{{formula:4074a199-2fe6-4221-bbb6-b3fa51f196fc}}
We can then calculate the corresponding total production quantity {{formula:b593cba8-5633-410d-a254-14322f65c986}} , the market price {{formula:7aec139d-b9d2-4619-a4c5-3d67a811f6b8}} and marginal costs for each player for the graph {{formula:fd25d444-e4b6-40e0-a070-021da5542a93}} :
{{formula:00e349c9-125c-4f65-a4a8-6a3bd3a67cb5}}
Now, we can calculate {{formula:236c925b-ad7a-4e0a-bb85-a54fd04e0c0f}} in terms of {{formula:72292c4a-42cb-4eaa-9a52-f9ad45f377b6}} :
{{formula:13959b26-85c8-48b3-99a6-9f330d13e645}}
Since {{formula:aad72709-94c2-48bd-80f6-656f4a338baf}} , this implies that {{formula:0ca90ec5-b33d-4501-8e32-9e243629488b}} , leading to (REF ) and then () and () through algebraic manipulation. Finally, by the assumptions of the theorem and condition (REF ), each of the quantities {{formula:0e505a50-5350-4142-aad4-a049b5a260e2}} , {{formula:9baca4a7-1705-4cae-b2eb-a1cb87b7238c}} , and {{formula:2508407b-cbb9-483e-ac76-205d9b64b29d}} is positive, implying ().
{{formula:5eda12b4-0dcb-4e14-878a-4553e8a4ee9e}}
This implies that if firm {{formula:f6e26173-1811-4890-baa3-344af514f095}} attempts to add a link {{formula:dea2301a-2a77-44e2-b0f0-6118dca8fa2d}} , then {{formula:42b46da8-6f74-4830-abcf-be8a2ee97025}} and the firm decreases its profit. The same will be true for firm {{formula:e6f9f7b7-50f7-427b-8b00-5803ec70ea27}} . Hence, no firm has an incentive to add a link to graph {{formula:94c2da2c-ff9d-4b61-8824-4236751c1998}} . Since no firm has an incentive to add or drop a link to graph {{formula:98ddddf9-8c2e-4dc8-97be-1b2a451dbe6b}} , it is stable. This completes the proof.
Example 4.8
We present a numerical example of Theorem REF . Let {{formula:fc97db42-c454-4dff-b655-cd4b5252f7e6}} firms compete in an oligopoly with inverse demand function {{formula:effa5779-691a-468d-bae9-dd1ff19051ae}} , fixed cost {{formula:32b905f5-9d02-4a27-82df-666bf33f2cd5}} , and {{formula:9bd90ede-1739-4b63-9a21-d862aa1a5ec6}} where {{formula:d698c084-8e3e-4c25-9d1d-a7a34c6f2091}} . Each firm has the shifted function {{formula:ae42ae64-54e1-48f5-b422-3e8f73cbec0f}} where {{formula:681b6ddb-0863-40d3-96cf-5b9f023e03ad}} . We want to test the stability of a graph {{formula:b1e6a29f-3e81-42fe-81b8-92bfa2f3245b}} with {{formula:632614b9-19a6-4ad6-888f-3f2b54db6317}} and {{formula:83b8d12d-5a17-4bee-af83-a8299072c87c}} for each node {{formula:88ecbe92-dc96-4740-810b-ed1c90b0f9dc}} . In order to apply Theorem (REF ), we must ensure condition (REF ) is met. We calculate:
{{formula:031afa17-3d12-4baa-8458-a2ee93d240a1}}
Hence,
{{formula:b2eb5ed2-7f20-4d85-9e21-239c33632421}}
{{formula:eaf14ee1-735e-4bc3-8f74-ce431a84755a}}
Two stable isomorphic graphs, shown in Figure REF , have a degree sequence equivalent to {{formula:326e4edc-e5bb-42a1-a737-e9c2d77db3ea}} .
{{figure:73cde78f-1e11-4ccb-be83-ac1f742313e5}}However, for the given parameters, there are 33 stable graphs, of which only the two shown have degree sequence equal to {{formula:601081bc-bf18-460e-b935-f439fa8f9879}} . These graphs were computed using Maple and are shown in Figure REF .
{{figure:ef1c0fe8-799f-462e-996f-dc4d8dd5b0e8}}It is interesting to investigate the other stable networks with degree sequences different from {{formula:75dcd8a9-437a-4222-bd42-4b2686758b25}} . The graph in Row 7, Column 2 of Figure REF shows a stable configuration where both nodes 3 and 4 would prefer one more link, but they are already linked together and no other node requires an additional link. Hence, the network is stable because each node is either satisfied or no pair of nodes can bilaterally improve themselves through the addition or removal of a link. It is interesting to note that in the graph shown, if nodes 1 and 5 were to give up their link with one another and instead link to nodes 4 and 3, respectively, then each node would have minimized its marginal cost. However, nodes 1 and 5 would not benefit from such a trade, even though it would help nodes 3 and 4: they would indirectly hurt themselves via the decreased market price caused by the additional quantity produced by nodes 3 and 4. Nonetheless, this analysis brings out the fact that the manner in which nodes link to one another, and the manner in which stability is analyzed, greatly affect which networks are deemed stable.
Example 4.9 For small numbers of firms {{formula:6e73d50a-4d35-42d5-b0c2-b54c642bd009}} , generating the set of stable graphs takes seconds (even in an interpreted language like Maple).
Consider the case of a firm (in this case Firm 3) that determines that collaboration is desired, but recognizes there may be a sunk cost (not included in the model) associated with initializing such a collaboration. For example, the time taken to establish industry connections will cost in terms of human labor. Assuming Firm 3 is a selfish profit maximizer and assumes that all other firms are selfish profit maximizers who will play to a stable configuration, the analysis of the potentially stable graphs will inform Firm 3 of the potential payoffs it might receive. If the network is already in a stable configuration, then, since there are three stable configurations in which Firm 3 is not connected to any other player, it may not be worthwhile even to explore collaboration. On the other hand, if there is no collaboration (an unstable condition), then Firm 3 may hope to steer the network evolution and will evaluate its various payoffs in each possible stable configuration. The possible payoffs are shown in Figure REF :
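The brute-force computation mentioned above (enumerate all graphs on n nodes, solve the Cournot equilibrium on each, and test pairwise stability) can be sketched in Python. The inverse demand P = a - Q and the degree-dependent marginal cost c_i = g0 - g*deg(i) used below are illustrative assumptions, not the paper's actual functional forms:

```python
from itertools import combinations

def cournot_profits(adj, n, a=100.0, g0=10.0, g=1.0):
    """Cournot profits under linear inverse demand P = a - Q and an
    assumed degree-dependent marginal cost c_i = g0 - g*deg(i)."""
    deg = [sum(row) for row in adj]
    c = [g0 - g * d for d in deg]
    s = sum(c)
    # Cournot equilibrium with P = a - Q: q_i = (a + sum_j c_j - (n+1)*c_i)/(n+1)
    q = [(a + s - (n + 1) * ci) / (n + 1) for ci in c]
    # with unit demand slope, P - c_i = q_i, so profit_i = q_i^2
    return [qi * qi for qi in q]

def pairwise_stable(adj, n):
    """Pairwise stability: no firm gains by cutting one of its links, and
    no unlinked pair can add a link so that both weakly gain and at least
    one strictly gains."""
    base = cournot_profits(adj, n)
    for i, j in combinations(range(n), 2):
        adj[i][j] ^= 1; adj[j][i] ^= 1          # toggle link ij
        alt = cournot_profits(adj, n)
        adj[i][j] ^= 1; adj[j][i] ^= 1          # restore
        if adj[i][j]:                            # existing link: deletion check
            if alt[i] > base[i] or alt[j] > base[j]:
                return False
        else:                                    # absent link: addition check
            di, dj = alt[i] - base[i], alt[j] - base[j]
            if min(di, dj) >= 0 and max(di, dj) > 0:
                return False
    return True

def stable_graphs(n):
    """Enumerate all 2^(n choose 2) graphs and keep the pairwise-stable ones."""
    edges = list(combinations(range(n), 2))
    found = []
    for mask in range(1 << len(edges)):
        adj = [[0] * n for _ in range(n)]
        for b, (i, j) in enumerate(edges):
            if mask >> b & 1:
                adj[i][j] = adj[j][i] = 1
        if pairwise_stable(adj, n):
            found.append(adj)
    return found
```

With these particular parameters collaboration always lowers cost enough to pay for itself, so the complete graph is pairwise stable while the empty graph is not; the paper's nonlinear cost-reduction functions are what produce richer stable sets such as the 33 graphs discussed above.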
{{figure:b2377f5a-6da5-473e-8995-5eb2b1d372dd}}Note that there is substantial variation in the payoff that Player 3 may receive. In the empty graph, Player 3 receives a payoff of {{formula:3a176b09-3393-48e8-912a-16c2b2c82ab0}} . Notice that some collaborative scenarios disadvantage Player 3 (3 of the 33), while most improve Player 3's payoff.
Remark 4.10 Goyal and Joshi show that in a market with homogeneous products and under quantity competition, the uniquely stable network is the complete network for specific cost functions under some parameter restrictions {{cite:e835aec0a59c7991a46106c553263b0ecf1dd35b}}. Theorem REF extends the result in Goyal and Joshi {{cite:e835aec0a59c7991a46106c553263b0ecf1dd35b}} by using nonlinear costs and weakening the parameter restrictions necessary for the model to be feasible.
Theorem 4.11
Suppose the marginal cost for firm {{formula:69ccc5c5-6272-4d4b-9a24-3fd51b6054d2}} on graph {{formula:c0717226-346d-435f-8a09-612643dc0f10}} is {{formula:752c3346-5eec-445b-aace-8d9e3dc79667}} . Define {{formula:58610c04-91e4-4789-bcbb-9ce6ff45bfc5}} . Further suppose each firm reduces its marginal cost for each additional collaboration (Condition (1)), the collaboration cost reduction function {{formula:c9b3e643-61d5-4581-9a70-f5a344a680ac}} is convex (Condition (2)), collaboration cost reduction function {{formula:fa47acb1-0f13-48c3-9763-5aa11ae416de}} is positive (Condition (3)), the production quantity is positive (Condition (4)), and the collaboration cost reduction function {{formula:5c83ca34-b9db-4a43-a4b1-707fe2ac25d3}} is not too steep (Condition (5)):
{{formula:dd7a5fad-605d-45cf-8b42-ae7006b72e63}} implies {{formula:17ce8a86-9b88-476f-acf9-74f33f7c1af1}} ;
{{formula:c5fdfda8-7635-49de-a655-168b3bfe387e}} is convex;
{{formula:9f936388-0518-45b2-b146-5e2bbaadbedf}} is positive;
{{formula:26aedf23-6bd6-470d-bfe6-98c41f92ba2d}} ; and
{{formula:716dac2e-4aa9-479c-b0a1-1be8f56bac4a}} for all {{formula:20b0750a-6d09-440e-8832-d11c4cff9676}}
Then the complete network {{formula:ef3ec4e1-0298-4cc1-9670-af1152daca19}} is a stable graph.
The production quantity and profit can be calculated as before in (REF ) and (REF ), respectively, by using the fact that {{formula:8c117387-891a-4dd8-bea7-4bf7dc57cb42}} for all {{formula:c5672789-7013-4cdf-ab98-c338bca9e2d2}} .
{{formula:ba5bf9bd-af25-4aa5-819d-fbc4c9f314cf}}
Observe through algebraic manipulation that {{formula:53da1324-d506-48fd-9c48-c439df41e9f2}} , which implies that {{formula:8c651b10-e53a-4634-a36e-17e0188c23e4}} .
For a complete network {{formula:f715b0f2-23bd-46c4-9496-bc6cb98c37e4}} , {{formula:8e7886f0-351c-4ee1-9cd9-cb2c68858bd2}} for all {{formula:db4728c7-15c1-4905-86b8-e3e893ef5dbe}} . Given {{formula:2cc615d6-63ff-48fb-95e5-489036913b35}} :
{{formula:d3b1fc13-0fda-4a7c-ba5e-6ad31107013b}}
Now for a complete network missing a link {{formula:e49bc254-1175-4d0d-aaf3-0d0fcebfdc57}} , denoted as {{formula:0d60e982-e6be-4abb-abe5-81dc39906091}} , for nodes {{formula:29e4277f-7b5b-40a8-8804-d8e1ca00b835}} and {{formula:f6636ac2-17d2-4a9e-867b-3de106e1187e}} , {{formula:a6e43b22-4974-4c6b-8cbd-f4e603de60d4}} but all other nodes {{formula:f8fa0ca3-6436-42e1-a404-ad7c2be6c120}} have {{formula:c1b0aa25-6dbd-4296-a45e-699fcd7421da}} . Thus:
{{formula:e77752a5-3c42-43a3-bf93-8fac84432600}}
It follows that:
{{formula:5b99ad2f-2ad5-4af3-84ee-c27b0a5f8288}}
Since {{formula:b471a18b-4f75-4c83-bdc8-1363d0587ca3}} , Condition (1) implies {{formula:845066e7-7a09-4791-ba5d-fab522f3c470}} which implies {{formula:d8b8f742-52a4-452a-8149-c43ff52e37fd}} . Hence, {{formula:aa7c210e-6970-422f-9df9-6540a4e8d748}} .
By Condition (4), {{formula:c7cc0a3b-5fc5-48a9-aa65-5e49c2108ab3}} , which implies {{formula:eb9edfa3-ba90-43f4-b3fa-2ebc25e95378}} by Corollary REF using {{formula:2f733ac4-b4d3-47fd-8361-509bf4febe18}} . Further, {{formula:c34bfdae-f7d8-4553-b127-4d1bfc935369}} , which, combined with {{formula:287fa5b2-fdb8-4595-a488-aeaea812b8c8}} , implies {{formula:6f44155f-5cb4-455b-a405-c9425164f959}} . This implies that any firm {{formula:972f4f14-43cd-4b9f-9a27-b02dfb701dac}} will decrease its profit by dropping link {{formula:f461189d-4b39-4a1d-9791-0a63b8998add}} ; hence the graph {{formula:d5f11fdf-1f7d-4dfa-a0a3-f77c75a101d0}} is stable. To show that it is the only stable graph, suppose there is another graph {{formula:e509ab2b-6cb9-4b63-8058-e13132c251ea}} that is also stable. This implies that there exists a pair of firms {{formula:149dd83d-28f5-4ace-8aef-75bab51634f7}} such that {{formula:a6a620c0-6053-4ac6-a292-55b5a9bcf850}} . Let us consider the graph {{formula:463a0ff0-b34b-4dc9-bdeb-8ca463eab362}} relative to the graph {{formula:810b493c-069e-48aa-8d27-e310a8b50dc5}} .
{{formula:74bb2a91-ed9f-4beb-b1a3-250c0b630e22}}
and thus that:
{{formula:affd7e1a-c6e9-48aa-964f-d2a6d4cff80e}}
By Condition (5), {{formula:e9112d61-2cdc-4ba8-971c-6fa489b4e0d7}} and hence {{formula:9ea7423f-8015-4f1f-b463-c61c5b245556}} . Similarly, {{formula:ddb7bb62-6c23-439a-b538-4b3d001d9472}} , which implies that both nodes {{formula:f94beb3f-24fa-4a40-8805-4ec40822bec8}} and {{formula:47d9090a-4440-48fc-8991-fba5322eb21d}} may increase their profit by linking together and so graph {{formula:53690127-ecc3-447f-880f-db978f0c30e9}} is not stable. This is a contradiction and so {{formula:939a18c7-b017-4b49-b5fb-ff38f0d11e77}} is the only stable graph. This completes the proof.
Example 4.12
We present a numerical example of Theorem REF . Let {{formula:1947e8e6-98b9-48cf-9298-8d8ee36839f2}} firms compete in an oligopoly with demand function {{formula:2b217453-6199-43eb-a526-2425e78aceaf}} , fixed cost {{formula:de991739-8462-4493-9c16-a44f101700ed}} , and {{formula:15ff38ad-79a0-45ef-8852-f021dace49b3}} . We want to test the stability of the complete graph {{formula:c8a6f21f-21e4-4bc3-bc43-ccf66f14d906}} . Conditions (1)-(4) are easy to check:
{{formula:12629da0-440c-4961-8bc8-63a02a7e0f12}} , which implies {{formula:9f111d84-240b-421e-a3ca-cf24c58d329d}} is decreasing;
{{formula:7afe5dc7-f587-455f-86e9-fbabde88c038}} , which implies that {{formula:a14ede62-9dfa-432e-be04-569474f053ae}} is convex;
{{formula:ffcddc3e-2e7c-4203-853b-406901906430}} , which implies {{formula:4978156a-e604-4616-9cd9-aaee7db7ed17}} is positive;
{{formula:7bda2861-b174-492d-8834-0d9bd49bf05b}}
Condition (5) can be verified with the table below:
{{table:a0c06df4-5b27-41f3-a133-fbb8a1385aa4}}
All conditions of Theorem REF are met. The stability of the complete graph was also verified exhaustively using Matlab.
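Conditions (1)-(3) of the theorem can also be screened numerically for any candidate cost-reduction function. The function f(k) = 10/(k+1) below is a hypothetical example, not the one used in this paper:

```python
def check_conditions(f, kmax):
    """Screen a candidate cost-reduction function f on degrees 0..kmax:
    Condition (1): f is decreasing (cost falls with each collaboration),
    Condition (2): f is (discretely) convex,
    Condition (3): f is positive."""
    v = [f(k) for k in range(kmax + 1)]
    decreasing = all(v[k + 1] < v[k] for k in range(kmax))
    convex = all(v[k + 2] - 2 * v[k + 1] + v[k] >= 0 for k in range(kmax - 1))
    positive = all(x > 0 for x in v)
    return decreasing, convex, positive

# hypothetical example: f(k) = 10/(k+1) satisfies Conditions (1)-(3)
print(check_conditions(lambda k: 10.0 / (k + 1), 8))   # (True, True, True)
```

A linear decreasing function, as in Goyal and Joshi's setting, also passes this screen, since its discrete second difference is zero.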
Remark 4.13 The functions described in Theorem REF generalize the result of Goyal and Joshi {{cite:2eefbfd46754daaa9bcf1cb34353b50d4a9208dd}}, since it is clear that a linear function of the vertex degrees (as assumed in {{cite:2eefbfd46754daaa9bcf1cb34353b50d4a9208dd}}) will satisfy the given criteria. Furthermore, Theorem REF is a further generalization, since we define conditions under which a collaboration graph with an arbitrary degree distribution will be stable. This is not possible with Goyal and Joshi's model.
Conclusions
In this paper we showed that networks with specific structural properties may form as a result of game theoretic interactions. Specifically, we showed a simple way of constructing a game whose pairwise stable solutions are those graphs with a given degree distribution. As a particular application of network formation via game theoretic principles, we investigated the formation of collaboration networks in oligopolistic competition. We extended the model of Goyal and Joshi {{cite:e835aec0a59c7991a46106c553263b0ecf1dd35b}} with a nonlinear cost function that, under particular conditions, also admits stable collaboration graphs with an arbitrary degree sequence. One limitation of this approach is that we cannot specify an exact graph structure. The degree distribution specification generates a class of stable graphs rather than a single graph. We do not view this as a problem, since it can help explain variation in observed situations with the same parameters.
Potential Relation to Network Science
Special structure in networks has been considered in several recent papers {{cite:abe63adbc02ccfe80d67d83ff221787b12263da6}}, {{cite:626c7ee25456efd6ec36b3834fe3a42a2972e04a}}, {{cite:76968e76ed3b0fb361b96054944c7a3739e720b6}}, {{cite:bbbdba50a2c0303e63e0dd377121571d309dda92}}, cutting across various subjects including social networks, information networks, and biological networks, as well as physical networks such as power grids and road networks. A review of networks in these disciplines may be found in various articles (see {{cite:626c7ee25456efd6ec36b3834fe3a42a2972e04a}}, {{cite:76968e76ed3b0fb361b96054944c7a3739e720b6}}, {{cite:bbbdba50a2c0303e63e0dd377121571d309dda92}} and the references therein).
The network science literature was largely inspired by the observation of macroscopic structural properties (e.g., small world, power law degree distribution) of networks that occurred in several distinct network types (e.g., social, information, biological networks), which have diverse microscopic properties. The network science literature has largely been devoted to finding the mechanisms by which networks form and/or evolve in order to generate the structural properties that are observed. The momentum in this direction has largely been driven by the statistical physics community {{cite:abe63adbc02ccfe80d67d83ff221787b12263da6}}, {{cite:626c7ee25456efd6ec36b3834fe3a42a2972e04a}}, {{cite:76968e76ed3b0fb361b96054944c7a3739e720b6}}, who argue that the phenomena of complex networks (e.g., power laws) may be explained by laws that reach across all complex networks because they are phenomena that are inherent to the complexity of the networks.
Alternatively, there has been recent interest in the structural properties of networks that have been designed via optimization {{cite:eb464eb6027b0480bee6f3144c9e0ee617b23823}}, {{cite:55a5cc055602addd151ad85caf20914e0c26767e}}, {{cite:86b382005af1fd4912184085e6e3be847b3e898d}}. This perspective is motivated largely by the fact that many networks (in the abstract sense) are models for physical networks that are designed by humans to function with particular objectives (or even designed by nature to serve an evolutionary purpose). These networks are distinctly different from social networks, which are abstract models that describe interactions between actors (e.g., people talking, writing scientific papers, or dating). Such designed networks (e.g., power grids, communication networks) are not designed through central coordination, but arise as a result of the objectives of multiple independent actors. As a result, these networks often have structural properties that are not explained by models that do not account for these functional characteristics {{cite:3572854b389f26c744d7b687ff25fd9ecd7ceaaa}}.
The work presented in this paper is motivated by this more recent line of work. Our perspective is that some networks are formed as a result of the interacting strategies of multiple players, rather than by a universal rule for network formation. This is consistent with certain recent observations on (e.g.) power law networks {{cite:e919588d2a7257179d2bf269b4ff6fa5277e7610}}. This occurs in collaboration networks: the formation of the collaboration network results from the strategic decisions of the players, yet the network will still have particular structural properties that are explained via game theoretic principles. In this paper, we illustrated that a property important to the Network Science community, namely the degree distribution, can emerge as a result of a designed game mechanism. As an explanatory model, we suggest that certain networks with interesting degree distributions form as a result of strategic player interactions in which each node degree is embedded (or hidden) in the objective function of the node player. The formulation in this paper is simply an explicit representation. In {{cite:4398bc01205ed93640c44bbf71c356bb9ee094cd}} we illustrate a class of games that generate specific degree distributions in which the degree is not an explicit part of the objective functions of the players.
Future Directions
There are several directions this research could take. In {{cite:4398bc01205ed93640c44bbf71c356bb9ee094cd}} we begin the investigation of graph formation games with specific link bias and identify a game theoretic mechanism that does not explicitly encode the degree distribution of interest into the players' objective functions and yet is still capable of having graphs with arbitrary degree distributions as stable solutions. This work is extended in {{cite:c6b4787101bbb48ba921dd6c5df6228c471e6bfb}}. We also investigate the graph formation game in the presence of spatial oligopolies in {{cite:d35209b23e087784ba8d051dc74644123b60c5d0}}. Clearly, investigating the mechanism design problem for additional graph properties, such as the clustering coefficient, is of interest. Recent work on the generation of graphs with specific clustering coefficients {{cite:30f7b20edc49f5761b6c43b3ef4e83a29c8a0688}} may provide insight into game theoretic mechanisms for solving the same problem.
Another equally interesting future direction lies in the extraction of game theoretic mechanisms from real-world data. Solving such a problem may take two forms. In the first form, small real world human networks are evaluated using traditional psycho-social interviewing techniques in an attempt to identify objective functions or constraints consistent with the model presented in this paper and theoretical extensions (e.g., {{cite:c6b4787101bbb48ba921dd6c5df6228c471e6bfb}}). This work is planned in collaboration with social psychologists at the authors' parent institution. A second form of objective function inference would be completely observational in which, given a network that appeared to be stable, one would attempt to infer the set of potential objective functions (and player constraints {{cite:4398bc01205ed93640c44bbf71c356bb9ee094cd}}) that would generate the given stable graph. Observation of network evolution would also be useful. In this case, statistical techniques would have to be applied to the determination of the objective function structure.
A final direction of investigation lies in the evaluation of dynamic network formation. That is, investigating this problem in a dynamic context in which the objective functions of players may change. We propose that this problem can be analyzed in discrete time using techniques from competitive Markov decision theory {{cite:41095138204902c203ca4d05217562fc33a13b44}}. However, the combinatorial properties of the graph structures (for all but the simplest networks) make a brute force analysis approach intractable. Thus a more sophisticated analysis technique may be necessary to obtain any useful results.
| r | 659c55c5daebc90de94a0db166646f84 |
where the stellar luminosity is given in solar units, and where {{formula:01ce938a-07ee-4741-8782-fb5f3b5a51a8}} and {{formula:04e24934-ce33-429c-93ce-7537ecb19ba2}} {{cite:e4cda58551a9e6879cb1b9afb71eab6be9e18cb5}}; the (g-r) colors were obtained using the {{formula:a045fcba-b479-4bb0-ac55-08d4bdc86405}} and {{formula:61cc1f2f-fe7b-470d-a8be-d39ff435b050}} images inside the r{{formula:a484a524-6a43-421a-bc6d-0f25336e1297}} . From the values listed in Table REF , it is clear that most of the TDGs in our sample have stellar masses lower than {{formula:ea73814f-dc2e-49d9-8d3f-e627c6e27c89}} , with a median value of {{formula:5761c285-5009-4284-a068-00ec1aa6ffc2}} , which is very similar to the value of {{formula:ebfe7c02-a1cf-4823-8f14-f26ff2fe4aaa}} determined by {{cite:3335c46eac194965015179b897d5937261cea461}} for a sample of 407 TDGc.
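Since the coefficients above appear only as placeholders, the color-based mass-to-light prescription being applied, log10(M*/L) = a + b(g-r), can be sketched with purely illustrative values of a and b:

```python
def stellar_mass(L_solar, g_minus_r, a=-0.5, b=1.5):
    """Stellar mass (solar masses) from luminosity (solar units) and (g-r)
    color via an assumed relation log10(M*/L) = a + b*(g-r).
    The coefficients a, b used here are illustrative placeholders, not the
    calibrated values from the cited work."""
    return L_solar * 10.0 ** (a + b * g_minus_r)
```

The actual masses in Table REF of course require the calibrated coefficients from the cited work.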
| d | 99fbc34b76c0503f1c2a22044018bf67 |
Generative image replay, while appealing, has numerous limitations in practice. First, real images are high-dimensional representations, and the image distribution of a particular task lies in a narrow yet very complex manifold. This complexity requires deep generators that have many parameters and are computationally expensive, difficult to train, and often highly dependent on initialization {{cite:f26afe921b380065a5ff90166db89c9350df5ad1}}. Training these models requires large amounts of images, which is rarely the case in continual learning. Even with enough training images, the quality of the generated images is often unsatisfactory for use as training data for the classifier, since they may not capture relevant discriminative features. Figure REF shows the CCA similarity for a class-conditional GAN. It shows a similar pattern to LwF and fine-tuning, with the similarity decreasing especially in intermediate layers.
| m | e166ea6d46b8bffa60324ed7ad201929 |
Simultaneously, at the opposite end of the conductivity spectrum, there has also been a large interest in two-dimensional (2D) and layered superconductors {{cite:91a3ed2a565bde74beaf0d63eb8ab2adcd8343d5}}, {{cite:7d242586c40ed3120441de7e52c62c16109c979d}}, {{cite:56587cd6eedfee1810baa7dadc1b0f458c4f6744}}, {{cite:2ef16b88c25bdab1c06d066c9961cbe436bf94d5}}, {{cite:1ba27c9375524695093f1f01e56ac539e744edb2}}, {{cite:d664dcccde353e138266b08856bcec2035bf3e2a}}, {{cite:7b001ace438b99505570ea1ac6ed36ab0e27c994}}, {{cite:b8aa004e4dc549a8911e5442030815420130933f}}, {{cite:216d0cd0f06436127cc57a03929ff85ce77df342}}. Particular interest has been in monolayer FeSe, in monolayer and few-layer samples of the transition metal dichalcogenide 2{{formula:35679402-d42d-454a-92f5-b25664a820da}} -NbSe{{formula:000101d2-75c6-4155-a429-41ff320e2990}} , in gated bulk samples of the transition metal dichalcogenide 2{{formula:04e07531-9bde-4cf8-91a6-4e95f91fc354}} -MoS{{formula:d355a369-e23c-4908-9752-77d59e352879}} , which also resulted in effective monolayer superconductors {{cite:91a3ed2a565bde74beaf0d63eb8ab2adcd8343d5}}, {{cite:7d242586c40ed3120441de7e52c62c16109c979d}}, {{cite:56587cd6eedfee1810baa7dadc1b0f458c4f6744}}, {{cite:2ef16b88c25bdab1c06d066c9961cbe436bf94d5}}, and in twisted-bilayer and twisted-trilayer graphene, which was surprisingly also shown to be superconducting for the magic twist angle {{formula:ce642413-a676-4406-9e22-70262d548e05}} for certain induced carrier densities {{cite:1ba27c9375524695093f1f01e56ac539e744edb2}}, {{cite:d664dcccde353e138266b08856bcec2035bf3e2a}}.
| i | b2d8baba535872e59180c2d7c51f6ca5 |
We compare our FWM directly with the current state-of-the-art on word-level bAbI: Metalearned Neural Memory (MNM; {{cite:832649ef927e8bbb716d2ba1e3b2756827528365}}).
We also include two strong autoregressive word-level language models as baselines: a regularized LSTM {{cite:2b6bb154be75c130731e49aacb7abc69179955ea}}, {{cite:8e48d71be8febef6573f28b3be64aab35bbc81a7}} and a regularized Transformer-XL (TXL; {{cite:f2da8b2dbdf07eec420398f05d68640568a30122}}).
Lastly, we also evaluate Ba's Fast Weights which attend to the recent past (JBFW; {{cite:f1bcf744ab7c9909ed808274490a305eb52ba73c}}), but we were unable to find hyperparameters that converged.
We truncate backpropagation through time (tBPTT) to 200 tokens for all models and limit GPU memory to 16GB for practical reasons.
For every model, we performed a hyperparameter search in QA mode over the first 3k steps, of which a smaller selection was trained for 30-60k steps. For all models, we adopt the best QA mode hyperparameters for the LM mode results.
Table REF lists the best accuracy and perplexity of each model over three seeds while figure REF shows the learning curves of the best seeds.
Further hyperparameter search results can be found in the appendix section .
{{table:1cdedea6-be26-4f71-a5cc-d1e776dbc015}}{{figure:eb218b5d-4e30-4225-87c5-b391c1152897}} | r | 3c5e328a09d4f7798c91d1685c2ca5a4 |
The following result is an extension of Tutte's theorem and also a lean version of a comprehensive structure theorem for matchings, due to Gallai (1964) and Edmonds (1965). See {{cite:3180f7cd7dc593bfb22a535977bd9a96a89bb310}} for a detailed statement and discussion of this theorem.
| i | f158a705b45243c23f0fc1b834a17907 |
The scalar resonance discovered by the CMS and ATLAS Collaborations at the LHC {{cite:1d91cb81661c9efebe551ee895c984af063ecf87}}, {{cite:c8ea359dd07f328b58e74c1d2f8754657bbc2843}}, {{cite:ae1e7d90035a072c1f27a5c72a9223ab3531fb66}} in 2012 has been found to have properties consistent with the predictions of the standard model (SM) for a Higgs boson with a mass of about 125 {{cite:cac85006974f2938407953c696d463e5e0d5bc2d}}.
In particular, its couplings to bosons ({{formula:922bb676-e31e-4c16-929e-48ef21e52c3a}} ) and fermions ({{formula:a72131b4-89c5-496f-9211-1733bbab103b}} ) corroborate an SM-like dependence on the respective masses.
Furthermore, data indicate that it has zero spin and positive parity {{cite:be6325e208e42bf06665495ef0b1d196771ae711}}.
Recently, the associated production of top quark pairs with a Higgs boson ({{formula:e6f54917-9cdd-4914-97c8-769cd54c55a2}} ) and Higgs boson decays to pairs of bottom quarks have been observed {{cite:ca4052d42c0b0e99f44e5d51b31965ae2a425121}}, {{cite:c7be5ad81973b13dfbdc4f747c746e84b24665e4}}, {{cite:a69a50c9252c052fe87886ee3dc73f2559f99f5f}}, thereby directly probing the Yukawa interactions between the Higgs boson and top as well as bottom quarks for the first time.
In addition to measuring the absolute strengths of Higgs boson couplings, it is pertinent to assess the possible existence of relative phases among the couplings, as well as their general Lorentz structure.
Hence a broad sweep of Higgs boson production mechanisms and decay modes must be considered to reveal any potential deviations from the SM expectations.
| i | 8f37b388df4253d0a8c49e3d5bef3f92 |
The proof of Theorem REF crucially uses the fact that the iterates {{formula:911693ff-7130-49be-b00a-a1a4efa4a4f5}}
have “fractional power growth” and our argument fails for iterates with “integer power growth”.
Similar results that cover the case of polynomials with integer or real coefficients were obtained in {{cite:5b3c6242736215bacec455b80a17fe4a9438c8de}}, {{cite:109342642badd34628aa2dd5f126a80f4c2703d6}} and {{cite:b28eb089192f5c4b2ba3f055fc256dfead0a38de}} respectively, and depend on deep properties of the von Mangoldt function from {{cite:f03e375fb88c2f38b0b24bcb700565490198b888}} and {{cite:ac1b1e8246d3c54d2df9e99a71a6993a29bdd1ca}}, but these results and their proofs do not appear to be useful for our purposes. Instead, we rely on some softer number theory input that follows from standard sieve theory techniques (see Section REF ), and an argument that is fine-tuned for the case of fractional polynomials (but fails for polynomials with integer exponents). This argument eventually enables us to bound the averages in (REF ) with averages involving iterates given by multivariate polynomials with real coefficients evaluated at the integers, a case that was essentially handled in {{cite:76a48fcc710355c029a197030830ffbc2e6431cf}}.
| r | 18710823d5eede717dff3abe8b9cc8dc |
There are clearly many future directions. One of the most important
directions is to understand the reason for the peculiar relation
(REF ) for the prepotentials.
Another interesting
direction is to study the Nekrasov-Shatashvili limit of the instanton
partition function {{cite:39424d8824e2519c9ffca6f3d57d51d93cddc115}}, which should be combined with
the recent results on the quantum periods of AD theories
{{cite:1a5cf27e702cbaa2f14e71d4a6f56b271647e737}}, {{cite:e561f646ac87bff15ec3e9b0157429d24036c8b3}}, {{cite:93b4fff746072df78899b19e580e614eae69e5b2}}, {{cite:8b8a300ea0d0761187658767eb65f55aa0ce64dd}}.
The uplift of
our formula (REF ) to five dimensions would also be an
interesting direction. It would also be interesting to search for a
matrix model description of the instanton partition function of
{{formula:b431c08d-7ff1-4f2a-a008-1ddb323d1f2f}} , generalizing the ones studied in
{{cite:4c737ab923daf4b7539b6e18c349ee36f38d5702}}, {{cite:41aca5272e6b9a832ade850efcf0afbb2484d364}}, {{cite:88be43b1af8146a4882484c7b80311327152da68}}, {{cite:2996917b9e0ab883badaf83fc3aad35fe38d3aa4}}, {{cite:6ff6c711c64866c790e8519561d72d33509cbaec}}, {{cite:cc48accefbc31c63300c9a858f8cc9db0461753d}}.
| d | 429cab171d6cc2b055b6f0eb753fa552 |
To estimate the first integral in (REF ), let {{formula:1e953181-65b9-42a6-8474-74db2f3388e9}} be a rectangle with its four vertices {{formula:2bd24102-05f6-4c9c-b90c-4a193166d1dd}} , {{formula:0844e0b5-4523-4529-a579-4b010d9e64f8}} . The arc of {{formula:6c1e18e9-3b50-40cf-a4cc-5f129ad9d0c1}} given by {{formula:aa900a4e-51fb-47cf-9c18-0ae6f2068771}} is contained within {{formula:e56b6d6b-8c68-4fb7-ace6-1ef778c3e9f4}} . By Theorem 3.3.1 of {{cite:6b9263d2ec7b4cb2e6ccac06370b6dcf4d7ee3f3}}, the polynomial {{formula:4e43113e-faf3-4c73-b608-dc9beb9c5d08}} has only real zeros, so its maximum absolute value on {{formula:e80e9724-1724-4fd8-91ea-c79f5837a035}} must be attained on the horizontal sides of {{formula:e45de7bf-ef9b-4a1f-a773-dd4f0a6300b7}} . Hence, according to Theorem REF , we have
{{formula:04675fbc-0cf0-494c-bab3-b4e66ec71ea1}}
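The step used here, that a polynomial with only real zeros attains its maximum modulus on the horizontal sides of such a rectangle, can be seen factor by factor: for a real zero r, |z - r|^2 = (x - r)^2 + y^2 grows with |y| at fixed x. A quick numerical check on a hypothetical cubic with real zeros:

```python
# hypothetical cubic with only real zeros (-1, 0.3, 2)
def p(z):
    return (z + 1) * (z - 0.3) * (z - 2)

N = 400
# rectangle [-0.5, 0.5] x [-1, 1]
horiz = max(abs(p(complex(-0.5 + k / N, y)))
            for k in range(N + 1) for y in (1.0, -1.0))
vert = max(abs(p(complex(x, -1.0 + 2 * k / N)))
           for k in range(N + 1) for x in (-0.5, 0.5))
assert vert <= horiz + 1e-12   # maximum occurs on the horizontal sides
```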
| r | c4587c24205737fcaaf45cafb641922d |
where {{formula:15aefd5c-1f9b-4883-b90f-4d183363e243}} is the step size. Applying the DPG method developed by {{cite:6f1acfa410c8e4157d642443a43e848a5bebfa6c}}, the gradient of {{formula:352ac71c-9881-4b38-b63a-c0edf5dc3531}} with respect to parameters {{formula:6464e870-2d03-4f61-8f6a-5e46349e7856}} is obtained as
{{formula:3fad2c7f-7249-4f3a-a8c2-ed013f316428}}
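The specific objective and gradient are given above only as placeholders; the chain-rule structure of a deterministic policy gradient update, the policy's parameter gradient multiplied by the critic's action gradient, can be illustrated with a hypothetical one-dimensional policy and critic:

```python
def dpg_update(theta, s, lr=0.1):
    """One DPG ascent step for a toy problem: policy mu_theta(s) = theta*s,
    critic Q(s, a) = -(a - 2*s)**2 (both hypothetical).
    DPG chain rule: dJ/dtheta = (dmu/dtheta) * (dQ/da at a = mu_theta(s))."""
    a = theta * s
    dQ_da = -2.0 * (a - 2.0 * s)   # critic's gradient w.r.t. the action
    dmu_dtheta = s                 # policy's gradient w.r.t. its parameter
    return theta + lr * dmu_dtheta * dQ_da

theta = 0.0
for _ in range(200):
    theta = dpg_update(theta, s=1.0)
# theta approaches 2.0, the parameter of the optimal policy a = 2*s
```

In the full method, both gradient factors come from automatic differentiation of the learned critic and policy networks; this toy fixed point only shows that the ascent direction is correct.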
| m | c6f78e38554faa975a9b9d93cba30aa9 |
To conclude this section, some quick comments are in order. First, a typical preliminary step when building a regression model is to conduct a correlation analysis (e.g., scatter plots, correlation matrix), which can be done using the tools discussed in Subsection REF ; here, this is done with matrix scatter plots and correlation tables; see Figure REF .
Also, to improve an initial model as in (REF ) or the resulting forecasting accuracy (REF ), a careful selection of variables or features of the data sets is often carried out. Finally, the term prediction is often confused with forecast. Prediction is much broader, as it includes tasks such as predicting the result of a soccer game or an election, where characteristics of the players of each team (soccer) or surveys of voters (election), not necessarily historical data, can be used. Further details on these topics can be found in {{cite:ac195915b4a84bec0d7c85ef2a9ceb8783026425}}, {{cite:992945432dbd105c10541e42b189600916968bcc}}, {{cite:9fff8664f4786ca9dadae960bb2f96866ee70ecb}} and references therein.
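As a stdlib-only sketch of the correlation-analysis step mentioned above (the data and variable names are synthetic):

```python
import math
import random

def pearson(xs, ys):
    """Sample Pearson correlation coefficient between two sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(0)
x1 = [random.gauss(0, 1) for _ in range(500)]
x2 = [0.8 * a + 0.2 * random.gauss(0, 1) for a in x1]     # strongly tied to x1
y = [3.0 * a - 1.0 * b + 0.5 * random.gauss(0, 1) for a, b in zip(x1, x2)]

# the correlation table one would inspect before fitting the model
cols = {"x1": x1, "x2": x2, "y": y}
matrix = {(u, v): round(pearson(cols[u], cols[v]), 2) for u in cols for v in cols}
```

A large off-diagonal entry (here between x1 and x2) flags near-collinear predictors, one of the things this preliminary step is meant to reveal.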
| m | fac3b5ba8af97836b075ce23983de760 |
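The correlation-matrix step mentioned in the row above can be sketched in a few lines; the synthetic columns below are stand-ins for real regression features, not data from the source.

```python
import numpy as np

# Build a small synthetic data set: y is strongly correlated with x,
# while z is independent noise (all illustrative assumptions).
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2.0 * x + rng.normal(scale=0.1, size=200)
z = rng.normal(size=200)

# np.corrcoef treats each row as a variable and returns the full matrix.
corr = np.corrcoef(np.vstack([x, y, z]))
# corr[0, 1] should be close to +1, corr[0, 2] near 0.
```

Inspecting such a matrix (or the corresponding matrix scatter plot) is exactly the kind of preliminary screening the text recommends before committing to a regression specification.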
We further note that the detected oscillation periods vary between different fieldlines and can also change with time, particularly in loops that are subject to changes in physical properties, or with increasing/decreasing loop length, as would be expected for standing oscillations (this is true for all oscillation modes visible in the simulation). This spread and variability of the oscillation periods, as well as the lack of oscillation coherence in different parts of the simulation domain, suggest that they are not the result of a coherent global oscillation in the simulation. Our analysis therefore does not agree with the premise that the transverse oscillations in the corona are driven by the global p-mode oscillation, a premise based on comparisons of peaks in the power spectra from velocity measurements in the corona with the p-mode power spectra {{cite:3ce10781fc7104f8666a59b6481ea5dd0ec7a227}}, {{cite:9e8369be4bfd32dbb06386fa3de9e9c54e211b97}} (it should, however, be noted that the analysis in such studies focuses on very long coronal loops, i.e. the spatial scales are very different from the short low-lying loops studied in this work). Such a mechanism requires coronal loops to oscillate with the same period regardless of their length, as they are driven at their footpoints by a harmonic driver. The variability of the oscillation period in the simulation therefore excludes the presence of a harmonic driver, or at the very least suggests that such a driver is not the dominant mechanism responsible for the excitation of transverse oscillations in this work. We stress that care should be taken when drawing conclusions from global power spectra, due to the temporal variability of the oscillation periods of individual magnetic structures, which has been shown both observationally {{cite:edf23f3fc83d9fa55642fe077f958c1234351016}} and by numerical simulations {{cite:68136fbfa0d3021c6c4a5072014875cb5d7aa3fb}}.
| d | 2abbae1569fce11356c9c2969699d927 |
Role of Transformers:
Transformers have recently shown success in a variety of vision tasks {{cite:e198459a92b159ff2382c53ca6cc6706300921b7}}.
Very recent tracking approaches employ transformers in different ways. {{cite:01897a4dbeaa11f7b9545e046ec85b7555504881}}, {{cite:da23f31e36d3a04e8fdabe1cd0374e3e410cb0eb}} utilize transformers for feature enhancement, in combination with either DCF or Siamese trackers.
{{cite:c1e65d9340f86329ef42ec85ab57f01a3b0063ca}} employs a transformer to associate the target object between frames in the presence of distractors.
In particular, STARK employs a transformer module for target detection and bounding box regression {{cite:a53699865ce7fae2fc4d6610a55306548cb22a95}}.
In this work, the transformer thus takes on the role of the DCF or Siamese correlation component.
The transformer, with its embedded attention module, bears interesting commonalities with the DCF. Most importantly, it allows for the integration of background appearance information through global operations.
Moreover, the transformer employed in STARK predicts a correlation filter. It therefore can be seen as a replacement of the optimization based filter prediction in DCF. Much future effort is needed to further analyze the effectiveness of transformers, as well as their relation to the DCF and Siamese paradigms.
| d | ddd50a29c53c95b1a0a3038f4feffe41 |
From Definition REF we obtain that compact subsets of a topological group are precompact. {{cite:054074a8994340b8aa5512377d2e6730859ea74b}} establishes that a pseudocompact topological group is precompact. The following theorem extends this result to a class of topological semigroups.
| r | a021223e96229b063014fc262780e95d |
Bell's theorem {{cite:260146f993f955e30fd4c1f87bbf95ed8168c89b}} continues to challenge our understanding of the relationship between the two pillars of modern physics: quantum mechanics and relativity. In recent years, the measurement-independence assumption (sometimes also referred to as statistical independence {{cite:b10b671d126040a35756e760b480ef84f50faa1e}}, {{cite:9ad0da22c410601f8f4046d0fde6dd0f6fd06547}} or {{formula:01d06a81-94fb-4ee1-a085-7c96fc4e5e2c}} -independence {{cite:0547ca60fe4df10e3310b85a3bd02aca16044fdb}}) in Bell's theorem has received significant attention in the literature {{cite:b10b671d126040a35756e760b480ef84f50faa1e}}, {{cite:4310c978f48f11d49d72a7a1e9794ec397923cb2}}, {{cite:93986c0b8ed159ae0f91f761a475237384ac42d4}}, {{cite:e44537814f7c2ce445068c979149c27425373c47}}, {{cite:da5672e41fccf6c972fa3589e7ee9c023951490f}}, {{cite:595da75edd7673c8007928d83515cf1ab94827f8}}. The assumption states that the hidden variables that determine the measurement outcomes are uncorrelated with the measurement settings. Several models have been developed that violate this assumption and thereby circumvent Bell's theorem (for a recent survey of such models, see ref. {{cite:0547ca60fe4df10e3310b85a3bd02aca16044fdb}}). It has also been possible to exploit the properties of these models to incorporate relativistic effects on entangled quantum systems {{cite:c24d3d04d74c867f85016ddd99ace37213620f7d}}. However, no wide consensus has yet emerged on how to physically interpret the violation of measurement independence.
| i | b353317edb33f59a034eba17a5021231 |
The general theory of stochastic gradient ascent {{cite:712a0d237ae4eca867fa088a0038541064a45090}} implies that
the above algorithm constructs an approximation of
{{formula:4990d85b-142e-4ff0-bd1f-4b9f2773e4fc}}
| d | b8228d48d9c834aa6176d0112240fc85 |
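The general behaviour the row invokes — stochastic gradient ascent with Robbins-Monro step sizes approximating a maximizer — can be illustrated on a toy objective. The concave function and noise level below are assumptions for the sketch, not from the cited theory.

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_grad(x):
    """Unbiased estimate of f'(x) for f(x) = -(x - 3)**2, i.e. -2 (x - 3) plus noise."""
    return -2.0 * (x - 3.0) + rng.normal(scale=0.5)

x = 0.0
for t in range(1, 20001):
    # Robbins-Monro step sizes: sum(1/t) diverges while sum(1/t**2) is finite,
    # which is the classical condition for almost-sure convergence.
    x += (1.0 / t) * noisy_grad(x)
# x now approximates the maximizer of f, which is 3.
```

Despite the noise in each gradient estimate, the decaying step size averages the noise away, which is the sense in which the algorithm "constructs an approximation" of the target quantity.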
Strong homotopy algebras, {{formula:7bf1b040-c254-42b7-92a8-8343b4af6ddf}} , {{formula:40100baa-9127-4aef-961c-bb19c40f3e49}} , {{formula:fc66dc2a-8560-46b6-acac-2202c840485c}} , ..., help to implement the idea of the (most general) consistent algebraic structure. They play a central role in string field theory and provide another point of view on the BV–BRST quantization method, see e.g. {{cite:678058377e28175eedfc258390af82be3a71049a}}, {{cite:d33bd29b053bb40b5e1ccc797a47da17f3003dbb}}, {{cite:8cadb157c8b544cf8521d38427967b5895bda263}}, {{cite:318ee170e2e234b9c0d8a8ea9b05fc02604d8702}}, {{cite:c8585874322690dd550722e00e159a92bb8b590d}}, {{cite:ba7df9cc92da68ba613cc10d0df330cbef4195f2}}, {{cite:f69eb59a4ba74046e7ff6a6cadf9220ac7278254}}, {{cite:1042f3232939e2fac9b40e300744b24e6bc61845}}. However, strong homotopy algebras do not seem to have appeared as symmetries of physical systems in the past. In {{cite:a4b1c32d52d04419c272169f4ec4ddeca061b37a}} it was proposed to implement the idea of the slightly-broken higher spin symmetry {{cite:ee77fcb701dac4b5db73df7a38864426dd25b8f6}} as a certain {{formula:35f23908-7226-4479-96cd-94402b85db8e}} -algebra. The usual symmetries are realized by Lie algebras whose generators act on one-particle states, and the action on multiparticle states is the sum of the actions on each of the one-particle states. There are examples of symmetries that go beyond the standard Lie one. For instance, the Yangian is the algebra of conserved charges of an integrable model, whose action on multiparticle states is defined in accordance with a nontrivial co-product and differs from the canonical one.
| i | d67022e96bcf53565408c30c0ae256a5 |
Fig. REF provides visual comparison of watermarked images produced by combined models.
Both the watermarked images and the corresponding {{formula:277aac34-3693-4f80-a629-984dec68e8d8}} magnified differences from the original images demonstrate that the bit message is successfully embedded in the images in an imperceptible way.
The watermarked images generated by our model and by HiDDeN {{cite:8b2978aa70b7569df2d3e1798655a931e5b3c9f5}} look very similar to the corresponding cover images, which is consistent with the PSNR metric reported in Tab. REF .
| r | 08370727fae77a0a19b28abfbe5f2712 |
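For reference, the PSNR metric that the row above uses to quantify imperceptibility can be computed as below. The synthetic cover image and the faint perturbation pattern are illustrative stand-ins for an actual cover/watermarked pair.

```python
import numpy as np

def psnr(cover, watermarked, peak=255.0):
    """Peak signal-to-noise ratio in dB between two same-shape 8-bit images."""
    mse = np.mean((cover.astype(np.float64) - watermarked.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Flat gray cover image; perturb one pixel in four by +1 gray level,
# mimicking an imperceptible embedded watermark.
cover = np.full((64, 64), 128, dtype=np.uint8)
watermarked = cover.copy()
watermarked[::2, ::2] += 1

value = psnr(cover, watermarked)  # MSE = 0.25, so roughly 54.15 dB
```

High PSNR values like this one correspond to differences that are invisible without the kind of magnification the figure applies.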
In this work we have only explored using a simple population-based policy gradient method {{cite:cb61698e7e5575ff9f55bcb48e1f4ee25d3d4f56}} for learning. State-of-the-art model-free RL algorithms, such as TRPO {{cite:f359a4319aea1d219adb7153467c82ef20f2c29b}} and PPO {{cite:cc3bfb0af958548926462f70e9e4398d324f5439}}, work well when our agent is presented with a well-designed dense reward signal, while population-based methods offer computational advantages for sparse-reward problems {{cite:fcbe3f4a9148bf6724d3f8d9605bc3283bc37086}}, {{cite:1cde000224ff278ca3e0ce85bb10cbeb7e2a528a}}. In our setting, as the body design is parameterized by a small set of learnable parameters and is only set once at the beginning of a rollout, the problem of learning the body along with the policy becomes more sparse. In principle, we could allow an agent to augment its body during a rollout to obtain a dense reward signal, but we find this impractical for realistic problems. Future work may look at separating learning from dense rewards and from sparse rewards into an inner loop and an outer loop, and also examine differences in the performance and behaviours of structures learned with various RL algorithms.
| d | b4c63b0886af8838aca1bec6e3943341 |
We conduct experiments on multiple computer vision tasks, including image classification, semantic segmentation, and object detection. Systematic experiments on several public benchmarks demonstrate that the proposed BOAT clearly and consistently improves existing image-space local attention vision Transformers, including Swin {{cite:aa14aca0a6337236579eeb2de8c8b52a8614959a}} and CSWin {{cite:92e01a5ac736ec9c87dde010bc6213a5ce3a9618}}, on these tasks.
| i | 1c4e11c19b985af399c20a9b60d7ad4e |
Recent research in deep learning focuses mainly on Transformer-based models such as BERT {{cite:88c63e7dd60c607ca7f83ed6ae4c125824845512}} and XLNet {{cite:b3dbac47ce597191d0740f980f9fec3e27a87b81}}. These approaches have shown impressive results on various state-of-the-art natural language processing (NLP) {{cite:c426ddadeba12ad04c7e310bed8f7f9dabbe0550}} and computer vision {{cite:cc18ba2ad89bbc962da93856001dce0b625d29d5}} tasks. Thanks to the attention mechanism layers {{cite:32aa63a3f71bfdb767623b352655bfed909673cc}} added to the encoder-decoder architecture, Transformers can focus on the most important patterns in the data, leading to a remarkable boost in performance.
| i | 61593fb5c83f12184ea0a8efd73dcfb9 |
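The attention layers the row above credits for this performance compute softmax(QK^T / sqrt(d)) V. A minimal NumPy sketch of single-head scaled dot-product attention follows; the shapes and random inputs are illustrative.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # pairwise query-key similarity
    # Numerically stable softmax over the key axis.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # each row sums to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))   # 3 queries of dimension 4
K = rng.normal(size=(5, 4))   # 5 keys
V = rng.normal(size=(5, 4))   # 5 values
out, w = attention(Q, K, V)
```

Each output row is a convex combination of the value vectors, with weights concentrated on the keys most similar to the query; this is the mechanism by which the model "focuses on the most important patterns".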
For example, {{cite:9179a523e8cd9eca23503d2e1176fa418a782284}} studied the nature of feedback mechanisms in the 11 CLASH BCGs. Their results strongly suggest that thermally unstable ICM plasma with a low cooling time is the source of material forming the reservoir of cool gas that fuels star formation in the BCGs, and that BCG star formation and feedback either exhaust the supply of this material on gigayear timescales or settle into a state with relatively modest and continuous star formation.
| d | 3a235234230284fbbafceb7ec5d2eddf |
Interest in stability issues was raised by {{cite:10d7541474df86add16701d55cb770f5ce5bd19e}} and by the stability result of Bianchi and Egnell in {{cite:72587c083084ed03d4db3e0ecad23b2b8ed45701}}, on the Euclidean space. Over the years, various approaches have been developed, based on compactness methods and contradiction arguments as in {{cite:72587c083084ed03d4db3e0ecad23b2b8ed45701}}, {{cite:1758f73e3300395e6edefc53bf2804802c85b6af}}, spectral analysis and orthogonality conditions as in {{cite:8cd50a3de1ec16ce45fb9bc1cb8fd8cebc55206d}} and {{cite:0bed9ead3dbda0ed99300437e36277e25f536ecf}}, or entropy methods and improved inequalities as in {{cite:27393a6af2d91126add55276a367b52e418b04ef}}, {{cite:9bee6a55f717abbfa0a10e9652ad746ef5a51258}}, {{cite:da6203240b97f0a03a56d96911d8e50d3a8c6e31}}, {{cite:8cd50a3de1ec16ce45fb9bc1cb8fd8cebc55206d}}, {{cite:5aed4e3e0f286ea67f62070e75ee6b4382c09464}}. For spectral methods, a fruitful strategy relies on the Funk-Hecke formula and the approach of {{cite:7f8a59149fe8ae84a5e0f7c8fdf54acb4d53a619}}, {{cite:0a9dc285193de03af0fff09aaf73d1a0d3ea4770}}, which applies to the stability result for fractional interpolation inequalities of {{cite:1758f73e3300395e6edefc53bf2804802c85b6af}} and {{cite:c5acbb68d530084716b7b8c349ebdc85558d29a3}}. This is the method we use in Section . Stability issues for (REF ) have recently been discussed in {{cite:a8cf816997aacb0da7620f1b0ff3f31ca1d7b68e}} with methods of Bianchi-Egnell type, with the drawback that no estimate of the stability constant is known. This drawback can be cured by a carré du champ method, as we shall see in Section . Without entering into details, let us mention some recent progress on stability in {{cite:8011a927a292c8ad959ed164083a1f1e5142b86f}}, {{cite:8e082b483e39e3f1c013bc9670baeb697def6b88}}, {{cite:1650aa2a0347835bc3984c3139b237257c6e3388}}, {{cite:b529ba104475d856fd7b96f3503bc3c3e71c0b7e}} for related critical inequalities.
| i | 21047962cdd78ebd7f60c9df96aa6b69 |
A detailed distributed implementation of DFM is given in Algorithm . In the initialization, each node {{formula:2aefe0e9-8378-40ac-a02f-35eedb6c5338}} shares {{formula:914765c0-0de0-4a4f-a6a1-9d58111672db}} , {{formula:f3fccc6c-9d77-4213-a525-c8d2d9640892}} , and {{formula:42bd79d7-0490-4a1e-ad58-a2cfdcf17483}} with its neighbors {{formula:522ca1cf-cddc-4a8d-b9d8-ee7543ba9e5f}} , which will be used to solve (REF ) and implement (REF ). Sharing of {{formula:e0406a42-90d5-4163-902e-737bf5369a88}} or {{formula:66ea7826-a362-4686-bd9e-2a521da56e6e}} between neighbors is common in algorithms {{cite:257ce3bce15e1a62640b9b9f9b2dae33ae02804d}}, {{cite:9c8e486dc042690d32de47f7388d6852cdae0849}} for handling problem (REF ). In addition, when all the {{formula:c0942595-1e79-401d-8557-c40f74f169dd}} 's and {{formula:d91f1ee9-2721-4e0b-a6c9-9f77a430e1ab}} 's are identical, this issue disappears.
| m | d9a14b5b3cef6596f10022e15d40b38a |