text: stringlengths [54, 548k] | label: stringclasses [4 values] | id_: stringlengths [32, 32]
In this study, we find that in a large and successful software company, communication patterns have a strong association with organizational structure and that communication between employees increases with their organizational proximity, both globally across the organization (Figure REF ) and locally within teams (Figure REF ). Figure REF also provides evidence of the “rich club” phenomenon at the top of the organization, in which the most central employees are embedded in dense communities {{cite:58efdad03fff5d8e9a9514235264b238d401c69f}}, {{cite:b21b9433283becac5d04802aa6d052f8b5447044}}. Similarly, Figure REF and Appendix Figure REF show that measures of employees' centrality or importance in the communication network are larger for employees higher in the organizational chart, in agreement with several substantive theories of organizational behavior {{cite:7cc70c8061de7e117ed131a9d465fb29a95fae6d}}, {{cite:91f04b6eeb0689987c3b1fd742681918b38d584a}}, {{cite:419bf4b0d448a70e742678370095fb3162d812c7}}.
d
067cdca117f18db973f0bdf86e5fcea0
A frame is a generalization of an orthonormal basis that provides painless nonorthogonal expansions {{cite:272014fb719dd95adb25b9adb748bef89101e711}}. Frames have properties similar to bases, but they offer more flexibility to accommodate specific design requirements, and their redundancy allows for protection against information loss in data transmission. In the past few decades frame representations of vectors have been studied by an army of mathematicians and applied scientists, interested in their mathematical properties and in the advantages they offer in numerous applications {{cite:01b4046c0a388b11793473798daf6d752a9699e2}}, {{cite:f303a00885779e37e821de6cd75bc2cffde70f30}}.
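As a concrete illustration of a painless nonorthogonal expansion (a standard textbook example, not one taken from the cited works), the "Mercedes-Benz" frame of three unit vectors in the plane is a tight frame: analysis and synthesis are a single matrix multiply, and the redundancy tolerates the loss of a coefficient.

```python
import numpy as np

# The "Mercedes-Benz" frame: three unit vectors in R^2 at 120-degree spacing.
# It is a tight frame with frame bound A = 3/2, so reconstruction needs only
# a rescaled adjoint -- no dual-frame computation ("painless" expansion).
angles = np.array([np.pi / 2, np.pi / 2 + 2 * np.pi / 3, np.pi / 2 + 4 * np.pi / 3])
frame = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # shape (3, 2)

x = np.array([0.7, -1.2])
coeffs = frame @ x                  # analysis: inner products <x, f_k>
x_rec = (2 / 3) * frame.T @ coeffs  # synthesis: scale by 1/A = 2/3

assert np.allclose(x, x_rec)

# Redundancy protects against loss: drop the third coefficient and recover x
# from the remaining two, since any two of the three vectors still span R^2.
x_partial = np.linalg.lstsq(frame[:2], coeffs[:2], rcond=None)[0]
assert np.allclose(x, x_partial)
```

The same two-line analysis/synthesis pattern works for any tight frame; for a non-tight frame one would apply the inverse frame operator instead of the constant 2/3.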
i
9240fd234f01fd4a75d10e18eb92845f
However, in the second stage of preheating, backreaction might increase the frequency of the inflaton field's oscillations, which makes the process even more efficient. A model with the quantum scalar field {{formula:ee49fd90-84e8-4775-ab28-1f8d9cb44767}} non-minimally coupled to gravity is another interesting scenario. Moreover, the reheating mechanism of inflation in general scalar-tensor theories of gravity is worth investigating. We refer readers to Ref. {{cite:18c3ed1668bd4c6907bd35635b7552b2ebb248bd}} for a detailed discussion of these topics, which we leave for future investigation.
d
6ac5299c9aa2a4cb533f38724997bd0d
In Table REF , we compare DA results with more augmentation methods on the Something-Something-v2 and UCF-101 datasets, such as RMS {{cite:83ce614a53e2f5053fca2c42077d50fabb850e75}}, MixUp {{cite:bb56f2cddb81e77226f84329d93ddc73f8f77911}}, VideoMix {{cite:4fdfd793f79c46123e66e42c5548a422896c1b78}}, and AutoAugment (AA) {{cite:842750cfb1a53efe076704caa061a9d3d93f3c81}}. For RMS, we use a mean filter for the pooling operation, and the random sampling distribution is Gaussian with mean {{formula:b6b4f330-b228-41be-8d9c-33f61b514fd9}} and standard deviation {{formula:14b0f421-e81c-4a7b-ac46-313cb2f99566}} . The RMS operation is inserted before the 3rd batch normalization layer of all residual blocks. The mixing ratio between two data samples is sampled from {{formula:7c68994e-df50-46df-8b8a-3eba6a421955}} , identical to {{formula:e3370eda-c086-4d8c-8cd9-3e1dfc6ca4aa}} . Mixing occurs within the mini-batch. For VideoMix, the box coordinates are uniformly sampled following the original implementation. For video, MixUp and VideoMix are applied identically across frames. CutMix {{cite:3fd1c844a00fc5d7a265e8e4ad9f68fa929f0f8b}} is identical to the spatial version of VideoMix. For AA, we use the policy searched on ImageNet, as in RA. {{table:4e65e282-d865-48e6-8681-3f682d0e4168}}
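The detail that MixUp is "applied identically across frames" can be sketched as follows (assumed tensor shapes and a hypothetical helper; the paper's exact pipeline is not reproduced): one mixing ratio is drawn per clip pair from a Beta distribution and shared by every frame.

```python
import numpy as np

rng = np.random.default_rng(0)

def video_mixup(clip_a, clip_b, label_a, label_b, alpha=1.0):
    """Mix two video clips of shape (T, H, W, C) and their one-hot labels.

    A single lambda ~ Beta(alpha, alpha) is drawn per pair and applied to
    every frame, so the temporal dimension is mixed consistently.
    """
    lam = rng.beta(alpha, alpha)              # one lambda per pair, not per frame
    clip = lam * clip_a + (1 - lam) * clip_b  # broadcast over all frames
    label = lam * label_a + (1 - lam) * label_b
    return clip, label, lam

a = rng.random((8, 4, 4, 3))
b = rng.random((8, 4, 4, 3))
ya = np.array([1.0, 0.0])
yb = np.array([0.0, 1.0])

clip, label, lam = video_mixup(a, b, ya, yb)
assert clip.shape == a.shape
assert np.isclose(label.sum(), 1.0)  # mixed label stays a distribution
```

VideoMix differs only in that a rectangular region (shared across frames) is cut from one clip and pasted into the other instead of blending pixel values.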
r
5c4ef023d7b0056a289a4173320c69b3
Due to the complex, nonlinear nature of the turbulent energy cascade, direct numerical simulations (DNS) are likely the best means to investigate the role of magnetic reconnection in the energy transfer across scales and how it changes the turbulent energy spectra. To date, no evidence for a new range of the turbulent energy cascade due to magnetic reconnection has been provided by DNS in realistic three dimensions (3D) {{cite:4cd06aa02bbba1daf3db3d9f8a872e0865072e94}}, {{cite:c6e60541b3954cb9e2b9a4eecf703cc5f02b5b46}}. Such DNS are extremely challenging, mainly due to the high grid resolution required to capture the fine structure of the omnipresent current sheets in a turbulent plasma at large {{formula:521d7a9e-0210-45e5-8c04-bdf055f772a5}} . In addition, MHD turbulence and magnetic reconnection are known to behave differently in 2D and 3D {{cite:fbecb19c256a09949f14cc0029d8b43c9e04e97c}}, {{cite:4cd06aa02bbba1daf3db3d9f8a872e0865072e94}}, therefore 3D DNS with large {{formula:f5ea3bcd-b65f-433a-9630-3327ccfc2163}} are essential to fundamentally address this question.
i
db10bd9ef3bab3efcc0786cc9b1732ca
is known in the analyses of the anomalous Hall effect {{cite:33ad2c346b037bac04b75a508bc33f7fc5d5fe2e}} and the spin Hall effect {{cite:408034c9d8c9942860cee59c33e844d83a9f6282}}. This effective Hamiltonian of the two-level crossing for the generic {{formula:704187ea-c371-4b1f-9f33-0945769911df}} (Bloch momentum) has been analyzed in detail in {{cite:736ab515721846c4e9bf4b3c7a1441571cb43fa9}}, and it has been shown that Berry's phase for (REF ) is determined by the time derivative of the azimuthal angle {{formula:9d9d01e5-aa1b-40b9-914b-9170b76d095c}} in both adiabatic (monopole) and nonadiabatic (dipole) limits, and thus our parameterization (REF ) describes an essential aspect of the topology of Berry's phase. To be more precise, Berry's phase becomes trivial, namely, either 0 or {{formula:7d80db55-0a9f-4e91-8f90-fec00ce27fb1}} , in the model (REF ) for the nonadiabatic limit {{cite:736ab515721846c4e9bf4b3c7a1441571cb43fa9}} {{formula:8da43238-a898-4d87-bfc5-d46068315eb8}}
d
a708781798394478b11077fb57629a7a
The real merit of our algorithm is that it is able to produce BBP-type formulas to non-integer bases, while still producing formulas with integer coefficients. We have not been able to find examples of this in the literature, though Adegoke {{cite:d8cbde1b21d64122f11dcc10fb901d88a2312c8a}} does have BBP-type formulas in base {{formula:90bc91df-ce32-4e09-8ad2-d20974ec3ae6}} , the Golden ratio. However, the formulas obtained by Adegoke have coefficients in {{formula:f7ae6c5a-b189-4ee8-9c45-9fb035147d1e}} , which to us seems a little unnatural. The bases studied must be algebraic. We have run our algorithm with several Pisot bases. Of course, the output will always be expansions with rational coefficients, as is the case with integer bases, and as such they are not {{formula:5bd8f900-86bb-46f9-8f36-a243e35ebb93}} -expansions in the sense of Rényi {{cite:7ae7effb157ea62d4a7ab8a7f66ef6c8d0626b98}} and Parry {{cite:e7cba421972d70db5ce5620f5fbc9f527670706b}}. Nevertheless, we find the expansions of interest. We are particularly intrigued by the rather large number of expansions with the Golden Ratio as a base.
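For context, the prototypical BBP-type formula is the base-16 digit-extraction formula for {{formula:90bc91df-ce32-4e09-8ad2-d20974ec3ae6}} -independent constants such as $\pi$ (a standard result, not an output of our algorithm):

```latex
\pi \;=\; \sum_{k=0}^{\infty} \frac{1}{16^{k}}
\left( \frac{4}{8k+1} \;-\; \frac{2}{8k+4} \;-\; \frac{1}{8k+5} \;-\; \frac{1}{8k+6} \right)
```

Our formulas have the same shape, a sum of rational terms attenuated by powers of the base, but with an algebraic non-integer base in place of 16.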
r
bfd4ac2b9a00aec6ada8636eb989d8f6
At the time of the acceptance of this paper, we noticed concurrent work {{cite:7155ad7a0158bde7427edf6af27791593014db4f}} related to our research. We exploit the first-order derivative as the criterion for data selection, whereas they use both the first-order and second-order derivatives (i.e., the Hessian). In their method, it is not necessary to explicitly compute the inverse of the Hessian; instead, the authors use an approximation based on implicit Hessian-vector products (HVPs), as suggested in {{cite:8ba1ec4d5a547173a1ba68062c9d88be652b91a1}}. The main difference between the two methods is that we use the gradient magnitude (norm) for data selection, whereas they use both magnitude and direction. We think using only the magnitude may have an advantage in terms of data diversity. We refer readers to item (2) in Section A.3 of this Appendix for the discussion.
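Gradient-magnitude selection can be sketched on a toy linear least-squares model (the actual model, loss, and selection threshold of our method are not reproduced here): compute each sample's per-example gradient norm and keep the largest.

```python
import numpy as np

# Toy stand-in: linear model with squared loss. The per-sample gradient of
# 0.5 * (x.w - y)^2 w.r.t. w is (x.w - y) * x, so its norm factors as
# |residual| * ||x|| -- selection favors high-residual, high-signal samples.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=100)
w = np.zeros(5)  # current model parameters

residuals = X @ w - y
grad_norms = np.abs(residuals) * np.linalg.norm(X, axis=1)

k = 20
selected = np.argsort(grad_norms)[-k:]  # keep the k highest-gradient samples
assert selected.shape == (k,)
# Every selected sample sits in the upper half of the gradient-norm range.
assert grad_norms[selected].min() >= np.median(grad_norms)
```

The Hessian-based alternative discussed above would additionally weight each sample by curvature information, at the cost of HVP computations.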
d
1acfd25160c3acc17eb6dc75ce4c9caa
Theorem 2.6 ({{cite:6d5533bdd533e3e276263133951e8102e4004946}}) Suppose that {{formula:79240957-317c-414d-9f7c-953c984ee19d}} is a reflexive Banach space with norm {{formula:e987c280-1108-4f71-ad04-d7294dac3294}} and let {{formula:d9b19f64-65ee-4046-a954-d3b5c8b20f88}} be a weakly closed subset of {{formula:7fe98298-4ba1-4930-91ed-0a8b8f1728f7}} . Suppose {{formula:0e20f306-4eb4-4d35-9424-2090be168216}} is coercive and (sequentially) weakly lower semi-continuous on {{formula:a20a5bde-636f-4bf0-b597-2b5871e54bbd}} with respect to {{formula:cd8e6f43-3776-4afe-9e3e-bdb78e5786ca}} , that is, suppose the following conditions are fulfilled:
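The two conditions can be written out in the standard form of the direct method of the calculus of variations (our reconstruction, using generic symbols $E$, $M$, $X$ in place of the placeholder formulas above):

```latex
% (i) Coercivity of E on M:
%     E(u) \to \infty \quad \text{whenever } u \in M \text{ and } \|u\| \to \infty.
% (ii) Sequential weak lower semi-continuity of E on M:
%     u_k \in M,\; u_k \rightharpoonup u \text{ weakly in } X
%     \;\Longrightarrow\; E(u) \le \liminf_{k\to\infty} E(u_k).
\text{Then } E \text{ is bounded from below on } M \text{ and attains its infimum in } M.
```

This is the standard conclusion of such theorems; the precise statement should be checked against the cited source.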
r
c86b5331b8f79e82f91c74af5a267a40
In Figure REF we illustrate the behaviour of a MuRel network with three shared cells. Iterations through the MuRel cell tend to gradually discard regions, keeping only the most relevant ones. As explained in Section REF , the regions that are most involved in the pairwise modeling process are shown in green and red. Both region contributions and pairwise links match human intuition. In the first row, the most relevant relations according to our model are between the player's hand, containing the WII controller, and the screen, which explains the prediction bowling. In the third row, the model answers kite using the relation between the man's hand and the kite he is holding. Finally, in the last row, our model is able to address a third question on the same image as in Figures REF and REF . Here, the relation between the head of the woman and her hat is used to provide the right answer. As VQA models are often subject to linguistic bias {{cite:754e73415b3f757321fc13ca8803632f9cb54b01}}, {{cite:16a3a0b2fbbbac0dc31509bbe19fa674b39e298c}}, this type of visualization shows that the MuRel network actually relies on the visual information to answer questions.
r
ab5b407971d5fdeaf41fad3a44e5830b
Compared with the other TBOs, this TBO is a small-amplitude oscillation. In the cooling-wake scenario, the behavior of a TBO in the burst decay phase is related to the latitude at which the burst ignites, since the first ignition spot is also the first cooling spot {{cite:b16759670fd628053673ce66b9361298b720b90d}}. The ignition latitude can be inferred from the convexity parameter of the light-curve profile during the burst rising phase: roughly, a concave profile corresponds to a high ignition latitude and a convex profile to a low one {{cite:62ed7ffae23c33d6b0972f87d6d25d2f7e75411e}}. The rising profile of this burst is convex, indicating that it ignited near the equator; because the cooling wake then spreads quickly, the asymmetric emission during the decay phase is shorter-lived {{cite:ff7193caa2fa0d1928b1353ccfa4479e5e229a11}} than in the high-latitude ignition case. This may explain the small fractional rms of this TBO, since TBO detectability also depends on the duration of the asymmetric emission {{cite:62ed7ffae23c33d6b0972f87d6d25d2f7e75411e}}.
d
6c850d7895f268bc8e97b0272eaba8f7
For the numerical simulation and analysis of the nonlinear {{formula:b71927f6-68a8-45d2-978a-7b1dd9297e14}} -propagation dynamics of ultrashort laser pulses we use the generalized nonlinear Schrödinger equation (GNLS) {{cite:e812487928bb0fe843d3a137030d5f229f1ad996}}, {{cite:f9197034cd507af0ed95d3275ba5364c1efb853d}} {{formula:6079c0af-0ff2-4c7b-b836-c4264ffdc977}}
m
ec70e31fabed3269f455865c6f2e28a5
While an exhaustive grid search is simple to implement, it is infeasible for most practical problems because the number of sample points increases exponentially with the number of hyperparameters {{cite:12e77a8ed74ad4504e8569964e8967597a534c3a}}, {{cite:f868c855724ab3e415c89b8c8792fe8c991c928f}}. In order to compare our Bayesian search method and uniform random sampling with a “ground truth” grid search, in this section we consider a toy problem with only two tunable parameters. Using our first lung case, we again vary the D2cm maximum-dose parameter {{formula:952a3fc6-59bc-4fc9-86be-50fb5c03a2d8}} , and we add an additional dimension by varying the ribs maximum-dose parameter {{formula:c1321b70-410f-48cc-b0d2-421948a916a1}} . All weights are fixed at {{formula:a306c444-5c41-4a48-8ae2-e43f0dc8023c}} , and all remaining dose parameters are set to their limit values {{formula:74d6bd30-0597-41d0-b995-d6c737e8c7c4}} . {{figure:76a148eb-5a22-41f2-8bcc-85480191125d}}
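The exponential growth that makes exhaustive grid search infeasible is easy to quantify (toy numbers below, not the actual treatment-planning parameters):

```python
import numpy as np

# Grid search evaluates every combination of discretized parameter values,
# so the point count grows exponentially with the number of hyperparameters.
# Random sampling, by contrast, uses a fixed budget regardless of dimension.
def grid_points(levels_per_param: int, n_params: int) -> int:
    return levels_per_param ** n_params

assert grid_points(10, 2) == 100         # the 2D toy problem: feasible
assert grid_points(10, 10) == 10**10     # ten parameters: hopeless

# Uniform random sampling covers a 10D space with the same budget that a
# 10x10 grid spends on 2D.
rng = np.random.default_rng(0)
samples = rng.uniform(0.0, 1.0, size=(100, 10))
assert samples.shape == (100, 10)
```

This is why the "ground truth" grid is only computed for the two-parameter toy problem, while the Bayesian and random searches scale to the full parameter set.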
m
2f1ae986bb1b90213d5c5908a83f03d3
To compute the similarity between two or more macromolecule graphs, we used GED and a graph kernel (Figure REF B, Section ). GED {{cite:f90a7e2c6397b82f8d4f082669f092194c2c1a50}} computes the similarity between two graphs by assigning node and edge substitution scores, similar to local sequence alignment methods such as BLAST {{cite:ed308025b4fea0710f71f79178b5fc903f3880c3}}. Instead of evolutionary statistics-based substitution matrices like BLOSUM62, we used Tanimoto chemical similarity matrices that compute the similarity between molecular fingerprints. Tanimoto similarity also extends to unnatural monomers. Since computing exact graph edit distances is an NP-hard problem, we used propagation attribute graph kernels to obtain approximate similarity matrices for large datasets {{cite:58904f5f0d219428676d990a28f3ea888302a927}}, {{cite:13fab073ca240cfd2c9af7b4b353f1b40e816534}}. This graph kernel captures local monomer node information and propagates it along the bond edges, making it an ideal choice for macromolecule graphs. We have demonstrated the similarity computation for a linear glycan with six additional glycans of different topology and/or monomer chemistry, as well as with itself, using the Tanimoto chemical similarity matrix (Figure REF A, B).
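The Tanimoto similarity used in place of BLOSUM-style substitution scores is simply the Jaccard index of two binary fingerprints; a minimal sketch (the bit vectors below are made up, not real monomer fingerprints):

```python
import numpy as np

def tanimoto(a: np.ndarray, b: np.ndarray) -> float:
    """Tanimoto (Jaccard) similarity of two binary fingerprints:
    |intersection of set bits| / |union of set bits|."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return float(np.logical_and(a, b).sum() / union) if union else 1.0

fp1 = np.array([1, 1, 0, 1, 0, 0, 1, 0])
fp2 = np.array([1, 0, 0, 1, 0, 1, 1, 0])
fp3 = np.array([0, 0, 1, 0, 1, 1, 0, 1])

assert tanimoto(fp1, fp1) == 1.0   # identical monomers
assert tanimoto(fp1, fp2) == 0.6   # 3 shared bits / 5 bits in the union
assert tanimoto(fp1, fp3) == 0.0   # no shared bits
```

A matrix of such pairwise scores over the monomer alphabet then plays the role that BLOSUM62 plays in sequence alignment, and, unlike BLOSUM62, it is defined for unnatural monomers.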
r
1441debb5222888e1b76bca30e7040a0
Next, we evaluate on an unseen dataset (not used in training), EgoDexter {{cite:d4160607beeac7bd2eb16fe241ef7e42edfbd3ec}}, to compare the generality of our method with several existing pose estimation methods {{cite:f41a58e312cbc52c16116bbe10418142f553d734}}, {{cite:3083c1a76d6326b6d2df21e78da069af043d21dd}}, {{cite:a43e19c115c3451ee6f7437b122b4ea1ddd12de1}}, {{cite:1dc7abadfbaf864fef2943f21809f1def5ba8b5b}}, {{cite:1914586b7371382e57083fdb119dc847e21b22a9}}. Following {{cite:a43e19c115c3451ee6f7437b122b4ea1ddd12de1}}, we use the centroids of the finger tips as roots to align the prediction and the ground truth. The third plot in Figure REF shows the AUC result. Note that, since Zhou et al. {{cite:a43e19c115c3451ee6f7437b122b4ea1ddd12de1}} did not provide their pose PCK curve on EgoDexter, we only report their AUC. On the other hand, {{cite:5b5b67600fc64d58819c52749574bd635fb2c399}} did not test their performance on the EgoDexter dataset, so we do not report it in the figure. From the plot, we can see that our method achieves the highest pose AUC value compared with all the other methods, demonstrating its generality and potential for practical usage. Please check Figure REF for more qualitative results from EgoDexter and FreiHAND. We also report the speed-accuracy comparison in the fourth plot of Figure REF . As we can see from this plot and from Table REF , our method beats all recent methods on mesh prediction quality and also achieves real-time performance.
m
0818233f82bfe3a08c53a064972e9b84
Building on the analogy between Feynman graph polynomials and those of electrical circuits, we then formulated a second class of parametric representations. For these, the integration variables represent the effective resistances between vertices of the simplex, rather than the conductivities (i.e., the inverse Schwinger parameters) used previously. This change of variables immediately diagonalises the Schwinger exponential, expressing the {{formula:09a43c5f-6f66-44f4-8411-18801988ef8e}} -point function as a standard Laplace transform of a product of polynomials raised to generalised powers. These polynomials correspond to the determinant and first minors of the Cayley-Menger matrix for the simplex, which plays an analogous role to the Gram matrix for this second class of parametrisations. From the form of these polynomials, new weight-shifting operators can immediately be constructed to raise the power of these polynomials, with further shift operators following by shadow conjugation. Besides shifting the scaling dimensions of external operators, these new weight-shifting operators raise the spacetime dimension by two. They therefore generalise the 3-point shift operators of {{cite:98292392a10dd502359923839a3c1c6d3afee480}}, {{cite:9f1e80849c1efe43eca450f569a3c1a222d6cfe3}} to {{formula:4f921943-9c94-4002-b17b-7c93e72692de}} -points, and constitute a distinct class of operators to those identified in {{cite:7af4fcf6f705aaa7d65d7205d190d2e8f1272b4e}}.
d
b690d4fc206f32da53ffcc1057d07c3d
The lines in the list were adjusted in order of their depth, starting with the deepest ones. For each line, a 0.8Å  wavelength range was considered, unless the line had HFS components. In the latter case, the wavelength range was extended 0.4Å beyond the bluest and reddest components of the HFS families. Parameters for the entire HFS family were changed simultaneously during the fitting. The line position was allowed to vary by up to 0.25 km s{{formula:a819f22e-063a-49c6-bb3f-5abd8947da58}} (or up to 0.013Å at 16,000Å), while the {{formula:5f9e12e4-5552-4602-b186-b25d24a81a8b}} value could vary by up to twice its estimated uncertainty in the course of the least squares optimization, which used the downhill simplex algorithm {{cite:bd89fdb8d49f3576314cbea88e5f9aec15ce4ea4}}. The spectra of the Sun and Arcturus were considered separately for each line of interest, and the resulting {{formula:aaa17dc4-d1a3-4b9f-8f0f-442ea2768d21}} value is the weighted mean over both stars, where the weight is the line depth; this was the same method employed by {{cite:fc0eac6d6ea6b8e4403d559304b2b2e12108ac2d}}. The effects of strong line wings were taken into account by computing synthetic spectra over a wider range ({{formula:270ae0fd-6980-43d7-9931-8dfbef32a948}} 9Å) before cutting out the smaller piece for fitting.
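A minimal sketch of the per-line least-squares fit described above, using the downhill simplex (Nelder-Mead) algorithm; the Gaussian line profile and the toy depth relation below are stand-ins for real spectral synthesis, and the parameter names are illustrative only:

```python
import numpy as np
from scipy.optimize import minimize

# Wavelength grid (Angstrom) around the line center, spanning the 0.8 A window.
wave = np.linspace(-0.4, 0.4, 81)

def line_model(loggf, dlam):
    """Toy absorption line: depth loosely tied to log gf, Gaussian profile
    shifted by dlam (the small allowed wavelength offset)."""
    depth = min(0.9, 0.5 * 10 ** (loggf + 1.0))
    return 1.0 - depth * np.exp(-0.5 * ((wave - dlam) / 0.05) ** 2)

observed = line_model(-1.0, 0.013)  # synthetic "observed" spectrum

def chi2(p):
    return np.sum((line_model(p[0], p[1]) - observed) ** 2)

# Downhill simplex (Nelder-Mead) least-squares optimization, as in the text.
res = minimize(chi2, x0=[-1.3, 0.0], method="Nelder-Mead")
assert res.success
assert abs(res.x[1] - 0.013) < 1e-3  # wavelength shift recovered
```

In the real procedure the model spectrum comes from full synthesis over a wider window, the shift is capped at 0.25 km/s, and the log gf step is capped at twice its estimated uncertainty; the optimizer and objective are the same.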
m
c025636c266d91d479d801675f7f48bd
To further demonstrate the effectiveness of our proposed model and training scheme, we use the large and complicated RadioML 2018.01A dataset {{cite:ab46048021689ce2ee7447742f4679872fc14d67}}. RadioML 2018.01A contains 24 kinds of modulations at SNRs ranging from {{formula:6f7175a2-a1ec-4fe2-ad02-eadc26ede1fe}} dB to 30 dB in steps of 2 dB. The dataset contains over 2 million samples of 1024 points each. For this complicated dataset, we follow the training strategy used for RadioML 2016.10A, with the number of training epochs increased to 100. The simulation results for the network parameters, the training loss, and the feature visualization are obtained on the selected RadioML 2016.10A, and the simulation results on RadioML 2018.01A are also presented.
r
d3c513ef8b958159d7a247b39c769fe0
Fig. REF shows the NMSE versus the number of beam trainings when {{formula:0797b3e6-2f99-4eff-bcb3-d543ebb129af}} and {{formula:7f52b359-ae53-40b7-beae-7449fb90316b}} . The SNR is set to 0 dB. To show that the training beamwidth adaptation ensures robust channel estimation when there is less beam training, the NMSE of the proposed algorithm without training beamwidth adaptation is also presented in Fig. REF . Since insufficient beam training may cause erroneous multipath signal reception, the proposed algorithm without training beamwidth adaptation shows high NMSE when {{formula:c36b7a85-e8ba-4e9a-87fc-b8aab6d2880c}} . On the other hand, the NMSE of the proposed algorithm is lower than those of the other algorithms at every {{formula:fb38e325-f842-4da6-bf31-db03653abce3}} . When {{formula:9482d489-66fe-44aa-9644-67bb76e7f74b}} reaches {{formula:484579c8-06e0-407e-86b0-1bf52a6a37d2}} , there is no need to broaden the beamwidth, so the NMSE of the proposed algorithm without training beamwidth adaptation becomes equivalent to that of the proposed algorithm. Since the channel estimation is inaccurate when AoDs or AoAs are closely separated, the NMSE of {{cite:359429c6614087cc38657d675ff4f3d798c2cb27}} and {{cite:f6cb542c9f70b64539ef1c9f2906555ed287a616}} remains high even when sufficient beam training is performed. {{figure:377a1f9d-b17a-4b34-a1b9-d18f661771f2}}
r
ce4475a2a6f057195eb2320928018fa4
We test representations trained with different proxy tasks of self-supervised learning, including baselines such as rotation {{cite:74908b8e8276bda369cb370b1208468f31eaaa15}}, Cutout or scar predictions, the proposed CutPaste, CutPaste-Scar predictions, and using both with 3-way classification. We also compare with previous works, including deep one-class classifier (DOCC) {{cite:cbf568c4bf3104137d206605c1f172ceae9f4f88}}, uninformed student {{cite:d03bbb366967b6b05852c44b0b1dce9780974448}}, and patch SVDD {{cite:1d11c991777c4cef4349dc7e9e4e2ac359125adc}}. We note that some of these methods use ImageNet pretrained model for transfer learning, either by fine-tuning (DOCC) or distillation (uninformed student). The results are in Table REF .
r
475d2f69c0f3ebc842fb150287e4ec08
The matching problem is important and appears in many fields, such as bioinformatics, molecular chemistry, pattern recognition, and computer vision. In computer vision especially, matching is a crucial issue that frequently occurs in stereo matching {{cite:ecf8be5cc04ae26b92606a92e515590b40719627}}, target tracking {{cite:cc74ae20a2cfa228ae37f3010053cb0c5e2f7907}}, and pedestrian re-identification {{cite:bdaa5ed6a77b9020d811abe5cdf58232802341aa}}, to name a few. The matching problem can be decomposed into two parts: a) extraction of local features from the raw inputs; b) matching at best the local features. Matching implies solving an assignment problem and resolving conflicting evidence. Matching at best the local features can be modeled as a graph matching problem in order to consider the relations between local features.
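Part b), matching at best the local features, is at its core a linear assignment problem: given pairwise matching costs between the features of two images, choose a one-to-one matching of minimum total cost. For the tiny made-up cost matrix below we can solve it by brute force; in practice a Hungarian-algorithm solver is used, and graph matching further adds pairwise (edge) terms on top of these unary costs.

```python
import numpy as np
from itertools import permutations

# Made-up cost matrix: cost[i][j] = dissimilarity between feature i of image A
# and feature j of image B (lower is better).
cost = np.array([
    [0.1, 0.9, 0.8],
    [0.7, 0.2, 0.9],
    [0.8, 0.7, 0.3],
])

# Brute-force over all one-to-one assignments (fine for 3 features).
idx = np.arange(3)
best = min(permutations(range(3)), key=lambda p: cost[idx, list(p)].sum())

assert best == (0, 1, 2)  # the diagonal matching is cheapest here
assert np.isclose(cost[idx, list(best)].sum(), 0.6)
```

Replacing the brute-force loop with a polynomial-time assignment solver handles realistic feature counts; the graph matching formulation mentioned above generalizes the objective with relational terms and is NP-hard in general.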
i
d9056b54c60ffc661156a55e817e539a
The current unbiased LTR theory lacks the ability to properly analyze the unbiasedness and consistency of click modelling methods. These limitations do not undermine the value of the unbiased LTR field, whose usefulness and effectiveness are evident from a multitude of empirical results {{cite:c37ff46fe1b04ebbd7a7fd193ee1579436df87f1}}, {{cite:55c328fd9b77e765d8baab3b9edccdc049a2a664}}, {{cite:158cd80fdba08ba82d0aeb85cdc406d456ad56bf}}, {{cite:1d3bb815e5be730f263e6843d27dca01168d4a66}}, {{cite:8c7c6b83297ca261be1d84e8352a4bae17be55f7}}; rather, they could help guide future directions and provide valuable lessons to the field. The implicit limitations that we have uncovered for the existing approach reveal that, in order to find counterfactual estimation methods that are unbiased and consistent w.r.t. non-affine click behavior, future research should deviate from our generic definition (Definition ). Concretely, such methods should not rely on averaging over transformed individual clicks, and as a result, proving unbiasedness will be much more difficult for them. Furthermore, novel theoretical research is needed to find theoretical guarantees of unbiasedness and consistency for click modelling methods, in particular those that utilize complex models. Our findings reveal that it is not enough for the underlying click model of a method to match the real-world click behavior; whether the method will produce unbiased or consistent relevance estimates also depends on how data is gathered and, ultimately, on where the minima of the resulting loss are. Similarly, theoretical research that explores in what circumstances bias estimation is feasible would be very valuable for further understanding the theoretical guarantees of counterfactual estimation. Lastly, future research that introduces novel assumptions should critically investigate what these assumptions implicitly entail.
One should actively avoid assumptions that, intentionally or incidentally, make learning from clicks a trivial problem. Finally, the findings of our critical analysis also provide some important lessons for the field: Firstly, we should realize that unbiasedness is not always possible. We should thus not invariably expect nor require it from future work, for this could systematically exclude research that tackles novel problem settings (e.g., non-affine click behavior). Secondly, since unbiasedness may not be a realistic long-term goal, the field will likely shift to bias mitigation or partial debiasing as feasible future directions. Correspondingly, we recommend replacing the term unbiased LTR with the less demanding debiased LTR or the neutral click-based LTR. This work was supported by the Google Research Scholar Program. All content represents the opinion of the author, which is not necessarily shared or endorsed by their employers and/or sponsors.
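To make concrete what "averaging over transformed individual clicks" means, here is a minimal sketch of the canonical inverse-propensity-scoring (IPS) estimator under a position-based examination model. The propensities and click probabilities are synthetic, and this illustrates the generic estimator family discussed above, not any specific method from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

relevance = 0.4                            # true P(click | examined)
propensity = np.array([1.0, 0.6, 0.3])     # P(examined) per display position

# Simulate clicks: a click requires examination AND relevance.
n = 200_000
positions = rng.integers(0, 3, size=n)
examined = rng.random(n) < propensity[positions]
clicks = examined & (rng.random(n) < relevance)

# IPS: transform each individual click by dividing by its propensity,
# then average. Under this click model the estimator is unbiased:
# E[click / p] = (p * relevance) / p = relevance.
ips_estimate = np.mean(clicks / propensity[positions])
assert abs(ips_estimate - relevance) < 0.01
```

The unbiasedness shown here hinges entirely on the click model being position-based (an affine transformation of relevance); under non-affine click behavior no per-click transformation recovers relevance in expectation, which is the core limitation the analysis above formalizes.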
d
83e55c495f1064b4cdb93974c1445ee2
In this paper, we analyzed zeroth-order algorithms for deterministic and stochastic nonconvex minimax optimization problems. Specifically, we considered two types of algorithms: the standard single-step gradient descent ascent algorithm and a modified version with multiple ascent steps following each descent step. We obtain oracle complexities for both algorithms that match the performance of comparable first-order algorithms, up to unavoidable dimensionality factors. A summary of our complexity results with those in {{cite:ed0372d1883a3b11b49eaf93cc296daa4767f0ca}} for the zeroth-order setting is provided in Table REF . Note that we provide improved complexity results in comparison to {{cite:ed0372d1883a3b11b49eaf93cc296daa4767f0ca}} for the zeroth-order gradient descent algorithm. We emphasize that the improvement comes by our use of mini-batch gradient estimators, along with an analysis of their approximation properties (as in Lemma REF and Lemma REF ). Furthermore, in Table REF , we compare with existing results on first-order method. We note that we match the first-order algorithms in terms of dependence on {{formula:ea08aad7-3f9f-42f1-9ae9-00116101791c}} . Finally, the linear dimension-dependence is natural in zeroth-order optimization {{cite:bf63efeda54aa22e74e0d68b86d640daf3b50d99}}, {{cite:23bc1436fc57ca0b49108badc77f395e8c57ff5c}}, {{cite:ee48bd13278680fd231d9a68f85ab9285da0ab16}}. {{table:1045aecc-4565-4782-8934-b4a716471796}}{{table:7fa37c40-a69d-4c0c-9145-4b115f42e87a}}
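The mini-batch zeroth-order gradient estimators behind these complexity results can be sketched as follows (a Gaussian-smoothing two-point estimator on a toy quadratic; the paper's minimax objective and exact estimator are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10
A = np.diag(np.arange(1.0, d + 1))
f = lambda x: 0.5 * x @ A @ x  # toy objective; true gradient is A @ x

def zo_grad(x, mu=1e-4, batch=512):
    """Mini-batch two-point zeroth-order gradient estimate: average the
    finite-difference directional derivative over random Gaussian directions.
    Averaging over the batch shrinks the estimator's variance, which is what
    drives the improved oracle complexities."""
    u = rng.normal(size=(batch, d))
    fd = np.array([(f(x + mu * ui) - f(x)) / mu for ui in u])
    return (fd[:, None] * u).mean(axis=0)

x = rng.normal(size=d)
g_hat = zo_grad(x)
g_true = A @ x
# Accurate only up to O(sqrt(d / batch)) relative noise -- the source of the
# unavoidable dimensionality factors mentioned above.
assert np.linalg.norm(g_hat - g_true) < 0.5 * np.linalg.norm(g_true)
```

Plugging such an estimate into gradient descent ascent in place of the true gradient yields the zeroth-order algorithms analyzed, with the batch size chosen to balance estimator variance against oracle cost.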
d
5a85a135ca817f591cf45c23f51c690e
Learning a GHP from observed heterogeneous event sequences requires us to infer and align the corresponding Hawkes processes with respect to the underlying graphon, for which traditional methods like maximum likelihood estimation are infeasible. To overcome this problem, we design a novel learning algorithm based on reward-augmented maximum likelihood (RAML) estimation {{cite:20bbbba418f02322263405263406a8ec7a7a213b}} and the hierarchical optimal transport (HOT) distance {{cite:f4033129d6268fbfd476079cdb4edf0273b4bec2}}, {{cite:962c2cf38d53c6b6ac9ae8294f207bb81b388705}}. In particular, given observed event sequences and those generated by our GHP model, we calculate the HOT distance between them and obtain an optimal transport matrix corresponding to their joint probabilities. These probabilities act as rewards modulating the log-likelihood of each generated event sequence. Taking the reward-augmented log-likelihood as an objective, we estimate the parameters of the GHP accordingly. We verify the feasibility of our GHP model and its learning algorithm on both synthetic and real-world data. When modeling sparse heterogeneous event sequences that have many event types but a small number of events, our GHP model significantly mitigates the risk of over-fitting and thus outperforms other state-of-the-art point process models.
i
119d25fd1b71d8ed8e0586752e9bd025
Figure REF shows the correlation bands at {{formula:e54d34a0-78ec-4a2a-aacf-aa67ee6dd9dc}} for the pairs {{formula:5caa49c9-2183-42f4-ab76-25fbc25ddc99}} and {{formula:b591012f-68b3-4859-8aeb-c1f5c510a44e}} in linear scales, including only the constraints from oscillation data, for NO and IO taken separately (i.e., without the offset {{formula:1ec9ff23-8e4a-4234-873c-423633cf768e}} ). In the top panel, the bands have a tiny width, reflecting the small fractional errors on the oscillation parameters ({{formula:27a68969-2050-40cd-9e64-177ae9ff69d4}} ) relevant for the pair {{formula:325f4237-4bec-4c57-b74c-2f7cf9444ba2}} . In the bottom panel, the widening of the bands is almost entirely due to the unknown Majorana phases in {{formula:70f178fa-e28c-42ad-b381-238fd8201201}} . See also {{cite:0ce2bc56581980e5e14dcf264c12b3f47bcc71f1}} and Fig. 2 therein for analogous correlation plots in logarithmic scales. {{figure:80777ef5-a06c-4b7d-b8e2-968b65a1f564}}
r
a7b650186b198f00e5d717084e480f60
Several works have attempted to remedy these issues. DHSL {{cite:c1867cae6353238e9c325839c60ef9c5301b9b3f}} proposes to use the initial raw graph to update the hypergraph structure, but it fails to capture high-order relations among features; moreover, its optimization algorithm can be computationally expensive and cannot be incorporated into convolutional networks. DHGNN {{cite:c4cd62bd5d9d4a6f48393ddff681066c5d38a9d4}} manages to capture local and global features, but the adopted K-NN method {{cite:9a79a1ec5111dabc651b64366fe284edff22e7a6}} constrains the graph structure to a k-uniform hypergraph, sacrificing flexibility. AGCN {{cite:42c940a0f6bf7e43130411feae0615b8e635eda3}} proposes a spectral graph convolution network that can optimize the graph adjacency matrix during training. Unfortunately, it cannot be naturally extended to hypergraph spectral learning, which is based on the incidence matrix.
i
b0008ef6d0bbce8bb23ebba1afd12e7e
Neuronal circuits in the brain are highly complex. Even for the retina, a relatively simple neuronal circuit, the underlying structure and, in particular, its functional characteristics are still not completely understood. However, the retina serves as a typical model for both deciphering the structure of neuronal circuits {{cite:890a43547ad55594f75e9430667dea43832734fa}}, {{cite:a73894fc9354321ae092ad7b6b8f7fbd66659533}}, {{cite:6c70dbc8dc3df73459bfe45ffcc7363008975809}}, {{cite:c370572c14632f36f28d46b8055b899764b79f30}}, {{cite:95931379861e55e22e1b9ed8a765d14a47fbd700}}, {{cite:681680fb5552a0c51788c2a59063f313e09a8b51}} and testing novel methods for neuronal coding {{cite:a5685416e73d98c5b6e9135251b27da0df50b1a2}}, {{cite:bbbc7d31851e620a58e84160828187274e33870f}}, {{cite:66ea10daa70631808eacbdf0e14b0f2b11f604e1}}, {{cite:177268250597612f92513ca71273959dfd12f781}}, {{cite:c74beb347596344180201bee5b5b13390383b74b}}. The retina consists of three layers with photoreceptors, bipolar cells, and ganglion cells, together with inhibitory horizontal and amacrine cells in between, as illustrated in Fig. REF (A). The ganglion cells (GCs), as the only output neurons of the retina, send visual information via the optic tracts and thalamus to cortical areas for higher cognition. Each ganglion cell receives input from a number of excitatory bipolar cells (BCs) as the driving force to generate spikes (Fig. REF (B)).
i
3e77fc2b7171f47006e62583ef71a153
In this paper, we present a new neural architecture called EfficientSeg, which can be viewed as a modified version of the classic U-Net architecture {{cite:1db651f2ffcad977ad2c396f3af0fecf58306c1d}} in which the blocks are replaced with the inverted residual blocks presented in MobileNetV3 {{cite:8aa186f2a5c8677c2c5867451d52ad3d2abf4b3e}}.
m
e396ce75dfbd795aeee373356d78f653
Although the S-G setup has already been widely used {{cite:48a64d7b532a56d1dc353c00f65fd7f5e690d872}}, {{cite:c08fd2340d337e738b73d227f716750c50237d55}}, {{cite:31d1da4a5a3be81e74dab30c0f5b79ffb0144cf4}}, the realization of the sequential S-G setup is not as easy as it seems. Extremely accurate calibration and control are required, which makes it difficult to realize even today. Fortunately, the recent rapid development of programmable quantum processors {{cite:4fee28cceeacf8c3b1f9a72455339bf0ee21e156}}, {{cite:fab7e841f2864d69968706826e3f998b0333bca6}}, {{cite:7d53a27fce2ce617e1f426fbafc839953d9dc70e}}, {{cite:3aa344fe8b236a4b9bf7d6f2f16d1366ec3bdffc}}, {{cite:08289fd61a28f84165fd2d83826fa848c5f363ba}}, {{cite:c8ff552f4d5bcba3585931cdbc6a9145864d68a0}}, {{cite:7011451353ab76264944f5d04a47c6131bc69d09}}, {{cite:4761a549030919c07ed85115f63c92b7230edae1}}, {{cite:1bb626056fd327a7d2c85098dafab2116ef3f965}}, {{cite:1c6dcde433d90c65765f152a7a6bd2c1fc76ad9f}} may provide an alternative way to demonstrate sequential S-G experiments. While only noisy intermediate-scale quantum (NISQ) processors {{cite:ef585f340a8c915e63b725538f36333a747de07a}} are currently available, they are sufficient for our purpose. In this work, we show how to realize S-G devices with different measurement directions by using parametric quantum circuits. As in current variational quantum algorithms {{cite:cc7b0d704b54f382e8e3209a151af8e9d5d63df3}}, {{cite:d545c437ceed0fe170caf443a7218e569a0a708b}}, {{cite:38cbc83277a1a60419b456beb4ce626774a5e71e}}, a classical optimizer is used to find the optimal parameters of the quantum circuits {{cite:9907136eb62b0bcf1edcb747912c388e5ac53729}}. The quantum circuits with optimal parameters then act as S-G devices to perform the measurement of a spin, i.e., a qubit in our case.
Our variational quantum circuits are shallow and contain only nearest-neighbour two-qubit gates, which makes our S-G quantum circuits very friendly to NISQ hardware implementation. For better execution of the sequential S-G experiment, we propose to use cross-shaped quantum processors to execute the S-G quantum circuits. {{figure:80a207eb-b926-4889-87d6-dd8f0cb30b96}}
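As an illustration of the optimization loop described above, the following pure-Python toy (not the actual circuit ansatz or optimizer used in this work; all names are hypothetical) finds rotation angles (α, β) of an Rz–Ry circuit so that a computational-basis measurement after the circuit reproduces a Stern–Gerlach measurement along an arbitrary Bloch direction (θ, φ). A simple grid scan stands in for the classical optimizer.

```python
import cmath, math

def spin_up(theta, phi):
    """Spin-up state along Bloch direction (theta, phi)."""
    return [math.cos(theta / 2), cmath.exp(1j * phi) * math.sin(theta / 2)]

def circuit(state, alpha, beta):
    """Apply Rz(alpha) then Ry(beta) to a single-qubit state."""
    a = cmath.exp(-1j * alpha / 2) * state[0]
    b = cmath.exp(1j * alpha / 2) * state[1]
    cb, sb = math.cos(beta / 2), math.sin(beta / 2)
    return [cb * a - sb * b, sb * a + cb * b]

def cost(params, psi):
    """1 - P(|0>): vanishes when the circuit measures exactly along n."""
    out = circuit(psi, *params)
    return 1.0 - abs(out[0]) ** 2

def optimize(psi, grid=120):
    """Stand-in classical optimizer: coarse scan over both angles."""
    best, best_p = 2.0, (0.0, 0.0)
    for i in range(grid):
        for j in range(grid):
            p = (2 * math.pi * i / grid - math.pi,
                 2 * math.pi * j / grid - math.pi)
            c = cost(p, psi)
            if c < best:
                best, best_p = c, p
    return best_p

psi = spin_up(1.0, 0.5)
alpha, beta = optimize(psi)  # one analytic optimum is (alpha, beta) = (-0.5, -1.0)
```

On hardware, the cost would be estimated from measurement statistics and minimized with a gradient-based or gradient-free optimizer; the principle is the same.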
i
e111cc4767466348237f02f9dcd132af
One of the advantages of the present approach is that the density matrix captures this information in the corner term (proportional to the codim-2 area), thus explaining why the Jafferis-Lewkowycz-Maldacena-Suh (JLMS) {{cite:cdb97185f0712eb9fece8b07f929e05dfff8d3b5}} proposal for the gravitational modular Hamiltonian explicitly contains the area operator {{cite:1cb6df839b499a5250a94ef9aeebdb213c72bf68}}. Accordingly, we will compute the density operator and demonstrate this claim in the JT example. It is worth emphasizing here that the cosmic brane model {{cite:e506d81e5e41cc57cbed14d49bb739dccb8a6896}} is equivalent to the proposal based on the Hayward/corner term at the level of the partition function, whose saddles are closed, periodic, Euclidean geometries, but the latter model furthermore provides the underlying description of the reduced density matrix and modular Hamiltonian in gravity. Moreover, this is better supported by holographic arguments on the states and their matrix elements.
i
cd1a38e80eb1f1c505340f9197b09416
We have also assumed that the quantum computing chip {{formula:0b194cc4-9316-49f5-a87e-973c31bc7ac1}} is sparse. Our method is superior to the direct fidelity estimation method {{cite:897e1cb30008acad606496ad35f00b4ec8e52a12}} even when the graph {{formula:3ad6ca1c-ba33-4bd0-8ef8-893ce3c53e32}} corresponding to the chip {{formula:fcd04a2c-748b-4835-b158-df540dd5883f}} is not sparse but is a planar graph with a constant maximum degree. Since for almost all current chips, {{formula:b6e2062f-2c3b-4742-9e4c-31f553af1db2}} is a planar graph with a constant maximum degree, this assumption is quite natural. For example, the geometry of Google's Sycamore 53-qubit chip {{cite:a6ce24c063612d937b73bceeb71f691491281d3a}} is a planar graph with the maximum degree four, although it is not sparse. As a consequence of the planar separator theorem {{cite:fa3365c16074e26b9344e4d13b767fcd1c522258}}, {{cite:7c45a3b8f8f8765f1a548eea47978b45357f2570}}, {{formula:9f87f214-dbbe-4517-8f36-be2552427c70}} is {{formula:7dce1d18-33e8-4ce1-8f3a-cb8077a48015}} for logarithmic-depth quantum circuits on any planar graph with a constant maximum degree. Therefore, the sample complexity of our method is {{formula:687eea76-a515-4641-83df-cf5d794e60ae}} , which is less than {{formula:fe5d7af4-ad77-42e3-a156-86032a1cfe57}} .
d
574dbd412ca1566810216f07a90ba013
Strangely, the Odd Path Polyhedron (the “dominant” of odd paths) and the related integer minimax theorem {{cite:363182dc6a6c77f3a7a007c90bec4218b2f542ca}} (see also {{cite:3b3dfc70ba1038548e58d22c39fc3c23ce107a4b}}) were determined only much later.
r
288da6dd98dd890bfc99c724aca175a6
We have reached the conclusion that in our theory the minimal group should be the entire conformal group. On the mathematical side, the conformal group has been investigated thoroughly from different aspects {{cite:00132a8dd01089c030cac7042b6bd48e8e5959a7}}, and its application to physics, especially to quantum field theory, was once also widely considered. However, the application is not fully satisfactory because, other than the photon field {{cite:40c55dc58646cf0347e1f8bdbc0f3905768635e6}}, {{cite:586030598ea33451a73615df68a7080e39126bd4}}, no quantum system has been found whose Lagrangian is conformally invariant, unless the masses of the involved particles are null {{cite:47c9f400fa62278c85122bbf01110ea60512d072}}, {{cite:99b996789b2fe6e9d5311ea1deb6043531111b29}}, {{cite:98c50ee15b7b82f5b56c1fdcc73068872ed133f1}}, {{cite:c9f846fcf51c2a9ee57281dfeea1e779f5afcb59}}, {{cite:ad3f87b5b9cf36b3419cb60becbe8a05f2574c19}}, {{cite:c28abedc00f719ff1fd5b199072edcffbebfe4bf}}. In our treatment we turn from searching for invariant dynamics to asking how the conformal group makes physical quantities run, so as to match the renormalization result that some physical constants, for instance charge, mass, and Green functions, vary regularly with the energy scale. According to our analysis, the scaling transformation can somehow cause a variation of the coupling constants and masses. In this paper, we have also discussed the role of the generators {{formula:8958b7f0-d417-450b-83b9-5de07c94d680}} and {{formula:079b3faa-f6c2-4591-baa1-8385161520f5}}, which conformally change the interaction vertex from a chiral one to a non-chiral one. The action performed by the generators {{formula:f0dceed8-48ef-4223-bb0b-4f07bd65c23d}} or {{formula:9c38a8ea-b185-4c20-89cd-63487e9d2069}} may be caused by an external non-Hermitian stimulation.
Basically, it is with such stimulations that the value of {{formula:4fa86687-1544-4b87-ac3e-eeb2e17a0aa4}} changes from 0 to {{formula:846d6561-4abe-4aac-b00f-ec675b76c129}}. Then, through a certain decay process, fermion mass is generated. In summary, we arrive at the conclusion that the running of the interaction vertex corresponds to a certain kind of curving of complex space, which is governed by the generators of the conformal group. The conformal group might live for curving, but not for invariance.
d
14ada5f35f3697a46afdb472d493bf2c
The simplicity and generality of the DISK framework enable scaling of any spatial model. For example, recent applications have confirmed that the NNGP prior requires modifications if scalability is desired for even a few million locations {{cite:d0d1dc8402f66890b89a31e1774f18ebed730915}}. In the future, we aim to scale the ordinary NNGP and other multiscale approaches to tens of millions of locations with the DISK framework. Another important direction for future work is to extend the DISK framework to scalable modeling of multiple correlated outcomes observed over a massive number of locations.
d
42833935c00732941418ca4c5a1fd163
As a consequence, recent attempts {{cite:ce5b5f05a442c7e3f8b7c849cad6310480ef9afa}}, {{cite:81da3daf8c2c6374bcb886f58d25737020f5acf1}}, {{cite:77c3b20abb9cbddfaa3757dd464ff7824fd1bb8b}}, {{cite:4cf82af19fb075cd58c4c7ea1275eab043787922}} aim to bring this power to real image editing by inverting an image to a latent code {{formula:17fbf242-2db7-4fd9-983e-934e081cd151}} . There are two prominent demands for this task: whether the inverted code can faithfully reconstruct the original input, and whether the pre-learned semantic directions can be successfully applied. However, existing methods seem to be stuck in a paradox, as achieving one end inevitably sacrifices the other. As shown in Fig. REF , I2S, I2S++, and pSp {{cite:ce5b5f05a442c7e3f8b7c849cad6310480ef9afa}}, {{cite:81da3daf8c2c6374bcb886f58d25737020f5acf1}}, {{cite:77c3b20abb9cbddfaa3757dd464ff7824fd1bb8b}} concentrate only on obtaining faithful reconstructions, but the inverted codes show limited editability. In contrast, latent codes obtained from In-domain inversion {{cite:4cf82af19fb075cd58c4c7ea1275eab043787922}} (Fig. REF ) are regularized to lie in the semantically meaningful domain at the expense of fidelity. We argue that balancing these two factors solely based on a single image is extremely challenging, as there is no indicator to shed light on the editable domain in the latent space, preventing the optimization from striking a perfect balance between the two factors.
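The fidelity-editability tension can be seen even in a linear toy model. The sketch below is purely illustrative (the actual methods optimize StyleGAN latents with perceptual losses; all names here are hypothetical): it inverts a 2×2 linear "generator" by gradient descent on a reconstruction loss plus a regularizer pulling the code toward the latent mean. With weight lam = 0 the reconstruction is exact but the code may drift far from the prior; with lam > 0 the code stays "in-domain" at the expense of fidelity.

```python
def generate(gen, w):
    """Hypothetical linear 'generator': a 2x2 matrix acting on the code w."""
    return [gen[0][0] * w[0] + gen[0][1] * w[1],
            gen[1][0] * w[0] + gen[1][1] * w[1]]

def invert(x, gen, w_mean, lam, steps=2000, lr=0.05):
    """Minimise ||generate(w) - x||^2 + lam * ||w - w_mean||^2 by gradient
    descent; lam trades reconstruction fidelity against staying in-domain."""
    w = list(w_mean)
    for _ in range(steps):
        r = [g - t for g, t in zip(generate(gen, w), x)]
        grad = [2 * (gen[0][0] * r[0] + gen[1][0] * r[1]) + 2 * lam * (w[0] - w_mean[0]),
                2 * (gen[0][1] * r[0] + gen[1][1] * r[1]) + 2 * lam * (w[1] - w_mean[1])]
        w = [w[0] - lr * grad[0], w[1] - lr * grad[1]]
    return w

def recon_error(x, gen, w):
    """Squared reconstruction error of the inverted code."""
    return sum((g - t) ** 2 for g, t in zip(generate(gen, w), x))
```

Sweeping lam traces out exactly the trade-off curve discussed above: no single value recovers both perfect fidelity and a code at the prior mean.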
i
54030bb8c4f4ef6a306d06b957c14395
Although many approaches {{cite:7cdd5df27b48057d1538b07bec447612240e39ef}}, {{cite:2ed0444cd2ece43fc1caa148e08dcaa363e37cdf}}, {{cite:275646f1f07669c49a3e87b64b15cc2cc320b03c}} have been proposed to handle noisy labels, they cannot be directly adopted in DML, since these methods mainly focus on classification tasks using label information rather than on the similarities between data. Given the characteristics and limitations of DML, we specially design adaptive hierarchical similarity metric learning to counteract noisy labels. In other words, ours is a robust DML approach that alleviates the impact of noisy labels, rather than a simple application of other denoising methods to DML.
m
d6f42a991db1fd6da8d081b7b186a744
In an EIV model, however, errors in both {{formula:b8eda672-467f-41c2-9ab5-da1ccca042ef}} and {{formula:998fa3ce-a91d-495e-a146-026d07d2b088}} are considered; e.g., see {{cite:c49e9240ffcc02089ad6f44d7daa5cecd781739d}}. Total least squares formulation is a well-known EIV model, where the goal is to solve the following mathematical problem (e.g., see {{cite:02ef68bc82412855d75dba30b1609d1df12b7c2e}} and {{cite:034172f5c3f6d86b1529df2311cd2f20531f7fcb}}): {{formula:6c57fbd9-71bb-43dc-98c9-4878a9b4a4e5}}
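As a minimal illustration of the EIV idea, the following pure-Python sketch fits a line by orthogonal (total) least squares, treating errors in x and y symmetrically. The general matrix TLS problem in the display above is solved with an SVD instead; this scalar closed form is only the simplest special case (and assumes the cross-covariance is nonzero).

```python
import math

def tls_line(xs, ys):
    """Orthogonal-regression slope and intercept for y ~ a*x + b,
    treating errors in x and y symmetrically (1-D total least squares).
    Assumes the centred cross-covariance sxy is nonzero."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    # Slope of the first principal axis of the centred scatter.
    a = (syy - sxx + math.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)
    return a, my - a * mx
```

Unlike ordinary least squares, which minimizes vertical residuals only, this minimizes perpendicular distances to the line, consistent with errors in both variables.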
i
c03872d1e521636bc44a626c93469049
where {{formula:89808d11-1125-41e8-8852-aea0514b3a97}} is the number of time-steps, {{formula:9dc7bf81-06e9-47a9-b82e-43455d9b915a}} is the number of joints and the sum over {{formula:2ccd9067-fc78-4e76-bb82-df23d28b3892}} accumulates the error in the {{formula:fe0fdc03-c2a7-49de-87a1-e48865f424f4}} , {{formula:7207e4ad-343d-49d7-8a4d-373e42b511f7}} and {{formula:05b96d2a-77d9-4635-add3-16445f053075}} dimension of the given Euler angles. The final results correspond to the average taken over the four samples. Following {{cite:6b166cd69a1cee66bdf91a7daa9f32ca988f623d}}, the results for a running average over 2 and 4 frames (Run. avg. 2/4) and a zero-velocity-model are also documented as baselines. The zero-velocity model returns the first observed frame as the prediction for all successive frames.
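A sketch of how such an error metric and the zero-velocity baseline can be computed (the helper names are hypothetical, and the exact averaging conventions of the benchmark may differ slightly):

```python
import math

def euler_error(pred, gt):
    """Per-frame Euclidean error in Euler-angle space.
    pred, gt: [T][J][3] nested lists (T time-steps, J joints, x/y/z angles)."""
    errs = []
    for p_t, g_t in zip(pred, gt):
        sq = sum((p - g) ** 2
                 for p_j, g_j in zip(p_t, g_t)
                 for p, g in zip(p_j, g_j))
        errs.append(math.sqrt(sq))
    return errs

def zero_velocity_baseline(last_observed_frame, horizon):
    """Zero-velocity model: repeat the last observed frame for every
    future time-step."""
    return [last_observed_frame] * horizon
```

The per-frame errors would then be averaged over the four evaluation samples, and a running-average baseline is obtained analogously by predicting the mean of the last 2 or 4 observed frames.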
r
bef5ac330304ade0ef9a30bafd56d38d
However, the dynamics of a large class of Boolean models does not have an absorbing state. In the present manuscript, we analyse the stochastic dynamics of linear threshold models, formulated in terms of {{formula:c9b43bbc-b777-4fad-8e18-70a5e55b2ca5}} variables, and discuss extensions to threshold models with multi-node interactions and models with Ising spin {{formula:0b239763-7680-40d9-98bb-e5e2dfc80e47}} , which do not have absorbing states. In such models, approximation schemes that are successfully used for dense systems, such as the heterogeneous mean-field and TAP approaches {{cite:59db66accb0d2402252234c9059818af07444207}}, {{cite:bfd7daae97121606152243fc96aa84db5ccda6bf}}, have been shown to be ineffective for sparse systems {{cite:ccb87e220e5a5e6739a9221f69b3d406b4bfb51f}}, {{cite:e8a70dfd079d82bba417d4591ffbd9c96a3b12cb}}. On the other hand, generating functional analyses {{cite:aaacf49fa8268045320e94753bdc7e156185f0bc}} can accurately characterise site averaged quantities, such as global magnetization and time-lagged correlations {{cite:297b4a38f3f3c0d253a3839a26ab7dfdc252c6b5}}, {{cite:017b2fb64fd2041b916c82e4580d14ce2eebfe33}}, in sparse heterogeneous systems. However they require averaging over graph ensembles, hence they are not able to describe single instances. These can instead be investigated by the dynamic cavity method {{cite:017b2fb64fd2041b916c82e4580d14ce2eebfe33}}, {{cite:04f0004e583a93bdd2a50c1081a4c4839d055c2f}}, {{cite:1787908c64cd456313cc92e95750d4a53dc841ac}}, {{cite:0e4c09cf3168d13a75ea8c75468446e49d73a953}} which is particularly effective in the analysis of sparse systems, whenever short loops are rare. In particular, the dynamic cavity method can potentially investigate dynamic properties at the level of individual nodes {{cite:04f0004e583a93bdd2a50c1081a4c4839d055c2f}}. 
Single-node statistics has, for instance, been shown to be highly heterogeneous for non-equilibrium models with an absorbing state {{cite:0e4c09cf3168d13a75ea8c75468446e49d73a953}}, {{cite:6f662d65ceb30cb4a92d6099e9b9956470f0769e}}, {{cite:688de10da7c76cad7a9e1baf7856c5173407b9e5}}. Little, however, is known for other models, including spin models and linear threshold models.
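For concreteness, one synchronous stochastic update of a ±1 linear threshold model on a sparse graph can be sketched as follows (a generic Glauber-like noise parametrization, not necessarily the precise dynamics analysed in the manuscript). Note that at any finite noise level every configuration remains reachable, i.e., there is no absorbing state.

```python
import math, random

def step(spins, neigh, J, theta, beta, rng):
    """One synchronous update of a +/-1 linear threshold model:
    node i tends to align with sign(J * sum_j s_j - theta_i), where the
    sum runs over its neighbours; beta is the inverse noise level
    (beta -> infinity recovers the deterministic threshold rule)."""
    new = []
    for i in range(len(spins)):
        field = J * sum(spins[j] for j in neigh[i]) - theta[i]
        p_up = 1.0 / (1.0 + math.exp(-2.0 * beta * field))
        new.append(1 if rng.random() < p_up else -1)
    return new
```

On a sparse random graph, iterating this map and recording per-node magnetizations is exactly the kind of single-instance, single-node statistics that the dynamic cavity method aims to predict without simulation.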
i
9c72c2ccb9e4618aba7590ec7df3aa6d
Paraphrase is a restatement of the meaning of a text using other words. Many natural language generation tasks are paraphrase-oriented. For example, abstractive summarization uses a condensed description to summarize the main idea of a document, while text simplification simplifies the grammar and vocabulary of a document. In paraphrasing, copying and rewriting are the two main writing modes. Recently, the encoder-decoder structure (aka. the Seq2Seq model) has become more and more popular in many language generation tasks {{cite:c1e2a51a3897a88f32c5fe5e6dbabd21b4e1db5f}}, {{cite:63d5cab5b1a5965b68afd32579d412c98eebfd59}}. In such a structure, the source text is encoded by the encoder into a context vector. A decoder then decodes the semantic information in the vector and outputs the target text. Studies such as {{cite:43cb6a1a528adcd99dd669d8c9a15a0790d4163b}}, {{cite:95f272375f7da5113ad303568f553d72b96985ae}} have applied the popular Seq2Seq model, initially used in machine translation {{cite:c1e2a51a3897a88f32c5fe5e6dbabd21b4e1db5f}}, to the paraphrase task. Despite their competitive performance, these models seldom take into account the two major writing modes of paraphrasing.
i
c41f11047ea7fb9c8b097c0b853767e7
Next, we study the per-user throughput performance. Specifically, we consider the following per-user net throughput expression, which takes into account the channel estimation overhead {{cite:8c7e9f5b4422d712ba31999f46ed23bf27cc5db2}}: {{formula:25cb5b3d-2b09-46a9-92de-6313c1ae4ef4}}
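The redacted expression is not reproduced here, but a common form of the per-user net throughput in this literature discounts the fraction of each coherence interval spent on pilots. Under that assumption (which may differ in detail from the paper's exact expression; B, τ_p, τ_c, and the SINR are the assumed ingredients):

```python
import math

def net_throughput(bandwidth_hz, tau_p, tau_c, sinr):
    """Per-user net throughput with channel-estimation overhead:
    a tau_p-symbol pilot phase out of each tau_c-symbol coherence
    interval carries no payload, so the effective rate is scaled by
    (1 - tau_p / tau_c). This is a common textbook form, not
    necessarily the paper's exact expression."""
    return bandwidth_hz * (1.0 - tau_p / tau_c) * math.log2(1.0 + sinr)
```

For example, with a 20 MHz bandwidth, 10 pilot symbols per 200-symbol coherence interval, and an SINR of 3, the overhead factor is 0.95 and the spectral efficiency is 2 bits/s/Hz.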
r
b1d07a418d185bcfcca42be4e28f6c48
Injecting nodes involves generating discrete graph data, such as adjacency matrices or feature matrices, which gradient-based approaches handle poorly in many circumstances. This problem can be further aggravated in the black-box setting, where gradient information from the surrogate model might not be accurate. Moreover, generating nodes and assigning edges are naturally sequential, and reinforcement learning fits such a Markov Decision Process (MDP) well. Hence, to perform the optimization task in Eq. (REF ), we propose to explore deep reinforcement learning. Specifically, we utilize an on-policy A2C reinforcement learning framework, adapted from {{cite:43cbab7ed3e4414798bde976b1787c7c0dea8360}}, instead of off-policy algorithms such as deep Q-learning, since A2C circumvents the need to calculate the expected value for every possible action, which is intractable.
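To illustrate why an advantage-based, on-policy update sidesteps the per-action value evaluation of Q-learning, here is a minimal advantage actor-critic on a two-armed bandit (a pedagogical toy, not the graph-injection agent itself): the policy is updated using only the advantage r − V of the sampled action, with a learned scalar baseline V, so no expectation or max over all actions is ever computed.

```python
import math, random

def train_bandit(reward_fns, episodes=3000, lr=0.1, seed=0):
    """Minimal advantage actor-critic (A2C-style) on a two-armed bandit:
    a softmax policy over two logits plus a learned scalar baseline V."""
    rng = random.Random(seed)
    logits, v = [0.0, 0.0], 0.0
    for _ in range(episodes):
        m = max(logits)
        exps = [math.exp(l - m) for l in logits]
        probs = [e / sum(exps) for e in exps]
        action = 0 if rng.random() < probs[0] else 1
        r = reward_fns[action]()
        adv = r - v                      # advantage of the sampled action
        for i in range(2):               # actor: policy-gradient step
            logits[i] += lr * adv * ((1.0 if i == action else 0.0) - probs[i])
        v += lr * adv                    # critic: move the baseline toward r
    return logits, v
```

In the actual attack setting, an action would be a node-injection or edge-assignment decision and the reward would come from querying the black-box model; only the sampled trajectory is ever evaluated.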
m
61937614c99a6cf7f619a7656e28b018
In {{cite:eedd9674823a37415588eab2f820bcb1ef1e7321}}, it was shown that {{formula:71c20f0e-7780-45db-884d-dacef7c795bb}} has infinitely many path components for any spin manifold {{formula:f4d7ff88-a889-4319-ac37-769170bb7df6}} admitting a psc metric that satisfies certain topological conditions (explained below in Section REF ). This is achieved by showing that none of the metrics constructed using {{cite:8ed50023efa702ae2d638171c5ec8baa88f70039}}, {{cite:641db6946b95ff0f169c9f3d53dcbfcb3b135934}} that lie in the infinitely many path components of {{formula:dbee66c2-d1fa-498e-a3ba-7809a7812557}} can lie in the same path component of {{formula:fe77c2ca-5753-4871-92d3-6e18a309013e}} . As {{formula:08d4fa05-1672-468a-aec3-eb2574051cb3}} automatically satisfies the topological conditions of {{cite:eedd9674823a37415588eab2f820bcb1ef1e7321}}, in {{cite:9dd507489104710db41f30e06f052992bc3c4bd6}} it is shown that {{formula:fc71f0a5-fed5-4dc9-862b-d60aed322bec}} has infinitely many path components. We similarly are able to claim that {{formula:cb3d8f4d-9978-43e7-93b6-34cf01fa7117}} has infinitely many path components for any of the manifolds listed in Theorem REF , but only if they satisfy the topological conditions of {{cite:eedd9674823a37415588eab2f820bcb1ef1e7321}}.
r
1d8642315e4504c9d30bbf680b850586
Our analysis of the complexity of operators generated by the Heisenberg and SU(1,1) groups has many interesting applications. For example, coherent and squeezed states of light are essential building blocks of quantum optics {{cite:0344eadd873f6bb8a75f2820c842a569f5540749}}, {{cite:cd3de6c45540fc52dd9c52fae72b163dd34b6f69}}, {{cite:8d8d57100984b67a22372dbde45512863c20bb97}}, and these states can play an important role in continuous variable quantum computation, see e.g. {{cite:b48b5da317c71fc9fe89b36a752ecbce9af0e77f}}, {{cite:d14f0aa0f264114475db3a15c068de844b2f3c92}}, {{cite:5827e5c1b2b20c7d7d51b60784b9f4b6224951ec}}. For example, in one such algorithm {{cite:d14f0aa0f264114475db3a15c068de844b2f3c92}} the squeezing {{formula:6614d5c5-146d-4ec8-ad57-2be03f3a085d}} is inversely proportional to the precision {{formula:e8e8bfff-3d1f-47b0-b41c-a11f59bae80d}} of phase estimation (for fixed computational time), {{formula:2d8d782f-a2a6-445d-bfea-696e8b3665e8}} . Using (REF ), this implies that the resulting precision for a measurement constructed from a squeezed state with complexity {{formula:2664a762-9126-419c-8b12-590bc32b074b}} scales exponentially with the complexity {{formula:310a0b77-45b3-48c0-bcab-04be68afb9df}} , suggesting that complexity itself might be an exploitable resource for quantum information. Alternatively, the average energy (or particle number) of a squeezed vacuum state – another useful quantum information resource – scales with the squeezing parameter as {{formula:fbb59125-92cf-458e-bd38-b0f40c51a328}}
d
7c4518883b5a4e2ccbc3577c964e49af
However, as shown in Figures REF and REF , these reductions in active users and visits to different categories do not account for all of the reduction in income diversity, indicating that more microscopic changes in human behavior have contributed to a further decrease in income diversity in cities during the pandemic. To investigate which behavioral changes during the pandemic contributed to the decrease in income diversity, we seek to identify individual-level behaviors that changed during the later stages of the pandemic. To do so, we analyze the behavioral parameters of the Social-EPR model (proposed in {{cite:010a5ba2cac5f8e27d46c6e40b569ddb7df3e08f}}, which extended the EPR model proposed in {{cite:3ccbaf17a33e317c4f53a84830b0ae9a59fdb8ee}}). {{figure:275b9239-eda2-4bc2-a8a6-c6757a08f787}}
r
f2a1717e9fea11e5ea1849d365c8786b
In this work, we propose to use available off-the-shelf models to help in unconditional GAN training. Our method significantly improves the quality of generated images, especially in the limited-data setting. While the use of multiple pre-trained models as discriminators improves the generator, it has a few limitations. First, it increases the memory requirements of training. Exploring the use of efficient computer vision models {{cite:dbbd0eb5d48a74ab49d4a2790698527cf57be330}}, {{cite:6351138f4dfdbcd550782a0608708326dad1b46f}} will potentially make our method more accessible. Second, our model selection strategy is not ideal in low-shot settings when only a dozen samples are available. We observe increased variance in the linear probe accuracy with sample size {{formula:e29a5957-cb4f-446f-a3cc-3ab0ca17c80e}}, which can lead to ineffective model selection. We plan to adopt few-shot learning methods {{cite:4f0c059862e4125d070ab4d282bad2c8710ecc38}}, {{cite:9ebd8b177aeb76e460e157581bbe61bfce591ea9}} for these settings in future work.
d
f5c50f16492cbb8b72895a471be695b2
While we mainly focused on the popular IM-based SFDA methods, our proposed uncertainty-guided adaptation is also applicable to other SFDA frameworks, e.g., neighbourhood clustering {{cite:7f3e950f1c1b2c7c3d0f07a0201319556491bb4e}} or extensions to the multi-source SFDA problem. Moreover, the principles we build upon are general, interpretable, and have strong backing in classical statistics. We believe that uncertainty-guided SFDA will become a backbone tool for future methods in DA that generalize over different problem domains, are less sensitive to the training setup, and will provide good results without extensive ad hoc tuning to each problem.
d
8cae98eb76f6a3d5efb9b32d9448ffc6
There are a number of avenues for future work that extend our proof-of-principle experiments here. Apart from exploring different CNN architectures, such as those tailored to segmentation tasks {{cite:6e6ffb05de3def3301c756df2da58a21e984266f}}, we could consider other types of annotation appropriate for cell-counting tasks, such as global counts {{cite:0253e2a82d85bd791decc062f1a2d814be5f5b38}}. Beyond sparse cell-counting tasks, we envision our technique being applied to other similar tasks, such as identifying defects in otherwise pristine surfaces, for example semiconductor wafers. Alternative hardware implementations may also improve performance for such tasks. For example, while our current prototype uses brightfield illumination, it may be advantageous to use darkfield illumination, which would substantially reduce the background and thus improve the contrast of the overlapped images. Further, using higher-dynamic-range sensors could compensate for the contrast lost due to overlapping. Another potential direction is to use the recently proposed random access parallel microscopy setup {{cite:d58347702f7ba87434fb1776db31d9cd12578c93}}, which images multiple FOVs sequentially with a single parabolic mirror as a common tube lens. Such a setup has the advantage that all sub-images overlap completely on the sensor.
d
7e22ff3dd65cedec3f32e787f93fc62f
Mahalanobis and Gram Matrices. We identify two potential reasons for the weaker-than-expected performance of these two feature-based scoring mechanisms. First, although it is unclear exactly how this affects detection, we note that both methods were evaluated under a low input dimensionality (i.e., {{formula:841487b4-352c-4b57-8d96-551653686e27}} ) in their original works, while we operate at a much larger scale (i.e., {{formula:1aa24147-06ec-45b7-9480-5fd46c1ce4b5}} ). Second, we find that the reported detection rates of these methods are over-estimated due to the use of poor benchmark tests. Specifically, one of the primary tests in both {{cite:c8ee64a8efd88ab68e356e9812c2b5f4cc696821}} and {{cite:c82801b0b98e6b1a9159105db3d0c77172f55a6a}} is detecting samples from a resized version of LSUN (“LSUN-r”) with a classifier trained on CIFAR-10. However, due to inappropriate resizing operations, LSUN-r images exhibit obvious artifacts {{cite:b853dacd3ac6d428b973e210977b2b316c0b68e4}} and might provide trivial signals for feature-based methods to exploit to achieve high detection performance. As evidence, on a pre-trained CIFAR-10 model, Gram Matrices (using its official implementation) achieves {{formula:decc00f3-48dd-4667-9e0b-1c35d1bf9788}} AUROC against LSUN-r; yet, if one puts aside LSUN-r and instead resizes the LSUN images with PyTorch transformations, the AUROC of Gram Matrices immediately drops to {{formula:c6d647a2-57ba-4c2c-810d-ba3a422bdbc1}} . By contrast, the MSP's AUROC is {{formula:280d018e-1420-42c6-ba32-a79fdf997807}} and {{formula:6cad6d3f-d10e-403f-9bdd-f4dfd6b78b50}} , respectively. We observe similar trends in Mahalanobis's performance. One can clearly see how the use of LSUN-r leads to over-estimated detection abilities for these two feature-based methods.
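For reference, the AUROC values quoted above can be computed directly from detection scores via the standard rank-based (Mann–Whitney) formulation; a minimal pure-Python version:

```python
def auroc(in_scores, out_scores):
    """AUROC for separating in-distribution from OOD samples, where a
    higher score means 'more in-distribution' (Mann-Whitney U form:
    the probability that a random ID sample outscores a random OOD one,
    counting ties as 1/2)."""
    pairs = 0.0
    for s_in in in_scores:
        for s_out in out_scores:
            if s_in > s_out:
                pairs += 1.0
            elif s_in == s_out:
                pairs += 0.5
    return pairs / (len(in_scores) * len(out_scores))
```

An AUROC of 0.5 corresponds to chance-level detection and 1.0 to perfect separation, which is why artifact-driven shortcuts on LSUN-r inflate the metric so dramatically.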
d
67226d17fb097469c4c4ec5e8115d06f
In this study, we replaced the neural vocoder with MelGAN {{cite:359936a2fe484ca9b1a2002b2ad5578faaff8b3d}}. To train MelGAN, we used spontaneous monologue speech of 361 speakers in the Corpus of Spontaneous Japanese {{cite:b9ecad30a4ff8eeb07dbad482b7ae1c73d8450d5}}.
m
c444617723073ebd3e6dfdf8307a485c
In practice, the video is scaled down to 640{{formula:ddd7e564-0806-4d43-88ad-412bf6e82e65}} 480 resolution to achieve a frame rate of 20 FPS. The intrinsics of the camera are acquired following the calibration guide of {{cite:906761eec3065baf30e7ea183da11d698b47a0de}}. Extrinsic calibration between the IMU and the camera is established using the open-source tool “Kalibr” {{cite:cba03d04fd5039ccf51c6401b38ea819bfffc431}}, {{cite:813af2984a913dbd5123c5562bd6df83df904f33}}. Figure REF shows the simulation setup on top of the building in the landscape. It consists of three parts: a motor driver board, a Jetson Nano, and the gimbal assembly. All 3D-printed cases have reinforced connections, taking aerodynamics into account for flying efficiency. The camera on the gimbal is mounted forward, facing the landscape. The gimbal cases along with the sensors are then attached to one end of a pole (not in the view of Figure REF ). The other end of the pole is controlled manually to simulate random rotation. Here, the ground-truth roll and pitch angles are read from the servo motors, and the protractor in the figure is only adopted for verification of the test. In the demo test, we use the fused estimate from our particle filter to steer the motors. A closed-loop PID controller is leveraged for actuation. All of the following test sequences were recorded from a static position at the start. Furthermore, it is guaranteed that the aligned ground plane is orthogonal to the gravitational vector and the horizontal skyline. This configuration remains unchanged for the real UAV test. {{figure:64ac9841-6fec-4ea3-87da-6406216f96b0}}
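A minimal sketch of the closed-loop PID actuation described above (the gains, time-step, and the integrator plant are illustrative placeholders, not the values used on the gimbal):

```python
class PID:
    """Minimal PID controller; gains and time-step are illustrative only."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Toy plant: the gimbal angle integrates the commanded rate.
pid = PID(kp=2.0, ki=0.1, kd=0.05, dt=0.01)
angle = 0.0
for _ in range(5000):                        # 50 s of simulated time
    angle += pid.update(30.0, angle) * 0.01  # drive toward a 30-degree setpoint
```

In the real loop, the measurement would be the roll or pitch estimate from the particle filter and the output would be the servo motor command.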
r
4e1fa571cd5cbfd502a32231d069e32d
Phenomenologically, it is suggested that the light mesons could be grouped into the following Regge trajectories {{cite:669aac3a63138a8df159879a5e3e824559bf72f4}}, {{cite:bea025c8cf3caf2d745acd26bc7cea8b07e89abb}}, {{cite:d58af2760c6a2e62fe10c3918e80a83884b7b2ed}}, {{formula:bd82c882-2a89-4ef9-8baa-e03d5a60ec0e}}
r
c408fc0aaf724f0e028c940171f25a64
ScanNet.  2D depth metrics and 3D geometry metrics are used on the ScanNet dataset. The 3D geometry evaluation results are shown in Tab. REF . Our method performs considerably better than recent learning-based methods and achieves slightly better results than COLMAP. We believe that the improvements come from the joint reconstruction and fusion design realized by the GRU Fusion module. Compared to depth-based methods, NeuralRecon produces coherent reconstructions both locally and globally. Our method also surpasses the volumetric baseline method Atlas {{cite:9c21814841a271e00e533a981e5fa7d057eb62ca}} in accuracy, precision, and F-score. The improvements potentially come from the local fragment separation in our method, which acts as a view-selection mechanism that prevents irrelevant image features from being fused into the 3D volume. In terms of completeness and recall, the proposed method performs worse than both depth-based methods and Atlas. Since depth-based methods predict pixel-wise depth maps on each view, the coverage of their predictions is high by nature, but at the cost of accuracy. Being an offline approach, Atlas has the advantage of having the global context of the entire sequence before predicting the geometry. As a result, Atlas sometimes achieves even better completeness than the ground truth due to its TSDF completion capability. However, Atlas tends to predict over-smoothed geometries, and the completed regions may be inaccurate. As for the 2D depth metrics, NeuralRecon also outperforms previous state-of-the-art methods on almost all metrics, as shown in Tab. REF .
r
46b58ec3f8ed68b7660a66e35fa36f6d
As described in previous sections, we analysed four nights of observations: three in multi-order mode, with only HD 189733b in the slit (referred to as 'short-slit nights'), and one night in L-band with the single-order, long-slit setup, observing HD 189733b and a fainter reference star simultaneously. While the long-slit observation covers a narrower spectral interval than the other eclipse observations, it is a critical test of the methodology, with its simultaneous observations of the target and the reference star. In figure REF we present two lightcurves: HD 189733 and the reference star. Both are centred at 3.31{{formula:f313b1b6-1b48-4831-8afe-62b83d55b71b}} m with a binning width of 50 channels ({{formula:a70a4962-3681-4403-b78b-af29ce4ce6dd}} nm). As expected, the HD 189733 timeseries (top) shows the distinctive lightcurve shape whilst the reference star timeseries (bottom) shows a null result. We have fitted a {{cite:25ee78967d9bab999cce78adcefed305b1d02a77}} secondary eclipse lightcurve to both and found the eclipse depths to be {{formula:1e3a7c5a-7eb9-4dc3-8003-ba47b64baae2}} and {{formula:e5975a04-86cd-48f3-9c41-eb3f2a8d783f}} respectively. These results are in good agreement with the spectra presented below.
m
4a68270a48156b7c6ac9b05f076d03d9
One usually considers quantum systems with two levels (namely, qubits) in quantum information and quantum computation. To describe a quantum system with two levels, one can use the Grassmann representation of Fermi operators in fermionic phase spaces. Since quantum phase spaces are kinds of noncommutative spaces, one can also use the mathematical tools of noncommutative geometry to study the geometric structures of quantum states in phase spaces {{cite:449a1a04ed0d58446b6449c0c90910a6b7a1b28a}}. In a noncommutative space, a pure state is the analog of a traditional point in a normal commutative space, and the Connes spectral distance between pure states corresponds to the geodesic distance between points {{cite:44a86440ea44aa1af2c20cc599bb5227febf09b8}}. The Connes spectral distances in some kinds of noncommutative spaces have already been studied in the literature {{cite:138b41c2e24876adec3f39691669a616cf401ed2}}, {{cite:7b4b86ac27b3760ad37519a57f47e4b2907eef0f}}, {{cite:0bc9bee54728cf42e28a32403e60774ea7e546e9}}, {{cite:8cd18170c8c84754e0a822efdf5f553724619890}}, {{cite:c0833a77ff9846470d494a13f46050cda45a07a3}}, {{cite:13de8f125885799cf94a191abe515d0024dd0127}}, {{cite:68efba33da0166560006165647e0ab4e239f2a03}}, {{cite:43e67e7c196246889fd5f3c4d96a8e77157b4fec}}, {{cite:5fea5d1cdb3fab0f0ade2aa21d9a7d8c5663e8a4}}, {{cite:1eee8ef1b3ac9afb2885c108fcb7164e82512694}}, {{cite:b39d95fbbec925f2943e584c67256f304d1e80c2}}, {{cite:2e8ea29f4783747f2e7483bd3401557e33db9490}}, {{cite:aee3d11d4a74e91e8a2e315992cba73e0f7ef520}}. For example, Dai et al. have studied Connes' distance in {{formula:c7b49cd5-e90b-4376-b67e-26efa60350bf}} lattices {{cite:7b4b86ac27b3760ad37519a57f47e4b2907eef0f}}. Cagnache et al. computed the Connes spectral distances between the pure states corresponding to eigenfunctions of the quantum harmonic oscillator in the Moyal plane {{cite:0bc9bee54728cf42e28a32403e60774ea7e546e9}}. Martinetti et al.
obtained the spectral distance between coherent states in the so-called double Moyal plane {{cite:8cd18170c8c84754e0a822efdf5f553724619890}}. Scholtz and his collaborators have studied the Connes spectral distances of harmonic oscillator states and coherent states in the Moyal plane and fuzzy space {{cite:43e67e7c196246889fd5f3c4d96a8e77157b4fec}}, {{cite:5fea5d1cdb3fab0f0ade2aa21d9a7d8c5663e8a4}}, {{cite:1eee8ef1b3ac9afb2885c108fcb7164e82512694}}. In the present work, we study the Connes spectral distance between qubits, which can be represented by fermionic Fock states in phase spaces.
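For reference, given a spectral triple with algebra \(\mathcal{A}\) and Dirac operator \(D\), the Connes spectral distance between two states \(\omega\) and \(\omega'\) takes the standard form

```latex
d_D(\omega,\omega') \;=\; \sup_{a \in \mathcal{A}}
\left\{\, \bigl|\omega(a) - \omega'(a)\bigr| \;:\; \bigl\|[D,a]\bigr\| \le 1 \,\right\} .
```

For the commutative algebra of smooth functions on a Riemannian spin manifold this supremum reproduces the ordinary geodesic distance, which is what justifies treating it as a metric between pure states of a noncommutative phase space.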
i
92ef0eea5a95d50864021c5a448f950a
In this method (see e.g. {{cite:559729727006e27d3d2e6ba10b99508b794f6a64}}, {{cite:b8acda884394b02190eb6e028da3ef5250749ac6}}, {{cite:57f6fe15406d625385c414ff950e3e6c7af89949}}, {{cite:016f9b3d572d3bb10412ef9b2d352511ef55fb98}}, {{cite:fa3c919144418c12a87a2279d7479f31ba035eab}}), one brings the dynamical equations into a special matrix form called a Lax representation. Then the existence of an extended set of FIs is guaranteed. Specifically, Hamilton's equations have to be written in the form {{formula:8b1b68b1-892b-4ea8-afd7-d4d19a84eb4e}}
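Sketching the standard construction (sign and normalization conventions vary across the cited works): a Lax pair \((L, A)\) consists of matrices built from the dynamical variables such that the equations of motion are equivalent to

```latex
\dot{L} \;=\; [\,L, A\,] \;=\; LA - AL ,
\qquad
\frac{d}{dt}\,\operatorname{tr}\bigl(L^{k}\bigr)
  \;=\; k \,\operatorname{tr}\bigl(L^{k-1}[L,A]\bigr) \;=\; 0 ,
\qquad
I_{k} \;=\; \operatorname{tr}\bigl(L^{k}\bigr) .
```

The vanishing of the trace follows from its cyclicity, so the functions \(I_k\) are automatically conserved and furnish the extended set of FIs.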
m
9e45888fbefdb9286c6edb7e3746e6ab
The class of Neural Differential Equation methods {{cite:4dfebe7b8ab1381fb66a82dc6369b48d59bb9309}}, {{cite:26999d8abf6124d7d3e3481f1db3549446655f67}}, {{cite:ecaf39a356e9e2529b8241a56fb929aa82d494b0}} was developed to account for irregular time series with missing values and has demonstrated high performance in prediction tasks when provided feature sets with varying rates of missingness. Following the analyses performed in this paper, the Neural CDE appears promising for constructing state representations in the midst of the missingness and other irregularities inherent in healthcare data.
We calculate the full {{formula:ae196caa-ed47-493a-ba90-7b122cbaa86b}} spectrum at {{formula:24d9ca2b-b36c-4641-960b-ae4cfd9f399e}} and {{formula:81154060-9e58-4e25-823f-edc4b1dc0d8b}} using the {{formula:240d0332-347c-4e7f-9094-4fbd824a5a6d}} -jet (N)LO calculation from MCFM8 {{cite:a2ca5ddac0a6211b6d90ccd258fb63433ce20050}}, {{cite:9022c2fb577cf9624d50c948dbffcea100d9287b}}, {{cite:9a0547aa236e59934f4680d3e87818ac7b8daad6}}, {{cite:1615d8b2ad51a0e048878f450bf043deb7cd06f0}}, which allows one to generate points down to very small values of {{formula:cde3bd07-2a4e-4293-b422-68d169e3f239}} . We then subtract the known singular (leading-power) terms in the {{formula:06662508-a090-4113-bb3c-b7bd273cf1fd}} spectrum {{cite:e4272c1b40f12c4fd488620c485b51dfd7317b8c}}, {{cite:52d6c560ee361a21e66f15033afec4d3392c1020}}, {{cite:d7dddd15f8cd17224c51e74f9c898a9c6f793303}} to obtain the complete nonsingular (subleading-power) contributions, {{formula:669c5ade-ea29-4c84-96d4-36916266ecd3}}
For the synthetic benchmark datasets comparisons, we use two synthetic datasets: the GoPro{{cite:7e0a93fdea8a450363bed8f6a82245814fe1e6bb}} and the HIDE{{cite:ed37a6aecefb8ca8da592e69ddc2ad1fdbb4960a}}, and compare our method with fourteen state-of-the-art SIDSBD methods (Xu et al.{{cite:1618dc4050100071d38c6f96753e50d69b0743df}}, DeblurGAN{{cite:c60401f7dab88a7b774655c2cd809bb06612fa80}}, Nah et al.{{cite:7e0a93fdea8a450363bed8f6a82245814fe1e6bb}}, Zhang et al.{{cite:505cce0f5847bd0391880c189f6596a1de2e6b59}}, DeblurGAN-v2{{cite:910574c1ffb872eadfff7e6099f2988e77767c61}}, SRN{{cite:8692b4d11b7f45e80f482cd435f47bc7f9fec33c}}, Gao et al.{{cite:56e9d1cf3eb79a7c18c2b04b62f91db5e0a3704e}}, DBGAN{{cite:cb49caf6bad4013a9604b130670b93b6b1620c56}}, MT-RNN{{cite:96981e2ca11148d519776d23b0bd5b9d661c4c13}}, DMPHN{{cite:faf4ba7eba99966320e358c68ca3e43e1b2d7764}}, MSCAN{{cite:28c92126634d059576dceb8302238b643e1749e6}}, Suin et al.{{cite:4f13bc328d66a0c8a816ee7937771376f0389333}}, SPAIR{{cite:06b3776ff4d09c0922e9fb6af7cfd4fb487e0550}} , MIMO-UNet+{{cite:24a6cd2cf6f8ec93d4f3245cbc3b90bc9dcb9cb3}}). For fair comparison, the models {{cite:c60401f7dab88a7b774655c2cd809bb06612fa80}}, {{cite:7e0a93fdea8a450363bed8f6a82245814fe1e6bb}}, {{cite:505cce0f5847bd0391880c189f6596a1de2e6b59}}, {{cite:910574c1ffb872eadfff7e6099f2988e77767c61}}, {{cite:8692b4d11b7f45e80f482cd435f47bc7f9fec33c}}, {{cite:56e9d1cf3eb79a7c18c2b04b62f91db5e0a3704e}}, {{cite:cb49caf6bad4013a9604b130670b93b6b1620c56}}, {{cite:96981e2ca11148d519776d23b0bd5b9d661c4c13}}, {{cite:faf4ba7eba99966320e358c68ca3e43e1b2d7764}}, {{cite:28c92126634d059576dceb8302238b643e1749e6}},{{cite:4f13bc328d66a0c8a816ee7937771376f0389333}}, {{cite:06b3776ff4d09c0922e9fb6af7cfd4fb487e0550}} and {{cite:24a6cd2cf6f8ec93d4f3245cbc3b90bc9dcb9cb3}} are trained on the 2103 GoPro training image pairs. 
Because the method {{cite:1618dc4050100071d38c6f96753e50d69b0743df}} is optimization-based, we use the executable program provided by Xu et al. for the comparison. Table REF shows the mean PSNR and mean SSIM of all models on the 1111 GoPro testing image pairs and the 2025 HIDE testing image pairs. From Table REF we can see that our CDCN significantly outperforms all the compared methods, attaining the highest mean PSNR and mean SSIM on both the GoPro and the HIDE testing images. In summary, our CDCN produces better deblurring results than the state-of-the-art SIDSBD methods in terms of these quantitative metrics.
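For reference, PSNR (one of the two quantitative metrics reported in Table REF) is the log-scaled ratio of the squared peak intensity to the mean squared error between the restored and ground-truth images. A minimal sketch (illustrative only, not the authors' evaluation code; the pixel values are made up):

```python
import math

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between two equally sized images,
    given as flat sequences of pixel intensities."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

sharp   = [52, 55, 61, 59, 79, 61, 76, 61]   # hypothetical ground-truth patch
blurred = [57, 54, 60, 62, 75, 63, 74, 63]   # hypothetical restored patch
print(round(psnr(sharp, blurred), 2))
```

Higher is better; in practice PSNR is averaged over all testing image pairs, and SSIM is computed with a windowed structural comparison (e.g., as provided by image-processing libraries).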
For {{formula:476e68d8-6ec4-4d4f-9072-853a35d05919}} , the {{formula:c2759786-54e4-45a4-a040-91c6c132df61}} -stable Lévy process {{formula:8b2c3168-30a8-4989-bf0e-4560c08263cf}} has a heavy-tailed distribution {{cite:88a8b6e8b31cc14d3ad2872cd36fc8b3b29cae6f}}: {{formula:68cd0e49-7740-4022-adc9-85b83d9002c7}}
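The heavy tails are easy to see numerically. Below is a simulation sketch (our own illustration, not from the text) using the standard Chambers–Mallows–Stuck representation for the symmetric (skewness zero) alpha-stable case:

```python
import math
import random

def sample_sym_alpha_stable(alpha, rng):
    """Chambers-Mallows-Stuck sampler for a symmetric alpha-stable variable,
    0 < alpha <= 2 (alpha = 1 is Cauchy, alpha = 2 is Gaussian up to scale)."""
    u = rng.uniform(-math.pi / 2, math.pi / 2)  # uniform angle
    w = rng.expovariate(1.0)                    # unit-mean exponential
    if alpha == 1.0:
        return math.tan(u)
    return (math.sin(alpha * u) / math.cos(u) ** (1 / alpha)
            * (math.cos(u - alpha * u) / w) ** ((1 - alpha) / alpha))

rng = random.Random(0)
xs = [sample_sym_alpha_stable(1.5, rng) for _ in range(2000)]
# Heavy tails: occasional samples far beyond the typical scale, reflecting
# the power-law tail P(|X| > x) ~ x^(-alpha).
print(max(abs(x) for x in xs))
```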
{{formula:0ade4118-9ff2-4223-9502-13c40ab74d2e}} is sampled from {{formula:bcf4909a-890a-4074-ae0d-541efed34b97}} .

Theorem 2 Let {{formula:8c5ded64-2aba-489d-a34b-cbd6287ae4b6}} be a sufficiently large constant. Fix {{formula:0b78d020-51e3-4bcd-9149-2dab235d9d1b}} and {{formula:3e6282e5-7a2c-4e57-8e87-725f8aeb9285}} . Let {{formula:7e3fa08e-86c1-4805-820c-2e5798f713d5}} be a function, and {{formula:73c5caef-a9fe-49a8-8845-aec8b7202a6e}} be a distribution over {{formula:f6e053c7-b14a-419a-9cf6-0fcdbe4ca211}} . Suppose {{formula:804ae907-2248-4bf8-8728-9b8ec8555f07}} satisfies {{formula:6c4a3ae8-f510-42e3-9454-0dccdb0e04b8}} then for any integer {{formula:11b6491d-cf3d-4485-81f3-2491d78a4980}} , we have {{formula:b69b7057-3640-4ef8-89b3-13899b51afe7}}

This distributional strong XOR lemma states that for any fixed input distribution {{formula:66482da7-6003-45e1-8aec-90efc15a6a6e}} and function {{formula:da660ed6-a7ff-4ba8-bde7-b84fe11f3cd8}} , to compute {{formula:9c73cf81-ba49-4cf7-b765-32ca5d663921}} when the {{formula:df503bcc-1bc2-48c4-a7cc-08d146fd70f3}} inputs are sampled independently from {{formula:77fff96c-0826-4217-ae0f-a32d2a4438d5}} , either the advantage is exponentially small in {{formula:d8794bdb-37a7-4cde-8720-a9c500d47078}} , or one of the players needs to communicate at least {{formula:317ad126-4849-461f-959d-cc18b09eb0b2}} times more than for one copy. This also gives a strong XOR lemma in the asymmetric communication setting, where we separately count how many bits Alice and Bob send. It is worth noting that Shaltiel {{cite:c82360b23c4bc5834a685ae2df9b7ca4d28b7b57}} proved a similar strong XOR lemma for functions whose communication lower bound can be obtained via bounding the discrepancy. By the equivalence between the discrepancy and the correlation with 2-bit protocols {{cite:e21443e067faa8c12b96691c44b0e9340fa0e6f0}}, Theorem REF implies their result. See Appendix for a more detailed argument.
Note that a simple argument shows that Theorem REF implies Theorem REF (see also Section ). Therefore, we will focus on the distributional version, and assume that the {{formula:5a11ac42-042c-4862-9a17-f89c04b611ab}} input pairs are sampled independently from some distribution {{formula:f12fffa5-ea21-4efb-acce-a0393c1e6cf1}} . Our proof of the distributional version is inspired by information complexity {{cite:92f5748d82831ff2e3a9c687a2c3e380e7be324f}}. We define a new complexity measure for protocols, the {{formula:2c86e942-7e05-4168-96ca-9e01179beab3}} -cost, which is related to the internal information cost {{cite:95bfd47af16c7314891a4dd8cd299d425f491d5f}}, {{cite:958744b0cbfcc1dfad17ce627dffa2a9e6a90ab0}}. Roughly speaking, it replaces the KL-divergence in the internal information cost with the {{formula:817e7669-945b-4b82-a5d6-6a7b29c39726}} -divergence, which can be viewed as the “exponential” version of KL. This provides better concentration, which is needed in our argument. Throughout the proof, we will also work with distributions that are “close to” communication protocols, i.e., the speaker's message may slightly depend on the receiver's input. Such distributions have also been studied in proofs of direct product theorems {{cite:5e8f048793c776440e0e48db3e0bde977af6c2b8}}, {{cite:d0ff64419f0a76da14d9434131e290d9d5f7bd19}}, {{cite:3c13de41f28dae37f4277a775432c7a48546c3dc}}. We provide more details in Section .

Related work

As we mentioned earlier, Shaltiel {{cite:c82360b23c4bc5834a685ae2df9b7ca4d28b7b57}} proved a strong XOR lemma for functions whose communication lower bound can be obtained via bounding the discrepancy. Sherstov {{cite:21a9edca1c7c2b488e75d8d318b9264c696144b4}} extended this bound to the generalized discrepancy and to quantum communication complexity.
Barak, Braverman, Chen and Rao {{cite:958744b0cbfcc1dfad17ce627dffa2a9e6a90ab0}} obtained an XOR lemma for information complexity and then an XOR lemma for communication (with weaker parameters) via information compression. However, their XOR lemma does not give an exponentially small advantage. They proved that if {{formula:ed3a59b0-3ae1-47f7-9f25-b967ea835a0f}} is hard to compute with information cost {{formula:c84cc72d-b0a5-43fd-bcc0-574ebf16cf14}} , then {{formula:1856573e-b4f1-4f9b-ad33-45ca62da71fc}} is hard to compute with information cost {{formula:4d732e87-6d42-4ebc-bbd9-ad4ab118d703}} . In fact, the starting point of our proof is an alternative view of their argument, which we will outline in Section REF . Viola and Wigderson {{cite:e21443e067faa8c12b96691c44b0e9340fa0e6f0}} proved a strong XOR lemma for multi-player {{formula:5ab7d6dd-1bb9-40ee-b334-a3df93fa91fc}} -bit communication for small {{formula:cbecd73b-4412-469d-8082-623ee3392abe}} . As pointed out in their paper, it implies the XOR lemma by Shaltiel {{cite:c82360b23c4bc5834a685ae2df9b7ca4d28b7b57}}. XOR lemmas have also been proved in circuit complexity {{cite:95836e4030d52b503cd86a48214016d331987419}}, {{cite:dd22983c55b0a8c83d13d930e5a61662bd617a40}}, {{cite:81c803cb7f27b16c0adb878c68dc2b599553310e}}, {{cite:9b60bb41d25d0fa863b8c870008633bc1d8d5269}}, {{cite:3e416dafa67a841ac2ef491a629c53449b3ea8d6}}, query complexity {{cite:c82360b23c4bc5834a685ae2df9b7ca4d28b7b57}}, {{cite:21a9edca1c7c2b488e75d8d318b9264c696144b4}}, {{cite:8c9f947d0cd44c4eb8e000638316a9b531acca98}}, {{cite:2aa4b34af67731f4776b60030bf1c1637fdc42c1}}, streaming {{cite:3518ab574220d4915f025e70ec9ac9100bd22954}}, and for low-degree polynomials {{cite:e21443e067faa8c12b96691c44b0e9340fa0e6f0}}. Direct product and direct sum theorems, which are results of similar types, have also been studied in the literature. They ask to return the outputs of all {{formula:c832f103-b913-4ada-bd5c-bf57d00d43ab}} copies instead of their XOR.
Direct sum theorems state that the problem cannot be solved with the same probability unless {{formula:e81421ba-a80a-409e-a7ae-bc573da7d685}} times more resource is used, while direct product theorems state that the problem can only be solved with probability exponentially small in {{formula:3b90595f-1da4-48b7-8c86-2279ac6e9aa9}} unless {{formula:25aa0b8d-503d-4a10-a507-f2008b55e8e5}} times more resource is used. The direct sum theorem for information complexity is known {{cite:92f5748d82831ff2e3a9c687a2c3e380e7be324f}}, {{cite:95bfd47af16c7314891a4dd8cd299d425f491d5f}}, {{cite:958744b0cbfcc1dfad17ce627dffa2a9e6a90ab0}}. A direct sum theorem for communication complexity with suboptimal parameters can be obtained via information compression {{cite:958744b0cbfcc1dfad17ce627dffa2a9e6a90ab0}}. A direct sum theorem for bounded-round communication has been proved {{cite:1d6cf6ed23926cb5d5f384337af2f6ff46b1fb69}}, and we use a similar argument in one component of the proof (see Section REF and Section ). Direct product theorems for communication complexity (with suboptimal parameters via information compression), for bounded-round communication, and from information complexity to communication complexity have also been studied {{cite:5e8f048793c776440e0e48db3e0bde977af6c2b8}}, {{cite:3c13de41f28dae37f4277a775432c7a48546c3dc}}, {{cite:66c904713b61a43271eb6da91cb03258f7ab8d6a}}.

Technical Overview

An alternative view of {{cite:958744b0cbfcc1dfad17ce627dffa2a9e6a90ab0}}

The starting point of our proof is an alternative view of the XOR lemma in {{cite:958744b0cbfcc1dfad17ce627dffa2a9e6a90ab0}} for information complexity, which does not give an exponentially small advantage. Running a protocol on an input pair sampled from some fixed input distribution defines a joint distribution over the input pairs and the transcripts. Information complexity studies how much information, in this joint distribution, the transcript reveals about the inputs.
The (internal) information cost is defined as {{formula:0899f4ce-62a5-4516-b993-daa4ae10c342}} where {{formula:de194621-6d0c-47c9-b0ee-7e83e10c4997}} is the transcript and {{formula:ad3e9aca-2242-46c4-a6d1-0159187623f9}} denotes the public random bits. In the usual definition, the public random string is not part of the transcript; we add it for notational simplicity. This does not change the values of the mutual information terms, as it already appears in the condition. We assume that Alice sends all the odd {{formula:94fa3cbe-3f48-4944-83e2-7b43fa9aec69}} and Bob sends all the even {{formula:cb3ee48b-17ab-40e0-b06f-41c280a762fd}} . The internal information cost of a protocol is always at most its communication cost. It is also known that for bounded-round communication, the internal information complexity is roughly equal to the communication complexity {{cite:1d6cf6ed23926cb5d5f384337af2f6ff46b1fb69}} (up to some additive error probability). For the XOR lemma for information complexity, we consider input pairs {{formula:296984a4-e0bf-4a67-8a0b-74e455dbaff0}} and {{formula:69008654-9ca0-4980-b25d-4f94b5611fcf}} sampled from {{formula:8ebdf708-1d30-4aea-b9be-e6a5a7ca549a}} . Suppose there is a protocol {{formula:ec96a77e-6583-4338-8da8-d564eaa7bf01}} computing {{formula:4ee5334d-06d9-4b4d-840a-a005361f736b}} with information cost {{formula:e85a4866-999c-4914-9372-3bf09fcf7523}} ; we want to show that {{formula:c618b0bb-fbc7-4adb-9316-fbfb8a4406c6}} can be computed with information cost {{formula:39a832b4-a83b-4254-82ae-72005f564df8}} .
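For a toy protocol, the internal information cost can be computed by direct enumeration over the joint distribution of inputs and transcripts. The sketch below (illustrative only; the protocol is our own toy choice, not one from the paper) computes I(X; Pi | Y) + I(Y; Pi | X) for a one-round protocol on uniform input bits in which Alice simply sends her bit, which should give cost exactly 1 bit:

```python
import math
from collections import defaultdict
from itertools import product

def cond_mi(joint, a, b, c):
    """I(A; B | C) in bits, for a joint distribution given as a dict
    {outcome tuple: probability}; a, b, c are index tuples into the outcomes."""
    def marg(idx):
        m = defaultdict(float)
        for o, pr in joint.items():
            m[tuple(o[i] for i in idx)] += pr
        return m
    p_abc, p_ac, p_bc, p_c = marg(a + b + c), marg(a + c), marg(b + c), marg(c)
    total = 0.0
    for o, pr in joint.items():
        oa = tuple(o[i] for i in a)
        ob = tuple(o[i] for i in b)
        oc = tuple(o[i] for i in c)
        total += pr * math.log2(p_abc[oa + ob + oc] * p_c[oc]
                                / (p_ac[oa + oc] * p_bc[ob + oc]))
    return total

# Outcomes are (x, y, pi): Alice's single message is her input bit, so pi = x.
joint = {(x, y, x): 0.25 for x, y in product([0, 1], repeat=2)}
ic = cond_mi(joint, (0,), (2,), (1,)) + cond_mi(joint, (1,), (2,), (0,))
print(ic)  # I(X;Pi|Y) + I(Y;Pi|X)
```

Here the transcript reveals one bit about X to Bob and nothing about Y to Alice, matching the communication cost, consistent with the fact that internal information cost never exceeds communication.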
To this end, we show that {{formula:53ea8b07-2c81-4a98-806c-81c0b3cb1725}} can be “decomposed” into a protocol {{formula:c3821760-2b9f-446c-a8d7-ad05f3d05cf9}} computing {{formula:c638182a-5db0-4cc8-99f2-603f3a1bc105}} with information cost {{formula:a5a2dd61-f431-4f75-9133-e8170fc008ac}} and a protocol {{formula:c15ffb24-a1e9-451f-9bb9-24bf2e3d24f2}} computing {{formula:0792e016-f958-4e66-ae9c-b6f8cd46b039}} with information cost {{formula:1e3546fa-23ba-4892-95e4-48e2548d8e73}} such that {{formula:3fa0f61c-1f70-4ce8-b4fe-4af7ec1e4954}} , as follows (see also Figure REF ). For {{formula:1a0fdc4a-8c1a-476d-936f-c1476e051fed}} , given {{formula:d129adcd-b49c-4cbe-b7d9-48d7aa2d8843}} input pairs, the players view them as {{formula:3742fc4c-55d9-4438-a19c-82b0ab102f9e}} and {{formula:10ea0f6d-4038-4338-a437-00b5a94526c4}} as part of the inputs for {{formula:46902c83-c1c0-4ef7-86ae-9cba348909ea}} , where {{formula:16fd2c41-550e-4eed-9f0a-22fb1081cc52}} denotes {{formula:8feef4ae-b3ae-4509-b4c4-cba658fae51a}} and {{formula:25278421-bed4-4fba-88b8-f3fa6f2433e8}} denotes {{formula:d397701a-cfa5-442a-b7da-02ae25cd0617}} ; then the players publicly sample {{formula:1b5697fd-9339-40c0-b2ba-86e731c60615}} , and Bob privately samples {{formula:44f40aef-385c-40be-95fa-0002638fbfcf}} conditioned on {{formula:fd9d4f7e-c531-46f2-bbdc-3700adf37c19}} ; the players run {{formula:d8aaf724-1a31-444a-b7bd-034afb7ad8ff}} to compute {{formula:86c4f6d8-bc3c-4c6e-8316-23eb2f6a6a74}} ; Bob sends one extra bit indicating {{formula:397a9f10-0dfb-47b0-815e-a89398b45e9a}} . 
For {{formula:e53909ee-3bc0-4eaa-a295-51bea22ae8c1}} , given one input pair, the players view it as {{formula:74c402e8-9abf-485d-bb0b-55a96f5cf050}} and {{formula:2423ed08-3caa-4372-b0fd-0b4e214265b4}} ; then the players publicly sample {{formula:01ddc556-3ada-4b2a-bc7a-90c97c28006d}} , and Alice privately samples {{formula:d2e09029-11ec-4f7d-b75d-2eaa55d1de9f}} conditioned on {{formula:95a5b595-20ca-48ce-9597-a251e4403370}} ; the players run {{formula:1591f552-d9b4-432c-8cc3-80fb0bfee476}} to compute {{formula:43d94f26-40b7-46d7-8ec4-0148bdc58f2c}} ; Alice sends one extra bit indicating {{formula:d337e713-623e-454c-a472-b6070834aa00}} . {{figure:78a5d0db-d3b7-4c10-a8d8-c5eeb6ba30f0}}If {{formula:d9263f6a-02e3-4315-b22c-7bd3526b1314}} computes {{formula:5a12cf0d-9b51-4687-950c-7d7ea5d98bba}} correctly, then the two protocols compute {{formula:8d92fd4d-f5ae-4ce7-a4ee-36a33a265faa}} and {{formula:dc6fbd36-4258-4a82-853e-c27edd336a32}} correctly respectively. For the information cost of {{formula:edccfce1-204e-48fc-887b-6d3f2ec82b6e}} (if we exclude the last bit indicating {{formula:546ca094-eef5-4d67-a396-044a4faa1f45}} ), the first term is equal to {{formula:0c460093-336f-4e02-a7f2-66561f24a149}} , since {{formula:2842e8e5-a33b-4168-a247-1fecc8b2af03}} is sampled using public random bits. It is also equal to {{formula:43fea43a-87f7-405a-bd5a-25260d6b29a5}} due to the rectangle property of communication protocols. For the information cost of {{formula:94e225ef-a33d-41f7-b41d-dd3e3521c3e9}} (if we exclude the last bit), the first term is equal to {{formula:56ea8b9c-b9be-49dd-8b98-b9543aba8b5c}} since {{formula:2bfeb37b-205b-488a-9b33-0419cf04bab7}} is sampled using public random bits. Therefore, the first terms sum up to exactly {{formula:69343fb6-9f43-45ef-aed8-53d46f33e726}} , the first term in the information cost of {{formula:609c2109-c189-44af-8c6a-63e6df195ac6}} , by the chain rule of mutual information. 
Similarly, the second terms sum up to {{formula:276a9d04-4e6f-4d29-adea-7d794f394161}} , the second term in the information cost of {{formula:b9669c91-b447-4136-b7f1-76d73e05a720}} . Hence, including the last bits in the protocols, we have {{formula:419a8c5c-7b82-4488-845f-980fb9a6f6ca}} . Thus, by repeatedly applying this argument, we obtain a protocol for {{formula:be67544a-b728-4732-86c1-d8ebb00873e7}} with information cost {{formula:79945abe-427f-4adc-867e-86e5b51eb935}} , as desired. Note that in this decomposition, the players do not need to sample the private parts explicitly. As long as they can send the messages from the same distribution (e.g., by directly sampling the messages conditioned on the previous messages and their own inputs), the information costs and correctness are not affected. The original paper proves the same result by explicitly writing out the protocol for {{formula:0631d824-51a1-4a83-bda1-336d34fd95b0}} obtained after applying the above decomposition {{formula:da409146-ca6b-4806-852f-500ecb5fb2bb}} times for a random {{formula:8b9b76cb-5ea4-4fcf-afc0-01351dacb899}} , and proving that the expected cost is as claimed. The two proofs are essentially equivalent for this statement. The original proof embeds the input to {{formula:a8cf1f65-bda8-4a70-aa87-4ad98286bfc4}} into a random coordinate {{formula:a10fb78f-761b-4d5c-be78-91d1036d520c}} of {{formula:d77f6243-1b77-495f-bd73-a25f8a587f82}} , and samples {{formula:3e41e839-d460-48d9-86a1-ee144543bc4e}} and {{formula:166f051c-ed19-476b-9d34-6042eba602b0}} using public random bits. However, as we will see later, our new view is more flexible, allowing for more sophisticated manipulations when doing the decomposition.

Obtaining exponentially small advantage

The above decomposition preserves the success probability.
However, if we start from a protocol for {{formula:b8cc4aea-fed0-4a43-a3c7-ef98ce7cc480}} with exponentially small advantage, then we will not be able to obtain a protocol for {{formula:f37e5cc3-9226-4ea0-8ff6-3bcd1604834e}} with success probability {{formula:fda0d954-86f5-430a-9f9e-d3bd9dcae2b9}} , which is required in order to prove the strong XOR lemma. In fact, this is inherent for information complexity, since the strong XOR lemma for information complexity does not hold. This is because the information complexity is an average measure, and it is at most the expected communication. A protocol can choose to compute all {{formula:ce809ee2-5ebf-49c8-a6b0-027c5a7b8dc7}} with probability {{formula:0c11cd23-3c99-4c60-9d17-31c7733d13f2}} and output a random bit otherwise, which achieves success probability {{formula:1a84970f-f51a-448a-8905-c3c10232f381}} and expected communication {{formula:41d29464-48b7-482f-b99c-d5789e67a11b}} times the one-copy cost. However, the reader is encouraged to continue reading this subsection pretending that they do not know about this counterexample. Let {{formula:707b1eec-15f9-4055-a434-66fb71ea2349}} denote the advantage for {{formula:09f0146f-05fb-4eb2-968a-d657153e65f2}} conditioned on {{formula:3ccd787a-b628-407b-bc55-931e0d20f3a9}} , which is defined as {{formula:b3e9cff3-0e27-4eb7-97a9-e1d2e70f6267}} i.e., the advantage is {{formula:c504099a-fb96-48e5-97be-9770a4b483d6}} if the conditional probability is either {{formula:e133137d-f635-462a-9e8d-04edd09bffa0}} or {{formula:288a5d49-fd78-469d-8441-39187349f024}} . Now let us take a closer look at the two protocols {{formula:dd64a430-1adc-4170-97aa-fa442e9d9ffc}} and {{formula:3f634b95-6734-4b96-946a-163140ab6291}} (see Figure REF ).
For {{formula:08f92166-a6c7-4d14-8e6c-4c33109b2b5d}} , in Bob's view at the end of the communication, he knows his input {{formula:d5e6c1e5-9c3c-4e60-ba07-b579d3b82f31}} , the publicly sampled {{formula:55d3cd6e-bb51-4513-9f38-e45082ef96ae}} and the transcript {{formula:5fa5b291-6935-4e58-96fa-27997f93ca7a}} . Hence, he is able to predict {{formula:d7715bde-4b28-4477-a5d0-1d3bd6e38ad0}} with advantage {{formula:c74cca31-faf6-46df-b753-c539df1b7b51}} . By letting Bob send one extra bit indicating his prediction, the advantage of the protocol achieves the same. For {{formula:00da0514-3f18-49ab-97f9-e23700668fba}} , in Alice's view at the end of the communication, she knows her input {{formula:312b1368-130d-485f-909b-bdede577fc0a}} , the publicly sampled {{formula:8dadd1a2-6851-45f4-a037-4fc277d64263}} and the transcript {{formula:57ada614-60c5-4de2-abc5-5d8d340434d5}} . Hence, she is able to predict {{formula:c7d7f9c5-4106-4622-87ac-19d80342d21b}} with advantage {{formula:ea26fadc-11cb-4241-9f95-0830faf7a91c}} . By letting Alice send one extra bit indicating her prediction, the advantage of the protocol achieves the same. Now an important observation is that {{formula:b26741d5-c5e9-4f6f-b3d9-730e1c48f306}} and {{formula:61ad05a8-8ae6-4a3a-bc41-881e02ad69b4}} are independent conditioned on {{formula:ea8af6d4-6847-4f0b-8c07-acbc3aa52ae0}} , by the rectangle property of communication protocols. Hence, {{formula:9674b712-3bea-485e-8a5c-6ea0764948e9}} and {{formula:8f57782a-cd38-4b20-a3f9-29c8396e8d4d}} are also independent conditioned on {{formula:0580b509-0da9-4098-b9af-7ff2f4c373c0}} . 
Since {{formula:7adeae3b-bc3d-47bc-be44-abc7d2fb1b84}} , by the probability of XOR of two independent bits, we have {{formula:84751a54-00e9-405c-87c5-6850eeab53e2}} {{figure:0a860b6a-df75-4ca0-83de-2cb0e775502c}} This suggests the following strategy for the decomposition:

- if the information cost of {{formula:6014e9ab-3f6a-4142-91d7-3f54de87d73e}} is large, then the information cost of {{formula:41247850-751e-424d-b705-8f1a9aaa304c}} must be much smaller than that of {{formula:462ffdbf-c6ac-49d2-9b03-0921e6c12018}} ;
- if the information cost of {{formula:1db8b4ae-a4ff-4bfa-871e-bcf13b559f51}} is small and its advantage for {{formula:ef1bd2de-7a46-42f8-af78-5fbaa7a5102c}} is large, then we have obtained a good protocol for {{formula:1aa3d417-6fff-4dfd-8fe2-ec1877d891ab}} ;
- if the information cost of {{formula:2b98de36-a98c-4751-a280-03896c2f37b7}} is small and its advantage for {{formula:1ab3b9c9-219a-44b5-8190-ae28c405b67b}} is small, then by (REF ), the advantage of {{formula:a3fbf2e0-644b-4f68-9cef-efdd95566488}} must be larger than that of {{formula:43d888cc-34d9-4cc5-b46f-6ded4ad607b6}} by some factor.

Hence, in each decomposition, if we do not already obtain a good protocol for {{formula:11e11b66-2df5-475c-aa8e-b68fa4f19c32}} , then when decrementing {{formula:319b4cf7-7360-4390-85d2-d0d544d01cf7}} to {{formula:d7329518-d267-4748-a608-a24b03b89adb}} , we must either significantly decrease the information cost, or increase the advantage by a multiplicative factor. If we start with a protocol with a low cost and a mild-exponentially small advantage for {{formula:edcf8f19-1a97-40f1-ba3d-ff5ea0fc2dcf}} , then we must obtain a good protocol for {{formula:e5ea117a-9dc8-44d1-b97a-6d777b1ee4d7}} by applying this decomposition iteratively. It turns out that the main difficulty in applying the above strategy is to formalize the last bullet point.
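The elementary fact behind "the probability of XOR of two independent bits" is that advantages multiply: if two independent predictions are correct with probabilities (1 + a1)/2 and (1 + a2)/2, their XOR is correct with probability (1 + a1*a2)/2. A quick sanity check of this identity (our own illustration, not from the paper):

```python
def xor_advantage(adv1, adv2):
    """Advantage of predicting b1 XOR b2 by XOR-ing two independent predictions
    with advantages adv1 and adv2, where advantage = 2*Pr[correct] - 1."""
    p1, p2 = (1 + adv1) / 2, (1 + adv2) / 2  # individual success probabilities
    # The XOR prediction is correct iff both predictions are correct
    # or both are wrong.
    p = p1 * p2 + (1 - p1) * (1 - p2)
    return 2 * p - 1

# The advantages multiply, matching the pointwise identity in the text.
assert abs(xor_advantage(0.6, 0.5) - 0.3) < 1e-12
assert abs(xor_advantage(0.1, 0.1) - 0.01) < 1e-12
```

This is why, without an information-cost decrease or a concentration phenomenon, peeling off one copy must multiply the remaining advantage by a factor.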
Note that the expected advantage of {{formula:240c11f4-20bc-439b-99b0-38fd912c818a}} (after Alice sending the one extra bit indicating her prediction) is {{formula:5a67e6a8-e1e1-404a-8e76-dba85fb128e8}} , the expected advantage of {{formula:12cae94a-7b41-4797-801d-3fd42f22b500}} (after Bob sending the one extra bit indicating his prediction) is {{formula:348b52c1-3faf-4bec-bb48-5fc840c844b8}} , and the expected advantage of {{formula:ca547166-cb73-45b6-a337-b3fdbd2610b9}} is {{formula:92ef4863-3cad-48b7-afb3-78aa0da5224f}} , which is at most {{formula:29fbdd5d-6dd8-40cf-bce3-0a41969d2cab}} . When we say that the advantage of {{formula:8c61fbfd-b5fd-4cb1-bf94-1dbcd450555a}} for {{formula:38e6f38b-8291-4c66-ae05-964e2d20d24a}} is small in the last bullet point, we can only guarantee that this expectation is small. Equation (REF ), which is a pointwise equality, does not directly give any useful bounds on the expectations. For example, it is possible that both {{formula:deedf225-1392-4265-beb2-6c31690481ff}} and {{formula:4df01689-9fd9-4001-9a1a-962c63069515}} are very small, but {{formula:5dad4ae7-31ad-49e2-b693-a73a255f7ce7}} and {{formula:3986432b-52bc-4228-9d5e-0f5b0ec22921}} are always equal to zero or one at the same time, both concentrated on a small probability set. Then we have {{formula:e8822176-1071-4732-a80a-fd0e7d0a7701}} , the advantage may not increase at all. In this case, the advantage {{formula:1a79298f-5635-4a49-bf42-e8310417282a}} is also concentrated on the same small probability set. On the other hand, observe that if {{formula:a524472e-f514-4e5c-b7eb-a7aae29200fb}} takes roughly the same value (say, {{formula:aa02c4b1-4bd3-4471-a89c-f255376dbcfb}} ) most of the time, then we do obtain an advantage increase: {{formula:5f80c87f-7e75-4cbf-b0d5-97b3c762a860}} by the convexity of {{formula:67a3c588-4c8e-43fb-89ea-c460877238c0}} . 
This motivates us to consider the following two extreme cases:

- {{formula:ed162957-720a-473c-b1b0-b1ce3b43a26a}} is roughly uniformly distributed among all {{formula:a4f60f5f-b459-4d1f-b18e-10a02c4763ef}} ;
- {{formula:7dbc930c-ec5d-489b-bf9d-c03034428e07}} is concentrated on a tiny fraction of the triples {{formula:f3a65f45-9fc3-4960-9281-8b660f0f2a8b}} .

Basically following what we just argued, the above strategy directly applies in the first case. The second case is related to direct product theorems, where we also want to analyze protocols that are correct with exponentially small probability. This is because one possible strategy for the players is to compute all {{formula:c235a636-6c38-4167-a98c-969bbc53ccff}} correctly with some probability {{formula:ccde86f7-9b7d-41ed-bfd0-b5f3dc76ab3c}} and output a random bit otherwise. We must at least show that in this case, {{formula:792407c6-f5b8-44fd-8f66-00896b59370b}} .

Generalized protocols

For the second case above, we follow one strategy for direct product theorems {{cite:3c13de41f28dae37f4277a775432c7a48546c3dc}}. When the advantage {{formula:222e46b3-9ee1-4ba7-a31b-5dbc975b7b8f}} is concentrated on a small set {{formula:afb9b4d9-a89a-46e0-a8ef-5819f9e07468}} of triples {{formula:66fb9fd4-9a5a-44a1-857f-29df19043a94}} , we restrict our attention to {{formula:d56e6780-40f3-44c6-b601-12c931b011bb}} by conditioning {{formula:a25ab6ba-901e-45fa-96c0-af0da6c1b7c1}} on {{formula:9ec811fd-49b5-41a1-8eea-9994da60d4c5}} . However, this immediately creates two issues.
The first issue is that although {{formula:4a9c6ab3-bb63-43b9-97da-2be6ad92f461}} is a well-defined distribution, it is not necessarily a protocol, since conditioning on an arbitrary event may break the independence between a message and the receiver's input, e.g., {{formula:d83e99c1-8707-4d61-b592-840b6ad565a7}} may no longer be independent of {{formula:35c3b94b-3512-468c-a538-31f50459abd7}} conditioned on {{formula:03fb1447-ec93-4bb9-a95e-d04c6b2b5758}} . Conditioning on an event also distorts the input distribution, which needs to be handled; for simplicity, we omit it in the overview. This issue was also encountered in the direct product theorem proofs. Instead of studying standard protocols, we focus on generalized protocols, where we allow each message to depend on both players' inputs, and we wish to restrict the correlation between the odd {{formula:54fde772-aa8d-40ab-9332-1ae0aaa43b08}} and Bob's input and the correlation between the even {{formula:c7b26498-59d3-4b6a-89aa-5b5d503ba1b5}} and Alice's input. In the previous work, one bounds {{formula:190ebbe8-9255-42a2-8c37-bc419f77e433}} , the mutual information between the message and the receiver's input. Intuitively, the {{formula:b4e3f508-a533-4148-aed1-7e643a7e7b15}} -value measures how close a generalized protocol is to a standard protocol. It turns out that the {{formula:161c360d-d54e-4040-ad6f-dab904132da9}} -value of a standard protocol conditioned on a not-too-small probability event is small; on the other hand, when the {{formula:5bceafc4-5bb4-477c-bebd-93d4432838d8}} -value is small, the generalized protocol is statistically close to a standard protocol. Furthermore, an important feature of {{formula:f135389d-9c3a-4e42-9a71-3e63fe592592}} is that the decomposition of {{formula:edf968bf-268f-4ff5-81a6-28721aefb460}} into {{formula:80ec1cc8-62ea-43cf-9074-3a7daf1f086f}} and {{formula:6f612706-1b01-4ae5-9446-56f1361ca893}} also satisfies {{formula:992fc898-8c86-449e-b9bc-01d836d4e502}} .
Hence, when doing the decomposition, we can hope to obtain a generalized protocol for {{formula:a64e8d17-2e4e-4302-929a-6ac17c254ad7}} that is very close to a standard protocol. The second issue is that conditioning on a small probability event {{formula:f52ff416-8875-488a-9418-26b868afc75b}} could greatly increase the information cost, from {{formula:201ae8ba-fde2-48a7-9cc2-a541b8a757c9}} to {{formula:7f5e4d76-ef32-4ddf-bab0-33f0d0b9b7cd}} . Since {{formula:d3193bfa-b8f4-4b2d-8db7-7140126bb902}} is close to the communication cost, such a multiplicative loss in each step of decomposition is unaffordable. Such a loss occurs because the mutual information is an average measure (an expectation), which does not provide any concentration (also recall the counterexample in footnote REF where the communication cost and the advantage are both concentrated on an {{formula:8dd11bd9-57ab-42a3-bb91-b1da3d092685}} -probability event, when we condition on this event, both the expected communication cost and the advantage increase by a factor of {{formula:0cde2c83-b752-4915-be57-0f2ff3785dfb}} ). More specifically, consider the first term in the information cost, {{formula:5d037c28-2615-4812-bd00-d8fb89e80be1}} (omit the public random bits for now). For standard protocols, it is equal to {{formula:affc0579-1ad0-4771-acd8-97010c9dd48f}} We also note that the argument in the previous subsections crucially uses the rectangle property of the communication protocols, which does not necessarily hold for generalized protocols. This turns out not to be a real issue, since throughout the argument, we will maintain the rectangle property at all leaves, which is sufficient for the argument to go through (see also Section REF ). 
{{formula:4e16c6ed-b302-483c-97f5-c999ea73d0bf}} -cost and {{formula:c504bb7f-5396-418a-8f4a-a8ad3c6b4ebf}} -costs

Our novel solution to the second issue above is to focus on the “exponential version” of the information cost, i.e., for the first term, {{formula:fb65dbd4-7dca-4253-973c-f13962f0abd9}} which we call the {{formula:37d51b26-eef8-4887-bffc-8397f802bc73}} -cost by Alice. The {{formula:59800c87-56ec-4837-8d3a-fa0fd0542bf4}} -cost by Bob, {{formula:3108b980-0334-4aff-a847-d046afacb638}} , is defined similarly for the second term in the information cost (see Definition REF ). This notion of the cost has the following benefits.

- For a (deterministic) standard protocol with {{formula:f8d66a14-09f3-4083-8cc9-3544b78ad9f9}} bits of communication, {{formula:eda2e5dc-ed7d-4f80-b126-db77b0a8556d}} . Hence, it corresponds to the exponential of the communication cost.
- When conditioning on a small probability event {{formula:6849f0f2-190a-4c67-9e4a-1ad6595fb109}} , we can essentially ensure that it increases by a factor of {{formula:2d969884-ce51-48b8-a972-873b496f5195}} (Lemma REF gives a more general statement). Effectively, this only adds {{formula:017dc96f-57fb-45da-8217-8196a02bb1f4}} to the communication cost, which becomes affordable.

Note that the mutual information is the expected KL-divergence, and the {{formula:7e6fc418-2d34-443a-a049-88d118f11d0f}} -cost is the expected {{formula:71420e28-8323-42df-9322-5ccf113452c6}} -divergence (plus one). Similarly, we also define an “exponential version” of {{formula:2afbf033-445f-4fc2-a577-7f8cbed349f0}} , which we call the {{formula:4b425477-d3ee-489b-861c-8544f593a985}} -cost of {{formula:726fa373-e43e-4dc2-9312-9ba66ae372c6}} (see Definition REF ). It also ensures that the value does not increase significantly when conditioning on a small probability event. On the other hand, going from mutual information to its “exponential version” loses many of its good properties, most importantly, the chain rule.
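To make the "exponential version" concrete, here is a small numerical sketch (illustrative only; this computes the plain KL and chi-squared divergences of two fixed distributions, not the paper's cost measures). Jensen's inequality gives KL(P||Q) = E_P[log(p/q)] <= log E_P[p/q] = log(1 + chi2(P||Q)), which is the sense in which chi-squared exponentiates KL:

```python
import math

def kl(p, q):
    """KL divergence (in nats) between finite distributions p and q."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def chi2(p, q):
    """chi-squared divergence: E_P[p/q] - 1 = sum_i p_i^2 / q_i - 1."""
    return sum(pi * pi / qi for pi, qi in zip(p, q)) - 1

p = [0.7, 0.2, 0.1]
q = [0.1, 0.3, 0.6]
# By Jensen's inequality (log is concave), KL <= log(1 + chi2).
assert kl(p, q) <= math.log(1 + chi2(p, q)) + 1e-12
```

Because chi-squared averages the likelihood ratio itself rather than its logarithm, its expectation is sensitive to rare events with large ratios, which is exactly the concentration behavior the argument needs.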
The next crucial observation is that the chain rule for mutual information in fact holds pointwise, which enables us to work with the {{formula:5e02d1b8-20e4-4f45-a0b3-089ffb25c33b}} -costs. More specifically, let {{formula:fbe08c19-d889-475b-859f-2f2557c3654c}} be three random variables with joint distribution {{formula:ced193ec-e1b7-4bf2-bfa8-700bdd288613}} ; the chain rule says {{formula:2c1b7cfe-74fd-41ec-8d3a-efa587662f02}} . By writing the mutual information as an expectation, this is {{formula:1b820bf9-bb97-4054-b02f-452cfd0e16cb}} This equality holds pointwise in the sense that for any concrete values {{formula:aea6cd83-50c3-4679-9423-9e9764fb87d0}} , the equality holds for the logarithms inside the expectation {{formula:41dfea55-1379-4bfe-b3e0-ce4ba1fa662c}} by the definition of conditional probability. Therefore, the “exponential version” also holds pointwise: {{formula:be3f7a23-3f1e-49cc-a83d-f11efc3d54a5}} This is what we use in place of the chain rule for mutual information. See the next subsection for more details. 
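As a quick sanity check, the pointwise identity can be verified on a random joint distribution. The following toy snippet (not part of the paper's proof) checks, at every point of a random distribution over three bits, that the likelihood ratio factors into the two conditional ratios.

```python
import itertools
import random

random.seed(1)

# Random joint distribution p over (x, y, z) in {0,1}^3.
p = {t: 0.1 + random.random() for t in itertools.product((0, 1), repeat=3)}
total = sum(p.values())
p = {t: v / total for t, v in p.items()}

def marg(fixed):
    # Probability that the coordinates listed in `fixed` take the given values.
    return sum(v for t, v in p.items() if all(t[i] == c for i, c in fixed))

# Verify p(y,z|x)/p(y,z) = [p(y|x)/p(y)] * [p(z|x,y)/p(z|y)] at every point.
max_err = 0.0
for (x, y, z), pxyz in p.items():
    lhs = (pxyz / marg([(0, x)])) / marg([(1, y), (2, z)])
    r1 = (marg([(0, x), (1, y)]) / marg([(0, x)])) / marg([(1, y)])
    r2 = (pxyz / marg([(0, x), (1, y)])) / (marg([(1, y), (2, z)]) / marg([(1, y)]))
    max_err = max(max_err, abs(lhs - r1 * r2))
```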
Proof outline We now give an outline of the proof of the following statement: Given an {{formula:3fc98aa4-51ce-480f-9c82-917b834109cb}} -round standard protocol {{formula:759a2948-6281-4d44-bd2a-6a666f98b35d}} for {{formula:51af425f-c3de-413c-b23f-fbd4e76390b3}} with communication cost {{formula:e8f9e11c-8d1e-44eb-b035-4f310a462c5f}} that succeeds with advantage {{formula:6bd1a088-30df-49c4-8648-70e85a561ae5}} on the inputs sampled from {{formula:9bbd67ca-a0ad-4397-be87-ca0ddd5c086d}} , we can obtain an {{formula:5fbeb5d2-66f2-4c60-87d5-663e82c4915e}} -round generalized protocol {{formula:76f89fc2-9c14-4899-b499-13b568e6aafd}} for {{formula:efe10558-29b2-4799-9951-d5f7c0c1c418}} with {{formula:2521fce5-5325-478f-94d7-98fe15f23974}} -costs {{formula:7bd4b455-90a2-4122-a0da-a5d31746a815}} , {{formula:27b63028-0805-4ee5-bcbe-e8376c784905}} -cost {{formula:312097a0-c433-4ce8-97fe-a3f228841645}} and advantage {{formula:72cbe8bd-9b89-4301-839a-2f42b12a1d80}} . We will then discuss how to convert such a generalized protocol to a standard protocol with low communication cost in the next subsection. We first show that {{formula:3a02d670-7bb5-4223-a476-3bc6a6fbe1af}} is also a generalized protocol with {{formula:595478f3-1538-417b-9386-8d99d001f0dd}} -cost {{formula:f262b6df-51a1-4f7a-8bfb-917970373992}} and {{formula:d9a7ed0c-71e9-4a37-80a3-70f567a916ca}} -cost 1 (in the proof of Lemma REF ). Next, we decompose {{formula:1d6545a1-9fe1-4a5d-856a-146443b805ba}} into {{formula:450ca47e-e4f4-417a-a0e1-d577e7d16b50}} for {{formula:77092759-302d-4017-b8a9-7c1d8087e893}} and {{formula:e697fdc3-915b-4f0e-bf21-f7dc1dc96048}} for {{formula:94fbf604-5998-4bb2-8ba4-de416e83fe08}} , and prove that the product of the {{formula:c521195c-5fc5-40c4-abf0-8d037c566576}} -cost [resp. 
{{formula:c9987c48-9d0f-4639-b1b6-ab662000b98a}} -costs] of {{formula:7130b444-aae5-44a0-a7ef-0f7b11230c30}} and {{formula:8ec8fdc8-20e9-40d0-9bc9-3ed3e665f93e}} equals that of {{formula:c0f9a506-75f6-4750-bc55-d92c479173a6}} pointwise (Section ). Now if the advantage of {{formula:22baf286-7d1a-40bc-a433-6455ae784946}} is not roughly evenly distributed, we will identify an event {{formula:d9d0f6ac-14b2-4084-81f0-3bc46c39210f}} such that the advantage conditioned on {{formula:ad6f153d-e19e-48d9-8f4e-9514fc396757}} is much higher than the average advantage, and, more importantly, the advantage within {{formula:6a3ad7f7-4eb4-488d-9ad3-8e27079aa3da}} becomes roughly evenly distributed (not concentrated on any small probability event in {{formula:889e8ca8-9cba-411e-a151-8e2c9cdf3ecd}} ) (Section REF ). Conditioning on {{formula:090ecb32-81de-48f4-9fc8-d0b59a1b7baf}} increases the advantage, but it also increases the {{formula:7e81002e-a931-4e43-a11e-ca1e2f44d516}} -cost and {{formula:741f3246-8378-4d94-95dc-0fda9bf3f351}} -costs; it turns out that they all increase by about the same factor. 
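A toy numeric instance of the last point (all numbers made up): when both the cost and the advantage are carried entirely by the event being conditioned on, conditioning rescales them by exactly the same factor, namely the reciprocal of the event's probability.

```python
# Cost and advantage both supported on an event W (values are illustrative).
p_W = 0.1                       # Pr[W]
cost_on_W, adv_on_W = 50.0, 0.3 # cost and advantage inside W; zero outside
e_cost = p_W * cost_on_W        # overall expected cost
e_adv = p_W * adv_on_W          # overall advantage

# Conditioning on W rescales both quantities by the same factor 1/Pr[W].
cost_factor = cost_on_W / e_cost
adv_factor = adv_on_W / e_adv
```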
Next, we partition the sample space of {{formula:26e7d927-da0a-4fe4-80ff-baa394f7f147}} into {{formula:1230acf2-c73c-44b3-8631-ff0ec6996c8a}} and {{formula:b4a4f394-b110-4b28-b90b-0c8bce00796f}} such that: in {{formula:3e4a0fe9-f859-4afd-ba4a-f70b67b50c84}} , {{formula:64d6445c-b475-4ed3-a7d0-96cd6d24d678}} has high {{formula:8b1910c2-c3c9-47a1-956e-9cde9d64312a}} -cost or high {{formula:5e9bff1f-53ed-4611-8689-56af89409dbb}} -cost (excluding some corner cases), say {{formula:4f4501a3-a0d9-44fc-9541-1efb6aad7798}} for the {{formula:94b4abb6-21bd-4d1f-a105-55390a3b3bfb}} -cost or {{formula:99663ed0-43c6-4f52-8da8-e92da87c8331}} for the {{formula:ac0d68b7-f51b-4d06-a0a9-b401723afcc0}} -cost; in {{formula:2e22f8b0-da97-40fb-a521-72962a3669fb}} , {{formula:03ba3805-2b33-49eb-b31d-61b1edee0f59}} has low {{formula:d277dddc-a5f6-40a4-8040-919e986c6d98}} -cost and low {{formula:9ce28517-2403-4afc-9c06-9169b54f3597}} -cost (also excluding some corner cases); and {{formula:53962743-2d86-4ded-ba25-b81f9785a92a}} is the rest, which happens with very low probability. Since the advantage is not concentrated on any small probability event in {{formula:5f4f9fec-8483-41cc-8ebb-198c7e63a6c9}} , (at least) one of {{formula:15174cef-7c26-4890-a828-b181fbe9a044}} or {{formula:66ce2eda-ef22-4041-91b3-d6d7c012a8a7}} will have advantage about as high as the advantage of {{formula:a355b68d-0408-44fa-b369-0a8e4b5808f0}} . If {{formula:e7229f53-08eb-4d1c-907a-df6d57507c25}} has advantage as high as {{formula:5ad37be6-b8b8-42a9-86e5-6652ccae5107}} , then we prove, by the pointwise equality for the costs, that {{formula:bfb719e3-9149-46b6-8d7b-a211e1150745}} must have a much smaller cost than {{formula:ae386c66-2111-4268-bfbf-699001bc49d7}} , while they have roughly the same advantage (Section REF ). 
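The high-cost part of such a partition is necessarily small, by a Markov-type bound. The following sketch checks this on a made-up cost distribution (the paper's actual thresholds involve the specific costs above; the bound itself is generic for nonnegative quantities).

```python
import random

random.seed(2)

# Empirical Markov bound: the "high cost" part, where a nonnegative cost
# exceeds lam times its mean, has probability at most 1/lam.
costs = [random.expovariate(1.0) for _ in range(100_000)]
mean_cost = sum(costs) / len(costs)
lam = 8.0
p_high = sum(c > lam * mean_cost for c in costs) / len(costs)
```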
If {{formula:b1bee871-f52c-463d-b84f-feb012ca6643}} has advantage as high as {{formula:4887a8e3-67a2-46b3-af63-2575cdab45dd}} , there are two cases: if {{formula:2802cdcb-3b02-4761-a092-d54a5815d23f}} has high advantage, then we obtain a desired generalized protocol for {{formula:164954f0-8c28-494a-862d-f1088a9ae921}} with low costs and high advantage; otherwise, we prove that {{formula:85fd5d62-3f5d-4bb5-90c1-4f7367850a2f}} has a much higher advantage than {{formula:111e3050-9674-4f06-a1ac-4220d179f639}} (as the advantage of {{formula:87828fba-27e6-4245-a623-86bf2d338f2b}} is roughly evenly distributed within {{formula:aae54bca-ea69-432c-8a5e-6f0702f264d7}} ), while they have roughly the same costs (Section REF ). To summarize the above argument: if we do not already find a desired generalized protocol for {{formula:496a8df7-e35c-460a-a45b-e100e53d152b}} , then when decrementing {{formula:07e6be77-8c7d-47ac-b248-cf2614178ffd}} to {{formula:ff556787-d56e-4436-bdab-f399febce701}} , we first condition on an event {{formula:a0047751-9ca8-4993-9f73-327d514738a7}} , increasing costs and advantage simultaneously by about the same (though arbitrary) factor; then either we reduce the {{formula:e2e0b0dd-6834-45bf-bd18-df466a10886a}} -cost by a factor of {{formula:3445ecec-a572-425d-bce9-33a434519341}} , or we reduce the {{formula:ba9a1b70-461d-45be-b41a-6990691c8af3}} -costs by a factor of {{formula:b00863df-bb8e-4956-8a9f-245a3302eeac}} , or we increase the advantage by a factor of {{formula:c5f1340f-d305-46fe-88d8-67ddf07b5b71}} . Since we start with {{formula:8d4eceba-7e75-4d94-8775-3004b516544a}} -costs {{formula:36591846-43a7-49df-a5f3-20095adcaf38}} , {{formula:4ec11f65-e93b-45f8-8b2a-174e3f147787}} -cost 1 and advantage {{formula:b5a24094-7df3-4184-8dd5-19d748103f0a}} , we cannot repeat this for {{formula:deed8891-478d-4dfa-b15e-d4c7d6c78bc8}} steps without finding a desired protocol for {{formula:ee275fbb-f826-4b8e-807f-32fb0df5225f}} . 
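A toy version of this counting (with made-up starting values): if every step either halves one of two cost-like quantities or doubles the advantage, a logarithmic potential decreases by one per step and therefore bounds the total number of steps.

```python
import math

# Toy counting argument.  The potential log2(s) + log2(t) - log2(adv) drops
# by exactly 1 whenever a cost is halved or the advantage is doubled, so it
# bounds the number of steps.  Starting values below are made up.
s, t, adv = 2.0 ** 20, 1.0, 2.0 ** -10
potential = math.log2(s) + math.log2(t) - math.log2(adv)   # = 20 + 0 + 10

steps = 0
while s > 1 or t > 1 or adv < 1:
    if s > 1:
        s /= 2          # reduce one cost
    elif t > 1:
        t /= 2          # reduce the other cost
    else:
        adv *= 2        # increase the advantage
    steps += 1
```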
More formally, we will measure the progress using a potential function that depends on the costs and advantage of the current protocol, and show how much the potential must decrease each time we decrement from {{formula:34be101b-023b-4dc0-909c-afe1234bf8ad}} to {{formula:f0ecb954-4147-465b-9a62-496c25915eaa}} (Section ). Convert a generalized protocol to a standard protocol Finally, we need to show that the existence of a good generalized protocol implies the existence of a good standard protocol. We prove that if an {{formula:6965cb44-9326-4d61-b33e-917cfe5d3b86}} -round generalized protocol {{formula:c4c59ace-9ec7-4aac-887d-daf755f483e6}} has {{formula:2db53ffa-2a55-45ca-9369-3e7b3abe7570}} -costs {{formula:d01b4946-bb61-426d-b58f-1f7e294a21e0}} , {{formula:42f51d2b-8c5d-4243-b522-ad94a299f4b2}} -cost {{formula:8d08c9c7-d536-48ab-92aa-43f555f0f43d}} and advantage {{formula:691bdd89-f718-4c3c-b8e0-8c18ce56fd37}} , then there is an {{formula:442980a3-49ef-4c0c-9c55-8f6ab8abd203}} -round standard protocol {{formula:2343dfcf-f41e-4231-94df-cf857f3576dd}} with communication cost {{formula:34d3c539-b698-4041-aecf-3b81b71e69e9}} and advantage {{formula:354fd582-c37c-47e6-a1f2-866271d06e24}} . Together with what we summarized in the last subsection, we obtain the strong XOR lemma for {{formula:c10307a5-cd47-4f56-9d86-2c7043bdcf38}} -round communication. {{cite:1d6cf6ed23926cb5d5f384337af2f6ff46b1fb69}} converts a standard protocol {{formula:39d9474b-72fc-4ab8-8de4-85892f2165af}} with a constant number of rounds to a standard protocol with communication matching the internal information cost of {{formula:bbde0a45-d5e3-4bf3-a024-c70fc7f4cc3c}} . Using a similar argument, we can convert {{formula:d414c729-99f2-4456-ada3-ee8f4645b056}} to a standard protocol with communication {{formula:d76c6358-85f3-4e0d-8c39-ed1018d9af6c}} . 
By the convexity of {{formula:6eef0af5-9f55-43fd-8d75-5b9439a0fa1c}} , {{formula:6175d36a-e2d2-4e70-be40-7762ef392293}} -cost of {{formula:137bd8aa-6198-44bd-bffa-da47e99a6c2c}} implies internal information cost of at most {{formula:c3ef35fe-bb76-499f-8f61-e12b833963db}} . It turns out that the (almost) same argument applies in our case, for generalized protocol {{formula:35ba67c7-b766-4ccd-a581-cadc03bf83b8}} . Then the next crucial observation is that we can ensure the generalized protocol {{formula:9c8200c5-533d-4caa-b8a8-0e9d798a5722}} that we obtain from the arguments in the previous subsection has the rectangle property with respect to {{formula:0a73f3fd-d30d-44e5-ad89-a66bcaa298a3}} . Roughly speaking, it means that for all transcripts {{formula:9233b701-5206-4e9e-9530-def9f3f3c3b2}} , if we look at the ratio of the probabilities {{formula:61675a9d-b1a0-480f-bd58-59dcc9068e51}} , it is a product function of {{formula:995c1780-fb74-44d4-bcb7-f9cebfd2af1b}} and {{formula:77dddc2f-7ab0-4025-b74d-b2d13361a10f}} , i.e., it is equal to {{formula:45136a23-b438-4180-8a91-fb80d38a5aeb}} for some functions {{formula:b7266697-fe6d-4863-9da7-94712675a337}} that may depend on {{formula:f29077b2-5058-4d83-9e97-148a42bcc2b8}} . Note that a standard protocol has the rectangle property, since each message depends only on either {{formula:af94d379-6957-4da1-bd8c-a9e0fb5f87be}} or {{formula:f2ce850b-65eb-47ee-8db9-8e97c2c9eab7}} , and the same property holds even conditioned on any prefix of the transcript {{formula:5820ef4d-8a02-4960-b34a-8e2a7c5fcb70}} . A generalized protocol may not have this property in general, but we can ensure that the protocol we obtain has this product structure conditioned on any complete transcript {{formula:ffdf0862-374f-4840-b038-581117f76a4a}} . 
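A toy check of the product structure for standard protocols (the message distributions below are made up): for every complete two-message transcript, the matrix of transcript probabilities indexed by the inputs has rank one, i.e., it factors into an x-part times a y-part.

```python
import itertools

# A toy 2-message standard protocol: Alice sends m1 with Pr[m1=1 | x],
# then Bob sends m2 with Pr[m2=1 | y, m1].  All probabilities are made up.
p_m1 = {0: 0.3, 1: 0.8}                                      # Pr[m1=1 | x]
p_m2 = {(0, 0): 0.6, (0, 1): 0.1, (1, 0): 0.5, (1, 1): 0.9}  # Pr[m2=1 | y, m1]

def p_transcript(m1, m2, x, y):
    q1 = p_m1[x] if m1 else 1 - p_m1[x]
    q2 = p_m2[(y, m1)] if m2 else 1 - p_m2[(y, m1)]
    return q1 * q2   # factors as (part depending on x) * (part depending on y)

# Rank-one test: for each complete transcript (m1, m2), the 2x2 determinant
# of the matrix M[x][y] = Pr[transcript | x, y] vanishes.
max_det = 0.0
for m1, m2 in itertools.product((0, 1), repeat=2):
    M = [[p_transcript(m1, m2, x, y) for y in (0, 1)] for x in (0, 1)]
    max_det = max(max_det, abs(M[0][0] * M[1][1] - M[0][1] * M[1][0]))
```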
After generating a transcript {{formula:5f0f8feb-7e25-4d76-bf2a-4a39e54aad6b}} using {{cite:1d6cf6ed23926cb5d5f384337af2f6ff46b1fb69}}, the rectangle property allows the players to locally “re-adjust” the probabilities (via rejection sampling) so that after the readjustment, the probability of a triple {{formula:d8350160-ba97-4201-9845-d2faa06bb5d0}} is proportional to the “right” probability {{formula:eeb4f40c-76e1-4a3c-b1ca-718ced5e8917}} , which in turn, gives the advantage proportional to that of {{formula:33356181-71b5-4b63-8dfd-68114fad1039}} . The probability that is sacrificed in the rejection sampling depends on how far {{formula:646d8761-0e31-48a4-9d1a-c58af705c66b}} is from a standard protocol, i.e., the {{formula:06cae0c4-45e0-4e11-9fd5-8f3b7fffce8c}} -cost of {{formula:12e9575f-11d9-477a-9f73-64a36a94fba6}} . It turns out that the above argument gives an overall advantage of at least {{formula:86b1c4cc-d754-4158-957c-0b8b1a00c6fb}} divided by the {{formula:e7476f16-b40e-4489-bf90-ba3a5351e6e3}} -cost of {{formula:33525225-6552-493c-b160-f4a32da9e0b5}} . See Section  for the formal proof. Notations and Definitions for Generalized Protocols Notations and standard probabilities Throughout the paper, all logarithms have base 2. We use {{formula:c05c1826-a6d1-4ff2-850b-579195aa02f9}} to denote the set {{formula:7c0aac96-19e3-4dfa-be2a-fe3dd0a2b571}} . Let {{formula:fa184bdc-3b77-4462-82c6-ecfbd70d8d84}} be a binary-valued function. We use {{formula:6448d93f-ad97-444a-b57f-2535b67e71da}} to denote the function {{formula:f8c8f233-80f4-4a52-b97c-8c78238b5948}} such that {{formula:469a7cd0-1534-461a-9cab-5a39f0913d0e}} where {{formula:689a69fe-5575-4d7f-8f4a-c3b9646f905d}} is the XOR operation. Let {{formula:6ad8742a-0efe-4097-8482-5189ed87760f}} be a vector {{formula:68a28af7-c9c8-49f8-8303-f2f3d6aabc8f}} . We denote the prefix {{formula:9fa9aaa8-1686-48f7-a16e-793d7b2208c0}} by {{formula:dd509110-f37a-4a27-8062-bba008e27382}} . 
Similarly, {{formula:7f3bf676-e4c6-4347-a4c8-c2c6d37de5f7}} denotes {{formula:621abe7e-8097-4b25-82c4-f73c517d28e3}} and {{formula:c7ccb05c-3160-469f-97f1-85642d807593}} denotes {{formula:a05ad5fe-1839-4474-a7c2-4496a8d8a0a2}} . For vectors {{formula:350462a2-7fda-4785-90c9-92687a84fa91}} where we start the index from 0, {{formula:06a4c0fd-cbc8-488e-8abf-e4c8eff3f980}} denotes {{formula:0b9b31bd-bdea-46fe-82e7-4971acf51d00}} . Let {{formula:e72c3b75-8b95-4167-a805-d5e91b6cae42}} be a distribution over triples {{formula:59edcc24-da34-4d0e-9d3a-bccc800f5b64}} , where {{formula:1f5493d9-62f1-4fc8-b331-2937f5832548}} . For an event {{formula:c0ab4eb5-766a-4910-a96c-21331730b910}} , we use {{formula:913abeec-a9ec-4572-ae88-5fc36d4969f0}} to denote its probability. For a random variable {{formula:0cd35a87-d9df-4df3-a32f-faea08727c4c}} , we use {{formula:7a2986c7-31ce-4e65-9f73-8e09d762b1be}} to denote the probability of {{formula:d910c16c-21b3-4c06-84e9-5afc71e4931c}} in distribution {{formula:8630be46-dda2-4d7b-9218-30f51feaf8b7}} , which by itself is a random variable that depends on the value of {{formula:e9290b8b-211a-4590-b5f7-500330899a1b}} . It is similar for multiple variables, e.g., {{formula:8ef13338-21c5-4294-9ab3-b31fb628176a}} denotes the probability of {{formula:1a25f7a3-3a08-496b-8142-b2203b471087}} . Let {{formula:6e3a16a4-d241-497c-a08b-8e8e0f3d3c00}} be a set of possible values of several variables, say, {{formula:8391a62f-acc3-4f48-8d11-842d2555f5d2}} is a set of possible values of {{formula:a58e2f1e-1ca5-4191-9818-79eaecb2cc08}} . We use {{formula:e9c7f798-1f10-45e1-ab16-3cc8ab352827}} to denote the probability that {{formula:785f5f93-0276-4fd6-8fd8-4414fea26410}} , i.e., {{formula:8a0ee588-13b5-4bfc-83d6-867466a5f073}} . 
When there is no ambiguity, we may abuse the notation, and use {{formula:48929fed-cd2c-4625-91d7-a2e3cb623caf}} to denote the event that {{formula:9e4bbdd6-c057-4fa8-8842-310767e209a5}} , which is the set {{formula:82b0ffa4-51b4-49ce-b9af-cd945ce0f0a4}} , e.g., if {{formula:cdd5be98-16f0-42d9-8b9a-a0eb99f65d08}} is a set of possible values of {{formula:edb3602b-a05e-42c6-b442-c05fb4bff8fa}} , then {{formula:dc6e101a-6657-4d06-9dfa-abc18cbf9d31}} is the event that {{formula:a4b33b89-7d84-49a1-b0b3-972c339d4ea6}} , which is the set {{formula:a9101c81-3b90-46a7-bcda-4c2696a23b27}} . The {{formula:03a37d65-4c74-49ec-8c6d-ce58b0c94788}} -divergence of two distributions is defined as follows. Definition 3 ({{formula:4c37d1cc-396e-412b-838e-f19125e3cd82}} -divergence) Let {{formula:ffc79801-f158-4ddf-ae9b-8ca1c6198825}} and {{formula:a38a0dbd-8476-476d-ab61-09d71dd54053}} be two distributions over a sample space {{formula:d47f2976-2d4c-4509-ab2a-36f09df464b9}} . The {{formula:202ad3f5-1894-4515-8303-3a59c929dfd9}} -divergence from {{formula:5e0e4329-8b40-46e2-a1d5-32c389c156ae}} to {{formula:c58822c6-268d-4468-8129-04ea83542f18}} is {{formula:76d0e484-d098-4f3e-8a4a-1dc1e8b0eb37}} The KL-divergence of two distributions is defined as follows. Definition 4 (KL-divergence) Let {{formula:a658607b-d6f4-4efd-a4a1-122fe8650706}} and {{formula:6c0e9054-1d38-4880-94a2-c7e046d9eb2c}} be two distributions over a sample space {{formula:66fa80a9-04ac-4ca9-a556-2585e6c88c34}} . The KL-divergence from {{formula:be76dcf5-0abc-4525-90e3-afc7f11a1119}} to {{formula:9d3a9a74-ced7-4534-a591-426879fefcc1}} is {{formula:06fdcc39-a06f-444b-ad62-a23faf3138a6}} A simple calculation gives the following proposition. Proposition 5 Let {{formula:46b674cc-50b0-4013-ba49-569659a4acc3}} be two independent random variables such that {{formula:d73b07d1-ed60-480c-824b-72119f724725}} and {{formula:d87b9094-235d-4304-aab8-b3484a39ee8d}} . Then {{formula:814f111a-4854-479d-8e97-dfe07cd163af}} . 
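A natural reading of Proposition 5 (our assumption here, since the displayed formulas are abbreviated) is that one plus the χ²-divergence is multiplicative for product distributions. The following snippet checks this identity numerically on made-up distributions, directly from the definition above.

```python
def chi2_div(p, q):
    # D_{chi^2}(p || q) = E_{x~p}[p(x)/q(x)] - 1 = sum_x p(x)^2 / q(x) - 1
    return sum(pi * pi / qi for pi, qi in zip(p, q)) - 1.0

# Two made-up pairs of distributions on {0, 1}.
p1, q1 = [0.7, 0.3], [0.5, 0.5]
p2, q2 = [0.2, 0.8], [0.4, 0.6]

# Product distributions on {0,1}^2 (independent coordinates).
p12 = [a * b for a in p1 for b in p2]
q12 = [a * b for a in q1 for b in q2]

lhs = 1 + chi2_div(p12, q12)
rhs = (1 + chi2_div(p1, q1)) * (1 + chi2_div(p2, q2))
```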
Generalized communication protocols For most standard communication protocols discussed in this paper, we pair each with an input distribution, and study the joint distribution. Definition 6 (standard protocols) An {{formula:d7721f05-b962-441e-a410-cd3cd52d4304}} -round standard protocol {{formula:f839517a-aa1f-4f6b-8be6-f80e751d0f1a}} for input distribution {{formula:10124073-7dbc-4a64-8234-53fc31935e8f}} is a distribution over triples {{formula:c4c79351-d04e-4526-a0ac-46f0add7d445}} where the transcript {{formula:7d207291-be01-4f50-942c-fb451ffdb76e}} , and each {{formula:c7d276af-fb15-4e84-9020-f0cb1a09ed7b}} is chosen from a prefix-free set of strings that only depends on {{formula:9b2519d9-6f46-4d40-b39f-05f859e5acee}} . Moreover, {{formula:2c404b2c-3714-4e95-8e6d-999cbcaab023}} ; the public random string {{formula:523e9575-1e54-4126-af57-ce7bdccd319c}} is independent of {{formula:84e68735-ce3b-4e85-934a-295961683307}} ; for odd {{formula:97b08c94-e01f-49b5-a041-559f4fe709a1}} , {{formula:84c72a7c-b27e-4e12-910b-bfb43339b8c2}} (a message by Alice) is independent of {{formula:83f03c14-cf8e-402e-b5f7-b1825f719be9}} conditioned on {{formula:94d56f05-92b6-4970-a93c-2f0fd866ed67}} and {{formula:1f9f573a-7618-4759-bf13-6d6cfd327c22}} ; for even {{formula:8b20a987-a6ce-4755-b954-529c478a6cc3}} , {{formula:e0d36635-b80b-4e56-b558-1f57b87a513d}} (a message by Bob) is independent of {{formula:c7f53349-2020-4428-8e6d-217179163b0a}} conditioned on {{formula:e808ca0b-349d-4bb8-bfb3-8d9a7c2733a4}} and {{formula:0d0b8d7a-da26-48c1-b126-fbe27ef25f93}} . The output of {{formula:da546b26-df94-4cc0-a965-fcb4b881fa6a}} is a function of {{formula:e8a8b5e6-5d0d-4d2a-8643-9881eac3d018}} . Now we define {{formula:d08e0496-834d-49fe-ae79-e7ca6a0463ac}} to be the maximum success probability of a protocol computing {{formula:c9afc9ae-6247-49d6-ab7f-361c978d1637}} under the communication cost constraints. 
Definition 7 Let {{formula:ef5fb19f-57d8-45a6-bb8c-f1c83fa0666b}} be a function, and {{formula:fbec7349-739c-42f4-b2be-a1e02305e7fa}} be a distribution over {{formula:10b39aa8-d87c-475d-aa06-a6a5076ff85c}} . For {{formula:6b807b28-0827-4f1f-ad69-96cba3411c8c}} , let {{formula:928f8273-cd6b-420a-8eb8-fcac8bc1341f}} be the supremum, over all {{formula:959ec242-edce-4f8f-8ab3-96a8074f8469}} -round standard protocols {{formula:d2167d6e-1841-49c9-886f-ef9dbf322397}} in which Alice sends at most {{formula:e5ce91b5-85a1-480d-a8d2-7927eb0c361e}} bits and Bob sends at most {{formula:89bed715-22f4-4e59-8b4e-cb195dbb6860}} bits in a round, of the probability that the output of {{formula:32685534-e3f4-4ee1-a85e-ecea516bca2c}} is equal to {{formula:be3a3d34-73aa-4b80-b192-6cbc87d8d407}} when {{formula:d03a42a0-0130-4c58-997a-b8ccaf8ec404}} is sampled from {{formula:a10bf27d-2996-4e05-9e19-bd0dd16b9954}} . Remark Without loss of generality, we may assume that {{formula:90946875-2772-4587-b969-012e44dabaee}} . This is because the output of the protocol is a function of {{formula:d438dcc3-7bb1-486b-ba9f-71d2732887f0}} . Instead of {{formula:c664bcd6-d26c-472c-b8c3-c0ad8e627dbc}} , we could always let Bob send the output, which is only one bit. Next, we define generalized protocols. Definition 8 (generalized protocols) An {{formula:63e28923-f976-4148-926a-97b6cf2dc377}} -round generalized protocol {{formula:f90ab806-a4df-43e8-85c3-2698a9eda22b}} is a distribution over triples {{formula:3450c0ac-eb32-4db8-9eb0-56cef9a0f41a}} where {{formula:87d8845a-f986-4f1f-a985-83f71c6e014d}} , and each {{formula:86a70213-a228-4723-a9a9-c5c8cabe5ab9}} is chosen from a prefix-free set of strings that only depends on {{formula:a98933e4-0e92-48a2-933d-66e105131ff9}} . In this paper, we also only consider generalized protocols with {{formula:a359cc6b-e2fa-4321-8d8a-8f150dffccbb}} . Clearly, a standard protocol is also a generalized protocol. 
One should still think of {{formula:2f86c045-930b-4782-b9e8-81d0557c67bd}} as the public random bits, and of {{formula:4bf792ca-2142-40a2-9f4f-379b273199f4}} as a message sent by Alice if {{formula:a77355e7-c91c-4027-98a0-942d9c307c5e}} is odd, and by Bob if {{formula:b35d7554-fe84-403d-babd-c89e942cc8b0}} is even. The messages and public random bits are allowed to be arbitrarily correlated with both players' inputs. We do not explicitly define the output of a generalized protocol in this paper. When we study the correctness of a generalized protocol computing some function {{formula:6f8cd1cd-2431-4b87-b650-bbd5ddf3588b}} , we characterize it using the advantage. Definition 9 (advantage) Let {{formula:ed0e6b44-2711-449b-932d-d2ee7c204b3e}} be a generalized protocol, and {{formula:54171890-3b93-401e-b3f2-dded35e868b5}} be a binary-valued random variable (e.g., {{formula:f4d10f1e-5766-4323-b412-4a70e38bb6cc}} for {{formula:023678a2-ef5e-4d60-9523-d2ea34d5919c}} ). The advantage of {{formula:8ad42166-944e-4059-b5b3-0f94fad3a764}} for {{formula:7af3b09a-dd2e-4b1e-b8c1-6169d0e66f8e}} conditioned on {{formula:44d804d7-c568-4f06-8c64-8326a54e8012}} is {{formula:43e20cad-0253-4dde-9b58-86961fefc0a7}} We may omit the subscript {{formula:bd3650bd-65f7-4e68-adfd-9025e27c8174}} when there is no ambiguity. Note that fixing {{formula:d91e52a7-962f-4ad7-a8f1-f76b20a34d2e}} and {{formula:074b714f-fe60-4866-8a5c-f77886064534}} , {{formula:adacec4a-e849-450a-94f5-1a360d664c9f}} is a function of {{formula:e7103e95-9190-4f87-8cf0-e1aae87b3041}} . 
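For intuition, here is a toy computation (made-up numbers) relating the advantage to the success probability of the best predictor: guessing the likelier value of a binary value F given the transcript succeeds with probability (1 + advantage)/2 on average.

```python
# Made-up joint distribution of (transcript, F) for a binary value F.
joint = {('t0', 0): 0.3, ('t0', 1): 0.1,
         ('t1', 0): 0.2, ('t1', 1): 0.4}

best_success = 0.0   # success of the best predictor of F from the transcript
avg_adv = 0.0        # expectation over transcripts of |Pr[F=1|t] - Pr[F=0|t]|
for t in ('t0', 't1'):
    p0, p1 = joint[(t, 0)], joint[(t, 1)]
    best_success += max(p0, p1)   # predict the likelier value given t
    avg_adv += abs(p1 - p0)       # contributes Pr[t] * advantage at t
```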
For a standard protocol with input distribution {{formula:88c8d1d2-ded8-4be2-ad39-a39700f59679}} , the largest probability that the output can equal {{formula:11d6d525-1d52-4231-816a-f9d276376e22}} when the transcript is {{formula:8ee8cd79-e077-451e-bd65-71a244a974d0}} is {{formula:04cfc2e6-f7b3-43c3-a6f0-cafa309c84e1}} Thus, the overall success probability is always at most {{formula:16988487-19b4-4a8b-97a3-c2bcab79f511}} For generalized protocols, we will also use {{formula:2b4f341b-2ca6-4f26-9f3a-085d1289366f}} to characterize the success probability. The (conditional) advantage is superadditive when weighted by the probability of the condition. Lemma 10 Let {{formula:d1712024-7e7d-4cdc-a75e-94c3bb61d55a}} be disjoint events and {{formula:c9b5d29d-9a36-4b9e-be55-32b11f25042a}} be a set of random variables. Then {{formula:521c936f-e4c5-41d3-9875-99d222dd8b31}} By definition, we have {{formula:09bb1a51-d3d5-4488-a263-3c1116984bcc}} The following proposition states that knowing more can only increase the expected advantage. Proposition 11 Let {{formula:bd3762f1-179d-4373-9e63-353bd1c34773}} be two random variables. Then {{formula:fcf5351e-10b2-47e6-8c15-dacbf482a1e9}} We have {{formula:c3d294ec-afad-4196-8acc-2ba22f83241f}} The {{formula:6b6562b6-9fad-4f51-b281-8752685740f5}} -cost and {{formula:3fd63f1f-fb29-48fa-8b99-7524037557d9}} -costs In a standard protocol, Alice's message must be independent of Bob's input conditioned on Alice's input and the previous messages, and vice versa, while a generalized protocol allows arbitrary correlation. The {{formula:c500ed73-fc48-48a9-b995-5fb0c55d1c5c}} -cost of a generalized protocol measures this correlation. 
Definition 12 ({{formula:ea662143-396f-434f-972c-b9f43cc2b6a4}} -cost) The {{formula:c2d50344-7a41-40b2-9476-d1745394771f}} -cost of {{formula:f247d7ef-b97b-4bcd-ab3e-a42b9276bef2}} with respect to {{formula:a07f52dc-0af4-48f0-b3b7-e2894a851556}} at {{formula:f6df9ac7-7234-4e66-8566-bf19bacab93d}} is {{formula:2f8872bc-b9e6-4973-9040-0ccbe04ee9b4}} The {{formula:ead35ace-7a28-47d9-bc82-351a9fd8cbce}} -cost of {{formula:9247b5b8-08d3-41a9-81b6-f6cbb478ba8b}} with respect to {{formula:f1315109-9923-4168-8641-036449586320}} is {{formula:a184b869-e762-454e-8cf7-6ac2d692be7c}} For an event {{formula:345ba125-a64f-4914-8788-822ec6c2faa8}} , the {{formula:1fa19669-8edd-4c72-b49f-321a8e3c479c}} -cost of {{formula:84d57093-ad94-4904-bf42-1c36dca40979}} with respect to {{formula:7c0902db-16e0-47a0-9020-1b8115077dff}} conditioned on {{formula:13fef774-f78d-4efa-a933-26e56cc341ea}} is {{formula:fcd75225-55db-45d0-a779-0793052ed6ff}} Remark We emphasize that {{formula:7d9575e6-8e19-476a-b3a2-722b3453baf1}} is different from {{formula:b911fdf4-708d-46c5-a60b-bb4b9e72f098}} , where {{formula:2b40400a-9c9c-44e7-947b-3c6303ab5ed3}} is the distribution of {{formula:5f32881d-fb42-413a-97c6-55c16bf7ac4c}} conditioned on {{formula:5c6c87d1-4466-4311-a24c-8ae8b735d9fa}} . According to the definitions, although {{formula:1293e57b-e83d-4d72-a004-45b6d39f7780}} is sampled from {{formula:9e6d5fba-da75-4212-b555-4f4ade54d985}} in both cases, the quantity inside the expectation is different. For {{formula:a7bc9b20-cab0-4895-ae69-cdbb4f88931b}} , we still measure the {{formula:861b8160-236e-4ecb-a754-bb7bfa500115}} -cost at {{formula:2adfbee6-f7a3-4865-875c-707966206b3a}} according to distribution {{formula:4b03fae8-5850-4922-9d40-dcce8fbd587c}} , while for {{formula:c33a3873-1672-454c-96c5-da8fc8a9dced}} , we measure the {{formula:dac088d0-ff05-4f6c-90b8-0ac8cb6ed184}} -cost at {{formula:ac582150-72db-4d58-8e74-d9ca948cc7b9}} according to {{formula:4865207b-49e8-4620-b28a-c5fe4679fe9b}} . 
Remark Let {{formula:5bdd1494-3c6d-4209-8790-47ce5b60c4f4}} be the protocol obtained by “making {{formula:9abf185d-c3cd-4f91-84fb-2391bed2583d}} standard.” That is, {{formula:7bc49628-162f-4c17-80d8-dbb2973cbf32}} is equal to {{formula:2ef6a332-5a34-4c1c-837b-3440e0542787}} ; {{formula:35177377-c157-49fb-9fd3-77f22bdcd097}} is {{formula:76eb9021-40bf-40b6-a13a-ac184eeaba04}} , independent of {{formula:b7f4de26-e2f0-4055-bfce-fdd2dd27e2f6}} . Each odd {{formula:e1da5953-7668-4d1c-bf0c-586425cf0dd4}} is sampled according to {{formula:7775b91c-3e48-44cf-a630-8ff5dfc4dcaf}} independent of {{formula:03732054-673e-4e4b-af2e-80bbe3dc87fb}} , and each even {{formula:dc3b69d3-6cb0-4e35-9c62-6178ea776569}} is sampled according to {{formula:af7590c3-453b-453b-8754-56f6edade5a9}} independent of {{formula:d2c29988-378d-45b6-9928-0861683983fe}} . Then {{formula:345986bb-46fa-4d51-8d84-6e2afc43d61b}} is a standard protocol such that {{formula:b634820a-f326-4d47-a1f5-53737afb4d4d}} The {{formula:2b56623d-9501-4984-baed-7de69b9bb7f6}} -cost of {{formula:7d04894a-5657-4233-b009-d8b235b284e1}} is simply the {{formula:a50f625a-76f9-4d0c-9b05-e88c5bf3ef49}} -divergence from {{formula:f5611aa0-eb3f-4d91-85bc-29438d5c07f4}} to {{formula:42b31a68-e3d3-4699-a8cb-1a82a5e20baf}} plus one. By the above connection to {{formula:8d499844-73fa-48f1-894e-679bf91731f6}} -divergence, we have the following proposition. Proposition 13 For any protocol {{formula:60acde75-c204-4f97-87c5-edbb76d3d3b8}} , we have {{formula:c0faec4b-a88d-4d91-9646-9fd5255e6884}} Let {{formula:21767d7e-eee4-4b20-bd89-26e111bdc12d}} be the protocol by making {{formula:6073c21f-5ff0-42af-afae-7ab4dd8aa27e}} standard as described in the remark above. Then we have {{formula:608c7ea4-c38d-48da-8891-05ccd783f5af}} Therefore, {{formula:3d920d0e-8c5b-4929-b48a-39abb2d03887}} By standard bounds on conditional probabilities, we have the following bound on the conditional cost. 
Proposition 14 For events {{formula:0502ad83-2352-44e1-8a39-963a4981f344}} , we have {{formula:61b44402-19b0-4b56-a779-665d46188137}} Since {{formula:364e46c8-2f3f-4c01-83b3-a96f43468f95}} is nonnegative, we have {{formula:376bce4d-daaf-49ca-9ba1-cb99e6a56262}} Next, the {{formula:2fe30eef-3d62-4bac-a0e8-3f062b35b150}} -cost measures the “communication cost” of a protocol: how different Alice's input becomes in Bob's view at the end of the communication compared to that in the input distribution. Definition 15 ({{formula:fa034ec1-a1c7-4187-a162-cc1bba8edaaa}} -cost) Let {{formula:99488500-f310-4441-baf4-6f9a0a242446}} be a generalized protocol, and {{formula:64f38347-46cf-4ce5-920f-78cccb66f54c}} be a distribution over the inputs. The {{formula:0ca623cf-7b6f-4721-be47-7de252fa82fa}} -cost of {{formula:3d38531f-1901-4ae1-aeba-0b07610e8c1f}} by Alice with respect to {{formula:e518d2c7-58f7-43fd-a34d-cc3297051e3b}} at {{formula:11939bd5-9f53-4e5b-8f4b-40c7defc805d}} is {{formula:53b2c05c-8ccc-4edc-8b95-14d5a8b691ac}} the {{formula:a62555b6-e2ff-4fe1-a6f5-30b5191cb8f0}} -cost of {{formula:ee158114-4743-4267-8f92-a45c6a3dc18b}} by Bob with respect to {{formula:b4defd07-0191-413f-ba32-ab255f968c2b}} at {{formula:767c07d7-762e-4f6c-8f77-962ae9fe8210}} is {{formula:829da770-2bf3-4b88-bb87-9e25746916f5}} The {{formula:0adff998-b69a-4094-96df-6e17a3a218cf}} -costs of {{formula:3829bdeb-254f-47de-acda-004afb1387c5}} with respect to {{formula:b27bd75e-6427-4f90-b732-07932f013ffe}} are {{formula:13160beb-9560-4b99-9c04-62e14f737bb5}} For an event {{formula:d9610ad3-d2a1-4e99-a73d-c3f4e679ea90}} , the {{formula:af8f2f8a-c47c-4a9e-8ee3-581b0e3dd120}} -costs of {{formula:8a87a3c2-2789-4cd3-b715-2bbeac2618e2}} with respect to {{formula:62dcd59b-7e13-4fd9-abf9-f3c3ed4cb829}} conditioned on {{formula:a496d922-aca5-4686-bac2-be29f460739b}} are {{formula:a0487979-9df8-484e-b978-8d42955a6f51}} Remark Similar to the {{formula:49793b5b-2cd6-4576-b5a5-ecf66efe26db}} -cost, 
{{formula:e242c31c-198d-446a-8ad6-49c742cc5024}} is also different from {{formula:f7ea4a91-6e5b-4fe1-ab6a-dc688be147bd}} . The {{formula:a3e9a58d-b050-4c33-bef5-4721788c974e}} -cost of {{formula:d6e439a3-bcf6-479a-9d75-94ecf19ad469}} by Alice is the expected {{formula:0a9358cc-8474-4310-b9ab-915bdcdb8bb9}} -divergence from {{formula:3013bcc6-e706-4162-ab40-4bb6d749dbc0}} to {{formula:8a898d8b-391e-4e80-afda-fbcfc29309c8}} plus one. Similarly, the {{formula:d5a6347e-8284-4f09-8b29-d3250b0fb16c}} -cost of {{formula:a1bb5d32-160c-4f4c-9715-50f727cb394c}} by Bob is the expected {{formula:93d17914-e214-4af5-94aa-69f2eb559aaf}} -divergence from {{formula:78dbc1df-9c2c-499e-b33e-a4cff6596822}} to {{formula:ca413d97-67c9-4e4f-99e6-058f320f4ef6}} plus one. Observe that for standard protocols, if we measure the expected KL-divergence instead of the {{formula:82611cd2-04f7-4736-aa06-6fbe0b6bde63}} -divergence, then we obtain the internal information costs: {{formula:95fe59fc-c5d4-418e-a0ac-2c1589ae0a64}} Similar proofs to Proposition REF and Proposition REF give us the following two propositions. Proposition 16 For any protocol {{formula:01b658c6-da55-4794-95e5-ecf1e42bd8fd}} , we have {{formula:6a1f50d9-5967-4a42-a238-8c5214ba421f}} and {{formula:fec16f3a-86fd-48eb-87c2-a9f061cc88c2}} Proposition 17 For any events {{formula:184fb8b8-4ca8-470c-86c5-f206264b6a8a}} , we have {{formula:9083317d-316d-4b3c-9800-43473974475d}} and {{formula:e98d3c38-4650-42c7-8bc1-4961aaab4e1b}} Rectangle properties in generalized protocols We will maintain the rectangle property for the generalized protocols throughout the proof. 
Definition 18 (Rectangle property) A generalized protocol {{formula:19e7f8e7-624f-48be-8d58-4d9071ef905d}} has the rectangle property with respect to {{formula:7f7acfd3-a9a7-47ca-a2c7-83e5a7ff6611}} , if there exist nonnegative functions {{formula:9b99bb01-71e8-4a50-84a2-cee3925b7f4f}} such that {{formula:98d74ef3-7589-4d92-a2c2-6a6cfb619c4d}} Let {{formula:f3d25615-0bdb-4ce1-a1e0-bd9b6f92af03}} be an event; {{formula:b75e0af9-2d53-41b7-98a6-74edfbacc392}} has the rectangle property with respect to {{formula:5e89b9de-713e-4899-becf-2b08c6f4ebca}} if there exist nonnegative functions {{formula:5304ec04-58d8-4264-b919-bfd1cffcfe57}} such that {{formula:f4897850-4564-444d-b8f2-01eacad7f02b}} Equivalently, {{formula:e765c804-f966-4509-b5bd-99c63791e780}} has the rectangle property if for every transcript {{formula:d97a8144-1fa3-4c79-aba0-71f8ca49813a}} , the posterior distribution {{formula:ce6cede2-2477-4939-88b7-6ab8ea01f0e6}} is equal to {{formula:d4a0a223-1317-4b3b-a35b-58d40cb01376}} rescaled by some product function with one factor depending only on {{formula:c0e4fe90-79a7-4842-bcfe-4fb74635d12d}} and another factor depending only on {{formula:00b3a394-3d2f-4561-871c-bf549e170a55}} . Note that this property holds for any standard protocol, since each message {{formula:f2cfb2b2-4ae6-4d43-a247-593999cecbb5}} conditioned on {{formula:7afdea87-caa6-4ee4-99ba-7e267b461840}} only depends on one of {{formula:3950d529-2f83-4396-9419-0999ecb8ded1}} and {{formula:c5cc573a-a282-41fd-a7f5-4573d4db5f99}} . Hence, for standard protocols, we have such a product structure even conditioned on any prefix {{formula:c0805f02-266d-475b-83ff-0e0264adaf66}} . When decomposing a protocol for {{formula:8747af9f-3a67-4f5e-80ef-1eb6cf25494a}} instances, we need the following definition of the partial rectangle property. 
Definition 19 (Partial rectangle property) Let {{formula:805b748a-d74c-4809-adbd-529eeefece4e}} be a generalized protocol such that {{formula:72de9ac9-7cbf-4e41-932a-fdbea844bb2d}} and {{formula:e5a71c76-79b4-4162-8fee-3cb34f4aedab}} . {{formula:b0603222-f849-4e6f-b19d-fffd5ec30858}} satisfies the partial rectangle property with respect to {{formula:771a70c6-b79c-4c32-a79f-e21b5cf9099f}} if there exist nonnegative functions {{formula:fa93622a-6941-48ad-a385-cc5298f76391}} such that {{formula:fecb2c37-00a5-4b06-b7dc-c9ec642ebc45}} Let {{formula:bf27703c-98c7-4b89-a400-d6f727367482}} be an event; {{formula:311d9f3f-9a55-4bcd-be5c-2d82f5676b9f}} has the partial rectangle property with respect to {{formula:35d900a0-395e-4bd8-b75c-ab12c29532dc}} if there exist nonnegative functions {{formula:a061da5a-4f78-4287-81f6-989512da11fd}} such that {{formula:18ab1409-08b0-4db5-984c-093669b2600e}} Proposition 20 If {{formula:d92edfd1-da10-45b3-8c11-850e32627b33}} has the partial rectangle property, then {{formula:edf73e30-45de-45b3-b368-ceb308576462}} and {{formula:6714a728-a59c-4f11-980c-95694240924f}} are independent conditioned on {{formula:5f19c3ee-2cfe-43c7-900d-b28d22d69c6d}} ; if {{formula:a0cf51ad-a167-4b22-8117-4cf0add903fb}} has the partial rectangle property, then {{formula:a0c659d0-ab41-46b2-bdae-d29970cb6c0a}} and {{formula:5dec4aee-1164-4c30-85bf-585768ddad5f}} are independent conditioned on {{formula:8d9dc249-7098-49be-9cc6-e09701b9603e}} . If {{formula:ac94a37e-6fe3-464b-817d-1f148e35203c}} has the partial rectangle property, then {{formula:31b3b04c-cdf8-460b-bb39-ae8e4bbbdf85}} Note that given {{formula:35e8e27c-fca7-4fab-9197-6350e914f224}} , the first factor depends only on {{formula:db4b5ba1-ceaa-41b1-88c8-d1ba9d1059da}} , and the second factor depends only on {{formula:7768e84a-8e51-490e-a94d-524ddd5062ba}} . 
By normalizing the two factors, we obtain that {{formula:c649dbc0-acbb-494f-bfc6-aa7845d6caae}} The proof for {{formula:d021b6f0-063b-4754-8a39-4bd45a0595ce}} is almost identical, so we omit the details. For a protocol {{formula:e23b4d55-c758-480b-9aa0-261a85f848ab}} , we define the following sets that are related to the rectangle property and the partial rectangle property. Definition 21 Let {{formula:b733174e-6e00-423b-bc66-18513d7bd04a}} be the set consisting of all possible pairs {{formula:2b4cd292-90cf-4a44-a497-b71890379413}} . Let {{formula:b06564d8-0190-48d1-9bb0-7457f5223bbe}} be the set consisting of all possible pairs {{formula:c8e179b1-e7d6-49f7-94a1-adf9ce2f45a4}} . Let {{formula:b0e008f7-609f-4c1b-908e-fa4da99742e2}} be the set consisting of all possible triples {{formula:b7487c78-bdba-4a19-8041-c906f7807c35}} . Let {{formula:91609091-63b6-4f3b-b152-89062a9e0644}} be the collection of all possible events {{formula:f2c939a2-ce22-440b-a711-c9f7683947b0}} such that there exist {{formula:9e5e9efb-6eab-41c8-b63a-9ee7cd1de423}} , and {{formula:026fede0-3198-4b8a-9758-2dce23516ac6}} Let {{formula:70b96aaa-65a0-4490-bcb0-b78e5b320675}} be the collection of all possible events {{formula:c3c07e3d-2256-4257-935c-a30afd1a4689}} such that there exist {{formula:2036d005-3780-4ae7-b779-651526af2aa7}} , and {{formula:b75a5a3c-20ed-4f41-9d2f-a5c2a250afe3}} We may omit {{formula:a68adeb8-5a77-4942-b780-8145aaddfecb}} and use {{formula:11e75ae9-9236-4cb6-907c-0ea382894b0c}} , {{formula:a7d4f0c1-a33d-4eeb-bebd-edc726ac960c}} when there is no ambiguity. Intuitively, {{formula:5ea3838d-4ca3-4765-a4e1-b04130f3e579}} is the collection of events conditioned on which {{formula:e1309b45-904d-4a31-9ac0-2d806570f809}} retains the rectangle property. Similarly, {{formula:04a6d988-f865-4594-8ee3-ab4a25d171db}} is the collection of events conditioned on which {{formula:4ce813b5-9578-4203-ad00-5d8acc2943ec}} retains the partial rectangle property. 
Proposition 22 We have the following properties about {{formula:07ffe5ae-dfe7-4608-954f-673daa320204}} and {{formula:54f7aa6d-6018-4fc3-b44b-2cde926cfb77}} : if {{formula:1ed8b2b0-dec6-4535-a3af-ee125c46368d}} has the rectangle property, then for any {{formula:c7db71da-c6fd-4863-86be-6b8f8f223bf1}} , {{formula:6cb0ac1f-9199-400a-95aa-e41e3f8754b4}} has the rectangle property; if {{formula:4b1e1a56-763c-41ae-8a94-85faf9bb224f}} has the partial rectangle property, then for any {{formula:ae3e69d5-65d9-49f3-986e-8f05d0ae06e3}} , {{formula:2f4e6089-f82c-45ec-bcf5-f363c1ea0678}} has the partial rectangle property; {{formula:49cc5bca-b48c-4eec-89e5-9da5357a3768}} ; both {{formula:40590770-8999-4232-a3ce-334029d514e3}} and {{formula:71958a9d-777c-4aeb-bc22-1e5cc3b2c3ca}} are closed under intersection. For (i), suppose {{formula:f2204d98-c7ca-4bb7-990e-ead63a7513f0}} . Then {{formula:26c68c5f-b091-4cb4-93ba-abc49130c719}} Thus, if {{formula:c49b69ef-2336-42c6-bc24-a892dffa4871}} has the rectangle property, then {{formula:14d8091e-c71f-41bc-9691-c332ce8166f5}} has the rectangle property. Similarly for (ii), suppose {{formula:a894fca6-280e-4aa5-863b-0157ff7059ce}} Then {{formula:4d26f28e-0be0-486f-983a-b23abe44ea12}} Thus, if {{formula:26bcbb3f-399b-41a8-aea8-693255f6d0e7}} has the partial rectangle property, then {{formula:b075f577-23b5-4cc2-8536-d1d008ff35be}} has the partial rectangle property. (iii) and (iv) follow from the definitions. Main Setup In this section, we set up the main framework for proving our main theorems. (Theorem REF ) For any {{formula:ef3be481-dfe0-4dce-ae98-1bfebe4afc44}} -valued function {{formula:785bb4bc-d18d-4da7-bc81-91c5f9a493dc}} , we have {{formula:5abcc790-2a46-4886-a7e2-238b04196766}} (Theorem REF ) Let {{formula:a13f1680-5622-40c1-ac56-9bf66e25e331}} be a sufficiently large constant. Fix {{formula:23d8dfee-994a-49d1-bee1-d03fec20d882}} and {{formula:368adc61-b3c2-449a-959a-284ddcc330f4}} . 
Let {{formula:259e087a-3a86-4589-8ef6-3f49d9bb514f}} be a function, and {{formula:eb4a4883-4f4e-4e85-9696-4c5c9ed8eaf7}} be a distribution over {{formula:a227aa57-d52c-4f1b-8516-54dec20127e9}} . Suppose {{formula:eb7d09bb-283f-474a-ab2e-b8f050f1c896}} satisfies {{formula:69a4104c-2bdc-41ac-8b9f-97c935d625ac}} then for any integer {{formula:3ae8db49-4945-4c06-a2a8-d1ccdee7f11c}} , we have {{formula:b9d4dffd-1900-4fe3-aced-cc106d66b8a5}} We first show that Theorem REF implies Theorem REF . [Proof of Theorem REF ] Fix a function {{formula:012f4b6d-fe73-488e-a0dd-d7a21e8f500e}} , suppose there is an {{formula:64158070-9bc6-421e-9d2f-f0da744cc312}} -round protocol {{formula:80240375-cd99-4c94-8718-1eca67ce3a32}} for {{formula:fc261525-95a7-41f1-88c9-408d12c7794d}} with communication cost {{formula:effd681c-9b92-42f9-b337-3ac9ebe50392}} and success probability {{formula:3580ea03-bc60-42e1-ba6a-33b4396edf40}} . Let {{formula:07b2a798-6772-4ff5-b12c-496600b1fcdb}} for a sufficiently large {{formula:1e531d25-4533-4e52-8ab0-823109524988}} , then {{formula:2e8d40ba-1fe4-4dd4-8a91-3fdd5e3fb750}} has success probability more than {{formula:b87249e9-9624-4df9-9e20-eca88a518886}} . By setting {{formula:3afc667e-c1c7-4be1-bdc3-c1c5e9b3606f}} , Theorem REF implies {{formula:5040c3eb-cde5-4283-9409-ab06a058bc09}} , i.e., for distribution {{formula:9c26c41a-11df-455a-b267-001c51cbb862}} , there is an {{formula:8c242e0a-c5bc-48e0-8aa5-d9173a311e30}} -round protocol with communication cost at most {{formula:5ab181b2-7a6c-4d13-9ed3-463e7b0ed080}} in each round and success probability at least {{formula:b7da9064-717b-4e60-8997-cc12c0973417}} . 
Since this holds for any {{formula:cf1dc4b0-21d1-4308-97f8-3e9e8bbfa41f}} , by Yao's minimax lemma, there is an {{formula:fe7dcd0c-cdad-4cdf-a015-7805cea51e0f}} -round randomized protocol with {{formula:88d0fce4-3161-471f-975b-9102df0a8df2}} communication in each round and success probability at least {{formula:d1b35c18-e76e-4bfc-bb93-71b772d93ffc}} for all inputs. By simply running such a protocol {{formula:cc2807f2-7ec5-4e6e-8229-4ed452d6f74a}} times in parallel and outputting the majority, we obtain an {{formula:a000e5ef-5040-4167-b678-cf883d94a1b5}} -round protocol with {{formula:11c7e79a-ec2c-4875-8dfe-8e31134995ae}} total communication and success probability {{formula:7cffb18c-e67e-4274-b140-e7f02bb8fbea}} . Thus, we obtain {{formula:2bd20d44-b9e6-4f03-bc96-91612a506da1}} . Rearranging the terms gives Theorem REF . In the rest of the paper, we will focus on proving Theorem REF . Let us fix a sufficiently large constant {{formula:85f3cb1d-33bc-478a-9227-18564ebcd0ed}} , parameters {{formula:58ab3c17-fbe4-4139-99b3-d35a229d501d}} , function {{formula:4d1c0297-3c74-4577-bcd2-af349b03cbf5}} and input distribution {{formula:a52f1c58-fdc3-463c-8bd7-9a335c6481e1}} satisfying its premises. As mentioned in Section , we will first define a potential function based on the costs and advantage, and then show that the potential function value decreases as we decrement {{formula:3a220d7b-0dc1-4e4c-987a-cd2feab2e380}} . 
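The amplification step in the proof, running the protocol independently several times and outputting the majority, drives the error down exponentially via a Chernoff-type bound. A small sketch computing the exact failure probability of such a majority vote (the single-run success probability 0.6 is an arbitrary illustrative value, not a parameter of the theorem):

```python
import math

def amplified_error(p_success, repetitions):
    """Exact failure probability of a majority vote over independent runs,
    each succeeding with probability p_success (repetitions should be odd)."""
    majority = repetitions // 2 + 1
    # Failure means fewer than a majority of the runs succeed.
    return sum(
        math.comb(repetitions, k) * p_success**k * (1 - p_success) ** (repetitions - k)
        for k in range(majority)
    )

assert abs(amplified_error(0.6, 1) - 0.4) < 1e-12   # one run fails w.p. 0.4
# More repetitions drive the failure probability down exponentially.
assert amplified_error(0.6, 101) < amplified_error(0.6, 11) < amplified_error(0.6, 1)
```

Since every repetition runs in parallel within the same rounds, the round count is unchanged while the communication grows by the repetition factor, exactly as in the argument above.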
Definition 23 (Potential functions) For an {{formula:c26df8e8-4e07-406c-8ef6-1cdae8196461}} -round generalized protocol {{formula:08642284-cfec-4a95-9d30-9847e50a2fd0}} for {{formula:5e95d900-cc04-451c-a397-f480e41beaf1}} and an event {{formula:6c57e3d4-4a4a-41a2-8597-2ea0ee18e378}} , we define the potential function {{formula:b1a0548e-2740-49d4-95c5-39962012f8e4}} (and {{formula:deb44b1a-0a82-4c0a-ba60-5727f3841149}} ) as follows: {{formula:b5f5fd34-44b6-4b00-b701-a18d305d9244}} We also define {{formula:cecbb7c0-b9d5-4407-9a20-dcadd4335da1}} (and {{formula:0f61afa1-7c18-4834-becc-e06de85a795f}} ) as follows: {{formula:5bd88001-6e44-4c7c-ac14-119d1388b2fd}} When {{formula:ded09a20-5b3d-426c-b7e4-1b4b1731c8a5}} is the whole sample space, we may simply write {{formula:24537b33-ccc4-42c7-83e3-610316a936ed}} or {{formula:de345b88-3c1c-4a6f-9270-893d6c18cd90}} . The first three terms in both potential functions {{formula:66ca780c-40d5-43c2-8e73-26af1caefe85}} are the (normalized) costs of {{formula:be6f7691-064a-4dac-8dc4-bfa5ff5c07fc}} . They are small if {{formula:44b4f32e-0923-4da7-b2ca-828cfe02cd8f}} has low {{formula:347e7876-4474-43c8-9358-e04493e921eb}} -cost and low {{formula:3c8884fa-9baa-44da-a653-a6b331cbc812}} -costs. The last term in both potential functions depends on the expected advantage. {{formula:280ea30b-df67-43e0-8797-4298818aeabe}} uses the standard advantage, while {{formula:c93f0d72-041c-42e4-8ad5-afb839aee8d2}} uses the advantage conditioned not only on the transcript, but also on {{formula:21afddd8-18b5-4488-b12d-c14cc168502e}} and {{formula:7716ed65-a6ab-4e05-89b3-d37caa1764ac}} . As we will see later, it is used when decomposing {{formula:65fb99d2-8826-4864-9639-db649f5b88b9}} . The last term is small if the protocol has high advantage. By Proposition REF , knowing more could only increase the expected advantage. Hence, {{formula:ba700e60-f624-46fa-ab5f-d2a31e02701b}} is always at most {{formula:9715f525-f094-4fa3-9f8d-ba869433a9bb}} . 
We have the following lower bound on the potential of {{formula:9621e06e-8a32-49ee-95d8-fa9a2e7abfa1}} conditioned on {{formula:ec2a9866-dcac-4314-9b53-26a20e4a16e0}} . In particular, when {{formula:c47b53ef-1c3f-4fef-84a9-1a7228c25356}} is the whole sample space, the potential function is nonnegative. Lemma 24 For any {{formula:2baf8766-ba90-4c5c-86bd-2488862a7191}} , event {{formula:6de0c10c-f2f9-4b32-8fae-c01662e0c2fb}} and any {{formula:cca8c315-27dc-479f-9ed4-662bfbe8ce40}} , we must have {{formula:41e96a69-5ffd-4d6e-950e-f812e07485a9}} For the {{formula:8a45e805-127a-4cc0-9d3b-253b52fe33c1}} -cost, by the convexity of {{formula:dbd0bce1-59ee-4b91-b8e2-c3d06767cbbf}} , we have {{formula:0eb78db3-0969-4b59-9d9c-7a001ae41689}} Hence, {{formula:64299507-b444-4d7d-a875-58578a30db8d}} . Similarly, we also have {{formula:604bb4af-3cc5-4040-be1d-c9f5e3e81fee}} , and {{formula:c0972f39-e74e-430d-87d1-61b160e6f26a}} . By the fact that {{formula:6f59a959-cd94-4f1d-a0e8-94582608d2b4}} and {{formula:903c5a27-bb89-4bc1-b34b-5239c797927f}} , we have {{formula:f0be0a84-4195-4ce6-84d1-71feb72ee31c}} The advantage is always at most 1. Therefore, the last term is nonnegative. Hence, {{formula:84749d9d-30ba-4302-b662-9370a787ca53}} . The following lemma shows an upper bound on the potential of a deterministic standard protocol {{formula:e0ff4e63-6368-4035-bef0-c34a483ef0dd}} computing {{formula:6d53cb1e-60ba-417d-b07c-22435fc36407}} . Lemma 25 Let {{formula:45302874-cf12-4a42-a810-70c1c32e83a1}} be a deterministic standard protocol where Alice sends at most {{formula:f947e878-f34f-4f1b-a857-2f903c5c2d03}} bits in each (odd) round and Bob sends at most {{formula:54ececc0-6223-48c0-8176-fb284dfd16a2}} bits in each (even) round. 
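The lower bound in Lemma 24 rests on convexity of the functions defining the costs; the underlying step is Jensen's inequality, E[f(X)] ≥ f(E[X]) for convex f. A quick numerical illustration with f = exp (chosen only as a convenient convex function, not the paper's actual cost function):

```python
import math
import random

random.seed(0)
samples = [random.uniform(0.0, 5.0) for _ in range(100_000)]

def expect(f, xs):
    """Empirical expectation of f over the sample list xs."""
    return sum(map(f, xs)) / len(xs)

mean = expect(lambda x: x, samples)
# Jensen's inequality for the convex function exp:
# E[exp(X)] >= exp(E[X]), with equality only when X is constant.
assert expect(math.exp, samples) >= math.exp(mean)
```

In the lemma, the same inequality pushes the expectation inside each convex cost term, which is why each cost contributes a nonnegative amount to the potential.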
If it computes {{formula:6e81d8ad-48d8-4ac3-a58f-f9fee352a2de}} with probability {{formula:c00c0c40-a7b1-472f-8246-57422900074e}} under input distribution {{formula:e405f265-7352-41a2-b902-81b12d2da0b8}} , then {{formula:c9311bf2-541d-4426-8cb3-07b7fcd1f892}} By the property of a standard protocol, {{formula:a8ef95f4-8805-4e5a-b25c-12fabdc74293}} for any {{formula:76271585-6d6b-43cf-97ac-ee70b156a7b0}} in the support of {{formula:364213f6-889b-455c-9914-dad4707865fa}} . Hence, {{formula:dff4411c-7ed7-4168-ba0a-855631e4e8e3}} . For the {{formula:7ebf447e-9b08-429c-bc1a-637f1ed44efc}} -cost by Alice, we have {{formula:d41d4064-2aeb-482e-a011-d16aaea23d29}} Since {{formula:c3ce7af2-bbf9-4975-9ea0-3ba3a4c90fe1}} is a deterministic standard protocol, {{formula:6fbc5dce-de66-4891-9155-81e61569c829}} is fixed. All even messages {{formula:d0da6bcc-566b-4d9b-9a70-1f12c5e32d5e}} are sent by Bob such that each {{formula:c8b222db-1471-46b7-83b6-f0a75297174a}} is determined by {{formula:4984b720-2cfe-494b-805e-178785181e6b}} and {{formula:74fba8a3-ed09-4100-b33d-ee2a8eecd295}} . Therefore, {{formula:790fc59e-23d8-41dd-9bba-9c72668e0f94}} are determined by all odd messages {{formula:7ce3fef7-18d6-4ad5-9b19-5a6836a5af4f}} and {{formula:beecd674-361d-42cf-be42-a5ca62cebd6e}} . Denote {{formula:66bf0123-1ea2-46d2-8cf6-3c39976324a7}} by {{formula:44d5e0b9-06dc-470a-a7b7-5685923f4878}} and {{formula:d88d2189-12f4-439b-b2fe-6057e7295b22}} by {{formula:60a45233-d1bb-4c12-befc-4a41df05d62e}} , we have {{formula:30a740d2-aba9-4563-90b2-4665027481c8}} where the last inequality uses the fact that Alice's messages have at most {{formula:a5445e9a-4616-4f58-8ed1-e6559183e86a}} bits in each (odd) round. Similarly, we have {{formula:8533e153-c1ac-43dc-bd16-50df10e18167}} . Finally, by the connection between advantage and success probability, {{formula:65e8ab8e-756e-43d5-b65c-375490ee5afe}} . 
Hence, {{formula:7ac14d56-e36b-4013-9ccd-b7d270899ef3}} In the rest of the paper, we will prove the following lemma, which shows that given a protocol for {{formula:574aaa58-d91c-4501-ad7a-771fa9253ceb}} , we can construct a protocol for {{formula:a6dc612e-561b-4ff2-adb0-14dd3ba6c515}} with a lower potential. Lemma 26 For {{formula:6a516570-d589-404e-ace8-2b6b03fdeb9a}} ,       if there is a generalized protocol {{formula:13182607-a1fe-4322-81f0-6806b8a6d8ea}} for {{formula:acc252b1-8e9a-40b5-98ba-1d4e2dc48366}} with the rectangle property with respect to {{formula:31089fa4-dff4-43d0-b953-414e26ddc6cc}} and an event {{formula:4f5fb1c0-45bc-44e5-8514-673481594def}} such that {{formula:a0f49035-1357-4012-8e82-4714287153e1}} ,       then there is a generalized protocol {{formula:f0e5ef44-43bb-4906-8781-b14c292a7a0b}} for {{formula:a35551bb-0ac9-4b3e-86bd-e629b0e6022d}} with the rectangle property with respect to {{formula:66fea719-135e-45a8-baa3-5fb319b11493}} and an event {{formula:7235938d-2078-4bb0-bbd0-9fb54057bac3}} such that {{formula:01dee1ff-d15c-4059-8dc9-4284405ca5bd}} , and {{formula:7f7de7cc-ebd1-4e24-8614-6cbf08e920a6}} Our main theorem is a direct corollary of Lemmas REF , REF and REF . [Proof of Theorem REF ] Since {{formula:34281f4c-cdce-4e1d-8f0a-144212b55991}} , {{formula:6b2fcbcb-76df-49fc-bb80-3c09136c4f2b}} and {{formula:ecce141e-5c49-4e03-badd-d24dbfdcb5bf}} . Suppose there exists an {{formula:dc6ef34e-0ec9-4017-b247-382c0e1f9cca}} -round protocol {{formula:f1f0d202-41e3-4594-9569-70d00bbd4760}} where Alice sends at most {{formula:20aaf83a-9768-432d-8d76-ff535a844e15}} bits in each round and Bob sends at most {{formula:9f44f561-2814-436e-b5b5-cb8d01e54312}} bits in each round, which computes {{formula:aee67b19-79aa-451a-94c4-f71af502dce7}} correctly with probability {{formula:d0979d86-29c8-41e4-ae1d-2b42a5f8680f}} when the input is sampled from {{formula:4b6c525b-4792-4dad-a54b-46f5ad90258c}} . 
By fixing the randomness, we may assume that {{formula:1acbfc62-4193-465a-b40f-665a89ecff31}} is deterministic. Then by Lemma REF , we have {{formula:60c236f5-890d-458b-8aed-91373316b060}} Now we set {{formula:dcbd9f81-1b2a-4d5d-b071-874b6b63c3cc}} to be the whole sample space of {{formula:751e26b0-0668-462b-8c80-4e32ec795b5f}} . Clearly, {{formula:e755f71e-678d-45d6-8136-4938d811de6a}} and {{formula:61a4c2bb-5e7f-42b0-9445-3f28c9f9cdd9}} satisfy the premise of Lemma REF . By inductively applying Lemma REF a total of {{formula:e1186fd1-14e3-499b-8d70-c727d0b7cd79}} times, we obtain a protocol {{formula:d52bf4fd-f8ba-4d48-a9a6-7be04ef830f5}} for {{formula:01c3dd5f-0bbb-46f3-9910-88a48e2bacf3}} and event {{formula:9bbd3852-66d1-4298-930b-644f0b09ff24}} such that {{formula:66c2a851-f315-40c2-9db1-0ab4d19013b5}} and {{formula:059fdd95-8acb-456d-a82c-3e0cdd53d253}} On the other hand, Lemma REF implies that the LHS is at least {{formula:6136c82f-ce08-4fb4-98ce-30e20a038cdc}} , implying that {{formula:dbe7080a-4aed-4a97-b42a-923960274ca4}} since {{formula:c8681bca-26c7-41dc-9faa-744fcf80b446}} and {{formula:d5f3f579-4eba-40fd-8511-f6e84cd9186f}} for a sufficiently large {{formula:8cbe0da1-56d3-4de1-b3d8-b53759dede03}} . Combining the above upper and lower bounds on {{formula:35eb268a-c975-486b-8d5d-50e7151ba08b}} , we obtain {{formula:2c579d97-24cd-4fbd-9896-3e44f780e581}} implying that {{formula:cb96e34a-b8cc-409d-8c52-683e0922f0a3}} , i.e., {{formula:2d4cb45f-b418-48f2-b1d1-21cd44a54a54}} This proves the theorem. 
Decomposition of Generalized Protocols To prove Lemma REF , we will decompose a generalized protocol {{formula:4edee1b5-f687-470b-b267-e2c15a9795fe}} for {{formula:6b4d77cd-9f3e-4bcc-bfe3-e8416c4cd15f}} into a protocol {{formula:937eca0e-ec1e-4401-a60c-a670f187ec90}} for {{formula:cd4f7b9c-ddf7-4e6b-9b2e-009a3dd79ccc}} and a protocol {{formula:918e659f-e279-4530-85eb-4283c63cfd18}} for {{formula:baeb8d5a-d95f-4597-9fd1-fc740d1364f0}} such that the costs of {{formula:4bec424c-f2a7-4b20-a393-acaa1df4b7fe}} and {{formula:61c80390-a230-47cd-9275-5378e460646f}} “add up” to the costs of {{formula:54bbb898-a6fb-469f-9561-5182035e6670}} pointwise. For simplicity of notation, we will assume from now on that {{formula:4da1bef2-35f2-4948-8d77-f27861e08071}} is even; the case of odd {{formula:979d9d1f-15d8-4324-a96a-3d5c98cda84a}} is similar. Definition of {{formula:6535afcb-4610-4881-9406-34d3f7357afb}} and {{formula:77f0c3b5-54ce-4dba-a708-01f16b4a4c44}} Fix a generalized protocol {{formula:a51e28e6-5986-49cf-ac21-1237a462f295}} with the rectangle property with respect to {{formula:5e9d5c40-3a6c-41d2-a196-554823037aca}} . Let {{formula:b88f908e-532a-4e8a-bbd5-075dcab09d4d}} . We view the following tuple as the {{formula:0165a62d-f68c-41b8-b3cd-ebc0777b5835}} -round generalized protocol {{formula:e86ae737-5992-406f-96e9-36a30ade585c}} on inputs {{formula:2b57921d-12c2-49c3-9624-35cea0d334f4}} {{formula:baea9389-f978-4fc9-acc2-5358c7f91b88}} where we append {{formula:7cbbd4c9-e693-4244-8053-d00fc4cda640}} to {{formula:9c0eac04-813d-43b5-88fe-0c232dcba977}} . We view the following tuple as the {{formula:a100a7f7-6c4d-4ae4-a737-a82cd6c82e10}} -round generalized protocol {{formula:b3dc23f6-db6f-4898-8784-db7315f3e0ff}} on inputs {{formula:0fe43d30-9f25-426c-b6e7-fad782c3f820}} {{formula:fe9b1e5d-7cad-4542-83e6-a15e1f43ce97}} where we prepend {{formula:d818cfaa-d4a9-46fe-8be5-88ec09b99031}} to {{formula:ab9ea3f7-b16a-49f2-8ae3-b0a3b9e6ab74}} . 
It is useful to think of {{formula:94a47e8e-1c84-4937-b917-4124ad790bc0}} , {{formula:122e9356-ca3b-4044-b233-e6a7c77305c1}} and {{formula:fc94bdde-272d-4fa7-8d8e-962865d82992}} as the same distribution over the same sample space; only their inputs and transcripts are defined in different ways. Therefore, we may use {{formula:1ca25bb9-a630-40a3-823c-45e49b93d4a2}} interchangeably when measuring the probability of an event {{formula:f08ae321-4326-420c-8721-dd75b3f598fa}} . For simplicity of notation, we use {{formula:93db665d-32d7-4231-b4eb-cbe1294fa044}} to denote {{formula:6609ae77-6cc4-465c-b0ce-c818796c5f02}} , the transcript of {{formula:6fdec6a3-add1-4331-b525-7618b0dba6b8}} , and use {{formula:8da0d9d3-5ee5-45c6-9598-3a9ac2c42e07}} to denote {{formula:12ee3d5f-99e3-4d8c-a6d4-6d5f037d2147}} , the transcript of {{formula:0259b9a3-1fe9-41d0-8225-d3bf4c20bc98}} . {{formula:05441173-9224-4c0a-b53f-9838f953aa98}} and {{formula:fd2624d6-b4e9-4e78-8947-d2cb07418b58}} are defined similarly. Since {{formula:3e90f29f-d356-40e2-b06c-fe109b32844c}} determines {{formula:7866686c-4ade-4162-85f6-6639b67f52d8}} , we define the {{formula:c664d48c-98f4-4e16-87a8-ac9b168a2716}} -cost of {{formula:ba0d3283-3fa7-4dc6-a81e-ca41e98acc3f}} at {{formula:429684fe-ffdb-4d8e-997c-2dbcb89d7488}} as {{formula:05825be8-1ffd-4b1f-824a-cb053fe6d261}} where {{formula:a0e7d441-1bf8-4031-a32f-5aac197af060}} is the triple determined by {{formula:1e9b32e7-f10f-4f29-97dd-c8cc3155594b}} . Note that this cost does not depend on {{formula:63c46113-42a8-4bb6-8a73-22c68de5dc2e}} given the other parts of {{formula:acf1107b-2cbf-4828-ad4c-c7ebab468d97}} . The {{formula:eff9846f-70b4-4986-9edb-31ccbfb5d29f}} -costs of {{formula:786713e8-0bd3-44e6-a91c-37891cf481c9}} and the costs of {{formula:5601d5c5-94a0-4ce6-9153-097876b1c277}} at {{formula:1e41bb5c-a160-4e6a-9912-8de1ffd6fafa}} are defined similarly. 
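The two sub-protocols just defined split each transcript of the original protocol into a prefix and a suffix over one shared sample space; the chain rule guarantees that the two views assign every transcript the same probability. A toy check with an arbitrary made-up distribution over four two-part transcripts:

```python
import random

random.seed(1)

# An arbitrary joint distribution over transcripts t = (prefix, suffix).
transcripts = [(p, s) for p in "ab" for s in "xy"]
weights = [random.random() for _ in transcripts]
total = sum(weights)
P = {t: w / total for t, w in zip(transcripts, weights)}

def prefix_marginal(prefix):
    """P(prefix) = sum over suffixes of P(prefix, suffix)."""
    return sum(v for (pp, _), v in P.items() if pp == prefix)

def suffix_given_prefix(suffix, prefix):
    """P(suffix | prefix) by the definition of conditional probability."""
    return P[(prefix, suffix)] / prefix_marginal(prefix)

# Chain rule: P(t) = P(prefix) * P(suffix | prefix) for every transcript,
# so the prefix view and the suffix view describe the same distribution.
for (pre, suf), v in P.items():
    assert abs(prefix_marginal(pre) * suffix_given_prefix(suf, pre) - v) < 1e-12
```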
In the remainder of this section, we will analyze {{formula:43218357-2509-41ea-b218-868103d39411}} and {{formula:a94c5fa7-79f9-43f2-933e-aed729f541e6}} . First, we observe that the partial rectangle property of {{formula:44a6a327-3b53-434e-b053-097c55cf3228}} implies the rectangle properties of {{formula:97975b88-9298-46d3-82f2-1b94f19053f7}} and {{formula:077df32b-117f-4ba7-86ac-f943f19b5628}} . Proposition 27 Let {{formula:ef50a02e-ee45-476f-a965-2f90cacca843}} be an event such that {{formula:a37ca13c-0b95-4a7a-9f0c-b91df6104b05}} has the partial rectangle property with respect to {{formula:08c08e2c-5088-41d2-99d6-7928a9268639}} . Then {{formula:c78f3b42-7f9c-4d74-9535-12cbf64fa131}} has the rectangle property with respect to {{formula:fded9c2a-8808-443a-bbaa-3e9ab1c89efb}} , and {{formula:cd521592-f94e-4a03-858c-d77506aef182}} has the rectangle property with respect to {{formula:ba6dde09-5e84-4a41-beb0-24336cbfbbf0}} . Since {{formula:be214bdd-de14-4525-b031-ae06b1bffbdb}} has the partial rectangle property, there exists {{formula:030d26dd-836b-46c8-9601-676454ae195d}} such that {{formula:a43f368a-7b41-4a83-b2d7-93889517cab4}} Thus, {{formula:4fdc79ef-03e2-429c-85fd-480e32b674b6}} Note that the second factor is a function of only {{formula:30b00290-fd1c-4412-9162-5179c1c7309e}} and {{formula:798aec89-e067-4d3e-9c4f-50229427b79d}} , the third factor is a function of only {{formula:e0153bb7-876a-4ec1-a422-cefc50565c0d}} and {{formula:9b2e6901-b861-40bd-a0e7-1ec38bdb9a3a}} . For {{formula:c76cf827-2595-4f50-a877-e57e7972bcf2}} , we have {{formula:b253ec8f-f24c-4814-848b-6c60a8482dcc}} The second factor depends only on {{formula:543cf419-6533-4463-98b2-168a2156b34d}} and {{formula:ca4a2c30-32af-4023-9f68-df1200f9ce25}} , and the third factor depends only on {{formula:36e0b3e6-97f0-408f-a3b9-643ebea3e654}} and {{formula:151ec9db-a8a4-4a8b-b064-984e1ad92997}} . This proves the lemma. 
Similarly, we have the following relation between {{formula:1e489be5-310d-43c9-af25-2f30a5333223}} and {{formula:96eccf32-214e-4f5d-8860-efe98af4c6a3}} . Proposition 28 We have {{formula:6807807b-4fc3-4fd6-ae59-4e00da82580e}} and {{formula:a99d540e-916a-4b97-981b-28112330f43c}} . Let {{formula:7247d17e-0f82-4b76-94b1-540b96188091}} be an event such that {{formula:8d8522b5-d0a9-4182-b827-8b117a4d7109}} for sets {{formula:df53aba7-711a-4646-9bde-4429268a4190}} and {{formula:45dc9931-cdeb-4c6a-826c-7cabb942bad5}} . Hence, {{formula:c39cea8e-2acd-4ecc-a1d4-f159093c9abc}} is a set in {{formula:cb62ec00-27a0-4504-93bf-c01b24b4867f}} . The proof of {{formula:c6c11e70-c1e0-44e3-8ac9-38117824b5c8}} is similar, and we omit the details. Decomposition of the costs Below is the first main lemma of the decomposition, stating that the product of the {{formula:80ad5cbd-6673-4765-8364-2ed398ab3915}} -costs of {{formula:ea53d62a-b3bd-4cbe-8841-c6771859536d}} and {{formula:aeb54faf-869d-4595-9d0b-72cabb347567}} is equal to that of {{formula:0e2e206b-a35a-4c7b-b0f1-fc0f34270395}} pointwise. 
Lemma 29 The product of the {{formula:3acf5c9b-42d3-4540-a497-3d3c76ce25eb}} -costs of {{formula:5b929abe-c70d-4deb-b69f-78d549b65e83}} and {{formula:5a9447af-0fd9-4ce5-922a-bcba2a11c90c}} at {{formula:c32aec1e-7571-47d9-a77a-ffd3a07d899c}} is the {{formula:1e3f1059-f564-4bbb-820f-ce00374dd8f3}} -cost of {{formula:3f9cf7ac-a737-4767-9062-8ae735ab060e}} at {{formula:9946b78f-59e2-4019-820c-5cf0e1e7a06f}} , {{formula:5f46a879-6118-42dc-bc51-b9051bc2ef66}} By definition, we have {{formula:81002478-85de-4139-bfb7-8a214d963a87}} Similarly, {{formula:b6f75c6b-e15b-4ea2-b085-eef64d5a47cd}} Combining Equations (REF ) and (REF ), we have {{formula:849ca690-064c-46e0-9236-fe2566655990}} Then by the rectangle property of {{formula:708b0986-677e-4768-af66-2d516c7324b4}} and Proposition REF , {{formula:21c6213a-0941-4ba0-923e-5af71e55e2df}} is independent of {{formula:79c97cec-3866-4bb7-82a6-64b7e1060173}} conditioned on {{formula:6ec00da0-368b-4733-9c10-fe553a7f6fd5}} . It is equal to {{formula:35fa55f1-a02a-4960-aa4c-41b9be54ef3c}} This proves the lemma. The second main lemma of the decomposition states that the product of the {{formula:59b39269-9d38-410e-b01a-6f2cebc783d6}} -costs of {{formula:6f5f0138-abce-4412-99c4-3b772f92ce87}} and {{formula:69b6cd90-cf95-4a60-b5fd-801a7f6e56d3}} is also equal to that of {{formula:8c0753a8-6c2c-4512-bdad-71fb84770155}} pointwise. 
Lemma 30 The product of the {{formula:8b4d7d3b-a600-4af7-bf47-b6ab671edb69}} -costs of {{formula:09696885-b038-474c-9597-b49f38876014}} and {{formula:c9dcfbf0-474d-4eb9-895a-88b70ff2a438}} at {{formula:0d05b531-8f6f-4b32-bcb6-334a668c0416}} is the {{formula:c309e293-0366-4b53-b507-c2789965c288}} -cost of {{formula:fa996257-8e92-4c6c-a50d-203a1e670ffa}} at {{formula:ee6f2a18-ba8e-430d-9245-565657624d70}} by Alice and Bob respectively, {{formula:1f478b03-eff6-4e9e-9500-0d5902b2d448}} For the {{formula:32a074c7-d027-4393-accd-534e03f90a08}} -cost by Alice, by definition, we have {{formula:9262baf7-4e5d-4ee8-96c2-4005e536e415}} and {{formula:58b8c99f-050e-4495-805b-3c0a4d72dbe4}} Hence, by partial rectangle property of {{formula:25838154-4489-4df6-a724-8cc2c7d9731c}} and Proposition REF , their product is equal to {{formula:230e05f9-f132-4ec9-bd3e-0e1316cd0868}} The {{formula:dc9f63ac-2d31-4110-9198-33a816f68dba}} -cost for Bob is similar, {{formula:ada48a31-26c4-4252-994b-9a20021cd26f}} This proves the lemma. Induction: Proof of Lemma  REF In this section, we will use the decomposition of {{formula:9440c532-55a4-434f-a268-8358b0c6a0d3}} to prove Lemma REF . Identify event {{formula:5f2f8de1-59e8-4ef7-8e8d-02303f2c06e7}} As we mentioned in Section , to obtain a new protocol for {{formula:f3ff5cd6-787f-4f29-be56-2dbe52a85c5c}} from {{formula:1238d60a-69fa-42c6-b10f-fbc67f0b2edf}} , we first identify an event {{formula:043cccfd-2c89-45cc-abcb-d82215aa25d0}} such that the advantage of {{formula:160d8701-2cbc-478b-b226-ef40e79fd95c}} is not concentrated on any {{formula:0894f6eb-8d35-41aa-95b6-293d23717106}} for {{formula:41f71fb8-35e1-45b2-a63b-a546c5f00324}} and {{formula:c1b3026b-86d2-4ddf-a43f-7f9f20f78710}} . 
Let {{formula:c09d38ac-42d0-493a-ac07-b3ff81984199}} and {{formula:274e13df-97df-4b77-a751-a79179b08f79}} be an event that maximizes {{formula:eacbc72d-86bb-416f-9aef-fc4cf895ab13}} Since {{formula:e89541d0-5c18-4500-a1c1-b1f826ee05d3}} is a discrete set, such {{formula:8376d459-ce64-41c7-bd43-4efca2d6ebce}} exists. If there is a tie, we fix {{formula:17f09019-e57e-43d4-bdea-1ad390b886fe}} to be any maximizer. We first show that conditioning on {{formula:a4e1854b-2169-4ad6-8b23-684c6c1322e2}} reduces the potential function value, and the reduction is large when the probability {{formula:8154c93d-4e54-4c56-98a7-567483a3b068}} is small. Lemma 31 {{formula:9320ab20-7414-4bb4-a222-57c2b409069c}} has the partial rectangle property with respect to {{formula:43ddf555-989d-43a6-bff1-72f5b549d5cd}} , and {{formula:d82b2a50-717b-4779-90f8-8888434612e9}} By definition, we have {{formula:8be90694-401a-467c-bba3-5871dcf09a49}} By Proposition REF , Proposition REF and the fact that {{formula:82d04a89-c6dd-49e7-9c2e-f4ff9d5b0b7c}} , we have {{formula:9bc02194-3d0f-43b6-bf7f-f532d5770b16}} Then since {{formula:48a838f0-c42b-4ed4-9370-9aa21fae0ab4}} is the maximizer of Equation (REF ) and {{formula:d1714846-a618-4484-becd-80bba1c2d557}} , {{formula:44a96fac-c776-47e5-8e89-728241650b38}} Since knowing less could only decrease the advantage (Proposition REF ), {{formula:54d3f083-22b5-40a5-9020-554fc029f76b}} Combining the inequalities and using the fact that {{formula:04fc2e14-7ada-4e6a-8634-520c709c28fb}} and {{formula:b09606e6-2749-4194-bd07-dca01a697ffb}} , we have {{formula:0293f8b2-c65a-496a-927d-14fdb8f6c14a}} This proves the lemma. We need the following proposition in the later proof. 
Proposition 32 We have the following: for any {{formula:a071d58b-6507-40ab-bd83-6d23a7854c91}} and {{formula:5cb03786-316b-4e32-afcd-4a9410644b55}} , we have {{formula:67dd7ee5-6738-4471-abad-622aafd7ef84}} for any {{formula:5d91e5a7-8790-4348-8c7c-e419c00579e2}} and {{formula:7ee83655-20e3-488b-b33a-e20e409595c6}} , if {{formula:d4933e94-2712-489a-981a-238c336631e1}} for some {{formula:ae96720e-3e91-44ef-b9bd-f31280a68495}} , then for any {{formula:5546863b-50d8-4e7f-a500-f77cba0ea685}} , we have {{formula:3f837187-5ff0-48d6-bb46-511ab47151b2}} Since {{formula:ef215a72-94cb-4193-acac-21ea189067bd}} is the maximizer of Equation (REF ), {{formula:e225867e-aba2-49f2-a650-5bcbe5277de1}} and {{formula:6c018ce1-ca24-4cee-b3ae-751af8204809}} , we have {{formula:e67f2b88-0b08-4786-8f1b-abbfd2e7b06a}} By taking the logarithm on both sides of the premise, we have {{formula:66fe2b44-9043-41aa-8e5d-bdbe7910ff31}} i.e., (recall Definition REF ) {{formula:0dfc71ae-7b3c-4868-b384-ca7b2e9b1dca}} Since {{formula:1baca4e4-88f5-4056-ae02-6f1a45137edf}} , for any {{formula:f523396a-81e8-43a6-b944-9b3912c43012}} , {{formula:c6a800b3-e9d5-4f0d-b6df-cf84e9b6a631}} Now we will divide the set of all {{formula:a9bf15d0-eb47-40e7-b8c0-f3ac4118a7aa}} with nonzero probability under {{formula:7c28b821-76bd-4c5d-aa2e-91501885f32e}} into subsets based on the costs and the advantages of {{formula:4785b100-a921-42f0-a3d2-cc09abd1199c}} and {{formula:ab99d817-a615-4e4b-bb5b-58cb60491dd5}} . Then we show that for each subset, there is a way to construct a generalized protocol for {{formula:5ad9d141-50b8-43ea-ac2f-eace6689a9ee}} such that at least one of the protocols satisfies the requirements of Lemma REF . To analyze the costs of these protocols, which we will construct later in this section, we need the following two lemmas. 
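The partition of transcript triples "based on the costs and the advantages" is carried out in the remainder of the proof; a standard device for such partitions (shown here purely as an assumed illustration, not the paper's exact construction) is dyadic bucketing, which groups items whose cost lies in [2^j, 2^(j+1)) into bucket j, so that only logarithmically many buckets are needed:

```python
import math
from collections import defaultdict

def dyadic_buckets(costs):
    """Group item indices by the dyadic scale of their (positive) cost:
    bucket j holds the items whose cost lies in [2^j, 2^(j+1))."""
    buckets = defaultdict(list)
    for i, c in enumerate(costs):
        buckets[math.floor(math.log2(c))].append(i)
    return dict(buckets)

costs = [1.5, 2.0, 3.9, 4.0, 0.7, 16.0]
scales = dyadic_buckets(costs)
assert scales[1] == [1, 2]    # costs in [2, 4): 2.0 and 3.9
assert scales[2] == [3]       # cost 4.0 lies in [4, 8)
assert scales[-1] == [4]      # cost 0.7 lies in [0.5, 1)
```

Within each bucket all costs agree up to a factor of 2, which is the kind of uniformity the per-subset protocol constructions below can exploit.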
Lemma 33 Fix a set {{formula:6ffdd167-44a6-46f4-b036-5044e3b39b79}} of triples {{formula:6011b8f6-8e7a-4039-b9f9-ec9e15543aad}} and a parameter {{formula:a344a0ef-6951-42b5-94bb-f53b35a728d0}} . If for all {{formula:041351be-d728-4a98-88cc-b624be2a832a}} , {{formula:254a3ca9-2f9c-41e1-ad27-87a5181fe3e1}} then we have {{formula:98deddf8-9ba4-4528-adfa-39aadfe11a18}} where we abuse notation by letting {{formula:2a2ee41a-78f0-4619-ac1e-dc74b0a6b274}} also denote the set {{formula:4bb920fd-e8a7-42c1-8f0e-a1268ae4fee1}} . By Lemma REF , we have {{formula:990ed9d1-d7ba-4f23-8468-5bc734130d47}} By the construction of {{formula:785b4acd-ea81-4ffd-8649-4102d2598c07}} and {{formula:c8378ce6-8dd2-46ff-acd8-a8f5bd3c7cba}} , {{formula:f4d9f608-ed5f-4f21-8113-86c9da620fd7}} is a function of {{formula:964df1cc-4525-4a62-b317-9bbbd5626ecf}} and does not depend on {{formula:5aa97627-146f-4db1-972c-f742934d6979}} , and {{formula:5457c933-483f-43fc-9a7f-24f517286df5}} is a function of {{formula:7f190b84-9342-4f0f-be08-a15c24990065}} and does not depend on {{formula:859d0bcc-71a3-48d7-946b-01deb584e775}} . Thus, it is equal to {{formula:3a062c9f-c405-4b0f-8e37-0b41c81589bf}} Since {{formula:c6194615-5a41-4e4a-930d-da9628b44a4b}} is closed under intersection and {{formula:3bba72fe-1f66-400a-b7ab-b8b7f91a742d}} by definition, we have that {{formula:b521e340-c20f-476a-a23b-00c4ce9a1843}} . Hence, {{formula:74cc6118-6824-4c70-98e0-f28cdd2b5630}} has the partial rectangle property by Proposition REF (ii). Then {{formula:031dff83-2486-4e41-a122-f96ba5b2ee7d}} and {{formula:2660ed16-a613-4a41-be1d-e8b9feee5c19}} are independent conditioned on {{formula:6f09dbe5-13d9-4095-b26d-614d6544a7d0}} by Proposition REF . 
Hence, it is equal to {{formula:dfe086c1-fdfc-43f9-b0c3-f738e64a7121}} Since {{formula:053606e3-6e99-4af3-8a10-66da6cbddbc5}} is a set of triples {{formula:429e873b-4c13-4f03-a6c5-3277c8a6bb27}} , {{formula:4bcae2e8-e507-4d46-8301-22df619e667a}} is the same as {{formula:629e5f25-87dc-44e6-8f28-26b333237a2b}} (for {{formula:46bf7102-58cd-4a07-90a5-e9870ce1952d}} ). It is equal to {{formula:475f61a0-ab8f-4dec-81e9-9a2d6c11b8a9}} Finally, by Proposition REF , {{formula:36fd9252-5fda-45cb-8072-b3977f32b3db}} Hence, we have {{formula:71c05322-79a7-4def-9bc4-2421de108301}} This proves the lemma. Next, by applying Lemma REF and Proposition REF instead of Lemma REF and Proposition REF , the same proof gives the following lemma for the {{formula:e8c0fbf6-e7ac-4bd9-bde9-4e4279295f5e}} -costs. Lemma 34 Fix a set {{formula:8dce4d20-e98a-45f4-a069-377cb556f310}} of triples {{formula:b258b9a9-37de-4082-8a82-6f28dcca612c}} and a parameter {{formula:7bc70ef8-9674-482c-9ea6-20e74fd5f248}} . If for all {{formula:856076d6-57bc-4a31-9523-3c3cfb0e983e}} , {{formula:94ff53d3-47f2-4602-8ab7-5567b135ae7d}} then we have {{formula:0eeeb8de-85a0-4daa-bded-50757b1a9f94}} similarly, if for all {{formula:6457f610-79d4-40d9-ab22-4fc8d07643c9}} , {{formula:56aff7d9-12a3-4cd5-b162-a9eaed0b968e}} then we have {{formula:1396b074-763b-4f70-a002-5aa4312c5d94}} We will also need the following lemma to relate the advantage for {{formula:f98673e3-fc5c-4ad0-a02d-ad41b7ece4a4}} to the advantage for {{formula:90b9b2dd-3da5-4694-bf8c-b6e887fee42c}} . Lemma 35 Fix a set {{formula:9e824003-472b-4613-a9ef-714d703afe95}} such that {{formula:2dcbda0e-730e-472e-9385-478a0a64b9aa}} . 
Suppose there exists {{formula:1d00eb19-0071-4651-b2c0-d53a2269a12b}} such that for any {{formula:408139de-caae-4781-9e19-3d300f199831}} with {{formula:b04476a1-1fe7-4618-bcf1-883c9e63a23f}} , we have {{formula:f5499a91-00d1-4e89-8e94-ab02a29204f6}} Then we have {{formula:31169157-f601-452f-b040-f562ae21704f}} Moreover, if we further have {{formula:c55611b3-188e-4b47-8851-f992f0a1e64d}} for {{formula:14d51c62-8516-4c31-9e7b-858e85870504}} then we have {{formula:069ec915-1653-495b-8317-83a7de9fadaf}} The first condition {{formula:54ebc87e-7644-47f8-8989-e4e272ba1884}} is used to ensure that the expected advantage conditioned on {{formula:512b5bf2-4b7d-4ba4-9861-db8a3bcefe46}} is the same as the expected advantage conditioned on {{formula:ee4aabe3-ac65-4e10-9777-7faf4323f708}} . We have {{formula:33f2dd62-d156-4970-97d9-08c16014e88f}} By the assumption that {{formula:3393c627-68d9-4869-9020-a7acb997391b}} , the absolute value of the sum is equal to the sum of absolute values: {{formula:c9646a12-01fb-45db-bcb7-008e28391a8f}} By taking the expectation over {{formula:83179e42-5daa-4c37-bdf2-696d650690dc}} conditioned on {{formula:c9a3b05a-7596-4759-99c1-95761b3c33c4}} , we obtain {{formula:fa73908c-b1a2-4699-9780-9d8d1294a08e}} Since {{formula:a33d2e6a-fb81-43b7-9b02-ff398d6cd674}} , we have {{formula:4b0433cf-520a-48c7-ba9b-4045fba4eab1}} . Therefore, {{formula:448368be-d685-42b7-a472-d0503e89f031}} and {{formula:41116113-e7b0-4807-a8de-d6c26ff9fa10}} are independent conditioned on {{formula:c1e6b7c0-5a6c-446d-8e79-a3dc3f749f60}} by Proposition REF (ii) and Proposition REF . In particular, {{formula:c3035914-b52e-4b4d-b73d-6a9c280f9e9f}} and {{formula:ce0b2ff3-dad8-4c4e-b865-e191dbb1ea6b}} are independent conditioned on {{formula:8e418339-1aae-4046-b45f-3082000aec5a}} . 
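The step in the proof above that replaces the absolute value of a sum by the sum of absolute values uses only the following elementary fact, stated here generically (the symbols {{formula:…}} in the text are elided, so the variables below are not tied to the paper's notation):

```latex
% If all terms share the same sign, the triangle inequality is tight:
% for reals $x_1,\dots,x_n$ with $x_i \ge 0$ for all $i$
% (or $x_i \le 0$ for all $i$),
\left| \sum_{i=1}^{n} x_i \right| \;=\; \sum_{i=1}^{n} \lvert x_i \rvert .
```

Presumably the lemma's hypothesis forces all summands here to carry the same sign, which is exactly what licenses this exchange.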
By the fact that {{formula:d7b933cf-eca3-4c95-bdec-4e388f7c750e}} and Proposition REF , we have {{formula:a07bd625-15bd-4cd6-a4ac-6b7246bc3de9}} Thus, the expected advantage is at least {{formula:edb5a8a5-91d3-4dc6-a54c-354a3951e1cd}} This proves the first part of the lemma. For the second part, let {{formula:ce05fec1-f180-4857-b3c0-c6f611c511e5}} be the set of all triples {{formula:7d73bbbb-f6ae-4669-9290-f09bd73f6920}} such that {{formula:7a1a9345-4be2-4c17-88c9-4c5141626db7}} and {{formula:03ba4b86-5f14-47a5-9a22-1903a7dbc725}} Then by Markov's inequality, if we have {{formula:605c0252-764a-4fdf-bb0c-27e1cbac1be3}} then {{formula:b3c8837c-8fe0-4093-ac73-79c3af9dfc9e}}. Hence, for {{formula:2c9468e7-513d-411a-ab8c-c94bf3c2874c}} and {{formula:f78dc7a6-c679-4daa-b3f6-bd8a4b108067}}, Equation (REF ) implies that {{formula:8e605027-dd73-4a4d-a980-b9dbcb37b8e4}} Hence, we have {{formula:23d758dd-8a1c-4639-b318-78f941ef5d35}} Next we show that (the absolute value of) the second term is at most half of the first term. First, since {{formula:3087eb79-36fb-4587-892f-8f024a3a1a99}} is a set of {{formula:354ba33a-8b12-40f1-98ad-b47e3c264e93}}, we have {{formula:fa263d4c-74a4-4bf9-b211-a9c18e2b9dc4}} for any {{formula:e18a2058-a470-4cd5-ae45-a2e640520b2d}}. Hence, by the fact that {{formula:ec68a1c8-080f-45f9-a920-5b33f0f88193}} and {{formula:f487b7be-dbb1-4120-aad6-97976d9e228c}} maximizes Equation (REF ), we have that {{formula:245a280a-9334-4253-8266-5c6d040a8877}} which, by the bound on {{formula:8917e574-6d0e-474f-8b6e-d3606823042e}}, is at most {{formula:5b041da8-e54a-4ad9-b11e-8b0ff2c3fadf}} Thus, we have {{formula:125ed5ff-e859-41b5-b713-8b257ecf118a}} Combining it with Equation (REF ) proves the lemma.

High costs

We first consider all {{formula:8d585ed5-19e0-4368-aaa8-ae0eea3637fa}} at which {{formula:21dd9703-7ac1-434b-80ad-8724b134c569}} has high costs. We will show that this leads to a significantly lower {{formula:785bfb3e-81ae-4d89-93cc-f64bab7f5031}}.
High {{formula:36eba80e-d50e-43b3-8017-1b0e0bcbeaa8}} -cost. The first set of triples consists of all {{formula:6aeb6f0e-ab94-4494-955c-fcc410081788}} such that {{formula:816e1a85-7b6a-4289-93c9-b89cc174da4c}} This is the set of triples at which {{formula:2114efc6-b55d-4908-bb3c-6327aa28953c}} has high {{formula:56e9aa2a-359f-4924-830a-fbad4de1c834}} -cost and not-too-low {{formula:7630f0a7-c2e1-4505-86de-4a8cb430be92}} -costs. Note that {{formula:2480b2f7-78c9-48af-be6b-66d258401d09}} , {{formula:2586f7bd-42ee-4b1f-a602-e7cb3664bdaa}} and {{formula:8f8eb807-f83b-4cc1-8685-750ef386719a}} are functions of {{formula:fdcf52f5-14ea-46b1-a8e1-22165440220b}} , hence, they do not depend on {{formula:ae94fdd8-a829-46c5-b6c3-06dac9999371}} . Denote this set of {{formula:bb3e18d5-69f4-4ad2-aa98-834fb832bdd0}} by {{formula:d40a857e-0d48-4753-abaa-4722a00a8815}} . We will also abuse the notation, and use {{formula:8ee79a9e-341c-4485-b3ac-f9b792bb80e7}} to denote the set {{formula:c0827134-5d3b-482a-aaef-3583257f1863}} , which can also be treated as an event. By applying Lemma REF and Lemma REF to {{formula:c0eb1d98-c99a-4153-9cdc-2571d1d0985e}} and the corresponding {{formula:6bf8f461-8aac-437a-9b21-b90a63157e4e}} , we obtain the following bounds on the costs of {{formula:e8ed818c-4a25-4b0d-8636-1d44f7ed53b5}} conditioned on {{formula:f4d48176-bc33-411e-9a78-e387653e5309}} : {{formula:dd9bb510-c1e2-412d-b95e-2887915a3cc3}} Thus, it implies that (recall Definition REF ) {{formula:4c0daa2a-c9eb-4b6e-bb60-3ad46649a550}} where we used the assumption that {{formula:4da7b9cd-f5bf-47ae-8212-a7535d40fc0d}} . High {{formula:e9bf42a6-cb69-4b12-8de5-843ce6137741}} -cost by Alice. 
The next set consists of all {{formula:8ff6d358-b771-49f9-a139-6cfea0a62746}} such that {{formula:cecfdfa9-11cd-4d49-b0eb-b54f07dfa7da}} This is the set of triples at which {{formula:abbf2350-f138-45b4-a477-1fa41272cb67}} has high {{formula:b4062944-e426-49b4-8edb-70f5580af562}} -cost by Alice and not-too-low {{formula:aa61ecc1-935f-4614-89ee-57ed3cd2bb16}} -cost and {{formula:9e8691bc-b9b2-42f8-84eb-9fb34e15166d}} -cost by Bob. Denote this set of {{formula:49ae0e11-c7e0-4796-8571-9f26fd9e4e89}} by {{formula:f3152b26-ecf8-472a-8b69-523678fa5682}} . The upper bound on the {{formula:e8c3931b-9569-4bd7-af88-379f521163e9}} -cost ensures that it is disjoint from {{formula:807e0dc4-2c3d-455d-b98d-5779117f47a2}} . Similarly, we also abuse the notation to let {{formula:e98bb794-20ad-48fd-87b5-aaa962750102}} also denote the set {{formula:82765332-ea27-4e1f-accf-9cf2c0509107}} . By applying Lemma REF and Lemma REF to {{formula:4aa028a7-1384-48fe-a12c-408c6e32b077}} and the corresponding {{formula:7518df22-670a-4d9e-90af-e14c6cb2f506}} , we have the following bounds: {{formula:08553331-68a5-4f0c-9f45-a9436c08653a}} which also implies {{formula:07ae87b1-7777-43d9-881c-aae5ca107b93}} High {{formula:048fffba-b9aa-4a5a-8c99-ba3e5394a709}} -cost by Bob. The third case consists of all {{formula:9e96f656-c0d2-4718-8c84-60aab2935298}} such that {{formula:71755754-bae8-4d92-add3-d40d9a39d21a}} This is the set of triples at which {{formula:0df7d779-ab1d-4935-b00b-f4bdabc3e4be}} has high {{formula:2b22b569-1323-488e-8419-665e55595257}} -cost by Bob and not-too-low {{formula:76d12e68-3b23-43d2-9f4a-30d4a17d4969}} -cost and {{formula:7f3d6c20-8663-438e-88bc-b208d33767cc}} -cost by Alice. Denote this set of {{formula:e87e07e6-3c39-470a-91a8-39aee19e9d73}} by {{formula:e751282a-1af4-4f4b-9a59-3efab859bfe9}} . It is disjoint from {{formula:eadafc6d-c509-4191-82ce-319a7e657f9f}} and {{formula:de43bc1b-4ef7-4dfa-9bdf-97de3676c761}} . 
Similarly, we also use {{formula:9c476d2d-5e41-4b99-9b9f-15e50b2d306e}} to denote the set {{formula:a968bd2c-f998-4604-9ab7-0fdde065c2c4}}. By applying Lemma REF and Lemma REF to {{formula:b5f3a2d5-f6e5-4413-8c92-fed0be5c1894}} and the appropriate {{formula:26f5cf55-9466-4d92-9e0c-14048f60361d}}, we have the following bounds: {{formula:021d2886-279c-40a5-b3ec-0ec77624ba6b}} which also implies that {{formula:09367270-e4b4-4132-85d9-15d9a9b164e8}} Equations (REF ), (REF ) and (REF ) imply that for {{formula:c335fb63-d92c-445b-9e4f-5594cc221919}}{{formula:afd46ea8-3105-4d57-b82d-0c1f0f1bfcdf}} 2{{formula:62d65c6f-2d24-4afc-873b-fa1b8755825b}} 2{{formula:461274d1-21a7-4ad0-86f6-998689817490}}, we have {{formula:262045ac-50ae-4fb3-a64e-c27e2bed84db}} The main lemma of this subsection is the following, stating that if the above three sets contribute a nontrivial amount of total advantage in {{formula:07c3858f-7f27-4a07-96ed-7a3b130fb416}} (weighted by the probability), then we can construct a protocol for {{formula:60fc210d-c6bd-4695-a4ec-fdcae05136cd}} satisfying the requirements of Lemma REF (by conditioning {{formula:ee64bb6e-621c-48ba-aaf0-ee5b311a5db4}} on a carefully chosen event). Lemma 36 Let {{formula:dd1d0bb2-512b-4f9b-acbe-ba8503307f43}} be the union {{formula:8f4e59a5-5aed-4300-a906-4c2e27beed58}}. If we have {{formula:82a0df16-0f73-47b8-a9b6-02855bed2100}} then Lemma REF holds. Protocol {{formula:3e3084c0-b8f5-48a8-9e07-c13278b1f9a0}} and event {{formula:7a07eef4-aadb-4c03-9ee3-81fc6ac02c15}} may be one potential choice for {{formula:d245921e-eb4c-4991-ba1d-118fb4c3eb98}} and {{formula:dd678ee3-280a-4cd6-b927-d290c3bd25c4}} in Lemma REF .
However, Lemma REF requires the probability of {{formula:dadaf085-3ad7-41d7-97b0-e316a8593d92}} to be {{formula:db5fc145-c627-4b9c-899c-32eec4851459}} , which is not necessarily true for any {{formula:6c80e6f4-20f2-4f8c-a3d7-a35edaeea26e}} , since {{formula:1bea20dd-c2be-4c8b-a09e-cd25e88eb94d}} may have very small probability. On the other hand, we could also consider setting {{formula:317cae63-2bb1-4b92-8e2a-ef9fd2f46e9b}} to the distribution of {{formula:026fd654-f927-4281-a6df-77cf229978f2}} conditioned on {{formula:d3290f3c-aaa5-4d3c-a477-e844792a946b}} and {{formula:6ec6c38e-eb8d-4fe3-991a-da96981a40d4}} to the entire sample space, but this protocol may have very large costs. To resolve this issue, we will use the following lemma, which turns {{formula:305a136b-0f32-4b8c-a8bc-aa5be4b8a0f6}} into a protocol {{formula:8609576b-041d-4686-9537-09858cec2e62}} with bounded costs for some event {{formula:871d49ee-43dc-4865-b9aa-ca1fe733f517}} . Moreover, by dividing {{formula:c30154c5-86f7-4c63-bac5-9e17069d632a}} into {{formula:ecb08aa5-66b2-4b7c-83df-eab28fc6083b}} according to whether the function value is more likely to be 0 or 1 conditioned on {{formula:02eddfab-e8db-4478-a450-966ba6107758}} and {{formula:dff72c49-805e-4baf-9dd1-0bac6181bed3}} , the lemma guarantees that the costs conditioned on {{formula:f2c28aa3-e21a-4181-acfd-08f64d808429}} are also bounded (for {{formula:3b09c0ae-43af-4ed8-a1e6-7fd1a9d1c272}} ). This will allow us to apply Lemma REF later to lower bound the advantage. Lemma 37 Fix any {{formula:5057bf0d-a752-4453-bb60-f49f60431ad3}} . Let {{formula:2161f2ab-eb93-4d6e-8f80-8a0ac0a92dfd}} be an {{formula:03f66416-e7a1-451f-8a57-d54c4e93fb7a}} -round generalized protocol over {{formula:25b63839-9fd5-4949-b7a0-013441f2546f}} , {{formula:bdfdf7df-e0cd-4bcf-8b62-ebde93e3a526}} be an event, {{formula:5ddc9e3e-6cc9-4d86-8103-73841e71775a}} be an input distribution and {{formula:3181ba4e-476c-4b0d-bd83-0f7d6b4d5896}} be a function of the inputs. 
Then there exists a partition of {{formula:3b29f3d8-10e5-44a5-add1-0f42fac0333c}} into three events {{formula:f247724e-77ea-4ae2-a039-334fc3decfb4}} and a partition of {{formula:0df76a53-b352-4b24-a52d-4f979443f334}} into {{formula:ecd21dfd-0385-424a-bdf4-891a01a7076c}} such that the following holds: (i) all three events {{formula:5e32c3c9-0bbe-4298-9bae-2a7d57584f60}}, {{formula:2bb4e751-e998-46d3-848c-ec05b3697992}}, {{formula:6ad57cb4-6928-403e-940a-a4c89e85b833}} have the form {{formula:82674adc-a1c4-49a9-b347-a256abed09a1}} for some {{formula:b73536ff-8768-48cd-abab-7d4976e480e0}}; (ii) {{formula:06ba7522-75f1-4777-9663-11e3ed730f83}}; (iii) let {{formula:af8e4bde-3901-4d8f-af0c-486ba3cc19af}} be the protocol {{formula:570a6be7-b484-43c2-8058-560376db1078}}, {{formula:341fcc90-8882-426c-a657-ba42f96114d0}} and {{formula:13718ffd-f75a-42d3-b9a3-8aa59dd60867}}; then for {{formula:c808be61-f496-4f08-b541-5867dd57788d}}, {{formula:23af7f74-ce05-48ba-8ccc-a6224d81c8ad}} for {{formula:4744b100-4ba7-4969-92bb-a7f36a61b2bc}}, and for all {{formula:7ed19cc6-5940-4a4b-83cc-db0d740c55e8}} such that {{formula:94d9b2ec-6b3c-48d2-815c-7863b3cd9450}}, {{formula:37915144-51e2-4027-8fdf-93d9fa18928a}} Note that we upper bound the costs of {{formula:899299c5-f49c-44e8-9d60-9888406386c7}} conditioned on {{formula:6e4f1a51-3138-4751-bbda-1d141925cd7f}} by the costs of {{formula:0e7afbe9-7bdf-48c4-82e6-2a7e11f7486a}} conditioned on {{formula:67191e5b-1d60-48dd-aa0c-ff16dda6f741}} (plus some small quantity). Thus, for {{formula:00c8796b-296a-46c0-9242-5328095a6624}} and {{formula:501c3219-b8ea-4bac-a7c8-9e823d116cb4}}, the costs are bounded due to Equation (REF ). To focus on our main proof, we will defer the proof of Lemma REF to Section REF . Now we use it to prove Lemma REF .
[Proof of Lemma REF ] We first fix some {{formula:087fb7f0-2eeb-4fa2-b1d8-34b17b724c7e}}{{formula:1f7a0b01-06be-4271-87a5-34a1eade29ac}} 2{{formula:6eabff94-651c-4f06-bfa5-ac901516808a}} 2{{formula:2ba78da5-2379-4c4e-8b8a-26545f85a098}} . By applying Lemma REF to protocol {{formula:9406efb6-10e9-4062-9da4-e969cfba4479}} , event {{formula:d98cc794-c07e-4d6d-8ab8-35883914df1b}} , input distribution {{formula:823abbc2-e8fc-4af5-b428-00ce964167d3}} and function {{formula:870a7e8b-d2c3-4e48-9c0e-3622fe95da0b}} for {{formula:0ba11c7f-afb9-48cc-8594-5653082fd71e}} , we obtain sets {{formula:3db5ef88-8b5b-4eb9-9175-747601810b49}} . Let {{formula:c3735b77-2d1d-4bf8-95ab-be2d0c3c8b6d}} , {{formula:b438f7d0-6719-4a19-91d1-e9cc5149f788}} , and {{formula:8c6ba1f5-1cbd-4048-94da-fa9b3e13575f}} be the distribution {{formula:59c2ad77-d0b1-48b4-9d5b-ebcabe55f70e}} conditioned on {{formula:81112b70-bd04-496f-85d8-ed3b521b53f1}} . The lemma guarantees that {{formula:970a598a-d58a-4d01-b806-ab489ed0586e}} all have the form {{formula:0ac97cd8-a3d3-46b0-bf26-007f31903cd5}} for some {{formula:2dd6baac-cacb-4295-af54-4b53cbb20aef}} (Proposition REF ). Since {{formula:59d4eeeb-441f-4da0-8d60-c1bd170075bd}} , we have that {{formula:aa61b1c9-8765-4025-ab56-834df5068baf}} . Since {{formula:706f8f7e-d87d-4c0e-b534-9dfcd611360c}} , we also have {{formula:56f07986-0965-4401-af5c-0af961e15df5}} . For each {{formula:a0bc879c-aa78-47f0-816a-6bb9acf34e79}} , since {{formula:cdf8638b-6fb0-4b20-bf37-0b7726ce2c48}} , Proposition REF (ii) implies that {{formula:f798d1db-ba73-4f14-b8be-6035b10b184b}} has the partial rectangle property with respect to {{formula:37eea5b9-8397-4483-982f-b81137faa151}} . Then Proposition REF implies that {{formula:60e9ee78-f5a0-493e-aaac-54f49816da8c}} , i.e., {{formula:63cc66ee-3d1b-424a-b4fa-725b6ab1493d}} , has the rectangle property with respect to {{formula:f846330c-9370-4693-83c6-2392c21e6571}} . 
For each {{formula:963fb634-78dd-4abb-ac9b-8852bba17140}} , since {{formula:0d227392-45b5-4958-b3ae-6d16d83c1d79}} , the protocol {{formula:85e5363e-021a-4033-bff6-0a74715f85c9}} and the event {{formula:1be0b62e-71fe-4fb8-bb0c-f4f069b5c910}} are one candidate for {{formula:a73ad583-4c78-410a-850d-97e1dcde1f43}} and {{formula:cc122e36-22f6-4aac-a666-79478d480189}} in Lemma REF . We will prove the following sufficient condition for them to satisfy the requirements of Lemma REF . Claim 38 If we have {{formula:b8620298-d503-4ccb-8c7a-f75592e50c87}} then {{formula:9fd8ee09-ea3e-4632-84e2-d1076754bf24}} and event {{formula:ca74443b-de25-48c7-ad08-1f5252d82e7e}} satisfy the requirements of Lemma REF for {{formula:68408091-d695-4051-a208-8bccc38f9f87}} and {{formula:df84fea8-a391-475b-bdff-686177d5bb50}} . Before proving the claim, we first show that it implies Lemma REF . If Equation (REF ) holds for any {{formula:d5561205-785a-4c74-b969-79aa6c6a70c3}}{{formula:c14e3133-4e07-4284-83ef-1a6abc05be15}} 2{{formula:0416a162-020d-4f7e-8d4c-c2865666d565}} 2{{formula:39b11391-8457-47e6-a09f-a2c313facda2}} and {{formula:d11057cd-4e46-4f1c-b76b-290bc8f6fc5c}} , then the lemma holds. 
Otherwise, for every {{formula:62d95f71-f5a5-4065-8b08-6b6085656c6c}} and {{formula:ef97fae6-1f28-46f4-acaa-b631c1a8b0a3}}, we must have {{formula:a0cb4739-2662-4161-a8bd-5f7fa5093903}} On the other hand, since {{formula:3ee90d68-197b-4b41-8c5c-589b57d6303b}}, {{formula:6b8f0503-e44d-483d-beb9-d3d4c22139f7}} and {{formula:f4a626cf-bd0d-4269-891e-78091ddb9253}}, by Proposition REF (i), we also have {{formula:d8cd46c9-9009-4609-a88e-16b5d6b031a4}} Since {{formula:2c1d9021-e7ba-4322-9912-b5b441e9a354}}, {{formula:4cd9d05b-9543-450a-ab4e-be56b2cc4390}}, and all 12 sets are disjoint, by summing up the above inequalities for all {{formula:9cd30ada-b028-48bd-ac4a-d3d675e0dcb4}} and {{formula:2edf7152-7323-4ba1-945c-d634c5efdf63}} and applying Lemma REF , we have {{formula:c429d547-7fc5-4c8e-ab9a-2718be1296a6}} contradicting the lemma premise. Now it suffices to prove the claim. We first observe that by Proposition REF (i), Equation (REF ) also implies that {{formula:f1c47049-2fbb-40f0-b114-6f62d8dc7892}}, which, in turn, implies that {{formula:9b62011e-3fa8-4601-9b7e-1760c1aac086}}, i.e., the probability of {{formula:3b6a5bc8-7a5f-4fc6-a6c4-5a8315742506}} in the distribution {{formula:924b18eb-aecc-4ea7-88c1-3b729821aeca}} is at least {{formula:b698c232-c53b-4445-bf2d-bdf3320830e5}}, as required by Lemma REF . In the following, we show that the bound on {{formula:3ba5d075-fc1d-475c-8072-fbfceaf22aa0}} also holds. Bounding {{formula:c8b7fc30-5f01-47ca-8398-d4dee700c87a}}. We first bound its {{formula:ae7bf5ee-5824-4214-b6b0-255afc2443f0}} value.
By Lemma REF and the fact that {{formula:3873d4bb-a6f0-4ad7-845c-023b969fb012}} and {{formula:8cb77651-98f6-4ace-92a2-4c02296a96d5}} , we have {{formula:3a63e2f1-8170-44f2-a506-dcb9fc73f381}} where we used the assumption that {{formula:d724b11e-712b-45ea-a8f2-f34ad1161c59}} for a sufficiently large {{formula:82ef3431-b44a-47a5-8ae6-d33105bd5d4e}} , and the fact that {{formula:7faf35c1-9c06-47d9-82cc-bb1343d5ee83}} , and the fact that {{formula:5e350d54-adb9-4431-91d0-6b84fb0a6d49}} can also be viewed as an event in {{formula:f100f551-c8cc-4b08-a4f8-dd8fbcc2dd14}} . Bounding {{formula:12791cf5-562e-4fc3-b1d0-a57b93172e3d}} . Next we bound its {{formula:814ab5f5-04f9-43a2-82df-559e62bbf133}} value. Lemma REF also guarantees that for all {{formula:97290833-ed27-4d45-8ded-50220eefcc7b}} such that {{formula:9d9580a0-eb8f-450f-a5aa-30e060fbe87e}} (recall that {{formula:e1ca39f3-c6e2-45e5-88d1-7a42e498a052}} ), we have {{formula:22b0b31f-f279-4abf-92bc-f22f330cc410}} This allows us to bound its advantage by applying the first part of Lemma REF for {{formula:9eacde16-19e4-4b7b-887e-ce9f5afcbbdd}} (note that {{formula:62628da0-a2d2-4947-ba12-6e65f79dbc5b}} ). Thus, it implies {{formula:e793b793-d192-4f66-a638-62bee37f6ac4}} Note that the LHS is exactly the expected advantage of protocol {{formula:7469e3a5-05ea-429b-92bf-27e338055f97}} conditioned on {{formula:ef80d9f4-5295-4e46-b4a1-37325998fb2e}} : {{formula:4b16f2ad-888f-467b-97cf-57e41f2825aa}} Thus, by definition, that is {{formula:ef7032e7-aaf0-4f83-b523-6223a0f6dc59}} Bounding {{formula:f4527854-fadb-42a3-8fa9-6b9d95b9d81b}} . Now we sum up the two parts of the potential function. 
By Equations (REF ) and (REF ), we have {{formula:7358c4c3-4351-4eb8-b9b8-0a61a3708d73}} Finally, by Equation (REF ) and Proposition REF (ii) (for {{formula:1f31899d-c74d-4c19-8285-5c855144d61b}} and {{formula:01cdc3f0-e1e6-4e7b-9203-58c626872fab}}), we have {{formula:091364fc-71e4-4c6f-9145-0c257ad2ddb2}} Plugging it into Equation (REF ), we obtain {{formula:fe7b6063-c13f-426d-8ca2-abde22fceb73}} since {{formula:209e3d74-94c2-4744-b7bd-c218a68cfcc0}} for a sufficiently large constant {{formula:7a13013d-47e1-4dac-92df-30c9fb6c1fa9}}. Hence, {{formula:fead0225-61ae-4f9c-ac5e-7153318c3ea0}} and {{formula:90ebf449-edbe-430b-bc8e-c28ca683ce45}} satisfy the requirements of Lemma REF . This proves the claim, completing the proof of Lemma REF .

Low costs

Now we consider the case where {{formula:f9f8f6fb-3fbb-4cba-be94-051e5fb41a81}} has low costs. It consists of all {{formula:b388693a-f56f-4c9a-879b-0144f71e2fa2}} such that {{formula:d073d15d-5065-4bdc-9e6e-3b0e2e5f2fff}} This is the set of triples at which the costs of {{formula:35accf49-b9cd-4261-9ea8-b2cbdf579ed7}} are neither high nor too low. Denote this set of {{formula:a9799de7-3939-4c1a-a344-9b475923ba29}} by {{formula:582fb352-aaa5-4fd1-adc4-f53e23c76003}}. By definition, it is disjoint from {{formula:76f9f0ed-2771-4863-80f8-237277279110}}. Similarly, we also use {{formula:b3807605-bec4-4b5c-842c-23207b5ea71e}} to denote the set {{formula:fa373fb0-5a7a-4b69-bc3c-110eed07cae1}}.
By applying Lemma REF and Lemma REF to {{formula:e32ca1d7-a4ae-433a-8ee3-4797a9ed4ce0}} with the appropriate {{formula:8963603d-2fcc-4bcf-9137-73b8058c114f}}, we have the following bounds: {{formula:a5965e31-c77d-4756-b4d3-72bfe73c0930}} Therefore, we have {{formula:6889f823-fbca-4b97-a451-eca0c7df90bb}} The main lemma of this subsection is the following, stating that if {{formula:70e8a897-f8bc-4b08-a439-81089e01c8ef}} contributes a nontrivial amount of total advantage in {{formula:da48f8e0-cbd1-4bd6-b6bc-7180ac12aab3}}, then we can construct a protocol for {{formula:20aac636-0cbf-49bf-afa9-991b86a02980}} satisfying the requirements of Lemma REF . Lemma 39 If we have {{formula:c325c339-7cbe-4861-be96-3002f64f49d8}} then Lemma REF holds. The proof will use the following lemma, which converts a generalized protocol with low costs into a standard protocol with low communication. Lemma 40 Let {{formula:d97f4a07-3c37-4a7d-8741-26d0d0404e1a}} be any fixed parameter. Let {{formula:b9b2ffc5-0eb3-4474-8539-7b0635982196}} be an {{formula:637920cb-19c7-4140-bafa-0aec14600188}} -round generalized protocol and let {{formula:7ff8314f-dfe6-49a3-99ce-8f915ee1265a}} be an event such that {{formula:293b8a30-8c62-4fea-833b-fe48362bb4fa}} has the rectangle property with respect to {{formula:b5275841-1ede-459e-9f75-1a609a8baf36}}.
Then for any function {{formula:eb1121ee-8565-4894-9e8a-44e012bc7e3c}}, there is an {{formula:0688d4a0-dc33-4fbc-a1ce-ac42fc03f491}} -round standard protocol {{formula:263a3042-d98e-423c-b47f-19208a0eec31}} such that: (i) in odd rounds of {{formula:fb01618d-7b25-4889-83c8-e24fb062e0dc}}, Alice sends a message of at most {{formula:278b8256-b0c2-41bd-b965-be5e73fd1dd8}} bits; (ii) in even rounds of {{formula:c4d658f2-a27c-4ef6-9706-deb3f2b64cf4}}, Bob sends a message of at most {{formula:46d44205-eca7-4e76-ac35-c1252299fbdf}} bits; (iii) {{formula:fb409e4e-1427-44b4-a9b0-0ec5582acd8e}} computes {{formula:b6d5fbae-be34-4731-93d3-e38f6be79535}} correctly under input distribution {{formula:d5ed220d-dbd6-4a9e-a59f-2d50104cfaf0}} with probability at least {{formula:d1a0cba0-bf52-488b-823e-85191561367e}} To focus on the main proof, we will defer the proof of Lemma REF to Section . [Proof of Lemma REF ] Similar to the proof of Lemma REF , we first apply Lemma REF to {{formula:ddc4dfd0-7b0e-4086-a984-fadb1e68bc29}}, event {{formula:dbff2928-1165-4828-a9a5-3827094b71cc}}, input distribution {{formula:42d983de-abfc-438d-9556-39dbc98149b9}} and function {{formula:39b4da98-8ef0-46e1-abdf-4ce796e00fd0}} for {{formula:2b8c18f2-ca7a-4f03-a1d8-ce59ef31b213}}. We obtain sets {{formula:57135f6b-8933-44e0-b8e1-830982ffa4c7}}, {{formula:feb17253-975d-48e2-bf8f-01ab4acb4ba3}}, {{formula:988d59c6-6dcd-468a-9830-913ff44ccbe3}} and {{formula:eabf9fbf-b8fd-4628-9409-e03139d7022e}}, {{formula:0d532c0f-b611-4fc7-8b04-2e2038822892}}. Let {{formula:078be3d1-eba5-4bc3-a844-0dc27191b8c9}}, and {{formula:ca63ea55-219b-4a0e-a620-baed7ce56bc5}} be {{formula:7c71eb01-f25a-4449-9ee0-ace9dd161dab}} conditioned on {{formula:5fc45e43-6c3b-442c-bc0f-e63d167ca110}}.
Again, we have that {{formula:1189ceb2-d768-4eb6-96ec-315947a9fc3a}}, {{formula:4b3a89b8-b8a7-43ac-92d4-4cddc5093609}}, {{formula:6d62fbb9-03b1-4837-b1e3-e9ea1c83e65a}}, {{formula:e2fb83d1-c2bb-45e7-b8fe-383e607f33e8}} and {{formula:139d0619-1077-43cb-9469-b5a999d54cd3}}, {{formula:45074c3c-5933-4944-a90b-c91be4d689be}} has the rectangle property with respect to {{formula:06e7a9ca-6cc2-45b4-8820-a76a1086211c}} and for {{formula:567fc534-8e41-46a9-8e2d-a1024603c5e5}}, {{formula:e2dbdd3c-03da-4973-b04b-fb4897f99519}}. Thus, the protocol {{formula:bce6e4a7-bf22-4c76-bbb4-e89137a6a669}} and the event {{formula:83b10ef5-757b-46c1-9ac3-de4a40e0e170}} are one possible candidate for Lemma REF . We will prove the following sufficient condition for them to satisfy the requirements of Lemma REF . Claim 41 If we have {{formula:c3815e3e-534b-4d96-876c-ba76ca778620}} then {{formula:4d6b7313-e73f-490f-9d9f-7b235e4f8b0b}} and {{formula:3bbd9da4-7a60-4bdd-8361-2fd49051ba99}} satisfy the requirements of Lemma REF for {{formula:dff2fc2d-d60b-4356-8d8f-b11e28d86ff9}} and {{formula:6123dfd3-6a17-4db9-b4fe-c5534ddd924c}}. Similar to the proof of Lemma REF , before proving the claim, we first show that it implies the lemma. If Equation (REF ) holds for either {{formula:4be0d2ec-1866-400c-98fe-3103946038e9}} or {{formula:14d14996-b65f-4c78-b029-a2807f773a3f}}, then the lemma holds. Otherwise, we have {{formula:0bf993bc-c2f0-4e73-8fd3-5a7933ebc348}} for {{formula:0fa6b7e9-eeef-4780-96fe-6b953c91f6dc}}. Lemma REF guarantees that {{formula:4fe1e003-f346-4013-badc-d07a16040cc1}}. By Proposition REF (i), we also have {{formula:55ee189f-dcd0-4096-b68c-079f80d7d3d2}} By summing up the inequalities and applying Lemma REF , we obtain {{formula:ef03193a-18bd-4545-bb31-45f0d8f448cf}} contradicting the lemma premise. Now it suffices to prove the claim. By Proposition REF (i), Equation (REF ) implies that {{formula:f8242fbb-e430-44bf-9c54-66c6d7a6c4bb}}.
Therefore, the probability of {{formula:1b1fa035-03c7-483e-8781-658a260b7e24}} in the distribution {{formula:56276c43-9ee0-48be-bec8-57755277168f}} is at least {{formula:77ff1c89-3027-42da-85d5-d0c431af7a74}} as required by Lemma REF . In the following, we show that the bound on {{formula:ef1dd4f8-be3c-455f-b625-1b408f104331}} holds. Bounding {{formula:f8f14c27-040d-4b4e-8f4f-a7d5e8edcde0}}. We first bound its {{formula:b9ed7fd4-baa4-4fd9-975b-f7c912e8bca7}} value. Similar to the proof of Lemma REF , Lemma REF guarantees that {{formula:5a7992f7-d1bd-4867-ba64-54a8bd138fca}} where we use the fact that {{formula:840c1f31-76a1-491b-ad57-0078e6075ea0}} for a sufficiently large {{formula:630ca4a7-945d-447d-929b-d8c7ffdc1e1f}}, and {{formula:bb9f128a-96ad-4242-8f4a-e15c64050604}}. To bound its {{formula:2f38fb36-c8c7-42dd-8d61-21df3464c0e1}} and then {{formula:b2cffa3d-8460-4c66-81a3-c3d6563a1fd6}}, we will consider two cases: {{formula:ad0e8beb-a1b7-4082-abda-3c380ec9996c}} and {{formula:df976235-addd-4e8d-b34c-8a364adafe45}}. Bounding {{formula:36af95ec-12cb-49d9-b6fb-ef57b5d7e342}} when {{formula:10c9cc65-3a89-4549-ba7e-b63e3d615c2a}}. We first bound its {{formula:92ea0ae4-2030-46ff-bb93-8f87d165e979}} when {{formula:c56bda61-8067-4ab2-a294-11eb9ad0d723}}. Similar to the proof of Lemma REF , Lemma REF implies that {{formula:90628030-d00d-4f85-9fe4-66161465d4f9}} Thus, the first part of Lemma REF for {{formula:5b991f71-0b8a-41ad-a8d3-313d9f3d7d71}} implies that {{formula:a161e51e-eca6-48af-bde9-f97e4d48a726}} That is, {{formula:1331f82f-92c6-4d30-b404-2520a66832d0}} Bounding {{formula:56aa98f3-0940-4e52-be12-94cd5c52440f}} when {{formula:29a9a133-b66b-4b25-bfcd-f53786da4d89}}.
By Equation (REF ) and (REF ), we have {{formula:aae784b2-247f-4165-a751-30a6ab1b0f35}} By Proposition REF (ii), Equation (REF ) implies that {{formula:6dd62c8a-2225-4205-ae9c-9a945eb29e44}} Thus, {{formula:36a3b387-b0c6-43c7-b549-dc899b78ab4d}} , as {{formula:0d6db566-5afe-42a8-9857-f03783b53b22}} for a large {{formula:9c5cb3d5-002b-47e0-99d1-e059d018f407}} . This proves Claim REF when {{formula:c02bf049-ca07-4731-8267-c7101966f7b2}} . Bounding {{formula:3c7b3677-1386-4124-9101-454f12d2304d}} when {{formula:6d2ad5e6-f99f-4de7-a6db-b118cdc8261f}} . Next we consider the case where {{formula:9c31b1ae-e3f9-4f0f-a4e4-82c24940ef10}} . To bound {{formula:dcb2d1a1-ed72-4dc8-b4b2-f2f9031fc4d3}} in this case, we will apply the second part of Lemma REF . To this end, we will first upper bound the advantage of {{formula:05b06a63-47b7-4cba-8066-2bfd8f4e0128}} for computing {{formula:dbcb5093-d00e-4bc6-9382-89d75e4177d9}} by applying Lemma REF to {{formula:0febe16c-9606-497a-8b5e-56491d0bc529}} and {{formula:f924c207-cdc6-4ed6-9078-916ccb99034f}} and using the assumption on the communication complexity of {{formula:c2666c3d-bbf8-4561-8033-ba7bfc0e8e37}} . To verify the premises of Lemma REF are satisfied, note that {{formula:7b436594-2428-4bc9-a1b4-e94f55031bbb}} is in {{formula:5b498679-ba2a-4d0e-8057-e32b7fffcefc}} , hence, {{formula:489fe2a2-df4e-46f6-b063-ab3c6d36e65d}} has the partial rectangle property with respect to {{formula:fdb2af39-c883-4a14-a6e4-dccab374bd64}} by Proposition REF (ii). Hence, {{formula:50964bee-0f70-4fc0-aae7-f71b41e36361}} has the rectangle property with respect to {{formula:d91e1b18-3d49-4c59-8346-fcc2c17d3dc7}} by Proposition REF . 
To bound the costs of {{formula:a9c3d994-600f-4d5b-8e8d-8dc55e28cbf0}} conditioned on {{formula:5214ee4a-345a-47b3-b149-808061ecd1f8}} , note that since {{formula:3a6e73bb-81ba-4c9f-95e5-be3772f0a3b5}} , we have {{formula:945ae47c-8733-41f4-9691-7712c2999a56}} and {{formula:cf7bf349-6feb-4c39-94d9-7ce6a8b7a3f8}} are independent conditioned on {{formula:630ca9d3-7cb1-4bb0-a94f-eb7411eb4817}} by Proposition REF . Note that {{formula:3adaf02a-3847-4c03-96e5-48a81bd74db4}} , and note that whether the event {{formula:6424d5ce-5d14-4476-a9a5-86ccbf4773cc}} happens is determined by {{formula:635a19b4-a8f1-473a-9b17-08c2d44ed2f3}} , and whether {{formula:fab8e7a5-ff9f-47db-8a0e-dde17481da9e}} happens is determined by {{formula:18eaccaa-2549-4983-b951-d7ace091fb79}} . Therefore, when {{formula:14d0aefb-da84-4bf0-8f18-e305cb59daa8}} , the distribution of {{formula:882db6ed-50a7-4996-8e71-8e3241a2b218}} conditioned on {{formula:39c2c00b-287b-42a4-a09a-405c8a5c0d27}} is the same as the distribution of {{formula:1b477ab8-097a-482b-a611-e8c48eddbe14}} conditioned on {{formula:f6e0d8cf-85d2-4012-a951-d1332a0035a6}} , because {{formula:ab505fcf-552a-46c2-99b8-dc3612313ee0}} Thus, the {{formula:c18efda9-1425-400b-bf38-dc015bfa6a97}} -cost of {{formula:c36ba82f-ca0f-4517-9304-2d819092ea68}} conditioned on {{formula:cc345139-afb9-44d7-a718-bdc0de9f68cd}} is at most {{formula:b0064d3c-2d9b-4600-8b3d-cc2932e9133c}} For the same reason, its {{formula:cd0dd858-07ea-4a53-bbfd-419def4de01e}} -cost by Alice is at most {{formula:e42e974d-abc5-4320-962d-07db550e1c6d}} , and its {{formula:533ccb5b-0274-4c05-9201-0424011227e6}} -cost by Bob is at most {{formula:42140214-3e2a-462a-9441-38dbe74a84c6}} . 
By applying Lemma REF to {{formula:4cbf14b6-1bf9-4102-9523-817242152b1a}} and event {{formula:f6f3c3fe-9e96-4f14-9a24-73e45c8d3ce4}} for {{formula:ba38c27e-2735-4d3f-87bb-482ea0b99f45}} and {{formula:cff18f98-b77f-4e22-ad52-4ffc7359469d}} , we obtain a standard {{formula:43c7b26f-e431-45b9-a4e3-d7b3d73bd710}} -round protocol {{formula:6ff1710f-b6bb-4bac-b161-667dc22de568}} . Since {{formula:14dce6e0-4fcf-4ca9-83e1-247b035cb1a7}} is a sufficiently large constant, we have that in {{formula:687800a7-d913-4053-be82-4f95ef844e65}} , Alice sends at most {{formula:98a99880-ac43-456c-8aeb-ba632a3ed14a}} bits in every odd round; Bob sends at most {{formula:086c1092-2021-4ee7-b65d-73054524a7ef}} bits in every even round; {{formula:241659e8-5b1d-49aa-b52a-e71a5c021cd6}} computes {{formula:d0420062-3512-48ec-a328-3164ccb9a1a0}} correctly under input distribution {{formula:0c567bc4-b0cf-4c7a-b5c7-60c0bac7b4a6}} with probability at least {{formula:48f3d91a-2370-4cac-bd57-d24904a1c94c}} By our assumption on the communication complexity of {{formula:710939f9-213f-49aa-9f93-89a2be37d154}} , the expected advantage of {{formula:dd1b4aa8-7137-4e4d-a199-b73d67995166}} must be at most {{formula:6b73ef60-c338-47a6-9ead-3e9bc9b4818b}} : {{formula:8243be77-0c3a-4c6a-8ba2-fc8a358c3e71}} This implies that {{formula:8346684a-9e83-4725-853e-5c8bfc2cd18c}} Since {{formula:98f2f3c1-80db-443c-8cd7-d8843fa8c0f4}} , we apply the second part of Lemma REF for {{formula:bf2deeca-9090-4314-99f7-65bb74abded8}} and {{formula:0d72d627-af38-4f78-80a8-c7a09b620b57}} The premises of Lemma REF are satisfied: Lemma REF gives that {{formula:3dfda7fc-e175-4182-9522-ed6581d5d35b}} ; by Proposition REF (i), Equation (REF ) implies that {{formula:4cfa25ee-283d-40c0-8d06-b6a780f6da8b}} , hence {{formula:fb40c2fe-5997-4013-8ac2-fb5befb5c562}} ; then we have {{formula:5e6988c4-8930-45fa-af06-f8da721ff675}} Equation (REF ) implies {{formula:cee936bc-583b-4764-a9aa-3da6e23fbccf}} , which is at least 
{{formula:a105cb79-4ba3-4421-a3db-aa8e7e3b0b1f}} by the upper bound on {{formula:ca443168-2f73-434f-9632-db61b237de8a}} in (c) and the fact that {{formula:03eb1fd1-718b-4a8f-873c-7676be9e2488}} is sufficiently small. Hence, we obtain the following by the second part of Lemma REF , {{formula:93bc781f-c267-44d0-b69e-314c0877445c}} i.e., {{formula:aa1e8745-35b4-4bab-bcd2-dc7ed4e083f2}} Bounding {{formula:bc4ec8e7-a5bb-4cf8-89ec-24ac386e71df}} when {{formula:fda1d3ed-5986-43c9-8647-a08fb7c9da32}} . Combining Equation (REF ) and (REF ), we have {{formula:66b570f5-c14c-4f52-a8df-9ac7834e75fa}} This proves Claim REF when {{formula:87e35bf5-6ba1-403f-b423-8f7bf685ff8e}} , and completes the proof of Lemma REF . Putting together Now we are ready to prove Lemma REF . The two main lemmas in the previous two subsections show that if either {{formula:4ffb55f7-1e42-4b74-a766-2a702dec8bd0}} or {{formula:547e6811-c6e3-408d-99cd-155a2a0ea5c3}} contributes a nontrivial advantage in {{formula:0e405880-de8e-4102-8e58-5fae740510c1}} , then Lemma REF holds. We will show that the complement of their union has very low probability, hence contributes a small amount of advantage by Proposition REF (i). Then the superadditivity of weighted advantage implies the lemma. 
(Lemma REF ) For {{formula:31a1d14d-60ec-4f0f-b730-dbbc1fb5a5c9}} ,       if there is a generalized protocol {{formula:a7ec3e44-78de-4731-a4b2-a18b612b4b93}} for {{formula:535ab153-821a-42eb-8c69-9d095974b6ab}} with the rectangle property with respect to {{formula:e4e061fb-773c-4432-8416-711d4f3643b5}} and an event {{formula:3e7c7f1e-3d61-4603-8b60-5aaa1c3999c3}} such that {{formula:19f72421-4ef0-4723-b0c1-6759f6b6e114}} ,       then there is a generalized protocol {{formula:9f16f8af-5417-46b1-8b48-3ca9fcb51d4c}} for {{formula:02b28d0c-a85c-4f34-bdc9-75437b6ed5ca}} with the rectangle property with respect to {{formula:1d28b389-9812-42bc-ade8-b069309f832a}} and an event {{formula:28ea968a-5eed-4dc1-9d96-bc93080b74fb}} such that {{formula:348b60cf-aace-402d-be06-50f777cb349d}} , and {{formula:3637d532-3328-4cdb-9727-f848df82936b}} If the premise of Lemma REF or Lemma REF holds, then Lemma REF holds. Otherwise, we have that {{formula:dbd18fa7-9501-4722-bd9f-9e5646dbd663}} and {{formula:a62289b7-8665-41c2-b093-150b4a96e36e}} On the other hand, by construction, the complement of {{formula:893ffb61-0e8e-4eaf-a012-7185d4ccbd7a}} is the set of all triples {{formula:34ca4241-d577-4d4b-b277-d3f90348037e}} such that either {{formula:62c5bc74-bff8-46f0-bf90-d2532818df0c}} Denote this set by {{formula:ee313a6d-ac5a-4048-bd21-2b8e80af8a1f}} . Clearly, we also have {{formula:fc7b4104-b461-47a0-b27a-a6154ef10927}} . 
However, by Proposition REF , we have {{formula:e6d0c5e5-216c-40e8-8723-8b0445cf5224}} Since {{formula:a510b245-5f96-4bef-8996-4de5b256b77d}} is a function of {{formula:1f3c439e-dcab-4497-aed9-6618c9f90cea}} , by the convexity of {{formula:62c0971e-8fa4-442b-9e0c-52db7a86294f}} , we also have {{formula:36373a1f-271a-4075-b339-ce888923f2cf}} and hence, {{formula:47a00e2d-9ea2-4f97-bb91-92e517932e50}} By Markov's inequality, we obtain {{formula:5f5a2bfc-9f3c-45c9-b18d-42f4d0d4c119}} Similarly, by invoking Proposition REF , we have {{formula:00b79818-930c-461a-872d-e7a748db10f3}} and {{formula:ee8db9f7-ff35-4dd2-9e5c-7d3c5b207cc7}} Thus, by union bound, we have {{formula:1d819c60-474f-4c30-905d-e57520a701ad}} since {{formula:e0bb97ba-6f45-44ec-b94e-fde8cc22ee17}} , {{formula:d4d3f55a-7f92-4775-a8b4-e871bb6439e9}} for a sufficiently large {{formula:e3d9a4cb-1ef7-46c9-999c-2e7d93da4e5e}} . By applying Proposition REF (i) on {{formula:9b08c625-691b-484c-a993-6d3b871273fe}} , we obtain {{formula:e641a44e-b171-45db-a7e7-6563e39aa713}} By summing up Equation (REF ), (REF ) and (REF ), we get a contradiction with Lemma REF . This completes the proof of Lemma REF . Proof of Lemma  REF In this subsection, we prove Lemma REF , which lets us convert a protocol conditioned on an event to a generalized protocol with bounded costs. (Lemma REF ) Fix any {{formula:e7d77349-72ad-42a1-91df-907855758ee8}} . Let {{formula:36667121-b8c2-48c9-929f-8938e2d61f97}} be an {{formula:76340d67-21cd-43e6-9d69-179d16c27f8b}} -round generalized protocol over {{formula:8d272d1a-46df-4c64-a69b-1be5c2809f29}} , {{formula:07e84ab4-d635-402e-a557-a4760672241e}} be an event, {{formula:68405a88-97ea-4e7a-9728-047a03ffe572}} be an input distribution and {{formula:3717e3b3-d245-4ce8-87a1-b1edc32fc9c3}} be a function of the inputs. 
Then there exists a partition of {{formula:7df41f83-460d-4e85-bb33-bc9770a330ef}} into three events {{formula:165f6f5e-b314-442d-92bb-5a24e9bab6e3}} and a partition of {{formula:43e8c652-d704-4140-94b8-fba0c955e907}} into {{formula:14fc246e-8804-44ce-a059-63447728f94b}} such that the following holds: all three events {{formula:94df8c55-eefd-4d83-bf82-4e59676b40ee}} , {{formula:c5ef77c9-be30-4f9a-a31e-d0fb9fc307dd}} , {{formula:b9314fe8-74a2-4c41-8195-c852f67040e3}} have the form {{formula:4f7d371f-3b88-460a-90e7-955588319231}} for some {{formula:f5a65cdf-e733-480d-ab3d-712c8b269d38}} ; {{formula:cf5bf700-6360-46f8-8ab7-fbe8d9d73d46}} ; let {{formula:5d0e0155-2f01-4334-9827-35c974a3cf10}} be the protocol {{formula:8d680f88-592b-48ef-a963-a16d8beaacbc}} , {{formula:d4b5e4a3-ee80-4f36-acf3-766f03dedc4d}} and {{formula:2aa8a032-70f8-4039-89c6-2c8c1d2bd7d1}} , then for {{formula:717f1c06-e2e1-44c3-a1ba-a2230f4e3a6d}} , {{formula:343f2d2c-b57c-4c57-bc4c-be0ec7f8986a}} for {{formula:d745c90b-c0f6-4abc-a8f2-324e43ed2734}} , and all {{formula:0b251b17-fce3-48f9-af6f-c559e1133f25}} such that {{formula:db2dbad2-ed29-49d1-89f7-ed69e399882d}} , {{formula:b6d9cbb0-94fd-434a-b6fc-36fe7590f1ca}} Ideally, we could simply let {{formula:95bf73af-dd90-43a0-9f72-f7b4c8e4b783}} be the intersection of {{formula:9545a4f5-02cd-476b-8880-01007b8821ab}} and all {{formula:ceab59e8-dc5a-4d00-a883-582e6dec107c}} such that {{formula:e79bb3fb-d73e-460a-a504-aa55f30b4ad4}} , and {{formula:e67886a5-0be6-4b23-98f7-ba527b650987}} be the intersection of {{formula:49fdeb61-e716-44df-854f-5c4bf2685444}} and all other {{formula:ce487d6c-1151-4f35-b249-82b27b9c6d3d}} . 
In this way, the last line of the lemma holds, since {{formula:4fa183a2-0fd3-4c84-85d6-a6ef8c6927c4}} is a set that depends only on {{formula:054aab44-cccb-493d-a7bc-84d4e27f4fe5}} and is a subset of {{formula:a353b042-ac49-44c7-99be-47cdcba01142}} , {{formula:67418750-459b-46d7-81bf-24aa03aee5de}} However, the new protocol {{formula:0fc4b754-4018-451c-9438-6e6ad508a92d}} (for {{formula:fde9d419-3fdd-423c-8998-4f782a4702fa}} in this case) may not have low costs, since the denominators in the definitions may become arbitrarily small (recall Definition REF and Definition REF ). To ensure that the costs of the new protocol {{formula:261ad854-261d-4474-bcb3-dc07b2d21531}} are bounded, we will identify all {{formula:1d3638ee-7a60-4bb8-886b-1850e88943d0}} and {{formula:d4f07541-af44-4c6d-9b36-7c561c088adf}} at which the denominators in the definitions of {{formula:924168e6-44e2-4622-9926-f243075498e8}} -cost and {{formula:6609404b-56a3-44c1-9f89-db3865a2e2d5}} -costs become much smaller, and repeatedly remove such pairs from the support. More specifically, we repeatedly remove from the support of {{formula:a80ab5aa-e1c2-4057-8e31-e6d49644a4f8}} all {{formula:51f959b6-c8ef-471b-bd9f-18352cd58b69}} whose probability becomes much smaller after conditioning on {{formula:0e1c34e2-cf70-4895-b9c5-605dabd0337e}} ; we also remove pairs {{formula:fc3143de-3f69-4a7c-b3e4-b74c8330529c}} [resp. {{formula:98b858f5-cc90-4d2a-8ce7-fb4e95a0d3d3}} ] such that either {{formula:082caef0-0495-4368-8ee3-c24a111e536f}} or for some odd {{formula:6144e068-2682-4711-bdac-4fa9cc3d1eb6}} , {{formula:ecb32d4a-b81e-499e-a1f3-ea632116b686}} [resp. {{formula:6aca082e-8f0b-4c57-81c6-a11f030daf46}} or for some even {{formula:90c2dee8-367c-4150-b1f2-847474d12d3a}} , {{formula:88bcf323-ace8-4edc-8854-892c2735b1ed}} ] becomes much smaller after conditioning. 
Formally, consider the following process:Note that {{formula:53ae4ce3-648b-4f65-bffb-c8acc801a00e}} may not necessarily be a subset of {{formula:ef28af7e-a485-4ed7-9d54-a505befe3843}} , it could be any event. {{formula:2677d3f4-5a3c-4375-9155-089c0176549a}}       // the current {{formula:5bc0b7b4-72c6-49df-af96-7f6345080b55}} {{formula:d76eae19-12c4-49a4-9d40-c435cae0398d}}       // the bad {{formula:c24090ba-466f-4927-a643-2a8916c16ad7}} pairs {{formula:546b1844-1af5-425c-b4a0-b2115a61d764}}       // the bad {{formula:788d3ced-8b9e-4794-9097-78015d26b767}} pairs repeat       if {{formula:d317aaf7-86b1-4721-95e9-4d5e86083bf7}} such that {{formula:72e04477-3845-4138-ba56-be87072c27cd}}             {{formula:b11f3569-6a14-4dd5-a392-673e637a8993}}             {{formula:ff6b6565-4a47-45bd-8e56-76684d31a6d4}}       if {{formula:5ab4236a-5a5b-4e50-83fb-fa2b8e7770d6}} } such that {{formula:07e335bb-0558-407c-89dc-a2d3b1e5694b}}             {{formula:06620f74-9987-43a9-a481-960afd93b83d}}             {{formula:164e049e-a7fe-4193-8536-9ffb5a78b37d}}       if {{formula:31525bb6-2fa6-4f3f-a38e-04e5bc805653}} } such that {{formula:a9550f5a-da9f-4689-a3f3-e8794dec7a7d}}             {{formula:fdb18a59-ad41-420c-bfe5-5a8cb9966dc1}}             {{formula:63a5f031-d27b-4122-a696-2a0944754af3}}       if {{formula:6fc2e663-16e7-4c0d-8aa3-c95c12831987}} such that {{formula:f9736e13-0ae8-40d9-88cb-78ab2e6ba562}}             {{formula:a110d6ed-45cb-495b-ac37-d04c4ec32997}}             {{formula:5a8dd608-5805-4d80-b642-9944e4610ec2}}       if {{formula:45a7af1c-8e04-4da2-b924-b6b5672989e5}} such that {{formula:bf910b21-5cf6-4f2b-b68e-1563ea8942fb}}             {{formula:b302ba97-0cbb-4775-b628-1394786d3e83}}             {{formula:79bbf9d0-d0ef-4d27-a0f6-712d3b455ece}} until none of line 5,8,11,14,17 holds {{formula:9fdead4b-d418-4e1b-8221-a3e5261a2ccb}} {{formula:78b67e13-0390-4206-85a0-29dc687b1db8}} {{formula:8005f399-bb40-4c00-9e6f-88dd690ef3e1}} 
{{formula:cfa39e82-5bd0-4e1a-ba9e-4c0330b7015a}} {{formula:25ef1683-b2f2-4b67-8d59-a6d9c07a3b7d}} {{formula:87457826-7d64-44ef-bb21-0b4e9b391616}} {{formula:0ec40746-5ee5-4e2c-bced-ceba2d05dfab}} return {{formula:51de0764-499a-4f6a-bc87-55476ab35836}} By construction, {{formula:3189876c-3c24-487a-b1e5-8cc317ceecf2}} is a partition of {{formula:a2adb4e3-fd75-4a00-8dff-ed259d7ba215}} and {{formula:342c2b13-9f4c-4f8a-a3aa-2759f59e6091}} is a partition of {{formula:51f3b1d5-b268-4d1a-a0f2-3d38150a7a8c}} . To see that Item 1 holds, note that the set {{formula:94257714-dfef-4608-a05a-8a1c523f0826}} and its complement are in {{formula:c379c546-5fc0-490a-a710-b6f914e23575}} , the set {{formula:eb889c27-2294-4e9a-a91f-8488f807f25a}} is in {{formula:e857a10c-c456-4b8f-80f7-e2f4d9e77b81}} (recall Definition REF ), and note that {{formula:1da45731-0b96-4778-b519-f4baebed19de}} is also the set {{formula:4d0f1b94-5a28-4bbe-b0fc-aa57805e85f4}} . Thus, {{formula:fd2e82a2-ce73-4c4e-accd-f0aa6f07ea2d}} have the form {{formula:48b2df3d-6aec-4469-a57a-f6d5a87b85c8}} for some {{formula:f2c9ad2a-9052-4702-abf3-ba0cbc1c3b0e}} . Since for {{formula:d7b84851-3fe3-46e6-89f7-df4ad85f2ae0}} , for all {{formula:d23c2ff8-d4f8-43e6-8fcc-ce36c464c134}} such that {{formula:fdab9e08-d0b5-4609-8098-4a829acfcaa4}} , we have {{formula:9225dbd0-c4bd-481d-9c12-19ee79519571}} Item 4 also holds. It remains to bound {{formula:15015551-c1a9-4522-87ab-dae017630ca3}} , and bound the costs of {{formula:2c2a4e0b-91b1-4257-bc0a-2529f1ed5f03}} conditioned on {{formula:fb36ca4b-e2a6-4f6c-bae5-d31ca2faa7b5}} . Claim 42 We have {{formula:94bd1f86-5e0d-437e-899f-2d0165bf5bee}} . To see this, first observe that by construction, {{formula:97d5afe2-af66-4c8d-8f94-ca9e6dae5d18}} contains all triples that are “removed from {{formula:3f8568ad-096e-4b5e-9345-4bd076b76448}} ” in the whole process. 
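The accounting in the rest of this proof follows a generic pattern that can be sanity-checked in isolation: repeatedly delete a support element whose current conditional weight has dropped below a small fraction of its prior probability, and charge each deletion against the prior. The sketch below is only an illustration of this pattern (the names `mu`, `w`, `delta` are hypothetical, and it collapses the five removal rules of the process into one); it is not the construction itself.

```python
def prune(mu, w, delta):
    """Repeatedly remove an element x whose current weight w[x] is
    below delta * mu[x] * W, where W is the total remaining weight.

    mu : prior probabilities (think mu(x))
    w  : unnormalized conditional weights (think Pr[x and E])
    Returns the surviving weights and the total weight removed.
    """
    live = dict(w)
    removed = 0.0
    while True:
        W = sum(live.values())
        bad = next((x for x in live if live[x] < delta * mu[x] * W), None)
        if bad is None:
            return live, removed
        # each removal deletes weight < delta * mu[bad] * W <= delta * mu[bad] * W0,
        # and each x is removed at most once, so removed < delta * W0 overall
        removed += live.pop(bad)

mu = {1: 0.5, 2: 0.3, 3: 0.2}
w = {1: 0.40, 2: 0.25, 3: 0.001}   # element 3 shrank a lot under the event
live, removed = prune(mu, w, delta=0.1)
```

Here only element 3 is pruned, and the removed weight (0.001) stays well below the worst-case bound `delta * sum(w.values())`, mirroring the "can decrease by at most" bounds established below.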
Let us first focus on step 5-7, and upper bound the total probability of all {{formula:8af47f85-f82b-43ea-8413-cbff558aef51}} that are removed in step 7. Observe that each {{formula:7f825aac-c1a4-4785-b916-4471a88d9781}} can only be removed at most once. Each time we remove a {{formula:64126f25-d2e2-4ffe-9fe5-5cf0f186035f}} , {{formula:f59f9f3f-aee9-428a-85b4-e71e5878e0bf}} decreases by {{formula:539ea93e-390c-475f-a5c1-48b1e15e8a3e}} (for the {{formula:e2efae85-361b-4f99-8435-0714a6c0991f}} at the time of the removal). Since {{formula:a72dbd3c-611d-4205-b3ec-291ccfbf0526}} at the time of the removal, we have {{formula:caba4c83-5586-4e06-80ba-51efd2a6fb5e}} Therefore, during the entire process, {{formula:293f864c-1735-4a95-8b1a-725bc25bf79e}} can decrease in step 7 by at most {{formula:5c83dac0-4cb6-490a-afe0-06dcd142f08d}} Similarly, in step 10 and step 13, {{formula:e3371015-4ae9-471d-8b6d-7bb5c5229986}} can also decrease by at most {{formula:4d0dac5d-2999-47c5-b3cb-aa1bdf7b2251}} respectively. Next, consider step 14-16, and fix an odd {{formula:f79bd185-0691-40f7-9bb3-82a59833ccae}} . Each time we remove a pair {{formula:1b19eede-955d-4ab1-9bf5-f7627850b8c3}} , {{formula:db2b552a-8eb8-4308-9bb0-a5a6df3ad515}} decreases by {{formula:7e76a5b7-c568-4ee2-a445-68b8b2bd55c5}} . Since {{formula:a984e58c-7155-43bf-ba64-acc8608dac0d}} , we have {{formula:f5c1ec3b-0912-4fcf-a681-3a45d9ae7393}} Therefore, during the whole process, the probability {{formula:653c3e14-bfab-439e-b15c-7af09ccca7e8}} can decrease in step 16 for a fixed {{formula:d6674cd1-f0dc-4e0b-a699-828fe03a9357}} by at most {{formula:abfe0e6d-312d-4ba8-a027-9940bd8fd851}} Similarly, {{formula:2a0766a2-b8c4-445d-8e13-bf5380a24853}} can decrease in step 19 for a fixed {{formula:06d27abe-db3d-44fd-b743-753612f191fd}} by at most {{formula:b89f6bd1-d161-412b-a669-9506ee08a0d2}} . 
Hence, summing over all {{formula:690ad60e-0ec5-462c-a840-37254760827f}} for step 16 and 19 and over all steps, {{formula:ee8533e9-5f34-4540-872b-e9b5b570e399}} decreases by at most {{formula:cef425bb-5c45-49de-ab78-64c7d0cd2b5a}} , i.e., {{formula:904c9d34-a34c-4e4f-8cdf-079f2824347f}} , proving Claim REF . Equivalently, {{formula:057893bd-6c32-497c-8574-841b2c356b5d}} , and {{formula:05864db7-8989-4a55-ab61-2f4623d25747}} . Hence, Item 2 holds. Finally, it remains to bound the costs of {{formula:d85e1b6f-c758-48b4-99ae-7fd3fb55f566}} conditioned on {{formula:fa845f95-643b-4aab-a6d4-cdbe91bd91b9}} for Item 3. Since {{formula:b102e685-4494-4400-a67b-e34fde12dfc5}} is the final {{formula:700c3ae9-ccd6-4111-86f9-de126e3f829a}} , which passes line 20, {{formula:884bffac-2f1b-422e-915b-b367f56b972d}} must satisfy {{formula:65e75c00-c556-4198-8099-4edcc744c237}} for {{formula:2c1f2cbd-d76a-4202-bd38-79f5ba764323}} with {{formula:9cf27b08-a17e-4ece-b218-8d3e82bd45ad}} , {{formula:99ddb0cd-fe79-4591-bd65-b934f1e512d8}} for {{formula:e088342f-e8ac-413e-98e8-9449b1a2f4f9}} with {{formula:0212d13a-e47b-404f-9635-0bcf0395293e}} , {{formula:20d69ebf-c2a7-4517-bc95-b72075d6e383}} for {{formula:e9cc03e8-af47-4cca-80e7-b489aa4c9ef4}} with {{formula:b2e72ce9-a480-47c1-98ab-3317c8dd27d2}} , {{formula:79f627b9-94d0-499f-a276-47b0fb2bdcf8}} for all odd {{formula:0a56cc20-8c0f-45ee-af78-7c0b00c84a4b}} and {{formula:5f1d594c-852b-4d1b-a8ee-fdfd9cf63bc3}} with {{formula:c8c19c34-8d75-40fc-ab4e-452a774b8eb3}} , and {{formula:a40c1e79-60c1-413f-b77f-e4558add77b3}} for all even {{formula:f1fbc81d-14a8-4c76-8905-5d2e37d7052c}} and {{formula:dc60802d-ac14-4f7d-91d7-5b19f80825b4}} with {{formula:ee0691b9-5f90-4b2d-b90c-c33de7802bd8}} . 
The {{formula:d197b183-d9a6-463f-bbc5-63c2fc602fef}} -cost of {{formula:49823371-04f2-4b67-a501-1e90deb7e61e}} conditioned on {{formula:a914bedb-f684-4537-80f4-347093a8f9c1}} is {{formula:9876bd4c-6232-42b3-bbb1-1ee99a917a0b}} For the {{formula:a5e4c644-7bfc-49aa-9cea-60a64daf09bf}} -cost by Alice, we have {{formula:d84004ff-f337-4e85-94eb-f4c5dfe2fcbf}} Similarly, {{formula:b9f5554e-a6e3-4cd7-8e22-ab3ad6f6b0b3}} . This proves the lemma. Compression of Generalized Protocols: Proof of Lemma  REF In this subsection, we design a standard protocol with lower communication from a generalized protocol with low costs. We will use the following lemma as a subroutine, whose proof is similar to Theorem 4.1 in {{cite:1d6cf6ed23926cb5d5f384337af2f6ff46b1fb69}}. The lemma lets the players sample from a distribution {{formula:31b065f7-16e8-4285-ad4c-b6207d03e7b0}} with low communication, where only Alice knows {{formula:86fbcde9-55db-4cc8-ac75-a23314654693}} , and Bob knows a different distribution {{formula:2ea35e1d-9cc1-469d-a8ac-43f549dcc7d1}} . The success probability depends on how “close” the two distributions are. Lemma 43 Let {{formula:60da85e1-db1f-4d3b-8fe4-56e7d7f2ad10}} be two distributions over {{formula:51a3ba8f-eae8-44e3-b5be-c3e594ae3da2}} , such that Alice knows {{formula:8ef30212-b9bc-4ff0-a73e-24ba86f1165e}} and Bob knows {{formula:8843adb4-ee85-48cc-9a6b-fbcc668b05ff}} . For any {{formula:1132feb3-df50-4157-aada-72208147a616}} and {{formula:ac0c22f9-0f8b-4603-b47b-c9b92dc4171b}} , there is a (standard) one-way communication protocol with shared public random bits, where Alice sends one message of {{formula:cfe8a616-593c-4020-934c-c954ad67419b}} bits to Bob. 
Then Alice and Bob simultaneously output an element in {{formula:7d3ed3c8-14f8-4522-b187-8537added3e0}} such that Alice outputs {{formula:a0dfbb9c-8aca-498d-bc66-ce122205dcef}} with probability {{formula:f6537f22-04a5-4497-8a66-f8cf72c4a7cb}} for every {{formula:1050a90a-5981-4647-a515-57498912f937}} ; conditioned on Alice outputting {{formula:25687ce2-d0c5-4e8f-82fd-3ef196166831}} , Bob outputs the same {{formula:dbd04147-ab7e-4909-a21a-9612a43081af}} with probability at least {{formula:7859a018-fd4b-4b9c-8fef-7b2510888915}} , some different {{formula:3c68d886-6e38-4efb-a3b5-637d27fd9839}} with probability at most {{formula:3f306e0f-8b64-4a38-8049-bef1610b50fb}} , {{formula:13a0b5d9-58ee-4817-9b58-33bc1be08484}} otherwise. In particular, for each {{formula:bbb37a2e-3de0-49a9-a3a4-a2539a9edce5}} such that {{formula:c0db52a8-2fa4-49bd-827c-0dff8626656b}} , the players will agree on {{formula:6dba80fe-3dd6-494a-a3b0-8351266a7b8a}} with probability at least {{formula:d9f80e96-caaf-49ec-a99e-10b5f2ca7922}} . Let {{formula:871dd352-d270-4968-b73f-5c27cb3503ff}} . Consider the following protocol. 
sample[Protocol][(P; Q)] Part I: Alice samples {{formula:06508be0-0c4f-4fba-a689-117eac4d5ae4}} Alice and Bob view the public random bits as a sequence of {{formula:0fcf8b0b-8275-417e-92c1-2971bbf7aaaa}} uniform samples {{formula:edc0e0f0-04c1-40d4-87e1-da23734e2d39}} in {{formula:a135c88f-cee6-47e9-8577-376121eb9f3b}} Alice finds the first pair {{formula:39d28300-661f-4d71-be0e-f6f940dea9d5}} such that {{formula:54d8506f-46e5-4085-a7c3-c77142737724}} if such pair does not exist       Alice outputs an {{formula:f1b121f4-f5a6-409f-8386-da9bbc5459a9}} , and sends “0” to Bob // “0” indicates “fail”        upon receiving “0”, Bob outputs {{formula:cf06b33a-32eb-45f4-a0e8-0af1714ec947}} , and the protocol aborts otherwise, Alice outputs {{formula:57d142b5-482d-4ae1-8fe9-7d778c3c633c}} , and sends 1 (to be cont'd) So far, the protocol describes how Alice samples {{formula:f2a7cd1d-4613-4f43-af60-a195ddea5b38}} . Now, we show that Alice indeed samples {{formula:48a53dbd-1f56-4e98-a84f-23b365ea695d}} according to {{formula:2daf1bdc-f8df-40e8-8972-b126115db8ce}} . Fix {{formula:23c24972-375d-4453-9bea-74ae5f39da8d}} , for each sample {{formula:423dc460-2d1c-402e-9d45-b17f6ff1e302}} , the probability that {{formula:6376c7b5-33df-4479-ab9f-e01ab54d4adc}} and {{formula:921c9c19-80ff-4afc-aab9-66a1ed33bafc}} is {{formula:6c128191-a2e8-4677-bc53-ad75b3e59b27}} Summing over all possible {{formula:3576cda6-12a2-459a-bbf8-0739e2487cbc}} , the probability that {{formula:65a8d4af-c60e-4282-8645-17e24c12b9a2}} is equal to {{formula:d3d93260-4fa4-47c0-9796-224c8666e18d}} . The probability that Alice outputs {{formula:c48a9ea1-ef50-42ed-8259-dc8a5bc9a611}} is {{formula:7de04d4c-bc4d-42ad-a96e-8b2fdfa7ba33}} Note that {{formula:407b98dc-31ae-46e2-b05e-c02fd0b14de7}} . 
The probability that Alice finds a pair in step REF is at least {{formula:afa38d66-1059-44bc-9b5d-3070bdcd3fa1}} , and conditioned on finding such a pair, {{formula:800e47e9-0dcf-49d4-9995-fee1d69d79cd}} is distributed according to {{formula:2d6042b6-fd7c-4ebe-9572-dca16a460e58}} . Next, Alice tries to inform Bob by hashing the index {{formula:9e5c62d6-d30a-4ae9-a785-5dcb112fa41f}} . sample Part II: Bob outputs {{formula:3ee2536b-92f9-412a-8a07-abe4473460e7}} the players view the remaining public random bits as a uniformly random hash function {{formula:e7e78541-2b19-497e-ab1a-97a42227798d}} Alice sends {{formula:2219b7b2-7508-4b86-81fc-5b813f3844ae}} to Bob upon receiving {{formula:f198b61b-9dd2-4530-928f-ec7c5a5304bd}} , Bob finds all {{formula:b6ea7972-0416-4cfc-a515-fb3a8287b738}} such that {{formula:b01f911e-328e-453f-8f26-81348697fbdb}} and {{formula:fe120eee-1bb4-4eac-9b13-6261eedef4f2}} if there is only one such {{formula:49c15e17-3358-4b00-973a-a9a8fd467753}}       Bob outputs {{formula:b3d23593-d9d7-4710-a634-489de5f1c688}} else       Bob outputs {{formula:d52f9612-1cde-49c6-9c02-2a0748dd68b8}} It is clear that the communication cost is at most {{formula:f3c74cac-a3ed-4d08-9933-9decf3780825}} bits. Conditioned on Alice outputting {{formula:e2e56550-23a2-4571-a2d3-8a2b69effc6a}} , {{formula:2ad09649-9d01-4dce-b14d-715a819ace71}} is uniform in {{formula:28278bc8-6315-4a68-a240-007c5a382ec4}} . Hence, the correct {{formula:8b1f0548-9591-49ef-bce6-f617fa80cc18}} will be found in step REF with probability {{formula:161e3fbe-d8ef-4be9-b043-b3796a8344c4}} . Next, we bound the probability that Bob finds any other {{formula:4f07924c-f9b5-476a-b7e3-31bd6aba7d49}} , conditioned on Alice outputting {{formula:596ca91f-3130-42eb-b9e4-8320d73c6f17}} . 
For each {{formula:b26413b1-38d2-43e6-a8c2-b81666f5cc16}} , if {{formula:f72294dd-0d50-4402-9869-c76dd87bf891}} , the probability that {{formula:69d5ce62-68ac-4a84-b899-5ffbd24a90fd}} is {{formula:14d85246-ddd7-4c93-af4e-d6876a01a968}} . If {{formula:2ef69856-9406-4d06-bbac-71994771d91c}} , conditioned on Alice outputting {{formula:4eb66914-745d-40be-8665-d2d9f6b8b7f6}} , {{formula:1978c440-3cb9-477b-a8be-d3eb7c370b99}} is uniform in {{formula:94426a9e-5d3e-49b8-95bd-ef1c09dc85d6}} . The probability that {{formula:4971fdef-ad32-4748-b006-95195d459005}} is {{formula:f8546176-76e9-40bd-b1a9-1309b23578d2}} Independently, the probability that {{formula:603fdb4c-2d07-4d79-9225-f1101d20263d}} is equal to {{formula:a9e3cdfa-3614-45e6-80ea-f969b06f4a8d}} . Therefore, the probability that {{formula:6abc343c-0b60-4b0a-8999-e6012621c686}} satisfies both conditions is at most {{formula:637fafcc-d107-4eae-a1e5-6336b5cce78c}} By union bound, the probability that any other {{formula:8253c39b-4cf0-4185-8b5f-625a2027718a}} satisfies both conditions is at most {{formula:b28f9d70-4389-4d77-af40-b3b1c3e623f8}} . To conclude, Bob outputs the same {{formula:c4c88ff5-461b-4353-b1fd-17c9d22337a4}} when Bob does not output {{formula:55c43f9d-c513-4cb8-8bcf-729db9f15de9}} in step REF (with probability {{formula:e645dff0-130d-4178-99ab-84fa7fda4c38}} ), and Bob finds the correct {{formula:8b009f72-23d9-4590-b47e-cc18d5cc1e59}} in step REF (with probability {{formula:53970f49-f817-41b0-a73d-6fbe3b1eaa62}} ), and Bob does not find any other {{formula:3a70e62a-efb6-49df-9609-cbdc8f74bf16}} in step REF (with probability {{formula:566e7caa-94aa-499b-8aaa-53c20f60e926}} ). By union bound, Bob outputs the same {{formula:47154325-a230-4f42-9848-46c2ad29aeb0}} with probability at least {{formula:c5e3e3fa-a7ab-42ff-97e3-24dc7842759d}} . 
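The hash-based agreement in Part II can also be checked numerically. The helper below is hypothetical (it tests hash matches only, ignoring the second condition Bob checks on each candidate): it estimates how often an `ell`-bit public random hash of Alice's index isolates it among `k` candidates, which should be roughly `(1 - 2**-ell) ** (k - 1)`.

```python
import random

def hash_agree(k, correct_idx, ell, trials=2000, seed=1):
    """Fraction of trials in which an ell-bit random hash value of
    correct_idx collides with no other candidate, so Bob would
    identify Alice's index uniquely among k candidates."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        # fresh public randomness: an ell-bit hash value per candidate
        h = [rng.getrandbits(ell) for _ in range(k)]
        if [j for j in range(k) if h[j] == h[correct_idx]] == [correct_idx]:
            ok += 1
    return ok / trials
```

With `ell = 20` and `k = 100` the agreement rate is essentially 1, while with `ell = 1` and `k = 4` it falls to about `0.5 ** 3 = 0.125`.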
Bob outputs some different {{formula:a7368cb5-aeec-4791-8809-7b27a189cf60}} only when Bob does not output {{formula:46f1e2de-e226-4978-bc84-0eed184c9f49}} in step REF , and Bob does not find the correct {{formula:2695e44b-b691-4c9a-985e-f555ee09b45f}} in step REF , and Bob finds some other {{formula:6c3c1671-640d-4e51-b3bd-332443ada419}} (and {{formula:675adc7e-24d3-4fdb-a9fc-3f4ab00121cf}} ) in step REF (with probability {{formula:8809f02e-9d66-4223-ab77-7b236aface9a}} ). Hence, Bob outputs some different {{formula:a1c135db-b60c-444e-b0a2-b50763fbc9a9}} with probability at most {{formula:d87dbcf0-aaaa-90dd767d1ee3}} . Otherwise, Bob outputs {{formula:a25a34aa-40e9-4f84-99d2-b84d917b4d48}} . This proves the lemma. We will use the above lemma to sample messages {{formula:715ab150-02c7-4b0d-ad1c-f5284a08f9ee}} given {{formula:80ad7826-ccaf-4f87-b8f1-3c0e8e0a1fd9}} . The next lemma proves that most of the time, in Alice and Bob's view, the probabilities of {{formula:c1bbe74c-a363-431a-b8dd-f7e97e51a819}} are not too different. Lemma 44 Let {{formula:bdcb37c2-7a07-4f3e-bd23-f9c70082bbc6}} be an {{formula:9976e831-b187-4f42-9ace-118563e49ee5}} -round generalized protocol and {{formula:e9e0dcd9-d0c1-44fa-b747-2153f0be1343}} be an event such that {{formula:d1fda91c-675a-4fb9-8c8c-8ad9d9e665ec}} has the rectangle property with respect to {{formula:fc3a802d-43ba-4c29-8ba2-0a19ef7399a5}} , and let {{formula:2827470e-0e9f-4517-9a89-5652b91077d6}} . Then for any {{formula:e8c87632-3df3-41ca-a217-6ebceee19502}} , the probability that there exists an odd {{formula:b8de387b-dbb1-4722-969b-4f19ec49ded8}} such that {{formula:665b2e12-2461-4545-8c60-1d13a7a86d19}} or there exists an even {{formula:1518c1a5-42c5-4586-87a5-6334d5259b2b}} such that {{formula:5d8a791c-e855-46d3-8491-3f4fcb8144ef}} is at most {{formula:a8ff4e6b-5041-4d24-bb12-181f0bb45571}} . 
We first fix an odd {{formula:d5890336-26b4-4ce2-b0c6-a03a8779620b}} , and upper bound the probability that {{formula:4cb6f12f-c37b-4481-845d-3fbaad9b5761}} . Recall that {{formula:c3a1372c-e3b0-46d3-895a-83b58b097efb}} and we have {{formula:1cf4605f-1a67-474d-a441-011255321f79}} where {{formula:1e69a0a4-6ba8-41c1-bf90-ed2aabfb464e}} denote the four fractions in the parenthesis respectively. Note that the fraction outside the parenthesis is what we want to upper bound. We now show that {{formula:2daa10c0-6fbf-48bb-af49-cf56145af1f0}} are all not-too-small with high probability. For {{formula:9a9be3a0-fb3e-4f2a-bf64-8dfd6403c53b}} , we have {{formula:b6d636ea-618f-4063-bc1e-778f9d520e04}} Similarly, we can show that {{formula:a5dc220b-8d73-4faf-82e0-8dd3b01fc022}} and {{formula:aa123454-29a3-4e34-82db-b948ffa584ab}} Since {{formula:7c6fc9fb-abf3-4ada-9c6f-b8c40959631c}} are all nonnegative, by Markov's inequality, we have {{formula:674d5577-c3ae-4c1f-a01d-4241f1d2dc2f}} for {{formula:935b5a71-e685-45f7-9aa6-963f39043349}} and any {{formula:11b3506d-eb95-4857-9e12-0f8315d95817}} . Thus, {{formula:11d3edaa-ebb9-43af-b4ce-58fe1565d8e4}} . By union bound and plugging into (REF ), we have {{formula:2c37bbd9-39c6-4299-adc2-8d1311612761}} Thus by union bound over all odd {{formula:97d10c65-2451-4852-878d-59e9057c9563}} , we have {{formula:94602b6f-d04a-4eb3-8d02-095226b50a28}} By Markov's inequality again, we have {{formula:2bee9484-9cd7-43c2-adb9-4bd36cd4739c}} Combining the two inequalities, we obtain {{formula:e09b9a8d-fef9-48d2-89c4-19956dab854d}} Similarly, for even {{formula:6cb13a49-7456-43fd-9ef2-5fb7ad703992}} , we can prove that {{formula:f9b678bb-e62d-4ca9-a448-d9b17418f53f}} Finally, by setting {{formula:ac8689ab-743a-4d0c-92b9-abd814a13132}} and applying a union bound on the odd and the even case, the probability is at most {{formula:4aa32513-623b-48d8-92bd-5d2d6104e19d}} This proves the lemma. We will also use the following lemma in the proof. 
Lemma 45 Let {{formula:d1ae7445-b013-4c01-a5e1-11c5902773f7}} be an {{formula:73bfde25-5df9-4fb3-b8d5-ca985c4088a0}} -round generalized protocol and {{formula:e54f19b7-3366-46b3-b005-3a817a8da47d}} be an event such that {{formula:22562131-bb4a-4390-90f2-5fd1c68bacd6}} has the rectangle property with respect to {{formula:faac955f-0aa8-4a08-b450-055bd3fa973e}} , and let {{formula:6485f2d9-1e4f-455e-b583-7f74569a84d4}} . Then for any {{formula:4e25b2fa-5401-4f8f-b2ea-014ce7f08851}} , the probability that {{formula:f37621c4-c81f-4d09-bf0f-1c1e923253f8}} or {{formula:c1aecb89-db9e-4330-9c41-eaeca4068a6b}} is at most {{formula:f077fbab-150c-4ec0-b392-9bc8b7b889d5}} . The first half is bounded using an application of Markov's inequality and the fact that {{formula:86099275-be88-4693-8a51-bb2bc6e301ba}} : {{formula:4a3cbad5-b40f-46fd-8da7-cf79d3cf2106}} implying that {{formula:92091b90-4006-40c7-9c29-a74574302dda}} For the second half, similar to the proof of Lemma REF , we have {{formula:0995ecd1-ac34-4b15-b4b5-ebd9ac1529b7}} Hence, by Markov's inequality, {{formula:2e246ef6-1b3c-4824-97e5-f9825b789581}} We prove the lemma by an application of the union bound. Finally, we are ready to prove Lemma REF . (Lemma REF ) Let {{formula:25332caa-6cf5-45b5-96e6-5f74ac29b634}} be any fixed parameter. Let {{formula:33d4a6c2-22c4-4ca5-a6e6-35645d635793}} be an {{formula:1f23aa7b-1ee3-45fc-9981-ea3328248fda}} -round generalized protocol and let {{formula:610ecc6b-b6e1-4f17-9ddc-29a7b11c0d36}} be an event such that {{formula:86812113-1942-41da-af8e-3704549fa469}} has the rectangle property with respect to {{formula:ae5a9361-91c1-426b-bdfc-6d526366ca2d}} . 
Then for any function {{formula:a3781763-f7b3-43c7-8b9f-8fb8efd3f028}} , there is an {{formula:b44a60a4-6a71-4a96-b112-1cabfc310d80}} -round standard protocol {{formula:1b7fc1bd-922c-4b26-83a6-a7592c80b5c8}} such that in odd rounds of {{formula:8a931115-eb99-4bf8-8c14-08fb2c0c0214}} , Alice sends a message of at most {{formula:2c86380c-c85a-4c20-9535-30edd8779aa7}} bits; in even rounds of {{formula:eb68fea7-26cd-4b00-a716-d3da2193bc75}} , Bob sends a message of at most {{formula:d0d6045b-70f3-4c4d-8475-01a7be75660a}} bits; {{formula:68263460-cc60-41c5-90cc-3ba149eb406f}} computes {{formula:eae65eb2-71d4-47b6-8efc-1b7232bf2d07}} correctly under input distribution {{formula:12e9b983-d703-46e8-a155-325245ae3f6a}} with probability at least {{formula:14c3aacb-f8aa-4578-8aa0-45ea5996618f}} Let us first consider the following “ideal protocol” {{formula:5e4e9b58-72de-476c-a0ce-9b005d5bed27}} that cannot necessarily be implemented in the standard communication setting. But we can still analyze the probability that {{formula:ddf5dfa2-54e5-4418-9e20-79a0945ebcbe}} computes {{formula:849c9b92-e736-4c44-b3e7-d59f0fffa032}} . Then we construct a standard protocol {{formula:f4cc9b01-2d39-48a4-8e66-0541705ddbee}} with low communication and statistically close to {{formula:167bde99-0077-4ec7-903f-6bcfcbfb1a00}} when {{formula:356529a0-fbd7-4274-8f25-ad518622dac3}} is sampled from {{formula:222cfb19-3c92-40a3-9393-d8fe1e6c6702}} . The ideal protocol consists of two parts: In the first part, the players generate a transcript {{formula:7154b4c5-d166-43c7-8aa2-eb06bd57b9f9}} given the inputs {{formula:34c81c32-0111-4c13-a044-bb05c2582e89}} ; in the second part, they use rejection sampling, and accept {{formula:498c9642-3a26-47d4-80e8-fd4a8123b764}} with some carefully chosen probability (and output a random bit if they reject). 
ideal-tau[“Ideal protocol”][(X; Y)][*]

Part I
      Alice and Bob use public random bits to sample {{formula:60131e94-8f15-4d6f-89b5-1b7beb0fbc9c}} from {{formula:244dd4dc-28fd-4dbc-933c-d7443f148695}}
      for {{formula:1deb79a9-a578-441a-b512-a8bd216021e4}}
            if {{formula:e1639eee-ab6a-448a-b100-663bce45d4a2}} is odd, Alice samples {{formula:d026e31c-9883-4f13-bfc0-c97020eb355c}} from {{formula:75b973cb-79dd-4272-831a-43082025a4d9}} and sends it to Bob
            if {{formula:cdcd670f-1601-4c41-ab86-adbdd026ac17}} is even, Bob samples {{formula:c8e7b5f7-44d6-4821-919d-acba8481e0c0}} from {{formula:a5698a53-e46f-4636-8563-cbc01d8876a9}} and sends it to Alice
      Bob locally samples {{formula:da212ef8-7095-4abc-92d9-4bde94487e6a}} from {{formula:12702b02-a19e-476f-a8ee-8a47896b1136}} // recall that {{formula:7423d46d-90e7-4d4d-a5cd-6124c9cf916e}}
      for {{formula:344816d5-d72e-4d21-aa7f-5a14af65bd96}}
            Alice examines the distribution of {{formula:b3eac8be-0ffa-4c96-8c42-9bbf9a3d9b42}} pretending {{formula:5fe4a634-117b-458f-b716-9e3e5f31fe88}}
            Alice sends Bob the more likely value {{formula:16041d38-5a16-44a1-8fd8-3d2624c6cdc4}} of {{formula:712fc684-bb90-4b88-8e83-181560f673c1}} in this conditional distribution

Part II
      Alice and Bob accept {{formula:e5a4bc02-8586-4c8a-af15-faeacea08c00}} with probability equal to {{formula:b646af39-05f8-4e97-bc1e-5d4b61e67efc}} (assuming it is at most 1), for some fixed parameter {{formula:7083aec8-bdb8-4231-9dfa-cbbe1b382d7c}}
      if the players decide to accept
            Bob sends {{formula:79e00995-503f-46b0-9747-6d63f521c74e}}
      else
            Bob sends a random bit

Success probability of {{formula:fcea3681-b47a-466c-a177-612aa7de41df}} . Given {{formula:72b1e50c-e2ea-4344-ae94-2dea0a7b104a}} , Alice and Bob generate transcript {{formula:e86311f6-429e-4ee6-a6f8-169516eb15e4}} with probability {{formula:df66a183-5928-4034-922b-5e3f9aa7aaf7}} where {{formula:04e2af27-5833-435b-8cb2-e496405a0127}} is only known to Bob.
Then it is accepted with probability {{formula:3d73e2a6-58ec-4a1a-b524-1c92e4badfc1}} . Recall that {{formula:cce12d0c-8600-466f-926b-9e25789d3ed6}} Hence, for {{formula:0758628f-c099-4040-9d89-25c2f81988ef}} , the probability that the players generate and accept {{formula:4be5e9b1-ed3f-47ee-b0a2-7bf0cadfea29}} is {{formula:b30cb31f-d3e3-4a31-83df-4e7b93606c07}} Thus, the probability that {{formula:a78f9514-2846-4712-b36d-84b5def9569a}} accepts is {{formula:0da21419-2cad-4661-a028-c4038d0fd1ac}} . Alice does not know {{formula:3117f76c-7dd1-4732-9e87-b00911ed456e}} , so she sends the more likely value {{formula:14a6d8d6-913c-4451-ac85-3d7af0117bd9}} conditioned on {{formula:14e6aa2a-eac0-4d11-99a0-8ad08fa6eb92}} for both possibilities of {{formula:62743e86-a151-422e-8c26-12cf9a704034}} , and Bob outputs this bit when they accept. Conditioned on accepting, {{formula:8ed4462a-116b-4c69-ada8-d97073e82cd2}} follows the distribution of {{formula:c96a0737-5aa2-4443-98b5-c996941dd6b5}} , and Alice has told Bob in advance the more likely value of {{formula:a46bea33-5979-44b4-a8c5-05ea8abed76c}} conditioned on {{formula:ec149688-896b-437c-b26c-c64869eff116}} . Intuitively, this implies that the overall advantage should be {{formula:30be4aff-1920-4779-b760-f7bbbf5cfb29}} . We now formally prove that this holds. We use {{formula:7b15c5ab-d502-4fba-b129-0df4def022ba}} to denote the probability of {{formula:c74f4fac-33b6-4a14-bded-3c27be3a6677}} in the distribution induced by running {{formula:7312c977-1511-4163-9e2d-c2ff852f705b}} on {{formula:2cadefa5-bc92-4a33-bb8f-7e62b7eed1ca}} . Note that the transcript of {{formula:63103d93-90c7-48af-9790-7b7b3aaecf88}} is {{formula:b1c2da57-9882-4561-a545-95a2a3030940}} .
The expected overall advantage of {{formula:334c6a6b-584a-4d5a-abdc-b6f703b46826}} is at least {{formula:957fafc1-7e2e-4bac-a035-faaf7243ef33}} The first term is {{formula:0bc31336-31f6-4399-8023-28601649eebf}} The second term is equal to {{formula:2d69344f-9ff7-4d7d-b835-215baf10757c}} The two terms sum up to {{formula:f687c9b6-aea6-49ee-963e-ff31160812a8}} . Hence, we have proved the following claim. Claim 46 If the probability that a protocol generates and accepts a triple {{formula:f77e4810-e4ff-45fd-9800-80334bcee8e2}} is equal to {{formula:e2b7446e-58a9-4797-b642-80c5bae2baec}} , and it outputs a random bit otherwise, then this protocol computes {{formula:111111ff-463f-4c34-b504-9fd52bbedfa3}} correctly with probability at least {{formula:2534c02c-5c85-477d-b1c6-cb71b6878347}} Standard protocol {{formula:df61fa54-e3f5-4845-96da-a43939a5a2dd}} . Next, we will construct a standard protocol {{formula:ed61adb5-e681-4ddc-a05c-d0e2de8a65a7}} that simulates {{formula:87d01bfc-7a60-4a3c-a57c-d24b91cabfee}} . Similar to {{formula:2b890555-5e43-4990-8b62-f06fb990866a}} , protocol {{formula:482ab970-a2d7-43f6-9db1-2c5f8540b559}} also has two parts: In the first part, the players generate a transcript {{formula:18430635-6e5f-4634-b3ed-f748cd5b8971}} ; in the second part, they decide if they will accept {{formula:de1131a1-81bd-486d-93ef-9f7ab2be88e6}} . For the first part, the players first use public randomness to sample {{formula:7c255f8b-4906-4a48-93d7-a2c95dcb004c}} . Then for the subsequent messages {{formula:42fbdfbb-d228-4294-a163-adbfdca65817}} , Alice knows the distribution {{formula:55561abb-a6ba-4a21-b11e-8c4eee11d9a2}} , and Bob knows the distribution {{formula:d2819474-18de-4eea-a811-c5b850970ad9}} . 
For odd {{formula:71f42b96-92b0-4303-9218-cad49d20b4db}} , the players use Lemma REF to sample from {{formula:310c5608-a40f-4826-91ed-e9ce447fe79a}} where Alice sends a message; for even {{formula:a5da7634-4d0a-4f82-9238-0bd9313370e9}} , they sample from {{formula:5522924e-3166-4e02-a4e9-09ec65bc94c3}} with Bob sending a message. Finally, Bob locally samples the last message {{formula:9f7d4f8f-5978-4216-93e5-2f1132606e92}} . We will show that Lemma REF guarantees that the probability of sampling {{formula:2261f1af-53bd-44f7-b68a-fc8f02b759df}} is a good approximation of {{formula:ee24a351-8fd3-44cb-ad9f-949e53e13dc0}}

tau[Protocol][(X; Y)][]

Part I
      fix parameters {{formula:8421fdbd-22ab-43b5-bffc-ca8b7ed20d80}}
      Alice and Bob use public random bits to sample {{formula:78d74aa7-ff15-4fcd-97c1-ccb6454d4f53}} from {{formula:f478f57e-f432-49c6-951b-3f1f3218f261}}
      for {{formula:f1abc873-b27e-459a-b82b-2d5bca76802a}}
            for odd {{formula:0f310d52-f29b-42a9-b08d-507a07fe5deb}} , use Lemma REF to sample {{formula:3978c209-0108-4778-b631-a78167f9a05f}} from {{formula:8fe5886f-0f40-47d9-9f44-0714a160427d}} given that Bob only knows {{formula:dccf0dab-c07b-4b51-bb83-964b3fd68249}} , where we set {{formula:5feb8c62-9054-4829-9c11-7174a172f0a6}} and {{formula:4ae076c3-c8d7-4ea6-8356-3e4595c92c1b}} // Alice sends one message
            for even {{formula:d9b5026c-39a1-488d-9139-32360ba54505}} , use Lemma REF to sample {{formula:8e9b466d-b5fa-453e-878a-41663476b0ce}} from {{formula:8038d4e9-5249-43c2-90be-0dcf95114915}} given that Alice only knows {{formula:e6330ff2-f729-4b07-a636-5784986d2eae}} , where we set {{formula:f3860ae2-90b3-4d1c-a6c2-673a4616179b}} and {{formula:27f0166d-3ac1-4aad-862c-fd12fb8bdbd5}} // Bob sends one message
      in Bob's local memory: {{formula:b2e8cf96-fb5e-420c-8f74-b947d3262d76}} // the final value of {{formula:903e0972-1caa-4d21-ac61-8382ee1d8ac7}} indicates if they will accept
      if any player outputs {{formula:daa4bf4c-bbe1-4ec8-ab8b-6638d37418f5}} in any round
            {{formula:52c348b0-a60c-4c89-a181-995d8da56562}}
      Bob locally samples {{formula:ef2976ae-09df-4772-bb3f-abe4a3046d38}} from {{formula:b7f69ad1-5c42-4a38-bbc2-4d96cdb6e234}}
      (to be cont'd)

Each player will send one extra bit indicating whether they output {{formula:75469b7e-3aaa-4379-a841-77a663e534c3}} in the previous round. Hence, Bob knows if any player outputs {{formula:79ba6095-3102-4ec4-b4a9-f83db53316b5}} in the first {{formula:8ba9a8e7-b41e-4d51-a2e0-e0fe47e92edc}} rounds (including round {{formula:36bc6dde-d67c-45bf-a880-372d48e746f8}} , for which he does not need to send the extra bit). Next, we use rejection sampling, and accept {{formula:03a60db8-e4b9-4953-ba3d-c36b6f456bb6}} with probability roughly {{formula:e66fc59b-e183-4880-b8e4-00c001d7325c}} for some carefully chosen {{formula:736f576a-e20b-42fa-b2b0-77ffd6903c05}} . The rectangle property of {{formula:ebaee9fe-4330-42bb-969d-b0224b0742de}} ensures that this rejection sampling can be done approximately using very little communication. More specifically, by the rectangle property of {{formula:1a025926-1b43-4496-97f2-cba90124d262}} with respect to {{formula:a2b0ff4c-fca4-4412-8a47-24efa5f025f9}} (see Definition REF ), there exists {{formula:a3714a17-180b-4f1e-8156-c830d0d591a9}} such that {{formula:46391364-0580-4c9a-8e38-8a7207754b61}} . Hence, {{formula:12fa952a-612f-419b-8881-6543281f3274}} can be written as {{formula:d8d6eabe-a4d0-440d-ab7e-a66ef291e2d4}} by letting {{formula:cafd6d44-a1b7-44cc-ab75-a408c5a01fcf}} and {{formula:ad3c70ad-4553-48a0-aef5-f69ac9fc7a4b}} . Suppose we let Alice accept with probability {{formula:c682caa1-c1f7-496a-9ae9-4fb639e15a8d}} and Bob accept with probability {{formula:27afe748-ab3c-4a6e-9a40-ce664367fd3d}} for {{formula:f75349e8-ac30-4d30-a956-d6ec3bf993d2}} ; then they will be able to accept with the correct probability by sending only one bit, i.e., whether they accept locally.
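The one-bit local-acceptance trick can be sketched in a few lines of Python. This is an illustrative simulation only (the function and parameter names are ours, not the paper's): each party flips a private coin and announces a single bit, and the joint acceptance probability is exactly the product of the two local probabilities, which is the product form supplied by the rectangle property.

```python
import random

def joint_accept(p_alice: float, p_bob: float, rng: random.Random) -> bool:
    """Each party flips a local coin and exchanges a single bit.

    Alice accepts with probability p_alice, Bob with p_bob, independently.
    The protocol accepts iff both do, so the overall acceptance
    probability factors as p_alice * p_bob.
    """
    a = rng.random() < p_alice  # Alice's one-bit message
    b = rng.random() < p_bob    # Bob's one-bit message
    return a and b

# Empirical check that the joint acceptance rate matches the product.
rng = random.Random(0)
trials = 200_000
hits = sum(joint_accept(0.3, 0.5, rng) for _ in range(trials))
print(hits / trials)  # close to 0.3 * 0.5 = 0.15
```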
We will also need to choose {{formula:8424e980-366e-4eb4-89c7-447dbf30943b}} and {{formula:094413a3-13c1-44aa-a75a-04d97901cbc1}} carefully so that both probabilities are at most one. This is done by applying Lemma REF , which ensures that most of the time {{formula:138c731d-325f-431f-924d-a0f5bb91a03e}} is between {{formula:c6c16b50-ae47-401a-b753-158d486c21dc}} and {{formula:cc306987-7a82-432e-9894-a6b108e934fc}} for small {{formula:0ef87795-2711-439f-9a5f-2b773c4c1039}} . Thus, they can coordinate the values of {{formula:8a9f2133-090b-429d-b44d-cde4dd5ef9c8}} and {{formula:6c19ef8a-8a91-47c0-9d42-8fb219798ea0}} by Alice sending a small hash value of some approximation of {{formula:8853f6f5-590f-4b5a-89b7-4f7cc3749683}} .

tau

Part II
      for {{formula:df1f54e0-33e3-44ed-b93a-e3081cf2517d}} , Alice computes {{formula:5a215b15-0b0c-4d57-88cb-d84b9ac3ce84}} pretending {{formula:d44d746a-3436-4d2e-b876-753f6739d729}} , and computes {{formula:ded905aa-294f-4794-a92f-9be5db140fc4}}
      Bob computes {{formula:e1cba87b-616b-4d85-9d40-de4765f8a87b}} and {{formula:0fb5d3c6-ca84-442b-88d5-761d873f7a00}}
      let {{formula:27c50d1d-cb8c-46af-b656-d6792a33fd73}} , Alice and Bob use public random bits to sample a hash function {{formula:a836889a-8bba-4430-a7f7-a503d42bb45f}}
      for {{formula:37f3df6f-12a1-483f-8f67-404b43c0af2a}}
            Alice samples a bit {{formula:0d6f66ec-fccd-4018-9798-c1969e60d1ec}} such that {{formula:2178fe56-bcf7-40e1-b883-7c66a7ebbc7c}} pretending {{formula:46ac1e9f-88f3-4f8c-afd0-1865c87338b8}}
            Alice examines the distribution of {{formula:c68ab357-e318-4076-82fe-008c42e03504}} pretending {{formula:3157d22c-b1a4-4032-b493-deedf7ab0565}}
            Alice sets {{formula:f45a3f42-d488-4e6f-bf69-298c2d4ace0e}} to the more likely value of {{formula:5c79fddd-3e28-4555-9a34-09b2c128ba1f}} in this conditional distribution
      Alice appends {{formula:ea21bc56-d128-48cf-a271-6f6c94a71160}} to her last message {{formula:9efc6aa0-9970-4aa7-bfad-dece65816da4}}
      let {{formula:7402dabb-e2ac-4cfa-a4ff-82255063182c}} and {{formula:1acd519b-058d-40d2-a657-f1a544c39882}}
      upon receiving {{formula:f7abfca3-e3c9-4049-9742-aedd9109b327}} , Bob checks: if there is one unique integer {{formula:cc0c260d-a293-425e-b967-fb7be3fc5877}} such that {{formula:ab7b4fef-6b97-4ce9-a39f-1a0dbc6c2441}}
            Bob samples a bit {{formula:7ff1d3bc-01e0-4fda-960a-914e88ede167}} such that {{formula:130b7974-d4d6-4f9d-b3cb-5b219c9018b7}}
      set {{formula:addcb6e8-db59-4870-8517-85eddfb88d0d}} if there is no such {{formula:ea67545a-c8b8-4bd7-b06f-19586c71213d}} , or {{formula:738eac13-1398-445c-aff0-da93175cf6a9}} is not unique, or {{formula:e3eeba7c-7315-439c-91f8-bc88c8dcb267}} , or {{formula:92081999-90b6-4970-8863-9a712b74bf2a}}
      if {{formula:9a48c5cb-13c6-4e0e-9c53-34f30958ca46}}
            Bob sends {{formula:4b2c3b45-734c-4d92-bf9d-f8ec75873872}}
      else
            Bob sends a random bit

Note that Alice's new messages are sent before Bob starts sending the last message; hence, they are still part of round {{formula:3ee37687-0ecd-4809-869e-56189a973f34}} . In step REF , since {{formula:c079be57-e4ee-472c-a647-31a77ba23760}} for {{formula:9cc89337-6a0d-4434-84d6-13449c311a7e}} , the probability is at most 1. In step REF , since {{formula:d5d12dd6-ab2b-4749-9a91-eead97a612bd}} , the probability is also at most 1. Hence, the protocol is well-defined.

Communication cost. By Lemma REF , in odd rounds, Alice sends a message of length at most {{formula:00f0afdc-b53c-4141-b6b4-1c52785787ae}} ; in even rounds, Bob sends a message of length at most {{formula:4e1c7a38-ffc1-4b50-9287-9c1cec0ebb00}} . They also send one extra bit indicating if the lemma outputs {{formula:3116ecb9-84c2-47b7-8f18-7e7643bea557}} in the previous round.
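To illustrate the hash-based coordination Bob performs here, the following small Python sketch (all names, the window parameter, and the SHA-256-based hash are our illustrative choices, not the paper's construction) has Alice send only a short hash of her value, after which Bob searches a small window around his own approximation for the unique candidate matching that hash.

```python
import hashlib

def h(x: int, seed: int = 42, buckets: int = 2**32) -> int:
    """Shared hash: both parties derive it from public randomness (the
    seed). SHA-256 is a stand-in for a pairwise-independent family."""
    digest = hashlib.sha256(f"{seed}:{x}".encode()).digest()
    return int.from_bytes(digest[:8], "big") % buckets

def reconcile(alice_val: int, bob_estimate: int, window: int):
    """Alice sends only h(alice_val); Bob scans the integers within
    `window` of his own estimate and succeeds iff exactly one candidate
    matches the hash (a collision would make the match non-unique)."""
    target = h(alice_val)
    matches = [k for k in range(bob_estimate - window, bob_estimate + window + 1)
               if h(k) == target]
    return matches[0] if len(matches) == 1 else None

# Bob's estimate is off by 3, well within the window, so he recovers
# Alice's exact value from a hash of only a few bytes.
print(reconcile(1000, 1003, 10))
```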
In Alice's last message (round {{formula:9ff424a4-0775-426b-b852-0be778a58988}} ), Alice further sends two hash values {{formula:4d863b4f-0421-489e-9cde-a98259cfdfd1}} and the bits {{formula:883b4b32-07fd-42f4-9f6b-8976f7d39fb0}} , which takes at most {{formula:28a85db4-5dc8-45cd-910d-e369e16bacd1}} bits in total. This proves the communication bound of {{formula:1c384586-c068-4369-928a-c865c4b52e89}} we claimed. The first part of {{formula:d7b7e615-4ed7-40b5-a0b0-429e647a0103}} . We first analyze the first part of {{formula:ed39813e-a939-44a3-85cb-e1df833a4104}} and estimate the probability that we generate a triple {{formula:6a3a1fbe-ab2a-4f72-a39a-f77985c4738b}} . By Lemma REF , for {{formula:7b27c16c-b4ab-4389-8f8f-6747daca098b}} , the probability that (recall the value of {{formula:338f6164-3221-46c9-8ec8-86115c9ada81}} in line REF and the value of {{formula:b8c1f4ec-fcc2-4742-ba20-c61b13778ac7}} in line REF of {{formula:b3ecf4fd-7ff2-430e-9dfd-874d3bdd456a}} ) there exists an odd {{formula:3feb899b-63be-4bde-957b-630bf7ed2e30}} such that {{formula:63548df3-4b8a-44bc-9ff1-3e8005e7d984}} or there exists an even {{formula:c0dbab4a-2397-4aaa-ac59-f742cbff6d4e}} such that {{formula:1f1987d3-13f9-48cf-8ebc-d9fe7c300cf9}} is at most {{formula:2c8b7bea-8002-4449-9b78-caf2c6e3fb6a}} . Denote this set of {{formula:f4b2c85a-a4bb-4de0-bf11-11f7c87c7b69}} by {{formula:b04a3f3a-04db-467a-af45-c11706ef6f83}} , hence, {{formula:ecd3d89a-d130-4e0c-be03-ad66f6854fd3}} . Now consider any {{formula:95780d93-9013-46e1-a527-757b992c77de}} , and we estimate the probability that {{formula:1ca4e567-dae9-4574-ad3e-f79505b447f8}} is generated by the players given {{formula:229d610a-3635-40db-803e-3287292005dd}} . 
By Lemma REF , conditioned on {{formula:7965f5ba-70fc-4a8e-a9f0-5c50536bc8e8}} , for odd {{formula:8fe04795-b353-4968-89c0-066925e45f1d}} , the probability that both players agree on {{formula:bc7892b5-070e-4c09-8b82-d48a1543a735}} in step REF is at least {{formula:44d4fa94-6fcb-4078-9add-3f3f920cca31}} as {{formula:f8d64a7e-1f75-4cd6-9732-cf4fbad79d6f}} for {{formula:6dd26e73-d65a-4dca-a57e-ddcb51c38bd4}} . Similarly, for even {{formula:5e5f7714-3a37-4b33-894c-77e334658fa2}} , the probability that both players agree on {{formula:09dbb4fb-466d-43af-975d-ee2714d7d056}} in step REF is at least {{formula:abde4cb1-ac1b-49f7-a932-90c92e105ef3}} . Bob generates the last message {{formula:e7f7e081-31d5-4bd7-bfaf-59228f98442e}} with probability {{formula:fd6d25f9-c7ba-416c-ae7c-672025213bf6}} . Thus, conditioned on {{formula:041d7081-5d20-40d9-8385-c06759e86dee}} , the probability that the players generate and agree on {{formula:f2dcbcde-86b8-484c-8501-5e7d5a261696}} is at least {{formula:422372b0-0ff6-4a62-8272-91e2a56880eb}} where we used the fact that {{formula:8d811de6-35b9-4124-bd7b-90f90a26ac9e}} . On the other hand, for all {{formula:da14e0d5-b1de-4859-a655-0ed13c396093}} (not necessarily in {{formula:92ca15ca-397f-46e3-b852-1a4446645b90}} ), the probability that the players agree on {{formula:5aaa901a-b1c4-4ecc-981b-235b1f04049d}} is at most {{formula:2f06e4a1-ad0e-43db-a62e-5b06569e59f5}} for odd {{formula:dd304f69-3ebe-49ad-861b-af164c2761cb}} (since this is the probability that Alice outputs {{formula:a7597eeb-f0db-439f-b8b4-d000d79d63d5}} by Lemma REF ), and {{formula:7b293e70-8bf2-4327-88ec-183aacd1cad4}} for even {{formula:8a7cf215-27a5-454b-b20a-42383462c447}} .
Thus, the probability that they agree on {{formula:cebbb177-6b6e-4811-a919-59b26a17d917}} is at most {{formula:f3138a4e-bb71-4d25-afda-af0c38b40b7d}} Also, by Lemma REF and the union bound, the probability that the players do not agree on the same {{formula:2714935e-47dc-48a8-ad08-0ce308ece997}} in some round is at most {{formula:6b774d17-4414-4999-b30d-5cf9f6f2e769}} . Otherwise, some player outputs {{formula:fb3aa0ab-85da-4bb4-adec-65a30580cd12}} , and {{formula:a32752bd-93a5-433c-8ec1-f6c3ca46d6bc}} is set to 0. Thus, we obtain the following claim. Claim 47 There is a set {{formula:668cd5af-4476-40bd-8ffb-58885bec11d4}} such that {{formula:5a276c55-f338-4b4b-bbda-5542a97bb145}} , and given {{formula:b16c7aff-cd71-4145-b37b-efe1fc938109}} , the protocol {{formula:d761066b-1f85-4775-8ca3-7e9df308cf4f}} generates {{formula:7ed3eb18-b9a3-48e5-8d67-d2d3174cae77}} in Part I of {{formula:bc22f82b-73bb-4ad7-94dc-05e08a663b86}} with probability at most {{formula:0ddd2afe-de30-44e5-b303-908e4d7a846a}} furthermore, if {{formula:eb5880ce-0fbb-4e08-a8a9-d918f9dac4e4}} , {{formula:a27273ff-89af-47f4-96d9-ee98b6eb6efe}} generates {{formula:901c2b19-9ded-4393-9234-8a9f2912bb54}} with probability at least {{formula:d8d02da2-3a88-414c-8145-2cb9a0b0f48c}} Moreover, the probability that the players do not agree on the same {{formula:92b9f304-3003-4b2e-9cde-a2a1f1ff2ab6}} is at most {{formula:6467155a-7d97-4743-82a2-2552d4758cdc}} . The second part of {{formula:ea475bc5-78d1-469a-83bb-6adcd5905be8}} . Consider a triple {{formula:81507b1b-8519-44e6-aac9-10193cea81bd}} ; we analyze the probability that it is accepted in the second part, conditioned on it being generated in the first part. Alice does not know {{formula:5cefc1f7-19a8-4b5d-931a-c5a961ee7255}} , but it has only two possible values.
Alice pretends that {{formula:533ae052-45a0-4114-9b08-b3ab7dff047c}} for {{formula:9db7309f-9624-4b02-b01f-0d86817dfe55}} , and computes the corresponding {{formula:a03d708a-12e3-4530-950a-6043047a6b40}} and {{formula:14a3fd00-6044-4003-ab11-237bf5458049}} . She sends both copies (for {{formula:700ca107-1462-474e-8f99-e7fa3beb3117}} ) to Bob, and Bob only looks at the copy corresponding to the actual {{formula:63dda9bc-5e81-4fa5-93b5-61f9125663f5}} . In terms of the correctness, this is equivalent to Alice knowing {{formula:53f2885b-7f8b-4b29-b8ba-3df06659f1e2}} . For simplicity of notation, we will omit the subscript {{formula:e97f3fd9-8c57-4c74-8027-5eb189bff65c}} , and use {{formula:a3c1ace2-fbf4-4acc-93f2-f6b80a33a2c6}} to denote the copy for the actual {{formula:53b01d5e-0fae-46a1-8162-346047aefb18}} . If for a triple {{formula:26367336-b0be-43a6-b776-bbb82ad94a8a}} , we have {{formula:52688b9f-8304-40a2-8ead-c46caa4dbbff}} (note that {{formula:c3a2a2c6-6eff-4a8d-b807-17794862382d}} are determined by the triple), then the probability that there is a unique integer {{formula:52abea0f-e5c0-4921-badf-038f26add80f}} such that {{formula:1db56a6f-6229-4752-995d-454ae3c8bfa1}} is equal to {{formula:252f683b-b8cf-4eeb-b7e2-e68e180b8896}} and in this case, we must have {{formula:280bbdd7-54e9-432b-bc80-4108b22975ec}} .
Then the probability that {{formula:c1787121-f3a3-46fd-8720-a1c0ea996479}} is {{formula:5536199a-0c89-4a82-8274-7935929b964c}} and the probability that {{formula:27d3d169-9845-4a83-9182-2fd0ca69855b}} is {{formula:6f543f4b-f236-40c9-9920-24eee9def818}} The players do not set {{formula:05ddbd0a-ffba-4993-a196-b3b5999519e9}} to 0 in Part II with probability {{formula:b638f7fb-f492-40ba-a354-2efbdc6f6d63}} On the other hand, if for a triple {{formula:b237d7cd-e183-459b-8121-73d577bdcaca}} , either {{formula:3ba2d0cc-950d-4d08-a093-52904ae60205}} or {{formula:d37f5b9e-203f-45a5-ad06-35d68adb8320}} , then the probability that the players accept it conditioned on it being generated is at most the probability that there exists some {{formula:473363a5-2cf8-4e28-9629-05f8183370e4}} that matches the hash value of {{formula:8965722a-08ea-43d9-a792-9ab4b293a2b7}} , which by union bound, is at most {{formula:1964b579-20dc-4302-b81c-ec9267c223bd}} We denote the set of {{formula:07c8d26b-402a-4f65-8b15-2f6c0590a868}} such that either {{formula:1ffbfd95-ce09-4b92-85d3-917db187ea64}} or {{formula:204b52db-2e05-4de7-a131-77fd423ff870}} by {{formula:f2c826f3-85c3-4ed5-bf89-bd22dfba7ffa}} . If {{formula:e4924d1c-ac12-417b-9bef-1f4b84767ce5}} , then we have {{formula:3d99dff4-3c32-409e-9407-dc21a5df881e}} If {{formula:d2309c8e-7a9b-4f8e-9f62-a9f4aa38e1b1}} , then we have {{formula:757fa746-f7b4-4e66-bcd9-4fab882ad2fb}} However, by Lemma REF , for {{formula:fe061e1b-892c-4dd6-95b1-7b5e084eb6d5}} , the probability that {{formula:75708c7e-0ffd-421b-ac36-aa07e59afaac}} or {{formula:39f6ac72-8811-466b-8d1b-89ff44121bbf}} is at most {{formula:91bab31b-be0f-498c-90ab-bdd3d54a11d0}} . This implies that {{formula:b21679cc-8c15-49ed-8ffe-c4c21d0a3215}} . Hence, we obtain the following claim. 
Claim 48 There is a set {{formula:158272c6-1259-4cfe-86c4-78e5ab60a067}} such that {{formula:716df4d4-fbb1-4a32-91f1-d329486d3a1d}} , and:
      for {{formula:56f04ca2-fd2e-4c3e-86a5-43cdb7dd8a03}} , the probability that {{formula:b61f237b-163a-43bc-8bca-07da136efd38}} accepts {{formula:ba08640f-d5e2-46e5-afdd-4e6dd5076fd2}} conditioned on {{formula:f9cefb67-d0e8-4d33-83d3-c9e25f536243}} generating {{formula:6767200d-eb40-4b97-9833-89552c3db82c}} is equal to {{formula:20ac0c42-d0c4-4776-a3bf-337f7837ed9f}} ;
      for {{formula:97e6e168-ec9c-418a-bbc4-d9d6975494d5}} , the probability that {{formula:0670f788-6d5d-46f1-92a7-d103a346f541}} accepts {{formula:8216b288-cf8d-459e-8522-0024c803727c}} conditioned on {{formula:207a24bb-2567-401b-b2a8-51cb9a06ba29}} generating {{formula:60f70069-d36c-433b-b74a-7b824ee785fc}} is at most {{formula:faf7ba51-31c6-4bcd-91f7-07196d94f03f}} .

Overall success probability. If all {{formula:edd0fa77-9a3f-4a5a-9e3c-a19623677be2}} were generated in the first part with probability equal to {{formula:128990a1-298d-41a1-8f1b-4f8164c27680}} and accepted in the second part with probability equal to {{formula:9c405ebb-d237-4271-87b1-ecd48c735182}} , then {{formula:a1b3ac8b-1b72-476e-b81c-04acf83f953a}} would be the ideal protocol in Claim REF for {{formula:063704c8-6e19-4fe7-a078-d048d8a11570}} . Hence, to lower bound the overall success probability, it suffices to compare {{formula:df788f22-f983-4236-88a9-8282f6e005a7}} with {{formula:c8359bf3-fd96-46ff-b289-f3e301382d00}} , and bound the total probability difference in generating and accepting a triple {{formula:c18231f5-5da5-4183-a9ec-aecdff5650af}} .
By Claim REF and Claim REF , for {{formula:c95c24fb-7e9e-4121-bf5e-ad3f395d31b9}} , the probability that it is generated and accepted is at most {{formula:89497c5d-abf3-4223-99c1-9d29fbc534d5}} and at least {{formula:570d61a6-aa5d-4e67-80ab-f5d025954bea}} Hence, the total probability difference between {{formula:5e4a7cf5-3333-422d-ba81-72bf019c2adf}} and {{formula:df1255f0-e86b-44c4-9334-9e8d7665c3cd}} for these {{formula:f7dea25e-238d-4fbb-a27c-86ffaa37624c}} is at most {{formula:e45a046d-8682-4fea-b41f-6d52d79df4f9}} For {{formula:231ae8e5-29de-4c60-87b6-4f9fd5431f16}} , the probability that it is generated and accepted is at most {{formula:00a36edd-4fbb-4f0c-b356-18878e70f6d2}} Hence, the total probability difference for these {{formula:47895d3e-e439-4c51-98b3-ca09efc02ce6}} is at most {{formula:d05f049a-9258-41cf-8bc6-e12f2d5879c2}} For {{formula:b490c1d3-9c57-4429-aa10-0b8e2cb4f009}} , the probability that it is generated and accepted is at most {{formula:18667b6c-f0d9-4be9-a080-9538fa496b1b}} Hence, the total probability difference for these {{formula:d21a08e4-62a2-4300-9f67-b1e16521a139}} is at most {{formula:b067f54e-1365-4e18-9053-5a823be7a8ed}} Finally, the players do not agree on the same {{formula:4ec1d09c-6777-4509-a533-7a38cb20a4b9}} with probability at most {{formula:da8d3a0a-bd9f-4f0c-9f7d-7d32cfe78c37}} . Summing up Equations (REF ), (REF ), (REF ) and the probability that they do not agree, the statistical distance between {{formula:487f4ef6-507b-4d57-ab57-959dd6b8c0c2}} and {{formula:25223361-0d9f-4b11-ae01-449d18b61aa6}} is at most {{formula:36d5b7da-ccc3-40fa-b5a6-0d06b26e9b9a}} , where we used the fact that {{formula:7f3b759b-1474-4e32-9f8b-437df37983ed}} and {{formula:601067f8-e997-499b-8b43-8358e337ca9c}} .
Combining it with Claim REF , we obtain that {{formula:6e9fffae-6ace-433c-b390-4d1b622cf15e}} computes {{formula:de16c666-5458-4265-851c-02641436654d}} correctly with probability at least {{formula:5c3c6a40-72fb-4e3d-b1b4-2a4460587fcf}} Finally, note that if {{formula:619866c3-1ccd-495d-a225-5feb80667e17}} , then the lemma holds trivially by setting {{formula:c51cb382-68ca-4137-a9e1-ddbb3ceb25d8}} to the protocol that outputs a random bit; otherwise, we have {{formula:8945e0f1-a624-4810-a46c-40d70f28d96e}} Hence, the success probability is as claimed in the statement. This finishes the proof of the lemma.

Theorem REF Implies Shaltiel's XOR Lemma

Recall that the discrepancy of a function {{formula:9f7d3340-69be-4b0a-93d1-ac6e720f4e6f}} is {{formula:b440ba24-a0d3-4514-bfde-1f1559d166c1}} (for {{formula:86953d1b-34ec-47f5-9ce9-bf84b6e54502}} -valued functions, we first map the value to {{formula:7b17ba76-4009-40f3-89ec-8917539d7612}} and then apply this definition). Shaltiel's XOR lemma states that {{formula:7d3b15bc-ebc4-4411-99c0-670c17bfa040}} We show that Theorem REF implies this bound. First, by viewing a protocol with {{formula:7bcc9078-0dd0-49c2-af89-0f1d00b5e22d}} bits of communication as a partitioning of {{formula:dabc431c-5549-4f76-a9bc-1c714fe4f264}} into {{formula:0585705d-6f0c-46bd-aa56-af0ed80af7d8}} combinatorial rectangles, any {{formula:f8a7d860-bfaa-4053-ae22-67ed2bf18238}} -bit communication protocol cannot compute {{formula:e3eaae15-0847-408d-9012-4f8310c48c25}} with probability better than {{formula:9ea19693-a976-4ec1-a9da-8b05a85af1a0}} when the inputs are sampled from the uniform distribution {{formula:549eb3c1-feea-49fe-87db-eea8a3e20ea8}} over {{formula:11f6f8c3-76a6-4ccf-ae1c-84c8993fa314}} .
In particular, it applies to 2-round protocols, and we obtain that {{formula:70c43820-84bf-47a6-989b-d7ed6d9b81e8}} Thus, for {{formula:54806127-b2d2-4a51-90db-a8df229de5e8}} smaller than a sufficiently small constant (otherwise, the bound holds trivially), Theorem REF with {{formula:a734a2c5-db4e-46ec-94d1-7fdd73e4e2d9}} , {{formula:a82feaa9-675b-4e78-b6cf-9e4e77dc1874}} and {{formula:0af3ad5f-edf0-470f-9266-226610780a4c}} implies that no 2-round protocol with communication {{formula:891128ac-d5d8-4a85-841c-d0aa909250c1}} solves {{formula:108bc245-2770-4b06-be4b-7c4edf4fa22d}} with probability {{formula:4f996862-5bc6-4933-9135-04b63350961d}} In particular, no protocol with two bits of communication can solve {{formula:c154cfee-4549-417f-a119-3e4357a79717}} with this probability. Finally, as pointed out in Remark 3.12 in {{cite:e21443e067faa8c12b96691c44b0e9340fa0e6f0}}, the discrepancy of a function is equal to the maximum advantage over {{formula:e05c5354-6f68-4ad4-afe8-3b45e5da3ead}} that a 2-bit protocol can achieve on the uniform distribution (up to a constant factor). This proves that {{formula:ee697852-5cb8-408d-8c7e-e34c2c9f7f5d}} .
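As a concrete companion to the textbook notion of discrepancy used here (the maximum, over combinatorial rectangles, of the absolute bias of the function under the uniform distribution), the following brute-force Python sketch computes it for toy-sized functions; the function names are our own, and the exhaustive search is exponential, so it only works for very small domains.

```python
from itertools import product  # noqa: F401  (kept for experimenting with inputs)

def discrepancy(f, n: int) -> float:
    """Brute-force discrepancy of f: {0..2^n-1}^2 -> {0,1} under the
    uniform distribution: max over combinatorial rectangles A x B of
    |sum_{x in A, y in B} (-1)^f(x,y)| / 2^(2n)."""
    N = 2 ** n
    # Enumerate every subset of the row/column index set.
    subsets = [[i for i in range(N) if mask >> i & 1] for mask in range(2 ** N)]
    best = 0.0
    for A in subsets:
        for B in subsets:
            bias = sum((-1) ** f(x, y) for x in A for y in B)
            best = max(best, abs(bias) / N ** 2)
    return best

# Inner product mod 2 on 1-bit inputs is just AND: f(x, y) = x & y.
print(discrepancy(lambda x, y: x & y, 1))  # 0.5
```

The value 0.5 is achieved, e.g., by the full rectangle, where three entries have bias +1 and one has bias −1.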
i
ae8de1dff4cdeca36bc4fe5150cbf1c5
Remark 6.9 If {{formula:56b26a2c-75d4-476e-bd78-64d0e2b306b0}} is an irreducible matrix with non-negative off-diagonal entries, then the corresponding Laplacian {{formula:e9fa1fac-b546-4118-b5d0-ad491de2cc4a}} has a simple eigenvalue 0. This result is known, but less so in our general setting. To see that it still holds, note that for some large enough positive scalar {{formula:ed188f60-0886-43ac-95f0-7a793a2362e0}} the matrix {{formula:5ca25c76-75ed-4969-aa11-a69dab327314}} has positive diagonal entries. Moreover, {{formula:56d67cf1-7b7b-40bf-83f9-563b1aaa6de9}} agrees with {{formula:7b74ad47-6c91-49ed-a7df-954b1dd3dc7f}} on the off-diagonal entries, so that {{formula:4ba755ae-7873-41d7-96b4-d2b36da8794e}} equals the non-negative adjacency matrix of a strongly connected weighted digraph. As in Remark REF , it follows from the Frobenius-Perron theorem that {{formula:f772c242-52ec-4b75-ab6b-6b03cbaf54c5}} has a simple eigenvalue {{formula:bf4c0ea9-4a1d-42bd-93a4-ea2590d73fff}} with a positive eigenvector {{formula:d02f9ed3-d3b0-464a-9c0d-04a7f85c9e85}} . Moreover, any non-negative eigenvector is necessarily a multiple of {{formula:b836fd78-3412-41b3-bd12-5420ce60477a}} , see {{cite:f31240f3ccf5691fb4928368067a87e85fcc571b}}. As {{formula:bad0c94f-0df3-4844-b514-ce0806302c46}} has an eigenvalue {{formula:da3a5d31-b15c-43ef-abe7-4623e1448025}} with eigenvector {{formula:5779f7e5-cec0-4ee6-9947-3ca18a414dff}} , we conclude that {{formula:8ac146d3-101f-4f6d-8f74-bb7fa159daac}} is a multiple of {{formula:4906a701-98f1-4aa8-b203-e11f425b7925}} and that {{formula:c503faaa-1734-4d1a-aee4-ce03c99c692d}} . In particular, the eigenvalue {{formula:2034d290-3018-49cc-8f42-c6c07b327fc0}} of {{formula:821de68b-5f78-4fab-adeb-18958f00a58b}} is simple, so that the eigenvalue 0 of {{formula:4b6f54f2-a363-4d62-9b72-a4d456d30613}} is as well.
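The remark can be checked numerically. The sketch below assumes the row-sum-zero Laplacian convention L = A − diag(A·1) (an assumption of this illustration, not necessarily the text's exact definition): it builds a random irreducible matrix with non-negative off-diagonal entries and verifies that 0 is a simple eigenvalue of its Laplacian.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# Adjacency-like matrix: random sparse non-negative off-diagonal weights
# plus a directed n-cycle, which guarantees strong connectivity and
# hence irreducibility.
A = rng.random((n, n)) * (rng.random((n, n)) < 0.4)
A += np.roll(np.eye(n), 1, axis=1)          # directed cycle i -> i+1
np.fill_diagonal(A, rng.standard_normal(n))  # diagonal may be arbitrary

# Laplacian with zero row sums, so L @ ones == 0 and 0 is an eigenvalue.
L = A - np.diag(A.sum(axis=1))

eigvals = np.linalg.eigvals(L)
num_zero = int(np.sum(np.abs(eigvals) < 1e-9))
print(num_zero)  # 1: the zero eigenvalue is simple, matching the remark
```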
r
ade8610ed9a0920a4e586ce5bd59464d
This section empirically evaluates the proposed method. In the experiments, we utilized five types of real-world datasets of distinct natures: 1) structured datasets from different domains {{cite:d34a5cfd6f3f6547fb6ef077b999bf8bbebdb1b5}}; 2) a medical image dataset {{cite:5337ae10ce77e86733647dcf1ecc39a9bd0bac44}}; 3) a face image dataset {{cite:3895bec75a8c65dd82c80d387ce15a185b5c131e}}; 4) a gait image dataset {{cite:3c02893bcfeb2abb5a5d238b863bd030a66977d3}}; 5) a person re-identification image dataset {{cite:205164f0478faa74f9d9b01eb518399572ab2a56}}. The feature and sample-size information of all datasets is summarized in Table REF . In addition, illustrative images from each image dataset are shown in Figure REF . {{table:7ea3507c-2ece-4957-8881-9de41885b1e9}}
r
de35ed534e1388520465c704ba187a85
Mask Memory Unit for Consistent Saliency Maps. A common problem with perturbation-based methods {{cite:d2e2655007e854e60e8a11f88b6bd6cf062190ff}}, {{cite:8235e259f675ce24e1b039ba49938db467b3a0ac}}, {{cite:733a8b836c5c48ad71eb3da86b5554ff4e7a29ca}} is high variance between explanations, even when given the same input multiple times. Also, during learning, the saliency maps can change dramatically between epochs {{cite:39157230690c3cb39e6a563ae2f255e174130834}}. This makes training challenging; in particular, it is hard to know when to stop training. We address this problem using a Mask Memory Unit, in which we store “good” saliency maps throughout training. A saliency map is considered good if the resulting prediction {{formula:f2399074-4834-46f2-8a7c-076429f730fa}} for the perturbed instance {{formula:b911dda6-1297-4bc0-8f33-cf4fac2c9c4a}} is identical to the predicted class {{formula:f59ad87e-796f-4700-a021-ba7d4b438809}} for the instance of interest {{formula:509af656-8a9e-46e1-affb-3f9fca59b0f3}} . The final saliency map corresponds to the average of the last few (10 or more) saliency maps in the repository. This significantly reduces the variance between the saliency maps offered by our model, as it generates consistent explanations for the same time series instance across multiple runs. {{table:b3dd50f9-a48a-4475-81db-a74366c44504}}{{table:400b493c-6ee3-4848-809a-b73bfcad4c20}}{{table:6f3421ea-718c-48bd-8abb-3f009c59c8b6}}
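A minimal sketch of such a memory unit, assuming numpy saliency maps; the class name, capacity, and toy driver loop are our illustrative reconstruction of the idea, not the paper's code.

```python
from collections import deque
import numpy as np

class MaskMemoryUnit:
    """Stores saliency maps whose perturbed prediction matched the
    original predicted class, and reports their running average."""

    def __init__(self, window: int = 10):
        # Keep only the last `window` "good" maps.
        self.maps = deque(maxlen=window)

    def update(self, saliency, perturbed_pred: int, original_pred: int):
        # A map is "good" iff the perturbed instance keeps the original label.
        if perturbed_pred == original_pred:
            self.maps.append(np.asarray(saliency))

    def final_map(self) -> np.ndarray:
        # Averaging the stored maps damps run-to-run variance.
        return np.mean(np.stack(list(self.maps)), axis=0)

mem = MaskMemoryUnit(window=10)
for epoch in range(25):
    noisy_map = np.ones(4) + 0.1 * np.sin(epoch)  # stand-in saliency map
    pred = 1 if epoch % 5 else 0                  # occasional "bad" epochs
    mem.update(noisy_map, perturbed_pred=pred, original_pred=1)
print(mem.final_map().shape)  # (4,)
```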
m
05aa29025180b2d66e1842606ec651da
Their experiments on real-world images from the DPED dataset show that their model outperforms state-of-the-art methods such as EDSR {{cite:6121a2a0d388c660b17220de585fbbad6c7d64a1}}, ESRGAN {{cite:c25ecb9bebd0aa299ac5d41985d6bb12370f4b04}}, ZSSR {{cite:8866ca7e29d97af45be24075c7c76faa9b725f3b}}, and K-ZSSR, resulting in lower noise and better visual quality. They also won the NTIRE 2020 challenge by a significant margin when scored using human perception and the Perceptual Index (PI). This is why we are keen to investigate RealSR's performance on our WideRealSR dataset.
m
bb02f6b16eb23a10092744b4ca3e8789
The proof of the theorem can be found in, for example, chapter 3 of Ref. {{cite:9c4600c5d4987c529c1b0d198d089e37aa3b57e4}}.
r
25284ffb0a33c247327649889ad823cc
The structure of multipartite entanglement is far more complex than that in bipartite systems. There are various inequivalent entanglement classes {{cite:8a13ce64fb8726f8a5dc9e6365842e5aa6f2eff3}}, {{cite:923ceab185a6a52907389d262ff4b27f3c55d3f0}}. There are also such peculiar properties as monogamy relations {{cite:9d75281ff475063e56b2bb68bb50e4c62f18d453}}, {{cite:1d3980cce5262fb4b2f39103983efda4e0923557}}, {{cite:3b279d8e4de749ebedc866789c90659fa0e4a48c}} exhibited by some correlations of particles.
i
7ba170ba1283c02207eea4c394a2a22a
In our work, we consider the same LQR setup as in {{cite:ba7f48697fbcdbde7126233207e997354d95df57}}. That is, we also consider a deterministic policy rather than a stochastic policy, a randomly distributed initial state, and noiseless dynamics. Note that it is known {{cite:2bf776cff3d5637aa95179870544f91d86507280}} that the inclusion of additive zero-mean white noise in the LQR dynamics (eq:lqr dynamics) does not change the optimal control.
m
e5587abcd47ab9ec3a1aab4d3f00b966
{{formula:d36641bf-bd2c-4a0e-b80f-64eb5fa60852}} Base: Minimap2 {{cite:bc6f87d020c653c85947c911657c2f1461b78f2b}} is a state-of-the-art software read mapper baseline for both short and long reads. GenCache {{cite:0d892bb6d56b93b852bfbaef6c0d30f35b478422}} and Darwin {{cite:9e3ded56a8c953edc536d51448ce7cea30c9d835}} are state-of-the-art hardware read mappers for short and long reads, respectively. {{formula:d945abf9-2a52-4c37-8f64-80266359a374}} GS-Ext: Base integrated with an implementation of the GenStore filter without in-storage support (Ext stands for external to storage). GS-Ext concurrently filters reads while using Base to perform read mapping for unfiltered reads. The goal of evaluating GS-Ext is to decouple the effects of GenStore's two major benefits: 1) alleviating I/O bottlenecks via efficient in-storage processing and 2) reducing the workload of the read mapper by filtering reads with simple operations. GS-Ext obtains the second benefit but not the first. For software read mappers, we evaluate a pure software implementation of GenStore that concurrently runs with Base. We do not evaluate a separate GS-Ext configuration for the long read software mapper since Base (Minimap2 {{cite:bc6f87d020c653c85947c911657c2f1461b78f2b}}) already incorporates the chaining filter used in GenStore-NM. GenStore-NM implements part of this chaining filter at low cost (enabled by the key observations in Section 4.3) to fit within the constraints of in-storage processing. For hardware read mappers, we evaluate a hardware implementation of GenStore outside the SSD. {{formula:12aa2558-a33b-4eca-a09c-a20748e4e15c}} GS: Base integrated with the hardware GenStore filtering accelerators, GenStore-EM and GenStore-NM, as described in Sections 4.2 and 4.3. GS concurrently filters reads inside the SSD while using Base to perform read mapping for unfiltered reads.
m
2d2689e6ce3055b0cf4c65d2165f7b5d
Tab. REF shows the PSNR metric between cover images {{formula:4fbb79f6-b391-4614-8fc6-ed47e6075a30}} and watermarked images {{formula:9b78de16-05e8-499e-8d96-a3d81aa5eb9c}} produced by 6 specialized models against the corresponding noises and 1 combined model. Moreover, Fig. REF illustrates the bit accuracy of different models against 6 distortions. In general, when compared with those specialized models, our 5 models, i.e. Identity, Crop, Cropout, Gaussian, and JPEG, have higher PSNR values. Meanwhile, our solution obtains higher bit accuracy than the baseline method. In particular, our method achieves both a +3.52 dB gain over the baseline in imperceptibility under JPEG compression and 18.4% higher bit accuracy in terms of robustness. When comparing the combined models, we have almost the same performance on the PSNR metric, but the robustness of our algorithm is much better than the baseline against the Identity, Cropout, Gaussian, and JPEG compression distortions, among which 18.6% higher bit accuracy is achieved under the JPEG compression distortion. Besides, these two metrics, i.e. the PSNR and the bit accuracy, demonstrate that our method achieves a better balance than HiDDeN {{cite:8b2978aa70b7569df2d3e1798655a931e5b3c9f5}} between the imperceptibility and the robustness of watermarking.
r
2fcf9ff0d9315aa702974590a2b4f821
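For reference, the PSNR metric reported in the row above has the standard definition sketched below. This pure-Python version (flat pixel lists, 8-bit peak value) is a generic illustration, not the paper's evaluation code, which would operate on full image arrays.

```python
import math

def psnr(cover, watermarked, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between two equal-size images,
    given here as flat lists of pixel intensities."""
    mse = sum((c - w) ** 2 for c, w in zip(cover, watermarked)) / len(cover)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# A uniform perturbation of 16 gray levels on an 8-bit image:
print(round(psnr([0, 0, 0, 0], [16, 16, 16, 16]), 2))  # prints 24.05
```

Higher PSNR means the watermarked image is closer to the cover image, i.e., better imperceptibility; the bit accuracy measures robustness separately.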
Even though this methodology, unlike {{cite:2b8686fd50d9800e0d89614fa9182d9f5b577f83}}, can be extended to any nonlinear elasticity problem, for simplicity we have limited our explanation to the scenario of a truss undergoing small deformations. When formulated as in {{cite:014c3ab12401567905691edb1d9122491d6464cd}}, this problem inherently becomes a nonconvex optimization {{cite:d8705a3a0738bc49733e84c10148f90d8463c2e4}} problem. In this paper, we utilize the method of {{cite:6f86217af78d2c751b5b77234a2bae7628300ca0}}, which converts the problem into a set of nonlinear equations that can be solved easily.
m
587e9a6b8a037d89ad52b788f22ab92f
First we recall the semidefinite program for finding least Euclidean distortion embeddings of finite metric spaces. Suppose we consider a finite metric space {{formula:4d62f976-4ad0-43d2-a68a-a0fbed027128}} with distance function {{formula:37720972-b68b-40bc-ba2d-a873a2a69a24}} . Then, as first observed by Linial, London, Rabinovich {{cite:acc1b03b5cd51e658503b9053387cdcffceab700}}, we can find a least distortion embedding of {{formula:8720d8ab-29c1-48e5-88d9-b0d21c513f6c}} into a Hilbert space algorithmically by solving the following semidefinite program {{formula:d61151ab-d156-49d3-8e2b-0860cb75f95b}}
m
2165fa6b05c03116bf8138da52c7309e
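For reference, the Linial–London–Rabinovich program mentioned in the row above is usually written in Gram-matrix form. Writing {{formula:placeholder}}-free standard notation (a generic sketch that may differ from the paper's exact formulation), let Q be the Gram matrix of the embedded points f(x_i), so that Q_ii + Q_jj - 2Q_ij = ||f(x_i) - f(x_j)||^2:

```latex
\begin{aligned}
\min_{c^2,\; Q \succeq 0} \quad & c^2 \\
\text{s.t.} \quad & d(x_i, x_j)^2 \;\le\; Q_{ii} + Q_{jj} - 2\,Q_{ij}
      \;\le\; c^2\, d(x_i, x_j)^2 \qquad \text{for all } i < j .
\end{aligned}
```

The optimal value of c is the least distortion, and an embedding achieving it is recovered from a Cholesky or eigenvalue factorization of the optimal Q.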
The survival-incorporated median is simpler to compute and requires fewer assumptions than the SACE. Aside from the monotonicity assumption described in Section , identification and estimation of the SACE require additional assumptions. {{cite:df2e6c48f27482c7b346183336ecf268c0eed6f2}} assumed that, in a randomized setting, in survivors under each treatment, there exists a baseline covariate {{formula:1526360d-34eb-41fd-9801-ea8640811e77}} that does not predict potential clinical outcomes {{formula:ad6b3e61-b7d0-405f-bd89-15fd9b6f019e}} , and {{formula:b4a0edbc-fa44-4314-b94d-0c363726c2ae}} has a distribution that is different between the always-survivors and the protected. {{cite:31d00340c8bf96da42d27d22fa591caa7a866910}} introduced post-treatment covariates that may mediate the treatment effects on survival and clinical outcomes to identify the SACE. Then {{cite:31d00340c8bf96da42d27d22fa591caa7a866910}} assumed that, for subjects under treatment, the post-treatment covariate takes the value it would have taken under no treatment, or the other way around. This assumption is a type of cross-worlds assumption ({{cite:5a92f875314b6187072fb52429718c96fb83fb2b}}; {{cite:c5b7dcc83c1ea1a2d5a8c6c3fa3a0edd86ad06f4}}) that relies on two simultaneous but different situations, treatment and no treatment. For identification and estimation of the SACE, these assumptions are either technical or difficult to verify in practice.
d
d5c3e53a70dd3f2f02f6ba079734ea8f
Sharp video reconstruction is an ill-posed problem because there are infinitely many motion trajectories whose temporal averages correspond to the same blurry frame. To compensate for the ambiguity, previous works {{cite:046f83fbbc2868e1c8e9e15181a776f9b762cdda}}, {{cite:17a05f21c72d0b783481068ae9576898d0f49c5b}}, {{cite:a73f4906af93ac88f392abf9f1a01a5f97014f41}}, {{cite:de79b7355ab8bb834ab28808a8137e3229b73827}}, {{cite:a12e302d7d12013ff3d88abfd67a3b38436bc9bf}}, {{cite:54f31263c5e63fa31434d773003ab7674eae3190}}, {{cite:760cb595f85d2d825e1ca1f86369bccfd7e91412}}, {{cite:9890a4363f1d820ebd33ccf0ea6f64e4c1f0823f}}, {{cite:9910518173db78d386d2d9f63c60598fe6b44faf}} exploit event data as an auxiliary input, which provides additional information during the exposure interval at a finer temporal resolution, as shown in Figure REF . Even with the event input, difficult challenges remain. The events fail to capture the complete motion information. The video reconstruction quality is determined not only by the appearance of each individual frame but also the temporal smoothness. The immense density of events creates another obstacle for effective and efficient processing. The success of video deblurring depends on how the blurry image, the events, and priors about video sequences are integrated together. This calls for suitable video representations and prediction algorithms.
i
e043fc9d21f34535cae1683b4b5a8199
Recently, deep learning-based methods have been applied to automatic retinal blood vessel segmentation and have accomplished very promising results. Although the vast majority of existing deep learning-based methods for retinal vessel segmentation employ UNet-shaped networks {{cite:8031680193484383a2d622fb7c5c3229ab452ced}}, there are a few methods that utilize traditional CNN architectures. For example, Liskowski et al. {{cite:0683dc1e3aab856357dec6d8accf45fa2530d12e}} propose a network that consists of consecutive convolutional {{formula:58e488b1-415d-4831-a981-6907c2f959ae}} max pooling and fully connected layers. The proposed network takes {{formula:94c843bc-212e-4c07-a401-7b1684e0926d}} image patches as input and classifies whether or not they are centered on vessel pixels. Maninis et al. {{cite:76d723898d28ec68499712832d5d42e4b903576b}} propose an image-to-image regression approach for retinal vessel segmentation. The method uses the pre-trained VGG {{cite:bb85e9fc37f9404ac55873ca0cf39629188737d5}} network to extract multi-scale features, and a regression layer to classify the vessel pixels. More precisely, the output of each layer of the pre-trained VGG network is extracted right before the down-sampling (max-pooling) layers, then resized to the original image size and concatenated so as to create a volume of fine-to-coarse features. The final vessel map is then obtained by feeding the extracted fine-to-coarse feature volume forward through the regression layer.
m
c448e1d2dd66d8c2a47b7cd115bd6681
Effect of high-frequency adversarial learning. In this experiment, we show the effectiveness of the proposed high-frequency adversarial learning. For the generator network, we use the 3D U-Net for standard adversarial learning, and the frequency-supervised synthesis network for our high-frequency adversarial learning, as shown in Fig. REF , where 3D U-Net is the base network. For the discriminator, both settings use the relativistic average discriminator introduced in {{cite:83b8d8e4add48f4b6c806de56d767626c66159b9}} (see Section REF ). The network architecture of the discriminator is the same as the encoder of 3D U-Net. 3D U-Net combined with standard adversarial learning leads to a 3D GAN-based synthesis model, as in the work of Nie et al. {{cite:063f964c3cd68b0518cebee6ae1afc557a765d5a}}. As shown in Table REF , the 3D GAN achieves better synthesis performance than 3D U-Net alone. Our high-frequency adversarial learning further improves the performance of our frequency-supervised synthesis network. From Fig. REF , we observe that the proposed method yields synthesized CT images with better perceptual quality, in particular higher structural similarity and more anatomical details.
r
9bf5fb8db44a9e84b2ff0a34d8e88989
Several methods that treat the rotation of nuclei have been developed. As a microscopic theory, the cranking model {{cite:5c0050f392c0c0927d15d407bcdc6066c17bfbc2}} has been proposed. The Inglis formula {{cite:b7244cc8972c84c99b336227d433686d26a4ada0}} and the Belyaev formula {{cite:cfcf19224a00b209a5a818579c97654b957c12a3}} have been derived for the moment-of-inertia from the cranking model. The Thouless-Valatin formula {{cite:bf6a2fc25c98b7cdba429b0a862b1d9205dd9291}} has been obtained in connection with the random phase approximation (RPA). The angular momentum projection (AMP) has been developed {{cite:5c0050f392c0c0927d15d407bcdc6066c17bfbc2}}, {{cite:5fc5365b0e7d69ac20ee7485d2ea804cc02d8f6f}}, {{cite:af7e586aad120f5922c293d4f7f341e2aa27028f}}, {{cite:4241628765fba0821e9cd5095b2fea4a455bc969}}, {{cite:b7cf4877f90fafb32954cd4d1ad8f110b57fc144}}, {{cite:8cf89408c98a90c7988e125b5d7337e1f11965a8}}, {{cite:fd1ab870b49b0495913106aa51ab5f1a79e50992}}, in which the degenerate intrinsic states along the NG mode are superposed. The AMP makes it possible to calculate rotational energies straightforwardly. The {{formula:3cbf5303-ac25-47f7-9999-66813b3e28d2}} rule of the excitation energy with the moment-of-inertia is derived from the AMP under a reasonable approximation for well-deformed heavy nuclei {{cite:5fc5365b0e7d69ac20ee7485d2ea804cc02d8f6f}}, {{cite:af7e586aad120f5922c293d4f7f341e2aa27028f}}, {{cite:fa6e9b734b6786495bb7ab6ec74305cf87b34c2d}}, {{cite:1b10b9424b3bdb984abc17fd061f16b3a9e9944a}}, {{cite:6443a784e5a3115c5b1bf72a9ccb5af9c031932c}}, {{cite:5c0050f392c0c0927d15d407bcdc6066c17bfbc2}}. However, for light or weakly-deformed nuclei, it has not yet become clear whether the same arguments hold.
i
3da5ff9eea362e213537a653f8d72cc0
The common neural network architectures that achieve the state-of-the-art results usually have tens of billions of trainable parameters {{cite:bea37ad128a79853241d9a45a3ba230c813e60e3}}, {{cite:22da8a6e1797df320d57badca79c76fefae94fb1}}, {{cite:c784f1091f7daa32d11a1835e18f384b79a1142a}}, leading to a problem that training and inference of these models are computationally expensive and memory intensive. To address this problem, researchers have developed many practical algorithms to compress the network structure while keeping the original network's expressive power {{cite:d1d0962fbac73f054cac58e955dd8d87c6573b0f}}, {{cite:eb7e13993049ce937eeed2eed165a22bb634c7a3}}, {{cite:cf4f6745595f358b37aa1decf52a26e4d8127edc}}, {{cite:a695f0014dbe854b1bde9f7e818bcf3f06eea89e}}.
i
f642d7222fa64ef6643748d734057f3a
In Fig. REF , we sketch the phase diagram of two-flavor QCD in the {{formula:5fcaeb7a-a9b0-4b98-b658-5966a558d94b}} –{{formula:06a1f322-07cb-44e7-ac7a-6fc8354d2f42}} plane. In the lower left region, we have the hadronic phase where chiral symmetry is broken and quarks are confined. As the temperature increases, one enters the quark-gluon plasma phase. Along the {{formula:303999f2-aa0b-4d3f-ba65-48144570a118}} axis, there is a transition from the hadronic phase to a Bose-condensed phase of charged pions. In this phase, the {{formula:854203fc-f138-45ba-b9c0-b0d04a779d5a}} symmetry is broken, giving rise to a massless Goldstone boson, which is a mixture of {{formula:24544371-b778-496d-9f36-660127963f04}} and {{formula:0d465684-859d-4c01-9eff-fbb9b0fa020a}} . For large isospin chemical potential and low temperature, one expects that quarks are the relevant degrees of freedom rather than pions {{cite:d19409d887dd0af81718ae46ab338785ee64b129}} . The Fermi surface that exists when the interactions are turned off is rendered unstable once they are turned on, since the interactions are attractive. The system is then described in terms of loosely bound Cooper pairs instead of tightly bound pions. Since the symmetry breaking pattern is the same, there is a cross-over transition rather than a true phase transition between the BEC and the BCS phases {{cite:d19409d887dd0af81718ae46ab338785ee64b129}}. {{figure:1e371bb4-72dd-464d-ba26-15a32ab8f21c}}
i
20f5f2ace81a2624183dc050187d6578
Along with the increase of the count rates, the model above was also used to fit the obsids with count rate {{formula:6958a72b-e06a-4ef5-94a9-3fda390fbc46}} 40. However, the parameters derived from the above model are unreasonable, e.g., the blackbody radius {{formula:241b6dc8-6e27-4611-9b28-3bafef0f1a8b}} is much larger than 10 km and up to more than 100 km, i.e., much larger than the NS radius. Moreover, the blackbody temperature {{formula:c3e4dc2c-587e-4f09-b343-f98e623f74cc}} decreases as the count rate increases, which is also very unlikely. For the obsids in the rising phase of the outburst in 2021 with luminosity (4.5–13.8{{formula:7daa305c-c1c6-4e84-a1d6-ed6971b40220}} ) and count rate (70–190 cts/s), which are similar to those of the two obsids of the soft-to-hard state transition (see the next paragraph), the inferred {{formula:19f4ed2e-c763-45d1-9d1f-3961716b4cb5}} is 0.39–0.11 keV and 20–120 km. Under this condition, the spherical corona scenario is disfavored during the rising phase of the outburst. Thus, we assume the NS surface emission (the temperature and the area) does not change during the outbursts, and take it as part of the seed photons of the Comptonization, i.e., tbabs*thcomp*(bb+diskbb) with the fixed blackbody parameters derived from the spectral fitting at the outburst onset. This assumption should underestimate the NS surface emission and thus leads to an overestimate of the accretion disk emission/radius, since {{formula:47cb6d72-60df-43d1-a773-d3ea5af59cde}} should be higher as the accretion rate increases. The underestimation of the inclination angle (we take {{formula:3f64d0a7-19eb-4175-b6ba-c2ee580c97b7}} = 0 to calculate the inner disk radius) could offset the above overestimation of the disk emission/radius.
Some works {{cite:398e74bbd9066b682d27ac07921eb57cedc349ca}}, {{cite:3d31bde5967174f4f73d0ba85d9e6921f5274b6a}} indicated that the NS surface temperature could be up to 0.6–0.8 keV and that part of the NS surface emission is blocked by the disk {{cite:398e74bbd9066b682d27ac07921eb57cedc349ca}} and not involved in the Comptonization, which could also offset the overestimate of the accretion disk emission/radius. As shown in the following paragraph, the derived inner radii are consistent with the NS radius, which indicates that this assumption has only a mild influence on the spectral fitting and is therefore acceptable.
r
9940c095337facfb8d994db5e56c9fe8
A number of RT investigations focused on the K I D lines were carried out in the past {{cite:9ed6236672aa9b68069c35a0eff1f32eca4375eb}}, {{cite:8beb681c3b9afa1b242a4360157610f17f1a58c2}}, {{cite:0128685aeaf9fbe2e3d76f42e8d4cdac34db6d3e}}, {{cite:d96254380a957b0798a992d46f914d8f1809ce84}}, although they did not take scattering polarization and its magnetic sensitivity into account. The hyperfine structure (HFS) of potassium was not considered either, even though its two most abundant stable isotopes, {{formula:0b4f2b14-6937-47ec-95f7-70a90d427e08}} K and {{formula:91de7b13-bc9f-4e37-bccd-272d70e92476}} K, have nuclear spin {{formula:1aa8074b-73c1-4380-896f-fad6387fb35e}} and even though all the atomic levels involved in the transitions responsible for the D{{formula:c17b569f-fda5-48a5-bae2-5978e6ec542e}} and D{{formula:1dfb4122-b611-49fa-b098-70d646dc2ae7}} lines present considerable hyperfine splitting {{cite:5fec71221d569e947a49f6c4a947d4f4f5f437e4}}, {{cite:92742afc7517a37595d2ef390de0fed046fb57bb}}, {{cite:0c7a7870dbea8841aafd0e48fd1a0f75746ba810}}. Although HFS is known to only have an appreciable impact on the intensity profiles for a small number of spectral lines, such as the infrared Mn I lines investigated by {{cite:3bee77dd22a1390a91447eac70fcdc9b8520e0ba}}, it is essential for modeling the scattering polarization of various other resonance lines of interest, such as the Ba II and Na I D lines {{cite:b8ee854c8871ea8f0c74835ca51aec6a3c8c6946}}, {{cite:1467e998f7f9d748fe28898c1f236152ae1d7bb4}}, {{cite:62c6b03f50fd18b4bc2426207f90e143d959629b}}. {{figure:d5bf1b64-bd5b-4867-9914-486a2e50e57e}}
i
50a1694a16e0381cff5c382885bde9c3
For agents with equal entitlement (and additive valuations over chores), we show that allocation mechanisms not based on picking sequences provide better approximations with respect to the APS than those achievable by picking sequences (for which we present a lower bound in Theorem REF ). We consider algorithm AlgChores of {{cite:50383d8835bdd3f2d85971e64c04f366d2009f5e}}, for which it was shown in {{cite:50383d8835bdd3f2d85971e64c04f366d2009f5e}} that it gives every agent a bundle of disvalue not larger than {{formula:3301ba1b-0f85-4e09-b063-41058e4b7fcf}} times her MMS. We extend this result to the more demanding benchmark of the APS.
r
545bf02864d9efe7b45e6eae4ebc6f58
Nevertheless, an ML-based NN has a few advantages over the existing pipelines that warrant future study. First of all, as multiple authors have pointed out (see, e.g., Refs. {{cite:e8f72f4f8020c2494e4743de18747885170208c5}}, {{cite:144ffcf336e97a8640d8d9d830ee728cf54215b7}}, {{cite:8903bc8d869280d9acada325227c2d8e9c6699a6}}), an NN is highly efficient in prediction. Indeed, as we showed in Fig. REF , it takes the CNN-class only 30 ms to detect and classify a GW signal from a 256-s data segment. In comparison, the typical latency is about 6 s for GstLAL, indicating the possibility of accelerating the existing pipelines even further.
d
16a9404e985605d08dafba383a837299
Pre-training is also a realistic assumption. We have argued that application domains (e.g., computer vision and natural language understanding) have vast amounts of curated data and increasingly available pre-trained models that CL algorithms could readily exploit. Additionally, we demonstrated in sec:expmimg that pre-training with even a modest dataset is very beneficial, which is a promising result for applying this strategy to other domains. In particular, we note these settings are analogous to initializing a CL algorithm with a random model but then having a large first batch (i.e., containing samples from several classes), a schedule that is gaining increasing traction in the literature {{cite:34c93c11a1a457c56a796807d815bace4b45a541}}, {{cite:564bb885ec4d4b24588de947c559fcde1eb683b1}}. Our strategy extends naturally to these settings if we allow offline training on the first batch to replace pre-training.
d
7fc108204ff730254c904e72c3b4021e
Figure REF (right hand side) shows results of our numerical scan. The IC86 event rate is plotted against the mass of the dark matter particle. We see two groups of models: The lower group consists mainly of models with a large mass splitting between {{formula:2ff90c9d-8531-4fe4-a20d-3fb522913b2d}} and {{formula:0bcaaf1e-8f35-4f8e-8824-182073473a74}} , for which inelastic scattering is kinematically forbidden. The upper group with significantly higher event rates consists of models with small mass splittings and dominant inelastic dark matter. Note that we show only those points that survive the following constraints: PLANCK relic density {{cite:c6c2d48aa38980512c2eb067b4358f189b6dc388}}, the mass of the Higgs boson {{cite:0d7ff0f2b20e74b5c6eba7d863057c908ba502e1}}, LFV limits {{cite:fa6547cc7841846996d8c30f8b734d14a1bece49}}, {{cite:cb4f15b45037f2ce1b04eb46bd84b8fec4898508}}, {{cite:c7bd8679509d970d820b904ee4fbd137e227db93}}, and limits on the Standard Model neutrino masses (we use the Casas-Ibarra parametrization {{cite:6383c94e6c0ac870ab91830cb928edac72bbffc6}} to calculate the Yukawa coupling matrix). We plot also the LEP exclusion from the invisible {{formula:21b06d80-f7ac-42c8-bb21-9a9ef9e5992d}} width {{cite:7af3626b6000051ed70dfb409b2ef6e42ea60e30}}, {{cite:ab7bc84e3cdfa5c5af47d645d91eca282fb39df3}}, to show that it does not constrain our parameter space. The remaining models can further be constrained by direct detection limits, where the XENON1T limit on the spin-independent WIMP-nucleon cross section {{cite:e88a2c0c0b1a04dd5432a2906d391b90029eb7e2}} imposes the strongest and only constraint on our parameter space. In both groups of models we draw the XENON-excluded ones in orange, whereas in the inelastic group they are excluded for their elastic scattering cross section, since the XENON limit does not apply to inelastic dark matter.
r
acf5819272009a12c7e5c4802de8c0ab
We have interpreted bursts as a simple combination of pulses, without taking account of the temporal variation of the Lorentz factor {{formula:6b43bd7d-8185-4636-9323-d0513da91b92}} of the jet. If this is accounted for, each pulse may have different {{formula:4c9598a1-de18-454e-a03b-5fee9bed29f6}} but the same {{formula:033711b0-5070-411f-a21c-dcd5d2b4ec45}} . We should then average the polarization with respect to fluence of each pulse having different {{formula:b370bb03-001b-44a9-a4de-474a90aed07e}} {{cite:6091df0bd9c406d1ac1ad22440af2f06622d3f6c}}, {{cite:d859a6b5232e0201cf44cf3b9dac0fae9c722795}}. However, in the SO model, the cumulative distribution of measurable {{formula:04da118b-fcdc-4a2b-9bb3-da48655289d3}} will not be changed significantly as long as {{formula:85457b6c-e0fa-4472-a5cf-6e15a0eddfa3}} , because {{formula:9fea2189-ceae-4a2d-bd0c-6e05743d6c3a}} is clustered into a small range for {{formula:63bcf74c-4a55-45dd-acde-77516f3b6492}} and {{formula:682cfcf3-1f1b-4326-8af6-643e511ae61a}} . To average the polarization in the case of {{formula:15ada55d-9c89-4dfb-a493-fe6da3dd62cb}} , the relation between the luminosity and the Lorentz factor for each pulse is required to predict the polarization distribution.
d
dcc690efb30d384c58cac346e86c7f21
This section provides details about the three major steps in the proposed accident detection framework. These steps involve detecting interesting road-users by applying the state-of-the-art YOLOv4 {{cite:c665eef45e55ae6ff7998a250eb012eaa877c775}} method with a pre-trained model based on deep convolutional neural networks, tracking the movements of the detected road-users using the Kalman filter approach, and monitoring their trajectories to analyze their motion behaviors and detect hazardous abnormalities that can lead to mild or severe crashes. The proposed framework is purposely designed with efficient algorithms in order to be applicable in real-time traffic monitoring systems.
m
430d8dfb92cdd521f7ab0ceba1556744
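The Kalman-filter tracking step described in the row above can be illustrated with a minimal one-dimensional constant-velocity filter. This is a generic textbook sketch under assumed noise parameters, not the framework's implementation, which would track both image coordinates (and typically box dimensions) per road user.

```python
class Kalman1D:
    """Constant-velocity Kalman filter for one coordinate of a tracked
    road user; state is [position, velocity], measurements are positions."""

    def __init__(self, pos, q=1e-2, r=1.0):
        self.x = [pos, 0.0]                  # state estimate
        self.P = [[1.0, 0.0], [0.0, 1.0]]    # state covariance
        self.q, self.r = q, r                # process / measurement noise

    def predict(self, dt=1.0):
        x, P = self.x, self.P
        self.x = [x[0] + dt * x[1], x[1]]    # x' = F x with F = [[1, dt], [0, 1]]
        p00 = P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + self.q
        self.P = [[p00, P[0][1] + dt * P[1][1]],
                  [P[1][0] + dt * P[1][1], P[1][1] + self.q]]
        return self.x[0]

    def update(self, z):
        s = self.P[0][0] + self.r            # innovation variance (H = [1, 0])
        k0, k1 = self.P[0][0] / s, self.P[1][0] / s   # Kalman gain
        resid = z - self.x[0]
        self.x = [self.x[0] + k0 * resid, self.x[1] + k1 * resid]
        P = self.P
        self.P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
                  [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
        return self.x[0]
```

Running predict/update on successive detections of a road user moving at a steady rate quickly locks onto the true velocity, which is the quantity the trajectory-monitoring stage consumes for motion analysis.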
The {{formula:58269b9e-3f88-42e6-9454-e0682f65fb95}} parameter is measured to be {{formula:5baed2ab-6a79-437d-b51a-34cd4ac73fff}} , consistent with no direct {{formula:d37f78b6-eaed-4046-b315-db975382289d}} violation ({{formula:a0c6fe70-37f0-4a71-b14b-b0925c676259}} ). The average of the heavy and light mass eigenstate decay widths is determined to be {{formula:c3539379-983d-4215-a5a7-a88e3bcb9c92}} , consistent with the world-average value {{formula:3cdcd089-e38b-4425-a66e-1384a16e15bd}}  {{cite:be90854f8fc7e90846576ea1643dfeb5617eb30b}}. The mass difference between the heavy and light meson mass eigenstates is measured to be {{formula:084c281d-b838-4152-a84e-b884fb00f39a}} , consistent with the theoretical prediction {{formula:dfbc081a-f08e-4fba-bfdd-209b9a3dcf4c}}  {{cite:41930f88770d8c41a6cdaec0d6d38ae6ca0dcf1d}}, and in slight tension with the world-average value {{formula:b6f2df7a-6ed0-4804-a3de-a860fe393c6e}}  {{cite:be90854f8fc7e90846576ea1643dfeb5617eb30b}}. The uncertainties in all these measured parameters are dominated by the statistical component. This analysis represents the first measurement by CMS of the mass difference {{formula:f813f9ce-44da-40e7-be35-2dcc071a1d2c}} between the heavy and light mass eigenstates and of the direct {{formula:96beb83b-b06f-482b-9a61-32ecb95d823a}} observable {{formula:4032631f-d003-46b2-9b6a-29931569fb9f}} .
r
a3101dc12b6b2253aa0919093d48a1ba
In the modified nonrelativistic quark model, we determine two parameters, {{formula:ab0d3425-ffdf-4e2a-a56b-f65ea44e225a}}  GeV of Eq. (REF ) and {{formula:6c01c5cc-4532-4932-9034-0e64f38c0820}}  GeV of Eq. (REF ), by fitting the masses of the bottom mesons {{formula:f3a279f3-d76a-49d6-ba21-1fc3ac32faab}} and {{formula:295eae69-a34b-4894-aaa0-b14d022ace97}} , which have already been well established as the {{formula:cdb46716-cfe4-4984-bb9c-9dd43d0072d3}} and {{formula:765ac123-9dce-4fb6-a6c3-e549a91c808b}} , respectively. With these values, we calculated the mass spectrum of the bottom mesons, as shown in Table REF , where we also present the predictions with other models without considering the screening effect {{cite:89b2b97abc12123720811487c119f3a7453645f5}}, {{cite:a70139d2dda5a40003fc8de5d8792acaccc8cf69}}, {{cite:7a709333ed5f7ad31123d7adcbf551a22dcac860}}, {{cite:53a94e0a7759bed92a4e3011593f50af72f6fad5}}, {{cite:5f379012ca4198826e7172b9948f149ce230b098}}, {{cite:101d4fa2319e06c108aaf6595f5351c71b9ff362}}. It should be stressed that the mixing angles are obtained as {{formula:4b291b8c-9621-416c-89e1-15843347c38d}} , {{formula:1036b430-ac2c-4cda-a3a2-39e5147901b7}} , {{formula:fd59b483-86a6-4cb2-b4d2-ae43adb5ea9c}} , {{formula:a1a20a4c-8f7f-4a83-a316-220475445190}} and {{formula:086483da-ca8e-4422-85bf-127d4710828c}} by solving the potential model with the Hamiltonian of Eq. (REF ), which are close to the heavy quark limit mixing angles {{formula:a2a7492e-a1e7-45f0-83f4-c2e4284870c7}} , {{formula:be8304f6-4b88-49fc-b44f-ace43c471f27}} and {{formula:857ca41e-8354-42a1-9573-23d744de9a2c}}  {{cite:074a71c4e93c09cdd99c5a051848b2a2d92d02c7}}. {{table:1b015815-740c-4462-9e19-5b0e1e409ab4}}
r
63deb73a276662b68465082564b5a0f3
CVD growth of graphene on Ge (001) and Cu foils. We use two preparations of graphene: graphene that is grown by chemical vapor deposition (CVD) directly on Ge (001), and graphene that is grown by CVD on Cu foils and then layer-transferred onto Ge (001) or GaAs (001). For CVD graphene growth on Ge (001) we followed the growth conditions described in Refs. {{cite:859fc57b840b63892da47d1590bdaa6000c21902}}, {{cite:eb2abba9e01947d64c82246362d8d077014eccee}}. CVD graphene growth on Cu foils was performed at 1050 {{formula:f72dde9a-0854-4db1-b2ab-4f3905de8697}} C using ultra high purity CH{{formula:e6d13ba1-65a1-4d91-aeda-610a73ea6368}} , as described in Ref. {{cite:3957d664f46dee83b9e7167f09bfbb4f094a8c57}}.
m
65c707102f03e5606695d041baca9d18
The extracted knowledge will then be incorporated into a downstream task for better recommendation quality. Our downstream model is aligned with the production deployment, which includes several effective components such as target attention, search-based interest modeling {{cite:9dac5777548f3dd56c758e1db4eaec754c9d2063}} and the Co-Action Unit {{cite:825000abcce612771ab86fb5bbc336196b417517}}. Both the pre-training and downstream models are trained with Adam with a learning rate of 0.001 and a batch size of 2000. Besides, the loss weight {{formula:d0a736b0-cefe-4ad4-b508-7d0c019feefb}} in Equation REF is set to 0.25. All experiments are built on XDL2 {{cite:c67f5a1ae1bcd4c46a958805a93e4ba0ba2987c0}}.
m
408dd73d68ede24d16e6f6d8e9d3e994
In this section, we discuss empirical results to validate our theoretical findings. First, we compare the Gaussian predictions with the output distribution of the nodes of a wide stochastic network. Then we report the results obtained by training a stochastic network on MNIST, and on a binary version of it, with PAC-Bayesian methods like those from {{cite:89bbc421c81989a08a94081983ad063dcadb6880}}, {{cite:ac4bab43908d6aa91b761c1e1daf397a7e96b6e0}}, and with our Gaussian method. On both datasets, the Gaussian method led to tighter final generalization bounds.
r
1ddc57417d4be7b81126bb961a8d7084
For {{formula:c94c5c73-db55-45d3-ac78-4a3ddae04809}} we recover {{formula:e23fb481-ac56-488e-a2dd-c1e68716a845}} as previously described. For {{formula:ae0b257a-249b-475a-9f54-69fb5aed8195}} , the term {{formula:7a99e736-8223-4393-9b05-379a6def29c2}} will dominate for large {{formula:31585640-caab-41cc-bd49-9bbe75e93087}} . Given equation (REF ), this means that the fractal dimension will approach {{formula:7017930f-f681-4170-9466-07a737a6bd9e}} for large {{formula:f2bce189-cfbd-43ab-a402-cbc6406cfc56}} for all non-trivial rules. Although an integer {{formula:b2f87d90-d59f-4a70-903f-e96deb9b2895}} does not rule out a fractal object exhibiting scale invariance {{cite:b8232ff1223d3b0836fd40aa277bce3484a38030}}, visual inspection of the geometries generated by these rules confirms that they do indeed generate simple geometries that are non-fractal. These rules were therefore not considered further. {{figure:57dfd2a3-fd10-41d4-a881-446ec57e886d}}
r
212ee22eae494780ef581b0dd9198e00
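The fractal dimension of a generated geometry can be checked numerically with a standard box-counting estimate (a generic sketch, not the paper's code); a space-filling object should come out at dimension 2:

```python
import numpy as np

def box_counting_dimension(grid, sizes=(1, 2, 4, 8, 16)):
    """Estimate the box-counting dimension of a binary 2-D grid by fitting
    log N(s) against log(1/s), where N(s) is the number of s x s boxes
    containing at least one occupied cell."""
    counts = []
    for s in sizes:
        # Trim so the grid tiles evenly into s x s boxes.
        h, w = (grid.shape[0] // s) * s, (grid.shape[1] // s) * s
        boxes = grid[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# A filled square is non-fractal: its box-counting dimension is 2.
square = np.ones((64, 64), dtype=bool)
print(round(box_counting_dimension(square), 2))  # prints 2.0
```

An estimated dimension that sits at an integer across scales is consistent with the visual-inspection conclusion that such rules generate simple, non-fractal geometries.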
To show this, we use the following change-of-measure inequality, also known as the transportation lemma {{cite:5200e243eeb727b751747c4221a62a7f06dac3c3}}, {{cite:5a6c380dcbb121cb866f901dd0c2f9919d16964d}}, {{cite:99a4f7e27d380765561e40ffcfdfb3a8bf8c8da7}}.
d
ac15afaf227c8269860c2fb0ab2aa873
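For reference, one common form of this inequality (the Donsker–Varadhan variational bound; the paper's exact statement may differ in notation) is:

```latex
% Change-of-measure / transportation lemma: for probability measures
% Q absolutely continuous w.r.t. P, and any measurable f with
% \mathbb{E}_P[e^{f}] < \infty,
\mathbb{E}_{Q}[f] \;\le\; \mathrm{KL}(Q \,\|\, P) \;+\; \log \mathbb{E}_{P}\!\left[e^{f}\right].
```

Applied with a suitably scaled loss in place of {{formula-free}} generic \(f\), this is the standard route to PAC-Bayesian generalization bounds.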
Fig. 2 illustrates the outage probability versus the CSI uncertainty level. As seen from Fig. 2, the outage probabilities of both “CSI/Nonrobust HWI of {{cite:a5530f5a44a826be9bc578a2c64d14f850c765b8}}” and “HWI/Nonrobust CSI of {{cite:03c5bffc0436a6a77ecfafdea4256c24ebbe9202}}” increase gradually with the channel estimation error. In particular, the outage probability of “Nonrobust CSI HWI of {{cite:cbe4faf766f97cb3c8450f4ae01b1e98c516fb65}}” is always 1, because the transceiver hardware impairments always degrade the received signal quality. By contrast, our proposed robust design always keeps the outage probability at 0 under both transceiver hardware impairments and imperfect CSI. Fig. 3 depicts the outage probability versus the transceiver hardware impairment level. Again, our proposed robust design keeps the outage probability at 0, whereas the performance of the other three schemes degrades as the transceiver hardware impairment level increases.
r
851ffccb4333640cd3877e96a0e242a0
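An outage probability of this kind is typically estimated by Monte Carlo simulation. The sketch below is a generic illustration (Rayleigh fading, unit-mean channel gain; not the system model of the paper): outage is the event that the instantaneous SNR falls below a threshold.

```python
import numpy as np

def outage_probability(snr_db, threshold_db, n_trials=100_000, seed=0):
    """Monte Carlo estimate of P(|h|^2 * SNR < threshold) under Rayleigh
    fading, where |h|^2 is exponential with unit mean. The closed form
    is 1 - exp(-thr/snr), which the estimate should approach."""
    rng = np.random.default_rng(seed)
    snr = 10 ** (snr_db / 10)
    thr = 10 ** (threshold_db / 10)
    gain = rng.exponential(scale=1.0, size=n_trials)  # |h|^2 samples
    return np.mean(gain * snr < thr)

est = outage_probability(snr_db=10.0, threshold_db=0.0)
exact = 1 - np.exp(-(10 ** (0.0 / 10)) / (10 ** (10.0 / 10)))  # ≈ 0.0952
```

Sweeping the estimation-error or impairment level in such a loop reproduces curves of the "outage probability versus uncertainty level" type shown in the figures.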
In this work, we present a novel one-shot GDA method, DiFa, for diverse generation and faithful adaptation. For faithful adaptation, we consider both attributes and styles. For global-level adaptation, we define the domain-gap direction as the difference between the CLIP embedding of the reference image and the mean embedding of the source images. For local-level adaptation, we introduce an attentive style (AS) loss on the intermediate layers of the CLIP image encoder: each intermediate token of an adapted image first finds the nearest token of the reference image, and their difference is then minimized so that the adapted generator captures the target style. For diverse generation, selective cross-domain consistency (SCC) is introduced to select and retain domain-sharing attributes in the editing latent {{formula:713ce163-da04-49f2-a8e7-56f1eaf2830e}} space, so as to inherit the diversity of the pre-trained generator. In particular, we use StyleGAN inversion models {{cite:c2f3088b65bd127b2cefcea93029734f6643951a}}, {{cite:fcd082c872e59495e05ad831fe0e127a8996846e}} to invert images from the source and target domains into the {{formula:591858aa-5eda-4876-a6cf-21e8e29ff8a5}} space. We then compute the direction {{formula:8a7466e6-57dc-4b00-ac94-72502016e9fb}} between the two domains, where smaller values in {{formula:6fdabfa5-8a28-4eef-a162-88ee7e26d579}} indicate that the corresponding latent variables in the {{formula:6d42beba-fb24-4762-a745-aa9fb6e3849c}} space are domain-sharing attributes. Selective cross-domain consistency encourages an adapted image and its corresponding source image to be similar in the domain-sharing attributes while remaining free to differ in the others. SCC thus lets the adapted generator inherit from the pre-trained generator selectively, so DiFa preserves the diversity of adapted images without sacrificing adaptation ability.
i
62064005f43d9915dcc09d5d016539ea
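The two adaptation signals can be sketched in a few lines of numpy (a toy illustration with random vectors standing in for CLIP embeddings; function names and shapes are hypothetical, not the authors' code):

```python
import numpy as np

def domain_gap_direction(ref_emb, source_embs):
    """Global level: reference CLIP embedding minus the mean embedding
    of the source images, normalized to a unit direction."""
    d = ref_emb - source_embs.mean(axis=0)
    return d / np.linalg.norm(d)

def attentive_style_loss(adapted_tokens, ref_tokens):
    """Local level (AS loss sketch): each adapted token is matched to its
    nearest reference token and their squared distance is minimized."""
    # Pairwise squared distances, shape (n_adapted, n_ref).
    dists = ((adapted_tokens[:, None, :] - ref_tokens[None, :, :]) ** 2).sum(-1)
    return dists.min(axis=1).mean()

rng = np.random.default_rng(0)
ref = rng.normal(size=16)
src = rng.normal(size=(8, 16))
print(domain_gap_direction(ref, src).shape)  # (16,)
tok = rng.normal(size=(5, 16))
print(attentive_style_loss(tok, tok))  # 0.0 — self-matching tokens
```

In the actual method, the global direction steers the adapted generator across the domain gap while the token-level matching transfers local style details.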
This investigation is an exploratory venture into developing and applying deep learning techniques to digital signal processing tasks, specifically the selection of digital filtering methods. For removing noise from ECG signals, we investigate the wavelet filter, as proposed by various studies for similar applications {{cite:7facb7257db6ce2969712714d01a01a7c148dbf4}}, {{cite:c5ffe2d7ac5b31cb2cd1e5f70fa0241967ca5d95}}, {{cite:a229d5f5e70c21e6716cfb541c539f684bb1293c}}, {{cite:c1a326e69ab3672adb2b3b643c331a37a011c119}}, and an elliptical filter, chosen for its frequency response at the defined cut-off frequency {{cite:2e0ce6eb0612bd1785043772580994d71183fe0e}}, {{cite:ed89728717d946ce02b5728fa7d2ddc47c0d9d7c}}. Various avenues of research remain to be explored, such as novel complex-valued networks capable of processing complex signals and graphical representations of signals for use in conjunction with machine learning models.
d
d7c4f96ff01763608635a17dc223a1fa
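An elliptic filter of the kind considered here can be designed and applied with SciPy. The parameters below (40 Hz cutoff, 0.5 dB passband ripple, 40 dB stopband attenuation) are an illustrative configuration, not the paper's exact settings:

```python
import numpy as np
from scipy import signal

def ellip_lowpass(ecg, fs, cutoff_hz=40.0, order=4, rp=0.5, rs=40.0):
    """Zero-phase elliptic low-pass filter: elliptic designs trade passband
    ripple (rp, dB) and stopband attenuation (rs, dB) for the sharpest
    roll-off at the cutoff; filtfilt removes the phase distortion."""
    b, a = signal.ellip(order, rp, rs, cutoff_hz, btype='low', fs=fs)
    return signal.filtfilt(b, a, ecg)

fs = 360  # Hz, a typical ECG sampling rate (e.g. MIT-BIH)
t = np.arange(0, 2, 1 / fs)
clean = np.sin(2 * np.pi * 1.2 * t)               # slow heartbeat-like component
noisy = clean + 0.5 * np.sin(2 * np.pi * 60, dtype=float) * np.ones_like(t) if False else clean + 0.5 * np.sin(2 * np.pi * 60 * t)  # 60 Hz powerline noise
denoised = ellip_lowpass(noisy, fs)
```

The sharp transition band is what motivates the elliptic design for a fixed cut-off: the 60 Hz interference lies in the stopband while the low-frequency ECG content passes nearly untouched.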
For the models with dropout, we use the uncertainty methods described by Gal and Ghahramani {{cite:21b9c2f845517f8a31d87eaf006c34c1651c4ce2}} to create the spike-and-slab prior, setting our dropout rate to 0.5. The target models that use differential privacy are optimized with the default DPSGD optimizer as described by Abadi et al. {{cite:c97cc28a6edb93e68d33796d5cd25eb4313a0e01}}. In this work, we use the default values for norm clipping and noise generation, 1.5 and 1.3 respectively. For both models, we set the same learning rate of 0.1 so that they can be compared directly.
m
12ff9966ae6e5740269450601f4bdcf6
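A minimal numpy sketch of the MC-dropout idea (keep dropout active at test time and average several stochastic passes; the toy two-layer network and all shapes below are illustrative, not the paper's model):

```python
import numpy as np

def mc_dropout_predict(x, W1, b1, W2, b2, p=0.5, n_samples=100, seed=0):
    """Run n_samples stochastic forward passes with dropout rate p on the
    hidden layer; the mean is the prediction and the standard deviation
    is an estimate of predictive uncertainty."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_samples):
        mask = rng.random(W1.shape[1]) >= p            # drop units w.p. p
        h = np.maximum((x @ W1 + b1) * mask / (1 - p), 0)  # inverted dropout
        preds.append(h @ W2 + b2)
    preds = np.array(preds)
    return preds.mean(axis=0), preds.std(axis=0)

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(4, 32)), rng.normal(size=32)
W2, b2 = rng.normal(size=(32, 2)), rng.normal(size=2)
x = rng.normal(size=4)
mean_pred, std_pred = mc_dropout_predict(x, W1, b1, W2, b2)
```

With p = 0 every pass is identical and the uncertainty collapses to zero, which is a useful sanity check on the implementation.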
For the CIFAR and Mini-ImageNet datasets, a modified ResNet {{cite:e5d660adc35a70db662e828aaf269e822656be7f}} architecture is used, which is 10 layers deep and has fewer feature maps in each of the four residual blocks ({{formula:93c35786-8927-4154-803a-9601b40f72f4}}). This reduces the number of parameters from {{formula:bde6b40e-0cdd-4511-a625-ec10e30b97ae}} to 34997. Despite using a weaker base network (owing to computing constraints), our method outperforms the baselines, as shown in our results. For the MNIST dataset, we use a two-layer fully connected neural network with 100 neurons per layer and ReLU activation, following the experimental setting in GEM {{cite:569964b30f139b89ad9ca7872b6a36987957a102}}. To train these base models (which are then used to train the VAE in MERLIN), the batch size is set to 10 and Adam {{cite:3294ff5b36f71bf0f24d5492248beb11244105d1}} is used as the optimizer, with an initial learning rate of {{formula:447e2e89-eb27-401e-9444-474916398020}} and weight decay of {{formula:e76c6e66-4939-43fb-9121-431562f0f086}}. To ensure the online setting, the model is trained for only a single epoch, as in the baseline methods {{cite:569964b30f139b89ad9ca7872b6a36987957a102}}, {{cite:80e9940e4ddcdfe9dc507ad14f7837081769a79b}}, {{cite:528c6a0452b9a1d660bfb865a2f9dfeac3893190}}. For class-incremental experiments, we follow earlier methods {{cite:569964b30f139b89ad9ca7872b6a36987957a102}}, {{cite:80e9940e4ddcdfe9dc507ad14f7837081769a79b}} in assuming an upper bound on the number of classes to expect, and modify the loss function to consider only the classes seen so far. This is done by setting the final-layer classification logits of the unseen classes to a very low value ({{formula:54059cce-fb79-4813-8ae6-b39d2c2f3dc2}}), as in {{cite:569964b30f139b89ad9ca7872b6a36987957a102}}, {{cite:8d0b317afe5bfff55291b149a9428931a4c567e2}}, {{cite:3aff58921d365b8a684abd1d4e2236b5ca12a165}}.
r
548b6bafb1911a1197e800ced19eebf4
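The logit-masking trick for class-incremental training is a one-liner in practice. A minimal sketch (the fill value and the 4-class upper bound are illustrative): setting unseen-class logits to a very low value drives their softmax probabilities to zero, so the cross-entropy only competes among seen classes.

```python
import numpy as np

def mask_unseen_logits(logits, seen_classes, fill=-1e10):
    """Set final-layer logits of not-yet-seen classes to a very low value
    so softmax / cross-entropy effectively ignores them."""
    masked = np.full_like(logits, fill)
    masked[..., seen_classes] = logits[..., seen_classes]
    return masked

logits = np.array([[2.0, 1.0, 0.5, -0.3]])  # upper bound of 4 classes
masked = mask_unseen_logits(logits, seen_classes=[0, 1])
probs = np.exp(masked) / np.exp(masked).sum(axis=-1, keepdims=True)
# probs for the unseen classes 2 and 3 underflow to exactly 0.
```

As more classes arrive, `seen_classes` simply grows, so the same fixed-size head serves the whole class-incremental sequence.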
Recent directions in segmentation include weakly-supervised semantic segmentation {{cite:2245d611bf34b3b975fe2b831f1a876b59650d98}}, {{cite:e15492e6f16bbec9dda029103e760ab35009efd2}}, domain adaptation {{cite:9dbd238de26f2c13ca1c077128592feffa457575}}, {{cite:047ceee96551193e7f6a4b70ceefe3af3e8ea1dc}}, multi-modal data fusion {{cite:f7fc2219f27aac53437f0b86d26a47eb209c69c0}}, {{cite:fd7cbae794032212f85d75d0b57d23e6701308f9}}, and real-time semantic segmentation {{cite:0a58a06cc18d30bdea7c314abf779bb468a4d2ac}}, {{cite:09b5c33a1e504705bbdc2e353543210285a8bde4}}, {{cite:3393c6f76e37e302f7910399f9b3a7abe483330e}}.
m
21765b98bca5d2352e5e40ac2a8822a7