---
abstract: 'Many deep learning algorithms can be easily fooled with simple adversarial examples. To address the limitations of existing defenses, we devised a probabilistic framework that can generate an exponentially large ensemble of models from a single model with just a linear cost. This framework takes advantage of neural network depth and stochastically decides whether or not to insert noise removal operators such as VAEs between layers. We show empirically the important role that model gradients have when it comes to determining transferability of adversarial examples, and take advantage of this result to demonstrate that it is possible to train models with limited adversarial attack transferability. Additionally, we propose a detection method based on metric learning in order to detect adversarial examples that have no hope of being cleaned of maliciously engineered noise.'
author:
- |
George A. Adam\
Department of Computer Science\
University of Toronto\
Toronto, ON M5S 3G4\
`alex.adam@mail.utoronto.ca`\
Petr Smirnov\
Medical Biophysics\
University of Toronto\
Toronto, ON M5S 3G4\
David Duvenaud\
Department of Computer Science\
University of Toronto\
Toronto, ON M5S 3G4\
Benjamin Haibe-Kains\
Medical Biophysics\
University of Toronto\
Toronto, ON M5S 3G4\
Anna Goldenberg\
Department of Computer Science\
University of Toronto\
Toronto, ON M5S 3G4\
bibliography:
- 'sample.bib'
- 'Zotero.bib'
title: |
Stochastic Combinatorial Ensembles for\
Defending Against Adversarial Examples
---
Introduction
============
Deep Neural Networks (DNNs) perform impressively well in classic machine learning areas such as image classification, segmentation, speech recognition and language translation [@hinton_deep_2012; @krizhevsky_imagenet_2012; @sutskever_sequence_2014]. These results have led to DNNs being increasingly deployed in production settings, including self-driving cars, on-the-fly speech translation, and facial recognition
for identification. However, like previous machine learning approaches, DNNs have been shown to be vulnerable to adversarial attacks during test time [@szegedy_intriguing_2013]. The existence of such adversarial examples suggests that DNNs lack robustness, and might not be learning the higher level concepts we hope they would learn. Increasingly, it seems that the attackers are winning, especially when it comes to white box attacks where access to network architecture and parameters is granted.
Several approaches have been proposed to protect against adversarial attacks. Traditional defense mechanisms are designed with the goal of maximizing the perturbation necessary to trick the network, making it more obvious to the human eye. However, iterative optimization of adversarial examples by computing gradients in a white-box environment or estimating gradients using a surrogate model in a black-box setting has been shown to successfully break such defenses. While these methods are theoretically interesting as they can shed light on the nature of potential adversarial attacks, there are many practical applications in which being perceptible to a human is not a reasonable defense against an adversary. For example, in a self-driving car setting, any deep CNN applied to analyzing data originating from a non-visible light spectrum (e.g., LIDAR) could not be protected even by an attentive human observer. It is necessary to generate ‘complete’ defenses which preclude the existence of adversarial attacks against the model. This requires a deeper understanding of the mechanisms which make finding adversarial examples against deep learning models so simple. In this paper, we review characteristics of such mechanisms and propose a novel defense method inspired by our understanding of the problem. In a nutshell, the method makes use of the depth of neural networks to create an exponential population of defenses for the attacker to overcome, and employs randomness to increase the difficulty of successfully finding an attack against the population.
Related Work
============
Adversarial examples in the context of DNNs have come into the spotlight after Szegedy et al. [@szegedy_intriguing_2013] showed the imperceptibility of the perturbations which could fool state-of-the-art computer vision systems. Since then, adversarial examples have been demonstrated in many other domains, notably including speech recognition [@carlini_audio_2018] and malware detection [@grosse_adversarial_2016]. Nevertheless, Deep Convolutional Neural Networks (CNNs) in computer vision provide a convenient domain to explore adversarial attacks and defenses, due to the existence of standardized test datasets, high-performing CNN models reaching human or super-human accuracy on clean data, and the marked deterioration of their performance when subjected to adversarial examples to which human vision is robust.
In order to construct effective defenses against adversarial attacks, it is important to understand their origin. Early work speculated that DNN adversarial examples exist due to the highly non-linear nature of DNN decision boundaries and inadequate regularization, leading to the input space being scattered with small, low probability adversarial regions close to existing data points [@szegedy_intriguing_2013]. Follow-up work [@goodfellow_explaining_2014] speculated that adversarial examples are transferable due to the linear nature of some neural networks. While this justification did not help to explain adversarial examples in more complex architectures like ResNet 151 on the ImageNet dataset [@Liu], the authors found significant overlap in the misclassification regions of a group of CNNs. In combination with the fact that adversarial examples are transferable between machine learning approaches (for example from SVM to DNN) [@papernot_transferability_2016], this increasingly suggests that the linearity or non-linearity of DNNs is not the root cause of the existence of adversarial examples that can fool these networks.
Recent work to systematically characterize the nature of adversarial examples suggests that adversarial subspaces can lie close to the data submanifold [@tanay_boundary_2016], but that they form high-dimensional, contiguous regions of space [@tramer_space_2017; @ma_characterizing_2018]. This corresponds to the empirical observation that the transferability of adversarial examples increases with the allowed perturbation into the adversarial region [@carlini_towards_2016], and is higher for examples which lie in higher-dimensional adversarial regions [@tramer_space_2017]. Other work [@gilmer_adversarial_2018] further suggests that CNNs readily learn to ignore regions of their input space if optimal classification performance can be achieved without taking into account all degrees of freedom in the feature space. In summary, adversarial examples are likely to exist very close to but off the data manifold, in regions of low probability under the training set distribution.
Proposed Approaches to Defending Against Adversarial Attacks
------------------------------------------------------------
The two approaches most similar to ours for defending against adversarial attacks are MagNets [@meng_magnet:_2017] and MTDeep [@sengupta_mtdeep:_2017]. Similar to us, MagNets combine a detector and reformer, based on the Variational Autoencoder (VAE) operating on the natural image space. The detector is designed by examining the probability divergence of the classification for the original image and the autoencoded reconstruction, with the hypothesis that for adversarial examples, the reconstructions will be classified into the adversarial class with much lower probability. We adapt the probability divergence approach, but instead of relying on the same classification network, we compute the similarity using an explicit metric learning technique, as described in Section \[need\_for\_detection\]. MagNets also incorporated randomness into their model by training a population of 8 VAEs acting directly on the input data, with a bias towards differences in their encoding space. At test time, they chose to randomly apply one of the 8 VAEs. They were able to reach 80% accuracy on defending against a model trained to attack one of the 8 VAEs, but did not evaluate the performance on an attack trained with estimation of the expectation of the gradient over all eight randomly sampled VAEs. We hypothesize that integrating out only 8 discrete random choices would add a trivial amount of computation to mount an attack on the whole MagNet defense. Our method differs in that we train VAEs at each layer of the embedding generated by the classifier network, relying on the differences in embedding spaces to generate diversity in our defenses. This also gives us the opportunity to create a combinatorial growth of possible defenses with the number of layers, preventing the attacker from trivially averaging over all possible defenses.
In MTDeep (Moving Target Defense) [@sengupta_mtdeep:_2017], the authors investigate the effect of using a dynamically changing defense against an attacker in the image classification framework. The framework they consider is a defender that has a small number of possible defended classifiers to choose from when classifying any one image, and an attacker that can create adversarial examples with high success for each one of the models. They also introduce the concept of differential immunity, which directly quantifies the maximal advantage the defender gains by switching defenses optimally against the attacker. Our method also builds on the idea of constantly moving the target for the attacker, substantially increasing the difficulty of fooling our classifier. However, instead of randomizing only at test time, we use the moving target to make it increasingly costly for the attacker to generate adversarial examples.
Methods
=======
Probabilistic Framework
-----------------------
Deep neural networks offer many places at which to insert a defense against adversarial attacks. Our goal is to exploit the potential for exponential combinations of defenses. Most defenses based on cleaning inputs tend to operate in image space, prior to the image being processed by the classifier. Attacks that are aware of this preprocessing step can simply create images that fool the defense and classifier jointly. However, an adversary would have increased difficulty attacking a model that is constantly changing. For example, a contracted version of VGG-net has 7 convolutional layers, thus offering $2^{1 + 7} = 256$ (1 VAE before the input, and 1 VAE for each of the 7 convolutional layers) different ways to arrange defenses (Figure \[figure\_1\]); this can be thought of as a bag of $256$ models. If the adversary creates an attack for one possible defense arrangement, there is only a $\frac{1}{256}$ chance that the same defense will be used when they try to evaluate the network on the crafted image. Hence, assuming that the attacks are not easily transferable between defense arrangements, the adversary’s success rate decreases exponentially as the number of layers increases just linearly. Even if an adversary generates malicious images for all 256 versions of the model, the goal post when testing these images is always moving, so it would take 256 attempts on average to fool the model. An attacker trying to find malicious images fooling all the models together would have to contend with gradient information sampled from random models, resulting in possibly orthogonal goals. An obvious defense to use at any given layer is an autoencoder with an information bottleneck such that it is difficult to include adversarial noise in the reconstruction. This should have minimal impact on classifier performance when given normal images, and should be able to clean the noise at various layers in the network when given adversarial images.
![Illustration of probabilistic defense framework for a deep CNN architecture. Pooling and activations are left out to save space.[]{data-label="figure_1"}](figures/figure_1_report.pdf){width="\textwidth"}
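To make the arrangement-sampling step concrete, here is a minimal PyTorch-style sketch; the `layers` and `vaes` modules and the per-slot probability `p` are illustrative assumptions, not the paper's released code.

```python
import random
import torch.nn as nn

class StochasticDefendedNet(nn.Module):
    """Classifier whose forward pass independently toggles a denoising VAE
    before the input and after each layer, yielding 2^(len(layers) + 1)
    possible defense arrangements."""

    def __init__(self, layers, vaes, p=0.5):
        super().__init__()
        assert len(vaes) == len(layers) + 1  # one slot per insertion point
        self.layers = nn.ModuleList(layers)
        self.vaes = nn.ModuleList(vaes)
        self.p = p  # probability that each defense slot is active

    def forward(self, x):
        if random.random() < self.p:
            x = self.vaes[0](x)          # clean the raw input
        for i, layer in enumerate(self.layers):
            x = layer(x)
            if random.random() < self.p:
                x = self.vaes[i + 1](x)  # clean this layer's activations
        return x
```

Because each slot is sampled independently per forward pass, an attacker querying the model repeatedly almost never sees the same arrangement twice.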
Need for Detection {#need_for_detection}
------------------
Some adversarial examples are generated with large enough perturbations that they no longer resemble the original image, even when judged by humans. It is unreasonable to assume that basic image processing techniques can restore such adversarial examples to their original form since the noise makes the true class ambiguous. Thus, there is a need to flag images with substantial perturbations. Carlini and Wagner [@Carlini2017] demonstrated that methods that try to detect adversarial examples are easily fooled, and that such methods operate under the assumption that adversarial examples are fundamentally different from natural images in terms of image statistics. However, this was in the context of small perturbations. We believe that a good detection method should be effective in detecting adversarial examples that have been generated with large enough perturbations. Furthermore, we believe that detection methods should be generalizable to new types of attacks in order to be of practical relevance, so in our work we do not make any distributional assumptions regarding adversarial noise in building our detector.
Our proposed detection method might not seem like a detection method at all. In fact, it is not explicitly trained to differentiate between adversarial examples and natural images. Instead, it relies on the consensus between the predictions of two models: the classifier we are trying to defend, and an auxiliary model. To assure low transferability of adversarial examples between the classifier and auxiliary model, we choose an auxiliary model that is trained in a fundamentally different way than a softmax classifier. This is where we leverage recent developments in metric learning.
As an auxiliary model here we use a triplet network [@Hoffer], which was previously introduced in a face recognition application [@FaceNet]. A triplet network is trained using 3 different training examples at once: an anchor, a positive example, and a negative example, as seen in Supplementary Figure 3; this results in semantic image embeddings
that are then used to cluster and compute similarities between images. We use a triplet network for the task of classification via an unconventional type of KNN in the embedding space. This is done by randomly sampling $50$ embeddings for each class from the training set, and then computing the similarity of these embeddings to the embedding of a new test image (Figure \[triplet\_detector\]a). Doing so gives a distribution of similarities that can then be converted to a probability distribution by applying the softmax function (Figure \[triplet\_detector\]b). To classify an image as adversarial or normal, we first take the difference between the probabilities from the classifier and from the embedding network (the probabilities are compared for the most likely class of the original classifier). If this difference in probability is high, then the two models do not agree, so the image is classified as adversarial (Figure \[triplet\_detector\]c). Note that for this setup to work, the classifiers have to agree in their classification of most unperturbed images. In our experiments, we have confirmed that the agreement between LeNet and the triplet network is 90%, as seen in Supplementary Table 2. More formally, the detector uses the following logic:
$$\begin{aligned}
k &= \mathrm{argmax}(p_c(y | x)) & \Delta &= |p_c(y = k | x) - p_t(y = k | x)| & D(\Delta) =
\begin{cases}
1 & \Delta \geq \eta \\
0 & \mathrm{otherwise}
\end{cases}\end{aligned}$$
![a) The test image is projected into embedding space by the embedding network. A fixed number of examples from all possible classes from the training set are randomly sampled. The similarity of the test image embedding to the randomly sampled training embeddings is computed and averaged to get a per-class similarity. b) The vector of similarities is converted to a probability distribution via the softmax function. c) The highest probability class (digit 0 in this case) of the classifier being defended is determined, and the absolute difference between the classifier and
triplet probability for that class is computed.[]{data-label="triplet_detector"}](figures/classification_with_triplet.pdf){width="80.00000%"}
where $p_c(y | x)$ and $p_t(y | x)$ are the probability distributions from the classifier and triplet network, respectively, $k$ is the most probable class output by the classifier, and $\Delta$ is the difference in probability between the most probable class output by the classifier and the same class output by the triplet network. In our experiments we set the threshold $\eta=0.4$. Note that while we used a triplet network as an auxiliary model in our examples, the goal is to find a model that is trained in a distinct manner and will thus have different biases than the original classifier, so other models can certainly be used in place of a triplet network if desired.
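A NumPy sketch of this detector logic, assuming the per-class embedding similarities and the classifier's probability vector have already been computed (the function and argument names are ours, for illustration):

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def detect_adversarial(p_classifier, class_similarities, eta=0.4):
    """Flag an input as adversarial when the classifier and the triplet
    network disagree on the classifier's most likely class.

    p_classifier: probability vector from the defended classifier.
    class_similarities: average embedding similarity of the test image
        to the sampled training embeddings of each class.
    """
    p_triplet = softmax(class_similarities)  # similarities -> probabilities
    k = int(np.argmax(p_classifier))         # classifier's top class
    delta = abs(p_classifier[k] - p_triplet[k])
    return delta >= eta                      # True = flagged as adversarial
```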
Defense Analysis
================
Before discussing the experiments and corresponding results, here is the definition of adversarial examples that we are working with in this paper. An image is an adversarial example if
1. It is classified as a different class than the image from which it was derived.
2. It is classified incorrectly according to the ground-truth label.
3. The image from which it was derived was originally correctly classified.
Since we are evaluating attack success rates in the presence of defenses, point 3 in the above definition ensures that the attack success rate is not confounded by a performance decrease in the classifier potentially caused by a given defense. In our analysis, we use the fast gradient sign (FGS) [@goodfellow_explaining_2014], iterative gradient sign (IGS) [@kurakin_adversarial_2016], and Carlini and Wagner (CW) [@carlini_towards_2016] attacks. We are operating under the white-box assumption that the adversary has access to network architecture, weights, and gradients.
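As a reference point for the attacks named above, here is a minimal PyTorch sketch of the single-step FGS attack; `model` and `loss_fn` are placeholders, and the $[0, 1]$ pixel range is an assumption matching MNIST.

```python
import torch

def fgs_attack(model, loss_fn, x, y, epsilon=0.3):
    """One-step fast gradient sign attack: perturb the input in the
    direction of the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # epsilon = 0.3 matches the setting used throughout this paper
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in a valid range
```

IGS simply repeats this step with a smaller step size, re-clipping after each iteration.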
Additionally, we did not use any data augmentation when training our models since that can be considered a defense and could further confound results. To address possible concerns regarding poor
performance on unperturbed images when defenses are used, we performed the experiment below. For the FGS and IGS attacks, unless otherwise noted, an epsilon of 0.3 was used as is typical in the literature.
Performance of Defended Models on Clean Data
--------------------------------------------
One of the basic assumptions of our approach is that there exist operations that can be applied to the feature embeddings generated by the layers of a deep classification model which preserve the classification accuracy of the network while removing the adversarial signal. As an example of such transformations, we propose a variational autoencoder. We have evaluated the effect of inserting VAEs on two models: Logistic Regression (LR) and a LeNet with two convolutional layers, on the MNIST dataset. The comparison of the performance of these methods is summarized in Table \[table:performance\_reduction\]. Surprisingly, on MNIST it is possible to train quite simple variational autoencoding models to recreate feature embeddings with sufficient fidelity to leave the model performance virtually unchanged. Reconstructed embeddings are visualized in the Supplementary Materials. Supplementary Figure 3 shows how the defense reduces the distance between adversarial and normal examples in the various layers of LeNet.
Model Undef. Accuracy Deterministic Def. Accuracy Stochastic Def. Accuracy
----------- ----------------- ----------------------------- --------------------------
LR-VAE 0.921 0.907 0.914
LeNet-VAE 0.990 0.957 0.972
: Performance reduction caused by defenses on the MNIST dataset.[]{data-label="table:performance_reduction"}
Transferability of Attacks Between Defense Arrangements
-------------------------------------------------------
The premise of our defense is that the exponentially many arrangements of noise-removing operations are not all exploitable by the same set of adversarial images. The worst-case scenario is if the adversary creates malicious examples when noise-removing operations are turned on in all possible locations. It is possible that such adversarial examples would also fool the classifier when the defense is only applied in a subset of the layers. Fortunately, we note that for FGS, IGS, and CW2, transferability of attacks between defense arrangements is limited, as seen in Table \[table:transferability\_layer\_lenet\]. The column headers are binary strings indicating presence or absence of defense at the 3 relevant points in LeNet: the input layer, after the first convolutional layer, and after the second convolutional layer. Column \[0, 0, 0\] shows the attack success rate against an undefended model, and column \[1, 1, 1\] shows the attack success rate against a fully deterministically-defended model. The remaining columns show the transfer attack success rate of the perturbed images created for the \[1, 1, 1\] defense arrangement. The most surprising result is that defense arrangements using 2 autoencoders are more susceptible to a transfer attack than defense arrangements using a single autoencoder. Specifically, \[1, 0, 0\] is more robust than \[1, 1, 0\], which does not make sense intuitively. Although the perturbed images are engineered to fool a classifier with VAEs in all convolutional layers in addition to the input layer, it is possible that the gradient used to generate such images is orthogonal to the gradient required to fool just a single VAE in the convolutional layers.
-------- ------------- ------------- ------------- ------------- ------------- ------------- ------------- -------------
Attack   \[0, 0, 0\]   \[1, 1, 1\]   \[0, 0, 1\]   \[0, 1, 0\]   \[1, 0, 0\]   \[0, 1, 1\]   \[1, 0, 1\]   \[1, 1, 0\]
FGS      0.176         0.1451        0.035         0.019         0.117         0.018         0.133         0.131
IGS      0.434         0.270         0.016         0.011         0.193         0.014         0.231         0.223
CW2      0.990         0.977         0.003         0.002         0.775         0.003         0.959         0.892
-------- ------------- ------------- ------------- ------------- ------------- ------------- ------------- -------------
: Transferability of attacks from strongest defense arrangement \[1, 1, 1\] to other defense arrangements for LeNet-VAE.[]{data-label="table:transferability_layer_lenet"}
Investigating Cause of Low Attack Transferability
-------------------------------------------------
To confirm the suspicion that orthogonal gradients are responsible for the unexpected transferability results between defense arrangements seen in Table \[table:transferability\_layer\_lenet\], we computed the cosine similarity of the gradients of the output layer w.r.t. the input images. Table \[table:cosine\_similarities\_lenet\] shows the average cosine similarity between the strongest defense arrangement \[1, 1, 1\] and other defense arrangements. To summarize the relationship between cosine similarity and attack transferability, we computed the correlations of the transferabilities in Table \[table:transferability\_layer\_lenet\] with the cosine similarities in Table \[table:cosine\_similarities\_lenet\]. These correlations are shown in Table \[table:pearson\_correlations\]. It is quite clear that the cosine similarity between gradients is an almost perfect predictor of the transferability between defense arrangements. Thus, training VAEs with the goal of gradient orthogonality, or training conventional ensembles of models with this goal, has the potential to drastically decrease the transferability of adversarial examples between models.
------------------ ------------- ------------- ------------- ------------- ------------- ------------- -------------
Base Arrangement   \[0, 0, 1\]   \[0, 1, 0\]   \[0, 1, 1\]   \[1, 0, 0\]   \[1, 0, 1\]   \[1, 1, 0\]   \[1, 1, 1\]
\[1, 1, 1\]        0.219         **0.190**     0.249         0.648         0.773         0.728         0.949
------------------ ------------- ------------- ------------- ------------- ------------- ------------- -------------
: Cosine similarities of LeNet probability output vector gradients w.r.t. input images.[]{data-label="table:cosine_similarities_lenet"}
------------- ------- ------- -------
Correlation   FGS     IGS     CW2
Pearson       0.990   0.997   0.997
Spearman      0.829   0.943   0.986
------------- ------- ------- -------
: Correlations between the cosine similarities for different defense arrangements (using \[1, 1, 1\] as the baseline defense) and the transferability of attacks on the \[1, 1, 1\] defense to other defense arrangements.[]{data-label="table:pearson_correlations"}
Training for Gradient Orthogonality
-----------------------------------
In order to test the claim that explicitly training for gradient orthogonality will result in lower transferability of adversarial examples, we focus on a simple scenario. We trained 16 pairs of LeNet models, 8 of which were trained to have orthogonal input-output Jacobians, and 8 of which were trained to have parallel input-output Jacobians. As can be seen in Figure \[fig:parallel\_vs\_perpendicular\], the transfer rates and relevant transfer rates differ drastically between the two approaches. The median relevant transfer attack success rate for the parallel approach is approx. 92%, whereas it is only approx. 17% for the perpendicular approach. This result further illustrates the importance of the input-output Jacobian cosine similarity between models when it comes to transferability.
![Baseline attack success rates and transfer success rates for an IGS attack with an epsilon of 1.0 on LeNet models trained on MNIST. 8 pairs of models were trained for the parallel Jacobian goal, and 8 pairs of models were trained for the perpendicular goal to obtain error bars around attack success rates.[]{data-label="fig:parallel_vs_perpendicular"}](figures/parallel_vs_perpendicular.pdf){width="90.00000%"}
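The paper does not spell out the exact objective used for the parallel and perpendicular Jacobian goals; below is one plausible PyTorch sketch of a cosine-similarity penalty between the two models' input gradients, using the loss gradient as a stand-in for the full input-output Jacobian.

```python
import torch
import torch.nn.functional as F

def jacobian_cosine_penalty(model_a, model_b, x, y, sign=+1.0):
    """Penalty on the cosine similarity between two models' input
    gradients. sign=+1 drives the gradients toward perpendicular;
    sign=-1 drives them toward parallel (or antiparallel)."""
    grads = []
    for model in (model_a, model_b):
        x_req = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_req), y)
        # create_graph=True so the penalty itself can be backpropagated
        g, = torch.autograd.grad(loss, x_req, create_graph=True)
        grads.append(g.flatten(start_dim=1))
    cos = F.cosine_similarity(grads[0], grads[1], dim=1)
    return sign * cos.pow(2).mean()  # add to the usual classification loss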
Effect of Gradient Magnitude
----------------------------
When training the dozens of models used for the results of this paper, we noticed that attack success rates varied greatly between models, even though the differences in hyperparameters seemed negligible. We posited that a significant confounding factor determining susceptibility to attacks such as FGS and IGS is the magnitude of the input-output Jacobian for the true class. Intuitively, if the magnitude of the input-output Jacobian is large, then the perturbation required to cause a model to misclassify an image is smaller than it would be if the magnitude were small. This is seen in Figure \[fig:effect\_of\_grad\_magnitude\], where there is a clear increasing trend in attack success rate as the input-output Jacobian norm increases. This metric can be a significant confounding factor when analyzing robustness to adversarial examples, so it is important to measure it before concluding that differences in hyperparameters are the cause of varying levels of adversarial robustness.
![Relationship between input-output Jacobian L2 norm and susceptibility to the IGS attack with an epsilon of 0.3. 8 LeNet models were trained on MNIST for each magnitude with different random seeds. The ’mean’ and ’std’ reported underneath each goal give the actual input-output Jacobian L2 norms, whereas the ’goal’ is the target that was used during training to regularize the magnitude of the Jacobian.[]{data-label="fig:effect_of_grad_magnitude"}](figures/effect_of_grad_magnitude_igs.pdf){width="\textwidth"}
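A hedged sketch of how the Jacobian-magnitude ‘goal’ in the figure could be enforced during training; the exact penalty the authors used is not given, so this quadratic form is an assumption.

```python
import torch
import torch.nn.functional as F

def jacobian_norm_penalty(model, x, y, goal=1.0):
    """Regularize the L2 norm of the gradient of the true-class
    probability w.r.t. the input toward a target magnitude `goal`."""
    x_req = x.clone().detach().requires_grad_(True)
    probs = F.softmax(model(x_req), dim=1)
    # summing per-sample true-class probs gives per-sample input gradients
    true_class_prob = probs.gather(1, y.unsqueeze(1)).sum()
    g, = torch.autograd.grad(true_class_prob, x_req, create_graph=True)
    norms = g.flatten(start_dim=1).norm(dim=1)
    return (norms - goal).pow(2).mean()
```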
Detector Analysis
=================
The effect of the CW2 attack on the average of defense arrangements is investigated in Supplementary Material. While it may seem that our proposed defense scheme is easily fooled by a strong attack such as CW2, there are still ways of recovering from such attacks by using detection. In fact, there will always be perturbations that are extreme enough to fool any classifier, but perturbations with larger magnitude become increasingly easy for humans to notice. In this section, “classifier” is the model we are trying to defend, and “auxiliary model” is another model we train (a triplet network) that is combined with the classifier to create a detector.
Transferability of Attacks Between LeNet and Triplet Network
------------------------------------------------------------
The best case scenario for an auxiliary model would be if it were fooled only by a negligible percentage of the images that fool the classifier. It is also acceptable for the auxiliary model to be fooled by a large percentage of those images, provided it does not make the same misclassifications as the classifier. Fortunately, we observe that the majority of the perturbed images that fooled LeNet did not fool the triplet network in a way that would affect the detector’s viability. This is shown in Table \[table:transferability\_classifier\_detector\]. For example, 1060 perturbed images created for LeNet using FGS fooled the triplet network. However, only 70 images were missed by the detector due to the requirement for agreement between the auxiliary model and classifier. The columns with “Relevant” in the name show the success rate of transfer attacks that would have fooled the detector.
------------ ------- ------- ------- ------- ------- -------
FGS          0.089   0.106   0.164   0.015   0.007   0.004
IGS          0.130   0.094   0.244   0.004   0.007   0.002
CW2          0.990   0.121   0.819   0.013   0.049   0.008
------------ ------- ------- ------- ------- ------- -------
: Transferability of attacks between LeNet and triplet network.[]{data-label="table:transferability_classifier_detector"}
Jointly Fooling Classifier and Detector
---------------------------------------
If an adversary is unaware that a detector is in place, the task of detecting adversarial examples is much easier. To stay consistent with the white-box scenario considered in previous sections, we assume that the adversary is aware that a detector is in place, so they choose to jointly optimize fooling the VAE-defended classifier and detector. We follow the approach described in [@Carlini2017] where we add an additional output as follows
$$G(x)_i =
\begin{cases}
Z_F(x)_i & \text{if $i \leq N$} \\
(Z_D(x) + 1) \cdot \max\limits_{j} Z_F(x)_j & \text{if $i=N + 1$}
\end{cases}$$
where $Z_D(x)$ is the logit output by the detector, and $Z_F(x)$ is the logit vector output by the classifier. Table \[table:lenet\_detector\] shows the effectiveness of combining the detector and the VAE-defended classifier. The reason why the undefended attack success rate for the CW2 attack is lower than that in Table \[table:transferability\_classifier\_detector\] is probably that the gradient signal when attacking the joint model is weaker. Overall, less than 7% (0.702 - 0.635) of the perturbed images created using CW2 fool the combination of VAE defense and detector.
-------- -------- --------- ------- ---------- ------------ --------------- ------------
Attack   Undef.   Determ.   Stoc.   Original   Undefended   Deterministic   Stochastic
FGS      0.197    0.178     0.179   0.906      0.941        0.957           0.962
IGS      0.323    0.265     0.146   0.903      0.938        0.949           0.967
CW2      0.787    0.703     0.702   0.909      0.848        0.635           0.657
-------- -------- --------- ------- ---------- ------------ --------------- ------------
: Attack success rates and detector accuracy for adversarial examples on LeNet-VAE using a triplet network detector.[]{data-label="table:lenet_detector"}
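For reference, the combined output $G(x)$ defined above is straightforward to implement; a minimal PyTorch sketch follows (the tensor shapes are our assumption).

```python
import torch

def combined_logits(z_f, z_d):
    """Append an (N+1)-th 'adversarial' logit to the classifier's N logits,
    following the construction above: the new logit exceeds the max class
    logit exactly when the detector fires (Z_D > 0).

    z_f: (batch, N) classifier logits; z_d: (batch,) detector logit.
    """
    extra = (z_d.unsqueeze(1) + 1.0) * z_f.max(dim=1, keepdim=True).values
    return torch.cat([z_f, extra], dim=1)  # shape (batch, N + 1)
```

An attacker then runs a standard targeted attack against `combined_logits`, which must suppress both the true class and the adversarial flag at once.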
Discussion
==========
Our proposed defense and the experiments we conducted on it have revealed intriguing, unexpected properties of neural networks. Firstly, we showed how to obtain an exponentially large ensemble of models by training a linear number of VAEs. Additionally, various VAE arrangements had substantially different effects on the network’s gradients w.r.t. a given input image. This is a counterintuitive result because qualitative examination of reconstructions shows that reconstructed images or activation maps look nearly identical to the original images. Also, from a theoretical perspective, VAE reconstructions in general should have a gradient of $\approx \mathbf{1}$ w.r.t. input images since VAEs are trained to approximate the identity function. Secondly, we demonstrated that reducing the transferability of adversarial examples between models is a matter of making the gradients orthogonal between models w.r.t. the inputs. This result makes more sense, and such a goal is something that can be enforced via a regularization term when training VAEs or generating new filtering operations. Using this result can also help guide the creation of more effective concordance-based detectors. For example, a detector could be made by training an additional LeNet model with the goal of making the second model have the same predictions as the first one while having orthogonal gradients. Conducting an adversarial attack that fools both models in the same way would be difficult since making progress on the first model might have the opposite effect or no effect at all on the second model.
A limitation of training models with such unconventional regularization in place is determining how to trade off classification accuracy and gradient orthogonality. Our defense framework requires little computational overhead for filtering operations such as blurs and sharpens, and applying VAEs at test time is not particularly computationally intensive either. Training a number of VAEs equal to the depth of a network in order to obtain an ensemble containing an exponentially large number of models can be computationally intensive; however, in mission-critical scenarios, such as healthcare and autonomous driving, spending more time to train a robust system is certainly warranted and is key to broad adoption.
Conclusion
==========
In this project, we presented a probabilistic framework that uses properties intrinsic to deep CNNs in order to defend against adversarial examples. Several experiments were performed to test the claims that such a setup would result in an exponential ensemble of models for just a linear computation cost. We demonstrated that our defense cleans the adversarial noise in perturbed images and makes them more similar to natural images (Supplementary). Perhaps our most exciting result is that the cosine similarity of the gradients between defense arrangements is highly predictive of attack transferability, which opens many avenues for developing defense mechanisms for CNNs and DNNs in general. As a proof of concept regarding classification biases between models, we showed that the triplet network detector was quite effective at detecting adversarial examples, and was fooled by only a small fraction of the adversarial examples that fooled LeNet. To conclude, probabilistic defenses are able to substantially reduce adversarial attack success rates, while revealing interesting properties about existing models.
Supplementary Material {#supplementary-material .unnumbered}
======================
Reconstruction of Feature Embeddings {#appdx:reconstruct .unnumbered}
------------------------------------
Since using autoencoders to reconstruct activation maps is an unconventional task, we visualize the reconstructions in order to inspect their quality. For the activations obtained from the first convolutional layer seen in Figure \[fig:conv1\_layer\_reconstructions\], it is obvious that the VAEs are effective at reconstructing the activation maps. The only potential issue is that the background for some of the reconstructions is slightly more gray than in the original activation maps. For the most part, this is also the case for the second convolutional layer activation maps. However, in the first, fourth, and sixth rows of Figure \[fig:conv2\_layer\_reconstructions\], there is an obvious addition of arbitrary pixels that were not present in the original activation maps.
![First convolutional layer feature map visualization for LeNet on MNIST. Original feature maps are on the left, VAE reconstructed feature maps are on the right. As is seen, reconstructions are of very high quality.[]{data-label="fig:conv1_layer_reconstructions"}](figures/conv1_layer_reconstructions.pdf){width="5cm" height="15cm"}
![Second convolutional layer feature map visualization for LeNet on MNIST. Original feature maps are on the left, VAE reconstructed feature maps are on the right. As is seen, reconstructions are of very high quality.[]{data-label="fig:conv2_layer_reconstructions"}](figures/conv2_layer_reconstructions.pdf){width="5cm" height="15cm"}
Cleaning Adversarial Examples {#cleaning-adversarial-examples .unnumbered}
-----------------------------
The intuitive notion that VAEs or filters remove adversarial noise can be tested empirically by comparing the distance between adversarial examples and their unperturbed counterparts. In Figure \[fig:lenet\_fgs\_distances\], the evolution of distances between normal and adversarial examples can be seen. When the classifier is undefended, the distance increases significantly with the depth of the network, and this confirms the hypothesis that affine transformations amplify noise. However, it is clear that applying our defense has a marked impact on the distance between normal and adversarial examples. Thus, we can conclude that part of the reason why the defense works is that it dampens the effect of adversarial noise.
![L-$\infty$ distance between adversarial and normal images as a function of layer number for LeNet attacked with FGS for the MNIST dataset.[]{data-label="fig:lenet_fgs_distances"}](figures/lenet_vae_fgs_0_mnist_distances_lineplot.pdf){width="1\linewidth"}
Effect of Attacks on Averaged Defense {#effect-of-attacks-on-averaged-defense .unnumbered}
-------------------------------------
Since the IGS and CW2 attacks are iterative, they have the ability to see multiple defense arrangements while creating adversarial examples. This can result in adversarial examples that might fool any of the available defense arrangements. Indeed, this seems to happen for the CW2 attack shown in Table \[table:defense\_success\_lenet\]. The cause of this is most easily explained by the illustration in Figure \[fig:failure\_mode\]. Since the models we trained were not deep enough, it was possible for the iterative attacks to see all defense combinations when creating adversarial examples, so our defense was defeated. We believe that given a deep enough network of 25 or more layers, it would be computationally infeasible for an adversary to create examples that fool the stochastic ensemble.
----------- ------- ------- ------- ------- -------
LR-VAE      0.920   0.032   0.922   0.473   0.921
LeNet-VAE   0.990   0.014   0.977   0.140   0.984
----------- ------- ------- ------- ------- -------
: Success rate of CW2 attack on LR and LeNet defended with VAEs. []{data-label="table:defense_success_lenet"}
![Illustration of how the defense can fail against iterative attacks. Even though the two defense arrangements have orthogonal gradients, thereby exhibiting low transferability of attacks, an iterative attack that alternates between optimizing for either arrangement can end up fooling both.[]{data-label="fig:failure_mode"}](figures/failure_mode.pdf){width="0.8\linewidth"}
Triplet Network Visualization {#triplet-network-visualization .unnumbered}
-----------------------------
Here we illustrate how a triplet network is trained. An anchor, a positive example, and a negative example are all passed through the same embedding network. The triplet loss is then computed, which encourages the distance between the anchor and the positive example to be at least a margin $\alpha$ smaller than the distance between the anchor and the negative example.
![Illustration of how a triplet network works on the MNIST dataset.[]{data-label="triplet_net"}](figures/triplet_net.pdf){width="\textwidth"}
The triplet loss function is shown here for convenience:
$$L(x, x_{-}, x_{+}) = \mathrm{max}(0, \alpha + \mathrm{d}(x, x_{+}) - \mathrm{d}(x, x_{-}))$$
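A direct PyTorch transcription of this loss; the Euclidean distance and the margin value are assumptions (the paper does not state its $\alpha$).

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, alpha=0.2):
    """Hinge loss pushing the anchor at least `alpha` closer to the
    positive embedding than to the negative one. Euclidean distance is
    assumed; alpha=0.2 follows FaceNet, not necessarily this paper."""
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(alpha + d_pos - d_neg).mean()
```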
Agreement Between LeNet and Triplet Network {#agreement-between-lenet-and-triplet-network .unnumbered}
-------------------------------------------
We investigated the agreement between LeNet and the triplet network we trained in order to confirm that a concordance-based detector is a viable option and does not result in false positives on normal images. Importantly, the models agree 90% of the time (Table \[table:concordance\]) on normal images, so false positives are not a major concern.
Overall Concordance   Correct Concordance   Incorrect Concordance
--------------------- --------------------- -----------------------
0.911                 0.908                 0.003
: Concordance between LeNet and triplet network on predictions on normal images.[]{data-label="table:concordance"}
---
abstract: 'I present the results of first principles calculations of the electronic structure and magnetic interactions for the recently discovered superconductor YFe$_2$Ge$_2$ and use them to identify the nature of superconductivity and quantum criticality in this compound. I find that the Fe $3d$ derived states near the Fermi level show a rich structure with the presence of both linearly dispersive and heavy bands. The Fermi surface exhibits nesting between hole and electron sheets that manifests as a peak in the susceptibility at $(1/2,1/2)$. I propose that the superconductivity in this compound is mediated by antiferromagnetic spin fluctuations associated with this peak resulting in a $s_\pm$ state similar to the previously discovered iron-based superconductors. I also find that various magnetic orderings are almost degenerate in energy, which indicates that the proximity to quantum criticality is due to competing magnetic interactions.'
author:
- Alaska Subedi
title: 'Unconventional sign-changing superconductivity near quantum criticality in YFe$_2$Ge$_2$'
---
Unconventional superconductivity and quantum criticality are two of the most intriguing phenomena observed in physics. The underlying mechanisms and the properties exhibited by the systems in which these two phenomena occur have not been fully elucidated because unconventional superconductors and materials at a quantum critical point are so rare. The dearth of realizable examples has also held back the study of the relationship and interplay, if any, between unconventional superconductivity and quantum criticality.
Therefore, the recent report of non-Fermi liquid behavior and superconductivity in YFe$_2$Ge$_2$ by Zou *et al.* is of great interest despite a low superconducting $T_c$ of $\sim$1.8 K.[@zou13] This material is also interesting because it shares some important features with the previously discovered iron-based high-temperature superconductors. Like the other iron-based superconductors, its structural motif is a square plane of Fe that is tetrahedrally coordinated, in this case, by Ge. This Fe$_2$Ge$_2$ layer is stacked along the $z$ axis with an alternating layer of Y ions. The resulting body-centered tetragonal structure ($I4/mmm$) of this compound is the same as that of the ‘122’ family of the iron-based superconductors. The nearest neighbor Fe–Ge and Fe–Fe distances of 2.393 and 2.801 Å, respectively, in this compound[@vent96] are similar to the Fe–As and Fe–Fe distances of 2.403 and 2.802 Å, respectively, found in BaFe$_2$As$_2$.[@rott08] This raises the possibility that the direct Fe–Fe hopping is important to the physics of this material, which is the case for the previously discovered iron-based superconductors.[@sing08a]
Furthermore, Zou *et al.* report that the superconductivity in this compound exists in the vicinity of a quantum critical point that is possibly associated with antiferromagnetic spin fluctuations.[@zou13] A related isoelectronic compound LuFe$_2$Ge$_2$ that occurs in the same crystal structure exhibits antiferromagnetic spin density wave order below 9 K,[@avil04; @fers06] and the magnetic transition is continuously suppressed in the Lu$_{1-x}$Y$_x$Fe$_2$Ge$_2$ series as Y content is increased, with the quantum critical point lying near the composition Lu$_{0.81}$Y$_{0.19}$Fe$_2$Ge$_2$.[@ran11] The proximity of YFe$_2$Ge$_2$ to quantum criticality is observed in the non-Fermi liquid behavior of the specific-heat capacity and resistivity. Zou *et al.* find that the unusually high Sommerfeld coefficient with a value of $C/T \simeq 90$ mJ/mol K$^2$ at 2 K further increases to a value of $\sim$100 mJ/mol K$^2$ as the temperature is lowered, although the experimental data are not detailed enough to distinguish between a logarithmic and a square root increase. They also find that the resistivity shows a behavior $\rho \propto T^{3/2}$ up to a temperature of 10 K.
In this paper, I use the results of first principles calculations to discuss the interplay between superconductivity and quantum criticality in YFe$_2$Ge$_2$ in terms of its electronic structure and competing magnetic interactions. I find that the fermiology in this compound is dominated by Fe $3d$ states with the presence of both heavy and linearly dispersive bands near the Fermi level. The Fermi surface consists of five sheets. There is an open tetragonal electron cylinder around $X = (1/2, 1/2, 0)$. A large three dimensional closed sheet that is
shaped like a shell of a clam is situated around $Z =
(0, 0, 1/2) = (1, 0, 0)$. This sheet encloses a cylindrical and two almost spherical hole sheets. The tetragonal cylinder sheet around $X$ nests with the spherical and the cylindrical sheets around $Z$, which manifests as a peak at $(1/2,1/2)$ in the bare susceptibility. I propose that the superconductivity in this compound is mediated by antiferromagnetic spin fluctuations associated with this peak, and the resulting superconductivity has a sign-changing $s_\pm$ symmetry with opposite signs on the nested sheets around $X$ and $Z$. This superconductivity is similar to the one proposed for previously discovered iron-based superconductors.[@mazi08; @kuro08] Furthermore, I find that there are competing magnetic interactions in this compound, and the quantum criticality is due to the fluctuations associated with these magnetic interactions.
The results presented here were obtained within the local density approximation (LDA) using the general full-potential linearized augmented planewave method as implemented in the WIEN2k software package.[@wien2k] Muffin-tin radii of 2.4, 2.2, and 2.1 a.u. for Y, Fe, and Ge, respectively, were used. A $24 \times 24 \times 24$ $k$-point grid was used to perform the Brillouin zone integration in the self-consistent calculations. An equivalently sized or larger grid was used for supercell calculations. Some magnetic calculations were also checked with the ELK software package.[@elk] I used the experimental lattice parameters ($a$ = 3.9617 and $c$ = 10.421 Å),[@vent96] but employed the internal coordinate for Ge ${z_{\textrm{Ge}}}$ = 0.3661 obtained via non-spin-polarized energy minimization. The calculated value for ${z_{\textrm{Ge}}}$ is different from the experimentally determined value of ${z_{\textrm{Ge}}}$ = 0.3789. The difference in the Ge height between the calculated and experimental structures is 0.13 Å, which is larger than the typical LDA error in predicting the crystal structure. Such a discrepancy is also found in the iron-based superconductors.[@sing08a] This may suggest that YFe$_2$Ge$_2$ shares some of the underlying physics with the previously discovered iron-based superconductors.
![ Top: LDA non-spin-polarized band structure of YFe$_2$Ge$_2$. Bottom: A blow-up of the band structure around Fermi level. The long $\Gamma$–$Z$ direction is from $(0,0,0)$ to $(1,0,0)$ and the short one is from $(0,0,0)$ to $(0, 0, 1/2)$. The $X$ point is $(1/2,1/2,0)$. The stacking of the Brillouin zone is such that $(1,0,0) = (0,0,1/2)$. See Fig. 1 of Ref. for a particularly illuminating illustration of the reciprocal-space structure. []{data-label="fig:yfg-bnd"}](yfg-bnd-l.ps "fig:"){width="\columnwidth"}\
![ Top: LDA non-spin-polarized band structure of YFe$_2$Ge$_2$. Bottom: A blow-up of the band structure around Fermi level. The long $\Gamma$–$Z$ direction is from $(0,0,0)$ to $(1,0,0)$ and the short one is from $(0,0,0)$ to $(0, 0, 1/2)$. The $X$ point is $(1/2,1/2,0)$. The stacking of the Brillouin zone is such that $(1,0,0) = (0,0,1/2)$. See Fig. 1 of Ref. for a particularly illuminating illustration of the reciprocal-space structure. []{data-label="fig:yfg-bnd"}](yfg-bnd-s.ps "fig:"){width="\columnwidth"}
![ (Color online) Non-spin-polarized electronic density of states of YFe$_2$Ge$_2$ and projections onto the LAPW spheres, on a per formula unit both spin basis. []{data-label="fig:yfg-dos"}](yfg-dos.eps){width="\columnwidth"}
The non-spin-polarized LDA band structure and density of states (DOS) are shown in Figs. \[fig:yfg-bnd\] and \[fig:yfg-dos\], respectively. The lowest band that starts out from $\Gamma$ at $-$5.2 eV relative to the Fermi energy has Ge $4p_z$ character. There is only one band with Ge $4p_z$ character below the Fermi level, and there is another band with this character above the Fermi level. This indicates that the Ge ions make covalent bonds along the $c$ axis, which is not surprising given the short Ge–Ge distance in that direction. The four bands between $-$1.2 and $-$4.8 eV that start out from $\Gamma$ at $-$1.5 and $-$2.6 eV have Ge $4p_x$ and $4p_y$ character. The rest of the bands below the Fermi level have mostly Fe $3d$ character. Similar to the other iron-based superconductors,[@sing08a] there is no gap-like structure among the Fe $3d$ bands splitting them into lower-lying $e_g$ and higher-lying $t_{2g}$ states. This shows that Fe–Ge covalency is minimal and direct Fe–Fe interactions dominate. Almost all of the Fe $4s$ and Y $4d$ and $5s$ character lies above the Fermi level. This indicates a nominal occupation of Fe $3d^{6.5}$, although the actual occupancy will be different because there is some covalency of Fe $3d$ states with Y $4d$ and Ge $4p$ states.
The electronic states near the Fermi level come from Fe $3d$ derived bands and show a rich structure. The electronic DOS at the Fermi level is $N(E_F) = 4.50$ eV$^{-1}$ on a per formula unit both spin basis, corresponding to a calculated Sommerfeld coefficient of 10.63 mJ/mol K$^2$. The Fermi level lies at the bottom of a valley with a large peak due to bands of mostly $d_{xz}$ and $d_{yz}$ characters on the left and a small peak due to a band of mostly $d_{xy}$ character on the right. (The local coordinate system of the Fe site is rotated by 45$^\circ$ in the $xy$ plane with respect to the global Cartesian axes such that the Fe $d_{x^2-y^2}$ orbital points away from the Ge $p_x$ and $p_y$ orbitals.) There is a pair of linearly dispersive bands with mostly $d_{xz}$ and $d_{yz}$ as well as noticeable Ge $p_z$ characters on either side of $Z$. If they are not gapped in the superconducting state, they will provide the system with a massless excitation. In addition to this pair of linearly dispersive bands, there is also a very flat band near the Fermi level along $X$–$\Gamma$. This band has an electron-like nature around $X$ and crosses the Fermi level close to it. Along the $X$–$\Gamma$ direction, it reaches a maximum at 0.08 eV above the Fermi level, turns back down coming within 0.01 eV of touching the Fermi level, and again moves away from the Fermi level. It may be possible to access these band critical points that have vanishing quasiparticle velocities via small perturbations due to impurities, doping, or changes in structural parameters. The role of such band critical points in quantum criticality has been emphasized recently,[@neal11] and similar physics may be relevant in this system.
![(Color online) Top: LDA Fermi surface of YFe$_2$Ge$_2$. Bottom: The Fermi surface without the large sheet. The shading is by velocity.[]{data-label="fig:yfg-fs"}](yfg-fs1v2 "fig:"){width="0.8\columnwidth"} ![(Color online) Top: LDA Fermi surface of YFe$_2$Ge$_2$. Bottom: The Fermi surface without the large sheet. The shading is by velocity.[]{data-label="fig:yfg-fs"}](yfg-fs2v2 "fig:"){width="0.8\columnwidth"}
The Fermi surface of this compound is shown in Fig. \[fig:yfg-fs\]. There is an open, very two-dimensional tetragonal electron cylinder around $X$. This has mostly $d_{xz}$ and $d_{yz}$ character. There are four closed sheets around $Z$. One of them is a large three-dimensional sheet shaped like the shell of a clam with $d_{xz}$, $d_{yz}$, $d_{xy}$, and $d_{z^2}$ characters. There are two almost spherical hole sheets. These have mostly $d_{xz}$ and $d_{yz}$ characters, with the smaller one also containing noticeable Ge $p_z$ character. These two spherical sheets are enclosed by a closed cylindrical hole sheet that has mostly $d_{xy}$ character.
The cylindrical and larger spherical sheets centered around $Z$ touch at isolated points. Otherwise, the Fermi surface is composed of disconnected sheets. If one considers the $\Gamma$–$Z$–$\Gamma$ path along the $k_z$ direction, there is a series of box-shaped cylindrical hole sheets that enclose the two spherical sheets. Although there are no sections around $\Gamma$, these sheets around $Z$ enclose almost two-thirds of the $\Gamma$–$Z$–$\Gamma$ path. Therefore, there is likely to be substantial nesting between the sheets around $Z$ and $X$ that will lead to a peak in the susceptibility at the wave vector $(1/2,1/2)$.
I have calculated the Lindhard susceptibility $$\chi_0(q,\omega) = \sum_{k,m,n} |M_{k,k+q}^{m,n}|^2
\frac{f(\epsilon_k^m) - f(\epsilon_{k+q}^n)}{\epsilon_k^m -
\epsilon_{k+q}^n - \omega - \imath \delta}$$ at $\omega \to 0$ and $\delta \to 0$, where $\epsilon_k^m$ is the energy of a band $m$ at wave vector $k$ and $f$ is the Fermi distribution function. $M$ is the matrix element, which is set to unity. The real part of the susceptibility is shown in Fig. \[fig:yfg-suscep\], and it shows peaks at $\Gamma$, $Z$, and $X$, with the peak at $X$ having the highest magnitude. Note, however, that the cylinders around $Z$ and $X$ have different characters, which should reduce the peak at $X$ and make it broader as well. The peak at $\Gamma$ is equal to the DOS $N(E_F)$. The peak at $Z$ reflects the nesting along the flat sections of the sheets along the $(0,0,1/2)$ direction, while the peak at $X$ is due to the nesting between the hole cylinder and spheres centered around $Z$ and the electron cylinder centered around $X$.
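To make the constant-matrix-element sum above concrete, here is a toy NumPy sketch for a single tight-binding band on a 2D $k$-grid; the dispersion, grid size, temperature, and broadening are illustrative assumptions, not the YFe$_2$Ge$_2$ bands.

```python
import numpy as np

def lindhard_chi0(eps, q_idx, beta=200.0, delta=1e-3):
    """Static Lindhard susceptibility chi_0(q) for one band on a periodic
    k-grid, with the matrix element set to unity as above.

    eps: (Nx, Ny) band energies on a uniform k-grid (E_F = 0).
    q_idx: (qx, qy) wave vector in grid units; k+q wraps periodically.
    """
    f = 0.5 * (1.0 - np.tanh(0.5 * beta * eps))     # Fermi occupations
    eps_kq = np.roll(eps, shift=q_idx, axis=(0, 1))  # energies at k+q
    f_kq = np.roll(f, shift=q_idx, axis=(0, 1))
    chi = ((f - f_kq) / (eps - eps_kq - 1j * delta)).sum() / eps.size
    return -chi.real  # sign chosen so chi_0 > 0, as conventionally plotted

# Illustrative nearest-neighbor tight-binding band (not YFe2Ge2):
n = 64
kx, ky = np.meshgrid(np.linspace(-np.pi, np.pi, n, endpoint=False),
                     np.linspace(-np.pi, np.pi, n, endpoint=False))
eps = -2.0 * (np.cos(kx) + np.cos(ky))  # half filling: E_F = 0
print(lindhard_chi0(eps, (n // 2, n // 2)))  # nested (1/2, 1/2): large peak
print(lindhard_chi0(eps, (n // 8, 0)))       # generic q: much smaller
```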
The bare Lindhard susceptibility is further enhanced due to the RPA interaction, and its real part is related to magnetism and superconductivity. It is found experimentally that pure YFe$_2$Ge$_2$ does not order magnetically down to a temperature of 2 K although it shows non-Fermi liquid behavior in the transport and heat capacity measurements that is likely due to proximity to a magnetic quantum critical point.[@zou13] As the temperature is lowered further, superconductivity manifests in the resistivity measurements at $T_c^\rho$ = 1.8 K and DC magnetization at $T_c^{\textrm{mag}}$ = 1.5 K. This superconductivity can be due to spin fluctuations associated with the peak in the susceptibility.[@berk66; @fay80] The pairing interaction has the form $$V(q=k-k') = - \frac{I^2(q) \chi_0(q)}{1 - I^2(q) \chi_0^2(q)}$$ in the singlet channel and is repulsive. (In the triplet channel, the interaction is attractive and also includes an angular factor.) Here $I(q)$ is the Stoner parameter, which microscopically derives from Coulomb repulsion between electrons.
![The real part of the bare susceptibility calculated with the matrix element set to unity.[]{data-label="fig:yfg-suscep"}](yfg-suscep-color){width="0.6\columnwidth"}
In the present case, the structure of the calculated susceptibility causes the off-diagonal component of the interaction matrix to have a large negative value $-\lambda$ for the pairing between the hole sheets at $Z$ and the electron cylinder at $X$ in the singlet channel. The diagonal components $\lambda_d$ of the interaction matrix, describing the pairing interactions on the hole and electron sheets separately, are small and ferromagnetic. (For simplicity, I have assumed that the densities of states are the same for the hole and electron sections.) The eigenvector corresponding to the largest eigenvalue of this interaction matrix has opposite signs between the hole sheets around $Z$ and the electron cylinder around $X$, and this is consistent with a singlet $s_\pm$ superconductivity with a wave vector $(1/2,1/2)$. This superconductivity is similar to that of the previously discovered iron-based superconductors.[@mazi08; @kuro08]
The proposed superconductivity in YFe$_2$Ge$_2$ is similar to that of the previously discovered iron-based superconductors, but the $T_c$ = 1.8 K for YFe$_2$Ge$_2$ is much smaller than those reported for the other iron-based superconductors. One reason for this may be the weaker nesting in this compound, leading to a smaller peak in the susceptibility. The hole cylinder around $Z$ has mostly $d_{xy}$ character, whereas the hole spheres around $Z$ and the electron cylinder around $X$ have mostly $d_{xz}$ and $d_{yz}$ character. These factors should lead to a slightly smaller and broader peak at $X$. I note, however, that nesting in the other iron-based superconductors is also not perfect[@mazi08] and the band characters between the nested sheets also vary.[@kuro08]
Another reason for the smaller $T_c$ in YFe$_2$Ge$_2$ may be the existence of competing magnetic fluctuations associated with the proximity to quantum criticality. The DOS from the non-spin-polarized calculation is $N(E_F)$ = 1.125 eV$^{-1}$ per spin per Fe, which puts this material on the verge of a ferromagnetic instability according to the Stoner criterion. Ferromagnetism is pair-breaking for the singlet pairing and will suppress the $T_c$ in this compound. Furthermore, there is a peak in the susceptibility at $Z$ as well. The presence of additional antiferromagnetic interactions might reduce the phase space available for the spin fluctuations associated with the pairing channel and may be pair-breaking as well.
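For orientation, the Stoner criterion $I\,N(E_F) \geq 1$ can be evaluated with the $N(E_F)$ quoted above; the Stoner parameter $I$ in this sketch is an assumed, typical $3d$-metal range and is not a number computed in this work:

```python
# Stoner criterion: a ferromagnetic instability requires I * N(E_F) >= 1.
n_ef = 1.125                        # eV^-1 per spin per Fe, from the text
for stoner_i in (0.7, 0.8, 0.9):    # eV; assumed typical 3d-metal values
    print(f"I = {stoner_i:.1f} eV  ->  I*N(E_F) = {stoner_i * n_ef:.2f}")
# The products cluster around 1: on the verge of ferromagnetism.
```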
                    Energy (meV/Fe)   Moment ($\mu_B$/Fe)
  ----------------- ----------------- ---------------------
  NSP               0                 0
  FM                $-$6.29           0.59
  AFM (0,0,1/2)     $-$11.63          0.64
  SDW (1/2,1/2,0)   $-$6.52           0.72

  : \[tab:mag\] The relative energies of various magnetic orderings and the moments within the Fe spheres. The energies are almost degenerate, indicating that the proximity to quantum criticality is due to competing magnetic interactions.
I performed magnetic calculations with various orderings on $(1 \times 1 \times 2)$ and $(\sqrt{2} \times \sqrt{2} \times 2)$ supercells to check the strength of the competing magnetic interactions. The relative energies and the Fe moments are summarized in Table \[tab:mag\]. I was able to stabilize various magnetic configurations, and their energies are close to that of the non-spin-polarized configuration. However, I was not able to stabilize the checkerboard antiferromagnetic order in the $ab$ plane. When a magnetic order is stabilized, the magnitude of the Fe moment is less than 1 $\mu_{B}$, and the magnitudes vary between the different orderings. This indicates that the magnetism is of itinerant nature. It is worthwhile to note that the LDA calculations overestimate the magnetism in this compound, since it does not exhibit any magnetic order experimentally. This disagreement between LDA and experiment is opposite to that for the Mott insulating compounds, where LDA in general underestimates the magnetism.
Although this compound does not order magnetically in experiments, it nonetheless shows proximity to magnetism. It is found that partial substitution of Y by isovalent Lu causes the system to order antiferromagnetically, with 81% Lu substitution being the critical composition.[@ran11] At substitution values below the critical composition, the system shows non-Fermi-liquid behavior in the heat capacity and transport measurements.[@zou13] The unusually high Sommerfeld coefficient of $\sim$90 mJ/mol K$^2$ at 2 K further increases as the temperature is lowered, and the resistivity varies as $\rho \propto T^{3/2}$ up to around 10 K. This non-Fermi-liquid behavior and the large renormalization of the magnetic moments may occur because there is a large phase space for competing magnetic tendencies in this compound. This is due to the fluctuation-dissipation theorem, which relates the fluctuation of the moment to the energy- and momentum-integrated imaginary part of the susceptibility.[@mori85; @solo95; @ishi98; @agua04; @lars04] If the quantum criticality is due to competing magnetic interactions, inelastic neutron scattering experiments, which measure the imaginary part of the susceptibility, would exhibit structures related to the competing interactions. Therefore, even though this compound does not show magnetic ordering, it would be useful to perform such experiments and compare with the results presented here.
In any case, I indeed find that various magnetic orderings and the non-spin-polarized configuration are close in energy (see Table \[tab:mag\]). The energy of the lowest magnetic configuration is only 11.6 meV/Fe below that of the non-spin-polarized one, and the energies of the different magnetic orderings are within 6 meV/Fe of each other. For comparison, the difference in energy between the non-magnetic configuration and the most stable magnetic ordering in BaFe$_2$As$_2$ is 92 meV/Fe, and the energy of the magnetic ordering closest to the most stable one is higher by 51 meV.[@sing08b] Signatures of quantum criticality have been reported for BaFe$_2$As$_2$ and related compounds.[@ning09; @jian09; @kasa10] YFe$_2$Ge$_2$ should show pronounced effects of proximity to quantum criticality, as the competition between magnetic interactions is even stronger.
In summary, I have discussed the superconductivity and quantum criticality in YFe$_2$Ge$_2$ in terms of its electronic structure and competing magnetic interactions. The electronic states near the Fermi level are derived from Fe $3d$ bands and show a rich structure with the presence of both linearly dispersive and heavy bands. The Fermi surface consists of five sheets. There is an open rectangular electron cylinder around $X$. A big sheet shaped like the shell of a clam encloses a hole cylinder and two hole spheres around $Z$. There is a peak in the bare susceptibility at $(1/2,1/2)$ due to nesting between the hole sheets around $Z$ and the electron cylinder around $X$. I propose that the superconductivity in YFe$_2$Ge$_2$ is due to antiferromagnetic spin fluctuations associated with this peak. The resulting superconducting state has $s_\pm$ symmetry similar to that of the previously discovered iron-based superconductors. I also find that different magnetic configurations are close in energy, which suggests the presence of competing magnetic interactions that are responsible for the proximity to quantum criticality observed in this compound.
I am grateful to Antoine Georges for helpful comments and suggestions. This work was partially supported by a grant from Agence Nationale de la Recherche (PNICTIDES).
Y. Zou, Z. Feng, P. W. Logg, J. Chen, G. I. Lampronti, and F. M. Grosche, arXiv:1311.0247.
G. Venturini, and B. Malaman, J. Alloys Compounds [**235**]{}, 201 (1996).
M. Rotter, M. Tegel, D. Johrendt, I. Schellenberg, W. Hermes, and R. Pöttgen, Phys. Rev. B [**78**]{}, 020503(R) (2008).
D. J. Singh, and M.-H. Du, Phys. Rev. Lett. [ **100**]{}, 237003 (2008).
M. Avila, S. Bud’ko, and P. Canfield, J. Magn. Magn. Mater. [**270**]{}, 51 (2004).
J. Ferstl, H. Rosner, and C. Geibel, Physica B: Condens. Matter [**378-380**]{}, 744 (2006).
S. Ran, S. L. Bud’ko, and P. C. Canfield, Philosophical Magazine [**91**]{}, 4388 (2011).
I. I. Mazin, D. J. Singh, M. D. Johannes, and M. H. Du, Phys. Rev. Lett. [**101**]{}, 057003 (2008).
K. Kuroki, S. Onari, R. Arita, H. Usui, Y. Tanaka, H. Kontani, and H. Aoki, Phys. Rev. Lett. [**101**]{}, 087004 (2008).
P. Blaha, K. Schwarz, G. Madsen, D. Kvasnicka, and J. Luitz, “WIEN2k, An Augmented Plane Wave + Local Orbitals Program for Calculating Crystal Properties” (K. Schwarz, Tech. Univ. Wien, Austria) (2001).
http://elk.sourceforge.net.
B. P. Neal, E. R. Ylvisaker, and W. E. Pickett, Phys. Rev. B [**84**]{}, 085133 (2011).
J. T. Park, D. S. Inosov, A. Yaresko, S. Graser, *et al.*, Phys. Rev. B [**82**]{}, 134503 (2010).
N. F. Berk and J. R. Schrieffer, Phys. Rev. Lett. [**17**]{}, 433 (1966).
D. Fay and J. Appel, Phys. Rev. B [**22**]{}, 3173 (1980).
T. Moriya, *Spin Fluctuations in Itinerant Electron Magnetism* (Springer, Berlin, 1985).
A. Z. Solontsov and D. Wagner, Phys. Rev. B [**51**]{}, 12410 (1995).
A. Ishigaki and T. Moriya, J. Phys. Soc. Jpn. [ **67**]{}, 3924 (1998).
A. Aguayo, I. I. Mazin, and D. J. Singh, Phys. Rev. Lett. [**92**]{}, 147201 (2004).
P. Larson, I. I. Mazin, and D. J. Singh, Phys. Rev. B [**69**]{}, 064429 (2004).
D. J. Singh, Phys. Rev. B [**78**]{}, 094511 (2008).
F. Ning, K. Ahilan, T. Imai, A. S. Sefat, R. Jin, M. A. McGuire, B. C. Sales, D. Mandrus, J. Phys. Soc. Jpn. [**78**]{}, 013711 (2009).
S. Jiang, H. Xing, G. Xuan, C. Wang, Z. Ren, C. Feng, J. Dai, Z. Xu, G. Cao, J. Phys. Condens. Matter [**21**]{}, 382203 (2009).
S. Kasahara, T. Shibauchi, K. Hashimoto, K. Ikada, S. Tonegawa, R. Okazaki, H. Shishido, H. Ikeda, H. Takeya, K. Hirata, T. Terashima, Y. Matsuda, Phys. Rev. B [**81**]{}, 184519 (2010).
---
abstract: 'Recently Marcus, Spielman and Srivastava gave a spectacular proof of a theorem which implies a positive solution to the Kadison–Singer problem. We extend (and slightly sharpen) this theorem to the realm of hyperbolic polynomials. A benefit of the extension is that the proof becomes coherent in its general form, and fits naturally in the theory of hyperbolic polynomials. We also study the sharpness of the bound in the theorem, and in the final section we describe how the hyperbolic Marcus–Spielman–Srivastava theorem may be interpreted in terms of strong Rayleigh measures. We use this to derive sufficient conditions for a weak half-plane property matroid to have $k$ disjoint bases.'
address: 'Department of Mathematics, Royal Institute of Technology, SE-100 44 Stockholm, Sweden'
author:
- Petter Brändén
title: 'Hyperbolic polynomials and the Marcus–Spielman–Srivastava theorem'
---
This work is based on notes from a graduate course focused on hyperbolic polynomials and the recent papers [@MSS1; @MSS2] of Marcus, Spielman and Srivastava, given by the author at the Royal Institute of Technology (Stockholm) in the fall of 2013.
Introduction
============
Recently Marcus, Spielman and Srivastava [@MSS2] gave a spectacular proof of Theorem \[MSSmain\] below, which implies a positive solution to the infamous Kadison–Singer problem [@KS]. One purpose of this work is to extend Theorem \[MSSmain\] to the realm of hyperbolic polynomials. Although our proof essentially follows the setup in [@MSS2], a benefit of the extension (Theorem \[t1\]) is that the proof becomes coherent in its general form, and fits naturally in the theory of hyperbolic polynomials. We study the sharpness of the bound in Theorem \[t1\]. We prove that a conjecture in [@MSS2] on the sharpness of the bound (Conjecture \[maxmax\] in this paper) is equivalent to the seemingly weaker Conjecture \[maxmax2\]. Using known results about the asymptotic behavior of the largest zero of Jacobi polynomials, we prove in Section \[sbound\] that the bound is close to being optimal in the hyperbolic setting, see Proposition \[lowprop\].
In the final section we describe how Theorem \[t1\] may be interpreted in terms of strong Rayleigh measures. We use this to derive sufficient conditions for a weak half-plane property matroid to have $k$ disjoint bases. These conditions are very different from Edmonds’ characterization in terms of the rank function of the matroid [@Edm].
The following theorem is a stronger version of Weaver’s $KS_k$ conjecture [@We], which is known to imply a positive solution to the Kadison–Singer problem [@KS]. See [@Cas] for a review of the many consequences of Theorem \[MSSmain\].
\[MSSmain\] Let $k \geq 2$ be an integer. Suppose ${\mathbf{v}}_1, \ldots, {\mathbf{v}}_m \in {\mathbb{C}}^d$ satisfy $\sum_{i=1}^m {\mathbf{v}}_i{\mathbf{v}}_i^* = I$, where $I$ is the identity matrix. If $\|{\mathbf{v}}_i \|^2 \leq \epsilon$ for all $1\leq i \leq m$, then there is a partition of $S_1\cup S_2 \cup \cdots \cup S_k=[m]:=\{1,2,\ldots,m\}$ such that $$\label{sqbound1}
\left\| \sum_{i \in S_j} {\mathbf{v}}_i{\mathbf{v}}_i^* \right\| \leq \frac {(1+\sqrt{k\epsilon})^2} k,$$ for each $j \in [k]$, where $\|\cdot \|$ denotes the operator norm.
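Theorem \[MSSmain\] is easy to probe numerically. The sketch below builds a random isotropic decomposition of the identity and searches over random $k$-partitions; this merely illustrates the statement, since the theorem guarantees existence and is not proved by random search.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, k = 8, 200, 2

# Random v_1, ..., v_m with sum v_i v_i^T = I: whiten random columns.
V0 = rng.standard_normal((d, m))
w, U = np.linalg.eigh(V0 @ V0.T)
V = (U @ np.diag(w**-0.5) @ U.T) @ V0          # now V @ V.T = I
eps = np.sum(V**2, axis=0).max()               # max ||v_i||^2
bound = (1 + np.sqrt(k * eps))**2 / k

best = np.inf
for _ in range(2000):                          # random k-partitions of [m]
    labels = rng.integers(k, size=m)
    worst = max(np.linalg.norm(V[:, labels == j] @ V[:, labels == j].T, 2)
                for j in range(k))
    best = min(best, worst)
print(f"eps = {eps:.3f}  bound = {bound:.3f}  best partition = {best:.3f}")
```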
Hyperbolic polynomials are multivariate generalizations of real–rooted polynomials, which have their origin in PDE theory where they were studied by Petrovsky, Gårding, Bott, Atiyah and Hörmander, see [@ABG; @Ga; @Horm]. During recent years hyperbolic polynomials have been studied in diverse areas such as control theory, optimization, real algebraic geometry, probability theory, computer science and combinatorics, see [@Pem; @Ren; @Vin; @Wag] and the references therein.
A homogeneous polynomial $h({\mathbf{x}}) \in {\mathbb{R}}[x_1, \ldots, x_n]$ is *hyperbolic* with respect to a vector ${\mathbf{e}}\in {\mathbb{R}}^n$ if $h({\mathbf{e}}) \neq 0$, and if for all ${\mathbf{x}}\in {\mathbb{R}}^n$ the univariate polynomial $t \mapsto h(t{\mathbf{e}}-{\mathbf{x}})$ has only real zeros. Here are some examples of hyperbolic polynomials:
1. Let $h({\mathbf{x}})= x_1\cdots x_n$. Then $h({\mathbf{x}})$ is hyperbolic with respect to any vector ${\mathbf{e}}\in {\mathbb{R}}_{++}^n=(0,\infty)^n$: $$h(t{\mathbf{e}}-{\mathbf{x}}) = \prod_{j=1}^n (te_j-x_j).$$
2. Let $X=(x_{ij})_{i,j=1}^n$ be a matrix of $n(n+1)/2$ variables where we impose $x_{ij}=x_{ji}$. Then $\det(X)$ is hyperbolic with respect to $I=\diag(1, \ldots, 1)$. Indeed $t \mapsto \det(tI-X)$ is the characteristic polynomial of the symmetric matrix $X$, so it has only real zeros.
More generally we may consider complex hermitian $Z=(x_{jk}+iy_{jk})_{j,k=1}^n$ (where $i = \sqrt{-1}$) of $n^2$ real variables where we impose $x_{jk}=x_{kj}$ and $y_{jk}=-y_{kj}$, for all $1\leq j,k \leq n$. Then $\det(Z)$ is a real polynomial which is hyperbolic with respect to $I$.
3. Let $h({\mathbf{x}})=x_1^2-x_2^2-\cdots-x_n^2$. Then $h$ is hyperbolic with respect to $(1,0,\ldots,0)^T$.
Suppose $h$ is hyperbolic with respect to ${\mathbf{e}}\in {\mathbb{R}}^n$. We may write $$\label{dalambdas}
h(t{\mathbf{e}}-{\mathbf{x}}) = h({\mathbf{e}})\prod_{j=1}^d (t - \lambda_j({\mathbf{x}})),$$ where ${\lambda_{\rm max}}({\mathbf{x}})=\lambda_1({\mathbf{x}}) \geq \cdots \geq \lambda_d({\mathbf{x}})={\lambda_{\rm min}}({\mathbf{x}})$ are called the *eigenvalues* of ${\mathbf{x}}$ (with respect to ${\mathbf{e}}$), and $d$ is the degree of $h$. In particular $$\label{prolambda}
h({\mathbf{x}}) = h({\mathbf{e}})\lambda_1({\mathbf{x}}) \cdots \lambda_d({\mathbf{x}}).$$
By homogeneity $$\label{dilambdas}
\lambda_j(s{\mathbf{x}}+t{\mathbf{e}})=
\begin{cases}
s\lambda_j({\mathbf{x}})+t &\mbox{ if } s\geq 0 \mbox{ and } \\
s\lambda_{d-j+1}({\mathbf{x}})+t &\mbox{ if } s \leq 0
\end{cases},$$ for all $s,t \in {\mathbb{R}}$ and ${\mathbf{x}}\in {\mathbb{R}}^n$.
The (open) *hyperbolicity cone* is the set $$\Lambda_{\tiny{++}}= \Lambda_{\tiny{++}}({\mathbf{e}})= \{ {\mathbf{x}}\in {\mathbb{R}}^n : {\lambda_{\rm min}}({\mathbf{x}}) >0\}.$$ We denote its closure by $\Lambda_{\tiny{+}}= \Lambda_{\tiny{+}}({\mathbf{e}})=\{ {\mathbf{x}}\in {\mathbb{R}}^n : {\lambda_{\rm min}}({\mathbf{x}}) \geq 0\}$. Since $h(t{\mathbf{e}}-{\mathbf{e}})=h({\mathbf{e}})(t-1)^d$ we see that ${\mathbf{e}}\in \Lambda_{\tiny{++}}$. The hyperbolicity cones for the examples above are:
1. $\Lambda_{\tiny{++}}({\mathbf{e}})= {\mathbb{R}}_{++}^n$.
2. $\Lambda_{\tiny{++}}(I)$ is the cone of positive definite matrices.
3. $\Lambda_{\tiny{++}}(1,0,\ldots,0)$ is the *Lorentz cone* $$\left\{{\mathbf{x}}\in {\mathbb{R}}^n : x_1 > \sqrt{x_2^2+\cdots+x_n^2}\right\}.$$
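For the Lorentz cone of example (3) everything is explicit, which allows a small numerical check of the definitions (a minimal sketch; the test vector is arbitrary):

```python
import numpy as np

# Example (3): h(x) = x_1^2 - x_2^2 - ... - x_n^2 with e = (1, 0, ..., 0).
# Then h(t*e - x) = (t - x_1)^2 - (x_2^2 + ... + x_n^2), so the eigenvalues
# of x are x_1 +/- r, where r = sqrt(x_2^2 + ... + x_n^2).
def lorentz_eigs(x):
    r = np.linalg.norm(x[1:])
    return np.array([x[0] + r, x[0] - r])      # (lambda_max, lambda_min)

x = np.array([3.0, 1.0, 2.0])
coeffs = [1.0, -2.0 * x[0], x[0]**2 - np.sum(x[1:]**2)]   # h(t*e - x) in t
print(np.roots(coeffs))            # the same eigenvalues, via the definition
print(lorentz_eigs(x))

# x lies in the open Lorentz cone iff lambda_min(x) > 0:
print(lorentz_eigs(x)[1] > 0)      # True, since 3 > sqrt(1 + 4)
```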
The following theorem collects a few fundamental facts about hyperbolic polynomials and their hyperbolicity cones. For proofs see [@Ga; @Ren].
\[hypfund\] Suppose $h$ is hyperbolic with respect to ${\mathbf{e}}\in {\mathbb{R}}^n$.
1. $\Lambda_+({\mathbf{e}})$ and $\Lambda_{++}({\mathbf{e}})$ are convex cones.
2. $\Lambda_{++}({\mathbf{e}})$ is the connected component of $$\{ {\mathbf{x}}\in {\mathbb{R}}^n : h({\mathbf{x}}) \neq 0\}$$ which contains ${\mathbf{e}}$.
3. ${\lambda_{\rm min}}: {\mathbb{R}}^n \rightarrow {\mathbb{R}}$ is a concave function, and ${\lambda_{\rm max}}: {\mathbb{R}}^n \rightarrow {\mathbb{R}}$ is a convex function.
4. If ${\mathbf{e}}' \in \Lambda_{++}({\mathbf{e}})$, then $h$ is hyperbolic with respect to ${\mathbf{e}}'$ and $\Lambda_{++}({\mathbf{e}}')=\Lambda_{++}({\mathbf{e}})$.
Recall that the *lineality space*, $L(C)$, of a convex cone $C$ is $C \cap -C$, i.e., the largest linear space contained in $C$. It follows that $L(\Lambda_+)= \{{\mathbf{x}}: \lambda_i({\mathbf{x}})=0 \mbox{ for all } i\}$, see e.g. [@Ren].
The *trace*, *rank* and *spectral radius* (with respect to ${\mathbf{e}}$) of ${\mathbf{x}}\in {\mathbb{R}}^n$ are defined as for matrices: $$\tr({\mathbf{x}}) = \sum_{i=1}^d\lambda_i({\mathbf{x}}), \ \ \rk({\mathbf{x}})= \#\{ i : \lambda_i({\mathbf{x}})\neq 0\} \ \ \mbox{ and } \ \ \|{\mathbf{x}}\| = \max_{1\leq i\leq d} |\lambda_i({\mathbf{x}})|.$$ Note that $\| {\mathbf{x}}\| = \max\{ {\lambda_{\rm max}}({\mathbf{x}}), -{\lambda_{\rm min}}({\mathbf{x}})\}$ and hence $ \| \cdot \|$ is convex by Theorem \[hypfund\] (3). It follows that $\| \cdot \|$ is a seminorm and that $\| {\mathbf{x}}\|=0$ if and only if ${\mathbf{x}}\in L(\Lambda_+)$. Hence $\| \cdot \|$ is a norm if and only if $L(\Lambda_+)=\{0\}$.
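Continuing the Lorentz example, the trace, rank and spectral radius are read off from the two eigenvalues exactly as in the matrix case (a minimal sketch). The rank-one vectors in $\Lambda_+$, which are the inputs relevant for Theorem \[t1\] below, are exactly those with $x_1=\|(x_2,\ldots,x_n)\|>0$.

```python
import numpy as np

def tr_rk_norm(x, tol=1e-12):
    """Trace, rank and spectral radius of x for the Lorentz polynomial."""
    r = np.linalg.norm(x[1:])
    lam = np.array([x[0] + r, x[0] - r])
    return lam.sum(), int((np.abs(lam) > tol).sum()), np.abs(lam).max()

print(tr_rk_norm(np.array([3.0, 1.0, 2.0])))   # (6.0, 2, 3 + sqrt(5))
print(tr_rk_norm(np.array([1.0, 1.0, 0.0])))   # rank one: (2.0, 1, 2.0)
```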
The following theorem is a generalization of Theorem \[MSSmain\] to hyperbolic polynomials.
\[t1\] Let $k\geq 2$ be an integer and $\epsilon$ a positive real number. Suppose $h$ is hyperbolic with respect to ${\mathbf{e}}\in {\mathbb{R}}^n$, and let ${\mathbf{u}}_1, \ldots, {\mathbf{u}}_m \in \Lambda_{+}$ be such that
- $\rk({\mathbf{u}}_i) \leq 1$ for all $1\leq i \leq m$,
- $\tr({\mathbf{u}}_i) \leq \epsilon$ for all $1\leq i \leq m$, and
- ${\mathbf{u}}_1+ {\mathbf{u}}_2+\cdots+ {\mathbf{u}}_m={\mathbf{e}}$.
Then there is a partition of $S_1\cup S_2 \cup \cdots \cup S_k=[m]$ such that $$\label{sqbound}
\left\| \sum_{i \in S_j} {\mathbf{u}}_i \right\| \leq \frac 1 k \delta(k\epsilon, m),$$ for each $j \in [k]$, where $$\delta(\alpha, m):=\left( 1-\frac 1 m +\sqrt{\alpha - \frac 1 m \left(1-\frac 1 m\right)}\right)^2.$$
We recover (a slightly improved version of) Theorem \[MSSmain\] by taking $h= \det$ in Theorem \[t1\].
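The improvement for finite $m$ is easy to quantify. The sketch below (with arbitrary illustrative $k$ and $\epsilon$) compares $\delta(k\epsilon,m)/k$ with the $m$-independent bound $(1+\sqrt{k\epsilon})^2/k$ of Theorem \[MSSmain\]:

```python
import numpy as np

def delta(alpha, m):
    """delta(alpha, m) of Theorem [t1]; requires alpha >= (1/m)(1 - 1/m)."""
    return (1 - 1/m + np.sqrt(alpha - (1/m) * (1 - 1/m)))**2

k, eps = 2, 0.1
for m in (10, 100, 10_000):
    print(f"m = {m:>6}: delta(k*eps, m)/k = {delta(k * eps, m) / k:.4f}")
print(f"m -> infinity:              {(1 + np.sqrt(k * eps))**2 / k:.4f}")
```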
Compatible families of polynomials
==================================
Let $f$ and $g$ be two real–rooted polynomials of degree $n-1$ and $n$, respectively. We say that $f$ is an *interleaver* of $g$ if $$\beta_1 \leq \alpha_1\leq \beta_2 \leq \alpha_2 \leq \cdots \leq \alpha_{n-1} \leq \beta_n,$$ where $\alpha_1 \leq \cdots \leq \alpha_{n-1}$ and $\beta_1 \leq \cdots \leq \beta_{n}$ are the zeros of $f$ and $g$, respectively.
A family $\{f_1(x), \ldots, f_m(x)\}$ of real–rooted polynomials of the same degree and with the same sign of leading coefficient is called *compatible* if it satisfies any of the equivalent conditions in the next theorem. Theorem \[CS\] has been discovered several times; we refer to [@CS Theorem 3.6] for a proof.
\[CS\] Let $f_1(x), \ldots, f_m(x)$ be real–rooted polynomials of the same degree and with positive leading coefficients. The following are equivalent.
1. $f_1(x), \ldots, f_m(x)$ have a common interleaver.
2. for all $p_1, \ldots, p_m \geq 0$, $\sum_{i}p_i=1$, the polynomial $$p_1f_1(x)+ \cdots+ p_mf_m(x)$$ is real–rooted.
\[largestz\] Let $f_1,\ldots, f_m$ be real–rooted polynomials that have the same degree and positive leading coefficients, and suppose $p_1, \ldots, p_m \geq 0$ sum to one. If $\{f_1,\ldots, f_m\}$ is compatible, then for some $1 \leq i \leq m$ with $p_i >0$ the largest zero of $f_i$ is smaller than or equal to the largest zero of the polynomial $$f=p_1f_1 + p_2f_2 + \cdots + p_mf_m.$$
If $\alpha$ is the largest zero of the common interleaver, then $f_i(\alpha) \leq 0$ for all $i$, so that the largest zero, $\beta$, of $f(x)$ is located in the interval $[\alpha, \infty)$, as are the largest zeros of $f_i$ for each $1\leq i \leq m$. Since $f(\beta)=0$, there is an index $i$ with $p_i >0$ such that $f_i(\beta) \geq 0$. Hence the largest zero of $f_i$ is at most $\beta$.
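A concrete compatible family is given by the characteristic polynomials $f_i(t)=\det(tI-A-{\mathbf{u}}_i{\mathbf{u}}_i^T)$ of rank-one updates of a fixed symmetric matrix $A$: a convex combination equals $\det(tI-A)-\tr\big(\mathrm{adj}(tI-A)B\big)$ with $B=\sum_i p_i{\mathbf{u}}_i{\mathbf{u}}_i^T\succeq 0$, which is real–rooted since it is $(h-D_Bh)(tI-A)$ for $h=\det$ (cf. Theorem \[direct\] below). The sketch checks this and the conclusion of Lemma \[largestz\] on random data:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 4

A = rng.standard_normal((n, n)); A = (A + A.T) / 2   # fixed symmetric matrix
us = rng.standard_normal((m, n))                     # rank-one directions
fs = [np.poly(np.linalg.eigvalsh(A + np.outer(u, u))) for u in us]

p = rng.dirichlet(np.ones(m))                 # p_i > 0 with sum p_i = 1
mix = sum(pi * fi for pi, fi in zip(p, fs))   # p_1 f_1 + ... + p_m f_m
roots = np.roots(mix)
print(np.abs(roots.imag).max())               # ~ 0: the mixture is real-rooted

# Lemma [largestz]: some f_i (with p_i > 0) has largest zero <= that of mix.
beta = roots.real.max()
print(any(np.roots(f).real.max() <= beta + 1e-8 for f in fs))   # True
```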
Suppose $S_1, \ldots, S_m$ are finite sets. A family of polynomials, $\{f({\mathbf{s}};t)\}_{{\mathbf{s}}\in S_1 \times \cdots \times S_m}$, for which all non-zero members are of the same degree and have the same signs of their leading coefficients is called *compatible* if for all choices of independent random variables ${\mathsf{X}}_1 \in S_1, \ldots, {\mathsf{X}}_m \in S_m$, the polynomial ${\mathbb{E}}f({\mathsf{X}}_1,\ldots, {\mathsf{X}}_m;t)$ is real–rooted.
The notion of compatible families of polynomials is less general than that of *interlacing families of polynomials* in [@MSS1; @MSS2]. However since all families appearing here (and in [@MSS1; @MSS2]) are compatible we find it more convenient to work with these. The following theorem is in essence from [@MSS1].
\[expfam\] Let $\{f({\mathbf{s}};t)\}_{{\mathbf{s}}\in S_1 \times \cdots \times S_m}$ be a compatible family, and let ${\mathsf{X}}_1 \in S_1, \ldots, {\mathsf{X}}_m \in S_m$ be independent random variables such that ${\mathbb{E}}f({\mathsf{X}}_1,\ldots, {\mathsf{X}}_m;t) \not \equiv 0$. Then there is a tuple ${\mathbf{s}}=(s_1, \ldots, s_m) \in S_1 \times \cdots \times S_m$, with ${\mathbb{P}}[{\mathsf{X}}_i=s_i]>0$ for each $1\leq i \leq m$, such that the largest zero of $f(s_1,\ldots, s_m;t)$ is smaller than or equal to the largest zero of ${\mathbb{E}}f({\mathsf{X}}_1,\ldots, {\mathsf{X}}_m;t)$.
The proof is by induction over $m$. The case when $m=1$ is Lemma \[largestz\], so suppose $m>1$. If $S_m=\{c_1,\ldots, c_k\}$, then $${\mathbb{E}}f({\mathsf{X}}_1,\ldots, {\mathsf{X}}_m;t)= \sum_{i=1}^k q_i {\mathbb{E}}f({\mathsf{X}}_1,\ldots, {\mathsf{X}}_{m-1}, c_i;t),$$ for some $q_i \geq 0$. However $$\sum_{i=1}^k p_i {\mathbb{E}}f({\mathsf{X}}_1,\ldots, {\mathsf{X}}_{m-1}, c_i;t)$$ is real–rooted for all choices of $p_i \geq 0$ such that $\sum_ip_i=1$. By Lemma \[largestz\] and Theorem \[CS\] there is an index $j$ with $q_j>0$ such that ${\mathbb{E}}f({\mathsf{X}}_1,\ldots, {\mathsf{X}}_{m-1}, c_j;t) \not \equiv 0$ and such that the largest zero of ${\mathbb{E}}f({\mathsf{X}}_1,\ldots, {\mathsf{X}}_{m-1}, c_j;t)$ is no larger than the largest zero of ${\mathbb{E}}f({\mathsf{X}}_1,\ldots, {\mathsf{X}}_m;t)$. The theorem now follows by induction.
Mixed hyperbolic polynomials
============================
Recall that the *directional derivative* of $h({\mathbf{x}}) \in {\mathbb{R}}[x_1,\ldots, x_n]$ with respect to ${\mathbf{v}}=(v_1,\ldots, v_n)^T \in {\mathbb{R}}^n$ is defined as $$D_{\mathbf{v}}h({\mathbf{x}}) := \sum_{k=1}^n v_k \frac{ \partial h }{\partial x_k}({\mathbf{x}}),$$ and note that $$\label{dvalt}
(D_{\mathbf{v}}h)({\mathbf{x}}+t{\mathbf{v}}) = \frac d {dt} h({\mathbf{x}}+ t {\mathbf{v}}) .$$ If $h$ is hyperbolic with respect to ${\mathbf{e}}$, then $$\tr({\mathbf{v}})= \frac {D_{\mathbf{v}}h({\mathbf{e}})}{h({\mathbf{e}})},$$ by \eqref{dalambdas}. Hence ${\mathbf{v}}\rightarrow \tr({\mathbf{v}})$ is linear.
The following theorem is essentially known, see e.g. [@BGLS; @Ga; @Ren]. However we need slightly more general results, so we provide proofs below, when necessary.
\[direct\] Let $h$ be a hyperbolic polynomial and let ${\mathbf{v}}\in \Lambda_+$ be such that $D_{\mathbf{v}}h \not \equiv 0$. Then
1. $D_{\mathbf{v}}h$ is hyperbolic with hyperbolicity cone containing $\Lambda_{++}$.
2. The polynomial $h({\mathbf{x}})-yD_{\mathbf{v}}h({\mathbf{x}}) \in {\mathbb{R}}[x_1,\ldots, x_n,y]$ is hyperbolic with hyperbolicity cone containing $\Lambda_{++} \times \{y: y \leq 0\}$.
3. The rational function $${\mathbf{x}}\mapsto \frac {h({\mathbf{x}})}{D_{\mathbf{v}}h({\mathbf{x}})}$$ is concave on $\Lambda_{++}$.
(1). See [@BrOp Lemma 4].
(2). The polynomial $h({\mathbf{x}})y$ is hyperbolic with hyperbolicity cone containing $\Lambda_{++} \times \{y : y<0\}$. Hence so is $H({\mathbf{x}},y):= D_{({\mathbf{v}},-1)} h({\mathbf{x}})y= h({\mathbf{x}})- y D_{\mathbf{v}}h({\mathbf{x}})$ by (1). Since $H({\mathbf{e}}',0) = h({\mathbf{e}}') \neq 0$ for each ${\mathbf{e}}' \in \Lambda_{++}$, we see that also $\Lambda_{++}\times \{0\}$ is a subset of the hyperbolicity cone (by Theorem \[hypfund\] (2)) of $H$.
(3). If ${\mathbf{x}}\in \Lambda_{++}$, then (by Theorem \[hypfund\] (2)) $({\mathbf{x}},y)$ is in the closure of the hyperbolicity cone of $H({\mathbf{x}},y)$ if and only if $$y \leq \frac {h({\mathbf{x}})}{D_{\mathbf{v}}h({\mathbf{x}})}.$$ Since hyperbolicity cones are convex $$y_1 \leq \frac {h({\mathbf{x}}_1)}{D_{\mathbf{v}}h({\mathbf{x}}_1)} \mbox{ and } y_2 \leq \frac {h({\mathbf{x}}_2)}{D_{\mathbf{v}}h({\mathbf{x}}_2)} \mbox{ imply } y_1+y_2 \leq \frac {h({\mathbf{x}}_1+{\mathbf{x}}_2)}{D_{\mathbf{v}}h({\mathbf{x}}_1+{\mathbf{x}}_2)},$$ for all ${\mathbf{x}}_1,{\mathbf{x}}_2 \in \Lambda_{++}$, from which (3) follows.
\[rankalt\] Let $h$ be hyperbolic with hyperbolicity cone $\Lambda_{++}\subseteq {\mathbb{R}}^n$. The rank function does not depend on the choice of ${\mathbf{e}}\in \Lambda_{++}$, and $$\rk({\mathbf{v}})= \max\{ k : D_{\mathbf{v}}^kh \not \equiv 0\}, \quad \mbox{ for all } {\mathbf{v}}\in {\mathbb{R}}^n.$$
That the rank does not depend on the choice of ${\mathbf{e}}\in \Lambda_{++}$ is known, see [@Ren Prop. 22] or [@BrObs Lemma 4.4].
By Taylor’s theorem, $$\label{mag}
h({\mathbf{x}}-y{\mathbf{v}}) = \left( \sum_{k=0}^{\infty} \frac {(-y)^k D_{\mathbf{v}}^k}{k!} \right) h({\mathbf{x}}).$$ Thus $$h({\mathbf{e}}-t{\mathbf{v}}) = h({\mathbf{e}})\prod_{j=1}^d(1-t\lambda_j({\mathbf{v}}))= \sum_{k=0}^d (-1)^k\frac {D^k_{\mathbf{v}}h({\mathbf{e}})} {k!} t^k,$$ and hence $\rk({\mathbf{v}})= \deg h({\mathbf{e}}-t{\mathbf{v}})= \max\{k : D^k_{\mathbf{v}}h({\mathbf{e}})\neq 0\}$. Since the rank does not depend on the choice of ${\mathbf{e}}\in \Lambda_{++}$, if $D^{k+1}_{\mathbf{v}}h({\mathbf{e}})=D^{k+2}_{\mathbf{v}}h({\mathbf{e}})=\cdots =0$ for some ${\mathbf{e}}\in \Lambda_{++}$, then $D^{k+1}_{\mathbf{v}}h({\mathbf{e}}')=D^{k+2}_{\mathbf{v}}h({\mathbf{e}}')=\cdots =0$ for all ${\mathbf{e}}' \in \Lambda_{++}$. Since $ \Lambda_{++}$ has non-empty interior this means $D^{k+1}_{\mathbf{v}}h \equiv 0$.
If $h({\mathbf{x}}) \in {\mathbb{R}}[x_1,\ldots, x_n]$ and ${\mathbf{v}}_1, \ldots, {\mathbf{v}}_m \in {\mathbb{R}}^n$ let $h[{\mathbf{v}}_1, \ldots, {\mathbf{v}}_m]$ be the polynomial in ${\mathbb{R}}[x_1,\ldots, x_n,y_1,\ldots, y_m]$ defined by $$h[{\mathbf{v}}_1, \ldots, {\mathbf{v}}_m] = \prod_{j=1}^m \left(1-y_jD_{{\mathbf{v}}_j}\right) h({\mathbf{x}}).$$ By iterating Theorem \[direct\] (2) we get:
\[mixhyp\] If $h({\mathbf{x}})$ is hyperbolic with hyperbolicity cone $\Lambda_{++}$ and ${\mathbf{v}}_1, \ldots, {\mathbf{v}}_m \in \Lambda_+$, then $h[{\mathbf{v}}_1, \ldots, {\mathbf{v}}_m]$ is hyperbolic with hyperbolicity cone containing $\Lambda_{++} \times (-{\mathbb{R}}_+^m)$, where ${\mathbb{R}}_+ := [0,\infty)$.
\[rk1le\] Suppose $h$ is hyperbolic. If ${\mathbf{v}}_1, \ldots, {\mathbf{v}}_m \in \Lambda_+$ have rank at most one, then $$h[{\mathbf{v}}_1, \ldots, {\mathbf{v}}_m] = h({\mathbf{x}}-y_1{\mathbf{v}}_1 - \cdots - y_m {\mathbf{v}}_m).$$
If ${\mathbf{v}}$ has rank at most one, then $D_{\mathbf{v}}^k h \equiv 0$ for all $k \geq 2$ by Lemma \[rankalt\]. Hence, by \eqref{mag}, $$h({\mathbf{x}}-y{\mathbf{v}}) = \left( \sum_{k=0}^{\infty} \frac {(-y)^k D_{\mathbf{v}}^k}{k!} \right) h({\mathbf{x}})= (1-yD_{{\mathbf{v}}})h({\mathbf{x}}),$$ from which the lemma follows.
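For $h=\det$ the lemma is the familiar fact that a rank-one update enters the determinant affinely: $\det(X-y\,{\mathbf{u}}{\mathbf{u}}^T)=\det(X)(1-y\,{\mathbf{u}}^TX^{-1}{\mathbf{u}})$ for invertible $X$. A quick numerical sketch on random data:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
X = rng.standard_normal((n, n)); X = X @ X.T + n * np.eye(n)   # pos. def.
u = rng.standard_normal(n)

# Rank-one v = u u^T: y -> det(X - y u u^T) is affine, matching
# h[v](x, y) = h(x - y v) = (1 - y D_v) h(x) for h = det.
vals = [np.linalg.det(X - y * np.outer(u, u)) for y in (0.0, 1.0, 2.0)]
print((vals[0] - 2 * vals[1] + vals[2]) / vals[0])   # ~ 0: no y^2 term
```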
Note that $({\mathbf{v}}_1,\ldots,{\mathbf{v}}_m) \mapsto h[{\mathbf{v}}_1,\ldots,{\mathbf{v}}_m]$ is affine linear in each coordinate, i.e., for all $p \in {\mathbb{R}}$ and $1\leq i \leq m$: $$\begin{aligned}
& h[{\mathbf{v}}_1,\ldots,(1-p){\mathbf{v}}_i+p{\mathbf{v}}_i',\ldots, {\mathbf{v}}_m] \\
= &(1-p)h[{\mathbf{v}}_1,\ldots,{\mathbf{v}}_i,\ldots, {\mathbf{v}}_m] +ph[{\mathbf{v}}_1,\ldots,{\mathbf{v}}_i',\ldots, {\mathbf{v}}_m].\end{aligned}$$ Hence if ${\mathsf{X}}_1, \ldots, {\mathsf{X}}_m$ are independent random variables in ${\mathbb{R}}^n$, then $$\label{mixedexp}
{\mathbb{E}}h[{\mathsf{X}}_1,\ldots,{\mathsf{X}}_m] = h[{\mathbb{E}}{\mathsf{X}}_1,\ldots,{\mathbb{E}}{\mathsf{X}}_m].$$
\[mixedchar\] Let $h({\mathbf{x}})$ be hyperbolic with respect to ${\mathbf{e}}\in {\mathbb{R}}^n$, let $V_1, \ldots, V_m$ be finite sets of vectors in $\Lambda_+$, and let ${\mathbf{w}}\in {\mathbb{R}}^{n+m}$. For ${\mathbf{V}}=({\mathbf{v}}_1,\ldots, {\mathbf{v}}_m) \in V_1\times \cdots \times V_m$, let $$f({\mathbf{V}};t) := h[{\mathbf{v}}_1,\ldots, {\mathbf{v}}_m](t{\mathbf{e}}+{\mathbf{w}}).$$ Then $\{f({\mathbf{V}};t)\}_{{\mathbf{V}}\in V_1\times \cdots \times V_m}$ is a compatible family.
In particular if in addition all vectors in $V_1 \cup \cdots \cup V_m$ have rank at most one, and $$g({\mathbf{V}};t) := h(t{\mathbf{e}}+{\mathbf{w}}- \alpha_1{\mathbf{v}}_1-\cdots-\alpha_m{\mathbf{v}}_m),$$ where ${\mathbf{w}}\in {\mathbb{R}}^n$ and $(\alpha_1,\ldots, \alpha_m)\in {\mathbb{R}}^m$, then $\{g({\mathbf{V}};t)\}_{{\mathbf{V}}\in V_1\times \cdots \times V_m}$ is a compatible family.
Let ${\mathsf{X}}_1 \in V_1, \ldots, {\mathsf{X}}_m \in V_m$ be independent random variables. Then the polynomial ${\mathbb{E}}h[{\mathsf{X}}_1, \ldots, {\mathsf{X}}_m]= h[{\mathbb{E}}{\mathsf{X}}_1,\ldots,{\mathbb{E}}{\mathsf{X}}_m]$ is hyperbolic with respect to $({\mathbf{e}}, 0,\ldots,0)$ by Theorem \[mixhyp\] (since ${\mathbb{E}}{\mathsf{X}}_i \in \Lambda_+$ for all $i$ by convexity). In particular the polynomial ${\mathbb{E}}f({\mathsf{X}}_1,\ldots, {\mathsf{X}}_m;t)$ is real–rooted.
The second assertion is an immediate consequence of the first combined with Lemma \[rk1le\].
Bounds on zeros of mixed characteristic polynomials
===================================================
To prove Theorem \[hypprob\], we want to bound the zeros of the *mixed characteristic polynomial* $$\label{mip}
t \mapsto h[{\mathbf{v}}_1, \ldots, {\mathbf{v}}_m](t{\mathbf{e}}+{\mathbf{1}}),$$ where $h$ is hyperbolic with respect to ${\mathbf{e}}\in {\mathbb{R}}^n$, ${\mathbf{1}}\in {\mathbb{R}}^m$ is the all ones vector, and ${\mathbf{v}}_1, \ldots, {\mathbf{v}}_m \in \Lambda_+({\mathbf{e}})$ satisfy ${\mathbf{v}}_1+\cdots+{\mathbf{v}}_m ={\mathbf{e}}$ and $\tr({\mathbf{v}}_i) \leq \epsilon$ for all $1\leq i \leq m$.
\[hypid\] Note that a real number $\rho$ is larger than the maximum zero of \eqref{mip} if and only if $\rho {\mathbf{e}}+{\mathbf{1}}$ is in the hyperbolicity cone $\Gamma_{++}$ of $h[{\mathbf{v}}_1, \ldots, {\mathbf{v}}_m]$. Hence the maximal zero of \eqref{mip} is equal to $$\inf \{ \rho >0 : \rho {\mathbf{e}}+{\mathbf{1}}\in \Gamma_{++}\}.$$
For the remainder of this section, let $h \in {\mathbb{R}}[x_1,\ldots, x_n]$ be hyperbolic with respect to ${\mathbf{e}}$, and let ${\mathbf{v}}_1,\ldots, {\mathbf{v}}_m \in \Lambda_{++}$. To enhance readability in the computations to come, let $\partial_j := D_{{\mathbf{v}}_j}$ and $$\xi_j[g] := \frac {g}{\partial_j g}.$$
Note that a continuously differentiable concave function $f : (0,\infty) \to {\mathbb{R}}$ satisfies $$f(t+\delta) \geq f(t)+ \delta f'(t+\delta), \quad \mbox{ for all } \delta \geq 0.$$ Hence by Theorem \[direct\] $$\label{concon}
\xi_i[h]({\mathbf{x}}+\delta {\mathbf{v}}_j) \geq \xi_i[h]({\mathbf{x}}) + \delta \partial_j \xi_i[h]({\mathbf{x}}+\delta {\mathbf{v}}_j)$$ for all ${\mathbf{x}}\in \Lambda_{+}$ and $\delta \geq 0$. The following elementary identity is left for the reader to verify.
\[tech\] $$\xi_i[h-\partial_jh]=\xi_i[h]-\frac {\partial_j \xi_i[h]\cdot \xi_j[\partial_ih]}{\xi_j[\partial_ih]-1}.$$
\[engine\] If ${\mathbf{x}}\in \Lambda_{++}$, $1\leq i,j \leq m$, $\delta > 1$ and $$\xi_j[h]({\mathbf{x}}) \geq \frac \delta {\delta-1},$$ then $$\xi_i[h-\partial_jh]({\mathbf{x}}+\delta {\mathbf{v}}_j) \geq \xi_i[h]({\mathbf{x}}).$$
Since $\xi_i[h]$ is concave on $\Lambda_{++}$ (Theorem \[direct\] (3)) and homogeneous of degree one: $$\frac {\xi_i[h]({\mathbf{z}}+\delta {\mathbf{v}}_j) - \xi_i[h]({\mathbf{z}})}{\delta} \geq \xi_i[h]({\mathbf{v}}_j), \ \ \ \mbox{ for all } {\mathbf{z}}\in \Lambda_{++}.$$ Hence $$\label{parat}
\partial_j \xi_i[h]({\mathbf{z}}) \geq \xi_i[h]({\mathbf{v}}_j)\geq 0, \ \ \ \mbox{ for all } {\mathbf{z}}\in \Lambda_{++}.$$ If ${\mathbf{z}}\in \Lambda_{++}$, then (by Theorem \[hypfund\] (2)) $({\mathbf{z}}, t)$ is in the closure of the hyperbolicity cone of $h-y\partial_j h$ if and only if $t \leq \xi_j[h]({\mathbf{z}})$. By Theorem \[direct\] the polynomial $$D_{({\mathbf{v}}_i,0)}(h-y\partial_j h) = \partial_i h -y \partial_j \partial_i h$$ is hyperbolic with hyperbolicity cone containing the hyperbolicity cone of $h-y\partial_j h$. Hence if ${\mathbf{z}}\in \Lambda_{++}$ and $t \leq \xi_j[h]({\mathbf{z}})$, then $t \leq \xi_j[\partial_ih]({\mathbf{z}})$, and thus $$\label{parata}
\xi_j[\partial_ih]({\mathbf{z}}) \geq \xi_j[h]({\mathbf{z}}), \ \ \ \mbox{ for all } {\mathbf{z}}\in \Lambda_{++}.$$
Let ${\mathbf{x}}$ be as in the statement of the lemma. By Lemma \[tech\] and \eqref{concon}, $$\begin{aligned}
\xi_i[h-\partial_jh]({\mathbf{x}}+\delta {\mathbf{v}}_j) - \xi_i[h]({\mathbf{x}}) &= \xi_i[h]({\mathbf{x}}+\delta {\mathbf{v}}_j)-\xi_i[h]({\mathbf{x}})- \frac {\partial_j \xi_i[h]\cdot \xi_j[\partial_ih]}{\xi_j[\partial_ih]-1}({\mathbf{x}}+\delta {\mathbf{v}}_j)\\
&\geq \partial_j \xi_i[h]({\mathbf{x}}+\delta {\mathbf{v}}_j) \left( \delta - \frac { \xi_j[\partial_ih]({\mathbf{x}}+\delta {\mathbf{v}}_j) }{\xi_j[\partial_ih]({\mathbf{x}}+\delta {\mathbf{v}}_j)-1}\right)\\
&\geq \xi_i[h]({\mathbf{v}}_j) \left( \delta - \frac { \delta/(\delta-1)}{ \delta/(\delta-1)-1}\right) =0, \end{aligned}$$ where the last inequality follows from \eqref{parat}, \eqref{parata}, and the concavity of ${\mathbf{z}}\rightarrow \xi_j[h]({\mathbf{z}})$.
Consider ${\mathbb{R}}^{n+m}={\mathbb{R}}^n\oplus {\mathbb{R}}^m$ and let ${\mathbf{e}}_1,\ldots, {\mathbf{e}}_m$ be the standard basis vectors of ${\mathbb{R}}^m$ (inside ${\mathbb{R}}^n\oplus {\mathbb{R}}^m$).
\[corbond\] Suppose $h$ is hyperbolic with respect to ${\mathbf{e}}\in {\mathbb{R}}^n$, and let $\Gamma_+$ be the (closed) hyperbolicity cone of $h[{\mathbf{v}}_1,\ldots, {\mathbf{v}}_m]$, where ${\mathbf{v}}_1,\ldots, {\mathbf{v}}_m \in \Lambda_{+}({\mathbf{e}})$. Suppose $t_i, t_j > 1$ and ${\mathbf{x}}\in \Lambda_{+}({\mathbf{e}})$ are such that $${\mathbf{x}}+t_k {\mathbf{e}}_k \in \Gamma_+, \quad \mbox{ for } k \in \{i,j\}.$$ Then $${\mathbf{x}}+\frac {t_j}{t_j-1}{\mathbf{v}}_j + {\mathbf{e}}_j + t_i {\mathbf{e}}_i \in \Gamma_+.$$ Moreover if ${\mathbf{x}}+t_k {\mathbf{e}}_k \in \Gamma_+$ for all $k \in [m]$, then $${\mathbf{x}}+ \left(1-\frac 1 m\right) \sum_{i=1}^m \frac {t_i}{t_i-1}{\mathbf{v}}_i +\left(1-\frac 1 m\right)\sum_{i=1}^m {\mathbf{e}}_i+ \frac 1 m\sum_{i=1}^m t_i{\mathbf{e}}_i \in \Gamma_+.$$
By continuity we may assume ${\mathbf{x}},{\mathbf{v}}_1,\ldots, {\mathbf{v}}_m \in \Lambda_{++}({\mathbf{e}})$. Let $\delta_k = t_k/(t_k-1)$. Then $${\mathbf{x}}+ t_k {\mathbf{e}}_k \in \Gamma_+ \mbox{ if and only if } \xi_k[h]({\mathbf{x}}) \geq \frac {\delta_k}{\delta_k-1}.$$ Also $\xi_i[h-\partial_jh]({\mathbf{x}}+\delta_j {\mathbf{v}}_j) \geq \delta_i/(\delta_i-1)$ is equivalent to $${\mathbf{x}}+\delta_j{\mathbf{v}}_j + {\mathbf{e}}_j+\frac {\delta_i} {\delta_i-1} {\mathbf{e}}_i \in \Gamma_+.$$ Hence the first part follows from Lemma \[engine\].
Suppose ${\mathbf{x}}+t_k {\mathbf{e}}_k \in \Gamma_+$ for all $k \in [m]$. Since ${\mathbf{x}}+s{\mathbf{e}}_1, {\mathbf{v}}_1 \in \Gamma_+$ for all $s \leq t_1$, the vector $${\mathbf{x}}' := {\mathbf{x}}+ \frac {t_1}{t_1-1}{\mathbf{v}}_1 + {\mathbf{e}}_1$$ is in the hyperbolicity cone of $(1-y_1D_{{\mathbf{v}}_1})h$. By the first part we have ${\mathbf{x}}'+t_2{\mathbf{e}}_2, {\mathbf{x}}'+t_3{\mathbf{e}}_3\in \Gamma_+$. Hence we may apply the first part of the theorem with $h$ replaced by $(1-y_1D_{{\mathbf{v}}_1})h$ to conclude $${\mathbf{x}}'+ \frac {t_2}{t_2-1}{\mathbf{v}}_2 + {\mathbf{e}}_2+ t_3{\mathbf{e}}_3={\mathbf{x}}+\frac {t_1}{t_1-1}{\mathbf{v}}_1 + \frac {t_2}{t_2-1}{\mathbf{v}}_2+{\mathbf{e}}_1 +{\mathbf{e}}_2 + t_3{\mathbf{e}}_3\in \Gamma_+.$$ By continuing this procedure with different orderings we may conclude that $${\mathbf{x}}+ \left(\sum_{i=1}^m \frac {t_i}{t_i-1}{\mathbf{v}}_i\right)-\frac {t_j}{t_j-1}{\mathbf{v}}_j +\left(\sum_{i=1}^m {\mathbf{e}}_i\right)-{\mathbf{e}}_j+t_j{\mathbf{e}}_j \in \Gamma_+,$$ for each $1\leq j \leq m$. The second part now follows from convexity of $\Gamma_+$ upon taking the convex sum of these vectors.
\[mainbound\] Suppose $h$ is hyperbolic with respect to ${\mathbf{e}}\in {\mathbb{R}}^n$ and suppose ${\mathbf{v}}_1,\ldots, {\mathbf{v}}_m \in \Lambda_{+}({\mathbf{e}})$ are such that ${\mathbf{e}}= {\mathbf{v}}_1+\cdots+{\mathbf{v}}_m$, where $\tr({\mathbf{v}}_j) \leq \epsilon$ for each $1\leq j \leq m$. Then the largest zero of the polynomial $$t \mapsto h[{\mathbf{v}}_1, \ldots, {\mathbf{v}}_m](t{\mathbf{e}}+{\mathbf{1}})$$ is at most $$\delta(\epsilon, m):=\left( 1-\frac 1 m +\sqrt{\epsilon - \frac 1 m \left(1-\frac 1 m\right)}\right)^2.$$
Let $t >1$ and set ${\mathbf{x}}=\epsilon t{\mathbf{e}}$ and $t_i=t$ for $1\leq i \leq m$. Then ${\mathbf{x}}+t_i {\mathbf{e}}_i = t(\epsilon {\mathbf{e}}+ {\mathbf{e}}_i) \in \Gamma_+$ since $$h[{\mathbf{v}}_1,\ldots, {\mathbf{v}}_m](\epsilon {\mathbf{e}}+ {\mathbf{e}}_i)= \epsilon^{d-1}\left(\epsilon h({\mathbf{e}})- D_{{\mathbf{v}}_i}h({\mathbf{e}})\right) = \epsilon^{d-1} h({\mathbf{e}})(\epsilon -\tr({\mathbf{v}}_i)) \geq 0.$$ Apply Corollary \[corbond\] to conclude that for each $t>1$: $$\left(\epsilon t+ \left(1-\frac 1 m\right)\frac t {t-1}\right) {\mathbf{e}}+ \left(1-\frac 1 m + \frac t m \right) {\mathbf{1}}\in \Gamma_+.$$
Hence by (the homogeneity of $\Gamma_+$ and) Remark \[hypid\], the maximal zero is at most $$\inf \left\{ \frac {\epsilon t+ \left(1-\frac 1 m\right)\frac t {t-1}} {1-\frac 1 m + \frac t m } : t >1\right\}.$$ It is a simple exercise to deduce that the infimum is exactly what is displayed in the statement of the theorem.
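The concluding exercise can also be checked numerically. The sketch below (arbitrary illustrative $\epsilon$ and $m$) minimizes the displayed quotient over a grid in $t>1$ and compares the result with $\delta(\epsilon,m)$:

```python
import numpy as np

def delta(eps, m):
    return (1 - 1/m + np.sqrt(eps - (1/m) * (1 - 1/m)))**2

def quotient(t, eps, m):
    """The expression inside the infimum in the proof above."""
    return (eps * t + (1 - 1/m) * t / (t - 1)) / (1 - 1/m + t / m)

eps, m = 0.3, 50
ts = 1.0 + np.linspace(1e-4, 100.0, 400_000)
print(quotient(ts, eps, m).min())   # grid minimum over t > 1
print(delta(eps, m))                # agrees to high accuracy
```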
Proof of Theorem \[t1\]
=======================
To prove Theorem \[t1\] we use the following theorem which for $h=\det$ appears in [@MSS1; @MSS2]:
\[hypprob\] Suppose $h$ is hyperbolic with respect to ${\mathbf{e}}$. Let ${\mathsf{X}}_1, \ldots, {\mathsf{X}}_m$ be independent random vectors in $\Lambda_+$ of rank at most one and with finite supports such that $$\label{hypeta2}
{\mathbb{E}}\sum_{i=1}^m {\mathsf{X}}_i ={\mathbf{e}},$$ and $$\label{hyptr}
\tr({\mathbb{E}}{\mathsf{X}}_i) \leq \epsilon \mbox{ for all } 1\leq i \leq m.$$ Then $$\label{hypbig}
{\mathbb{P}}\left[ {\lambda_{\rm max}}\left(\sum_{i=1}^m {\mathsf{X}}_i \right) \leq \delta(\epsilon,m) \right] >0.$$
Let $V_i$ be the support of ${\mathsf{X}}_i$, for each $1 \leq i \leq m$. By Theorem \[mixedchar\], the family $$\{h(t{\mathbf{e}}- {\mathbf{v}}_1-\cdots-{\mathbf{v}}_m)\}_{{\mathbf{v}}_i \in V_i}$$ is compatible. By Theorem \[expfam\] there are vectors ${\mathbf{v}}_i \in V_i$, $1\leq i \leq m$, such that the largest zero of $h(t{\mathbf{e}}- {\mathbf{v}}_1-\cdots-{\mathbf{v}}_m)$ is smaller than or equal to the largest zero of $${\mathbb{E}}h(t{\mathbf{e}}- {\mathsf{X}}_1-\cdots-{\mathsf{X}}_m)= {\mathbb{E}}h[{\mathsf{X}}_1,\ldots, {\mathsf{X}}_m](t{\mathbf{e}}+{\mathbf{1}})= h[{\mathbb{E}}{\mathsf{X}}_1,\ldots, {\mathbb{E}}{\mathsf{X}}_m](t{\mathbf{e}}+{\mathbf{1}}).$$ The theorem now follows from Theorem \[mainbound\].
For $1\leq i \leq k$, let ${\mathbf{x}}^i=(x_{i1},\ldots,x_{in})$ where ${\mathbf{y}}=\{x_{ij} : 1\leq i \leq k, 1\leq j \leq n\}$ are independent variables. Consider the polynomial $$g({\mathbf{y}}) = h({\mathbf{x}}^1)h({\mathbf{x}}^2) \cdots h({\mathbf{x}}^k) \in {\mathbb{R}}[{\mathbf{y}}],$$ which is hyperbolic with respect to ${\mathbf{e}}^1\oplus \cdots \oplus {\mathbf{e}}^k$, where ${\mathbf{e}}^i$ is a copy of ${\mathbf{e}}$ in the variables ${\mathbf{x}}^i$, for all $1 \leq i \leq k$. The hyperbolicity cone of $g$ is the direct sum $\Lambda_+:=\Lambda_+({\mathbf{e}}^1) \oplus \cdots \oplus \Lambda_+({\mathbf{e}}^k)$, where $\Lambda_+({\mathbf{e}}^i)$ is a copy of $\Lambda_+({\mathbf{e}})$ in the variables ${\mathbf{x}}^i$, for all $1 \leq i \leq k$.
Let ${\mathsf{X}}_1, \ldots, {\mathsf{X}}_m$ be independent random vectors in $\Lambda_+$ such that for all $1\leq i \leq k$ and $1\leq j \leq m$: $${\mathbb{P}}\left[ {\mathsf{X}}_j = k{\mathbf{u}}_j^i\right] = \frac 1 k,$$ where ${\mathbf{u}}_1^i, \ldots, {\mathbf{u}}_m^i$ are copies in $\Lambda_+({\mathbf{e}}^i)$ of ${\mathbf{u}}_1, \ldots, {\mathbf{u}}_m$. Then $$\begin{aligned}
{\mathbb{E}}{\mathsf{X}}_j &= {\mathbf{u}}_j^1 \oplus {\mathbf{u}}_j^2 \oplus \cdots \oplus {\mathbf{u}}_j^k, \\
\tr({\mathbb{E}}{\mathsf{X}}_j) &= k\tr({\mathbf{u}}_j) \leq k\epsilon, \mbox{ and } \\
{\mathbb{E}}\sum_{j=1}^m {\mathsf{X}}_j &= {\mathbf{e}}^1\oplus \cdots \oplus {\mathbf{e}}^k,\end{aligned}$$ for all $1\leq j \leq m$. By Theorem \[hypprob\] there is a partition $S_1\cup \cdots \cup S_k =[m]$ such that $${\lambda_{\rm max}}\left(\sum_{i \in S_1}k{\mathbf{u}}_i^1+\cdots + \sum_{i \in S_k}k{\mathbf{u}}_i^k \right)\leq \delta(k\epsilon,m).$$ However $${\lambda_{\rm max}}\! \left(\sum_{i \in S_1}k{\mathbf{u}}_i^1+\cdots + \sum_{i \in S_k}k{\mathbf{u}}_i^k \right) = k \! \max_{1\leq j \leq k} {\lambda_{\rm max}}\! \left(\sum_{i \in S_j}{\mathbf{u}}_i^j \right) =
k \! \max_{1\leq j \leq k} {\lambda_{\rm max}}\! \left(\sum_{i \in S_j}{\mathbf{u}}_i \right),$$ and the theorem follows.
On a conjecture on the optimal bound
====================================
We have seen that the core of the proof of Theorem \[t1\] is to bound the zeros of mixed characteristic polynomials. To achieve better bounds in Theorem \[t1\] we are therefore motivated to look closer at the following problem.
\[central\] Let $h$ be a polynomial of degree $d$ which is hyperbolic with respect to ${\mathbf{e}}$, and let $\epsilon >0$ and $m \in {\mathbb{Z}}_+$ be given. Determine the largest possible maximal zero, $\rho=\rho(h,{\mathbf{e}},\epsilon,m)$, of mixed characteristic polynomials: $$\chi[{\mathbf{v}}_1,\ldots, {\mathbf{v}}_m](t):=h[{\mathbf{v}}_1, \ldots, {\mathbf{v}}_m](t{\mathbf{e}}+ {\mathbf{1}})$$ subject to the conditions
1. ${\mathbf{v}}_1, \ldots, {\mathbf{v}}_m \in \Lambda_+$,
2. ${\mathbf{v}}_1 + \cdots + {\mathbf{v}}_m = {\mathbf{e}}$, and
3. $\tr({\mathbf{v}}_i) \leq \epsilon$ for all $1 \leq i \leq m$.
The following conjecture was made by Marcus *et al.* [@MSS2] in the case when $h = \det$, but we take the liberty of extending the conjecture to any hyperbolic polynomial.
\[maxmax\] The maximal zero in Problem \[central\] is achieved for $${\mathbf{v}}_1=\cdots ={\mathbf{v}}_k= \frac \epsilon d {\mathbf{e}}, {\mathbf{v}}_{k+1}= \left(1-\frac k d \epsilon\right){\mathbf{e}}, {\mathbf{v}}_{k+2}={\mathbf{v}}_{k+3}= \cdots= {\mathbf{v}}_{m}=0,$$ where $k= \lfloor d/\epsilon \rfloor$.
We will prove here that Conjecture \[maxmax\] is equivalent to the following seemingly weaker conjecture.
\[maxmax2\] The maximal zero in Problem \[central\] is achieved for some ${\mathbf{v}}_1, \ldots, {\mathbf{v}}_m$ where ${\mathbf{v}}_i \in \Lambda_{++}\cup \{0\}$ for each $i \in [m]$.
We start by proving that there is a solution to Problem \[central\] for which the ${\mathbf{v}}_i$’s have the correct traces, i.e., those in Conjecture \[maxmax\]. By a “solution” to Problem \[central\] we mean a list of vectors ${\mathbf{v}}_1, \ldots, {\mathbf{v}}_m$, as in Problem \[central\], which realizes the maximal zero. First a useful lemma.
\[nicein\] Suppose ${\mathbf{u}}, {\mathbf{v}}, {\mathbf{w}}\in \Lambda_+$. Then $$(D_{\mathbf{u}}D_{\mathbf{v}}h({\mathbf{w}}))^2 \geq D_{\mathbf{u}}^2 h({\mathbf{w}}) \cdot D_{\mathbf{v}}^2 h({\mathbf{w}}),$$ and hence $$D_{\mathbf{u}}D_{\mathbf{v}}h({\mathbf{w}}) \geq \min\{ D_{\mathbf{u}}^2 h({\mathbf{w}}), D_{\mathbf{v}}^2 h({\mathbf{w}})\}.$$
By continuity we may assume ${\mathbf{u}}, {\mathbf{v}}, {\mathbf{w}}\in \Lambda_{++}$. Then the polynomial $$\begin{aligned}
& g(x,y,z):=h(x{\mathbf{u}}+y{\mathbf{v}}+z{\mathbf{w}}) = h({\mathbf{w}})z^d+ \big(D_{\mathbf{u}}h({\mathbf{w}}) x + D_{\mathbf{v}}h({\mathbf{w}})y\big)z^{d-1}+ \\
&+ \left( D_{\mathbf{u}}D_{\mathbf{v}}h({\mathbf{w}}) xy + \frac 1 2 D_{\mathbf{u}}^2 h({\mathbf{w}}) x^2 + \frac 1 2 D_{\mathbf{v}}^2 h({\mathbf{w}})y^2\right)z^{d-2}+ \cdots\end{aligned}$$ is hyperbolic with hyperbolicity cone containing the positive orthant. By Theorem \[direct\] (1) so is $\partial^{d-2} g /\partial z^{d-2}$, and hence the polynomial $$\frac {2}{(d-2)!}\frac {\partial^{d-2} g} {\partial z^{d-2}} \big( (1,0,0)+ t(0,1,0) \big) = D_{\mathbf{u}}^2 h({\mathbf{w}}) + 2D_{\mathbf{u}}D_{\mathbf{v}}h({\mathbf{w}})t + D_{\mathbf{v}}^2 h({\mathbf{w}})t^2$$ is real–rooted. Thus its discriminant is nonnegative, which yields the desired inequality.
\[righttrace\] There is a solution to Problem \[central\] such that all but at most one of the ${\mathbf{v}}_i$’s have trace either zero or $\epsilon$.
Moreover, if there is a solution to Problem \[central\] which satisfies the condition in Conjecture \[maxmax2\], then there is such a solution such that all but at most one of the ${\mathbf{v}}_i$’s have trace either zero or $\epsilon$.
Let ${\mathbf{v}}_1, \ldots, {\mathbf{v}}_m$ be a solution to Problem \[central\], and let $\rho$ be the maximal zero. Suppose $0<\tr({\mathbf{v}}_1), \tr({\mathbf{v}}_2) <\epsilon$. By Remark \[hypid\] $\rho {\mathbf{e}}+ {\mathbf{1}}$ is in the hyperbolicity cone $\Gamma_+$ of $h[{\mathbf{v}}_1, \ldots, {\mathbf{v}}_m]$. Since also $-{\mathbf{e}}_1, -{\mathbf{e}}_2 \in \Gamma_+$ we have ${\mathbf{w}}:= \rho{\mathbf{e}}+{\mathbf{1}}-{\mathbf{e}}_1-{\mathbf{e}}_2 \in \Gamma_+$, and hence ${\mathbf{w}}$ is in the (closed) hyperbolicity cone of $g=h[{\mathbf{v}}_3, \ldots, {\mathbf{v}}_m]$. By Lemma \[nicein\] we may assume $$D_{{\mathbf{v}}_1}D_{{\mathbf{v}}_2} g({\mathbf{w}}) \geq D_{{\mathbf{v}}_1}^2g({\mathbf{w}}) \geq 0,$$ since otherwise we may interchange the indices $1$ and $2$. For $$0 \leq s \leq \min\left\{1, \frac {\epsilon -\tr({\mathbf{v}}_2)} {\tr({\mathbf{v}}_1)}\right\},$$ we have (since $(1-D_{{\mathbf{v}}_1})(1-D_{{\mathbf{v}}_2})g({\mathbf{w}})=0$): $$\begin{aligned}
& h[{\mathbf{v}}_1-s{\mathbf{v}}_1, {\mathbf{v}}_2+s{\mathbf{v}}_1, {\mathbf{v}}_3, \ldots, {\mathbf{v}}_m](\rho {\mathbf{e}}+{\mathbf{1}}) \\
=& -s(D_{{\mathbf{v}}_1}D_{{\mathbf{v}}_2} g({\mathbf{w}})-D_{{\mathbf{v}}_1}^2g({\mathbf{w}}))-s^2D_{{\mathbf{v}}_1}^2g({\mathbf{w}}) \leq 0. \end{aligned}$$ Hence the maximal zero of $\chi[{\mathbf{v}}_1-s{\mathbf{v}}_1, {\mathbf{v}}_2+s{\mathbf{v}}_1, {\mathbf{v}}_3, \ldots, {\mathbf{v}}_m](t)$ is at least $\rho$, and since $\rho$ is the largest possible maximal zero $$\chi[{\mathbf{v}}_1-s{\mathbf{v}}_1, {\mathbf{v}}_2+s{\mathbf{v}}_1, {\mathbf{v}}_3, \ldots, {\mathbf{v}}_m](\rho)=0.$$ We may therefore alter ${\mathbf{v}}_1, {\mathbf{v}}_2$ so that either ${\mathbf{v}}_1=0$ or $\tr({\mathbf{v}}_2) = \epsilon$, while retaining the maximal zero $\rho$. Continuing this process we arrive at a solution of the desired form.
\[average\] Suppose ${\mathbf{v}}_1,\ldots, {\mathbf{v}}_m$ is a solution to Problem \[central\] such that $\tr({\mathbf{v}}_1)= \tr({\mathbf{v}}_2)$ and ${\mathbf{v}}_1, {\mathbf{v}}_2 \in \Lambda_{++}$. Then $({\mathbf{v}}_1+{\mathbf{v}}_2)/2, ({\mathbf{v}}_1+{\mathbf{v}}_2)/2, {\mathbf{v}}_3, \ldots, {\mathbf{v}}_m$ is also a solution to Problem \[central\].
Let ${\mathbf{v}}_1(s)= (1-s){\mathbf{v}}_1+s{\mathbf{v}}_2$ and ${\mathbf{v}}_2(s)= (1-s){\mathbf{v}}_2+s{\mathbf{v}}_1$. Then $\tr({\mathbf{v}}_1(s))=\tr({\mathbf{v}}_2(s)) = \tr({\mathbf{v}}_1)$, ${\mathbf{v}}_1(s)+{\mathbf{v}}_2(s)={\mathbf{v}}_1+{\mathbf{v}}_2$, and ${\mathbf{v}}_1(s), {\mathbf{v}}_2(s) \in \Lambda_{++}$ for all $s \in (-\delta, 1+\delta)$ for some $\delta>0$. Let $\rho$ be the maximal zero in Problem \[central\]. Then the function $$(-\delta, 1+\delta) \ni s \mapsto \chi[{\mathbf{v}}_1(s),{\mathbf{v}}_2(s), {\mathbf{v}}_3, \ldots, {\mathbf{v}}_m](\rho)$$ is a polynomial of degree at most two which has local minima at $s=0$ and $s=1$. Hence this function is identically zero, and thus $$\chi[{\mathbf{v}}_1(1/2),{\mathbf{v}}_2(1/2), {\mathbf{v}}_3, \ldots, {\mathbf{v}}_m](\rho) =0$$ as desired.
\[infav\] Suppose ${\mathbf{v}}_1, \ldots, {\mathbf{v}}_m$ is a solution to Problem \[central\] such that ${\mathbf{v}}_1, \ldots, {\mathbf{v}}_k \in \Lambda_{++}$ all have the same trace, and let $${\mathbf{v}}= \frac 1 k ({\mathbf{v}}_1 + \cdots + {\mathbf{v}}_k).$$ By applying Lemma \[average\] infinitely many times (and invoking Hurwitz’ theorem on the continuity of zeros) we see that also ${\mathbf{v}}, \ldots, {\mathbf{v}}, {\mathbf{v}}_{k+1}, \ldots, {\mathbf{v}}_m$ is a solution to Problem \[central\].
Clearly Conjecture \[maxmax\] implies Conjecture \[maxmax2\]. To prove the other implication assume Conjecture \[maxmax2\]. Then by Proposition \[righttrace\] and Remark \[infav\] we may assume that we have a solution of the form ${\mathbf{v}}_1, \ldots, {\mathbf{v}}_m$, where ${\mathbf{v}}_1=\cdots={\mathbf{v}}_k={\mathbf{v}}$, ${\mathbf{v}}_{k+1}={\mathbf{e}}-k{\mathbf{v}}$, ${\mathbf{v}}_{k+2}=\cdots={\mathbf{v}}_m=0$, and where ${\mathbf{v}}, {\mathbf{e}}-k{\mathbf{v}}\in \Lambda_{++}$ and $\tr({\mathbf{v}})= \epsilon$ and $0<d-k\epsilon = \tr({\mathbf{e}}-k{\mathbf{v}})<\epsilon$. Hence we want to maximize the largest zero of $$\label{gv}
g_{\mathbf{v}}(t):=(1-D_{\mathbf{v}})^{k} (1-D_{\mathbf{e}}+kD_{\mathbf{v}}) h(t{\mathbf{e}})$$ where
- ${\mathbf{v}}, {\mathbf{e}}-k{\mathbf{v}}\in \Lambda_{++}$
- $\tr({\mathbf{v}})=\epsilon$, where $0<d-k\epsilon <\epsilon$.
Let $I \subseteq {\mathbb{R}}$ be an interval. We say that a univariate polynomial is $I$–*rooted* if all its zeros lie in $I$.
\[tgv\] Let $g_{\mathbf{v}}(t)$ be given by \eqref{gv}. Then $g_{\mathbf{v}}(t)= T_{k,d}(h(t{\mathbf{e}}-{\mathbf{v}}))$ where $T_{k,d}: {\mathbb{R}}[t] \rightarrow {\mathbb{R}}[t]$ is the linear operator defined by $$T_{k,d}\left(\sum_{j\geq 0} a_jt^j\right) = -\sum_{j=0}^d \left( \frac {j+1} {k+1} a_{j+1} +(d-1-j)a_j\right) (d-j)! \binom {k+1}{d-j}t^j.$$ Moreover if $f$ is a $[0,1/k]$–rooted polynomial of degree $d$, then $T_{k,d}(f)$ is real–rooted.
By \eqref{mag}, $$\label{collect}
h(t{\mathbf{e}}-{\mathbf{v}})= \sum_{j=0}^d (-1)^j \frac 1 {j!} D_{\mathbf{v}}^jh({\mathbf{e}})t^{d-j}=:\sum_{j \geq 0} a_j t^j.$$ Note that $D_{\mathbf{v}}^j h(t{\mathbf{e}})= D_{\mathbf{v}}^j h({\mathbf{e}})t^{d-j}$ and $D_{\mathbf{v}}^j D_{\mathbf{e}}h(t{\mathbf{e}})= D_{\mathbf{e}}D_{\mathbf{v}}^j h(t{\mathbf{e}})= (d-j)D_{\mathbf{v}}^j h({\mathbf{e}})t^{d-j-1}$. Expanding and comparing coefficients with \eqref{collect} one sees that $g_{\mathbf{v}}(t)=T_{k,d}(h(t{\mathbf{e}}-{\mathbf{v}}))$.
To prove the final statement of the lemma we may by Hurwitz’ theorem on the continuity of zeros assume that $f$ is a $(0,1/k)$–rooted polynomial of degree $d$. We may choose a hyperbolic degree $d$ polynomial $h$ and a vector ${\mathbf{v}}$ such that $f(t)= h(t{\mathbf{e}}-{\mathbf{v}})$, for example $h(x,y)= (-y)^df(-x/y)$, ${\mathbf{e}}=(1,0)$ and ${\mathbf{v}}=(0,1)$. Then ${\mathbf{v}}\in \Lambda_{++}$ and ${\mathbf{w}}={\mathbf{e}}-k{\mathbf{v}}\in \Lambda_{++}$ by e.g. \eqref{dilambdas}. Hence $$T_{k,d}(f)(t) = \chi[{\mathbf{v}},{\mathbf{v}}, \ldots, {\mathbf{v}}, {\mathbf{w}}](t)$$ is real–rooted.
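The operator $T_{k,d}$ acts linearly on coefficient vectors, so the real–rootedness statement of Lemma \[tgv\] can be checked directly (a minimal sketch on random $[0,1/k]$–rooted monic inputs; floating-point roots are only approximately real):

```python
import math
import numpy as np

def T_kd(a, k, d):
    """Apply T_{k,d} to a = [a_0, ..., a_d] (increasing-degree coefficients)."""
    a = list(a) + [0.0]                       # treat a_{d+1} as 0
    return np.array([
        -(((j + 1) / (k + 1)) * a[j + 1] + (d - 1 - j) * a[j])
        * math.factorial(d - j) * math.comb(k + 1, d - j)
        for j in range(d + 1)])

rng = np.random.default_rng(3)
k, d = 3, 6
for _ in range(3):
    f = np.poly(rng.uniform(0, 1 / k, size=d))   # monic, [0, 1/k]-rooted
    g = T_kd(f[::-1], k, d)                      # coefficients of T_{k,d}(f)
    print(np.abs(np.roots(g[::-1]).imag).max())  # ~ 0: real-rooted
```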
The *trace*, $\tr(f)$, of a non-constant polynomial $f$ is the sum of the zeros of $f$ (counted with multiplicity). Let ${\mathcal{M}}_d$ be the affine space of all monic real polynomials of degree $d$.
\[affine\] Let $T : {\mathcal{M}}_d \rightarrow {\mathcal{M}}_m$ be an affine linear operator, and let $\epsilon>0$. Suppose $T$ sends $[a,b]$–rooted polynomials to real–rooted polynomials. Consider the problem of maximizing the largest zero of $T(f)$ over all $[a,b]$–rooted polynomials $f \in {\mathcal{M}}_d$ with $\tr(f) = \epsilon$. Then this (maximal) zero is achieved for some $T(f)$, where $f$ has at most one distinct zero in $(a,b)$.
Moreover, if the maximal zero above is achieved for some $T(f)$, where $f \in {\mathcal{M}}_d$ is $(a,b)$–rooted, then the maximal zero is also achieved for $T ((t-\epsilon/d)^d)$.
Let ${\mathcal{A}}={\mathcal{A}}(a,b,d,\epsilon)$ be the set of all $[a,b]$–rooted polynomials $f \in {\mathcal{M}}_d$ with $\tr(f)=\epsilon$. Note that by continuity, compactness and Hurwitz’ theorem the maximal zero (say $\rho$) is achieved for some $T(f)$, where $f \in {\mathcal{A}}$. We argue that we may move zeros of $f$ to the boundary of $[a,b]$, while retaining $\tr(f)$ and the maximal zero of $T(f)$, as long as $f$ has at least two distinct zeros in $(a,b)$.
Suppose $a<\alpha<\beta<b$ are two zeros of $f\in {\mathcal{A}}$ and that the maximal zero is realized for $T(f)$. For $0<|s| \leq \min(b-\beta, \alpha-a, \beta-\alpha)$, let $$f_s(x) := \frac {(x-\alpha-s)(x-\beta+s)}{(x-\alpha)(x-\beta)} f(x),$$ and note that $f_s \in {\mathcal{A}}$ and $$f= (1-\theta)f_{s}+ \theta f_{-s}, \quad \mbox{ where } \quad \theta = \frac 1 2 \left(1- \frac s {\beta-\alpha}\right)\in [0,1].$$ By assumption $T(f_s)(\rho) \geq 0$. Since $0=T(f)(\rho)= (1-\theta)T(f_{s})(\rho)+ \theta T(f_{-s})(\rho)$, we conclude that $T(f_s)(\rho) = T(f_{-s})(\rho)=0$. Hence the maximal zero $\rho$ is realized also for $T(f_s)$ where $s= -\min(b-\beta, \alpha-a, \beta-\alpha)$. By possibly iterating this process a few times we will have moved at least one interior zero to the boundary. We can continue until there is at most one distinct zero in $(a,b)$.
Suppose the maximal zero $\rho$ above is achieved for some $f \in {\mathcal{M}}_d$ which is $(a,b)$–rooted. Then $\rho$ is also attained for the same problem when we replace $[a,b]$ by $[r,s]$ where $a<r<s<b$ and $r-a$ and $b-s$ are sufficiently small. Hence, by what we have just proved, for each such $r,s$ there are nonnegative integers $i,j$ with $i+j \leq d$ such that $$\label{abs}
T\left( (t-r)^i (t-s)^j \left(t - \frac {\epsilon -ir-js}{d-i-j}\right)^{d-i-j}\right) (\rho)=0.$$ The left–hand–side of \eqref{abs} is a polynomial, say $P_{i,j}(r,s) \in {\mathbb{R}}[r,s]$. Hence the polynomial $\prod_{i,j}P_{i,j}(r,s)$, where the product is over all $i,j$ which are realized for some such $r,s$, vanishes on a set with nonempty interior, so it is identically zero. Hence $P_{i,j}(r,s) \equiv 0$ for some $i,j$. But then $0=P_{i,j}(\epsilon/d, \epsilon/d)=T((t-\epsilon/d)^d)(\rho)$ as desired.
We may now finish the proof that Conjecture \[maxmax2\] implies Conjecture \[maxmax\]. It remains to prove that the largest possible zero of $g_{\mathbf{v}}(t)$, where ${\mathbf{v}}$ satisfies the two conditions above, is achieved when $h(t{\mathbf{e}}-{\mathbf{v}})= (t-\epsilon/d)^d$, assuming (as in Conjecture \[maxmax2\]) that the maximum is achieved for some ${\mathbf{v}}$ for which $h(t{\mathbf{e}}-{\mathbf{v}})$ is $(0,1/k)$–rooted. By e.g. considering $h=\det$ on $d \times d$–matrices, this is equivalent to proving that the maximal zero of $T_{k,d}(f)$, where $f$ ranges over all monic $[0,1/k]$–rooted polynomials of degree $d$ with trace $\epsilon$, is achieved when $f=(t-\epsilon/d)^d$, under the assumption that the maximal zero is achieved for some $(0,1/k)$–rooted $f$. This follows from the last part of Lemma \[affine\].
Sharpness of the bound in Theorem \[t1\] {#sbound}
========================================
We will in this section use results known about the asymptotic behavior of the largest zero of Jacobi polynomials to see that the bound in Theorem \[t1\] is close to being optimal.
Consider the degree $d$ elementary symmetric polynomial in $mk$ variables: $$e_d(x_1,\ldots, x_{mk}) = \sum_{|S|=d} \prod_{i \in S}x_i,$$ which is hyperbolic with respect to the all ones vector ${\mathbf{1}}\in {\mathbb{R}}^{mk}$, see e.g. [@BrOp; @COSW]. Since the coefficients of $e_d({\mathbf{x}})$ are nonnegative, its hyperbolicity cone contains the positive orthant. If ${\mathbf{e}}_i$ denotes the $i$th standard basis vector, then $$\tr({\mathbf{e}}_i) = \frac {d} {mk}, \ \ \rk({\mathbf{e}}_i)=1 \ \ \mbox{ and } \ \ {\mathbf{e}}_1+\cdots+{\mathbf{e}}_{mk}={\mathbf{1}},$$ for all $1\leq i \leq mk$. By symmetry, the partition $$S_1=\{1,\ldots, m\}, S_2=\{m+1, \ldots, 2m\}, \ldots, S_k=\{(k-1)m+1,\ldots, km\}$$ minimizes the bound in Theorem \[t1\]. Now $$\begin{aligned}
e_d\left(t{\mathbf{1}}-\sum_{i \in S_1}{\mathbf{e}}_i\right) &= \sum_{|A|=d} (t- 1)^{|A\cap S_1|}t^{d- |A\cap S_1|}\\
&= \sum_{j=0}^d \binom {m(k-1)} {j} \binom m {d-j}(t-1)^{d-j}t^{j}\\
&= P^{(mk-m-d,m-d)}_d(2t-1),\end{aligned}$$ where $P^{(\alpha,\beta)}_k(t)$ is a Jacobi polynomial. The asymptotic behavior of the largest zero of Jacobi polynomials is well studied, see e.g. [@Is; @Kra]. For example, if $\alpha_d, \beta_d >-1$ satisfy $$\frac {\alpha_d}{\alpha_d +\beta_d +2d} \to a \mbox{ and } \frac {\beta_d}{\alpha_d +\beta_d +2d} \to b \mbox{ as } d \to \infty,$$ then the largest zero of $P_d^{(\alpha_d,\beta_d)}(t)$ converges to $$\label{abby}
b^2-a^2+\sqrt{(a^2+b^2-1)^2-4a^2b^2},$$ as $d \to \infty$, see [@Is Theorem 8].
Fix $\epsilon$ and $k$, and let $m:=m(d)=\lceil d/(\epsilon k) \rceil $ and $\alpha_d=mk-m-d$, $\beta_d=m-d$. Then $a=1-1/k-\epsilon$ and $b=1/k-\epsilon$, and so by \[abby\] the largest zero of $P^{(\alpha_d, \beta_d)}_d(2t-1)$ converges to $$\frac 1 k + \epsilon \frac {k-2} k + 2 \frac{\sqrt{k-1}}{k} \sqrt{\epsilon-\epsilon^2},$$ which should be compared to the bound achieved by Theorem \[t1\] (as $m\to \infty$): $$\frac 1 k + \epsilon + 2\frac {\sqrt{k}} k \sqrt{\epsilon}.$$ We conclude:
\[lowprop\] There is no version of Theorem \[t1\] whose ($m,d$-independent) right–hand–side bound is smaller than $$\label{bbound}
\frac 1 k + \epsilon \frac {k-2} k + 2 \frac{\sqrt{k-1}}{k} \sqrt{\epsilon-\epsilon^2},$$ for $\epsilon \leq 1-1/k$.
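The convergence behind Proposition \[lowprop\] can also be observed numerically. A sketch (Python with scipy; the parameters $\epsilon=0.1$, $k=3$ and the degrees are our choices for illustration):

```python
import numpy as np
from scipy.special import roots_jacobi  # nodes = zeros of the Jacobi polynomial

eps, k = 0.1, 3  # sample parameters
limit = 1/k + eps*(k - 2)/k + 2*np.sqrt(k - 1)/k*np.sqrt(eps - eps**2)
for d in (10, 40, 160):
    m = int(np.ceil(d / (eps * k)))                 # m = ceil(d/(eps*k))
    alpha, beta = m*k - m - d, m - d                # alpha_d, beta_d as above
    x_max = roots_jacobi(d, alpha, beta)[0].max()   # largest zero on [-1, 1]
    print(d, (x_max + 1) / 2, limit)                # undo the substitution t -> 2t-1
```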
It is known that if $1<d<n-1$, then $e_d(x_1, \ldots, x_n)$ is *not* a determinantal polynomial, i.e., there is no tuple of positive semidefinite matrices $A_1, \ldots, A_n$ such that $$e_d(x_1, \ldots, x_n) = \det(x_1A_1+ \cdots+x_nA_n).$$ Thus we cannot directly derive an analog of Proposition \[lowprop\] for Theorem \[MSSmain\].
Consequences for strong Rayleigh measures and weak half-plane property matroids
===============================================================================
A discrete probability measure, $\mu$, on $2^{[n]}$ is called *strong Rayleigh* if its *multivariate partition function* $$P_\mu({\mathbf{x}}) := \sum_{S \subseteq [n]} \mu(\{S\}) \prod_{j \in S}x_j,$$ is *stable*, i.e., if $P_\mu({\mathbf{x}}) \neq 0$ whenever ${{\rm Im}}(x_j)>0$ for all $1 \leq j \leq n$. Strong Rayleigh measures were investigated in [@BBL], see also [@Pem; @Wag]. We shall now reformulate Theorem \[t1\] in terms of strong Rayleigh measures. The measure $\mu$ is of *constant sum* $d$ if $|S|=d$ whenever $\mu(\{S\}) \neq 0$, i.e., if $P_\mu({\mathbf{x}})$ is homogeneous of degree $d$. It is not hard to see that a constant sum measure $\mu$ is strong Rayleigh if and only if $P_\mu({\mathbf{x}})$ is hyperbolic with respect to the all ones vector ${\mathbf{1}}$ and ${\mathbb{R}}_+^n \subseteq \Lambda_+({\mathbf{1}})$, see [@BBL]. Note that if ${\mathbf{e}}_i$ is the $i$th standard basis vector then $$\tr({\mathbf{e}}_i) = \sum_{S \ni i} \mu(\{S\})= {\mathbb{P}}[S : i \in S],$$ where the trace is defined as in the introduction for the hyperbolic polynomial $P_\mu$, with ${\mathbf{e}}={\mathbf{1}}$. If $S \subseteq [n]$ we write ${\mathbf{e}}_S:=\sum_{i \in S}{\mathbf{e}}_i$. The following theorem is now an immediate consequence of Theorem \[t1\].
\[t11\] Let $k\geq 2$ be an integer and $\epsilon$ a positive real number. Suppose $\mu$ is a constant sum strong Rayleigh measure on $2^{[n]}$ such that ${\mathbb{P}}[S : i \in S] \leq \epsilon$ for all $1\leq i \leq n$. Then there is a partition $S_1 \cup \cdots \cup S_k=[n]$ such that $$\| {\mathbf{e}}_{S_i} \| = {\lambda_{\rm max}}({\mathbf{e}}_{S_i}) \leq \frac 1 k \delta(k\epsilon,n)$$ for each $1 \leq i \leq k$.
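The trace hypothesis in Theorem \[t11\] is simply a bound on the marginals of $\mu$. A toy illustration (Python; the uniform measure on the $2$-subsets of a $4$-set is our example — its partition function is $e_2({\mathbf{x}})/6$, which is stable since elementary symmetric polynomials have the half-plane property, cf. [@COSW]):

```python
import itertools

# Uniform constant-sum measure on the d-subsets of an n-element ground set
n, d = 4, 2
subsets = list(map(frozenset, itertools.combinations(range(n), d)))
mu = {S: 1 / len(subsets) for S in subsets}

# tr(e_i) = P[S : i in S], the marginal probability of each element
marginals = [sum(p for S, p in mu.items() if i in S) for i in range(n)]
print(marginals)  # [0.5, 0.5, 0.5, 0.5], i.e. each marginal equals d/n
```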
Let us also see that Theorem \[t1\] easily follows from Theorem \[t11\]. Assume the hypothesis in Theorem \[t1\], and form the polynomial $$P({\mathbf{x}})= h(x_1{\mathbf{u}}_1+\cdots+x_m{\mathbf{u}}_m)/h({\mathbf{e}}).$$ It follows that $P({\mathbf{x}})$ is hyperbolic with hyperbolicity cone containing the positive orthant. Since $\rk({\mathbf{u}}_i) \leq 1$ for all $1\leq i \leq m$ we may expand $P({\mathbf{x}})$ as $$P({\mathbf{x}})= \sum_{S \subseteq [m]} \mu(\{S\}) \prod_{j \in S}x_j,$$ where $\mu(\{S\}) \geq 0$ for all $S \subseteq [m]$. Since $\tr_h({\mathbf{u}}_i)= \tr_P({\mathbf{e}}_i)$ for all $1\leq i \leq m$, the conclusion in Theorem \[t1\] now follows from Theorem \[t11\].
The *support* of $\mu$ is $\{S: \mu(\{S\})>0\}$. Choe *et al.* [@COSW] proved that the support of a constant sum strong Rayleigh measure is the set of bases of a matroid. Such matroids are called *weak half-plane property matroids*. The rank function, $r$, of such a matroid is given by $$r(S) = \rk\left(\sum_{i \in S}{\mathbf{e}}_i\right),$$ where $\rk$ is the rank function associated to the hyperbolic polynomial $P_\mu$ as defined in the introduction, see [@BrObs; @Gu]. Edmonds’ Base Packing Theorem [@Edm] characterizes, in terms of the rank function, when a matroid contains $k$ disjoint bases: a rank $d$ matroid on $n$ elements, with rank function $r$, contains $k$ disjoint bases if and only if $$r(S) \geq d -\frac {n-|S|} k, \quad \mbox{ for all } S \subseteq [n].$$ Using Theorem \[t11\] we may deduce a sufficient condition (of a totally different form) for a matroid with the weak half-plane property to have $k$ disjoint bases:
\[packing\] Let $k\geq 2$ be an integer. Suppose $\mu$ is a constant sum strong Rayleigh measure such that $${\mathbb{P}}[S : i \in S] \leq \left(\frac 1 {\sqrt{k-1}} - \frac 1 {\sqrt{k}}\right)^2$$ for all $1\leq i \leq n$. Then the support of $\mu$ contains $k$ disjoint bases.
Suppose $\tr({\mathbf{e}}_i) \leq \epsilon$ for all $i$. Let $S_1 \cup \cdots \cup S_k=[n]$ be a partition afforded by Theorem \[t11\], and let ${\mathbf{v}}_j= \sum_{i \in S_j}{\mathbf{e}}_i$ for each $j \in [k]$. If we can prove that ${\lambda_{\rm min}}({\mathbf{v}}_j)>0$, then $\rk({\mathbf{v}}_j)=\rk({\mathbf{1}})$ and so $S_j$ contains a basis. Now, by Theorem \[t11\], the identity ${\lambda_{\rm min}}({\mathbf{v}})=1-{\lambda_{\rm max}}({\mathbf{1}}-{\mathbf{v}})$, and the convexity of ${\lambda_{\rm max}}$: $$\begin{aligned}
{\lambda_{\rm min}}({\mathbf{v}}_j) &= 1-{\lambda_{\rm max}}({\mathbf{1}}-{\mathbf{v}}_j) =1-{\lambda_{\rm max}}\left(\sum_{i \neq j}{\mathbf{v}}_i\right) \\
&\geq 1- \sum_{i \neq j}{\lambda_{\rm max}}({\mathbf{v}}_i) \geq 1-\frac {k-1} {k} \delta(k\epsilon,n) \\
&> 1- \frac {k-1} {k} \left(1+\sqrt{\epsilon k}\right)^2.\end{aligned}$$ Hence it suffices that the final lower bound $1- \frac {k-1} {k} \left(1+\sqrt{\epsilon k}\right)^2$ be nonnegative, which is equivalent to $$\epsilon \leq \left(\frac 1 {\sqrt{k-1}} - \frac 1 {\sqrt{k}}\right)^2.$$
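A quick floating-point sanity check (ours, not part of the proof) that this threshold is exactly where the final lower bound vanishes:

```python
import math

# 1 - (k-1)/k * (1 + sqrt(eps*k))^2 vanishes at eps = (1/sqrt(k-1) - 1/sqrt(k))^2
for k in (2, 3, 5, 10):
    eps = (1 / math.sqrt(k - 1) - 1 / math.sqrt(k))**2
    print(k, 1 - (k - 1) / k * (1 + math.sqrt(eps * k))**2)  # ~0 up to rounding
```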
We have not investigated the sharpness of Theorem \[packing\], nor whether it is possible to prove analogous versions for arbitrary matroids. For an arbitrary matroid on $[n]$ one could take the uniform measure on the set of bases of the matroid and define $\tr(i) = {\mathbb{P}}[S: i\in S]$. What trace bounds guarantee the existence of $k$ disjoint bases?
It would be interesting to see if other theorems on matroids have analogs for weak half-plane property matroids using Theorem \[t11\]. Also, can we find continuous versions of theorems in matroid theory using the analogy that Theorem \[t1\] can be seen as a continuous version of Edmonds’ Base Packing Theorem?
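For small ground sets, Edmonds’ criterion can be tested by brute force against a rank oracle; a sketch (Python; the uniform matroid $U_{2,6}$ is our toy example):

```python
import itertools

def has_k_disjoint_bases(rank, n, d, k):
    """Edmonds' criterion: a rank-d matroid on n elements has k disjoint bases
    iff r(S) >= d - (n - |S|)/k for every subset S of the ground set."""
    return all(rank(S) >= d - (n - len(S)) / k
               for r in range(n + 1)
               for S in map(frozenset, itertools.combinations(range(n), r)))

# Rank oracle of the uniform matroid U_{d,n}: rank(S) = min(|S|, d)
n, d = 6, 2
print(has_k_disjoint_bases(lambda S: min(len(S), d), n, d, k=3))  # True
```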
[99]{} M. F. Atiyah, R. Bott, L Gårding, Lacunas for hyperbolic differential operators with constant coefficients. I, Acta Math. [**124**]{} (1970), 109–189.
H. H. Bauschke, O. Güler, A. S. Lewis, H. S. Sendov, Hyperbolic polynomials and convex analysis, Canad. J. Math. [**53**]{} (2001), 470–488.
J. Borcea, P. Brändén, T. M. Liggett, [Negative dependence and the geometry of polynomials]{}, J. Amer. Math. Soc. [**22**]{} (2009), 521–567, <http://arxiv.org/abs/0707.2340>.
P. Brändén, [Obstructions to determinantal representability]{}, Adv. Math. [**226**]{} (2011), 1202–1212, <http://arxiv.org/pdf/1004.1382.pdf>.
P. Brändén, Hyperbolicity cones of elementary symmetric polynomials are spectrahedral, Optim. Lett. [**8**]{} (2014), 1773–1782, <http://arxiv.org/abs/1204.2997>.
Y. Choe, J. Oxley, A. Sokal, D. G. Wagner, [ Homogeneous multivariate polynomials with the half-plane property]{}. Adv. Appl. Math. [**32**]{} (2004), 88–187, <http://arxiv.org/pdf/math/0202034.pdf>.
P. G. Casazza, Consequences of the Marcus/Spielman/Srivastava solution to the Kadison–Singer Problem, <http://arxiv.org/abs/1407.4768>.
M. Chudnovsky, P. Seymour, The roots of the independence polynomial of a clawfree graph, J. Combin. Theory Ser. B [**97**]{} (2007), 350–357.
J. Edmonds, Lehman’s switching game and a theorem of Tutte and Nash-Williams, J. Res. Nat. Bur. Standards Sect. B [**69 B**]{} (1965), 73–77.
L. Gårding, [An inequality for hyperbolic polynomials]{}, J. Math. Mech. [**8**]{} (1959), 957–965.
L. Gurvits, Combinatorial and algorithmic aspects of hyperbolic polynomials, <http://arxiv.org/abs/math/0404474>.
L. Hörmander, The analysis of linear partial differential operators. II. Differential operators with constant coefficients, Springer-Verlag, Berlin, 1983.
M. E. H. Ismail, X. Li, Bound on the extreme zeros of orthogonal polynomials, Proc. Amer. Math. Soc. [**115**]{} (1992), 131–140.
R. V. Kadison, I. M. Singer, Extensions of pure states, Amer. J. Math. [**81**]{} (1959), 383–400.
I. Krasikov, On extreme zeros of classical orthogonal polynomials, J. Comput. Appl. Math. [**193**]{} (2006), 168–182.
A. W. Marcus, D. A. Spielman, N. Srivastava, Interlacing families I: Bipartite Ramanujan graphs of all degrees, Ann. of Math. (to appear), <http://arxiv.org/abs/1304.4132>.
A. W. Marcus, D. A. Spielman, N. Srivastava, Interlacing families II: Mixed characteristic polynomials and the Kadison-Singer problem, Ann. of Math. (to appear), <http://arxiv.org/abs/1306.3969>.
R. Pemantle, Hyperbolicity and stable polynomials in combinatorics and probability, Current developments in mathematics, 2011, 57–123, Int. Press, Somerville, MA, 2012, <http://arxiv.org/abs/1210.3231>.
J. Renegar, Hyperbolic programs, and their derivative relaxations, Found. Comput. Math., [**6**]{} (2006), 59–79.
V. Vinnikov, LMI representations of convex semialgebraic sets and determinantal representations of algebraic hypersurfaces: past, present, and future, Mathematical methods in systems, optimization, and control, 325–349, Oper. Theory Adv. Appl., [**222**]{}, Birkhäuser/Springer Basel AG, Basel, 2012, <http://arxiv.org/abs/1205.2286>.
D. G. Wagner, Multivariate stable polynomials: theory and applications, Bull. Amer. Math. Soc. [**48**]{} (2011), 53–84, <http://arxiv.org/abs/0911.3569>.
N. Weaver, The Kadison-Singer problem in discrepancy theory, Discrete Math. [**278**]{} (2004), 227–239.
---
abstract: 'We introduce the notion of homotopy inner products for any cyclic quadratic Koszul operad $\mathcal O$, generalizing the construction already known for the associative operad. This is done by defining a colored operad $\widehat{\mathcal O}$, which describes modules over $\mathcal O$ with invariant inner products. We show that $\widehat{\mathcal O}$ satisfies Koszulness and identify algebras over a resolution of $\widehat{\mathcal O}$ in terms of derivations and module maps. As an application we construct a homotopy inner product over the commutative operad on the cochains of any Poincaré duality space.'
address:
- 'Dipartimento di Matematica “G. Castelnuovo”, Università di Roma “La Sapienza”, Piazzale Aldo Moro, 2 I-00185 Roma, Italy'
- 'College of Technology of the City University of New York, Department of Mathematics, 300 Jay Street, Brooklyn, NY 11201, USA'
author:
- Riccardo Longoni
- Thomas Tradler
title: Homotopy Inner Products for Cyclic Operads
---
Introduction
============
In [@GeK], the notion of cyclic operads and invariant inner product for such operads was defined. A homotopy version of these inner products for the associative operad was given in [@Tr] and the starting point for a similar version for the commutative operad was considered in [@Ginot]. It is natural to ask for a generalization of these constructions applicable to any cyclic operad. This is what is done in this paper.
Starting with a cyclic operad $\mathcal O$, we use the notion of colored operads to incorporate the cyclic structure of $\mathcal O$ into the colored operad $\widehat{\mathcal O}$. Algebras over the colored operad $\widehat{\mathcal O}$ consist of pairs $(A,M)$, where $A$ is an algebra over $\mathcal O$ and $M$ is an $\mathcal O$-module over $A$ which has an invariant inner product. Section \[cyclic-op\] is devoted to explicitly defining $\widehat{\mathcal O}$, and in the case that $\mathcal O$ is given by quadratic generators and relations, we give a description of $\widehat{\mathcal O}$ in terms of generators and relations coming from those of $\mathcal O$.
A major tool in the theory of operads is the notion of Koszul duality. Let us recall that a (colored) operad $\mathcal P$ is called Koszul if there is a quasi-isomorphism of operads $\mathbf{D}(\mathcal P^!)\to \mathcal P$, where $\mathbf{D}(\mathcal P^!)$ denotes the dual operad (in the sense of [@GK (3.2.12)]) on the dual quadratic operad $\mathcal P^!$. This implies that the notion of algebras of $\mathcal P$ has a canonical infinity version given by algebras over $\mathcal P_\infty:=\mathbf{D}(\mathcal P^!)$. Our main theorem states that the Koszulness property is preserved when going from $\mathcal O$ to $\widehat{\mathcal O}$.
Let $\mathcal O$ be a cyclic quadratic operad. If $\mathcal O$ is Koszul, then so is $\widehat{\mathcal O}$, i.e. we have a resolution $\widehat{\mathcal O}_\infty:=\mathbf{D}(\widehat{\mathcal O^!})$ of $\widehat{\mathcal O}$.
The proof of this theorem will be given in section \[quadrat-koszul\]. Theorem \[O\_hat\_Koszul\] justifies the concept of the infinity version of algebras and modules with invariant inner products over cyclic operads $\mathcal O$ as algebras over the operad $\widehat{\mathcal O}_\infty=\mathbf{D}(\widehat{\mathcal O^!})$. The concept of algebras over the operad $\widehat{\mathcal O}_\infty$ will be investigated in more detail in section \[homotop-ip\]. In particular, we explicitly reinterpret in proposition \[O\_hat\_algebras\] algebras over $\widehat{\mathcal O}_\infty$ in terms of derivations and module maps over free $\mathcal O^!$-algebras and modules.
Recall that the associative operad $\mathcal Assoc$, the commutative operad $\mathcal Comm$ and the Lie operad $\mathcal Lie$ are all cyclic quadratic Koszul operads, so that theorem \[O\_hat\_Koszul\] may be applied to all these cases. As a particular application of infinity inner products, we consider the examples of the associative operad $\mathcal Assoc$ and the commutative operad $\mathcal Comm$, which have an interesting application to the chain level of a Poincaré duality space $X$. In [@TZ], it was shown that the simplicial cochains $C^\ast(X)$ on $X$ with rational coefficients form an algebra over the operad $\widehat{\mathcal Assoc}_\infty$. This structure was then used in [@Tr2] and [@TZ2] to obtain string topology operations on the Hochschild cohomology, respectively the Hochschild cochain complex, of $C^\ast(X)$. Since the method of constructing the $\widehat{\mathcal Assoc}_\infty$ algebra on $C^\ast (X)$ easily transfers to the commutative case, we will show in section \[Comm-section\] that $C^\ast(X)$ also forms a $\widehat{\mathcal Comm}_\infty$-algebra. We expect that this stronger algebraic structure should induce even more string topology operations, which take into account the commutative nature of the cochains of the space $X$. A first step in this direction was done in [@TZ3], where the string topology operations for algebras over $\widehat{\mathcal Assoc}$ and $\widehat{\mathcal Comm}$ were investigated.
We are grateful to Dennis Sullivan for many valuable suggestions and illuminating discussions. We also thank Domenico Fiorenza, Martin Markl, Jim Stasheff and Scott Wilson for useful comments and remarks regarding this topic. The second author was partially supported by the Max-Planck Institute in Bonn.
$\widehat{\mathcal Comm}_\infty$ structure for Poincaré duality spaces {#Comm-section}
======================================================================
Before going into the details of the construction of homotopy inner products over a general cyclic quadratic operad $\mathcal O$, we give an application for the case of the commutative operad $\mathcal Comm$. More precisely, we show how a homotopy $\mathcal Comm$-inner product arises on the chain level of a Poincaré duality space $X$. In fact, the construction for the homotopy $\mathcal Comm$-algebra is taken from R. Lawrence and D. Sullivan’s paper [@S] on the construction of local infinity structures. In [@TZ], M. Zeinalian and the second author construct homotopy $\mathcal Assoc$-inner products on a Poincaré duality space $X$. The same reasoning may in fact be used to construct homotopy $\mathcal Comm$-inner products on $X$. The proof of the next proposition is a sketch based on these arguments.
\[prop:comm-pd\] Let $X$ be a closed, finitely triangulated Poincaré duality space, such that the closure of every simplex is contractible. Denote by $C=C_\ast(X)$ the simplicial chains on $X$. Then its dual space $A:=C^*=Hom(C_*(X),k)$ has the structure of a $\widehat{\mathcal Comm}_\infty$ algebra, such that the lowest multiplication is the symmetrized Alexander-Whitney multiplication and the lowest inner product is given by capping with the fundamental cycle $\mu\in C$.
Let $\mathcal Lie$ denote the Lie-operad, $F_{\mathcal Lie}V=\bigoplus_{n\geq 1} (\mathcal Lie(n)\otimes V^{\otimes n})_{S_n}$ denote the free Lie algebra generated by $V$, and $F_{\mathcal Lie,V}W=\bigoplus_{n\geq 1} (\bigoplus_{k+l=n-1}\mathcal Lie(n)\otimes V^{\otimes k}\otimes W\otimes V^{\otimes l})_{S_n}$ the canonical module over $F_{\mathcal Lie}V$. We will see in proposition \[O\_hat\_algebras\] and example \[exa-comm\] that the required data for a homotopy $\mathcal Comm$-inner product consists of:
- a derivation $d\in \mathrm{Der}(F _{\mathcal Lie}\,C[1])$ of degree $1$, with $d^2=0$,
- a derivation $g\in \mathrm{Der}_d (F_{\mathcal Lie,\,C[1]}C[1])$ over $d$ of degree $1$, with $g^2=0$, which induces a derivation $h\in \mathrm{Der}_d (F_{\mathcal Lie,\,C[1]}C^*[1])$ over $d$ with $h^2=0$,
- a module map $f\in \mathrm{Mod}(F_{\mathcal Lie, C[1]}C^*, F_{\mathcal
Lie,C[1]}C[1])$ of degree $0$ such that $f\circ h = g \circ f$.
In order to construct the derivation $d\in \mathrm{Der}(F _{\mathcal Lie}\,C[1])$ with $d^2=0$, let $F_{\mathcal Lie}C[1]=L_1\oplus L_2\oplus\dots$, where $L_n=(\mathcal Lie(n)\otimes C[1]^{\otimes n})_{S_n}$, be the decomposition of $F_{\mathcal Lie}C[1]$ by the monomial degree in $C[1]$. Then, $d:F_{\mathcal Lie}C[1]\to F_{\mathcal Lie}C[1]$ is determined by maps $d=d_1+d_2+\dots$, where $d_i:C[1]\to L_i$ is lifted to $F_{\mathcal Lie}(C[1])$ as a derivation. Let $d_1$ be the differential on $C[1]$, and $d_2$ be the symmetrized Alexander-Whitney comultiplication. For the general $d_i$, we use the inductive hypothesis that $d_1$, …, $d_{i-1}$ are local maps so that $\nabla_i:=d_1+ \dots+d_{i-1}$ has a square $\nabla_i^2$ mapping only into higher components $L_{i}\oplus L_{i+1}\oplus \dots$. Here, “local” means that every simplex maps into the sub-Lie algebra of its closure. Now, by the Jacobi-identity, it is true that $0=[\nabla_i, [\nabla_i,\nabla_i]]=
[d_1,e_i]\text{+ higher terms}$, where $e_i:C[1]\to L_i$ is the lowest term of $[\nabla_i,\nabla_i]$. Thus $e_i$ is $[d_1,.]$-closed and hence, using the contractibility hypothesis of the proposition, also locally $[d_1,.]$-exact. These exact terms can be put together to give a map $d_i$, so that $[d_1,d_i]$ vanishes on $L_1\oplus \dots \oplus
L_{i-1}$ and equals $- 1/2\cdot e_i$ on $L_i$. In other words, $(d_1+
\dots+d_i)^2=1/2\cdot [d_1+ \dots+d_i,d_1+ \dots+d_i]=1/2\cdot [\nabla_i
,\nabla_i ]+ [d_1,d_i]+\text{higher terms}$, maps only into $L_{i+1}\oplus L_{i+2}\oplus\dots$. This completes the inductive step, and thus produces the desired derivation $d$ on $F_{\mathcal Lie}(C[1])$.
In a similar way, we may produce the derivation $g$ of $F_{\mathcal Lie,\,C[1]}C[1]$ over $d$, by decomposing $F_{\mathcal Lie, C[1]} C[1]=L'_1\oplus L'_2\oplus\dots$, where $L'_n$ is given by the space $\left(\bigoplus_{k+l=n-1} \mathcal Lie(n) \otimes C[1]^{\otimes k} \otimes C[1] \otimes C[1]^{\otimes l}\right)_{S_{n}}$. With this notation, $g$ is written as a sum $g=g_1+g_2+\dots$, where $g_i:C[1]\to L'_i$ is lifted to $F_{\mathcal Lie, C[1]} C[1]$ as a derivation over $d$, and $(g_1+\dots+g_i)^2$ only maps into $L'_{i+1}\oplus L'_{i+2}\oplus\dots$.
Using a slight variation of the above method, we may also construct the desired homotopy $\mathcal Comm$-inner product, i.e. the module map $f$ stated above. More precisely, we build a map $\chi:C[1]\to Mod(F_{\mathcal Lie, C[1]}C^*[1],F_{\mathcal Lie, C[1]}C[1])$, so that $\chi$ is a chain map under the differential $d_1$ on $C[1]$, and the differential $\delta(f)=f\circ h - (-1)^{|f|} g\circ f$ on $Mod(F_{\mathcal Lie, C[1]}C^*[1],F_{\mathcal Lie, C[1]}C[1])$. Since a module map is given by the components $M_n=\bigoplus_{k+l=n-2} \mathcal Lie(n)\otimes C[1]^{\otimes k}\otimes C[1]\otimes C[1]^{\otimes l}\otimes C[1]$, it is enough to construct $\chi$ as a sum $\chi=\chi_2+\chi_3+\dots$, where $\chi_i:C[1]\to M_i$. Now, the lowest component $\chi_2:C[1]\to C[1]\otimes C[1]$ is defined to be the symmetrized Alexander-Whitney comultiplication. For the induction, we assume that $\Upsilon_i:=\chi_2+\dots+\chi_{i-1}$ is a local map such that $D(\Upsilon_i):=\Upsilon_i\circ d_1-\delta\circ \Upsilon_i$ maps only into higher components $M_i\oplus M_{i+1}\oplus\dots$. Let $\epsilon_i:C[1]\to M_i$ be the lowest term of $D(\Upsilon_i)$. Since $D^2=0$ and $\delta$ has $d_1$ as its lowest component, we see that $\epsilon_i$ is $[d_1,.]$-closed, and by the hypothesis of the proposition also locally $[d_1,.]$-exact. These exact terms can be put together as before to produce a map $\chi_i$, so that now $D(\Upsilon_{i+1})$, with $\Upsilon_{i+1}:=\chi_2+\dots+\chi_i$, only maps into $M_{i+1}\oplus M_{i+2}\oplus\dots $. We therefore obtain the chain map $\chi$, and with this, we define the homotopy $\mathcal Comm$-inner product as $f:=\chi(\mu)\in Mod(F_{\mathcal Lie, C[1]}C^*[1],F_{\mathcal Lie, C[1]}C[1]) $, where $\mu\in C$ denotes the fundamental cycle of the space $X$. Since $\mu$ is $d_1$-closed, it follows that $f\circ h-g\circ f=0$.
The operad $\widehat{\mathcal O}$ {#cyclic-op}
=================================
In this section, we define for any cyclic operad $\mathcal O$ the colored operad $\widehat{\mathcal O}$. In the case that $\mathcal O$ is cyclic quadratic, we give an explicit description of $\widehat{\mathcal O}$ in terms of generators and relations coming from generators and relations in $\mathcal O$.
We assume that the reader is familiar with the notion of operads, colored operads and cyclic operads. For a good introduction to operads, we refer to [@Ad], [@GK] and [@MSS]; for cyclic operads we recommend [@GeK] and [@MSS]. Colored operads were first introduced in [@BV] and appeared in many other places, see e.g. [@L] and [@BM]. Since in our case we only need a special type of colored operad, it will be convenient to set up notation with the following definition.
As in [@GK (1.2.1)] and [@GeK (1.1)], we assume throughout this paper that $k$ is a field of characteristic $0$. Note, however, that for certain operads, such as e.g. the associative operad, a more general setup is possible.
\[0/1-operad\] Let $\mathcal P$ be a 3-colored operad in the category of (differential graded) vector spaces, where we use the three colors “full", “dashed" and “empty", in symbols written ${
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}},{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}},\varnothing$. This means that to each finite sequence of symbols $x_1, \dots, x_n, x\in \{{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}},{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}},\varnothing \}$, we have (differential graded) vector spaces $\mathcal P(x_1,\dots, x_n; x)$ over $k$, which label the operations with $n$ inputs with colors $x_1,\dots, x_n$, and one output with color $x$. The operad $\mathcal P$ comes with maps $\circ_i: \mathcal P(x_1,\dots,x_n;x) \otimes \mathcal P(y_1,\dots,y_m;x_i) \to \mathcal P(x_1,\dots,x_{i-1},y_1,\dots, y_m,x_{i+1},\dots, x_n;x)$ which label the composition in $\mathcal P$, and with an action of the symmetric group $S_n$, which, for $\sigma\in S_n$, maps $\mathcal P(x_1,\dots, x_n;x)\to \mathcal P(x_{\sigma(1)},\dots,x_{\sigma(n)};x)$. These maps have to satisfy the usual associativity and equivariance axioms of colored operads.
$\mathcal P$ is called a 0/1-operad if the color $\varnothing$ can appear only as an output, and the only nontrivial spaces with one input are $\mathcal P({
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}};{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}})=k$ and $\mathcal P({
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}};{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}})=k$. We assume furthermore that there are fixed generators of the spaces $\mathcal P({
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}};{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}})=k$ and $\mathcal P({
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}};{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}})=k$.
Graphically, we represent $\mathcal P(x_1,\dots,x_n;x)$ by a tree with $n$ inputs and one output of the given color. Since the color $\varnothing$ cannot appear as an input, we may use the following convention: we represent the output $\varnothing$ with a blank line, i.e., with no line, and we say that the operation “has no output”. $$\begin{pspicture}(0,.5)(4,4)
\psline[linestyle=dashed, arrowsize=0.1, arrowinset=0](2,2)(1.2,3)
\psline[arrowsize=0.1, arrowinset=0](2,2)(1.6,3)
\psline[linestyle=dashed, arrowsize=0.1, arrowinset=0](2,2)(2,3)
\psline[linestyle=dashed, arrowsize=0.1, arrowinset=0](2,2)(2.4,3)
\psline[arrowsize=0.1, arrowinset=0](2,2)(2.8,3)
\rput[b](2,.5){$\mathcal P ({
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}},{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}},{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}},{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}},{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}};\varnothing)$}
\rput[b](2,3.2){$1\,\,\, 2\,\,\, 3\,\,\, 4\,\,\, 5$}
\end{pspicture}
\quad \quad \quad
\begin{pspicture}(0,0)(4,3.6)
\psline[arrowsize=0.1, arrowinset=0](2,2)(1.2,3)
\psline[linestyle=dashed, arrowsize=0.1, arrowinset=0](2,2)(1.6,3)
\psline[linestyle=dashed, arrowsize=0.1, arrowinset=0](2,2)(2,3)
\psline[arrowsize=0.1, arrowinset=0](2,2)(2.4,3)
\psline[arrowsize=0.1, arrowinset=0](2,2)(2.8,3)
\psline[linestyle=dashed, arrowsize=0.1, arrowinset=0](2,2)(2,1)
\rput[b](2,0){$\mathcal P ({
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}},{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}},{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}},{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}},{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}};{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}})$}
\rput[b](2,3.2){$1\,\,\, 2\,\,\, 3\,\,\, 4\,\,\, 5$}
\end{pspicture}$$ The canonical example of a 0/1-operad is the endomorphism 0/1-operad given for $k$-vector spaces $A$ and $M$ by $$\begin{aligned}
{\mathcal E\!nd}^{A,M}(\vec X;{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}})=& Hom(\text{tensor products of $A$ and $M$},A)\\
{\mathcal E\!nd}^{A,M}(\vec X;{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}})=& Hom(\text{tensor products of $A$ and $M$},M)\\
{\mathcal E\!nd}^{A,M}(\vec X;\varnothing)=& Hom(\text{tensor products of $A$ and
$M$},k).\end{aligned}$$ With this notation $(A,M,k)$ is an algebra over the 0/1-operad $\mathcal P$ if there exists a 0/1-operad map $\mathcal P \to {\mathcal E\!nd}^{A,M}$. By slight abuse of language we will also call the tuple $(A,M)$ an algebra over $\mathcal P$.
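As a toy model of the color bookkeeping in this definition, the following sketch (Python; all names are ours) composes formal operations while enforcing that the colors match and that $\varnothing$ never occurs as an input:

```python
from dataclasses import dataclass

FULL, DASHED, EMPTY = "full", "dashed", "empty"

@dataclass
class Op:
    """A formal operation: n colored inputs and one colored output."""
    inputs: tuple
    output: str
    name: str

def circ(f, i, g):
    """Partial composition f o_i g (1-based): plug g's output into f's i-th input."""
    assert g.output != EMPTY, "the empty color can only occur as an output"
    assert f.inputs[i - 1] == g.output, "input/output colors must match"
    new_inputs = f.inputs[:i - 1] + g.inputs + f.inputs[i:]
    return Op(new_inputs, f.output, f"({f.name} o_{i} {g.name})")

pairing = Op((DASHED, DASHED), EMPTY, "<,>")   # an 'inner product' with no output
action  = Op((FULL, DASHED), DASHED, "m")      # a 'module action'
print(circ(pairing, 2, action))  # inputs (dashed, full, dashed), output empty
```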
It is our aim to define for each cyclic operad $\mathcal O$, the associated 0/1-operad $\widehat{ \mathcal O}$, which incorporates the cyclic structure as a colored operad. Before doing so, let us briefly recall the definition of a cyclic operad from [@GeK Theorem (2.2)].
Let $\mathcal O$ be an operad, i.e. we have vector spaces $\mathcal O(n)$ for $n\geq 1$, composition maps $\circ_i:\mathcal O(n)\otimes \mathcal O(m)\to \mathcal O(n+m-1)$, and an $S_n$-action on $\mathcal O(n)$ for each $n$, satisfying the usual axioms, see [@GK (1.2.1)]. $\mathcal O$ is called [*cyclic*]{} if there is an action of the symmetric group $S_{n+1}$ on $\mathcal O(n)$, which extends the given $S_n$-action, and satisfies, for $1\in\mathcal O(1)$, $\alpha\in \mathcal O(m)$, $\beta\in \mathcal O(n)$, the following relations: $$\begin{aligned}
\label{compos_cyclic1} \tau_2(1)&=&1,\\ \label{compos_cyclic2}
\tau_{m+n}(\alpha\circ_k \beta)&=&\tau_{m+1}(\alpha)\circ_{k+1}
\beta,\quad\quad\quad \text{ for } k<m \\ \label{compos_cyclic3}
\tau_{m+n}(\alpha\circ_m \beta)&=&\tau_{n+1}(\beta)\circ_1
\tau_{m+1}(\alpha),\end{aligned}$$ where $\tau_{j}\in S_{j}$ denotes the cyclic rotation of $j$ elements $\tau_{j} :=1\in{\mathbb{Z}}_{j}\subset S_{j}$.
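For the endomorphism operad of ${\mathbb{R}}^d$ with its standard inner product, the cyclic action is just a cyclic rotation of tensor axes, and relations \[compos\_cyclic2\] and \[compos\_cyclic3\] can be tested numerically. A sketch (Python with numpy; the axis conventions are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3  # dim V; an element of End_V(n) is an array with axes (out, in_1, ..., in_n)

def circ(A, k, B):
    """Partial composition A o_k B: contract B's output into A's k-th input."""
    nA, nB = A.ndim - 1, B.ndim - 1
    C = np.tensordot(A, B, axes=([k], [0]))   # A's axes minus k, then B's inputs
    return np.moveaxis(C, list(range(nA, nA + nB)), list(range(k, k + nB)))

def tau(A):
    """Cyclic action on End_V(n): rotate all n+1 tensor axes by one."""
    return np.transpose(A, list(range(1, A.ndim)) + [0])

m, n, k = 3, 2, 2  # test with random tensors, k < m
alpha = rng.standard_normal((d,) * (m + 1))
beta = rng.standard_normal((d,) * (n + 1))
print(np.allclose(tau(circ(alpha, k, beta)), circ(tau(alpha), k + 1, beta)))  # True
print(np.allclose(tau(circ(alpha, m, beta)), circ(tau(beta), 1, tau(alpha))))  # True
```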
\[def\_O\_hat\] Let $\mathcal O$ be a cyclic operad with $\mathcal O(1)=k$. For a sequence of $n$ input colors $\vec X=(x_1,\dots, x_n)$ and the output color $x$, where $x_1, \dots, x_n, x\in\{{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}},{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}},\varnothing\}$, let $$\widehat{\mathcal O}(\vec X;x):=
\begin{cases}
\mathcal O(n) & \text{if } x \text{ is ``full'', and } \vec X=({
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}},\ldots,{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}}),\\
\mathcal O(n) & \text{if } x \text{ is ``dashed'', and $\vec X$ has
exactly one ``dashed'' input}\\
\mathcal O(n-1) & \text{if } x=\varnothing \text{ and
$\vec X$ has exactly two ``dashed'' inputs,} \\
\{0\} & \text{otherwise}.
\end{cases}$$ The definition of $\widehat{\mathcal O}(\vec X,\varnothing)$ is motivated by the idea that one considers trees with $n-1$ inputs and one output, and then uses the cyclic $S_{n}$ action to turn this output into a new input: $$\begin{pspicture}(0,0.8)(4,3.4)
\psline[linestyle=dashed](2,2)(1.4,3)
\psline(2,2)(1.8,3)
\psline(2,2)(2.2,3)
\psline(2,2)(2.6,3)
\psline[linestyle=dashed](2,2)(2,1)
\rput[b](2,3.2){$1\,\,\, 2\,\,\, 3\,\,\, 4$}
\rput[b](4,2){$\rightsquigarrow$}
\end{pspicture}
\begin{pspicture}(0,0.8)(4,3.4)
\psline[linestyle=dashed](2,2)(1.2,3)
\psline(2,2)(1.6,3)
\psline(2,2)(2,3)
\psline(2,2)(2.4,3)
\psline[linestyle=dashed](2,2)(2.8,3)
\rput[b](2,3.2){$1\,\,\, 2\,\,\, 3\,\,\, 4\,\,\, 5$}
\end{pspicture}$$ We define the $S_n$-action on $\widehat{\mathcal O}(\vec X;{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}})$ and $\widehat{\mathcal O}(\vec X;{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}})$ as before by using the $S_n$-action on $\mathcal O(n)$, and the $S_n$-action on $\widehat{\mathcal O} (\vec X;\varnothing)$ by using the $S_n$-action on $\mathcal O(n-1)$ given by the cyclicity of $\mathcal O$.
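The case analysis in definition \[def\_O\_hat\] is mechanical; for reference, a small sketch (Python; the color shorthand is ours, and inputs are drawn from the two non-empty colors only, since $\varnothing$ never occurs as an input):

```python
FULL, DASHED, EMPTY = "full", "dashed", "empty"

def o_hat_component(inputs, output):
    """Which component of O the space O-hat(inputs; output) is, per the definition."""
    n, dashed = len(inputs), inputs.count(DASHED)
    if output == FULL and dashed == 0:
        return f"O({n})"
    if output == DASHED and dashed == 1:
        return f"O({n})"
    if output == EMPTY and dashed == 2:
        return f"O({n - 1})"
    return "0"

print(o_hat_component((DASHED, FULL, FULL, DASHED), EMPTY))  # O(3)
print(o_hat_component((FULL, FULL), DASHED))                 # 0
```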
Diagrams with different positions of the two “dashed” inputs can be mapped to each other using the action of the symmetric group. In fact, as each $\sigma\in S_{n}$ induces an isomorphism which preserves all the structure, any statement about diagrams with a fixed choice of position of “dashed” inputs immediately carries over to any other choice of positions of “dashed” inputs. We therefore often restrict our attention to the choice where the two “dashed” inputs are at the far left and the far right, as shown in the above picture.
It is left to define the composition. On $\widehat{\mathcal O}(\vec X;{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}})$ and $\widehat{\mathcal O}(\vec
X;{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}})$, the composition is simply the composition in $\mathcal
O(n)$, so that it clearly satisfies associativity and equivariance. If $|\vec X|=n+1$, then on $\widehat{\mathcal O}(\vec
X;\varnothing)=\mathcal O(n)$, the composition is predetermined on the first $n$ components by the usual composition in $\mathcal O$. As for the last component, we define $$\label{def_cyclic_compos}
\alpha\circ_{m+1} \beta:=\tau_{n+m} (\tau^{-1}_{m+1}(\alpha)\circ_m
\beta)\stackrel{\eqref{compos_cyclic3}}{=} \tau_{n+1}(\beta) \circ_1
\alpha$$ $$\begin{pspicture}(1,1.6)(10.5,5.6)
\psline[linestyle=dashed](2,2)(1.2,2.9)
\psline(2,2)(1.6,2.9)
\psline(2,2)(2,2.9)
\psline(2,2)(2.4,2.9)
\psline[linestyle=dashed](2,2)(2.8,2.9)
\rput[b](3,2){$\alpha$} \rput[b](2.4,3.4){$\beta$}
\psline[linestyle=dashed](2.8,3)(2.8,3.5)
\psline[linestyle=dashed](2.8,3.5)(3,4)
\psline(2.8,3.5)(2.8,4)
\psline(2.8,3.5)(2.6,4)
\rput[b](3.5,3){$:=$}
\psline[linestyle=dashed](5,2)(4.2,2.9)
\psline[linestyle=dashed](4.2,3)(4.3,3.1)(5,3.25)(5.7,3.4)(5.8,3.5)
\psline(5,2)(4.6,2.9) \psline(4.6,3)(4.2,3.5)
\psline(5,2)(5,2.9) \psline(5,3)(4.6,3.5)
\psline(5,2)(5.4,2.9) \psline(5.4,3)(5,3.5)
\psline[linestyle=dashed](5,2)(5.8,2.9) \psline[linestyle=dashed](5.8,3)(5.4,3.5)
\rput[b](6,2){$\alpha$} \rput[b](6.4,3.1){$\tau_{m+1}^{-1}$}
\psline[linestyle=dashed](5.4,3.6)(5.4,4.1)
\psline[linestyle=dashed](5.4,4.1)(5.6,4.5)
\psline(5.4,4.1)(5.4,4.5)
\psline(5.4,4.1)(5.2,4.5)
\rput[b](5.2,3.8){$\beta$}
\psline[linestyle=dashed](5.8,3.6)(5.8,4.5)
\psline(4.2,3.6)(4.2,4.5)
\psline(4.6,3.6)(4.6,4.5)
\psline(5,3.6)(5,4.5)
\psline(4.2,4.6)(4.4,5.3)
\psline(4.6,4.6)(4.8,5.3)
\psline(5 ,4.6)(5.2,5.3)
\psline(5.2,4.6)(5.4,5.3)
\psline(5.4,4.6)(5.6,5.3)
\psline[linestyle=dashed](5.6,4.6)(5.8,5.3)
\psline[linestyle=dashed](5.8,4.6)(5.6,4.8)(5,4.95)(4.4,5.1)(4.2,5.3)
\rput[b](6.4,4.7){$\tau_{n+m}$}
\rput[b](7.3,3){$=$}
\psline[linestyle=dashed](8.55,2)(8.1,2.9)
\psline(8.55,2)(8.4,2.9)
\psline(8.55,2)(8.7,2.9)
\psline[linestyle=dashed](8.55,2)(9,2.9)
\rput[b](9.7,2){$\tau_{n+1}(\beta)$}
\psline[linestyle=dashed](8.1,3)(8.1,3.5)
\psline[linestyle=dashed](8.1,3.5)(7.8,4)
\psline(8.1,3.5)(8 ,4)
\psline(8.1,3.5)(8.2,4)
\psline(8.1,3.5)(8.4,4)
\rput[b](8.5,3.5){$\alpha$}
\end{pspicture}$$ where the last equality follows from equation \[compos\_cyclic3\], $\alpha\in \widehat{\mathcal O}(\vec X;\varnothing) =\mathcal O(m)$ has $m+1$ inputs, and $\beta\in\widehat{\mathcal
O}(\vec Y;{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}})=\mathcal O(n)$ (or similarly $\beta\in\widehat{\mathcal
O}(\vec Y;{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}})$) has $n$ inputs. It is clear that this will satisfy equivariance, since equivariance was just used to define the composition. The next lemma establishes the final property for $\widehat{\mathcal O}$ being a 0/1-operad.
The composition in $\widehat{\mathcal O}$ satisfies the associativity axiom.
By definition the composition is just the usual composition in $\mathcal O(n)$, except for inserting trees in the last input of elements in $\widehat{\mathcal
O}(\vec X;\varnothing)$. Thus, except for composition in the last spot, associativity of $\widehat{\mathcal O}$ follows from the associativity of $\mathcal O$.
Now, let $\alpha\in \widehat{\mathcal O}(\vec X;\varnothing)\cong
\mathcal O(m)$, $\beta\in \widehat{\mathcal O}(\vec Y;y)\cong
\mathcal O(n)$, and $\gamma\in \widehat{\mathcal O} (\vec Z;z) \cong
\mathcal O(p)$, where $y,z\in\{{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}},{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}}\}$. Then, associativity is satisfied because, for $1\leq j\leq m$, we have $$\begin{gathered}
(\alpha\circ_{m+1}\beta)\circ_{j} \gamma
\stackrel{\mathit{\eqref{def_cyclic_compos}}}{=} (\tau_{n+1}(\beta)
\circ_1 \alpha)\circ_j \gamma \stackrel{\mathit{op.comp}}{=}\\
=\tau_{n+1}(\beta) \circ_1 (\alpha\circ_j \gamma )
\stackrel{\mathit{\eqref{def_cyclic_compos}}}{=} (\alpha\circ_j
\gamma )\circ_{m+p}\beta,\end{gathered}$$ and for $m< j< m+n$, we have $$\begin{gathered}
(\alpha\circ_{m+1}\beta)\circ_{j} \gamma
\stackrel{\mathit{\eqref{def_cyclic_compos}}}{=} (\tau_{n+1}(\beta)
\circ_1 \alpha)\circ_j \gamma \stackrel{\mathit{op.comp}}{=}
(\tau_{n+1}(\beta) \circ_{j-m+1} \gamma)\circ_1 \alpha
\stackrel{\eqref{compos_cyclic2}}{=}\\ = \tau_{n+p}(\beta
\circ_{j-m}\gamma)\circ_1\alpha \stackrel{\mathit{
\eqref{def_cyclic_compos}}}{=} \alpha\circ_{m+1}
(\beta\circ_{j-m}\gamma),\end{gathered}$$ while $$\begin{gathered}
(\alpha\circ_{m+1}\beta)\circ_{m+n} \gamma
\stackrel{\mathit{\eqref{def_cyclic_compos}}}{=} (\tau_{n+1}(\beta)
\circ_1 \alpha)\circ_{m+n} \gamma \stackrel{\mathit{
\eqref{def_cyclic_compos}}}{=}\\ = \tau_{p+1}(\gamma)\circ_1
(\tau_{n+1}(\beta) \circ_1 \alpha) \stackrel{\mathit{op.comp}}{=}
(\tau_{p+1}(\gamma)\circ_1 \tau_{n+1}(\beta)) \circ_1 \alpha
\stackrel{\eqref{compos_cyclic3}}{=}\\ = \tau_{n+p}(\beta
\circ_{n}\gamma) \circ_1 \alpha \stackrel{\mathit{
\eqref{def_cyclic_compos}}}{=} \alpha\circ_{m+1} (\beta
\circ_{n}\gamma).\end{gathered}$$
We end this section by giving a presentation of $\widehat{\mathcal O}$ in terms of generators and relations, when $\mathcal O$ is given by quadratic generators and relations. Let us first recall the notion of operads given by generators and relations.
Fix a set of colors $C$. Then let $E=\{E^{x,y}_z\}_{x,y,z\in C}$ be a collection of $k$-vector spaces, together with an $S_2$-action compatible with the colors. We want $E$ to be the binary generating set of a colored operad, where $x$, $y$ and $z$ correspond to the colors of the edges of a binary vertex, i.e. a vertex with exactly two incoming and one outgoing edge. Let $T$ be a rooted, colored tree where each vertex is binary, and let $v$ be a vertex with colors $(x,y;z)$ in $T$. Then, we define $E(v):= \left( E^{x,y}_z \oplus E^{y,x}_z \right)_{S_2}$, and with this, we set $E(T):=\bigotimes_{\text{vertex }v\text{ of }T} E(v)$. We define the free colored operad $\mathcal F(E)$ generated by $E$ to be $$\mathcal F(E)(\vec X;z):= \bigoplus_{
\text{binary trees $T$ of type }(\vec X;z)
} E(T).$$ The $S_n$-action is given by an obvious permutation of the leaves of the tree using the $S_2$-action on $E$, and the composition maps are given by attaching trees. This definition can readily be seen to define a colored operad.
An ideal $\mathcal I$ of a colored operad $\mathcal P$ is a collection of $S_n$-sub-modules $\mathcal I(\vec X;z)\subset
\mathcal P(\vec X;z)$ such that $f\circ_{i} g$ belongs to the ideal whenever $f$ or $g$ or both belong to the ideal. A colored operad $\mathcal P$ is said to be quadratic if $\mathcal P=\mathcal F(E)/(R)$ where $\mathcal F(E)$ is the free colored operad on some generators $E$, and $(R)$ is the ideal in $\mathcal F(E)$ generated by a subspace with 3 inputs, called the relations, $R\subset \bigoplus_{w,x,y,z\in C}
\mathcal F(E)(w,x,y;z)$.
We recall from [@GeK (3.2)], that an operad $\mathcal O$ is called cyclic quadratic if it is quadratic, with generators $E$ and relations $R$, so that the $S_2$-action on $E$ is naturally extended to an $S_3$-action via the sign-representation $sgn:S_3\to S_2$, and $R\subset\mathcal F(E)(3)$ is an $S_4$-invariant subspace. In this case, $\mathcal O$ becomes a cyclic operad, see [@GeK (3.2)].
The following lemma is straightforward to check.
\[O\_hat\_quadratic\] Let $\mathcal O$ be cyclic quadratic with generators $E$ and relations $R\subset\mathcal F(E)(3)$. Then $\widehat{\mathcal O}$ is generated by $\widehat{E}:=\widehat{E} ^{{
\begin{pspicture}(0,0)(0.2,0.2)
\psline[linewidth=1pt](0.1,0)(0.1,0.2)
\end{pspicture}},{
\begin{pspicture}(0,0)(0.2,0.2)
\psline[linewidth=1pt](0.1,0)(0.1,0.2)
\end{pspicture}}}_{{
\begin{pspicture}(0,0)(0.2,0.2)
\psline[linewidth=1pt](0.1,0)(0.1,0.2)
\end{pspicture}}}\oplus \widehat{E} ^{{
\begin{pspicture}(0,0)(0.2,0.2)
\psline[linewidth=1pt](0.1,0)(0.1,0.2)
\end{pspicture}},{
\begin{pspicture}(0,0)(0.2,0.2)
\psline[linewidth=1pt, linestyle=dashed,dash=3pt 2pt](0.1,0)(0.1,0.2)
\end{pspicture}}}_{{
\begin{pspicture}(0,0)(0.2,0.2)
\psline[linewidth=1pt, linestyle=dashed,dash=3pt 2pt](0.1,0)(0.1,0.2)
\end{pspicture}}} \oplus \widehat{E} ^{{
\begin{pspicture}(0,0)(0.2,0.2)
\psline[linewidth=1pt, linestyle=dashed,dash=3pt 2pt](0.1,0)(0.1,0.2)
\end{pspicture}},{
\begin{pspicture}(0,0)(0.2,0.2)
\psline[linewidth=1pt](0.1,0)(0.1,0.2)
\end{pspicture}}}_{{
\begin{pspicture}(0,0)(0.2,0.2)
\psline[linewidth=1pt, linestyle=dashed,dash=3pt 2pt](0.1,0)(0.1,0.2)
\end{pspicture}}} \oplus \widehat{E} ^{{
\begin{pspicture}(0,0)(0.2,0.2)
\psline[linewidth=1pt, linestyle=dashed,dash=3pt 2pt](0.1,0)(0.1,0.2)
\end{pspicture}},{
\begin{pspicture}(0,0)(0.2,0.2)
\psline[linewidth=1pt, linestyle=dashed,dash=3pt 2pt](0.1,0)(0.1,0.2)
\end{pspicture}}}$, defined as $$\begin{aligned}
\widehat{E} ^{{
\begin{pspicture}(0,0)(0.2,0.2)
\psline[linewidth=1pt](0.1,0)(0.1,0.2)
\end{pspicture}},{
\begin{pspicture}(0,0)(0.2,0.2)
\psline[linewidth=1pt](0.1,0)(0.1,0.2)
\end{pspicture}}}_{{
\begin{pspicture}(0,0)(0.2,0.2)
\psline[linewidth=1pt](0.1,0)(0.1,0.2)
\end{pspicture}}}:=E&\subset \widehat {\mathcal
O}({
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}},{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}};{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}}),\\\widehat{E} ^{{
\begin{pspicture}(0,0)(0.2,0.2)
\psline[linewidth=1pt](0.1,0)(0.1,0.2)
\end{pspicture}},{
\begin{pspicture}(0,0)(0.2,0.2)
\psline[linewidth=1pt, linestyle=dashed,dash=3pt 2pt](0.1,0)(0.1,0.2)
\end{pspicture}}}_{{
\begin{pspicture}(0,0)(0.2,0.2)
\psline[linewidth=1pt, linestyle=dashed,dash=3pt 2pt](0.1,0)(0.1,0.2)
\end{pspicture}}}:=E&\subset \widehat {\mathcal
O}({
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}},{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}};{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}}) ,\\\widehat{E} ^{{
\begin{pspicture}(0,0)(0.2,0.2)
\psline[linewidth=1pt, linestyle=dashed,dash=3pt 2pt](0.1,0)(0.1,0.2)
\end{pspicture}},{
\begin{pspicture}(0,0)(0.2,0.2)
\psline[linewidth=1pt](0.1,0)(0.1,0.2)
\end{pspicture}}}_{{
\begin{pspicture}(0,0)(0.2,0.2)
\psline[linewidth=1pt, linestyle=dashed,dash=3pt 2pt](0.1,0)(0.1,0.2)
\end{pspicture}}}:= E&\subset \widehat {\mathcal
O}({
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}},{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}};{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}}),\\\widehat{E}^{{
\begin{pspicture}(0,0)(0.2,0.2)
\psline[linewidth=1pt, linestyle=dashed,dash=3pt 2pt](0.1,0)(0.1,0.2)
\end{pspicture}},{
\begin{pspicture}(0,0)(0.2,0.2)
\psline[linewidth=1pt, linestyle=dashed,dash=3pt 2pt](0.1,0)(0.1,0.2)
\end{pspicture}}}:=k&\subset \widehat {\mathcal O}({
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}},{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}};\varnothing),\end{aligned}$$ and has relations $$\begin{aligned}
R\subset\mathcal F(E)(3)\cong\mathcal F(\widehat{E})({
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}},{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}},{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}};{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}}), \\
R\subset\mathcal F(E)(3)\cong\mathcal F(\widehat{E})({
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}},{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}},{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}};{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}}), \\
R\subset\mathcal F(E)(3)\cong\mathcal F(\widehat{E})({
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}},{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}},{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}};{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}}), \\
R\subset\mathcal F(E)(3)\cong\mathcal F(\widehat{E})({
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}},{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}},{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}};{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}}),\end{aligned}$$ together with the relations $$\begin{aligned}
G\subset \mathcal F(\widehat{E})({
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}},{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}},{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}};\varnothing), \\ G\subset
\mathcal F(\widehat{E})({
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}},{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}},{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}};\varnothing), \\ G\subset \mathcal
F(\widehat{E})({
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}},{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}},{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}};\varnothing),\end{aligned}$$ where, for a given coloring, $G$ corresponds to the space $$G:= span \left<
\begin{pspicture}(0,0.2)(1,1)
\psline(0.5,0)(0.3,0.5)
\psline(0.5,0)(0.7,0.5)
\psline(0.3,0.5)(0.3,1)
\psline(0.3,0.7)(0.1,1)
\psline(0.3,0.7)(0.5,1)
\rput[b](0.7,0.8){$\alpha$}
\end{pspicture}
-
\begin{pspicture}(0,0.2)(2,1)
\psline(0.5,0)(0.3,0.5)
\psline(0.5,0)(0.7,0.5)
\psline(0.7,0.5)(0.7,1)
\psline(0.7,0.7)(0.9,1)
\psline(0.7,0.7)(0.5,1)
\rput[b](1.5,0.6){$\tau_3(\alpha)$}
\end{pspicture}
\text{ , for all } \alpha\in E^{{
\begin{pspicture}(0,0)(0.2,0.2)
\psline[linewidth=1pt](0.1,0)(0.1,0.2)
\end{pspicture}},{
\begin{pspicture}(0,0)(0.2,0.2)
\psline[linewidth=1pt](0.1,0)(0.1,0.2)
\end{pspicture}}}_{{
\begin{pspicture}(0,0)(0.2,0.2)
\psline[linewidth=1pt](0.1,0)(0.1,0.2)
\end{pspicture}}} \right>.$$
Koszulness of $\widehat{\mathcal O}$ {#quadrat-koszul}
====================================
This section is concerned with our main theorem, that Koszulness for $\mathcal O$ implies Koszulness for $\widehat{\mathcal O}$. To set up notation, we briefly recall the notion of quadratic dual, cobar dual and Koszulness of a (colored) operad.
Recall that if the vector space $V$ is an $S_n$-module and $sgn_n$ is the sign representation, then we defined $V^\vee$ to be $V^*\otimes sgn_n$, where $V^*=Hom(V,k)$ denotes the dual space.
Let $C$ be a set of colors. For every quadratic colored operad $\mathcal P$, we define the quadratic dual colored operad $\mathcal P^!:=\mathcal
F(E)^\vee/(R^\perp)$, where $(R^\perp)$ is the ideal in $\mathcal
F(E)^\vee$ generated by the orthogonal complement $R^\perp$ of $R$ as a subspace of $\left(\bigoplus_{w,x,y,z\in C} \mathcal F(E)(w,x,y;z) \right) ^\vee$. Notice that $\mathcal F(E)(\vec X;x)^\vee = \mathcal F(E^\vee) (\vec X;x)$, so that $\mathcal P^!$ is generated by $E^\vee$ with relations $R^\perp$, see [@GK (2.1.9)].
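Once $\bigoplus_{w,x,y,z}\mathcal F(E)(w,x,y;z)$ is coordinatized, passing from $R$ to $R^\perp$ is plain linear algebra. A sketch (Python with scipy; the matrix below is an arbitrary stand-in, not an actual operadic relation space):

```python
import numpy as np
from scipy.linalg import null_space

# Rows of R span the relation subspace inside k^N (N standing in for dim F(E)(3));
# the null space of R is exactly the orthogonal complement R-perp.
N = 6
R = np.array([[1., -1., 0., 0., 0., 0.],
              [0., 0., 1., 1., -2., 0.]])
R_perp = null_space(R)                # orthonormal basis, shape (N, N - rank R)
print(R_perp.shape[1])                # 4 = N - dim R
print(np.allclose(R @ R_perp, 0.0))   # each dual relation annihilates R
```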
Now if $\mathcal P=\mathcal F(E)/(R)$ is a quadratic colored operad, then we can follow the definition from [@GK (3.2.12)], and define the cobar dual colored operad $\textbf{D} (\mathcal P)$ of $\mathcal P$, to be given by the complexes $\textbf{D} (\mathcal P)(\vec X;x)$ concentrated in non-positive degree, $\textbf{D} (\mathcal P)(\vec X;x):=$ $$\begin{gathered}
\bigoplus_{
\substack{
\text{trees $T$ of}\\
\text{type $(\vec X;x)$},\\
\text{no internal edge}}}
\mathcal P(T)^*\otimes {\mathrm{Det}}(T)\stackrel{{\partial}}{{{\longrightarrow}}}
\bigoplus_{
\substack{
\text{trees } T \text{ of}\\
\text{type }(\vec X;x),\\
\text{1 internal edge}}}
\mathcal P(T)^*\otimes {\mathrm{Det}}(T)\stackrel{{\partial}}{{{\longrightarrow}}}\\
{{\longrightarrow}}\dots\stackrel{{\partial}}{{{\longrightarrow}}} \bigoplus_{
\substack{
\text{trees } T \text{ of}\\
\text{type }(\vec X;x),\\
\text{binary tree}}}
\mathcal P(T)^*\otimes {\mathrm{Det}}(T).\end{gathered}$$ Here, $\mathcal P(T)^*$ denotes the dual of $\mathcal P(T)$, and ${\mathrm{Det}}(T)$ denotes the top exterior power on the space $k^{Ed(T)}$, where $Ed(T)$ is the space of edges of the tree $T$. By definition, we let the furthest right space whose sum is over binary trees, be of degree zero, and all other spaces be in negative degree.
In general, the zero-th homology of this complex is always canonically isomorphic to the quadratic dual $\mathcal P^!$, i.e. $H^0(\textbf{D}(\mathcal P)(\vec X;x))\cong \mathcal P^!(\vec X;x)$, see [@GK (4.1.2)]. The quadratic operad $\mathcal P$ is then said to be Koszul if the cobar dual on the quadratic dual $\textbf{D}(\mathcal P^!)(\vec X;x)$ is quasi-isomorphic to $\mathcal P(\vec X;x)$, i.e., by the above, $\textbf{D}(\mathcal P^!)(\vec X;x)$ has homology concentrated in degree zero.
We now state our main theorem.
\[O\_hat\_Koszul\] If $\mathcal O$ is cyclic quadratic and Koszul, then $\widehat{\mathcal O}$ has a resolution, which for a given sequence $\vec X$ of colors, with $|\vec X|=n$, is given by the quasi-isomorphisms $$\begin{aligned}
\textbf{D}(\widehat{\mathcal O^!}) (\vec X;{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}})&\to& \widehat{
\mathcal O}(\vec X;{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}})=\mathcal O(n) \\
\textbf{D}(\widehat{\mathcal O^!}) (\vec X;{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}})&\to& \widehat{
\mathcal O}(\vec X;{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}})=\mathcal O(n) \\
\textbf{D}(\widehat{\mathcal O^!}) (\vec X;\varnothing) &\to&
\widehat{ \mathcal O}(\vec X;\varnothing)=\mathcal O(n-1)\end{aligned}$$
Koszulness of $\mathcal O$ means exactly that the first and second maps are quasi-isomorphisms. The proof that the third map is also a quasi-isomorphism will concern the rest of this section.
We need to show that the homology of $\textbf{D}(\widehat{\mathcal O^!}) (\vec X;
\varnothing)$ is concentrated in degree $0$: $$\begin{aligned}
\label{H0}
H_0 \left(\textbf{D}(\widehat{\mathcal O^!}) (\vec X;\varnothing)\right
)&=&\widehat{ \mathcal O}(\vec X;\varnothing)\\ \label{Hi<0} H_{i<0} \left(\textbf{D}(\widehat{\mathcal O^!})
(\vec X;\varnothing)\right)&=&\{0\}\end{aligned}$$ As mentioned in definition \[def\_O\_hat\], it is enough to restrict attention to the case where $\vec X = ({
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}},{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}},\ldots,{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}},{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}})$. Let us first show the validity of equation \[H0\]. The map $\textbf{D} (\widehat{\mathcal O^!}) (\vec X;\varnothing)\to \widehat{ \mathcal O}(\vec X;\varnothing)$ expanded in degrees of $\textbf{D}(\widehat{\mathcal O^!})(\vec X;\varnothing)$ is written as $\cdots \stackrel{{\partial}}{{{\longrightarrow}}} \textbf{D}(\widehat{\mathcal O^!})
(\vec X;\varnothing)^{-1} \stackrel{{\partial}}{{{\longrightarrow}}}
\textbf{D}(\widehat{\mathcal O^!}) (\vec X;\varnothing)^0
\stackrel{proj}{{\longrightarrow}}\widehat{\mathcal O}(\vec X;\varnothing)$. As $\mathcal O$ and thus $\mathcal O^!$ are quadratic, we have the following identification, using the language and results of lemma \[O\_hat\_quadratic\]: $$\begin{aligned}
\widehat{\mathcal O}(\vec X;\varnothing)&=&\mathcal F(
\widehat{E})/(R,G) (\vec X;\varnothing) \\
\textbf{D}(\widehat{\mathcal O^!}) (\vec X;\varnothing)^0&=&
\bigoplus_{ \substack{
\text{binary trees }T \\
\text{of type }(\vec X;\varnothing)
}} (\widehat{E^\vee}(T))\otimes {\mathrm{Det}}(T) =\mathcal
F(\widehat{E})(\vec X;\varnothing)\\ \textbf{D}(\widehat{\mathcal
O^!}) (\vec X;\varnothing)^{-1}&=&\bigoplus_{ \substack{
\text{trees }T\text{ of type }(\vec X;\varnothing), \\
\text{binary vertices, except}\\
\text{one ternary vertex}
}} \left(\widehat{\mathcal O^!}(T)\right)\otimes {\mathrm{Det}}(T) \\ &=&
\left\{
\begin{array}{c}
\text{space of relations in $\mathcal F(\widehat{E})(\vec X;\varnothing)$}\\
\text{generated by $R$ and $G$}
\end{array}
\right\}\end{aligned}$$ The last equality follows because the inner product relations for $\mathcal O^!$, which are the relation space $G$ for the cyclic quadratic operad $\mathcal O^!$ from lemma \[O\_hat\_quadratic\], are the orthogonal complement of the inner product relations for $\mathcal O$. Hence, the map $proj$ is surjective with kernel ${\partial}\big(\textbf{D}(\widehat{\mathcal O^!}) (\vec X;\varnothing )^{-1}\big)$. This implies equation (\[H0\]).
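To summarize the degree-zero computation: under the identifications above, the right-hand end of the cobar complex is precisely the presentation of $\widehat{\mathcal O}(\vec X;\varnothing)$ by generators and relations, $$\textbf{D}(\widehat{\mathcal O^!})(\vec X;\varnothing)^{-1} \stackrel{{\partial}}{{{\longrightarrow}}} \mathcal F(\widehat{E})(\vec X;\varnothing) \stackrel{proj}{{\longrightarrow}} \mathcal F(\widehat{E})/(R,G)(\vec X;\varnothing) {\longrightarrow} 0,$$ where the image of ${\partial}$ is the space of relations generated by $R$ and $G$.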
As for equation (\[Hi<0\]), we show by induction that every closed element in $\textbf{D}(\widehat{\mathcal O^!})(\vec X;\varnothing)^{-r}$, for $r\geq 1$, is also exact. The induction slides all of the “full” inputs from one of the two “dashed” inputs to the other. As the main ingredient, we employ the products $*$ and $\#$ defined below, which allow us to uniquely decompose an element of $\textbf{D}(\widehat {\mathcal O^!})(\vec X;\varnothing)$ as a sum of $*$- and $\#$-products.
We need the following definition. Given two decorated trees $\varphi\in \textbf{D}(\mathcal O^!)(k), \psi \in
\textbf{D}(\mathcal O^!)(l)$, we define new elements $\varphi * \psi$ and $\varphi \# \psi$ in $\textbf{D}(\widehat{ \mathcal O^!})({
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}},{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}},\ldots,{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}},{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}};\varnothing)$ as follows. First, for $\varphi * \psi$ take the outputs of $\varphi$ and $\psi$ and insert them into the unique inner product decorated by the generator $1\in \widehat{\mathcal O^!}({
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}},{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}};\varnothing)$: $$\begin{pspicture}(-3,0)(6,4)
\psline[linestyle=dashed](2.5,0.5)(0,3)
\psline[linestyle=dashed](2.5,0.5)(5,3)
\rput(2.7,0.3){\tiny $1$}
\pscircle[linestyle=dotted](4,2.2){1.4}
\rput(-0.2,1){$\varphi$}
\psline(1.5,1.5)(2,3)
\psline(1.5,1.5)(1,3)
\psline(1.3,2.1)(1.5,3)
\psline(0.4,2.6)(0.4,3)
\psline(0.7,2.3)(0.7,3)
\rput(1.4,1.3){\tiny $\alpha_1$}
\rput(0.2,2.4){\tiny $\alpha_2$}
\rput(0.6,2.1){\tiny $\alpha_3$}
\rput(1,2.3){\tiny $\alpha_4$}
\pscircle[linestyle=dotted](1,2.2){1.4}
\rput(5.2,1){$\psi$}
\psline(3.5,1.5)(3,3)
\psline(3.5,1.5)(4,3)
\psline(4.5,2.5)(4.2,3)
\psline(3.3,2.1)(3.3,3)
\psline(3.3,2.1)(3.6,3)
\rput(3.6,1.2){\tiny $\beta_1$}
\rput(3,2){\tiny $\beta_2$}
\rput(4.6,2.2){\tiny $\beta_3$}
\rput(-1.5,2){$\varphi * \psi =$}
\end{pspicture}$$ As for $\varphi\# \psi$, we assume that $\varphi\in \textbf{D}(\mathcal O^!)(k)$ with $k\geq 2$. Then identify $\varphi$ with an element in $\textbf{D} (\widehat{
\mathcal O^!})({
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}},{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}},\ldots,{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}},{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}};\varnothing)$ by interpreting the lowest decoration $\alpha_1\in \mathcal O^!(m)$ of $\varphi$ as an inner product $\alpha_1\in \widehat{\mathcal O^!}({
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}},{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}},\ldots,{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}},{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}};\varnothing)=\mathcal O^!(m)$. $\varphi\# \psi$ is defined by attaching $\psi$ to the dashed input labeled by $\alpha_1$: $$\begin{pspicture}(-3,0)(6,4)
\psline[linestyle=dashed](2.5,0.5)(0,3)
\psline[linestyle=dashed](2.5,0.5)(5,3)
\rput(2.7,0.3){\tiny $\alpha_1$}
\pscircle[linestyle=dotted](4,2.2){1.4}
\rput(-0.2,1){$\varphi$}
\psline(2.5,0.5)(2,3)
\psline(2.5,0.5)(1.3,3)
\psline(1.7,2.2)(1.7,3)
\psline(0.4,2.6)(0.4,3)
\psline(0.7,2.3)(0.7,3)
\rput(0.2,2.4){\tiny $\alpha_2$}
\rput(0.6,2.1){\tiny $\alpha_3$}
\rput(1.4,2.1){\tiny $\alpha_4$}
\psccurve[linestyle=dotted](0.5,0.8)(-0.5,3)(1,3.6)(2.4,3)(2.4,2)(2.7,1.2)(3.1,0.2)(2.4,0)
\rput(5.2,1){$\psi$}
\psline(3.5,1.5)(3,3)
\psline(3.5,1.5)(4,3)
\psline(4.5,2.5)(4.2,3)
\psline(3.3,2.1)(3.3,3)
\psline(3.3,2.1)(3.6,3)
\rput(3.6,1.2){\tiny $\beta_1$}
\rput(3,2){\tiny $\beta_2$}
\rput(4.6,2.2){\tiny $\beta_3$}
\rput(-1.5,2){$\varphi \# \psi =$}
\end{pspicture}$$ Both $\varphi$ and $\psi$ are elements of $\textbf{D} (\mathcal O^!)$ and thus uncolored. The colorings of $\varphi *\psi$ and $\varphi\#\psi$ are uniquely determined by having the first and last entry dashed. The operations $*$ and $\#$ are extended to $\textbf{D} (\mathcal O^!)\otimes \textbf{D}
(\mathcal O^!)\to \textbf{D} (\widehat{\mathcal O^!})(\vec X; \varnothing)$ by bilinearity.
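Since $*$ and $\#$ are defined on decorated trees and extended bilinearly, this means explicitly that for linear combinations $\varphi=\sum_i a_i\varphi_i$ and $\psi=\sum_j b_j\psi_j$ of decorated trees, $$\varphi * \psi=\sum_{i,j} a_i b_j\,(\varphi_i * \psi_j), \qquad \varphi \# \psi=\sum_{i,j} a_i b_j\,(\varphi_i \# \psi_j).$$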
It is important to notice that every labeled tree whose first and last inputs are “dashed” can uniquely be written as a $*$- or $\#$-product. To be more precise, suppose $\varphi$ and $\psi$ are labeled trees with $k+1$ and $l+1$ inputs respectively. We restrict our attention to the case of elements in $\textbf{D} (\widehat{\mathcal O^!})(\vec X; \varnothing)$ whose “dashed” inputs are labeled to be the first and the last input, and thus appear in the planar representation on the far left and the far right. Let $\sigma$ be a $(k,l)$-shuffle, i.e. a permutation of $\{1,\dots,k+l\}$ such that $\sigma(1)<\dots <\sigma(k)$ and $\sigma(k+1)<\dots <\sigma(k+l)$. Then define $\varphi *_\sigma \psi$, resp. $\varphi \#_\sigma \psi$, as the composition of $*$, resp. $\#$, with $\sigma$ applied to the “full” leaves of the resulting labeled tree. The “dashed” inputs remain far left and far right. With these notations it is now clear that every labeled tree in $\textbf{D}(\widehat {\mathcal O^!})(\vec X;\varnothing)$ with “dashed” first and last inputs can uniquely be written in the form $\varphi *_\sigma\psi$ or $\varphi\#_\sigma\psi$ for some $\varphi,\psi$ and $\sigma$.
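As a minimal illustration of this decomposition (with $\varphi$ and $\psi$ standing for arbitrary decorated trees): for $k=l=1$ there are exactly $\binom{2}{1}=2$ shuffles of the two “full” leaves, the identity and the transposition, so the possible forms are $\varphi *_{\mathrm{id}}\psi$, $\varphi *_{(12)}\psi$, $\varphi \#_{\mathrm{id}}\psi$ and $\varphi \#_{(12)}\psi$; whether a given tree is a $*$- or a $\#$-product can be read off from its bottom vertex, which is decorated by the generator $1$ in the first case and by a decoration $\alpha_1$ of $\varphi$ in the second.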
For each $r\geq 1$, we show that the $(-r)$-th homology of $\textbf{D}(\widehat{\mathcal O^!})(\vec X;\varnothing)$ vanishes by decomposing every element of $\textbf{D}(\widehat{\mathcal O^!})(\vec X;\varnothing)^{-r}$ as a sum of terms of the form $\varphi *_\sigma\psi$ or $\varphi\#_\sigma\psi$, and then performing an induction on the degree of $\psi$. More precisely, let us define the order of $\varphi*_\sigma \psi$ or $\varphi\#_\sigma \psi$ to be the degree of $\psi$ in $\textbf{D}(\mathcal O^!)$. Then for $s\in \mathbb N$, we claim the following statement:
- Let $\chi\in \textbf{D}(\widehat{\mathcal O^!})(\vec X;\varnothing)^{-r}$ be a closed element, ${\partial}(\chi)=0$. Then $\chi$ is homologous to a sum $\sum_i \sum_\sigma \varphi_i *_\sigma \psi_i+\sum_j \sum_\sigma \varphi'_j \#_\sigma\psi'_j$, where the order of each term is less than or equal to $-s$.
Intuitively, this means that for smaller $-s$, $\chi$ is homologous to decorated trees whose total degree is concentrated more and more on the right branch of the tree.
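In terms of degrees, the following bookkeeping remark may be helpful (it uses only the grading conventions above, under which binary vertices contribute degree zero): if $\chi$ has total degree $-r$ and $\varphi *_\sigma \psi$ or $\varphi \#_\sigma \psi$ is a term of order $-s$, then $$\deg \varphi = -(r-s),$$ so in the extreme case $s=r$ the factor $\varphi$ is concentrated in degree zero and the entire degree $-r$ sits on the right factor $\psi$.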
It is easy to see that the above is true for $s=1$. As for the inductive step, let $\chi=\sum_i \sum_\sigma \varphi_i *_\sigma \psi_i+\sum_j\sum_\sigma \varphi'_j \#_\sigma\psi'_j$, where we assume an expansion such that the $\{\varphi_i\}_i$ are linearly independent and the $\{\varphi'_j \}_j$ are linearly independent, but the $\psi_i$ and the $\psi'_j$ are allowed to be linear combinations in $\textbf{D}(\mathcal O^!)$. We claim that those elements $\psi_i$ and $\psi'_j$ which are of degree $-s$ are closed in $\textbf{D} (\mathcal O^!)$. This follows from ${\partial}(\chi)=0$ and the inductive hypothesis, because the only terms of the boundary ${\partial}(\chi)$ which are of order $-s+1$ are terms of the form $\varphi_i *_\sigma {\partial}(\psi_i)$ and $\varphi'_j \#_\sigma
{\partial}(\psi'_j)$.