| text (large_string, lengths 1.66k–2.05k) | label (large_string, 1 class) | is_target (bool, 1 class) | is_added (bool, 1 class) | dataset (large_string, 1 class) |
|---|---|---|---|---|
---
abstract: 'Many deep learning algorithms can be easily fooled with simple adversarial examples. To address the limitations of existing defenses, we devised a probabilistic framework that can generate an exponentially large ensemble of models from a single model with just a linear cost. This framework takes advantag... | ArXiv | true | false | arxiv_only |
for identification. However, like previous machine learning approaches, DNNs have been shown to be vulnerable to adversarial attacks during test time [@szegedy_intriguing_2013]. The existence of such adversarial examples suggests that DNNs lack robustness, and might not be learning the higher level concepts we hope th... | ArXiv | true | false | arxiv_only |
ck against the population.
Related Work
============
Adversarial examples in the context of DNNs have come into the spotlight after Szegedy et al. [@szegedy_intriguing_2013], showed the imperceptibility of the perturbations which could fool state-of-the-art computer vision systems. Since then, adversarial examples ha... | ArXiv | true | false | arxiv_only |
lly characterize the nature of adversarial examples suggests that adversarial subspaces can lie close to the data submanifold [@tanay_boundary_2016], but that they form high-dimensional, contiguous regions of space [@tramer_space_2017; @ma_characterizing_2018]. This corresponds to the empirical observation that the transfe... | ArXiv | true | false | arxiv_only
heir encoding space. At test time, they chose to randomly apply one of the 8 VAEs. They were able to reach 80% accuracy when defending against a model trained to attack one of the 8 VAEs, but did not evaluate the performance on an attack trained by estimating the expectation of the gradient over all eight randomly s... | ArXiv | true | false | arxiv_only
ost defenses based on cleaning inputs tend to operate in image space, prior to the image being processed by the classifier. Attacks that are aware of this preprocessing step can simply create images that fool the defense and classifier jointly. However, an adversary would have increased difficulty attacking a model tha... | ArXiv | true | false | arxiv_only |
me adversarial examples are generated with large enough perturbations that they no longer resemble the original image, even when judged by humans. It is unreasonable to assume that basic image processing techniques can restore such adversarial examples to their original form since the noise makes the true class ambiguo... | ArXiv | true | false | arxiv_only |
that are then used to cluster and compute similarities between images. We use a triplet network for the task of classification via an unconventional type of KNN in the embedding space. This is done by randomly sampling $50$ embeddings for each class from the training set, and then computing the similarity of these emb... | ArXiv | true | false | arxiv_only |
triplet probability for that class is computed.[]{data-label="triplet_detector"}](figures/classification_with_triplet.pdf){width="80.00000%"}
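The sampled-embedding KNN described above can be sketched as follows. This is a minimal illustrative reading (cosine similarity, mean score per class over 50 sampled embeddings), not the paper's exact procedure; all names here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def classify_by_sampled_similarity(query, train_emb, train_lbl, per_class=50):
    """Score each class by the mean cosine similarity between `query`
    and `per_class` randomly sampled training embeddings of that class,
    then return the best-scoring class."""
    scores = {}
    for c in np.unique(train_lbl):
        idx = np.flatnonzero(train_lbl == c)
        pick = rng.choice(idx, size=min(per_class, idx.size), replace=False)
        ref = train_emb[pick]
        sims = ref @ query / (np.linalg.norm(ref, axis=1) * np.linalg.norm(query))
        scores[c] = sims.mean()
    return max(scores, key=scores.get)

# Toy embedding space: two tight, well-separated clusters.
emb = np.vstack([rng.normal([5.0, 0.0], 0.1, size=(100, 2)),
                 rng.normal([0.0, 5.0], 0.1, size=(100, 2))])
lbl = np.array([0] * 100 + [1] * 100)
pred = classify_by_sampled_similarity(np.array([4.9, 0.1]), emb, lbl)
```

Sampling a fixed number of embeddings per class keeps the comparison cost bounded regardless of training-set size.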
where $p_c(y | x)$ and $p_t(y | x)$ are the probability distributions from the classifier and triplet network respectively, $k$ is the most probable class output by the classif... | ArXiv | true | false | arxiv_only |
performance on unperturbed images when defenses are used, we performed the experiment below. For the FGS and IGS attacks, unless otherwise noted, an epsilon of 0.3 was used, as is typical in the literature.
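The fast gradient sign (FGS) attack with $\epsilon = 0.3$ mentioned above can be sketched in a few lines. The linear "model" and its gradient below are toy stand-ins, not the paper's networks:

```python
import numpy as np

def fgs_attack(x, grad, eps=0.3):
    """Fast gradient sign attack: move every input coordinate by
    +/- eps in the direction that increases the loss, then clip to
    the valid pixel range [0, 1]."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

# Toy stand-in: for a linear logit w.x, the input gradient of the loss
# is proportional to w (illustrative only).
w = np.array([0.5, -1.0, 0.25])
x = np.array([0.2, 0.8, 0.5])
x_adv = fgs_attack(x, grad=w, eps=0.3)
```

Iterative gradient sign (IGS) repeats this step with a smaller step size, projecting back into an $\epsilon$-ball around the original image after each step.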
Performance of Defended Models on Clean Data
--------------------------------------------
One of the basic assum... | ArXiv | true | false | arxiv_only |
nario is if the adversary creates malicious examples when noise removing operations are turned on in all possible locations. It is possible that such adversarial examples would also fool the classifier when the defense is only applied in a subset of the layers. Fortunately, we note that for FGS, IGS, and CW2, transfera... | ArXiv | true | false | arxiv_only |
8 0.133 0.131
IGS 0.434 0.270 0.016 0.011 0.193 0.014 0.231 0.223
CW2 0.990 0.977 0.003 0.002 0.775 0.003 0.959 0.892
------... | ArXiv | true | false | arxiv_only |
(r)[2-8]{} Base Arrangement \[0, 0, 1\] \[0, 1, 0\] \[0, 1, 1\] \[1, 0, 0\] \[1, 0, 1\] \[1, 1, 0\] \[1, 1, 1\]
[\[1, 1, 1\]]{} 0.219 **0.190** 0.249 0.648 0.773 ... | ArXiv | true | false | arxiv_only |
[Baseline attack success rates and transfer success rates for an IGS attack with an epsilon of 1.0 on LeNet models trained on MNIST. 8 pairs of models were trained for the parallel Jacobian goal, and 8 pairs of models were trained for the perpendicular goal to obtain error bars around attack success rates.[]{data-label... | ArXiv | true | false | arxiv_only |
of the CW2 attack on the average of defense arrangements is investigated in the Supplementary Material. While it may seem that our proposed defense scheme is easily fooled by a strong attack such as CW2, there are still ways of recovering from such attacks by using detection. In fact, there will always be perturbations th... | ArXiv | true | false | arxiv_only
------------ ------- ------- ------- ------- ------- -------
: Transferability of attacks between LeNet and triplet network.[]{data-label="table:transferability_classifier_detector"}
Jointly Fooling Classifier and Detector
---------------------------------------
If an adversary is unaware that a detector is in ... | ArXiv | true | false | arxiv_only |
CW2 0.787 0.703 0.702 0.909 0.848 0.635 0.657
------------------------------ -------- --------- ------- ---------- ------------ --------------- ------------
: Attack success rates and detector accuracy for adversarial examples on LeNet-VAE using a trip... | ArXiv | true | false | arxiv_only |
ional regularization in place is determining how to trade off classification accuracy and gradient orthogonality. Our defense framework requires little computational overhead for filtering operations such as blurs and sharpens, and is not particularly computationally intensive when there are VAEs to train. Training a numbe... | ArXiv | true | false | arxiv_only
ation maps is an unconventional task, we visualize the reconstructions in order to inspect their quality. For the activations obtained from the first convolutional layer seen in Figure \[fig:conv1\_layer\_reconstructions\], it is obvious that the VAEs are effective at reconstructing the activation maps. The only potent... | ArXiv | true | false | arxiv_only |
a marked impact on the distance between normal and adversarial examples. Thus, we can conclude that part of the reason why the defense works is that it dampens the effect of adversarial noise.
Triplet Network Visualization {#triplet-network-visualization .unnumbered}
-----------------------------
Here we illustrate how a triplet network is trained. An anchor, positive example, and negative example are all passed through the same em... | ArXiv | true | false | arxiv_only |
eavy bands. The Fermi surface exhibits nesting between hole and electron sheets that manifests as a peak in the susceptibility at $(1/2,1/2)$. I propose that the superconductivity in this compound is mediated by antiferromagnetic spin fluctuations associated with this peak resulting in a $s_\pm$ state similar to the pr... | ArXiv | true | false | arxiv_only |
nt96] are similar to the Fe–As and Fe–Fe distances of 2.403 and 2.802 Å, respectively, found in BaFe$_2$As$_2$.[@rott08] This raises the possibility that the direct Fe–Fe hopping is important to the physics of this material, which is the case for the previously discovered iron-based superconductors.[@sing08a]
Furtherm... | ArXiv | true | false | arxiv_only |
shaped like a shell of a clam is situated around $Z = (0, 0, 1/2) = (1, 0, 0)$. This sheet encloses a cylindrical and two almost spherical hole sheets. The tetragonal cylinder sheet around $X$ nests with the spherical and the cylindrical sheets around $Z$, which manifests as a peak at $(1/2,1/2)$ in the bare susceptibl... | ArXiv | true | false | arxiv_only
based superconductors.[@sing08a] This may suggest that YFe$_2$Ge$_2$ shares some of the underlying physics with the previously discovered iron-based superconductors.
![ Top: LDA non-spin-polarized band structure of YFe$_2$Ge$_2$. Bottom: A blow-up of the band structure around Fermi level. The long $\Gamma$–$Z$ directi... | ArXiv | true | false | arxiv_only |
-$1.5 and $-$2.6 eV have Ge $4p_x$ and $4p_y$ character. The rest of the bands below the Fermi level have mostly Fe $3d$ character. Similar to the other iron-based superconductors,[@sing08a] there is no gap-like structure among the Fe $3d$ bands splitting them into lower-lying $e_g$ and higher-lying $t_{2g}$ states. This... | ArXiv | true | false | arxiv_only
. It may be possible to access these band critical points that have vanishing quasiparticle velocities via small perturbations due to impurities, doping, or changes in structural parameters. The role of such band critical points in quantum criticality has been emphasized recently,[@neal11] and similar physics may be re... | ArXiv | true | false | arxiv_only |
1/2,1/2)$.
I have calculated the Lindhard susceptibility $$\chi_0(q,\omega) = \sum_{k,m,n} |M_{k,k+q}^{m,n}|^2 \frac{f(\epsilon_k^m) - f(\epsilon_{k+q}^n)}{\epsilon_k^m - \epsilon_{k+q}^n - \omega - \imath \delta}$$ at $\omega \to 0$ and $\delta \to 0$, where $\epsilon_k^m$ is the energy of a band $m$ at wave vector... | ArXiv | true | false | arxiv_only
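The static limit of this susceptibility can be evaluated numerically on a k-grid. The sketch below uses a toy 2D square-lattice tight-binding band with the matrix element set to unity (as in the figure), not the paper's DFT bands, and the overall sign is chosen so that $\chi_0 \geq 0$ (conventions differ):

```python
import numpy as np

def chi0_static(q, t=1.0, mu=0.0, n_k=64, beta=40.0):
    """Static Lindhard susceptibility chi_0(q, omega -> 0) for a toy 2D
    band eps(k) = -2t(cos kx + cos ky) - mu, matrix element set to unity.
    Degenerate terms eps_k == eps_{k+q} use the limit -df/deps."""
    ks = 2.0 * np.pi * np.arange(n_k) / n_k
    kx, ky = np.meshgrid(ks, ks, indexing="ij")
    eps = lambda ax, ay: -2.0 * t * (np.cos(ax) + np.cos(ay)) - mu
    f = lambda e: 1.0 / (np.exp(beta * e) + 1.0)
    e1, e2 = eps(kx, ky), eps(kx + q[0], ky + q[1])
    den = e2 - e1
    degenerate = np.abs(den) < 1e-9
    safe_den = np.where(degenerate, 1.0, den)
    ratio = np.where(degenerate,
                     beta * f(e1) * (1.0 - f(e1)),   # -df/deps limit
                     (f(e1) - f(e2)) / safe_den)
    return ratio.mean()

# At half filling this band is perfectly nested at Q = (pi, pi), so
# chi_0 peaks there -- the analog of the (1/2, 1/2) peak in the text.
peak = chi0_static(q=(np.pi, np.pi))
away = chi0_static(q=(1.0, 0.0))
```

The nesting condition $\epsilon(k+Q) = -\epsilon(k)$ is what drives the peak; for the real material the analogous nesting is between the hole and electron sheets.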
rives from Coulomb repulsion between electrons.
![The real part of bare susceptibility calculated with the matrix element set to unity.[]{data-label="fig:yfg-suscep"}](yfg-suscep-color){width="0.6\columnwidth"}
In the present case, the structure of the calculated susceptibility leads to the off-diagonal component of ... | ArXiv | true | false | arxiv_only |
om non-spin-polarized calculation is $N(E_F)$ = 1.125 eV$^{-1}$ per spin per Fe, which puts this material on the verge of a ferromagnetic instability according to the Stoner criterion. Ferromagnetism is pair-breaking for the singlet pairing and will suppress the $T_c$ in this compound. Furthermore, there is a peak in t... | ArXiv | true | false | arxiv_only |
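For orientation, the Stoner criterion for a ferromagnetic instability reads $N(E_F)\,I > 1$; the Stoner parameter $I \approx 0.7$–$0.9$ eV used below is a typical literature range for Fe $3d$, not a value from this paper. With the calculated $N(E_F) = 1.125$ eV$^{-1}$ per spin per Fe,

```latex
N(E_F)\, I \approx 1.125~\mathrm{eV}^{-1} \times (0.7\text{--}0.9)~\mathrm{eV} \approx 0.8\text{--}1.0,
```

which sits right at the threshold of 1, consistent with being on the verge of a ferromagnetic instability.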
ng compounds where LDA in general underestimates the magnetism.
Although this compound does not magnetically order experimentally, it nonetheless shows proximity to magnetism. It is found that partial substitution of Y by isovalent Lu causes the system to order antiferromagnetically, with 81% Lu substitution being the... | ArXiv | true | false | arxiv_only |
t stable one is higher by 51 meV.[@sing08b] Signatures of quantum criticality have been reported for BaFe$_2$As$_2$ and related compounds.[@ning09; @jian09; @kasa10] YFe$_2$Ge$_2$ should show pronounced effects of proximity to quantum criticality as the competition between magnetic interactions is even stronger.
In sum... | ArXiv | true | false | arxiv_only |
ld, J. Magn. Magn. Mater. [**270**]{}, 51 (2004).
J. Ferstl, H. Rosner, and C. Geibel, Physica B: Condens. Matter [**378-380**]{}, 744 (2006).
S. Ran, S. L. Bud’ko, and P. C. Canfield, Philos. Mag. [**91**]{}, 4388 (2011).
I. I. Mazin, D. J. Singh, M. D. Johannes, and M. H. Du, Phys. Rev. Lett. [**101**]{}... | ArXiv | true | false | arxiv_only |
solution to the Kadison–Singer problem. We extend (and slightly sharpen) this theorem to the realm of hyperbolic polynomials. A benefit of the extension is that the proof becomes coherent in its general form, and fits naturally in the theory of hyperbolic polynomials. We also study the sharpness of the bound in the th... | ArXiv | true | false | arxiv_only |
ion we describe how Theorem \[t1\] may be interpreted in terms of strong Rayleigh measures. We use this to derive sufficient conditions for a weak half-plane property matroid to have $k$ disjoint bases. These conditions are very different from Edmonds' characterization in terms of the rank function of the matroid [@Edm]... | ArXiv | true | false | arxiv_only
yperbolic with respect to any vector ${\mathbf{e}}\in {\mathbb{R}}_{++}^n=(0,\infty)^n$: $$h(t{\mathbf{e}}-{\mathbf{x}}) = \prod_{j=1}^n (te_j-x_j).$$
2. Let $X=(x_{ij})_{i,j=1}^n$ be a matrix of $n(n+1)/2$ variables where we impose $x_{ij}=x_{ji}$. Then $\det(X)$ is hyperbolic with respect to $I=\diag(1, \ldots, 1)$... | ArXiv | true | false | arxiv_only |
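Example 2 can be checked numerically: hyperbolicity of $\det$ with respect to $I$ means that for every real symmetric $X$, $t \mapsto \det(tI - X)$ has only real roots (the eigenvalues of $X$). A quick sanity check with numpy (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5))
X = A + A.T  # random real symmetric matrix

# char_poly holds the coefficients of t -> det(tI - X); its roots are
# the eigenvalues of X, which are real because X is symmetric.
char_poly = np.poly(X)
roots = np.roots(char_poly)
max_imag = np.max(np.abs(roots.imag))
```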
geq 0\}$. Since $h(t{\mathbf{e}}-{\mathbf{e}})=h({\mathbf{e}})(t-1)^d$ we see that ${\mathbf{e}}\in \Lambda_{\tiny{++}}$. The hyperbolicity cones for the examples above are:
1. $\Lambda_{\tiny{++}}({\mathbf{e}})= {\mathbb{R}}_{++}^n$.
2. $\Lambda_{\tiny{++}}(I)$ is the cone of positive definite matrices.
3. $\Lam... | ArXiv | true | false | arxiv_only |
fund\] (3). It follows that $\| \cdot \|$ is a seminorm and that $\| {\mathbf{x}}\|=0$ if and only if ${\mathbf{x}}\in L(\Lambda_+)$. Hence $\| \cdot \|$ is a norm if and only if $L(\Lambda_+)=\{0\}$.
The following theorem is a generalization of Theorem \[MSSmain\] to hyperbolic polynomials.
\[t1\] Let $k\geq 2$ be a... | ArXiv | true | false | arxiv_only |
alent.
1. $f_1(x), \ldots, f_m(x)$ have a common interleaver.
2. for all $p_1, \ldots, p_m \geq 0$, $\sum_{i}p_i=1$, the polynomial $$p_1f_1(x)+ \cdots+ p_mf_m(x)$$ is real–rooted.
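The equivalence above can be illustrated numerically: for $f_1=(x-1)(x-3)$ and $f_2=(x-2)(x-4)$, every convex combination turns out to be real-rooted, so by (1) $\Leftrightarrow$ (2) the pair has a common interleaver. This is a toy check, not part of the proof:

```python
import numpy as np

# Coefficients (highest degree first) of f1 = (x-1)(x-3), f2 = (x-2)(x-4).
f1 = np.array([1.0, -4.0, 3.0])
f2 = np.array([1.0, -6.0, 8.0])

# Condition (2): p*f1 + (1-p)*f2 is real-rooted for all p in [0, 1].
# For these quadratics the discriminant is 4(p^2 - p + 1) > 0.
all_real = all(
    np.allclose(np.roots(p * f1 + (1.0 - p) * f2).imag, 0.0)
    for p in np.linspace(0.0, 1.0, 101)
)
```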
\[largestz\] Let $f_1,\ldots, f_m$ be real–rooted polynomials that have the same degree and positive leading coefficients, and suppose... | ArXiv | true | false | arxiv_only |
en there is a tuple ${\mathbf{s}}=(s_1, \ldots, s_m) \in S_1 \times \cdots \times S_m$, with ${\mathbb{P}}[{\mathsf{X}}_i=s_i]>0$ for each $1\leq i \leq m$, such that the largest zero of $f(s_1,\ldots, s_m;t)$ is smaller than or equal to the largest zero of ${\mathbb{E}}f({\mathsf{X}}_1,\ldots, {\mathsf{X}}_m;t)$.
The proo... | ArXiv | true | false | arxiv_only |
olic polynomial and let ${\mathbf{v}}\in \Lambda_+$ be such that $D_{\mathbf{v}}h \not \equiv 0$. Then
1. $D_{\mathbf{v}}h$ is hyperbolic with hyperbolicity cone containing $\Lambda_{++}$.
2. The polynomial $h({\mathbf{x}})-yD_{\mathbf{v}}h({\mathbf{x}}) \in {\mathbb{R}}[x_1,\ldots, x_n,y]$ is hyperbolic with hyper... | ArXiv | true | false | arxiv_only |
see [@Ren Prop. 22] or [@BrObs Lemma 4.4].
By $$\label{mag}
h({\mathbf{x}}-y{\mathbf{v}}) = \left( \sum_{k=0}^{\infty} \frac {(-y)^k D_{\mathbf{v}}^k}{k!} \right) h({\mathbf{x}}).$$ Thus $$h({\mathbf{e}}-t{\mathbf{v}}) = h({\mathbf{e}})\prod_{j=1}^d(1-t\lambda_j({\mathbf{v}}))= \sum_{k=0}^d (-1)^k\frac {D^k_{\mathbf{... | ArXiv | true | false | arxiv_only |
k=0}^{\infty} \frac {(-y)^k D_{\mathbf{v}}^k}{k!} \right) h({\mathbf{x}})= (1-yD_{{\mathbf{v}}})h({\mathbf{x}}),$$ from which the lemma follows.
Note that $({\mathbf{v}}_1,\ldots,{\mathbf{v}}_m) \mapsto h[{\mathbf{v}}_1,\ldots,{\mathbf{v}}_m]$ is affine linear in each coordinate, i.e., for all $p \in {\mathbb{R}}$ and... | ArXiv | true | false | arxiv_only |
y Theorem \[mixhyp\] (since ${\mathbb{E}}{\mathbf{v}}_i \in \Lambda_+$ for all $i$ by convexity). In particular the polynomial ${\mathbb{E}}f({\mathsf{X}}_1,\ldots, {\mathsf{X}}_m;t)$ is real–rooted.
The second assertion is an immediate consequence of the first combined with Lemma \[rk1le\].
Bounds on zeros of mixed ... | ArXiv | true | false | arxiv_only |
left for the reader to verify.
\[tech\] $$\xi_i[h-\partial_jh]=\xi_i[h]-\frac {\partial_j \xi_i[h]\cdot \xi_j[\partial_ih]}{\xi_j[\partial_ih]-1}.$$
\[engine\] If ${\mathbf{x}}\in \Lambda_{++}$, $1\leq i,j \leq n$, $\delta > 1$ and $$\xi_j[h]({\mathbf{x}}) \geq \frac \delta {\delta-1},$$ then $$\xi_i[h-\partial_jh]... | ArXiv | true | false | arxiv_only |
{\mathbf{v}}_j)-1}\right)\\
&\geq \xi_i[h]({\mathbf{v}}_j) \left( \delta - \frac { \delta/(\delta-1)}{ \delta/(\delta-1)-1}\right) =0, \end{aligned}$$ where the last inequality follows from , and the concavity of ${\mathbf{z}}\rightarrow \xi_j[h]({\mathbf{z}})$.
Consider ${\mathbb{R}}^{n+m}={\mathbb{R}}^n\oplus {\mat... | ArXiv | true | false | arxiv_only |
\frac {t_1}{t_1-1}{\mathbf{v}}_1 + {\mathbf{e}}_1$$ is in the hyperbolicity cone of $(1-y_1D_{{\mathbf{v}}_1})h$. By the first part we have ${\mathbf{x}}'+t_2{\mathbf{e}}_2, {\mathbf{x}}'+t_3{\mathbf{e}}_3\in \Gamma_+$. Hence we may apply the first part of the theorem with $h$ replaced by $(1-y_1D_{{\mathbf{v}}_1})h$ t... | ArXiv | true | false | arxiv_only |
hbf{e}}+ \left(1-\frac 1 m + \frac t m \right) {\mathbf{1}}\in \Gamma_+.$$
Hence by (the homogeneity of $\Gamma_+$ and) Remark \[hypid\], the maximal zero is at most $$\inf \left\{ \frac {\epsilon t+ \left(1-\frac 1 m\right)\frac t {t-1}} {1-\frac 1 m + \frac t m } : t >1\right\}.$$ It is a simple exercise to deduce ... | ArXiv | true | false | arxiv_only |
cdots h({\mathbf{x}}^k) \in {\mathbb{R}}[{\mathbf{y}}],$$ which is hyperbolic with respect to ${\mathbf{e}}^1\oplus \cdots \oplus {\mathbf{e}}^k$, where ${\mathbf{e}}^i$ is a copy of ${\mathbf{e}}$ in the variables ${\mathbf{x}}^i$, for all $1 \leq i \leq k$. The hyperbolicity cone of $g$ is the direct sum $\Lambda_+:=... | ArXiv | true | false | arxiv_only |
heorem \[t1\] we are therefore motivated to look closer at the following problem.
\[central\] Let $h$ be a polynomial of degree $d$ which is hyperbolic with respect to ${\mathbf{e}}$, and let $\epsilon >0$ and $m \in {\mathbb{Z}}_+$ be given. Determine the largest possible maximal zero, $\rho=\rho(h,{\mathbf{e}},\epsi... | ArXiv | true | false | arxiv_only |
bf{w}}),$$ and hence $$D_{\mathbf{u}}D_{\mathbf{v}}h({\mathbf{w}}) \geq \min\{ D_{\mathbf{u}}^2 h({\mathbf{w}}), D_{\mathbf{v}}^2 h({\mathbf{w}})\}.$$
By continuity we may assume ${\mathbf{u}}, {\mathbf{v}}, {\mathbf{w}}\in \Lambda_{++}$. Then the polynomial $$\begin{aligned}
& g(x,y,z):=h(x{\mathbf{u}}+y{\mathbf{v}}+... | ArXiv | true | false | arxiv_only |
h[{\mathbf{v}}_3, \ldots, {\mathbf{v}}_m]$. By Lemma \[nicein\] we may assume $$D_{{\mathbf{v}}_1}D_{{\mathbf{v}}_2} g({\mathbf{w}}) \geq D_{{\mathbf{v}}_1}^2g({\mathbf{w}}) \geq 0,$$ since otherwise change the indices $1$ and $2$. For $$0 \leq s \leq \min\left\{1, \frac {\epsilon -\tr({\mathbf{v}}_2)} {\tr({\mathbf{v}... | ArXiv | true | false | arxiv_only |
me $\delta>0$. Let $\rho$ be the maximal zero in Problem \[central\]. Then the function $$(-\delta, 1+\delta) \ni s \mapsto \chi[{\mathbf{v}}_1(s),{\mathbf{v}}_2(s), {\mathbf{v}}_3, \ldots, {\mathbf{v}}_m](\rho)$$ is a degree at most two polynomial which has local minima at $s=0$ and $s=1$. Hence this function is ident... | ArXiv | true | false | arxiv_only |
athbf{v}}(t)= T_{k,d}(h(t{\mathbf{e}}-{\mathbf{v}}))$ where $T_{k,d}: {\mathbb{R}}[t] \rightarrow {\mathbb{R}}[t]$ is the linear operator defined by $$T_{k,d}\left(\sum_{j\geq 0} a_jt^j\right) = -\sum_{j=0}^d \left( \frac {j+1} {k+1} a_{j+1} +(d-1-j)a_j\right) (d-j)! \binom {k+1}{d-j}t^j.$$ Moreover if $f$ is a $[0,1/k... | ArXiv | true | false | arxiv_only |
n$. Then this (maximal) zero is achieved for some $T(f)$, where $f$ has at most one distinct zero in $(a,b)$.
Moreover, if the maximal zero above is achieved for some $T(f)$, where $f \in {\mathcal{M}}_d$ is $(a,b)$–rooted, then the maximal zero is also achieved for $T ((t-\epsilon/d)^d)$.
Let ${\mathcal{A}}={\mathca... | ArXiv | true | false | arxiv_only |
^i (t-s)^j \left(t - \frac {\epsilon -ir-js}{d-i-j}\right)^{d-i-j}\right) (\rho)=0.$$ The left–hand–side of is a polynomial, say $P_{i,j}(r,s) \in {\mathbb{R}}[r,s]$. Hence the polynomial $\prod_{i,j}P_{i,j}(r,s)$, where the product is over all $i,j$ which are realized for some such $r,s$, vanishes on a set with nonemp... | ArXiv | true | false | arxiv_only |
hen $$\tr({\mathbf{e}}_i) = \frac {d} {mk}, \ \ \rk({\mathbf{e}}_i)=1 \ \ \mbox{ and } \ \ {\mathbf{e}}_1+\cdots+{\mathbf{e}}_{mk}={\mathbf{1}},$$ for all $1\leq i \leq mk$. By symmetry, the partition $$S_1=\{1,\ldots, m\}, S_2=\{m+1, \ldots, 2m\}, \ldots, S_k=\{(k-1)m+1,\ldots, km\}$$ minimizes the bound in . Now $$\b... | ArXiv | true | false | arxiv_only |
(x_1A_1+ \cdots+x_nA_n).$$ Thus we cannot directly derive an analog of Proposition \[lowprop\] for Theorem \[MSSmain\].
Consequences for strong Rayleigh measures and weak half-plane property matroids
===============================================================================
A discrete probability measure, $\mu$,... | ArXiv | true | false | arxiv_only |
1\]. Assume the hypothesis in Theorem \[t1\], and form the polynomial $$P({\mathbf{x}})= h(x_1{\mathbf{u}}_1+\cdots+x_m{\mathbf{u}}_m)/h({\mathbf{e}}).$$ It follows that $P({\mathbf{x}})$ is hyperbolic with hyperbolicity cone containing the positive orthant. Since $\rk({\mathbf{u}}_i) \leq 1$ for all $1\leq i \leq m$ w... | ArXiv | true | false | arxiv_only |
$j \in [n]$. If we can prove that ${\lambda_{\rm min}}({\mathbf{v}}_j)>0$, then $\rk({\mathbf{v}}_j)=\rk({\mathbf{1}})$ and so $S_j$ contains a basis. Now, by , Theorem \[t11\], and the convexity of ${\lambda_{\rm max}}$: $$\begin{aligned}
{\lambda_{\rm min}}({\mathbf{v}}_j) &= 1-{\lambda_{\rm max}}({\mathbf{1}}-{\math... | ArXiv | true | false | arxiv_only |
g/pdf/1004.1382.pdf>. P. Brändén, Hyperbolicity cones of elementary symmetric polynomials are spectrahedral, Optim. Lett. [**8**]{} (2014), 1773–1782, <http://arxiv.org/abs/1204.2997>.
Y. Choe, J. Oxley, A. Sokal, D. G. Wagner, [ Homogeneous multivariate polynomials with the half-plane property]{}. Adv. Appl. Math. [*... | ArXiv | true | false | arxiv_only |
combinatorics and probability, Current developments in mathematics, 2011, 57–123, Int. Press, Somerville, MA, 2012, <http://arxiv.org/abs/1210.3231>.
J. Renegar, Hyperbolic programs, and their derivative relaxations, Found. Comput. Math., [**6**]{} (2006), 59–79.
V. Vinnikov, LMI representations of convex semialgebra... | ArXiv | true | false | arxiv_only |
for the associative operad was given in [@Tr] and the starting point for a similar version for the commutative operad was considered in [@Ginot]. It is natural to ask for a generalization of these constructions applicable to any cyclic operad. This is what is done in this paper.
Starting with a cyclic operad $\mathca... | ArXiv | true | false | arxiv_only |
cal O$ as algebras over the operad $\widehat{\mathcal O}_\infty=\mathbf{D}(\widehat{\mathcal O^!})$. The concept of algebras over the operad $\widehat{\mathcal O}_\infty$ will be investigated in more detail in section \[homotop-ip\]. In particular, we explicitly reinterpret in proposition \[O\_hat\_algebras\] algebras ... | ArXiv | true | false | arxiv_only |
sheff and Scott Wilson for useful comments and remarks regarding this topic. The second author was partially supported by the Max-Planck Institute in Bonn.
$\widehat{\mathcal Comm}_\infty$ structure for Poincaré duality spaces {#Comm-section}
======================================================================
Befo... | ArXiv | true | false | arxiv_only |
that the required data for a homotopy $\mathcal Comm$-inner product consists of:
- a derivation $d\in \mathrm{Der}(F _{\mathcal Lie}\,C[1])$ of degree $1$, with $d^2=0$,
- a derivation $g\in \mathrm{Der}_d (F_{\mathcal Lie,\,C[1]}C[1])$ over $d$ of degree $1$, with $g^2=0$, which induces a derivation $h\in \mathr... | ArXiv | true | false | arxiv_only
into $L_{i+1}\oplus L_{i+2}\oplus\dots$. This completes the inductive step, and thus produces the wanted derivation $d$ on $F_{\mathcal Lie}(C[1])$.
In a similar way, we may produce the derivation $g$ of $F_{\mathcal Lie,\,C[1]}C[1]$ over $d$, by decomposing $F_{\mathcal Lie, C[1]} C[1]=L'_1\oplus L'_2\oplus\dots$, wh... | ArXiv | true | false | arxiv_only |
rms can be put together as before to produce a map $\chi_i$, so that now $\Upsilon_{i+1}:=\chi_2+\dots+\chi_i$ only maps into $M_{i+1}\oplus M_{i+2}\oplus\dots $. We therefore obtain the chain map $\chi$, and with this, we define the homotopy $\mathcal Comm$-inner product as $f:=\chi(\mu)\in Mod(F_{\mathcal Lie, C[1]}... | ArXiv | true | false | arxiv_only |
f symbols $x_1, \dots, x_n, x\in \{{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}},{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}},\varnothing \}$, we have (differential graded) vector spaces $\... | ArXiv | true | false | arxiv_only |
linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}})=k$.
Graphically, we represent $\mathcal P(x_1,\dots,x_n;x)$ by a tree with $n$ inputs and one output of the given color. Since the color $\varnothing$ cannot appear as an input, we may use the following convention: we represent the output... | ArXiv | true | false | arxiv_only |
4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}},{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}},{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}},{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth... | ArXiv | true | false | arxiv_only |
oup $S_{n+1}$ on $\mathcal O(n)$, which extends the given $S_n$-action, and satisfies, for $1\in\mathcal O(1)$, $\alpha\in \mathcal O(m)$, $\beta\in \mathcal O(n)$ the following relations: $$\begin{aligned}
\label{compos_cyclic1} \tau_2(1)&=&1,\\ \label{compos_cyclic2}
\tau_{m+n}(\alpha\circ_k \beta)&=&\tau_{m+1}(\alph... | ArXiv | true | false | arxiv_only |
\,\,\, 3\,\,\, 4$}
\rput[b](4,2){$\rightsquigarrow$}
\end{pspicture}
\begin{pspicture}(0,0.8)(4,3.4)
\psline[linestyle=dashed](2,2)(1.2,3)
\psline(2,2)(1.6,3)
\psline(2,2)(2,3)
\psline(2,2)(2.4,3)
\psline[linestyle=dashed](2,2)(2.8,3)
\rput[b](2,3.2){$1\,\,\, 2\,\,\, 3\,\,\, 4\,\,\, 5$}
\end{pspicture}$$ We defi... | ArXiv | true | false | arxiv_only |
he last component, we define $$\label{def_cyclic_compos}
\alpha\circ_{m+1} \beta:=\tau_{n+m} (\tau^{-1}_{m+1}(\alpha)\circ_m
\beta)\stackrel{\eqref{compos_cyclic3}}{=} \tau_{n+1}(\beta) \circ_1
\alpha$$ $$\begin{pspicture}(1,1.6)(10.5,5.6)
\psline[linestyle=dashed](2,2)(1.2,2.9)
\psline(2,2)(1.6,2.9)
\psline(2,2)(2,... | ArXiv | true | false | arxiv_only |
ity follows from equation , $\alpha\in \widehat{\mathcal O}(\vec X;\varnothing) =\mathcal O(m)$ has $m+1$ inputs, and $\beta\in\widehat{\mathcal
O}(\vec Y;{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}})=\mathcal O(n)$ (or similarly $\beta\i... | ArXiv | true | false | arxiv_only |
ef_cyclic_compos}}}{=} (\tau_{n+1}(\beta)
\circ_1 \alpha)\circ_j \gamma \stackrel{\mathit{op.comp}}{=}
(\tau_{n+1}(\beta) \circ_{j-m+1} \gamma)\circ_1 \alpha
\stackrel{\eqref{compos_cyclic2}}{=}\\ = \tau_{n+p}(\beta
\circ_{j-m}\gamma)\circ_1\alpha \stackrel{\mathit{
\eqref{def_cyclic_compos}}}{=} \alpha\circ_{m+1}
(\b... | ArXiv | true | false | arxiv_only |
of the leaves of the tree using the $S_2$-action on $E$, and the composition maps are given by attaching trees. This definition can readily be seen to define a colored operad.
An ideal $\mathcal I$ of a colored operad $\mathcal P$ is a collection of $S_n$-sub-modules $\mathcal I(\vec X;z)\subset
\mathcal P(\vec X;z)$ ... | ArXiv | true | false | arxiv_only |
{
\begin{pspicture}(0,0)(0.2,0.2)
\psline[linewidth=1pt, linestyle=dashed,dash=3pt 2pt](0.1,0)(0.1,0.2)
\end{pspicture}},{
\begin{pspicture}(0,0)(0.2,0.2)
\psline[linewidth=1pt](0.1,0)(0.1,0.2)
\end{pspicture}}}_{{
\begin{pspicture}(0,0)(0.2,0.2)
\psline[linewidth=1pt, linestyle=dashed,dash=3pt 2pt](0.1,0)(0.... | ArXiv | true | false | arxiv_only |
,0)(0.2,0.2)
\psline[linewidth=1pt, linestyle=dashed,dash=3pt 2pt](0.1,0)(0.1,0.2)
\end{pspicture}},{
\begin{pspicture}(0,0)(0.2,0.2)
\psline[linewidth=1pt](0.1,0)(0.1,0.2)
\end{pspicture}}}_{{
\begin{pspicture}(0,0)(0.2,0.2)
\psline[linewidth=1pt, linestyle=dashed,dash=3pt 2pt](0.1,0)(0.1,0.2)
\end{pspicture... | ArXiv | true | false | arxiv_only |
sline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}};{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}}), \\
R\subset\mathcal F(E)(3)\cong\mathcal F(\widehat{E})({
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[lin... | ArXiv | true | false | arxiv_only |
{pspicture}};\varnothing), \\ G\subset \mathcal
F(\widehat{E})({
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}},{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}},{
\begin{pspicture}(0,0.1)(0.2,0.4... | ArXiv | true | false | arxiv_only |
perp$ of $R$ as a subspace of $\left(\bigoplus_{w,x,y,z\in C} \mathcal F(E)(w,x,y;z) \right) ^\vee$. Notice that $\mathcal F(E)(\vec X;x)^\vee = \mathcal F(E^\vee) (\vec X;x)$, so that $\mathcal P^!$ is generated by $E^\vee$ with relations $R^\perp$, see [@GK (2.1.9)].
Now if $\mathcal P=\mathcal F(E)/(R)$ is a quadra... | ArXiv | true | false | arxiv_only |
omology concentrated in degree zero.
We now state our main theorem.
\[O\_hat\_Koszul\] If $\mathcal O$ is cyclic quadratic and Koszul, then $\widehat{\mathcal O}$ has a resolution, which for a given sequence $\vec X$ of colors, with $|\vec X|=n$, is given by the quasi-isomorphisms $$\begin{aligned}
\textbf{D}(\wideha... | ArXiv | true | false | arxiv_only |
=1pt](0.1,0)(0.1,0.4)
\end{pspicture}},{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}})$. Let us first show the validity of equation . The map $\textbf{D} (\widehat{\mathcal O^!}) (\vec X;\varnothing)\to \widehat{ \mathcal O}(\vec X;\varnot... | ArXiv | true | false | arxiv_only |
O^!}) (\vec X;
\varnothing )^{-1}\big)$. This implies equation .
As for equation (\[Hi<0\]), we will show by induction that every closed element in $\textbf{D}(\widehat{\mathcal O^!}) (\vec X;\varnothing)^{-r}$, for $r\geq 1$, is also exact. The induction slides all of the “full”... | ArXiv | true | false | arxiv_only
)(1.5,3)
\psline(0.4,2.6)(0.4,3)
\psline(0.7,2.3)(0.7,3)
\rput(1.4,1.3){\tiny $\alpha_1$}
\rput(0.2,2.4){\tiny $\alpha_2$}
\rput(0.6,2.1){\tiny $\alpha_3$}
\rput(1,2.3){\tiny $\alpha_4$}
\pscircle[linestyle=dotted](1,2.2){1.4}
\rput(5.2,1){$\psi$}
\psline(3.5,1.5)(3,3)
\psline(3.5,1.5)(4,3)
\psline(4.5,2.5)(... | ArXiv | true | false | arxiv_only |
,2.2){1.4}
\rput(-0.2,1){$\varphi$}
\psline(2.5,0.5)(2,3)
\psline(2.5,0.5)(1.3,3)
\psline(1.7,2.2)(1.7,3)
\psline(0.4,2.6)(0.4,3)
\psline(0.7,2.3)(0.7,3)
\rput(0.2,2.4){\tiny $\alpha_2$}
\rput(0.6,2.1){\tiny $\alpha_3$}
\rput(1.4,2.1){\tiny $\alpha_4$}
\psccurve[linestyle=dotted](0.5,0.8)(-0.5,3)(1,3.6)(2.4,3... | ArXiv | true | false | arxiv_only |
^!})(\vec X;\varnothing)$ with “dashed” first and last inputs, can uniquely be written in the form $\varphi *_\sigma\psi$ or $\varphi\#_\sigma\psi$ for some $\varphi,\psi$ and $\sigma$.
For each $r\geq 1$, we show that the $(-r)$th homology of $\textbf{D}(\widehat{\mathcal O^!})(\vec X;\varnothing)$ vanishes by decomp... | ArXiv | true | false | arxiv_only |