ch $z_i$. Therefore $P=\widetilde{P}$.

Another module {#C}
==============

{#app-c-1}

Fix $c\in {\mathbb{C}}$ that satisfies Hypothesis \[main-hyp\] and an integer $k\geq 0$. For applications in [@GS2] we will need an analogue of Proposition \[pre-cohh\] for the left $H_{c+k}$-module $M(k) = H_{c+k}eB_{k0}\subseteq D({\mathfrak{h}^{\text{reg}}})\ast {{W}}$. As before, we filter $M(k)$ by the induced order filtration $\operatorname{\mathsf{ord}}$, so that $\operatorname{\mathsf{ogr}}M(k)\subseteq \operatorname{\mathsf{ogr}}D({\mathfrak{h}^{\text{reg}}})\ast{{W}}= {\mathbb{C}}[{{\mathfrak{h}}^{\text{reg}}}\oplus {\mathfrak{h}}^*]\ast {{W}}.$ The aim of this appendix is then to prove:

\[app-c-prop\] The left $H_{c+k}$-module $M(k) = H_{c+k}eB_{k0}$ satisfies $\operatorname{\mathsf{ogr}}M(k) = J^{k-1}\delta^ke.$

Recall that Proposition \[pre-cohh\] showed that the module $N(k)=B_{k0}\otimes eH_c$ has associated graded module $eJ^k\delta^k$. In a sense, Proposition \[app-c-prop\] is just a left-right analogue of that result, and so much of the present proof is formally very similar to that of Proposition \[pre-cohh\]. We should first explain why the two results involve different powers of $J^1$. The reason is that one can write $M(k) = H_{c+k}eH_{c+k}\delta eB_{k-1,0}$. By Corollary \[morrat-cor\], the left-hand end of this expression collapses to give $ M(k) = H_{c+k}\delta eB_{k-1,0}.$ In particular, $M(1) = H_{c+1}\delta e$. A routine computation using Lemmas \[abstract-products\] and \[grade-elements\] then gives:

Lemma {#thetainjC}
-----

[*$\operatorname{\mathsf{ogr}}M(1) = {{\mathbb{C}}[{\mathfrak{h}}\oplus {\mathfrak{h}}^*]}\delta e$, while $J^{k-1}\delta^{k}e \subseteq \operatorname{\mathsf{ogr}}M(k)$ for all $k\geq 1$.*]{}

It takes considerably more work to show that $J^{k-1}\delta^{k}e$ actually equals $\operatorname{\mathsf{ogr}}M(k)$ for $k>1$. 
The proofs of the first few steps in this argument are very similar to those of Lemmas \[Bbar-freeA\], \[filter-injA\] and \[step
specifically, $(w_0 ... w_{n\delta})$ in and $(v_0...v_{n\delta})$ in are coupled through the shared $(\eta_0...\eta_{n-1})$ variables. For convenience, we will let $v_t := v_{i \delta}$ and $w_t := w_{i\delta}$, where $i$ is the unique integer satisfying $t\in[i\delta, (i+1)\delta)$. We can verify that, marginally, the process $x_t$ in has the same distribution as , using the same proof as Lemma \[l:marginal\_of\_coupling\]. It is also straightforward to verify that $w_{k\delta}$, as defined in , has the same marginal distribution as , due to the definition of $\eta_i$ in .

[One Epoch Contraction]{} \[ss:epoch\_nongaussian\]

In Lemma \[l:non\_gaussian\_contraction\_stationary\], we prove a discretization error bound between $f(x_T - y_T)$ and $f(x_T - v_T)$, for the coupling defined in , and . In Lemma \[l:non\_gaussian\_contraction\_anisotropic\], we prove a discretization error bound between $f(x_T - v_T)$ and $f(x_T - w_T)$, for the coupling defined in , and .

\[l:non\_gaussian\_contraction\_stationary\] Let $f$ be as defined in Lemma \[l:fproperties\] with parameter $\epsilon$ satisfying $\epsilon \leq \frac{\Rq}{\aq\Rq^2 + 1}$. Let $x_t$, $y_t$ and $v_t$ be as defined in , , . Let $n$ be any integer, let $\delta$ be any step size, and let $T:= n\delta$. Suppose that $\E{\lrn{x_0}_2^2} \leq 8 \lrp{R^2 + \beta^2/m}$, that $\E{\lrn{y_0}_2^2} \leq 8 \lrp{R^2 + \beta^2/m}$, that $T\leq \min\lrbb{\frac{1}{16L}, \frac{\beta^2}{8L^2\lrp{R^2 + \beta^2/m}}}$, and that $$\begin{aligned} \delta \leq \min\lrbb{\frac{T\epsilon^2L}{36 d\beta^2\log \lrp{ \frac{36 d\beta^2}{\epsilon^2L}}}, \frac{T\epsilon^4L^2} {2^{14} d\beta^4\log\lrp{\frac{2^{14} d\beta^4}{\epsilon^4L^2}}}}. \end{aligned}$$ Then $$\begin{aligned} & \E{f(x_T - v_T)} - \E{f(x_T - y_T)} \leq 4TL\epsilon. \end{aligned}$$

By Taylor’s Theorem, $$\begin{aligned} & \E{f(x_T - v_T)}\\ =& \E{f(x_T - y_T)+ \lin{\nabla f(x_T - y_T), y_T - v_T} + \int_0^1\int_0^s \lin{\nabla^2 f(x_T - y_T + s(y_T-v_T)), (y_T - v_T
$\ldots = -C_0{\left\Vert \phi\right\Vert}^2_{L^2(G\times S)},$ and hence $$\begin{aligned} {\left\langle}(A_0(E)-CI)\phi,\phi{\right\rangle}_{L^2(G\times S)} ={}&{\left\langle}A_0(E)\phi,\phi{\right\rangle}_{L^2(G\times S)}-C{\left\Vert \phi\right\Vert}^2_{L^2(G\times S)} \\ \leq{}&(C_0-C){\left\Vert \phi\right\Vert}^2_{L^2(G\times S)}.\end{aligned}$$ Choosing $C\geq C_0$, one finds that $A_0(E)-CI$ is dissipative for any $E\in I$. B. We still have to show that $R(\lambda I-(A_0(E)-CI))=L^2(G\times S)$ for (any) $\lambda>0$. The equation $$\label{csda18op} (\lambda I-(A_0(E)-CI))\phi=f$$ means that $\phi\in \tilde W_{-,0}^2(G\times S)$ and that $$\label{csda18} {1\over {\tilde S(E)}}\omega\cdot\nabla_x\phi+(\lambda+C)\phi=f,$$ which is equivalent to $$\label{csda19} \omega\cdot\nabla_x\phi+(\lambda+C)\tilde S(E)\phi=\tilde S(E)f,$$ since by (\[csda9\]) and (\[csda9a\]) $f\in L^2(G\times S)$ if and only if $\tilde{S}(E)f\in L^2(G\times S)$. Let $B_0:L^2(G\times S)\to L^2(G\times S)$ be a linear operator with domain $D(B_0)$ defined by $$\begin{aligned} & D(B_0)=\tilde W^2_{-,0}(G\times S),\\ & B_0\phi=-\omega\cdot\nabla_x\phi.\end{aligned}$$ Then $B_0$ is $m$-dissipative ([@tervo14], [@dautraylionsv6]). Let $\lambda':=\kappa (\lambda+C)$. The equation (\[csda19\]) is equivalent to $$\label{csda20} (\lambda'I -(B_0+B_1))\phi=\tilde S(E)f,$$ where $B_1$ is defined by $$B_1\phi=-((\lambda+C)\tilde S(E)-\lambda')\phi.$$ It is clear that the operator $B_1:L^2(G\times S)\to L^2(G\times S)$ is bounded, and since $(\lambda+C)\tilde{S}(E)-\lambda'\geq 0$ by the assumption and the definition of $\lambda'$, it follows that $B_1$ is dissipative. These observations imply that $B_0+B_1$ is $m$-dissipative (cf. [@engelnagel Chapter III], or [@tervo14 Theorem 4.2]). Since (\[csda20\]) is equivalent to (\[csda18op\]) as explained above, this shows that $R(\lambda I-(A_0(E)-CI))=L^2(G\times S)$, and thus completes the proof. 
For $E\in I$, let $A_1(E):L^2(G\times S)\to L^2(G\times S)$ be the linear operator $$A_1(E)\phi:=-{1\over {\tilde S(E)}}\tilde\Sigma(E)\phi-{1\over {\tilde S(E)}}{{\frac{\partial \tilde S}{\partial E}}}(E)\phi+{1\over {\tilde S(E)}}\tilde K(E)\phi.$$ We have the following uniform bound for the family of operators $\{A_1(E)\ |\ E\in I\}$. \[csdale2\] Under the as
472.5 ± 13.9 260.7 ± 2.8 277.9 ± 3.3 314.7 ± 4.4 292.7 ± 8.2 264.2 ± 0.9 266.9 ± 17.3 77.3 ± 1.0 62.2 ± 2.8 101.4 ± 0.8 95.9 ± 5.3 Sac 2 436.3 ± 14.8 456.4 ± 17.5 285.0 ± 3.6 279.3 ± 7.9 322.7 ± 10.1 274.8 ± 6.6 280.3 ± 4.8 285.4 ± 7.4 74.3 ± 3.8 60.4 ± 2.1 98.5 ± 2.9 102.2 ± 1.7 Sac 3 404.5 ± 1.5 448.8 ± 2.3 262.9 ± 2.9 307.2 ± 5.1 325.0 ± 20.3 285.5 ± 10.9 276.4 ± 22.5 309.6 ± 3.7 80.3 ± 5.0 63.6 ± 2.3 104.9 ± 7.4 100.9 ± 1.9 Sac 4 416.2 ± 8.0 452.3 ± 2.4 254.7 ± 5.1 317.5 ± 3.6 322.0 ± 6.5 271.4 ± 5.4 265.5 ± 9.0 307.5 ± 6.2 77.5 ± 2.7 60.0 ± 1.3 104.2 ± 2.9 96.8 ± 1.9 Sac 5 412.1 ± 7.7 483.3 ± 10.8 284.1 ± 5.4 287.9 ± 8.3 293.5 ± 13.7 266.7 ± 6.4 261.5 ± 4.7 277.5 ± 14.2 71.4 ± 3.9 55.4 ± 2.5 92.2 ± 3.2 96.3 ± 3.0 Average 418.4 ± 3.2 443.2 ± 5.9 265.1 ± 3.6 292.1 ± 3.5 316.1 ± 3.4 281.2 ± 3.1 260.6 ± 2.4 287.6 ± 6.2 75.8 ± 1.0 63.9 ± 1.3 98.5 ± 1.1 98.4 ± 1.3 Genotype 0.005 0.011 0.001 \<0.001 \<0.001 0.055 Date \<0.001 \<0.001 \<0.001 \<0.001 \<0.001 0.902 Geno × Date 0.038
j_1}} \Bigg(\frac{\exp(\ltheta_{j_2})}{\lW-\exp(\ltheta_{j_1})}\cdots \Bigg( \sum_{\substack{j_{\ell-1} \in S \\ j_{\ell-1} \neq i, \\ j_1,\cdots,j_{\ell-2}}} \frac{\exp(\ltheta_{j_{\ell-1}})}{\lW-\sum_{k=j_1}^{j_{\ell-2}}\exp(\ltheta_{k})}\Bigg)\Bigg)\Bigg) \frac{e^{2b}}{\kappa-\ell+1} \nonumber\\ &\leq & e^{4b} \sum_{\substack{j_1 \in S \\ j_1 \neq i}} \Bigg(\frac{\exp(\ltheta_{j_1})}{\lW} \sum_{\substack{j_2 \in S \\ j_2 \neq i,j_1}} \Bigg(\frac{\exp(\ltheta_{j_2})}{\lW-\exp(\ltheta_{j_1})}\cdots \nonumber\\ && \Bigg( \sum_{\substack{j_{\ell-1} \in S \\ j_{\ell-1} \neq i, \\ j_1,\cdots,j_{\ell-2}}} \frac{\exp(\ltheta_{j_{\ell-1}})}{\lW-\sum_{k=j_1}^{j_{\ell-2}}\exp(\ltheta_{k})} \frac{\exp(\ltheta_i)}{\lW - \sum_{k = j_1}^{j_{\ell-1}} \exp(\ltheta_k)} \Bigg)\Bigg)\Bigg) \nonumber\\ &\leq & e^{4b} \P_{\ltheta}\Big[\sigma^{-1}(i) = \ell\Big] \label{eq:posl_upper5}\end{aligned}$$ The second inequality uses $\lalpha_2/(\kappa- \ell+\lalpha_{i,\ell,\theta}) \geq e^{-2b}/(\kappa - \ell +1)$. Observe that $\exp(\ltheta_j) = 1$ for all $j \neq i$ and $\exp(\ltheta_i) = \widetilde{\alpha}_{i,\ell,\theta} \geq {\left \lfloor{\widetilde{\alpha}_{i,\ell,\theta}} \right \rfloor} = \alpha_{i,\ell,\theta} \geq 0$. Therefore, we have $$\begin{aligned} \P_{\ltheta}\Big[\sigma^{-1}(i) = \ell \Big] &=& {\kappa-1 \choose \ell-1} \frac{\lalpha_{i,\ell,\theta}(\ell-1)!}{(\kappa-1+\widetilde{\alpha}_{i,\ell,\theta})(\kappa-2+\widetilde{\alpha}_{i,\ell,\theta})\cdots(\kappa-\ell+\widetilde{\alpha}_{i,\ell,\theta}) } \nonumber\\ &\leq& \frac{(\kappa-1)!}{(\kappa-\ell)!} \frac{e^{2b}}{(\kappa -1 + \alpha_{i,\ell,\theta})(\kappa -2+ \alpha_{i,\ell,\theta} )\cdots(\kappa -\ell + \alpha_{i,\ell,\theta} )} \nonumber\\ &\leq& \frac{e^{2b}}{\kappa} \bigg( 1- \frac{\ell}{\kappa+\alpha_{i,\ell,\theta}}\bigg)^{\alpha_{i,\ell,\theta}-1}, \label{eq:posl_upper6}\end{aligned}$$ Note that equation holds for all values of $\alpha_{i,\ell,\theta} \geq 0$. 
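For intuition, the position probabilities $\P_{\ltheta}\big[\sigma^{-1}(i) = \ell\big]$ bounded above can be checked numerically with a small Monte Carlo sketch (a hypothetical helper, assuming the standard sequential-choice definition of the Plackett-Luce model; in the unweighted case $\exp(\ltheta_j)=1$ for all $j$, every position is equally likely):

```python
import random

def sample_pl(weights, rng):
    """Draw one ranking from a Plackett-Luce model: repeatedly pick the
    next item with probability proportional to its weight among the
    remaining items, returning items in the order they were picked."""
    items = list(range(len(weights)))
    order = []
    while items:
        total = sum(weights[j] for j in items)
        r = rng.random() * total
        acc = 0.0
        for idx, j in enumerate(items):
            acc += weights[j]
            if r <= acc:
                order.append(items.pop(idx))
                break
    return order

def position_probs(weights, item, n_samples=20000, seed=0):
    """Monte Carlo estimate of P[sigma^{-1}(item) = l] for l = 1..kappa."""
    rng = random.Random(seed)
    counts = [0] * len(weights)
    for _ in range(n_samples):
        order = sample_pl(weights, rng)
        counts[order.index(item)] += 1
    return [c / n_samples for c in counts]
```

With equal weights and $\kappa=4$ items, each of the four position probabilities should come out close to $1/\kappa = 0.25$.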
Claim \[eq:posl\_upperbound\_eq\] follows by combining Equations and . Proof of Theorem \[th
del $\mathcal{C}_{\alpha \beta}$ is bounded from above and below as [@Fong:2016yyh] $$\frac{1}{N} \biggl( 1 - \sum_{j=1}^3 |U_{\alpha j}|^2 \biggr) \biggl( 1 - \sum_{j=1}^3 |U_{\beta j}|^2 \biggr) \leq \mathcal{C}_{\alpha \beta} \leq \biggl( 1 - \sum_{j=1}^3 |U_{\alpha j}|^2 \biggr) \biggl( 1 - \sum_{j=1}^3 |U_{\beta j}|^2 \biggr). \label{Cab-bound}$$ In the $(3+1)$ model, the $W$ matrix elements are unique, with the upper and lower bound being equal. For the numbers given in section \[sec:parameter-choice\], we have $W_{e4} = 0.141$, $W_{\mu 4} = 0.099$, and $W_{\tau 4} = 0.141$ assuming that they are real. Then, the leaking constants have the unique values, $\mathcal{C}_{e \mu}^{(N=1)} = 2 \times 10^{-4}$, $\mathcal{C}_{\mu \mu}^{(N=1)} = 9.6 \times 10^{-5}$, and $\mathcal{C}_{\tau \mu}^{(N=1)} = 9.5 \times 10^{-4}$. The lower bound is realized in the “universal scaling” model described in appendix \[sec:scaling-model\], which predicts $W_{\alpha J} = \frac{ 1 }{ \sqrt{N} } W_{\alpha 4}^{(N=1)}$ ($J=4,5, \cdot \cdot \cdot, 3+N$).[^19] It is shown in appendix \[sec:scaling-model\] that under the assumption of equal sterile state masses the universal scaling model predicts the same $W^2$ correction terms as those of the $(3+1)$ model. ### How large are the $W$ corrections and $\mathcal{C}_{\alpha \beta}$? {#sec:how-large} Let us go back to the expression of the oscillation probability to second order in $W$, eq. (\[P-beta-alpha-2nd-averaged\]), in section \[sec:probability-2nd\] to know where we might see visible effects. 
If we enter into the region $\rho E \gg 10 \, \text{ (g/cm}^3) \text{GeV}$ at around the first oscillation maximum, the first two terms in $2 \mbox{Re} \{ \cdot \cdot \cdot \}$ in (\[P-beta-alpha-2nd-averaged\]) can become large apart from $W^2$ suppression, $$\begin{aligned} \biggl | \frac{ AA L }{ ( \Delta_{J} - h_{i} ) } \biggr | &\sim& \biggl | \frac{ AA }{ ( h_{k} - h_{j} ) ( \Delta_{J} - h_{i} ) } \biggr | = 0.27 \left(\frac{ \Delta m^2_{J i} }{ 0.1 \mbox{eV}^2}\right)^{-
phisms from $S^\la$ to $S^{\mu'}$ is one-dimensional, spanned by the homomorphism $\sigma=\sum_{T\in{\calu}}{\hat\Theta_{T}}$. On the other hand, the space of homomorphisms from $S^\mu$ to $S^\la$ has dimension one or two, each homomorphism being a linear combination of the homomorphisms ${\hat\Theta_{A}}$ and ${\hat\Theta_{B}}$. So it suffices to compute the compositions $\sigma\circ{\hat\Theta_{A}}$ and $\sigma\circ{\hat\Theta_{B}}$. Let $D$ be the $\mu$-tableau $$D=\gyoung(;1;2;3_4{{\begin{tikzpicture}[baseline=0cm]\draw[thick,densely dotted](0,.13)--++(4*.25,0);\end{tikzpicture}}};u,;1;2;3_2{{\begin{tikzpicture}[baseline=0cm]\draw[thick,densely dotted](0,.13)--++(2*.25,0);\end{tikzpicture}}};v)$$ of type $\mu'$. Then we have the following. \[uab\] Suppose $T\in{\calu}$, and let $x$ be the entry in the $(2,2)$-position of $T$. Then $${\hat\Theta_{T}}\circ{\hat\Theta_{A}}={\hat\Theta_{D}}, \qquad {\hat\Theta_{T}}\circ{\hat\Theta_{B}}=\begin{cases} {\hat\Theta_{D}}&(x\ls v)\\ 0&(x>v). \end{cases}$$ Furthermore, ${\hat\Theta_{D}}\neq0$. The fact that ${\hat\Theta_{D}}\neq0$ is a simple application of Lemma \[lemma7\]. To show that the compositions of homomorphisms are as claimed, take $T\in{\calu}$ and recall the notation of Proposition \[tabcomp\], with $S$ equal to either $A$ or $B$. Suppose $X\in\calx$. Since each $T^i$ is a proper set, each $X^{ij}$ must be as well. This means that if some integer $i$ appears in two sets $X^{kj},X^{lj}$, then the multinomial coefficient $\mbinom{X^{1j}_j+X^{2j}_j+X^{3j}_j+\dots}{X^{1j}_j,X^{2j}_j,X^{3j}_j,\dots}$ from Proposition \[tabcomp\] will include a factor $\binom21$, which gives $0$. So in order to get a non-zero coefficient in Proposition \[tabcomp\], we must have $X^{1j},X^{2j},X^{3j},\dots$ pairwise disjoint for each $j$, which means that we will have $$X^{11}\sqcup X^{21}\sqcup\dots=\{1,\dots,u\},\qquad X^{12}\sqcup X^{22}=\{1,\dots,v\};\tag*{($\dagger$)}$$ so $U_X$ will equal $D$. 
If $S=A$, the only way to achieve this is to have $$X^{11}=T^1\setminus\{1,\d
ma_{\textrm{d}} > \sigma_{i}$, the colloid-droplet adsorption energy is [@Ingmar2011] $$\Phi_{i\textrm{d}}(r)= \left \{ \begin{array}{ll} - \gamma_{i} \pi \sigma_{\textrm{d}} h & \dfrac{\sigma_{\textrm{d}}-\sigma_{i}}{2}<r< \dfrac{\sigma_{\textrm{d}}+\sigma_{i} }{2}\\ 0 & \textrm{otherwise}, \end{array} \right . \label{eqn:phicd1}$$ and when $\sigma_{\textrm{d}} < \sigma_{i}$, $$\Phi_{i\textrm{d}}(r)= \left \{ \begin{array}{ll} - \gamma_{i} \pi \sigma_{\textrm{d}}^{2} & r< \dfrac{\sigma_{i}-\sigma_{\textrm{d}}}{2} \\ - \gamma_{i} \pi \sigma_{\textrm{d}} h & \dfrac{\sigma_{i}-\sigma_{\textrm{d}}}{2}<r< \dfrac{\sigma_{i}+\sigma_{\textrm{d}} }{2}\\ 0 & \textrm{otherwise,} \end{array} \right . \label{eqn:phicd2}$$ where $i=1,2$ labels the two colloidal species in each dumbbell, ${h=(\sigma_{i}/2-\sigma_{\textrm{d}}/2+r)(\sigma_{i}/2+\sigma_{\textrm{d}}/2-r)/(2r)}$ is the height of the spherical cap that results from the colloid-droplet intersection [@Ingmar2011], and the parameter $\gamma_{i}$ is the droplet-solvent interfacial tension used to control the strength of the colloid-droplet interaction. \[See Fig. \[fig:pot\](b) for an illustration of the colloid-droplet pair potential.\] We introduce the energy ratio $k$ defined by $$k=\frac{\gamma _{2}}{\gamma _{1}}, \label{eqn:k}$$ which characterizes the dissimilarity of the surface properties of the two colloidal species. We define a bond between two colloidal spheres of type $i$ and $j$ when their distance is smaller than or equal to ${(\sigma_{i}+\sigma_{j})/2+\Delta}$, with $i,j=1,2$. A cluster is a group of colloidal particles connected with each other by a sequence of bonds. Hence, each cluster is characterized by both the number of bonds $n_b$ and the number of colloidal particles $n_c$ belonging to this cluster. A single dumbbell can be considered as a trivial cluster structure with $n_b=1, n_c=2$. These trivial clusters will be neglected in the following analysis. 
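The bond and cluster bookkeeping just described can be sketched with a minimal union-find pass (a hypothetical implementation for illustration only; the 2D positions and helper names are not from the source):

```python
import math

def find(parent, i):
    # Path-compressing find for the union-find structure.
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def clusters(positions, sigma, delta):
    """Return (n_c, n_b) for each cluster, where colloids i and j are
    bonded when their distance is at most (sigma[i] + sigma[j])/2 + delta."""
    n = len(positions)
    parent = list(range(n))
    bonds = []
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(positions[i], positions[j])
            if d <= (sigma[i] + sigma[j]) / 2 + delta:
                bonds.append((i, j))
                parent[find(parent, i)] = find(parent, j)
    members, nbonds = {}, {}
    for i in range(n):
        r = find(parent, i)
        members[r] = members.get(r, 0) + 1
    for i, j in bonds:
        r = find(parent, i)
        nbonds[r] = nbonds.get(r, 0) + 1
    return sorted((members[r], nbonds.get(r, 0)) for r in members)
```

A single dumbbell then appears as the trivial cluster $(n_c, n_b) = (2, 1)$ and can be filtered out of the analysis.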
We carry out Metropolis MC simulations in the *NVT* ensemble. F
es of the quantum numbers, a super-inflationary phase takes place in this semi-classical region. Our goal is to describe the production of gravitational waves during the super-inflation. This problem was preliminarily analysed in Ref. [@Mielczarek:2007zy], but quantum corrections to the equation for the tensor modes were not included in the calculation of the graviton spectrum. In this paper we include the so-called inverse volume corrections in the evolution equation for the tensor modes and then calculate the spectrum of the produced gravitational waves. The equations for the tensor modes were recently derived by Bojowald and Hossain [@Bojowald:2007cd], who analysed both the inverse volume corrections and the corrections from holonomies. In this paper we concentrate on the former. The quantum corrections are in general complicated functions, but they have simple asymptotic behaviours. To calculate the production of gravitons during some process we only need to know the initial and final states, where the asymptotic solutions are a good approximation. In these regimes the calculations can be done analytically; we use numerical solutions to match them. The organization of the text is as follows. In section II we fix the semi-classical dynamics. In section III we consider the creation of gravitons on the defined background. In section IV we summarize the results.

Background dynamics
===================

The formulation of Loop Quantum Gravity is based on the Ashtekar variables [@Ashtekar:1987gu] and holonomies. The Ashtekar variables replace the spatial metric field $q_{ab}$ in the canonical formulation as follows: $$\begin{aligned} A^i_a &=& \Gamma^i_a+\gamma K_a^i , \\ E^a_i &=& \sqrt{|\det q|} e^{a}_i\end{aligned}$$ where $\Gamma^i_a$ is the spin connection, defined as $$\Gamma^i_a = -\epsilon^{ijk}e^b_j(\partial_{[a}e^k_{b]}+\frac{1}{2}e^c_k e^l_a \partial_{[c}e^l_{b]} )$$ and $K_a^i$ is the extrinsic curvature. 
Here $e^{a}_i$ is the inverse of the co-triad $e^i_a$, defined by $q_{ab}=e_a^ie_b^j\delta_{ij}$. In terms of the Asht
302 180.9 449.5 3.24×10^-4^/5.25×10^-7^ *p*=0.227
3 107 250 159.8 416.6 1.17×10^-2^/2.27×10^-5^ *p*=0.0295^\*\*^
1 267 506 345.7 698.5 0.200/0.702
Women 2 72 331 225.1 512.4 4.83×10^-4^/6.22×10^-9^ *p*=0.000140^\*\*\*^
3 107 436 275.1 596.2 4.6×10^-5^/9.96×10^-5^ *p*=0.0186^\*^

a\) K--S: Kolmogorov-Smirnov test of normality; b) S--W: Shapiro-Wilk test of normality; p-values are rounded to three significant digits

Similarly to the hippuric acid, the levels of *o*-cresol showed a log-normal distribution of values (both the Kolmogorov-Smirnov and the Shapiro-Wilk tests of normality had a *p*-value \<0.01). The medians of the values in the categories did not differ and were close to zero µg/g of creatinine. We detected differences in the inter-quartile ranges (75th percentile minus 25th percentile) of the values among the categories. The non-parametric Wilcoxon test indicated a statistically significant difference in *o*-cresol in Roma men (Category 1) compared to men in Category 3 (*p*=0.014; [Table 2](#t0002){ref-type="table"}).

###### Urinary o-cresol level of the study respondents.

percentile \[µg/g creatinine\]

1 155 0.000 0.331 6.25×10^-23^/5.58×10^-14^
Men 2 62 0.000 0.315 4.16×10^-27^/6.38×10^-16^ *p*=0.471^ns^
3 107 0.000 0.000 8.05×10^-35^/4.07×10^-15^ *p*=0.0138^\*\*^
1 267 0.000 0.316 1.28×10^-20^/2.20×10^-12^
Women 2 72 0.000 0.000 2.76×10
s is a larger literature than can be addressed completely here; it includes early work on model selection [@hurvich1990impact] and model averaging interpretations [@hjort2003frequentist]; the impossibility results of [@leeb2008can] and [@buja2015models] on random $X$ and model misspecification; methods based on resampling or sample splitting [@CL:11; @CL:13; @Efron:14; @wasserman2009high; @meinshausen2009pvalues]; stability selection [@meinshausen2010stability; @shah2013variable]; the conformal inference approach of [@lei2016distribution]; the goodness-of-fit tests of [@shah2018goodness]; moment-constraint-based uniform confidence sets [@andrews2009hybrid]; [@meinshausen2015group] on inference about groups of variables under general designs; [@belloni2011inference] in the instrumental variable setting; [@belloni2015uniform] on post-selection inference for $Z$-estimators; and the knockoffs approach of [@barber2015controlling] and later [@candes2016panning]. Although they are not directed at linear models, [@wager2014confidence] and [@JMLR:v17:14-168] address similar problems for random forests.

  Method             Parameter      Assumptions   Accuracy                                Computation   Robust
  ------------------ -------------- ------------- --------------------------------------- ------------- --------
  Debiasing          True $\beta$   Very Strong   $1/\sqrt{n}$                            Easy          No
  Conditional        Projection     Strong        Not known                               Easy          No
  Uniform            Projection     Strong        $\sqrt{k/n}$                            NP hard       Yes
  Sample Splitting   Projection     Weak          $\sqrt{k^{5/2}\log k\sqrt{\log n}/n}$   Easy          Yes
  Sample Splitting   LOCO           None          $\sqrt{\log (kn)/n}$                    Easy          Yes

  : *Different inferential methods. ‘accuracy’ refers to the size of sides of the confidence set. ‘robust’ refers to robustness to model assumptions. The term ‘Very Strong
nerated by assigning to each $p_{ij}$ ($i\neq j$) a random number between $0$ and $ \frac{1}{m}$, and setting $p_{ii}= 1-\sum_{j\neq i} p_{ij}$. The mean KSE is plotted versus the mixing time (Fig. \[fig:KS1\]) by working out $h_{KS}$ and $t(\epsilon)$ for each random matrix. Fig. \[fig:KS1\] shows that the KSE is on average a decreasing function of the mixing time. ![Averaged KSE versus mixing time (top) for $10^6$ random $m=10$ size matrices, and averaged $\lambda(P)$ versus mixing time (bottom) for $10^6$ random $m=10$ size matrices in blue, with $f(t)=\epsilon^{1/t}$ in red. $\epsilon=10^{-3}$ and the norm is chosen to be the Euclidean one.[]{data-label="fig:KS1"}](KSfuncmixtime1m10gmax1064.jpg){width="10cm"} We stress that this relation is only true on average: one can indeed find two particular Markov chains $P_1$ and $P_2$ such that $h_{KS}(P_1) \leq h_{KS}(P_2)$ and $t_1(\epsilon) \leq t_2(\epsilon)$. We illustrate this point further. The link between the mixing time and the KSE can be understood via their dependence on the transition matrix eigenvalues. A general irreducible transition matrix $P$ is not necessarily diagonalizable over $\mathbb{R}$. However, since $P$ is chosen randomly, it is almost everywhere diagonalizable over $\mathbb{C}$. According to the Perron-Frobenius theorem, the largest eigenvalue is $1$ and the associated eigenspace is one-dimensional, equal to the vector space generated by $\mu_\text{stat}$. 
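The random matrices just described, and the quantity $\lambda(P)$, can be generated and computed with a short sketch (assuming NumPy; the construction follows the text, with off-diagonal entries uniform in $[0, 1/m]$ and the diagonal fixed by row-stochasticity):

```python
import numpy as np

def random_transition_matrix(m, rng):
    """Random row-stochastic matrix: p_ij ~ Uniform(0, 1/m) for i != j and
    p_ii = 1 - sum_{j != i} p_ij, which is positive because the
    off-diagonal row sum stays below (m-1)/m < 1."""
    P = rng.uniform(0.0, 1.0 / m, size=(m, m))
    np.fill_diagonal(P, 0.0)
    np.fill_diagonal(P, 1.0 - P.sum(axis=1))
    return P

def second_modulus(P):
    """lambda(P) = |lambda_2|, the second-largest eigenvalue modulus,
    which governs the convergence speed toward mu_stat."""
    mods = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
    return mods[1]

rng = np.random.default_rng(0)
P = random_transition_matrix(10, rng)
```

Since all entries of such a matrix are strictly positive, the chain is irreducible and aperiodic, so $\lambda(P) < 1$ while the Perron eigenvalue is exactly $1$.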
Without loss of generality, we can label the eigenvalues in decreasing order of their modulus: $$1=\lambda_1 > \lvert \lambda_2 \rvert \geq \dots \geq \lvert \lambda_m \rvert \geq 0.$$ The convergence speed toward $\mu_\text{stat}$ is given by the second-largest eigenvalue modulus of $P$ [@boyd2004fastest], [@pierre1999markov]: $$\lambda(P)=\max_{i=2,\dots,m}{ |\lambda_i|}= \lvert \lambda_2 \rvert.$$ The eigenvalues $\lambda_1=1,\dots,\lambda_m$ of $P$ and $P^t$ being equal, let us denote their associated eigenvectors by $\mu_1=\mu_\text{stat},\dots,\mu_m$. For any initial probability
\rm out\to inn}) \sigma^m_{k'}({\rm out}){\cal V}_{N_R+1},\end{aligned}$$ $$\begin{aligned} \label{eq:Theta_out} \Theta^m_k({\rm out}) &= \sum_{i'=1}^{N_R} {\cal G}^m_{k,i'}({\rm top\to out}) \sigma^m_{i'}({\rm top}){\cal V}_{i'} + \sum_{i'=1}^{N_R} {\cal G}^m_{k,i'}({\rm bot\to out}) \sigma^m_{i'}({\rm bot}){\cal V}_{i'}\nonumber\\ &+ \sum_{k'=1}^{N_z} {\cal G}^m_{k-k'}({\rm inn\to out}) \sigma^m_{k'}({\rm inn}){\cal V}_{0} + \sum_{k'=1}^{N_z} {\cal G}^m_{k-k'}({\rm out\to out}) \sigma^m_{k'}({\rm out}){\cal V}_{N_R+1},\end{aligned}$$ where we use symbolic notations such that ${\cal G}^m_{i,k'}({\rm inn\to top}) = {\cal G}^m_{i,0,N_z+1-k'}$, $\sigma^m_{i'}({\rm top}) = \sigma^m_{i',N_z+1}$, etc. Finally, we apply an inverse Fourier transform to $\Theta^m_i({\rm top}), \Theta^m_i({\rm bot}), \Theta^m_k({\rm inn})$, and $\Theta^m_k({\rm out})$ to obtain the boundary potential $\Theta_{i,j,k}^{\rm B}$ due to the surface charges. Then, the desired boundary potential $\Phi^{\rm B}$ due to the original charge $\rho$ is given by $$\label{eq:boundary_condition} \Phi^{\rm B}_{i,j,k} = \Psi^{\rm B}_{i,j,k} -\Theta_{i,j,k}^{\rm B} = -\Theta_{i,j,k}^{\rm B},$$ which gives the required Dirichlet boundary condition for the interior solver. Note that the boundary potential calculations explained above involve FFTs on 2D arrays (e.g., $\sigma^m_{i'}({\rm top})$) together with the summations of the Green’s function amounting to ${\cal O}(N^3)$ operations. Therefore, the overall computational cost of the boundary potential calculation is of order ${\cal O}(N^3 + N^2\log N)$, similarly to the case with a Cartesian grid. We note that the above formulation for the boundary potential is valid for a mass distribution under $P$-fold symmetry in $\phi$. 
In Appendix \[s:P-fold\_symm\], we directly demonstrate that this is really the case as long as ${\cal G}_{i,i',j-j',k-k'}$ in Equation properly accounts for the contributions to the boundary potential from all periodic images of the mass density. TEST RES
in $\cup_{j=1}^{k} D_{t_j}$ is allocated at least $c$ balls (that is, the tree $T$ is $c$-loaded) is at most $$\binom{m}{cy} \left(\frac{\alpha y}{n}\right)^{cy}{\leqslant}\left(\frac{{\mathrm{e}}m}{cy}\right)^{cy} \left(\frac{\alpha y}{n}\right)^{cy}{\leqslant}\left(\frac{{\mathrm{e}}\alpha}{c}\right)^{cy}{\leqslant}{\mathrm{e}}^{-c(d-1)(k-r-1)},$$ where the last inequality follows from $m{\leqslant}n$ and [the fact that]{} $c>\alpha {\mathrm{e}}^2$. Since balls are independent from each other, we can multiply the above inequality by (\[ineq:final\]) to show that [the probability that ${\mathcal{C}}_m$ contains a $c$-loaded $k$-vertex tree with $r$ red vertices is at most]{} $$\label{eq:prob-bound} \exp\Big\{4k\log(2\beta d)- c(d-1)(k-r-1) + \big(c_0 + 3 - r\varepsilon/2\big) \log n\Big\},$$ proving the first statement of the lemma. [ Finally, suppose that $r\varepsilon\to\infty$ as $n\to \infty$. Then the upper bound in (\[eq:prob-bound\]) can be written as $$\begin{aligned} & \exp\Big\{ \big( 4\log(2\beta d) - c(d-1) \big) k + {\mathcal{O}}(\log n) + o(r\cdot \log n) - (r\varepsilon/2)\log n \Big\}\\ &{\leqslant}\exp\Big\{ {\mathcal{O}}(\log n) + o(r\cdot \log n) - (r\varepsilon/2)\log n\Big\}. \end{aligned}$$ Since $r\varepsilon\to\infty$, this term dominates and the probability that ${\mathcal{C}}_m$ contains a blue-red coloured tree with $r$ red vertices tends to zero. Therefore, if such a tree is present in ${\mathcal{C}}_m$ then $r={\mathcal{O}}(1/\varepsilon)$ with high probability. This completes the proof. ]{} Missing Part of Proof of Theorem \[thm:d-choice\] {#app:missmain} ================================================= In order to prove the second statement of Theorem \[thm:d-choice\] we show the sub-additivity of the balanced allocation algorithm. 
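The balanced allocation algorithm whose sub-additivity is established here can be sketched as a short simulation (hypothetical parameter values; each ball samples $d$ bins independently and uniformly at random, with replacement, and goes to the least loaded of them):

```python
import random

def max_load(n_bins, n_balls, d, seed=0):
    """Allocate n_balls sequentially: each ball samples d bins uniformly
    at random and is placed in the least loaded of them; return the
    maximum load. d = 1 recovers the classical one-choice scheme."""
    rng = random.Random(seed)
    load = [0] * n_bins
    for _ in range(n_balls):
        choice = min((rng.randrange(n_bins) for _ in range(d)),
                     key=lambda b: load[b])
        load[choice] += 1
    return max(load)
```

For $m = n$ balls, the one-choice maximum load is of order $\log n/\log\log n$, while two choices already bring it down to order $\log\log n$, so the simulated $d=2$ maximum should come out noticeably below the $d=1$ one.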
We want to prove that for every constant integer $\gamma {\geqslant}1$ [with $\gamma m {\leqslant}n$]{}, after allocating $\gamma m$ balls, the maximum load is at most $\gamma(\log_d\log n+{\mathcal{O}}(1/\varepsi
e{z}_i&1+\pi \tilde{w}_i \end{pmatrix}$$ such that $\tilde{s}_i=\mathrm{id}$ mod $\pi \otimes 1$. Then $$\begin{gathered} \label{ea23} \sigma({}^t\tilde{m}_{i,i})h_i\tilde{m}_{i,i}=(-1)^{(i-1)/2} \begin{pmatrix}\sigma({}^t\tilde{s}_i)&\sigma( {}^t \tilde{y}_i)&\sigma(\pi\cdot {}^t \tilde{v}_i)\\ \sigma(\pi\cdot {}^t \tilde{r}_i)&1+\sigma(\pi \tilde{x}_i)&\sigma( \pi\cdot \tilde{z}_i)\\ \sigma( {}^t \tilde{t}_i)&\sigma( {}^t \tilde{u}_i)&1+\sigma(\pi \tilde{w}_i) \end{pmatrix}\\ \begin{pmatrix} a_i&0&0\\ 0 &\pi^3\bar{\gamma}_i&1 \\ 0 &-1 &\pi \end{pmatrix} \begin{pmatrix} \tilde{s}_i&\pi \tilde{r}_i& \tilde{t}_i\\ \tilde{y}_i&1+\pi \tilde{x}_i&\tilde{u}_i \\ \pi \tilde{v}_i&\pi \tilde{z}_i&1+\pi \tilde{w}_i \end{pmatrix}.\end{gathered}$$ Then the $(1,2)$-block of $\sigma({}^t\tilde{m}_{i,i})h_i\tilde{m}_{i,i}$ is $(-1)^{(i-1)/2}\pi\left(a_i\tilde{r}_i+\sigma({}^t\tilde{v}_i)+\sigma({}^t\tilde{y}_i)\tilde{z}_i+\pi(\ast)\right)$, the $(1, 3)$-block is $(-1)^{(i-1)/2}\left(a_i\tilde{t}_i+\sigma({}^t\tilde{y}_i)+\pi(\ast\ast)\right)$, the $(2, 3)$-block is $(-1)^{(i-1)/2}(1+$ $\pi(-\sigma({}^t\tilde{r}_i)a_i\tilde{t}_i-\sigma(\tilde{x}_i)+ \sigma(\tilde{z}_i)\tilde{u}_i+\pi^2(\ast\ast\ast)))$, and the $(2,2)$-block is $(-1)^{(i-1)/2}\left(\pi^3\bar{\gamma}_i+\pi^3\left(\tilde{z}_i+\tilde{z}_i^2+\pi^2(\ast\ast\ast\ast)\right)\right)$ for certain polynomials $(\ast), (\ast\ast), (\ast\ast\ast), (\ast\ast\ast\ast)$. Therefore, by observing the $(1, 2), (1,3), (2,3), (2,2)$-blocks of Equation (\[ea1\]) again, we have $$\left \{ \begin{array}{l} \mathcal{X}_{i,1,2}(m)=\bar{a}_ir_i+{}^tv_i+{}^ty_iz_i+\mathcal{P}^i_{1, 2};\\ \mathcal{X}_{i,1,3}(m)=\bar{a}_it_i+{}^ty_i;\\ \mathcal{X}_{i,2,3}(m)={}^tr_i\bar{a}_it_i+x_i+z_iu_i+\mathcal{P}^i_{2, 3}.\\ \mathcal{X}_{i,2,2}(m)=\bar{\gamma}_i+z_i+z_i^2+1/2\left({}^tm_{i-1, i}'h_{i-1}m_{i-1, i}'+{}^tm_{i+1, i}'h_{i+1}m_{i+1, i}'\right)+\notag\\ \textit{ } ~~~~~~~~~~ \left(\delta_{i-2}'(m_{i-2, i}^{\#})^2+\
\to \beta \in R$, implying that the same step can be performed in $G$ as $\gamma_1 \alpha_1 \underline{\alpha} \alpha_2 \gamma_2 {\Rightarrow}_{G,1} \gamma_1 \alpha_1 \underline{\beta} \alpha_2 \gamma_2.$ Thus $L(G')\subseteq L(G)$ holds as well. Moreover, any derivation step in $G$, $\gamma_1 \alpha_1 \underline{\alpha} \alpha_2 \gamma_2 {\Rightarrow}_{G,1} \gamma_1 \alpha_1 \underline{\beta} \alpha_2 \gamma_2$, $\alpha_1\alpha\alpha_2$ being a maximal nonterminal block, can be performed in $G'$ replacing the maximal nonterminal block $\alpha_1\alpha\alpha_2$ by $\alpha_1\beta\alpha_2$. In the second step we construct a context-free matrix grammar $H$ which simulates exactly those derivations in $G'$ that replace a maximal nonterminal block in each step. We introduce two alphabets $$\begin{aligned} [V]&=&\{[\alpha] {:}\alpha \in V^+, |\alpha|_A\leq 1, \mbox{ for all } A \in V\}\mbox{ and } \overline{V}=\{\overline{A} {:}A \in V\}. \end{aligned}$$ The symbols of $[V]$ are used to encode each maximal nonterminal block as single symbols, while $\overline{V}$ is a disjoint copy of $V$. Any word $$\alpha=x_1 \beta_1 x_2 \beta_2 \cdots x_n \beta_n x_{n+1}, x_1,x_{n+1} \in \Sigma^*, x_2,\ldots,x_n \in \Sigma^+, \beta_1,\ldots \beta_n\in V^+$$ such that $|\alpha|_A\leq 1$, for all $A \in V$, can be represented by the word $[\alpha]=x_1 [\beta_1] x_2 [\beta_2] \cdots x_n [\beta_n] x_{n+1}$, where the maximal nonterminal blocks in $\alpha$ are replaced by the corresponding symbols from $[V]$. The desired matrix grammar is obtained as $H=(V_H,\Sigma,S',M)$, with $V_H=[V]\cup V \cup \overline{V} \cup \{S'\}$ and the set of matrices defined as follows. 
For any rule $r=\alpha\to \beta$ in $R'$, $M$ contains the matrix $m_r$ consisting of the rules - $[\alpha] \to [\beta]$ (note that $\alpha \in [V]$, but $\beta\in ([V]\cup\Sigma)^*$), - $A \to \overline{A}$, for all $A \in V$ such that $|\alpha|_A=1$ and $|\beta|_A=0$, - $\overline{A}\to A$, for all $A \in V$ such that $|\alpha|_A=0$ and $|\beta|_A
dealloc {
        [super dealloc];
    }
    @end

Calling code:

    {
        self.userInteractionEnabled = NO;
        mathKeyboardAccess = [[MathKeyboardKey alloc] initWithImage:[UIImage imageNamed:@"mathKey.png"]];
        CGRect frm = mathKeyboardAccess.frame;
        frm.origin = CGPointMake(80, 171);
        mathKeyboardAccess.frame = frm;
        mathKeyboardAccess.userInteractionEnabled = YES;
        [self addSubview:mathKeyboardAccess];
    }

This view is being added as a subview of another (the MathKeyboard). The superview cannot have user interaction enabled, as it is covering another view (the system keyboard). When I attempt to add MathKeyboardKey to the system keyboard view, it doesn't show up. If I try to use a UIButton, it never shows up, no matter which view I place it in. The question, in case it isn't apparent, is: how do I detect touches in this UIImageView subclass?

A:

    self.userInteractionEnabled = NO;

Since you have disabled user interaction on the parent view, how could the child receive any interactions? Remove this line.

Q: Swing layout: vertical flow

What LayoutManager should I use to achieve a transposed version of FlowLayout? Essentially, I want a vertical list which occupies multiple columns if it can't fit all of its components within one column.

    +------------------------+
    | item 1                 |
    | item 2                 |
    | item 3                 |
    | item 4                 |
    | item 5                 |
    | item 6                 |
    | item 7                 |
    | item 8                 |
    +------------------------+

or

    +------------------------+
    | item 1          item 7 |
    | item 2          item 8 |
    | item 3                 |
    | item 4                 |
    | item 5                 |
    | item 6                 |
    +------------------------+

This wrapping logic needs to happen dynamically, i.e. as the container is resized.

A: Very easy, you just need this:

    yourPanel.setLayout(new BoxLayout(yourPanel, BoxLayout.Y_AXIS));

After this, you just add components to yourPanel and you will get a vertical layout.

A: I'm working on a solution t
chi_2}}} \end{bmatrix} + o(1).$$ From here the argument is completed as in the proof of Theorem \[T:circular-law-correlated\]. Suppose now that $p > 0$. Given $\chi_1 \in \widehat{G}$, by Lemma \[T:extensions\] there are exactly $\frac{1}{p_2}$ values of $\chi_2 \in \widehat{G}$ with $\chi_1\vert_A = \chi_2\vert_A$. Therefore, $$\operatorname{Cov}\bigl((\lambda_{\chi_1}, \lambda_{\chi_2})\bigr) = \bigl(1 + p_2 (\beta - \alpha - 1)\bigr) I_2 + \alpha \begin{bmatrix} {\mathbbm{1}_{\chi_1 = \overline{\chi_1}}} & 0 \\ 0 & {\mathbbm{1}_{\chi_2 = \overline{\chi_2}}} \end{bmatrix}$$ for all but a negligible fraction of pairs $\chi_1, \chi_2 \in \widehat{G}$. The argument is again completed as in the proof of Theorem \[T:circular-law-correlated\]. The proof is analogous to that of Theorem \[T:semicircle-law-general\], setting $\alpha = 0$ and $\beta = 1$. In that case $\operatorname{Cov}(\lambda_{\chi_1}, \lambda_{\chi_2}) = {\mathbbm{1}_{\chi_1 = \chi_2}}$, so it is unnecessary to assume that $p_2^{(n)}$ approaches a limit. 1. The Poincaré inequality assumption and independence imply an exponential concentration property for the family of eigenvalues $\bigl\{\lambda_\chi \mid \chi \in \widehat{G}^{(n)}\bigr\}$. In particular, combining Corollaries 5.7 and 3.2 of [@Ledoux], it follows that for each $L$-Lipschitz $F: \ell^2(G^{(n)}) \to {\mathbb{R}}$, $${\mathbb{P}}\left[{\left\vert F\bigl(Y^{(n)}\bigr) - {\mathbb{E}}F\bigl(Y^{(n)}\bigr) \right\vert} \ge t \right] \le 2 e^{-c t /\sqrt{K} L}$$ for each $t > 0$, where $c > 0$ is some absolute constant and $Y^{(n)}$ is shorthand for $\bigl(Y_a^{(n)}\bigr)_{a\in G^{(n)}}$. Now for a $1$-Lipschitz $f:{\mathbb{C}}\to {\mathbb{R}}$ and $k \in {\mathbb{N}}$, $${\left\vert \frac{1}{k} \sum_{j=1}^k f(w_j) - \frac{1}{k} \sum_{j=1}^k f(z_j) \right\vert} \le \frac{1}{k} \sum_{j=1}^k {\left\vert w_j - z_j \right\vert} \le \sqrt{\frac{1}{k} \sum_{j=1}^k {\left\vert w_j - z_j \right\vert}^2}$$ by the Cauchy–Schwarz inequality. Combining th
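The final Cauchy–Schwarz step above is easy to sanity-check numerically. The following is a minimal sketch (Python/NumPy, added for illustration only; the 1-Lipschitz function $f(z)=|z|$ and the random points are our own choices, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
k = 1000
w = rng.normal(size=k) + 1j * rng.normal(size=k)
z = rng.normal(size=k) + 1j * rng.normal(size=k)

f = np.abs  # f(z) = |z| is 1-Lipschitz on C

lhs = abs(f(w).mean() - f(z).mean())        # |(1/k) sum f(w_j) - (1/k) sum f(z_j)|
mid = np.abs(w - z).mean()                  # bound via 1-Lipschitzness and the triangle inequality
rhs = np.sqrt((np.abs(w - z) ** 2).mean())  # Cauchy-Schwarz bound from the text
assert lhs <= mid <= rhs
```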
, in which $k_{\parallel}\gg 4m^2$. ![\[fig:mb1\] Photon magnetic moment behavior with regard to external magnetic field strength for the second mode.](prl2.eps){width="3in"} We conclude, thus, that for photons in a strong magnetic field a nonzero magnetic moment arises, which is paramagnetic and has a maximum near the first threshold of pair creation. These results may have several interesting consequences. For instance, if we consider a photon beam of density $n_\gamma$, it carries a magnetization ${\cal M}=n_\gamma \mu_{\gamma}^{(2)}$ which contributes to increasing the field $B$ to $B^{\prime}= B + 4\pi{\cal M}$. Through this mechanism, the radiation field might contribute to the increase of the external field. Both authors are indebted to A.E. Shabad for several comments and important remarks on the subject of this paper. H.P.R. thanks G. Altarelli, J. Ellis and P. Sikivie for comments, and CERN, where part of this paper was written, for hospitality. --- bibliography: - 'cite.bib' --- --- abstract: 'We study a natural notion of decoherence on quantum random walks over the hypercube. We prove that this model possesses a decoherence threshold beneath which the essential properties of the hypercubic quantum walk, such as linear mixing times, are preserved. Beyond the threshold, we prove that the walks behave like their classical counterparts.' author: - Gorjan Alagic$^1$ - Alexander Russell$^2$ title: Decoherence in quantum walks on the hypercube --- Introduction ============ The notion of a *quantum random walk* has emerged as an important element in the development of efficient quantum algorithms. In particular, it makes a dramatic appearance in the most efficient known algorithm for element distinctness [@A03]. 
The technique has also provi
3rd, 4th and 5th order polynomial terms respectively with each polynomial having evenly spaced roots between $t=0$ and $t=t_f$. As with the simple parametrization $t_f$ specifies the end of the ramps in time. In each of the three ramps being optimized, the parameters $y_{i}$, $y_{f}$, $A_{1}$, $A_{2}$, $A_{3}$ are independent. However, the final time $t_f$ is common. --- abstract: 'In this paper, we study the class of Jordan dialgebras (also called quasi-Jordan algebras). We develop an approach for reducing problems on dialgebras to the case of ordinary algebras. It is shown that straightforward generalizations of the classical Cohn’s, Shirshov’s, and Macdonald’s Theorems do not hold for dialgebras. However, we prove dialgebraic analogues of these statements. Also, we study multilinear special identities which hold in all special Jordan algebras and do not hold in all Jordan algebras. We find a natural correspondence between special identities for ordinary algebras and dialgebras.' author: - 'Vasily Voronin[^1]' title: Special and exceptional Jordan dialgebras --- INTRODUCTION {#introduction .unnumbered} ============ One of the most important classes of nonassociative algebras is the class of Lie algebras defined by the anti-commutativity and Jacobi identities $x^2=0$, $(xy)z+(zx)y+(yz)x=0$. It is well-known that every associative algebra $A$ turns into a Lie algebra with respect to the new product $[a,b]=ab-ba$, $a,b\in A$. The Lie algebra obtained is denoted by $A^{(-)}$. The classical Poincaré—Birkhoff—Witt Theorem implies that every Lie algebra can be embedded into $A^{(-)}$ for an appropriate associative algebra $A$. Leibniz algebras introduced in [@Loday:93; @Cuvier:94] are the most popular non-commutative analogues of Lie algebras. 
An algebra $(L, [\cdot,\cdot])$ is said to be a (right) Leibniz algebra if the product $[\cdot,\cdot]\colon L\times L\to L$ satisfies the following (right) Leibniz identity: $$\label{eq:IdOfLeibnizAlgebras} [[x,y],z]=[[x,z],y]+[x,[y,z]].$$ To get an analogue
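The embedding of Lie algebras into $A^{(-)}$ rests on the fact that the commutator $[a,b]=ab-ba$ on an associative algebra satisfies the (right) Leibniz identity above. A quick numerical check of this identity on random matrices (Python/NumPy, added purely for illustration; the dimension and seed are our own choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def bracket(a, b):
    # commutator [a, b] = ab - ba on the associative algebra of matrices
    return a @ b - b @ a

x, y, z = (rng.normal(size=(4, 4)) for _ in range(3))

# right Leibniz identity: [[x, y], z] = [[x, z], y] + [x, [y, z]]
lhs = bracket(bracket(x, y), z)
rhs = bracket(bracket(x, z), y) + bracket(x, bracket(y, z))
assert np.allclose(lhs, rhs)
```

For the commutator bracket this is just a rearrangement of the Jacobi identity, so the check passes for any associative product.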
we altered the final step of the data generating mechanism in Equation [(8)](#sim7930-disp-0008){ref-type="disp-formula"}, so that the final outcome was calculated by $$\begin{matrix} Y_{\mathit{Fij}} & {= \beta_{i} + \left( {\theta + u_{2i}} \right)\textit{treat}_{\mathit{ij}} + e_{\mathit{ij}}} \\ & {\beta_{i} \sim \left( {\textit{Beta}\left( {15,3} \right)} \right) \times 220} \\ & {\mspace{45mu} u_{2i} \sim N\left( {0,\tau^{2}} \right)} \\ & {\mspace{45mu} e_{\mathit{ij}} \sim N\left( {0,\sigma^{2}} \right).} \\ \end{matrix}$$ Therefore, the intercept term β ~i~ was now derived from a beta distribution with shape parameters of 15 and 3, which represent a negatively skewed distribution that was then scaled by 220 to give sensible values for systolic blood pressure (the outcome upon which the hypothetical data is based). An example density plot of this beta distribution for modeling the intercept term is shown in Web Figure A.1. Secondly, we also considered a data generating mechanism with a common (fixed) treatment effect (ie, τ ^2^ = 0). Here, the fitted stratified and random intercept models were also modified to have a common treatment effect. 3.2. Results {#sim7930-sec-0010} ------------ Simulation results are shown in Tables [2](#sim7930-tbl-0002){ref-type="table"} and [3](#sim7930-tbl-0003){ref-type="table"}, covering most of the scenarios under the normal and beta distribution intercept data generating mechanisms, across all options for specifying and estimating the intercept. These tables show the mean percentage bias of the summary treatment effect estimate $\left( \hat{\theta} \right)$ (Table [2](#sim7930-tbl-0002){ref-type="table"}) and the median percentage bias in its heterogeneity $\left( {\hat{\tau}}^{2} \right.$) (Table [3](#sim7930-tbl-0003){ref-type="table"}). Figure [2](#sim7930-fig-0002){ref-type="fig"} graphically depicts the percentage coverage of the summary treatment effect estimate $\left( \hat{\theta} \right)$. 
###### Mean percentage bias of the summary treatment eff
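The beta-distributed intercept mechanism described above can be sketched in a few lines (Python/NumPy, illustration only; the sample size and seed are our own choices, not from the simulation study):

```python
import numpy as np

rng = np.random.default_rng(7)

# intercepts beta_i ~ Beta(15, 3) * 220, as in the modified data generating mechanism
beta_i = rng.beta(15, 3, size=100_000) * 220

# the mean of Beta(15, 3) is 15/18, so the scaled mean is about 183.3
assert abs(beta_i.mean() - 220 * 15 / 18) < 1.0

# negative skew: the sample mean falls below the sample median
assert beta_i.mean() < np.median(beta_i)
```

Scaling by 220 places the intercepts in a plausible range for systolic blood pressure, consistent with the text.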
\[prop-ex\]) $$(L_-g)(y,\omega,E):=g(y-\tau_+(y,\omega)\omega,\omega,E),\quad g\in T^2_{\tau_-}(\Gamma_-).$$ From [@dautraylionsv6 p. 253] (or [@cessenat85]) it follows that for $g\in T^2_{\tau}(\Gamma)$, where $\tau|_{\Gamma_-}=\tau_-$, $\tau|_{\Gamma_+}=\tau_+$ and $\tau|_{\Gamma_0}=0$, there exists an element $\tilde\psi\in \tilde W^2(G\times S\times I)$ such that $$\gamma_-(\tilde\psi)=g_{|\Gamma_-}=:g_-\quad {\rm and}\quad \gamma_+(\tilde\psi)=g_{|\Gamma_+}=:g_+$$ if and only if $$(g-L_-(g_-))_{|\Gamma_+}\in L^2(\Gamma_+,\tau_+^{-1}(y,\omega)|\omega\cdot\nu(y)|d\sigma d\omega dE).$$ G. It can also be shown that $\gamma_-:\tilde W_{-,0}^2(G\times S\times I)\to L^2(\Gamma_-,\tau_{-}^{-1}(y,\omega)|\omega\cdot\nu| d\sigma dE)$ is bounded (note again that for bounded $G$ we have $\tau_-(x,\omega)\leq d$) and that it has a bounded right inverse $L_{-,0}: L^2(\Gamma_-,\tau_-^{-1}(y,\omega)|\omega\cdot\nu| d\sigma dE)\to \tilde W_{-,0}^2(G\times S\times I)$ ([@cessenat84] or [@dautraylionsv6 p.252]). A similar result holds for $\gamma_+$. We next consider the following special case of the trace theory for the exterior of a [*convex*]{} bounded domain $G$. Let $G_e$ be the complement (the exterior of $G$) $G_e:={\mathbb{R}}^3\setminus \ol G$. Then $\partial G_e=\partial G$. Denote (as above for $G$) $$\begin{aligned} \Gamma_{e,+}:={}&\{(y,\omega,E)\in \partial G_e\times S\times I=\partial G\times S\times I\ |\ \omega\cdot\nu_e(y)>0\}, \\[2mm] \Gamma_{e,-}:={}&\{(y,\omega,E)\in \partial G_e\times S\times I=\partial G\times S\times I\ |\ \omega\cdot\nu_e(y)<0\}, \\[2mm] \Gamma_e:={}&\Gamma_{e,+}\cup\Gamma_{e,-}\end{aligned}$$ where $\nu_e$ is the unit outward pointing normal vector on $\partial G_e$. We find that $\nu_e=-\nu$ and then $\Gamma_{e,\pm}=\Gamma_{\mp}$ and $\gamma_{e,\pm}(\psi):=\psi_{|\Gamma_{e,\pm}}=\gamma_{\mp}(\psi)$. 
Furthermore, let $t_e(x,\omega)$ be the escape time mapping for the domain $G_e$ and let for an element $(y,\omega,E)\in \Gamma_{e,-}$ (as above) $\tau_{e,-}(y,\omega)=\inf\{s>0\ |\ y+s\omega\not\in G_e \}.$ We ob
_i(r_p(\chi ^{-1}))^{-1} E ^-_{i,-c_{pi}},\\ {T}_p^-(F_p)=&E _p L _p^{-1},& {T}_p^-(F_i)=&(-1)^{c_{p i}} F ^-_{i,-c_{p i}}. \end{aligned}$$ \(ii) The maps ${T}_p$, ${T}_p^-$ satisfy ${T}_p {T}_p^-={T}_p^-{T}_p={\operatorname{id}}_{U(\chi )}$. \(iii) There exists a unique ${\underline{a}}\in ({{\Bbbk }^\times })^I$ such that ${T}_p {\Omega }={\Omega }{T}^-_p \varphi _{{\underline{a}}}$ in ${\mathrm{Hom}}(U(\chi ),U(r_p(\chi )))$. Note that ${T}_p{T}_p^-$ is an automorphism of $U(\chi )$ if one regards ${T}_p^-$ as a map from $U(\chi )$ to $U(r_p(\chi ))$ and ${T}_p$ as a map from $U(r_p(\chi ))$ to $U(r_p r_p(\chi ))=U(\chi )$. \[pr:LTdeg\] Let $\chi \in {\mathcal{X}}$ and $p\in I$. Assume that $\chi $ is $p$-finite. Then $${T}_p(U(\chi )_{\alpha })=U(r_p(\chi ))_{{\sigma }_p^\chi ({\alpha })} \quad \text{for all ${\alpha }\in {\mathbb{Z}}^I$.}$$ The maps ${T}_p:U(\chi )\to U(r_p(\chi ))$ and ${T}_p^-:U(r_p(\chi ))\to U(\chi )$ are mutually inverse algebra isomorphisms, and send generators of degree ${\alpha }$ into the homogeneous component of degree ${\sigma }_p({\alpha })$. \[le:TpU+U+\] Let $\chi \in {\mathcal{X}}$ and $p\in I$. Assume that $\chi $ is $p$-finite. Then $$\begin{aligned} {T}_p(U^+_{p,L}(\chi ))=&\,U^+_{p,K}(r_p(\chi )), & {T}_p(U^-_{p,K}(\chi ))=&\,U^-_{p,L}(r_p(\chi )),\\ {T}^-_p(U^+_{p,K}(\chi ))=&\,U^+_{p,L}(r_p(\chi )), & {T}^-_p(U^-_{p,L}(\chi ))=&\,U^-_{p,K}(r_p(\chi )). \end{aligned}$$ Since $\chi $ and $r_p(\chi )$ are $p$-finite, [@p-Heck07b Prop.5.10] and [@p-Heck07b Prop.6.7(d)] give that $${T}_p(U^+_{p,L}(\chi ))\subset U^+_{p,K}(r_p(\chi )),\qquad {T}_p^-(U^+_{p,K}(r_p(\chi )))\subset U^+_{p,L}(\chi ).$$ Thus ${T}_p(U^+_{p,L}(\chi ))=U^+_{p,K}(r_p(\chi ))$ by Thm. \[th:Liso\](ii). Similar arguments yield that ${T}^-_p(U^+_{p,K}(\chi ))=U^+_{p,L}(r_p(\chi ))$. The remaining two equations can be obtained from these and Thm. \[th:Liso\](iii). In the rest of the section assume that $\chi \in {\mathcal{X}}_3$. 
Let $n=|R_+^\chi |\in {\mathbb{
r this study. However, the scale needs to be validated using a larger representative sample. Tension-reduction motivations seemed to be the most important social-cognitive factor in young Sri Lankan males' drinking behavior. These findings have several implications for public health research and interventions. There is a need for a continued focus on individual tension-reduction reasons for drinking in adolescents and young adults in substance use prevention programs. Alcohol motives are likely to have been shaped by other indirect and distal forces such as availability and the media. For example, television scenes that feature alcohol while glamorizing its use or attaching strong symbolic meaning to it, such as rebellion against prejudice, may lead young people to initiate and continue drinking. Research that offers a better understanding of psycho-social and environmental factors associated with alcohol use behavior among the younger population in Sri Lanka is urgently needed.

###### Subscales and total scale of motivations towards alcohol use: Correlations, means and standard deviations.

| Motives | 1 | 2 | 3 | *M* | *SD* |
| --- | --- | --- | --- | --- | --- |
| 1. Personal Enjoyment | -- | -- | -- | 2.10 | 1.99 |
| 2. Tension Reduction | 0.43 | -- | -- | 4.26 | 4.59 |
| 3. Social Pressure | 0.28 | 0.45 | -- | 2.54 | 2.78 |
| Total Score | 0.64 | 0.90 | 0.72 | 8.91 | 7.43 |

###### Scale items and factor loading of the 3-factor model of motivations towards drinking.

| Item | Factor Loading[\*](#tfn1-ijerph-06-02408){ref-type="table-fn"} |
| --- | --- |

I use alcohol because,
nonzero elements is nonzero. This is a contradiction, and so Eq. \[eq-identity\] does not hold. Concluding remarks {#sec-conclude} ================== In this work we presented the first information-theoretic 2-server PIR scheme with sub-polynomial cost. It is unclear what the optimal communication cost of 2-server schemes is, and we conjecture that our protocol is far from optimal. One approach to decreasing the communication cost is to take $m$ to be a product of $r>2$ prime factors in Theorem \[Grolmusz\] to get a larger $S$-matching vector family, where $S={\{a\in \Z_m: a\mod p_i \in {\{0,1\}}\ \forall\ i \in[r]\}}\setminus {\{0\}}$ is of size $2^{r}-1$. So we need $2^{r-1}$ independent equations from each server to find $c_0$. We can ask the servers for derivatives of $F$ at $\gamma^{\bz+t\bv_\tau}$ up to order $2^{r-1}-1$. If these equations are ‘independent’, i.e. the determinant of the coefficient matrix does not vanish, then we can find $c_0$. If we can do this, we can decrease the cost to $n^{{O\left(2^r(\log\log n/\log n)^{1-1/r}\right)}}$. But observe that for each $l\in S$, $l^2=l\mod m$ since $l\mod p_i \in {\{0,1\}}\ \forall i\in[r]$. So higher order derivatives of $g$ are equal to the first order derivative and we get repeated rows in the coefficient matrix $M$. One avenue for improvement could be to construct $S$ such that the elements of $S$ do not satisfy a low-degree monic polynomial. Acknowledgements ================ We would like to thank Klim Efremenko and Sergey Yekhanin for helpful comments. [^1]: Department of Computer Science and Department of Mathematics, Princeton University. Email: `zeev.dvir@gmail.com`. Research supported by NSF grants CCF-1217416 and CCF-0832797. [^2]: Department of Computer Science, Princeton University. Email: `sgopi@cs.princeton.edu`. 
[^3]: Our scheme can in fact be made linear and, using a simple transformation given in [@RazborovY06], any linear scheme can be converted to a bilinear scheme [^4]: The rings $\cR_{m,r}$ are sometimes denot
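The two structural facts about $S$ used in the concluding remarks — that $|S|=2^r-1$ and that every $l\in S$ is idempotent mod $m$ — can be checked directly. A small sketch (Python, illustration only; the choice $m=3\cdot 5\cdot 7$ is our own, not from the paper):

```python
from math import prod

primes = [3, 5, 7]  # r = 3 distinct primes; m = 105 is an illustrative choice
m = prod(primes)
r = len(primes)

# S = {a in Z_m : a mod p_i in {0,1} for all i} \ {0}
S = [a for a in range(1, m) if all(a % p in (0, 1) for p in primes)]

# by CRT there is one residue choice of 0/1 per prime, minus the excluded a = 0
assert len(S) == 2 ** r - 1

# the observation from the text: every l in S satisfies l^2 = l (mod m)
assert all((l * l - l) % m == 0 for l in S)
```

The idempotence is exactly why higher-order derivatives give repeated rows in the coefficient matrix, as the text explains.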
riangle; - each side of the triangle intersects one outgoing edge of the corresponding vertex. $$\begin{picture}(260,60) \qbezier[25](0,30)(0,50)(20,50) \qbezier[25](0,30)(0,10)(20,10) \qbezier[25](20,10)(40,10)(40,30) \qbezier[20](60,30)(60,45)(75,45) \qbezier[20](60,30)(60,15)(75,15) \qbezier[20](75,45)(90,45)(90,30) \qbezier[20](75,15)(90,15)(90,30) \linethickness{0.5mm} \put(20,10){\line(0,1){40}} \put(40,30){\line(1,0){20}} \qbezier(20,50)(40,50)(40,30) \put(20,7){$\to$} \put(110,28){$\Rightarrow$} \thinlines \qbezier[25](140,30)(140,50)(160,50) \qbezier[25](140,30)(140,10)(160,10) \qbezier[25](160,10)(180,10)(180,30) \qbezier[20](200,30)(200,45)(215,45) \qbezier[20](200,30)(200,15)(215,15) \qbezier[20](215,45)(230,45)(230,30) \qbezier[20](215,15)(230,15)(230,30) \linethickness{0.5mm} \put(160,10){\line(0,1){40}} \put(180,30){\line(1,0){20}} \qbezier(160,50)(180,50)(180,30) \thinlines \put(152,42){\line(1,0){16}} \put(152,42){\line(1,2){8}} \put(168,42){\line(-1,2){8}} \put(152,18){\line(1,0){16}} \put(152,18){\line(1,-2){8}} \put(168,18){\line(-1,-2){8}} \put(188,22){\line(0,1){16}} \put(188,22){\line(-2,1){16}} \put(188,38){\line(-2,-1){16}} \put(192,22){\line(0,1){16}} \put(192,22){\line(2,1){16}} \put(192,38){\line(2,-1){16}} \put(160,7){$\longrightarrow$}\end{picture}$$ Thick lines above mark the spanning tree and an arrow indicates the direction of the root edge. Two triangles will be called adjacent if the corresponding vertices are adjacent and the edge that connects them belongs to the spanning tree. The sides of adjacent triangles that intersect this edge will also be called adjacent. We construct a polygon $P$ by gluing adjacent triangles along adjacent sides. This polygon has $2n+2$ sides and is divided into $2n$ triangles. Each edge of the cubic map that does not belong to the spanning tree intersects two sides of $P$, and we will say that these sides constitute a pair. 
Polygon $P$ has a marked side: the marked edge of the cubic map intersects it in direction from inside $P$ to outside. Continuation
{\Delta_{c}(\lambda)^{\text{reg}}}\not=0.$$ If $c\not\in \mathcal{C}$, we are done. Indeed, in this case [@BEGqi Corollary 2.11] implies that $\Delta_{c+1}(\lambda)$, $\Delta_{c}(\lambda)$ and hence $\widetilde{S}_c(\Delta_{c}(\lambda))$ are all simple modules. The isomorphism implies that $\widetilde{S}_c(\Delta_{c}(\lambda))\hookrightarrow {\Delta_{c+1}(\lambda)^{\text{reg}}}$. Under this embedding, $\widetilde{S}_c(\Delta_{c}(\lambda))\cap \Delta_{c+1}(\lambda)\not=0$ and hence $\widetilde{S}_c(\Delta_{c}(\lambda))= \Delta_{c+1}(\lambda)$. We may therefore assume that $c\in \mathcal{C}$, in which case Hypothesis \[morrat-hyp\] implies that $c\geq 0$ and we can use the KZ-functor from . By and , $ {{\textsf}{KZ}}(\widetilde{S}_c(\Delta_{c}(\lambda))) \cong {{\textsf}{KZ}}(\Delta_{c}(\lambda)) \cong Sp_q(\lambda)^{\ast}.$ By and we therefore have $$\label{nonzeromap} \operatorname{Hom}_{H_{c+1}}(\widetilde{S}_c(\Delta_{c}(\lambda)), \Delta_{c+1}(\lambda)) \cong \operatorname{Hom}_{S_q}(W_q(\lambda), W_q(\lambda)) = {\mathbb{C}}.$$ It follows from Corollary \[poono\] that the composition factors of $\Delta_{c+1}(\lambda)$ are of the form $L_{c+1}(\nu)$ with $\nu\leq \lambda$ in the dominance ordering. We will show by an ascending induction on this ordering that $\widetilde{S}_c(\Delta_{c}(\lambda)) \cong \Delta_{c+1}(\lambda)$. If $\lambda$ is minimal in the dominance ordering then $\lambda=\operatorname{{\textsf}{sign}}$ and so both $\Delta_{c+1}(\lambda)$ and $\widetilde{S}_c(\Delta_{c}(\lambda))$ are simple by Remark \[poono\]. By there is a non-zero map from $\widetilde{S}_c(\Delta_{c}(\lambda))$ to $\Delta_{c+1}(\lambda)$ which therefore must be an isomorphism. This begins the induction. Let $\lambda$ be arbitrary and suppose that, for all $\nu < \lambda$ in the dominance ordering, we have $\widetilde{S}_c(\Delta_{c}(\nu))\cong\Delta_{c+1}(\nu)$, and hence that $\widetilde{S}_c(L_{c}(\nu)) \cong L_{c+1}(\nu)$. 
Since $\widetilde{S}_c$ is an equivalence, $\widetilde{S}_c(\Delta_{c}(\mu))$ has simple
(\pi)\cdot {}^tm_{i,i}'a_i+ \pi\cdot a_i m_{i,i}'=\pi\begin{pmatrix} 2z&-x+w\\-x+w&-2y\end{pmatrix}.$$ Thus there are three linear equations $-x+w=0, ~~~ z=0, ~~~ y=0$ and $x$ determines every other entry of $m_{i,i}'$.\ 2. Assume that $i$ is odd and that $L_i$ is *free of type $I$*. Then $\pi^ih_i=\xi^{(i-1)/2}\cdot \pi\begin{pmatrix} a_i&\pi b_i& e_i\\ -\sigma(\pi \cdot {}^tb_i) &\pi^3f_i&1+\pi d_i \\ -\sigma({}^te_i) &-\sigma(1+\pi d_i) &\pi+\pi^3c_i \end{pmatrix}$ as explained in Section \[h\] and we have $$\begin{gathered} \label{ea6} \begin{pmatrix} a_i'&\pi b_i'& e_i'\\ -\sigma(\pi \cdot {}^tb_i') &\pi^3f_i'&1+\pi d_i' \\ -\sigma({}^te_i') &-\sigma(1+\pi d_i') &\pi+\pi^3c_i' \end{pmatrix}=\\ \sigma(1+\pi\cdot {}^tm_{i,i}')\cdot \begin{pmatrix} a_i&\pi b_i& e_i\\ -\sigma(\pi \cdot {}^tb_i) &\pi^3f_i&1+\pi d_i \\ -\sigma({}^te_i) &-\sigma(1+\pi d_i) &\pi+\pi^3c_i \end{pmatrix} \cdot(1+\pi m_{i,i}')+\pi^3(\ast).\end{gathered}$$ Here, the nondiagonal entries of $a_i'$ as well as the entries of $b_i', e_i', d_i'$ are considered in $B\otimes_AR$, each diagonal entry of $a_i'$ is of the form $\pi^3 x_i$ with $x_i\in R$, and $c_i', f_i'$ are in $R$. In addition, $b_i=0, d_i=0, e_i=0, c_i=0, f_i=\bar{\gamma}_i$ as explained in Remark \[r33\].(2) and $a_i$ is the diagonal matrix with $\begin{pmatrix} 0&1\\-1&0\end{pmatrix}$ on the diagonal. In the above equation, we can cancel the term $\pi^3(\ast)$ since its nondiagonal entries contain $\pi^3$ as a factor and its diagonal entries contain $\pi^5$ as a factor since $L_i$ is *free of type $I$* so that both $L_{i-1}$ and $L_{i+1}$ are *of type II*. Note that in this case, $m_{i,i}'=\begin{pmatrix} s_i^{\prime}& \pi r_i^{\prime}& t_i^{\prime}\\ y_i^{\prime}&\pi x_i^{\prime}&u_i^{\prime}\\ \pi v_i^{\prime}& \pi z_i^{\prime}&\pi w_i^{\prime} \end{pmatrix}$. Compute $\sigma(\pi\cdot {}^tm_{i,i}')\cdot\begin{pmatrix} a_i&0&0\\ 0&\pi^3 \bar{\gamma}_i&1 \\ 0&-1 &\pi \end{pmatrix}\cdot(\pi m_{i,i}')$ formally and this equals $\
$m_\pi=138$ MeV and $m_K=495$ MeV, $q_0\equiv \frac12(M_\Lambda-M_N)$, $g_{NN\pi}\equiv \frac{g_A M_N}{f_\pi}$, $g_{\Lambda N K}\equiv-\frac{D_s+3F_s}{2\sqrt3f_\pi}$, ${\overline{M}}\equiv\frac12(M_N+M_\Lambda)$, and $$\begin{aligned} {\hat A} &=\left( \frac{ C^{PV}_{K}}{2} + D^{PV}_{K} + \frac{ C^{PV}_{K}}{2} {\vec \tau}_1 {\vec \tau}_2 \,\right), \\ {\hat B}&= \left( \frac{ C^{PC}_{K}}{2} + D^{PC}_{K} + \frac{ C^{PC}_{K}}{2} {\vec \tau}_1 \, {\vec \tau}_2 \right) \,.\end{aligned}$$ Next-to-leading order contributions {#ss:nloc} =================================== Order Parity Structures ------- -------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 0 PC $1$, ${\vec{\sigma}_1}\cdot{\vec{\sigma}_2}$ ${\vec{\sigma}_1}\cdot{\vec{q}}$, ${\vec{\sigma}_1}\cdot{\vec{p}}$, ${\vec{\sigma}_2}\cdot{\vec{q}}$, ${\vec{\sigma}_2}\cdot{\vec{p}}$, $({\vec{\sigma}_1}\times{\vec{\sigma}_2})\cdot{\vec{q}}$, $({\vec{\sigma}_1}\times{\vec{\sigma}_2})\cdot{\vec{p}}$, ${\vec{q}}^2$, ${\vec{p}}^2$, $({\vec{\sigma}_1}\cdot{\vec{\sigma}_2}){\vec{q}}^2$, $({\vec{\sigma}_1}\cdot{\vec{\sigma}_2}){\vec{p}}^2$, $({\vec{\sigma}_1}\cdot{\vec{q}})({\vec{\sigma}_2}\cdot{\vec{q}})$, $({\vec{\sigma}_1}\cdot{\vec{p}})({\vec{\sigma}_2}\cdot{\vec{p}})$, $({\vec{\sigma}_1}+{\vec{\sigma}_2})\cdot({\vec{q}}\times{\vec{p}})$ ${\vec{q}}\cdot{\vec{p}}$, $({\vec{\sigma}_1}\cdot{\vec{\sigma}_2}){\vec{q}}\cdot{\vec{p}}$, $({\vec{\sigma}_1}\cdot{\vec{q}})({\vec{\sigma}
er analysis indicated that the genetic heterogeneity has certain unique properties. First, a majority of the genetic heterogeneity sites were detected in unique genomic areas. Many of them have non-synonymous mutations, leading to amino acid alterations in target proteins (Table [2](#T2){ref-type="table"}). For example, the genetic heterogeneity leads to a Q^59^- \> K^59^substitution on putative fibronectin/fibrinogen binding protein, a C^717^- \> Y^717^on oxacillin resistance-related FmtC protein at a ratio of 3.87% \[T:6 G:145\], and 3.7% \[A:5 G:128\] out of the detected candidate sequences, respectively. The genetic heterogeneity also leads to truncated proteins, e.g. at genes encoding sensor histidine kinase at a percentage of 5.74% \[C:7 G:115\], and phosphotransferase system, glucose-specific IIABC component at a percentage of 2.85% \[A:5 C:170\]. ###### Synonymous and non-synonymous analysis of mutations in the heterogeneity sites Chrom Position Types of Mutations Genotype at SRX007711 Genotype at FPR3757 Alignment with Orthologous Proteins Gene and Function ---------------- -------------------- ----------------------- --------------------- ------------------------------------- ------------------- ---------------- --------------------------------------------------------------------------------------------------------- `gi 87161249 ref` `423` `EKDATIEKSNTG` `E DATIEKSNTG` `ENDATIEKSNTG` `SRR022865_26088` `1` `gaggaagataag` 778416
16) 0.0040 (16) C27 0.018 (2) 0.016 (2) 0.019 (2) −0.0002 (18) 0.0084 (19) 0.0023 (18) C28 0.019 (3) 0.020 (2) 0.037 (3) 0.005 (2) 0.012 (2) 0.000 (2) C29 0.011 (2) 0.033 (3) 0.022 (3) −0.002 (2) 0.0048 (19) 0.002 (2) C30 0.018 (2) 0.021 (2) 0.017 (2) −0.0063 (19) 0.0049 (19) −0.0068 (19) C31 0.019 (2) 0.018 (2) 0.014 (2) −0.0008 (18) 0.0030 (19) −0.0003 (18) C32 0.013 (2) 0.012 (2) 0.0088 (19) 0.0016 (16) 0.0031 (16) −0.0014 (16) C33 0.018 (2) 0.014 (2) 0.017 (2) 0.0010 (17) 0.0056 (19) 0.0008 (18) C34 0.020 (3) 0.026 (3) 0.017 (2) −0.003 (2) 0.010 (2) 0.002 (2) C35 0.024 (3) 0.027 (3) 0.013 (2) 0.000 (2) 0.007 (2) −0.004 (2) C36 0.028 (3) 0.011 (2) 0.020 (2) 0.0032 (19) 0.006 (2) −0.0052 (18) C37 0.025 (3) 0.017 (2) 0.014 (2) 0.0002 (19) 0.007 (2) 0.0025 (18) C38 0.016 (2) 0.012 (2) 0.012 (2) −0.0024 (17) 0.0048 (18) −0.0011 (17) C39 0.011 (2) 0.017 (2) 0.0081 (19) −0.0030 (16) 0.0038 (16) 0.0013 (16) C40 0.018 (2) 0.017 (2) 0.019 (2) −0.0022 (18) 0.0034 (19) −0.0039 (19) C41 0.023 (3) 0.017 (2) 0.033 (3) 0.003 (2) 0.013 (2) −0.002 (2) C42 0.018 (3) 0.033 (3) 0.025 (3) 0.003 (2) 0.009 (2) −0.003 (2) C43 0.015 (2) 0.023 (2) 0.019 (2) −0.0064 (19) 0.0053 (19) −0.005 (2) C44 0.017 (2) 0.017 (2) 0.015 (2) −0.0040 (18) 0.0071 (18) −0.0027 (18) C45 0.013 (2) 0.018 (2) 0.012 (2) −0.0051 (17) 0.0032 (17) 0.0018 (17) C46 0.028 (3) 0.017 (2) 0.015 (2) −0.0001 (19) 0.006 (2) 0.0006 (18) C47 0.037 (3) 0.017 (2) 0.018 (2) 0.000 (2)
1}.\end{aligned}$$ As before, the coefficient of the principal chiral model term is $1/f^2$ and the Wess-Zumino term has coefficient $k$. The Maurer-Cartan equation $d (dg g^{-1} ) = dg g^{-1} \wedge dg g^{-1}$ is: $$\begin{aligned} - \frac{1}{2} ( \frac{1}{f^2} - k) \bar{\partial} j^a_z + \frac{1}{2} ( \frac{1}{f^2} + k) \partial j^a_{\bar{z}} - i {f^{a}}_{bc} j_z^c j_{\bar{z}}^b &=& 0. \end{aligned}$$ In this context it is easier to work with the canonical right invariant one-form: $$\begin{aligned} \omega &=& dg g^{-1}\end{aligned}$$ and rewrite the equations of motion in terms of $\omega$ and the coefficients $c_\pm$ defined as in the bulk of the paper: $$\begin{aligned} \bar{\partial} \omega_z &=& -\frac{ c_-}{c_+ + c_-} [\omega_z , \omega_{\bar{z}}] \nonumber \\ \partial \omega_{\bar{z}} &=& + \frac{c_+}{c_++c_-} [\omega_z , \omega_{\bar{z}}]. \label{EOMs}\end{aligned}$$ Now consider a connection which is a function of a spectral parameter $\lambda$: $$\begin{aligned} A(\lambda) &=& -\frac{2}{1+\lambda} \frac{c_+}{c_++c_-} \omega_z dz - \frac{2}{1-\lambda} \frac{ c_-}{c_++c_-} \omega_{\bar{z}} d \bar{z}\end{aligned}$$ and compute the curvature of the connection: $$\begin{aligned} F_{\bar{z} z} &=& -\frac{2}{1+\lambda} \frac{c_+}{c_++c_-} \bar{\partial} \omega_z + \frac{2}{1-\lambda} \frac{ c_-}{c_++c_-} \partial \omega_{\bar{z}} - \frac{c_+}{c_++c_-} \frac{ c_-}{c_++c_-} 2 (\frac{1}{1+\lambda}+ \frac{1}{1-\lambda}) [\omega_z ,\omega_{\bar{z}}]. \nonumber\end{aligned}$$ Flatness of the connection for all values of the spectral parameter $\lambda$ is equivalent to the validity of the equations of motion (\[EOMs\]). 
Using the on-shell flat connection, we can define an inf
follows: $$\begin{aligned} {\bf 45} & = & {\bf 8}_{-2} \oplus {\bf 28}_0 \oplus {\bf 1}_0 \oplus {\bf 8}_2, \\ {\bf 16} & = & {\bf 8}_{-1} \oplus {\bf 8}_{+1}, \\ {\bf 10} & = & {\bf 1}_{-2} \oplus {\bf 8}_0 \oplus {\bf 1}_{2}, \\ {\bf 1} & = & {\bf 1}_0,\end{aligned}$$ where the subscript indicates the $q_-$ charge. We arrange the (untwisted sector) states into $so(10)$ representations, with the following results: - The adjoint of $so(10)$ arises from $H^*(X, {\cal O})$. Contributing terms are: - $H^*(X, {\cal O})$ in (R,R), transforming as ${\bf 8}_{-2}$, - $H^*(X, \wedge^4 {\cal E} \cong {\cal O})$ in (R,R), transforming as ${\bf 8}_{+2}$, - $H^*(X, {\rm Tr} \, {\cal E}^* \otimes {\cal E} \cong {\cal O})$ in (NS,R), transforming as ${\bf 1}_0$, - $H^*(X, {\cal O})$ in (NS,R), transforming as ${\bf 28}_0$. - Copies of ${\bf 10}$ of $so(10)$ arise from $H^*(X, \wedge^2 {\cal E})$. Contributing terms are: - $H^*(X, \wedge^2 {\cal E})$ in (R,R), transforming as ${\bf 8}_0$, - $H^*(X, \wedge^2 {\cal E})$ in (NS,R), transforming as ${\bf 1}_2$, - $H^*(X, \wedge^2 {\cal E}^* \cong \wedge^2 {\cal E})$ in (NS,R), transforming as ${\bf 1}_{-2}$. - Gauge singlets, arising as $H^*(X, {\rm End} \, {\cal E})$ (where we use End to denote the traceless endomorphisms), arising in the (NS,R) sector. In addition, there is one vector in the adjoint representation of the second $E_8$, which is always present in computations of the form of appendix \[app:spectra\]. In any event, altogether in this six-dimensional theory we have $h^0(X, {\cal O})=1$ vector multiplets in the adjoint of ${\rm Spin}(10)$, One vector multiplet in the adjoint of $E_8$, $h^1(X, \wedge^2 {\cal E}) = 36$ half-hypermultiplets[^16] in the ${\bf 10}$ of ${\rm Spin}(10)$, 20 singlet hypermultiplets for $K3$ moduli, $h^1({\rm End}\, {\cal E}) = 162$ singlet half-hypermultiplets for bundle moduli[^17], so that we find $$\begin{aligned} n_V & = & 45 \: + \: 248 \: = \: 293, \\ n_H & = & (1/2)\left
{aligned}$$ where $\w\in\Rbb^L$, and $\bsG$ is defined in with $\h$ being nonnegative ordered. Also, let $\floor{\w}_\ell$ and $\ceil{\w}_\ell$ be the vectors generated from $\w$ by applying the floor and the ceiling operations on the $\ell$-th element only, respectively. ${{\bar{\a}}^\diamond}_k\leftarrow {{\bar{\a}}^\dagger}_k$ To simplify the inequality condition $f\left(\floor{{{\bar{\a}}^\dagger}_k}_\ell\right) < f\left(\ceil{{{\bar{\a}}^\dagger}_k}_\ell\right)$ in line \[line:FloorCondition\] of Algorithm \[agorithm:SuccessiveQuantization\], we first introduce the following lemma. \[lemma:FloorCondition\] For the function $f(\w) = \w^T \bsG \w$ defined in where $\bsG^T = \bsG$, the inequality condition $f\left(\floor{\w}_\ell\right) < f\left(\ceil{\w}_\ell\right)$ is equivalent to $$\begin{aligned} \label{equation:FloorCondition} 2 \left(\floor{\w}_\ell\right)^T \bsG(:,\ell) + \bsG(\ell,\ell) > 0.\end{aligned}$$ $f\left(\floor{\w}_\ell\right) < f\left(\ceil{\w}_\ell\right)$ implies $\floor{\w}_\ell \neq \ceil{\w}_\ell$, i.e., $\w(\ell)$ is not an integer. Let $\e_\ell \in \Rbb^L$ be the vector with only one nonzero element $\e_\ell(\ell) = 1$, then $\ceil{\w}_\ell = \floor{\w}_\ell + \e_\ell$, and $$\begin{aligned} &f\left(\ceil{\w}_\ell\right) = \left(\ceil{\w}_\ell\right)^T \bsG \ceil{\w}_\ell\\ &= \left( \floor{\w}_\ell + \e_\ell \right)^T \bsG \left( \floor{\w}_\ell + \e_\ell \right)\\ &= \left(\floor{\w}_\ell\right)^T \bsG \floor{\w}_\ell + \left(\floor{\w}_\ell\right)^T \bsG \e_\ell + \e_\ell^T \bsG \floor{\w}_\ell + \e_\ell^T \bsG \e_\ell\\ &= f\left(\floor{\w}_\ell\right) + \left(\floor{\w}_\ell\right)^T \bsG(:,\ell) + \bsG(\ell,:) \floor{\w}_\ell + \bsG(\ell,\ell)\\ &= f\left(\floor{\w}_\ell\right) + 2 \left(\floor{\w}_\ell\right)^T \bsG(:,\ell) + \bsG(\ell,\ell). 
\end{aligned}$$ Obviously, $f\left(\floor{\w}_\ell\right) < f\left(\ceil{\w}_\ell\right)$ is equivalent to $$\begin{aligned} 2 \left(\floor{\w}_\ell\right)^T \bsG(:,\ell) + \bsG(\ell,\ell) > 0.\tag*{\qedhere}\end{aligned}$$ \[lemma:
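Lemma \[lemma:FloorCondition\] is easy to check numerically: since $f(\ceil{\w}_\ell)-f(\floor{\w}_\ell) = 2(\floor{\w}_\ell)^T\bsG(:,\ell)+\bsG(\ell,\ell)$, the two conditions must agree for any symmetric $\bsG$. A minimal sketch (randomly generated symmetric matrix, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
L = 4
A = rng.normal(size=(L, L))
G = A + A.T  # the lemma only requires G to be symmetric

def f(w):
    return w @ G @ w

# Verify the equivalence of Lemma [lemma:FloorCondition] on random points.
for _ in range(1000):
    w = 3 * rng.normal(size=L)
    for ell in range(L):
        lo, hi = w.copy(), w.copy()
        lo[ell], hi[ell] = np.floor(w[ell]), np.ceil(w[ell])
        if lo[ell] == hi[ell]:
            continue  # w(ell) integral: floor and ceiling coincide
        lhs = f(lo) < f(hi)
        rhs = 2 * lo @ G[:, ell] + G[ell, ell] > 0
        assert lhs == rhs
```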
$\bar{A} = (a_1, \dotsc, a_n)$ and $\bar{B} = (b_1, \dotsc, b_n)$. Inductively and by symmetry, it suffices to show for each $\ell$ that $(\bar{A}-\{a_\ell\},\bar{B}-\{b_{\ell}\})$ is upper-triangular in $N = M \con a_\ell \del b_\ell$. Indeed, for each $k < \ell$ we have $\cl_N(\{a_1,\dotsc, a_k\}) \supseteq \cl_M(\{a_1, \dotsc, a_k\}) \supseteq \{b_1, \dotsc, b_k\}$ and vice versa, so $\cl_N(\{a_1, \dotsc, a_k\}) = \cl_N(\{b_1, \dotsc, b_k\})$. For $k > \ell$, $$\begin{aligned} \cl_N(\{a_1,\dotsc, a_k\} - \{a_\ell\}) &= \cl_M(\{a_1, \dotsc, a_k\}) - \{a_{\ell},b_{\ell}\} \\& \supseteq \{b_1, \dotsc, b_k\} - \{b_\ell\}, \text{ and}\\ \cl_N(\{b_1, \dotsc, b_k\} - \{b_\ell\}) &= \cl_{M \con a_\ell}(\{b_1, \dotsc, b_{\ell-1}\} \cup \{b_{\ell+1}, \dotsc, b_k\}) - \{b_\ell\} \\ &= \cl_M(\{a_1, \dotsc, a_{\ell-1},a_\ell\} \cup \{b_{\ell+1}, \dotsc, b_k\}) - \{a_\ell,b_\ell\} \\ &= \cl_M(\{b_1, \dotsc, b_k\}) - \{a_{\ell},b_\ell\}\\ &\supseteq \{a_1,\dotsc, a_k\} - \{a_\ell\},\end{aligned}$$ where we use the fact that $\cl_M(\{a_1, \dotsc, a_\ell\}) = \cl_M(\{b_1, \dotsc, b_\ell\})$. Thus $\cl_N(\{a_1, \dotsc, a_k\}-\{a_\ell\}) = \cl_N(\{b_1, \dotsc, b_k\} - \{b_\ell\})$. The lemma follows. \[triangularone\] Let $s \ge 2$ and $t \ge 0$ be integers. If $A$ and $B$ are disjoint bases of a matroid $M$ with $r(M) \ge 4^{st}$, then either - $M$ has a $U_{s,2s}$-minor $U$ in which $E(U) \cap A$ and $E(U) \cap B$ are bases, or - $M$ has a rank-$t$ minor with a lower-triangular pair $(\bar{A},\bar{B}) \in A^t \times B^t$. We may assume that $E(M) = A \cup B$, that the first outcome does not hold, and inductively that $t \ge 1$ and the lemma holds for smaller $t$. Let $A_0$ be an $s$-element subset of $A$, and let $A' = A- A_0$. Let $B_0$ be a basis for $M \con A' \del A_0$. 
Since $r(M \con A') = s$, by Lemma \[udensity\] we have $\tau_{s-1}(M \con A') \le \binom{2s}{s-1}$, so by a majority argument there is some $B'' \subseteq B-B_0$ for which $r_{M \con A'}(B'') \le s-1$ and $|B''| \ge (n-s)/\bino
thcal O(n) \otimes A^{\otimes r}\otimes M\otimes A^{\otimes s}\to M$, and since $\widehat{\mathcal O}(\vec X;\varnothing) =\mathcal O(n-1)$ for $|\vec X|=n$, we also have “inner product maps” $ \mathcal O(n-1) \to Hom(A^{\otimes i-1} \otimes M \otimes A ^{\otimes j-i-1} \otimes M \otimes A^{\otimes n-j} ,k)$. Notice that in the lowest case $n=2$, the “inner product map" $\mathcal O(1)\to Hom(M\otimes M,k)$ determines a map $<.,.>:M\otimes M\to k$, given by the image of the unit $1_k\in k=\mathcal O(1)$. Note that $<.,.>$ is invariant under the module maps mentioned above, and using the composition and the $S_n$-action of $\widehat{\mathcal O}$, all the higher inner product maps are determined by $<.,.>$ together with the module maps. We now describe algebras over the operad $\textbf{D} (\widehat{\mathcal O^!})$. This will be given in terms of derivations and maps of free $\mathcal O$-algebras and free $\mathcal O$-modules. Recall from [@GK (1.3.4)], that a free $\mathcal P$-algebra is generated by the $k$-vector space $A$ given by $F_{\mathcal P} A:=\bigoplus_{n\geq 1}(\mathcal P(n)\otimes A^{\otimes n})_{S_n}$. $F_{\mathcal P} A$ is an algebra over $\mathcal P$, i.e., there are maps $ \gamma:\mathcal P(n)\otimes (F_{\mathcal P} A )^{\otimes n} \to F_{\mathcal P} A$ coming from the composition in $\mathcal P$ and the tensor products, which satisfy the required compatibility conditions, see [@GK (1.3.2)]. 
An algebra derivation $d\in \mathrm{Der}(F_{\mathcal P}A)$ is defined to be a map from $F_{\mathcal P} A$ to itself, making the following diagram commute: $$\begin{CD} \mathcal P(k) \otimes (F_{\mathcal P}A)^{\otimes k}& @>\gamma>> & F_{\mathcal P}A\\ @V\sum_i \mathrm{id}\otimes \mathrm{id}^{\otimes i} \otimes d \otimes \mathrm{id}^{\otimes (k-i-1)}VV & & @VVdV\\ \mathcal P(k) \otimes (F_{\mathcal P}A)^{\otimes k}& @>\gamma>> & F_{\mathcal P}A \end{CD}$$ Similarly, if $A$ and $M$ are $k$-vector spaces, we define the free module $M$ over $A$ to be $F_{\mathcal P,A} M := \bigoplus_{n\geq 1} \left(\bigoplus
\sum_{t=1}^{m} q_i(t,\text{blue})\right) \, \left( { \prod_{i\in {\text{col}}^{-1}(\text{red})}} \, \sum_{t=1}^{m} q_i(t, \text{red})\right) \nonumber\\ &{\leqslant}\frac{n}{\binom{{s}}{d}} \left({\beta d^3}\right)^{|{\text{col}}^{-1}({\text{blue}})|} \left({\frac{2\beta kd^4}{n^{\varepsilon}}}\right)^{|{\text{col}}^{-1}(\text{red})|} {\leqslant}\frac{{n\beta^k d^{4k}}}{\binom{{s}}{d}} \left(\frac{{2k}}{n^{\varepsilon}}\right)^{|{\text{col}}^{-1}(\text{red})|}. \label{huge} \end{aligned}$$ There are at most $2^{k-1}$ coloring functions and $4^{k}\text{poly}(n) \binom{{s}}{d}$ rooted and ordered trees. So by the upper bound (\[huge\]), together with the union bound over all colored ordered trees, we obtain $$\begin{aligned} \label{ineq:final} &{\ensuremath{\operatorname{\mathbf{Pr}}\left[\text{ ${\mathcal{C}}_{m}$ contains a valid blue-red colored $k$-vertex tree with $r$ red vertices }\right]}}\nonumber\\ &{\leqslant}4^k 2^{k-1}\cdot {n^{c_0+2}} \binom{{s}}{d} \cdot \frac{{n\, \beta^k d^{4k}}}{\binom{{s}}{d}} \left(\frac{{2k}}{n^{\varepsilon}}\right)^{r}\nonumber \\ & {\leqslant}n^{c_0+3}\cdot (2\beta d)^{4k} \cdot n^{-r\varepsilon/2} {\leqslant}{\exp\big( 4k\log(2\beta d) + (c_0+3 -r\varepsilon/2 )\log n\big),} \end{aligned}$$ using $k={\mathcal{O}}(\log n)$ for the penultimate inequality. Let $b=k-r-1$ be the number of blue vertices and let $D_{s_1},\ldots,D_{s_{b}}$ be the sorted list of blue vertices such that $s_1<s_{2} < \cdots < s_b$. Then, by the definition of blue-red coloring, for every $j=1,\ldots, b$ we have $ |(\cup_{g=1}^{j-1}D_{s_g}) \cap D_{s_j}|{\leqslant}1$. This implies that $$y=|\cup_{j=1}^{k} D_{t_j}|{\geqslant}|\cup_{j=1}^{b} D_{s_j}|{\geqslant}(d-1)b=(d-1)(k-1-r),$$ [since $\{s_1,\ldots, s_b\}\subseteq \{t_1,\ldots, t_k\}$]{}. Applying Lemma \[lem:uni\] implies that the balanced allocation is $(\alpha, m)$-uniform, where $\alpha = 44\beta$, say. Hence [for any $c{\geqslant}44\beta {\mathrm{e}}^2$]{}, the probability that each bin
-1$ instead of $\mu $). We obtain that $$\begin{aligned} F_{\beta '_{n+\nu +1-\mu }} F_{\beta '_{n+2-\mu }}^{{b^{\chi}} (\beta '_{n+2-\mu })-1} F_{\beta '_{n+3-\mu }}^{{b^{\chi}} (\beta '_{n+3-\mu })-1}\cdots F_{\beta '_n}^{{b^{\chi}} (\beta '_n)-1} F_{i_\mu }^t v_{\Lambda _\mu }=0 \end{aligned}$$ for all $\mu \in \{1,2,\dots ,n\}$ and $\nu \in \{\mu ,\mu +1,\dots ,n\}$. Apply on both sides of this equation the map ${\hat{T}}_{i_1}{\hat{T}}_{i_2}\cdots {\hat{T}}_{i_{\mu -1}}: M^{\chi _\mu }(\Lambda _\mu )\to M^\chi (\Lambda )$ to obtain Eq. . The Shapovalov form {#sec:shapdet} =================== We discuss the analog of the Shapovalov form for the algebras $U(\chi )$ following the construction in [@b-Joseph 3.4.10]. Let $\chi \in {\mathcal{X}}$. By Prop. \[pr:tridec\], there exists a decomposition $${\mathcal{U}}(\chi )=\Big(\sum _{i\in I}F_i{\mathcal{U}}(\chi )+\sum _{i\in I}{\mathcal{U}}(\chi )E_i \Big) \oplus \Uz$$ and hence a unique projection $${\theta ^{\chi}} :{\mathcal{U}}(\chi )\to \Uz$$ with kernel $\sum _{i\in I}F_i{\mathcal{U}}(\chi )+\sum _{i\in I}{\mathcal{U}}(\chi )E_i$. This map is commonly known as the *Harish-Chandra map*. By definition, ${\theta ^{\chi}} $ satisfies the property $$\begin{aligned} {\theta ^{\chi}} (u_-uu_+)={\varepsilon }(u_-){\theta ^{\chi}} (u){\varepsilon }(u_+) \label{eq:HCprop}\end{aligned}$$ for all $u_-\in U^-(\chi )$, $u\in U(\chi )$, $u_+\in U^+(\chi )$. Since ${\Omega }(u)=u$ for all $u\in {{\mathcal{U}}^0}$, $$\begin{aligned} {\theta ^{\chi}} ({\Omega }(u))={\theta ^{\chi}} (u)\qquad \text{for all $u\in {\mathcal{U}}(\chi )$.} \label{eq:HCprop2}\end{aligned}$$ The bilinear map $$\begin{aligned} {\mathrm{Sh}}: {\mathcal{U}}(\chi )\times {\mathcal{U}}(\chi )\to {{\mathcal{U}}^0},\qquad {\mathrm{Sh}}(u,v)={\theta ^{\chi}} ({\Omega }(u)v), \label{eq:Shf}\end{aligned}$$ is called the *Shapovalov form*. By Eq.  
and since ${\Omega }^2={\operatorname{id}}$, $$\begin{aligned} {\mathrm{Sh}}(u,v)={\mathrm{Sh}}(v,u) \quad \text{for all $u,
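Spelling the symmetry out, assuming as usual that ${\Omega }$ is an algebra anti-automorphism (so ${\Omega }(uv)={\Omega }(v){\Omega }(u)$) with ${\Omega }^2={\operatorname{id}}$: applying Eq. (\[eq:HCprop2\]) to the element ${\Omega }(u)v$ gives $$\begin{aligned} {\mathrm{Sh}}(u,v)={\theta ^{\chi}} ({\Omega }(u)v) ={\theta ^{\chi}} \big({\Omega }({\Omega }(u)v)\big) ={\theta ^{\chi}} \big({\Omega }(v){\Omega }^2(u)\big) ={\theta ^{\chi}} ({\Omega }(v)u)={\mathrm{Sh}}(v,u).\end{aligned}$$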
lo variability and the size of the numerical lattice required for accurate numerical integration also increase with the true number of MUs. This demonstrates the challenge of MUNE for large neuromuscular systems that possess complex features resulting from alternation. Of the $19$ datasets where the MAP estimate $\hat{u}$ did not correspond to the truth, $u^*$, one dataset had $\hat{u}>u^*$ with the rest (including the two outliers) having $\hat{u}<u^*$. The stimulus-response curves for the first case and a typical example of the second are presented in Figure \[fig:SimCaseData\] and are discussed in turn below. ![Stimulus-response curve (top) for the simulated data with lines representing the expected WMTF over the stimuli intervals where the joint firing probability is greater than 5% according to the individual excitability curves (bottom). Left: Data set D1 contains $u^*=7$ MUs but $\hat{u}=8$. Right: Data set D2 contains $u^*=8$ MUs but $\hat{u}=7$. Circle points, additional 23 simulations over the 23–32V alternation period involving 5 MUs.[]{data-label="fig:SimCaseData"}](figures/SRcurve_sim2.pdf){width="80.00000%"} Over-estimation {#sec:SimStudyOver} --------------- The first data set, D1, contains $u^*=7$ MUs in truth but the SMC-MUNE method produces a MAP estimate of $\hat{u}=8$. The posterior probability of the true model is $\hat{p}_{u^*}=14.9\%$ and this model, along with the larger 9 MU model, is contained within a 95% HPCS. Parameter estimates for the MAP model (Table \[tab:D1muEst\]) show that the penultimate MU has a median expected twitch force of 9.6mN with a credible upper bound of 15.7mN, much lower than the 20mN simulation threshold. Figure \[fig:PredDen\_S37\] presents the construction of the predictive WMTF density for the true and MAP models at a 37V stimulus. The local modes in the model containing the true MU-number correspond uniquely to particular firing combinations. 
In contrast, the weak MU in the MAP model principally serves to increase the variability around a specific
triplet probability for that class is computed.[]{data-label="triplet_detector"}](figures/classification_with_triplet.pdf){width="80.00000%"} where $p_c(y | x)$ and $p_t(y | x)$ are the probability distributions from the classifier and triplet network respectively, $k$ is the most probable class output by the classifier, and $\Delta$ is the difference in probability between the most probable class output by the classifier and the same class output by the triplet network. In our experiments we set the threshold $\eta=0.4$. Note that while we used a triplet network as an auxiliary model in our examples, the goal is to find a model that is trained in a distinct manner and will thus have different biases than the original classifier, so other models can certainly be used in place of a triplet network if desired. Defense Analysis ================ Before discussing the experiments and corresponding results, here is the definition of adversarial examples that we are working with in this paper. An image is an adversarial example if 1. It is classified as a different class than the image from which it was derived. 2. It is classified incorrectly according to ground truth label. 3. The image from which it was derived was originally correctly classified. Since we are evaluating attack success rates in the presence of defenses, point 3 in the above definition ensures that the attack success rate is not confounded by a performance decrease in the classifier potentially caused by a given defense. In our analysis, we use the fast gradient sign (FGS) [@goodfellow_explaining_2014], iterative gradient sign (IGS) [@kurakin_adversarial_2016], and Carlini and Wagner (CW) [@carlini_towards_2016] attacks. We are operating under the white-box assumption that the adversary has access to network architecture, weights, and gradients. Additionally, we did not use any data augmentation when training our models since that can be considered as a defense and could further confound results. 
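The thresholding step described above can be sketched as below. This is one plausible reading only: the exact decision rule appears in the figure, and the function name and probability vectors here are illustrative.

```python
import numpy as np

ETA = 0.4  # threshold used in the experiments above

def flag_adversarial(p_c, p_t, eta=ETA):
    """Flag an input when the auxiliary (triplet) network assigns much
    less probability than the classifier to the classifier's top class."""
    k = int(np.argmax(p_c))   # classifier's most probable class
    delta = p_c[k] - p_t[k]   # disagreement on that same class
    return delta > eta

# Both models agree on class 0 -> not flagged.
clean = flag_adversarial(np.array([0.9, 0.1]), np.array([0.85, 0.15]))
# Triplet net puts little mass on the classifier's pick -> flagged.
suspect = flag_adversarial(np.array([0.9, 0.1]), np.array([0.2, 0.8]))
```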
To address possible concerns regarding poor
`atcttatttatt` 2481059 NONSYN T:5 G:148 G:37 `ttagattggttg` response regulator protein `gi 87161886 ref` `196` `KSENIEKTVNRF` `K ENIEKTVNRF` `KIENIEKTVNRF` `SRR022865_56405` `1` `aagaagaagact` `ataataactagt` thiamine-phosphate pyrophosphorylase 2212436 NONSYN A:5 C:107 C:37 `atattagtttac` `gi 87161325 ref` `64` `PVVKELKKHAK` `P VKELKKHAK` `PFVKELKKHAK
riaxone daily (3.26 ± 6.9 days) (p = 0.034). The odds ratio for ICU utilization was 0.11 (95% CI 0.02--0.65). However, this difference was no longer significant after controlling for MELD score - odds ratio 0.21 (95% CI 0.04--1.07). Finally, we examined one-year survival for patients treated with 1 versus 2 g ceftriaxone, and found a significant improvement in survival associated with the 2 g dose (p = 0.0034, log-rank test) ([Figure 1](#f1){ref-type="fig"}). Median overall survival was greater for patients treated with the 2 g dose (228 days vs. 102 days); however, this difference was not significant (p = 0.26).

###### Hospital course characteristics by ceftriaxone dose.

  Characteristic                                          1 g (N=34)     2 g (N=57)    p-value
  ------------------------------------------------------- -------------- ------------- ---------------
  Length of stay (days)                                   13.24 ± 21.5   10.28 ± 7.2   0.443
  ICU days                                                3.26 ± 6.9     0.59 ± 1.78   ***0.034\****
  Repeat paracentesis at index hospitalization (N, %)     14 (41)        32 (56)       0.167
  Repeat paracentesis with \>250 neutrophils (N, %)       7 (21)         11 (19)       0.881
  30-day readmission (N, %)                               11 (32)        16 (28)       0.665
  Other inpatient antibiotics (N, %)                      25 (74)        35 (61)       0.238
  Total inpatient antibiotic days                         8.4 ± 8.5      8.8 ± 6.3     0.821
  Inpatient duration of ceftriaxone (days)                4.8 ± 3.1      5.3 ± 3.2     0.491
  Creatinine at discharge                                 1.57 ± 1.04    1.41 ± 1.46   0.529

ICU = intensive care unit. \* Not significant after controlling for MELD score.
 \[TKMeqn\] and \[TKMupdate\] below). This is similar to a short-term memory, allowing previous inputs to have some effect on the processing of the current input, so that the neurons which have won recently are more likely to win again. In the experiments reported here the value $\gamma = 0.4$ was used, meaning that only the previous 2 or 3 winners had any influence in deciding the current winner. The activity of the neurons is calculated using $$a_i (t) = \gamma \cdot a_i (t-1) + e^{ \left( - \frac{1}{2} \right) \left[ \mathbf{v} (t) - \mathbf{w}_i (t) \right]^2}, \label{TKMeqn}$$ and, in a similar way to the SOM, the neuron with the largest activity $a$ is chosen as winner, and its weights and those of its topological neighbours updated using the following weight update rule ($\eta$ and the neighbourhood remained the same): $$\mathbf{w}_i (t+1) = \mathbf{w}_{i} (t) + \eta \sum_{k=0}^{n} \gamma^k \left[ \mathbf{v} (t-k) - \mathbf{w}_i (t-k) \right]. \label{TKMupdate}$$ ### The $K$–Means Clustering Algorithm One of the simplest ways to cluster data is by using the $K$–means algorithm [@Bishop95b]. A pre-determined number of prototypes, $\mathbf{\mu}$, are chosen to represent the data, so that it is partitioned into $K$ clusters. The positions of the prototypes are chosen to minimise the sum-of-squares clustering function, $$J = \sum_{j=1}^{K} \sum_{n \in S_j} \| \mathbf{x}^n - \mathbf{\mu}_j \|^2$$ for data points $\mathbf{x}^n$. This separates the data into $K$ partitions $S_j$. The algorithm can be carried out as an on-line or batch procedure, with the on-line version, used here, having the update rule $$\Delta \mathbf{\mu}_j = \eta \left( \mathbf{x}^n - \mathbf{\mu}_j \right).$$ Using the Novelty Filter on a Mobile Robot\[impl\] ================================================== The robot implementation was designed to show that the novelty filter described in section \[NF\] can be used to detect new stimuli. 
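The on-line $K$-means update rule quoted above can be sketched as follows (a minimal illustration, not the robot code; the function and parameter names are invented for the example):

```python
import numpy as np

def online_kmeans(X, K, eta=0.05, epochs=10, seed=0):
    """On-line K-means: present points one at a time and move the winning
    (nearest) prototype a step eta toward the point, i.e. the update rule
    delta_mu_j = eta * (x - mu_j) from the text."""
    rng = np.random.default_rng(seed)
    mu = X[rng.choice(len(X), size=K, replace=False)].astype(float)
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            j = int(np.argmin(((mu - x) ** 2).sum(axis=1)))  # winner
            mu[j] += eta * (x - mu[j])
    return mu

# Two tight, well-separated clusters; the prototypes should settle near
# the cluster centres (0, 0) and (5, 5).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, size=(50, 2)),
               rng.normal(5.0, 0.1, size=(50, 2))])
mu = online_kmeans(X, K=2)
```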
The novelty filter was incorporated into a system where a robot detects and turns to
} - u_{m,n}^{(i+\ell_2)} = v^{(i)}_{m+1,n} - v^{(i+\ell_1)}_{m,n} , \label{eq:dLP-ex-cc-2}\end{gathered}$$ for $i \in {\mathbb{Z}}_N$. Quotient potentials {#sect:coprime-quotient} ------------------- Equations (\[eq:dLP-ex-cc-1\]) hold identically if we set $$\begin{gathered} \label{eq:dLP-gen-ph-1} u^{(i)}_{m,n} = \alpha \frac{\phi^{(i)}_{m+1,n}}{\phi^{(i+k_1)}_{m,n}} ,\qquad v^{(i)}_{m,n} = \beta \frac{\phi^{(i)}_{m,n+1}}{\phi^{(i+k_2)}_{m,n}} ,\qquad i \in {\mathbb{Z}}_N,\end{gathered}$$ after which (\[eq:dLP-ex-cc-2\]) takes the form $$\begin{gathered} \label{eq:dLP-gen-sys-1} \alpha \left(\frac{\phi^{(i)}_{m+1,n+1}}{\phi^{(i+k_1)}_{m,n+1}} - \frac{\phi^{(i+\ell_2)}_{m+1,n}}{\phi^{(i+\ell_2+k_1)}_{m,n}} \right) = \beta \left(\frac{\phi^{(i)}_{m+1,n+1}}{\phi^{(i+k_2)}_{m+1,n}} - \frac{\phi^{(i+\ell_1)}_{m,n+1}}{\phi^{(i+\ell_1+k_2)}_{m,n}} \right) , \qquad i \in {\mathbb{Z}}_N,\end{gathered}$$ defined on a square lattice. These equations can be explicitly solved for the variables on any of the four vertices and, in particular, $$\begin{gathered} \label{eq:dLP-gen-sys-1-a} \phi^{(i)}_{m+1,n+1} = \frac{\phi_{m,n+1}^{(i+k_1)} \phi_{m+1,n}^{(i+k_2)}}{\phi_{m,n}^{(i+k_1+\ell_2)}} \left( \frac{\alpha \phi_{m+1,n}^{(i+\ell_2)}- \beta \phi_{m,n+1}^{(i+\ell_1)}}{\alpha \phi_{m+1,n}^{(i+k_2)}- \beta \phi_{m,n+1}^{(i+k_1)}} \right) ,\qquad i \in {\mathbb{Z}}_N.\end{gathered}$$ In this potential form, the Lax pair (\[eq:dLP-gen\]) can be written \[eq:LP-ir-g-rat\] $$\begin{gathered} \Psi_{m+1,n} = \big( \alpha {\boldsymbol{\phi}}_{m+1,n} \Omega^{k_1} {\boldsymbol{\phi}}_{m,n}^{-1} + \lambda \Omega^{\ell_1}\big) \Psi_{m,n},\nonumber\\ \Psi_{m,n+1} = \big( \beta {\boldsymbol{\phi}}_{m,n+1} \Omega^{k_2} {\boldsymbol{\phi}}_{m,n}^{-1} + \lambda \Omega^{\ell_2}\big) \Psi_{m,n}, \label{eq:LP-ir-g-rat-1}\end{gathered}$$ where $$\begin{gathered} \label{eq:LP-ir-g-rat-2} {\boldsymbol{\phi}}_{m,n} := \operatorname{diag}\big(\phi^{(0)}_{m,n},\dots,\phi^{(N-1)}_{m,n}\big) \qquad 
{\mbox{and}} \qquad \det\left({\boldsymbol{\p
frac{2k}{\lambda_n}\tau\right)d\tau,\quad s \in [0,t),\end{aligned}$$ which leads to $$\begin{aligned} e_{n,2}(s)&=c_1\sin\left(\frac{2k}{\lambda_n}s\right)-c_2\cos\left(\frac{2k}{\lambda_n}s\right)\\* &\hspace{4mm}+c_2\left(\cos\left(\frac{2k}{\lambda_n}t\right)+1\right)-c_1\sin\left(\frac{2k}{\lambda_n}t\right),\quad s \in [0,t).\end{aligned}$$ Since $e_{n,2}$ is of the form (\[44\]) we have $d_1=-c_2$, $d_2=c_1$ and $$\begin{aligned} c_2\left(\cos\left(\frac{2k}{\lambda_n}t\right)+1\right)-c_1\sin\left(\frac{2k}{\lambda_n}t\right)=0\label{45}.\end{aligned}$$ \ Now of course (\[41\]) must also hold for $s=0$, thus $$\lambda_nc_1=\lambda_ne_{n,1}(0)=k\int\limits_{0}^tc_1\sin\left(\frac{2k}{\lambda_n}\tau\right)-c_2\cos\left(\frac{2k}{\lambda_n}\tau\right)d\tau,$$ which implies $$\begin{aligned} c_1&\left(\cos\left(\frac{2k}{\lambda_n}t\right)+1\right)=-c_2\sin\left(\frac{2k}{\lambda_n}t\right)\label{46}.\end{aligned}$$ First assume $c_1=0$, then we have with (\[45\]) $$c_2\left(\cos\left(\frac{2k}{\lambda_n}t\right)+1\right)=0,$$ and with (\[46\]) $$-c_2\sin\left(\frac{2k}{\lambda_n}t\right)=0.$$ But as we assume $(e_{n,1},e_{n,2})^T$ to be an eigenvector, the functions $e_{n,1}$ and $e_{n,2}$ may not both be the zero function, i.e. $c_2\neq 0$.\ Hence we have $$\sin\left(\frac{2k}{\lambda_n}t\right)=0 \text{ and }\cos\left(\frac{2k}{\lambda_n}t\right) = -1,$$ which is equivalent to $$\begin{aligned} \frac{2k}{\lambda_n}t=(2n-1)\pi,\end{aligned}$$ for some $n\in\mathbb{Z}$, i.e.  $$\begin{aligned} \lambda_n=\frac{2k}{(2n-1)\pi}t,\ n\in{\mathbb{Z}}.\end{aligned}$$ If we assume $c_2=0$, then $$c_1\sin\left(\frac{2k}{\lambda_n}t\right)=0=c_1\left(\cos\left(\frac{2k}{\lambda_n}t\right)+1\right).$$ This again is equivalent to $$\begin{aligned} \frac{2k}{\lambda_n}t=(2n-1)\pi,\end{aligned}$$ for some $n\in\mathbb{Z}$, i.e. 
, $$\begin{aligned} \lambda_n=\frac{2k}{(2n-1)\pi}t,\ n\in{\mathbb{Z}}.\end{aligned}$$ Assume $c_1\neq0\neq c_2$, then we multiply (\[45\]) and (\[46\]) and obtain: $$c_1 c_2 \left(\cos^2\left(\frac{2k}{\l
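The eigenvalue condition just derived is easy to verify directly: for $\lambda_n=\frac{2k}{(2n-1)\pi}t$ one has $\frac{2k}{\lambda_n}t=(2n-1)\pi$, so the sine vanishes and the cosine equals $-1$. A quick numerical check (arbitrary positive $k$, $t$):

```python
import math

# For lambda_n = 2kt/((2n-1)*pi) the argument 2kt/lambda_n equals
# (2n-1)*pi, so sin(.) = 0 and cos(.) = -1, as required above.
k, t = 1.7, 2.3  # arbitrary constants for the check
for n in range(1, 6):
    lam = 2 * k * t / ((2 * n - 1) * math.pi)
    x = 2 * k * t / lam  # equals (2n-1)*pi up to rounding
    assert abs(math.sin(x)) < 1e-9
    assert abs(math.cos(x) + 1) < 1e-9
```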
based on desired tolerance of $10^{-4}$. From top left to bottom right, we see the standard deviation of the estimator, the sample variance, the absolute error (using a quadrature approximation for the reference value), and the amount of work (number of samples $\times$ mean number of steps).[]{data-label="fig:test5"}](test5_fig2 "fig:") ![Example simulation with the walk-on-spheres algorithm for based on desired tolerance of $10^{-4}$. From top left to bottom right, we see the standard deviation of the estimator, the sample variance, the absolute error (using a quadrature approximation for the reference value), and the amount of work (number of samples $\times$ mean number of steps).[]{data-label="fig:test5"}](test5_fig7 "fig:") Non-constant source term ------------------------ Suppose that, again in the context of , we take $D$ to be equal to the unit ball and the source term equal to $$ {f}(x)=2^{\alpha} \Gamma(2+\alpha/2) \Gamma(1+\alpha/2) (1-(1+\alpha/2) \|{x}\|^2), \qquad x\in D,$$ and zero exterior data ${g}=0$. This has the exact solution $u(x)=\max\{0, 1-\|x\|^2\}^{1+\alpha/2}$; cf. [@Dyda2012-kl]. The behaviour of the algorithm is shown in Figure \[fig:dyda\]. As expected, we again observe no obvious trend in estimator standard deviation and absolute error. The sample variance of sums of Monte Carlo-generated integrals increases with $\alpha$, as does the number of samples. Work required grows with $\alpha$ as in Figure \[fig:test5\], but with a slightly steeper trend. Notice that accuracy of $10^{-4}$ for the inhomogeneous part of the solution would demand a lot more work than the homogeneous part in Figure \[fig:test5\]. ![Example simulation with the walk-on-spheres algorithm for based on desired tolerance of $10^{-3}$ and $n_{R,\Theta}=1000$. 
From top left to bottom right, we see the standard deviation of the estimator, the sample variance, the absolute error, and the amount of work (number of samples $\times$ mean number of steps).[]{data-label="fig:dyda"}
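For orientation, the basic walk-on-spheres loop is easy to state in the classical case $\alpha=2$ (ordinary Laplacian, zero source); the fractional variant used in the text replaces the uniform step on the sphere by a sample from the $\alpha$-stable exit distribution. A minimal sketch (all names are illustrative):

```python
import numpy as np

def wos_laplace(x0, dist_to_boundary, g, eps=1e-4, rng=None):
    """One walk-on-spheres sample for the Laplace problem
    Delta u = 0 in D, u = g on the boundary (the alpha = 2 case)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    while (r := dist_to_boundary(x)) > eps:
        step = rng.normal(size=x.size)
        x = x + r * step / np.linalg.norm(step)  # uniform point on sphere
    return g(x)

# Sanity check on the unit disk with the harmonic function u(x, y) = x:
# the estimator mean should approach u(0.3, 0.2) = 0.3.
rng = np.random.default_rng(0)
dist = lambda x: 1.0 - np.linalg.norm(x)
est = np.mean([wos_laplace([0.3, 0.2], dist, lambda x: x[0], rng=rng)
               for _ in range(4000)])
```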
\hat{\mathcal{H}}^{[2]}_{-},$$ where $$\label{hblocks-} \begin{aligned} & \hat{\mathcal{H}}^{[1]}_{-} = \bigoplus_{n = 0}^{m - 1} \langle n , e |\hat{\mathcal{H}}^{(m)}_{+} | n , e \rangle = \hbar \bigoplus_{n = 0}^{m - 1} \left( \nu n + \tfrac{\omega_0}{2} \right), \\ & \hat{\mathcal{H}}^{[2]}_{-} \!=\! \bigoplus_{n = 0}^{\infty} \!\!\begin{pmatrix} \!\! \langle n , g |\hat{\mathcal{H}}^{(m)}_{+} | n , g \rangle \!\! & \!\! \!\! \langle n\! + \! m, e |\hat{\mathcal{H}}^{(m)}_{+} | n , g \rangle \!\!\\ \!\! \langle n , g |\hat{\mathcal{H}}^{(m)}_{+} | n\! + \! m , e \rangle \!\! & \!\! \langle n\! + \! m, e |\hat{\mathcal{H}}^{(m)}_{+} | n\! + \! m , e \rangle \!\! \end{pmatrix} \\ & \,\,\,\,\, = \hbar \bigoplus_{n = 0}^{\infty} \begin{pmatrix} \nu (n+m) + \frac{\omega_{0}}{2} & \text{e}^{-i\omega_{L}t} \Omega f_n^m \\ \text{e}^{i\omega_{L}t} \Omega f_n^{m\ast} & \nu n-\frac{\omega_{0}}{2} \end{pmatrix}, \end{aligned}$$ and $f_n^m$ is defined in Eq. (\[auxf1\]). Now considering the one dimensional blocks where $m > n$, its eigenvalues and eigenvectors are, respectively, given by $$\label{eigval1-} \zeta_{-}^{(n,m)} = \hbar\nu n+\frac{\hbar\omega_{0}}{2}, \,\, \left|\zeta_{-}^{(n,m)}\right\rangle = \left|n,e\right\rangle,$$ for each $n = 0,...,m-1$ for a given $m$. The eigenvalues of each $2 \times 2$ blocks in Eq. (\[hblocks-\]) now becomes $$\label{eigval-} \begin{aligned} \mu_{-}^{(n,m)} & =\hbar\nu\left(n+\frac{m}{2}\right) -\frac{\hbar}{2}\sqrt{\omega_{L}^{2}+ \Omega^{2}\left|f_n^m\right|^{2}} \\ \gamma_{-}^{(n,m)} & = \hbar\nu\left(n+\frac{m}{2}\right)+ \frac{\hbar}{2}\sqrt{\omega_{L}^{2}+ \Omega^{2}\left|f_n^m\right|^{2}}, \end{aligned}$$ respectively, associated to the eigenvectors $$\label{eigevec-} \begin{aligned} \left|\mu_{-}^{(n,m)}\right\rangle & = \tfrac{\text{e}^{-i\omega_{L}t}\left[\omega_{\!L}- \s
d in $\hat H_{\rm e-ph}$ since it captures the fact that a vibrational excitation of the adsorbed molecule may couple to electrons in the substrate when parts of the molecule periodically beat onto the substrate surface. In particular, we will show below that this unconventional Holstein coupling can reduce the Kondo temperature [@PAM-E-PHON2013] of the system S. The additional constants $n_{d0}$ and $n_{c0}$ in Eq.  are often set to zero in the literature [@GalperinRatnerNitzan2007] when the polaronic energy shift in the single-particle energies is of primary interest, because they do not play a role then. Here, however, we focus on the quantum fluctuations with respect to some reference filling that are induced by the electron-phonon coupling [@EidelsteinSchiller2013; @JovchevAnders2013] and use these constants to ensure $\langle \hat X_\nu \rangle=0$. Typical values are $n_{d0}= n_{c0}=1$ at half filling. ### Interaction-driven displacement of the harmonic oscillator {#sec:e-phonon-displacement} Away from particle-hole symmetry, an electron-phonon coupling as the one in Eq.  generates a displacement of the equilibrium position of the corresponding harmonic oscillator. Since we are going to use an atomistic DFT calculation with relaxed atomic coordinates to generate the input parameters of the model Hamiltonian $\hat H_{\rm e}+ \hat H_{\rm ph} + \hat H_{\rm e-ph}$, such an additional displacement is not justified. We therefore include appropriately adjusted $n_{d0}$ and $n_{c0}$ added in Eq.  to ensure $\langle\hat X_0\rangle=0$. However, the perturbative derivation of the tunnel current does not rely on explicitly vanishing displacements $\langle \hat X_\nu \rangle$, and therefore the absorption of the equilibrium displacement in Eq.  is just a convention and must not alter the physics. 
The equilibrium displacement generated by the electron-phonon coupling also touches upon a more fundamental issue: Evidently the physical observables such as the total STS spectra must not depend on the precise definiti
}{n}} \cdot \frac{\binom{{s}-2}{d-2}}{\binom{{s}}{d}} \cdot {\ensuremath{\operatorname{\mathbf{Pr}}\left[A(p_J,t)\right]}} = {\frac{2\beta d(d-1)}{({s}-1)n}}\, {\ensuremath{\operatorname{\mathbf{Pr}}\left[A(p_J,t)\right]}}, \end{aligned}$$ as $\binom{{s}-2}{d-2}$ is the number of $d$-element subsets of $H_t$ which contain the pair $p_J$. Then $$\begin{aligned} q_i(t, \text{red}){\leqslant}\sum_{J=1}^U {\frac{2\beta d(d-1)}{({s}-1)n}}\, {\ensuremath{\operatorname{\mathbf{Pr}}\left[A(p_J,t)\right]}}. \end{aligned}$$ Note that by (\[up:u\]) we have $U{\leqslant}kd^2$ and hence, $$\begin{aligned} \label{pr:red} \sum_{t=1}^{m} q_i(t, \text{red}){\leqslant}\sum_{J=1}^U \sum_{t=1}^{n} {\frac{2\beta d(d-1)}{({s}-1)n}}\, {\ensuremath{\operatorname{\mathbf{Pr}}\left[A(p_J,t)\right]}} {\leqslant}\sum_{J=1}^U {\frac{2\beta d(d-1)}{({s}-1)n}}\, {\ensuremath{\operatorname{\mathtt{vis}}(p_J)}}{\leqslant}{\frac{2\beta kd^4}{n^\varepsilon}}. \end{aligned}$$ The final inequality follows from the visibility property, using the fact that $d < s$. Write ${\text{col}}^{-1}(\text{blue})$ for the set of blue vertices in $T$, and similarly for ${\text{col}}^{-1}(\text{red})$. Then $$|{\text{col}}^{-1}(\text{red})|+|{\text{col}}^{-1}(\text{blue})|=k-1.$$ Suppose that $(t_1,\ldots,t_k)$ is the sequence of balls that are going to select vertices $1,2,\ldots, k$ of $T$. By applying (\[pr:no\]), (\[pr:blu\]) and (\[pr:red\]), we find that the probability that the edges of the colored tree $T$ appear in ${\mathcal{C}}_{m}$ at times [$(t_1,\ldots, t_k)$]{}, with the corresponding sets $D_{t_1},\ldots, D_{t_k}$ consistent with the chosen blue-red coloring scheme, is at most $$\begin{aligned} &\sum_{(t_1,\ldots,t_k)} \left\{q_1(t_1)\prod_{i=2}^k q_{i}(t_i, {\text{col}}(i))\right\} {\leqslant}\left(\sum_{t=1}^{m} q_1(t)\right){\prod_{i=2}^{k}}\left(\sum_{t=1}^{m} q_i(t, {\text{col}}(i))\right)\nonumber\\ &{\leqslant}\frac{n}{\binom{{s}}{d}} \, \left( { \prod_{i\in {\text{col}}^{-1}(\text{blue})}} \,
in of precursor events should actually lead to a different conclusion. Transduced values potentially control the status of any safety concern. Tests simply pass or fail, but evaluation of why a test passes or fails can become nontrivial, requiring collaboration between mechanical, software, and system safety engineers. ### Statistics {#S:STATISTICS} Some statistical error originates in inference from random sample to [unknown]{} population (parametric family of probability distributions on a measurable space). Just one distribution is true, while the others are false. An assertion separating the parameterization into two decision units is called a hypothesis. One decision unit is traditionally designated null, while the other is called alternate. The true distribution belongs either to the null or alternative decision units. Each sample item either passes or fails its associated test (see §\[S:PHYSICS\]). Within the entire cone ${\mathcal{C}}$, suppose the proportion of tests that fail is $\rho$. This proportion is subsequently realized approximately through a random sample. Regardless of the sample size, since the application is to safety, the only cases of interest will be when the number of failures is zero. Other cases, implying need for reliability growth, are treated in the literature, particularly [@jM87]. We now examine the case defined by drawing a random sample of size $N$ from ${{\operatorname{edge}{{\mathcal{C}}}}}$ and allowing $n = 0$ failures in the associated tests from cone ${\mathcal{C}}$. The null decision unit contains the probability distribution $P_0(\text{pass}) = 1$ and $P_0(\text{fail}) = 0$. The alternate decision unit is the set of probability distributions $P_\rho$ having $0 < \rho \leq 1$. Hypothesis evaluation entails two types of error, known as $\alpha$ and $\beta$ error. #### False rejection ($\alpha$ error) {#S:FALSE_REJECTION} The first is false rejection of the null decision unit, with associated measurement error $\alpha$. 
The sampling plan can reject only if it finds an
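The arithmetic behind this zero-failure plan is compact enough to sketch. Under the null, every test passes, so the plan never falsely rejects; the quantity worth computing is the probability that a sample from an alternate $P_\rho$ still shows $n = 0$ failures. A minimal sketch (ours, not the paper's; the function names are invented for illustration), assuming independent draws so that the acceptance probability is $(1-\rho)^N$:

```python
import math

def prob_zero_failures(rho, n_samples):
    """Probability that a random sample of size N shows n = 0
    failures when the true failure proportion is rho."""
    return (1.0 - rho) ** n_samples

def sample_size_for(rho, beta):
    """Smallest N for which accepting on zero failures has
    probability at most beta under the alternate P_rho."""
    return math.ceil(math.log(beta) / math.log(1.0 - rho))

# e.g. guarding against a 1% failure proportion at the 5% level
# takes roughly three hundred clean tests
n_needed = sample_size_for(0.01, 0.05)
```

The design choice worth noting is that the sample size grows only logarithmically in the acceptance probability but roughly like $1/\rho$ in the failure proportion one wants to guard against.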
951 | 3,956 | 1,933 | 727 | 2,229 | 0.781644 | github_plus_top10pct_by_avg
iscoveryCNN_1; @ObjectDiscoveryCNN_2; @ObjectDiscoveryCNN_3] and co-segmentation [@InteractiveCoseg] only require image-level labels without object bounding boxes. Object discovery is mainly implemented by identifying common foreground patterns from the noisy background. People usually consider closed boundaries and common object structure as a strong prior for object discovery. In contrast, it is difficult to mine the true part parsing of objects without sufficient supervision. Up to now, there is no reliable solution to distinguishing semantically meaningful parts from other potential divisions of object parts in an unsupervised manner. In particular, some parts (*e.g.* the abdomen) do not have shape boundaries to determine their shape extent. **Part localization/detection vs. semanticizing CNN patterns:** There are two key points to differentiate our study from conventional part-detection approaches. First, most detection methods deal with classification problems, but inspired by graph mining [@OurICCV15AoG; @OurSAPPAMI; @OurCVPR14Graph], we mainly focus on a mining problem. *I.e.*, we aim to discover meaningful latent patterns to clarify CNN representations. Second, instead of summarizing common knowledge from massive annotations, our method requires very limited supervision to mine latent patterns.

Method
======

The overall objective is to sequentially minimize the following three loss terms. $${Loss}={Loss}^{\textrm{CNN}}+{Loss}^{\textrm{QA}}+{Loss}^{\textrm{AOG}} \label{eqn:obj}$$ ${Loss}^{\textrm{CNN}}$ denotes the classification loss of the CNN. ${Loss}^{\textrm{QA}}$ denotes the loss for active QA. Given the current AOG, we use ${Loss}^{\textrm{QA}}$ to actively determine a sequence of questions about objects that cannot be explained by the current AOG, and require people to annotate bounding boxes of new object parts for supervision. ${Loss}^{\textrm{AOG}}$ is designed to learn an AOG for the CNN.
${Loss}^{\textrm{AOG}}$ penalizes 1) the incompatibility between the AOG and CNN f
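The control flow of Eq. (\[eqn:obj\]) — a plain sum of three scalar terms, minimized sequentially — can be sketched in a few lines. The loss callables below are toy stand-ins, not the paper's actual CNN, QA, or AOG losses; only the sum-and-minimize-in-sequence structure is taken from the text:

```python
# Stand-in scalar losses, standing in for the paper's CNN
# classification loss, active-QA loss, and AOG loss.
def loss_cnn(theta):
    return (theta[0] - 1.0) ** 2

def loss_qa(theta):
    return (theta[1] - 2.0) ** 2

def loss_aog(theta):
    return (theta[2] + 0.5) ** 2

def total_loss(theta):
    # Loss = Loss_CNN + Loss_QA + Loss_AOG, as in Eq. (eqn:obj)
    return loss_cnn(theta) + loss_qa(theta) + loss_aog(theta)

def minimize_term(loss, theta, idx, lr=0.1, steps=200, h=1e-6):
    """Gradient descent on one term only, using a forward
    finite-difference gradient in its own parameter."""
    for _ in range(steps):
        bumped = list(theta)
        bumped[idx] += h
        grad = (loss(bumped) - loss(theta)) / h
        theta[idx] -= lr * grad
    return theta

theta = [0.0, 0.0, 0.0]
for term, idx in [(loss_cnn, 0), (loss_qa, 1), (loss_aog, 2)]:
    theta = minimize_term(term, theta, idx)  # sequential minimization
```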
952 | 316 | 1,087 | 1,036 | 765 | 0.800575 | github_plus_top10pct_by_avg
1.375 (8) C10---H10 0.9500 C66---H66 0.9500 C11---C12 1.381 (7) C67---C68 1.380 (8) C11---H11 0.9500 C67---H67 0.9500 C12---H12 0.9500 C68---C69 1.395 (7) C13---H13A 0.9900 C68---H68 0.9500 C13---H13B 0.9900 C69---H69 0.9500 C14---C19 1.390 (7) C70---C71 1.392 (6) C14---C15 1.399 (6) C70---C75 1.390 (6) C15---C16 1.381 (7) C71---C72 1.388 (6) C15---H15 0.9500 C71---H71 0.9500 C16---C17 1.376 (7) C72---C73 1.378 (7) C16---H16 0.9500 C72---H72 0.9500 C17---C18 1.394 (7) C73---C74 1.394 (7) C17---H17 0.9500 C73---H73 0.9500 C18---C19 1.390 (7) C74---C75 1.385 (6) C18---H18 0.9500 C74---H74 0.9500 C19---H19 0.9500 C75---H75 0.9500 C20---C21 1.398 (6) O1---C76 1.224 (6) C20---C25 1.396 (6) N1---C76 1.333 (6) C21---C22 1.397 (6) N1---C77 1.452 (6) C21---H21 0.9500 N1---C78 1.465 (7) C22---C23 1.384 (8) C76---H76 0.9500 C22---H22 0.9500 C77---H77A 0.9800 C23---C24 1.376 (8) C77---H77B 0.9800 C23---H23 0.9500 C77---H77C 0.9800 C24---C25 1.398 (7) C78---H78A 0.9800 C24---H24 0.9500 C78---H78B 0.9800 C25---H25 0.9500 C78---H78C 0.9800 C26---C27 1.387 (6) O2---C79 1.237 (7) C26---C31 1.399 (6) N2---C79 1.300 (7) C27---C28 1.383 (7) N2---C81 1.427 (7
953 | 4,096 | 987 | 707 | null | null | github_plus_top10pct_by_avg
Q}$, then $\delta _{1}A$ $\delta _{2}$ may or may not belong to $\psi _{AD}^{Q}$. To be precise, it belongs to $\psi _{AD}^{Q}$ iff $S_{\delta _{1}}\subset S_{\delta _{2}}$ or $S_{\delta _{2}}\subset S_{\delta _{1}}$. Indeed, $S_{\delta _{1}}\cup S_{\delta _{2}}\in \mathcal{L(S)}$ or, equivalently, $S_{\delta _{1}}\cup S_{\delta _{2}}=S_{\delta _{1}}\Cup S_{\delta _{2}}$, iff one of the conditions in C$_{4}$ is satisfied. It is apparent from criteria C$_{2}$ and C$_{3}$ that $\psi _{AD}^{Q}$ is closed with respect to the pragmatic connectives $N$ and $K$, in the sense that $\delta \in \psi _{AD}^{Q}$ implies $N\delta \in \psi _{AD}^{Q}$, and $\delta _{1}$, $\delta _{2}\in \psi _{AD}^{Q}$ implies $\delta _{1}K\delta _{2}\in \psi _{AD}^{Q}$. On the contrary, $\psi _{AD}^{Q}$ is not closed with respect to $A$, since it may occur that $\delta _{1}A$ $\delta _{2}\notin \psi _{AD}^{Q}$ even if $\delta _{1}$, $\delta _{2}\in \psi _{AD}^{Q}$. In order to obtain a closed subset of afs of $\mathcal{L}_{Q}^{P}$, one can consider the set $\phi _{AD}^{Q}=\{\delta \in \psi _{A}^{Q}\mid $ the pragmatic connective $A$ does not occur in $\delta \}$. The set $\phi _{AD}^{Q}$ obviously contains all elementary afs of $\mathcal{L}_{Q}^{P}$, plus all afs of $\psi _{A}^{Q}$ in which only the pragmatic connectives $N$ and $K$ occur. We can thus consider a sublanguage of $\mathcal{L}_{Q}^{P}$ whose set of afs reduces to $\phi _{AD}^{Q}$. This new language is relevant since all its afs are p-decidable, hence we call it the *p-decidable sublanguage* of $\mathcal{L}_{Q}^{P}$ and denote it by $\mathcal{L}_{QD}^{P}$.

The p-decidable sublanguage $\mathcal{L}_{QD}^{P}$
--------------------------------------------------

As we have anticipated in the Introduction, we aim to show in this paper that the sublanguage $\mathcal{L}_{QD}^{P}$ has the structure of a physical QL, hence it provides a new pragmatic interpretation of this relevant physical structure.
However, this interpretation will be more satisfactory from an intuiti
954 | 2,890 | 1,545 | 991 | null | null | github_plus_top10pct_by_avg
r> <td> @Html.TextBoxFor(model => model.AllFeatures[i].Id) </td> <td> @Html.TextBoxFor(model => model.AllFeatures[i].Name) </td> <td> @Html.EditorFor(model => model.AllFeatures[i].IsActive) </td> </tr> } Q: Add class to td with colspan=2 I want to add a Class to every td which has colspan=2: <div class="content"> <table> <tr> <td>zuppa di zucchini</td> <td>€ 6,00</td> </tr> <tr> <td colspan="2">huisgemaakte soep van courgette met munt</td> </tr> <tr> <td>insalata caprese</td> <td>€ 6,00</td> </tr> <tr> <td colspan="2">frisse salade met tomaatjes, mozzarella en verse pesto</td> </tr> </table> </div> I thought something like this would do the trick, but it doesn't work: $('.content td[colspan=2]').addClass('dus'); Can somebody please help me? Thanks in advance! A: Your code $('.content td[colspan=2]').addClass('dus'); it works fine just include jquery on your project without any conflicts with other tools like mootools Q: LaPlace Transform of a step function Consider the function $f(t)=\begin{cases} 0 \;\;\;\;\;\;\;\;\;\;\;t<\pi \\t-\pi \;\;\;\;\; \pi\leq t \leq2\pi \\ 0 \;\;\;\;\;\;\;\;\;\;\; t\leq 2\pi \end{cases}$ Find the laplace transform of this function. What I did was I wrote it in terms of the step function $f(t)=u_{\pi}(t)[t-\pi]+u_{2\pi}(t)[\pi-t]$ So therefore I set up a integral of $\int_{\pi}^{2\pi} e^{-st}(t-\pi) +\int_{2\pi}^{\infty}e^{-st}(\pi-t)$ The result I end up with is $\dfrac{-2\pi e^{-2\pi s}}{s}-\dfrac{2e^{-2\pi s}}{s^2}+\dfrac{e^{-\pi s}}{s^2}$ However the book ends up with a different answer which I get when I only consider $\int_{\pi}^{2\pi} e^{-st}(t-\pi)$, but not the other. Why are they considering 1 integral when there is in fact 2 that need to be evaluated. Or is the way I defined my step functi
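Not part of the original thread, but the Laplace-transform discrepancy is easy to probe numerically. Since $f$ vanishes outside $[\pi, 2\pi]$, the transform is just the direct integral $\int_\pi^{2\pi} e^{-st}(t-\pi)\,dt$, and the shift theorem gives the closed form $e^{-\pi s}/s^2 - e^{-2\pi s}(1/s^2 + \pi/s)$; the asker's two-step representation gives the same value once the first integral is taken over $[\pi, \infty)$ rather than $[\pi, 2\pi]$. A quick check at $s = 1$ (all names here are ours):

```python
import math

def f(t):
    # the piecewise function from the question
    return t - math.pi if math.pi <= t <= 2 * math.pi else 0.0

def laplace_numeric(s, a, b, n=2000):
    """Simpson's rule for the integral of e^(-s t) f(t) over [a, b];
    f vanishes outside [pi, 2 pi], so [pi, 2 pi] captures the whole
    transform."""
    h = (b - a) / n
    total = 0.0
    for i in range(n + 1):
        t = a + i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        total += w * math.exp(-s * t) * f(t)
    return total * h / 3.0

def laplace_closed(s):
    # shift-theorem result for u_pi(t)(t - pi) - u_{2pi}(t)((t - 2pi) + pi)
    return (math.exp(-math.pi * s) / s**2
            - math.exp(-2 * math.pi * s) * (1.0 / s**2 + math.pi / s))

num = laplace_numeric(1.0, math.pi, 2 * math.pi)
```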
955 | 1,314 | 57 | 837 | 829 | 0.799194 | github_plus_top10pct_by_avg
mulation if no signal were present. []{data-label="tab:lim"} Table \[tab:MSSM\_lim\] shows the preliminary 95% C.L. lower limits on ${M_{\mathrm{h}}}$ and ${M_{\mathrm{A}}}$ for the four LEP experiments [@felcini; @al_moriond; @del_moriond; @l3_moriond; @op_moriond], as well as the derived excluded ranges of $\tan\beta$ for both no mixing and maximal mixing in the scalar-top sector.

  -------- ---------------------------- ---------------------------- --------------------- -----------------------
           ${M_{\mathrm{h}}}$ ([ ]{})   ${M_{\mathrm{A}}}$ ([ ]{})   $\tan\beta$           $\tan\beta$
                                                                     max. mixing           no mixing
  ALEPH    80.8                         81.2                         -                     $1<\tan\beta<2.2$
  DELPHI   83.5                         84.5                         $0.9<\tan\beta<1.5$   $0.6<\tan\beta<2.6$
  L3       77.0                         78.0                         $1.<\tan\beta<1.5$    $1.<\tan\beta<2.6$
  OPAL     74.8                         76.5                         -                     $0.81<\tan\beta<2.19$
  -------- ---------------------------- ---------------------------- --------------------- -----------------------

  : Observed 95% C.L. lower limits on ${M_{\mathrm{h}}}$ and ${M_{\mathrm{A}}}$. Also shown are the derived excluded ranges of $\tan\beta$. The mass limits are given for $\tan\beta>1$, except for those of DELPHI, given for $\tan\beta>0.5$. []{data-label="tab:MSSM_lim"}

In the years 1999 to 2000 LEP2 is expected to deliver a luminosity larger than 200 $\rm{pb}^{-1}$ per experiment at a centre-of-mass energy eventually as high as $\sim 200$ GeV. These data should allow the discovery of a SM Higgs of 107 [ ]{} or the exclusion of a Higgs lighter than $\sim$108 [ ]{} [@lellouch; @chamonix]. This is a particularly interesting region to explore, given the present indication for a light Higgs from the standard model fit of the electroweak precision data. The se
956 | 864 | 1,627 | 1,112 | null | null | github_plus_top10pct_by_avg
--0.2

                       WS       TP       AP       RH       PM25    SO~2~   CO       NO~2~   O~3~\_8h   Number of visits
  PM25                 --0.34   --0.23   0.31     --0.18   1       0.73    0.65     0.81    0.29       0.29
  SO~2~                --0.33   0.03     0.09     --0.27   0.73    1       0.35     0.66    0.43       0.22
  CO                   --0.26   --0.24   0.21     0.21     0.65    0.35    1        0.68    --0.07     0.35
  NO~2~                --0.42   --0.25   0.29     0.03     0.81    0.66    0.68     1       0.13       0.35
  O~3~\_8h             --0.24   0.39     --0.18   --0.28   0.29    0.43    --0.07   0.13    1          --0.14
  Number of visits     0.15     --0.38   0.39     --0.2    0.29    0.22    0.35     0.35    --0.14     1

^a^WS: wind speed. ^b^TP: outside temperature. ^c^AP: atmospheric pressure. ^d^RH: relative humidity. ^e^PM25: particulate matter less than 2.5 μm in diameter. ^f^SO~2~: sulphur dioxide. ^g^CO: carbon monoxide. ^h^NO~2~: nitrogen dioxide. ^i^O~3~\_8h: 8-hour average ozone slip in a day.

###### Weather and air quality data distribution of peak and nonpeak groups visiting outpatient and emergency departments.

  Variables                                         Peak group, mean (SD)   Nonpeak group, mean (SD)
  ------------------------------------------------- ----------------------- --------------------------
  Wind speed (m/sec)                                2.49 (1.10)             2.15 (0.91)
  Outside temperature (°C)                          17.81 (5.59)            23.11 (5.81)
  Atmosphere pressure (mb)                          1009.99 (5.26)          1003.73 (6.57)
  Relative humidity (%)                             77 (12.51)              82.15 (9.65)
  Particulate matter less than 2.5 μm in diameter   43.74 (23.69)           32.83 (16.49)
  Sulphur dioxide                                   13.16 (4.65)            11.45 (3.73)
  Carbon monoxide
957 | 5,631 | 326 | 263 | null | null | github_plus_top10pct_by_avg
ackrel{\eta}{\to} (u\times\id)_!(\id\times v)_\ast (u\times\id)^\ast(u\times\id)_!\\ &\toiso (u\times\id)_!(u\times\id)^\ast(\id\times v)_\ast (u\times\id)_!\\ & \stackrel{\varepsilon}{\to} (\id\times v)_\ast (u\times\id)_!\end{aligned}$$ is an isomorphism in . This is to say that the morphism $u_!\colon{\sD}^A\to{\sD}^{A'}$ preserves right Kan extensions along $v$ or that the morphism $v_\ast\colon{\sD}^B\to{\sD}^{B'}$ preserves left Kan extensions along $u$ [@groth:can-can Lem. 4.8]. For the purpose of a simpler terminology, we also say that $u_!$ and $v_\ast$ commute in . In general, these canonical mates are not invertible as is for example illustrated by the following characterization of pointed derivators. \[prop:ptd-comm\] The following are equivalent for a derivator . 1. The derivator is pointed.\[item:pc1\] 2. Empty colimits and empty limits commute in .\[item:pc2\] 3. Left Kan extensions along cosieves and right Kan extensions along sieves commute in .\[item:pc3\] Left Kan extensions along cosieves and arbitrary right Kan extensions commute in . \[item:pc4a\] Arbitrary left Kan extensions and right Kan extensions along sieves commute in .\[item:pc4b\] For the equivalence of the first two statements we consider the empty functor $\emptyset\colon\emptyset\to\bbone$. Correspondingly, for every derivator there is the canonical mate $$\xymatrix{ {\sD}^{\emptyset\times\emptyset}\ar[r]^-{(\id\times\emptyset)_\ast}\ar[d]_-{(\emptyset\times\id)_!}\drtwocell\omit{}& {\sD}^{\emptyset\times\bbone}\ar[d]^--{(\emptyset\times\id)_!}\\ {\sD}^{\bbone\times\emptyset}\ar[r]_--{(\id\times\emptyset)_\ast}&{\sD}^{\bbone\times\bbone} }$$ detecting if empty colimits and empty limits commute. By construction of initial and final objects in derivators (see [@groth:ptstab §1.1]), the source of this canonical mate is given by initial objects in while the target is given by final objects. Hence, is pointed if and only if empty colimits and empty limits commute in . 
Obviously, each of the statements \[item:pc4a\] o
958 | 1,101 | 871 | 948 | 1,102 | 0.7944 | github_plus_top10pct_by_avg
= 2k+1, \ldots, 3k$, $\{a_n, \ldots, a_{n+k}\}$ contains one point in every row. From now on, $a_n$ refers to the points defined in the above proof. This proposition is the motivation for choosing the value $6k$ in the proof of Lemma \[otherlemma\]. We can now prove Theorem \[generalk\]. We will need 3 distinct partial tilings of $\mathbb{Z}^3$ slices, corresponding to the 3 cases in the proof of Lemma \[biglemma\] with $d = 3$. The repeating unit in each of these partial tilings will have size $(k+1) \times (k+1) \times 6k$, so we will work in $\mathbb{Z}_{k+1}^2 \times \mathbb{Z}_{6k}$. We start by choosing the sets $S$ as in Lemma \[biglemma\]. These will be as follows:\ For $n = 1, \ldots, k$, $S_n = \{(0,0,n),(a_n,n+k),(a_{k+1},n+k)\}$.\ For $n = k+1, \ldots, 2k$, $S_n = \{(0,0,n+k),(a_n,n+2k),(a_{2k+1},n+2k)\}$.\ For $n = 2k+1, \ldots, 3k$, $S_n = \{(0,0,n+2k),(a_n,n+3k),(a_1,n+3k)\}$.\ We will refer to the points in $S_n$ as $x_{n,1},x_{n,2},x_{n,3}$ in the order given. We can construct a set $Y_n \subset \mathbb{Z}^4$ from each $S_n$ using the construction in the proof of Lemma \[biglemma\]. Let $Y = \bigcup_{1 \leq n \leq 3k} Y_n$. For a given $m \in \mathbb{Z}$, there are two possibilities for the structure of $Y \cap (\mathbb{Z}_{k+1}^2 \times \mathbb{Z}_{6k} \times \{m\})$: 1. $Y \cap (\mathbb{Z}_{k+1}^2 \times \mathbb{Z}_{6k} \times \{m\})$ consists of pairs of the form $\{x_{n,1},x_{n,2}\}$ or $\{x_{n,1},x_{n,3}\}$. Then it contains exactly one point in each $\mathbb{Z}_{k+1}^2$ layer. We can therefore tile $(\mathbb{Z}_{k+1}^2 \times \mathbb{Z}_{6k} \times \{m\}) \setminus Y$ entirely with strings, by Proposition \[onepoint\]. 2. $Y \cap (\mathbb{Z}_{k+1}^2 \times \mathbb{Z}_{6k} \times \{m\})$ consists of pairs of the form $\{x_{n,2},x_{n,3}\}$. 
Then it contains either 2 or 0 points in each $\mathbb{Z}_{k+1}^2$ layer.\ If $A = \{a_1, \ldots, a_{3k}\}$, and $B$ is the set constructed from $A$ in the proof of Lemma \[otherlemma\], then, by the choice of the $S_n$, the sets $B$ and $Y \ca
959 | 750 | 1,070 | 881 | 3,126 | 0.774711 | github_plus_top10pct_by_avg
h} x} (UX)^{\dagger} A W e^{ - i {\bf \Delta_{s} } x} \\ e^{ i {\bf \Delta_{s} } x} W^{\dagger} A (UX) e^{ - i {\bf h} x} & e^{ i {\bf \Delta_{s} } x} W^{\dagger} A W e^{ - i {\bf \Delta_{s} } x} \\ \end{array} \right]. \label{H1-matrix}\end{aligned}$$ That is, $(H_{1})_{i j} = 0$ in the whole active neutrino subspace. The non-vanishing elements of $H_{1}$ are as follows: $$\begin{aligned} (H_{1})_{i J} &=& e^{- i ( \Delta_{J} - h_{i} ) x} \left\{ (UX)^{\dagger} A W \right\}_{i J}, \nonumber \\ (H_{1})_{J i} &=& e^{ - i ( h_{i} - \Delta_{J} ) x} \left\{ W ^{\dagger} A (UX) \right\}_{J i}, \nonumber \\ (H_{1})_{J K} &=& e^{- i ( \Delta_{K} - \Delta_{J} ) x} \left\{ W ^{\dagger} A W \right\}_{J K}. \label{H1-elements}\end{aligned}$$ Inserting eq. (\[H1-elements\]) into (\[Omega-expand\]), we can compute all the $\Omega$ matrix elements. The simplest ones in first order in $H_{1}$, the second term in (\[Omega-expand\]), are given by $$\begin{aligned} \Omega_{i j} [1] &=& 0, \nonumber \\ \Omega_{i J} [1] &=& \frac{e^{- i ( \Delta_{J} - h_{i} ) x} - 1 }{ ( \Delta_{J} - h_{i} ) } \left\{ (UX)^{\dagger} A W \right\}_{i J}, \nonumber \\ \Omega_{J i} [1] &=& - \frac{e^{ i ( \Delta_{J} - h_{i} ) x} - 1 }{ ( \Delta_{J} - h_{i} ) } \left\{ W ^{\dagger} A (UX) \right\}_{J i}, \nonumber \\ \Omega_{J K} \vert_{J \neq K} [1] &=& \frac{e^{- i ( \Delta_{K} - \Delta_{J} ) x} - 1 }{ ( \Delta_{K} - \Delta_{J} ) } \left\{ W ^{\dagger} A W \right\}_{J K}, \nonumber \\ \Omega_{J J} [1] &=& (-i x) \left\{ W ^{\dagger} A W \right\}_{J J}, \label{Omega-1st-order}\end{aligned}$$ which serve as a building block of the perturbation series because of the structure in (\[Omega-expand\]). The notation “\[1\]” implies that the terms come from first order perturbation with $H_{1}$. For more about notations, see appendix \[sec:hatS-elements\]. We need to compute up to fourth order in $H_{1}$ because we want to keep all the order $W^4$ terms. 
The requirement arises because the probability leaking term, whose observation i
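A small consistency check one can run on Eq. (\[Omega-1st-order\]): the diagonal entry $\Omega_{JJ}[1] \propto -ix$ must be the $\Delta_K \to \Delta_J$ limit of the off-diagonal kernel $(e^{-i\Delta x}-1)/\Delta$. Sketched numerically (variable names are ours, not the paper's):

```python
import cmath

def offdiag_kernel(delta, x):
    """(e^{-i delta x} - 1) / delta -- the factor multiplying
    {W^dagger A W}_{JK} in Omega_{JK}[1] for J != K."""
    return (cmath.exp(-1j * delta * x) - 1.0) / delta

def diag_kernel(x):
    """The J = K factor (-i x) in Omega_{JJ}[1]."""
    return -1j * x

# the gap between the two kernels shrinks linearly as delta -> 0
x = 1.7
gaps = [abs(offdiag_kernel(d, x) - diag_kernel(x)) for d in (1e-3, 1e-5)]
```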
960 | 2,944 | 1,331 | 999 | null | null | github_plus_top10pct_by_avg
\in {{\mathbb R}}^2 : x_1\cos \theta + x_2\sin\theta = r \}$, where $\theta\in[0,\pi)$ is the angle and $r\in{{\mathbb R}}$ is the distance of $L$ from the origin as shown in Figure \[Radon transform\]. (100,80) (0,-65)[![An illustration of the Radon transform. It maps the object $f$ on the $(x_1,x_2)$-domain into $f$ on the $(r,\theta)$ domain. The measurement data is collected from the intensities $I_d$ of x-rays for all lines L through the object $f(x_1,x_2)$ and from different angles of view. []{data-label="Radon transform"}](figure1 "fig:"){width="10cm"}]{} The parametrization of the straight line $L$ with respect to the arc length $s$ can be written as: $$\begin{split} x_1(s,\theta,r) &= r \, \cos(\theta) - s \, \sin(\theta), \\ x_2(s,\theta,r) &= r \, \sin(\theta) + s \, \cos(\theta). \\ \end{split} \label{eq:xystr}$$ In this work, the object is placed inside a circular disk with radius $R$. Then, as a function of $r$ and $\theta$ the line integral in (\[Measurement Model\]) can be written as $$\label{Radon} \begin{split} \mathcal{R} f(r,\theta) &= \int_{-R}^{R} f(x_1(s,\theta,r),x_2(s,\theta,r)) \, ds \\ &= \int_{-R}^R f({\mathbf x}^0+s\hat{{\mathbf u}}) ds, \end{split}$$ where $$\begin{aligned} {\mathbf x}^0 = \begin{bmatrix} r\cos(\theta) & r\sin(\theta) \end{bmatrix}^{\mathsf{T}}, \qquad \hat{{\mathbf u}} = \begin{bmatrix} -\sin(\theta) & \cos(\theta) \end{bmatrix}^{\mathsf{T}}.\end{aligned}$$ In a real x-ray tomography application, the measurement is corrupted by at least two noise types: photons statistics and electronic noise. In x-ray imaging, a massive number of photons are usually recorded at each detector pixel. In such case, a Gaussian approximation for the attenuation data in $\eqref{calibration2}$ can be used [@bouman1992generalized; @sachs19993d]. Recall that a logarithm of the intensity is involved in $\eqref{Radon}$, and so additive noise is a reasonable model for the electronic noise. 
We collect a set of measurements as $$\label{noisy measurement} y_i =
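The parametrization in Eq. (\[eq:xystr\]) is straightforward to exercise numerically. For a disk phantom centered at the origin the Radon transform is known in closed form (the chord length $2\sqrt{\rho_0^2 - r^2}$, independent of $\theta$), which gives a handy sanity check. The code below is our illustration, not from the paper; the phantom radius $\rho_0$ and the sampled $(r, \theta)$ are arbitrary choices:

```python
import math

def radon_line_integral(f, r, theta, R=1.0, n=200_000):
    """Approximate R f(r, theta) = integral of f(x0 + s u) ds
    with a midpoint Riemann sum over s in [-R, R], using the
    line parametrization of Eq. (eq:xystr)."""
    h = 2.0 * R / n
    total = 0.0
    for i in range(n):
        s = -R + (i + 0.5) * h
        x1 = r * math.cos(theta) - s * math.sin(theta)
        x2 = r * math.sin(theta) + s * math.cos(theta)
        total += f(x1, x2)
    return total * h

rho0 = 0.8  # radius of a disk phantom inside the scan disk R = 1

def disk(x1, x2):
    return 1.0 if x1 * x1 + x2 * x2 <= rho0 * rho0 else 0.0

val = radon_line_integral(disk, r=0.3, theta=0.7)
chord = 2.0 * math.sqrt(rho0 ** 2 - 0.3 ** 2)  # analytic, any theta
```

The rotational invariance of the disk phantom also makes the independence from $\theta$ directly checkable by evaluating the integral at two different angles.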
961 | 4,137 | 581 | 707 | 1,961 | 0.784131 | github_plus_top10pct_by_avg
mperature-dependent coupling that runs into a three-dimensional fixed point $\alpha_{*,3d}k_{\rm phys}/T$ for low cut-off scales $k_{\rm phys}/T\ll 1$. This choice carries some uncertainty as the running coupling in Yang-Mills theory is not universal beyond two loop order. Here we have chosen the Landau gauge couplings $\alpha_{{\rm Landau},4d} (k_{\rm phys}^2)$ at cut-off scales $k_{\rm phys}/T\gg 1$, see [@Alkofer:2000wg; @jan; @Fischer:2008uz; @von; @Smekal:1997is; @Bonnet:2001uh; @Lerche:2002ep]. The corresponding three-dimensional fixed point $\alpha_{*,3d}= 1.12$ is obtained from [@Lerche:2002ep]. A specific choice for such a running coupling is given in Fig. \[fig:alpha\]. ![$\alpha_s$ for temperatures $T=0,150,300,600$ MeV[]{data-label="fig:alpha"}](coupling.eps "fig:"){width="8cm"}\ The normalisation of the momentum scale has been done by the comparison of continuum Landau gauge propagators to their lattice analogues. Fixing the lattice string tension to $\sqrt{\sigma}=440$ MeV, we are led to the above momentum scales. For a comparison with the Landau gauge results obtained in [@Braun:2007bx] we have also computed the temperature-dependence of the Polyakov loop by using $\alpha_{{\rm Landau},4d}$ for all cut-off scales. Indeed, this over-estimates the strength of $\alpha_s$, as can be seen from Fig. \[fig:alpha\], However, qualitatively this does not make a difference: for infrared scales far below the temperature scale, $\hat k\to 0$, the flow switches off for fields $\varphi$ with $\partial^2_\varphi(\hat{V}_{\bot} + \Delta \hat{ V})\geq 0$, that is for the convex part of the potential. This happens both for $g_k^2\to {\rm const}$, and for $g_k^2(\hat k^2\to 0) \sim \hat k$. In other words, the minimum of the potential freezes out in this regime. 
For the non-convex part of the potential, $\partial^2_\varphi(\hat{V}_{\bot} + \Delta \hat{ V})<0$, the flow does not tend to zero but simply flattens the potential, thus arranging for convexity of the effective potential $V_{\rm eff}=V_{k=0}$. The u
962 | 1,168 | 1,571 | 1,057 | 2,591 | 0.778643 | github_plus_top10pct_by_avg
with frequency separations of $4.7-4.8\,\mu$Hz. However, the low amplitude central peak of this triplet at $f_6=5569.6\,\mu$Hz does not reach the 4 S/N significance limit in the test 30s data. Still, to make the discussion of the triplet structures clear, we added $f_6$ to the list of Table \[table:lp133freq\] in parentheses. Besides these, the first harmonic of $f_1$ also appeared. Fig. \[fig:lp133prewh\] shows the FT of the whole dataset, the consecutive pre-whitening steps at the multiplet frequencies and at the frequency domains of $f_4$, $f_5$ and $2f_1$. We plot the frequencies of @2004ApJ...600..404B and the frequencies found in the 2007 Konkoly observations together in Fig. \[fig:lp133oldnew\]. Assuming that the closely spaced peaks at $\mathrm{F}_2$ and $\mathrm{F}_3$ result from not properly resolved components of the $f_2$ triplet, we found, with similar amplitudes, all the frequencies observed in 2003. Besides these, we detected three new frequencies: a relatively large amplitude mode at $f_3$, and two additional low-amplitude modes at $f_5$ and $f_6$. That is, we doubled the number of modes that can be used for the asteroseismic fits. The schematic plot of the triplets can be seen in Fig. \[fig:lp133triplet\]. It is clearly visible that the frequency separations of the components are larger at higher frequencies. We discuss the rotation of LP 133-144 based on the investigation of these triplets in Sect. \[sect:lp133rot\]. ![image](lp133prewh.eps){width="17.5cm"} ![LP 133-144: comparison of the frequencies obtained in 2003 (*red dashed lines*) and in 2007 (*black solid lines*).[]{data-label="fig:lp133oldnew"}](lp133oldnewfrek.eps){width="\columnwidth"} ![LP 133-144: schematic plot of the triplets found at different frequency domains.
The frequency errors are comparable to the width of the lines.[]{data-label="fig:lp133triplet"}](lp133triplet.eps){width="\columnwidth"}

Asteroseismology
================

We built our model grid for the asteroseismic investigations of our targets utilizing
963 | 90 | 1,387 | 991 | 2,024 | 0.783401 | github_plus_top10pct_by_avg
om the introduction. Indeed we will prove more generally that a version of that theorem holds for all values of $c\in {\mathbb{C}}$ that satisfy Hypothesis \[morrat-hyp\]. As was true with Corollary \[morrat-cor\] and Proposition \[shiftonO\], the theorem will take slightly different forms depending on whether $c\in \mathbb{Q}_{\leq -1}$ or not, so it is convenient to separate the cases with Hypothesis {#main-hyp} ---------- [*The element $c\in{\mathbb{C}}$ satisfies Hypothesis \[morrat-hyp\] but $c\not\in \mathbb{Q}_{\leq -1}$.* ]{} {#subsec-6.1} Assume that Hypothesis \[main-hyp\] holds. By Corollary \[morrat-cor\] there is a Morita equivalence $S_c: {U}_c{\text{-}{\textsf}{mod}}\to {U}_{c+1}{\text{-}{\textsf}{mod}}$ given by $S_c(M) = Q_c^{c+1} \otimes_{{U}_c} M$, where $Q_c^{c+1}=eH_{c+1}e_-\delta\subset D({\mathfrak{h}^{\text{reg}}})\ast{{W}}$ is considered as a right ${U}_c$-module via . Following we can therefore define a Morita ${\mathbb{Z}}$-algebra $B(c)= B=U_{\mathbb{Z}}$\[B-ring-defn\] associated to the data $\{U_{c+i},\, Q_{c+i}^{c+i+1}; i\in {\mathbb{N}}\}$; thus $B=\bigoplus_{i\geq j\geq 0}B_{ij}$ where, for integers $i>j\geq 0$, $$\label{Mij-defn} B_{jj}={U}_{c+j} \qquad\text{and}\qquad B_{ij} = \ Q_{c+i-1}^{c+i}Q_{c+i-2}^{c+i-1}\cdots Q_{c+j}^{c+j+1},$$ where the multiplication in taken in $D({\mathfrak{h}^{\text{reg}}})\ast W$. Note that, by Corollary \[morrat-cor\], we have a natural isomorphism $$\label{tpdef} B_{ij}\ \cong\ Q_{c+i-1}^{c+i}\otimes_{{U}_{c+i-1}}Q_{c+i-2}^{c+i-1} \otimes_{{U}_{c+i-2}}\cdots\otimes_{{U}_{c+j+1}} Q_{c+j}^{c+j+1},$$ and so this does accord with the definition in . The Main Theorem {#subsec-6.6} ---------------- Assume that $c\in {\mathbb{C}}$ satisfies Hypothesis \[main-hyp\]. 
The differential operator filtration $\operatorname{{\textsf}{ord}}$ on $D({\mathfrak{h}^{\text{reg}}})\ast{{W}}$, as defined in , induces filtrations on the subspaces $B_{ij}$ and hence on $B$, which we will again write as $\operatorname{{\textsf}{ord}}$.
964 | 778 | 788 | 905 | 2,051 | 0.783179 | github_plus_top10pct_by_avg
3 for $b=2$, and 0.0211 for $b=3$ fractal, equal to $D^*$ for the corresponding cases $v<v_c(u_\theta)$ of the ASAWs model. - When $w=w_c(u_\theta,t)$, the RG parameters tend to the fixed point $$\label{fpt2} (A_\theta,B_\theta,C^*,A_1^*,A_2^*,A_3^*,A_4^*,B_1^*, B_2^*)\>,$$ which corresponds to the phase in which chains are not segregated anymore, but they are not yet completely entangled. In contrast to the $w=w_c(u<u_\theta,t)$ case, for which symmetrical fixed point is obtained, values of $A_i$ $(i=1,2,3,4)$, as well as $B_1$ and $B_2$, are not mutually equal ($A_i\neq A_\theta C^*$, $B_i\neq B_\theta C^*$). The scaling relation $\langle M^{(r)}\rangle\sim \langle {N_3}^{(r)}\rangle^{\varphi}$ is satisfied, with $\varphi$ given by (\[fie2\]). - For $w>w_c(u_\theta,t)$ the RG parameters flow towards the entangled fixed point (\[fp5\]). Strong self-attraction of the 3D chain -------------------------------------- When self-attraction of the 3D polymer is strong ($u>u_\theta$), depending on the values of inter-chain interaction parameters, the following phases are possible: - For $w<w_c(u,t)$ the chains are segregated. Due to the large compactness of the 3D chain, with $(A^*,B^*)=(0,B_G)$, none of the configurations $A_1,A_2,A_3,A_4,B_1,B_2$ can be accomplished, and the corresponding fixed point is $$\label{fpg1} (0,B_G,C^*,0,0,0,0,0,0)\, .$$ The chains are completely separated. - When attraction between the chains is critical, $w=w_c(u,t)$, the chains are partially entangled, and the fixed point $$\label{fpg2} (0,B_G,C^*,0,0,0,0,B_G C^*,B_G C^*)\>,$$ is attained. In this case the interaction between chains is sufficiently strong to connect them, but not strong enough to destroy the compactness of the 3D globule, so that all $A_i^*=0$. Again, the scaling relation $\langle M^{(r)}\rangle\sim \langle {N_3}^{(r)}\rangle^{\varphi}$ is satisfied for both $b=2$ and 3, with $\varphi$ given by (\[fie2\]). 
However, while in the case $b=2$ the coordinates of corresponding fixed
965 | 1,716 | 1,937 | 1,040 | 2,017 | 0.783489 | github_plus_top10pct_by_avg
oot> So in short, I have a node in which I can have some text, and various different nodes (or none) in any order. There is no way of knowing before hand if the content is only text, text and one of the nodes, only one of the nodes, text and both nodes in any order, etc. Here's what I need to have: Wanted XML output <root> <unwrapped> <txt>text 1</txt> <reworkedNodeA/> <txt>text 2</txt> <reworkedNodeB/> <txt>text 3</txt> </unwrapped> <unwrapped> <txt>text 4</txt> <reworkedNodeB/> </unwrapped> </root> And of course I'm reworking the insideNodeA and B. My XSL code <xsl:template match="exampleNodee"> <xsl:if test="count(*)&gt;0"> <xsl:element name="unwrapped"> <xsl:element name="txt"><xsl:value-of select="text()"/></xsl:element> <xsl:apply-templates select="child::node()"/> </xsl:element> </xsl:if> </xsl:template> The trouble is that this code only creates one "txt" element with the first text of the "exampleNode" and then applies the templates for each child nodes. How can I retrieve the contents of "exampleNode" in the correct order and apply the correct templates to obtain the wanted output ? A: OK I actually found the answer to my question, it was pretty simple after all. So all I had to do was create a template matching "text()" and change my exammpleNode template as such : <xsl:element name="unwrapped"> <xsl:apply-templates select="text() | node()"/> </xsl:element> Q: When is html officially in "run-time" I just had a discussion about meta tags in html and when they are interpreted by the web browser. My colleague was adament that this is done pre-run-time (html parse time) as run-time officially starts when body.onload is triggered. When does run-time officially start in html? I know this seems trivial, but googling "html run-time" yields naught. A: As noted by nevatype, "run-time" doesn't exactly make sense for declarative languages like HTML. That's why you aren't getting any Google results. What you're asking about sounds sort of like t
966 | 696 | 197 | 549 | 466 | 0.808808 | github_plus_top10pct_by_avg
vq-\MW\,\vln\,\vu+const_1+\vln\,\softmax(\MW\,\vln\,\vu+\vb)) \\ &=\softmax(\MW\,\vln\,\vq-\MW\,\vln\,\vu+const_1+\MW\,\vln\,\vu+\vb+const_2) \\ &=\softmax(\MW\,\vln\,\vq+\vb+const_1+const_2) =\softmax(\MW\,\vln\,\vq+\vb)=\muh_{DirLin}(\vq; \MW,\vb)\\ &=\muh(\vq)\end{aligned}$$ #### 3. Consider a function $\muh(\vq)=\vmuh_{Dir}(\vq; \MA,\vc)$. Let us define a matrix $\valpha$, vector $\vb$ and vector $\pi$ as follows: $$\begin{aligned} \alpha_{ij}=a_{ij}+1,\qquad \vb=\vln\,\vc-\MA\,\vln\,\vu,\qquad \pi_i=\exp(b_i)\cdot B(\valpha^{(i)})\end{aligned}$$ with $\alpha_{ij}$ and $a_{ij}$ denoting elements of matrices $\valpha$ and $\MA$, respectively, and $\vu=(1/k,\dots,1/k)$ is a column vector of length $k$. We can now write: $$\begin{aligned} \muh(\vq)&=\muh_{Dir}(\vq; \MA,\vc) =\softmax(\MA\,\vln\,\frac{\vq}{1/k}+\vln\,\vc) =\softmax(\MA\,\vln\,\vq-\MA\,\vln\,\vu+\vln\,\vc) \\ &=\softmax((\valpha-1)\vln\,\vq+\vb)\end{aligned}$$ Element $i$ in the vector within the softmax is equal to: $$\begin{aligned} \sum_{j=1}^k (\valpha_{ij}-1)\ln(q_j)+b_j &= \sum_{j=1}^k (\valpha_{ij}-1)\ln(q_j) +\ln(\pi_i\cdot\frac{1}{B(\valpha^{(i)})}) \\ &= \ln(\pi_i\cdot \frac{1}{B(\valpha^{(i)})} \prod_{j=1}^k q_j^{\valpha_{ij}-1}) \\ &= \ln(\pi_i\cdot f_i(\valpha^{(i)}))\end{aligned}$$ and therefore: $$\begin{aligned} \muh(\vq)=\softmax((\valpha-1)\vln(\vq)+\vb)=\softmax(\ln(\pi_i\cdot f_i(\valpha^{(i)})))=\vmuh_{DirGen}(\vq; \valpha,\vpi)\end{aligned}$$ The following proposition proves that temperature scaling can be viewed as a general-purpose calibration method, being a special case within the Dirichlet calibration map family. Let us denote the temperature scaling family by $\muh'_{TempS}(\vz; t)=\softmax(\vz/t)$ where $\vz$ are the logits. Then for any $t$, temperature scaling can be expressed as $$\begin{aligned} \muh'_{TempS}(\vz; t)=\muh_{DirLin}(\softmax(\vz); \frac{1}{t}\MI, \vzero)\end{aligned}$$ where $\MI$ is the identity matrix and $\vzero$ is the vector of zeros. 
Let us first observe that for any $\vx\
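The proposition is easy to verify numerically: $\softmax(\vz/t)$ and $\muh_{DirLin}(\softmax(\vz); \frac{1}{t}\MI, \vzero)$ agree because $\ln\,\softmax(\vz) = \vz - \mathrm{logsumexp}(\vz)$, and the constant shift cancels inside the outer softmax. A standalone check (ours, not from the paper):

```python
import math

def softmax(v):
    # shift by the max for numerical stability
    m = max(v)
    exps = [math.exp(x - m) for x in v]
    s = sum(exps)
    return [e / s for e in exps]

def temp_scaling(z, t):
    """mu'_TempS(z; t) = softmax(z / t) on the logits."""
    return softmax([zi / t for zi in z])

def dirlin(q, t):
    """mu_DirLin(q; W, b) with W = (1/t) I and b = 0:
    softmax of (1/t) * ln q, acting on probabilities q."""
    return softmax([math.log(qi) / t for qi in q])

z = [2.0, -1.0, 0.5, 3.3]
t = 1.7
a = temp_scaling(z, t)
b = dirlin(softmax(z), t)   # should coincide with a
```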
967 | 1,800 | 1,470 | 960 | 3,044 | 0.775238 | github_plus_top10pct_by_avg
nd §\[details\]. We will show that a germ $\alpha(t)$ as above leads to a rank-2 limit (and hence does not contribute a component to the PNC) unless $\alpha(t)$ and certain formal branches (cf. [@MR88a:14001] and [@MR1836037], Chapter 6 and 7) of the curve are closely related. More precisely, we will prove the following result. \[standardform\] Let $\alpha(t)$ be as specified above, and assume that $\lim_{t\to 0}\mathcal C\circ\alpha(t)$ is not a rank-2 limit. Then ${{\mathscr C}}$ has a formal branch $z=f(y)$, tangent to $z=0$, such that $\alpha$ is equivalent to a germ $$\begin{pmatrix} 1 & 0 & 0 \\ t^a & t^b & 0 \\ \underline{f(t^a)} & \underline{f'(t^a) t^b} & t^c\end{pmatrix} \quad,$$ with $a<b<c$ positive integers. Further, it is necessary that $\frac ca\le \lambda_0+2(\frac ba-1)$, where $\lambda_0>1$ is the (fractional) order of the branch. For a power series $g(t)$ with fractional exponents, we write here $\underline{g(t)}$ for its truncation modulo $t^c$. (The truncations appearing in the statement are in fact polynomials.) The proof of the proposition requires the analysis of several cases. We will first show that under the hypothesis that $\lim_{t\to 0} {{\mathscr C}}\circ\alpha(t)$ is not a rank-2 limit we may assume that $q(t)\not\equiv 0$, and this will allow us to replace it with a power of $t$; next, we will deal with the $b=c$ case; and finally we will see that if $b<c$ and $\alpha(t)$ is not in the stated form, then the limit of every irreducible branch of ${{\mathscr C}}$ is a star with center $(0:0:1)$. This will imply that the limit of ${{\mathscr C}}$ is a kernel star in this case, proving the assertion by Lemma \[rank2lemma\]. This analysis is carried out in §\[formalbranches\]–\[Eop\]. In §\[charaV\] we determine germs of the form given in Proposition \[standardform\] that can lead to components of type V, obtaining the description given in §\[germlist\]. 
In §\[quadritangent\] we complete the proof of Theorem \[mainmain\], recovering the description given in §\[de
rank_avg: 968 | rank_max: 1,833 | rank_min: 1,853 | rank_median: 950 | rank_by_avgsim: null | avgsim_to_github: null | dataset: github_plus_top10pct_by_avg
the polar angle $\phi$ is shown in Fig. \[Fig::occl1\]. Besides a change of the absolute hit rate, analogous distributions can be observed for the remaining layers. The $\phi$ distribution shows a significant increase of the number of hits in the region $|\phi|<50^\circ$, due to the particles with large hit time which are not produced symmetrically around the $z$ axis. The “spikes” in the $\phi$ distributions are due to particles crossing the overlapping regions of neighbouring ladders. Occupancy --------- [r]{}[0.5]{} ![image](Fig8.eps){width="0.5\columnwidth"} In order to calculate the occupancy, the effective path length of the particles inside the sensitive volume of the detector ought to be accounted for. It may reach up to several millimeters, especially for backscattered particles which were produced at a small polar angle in order to reach the VXD.\ The occupancy depends on the characteristics of the VXD, namely the pixel size, integration time, number of hit pixels per impact, and effective thickness of the sensitive volume. In the absence of a chosen sensor technology, a set of those parameters has been agreed upon in the ILD vertex community as a reference, and they have been used to estimate the occupancy. As a comparison, the occupancy has also been estimated in the framework of a specific technology (CMOS [@cmos]). The parameters describing both options are shown in Tab. \[Tab::sC\].

  ------- ---------------- --------------------------- ---------------- ---------------------------
  layer   pitch ($\mu$m)   integration time ($\mu$s)   pitch ($\mu$m)   integration time ($\mu$s)
  1       25               50                          20               25
  2       25               200                         25               50
  3       25               200                         33               100
  4       25               200                         33               100
rank_avg: 969 | rank_max: 3,925 | rank_min: 1,544 | rank_median: 924 | rank_by_avgsim: null | avgsim_to_github: null | dataset: github_plus_top10pct_by_avg
l{final} && F^{(1)}=\frac{\alpha}{3\pi}\left[\frac{(Z\alpha)^2}{\gamma}x^2+ \frac{\kappa(\gamma+\kappa)}{\gamma^2}x+\frac{\kappa}{2\gamma^2}+a \right]\nonumber\\ && G^{(1)}=\frac{\alpha}{3\pi}\left[\frac{(Z\alpha)^2}{\gamma}x^2- \frac{\kappa(\gamma-\kappa)}{\gamma^2}x-\frac{\kappa}{2\gamma^2}+a\right],\end{aligned}$$ where $a$ is some constant. Using ${\cal F}$ and ${\cal G}$ instead of $F$ and $G$ in Eq. (\[pnc\]) we obtain the PNC matrix element in the form $$\label{pnc1} <p_{1/2}|H_{W}|s_{1/2}>=M_0\left(1+\delta\right),$$ where $\delta$ due to the diagram Fig.1b is $$\label{ffgg} \delta_b={{1+\gamma}\over{2}}\left(F_s^{(1)}+G_p^{(1)}\right)+ {{1-\gamma}\over{2}}\left(F_p^{(1)}+G_s^{(1)}\right).$$ To find the correction $\delta_b$ with the logarithmic accuracy there is no need to calculate $a$ in Eqs. (\[final\]), it is enough to substitute the logarithmic terms from (\[final\]) into (\[ffgg\]). An analysis that also includes a consideration of distances $r\sim \lambda_C$ gives $$\label{db} \delta_b=\alpha\left({1\over{4}}Z\alpha+ {{2(Z\alpha)^2}\over{3\pi\gamma}} \left[\ln^2(b\lambda_C/r_0)+B\right]\right),$$ where $b=\exp(1/(2\gamma)-C-5/6)$, and $B\sim 1$ is some smooth function of $Z\alpha$ independent of $r_0$. A numerical calculation of $\delta_b$ for Cs was performed recently in Ref. [@W]. The result is in a good agreement with Eq. (\[db\]). Comparison of Eq. (\[db\]) with the result of Ref. [@W] allows also to determine $B$: $B\approx 1$. We would like to emphasize that Eq. (\[db\]) does not assume that $Z\alpha \ll 1$, it is valid for any $Z\alpha <1$. Note that the $(Z\alpha)^2$ term in (\[db\]) is larger than the $Z\alpha$ one at $Z> 10$. We already pointed out that the weak charge calculated in Refs.[@Mar1; @Mar2] corresponds to zero momentum transfer. On the other hand, it is clear from Eq. (\[pnc\]) that the weak interaction matrix element is determined by the momentum transfer $q\sim 1/r_0$. 
The renormalization of the weak charge from $q=0$ to $q\sim 1/r_0$ is described by diagrams c and d i
rank_avg: 970 | rank_max: 1,024 | rank_min: 1,489 | rank_median: 1,114 | rank_by_avgsim: null | avgsim_to_github: null | dataset: github_plus_top10pct_by_avg
------------------------- -----------------
Data Source Information   Relevant Fields
------------------------- -----------------
rank_avg: 971 | rank_max: 598 | rank_min: 542 | rank_median: 1,245 | rank_by_avgsim: null | avgsim_to_github: null | dataset: github_plus_top10pct_by_avg
we present a collective detection technique based on statistical divergence. The technique extracts distribution similarities among data collections, and then uses the statistical divergence to detect collective anomalies. Our technique continuously evaluates metrics as evolving features and calculates an adaptive threshold to meet the best mathematical expectation. To illustrate the details of the technique and explore its efficiency, we conducted a case study on a real-world problem: detecting click farming by malicious online sellers. The evaluation shows that these techniques provided efficient classifiers. They were also sufficiently sensitive to a much smaller magnitude of data alteration than real-world malicious behaviours. Thus, the technique is applicable in the real world.' author: - | [Ruoyu Wang[$^{1,2}$]{}, Daniel Sun[$^{2,3}$]{}, Guoqiang Li[$^{1*}$]{} ]{}\ *$^{1}$School of Software, Shanghai Jiao Tong University, China\ $^{2}$School of Computer Science and Engineering, University of New South Wales, Australia\ $^{3}$Data61, CSIRO, Australia\ {ruoyu.wang,li.g}@sjtu.edu.cn, daniel.sun@data61.csiro.au* title: Statistical Detection of Collective Data Fraud --- Introduction ============ Statistical divergence is widely applied in multimedia processing. Prevalent applications include multimedia event detection [@amid2014unsupervised], content classification [@moreno2004kullback; @park2005classification] and qualification [@pheng2016kullback; @goldberger2003efficient]. It has been attracting more attention since the dawn of the big data era, largely due to the regularity and interpretable features displayed in the data. However, across a broader range of data domains (e.g. online sales records), these advantages may not stand out, which calls for a more general approach. Currently, there are more than 2.7 ZB of data in the digital universe [@bigDataStatistics], and the volume is doubling every two years.
It has already been hard and will be much harder in the future to harness the exploding volume
rank_avg: 972 | rank_max: 1,247 | rank_min: 591 | rank_median: 963 | rank_by_avgsim: 1,839 | avgsim_to_github: 0.785175 | dataset: github_plus_top10pct_by_avg
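The divergence-based detection idea described in this excerpt can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the histogram binning and the mean-plus-three-sigma rule are stand-ins for the paper's adaptive threshold, and all names are assumptions.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for two discrete distributions given as count arrays."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def detect_collective_anomaly(reference, window, history, bins=20):
    """Flag a window of data as collectively anomalous.

    reference: sample regarded as normal behaviour
    window:    new data collection to test
    history:   divergences observed on past (presumed normal) windows,
               used to set the adaptive threshold
    Returns (is_anomaly, divergence, threshold).
    """
    lo = min(reference.min(), window.min())
    hi = max(reference.max(), window.max())
    p, edges = np.histogram(reference, bins=bins, range=(lo, hi))
    q, _ = np.histogram(window, bins=edges)
    d = kl_divergence(q, p)
    # illustrative adaptive threshold: mean + 3*std of past divergences
    threshold = float(np.mean(history) + 3.0 * np.std(history))
    return d > threshold, d, threshold
```

A window whose empirical distribution drifts away from the reference (e.g. a coordinated click-farming burst) yields a divergence well above the history-derived threshold, while ordinary fluctuations stay below it.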
el{\otimes}{\to}{\sV}(A\times B\op\times B\times C\op)\stackrel{\int^B}{\to}{\sV}(A\times C\op),$$ and also this operation enjoys associativity and unitality properties. \[thm:bicategory\] If is a monoidal left derivator, then there is a bicategory $\cProf({\sV})$ described as follows: - Its objects are small categories. - Its hom-category from $A$ to $B$ is ${\sV}(A\times B\op)$. - Its composition functors are the external-canceling tensor products $$\otimes_{[B]} \colon {\sV}(A\times B\op) \times {\sV}(B\times C\op) \too {\sV}(A\times C\op).$$ - The identity 1-cell of a small category $B$ is $$\lI_B\;=\;(t,s)_! \lS_{{\ensuremath{\operatorname{tw}}}(B)} \;\cong\; (t,s)_! \pi_{{\ensuremath{\operatorname{tw}}}(B)}^* \lS_{\bbone} \; \in {\sV}(B\times B\op).\label{eq:unit}$$ The notation related to the identity $1$-cells $\lI_B\in{\sV}(B\times B\op)$, also called **identity profunctors**, is as follows. ${\ensuremath{\operatorname{tw}}}(B)$ is the **twisted arrow category** of $B$, i.e., the category of elements of $\hom_B$, and the functor $(t,s)\colon{\ensuremath{\operatorname{tw}}}(B)\to B\times B\op$ sends a morphism to its target and source (see [@gps:additivity §5]). We refer to $\cProf({\sV})$ as the **bicategory of profunctors** in . \[con:wcolim\] Let be a monoidal left derivator and let be a -module with tensors $\otimes\colon{\sV}\times{\sD}\to{\sD}$. The external-canceling version of this morphism yields functors $$\otimes_{[B]}\colon{\sV}(A\times B\op)\times {\sD}(B\times C\op)\to{\sD}(A\times C\op).$$ Passing to parametrized versions of these functors, we obtain an external-canceling tensor morphism $$\otimes_{[B]}\colon{\sV}^{A\times B\op}\times{\sD}^{B\times C\op}\to{\sD}^{A\times C\op}.$$ In particular, plugging in a fixed $W\in{\sV}(A\times B\op)$ and specializing to $C=\bbone$, we obtain an induced partial morphism $$\colim^W=(W\otimes_{[B]}-)\colon{\sD}^B\to{\sD}^A,$$ the **weighted colimit morphism with weight** $W\in{\sV}(A\times B\op)$. 
We abuse terminology and refer to a morph
rank_avg: 973 | rank_max: 503 | rank_min: 1,262 | rank_median: 986 | rank_by_avgsim: 2,823 | avgsim_to_github: 0.776752 | dataset: github_plus_top10pct_by_avg
ic kinds of navigational patterns. To explicitly rule this possibility out, we would need to investigate the underlying link networks in greater detail, which we leave open for future work. We also plan on looking at data capturing navigational paths over distinct platforms of the Web (e.g., from toolbar data), which may allow us to make even more generic statements about human navigation on the Web. ![**Common global transition patterns of navigational behavior on the Wikigame topic dataset.** The results should be compared with Figure \[fig:heatmaps\]. The results are split by only looking at a corpus of paths where each path starts with the same topic as it ends (A) and by looking at a corpus with distinct start and target categories (B). []{data-label="fig:heatmaps_same_diff"}](wikigame_same_diff){width="\textwidth"} Conclusions {#sec:conclusions .unnumbered} =========== This work presented an extensive view on detecting memory and structure in human navigational patterns. We leveraged Markov chain models of varying order for detecting memory of human navigation and took a thorough look at structural properties of human navigation by investigating Markov chain transition matrices. We developed an open source framework[^16] [@github] for detecting memory of human navigational patterns by calculating the appropriate Markov chain order using four different, yet complementary, approaches (likelihood, Bayesian, information-theoretic and cross-validation methods). In this article we thoroughly present each method and emphasize strengths, weaknesses and relations between them. By applying this framework to actual human navigational data, we find that it is indeed difficult to make plausible statements about the appropriate order of a Markov chain when data is insufficient but the number of states is vast, which results in overly complex models.
However, by representing pages by their corresponding topic we could identify that navigation on a topical level is not memoryless – an order of two and respectively three best exp
rank_avg: 974 | rank_max: 781 | rank_min: 2,309 | rank_median: 880 | rank_by_avgsim: null | avgsim_to_github: null | dataset: github_plus_top10pct_by_avg
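The information-theoretic flavour of order selection mentioned in this excerpt can be sketched as follows: fit Markov models of increasing order by maximum likelihood and pick the order minimizing AIC. This is an illustrative reconstruction, not the cited open-source framework.

```python
from collections import Counter
import math

def log_likelihood(paths, k):
    """Log-likelihood of an order-k Markov model fit by MLE counts.

    paths: list of state sequences (states are hashable labels).
    k = 0 corresponds to a memoryless (i.i.d.) model.
    """
    ctx_counts, trans_counts = Counter(), Counter()
    for path in paths:
        for i in range(k, len(path)):
            ctx = tuple(path[i - k:i])          # length-k context
            ctx_counts[ctx] += 1
            trans_counts[ctx + (path[i],)] += 1
    ll = 0.0
    for key, n in trans_counts.items():
        # MLE transition probability = n(context, state) / n(context)
        ll += n * math.log(n / ctx_counts[key[:-1]])
    return ll

def best_order_aic(paths, states, max_order=3):
    """Return the Markov order minimizing AIC = 2*params - 2*loglik."""
    scores = {}
    for k in range(max_order + 1):
        n_params = (len(states) ** k) * (len(states) - 1)
        scores[k] = 2 * n_params - 2 * log_likelihood(paths, k)
    return min(scores, key=scores.get)
```

The parameter count grows exponentially in the order, which is exactly the "too many states, too complex models" effect the excerpt warns about: with limited data the penalty term quickly dominates.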
^{\iota}(X,t)\rangle&=& \overrightarrow{\cal U}_{{\mbox{\tiny\boldmath$\cal B$}},[\hat{\rho}]}(t) \vert\psi^{\iota}(X)\rangle \\ \langle\psi^{\iota}(X,t)\vert&=& \langle\psi^{\iota}(X,)\vert \overleftarrow{\cal U}_{{\mbox{\tiny\boldmath$\cal B$}},[\hat{\rho}]}(t) \;.\end{aligned}$$ Quantum classical averages can be written as $$\begin{aligned} \langle\hat{\chi}\rangle(t)&=&\int dX\sum_{\iota}w_{\iota} \langle\psi^{\iota}(X,t)| \hat{\chi} |\psi^{\iota}(X,t)\rangle \;. \label{eq:wave-ave}\end{aligned}$$ One can always transform back to the operator picture to show that the probability is conserved. Non-linear wave dynamics by means of non-Hamiltonian brackets ------------------------------------------------------------- The wave equations in (\[eq:fckrk\]) were derived starting from the non-Hamiltonian commutator expressing the dynamics of phase space dependent operators [@b3]. It is interesting to recast quantum-classical wave dynamics itself by means of non-Hamiltonian brackets. It turns out that this form of the wave equations generalizes the mathematical formalism first proposed by Weinberg [@weinberg] in order to study possible non-linear effects in quantum mechanics (see Appendix \[app:weinberg\]). Consider a case in which a single state is present, *i.e.* $\iota=1$. 
Then, consider the wave fields $\vert\psi\rangle$ and $\langle\psi\vert$ as coordinates of an abstract space, and denote the point of such a space as $$\mbox{\boldmath$\zeta$}=\left[\begin{array}{c}|\psi\rangle\\ \langle\psi|\end{array}\right]\;.$$ Introduce the function $${\cal H}=\langle\psi\vert\hat{H}\vert\psi\rangle\;,$$ and the antisymmetric matrix operator $$\mbox{\boldmath$\Omega$} = \left[\begin{array}{cc}0 & 1-\frac{\hbar}{2i}\frac{\left\{\hat{H},\ln(\hat{\rho})\right\}_{ \mbox{\tiny\boldmath$\cal B$}}\vert\psi\rangle}{\hat{H}\vert\psi\rangle} \\ -1+\frac{\hbar}{2i}\frac{\left\{\ln(\hat{\rho}),\hat{H}\right\}_{ \mbox{\tiny\boldmath$\cal B$}}\vert\psi\rangle}{\langle\psi\vert\hat{H}} & 0 \end{array}\right]$$ Equations (\[eq:fckrk
rank_avg: 975 | rank_max: 883 | rank_min: 1,698 | rank_median: 1,084 | rank_by_avgsim: 3,680 | avgsim_to_github: 0.770754 | dataset: github_plus_top10pct_by_avg
his type of differential operator. This is the underlying reason why the method of finding the unitary irreps of NHEK’s isometry introduced in Sec. \[sec:high-lowest-weight\] will lead to separation of variables in many physical systems. Scalar Laplacian {#sec:sep-scalar} ---------------- As the first example, we look at the massless scalar wave equation $\Box \psi = S$ in NHEK space time, where $S$ is a source term (including a mass term also works). Since the scalar d’Alembert operator $\Box \equiv {\nabla}^{a}{\nabla}_{a}$ is built only from $g_{ab}$ and ${\nabla}_{a}$, it should commute with $\Omega$ and ${\mathcal{L}}_{X}$ where $X$ is any KVF. To show this explicitly, note that in Poincaré coordinates, $\Box\psi$ can be written as $$\begin{aligned} \Box \psi={}&\frac{1}{2\Gamma(u)}\Bigg\{(\Omega + \Xi(u){\mathcal{L}}^2_{Q_0})\psi +{\mathcal{L}}_{\partial _u}\left[(1-u^2){\mathcal{L}}_{\partial _u}\psi\right]\Bigg\} \,,\end{aligned}$$ where $\Xi(u)\equiv\Lambda(u)^{-2}-1$. Assume we can decompose an arbitrary scalar field $\psi(T,\,\Phi,\,R,\,u)$ according to $$\begin{aligned} \psi &= \sum_{mhk}C_{mhk}(u)F^{(m\,h\,k)}(T,\,\Phi,\,R) \\ \nonumber &= \sum_{mhk} \psi_{mhk}(T,\,\Phi,\,R,\,u), \end{aligned}$$ where $F$ is the scalar basis on $\Sigma_u$ and $C_{mhk}$ are some unknown functions of $u$. We also decompose the source term using the scalar basis functions via $S=\sum_{mhk} S_{mhk} F^{(m\,h\,k)}$. The basis functions $F^{(m\,h\,k)}$ are eigenfunctions of $\Omega$ and ${\mathcal{L}}_{Q_{0}}$, and so $\psi_{mhk}$ are also eigenfunctions. Therefore it is straightforward to see that the $(T,\Phi,R)$-dependence in $\psi_{mhk}$ is invariant after applying the scalar box operator. 
The equation for a specific mode labeled by $(m,h,k)$ becomes $$\begin{aligned} &S_{mhk} F^{(m\,h\,k)}={}\Box^{(m,h)} \psi_{mhk}={}\frac{1}{2\Gamma(u)}\times \\{\nonumber}&\times\Bigg\{[h(h+1) - m^{2} \Xi(u)]\psi_{mhk} +\mathcal{L}_{\partial _u}\left[(1-u^2)\mathcal{L}_{\partial _u}\psi_{mhk}\right]\Bigg
rank_avg: 976 | rank_max: 3,948 | rank_min: 781 | rank_median: 751 | rank_by_avgsim: null | avgsim_to_github: null | dataset: github_plus_top10pct_by_avg
tic). Of course the proofs can be rewritten with prefixes $\alpha \in ({\cal R} \times {\cal R})^*$. #### Strategies $\;\;$\ The formal systems ${\cal J}(T_0,T'_0,S_0,{\cal B})$ described in subsection \[subsec\_formal\_systems\] were devised so that their set of judgments is recursive. Let us consider now the formal systems $\hat{{\cal J}}(T_0,T'_0,S_0,{\cal B})$ really considered in pages 21-24. Their judgments are also of the forms $$m {\:|\!\!\!=\!\!\!\!=\:}(T,T',S),\;\;m {\:|\!\!\!=\!\!\!\!=\:}(T,T',S) \leadsto \alpha {\:|\!\!\!=\!\!\!\!=\:}(T_1,T'_1,S_1),\;\; m {\:|\!\!\!=\!\!\!\!=\:}(T,T',S) \leadsto \alpha {\:|\!\!\!=\!\!\!\!=\:}{{\rm SUCC}}$$ but where $S,S_1$ are D-strategies (instead of finite prefixes of strategies), “except when a judgment is obtained by rule R2”: see the fuzzy remark on page 23, line 11, followed by the enigmatic remark that “we could complete the definition anyhow for such cases”. Since $S,S_1,S_2,S_3,S_4,S_5,{\rm Id_{D,2}},{\rm Id_{E,2}}$ are really D-strategies and $S_6$ is obtained by an application of rule R2, it seems that our proofs $\pi_3,\pi_5,\pi_6$ are also proofs in the systems $\hat{{\cal J}}(T_0,T'_0,S_0,{\cal B})$. As well, replacing ${\rm Id_{C,1}}$ by ${\rm Id_{C,\infty}}$ in $\pi_4$, we obtain a proof of judgment $0 {\:|\!\!\!=\!\!\!\!=\:}(C(L_1),C(L_1),{\rm Id_{C,\infty}}) \leadsto (\varepsilon,\varepsilon) {\:|\!\!\!=\!\!\!\!=\:}{{\rm SUCC}}$ in the system $(\hat{{\cal J}}(C(L_1),C(L_1),{\rm Id_{C,\infty}},{\cal B}))$. 
#### Depth of the examples $\;\;$\ One can devise such proofs of non-bisimilar pairs, with an arbitrary long initial strategy: it suffices to add non-terminals $D_1,D_2,\ldots, D_k,E_1,E_2,\ldots ,E_k$ and to replace rules (\[ruleD\],\[ruleE1\],\[ruleE2\],\[ruleL1\]) by the sequence of rules: $$\begin{aligned} D(v) &{\stackrel{x}{\longrightarrow_{}}} & D_1(v) \label{nruleD}\\E(v) &{\stackrel{x}{\longrightarrow_{}}} & E_1(v) \label{nruleE}\\\vdots&&\vdots \nonumber\\ D_1(v) &{\stackrel{x}{\longrightarrow_{}}} & D_2(v) \label{nruleD1}\\E_
rank_avg: 977 | rank_max: 466 | rank_min: 973 | rank_median: 1,098 | rank_by_avgsim: 1,536 | avgsim_to_github: 0.788525 | dataset: github_plus_top10pct_by_avg
he right hand side of is precisely $$\mu_1\left\|\sum_p\nu^p\right\|^2-n\sum_p\left\|\nu^p\right\|^2,$$ which by Lemma \[ineq-1\] is bounded above by $\mu_1n^2-n\left\|\mu\right\|^2$ with equality only where either $\rho^p=(1^{\mu_p})$ (case (ii)) or all $\rho^p$ are equal and $\mu=(t^{n/t})$ for some $t$ (case (i)). Combining this with Lemma \[n-ineq\] we see that to obtain the maximum of the left hand side of we must also have $\rho^1\cup \cdots\cup\rho^r=\lambda$. In case (i) then, $\lambda$ is the union of $n/t$ copies of $\lambda_0$, the common value of $\rho^p$, and in case (ii), $\lambda=(1^n)$. We first prove (ii). Using Lemma \[v-lambda\] we have $$-v(\lambda)=(2g-2+k) n(\lambda) +(g-1)n -\sum_{i=1}^kv(\lambda,\mu^i)= \frac{\delta}{n}n(\lambda)+(g -1)n +\frac1n\sum_{i=1}^k\left[\mu_1^in(\lambda)-nv(\lambda,\mu^i)\right]. \label{vlambda}$$ The terms $n(\lambda)$ and $\sum_{i=1}^n\left[\mu_1^in(\lambda)-nv(\lambda,\mu^i)\right]$ are all maximal at $\lambda=(1^n)$ (the last by Proposition \[opti\]). Hence $-v(\lambda)$ is also maximal at $(1^n)$, since $\delta\geq 0$. Now $n(\lambda)$ has a unique maximum at $(1^n)$ by Lemma \[n-ineq\], hence $-v(\lambda)$ reaches its maximum at other partitions if and only if $\delta=0$ and for each $i$ we have $\mu^i=(t_i^{n/t_i})$ for some positive integer $t_i\mid n$ (again by Proposition \[opti\]). In this case the maximum occurs only for $\lambda$ the union of $n/t$ copies of a partition $\lambda_0\in \calP_t$, where $t=\gcd t_i$. Now (ii) follows from Proposition \[affine-descrip\]. To prove (i) we use Lemma \[v-lambda\] and and find that $v((1^n))=-\Delta(\muhat)$ as claimed. Let $\muhat=(\mu^1,\mu^2,\ldots,\mu^k)\in {\P_n}^k$ with $\delta(\muhat) \geq 0$. Suppose that $v(\lambda)$ is minimal. Then the coefficient of $q^{v(\lambda)}$ in $\calA_{\lambda \muhat}$ is $1$. \[lowest\] We use the notation of the proof of Lemma \[v-fmla-lemma\]. 
Note that the coefficient of the lowest power of $q$ in $\calH_\lambda(\sqrt q,1/\sqrt q) \left(q^{-n(\lambda
rank_avg: 978 | rank_max: 1,412 | rank_min: 836 | rank_median: 1,071 | rank_by_avgsim: null | avgsim_to_github: null | dataset: github_plus_top10pct_by_avg
rate (FDR) are presented in [Table [2](#tbl2){ref-type="other"}](#tbl2){ref-type="other"}. Interestingly, the metabolites were predominantly increased following freezing at −80 °C prior to preparation of fecal water ([Figure [2](#fig2){ref-type="fig"}](#fig2){ref-type="fig"}). The sample stored on ice for 24 h had the highest fidelity to the freshly prepared samples. Closer examination of the significantly different metabolites revealed that the BCAAs, aromatic amino acids, Krebs cycle intermediates, and monosaccharides were found at higher levels in fecal water prepared from frozen samples. However, the short chain fatty acid (SCFA), butyrate, was lower in fecal water from frozen/defrosted, and on ice samples. [Figure [3](#fig3){ref-type="fig"}](#fig3){ref-type="fig"} shows the typical spectra of fecal water analyzed from fresh, frozen, and on ice samples. ![Box plots of significant metabolites from the resulting analysis of fecal water from fresh, frozen/defrosted, and on ice samples. \* denotes significant differences between frozen and fresh samples and frozen and on ice samples (Bonferroni post hoc). \# denotes significant difference between fresh and frozen sample and fresh and on ice samples (Bonferroni post hoc). All *p*-values were FDR adjusted.](ao-2018-01761t_0002){#fig2} ![600 MHz ^1^H NMR spectra of fecal water colored by sample storage: analysis of fecal water from fresh (blue), frozen (red), and on ice (green) samples. Assignations of significant metabolites (FDR \< 0.05) are presented. BCAAs: valine, leucine, and isoleucine.](ao-2018-01761t_0003){#fig3} ###### Differential Fecal Metabolites between the Different Conditions of Sample Storage[a](#t2fn1){ref-type="table-fn"} metabolite *P*-value FDR description --------------- ----------- ------- ---------------------------------------------------------- aspartate \<0.001 0.008 Krebs cycle: oxaloacetate transamination butyrate \<0.001 0.017 short chain fatty acid fructose \<0.001 0.008
rank_avg: 979 | rank_max: 146 | rank_min: 2,163 | rank_median: 1,311 | rank_by_avgsim: null | avgsim_to_github: null | dataset: github_plus_top10pct_by_avg
$R$ (see \[eqn:R\]) of the neutron frequency to the frequency of the cohabiting mercury magnetometer, which samples the volume uniformly. The measured EDM signals as a function of this ratio are shown in 13 of [@pendlebury04], and are fitted to the straight lines anticipated from \[eqn:DeltaR\] above. However, the frequency shifts due to the enhanced depolarization mean that the appropriate frequency ratio is more complex than this, and a function similar to that shown in \[fig:freq\_shift\] is required instead. Fitting to these lines should therefore be carried out with due care and attention, and only after careful modelling. It is clearly far preferable to undertake EDM measurements in conditions of very low magnetic-field gradients. Conclusion ========== UCN are of very low energy, and preferentially populate the lower regions of any trap within which they are contained. It has been shown that these gravitational effects result in a significant enhancement of the $T_2$ relaxation of the UCN, and can also lead to shifts in the measured Larmor precession frequency. Although there are potential impacts upon systematic-error calculations for EDM measurements, these are at a very manageable level; nonetheless, they underline the importance both of careful and precise modelling of the system, and also of keeping to an absolute minimum any magnetic-field gradients within the measurement apparatus. Acknowledgments {#acknowledgments .unnumbered} =============== This work was supported in part by grant no. ST/K001329/1 from the UK Science and Technology Facilities Council. --- author: - | János Kollár\ [ ]{}\ [with an appendix by]{} C. Raicu bibliography: - 'refs.bib' title: Quotients by finite equivalence relations --- Let $f:X\to Y$ be a finite morphism of schemes. Given $Y$, one can easily describe $X$ by the coherent sheaf of algebras $f_*{{\mathcal O}}_X$. Here our main interest is the converse. Given $X$, what kind of data do we need to construct $Y$? 
For this question, the surjectivity o
rank_avg: 980 | rank_max: 310 | rank_min: 1,514 | rank_median: 981 | rank_by_avgsim: null | avgsim_to_github: null | dataset: github_plus_top10pct_by_avg
{\label{eq:2nddec-bd:n=0}} &\bigg(\sup_y\sum_{v,b}|v|^2P_{\Lambda;o}^{\prime{{\scriptscriptstyle}(0)}}(v,{\underline{b}})\, \tau_b\big(\delta_{{\overline{b}},y}+\tilde G_\Lambda({\overline{b}},y)\big)\bigg)\bigg( \sup_z\sum_{y,v}\tilde Q''_{\Lambda;v,o}(y,z)\bigg)^{j-1}\bigg( \sum_{z,x}P'_{\Lambda;o}(z,x)\bigg){\nonumber}\\ &\quad\leq d\sigma^2O(\theta_0)^{j+1}.\end{aligned}$$ ![\[fig:tildeQ”\]The leading diagrams of $\tilde Q''_{\Lambda;u,v}(y,x)$, due to $P_{\Lambda;u,v}^{\prime\prime{{\scriptscriptstyle}(0)}}(y,{\underline{b}})$ and $P_{\Lambda;u}^{\prime{{\scriptscriptstyle}(0)}}(y',{\underline{b}})$ in [(\[eq:tildeQ”-def\])]{}, respectively.](tildeQpp1 "fig:") ![\[fig:tildeQ”\]The leading diagrams of $\tilde Q''_{\Lambda;u,v}(y,x)$, due to $P_{\Lambda;u,v}^{\prime\prime{{\scriptscriptstyle}(0)}}(y,{\underline{b}})$ and $P_{\Lambda;u}^{\prime{{\scriptscriptstyle}(0)}}(y',{\underline{b}})$ in [(\[eq:tildeQ”-def\])]{}, respectively.](tildeQpp2 "fig:") \(iii) By translation invariance and [(\[eq:tildeQ”-def\])]{}–[(\[eq:tildeQ”-bd\])]{}, the contribution from $|a_n|^2$ for an $n\ne0,j$ is bounded by $$\begin{aligned} {\label{eq:2nddec-bd:0<n<j}} &\bigg(\sum_{v,y}P_{\Lambda;v}^{\prime{{\scriptscriptstyle}(0)}}(o,y)\bigg)\bigg(\sup_y \sum_{\substack{b,v,z\\ {\underline{b}}=y}}\tau_bQ''_{\Lambda;o,v}({\overline{b}},z)\bigg)^{n -1}\bigg(\sup_z\sum_{y,v}\tilde Q''_{\Lambda;v,o}(y,z)\bigg)^{j-1-n} \bigg(\sum_{z,x}P'_{\Lambda;o}(z,x)\bigg){\nonumber}\\ &\times\bigg(\sup_{y,z}\sum_{\substack{b,b',v\\ {\underline{b}}=y}}\Big(|{\underline{b}}' |^2{\mathbbm{1}{\raisebox{-2pt}{$\scriptstyle \{n\text{ odd}\}$}}}+|v-{\underline{b}}|^2{\mathbbm{1}{\raisebox{-2pt}{$\scriptstyle \{n\text{ even}\}$}}}\Big)\,\tau_b Q''_{\Lambda;o,v}({\overline{b}},{\underline{b}}')\,\tau_{b'}\big(\delta_{{{\overline{b}}^{\raisebox{-2pt}{$\scriptscriptstyle\prime$}}},v+z} +\tilde G_\Lambda({{\overline{b}}^{\raisebox{-2pt}{$\scriptstyle\prime$}}},v+z)\big)\bigg),\end{aligned}$$ where the first line is 
$O(\theta_0)^{j-2}$. Th
rank_avg: 981 | rank_max: 396 | rank_min: 1,389 | rank_median: 1,115 | rank_by_avgsim: null | avgsim_to_github: null | dataset: github_plus_top10pct_by_avg
\frac{a^2 k_{max}^2}{2}\right) - I_1\left(\frac{a^2 k_{max}^2}{2}\right) \right] \label{eq:forcegamma0}$$ in terms of the modified Bessel functions of the first kind $I_n(x)$ and where $k_{max} =2\sqrt{V_p^2 -1}$. For vanishing $a$ the dominant term is proportional to $(V_p^2-1)/V_p$  [@astrakharchik2004motion]. This drag is pertaining to energy dissipation by radiating sound waves in the condensate away from the impurity. We stress again that we assume $a$ small enough such that emission of other excitations, such as vortex pairs, does not occur. It is important to note [@astrakharchik2004motion; @pinsker2017gaussian] that in order to obtain a real value for the force in Eq. (\[eq:forcegamma0\]) one has to consider that it has been obtained from the limit $\gamma\rightarrow 0^+$ in (\[eq:force\_int\]), which implies that an infinitesimal positive imaginary part needs to be considered in the denominator to properly deal with the poles in the integral. ![image](Figure_1.pdf){width="\textwidth"} In general, for a non-zero $\gamma$, Eq. (\[eq:force\_int\]) simplifies upon an expansion in powers of $V_p$ to the leading order. For the linear term in $V_p$, we can perform the polar integration and arrive at $$F_\parallel = -\frac{2}{\pi}V_p \frac{\gamma}{1+\gamma^2}g^2_p \int \frac{k^3 e^{-a^2k^2}}{(4 +k^2)^2}dk \ .$$ Substituting $u = a^2(k^2 + 4)$, we find $$\begin{aligned} F_{\|} =-V_p \frac{\gamma}{1+\gamma^2}g_p^2 \frac{1}{\pi} \left[ e^{4a^2 }E_1(4a^2)(1+4a^2) - 1\right], \nonumber\\ \label{eq:drag_coeff_full}\end{aligned}$$ where $E_1(x)$ denotes the positive exponential integral. When $a \to 0$, the expression inside the bracket diverges as $-\gamma_E - 1 - \ln(4a^2)$ with $\gamma_E$ begin the Euler-Mascheroni constant. It is therefore necessary to keep a finite size $a$. 
This drag force is analogous to the viscous Stokes drag force in classical fluids since it is due to an effective interaction of the impurity with the normal fluid through the thermal drag on the BEC. The effective drag coefficient
rank_avg: 982 | rank_max: 2,229 | rank_min: 2,296 | rank_median: 1,041 | rank_by_avgsim: 2,364 | avgsim_to_github: 0.780512 | dataset: github_plus_top10pct_by_avg
PSs and RCTs. In the sensitivity analysis excluding studies stopped for harm, benefit, or futility the difference between NPSs and RCTs was no longer statistically significant (p = 0.057, missing excluded, see [Table 3](#pone.0165605.t003){ref-type="table"}). Poor recruitment was the most frequent reason for discontinuation in both NPSs (37%) and RCTs (36%) ([Table 2](#pone.0165605.t002){ref-type="table"}). Completion status was very similar in RCTs approved in Freiburg and in Canada or Switzerland. 10.1371/journal.pone.0165605.t002 ###### Study status, reasons for discontinuation. ![](pone.0165605.t002){#pone.0165605.t002g} Study characteristics REC Freiburg Other RECs[^1^](#t002fn001){ref-type="table-fn"} ------------------------------------------------------- -------------- -------------------------------------------------- --------- ---------- ---------- ---------- **Total n** 27 158 56 241 306 711 **Study status** Completed[^2^](#t002fn003){ref-type="table-fn"} 23 124 42 189 210 474 Unclear/missing[^2^](#t002fn003){ref-type="table-fn"} 2 10 8 20 18 62 Discontinued[^2^](#t002fn003){ref-type="table-fn"} 2 24 6 32 78 175 **Reasons for discontinuation** Poor recruitment 1 (50%) 6 (27%) 4 (67%)
rank_avg: 983 | rank_max: 193 | rank_min: 1,233 | rank_median: 1,301 | rank_by_avgsim: null | avgsim_to_github: null | dataset: github_plus_top10pct_by_avg
law of the projection parameter cannot be consistently estimated without sample splitting. We want to emphasize that we do not claim that the LOCO parameter is optimal in any sense. We just aim to show that there exist alternatives to the usual parameters that, when the linear model is not true, (i) are more interpretable and (ii) can be inferred more accurately. ### Problem Setup and Four (Random) Parameters that Measure Variable Importance {#problem-setup-and-four-random-parameters-that-measure-variable-importance .unnumbered} We consider a distribution-free regression framework, where the random pair $Z = (X,Y) \in \mathbb{R}^d \times \mathbb{R} $ of $d$-dimensional covariates and response variable has an unknown distribution $P$ belonging to a large non-parametric class $\mathcal{Q}_n$ of probability distributions on $\mathbb{R}^{d+1}$. We make no assumptions on the regression function $x \in \mathbb{R}^d \mapsto \mu(x) = \mathbb{E}\left[ Y | X = x \right]$ describing the relationship between the vector of covariates and the expected value of the response variable. In particular, we do not require it to be linear. We observe $\mathcal{D}_n = (Z_1,\ldots, Z_n)$, an i.i.d. sample of size $n$ from some $P \in \mathcal{Q}_n$, where $Z_i = (X_i,Y_i)$, for $i = 1,\ldots,n$. We apply to the data a procedure $w_n$, which returns both a subset of the coordinates and an estimator of the regression function over the selected coordinates. 
Formally, $$\mathcal{D}_n \mapsto w_n(\mathcal{D}_n) = \left(\widehat{S}, \widehat{\mu}_{\widehat{S}}\right),$$ where $\widehat{S}$, the selected model, is a random, nonempty subset of $\{1,\ldots,d\}$ and $\widehat{\mu}_{\widehat{S}}$ is an estimator of the regression function $x \in \mathbb{R}^d \mapsto \mathbb{E}\left[ Y | X_{\widehat{S}} = x_{\widehat{S}} \right]$ restricted to the selected covariates $\widehat{S}$, where for $x \in \mathbb{R}^d$, $x_{{\widehat{S}}} = (x_j, j \in {\widehat{S}})$ and $(X,Y) \sim P$, independent of $\mathcal{D}_n$. The model selection and e
rank_avg: 984 | rank_max: 2,929 | rank_min: 1,101 | rank_median: 949 | rank_by_avgsim: 2,432 | avgsim_to_github: 0.779797 | dataset: github_plus_top10pct_by_avg
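A LOCO-style (leave-one-covariate-out) importance measure with sample splitting, consistent with the setup in this excerpt, can be sketched as below. The least-squares fitter and the median-of-differences target are illustrative assumptions standing in for the generic procedure $w_n$, not the paper's exact definition.

```python
import numpy as np

def loco_importance(X, y, j, rng):
    """Importance of covariate j: median increase in held-out absolute
    prediction error when the model is refit without coordinate j.

    The sample is split in two: fit on the first half, evaluate on the
    second, so the selection/fitting step is independent of the
    evaluation data (the role sample splitting plays in the excerpt).
    """
    n = len(y)
    idx = rng.permutation(n)
    d1, d2 = idx[: n // 2], idx[n // 2:]
    # full-model fit (plain least squares as the stand-in procedure w_n)
    full = np.linalg.lstsq(X[d1], y[d1], rcond=None)[0]
    # refit with covariate j removed
    X_minus = np.delete(X, j, axis=1)
    reduced = np.linalg.lstsq(X_minus[d1], y[d1], rcond=None)[0]
    err_full = np.abs(y[d2] - X[d2] @ full)
    err_red = np.abs(y[d2] - X_minus[d2] @ reduced)
    # a median of pairwise error differences, a target that remains
    # interpretable even when the linear model is misspecified
    return float(np.median(err_red - err_full))
```

An important covariate yields a clearly positive value (dropping it hurts held-out prediction), while an irrelevant one yields a value near zero.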
. Perhaps a moderator, admin or dev happens to have a mentally challenged family member, or just has enough common sense to know that there are probably countless users on Stack Overflow who do. Would you call your boss that name, to his/her face? Would you feel good if your boss called you that name (to your face or behind your back)? If not, then it's probably not an appropriate username here, either. Some interesting reading, if you care to: Possibly offensive usernames Flag abusive users What are the rules governing display names and avatars? Policy on display names Q: Updating .txt file in J2ME located in Binary package I have developed J2ME application that read data from product.txt file located in binary package using following code InputStream is = getClass().getResourceAsStream("/product.txt"); int size = is.available(); byte bytes[] = new byte[size]; is.read(bytes, 0, size); str = new String(bytes, 0, size); Now I want's to update product.txt file located in binary package using J2ME code. I have tried Following code to get relative path of product.txt String path=getClass().getResource("/product.txt").getPath(); But it does not works on J2ME platform. How to get relative path Of product.txt file located in binary package? A: You can't change a file that is baked in to the jar. You'll notice that there is no equivalent of getResourceAsStream() that offers an OutputStream. Q: jQuery validation on not working for input array I am trying to validate the inputs(name, age and job) from my input arrays using jQuery. However, my code does not validate all the empty input fields but only the first row of inputs when submitting the form. Please see the image below: Can anyone advise what I did wrong in my javascript code? Javascript function add_row() { $rowno=jQuery("#employee_table tr").length; $rowno=$rowno+1; jQuery("#employee_table tr:last").after( "<tr id='row"+$rowno+"'>"+ "<td><input type='text' name='names[]' placeholder='Name'></td>"+
rank_avg: 985 | rank_max: 4,501 | rank_min: 447 | rank_median: 789 | rank_by_avgsim: 93 | avgsim_to_github: 0.826766 | dataset: github_plus_top10pct_by_avg
ean and 95% confidence interval from @Hal04.[]{data-label="fig:RealParam"}](figures/R10_Fest.pdf "fig:"){width="33.00000%"} ![Estimated excitability curves from the eight MU hypotheses for data sets R10 (left) and R50 (centre) with corresponding expected MUTF mean estimates. Right: median and 95% credible interval for the coefficient of variation for the random variable associated with the excitability curve for each MU, together with the mean and 95% confidence interval from @Hal04.[]{data-label="fig:RealParam"}](figures/R50_Fest.pdf "fig:"){width="33.00000%"} ![Estimated excitability curves from the eight MU hypotheses for data sets R10 (left) and R50 (centre) with corresponding expected MUTF mean estimates. Right: median and 95% credible interval for the coefficient of variation for the random variable associated with the excitability curve for each MU, together with the mean and 95% confidence interval from @Hal04.[]{data-label="fig:RealParam"}](figures/CoV2.pdf "fig:"){width="33.00000%"} \[tab:Xest\] ------------ --- ----- --- --- --- --- --- --- -- ------------ --- --- --- --- --- --- --- --- $~$ Level (mN) 1 2 3 4 5 6 7 8 Level (mN) 1 2 4 3 5 7 6 8 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 50 1 0 0 0 0 0 0 0 40 1 0 0 0 0 0 0 0 – – – – – – – – – 70 0 1 0 0 0 0 0 0 100 1 1 0 0 0 0 0 0 110 1 1 0 0 0 0 0 0 120 1 0 0 1 0 0 0 0 120 0 1 0 1 0 0 0 0 170 1 1 0 1 0 0 0 0 150 1 1 0 1 0 0 0 0 210 1 1 1 1 0 0 0 0 200 1 1 1 1 0 0 0 0 230 1 1 1 1 1 0 0 0
rank_avg: 986 | rank_max: 4,765 | rank_min: 2,530 | rank_median: 819 | rank_by_avgsim: null | avgsim_to_github: null | dataset: github_plus_top10pct_by_avg
\left(\frac{2}{n}\right)^{k-1}. \end{aligned}$$ On the other hand, ball $t$ is allocated to a given bin with probability at most $\Delta_t/(n\Delta_t/2)=2/n$. Therefore, the probability that $T$ [is $c_1$-loaded]{} is at most $$\begin{aligned} \label{up:2} \binom{n}{ck}\left(\frac{2k}{n}\right)^{c_1k}{\leqslant}\left(\frac{{\mathrm{e}}n}{c_1k}\right)^{c_1k}\left(\frac{2k}{n}\right)^{c_1k}=\left(\frac{2{\mathrm{e}}}{c_1}\right)^{c_1 k}, \end{aligned}$$ where we used the fact that $\binom{n}{c_1k}{\leqslant}\left(\frac{{\mathrm{e}}n}{c_1k}\right)^{c_1k} $. Since balls are independent, one can multiply (\[up:1\]) by (\[up:2\]) and derive an upper bound for the probability that $T$ is constructed [at the given times]{} and [is $c$-loaded]{}. Taking the union bound over all rooted ordered trees and time sequences gives $$\begin{aligned} n4^{k-1}\sum_{(t_1,\ldots, t_{k-1})}\left\{\left(\frac{2}{n}\right)^{k-1}\left(\frac{2{\mathrm{e}}}{c_1}\right)^{c_1k}\right\} &{\leqslant}n4^{k-1} n^{k-1}\cdot \left\{\left(\frac{2}{n}\right)^{k-1}\left(\frac{2{\mathrm{e}}}{c_1}\right)^{c_1k}\right\}\\ &=n8^{k-1}\left(\frac{2{\mathrm{e}}}{c_1}\right)^{c_1k}, \end{aligned}$$ [proving the first statement of the lemma]{}. By setting $c_1=12(c+1)$ and $k=\log n$ in the above formula, we infer that the probability that ${\mathcal{C}}_n$ contains a $c_1$-loaded tree with $\log n$ vertices is at most $$n8^{k-1}\left(\frac{2{\mathrm{e}}}{c_1}\right)^{c_1k}< n2^{3k}2^{-12(c+1)k}{\leqslant}n2^{-ck-9k}{\leqslant}n^{-c},$$ [completing the proof.]{} \[lem:treecycle\] Suppose that the conflict graph ${\mathcal{C}}_n$ contains a $c$-loaded $k$-vertex tree $T$, where $c> 4{\mathrm{e}}$ is any constant [ and $k$ is a positive integer]{}. Let $p$ denotes the number of cycle-producing edges [(with respect to $T$) which have been added between vertices in this tree during the allocation process]{}. Then $p < 2(c+1)/\varepsilon$ with probability at least $1-n^{-c}$. For a given connected component of $
rank_avg: 987 | rank_max: 1,382 | rank_min: 1,223 | rank_median: 1,038 | rank_by_avgsim: null | avgsim_to_github: null | dataset: github_plus_top10pct_by_avg
}^{{b^{\chi}} (\beta _2)-1} F_{\beta _1}^{{b^{\chi}} (\beta _1)-1}v_\Lambda \,|&\\ 0\le l_k<{b^{\chi}} (\beta _k)\quad \text{for all $k\in \{m+1,m+2,\dots ,m+n\}$}& \end{aligned}$$ forms a vector space basis of $M^\chi (\Lambda )$. For all $k\in \{1,2,\dots ,m\}$ let $\Lambda _k={t}_{i_{k-1}}\cdots {t}_{i_2}{t}_{i_1}^\chi (\Lambda )$. By Lemma \[le:VTinv\] and Eq. , Eq.  is equivalent to $${\rho ^{\chi _k}} ({\alpha }_{i_k})\Lambda _k(K_{i_k}L_{i_k}^{-1})\not= \rhomap{\chi _k}({\alpha }_{i_k},{\alpha }_{i_k})^t$$ for all $k\in \{1,2,\dots ,m\}$, $t\in \{1,2,\dots ,{b^{\chi _k}}({\alpha }_{i_k})-1\}$. Hence, by Prop. \[pr:VTMiso\], the map $${\hat{T}}_{i_m}\cdots {\hat{T}}_{i_2}{\hat{T}}_{i_1}: M^{\chi _m}(\Lambda _m)\to M^\chi (\Lambda )$$ is an isomorphism. Thus the claim of the lemma holds by Lemma \[le:MLiso\] for $M^{\chi _m}(\Lambda _m)$ and by Thm. \[th:PBWtau\]. \[pr:M=L\] Assume that $$\begin{aligned} \label{eq:MLass} \prod _{\nu =1}^n \prod _{t=1}^{{b^{\chi}} (\beta _\nu )-1} \big( {\rho ^{\chi}} (\beta _\nu )\Lambda (K_{\beta _\nu } L_{\beta _\nu }^{-1})-\chi (\beta _\nu ,\beta _\nu )^t\big)\not=0. \end{aligned}$$ Then $I^\chi (\Lambda )=0$. For all $\nu \in \{1,2,\dots ,n\}$ let $\chi _\nu =r_{i_{\nu -1}}\cdots r_{i_2}r_{i_1}(\chi )$ and $\Lambda _\nu ={t}_{i_{\nu -1}}\cdots {t}_{i_2} {t}_{i_1}^\chi (\Lambda )$. By Lemma \[le:VTinv\] and Eq. , Eq.  is equivalent to $${\rho ^{\chi _\nu }} ({\alpha }_{i_\nu })\Lambda _\nu (K_{i_\nu }L_{i_\nu }^{-1})\not= \chi _\nu ({\alpha }_{i_\nu },{\alpha }_{i_\nu })^t$$ for all $\nu \in \{1,2,\dots ,n\}$, $t\in \{1,2,\dots ,{b^{\chi _\nu }}({\alpha }_{i_\nu })-1\}$. Hence, by Prop. \[pr:VTMiso\], the map $${\hat{T}}_{i_1}{\hat{T}}_{i_2}\cdots {\hat{T}}_{i_n}: M^{r_{i_n}(\chi _n)} ({t}_{i_n}(\Lambda _n))\to M^\chi (\Lambda )$$ is an isomorphism. Thus $v=F_{\beta _n}^{{b^{\chi}} (\beta _n)-1} \cdots F_{\beta _2}^{{b^{\chi}} (\beta _2)-1} F_{\beta _1}^{{b^{\chi}} (\beta _1)-1} v_\Lambda \not=0$ and $(U^+
rank_avg: 988 | rank_max: 1,879 | rank_min: 1,007 | rank_median: 1,008 | rank_by_avgsim: null | avgsim_to_github: null | dataset: github_plus_top10pct_by_avg
random vectors in $\mathbb{R}^p$. Let $S^X_n = \frac{1}{\sqrt{n}} \sum_{i=1}^n X_i$ and, similarly, let $S^Y_n = \frac{1}{n} \sum_{i=1}^n Y_i$, where $Y_1,\ldots, Y_n$ are independent vectors with $Y_i \sim N_p(0,\mathbb{E}[X_i X_i^\top])$. Let $\mathcal{A}$ be the collection of polyhedra $A$ in $\mathbb{R}^p$ of the form $$A =\left\{ x \in \mathbb{R}^d \colon v^\top x \leq t_v , v \in \mathcal{V}(\mathcal{A}) \right\}$$ where $ \mathcal{V}(\mathcal{A}) \subset \mathbb{R}^p$ is a set of $m$ points of unit norm, with $m \leq (n p)^d$ for some constant $d>0$, and $( t_v \colon v \in \mathcal{V}(\mathcal{A}) )$ is a set of $m$ positive numbers. For each $i=1,\ldots,n$ let $$\tilde{X}_i = (\tilde{X}_{i1},\ldots,\tilde{X}_{im})^\top = \left( v^\top X_i, v \in \mathcal{V}(\mathcal{A}) \right).$$ Assume that the following conditions are satisfied, for some $B_n \geq 1$ and $\underline{\sigma}>0$: 1. $n^{-1} \sum_{i=1}^n \mathbb{E}\left[ \tilde{X}_{ij}^2 \right] \geq \underline{\sigma}^2$, for all $j=1,\ldots, m$; 2. $n^{-1} \sum_{i=1}^n \mathbb{E}\left[ | \tilde{X}_{ij}|^{2+k} \right] \leq B^{k}_n$, for all $j=1,\ldots,m$ and $k=1,2$; 3. $\mathbb{E}\left[ \exp\left( | \tilde{X}_{i,j} | / B_n \right) \right] \leq 2$, for $i=1,\ldots,n$ and $k=1,2$. Then, there exists a constant $C>0$ depending only on $d$ such that $$\sup_{A \in \mathcal{A}} \left|\mathbb{P}(S^X_n \in A) - \mathbb{P}(S^Y_n \in A) \right| \leq \frac{C}{\underline{\sigma}} \left( \frac{B_n^2 \log^7(pn) }{n } \right)^{1/6}.$$ Finally, we make frequent use the following comparison theorem for the maxima of Gaussian vectors. Its proof can be established using arguments from the proof of Theorem 4.1 in [@cherno2] – which itself relies on a modification of Theorem 1 from [@chernozhukov2015comparison] – along with the above anti-concentration bound of . As usual, we have kept the dependence on the minimal variance explicit. \[thm:comparisons\] Let $X \sim N_p(0,\Sigma_X)$ and $Y \sim N_p(0,\Si
rank_avg: 989 | rank_max: 2,052 | rank_min: 670 | rank_median: 920 | rank_by_avgsim: null | avgsim_to_github: null | dataset: github_plus_top10pct_by_avg
t path (i.e., $o\to v_1\to b_2\to v_3\to\cdots\to x$) and a middle zigzag path. We use the lowermost path to bound $|x|^2$ as $$\begin{aligned} {\label{eq:x2-bd}} |x|^2=\sum_{n=0}^j|a_n|^2+2\sum_{0\leq m<n\leq j}a_m\cdot a_n \leq(j+1)\sum_{n=0}^j|a_n|^2,\end{aligned}$$ where $a_0=v_1$, $a_1={\underline{b}}_2-v_1$ ,$a_2=v_3-{\underline{b}}_2,\dots$, and $a_j=x-v_j$ or $x-{\underline{b}}_j$ depending on the parity of $j$. $$\begin{gathered} \includegraphics[scale=0.16]{pi3dec}\\[1pc] \text{(i)}\quad\raisebox{-1.2pc}{\includegraphics[scale=0.12] {pi3dec4}}\qquad\qquad \text{(ii)}\quad\raisebox{-1.2pc}{\includegraphics[scale=0.12] {pi3dec1}}\\[5pt] \text{(iii)}\quad\raisebox{-1.2pc}{\includegraphics[scale=0.12] {pi3dec2}}\qquad~~~\&~~\qquad\raisebox{-1.2pc}{\includegraphics [scale=0.12]{pi3dec3}}\end{gathered}$$ We discuss the contributions to $\sum_x|x|^2\pi_\Lambda^{{\scriptscriptstyle}(j)}(x)$ from (i) $|a_j|^2$, (ii) $|a_0|^2$ and (iii) $|a_n|^2$ for $n\ne0,j$, separately (cf., Figure \[fig:pi3-dec\]). 
\(i) The contribution from $|a_j|^2$ is bounded by $$\begin{aligned} {\label{eq:2nddec-bd:n=j}} &\bigg(\sum_{v,y}P_{\Lambda;v}^{\prime{{\scriptscriptstyle}(0)}}(o,y)\bigg)\bigg(\sup_y \sum_{\substack{b,v,z\\ {\underline{b}}=y}}\tau_bQ''_{\Lambda;o,v}({\overline{b}},z)\bigg)^{j -1}{\nonumber}\\ &\qquad\qquad\times\bigg(\sup_y\sum_{\substack{b,x\\ {\underline{b}}=y}}\Big(|x|^2 {\mathbbm{1}{\raisebox{-2pt}{$\scriptstyle \{j\text{ odd}\}$}}}+|x-{\underline{b}}|^2{\mathbbm{1}{\raisebox{-2pt}{$\scriptstyle \{j\text{ even}\}$}}}\Big)\,\tau_b Q'_{\Lambda;o}({\overline{b}},x)\bigg){\nonumber}\\ &\leq O(\theta_0)^{j-1}\sup_y\sum_{z,z',x}\Big(|x|^2{\mathbbm{1}{\raisebox{-2pt}{$\scriptstyle \{j\text{ odd}\}$}}} +|x-y|^2{\mathbbm{1}{\raisebox{-2pt}{$\scriptstyle \{j\text{ even}\}$}}}\Big)\,\tau_{y,z}\big(\delta_{z,z'}+\tilde G_\Lambda(z,z')\big)P'_{\Lambda;o}(z',x).\end{aligned}$$ By [(\[eq:P’0-def\])]{}, the leading contribution from $P_{\Lambda;o}^{\prime{{\scriptscriptstyle}(0)}}(z',x)$ for an odd $j$ can
rank_avg: 990 | rank_max: 1,512 | rank_min: 989 | rank_median: 1,080 | rank_by_avgsim: 3,564 | avgsim_to_github: 0.771524 | dataset: github_plus_top10pct_by_avg
though it is dynamical. It “anchors” the construction, setting a reference scale by fixing an observable dimensionful dynamical variable. (In closed worlds $K$ is like a “time” variable, in that it may “locate” the thin sandwich. In cosmology, $K$ is essentially the inverse mean “Hubble time.”) There is no underlying geometrical derivation of $\bar{K}=K$, unlike the case of $\bar{A}_{i j}$ below. The conformal invariance of $K$ is primitive. See the result in (\[Eq:DotLogPsi\]) below. Now we solve (\[Eq:traceless\]) for $\bar{A}^{i j}$ and find $$\begin{aligned} \bar{A}^{i j} &=& \psi^{-6} (2N)^{-1} \left[ \psi^{-4} (L \bar{\beta})^{i j} - \psi^{-4} u^{i j} \right] \nonumber\\ &=& \psi^{-10} \left\{ (2N)^{-1} \left[ (L \bar{\beta})^{i j} - u^{i j} \right] \right\} = \psi^{-10} A^{i j} \; , \label{Eq:Abar}\end{aligned}$$ the same conformal scaling that was postulated by Lichnerowicz [@Lich] and others [@CBYHeld; @OMY; @York79] for the traceless part of $\bar{K}^{i j}$ in the one-hypersurface problem. One now has a derivation of this fundamental transformation from its metrical foundations. The momentum constraint (\[Eq:MomCon\]) becomes $$\begin{aligned} \nabla_j \left[ (2N)^{-1} (L \bar{\beta})^{i j}\right] &=& \nabla_j \left[ (2N)^{-1} u^{i j} \right] \nonumber\\ & & + (2/3) \psi^6 \nabla^i K \; , \label{Eq:NewMomCon}\end{aligned}$$ for unknown $\bar{\beta}^i$ and known $N$, $g_{i j}$, $u_{i j}$, and $K$. The operator on the left, being in elliptic “divergence form” with $N>0$, does not differ in any important property from its counterpart in the $(\Sigma, \bar{\mbox{\bf g}}, \bar{\mbox{\bf K}})$ analysis [@CBYHeld; @OMY; @York79]. The Hamiltonian constraint (\[Eq:HamCon\]) becomes [@York73] $$8 \Delta_g \psi - R(g) \psi + A_{i j} A^{i j} \psi^{-7} - (2/3) K \psi^5 = 0 \; , \label{Eq:NewHamCon}$$ for unknown $\psi$, where $A^{i j}$ is given in (\[Eq:Abar\]). This equation has precisely the same form in the one-hypersurface and two-hypersurfaces constraint proble
rank_avg: 991 | rank_max: 3,834 | rank_min: 767 | rank_median: 790 | rank_by_avgsim: 2,899 | avgsim_to_github: 0.776237 | dataset: github_plus_top10pct_by_avg
pmod4)\\[5pt] \hbox to \frt{\hfil$\mfrac a2$\hfil}&(a\equiv0\ppmod4). \end{cases}$$ Then $n',a'$ satisfy the conditions of the proposition, and $n'<n$. So by induction there is a pair $u',v'$ such that $$v'\equiv1\pmod4,\qquad u-v\equiv-1\pmod{2^{l(v'-2)}},\qquad\mbinom{u'-v'}{u'-a'}\equiv1\pmod2.$$ Again, because $u'-v'$ is odd and $u'-a'$ is even, $\mbinom{u'-v'}{u'-a'+1}$ is also odd. We let $u=2u'$ and $v=2v'-1$. Then $u+v+2=n$, $v\equiv1\ppmod4$, and $$u-v=2(u'-v')+1\equiv-1\pmod{2^{l(v'-2)+1}},$$ with $l(v-2)\ls l(v'-2)+1$. So $S^{(u,v,2)}$ is irreducible. Furthermore $$\binom{u-v}{u-a}=\binom{2u'-2v'+1}{2u'-2a'(+2)}\equiv\binom{u'-v'}{u'-a'(+1)}\equiv1\pmod2.\tag*{\raisebox{-10pt}{\qedhere}}$$ The next result addresses most of the cases where $a+b\equiv2\ppmod8$. \[ab1\] Suppose $n\equiv5\ppmod8$, and $a$ is even, with $8\ls a\ls n-7$. Let $b=n-a-3$. Then $S^{(a,3,1^b)}$ has an irreducible summand of the form $S^{(u,v)}$ with $v\gs7$. The proof is very similar to the proof of Proposition \[ab0\]. We need to show that there is a pair $u,v$ such that $S^{(u,v)}$ is irreducible, $v\gs7$, $v\equiv3\ppmod4$ and $\mbinom{u-v}{u-a}$ is odd. The condition for $S^{(u,v)}$ to be irreducible is $u-v\equiv-1\pmod{2^{l(v)}}$. Again, we need three cases. : In this case, take $v=7$ (so $u=n-7$). Since $n\equiv5\ppmod8$, we get $u\equiv6\ppmod8$, which means that $u-v\equiv7\ppmod8$ (so $S^{(u,v)}$ is irreducible), and the binomial coefficients $$\binom{u-7}0,\binom{u-7}1,\binom{u-7}2, \binom{u-7}3$$ are all odd, which means that $\mbinom{u-7}{u-a}$ will be odd. : In this case, let $$n'=\frac{n+5}2,\quad a'= \begin{cases} \mfrac{a+2}2&(a\equiv2\ppmod4)\\[5pt] \mfrac{a+4}2&(a\equiv0\ppmod4). \end{cases}$$ Then $n',a'$ satisfy the conditions of the proposition, and $n'<n$. 
So by induction there is a pair $u',v'$ such that $$v'\equiv3\pmod4,\qquad v'\gs7,\qquad u-v\equiv-1\pmod{2^{l(v')}},\qquad \mbinom{u'-v'}{u'-a'}\equiv1\pmod2.$$ Note that because $u'-v'$ is odd and $u'-a'$ is even,
rank_avg: 992 | rank_max: 1,474 | rank_min: 1,135 | rank_median: 918 | rank_by_avgsim: 1,298 | avgsim_to_github: 0.791413 | dataset: github_plus_top10pct_by_avg
the point $w$: :j\^a\_[L,z]{}(z) j\^b\_[L,z]{}(w): = \_[n,|n=0]{}\^ :(\^n |\^[|n]{} j\^a\_[L,z]{}) j\^b\_[L,z]{}:(w). Let us now consider the OPE of one of these composite operators with the current $j^c_{L,z}(x)$: $$\begin{aligned} j^c_{L,z}(x) & :(\p^n \bar \p ^{\bar n} j^a_{L,z}) j^b_{L,z}:(w) = j^c_{L,z}(x) \lim_{:y \to w:} \p_y^n \bar \p_y ^{\bar n} j^a_{L,z}(y) j^b_{L,z}(w) \cr % & = \lim_{:y \to w:} \p_y^n \bar \p_y ^{\bar n} \left[ \left ( \frac{c_1 \kappa^{ca}}{(x-y)^2} + \frac{c_2 {f^{ca}}_d j^d_{L,z}(y)}{x-y}+ \frac{(c_2-g) {f^{ca}}_d j^d_{L,\bar z}(y)(\bar x - \bar y)}{(x-y)^2} \right.\right. \cr & \left. \qquad \qquad + \sum_{m,\bar m=0}^{\infty} \frac{(x-y)^m }{m ! }\frac{ (\bar x - \bar y)^{\bar m}}{ \bar m !} :(\p^m \bar \p ^{\bar m} j^c_{L,z}) j^a_{L,z}:(y) + \mathcal{O}(f^2) \right) j^b_{L,z}(w) \cr & \left. \qquad + j^a_{L,z}(y) \left ( \frac{c_1 \kappa^{cb}}{(x-w)^2} + ... \right ) \right] \cr % & = \lim_{:y \to w:} \p_y^n \bar \p_y ^{\bar n} \left[ \frac{c_1 \kappa^{ca}j^b_{L,z}(w)}{(x-y)^2} + \frac{c_1 c_2 {f^{cab}} }{(x-y)(y-w)^2} + \frac{c_1 \kappa^{cb}j^a_{L,z}(y)}{(x-w)^2} +.. \right] \end{aligned}$$ where the ellipses in the last line contains singular terms that comes from the OPE between the regular operators and the current in the third line of the previous computation. These terms in this OPE will be removed by the regular limit $:y \to w:$. In order to compute the action of the derivatives more conveniently, we rewrite the second term in the last line as: = c\_1 c\_2 [f\^[cab]{}]{} \_[p=0]{}\^ Thus we obtain: $$\begin{aligned} j^c_{L,z}&(x) :(\p^n \bar \p ^{\bar n} j^a_{L,z}) j^b_{L,z}:(w) = \lim_{:y \to w:} \left[ \delta_{\bar n,0}\frac{(n+1)!}{(x-y)^{n+2}} c_1 \kappa^{ca}j^b_{L,z}(w) \right. \cr & \left. \qquad + \delta_{\bar n,0}\sum_{p=0}^{\infty} \frac{(p-2)...(p-2-n+1)(y-w)^{p-2-n}}{(x-w)^{p+1}} c_1 c_2 {f^{cab}} + \frac{c_1 \kappa^{cb} \p^n \bar \p ^{\bar n} j^a_{L,z}(y)}{(x-w)^2}+... 
\right] \cr % & = \delta_{\bar n,0}\frac{(n+1)!}{(x-w)^{n+2}} c_1
rank_avg: 993 | rank_max: 3,122 | rank_min: 1,226 | rank_median: 857 | rank_by_avgsim: null | avgsim_to_github: null | dataset: github_plus_top10pct_by_avg
t $L_i=\bigoplus_{\lambda}H_{\lambda}\oplus A(2\delta, 2b_i, 1)$ for certain $b_i\in A$ and $\delta (\in A) \equiv 1 \mathrm{~mod~}2$. Thus the orthogonal group $\mathrm{O}(B_i/Z_i, \bar{q}_i) ~(=\mathrm{O}(n_i, \bar{q}_i))$ is split if and only if the quadratic space $A(2\delta, 2b_i, 1)/\pi A(2\delta, 2b_i, 1)$ is isotropic. Recall that $\pi=-\sigma(\pi)$. Using this, the quadratic form on $A(2\delta, 2b_i, 1)/\pi A(2\delta, 2b_i, 1)$ is $q(x, y)=x^2+xy+\bar{b}_iy^2$, where $\bar{b}_i$ is the reduction of $b_i$ in $\kappa$. We consider the identity $q(x, y)=x^2+xy+\bar{b}_iy^2=0$. If $y=0$, then $x=0$. Assume that $y\neq 0$. Then we have that $\bar{b}_i=(x/y)^2+x/y$. Thus we can see that there exists a solution of the equation $z^2+z=\bar{b}_i$ over $\kappa$ if and only if $q(x, y)$ is isotropic if and only if $\mathrm{O}(B_i/Z_i, \bar{q}_i) ~(=\mathrm{O}(n_i, \bar{q}_i))$ is split. The construction of component groups {#cg} ------------------------------------ The purpose of this subsection is to define a surjective morphism from $\tilde{G}$ to $(\mathbb{Z}/2\mathbb{Z})^{\beta}$, where $\beta$ is the number of integers $j$ such that $L_j$ is *of type I* and $L_{j+2}, L_{j+3}, L_{j+4}$ (resp. $L_{j-1}, L_{j+1},$ $L_{j+2}, L_{j+3}$) are *of type II* if $j$ is even (resp. odd), as defined in Lemma \[l46\]. We start with reproducing the definitions of the sublattices $L^i$ and $C(L)$ of $L$ given in Definitions 4.8 and 4.9 of [@C2]. 
\[d48\] We set $L^0=L$ and inductively define, for positive integers $i$, $$L^i:=\{x\in L^{i-1} | h(x, L^{i-1})\subset (\pi^i)\}.$$ When $i=2m$ is even, $$L^{2m}=\pi^m(L_0\oplus L_1)\oplus\pi^{m-1}(L_2\oplus L_3)\oplus \cdots \oplus \pi(L_{2m-2}\oplus L_{2m-1})\oplus \bigoplus_{i\geq 2m}L_i.$$ We choose a Jordan splitting for the hermitian lattice $(L^{2m}, \xi^{-m}h)$ as follows: $$L^{2m}=\bigoplus_{i \geq 0} M_i,$$ where $$M_0=\pi^mL_0\oplus\pi^{m-1}L_2\oplus \cdots \oplus \pi L_{2m-2}\oplus L_{2m},$$ $$M_1=\pi^mL_1\oplus\pi^{m-1}L_3\oplus \cdots \oplus \pi L_{2m-1}\oplus L_{
rank_avg: 994 | rank_max: 1,296 | rank_min: 1,197 | rank_median: 962 | rank_by_avgsim: 3,382 | avgsim_to_github: 0.772754 | dataset: github_plus_top10pct_by_avg
ight).[]{data-label="fig:topology"}](Plot11_0b-eps-converted-to.pdf "fig:"){width=".3\textwidth"} (-95,-5)[graph size ]{} (-95,100)[Worst-case $\theta^*$]{} The Role of the Position of the Separators ------------------------------------------ As predicted by theorem \[thm:main2\], rank-breaking fails when $\gamma$ is small, i.e. the position of the separators are very close to the bottom. An extreme example is the bottom-$\ell$ separators scenario, where each person is offered $\kappa$ randomly chosen alternatives, and is asked to give a ranked list of bottom $\ell$ alternatives. In other words, the $\ell$ separators are placed at $(p_{j,1},\ldots,p_{j,\ell})=(\kappa_j-\ell, \ldots,\kappa-1)$. In this case, $\gamma\simeq 0$ and the error bound is large. This is not a weakness of the analysis. In fact we observe large errors under this scenario. The reason is that many alternatives that have large weights $\theta_i$’s will rarely be even compared once, making any reasonable estimation infeasible. Figure \[fig:bottom\_l\_1\] illustrates this scenario. We choose $\ell=8$, $\kappa=128$, and $d=1024$. The other settings are same as that of the first figure of Figure \[fig:scaling\_l\_n\]. The left figure plots the magnitude of the estimation error for each item. For about 200 strong items among 1024, we do not even get a single comparison, hence we omit any estimation error. It clearly shows the trend: we get good estimates for about 400 items in the bottom, and we get large errors for the rest. Consequently, even if we only take those items that have at least one comparison into account, we still get large errors. This is shown in the figure right. The error barely decays with the sample size. However, if we focus on the error for the bottom 400 items, we get good error rate decaying inversely with the sample size. 
Normalization constant $C$ in the second figure is $10^2 \,x\,d/\ell$ and $10^{2}(400)d/\ell$ for the first and second lines respectively, where $x$ is the number of items that appeared in rank-breaki
rank_avg: 995 | rank_max: 549 | rank_min: 299 | rank_median: 1,066 | rank_by_avgsim: null | avgsim_to_github: null | dataset: github_plus_top10pct_by_avg
etermine factors associated with changes in percentage plasma choline concentration **Simple regression** **Regression coefficient** **r** **r**^**2**^ ***P*** ----------------------------------- ---------------------------- ------- -------------- --------- Age \[months\] 0.2 0.26 0.07 0.204 Sex 10.8 0.18 0.03 0.364 Neuter status 14.5 0.13 0.02 0.518 Breed group^\*^ −23.8 −0.40 0.16 0.043 Starting weight \[kg\] −1.0 −0.57 0.32 0.002 Body fat percentage (pre) \[%\] −2.3 −0.48 0.23 0.014 Lean tissue mass (pre) \[kg\] −1.9 −0.54 0.29 0.004 Duration of weight loss \[days\] −0.09 −0.46 0.21 0.018 Rate of weight loss \[%SBW/week\] 42.4 0.48 0.23 0.013 Percentage weight loss \[%\] 0.1 0.03 0.00 0.873 Energy intake during weight loss −0.6 −0.14 0.02 0.493 Change in fat tissue mass \[%\] −0.5 −0.19 0.04 0.345 Change in lean tissue mass \[%\] −0.02 0.00 0.00 0.979 ^\*^ Breed based upon a dummy variable where dogs of retriever breeds were assigned a value of 1. ###### Multiple linear regression to determine factors associated with changes in percentage plasma choline concentration **Model 1** **Regression coefficient** **r** **r**^**2**^ ***P*** ------------------------------- ---------------------------- ------- -------------- --------- Final model
rank_avg: 996 | rank_max: 4,517 | rank_min: 528 | rank_median: 533 | rank_by_avgsim: null | avgsim_to_github: null | dataset: github_plus_top10pct_by_avg
(Q)\subset \cdots$. First we define a graph $\Gamma_n$ \[resp. $\Gamma_{n+\frac{1}{2}}$\] for a non-negative integer $n\in\mathbb{Z}_{\geq 0}$. Then we define the sets of [*tableaux*]{} as sets of paths on this graph. Figure \[fig:brad\] will help the reader to understand the recipe. ![$\Gamma_4$[]{data-label="fig:brad"}](18.eps) For the moment, we assume that $Q$ is a sufficiently large integer. Let $\lambda = (\lambda_1, \lambda_2, \ldots, \lambda_l)$ be a partition. For this $\lambda$, define $$\begin{aligned} \widetilde{\lambda} &=& (Q-|\lambda|, \lambda_1, \lambda_2, \ldots, \lambda_l)\\ \big[\mbox{resp.}\ \widehat{\lambda} &=& (Q-1-|\lambda|, \lambda_1, \lambda_2, \ldots, \lambda_l)\big]\end{aligned}$$ to be a partition of size $Q$ \[resp. $Q-1$\]. Pictorially, $\widetilde{\lambda}$ \[resp. $\widehat{\lambda}$\] is obtained by adding $Q-|\lambda|$ \[resp. $Q-1-|\lambda|$\] boxes on the top of $\lambda$. Let $ P_{\leq i} = \bigcup_{j=0}^i\{ \lambda\ |\ \lambda\vdash j\} $ be a set of Young diagrams of size less than or equal to $i$. We define ${\mbox{\boldmath $\Lambda$}}_i$ and ${\mbox{\boldmath $\Lambda$}}_{i+\frac{1}{2}}$ to be $${\mbox{\boldmath $\Lambda$}}_i = \{\widetilde{\lambda}\ |\ \lambda\in P_{\leq i}\} \mbox{ and } {\mbox{\boldmath $\Lambda$}}_{i+\frac{1}{2}} = \{\widehat{\lambda}\ |\ \lambda\in P_{\leq i}\},$$ which are set of Young diagrams of size $Q$ and $Q-1$ respectively. Under these preparations we define a graph $\Gamma_n$ \[resp. 
$\Gamma_{n+\frac{1}{2}}$\] which consists of the vertices labeled by: $$\left( \bigsqcup_{i=0,1, \ldots, n-1} ({\mbox{\boldmath $\Lambda$}}_i \sqcup {\mbox{\boldmath $\Lambda$}}_{i+\frac{1}{2}}) \right) \bigsqcup{\mbox{\boldmath $\Lambda$}}_n \quad \left[ \mbox{resp.}\ \left( \bigsqcup_{i=0,1, \ldots, n} ({\mbox{\boldmath $\Lambda$}}_i \sqcup {\mbox{\boldmath $\Lambda$}}_{i+\frac{1}{2}}) \right) \right]$$ and the edges joined by either of the following rule: - join $\widetilde{\lambda}\in{\mbox{\boldmath $\Lambda$}}_{i}$ and $\wideha
rank_avg: 997 | rank_max: 1,376 | rank_min: 748 | rank_median: 1,025 | rank_by_avgsim: 3,652 | avgsim_to_github: 0.770892 | dataset: github_plus_top10pct_by_avg
since the proof of Theorem \[thm:mrdeterm\] is constructive, the states which can be transformed into $y$ by catalyst-assisted transformation while cannot by multiple-copy transformation can also be constructed. We have proved that $T(y)\not = M(y)$ in some cases. Moreover, witness vectors which are in $T(y)$ but not in $M(y)$ are also constructed explicitly. It should be pointed out, however, that the witness vectors we constructed lie on the boundary of $T(y)$ without any exception, that is, they all satisfy the property that $x^\downarrow_1 = y^\downarrow_1$ or $x^\downarrow_n = y^\downarrow_n$. These witness vectors can be involved if we consider the closure of $M(y)$ instead. In fact, we will see in the following section that in probabilistic setting, the two sets $M^{\lambda}(y)$ and $T^{\lambda}(y)$ defined in Eqs.(\[eq:ty\]) and (\[eq:my\]) have exactly the same closure for $0\leq \lambda< 1$. So the question remained is to show whether or not ${M(y)}$ and ${T(y)}$ also have the same closure. Probabilistic case ================== We considered deterministic entanglement transformations in the previous section. In this section, let us turn to examine transformations with maximal probability strictly less than $1$. 
Given a nonnegative number $\lambda <1$, let $$\label{eq:sy}S^{\lambda}(y)=\{x\in V^n\ |\ P(x\ra y)\geq \lambda \}$$ be the set of states that can be transformed into $y$ by LOCC with the maximal probability not less than $\lambda$, $$\label{eq:ty}T^{\lambda}(y)=\{x\in V^n\ |\ \exists c, P(x\otimes c\ra y\otimes c)\geq \lambda \}$$ be the set of states that can be transformed into $y$ by catalyst-assisted LOCC with the maximal probability not less than $\lambda$, and $$\label{eq:my}M^{\lambda}(y)=\{x\in V^n\ |\ \exists k, P(x^{\otimes{k}}\ra y^{{\otimes{k}}})^{1/k} \geq \lambda\}$$ the set of states which, when some appropriate number of copies are provided, can be transformed into the same number of $y$ by multiple-copy LOCC with the maximal geometric average probability not less than $\lam
rank_avg: 998 | rank_max: 122 | rank_min: 1,287 | rank_median: 1,009 | rank_by_avgsim: 2,968 | avgsim_to_github: 0.775765 | dataset: github_plus_top10pct_by_avg
rtex states already found in §\[sec:3D\_InstOpt\_E0to0\] and §\[sec:3D\_InstOpt\_E\], and to the Poincaré limit. Another interesting possibility is to replace the energy $\K({\mathbf{u}})$ with the helicity [$\H({\mathbf{u}}) := \int_{\Omega} {\mathbf{u}}\cdot(\bnabla\times{\mathbf{u}})\,d\Omega$]{} in the multiobjective formulation , as this might allow one to obtain extreme vortex states with a more complicated topology (i.e., a certain degree of “knottedness”). We note that all the extreme vortex states found in the present study were “unknotted”, i.e., were characterized by $\H({\widetilde{\mathbf{u}}_{\E_0}}) = 0$, as the vortex rings were in all cases disjoint (cf. figure \[fig:ring\]). Finally, another promising possibility to find initial data producing a larger growth of enstrophy is to solve a [*finite-time*]{} optimization problem of the type already studied by [@ap11a] in the context of the 1D Burgers equation, namely \[pb:maxdE\] $$\tilde{{\mathbf{u}}}_{0;\E_0,T} = \mathop{\arg\max}_{{\mathbf{u}}_0\in{\mathcal{S}_{\E_0}}} \, \E(T),$$ where $T>0$ is the length of the time interval of interest and ${\mathbf{u}}_0$ the initial data for the Navier-Stokes system . In contrast to problems \[pb:maxdEdt\_E\] and \[pb:maxdEdt\_KE\], solution of problem \[pb:maxdE\] is more complicated as it involves flow evolution. It represents therefore a formidable computational task for the 3D Navier-Stokes system. However, it does appear within reach given the currently available computational resources and will be studied in the near future. Acknowledgements {#acknowledgements .unnumbered} ================ The authors are indebted to Charles Doering for many enlightening discussions concerning the research problems studied in this work. The authors are grateful to Nicholas Kevlahan for making his parallel Navier-Stokes solver available, which was used to obtain the results reported in §\[sec:timeEvolution\]. 
Anonymous referees provided many insightful comments which helped us improve this work. This
rank_avg: 999 | rank_max: 442 | rank_min: 1,576 | rank_median: 1,103 | rank_by_avgsim: null | avgsim_to_github: null | dataset: github_plus_top10pct_by_avg
symbol meaning ${\mathbf{X}}^T$ transpose of ${\mathbf{X}}$ ${\mathbf{X}}^H$ conjugate transpose of ${\mathbf{X}}$ ${\mathbf{X}}\left(m,n\right)$ entry of ${\mathbf{X}}$ in $m$-th row and $n$-th column ${\mathrm{Tr}}({\mathbf{X}})$ trace of ${\mathbf{X}}$ $\left|{\mathbf{X}}\right|$ determinant of ${\mathbf{X}}$ $\left\|{\mathbf{X}}\right\|_F$ Frobenius norm of ${\mathbf{X}}$ $\mathrm{Re}[x]$ real part of $x$ $\mathrm{Im}[x]$ imaginary part of $x$ $\odot$ Hadamard (element-wise) product $\left|\mathcal{X}\right|$ cardinality of set $\mathcal{X}$ ${\mathrm{sgn}}(\cdot)$ sign function $\mathrm{erf}(\cdot)$ error function $\mathrm{erf}^{-1}(\cdot)$ inverse error function ${\mathbb{E}}\{\cdot\}$ expectation operation ${\mathbb{P}}(\cdot)$ probability measure $\mathrm{corr}(\cdot,\cdot) $ correlation coefficient ${\mathbf{I}}_n$ identity matrix of size $n$ $\mathcal{N}(\mu,\sigma^2)$ Gaussian distribution with mean $\mu$ and variance $\sigma^2$ $\mathcal{CN}(\mu,\sigma^2)$ complex Gaussian distribution with mean $\mu$ and variance $\sigma^2$ $\mathcal{CN}({\mathbf{a}},{\mathbf{A}})$ distribution of a circularly symmetric complex Gaussian random vector with mean ${\mathbf{a}}$ and covariance matrix ${\mathbf{A}}$ ------------------------------------------- ------------------------------------------------------------------------------------------------------------------------------------- : Summary of notations.[]{data-label="table:Notations"} Preliminaries ============= Different from conventional antennas with fixed radiation characteristics, reconfigur
rank_avg: 1,000 | rank_max: 1,215 | rank_min: 1,849 | rank_median: 1,148 | rank_by_avgsim: null | avgsim_to_github: null | dataset: github_plus_top10pct_by_avg