| Column | Type | Values |
|---|---|---|
| text | large_string | lengths 384–2.05k |
| rank_avg | float64 | 1–4.19k (some null) |
| rank_max | float64 | 1–8.21k (some null) |
| rank_min | float64 | 1–5.03k (some null) |
| rank_median | float64 | 1–4.21k (some null) |
| rank_by_avgsim | float64 | 1–4.19k (some null) |
| avgsim_to_github | float32 | 0.77–0.85 (some null) |
| dataset | large_string | 1 class: github_plus_top10pct_by_avg |
d in is given by $$\begin{cases}
{\mathbb Z}^{abc} &\rightarrow {\mathbb Z}^a \oplus {\mathbb Z}^b \oplus {\mathbb Z}^c \\
(x_{l,m,n}) &\mapsto \Big( \sum_{m,n} x_{l,m,n}, \sum_{l,n} x_{l,m,n}, \sum_{l,m} x_{l,m,n} \Big).
\end{cases}$$ We conclude that the multiplicity of a weight $\delta = (\delta^A, \delta^B, \delta^C)$ for $\operatorname{U}(a) \times \operatorname{U}(b) \times \operatorname{U}(c)$ is given by the number of integral points in the polytope $\Delta(k,\delta)$ described above.
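The statement "multiplicity = number of integral points in $\Delta(k,\delta)$" is easy to prototype, though not efficiently: the sketch below simply enumerates integer vectors in a bounding box and tests the defining inequalities $Ax \le b$. The toy polytope and names here are illustrative only (the actual description of $\Delta(k,\delta)$ is not reproduced), and the polynomial-time algorithms referred to in the text rely on Barvinok-style counting rather than enumeration.

```python
import itertools

def count_lattice_points(A, b, box):
    """Count integer points x with A.x <= b componentwise, inside the
    inclusive bounding box [(lo_1, hi_1), ..., (lo_d, hi_d)].
    Exponential in the dimension -- fine for illustration, not a
    substitute for Barvinok's polynomial-time counting."""
    count = 0
    for x in itertools.product(*(range(lo, hi + 1) for lo, hi in box)):
        if all(sum(a_ij * x_j for a_ij, x_j in zip(row, x)) <= b_i
               for row, b_i in zip(A, b)):
            count += 1
    return count

# Toy polytope: x, y >= 0 and x + y <= 2 has six integer points.
A = [[-1, 0], [0, -1], [1, 1]]
b = [0, 0, 2]
print(count_lattice_points(A, b, [(0, 2), (0, 2)]))  # 6
```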
Just as for our main algorithm, this description gives rise to a polynomial-time algorithm for computing Kronecker coefficients with a bounded number of rows. This second algorithm runs faster than the generic one presented earlier, since the ambient space ${\mathbb R}^{abc}$ has a smaller dimension than the one arising from the construction in the proof above. We remark that the time complexity for unbounded $a$, $b$, and $c$ can be deduced from [@barvinokpommersheim99].
Asymptotics {#section:asymptotics}
===========
In this section we will prove our result on the generic order of growth of multiplicities in the coordinate ring of a projective variety.
We will work in the following general setup: Let $V$ be a finite-dimensional rational representation of $H$, and suppose that $X$ is an $H$-stable closed subvariety of the associated projective space ${\mathbb P}(V)$. The homogeneous coordinate ring ${\mathbb C}[X]$ is graded, and we can decompose each part into its irreducible components, $$\label{alggeo setup}
{\mathbb C}[X]
= \bigoplus_{k=0}^\infty {\mathbb C}[X]_k
= \bigoplus_{k=0}^\infty \bigoplus_{\mu} m_{H,X,k}(\mu) \, V_{H,\mu}^*,$$ where, following the usual conventions, we have decomposed with respect to the dual representations $V_{H,\mu}^*$. The *stretching function* is then by definition $k \mapsto m_{H,X,k}(k\mu)$. We stress that in contrast to [@mulmuley07], where it was assumed that $X$ has at most rational singularities, we do not even require that $X$ is a normal variety [@hartshorne77]. Thi
Thank you for any help you can give.
A:
Please Please Please do not do this.
Make every date a new row.
Make a date column.
So your table would look something like this:
ID | Date | Attendance |
1 | 2011-11-01 | 5 |
2 | 2011-11-02 | 12 |
3 | 2011-11-03 | 3 |
Then you can have a table of members.
Then you can have a join table linking members to meetings.
Once you have this, you do not even need an Attendance column in the Meeting table, because you can count how many members attended each meeting with a simple query:
SELECT m.id, m.date, COUNT(mm.memberid) as `attendance`
FROM meeting m
LEFT JOIN meeting_to_member mm ON mm.meetingid = m.id
GROUP BY m.id, m.date;
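The whole scheme can be sketched end-to-end with SQLite. The table and column names (meeting, member, meeting_to_member) follow the examples in this answer, and the sample data is made up:

```python
import sqlite3

# Minimal sketch of the normalized schema described above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE meeting (id INTEGER PRIMARY KEY, date TEXT NOT NULL);
CREATE TABLE member  (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE meeting_to_member (
    meetingid INTEGER REFERENCES meeting(id),
    memberid  INTEGER REFERENCES member(id),
    PRIMARY KEY (meetingid, memberid)
);
""")
conn.executemany("INSERT INTO meeting VALUES (?, ?)",
                 [(1, "2011-11-01"), (2, "2011-11-02")])
conn.executemany("INSERT INTO member VALUES (?, ?)",
                 [(1, "John Doe"), (2, "John Smith")])
# John Doe attends both meetings; John Smith only the second.
conn.executemany("INSERT INTO meeting_to_member VALUES (?, ?)",
                 [(1, 1), (2, 1), (2, 2)])

# Attendance is derived by counting join rows, not stored as a column.
rows = conn.execute("""
    SELECT m.id, m.date, COUNT(mm.memberid) AS attendance
    FROM meeting m
    LEFT JOIN meeting_to_member mm ON mm.meetingid = m.id
    GROUP BY m.id, m.date
    ORDER BY m.id
""").fetchall()
print(rows)  # [(1, '2011-11-01', 1), (2, '2011-11-02', 2)]
```

Note the `LEFT JOIN` plus `COUNT(mm.memberid)`: a meeting with no attendees still appears in the result, with attendance 0.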
A:
I'd suggest you set up something like this:
id | MeetingDate | MemberName | Present
1 | 2011-11-01 | John Doe | 1
2 | 2011-11-01 | John Smith | 0
3 | 2011-11-01 | John Jackson | 1
4 | 2011-11-02 | John Doe | 0
5 | 2011-11-02 | John Smith | 1
6 | 2011-11-02 | John Jackson | 1
7 | 2011-11-03 | John Doe | 1
8 | 2011-11-03 | John Smith | 1
9 | 2011-11-03 | John Jackson | 1
So you'd have a row for each member for each meeting, whether they attended or not. My guess is you're not going to have millions and millions of rows with this system, so don't worry about performance.
You can get the total attendance for a member with something like:
SELECT SUM(Present) as total FROM attendance WHERE MemberName = 'John Doe';
// returns -> 2
You can get the total attendance for a meeting with something like:
SELECT SUM(Present) as total FROM attendance WHERE MeetingDate = '2011-11-01';
// returns -> 2
You can get all of the members that missed a specific meeting with something like:
SELECT MemberName FROM attendance WHERE Present = 0 AND MeetingDate = '2011-11-01';
// returns -> John Smith
Q:
Pote
s of the restriction of the given immersion $f$ ($g$) to the submanifold in $M^{n-1}$ ($N^{n-2}$) dual to $w_1(\kappa)^{k-1} \in H^{k-1}(M^{n-1};\Z/2)$ ($w_2(\eta)^{k-1} \in H^{2k-2}(N^{n-2};\Z/2)$).
Let $(g,\Xi,\eta)$ be a $\D_4$-framed (generic) immersion in the codimension $2k$. Let $h: L^{n-4k} \looparrowright \R^n$ be the immersion of the self-intersection (double points) manifold of $g$. The normal bundle $\nu_h$ of the immersion $h$ is decomposed into a direct sum of $k$ isomorphic copies of a 4-dimensional bundle $\zeta$ with the structure group $\Z/2 \int \D_4$. This decomposition is given by the isomorphism $\Psi: \nu_h \cong k \zeta$. The bundle $\nu_h$ itself is classified by the mapping $\zeta:
L^{n-4k} \to K(\Z/2 \int \D_4,1)$.
All the triples $(h,\zeta,\Psi)$ described above (we do not assume that a triple is realized as the double point manifold for a $\D_4$-framed immersion) up to the standard cobordism relation form the cobordism group $Imm^{\Z/2 \int \D_4}(n-4k,4k)$. The self-intersection of an arbitrary $\D_4$-framed immersion is a $\Z/2 \int \D_4$-framed immersed manifold and the cobordism class of this manifold well-defines the natural homomorphism $$\delta_{\D_4}^k : Imm^{\D_4}(n-2k,2k) \to Imm^{\Z/2 \int \D_4}(n-4k,4k). \eqno(6)$$
The subgroup $\D_4 \oplus \D_4 \subset \Z/2 \int \D_4$ of index 2 induces the double cover $\bar L^{n-4k} \to L^{n-4k}$. This double cover corresponds to the canonical double cover over the double point manifold.
Let $\bar \zeta: \bar L^{n-4k} \to K(\D_4,1)$ be the classifying mapping induced by the projection homomorphism $\D_4 \oplus \D_4
\to \D_4$ to the first factor. Let $\bar \zeta \to L^{n-4k}$ be the 2-dimensional $\D_4$–bundle defined as the pull-back of the universal 2-dimensional bundle with respect to the classifying mapping $\bar \zeta$.
### Definition 2 {#definition-2 .unnumbered}
The Kervaire invariant $\Theta_{\Z/2 \int \D_4}^k: Imm^{\Z/2 \int \D_4}(n-4k,4k) \to \Z/2$ for a $\Z/2
\int \D_4$-framed immersion $(h,\Psi,\zeta)$ is defined by
ely dotted](0,.13-1.5*.125)--++(0,1.5*.25);\end{tikzpicture}}},;\star),\qquad
\gyoung(;1;2;3_{1.5}{{\begin{tikzpicture}[baseline=0cm]\draw[thick,densely dotted](0,.13)--++(1.5*.25,0);\end{tikzpicture}}};v;\star_{1.5}{{\begin{tikzpicture}[baseline=0cm]\draw[thick,densely dotted](0,.13)--++(1.5*.25,0);\end{tikzpicture}}};\star,;1;1;2,;1,;3,|{1.5}{{\begin{tikzpicture}[baseline=0cm]\draw[thick,densely dotted](0,.13-1.5*.125)--++(0,1.5*.25);\end{tikzpicture}}},;v,;\star,|{1.5}{{\begin{tikzpicture}[baseline=0cm]\draw[thick,densely dotted](0,.13-1.5*.125)--++(0,1.5*.25);\end{tikzpicture}}},;\star).$$ But these two homomorphisms are equal by Lemma \[lemma7\], and we are done.
Now, as in Section \[hlamu’1\] we have to show that $\sigma\neq0$. Again, we use a dominance argument.
\[sigmanz2\] With the notation above, $\sigma\neq0$.
We’ll show that when $\sigma$ is expressed as a linear combination of semistandard homomorphisms, the homomorphism ${\hat\Theta_{S}}$ occurs with non-zero coefficient, where $$S=
{\text{\footnotesize$\gyoungx(1.2,;1;1;1;2;4;5_{1.5}{{\begin{tikzpicture}[baseline=0cm]\draw[thick,densely dotted](0,.13)--++(1.5*.25,0);\end{tikzpicture}}};v;{b\!\!+\!\!3}_{1.5}{{\begin{tikzpicture}[baseline=0cm]\draw[thick,densely dotted](0,.13)--++(1.5*.25,0);\end{tikzpicture}}};u,;2;2;3,;3,|2{{\begin{tikzpicture}[baseline=0cm]\draw[thick,densely dotted](0,.13-2*.125)--++(0,2*.25);\end{tikzpicture}}},;{b\!\!+\!\!2})$}}.$$ Recall that when ${\hat\Theta_{T}}$ is expressed as a linear combination of semistandard homomorphisms, the coefficient of ${\hat\Theta_{S}}$ is zero unless $S\dom T$. The only elements of ${\calu}$ which are dominated by $S$ are the tableaux of the form $$T[i]=
{\text{\footnotesize$\gyoungx(1.2,;1;2;3;4_{1.5}{{\begin{tikzpicture}[baseline=0cm]\draw[thick,densely dotted](0,.13)--++(1.5*.25,0);\end{tikzpicture}}};v;i;{b\!\!+\!\!3}_{1.5}{{\begin{tikzpicture}[baseline=0cm]\draw[thick,densely dotted](0,.13)--++(1.5*.25,0);\end{tikzpicture}}};u,;1;1;2,;2,;3,;{{\begin{tikzpicture}[baseline=0cm]\draw
\]) can be written in compact form as $$\begin{aligned}
\frac{\partial\mbox{\boldmath$\zeta$}}{\partial t}
&=&-\frac{i}{\hbar}
\left[
\begin{array}{cc}\frac{\partial{\cal H}}{\partial\vert\psi\rangle}
&\frac{\partial{\cal H}}{\partial\langle\psi\vert}
\end{array}
\right]
\cdot\mbox{\boldmath$\Omega$}\cdot
\left[\begin{array}{c}\frac{\partial\mbox{\boldmath$\zeta$}}{\partial\vert\psi\rangle}
\\\frac{\partial\mbox{\boldmath$\zeta$}}{\partial\langle\psi\vert}\end{array}\right]
\nonumber\\
&=&-\frac{i}{\hbar}
\left\{{\cal H},\mbox{\boldmath$\zeta$}\right\}_{\mbox{\tiny\boldmath$\Omega$};\zeta}\;.
\label{eq:wein-like}\end{aligned}$$ Equations (\[eq:fckrk\]), or their compact “Weinberg-like” form in Eq. (\[eq:wein-like\]), express the wave picture for the quantum-classical dynamics of phase-space-dependent quantum degrees of freedom [@qc-bracket; @kcmqc]. Such a wave picture makes one recognize the intrinsic non-linearity of quantum-classical dynamics. These specific features will be discussed, among other issues, in the next section.
Adiabatic basis representation and surface-hopping schemes {#sec:qcwdab}
==========================================================
Equations (\[eq:fckrk\]) are written in an abstract form. In order to devise a numerical algorithm to solve them, one has to obtain a representation in some basis. Of course, any basis can be used but, since one would like to find a comparison with surface-hopping schemes, the adiabatic basis is a good choice. To this end, consider the following form of the quantum-classical Hamiltonian operator: $$\hat{H}=\frac{P^2}{2M}+\hat{h}(R)\;,$$ where the first term provides the kinetic energy of the classical degrees of freedom with mass $M$, while $\hat{h}(R)$ describes the quantum sub-system and its coupling with the classical coordinates $R$. The adiabatic basis is then defined by the following eigenvalue equation: $$\hat{h}\vert\alpha;R\rangle=E_{\alpha}(R)\vert\alpha;R\rangle\;.$$ Since the non-linear wave equations in (\[eq:fckrk\]) have been derived
ficient algorithms tailored for independent paired comparisons. However, due to the ignored dependencies in the data, naive rank-breaking approaches can result in inconsistent estimates. The key idea to produce accurate and consistent estimates is to treat the pairwise comparisons unequally, depending on the topology of the collected data. In this paper, we provide the optimal rank-breaking estimator, which not only achieves consistency but also achieves the best error bound. This allows us to characterize the fundamental tradeoff between accuracy and complexity. Further, the analysis identifies how the accuracy depends on the spectral gap of a corresponding comparison graph.'
author:
- |
[Ashish Khetan and Sewoong Oh ]{}\
[Department of ISE, University of Illinois at Urbana-Champaign]{}\
[Email: $\{$khetan2,swoh$\}$@illinois.edu]{}
bibliography:
- '\_ranking.bib'
title: 'Data-driven Rank Breaking for Efficient Rank Aggregation'
---
Introduction {#sec:intro}
============
In several applications such as electing officials, choosing policies, or making recommendations, we are given partial preferences from individuals over a set of alternatives, with the goal of producing a global ranking that represents the collective preference of the population or the society. This process is referred to as [*rank aggregation*]{}. One popular approach is [*learning to rank*]{}. Economists have modeled each individual as a rational being maximizing his/her perceived utility. Parametric probabilistic models, known collectively as Random Utility Models (RUMs), have been proposed to model such individual choices and preferences [@McF80]. This allows one to infer the global ranking by learning the inherent utility from individuals’ revealed preferences, which are noisy manifestations of the underlying true utility of the alternatives.
Traditionally, learning to rank has been studied under the following data collection scenarios: pairwise comparisons, best-out-of-$k$ comparisons, and $k$-way comparisons. [*Pairwise co
} - q^{\frac{2-r+1}2}\right)$$ which combined with Theorem \[euler-spec\] gives $$\label{expansion-combo}
\H_{(n-1,1)} \left(\sqrt{q},\frac1{\sqrt{q}}\right)
= \sum_{\substack{rs=2n\\ r\not\equiv s\mod2}}
(-1)^r \left( q^{\frac{s-r-1}2} - q^{\frac{2-r+1}2}\right)$$
We compute the logarithm of the left-hand side above and get $$\sum_{m,n\ge1} (q^m + q^{-m} -2) \frac{T^{mn}}{m}.$$ Applying $(q\frac{d}{dq})^k$ and then setting $q=1$ we obtain $$\sum_{m,n\ge1} (m^k + (-m)^k) \frac{T^{mn}}{m},$$ which vanishes identically if $k$ is odd. For $k$ even, it equals $$2 \sum_{n\ge1} \sum_{d\mid n} d^{k-1}\ T^n.$$ Comparing coefficients, we see that this series equals $2G_k$, up to the constant term.
Note that if $q=e^u$ then $$q\frac{d}{dq} = \frac{d}{du}\ ,\qquad q=1 \leftrightarrow u=0.$$ Hence, $$\log \bigg( 1+\sum_{n\ge1} \H_{(n-1,1)}
( e^{u/2}, e^{-u/2})T^n\bigg) = \sum_{\substack{k\ge2 \\ \text{even}}}
\left( 2G_k + \frac{B_k}{k}\right) \frac{u^k}{k!}.$$ On the other hand, it is easy to check that $$u\exp \bigg( \sum_{k\ge2}\frac{B_k}{k}\ \frac{u^k}{k!}\bigg)
= e^{u/2} - e^{-u/2}$$ ($B_k=0$ if $k>1$ is odd.) This proves the claim.
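The identity $u\exp\big(\sum_{k\ge2}\frac{B_k}{k}\frac{u^k}{k!}\big)=e^{u/2}-e^{-u/2}$ can indeed be checked mechanically. Below is a short sketch in Python with exact rational arithmetic; the truncation order $N=9$ is an arbitrary choice.

```python
from fractions import Fraction
from math import comb, factorial

N = 9  # compare coefficients of u^0 .. u^N

def mul(f, g):
    """Multiply two power series truncated at order N (coefficient lists)."""
    h = [Fraction(0)] * (N + 1)
    for i, a in enumerate(f):
        if a:
            for j, b in enumerate(g):
                if i + j <= N and b:
                    h[i + j] += a * b
    return h

def exp_series(f):
    """exp of a series with zero constant term, truncated at order N."""
    result = [Fraction(0)] * (N + 1)
    result[0] = Fraction(1)
    term = result[:]
    for j in range(1, N + 1):
        term = mul(term, f)                       # f^j, truncated
        result = [r + t / factorial(j) for r, t in zip(result, term)]
    return result

# Bernoulli numbers via sum_{j=0}^{k-1} C(k+1, j) B_j = -(k+1) B_k.
B = [Fraction(1)]
for k in range(1, N + 1):
    B.append(Fraction(-1, k + 1) * sum(comb(k + 1, j) * B[j] for j in range(k)))

# f(u) = sum_{k >= 2} (B_k / k) u^k / k!
f = [Fraction(0)] * (N + 1)
for k in range(2, N + 1):
    f[k] = B[k] / k / factorial(k)

u = [Fraction(0), Fraction(1)] + [Fraction(0)] * (N - 1)
lhs = mul(u, exp_series(f))                       # u * exp(sum B_k/k u^k/k!)
rhs = [(Fraction(1, 2) ** k - Fraction(-1, 2) ** k) / factorial(k)
       for k in range(N + 1)]                     # e^{u/2} - e^{-u/2}
assert lhs == rhs
print("identity verified up to order", N)
```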
Connectedness of character varieties
====================================
The main result
---------------
Let $\muhat$ be a multi-partition $(\mu^1,\dots,\mu^k)$ of $n$ and let $\M_{\muhat}$ be a genus $g$ generic character variety of type $\muhat$ as in §\[char\].
The character variety $\M_\muhat$ is connected (if not empty). \[connectedness\]
Let us now explain the strategy of the proof.
We first need the following lemma.
If $\M_\muhat$ is not empty, its number of connected components equals the constant term in $E(\M_\muhat;q)$.
The number of connected components of $\M_\muhat$ is ${\rm dim}\hspace{.05cm}H^0(\M_\muhat,\C)$ which is also equal to the mixed Hodge number $h^{0,0;0}(\M_\muhat)$.
Poincaré duality implies that
$$h^{i,j;k}(\M_\muhat)=h_c^{d_\muhat-i,d_\muhat-j;2d_\muhat-k}(\M_\muhat).$$ From Formula (\[curious\]) we thus have
$$E(\M_\muhat;q)=\sum_i\left(\sum_k(-1)^kh^
inite constant. It follows that rank-breaking requires the effective sample size $n\ell=O(d\log d / \varepsilon^2 )$ in order to achieve arbitrarily small error of $\varepsilon>0$, on the weakest $\ld=\ell\,d/(2 \kappa)$ items.
Real-World Data Sets {#sec:real}
====================
On real-world data sets on sushi preferences [@Kam03], we show that the data-driven rank-breaking improves over Generalized Method-of-Moments (GMM) proposed by [@ACPX13]. This is a widely used data set for rank aggregation, for instance in [@ACPX13; @APX12; @MG15a; @LLN15; @LB11; @LB11b]. The data set consists of complete rankings over $10$ types of sushi from $n=5000$ individuals. Below, we follow the experimental scenarios of the GMM approach in [@ACPX13] for fair comparisons.
To validate our approach, we first take the estimated PL weights of the 10 types of sushi, using the implementation of the ML estimator from [@Hun04], over the entire input data of $5000$ complete rankings. We take the resulting output as the ground truth $\theta^*$. To create partial rankings and compare the performance of the data-driven rank-breaking to the state-of-the-art GMM approach in Figure \[fig:sushi\_10\_mse\], we first fix $\ell=6$ and vary $n$ to simulate the top-$\ell$-separators scenario by removing the known ordering among the bottom $10-\ell$ alternatives for each sample in the data set (left). We next fix $n=1000$ and vary $\ell$ to simulate top-$\ell$-separators scenarios (right). Each point is averaged over $1000$ instances. The mean squared error is plotted for both algorithms.
![The data-driven rank-breaking achieves smaller error compared to the state-of-the-art GMM approach.[]{data-label="fig:sushi_10_mse"}](sushi10_n_mse-eps-converted-to.pdf "fig:"){width=".3\textwidth"} (x-axis: sample size) ![The data-driven rank-breaking achieves smaller error compared to the state-of-the-art GMM approach.](sushi10_l_mse-eps-converted-to.pdf "fig:"){width=".3\textwidth"} (x-axis: number of se
re crank form expressions as they are, by induction on the lengths of the words in the alphabets ${\cal L}_n$, any word turns out to be equal to (a scalar multiple of) the standard expression of a seat-plan $w$ of $\Sigma_n^1$. Hence we have $$\mbox{rank}\ \widetilde{A_{n}(Q)} \leq |\Sigma_n^1|.$$
As Tanabe showed in [@Ta], $\Sigma_n^1$ makes a basis of ${\mbox{${\mathbb C}$}}\otimes A_{n}(k) = {\mbox{${\mathbb C}$}}\otimes \psi(\widetilde{A_{n}(k)})$ if $k\geq n$. Hence $\mbox{rank}\ {\mbox{${\mathbb C}$}}\otimes A_{n}(z) = |\Sigma_n^1|$ holds as far as $z$ takes any integer value $k\geq n$. This implies that $\psi$ is an isomorphism and we find that the generators and the relations in Theorem 1.2 characterize the partition algebra $A_{n}(Q)$.
Definition of $A_{n-\frac{1}{2}}(Q)$, a subalgebra of $A_n(Q)$ {#sec:5-1}
==============================================================
In this section, we consider a subalgebra $A_{n-\frac{1}{2}}(Q)$ of $A_n(Q)$ generated by the special elements $s_1, \ldots, s_{n-2}$, $f_1, \ldots, f_{n-1}$ and $e_1,\ldots, e_{n-1}$. As we have noted in Remark \[rem:gen\], $\{f_i\}$ ($1\leq i\leq n-2$) and $\{e_i\}$ ($1\leq i\leq n-1$) are written as products of $f=f_1$, $e = e_1$ and $s_1,\ldots, s_{n-2}$. The special element $f_{n-1}$, however, cannot be expressed as a product of other special elements in $A_{n-\frac{1}{2}}(Q)$, since we deleted $s_{n-1}$ from the generators of $A_n(Q)$. Hence $A_{n-\frac{1}{2}}(Q)$ can be defined as a subalgebra of $A_n(Q)$ generated by the following elements: $s_1, \ldots, s_{n-2}$, $f = f_1$, $f_* = f_{n-1}$ and $e = e_1$. We can obtain the defining relations among these generators just as in the case of $A_n(Q)$.
\[def:half-int-alg\] Let $\mathbb{Z}$ be the ring of rational integers and $Q$ the indeterminate. We put ${A}_{\frac{1}{2}}(Q) = \mathbb{Z}[Q]\cdot 1$. For an integer $n\geq2$, ${A}_{n-\frac{1}{2}}(Q)$ is characterized by the generators $$e, f, s_1, s_2, \ldots, s_{n-2}, f_{*} \mbox{(if $n>2$)}$$ and the relations ($R0$), ($R1'$)-($R4'$) and ($
| Analyte | Unit | N[^a^](#t001fn001){ref-type="table-fn"} | Median (Range) | Lower limit (90% CI) | Upper limit (90% CI) |
|---|---|---|---|---|---|
| Packed cell volume | % | 190 | 31.0 (9.0--40.0) | 17.0 (10.0--20.0) | 39.0 (38.0--40.0) |
| Estimated white blood cell count | × 10^9^/L | 190 | 9.0 (2.0--27.0) | 5.0 (2.0--5.0) | 19.0 (16.0--27.0) |
| Heterophils | × 10^9^/L | 190 | 4.7 (0.0--21.6) | 0.0 (0.0--0.8) | 13.6 (12.2--16.1) |
| Lymphocytes | × 10^9^/L | 190 | 3.4 (0.6--9.2) | 1.0 (0.7--1.1) | 7.9 (7.2--9.2) |
| Monocytes | × 10^9^/L | 190 | 0.14 (0.0--1.6) | 0.0 (0.0--0.0) | 1.0 (0.8--1.6) |
| Eosinophils | × 10^9^/L | 190 | 0.3 (0.0--4.8) | 0.0 (0.0--0.0) | 2.9 (1.6--4.0) |
| Basophils | × 10^9^/L | 190 | 0.0 (0.0--0.5) | 0.0 (0.0--0.0) | 0.3 (0.2--0.4) |
| Azurophils | × 10^9^/L | 190 | 0.0 (0.0--1.2) | 0.00 (0.0--0.0) | 0.63 (0.5--1.2) |
| Total protein | g/L | 191 | 35.0 (21.0--60.0) | 22.8 (22.0--24.0) | 52.2 (47.0--53.0) |
| Albumin | g/L | 191 | | | |
|\cR(2,4)|>1$\
By \[prop:qij\] this implies that $\cR(3,5)=\emptyset$ and, hence, for $i\in\{2,4\}$, that $\sum_{k\in C_{i}}|\cR(i,k)|\ge|\cR\setminus\cR^*|-1$. Using equation (\[eq:individual\]) we get that $\sum_{k\in C_2}|\cR(2,k)|+|\cR^*|+2+|\cS_2^*|\le 4+|\cS^*|$. Since $7+|\cS_2^*|\le|\cR|+1+|\cS^*_2|\le \sum_{k\in C_2}|\cR(2,k)| +|\cR^*| +2 +|\cS_2^*|$, we obtain $3+|\cS^*_2|\le |\cS^*|\le 3$. This implies that $S_2^*=\emptyset$ and $|S^*|=3$, contradicting \[prop:last\].\
2. $|\cR(2,4)|=1$\
Now we have (from \[prop:qij\]) that $|\cR(3,5)|\le 1$, so we know that $|\cR(i,j)|\le 1$ for every $i\in\{2,3\},j\in\{4,5\}$. Then $|\cR\setminus\cR^*|\ge 4$ implies that $|\cR(i,j)|=1$, $|\cR\setminus\cR^*|=4$, and $\cR^*=\{X^\pr_3,X^\pr_5\}$. Using equation (\[eq:individual\]) with $x\in\{2,4\}$, we get that $7+|\cS_x^*|\le 4+|\cS^*|$, and so $3\le |\cS^*|-|\cS_x^*|\le 3$, a contradiction.\
2. $Y_2\ne Y_4$\
Here we have $|Y|=2$ and, without loss of generality, $Y_2=\{6\}$ and $Y_4=\{7\}$. From \[prop:ysmall\] we get that $\cR(3,j)\subseteq\{\{3,j,6\}\}$ for each $j\in \{4,5\}$ and $\cR(i,5)\subseteq\{\{i,5,7\}\}$ for each $i\in \{2,3\}$. This implies, in particular, that $\cR(3,5)=\emptyset$. Thus, for $i\in\{2,4\}$, we have $\sum_{k\in C_{i}}|\cR(i,k)|\ge|\cR\setminus\cR^*|-1$. In particular, $$7\le |\cR| \le \sum_{k\in C_2}|\cR(2,k)| + |\cR^*| +1 \le \sum_{k\in C_2}|\cR(2,k)| + |\cR^*_2| +2\ .$$ Using inequality (\[eq:individual\]) with $x=2$ we get that $$2 +|\cS_2^*| + \sum_{k\in C_2}|\cR(2,k)| + |\cR^*_2|\le |\cS^*|+4\ .$$ Together, these imply that $3+|\cS_2^*|\le |\cS^*|\le 3$, and so $|S^*|=3$ and $|S^*_2|=0$, which contradicts \[prop:last\].\
This completes the proof.
Proof of Theorem \[bigchvatal\]
===============================
We now proceed to a proof of Theorem \[bigchvatal\].
Let $\cI_i=\cI\cap \binom{[n]}{i}$, for $i=1,2,3$. We can assume $\cI_1=\mt$, since otherwise, $\cI$ is a star. Similarly, we can assume $|\cI_2|\leq 3$. Thus, we have $|\cI_
our study were spaced plants so it was not possible to calculate yields in t ha^−1^ from the values of individuals. However, in a study comparing 15 diverse genotypes harvested in autumn (September--October), a maximum yield of 19 t ha^−1^ was reported (Clifton‐Brown *et al*., [2001](#gcbb12419-bib-0009){ref-type="ref"}). This maximum value is in agreement with the spring yields of *M. x giganteus* in the United Kingdom being reported at \~14 t ha^−1^, which, with +40% for an autumn harvest, equals 19.6 t ha^−1^. Yields in July were considered to be 30% of peak harvest mass (Fig. [4](#gcbb12419-fig-0004){ref-type="fig"}b), which equalled 5.7 t ha^−1^. The genotypes that produced the highest yields were the hybrids of the mixed population (Hyb 1--4); therefore, their average carbohydrate concentrations were used to calculate potential maximum yields. The maximum potential yield of total NSC in July was 0.56 t ha^−1^, nearly all of which (0.52 t) was in the form of soluble sugar (Table [3](#gcbb12419-tbl-0003){ref-type="table-wrap"}). In October, potential yields of total NSC were 1.3 t ha^−1^, 68% of which was soluble sugar and the other 32% was starch (Table [3](#gcbb12419-tbl-0003){ref-type="table-wrap"}).
######
Predicted yields (t ha^−1^) of nonstructural carbohydrates (NSC) from high‐yielding hybrids
Projected yields (t ha^−1^):

| Harvest | Soluble sugar | Starch | Total NSC |
|---|---|---|---|
| July | 0.52 | 0.04 | 0.56 |
| October | 0.89 | 0.41 | 1.30 |
John Wiley & Sons, Ltd
Saccharification potential {#gcbb12419-sec-0023}
--------------------------
The accessibility of the cell wall carbohydrates at the two time points was assessed by calculating the saccharification potential of cell wall‐derived glucose and xylose. The amount of total cell wall glucose and xylose yielded from acid hydrolysis generally increased between July and October in the mixed population and was significantly different between genotypes
o$ is the initial distribution $Y_0$, then we have that $||\rho||_\pi{\leqslant}{\mathcal{O}}(n^{\delta})$. Applying Theorem \[chernof\] implies that $${\ensuremath{\operatorname{\mathbf{Pr}}\left[\sum_{t=1}^{n}f(Y_t){\geqslant}\mu \cdot n\right]}}
={\mathcal{O}}(n^{\delta}){\mathrm{e}}^{-\Theta(rn^{1-3\delta})}=n^{-\omega(1)}.$$ Therefore, with probability $1-n^{-\omega(1)}$, $${\ensuremath{\operatorname{\mathtt{vis}}(a,b)}}{\leqslant}\sum_{t=0}^nf(Y_t) = {\mathcal{O}}(rn^{1-\delta})= {\mathcal{O}}(n^{1-\delta + o(1)}) = {\mathcal{O}}(n^{1-\varepsilon}),$$ taking $\varepsilon = \delta/2$, say. Taking the union bound over all pairs of agents completes the proof.
Appearance Probability of a Certain Structure {#sub:exist}
=============================================
In this subsection we work towards a proof of Lemma \[lem:col\]. First we will give a useful definition and prove some helpful results. The definition was introduced in [@PJ19].
\[def:uniform\] Suppose that ${\mathcal{A}}$ is an allocation algorithm that sequentially allocates $n$ balls into $n$ bins according to some mechanism. [For a given constant $\alpha > 0$, and for $\Theta(n) = m {\leqslant}n$,]{} we say that ${\mathcal{A}}$ is $(\alpha, m)$-uniform if for every ball $1{\leqslant}t{\leqslant}m=\Theta(n)$ and every bin $i\in [n]$, $${\ensuremath{\operatorname{\mathbf{Pr}}\left[\,\text{ball $t$ is allocated to bin $i$ by ${\mathcal{A}}$} \mid \text{balls $1,2,\ldots, t-1$ {have been} allocated by ${\mathcal{A}}$}\,\right]}}{\leqslant}\frac{\alpha}{n}.$$
[In the above definition, we condition on the allocations of balls $1,\ldots, t-1$ into bins made by $\mathcal{A}$. ]{}
The following result, proved in Appendix \[app:uni\], states that the balanced allocation process is uniform on dynamic hypergraphs.
\[lem:uni\] Fix $d=d(n)$ with $2{\leqslant}d = o(\log n)$ [and suppose that for some constant $\beta {\geqslant}1$, the $s$-uniform dynamic hypergraph $({\mathcal{H}}^{(1)},\ldots, {\mathcal{H}}^{(n)})$ satisfies the $\beta$-bala
nu_j\gamma^j$. Since $\mu \mod p_i \ne 0$, there exists $j$ such that $\mu_j \mod p_i \ne 0$. So $(a_\tau\mod p_i)=(\mu_j \mod p_i)^{-1}(\nu_{j+{\langle \bu_\tau,\bz \rangle}} \mod p_i)$. So we can find $a_\tau \mod p_i$ for each $i\in[r]$. Finally we use the Chinese Remainder Theorem to find $a_\tau \in \Z_m$.
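The final reconstruction step, recovering $a_\tau \in \Z_m$ from its residues $a_\tau \bmod p_i$, is the textbook Chinese Remainder computation. A minimal Python sketch (the residues and moduli below are illustrative, and the moduli are assumed pairwise coprime):

```python
from math import prod

def crt(residues, moduli):
    """Solve x = r_i (mod p_i) for pairwise-coprime moduli p_i."""
    m = prod(moduli)
    x = 0
    for r, p in zip(residues, moduli):
        n = m // p
        # pow(n, -1, p) is the modular inverse of n mod p (Python 3.8+).
        x += r * n * pow(n, -1, p)
    return x % m

# Example: recover a value in Z_105 from its residues mod 3, 5, 7.
print(crt([2, 3, 2], [3, 5, 7]))  # 23
```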
Proof of Lemma \[lem-lambda\]
-----------------------------
For any $\blam=[\alpha_1,\beta_1, \cdots ,\alpha_q, \beta_q]\in \cR^{2q}$ we can define a function $h:S\cup{\{0\}}\mapsto \cR$ as: $$h(\ell) =(\blam M)_\ell = \left(\sum_{i=1}^q \alpha_i \gamma^{t_i\ell} \right)+ \ell \left(\sum_{i=1}^q \beta_i \gamma^{t_i \ell}\right).$$ Our goal is then to construct an $h$ of this form such that $$\begin{aligned}
h(\ell)
\begin{cases}
= 0 &\mbox{if}\ \ell\in S\\
= \mu & \mbox{if}\ \ell=0
\end{cases}\end{aligned}$$ where $(\mu \mod p_i) \ne 0\ \forall i\in[r]$.
Notice that, by Chinese Remaindering, $$\label{eq-isomorphism}
\cR = \cR_{m,m} \cong \cR_{p_1,m} \times \ldots \times \cR_{p_r,m},$$ where we recall that $\cR_{p_i,m} = \Z_{p_i}[\gamma]/(\gamma^m-1)$. Therefore, we also get that, for a formal variable $x$, the rings of univariate polynomials also satisfy $$\cR[x] \cong \cR_{p_1,m}[x] \times \ldots \times \cR_{p_r,m}[x].$$ In other words, any family of polynomials $f_i \in \cR_{p_i,m}[x]$, $i\in [r]$ can be ‘lifted’ to a single polynomial $f \in \cR[x]$ so that $ (f \mod p_i) = f_i$ for all $i$ (reducing $f$ mod $p_i$ is done coordinate-wise). Moreover, since this lift is done coefficient-wise (using Eq.\[eq-isomorphism\]), we get that the degree of $f$ is equal to the maximum of the degrees of the $f_i$’s.
We begin by constructing, for each $i \in [r]$ the following polynomial $f_i(x)\in \cR_{p_i,m}[x]$: $$f_i(x)=\prod_{\ell\in S,\ \ell=0\mod p_i}(x-\gamma^\ell)$$ The degree of $f_i$ is $2^{r-1}-1=q-1$ so, by the above comment, we can find a polynomial $f(x)\in \cR[x]$ of degree $q-1$ such that $f(x)\equiv f_i(x) \mod p_i$ for all $i\in [r]$. Define $\alpha_i, i\in[q]$ to be the coefficients of the
rectangular. As we observed above, $\delta(\muhat)\geq (2g-2)n$. Hence if $\delta(\muhat)=0$ then $g=1$ or $g=0$. If $g=1$ then necessarily $\mu^i=(n)$ and $\Gamma$ is the Jordan quiver $J$.
If $g=0$ then $\delta=0$ is equivalent to the equation $$\label{affine-eqn}
\sum_{i=1}^k\frac{1}{l_i}=k-2,$$ where $l_i:=n/t_i$ is the length of $\mu^i=(t_i^{n/t_i})$. In solving this equation, any term with $l_i=1$ can be ignored. It is elementary to find all of its solutions; they correspond to the cases $\Gamma=\tilde D_4,\tilde E_6,\tilde E_7$ or $\tilde E_8$.
We summarize the results in the following table $$\label{table}
\begin{array}{|c|c|c|c|}
\hline
\Gamma & l_i& n & \muhat^* \\
\hline
J & (1) & 1& (1) \\
\tilde{D}_4 & (2,2,2,2) & 2 &(1,1),\quad (1,1),\quad (1,1),\quad (1,1)\\
\tilde{E}_6& (3,3,3) & 3 & (1,1,1),\quad (1,1,1),\quad (1,1,1)\\
\tilde{E}_7 & (2,4,4) & 4 &(2,2),\quad (1,1,1,1),\quad (1,1,1,1)\\
\tilde{E}_8 & (2,3,6) & 6 & (3,3),\quad (2,2,2),\quad (1,1,1,1,1,1)\\
\hline
\end{array}$$ where we listed the cases with smallest possible positive values of $n$ and $k$ and the corresponding multi-partition $\muhat^*$.
Proposition \[affine-descrip\] is due to Kostov, see for example [@simpson1 p.14].
We will need the following result about $\Delta$.
\[Delta-ineq\] Let $\muhat\in \left(\calP_n\right)^k$ and $\nuhat^p=(\nu^{1,p},\ldots,\nu^{k,p})\in \left(\calP_{n_p}\right)^k$ for $p=1,\ldots, s$ be non-zero multi-partitions such that up to permutations of the parts of $\nu^{i,p}$ we have $$\mu^i=\sum_{p=1}^s \nu^{i,p}, \qquad \qquad i=1,\ldots,k.$$ Assume that $\delta(\muhat)\geq 0$. Then $$\sum_{p=1}^s\Delta(\nuhat^p)\leq\Delta(\muhat).$$ Equality holds if and only if
\(i) $s=1$ and $\muhat=\nuhat^1$.
or
\(ii) $\Gamma$ is affine and $\muhat,\nuhat^1,\ldots,\nuhat^s$ correspond to positive imaginary roots.
We start with the following. For partitions $\mu,\nu$ define $$\nrm_\mu(\nu):=\mu_1 |\nu|^2 -|\mu|\sum_i \nu_i^2.$$ Note that $\nrm_\mu(\mu)=|\mu|\,\sigma(\mu)$.
\[nrm-ineq\] Let $\nu^1,\ldots,\nu
guarantees:
https://github.com/dhall-lang/dhall-lang/wiki/Safety-guarantees
The main risks in executing potentially malicious Dhall code that is not
protected by a semantic integrity check are:
* Using more computer resources than you expected (i.e. network/CPU/RAM)
* Unintentional DDoS (as you mentioned)
* The malicious import returning a value which changes the behavior of your program
If you protect the import with a semantic integrity check then the malicious
import can no longer return an unexpected value, which eliminates the third
issue (changing program behavior). Also, upcoming versions will cache imports
based on the semantic integrity check, which would mitigate the second issue
(DDoS) for all but the first time you interpret the program. There is also a
`dhall freeze` subcommand which takes a program and automatically pins imports
to their most recent value using semantic integrity check.
Regarding exfiltration, the import system guarantees that only local imports
can access sensitive information such as file contents or environment
variables. See:
https://github.com/dhall-lang/dhall-lang/wiki/Safety-guarantees#cross-site-scripting-xss
The only way that a remote import can obtain that information is if a local
import supplied that information via Dhall's support for custom headers. In
fact, this is actually an intended use of that feature (i.e. a local import
fetching a Dhall expression from a private GitHub repository using an access
token retrieved from an environment variable).
So in other words the threat model is that as long as you can trust local
imports then you can transitively trust remote imports because they cannot
access your local filesystem or environment variables unless you explicitly
opt into that via a local import. I think that's a reasonable threat model
because if you can't trust the contents of your local filesystem then you can't
even tr
frac{i m \left(u^2-1\right)}{u^2+1} & 0 & -\frac{i m \left(u^4+6 u^2-3\right)}{4 \left(u^2+1\right)} \\
\mathcal{D}_{Ru} & 0 & \frac{i m}{2 \left(u^2+1\right)} & 0 & -\frac{i m \left(u^4+6 u^2-3\right)}{8 \left(u^4-1\right)} & 0 \\
\mathcal{D}_{uu} & 0 & 0 & 0 & 0 & 0 \\
\mathcal{D}_{TR} & 0 & -\frac{u \left(u^2-3\right)}{\left(u^2+1\right)^2} & -\frac{\left(u^2-1\right) \left(-u^4-6 u^2+h \left(u^2+1\right)^2+3\right)}{2 \left(u^2+1\right)^3} & \frac{u \left(u^2-3\right)}{\left(u^2+1\right)^2} & -\frac{\left(u^2-1\right) \left(u^4+6 u^2-3\right)}{2 \left(u^2+1\right)^3} \\
\mathcal{D}_{Tu} & 0 & \frac{h+2}{2 \left(u^2+1\right)} & 0 & 0 & 0 \\
\mathcal{D}_{\Phi R} & 0 & 0 & \frac{2 \left(u^2-1\right)^2}{\left(u^2+1\right)^3} & 0 & -\frac{\left(u^2-1\right) \left(h \left(u^2+1\right)^2+4 \left(u^2-1\right)\right)}{2 \left(u^2+1\right)^3} \\
\mathcal{D}_{\Phi u} & 0 & 0 & 0 & \frac{h+1}{2 \left(u^2+1\right)} & 0 \\
\end{array}$$
\[tab:b-matrix-LEE\]
$
\begin{array}{c|cc}
\mathcal{D}_{AB} & C_{TT}(u) & C_{T\Phi }(u) \\
\noalign{\smallskip}
\hline \hline \noalign{\smallskip}
\mathcal{D}_{TT} & \frac{\left(u^2-1\right) \left(u^4+2 u^2+2 h^2 \left(u^2+1\right)^2+6 h \left(u^2+1\right)^2+9\right)}{\left(u^2+1\right)^5} & -\frac{u^8-28 u^6-42 u^4+36 u^2+2 h^2 \left(u^8+8 u^6+10 u^4-3\right)+3 h \left(u^8+8 u^6+10 u^4-3\right)-15}{2 \left(u^2+1\right)^5} \\
\mathcal{D}_{T\Phi} & \frac{\left(u^2-1\right) \left(2 h^2 \left(u^2+1\right)^2+5 h \left(u^2+1\right)^2+8\right)}{\left(u^2+1\right)^5} & -\frac{h^2 \left(u^4+10 u^2-7\right) \left(u^2+1\right)^2+h \left(u^4+10 u^2-7\right) \left(u^2+1\right)^2-8 \left(3 u^6+4 u^4-5 u^2+2\right)}{2 \left(u^2+1\right)^5} \\
\mathcal{D}_{\Phi \Phi } & \frac{2 \left(u^2-1\right) \left(h^2 \left(u^2+1\right)^2+2 h \left(u^2+1\right)^2+4\right)}{\left(u^2+1\right)^5} & -\frac{2 \left(u^2-1\right) \left(-3 u^4-6 u^2+2 h^2 \left(u^2+1\right)^2+h \left(u^2+1\right)^2+5\right)}{\left(u^2+1\right)^5} \\
\mathcal{D}_{RR} & \frac{8 \left(u^6-8 u^4+9 u^2-2\right)-m^2 \left(u^2+1\ri
}_{{\widehat{S}}} = \bigotimes_{j \in {\widehat{S}}}^n E_j.$$ Then, a standard result about confidence sets for medians along with union bound implies that $\hat{E}_{{\widehat{S}}}$ is a $1-\alpha$ confidence set for the median LOCO parameters, uniformly over $\mathcal{P}_n$.
For every $n$, $$\inf_{w_n \in \mathcal{W}_n} \inf_{P\in {\cal P}_{n}}\mathbb{P}(\phi_{{\widehat{S}}} \in
\hat{E}_{{\widehat{S}}}) \geq 1-\alpha.$$
[**Remark.**]{} Of course, if the median of $\delta_i(j)$ is not unique, the length of the corresponding confidence interval does not shrink as $n$ increases. But if the median is unique for each $j \in {\widehat{S}}$, and under additional smoothness conditions, the maximal side length of the confidence rectangle $\hat{E}_{{\widehat{S}}}$ is of order $O \left(
\sqrt{\frac{\log k + \log n}{n}} \right)$, with high probability.
\[thm::median\] Suppose that there exist positive numbers $M$ and $\eta$ such that, for each $j \in {\widehat{S}}$, the cumulative distribution function of each $\delta_i(j)$ is differentiable with derivative no smaller than $M$ at all points at a distance no larger than $\eta$ from its (unique) median. Then, for all $n$ for which $$\frac{1}{n} +
\sqrt{\frac{1}{2n}\log\left(\frac{2k}{\alpha}\right)} + \sqrt{
\frac{ \log 2kn}{2n} }
\leq \eta M,$$ the sides of $\hat{E}_{{\widehat{S}}}$ have length uniformly bounded by $$\frac{2}{M} \left( \frac{1}{n} +
\sqrt{\frac{1}{2n}\log\left(\frac{2k}{\alpha}\right)} + \sqrt{
\frac{ \log 2kn}{2n} } \right),$$ with probability at least $1 - \frac{1}{n}$.
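The order-statistic construction behind such median confidence sets can be sketched numerically. The function below is a standard distribution-free interval for a single median built from binomial order statistics, shown here as a generic illustration and not the paper's specific $\hat{E}_{{\widehat{S}}}$:

```python
import numpy as np
from scipy import stats

def median_ci(x, alpha=0.05):
    """Distribution-free confidence interval for the median.

    Uses the order statistics X_(r) <= median <= X_(n-r+1), with r the
    largest index whose binomial tail stays below alpha/2, so that the
    coverage is at least 1 - alpha for any continuous distribution.
    """
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    # smallest k with P(Bin(n, 1/2) <= k) >= alpha/2; then F(k-1) < alpha/2
    k = int(stats.binom.ppf(alpha / 2, n, 0.5))
    r = max(k, 1)  # 1-based lower order-statistic index
    return x[r - 1], x[n - r]
```

As the theorem above quantifies under a density lower bound near the median, the width of such intervals shrinks at a $\sqrt{\log / n}$ rate.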
Future Prediction Error
-----------------------
To construct a confidence interval for the future prediction error parameter $\rho_{{\widehat{S}}}$ consider the set $$\hat{F}_{{\widehat{S}}} = \Bigl[\hat\rho_{{\widehat{S}}} - z_{\alpha/2} s/\sqrt{n},\ \hat\rho_{{\widehat{S}}} + z_{\alpha/2} s/\sqrt{n}\Bigr]$$ where $z_{\alpha/2}$ is the upper $\alpha/2$ quantile of a standard normal distribution, $$\hat\rho_{{\widehat{S}}} = \frac{1}{n}\sum_{i\in {\cal I}_2} A_i, \quad
s
uiv& E_2-E_1
\nonumber\\
&=&p\frac W2 \frac{1-x_-^2}{1+x_-^2-x_-^{2\gamma}}x_-^{2\gamma}
\left(x_--x_-^{-1}\right) \; ,\end{aligned}$$ or $$\label{eq:relative}
\frac{\Delta E}{E_0} = -\frac{\left(1-x_-^2\right)^2x_-^{2\gamma-1}}
{\left(1+x_-^2-x_-^{2\gamma}\right) \sqrt{\frac{4\nu^2}{W^2}+1}}\;.$$
As with a single defect, the energy eigenvalues are replaced by the corresponding quasienergies in the presence of time-periodic forcing, and the energy splitting turns into a quasienergy splitting. The above result (\[eq:relative\]), with $W$ replaced by $W_{\rm eff}$ according to Eq. (\[eq:qeband\]), should also be a good approximation to the quasienergy splitting if the driving frequency is sufficiently high.
In order to check this hypothesis, the time-dependent Schrödinger equation for the periodically forced two-defect system has been solved numerically, and the quasienergies for the localized states have been obtained. The results for two defects at $\gamma = \pm 3$, with $\nu/W = 0.1$ and $\hbar\omega/W = 7.5$, are plotted in Fig. \[fig:delta\] as functions of the scaled amplitude $eFd/(\hbar\omega)$. As can be seen, the agreement between the analytical approximation and the exact numerical data becomes excellent when $eFd/(\hbar\omega) > 1$.
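The relative splitting of Eq. (\[eq:relative\]) is easy to evaluate directly; the following sketch (parameter names $x$, $\gamma$, $\nu/W$ chosen here for illustration) makes the qualitative behavior explicit:

```python
import numpy as np

def relative_splitting(x, gamma, nu_over_w):
    # Delta E / E_0 from Eq. (relative) for defects at +/- gamma:
    # -(1 - x^2)^2 x^(2*gamma - 1) / ((1 + x^2 - x^(2*gamma)) sqrt(4 nu^2/W^2 + 1))
    num = -(1.0 - x**2) ** 2 * x ** (2 * gamma - 1)
    den = (1.0 + x**2 - x ** (2 * gamma)) * np.sqrt(4.0 * nu_over_w**2 + 1.0)
    return num / den
```

For $0 < x_- < 1$ the splitting is negative and vanishes in both limits $x_-\to 0$ and $x_-\to 1$, consistent with exponentially localized defect states whose overlap controls the splitting.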
Controlled population transfer {#sec:twodefs}
==============================
It is assumed now that initially, at time $t = 0$, the particle is localized at one of the two defects, and the periodic force is present. Denoting the two Floquet functions associated with the defects at the given driving amplitude $F$ by $\left|u_{1,2}^{F}(t)\right>$, and their quasienergies by $\varepsilon^{F}_{1,2}$, the initial state is given by a superposition $$\label{eq:ini}
\left|\psi(0)\right> = \frac1{\sqrt{2}}\big( \left|u_1^{F}(0)\right>
\pm \left|u_2^{F}(0)\right> \big) \; .$$ Under the influence of forcing with constant amplitude, this state evolves in time according to $$\begin{aligned}
\left|\psi(t)\right> & = & \frac1{\sqrt{2}}\Big( \left|u_1^{F}(t)
we expand the exponential and Laguerre in Eq. (\[auxf1\]) up to second order in $\eta$ to find $$\begin{aligned}
\label{fauxap}
\left|{f_n^m}\right|^{2} &\approx& \frac{(n+m)!}{n!m!^2}
\left[1 - \frac{2n+m+1}{m+1} \eta^2 \right]{\eta^{2m}}. \end{aligned}$$ To obtain this expression we used $d L_n^m(x)/d x = - L_{n-1}^{m+1} (x)$ and $L_n^m(0) = (n+m)!/(n!m!)$ [@gradshteyn]. Now, by keeping just terms up to $\eta^2$ in $\left|f_n^m\right|^{2}$, one gets $$\label{fauxap2}
\left|f_n^m\right|^{2} \approx [1 - (2n+1) \eta^2] \delta_{m 0 } +
(n+1) \eta^2 \delta_{m 1}.$$ Terms with $m\ge 2$ appear only in higher powers of $\eta$. Notice that for $m = 1$, $\left|f_n^1\right|^{2} \to 0$ and $\mathcal{Z}_{\pm}(\lambda_f) \to \mathcal{Z}(\lambda_i)$ as $\eta \to 0$, which makes $\mathcal {L} \to 0$. On the other hand, $\left|f_n^0\right|^{2}$ in Eq. (\[fauxap2\]) is a concave function of $\eta$ with $\lim_{\eta\to 0}\left |f_n^0\right|^{2}=1$, $\forall n$. Consequently, $\mathcal{L}\neq 0$ as $\eta\to 0$. All these features can be seen from Fig. \[fig1L1\]. For $m > 1$, only higher order terms in $\eta$ contribute to $|f_n^m|$, forcing $\mathcal Z_{\pm}(\lambda_f) \to \mathcal Z(\lambda_i)$ as $\eta \to 0$, just like what happens when $m=1$. The physical explanation for the distinct behavior found in the carrier transition $m=0$ lies in the system Hamiltonian before and after laser application. From Eq. (\[Omegaux1\]), one can see that $$\label{etalim}
\lim_{\eta \to 0} \hat{\Omega}_{m}^\pm = \frac{\Omega}{2} \delta_{m 0}{{\sf 1 \hspace{-0.3ex} \rule{0.1ex}{1.52ex}\rule[-.01ex]{0.3ex}{0.1ex}}},$$ where ${{\sf 1 \hspace{-0.3ex} \rule{0.1ex}{1.52ex}\rule[-.01ex]{0.3ex}{0.1ex}}}$ is the identity operator for the center-of-mass motion. By taking Eqs. (\[hamct\]) and (\[etalim\]) into account, it follows that, when $m=0$, the laser is able to drive transitions between the two electronic states, even when $\eta = 0$. In other words, the pre-
events reading from or clearing the output buffers of the partition $j$. For any memory area $a$ of the system ($\mathcal{M}$), $a$ is a memory area in the partition $j$ ($A_j$) if the values of $a$ in states $s$ and $s'$ are not equal.
The No-Infiltration Property states that data processing in a partition is not influenced by data outside that partition, which is formulated as follows. $$\begin{aligned}
& s_1,s_2,s'_1,s'_2 \in S \wedge s'_1 = T(s_1,e) \wedge \\
& s'_2 = T(s_2,e) \wedge (\forall a \in A_i) \; a_{s_1} = a_{s_2} \\
& \Rightarrow (\forall a \in A_i)a_{s'_1} = a_{s'_2}
\end{aligned}$$
The Separation of Control Property states that when data processing is in progress in a partition, no data is being processed in other partitions until processing in the first partition terminates, which is formulated as follows. $$\begin{aligned}
& s,s' \in S \wedge s' = T(s,e) \; \wedge \\
& c_s \neq j \wedge c_{s'} \neq j \\
& \Rightarrow (\forall a \in A_j) \; a_{s} = a_{s'}
\end{aligned}$$ where $c_s$ is the id of the partition that is processing data in state $s$.
The Kernel Integrity Property states when data processing is in progress in a partition, the data stored in the shared memory area do not change, which is formulated as follows. $$\begin{aligned}
s,s' \in S \wedge s' = T(s,e) \wedge e \in P_i \\
\Rightarrow G_s = G_{s'}
\end{aligned}$$ where $G$ is the single shared memory area and contains all programs and data not residing in any memory area of partitions, $P_i$ is the internal event set of the partition $i$.
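These state-transition properties can be prototyped as executable checks. The sketch below is a toy Python model (the state representation is hypothetical, not the paper's formal system): a state maps partitions to their memory contents, and No-Infiltration is tested by running the same event from two states that agree on the observed partition.

```python
def transition(state, event):
    # Toy transition: an event (partition, value) writes only to
    # the memory area of its own partition.
    partition, value = event
    new_state = dict(state)
    new_state[partition] = new_state[partition] + (value,)
    return new_state

def no_infiltration(T, s1, s2, event, partition):
    # No-Infiltration: if s1 and s2 agree on the partition's memory,
    # the successor states agree on it as well, i.e. data outside
    # the partition has no influence on its processing.
    if s1[partition] != s2[partition]:
        return True  # premise fails; the property holds vacuously
    return T(s1, event)[partition] == T(s2, event)[partition]
```

A model checker or property-based tester would quantify this over all reachable state pairs and events; the point here is only that the formula above translates directly into a decidable check on a concrete transition function.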
### Information Flow Security Properties
In the domain of operating systems, state-event based information flow security properties are often applied [@Murray12]. We present two major categories of information flow security properties: the GWV policy and noninterference.
- GWV Policy
Greve, Wilding and Vanfleet propose the GWV security policy in [@Greve03] to model separation kernels. The separation axiom of this policy is as follows.
$$\label{eq:gwv}
\begin{aligned}
& selectlist(
45 (4) 0.78542 (19) 0.0251 (11)
H66 1.0388 0.3787 0.8055 0.030\*
C67 0.9514 (3) 0.3723 (4) 0.80778 (19) 0.0286 (12)
H67 0.9612 0.3739 0.8434 0.034\*
C68 0.8912 (3) 0.3677 (4) 0.77874 (19) 0.0277 (12)
H68 0.8595 0.3658 0.7943 0.033\*
C69 0.8771 (2) 0.3658 (3) 0.72647 (18) 0.0191 (10)
H69 0.8357 0.3630 0.7065 0.023\*
C70 0.9557 (2) 0.4595 (3) 0.62351 (17) 0.0142 (9)
C71 1.0132 (2) 0.4363 (3) 0.61976 (17) 0.0153 (9)
H71 1.0264 0.3718 0.6227 0.018\*
C72 1.0514 (2) 0.5073 (3) 0.61178 (17) 0.0173 (9)
H72 1.0904 0.4913 0.6087 0.021\*
C73 1.0329 (2) 0.6012 (4) 0.60827 (18) 0.0205 (10)
H73 1.0593 0.6496 0.6030 0.025\*
C74 0.9756 (2) 0.6252 (4) 0.61239 (19) 0.0226 (11)
H74 0.9633 0.6900 0.6108 0.027\*
C75 0.9368 (2) 0.5540 (3) 0.61882 (19) 0.0193 (10)
H75 0.8970 0.5699 0.6200 0.023\*
O1 0.71217 (18) 0.7272 (3) 0.26992 (15) 0.0286 (9)
N1 0.78968 (19) 0.6532 (3) 0.24912 (16) 0.0215 (9)
C76 0.7377 (2) 0.7023 (3) 0.2391 (2) 0.0230 (11)
H76 0.7188 0.7196 0.2050 0.028\*
C77 0.8220 (2) 0.6244 (4) 0.3001 (2) 0.0287 (12)
H77A 0.8074 0.6621 0.3239 0.043\*
H77B 0.8659 0.6350 0.3065 0.043\*
H77C 0.8145
^\chi ({\alpha },\beta _\nu ;t)$ for all $\chi \in {\overline{{\mathcal{X}}}}_3$, ${\alpha }\in {\mathbb{N}}_0^I$, $\beta _\nu \in R^\chi _+$, and $t\in
{\mathbb{N}}$ with $t<{b^{\chi}} (\beta _\nu )$ by $$\begin{aligned}
{P}^\chi ({\alpha },\beta _\nu ;t)=\Big|\Big\{(m_1,\dots ,m_n)\in {\mathbb{N}}_0^n\,\big|\,
\sum _{\mu =1}^n m_\mu \beta _\mu ={\alpha },\,m_\nu \ge t,\quad &\\
m_\mu <{b^{\chi}} (\beta _\mu )\quad \text{for all $\mu \in \{1,2,\dots ,n\}$}
\Big\}\Big|.&
\label{eq:PF2}
\end{aligned}$$
\[th:Shapdet2\] Let $\chi \in {\mathcal{X}}_3$. Assume that $\chi (\beta ,\beta )\not=1$ for all $\beta \in R^\chi _+$. The Shapovalov determinant of $U(\chi )$ is the family $(\det ^\chi _{\alpha })_{{\alpha }\in {\mathbb{N}}_0^I}$, where $$\begin{aligned}
\label{eq:det2}
\det \nolimits ^\chi _{\alpha }=
\prod _{\beta \in R^\chi _+}
\prod _{t=1}^{{b^{\chi}} (\beta )-1}
({\rho ^{\chi}} (\beta )K_{\beta }
-\chi (\beta ,\beta )^t L_{\beta })
^{{P}^\chi ({\alpha },\beta ;t)}.
\end{aligned}$$
Let ${\alpha }\in {\mathbb{N}}_0^I$. Choose a basis $\{F'_1,\dots ,F'_k\}$ of $U^-(\chi )_{-{\alpha }}$ consisting of monomials $F_{i_1}F_{i_2}\cdots
F_{i_l}$, where $k,l\in {\mathbb{N}}_0$ and $i_1,\dots ,i_l\in I$. Identify $\oplus _{\beta ,\gamma \in {\mathbb{N}}_0^I,\,\beta +\gamma ={\alpha }}\Fie
K_\beta L_\gamma $ with ${\bar{{\Bbbk }}}^N$ for an appropriate $N\in {\mathbb{N}}$. By the commutation relations – and the definition of ${\mathrm{Sh}}$, the map $$d:{\overline{{\mathcal{X}}}}\to {\bar{{\Bbbk }}}^N, \quad \chi '\mapsto \det ({\mathrm{Sh}}(F'_i,F'_j))_{i,j\in
\{1,2,\dots ,k\}}$$ is a morphism of affine varieties. Further, $d(\chi )\not=0$ by Lemma \[le:Shfcoeffs\], the choice of $\{F'_1,\dots ,F'_k\}$, and the nondegeneracy of the pairing $\eta $, see Prop. \[pr:sHpdef\](iv). Recall the definition of $|\beta |$, $\beta \in {\mathbb{Z}}^I$, from Eq. . Restrict $d$ to the set $V^\chi _{\underline{n}}$ defined in Prop. \[pr:Vchi\], with $n_\beta =|{\alpha }|/|\beta |$
rent from $S$. If $i>2$ then we apply Lemma \ref{lemma7} to move the $2$ from row $3$ to row $2$; neglecting the tableau not dominated by $S$ and (in the case $j>2$) neglecting the tableau with two rows equal to $\young(j)$, the only tableau we get is}}
U''[i,j]&={\text{\footnotesize$\gyoungx(1.2,;1;1;2;{{\begin{tikzpicture}[baseline=0cm]\draw[thick,densely dotted](0,.13)--++(1*.25,0);\end{tikzpicture}}};{\hat\jmath};{{\begin{tikzpicture}[baseline=0cm]\draw[thick,densely dotted](0,.13)--++(1*.25,0);\end{tikzpicture}}};v;{b\!\!+\!\!3};{b\!\!+\!\!5};{b\!\!+\!\!6}_2{{\begin{tikzpicture}[baseline=0cm]\draw[thick,densely dotted](0,.13)--++(2*.25,0);\end{tikzpicture}}};u,;2;j;{b\!\!+\!\!4},;i,;3,;{{\begin{tikzpicture}[baseline=0cm]\draw[thick,densely dotted](0,.13-1*.125)--++(0,1*.25);\end{tikzpicture}}},;{\hat\imath},;{{\begin{tikzpicture}[baseline=0cm]\draw[thick,densely dotted](0,.13-1*.125)--++(0,1*.25);\end{tikzpicture}}},;{b\!\!+\!\!2})$}};\end{aligned}$$]{} $i-3$ more applications of Lemma \[lemma7\] show that ${\hat\Theta_{U''[i,j]}}$ equals a semistandard homomorphism different from ${\hat\Theta_{S}}$.
We conclude that ${\hat\Theta_{U[i]}}$ equals ${\hat\Theta_{S}}$ plus a linear combination of homomorphisms indexed by tableaux which are either not dominated by $S$ or semistandard and different from $S$. The homomorphism ${\hat\Theta_{V[i]}}$ is analysed in exactly the same way, interchanging $b+3$ and $b+4$.
Putting these cases together, we find that the coefficient of ${\hat\Theta_{S}}$ in $\sigma$ is the total number of tableaux of the form $T[i]$, $U[i]$ or $V[i]$, i.e. $(b+2-v)+2(b+1)$, which is odd.
It remains to consider the case $v=b+3$. In this case only the tableaux $V[i]$ appear, but the analysis of these tableaux is exactly the same, so the coefficient of ${\hat\Theta_{S}}$ in $\sigma$ is the number of tableaux $V[i]$, i.e. $b+1$, which again is odd.
It turns out that up to scaling, $\sigma$ is the only homomorphism from $S^\la$ to $S^\mu$.
\[cdhomdim1\] With $\la,\mu$ as above, $$\dim_\bbf{
g(a, c'), \end{aligned}$$ where $c' := \lvert c / b \rvert^{\frac{1}{a}}$. Therefore, for any $c \in \mathbb{R}$, we have $$\begin{aligned}
\label{E[Pe(Z+c)]-2}
\operatorname{{E}}[\Pe(Z + c)]
= \frac{(k_{1} - k_{2}) c}{2}
+ \frac{(k_{1} + k_{2}) \lvert c \rvert}{2 \G(a)}
\g\left(a, \left\lvert \frac{c}{b} \right\rvert^{\frac{1}{a}} \right)
+ \frac{(k_{1} + k_{2}) b}{2 \G(a)}
\G\left(2a, \left\lvert \frac{c}{b} \right\rvert^{\frac{1}{a}} \right). \end{aligned}$$
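The closed form above can be evaluated directly with regularized incomplete gamma functions. A sketch assuming SciPy's conventions, where `gammainc` and `gammaincc` are the regularized lower and upper functions, so they must be rescaled by $\Gamma(\cdot)$ to match $\g$ and $\G$ in the text:

```python
import numpy as np
from scipy.special import gamma, gammainc, gammaincc

def expected_loss(c, a, b, k1, k2):
    # E[Pe(Z + c)] from the closed form above, with
    # lower = gamma(a, x) (lower incomplete) and
    # upper = Gamma(2a, x) (upper incomplete), x = |c/b|^(1/a).
    x = abs(c / b) ** (1.0 / a)
    lower = gamma(a) * gammainc(a, x)
    upper = gamma(2 * a) * gammaincc(2 * a, x)
    return ((k1 - k2) * c / 2.0
            + (k1 + k2) * abs(c) * lower / (2.0 * gamma(a))
            + (k1 + k2) * b * upper / (2.0 * gamma(a)))
```

As a sanity check, at $c=0$ only the last term survives, giving $(k_1+k_2)\,b\,\Gamma(2a)/(2\Gamma(a))$; for $a=b=k_1=k_2=1$ (a standard Laplace $Z$ with symmetric loss) this is $\operatorname{E}|Z| = 1$, and more generally the formula reproduces $\operatorname{E}|Z+c| = c + e^{-c}$ for $c \ge 0$.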
**[Variance of the loss]{}**
----------------------------
Now let us calculate the variance of the loss $\Pe(Z + c)$ for $c \in \mathbb{R}$. Put $\beta := (2 a b \G(a))^{-1}$; then, we have $$\begin{aligned}
\operatorname{{E}}[\Pe(Z + c)^{2}]
&= \int_{- \infty}^{+\infty} \Pe(z + c)^{2} f_{Z}(z) dz
\allowdisplaybreaks \\
&= k_{2}^{2} \beta \int_{- \infty}^{- c} (z + c)^{2}
\exp{\left( - \left\lvert \frac{z}{b} \right\rvert^{\frac{1}{a}} \right)} dz
+ k_{1}^{2} \beta \int_{- c}^{+\infty} (z + c)^{2}
\exp{\left( - \left\lvert \frac{z}{b} \right\rvert^{\frac{1}{a}} \right)} dz. \end{aligned}$$ Replace $z$ with $b z$ to get $$\begin{aligned}
\operatorname{{E}}[\Pe(Z + c)^{2}]
= k_{2}^{2} b \beta \int_{- \infty}^{- c / b} (b z + c)^{2}
\exp{\left( - \lvert z \rvert^{\frac{1}{a}} \right)} dz
+ k_{1}^{2} b \beta \int_{- c / b}^{+\infty} (b z + c)^{2}
\exp{\left( - \lvert z \rvert^{\frac{1}{a}} \right)} dz. \end{aligned}$$ When $c \geq 0$, we have $$\begin{aligned}
\operatorname{{E}}[\Pe(Z + c)^{2}]
&= k_{2}^{2} b \beta \int_{- \infty}^{- c / b} (b z + c)^{2}
\exp{\left( - (- z)^{\frac{1}{a}} \right)} dz \\
&\quad + k_{1}^{2} b \beta \int_{- c / b}^{0} (b z + c)^{2}
\exp{\left( - (- z)^{\frac{1}{a}} \right)} dz
+ k_{1}^{2} b \beta \int_{0}^{+\infty} (b z + c)^{2}
\exp{\left( - z^{\frac{1}{a}} \right)} dz \allowdisplaybreaks \\
&= k_{2}^{2} b \beta \int_{c / b}^{+\infty} (- b z + c)^{2}
\exp{\left( - z^{\frac{1}{a}} \right)} dz \\
&\quad + k_{1}^{2} b \beta \int_{0}^{c / b} (- b z + c)^{2}
\exp{\left( - z^{\frac{1}{a}} \right)} dz
+ k_{1}^{2}
-1}\left( |\cos\Theta_{0}|g\epsilon^{\prime \prime} \right)\right]} }{4 \left( e^{\frac{2 \pi \omega}{g}}-1\right)}
\notag
\\[1ex]
& \hspace{13ex}
\times \frac{\sin \left( \frac{\omega}{g} \cosh^{-1} (c) \right)}{\sqrt{1-{c^{\prime}}^2} \sqrt{1-{c^{\prime \prime}}^2} \sinh \left[ \cosh^{-1} (c) \right] } \; \; \bigg\}_{\epsilon^\prime = \epsilon, \epsilon^{\prime \prime} = \epsilon}\end{aligned}$$ where $c = [1 - ({c^\prime}^2 + {c^{\prime \prime}}^2)/2]/\sqrt{1 - {c^\prime}^2}\sqrt{1- {c^{\prime \prime}}^2}$. Note that $c\ge1$. Considering only the factor inside the braces (without the partial derivatives), it does seem to satisfy the KMS condition with the inverse of the temperature being $2 \pi/ g - (2/g)\tan^{-1} \left( |\cos\Theta_{0}|g\epsilon^{\prime} \right)+ (2/g)\tan^{-1}\left( |\cos\Theta_{0}|g\epsilon^{\prime \prime} \right)$. One gets such a result in the Schlicht case for the total transition rate with $\epsilon^{\prime} = \epsilon^{\prime \prime}$ and without the $\cos\Theta_0$ dependence. However, in the present case, the additional partial derivatives break the KMS property for ${\dot {\cal F}}_{\Theta_0}(\omega) $.
Another way to arrive at the final expression is first to differentiate the integrand in Eq.(\[angFinter\]) and then perform the contour integration. This leads to the following $${\dot {\cal F}}_{\Theta_0}(\omega) = 2 \operatorname{Re}\int_0^{\infty} ds \, e^{-i\omega s} \, \frac{1}{D_\epsilon(s)}$$ where $$\begin{aligned}
\frac{1}{D_\epsilon(s)} = \frac{ g^{2} \pi\bigg\{ 3 b^2 \epsilon^2 + b^4 \epsilon^4 - 2(1- b^2 \epsilon^2)\sinh^{2}\left[ \frac{gs}{2}-i\alpha \right] - 2 i b \epsilon \sinh\left[ gs - i2\alpha \right] \bigg\}}{32 \left( 1+b^2\epsilon^2 \right)^3 \sinh^{4}\left[ \frac{gs}{2}-i\alpha \right]}
\label{denominator}\end{aligned}$$ and $b = g |\cos\Theta_0|$. This contour integral can be calculated using the similar procedure outlined for integral in Eq.(\[angFinter\]) for each of the three terms. One finally gets the angular transition rate to be $$\begin{ali
}{m_\pi(W;v)}-\frac{\Vert\nabla_W m_\pi(W;v)\Vert^2}{\{m_\pi(W;v)\}^2}\bigg].\end{aligned}$$ Combining this identity and Lemma \[lem:identity\] completes the proof. $\Box$
Lemma \[lem:identity2\] immediately establishes the following proposition.
\[prp:cond\_mini\] $\ph_\pi(Y|X)$ is minimax relative to the KL loss (\[eqn:loss\]) if $$2\tr[\nabla_W\nabla_W^\top m_\pi(W;v)]-\frac{\Vert\nabla_W m_\pi(W;v)\Vert^2}{m_\pi(W;v)}\leq 0$$ for $v_w\leq v\leq v_x$.
Differentiation of matrix-valued functions
------------------------------------------
Next, some useful formulae are listed for differentiation with respect to a symmetric matrix. The formulae are applied to evaluation of the Kullback-Leibler risks of our Bayesian predictive densities.
Let $S=(s_{ij})$ be an $r\times r$ symmetric matrix of full rank. Let $\Dc_S$ be an $r\times r$ symmetric matrix of differentiation operators with respect to $S$, where the $(i,j)$-th element of $\Dc_S$ is $$\{\Dc_S\}_{ij}=\frac{1+\de_{ij}}{2}\frac{\partial}{\partial s_{ij}}$$ with the Kronecker delta $\de_{ij}$.
Let $g(S)$ be a scalar-valued and differentiable function of $S=(s_{ij})$. Also let $G(S)=(g_{ij}(S))$ be an $r\times r$ matrix, where all the elements $g_{ij}(S)$ are differentiable functions of $S$. The operations $\Dc_S g(S)$ and $\Dc_S G(S)$ are, respectively, $r\times r$ matrices, where the $(i,j)$-th elements of $\Dc_S g(S)$ and $\Dc_S G(S)$ are defined as, respectively, $$\{\Dc_S g(S)\}_{ij}=\frac{1+\de_{ij}}{2}\frac{\partial g(S)}{\partial s_{ij}},\quad
\{\Dc_S G(S)\}_{ij}=\sum_{k=1}^r\frac{1+\de_{ik}}{2}\frac{\partial g_{kj}(S)}{\partial s_{ik}}.$$
First, the product rule in terms of $\Dc_S$ is expressed in the following lemma due to Haff (1982).
\[lem:diff1\] Let $G_1$ and $G_2$ be $r\times r$ matrices such that all the elements of $G_1$ and $G_2$ are differentiable functions of $S$. Then we have $$\Dc_S (G_1G_2)=(\Dc_S G_1)G_2+(G_1^\top \Dc_S)^\top G_2.$$ In particular, for differentiable scalar-valued functions $g_1(S)$ and $g_2(S)$, $$\Dc_S \{g
$ for all $i,j\in I$. Choose the ideal $J$ in Lemma \[le:Uzideal\] as explained above. Then one gets the Shapovalov determinants of $U_q({\mathfrak{g}})$ and $u_q({\mathfrak{g}})$ from the one of $U(\chi )$ in Thm. \[th:Shapdet2\].
The second part of Thm. \[th:ShapdetUqg\] was proved in [@a-KumLetz97] in the case when the order of $q$ is prime and ${\Bbbk }$ is the cyclotomic field $\mathbb{Q}[q]$.
Appendix
========
For the proofs of Thms. \[th:Shapdet\] and \[th:Shapdet2\] we need some commutative algebra which is considered here. Let ${\bar{{\Bbbk }}}$ be an algebraically closed field.
\[le:rankX\] Let $B$ be an integral domain, $x$ an indeterminate, $k\in {\mathbb{N}}$, and $X\in B[x]^{k\times k}$. Then there exist $s\in \{0,1,\dots ,k\}$, $D_1,D_2\in
B^{k\times k}$, $D_0\in B[x]^{k\times k}$ and $b\in B\setminus \{0\}$ such that $\det D_1,\det D_2\not=0$, $$\begin{aligned}
D_1XD_2=xD_0+b\,\mathrm{diag} (\underbrace{1,\dots ,1}_s,0,\dots ,0).
\label{eq:rankX}
\end{aligned}$$
Let ${\mathrm{Frac}}(B)$ be the field of fractions of $B$. Then there exist $s\in \{0,1,\dots ,k\}$ and $D'_1,D'_2\in {\mathrm{Frac}}(B)^{k\times k}$ such that $\det D'_1,\det D'_2\not=0$ and $$D'_1X(0)D'_2=\mathrm{diag} (\underbrace{1,\dots ,1}_s,0,\dots ,0).$$ Let $b_1,b_2\in B\setminus \{0\}$ such that $b_1D'_1,b_2D'_2\in B[x]^{k\times k}$. Let $b=b_1b_2$, $D_1=b_1D'_1$, and $D_2=b_2D'_2$. Then $$D_1X(0)D_2=b\,\mathrm{diag} (\underbrace{1,\dots ,1}_s,0,\dots ,0),$$ and hence the lemma holds for $D_0=D_1X'D_2$, where $X'\in B[x]^{k\times k}$ such that $X=X(0)+xX'$.
\[le:detXfactor\] Let $B$ be a finitely generated integral domain over ${\bar{{\Bbbk }}}$, $x$ an indeterminate, $k\in {\mathbb{N}}$, $r\in \{0,1,\dots ,k\}$, and $X\in B[x]^{k\times k}$. Assume that $\mathrm{rk}\,X(0)_p\le r$ for all points $p$ in a non-empty Zariski open subset of the affine variety of $B$. Then $\det X=x^{k-r}b$ for some $b\in B[x]$.
By Lemma \[le:rankX\] there exist $s\in \{0,1,\dots ,k\}$, $b\in B\setminus \{0\}$, $D_1,D_2\in B^{k\
A/C versus C/C genotype, we calculated an increased crude OR of 2.67 (95% CI = 1.26, 5.65; *P* = 0.0089) for RVR (+) versus RVR (−). The association of rs12126768 genotypes with RVR remained significant in the HCV-2 infected group (*P* = 0.0436). Therefore, HCV infected individuals with the *GNB1* rs4648727 C/C and rs12126768 G/G genotypes may be at increased risk of being non-responsive to PEG-IFNα-RBV treatment.
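Crude odds ratios of this kind, with their Wald-type confidence intervals, can be computed from 2×2 genotype-by-response counts. A generic sketch (the counts in the test are hypothetical, not taken from this study):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    # Crude OR for the 2x2 table [[a, b], [c, d]] with a Woolf
    # (log-scale Wald) confidence interval; z = 1.96 gives 95% CI.
    or_ = (a * d) / (b * c)
    se = math.sqrt(1.0 / a + 1.0 / b + 1.0 / c + 1.0 / d)
    return or_, or_ * math.exp(-z * se), or_ * math.exp(z * se)
```

The interval excludes 1 exactly when the corresponding two-sided *P* value falls below the chosen $\alpha$, which is the criterion behind the significance statements above.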
######
**Genotype frequencies of*GNB1*single nucleotide polymorphisms (SNPs) in HCV-1 and HCV-2 infected patients receiving PEG-IFNα-RBV therapy with and without a RVR in a Chinese population in Taiwan**
**HCV-1** **HCV-2**
----------------------- ---------------------- ------------ ---------- ------------------- ----------- ------------ ----------- ---------- --------------------
**rs10907185** **rs10907185**
A/A 10 (9.2) 5 (3.2) 3.09 (1.00, 9.56) A/A 13 (8.3) 3 (8.1) 1.25 (0.33, 4.76)
A/G 45 (41.7) 70 (44.6) 0.99 (0.60, 1.66) A/G 62 (39.2) 10 (27.0) 1.79 (0.80, 4.02)
G/G 53 (49.1) 82 (52.2) 0.1096 1 G/G 83 (52.5) 24 (64.9) 0.3601 1
A/A + A/G 55 (50.9) 75 (47.8) 0.6137 1.13 (0.69, 1.85) A/A + A/G 75 (47.5) 13 (35.1) 0.1748 1.67 (0.79, 3.51)
**rs6603797** **rs6603797**
C/C 87 (80.5) 116 (73.9) 1.13 (0.18, 6.88) C/C 129 (81.6) 31 (83.8) 2.08 (0.18, 23.69)
C/T
}^{m_n}=0
\label{eq:lindep}
\end{aligned}$$ in $U^+(\chi )$. Let ${T}^-={T}^-_{i_\mu }\cdots {T}^-_{i_2}{T}^-_{i_1}$. Since ${T}^-(E_{\beta _\mu })={T}^-_{i_\mu }(E_{i_\mu })
=K_{i_\mu }^{-1}F_{i_\mu }$, we obtain that $$\sum _{m_\mu ,\dots ,m_n}
a _{m_\mu ,\dots ,m_n}
(K_{i_\mu }^{-1}F_{i_\mu })^{m_\mu }{T}^-(E_{\beta _{\mu +1}})^{m_{\mu +1}}
\cdots {T}^-(E_{\beta _n})^{m_n}=0.$$ Since ${T}^-(E_{\beta _\nu })\in U^+(r_{i_\mu }\cdots r_{i_2}r_{i_1}(\chi ))$ for all $\nu \in \{\mu +1,\mu +2,\dots ,n\}$, Prop. \[pr:tridec\] implies that $$\sum _{m_{\mu +1},\dots ,m_n}a _{m_\mu ,m_{\mu +1},\dots ,m_n}
{T}^-(E_{\beta _{\mu +1}})^{m_{\mu +1}}\cdots {T}^-(E_{\beta _n})^{m_n}
=0$$ for all $m_\mu \in {\mathbb{N}}_0$, $m_\mu <{b^{\chi}} (\beta _\mu )$. Therefore $$\sum _{m_{\mu +1},\dots ,m_n}a _{m_\mu ,m_{\mu +1},\dots ,m_n}
E_{\beta _{\mu +1}}^{m_{\mu +1}}\cdots E_{\beta _n}^{m_n}=0$$ for all $m_\mu \in {\mathbb{N}}_0$, $m_\mu <{b^{\chi}} (\beta _\mu )$. Then $a_{m_\mu ,m_{\mu +1},\dots ,m_n}=0$ for all $(m_\mu ,\dots ,m_n)$ by induction hypothesis, which proves the induction step. Thus the theorem holds.
Assume that $\chi \in {\mathcal{X}}_3$. Then $\ker ({\partial ^K}_{i_1}:U^+(\chi )\to U^+(\chi ))$ coincides with the subalgebra of $U^+(\chi )$ generated by the elements $E_{\beta _\nu }$, $\nu \in \{2,3,\dots ,n\}$. The set $$\begin{aligned}
\big\{ E_{\beta _2}^{m_2} E_{\beta _3}^{m_3}\cdots E_{\beta _n}^{m_n}\,&|\,
0\le m_\nu <{b^{\chi}} (\beta _\nu )
\text{ for all $\nu \in \{2,3,\dots ,n\}$} \big\}
\end{aligned}$$ forms a vector space basis of $\ker {\partial ^K}_{i_1}$. \[le:kerderK\]
Let $\nu \in \{2,3,\dots ,n\}$. By [@p-Heck07b Lemma5.10] and Lemma \[le:rvrel\] there exist $m\in {\mathbb{N}}_0$ and $x_0,x_1,\dots ,x_m\in \ker {\partial ^K}_{i_1}$ such that $m<{b^{\chi}} ({\alpha }_{i_1})$ and $E_{\beta _\nu }=\sum _{\mu =0}^m x_\mu E_{i_1}^\mu $. Then $${T}_{i_1}^-(E_{\beta _\nu })=
\sum _{\mu =0}^m{T}^-_{i_1}(x_\mu ) (K_{i_1}^{-1}F_{i_1})^\mu .$$ Moreover, ${T}^-_{i_1}(
`gi 87161394 ref` `71` `KKVLLTGLGIVI`
`KK+LLTGLGIVI`
`KKLLLTGLGIVI`
`SRR022865_30969` `1` `aatctagtgaga`
2648343 NONSYN A:5 C:192 C:37 `aatttcgtgttt` drug transporter
`aaatagagaaac`
`gi 87160343 ref` `120` `EVQSKEMLIISI`
`EVQSKEMLI+SI`
`EVQSKEMLIVSI`
`SRR022865_47009` `2` `ggctagatagaa`
2262790 NONSYN A:153 G:6 A:37
above $1$), for very gently inclined straight lines. On the other hand, a steeper straight line indicates a faster reduction of layer sizes as we progressively move toward layer $0$ from layer $K-1$ through the other layers. In the analysis that follows, then, we also use the slope of the least-squares linear approximation of $Y$ as a function of $X$, denoted by $S(X,Y)$ and given by $S(X,Y)=\mathrm{cov}(X,Y)/\sigma_X^2$. For $C(X,Y)$ close to $1$, the base of the aforementioned exponential approaches $e^{S(X,Y)}$.
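The quantities $C(X,Y)$ and $S(X,Y)$ used here are just the Pearson correlation and the least-squares slope of the two sequences; a minimal sketch:

```python
import numpy as np

def slope_and_corr(x, y):
    # Least-squares slope S(X, Y) = cov(X, Y) / var(X) and the
    # Pearson correlation C(X, Y) of the two sequences.
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    slope = cov / np.var(x)
    corr = cov / (np.std(x) * np.std(y))
    return slope, corr
```

For points lying exactly on a line $y = 2x + 1$ this returns slope 2 and correlation 1; as stated above, when $C(X,Y)$ is close to 1 the relationship is well approximated by an exponential with base $e^{S(X,Y)}$.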
Our simulation results are summarized in Fig. \[fig:5layers\], where $K=5$, $n=500,1\,000$, and $r=1.1,2.0$. For each combination and each of four $a$ values ($a=1,2,3,4$), a scatter plot is given representing each of the graphs generated by its fixation probability and the slope $S(X,Y)$ for its two sequences, provided $C(X,Y)>0.9$. We see that, in all cases, strengthening the layer-selection criterion by increasing $a$ has the effect of moving most of the resulting graphs away from the Moran probability ($\rho_1$) and also away from the near-$0$ slope.
![(Color online) Simulation results for $K=5$. Each graph $D$ for which $C(X,Y)>0.9$ is represented by its fixation probability and by the slope $S(X,Y)$. For each combination of $n$ and $r$, $500$ graphs are shown, corresponding roughly to $12\%$ of the number of graphs that were grown. Dashed lines mark $\rho_1$ through $\rho_3$ for $r=1.1$, $\rho_1$ for $r=2.0$.[]{data-label="fig:5layers"}](sp_5_all.eps)
It is important to notice that, in the absence of the slope indicator for each graph, we would be left with a possibly wide range of fixation probabilities for the same value of $a$, unable to tell the true likeness of the best graphs to the $K$-funnel without examining their structures one by one. In a similar vein, the results shown in Fig. \[fig:5layers\] emphasize very strongly the role of our particular choice of a rule for selecting layers, as opposed to merely proceeding uniformly at random. To see this, it suffices that we realize that
{\pmb{\sum}}}}$(sequential)**
Let $X$ be a compact sequential space. Let $Y\subseteq X$, $|Y|=\aleph_1$. Suppose $\{W_\alpha\}_{\alpha\in\omega_1}$, $\{V_\alpha\}_{\alpha\in\omega_1}$ are open subsets of $X$ such that:
- $W_\alpha\subseteq\overline{W_\alpha}\subseteq V_\alpha,$
- $|V_\alpha\cap Y|\leq\aleph_0$,
- $Y\subseteq\bigcup\{W_\alpha:\alpha\in\omega_1\}$.
Then $Y$ is $\sigma$-closed discrete in $\bigcup\{W_\alpha:\alpha\in\omega_1\}$.
Without the parenthetical “sequential”, ${\mathbf{\mathop{\pmb{\sum}}}}^-$ and ${\mathbf{\mathop{\pmb{\sum}}}}$ refer to the corresponding propositions obtained by replacing “sequential” by “countably tight”, which follow from their sequential versions if one has
**Moore-Mrówka**
Every compact countably tight space is sequential.
It follows easily from **Moore-Mrówka** that *locally compact countably tight spaces are sequential*. A proof of **Moore-Mrówka** from PFA$(S)[S]$ is sketched in [@To] and the author remarks that, by the usual methods, large cardinals are not necessary. Thus, one can obtain a model of MA$_{\omega_1}(S)[S]$ in which, for example, both **PPI** and **${\mathbf{\mathop{\pmb{\sum}}}}$** hold, without the need for large cardinals. Working in such a model, we can establish the following proposition, the conclusion of which was proved from PFA$(S)[S]$ in [@To] and asserted to be obtainable without large cardinals.
If ZFC is consistent, it’s consistent to additionally assume that locally compact, hereditarily normal, separable spaces are hereditarily Lindelöf.
Let $X$ be such a space. By \[lem48\] $X$ has countable spread. So does its one-point compactification $X^*$, which hence is countably tight [@A2]. If $X$ were not hereditarily Lindelöf, it would include a right-separated subspace $\{x_\alpha:\alpha\in\omega_1\}$. Let $\{V_\alpha:\alpha\in\omega_1\}$ be open sets witnessing right-separation. Let $x_\alpha\in W_\alpha\subseteq\overline{W_\alpha}\subseteq V_\alpha$, with $W_\alpha$ open and $\overline{W_\alpha}$ compact.
| 1,034
| 2,071
| 1,865
| 1,053
| 2,929
| 0.776008
|
github_plus_top10pct_by_avg
|
ons).
Before closing, we would like to emphasize that the proposed shear-based parameterizations are only applicable away from the surface. Near the surface, due to the blocking effect [see @hunt88; @hunt89], $L_C$ or $L_H$ cannot be a representative length scale. They should be properly combined with an explicit parameterization involving height above ground (e.g., the harmonic mean of $0.4z$ and $L_H$).
Data and Code Availability {#data-and-code-availability .unnumbered}
==========================
The DNS code (HERCULES) is available from: <https://github.com/friedenhe/HERCULES>. Upon acceptance of the manuscript, all the analysis codes and processed data will be made publicly available via [zenodo.org](zenodo.org). Given the sheer size of the raw DNS dataset, it will not be uploaded onto any repository; however, it will be available upon request from the authors.
The first author thanks Bert Holtslag for thought-provoking discussions on this topic. The authors acknowledge computational resources obtained from the Department of Defense Supercomputing Resource Center (DSRC) for the direct numerical simulations. The views expressed in this paper do not reflect official policy or position of the U.S. Air Force or the U.S. Government.
Appendix 1: Derivation of Length Scales {#appendix-1-derivation-of-length-scales .unnumbered}
=======================================
#### Integral Length Scale:
Based on the original ideas of Taylor [@taylor35], both Tennekes and Lumley [@tennekes72] and Pope [@pope00] provided a heuristic derivation of the integral length scale. Given TKE ($\overline{e}$) and mean energy dissipation rate ($\overline{\varepsilon}$), an associated integral time scale can be approximated as $\overline{e}/\overline{\varepsilon}$. One can further assume $\sqrt{\overline{e}}$ to be the corresponding velocity scale. Thus, an integral length scale ($\mathcal{L}$) can be approximated as $\overline{e}^{3/2}/\overline{\varepsilon}$.
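As a toy numerical illustration of this scaling estimate (our sketch; the input magnitudes below are made up, not taken from the DNS):

```python
def integral_length_scale(tke, eps):
    """Heuristic estimate L = e^{3/2} / eps, combining the time scale
    e/eps with the velocity scale sqrt(e)."""
    return tke ** 1.5 / eps

# Hypothetical magnitudes: TKE in m^2/s^2, dissipation in m^2/s^3.
tke, eps = 0.5, 1e-3
print(round(integral_length_scale(tke, eps), 1))  # 353.6
```

The estimate is deliberately crude: it only fixes the order of magnitude of the energy-containing eddies, which is all the heuristic derivation claims.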
In the literature, the autocorrelation function of the longitudinal velocity series is commo
| 1,035
| 107
| 1,847
| 1,001
| 1,661
| 0.787091
|
github_plus_top10pct_by_avg
|
$L_j$ is free of type $I$}.
\end{array}\right.$$ We emphasize that we have $2z_j^{\ast}$, not $\pi z_j$, when $j$ is even.
In Lemma \[la9\], we will show that $F_j$ is isomorphic to $ \mathbb{A}^{1} \times \mathbb{Z}/2\mathbb{Z}$ as a $\kappa$-variety so that it has exactly two connected components, by enumerating equations defining $F_j$ as a closed subvariety of an affine space of dimension $2$ (resp. $4$) if $j$ is even and $L_j$ is *of type $\textit{I}^o$* (resp. otherwise). Here, $\mathbb{A}^{1}$ is an affine space of dimension $1$. These equations are necessary in this theorem and thus we state them in Equation (\[e42\]) below. We refer to Lemma \[la9\] for the proof. We write $x_j=(x_j)_1+\pi \cdot(x_j)_2$, $z_j=(z_j)_1+\pi \cdot(z_j)_2$, and $z_j^{\ast}=(z_j^{\ast})_1+\pi \cdot(z_j^{\ast})_2$, where $(x_j)_1, (x_j)_2, (z_j)_1, (z_j)_2, (z_j^{\ast})_1, (z_j^{\ast})_2 \in R \subset R\otimes_AB$ and $\pi$ stands for $1\otimes \pi\in R\otimes_AB$. Then the equations defining $F_j$ as a closed subvariety of an affine space of dimension $2$ (resp. $4$), if $j$ is even and $L_j$ is *of type $I^o$* (resp. otherwise), are $$\label{e42}
\left\{
\begin{array}{l l}
(z_j^{\ast})_1+(z_j^{\ast})_1^2=0 & \quad \textit{if $j$ is even and $L_j$ is of type $I^o$};\\
(x_j)_1=0, (x_j)_2+(z_j^{\ast})_1=0, (z_j^{\ast})_1+(z_j^{\ast})_1^2=0 & \quad \textit{if $j$ is even and $L_j$ is of type $I^e$};\\
(z_j)_1+(z_j)_1^2=0, (x_j)_1=0, (z_j)_1+(x_j)_2=0 & \quad \textit{if $j$ is odd and $L_j$ is free of type $I$}.
\end{array}\right.$$
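As a gloss (ours, not part of the original proof sketch), the $\mathbb{Z}/2\mathbb{Z}$ factor in Lemma \[la9\] can be read off directly from these equations, since the recurring quadratic condition factors:

```latex
\[
  t + t^{2} \;=\; t\,(1+t) \;=\; 0,
  \qquad t = (z_j^{\ast})_1 \ \text{or}\ (z_j)_1,
\]
% so t takes exactly two values, while the remaining conditions in each
% case of (e42) are linear, leaving one free coordinate ((z_j^*)_2 or
% (z_j)_2).  Each value of t thus contributes a copy of A^1, exhibiting
% F_j \cong \mathbb{A}^{1} \times \mathbb{Z}/2\mathbb{Z}.
```
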
The proof of the surjectivity of $\psi_j$ is given below. The main idea is to show that $\psi_j|_{F_j}$ is surjective. First assume that $j$ is even. There are 4 cases according to the types of $M_0$ and $L_j$. Recall that $\bigoplus_{i \geq 0} M_i$ is a Jordan splitting of a rescaled hermitian lattice $(L^{j}, \xi^{-j/2}h)$ and that $M_0=\pi^{j/2}L_0\oplus\pi^{j/2-1}L_2\oplus \cdots \oplus \pi L_{j-2}\oplus L_{j}$.
1. Assume that both $M_0$ and $L_j$ are *of type $I^e$*. In this case and the
| 1,036
| 465
| 542
| 1,168
| 2,291
| 0.78113
|
github_plus_top10pct_by_avg
|
ls with $u=1, \ldots, {u_{\max}}$ MUs, for some maximum size ${u_{\max}}$. The paper proceeds as follows. Section \[sec:Model\] presents the neuromuscular model of @Rid06 for a fixed number of MUs and defines the priors for the model parameters. Section \[sec:Method\] describes the SMC-MUNE method. Due to the complexity of the problem that MUNE addresses, this section is broken into three parts: inference for the firing events and associated parameters; inference for the parameters of the baseline and MUTF processes; and estimation of the marginal likelihood so as to evaluate the posterior mass function for MU-number. Section \[sec:SimStudy\] assesses the performance of the SMC-MUNE method for $200$ simulated data sets. Closer examination of cases where the point estimate of the number of MUs was incorrect revealed two classes of error; an example in each of these classes is investigated in detail. Section \[sec:CaseStudy\] applies the SMC-MUNE method to data (collected using the method in [@Cas10]) from a rat tibial muscle that has undergone stem cell therapy. Section \[sec:Discussion\] concludes the paper with a discussion on the effectiveness of SMC-MUNE and of potential avenues for improvement.
The neuromuscular model and prior specification {#sec:Model}
===============================================
The three assumptions A1–A3 underpin a comprehensive description of the neuromuscular system. This section expands on these assumptions to form the model of the neuromuscular system for a given fixed number of MUs. Section \[sec:Notation\] introduces the notational convention. Section \[sec:Neuro-model\] presents the neuromuscular model under the assumptions of @Rid06, and Section \[sec:PriorDist\] defines the prior distributions for the model parameters.
Notation {#sec:Notation}
--------
The total number of MUs operating the muscle of interest is denoted by $u$ and a particular MU is indexed by $j$. An EMG data set consists of $T$ measurements whereby the datum for the $t$th test, $t=1,\ldots,T$, co
| 1,037
| 151
| 1,016
| 924
| 2,978
| 0.775647
|
github_plus_top10pct_by_avg
|
\Delta_{m+n+1}+\Delta_{m-n+1}+\Delta_{m+n-1}+\Delta_{m-n-1})\big]
\eqno(A4)$$
$$\bigg < \bar{K}^+ \bar{\nu} \bigg | -{\tau_0^2 \alpha^2 \over 4}
F^2\bigg | K^+ \nu \bigg
> =- {\pi \tau_0^2 \alpha^2 \over 4}\sum_{m=0}^{\bar{K}}\sum_{n=0}^{K}
c_{\bar{K}m} c_{Kn} \Delta_{\bar{\nu}-\nu}$$ $$\big[ P_1 \Delta_{m-n}+{1\over 2}(P_2( \Delta_{m+n-1}+
\Delta_{m-n+1}+ \Delta_{m-n-1}) +$$ $$P_3( \Delta_{m+n-2}+ \Delta_{m-n+2}+ \Delta_{m-n-2})$$ $$+ P_4( \Delta_{m+n-3}+ \Delta_{m-n+3}+ \Delta_{m-n-3}) ) \big]
\eqno(A5)$$
$$\bigg < \bar{K}^+ \bar{\nu} \bigg | -{\tau_1^2 \alpha^2 \over 4}
F^2 {\rm sin^2\phi} \bigg | K^+ \nu \bigg
> =- {\pi \tau_1^2 \alpha^2 \over 4}\sum_{m=0}^{\bar{K}}\sum_{n=0}^{K}
c_{\bar{K}m} c_{Kn} \big({1\over 2}\Delta_{\bar{\nu}-\nu}-
{1\over 4} \Delta_{\nu -\bar{\nu}+2} - {1\over 4}
\Delta_{\nu-\bar{\nu}-2}\big )$$ $$\big[ P_1 \Delta_{m-n}+{1\over 2}(P_2( \Delta_{m+n-1}+
\Delta_{m-n+1}+ \Delta_{m-n-1}) +$$ $$P_3( \Delta_{m+n-2}+ \Delta_{m-n+2}+ \Delta_{m-n-2})$$ $$+ P_4( \Delta_{m+n-3}+ \Delta_{m-n+3}+ \Delta_{m-n-3}) ) \big]
\eqno(A6)$$
$$\bigg < \bar{K}^+ \bar{\nu} \bigg | -{\tau_1^2 \alpha^4 \over 4}
{\rm sin^2\theta} \bigg | K^+ \nu \bigg
> =- {\pi \tau_1^2 \alpha^4 \over 8}\sum_{m=0}^{\bar{K}}\sum_{n=0}^{K}
c_{\bar{K}m} c_{Kn} \Delta_{\bar{\nu}-\nu}
\big[ \Delta_{m+n}+ \Delta_{m-n}+$$ $${\alpha \over 4}( \Delta_{m+n+1}+ \Delta_{m-n+1}+
\Delta_{n-m+1}+\Delta_{1-m-n})$$ $$-{1\over 2}( \Delta_{m+n+2}+ \Delta_{m-n+2}+
\Delta_{n-m+2}+\Delta_{2-m-n})$$ $$-{\alpha\over 4}( \Delta_{m+n-3}+ \Delta_{m-n+3}+
\Delta_{m-n-3}+\Delta_{3-m-n}) \big]. \eqno(A7)$$
The negative to negative terms are
$$\bigg < \bar{K}^- \bar{\nu} \bigg | {\partial^2 \over \partial^2
\theta} \bigg | K^- \nu \bigg > =\pi
\sum_{m=1}^{\bar{K}}\sum_{n=1}^{K} d_{\bar{K}m} d_{Kn} (-n^2)
\Delta_{\bar{\nu}-\nu}\big[ \Delta_{m-n} + {\alpha \over 2}
(\Delta_{m-n+1}+\Delta_{m-n-1})\big] \eqno(A8)$$
$$\bigg < \bar{K}^- \bar{\nu} \bigg | -{\alpha \over F} \ {\rm sin}
\theta {\partial \over
\partial \theta} \bigg | K^- \nu \bi
| 1,038
| 2,520
| 776
| 1,007
| null | null |
github_plus_top10pct_by_avg
|
ot{y}}}_{m}}}{2} \right)}^{2}}+{{\left( \frac{mg}{2} \right)}^{2}}}+m{{\ddot{x}}_{m}}.
\end{array}$$ Nonetheless, if ${{\ddot{x}}_{m}}(t)<0$, those forces can be obtained by $$\label{eq10}
\begin{array}{r@{}l@{\qquad}l}
{{F}_{1}}&=\frac{1}{\mu }\sqrt{{{\left( \frac{m{{{\ddot{y}}}_{m}}}{2} \right)}^{2}}+{{\left( \frac{mg}{2} \right)}^{2}}}-m{{\ddot{x}}_{m}}, \\
%\label{eq11}
{{F}_{2}}&=\frac{1}{\mu }\sqrt{{{\left( \frac{m{{{\ddot{y}}}_{m}}}{2} \right)}^{2}}+{{\left( \frac{mg}{2} \right)}^{2}}}.
\end{array}$$
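As a quick numerical sanity check of the $\ddot{x}_m<0$ branch of Eq. (\[eq10\]) (our sketch; all numerical values below are hypothetical, not from the paper):

```python
import math

def contact_forces(m, mu, ay, ax, g=9.81):
    """F1, F2 from the x''_m < 0 branch of Eq. (10): a common term
    (1/mu) * sqrt((m*ay/2)**2 + (m*g/2)**2), minus m*ax for F1 only."""
    common = math.sqrt((m * ay / 2.0) ** 2 + (m * g / 2.0) ** 2) / mu
    return common - m * ax, common  # (F1, F2)

# Hypothetical payload: 1 kg, friction coefficient 0.5, mild accelerations.
F1, F2 = contact_forces(m=1.0, mu=0.5, ay=0.2, ax=-0.1)
print(round(F1 - F2, 9))  # the two equations imply F1 - F2 = -m*ax = 0.1
```

The difference $F_1-F_2=-m\ddot{x}_m$ follows directly from the two expressions, which makes it a convenient check on any implementation of this branch.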
![Physical model of the robot arms[]{data-label="fig3"}](force.png){width="0.8\linewidth"}
By the use of Lagrange multipliers, the dynamic model of the dual arm robot manipulating the payload can be summarized as follows, $$\label{eq12}
M(\theta)\ddot{\theta}+C(\theta,\dot{\theta})\dot{\theta}={u}+{{J}^{T}}(\theta)F(\theta,\dot{\theta},\ddot{\theta})-{{T}_{d}}-\beta, \nonumber$$ where $u$ is a $4\times1$ control torque input vector, ${{T}_{d}}$ is a $4\times1$ vector representing the noise effects on the robot arms and $\beta$ denotes the viscous friction forces on all the joints, which are specified as follows, $$\begin{array}{r@{}l@{\qquad}l}
\theta&={{\left[ \begin{matrix}
{{\theta}_{1}}\; {{\theta}_{2}}\; {{\theta}_{3}}\; {{\theta}_{4}} \nonumber
\end{matrix} \right]}^{T}}, \\ \nonumber
u&={{\left[ \begin{matrix}
{{u}_{1}} \; {{u}_{2}} \; {{u}_{3}} \; {{u}_{4}} \nonumber
\end{matrix} \right]}^{T}}, \\ \nonumber
F&={{\left[ \begin{matrix}
{{F}_{1}} \; {{F}_{s1y}} \; {{F}_{2}} \; {{F}_{s2y}} \nonumber
\end{matrix} \right]}^{T}}, \\ \nonumber
{{T}_{d}}&={{\left[ \begin{matrix}
{{T}_{d1}} \;{{T}_{d2}} \; {{T}_{d3}}\; {{T}_{d4}} \nonumber
\end{matrix} \right]}^{T}}, \\ \nonumber
\beta &={{\left[ \begin{matrix}
{{b}_{1}}{{{\dot{\theta}}}_{1}} \; {{b}_{2}}{{{\dot{\theta}}}_{2}} \; {{b}_{3}}{{{\dot{\theta}}}_{3}}\; {{b}_{4}}{{{\dot{\theta}}}_{4}}
| 1,039
| 4,262
| 332
| 848
| null | null |
github_plus_top10pct_by_avg
|
*Social Pressure*
my friends drink 0.61
it is difficult to refuse 0.46
other people are drinking 0.77
it will enhance my creative ability 0.51
it is customary for men on special occasions 0.59
I want to be prominent 0.67
*Personal Enjoyment*
I like the taste 0.62
it makes me feel good 0.71
I get thirsty 0.66
it goes well with the meals 0.41
*Tension Reduction*
it helps me to relax 0.58
it would ease me when I get blamed 0.60
it helps me to sleep 0.61
it helps me to forget my worries 0.70
it helps me to get rid of restlessness and tension 0.61
it helps me to cheer up when I feel dull or bored 0.67
it gives me energy 0.65
it is a habit 0.45
it helps me to face difficulties with confidence 0.72
it helps me to control others 0.44
All items loaded significantly on their respective factors, *p* \< 0.01.
######
Multiple regression analysis predicting drinking frequency from 3 motives towards alcohol use.
**Drinking Motives** **B** ***SE*B** **β**
------------------------ ------- ----------- -----------------------------------------------------------
**Constant** 1.12 0.06 --
**Personal Enjoyment** 0.04 0.02 0.097
**Tension-reduction** 0.05 0.01 0.311**[\*](#tfn3-ijerph-06-02408){ref-type="table-fn"}**
**Social Pressure** 0.01 0.01
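To make the regression concrete, here is how the fitted equation would be applied (our sketch; the unstandardized B coefficients are read from the table above, the example scores are hypothetical, and per the table only tension reduction was a significant predictor):

```python
# Unstandardized coefficients (column B) from the regression table.
COEFS = {
    "constant": 1.12,
    "personal_enjoyment": 0.04,
    "tension_reduction": 0.05,
    "social_pressure": 0.01,
}

def predicted_frequency(pe, tr, sp):
    """Linear prediction y_hat = B0 + B_PE*pe + B_TR*tr + B_SP*sp."""
    return (COEFS["constant"]
            + COEFS["personal_enjoyment"] * pe
            + COEFS["tension_reduction"] * tr
            + COEFS["social_pressure"] * sp)

# Hypothetical respondent scoring 3 on each motive scale:
print(round(predicted_frequency(3, 3, 3), 2))  # 1.42
```
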
| 1,040
| 5,784
| 521
| 334
| null | null |
github_plus_top10pct_by_avg
|
to $k=1$ as in the argument below Claim \[claim2\], we obtain the theorem.
To prove Claim \[claim3\], let $\eta_1,\dots, \eta_n$ be an independent sequence of random variables distributed as $\eta_i\sim N(0, \frac{\overline{\sigma}_i^2}{\overline{B}_n^2})$ and independent of $\{X_1, \dots, X_n\}$, and let As in [4]{}, we have Given $W_{k-1}$, let $f$ be the solution to Based on Lemma \[l2\] and $||\varphi'||= 1$, we have By a similar argument leading to [104]{}, we have
The appropriate change to [34]{} is as follows: where and Based on [7p]{} and the fact that $X_k$ is independent of $W_{k-1}$ and $\eta_k$ is independent of $\{X_1,\dots, X_n\}$, Therefore, we have where By the definition of $\xi_k$, we have Since we have assumed that $\E(X_k)=\E(-X_k)=0$, we have, using the property [33]{} of the sublinear expectation and also the fact that $T_{n-k}$ is independent of $\{X_1,\dots, X_n\}$, From Lemma \[l3\] and the fact that $T_{n-k}$ is independent of $\{X_1,\dots, X_n\}$, we have Since we have assumed that $\varphi$ is convex, the solution to the PDE [102]{} (cf. [107]{}) is also convex in the argument $x$, that is, $\partial^2_{xx} V{\geqslant}0$. Therefore, by the definition of sublinear expectation, and hence by [108]{}, This proves Claim \[claim3\].
Proofs in Section 4
-------------------
\[Proof of Theorem \[CLT\]\] Define $\xi=(\xi_1,\cdots, \xi_n):\mathbb{R}^n\rightarrow\mathbb{R}^n$ by $\xi_i(x)=x_i$, $i=1,\cdots, n$. Denote as $\mathcal{H}$ the collection of continuous real-valued functions $h$ on $\mathbb{R}^n$ with $|h(x)|\le C(1+|x|^3)$ for some constant $C>0$. For a function $h\in\mathcal{H}$, set $$\mathbb{E}[h(\xi)]:=\sup\limits_{\sigma\in\Sigma^{\mathbb{N}}_G}E[h(X^{\sigma}_{1,n},\cdots, X^{\sigma}_{n,n})].$$ Then, $\mathbb{E}[\xi_i]=\mathbb{E}[-\xi_i]=0,$ $ \mathbb{E}[\xi_i^2]=\overline{\sigma}^2$ and $-\mathbb{E}[-\xi_i^2]=\underline{\sigma}^2$, $i=1,2,\cdots,n.$ Moreover, for a function $\varphi\in lip(\mathbb{R})$, we have $$\mathbb{E}[\varphi(\xi_i)]=\sup\limits_{\lambda\in[\underlin
| 1,041
| 388
| 237
| 1,161
| 3,575
| 0.771461
|
github_plus_top10pct_by_avg
|
re not possible. Proverbially, the forest may be secure, but each of the trees reveals enough information to reconstruct the possible forests. By eliminating approximately one quarter of the key options from each qubit, we see that measuring all the individual qubits in a random basis does in fact reveal a great deal about the key. This attack places no demands on quantum memory, though it relies heavily on classical computation power. Hence, unlike [@Damgard14; @Bouman13], where the authors consider a bounded quantum storage model, the only way to make this protocol secure without greatly changing its construction is to constrain an adversary's computational power.
The attack proposed here is general in the sense of QIA protocols in the prepare-and-measure setup; thus any future protocol of this type must consider possible key-space reduction attacks. Regardless of the method, it is known that any identification protocol which imposes no bounds on the adversary will inevitably fail, due to results of Lo and of Buhrman et al. For this reason we advise that any future attempts at identification schemes consider, and clearly communicate, their assumptions and objectives.
[**[Acknowledgements:]{}**]{} This research was sponsored in part by the NATO Science for Peace and Security Programme under grant G5448, in part by Spanish MINECO under grants MTM2016-77213-R and MTM2017-88385-P, and in part by Programa Propio de I+D+i of the Universidad Politécnica de Madrid.
[^1]: carlos.gguillen@upm.es
[^2]: mariaisabel.vasco@urjc.es
[^3]: johnsonf2017@fau.edu
[^4]: angel.perez@urjc.es
---
abstract: |
In [@Esnault-Viehweg82], Esnault-Viehweg developed the theory of cyclic branched coverings $\tilde X\to X$ of smooth surfaces providing a very explicit formula for the decomposition of $H^1(\tilde X,\CC)$ in terms of a resolution of the ramification locus. Later, in [@Artal94] the first author applies this to the particular case of coverings of $\PP^2$ reducing the problem to a combination of global and local conditions on projec
| 1,042
| 114
| 1,778
| 1,002
| null | null |
github_plus_top10pct_by_avg
|
-
We highlight the fact that the limiting SDE of a discrete process, $$\label{e:disc-mcmc-new}
w_{k+1} = w_k - s\nabla U(w_k) + \sqrt{s} \xi(w_k, \eta_k),$$ depends only on the covariance matrix of $\xi$. More specifically, as long as $\xi$ satisfies $\sqrt{\E{\xi(w, \eta)\xi(w, \eta)^T}} = M(w)$, will have as its limiting SDE, *regardless of higher moments of $\xi$*. This fact, combined with Theorem \[t:main\_nongaussian\], means that in the limit of $\delta \to 0$ and $k\to \infty$, the distribution of $w_k$ will be determined by the covariance of $\xi$ alone. An immediate consequence is the following: *at convergence, the test performance of any Langevin MCMC-like algorithm is almost entirely determined by the covariance of its noise term.*
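As a toy check of this covariance-only dependence (our illustration, not an experiment from the paper), one can iterate the discrete process above in one dimension with $U(w)=w^2/2$ and two zero-mean noises that share the same unit covariance but have different higher moments; the long-run second moments agree:

```python
import random

def simulate(noise, s=0.01, steps=200_000, seed=0):
    """Iterate w_{k+1} = w_k - s*U'(w_k) + sqrt(s)*xi_k with U(w) = w^2/2,
    and return the empirical second moment of w (after a burn-in)."""
    rng = random.Random(seed)
    w, acc, n = 0.0, 0.0, 0
    for k in range(steps):
        w = w - s * w + (s ** 0.5) * noise(rng)
        if k >= steps // 10:  # discard burn-in
            acc += w * w
            n += 1
    return acc / n

# Same covariance (unit variance), different higher moments:
v_gauss = simulate(lambda r: r.gauss(0.0, 1.0))                 # Gaussian
v_rade = simulate(lambda r: 1.0 if r.random() < 0.5 else -1.0)  # Rademacher
print(v_gauss, v_rade)  # both near the stationary value 1/(2 - s) ~ 0.5
```

For this linear toy problem the stationary second moment solves $\sigma^2=(1-s)^2\sigma^2+s$, i.e. $\sigma^2=1/(2-s)$, independently of the noise's higher moments; this is the one-dimensional shadow of the claim above.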
Returning to the case of SGD algorithms, since the noise covariance is $M(x)^2 = \frac{\delta}{b} H(x)$ (see ), we know that the ratio of step size $\delta$ to batch size $b$ is an important quantity which can dictate the test error of the algorithm; this observation has been made many times in prior work [@jastrzkebski2017three; @he2019control], and our results in this paper are in line with these observations. Here, we move one step further, and provide experimental evidence to show that more fundamentally, it is the noise covariance in the constant-noise limit that controls the test error.
To verify this empirically, we propose the following algorithm called *large-noise SGD*.
\[def:large\_noise\_sgd\] An $(s, \sigma, b_1, b_2)$-large-noise SGD is an algorithm that aims to minimize using the following updates: $$\begin{aligned}
\numberthis\label{e:sgd-noisy}
& w_{k+1} = w_k - \frac{s}{b_1} \sum_{i \in \eta_k} \nabla U_i(w_k) \\
& \qquad + \frac{\sigma \sqrt{s}}{b_2} \left( \sum_{i\in \eta_k'}\nabla U_i(w_k) - \sum_{i\in\eta_k''}\nabla U_i(w_k) \right),\end{aligned}$$ where $\eta_k$, $\eta_k'$, and $\eta_k''$ are minibatches of sizes $b_1$, $b_2$, and $b_2$, sampled uniformly at random from $\{1, \ldots, n\}$ with replacement. The three minibatches are
| 1,043
| 741
| 950
| 984
| 1,470
| 0.789165
|
github_plus_top10pct_by_avg
|
in terms of $\varphi$ rather than in $A_0$, and $g_k^2=g^2/Z_0$ is nothing but the running coupling at momentum $\vec p^2\sim k_{\rm phys}^2$. Thus we estimate $g_k^2=4\pi \alpha_s(\vec p^2=k_{\rm phys}^2)$. Note that $g_k$ is an RG-invariant. The momentum integration can be performed analytically, and we are led to $$\begin{aligned}
\label{eq:preflowVapp}
\beta \partial_k \Delta V_k
= \frac{2}{3 (2 \pi)^2} \frac{(1+\eta_0/5) k^2 }{1+\frac{ g_{k}^2
\beta^2}{ k^2 }
\partial^2_{\varphi} ( V_{\bot,k} + \Delta V_k)}\,, \end{aligned}$$ where $\eta_0$ is given by $$\label{eq:eta0app}
\eta_0=-\partial_t \log \alpha_s\,,$$ as the consistent choice in the given truncation.
Matching scales {#app:match}
===============
The flow of the temporal component of the gauge field, $A_0(\vec x)$, is computed with a three-dimensional regulator, see . In Polyakov gauge $A_0(\vec x)$ only depends on the spatial coordinates, whereas the spatial components $A_\bot(x)$ are four-dimensional fields. For cut-off scales far lower than the temperature, $k/T\ll 1$, also the spatial gauge fields are effectively three-dimensional fields as only the Matsubara zero mode propagates. Hence in this regime we can identify $k=k_\bot$. For large cut-off scales, $k/T\gg 1$, the $A_0$-flow decouples from the theory. A comparison between the two flows can only be done after the summation of the spatial flow over the Matsubara frequencies. In the asymptotic regime $k/T\gg 1$ this leads to the relation $$\label{eq:kTinf}
\frac{1}{k} \simeq \sum_{n=-\infty}^{\infty}
\frac{1}{\omega_n^2 + k_\bot ^2}\to \frac{1}{2 k_\bot}\,.$$ The crossover between these asymptotic regimes happens at about $k/T=
1$. This crossover is implemented with the help of an appropriately chosen interpolating function $f$, $$\begin{aligned}
\label{eq:compare}
\frac{T}{k^2} f(k/T) &=& T \sum_{n=-\infty}^{\infty}
\frac{1}{\omega_n^2 + k_\bot ^2}\,.\end{aligned}$$ A natural choice for $f(k/T)$ is depicted in Fig. \[fig:kbotk\], and has been used in the computation.
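For reference, the Matsubara sum above can be evaluated in closed form (this is our remark; the text fixes $f$ only through its asymptotics and Fig. \[fig:kbotk\]), assuming bosonic frequencies $\omega_n = 2\pi n T$ and identifying $k=k_\bot$:

```latex
\[
  T \sum_{n=-\infty}^{\infty} \frac{1}{\omega_n^2 + k_\bot^2}
  \;=\; \frac{1}{2 k_\bot}\,\coth\!\Big(\frac{k_\bot}{2T}\Big),
\]
% which suggests the explicit candidate
\[
  f(x) \;=\; \frac{x}{2}\,\coth\frac{x}{2}\,, \qquad x = k/T,
\]
% reproducing both asymptotic regimes: f(x) -> 1 for x << 1 (so that
% T f/k^2 -> T/k^2), and f(x) -> x/2 for x >> 1 (so that T f/k^2 -> 1/(2k)).
```
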
| 1,044
| 2,170
| 1,537
| 1,064
| 1,455
| 0.789344
|
github_plus_top10pct_by_avg
|
a_{n}$ with other numbers $n$ into the second expression (\[eq.3.1.4\]) for the potential $V_{2}(r)$, one can construct the whole hierarchy of the radial reflectionless potentials of this new type.
In Fig. \[fig.1\] the potential $V_{2}(r)$ for the chosen values of the parameters $C$ and $\gamma_{n}$ is shown. From it one can see that such a potential has one hole and one barrier, after which it falls off monotonically to zero as the radial coordinate $r$ increases.
![A dependence of the radial potential $V_{2}(r)$ on $C$ and $\gamma_{n}$: (a) the barrier maximum and the hole minimum of this potential shift along the axis $r$ as $C$ changes (at $C = 0.01, 0.1, 0.3, 1.0, 2.5$, $\gamma_{n}=6$, $r_{0}=0.5$); (b) the barrier maximum of this potential practically does not change along the axis $r$ as $\gamma_{n}$ changes (at $C = 1$, $\gamma_{n}=2, 6, 12, 20$, $r_{0}=0.5$). \[fig.1\]](f31a.eps "fig:"){width="57mm"} ![](f31b.eps "fig:"){width="57mm"}
In its behavior such a potential looks qualitatively like the radial potentials with barriers used in the theory of nuclear collisions to describe scattering of particles on spherical nuclei, as well as decay and synthesis of spherical nuclei. This potential is reflectionless if the parameter $\gamma_{n}$ has discrete values from the sequence (\[eq.3.1.7\]). For any reflectionless potential with given $\gamma_{n}$ one can continuously displace its barrier and hole along the axis $r$ by use of the parameter $C$. Such deformation of the shape of the reflectionless potential is shown in
| 1,045
| 334
| 439
| 1,254
| 1,719
| 0.786458
|
github_plus_top10pct_by_avg
|
b}-1}{q}
\prod _{t=1}^{{b}-1}(q^{t+1-{b}}\Lambda (K_p)-\Lambda (L_p))v_\Lambda
\end{aligned}$$ by Lemma \[le:EmFn\]. By assumption, ${\hat{T}}'(v_\Lambda )\not=0$, and hence ${\hat{T}}'$ is a nonzero multiple of ${\operatorname{id}}_{M^\chi (\Lambda )}$. Therefore ${\hat{T}}_p$ is an isomorphism. The proof for ${\hat{T}}_p^-$ is analogous.
\[le:hwvector\] Let $t\in \{1,2,\dots ,{b}-1\}$. Let $q=\chi ({\alpha }_p,{\alpha }_p)$. Assume that $\Lambda (K_pL_p^{-1})=q^{t-1}$. Then in $M^\chi (\Lambda )$ $$\begin{aligned}
E_pF_p^m v_\Lambda =\qnum{m}{q}\Lambda (L_p)(q^{t-m}-1)F_p^{m-1} v_\Lambda \quad
\text{for all $m\in {\mathbb{N}}_0$.}
\label{eq:EpFpmv}
\end{aligned}$$ In particular, if $q\not=1$, then $EF_p^mv_\Lambda ={\varepsilon }(E)F_p^mv_\Lambda $ for all $E\in U^+(\chi )$ if and only if $m=0$, $m=t$ or $m\ge {b}$. If $q=1$, then $EF_p^mv_\Lambda ={\varepsilon }(E)F_p^mv_\Lambda $ for all $E\in
U^+(\chi )$, $m\in {\mathbb{N}}_0$.
Eq. follows from Lemma \[le:EmFn\]. By definition of ${b}={b^{\chi}} ({\alpha }_p)$, either $q\not=1$ and $q$ is a primitive ${b}$-th root of $1$, or $q=1$ and ${b}=\mathrm{char}\,{\Bbbk }$. Therefore, if $q\not=1$ and $m\in \{0,1,\dots ,{b}-1\}$, then $q^{t-m}=1$ if and only if $t=m$. If $E=E_i$ with $i\not=p$, then $EF_p^m=0$ by Eqs. , . The rest is a consequence of Lemma \[le:Eheight\](i).
\[pr:VTMker\] Assume that $\Lambda (K_pL_p^{-1})=\chi ({\alpha }_p,{\alpha }_p)^{t-1}$ for some $t\in \{1,2,\dots ,{b}-1\}$.
\(i) If $\chi ({\alpha }_p,{\alpha }_p)\not=1$, then $t$ is unique, and $$\begin{aligned}
\ker {\hat{T}}^{\chi }_{p,\Lambda }=\ker {\hat{T}}^{\chi ,-}_{p,\Lambda }
= & \, U^-(\chi )F_p^{{b}-t}{\otimes }{\mathbb{K}}_{{t}_p^\chi (\Lambda )},\\
{\operatorname{Im}}{\hat{T}}^{\chi }_{p,\Lambda }={\operatorname{Im}}{\hat{T}}^{\chi ,-}_{p,\Lambda }
= & \, U^-(\chi )F_p^t {\otimes }{\mathbb{K}}_\Lambda .
\end{aligned}$$
\(ii) If $\chi ({\alpha }_p,{\alpha }_p)=1$, then $\mathrm{char}\,{\Bbbk }={b}>0$ and $$\begin{aligned}
\ker
| 1,046
| 893
| 666
| 1,006
| null | null |
github_plus_top10pct_by_avg
|
this result:
[memtest]$ g++ -O2 mem.cpp -o mem
[memtest]$ ./mem
size start prev.size
-----------------------------------
BLOCK 0: 08b, 0x1f47030
BLOCK 1: 08b, 0x1f47050, 32b
BLOCK 2: 16b, 0x1f47070, 32b
BLOCK 3: 16b, 0x1f47090, 32b
BLOCK 4: 04b, 0x1f470b0, 32b
BLOCK 5: 04b, 0x1f470d0, 32b
BLOCK 6: 06b, 0x1f470f0, 32b
BLOCK 7: 06b, 0x1f47110, 32b
This is the normal, expected result. What makes new behave in such an odd way on the first run? I am using g++ (GCC) 7.1.1 20170516. You may compile without optimizations and the result is the same.
A:
You'll be surprised to learn that your program does a lot more than just make a few memory allocations.
std::cout << "BLOCK " << num << ": ";
Your program also generates formatted output to std::cout, making use of its built-in std::streambuf.
It appears rather obvious that the first output to std::cout allocates a 1024-byte buffer for the internal std::streambuf. This happens after your first new allocation, and before your second one. The buffer only needs to be allocated once, the first time it's used.
Although it goes without saying that the particulars of internal memory allocations are highly implementation-defined, this seems to be the most likely explanation in your case.
Q:
How can I increase the size of the projection from my projection clock?
I have the "Mpow Projection Alarm Clock" (https://amzn.to/2VtA1Cx) and although in the example image the time projected on the ceiling is gigantic, in reality it's only about 7 inches (18cm). It's still small when I project it on the other side of the room (I have a small room). I want it to be much bigger.. Ideally triple the size.
I tried holding two kinds of magnifying glasses at various distances over the thing where the time is projected out and that doesn't work. The projected time turned into an unreadable circle. I tried a camera lens and perhaps the projection isn't bright enough because nothing was visible - no light was coming through.
A:
If you have mirrors in the room, you could
| 1,047
| 1,207
| 581
| 965
| null | null |
github_plus_top10pct_by_avg
|
------------------ ------------------
**WMC**
OSPANs 14.656 (5.036) 14.938 (5.147) 24.719 (11.312) 19.969 (9.177)
**Attentional control**
Latency 359.179 (33.018) 379.813 (45.241) 369.916 (52.576) 387.399 (51.719) 333.286 (32.058) 338.738 (30.633) 382.246 (58.466) 370.708 (49.867)
Error rate 0.168 (0.101) 0.200 (0.095) 0.191 (0.138) 0.271 (0.114) 0.174 (0.115) 0.200 (0.164) 0.171 (0.131) 0.205 (0.124)
WMC, working memory capacity; SA, state anxiety; WM training, the working memory training group; Control, the control group; OSPANs, operation-word span task scores; Latency, the latency of first correct saccade; Error rate, the percentage of incorrect saccades
.
### Re-examination for the Effects of SA and WMC on Attentional Control
The manipulation check of SA was conducted first: the heart rate, skin conductance and MRF-3 scores were all significantly increased in the high-SA condition (all *ps* \< 0.002 for both pre- and post-training), which implied that SA manipulation was successful for both pre- and post-training. We used the change of latency \[i.e., post-training latency minus pre-training latency. The average change of latency under low-SA condition was -25.894 (*SD* = 20.702) for WM training group, and 12.330 (*SD* = 47.953) for control group, whereas under high-SA condition was -41.075 (*SD* = 36.636) for WM training group, and -16.691 (*SD* = 33.313) for control group\] and the change of error rate \[i.e., post-training error rate minus pre-training error rate. The average change of error rate under low-SA condition was 0.006 (*SD* = 0
| 1,048
| 1,620
| 1,269
| 1,231
| null | null |
github_plus_top10pct_by_avg
|
eudonatural transformation, giving a 2-category . We can define two-variable morphisms of left derivators, and (separate) preservation of colimits, just as for derivators.
A **monoidal left derivator** is a left derivator with a pseudo-monoid structure that preserves colimits separately in both variables. If is a monoidal left derivator, a **-module** is a cocontinuous pseudo-module, i.e. a left derivator with an action ${\sV}\times{\sD}\to {\sD}$ that is coherently associative and unital and preserves colimits separately in both variables. We say that is a **-opmodule** if ${\sD}\op$ is a -module. A **closed -module**, or **-enriched derivator**, is a -module whose action is part of a two-variable adjunction (hence, in particular, it is also a -opmodule).
Now recall that derivator morphisms of two variables come in three different forms; see [@gps:additivity §3 and §5]. We right away specialize to the situation of an action as above.
1. The *internal form* $\otimes_A\colon{\sV}(A)\times{\sD}(A)\to{\sD}(A)$, which is naively given by $(W\otimes_A X)_a=W_a\otimes X_a$, where $\otimes\colon{\sV}(\bbone)\times{\sD}(\bbone)\to{\sD}(\bbone)$ denotes the underlying functor of two variables.
2. The *external form* $\otimes\colon{\sV}(A)\times{\sD}(B)\to{\sD}(A\times B)$, which we think of as being defined by $(W\otimes X)_{a,b}=W_a\otimes X_b$.
3. Finally, the *canceling form* $\otimes_{[A]}\colon{\sV}(A\op)\times{\sD}(A)\to{\sD}(\bbone)$, which is obtained from the external form by composing it with the coend functor $$\int^A\colon{\sD}(A\op\times A)\to{\sD}(\bbone).$$ For the notion of (co)ends in derivators we refer to [@gps:additivity §5 and Appendix A].
Note the different notation used for these three variants; the notation for internal versions was already used for the monoidal categories $({\sV}(A),\otimes_A,\lS_A)$.
Every monoidal left derivator is, of course, a module over itself. If it is a closed module over itself, we call it a **closed monoidal left derivator**.
More generally, if is
| rank_avg 1,049 | rank_max 1,214 | rank_min 1,444 | rank_median 1,021 | rank_by_avgsim 2,335 | avgsim_to_github 0.780784 | dataset github_plus_top10pct_by_avg |
\pm$5.0 27.5$\pm$2.0 5.7$\pm$0.1
1b 05 39 52.10 -69 45 23.17 36 28.1$\pm$2.8 170.3$\pm$17.0 231.8$\pm$11.6 152.4$\pm$7.6 57.7$\pm$4.1 23.9$\pm$1.7 9.1$\pm$0.6 1.6$\pm$0.1
1c (N159W) 05 39 32.51 -69 46 02.74 68 48.7$\pm$4.9 481.2$\pm$48.1 651.8$\pm$32.7 487.7$\pm$24.4 202.6$\pm$14.4 82.6$\pm$5.9 30.4$\pm$2.2 5.9$\pm$0.2
2 (N160) 05 39 38.63 -69 39 06.79 110 144.7$\pm$14.5 1073.3$\pm$107.3 1362.3$\pm$68.5 912.9$\pm$45.6 359.2$\pm$25.5 153.6$\pm$10.9 58.9$\pm$4.2 10.2$\pm$0.4
3 (N158) 05 39 11.22 -69 30 13.65 110 68.3$\pm$6.8 572.9$\pm$57.3 770.2$\pm$39.1 540.8$\pm$27.0 216.2$\pm$15.3 94.4$\pm$6.7 37.1$\pm$2.6 5.8$\pm$0.4
4 05 40 49.41 -69 44 48.22 110 4.7$\pm$0.5 134.5$\pm$13.4 226.2$\pm$12.5 217.0$\pm$10.9 106.88$\pm$7.5 49.6$\pm$3.5 20.2$\pm$1.4 2.4$\pm$0.4
5 05 40 22.25 -69 40 33.51 110 24.5$\pm$2.4 235.6$\pm$23.6 394.6$\pm$20.6 309.6$\pm$15.5 135.2$\pm$9.6 61.9$\pm$4.4 25.2$\pm$1.8 2.5$\pm$0.4
6 (N159S) 05 40 03.75 -69 51 01.62 110 2.8$\pm$0.3 76.3$\pm$7.6 166.2$\pm$9.6 193.9$\pm$9.7 107.3$\pm$7.5 50.4$\pm$3.5 20.4$\pm$1.4 3.0$\pm$0.4
7 05 39 30.65 -69 36 37.42 55 4.9$\pm$0.5 93.1$\pm$9.3 159.6$\pm$8.2 143.8$\pm$7.2 67.4$\pm$4.7 31.2$\pm$2.2 12.6$\pm$0.9 1.8$\pm$0.1
8 05 40 04.63 -69 37 59.86 55 7.4$\pm$0.7 102.6$\pm$10.3 138.6$\pm$7.1 118.2$\pm$5.9 53.9$\pm$3.8 25.1$\pm$1.8 10.3$\pm$0.7 1.6$\pm$0.1
9 05 38 41.55 -69 24 58.82 55 4.8$\pm$0.5 60.7$\pm$6.1 84.8$\pm$4.5 71.9$\pm$3.6 34.0$\pm$2.4 15.4$\pm$1.1 6.3$\pm$0.5 1.1$\pm$0.1
10 05 39 49.03 -69 26 26.38
| rank_avg 1,050 | rank_max 3,467 | rank_min 248 | rank_median 808 | rank_by_avgsim null | avgsim_to_github null | dataset github_plus_top10pct_by_avg |
motion for the wave fields can be written in compact form as $$\frac{\partial\mbox{\boldmath$\zeta$}}{\partial t}=\frac{i}{\hbar}
\{ {\cal H}[\mbox{\boldmath$\zeta$}] , \mbox{\boldmath$\zeta$} \}_{\mbox{\tiny\boldmath$\cal B$}} \;.
\label{eq:wein_eqofm}$$ The compact form of Eq. (\[eq:wein\_eqofm\]) can be set into an explicit form as $$\begin{aligned}
\frac{\partial}{\partial t}|\Psi\rangle&=&\frac{i}{\hbar}
\frac{\partial{\cal H}}{\partial\langle\Psi|}
{\mathcal B}_{21}
\label{eq:wein_eqofm1}
\\
\frac{\partial}{\partial t} \langle\Psi|
&=&\frac{i}{\hbar}
\frac{\partial{\cal H}}{\partial\vert\Psi\rangle}
{\mathcal B}_{12}
\label{eq:wein_eqofm2}
\;.\end{aligned}$$ It is easy to see that, when the Hamiltonian function is chosen as in Eq. (\[eq:h\_qm\]), Eq. (\[eq:wein\_eqofm\]), or its explicit form (\[eq:wein\_eqofm1\]-\[eq:wein\_eqofm2\]), gives the usual formalism of quantum mechanics. It is worth remarking that in order not to alter gauge invariance, the Hamiltonian and the other observables must obey the homogeneity condition: $${\cal H}=\langle\Psi|(\partial{\cal H}/\partial\zeta_2)\rangle
=\langle(\partial{\cal H}/\partial\zeta_1)|\Psi\rangle
\;.\label{eq:homogeneity}$$ Weinberg showed how the formalism sketched above can be generalized in order to describe non-linear effects in quantum mechanics [@weinberg]. To this end, one must maintain the homogeneity condition, Eq. (\[eq:homogeneity\]), on the Hamiltonian but relax the constraint which assumes that the Hamiltonian must be a bilinear function of the wave fields. Thus, the Hamiltonian can be a general function given by $$\tilde{\cal H}=\sum_{i=1}^n\rho^{-i}{\cal H}_i\;,$$ where $n$ is an arbitrary integer that fixes the order of the correction, ${\cal H}_0=h$, and $$\begin{aligned}
{\cal H}_1&=&\rho^{-1}\int dr dr'dr''dr'''\Psi^*(r)\Psi^*(r')
\nonumber\\
&\times&
G(r,r',r'',r''')\Psi(r'')\Psi(r''')\;,\end{aligned}$$ with analogous expressions for higher order terms. Applications and thorough discussions of the above formalism can be found in Ref. [@
| rank_avg 1,051 | rank_max 381 | rank_min 1,580 | rank_median 1,194 | rank_by_avgsim 3,842 | avgsim_to_github 0.769745 | dataset github_plus_top10pct_by_avg |
suitable and short tree iterable $a$-premouse", then there could be $\Q\in \mathcal{F}(\b, a, \P)$ which is not in $\mathcal{F}(\a, a, \P)$. However, we always have the following easy lemma.
\[inclusion\] Suppose $\a<\b<\k$ are two ordinals which end weak gaps and such that $J_\a(\mathbb{R})$ and $J_\b(\mathbb{R})$ both satisfy that $\P$ is suitable and short tree iterable. Then $\mathcal{F}(\a, a, \P)\subseteq \mathcal{F}(\b, a, \P)$.
The lemma follows because any iteration tree on $\P$ which is correctly guided and short in the sense of $J_\a(\mathbb{R})$ is also correctly guided and short in the sense of $J_\b(\mathbb{R})$.
Next we define $\leq_{\a, a}$ on $\mathcal{F}(\a, a)$ by setting $\Q\leq_{\a, a}\R$ iff there is an iteration tree $\T$ on $\Q$ according to $\Sigma_\Q$ with last model $\S$ such that $\pi^\T$ exists, $\S\inseg\R$ and $\S=\R|(\eta_\S^+)^\R$. Also, let $\leq_{\a, a, \P}=\leq_{\a, a}{\restriction}\mathcal{F}(\a, a, \P)$. As usual, we have that
\[directedness\] $\leq_{\a, a}$ and $\leq_{\a, a, \P}$ are directed, and $\leq_{\a, a, \P}$ is dense in $\leq_{\a, a}$.
Let then $\M_\infty(\a, a)$ be the direct limit of $(\mathcal{F}(\a, a), \leq_{\a, a})$ under the iteration embeddings $\pi_{\Q, \R}$. Also, let $\M_\infty(\a, a, \P)$ be the direct limit of $(\mathcal{F}(\a, a, \P), \leq_{\a, a, \P})$ under the iteration embeddings $\pi_{\Q, \R}$. It follows from [Lemma \[directedness\]]{} that
\[equality of direct limits\] $\M_\infty(\a, a)=\M_\infty(\a, a, \P)$.
We let $\pi_{\Q, \infty}:\Q\rightarrow \Q^*\inseg\M_\infty(\a, a, \P)$ be the direct limit embedding[^5].
We can now define $\phi$. First let $S$ be the set of those reals $x$ which code a pair $(y_x, \P_x)$ such that
1. $y_x\in \mathbb{R}$,
2. for some $\a<\k$ ending a weak gap, $J_\a(\mathbb{R}){\vDash}``\P_x$ is suitable and short tree iterable $y_x$-premouse".
Clearly $S$ is $\Sigma^2_1$. Also let $f:\kappa^2\rightarrow \k$ be the function given by: for all $(\b, {\gamma})\in \kappa^2$, $f(\b, {\gamma})$ is the least ordinal $\
| rank_avg 1,052 | rank_max 1,911 | rank_min 1,210 | rank_median 992 | rank_by_avgsim null | avgsim_to_github null | dataset github_plus_top10pct_by_avg |
alculation by induction on $n$.
An analogue of Lusztig’s PBW basis {#sec:Lusztig}
==================================
Let $\chi \in {\mathcal{X}}$ and $p\in I$. Assume that $\chi $ is $p$-finite. Let $q_{i j}=\chi ({\alpha }_i,{\alpha }_j)$ and $c_{p i}=c_{p i}^\chi $ for all $i,j\in I$.
For all $m\in {\mathbb{N}}_0$ and $i\in I\setminus \{p\}$ define recursively $E^\pm _{i,m}\in U^+_{{\alpha }_i+m{\alpha }_p}$, $F^\pm _{i,m}\in U^-_{{\alpha }_i+m{\alpha }_p}$ by $$\begin{aligned}
E^+_{i,0}=&E_i, &
E^+_{i,m+1}=&\,E_p E^+_{i,m}-(K_p{\boldsymbol{.}}E^+_{i,m})E_p,\\
E^-_{i,0}=&E_i, &
E^-_{i,m+1}=&\,E_p E^-_{i,m}-(L_p{\boldsymbol{.}}E^-_{i,m})E_p,\\
F^+_{i,0}=&F_i, &
F^+_{i,m+1}=&\,F_p F^+_{i,m}-(L_p{\boldsymbol{.}}F^+_{i,m})F_p,\\
F^-_{i,0}=&F_i, &
F^-_{i,m+1}=&\,F_p F^-_{i,m}-(K_p{\boldsymbol{.}}F^-_{i,m})F_p.\end{aligned}$$ We also define $E^+_{i,-1}=E^-_{i,-1}=F^+_{i,-1}=F^-_{i,-1}=0$. The above definitions depend essentially on $p$. If we want to emphasize this, we will write $E^\pm _{i,m(p)}$ and $F^\pm _{i,m(p)}$ instead of $E^\pm _{i,m}$ and $F^\pm _{i,m}$, respectively.
For all $i\in I\setminus \{p\}$ define $$\lambda _i^\chi =\qfact{-c _{p i}}{q_{p p}}
\prod _{j=0}^{-c _{p i}-1}(q_{p p}^j q_{p i}q_{i p}-1).$$ Then $\lambda _i^\chi \not=0$ by definition of $c_{p i}=c^\chi _{p i}$. The next theorem was proven in [@p-Heck07b Thm.6.11].
\[th:Liso\] Let $\chi \in {\mathcal{X}}$ and $p\in I$. Assume that $\chi $ is $p$-finite. Let $c_{pi}=c_{pi}^\chi $ for all $i\in I$.
\(i) There exist unique algebra isomorphisms ${T}_p, {T}_p^-: U (\chi )\to U (r_p(\chi ))$ such that $$\begin{aligned}
{T}_p(K_p)=&{T}_p^-(K_p)=K _p^{-1},&
{T}_p(K_i)=&{T}_p^-(K_i)=K _iK _p^{-c_{p i}},\\
{T}_p(L_p)=&{T}_p^-(L_p)=L _p^{-1},&
{T}_p(L_i)=&{T}_p^-(L_i)=L _iL _p^{-c_{p i}},\\
{T}_p(E_p)=&F _p L _p^{-1},&
{T}_p(E_i)=&E ^+_{i,-c_{p i}},\\
{T}_p(F_p)=&K _p^{-1}E _p,&
{T}_p(F_i)=&\lambda _i(r_p(\chi ))^{-1}F ^+_{i,-c_{p i}},\\
{T}_p^-(E_p)=&K _p^{-1}F _p,&
{T}_p^-(E_i)=&\lambda
| rank_avg 1,053 | rank_max 1,705 | rank_min 1,230 | rank_median 1,076 | rank_by_avgsim null | avgsim_to_github null | dataset github_plus_top10pct_by_avg |
\delta_{i-1}v_{i-1}\cdot {}^tm_{i, i-1}^{\prime}+\delta_{i+1}v_{i+1}\cdot {}^tm_{i, i+1}^{\prime} =0
~~~ \left(=\pi (m_{i,i}^{\ast\ast})^{\prime}\right).\\
\end{array}
\right.$$
Here, notations follow from those of (e) and (f) in the description of an element of $\tilde{M}(R)$.
Here, all matrices carrying ${}^{\prime}$ in the superscript are considered as matrices with entries in $R$. When $i$ is even and $L_i$ is *of type* $\textit{I}$ or when $i$ is odd and $L_i$ is *free of type* $\textit{I}$, we formally write $m_{i,i}=\mathrm{id}+\pi m_{i,i}^{\prime}$. Then $\tilde{G}^1(R)$ is the set of $m\in \tilde{M}^1(R)$ such that $h\circ m=h=(f_{i, j}, a_i\cdots f_i)$. Since $h\circ m$ is an element of $\underline{H}(R)$, we can write $h\circ m$ as $(f_{i, j}', a_i'\cdots f_i')$. In what follows, we will write $(f_{i, j}', a_i'\cdots f_i')$ in terms of $h=(f_{i, j}, a_i\cdots f_i)$ and $m$, and will compare $(f_{i, j}', a_i'\cdots f_i')$ with $(f_{i, j}, a_i\cdots f_i)$, in order to obtain a set of equations defining $\tilde{G}^1$.\
If we put all these (1)-(7) into (\[ea2\]), then we obtain $$\pi^j\left(\sigma(1+\pi\cdot {}^tm_{i,i}')h_i\pi m_{i,j}'+\sigma(\pi\cdot {}^tm_{j,i}')h_j(1+\pi m_{j,j}')+\pi^2(\ast)\right),$$ where $(\ast)$ is a certain formal polynomial. Therefore, $$\label{ea3-}
f_{i,j}'=\left(\sigma(1+\pi\cdot {}^tm_{i,i}')h_i\pi m_{i,j}'+\sigma(\pi\cdot {}^tm_{j,i}')h_j(1+\pi m_{j,j}')+\pi^2(\ast)\right),$$ where this equation is considered in $B\otimes_AR$ and $\pi$ stands for $\pi\otimes 1 \in B\otimes_AR$. Thus each term having $\pi^2$ as a factor is $0$ and we have $$\label{ea3}
f_{i,j}'=h_i\pi m_{i,j}'+\sigma(\pi\cdot {}^tm_{j,i}')h_j, \textit{where $i<j$}.$$ This equation is of the form $f_{i,j}'=X+\pi Y$ since it is an equation in $B\otimes_AR$. By letting $f_{i,j}'=f_{i,j}=0$, we obtain $$\label{ea4}
\bar{h}_i m_{i,j}'+{}^tm_{j,i}'\bar{h}_j=0, \textit{where $i<j$},$$ where $\bar{h}_i$ (resp. $\bar{h}_j$) is obtained by letting each term in $h_i$ (resp. $h_j$) having
| rank_avg 1,054 | rank_max 788 | rank_min 1,070 | rank_median 1,056 | rank_by_avgsim 2,413 | avgsim_to_github 0.779988 | dataset github_plus_top10pct_by_avg |
fore achieved by storing a single grid of values for each unique firing pattern to date.
A higher-order Newton-Cotes numerical integration method would produce a more accurate estimate of , but the associated interpolated density surface of piecewise polynomials would not be guaranteed to be bounded below by zero, making an inspection of parameter estimates for assessing model fit problematic. Alternatively, quadrature on adaptive sparse grids [@Bun03], where the grid is finer at regions of high curvature, could improve estimator accuracy over the static regular rectangular lattice. However, this would be achieved at the expense of additional implementation complexity and further approximation error when estimating the surface at infilled lattice points.
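The boundedness issue is easy to demonstrate: a higher-order (cubic) interpolant through nonnegative data can overshoot below zero between knots, while piecewise-linear interpolation cannot. A minimal sketch with invented illustrative values, not data from this model:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Invented nonnegative "density" values on a regular grid.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 1.0, 0.0, 0.0, 0.0])

cubic = CubicSpline(x, y, bc_type="natural")  # higher-order interpolant
xf = np.linspace(0.0, 4.0, 401)

# The cubic interpolant dips below zero between knots, so it cannot be
# interpreted as a density ...
assert cubic(xf).min() < 0.0
# ... while linear interpolation of nonnegative data stays nonnegative.
assert np.interp(xf, x, y).min() >= 0.0
```

The same ringing of piecewise polynomials is what makes inspecting parameter estimates against an interpolated density surface problematic.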
Details concerning the observation process {#sec:DetailObsProc}
------------------------------------------
Consider the observation model . At time $t\le \tau-1$, when no MUs fire, ${\mathbf{x}}_t = {\mathbf{0}}$, the observation, $y_t$, provides no new information about the observation parameters for the MUs, ${\mathcal{A}}_t = {\mathcal{A}}_{t-1}$, and $Y_{j,t}|{\mathbf{x}}_t=0,{\bar{\mu}},{\bar{\nu}},{\boldsymbol{\mu}},\nu\sim \mathrm{N}({\bar{\mu}},{\bar{\nu}}^{-1})$. Standard conjugate updates may, therefore, be applied to obtain ${\bar{\mathcal{A}}}_t$ as follows: $$\begin{aligned}
{\bar{m}}_t = {\bar{m}}_{t-1} + \frac{y_t-{\bar{m}}_{t-1}}{1+{\bar{c}}_{t-1}},\quad
{\bar{c}}_t = \frac{{\bar{c}}_{t-1}}{1+{\bar{c}}_{t-1}},\quad
{\bar{a}}_t = {\bar{a}}_{t-1} + \frac{1}{2},\quad
{\bar{b}}_t = {\bar{b}}_{t-1} + \frac{(y_t-{\bar{m}}_{t-1})^2}{2(1+{\bar{c}}_{t-1})}.\end{aligned}$$
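A minimal Python sketch of these four conjugate updates (the function and argument names are ours; they mirror $\bar{m}_{t-1}$, $\bar{c}_{t-1}$, $\bar{a}_{t-1}$, $\bar{b}_{t-1}$ above):

```python
def conjugate_update(y, m, c, a, b):
    """One standard Normal-Gamma conjugate update for a single observation y.

    (m, c, a, b) are the current hyperparameters; the returned tuple holds
    the updated ones, following the four equations above.
    """
    m_new = m + (y - m) / (1.0 + c)
    c_new = c / (1.0 + c)
    a_new = a + 0.5
    b_new = b + (y - m) ** 2 / (2.0 * (1.0 + c))
    return m_new, c_new, a_new, b_new

# Example: starting from (m, c, a, b) = (0, 1, 1, 1), observing y = 2
# moves the mean halfway and shrinks the scale factor.
print(conjugate_update(2.0, 0.0, 1.0, 1.0, 1.0))  # (1.0, 0.5, 1.5, 2.0)
```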
When $t\ge \tau$, at least one MU fires and tractable updates are not possible. However, in real experiments, because of the precautions detailed in Section \[sec:Intro\], the variance (and expectation) of the baseline noise are generally much smaller than the variability in response from a given MU when it fires. For example [@Hen06] find a ratio of an order of magnitude. We, therefore
| rank_avg 1,055 | rank_max 453 | rank_min 1,696 | rank_median 1,142 | rank_by_avgsim 1,943 | avgsim_to_github 0.784243 | dataset github_plus_top10pct_by_avg |
independent quantities $\one$, $\gamma_5$, $\gamma^\mu$, $\gamma^\mu\gamma_5$ and $\sigma^{\mu\nu}$ (one has indeed $\sigma^{\mu\nu}\gamma_5=i\epsilon^{\mu\nu\rho\sigma}\sigma_{\rho\sigma}/2$ where $\epsilon^{0123}=+1$).
Further identities involving four Dirac spinors are also important to establish supersymmetry invariance. These involve the celebrated Fierz identities,[@QFT] the simplest of which is of the form,[^18] $$\begin{array}{r l}
\overline{\psi_1}\one\psi_2\,\overline{\psi_3}\one\psi_4=
-\frac{1}{4}\Big\{&
\overline{\psi_1}\one\psi_4\,\overline{\psi_3}\one\psi_2\,+\,
\overline{\psi_1}\gamma^\mu\psi_4\,\overline{\psi_3}\gamma_\mu\psi_2\,+\,\\
& \\
&+\frac{1}{2}\overline{\psi_1}\sigma^{\mu\nu}\psi_4\,
\overline{\psi_3}\sigma_{\mu\nu}\psi_2\,-\,\\
& \\
& - \overline{\psi_1}\gamma^\mu\gamma_5\psi_4\,
\overline{\psi_3}\gamma_\mu\gamma_5\psi_2\,+\,
\overline{\psi_1}\gamma_5\psi_4\,\overline{\psi_3}\gamma_5\psi_2\Big\}\ ,
\end{array}$$ where $\psi_1$, $\psi_2$, $\psi_3$ and $\psi_4$ are arbitrary Grassmann odd Dirac spinors. An application of this identity leads, for instance, to the relation $$\overline{\epsilon_{1R}}\,\partial_\mu\psi_L\,\gamma^\mu\epsilon_{2R}=
-\frac{1}{2}\overline{\epsilon_{1R}}\gamma_\nu\epsilon_{2R}\,
\gamma^\mu\gamma^\nu\partial_\mu\psi_L\ ,$$ where $\epsilon_{1R}$, $\epsilon_{2R}$ and $\psi_L$ are Grassmann odd Dirac spinors of definite chirality as indicated by their lower label. This relation is central in establishing the supersymmetry invariance property of the simplest example of a supersymmetric field theory, the so-called Wess-Zumino model involving a scalar and a Weyl or Majorana spinor.[@WZ; @Deren]
In the case of Grassmann odd Majorana spinors $\epsilon$ and $\lambda$, one also has, $$\begin{array}{r c c c l}
\overline{\epsilon}\lambda&=&\overline{\lambda}\epsilon&=&
\left(\overline{\epsilon}\lambda\right)^\dagger\ ,\\
\overline{\epsilon}\gamma_5\lambda&=&
\overline{\lambda}\gamma_5\epsilon&=&
-\left(\overline{\epsilon}\gamma_5\lambda\right)^\dagger\ ,\\
\overline{\epsil
| rank_avg 1,056 | rank_max 210 | rank_min 893 | rank_median 1,092 | rank_by_avgsim null | avgsim_to_github null | dataset github_plus_top10pct_by_avg |
in .
\[cor:accuracy.beta\] With probability at least $ 1- \frac{2}{n}$, the maximal length of the sides of the hyper-rectangle $\tilde{C}_{{\widehat{S}}}$ is bounded by $$C \sqrt{ \frac{\log k}{n} \left( \frac{k^{5/2}}{u_n^3 u^2} \overline{v} \sqrt{ \frac{\log n}{n}} + \frac{k }{u^4} \overline{v}\right) },$$ for a constant $C>0$ depending on $A$ only, uniformly over all $P \in \mathcal{P}_n^{\mathrm{OLS}}$.
### Confidence sets for the projection parameters: The Bootstrap {#confidence-sets-for-the-projection-parameters-the-bootstrap .unnumbered}
The confidence set in based on the Normal approximation requires the evaluation of both the matrix $\hat{\Gamma}_{{\widehat{S}}}$ and the quantile $\hat{t}_\alpha$ in , which may be computationally inconvenient. Similarly, the hyper-rectangle requires computing the diagonal entries of $\hat{\Gamma}_{{\widehat{S}}}$. Below we show that the paired bootstrap can be deployed to construct analogous confidence sets, centered at $\hat{\beta}_{{\widehat{S}}}$, without knowledge of $\hat{\Gamma}_{{\widehat{S}}}$.
Throughout, by the bootstrap distribution we mean the empirical probability measure associated to the sub-sample $\mathcal{D}_{2,n}$ and conditionally on $\mathcal{D}_{1,n}$ and the outcome of the sample splitting procedure.
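The paired bootstrap can be sketched with plain least squares; everything below (names, sample sizes, the percentile construction) is an illustrative assumption rather than the paper's exact procedure:

```python
import numpy as np

def paired_bootstrap_ci(X, y, B=500, alpha=0.05, seed=0):
    """Percentile paired-bootstrap intervals for OLS coefficients.

    Resample (X_i, y_i) pairs, refit OLS, and take empirical quantiles.
    Resamples with a singular design matrix are skipped, mirroring the
    invertibility caveat in the text.
    """
    rng = np.random.default_rng(seed)
    n, _ = X.shape
    draws = []
    while len(draws) < B:
        idx = rng.integers(0, n, size=n)
        Xb, yb = X[idx], y[idx]
        try:
            draws.append(np.linalg.solve(Xb.T @ Xb, Xb.T @ yb))
        except np.linalg.LinAlgError:
            continue  # singular resampled design: draw again
    draws = np.asarray(draws)
    return (np.quantile(draws, alpha / 2, axis=0),
            np.quantile(draws, 1 - alpha / 2, axis=0))

# Illustrative data: intercept 1, slope 2, small noise.
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(300), rng.normal(size=300)])
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.normal(size=300)
lo, hi = paired_bootstrap_ci(X, y)
```

Note that this avoids ever forming the covariance matrix $\hat{\Gamma}$: the interval endpoints come directly from the bootstrap draws.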
We let $\hat{\beta}^*_{{\widehat{S}}}$ denote the estimator of the projection parameters $\beta_{{\widehat{S}}}$ of the form and arising from an i.i.d. sample of size $n$ drawn from the bootstrap distribution. It is important to point out that $\hat{\beta}^*_{{\widehat{S}}}$ is well-defined only provided that the bootstrap realization of the covariates $(X_1^*,\ldots,X_n^*)$ is such that the corresponding $k$-dimensional empirical covariance matrix $$\frac{1}{n} \sum_{i \in \mathcal{I}_{2,n}} X_i^*({\widehat{S}}) (X_i^*({\widehat{S}}))^\top$$ is invertible. Since the data distribution is assumed to have a $d$-dimensional Lebesgue density, this occurs almost surely with respect to the distribution of the full sample $\mathcal{D}_n$ if the bootstr
| rank_avg 1,057 | rank_max 244 | rank_min 798 | rank_median 1,193 | rank_by_avgsim 3,177 | avgsim_to_github 0.774268 | dataset github_plus_top10pct_by_avg |
{\prime}}^{\iota *}(X,t)
\chi_{\alpha^{\prime}\alpha}(X)\;,
\label{eq:qc-ave-ad}$$ where the coefficients $C_{\alpha}^{\iota}(X,t)$ and $C_{\alpha^{\prime}}^{\iota *}(X,t)$ are evolved according to Eqs. (\[eq:c\]) and (\[eq:cstar\]), respectively. Equations (\[eq:c\]) and (\[eq:cstar\]) are non-linear equations which couple all the adiabatic states used to analyze the system.
At this stage, a general discussion about such a non-linear character is required. By wide consensus, quantum mechanics is considered a linear theory. This leads, for example, to the visualization of quantum transitions as instantaneous *quantum jumps*. The linearity of the theory also determines the need to consider infinite perturbative series, which must be re-summed in some way in order to extract meaningful predictions. Density Functional Theory is an example of a non-linear theory [@dft] but it is usually considered just a computational tool. However, there are other approaches to quantum theory that represent interactions by an intrinsic non-linear scheme [@mead]. It is not difficult to see how this is possible. Matter is represented by waves, and these very same waves enter into the definition of the fields defining their interaction [@tomonaga]. This point of view has been pursued by Jaynes [@jaynes] and Barut [@barut], among others. These non-linear approaches depict quantum transitions as abrupt but continuous events [@mead] in which, to go from state $\vert 1\rangle$ to state $\vert 2\rangle$, the system is first brought by the interaction into a superposition $\alpha\vert 1\rangle+\beta\vert 2\rangle$, and then, as the interaction ends, it finally goes to state $\vert 2\rangle$. It is understood that this is made possible by the non-linearity of such theories because, instead, a linear theory would preserve the superposition indefinitely. Incidentally, the picture of the transition process just described also emerges from the numerical implementation [@kapral] of the nonadiabatic quantum-classical dynamics of phase
| rank_avg 1,058 | rank_max 4,506 | rank_min 1,835 | rank_median 1,107 | rank_by_avgsim null | avgsim_to_github null | dataset github_plus_top10pct_by_avg |
's supposed to give back the assets in order based on t.Count, but I think it might not be working because the .Count is actually not part of asset, which is what is being selected, but I have no idea how to fix this.
As you can see there is an assetVisits table and an assets table, and I need to get back the assets in order of the assetVisits.AccessCount but I can't get it to work, what the hell??
A:
You asked an almost identical question a couple of hours ago, and the answer is the same: do the ordering after you have selected the rows you want to order.
Change:
return final.Take(limit);
to:
var finalOrdered = from asset in final
join assetVisit in db.AssetVisits on asset.AssetID equals assetVisit.AssetID
orderby assetVisit.AccessCounter
select asset;
return finalOrdered.Take(limit);
You can also remove the premature 'orderby' from your own code, since it is not doing anything.
Q:
Is it possible to style the route path in Google Maps?
so that the road from point to point can be colored in an arbitrary color
A:
Take a look here:
var polylineOptions={};
polylineOptions.strokeColor="#438391";
polylineOptions.strokeOpacity=.6;
polylineOptions.strokeWeight=4;
//now assign to map render options
mapRendererOptions.polylineOptions=polylineOptions;
//now set renderer
directionsDisplay.setOptions(
new google.maps.DirectionsRenderer(mapRendererOptions));
See also this question on the English-language Stack Overflow.
Q:
Changing
I have a solution for my problem, but im looking for a more efficient way to solve it.
I basically want to code a function that adds new lines of styles in CSS with JavaScript.
My solution is this.
function addStyle(newStyleLine){
var mainStyle = document.getElementsByTagName("style")[0];
mainStyle.innerHTML = mainStyle.innerHTML + "body {"+newStyleLine+"}";
}
And i have a <style> tag in my html. It works fine, but i think when i will
| rank_avg 1,059 | rank_max 997 | rank_min 190 | rank_median 315 | rank_by_avgsim null | avgsim_to_github null | dataset github_plus_top10pct_by_avg |
mogorov--Smirnov test, the residual gutta-percha and sealer data were not normally distributed. Therefore, nonparametric Kruskal--Wallis and post hoc Dunn's tests were used at P=0.05 to compare the mean areas of residual gutta-percha and sealer. All statistical analyses were performed with SPSS 21.0 (IBM Corp., Armonk, NY, USA) software.
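As a sketch, the same nonparametric comparison can be reproduced with open-source tooling; the group values below are invented for illustration and are not the study's data (Dunn's post hoc test would come from the third-party scikit-posthocs package):

```python
from scipy.stats import kruskal

# Hypothetical residual gutta-percha/sealer areas (mm^2) for three groups;
# the numbers are invented for this sketch, not taken from the study.
xp_endo = [0.6, 0.8, 0.7, 0.9, 0.8]
endo_activator = [1.0, 1.2, 1.1, 1.3, 1.0]
conventional = [1.8, 2.0, 1.7, 1.9, 2.1]

stat, p = kruskal(xp_endo, endo_activator, conventional)
print(f"H = {stat:.2f}, p = {p:.4f}")
if p < 0.05:
    # Pairwise Dunn comparisons would follow here (e.g. scikit-posthocs).
    print("groups differ significantly")
```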
RESULTS {#sec1-3}
=======
The results for the mean area of residual gutta-percha and sealer are shown in [Table 1](#T1){ref-type="table"}. There was a significant difference regarding the total residual gutta-percha and sealer among groups (P\<0.001). The mean area of the gutta-percha and sealer remnant in the XP group (0.80±0.25) was significantly lower than that in the other groups (P\<0.001). The mean area of gutta-percha and sealer remnant in the CI group (1.84±0.50) was significantly greater than that in the other groups (P\<0.001).
######
Mean and Standard Deviations of Residual Gutta-percha and Sealer on Canal Walls (mm^2^)
Group Apical Middle Coronal P value Total
------------------------- ----------------- ------------------- ------------------ --------- --------------
XP-endo Finisher 0.80±0.25^x\ a^ 0.80±0.32^x\ a^ 0.79±0.18^x\ a^ \>0.05 0.80±0.25^x^
EndoActivator 0.93±0.28^x\ a^ 1.21±0.25^yz\ bc^ 1.20±0.36^y\ ac^ \<0.05 1.11±0.32^y^
IrriSafe 0.82±0.33^x\ a^ 1.07±0.47^xz\ ac^ 1.25±0.37^y\ bc^ \<0.05 1.05±0.43^y^
Conventional Irrigation 2.05±0.49^y\ a^ 1.50±0.40^y\ b^ 1.97±0.42^z\ a^ \<0.001 1.84±0.50^z^
P value \<0.001 \<0.001 \<0.001 \<0.001
\*Different superscript letters indicate a significant difference between groups (abc; for rows and xyz; for columns)
When comparing the root canal regions, the apical third of the CI group had significantly more residual gutta-percha and sealer when compared to that of the other groups (P\
| rank_avg 1,060 | rank_max 61 | rank_min 1,242 | rank_median 1,376 | rank_by_avgsim null | avgsim_to_github null | dataset github_plus_top10pct_by_avg |
e the representative galaxy for the observed universe. This representative galaxy could, in principle, be found by sectioning the observed universe into three-dimensional, non-overlapping cells of different sizes centered on each galaxy. By surveying these cells, a representative galaxy, with an average $v_H^{*}$ and $r^{*}_H$, can be found and used as input for the model galaxy. Even though such a survey has not yet been done, a large repository of galactic rotation curves and core radii [@Blok-1; @Cour; @Math] is present in the literature. Taken as a whole, these 1393 galaxies are reasonably random, and are likely representative of the observed universe at large.
While we were able to estimate $\alpha_\Lambda=3/2$ by looking at the galactic structure, the accuracy of this estimate is unknown; comparison with experiment is not possible. We instead *require* that $r_{II} = \mathfrak{K}(\Omega)\lambda_H/2$, which in turn gives $\alpha_\Lambda$ as the solution of $\mathfrak{K}(\Omega)^2(1+4^{1+\alpha_\Lambda}) =
32\pi \chi(\alpha_\Lambda)/3\Omega_\Lambda$; this sets $\alpha_\Lambda =
1.51_{\pm 0.11}$.
A calculation of $\sigma_8^2$ has been done [@ADS] using Eq. $(\ref{rho-beta})$. The resultant $\sigma_8^2$ is dominated by two terms. The first is due to the background density $\rho_{\hbox{\scriptsize{asymp}}}$. It depends only on $\alpha_\Lambda$, and contributes a set amount of 0.141 to $\sigma_8^2$. The second is the larger one, and is due primarily to the $1/r^2$ term in Eq. $(\ref{rho-beta})$. It depends explicitly on the rotation curves through the term $(v_H^{*}/c)^4(8h^{-1}\hbox{Mpc}/r_H^{*})$.
Although there have been many studies of galactic rotation curves in the literature, both $v_H$ and $r_H$ are needed here. This requires fitting the observed velocity curve to some model. To our knowledge, both values are available from four places in the literature: the de Blok et al. data set [@Blok-1]; the CF data set [@Cour]; the Mathewson et al. data set [@Math; @Pers-1995] analysed in [@Cour];
| rank_avg 1,061 | rank_max 4,048 | rank_min 1,622 | rank_median 858 | rank_by_avgsim 2,239 | avgsim_to_github 0.781501 | dataset github_plus_top10pct_by_avg |
GC simultaneously; that's why you require -XX:+UseParNewGC to be paired with CMS. Otherwise, use -XX:+UseSerialGC explicitly, OR -XX:-UseParNewGC if you wish to use the serial method against the young generation
A:
UseParNewGC, usually known as the "parallel young generation collector", is the same in all ways as the parallel garbage collector (-XX:+UseParallelGC), except that it is more sophisticated and efficient. Also, it can be used with a "concurrent low pause collector".
See Java GC FAQ, question 22 for more information.
Note that there are some known bugs with UseParNewGC
Q:
Finding New/Existing Customers from a Dataframe
I need to create a categorical column indicating whether the client account code has occurred for the first time i.e. "New" or it has occurred before i.e. "Existing".
Only the first occurrence needs to be considered as "New", the rest of the occurrences, irrespective of the gap in occurrences, should all be considered as "Existing".
I tried looping through the list of unique account codes, within which I would filter the DataFrame for that particular account code and find the minimum date, which would be stored in a separate table. Then, looking up this table, I would enter the New/Existing tag in the categorical column. Couldn't execute it properly though.
Is there a simple way to accomplish it?
I have attached the sample file below:
Sample Data
Also the data has some non-UTF-8 encoded characters which I couldn't handle.
A:
Try:
import numpy as np

df.assign(Occurence=np.where(~df['Account Code'].duplicated(),'New','Existing'))
Output:
Created Date Account Code Occurence
0 7-Sep-13 CL000247 New
1 7-Sep-13 CL000012 New
2 7-Sep-13 CL000875 New
3 7-Sep-13 CL000084 New
4 7-Sep-13 CL000186 New
5 7-Sep-13 CL000167 New
6 7-Sep-13 CL000167 Existing
7 7-Sep-13 CL000215 New
8 12-Sep-13 Wan2013001419 New
9 12-Sep-13 CL000097 New
...
Q:
Spring boot 1.4 externalize log4j
| rank_avg 1,062 | rank_max 4,695 | rank_min 641 | rank_median 915 | rank_by_avgsim 1,627 | avgsim_to_github 0.787422 | dataset github_plus_top10pct_by_avg |
F(t)\Psi\
=&\ \frac{1}{1+\Xi(D_\eta(t)-\eta)}\,\Psi
\nonumber\\
=&\ \Psi + \Xi[\![A_\eta(t)\,, \Psi]\!]
+ \Xi[\![A_\eta(t),\Xi[\![A_\eta(t), \Psi]\!] ]\!]+\cdots\,.
\label{def F}\end{aligned}$$ The map $F(t)$ has a property that changes $D_\eta(t)$ into $\eta$: $$D_\eta(t)F(t)\ =\ F(t)\eta\,.
\label{important property}$$ Using $F(t)$, we can define a homotopy operator for $D_\eta(t)$ as $F(t)\Xi$ satisfying[@Kunitomo:2015usa] $$\{D_\eta(t), F(t)\Xi\}\ =\ 1\,,
\label{homotopy relation}$$ which trivializes the $D_\eta$-cohomology as well as the $\eta$-cohomology in the large Hilbert space. From the definition (\[def F\]), we can show that the homotopy operator $F\Xi$ is BPZ even $$\langle F\Xi \Psi_1, \Psi_2\rangle\ =\ (-1)^{\Psi_1}\langle \Psi_1, F\Xi \Psi_2\rangle\,,
\label{BPZ homotopy R}$$ and satisfies $$ \{Q, F\Xi\}A\ =\
FXF\Xi D_\eta A + FX\eta F\Xi A-F\Xi[QA_\eta, F\Xi A]\,,
\label{Q and FXi}$$ for a string field $A$. It is useful to note that we can define the projection operators $$\mathcal{P}_R\ =\ D_\eta F\Xi\,,\qquad
\mathcal{P}_R^{\perp} =\ F\Xi D_\eta\,,
\label{proj ramond}$$ onto the Ramond string field annihilated by $D_\eta$ and its orthogonal complement, respectively.
The BPZ inner product in the small Hilbert space ${\langle\!\langle}\cdot,\cdot{\rangle\!\rangle}$ is related to that in the large Hilbert space $\langle\cdot,\cdot\rangle$ as $$\begin{aligned}
{\langle\!\langle}A\,, B{\rangle\!\rangle}\ =&\ \langle\Xi A\,, B\rangle\ =\ (-1)^A\langle A\,, \Xi B\rangle
\nonumber\\
=&\ \langle\xi_0 A\,, B\rangle\ =\ (-1)^A\langle A\,, \xi_0 B\rangle\,,
\label{small to large}\end{aligned}$$ where $A$ and $B$ are in the small Hilbert space, and also in the Ramond sector for the equations in the first line.
Using a general variation of the map $F(t)$ on a string field $A$, $$(\delta F(t))A\
=\ -F(t)(\delta F^{-1}(t))F(t)A\
=\ F\Xi[\![\delta A_\eta(t)\,, F(t)A]\!]\,,
\label{variation F}$$ a general variation of the action (\[complete action\]) can be calculated as[@Kunitomo:2015usa] $$\delta S\
| rank_avg 1,063 | rank_max 1,469 | rank_min 1,119 | rank_median 1,029 | rank_by_avgsim null | avgsim_to_github null | dataset github_plus_top10pct_by_avg |
In Sections \[sec:cart\_int\] and \[s:interior\_solver\_cylindrical\] below, $\rho_{i,j,k}$ is an arbitrary density distribution on the grid, and in fact represents a different quantity for each of the three instances where we solve for the interior potential.
Cartesian Grid Solution with Zero Boundary Value {#sec:cart_int}
------------------------------------------------
It is conventional to utilize the eigenfunctions of a differential operator in solving an elliptic partial differential equation. The same technique can be applied to the discretized Poisson equation, if the eigenfunctions of the corresponding discrete Laplace operator can be found.
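Before writing the eigenfunctions down, here is a quick numerical check of the idea (the grid size and spacing are arbitrary choices): for the 1-D second-difference operator with zero boundary values, discrete sine modes are exact eigenvectors.

```python
import numpy as np

N, h = 16, 1.0                      # arbitrary grid size and spacing
i = np.arange(1, N + 1)

# 1-D second-difference operator with zero (Dirichlet) boundary values.
L = (np.diag(-2.0 * np.ones(N))
     + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / h**2

for l in range(1, N + 1):
    v = np.sin(np.pi * l * i / (N + 1))                    # sine mode
    lam = -(4.0 / h**2) * np.sin(np.pi * l / (2 * (N + 1))) ** 2
    assert np.allclose(L @ v, lam * v)                     # exact eigenpair
```

The same separable structure is what lets the full 3-D problem be diagonalized one axis at a time.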
Let ${\cal X}^l_i$, ${\cal Y}^m_j$, and ${\cal Z}^n_k$ be the eigenfunctions of the discrete Laplace operators $\Delta_x^2$, $\Delta_y^2$, and $\Delta_z^2$ satisfying $\Delta_x^2{\cal X}^l_i = \lambda_x^l{\cal X}^l_i$, $\Delta_y^2{\cal Y}^m_j = \lambda_y^m{\cal Y}^m_j$, and $\Delta_z^2{\cal Z}^n_k = \lambda_z^n{\cal Z}^n_k$, with respective eigenvalues $\lambda^l_x$, $\lambda_y^m$, and $\lambda_z^n$. It is straightforward to show that $${\cal X}^l_i = \sin\left(\frac{\pi li}{N_x+1}\right),\label{eq:car_eigen_x}$$ $${\cal Y}^m_j = \sin\left(\frac{\pi mj}{N_y+1}\right),\label{eq:car_eigen_y}$$ $${\cal Z}^n_k = \sin\left(\frac{\pi nk}{N_z+1}\right),\label{eq:car_eigen_z}$$ are the desired eigenfunctions satisfying the zero boundary condition at the ghost cells. The corresponding eigenvalues are $$\lambda_x^l = -k_l^2\left[ \sin\left( \frac{\pi l}{2(N_x+1)} \right) \bigg/\left( \frac{\pi l}{2N_x} \right) \right]^2,$$ $$\lambda_y^m = -k_m^2\left[ \sin\left( \frac{\pi m}{2(N_y+1)} \right) \bigg/\left( \frac{\pi m}{2N_y} \right) \right]^2,$$ $$\lambda_z^n = -k_n^2\left[ \sin\left( \frac{\pi n}{2(N_z+1)} \right) \bigg/\left( \frac{\pi n}{2N_z} \right) \right]^2,$$ where $k_l \equiv \pi l / L_x$, $k_m \equiv \pi m / L_y$, and $k_n \equiv \pi n / L_z$. In the limit of $l/N_x, m/N_y, n/N_z \ll 1$, the discrete eigenvalues reduce to the counterpart of the continuous Lap
| rank_avg 1,064 | rank_max 2,992 | rank_min 1,322 | rank_median 1,052 | rank_by_avgsim null | avgsim_to_github null | dataset github_plus_top10pct_by_avg |
the appendix. We quote the final, exact form here [@Pierce:1996zz].
m\_b\^ = \[Eq:fullgluino\] ,
where the momentum of the bottom quark is given by $p$. In the limit $p \rightarrow 0$ (which is a good assumption here since $p^2 = m_b^2$), the Passarino-Veltman functions can be written as B\_0(0, , m\_) &=& - () + 1 + () x\
B\_1(0, , m\_) &=& \[Eq:pass-velt-B1\] where $x = m_{\tilde{b}}^2/\mg^2$. The first term in the above expression simplifies to &&\
&=& .
The angle $\sin 2\theta_b$ can be determined to be 2\_b = = , where we have ignored terms proportional to $M_Z$ or $m_b$. The trilinear coupling $A_b$ is often ignored since $\mu$ is enhanced by $\tanb$.[^2] Similarly, the second term in \[fullgluino\] is also neglected. Collecting terms, we arrive at the form in \[Eq:common-app\], (-A\_b) I(\^2, m\_[\_1]{}\^2, m\_[\_2]{}\^2) , \[Eq:gluino-app\] where I (a, b, c) = .
This is the expression that is typically used in most of the literature with large $\tanb$ models.
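Since the extracted equation above lost the explicit form of $I(a,b,c)$, the following sketch uses the three-mass loop function as it is commonly written in the large-$\tanb$ literature, $I(a,b,c) = \frac{ab\log(a/b)+bc\log(b/c)+ca\log(c/a)}{(a-b)(b-c)(a-c)}$; this form is an assumption here, not a quotation of this paper.

```python
import math

def loop_I(a, b, c):
    """Three-mass loop function I(a, b, c), in the form commonly used in the
    large-tan(beta) literature (assumed; the source expression was garbled).
    Arguments are squared masses, assumed pairwise distinct."""
    return (a * b * math.log(a / b)
            + b * c * math.log(b / c)
            + c * a * math.log(c / a)) / ((a - b) * (b - c) * (a - c))

# The function is totally symmetric in its arguments, and I(a, a, a) -> 1/(2a)
# in the degenerate limit (checked here near-degenerately to avoid the 0/0 form).
assert abs(loop_I(1.0, 2.0, 3.0) - loop_I(3.0, 1.0, 2.0)) < 1e-12
assert abs(loop_I(1.0, 1.0001, 1.0002) - 0.5) < 1e-3
```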
![The plot shows the exact, one-loop gluino-sbottom threshold correction to the bottom quark mass vs. the approximate form of this correction given in \[Eq:common-app\]. Darker shades of blue represent increasing squark masses from 1 TeV to $\ge$4 TeV. The black (lower) diagonal line represents where the exact and approximate forms would be equal. The red (upper) diagonal line represents where the correction from the exact form is $\sim$8% larger than the correction from the approximate form. []{data-label="fig:gl-ex-app"}](new_PLOTS/gluino-sbottom.pdf){width="60.00000%"}
In \[fig:gl-ex-app\], the exact, one-loop gluino-sbottom threshold correction to the bottom quark mass is compared to the approximate form of this correction given in \[Eq:common-app\]; the shading and reference lines are described in the caption. The red (upper) diagonal line represents where the correction from the exact form is $\sim$8% larger than the
| 1,065 | 127 | 1,965 | 1,154 | 3,050 | 0.7752 | github_plus_top10pct_by_avg |
y Theorem \[210\], there is a suitable basis for this lattice such that the norm of the $\pi^1$-modular Jordan component is the ideal $(4)$. Namely, we choose $$(e_5-e_1', e_1'-\frac{2\pi(b+b') }{\delta(1+4b')}e_2', \pi e_5+\frac{a}{1+4b'}e_2').$$ Here, a method to find the above basis follows from the argument used in Case (iii) of Case (1) with $j$ even. Then the lattice spanned by the latter two vectors is $\pi^1$-modular with the norm $(4)$ (so that it is isometric to $H(1)$ by Theorem \[210\]) and the norm of the lattice spanned by the first vector is $(a+4(b+b'))$. Let $$\left\{
\begin{array}{l}
\tilde{M}_0=\left(\oplus H(0)\right)\oplus B(e_5-e_1');\\
\tilde{M}_1=\left(\oplus H(1)\right)\oplus \left( Be_3'\oplus Be_4' \right)
\oplus \left( B(e_1'-\frac{2\pi(b+b') }{\delta(1+4b')}e_2')\oplus B(\pi e_5+\frac{a}{1+4b'}e_2') \right).
\end{array}\right.$$ Then $\tilde{M}_0\oplus\tilde{M}_1\oplus(\oplus_{i\geq 2}M_i)$ is another Jordan splitting of $L^{j-1}$, where $\tilde{M}_0$ is $\pi^0$-modular and *of type $I^o$* and $\tilde{M}_1$ is isometric to $\oplus H(1)$.
For $\tilde{M}_0\oplus M_2$, the associated diagonal block of the image of a fixed element of $F_j$ in the special fiber of the smooth integral model associated to $L^{j-1}$ is $$\begin{pmatrix}id&0&0 \\ 0&1+2z_j&0
\\ 0&0&id \end{pmatrix}.$$ Here, the $(2,2)$-block corresponds to $B(e_5-e_1')$.
We now follow the argument used in Step (3) in the even case. Then we can easily check that the Dickson invariant of the image of a fixed element of $F_j$ in the orthogonal group associated to $M_0''$ is $(z_j)_1$.
In conclusion, $(z_j)_1$ is the image of a fixed element of $F_j$ under the map $\psi_j$. Since $(z_j)_1$ can be either $0$ or $1$ by Equation (\[e42\]), $\psi_j|_{F_j}$ is surjective onto $\mathbb{Z}/2\mathbb{Z}$ and thus $\psi_j$ is surjective.\
So far, we have proved that $\psi_j$ is surjective. Let $\mathcal{B}$ be the set of integers $j$ such that $L_j$ is *of type I* and $L_{j+2}, L_{j+3}, L_{j+4}$ (resp. $L_{j-1}
| 1,066 | 2,050 | 1,342 | 1,024 | 3,220 | 0.773925 | github_plus_top10pct_by_avg |
e show that has the same distribution as $x_t$ in , and has the same distribution as $y_t$ in . Thus, for any $t$, the process $(x_t,y_t)$ defined by is a valid coupling for and .
[One step contraction]{} \[ss:step\_gaussian\]
\[l:gaussian\_contraction\] Let $f$ be as defined in Lemma \[l:fproperties\] with parameters $\epsilon$ satisfying $\epsilon \leq \frac{\Rq}{\aq\Rq^2 + 1}$. Let $x_t$ and $y_t$ be as defined in . If we assume that $\E{\lrn{y_0}_2^2} \leq 8\lrp{R^2 + \beta^2/m}$ and $T\leq \min\lrbb{\frac{\epsilon^2}{\beta^2}, \frac{\epsilon}{6 L\sqrt{R^2 + \beta^2/m}}}$, then $$\begin{aligned}
\E{f(x_T - y_T)}
\leq e^{-\lambda T} \E{f(x_0 - y_0)} + 3T (L+\LN^2) \epsilon
\end{aligned}$$
For ease of reference: $m,L,\LR, R$ are from Assumption \[ass:U\_properties\], $\cm, \beta$ are from Assumption \[ass:xi\_properties\], $\aq, \Rq, \LN, \lambda$ are defined in .
For notational convenience, for the rest of this proof, let us define $z_t := x_t - y_t$, $\nabla_t := \nabla U(x_t) - \nabla U(y_t)$, $\Delta_t := \nabla U(y_0) - \nabla U(y_t)$, and $N_t := N(x_t) - N(y_t)$.
It follows from that $$\begin{aligned}
d z_t = - \nabla_t dt + \Delta_t dt + 2 \cm \gamma_t \gamma_t^T dV_t + \lrp{N_t + N(y_t) - N(y_0)} dW_t
\numberthis \label{e:coupled_difference_sde}
\end{aligned}$$
Using Ito’s Lemma, the dynamics of $f(z_t)$ is given by $$\begin{aligned}
&d f(z_t)\\
=& {\lin{\nabla f(z_t), dz_t}}
+ {2\cm^2\tr\lrp{\nabla^2 f(z_t) \lrp{\gamma_t \gamma_t^T}}} dt
+ {\frac{1}{2}\tr\lrp{\nabla^2 f(z_t) \lrp{N_t+ N(y_t) - N(y_0)}^2}} dt\\
=& \underbrace{-\lin{\nabla f(z_t), \nabla_t}}_{\circled{1}} dt + \underbrace{\lin{\nabla f(z_t), \Delta_t}}_{\circled{2}} dt + \underbrace{\lin{\nabla f(z_t), 2 \cm \gamma_t \gamma_t^T dV_t + \lrp{N_t + N(y_t) - N(y_0)} dW_t }}_{\circled{3}}\\
&\quad + \underbrace{2\cm^2\tr\lrp{\nabla^2 f(z_t) \lrp{\gamma_t \gamma_t^T}}}_{\circled{4}} dt
+ \underbrace{\frac{1}{2}\tr\lrp{\nabla^2 f(z_t
| 1,067 | 1,987 | 876 | 1,037 | null | null | github_plus_top10pct_by_avg |
rac{1}{\kappa-1} + \cdots + \frac{1}{\kappa-b+1}\bigg) \Bigg) \,, \label{eq:crC3}\end{aligned}$$ such that, $$\begin{aligned}
\label{eq:cr10}
\frac{\partial^2\P(\theta)}{\partial\theta_i^2}\bigg|_{\theta = {\boldsymbol{0}}} &=& \I_{\{ \Omega^{-1}(i) > p \}}A_1\Big((-A_2)(-A_2) - C_1 \Big) + \I_{\{ \Omega^{-1}(i) = p \}}A_1\Big((1-A_2) - A_2(1-A_2) - C_1 \Big) \nonumber\\
&& + \, \I_{\{ \Omega^{-1}(i) < p \}}A_1 \Big((1-A_3) - A_3 - C_2 + C_3 \Big)\,.\end{aligned}$$ The claim is easy to verify by combining Equations and with .
Proof of Theorem \[thm:topl\_upperbound\] {#sec:proof_topl_upperbound}
-----------------------------------------
The proof is analogous to the proof of Theorem \[thm:main\]. It differs primarily in the lower bound that is achieved for the second-smallest eigenvalue of the Hessian matrix $H(\theta)$.
\[lem:hessian\_topl\] Under the hypotheses of Theorem \[thm:topl\_upperbound\], if $\sum_{j = 1}^n \ell_j \geq (2^{12}e^{6b}/\beta\alpha^2) d\log d$ then with probability at least $ 1- d^{-3}$, $$\begin{aligned}
\label{eq:lambda2_bound_topl}
\lambda_2(-H(\theta)) \;\geq\; \frac{\alpha}{2(1+ e^{2b})^2} \frac{1}{d-1} \sum_{j = 1}^n \ell_j\,. \end{aligned}$$
Using Lemma \[lem:gradient\_topl\] that is derived for the general value of $\lambda_{j,a}$ and $p_{j,a}$, and by substituting $\lambda_{j,a} = 1/(\kappa_j-1)$ and $p_{j,a} = a$ for each $j \in [n]$, we get that with probability at least $1 - 2e^3d^{-3}$, $$\begin{aligned}
\label{eq:gradient_bound_topl}
\|\nabla\Lrb(\theta^*)\|_2 \;\leq\; \sqrt{16\log d\sum_{j=1}^n \ell_j} \;. \end{aligned}$$ Theorem \[thm:topl\_upperbound\] follows from Equations , and .
### Proof of Lemma \[lem:hessian\_topl\]
Define $M^{(j)} \in \cS^d$ as $$\begin{aligned}
\label{eq:M_j_def_topl}
M^{(j)} &=& \frac{1}{\kappa_j -1} \sum_{i<\i \in S_j} \sum_{a = 1}^{\ell_j} \I_{\{(i,\i)\; \in \; G_{j,a}\}} (e_i - e_{\i})(e_i - e_{\i})^\top,\end{aligned}$$ and let $M = \sum_{j=1}^n M^{(j)}$. Similar to the analysis carried out in the proof of Lemma \[lem:hessian\_
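Up to the $1/(\kappa_j-1)$ weights, the matrix $M$ defined above is a Laplacian of the comparison graph, and its second-smallest eigenvalue is what drives the Hessian bound. A minimal numerical sketch (with hypothetical comparison data, not the paper's setup):

```python
import itertools
import numpy as np

def comparison_matrix(d, pair_lists, weights):
    """M = sum_j w_j * sum over compared pairs (i, i') in set j of
    (e_i - e_{i'})(e_i - e_{i'})^T  -- a weighted graph Laplacian."""
    M = np.zeros((d, d))
    for w, pairs in zip(weights, pair_lists):
        for i, ip in pairs:
            v = np.zeros(d)
            v[i], v[ip] = 1.0, -1.0
            M += w * np.outer(v, v)
    return M

d = 5
all_pairs = list(itertools.combinations(range(d), 2))
M = comparison_matrix(d, [all_pairs], [1.0])
lam = np.sort(np.linalg.eigvalsh(M))
# For the complete comparison graph on d items the spectrum is {0, d, ..., d},
# so the second-smallest eigenvalue lambda_2 equals d.
assert abs(lam[0]) < 1e-10 and abs(lam[1] - d) < 1e-10
```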
| 1,068 | 691 | 1,248 | 1,061 | 3,697 | 0.770612 | github_plus_top10pct_by_avg |
of $X$’s is: $$:j^r: \;=\; f^{-r}\, :X^{r}: \;+\; \mathcal{O}(:X^{r+1}:).$$ So to get the order of the coefficient that multiplies an operator $:j^r:$, it is enough to look for the coefficient of the terms multiplying $f^{-r}:X^r:$ in the OPE . These terms have a coefficient of order: $$f^{-2p-2+n+m+p+1+|n+1-m-p|}=\begin{cases}
f^{2(n+1-p)-2} & \text{if } n+1 \geq m+p,\\
f^{2m-2} & \text{if } n+1 \leq m+p.
\end{cases}$$ Thus this coefficient is of order $\mathcal{O}(f^{-2})$. This completes the proof of .
Now let us come back to the evaluation of the OPE between a current and the composite operator in equation : $$\label{modjMC3}
j^a_{L,z}(z)\; i f^2\, {f^b}_{cd}\, :j^d_{L,z} j^c_{L,\bar z}:(w).$$ Let us consider one term of order $f^{2n}$ in the OPE between the operators $j^a_{L,z}$ and $j^d_{L,z}$, that we write schematically $f^{2n}:j^p:$. To complete the computation we have to perform the OPE of this operator with the remaining current $j^c_{L,\bar z}$. According to the previous lemma, this OPE produces terms with coefficients of order $f^{-2}$. So we have proven that terms of order $f^{2n}$ in the current-current OPE produce in the OPE terms of order $f^{2+2n-2} = f^{2n}$. This proves the consistency of the algorithm to compute the current-current OPE order by order in $f^2$.
Current-primary OPE {#current-primary-ope .unnumbered}
-------------------
As explained in section \[bootstrap\] the same logic allows us to perturbatively compute the operator product expansion between a current and a primary operator. The Maurer-Cartan equation can be combined with current conservation to give the constraint: $$\label{phiModMC}
\phi(z)\left(\bar{\partial} j^b_{L,z}(w) + i f^2\, {f^b}_{cd}\, :j^d_{L,z} j^c_{L,\bar z}:(w)\right) = 0.$$ This allows the computation of the $j^a_{L,z}.\phi$ OPE order by order in $f^2$. The consistency of this algorithm is ensured by a slight generalization of lemma , namely: $$\label{lemmaf-2bis}
j(z)\, :j^p:(w) = \mathcal{O}(f^{-2}).$$ The proof is similar to the proof of formula .
Conformal current algebra: precisions {#AppCurrents}
============================
| 1,069 | 290 | 1,563 | 1,191 | 2,644 | 0.778195 | github_plus_top10pct_by_avg |
a i\delta} \lrp{R^2 + \beta^2/m}} + \frac{16}{\lambda} \exp\lrp{2\frac{7\aq\Rq^2}{3}}\lrp{L + \LN^2} \epsilon\\
=& 4\exp\lrp{\frac{7\aq\Rq^2}{3}}\lrp{e^{-\lambda i\delta} \lrp{R^2 + \beta^2/m}} + \hat{\epsilon}
\end{aligned}$$ By our assumption that $i\geq \frac{1}{\delta} \cdot 3\aq\Rq^2 \log \frac{R^2 + \beta^2/m}{\hat{\epsilon}}$, the first term is also bounded by $\hat{\epsilon}$, and this proves our second claim.
[Simulating the SDE]{} \[ss:simlutating\_discrete\_sde\] One can verify that the SDE in can be simulated (at discrete time intervals) as follows: $$\begin{aligned}
y_{(k+1) \delta} = y_{k\delta} - \delta \nabla U(y_{k\delta}) + \sqrt{\delta} M(y_{k\delta}) \theta_k
\end{aligned}$$ where $\theta_k \sim \N(0,I)$. This, however, requires access to $M(y_{k\delta})$, which may be difficult to compute.
If for any $y$, one is able to draw samples from some distribution $p_y$ such that
1. $\Ep{\xi \sim p_y}{\xi}=0$
2. $\Ep{\xi\sim p_y}{\xi \xi^T}=M(y)$
3. $\lrn{\xi}_2 \leq \beta$ almost surely, for some $\beta$.
then one might sample noise that is $\delta$-close to $M(y_{k\delta}) \theta_k$ through Theorem \[t:zhai\].
Specifically, if one draws $n$ samples $\xi_1...\xi_n\overset{iid}{\sim} p_y$, and let $S_n := \frac{1}{\sqrt{n}}\sum_{i=1}^n \xi_i$, Theorem \[t:zhai\] guarantees that\
$W_2 \lrp{S_n, M(y) \theta} \leq \frac{6\sqrt{d}\beta\sqrt{\log n}}{\sqrt{n}}$. We remark that the proof of Theorem \[t:main\_gaussian\] can be modified to accommodate this sampling error. The number of samples needed to achieve $\epsilon$ accuracy will be on the order of $n \approxeq O(\delta \epsilon)^{-2} = O(\epsilon^{-6})$.
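The discretized update above is straightforward to implement. The sketch below uses an illustrative quadratic $U$ and a constant $M$ (neither is from the paper); in practice $M(y)$ would be the problem's diffusion matrix, or the sample-mean surrogate $S_n$ described in the preceding remark.

```python
import numpy as np

def step(y, grad_U, M, delta, rng):
    """One step of y_{(k+1)delta} = y_{k delta} - delta * grad U(y_{k delta})
    + sqrt(delta) * M(y_{k delta}) theta_k, with theta_k ~ N(0, I)."""
    theta = rng.standard_normal(y.size)
    return y - delta * grad_U(y) + np.sqrt(delta) * M(y) @ theta

# Illustrative choices: U(y) = ||y||^2 / 2 and M = sqrt(2) * I, for which the
# chain targets (approximately, up to discretization error) a standard Gaussian.
rng = np.random.default_rng(0)
d_dim, delta = 2, 0.01
grad_U = lambda y: y
M = lambda y: np.sqrt(2.0) * np.eye(d_dim)

y = np.zeros(d_dim)
samples = []
for k in range(20_000):
    y = step(y, grad_U, M, delta, rng)
    samples.append(y.copy())
samples = np.array(samples[2_000:])      # discard burn-in
assert np.all(np.abs(samples.mean(axis=0)) < 0.3)
```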
[Proofs for Convergence under Non-Gaussian Noise (Theorem \[t:main\_nongaussian\])]{} \[s:nongaussianproof\]
[Proof Overview]{} The main proof of Theorem \[t:main\_nongaussian\] is contained in Appendix \[ss:proof:t:main\_nongaussian\].
Here, we outline the steps of our proof:
1. In Appendix \[ss:4\_coupling\], we construct a coupling between $\eqref{e:ex
| 1,070 | 648 | 1,014 | 1,032 | null | null | github_plus_top10pct_by_avg |
smooth.
Proof. We may assume that all occurring schemes are affine. Thus we have $I_i\subset R_i$ and $S_i\subset R_i/I_i$. Furthermore, $R_1$ is flat over $R_2$, $I_1=I_2R_1$ and $S_1$ is flat over $S_2$. We may also assume that $R_2$ is local. The key point is the isomorphism $$\bigl( R_1/I_1\bigr)\cong
\bigl( R_2/I_2\bigr)\otimes_{R_2}R_1\cong \bigl( R_2/I_2\bigr)\otimes_{S_2}S_1.
\eqno{(\ref{glue.etloc.lem}.3)}$$ Note that this isomorphism is not naturally given, see (\[twist-glue.rem\]).
We check the local criterion of flatness (cf. [@mats-cr Thm.22.3]). The first condition we need is that $q_1^{-1}(S_1)/I_1\cong S_1$ be flat over $q_2^{-1}(S_2)/I_2\cong S_2$. This holds by assumption. Second, we need that the maps $$\bigl( I_2^n/I_2^{n+1}\bigr)\otimes_{S_2}S_1\to
I_2^nR_1/I_2^{n+1}R_1$$ be isomorphisms. Since $R_1$ is flat over $R_2$, the right hand side is isomorphic to $$\bigl(I_2^n/I_2^{n+1}\bigr)\otimes_{R_2/I_2} \bigl(R_1/I_1\bigr).$$ Using (\[glue.etloc.lem\].3), we get that $$\bigl(I_2^n/I_2^{n+1}\bigr)\otimes_{R_2/I_2} \bigl(R_1/I_1\bigr)\cong
\bigl(I_2^n/I_2^{n+1}\bigr)\otimes_{R_2/I_2} \bigl(R_2/I_2\bigr)
\otimes_{S_2}S_1\cong
\bigl(I_2^n/I_2^{n+1}\bigr)\otimes_{S_2}S_1.$$ This settles flatness. In order to prove the smooth case, we just need to check that the fibers of $Y_1 \to Y_2$ are smooth. Outside $V_1\to V_2$ we have the same fibers as before and $V_1\to V_2$ is smooth by assumption.
\[twist-glue.rem\] Note that there is some subtlety in (\[glue.etloc.lem\]). Consider the simple case when $X_2$ is a smooth curve over a field $k$, $Z_2=\{p,q\}$ two $k$-points and $V_2={\operatorname{Spec}}k$. Then $Y_2$ is a nodal curve where $p$ and $q$ are identified.
Now let $X_1=X_2\times\{0,1\}$ be two disjoint copies of $X_2$. Then $Z_1$ consists of the 4 points $p_0,q_0,p_1,q_1$ and $V_1$ is 2 copies of ${\operatorname{Spec}}k$. There are two distinct ways to arrange $g_1$. Namely,
1. either $g'_1(p_0)=g'_1(q_0)$ and $g'_1(p_1)=g'_1(q_1)$ and then $Y'_1$ consists of 2 disjoint nodal curves,
2. or $g''_1(p_0
| 1,071 | 1,886 | 1,416 | 1,025 | 1,493 | 0.788857 | github_plus_top10pct_by_avg |
n)^2}{(1-q^{n-1}t^{2n}T^n)(1-t^{2n+2}q^{n+1}
T^n)}
\label{GS}$$
with $H_c\left(X^{[n]};q,t\right):=\sum_{i,k}h_c^{i,i;k}(X^{[n]})q^it^k$.
Define $\H^{[n]}(z,w)$ such that $$H_c\left(X^{[n]};q,t\right)
=(t\sqrt{q})^{2n}\H^{[n]}\left(-t\sqrt{q},\frac{1}{\sqrt{q}}\right).$$ Then Formula (\[GS\]) reads $$\sum_{n\geq 0}\H^{[n]}(z,w)T^n=\prod_{n\geq
1}\frac{(1-zwT^n)^2}{(1-z^2T^n)(1-w^2T^n)},
\label{GS1}$$ with the convention that $\H^{[0]}(z,w)=1$. Hence we may re-write Formula (\[GS\]) as $$\Log\left(\sum_{n\geq
0}\H^{[n]}(z,w)T^n\right)=\sum_{n\geq 1}(z-w)^2T^n.
\label{GS2}$$
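Formula (\[GS1\]) is easy to check numerically to low order in $T$: the $T^1$ coefficient of the product is $z^2+w^2-2zw=(z-w)^2$, consistent with $\H^{[1]}(z,w)=(z-w)^2$ as predicted by Formula (\[GS2\]). A throwaway sketch with numeric $z,w$ and truncated power series in $T$:

```python
import numpy as np

def product_series(z, w, K):
    """T-expansion (to order K) of
    prod_{n>=1} (1 - z w T^n)^2 / ((1 - z^2 T^n)(1 - w^2 T^n))."""
    c = np.zeros(K + 1)
    c[0] = 1.0

    def mul(a, b):                      # truncated polynomial product in T
        out = np.zeros(K + 1)
        for i, ai in enumerate(a):
            if ai != 0.0:
                out[i:] += ai * b[:K + 1 - i]
        return out

    for n in range(1, K + 1):
        num = np.zeros(K + 1)           # (1 - z w T^n)^2
        num[0], num[n] = 1.0, -2.0 * z * w
        if 2 * n <= K:
            num[2 * n] = (z * w) ** 2
        c = mul(c, num)
        for x in (z * z, w * w):        # 1 / (1 - x T^n) as a geometric series
            geo = np.zeros(K + 1)
            for m in range(K // n + 1):
                geo[m * n] = x ** m
            c = mul(c, geo)
    return c

z, w = 0.3, 0.7
c = product_series(z, w, 5)
assert abs(c[0] - 1.0) < 1e-12          # H^[0] = 1
assert abs(c[1] - (z - w) ** 2) < 1e-12 # H^[1] = (z - w)^2
```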
Specializing Formula (\[GS2\]) with $(z,w)\mapsto (0,\sqrt{q})$ we see from Formula (\[Ynbis\]) that $$P_c(Y^{[n]};q)=q^n\cdot
\H^{[n]}(0,\sqrt{q}).\label{PH=P}$$We thus have the following result.
We have
$$PH_c(X^{[n]};T)=P_c(Y^{[n]};T),$$ where $PH_c(X^{[n]};T):=\sum_ih_c^{i,i;2i}(X^{[n]})T^i$ is the Poincaré polynomial of the pure part of the cohomology of $X^{[n]}$.
A conjecture
------------
The aim of this section is to discuss the following conjecture.
We have $$\H_{(n-1,1)}(z,w)=\H^{[n]}(z,w).
\label{CV=HS}$$ \[conjCV=HS\]
Modulo the conjectural formula (\[mainconj\]), Formula (\[CV=HS\]) says that the two mixed Hodge polynomials $H_c(X^{[n]};q,t)$ and $H_c(\M_{(n-1,1)};q,t)$ agree. This would be a multiplicative analogue of Theorem \[adpure\]. Unfortunately the proof of Theorem \[adpure\] does not work in the multiplicative case. This is because the natural family $g:\mathfrak{X}\to \C$ with $X^{[n]}=g^{-1}(0)$ and $\M_{(n-1,1)}=g^{-1}(\lambda)$ for $0\neq \lambda \in \C$ does not support a $\C^\times$-action with a projective fixed point set and so [@hausel-letellier-villegas Appendix B] does not apply.
One can still attempt to prove that the restriction map $H^*(\mathfrak{X};\Q)\to H^*(g^{-1}(\lambda);\Q)$ is an isomorphism for every fibre over $\lambda\in \C$ by using a family version of the non-Abelian Hodge theory as developed in the tamely ramified case in [@simpson]. In other words one would construct a fa
| 1,072 | 550 | 1,197 | 1,096 | null | null | github_plus_top10pct_by_avg |
ined in the second line of [(\[eq:diagbd-reorg\])]{}; let $Y_{m,l}$ be the supremum of what remains in the second line over $b_m,v_m,y_{l+1},v_{l+1}$. Then we can perform the sum of the first line over $b_m,v_m$ and the sum of the third line over $y_{l+1},v_{l+1}$ independently; the former is $O(\theta_0)^{m-1}$ and the latter is $O(\theta_0)^{j-1-l}$, due to [(\[eq:block-sumbd\])]{} and [(\[eq:tildeQ”-bd\])]{}, respectively. Finally, we can bound $Y_{m,l}$ using the Schwarz inequality by $O(\theta_0)^{l-m}$, where $l-m$ is the number of nonzero segments in the second line of [(\[eq:diagbd-reorg\])]{} (i.e., $\sum_{b_i}\tau_{b_i}(\delta_{{\overline{b}}_i,y_i}+\tilde
G_\Lambda({\overline{b}}_i,y_i))$ for some $y_m,\dots,y_{l+1}$) minus 2 (= the maximum number of those along the uppermost and lowermost paths that are extracted to obtain the aforementioned $|x|$-decaying term). For example, one of the leading contributions to $Y_{m,m+4}$ is bounded, by using translation invariance and the Schwarz inequality, as $$\begin{aligned}
\sup_{u,v,y}\raisebox{-1pc}{\includegraphics[scale=0.14]{Yml1}}~\leq
O(\theta_0)~\sup_{u,z}\raisebox{-1pc}{\includegraphics[scale=0.14]
{Yml2}}\\
\leq O(\theta_0)^{3/2}\left(\raisebox{-1.9pc}{\includegraphics[scale
=0.14]{Yml3}}\right)^{1/2}\leq O(\theta_0)^2~\sup_{s'}\raisebox{-1pc}
{\includegraphics[scale=0.14]{Yml4}}~&\leq O(\theta_0)^4.{\nonumber}\end{aligned}$$
The other cases can be estimated similarly [@sNN]. This completes the proof of [(\[eq:pi-kbd\])]{}.
Acknowledgements {#acknowledgements .unnumbered}
================
First of all, I am grateful to Masao Ohno for having drawn my attention to the subject of this paper. I would like to thank Takashi Hara for stimulating discussions and his hospitality during my visit to Kyushu University in December 2004 and April 2005. I would also like to thank Aernout van Enter for useful discussions on reflection positivity. Special thanks go to Mark Holmes and John Imbrie for continual encouragement and valuable comments to the
| 1,073 | 179 | 1,711 | 1,272 | 514 | 0.807383 | github_plus_top10pct_by_avg |
$m=h+(i-j)\geq h$ so that $a^mb^0\in S$. Hence $0\in I$.
Since $F_D\subseteq D\cap L_{min(I)}$, the following corollary is clear.
\[flo\]Let $S$ be a lower subsemigroup of $\mathcal{B}$. If $S$ is a left I-order in $\mathcal{B}$, then $F_D=\{1\}$ or $F_D=\emptyset$.
Suppose that a lower subsemigroup $S$ is a left I-order in $\mathcal{B}$. From Lemma \[zerolowecase\], we have that $d=1$ and $0\in I$. We claim that $I=\mathbb{N}^0$. By Corollary \[flo\], $F_D=\{1\}$ or $F_D=\emptyset$, so that as $S$ intersects every $\mathcal{L}$-class of $\mathcal{B}$, by Lemma \[identity\], we have that $I=\mathbb{N}^0$. We have one half of the following proposition.
\[loweriorder\] A lower subsemigroup $S$ is a left I-order in $\mathcal{B}$ if and only if $d=1$ and $I=\mathbb{N}^0$.
Suppose that $d=1$ and $I=\mathbb{N}^0$. Then $$\widehat{\Lambda}_{i,m_{i},1}=\{a^jb^i:j=t+i, j\geq m_i\}=\{a^{t+i}b^i:t+i\geq m_i\}.$$ For any $a^hb^k \in \mathcal{B}$ we have $$a^hb^k=(a^{h+k+t}b^h)^{-1}(a^{h+k+t}b^k)$$ where $t=\max\{m_h,m_k\}$ for $i\in \mathbb{N}^0$. It is clear that $a^{h+k+t}b^h,a^{h+k+t}b^k\in S$.
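The factorization in the proof is easy to verify mechanically. A minimal sketch, encoding $a^mb^n$ as the pair $(m,n)$ with the usual bicyclic multiplication (the value of $t$ below is an illustrative stand-in for $\max\{m_h,m_k\}$):

```python
def mult(x, y):
    """Bicyclic product: (a^m b^n)(a^p b^q) = a^{m+p-s} b^{n+q-s}, s = min(n, p)."""
    (m, n), (p, q) = x, y
    s = min(n, p)
    return (m + p - s, n + q - s)

def inv(x):
    """Inverse in the inverse monoid B: (a^m b^n)^{-1} = a^n b^m."""
    m, n = x
    return (n, m)

# a^h b^k = (a^{h+k+t} b^h)^{-1} (a^{h+k+t} b^k) for every h, k
for h in range(6):
    for k in range(6):
        t = 4  # illustrative stand-in for max(m_h, m_k)
        assert mult(inv((h + k + t, h)), (h + k + t, k)) == (h, k)
```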
The following corollary is clear from the proof of Proposition \[loweriorder\].
\[stralow\] Let $S$ be a lower subsemigroup of $\mathcal{B}$. If $S$ is a left I-order in $\mathcal{B}$, then it is straight.
Two-sided subsemigroups {#leftitwo-sided}
=======================
In this section we give necessary and sufficient conditions for the two-sided subsemigroups of $\mathcal{B}$ to be left I-orders in $\mathcal{B}$. [The two-sided subsemigroups of $\mathcal{B}$ have the forms (2).($i$) and (2).($ii$) in Proposition \[subbicyclic\]. Throughout this section we shall assume that a two-sided subsemigroup $S$ of $\mathcal{B}$ is proper, in the sense $S\neq \mathcal{B}$.]{}
We divide this section into two parts. We study the first form in the first part, and the second form in the second part.
We begin with the two-sided subsemigroups which have the form (2).($i$) in Proposition \[subbicyclic\].
Let $a^mb^n \in F \subse
| 1,074 | 1,397 | 1,243 | 1,172 | null | null | github_plus_top10pct_by_avg |
ypes.
Type 1. The points $\kappa(\bar x_1)$, $\kappa(\bar x_2)$ in $\RP^s$ are $\varepsilon_2$-close.
Type 2. The distance between the points $\kappa(\bar x_1)$ and $\kappa(\bar x_2)$ in $\RP^s$ is greater than the caliber $\varepsilon_2$ of the regular approximation. Points of this type belong to the regular neighborhood $U_{\Delta}$ (of radius $\varepsilon_1$).
Let us classify components of the triple self-intersection manifold $\Delta_3(f)$ of the immersion $f$. The a priori classification of components is the following.
A point $x \in
\Delta_3(f)$ has inverse images $\bar x_1, \bar x_2, \bar x_3$ in $M^{n-k}$.
Type 1. The images $\kappa(\bar x_1), \kappa(\bar x_2),
\kappa(\bar x_3)$ are $\varepsilon_2$-close in $\RP^s$.
Type 2. The images $\kappa(\bar x_1), \kappa(\bar x_2)$ are $\varepsilon_2$-close in $\RP^s$ and the distance between the images $\kappa(\bar x_3)$ and $\kappa(\bar x_1)$ (or $\kappa(\bar
x_2)$) are greater than the caliber $\varepsilon_2$ of the approximation.
Type 3. The pairwise distances between the points $\kappa(\bar x_1), \kappa(\bar x_2), \kappa(\bar x_3)$ are greater than the caliber $\varepsilon_2$ of the approximation.
By a general position argument the component of type 3 does not intersect $d(\RP^s)$. Therefore the immersion $f$ can be deformed by an $\varepsilon_2$-small regular homotopy inside the $\varepsilon_3$-regular neighborhood of the regular part of $d(\RP^s)$ such that after this regular homotopy $\Delta_3(f)$ is contained in the complement of $U^{reg}_{\Delta}$. The codimension of the submanifold $\bar \Delta_2(d) \subset \RP^s$ is equal to $n-3k+1=q+k+1$ and greater than $\dim(\Delta_3(f)) = n-3k$. By analogous arguments the component of triple points of type 1 is outside $U^{reg}_{\Delta}$.
Let us classify components of the quadruple self-intersection manifold $\Delta_4(f)$ of the immersion $f$. A point $x \in \Delta_4(f)$ has inverse images $\bar x_1, \bar x_2, \bar x_3, \bar x_4$ in $M^{n-k}$. The a priori classification is the following.
Type
| 1,075 | 1,541 | 1,107 | 1,047 | 3,901 | 0.769416 | github_plus_top10pct_by_avg |
nodes in a moded SLD-derivation such that all integer variables in $LHS$ are in $A_i^1$ and let $\underline{I_1},\ldots,\underline{I_n}$ be all integer variables of $A_i^1$.
If there exist subterms of $A_j^1$, $t_1,\ldots,t_n$, such that $\forall L: subterm(L,A_i^1)=\underline{I_p} \Longrightarrow
subterm(L,A_j^1)=t_p, 1 \leq p \leq n$, then *$replace(LHS,N_i,N_j)$* is obtained by applying $\lbrace \underline{I_1} \setminus t_1, \ldots, \underline{I_n} \setminus t_n\rbrace$ to all constraints in $LHS$. $\hfill \square$
In Example \[example:apply\_cons\], we generated the precondition of the implication, $\underline{M1} > \underline{N}$. To obtain the consequence, $replace(\underline{M1} > \underline{N},N_5,N_9)$ is applied, yielding $\underline{M2} > \underline{N}$. Then, the integer variable of $N_9$, $\underline{M_2}$, is expressed in terms of the integer variables of $N_5$ using $apply\_cons(\underline{M2} > \underline{N},N_5,N_9)=\underline{M_1}+1 > \underline{N}$.
Adding the domains to the pre- and postcondition yields the desired implication: $\exists Dom_N, Dom_{M1} \subset {{\mathbb{Z}}}, \forall N,M1 \in {{\mathbb{Z}}}: M1 > N,~N \in Dom_N,~M1 \in Dom_{M1} \Longrightarrow$\
$~~~~~~~~~M1+1 > N,~N \in Dom_N,~M1+1 \in Dom_{M1}$ $\hfill \square$
Adding these constraints to the class of queries detected by Theorem \[th:analysis1\] yields a class of non-terminating queries.
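The $replace$ operation is just a simultaneous substitution applied to every constraint. A minimal sketch (with a hypothetical encoding of terms as nested tuples, not the paper's implementation), mirroring the worked example $replace(\underline{M1}>\underline{N},N_5,N_9)=\underline{M2}>\underline{N}$:

```python
def replace(constraints, subst):
    """Apply the simultaneous substitution {I_p -> t_p} to every constraint.
    Terms are strings (variables/constants) or nested tuples (compound terms)."""
    def walk(t):
        if isinstance(t, tuple):
            return tuple(walk(s) for s in t)
        return subst.get(t, t)
    return [walk(c) for c in constraints]

# Worked example from the text, with the (assumed) substitution {M1 -> M2}
lhs = [(">", "M1", "N")]
assert replace(lhs, {"M1": "M2"}) == [(">", "M2", "N")]
# apply_cons then rewrites M2 in terms of N_5's variables: M2 -> M1 + 1
assert replace([(">", "M2", "N")], {"M2": ("+", "M1", 1)}) \
    == [(">", ("+", "M1", 1), "N")]
```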
Proving that the constraints on integers are solvable
-----------------------------------------------------
The previous subsection introduced constraints, implying that all integer conditions in a considered derivation succeed. In this subsection, we introduce a technique to check if these constraints have solutions, using a constraint-based approach. Symbolic coefficients represent values for the integers in the query and domains in the implication, for which the considered path is a loop. After these coefficients are introduced, the implication is transformed into a set of equivalent implications over natural numbers. These implicat
| 1,076 | 1,541 | 1,424 | 1,089 | 906 | 0.797953 | github_plus_top10pct_by_avg |
(\hat{\psi}_{{\widehat{S}}}).$$
This formulation of $\beta_{{\widehat{S}}}$ and $\hat{\beta}_{{\widehat{S}}}$ is convenient because, by expanding each coordinate of $g(\hat{\psi})$ separately through a first-order Taylor series expansion around $\psi$, it allows us to re-write $\hat{\beta}_{{\widehat{S}}} -
\beta_{{\widehat{S}}}$ as a linear transformation of $\hat{\psi} - \psi$ given by the Jacobian of $g$ at $\psi$, plus a stochastic remainder term. Since $\hat{\psi} - \psi$ is an average, such an approximation is simpler to analyze than the original quantity $\hat{\beta}_{{\widehat{S}}} -
\beta_{{\widehat{S}}}$ and, provided that the remainder term of the Taylor expansion is small, also sufficiently accurate. This program is carried out in detail and in greater generality in Section \[section::berry\], where we derive finite sample Berry-Esseen bounds for non-linear statistics of sums of independent random vectors. The results in this section are direct, albeit non-trivial, applications of those bounds.
### Concentration of $\hat{\beta}_{{\widehat{S}}}$ {#concentration-of-hatbeta_widehats .unnumbered}
We begin by deriving high probability concentration bounds for $\hat{\beta}_{{\widehat{S}}}$ around $\beta_{{\widehat{S}}}$. When there is neither model selection nor sample splitting – so that ${\widehat{S}}$ is deterministic and equal to $\{1,\ldots,d\}$ – our results yield consistency rates for the ordinary least squares estimator of the projection parameters, under increasing dimensions and a misspecified model. An analogous result was established in [@hsu14], where the approximation error $\mu(x) - x^\top \beta$ is accounted for explicitly.
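The "no selection" special case is easy to simulate: under a misspecified mean function, ordinary least squares is consistent for the projection parameter $\Sigma^{-1}\mathbb{E}[X\mu(X)]$ rather than for any "true" linear coefficient. A sketch with an illustrative nonlinear $\mu$ (the closed form below uses Stein's identity $\mathbb{E}[X_1\sin X_1]=e^{-1/2}$ for standard normal $X_1$; none of these choices are from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200_000, 3
X = rng.standard_normal((n, d))
mu = np.sin(X[:, 0]) + X[:, 1]                  # nonlinear, misspecified mean
Y = mu + rng.standard_normal(n)

# OLS targets the projection parameter Sigma^{-1} E[X mu(X)]; here Sigma = I,
# and Stein's identity gives E[X_1 sin(X_1)] = exp(-1/2).
beta_hat = np.linalg.lstsq(X, Y, rcond=None)[0]
beta_proj = np.array([np.exp(-0.5), 1.0, 0.0])
assert np.max(np.abs(beta_hat - beta_proj)) < 0.02
```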
\[thm:beta.accuracy2\] Let $$B_n =
\frac{ k}{u^2}
\sqrt{ U \frac{ \log k +
\log n}{n}}$$ and assume that $\max\{ B_n, u B_n \} \rightarrow 0$ as $n \rightarrow \infty$. Then, there exists a constant $C>0$, dependent on $A$ and $\eta$ only, such that, for all $n$ large enough, $$\label{eq::beta2}
\sup_{w_n \in \mathcal{W}_n} \sup_{P \in \mathcal{P}_n^{\mathr
| 1,077 | 963 | 1,433 | 1,196 | 2,333 | 0.780804 | github_plus_top10pct_by_avg |
$I^e$}},$$ $$(a_i, x_i^j, b_i, c_i, d_i, e_i, f_i)_{\textit{$L_i$ free of type $I$ with $i$ odd}}, (a_i, x_i^j, f_{i,i}^{\ast})_{\textit{$L_i$ bound of type $I$ with $i$ odd}})$$ of $\underline{H}(R)$ is denoted by $(f_{i,j}, a_i \cdots f_i)$.
\[r33\]
1. Recall that $\delta$ is a unit element in $A$ such that $\delta\equiv 1 \mathrm{~mod~}2$ and $\pi=\sqrt{2\delta}$. Note that the given hermitian form $h$ is an element of $\underline{H}(A)$. We represent the given hermitian form $h$ by a hermitian matrix $\begin{pmatrix} \pi^{i}\cdot h_i\end{pmatrix}$ whose $(i,i)$-block is $\pi^i\cdot h_i$ for all $i$, and all of whose remaining blocks are $0$. Then:
1. If $i$ is even and $L_i$ is *of type* $\textit{I}^o$, then $\pi^i\cdot h_i$ has the following form (with $\gamma_i\in A$): $$\xi^{i/2}\begin{pmatrix} \begin{pmatrix} 0&1\\1&0\end{pmatrix}& & & \\ &\ddots & & \\ & &\begin{pmatrix} 0&1\\1&0\end{pmatrix}& \\ & & & 1+2\gamma_i \end{pmatrix}.$$
2. If $i$ is even and $L_i$ is *of type* $\textit{I}^e$, then $\pi^i\cdot h_i$ has the following form (with $\gamma_i\in A$): $$\xi^{i/2}\begin{pmatrix} \begin{pmatrix} 0&1\\1&0\end{pmatrix}& & & \\ &\ddots & & \\ & &\begin{pmatrix} 0&1\\1&0\end{pmatrix}& \\ & & & \begin{pmatrix} 1&1\\1&2\gamma_i\end{pmatrix} \end{pmatrix}.$$
3. If $i$ is even and $L_i$ is *of type* $\textit{II}$, then $\pi^i\cdot h_i$ has the following form (with $\gamma_i\in A$): $$\xi^{i/2}\begin{pmatrix} \begin{pmatrix} 0&1\\1&0\end{pmatrix}& & & \\ &\ddots & & \\ & &\begin{pmatrix} 0&1\\1&0\end{pmatrix}& \\ & & & \begin{pmatrix} 2\delta&1\\1&2\gamma_i\end{pmatrix} \end{pmatrix}.$$
4. If $i$ is odd and $L_i$ is *free of type I*, then $\pi^i\cdot h_i$ has the following form (with $\gamma_i\in A$): $$\xi^{(i-1)/2}\begin{pmatrix} \begin{pmatrix} 0&\pi\\ \sigma(\pi)&0\end{pmatrix}& & & \\ &\ddots & & \\ & &\begin{pmatrix} 0&\pi\\ \sigma(\pi)&0\end{pmatrix}& \\ & & & \begin{pmatrix} 4\gamma_i&\pi\\ \sigma(\pi)&2\delta\end{pmatrix} \end{pmatrix}.$$
5. If $i$ is odd and $L_i$ is *b
| 1,078 | 3,763 | 1,217 | 793 | null | null | github_plus_top10pct_by_avg |
ght) + \left( {\theta + u_{2i}} \right)\textit{treat}_{\mathit{ij}} + e_{\mathit{ij}}} \\
& {\mspace{140mu} u_{1i} \sim N\left( {0,\tau_{\beta}^{2}} \right)} \\
& {\mspace{140mu} u_{2i} \sim N\left( {0,\tau^{2}} \right)} \\
& {\mspace{144mu} e_{\mathit{ij}} \sim N\left( {0,\sigma_{i}^{2}} \right)} \\
\end{matrix}$$ Parameters are as in Equation [(1)](#sim7930-disp-0001){ref-type="disp-formula"}, except that within‐trial clustering has now been accounted for by a random (instead of stratified) intercept term, with $\ \tau_{\beta}^{2}\ $ denoting the between trial variance in the intercept about the mean intercept. Equation [(2)](#sim7930-disp-0002){ref-type="disp-formula"} assumes independence of the two random effects (i.e., a covariance of zero), but their correlation could be accounted for assuming a bivariate random effect distribution; indeed, this might be of special interest when evaluating the relationship across trials of mean baseline in the control group and true treatment effect.[13](#sim7930-bib-0013){ref-type="ref"}
Compared to Equation [(1)](#sim7930-disp-0001){ref-type="disp-formula"}, the number of parameters to be estimated has been reduced, with only *β* and $\tau_{\beta}^{2}$ for the intercept, instead of *K* separate terms. Therefore, fewer estimation problems might be anticipated than in Equation [(1)](#sim7930-disp-0001){ref-type="disp-formula"}. On the downside, Equation [(2)](#sim7930-disp-0002){ref-type="disp-formula"} makes a strong and potentially unnecessary assumption that control group means are drawn from a normal distribution with a common mean and variance. Furthermore, the estimation of an additional random effect term might increase computational intensity.
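The data-generating process of Equation (2) can be sketched as follows (all numerical values are illustrative; this simulates the model only and is not the ML/REML fit itself). Even a crude two-stage estimate, the unweighted mean of per-trial treatment-effect estimates, recovers $\theta$:

```python
import numpy as np

rng = np.random.default_rng(42)
K, n_i = 25, 200                          # trials and patients per trial
beta, theta = 10.0, -2.0                  # mean intercept, mean treatment effect
tau_b2, tau2, sigma2 = 4.0, 0.25, 9.0     # between- and within-trial variances

effects = []
for i in range(K):
    u1 = rng.normal(0.0, np.sqrt(tau_b2))  # random intercept deviation
    u2 = rng.normal(0.0, np.sqrt(tau2))    # random treatment-effect deviation
    treat = np.repeat([0, 1], n_i // 2)
    y = (beta + u1) + (theta + u2) * treat \
        + rng.normal(0.0, np.sqrt(sigma2), n_i)
    effects.append(y[treat == 1].mean() - y[treat == 0].mean())

theta_hat = np.mean(effects)               # naive two-stage pooled estimate
assert abs(theta_hat - theta) < 0.5
```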
2.3. Options for estimation and CI derivation {#sim7930-sec-0005}
---------------------------------------------
The parameters in models (1) and (2) are typically estimated using either an ML or a REML approach. ML is known to produce downwardly biased estimates of between trial variance when there are few trials ([14](#sim7930-bib-
| 1,079 | 493 | 2,271 | 1,033 | 1,227 | 0.792386 | github_plus_top10pct_by_avg |
ptyset\})$ and ${\mathsf{Stab}_R}(\{\emptyset\})$ are the collection $\mathsf{POINT}$ of pointed derivators, while ${\mathsf{Abs}_L}(\mathsf{POINT})$ contains all cosieves and ${\mathsf{Abs}_R}(\mathsf{POINT})$ contains all sieves. In particular, $\mathsf{POINT}$ is a fixed point of both Galois correspondences. Similarly, \[thm:stable-lim-III\] can be restated by saying that ${\mathsf{Stab}_L}(\mathsf{FIN})$ and ${\mathsf{Stab}_R}(\mathsf{FIN})$, for $\mathsf{FIN}$ the class of homotopy finite categories, are both the collection $\mathsf{STABLE}$ of stable derivators; while ${\mathsf{Abs}_L}(\mathsf{STABLE})$ contains all left homotopy finite functors and ${\mathsf{Abs}_R}(\mathsf{STABLE})$ contains all right homotopy finite functors.
The cone functor $C\colon{\sD}^{[1]} \to {\sD}$ is not a colimit (though it is a weighted colimit, in the sense to be defined in \[con:wcolim\], for a suitable enrichment), so we cannot consider “${\mathsf{Stab}_L}(\{C\})$”. However, if the pushout functor ${\sD}^{\ulcorner} \to {\sD}$ is continuous, then so is $C$, since $C$ is the composite of a pushout, a right Kan extension, and an evaluation morphism. Thus, we can say that $\mathsf{STABLE} = {\mathsf{Stab}_L}(\{\emptyset,\ulcorner\})$ and similarly $\mathsf{STABLE} = {\mathsf{Stab}_R}(\{\emptyset,\lrcorner\})$.
Of course, ${\mathsf{Stab}_L}(\emptyset)$ and ${\mathsf{Stab}_R}(\emptyset)$ are the collection $\mathsf{DERIV}$ of all derivators, while ${\mathsf{Abs}_L}(\emptyset)$ and ${\mathsf{Abs}_R}(\emptyset)$ are the class $\mathsf{FUNC}$ of all functors. However, ${\mathsf{Abs}_L}(\mathsf{DERIV})$ and ${\mathsf{Abs}_R}(\mathsf{DERIV})$ are nonempty; for instance, ${\mathsf{Abs}_L}(\mathsf{DERIV})$ contains all left adjoint functors, ${\mathsf{Abs}_R}(\mathsf{DERIV})$ all right adjoint functors, and they both include the splitting of idempotents. On the other hand, ${\mathsf{Stab}_L}(\mathsf{FUNC})$ and ${\mathsf{Stab}_R}(\mathsf{FUNC})$ include only the trivial derivator, by [@ps:linearity Remark 9.4].
Let $\Phi=\maths
rms can be put together as before to produce a map $\chi_i$, so that now $\Upsilon_{i+1}:=\chi_2+\dots+\chi_i$ only maps into $M_{i+1}\oplus M_{i+2}\oplus\dots $. We therefore obtain the chain map $\chi$, and with this, we define the homotopy $\mathcal Comm$-inner product as $f:=\chi(\mu)\in Mod(F_{\mathcal Lie, C[1]}C^*[1],F_{\mathcal Lie, C[1]}C[1]) $, where $\mu\in C$ denotes the fundamental cycle of the space $X$. Since $\mu$ is $d_1$-closed, it follows that $f\circ h-g\circ f=0$.
The operad $\widehat{\mathcal O}$ {#cyclic-op}
=================================
In this section, we define for any cyclic operad $\mathcal O$ the colored operad $\widehat{\mathcal O}$. In the case that $\mathcal O$ is cyclic quadratic, we give an explicit description of $\widehat{\mathcal O}$ in terms of generators and relations coming from generators and relations in $\mathcal O$.
We assume that the reader is familiar with the notion of operads, colored operads and cyclic operads. For a good introduction to operads, we refer to [@Ad], [@GK] and [@MSS]; for cyclic operads we recommend [@GeK] and [@MSS]. Colored operads were first introduced in [@BV] and have appeared in many other places, see e.g. [@L] and [@BM]. Since in our case we only need a special type of colored operad, it will be convenient to set up notation with the following definition.
As in [@GK (1.2.1)] and [@GeK (1.1)], we assume throughout this paper that $k$ is a field of characteristic $0$. Note however, that for certain operads such as e.g. the associative operad, a more general setup is possible.
\[0/1-operad\] Let $\mathcal P$ be a 3-colored operad in the category of (differential graded) vector spaces, where we use the three colors “full", “dashed" and “empty", in symbols written ${
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}},{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}},\varnothing$. This means that to each finite sequence o
_{-1/2}, spacetime vector, in chiral spinor of $so(16)$
\overline{\psi}^{1-2}_{-1/2} \right)$
------------------------------------------------------------------------------------------------------
Finally, let us consider the $k=3$ sector. There are no massless states in (R,NS), so we only consider (NS,NS). Fields in this sector have the following boundary conditions: $$\begin{aligned}
X^{1-2}(\sigma + 2\pi) & = & + X^{1-2}(\sigma), \\
X^{3-4}(\sigma + 2\pi) & = & - X^{3-4}(\sigma), \\
\psi^{1-2}(\sigma + 2\pi) & = & - \psi^{1-2}(\sigma), \\
\psi^{3-4}(\sigma + 2 \pi) & = & + \psi^{3-4}(\sigma), \\
\lambda^{1-8}(\sigma + 2\pi) & = & - \lambda^{1-8}(\sigma), \\
\lambda^{9-16}(\sigma + 2\pi) & = & \exp\left( \frac{\pi i}{2} \right)
\lambda^{9-16}(\sigma).\end{aligned}$$ It is straightforward to compute $E_{\rm left} = -1/2$, $E_{\rm right} = 0$. The available field modes are $$\overline{\partial} X^{3-4}_{-1/2}, \: \: \:
\lambda^{1-8}_{-1/2}, \overline{\lambda}^{1-8}_{-1/2}, \: \: \:
\lambda^{9-16}_{-3/4}, \overline{\lambda}^{9-16}_{-1/4}.$$ Because $\psi^{3-4}$ is periodic, there is a multiplicity of right Fock vacua. The states $|+-\rangle$, $|-+\rangle$ are invariant under the generator of ${\mathbb Z}_4$, whereas the states $|++\rangle$, $|--\rangle$ get a sign flip.
Putting this together, we find ${\mathbb Z}_4$- and GSO-invariant massless states of the form:
-----------------------------------------------------------------------------------------------------------------
State Count
--------------------------------------------------------------- -------------------------------------------------
$\left( \overline{\partial} X^{3-4}_{-1/2}, 8 scalars
\overline{\partial} \overline{X}^{3-4}_{-1/2} \right) \otimes
| \pm \pm \rangle$
$\left( \overline{\lambda}^{9-16}_{-1/4} \right)^2 \otimes 2 sets of sc
form weights, and is generally inconsistent. However, when the sample size is small, inconsistent estimators can achieve smaller variance, leading to smaller error. The normalization constant $C$ is $10^{3}\ell/d^2$, and each point is averaged over $100$ trials. We use the minorization-maximization algorithm from [@Hun04] to compute the estimates from the rank-breakings.
Even if we use the consistent rank-breakings first proposed in [@APX14a], there is ambiguity in the choice of the weights. We next study how much we gain by using the proposed optimal choice of the weights. The optimal choice, $\lambda_{j,a}=1/(\kappa_j-p_{j,a})$, depends on two parameters: the size of the offerings $\kappa_j$ and the position of the separators $p_{j,a}$. To distinguish the effect of these two parameters, we first experiment with fixed $\kappa_j=\kappa$ and illustrate the gain of the optimal choice of $\lambda_{j,a}$’s.
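Concretely, the optimal weights are a one-line computation from the two parameters just named. The sketch below is ours, with hypothetical separator positions chosen to echo the "one separator at position one, the rest at the bottom" configuration described next.

```python
def optimal_weights(kappa, separators):
    """Optimal rank-breaking weights lambda_{j,a} = 1/(kappa_j - p_{j,a})
    for one set of comparisons j, with offering size kappa_j and
    separator positions p_{j,a} (each p < kappa)."""
    return [1.0 / (kappa - p) for p in separators]

# Illustrative: ell = 3 separators, kappa = 10, one separator at the top
# (position 1) and the remaining two at the bottom (positions 8 and 9).
kappa = 10
separators = [1, 8, 9]
print(optimal_weights(kappa, separators))   # weights 1/9, 1/2, 1
```

Separators near the bottom of a large offering thus receive much larger weight than separators near the top, which is where the gain over uniform weights comes from.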
![There is a constant-factor gain from choosing the optimal $\lambda_{j,a}$’s when the sizes of the offerings are fixed, i.e. $\kappa_j = \kappa$ (left; legend: top-$1$ and bottom-$(\ell-1)$ separators, $x$-axis: number of separators). We choose a particular set of separators where one separator is at position one and the rest are at the bottom. An example for $\ell=3$ and $\kappa=10$ is shown, where the separators are indicated in blue (right).[]{data-label="fig:lambda_impact1"}](Plot3-eps-converted-to.pdf "fig:"){width=".3\textwidth"} ![](topbottom-eps-converted-to.pdf "fig:"){width=".22\textwidth"}
Figure \[fig:lambda\_impact1\] illustrates that the optimal choice of the weights improves over consistent rank-breaking w
sms of the third and the fourth t-maps has order two. Thus, they generate three tree-rooted cubic maps each, and the second cubic map generates $18$ tree-rooted maps.
The third cubic map in Figure 1 generates three t-maps. $$\begin{picture}(270,95) \put(0,30){\circle*{3}} \put(70,30){\circle*{3}}
\put(35,50){\circle*{3}} \put(35,90){\circle*{3}}
\qbezier[30](0,30)(35,5)(70,30) \qbezier[30](0,30)(0,70)(35,90)
\qbezier[30](70,30)(70,70)(35,90) \put(33,2){\small 1}
\put(100,30){\circle*{3}} \put(170,30){\circle*{3}}
\put(135,50){\circle*{3}} \put(135,90){\circle*{3}}
\qbezier[30](100,30)(100,70)(135,90)
\qbezier[30](135,50)(135,70)(135,90)
\qbezier[30](135,50)(152,40)(170,30) \put(133,2){\small 2}
\put(200,30){\circle*{3}} \put(270,30){\circle*{3}}
\put(235,50){\circle*{3}} \put(235,90){\circle*{3}}
\qbezier[30](270,30)(270,70)(235,90)
\qbezier[30](235,50)(235,70)(235,90)
\qbezier[30](200,30)(217,40)(235,50) \put(233,2){\small 3}
\linethickness{0.5mm} \qbezier(0,30)(17,40)(35,50)
\qbezier(35,50)(52,40)(70,30) \put(35,50){\line(0,1){40}}
\qbezier(100,30)(117,40)(135,50) \qbezier(100,30)(135,5)(170,30)
\qbezier(170,30)(170,70)(135,90)
\qbezier(200,30)(235,5)(270,30) \qbezier(200,30)(200,70)(235,90)
\qbezier(235,50)(252,40)(270,30)
\end{picture}$$ The group of automorphisms of the first of them has order $3$ and of the second and the third — order $2$. Thus they generate $2+3+3=8$ tree-rooted cubic maps.
The fourth cubic map in Figure 1 generates one t-map with an automorphism group of order three. Thus, it generates $2$ tree-rooted cubic maps.
The fifth cubic map in Figure 1 also generates one t-map, with trivial automorphism group. Thus, it generates $6$ tree-rooted cubic maps.
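All of the counts above follow one arithmetic pattern: a t-map with automorphism group of order $g$ appears to contribute $6/g$ tree-rooted cubic maps. The sketch below simply encodes that inferred rule (it is our reading of the examples, not a statement from the text) and reproduces the tallies.

```python
def tree_rooted_count(aut_order, base=6):
    # Inferred from the worked examples: each t-map contributes
    # base / |Aut| tree-rooted cubic maps, with base = 6.
    assert base % aut_order == 0
    return base // aut_order

# Third cubic map: t-maps with |Aut| = 3, 2, 2  ->  2 + 3 + 3 = 8
print(sum(tree_rooted_count(g) for g in (3, 2, 2)))   # 8
# Fourth map: one t-map with |Aut| = 3 -> 2; fifth: trivial Aut -> 6
print(tree_rooted_count(3), tree_rooted_count(1))     # 2 6
# Second map: three t-maps with trivial Aut -> 18
print(3 * tree_rooted_count(1))                       # 18
```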
The sixth cubic map in Figure 1 generates two t-maps $$\begin{picture}(240,80) \put(10,40){\oval(20,20)}
\qbezier(40,40)(40,70)(70,70) \qbezier(70,70)(100,70)(100,40)
\put(70,40){\oval(20,20)}
\put(150,40){\oval(20,20)} \qbezier(180,40)(180,10)(210,10)
\qbezier(210,10)(240,10)(240,40) \put(210,40){\oval(20,20)}
\linethickness{0.6
the only semistandard tableaux which can occur in $\theta$ are those with a $2$ in each row, i.e. those of the form $$\young(1111233\star,2\star\star\star\star),\qquad
\young(111123\star\star,23\star\star\star)\quad\text{or}\quad
\young(11112\star\star\star,233\star\star).$$ Now the first and last of these three types can be ruled out using the same argument with ${\psi_{2,2}}$. So $\theta$ can only involve tableaux with a $2$ and a $3$ in each row; call these *usable* tableaux. Next we consider ${\psi_{d,1}}\circ\theta$ for $d\gs4$. For each usable tableau $T$, ${\psi_{d,1}}\circ{\hat\Theta_{T}}$ is either zero (if $d$ and $d+1$ occur in the same row in $T$) or a semistandard homomorphism. Furthermore, these semistandard homomorphisms ‘pair up’; for example, with $d=4$ we have $${\psi_{4,1}}\circ\,\young(11112356,23478)\,={\psi_{4,1}}\circ\,
\young(11112346,23578)\,=\,
\young(11112346,23478).$$ Since the semistandard tableau on the right can only arise in this way from the two semistandard tableaux on the left, these two semistandard homomorphisms must occur with equal coefficients in $\theta$. Now we observe that we can get from any usable tableau to any other by a sequence of steps in which we interchange the integers $d,d+1$ for various values of $d\gs4$. So if we apply the above argument for all $d\gs4$, we see that all usable tableaux occur with the same coefficient in $\theta$.
Homomorphisms from $S^\mu$ to $S^\la$
-------------------------------------
Now we consider homomorphisms from $S^\mu$ to $S^\la$, where $\la,\mu$ are as above. In view of Proposition \[cdhomdim1\], *we assume for the rest of this section that $3\ls v\ls a-1$*. It turns out that all such homomorphisms can be expressed as linear combinations of ${\hat\Theta_{A}}$ and ${\hat\Theta_{B}}$, where $A,B$ are the following $\mu$-tableaux of type $\la$: $$\begin{aligned}
A&=
{\text{\footnotesize$\gyoungx(1.2,;1_2{{\begin{tikzpicture}[baseline=0cm]\draw[thick,densely dotted](0,.13)--++(2*.25,0);\end{tikzpicture}}};1;2;2;2;3;4_2{{\b
\[pr:2\]]{} that $$\widehat{\alpha_{0}} = \{\alpha' | \, \alpha' (g, h) = r(g) h
r(g \lhd h)^{-1}, {\rm for ~ some ~} r: G\to {\rm Ker}(\beta) ~~
{\rm ~ a~ morphism~ of~ groups}\}$$
We record this observation in the following:
Let $H$, $G$ be two groups, $\beta: G \times H \rightarrow G$ an action as automorphisms and $H\rtimes_{\beta} G$ the right version of the semidirect product. The following statements are equivalent:
1. There exists a matched pair $(H, G, \alpha, \beta)$ such that the bicrossed products $H\, {}_{\alpha}\!\! \bowtie_{\beta}
\, G$ and $H\rtimes_{\beta} G$ are isomorphic in the category $B_{2}^{\beta}(H,G)$;
2. There exists a morphism of groups $r: G \rightarrow {\rm
Ker}(\beta)$ such that the action $\alpha$ is given by $g \rhd h =
r(g) h r(g \lhd h)^{-1}$ for all $g \in G$, $h \in H$.
Examples
========
[\[se:4\]]{}
In this section we describe all matched pairs between $C_{n}$ and $C_{m}$, for $n \in \{2,3\}$ and $m \in \NN^{*}$ arbitrary. First, let us introduce some notation. We denote by $a$ a generator of the cyclic group $C_n$ and $b$ a generator of $C_m$. The set of group morphisms from the group $C_n$ to the group of automorphisms ${{\rm Aut}\,}(C_m)$ will be denoted by $\varsigma (n, m)$. Such a morphism $\vartheta : C_n \rightarrow {{\rm Aut}\,}(C_m)$ is uniquely determined by a positive integer $t\in [m-1] := \{1, 2,
\cdots, m-1\} $ such that $m|t^n -1$ and $${\label{eq:2.4.399}}
\vartheta : C_n \rightarrow {{\rm Aut}\,}(C_m), \qquad \vartheta (a) (b) =
b^t$$ Therefore, one can equivalently think of $\varsigma (n, m)$ as the subgroup of $U(\ZZ_m)$ consisting of all solutions in $\ZZ_{m}$ of the equation $x^n = 1$.
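This characterisation is easy to check computationally. The sketch below (the function name is ours) enumerates $\varsigma (n, m)$ as the set of solutions of $x^n = 1$ in $U(\ZZ_m)$, i.e. the integers $t\in [m-1]$ with $\gcd(t,m)=1$ and $m \mid t^n - 1$.

```python
from math import gcd

def varsigma(n, m):
    """Solutions of x^n = 1 in the unit group U(Z_m); these parametrise
    the group morphisms C_n -> Aut(C_m) described above."""
    return sorted(t for t in range(1, m)
                  if gcd(t, m) == 1 and pow(t, n, m) == 1)

print(varsigma(2, 8))   # [1, 3, 5, 7]
print(varsigma(3, 7))   # [1, 2, 4]
```

For instance, every unit of $\ZZ_8$ squares to $1$, while $x^3 = 1$ has exactly three solutions modulo $7$, consistent with the counting formula $|\varsigma(n,m)|$ quoted next.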
Using the fact that if $m= 2^{a_0} p_1^{a_1}\cdots p_k^{a_k}$ with $p_1$, $\cdots$, $p_k$ odd primes, then $${{\rm Aut}\,}(C_m)\cong U(
\ZZ_m ) \cong U( \ZZ_{2^{a_0}}) \times U(\ZZ_{p_1^{a_1}}) \times
\cdots \times U(\ZZ_{p_k^{a_k}})$$ it is a routine computation to check that $$|\varsigma (n, m)| = \left \{\begin{array}{rcl}
\prod_{i=1}^k (n, p_i^{
f $C\le\lambda_0$, then $\lim_{t\to 0}{{\mathscr C}}\circ\alpha(t)$ is a $(0:1:0)$-star.
If $C=\frac ca\le\lambda_0$, then $f_{(C)}(y)=0$, so $$\alpha(t)=\begin{pmatrix}
1 & 0 & 0 \\
t^a & t^b & 0 \\
0 & 0 & t^c\end{pmatrix}\quad.$$ The statement follows by computing the limit of individual formal branches, using Definition \[branchlimit\].
By Lemma \[rank2lemma\], the limits obtained in Lemma \[Clelambda0\] are rank-2 limits, so the first part of Proposition \[abc\] is proved. As for the second part, the limit of a branch tangent to $z=0$ depends on whether the branch truncates to $f_{(C)}(y)$ or not. These cases are studied in the next two lemmas. Recall that, by our choice, $B\ge \frac{C-\lambda_0}2+1$.
\[nottrunc\] Assume $C>\lambda_0$, and let $z=g(y)$ be a formal branch tangent to $z=0$, such that $g_{(C)}(y)\ne
f_{(C)}(y)$. Then the limit of the branch is supported on a kernel line.
The limit of the branch is determined by the dominant terms in $$\underline{f(t^a)}+\underline{f'(t^a)t^b}y+t^cz=g(t^a)+g'(t^a)
t^by+\dots .$$ As the truncations $g_{(C)}$ and $f_{(C)}$ do not agree, the dominant term is independent of $z$. Under our hypotheses on $B$ and $C$, it is found to be independent of $y$ as well, as needed.
\[dominant\] Assume $C>\lambda_0$, and let $z=g(y)$ be a formal branch tangent to $z=0$, such that $g_{(C)}(y)=
f_{(C)}(y)$. Denote by $\gamma_C^{(g)}$ the coefficient of $y^C$ in $g(y)$.
- If $B> \frac{C-\lambda_0}2+1$, then the limit of the branch $z=g(y)$ by $\alpha(t)$ is the line $$z=(C-B+1)\gamma_{C-B+1} y+\gamma_C^{(g)}\quad.$$
- If $B= \frac{C-\lambda_0}2+1$, then the limit of the branch $z=g(y)$ by $\alpha(t)$ is the conic $$z=\frac {\lambda_0(\lambda_0-1)}2\gamma_{\lambda_0}y^2+\frac
{\lambda_0+C}2\gamma_{\frac{\lambda_0+C}2}y+\gamma_C^{(g)}\quad.$$
Rewrite the expansion whose dominant terms give the limit of the branch as: $$t^c z=(g(t^a)-\underline{f(t^a)})+(g'(t^a)t^b-\underline{f'(t^a)t^b})y
+\frac{g''(t^a)}2 t^{2b} y^2+\dots$$ The dominant term has weight $c=C
p (\mathbb{Z}_{k+1}^2 \times \mathbb{Z}_{6k} \times \{m\})$ are disjoint. Furthermore, if $C$ is the union of these two sets, then, for every $n$, $C \cap (\mathbb{Z}_{k+1}^2 \times \{n\} \times \{m\}) = \{a_r, \ldots, a_{r+k}\}$ for some $r$, and by Proposition \[anprop\], this contains either one point in every row or one point in every column and is therefore a hole.\
Since $T$ tiles $B$, it also tiles $(\mathbb{Z}_{k+1}^2 \times \mathbb{Z}_{6k} \times \{m\}) \setminus Y$.
$T$ tiles $Y$ by Lemma \[biglemma\]. Hence $T$ tiles $\mathbb{Z}_{k+1}^2 \times \mathbb{Z}_{6k} \times \mathbb{Z}$, and therefore also $\mathbb{Z}^4$, completing the proof of Theorem \[generalk\].
The 4 mod 8 case
================
To finish the proof of Theorem \[mainthm\], all that remains is to prove the following:
\[4mod8\] Let $T$ be the tile $\underbrace{\texttt{\emph{XXXXX}}}_{k}\!\texttt{.}\!\underbrace{\texttt{\emph{XXXXX}}}_{k}$, with $k \equiv 4 \pmod 8$. Then $T$ tiles $\mathbb{Z}^3$.
We will prove this by constructing partial tilings of each $\mathbb{Z}^2$ slice and filling in the gaps using the construction from the proof of Lemma \[biglemma\]. We will define 3 subsets $X_1$, $X_2$, $X_3$ of $\mathbb{Z}^2$ and show that $T$ tiles each of them. However, two of these tilings will not make use of strings.
Let $S_1 = \{(x,x+n(k+1)) \; | \; n \in \mathbb{Z}, x \equiv 2n,2n+1,2n+2,2n+3 \pmod 8\}$.
Let $S_2 = \{(x,x+n(k+1)) \; | \; n\in \mathbb{Z}, x \equiv 2n+4,2n+5,2n+6,2n+7 \pmod 8\}$.
Let $S_3 = \{(x,x+n(k+1)+1) \; | \; n \in \mathbb{Z}, x \equiv 2n+2,2n+3,2n+4,2n+5 \pmod 8\}$.
Let $X_1 = \mathbb{Z}^2 \setminus (S_2 \cup S_3)$, $X_2 = \mathbb{Z}^2 \setminus (S_1 \cup S_3)$, $X_3 = \mathbb{Z}^2 \setminus (S_1 \cup S_2)$.
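As a finite-window sanity check of these definitions (the window size, the choice $k=12$, and the function name are ours), the sketch below classifies lattice points into $S_1$, $S_2$, $S_3$ and confirms that every point on a diagonal with $y-x \equiv 0 \pmod{k+1}$ lies in exactly one of $S_1$, $S_2$, so that $X_3$ removes whole diagonals.

```python
def classify(x, y, k):
    """Return which of S1, S2, S3 (if any) the point (x, y) belongs to,
    for the diagonal sets defined above; here k = 4 (mod 8)."""
    d = y - x
    if d % (k + 1) == 0:                 # candidate for S1 or S2
        n = d // (k + 1)
        r = (x - 2 * n) % 8
        return "S1" if r in (0, 1, 2, 3) else "S2"
    if (d - 1) % (k + 1) == 0:           # candidate for S3
        n = (d - 1) // (k + 1)
        if (x - 2 * n) % 8 in (2, 3, 4, 5):
            return "S3"
    return None

k = 12                                    # k = 4 (mod 8)
window = [(x, y) for x in range(-40, 40) for y in range(-40, 40)]
# Every point with y - x = 0 (mod k+1) lies in exactly one of S1, S2.
assert all(classify(x, y, k) in ("S1", "S2")
           for (x, y) in window if (y - x) % (k + 1) == 0)
print(classify(0, 0, k))   # S1
```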
Let the first coordinate be horizontal and the second vertical.
$X_3$ is $\mathbb{Z}^2$ with every $(k+1)$th diagonal removed, so each row (or column) is $\mathbb{Z}$ with every $(k+1)$th point removed, that is, a string. Hence $T$ tiles $X_3$.
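That $T$ tiles a string can itself be checked with a one-period computation: copies of $T$ placed $2(k+1)$ apart cover exactly the residues that a string keeps. The alignment convention below (removed points at $k$ modulo $k+1$) is our choice of sketch.

```python
k = 12                                    # k = 4 (mod 8), as in the theorem
# Offsets of T = k X's, one gap, k X's:
tile = set(range(k)) | set(range(k + 1, 2 * k + 1))
period = 2 * (k + 1)                      # spacing between copies of T
# Residues kept by a string: everything except k (mod k+1).
string = {r for r in range(period) if r % (k + 1) != k}
print(tile == string)   # True: T tiles the string with period 2(k+1)
```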
We will show that $X_1$ can be tiled with vertical copies of $T$
n of elements of $\underline{M}(R)$ in Section \[m\]. Based on these, an element of $\tilde{M}(R)$ is $$m= \begin{pmatrix} \pi^{\max\{0,j-i\}}m_{i,j} \end{pmatrix} \mathrm{~with~}z_i^{\ast}, m_{i,i}^{\ast}, m_{i,i}^{\ast\ast}$$ satisfying the following:
- If $i$ is even and $L_i$ is *of type* $\textit{I}^o$ (resp. *of type* $\textit{I}^e$), then $$m_{i,i}=\begin{pmatrix} s_i&\pi y_i\\ \pi v_i&1+\pi z_i \end{pmatrix} \textit{(resp.
$\begin{pmatrix} s_i&r_i&\pi t_i\\ \pi y_i&1+\pi x_i&\pi z_i\\ v_i&u_i&1+\pi w_i \end{pmatrix}$)},$$ where $s_i\in M_{(n_i-1)\times (n_i-1)}(B\otimes_AR)$ (resp. $s_i\in M_{(n_i-2)\times (n_i-2)}(B\otimes_AR)$), etc., and $s_i$ mod $\pi\otimes 1$ is invertible.
- If $i$ is odd and $L_i$ is *free of type I*, then $$m_{i,i}=\begin{pmatrix} s_i&\pi r_i&t_i\\ y_i&1+\pi x_i& u_i\\\pi v_i&\pi z_i&1+\pi w_i \end{pmatrix},$$ where $s_i\in M_{(n_i-2)\times (n_i-2)}(B\otimes_AR)$, etc., and $s_i$ mod $\pi\otimes 1$ is invertible.
- For the remaining $m_{i,j}$’s except for the cases explained above, $m_{i,j}\in M_{n_i\times n_j}(B\otimes_AR)$ and $m_{i,i}$ mod $\pi\otimes 1$ is invertible.
- Assume that $i$ is even and that $L_i$ is *of type I*. Then $$z_i+\delta_{i-2}k_{i-2, i}+\delta_{i+2}k_{i+2, i}=\pi z_i^{\ast}$$ such that $z_i^{\ast}\in B\otimes_AR$. This equation is considered in $B\otimes_AR$ and $\pi$ stands for $\pi\otimes 1\in B\otimes_AR$. Here,
- $z_i$ is an entry of $m_{i,i}$ as described in the above step (a).
- $k_{i-2, i}$ (resp. $k_{i+2, i}$) is the $(n_{i-2}, n_i)^{th}$-entry (resp. $(n_{i+2}, n_i)^{th}$-entry) of the matrix $m_{i-2, i}$ (resp. $m_{i+2, i}$) if $L_{i-2}$ (resp. $L_{i+2}$) is *of type* $\textit{I}^o$.
- $k_{i-2, i}$ (resp. $k_{i+2, i}$) is the $(n_{i-2}-1, n_i)^{th}$-entry (resp. $(n_{i+2}-1, n_i)^{th}$-entry) of the matrix $m_{i-2, i}$ (resp. $m_{i+2, i}$) if $L_{i-2}$ (resp. $L_{i+2}$) is *of type* $\textit{I}^e$.
- Assume that $i$ is odd and that $L_i$ is *bound of type I*. Then $$\delta_{i-1}v_{i-1}\cdot m_{i-1, i}+\
ber\,
4A_{\Sigma} K_{22} M_N i\left({\vec{\sigma}_1}\times
{\vec{\sigma}_2}\right){\vec{q}}\\-&\nonumber\,
4 A_{\Sigma} M_N\left({\vec{q}}^2 K_{23}+5 K_{34}+{\vec{q}}^2
K_{35}+K_{22}\right){\vec{\sigma}_1}\cdot {\vec{q}}\\+&\nonumber\,
2B_{\Sigma} K_{22} ({\vec{\sigma}_1}\cdot {\vec{q}})({\vec{\sigma}_2}\cdot {\vec{q}})
+
2 B_{\Sigma} \left({\vec{q}}^2 K_{22}+{\vec{q}}^4
K_{23}
\right.\\-&\nonumber\left.\,
{\vec{q}}^2 K_{31}+(3-\eta) (\Delta M-\Delta M_\Sigma) K_{32}
\right.\\+&\nonumber\left.\,
{\vec{q}}^2 (\Delta M -\Delta M_\Sigma)K_{33}+2(5-\eta) {\vec{q}}^2 K_{34}
+2 {\vec{q}}^4 K_{35}
\right.\\-&\nonumber\left.\,
(3-\eta)
K_{42}-{\vec{q}}^2 K_{43}+(15-8\eta) K_{46}
\right.\\+&\nonumber\left.\,
2(5-\eta) {\vec{q}}^2 K_{47}
+
{\vec{q}}^4K_{48}
+{\vec{q}}^2 K_{21} \left(\Delta M-\Delta M_{\Sigma}\right) \right)
\Big]\,.\end{aligned}$$ To take into account the isospin we must replace every $A_\Sigma$ and $B_\Sigma$ by $$\begin{aligned}
A\to&
-\sqrt3A_{\Sigma\frac12}+2A_{\Sigma\frac32}
+\frac23(\sqrt3A_{\Sigma\frac12}+A_{\Sigma\frac32}){\vec{\tau}_1}\cdot{\vec{\tau}_2}\\
B\to&
-\sqrt3B_{\Sigma\frac12}+2B_{\Sigma\frac32}
+\frac23(\sqrt3B_{\Sigma\frac12}+B_{\Sigma\frac32}){\vec{\tau}_1}\cdot{\vec{\tau}_2}\,.\end{aligned}$$ We have used the master integrals with $q_0=-\frac{M_\Lambda-M_N}{2}$, $q_0'=M_\Sigma-M_\Lambda$, and ${\vec{q}}={\vec{p}}'-{\vec{p}}$.
![Crossed-box diagram contributing at NLO.[]{data-label="box2g"}](box2g)
The second crossed box diagram (Fig. \[box2g\]) includes a $\Sigma$-propagator and contributes to the potential with $$\begin{aligned}
V_h=&
i\frac{G_Fm_\pi^2g_A^3}{8f_\pi^3}
(3+2{\vec{\tau}_1}\cdot{\vec{\tau}_2})
{\int\frac{d^4l}{(2\pi)^4}}\frac{1}{(l+q)^2-m_\pi^2+i\epsilon}
\nonumber\\\times&\nonumber\,
\frac{1}{l^2-m_\pi^2+i\epsilon}\,
\frac{1}{r_N^2-M_N^2+i\epsilon}\,
\frac{(l^\rho)(l^\nu+q^\nu)(l^\mu)}{k_N^2-M_N^2+i\epsilon}
\\\times&\nonumber\,
{\overline{u}}_1({\overline{E}},{\vec{p}\,'})
\gamma_\rho\gamma_5({\cancel{k}_N}+M_N)(A+B\gamma_5) u_1(
now the Feynman–Kac representation of the solution to the above fractional Poisson problem, thanks to Theorem 3.2 in [@bucur] for domains which are balls, we are forced to conclude that $$\hat{u}(x) = \mathbb{E}_x\left[\hat{u} (X_{\sigma_{B(x')}}) + \int_0^{\sigma_{B(x')}} {f}(X_s)\,{\rm d}s\right], \qquad x\in B(x'),\quad x'\in D.
\label{fixedpointonspheres}$$ Here again, we are implicitly using that ${f}\in C^{\alpha +\varepsilon}(\overline{D})$ in the application of Theorem 3.2 of [@bucur]. Let us now appeal to the same notation we have used for the walk-on-spheres. Specifically, recall the sequential exit times from maximally sized balls $\sigma_{B_k}$ for the walk-on-spheres which were defined in Section \[WoSfL\]. We claim that $$M_k\coloneqq \hat{u} (X_{\sigma_{B_k}\wedge \sigma_D}) + \int_0^{\sigma_{B_k}\wedge \sigma_D }{f}(X_s)\,{\rm d}s, \qquad k \geq 0,$$ is a martingale. To see why, note that, by the strong Markov property and then by , $$\begin{aligned}
\mathbb{E}\left[M_{k+1}|\mathcal{G}_{k}\right] = & \, \mathbf{1}_{\{k<N\}}\left\{\left.\mathbb{E}_{x}\left[
\hat{u} (X_{\sigma_{B(x)}}) + \int_0^{\sigma_{B(x)}}{f}(X_s)\,{\rm d}s
\right]\right|_{x = \smash{X_{\sigma_{B_k}}}} + \int_0^{\sigma_{B_k}}{f}(X_s)\,{\rm d}s\right\}\\
& + \mathbf{1}_{\{k\geq N\}}\left\{
\hat{u} (X_{\sigma_D}) + \int_0^{ \sigma_D }{f}(X_s)\,{\rm d}s
\right\}\\
= & \, \mathbf{1}_{\{k<N\}}\left\{\hat{u}(X_{\sigma_{B_k}}) + \int_0^{\sigma_{B_k}}{f}(X_s)\,{\rm d}s\right\}+ \mathbf{1}_{\{k\geq N\}}\left\{
\hat{u} (X_{\sigma_D}) + \int_0^{ \sigma_D }{f}(X_s)\,{\rm d}s
\right\}\\
= & \, \hat{u} (X_{\sigma_{B_k}\wedge \
\- \- **17%**
  n individuals (NUTS II)   17,087 (99)   16,534 (99)   18,734 (110)   18,148 (110)
  RMSEA                     0.030         0.021         0.031          0.021
  R2 Between                0.824         0.940         0.787          0.945
  R2 Within                 0.097         0.156         0.099          0.160
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Note. Unstandardized Coefficients. Standard errors in parentheses. MLR.
\* *p* ≤ 0.05,
\*\* *p* ≤ 0.01,
\*\*\* *p* ≤ 0.001.
Models 2 and 4 account also for mediation of institutional trust at the individual level. Sources: ESS, EUROSTAT, QoG regional data.
At the individual level, the analysis points out the relevance of usual predictors of social trust: age, gender, perceived health status, and being part of a discriminated group are all significantly correlated with generalized trust in the expected directions. Furthermore, Model 1 shows that being unemployed or worried about crime have a negative influen
with* *the filter* *$\mathcal{F}$, see Eq. (\[Eqn: DirectedNet2\]).$\qquad\square$*
**Definition A1.11.** *Let $\chi\!:\mathbb{D}\rightarrow X$ be a net and $\mathbb{R}_{\alpha}=\{\beta\in\mathbb{D}\!:\beta\succeq\alpha\in\mathbb{D}\}$ a residual in $\mathbb{D}$. Then* $$_{\textrm{F}}\mathcal{B}_{\chi}\overset{\textrm{def}}=\{\chi(\mathbb{R}_{\alpha})\!:\textrm{Res}(\mathbb{D})\rightarrow X\textrm{ for all }\alpha\in\mathbb{D}\}$$ *is the* *filter-base associated with* *$\chi$, and the corresponding filter $\mathcal{F}_{\chi}$ obtained by taking all supersets of the elements of* $_{\textrm{F}}\mathcal{B}_{\chi}$ *is the* *filter* *associated with* *$\chi$.$\qquad\square$*
$_{\textrm{F}}\mathcal{B}_{\chi}$ is a filter-base in $X$ because $\chi(\bigcap\mathbb{R}_{\alpha})\subseteq\bigcap\chi(\mathbb{R}_{\alpha})$, which holds for any functional relation and proves (FB2). It is not difficult to verify that
\(i) $\chi$ is eventually in $A\Longrightarrow A\in\mathcal{F}_{\chi}$, and
\(ii) $\chi$ is frequently in $A\Longrightarrow(\forall\mathbb{R}_{\alpha}\in\textrm{Res}(\mathbb{D}))(A\bigcap\chi(\mathbb{R}_{\alpha})\neq\emptyset)$ $\Longrightarrow A\bigcap\mathcal{F}_{\chi}\neq\emptyset$ .
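For sequential nets, i.e. the directed set $(\mathbb{N},\le)$, the eventually/frequently characterisations can be made concrete. The finite-horizon sketch below is purely illustrative: truncating the residuals $\mathbb{R}_{\alpha}$ at a horizon is our simplification, valid here because the example net is eventually constant.

```python
def tail(chi, a, horizon):
    """chi(R_a) truncated to a finite horizon: the image of the residual
    {b : b >= a} under the net chi, for the directed set (N, <=)."""
    return {chi(b) for b in range(a, horizon)}

def eventually(chi, A, horizon):
    # chi is eventually in A iff some tail chi(R_a) is contained in A
    return any(tail(chi, a, horizon) <= A for a in range(horizon))

def frequently(chi, A, horizon):
    # chi is frequently in A iff every tail chi(R_a) meets A
    return all(tail(chi, a, horizon) & A for a in range(horizon))

chi = lambda n: n % 2 if n < 10 else 0   # eventually the constant net 0
H = 50
print(eventually(chi, {0}, H))   # True: chi(R_10) = {0} is inside {0}
print(frequently(chi, {1}, H))   # False: tails beyond 10 miss {1}
```

In particular, "eventually in $A$" implies "frequently in $A$" but not conversely: the net above is frequently (not eventually) in $\{1\}$ on any horizon below $10$.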
Limits and adherences are obviously preserved in switching between nets (respectively, filters) and the filters (respectively, nets) that they generate: $$\begin{aligned}
\lim(\chi)=\lim(\mathcal{F}_{\chi}), & & \textrm{adh}(\chi)=\textrm{adh}(\mathcal{F}_{\chi})\label{Eqn: net-fil}\\
\lim(\mathcal{F})=\lim(\chi_{\mathcal{F}}), & & \textrm{adh}(\mathcal{F})=\textrm{adh}(\chi_{\mathcal{F}}).\label{Eqn: fil-net}\end{aligned}$$
The proofs of the two parts of Eq. (\[Eqn: net-fil\]), for example, go respectively as follows. $x\in\lim(\chi)\Leftrightarrow\chi\textrm{ is eventually in }\mathcal{N}_{x}\Leftrightarrow(\forall N\in\mathcal{N}_{x})(\exists F\in\mathcal{F}_{\chi})\textrm{ such that }(F\subseteq N)\Leftrightarrow x\in\lim(\mathcal{F}_{\chi})$, and $x\in\textrm{adh}(\chi)\Leftrightarrow\chi\textrm{ is frequently in }\mathca
in {{\mathbb{Z}}}: M > N \Longrightarrow M+1>N$$ This implication is correct and thus proves non-termination for the considered queries if the precondition holds in the first iteration. This is the case for all queries in $Den(\leftarrow count\_to(\underline{N},L))$ with $0 > \underline{N}$ since the value corresponding to $M$ in the first iteration is $0$ and the value corresponding to $N$ is $\underline{N}$. This proves non-termination of all considered queries for which $0 > \underline{N}$. $\hfill \square$
In the following example, applicability of the derivation does not imply non-termination. To detect a class of non-terminating queries, a domain constraint is added to the pre- and postcondition of the implication.
\[example:constants\_nt\_cond\]
constants(I,J):- I =:= 2, In is J*2, Jn is I-J, constants(In,Jn).
The clause in *constants* is applicable to any goal with $constants(2,\underline{J})$ as selected atom, with $\underline{J}$ an integer variable. Since the first argument in the next iteration is the value corresponding to $\underline{J}*2$, only goals with the selected atom $constants(2,1)$ are non-terminating for this program.
Since applicability of the derivation does not imply non-termination, a similar implication as in the previous example is false, $\forall I,J \in {{\mathbb{Z}}}: I=2 \Longrightarrow J*2 = 2$. To overcome this, a constraint is added to the pre- and post-condition of this implication, restricting the considered values of $\underline{J}$ to an unknown set of integers, called its *domain*. $$\exists Dom_j \subset {{\mathbb{Z}}}, \forall I,J \in {{\mathbb{Z}}}: I=2, J \in Dom_j \Longrightarrow J*2 = 2, I-J \in Dom_j$$ The resulting implication is true for $Dom_j = \lbrace 1 \rbrace$. By requiring that the considered moded query satisfies both the reachability constraint and the additional constraint in the pre-condition, the non-terminating query $\leftarrow constants(2,1)$ is obtained. $\hfill \square$
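The fixed-point behaviour behind this example is easy to see by simulating one clause unfolding as a map on integer pairs. The sketch below is our transliteration of the *constants/2* clause, not part of the analysis itself.

```python
def step(I, J):
    """One unfolding of constants(I,J) :- I =:= 2, In is J*2,
    Jn is I-J, constants(In,Jn). Applicable only when I == 2."""
    if I != 2:
        return None            # clause not applicable: derivation halts
    return (J * 2, I - J)      # the next call constants(In, Jn)

# Only J = 1 yields a non-terminating query: (2, 1) is a fixed point.
print(step(2, 1))   # (2, 1)
# For any other J, the clause becomes inapplicable after one step.
for J in (-3, 0, 2, 7):
    I2, J2 = step(2, J)
    assert step(I2, J2) is None   # since J*2 != 2 when J != 1
```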
All information needed to construct these constraints can be obtaine
igned}
\lrabs{\psi'(r) \nu'(r)}
=& \lrabs{\psi(r)(\aq \tau'(r)) \nu'(r)}\\
\leq& \aq \lrabs{\tau'(r)}\lrabs{\psi(r) \nu'(r)}\\
\leq& \frac{5\aq\Rq}{4} \cdot \frac{4}{\Rq}\\
\leq& 5\aq
\end{aligned}$$ where the second-to-last line follows from Lemma \[l:tau\] and our proof of \[f:q”(r)\_bounds\].
Next, $$\begin{aligned}
\psi''(r) = \psi(r) \lrp{\aq^2 \tau'(r)^2 - \aq \tau''(r)}
\end{aligned}$$ Thus applying Lemma \[l:tau\].1 and Lemma \[l:tau\].3, $$\begin{aligned}
\lrabs{\psi''(r) \nu(r)} \leq& 2\aq^2\Rq^2 + \aq
\end{aligned}$$
Finally, $$\begin{aligned}
\nu''(r)
=& \frac{1}{2\int_0^{4\Rq}\frac{\mu(s)\Psi(s)}{\psi(s)} ds}\cdot \frac{d}{dr} \frac{\mu(r)\Psi(r)}{\psi(r)}
\end{aligned}$$
Expanding the numerator, $$\begin{aligned}
\frac{d}{dr} \frac{\mu(r) \Psi(r)}{\psi(r)}
=& \mu'(r) \frac{\Psi(r)}{\psi(r)} + \mu(r) - \mu(r) \frac{\Psi(r) \psi'(r)}{\psi(r)^2}\\
=& \mu'(r) \frac{\Psi(r)}{\psi(r)} + \mu(r) + \mu(r) \frac{\Psi(r) \psi(r)\aq \tau'(r)}{\psi(r)^2}
\end{aligned}$$
Thus $$\begin{aligned}
\psi(r) \nu''(r) = \frac{1}{2\int_0^{4\Rq}\frac{\mu(s)\Psi(s)}{\psi(s)} ds}\cdot\lrp{\mu'(r) \Psi(r) + \mu(r) \psi(r) + \mu(r) \Psi(r) \aq \tau'(r)}
\end{aligned}$$ Using the same argument as from the proof of \[f:q”(r)\_bounds\], we can bound $$\begin{aligned}
\frac{1}{2\int_0^{4\Rq}\frac{\mu(s)\Psi(s)}{\psi(s)} ds}
\leq& \frac{1}{2\int_0^\Rq s ds}\\
\leq& \frac{1}{\Rq^2}
\end{aligned}$$ Finally, from Lemma \[l:mu\], $\lrabs{\mu'(r)}\leq \frac{\pi}{6\Rq}$, so $$\begin{aligned}
\lrabs{\psi(r) \nu''(r)}\leq& \frac{\pi/6 + 1 + 5\aq \Rq^2/4}{\Rq^2}\\
\leq& \frac{2(\aq\Rq^2 + 1)}{\Rq^2}
\end{aligned}$$
\[l:tau\] Let $\tau(r): [0,\infty) \to \Re$ be defined as $$\begin{aligned}
\tau(r)=\fourcase
{\frac
lgebra, $$\begin{aligned}
[{\mathcal{S}}(\epsilon_1), {\mathcal{S}}(\epsilon_2)]\
=&\
\tilde{p}(v_{12})\,,
\label{1st quantized alg}\end{aligned}$$ with $v_{12}^\mu=(\epsilon_1C\bar{\gamma}^\mu\epsilon_2)/\sqrt{2}$, where $\tilde{p}(v)$ is the operator with picture number $p=-1$ defined by $$\tilde{p}(v)\ =\ v_\mu\tilde{p}^\mu\ =\ - v_\mu\oint\frac{dz}{2\pi i}\psi^\mu(z) e^{-\phi(z)}\,.
\label{p with -1}$$ This is equivalent to the space-time translation operator $p(v)=v_\mu\oint\frac{dz}{2\pi i}i\partial X^\mu(z)$ (center of mass momentum of the string) in the sense that, for example,[@Witten:1986qs] $$(p(v)-X_0\tilde{p}(v))\ =\ \{Q, M(v)\}\,,
\label{p tilde p}$$with $$M(v)\ =\ v^\mu\oint\frac{dz}{2\pi i}(\xi(z)-\xi_0)\psi_\mu(z)e^{-\phi(z)}\,.
\label{kernel M}$$ Note that $M(v)$ does not include $\xi_0$, and so is in the small Hilbert space: $\{\eta, M(v)\}=0$. The algebra (\[1st quantized alg\]) and the Jacobi identity imply that $[Q, \tilde{p}(v)]=[\eta, \tilde{p}(v)]=[\xi_0, \tilde{p}(v)]=0$. We frequently omit specifying the parameters explicitly and denote, for example, ${\mathcal{S}}(\epsilon_1)$ by ${\mathcal{S}}_1$. Since $\eta\Phi$ and $\Psi$ are in the small Hilbert space containing the physical spectrum, (\[restricted linear\]) is the transformation law given in Ref. except that the local picture-changing operator at the midpoint is replaced by the $X$ in (\[PCO\]) so that the transformation is closed in the restricted space. As a transformation of $\Phi$ in the large Hilbert space, we adopt here that $$\delta^{(0)}_{{\mathcal{S}}(\epsilon)}\Phi\
=\ {\mathcal{S}}(\epsilon)\Xi\Psi\,.\label{linear tf phi}$$ This is consistent with (\[restricted linear\]) but is not unique. A different choice, however, can be obtained by combining (\[linear tf phi\]) and an $\Omega$-gauge transformation, for example, $$\begin{aligned}
\tilde{\delta}_{{\mathcal{S}}(\epsilon)}^{(0)}\Phi\ =&\ \xi_0{\mathcal{S}}(\epsilon)\Psi
\nonumber\\
=&\ \delta_{{\mathcal{S}}(\epsilon)}^{(0)}\Phi - \eta(\xi_0{\mathcal{S}}(\epsilon
| 1,096 | 207 | 685 | 1,181 | null | null | github_plus_top10pct_by_avg |
\delta = \max_{j \in [n]} \bigg\{ 4 \delta_{j,1}^2 + \frac{2\big(\delta_{j,1}\delta_{j,2} +\delta_{j,2}^2\big)\kappa_j}{\eta_{j}\ell_j} \bigg\} \;\;\leq\;\; 28 (\log(\ell_{\max} +2))^2\,.\end{aligned}$$
Proof of Theorem \[thm:main\]
-----------------------------
We first introduce two key technical lemmas. In the following lemma we show that $\E_{\theta^*}[\nabla \Lrb(\theta^*)] = 0$ and provide a bound on the deviation of $\nabla \Lrb(\theta^*)$ from its mean. The expectation $\E_{\theta^*}[\cdot]$ is with respect to the randomness in the samples drawn according to $\theta^*$. The log likelihood Equation can be rewritten as $$\begin{aligned}
\label{eq:likelihood}
\Lrb(\theta) =
\sum_{j=1}^n \sum_{a = 1}^{\ell_j}\sum_{i < \i \in S_j}\I_{\big\{(i,\i) \in \Gja\big\}}
\lambda_{j,a} \Big(\theta_i\I_{\big\{\sigma_j^{-1}(i) < \sigma_j^{-1}(\i)\big\}} + \theta_{\i}\I_{\big\{\sigma_j^{-1}(i) > \sigma_j^{-1}(\i)\big\}} - \log \Big(e^{\theta_i}
+ e^{\theta_{\i}}\Big) \Big)\;.\end{aligned}$$ We use $(i,\i) \in G_{j,a}$ to mean either $(i,\i)$ or $(\i,i)$ belong to $E_{j,a}$. Taking the first-order partial derivative of $\Lrb(\theta)$, we get $$\begin{aligned}
\label{eq:liklihood_grad}
\nabla_i\Lrb(\theta^*) \;\, =\;\, \sum_{j:i\in S_j} \sum_{a=1}^{\ell_j} \sum_{\substack{\i \in S_j \\ \i \neq i}} \,\lambda_{j,a}\,\I_ {\big\{(i,\i) \in G_{j,a}\big\}} \,\Bigg(\I_{\big\{\sigma_j^{-1}(i) < \sigma_j^{-1}(\i)\big\}} - \frac{\exp(\theta_i^*)}{\exp(\theta_i^*) + \exp(\theta_{\i}^*)} \Bigg)\;.\end{aligned}$$
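As an illustration of the gradient formula above for a single rank-breaking graph $G_{j,a}$, here is a minimal numerical sketch. The inputs (a score dictionary, a ranking $\sigma_j$ given as a list, an edge set of compared pairs, and the weight $\lambda_{j,a}$, abbreviated `lam`) are hypothetical, and this is not the authors' code; each unordered compared pair contributes symmetrically to the gradient entries of both of its items:

```python
import math

def grad_loglik(theta, sigma, edges, lam):
    """Gradient of the pairwise log-likelihood for one rank-breaking graph.

    theta : dict item -> score theta_i
    sigma : list of items, best first (so sigma.index(i) plays the role of sigma^{-1}(i))
    edges : iterable of unordered compared pairs (i, i')
    lam   : weight lambda_{j,a} for this graph
    """
    pos = {item: r for r, item in enumerate(sigma)}
    g = {i: 0.0 for i in theta}
    for i, k in edges:
        # Win probability of i over k under the model.
        p_i = math.exp(theta[i]) / (math.exp(theta[i]) + math.exp(theta[k]))
        win_i = 1.0 if pos[i] < pos[k] else 0.0
        g[i] += lam * (win_i - p_i)
        g[k] += lam * ((1.0 - win_i) - (1.0 - p_i))
    return g

# Example: three items with equal scores, full comparison graph.
g = grad_loglik({"a": 0.0, "b": 0.0, "c": 0.0},
                ["a", "b", "c"],
                [("a", "b"), ("a", "c"), ("b", "c")],
                lam=1.0)
# g == {"a": 1.0, "b": 0.0, "c": -1.0}: the top item pulls its score up,
# the bottom item down, and the middle item is balanced.
```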
\[lem:gradient\_topl\] Under the hypotheses of Theorem \[thm:main2\], with probability at least $1 - 2e^{3}d^{-3}$, $$\begin{aligned}
\big\|\nabla\Lrb(\theta^*)\big\|_2 \;\;\leq\;\; \sqrt{ 6\log d \, \sum_{j=1}^n \sum_{a=1}^{\ell_j} \big(\lambda_{j,a}\big)^2 \big(\kappa_j - p_{j,a}\big)\big(\kappa_j- p_{j,a}+1\big)} \,.
\end{aligned}$$
The Hessian matrix $H(\theta) \in \cS^d$ with $H_{i\i}(\theta) = \frac{\partial^2\Lrb(\theta)}{\partial\theta_i \partial\theta_{\i}}$ is given by $$\begi
| 1,097 | 2,844 | 1,172 | 1,077 | null | null | github_plus_top10pct_by_avg |
e 1$ and that the claim holds for all smaller values of $m$. Let $j\in I$.
Suppose first that $j=i_m$. Then $$T_{i_1}\cdots T_{i_m}(E_j)= T_{i_1}\cdots T_{i_{m-1}}(F_{i_m}L_{i_m}^{-1})
=F_{\beta _m}L_{\beta _m}^{-1}.$$ Hence Eq. follows from Lemma \[le:rvrel\].
Suppose now that $j\not=i_m$. Let $\chi '=r_{i_{m-1}}\cdots r_{i_2}r_{i_1}(\chi )$. Then $$\begin{aligned}
\label{eq:TEFF1}
T_{i_1}\cdots T_{i_m}(E_j) F_{\beta _m}^{{b^{\chi}} (\beta _m)-1}
=T_{i_1}\cdots T_{i_{m-1}}(E^+_{j,a(i_m)}
F_{i_m}^{\bfun{\chi '}({\alpha }_{i_m})-1})
\end{aligned}$$ where $a=-c^{\chi '}_{i_m j}$. Let $Z=\oplus _{k=0}^a\fie
E^+_{j,k(i_m)}\subset U^+(\chi ')$. By [@p-Heck07b Cor.5.4], $$E^+_{j,k(i_m)}F_{i_m}-F_{i_m}E^+_{j,k(i_m)}\in \fie
L_{i_m}E^+_{j,k-1}\subset Z \quad \text{for all $k\in {\mathbb{N}}_0$.}$$ Hence $ZF_{i_m}^{\bfun{\chi '}({\alpha }_{i_m})-1}\subset U(\chi ')Z$. Therefore $$\begin{aligned}
T_{i_1}\cdots T_{i_m}(E_j) F_{\beta _m}^{{b^{\chi}} (\beta _m)-1}
&\in T_{i_1}\cdots T_{i_{m-1}}(U(\chi ')Z)\\
&\subset
\sum _{j'\in I}U(\chi )T_{i_1}\cdots T_{i_{m-1}}(E_{j'}).
\end{aligned}$$ by Eq. and since $Z\subset \sum _{j'\in I}U(\chi ')E_{j'}$. This and the induction hypothesis imply Eq. . The last claim of the lemma follows by inserting $j=i_{m+1}$ into Eq. and using Eq. .
\[le:MLbasis\] Let $m\in \{0,1,\dots ,n\}$ and $\tau $ a permutation of the set $\{1,2,\dots ,n\}$. For all $k\in \{1,2,\dots ,m\}$ let $\chi _k=r_{i_{k-1}}\cdots r_{i_2}r_{i_1}(\chi )$. Assume that $$\begin{aligned}
\label{eq:MLass}
\prod _{k=1}^m \prod _{t=1}^{{b^{\chi}} (\beta _k)-1}
\big( {\rho ^{\chi}} (\beta _k)\Lambda (K_{\beta _k}L_{\beta _k}^{-1})
-{\rho ^{\chi _k}}({\alpha }_{i_k})^t\big)\not=0.
\end{aligned}$$ Then the set $$\label{eq:MLbasis}
\begin{aligned}
\{ F_{\beta _{m+\tau (1)}}^{l_{m+\tau (1)}}
F_{\beta _{m+\tau (2)}}^{l_{m+\tau (2)}}\cdots
F_{\beta _{m+\tau (n)}}^{l_{m+\tau (n)}}
F_{\beta _m}^{{b^{\chi}} (\beta _m)-1}
\cdots F_{\beta _2
| 1,098 | 3,226 | 1,611 | 1,010 | null | null | github_plus_top10pct_by_avg |
26 23 S ribosomal RNA
1998333 A:6 C:495 C:185 6 13.8 26 23 S ribosomal RNA
1999217 T:34 C:408 C:185 34 14.11 40 23 S ribosomal RNA
2176557 A:6 C:495 C:185 6 13.83 26 23 S ribosomal RNA
2177441 T:34 C:408 C:185 34 14.1 40 23 S ribosomal RNA
2293111 A:6 C:495 C:185 6 13.83 26 23 S ribosomal RNA
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Heterogeneity analysis of the genes of single DNA copies
--------------------------------------------------------
Furth
| 1,099 | 4,870 | 302 | 626 | null | null | github_plus_top10pct_by_avg |
valuations use a simple Nelder-Mead algorithm to learn about the cost space. The machine learning algorithm (red and blue) optimizes to BEC faster than the Nelder-Mead (black). By utilizing the machine learning model a parameter is eliminated and the convergence improves (red).](figure3.pdf){width="\columnwidth"}
The learner used in Fig. 2 only used the *best* hypothesis set when picking the next parameters; in other words, we set $P=1$. Evaluating multiple GPs is computationally expensive with so many parameters, so we made this restriction to save time. In spite of this, the learner discovered ramps that produced BEC in very few iterations. This is because the learner consistently fitted the correlation lengths of the 3 most important parameters, the end points of the ramps, very quickly. However, we found that the other correlation lengths were not estimated well and did not converge, even after a BEC was found. This meant that we were unable to make useful predictions about the cost landscape and could not reliably determine which parameters were least important.
Gramacy *et al.* [@gramacy_particle_2011] have suggested that good online estimation of the GP correlation lengths requires multiple particles. We pursued this in a different experiment, shown in Fig. 3. Here we used a learner with many particles, $P=16$, but had to use the simple parameterization for the ramps to save computational time, resulting in a total of 7 parameters. The overall trend is again that the machine learner is faster than Nelder-Mead, though the advantage is less pronounced: estimating the correlation lengths more carefully has hindered the convergence rate compared to the $16$-parameter case. Nevertheless, since we now have a more reliable estimate of the correlation lengths, we can take advantage of a different feature of the learner.
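For reference, the Nelder-Mead baseline used in these comparisons is straightforward to sketch. The following minimal pure-Python implementation (standard reflection, expansion, inside contraction, and shrink steps) minimizes a hypothetical smooth stand-in for the cost; in the experiment the cost is of course a measured quantity, and this sketch is not the code used there:

```python
def nelder_mead(f, x0, step=0.5, iters=300):
    """Minimal Nelder-Mead simplex search (sketch, not tuned for noisy costs)."""
    n = len(x0)
    simplex = [list(x0)] + [[x0[j] + (step if j == i else 0.0) for j in range(n)]
                            for i in range(n)]
    for _ in range(iters):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        centroid = [sum(p[j] for p in simplex[:-1]) / n for j in range(n)]
        refl = [2 * centroid[j] - worst[j] for j in range(n)]  # reflection
        if f(refl) < f(best):
            exp = [3 * centroid[j] - 2 * worst[j] for j in range(n)]  # expansion
            simplex[-1] = exp if f(exp) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):
            simplex[-1] = refl
        else:
            contr = [0.5 * (centroid[j] + worst[j]) for j in range(n)]  # contraction
            if f(contr) < f(worst):
                simplex[-1] = contr
            else:  # shrink every vertex toward the best one
                simplex = [best] + [[0.5 * (best[j] + p[j]) for j in range(n)]
                                    for p in simplex[1:]]
    return min(simplex, key=f)

# Hypothetical convex stand-in for the measured cost (minimum at (1, -2)).
cost = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
xmin = nelder_mead(cost, [0.0, 0.0])
```

On a smooth convex test cost like this, the simplex contracts onto the minimum; the real experimental cost is noisy and expensive to evaluate, which is what motivates the model-based learner.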
In Fig. 4(a) we show estimates of the cost landscape as 1D cross sections about the best measured point. We plot the two most sensitive parameters and the least. We ca
| 1,100 | 683 | 2,237 | 1,150 | null | null | github_plus_top10pct_by_avg |