s, unlike for a linear operator, the non-existence of an inverse is not just due to the set $\{ f^{-1}(0)\}$, which happens to be the only way a linear map can fail to be injective. Thus the map defined piecewise as $\alpha+2(1-\alpha)x$ for $0\leq x<1/2$ and $2(1-x)$ for $1/2\leq x\leq1$, with $0<\alpha<1$, is not invertible on its range although $\#\{ f^{-1}(0)\}=1$. Comparing Fig. \[Fig: Appel\] and Table \[Table: Appel\_bnds\], it is seen that in cases (b), (c) and (d), the intervals $[|f|_{\textrm{b}},\Vert f\Vert_{\textrm{B}}]$ are subsets of the $\lambda$-values for which the respective maps are not injective; this is to be compared with (a), (e) and (f), where the two sets are the same. Thus the linear bounds are not good indicators of the uniqueness properties of solutions of nonlinear equations, for which the Lipschitzian bounds are seen to be more appropriate.
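The non-injectivity claim for the piecewise map can be checked numerically. The sketch below uses the illustrative choice $\alpha = 1/2$ (any $0<\alpha<1$ behaves the same): it exhibits two distinct points with the same image, while the zero set $f^{-1}(0)$ is the single point $x=1$.

```python
# Numerical sketch (illustrative alpha = 0.5, not fixed by the text):
#   f(x) = alpha + 2(1-alpha)x   for 0 <= x < 1/2,
#   f(x) = 2(1-x)                for 1/2 <= x <= 1,
# fails to be injective even though f^{-1}(0) is a single point.

def f(x, alpha=0.5):
    if 0 <= x < 0.5:
        return alpha + 2 * (1 - alpha) * x
    elif 0.5 <= x <= 1:
        return 2 * (1 - x)
    raise ValueError("x must lie in [0, 1]")

# Non-injectivity: two distinct points with the same image.
assert f(0.0) == f(0.75) == 0.5

# The zero set is the single point x = 1 (grid scan over [0, 1]).
zeros = [x / 10000 for x in range(10001) if abs(f(x / 10000)) < 1e-9]
assert zeros == [1.0]
```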
  Function           $|f|_{\textrm{b}}$   $\Vert f\Vert_{\textrm{B}}$   $|f|_{\textrm{lip}}$   $\Vert f\Vert_{\textrm{Lip}}$
  ------------------ -------------------- ----------------------------- ---------------------- -------------------------------
  $f_{\textrm{a}}$   $0$                  $1$                           $0$                    $1$
  $f_{\textrm{b}}$   $0$                  $1/2$                         $0$                    $1$
  $f_{\textrm{c}}$   $0$                  $1/2$                         $0$                    $\infty$
  $f_{\textrm{d}}$   $2(\sqrt{2}-1)$      $\infty$                      $0$                    $2$
  $f_{\textrm{e}}$   $0$                  $1$                           $0$                    $1$
  $f_{\textrm{f}}$   $0$                  $1$                           $0$                    $1$
  Functions          $\sigma_{\textrm{Lip}}(f)$   $P\sigma(f)$
  ------------------ ---------------------------- --------------------------
  $f_{\textrm{a}}$   $[0,1]$                      $(0,1]$
  $f_{\textrm{b}}$   $[0,1]$                      $[0,1/2]$
  $f_{\textrm{c}}$   $[0,\infty)$                 $[0,1/2]$
  $f_{\textrm{d}}$   $[0,2]$                      $[2(\sqrt{2}-1),1]$
  $f_{\textrm{e}}$   $[0,1]$                      $(0,1)$
  $f_{\textrm{f}}$   $[0,1]$                      $(0,1)$
In view of the above, we may draw the following conclusions. If we choose to work in the space of multifunctions $\textrm{Multi}(X,\mathcal{T})$, with $\mathcal{T}$ the topology of pointwise biconvergence, in which every functional relation is (multi)invertible on its range, then we may make the following definition for the net of functions $f(\lambda;x)$ satisfying $f(\lambda;x)=x$.
**Definition 6.1.** *Let* $f(\lam
et(M)=4\gamma^3(\gamma-1)^9=4+2\gamma^3\ne 0$ and so we can recover $a_\tau$ as before.
Generalization to more servers {#sec-kserver}
==============================
In this section we prove Theorem \[THM-kserver\]. As was mentioned in the introduction, we will allow the database symbols to belong to a slightly larger alphabet $\Z_m$.
Let $q=2^{r-1}$ denote the number of servers $\cS_1,\cdots,\cS_q$ for some $r\ge 2$. Let $m=p_1p_2\cdots p_r$ where $p_1,p_2,\cdots,p_r$ are distinct primes. By Theorem \[Grolmusz\], there is an explicit $S$-matching vector family $\cF=(\cU,\cV)$ of size $n$ and dimension $k=n^{{O\left((\log\log n/\log n)^{1-1/r}\right)}}$ where $S={\{a\in \Z_m: a\mod p_i \in {\{0,1\}}\ \forall\ i \in[r]\}}\setminus {\{0\}}$. By Remark \[CRT\], $|S\cup {\{0\}}|=2^r=2q$.
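As a quick sanity check of this counting, one can enumerate $S\cup\{0\}$ for small toy moduli; the primes below are illustrative choices, not the protocol's parameters. By the Chinese remainder theorem there are exactly two admissible residues per prime, hence $2^r$ elements.

```python
# Toy check (illustrative primes, not protocol parameters): for
# m = p_1 * ... * p_r a product of distinct primes, the set
#   S u {0} = {a in Z_m : a mod p_i in {0,1} for all i}
# has exactly 2^r elements, one per choice of residues via CRT.

def matching_set(primes):
    m = 1
    for p in primes:
        m *= p
    s_with_zero = [a for a in range(m)
                   if all(a % p in (0, 1) for p in primes)]
    return m, s_with_zero

for primes in [(2, 3), (3, 5), (2, 3, 5), (3, 5, 7)]:
    m, s_with_zero = matching_set(primes)
    assert len(s_with_zero) == 2 ** len(primes)
```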
#### The Protocol:
We will work over the ring $\cR=\cR_{m,m}=\Z_m[\gamma]/(\gamma^m-1)$. The servers represent the database $\ba=({a_1,\cdots,a_n})\in \Z_m^n$ as a polynomial $F(\bx)\in \mathcal{R}[\bx]=\mathcal{R}[x_1,\cdots,x_k]$ given by $$F(\bx)=F(x_1,\cdots,x_k)=\sum_{i=1}^n a_i \bx^{\bu_i},$$ where $\cU = (\bu_1,\ldots,\bu_n)$ are given by the matching vector family $\cF = (\cU,\cV)$.
The user samples a uniformly random $\bz\in \Z_m^k$ and then sends $\bz+t_i\bv_\tau$ to $\cS_i$ for $i\in [q]$ where $t_i=i-1$. $\cS_i$ then responds with the value of $F$ at the point $\bgam^{\bz+t_i\bv_\tau}$, that is with $F(\bgam^{\bz+t_i\bv_\tau})$ and the value of the ‘first order derivative’ at the same point $F^{(1)}(\bgam^{\bz+t_i\bv_\tau})$. Notice that the protocol is private since $\bz+t\bv_\tau$ is uniformly distributed over $\Z_m^k$ for any fixed $\tau$ and $t$.
$$\begin{aligned}
\text{User} &: \text{samples } \bz \in \Z_m^k\\
\text{User} \to \cS_i &: \bz + t_i\bv_\tau\\
\cS_i \to \text{User} &: F(\bgam^{\bz+t_i\bv_\tau}),\ F^{(1)}(\bgam^{\bz+t_i\bv_\tau})\end{aligned}$$
#### Recovery:
Similarly to the 2-server analysis, we define $$G(t):=F(\bgam^{\bz+t\bv_\tau})
=\sum_{i=1}^n a_i \gamma^{{\langle \bz,\bu_i \rangle}+t{\langle \bv_\tau,\bu_i \rangle}}=c_0+\sum_{\ell\in S}c_\ell\gamma^{t\ell},$$ and $$g(T) = c_0+\sum_{\ell\in S}c_\ell T^\ell.$$
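The grouping of terms in $G(t)$ by the exponents $\ell = \langle \bv_\tau,\bu_i\rangle \bmod m$ can be illustrated with a small toy computation. The vectors and database below are arbitrary stand-ins (not a genuine matching vector family), and ring elements of $\cR=\Z_m[\gamma]/(\gamma^m-1)$ are represented as length-$m$ coefficient vectors over $\Z_m$, with multiplication by $\gamma^e$ acting as a cyclic shift.

```python
# Toy illustration of the grouping used in the recovery step. The
# vectors below are arbitrary, NOT a genuine matching vector family;
# m, k, and the database are toy sizes.

m, k = 6, 3
a = [2, 5, 1, 4]                       # toy database entries in Z_m
U = [(1, 0, 2), (0, 1, 1), (2, 2, 0), (1, 1, 1)]
v = (1, 2, 0)                          # stand-in for v_tau
z = (3, 0, 5)                          # user's random mask

dot = lambda x, y: sum(p * q for p, q in zip(x, y)) % m

def gamma_pow(e):                      # the ring element gamma^e
    c = [0] * m
    c[e % m] = 1
    return c

def add(c1, c2):
    return [(p + q) % m for p, q in zip(c1, c2)]

def scale(s, c):
    return [(s * p) % m for p in c]

def shift(c, e):                       # multiply by gamma^e (cyclic shift)
    return [c[(i - e) % m] for i in range(m)]

def G(t):                              # F(gamma^{z + t v}) evaluated directly
    out = [0] * m
    for ai, ui in zip(a, U):
        out = add(out, scale(ai, gamma_pow(dot(z, ui) + t * dot(v, ui))))
    return out

# Group terms by ell = <v, u_i> mod m: c_ell = sum_i a_i gamma^{<z, u_i>}.
c = {}
for ai, ui in zip(a, U):
    ell = dot(v, ui)
    c[ell] = add(c.get(ell, [0] * m), scale(ai, gamma_pow(dot(z, ui))))

for t in range(m):                     # G(t) = sum_ell gamma^{t ell} c_ell
    rhs = [0] * m
    for ell, c_ell in c.items():
        rhs = add(rhs, shift(c_ell, t * ell))
    assert G(t) == rhs
```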
erturbations (a)–(c) described in Sec. \[sec:exper\]. For all results below the system without the varied antenna, corresponding to $\lambda=0$, is chosen as the reference, whereas for the perturbed system the coupling constant is $\lambda'=\lambda_{50\Omega}$, $\lambda_{\rm oe}$, or $\lambda_{\rm hw}$, depending on the terminator.
![Real part $f_R(t)$ and imaginary part $f_I(t)$ of the fidelity amplitude for three types of perturbation: $\lambda_{\rm 50\Omega}$ (black); $\lambda_{\rm hw}$ (orange, light gray); $\lambda_{\rm oe}$ (green, dark gray) in the frequency range $8.0$–$8.5\rm\,GHz$. The time is given in units of the Heisenberg time $t_H=2\pi\hbar/\Delta$, where $\Delta$ is the mean level spacing. Solid lines show the experimental results. The theoretical curves are dotted for experimental parameters (available only for the 50$\Omega$ load case) and dashed for fitted parameters. The corresponding parameters and the transmission coefficient for the measuring antenna $a$ are listed in Tab. \[tab:01\]. The black dotted curve is nearly indistinguishable from the dashed one.](fig2a "fig:"){width=".95\columnwidth"}\
![Same as the previous panel, for the second frequency range.](fig2b "fig:"){width=".95\columnwidth"}
We start with a plot of the complex valued fidelity amplitude for one fr
of size one, i.e. $\lrabs{\eta}=1$, we define $$\begin{aligned}
H(w) := \E{\zeta(w,\eta) \zeta(w,\eta)^T }
\numberthis\label{e:m0}\end{aligned}$$ as the covariance matrix of the difference between the true gradient and a single sampled gradient at $w$. A standard run of SGD, with minibatch size $b := \lrabs{\eta_k} $, then has the following form: $$\begin{aligned}
w_{k+1} &= w_k - \delta \frac{1}{b} \sum_{i\in\eta_k} \nabla U_{i}(w_k)\\
&= w_k - \delta \nabla U(w_k) + \sqrt{\delta}\lrp{\sqrt{\delta} \zeta (w_k, \eta_k)}.
\numberthis \label{e:sgd}\end{aligned}$$ We refer to an SGD algorithm with step size $\delta$ and minibatch size $b$ as a $(\delta, b)$-SGD. Notice that $\eqref{e:sgd}$ is in the form of , with $\xi(w,\eta) = \sqrt{\delta} \zeta (w,\eta)$. The covariance matrix of the noise term is $$\label{e:noise_cov_sgd}
\E{\xi (w,\eta)\xi (w,\eta)^T} = \frac{\delta}{b} H(w).$$ Because the magnitude of the noise scales with $\sqrt{\delta}$, it follows that as $\delta \to 0$, the dynamics converge to deterministic gradient flow. However, the loss of randomness as $\delta\to 0$ is not desirable, as it has been observed that as SGD approaches GD, through either small step size or large batch size, the generalization error goes up [@jastrzkebski2017three; @he2019control; @keskar2016large; @hoffer2017train]; this is also consistent with our experimental observations in Section \[ss:acc\_rel\_var\].
Therefore, a more meaningful way to take the limit of SGD is to hold the noise term constant in . More specifically, we define the ***constant-noise limit*** of as $$d x_t = - \nabla U(x_t) dt + M(x_t) dB_t,
\numberthis
\label{e:sgd-limit}$$ where $M(x) := \sqrt{\frac{\delta}{b}H(x)}$. Note that this is in the form of , with noise covariance $M(x_t)^2$ matching that of SGD in . Using Theorem \[t:main\_nongaussian\], we can bound the $W_1$ distance between the SGD iterates $w_{k}$ from , and the continuous-time SDE $x_t$ from .
Importance of Noise Covariance {#ss:noise_covariance}
-----------------------------
n</p>
</div>
<div class="col-md-3">
<p>Fourth column</p>
</div>
</div>
A:
Use linear-gradient on body, since your .container is the parent and you won't have any wrapper around .container.
Sorry that I'm bothering you, but what I meant was to place the image
like this
image.prntscr.com/image/78940c8115c943a186500ac24556758b.jpeg
Add another parent div (being child of body) and add background-image to that div
body {
background: linear-gradient(to right, white 0%, white 50%, black 50%, black 100%);
}
#wrap {
background: url(//lorempixel.com/200/200) no-repeat 66.6% 0 / 25% auto
}
.container {
border: 1px solid #f01;
}
[class^="col-"] p {
color: white;
height: 75px;
background-color: blue
}
<link href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css" rel="stylesheet" />
<div id="wrap">
<div class="container">
<div class="col-xs-3 col-md-3">
<p>First column</p>
</div>
<div class="col-xs-3 col-md-3">
<p>Second column</p>
</div>
<div class="col-xs-3 col-md-3">
<p>Third column</p>
</div>
<div class="col-xs-3 col-md-3">
<p>Fourth column</p>
</div>
</div>
</div>
Q:
Oracle: Updates only allowed with comment
The question is maybe a little strange. But I am looking for a way that updates on a table are only allowed when the user gives a comment. The comments should be saved in a second audit-table when updating the normal table.
I think there is no way in Oracle for the user to give a comment when executing "update table set ...".
So I created my own procedure, where a user can pass the statement and the comment as parameters. Updates without this procedure are prevented by a trigger.
create or replace
PROCEDURE Update_Table(stmt varchar2, comment varchar2) AS
BEGIN
Insert into audit_table values(stmt, comment);
Execute immediate stmt;
END Update_Table;
I am not really happy with this solution. So maybe there is solution much simpler.
A:
Th
is nontrivial in the total Hilbert space including unphysical degrees of freedom.
In any case, except for such an unphysical complexity, we have now understood how space-time supersymmetry is realized in superstring field theory, and therefore are ready to study various consequences of space-time supersymmetry[@Kishimoto:2005bs] on a firm basis. We have to (re)analyze them precisely using the techniques developed in conventional quantum field theory.[^7] We hope to report on them in the near future.
Acknowledgements {#acknowledgements .unnumbered}
================
The author would like to give special thanks to Ted Erler for helpful discussions, which were essential for clarifying the structure of the supersymmetry algebra in the large Hilbert space. The main part of the work was completed at the workshop on “String Field Theory and Related Aspects VIII” held at ICTP-SAIFR in São Paulo, Brazil. The author also thanks the organizers, particularly Nathan Berkovits, for their hospitality and for providing a stimulating atmosphere.
Spinor conventions and Ramond ground states {#convention}
===========================================
In this paper, although it is mostly implicit, we adopt the chiral representation for $SO(1,9)$ gamma matrices $\Gamma^\mu$, in which $\Gamma^\mu$ is given by $$\Gamma^\mu\ =\
\begin{pmatrix}
0 & (\gamma^\mu)_{\alpha\dot{\beta}}\\
(\bar{\gamma}^\mu)^{\dot{\alpha}\beta} & 0
\end{pmatrix}\,,$$ where $\gamma^\mu$ and $\bar{\gamma}^\mu$ satisfy $${(\gamma^\mu\bar{\gamma}^\nu + \gamma^\nu\bar{\gamma}^\mu)_\alpha}^\beta
=\ 2\eta^{\mu\nu}{\delta_\alpha}^\beta\,,\qquad
{(\bar{\gamma}^\mu\gamma^\nu + \bar{\gamma}^\nu\gamma^\mu)^{\dot{\alpha}}}_{\dot{\beta}}
=\ 2\eta^{\mu\nu}{\delta^{\dot{\alpha}}}_{\dot{\beta}}\,.
$$ The charge conjugation matrix $\mathcal{C}$ satisfies the relations $$(\Gamma^\mu)^T\ =\ -\mathcal{C}\Gamma^\mu\mathcal{C}^{-1},\qquad
\mathcal{C}^T\ =\ -\mathcal{C}\,,$$ and is given in the chiral representation by $$\mathcal{C}\ =\
\begin{pmatrix}
0 & {C^\alpha}_{\dot
ee bosons in the semi-classical limit, as anticipated. At fixed $kf^2$ and for each interaction vertex, the power of the coupling constant $f$ is equal to the number of structure constants that appear. Since we are interested in computing OPEs involving the currents and the primary fields, let us write these fields in terms of the bosons $X^a$: $$\begin{aligned}
\label{current(X)}
\frac{j^a_{L,z}}{c_+} &=& (\partial g g^{-1})^a = i (f \partial X^a + f^2 \frac{{f^{a}}_{bc}}{2} X^c \partial X^b +
\frac{f^3}{6} {f^a}_{bc} {f^c}_{de} \partial X^e X^d X^b +...)
\nonumber \\
\frac{j^a_{L,\bar z}}{c_-} &=& (\bar{\partial} g g^{-1})^a = i (f \bar{\partial} X^a + f^2 \frac{{f^{a}}_{cb}}{2} X^b \bar{\partial} X^c +
\frac{f^3}{6} {f^a}_{bc} {f^c}_{de} \bar \partial X^e X^d X^b +...),\end{aligned}$$ $$\label{phi(X)}
\phi\ =\ e^{i f X_a t^a}\ =\ 1 + i f X_a t^a - \frac{f^2}{2}\, X_a t^a X_b t^b +...\,,$$ where in the last line the generators $t^a$ are taken in the representation associated to the primary field $\phi$.
The semi-classical behavior of the current-current OPE {#the-semi-classical-behavior-of-the-current-current-ope .unnumbered}
------------------------------------------------------
We study the semi-classical behavior of the OPE between two $z$-components of the left-current. The discussion generalizes straightforwardly to other current-current OPEs. We assume that the only operators that appear in the result of this OPE are composites of (derivatives of) left currents. This is true at the WZW point, and can presumably be proven at any point using conformal perturbation theory. Let us isolate one term in this OPE: $$\label{jjOneTerm}
j^a_{L,z}(z)\, j^b_{L,z}(w) = \ldots + {A^{ab}}_{a_p a_{p-1}\ldots a_2 a_1}(z-w, \bar z - \bar w)\, {:}j^{a_1}_{L,z}{:}j^{a_2}_{L,z}\ldots{:}j^{a_{p-1}}_{L,z}j^{a_p}_{L,z}{:}\ldots{:}{:}(w) + \ldots$$ Our goal is to evaluate the behavior of the tensor ${A^{ab}}_{a_p...a_1}(z-w, \bar z - \bar w)$ when the parameter $f$ is small. The reasoning will not depend on the particular current compo
[Figure: resolution diagram showing the exceptional divisors $E_1$ and $E_2$, the quotient singularities $\frac{1}{4}(-1,3)$, $\frac{1}{3}(-1,4)$, $\frac{1}{3}(-1,10)$ and $\frac{1}{10}(-1,3)$, and the intersection number $-\frac{1}{30}$.]
Locally at $P_3$ the polynomial $G_\lambda$ can be written as $$(x+\lambda y)^3 + 3xy^3 (y^2 - 2(\lambda^2+\lambda+1)xy + \lambda^2x^2)
+ 3x^2y^6(y+\lambda x)+x^3y^9$$ and thus changing the local coordinates with $(x,y) = (x_3,\lambda^2(y_3-x_3))$ which is compatible with the group action, one obtains $\gamma_\lambda x_3^6 + y_3^3 + \sum_{i,j} c_{\lambda ij} x_3^i y_3^j$ with $i+2j>6$ and $\gamma_\lambda \neq 0$. Hence the blow-up with respect to the weight $(1,2)$ resolves the curve at the point $P_3\in S_3=\frac{1}{3}(1,1)$. Let us denote by $E_3$ the exceptional divisor, then $\operatorname{mult}_{E_3}\pi^*(x_3^iy_3^j)=i+2j$ and hence $m_3 = 6$, $\nu_3 = 3$ (see section \[sec:Qres\]). Note that the equation $x_3^6 + y_3^3$ defines an *irreducible* curve in $S_3$. Using these local coordinates $\mathcal{O}_{S_3} = \CC\{ x_3^3, x_3^2y_3,x_3y_3^2,y_3^3\}$. The third vector space in can be described as $$\label{eq:M_P3}
\frac{\cO_{S_3,\zeta^{(k-5)}}}{\cM_{\cC_\lambda,P_3}^{(k)}}
= \frac{\cO_{S_3,\zeta^{(k+1)}}}{ \left\{ g \ \big| \ \operatorname{mult}_{E_3} \pi^{*} g
> \dfrac{k}{2} - 3 \right\}} \cong
\begin{cases}
0 & \text{ if }\ \ k = 0,\ldots,7, \\
\langle x_3^{\overline{k+1}} \rangle_\CC & \text{ if } \ \ k = 8,\ldots,11,
\end{cases}$$ where $\overline{k+1}$ denotes $k+1$ modulo 3. This is a consequence of the fact that $\mathcal{O}_{S_3,\zeta^0} = \mathcal{O}_{S_3}$, $\mathcal{O}_{S_3,\zeta^1} = \langle x_3,y_3 \rangle
by other variables when $L_i$ is *of type* $\textit{I}^o$ (resp. *of type* $\textit{I}^e$), and so on.
From now on, we eliminate suitable variables based on Equations (\[ea20\]), (\[ea22\]), (\[24\]), (\[24’\]), (\[ea25\]), (\[ea27\]), and (\[ea32\]), the equations $\sum_{l=0}^{k_j}z_{j-l}^{\ast}+\sum_{l=0}^{k_j}
\bar{\gamma}_{j-l}u_{j-l}^{\ast}=0$ for all $j\in \mathcal{B}_1$, and the equations $\sum_{l=0}^{k_j}z_{j-l}^{\ast}+\sum_{l=0}^{k_j}
\bar{\gamma}_{j-l}u_{j-l}^{\ast}=1$ for all $j\in \mathcal{B}_2$.
1. We first consider Equation (\[ea20\]). This case is similar to case (1) in the proof of Lemma A.8 in [@C2]. Thus, all lower triangular blocks $m_{j,i}$ can be eliminated by upper triangular blocks $m_{i,j}$ together with other variables arising in diagonal blocks.
On the other hand, if $i$ is odd and $L_i$ is *bound of type I*, then we have two equations involving $m_{i-1, i}, m_{i+1, i}, m_{i, i-1}', m_{i, i+1}'$, as explained above in this proof. Since these two equations involve lower triangular blocks, we must express them in terms of upper triangular blocks. By Equation (\[ea21\]) we have that $$v_{i-1}\cdot m_{i-1, i}={}^tm_{i, i-1}'h_i \textit{ and } v_{i+1}\cdot m_{i+1, i}={}^tm_{i, i+1}'h_i.$$ Therefore, the two equations involving $m_{i-1, i}, m_{i+1, i}, m_{i, i-1}', m_{i, i+1}'$ are identical, and they reduce to the following single equation: $$\delta_{i-1}v_{i-1}\cdot m_{i-1, i}+\delta_{i+1}\cdot {}^tm_{i, i+1}'h_i=0.$$\
2. By considering Equation (\[ea22\]), we see that $m_{i,i}^{\ast}$ can be eliminated by $m_{i,i}^{\ast\ast}$ when $L_i$ is *bound of type I* with $i$ odd.\
3. We consider Equation (\[24\]). If $L_i$ is *free of type $I$* with $i$ odd, then $v_i$ can be eliminated by $\left(y_iz_i, r_i, m_{i-1,i}, m_{i,i+1}\right)$, $y_i$ can be eliminated by $\left(t_i\right)$, and $x_i$ can be eliminated by $\left(z_iu_i, r_i, t_i, m_{i-1,i}, m_{i,i+1}\right)$.\
4. We consider Equation (\[ea25\]). If $L_i$ is *of type $I^o$*, then
nonumber\end{aligned}$$
Moreover, we have the following relations,
$$\begin{aligned}
& & \mbox{exp} \left( \hat{b}^{ij}_+(u) \right)
: \mbox{exp} \left( \hat{b}^{i'j'}(v) \right) : \\
& &~~~~= \left( \frac{u-v-\frac{1}{2} \hbar}
{u-v +\frac{1}{2} \hbar} \right)^{\delta^{ii'} \delta^{jj'}}
: \mbox{exp} \left( \hat{b}^{i'j'}(v) \right):
\mbox{exp} \left( \hat{b}^{ij}_+(u) \right), \\
& & \mbox{exp} \left( \hat{c}^{ij}_+(u) \right)
: \mbox{exp} \left( \hat{c}^{i'j'}(v) \right) : \\
& &~~~~= \left( \frac{u-v+\frac{1}{2} \hbar}
{u-v -\frac{1}{2} \hbar} \right)^{\delta^{ii'} \delta^{jj'}}
: \mbox{exp} \left( \hat{c}^{i'j'}(v) \right):
\mbox{exp} \left( \hat{c}^{ij}_+(u) \right). \\\end{aligned}$$
To specify the correspondence between our notation and that of Ref. [@sln] for $q$-bosons, we give the second observation
\[ob2\] The expressions $\hat{a}^{i}_\pm(u)$, $\hat{b}^{ij}_\pm(u)$ and $\hat{c}^{ij}_\pm(u)$ correspond to the fields $a^i_{\pm}(q^{\pm \frac{k+g}{2}}z)$, $b^{ij}_\pm(z)$ and $c^{ij}_\pm(z)$ of Ref. [@sln] respectively.
\[rem1\] Notice that in Ref. [@sln], the explicit expressions for $a^i_{\pm}(q^{\pm \frac{k+g}{2}}z)$ are symmetric with respect to $+ \leftrightarrow -$, but this is not the case for $\hat{a}^{i}_\pm(u)$. Part of the reason for this difference is that, for $q$-affine algebras, the Drinfeld currents $\psi^i_\pm(z)$ are defined in a symmetric way in $H^i_n$, whilst for Yangian doubles the currents $H^{\pm}_i$ are defined asymmetrically.
In the next subsection, we shall see that, despite the difference stated in Remark \[rem1\], the above observations are rather useful to guess the bosonic expressions for the Drinfeld currents of $DY_\hbar(sl_N)$.
Free boson representation of $DY_\hbar(sl_N)$ with level $k$
------------------------------------------------------------
Let us define
$$\begin{aligned}
& & H^{\pm}_i(u) = :
\mbox{exp} \left\{
\sum_{l=1}^i \hat{b}^{l,i+1}_{\pm}
( u \pm \frac{1}{2} (\frac{k}{2} + l -1) \hbar )
- \sum_{l=1}^{i-1} \hat{b}^{l,i}_{\pm}
( u \pm \frac{1}{2} (\frac{k
re $k =|{\widehat{S}}| < n$. For the LOCO and prediction parameters, based on $\mathcal{D}_{1,n}$, we also compute $\widehat{\beta}_{{\widehat{S}}}$, any estimator of the projection parameters restricted to ${\widehat{S}}$. In addition, for each $j \in {\widehat{S}}$, we further compute, still using $\mathcal{D}_{1,n}$ and the rule $w_n$, $\widehat{\beta}_{{\widehat{S}}(j)}$, the estimator of the projection parameters over the set $\widehat{S}(j)$. Also, for $l=1,2$, we denote by $\mathcal{I}_{l,n}$ the random subset of $\{1,\ldots, 2n\}$ containing the indices of the data points in $\mathcal{D}_{l,n}$.
Projection Parameters {#sec:projection}
---------------------
In this section we will derive various statistical guarantees for the projection parameters, defined in . We will first define the class of data-generating distributions on $\mathbb{R}^{d+1}$ for which our results hold. In the definition below, $S$ denotes a non-empty subset of $\{1,\ldots,d\}$ and $W_S = ({\rm vech}(X_S X_S^\top), X_SY)$.
\[def:Pdagger\] Let ${\cal P}_n^{\mathrm{OLS}} $ be the set of all probability distributions $P$ on $\mathbb{R}^{d+1}$ with zero mean, Lebesgue density and such that, for some positive quantities $A, a, u, U , v$ and $\overline{v}$,
1. the support of $P$ is contained in $[-A,A]^{d+1}$;
2. $\min_{ \{ S \colon |S| \leq k \} } \lambda_{\rm min}(\Sigma_S) \geq u$ and $\max_{ \{ S \colon |S| \leq k\} } \lambda_{\rm max}(\Sigma_S) \leq U$, where $\Sigma_S = \mathbb{E}_P[X_S X_S^\top]$;
3. $\min_{ \{S \colon |S| \leq k \} } \lambda_{\rm min}({\rm Var}_P(W_S))\geq v$ and $\max_{ \{S \colon |S| \leq k\} }
\lambda_{\rm max}({\rm Var}_P(W_S))\leq \overline{v}$.
4. $\min\{ U, \overline{v} \} \geq \eta$, for a fixed $\eta>0$.
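The eigenvalue conditions above can be probed empirically. The following sketch (with synthetic bounded covariates and illustrative sizes $d=4$, $k=2$, which are our choices, not the paper's) estimates the uniform lower bound $u$ of condition 2, using closed-form eigenvalues to stay dependency-free in the $2\times 2$ case.

```python
import random
from itertools import combinations

random.seed(1)

# Illustrative sketch (synthetic data, hypothetical sizes): estimate
# condition 2 -- the smallest eigenvalue of Sigma_S = E[X_S X_S^T],
# minimized over all subsets |S| <= k -- for bounded i.i.d. covariates
# with d = 4 and k = 2. Unif[-1,1] coordinates give E[x_i^2] = 1/3.

d, k, n = 4, 2, 20_000
X = [[random.uniform(-1, 1) for _ in range(d)] for _ in range(n)]

def sigma(S):
    """Empirical second-moment matrix E[X_S X_S^T] over the sample."""
    return [[sum(x[i] * x[j] for x in X) / n for j in S] for i in S]

def lambda_min(M):
    """Smallest eigenvalue, closed form for 1x1 or 2x2 symmetric M."""
    if len(M) == 1:
        return M[0][0]
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return (tr - (tr * tr - 4 * det) ** 0.5) / 2

u = min(lambda_min(sigma(S))
        for r in range(1, k + 1)
        for S in combinations(range(d), r))
assert u > 0.2   # close to the population value 1/3
```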
The first compactness assumption can easily be modified by assuming instead that $Y$ and $X$ are sub-Gaussian, without any technical difficulty. We make such a boundedness assumption to simplify our results. The bound on the smallest eigenvalue of $\Sigma_S$, uniformly over all subsets $S$, is natu
; \\
\mathcal{X}_{i,2,3}(m)={}^tr_i\bar{a}_it_i+x_i+z_iu_i+\mathcal{P}^i_{2, 3}=\bar{d}_i=0
\end{array} \right.$$ and $$\begin{gathered}
\label{24'}
\mathcal{X}_{i,2,2}(m)=\bar{\gamma}_i+z_i+z_i^2+1/2\left({}^tm_{i-1, i}'h_{i-1}m_{i-1, i}'+{}^tm_{i+1, i}'h_{i+1}m_{i+1, i}'\right)+\\
\left(\delta_{i-2}'(m_{i-2, i}^{\#})^2+\delta_{i+2}'(m_{i+2, i}^{\#})^2\right)+
\left(\delta_{i-3}(m_{i-3, i}^{\natural})^2+\delta_{i+3}(m_{i+3, i}^{\natural})^2\right)=\bar{f}_i=\bar{\gamma}_i.
\end{gathered}$$
Thus we get polynomials $\mathcal{X}_{i,1,2}, \mathcal{X}_{i,1,3}, \mathcal{X}_{i,2,3}, \mathcal{X}_{i,2,2}-\bar{\gamma}_i$ on $\mathrm{Ker~}\tilde{\varphi}/\tilde{M}^1$, vanishing on the subscheme $\mathrm{Ker~}\varphi/\tilde{G}^1$.\
4. Assume that $i$ is even and that $L_i$ is *of type* $\textit{I}^o$. By Equation (\[13\]) which involves an element of $\tilde{M}^1(R)$, each entry of $b_i'$ has $\pi$ as a factor so that $b_i'\equiv b_i=0$ mod $(\pi\otimes 1)(B\otimes_AR)$. Let $\tilde{m}\in \mathrm{Ker~}\tilde{\varphi}(R)$ be a lift of $m$. By using an argument similar to the paragraph just before Equation (\[ea20\]) of Step (1), if we write the $(1, 2)$-block of the $(i, i)$-block of the formal matrix product $\sigma({}^t\tilde{m})\cdot h\cdot \tilde{m}$ as $\xi^{i/2}\cdot \pi\mathcal{X}_{i,1,2}(\tilde{m})$, where $\mathcal{X}_{i,1,2}(\tilde{m}) \in M_{(n_i-1)\times 1}(B\otimes_AR)$, then the image of $\mathcal{X}_{i,1,2}(\tilde{m})$ in $M_{(n_i-1)\times 1}(B\otimes_AR)/(\pi\otimes 1)M_{(n_i-1)\times 1}(B\otimes_AR)$ is independent of the choice of the lift $\tilde{m}$ of $m$. Therefore, we may denote this image by $\mathcal{X}_{i,1,2}(m)$. As for Equation (\[ea20\]) of Step (1), we need to express $\mathcal{X}_{i,1,2}(m)$ as matrices. Recall that $\pi^ih_i=\xi^{i/2} \begin{pmatrix} a_i&0\\ 0 &1 +2\bar{\gamma}_i \end{pmatrix}
=\pi^i\cdot(-1)^{i/2}\begin{pmatrix} a_i&0\\ 0 &1 +2\bar{\gamma}_i \end{pmatrix}$. We write $m_{i,i}$ as $\begin{pmatrix} id&\p
\frac{\nabla D(x)}{D(x)}} dx = \int_{0}^{x} \lrp{\frac{\nabla U(x)}{D(x)}} dx + \log D(x) - \log D(0)$. We can verify that $p^*(x) \propto e^{-V(x)}$ satisfies .
For a concrete example, let the potential $U(x)$ and the diffusion function $M(x)$ be defined as $$\begin{aligned}
& U(x) := \threecase{\frac{1}{2} x^2}{x\in[-1,4]}{\frac{1}{2} (x+2)^2 - 1}{x\leq -1}{\frac{1}{2} (x-8)^2 -16}{x\geq 4} \\
& M(x) = \threecase
{\frac{1}{2} (x+2)}{x\in[-2,8]}
{1}{x \leq -2}
{6}{x \geq 8}.
\end{aligned}$$
We plot $U(x)$ in Figure \[fig:1d\_ux\]. Note that $U(x)$ has two local minima: a shallow minimum at $x=-2$ and a deeper minimum at $x=8$. A plot of $M(x)$ can be found in Figure \[fig:1d\_mx\]. $M(x)$ is constructed to have increasing magnitude at larger values of $x$. This has the effect of biasing the invariant distribution towards smaller values of $x$.
We plot $V(x)$ in Figure \[fig:1d\_vx\]. Remarkably, $V(x)$ has only one local minimum at $x=-2$. The larger minimum of $U(x)$ at $x=8$ has been smoothed over by the effect of the large diffusion $M(x)$. This is very different from when the noise is homogeneous (e.g., $M(x)=I$), in which case $p^*(x) \propto e^{-U(x)}$. We also simulate (using ) for the given $U(x)$ and $M(x)$ for 1000 samples (each simulated for 1000 steps), and plot the histogram in Figure \[fig:1d\_histo\].
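The histogram experiment can be reproduced with a short Euler–Maruyama simulation. The step size, horizon, and initial point below are our assumptions, since the text does not fix them; only the piecewise $U$ and $M$ come from the definitions above.

```python
import random

random.seed(2)

# Sketch of the histogram experiment (step size h, horizon, and the
# initial point are our assumptions): simulate
#   dx = -U'(x) dt + M(x) dB
# by Euler-Maruyama for the piecewise U and M defined above,
# with 1000 independent samples of 1000 steps each.

def grad_U(x):
    if -1 <= x <= 4:
        return x
    return x + 2 if x < -1 else x - 8

def M(x):
    if -2 <= x <= 8:
        return 0.5 * (x + 2)
    return 1.0 if x < -2 else 6.0

def simulate(x0=0.0, h=0.01, steps=1000):
    x = x0
    for _ in range(steps):
        x += -grad_U(x) * h + M(x) * (h ** 0.5) * random.gauss(0, 1)
    return x

samples = [simulate() for _ in range(1000)]
assert all(abs(s) < 100 for s in samples)  # mean reversion keeps paths bounded
```

A histogram of `samples` can then be compared against $p^*(x) \propto e^{-V(x)}$.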
Assumptions and Definitions {#ss:ass}
-------------------------------------

In this section, we state the assumptions and definitions that we need for our main results in Theorem \[t:main\_gaussian\] and Theorem \[t:main\_nongaussian\].
We assume that $U(x)$ satisfies \[ass:U\_properties\]
1. The function $U(x)$ is continuously differentiable on $\mathbb{R}^d$ and has Lipschitz continuous gradients; that is, there exists a constant $L\geq0$ such that\
for all $x,y\in \mathbb{R}^d$, $
\lVert \nabla U(x) - \nabla U(y) \rVert_2 \le L
$ (resp. $g_{i+2, i}$) if $L_{i-2}$ (resp. $L_{i+2}$) is *of type* $\textit{I}^o$.
- $k_{i-2, i}$ (resp. $k_{i+2, i}$) is the $(n_{i-2}-1, n_i)^{th}$-entry (resp. $(n_{i+2}-1, n_i)^{th}$-entry) of the matrix $g_{i-2, i}$ (resp. $g_{i+2, i}$) if $L_{i-2}$ (resp. $L_{i+2}$) is *of type* $\textit{I}^e$.\
4. Assume that $i$ is odd. Then $g$ induces the identity on $A_i/B_i$. To interpret this as a matrix, we consider the following $(1\times n_i)$-matrix: $$\left\{
\begin{array}{l l}
v_i\cdot (g_{i, i}-\mathrm{Id}_{n_i}) & \quad \textit{if $L_i$ is \textit{free of type I}};\\
\delta_{i-1}v_{i-1}\cdot g_{i-1, i}+\delta_{i+1}v_{i+1}\cdot g_{i+1, i} & \quad \textit{if $L_i$ is \textit{bound of type I}}.
\end{array} \right.$$
Here,
- $v_{i}=(0,\cdots, 0, 1)$ of size $1\times n_{i}$ and $\mathrm{Id}_{n_i}$ is the identity matrix of size $n_i \times n_i$.
- $v_{i-1}=(0,\cdots, 0, 1)$ (resp. $v_{i-1}=(0,\cdots, 0, 1, 0)$) of size $1\times n_{i-1}$ if $L_{i-1}$ is *of type* $\textit{I}^o$ (resp. *of type* $\textit{I}^e$).
- $v_{i+1}=(0,\cdots, 0, 1)$ (resp. $v_{i+1}=(0,\cdots, 0, 1, 0)$) of size $1\times n_{i+1}$ if $L_{i+1}$ is *of type* $\textit{I}^o$ (resp. *of type* $\textit{I}^e$).\
Then each entry of the above matrix lies in the ideal $(\pi)$. If $L_i$ is *of type II*, then $A_i=B_i$ so that there is no contribution.\
5. Assume that $i$ is odd. The fact that $g$ induces the identity on $A_i/B_i$ is equivalent to the fact that $g$ induces the identity on $B_i^{\perp}/A_i^{\perp}$.
We give another description of this condition. Since the space $V$ has a non-degenerate bilinear form $h$, $V$ can be identified with its own dual. We define the adjoint $g^{\ast}$, characterized by $h(gv,w)=h(v, g^{\ast}w)$. Then the fact that $g$ induces the identity on $B_i^{\perp}/A_i^{\perp}$ is the same as the fact that $g^{\ast}$ induces the identity on $A_i/B_i$.
In terms of matrices, we consider the following $(1\times n_i)$-matrix: $$\left\{
\begi
ality proposal in this example predicts that the dual is defined by a heterotic $E_8 \times E_8$ compactification on $[T^4/{\mathbb Z}_2]$, with $E_8$ bundle defined by ${\cal E}^* \otimes {\cal E}$, $\wedge^{\rm even} {\cal E}$, for ${\cal E} = {\cal O}^8$ on $T^4$, but such that ${\cal E}^* \otimes {\cal E}$ and $\wedge^{\rm even} {\cal E}$ are odd under the action of the ${\mathbb Z}_2$ defining $[T^4/{\mathbb Z}_2]$. We do not see how such an $E_8$ bundle on $[T^4/{\mathbb Z}_2]$ could be obtained from embedding an $SU(n)$ bundle in the usual fashion, and indeed, as remarked earlier, it need not be: the duals in general may only be describable by fibered WZW models. That said, the reader should note that the spectrum computed above is nearly the same as the massless spectrum of an $E_8 \times E_8$ string compactified on a (2,2) $[T^4/{\mathbb Z}_2]$, which in general terms is consistent with the existence of a duality between the current Spin$(32)/{\mathbb Z}_2$ gerbe compactification and an $E_8 \times
E_8$ compactification. So, although we cannot check the details at this time, certainly in broad brushstrokes this is consistent.
Examples in Distler-Kachru models
---------------------------------
In table \[table:DK-duality-exs\] we tabulate the combinatorial data for a number of anomaly-free Distler-Kachru (0,2) GLSM’s of the pertinent form. Each describes a bundle ${\cal E}$ over a Calabi-Yau hypersurface in a weighted projective stack, $${\mathbb P}^n_{[w_0, \cdots, w_n]}[w_0 + \cdots + w_n],$$ a ${\mathbb Z}_2$ gerbe over a Calabi-Yau space, where the (rank 8) bundle is given as a kernel of the form $$0 \: \longrightarrow \: {\cal E} \: \longrightarrow
\: \oplus_a {\cal O}(n_a) \: \longrightarrow \:
\oplus_i {\cal O}(m_i) \: \longrightarrow \: 0.$$
$w_0, \cdots, w_4$ $n_a$ $m_i$
  -------------------- ------------ ---------
$2,2,2,4$ $1^9$ $9$
$2,2,2,2,2$ $1^9, 9$ $7, 11$
$2,2,2,2,2$ $3^9, 19$ $9, 21$
$2,2,2,2,4$
114 116 0.0300
Only blood urea nitrogen differences met criteria for partitioning by season for reference intervals.
10.1371/journal.pone.0115739.t003
###### Hematology and plasma biochemical parameters that were significantly correlated with water temperature in juvenile loggerhead sea turtles (*Caretta caretta*) sampled in Core Sound, North Carolina, USA.
{#pone.0115739.t003g}
  Parameter                          Correlation with Water Temperature   Spearman's ρ   P-value
  ---------------------------------- ------------------------------------ -------------- ----------
  Estimated white blood cell count   −                                    -0.18          0.010
  Heterophils                        −                                    -0.32          < 0.001
  Phosphorus                         −                                    -0.20          0.005
  Creatine phosphokinase             −                                    -0.28          < 0.001
  Packed cell volume                 +                                    0.40           < 0.001
  Glucose                            +                                    0.22           0.002
  Aspartate aminotransferase         +                                    0.44           < 0.001
  Calcium                            +                                    0.24           0.001
  Potassium                          +                                    0.49           < 0.001
  Uric Acid                          +                                    0.17           0.020
Assessment for differences in clinical pathology parameters by sex revealed that glucose was higher in males than females (P = 0.03). A number of parameters were positively correlated with straight carapace length, including packed cell volume (Spearman's ρ = 0.30; P = 0.004), total protein (Spearman's ρ = 0.25; P = 0.02), albumin (Spearman's ρ = 0.29; P = 0.006), and phosphorus (Spearman's ρ = 0.40; P = 0.0001)
silon}}$, then $$\begin{aligned}
W_1\lrp{p^*, p^y_{k\delta}} \leq 2\hat{\epsilon}
\end{aligned}$$ where $p^y_t := \Law(\by_t)$.
Let $\epsilon := \frac{\lambda}{16 (L+\LN^2)} \exp\lrp{-\frac{7\aq\Rq^2}{3}} \hat{\epsilon}$. Let $f$ be defined as in Lemma \[l:fproperties\] with the parameter $\epsilon$.
$$\begin{aligned}
& \E{\lrn{\bx_{i\delta} - \by_{i\delta}}_2}\\
\leq& 2\exp\lrp{\frac{7\aq\Rq^2}{3}}\E{f(\bx_{i\delta} - \by_{i\delta})} + 2\exp\lrp{\frac{7\aq\Rq^2}{3}}\epsilon\\
\leq& 2\exp\lrp{\frac{7\aq\Rq^2}{3}}\lrp{e^{-\lambda i\delta} \E{f(\bx_{0} - \by_{0})} + \frac{6}{\lambda} \lrp{L + \LN^2} \epsilon} + 2\exp\lrp{\frac{7\aq\Rq^2}{3}}\epsilon\\
\leq& 2\exp\lrp{\frac{7\aq\Rq^2}{3}}e^{-\lambda i\delta} \E{f(\bx_{0} - \by_{0})} + \frac{16\lrp{L+\LN^2}}{\lambda}\exp\lrp{\frac{7\aq\Rq^2}{3}} \cdot \epsilon
\numberthis \label{e:t:asdkja:1}\\
=& 2\exp\lrp{\frac{7\aq\Rq^2}{3}}e^{-\lambda i\delta} \E{f(\bx_{0} - \by_{0})} + \hat{\epsilon}
\end{aligned}$$
where the first inequality is by item 4 of Lemma \[l:fproperties\], the second inequality is by Corollary \[c:main\_gaussian:1\] (notice that $\delta$ satisfies the requirement on $T$ in Theorem \[t:main\_gaussian\], for the given $\epsilon$). The third inequality uses the fact that $1\leq L/m \leq \frac{\lrp{L+\LN^2}}{\lambda}$.
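The last equality in the display above is simply the definition of $\epsilon$ unwinding; written out (an added check, using only the definition of $\epsilon$ given earlier):

```latex
\frac{16\lrp{L+\LN^2}}{\lambda}\exp\lrp{\frac{7\aq\Rq^2}{3}} \cdot \epsilon
  = \frac{16\lrp{L+\LN^2}}{\lambda}\exp\lrp{\frac{7\aq\Rq^2}{3}}
    \cdot \frac{\lambda}{16\lrp{L+\LN^2}}\exp\lrp{-\frac{7\aq\Rq^2}{3}}\,\hat{\epsilon}
  = \hat{\epsilon}.
```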
The first claim follows from substituting $\bx_0 = \by_0$ into , so that the first term vanishes; by the definition of $\epsilon$, the second term is exactly $\hat{\epsilon}$.
For the second claim, let $\bx_0 \sim p^*$, the invariant distribution of . From Lemma \[l:energy\_x\], we know that $\bx_0$ satisfies the required initial conditions in this Lemma. Continuing from , $$\begin{aligned}
& \E{\lrn{\bx_{i\delta} - \by_{i\delta}}_2}\\
\leq& 2\exp\lrp{\frac{7\aq\Rq^2}{3}}\lrp{2e^{-\lambda i\delta} \E{\lrn{\bx_0}_2^2 + \lrn{\by_0}_2^2} + \frac{6}{\lambda} \lrp{L + \LN^2} \epsilon} + \epsilon\\
\leq& 2\exp\lrp{\frac{7\aq\Rq^2}{3}}\lrp{2e^{-\lambd
b \beta \int_{0}^{+\infty} (b z + c)^{2}
\exp{\left( - z^{\frac{1}{a}} \right)} dz \allowdisplaybreaks \\
&= (k_{1}^{2} + k_{2}^{2}) b^{3} \beta
\int_{0}^{+\infty} z^{2} \exp{\left( - z^{\frac{1}{a}} \right)} dz
+ (k_{1}^{2} - k_{2}^{2}) b^{3} \beta
\int_{0}^{c / b} z^{2} \exp{\left( - z^{\frac{1}{a}} \right)} dz \\
&\quad + 2 (k_{1}^{2} - k_{2}^{2}) b^{2} c \beta
\int_{c / b}^{+\infty} z \exp{\left( - z^{\frac{1}{a}} \right)} dz \\
&\quad + (k_{1}^{2} + k_{2}^{2}) b c^{2} \beta
\int_{0}^{+\infty} \exp{\left( - z^{\frac{1}{a}} \right)} dz
+ (k_{1}^{2} - k_{2}^{2}) b c^{2} \beta
\int_{0}^{c / b} \exp{\left( - z^{\frac{1}{a}} \right)} dz. \end{aligned}$$ When $c < 0$, we have $$\begin{aligned}
\operatorname{{E}}[\Pe(Z + c)^{2}]
&= k_{2}^{2} b \beta
\int_{- \infty}^{0} (b z + c)^{2} \exp{\left( - (- z)^{\frac{1}{a}} \right)} dz
+ k_{2}^{2} b \beta
\int_{0}^{- c / b} (b z + c)^{2} \exp{\left( - z^{\frac{1}{a}} \right)} dz \\
&\quad + k_{1}^{2} b \beta
\int_{- c / b}^{+\infty} (b z + c)^{2} \exp{\left( - z^{\frac{1}{a}} \right)} dz
\allowdisplaybreaks \\
&= k_{2}^{2} b \beta
\int_{0}^{+\infty} (- b z + c)^{2} \exp{\left( - z^{\frac{1}{a}} \right)} dz
+ k_{2}^{2} b \beta
\int_{0}^{- c / b} (b z + c)^{2} \exp{\left( - z^{\frac{1}{a}} \right)} dz \\
&\quad + k_{1}^{2} b \beta
\int_{- c / b}^{+\infty} (b z + c)^{2} \exp{\left( - z^{\frac{1}{a}} \right)} dz
\allowdisplaybreaks \\
&= (k_{1}^{2} + k_{2}^{2}) b^{3} \beta
\int_{0}^{+\infty} z^{2} \exp{\left( - z^{\frac{1}{a}} \right)} dz
- (k_{1}^{2} - k_{2}^{2}) b^{3} \beta
\int_{0}^{- c / b} z^{2} \exp{\left( - z^{\frac{1}{a}} \right)} dz \\
&\quad + 2 (k_{1}^{2} - k_{2}^{2}) b^{2} c \beta
\int_{- c / b}^{+\infty} z \exp{\left( - z^{\frac{1}{a}} \right)} dz \\
&\quad + (k_{1}^{2} + k_{2}^{2}) b c^{2} \beta
\int_{0}^{+\infty} \exp{\left( - z^{\frac{1}{a}} \right)} dz
- (k_{1}^{2} - k_{2}^{2}) b c^{2} \beta
\int_{0}^{- c / b} \exp{\left( - z^{\frac{1}{a}} \right)} dz. \end{aligned}$$ From the above, for any $c \in \mathbb{R}$, we have $$\begin{alig
uence ${\mathscr{I}} \to {\mathbb{S}}$ of steps (usually ${\mathscr{I}}$ will be the natural numbers ${\mathbb{N}}$).
We revisit the three fundamental P’s (path, procedure, and process – §\[S:THREE\_P\]). A walk in step space decomposes into these three sequences: $\textrm{walk}_{\,i} = (\textrm{path}_{\,i}, \textrm{process}_{\,i}, \textrm{procedure}_{\,i})$. This triple is not logically independent, and so shorter characterizations of step space exist. We nevertheless retain the three-element formulation, which covers all possibilities in a simple fashion.
[\[Extended projection\]]{} \[D:EXTENDED\_PROJECTION\] Let $\lbrace {\mathit{s}}_n \rbrace$ be a walk in step space ${\mathbb{S}} = \Lambda \times {\mathscr{F}} \times {\mathbf{F}}$ and let ${\mathscr{I}}$ be a denumerable index set (usually ${\mathbb{N}}$). Use the locus projection $\mho_\Lambda \colon {\mathbb{S}} \to \Lambda$ of definition \[D:STEP\_SPACE\_PROJECTION\] to construct the sequential *path* projection $\overline{\mho}_\Lambda \colon {\mathbb{S}}^{\mathscr{I}} \to \Lambda^{\mathscr{I}}$ via setting $\overline{\mho}_\Lambda(\lbrace {\mathit{s}}_n \rbrace) = \lbrace (i,\mho_\Lambda({\mathit{s}})) \colon
(i,{\mathit{s}}) \in \lbrace {\mathit{s}}_n \rbrace \rbrace$. Similarly define the sequential *process* and *procedure* projections $\overline{\mho}_{\mathbf{F}} \colon {\mathbb{S}}^{\mathscr{I}} \to {\mathbf{F}}^{\mathscr{I}}$ and $\overline{\mho}_{\mathscr{F}} \colon {\mathbb{S}}^{\mathscr{I}} \to {\mathscr{F}}^{\mathscr{I}}$.
With $\lbrace x_n \rbrace \colon {\mathscr{I}} \to X$ a sequence in some set $X$, we alternatively denote the sequence’s ${{i}^{\text{th}}}$ term by $x_i = \lbrace x_n \rbrace(i)$.
\[L:RELATED\_PROJECTION\] Let $\langle \Psi, \Phi \rangle$ and locus set $\Lambda$ be the bases for step space ${\mathbb{S}} = \Lambda \times {\mathscr{F}} \times {\mathbf{F}}$. Let $\lbrace {\mathit{s}}_n \rbrace$ be a walk ${\mathscr{I}} \to {\mathbb{S}}$. For each $i \in {\mathscr{I}}$, $(\overli
to Appendix \[nc\] to get a first glimpse into why the case of $p=2$ is really different. Some of the ideas behind our construction can be seen in the simple example illustrated in Appendix \[cfot\].
Acknowledgements
----------------
This paper was originally the second half of [@C2]. Owing to its length and technical difficulty, we decided to divide it into two papers. The author greatly thanks the referee of Algebra & Number Theory, who read [@C2], and Professor Brian Conrad for incredibly precise and helpful comments and discussions on this project. The author also thanks Professor Chia-Fu Yu for pointing out an error in this paper and [@C2]; this is explained in Remark \[correction\].
Structure theorem for hermitian lattices and notations {#sthln}
======================================================
In this section, we explain a structure theorem for hermitian lattices. This theorem is proved in [@C2]. Thus we take necessary definitions and theorems from [@C2], without providing proofs.
Notations {#Notations}
---------
Notations and definitions in this section are taken from [@C1], [@GY], [@J], and [@C2].
- Let $F$ be an unramified finite extension of $\mathbb{Q}_2$ with $A$ its ring of integers and $\kappa$ its residue field.
- Let $E$ be a ramified quadratic field extension of $F$ with $B$ its ring of integers.
- Let $\sigma$ be the non-trivial element of the Galois group $\mathrm{Gal}(E/F)$.
- The lower ramification groups $G_i$’s of the Galois group $\mathrm{Gal}(E/F)$ satisfy one of the following: $$\left\{
\begin{array}{l }
\textit{Case 1}: G_{-1}=G_{0}=G_{1}, G_{2}=0;\\
\textit{Case 2}: G_{-1}=G_{0}=G_{1}=G_{2}, G_{3}=0.
\end{array} \right.$$ In *Case 2*, based on Section 6 and Section 9 of [@J], there is a suitable choice of a uniformizer $\pi$ of $B$ such that $$\pi=\sqrt{2\delta}, \textit{where $\delta\in A $ and $\delta\equiv 1 \mathrm{~mod~}2$}.$$ Thus $E=F(\pi)$ and $\sigma(\pi)=-\pi$. From now on, we assume that $E/F$ satisfies *C
0 0 1 0 0.25 0
Fatal 0 1 1 1 0 1 0 0 1 3 0.25 0.75
Cardiac events (n=2)
Fatal 0 0 1 1 0 0 0 0 1 1 0.25 0.75
Total 6 4 8 9 4 9 6 8 24 30 6.00 7.50
Injuries are divided by type (Acute Spinal Cord Injury (ASCI), Traumatic Brain Injury (TBI) and cardiac events) and by clinical outcome (indented below type of injury). An annual average, that is, the total number of events divided by the 4 years, is also provided.
Average is calculated for the 4 years that the data have been collected.
Owing to the small changes in numbers per year, incidences were calculated on the annual average of injuries over the 4-year period ([table 1](#BMJOPEN2012002475TB1){ref-type="table"}).
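As an added arithmetic check, the annual averages in table 1 (6.00 Junior + 7.50 Senior = 13.5 events per year), combined with the 651 146 registered players, reproduce the headline incidence:

```latex
\frac{13.5}{651\,146} \times 100\,000 \approx 2.07
\text{ catastrophic injuries per } 100\,000 \text{ players per year.}
```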
With an estimated 651 146 players at both levels (Junior: n=529 483; Senior: n=121 663) in South Africa, the average annual incidence for all catastrophic injuries (TBI, cardiac events and ASCIs) was 2.07 per 100 000 players (95% CI 0.97 to 3.18). Senior players had a significantly higher incidence of these events (6.16, 95% CI 1.75 to 10.58) than Junior players (1.13, 95% CI 0.23 to 2.04; p=0.03). The average annual incidence for all TBIs and ASCIs combined (excluding cardiac events) was also significantly higher at the Senior level (5.96, 95% CI 1.62 to 10.30) than the Junior level (1.09, 95% CI 0.20 to 1.97) (p=0.03) (combined=2.00 per 100 000 players, 95% CIs 0.91 to 3.08) between 2008 and 2011. In combination, *permanent* TBIs and ASCIs occurred significantly more often at the Senior level (5.14 per 100 000 players, 95% CIs 1.11 to 9.16) than the Junior level (0.33 per 100 000 players, 95% CIs: 0 to 0.82; p=0.02) between 2008 and 2011 (combined: 1.
cancer (*n* = 5), malignancies of the upper respiratory tract (*n* = 2), advanced melanoma (*n* = 2), Hodgkin\'s lymphoma (*n* = 1) and Merkel cell carcinoma (*n* = 1) ([@B7]--[@B15]). With respect to demographics, the patients were either of Asian (*n* = 6) or Caucasian (*n* = 5) origin, aged 49--87 years and predominantly male (*n* = 10) ([@B7]--[@B15]). To our knowledge, there are no reports of similar cases associated with the use of CTLA-4 and PD-L1 inhibitors.
######
Summary of case reports documenting development of acute pulmonary tuberculosis in cancer patients treated with PD-1 inhibitors.
  **Patient(s)**   **Country of origin of report**   **Type of malignancy**                     **PD-1 inhibitor**   **Outcome**    **References**
  ---------------- --------------------------------- ------------------------------------------ -------------------- -------------- ------------------------------------------
  87 M             Singapore                         Hodgkin\'s lymphoma                        Pembrolizumab        Survived       ([@B7])
  72 M             Japan                             NSCLC[^\*^](#TN1){ref-type="table-fn"}     Nivolumab            Not reported   ([@B8])
  59 M             China                             Stage 4 pulmonary adenocarcinoma           Nivolumab            Survived       ([@B10])[^+^](#TN2){ref-type="table-fn"}
  50 M             France                            Metastatic melanoma                        Pembrolizumab        Survived       ([@B9])
  64 M             France                            NSCLC                                      Nivolumab            Died           ([@B9])
  65 F             China                             Advanced melanoma                          Pembrolizumab        Survived       ([@B11])
  56 M
rans\] demonstrates that the desired result is obtained when the time scales are prolonged: With $T_{\rm ramp} = 200\,T$ and $T_{\rm pulse} = 1070\,T$, one has practically complete transfer.
It is, of course, also possible to employ the method outlined here to prepare the particle in superpositions of defect states. For instance, if one chooses $T_{\rm pulse}$ such that the phase integral in Eq. (\[eq:int\]) yields $\pi/2$, rather than $\pi$, the resulting state describes a particle which, after initially being localized at a single defect, is eventually found with equal probability on either one. In the same manner, any desired probability ratio can be obtained; Fig. \[fig:super\] shows an example where the particle remains with a probability of $1/4$ at the initial defect, and is found with a probability of $3/4$ at the other one, after forcing according to Eq. (\[eq:env\]).
Conclusion {#sec:conclusion}
==========
It has been shown in this paper that a periodic force can drastically alter the energy splitting associated with the two states bound by two identical defects in a one-dimensional tight-binding lattice. In the presence of the force, the band width $W$ entering the energy splitting (\[eq:relative\]) has to be replaced by the effective width (\[eq:qeband\]), so that the splitting can be monitored within wide ranges, and even be completely suppressed. Thus, the times required for coherent population exchange between the defects can be varied over several orders of magnitude by adjusting the amplitude of the force.
The strategy employed here to give a matter-of-principle illustration of coherent control of population transfer relies on the adiabatic principle, and thus is restricted to forces with slowly varying amplitudes. It appears possible to overcome this restriction: Utilizing techniques developed for the coherent control of molecules [@JudsonRabitz92; @AssionEtAl98], it might be possible to design even rapidly changing envelopes which effectuate a guided transport of a particle from one
ac{1-u^2}{u^2+1} & 0 \\
\mathcal{D}_u & 0 & 0 & 0 & 0 \\
\noalign{\bigskip}
\text{} & C_T'(u) & C_\Phi'(u) & C_R'(u) & C_u'(u) \\
\noalign{\smallskip} \hline \hline \noalign{\smallskip}
\mathcal{D}_T & -\frac{4 u}{\left(u^2+1\right)^2} & -\frac{2 u \left(u^2-3\right)}{\left(u^2+1\right)^2} & 0 & 0 \\
\mathcal{D}_\Phi & 0 & -\frac{2 u \left(u^2-1\right)}{\left(u^2+1\right)^2} & 0 & \frac{i m \left(u^2-1\right)}{u^2+1} \\
\mathcal{D}_R & 0 & 0 & -\frac{4 u}{\left(u^2+1\right)^2} & \frac{h \left(u^2-1\right)}{u^2+1} \\
\mathcal{D}_u & -\frac{i m}{u^2+1} & \frac{i m \left(u^4+6 u^2-3\right)}{4 \left(u^4-1\right)} & -\frac{h+1}{u^2+1} & 0 \\
\noalign{\bigskip}
\text{} & C_T(u) & C_\Phi(u) & C_R(u) & C_u(u) \\
\noalign{\smallskip} \hline \hline \noalign{\smallskip}
\multirow{2}{*}{$\mathcal{D}_T$} &
\multicolumn{1}{l}{\frac{\left(u^4+6 u^2-3\right) m^2}{4(u^4-1)}+{}} &
\multirow{2}{*}{$\frac{h \left(u^4+6 u^2-3\right)}{\left(u^2+1\right)^3}$} &
\multirow{2}{*}{$-\frac{i m \left(u^4+6 u^2-3\right)}{\left(u^2+1\right)^3}$} &
\multirow{2}{*}{$\frac{2 i m u \left(u^2-3\right)}{\left(u^2+1\right)^2}$} \\
& \multicolumn{1}{r}{{}+\frac{(h+1) \left(-4 u^2+h \left(u^2+1\right)^2+4\right)}{\left(u^2+1\right)^3}} & & &\\
\mathcal{D}_\Phi & \frac{m^2 \left(u^2+1\right)^2-4 (h+1) \left(u^2-1\right)}{\left(u^2+1\right)^3} & \frac{h \left((h+1) u^4+2 (h+3) u^2+h-3\right)}{\left(u^2+1\right)^3} & -\frac{i m \left((h+1) u^4+2 (h+3) u^2+h-3\right)}{\left(u^2+1\right)^3} & \frac{2 i m u \left(u^2-1\right)}{\left(u^2+1\right)^2} \\
\mathcal{D}_R & -\frac{i (h+1) m}{u^2+1} & \frac{i h m \left(u^4+6 u^2-3\right)}{4 \left(u^4-1\right)} & \frac{m^2 \left(u^4+6 u^2-3\right)}{4 \left(u^4-1\right)} & \frac{4 h u}{\left(u^2+1\right)^2} \\
\mathcal{D}_u & 0 & 0 & 0 & \frac{4 \left(u^2-1\right) h^2+4 \left(u^2-1\right) h+m^2 \left(u^4+6 u^2-3\right)}{4 \left(u^4-1\right)} \\
\end{array}$$
Expressions of $\mathcal{D}^{(m,h)}_{AB}[\mathbf{C}(u)]$ in linearized Einstein equations {#app:A-B-C-matrice}
=========================================================================================
will be left as a free constant.
Note that the integrand in eq. (\[Pf\]) is precisely of the form shown in eq. (\[projection\]). In fact, on large scales (small $k_\parallel$ as well as $k$ i.e. small compared to $k^s_\parallel$, $1/b_{T_0}$ and $k_F$), modulo multiplicative factors, it reduces to the famous Kaiser [-@kaiser87] result, if one identifies $[2-0.7(\gamma-1)]$ with the usual galaxy-bias-factor. Interestingly, the smoothing factor ${\rm exp} [- {k_\parallel^2/
{k^s_\parallel}^2}]$ is exactly of the form commonly used to model nonlinear redshift distortions on small scales (e.g. , but see also ). We will take advantage of this fact, and estimate the effect of small scale distortions on the inversion procedure at large scales by allowing $k^s_\parallel$ to vary.
### Inversion on Large Scales {#largescales}
Motivated by eq. (\[Pf\]), we consider the following inversion problem: how to estimate $\tilde P^\rho$, on large scales, from $P^f$, for $$P^f (k_{\parallel}) = \int_{k_{\parallel}}^\infty W^{f\rho} (k_{\parallel}/k,k)
{\tilde P^\rho} (k) {k dk \over {2 \pi}}
\label{inverse}$$ where $$\begin{aligned}
\label{Wfull}
W^{f\rho} (k_{\parallel}/k,k) =&& A' \, {\rm exp} [- {k_\parallel^2/
{k^s_\parallel}^2}] \, {\rm exp} [-{k^2 / k_F^2}] \\ \nonumber
&& \Biggl[ 1 + {f_{\Omega}
\over {2 - 0.7 (\gamma-1)}} {k_\parallel^2 \over k^2} -
{{\gamma-1}\over {4 [2 - 0.7 (\gamma-1)]}} k_\parallel^2 b_{T_0}^2 \Biggr]^2\end{aligned}$$ where $A'$ is a constant.[^3]
The above $W^{f\rho}$ is the actual distortion kernel we will use to compute $P^f$ for some given input $\tilde P^\rho$. However, for the inversion problem ($P^f \rightarrow \tilde P^\rho$), we will not assume we know all the parameters in $W^{f\rho}$, or even the precise form of $W^{f\rho}$, except that, on large scales, it is equal to $$W^{f\rho}_\ell (k_{\parallel}/k,k) = A' \Biggl[ 1 + \beta_f
{k_\parallel^2 \over k^2} \Biggr]^2 \, \, , \, \, \beta_f =
{f_{\Omega}
\over {2 - 0.7 (\gamma-1)}}
\label{Wlinear}$$ The $\beta_f$ here is the analog of the u
th order $$F^{(0)}_{abc} \equiv \partial_a A_{bc} - \partial_b A_{ac}.$$ The first term of (\[3-point-1\]) comes from the Lie algebra structure of this generalized YM theory. The second term arises due to the field-dependent inner-product of the Lie algebra.
Near the end of Sec.\[heuristic\], we discussed how the formulation of gravity as a generalized YM theory can heuristically explain the double-copy procedure for 3-point amplitudes at tree level. Eq.(\[3-point-1\]) is the exact expression of the heuristic expression $f AAF_{(0)} \sim F_{(0)} A F_{(0)}$ there. It may seem that there is a small discrepancy between $F_{(0)}([A,A]+AF_{(0)})$ (\[3-point-1\]) and $F_{(0)}AF_{(0)}$. But recall that the structure constant $f_{abc}$ is assumed to be cyclic in the double-copy procedure (and in our heuristic discussion in Sec.\[heuristic\]), while $F_{(0)}^{abc}$ is not. The exact expression (\[3-point-1\]) is in fact of the form of $F_{(0)}AF_{(0)}$ but additional terms that have some of the indices permuted. In Sec.\[pert-hatA\] below, we will see a simpler and more direct match with the discussions in Sec.\[heuristic\].
The 3-point vertices for the gravitons are therefore $$\left(\partial^a h^{bc}-\partial^b h^{ac}\right).$$ This is already a very simple expression, especially if we compare it with the expression obtained from the Hilbert-Einstein action. But the expression can be simplified even further, as we will show below.
Perturbative Expansion in $\hat{A}$ {#pert-hatA}
-----------------------------------
It is more economic, at least at the lowest order, to use the variable $\hat{A}_{\m a}$ defined by $$\hat{e}_{\m}{}^{a} = \delta_{\m}{}^{a} + \hat{A}_{\m}{}^{a},$$ where $\hat{e}_{\m}{}^a$ is the inverse of $\hat{e}_a{}^{\m}$ (\[def-e-A\]). The new variable $\hat{A}_{\m}{}^a$ is merely a field redefinition of $A_a{}^{\m}$. They are related via $$A_a{}^{\m} = \hat{e}_a{}^{\m} - \delta_a{}^{\m} = - \hat{A}_a{}^{\m} + \hat{A}_a{}^{\n}\hat{A}_{\n}{}^{\m} + {\cal O}(A^3).$$ Therefore $$h_{ab} \simeq - \hat{h}_{ab} + \cdots.$$ Up to sign, the physical (on-shell) amplitudes of $h_{ab}$ and $\hat{h}_{ab}$ should agree. As it is suggested by the notation, we have decompo
d\]) we get the set of winning values given in Table 1.
  s, t   Alice, Bob
  ------ ------------
  14     01, 10, 22
  15     01, 10, 22
  17     00, 12, 21
  24     02, 11, 20
  25     01, 10, 22
  28     02, 11, 20
  34     00, 11, 22
  37     00, 11, 22
  38     02, 10, 21
  41     01, 10, 22
  42     02, 11, 20
  43     00, 11, 22

  : Winning configurations for nonlocal game defined by three orbits from example I
  s, t   Alice, Bob
  ------ ------------
  51     01, 10, 22
  52     01, 10, 22
  56     02, 10, 21
  65     01, 12, 20
  67     02, 10, 21
  68     00, 11, 22
  71     00, 12, 21
  73     00, 11, 22
  76     01, 12, 20
  82     02, 11, 20
  83     01, 12, 20
  86     00, 11, 22

  : Winning configurations for nonlocal game defined by three orbits from example I
Following Ref. [@ugur1] we can show that the maximal classical probability of winning the game is determined by a Bell inequality. In fact, let $f_A{\left(s\right)}$ and $f_B{\left(t\right)}$ be the strategies of Alice and Bob, respectively; the functions $f_{A}$ and $f_{B}$ take their values in the set ${\left\{0,1,2\right\}}$. Let $F{\left(a,b;s,t\right)}$ be the characteristic function for the set of winning strategies. Then the winning probability for the given strategies $f_A$, $f_B$ is $$\frac{1}{64}\sum_{a,b=0}^2\sum_{s,t=1}^8 F{\left(a,b;s,t\right)}\delta_{a,f_A{\left(s\right)}}\delta_{b,f_B{\left(t\right)}}.\label{c}$$ Now, the sum entering the left hand side of the Bell inequality can be written as $$\sum_{a,b=0}^2\sum_{s,t=1}^8 F{\left(a,b;s,t\right)}p{\left(a_s=a, b_t=b\right)}$$ which is bounded, in example I, by 16 provided $p{\left(a_s=a,b_t=b\right)}$ can be derived from a joint probability distribution. However, defining $$p{\left(a_1,\dots,a_8,b_1,\ldots,b_8\right)}\equiv \prod_{k=1}^8 \delta_{a_k, f_A(k)}\delta_{b_k, f_B(k)}$$ we find that $p{\left(a_s,b_t\right)}$ are derived as marginals from the above joint probability. Therefore, the success probability for any classical strategy ${\left(f_A(s),f_B(t)\right)}$ cannot exceed $\fra
{r^2}{2}}{r\leq \R}
{\frac{\R^2}{2} + \R (r-\R) + \frac{(r-\R)^2}{2}- \frac{(r-\R)^3}{3\R}}{r\in[\R,2\R]}
{\frac{5\R^2}{3} + \R(r-2\R) - \frac{(r-2\R)^2}{2} + \frac{(r-2\R)^3}{12\R}}{r\in [2\R,4\R]}
{\frac{7\R^2}{3}}{r\geq 4\R}
\end{aligned}$$ Then
1. $\tau'(r) \in [0, \frac{5\R}{4}]$, with maximum at $r= \frac{3\R}{2}$. $\tau'(r) = 0$ for $r\in \lrbb{0}\bigcup [4\R, \infty)$
2. As a consequence of 1, $\tau(r)$ is monotonically increasing
3. $\tau''(r) \in [-1,1]$
We provide the derivatives of $\tau$ below. The claims in the Lemma can then be immediately verified. $$\begin{aligned}
\tau'(r) =&
\fourcase
{r}{r\leq \R}
{\R + (r-\R) - \frac{(r-\R)^2}{\R}}{r\in[\R,2\R]}
{\R - (r-2\R) + \frac{(r-2\R)^2}{4\R} }{r\in [2\R,4\R]}
{0}{r\geq 4\R}
\end{aligned}$$
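As an added spot-check of claim 1: evaluating the second branch of $\tau'$ at $r=\frac{3\R}{2}$ recovers the stated maximum,

```latex
\tau'\!\lrp{\tfrac{3\R}{2}}
  = \R + \tfrac{\R}{2} - \frac{(\R/2)^2}{\R}
  = \tfrac{3\R}{2} - \tfrac{\R}{4}
  = \tfrac{5\R}{4}.
```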
$$\begin{aligned}
\tau''(r) =&
\fourcase
{1}{r\leq \R}
{1-\frac{2(r-\R)}{\R}}{r\in[\R,2\R]}
{-1 + \frac{r-2\R}{2\R}}{r\in [2\R,4\R]}
{0}{r\geq 4\R}
\end{aligned}$$
\[l:mu\] Let $$\begin{aligned}
\mu(r) := \threecase{1}{r \leq \R}{\frac{1}{2} + \frac{1}{2} \cos \lrp{\frac{\pi (r-\R)}{3\R}}}{r\in[\R,4\R]}{0}{r\geq 4\R}
\end{aligned}$$ Then $$\begin{aligned}
\mu'(r) := \threecase{0}{r \leq \R}{-\frac{\pi}{6\R} \sin \lrp{\frac{\pi (r-\R)}{3\R}}}{r\in[\R,4\R]}{0}{r\geq 4\R}
\end{aligned}$$ Furthermore, $\mu'(r) \in [-\frac{\pi}{6\R},0]$
This Lemma can be easily verified by algebra.
[Miscellaneous]{} The following Theorem, taken from [@eldan2018clt], establishes a quantitative CLT.
\[t:zhai\] Let $X_1...X_n$ be random vectors with mean 0, covariance $\Sigma$, and $\lrn{X_i}\leq \beta$ almost surely for each $i$. Let $S_n= \frac{1}{\sqrt{n}}\sum_{i=1}^n X_i$, and let $Z$ be a Gaussian with covariance $\Sigma$, then $$\begin{aligned}
W_2(S_n, Z)\leq \frac{6\sqrt{d}\beta\sqrt{\log n}}{\sqrt{n}}\end{alig
k National University in 2015.
Peer review under responsibility of King Saud University.
######
Effect of low shear modeled microgravity on antibiotic resistance of *S. pyogenes* to various antibiotics (NG: normal gravity, MMG: low shear modeled microgravity).
Antibiotic Concentration of the antibiotic (μg/disc)
-------------- ------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------
Streptomycin 16.0 ± 1.0 18.0 ± 1 19.3 ± 1.1 18.6 ± 1.1 19.6 ± 1.1 19.6 ± 1.1 21.3 ± 1.5 20.3 ± 1.5
Penicillin 23.6 ± 2.3 24.3 ± 2.0 25.6 ± 1.5 26 ± 1.7 26.6 ± 1.5 27.3 ± 2.3 28.3 ± 0.5 29.3 ± 2.5
Kanamycin 18.0
is therefore a function indexed by a directed set. We adopt the convention of denoting nets in the manner of functions and do not use the sequential notation $\chi_{\alpha}$ that can also be found in the literature. Thus, while every sequence is a special type of net, $\chi:\!\mathbb{Z}\rightarrow X$ is an example of a net that is not a sequence.
Convergence of sequences and nets are described most conveniently in terms of the notions of being *eventually in* and *frequently in* every neighbourhood of points. We describe these concepts in terms of nets which apply to sequences with obvious modifications.
**Definition A1.6.** *A net* $\chi\!:\mathbb{D}\rightarrow X$ *is said to be*
\(a) *Eventually in* *a subset $A$* *of* *$X$ if some tail of $\chi$ lies entirely in $A$*: *$(\exists\beta\in\mathbb{D})\!:(\forall\gamma\succeq\beta)(\chi(\gamma)\in A).$*
\(b) *Frequently in* *a subset $A$* *of* *$X$ if for any index $\beta\in\mathbb{D}$, there is a successor index $\gamma\in\mathbb{D}$ such that $\chi(\gamma)$* is in $A$: *$(\forall\beta\in\mathbb{D})(\exists\gamma\succeq\beta)\!:(\chi(\gamma)\in A).\qquad\square$*
It is not difficult to appreciate that
\(i) A net eventually in a subset is also frequently in it but not conversely,
\(ii) A net eventually (respectively, frequently) in a subset cannot be frequently (respectively, eventually) in its complement.
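A standard illustration of observation (i), added for concreteness: the alternating sequence

```latex
% \chi(n)=(-1)^n is frequently in every neighbourhood of 1 (even indices),
% yet not eventually in small neighbourhoods of 1 (odd indices give -1).
\chi:\mathbb{N}\rightarrow\mathbb{R},\qquad \chi(n)=(-1)^{n}.
```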
With these notions of eventually in and frequently in, convergence characteristics of a net may be expressed as follows.
**Definition A1.7.** *A net* *$\chi\!:\mathbb{D}\rightarrow X$ converges to $x\in X$ if it is eventually in every neighbourhood of $x$, that is* $$(\forall N\in\mathcal{N}_{x})(\exists\mu\in\mathbb{D})(\chi(\nu\succeq\mu)\in N).$$ *The point $x$ is known as the* *limit* *of $\chi$ and the collection of all limits of a net is the* *limit set* $$\textrm{lim}(\chi)=\{ x\in X\!:(\forall N\in\mathcal{N}_{x})(\exists\mathbb{R}_{\beta}\in\textrm{Res}(\mathbb{D}))(\chi(\mathbb{R}_{\beta})\subseteq N)\}\label{Eqn: lim net}$$ *of $\chi$, with the set of*
$F_j$, $V_{1,j}(0)=0$, where $(V_{1,j}(\eta))(x,\omega)=V_{1,j}(x,\omega,\eta)$ and $(F_j(\eta))(x,\omega)=F_j(x,\omega,\eta)$. The $C^0$-semigroup $G(\eta)$ generated by $B_0-\Sigma_j$ is (by Trotter's formula) given by $$\label{infgen}
(G(\eta)h)(x,\omega) = e^{-\int_0^{\eta}\Sigma_j(x-\tau\omega,\omega)\, d\tau}\, H(t(x,\omega)-\eta)\, h(x-\eta\omega,\omega).$$ Hence the solution $V_{1,j}$ is (cf. [@engelnagel p. 439], [@pazy83 pp. 105-108]) $$\label{solV}
V_{1,j}(\eta)=\int_0^{\eta} G(\eta-s)F_j(s)\, ds,$$ and thus $$\begin{aligned}
\label{solv1}
v_{1,j}(x,\omega,E)&=V_{1,j}(x,\omega,r_{m,j}-E) =\int_0^{r_{m,j}-E} \big(G(r_{m,j}-E-s)F_j(s)\big)(x,\omega)\, ds\\
&= \int_0^{r_{m,j}-E} e^{-\int_0^{r_{m,j}-E-s} \Sigma_j(x-\tau\omega,\omega)\, d\tau}\\
&\qquad\times H\big(t(x,\omega)-(r_{m,j}-E-s)\big)\, f_j\big(x-(r_{m,j}-E-s)\omega,\omega,r_{m,j}-s\big)\, ds.
\end{aligned}$$ This can be shown to be a weak (distributional) solution of by a similar argument as in the proof of Lemma \[trathle2\].
Moreover, the weak solution of the (primary) problem $$\begin{aligned}
\omega\cdot\nabla_x u_1+\Sigma_1 u_1={}&f_1, \\[2mm]
{u_1}_{|\Gamma_-}={}&g_1,\end{aligned}$$ is given by (see Lemmas \[trathle1\] and \[trathle2\]) $$\begin{aligned}
\label{desol23}
u_1(x,\omega,E) ={}& \int_0^{t(x,\omega)} e^{-\int_0^{t}\Sigma_1(x-s\omega,\omega,E)\, ds}\, f_1(x-t\omega,\omega,E)\, dt\\
&+e^{-\int_0^{t(x,\omega)}\Sigma_1(x-s\omega,\omega,E)\, ds}\, g_1(x-t(x,\omega)\omega,\omega,E).\end{aligned}$$
Hence the explicit solution of the total primary problem $$\begin{aligned}
\omega\cdot\nabla_x u_1+\Sigma_1(x,\omega,E) u_1={}&f_1,\nonumber\\
-{{\frac{\partial (S_ju_j)}{\partial E}}}+\omega\cdot\nabla_x u_j
+\Sigma_j u_j
={}& f_j,\quad j=2,3
\nonumber\\
u_{|\Gamma_-}={}&g
\nonumber\\[2mm]
u_j(\cdot,\cdot,E_m)={}&0,\quad j=2,3, \label{desol24}\end{aligned}$$ is given by $u=(u_1,u_2,u_3)$, where $u_1$ is obtained from and $u_j$, $j=2,3$ are obtained from formulas , and , recalling that $v_j=v_{1,j}+v_{2,j}$, $j=2,3$.
A Solution Based on the Neumann Series {#meta}
--------------------------------------
Consider the transport problem (\[desol10\]), (\[desol11\]), (\[desol12\]). Setting $\phi=e^{CE}\psi$ as in Section \[cosyst\], we recall that the problem then takes the equivalent form (\[cosyst1\])-(\[cosyst4\]). Denote \[cmp4\] T\_[1,C]{}:=&\_x\_1+\_1\_1-K\_[1,C]{}\
T\_[j,C]{}:=&-[E]{}+\_x\_j+ CS\_j\_j+\_j\_
ments, namely dynamic graph and hypergraph models. In order to measure the dynamicity, [we introduce]{} the notion of *pair visibility*. For a pair $\{i,j\}$ of distinct vertices, the *visibility* of $\{i, j\}$, denoted by ${\ensuremath{\operatorname{\mathtt{vis}}(i,j)}}$, is the number of rounds $t\in\{1,\ldots, n\}$ such that $\{i,j\}$ is contained in the edge chosen at round $t$. (A more formal definition is given below.) When ball $i$ is placed into a bin, the *height* of ball $i$ is the number of balls that were allocated to the bin before ball $i$. We say that event ${\mathsf{E}}_n$ holds with high probability (w.h.p.) if ${\ensuremath{\operatorname{\mathbf{Pr}}\left[{\mathsf{E}}_n\right]}}{\geqslant}1-n^{-c}$ for every constant $c>0$.
### Balanced Allocation on Dynamic Hypergraphs {#balanced-allocation-on-dynamic-hypergraphs .unnumbered}
Write $[n]=\{1,\ldots, n\}$ [ to be the set of $n$ bins]{}. A hypergraph ${\mathcal{H}}=([n],{\mathcal{E}})$ is ${s}$-*uniform* if $|H|={s}$ for every $H\in {\mathcal{E}}$. [For every integer $n{\geqslant}1$, let $s={s}(n)$ be an integer such that $2{\leqslant}s{\leqslant}n$.]{} A *dynamic ${s}$-uniform hypergraph*, denoted by [$({\mathcal{H}}^{(1)},{\mathcal{H}}^{(2)},\ldots, {\mathcal{H}}^{(n)})$,]{} is a sequence of $s$-uniform hypergraphs ${\mathcal{H}}^{(t)}=([n], \mathcal{E}_t)$ with vertex set $[n]$. The edge sets ${\mathcal{E}}_t$ may change with $t$. [A hypergraph is *regular* if every vertex is contained in the same number of edges.]{}
In this paper, we are interested in the following properties which dynamic hypergraphs may satisfy. We refer to these properties as the *balancedness*, *visibility*, and *size* properties. [The balancedness property is adapted from [@BBFN12; @God08].]{}
- Let $H_t$ denote a randomly chosen edge from ${\mathcal{E}}_t$. [If there exists a constant $\beta {\geqslant}1$]{} such that ${\ensuremath{\operatorname{\mathbf{Pr}}\left[i\in H_t\right]}}{\leqslant}\beta {s}/n$ for every $1{\leqslant}t{\leqslant}n$ and each bin $i\in [n]$, then t
definition. Read more about it in this blog article
Q:
Changing a property modifier when merging an interface
The Window interface has a few properties that are readonly:
interface Window extends ... {
// ...
readonly innerHeight: number;
readonly innerWidth: number;
// ...
}
I get that those cannot really be changed, but in my unit tests, I'm changing the values of the object to simulate the changes. And it's an object of that type.
Is there a way I can augment that type in a custom d.ts file and change these properties modifiers?
I tried to just create a .d.ts file with this:
interface Window {
innerHeight: number;
innerWidth: number;
}
But the compiler is complaining with:
All declarations of 'innerWidth' must have identical modifiers.
A:
(I'm pretty sure that) no, you can't change the modifiers or types of existing types.
But as you're only using it for testing then it shouldn't be problematic to cast it to any:
(window as any).innerWidth = 500;
(window as any).innerHeight = 300;
Edit
There's a trick to make a type mutable:
type Mutable<T extends { [x: string]: any }, K extends string> = {
[P in K]: T[P];
}
type MyWindow = Mutable<Window, keyof Window>;
(window as MyWindow).innerWidth = 500;
(window as MyWindow).innerHeight = 300;
(code in playground)
It was suggested in this issue: Mapped Types syntax to remove modifiers, but if it's for testing then this is probably an overkill and casting to any would do the trick (in my opinion).
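For completeness, here is a self-contained version of that trick, using a stand-in interface with readonly members so it compiles without the DOM's `Window` typings (the interface and values are illustrative only):

```typescript
// Mapped type that re-declares the listed keys without the readonly
// modifier, so assignments through the cast type-check.
type Mutable<T extends { [x: string]: any }, K extends string> = {
  [P in K]: T[P];
};

// Stand-in for Window with readonly members (illustrative only).
interface ReadonlyView {
  readonly innerWidth: number;
  readonly innerHeight: number;
}

const view: ReadonlyView = { innerWidth: 0, innerHeight: 0 };

// view.innerWidth = 500;  // rejected: assignment to a read-only property
(view as Mutable<ReadonlyView, keyof ReadonlyView>).innerWidth = 500;
(view as Mutable<ReadonlyView, keyof ReadonlyView>).innerHeight = 300;
```

Note that `readonly` is a compile-time check only, so the cast changes nothing at runtime; the same underlying object is mutated.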
Q:
Textfield in alertview not working in ios7?
I have an application in which I'm using a specific design for a reason. I put a text field in an alert view above an otherbutton with a background image. Everything is working fine in the iOS 6 version.
UIAlertView *av=[[UIAlertView alloc] initWithTitle:@"fdhdj" message:@" hdfjkhfjkhdk" delegate:self cancelButtonTitle:@"ok" otherButtonTitles:@" ",@"cancel",nil];
av.alertViewStyle = UIAlertViewStylePlainTextInput;
namefield = [[UITextField alloc] initWithFrame:CGRectMake(10.0,43.0, 264.0, 44.0)];
| 833 | 4,239 | 11 | 769 | 324 | 0.813367 | github_plus_top10pct_by_avg |
in $B\otimes_AR$. Thus there are exactly $(n_i-2)$ independent linear equations among the entries of $v_i'$ and $r_i'$.
3. The $(1,3)$-block is $$\label{ea8}
e_i'=\pi(- {}^ty_i'+a_it_i').$$ This is an equation in $B\otimes_AR$. By letting $e_i'=e_i=0$, there are exactly $(n_i-2)$ independent linear equations among the entries of $y_i'$ and $t_i'$.
4. The $(2,3)$-block is $$1+\pi d_i'=1-\pi(-\pi x_i'-\pi^2z_i')+\pi(\pi^3\bar{\gamma}_iu_i'+\pi w_i').$$ By letting $d_i'=d_i=0$, we have $$\label{ea9}
d_i'=\pi(x_i'+ w_i')=0.$$ This is an equation in $B\otimes_AR$. Thus there is exactly one independent linear equation between $x_i'$ and $w_i'$.
5. The $(2,2)$-block is $$\label{ea10}
\pi^3 f_i'=\pi^3\bar{\gamma}_i-\pi(-\pi^4\bar{\gamma}_ix_i'+\pi z_i')+\pi(\pi^4\bar{\gamma}_i x_i'+\pi z_i').$$ Since $-\pi(-\pi^4\bar{\gamma}_ix_i'+\pi z_i')+\pi(\pi^4\bar{\gamma}_i x_i'+\pi z_i')$ contains $2\pi^5$ as a factor, by letting $f_i'=f_i=\bar{\gamma}_i$, this equation is trivial.
6. The $(3,3)$-block is $$\pi+\pi^3c_i'=\pi-\pi(u_i'-\pi^2w_i')+\pi(-u_i'+\pi^2 w_i').$$ By letting $c_i'=c_i=0$, $$\label{ea11}
c_i'=-u_i+2w_i'=0.$$ This is an equation in $R$. Thus $u_i=0$ is the only independent linear equation.\
By combining all six cases (a)-(f), there are exactly $((n_i-2)^2+(n_i-2))/2+2(n_i-2)+2=(n_i^2+n_i)/2-1$ independent linear equations, so $(n_i^2-n_i)/2+1$ entries of $m_{i,i}'$ determine all entries of $m_{i,i}'$.\
3. Assume that $i$ is even and that $L_i$ is *of type II*. This case is parallel to the above case (i). Then $\pi^ih_i=\xi^{i/2} a_i$ as explained in Section \[h\] and thus we have $$a_i'=\sigma(1+\pi\cdot {}^tm_{i,i}')a_i(1+\pi m_{i,i}')+\pi^3(\ast).$$ Here, the nondiagonal entries of this equation are considered in $B\otimes_AR$ and each diagonal entry of $a_i'$ is of the form $2 x_i$ with $x_i\in R$. Thus we can cancel the term $\pi^3(\ast)$ since each entry contains $\pi^3$ as a factor. In addition, we can cancel the term $\sigma(\pi\cdot {}^tm
| 834 | 2,070 | 1,370 | 854 | null | null | github_plus_top10pct_by_avg |
3---S6 90.68 (4) C40---C41---H41 119.7
S5---Ag3---S6 87.08 (4) C41---C42---C43 120.5 (5)
P4---Ag3---W2 117.46 (3) C41---C42---H42 119.8
P3---Ag3---W2 104.71 (3) C43---C42---H42 119.8
S5---Ag3---W2 45.48 (2) C42---C43---C44 119.6 (4)
S6---Ag3---W2 42.93 (3) C42---C43---H43 120.2
P6---Ag4---P5 122.24 (4) C44---C43---H43 120.2
P6---Ag4---S5 117.74 (4) C39---C44---C43 120.0 (4)
P5---Ag4---S5 115.87 (4) C39---C44---H44 120.0
P6---Ag4---S7 91.08 (4) C43---C44---H44 120.0
P5---Ag4---S7 112.28 (4) C46---C45---C50 119.2 (4)
S5---Ag4---S7 86.36 (4) C46---C45---P4 120.1 (3)
P6---Ag4---W2 124.47 (3) C50---C45---P4 120.5 (4)
P5---Ag4---W2 107.12 (3) C45---C46---C47 119.9 (5)
S5---Ag4---W2 47.06 (3) C45---C46---H46 120.0
S7---Ag4---W2 44.41 (2) C47---C46---H46 120.0
W1---S1---Ag1 77.20 (4) C48---C47---C46 119.9 (5)
W1---S2---Ag1 75.69 (3) C48---C47---H47 120.0
W1---S2---Ag2 75.48 (3) C46---C47---H47 120.0
Ag1---S2---Ag2 93.50 (4) C49---C48---C47 120.2 (5)
W1---S3---Ag2 77.39 (3) C49---C48---H48 119.9
W2---S5---Ag4 76.69 (3) C47---C48---H48 119.9
W2---S5---Ag3 80.10 (3) C48---C49---C50 120.7 (5)
Ag4---S5---Ag3 102.12 (4) C48---C49---H49 119.7
W2---S5---Ag1 128.34 (4) C50---C49---H49 119.7
Ag4---S5---Ag1 92.22 (4) C49---C50---C45 120.0 (5)
Ag3---S5---Ag1 150.82 (5) C49---C50---H50 120.0
W2---S5---Ag2 114.86 (4) C45---C50---H50 120.0
Ag4---S5---Ag2 167.86 (5) C56---C51---C52 119.1 (5)
Ag3---S5---Ag2 77.31 (3) C56---C51---P3 123.5 (4)
Ag1---S5---Ag2 83.40 (3) C52---C51---P3 117.4 (4)
W2---S6---Ag3 78.81 (4)
| 835 | 2,272 | 2,768 | 1,113 | null | null | github_plus_top10pct_by_avg |
and Innovation project TEC 2012-32336, and by the Generalitat de Catalunya research support program SGR-1202. This work is also partially supported by the Secretariat for Universities and Research (SUR) and the Ministry of Economy and Knowledge through AGAUR FI-DGR 2012 and BE-DGR 2012 grants (M. M.)
[^1]: Marc Manzano and Eusebi Calle are with University of Girona, Spain. Anna Manolova Fagertun and Sarah Ruepp are with Technical University of Denmark, Denmark. Caterina Scoglio is with Kansas State University, USA. Ali Sydney is with Raytheon BBN Technologies, USA. Antonio de la Oliva and Alfonso Muñoz are with University Carlos III of Madrid, Spain. Corresponding author: Marc Manzano (email: mmanzano@eia.udg.edu - marcmanzano@ksu.edu).
[^2]: <http://users.csc.calpoly.edu/~jdalbey/SWE/Papers/att_collapse.html>
[^3]: <http://www.zdnet.com/juniper-fail-seen-as-culprit-in-site-outages-4010024743/>
[^4]: <http://spectrum.ieee.org/telecom/security/the-real-story-of-stuxnet>
[A Basic Thermodynamic Derivation of the Maximum Overburden Pressure Generated in Frost Heave]{}
[Kenneth G. Libbrecht]{}[^1]
[Department of Physics, California Institute of Technology]{}
[Pasadena, California 91125]{}
------------------------------------------------------------------------
------------------------------------------------------------------------
**ABSTRACT**
I describe a simple heat-engine derivation of the maximum overburden pressure that can be generated in frost heave. The method stems from the fact that useful work can, in principle, be extracted from the forces generated by an advancing solidification front via the frost heave mechanism. Using an idealized frost heave engine, together with the maximum thermodynamic efficiency of any heat engine, one can derive the maximum overburden pressure. A similar argument can also produce the maximum thermodynamic buoyancy force on a foreign object within a solid surrounded by a premelted layer.
A Frost Heave Engine
====================
Frost heave is a common e
| 836 | 65 | 1,461 | 1,232 | null | null | github_plus_top10pct_by_avg |
eft( \left( \det {\cal E}_1^{\alpha} \right)
\left( \det T_1^{\alpha} \right)^{-1} \right)^{+1/6} \otimes
\left( \left( \det {\cal E}_2^{\alpha} \right) \left( \det T_2^{\alpha}
\right)^{-1} \right)^{-1/6} ,\end{aligned}$$ and $$\begin{aligned}
{\cal F}_+^{\alpha^{-1}} & = & \left( \left( \det {\cal E}_1^{\alpha^{-1}}
\right) \left( \det T_1^{\alpha^{-1}} \right)^{-1} \right)^{+1/6} \otimes
\left( \left( \det {\cal E}_2^{\alpha^{-1}} \right)
\left( \det T_2^{\alpha^{-1}} \right)^{-1} \right)^{-1/6}, \\
& = &
\left( \left( \det {\cal E}_2^{\alpha}
\right) \left( \det T_2^{\alpha} \right)^{-1} \right)^{+1/6} \otimes
\left( \left( \det {\cal E}_1^{\alpha} \right)
\left( \det T_1^{\alpha} \right)^{-1} \right)^{-1/6}, \\
& = &
\left( {\cal F}_+^{\alpha} \right)^* .\end{aligned}$$ In this fashion we confirm equation (\[eq:pos-Fock-duality\]) explicitly.
Vacuum energies are invariant: if a fermion boundary condition in sector $\alpha$ is determined by $\theta$, then in $\alpha^{-1}$ it is determined by $- \theta$, but vacuum energies only depend upon $(\theta)^2$, and so are invariant. Contributions to the spectrum from sector $\alpha$ are matched by Serre duals in sector $\alpha^{-1}$. In terms of global quotients by finite groups, this means the untwisted sector closes into itself under Serre duality, but twisted sectors are exchanged. For example, the Serre duals to (\[eq:countstates1\]) are given by $$\begin{aligned}
\lefteqn{
H^{{\rm dim}-k_0} \Bigl(
I_{\mathfrak{X}}|_{\alpha},
\left(
\wedge^{m_0} {\cal E}_0^{\alpha } \right)
\otimes_{n>0}\left( \wedge^{m_n} {\cal E}_n^{\alpha *}
\otimes \wedge^{p_n} {\cal E}_n^{\alpha }
\otimes \wedge^{\ell_n} T_n^{\alpha *}
\otimes \wedge^{k_n} T_n^{\alpha } \right)
}
\\
& & \hspace*{3.25in} \left.
\otimes ({\cal F}^{\alpha}_+)^* \otimes
\sqrt{ K_{\alpha}^* \otimes \det {\cal E}^{\alpha *}_0 }
\otimes K_{\alpha}
\right)^*
\\
& = &
H^{{\rm dim}-k_0}\Bigl(
I_{\mathfrak{X}}|_{\alpha^{-1}},
\left( \wedge^{{\rm rk} - m_0} {\cal E}_0^{\alpha^{-1} } \right)
\otimes_{n>0}\
| 837 | 1,795 | 1,613 | 860 | null | null | github_plus_top10pct_by_avg |
18
FP4_Bowtie 0.58 0.331 8 0.60 0.356 10 0.330
FP4_MAQ 0.58 0.335 9 0.60 0.361 12 0.334
FP4_Soap2 0.58 0.333 9 0.60 0.357 11 0.331
Density Chen Eland 0.53 0.281 10 0.58 0.334 12 0.282
FP4_Eland 0.57 0.324 10 0.60 0.358 12 0.325
FP4_Bowtie 0.59 0.342 9 0.61 0.366 10 0.340
FP4_MAQ 0.59 0.346 9 0.61 0.370 10 0.345
FP4_Soap2 0.59 0.344 9 0.61 0.366 12 0.342
TF interactions wired with epigenetic effects
---------------------------------------------
To investigate the cooperative effects among TFs and epigenetic patterns in gene regulation, we exhaustively searched significant interaction terms from our regression model. First, a subset of ESC-specific genes that are co-bound by a specific TF pair is prepared. Then, the saturated model for the genes is constructed. The model involves 469 variables; 14 main effect terms (11 TFs and 3 epigenetic states) and 455 higher-order interaction terms (all the possible pairwise and triplewise interactions). Finally, our pipeline greedily identifies important variables (see *methods*). This procedure is independently performed with each of five peak datasets.
In total, 215 models were identified in which the predictive power is higher than the models without higher-order terms. These models contained 6-30 variables including at least one interactive term. As an example, the regression model for genes co-bound by Oct4 and Sox2, a well-known pluripotent complex \[[@B9],[@B25
| 838 | 2,366 | 1,632 | 983 | null | null | github_plus_top10pct_by_avg |
isits following RFVTA is presented in Figure [1](#F1){ref-type="fig"}. Of the 135 women satisfying all inclusion criteria and treated at baseline, 11 subjects had interfering circumstances unrelated to the procedure (pregnancy, lack of menses, and Hashimoto's Disease) that could have influenced bleeding assessments positively or negatively. Of the 124 subjects continuing past the 12-month follow up, six sought surgical re-interventions, one subject became pregnant, one chose Novasure ablation as diagnostic hysteroscopy revealed no myomas, and three subjects were lost to follow up or withdrew from the trial. Of the outstanding 113 Uterine Fibroid Symptom and Quality-of-Life questionnaires, the 11 sites received 112 completed questionnaires. Demographics of those 124 subjects entering the second year of the study are presented in Table [1](#T1){ref-type="table"}.
{#F1}
######
Demographics characteristics of subjects entering 12 months follow up (n = 124)
**Variable** **Statistic/Response**^**a**^ **All sites (n = 124)**
----------------- ------------------------------- -------------------------
Age (years) Mean (SD) 42.4 (4.4)
Median 43
Range 31 -- 52
Race White or Caucasian 58 (46.8%)
Black or African American 41 (33.1%)
Asian 2 (1.6%)
Other ^b^ 23 (18.5%)
Ethnicity Hispanic or Latino 56 (45.2%)
Not Hispanic or Latino 68 (54.8%)
Smoking History Current 25 (20.2%)
Past 21 (16.9%)
Never 78 (62.9%)
Height (cm) Mean (SD) 162.5 (8.1)
Median
| 839 | 1,024 | 965 | 1,048 | null | null | github_plus_top10pct_by_avg |
erature [@BGMP].
For the comparison we have computed the present flow with the zero-temperature running coupling in Fig. \[fig:alpha\] for all temperatures. This mimics the approximation used in [@Braun:2007bx], which implicitly relies on the zero-temperature running coupling $\alpha_s$. We also remark that the quantity $L[\langle A_0\rangle]$ in general is gauge-dependent, and only the critical temperature derived from it is not. However, in Landau-DeWitt gauge with backgrounds $A_0$ in Polyakov gauge, the temporal fluctuations about this background include those in Polyakov gauge. For this reason we might expect a rather quantitative agreement for the quantity $L[\langle A_0\rangle]$ in both approaches. The results for the temperature dependence of the Polyakov loop are depicted in Fig. \[fig:compare\].
![Comparison of $L[\langle A_0\rangle]$ computed in Polyakov gauge and in Landau-DeWitt gauge from [@Braun:2007bx].[]{data-label="fig:compare"}](LGcompare.eps "fig:"){width="8cm"}\
The coincidence between the two gauges is very remarkable, particularly since the mechanisms driving confinement are quite different in the different approaches, as are the approximations used in both cases. This provides further support for the respective results. It also sustains the argument concerning the lack of gauge dependence made above. The quantitative deviations in the vicinity of the phase transition are due to the truncation used in [@Braun:2007bx], that cannot encode the correct critical physics yet, as has been already discussed there.
Summary and outlook {#sec:summ}
===================
In the present work we have put forward a formulation of QCD in Polyakov gauge. We have argued that this gauge is specifically well-adapted for the investigation of the confinement-deconfinement phase transition as the order parameter, the Polyakov loop expectation value $\langle L[A_0]\rangle $, has a simple representation in terms of the temporal gauge field. Moreover, we have shown that $L[\langle
A_0\rangle]$ also serves as an orde
| 840 | 132 | 969 | 838 | null | null | github_plus_top10pct_by_avg |
yperparameters), and the small portion was used to train the calibrators. The 3 calibrators trained in the inner 3-folds were used to predict the corresponding test partition, and their predictions were averaged in order to obtain better estimates of their performance with the 7 different metrics (accuracy, Brier score, log-loss, maximum calibration error, confidence-ECE, classwise-ECE and the p test statistic of the ECE metrics). Finally, the 25 resulting measures were averaged.
\[tab:data\]
{width="0.9\linewidth"} \[fig:ds:partition\]
Full example of statistical analysis {#sec:exp:example}
------------------------------------
The following is a full example of how the final rankings and statistical tests are computed. For this example, we will focus on the metric log-loss, and we will start with the naive Bayes classifier. Table \[table:nbayes:loss\] shows the estimated log-loss by averaging the 5-times 5-fold cross-validation log-losses of the inner 3-fold aggregated predictions. The sub-indices are the ranking of every calibrator for each dataset (ties in the ranking share the averaged rank). The resulting table of sub-indices is used to compute the Friedman test statistic, resulting in a value of $73.8$ and a p-value of $6.71e^{-14}$ indicating statistical difference between the calibration methods. The last row contains the average ranks of the full table, which is shown in the corresponding critical difference diagram in Figure \[fig:cd:nbayes:loss\]. The critical difference uses the Bonferroni-Dunn one-tailed statistical test to compute the minimum ranking distance that is shown in the Figure, indicating that for this particular classifier and metric the Dirichlet calibrator with L2 regularisation is significantly better than the other methods.
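The ranking-and-test step described above can be sketched directly from the per-dataset scores; tied scores share averaged ranks, and the Friedman chi-square is computed from the rank sums. The log-loss values below are made up, not the paper's table.

```python
def friedman_statistic(scores):
    """Friedman chi-square for a table scores[dataset][method]
    (lower score = better rank, as with log-loss)."""
    n = len(scores)      # number of datasets
    k = len(scores[0])   # number of calibration methods
    rank_sums = [0.0] * k
    for row in scores:
        order = sorted(range(k), key=lambda j: row[j])
        ranks = [0.0] * k
        i = 0
        while i < k:
            j = i
            # extend j over a block of tied scores
            while j + 1 < k and row[order[j + 1]] == row[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # averaged rank shared by the tie block
            for m in range(i, j + 1):
                ranks[order[m]] = avg
            i = j + 1
        for m in range(k):
            rank_sums[m] += ranks[m]
    return 12.0 * sum(r * r for r in rank_sums) / (n * k * (k + 1)) - 3.0 * n * (k + 1)

# Three datasets, three methods, always ranked in the same order:
chi2 = friedman_statistic([[1, 2, 3], [0.1, 0.2, 0.3], [5, 6, 7]])
```

A large statistic (small p-value) indicates a significant difference between the methods, as in the example of the text.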
[.49]{} {width="\linewidth"}
[.49]{} ![Critical Differenc
| 841 | 435 | 364 | 527 | null | null | github_plus_top10pct_by_avg |
erimental situation shall now be mapped onto the theory derived in the preceding subsection. Though the calculation is straightforward, and similar approaches can be found elsewhere [@stoe02c], it is repeated here for the reader’s convenience. Let us start with the expression of the scattering matrix in terms of Wigner’s reaction matrix: $$\label{eq:s02}
S=\frac{1-i W^\dag GW}{1+i W^\dag GW}\,.$$ $G=(E-H)^{-1}$ is the Green’s function of the closed system and matrix $W=(W_a,W_c)$ contains the information on the coupling. As before, index “$c$” refers here to the antenna with variable coupling, and “$a$” to the measuring antenna. Per definition, the $S$-matrix relates the amplitudes of the incoming ($u$) and outgoing ($v$) waves, $$\label{eq:s03}
S\left(\begin{array}{c}
u_c \\
u_a
\end{array}\right)
=\left(\begin{array}{c}
v_c \\
v_a
\end{array}\right).$$ A termination of antenna $c$ is described by $$\label{eq:s04}
u_c=rv_c\,,\qquad r=e^{-(\alpha-i\varphi)}\,,$$ where $r$ contains the information on the reflection properties of the antenna. For reflection at an antenna with open or closed end we have $\alpha=0$ (as long as the absorption in the antenna can be neglected). The termination of the antenna by a 50$\Omega$ load corresponds to $\alpha\to\infty$.
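Numerically, the two limiting terminations behave as expected; the function below is a direct transcription of the reflection coefficient above, and the sample values of $\alpha$ and $\varphi$ are arbitrary.

```python
import cmath

def reflection(alpha, phi):
    """Reflection coefficient r = exp(-(alpha - i*phi)):
    alpha = 0 models a lossless open or closed antenna end (|r| = 1),
    while alpha -> infinity models the matched 50-Ohm load (r -> 0)."""
    return cmath.exp(-(alpha - 1j * phi))

r_open = reflection(0.0, 0.3)   # lossless termination, full reflection
r_load = reflection(50.0, 0.3)  # effectively a matched load
```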
Making use of Eq. (\[eq:s02\]), one can rewrite Eq. (\[eq:s03\]) as $$\label{eq:s05}
i W^\dag GW\left(\begin{array}{c}
u_c+v_c \\
u_a+v_a
\end{array}\right)
=\left(\begin{array}{c}
u_c-v_c \\
u_a-v_a
\end{array}\right).$$ Substituting relation (\[eq:s04\]) in Eq. (\[eq:s05\]), $u_c$ and $v_c$ can be eliminated, resulting in an equation for $u_a$ and $v_a$, $$\label{eq:s06}
iW_a^\dag\hat{G}W_a(u_a+v_a)=u_a-v_a.$$ Here, we have introduced the modified Green’s function, $\hat{G}$, with the following matrix element $$\label{eq:s09}
W_a^\dag\hat{G}W_a \equiv G_{a
| 842 | 4,295 | 649 | 677 | 846 | 0.798998 | github_plus_top10pct_by_avg |
under arbitrary model selection rules without relying on sample splitting. We omit the details.
### Confidence sets for the projection parameters: Normal Approximations {#confidence-sets-for-the-projection-parameters-normal-approximations .unnumbered}
We will now derive confidence intervals for the projection parameters using a high-dimensional Normal approximation to $\hat{\beta}_{{\widehat{S}}}$. The construction of such confidence sets entails approximating the dominant linear term in the Taylor series expansion of $\hat{\beta}_{{\widehat{S}}} - \beta_{{\widehat{S}}}$ by a centered Gaussian vector in $\mathbb{R}^{{\widehat{S}}}$ with the same covariance matrix $\Gamma_{{\widehat{S}}}$ (see (\[eq:Gamma\]) in ). The coverage properties of the resulting confidence sets depend crucially on the ability to estimate such covariance. For that purpose, we use a plug-in estimator, given by $$\label{eq::Ga}
\hat\Gamma_{{\widehat{S}}} = \hat{G}_{{\widehat{S}}}\hat V_{{\widehat{S}}} \hat{G}_{{\widehat{S}}}^\top$$ where $\hat V_{{\widehat{S}}} = \frac{1}{n}\sum_{i=1}^n [ (W_i - \hat\psi) (W_i -
\hat\psi)^\top]$ is the $b \times b$ empirical covariance matrix of the $W_i$’s and the $k \times
b$ matrix $\hat{G}_{{\widehat{S}}}$ is the Jacobian of the mapping $g$, given explicitly below in , evaluated at $\hat{\psi}$.
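As a sketch, the plug-in estimator $\hat\Gamma_{\widehat S}=\hat G_{\widehat S}\hat V_{\widehat S}\hat G_{\widehat S}^\top$ can be assembled directly from data; the Jacobian below is made up for illustration, whereas in the text it comes from the mapping $g$ evaluated at $\hat\psi$.

```python
import numpy as np

def plugin_covariance(W, G):
    """Plug-in estimator Gamma_hat = G V_hat G^T, where V_hat is the
    empirical covariance (1/n) sum (W_i - psi_hat)(W_i - psi_hat)^T
    of the rows of W, and G is the (given) k x b Jacobian."""
    psi_hat = W.mean(axis=0)
    centered = W - psi_hat
    V_hat = centered.T @ centered / W.shape[0]   # b x b, normalized by n
    return G @ V_hat @ G.T                       # k x k

rng = np.random.default_rng(0)
W = rng.normal(size=(500, 3))                      # n = 500 draws in R^b, b = 3
G = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 1.0]])   # made-up Jacobian, k = 2
Gamma = plugin_covariance(W, G)
```

By construction the estimate is a symmetric positive semidefinite $k \times k$ matrix.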
The first confidence set for the projection parameter based on the Normal approximation that we propose is an $L_\infty$ ball of appropriate radius centered at $\hat{\beta}_{{\widehat{S}}}$: $$\label{eq::beta.conf-rectangle}
\hat{C}_{{\widehat{S}}} = \Bigl\{ \beta \in \mathbb{R}^k:\
||\beta-\hat\beta_{{\widehat{S}}}||_\infty \leq
\frac{\hat{t}_\alpha}{\sqrt{n}}\Bigr\},$$ where $\hat{t}_\alpha$ is a random radius (dependent on $\mathcal{D}_{2,n}$ ) such that $$\label{eq:t.akpha}
\mathbb{P}\left( \| \hat{\Gamma}^{1/2}_{\hat{S}} Q \|_\infty \leq \hat{t}_\alpha
\right) = \alpha,$$ with $Q$ a random vector having the $k$-dimensional standard Gaussian distribution and independent of th
| 843 | 2,971 | 1,262 | 828 | 3,618 | 0.771206 | github_plus_top10pct_by_avg |
ution to in $D$ by $$u(x) =\pi^{-2} \sin(\pi \alpha/2) \int_{D^{\texttt{c}}} \left( \frac{1-|x|^2} {|y|^2-1} \right)^{\alpha/2} \frac{1} {|y-x|^2} \exp(-|x-y|^2) \,{\rm d}y,
\qquad x\in D.
\label{eq:ana_T4.4}$$ This integral can be computed numerically via a quadrature approximation. Here, instead of a fixed number of samples, the number of samples is taken adaptively based on a tolerance $\varepsilon$ for the computed sample standard deviation. Figure \[fig:test5\] shows the results with $y=(2,0)$ and tolerance $\varepsilon=10^{-4}$ for evaluation of $u(0.6,0.6)$ as previously. The estimator standard deviation and absolute error exhibit no obvious trend, whereas the sample variance peaks at about $\alpha=0.6$. Also at this value, the largest number of samples is needed to satisfy the tolerance. Despite the sample variance decreasing after $\alpha=0.6$, there is an increasing trend in the amount of work required. This implies that the increase in the number of steps with $\alpha$ (see Figure \[fig:steps\]) dominates and therefore a solution point of accuracy $10^{-4}$ is computationally more costly for larger values of $\alpha$.
![Example simulation with the walk-on-spheres algorithm, based on a desired tolerance of $10^{-4}$. From top left to bottom right, we see the standard deviation of the estimator, the sample variance, the absolute error (using a quadrature approximation for the reference value), and the amount of work (number of samples $\times$ mean number of steps).[]{data-label="fig:test5"}](test5_fig3 "fig:") ![](test5_fig4 "fig:")\
| 844 | 1,629 | 2,720 | 891 | 943 | 0.797096 | github_plus_top10pct_by_avg |
{m,n} \phi^{(0)}_{m-1,n} + \phi^{(1)}_{m+1,n} \phi^{(1)}_{m,n} \phi^{(1)}_{m-1,n}}{1+\phi^{(0)}_{m+1,n}\phi^{(0)}_{m,n} \phi^{(0)}_{m-1,n} + \phi^{(1)}_{m+1,n} \phi^{(1)}_{m,n} \phi^{(1)}_{m-1,n}} .
\end{gathered}$$ We also have the master symmetry (\[eq:phi-sys-msym\]), which can be written $$\begin{gathered}
\partial_\tau \phi^{(0)}_{m,n} = m \partial_{t^1} \phi^{(0)}_{m,n} ,\qquad \partial_\tau \phi^{(1)}_{m,n} = m \partial_{t^1} \phi^{(1)}_{m,n} ,\qquad \partial_\tau \alpha = \alpha,\end{gathered}$$ which allows us to construct a hierarchy of symmetries of system (\[eq:3D-1212\]) in the $m$-direction. For instance, the second symmetry is
\[eq:3D-1212-sym-2\] $$\begin{gathered}
\partial_{t_2} \phi^{(0)}_{m,n} = \frac{\phi^{(0)}_{m,n} \phi^{(1)}_{m+1,n} \phi^{(1)}_{m,n} \phi^{(1)}_{m-1,n}}{{\cal{F}}_{m,n}} ({\cal{S}}_m+1)\left( \frac{({\cal{S}}_m- 1)\big(\phi^{(0)}_{m,n} \phi^{(0)}_{m-1,n} \phi^{(0)}_{m-2,n} \big)}{{\cal{F}}_{m,n} {\cal{F}}_{m-1,n}} \right),\\
\partial_{t_2} \phi^{(1)}_{m,n} = \frac{\phi^{(1)}_{m,n} \phi^{(0)}_{m+1,n} \phi^{(0)}_{m,n} \phi^{(0)}_{m-1,n}}{{\cal{F}}_{m,n}} ({\cal{S}}_m+1)\left( \frac{({\cal{S}}_m- 1)\big(\phi^{(1)}_{m,n} \phi^{(1)}_{m-1,n} \phi^{(1)}_{m-2,n} \big)}{{\cal{F}}_{m,n} {\cal{F}}_{m-1,n}} \right),\end{gathered}$$ where $$\begin{gathered}
{\cal{F}}_{m,n} := 1+\phi^{(0)}_{m+1,n}\phi^{(0)}_{m,n} \phi^{(0)}_{m-1,n} + \phi^{(1)}_{m+1,n} \phi^{(1)}_{m,n} \phi^{(1)}_{m-1,n} ,\end{gathered}$$ and ${\cal{S}}_m$ denotes the shift operator in the $m$-direction.
### The reduced system {#the-reduced-system .unnumbered}
The reduced system (\[self-dual-equn\]) takes the explicit form (first introduced in [@14-6]) $$\begin{gathered}
\label{eq:MX2}
\phi_{m,n} \phi_{m+1,n+1} ( \phi_{m+1,n} + \phi_{m,n+1} ) = 2,\end{gathered}$$ where $$\begin{gathered}
\phi^{(0)}_{m,n} = \phi^{(1)}_{m,n} = \frac{1}{\phi_{m,n}} ,\qquad \beta = - \alpha.\end{gathered}$$ With this coordinate, the second symmetry (\[eq:3D-1212-sym-2\]) takes the form $$\begin{gathered}
\partial_{t_2} \phi_{m,n} = \p
| 845 | 1,536 | 1,258 | 989 | null | null | github_plus_top10pct_by_avg |
N_\phi} \Theta_{ijk} e^{-2\pi \sqrt{-1}jm/N_\phi},$$ and similarly for ${\cal G}$ and $\sigma$. Then, Equation can be cast into a more compact form $$\label{eq:gpot_by_discrete_green_fft}
\Theta^m_{ik} = \sum_{i'=0}^{N_R+1}\sum_{k'=0}^{N_z+1}{\cal G}^m_{i,i',k-k'} \sigma^m_{i'k'}{\cal V}_{i'}.$$ Since $\sigma$ is nonzero only at the ghost cells and we need to evaluate $\Theta$ also only at the ghost cells, we do not have to perform full double summations for all $m,i,k$ indices in Equation . By collecting the individual contributions of the surface charges, Equation yields the Fourier-transformed potentials at the four boundaries $$\begin{aligned}
\label{eq:Theta_top}
\Theta^m_i({\rm top}) &= \sum_{i'=1}^{N_R} {\cal G}^m_{i,i'}({\rm top\to top}) \sigma^m_{i'}({\rm top}){\cal V}_{i'} + \sum_{i'=1}^{N_R} {\cal G}^m_{i,i'}({\rm bot\to top}) \sigma^m_{i'}({\rm bot}){\cal V}_{i'}\nonumber\\
&+ \sum_{k'=1}^{N_z} {\cal G}^m_{i,k'}({\rm inn\to top}) \sigma^m_{k'}({\rm inn}){\cal V}_{0} + \sum_{k'=1}^{N_z} {\cal G}^m_{i,k'}({\rm out\to top}) \sigma^m_{k'}({\rm out}){\cal V}_{N_R+1},\end{aligned}$$ $$\begin{aligned}
\label{eq:Theta_bot}
\Theta^m_i({\rm bot}) &= \sum_{i'=1}^{N_R} {\cal G}^m_{i,i'}({\rm top\to bot}) \sigma^m_{i'}({\rm top}){\cal V}_{i'} + \sum_{i'=1}^{N_R} {\cal G}^m_{i,i'}({\rm bot\to bot}) \sigma^m_{i'}({\rm bot}){\cal V}_{i'}\nonumber\\
&+ \sum_{k'=1}^{N_z} {\cal G}^m_{i,k'}({\rm inn\to bot}) \sigma^m_{k'}({\rm inn}){\cal V}_{0} + \sum_{k'=1}^{N_z} {\cal G}^m_{i,k'}({\rm out\to bot}) \sigma^m_{k'}({\rm out}){\cal V}_{N_R+1},\end{aligned}$$ $$\begin{aligned}
\label{eq:Theta_inn}
\Theta^m_k({\rm inn}) &= \sum_{i'=1}^{N_R} {\cal G}^m_{k,i'}({\rm top\to inn}) \sigma^m_{i'}({\rm top}){\cal V}_{i'} + \sum_{i'=1}^{N_R} {\cal G}^m_{k,i'}({\rm bot\to inn}) \sigma^m_{i'}({\rm bot}){\cal V}_{i'}\nonumber\\
&+ \sum_{k'=1}^{N_z} {\cal G}^m_{k-k'}({\rm inn\to inn}) \sigma^m_{k'}({\rm inn}){\cal V}_{0} + \sum_{k'=1}^{N_z} {\cal G}^m_{k-k'}({
| 846 | 2,523 | 1,435 | 922 | null | null | github_plus_top10pct_by_avg |
ant $C > 0$ such that $$\label{eq:loco.coverage1}
\inf_{w_n \in \mathcal{W}_n} \inf_{P \in \mathcal{P}_n^{\mathrm{LOCO}}} \mathbb{P} \left(
\gamma_{{\widehat{S}}} \in \widehat{D}_{{\widehat{S}}} \right) \geq 1 - \alpha - C
\left(
\mathrm{E}_{1,n} + \mathrm{E}_{2,n} \right)- \frac{1}{n},$$ and $$\label{eq:loco.coverage2}
\inf_{w_n \in \mathcal{W}_n} \inf_{P \in \mathcal{P}_n^{\mathrm{LOCO}}} \mathbb{P} \left(
\gamma_{{\widehat{S}}} \in \tilde{D}_{{\widehat{S}}} \right) \geq 1 - \alpha - C \left(
\mathrm{E}_{1,n} + \tilde{\mathrm{E}}_{2,n} \right) - \frac{1}{n},$$ where $$\begin{aligned}
\label{eq:E1n}
\mathrm{E}_{1,n} &= \frac{2(A+\tau) + \epsilon }{\epsilon} \left(\frac{ (\log n
k)^7}{n}\right)^{1/6},\\
\label{eq:E2n}
\mathrm{E}_{2,n} & =
\frac{N_n^{1/3} (2 \log 2k)^{2/3}}{\underline{\epsilon}^{2/3}},\\
\label{eq:tildeE2.n}
\tilde{\mathrm{E}}_{2,n} &= \min \left\{ \mathrm{E}_{2,n},\frac{ N_n z_{\alpha/(2k)}}{\epsilon^2}
\left(\sqrt{ 2 + \log(2k ) } + 2 \right) \right\}\end{aligned}$$ and $$\label{eq:Nn}
N_n = \left( 2(A+\tau) + \epsilon \right)^2 \sqrt{
\frac{4\log k + 2 \log n}{n} }.$$
[**Remark.**]{} The term $\mathrm{E}_{1,n}$ quantifies the error in applying the high-dimensional normal approximation to $\hat{\gamma}_{{\widehat{S}}} - \gamma_{{\widehat{S}}}$, given in [@cherno2]. The second error term $\mathrm{E}_{2,n}$ is due to the fact that $\Sigma_{{\widehat{S}}}$ is unknown and has to be estimated using the empirical covariance matrix $\widehat{\Sigma}_{{\widehat{S}}}$. To establish $\mathrm{E}_{2,n}$ we use the Gaussian comparison Theorem \[thm:comparisons\]. We point out that the dependence in $\epsilon$ displayed in the term $\mathrm{E}_{2,n}$ above does not follow directly from Theorem 2.1 in [@cherno2]. It can be obtained by tracking constants and using Nazarov’s inequality in the proof of that result. See in for details.
The accuracy of the confidence set can be easily established to be of order $O \lef
| 847 | 3,928 | 395 | 660 | null | null | github_plus_top10pct_by_avg |
adding these constraints to the pre- and postcondition, we obtain the following implication.
- $\forall M,N \in {{\mathbb{Z}}}: ~M > N, ~N \in Dom_N, ~M \in Dom_M \Longrightarrow $\
$~~~~~~~~~~~M+1>N, ~N \in Dom_N, ~M+1 \in Dom_M$
Representing these domains by symbolic coefficients yields the following implication.
- $\forall M,N \in {{\mathbb{Z}}}: ~M > N, ~d_N * N \geq d_N * c_N, ~d_M * M \geq d_M * c_M \Longrightarrow $\
$~~~~~~~~~~~M+1>N, ~d_N * N \geq d_N * c_N, ~d_M * (M+1) \geq d_M * c_M$
To guarantee that the precondition succeeds for the considered derivation, $c_M$ and $c_N$ are required to be the values for $\underline{M}$ and $\underline{N}$ in node $N_5$. Combining these constraints implies non-termination for the query $\leftarrow count\_to(n,L)$, for which the following constraints are satisfied with some unknown integers $c_N,c_M,d_N$ and $d_M$.
- $0>n,~0+1>n$ to guarantee applicability of the derivation
- $c_N = n, ~c_M = 0+1$ to guarantee that the precondition holds
- $d_N = 1 \lor d_N = -1, ~d_M = 1 \lor d_M = -1$,
- $\forall M,N \in {{\mathbb{Z}}}: M > N, d_N * N \geq d_N * c_N, d_M * M \geq d_M * c_M \Longrightarrow $\
$~~~~~~~~~~~M+1>N, d_N * N \geq d_N * c_N, d_M * (M+1) \geq d_M * c_M$ to prove that the condition succeeds infinitely often.
Due to the implication, $d_M$ has to be $1$. $d_N$ can be either $1$ or $-1$. $\hfill \square$
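The conclusion that $d_M$ must be $1$ while $d_N$ may be $\pm 1$ can be checked by brute force over a finite window of integers; the window size and the instantiation $c_N = n = -1$, $c_M = 0+1$ (one choice consistent with $0 > n$) are assumptions of this sketch.

```python
from itertools import product

def implication_holds(d_N, d_M, c_N=-1, c_M=1, bound=20):
    """Check over a finite integer window that the precondition
        M > N,  d_N*N >= d_N*c_N,  d_M*M >= d_M*c_M
    implies the postcondition
        M+1 > N,  d_N*N >= d_N*c_N,  d_M*(M+1) >= d_M*c_M."""
    for M, N in product(range(-bound, bound + 1), repeat=2):
        pre = M > N and d_N * N >= d_N * c_N and d_M * M >= d_M * c_M
        post = M + 1 > N and d_N * N >= d_N * c_N and d_M * (M + 1) >= d_M * c_M
        if pre and not post:
            return False
    return True

results = {(dN, dM): implication_holds(dN, dM)
           for dN in (1, -1) for dM in (1, -1)}
```

Only the choices with $d_M = 1$ survive (for $d_M = -1$, e.g. $M = c_M$, $N = c_N$ satisfies the precondition but $d_M(M+1) \geq d_M c_M$ fails), matching the argument in the text.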
To be able to infer singleton domains, we allow the constant describing the direction of the interval to be $0$. If such a constant $d_I$ is zero, the constraints on the domain are satisfied trivially because they simplify to $0 \geq 0$. To guarantee that the domain is indeed a singleton when $d_I$ is inferred to be zero, a constraint of the form $(1-d_I^2)Exp=(1-d_I^2)*c_I$ is added to the postcondition for every constraint $d_I * I \geq d_I * c_I$. This constraint is trivially satisfied for half-open domains and proves that $\lbrace c_I \rbrace$ is the domain in the case that $d_I = 0$.
In Example \[example:constants\_nt\
| 848 | 4,182 | 1,469 | 536 | 1,437 | 0.789577 | github_plus_top10pct_by_avg |
0.340 0.364 0.366 0.356 0.372
mMSE 0.400 0.352 0.384 0.416 0.368 0.356 0.376
BLB($n^{0.6}$) 0.000 0.000 0.000 0.000 0.002 0.000 0.002
BLB($n^{0.8}$) 0.206 0.230 0.224 0.230 0.210 0.226 0.230
SDB($n^{0.6}$) 0.000 0.000 0.000 0.000 0.000 0.000 0.000
SDB($n^{0.8}$) 0.014 0.018 0.006 0.016 0.010 0.012 0.006
TB 0.982 0.980 0.984 0.968 0.986 0.972 0.970
5 K=50 1.000 1.000 1.000 1.000 1.000 1.000 1.000
K=100 1.000 1.000 1.000 1.000 1.000 1.000 1.000
K=150 1.000 1.000 1.000 1.000 1.000 1.000 1.000
mVC 1.000 0.998 1.000 1.000 0.998 1.000 1.000
mMSE 1.000 1.000 1.000 1.000 1.000 1.000 1.000
BLB($n^{0.6}$) 0.394 0.418 0.406 0.442 0.424 0.466 0.394
BLB($n^{0.8}$) 1.000 1.000 1.000 1.000 1.000 1.000 1.000
SDB($n^{0.6}$) 0.002 0.000 0.012 0.000 0.010 0.004 0.004
SDB($n^{0.8}$) 1.000 1.000 1.000 1.000 1.000 1.000 1.000
TB 1.000 1.000 1.000 1.000 1.000 1.000 1.000
6 K=50 0.996 0.996 0.998 0.984
| 849 | 5,758 | 234 | 361 | null | null | github_plus_top10pct_by_avg |
i,L}).\end{aligned}$$ Recall the definition of ${b^{\chi}} $ in Eq. .
Let $i\in I$.
\(i) Let $m\in {\mathbb{N}}$. The following are equivalent.
- $E_i^m=0$ in $U(\chi )$,
- $F_i^m=0$ in $U(\chi )$,
- $m\ge {b^{\chi}} ({\alpha }_i)$.
\(ii) Let ${\Bbbk }[E_i]$ and ${\Bbbk }[F_i]$ be the subalgebras of $U(\chi )$ generated by $E_i$ and $F_i$, respectively. The multiplication maps $$\begin{aligned}
U^+_{i,K}(\chi ) {\otimes }{\Bbbk }[E_i] \to &\,U^+(\chi ),&
{\Bbbk }[E_i] {\otimes }U^+_{i,K}(\chi ) \to &\,U^+(\chi ),\\
U^+_{i,L}(\chi ) {\otimes }{\Bbbk }[E_i] \to &\,U^+(\chi ),&
{\Bbbk }[E_i] {\otimes }U^+_{i,L}(\chi ) \to &\,U^+(\chi ),\\
U^-_{i,K}(\chi ) {\otimes }{\Bbbk }[F_i] \to &\,U^-(\chi ),&
{\Bbbk }[F_i] {\otimes }U^-_{i,K}(\chi ) \to &\,U^-(\chi ),\\
U^-_{i,L}(\chi ) {\otimes }{\Bbbk }[F_i] \to &\,U^-(\chi ),&
{\Bbbk }[F_i] {\otimes }U^-_{i,L}(\chi ) \to &\,U^-(\chi ),
\end{aligned}$$ are isomorphisms of ${\mathbb{Z}}^I$-graded algebras. \[le:Eheight\]
\(i) is standard in the theory of Nichols algebras. It follows from Eqs. , and the definitions of ${\mathcal{I}}^+(\chi )$ and ${\mathcal{I}}^-(\chi )$. The proof of (ii) for $U^+(\chi )$ can be performed as in [@a-Heck06a]. The formulas with $U^-(\chi )$ follow from those with $U^+(\chi )$ and Eqs. , .
\[le:EmFn\] Let $m,n\in {\mathbb{N}}_0$ and $p\in I$. Then $$\begin{aligned}
E_p^m F_p^n=\sum _{i=0}^{\makebox[0pt]{\scriptsize $\min \{m,n\}$}}
{\textstyle
\frac{\qfact{m}{q_{p p}}\qfact{n}{q_{p p}}}{
\qfact{i}{q_{p p}}\qfact{m-i}{q_{p p}}\qfact{n-i}{q_{p p}}}
}
F_p^{n-i}\prod _{j=1}^i
(q_{p p}^{i+j-m-n}K_p-L_p)E_p^{m-i}.
\end{aligned}$$
For $n=0$ the claim is trivial. By [@p-Heck07b Cor. 5.4], $$E_p^m F_p-F_p E_p^m=
\qnum{m}{q_{p p}}(q_{p p}^{1-m}K_p-L_p)E_p^{m-1}.$$ Hence the lemma holds for $n=1$. It suffices to check the claim for $m\ge n$, since then it also holds for $m<n$ using the algebra anti-isomorphism ${\Omega }$. The proof of the lemma for $m\ge n$ is a standard c
}} , \nabla_a \right]{\bf t}=0,
\label{eq:commutation-isometry-covariant}$$ where $\bf t$ can be a scalar, vector, or tensor. To prove Eq. , one can start by showing the commutation relations for $\bf t$ being a 0-form (which follows immediately from Cartan’s magic formula for a 0-form) and a one-form, then use the Leibniz rule to generalize the relations to the vector and tensor cases. Eq. says that the operator ${\nabla}_{a}$ is ${\ensuremath{SL(2,\mathbb{R})\times U(1)}}$ *equivariant*: that is, its action commutes with left-translation by the group [@MR2954043].
An important consequence of the commutation relation Eq. is that the Casimir element $\Omega$ of the algebra $\mathfrak{g}$ also commutes with the covariant derivative: one simply commutes each Lie derivative through one at a time, using the fact that the coefficients $c^{(i)(j)}$ are constants. As a result, $$\left[ \Omega , \nabla_a \right]\mathbf{t}=0.
\label{eq:commutation-casimir-covariant}$$
Now consider a tensor $\mathbf{t}$ living in an irrep with labels $\lambda_{i}$ and $\omega$, meaning $$\begin{aligned}
{\mathcal{L}}_{X_{(i)}} \mathbf{t} &= \lambda_{i} \mathbf{t} \,, \\
\Omega \cdot \mathbf{t} &= \omega \mathbf{t} \,.\end{aligned}$$ An immediate consequence of Eq. and Eq. is that ${\nabla}\mathbf{t}$ has the same labels $\lambda_{i}$ and $\omega$, $$\begin{aligned}
{\mathcal{L}}_{X_{(i)}} {\nabla}\mathbf{t} &= \lambda_{i} {\nabla}\mathbf{t} \,, \\
\Omega \cdot {\nabla}\mathbf{t} &= \omega {\nabla}\mathbf{t} \,.\end{aligned}$$
Thus any linear differential operator which is built just from ${\nabla}_{a}$ and the metric $g_{ab}$ cannot mix tensors with different irrep labels $(\lambda_{i}, \omega)$. This even extends to differential operators which include the Levi-Civita tensor $\epsilon$ and the Riemann tensor $R_{abcd}$, because these two objects are also annihilated by all of the ${\mathcal{L}}_{X_{(i)}}$. As a result, when tensors are decomposed into a sum over irreps with different labels, they will remain separated in the same ways under t
O^!}) (\vec X;
\varnothing )^{-1}\big)$. This implies equation .
As for equation (\[Hi<0\]), we will use an induction that shows that every closed element in $\textbf{D}(\widehat{\mathcal O^!})
(\vec X;\varnothing)^{-r}$, for $r\geq 1$, is also exact. The induction slides all of the “full” inputs from one of the two “dashed” inputs to the other. As a main ingredient, we will employ the products $*$ and $\#$ defined below, which are used to uniquely decompose an element in $\textbf{D}(\widehat {\mathcal O^!})(\vec X;\varnothing)$ as a sum of products of $*$ and $\#$.
We need the following definition. Given two decorated trees $\varphi\in \textbf{D}(\mathcal O^!)(k), \psi \in
\textbf{D}(\mathcal O^!)(l)$, we define new elements $\varphi * \psi$ and $\varphi \# \psi$ in $\textbf{D}(\widehat{ \mathcal O^!})({
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}},{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}},\ldots,{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt](0.1,0)(0.1,0.4)
\end{pspicture}},{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}};\varnothing)$ as follows. First, for $\varphi * \psi$ take the outputs of $\varphi$ and $\psi$ and insert them into the unique inner product decorated by the generator $1\in \widehat{\mathcal O^!}({
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}},{
\begin{pspicture}(0,0.1)(0.2,0.4)
\psline[linewidth=1pt, linestyle=dashed,dash=4pt 3pt](0.1,0)(0.1,0.4)
\end{pspicture}};\varnothing)$: $$\begin{pspicture}(-3,0)(6,4)
\psline[linestyle=dashed](2.5,0.5)(0,3)
\psline[linestyle=dashed](2.5,0.5)(5,3)
\rput(2.7,0.3){\tiny $1$}
\pscircle[linestyle=dotted](4,2.2){1.4}
\rput(-0.2,1){$\varphi$}
\psline(1.5,1.5)(2,3)
\psline(1.5,1.5)(1,3)
\psline(1.3,2.1
N, then you can apply some of the other parsing techniques suggested (use the Java StringTokenizer or the String.split() method as others here have suggested if it's known to be separated only by spaces). That assumes that you can make assumptions (e.g. the first element in the resulting array is the firstName, the last element is the lastName and everything in between is middle names / initials) about the CN format.
A:
You can use split:
String distinguishedName = "CN=Paul Sebula,OU=BAE,OU=Users,OU=TIES Project,DC=SPHQTest,DC=na,DC=baesystems,DC=com";
String[] names = distinguishedName.split(",")[0].split("=")[1].split(" ");
String firstName = names[0];
String lastName= names.length > 2 ? names[names.length-1] : names[1];
System.out.println(firstName + " " + lastName);
See IDEONE demo, output: Paul Sebula.
This also accounts for just 2 names (first and last only). Note how the last name is accessed as the last item in the array.
Q:
destructors on gc-ed lua objects
I know that Lua is gc-ed. I know that Lua can deal with c objects via userdata.
Here is my question: is there any way to register a function so that it's called when a C userdata object is gc-ed by Lua? [Basically a destructor].
Thanks!
A:
Yes, there is a metamethod called __gc specifically for this purpose. See Chapter 29 - Managing Resources of Programming in Lua (PIL) for more details.
The following snippet creates a metatable and registers a __gc metamethod callback:
luaL_newmetatable(L, "SomeClass");
lua_pushcfunction(L, some_class_gc_callback);
lua_setfield(L, -2, "__gc");
Q:
vaadin VerticalSplitPanel, horizontal scrollbar appears but i can't see the end line on the right
I use a VerticalSplitPanel as a second component of HorizontalSplitPanel.
I can see a vertical scrollbar and horizontal scrollbar when I add a panel as a second component of the VerticalSplitPanel (cf screenshoot).
My problem is that I can't scroll until the end of the line on the right.
This is my source code:
hsplit = new HorizontalSplitPanel();
gma_Y)$ with $$\Delta = \max_{i,j} | \Sigma_X(j,k) - \Sigma_Y(j,k)|$$ Let $\underline{\sigma}^2 = \max\{ \min_j \Sigma_X(j,j) ,
\min_j \Sigma_Y(j,j) \}$. Then, there exists a universal constant $C>0$ such that $$\sup_{t \in \mathbb{R}^p} \left| \mathbb{P}( X \leq t) -
\mathbb{P}( Y \leq t) \right| \leq C \frac{\Delta^{1/3} (2 \log
p)^{1/3}}{\underline{\sigma}^{2/3}}.$$
[**Remark.**]{} The above result further implies that $$\sup_{t >0 } \left| \mathbb{P}( \| X \|_\infty \leq t) -
\mathbb{P}( \| Y \|_\infty \leq t) \right| \leq 2 C \frac{\Delta^{1/3} (2 \log
p)^{1/3}}{\underline{\sigma}^{2/3}},$$ which corresponds to the original formulation of the Gaussian comparison theorem of [@chernozhukov2015comparison].
Appendix 7: The Procedures
==========================
------------------------------------------------------------------------
[Boot-Split]{}
[Input]{}: Data ${\cal D} = \{(X_1,Y_1),\ldots, (X_{2n},Y_{2n})\}$. Confidence parameter $\alpha$. Constant $\epsilon$ (Section \[sec:loco.parameters\]).\
[Output]{}: Confidence set $\hat{C}^*_{{\widehat{S}}}$ for $\beta_{{\widehat{S}}}$ and $\hat{D}^*_{{\widehat{S}}}$ for $\gamma_{{\widehat{S}}}$.
Randomly split the data into two halves ${\cal D}_{1,n}$ and ${\cal D}_{2,n}$.
Use ${\cal D}_{1,n}$ to select a subset of variables ${\widehat{S}}$. This can be forward stepwise, the lasso, or any other method. Let $k= |{\widehat{S}}|$.
Write ${\cal D}_{2,n}=\{(X_1,Y_1),\ldots, (X_n,Y_n)\}$. Let $P_n$ be the empirical distribution of ${\cal D}_{2,n}$.
For $\beta_{{\widehat{S}}}$:
Get $\hat\beta_{{\widehat{S}}}$ from ${\cal D}_{2,n}$ by least squares.
Draw $(X_1^*,Y_1^*),\ldots, (X_m^*,Y_m^*) \sim P_n$. Let $\hat\beta^*_{{\widehat{S}}}$ be the estimator constructed from the bootstrap sample.
Repeat $B$ times to get $\hat\beta_{{\widehat{S}},1}^*, \ldots,\hat\beta_{{\widehat{S}},B}^*$.
Define $\hat{t}_\alpha$ by $$\frac{1}{B}\sum_{b=1}^B I\Bigl(\sqrt{n}||\hat\beta_{{\widehat{S}},b}^* - \hat\beta_{{\widehat{S}}}||_\infty > \hat{t}_\alpha\Big
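The resampling step of Boot-Split for $\beta_{{\widehat{S}}}$ can be sketched as follows; this is a minimal illustration with synthetic data and bootstrap sample size $m=n$ (our simplifications, not the authors' implementation):

```python
import numpy as np

# Minimal sketch of the bootstrap step of Boot-Split for beta_S (synthetic
# data and m = n are our simplifications, not the paper's choices).
rng = np.random.default_rng(0)
n, p, B, alpha = 200, 3, 500, 0.1
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, 0.5, 0.0]) + rng.normal(size=n)

# Least-squares fit on D_{2,n}.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Pairs bootstrap: draw (X*, Y*) ~ P_n and refit, B times.
stats = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=n)
    beta_star, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
    stats[b] = np.sqrt(n) * np.max(np.abs(beta_star - beta_hat))

# t_alpha: the (1 - alpha) quantile of the bootstrap sup-norm statistics.
t_alpha = np.quantile(stats, 1 - alpha)
```

The confidence set is then all $\beta$ with $\sqrt{n}\,\|\beta - \hat\beta_{{\widehat{S}}}\|_\infty \le \hat{t}_\alpha$.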
udo data from unsupervised machine translation is especially effective for BWEs because (1) the pseudo data makes the source and target corpora (partially) parallel; (2) the pseudo data reflects some properties of the original language that help in learning similar embedding spaces between the source and target languages.'
author:
- |
Sosuke Nishikawa, Ryokan Ri and Yoshimasa Tsuruoka\
The University of Tokyo\
7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan\
[sosuke-nishikawa@nii.ac.jp]{}\
[{li0123,tsuruoka}@logos.t.u-tokyo.ac.jp]{}\
bibliography:
- 'coling2020.bib'
title: Data Augmentation for Learning Bilingual Word Embeddings with Unsupervised Machine Translation
---
---
abstract: 'For some estimations and predictions, we solve minimization problems with asymmetric loss functions. Usually, one estimates the regression coefficients for these problems. In this paper, we do not make such an estimation, but rather give a solution by correcting any given predictions so that the prediction error follows a general normal distribution. With our method, we can not only minimize the expected value of the asymmetric loss but also lower the variance of the loss.'
author:
- 'Naoya Yamaguchi, Yuka Yamaguchi, and Ryuei Nishii'
bibliography:
- 'reference.bib'
title: Minimizing the expected value of the asymmetric loss and an inequality of the variance of the loss
---
Introduction {#S1}
============
For some estimations and predictions, we solve minimization problems with loss functions, as follows: Let $\{ (x_{i}, y_{i}) \mid 1 \leq i \leq n \}$ be a data set, where $x_{i}$ are $1 \times p$ vectors and $y_{i} \in \mathbb{R}$. We assume that the data relate to a linear model, $$y = X \beta + \varepsilon,$$ where $y = {}^{t}(y_{1}, \ldots, y_{n})$, $\varepsilon = {}^{t}(\varepsilon_{1}, \ldots, \varepsilon_{n})$, and $X$ is the $n \times p$ matrix having $x_{i}$ as the $i$th row. Let $L$ be a loss function and let $r_{i}(\beta) := y_{i} - x_{i} \beta$. Then we estimate the value: $$\begin{aligned}
\hat{\beta} := \arg\min_{\
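As a concrete, much simplified instance of such a minimization, the following sketch corrects predictions by a constant shift chosen to minimize an empirical asymmetric "pinball" loss; the loss choice and all names are our illustration, not the paper's method:

```python
import numpy as np

# Sketch: correct predictions by a constant shift c chosen to minimize an
# empirical asymmetric "pinball" loss of the shifted errors. The pinball
# loss and all names here are our illustration, not the paper's method.
def pinball(r, tau):
    """Mean pinball loss rho_tau(r) = r * (tau - 1{r < 0})."""
    return np.mean(r * (tau - (r < 0)))

rng = np.random.default_rng(1)
residuals = rng.normal(size=1000)      # prediction errors y - y_hat
tau = 0.8                              # asymmetry: under-prediction costlier

grid = np.linspace(-3.0, 3.0, 601)
c_best = grid[np.argmin([pinball(residuals - c, tau) for c in grid])]
# Minimizing the expected pinball loss shifts by the tau-quantile of the error.
```

For this loss the optimal constant correction is the $\tau$-quantile of the error distribution, which illustrates why correcting predictions (rather than re-estimating $\beta$) can already minimize an asymmetric expected loss.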
he robot’s base. The distance from the mass centre of a link to a joint is denoted as $k_i$ while the joint angle between a link and the base or its preceding link is denoted as $\theta_i$.
![Dual arm robot modelling[]{data-label="fig1"}](model.png){width="0.8\linewidth"}
![Operational motions of dual arm robot[]{data-label="fig2"}](motion.png){width="0.8\linewidth"}
Operationally, in this work we consider the robot manipulators to move in the horizontal $xy$ plane. The robot arms first move towards the object; once the manipulators are firmly attached to the load, the robot picks the object up and transports it to a new position by adjusting its motions to robustly follow the given trajectory, as illustrated in Fig. \[fig2\], where ($x_i$,$y_i$) and ($x_f$,$y_f$) are the initial and final locations of the payload, respectively. Letting $x_m$ and $y_m$ denote the mass center of the payload on the $xy$ plane, the trajectory of the object can be specified by $$\begin{array}{r@{}l@{\qquad}l}
{{x}_{m}}&{}=\frac{{{d}_{2}}}{2}+{{l}_{1}}\cos {{\theta}_{1}}+{{l}_{2}}\cos ({{\theta}_{1}}+{{\theta}_{2}})-\frac{{{d}_{1}}}{2} \\
&{}=-\frac{{{d}_{2}}}{2}+{{l}_{3}}\cos {{\theta}_{3}}+{{l}_{4}}\cos ({{\theta}_{3}}+{{\theta}_{4}})+\frac{{{d}_{1}}}{2}, \\ \nonumber
{{y}_{m}}&{}={{l}_{1}}\sin {{\theta}_{1}}+{{l}_{2}}\sin ({{\theta}_{1}}+{{\theta}_{2}}) \\ \nonumber
&{}={{l}_{3}}\sin {{\theta}_{3}}+{{l}_{4}}\sin ({{\theta}_{3}}+{{\theta}_{4}}).
\end{array}$$
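The first of these constraints can be evaluated numerically; the following sketch (link lengths and angles are made-up values) computes the payload position from the left arm's joint angles:

```python
import numpy as np

# Payload position from the left arm's joint angles, following the first
# constraint above; the numeric link lengths and angles are made-up values.
def payload_position_left(l1, l2, d1, d2, th1, th2):
    x_m = d2 / 2 + l1 * np.cos(th1) + l2 * np.cos(th1 + th2) - d1 / 2
    y_m = l1 * np.sin(th1) + l2 * np.sin(th1 + th2)
    return x_m, y_m

x_m, y_m = payload_position_left(1.0, 0.8, 0.4, 0.3, np.pi / 4, -np.pi / 6)
```

The right-arm expression in $l_3$, $l_4$, $\theta_3$, $\theta_4$ must return the same point, which is what couples the joint angles of the two arms.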
In order to transport the object to a new position, the robot manipulators apply forces $F_1$ and $F_2$ to the payload as illustrated in Fig. \[fig3\]. On the other hand, to rigidly hold the load up, friction forces $F_{s1}$ and $F_{s2}$ are needed. Let $F_{siy}$ and $F_{siz}$ denote the components of the friction forces in the $y$ and $z$ directions, respectively. To prevent the load from rotating around the $y$ and $z$ axes, it is supposed that $F_{s1y}=F_{s2y}$ and $F_{s1z}=F_{s2z}$. Then the dynamic equations of the object
ho_x}{df^*\rho_{F(x)}}(\xi) \ , \xi \in \partial X.$$ and let $K_x \subset \partial X$ denote the set where the function $u_x$ achieves its maximum. In [@biswas6], it is shown that for any $x \in X$, there exists a probability measure $\mu_x$ on $\partial X$ with support contained in $K_x$ such that $\mu_x$ is balanced at $x \in X$ and $f_* \mu_x$ is balanced at $F(x) \in Y$. We will need the following propositions from [@biswas7]:
\[rconstant\] ([@biswas7]) The function $r : X \to \mathbb{R}$ defined by $$r(x) = d_{\mathcal{M}}( \rho_x, f^* \rho_{F(x)} ) \ , x \in X$$ is constant.
\[qisom\] ([@biswas7]) Let $M \geq 0$ denote the constant value of the function $r$. Then the circumcenter map $F : X \to Y$ is a $(1, 2M)$-quasi-isometry.
Given $x \in X$, the flip map $T^1_x X \to T^1_x X, v \mapsto -v$ induces an involution $i_x : \partial X \to \partial X$, defined by requiring that $\overrightarrow{xi_x(\xi)} = -\overrightarrow{x\xi}$ for all $\xi \in \partial X$.
\[maxmin\] ([@biswas7]) For $x \in X$, the function $u_x$ achieves its maximum at $\xi \in \partial X$ if and only if it achieves its minimum at $i_x(\xi) \in \partial X$.
\[dFstar\] ([@biswas7]) Let $x \in X$ be a point of differentiability of the circumcenter map $F : X \to Y$. Then for any $\xi \in K_x$ and any $v \in T_x X$, we have $$< dF_x(v), \overrightarrow{F(x)f(\xi)} > = < v, \overrightarrow{x\xi} >$$ Equivalently, $$dF^*_x( \overrightarrow{F(x)f(\xi)} ) = \overrightarrow{x\xi}$$ for all $\xi \in K_x$.
The following Lemma follows from Propositions \[qisom\] and \[maxmin\]:
\[antisom\] Suppose for some $x \in X$, there exists $\xi \in \partial X$ such that $\xi, i_x(\xi) \in K_x$. Then the circumcenter map $F : X \to Y$ is an isometry.
[**Proof:**]{} It follows from Proposition \[maxmin\] that the maximum and minimum values of the function $u_x$ are equal. On the other hand we know that the maximum and minimum values are negatives of each other. Since the maximum value equals the constant $M$, we have $M = - M$ and hence $M = 0$. It foll
[08:51:03:206]: Note: 1: 1334 2: FailingFile 3: cab1.cab
Error 1334. The file 'FailingFile' cannot be installed because the file cannot be found in cabinet file 'cab1.cab'. This could indicate a network error, an error reading from the CD-ROM, or a problem with this package.
MSI (s) (88:54) [08:51:04:317]: Product: MyProduct V2.2.0 -- Error 1334. The file 'FailingFile' cannot be installed because the file cannot be found in cabinet file 'cab1.cab'. This could indicate a network error, an error reading from the CD-ROM, or a problem with this package.
A:
What does the installer log tell you? Have you updated the FileSize column in the File table? Have you run MSI validation after editing the MSI?
My instinct is you've created a malformed MSI but I need more information to be more specific in how to fix it.
Q:
How to perform same calculations over several matrices?
I have B1, B2, B3, etc. as igraph objects.
This is my code:
setwd("D:\\Educacion\\PeerEffects\\matriz de contactos\\Intentos\\")
filenames <- list.files(path=getwd(),pattern="matriz+.*dta")
list(filenames)
names <-substr(filenames,1,7)
for(i in names)
{
filepath <- file.path("D:/Educacion/PeerEffects/matriz de contactos/Intentos",paste(i,".dta",sep=""))
assign(i, read.dta(filepath))
}
for (i in 1:length(names)){
assign(paste0("A", i), unname(as.matrix(get(paste0("matriz", i)))))
assign(paste0("B", i), graph.adjacency(get(paste0("A", i)), mode = "directed", weighted = NULL, diag = FALSE))
}
This is what I need to do to every igraph object B1, B2, etc., where "matrices" should be the list of igraph objects:
for (i in matrices) {
average.path.length(i)
diameter(i)
transitivity(i)
degree(i)
}
This is the error I get when matrices is a list of names (B1, B2, etc):
Error in average.path.length(i) : Not a graph object
A:
To get the variable called 'matrix1', use get
m <- get(paste0('matrix', i))
Then do all your things like degree(m), ...
Though as I mentioned in your previous question, it would be better if matrices was the list of
title: 'Missing Massive Stars in Starbursts: Stellar Temperature Diagnostics and the IMF'
---
INTRODUCTION {#sec:intro}
============
In the very local ($D<5 h_{100}^{-1}$) Universe, the circumnuclear regions of just four galaxies (M82, NGC 253, NGC 4945, and M83) are responsible for $\sim25\%$ of the current massive star formation [@heckman97]. In these “circumnuclear starburst galaxies”, the star formation is confined to the inner $0.2$ to $2$ kpc, in a dense, gas–rich disk where star formation rates can reach $1000$ [@robhubble]. If the starburst initial mass function (IMF) includes significant numbers of low–mass stars, then each starburst is currently building up the stellar component of its host galaxy as well. A starburst enriches and heats its interstellar medium, as well as the local intergalactic medium. Starbursts can also drive large–scale winds that eject interstellar gas, presumably casting metals into the voids and heating the gas between galaxies. Starburst galaxies thus play a number of important roles in galaxy evolution.
If starbursts could be dated, then a sequence could be pieced together, charting starburst evolution from triggering to post–starburst quiescence. Starburst ages are most directly determined by understanding the population of rapidly evolving massive stars. The feedback effect of a starburst on its gas supply is transmitted through massive stellar winds and supernovae–driven superwinds. Thus, understanding the evolution of starbursts and their effects on the interstellar and intergalactic media both critically depend on understanding the populations of massive stars.
Unfortunately, since starburst galaxies are too far away to count indiv
e specific properties of GMAX0505. The structure of GMAX0505 is designed for visible light. In particular, its advanced light pipe structure optimizes the response to visible light as described in Yokoyama et al.[@Yokoyama2018] We apply it to x-ray imaging and polarimetry for the first time. Although Yokoyama et al.[@Yokoyama2018] provide a schematic view of the pixel structure, the size of the photodiode, which is essential for x-ray detection, is not provided. We thus estimate the thickness of the x-ray detection layer to be $\sim$5$\mathrm{\mu}$m in Sec. 3.3. Although the device is usually equipped with a cover glass, we employ the device without it. However, micro-lenses implemented in the sensor at the illumination side are not removed in our experiment. As this sensor is very sensitive to visible light, we cover it with a dark curtain.
We adopt the evaluation board and software developed by GPixel Inc. to operate GMAX0505 and acquire data. GMAX0505 has 32 gain levels, of which we utilize two: when the gain register is set to 0, GMAX0505 operates in a low-gain mode, and when it is set to 4, in a high-gain mode. GMAX0505 is operated at room temperature and under atmospheric conditions throughout the experiments in this paper. Readout noise is evaluated from data with an exposure time of 1 ms. The average noise is 5.7e${}^{-}$ (RMS) in the low-gain mode and 2.7e${}^{-}$ (RMS) in the high-gain mode.
----------------------------------------------------------------------------------------
![GMAX0505 properties.[]{data-label="GMAX_table"}](GMAX_image.eps "fig:"){width="7cm"}
----------------------------------------------------------------------------------------
------------------ ----------------------------------------------
chip size $15.85~\mathrm{mm} \times 16.88~\mathrm{mm}$
pixel size $2.5~\mu\mathrm{m} \times 2.5~\mu\mathrm{m}$
number of pixels $5120 \times5120$
Effective Area $12.8
places protoset $G_\heartsuit^{(1)} = \tilde{V}({\mathit{s}}_{\text{crux}})$.
For a discussion of protoset $G_\heartsuit^{(n)}$ in context of the Cartesian product, see Appendix §\[D:PROTOSET\].
### Partial order {#S:CONE_ORDER}
Membership in a converse iterative operator induces a partial ordering:
Let ${\mathbb{S}}$ be a step space with converse iterative operator $\tilde{V} \colon {\mathbb{S}} \to {\mathscr{P}({{\mathbb{S}}})}$. If ${\mathit{s}}' \in \tilde{V}({\mathit{s}})$, then ${\mathit{s}}'$ *precedes* ${\mathit{s}}$, written ${\mathit{s}}' \prec {\mathit{s}}$.
### Predecessor walk {#S:CONE_WALK}
A predecessor walk begins at step ${\mathit{s}}_0 = {\mathit{s}}_{\text{crux}}$ and proceeds backwards, indexing through the negative integers. Here we abuse the proper sense of the term [sequence]{} slightly by permitting an index set other than the natural numbers.
Let ${\mathbb{S}}$ be a step space. A localized predecessor walk starting with step ${\mathit{s}}_0 = {\mathit{s}}_{\text{crux}}$ is a *finite* sequence in step space such that ${\mathit{s}}_{i-1} \prec {\mathit{s}}_i$ for every $i \leq 0$.
For example in the case $i = -2$, we have ${\mathit{s}}_{-3} \prec {\mathit{s}}_{-2}$.
Since a localized predecessor walk ${\mathit{w}}$ is a *finite* sequence of steps, it has a finite number of terms, whose indices $i$ run over $-(n-1) \leq i \leq 0$, where $n = {\lvert{{\mathit{w}}}\rvert} < \infty$ is the number of steps in ${\mathit{w}}$.
\[S:COMPLETE\_CONE\_WALK\] A set ${\mathbb{W}}$ of localized predecessor walks, all starting at ${\mathit{w}}_0 = {\mathit{s}}_{\text{crux}}$, is *complete* if\
$$\forall({\mathit{w}} \in {\mathbb{W}})
\forall(-({\lvert{{\mathit{w}}}\rvert}-2) \leq i \leq 0)
\forall({\mathit{s}} \in \tilde{V}({\mathit{w}}_{i}))
\exists({\mathit{e}} \in {\mathbb{W}})
: {\mathit{w}}_{i} = {\mathit{e}}_{i} \wedge {\mathit{s}} = {\mathit{e}}_{i-1}.$$
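Under a concrete encoding of walks as oldest-first tuples (our encoding, not the paper's), the completeness condition can be checked directly:

```python
# Walks encoded oldest-first: w = (s_{-(n-1)}, ..., s_0), so the step with
# index i <= 0 is w[len(w) - 1 + i]. This encoding is ours, not the paper's.
def step(w, i):
    return w[len(w) - 1 + i]

def is_complete(walks, v_tilde):
    """Check the completeness condition for walks under the operator V~."""
    for w in walks:
        for i in range(-(len(w) - 2), 1):          # -(|w| - 2) <= i <= 0
            for s in v_tilde(step(w, i)):
                if not any(
                    len(e) >= 2 - i                # e_{i-1} must be defined
                    and step(e, i) == step(w, i)
                    and step(e, i - 1) == s
                    for e in walks
                ):
                    return False
    return True

# Toy operator with a branching predecessor at the crux step 2: V~(2) = {0, 1}.
v = lambda m: {0, 1} if m == 2 else set()
# {(1, 2)} misses the predecessor 0 of step 2, so it is incomplete;
# adding the walk (0, 2) restores completeness.
```

In this toy example the branching at the crux forces the set of walks to contain one walk per predecessor, which is the "combinatorial diversity" the condition guarantees.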
Completeness assures combinatorial diversity. An algorithm equivalent to the above one-liner is:\
\
$
\text{for each }{\mathi
$. So far, we do not have a uniform description for $\rho_{{\mbox{\boldmath $\alpha$}}}(s_i)$, so we define $\rho_{{\mbox{\boldmath $\alpha$}}}(s_1)$ and $\rho_{{\mbox{\boldmath $\alpha$}}}(s_2)$ one by one.
First we define $\rho_{{\mbox{\boldmath $\alpha$}}}(s_1)$. For tableaux $p_1$ and $p_2$ of $\mathbb{T}({\mbox{\boldmath $\alpha$}})$ which go through $(\widetilde{\emptyset}, \widehat{\emptyset},
\widetilde{\emptyset}, \widehat{\emptyset}, \widetilde{\emptyset})$ and $(\widetilde{\emptyset}, \widehat{\emptyset},
\widetilde{{\mbox{\tiny\yng(1)}}}, \widehat{\emptyset}, \widetilde{\emptyset})$ at the 0-th, the $1-\frac{1}{2}$-th, the 1-st, the $2-\frac{1}{2}$-th and the 2-nd coordinate respectively, let $u_1$ and $u_2$ be the corresponding standard vectors. Then we define $\rho_{{\mbox{\boldmath $\alpha$}}}(s_1)(u_1\ u_2)$ by $$\rho_{{\mbox{\boldmath $\alpha$}}}(s_1)(u_1\ u_2)
= (u_1\ u_2)
\begin{pmatrix}
1 & 0\\
0 & 1
\end{pmatrix}.$$
For tableaux $p_3$, $p_4$ and $p_5$ of $\mathbb{T}({\mbox{\boldmath $\alpha$}})$ which go through $(\widetilde{\emptyset}, \widehat{\emptyset},
\widetilde{\emptyset}, \widehat{\emptyset}, \widetilde{{\mbox{\tiny\yng(1)}}})$, $(\widetilde{\emptyset}, \widehat{\emptyset},
\widetilde{{\mbox{\tiny\yng(1)}}}, \widehat{\emptyset}, \widetilde{{\mbox{\tiny\yng(1)}}})$ and $(\widetilde{\emptyset}, \widehat{\emptyset},
\widetilde{{\mbox{\tiny\yng(1)}}}, \widehat{{\mbox{\tiny\yng(1)}}}, \widetilde{{\mbox{\tiny\yng(1)}}})$ at the 0-th, the $1-\frac{1}{2}$-th, the 1-st, the $2-\frac{1}{2}$-th and the 2-nd coordinate respectively, let $u_3$, $u_4$ and $u_5$ be the corresponding standard vectors. Then we define $\rho_{{\mbox{\boldmath $\alpha$}}}(s_1)(u_3\ u_4\ u_5)$ by $$\rho_{{\mbox{\boldmath $\alpha$}}}(s_1)(u_3\ u_4\ u_5)
= (u_3\ u_4\ u_5)
\begin{pmatrix}
0 & 1 & 1\\
\frac{1}{Q-1} & \frac{Q-2}{Q-1} & \frac{-1}{Q-1}\\
\frac{Q-2}{Q-1} & -\frac{Q-2}{Q-1} & \frac{1}{Q-1}
\end{pmatrix}.$$
Next we define $\rho_{{\mbox{\boldmath $\alpha$}}}(s_2)$. In the following, we write $$p
**Method** **Pre(%)** **Rec(%)** **F1(%)** **T(ms)** **Pre(%)** **Rec(%)** **F1(%)** **T(ms)** **Pre(%)** **Rec(%)** **F1(%)** **T(ms)** **Pre(%)** **Rec(%)** **F1(%)** **T(ms)**
**SDD-R** 89.51 48.75 63.12 266.77 21.97 **99.38** 35.98 11.12 6.67 0.63 1.14 249.15 21.22 72.50 32.83 7.81
**SDD-R+** 91.25 91.25 **91.25** **265.96** **61.88** 61.88 61.88 10.08 9.38 9.38 9.38 **247.31** **44.38** 44.38 44.38 6.92
**SDD-E Static** **92.46** 68.75 78.86 292.50 36.55 98.13 53.26 5.75 6.67 0.63 1.14 271.45 36.07 86.25 50.86 5.64
**SDD-E Static+** 85.02 32.50 47.02 293.77 46.24 91.88 61.52 5.95 10.00 0.63 1.18 272.71 43.60 76.25 **55.48** 5.68
**SDD-E Dynamic** 49.11 **99.38** 65.73 699.97 23.01 **99.38** 37.37 245.65 10.36 **18.13** **13.18** 681.09 22.09 **93.13** 35.71 242.85
**SDD-E Dynamic+** 73.21 98.75 84.09 701.06 48.02 96.25 **64.07** 255.43 8.15 6.88 7.46 681.89 40.79 78.13 53.59 253.03
**MGoF** 14.08 21.88 17.13 292.14 13.01 4.38 6.55 **3.64** **12.50** 3.13 5.00 250.42 12.50 3.13 5.00 **3.71**
--C37 119.0 (4)
S7---W2---Ag4 60.14 (3) C33---C32---P5 118.1 (3)
S5---W2---Ag4 56.25 (3) C37---C32---P5 122.8 (3)
S8---W2---Ag3 112.82 (4) C32---C33---C34 120.8 (4)
S6---W2---Ag3 58.26 (3) C32---C33---H33 119.6
S7---W2---Ag3 137.85 (3) C34---C33---H33 119.6
S5---W2---Ag3 54.42 (3) C33---C34---C35 119.6 (4)
Ag4---W2---Ag3 81.533 (11) C33---C34---H34 120.2
P1---Ag1---S1 125.90 (4) C35---C34---H34 120.2
P1---Ag1---S2 125.34 (4) C36---C35---C34 120.2 (4)
S1---Ag1---S2 92.36 (4) C36---C35---H35 119.9
P1---Ag1---S5 112.32 (4) C34---C35---H35 119.9
S1---Ag1---S5 103.87 (4) C35---C36---C37 119.7 (4)
S2---Ag1---S5 89.89 (3) C35---C36---H36 120.2
P1---Ag1---W1 156.66 (3) C37---C36---H36 120.2
S1---Ag1---W1 46.40 (3) C32---C37---C36 120.6 (4)
S2---Ag1---W1 47.43 (2) C32---C37---H37 119.7
S5---Ag1---W1 90.70 (2) C36---C37---H37 119.7
P2---Ag2---S3 134.21 (4) P5---C38---P4 114.5 (2)
P2---Ag2---S2 118.94 (4) P5---C38---H38A 108.6
S3---Ag2---S2 93.86 (4) P4---C38---H38A 108.6
P2---Ag2---S5 112.41 (4) P5---C38---H38B 108.6
S3---Ag2---S5 98.39 (4) P4---C38---H38B 108.6
S2---Ag2---S5 88.61 (3) H38A---C38---H38B 107.6
P2---Ag2---W1 155.06 (3) C40---C39---C44 119.8 (4)
S3---Ag2---W1 46.95 (3) C40---C39---P4 119.9 (3)
S2---Ag2---W1 47.38 (2) C44---C39---P4 120.1 (3)
S5---Ag2---W1 89.76 (2) C41---C40---C39 119.5 (4)
P4---Ag3---P3 130.09 (4) C41---C40---H40 120.3
P4---Ag3---S5 105.22 (4) C39---C40---H40 120.3
P3---Ag3---S5 123.65 (4) C42---C41---C40 120.6 (5)
P4---Ag3---S6 102.12 (4) C42---C41---H41 119.7
P3---Ag
$D(i,j)$ is the rate of hopping attempts from site $i$ to $j$ and depend on the investigated model. The sum runs over all $N$ pairs of nearest-neighbors of the lattice. Let $n_i$ be the number of lateral bonds of site $i$ and $n^\text{max}_i$ the largest number of bonds among the nearest-neighbors of $i$. For the WV model, $D(i,j)$ is given by $$D(i,j)=
\left\{
\begin{array}{cl}
1/q_i^\text{WV}, & \textrm{if } ~ n_j = n^\text{max}_i \textrm{ and } n_i < n^\text{max}_i\\
0, & \textrm{otherwise.}
\end{array}
\right.,$$ where $q_i^\text{WV}$ is the number of nearest-neighbors with $n^\text{max}_i$ lateral bonds. We can express $D(i,j)$ for the DT as $$D(i,j)=
\left\{
\begin{array}{cl}
1/q_i^\text{DT}, & \textrm{if } ~ n_j > 0 \textrm{ and } n_i = 0\\
0, & \textrm{otherwise.}
\end{array}
\right.,$$ where $q_i^\text{DT}$ is the number of nearest-neighbors with at least one lateral bond. The quantity $J_z$ is the average interlayer diffusion rate per site.
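A minimal sketch of the WV rate (our toy encoding of the lattice; the DT rate is analogous with the conditions $n_j > 0$ and $n_i = 0$):

```python
# Toy encoding of the WV hopping-attempt rate D(i, j): n[i] is the number of
# lateral bonds of site i, neighbors(i) its nearest-neighbor list. The DT
# rate is analogous with the conditions n_j > 0 and n_i == 0.
def wv_rate(i, j, n, neighbors):
    n_max = max(n[k] for k in neighbors(i))             # n_i^max
    if n[j] != n_max or n[i] >= n_max:
        return 0.0
    q = sum(1 for k in neighbors(i) if n[k] == n_max)   # q_i^WV
    return 1.0 / q

# Three-site ring: site 0 may only hop to neighbor 1, which has the most bonds.
n = [0, 2, 1]
neighbors = lambda i: [(i - 1) % 3, (i + 1) % 3]
```

Here site 0 has a single maximally bonded neighbor, so all of its hopping-attempt rate goes to that neighbor.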
-- --------- --------- -------- --------------------
WV DT WV DT
-0.0034 -0.0042 -0.015 -5$\times 10^{-5}$
-0.0011 -0.0052 -0.019 -3$\times 10^{-4}$
-0.014 -0.0053 -0.020 -5$\times 10^{-4}$
-0.090 -0.050 -0.047 -0.030
-- --------- --------- -------- --------------------
: \[fit\_parameters\] Parameters $J_\infty$ obtained in the regression using Eq. (\[current\_fit\]) in the last decade of data of the out-of-plane current curves ($t>10^6$ for $d=1$ and $t>10^5$ for $d=2$).
The currents for simulations in $d=1$ are presented in Fig. \[current\_1d\]. All versions in both $1+1$ and $2+1$ dimensions are characterized by a current with a downward (negative) flux with the intensity decreasing monotonically. Considering the last decade of time, we estimated the current $J_\infty$ for $t\rightarrow\infty$ using a regression with a simple allometric function of the form $$J_z = J_\infty + a t^{-\gamma}\label{current_fit}
right|&\ge\frac{|AH|}{|X_1|\cdots|X_\ell|}\\
&\ge\frac{|AH|}{\exp(e^{O(s)}\log^{O(1)}2K)^\ell}\\
&\ge\frac{|AH|}{\exp(e^{O(s^2)}\log^{O(s)}2K)}.\end{aligned}$$ In particular, setting $Q_i=\{u_i^{-1},1,u_i\}$ for $i=1,\ldots,\ell$, we have $$\left|H\prod\{P_1,\ldots,P_k,Q_1,\ldots,Q_\ell\}\right|\ge\frac{|AH|}{\exp(e^{O(s^2)}\log^{O(s)}2K)},$$ with the product again taken in the same order.
Now $\prod\{P_1,\ldots,P_k,Q_1,\ldots,Q_\ell\}$ is an ordered progression, say $P_{\text{\textup{ord}}}(x;L)$. The ranks of the progressions $P_i$ coming from \[prop:prod.of.progs.and.small\] are at most $e^{O(s)}\log^{O(1)}2K$, and hence the rank of $P_{\text{\textup{ord}}}(x;L)$ is at most $ke^{O(s)}\log^{O(1)}2K+\ell$, which is at most $e^{O(s^2)}\log^{O(s)}2K$. Furthermore, the containment implies that $$\begin{aligned}
P_{\text{\textup{ord}}}(x;L)&\subset A^{(k+\ell)e^{O(s)}}\\
&\subset A^{e^{O(s^2)}\log^{O(s)}2K},\end{aligned}$$ and \[prop:nilprog.equiv\] therefore implies that $$P_{\text{\textup{ord}}}(x;L)\subset P_{\text{\textup{nil}}}(x;L)\subset\overline P(x;L)\subset A^{e^{O(s^3)}\log^{O(s^2)}2K}.$$ This completes the proof.
\[rem:poly.bound\]The polynomial bound on the product set of $A$ required to contain $H$ in \[thm:new.gen\] comes from our applications of Propositions \[prop:lower.step.in.quotient\] and \[prop:grp.in.normal\]. These propositions are themselves both applications of the same result, namely [@nilp.frei Proposition 7.2], and so the polynomial bound in \[thm:new.gen\] can be traced to this result. It appears that a new idea would be required to improve this result in such a way as to remove the polynomial bound from \[thm:new.gen\].
Covering arguments {#sec:covering}
==================
In this section we use covering arguments to prove \[cor:ruzsa,cor:chang.ag\]. \[cor:ruzsa\] follows from \[thm:new.gen\] and a straightforward application of Ruzsa’s covering lemma, as follows.
We may assume that $A$ generates $G$. Let $H$ and $P=P_{\text{\textup{ord}}}(x;L)$ be as given by \[t
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Study Year Country Patients Number LVAD type AKI definition AKI and/or RRT incidence
------------------------------------- ------ -------------------------------------------------- ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -------- ------------------------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------------------------------- -----------------------------------
Kaltenmaier et al. \[[@CIT0040]\] 2000 Germany Patient underwent LVAD implantation during 1988--1995\ 227 Pulsatile\ RRT
Accession Description Score Coverage MW \[kDa\] calc. pI
----------- ---------------------------------------------- --------- ---------- ------------ ----------
P0C0L4 Complement C4-A 2219.29 76.26 192.7 7.08
P00734 Prothrombin 2011.89 71.54 70.0 5.90
P19823 Inter-alpha-trypsin inhibitor heavy chain H2 1452.50 55.81 106.4 6.86
P19827 Inter-alpha-trypsin inhibitor heavy chain H1 1205.49 54.23 101.3 6.79
Q06033 Inter-alpha-trypsin inhibitor heavy chain H3 256.28 46.97 99.8 5.74
P02760 Protein AMBP 213.99 25.57 39.0 6.25
P00740 Coagulation factor IX 114.46 45.55 51.7 5.47
P00742 Coagulation factor X 74.31 36.07 54.7 5.94
P02768 Serum albumin 46.51 27.91 69.3 6.28
P01857 Immunoglobulin heavy constant gamma 1 45.77 43.33 36.1 8.19
P49747 Cartilage oligomeric matrix protein 34.16 20.08 82.8 4.60
P01834 Immunoglobulin kappa constant 32.23 49.53 11.8 6.52
P67936 Tropomyosin alpha-4 chain 27.70 37.90 28.5 4.69
P01860 Immunoglobulin heavy constant gamma 3 27.37 25.99 41.3 7.90
P51884 Lumican 25.59 39.94 38.4 6.61
P01861 Immunoglobulin heavy constant gamma 4 24.73 26.61 35.9 7.36
P01859 Immunoglobulin heavy constant gamma 2 23.14 32.82 35.9 7.59
P04004 Vitronectin 22.83 23.43 54.3 5.80
P07359 Platelet glycoprotein Ib alpha chain 21.45 9.82 71.5 6.29
P07225 Vitamin K-dependent protein S 20.38 10.65 75.1 5.67
P0DOY2 Immunoglobulin lambda constant 2
ing filtration with 11 μm filter.
Accession Description Score Coverage MW \[kDa\] calc. pI
----------- ---------------------------------------------- --------- ---------- ------------ ----------
P00734 Prothrombin 2458.17 67.20 70.0 5.90
P0C0L5 Complement C4-B 2389.47 80.68 192.6 7.27
P0C0L4 Complement C4-A 2352.05 79.59 192.7 7.08
P19823 Inter-alpha-trypsin inhibitor heavy chain H2 1268.03 56.87 106.4 6.86
P19827 Inter-alpha-trypsin inhibitor heavy chain H1 1122.57 59.50 101.3 6.79
Q06033 Inter-alpha-trypsin inhibitor heavy chain H3 352.56 40.00 99.8 5.74
P02760 Protein AMBP 285.56 27.56 39.0 6.25
P00740 Coagulation factor IX 182.33 57.27 51.7 5.47
P00742 Coagulation factor X 80.33 41.19 54.7 5.94
P01834 Immunoglobulin kappa constant 54.45 54.21 11.8 6.52
P49747 Cartilage oligomeric matrix protein 53.77 24.31 82.8 4.60
P01857 Immunoglobulin heavy constant gamma 1 45.54 43.33 36.1 8.19
P02768 Serum albumin 44.26 25.78 69.3 6.28
P07359 Platelet glycoprotein Ib alpha chain 33.83 16.56 71.5 6.29
P01860 Immunoglobulin heavy constant gamma 3 33.53 31.83 41.3 7.90
P0DOY3 Immunoglobulin lambda constant 3 33.11 70.75 11.3 7.24
P01861 Immunoglobulin heavy constant gamma 4 30.81 23.85 35.9 7.36
P04004 Vitronectin 27.21 23.64 54.3 5.80
ition of the relationship between cause and effect in combination with the small number of included subjects. Also, the measurement of other markers of oxidative stress was unavailable.
V.R.: research plan, statistics, data collection and manuscript writing; V.K.: hemodynamic measurements and Echo-cardiographical assessment; D.K.: biochemical analyses, ELISA and radioimmunoassays.
APC was sponsored by MDPI.
The authors declare no conflicts of interest.
jcdd-05-00035-t001_Table 1
######
Differences between hemodiafiltration patients (*n* = 96) and healthy control subjects (*n* = 45) (\* *p* \< 0.05).
Characteristic Hemodiafiltration Patients (*n* = 96) Mean ± SD or/Mean Rank Healthy Control Subjects (*n* = 45) Mean ± SD or/Mean Rank *p* Value
---------------------------- -------------------------------------------------------------- ------------------------------------------------------------ -----------
Gender (males %/females %) 62(64.6%)/34(35.4%) 24 (53.3%)/21 (46.7%) 0.1
Age (years) 62.1 ± 14.2 59.6 ± 11.9 0.6
BMI (Kg/m^2^) 25.08 ± 3.9 25.9 ± 3.6 0.2
Cholesterol (mg/dL) 163.4 ± 45.3 \* 188.7 ± 27.3
(\lambda_{m} -\lambda_{k} ) },
\label{S-alpha-beta-0th-final}\end{aligned}$$ and the oscillation probability by $P(\nu_\beta \rightarrow \nu_\alpha) = \vert S_{\alpha \beta}^{(0)} \vert^2$.
Finally, armed with the solution , we can also calculate all higher-order terms in the oscillation probability, e.g. those in eq. , since only combinations of the form $X_{ik} X_{jk}^*$ (no sum over $k$ implied) can appear.
Low-scale vs. high-scale unitarity violation and $W$ corrections {#sec:UV-low-high-E}
=================================================================
Low-scale versus high-scale unitarity violation {#sec:low-vs-high}
-------------------------------------------------
In leptonic unitarity tests, a clear understanding of the relationship between low-scale and high-scale unitarity violation may be one of the key issues. While the presence of the probability-leaking term $\mathcal{C}_{\alpha \beta}$, if detected, clearly testifies to low-scale unitarity violation [@Fong:2016yyh], it would be useful to have a global view of the difference between them, looking for alternative ways to reach the goal. In this section, we give a preliminary discussion of this topic.
One would guess, on intuitive grounds, that at zeroth order in $W$ our system describes high-scale unitarity violation. There are no “$W$ corrections” in high-scale unitarity violation because the energy scale is so high that the high-mass sector is truncated. This is in agreement with the formulation in ref. [@Blennow:2016jkn], with which we share the same evolution equation (\[Schroedinger-eq-0th\]) in the vacuum mass eigenstate basis. See also [@Antusch:2006vwa].[^16] Based on this perspective, we conclude this subsection by stating the generic characteristic differences between high-scale and low-scale unitarity violation:
- Our system calculated with evolution equations (\[Schroedinger-eq-0th\]) in leading (zeroth) order in $W$ expansion, which is common to both high- and low-scale unitarity violation, would serve as a firs
ow a surface abundance about 0.2 - 0.3 dex lower than the predicted one. A possible way to improve the agreement with these stars is to adopt an initial lithium abundance of about $\epsilon_\mathrm{Li} \approx 3$. However, this method does not improve the agreement with the low-mass stars, a problem still largely discussed in the literature [see e.g. @king00; @jeffries00; @umezu00; @dantona03; @clarke04; @xiong06; @king10 and references therein].
The results we obtain for young open clusters confirm the partial results of previous analyses, which noticed that models with low convection efficiency during the pre-MS phase agree much better with lithium observations than those with solar or MS-calibrated values [see e.g., @ventura98; @dantona03; @landin06].
Binary stars {#sec:binary}
------------
\[tab:binarie\]
[lccccc]{} System & Mass \[M$_\odot$\] & $\log T_\mathrm{eff} \mathrm{[K]}$ & $\log L/L_{\odot}$ & $\epsilon_\mathrm{Li}$ & \[Fe/H\]\
\
ASAS J052821+0338.5 (a) & $1.387 \pm0.017$ & $3.708\pm0.009$ & $0.314\pm0.034$ & $3.10\pm0.20$ & $-0.20\pm0.20$\
ASAS J052821+0338.5 (b) & $1.331 \pm0.017$ & $3.663\pm0.009$ & $0.107\pm0.034$ & $3.35\pm0.20$ & $-0.10\pm0.20$\
\
EK Cep (a) & $2.020\pm0.010$ & $3.954\pm0.010$ & $1.170\pm0.040$ & $-$ & $+0.07\pm0.05$\
EK Cep (b) & $1.124\pm0.012$ & $3.755\pm0.015$ & $0.190\pm0.070$ & $3.11\pm0.30$ & $+0.07\pm0.05$\
\
RXJ 0529.4+0041 A (a) & $1.270\pm0.010$ & $3.716\pm0.013$ & $0.140\pm0.080$ & $3.20\pm0.30$ & $-0.01\pm0.04$\
RXJ 0529.4+0041 A (b) & $0.930\pm0.010$ & $3.625\pm0.015$ & $-0.280\pm0.150$ & $2.40\pm0.50$ & $-0.01\pm0.04$\
\
V117
elta_{K} - h_{i} ) (\Delta_{L} - h_{i} ) ( h_{j} - h_{i} )^2 } e^{- i h_{i} x}
\nonumber \\
&-&
\frac{ 1 }{ (\Delta_{K} - h_{j} )^2 (\Delta_{L} - h_{j} )^2 ( h_{j} - h_{i} )^2 }
\biggl\{
3 h_{j}^2 - 2 h_{i} h_{j} - \left( 2 h_{j} - h_{i} \right) (\Delta_{K} + \Delta_{L} ) + \Delta_{K} \Delta_{L}
\biggr\}
e^{- i h_{j} x}
\biggr]
\nonumber \\
&\times&
\left\{ (UX)^{\dagger} A W \right\}_{i K}
\left\{ W ^{\dagger} A (UX) \right\}_{K j}
\left\{ (UX)^{\dagger} A W \right\}_{j L}
\left\{ W ^{\dagger} A (UX) \right\}_{L j}
\nonumber \\
&+&
\sum_{K \neq L} \sum_{k \neq i, j}
\biggl[
\frac{1}{ ( h_{k} - h_{i} ) ( h_{k} - h_{j} ) ( h_{j} - h_{i} ) ( \Delta_{K} - h_{i} ) ( \Delta_{K} - h_{j} ) ( \Delta_{L} - h_{i} ) ( \Delta_{L} - h_{j} ) }
\nonumber \\
&+&
\biggl\{
( h_{k} - h_{j} ) (\Delta_{K} - h_{j} ) ( \Delta_{L} - h_{j} ) e^{ - i h_{i} x}
-
( h_{k} - h_{i} ) (\Delta_{K} - h_{i} ) ( \Delta_{L} - h_{i} ) e^{ - i h_{j} x}
\biggr\}
\nonumber \\
&+&
\frac{1}{ ( h_{k} - h_{i} ) ( h_{k} - h_{j} ) ( \Delta_{K} - h_{k} ) ( \Delta_{L} - h_{k} ) }
e^{ - i h_{k} x}
\nonumber \\
&+&
\frac{1}{ ( \Delta_{K} - h_{i} ) ( \Delta_{K} - h_{j} ) ( \Delta_{K} - h_{k} ) ( \Delta_{L} - h_{i} ) ( \Delta_{L} - h_{j} ) ( \Delta_{L} - h_{k} ) ( \Delta_{L} - \Delta_{K} ) }
\nonumber \\
&\times&
\biggl\{ ( \Delta_{K} - h_{i} ) ( \Delta_{K} - h_{j} ) ( \Delta_{K} - h_{k} ) e^{- i \Delta_{L} x}
- ( \Delta_{L} - h_{i} ) ( \Delta_{L} - h_{j} ) ( \Delta_{L} - h_{k} ) e^{- i \Delta_{K} x}
\biggr\}
\biggr]
\nonumber \\
&\times&
\left\{ (UX)^{\dagger} A W \right\}_{i K}
\left\{ W ^{\dagger} A (UX) \right\}_{K k}
\left\{ (UX)^{\dagger} A W \right\}_{k L}
\left\{ W ^{\dagger} A (UX) \right\}_{L j}.
\label{hatS-ij-W4-H4-double}\end{aligned}$$
Order $W^4$ $\hat{S}$ matrix elements $\hat{S}_{i i}^{(4)} [3]$ and $\hat{S}_{i i}^{(4)} [4]$ {#sec:hatSii-3nd-4th}
----------------------------------------------------------------------------------------------
$\hat{S} _{i i}^{(4)} [3]$ is given by $$\begin{aligned}
&& \hat{S}_{ii}^
----
T~0~ WT 1 47 \*^1^ 4: 100 0 60.0
T~0~ 1 71 \*^1^ 4: 100 100 38.0
T~1~ WT 6 53.5 ± 4.8 4: 100 0 42.4 ± 5.8
T~1~ 60 100.6 ± 14.6 ^\#\#\#\#^ 3: 1.7, 4: 41.7, 5: 35.0, 6: 16.7, 7: 3.3, 8: 1.7 45.0 52.1 ± 8.5 ^\#\#^
T~2~ WT 12 53.3 ± 4.0 3: 25.0, 4: 75.0 8.3 36.0 ± 7.6
\#1-27(HT) 15 90.8 ± 12.6 ^\#\#\#\#^ 5: 60.0, 6: 33.3, 10: 6.7 60.0 44.7 ± 5.4 ^\#\#^
\#2-17(HT) 6 79.8 ± 2.5 ^\#\#\#\#^ 5: 50.0, 6: 33.3, 8: 16.7 83.3 45.1 ± 3.4 ^\#^
\#2-1(LT) 12 83.3 ± 6.8 ^\#\#\#\#^ 5: 66.7, 6: 16.7, 7: 8.3, 8: 8.3 33.3 35.6 ± 7.8
\#2-6(LT) 10 76.4 ± 3.6 ^\#\#\#\#^ 5: 10.0, 6: 40.0,7: 30.0, 8: 10.0,12: 10.0 80.0 39.4 ± 3.1
T~3~ WT 6 109.4 ± 0.9 \*^2^ 4: 100 0 80.3 ± 5.8
\#1-27(HT)L\#2 10 129.2 ± 11.9 \*^3,\ \#\#\#^ 4: 40.0, 5: 50.0, 6: 10.0 10.0 45.8 ± 7.9 ^\#\#\#\#^
\#2-17(HT)\#2-1 12 131.1 ± 7.3 \*^
Q:
Reusing react components across projects
I want to split my web project into three: front-end, back-end and super-admin.
What is the best way to re-use the components across code bases? Npm packages? That seems hard to maintain:
Open component dev package
Make changes
Push changes
Tag version
Update all projects
Seems complex and prone to errors. Is there a better way?
A:
Depends on if you need to use different versions of the shared components from different projects. If so, you probably need to make a versioned npm package. However, if you just want to share the packages and use the same version everywhere, you have other options. For one, you don't have to actually build and publish an npm package, you can just use npm link. This will basically create a symlink to your shared code in node_modules of the other projects.
Alternatively, you can do it without any npm package at all, just have the shared components in a separate project (directory) and import them in javascript through an alias, using a bundling system (webpack alias / browserify aliasify).
Q:
Python script stopping after a while
I have a raspberry pi with raspbian on it, and I've created a python script that I need to have running all the time. The problem is that after running for an hour or so, the script suddenly stops, and the python process shuts down. I have no idea why, and I'm very new to linux so I don't know how to debug it or anything. I don't think it's a memory issue because during the while loop I free and reinit all objects used. How can I find out what the issue is, or at least automatically restart the program once it stops?
This is the code:
import time
import sys
import ftputil
import pygame
import pygame.camera
import logging
pygame.camera.init()
#pygame.camera.list_camera() #Camera detected or not
cam = pygame.camera.Camera("/dev/video0",(640,480))
count = 5
logging.basicConfig(filename='log.log', level=logging.INFO)
logging.info(str(time.time())+" : Script was started")
while True:
cam.start()
Usually one uses weighted or toric blow-ups with smooth center as a tool for finding embedded $\Q$-resolutions. Here we will briefly discuss weighted blow-ups in the surface case $\pi: \widehat{X} \to X$ at a point $P\in X=\frac{1}{d}(p,q)$ with respect to $w = (a,b)$. Consider $$\hat X:=\{((x,y),[u:v]_w)\in \CC^2\times\PP^1_w\mid x=t^a u, y=t^b v \text{ for some } t\in \CC\}/\qa_{d,p,q}$$ and $\pi: \widehat{X} \to X$ the projection map onto the first coordinate, as usual. For practical reasons, we will give a specific description of $\hat X$ as a cyclic $V$-surface. The new space $\hat X$ is covered by two open sets $\widehat{U}_1 \cup \widehat{U}_2$, each of which is a quotient singularity of type: $$\label{eq:charts}
\begin{array}{ccc}
\widehat{U}_1 \cong
X \left( \displaystyle\frac{ad}{e}; 1, \frac{-b+p' aq}{e} \right)
& \text{ and } &
\widehat{U}_2 \cong
X\left( \displaystyle\frac{bd}{e}; \frac{-a+q' bp}{e}, 1\right),
\end{array}$$ where $p'p=q'q\equiv 1 \mod (d)$ and $e := \gcd(d,aq-bp)$. Finally, the charts are given by
$\begin{array}{c|c}
X \left( \displaystyle\frac{ad}{e}; 1, \frac{-b+p' aq}{e} \right) \ \longrightarrow \
\widehat{U}_1, &X \left( \displaystyle\frac{bd}{e}; \frac{-a+q' bp}{e}, 1
\right) \ \longrightarrow \ \widehat{U}_2, \\[0.2cm] \,\big[ (x^e,y) \big] \mapsto
\big[ ((x^a,x^b y),[1:y]_{w}) \big]_{(d;p,q)} & \big[ (x,y^e) \big] \mapsto
\big[ ((x y^a, y^b),[x:1]_{w}) \big]_{(d;p,q)}.
\end{array}$
Intersection theory on projective V-surfaces
--------------------------------------------
In this section we will briefly recall the definitions of intersection theory on projective $V$-surfaces. We denote by $\operatorname{Weil}_{\QQ}(X)$ the vector space $\operatorname{Weil}(X)\otimes \QQ$ and refer to its elements as $\QQ$-divisors. As a general reference for intersection theory in normal varieties one can use [@Fulton-Intersection]; however, in the particular setting of $V$-surfaces we will follow the notation given in [@AMO-Intersection].
\[def:multintVM\] Let $X$ be a $V$-surface and con
\nonumber \\
S_{Sa} &=&
\left( Z X \right) \hat{S}_{aa} (U X)^{\dagger} + \left( Z X \right) \hat{S}_{aS} W^{\dagger} +
V \hat{S}_{Sa} (U X)^{\dagger} + V \hat{S}_{SS} W^{\dagger},
\nonumber \\
S_{SS} &=&
\left( Z X \right) \hat{S}_{aa} \left( Z X \right)^{\dagger} + \left( Z X \right) \hat{S}_{aS} V^{\dagger} +
V \hat{S}_{Sa} \left( Z X \right)^{\dagger} + V \hat{S}_{SS} V^{\dagger}.
\label{S-Shat-relation}\end{aligned}$$
### Computation of $\hat{S}$ matrix elements {#sec:hatS-matrix}
To calculate $\hat {S} (x) = \exp \left[ -i \int^{x}_{0} dx \hat{H} (x) \right] $ we define $\Omega(x)$ as $$\begin{aligned}
\Omega(x) = e^{i \hat{H}_{0} x} \hat{S} (x).
\label{def-omega}\end{aligned}$$ $\Omega(x)$ obeys the evolution equation $$\begin{aligned}
i \frac{d}{dx} \Omega(x) = H_{1} \Omega(x),
\label{omega-evolution}\end{aligned}$$ where $$\begin{aligned}
H_{1} \equiv e^{i \hat{H}_{0} x} \hat{H}_{1} e^{-i \hat{H}_{0} x}.
\label{def-H1}\end{aligned}$$ Then, $\Omega(x)$ can be computed perturbatively as $$\begin{aligned}
\Omega(x) &=& 1 +
(-i) \int^{x}_{0} dx' H_{1} (x') +
(-i)^2 \int^{x}_{0} dx' H_{1} (x') \int^{x'}_{0} dx'' H_{1} (x'')
\nonumber \\
&+&
(-i)^3 \int^{x}_{0} dx' H_{1} (x') \int^{x'}_{0} dx'' H_{1} (x'') \int^{x''}_{0} dx''' H_{1} (x''')
\nonumber \\
&+&
(-i)^4 \int^{x}_{0} dx' H_{1} (x') \int^{x'}_{0} dx'' H_{1} (x'') \int^{x''}_{0} dx''' H_{1} (x''') \int^{x'''}_{0} dx'''' H_{1} (x'''') + \cdot \cdot \cdot,
\nonumber \\
\label{Omega-expand}\end{aligned}$$ where the “space-ordered” form in (\[Omega-expand\]) is essential because of the highly nontrivial spatial dependence in $H_{1}$. Upon obtaining $\Omega(x)$, the $\hat{S}$ matrix can be obtained as $$\begin{aligned}
\hat{S} (x) = e^{- i \hat{H}_{0} x} \Omega(x).
\label{hatS-Omega}\end{aligned}$$ By knowing the $\hat{S}$ matrix elements, the $S$ matrix is obtained by using (\[flavor-hat-relation2\]).
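To see the space-ordered expansion in action, here is a small self-contained numerical sketch (a toy $2\times 2$ model, not the paper's system; the matrices, interval, and step count are arbitrary illustrative choices). It truncates $\Omega(x)$ at second order and checks it against a brute-force ordered product over small steps:

```python
import numpy as np

H0 = np.diag([0.0, 1.0])                # toy free Hamiltonian (diagonal)
W = np.array([[0.0, 0.1], [0.1, 0.0]])  # small perturbation

def H1(x):
    # interaction-picture Hamiltonian e^{i H0 x} W e^{-i H0 x}
    u = np.diag(np.exp(1j * np.diag(H0) * x))
    return u @ W @ u.conj().T

xs = np.linspace(0.0, 1.0, 2001)
dx = xs[1] - xs[0]

omega_exact = np.eye(2, dtype=complex)   # ordered product over small steps
first = np.zeros((2, 2), dtype=complex)  # (-i) ∫ dx' H1(x')
second = np.zeros((2, 2), dtype=complex) # (-i)^2 ∫ dx' H1(x') ∫ dx'' H1(x'')
inner = np.zeros((2, 2), dtype=complex)  # running inner integral ∫ dx'' H1
for x in xs[:-1]:
    h = H1(x + dx / 2)
    omega_exact = (np.eye(2) - 1j * dx * h) @ omega_exact
    second += -dx * h @ inner            # ordering H1(x') H1(x'') matters
    inner += dx * h
    first += -1j * dx * h

omega_series = np.eye(2) + first + second  # second-order truncation
```

For $\|W\| x \ll 1$ the truncation error is third order in the perturbation, so the two results agree closely; for a non-commuting, $x$-dependent $H_1$ the naive exponential of $\int H_1$ would not.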
The perturbing Hamiltonian $H_{1}$ defined in (\[def-H1\]) has a structure $$\begin{aligned}
H_{1} = \left[
\begin{array}{cc}
0 & e^{ i {\bf
ch that for every $\kappa \in {\mathbb{R}}$, $p>1$ and $f\in W^{q,k,p}({\mathbb{R}}^{d})$, $$\underline{C}_{q}\Vert \psi _{\kappa }|f|_{q}\Vert _{p}\leq \Vert f\Vert
_{q,\kappa ,p}\leq \overline{C}_{q}\Vert \psi _{\kappa }|f|_{q}\Vert _{p};
\label{NOT4a}$$
- for every $q\in {\mathbb{N}}$ and $p>1$ there exists $C_{q,p}>0$ such that for every $\kappa \in {\mathbb{R}}$ and $f\in W^{q,k,p}({\mathbb{R}}%
^{d})$, $$\left\Vert f\right\Vert _{q,\kappa ,p}\leq C_{q,p}\left\Vert f\right\Vert
_{q,\kappa +d,\infty } \label{NOT5a}$$and if $p>d$, $$\left\Vert f\right\Vert _{q,\kappa ,\infty }\leq C_{q,p}\left\Vert
f\right\Vert _{q+1,\kappa ,p}; \label{NOT5b}$$
- for $\kappa ,\kappa ^{\prime }\in {\mathbb{R}}$, $q,q^{\prime }\in {%
\mathbb{N}}$, $p\in (1,\infty ]$ and $U:C^{\infty }({\mathbb{R}}%
^{d})\rightarrow C^{\infty }({\mathbb{R}}^{d})$, the following two assertions are equivalent: there exists a constant $C_{\ast }\geq 1$ such that for every $f$, $$\left\Vert Uf\right\Vert _{q,\kappa ,\infty }\leq C_{\ast }\left\Vert
f\right\Vert _{q^{\prime },\kappa ^{\prime },p} \label{NOT6a}$$and there exists a constant $C^{\ast }\geq 1$ such that for every $f$, $$\Big\Vert\psi _{\kappa }U\Big(\frac{1}{\psi _{\kappa ^{\prime }}}f\Big)%
\Big\Vert_{q,\infty }\leq C^{\ast }\left\Vert f\right\Vert _{q^{\prime },p} .
\label{NOT6b}$$
Notice that (\[NOT4a\]) is a consequence of (\[NOT3c\]). The inequality (\[NOT5a\]) is an immediate consequence of (\[NOT3c\]) and of the fact that $\psi _{-d}\in L^{p}({\mathbb{R}}^{d})$ for every $p\geq 1$. And the inequality (\[NOT5b\]) is a consequence of Morrey’s inequality (Corollary IX.13 in [@Morrey]), whose use gives $\left\Vert f\right\Vert
_{0,0,\infty }\leq \left\Vert f\right\Vert _{1,0,p}$, and of (\[NOT3c\]). In order to prove the equivalence between (\[NOT6a\]) and (\[NOT6b\]), one takes $g=\psi _{\kappa ^{\prime }}f$ (respectively $g=\frac{1}{\psi
_{\kappa ^{\prime }}}f)$ and uses (\[NOT3c\]) as well.
Main results {#sect:results}
------------
where $\vartheta$ denotes the normalized spatial angle. The normalized spatial angle is related to the physical AOA or AOD $\theta\in\left[-\pi/2,\pi/2\right]$ by $
\vartheta={d\sin(\theta)}/{\lambda},
$ where $d$ denotes the antenna spacing and $\lambda$ denotes the wavelength.
We assume that $N_{1,{\mathrm{cl}}}=\cdots=N_{\Psi,{\mathrm{cl}}}$ and $N_{1,{\mathrm{ry}}}=\cdots=N_{\Psi,{\mathrm{ry}}}$, which implies that the sparsity of the mmWave MIMO channel remains the same for all reconfiguration states. By transmitting and receiving with orthogonal radiation patterns, the propagated signals undergo different reflections and diffractions such that different reconfiguration states lead to different multipath parameters. That is, the values of $\alpha_{\psi,i,l}$, $\theta^r_{\psi,i,l}$, and $\theta^t_{\psi,i,l}$ change as the reconfiguration state changes.
### Virtual Channel Representation (VCR)
The virtual (beamspace) representation is a natural choice for modelling mmWave MIMO channels due to the highly directional nature of propagation [@Health_16_OverviewSPTmmMIMO]. The virtual model characterizes the physical channel by coupling between the spatial beams in fixed virtual transmit and receive directions, and represents the channel in beamspace domain.
The VCR of $\mathbf{H_\psi}$ in is given by [@Sayeed_02_Deconstuctingmfc; @Tse_05_Fundamentals] $$\begin{aligned}
\label{eq:H_VirtualModeling}
\mathbf{H_{\psi}}&=\sum^{N_r}_{i=1}\sum^{N_t}_{j=1} H_{\psi,V}(i,j)\mathbf{a}_R\left(\ddot{\theta}_{R,i}\right)
\mathbf{a}_T^H\left(\ddot{\theta}_{T,j}\right) \notag\\
&={\mathbf{A}}_R\mathbf{H}_{\psi,V}{\mathbf{A}}_T^H,\end{aligned}$$ where $\ddot{\theta}_{R,i}=\arcsin\left(\lambda\ddot{\vartheta}_{R,i}/d\right)$ and $\ddot{\theta}_{T,j}=\arcsin\left(\lambda\ddot{\vartheta}_{T,j}/d\right)$ are fixed virtual receive and transmit angles corresponding to uniformly spaced spatial angles[^5] $$\label{}
\ddot{\vartheta}_{R,i}=\frac{i-1-(N_r-1)/2}{N_r}$$ and $$\label{}
\ddot{\vartheta}_{T,j}=\frac{j-1-(N_t
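The beamspace factorization $\mathbf{H}_\psi = \mathbf{A}_R \mathbf{H}_{\psi,V} \mathbf{A}_T^H$ can be sketched numerically as follows (a minimal numpy illustration under stated assumptions: half-wavelength spacing $d=\lambda/2$ is folded into the spatial angles, the steering matrices are the unitary DFT-type matrices built from the uniformly spaced virtual angles above, and all names are ours):

```python
import numpy as np

def steering_matrix(n):
    """Columns are normalized array responses at the uniformly spaced
    virtual spatial angles (i - 1 - (n-1)/2)/n of the beamspace model."""
    ants = np.arange(n).reshape(-1, 1)              # antenna index
    virt = (np.arange(n) - (n - 1) / 2) / n         # virtual spatial angles
    return np.exp(2j * np.pi * ants * virt) / np.sqrt(n)

Nr, Nt = 8, 4
A_R, A_T = steering_matrix(Nr), steering_matrix(Nt)
H = np.random.randn(Nr, Nt) + 1j * np.random.randn(Nr, Nt)

# Beamspace coefficients; unitarity of A_R, A_T makes the map lossless.
H_V = A_R.conj().T @ H @ A_T
H_back = A_R @ H_V @ A_T.conj().T
```

Because the uniformly spaced virtual angles make each steering matrix unitary, the virtual representation is an invertible change of basis: a sparse channel produces a $\mathbf{H}_{\psi,V}$ with only a few dominant entries.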
Gene        P-value       log~2~ (fold-change)
----------- ------------- ----------------------
FABP4 0.001621719 −2.100023748
CMAHP 0.024414059 −1.066127776
ITM2A 0.016922069 −1.060957028
CA4 0.003105875 −1.030691864
FAM189A2 0.001993414 −1.000814956
MPPED2 0.007206792 −0.981309728
HGD 0.023615869 −0.852452356
TNFRSF11B 0.036234193 −0.836887107
SLC16A4 0.020876545 −0.826437495
BZRAP1 0.006323889 −0.784389089
PDGFRL 0.019724555 −0.769696713
TFF3 0.004160978 −0.758375163
LAMB1 0.019590536 −0.754903323
UBR2 0.001897906 −0.746515077
TGFBR2 0.016942031 −0.736954014
CPQ 0.040123761 −0.722559518
RDX 0.014625862 −0.700582583
PTPRN2 0.008418827 −0.692563257
SALL1 0.039302461 −0.687132893
LRRN3 0.006382367 −0.685656353
TRAM2 0.016638751 −0.664343953
SLITRK5 0.040721395 −0.652918137
RGS16 0.027819411 −0.648828034
STARD13 0.013643606 −0.642603503
THBD 0.010984131 −0.635676793
GJA1 0.035287842 −0.633044868
TNFSF10 0.009577025 −0.620052341
PKIA 0.022943634 −0.617442203
CLDN8 0.014099121 −0.606720582
IL11RA 0.016027339 −0.589253369
CDC27 0.012718038 −0.584208837
SLC35D2 0.010299633 −0.579905832
IFI44L 0.015727697 −0.576791484
NR2F2 0.013187137 −0.575425266
CHD9 0.048840737 −0.573993055
FCGRT 0.011662768 −0.566086321
AGTR1 0.013773676 −0.538397573
CXCL12 0.003888448 −0.538189987
HSD17B8 0.019222754 −0.533271237
LOC728093 0.035060908 −0.524392385
SELE 0.017742851 −0.507241011
AKAP12 0.010628133 −0.506141506
PVALB 0.045712444 0.509393237
NNT 0.027397598 0.513526356
ADORA1 0.036372247 0.513841883
GPI 0.018981531 0.5142
)|<\epsilon$ if $O$ is small and $|\lambda-\lambda_i|\leq \delta$. This means that, up to a point $\lambda=\lambda_i+\delta'$ with $\delta'<\delta$, the value of $\theta_\Lambda(\lambda)$ does not explode. Choose $\lambda_i<\lambda_1<\lambda_i+\delta'$, then $\lambda_e+\Delta\lambda_e>\lambda_1$ for all the curves parameterized by $\Lambda$. Then the mentioned problem does not arise. The actual value of $\lambda_1$ is not known, but it exists, and this is the only thing needed in the following reasoning.
The subtraction of both expressions (\[mat\]) and (\[mat2\]) obtained above gives that $$\frac{1}{R_\Lambda(\lambda_1)}-\frac{1}{R_0(\lambda_1)}=\int_{\lambda_1}^{\lambda_e+\Delta \lambda_e}\bigg\{1+\frac{1}{R_\Lambda(\beta)}\bigg[-\theta_\Lambda(\lambda_i)-c+\frac{c}{2}(e^{-c\lambda_i}-e^{-c \beta})+\int_{\lambda_1}^\beta e^{-c\xi} p_\Lambda(\xi)d\xi\bigg]\bigg\}^2e^{c\beta}d\beta$$ $$-\int_{\lambda_1}^{\lambda_e}\bigg\{1+\frac{1}{R_0(\beta)}\bigg[-\theta_0(\lambda_i)-c+\frac{c}{2}(e^{-c\lambda_i}-e^{-c \beta})+\int_{\lambda_i}^\beta e^{-c\xi} p_0(\xi)d\xi\bigg] \bigg\}^2e^{c\beta}d\beta.$$ The last expression can be cast in the following form $$\frac{1}{R_\Lambda(\lambda_1)}-\frac{1}{R_0(\lambda_1)}=\int_{\lambda_1}^{\lambda_e}\bigg\{1+\frac{1}{R_\Lambda(\beta)}\bigg[-\theta_\Lambda(\lambda_i)-c+\frac{c}{2}(e^{-c\lambda_i}-e^{-c \beta})+\int_{\lambda_i}^\beta e^{-c\xi} p_\Lambda(\xi)d\xi\bigg]\bigg\}^2e^{c\beta}d\beta$$ $$-\int_{\lambda_1}^{\lambda_e}\bigg\{1+\frac{1}{R_0(\beta)}\bigg[-\theta_0(\lambda_i)-c+\frac{c}{2}(e^{-c\lambda_i}-e^{-c \beta})+\int_{\lambda_i}^\beta e^{-c\xi} p_0(\xi)d\xi \bigg]\bigg\}^2e^{c\beta}d\beta$$ $$+\Delta\lambda_e\bigg\{1+\frac{1}{R_\Lambda(\lambda')}\bigg[-\theta_\Lambda(\lambda_i)-c+\frac{c}{2}(e^{-c\lambda_i}-e^{-c \lambda'})+\int_{\lambda_i}^{\lambda'} e^{-c\xi} p_\Lambda(\xi)d\xi\bigg]\bigg\}^2e^{c\lambda'},$$ where in the last integral the mean value theorem has been employed and $\lambda'$ is some value in the interval $[\lambda_e, \; \lambda_e+\Delta \lambda_e]$.
The last formula (\[artdeco\]) already gives an intuition about the intended proof. The condition (\[inicial\]) implies that the term multiplying $\Delta \lambda_e$ in is strictly positive (as $R_\Lambda>0$). It v
e{k}_{i+2, i})_2^2+ (\tilde{k}_{i+2, i})_1\cdot (\tilde{k}_{i+2, i})_1'\right)
& \quad \textit{if $L_{i+2}$ is of type $I^e$} \end{array}\right.\\
+ \left(\delta_{i-3}'(\tilde{k}_{i-3, i})_1^2+ \delta_{i+3}'(\tilde{k}_{i+3, i})_1^2\right)+
\left(\delta_{i-4}(\tilde{k}_{i-4, i})_1^2+ \delta_{i+4}(\tilde{k}_{i+4, i})_1^2\right).
\end{gathered}$$ Here,
- $1/2\cdot{}^t(\tilde{y}_i)_1a_i(\tilde{y}_i)_1$ in the first line should be interpreted as follows. We formally compute $1/2\cdot{}^t(\tilde{y}_i)_1a_i(\tilde{y}_i)_1$ and it is of the form $1/2(2X)$. Then the term $1/2\cdot{}^t(\tilde{y}_i)_1a_i(\tilde{y}_i)_1$ is defined as the modified $X$ by letting each term having $\pi^2$ as a factor in $X$ be zero. We interpret all terms having $1/2$ as a factor appeared below in this manner.
- $\tilde{m}_{i\pm 1, i}'$ is the last column vector of $\tilde{m}_{i\pm 1, i}$.
- $\tilde{m}_{i\pm 2, i}', \tilde{k}_{i\pm 2, i}, \tilde{k}_{i\pm 2, i}'$ are such that the last column vector of $\tilde{m}_{i\pm 2, i}$ is $$\left\{
\begin{array}{l l}
\tilde{m}_{i\pm 2, i}' & \quad \textit{if $L_{i\pm 2}$ is of type II};\\
{}^t({}^t\tilde{m}_{i\pm 2, i}', \tilde{k}_{i\pm 2, i}) & \quad \textit{if $L_{i\pm 2}$ is of type $I^o$};\\
{}^t({}^t\tilde{m}_{i\pm 2, i}', \tilde{k}_{i\pm 2, i}, \tilde{k}_{i\pm 2, i}') & \quad \textit{if $L_{i\pm 2}$ is of type $I^e$}.
\end{array} \right.$$
- $\delta_{i\pm 3}'=1$ if $L_{i\pm 3}$ is *free of type I* and $0$ otherwise.
- $\tilde{k}_{j, i}$ is the $(n_{j}, n_i)^{th}$-entry (resp. $(n_{j}-1, n_i)^{th}$-entry) of the matrix $\tilde{m}_{j, i}$ if $L_{j}$ is *of type* $\textit{I}^o$ with $j$ even or *free of type I with j odd* (resp. *of type $I^e$ with j even*).
Note that Condition (d) of the description of an element of $\tilde{M}(R)$ given at the paragraph following Lemma \[la1\], $\tilde{z}_i+\delta_{i-2}\tilde{k}_{i-2, i}+\delta_{i+2}\tilde{k}_{i+2, i}=\pi \tilde{z}_i^{\ast}$, yields the fo
we add the constraint on the maximum of the coefficients.
General $\ell_1$ minimization problem to promote sparsity {#l1}
---------------------------------------------------------
In addition to the number of connected components, we assume that we know one representative of each component i.e. a node belonging to this component. This assumption is not so restrictive compared to traditional spectral clustering where the number of clusters is assumed to be known. If we do not exactly know a representative for each component, we can estimate them by first applying a rough partitioning algorithm or just an algorithm that aims to find hubs of very densely connected parts of the graph.
Let $I_{n-k+1},\dots, I_n$ be the row indices of the representative elements of the components and let $\tilde{\mathcal{V}}_{1,k}^1 =\{v\in \mathcal{V}_{1,k}^0 : v_{I_j}=1 \text{ for all } j\in \left\{n-k+1, \dots,n \right\}\}$. It is straightforward to see that the indicator vector of the smallest component is the solution of the following optimization problem.
\[prop4\] The minimization problem ($\mathcal{P}_1$)
$\underset{v\in \tilde{\mathcal{V}}_{1,k}^1 }{\arg\min} {\|v\|}_1 $
has a unique solution given by $\textbf{1}_{C_{n-k+1}}$.
\[proof2\] We recall that $\|v\|_1=\sum\limits_{i=1}^n |v_i|$. Let $v \in \tilde{\mathcal{V}}_{1,k}^1$. As $\textbf{1}_{C_{n-k+1}} \in \mathcal{V}_{1,k}^0$, $v$ can be decomposed as $v=\sum\limits_{j=n-k+1}^n \alpha_j \textbf{1}_{C_j}$, where $\alpha=(\alpha_{n-k+1},\dots ,\alpha_n)\in \mathbb{R}^k$ and there exists $j \in \left\{n-k+1, \dots, n \right\}$ with $\alpha_j \ne 0$.
The connected components, of sizes $c_{n-k+1},\dots, c_n$, are sorted in increasing order of size, so that $\|v\|_1=\alpha_{n-k+1} c_{n-k+1}+\dots +\alpha_n c_n$. The solution of ($\mathcal{P}_1$) is therefore the vector in $\tilde{\mathcal{V}}_{1,k}^1$ with the smallest $\ell_1$-norm, namely the one with $\alpha=(\alpha_{n-k+1}, 0, \dots, 0)$ and $\alpha_{n-k+1}=1$.
To simplify and without
ome new data, which supports our earlier findings.
The paper is organized as follows. Section \[sec:intro\] introduces notations. Section \[sec:value\] shows that the distribution of traded volume/value is not universal, and it is not in the Levy stable regime as suggested by Ref. [@gopi.volume]. Section \[sec:correl\] shows that traded value displays only weak correlations for time scales shorter than one day. On longer horizons there is stronger persistence, whose degree depends logarithmically on the liquidity of the stock. Finally, Section \[sec:alpha\] surveys the concept of fluctuation scaling, shows how it complements the observed liquidity dependence of correlations, and how the two form a consistent scaling theory.
Notations and data {#sec:intro}
==================
For a fixed time window size $\Delta t$, let us denote the total traded value of the $i$th stock at time $t$ by $$f_i^{\Delta t}(t) = \sum_{n, t_i(n)\in [t, t+\Delta t]} V_i(n),
\label{eq:flow}$$ where $t_i(n)$ is the time when the $n$th transaction of the $i$th stock takes place. Tick-by-tick data are denoted by $V_i(n)$, which is the value traded in transaction $n$, calculated as the product of the price and the traded volume.
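As a concrete illustration of Eq. (\[eq:flow\]), the following minimal sketch (the function and variable names are ours, not from the study) sums tick-by-tick traded values $V_i(n)$ into consecutive windows of size $\Delta t$:

```python
import numpy as np

def traded_value(times, values, dt, t0, t1):
    """Total traded value f^{dt}(t) in consecutive windows [t, t+dt).

    times  -- transaction timestamps t_i(n), in increasing order
    values -- per-trade traded value V_i(n) = price * volume
    Returns the window start times and the summed value per window.
    """
    edges = np.arange(t0, t1 + dt, dt)
    # histogram with weights sums V_i(n) over the trades in each window
    sums, _ = np.histogram(times, bins=edges, weights=values)
    return edges[:-1], sums
```

For correlation studies, the resulting series can then be normalized per stock, which is what makes traded value and traded volume nearly interchangeable here.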
Since price changes very little from trade to trade while variations of trading volume are much faster, the fluctuations of the traded value $f_i(t)$ are basically determined by those of traded volume. Price merely acts as a weighting factor that enables one to compare different stocks, while this also automatically corrects the data for stock splits and dividends. The correlation properties and the normalized distribution are nearly indistinguishable between traded volume and traded value.
This study is based on the complete Trades and Quotes database of the New York Stock Exchange for the period $1994-1995$.
Note that throughout the paper we use $10$-base logarithms.
Traded value distributions revisited {#sec:value}
====================================
In this section, we first revisit the analysis done in Ref.
but that is how it is. As soon as the app starts, it checks the permission by calling the Permissao class. If the user accepts, the app starts; if he refuses, the app closes with a message saying the permissions must be accepted. So far so good, but if he accepts the permission the app only works after closing and reopening the application, or after rotating the screen.
Basically, the app gets the device's current location and shows it on the map. I would like the app to work as soon as the user chooses to allow the permission, without the issues mentioned above.
private GoogleMap mMap;
private Marker marcador;
double lat = 0.0;
double lng = 0.0;
private String[] permissoesNecessarias = new String[]{
android.Manifest.permission.ACCESS_FINE_LOCATION,
android.Manifest.permission.ACCESS_COARSE_LOCATION
};
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_maps);
Permissao.validaPermissoes(1, this, permissoesNecessarias);
// Obtain the SupportMapFragment and get notified when the map is ready to be used.
SupportMapFragment mapFragment = (SupportMapFragment) getSupportFragmentManager()
.findFragmentById(R.id.map);
mapFragment.getMapAsync(this);
}
@Override
public void onMapReady(GoogleMap googleMap) {
mMap = googleMap;
minhaLocalizacao();
}
private void adicionarMarcador(double lat, double lng) {
LatLng coordenadas = new LatLng(lat, lng);
CameraUpdate minhaLocalizacao = CameraUpdateFactory.newLatLngZoom(coordenadas, 16);
if (marcador != null) marcador.remove();
marcador = mMap.addMarker(new MarkerOptions()
.position(coordenadas)
.title("Você está aqui")
.icon(BitmapDescriptorFactory.fromResource(R.mipmap.ic_launcher_round)));
mMap.animateCamera(minhaLocalizacao);
}
private void atualizarLoca
round 6 and 7 MeV are consistent between the two calculations. The energy-weighted sum ($1.867\times10^{4}$MeV$\cdot$fm$^{4}$) overestimates the EWSR value ($1.638\times 10^{4}$MeV$\cdot$fm$^{4}$) by about 13.9%. The overshooting of the EWSR for the isoscalar quadrupole mode in the LM approximation was pointed out in Ref. [@miz07].

\[results\]Results and Discussion
=================================
$^{26}$Ne $^{28}$Ne $^{30}$Ne
------------------------------------------- ------------ ------------- -------------
$\lambda_{\nu}$ (MeV) $-4.60$ $-3.06$ $-2.90$
$\lambda_{\pi}$ (MeV) $-14.8$ $-17.0$ $-19.9$
$\beta_{2}^{\nu}$ 0.08 0.12 0.32
$\beta_{2}^{\pi}$ 0.14 0.20 0.39
$\langle \Delta_{\nu} \rangle$ (MeV) 0.0 (0.70) 1.27 (1.24) 1.34 (1.30)
$\langle \Delta_{\pi} \rangle$ (MeV) 1.04 0.87 0.0
$\sqrt{\langle r^{2} \rangle_{\nu}}$ (fm) 3.20 3.35 3.53
$\sqrt{\langle r^{2} \rangle_{\pi}}$ (fm) 2.93 2.98 3.08
: Ground state properties of $^{26,28,30}$Ne obtained by the deformed HFB calculation with the SkM\* interaction and the surface-type pairing interaction. Chemical potentials, deformations, average pairing gaps and root-mean-square radii for neutrons and protons are listed. []{data-label="GS"}
We now discuss the properties of $^{26,28,30}$Ne nuclei calculated with the SkM\* interaction. We summarize in Table \[GS\] the ground state properties of these Ne isotopes obtained by solving Eq. (\[eq:HFB1\]). The ground state is slightly deformed in $^{26}$Ne and $^{28}$Ne, and we obtain a well-deformed ground state for $^{30}$Ne. The values in parentheses are experimental pairing gaps extracted by the three-point mass difference formula [@sat98] using the experimental binding e
t.
Standard tools {#sec:prelim}
==============
In this section we record various standard results relating to sets of small doubling and approximate groups. This material is likely to be familiar to experts in the subject, who may therefore decide to skip straight to \[sec:details\].
\[lem:slicing\] Let $K,L\ge1$ and let $G$ be a group. Let $A\subset G$ be a $K$-approximate group and $B\subset G$ an $L$-approximate group. Then for every $m,n\ge2$ the set $A^m\cap B^n$ is covered by at most $K^{m-1}L^{n-1}$ left translates of $A^2\cap B^2$. In particular, $A^m\cap B^n$ is a $K^{2m-1}L^{2n-1}$-approximate group.
\[lem:fibre.pigeonhole\] Let $k\in{\mathbb{N}}$. Let $G$ be a group with a subgroup $H$, let $A\subset G$, and suppose that $A$ is contained in a union of $k$ left cosets of $H$. Then $A$ is contained in a union of $k$ left translates of $A^{-1}A\cap H$.
Let $x_1,\ldots,x_m\in A$ be representatives of the left cosets of $H$ containing at least one element of $A$, noting that $m\le k$ by hypothesis. If $a$ is an arbitrary element of $A\cap x_iH$ then there exists $h\in H$ such that $a=x_ih$. It follows that $h=x_i^{-1}a\in A^{-1}A$, and hence $h\in A^{-1}A\cap H$ and $a\in x_i(A^{-1}A\cap H)$.
\[thm:plun\] Let $G$ be an abelian group, and let $A$ be a finite subset of $G$. Suppose that $|A+A|\le K|A|$. Then $|mA-nA|\le K^{m+n}|A|$ for all non-negative integers $m,n$.
\[lem:covering\] Let $A$ and $B$ be finite subsets of some group and suppose that $|AB|/|B|\le K$. Then there exists a subset $X\subset A$ with $|X|\le K$ such that $A\subset XBB^{-1}$.
Let $H$ and $P$ be as given by \[thm:sanders\]. Then we have $$\begin{aligned}
\frac{|A+H+P|}{|H+P|}&\le\exp(O(\log^{O(1)}2K))\frac{|A+H+P|}{|A|}\\
&\le K^5\exp(O(\log^{O(1)}2K))&\text{(by \cref{thm:plun})}\\
&\le\exp(O(\log^{O(1)}2K)),\end{aligned}$$ and so \[lem:covering\] gives a set $X\subset A$ of size at most $\exp(O(\log^{O(1)}2K))$ such that $A\subset X+H+2P$. Now $2P$ is also a progression of the same rank as $P$. Moreover, since $H+P\subse
)\right]\;.
\nonumber\\\end{aligned}$$ The process of relaxation from the initial state can be followed by monitoring the subsystem observable $\hat{\sigma}_z$, which in the adiabatic basis reads $$\mbox{\boldmath$\sigma$}_z=\frac{1}{1+G^2}\left(
\begin{array}{cc} 2G & 1-G^2 \\ 1-G^2 & -2G\end{array}\right)\;.$$
The adiabatic basis is real so that the initial density matrix of the system can be written as $$\rho_{\alpha\alpha'}(X,0)=
\sum_{\alpha=1}^2 \psi_{\alpha}(X,0) \phi_{\alpha'}(X,0) \;,$$ where $$\begin{aligned}
\psi_{1}(X,0)= \phi_{1}(X,0)
&=&\sqrt{\rho_b(X)}\frac{1+G}{\sqrt{2(1+G^2)}}\;,
\nonumber\\
\\
\psi_{2}(X,0)= \phi_{2}(X,0)
&=&\sqrt{\rho_b(X)}\frac{1-G}{\sqrt{2(1+G^2)}}\;.
\nonumber\\\end{aligned}$$ Such coefficients enter into the calculation of the observable $$\langle\mbox{\boldmath$\sigma$}_z(t)\rangle
=
\sum_{\alpha\alpha'}\int dX \phi_{\alpha'}(X,t)\sigma_{z}^{\alpha'\alpha}(X)
\psi_{\alpha}(X,t)\;.
\label{eq:sigma-sb}$$ The coefficients evolve in time according to Eqs. (\[eq:c\]) and (\[eq:cstar\]), where one must set $C_{\alpha}^{\iota}\equiv \psi_{\alpha}$ and $C_{\alpha'}^{\iota *}\equiv \phi_{\alpha'}$. In order to devise an effective computational scheme for such equations, one could assume that the density matrix entering Eqs. (\[eq:c\]) and (\[eq:cstar\]) is taken to be that at $t=\infty$, when the total system (subsystem plus bath) has reached thermal equilibrium. The equilibrium quantum-classical density matrix is known as a series expansion in $\hbar$ [@qc-stat]. If one makes the additional assumption of complete decoherence at $t=\infty$, only the ${\cal O}(\hbar^0)$ term can be taken $$\rho_{e}^{(0)\alpha\alpha'}(X)
=Z_0^{-1}e^{-\beta(\sum_jP_j^2/2+E_{\alpha}(R))}\delta_{\alpha\alpha'}
\;,$$ where $Z_0=\sum_{\alpha\alpha'}\int dX\rho_{e}^{(0)\alpha\alpha'}(X)$. Then $$\begin{aligned}
\frac{\partial\ln \rho_{e}^{(0)\alpha\alpha'} }{\partial R}
&=&-\beta\frac{\partial E_{\alpha}}{\partial R}\delta_{\alpha\alpha'}
\equiv \beta F_{\alpha}(R)\delta_{\alpha\alpha'}\;,
\\
\frac
fferent typically by $3-4$ orders of magnitude, we increase the radial cell size $\Delta
r$ with the fixed size ratio $\Delta r_i/\Delta r_{i-1}$ $(>1)$. We set $\Delta r_1 = 0.1 R_{\mathrm{in}}$ at the inner boundary. The grids in the angular direction are homogeneously distributed over $0<\theta<90^\circ$, and thus the grid size is $\Delta\theta =
90^\circ/N_\theta$.
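The stretched radial grid described above can be sketched as follows. The helper name and the closure condition used to fix the ratio $\Delta r_i/\Delta r_{i-1}$ are our assumptions; the text only specifies a fixed ratio greater than one and $\Delta r_1 = 0.1 R_{\mathrm{in}}$:

```python
import numpy as np

def stretched_radial_grid(r_in, r_out, n_r):
    """Radial cell edges with a fixed size ratio q = dr_i/dr_{i-1} > 1.

    The first cell size is fixed to 0.1*r_in; q is then solved (by bisection)
    from the geometric-series closure dr_1*(q^n - 1)/(q - 1) = r_out - r_in.
    """
    dr1 = 0.1 * r_in
    total = r_out - r_in
    lo, hi = 1.0 + 1e-12, 2.0
    while dr1 * (hi**n_r - 1) / (hi - 1) < total:
        hi *= 2.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if dr1 * (mid**n_r - 1) / (mid - 1) < total:
            lo = mid
        else:
            hi = mid
    q = 0.5 * (lo + hi)
    dr = dr1 * q ** np.arange(n_r)          # geometrically growing cell sizes
    return r_in + np.concatenate(([0.0], np.cumsum(dr)))
```

Such a grid resolves the inner region finely while spanning the several orders of magnitude between $R_{\mathrm{in}}$ and the outer boundary with a modest number of cells.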
We have tested the convergence of the numerical results by varying the grid numbers or the inner boundary radius (see Appendix \[sec:res\_check\]). We follow the evolution over the duration $t_{\mathrm{end}}$, until the accretion reaches a steady state in the Di and Ddn runs, or until $\dot{M}$ becomes almost constant in the other runs.
Results {#sec:result}
=======
Structures of flows {#sec:structure}
-------------------
. \[tab:d-model\]
run subgrid radiation type $\theta_{\mathrm{inflow}}(r_{\mathrm{B}})$$^{a}$ $\dot{M}/\dot{M}_{\mathrm{B}}$$^{d}$
----- ------------------------ -------------------------------------------------- --------------------------------------
Di isotropic ...$^{b}$ $0.17\%^{e}$
Ddn disc ...$^{c}$ $0.13\%^{e}$
Dds disc $+$ shadow $40^\circ$ $59\%^{f}$
: Summary of the results in Sec. \[sec:structure\]
\
NOTES. $^{a}$opening angle of equatorial neutral inflow region at $r_{\mathrm{B}}$ (see text); $^{b}$no equatorial neutral region; $^{c}$equatorial neutral region does not reach $R_{\mathrm{in}}$; $^{d}$accretion rate normalized by Bondi one; $^{e}$averaged between $t=4\times 10^5{\,\mathrm{yr}}$ and $5\times 10^5{\,\mathrm{yr}}$; $^{f}$evaluated at the end of simulation.
In this section, we perform the simulations of “D-series”, in order to see how the flow structure changes with different directional dependences of the radiation fields. The basic results for these cases are summarized in
ls. However, a difference in estimated firing behavior occurs at the 120mN response level, whereby the SMC-MUNE procedure obtained two different model fits; MU1+MU4 in R10 and MU2+MU3 in R50. As a consequence, the estimated excitation range for MU1 in R50 is unusually large, leading to a relatively flat excitability curve with an enlarged coefficient of variation (Figure \[fig:RealParam\]) in relation to other MUs and between data sets. Nevertheless, the net effect of these firing combinations with the estimated expected MUTFs, see Figure \[fig:RealParam\] inserts, does not suggest that the overall descriptions of the two data sets greatly differ. This exemplifies the difficulty in discriminating between MUs with similar excitation and twitch characteristics. The difference in fit could have occurred in part due to the 70mN response level in the R50 data set not being represented within dataset R10.
-------------------------------- ------ ------- ------ ------ -- ------ -------- ------ ------
Data set $~$
No. of MUs ($u$) 7 8 9 10 7 8 9 10
$\mathbb{P}(u|\mathbf{y})$ (%) 0.04 99.95 0.01 0.00 0.00 100.00 0.00 0.00
Grid Size ($n{\times}n$) 30 30 30 30 100 90 50 90
No. of Particles (,000s) 20 5 5 5 155 100 65 115
-------------------------------- ------ ------- ------ ------ -- ------ -------- ------ ------
: Posterior summary from the SMC-MUNE procedure for the rat tibial muscle using 10sec and 50sec duration stimuli.[]{data-label="tab:RealResults"}
![Estimated excitability curves from the eight MU hypotheses for data sets R10 (left) and R50 (centre) with corresponding expected MUTF mean estimates. Right: median and 95% credible interval for the coefficient of variation for the random variable associated with the excitability curve for each MU, together with the m
\-
A/C 31 (56.4) 39 (52.7) 1.19 (0.57, 2.49) A/C 39 (52.0) 5 (41.7) 2.02 (0.58, 7.05)
C/C 20 (36.3) 30 (40.5) 0.8905 1 C/C 27 (36.0) 7 (58.3) 0.2255 1
A/A + A/C 35 (63.6) 44 (59.5) 0.6301 1.19 (0.58, 2.45) A/A + A/C 48 (64.0) 5 (41.7) 0.1410 2.49 (0.72, 8.61)
**Females**^**a**^ **Females**
A/A 6 (11.4) 9 (11.0) 0.79 (0.25, 2.48) A/A 7 (8.4) 4 (16.7) 0.78 (0.20, 3.04)
A/C 19 (35.8) 40 (48.8) 0.56 (0.27, 1.18) A/C 40 (48.2) 4 (16.7) 4.44 (1.36, 14.53)
C/C 28 (52.8) 33 (40.2) 0.3067 1 C/C 36 (43.4) 16 (66.6) 0.0202\* 1
A/A + A/C 25 (47.2) 49 (59.8) 0.1513 0.60 (0.30, 1.21) A/A + A/C 47 (56.6) 8 (33.3) 0.0443\* 2.61 (1.01, 6.77)
**rs12126768** **rs12126768**
**Males** **Males**
G/G 3 (5.5) 3 (4.1) 1.06 (0.20, 5.59) G/G 2 (2.7) 0 (0.0) \-
G/T 17 (30.9) 34 (45.9) 0.53 (0.25, 1.11) G/T 31 (41.3) 4 (30.8) 1.66 (0.47, 5.89)
T/T 35 (63.6) 37 (50.0) 0.2244 1 T/T 42 (56.0) 9 (69.2) 0.6089 1
G/G + G/T 20 (36.4) 37 (50.0) 0.1230 0.57 (0.28, 1.17) G/G + G/T 33 (44.0) 4 (30.8) 0.
two closely spaced peaks: $\mathrm{F}_1=3055.1$, $\mathrm{F}_2=3258.4$, $\mathrm{F}_3=3284.1$ and $\mathrm{F}_4=4780.6\,\mu$Hz. Similarly to the case of G 207-9, no further results of time series photometric observations have been published up to now.
### Konkoly observations {#sect:lp133freq}
We found four recurring frequencies in the daily datasets at 3055, 3270, 3695 and 4780$\mu$Hz (median values). Their amplitudes varied from night to night, but the 4780$\mu$Hz peak was the dominant one in almost all cases. One additional peak exceeded the 4 S/N limit at 5573$\mu$Hz, but on one night only.
We created four monthly datasets and analysed them independently. These are Month 1 (JD2454115–130), Month 2 (JD2454175–194), Month 3 (JD2454203–208) and Month 4 (JD2454231–237). The analyses of the monthly data revealed that at the 3270, 3695 and 4780$\mu$Hz frequencies there are actually doublets or triplets with 2.6–4.7$\mu$Hz frequency separations. This explains the different amplitudes in the daily FTs. The 3055$\mu$Hz frequency was found to be a singlet. In Month 3, the linear combination of the largest amplitude components of the 3270 and 4780$\mu$Hz multiplets also could be detected. The 5573$\mu$Hz frequency was significant in Month 2.
The panels of Fig. \[fig:lp133FTa\] show the FT of one daily dataset and the monthly data. As in the case of G 207-9, there were no remarkable amplitude variations from one month to another.
![LP 133-144: amplitude spectra of one night’s observation (*top panel*) and the monthly datasets (*lower panels*).[]{data-label="fig:lp133FTa"}](lp133FTa.eps){width="\columnwidth"}
The analysis of the whole 2007 dataset resulted in the detection of 19 significant frequencies in the $\sim2300-8000\,\mu$Hz frequency region. We also performed the test analysis utilizing the averaged 30s dataset, which confirmed the presence of the 14 largest amplitude frequencies (the other five peaks remained slightly under the significance level). Thus we accepted them as the frequencies
*r + r - 159. Give j(m(h)).
-26*h
Let y(a) = -3*a**2 - 19*a - 910. Let d(s) = s**2 + 6*s + 303. Let x(i) = -19*d(i) - 6*y(i). Let p(h) = 3*h**2. What is p(x(u))?
3*u**4 + 1782*u**2 + 264627
Let u(g) = -2*g**2. Let r be (-138)/(-8) - ((-7)/4 + 2). Let t(d) = 57*d**2 - 43 + 19*d + 43 - r*d. Give u(t(m)).
-6498*m**4 - 456*m**3 - 8*m**2
Let n(k) = 2*k**2 - 1235*k. Let x(l) = 34668*l. Give n(x(u)).
2403740448*u**2 - 42814980*u
Let a(n) = -3*n**2. Suppose 3*q = 9 + 3. Let d(s) = 2 - 4*s - s**2 - 6 + 3 + 5. Let h(r) = -r + 1. Let m(l) = q*h(l) - d(l). Give a(m(f)).
-3*f**4
Let b(c) = -c**2 + c - 205. Let f(a) = a + 11. Calculate f(b(r)).
-r**2 + r - 194
Let z(c) be the second derivative of 11*c**4/24 + 301*c**2/2 - 7*c + 6. Let i(v) be the first derivative of z(v). Let o(t) = -8*t**2. What is i(o(r))?
-88*r**2
Let y(s) = -332*s + 2. Let r(n) = -6*n - 63 - 7*n + 63 + 11*n. Give r(y(k)).
664*k - 4
Let p(c) be the third derivative of c**4/12 + c**2. Let b(f) = -39*f**2 - 2*f - 156. Let o(s) = s**2 + 26. Let r(d) = b(d) + 6*o(d). Give r(p(g)).
-132*g**2 - 4*g
Let i(n) = -48*n**2. Let t(x) = 11*x**2 + 179. Let u(c) = -11*c**2 - 223. Let g(r) = -5*t(r) - 4*u(r). Determine i(g(p)).
-5808*p**4 - 3168*p**2 - 432
Let r(t) = t. Let l(v) = 220620290*v**2. Calculate r(l(p)).
220620290*p**2
Let q(m) be the third derivative of m**5/120 + 19*m**3/3 - 7*m**2. Let h(y) be the first derivative of q(y). Let j(t) = -110*t**2. Give j(h(s)).
-110*s**2
Let j(k) = -3*k - 14. Let y(f) = -32809*f**2 - f. Give j(y(m)).
98427*m**2 + 3*m - 14
Let b(k) = -154*k**2 + 112*k + 224. Let m(n) = 4*n**2 - 3*n - 6. Let u(h) = 3*b(h) + 112*m(h). Let x(j) = -15*j**2 - 2. Determine x(u(s)).
-2940*s**4 - 2
Let y(d) = d + 101. Let u(b) = -3*b**2 + 3*b + 188. Calculate u(y(t)).
-3*t**2 - 603*t - 30112
Let k(v) = -308*v**2 - 26. Let t(x) = -3*x - 79. Give t(k(d)).
924*d**2 - 1
Let q(f) = f. Suppose 4*p - 4*g + 192 = 0, -4*g + 2 = 18. Let l(u) = 99*u + 30. Let m(c) = 1040*c + 312. Let t(z) = p*l(z) + 5*m(z). Determine t(q(v)).
52*v
Let x(j) = -5*j. Let o(m) = -308*m
maly probability, SDD-R can also be optimized by ranking all data collections according to their divergence values and selecting the first $n \cdot \alpha$ ones with the highest values as anomalies.
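The ranking optimization can be sketched as follows (the function name and signature are hypothetical, not the authors' implementation):

```python
def rank_anomalies(divergences, alpha):
    """Flag the top n*alpha collections with highest divergence as anomalies.

    divergences -- per-collection divergence values
    alpha       -- assumed anomaly fraction
    Returns the indices of the flagged collections, highest divergence first.
    """
    n = len(divergences)
    k = int(n * alpha)
    order = sorted(range(n), key=lambda i: divergences[i], reverse=True)
    return order[:k]
```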
Evaluation {#sec:evaluation}
==========
Our algorithm was implemented in Python 3.5.2, and all experiments were run on Ubuntu 16.04. In the experiments below, we examine properties of real-world data and the performance of our technique against anomalous data collections. We also compare variations of the SDD algorithms with MGoF.[^1]
Methodology {#sec:exp-methodology}
-----------
We adopted two data sets: 1) Koubei sellers’ transaction records[^2]; 2) a synthetic random distribution data set. The Koubei data set was provided by the Alibaba Tian Chi big data competition, where all records were collected from real-world business scenarios. It contained information about seller features, user payments and browsing behaviour. We randomly chose one seller (ID: 1629) and extracted the transaction history of this seller, with records ranging from Nov. 11th 2015 to Oct. 31st 2016. The entire transaction set was then divided into 325 collections, each containing the records of one day. Fig. \[fig:daily-transaction-volume\] and \[fig:sale-distribution-sample\] give an overview of it.
![Changing of the daily sales volume shows that environment of online sales had been changing all the time.[]{data-label="fig:daily-transaction-volume"}](./DailyTransactionVolume.pdf){width="0.75\linewidth"}
![We selected 3 days randomly and drew sales distribution by counting hourly volume. Although sales volume has changed from day to day, the shape of the distribution remain almost alike.[]{data-label="fig:sale-distribution-sample"}](./SaleDistributions.pdf){width="\linewidth"}
Two types of click-farmed data were generated according to the patterns described in section \[sec:related-realworld\]. To emulate centralized click farming, we randomly inserted some Gaussian-distributed transactions into the chosen collection. As for emulating the equalized
result in this article.
\[uppercondition\] For an upper subsemigroup $S$ of $\mathcal{B}$, the following are equivalent:
$(i)$ $S$ is a left I-order in $\mathcal{B}$;
$(ii)$ $R_1 \subseteq S$.
Moreover, writing $S$ as $S=F_{D}\cup\bigcup_{i\in I} S_{i}$, we have $R_1 \subseteq S$ if and only if $0\in I, \ d=1$ and $F_D\cup F_0=\{1,...,a^0b^{m_0-1}\}$.
The equivalence of $(i)$ and $(ii)$ follows from Example \[exRclass\] and Corollary \[RinI\]. The remaining statement follows from inspection of the description of $S$ as in $3(i)$ of Proposition \[subbicyclic\].
\[sublowbicyclic\] Let $S$ be an upper subsemigroup of $\mathcal{B}$. If $S$ is a left I-order in $\mathcal{B}$, then it is straight.
Lower subsemigroups {#leftilower}
===================
In this section we give necessary and sufficient conditions for the lower subsemigroups of $\mathcal{B}$ to be left I-orders in $\mathcal{B}$. Throughout this section $S$ is a lower subsemigroup of $\mathcal{B}$ having the form (3).($ii$) in Proposition \[subbicyclic\]. We begin with:
The lower subsemigroup $T=\{a^i b^j : i\geq j , i \geq m\}$ of $\mathcal{B}$ is a straight left I-order in $\mathcal{B}$: for any element $q=a^{k}b^{h}$ in $\mathcal{B}$ we have $$q=a^{k}b^{h}=a^{k}b^{k+h+m}a^{k+h+m}b^{h}=(a^{k+h+m}b^{k})^{-1}(a^{k+h+m}b^{h}),$$ and it is clear that $a^{k+h+m}b^{k}$ and $a^{k+h+m}b^{h}$ are in $T$.
\[R1\] Let $S$ be a lower subsemigroup of $\mathcal{B}$. If $j\notin I$, then $S$ contains no element $a^ib^j$ with $i>j$.
\[zerolowecase\] Let $S$ be a lower subsemigroup of $\mathcal{B}$. If $S$ is a left I-order in $\mathcal{B}$, then $d=1$ and $0\in I$.
Since $S$ is a lower subsemigroup, it follows that for all $a^ib^j \in S$ we have that $d|j-i$ for some $d\in \mathbb{N}$. By Lemma \[dimpact\], it is clear that $d=1$. Let $a^hb^0\in \mathcal{B}$ where $h\in \mathbb{N}$. Then $$a^hb^0=(a^ib^j)^{-1}(a^mb^n)=a^{j-i+t}b^{n-m+t}$$ where $t=\max\{m,i\}$, so that $0=n-m+t$. Hence we deduce that $n=0$ and $t=m$. We also have that $h=j-i+m$ so that
density $\mu_0$, we find:
$$\label{eqmu0}
|| (P^t)^n\mu_0 - \mu_\text{stat}|| \propto (\lambda(P))^n.$$
According to Eqs. (\[eqdn\]) and (\[eq:mix1\]), $\lambda(P)^{t(\epsilon)} \propto \epsilon$, i.e. $\lambda(P) \propto \epsilon^{1/t(\epsilon)}$. Hence, the smaller $\lambda(P)$ the shorter the mixing time (Fig. \[fig:KS1\]). $h_{KS}$ being a decreasing function of $t(\epsilon)$ and $\lambda(P)$ being an increasing function of $t(\epsilon)$, we deduce that $h_{KS}$ is a decreasing function of $\lambda(P)$.
This link between maximum KSE and minimum mixing time actually also extends naturally to optimal diffusion coefficients. Such a notion has been introduced by Gomez-Gardenes and Latora [@gomez2008entropy] in networks represented by a Markov chain depending on a diffusion coefficient. Based on the observation that in such networks KSE has a maximum as a function of the diffusion coefficient, they define an optimal diffusion coefficient as the value of the diffusion corresponding to this maximum. In the same spirit, one could compute an optimal diffusion coefficient with respect to the mixing time, corresponding to the value of the diffusion coefficient which minimizes the mixing time — or equivalently, yields the smallest second largest eigenvalue $\lambda(P)$. This would roughly correspond to the diffusion model reaching the stationary state in the shortest time. To define such an optimal diffusion coefficient, we follow Gomez and Latora and vary the transition probability depending on the degree of the graph nodes. More precisely, if $k_i=\sum_j A(i,j)$ denotes the degree of node $i$, we set:
$$\label{eq:diff1}
p_{ij}=\frac{A_{ij}k_j^\alpha}{\sum_j A_{ij}k_j^\alpha}.$$
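This transition matrix can be built directly; the sketch below (the helper name is ours) assumes a symmetric adjacency matrix, as in the text:

```python
import numpy as np

def biased_transition_matrix(A, alpha):
    """Degree-biased random walk: p_ij = A_ij k_j^alpha / sum_j A_ij k_j^alpha."""
    k = A.sum(axis=1)                   # node degrees k_i
    W = A * k[np.newaxis, :] ** alpha   # weight each target node j by k_j^alpha
    return W / W.sum(axis=1, keepdims=True)
```

For $\alpha=0$ this reduces to the usual random walk on the network, and the stationary density proportional to $k_i^\alpha \sum_j A_{ij}k_j^\alpha$ can be checked numerically via detailed balance.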
If $\alpha <0$ we favor transitions towards low degrees nodes, if $\alpha=0$ we find the typical random walk on network and if $\alpha>0$ we favor transitions towards high degrees nodes. We assume here that $A$ is symmetric. It may then be checked that the stationary probability density is equal to:
$$\label{eq:diff2}
\pi_{stat_i}=\frac{c_ik_i
a}-G_{ac}\frac{1}{1+i\lambda_T G_{cc}}i\lambda_T G_{ca}\,,$$ where $G_{nm}=W^\dag_n G W_m$ and $\lambda_T$ is the coupling constant of the “terminator,” $$\label{eq:s08}
\lambda_T=\frac{1-r}{1+r}=\tanh\frac{\alpha+i\phi}{2}\,.$$
Equation (\[eq:s06\]) has the same form as Eq. (\[eq:s05\]), but for the measuring antenna only and with the modified Green’s function. Substituting explicit expressions for matrix elements $G_{nm}$, we obtain in a number of elementary steps $$\label{eq:s10}
\hat{G} = G \frac{1}{1+i\lambda_T W_c W_c^\dag G} \equiv \frac{1}{E-H^a_\mathrm{eff}}\,,$$ where $H^a_\mathrm{eff}=H-i\lambda_T W_cW_c^\dag$. Introducing the normalized coupling vector $V=\frac{1}{\sqrt{\lambda_W}}W_c$, where $\lambda_W=W_c^\dag W_c$ is a channel coupling strength, $H^a_\mathrm{eff}$ may be finally written as $$\label{eq:s12a}
H^a_{\rm eff}=H-i\lambda VV^\dag, \quad \lambda=\lambda_T\lambda_W\,.$$ The total coupling constant $\lambda$ is generally complex and takes into account the effects of both the channel coupling ($\lambda_W$) and the terminator ($\lambda_T$). The $2\times2$ scattering matrix \[Eq. (\[eq:s02\])\] for the measuring antenna and the antenna with variable terminator has thus been reduced to a $1\times1$ scattering matrix for the measuring antenna only, $$\label{eq:s12b}
S_{aa}=\frac{1-i W_a^\dag\displaystyle\frac{1}{E-H^a_\mathrm{eff}} W_a}{1+i W_a^\dag \displaystyle\frac{1}{E-H^a_\mathrm{eff}}W_a}.$$ In the case of a single measurement antenna and one antenna with variable coupling, Eq. (\[eq:s12b\]) is equivalent to Eq. (\[eq:s\_cc2\]). Equations (\[eq:s10\])–(\[eq:s12b\]) constitute the main result of this section. They show that the influence of the variable antenna can be taken into account by an appropriate modification of the Hamiltonian.
Two special cases are of particular importance. For the termination of the antenna with a 50$\Omega$ load the outgoing wave is completely absorbed, corresponding to the limit $\alpha\to\infty$. It follows $\lambda_T=\tanh \infty=
Mills theory [@Litim:1998nf; @Pawlowski:2005xe] in Polyakov gauge, $$\begin{aligned}
\nonumber
\hspace{-.5cm} \partial_t \Gamma_{k}& =
& \frac{\beta}{2}
\int \0{d^3 p}{(2 \pi)^3} \left(\frac{1}{\Gamma_k^{(2)}
+ R_A}\right)_{00}\partial_t R_{0,k}\\
& & +
\frac{T}{2} \sum_{n\in \Z}
\int \0{d^3 p}{(2 \pi)^3} \left(\frac{1}{\Gamma_k^{(2)}
+ R_A}\right)_{ii}\partial_t R_{\bot, k}\,,
\label{eq:flow}\end{aligned}$$ where $t$ is the RG time $t = \ln (k / \Lambda)$, and $\Lambda$ is some reference scale.
Approximation scheme {#sec:approx}
====================
The flow equation (\[eq:flow\]) together with an initial effective action at some initial ultraviolet scale $k=\Lambda_{\rm UV}$ provides a definition of the full effective action at vanishing cut-off scale $k=0$ via the integrated flow. For the solution of (\[eq:flow\]) we have to resort to approximations to the full effective action. In gauge theories such an approximation also requires the control of gauge invariance, see e.g. [@Pawlowski:2005xe].
Here we shall argue that in Polyakov gauge a rather simple approximation to the full effective action already suffices to describe the confinement-deconfinement phase transition, and, in particular, to estimate the critical temperature. We compute the flow of the effective action $\Gamma[A_0,\vec A_{\bot}]$ in the following truncation $$\begin{aligned}
\nonumber
\hspace{-.4cm}\Gamma_k[A_0,\vec A_{\bot}] &\!=&\! \beta
\int d^3x\, \left(
-\0{Z_{0}}{2} A_0
\vec \partial{\,}^2 A_0+V_{k}[A_0]\right) \\
&&\hspace{-.8cm}-\frac{1}{2} \int_T d^4x\, Z_{i} \vec
A_{\bot}^a \left[(D_0^2)^{ab} +
\vec \partial{\,}^2\delta^{ab} \right] \vec A_{\bot}^a\,,
\label{eq:effact}\end{aligned}$$ with $k$-dependent wave function renormalisations $Z_0,Z_i$. The effective action relates to the order parameter $\langle L(\vec x)\rangle$ as well as its two point correlation $\langle L(\vec x) L^\dagger (\vec y) \rangle$ via the effective potential $V_{\rm eff}[A_0]=V_k[A_0]$ as explained in section \[sec:QCDinPol\]. The expectation value $\langle L(\
erational profile (of collection $Z$) is $$P(Z) = \lim_{\;k \to \infty} \frac{N_Z(\{{\mathit{s}}_n\}, k)}{k}.$$
### Conversion into a rate {#S:ABSOLUTE_OP_PROFILE_RATE}
Section \[S:INTRO\_REACTIVE\] mentions the synchronization function, a cross-reference between discrete and real time. During each step, an amount of real time appropriate for a software system emulating the automaton’s step is added to the time consumption budget. Let $Z$ be the usual arbitrary reference collection of steps and ${\mathit{o}} = \{{\mathit{s}}_n\}$ be an orbit. These two provide a set of events and a sequence of steps in which to count the events’ arrivals. The synchronization records discrete pairs $(i, t_i)$, where $i$ is the index of the automaton step and $t_i$ is the total elapsed time after $i$ steps. Call this mapping the synchronization function, with signature $\sync \colon {\mathbb{S}}^{\mathbb{N}}\times {\mathbb{N}}\to {\mathbb{R}^+}$ and assumed starting point $\sync(\{{\mathit{s}}_n\}, 0) = 0$.
Let the sequence index of each step be the discrete analog of time. This has the effect that discrete software time will not, in general, be proportional to hardware real time. The approximate real time required by execution of step ${\mathit{s}} = (\lambda, {\mathit{f}}, {\mathbf{f}})$ is $\tau({\mathit{f}})$ – that is, elapsed real time is taken as a function of the executing functionality.
\[D:SYNC\_APPOX\] For orbit ${\mathit{o}} = \{{\mathit{s}}_n\}$, the approximate time elapsed during the first $k$ steps accumulates to $$\sync(\{{\mathit{s}}_n\}, k) = \sum_{i=1}^k \tau({\mathit{f}}_i) = \sum_{i=1}^k \tau(\mho_{{\mathscr{F}}}({\mathit{s}}_i)) = t_k.$$
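The accumulation in this definition amounts to a prefix sum of per-functionality costs. A hedged sketch, with a made-up cost table $\tau$ standing in for measured execution times:

```python
# Sketch of the synchronization function sync(o, k): cumulative real
# time after k automaton steps, where tau gives the cost of each
# functionality. The tau table below is an illustrative assumption.
tau = {"read": 0.002, "compute": 0.010, "write": 0.005}  # seconds per step

def sync(functionalities, k):
    """t_k = sum_{i=1}^{k} tau(f_i); sync(o, 0) = 0 by convention."""
    return sum(tau[f] for f in functionalities[:k])

orbit_funcs = ["read", "compute", "write", "compute"]
print(sync(orbit_funcs, 0))   # empty sum: 0
print(sync(orbit_funcs, 3))   # read + compute + write
```

The starting-point convention $\sync(\{{\mathit{s}}_n\},0)=0$ falls out of the empty sum.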
A result that avoids dependence on any specific orbit is in order. Since normal operations cannot be defined here, we express the need as a conjecture instead:
\[T:TIME\_RATIO\] For different orbits ${\mathit{o}} = \{{\mathit{s}}_n\}$ and ${\mathit{o}}' = \{{\mathit{s}}'_n\}$ having the same usage pattern, $$\lim_{\;k \to \infty} \frac{N_Z(\{{\mathit{s}