Dataset viewer schema (column · dtype · range):

  text              · large_string · lengths 384–2.05k
  rank_avg          · float64      · 1–4.19k
  rank_max          · float64      · 1–8.21k
  rank_min          · float64      · 1–5.03k
  rank_median       · float64      · 1–4.21k
  rank_by_avgsim    · float64      · 1–4.19k
  avgsim_to_github  · float32      · 0.77–0.85
  dataset           · large_string · 1 value
mitigation is achieved by iteratively re-running the algorithm with more and more particles until inferences are stable (see Appendix \[sec:APX\]).

Details for the firing vector and excitability parameters {#sec:DetailFireProc}
---------------------------------------------------------

At time $t-1$, each particle sample consists of a historical sequence of firing events for all MUs, and from this an associated joint posterior for the firing parameters, $\eta_{1:u}$ and $\lambda_{1:u}$, is derived. A representation of the distribution for the excitability parameters is sought that is analogous to that described for the observation parameters, in that it should (a) permit simple calculation of the firing event predictive, (b) be deterministically updatable when assimilating the current measurement, and (c) provide a concise and sufficient description of the posterior distribution. From the independence of MU firing under Assumption A1 and the excitability parameter prior in , it follows that the predictive for the firing event ${\mathbf{x}}_t$ in factorises: $$\begin{aligned} \mathbb{P}\left({\mathbf{x}}_t|~ {\mathbf{x}}_{1:t-1},~ s_{1:t}\right) & = \prod_{j=1}^{u} \iint \mathbb{P}\left(x_{j,t}|~\eta_j,~\lambda_j,~s_t\right) \pi\left(\eta_j,~ \lambda_j|~x_{j,1:t-1},~s_{1:t-1}\right) d\eta_j~d\lambda_j,\label{eq:ECj_marg}\end{aligned}$$ where the posterior at time $t-1$ for the excitability parameters associated with MU $j$ is: $$\begin{aligned} \pi\left(\eta_j,~ \lambda_j|~x_{j,1:t-1},~s_{1:t-1}\right) & \propto \prod_{r=1}^{t-1} \mathbb{P}\left(x_{j,r}|~\eta_j,~\lambda_j,~s_r\right) \pi\left(\eta_j\right) \pi\left(\lambda_j\right).\label{eq:ECj_post}\end{aligned}$$ Regardless of the excitability curve definition, this product of firing probabilities does not lead to a simple conjugate structure with a concise set of sufficient statistics for the posterior distribution.
Furthermore, whilst for specific values of $(\eta_j,\lambda_j)$ the update in may be performed sequentially, the integrations for the norm
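Absent a conjugate structure, the per-MU predictive integral can still be approximated by brute-force quadrature over the excitability parameters. A minimal Python sketch, assuming a hypothetical logistic excitability curve and a flat prior (the paper's actual curve and priors are defined elsewhere and are not reproduced here):

```python
import numpy as np

# Hypothetical logistic excitability curve P(fire | eta, lam, s);
# the paper's actual definition is not given in this excerpt.
def fire_prob(eta, lam, s):
    return 1.0 / (1.0 + np.exp(-(s - eta) / lam))

def predictive_fire_prob(x_hist, s_hist, s_t, n_grid=60):
    """Approximate P(x_t = 1 | history) by quadrature over (eta, lam)."""
    etas = np.linspace(-3.0, 3.0, n_grid)
    lams = np.linspace(0.1, 3.0, n_grid)
    E, L = np.meshgrid(etas, lams)
    # Unnormalised log-posterior: product of Bernoulli likelihoods (flat prior).
    log_post = np.zeros_like(E)
    for x_r, s_r in zip(x_hist, s_hist):
        p = fire_prob(E, L, s_r)
        log_post += np.where(x_r, np.log(p), np.log1p(-p))
    w = np.exp(log_post - log_post.max())
    w /= w.sum()
    # Posterior-weighted average of the firing probability at s_t.
    return float((fire_prob(E, L, s_t) * w).sum())

p = predictive_fire_prob([1, 0, 1], [0.5, -1.0, 1.0], 0.8)
```

This is exactly the kind of per-particle numerical integration that the lack of conjugacy forces, and it illustrates why a cheaper deterministic update is desirable.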
[row 601 · 498 · 788 · 756 · 1,321 · 0.791037 · github_plus_top10pct_by_avg]
e class methods somewhere and want to invoke them on an object of my own choosing. This is what I came up with:

```csharp
ObjectOperations ops = engine.Operations;
IList<string> members = ops.GetMemberNames(scope);
foreach (string member in members)
{
    if (member.StartsWith("foo"))
    {
        dynamic klass = scope.GetVariable(member);
        dynamic instance = klass();
        // For brevity I am skipping the enumeration of
        // function names here.
        dynamic func = ops.GetMember(instance, "bar1"); // and store it somewhere
        Action act = ops.ConvertTo<Action>(func);       // this part is not really necessary
    }
}
```

What I want to do is invoke the func (of type IronPython.Runtime.Method) on an instance I create some time later. From what I glean from the IronPython source, the instance is set in stone when the method is created and I cannot change it. Is there a way to accomplish this? I am also not sure in what context (scope?) to run the function, or whether I shouldn't worry about it at all. ObjectOperations does have a method called MemberInvoke that takes an object and a member name, but I'd prefer a MethodInfo.Invoke style of invocation, both for style and performance considerations. Any pointers would be greatly appreciated. P.S. I am using IronPython 2.7 and .NET 4.0, by the way.

A: Yes, this is actually very easy. In Python, methods can either be invoked as bound methods:

```python
x = C().f  # produces a bound method
x()        # invokes the bound method
```

which is the same as:

```python
C().f()
```

or they can be invoked with an explicit self parameter:

```python
C.f(C())
```

This works the same way from the hosting APIs, so all you need to do is something like:

```csharp
dynamic klass = scope.GetVariable(member);
dynamic func = ops.GetMember(klass, "bar1");
// and then later:
func(inst);
```

Q: Echo a multi-dimensional array

I have a multidimensional array of a player list for Call of Duty 4. When I try to echo the array it comes back with "Array" 30 times because there are 30 current players in the server. Var_Dump of $promodplist (Players List)
[row 602 · 952 · 465 · 495 · 1,233 · 0.792307 · github_plus_top10pct_by_avg]
ample from such a region (acyclic cone). The complete story is not so simple, because the cone is not a probabilistic structure; it possesses no probability to support randomness. As a probabilistic structure, the operational profile (§\[S:RELATIVE\_OP\_PROFILE\]) permits random sampling from its reference set, regardless of its higher level meaning. As the edge of a cone is a set, it can become a relative operational profile’s reference set. Thus we tie an operational profile to a cone’s edge. Let $\mathcal{O} \colon {{\operatorname{edge}{{\mathcal{C}}}}} \to [0,1]$ be a relative operational profile on the edge of cone ${\mathcal{C}}$. At this stage we have the ability to draw a random sample from ${{\operatorname{edge}{{\mathcal{C}}}}}$. Theorem \[T:ACYLIC\_CONE\_CORRESPONDENCE\] asserts that an acyclic cone ${\mathcal{C}}$ and ${{\operatorname{edge}{{\mathcal{C}}}}}$ are in one-to-one correspondence via the edge step relation of a localized predecessor walk. Equivalent to the one-to-one correspondence is the bijection $\mathbf{b} = \{({{\operatorname{edge}{{\mathit{w}}}}},{\mathit{w}}) \colon {\mathit{w}} \in {\mathcal{C}}\}$. For ${\mathit{e}} \in {{\operatorname{edge}{{\mathcal{C}}}}}$, $\mathbf{b}({\mathit{e}})$ is the bijectively corresponding localized predecessor walk. We now bijectively associate the random edge event ${\mathit{e}} = \mathbf{b}^{-1}({\mathit{w}})$ with the localized predecessor walk ${\mathit{w}}$: $\mathcal{O}' = \{ (\mathcal{O}(\mathbf{b}^{-1}({\mathit{w}})), {\mathit{w}}) \colon {\mathit{w}} \in {\mathcal{C}} \}$. With probability inherited from an operational profile, we can speak validly of a random sample from a cone. ### Tests {#S:TESTS} The last piece of the safety demonstration story is converting local predecessor walks into tests. Localized predecessor walks are finite walks existing in confusion-prone backwards time. One may skip this section unless he wishes the detail of converting backward to forward walks. 
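The operational-profile sampling construction described above (weight the edge elements, draw one, then follow the bijection back to its walk) can be sketched in a few lines. The names below are illustrative, not from the paper:

```python
import random

# Toy acyclic cone: the bijection b pairs each edge element with its
# (unique) localized predecessor walk.
b_inverse = {
    "e1": ("v3", "v2", "v1"),
    "e2": ("v4", "v2", "v1"),
    "e3": ("v5", "v1"),
}
# Relative operational profile O on the edge: probabilities summing to 1.
profile = {"e1": 0.5, "e2": 0.3, "e3": 0.2}

def sample_walk(rng=random):
    """Draw a random edge element under O, return its corresponding walk."""
    edges = list(profile)
    e = rng.choices(edges, weights=[profile[x] for x in edges], k=1)[0]
    return b_inverse[e]
```

The probability attached to a walk is inherited from its edge element, which is exactly how the construction $\mathcal{O}'$ transfers randomness from the edge to the cone.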
The *test* function reverses and re-indexes localized
[row 603 · 187 · 967 · 743 · 1,225 · 0.792439 · github_plus_top10pct_by_avg]
f_{\mu}(v^{-1})f_{\mu}(1) v^{k(n(\mu) - n(\mu^t))}}{\prod_{i=2}^n (1-v^{-i})}.$$ By [@op Theorem 8] the fake degrees satisfy $f_{\mu}(v^{-1}) = f_{\mu^t}(v^{-1})v^{n(\mu^t)-n(\mu)}.$ Combined with this implies that $$f_{\mu}(v^{-1}) f_{\mu}(1) v^{k(n(\mu)-n(\mu^t))} = f_{\mu^t}(v^{-1})f_{\mu^t}(1) v^{-(k-1)(n(\mu^t)-n(\mu))}.$$ Substituting this into gives the stated formula for $p(\overline{M(k)},\, v)$. {#poincare-S2C} As was true for Corollary \[poincare-S2A\], we need to slightly modify Proposition \[poincare-SC\] in order to compute the Poincaré series for $\overline{M(k)}$ under the ${\mathbf{E}}$-grading. Set $K=kn(n-1)/2$ and $\mathfrak{n}={\mathbb{C}}[{\mathfrak{h}}]_+$. Under the ${\mathbf{E}}$-grading there is an equality of Poincaré series $$\label{poincare-sss} \phantom{\frac{\displaystyle \int}{\displaystyle \int}} p(\overline{M(k)}, v) = v^{K} \frac{\sum_{\mu} f_{\mu}(1)f_{\mu}(v^{-1}) v^{-(k-1)(n(\mu) - n(\mu^t))}}{\prod_{i=2}^n (1-v^{-i})}= p(J^{k-1}\delta^k/\mathfrak{n}J^{k-1}\delta^k,\,v).$$ Equation \[diaggrad1\] continues to hold if we replace $em\delta e$ by $m\delta e$. Thus the argument of Corollary \[poincare-S2A\](1) combined with Proposition \[poincare-SC\] and the formula $M(k)=H_{c+k}\delta e B_{k-1,0}$ gives the first equality of . In order to obtain the second equality in , note that $p(J^{k-1}\delta^k/\mathfrak{n}J^{k-1}\delta^k,\,v) = v^Kp(J^{k-1} /\mathfrak{n}J^{k-1},\,v)$. Set $p(v)=p(J^{k-1} /\mathfrak{n}J^{k-1},\,v)$ and $q(v)=p(J^{k-1}/\mathfrak{m}J^{k-1},\,v)$, where $\mathfrak{m}={\mathbb{C}}[{\mathfrak{h}}]^{{W}}_+$. The Poincaré series $q(v)$ has been computed in Corollary \[gr\].
Since that series was obtained by specialising the bigraded Poincaré series $p(J^d, s, t)$ from Corollary \[bigr\], it follows immediately that $$p(v)\ = \ \frac{p({\mathbb{C}}[{\mathfrak{h}}],v)}{p({\mathbb{C}}[{\mathfrak{h}}]^{{W}},v)} \, \,q(v) \ = \ \frac{(1-v)^{n-1}}{\prod_{i=2}^{n}(1-v^i)} \, \, q(v) \ = \ \frac{q(v)}{[n]_v!}$$ where the final equality
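The final rewriting of $(1-v)^{n-1}/\prod_{i=2}^{n}(1-v^i)$ as $1/[n]_v!$ can be checked symbolically. A quick sympy sketch, using the standard convention $[n]_v! = \prod_{i=1}^{n}(1+v+\dots+v^{i-1})$:

```python
import sympy as sp

v = sp.symbols('v')
for n in (2, 3, 4, 5):
    lhs = (1 - v)**(n - 1) / sp.prod([1 - v**i for i in range(2, n + 1)])
    # [n]_v! as a product of v-integers [i]_v = 1 + v + ... + v^(i-1)
    qfact = sp.prod([sum(v**j for j in range(i)) for i in range(1, n + 1)])
    # The two rational functions agree identically.
    assert sp.cancel(lhs - 1 / qfact) == 0
```

The identity follows from $[i]_v = (1-v^i)/(1-v)$, so the product of the $[i]_v$ cancels all but one factor of $(1-v)$ against the numerator.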
[row 604 · 1,164 · 1,083 · 635 · 3,834 · 0.769789 · github_plus_top10pct_by_avg]
tion : ${\cal O}(N_xN_yN_z\log_2[N_xN_yN_z])$. In practice, the discrete transforms in Equations – can be performed efficiently with an FFT algorithm. The public [FFTW]{} library[^1] performs sine transforms of various kinds, among which we use [FFTW\_RODFT00]{}, consistent with the zero boundary condition. To perform 3D FFTs in parallel, we decompose the computational domain into 2D pencils along, for example, the $y$-axis and execute 1D sine transforms locally in each pencil (e.g., @li10). We then transpose the pencils to the $z$- and $x$-axes sequentially, each time executing the corresponding sine transforms, which completes a 3D FFT. The parallel transposes among different pencil decompositions are done with the [remap\_3d]{} function in Steve Plimpton’s parallel FFT package[^2]. We note that, since mass density and the gravitational potential are in general distributed as blocks rather than pencils in real applications, we have to transpose between block and pencil decompositions at the input and output stages of the Poisson solver. Plimpton’s remap routine provides this functionality as well.

Cylindrical Grid Solution with Zero Boundary Value {#s:interior_solver_cylindrical}
--------------------------------------------------

A natural boundary condition in the azimuthal direction is that both mass density and gravitational potential are periodic, with period $L_\phi$. This holds true even when the problem under study has $P$-fold symmetry in the $\phi$-direction, with a domain size $L_\phi = 2\pi / P$. The algorithm presented below is applicable to such systems as long as the $P$-fold symmetry is considered in the boundary condition for the DGF (Equation ).
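The sine-transform interior solve reduces, in one dimension, to dividing the transform coefficients by the eigenvalues of the discrete Dirichlet Laplacian. A minimal sketch with `scipy.fft`, whose type-1 DST corresponds to FFTW's RODFT00 kind (grid size and right-hand side are illustrative):

```python
import numpy as np
from scipy.fft import dst, idst

N = 127                 # interior points
L = 1.0
h = L / (N + 1)
x = np.linspace(h, L - h, N)     # u = 0 at x = 0 and x = L
f = np.sin(2 * np.pi * x)        # right-hand side of u'' = f

# Eigenvalues of the centered-difference Laplacian with zero BCs.
k = np.arange(1, N + 1)
lam = (2 * np.cos(np.pi * k / (N + 1)) - 2) / h**2

# Forward DST-I, divide mode-by-mode, inverse DST-I.
u = idst(dst(f, type=1) / lam, type=1)

# Analytic solution of u'' = sin(2*pi*x), u(0) = u(1) = 0, for comparison.
u_exact = -np.sin(2 * np.pi * x) / (2 * np.pi)**2
```

The 3D solver in the text is the same division performed after three such transforms, one per pencil orientation.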
The eigenfunction ${\cal P}^m_j$ for the discrete Laplace operator $\Delta_\phi^2$ and the corresponding eigenvalue $\lambda_\phi^m$ are given by $$\label{eq:cyl_eigenfunction} {\cal P}^m_j = \exp\left[ \frac{2\pi\sqrt{-1}mj}{N_\phi} \right],$$ $$\lambda^m_\phi = -\frac{m^2}{R_i^2}\left[ \sin\left( \frac{\pi m}{N_\phi} \right) \bigg/\left( \
[row 605 · 1,055 · 1,135 · 724 · null · null · github_plus_top10pct_by_avg]
ion date and, hence, relatively old blood (days to expiration from 0 to 11) were included in group A (n=99). Patients who received blood that was relatively new (days to expiration from 11 to 38) were included in group B (n=99). Baseline characteristics, including age, gender, height, weight, relevant blood count indices, and medical history, were compared. We calculated the mean pre-transfusion and mean post-transfusion hemoglobin, hematocrit, and red blood cell count of all patients. To determine the effect of the storage lesion on efficacy, we compared the mean rise in hemoglobin, hematocrit, and RBC count between the two groups using the one-tailed t-test. All data were analyzed using SPSS 25.0 (SPSS Inc., Chicago, Illinois, US).

Results
=======

The baseline characteristics of patients in both groups were similar (Table [1](#TAB1){ref-type="table"}).

###### Patient characteristics

SD - standard deviation; RBC - red blood cell count

  Patient characteristics                   Old blood (n=99)   New blood (n=99)   p-value
  ----------------------------------------- ------------------ ------------------ ---------
  Age - mean (SD)                           65.59 ± 18.8       65.46 ± 16.4       0.961
  Male gender (n)                           35                 36                 0.885
  Height - mean (SD)                        161.64 ± 19.52     165.95 ± 12.7      0.067
  Weight - mean (SD)                        72.53 ± 26.9       76.43 ± 19.7       0.246
  Pre-transfusion hemoglobin - mean (SD)    7.41 ± 0.85        7.39 ± 0.74        0.844
  Pre-transfusion hematocrit - mean (SD)    23.18 ± 2.79       23.09 ± 2.66       0.833
  Pre-transfusion RBC count - mean (SD)     2.62 ± 0.44        2.67 ± 0.52        0.424
  Post-transfusion hemoglobin - mean (SD)
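The group comparison described above is an ordinary two-sample one-tailed t-test. A sketch with `scipy.stats`, using synthetic, illustrative numbers rather than the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Illustrative hemoglobin rises (g/dL) for 99 patients per group;
# means and spreads here are invented for the example.
rise_old = rng.normal(1.0, 0.4, 99)   # group A: older blood
rise_new = rng.normal(1.1, 0.4, 99)   # group B: newer blood

# One-tailed Welch t-test: is the mean rise with newer blood greater?
res = stats.ttest_ind(rise_new, rise_old, equal_var=False,
                      alternative='greater')
```

`res.pvalue` is then compared against the chosen significance level; `alternative='greater'` gives the one-tailed p-value directly.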
[row 606 · 4,678 · 398 · 189 · null · null · github_plus_top10pct_by_avg]
\lrb{\frac{1}{2}\exp\lrp{-\frac{7\aq\Rq^2}{3}} (\|z\|_2 - 2\epsilon), \|z\|_2}$

1.  1.  chain rule
    2.  Use the definition of $\nabla g(z)$ from Lemma \[l:gproperties\].
    3.  By definition, $\nabla f(z) = q'(g(z)) \nabla g(z)$. From Lemma \[l:qproperties\], $\lrabs{q'(g(z))} \leq 1$. By definition, $\nabla g(z) = h'(\lrn{z}_2) \frac{z}{\lrn{z}_2}$. Our conclusion follows from $h' \leq 1$ using item 2 of Lemma \[l:hproperties\].

2.  1.  chain rule
    2.  by item 2 b) of Lemma \[l:gproperties\]
    3.  by item 1 c) and item 2 d) of Lemma \[l:gproperties\], item 3 and item 4 of Lemma \[l:qproperties\], and our assumption that $\epsilon\leq \frac{\Rq}{\aq + \Rq^2 + 1}$.
    4.  by item 4 of Lemma \[l:qproperties\], items 2 c) and 2 d) of Lemma \[l:gproperties\], and our expression for $\nabla^2 f(z)$ established in item 2 a).

3.  It can be verified that $$\begin{aligned} \nabla^3 f(z) =& q'''(g(z)) \cdot \nabla g(z)^{\bo 3} + q''(g(z)) \nabla g(z) \bo \nabla^2 g(z) + q''(g(z)) \nabla^2 g(z) \bo \nabla g(z) \\ &\quad + q''(g(z)) \nabla g(z) \bo \nabla^2 g(z) + q'(g(z)) \nabla^3 g(z) \end{aligned}$$ Thus $$\begin{aligned} \lrn{\nabla^3 f(z)}_2 \leq& \lrabs{q'''(g(z))} \lrn{\nabla g(z)}_2^3 + 3 q''(g(z)) \lrn{\nabla g(z)}_2 \lrn{\nabla^2 g(z)}_2 + q'(g(z)) \lrn{\nabla^3 g(z)}\\ \leq& 5\lrp{\aq + \frac{1}{\Rq^2}} \lrp{\aq\Rq^2 + 1} + 3\lrp{\frac{5\aq\Rq}{4} + \frac{4}{\Rq}}\cdot \frac{1}{\epsilon} + \frac{1}{\epsilon^2}\\ \leq& \frac{9}{\epsilon^2} \end{aligned}$$ where the first inequality uses Lemma \[l:qproperties\] and Lemma \[l:gproperties\], and the second inequality assumes that $\epsilon \leq \frac{\Rq}{\aq\Rq^2 + 1}$.

4.  $$\begin{aligned} f(z) \in \lrb{\frac{1}{2}\exp\lrp{-\frac{7\aq\Rq^2}{3}} g(\|z\|_2), g(\|z\|_2)} \in \lrb{\frac{1}{2}\exp\lrp{-\frac{7\aq\Rq^2}{3}} (\|z\|_2 - 2\epsilon), \|z\|_2} \end{aligned}$$
[row 607 · 3,605 · 621 · 424 · null · null · github_plus_top10pct_by_avg]
\hat{\beta}}_{i}, \label{438}$$ where the conditions of unit length and non-negative relative weight stipulated earlier have already been imposed within the stated order of approximation. Equation (\[438\]) specifies the (relative) composition of the market-aligned portfolio. The relative weight ${W}_{N}$ of this portfolio, on the other hand, is expected to be of the order of ${N}^{1 \over 2}$, since this portfolio consists entirely of purchased assets (recall our estimate of the relative weights earlier in §2). Indeed one can see from Eq. (\[438\]) that ${W}_{N} \simeq {\sum}_{i=1}^{N}{\hat{\beta}}_{i}$ in the leading order of approximation,[^4] which confirms the above-stated estimate (recall that the average of the ${\hat{\beta}}_{i}^{2}$ equals ${N}^{-1}$). Equations (\[437\]) and (\[438\]) provide approximate expressions for the major eigenvalue and eigenvector of the covariance matrix of the single-index model. Rescaling Eqs. (\[437\]) and (\[438\]) back to original variables, we find, for the variance and the composition of the market-aligned principal portfolio, the expressions $${{V}_{N}}^{2} \simeq [1+3 {\sum}_{i=1}^{N} {\gamma}_{i}^{2}{\hat{\beta}}_{i}^{2}-{({\sum}_{i=1}^{N} {\hat{\beta}}_{i})}^{-1}{\sum}_{i=1}^{N} {\gamma}_{i}^{2}{\hat{\beta}}_{i}]{({\bm{\beta}} \cdot {\bm{\beta}})} {\bar{{\rho}^{2}}}_{mkt} /{({\sum}_{i=1}^{N}{\hat{\beta}}_{i})}^{2}, \label{439}$$ $${e}^{N}_{i}/{W}_{N} \simeq [1+ {\gamma}_{i}^{2}-{({\sum}_{i=1}^{N} {\hat{\beta}}_{i})}^{-1}{\sum}_{i=1}^{N} {\gamma}_{i}^{2}{\hat{\beta}}_{i}]{\hat{\beta}}_{i}/({\sum}_{j=1}^{N} {\hat{\beta}}_{j}), \label{440}$$ where we have left the small correction terms in dimensionless form. It is clear from Eq. (\[440\]) that the market-aligned portfolio is basically composed by investing in each asset in proportion to how strongly it is correlated with the overall market fluctuations, i.e., in proportion to the value of its beta; cf. Eq. (\[431\]). 
Consequently, it is expected to be strongly susceptible to market-driven fluctuations. Indeed
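The claim that the market-aligned eigenvector is essentially proportional to the betas is easy to illustrate numerically. A sketch with a simulated single-index covariance matrix (all parameter values are illustrative, not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200
beta = np.abs(rng.normal(1.0, 0.3, N))   # asset betas (taken positive)
idio = rng.uniform(0.1, 0.3, N)          # idiosyncratic variances
sigma_m2 = 1.0                           # market-factor variance

# Single-index covariance: market term plus diagonal idiosyncratic term.
cov = sigma_m2 * np.outer(beta, beta) + np.diag(idio)

vals, vecs = np.linalg.eigh(cov)
top = vecs[:, -1]                        # eigenvector of the major eigenvalue
top = top if top.sum() > 0 else -top     # fix the arbitrary sign

# Alignment between the principal portfolio and the beta vector.
cos = top @ beta / (np.linalg.norm(top) * np.linalg.norm(beta))
```

Because the rank-one market term dominates the diagonal correction, `cos` comes out very close to 1: the principal portfolio invests in each asset roughly in proportion to its beta, as the passage states.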
[row 608 · 2,661 · 2,393 · 821 · null · null · github_plus_top10pct_by_avg]
dots,\pi_k f_k(\vq))) % =\softmax(\vln\pi+\vln(f_1(\vq),\dots,f_k(\vq)))\end{aligned}$$ where $f_i(\vq)$ is the probability density function of the distribution $Dir(\valpha^{(i)})$ where $\valpha^{(i)}$ is the $i$-th row of matrix $\valpha$. Hence, $f_i(\vq)=\frac{1}{B(\valpha^{(i)})}\prod_{j=1}^k q_j^{\alpha_{ij}-1}$, where $B(\cdot)$ denotes the multivariate beta function. Let us define a matrix $\MW$ and vector $\vb$ as follows: $$\begin{aligned} w_{ij}=\alpha_{ij}-1,\qquad b_i=\ln(\pi_i)-\ln(B(\valpha^{(i)}))\end{aligned}$$ with $w_{ij}$ and $\alpha_{ij}$ denoting elements of matrices $\MW$ and $\valpha$, respectively, and $b_i,\pi_i$ denoting elements of vectors $\vb$ and $\vpi$. Now we can write $$\begin{aligned} \ln(\pi_i f_i(\vq)) &=\ln(\pi_i)-\ln(B(\valpha^{(i)}))+\ln\prod_{j=1}^k q_j^{\alpha_{ij}-1} \\ &=\ln(\pi_i)-\ln(B(\valpha^{(i)}))+\sum_{j=1}^k (\alpha_{ij}-1)\ln(q_j) \\ &=b_i+\sum_{j=1}^k w_{ij}\ln(q_j)\end{aligned}$$ and substituting this back into $\muh(\vq)$ we get: $$\begin{aligned} \muh(\vq)&=\softmax(\vln(\pi_1 f_1(\vq),\dots,\pi_k f_k(\vq))) \\ &=\softmax(\vb+\MW\vln(\vq))=\muh_{DirLin}(\vq; \MW,\vb)\end{aligned}$$ #### 2. Consider a function $\muh(\vq)=\vmuh_{DirLin}(\vq; \MW,\vb)$. Let us define a matrix $\MA$ and vector $\vc$ as follows: $$\begin{aligned} a_{ij}=w_{ij}-\min_{i}w_{ij},\qquad \vc=\softmax(\MW\,\vln\,\vu+\vb)\end{aligned}$$ with $a_{ij}$ and $w_{ij}$ denoting elements of matrices $\MA$ and $\MW$, respectively, and $\vu=(1/k,\dots,1/k)$ is a column vector of length $k$. Note that $\MA\,\vx=\MW\,\vx+const_1$ and $\vln\,\softmax(\vx)=\vx+const_2$ for any $x$ where $const_1$ and $const_2$ are constant vectors (all elements are equal), but the constant depends on $\vx$. 
Taking into account that $\softmax(\vv+const)=\softmax(\vv)$ for any vector $\vv$ and constant vector $const$, we obtain: $$\begin{aligned} \muh_{Dir}(\vq; \MA,\vc) &=\softmax(\MA\,\vln\,\frac{\vq}{1/k}+\vln\,\vc) =\softmax(\MW\,\vln\,\frac{\vq}{1/k}+const_1+\vln\,\vc) \\ &=\softmax(\MW\,\vln\,\
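The shift invariance $\softmax(\vv + const) = \softmax(\vv)$ used in the last step can be confirmed in a few lines:

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())   # subtracting a constant does not change the result
    return e / e.sum()

v = np.array([0.2, -1.3, 2.7, 0.0])
c = 5.0
# Adding the same constant to every component leaves the softmax unchanged.
assert np.allclose(softmax(v + c), softmax(v))
```

This is the property that lets the constant vectors $const_1$ and $const_2$ be discarded inside the softmax.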
[row 609 · 2,271 · 740 · 717 · 2,540 · 0.77903 · github_plus_top10pct_by_avg]
```java
lias, (Key) privateKey, password.toCharArray(), certificate);
    } catch (KeyStoreException ex) {
        ex.printStackTrace();   // was an empty catch; at least record the failure
    }
}

/**
 * Returns a private key from the key store. The key store must be loaded.
 *
 * @param alias | Alias under which the private key is stored.
 * @param password | Password protecting the private key.
 * @return PrivateKey | Private key stored in the KeyStore.
 */
public PrivateKey getPrivateKey(String alias, String password) {
    PrivateKey privateKey = null;
    try {
        Key key = this.keyStore.getKey(alias, password.toCharArray());
        privateKey = (PrivateKey) key;
    } catch (NoSuchAlgorithmException | UnrecoverableEntryException | KeyStoreException ex) {
        ex.printStackTrace();   // was assigned to an unused local and dropped
    }
    return privateKey;
}

/**
 * Saves a certificate in the keyStore. The KeyStore must be loaded.
 *
 * @param alias | Alias under which the certificate is stored.
 * @param x509Certificate | Certificate to save.
 */
public void saveCertificate(String alias, X509Certificate x509Certificate) {
    try {
        this.keyStore.setCertificateEntry(alias, (Certificate) x509Certificate);
    } catch (KeyStoreException ex) {
        ex.printStackTrace();   // was an empty catch
    }
}

/**
 * Returns a certificate from the keyStore. The KeyStore must be loaded.
 *
 * @param alias | Alias under which the certificate is stored.
 * @return Certificate | Certificate.
 */
public Certificate getCertificate(String alias) {
    Certificate certificate = null;
    try {
        certificate = this.keyStore.getCertificate(alias);
    } catch (KeyStoreException ex) {
        ex.printStackTrace();   // was an empty catch
    }
    return certificate;
}

/**
 * Returns a public key from the keyStore via the certificate.
 *
 * @param alias | Alias used to obtain the certificate.
 * @return PublicKey | Public key of the certificate.
 */
public PublicKey getPublicKeyFromCertificate(String alias)
```
[row 610 · 692 · 75 · 583 · 1,032 · 0.795367 · github_plus_top10pct_by_avg]
arepsilon}^{2/3} \kappa_b^{-2/3}. \end{split} \label{W81x}$$ Weinstock [@weinstock81] assumed that $\kappa_b$ can be parameterized by $\frac{N}{\sigma_w}$ (basically, the inverse of the buoyancy length scale $L_b$). By plugging this parameterization into Eq. \[W81x\] and simplifying, we get: $$\begin{split} \overline{\varepsilon} & \approx \sigma_w^3 \kappa_b \\ & \approx \sigma_w^2 N. \end{split}$$ Appendix 3: Normalization of DNS Variables {#appendix-3-normalization-of-dns-variables .unnumbered} ========================================== In DNS, the relevant variables are normalized as follows: $$z_n = \frac{z}{h},$$ $$u_n = \frac{u}{U_b},$$ $$v_n = \frac{v}{U_b},$$ $$w_n = \frac{w}{U_b},$$ $$\theta_n = \frac{\theta-\Theta_{top}}{\Theta_{top}-\Theta_{bot}}.$$ After differentiation, we get: $$\frac{\partial u}{\partial z} = \frac{\partial u}{\partial z_n} \frac{\partial z_n}{\partial z} = \frac{\partial u}{\partial u_n} \frac{\partial u_n}{\partial z_n}\frac{\partial z_n}{\partial z} = \frac{U_b}{h} \frac{\partial u_n}{\partial z_n},$$ $$\frac{\partial v}{\partial z} = \frac{\partial v}{\partial z_n} \frac{\partial z_n}{\partial z} = \frac{\partial v}{\partial v_n} \frac{\partial v_n}{\partial z_n}\frac{\partial z_n}{\partial z} = \frac{U_b}{h} \frac{\partial v_n}{\partial z_n},$$ $$S = \sqrt{\left(\frac{\partial \overline{u}}{\partial z}\right)^2 + \left(\frac{\partial \overline{v}}{\partial z} \right)^2} = \frac{U_b}{h} S_n,$$ $$\frac{\partial \theta}{\partial z} = \frac{\partial \theta}{\partial z_n} \frac{\partial z_n}{\partial z} = \frac{\partial \theta}{\partial \theta_n} \frac{\partial \theta_n}{\partial z_n}\frac{\partial z_n}{\partial z} = \left(\frac{\Theta_{top}-\Theta_{bot}}{h}\right) \frac{\partial \theta_n}{\partial z_n}.$$ The gradient Richardson number can be expanded as: $$Ri_g = \frac{N^2}{S^2} = \frac{\left(\frac{g}{\Theta_0}\right)\left(\frac{\partial \overline{\theta}}{\partial z}\right)}{S^2} = \left(\frac{g}{\Theta_{top}}\right) 
\left(\frac{\Theta_{top}-\Theta_{bot}
[row 611 · 3,390 · 1,061 · 691 · null · null · github_plus_top10pct_by_avg]
oups of patients with and without systolic dysfunction, defined by ejection fraction less or more than 50%, in a total of 96 hemodiafiltration patients (\* *p* \< 0.05).

  Characteristic                            Ejection fraction \<50% (*n* = 26)   Ejection fraction \>50% (*n* = 70)   *p* Value
                                            Mean ± SD/Mean Rank                  Mean ± SD/Mean Rank
  ----------------------------------------- ------------------------------------ ------------------------------------ -----------
  Age (years)                               68.8 ± 12.6 \*                       59.6 ± 14.1                          0.005
  Dialysis vintage (years)                  52.8                                 46.9                                 0.4
  spKt/V for urea                           46.1                                 49.4                                 0.6
  nPCR (g/Kg/day)                           2.2 ± 0.6                            2.2 ± 0.5                            0.9
  Interdialytic urine volume (mL)           225 ± 154.1                          361.4 ± 326.4                        0.3
  Interdialytic weight gain (L)             2.2 ± 0.7                            2.3 ± 1.1                            0.4
  BMI (Kg/m^2^)                             24.8 ± 3.3                           25.2 ± 4.04
[row 612 · 3,286 · 509 · 441 · null · null · github_plus_top10pct_by_avg]
y the calculations are step by step analogous to those of Ref. [@sln] for the $q$-affine case. So we omit all such calculations and only refer to [@sln], reminding the readers of our correspondence rules (Observations 1 and 2). \[rem2\] In proving Proposition \[prop1\], only the OPE relation (\[aapn\]) for $\hat{a}^i_\pm$ is used; the exact expressions for the fields $\hat{a}^i_\pm$ are not important. In fact, there are infinitely many choices for $\hat{a}^i_\pm$ which satisfy the relation (\[aapn\]). For example, the following choice differs from the original definitions (\[ap\]-\[an\]): $$\begin{aligned} & & \hat{a}^i_+(u) = \frac{1}{k+g} \sum_{j,l=1}^{N-1} (B^{-1})^{lj} a^j_+(u;\frac{k+g}{2} - B_{ij}) - a^j_+(u;\frac{k+g}{2} + B_{ij}), \label{ap2}\\ & & \hat{a}^i_-(u) = a^i_-(u;-\frac{k+g}{2}) - a^i_+(u;\frac{k+g}{2}). \label{an2}\end{aligned}$$ However, no matter which choice we use, we cannot make the definitions of $\hat{a}^i_+(u)$ and $\hat{a}^i_-(u)$ symmetric, i.e. no violation of Remark \[rem1\] can occur. \[rem3\] When $N=2$, equations (\[hpn\]-\[en\]) become $$\begin{aligned} & & H^{+}(u) = : \mbox{exp}\left( \hat{b}_+(u+\frac{1}{4}k\hbar) + \hat{b}_+(u+\frac{1}{2}(\frac{k}{2}+2) \hbar) + \hat{a}_+(u-\frac{1}{4}k\hbar) \right):~, \label{hppp}\\ & & H^{-}(u) = : \mbox{exp}\left( \hat{b}_-(u-\frac{1}{4}k\hbar) + \hat{b}_-(u-\frac{1}{2}(\frac{k}{2}+2) \hbar) + \hat{a}_-(u+\frac{1}{4}k\hbar) \right):~,\label{hnnn}\\ & & E^+(u)= -\frac{1}{\hbar} : \left[ \mbox{exp}\left( \hat{b}_+(u) -(b+c)(u+\frac{\hbar}{2}) \right) - \mbox{exp}\left( \hat{b}_-(u) -(b+c)(u-\frac{\hbar}{2}) \right) \right]:~, \label{eppp}\\ & & E^-(u)= \frac{1}{\hbar} : \left[ \mbox{exp}\left( (b+c)(u+\frac{1}{2}(k+1)\hbar ) \right) \mbox{exp}\left( \hat{a}_+(u) + \hat{b}_+(u+ \frac{1}{2}(k+2)\hbar) \right) \right. \nonumber \\ & &~~~~\left.
- \mbox{exp}\left( (b+c)(u-\frac{1}{2}(k+1)\hbar ) \right) \mbox{exp}\left( \hat{a}_-(u) + \hat{b}_-(u- \frac{1}{2}(k+2)\hbar) \right) \right]: \label{ennn}\end{aligned}$$ whi
[row 613 · 864 · 1,078 · 681 · 3,749 · 0.770348 · github_plus_top10pct_by_avg]
verline{\bf p}_{u}$, the displacement $\Delta{\bf p}_{u}$ can be estimated as $$\Delta{\bf p}_{u}=\overline{\bf p}^{*}_{v}-\overline{\bf p}_{u}\nonumber$$ where $\overline{\bf p}^{*}_{v}$ denotes the average position of all ground-truth parts that are annotated for part template $v$. As a result, for each latent pattern $u$, we only need to learn its channel $D_{u}\in{\boldsymbol\theta}$ and central position $\overline{\bf p}_{u}\in{\boldsymbol\theta}$. Scores of terminal nodes {#scores-of-terminal-nodes .unnumbered} ------------------------ The inference score for each terminal node $v^{\textrm{unt}}$ under a latent pattern $u$ is formulated as $$\begin{aligned} &S_{v^{\textrm{unt}}}=S_{v^{\textrm{unt}}}^{\textrm{rsp}}+S_{v^{\textrm{unt}}}^{\textrm{loc}}+S_{v^{\textrm{unt}}}^{\textrm{pair}}\nonumber\\ &S_{v^{\textrm{unt}}}^{\textrm{rsp}}=\left\{\begin{array}{ll}\lambda^{\textrm{rsp}}X(v^{\textrm{unt}}),& X(v^{\textrm{unt}})>0\\ \lambda^{\textrm{rsp}}S_{none},& X(v^{\textrm{unt}})\leq0\end{array}\right.\nonumber\\ &S_{v^{\textrm{unt}}}^{\textrm{pair}}=-\lambda^{\textrm{pair}}\!\!\!\!\!\!\!\!\underset{u_{\textrm{upper}}\in\!\textrm{Neighbor}(u)}{\mathbb{E}}\!\!\!\!\!\!\Vert[{\bf p}_{v^{\textrm{unt}}}-{\bf p}_{u_{\textrm{upper}}}]-[\overline{\bf p}_{u_{\textrm{upper}}}-\overline{\bf p}_{u}]\Vert\nonumber\end{aligned}$$ The score of $S_{v^{\textrm{unt}}}$ consists of the following three terms: 1) $S_{v^{\textrm{unt}}}^{\textrm{rsp}}$ denotes the response value of the unit $v^{\textrm{unt}}$, when we input image $I$ into the CNN. $X(v^{\textrm{unt}})$ denotes the normalized response value of $v^{\textrm{unt}}$; $S_{none}=-3$ is set for non-activated units. 
2) When the parent $u$ selects $v^{\textrm{unt}}$ as its location inference (*i.e.* $\Lambda_{u}\leftarrow\Lambda_{v^{\textrm{unt}}}$), $S_{v^{\textrm{unt}}}^{\textrm{loc}}$ measures the deformation level between $v^{\textrm{unt}}$’s location ${\bf p}_{v^{\textrm{unt}}}$ and $u$’s ideal location $\overline{\bf p}_{u}$. 3) $S_{v^{\textrm{unt}}}^{\textrm{pair
[row 614 · 227 · 815 · 735 · 2,762 · 0.7772 · github_plus_top10pct_by_avg]
  8     0.133   0.131
  IGS   0.434   0.270   0.016   0.011   0.193   0.014   0.231   0.223
  CW2   0.990   0.977   0.003   0.002   0.775   0.003   0.959   0.892

  : Transferability of attacks from the strongest defense arrangement \[1, 1, 1\] to other defense arrangements for LeNet-VAE.[]{data-label="table:transferability_layer_lenet"}

Investigating Cause of Low Attack Transferability
-------------------------------------------------

To confirm the suspicion that orthogonal gradients are responsible for the unexpected transferability results between defense arrangements seen in Table \[table:transferability\_layer\_lenet\], we computed the cosine similarity of the gradients of the output layer w.r.t. the input images. Table \[table:cosine\_similarities\_lenet\] shows the average cosine similarity between the strongest defense arrangement \[1, 1, 1\] and the other defense arrangements. To summarize the relationship between cosine similarity and attack transferability, we computed the correlations of the transferabilities in Table \[table:transferability\_layer\_lenet\] with the cosine similarities in Table \[table:cosine\_similarities\_lenet\]. These correlations are shown in Table \[table:pearson\_correlations\]. It is quite clear that cosine similarity between gradients is an almost perfect predictor of the transferability between defense arrangements. Thus, training VAEs with the goal of gradient orthogonality, or training conventional ensembles of models with this goal, has the potential to drastically decrease the transferability of adversarial examples between models.
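The cosine-similarity computation is just a normalized inner product of flattened gradient vectors. A numpy sketch (the gradients here are simulated, not taken from LeNet-VAE):

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two flattened gradient vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

rng = np.random.default_rng(0)
# Stand-ins for input gradients of two defense arrangements:
# g_other is mostly aligned with g_ref plus an independent component.
g_ref = rng.normal(size=784)
g_other = 0.9 * g_ref + 0.1 * rng.normal(size=784)

sim = cosine(g_ref, g_other)
# Correlating such per-arrangement similarities with measured
# transferabilities (e.g. via np.corrcoef) summarizes the relationship
# the tables report.
```

Near-orthogonal gradients (`sim` close to 0) are the regime in which attacks crafted against one arrangement fail to transfer to another.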
[row 615 · 467 · 570 · 720 · null · null · github_plus_top10pct_by_avg]
tive comments. [^1]: Supported by the Fund for Scientific Research - FWO-project G0561-08 [^2]: Results are available at http://termcomp.uibk.ac.at/ [^3]: Proposition 3 in [@DBLP:journals/corr/abs-0912-4360] uses natural coefficients, but the proposition also holds for polynomials with integer coefficients. --- abstract: | We study $\gamma\gamma$ scattering in noncommutative QED (NCQED), where the gauge field has a Yang-Mills type coupling, giving new contributions to the scattering process and making it possible for it to occur at tree level. The process takes place at one loop level in the Standard Model (SM) and could be an important signal for physics beyond the SM. However, it is found that the Standard Model contribution far exceeds the tree level contribution of the noncommutative case.\ \ [**Keywords**]{}: Noncommutative, gamma-gamma scattering\ \ [**PACS**]{}: 12.60.-i, 13.40.-f author: - | Namit Mahajan[^1]\ [*Department of Physics and Astrophysics,*]{}\ [*University of Delhi, Delhi-110 007, India.*]{} title: 'Noncommutative QED and $\gamma\gamma$ scattering' ---

Introduction
============

Noncommutativity of a pair of conjugate variables forms the central theme of quantum mechanics in terms of the Uncertainty Principle. We are quite familiar with the noncommutativity of rotations in ordinary Euclidean space. The idea of noncommutative (NC) space-time can be traced back to the work of Snyder [@snyder]. More recently, string theory arguments have motivated an extensive study of Quantum Field Theory (QFT) on NC spaces [@douglas]. The noncommutativity of space-time is realised by the coordinate operators, $x_{\mu}$, satisfying $$[x_{\mu},x_{\nu}] = i\Theta_{\mu\nu}$$ with $\Theta_{\mu\nu} = \theta \epsilon_{\mu\nu}$. $\theta$ is the noncommutativity parameter with dimensions $(mass)^{-2}$ and $\epsilon_{\mu\nu}$ is a dimensionless antisymmetric matrix with elements ${\mathcal O}(1)$.
The field theories formulated on such spaces are non-local and violate Lorentz symmetry.
[row 616 · 326 · 336 · 545 · null · null · github_plus_top10pct_by_avg]
oplus C(L^j)))$ and compute the Dickson invariant of the image of an element of $F_j$ in this orthogonal group. We write $M_0=N_0\oplus L_j$, where $N_0$ is unimodular with even rank. Thus $N_0$ is either *of type II* or *of type $I^e$*. First we assume that $N_0$ is *of type $I^e$*. Then we can write $N_0=(\oplus_{\lambda'}H_{\lambda'})\oplus A(1, 2b, 1)$ and $L_j=(\oplus_{\lambda''}H_{\lambda''})\oplus (a)$ by Theorem \[210\], where $H_{\lambda'}=H(0)=H_{\lambda''}$, $b\in A$, and $a (\in A) \equiv 1$ mod 2. Thus we write $M_0=(\oplus_{\lambda}H_{\lambda})\oplus A(1, 2b, 1)\oplus (a)$, where $H_{\lambda}=H(0)$. For this choice of a basis of $L^j=\bigoplus_{i \geq 0} M_i$, the image of a fixed element of $F_j$ in the special fiber of the smooth integral model associated to $L^j$ is $$\begin{pmatrix} id&0 &0\\ 0 &\begin{pmatrix} 1+2 z_j^{\ast} \end{pmatrix} &0 \\ 0& 0 &id \end{pmatrix}.$$ Here, $id$ in the $(1,1)$-block corresponds to the direct summand $(\oplus_{\lambda}H_{\lambda})\oplus A(1, 2b, 1)$ of $M_0$ and the diagonal block $\begin{pmatrix} 1+2 z_j^{\ast}\end{pmatrix}$ corresponds to the direct summand $(a)$ of $M_0$. Let $(e_1, e_2, e_3)$ be a basis for the direct summand $A(1, 2b, 1)\oplus (a)$ of $M_0$. Since this is *unimodular of type $I^o$*, we can choose another basis based on Theorem 2.2 of [@C2]. Namely, if we choose $(-2be_1+e_2, -ae_1+e_3, e_2+e_3)$ as another basis, then $A(1, 2b, 1)\oplus (a)$ becomes $A(2b(2b-1), a(a+1), a(2b-1))\oplus (a+2b)$. Here, $A(2b(2b-1), a(a+1), a(2b-1))$ is *unimodular of type II*. 
Thus we can write that $$M_0=(\oplus_{\lambda}H_{\lambda})\oplus \left(B(-2be_1+e_2)\oplus B(-ae_1+e_3)\right)\oplus B(e_2+e_3).$$ For this basis, the image of a fixed element of $F_j$ in the special fiber of the smooth integral model associated to $L^j$ is $$\begin{pmatrix} id&0 &0\\ 0 & \begin{pmatrix}1&\frac{-2a}{a+2b}z_j^{\ast} & \frac{-2a}{a+2b}z_j^{\ast}\\ 0&1+\frac{4(a+b)}{a+2b}z_j^{\ast} &\frac{4b}{a+2b}z_j^{\ast}\\0&\frac{-2a}{a+2b}z_j^{\ast}&1+\frac{2a}{a
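The basis change just described can be verified mechanically. The sketch below assumes the convention, consistent with the matrices displayed above, that $A(x, y, z)$ denotes the rank-2 Gram block $\begin{pmatrix} x & z \\ z & y \end{pmatrix}$ and $(a)$ the rank-1 block $(a)$:

```python
import sympy as sp

a, b = sp.symbols('a b')

# Gram matrix of A(1, 2b, 1) ⊕ (a) in the basis (e1, e2, e3),
# under the assumed convention A(x, y, z) = [[x, z], [z, y]]
G = sp.Matrix([[1,   1, 0],
               [1, 2*b, 0],
               [0,   0, a]])

# Columns are the new basis vectors -2b*e1 + e2, -a*e1 + e3, e2 + e3
P = sp.Matrix([[-2*b, -a, 0],
               [   1,  0, 1],
               [   0,  1, 1]])

Gnew = (P.T * G * P).expand()
# Expect A(2b(2b-1), a(a+1), a(2b-1)) ⊕ (a + 2b),
# i.e. zeros in the off-diagonal entries of the last row and column
print(Gnew)
```

The last basis vector $e_2 + e_3$ is orthogonal to the first two, which is exactly the splitting into the unimodular type-II block and the rank-1 block $(a+2b)$.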
parately). ![Percentages of liver biopsies with marked necroinflammation and fibrosis categorized by age. A: Percentages of marked necroinflammation in different age groups; B: Percentages of marked fibrosis in different age groups. HBeAg: Hepatitis e antigen.](WJG-23-2802-g004){#F4} Demographic and clinical characteristics of subjects with and without marked hepatic histological changes are presented in Table [2](#T2){ref-type="table"}. Patients with significant liver necroinflammation had higher ALT and AST values (both, *P* \< 0.01) and lower PLT values (*P =* 0.016) compared to patients without necroinflammation. Patients with marked liver fibrosis had higher AST, ALP, GGT and TBA values (*P =* 0.012, 0.004, 0.000 and 0.002, respectively) and lower PLT value (*P =* 0.001) compared to those without fibrosis.

###### Demographic and clinical characteristics of carriers with and without significant liver changes

|                    | Significant necroinflammation (*n* = 42) | No significant necroinflammation (*n* = 113) | *P* value | Significant fibrosis (*n* = 24) | No significant fibrosis (*n* = 131) | *P* value |
|--------------------|------------------------------------------|----------------------------------------------|-----------|---------------------------------|-------------------------------------|-----------|
| Age (yr)           | 40.0 ± 9.3                               | 39.7 ± 10.1                                  | 0.854     | 41.3 ± 10.3                     | 39.5 ± 9.8                          | 0.416     |
| Male, *n* (%)      | 13                                       | 79                                           | 0.417     | 20                              | 23                                  | 0.505     |
| HBV DNA (logIU/mL) | 6.0 ± 2.0                                | 5.7 ± 2.2                                    | 0.368     | 5.6 ± 2.0                       | 5.8 ± 2.1                           |           |
n Eq. (\[eq:gen-qlm\]), and as such generalizes the standard quantum law of motion [@b3]. The antisymmetric super-operator $\mbox{\boldmath$\cal D$}$ in Eq. (\[D\]) introduces a novel mathematical structure that characterizes the time evolution of quantum-classical systems. The Jacobi relation in quantum-classical dynamics is $${\cal J}=\left[\hat{\chi}, \left[\hat{\xi},\hat{\eta}\right]_{\mbox{\tiny\boldmath$\cal D$}} \right]_{\mbox{\tiny\boldmath$\cal D$}} +\left[\hat{\eta},\left[\hat{\chi},\hat{\xi} \right]_{\mbox{\tiny\boldmath$\cal D$}} \right]_{\mbox{\tiny\boldmath$\cal D$}} +\left[\hat{\xi},\left[\hat{\eta},\hat{\chi} \right]_{\mbox{\tiny\boldmath$\cal D$}} \right]_{\mbox{\tiny\boldmath$\cal D$}}. \label{qc-jacobi}$$ The explicit expression of $\cal J$ has been given in Ref. [@b3] where it was shown that it may be different from zero at least in some point $X$ of phase space: for this reason the quantum-classical theory of Refs. [@qc-bracket; @kcmqc; @b3; @bsilurante] can be classified as a non-Hamiltonian theory. It is worth to note that the quantum-classical law of motion in Eq. (\[qclm\]) is a particular example of a more general form of quantum mechanics where time evolution is defined by means of non-Hamiltonian commutators. The non-Hamiltonian commutator between two arbitrary operators $\hat{\chi}$ and $\hat{\xi}$ is defined by $$[\hat{\chi},\hat{\xi}]_{\mbox{\tiny\boldmath$\Omega$}}= \left[\begin{array}{cc}\hat{\chi} & \hat{\xi}\end{array}\right] \cdot\mbox{\boldmath$\Omega$}\cdot \left[\begin{array}{c}\hat{\chi} \\ \hat{\xi}\end{array}\right]\;, \label{eq:gen-quantum-algebra}$$ where $\mbox{\boldmath$\Omega$}$ is an antisymmetric matrix operator of the form $$\mbox{\boldmath$\Omega$}= \left[\begin{array}{cc}0 & f[\hat{\eta}]\\ -f[\hat{\eta}] & 0\end{array}\right]\;,$$ where $f[\hat{\eta}]$ can be another arbitrary operator or functional of operators. 
Then, generalized equations of motion can be defined as $$\begin{aligned} \frac{d\hat{\chi}}{dt}&=&\frac{i}{\hbar} \left[\begin{array}{cc} \
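Expanding the row-matrix-column product in Eq. (\[eq:gen-quantum-algebra\]) with this $\mbox{\boldmath$\Omega$}$ gives $[\hat{\chi},\hat{\xi}]_{\mbox{\tiny\boldmath$\Omega$}} = \hat{\chi} f \hat{\xi} - \hat{\xi} f \hat{\chi}$. A toy numerical sketch (random $2\times 2$ matrices standing in for the operators; not from the paper) confirms the two properties one expects: antisymmetry, and reduction to the ordinary commutator when $f[\hat{\eta}]$ is the identity.

```python
import numpy as np

rng = np.random.default_rng(0)
chi, xi, f = (rng.standard_normal((2, 2)) for _ in range(3))

def nh_comm(a, b, f):
    # [a, b]_Omega = a·f·b - b·f·a, the expanded form of the bracket
    return a @ f @ b - b @ f @ a

# Antisymmetry of the bracket
print(np.allclose(nh_comm(chi, xi, f), -nh_comm(xi, chi, f)))   # True
# With f = identity, the ordinary commutator is recovered
print(np.allclose(nh_comm(chi, xi, np.eye(2)), chi @ xi - xi @ chi))  # True
```

The Jacobi identity, by contrast, generally fails for arbitrary $f$, which is the point of the non-Hamiltonian classification discussed above.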
e actuated automaton is a mechanism representing software. When extended by the principle of emergence (§\[S:PRINCIPLE\_OF\_EMERGENCE\]) and the constructs of the operational profiles (§\[S:OPERATIONAL\_PROFILE\_SECTION\]) and cones (§\[S:CONE\_SECTION\]), it becomes capable of representing precursor conditions for software accidents. Let ${\Vert{{{\operatorname{edge}{{\mathcal{C}}}}}}\Vert}$ (see §\[S:ABSOLUTE\_OP\_PROFILE\_RATE\]) be the rate-based absolute operational profile of the edge of an acyclic cone ${\mathcal{C}}$. Since a member of ${{\operatorname{edge}{{\mathcal{C}}}}}$ is executed at the average intensity of ${{\Vert{{{\operatorname{edge}{{\mathcal{C}}}}}}\Vert}}$, then so is the cone’s step of convergence ${\mathit{s}}_\text{crux}$. Let $\rho$ be the proportion of failing tests (localized predecessor walks). Under that supposition, failures occur at the intensity of $\rho \cdot {\Vert{{{\operatorname{edge}{{\mathcal{C}}}}}}\Vert}$. The definition of ${\Vert{{{\operatorname{edge}{{\mathcal{C}}}}}}\Vert}$, through the internal function $\text{sync}(\cdot)$, allows for the passage of time in the proper duration. #### Uniting mechanism and model We presume that one failing test equals one accident. The cone’s step of convergence is considered to be the point of exhibition of a hazard whenever safety constraints are not met. This mechanism may be separately equated to the intensity (not the rate of loss) of the compound Poisson process: $$\lambda = \rho \cdot {\Vert{{{\operatorname{edge}{{\mathcal{C}}}}}}\Vert}.$$ This equation places a property of the model on the left and properties of the mechanism on the right. The execution rate of the edge of an acyclic cone numerically equals the execution rate of the (set containing the) cone’s crux. The cone’s definitional status (as a complete independent set of localized predecessor walks ending at ${\mathit{s}}_{\text{crux}}$) causes this. 
Symbolically, $${\Vert{{{\operatorname{edge}{{\mathcal{C}}}}}}\Vert} = {\Vert{\lbrace{\mathit{s}}_{\tex
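Numerically, the intensity relation $\lambda = \rho \cdot {\Vert{{{\operatorname{edge}{{\mathcal{C}}}}}}\Vert}$ is a one-line computation. The figures below are invented for illustration; the text does not give numerical values.

```python
import math

# Illustrative numbers only: edge-of-cone execution rate and failing-test proportion
edge_rate_per_hour = 120.0   # ||edge C||, executions of the crux step per hour
rho = 0.005                  # proportion of failing tests (localized predecessor walks)

lam = rho * edge_rate_per_hour              # accident intensity lambda, per hour
mission_hours = 8.0
expected_accidents = lam * mission_hours    # mean of the Poisson count over the mission
p_at_least_one = 1.0 - math.exp(-lam * mission_hours)
print(lam, expected_accidents, p_at_least_one)
```

Because one failing test is equated to one accident, the expected accident count over a duration is just the intensity times the duration, and the Poisson model gives the probability of at least one accident for free.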
form \[eq:4.23\] u(t,x,y)&=h(t,r)+|[h]{}(t,|[r]{}),\ v(t,x,y)&=i[( h(t,r)-|[h]{}(t,|[r]{}) )]{},\ (t,x,y)&=[( i )]{},\ (t,x,y)&=-V(t,x,y)+[12]{}(2(t,x,y))++u\_t(t,x,y)dx\ & +\_y(t,x,y)(2(t,x,y))dx+\_0(t), where the functions $h$ and $\bar{h}$ are defined by (\[eq:4.22bis\]). Note that the solution (\[eq:4.23\]) is a real-valued solution. #### Let us now consider the situation when $\Omega$ is identically equal to zero in the system of ODEs (\[eq:4.15\]). This leads to two possible independent solutions for $h(r)$ depending on whether its second order derivative $h^{(2)}(r)$ vanishes or not. These solutions are respectively \[eq:4.24\] (i) &h(r)=c\_1r+c\_2,&&h\^[(2)]{}(r)=0,\ (ii)&h(r)=+c\_3,&&h\^[(2)]{}(r)0, and their complex conjugates, where the $c_i$ are integration constant. Let us consider separately two cases: when the function $h$ is given by (\[eq:4.24\].i) and when $h$ is of the form (\[eq:4.24\].ii). #### Case i. In this case, the real solution of the original system (\[eq:4.1\]) takes the form \[eq:4.25\] u(t,x,y)=&4[( e(c\_1(t))x+m(c\_1(t))y+e(c\_2(t)) )]{},\ v(t,x,y)=&4[( m(c\_1(t))x-e(c\_1(t))y+m(c\_2(t)) )]{},\ (t,x,y)=& { &-[( )]{},&&m(c\_1(t))0,\ & && m(c\_1(t))=0, .\ (t,x,y)=&-V(t,x,y)+[( 2|c\_1(t)|\^2+e(\_1(t)) )]{}x\^2 -2m(\_1(t))xy\ &+[( 2|c\_1(t)|\^2-e(\_1(t)) )]{}y\^2+2[( 2e[( c\_1(t)|[c]{}\_2(t) )]{}+e(\_2(t)) )]{}x\ &-2[( 2m[( c\_1(t)|[c]{}\_2(t) )]{}+m(\_2(t)) )]{}y+\_0(t), where the real function $\sigma_0(t)$ and the complex functions $c_1(t)$, $c_2(t)$ are arbitrary functions of time and $\dot{c}_1(t)$, $\dot{c}_2(t)$ their respective derivatives with respect to $t$. It should be noted that if the potential $V$, the real function $\sigma_0(t)$ and the complex functions $c_1(t)$, $c_2(t)$, are all bounded and are their derivatives $\dot{c}_1(t)$, $\dot{c}_2(t)$ are also bounded, then the solution (\[eq:4.19\]) is bounded. 
Moreover, if we take these functions to be of the form \[eq:4.25\] \_0(t)=a\_0e\^[-s\_0t]{},c\_i(t)=a\_je\^[-s\_jt]{}+ib\_je\^[-q\_jt]{},j=1,2, wher
T^{-}(y) \end{array}$$ for deterministic case. Where we write $A(y)\varsubsetneq B(y)$ if $A(y)\subseteq B(y)$ holds for all $y$ but there exists some $y$ such that $A(y)\neq B(y)$; while by $A(y){\ensuremath {\underset{\mbox{\sout{\tiny{\,?\,}}}}{\subset}}}B(y)$ we indicate that whether or not there exists $y$ such that $A(y)\neq B(y)$ is still open, although $A(y)\subseteq B(y)$ always holds for all $y$. From the above two diagrams, the remaining questions for further study are: 1). Whether or not $\overline{T^{\lambda}(y)}= T^{\lambda-}(y)$ (or equivalently, $\overline{M^{\lambda}(y)}= M^{\lambda-}(y)$ ) for any $y$ and $\lambda\leq 1$. In other words, whether or not the function $T^{\lambda}(y)$ (or $M^{\lambda}(y)$) is ‘almost’ left continuous at any $\lambda\leq 1$. 2). Whether or not $\overline{T(y)}= \overline{M(y)}$ (or equivalently, $T(y)^\circ= M(y)^\circ$) for any $y$. That is, whether or not catalyst-assisted transformation and multiple-copy transformation are also geometrically equivalent in deterministic setting. ACKNOWLEDGMENTS {#acknowledgments .unnumbered} =============== The authors thank the colleagues in the Quantum Computation and Quantum Information Research Group for useful discussion. This work was partly supported by the Natural Science Foundation of China (Grant Nos. 60503001, 60321002, and 60305005), and by Tsinghua Basic Research Foundation (Grant No. 052220204). R. Duan acknowledges the financial support of Tsinghua University (Grant No. 052420003). --- abstract: | Background : Measurement of the fusion cross-section for neutron-rich light nuclei is crucial in ascertaining if fusion of these nuclei occurs in the outer crust of a neutron star. Purpose : Measure the fusion excitation function at near-barrier energies for the $^{19}$O + $^{12}$C system. Compare the experimental results with the fusion excitation function of $^{18}$O + $^{12}$C and $^{16}$O + $^{12}$C. Method : A beam of $^{19}$O, produced via the $^{18}$O(d,p) reactio
$. Compute $\sigma(\pi\cdot {}^tm_{i,i}')\cdot\begin{pmatrix} a_i&0&0\\ 0&1&1 \\ 0&1 &2\bar{\gamma}_i \end{pmatrix}\cdot(\pi m_{i,i}')$ formally and this equals $\sigma(\pi)\pi\begin{pmatrix} {}^ts_i'a_is_i'+\pi^2X_i & Y_i & \pi Z_i \\ \sigma( {}^tY_i) &{}^tr_i'a_ir_i'+\pi^2X_i'&\pi Y_i' \\ \sigma(\pi\cdot {}^tZ_i)&\sigma(\pi\cdot {}^tY_i') &-\pi^2(z_i')^2+ \pi^4 Z_i' \end{pmatrix}$ for certain matrices $X_i, Y_i, Z_i, X_i', Y_i', Z_i'$ with suitable sizes. Thus we can ignore the $(1, 1), (1, 2), (1, 3), (2, 2),$ and $(2, 3)$-blocks of the term $\sigma(\pi\cdot {}^tm_{i,i}')\cdot\begin{pmatrix} a_i&0&0\\ 0&1&1 \\ 0&1 &2\bar{\gamma}_i \end{pmatrix}\cdot(\pi m_{i,i}')$ in Equation (\[ea13\]). As in the above case (iv), we should consider the $(3,3)$-block of this term because of the appearance of $\pi^4(z_i')^2$. By the same reason, we can ignore the $(1, 1), (1, 2), (1, 3), (2, 2),$ and $(2, 3)$-blocks of the term $\pi^3(\ast)$, whereas the $(3,3)$-block of this term should be considered. We interpret each block of Equation (\[ea13\]) below: 1. Firstly, we consider the $(1,1)$-block. The computation associated to this block is similar to that for the above case (iii). Hence there are exactly $((n_i-2)^2-(n_i-2))/2$ independent linear equations and $((n_i-2)^2+(n_i-2))/2$ entries of $s_i'$ determine all entries of $s_i'$. 2. Secondly, we consider the $(1, 2)$-block. Then it equals $$\label{ea14} {}^tb_i'={}^tb_i+\pi( -{}^tv_i'+a_ir_i').$$ This is an equation in $B\otimes_AR$. By letting $b_i'=b_i=0$, there are exactly $(n_i-2)$ independent linear equations among the entries of $v_i'$ and $r_i'$. 3. The $(1, 3)$-block is $$\pi e_i'=\pi e_i+\pi^2({}^ty_i'+a_it_i').$$ By letting $e_i'=e_i=0$, we have $$\label{ea15} e_i'=e_i+\pi({}^ty_i'+a_it_i')=0.$$ This is an equation in $B\otimes_AR$. Thus there are exactly $(n_i-2)$ independent linear equations among the entries of $y_i'$ and $t_i'$. 4. The $(2, 3)$-block is $$1+\pi d_i'=1+\pi d_i+\pi^2(x_i'+z_i'+w_i').$$ By
ategorical variables, odds ratios, odds of CSB among comparison group vs. odds of CSB among reference group. Prior to adjusting for covariates, there was a significant association between time and CSB, whereby the proportion of respondents with CSB was significantly lower at 6 months post-deployment than at baseline (*p* \< 0.05). This difference was no longer significant after the inclusion of other covariates in our model (*p* = 0.061). The final multivariable model is displayed in [Table 2](#T2){ref-type="table"}. Only three variables were statistically significant. Age was positively associated with CSB; every increase of 1 standard deviation in age was associated with a 30% increase in the odds of CSB (*p* \< 0.05). Those with a history of CST had more than 3 times greater odds of CSB than did those without such trauma (OR = 3.17, 95% CI = 1.27--7.93). Finally, each standard deviation unit of increase in the PCL score (i.e., PTSD severity) was associated with a 55% increase in the odds of CSB (*p* \< 0.05).

###### Results from multivariable modeling: Factors associated with compulsive sexual behavior

|                                      | OR (95% CI)       | *p*-value |
|--------------------------------------|-------------------|-----------|
| Time                                 |                   |           |
| Baseline                             | Ref.              |           |
| 3 months                             | 1.14 (0.3, 1.78)  | 0.560     |
| 6 months                             | 0.53 (0.27, 1.03) | 0.061     |
| Age (standardized)                   | 1.30 (1.07, 1.57) | **0.007** |
| Childhood sexual trauma              | 3.17 (1.27, 7.93) | **0.014** |
| Childhood physical trauma            | 2.38 (0.97, 5.82) | 0.058     |
| PTSD symptom severity (standardized) | 1.55 (1.12, 2.12) | **0.006** |

*Note:* Statistically significant values in bold. Based on GEE modeling, specifying binomial family, logit link, AR 1 correlation structure, and robust standard errors. In our final model, we examined associations between specific
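The percentage interpretations quoted above follow directly from the standardized odds ratios. A quick sketch using the reported values; the 2-SD extrapolation is our own illustration, valid because log-odds are linear in a GEE logit model.

```python
import math

or_age, or_pcl = 1.30, 1.55          # standardized ORs reported in Table 2

# "% increase in odds per 1 SD" is just (OR - 1) * 100
print(round((or_age - 1) * 100))     # 30
print(round((or_pcl - 1) * 100))     # 55

# Because log-odds are linear, a 2-SD increase multiplies the odds by OR**2
print(or_pcl ** 2)
# The underlying GEE coefficient is recovered as log(OR)
beta_pcl = math.log(or_pcl)
```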
\nu}{l^2} + \| {\boldsymbol{\omega}} \|_2^2 \right)^{-(\nu+d/2)},\end{aligned}$$ where $K_\nu$ is a modified Bessel function [@Rasmussen2006]. The smoothness of the process is increased with the parameter $\nu$: in the limit $\nu\rightarrow\infty$ we recover the squared exponential covariance function. Gaussian processes are also closely connected to classical spline smoothing [@kimeldorf1970] as well as other classical regularization methods [@kaipio2006statistical; @mueller2012linear] for inverse problems. Although the construction of the corresponding covariance function is hard (or impossible), it is still possible to construct the corresponding spectral density in many cases. With these spectral densities and the basis function method of Section \[sec:approx\], we can construct probabilistic versions of the classical regularization methods as discussed in the next section. Covariance functions arising from classical regularization ---------------------------------------------------------- Let us recall that a classical way to seek for solutions to inverse problems is via optimization of a functional of the form $$\mathcal{J}[f] = \frac{1}{2 \sigma^2} \sum_i (y_i - \mathcal{H}_{{\mathbf x},i} f({\mathbf x}))^2 + \frac{1}{2 \sigma_f^2} \int | \mathcal{L} f({\mathbf x}) |^2 \, d{\mathbf x},$$ where $\mathcal{L}$ is a linear operator. This is equivalent to a Gaussian process regression problem, where the covariance operator is formally chosen to be $\mathcal{K} = [\mathcal{L}^* \mathcal{L}]^{-1}$. In (classical) Tikhonov regularization we have $\mathcal{L} = \mathcal{I}$ (identity operator) which corresponds to penalizing the norm of the solution. Another option is to penalize the Laplacian which gives $\mathcal{L} = \nabla^2$. 
Although the kernel of this covariance operator is ill-defined with the classical choices of $\mathcal{L}$ and thus it is not possible to form the corresponding covariance function, we can still compute the corresponding spectral density function by computing the Fourier transform
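As a concrete sketch (our own, with assumed parameter values), both the Matérn spectral density quoted above and the formal density implied by a derivative penalty $\mathcal{L}$ can be written down directly; for $\mathcal{L} = \nabla^2$ the formal density is $\sigma_f^2 / \|{\boldsymbol\omega}\|^4$, and for Tikhonov ($\mathcal{L} = \mathcal{I}$) it is flat.

```python
import numpy as np

def matern_spectral_density(w, nu=1.5, ell=1.0, d=1):
    # S(w) ∝ (2*nu/ell**2 + |w|**2)**(-(nu + d/2)), the Matern form quoted above
    return (2.0 * nu / ell**2 + np.abs(w)**2) ** (-(nu + d / 2))

def formal_regularizer_density(w, sigma_f=1.0, order=2):
    # Formal spectral density of K = (L* L)^{-1} for an order-m derivative penalty:
    # order=0 is Tikhonov (flat density), order=2 is the Laplacian penalty
    # sigma_f**2 / |w|**4; the small floor avoids the singularity at w = 0
    return sigma_f**2 / np.maximum(np.abs(w), 1e-12) ** (2 * order)

w = np.linspace(0.1, 5.0, 50)
S_matern = matern_spectral_density(w)
S_laplace = formal_regularizer_density(w)
```

Both densities decay in $\|\omega\|$, i.e. they penalize high frequencies, which is exactly the regularizing behaviour the basis-function approximation of Section \[sec:approx\] exploits.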
Y$ is a bornologous equivalence between metric spaces. Let $x_0$ be a basepoint of $X$ and set $y_0=f(x_0)$. Suppose $X$ and $Y$ are $\sigma$-stable. Then $\sigma(X,x_0)=\sigma(Y,y_0)$. Change of basepoint in $\sigma$-stable spaces ============================================= As mentioned above, the definition of $\sigma$-stable depends on the choice of basepoint. We show that in fact a space being $\sigma$-stable is independent of basepoint. \[zlemma\] Suppose $x_0,y_0\in X$ and $n\geq {\text{d}}(x_0,y_0)$. Let $z_n:\sigma_n(X,x_0)\to\sigma_n(X,x_1)$ be the function that sends the equivalence class of a sequence $x_0,x_1,x_2,\ldots$ to the equivalence class of $y_0,x_0,x_1,x_2,\ldots$. Then $z_n$ is a bijection. Let $w_n:\sigma_n(X,y_0)\to \sigma_n(X,x_0)$ be the function that sends the equivalence class of a sequence $y_0,y_1,y_2,\ldots$ to the equivalence class of $x_0,y_0,y_1,y_2,\ldots$. We show that $z_n$ and $w_n$ compose to form the identities and thus $z_n$ must be a bijection. Suppose $[(x_i)]\in\sigma_n(X,x_0)$. Then $(w_n\circ z_n)([(x_i)])$ is the equivalence class of the sequence $x_0,y_0,x_0,x_1,\ldots$ which is a supersequence of $(x_0)$. Similarly, $z_n\circ w_n$ is the identity on $\sigma_n(X,y_0)$. Suppose a metric space $X$ is $\sigma$-stable with respect to a basepoint $x_0\in X$. Let $y_0\in X$. Then $X$ is $\sigma$-stable with respect to $y_0$ and $\sigma(X,x_0)=\sigma(X,y_0)$. Let $N\in\mathbb N$ be such that $\phi_n:\sigma_n(X,x_0)\to\sigma_{n+1}(X,x_0)$ is a bijection for all $n\geq N$. Choose $M\in\mathbb N$ such that $M\geq N,{\text{d}}(x_0,x_1)$. Suppose $n\geq M$. Then the following diagram commutes. \_[n+1]{}(X,x\_0) & \^[z\_[n+1]{}]{} & \_[n+1]{}(X,y\_0)\ \^[\_n]{} & & \_[\_n]{}\ \_n(X,x\_0) & \^[z\_n]{} & \_n(X,y\_0)\ Since $\phi_n$, $z_n$, and $z_{n+1}$ are bijections, so is $\psi_n$. The invariant ============= Suppose $X$ and $Y$ are coarsely equivalent and $\sigma$-stable. 
Then $\sigma(X)=\sigma(Y)$. Suppose $f:X\to Y$ and $g:Y\to X$ form a coarse equivalence. Le
ep (2), $(z_j^{\ast})_1$ is the image of a fixed element of $F_j$ under the map $\psi_j$. Since $(z_j^{\ast})_1$ can be either $0$ or $1$ by Equation (\[e42\]), $\psi_j|_{F_j}$ is surjective onto $\mathbb{Z}/2\mathbb{Z}$ and thus $\psi_j$ is surjective.\ If $N_0$ is *of type II*, then the proof of the surjectivity of $\psi_j$ is similar to and simpler than that of the above case when $N_0$ is *of type $I^e$* and so we skip it.\ 4. Assume that $M_0$ is *of type $I^o$* and that $L_j$ is *of type $I^e$*. We write $M_0=N_0\oplus L_j$, where $N_0$ is unimodular with odd rank so that it is *of type $I^o$*. Then we can write $N_0=(\oplus_{\lambda'}H_{\lambda'})\oplus (a)$ and $L_j=(\oplus_{\lambda''}H_{\lambda''})\oplus A(1, 2b, 1)$ by Theorem \[210\], where $H_{\lambda'}=H(0)=H_{\lambda''}$ and $a, b \in A$ such that $a \equiv 1$ mod 2. We write $M_0=(\oplus_{\lambda}H_{\lambda})\oplus (a)\oplus A(1, 2b, 1)$, where $H_{\lambda}=H(0)$. For this choice of a basis of $L^j=\bigoplus_{i \geq 0} M_i$, the image of a fixed element of $F_j$ in the special fiber of the smooth integral model associated to $L^j$ is $$\begin{pmatrix} id&0 &0\\ 0 &\begin{pmatrix} 1+\pi x_j & 2 z_j^{\ast}\\ 0 & 1 \end{pmatrix} &0 \\ 0& 0 &id \end{pmatrix}.$$ Here, $id$ in the $(1, 1)$-block corresponds to the direct summand $(\oplus_{\lambda}H_{\lambda})\oplus (a)$ of $M_0$ and the diagonal block $\begin{pmatrix} 1+\pi x_j & 2 z_j^{\ast}\\ 0 & 1 \end{pmatrix} $ corresponds to the direct summand $A(1, 2b, 1)$ of $M_0$. Let $(e_1, e_2, e_3)$ be a basis for the direct summand $(a)\oplus A(1, 2b, 1)$ of $M_0$. Since this is *unimodular of type $I^o$*, we can choose another basis based on Theorem \[210\]. Namely, if we choose $(-2be_2+e_3, e_1-ae_2, e_1+e_3)$ as another basis, then $(a)\oplus A(1, 2b, 1)$ becomes $A(2b(2b-1), a(a+1), a(2b-1))\oplus (a+2b)$. Here, $A(2b(2b-1), a(a+1), a(2b-1))$ is *unimodular of type II*. 
Thus we can write that $M_0=(\oplus_{\lambda}H_{\lambda})\oplus A(2b(2b-1), a(a+1), a(2b-1))\oplus (a+2b)$. For this basis
icular, if $(K,\mathcal{O},k)$ is a $p$-modular system for $G$, then the Mackey algebras $\mu_{k}(G)$ and $\mu_{\mathcal{O}}(G)$ are symmetric if and only if $p^2 \nmid |G|$. We use the following notations: - Let $G$ be a finite group. Then $[s(G)]$ denotes a set of representatives of the conjugacy classes of subgroups of $G$. - Let $X$ be a finite $G$-set. We still denote by $X$ the isomorphism class of $X$ in the Burnside ring $B(G)$. - All the $G$-sets are supposed to be finite. - Let $p$ be a prime number. Then $O^{p}(G)$ is the smallest normal subgroup of $G$ such that $G/O^{p}(G)$ is a $p$-group. A finite group $G$ is $p$-perfect if $O^{p}(G)=G$. - Let $H$ and $K$ be two subgroups of $G$. We use the notation $H=_{G} K$ if $H$ and $K$ are conjugate in $G$. Symmetric algebras ------------------ Let $R$ be a commutative ring with unit. Let $A$ be an $R$-algebra. Then $A$ is a symmetric algebra if: 1. $A$ is a finitely generated projective $R$-module. 2. There exists a non-degenerate, associative, symmetric bilinear form $b$ on $A$; that is, a bilinear form $b$ such that: - for $x$, $y$, $z\in A$ we have $b(xy,z)=b(x,yz)$; - for $x$ and $y$ in $A$, we have $b(x,y)=b(y,x)$; - the map from $A$ to $Hom_{R}(A,R)$ defined by $x\mapsto b(x,-)$ is an isomorphism of $R$-modules. Let $A$ be an $R$-algebra which is a finitely generated projective $R$-module. Then $A$ is a symmetric algebra if and only if $A$ is isomorphic to $Hom_{R}(A,R)$ as an $A$-$A$-bimodule. We have the following elementary result (Lemma $2.2$ of [@symmetric_bilinear_forms]): Let $A$ be an $R$-algebra which is free of finite rank over $R$. Let $b$ be a bilinear form on $A$. Let $e:=(e_{1},\cdots,e_{n})$ be an $R$-basis of $A$. Then $b$ is non-degenerate if and only if the matrix of $b$ in the basis $e$ is invertible. Mackey functors ---------------- There are several possible definitions for the notion of Mackey functor for $G$ over $R$. In this paper we use two of them. The first definition is due to Dress
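The elementary lemma above is easy to illustrate. The sketch below takes the assumed example (not from the paper) $A = \mathbb{Q}C_2$ with its usual symmetrizing form, $b(x,y)$ being the coefficient of the identity in $xy$, and checks non-degeneracy through invertibility of the Gram matrix.

```python
import numpy as np

# Basis (1, g) of the group algebra Q[C2], with g*g = 1.
# b(x, y) = coefficient of the identity element in x*y, so
# b(1,1) = 1, b(1,g) = 0, b(g,1) = 0, b(g,g) = 1.
gram = np.array([[1.0, 0.0],
                 [0.0, 1.0]])

# The lemma: b is non-degenerate iff the Gram matrix in a basis is invertible
nondegenerate = abs(np.linalg.det(gram)) > 1e-12
print(nondegenerate)  # True
```

The same check with $b(x,y)$ taken as the coefficient of $g$ instead would give the Gram matrix $\begin{pmatrix}0&1\\1&0\end{pmatrix}$, still invertible, showing that the symmetrizing form of a symmetric algebra is far from unique.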
s thus generates determinants that belong to the external space, ${\cal H}_{1}$. The spawning probability from these $\Ket{\Psi_{0}}$ walkers to $\Ket{\Psi_{1}}$ follows the expression of Eq.\[eq:pspawn\]; however, those walkers are spawned on replica 1 instead of replica 0. Replica 1, which was initially empty, starts getting populated; the walkers coming from replica 0 correspond to the source term in Eq.\[eq:psi1diffeq\]. We emphasize that in replica 0 the population dynamics are not modified by this extra excitation step, and that by construction the first- and zeroth-order wavefunctions are orthogonal to each other; this obviates the orthogonalization techniques that will be required for the higher order perturbations. The walkers on replica 1 are also subjected to a cloning/dying step and a spawning step at each iteration. For the dying step the applied operator is $\left(\hat{H}_{0}-E_{0}\right)$, where $E_{0}$ is the projected energy in replica 0. For the spawning step, the applied excitation operator belongs to $\hat{H}_{0}$. The simultaneous sampling of the two wavefunctions is schematized in Fig.\[fig1:scheme\]. ![Schematized description of an iteration update of the two replicas. On the left, the zeroth-order wave function, sampled in replica 0, is updated by applying the $\left(\mathds{1}-\Delta\tau\left(\hat{H_{0}}-S\mathds{1}\right)\right)$ operator. On the right, the first-order perturbation, in replica 1, is updated by applying the $\left(\mathds{1}-\Delta\tau\left(\hat{H_{0}}-E_{0}\mathds{1}\right)\right)$ operator to $\protect\Ket{\Psi_{1}}$ and adding walkers spawned by applying $\hat{V}$ onto $\protect\Ket{\Psi_{0}}$. \[fig1:scheme\]](scheme-crop){width="60.00000%"} Note that with this implementation the timesteps used for the $0\rightarrow0$, $0\rightarrow1$ and $1\rightarrow1$ spawning steps, and the 0 and 1 cloning and dying steps, are identical: it is the one that has been optimized for replica 0.
In practice this timestep is chosen to ensure that the spawning probability in replica 0 is
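A deterministic caricature of the replica-0 update (the stochastic walker population replaced by an explicit vector, and a toy $3\times 3$ Hamiltonian of our own choosing) shows why repeated application of $\left(\mathds{1}-\Delta\tau\left(\hat{H_{0}}-S\mathds{1}\right)\right)$ projects onto the ground state.

```python
import numpy as np

# Toy symmetric 3x3 stand-in for H0 (our own choice, for illustration only)
H0 = np.array([[ 0.0, -0.3,  0.0],
               [-0.3,  1.0, -0.2],
               [ 0.0, -0.2,  2.0]])
dtau, S = 0.05, 0.0

psi = np.array([1.0, 0.0, 0.0])          # start from a single determinant
for _ in range(2000):
    psi = psi - dtau * (H0 - S * np.eye(3)) @ psi   # psi <- (1 - dtau*(H0 - S)) psi
    psi /= np.linalg.norm(psi)           # normalization stands in for population control

E0 = psi @ H0 @ psi                      # projected ground-state energy estimate
print(E0)
```

For small enough $\Delta\tau$ every eigenvalue of the update operator is positive and the ground state has the largest one, so the iteration converges to it; the stochastic algorithm realizes the same projection on average through the spawning and cloning/dying steps.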
al gain of asking about each of the unexplained objects, thereby determining an optimal sequence of questions for QA. Note that the QA is implemented based on a pre-defined ontology, instead of using open-ended questions or answers. As in Fig. \[fig:QA\], the user is asked to provide five types of answers (*e.g.* labeling the correct part position when the AOG cannot accurately localize the part), in order to guide the growth of the AOG. Given each specific answer, our method may either refine the AOG branch of an existing part template or construct a new AOG branch for a new part template. Based on human answers, we mine latent patterns for new AOG branches as follows. We require the new latent patterns - to represent a region highly related to the annotated object parts, - to frequently appear in unannotated objects, - to consistently keep stable spatial relationships with other latent patterns. Similar requirements were originally proposed in studies of pursuing AOGs, which mined hierarchical object structures from Gabor wavelets on edges [@MiningAOG] and HOG features [@OurICCV15AoG]. We extend such ideas to feature maps of neural networks. The active QA process mines object-part patterns from the CNN with less human supervision. There are three mechanisms to ensure the stability of weakly-supervised learning. - Instead of learning all representations from scratch, we transfer patterns in a pre-trained CNN to the target object part, which boosts the learning efficiency. Because the CNN has been trained using numerous images, latent patterns in the AOG are supposed to consistently describe the same part region among different object images, instead of over-fitting to part annotations obtained during the QA process. For example, we use the annotation of a specific tiger head to mine latent patterns. The mined patterns are not over-fitted to the head annotation, but represent generic appearances of different tiger heads.
In this way, we can use very few (*e.g.* 1–3) part annotations to extract l
ncertainty in $g^2_k$ is taken into account by evaluating the limiting cases. Together with the error estimate on the physical cutoff scale $k_{\rm phys}$ in Appendix \[app:match\] this leads to an estimate for the systematic error of the results presented below. This error includes that related to our specific choice of the running coupling. For example, a viable alternative choice to Fig. \[fig:alpha\] is provided by the background field coupling derived in [@Braun:2005uj] which is covered by the above error estimate. Integration {#sec:integration} =========== The numerical solution of is done on a suitably chosen grid or parameterisation of $\Delta \hat V$ and its derivatives. As $\hat V$, $\hat V_\bot$ and $\Delta \hat V$ are periodic, one is tempted to solve in a Fourier decomposition, see e.g. [@Braun:2005cn]. However, as can be seen already at the example of the perturbative Weiss potential $V_W = V_{\bot,0}$, , this periodicity is deceiving. The Weiss potential is polynomial of order four in $\tilde \varphi=\varphi\mod 2 \pi$, its periodicity comes from the periodic $\tilde \varphi(\varphi)$, [@Weiss:1980rj]. Consequently the third derivative $\partial_\varphi^3 V_W$ jumps at $\varphi =2 \pi n$ with $n\in \Z$. Moreover, $\partial_\varphi^3 V_W[\varphi\to 0_+]=-\partial_\varphi^3 V_W[\varphi\to 0_-]\neq 0$. A periodic expansion of $V_W$, e.g. in trigonometric functions cannot capture this property at finite order. This does not only destabilise the parameterisation, but also fails to capture important physics: the flow of the position of the minima is proportional to $\partial_\varphi^3 \hat V$. This follows from $\partial_t \hat V[\varphi_{\rm min,k}]=0$. Expanding this identity leads to $$\label{eq:phimin} \partial_t \varphi_{min,k} = -\left.\0{\partial_t \hat V'[\varphi]}{\hat V''[\varphi]} \right|_{\varphi=\varphi_{\min,k}}\,,$$ where $\hat V'=\partial_\varphi \hat V$ and $\hat V''=\partial_\varphi^2 \hat V$. 
The flow $\partial_t \hat V'[\varphi]$ is proportional to $\partial_\varphi^3
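Eq. (\[eq:phimin\]) can be sanity-checked on a toy potential (our own choice, not the Weiss potential): for $\hat V(\varphi, t) = \tfrac{k}{2}(\varphi - \sin t)^2$ the minimum is $\varphi_{\min}(t) = \sin t$, and the right-hand side of the identity indeed returns $\cos t$.

```python
import math

k = 3.0  # arbitrary stiffness of the toy potential V(phi, t) = 0.5*k*(phi - sin t)^2

def dV(phi, t):     return k * (phi - math.sin(t))   # V'
def ddV(phi, t):    return k                         # V''
def dt_dV(phi, t):  return -k * math.cos(t)          # time derivative of V'

t = 0.7
phi_min = math.sin(t)                                 # analytic minimum
rhs = -dt_dV(phi_min, t) / ddV(phi_min, t)            # -d_t V' / V'' at the minimum
print(rhs, math.cos(t))                               # both equal cos(0.7)
```

The identity holds for any smooth flowing potential with a non-degenerate minimum, which is why a parameterisation that misrepresents $\partial_\varphi^3 \hat V$ also misrepresents the motion of the minimum.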
--- abstract: | In this paper, we apply the Hausdorff measure of noncompactness to obtain the necessary and sufficient conditions for certain matrix operators on the Fibonacci difference sequence spaces $\ell_{p}(\widehat{F})$ and $\ell _{\infty}(\widehat{F})$ to be compact, where $1\leq p<\infty$. address: - 'DEPARTMENT OF MATHEMATICS, DÜZCE UNIVERSITY, 81620, DÜZCE, TURKEY' - 'DEPARTMENT OF MATHEMATICS, SAKARYA UNIVERSITY, 54187, SAKARYA, TURKEY' - 'DEPARTMENT OF MATHEMATICS, ALIGARH MUSLIM UNIVERSITY, 202002, ALIGARH, INDIA' author: - Emrah Evren KARA - Metİn Başarir - 'M. Mursaleen' title: Compactness of matrix operators on some sequence spaces derived by Fibonacci numbers --- **Introduction and preliminaries** ================================== Let $\mathbb{N} =\{0,1,2,...\}$ and $\mathbb{R} $ be the set of all real numbers. We shall write $\lim_{k}$, $\sup_{k}$, $\inf_{k}$ and $\sum_{k}$ instead of $\lim_{k\rightarrow \infty}$, $\sup_{k\in\mathbb{N} }$, $\inf_{k\in\mathbb{N} }$ and $\sum_{k=0}^{\infty}$, respectively. Let $\omega$ be the vector space of all real sequences $x=(x_{k})_{k\in\mathbb{N} }$. By the term $\mathit{sequence}$ $\mathit{space}$, we shall mean any linear subspace of $\omega$. Let $\varphi,$ $\
again assume that we can expand the vector potential in the scalar and vector bases. Define a one-form $n_a=\text{d}u$, this expansion is given by $$\label{eq:vector-potential} A_{a} = \sum_{mhk}\left(C_u(u) n_{a} F^{(m\,h\,k)} + \sum_{B} C_B(u){V_{a}^{B}}^{(m\,h\,k)}\right),$$ where $B\in \{T,\Phi,R\}$, $C_B(u)$ and $C_u(u)$ are unknown functions of $u$. Notice that $B$ is *not* a tensor index. It is the label of a specific choice of vector bases and their corresponding unknown $C$-functions. The expression of $F^{(m\,h\,k)}$ and the projection of ${V_{a}^{B}}^{(m\,h\,k)}$ onto $\Sigma_u$, i.e. ${V_{i}^{B}}^{(m\,h\,k)}$ are both given in App. \[app:Poincare-basis\]. Then at the highest weight $k=0$, the left hand side of Maxwell’s equation can be rewritten as $$\begin{aligned} \nabla^{a}\mathcal{F}_{ab}|_{k=0} &= \mathcal{D}^{(m,h)}_u[\mathbf{C}(u)] n_{b} F^{(m\,h\,0)} \\ \nonumber &+ \sum_{B} \mathcal{D}^{(m,h)}_{B}[\mathbf{C} (u)]V_b^{B(m\,h\,0)} \,,\end{aligned}$$ where we have collected the four $C$-functions into the vector $\mathbf{C}(u)$, and defined the general differentiation as $\mathcal{D}^{(m,h)}[\mathbf{C} (u)]$, whose expressions are given in App. \[app:G-function\]. As long as the source field can also be decomposed using the scalar and vector bases, the inhomogeneous Maxwell equations in NHEK will reduce to four ordinary differential equations with four unknown $C$-functions. Although we only show this is true for the highest-weight case, this conclusion holds for any $k$. This is due to the commutation of the lowering operator and the covariant differentiation. For explicit calculations of Maxwell’s system using the highest-weight vector basis we refer our readers to [@Lupsasca:2014pfa; @Compere:2015pja]. 
Linearized Einstein system {#sec:sep-symmetric-tensor} -------------------------- In this subsection we show that we can separate variables in the left-hand side of the linearized Einstein equation, using our scalar, vector, and tensor bases for NHEK. Consider the metr
ans of the mapping $\partial N^{n-2k}_{int} = N^{n-2k}_Q \to Q$, where $N^{n-2k}_{int} \subset N^{n-2k}$, $N^{n-2k}_{int}=g^{-1}(U_P)$, $U_P \subset \R^n$. ### Proposition 3. Geometrical Control Principle for $\I_b$–controlled immersions {#proposition-3.-geometrical-control-principle-for-i_bcontrolled-immersions .unnumbered} Let $j_P : P \subset \R^n$ be an arbitrary embedding; such an embedding is unique up to isotopy for dimensional reasons, because $2\dim(P)+1=4k-1<n$. Let $g_1: N^{n-2k} \to \R^n$ be an arbitrary mapping, such that the restriction $g_1 \vert_{N_{int}}: (N^{n-2k}_{int}, N^{n-2k-1}_Q) \looparrowright (U_P, \partial U_P)$ is an immersion (the restriction $g \vert_{N^{n-2k-1}_Q}$ is an embedding) that corresponds to the immersion $g \vert_{N^{n-2k}_{int}}: (N^{n-2k}_{int},N^{n-2k-1}_Q) \looparrowright (U_P, \partial U_P)$ by means of the standard diffeomorphism of the regular neighborhoods $U_{i_P}=U_{j_P}$ of the subpolyhedra $i(P)$ and $j(P)$. (For dimensional reasons there is a standard diffeomorphism of $U_{i_P}$ and $U_{j_P}$, unique up to isotopy.) Then for an arbitrary $\varepsilon >0$ there exists an immersion $g_{\varepsilon}: N^{n-2k} \looparrowright \R^n$ such that $dist_{C^0}(g_1,g_{\varepsilon})<\varepsilon$, such that $g_{\varepsilon}$ is regularly homotopic to the immersion $g$, and such that the restrictions $g_{\varepsilon} \vert _{N^{n-2k}_{int}}$ and $g_1 \vert_{N^{n-2k}_{int}}$ coincide. We start the proof of Theorem 2 with the following construction. Let us consider the manifold $Z=S^{\frac{n}{2}+64}/i \times \RP^{\frac{n}{2}+64}$. This manifold is the direct product of the standard lens space (mod 4) and the projective space. The cover $p_Z: \hat Z \to Z$ over this manifold with the covering space $\hat Z = \RP^{\frac{n}{2}+64} \times \RP^{\frac{n}{2}+64}$ is well-defined. 
Let us consider in the manifold $Z$ a family of submanifolds $X_i$, $i=0, \dots, \frac{n+2}{64}$ of codimension $\frac{n+2}{2}$, defined by the formulas $X_0 = S^{\frac{n}{2}+64}/i \times \RP^{63}$, $X_1= S^{\fr
^3]{}. We have the following corollary for the original problem. \[cosystco1\] Suppose that the assumptions of Theorem \[cosystth2\] are valid. Let ${ f}\in L^2(G\times S\times I)^3$ and ${ g}\in T^2(\Gamma_-)^3$. Then the following assertions hold. \(i) The variational equation $$\label{vareqco} B_0(\tilde{\psi},v)={\bf F}_0(v)\qquad \forall\, v$$ has a solution $\tilde{\psi}=(\tilde{\psi}_1, \tilde{\psi}_2, \tilde{\psi}_3)\in {\mathcal{H}}$. Writing $\tilde{\psi}_1=(\psi_1,q_1)$, $\tilde{\psi}_j=(\psi_j,q_j,p_{0j},p_{mj}),\ j=2,3$, and $\psi=(\psi_1,\psi_2,\psi_3)\in L^2(G\times S\times I)^3$, we have $\psi\in{\mathcal{H}}_{{\bf P}}(G\times S\times I^\circ)$ (see ) and it is a weak (distributional) solution of the system of equations (\[csda1a\]), (\[csda1b\]), and $\psi_1\in W^2(G\times S\times I)$. \(ii) Suppose that additionally the assumption ${\bf TC}$ holds (p. ). Then a solution $\psi$ of the equations (\[csda1a\]), (\[csda1b\]) obtained in part (i) is a solution of the problem (\[csda1a\])-(\[csda3\]). \(iii) Under the assumptions imposed in part (ii), any solution $\psi$ of the problem (\[csda1a\])-(\[csda3\]) that further satisfies $$\label{asscl-aa-co} \psi_{|\Gamma_+}\in T^2(\Gamma_+)^3,\qquad \psi(\cdot,\cdot,0)\in L^2(G\times S)^3,$$ is unique and obeys the estimate $$\label{csda40aaa-co} \|\tilde{\psi}\|_{{\mathcal{H}}} \leq c\left(\|{f}\|_{L^2(G\times S\times I)^3}+\|{g}\|_{T^2(\Gamma_-)^3}\right).$$ (Recall that $C$ is defined in , $c'$ in and that $E_m$ is the cutoff energy.) A solution $\psi$ of the problem (\[csda1a\])-(\[csda3\]) is obtained from a solution $\phi$ of the problem - by taking $\psi=e^{-CE}\phi$. Note that if $\phi_1\in W^2(G\times S\times I)$, then $\psi_1\in W^2(G\times S\times I)$ as well. The rest of the proof proceeds in exactly the same way as that for Corollary \[csdaco1\] (of course, one uses Theorem \[cosystth2\] instead of Theorem \[csdath3\]). Existence of Solutions Based on $m$-dissipativity {#mdiss-op} ------------------------------------------------- The method of Section \[m-d\] can be extended to the coupled system in a straightforward manner. 
Let us state what the problem to be solved for $\phi$ i
that specify the size of each unit’s response are independent of the parameters defining the probability that a unit will respond at all. The scalability of our methodology relies on the natural conjugacy structure that we create for the former and an enforced, approximate conjugate structure for the latter. A simulation study demonstrates the accuracy of our method, and inferences are consistent across two different datasets arising from the same rat tibial muscle. Keywords {#keywords .unnumbered} ======== Motor Unit Number Estimation; Sequential Monte Carlo; Model Selection Introduction {#sec:Intro} ============ Motor unit number estimation (MUNE) is a continuing challenge for clinical neurologists. An ability to determine the number of motor units (MUs) that operate a particular muscle provides important insights into the progression of various neuromuscular ailments such as amyotrophic lateral sclerosis [@She06; @Bro07], and aids the assessment of the efficacy of potential therapy treatments [@Cas10]. A MU is the fundamental component of the neuromuscular system and consists of a single motor neuron and the muscle fibres whose contraction it governs. Restriction of a MU’s operation may be a result of impaired communication between the motor neuron and muscle fibres, abnormality in their function, or atrophy of either cell type. A direct investigation into the number of MUs via a biopsy, for example, is not helpful since this only determines the presence of each MU, not its functionality. Electromyography (EMG) delivers a set of electrical stimuli of varying intensity to a group of motor neurons; each stimulus artificially induces a twitch in the targeted muscle, providing an *in situ* measurement of the functioning of the MUs. The effect on the muscle may be measured by recording either the minute variation in muscle membrane potential or the physical force the muscle exerts [@Maj05]. 
The generic methods developed in this article are applicable to either type of measurement. Since our data consist
e{s}_i=\mathrm{id}$ mod $\pi \otimes 1$. Then $$\begin{gathered} \label{ea26} \sigma({}^t\tilde{m}_{i,i})h_i\tilde{m}_{i,i}=(-1)^{i/2}\begin{pmatrix}\sigma({}^t\tilde{s}_i)&\sigma(\pi\cdot {}^t \tilde{y}_i)&\sigma( {}^t \tilde{v}_i)\\ \sigma({}^t \tilde{r}_i)&1+\sigma(\pi \tilde{x}_i)&\sigma(\tilde{u}_i)\\\sigma(\pi\cdot {}^t \tilde{t}_i)&\sigma(\pi\cdot {}^t \tilde{z}_i)&1+\sigma(\pi \tilde{w}_i) \end{pmatrix}\\ \begin{pmatrix} a_i&0&0\\ 0 &1&1 \\ 0 &1 &2c_i \end{pmatrix} \begin{pmatrix} \tilde{s}_i&\tilde{r}_i&\pi \tilde{t}_i\\ \pi \tilde{y}_i&1+\pi \tilde{x}_i&\pi \tilde{z}_i\\ \tilde{v}_i&\tilde{u}_i&1+\pi \tilde{w}_i \end{pmatrix}.\end{gathered}$$ The $(1,2)$-block of $\sigma({}^t\tilde{m}_{i,i})h_i\tilde{m}_{i,i}$ is $(-1)^{i/2}(a_i\tilde{r}_i+\sigma({}^t\tilde{v}_i))+\pi(\ast)$, the $(1, 3)$-block is $(-1)^{i/2}\pi(a_i\tilde{t}_i-\sigma({}^t\tilde{y}_i)+\sigma({}^t\tilde{v}_i)\tilde{z}_i) +\pi^2(\ast\ast)$, the $(2, 3)$-block is $(-1)^{i/2}(1+\pi(\sigma({}^t\tilde{r}_i)a_i\tilde{t}_i-\sigma(\tilde{x}_i)+\tilde{z}_i+\tilde{w}_i+\sigma(\tilde{u}_i)\tilde{z}_i) +\pi^2(\ast\ast\ast))$, and the $(2, 2)$-block is $(-1)^{i/2}(1+\sigma({}^t\bar{r}_i)a_i\bar{r}_i+2\bar{u}_i+2\bar{\gamma}_i\bar{u}_i^2-2\bar{x}_i^2$ $+\pi^4(\ast\ast\ast\ast))$ for certain polynomials $(\ast), (\ast\ast), (\ast\ast\ast), (\ast\ast\ast\ast)$. Therefore, by observing the $(1, 2), (1,3), (2,3), (2,2)$-blocks of Equation (\[ea1\]) again, we have $$\left \{ \begin{array}{l} \mathcal{X}_{i,1,2}(m)=\bar{a}_ir_i+{}^tv_i;\\ \mathcal{X}_{i,1,3}(m)=\bar{a}_i t_i+{}^ty_i+{}^tv_iz_i+\mathcal{P}^i_{1, 3};\\ \mathcal{X}_{i,2,3}(m)={}^tr_i\bar{a}_it_i+x_i+z_i+ w_i+u_iz_i+\mathcal{P}^i_{2, 3};\\ \mathcal{X}_{i,2,2}(m)=u_i+\bar{\gamma}_iu_i^2+x_i^2+1/2\cdot{}^tr_i\bar{a}_ir_i+\left(\delta_{i-2}(m_{i-2, i}^{\natural})^2+\delta_{i+2}(m_{i+2, i}^{\natural})^2\right). \end{array} \right.$$ Here, $\mathcal{P}^i_{1, 3}, \mathcal{P}^i_{2, 3}$ are suitable polynomials with variables in the en
ave approximated $q/m\gtrsim{\rm~few}$. For comparison, in the small $Q$ limit the decay rate into a larger, expanding bubble should be well-approximated by the decay of the ordinary KK vacuum into neutral bubbles. (For larger $Q$, the rate will be faster.) The rate for this process is of order $$\begin{aligned} \Gamma_{\rm Witten}\sim e^{-M_6^4 L^4}\sim e^{-M_5^3 L^3}. \end{aligned}$$ For $q/m\gtrsim 1$ and recalling the hierarchies in Eq. (\[eq:hierarchy\]), the discharge rate (\[eq:wkbrate\]) satisfies $$\begin{aligned} \Gamma> e^{-mL}\gg \Gamma_{\rm Witten}\;.\end{aligned}$$ We cannot take $Q$ smaller than $L/m$ if we want to keep curvatures everywhere below the string scale. In this limit, the discharge rate becomes $$\begin{aligned} \Gamma\sim \exp{\left(-\frac{\pi m^2}{q^2}\right)}\;.\end{aligned}$$ In other words, when the decay rate into expanding bubbles is minimized, the discharge process is unsuppressed, if the WGC is satisfied. We can also assess the WKB exponent numerically for other values of the parameters. In Fig. \[fig:WKBexp\] we show the exponent for $L/m<Q<Q_{max}$ and $mL=10^3$ as a function of $q/m$. The exponent is typically not large if $q/m\gtrsim{\rm few}$. Determining the endpoint of the discharge process requires consideration of backreaction effects. Qualitatively, we expect the bubble to collapse into a black string. It is remarkable that in the limit where one instability is made small, a new one appears with large rate. ![ The WKB exponent in Eq. (\[eq:intwkbexp\]) for $mL=10^3$ as a function of $q/m$. From bottom to top the curves correspond to $Q=(L/m,\dots,Q_{max})$. Higher curves (larger $Q$) have increasingly suppressed discharge rates, but are increasingly unstable against tunneling to expanding bubbles. (The top curve is completely unstable.) We see that when the WGC is satisfied by a modest amount the exponent is generally small and the discharge rate is fast. 
[]{data-label="fig:WKBexp"}](WKBexp.pdf){width="0.5\linewidth"} Charged Dilatonic Black Holes ==========
F = & -\frac{\Gamma w f_0}{a(v/D)} \left[\frac{a}{v} \left(1-\exp \left(-\frac{2vr}{D} \right)\right)\exp\left(-\frac{v}{D}\ell\right) \right. \nonumber \\ & \left.-\frac{v}{D} \left(1-\exp\left(-\frac{2ar}{v}\right)\right)\right] \nonumber\\ \simeq & -\frac{\Gamma wf_0D}{av}\left(-\frac{v}{D} \right)\left(\frac{2ar}{v} \right) \nonumber\\ = & \frac{2\Gamma wf_0r}{v}. \label{eq:F} \end{aligned}$$ As $F = K\mu v$ from Eq. (\[eq:motion1\]), $$\begin{aligned} K\mu v = \frac{2\Gamma w f_0 r}{v}. \label{eq:F1} \end{aligned}$$ From Eq. (\[eq:F1\]), we obtain $$\begin{aligned} v=\sqrt{\frac{2\Gamma wf_0r}{K\mu}}. \label{eq:v} \end{aligned}$$ Equation (\[eq:v\]) shows a power law $v\propto\mu^{-1/2}$, provided that the other parameters, such as $\Gamma$, $w$ and $f_0$, are independent of $\mu$. The power law with index $-1/2$ is an interesting result, since the Stokes relation naturally suggests a different scaling, $v \propto \mu^{-1}$ [@fluid]. Numerical results ================= In the theoretical analysis, we assumed a solution depending only on $\xi = x - vt$. However, the underlying mathematical model has other symmetries, and whether the considered solution depending on $\xi$ is an attractor should be checked. Therefore, we performed numerical calculations based on the equations in Sec. \[sec:model\]. For the numerical calculation, we considered a one-dimensional array with a spatial step of $\Delta x = 0.1$. The spatial size of the considered system was 1000 with periodic boundary conditions, and we adopted the Euler method with time step $\Delta t = 10^{-3}.$ As for the spatial derivative, we used an explicit method. The parameters are set to be $m = 0.1$, $w = 1$, $\Gamma = 1$, $r=1$, $\ell = 1$, $D = 1$, $a = 1$, and $f_0 = 1$. In the discretization process, first-order interpolation was adopted for Eqs.  and . The parameter $h$ corresponding to the viscosity $\mu$ was changed, and we investigated
\Lambda_P$ for some polynomial $P$. Therefore, $$\D(\Gamma)^{D_n}_{\Delta} \cong \D((\Gamma_P)_{\tilde{\Delta}})=\D(\Gamma_{P\Delta}) = \D(\Gamma)_{P\Delta}.$$ Forming the skew fields of fractions we conclude ${{\rm{Frac}}}\, \D(X)^{D_n} \cong {{\rm{Frac}}}\, \D(X)\cong F_n$, which completes the proof of Theorem 3. Galois algebras =============== Let $\Gamma$ be an integral domain, $K$ the field of fractions of $\Gamma$, and $K\subset L$ a finite Galois extension with Galois group $G$. Let $\mathcal M\subset {{\rm{Aut}_{\c}}}L$ be a monoid on which $G$ acts by conjugation. Recall that an associative $\c$-algebra $U$ containing $\Gamma$ is called a *Galois $\Gamma$-algebra* with respect to $\Gamma$ if it is a finitely generated $\Gamma$-subalgebra of $(L*\mathcal M)^{G}$ and $KU=(L*\mathcal M)^{G}$, $UK=(L*\mathcal M)^{G}$ [@FO]. If $U$ is such an algebra, then $S=\Gamma\setminus \{0\}$ satisfies both the left and right Ore condition, and the canonical embedding $U\hookrightarrow (L*\mathcal M)^{G}$ induces the isomorphisms of rings of fractions $[S^{-1}]U\simeq (L*\mathcal M)^{G}$, $ U[S^{-1}]\simeq (L*\mathcal M)^{G}$. The following is standard. \[corol-skew-field-invariants\] If $L*\mathcal M$ is an Ore domain, then $(L*\mathcal M)^G$ is an Ore domain. If $\mathcal L$ is the skew field of fractions of $L*\mathcal M$, then the skew field of fractions of $(L*\mathcal M)^G$ coincides with $\mathcal L^G$, where the action of $G$ on $\mathcal L$ is induced by the action of $G$ on $L*\mathcal M$. We immediately have \[corol-skew-field-Galois\] Let $U$ be a Galois $\Gamma$-algebra, and suppose the skew group algebra $L*\mathcal M$ is a left and right Ore domain with skew field of fractions $\mathcal L$. Then $U$ is a left and right Ore domain and its skew field of fractions $\mathcal U$ satisfies $\mathcal U=\mathcal L^G$. In particular, all Galois subalgebras with respect to $\Gamma$ in $(L*\mathcal M)^G$ have the same skew field of fractions. 
Due to Proposition \[corol-skew-field-invariants\], $U[S^{-1}]\sim
Codeine Thebaine Papaverine Noscapine ----------------------- ---------------- -------------------- -------------------- --------------------- -------------------- --------------------- ------------ T~0~ WT 1 10.9 1.3 nd \* 2.0 9.2 T~0~ 1 1.3 4.2 16.3 2.3 10.2 T~1~ WT 6 11.2 ± 4.0 2.6 ± 2.1 0.3 ± 0.2 2.4 ± 0.7 11.6 ± 4.3 T~1~ 60 6.3 ± 4.6 ^\#^ 3.8 ± 1.5 11.1 ± 6.1 ^\#\#\#^ 1.6 ± 0.5 7.9 ± 2.1 ^\#\#\#^ Selected lines (T~1~) \#1-27(HT) \- 4.3 5.1 23.1 2.3 7.2 \#2-17(HT) \- 5.5 3.7 24.4 1.9 6.8 \#2-1(LT) \- 23.0 1.3 0.3 1.8 8.4 \#2-6(LT) \- 13.6 2.1 1.0 1.5 6.9 T~2~ WT 11 18.4 ± 3.3 1.5 ± 0.9 0.4 ± 0.2 2.7 ± 1.0 18.4 ± 4.3 \#1-27(HT) 15 7.0 ± 4.1 ^\#\#\#^ 5.8 ± 1.6 ^\#\#\#^ 19.1 ± 7.3 ^\#\#\#^ 2.2 ± 0.4 9.6 ± 2.4 ^\#\#\#^ \#2-17(HT) 6 9.8 ± 8.1 ^\#^ 6.0 ± 0.5 ^\#\#\#^ 14.5 ± 6.1 ^\#\#\#^ 3.0 ± 0.4 9.2 ± 1.7 ^\#\#\#^ \#2-1(LT) 12 7.6 ± 3.3 ^\#\#\#^ 5.8 ± 1.1 ^\#\#\#^ 15.9 �
perator $\mathcal{L}_{H_-}$, $k$ times, on Eq. . While applying the lowering operator, in general different components of $G^{(1)}_{ab}[h^{(m\,h\,k)}]$ will get mixed up, but the separation of variables still holds. Therefore we conclude that with these scalar, vector, and tensor bases, we can separate variables in the linearized Einstein system in NHEK. Given some source terms, these bases can be directly applied to solving for the corresponding metric perturbations. For instance, we have obtained the highest-weight metric deformations in NHEK sourced by the decoupling limits of dynamical Chern-Simons and Einstein-dilaton-Gauss-Bonnet gravity [@ChenSteinForthcoming]. Conclusions and future work {#sec:concl-future-work} =========================== In this paper, we proposed an isometry-inspired method to study metric perturbations in the near-horizon extremal Kerr spacetime. That is, we separated variables in the metric perturbation equations in the NHEK spacetime, by expanding the perturbation in terms of basis functions adapted to the isometry group. With the separable linearized Einstein equation, one obtains the perturbed metric directly, without the complication of metric reconstruction. Further, our formalism does not depend on gauge choice. Within our formalism, partial differential equations built from ${\ensuremath{SL(2,\mathbb{R})\times U(1)}}$-equivariant operators can be converted into ordinary differential equations in the polar angle, which are simpler to solve. The price is that one must solve coupled, rather than decoupled, equations in our metric formalism. 
We accomplished three things: (i) we used the highest-weight method to obtain the scalar, vector, and symmetric tensor bases for the isometry group of NHEK; (ii) in global coordinates, we showed that these bases form orthogonal basis sets when the labels of irreps satisfy $h<-1$; and (iii) with these basis functions, we separated variables in many physical equations like the scalar wave equation, Maxwell’s equations, and the lineariz
=F$ (use [Lemma \[easy lemma\]]{}). Suppose $\b, {\gamma}<\k$. Let $x\in R_{\b, {\gamma}} \iff \phi(x)(\b, {\gamma}) \wedge \forall {\gamma}^\prime <\k (\phi(x)(\b, {\gamma}^\prime) \rightarrow {\gamma}^\prime={\gamma})$. Then $R_{\b, {\gamma}}$ is $\utilde{\Delta}^2_1$. We have that the following are equivalent: 1. $x\in R_{\b, {\gamma}}$. 2. There is $\a>f(\b, {\gamma})$ such that $J_{\a}(\mathbb{R}){\vDash}``x\in dom(\phi_\a)$ and ${\gamma}$ is the unique ordinal such that $(\b, {\gamma})\in \phi_\a(x)"$, 3. For all $\a>f(\b, {\gamma})$, $J_{\a}(\mathbb{R}){\vDash}``x\in dom(\phi_\a)$ and ${\gamma}$ is the unique ordinal such that $(\b, {\gamma})\in \phi_\a(x)"$ Clearly 1 implies 2 and 3. Also, that 3 implies 1 is rather straightforward. We show that 2 implies 1. Fix then $\a>f(\b, {\gamma})$ such that $J_{\a}(\mathbb{R}){\vDash}``x\in dom(\phi_\a)$ and ${\gamma}$ is the unique ordinal such that $(\b, {\gamma})\in \phi_\a(x)"$. Let $(y, \P)$ be the pair coded by $x$ and $a\in \P$ the transitive collapse of $x(0)$. Working in $J_\a(\mathbb{R})$, let $\T$ be a correctly guided short tree on $\P$ with last model $\S$ such that $\pi_{\P, \S}$ exists and an $\S$-cardinal $\eta$ such that 1. $(\eta^+)^\S<\l^\S$, 2. if $\Q=\S|(\eta^+)^\S$ and $a^\Q=\pi_{\P, \S}(a){\restriction}\eta$ then $(\b, {\gamma})\in \pi_{\Q, \infty}(a^\Q)\cap rng(\pi_{\Q, \infty})$. Suppose now there is some $\xi$ such that for some ${\gamma}^\prime$, $(\b, {\gamma}^\prime)\in \phi_\xi(x)$. Working in $J_\xi(\mathbb{R})$, let $\T^*$ be a correctly guided short tree on $\P$ with last model $\S^*$ such that $\pi_{\P, \S}$ exists and an $\S^*$-cardinal $\nu$ such that 1. $(\nu^+)^\S<\l^\S$, 2. if $\R=\S|(\nu^+)^\S$ and $a^\R=\pi_{\P, \S^*}(a){\restriction}\nu$ then $(\b, {\gamma}^\prime)\in \pi_{\R, \infty}(a^\R)\cap rng(\pi_{\R, \infty})$. Without loss of generality assume that $\xi>\a$. We then have that $J_\xi(\mathbb{R}){\vDash}``\S$ and $\S^*$ are suitable and short tree iterable". 
Work now in $J_\xi(\mathbb{R})$. We ca
s. a) The radial position distribution, $\rho(R)$, as a function of the distance $R$ and the radial return velocity $v$ as given by [Eq. (\[radial-constant-velocity\])]{}. Experimental results of a realization with $v=0.8\mu m/s$ are superimposed on the theoretical prediction (black spheres). b) The radial position distribution as a function of $R$ and the return time $\tau_0$ as given by [Eq. (\[radial-constant-time\])]{}. Experimental results of a realization with $\tau_0=3.79 s$ are superimposed on the theoretical prediction (black spheres).[]{data-label="non-inst"}](3Dradialjoint1.pdf){width="8.5cm"} Next, we consider a case where upon resetting HOTs are used to return the particle to the origin in a constant time $\tau_0$ — irrespective of the particle’s position at the resetting epoch. This case is appealing due to its simplicity and ease of experimental implementation. Here too, we find that the radial steady-state position distribution can be put in a closed form which reads [@SM] $$\rho(R)=p_D^{c.t.}\,\rho_{\text{diff}}(R)+\left(1-p_D^{c.t.}\right)\rho_{\text{ret}}(R), \label{radial-constant-time}$$ where $ p_D^{c.t.}=(1+r \tau_0)^{-1}$ is the steady-state probability to find the particle in the diffusive phase, and with $\rho_{\text{diff}}(R)=\alpha_0^2 R K_0(\alpha_0R)$ and $\rho_{\text{ret}}(R)=\frac{\pi \alpha_0^2}{2}\left[ \frac{1}{\alpha _0}-R \left[K_0\left( \alpha _0 R \right) \pmb{L}_{-1}\left( \alpha _0 R \right) +K_1\left( \alpha _0 R \right) \pmb{L}_0\left( \alpha _0 R \right)\right] \right]$ standing for the conditional probability densities of the particle’s radial position in the diffusive and return phases, respectively. Here, $\pmb{L}_n$ is the modified Struve function of order $n$ [@Stegun]. The result in [Eq. (\[radial-constant-time\])]{} is in very good agreement with experimental data, as shown in [Fig. \[non-inst\]]{}b and Fig. S5. 
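The closed form in Eq. (\[radial-constant-time\]) is straightforward to evaluate numerically. Below is a minimal Python sketch, not taken from the paper: the parameter values are illustrative, and taking $\alpha_0=\sqrt{r/D}$ is an assumption of the sketch, since $\alpha_0$ is defined earlier in the text. The negative-order modified Struve function is obtained from the exact identity $\pmb{L}_{-1}(x)=\pmb{L}_{1}(x)+2/\pi$, so only non-negative orders of `scipy.special.modstruve` are needed; the final check confirms that both conditional densities integrate to one.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.special import k0, k1, modstruve

# Illustrative parameter values; alpha0 = sqrt(r/D) is an assumption here.
r, D, tau0 = 1.0, 1.0, 1.0
alpha0 = np.sqrt(r / D)
p_diff = 1.0 / (1.0 + r * tau0)   # p_D^{c.t.}: probability of the diffusive phase

def L(v, x):
    """Modified Struve function L_v; uses L_{-1}(x) = L_1(x) + 2/pi (exact)."""
    return modstruve(1, x) + 2.0 / np.pi if v == -1 else modstruve(v, x)

def rho_diff(R):
    # Conditional radial density in the diffusive phase.
    return alpha0**2 * R * k0(alpha0 * R)

def rho_ret(R):
    # Conditional radial density in the return phase.
    x = alpha0 * R
    bracket = 1.0 / alpha0 - R * (k0(x) * L(-1, x) + k1(x) * L(0, x))
    return 0.5 * np.pi * alpha0**2 * bracket

def rho(R):
    return p_diff * rho_diff(R) + (1.0 - p_diff) * rho_ret(R)

R = np.linspace(1e-8, 40.0, 400_000)
print(trapezoid(rho_diff(R), R), trapezoid(rho_ret(R), R))  # both ≈ 1
```

Because both conditional densities are normalized, the mixture $\rho(R)$ is automatically a proper radial density for any $\tau_0$.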
Comparing the steady-state distributions for the constant time and constant velocity cases, we find that they are almost identical for short return times and high return speeds. In
PE. These poles were already computed to all orders in $f^2$ in [@Ashok:2009xx] using different methods. [^5]: Using similar methods it can be shown that the subleading terms in equation do not modify this conclusion. [^6]: These conventions differ only slightly from those in [@Ashok:2009xx]. [^7]: We would like to thank Matthias Gaberdiel for raising the issue. --- abstract: 'The concepts of Feynman integrals in white noise analysis are used to realize the Feynman integrand for a charged particle in a constant magnetic field as a Hida distribution. For this purpose we identify the velocity-dependent potential as a so-called generalized Gauss kernel.' address: - | Functional Analysis and Stochastic Analysis Group,\ Department of Mathematics,\ University of Kaiserslautern, 67653 Kaiserslautern, Germany - | Functional Analysis and Stochastic Analysis Group,\ Department of Mathematics,\ University of Kaiserslautern, 67653 Kaiserslautern, Germany - | Functional Analysis and Stochastic Analysis Group,\ Department of Mathematics,\ University of Kaiserslautern, 67653 Kaiserslautern, Germany author: - Wolfgang Bock - Martin Grothaus - Sebastian Jung title: The Feynman Integrand for the Charged Particle in a Constant Magnetic Field as White Noise Distribution --- Introduction ============ As an alternative approach to quantum mechanics, Feynman introduced the concept of path integrals ([@F48; @Fe51; @FeHi65]), which was developed into an extremely useful tool in many branches of theoretical physics. In this article we use concepts for realizing Feynman integrals in the framework of white noise analysis. 
The Feynman integral for a particle moving from $0$ at time $0$ to $\mathbf{y} \in {\mathbb{R}}^d$ at time $t$ under the potential $V$ is given by $$\label{eqnfey} {\rm N} \int_{\mathbf{x}(0)=0, \mathbf{x}(t)=\mathbf{y}} \int \exp\left(\frac{i}{\hbar} \int_0^t \frac{1}{2}m\dot{\mathbf{x}}^2 -V(\mathbf{x},\dot{\mathbf{x}}) \, d\tau \right) \prod_{0<\tau<t} d\mathbf{x}(\tau),\quad \h
ve performance and training time easy to navigate. Additionally, multiple subset evaluation has no effect on prediction times. Higher values of $\lambda$ give diminishing returns on predictive performance, so a value that is suitable for the computational budget should be chosen. When training an ensemble of nested dichotomies, it may be desirable to adopt a *class threshold*, where single subset selection is used if fewer than a certain number of classes is present at an internal node. This reduces the probability that the same subtrees will appear in many ensemble members, which would otherwise reduce ensemble diversity. In the lower levels of the tree, the number of possible binary problems is relatively low (Fig. \[fig:growthrate\]). \[sec:growth\_functions\_effect\]Effect on Growth Functions ----------------------------------------------------------- The performance of an ensemble of nested dichotomies relies on the size of the sample space of nested dichotomies for an $n$-class problem being relatively large. Multiple subset evaluation prevents the $\lambda-1$ class splits that correspond to the worst-performing binary models at each internal node $i$ from being used in the tree. The effect of multiple subset evaluation on the growth function is non-deterministic for random selection, as the sizes of $\mathcal{C}_{i1}$ and $\mathcal{C}_{i2}$ affect the values of the growth function for the subtrees that are children of $i$. The upper bound occurs when all worst-performing splits isolate a single class, and the lower bound is attained when all worst-performing splits are class-balanced. Class-balanced selection, on the other hand, is affected deterministically, as the sizes of $\mathcal{C}_{i1}$ and $\mathcal{C}_{i2}$ are the same for the same number of classes. Growth functions for values of $\lambda \in \{1, 3, 5, 7\}$, for random, class-balanced and random-pair selection methods, are plotted in Figure \[fig:growthrate\]. 
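To make the size of this sample space concrete, the following sketch (Python; a standard counting recurrence, not code from the paper) counts the distinct nested dichotomies for an $n$-class problem. Each internal node splits its class set into an unordered pair of non-empty subsets, which yields the double factorial $(2n-3)!!$.

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def num_nested_dichotomies(n: int) -> int:
    """Number of distinct (unconstrained) nested dichotomies on n classes."""
    if n == 1:
        return 1
    # Each split {C1, C2} is counted once for k=|C1| and once for k=|C2|,
    # so divide the sum by two; the quotient is always an integer.
    return sum(comb(n, k)
               * num_nested_dichotomies(k)
               * num_nested_dichotomies(n - k)
               for k in range(1, n)) // 2

print([num_nested_dichotomies(n) for n in range(1, 7)])
# → [1, 1, 3, 15, 105, 945], i.e. the double factorial (2n-3)!!
```

The rapid growth of this count is what makes the ensemble diverse, and why removing a handful of candidate splits per node still leaves a large effective sample space.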
The growth curves for random and class balanced selection were generated using br
_{i_2}\big( {\rho ^{\chi '}}(w({\alpha }))K_{{\sigma }_{i_2}{\sigma }_{i_1}^{\chi }({\alpha })} L_{{\sigma }_{i_2}{\sigma }_{i_1}^{\chi }({\alpha })}^{-1} \times \\ &\qquad \qquad {T}_{i_2}^-{T}_{i_1}^-(F_{\beta _m}^{{b^{\chi}} (\beta _m)-1} \cdots F_{\beta _3}^{{b^{\chi}} (\beta _3)-1}) v_{{t}_{i_2}{t}_{i_1}^\chi (\Lambda )} \big) = \dots\\ &={\hat{T}}_{i_1}{\hat{T}}_{i_2}\cdots {\hat{T}}_{i_m}\big( {\rho ^{\chi '}}(w({\alpha }))K_{w({\alpha })}L_{w({\alpha })}^{-1} v_{\Lambda '} \big)\\ &={\hat{T}}_{i_1}{\hat{T}}_{i_2}\cdots {\hat{T}}_{i_m}\big( {\rho ^{\chi '}}(w({\alpha })) \, \Lambda '(K_{w({\alpha })}L_{w({\alpha })}^{-1}) v_{\Lambda '} \big). \end{aligned}$$ Similarly, one obtains that $$\begin{aligned} {\rho ^{\chi }}({\alpha })\,\Lambda (K_{\alpha }L_{\alpha }^{-1}) F_{\beta _m}^{{b^{\chi}} (\beta _m)-1}\cdots F_{\beta _2}^{{b^{\chi}} (\beta _2)-1} F_{\beta _1}^{{b^{\chi}} (\beta _1)-1}v_\Lambda \qquad &\\ = {\hat{T}}_{i_1}{\hat{T}}_{i_2}\cdots {\hat{T}}_{i_m}\big( {\rho ^{\chi }}({\alpha }) \, \Lambda (K_{{\alpha }}L_{{\alpha }}^{-1}) v_{\Lambda '} \big).& \end{aligned}$$ Now the claim of the lemma follows from Lemma \[le:VTinv\]. \[le:TEFF\] Let $\nu \in \{0,1,\dots ,n\}$. For all $j\in I$, $$\begin{aligned} T_{i_1}\cdots T_{i_\nu }(E_j^{r_{i_\nu }\cdots r_{i_2}r_{i_1}(\chi )}) F_{\beta _\nu }^{{b^{\chi}} (\beta _\nu )-1}\cdots F_{\beta _2}^{{b^{\chi}} (\beta _2)-1} F_{\beta _1}^{{b^{\chi}} (\beta _1)-1}v_\Lambda =0 \label{eq:TEFF} \end{aligned}$$ in $M^\chi (\Lambda )$. If $0\le m\le n-1$, then in $M^\chi (\Lambda )$ $$\begin{aligned} F_{\beta _{m+n+1}} F_{\beta _m}^{{b^{\chi}} (\beta _m)-1}\cdots F_{\beta _2}^{{b^{\chi}} (\beta _2)-1} F_{\beta _1}^{{b^{\chi}} (\beta _1)-1}v_\Lambda =0. \end{aligned}$$ We proceed by induction on $m$. For $m=0$ the claim is that $E_jv_\Lambda =0$ for all $j\in I$, which is clear from the definition of $M^\chi (\Lambda )$. Assume now that $m\g
nt of ${\hat\Theta_{D}}$ in $\delta\circ\gamma$ is $$\mbinom{u-v}{a-v}\left(\mbinom{u+v-a-1}2+\mbinom{u-a}2\right).$$ Since $v\equiv1$, we have $u+v-a-1\equiv u-a$, so that $\mbinom{u+v-a-1}2$ and $\mbinom{u-a}2$ have the same parity. Hence $\delta\circ\gamma=0$, a contradiction. Irreducible summands of the form $S^{(u,v,2)}$ {#uv2sec} ============================================== In this section, we find when one of our Specht modules $S^{(a,3,1^b)}$ has a summand isomorphic to an irreducible Specht module of the form $S^{(u,v,2)}$, where $u$ is even and $v$ is odd. By Theorem \[jreg\] and Lemma \[reg\], $D^{(u,v,2)}$ cannot appear as a composition factor of $S^{(a,3,1^b)}$ unless $(u,v,2)\dom(a,3,1^b){^{\operatorname{reg}}}$. So we may assume that this is the case, which is the same as saying $v\ls\min\{a-1,b+1\}$. We set out notation and assumptions for this section. **Assumptions and notation in force throughout Section \[uv2sec\]:** $\la=(a,3,1^b)$ and $\mu=(u,v,2)$, where $a,b,u,v$ are positive integers with $a,b,u$ even, $a\gs4$, $u>v>2$, $n=a+b+3=u+v+2$ and $v\ls\min\{a-1,b+1\}$. Homomorphisms from $S^\la$ to $S^{\mu'}$ {#homomorphisms-from-sla-to-smu} ---------------------------------------- We begin by constructing a homomorphism from $S^\la$ to $S^{\mu'}$. As in §\[hlamu’1\], we construct this using non-semistandard tableaux. 
Let ${\calu}$ be the set of $\la$-tableaux having the form $$\gyoung(;1;2;3_{1.5}{{\begin{tikzpicture}[baseline=0cm]\draw[thick,densely dotted](0,.13)--++(1.5*.25,0);\end{tikzpicture}}};v;\star_{1.5}{{\begin{tikzpicture}[baseline=0cm]\draw[thick,densely dotted](0,.13)--++(1.5*.25,0);\end{tikzpicture}}};\star,;1;1;2,;2,;3,|{1.5}{{\begin{tikzpicture}[baseline=0cm]\draw[thick,densely dotted](0,.13-1.5*.125)--++(0,1.5*.25);\end{tikzpicture}}},;v,;\star,|{1.5}{{\begin{tikzpicture}[baseline=0cm]\draw[thick,densely dotted](0,.13-1.5*.125)--++(0,1.5*.25);\end{tikzpicture}}},;\star)$$ in which the $\star$s represent the numbers from $v+1$ to $u$, and in which the entries are wea
(\[ea25\]), (\[ea27\]), and (\[ea32\]). Note that such $m$ also satisfies Equation (\[32’\]). In Lemma \[la8\] below, we will prove that $G^{\ddag}$ is represented by a smooth closed subscheme of $ \mathrm{Ker~}\tilde{\varphi}/\tilde{M}^1$ and is isomorphic to $ \mathbb{A}^{l^{\prime}}\times (\mathbb{Z}/2\mathbb{Z})^{\beta}$ as a $\kappa$-variety, where $\mathbb{A}^{l^{\prime}}$ is an affine space of dimension $$\begin{aligned} l^{\prime}=\sum_{i<j}n_in_j +\sum_{i:\mathrm{even~and~} L_i:\textit{of type }I^e}(n_i-1) + \sum_{i:\mathrm{odd~and~}L_i:\textit{free of type I}}(2n_i-2) \notag \\ - \sum_{i:\mathrm{even~and~} L_i:\textit{bound of type II}}n_i +\#\{i:\textit{$i$ is even and $L_i$ is of type I}\} ~~~~~~~~~~~~~~~\notag \\ -\#\{i:\textit{$i$ is even, $L_i$ is of type I and $L_{i+2}$ is of type II}\}. ~~~~~~~~~~~~~~\end{aligned}$$ For ease of notation, let $G^{\dag}=\mathrm{Ker~}\varphi/\tilde{G}^1$. Since $G^{\dag}$ and $G^{\ddag}$ are both closed subschemes of $ \mathrm{Ker~}\tilde{\varphi}/\tilde{M}^1$ and $G^{\dag}(\bar{\kappa})\subset G^{\ddag}(\bar{\kappa})$, $(G^{\dag})_{\mathrm{red}}$ is a closed subscheme of $(G^{\ddag})_{\mathrm{red}}=G^{\ddag}$. It is easy to check that $\mathrm{dim~} G^{\dag} = \mathrm{dim~}G^{\ddag}$ since $\mathrm{dim~} G^{\dag} =\mathrm{dim~}\mathrm{Ker~}\varphi - \mathrm{dim~}\tilde{G}^1=l-\mathrm{dim~}\tilde{G}^1$ and $\mathrm{dim~}G^{\ddag}=l'=l-\mathrm{dim~}\tilde{G}^1$. Here, $\mathrm{dim~}\mathrm{Ker~}\varphi = l$ is given in Lemma \[l46\] and dim $\tilde{G}^1$ is given in Theorem \[ta4\].\ We claim that $(G^{\dag})_{\mathrm{red}}$ contains at least one (closed) point of each connected component of $G^{\ddag}$. Choose an integer $j$ such that $L_j$ is *of type I* and $L_{j+2}, L_{j+3}, L_{j+4}$ (resp. $L_{j-1}, L_{j+1},$ $L_{j+2}, L_{j+3}$) are *of type II* if $j$ is even (resp. odd). 
Consider the closed subgroup scheme $F_j$ of $\tilde{G}$ defined by the following equations: - $m_{i,k}=0$ *if $i\neq k$*; - $m_{i,i}=\mathrm{id}, z_i^{\ast}=0, m_{i,i}^{\ast}=0,
mic-like failures or attacks likely in BTNs?”* and *“Will the upgrade to SDNTNs increase the vulnerability with respect to these types of failures or attacks?”* To do so, we present the state of the art of epidemic-like failure models in Section \[soa\]. Then, we review the main failure propagation model that has been proposed for transport networks in Section \[epidemicsontelecom\]. In Section \[vulnerability\] we discuss whether a failure propagation could occur in BTNs and SDNTNs. Section \[providers\] presents feedback from three network providers based on their experience in addition to the results of our research. Finally, Section \[sec:conclusions\] reviews the main contributions and findings of this work. ![Broad classification of multiple failures. An epidemic-like failure is a process where a temporary failure propagates to physical neighbors. A cascading failure that triggers failures in physical neighbors is also considered an epidemic-like failure.[]{data-label="fig:multfail"}](multfail2.pdf) What is an epidemic-like failure propagation?\[soa\] ==================================================== Epidemic theory has been used to describe and predict the propagation of diseases, human interactions, natural phenomena, and failures in a wide range of networks. An epidemic-like failure is a dynamic process where a partial/temporary failure propagates to physical neighbors. The spreading of the aforementioned events is formally represented by epidemic models, which can generally be classified into one of the following three families: - The *Susceptible-Infected* (SI) family considers individuals as being either susceptible (S) or infected (I). This family assumes that infected individuals will remain infected forever, and so can be used for worst-case propagation scenarios ($S\rightarrow I$). 
- The *Susceptible-Infected-Susceptible* (SIS) considers that a susceptible individual can become infected on contact with another infected individual, then recovers with some probability of becoming sus
training and the rest for testing. Note that as $\lambda$ increases, the distribution of the train and test error shifts to lower values and the variance decreases. This reduction in error affects each binary model in the tree structure, so the effects accumulate when constructing a nested dichotomy. The third row shows the distribution of RMSE of 1,000 nested dichotomies trained with multiple subset evaluation on `mfeat-fourier`, using logistic regression as the base learner, considering increasing values of $\lambda$. As expected, a reduction in error with diminishing returns is seen as $\lambda$ increases. In order to show an example of how the estimate from (\[eqn:expected\_order\_statistics\]) behaves when the error is not normally distributed, the distribution of $E$ for logistic regression trained on the `segment` UCI data is plotted in the bottom row. This assumption is commonly violated in real datasets, as the distribution is often skewed towards zero error. As with the other examples, 1,000 different random choices for $\mathcal{C}_1$ and $\mathcal{C}_2$ were used to generate the histogram. Although the distribution in this case is not very well modelled by a Gaussian, the approximation of $\mathbb{E}[\hat{E}_\lambda]$ from (\[eqn:expected\_order\_statistics\]) still closely matches the empirical mean. This shows that even when the normality assumption is violated, performance gains of the same degree can be expected. This example is not cherry picked; the same behaviour was observed on the entire collection of datasets used in this study. Related Work\[sec:related\_work\] ================================= Splitting a multi-class problem into several binary problems in a tree structure is a general technique that has been referred to by different names in the literature. 
For example, in a multi-class classification context, nested dichotomies in the broadest sense of the term have been examined as filter trees, conditional probability trees, and label trees. @beygelzimer2009conditional proposed
ine(0,1){40}} \put(10,40){\line(1,-1){30}} \put(10,40){\line(1,1){70}} \put(10,80){\line(1,1){30}} \qbezier(10,40)(25,75)(40,110) \put(40,110){\line(1,0){40}} \put(40,10){\vector(1,0){40}} \put(40,10){\line(2,5){40}} \qbezier(40,10)(75,25)(110,40) \put(80,110){\line(1,-1){30}} \qbezier(80,110)(95,75)(110,40) \put(110,40){\line(0,1){40}} \put(80,10){\line(1,1){30}} \put(0,36){\small H} \put(0,76){\small G} \put(36,0){\small A} \put(36,113){\small F} \put(79,0){\small B} \put(79,113){\small E} \put(115,36){\small C} \put(115,76){\small D} \put(150,55){$\Rightarrow$} \put(200,40){\line(0,1){40}} \put(200,40){\line(1,-1){30}} \put(200,40){\line(1,1){70}} \put(200,80){\line(1,1){30}} \qbezier(200,40)(215,75)(230,110) \put(230,110){\line(1,0){40}} \put(230,10){\vector(1,0){40}} \put(230,10){\line(2,5){40}} \qbezier(230,10)(265,25)(300,40) \put(270,110){\line(1,-1){30}} \qbezier(270,110)(285,75)(300,40) \put(300,40){\line(0,1){40}} \put(270,10){\line(1,1){30}} \put(190,36){\small H} \put(190,76){\small G} \put(226,0){\small A} \put(226,113){\small F} \put(269,0){\small B} \put(269,113){\small E} \put(305,36){\small C} \put(305,76){\small D} \linethickness{0.5mm} \put(210,80){\line(1,0){20}} \qbezier(230,80)(235,65)(240,50) \put(240,50){\line(1,0){30}} \put(270,50){\line(0,-1){30}} \qbezier(270,50)(280,65)(290,80) \thinlines \put(270,20){\vector(-3,-2){30}} \put(270,20){\line(2,-1){20}} \put(290,80){\line(2,-3){20}} \put(290,80){\line(0,1){20}} \put(240,50){\line(-1,-1){35}} \put(230,80){\line(2,3){25}} \put(210,80){\line(-2,-3){20}} \put(210,80){\line(0,1){25}} \end{picture}$$ Here sides $AB$ and $FG$, $BC$ and $CD$, $DE$ and $EF$, $GH$ and $HA$ constitute pairs and $AB$ is the marked side. Thus, we must connect the arc that intersects $AB$ with the arc that intersects $FG$, the arc that intersects $BC$ with the arc that intersects $CD$, the arc that intersects $DE$ with the arc that intersects $EF$ and the arc that intersects $GH$ with the arc that intersects $HA$. 
An arrow in the arc that intersects $AB$ indicates
is replaced by an isotropic $\alpha$-stable process with $\alpha\in(0,2)$. A significant difference to the Brownian setting is that the stable processes will exit spheres by a jump rather than hitting their boundary. This difference ensures that disconnected domains may be considered and that, unlike the diffusive setting, the algorithm ends after an almost surely finite number of steps.' author: - '[Andreas E. Kyprianou]{}[^1] $^,$[^2]' - 'Ana Osojnik[^3]' - '[Tony Shardlow$^*$]{}' bibliography: - 'sphere\_stepping.bib' title: | Unbiased ‘walk-on-spheres’ Monte Carlo methods\ for the fractional Laplacian --- Introduction ============ We start by recalling the classical Dirichlet problem in $d$-dimensions and re-examining a now-classical Monte Carlo algorithm that is used to numerically simulate its solution. Suppose that $D$ is a domain in $\mathbb{R}^d$, $d\geq 2$, with sufficiently smooth boundary. We are interested in finding $u\colon D\to \mathbb{R}$ such that $$\begin{gathered} \begin{aligned} \Delta u(x) & = 0, & \qquad x & \in D, \\ u(x) & = {g}(x), & x & \in \partial D, \end{aligned}\label{Dirichlet}\end{gathered}$$ where ${g}$ is a given continuous function on the boundary. The Feynman–Kac representation tells us that, for example, if $u\in C^2(\overline{D} )$ is a solution to , then $$\label{FK} u(x) = \mathbb{E}_x[{g}(W_{\tau_D}) ], \qquad x\in D,$$ where $\tau_D \coloneqq \inf\{t>0 : W_t \not\in D\}$ and $W\coloneqq (W_t, t\geq 0)$ is standard $d$-dimensional Brownian motion with probabilities $(\mathbb{P}_x, x\in\mathbb{R}^d)$. The representation suggests that solutions to can be generated numerically via straightforward Monte Carlo simulations of the path of $W$ until first exit from $D$. That is to say, if $(W^{i}_t, t\leq \tau^{i}_D)$, $i \in \mathbb{N}$ are [*iid*]{} copies of $(W_t, t\leq \tau_D)$ issued from $x\in D$, then, by the [strong law of large numbers]{}, $$\lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^n {g}(W^{i
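The estimator suggested by the strong law of large numbers can be sketched directly. The following is a hedged illustration, not the walk-on-spheres method of this paper: it discretizes the Brownian path with a fixed time step (the function name and the step size `dt` are our own choices), so the exit point is only approximate.

```python
import numpy as np

def mc_dirichlet(g, x0, n_paths=1000, dt=1e-3, seed=0):
    """Crude Monte Carlo for u(x0) = E_x0[g(W_tau)], with D the unit disk.

    Each path is an Euler discretization of planar Brownian motion, run
    until it leaves the disk; g is evaluated at the approximate exit point.
    """
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_paths):
        w = np.array(x0, dtype=float)
        while w @ w < 1.0:                      # still inside D
            w += np.sqrt(dt) * rng.standard_normal(2)
        total += g(w)
    return total / n_paths

# g(x, y) = x is harmonic, so the exact solution is u(x, y) = x.
est = mc_dirichlet(lambda w: w[0], (0.5, 0.0))
print(est)  # close to u(0.5, 0) = 0.5
```

The error decays like $O(n^{-1/2})$ in the number of paths, plus a discretization bias from overshooting the boundary; the walk-on-spheres constructions discussed in this paper avoid the time-stepping entirely.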
$$ Use of the fourth and third equations of Eq. (\[Eqn: HR Ortho\]) and the explicit relation Eq. (\[Eqn: W(mu)\]) for $W(\mu)$ gives respectively the coefficients $$\begin{aligned} {\displaystyle a_{\textrm{A}}(\nu_{0})} & = & X(-\nu_{0})/X(\nu_{0})\nonumber \\ a_{\textrm{A}}(\nu) & = & -\frac{1}{N(\nu)}\textrm{ }c(1-c)\nu_{0}^{2}\nu X(-\nu_{0})X(-\nu)\label{Eqn: Milne_Coeff}\end{aligned}$$ The extrapolated end-point $z_{0}$ of Eq. (\[Eqn: extrapolated\]) is related to $a_{\textrm{A}}(\nu_{0})$ of the Milne problem by $a_{\textrm{A}}(\nu_{0})=-\exp(-2z_{0}/\nu_{0})$. **Problem B: The Constant Source Problem.** Here the boundary condition at $x=0$ is $$1=a_{\textrm{B}}(\nu_{0})\phi(\mu,\nu_{0})+\int_{0}^{1}a_{\textrm{B}}(\nu)\phi(\mu,\nu)d\nu\qquad\mu\geq0$$ which leads, using the integral relations satisfied by $W$, to the expansion coefficients $$\begin{aligned} {\displaystyle a_{\textrm{B}}(\nu_{0})} & = & -2/c\nu_{0}X(\nu_{0})\label{Eqn: Constant_Coeff}\\ a_{\textrm{B}}(\nu) & = & \frac{1}{N(\nu)}\textrm{ }(1-c)\nu(\nu_{0}+\nu)X(-\nu)\nonumber \end{aligned}$$ where the $X(\pm\nu_{0})$ are related to Problem A as $$\begin{aligned} X(\nu_{0}) & = & \frac{1}{\nu_{0}}\sqrt{\frac{\nu_{0}^{2}(1-c)-1}{2a_{\textrm{A}}(\nu_{0})(1-c)(\nu_{0}^{2}-1)}}\\ X(-\nu_{0}) & = & \frac{1}{\nu_{0}}\sqrt{\frac{a_{\textrm{A}}(\nu_{0})\left(\nu_{0}^{2}(1-c)-1\right)}{2(1-c)(\nu_{0}^{2}-1)}}.\end{aligned}$$ This brief introduction to the singular eigenfunction method should convince the reader of the great difficulties associated with half-space, half-range methods in particle transport theory; note that the $X$-functions in the coefficients above must be obtained from numerically computed tables. In contrast, full-range methods are more direct due to the simplicity of the weight function $\mu$, which suggests the full-range formulation of half-range problems presented in Sec. 5. 
Finally it should be mentioned that this singular eigenfunction method is based on the theory of singular integral equations. **Acknowledgment** It is a ple
ession Description Score Coverage MW \[kDa\] calc. pI ----------- ---------------------------------------------- --------- ---------- ------------ ---------- P00734 Prothrombin 2759.75 73.31 70.0 5.90 P0C0L5 Complement C4-B 2411.85 78.61 192.6 7.27 P0C0L4 Complement C4-A 2379.43 75.80 192.7 7.08 P19823 Inter-alpha-trypsin inhibitor heavy chain H2 1480.56 56.77 106.4 6.86 P19827 Inter-alpha-trypsin inhibitor heavy chain H1 1167.61 54.45 101.3 6.79 Q06033 Inter-alpha-trypsin inhibitor heavy chain H3 352.03 43.60 99.8 5.74 P02760 Protein AMBP 263.80 27.56 39.0 6.25 P00740 Coagulation factor IX 216.46 54.66 51.7 5.47 P00742 Coagulation factor X 90.46 37.09 54.7 5.94 P02768 Serum albumin 62.34 36.45 69.3 6.28 P01857 Immunoglobulin heavy constant gamma 1 47.83 39.09 36.1 8.19 P04004 Vitronectin 45.60 23.64 54.3 5.80 P49747 Cartilage oligomeric matrix protein 43.78 19.82 82.8 4.60 P51884 Lumican OS=Homo sapiens 43.31 42.90 38.4 6.61 P01834 Immunoglobulin kappa constant 40.16 49.53 11.8 6.52 P67936 Tropomyosin alpha-4 chain 39.83 40.32 28.5 4.69 P07359 Platelet glycoprotein Ib alpha chain 31.44 12.12 71.5 6.29 P0DOY2 Immunoglobulin lambda constant 2 29.10 75.47 11.3 7.24 Q08380 Galectin-3-binding protein
\mathbf{N}_{ab}\left[ \phi_{+}\right] \phi_{-}^b +... \label{d80}$$ Where the ellipsis means terms of higher order in $\phi_-$. Here $\mathbf{D}$ ($\mathbf{N}$) is the so-called dissipation (noise) kernel. This is of course identical in form to the effective action for a stochastic theory \[ne20\], and thereby we may write down the equivalent Langevin equation $$\mathbf{D}_{a}\left[\phi\right]=-\tilde{\xi}_{a}\label{classpath}$$ where the $\tilde{\xi}_a$ are Gaussian stochastic sources with zero mean and self-correlation $$\left\langle \tilde{\xi}_{a}\tilde{\xi}_{b}\right\rangle =\mathbf{N}_{ab}$$ Full Hadamard propagator from the stochastic 1PI approach --------------------------------------------------------- As an application of the stochastic 1PI approach we shall show that the two point function for the stochastic theory \[ne21\] is identical to half the Hadamard propagator of the full quantum theory. The full quantum propagators may be derived from the 1PI generating functional as $$G^{AB}=-iW_{1PI}^{,AB}$$ From the properties of the Legendre transform we have $$\Gamma_{1PI,AB}\,W_{1PI}^{,BC}=-\delta^{C}_{A}$$ whereby $$\Gamma_{1PI,AB}\,G^{BC}=i\delta^{C}_{A}$$ In the $A=\left(\alpha ,a\right)$ representation, where $\alpha =\pm$, we find $$\Gamma_{1PI,\left(\alpha,a\right)}= \left( \begin{array}{c} \mathbf{D}_{c,a}\,\phi_{-}^{c}\\ \mathbf{D}_{a}+i\mathbf{N}_{ab}\,\phi_{-}^{b} \end{array} \right)$$ And therefore the Hessian, evaluated at the physical point $\phi_{-}^c=0$, is $$\Gamma_{1PI,\left(\alpha,a\right),\left(\beta,b\right)}= \left( \begin{array}{cc} 0&\mathbf{D}_{b,a}\\ \mathbf{D}_{a,b}&i\mathbf{N}_{ab} \end{array} \right) \label{Hessian}$$ We also identify $$G^{\left(\alpha,a\right),\left(\beta,b\right)}= \left( \begin{array}{cc} \frac{1}{2}G_{1}^{ab}&-iG_{ret}^{ab}\\ -iG_{adv}^{ab}&0 \end{array} \right) \label{onshellprop}$$ where $G_{ret}^{ab}$ is the retarded propagator, $G_{adv}^{ab}=G_{ret}^{ba}$ is the advanced propagator, and $G_1^{ab}$ is the Hadamard propagator $$G_{1}^{ab}=\left\langle \left\{ \phi_{H}^{a},\phi_{H}^{b}\right\} \right\rangle \label{ne50}$$ The equations for the propagators become $$\left( \begin{array}{cc} 0&\mathbf{D}_{b,a}\\ \mathbf{D}_{a,b}&i\mathbf{N}_{ab} \end{array} \right) \left( \begin{array}{cc} \frac{1}{2}G_{1}^{bc}&-iG_{ret}^{bc}\\ -iG_{adv}^{bc}&0 \end{array} \right) =i \left( \begin{array}{cc} \delta^{c}_{a}&0\\ 0&\delta^{c}_{a} \end{array} \right) \label{onshelleq}$$ namely $$\begin{aligned} \mathbf{D}_{a,b}\,G_{ret}^{bc}&=&-\delta^{c}_{a}\nonumber\\ \mathbf{D}_{b,a}\,G_{adv}^{bc}&=&-\delta^{c}_{a}\nonumber\\ \mathbf{D}_{a,b}\,G_{1}^{bc}&=&-2\mathbf{N}_{ab}\,G_{adv}^{bc}\end{aligned}$$ This last eq
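As a sanity check on this algebra, the relations $\mathbf{D}_{a,b}G_{ret}^{bc}=-\delta^c_a$, $G_{adv}^{ab}=G_{ret}^{ba}$ and $\mathbf{D}_{a,b}G_1^{bc}=-2\mathbf{N}_{ab}G_{adv}^{bc}$ can be verified by treating the kernels as finite matrices. This is a hedged numerical sketch with random stand-ins for $\mathbf{D}$ and $\mathbf{N}$, not part of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
D = rng.standard_normal((n, n))          # invertible stand-in for D_{a,b}
M = rng.standard_normal((n, n))
N = (M + M.T) / 2                        # noise kernel N_{ab}, symmetric

G_ret = -np.linalg.inv(D)                # D_{a,b} G_ret^{bc} = -delta_a^c
G_adv = G_ret.T                          # G_adv^{ab} = G_ret^{ba}
G1 = -2 * np.linalg.inv(D) @ N @ G_adv   # D_{a,b} G_1^{bc} = -2 N_{ab} G_adv^{bc}

Z = np.zeros((n, n))
hessian = np.block([[Z, D.T], [D, 1j * N]])          # (D^T)_{ab} = D_{b,a}
G = np.block([[0.5 * G1, -1j * G_ret], [-1j * G_adv, Z]])

# The 2x2 block equation Hessian @ G = i * identity holds term by term.
assert np.allclose(hessian @ G, 1j * np.eye(2 * n))
# G1 comes out symmetric, as a Hadamard (anticommutator) function must be.
assert np.allclose(G1, G1.T)
```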
Medium Medium Low Improbable (E) Medium Medium Medium Low Eliminated (F) ---------------- -------------- ---------- ---------- ------------ : MIL-STD-882E Risk Assessment Matrix[]{data-label="Ta:RISK_ASSESSMENT_MATRIX"} This table suffers the same ambiguity as in Table 3. MIL-STD-882’s definitions are clearly inadequate for quantitative analysis. Through equivocation, exposure to an intermittent compound Poisson process is regarded as not different than exposure to a compound Poisson process, despite that the difference becomes obvious through the linear factor $(1 - \iota)$. MIL-STD-882 is an evolving document in its fifth major revision; let us hope these ambiguities are resolved in the future. Other approaches to automata ============================ Deterministic finite automaton {#S:DFA} ------------------------------ This depiction of the deterministic finite automaton [@wW11autmaton] appears in Wikipedia: > An *automaton* is represented formally by the 5-tuple $\langle Q, \Sigma, \delta, q_0, A \rangle$, where: > > - $Q$ is a finite set of *states*. > > - $\Sigma$ is a finite set of *symbols*, called the *alphabet* of the automaton. > > - $\delta$ is the *transition function*, that is, $\delta \colon Q \times \Sigma \to Q$. > > - $q_0$ is the *start state*, that is, the state which the automaton is *in* when no input has been processed yet, where $q_0 \in Q$. > > - $A$ is a set of states of $Q$ (i.e. $A \subseteq Q$) called *accept states*. > An approach for engineers is found in [@jH79]. [20]{} Benjamin S. Blanchard and Wolter J. Fabrycky, Systems Engineering and Analysis, 3rd edition, Prentice Hall, 1998 Arthur D. Hall, A Methodology For Systems Engineering, (New York: Van Nostrand Reinhold Company), 1962 John D. Musa and Anthony Iannino and Kazuhira Okumoto, [*Software reliability - measurement, prediction, application*]{}, McGraw-Hill, 1987 Hopcroft, J. E. and Ullman, J. D., [*Introduction to Autom
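The 5-tuple $\langle Q, \Sigma, \delta, q_0, A \rangle$ above maps directly onto code. The following is an illustrative sketch (the example language, strings over $\{a,b\}$ with an even number of `a`s, is our own choice):

```python
def make_dfa(Q, Sigma, delta, q0, A):
    """Return an acceptor function for the DFA <Q, Sigma, delta, q0, A>."""
    def accepts(word):
        q = q0
        for sym in word:
            if sym not in Sigma:
                raise ValueError(f"symbol {sym!r} not in alphabet")
            q = delta[(q, sym)]          # total transition function Q x Sigma -> Q
        return q in A                    # accept iff we halt in an accept state
    return accepts

# Example: accept strings over {a, b} containing an even number of 'a's.
even_a = make_dfa(
    Q={"even", "odd"},
    Sigma={"a", "b"},
    delta={("even", "a"): "odd", ("odd", "a"): "even",
           ("even", "b"): "even", ("odd", "b"): "odd"},
    q0="even",
    A={"even"},
)
```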
can obtain the most information gain. A question [$q\!=\!(I,\hat{v},\Lambda_{\hat{v}})$]{} requires people to determine whether our approach predicts the correct part template $\hat{v}$ and parses a correct region $\Lambda_{top}=\Lambda_{\hat{v}}$ for the part. Our method expects one of the following answers. **Answer 1:** the part detection is correct. **Answer 2:** the current AOG predicts the correct part template in the parse graph, but it does not accurately localize the part. **Answer 3:** neither the part template nor the part location is correctly estimated. **Answer 4:** the part belongs to a new part template. **Answer 5:** the target part does not appear in the image. In particular, in case of receiving Answers 2–4, our method will ask people to annotate the target part. In case of getting Answer 3, our method will require people to specify its part template and whether the object is flipped. Our method uses new part annotations to refine (for Answers 2–3) or create (for Answer 4) an AOG branch of the annotated part template based on Equation (\[eqn:LossAOG\]). ### Question ranking The core of the QA-based learning is to select a sequence of questions that reduce the uncertainty of part localization the most. Therefore, in this section, we design a loss function to measure the incompatibility between the AOG and real part appearances in object samples. Our approach predicts the potential gain (decrease of the loss) of asking about each object. Objects with large gains usually correspond to not well explained CNN neural activations. Note that annotating a part in an object may also help localize parts on other objects, thereby leading to a large gain. Thus, we use a greedy strategy to select a sequence of questions [$\Omega=\{q_{i}|i=1,2,\ldots\}$]{}, *i.e.* asking about the object that produces the most gain in each step. 
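The greedy strategy above can be sketched as a loop that re-scores the remaining objects after every annotation, since annotating one object changes the predicted gains of the others. The gain function below is a toy stand-in for the AOG-based gain in the paper; the object names and numbers are illustrative only:

```python
def greedy_questions(objects, gain_fn, n_questions):
    """Greedily pick the object with the largest predicted gain at each step.

    gain_fn(obj, annotated) returns the predicted loss decrease from asking
    about obj, given the set of already-annotated objects.
    """
    annotated, omega = set(), []
    for _ in range(n_questions):
        remaining = [o for o in objects if o not in annotated]
        if not remaining:
            break
        best = max(remaining, key=lambda o: gain_fn(o, annotated))
        omega.append(best)
        annotated.add(best)           # gains are re-evaluated on the next step
    return omega

# Toy gain: each object has a base gain, halved once a neighbour is annotated
# (annotating one object also helps localize parts on its neighbours).
base = {"img1": 3.0, "img2": 2.5, "img3": 1.0}
neigh = {"img1": {"img2"}, "img2": {"img1"}, "img3": set()}

def toy_gain(o, annotated):
    return base[o] * (0.5 if neigh[o] & annotated else 1.0)

order = greedy_questions(base, toy_gain, 3)
print(order)  # ['img1', 'img2', 'img3']
```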
For each object image $I$, we use [${\bf P}(y|I)$]{} and [${\bf Q}(y|I)$]{} to denote the prior distribution and the estimated distribution of an object part on $I$, respec
( {\bf 1} - W^{\dagger} W ) \end{array} \right]. \nonumber \\ \end{aligned}$$ Therefore, our system depends only on $U$ and $W$, and the dependence on $Z$ and $V$ is superficial. Universal scaling model of $N$ sterile sector {#sec:scaling-model} ============================================== Suppose that we obtain a particular parametrization of the $U$ matrix by taking the $N=1$ sterile sector, as we did in section \[sec:probabilities\]. In this $(3+1)$ model, the $W$ matrix elements are completely determined, up to phase, by unitarity $$\begin{aligned} \vert W_{\alpha 4} \vert^2 = 1 - \sum_{j=1}^3 |U_{\alpha j}|^2. \end{aligned}$$ Now, we attempt to create a toy model of the $N$ sterile sector by “universal scaling”. We postulate that all the $W$ matrix elements are real and equal: $$\begin{aligned} W_{\alpha 4} = W_{\alpha 5} = \cdots = W_{\alpha N+3} = \frac{ 1 }{ \sqrt{N} } \biggl( 1 - \sum_{j=1}^3 |U_{\alpha j}|^2 \biggr)^{1/2}\end{aligned}$$ which is consistent with $(3+N)$ space unitarity. In this universal scaling model, the order $W^2$ correction terms in (\[P-beta-alpha-2nd-averaged\]) remain unchanged provided that we further assume that all the sterile masses are equal.[^24] This is because the $W$ matrix elements enter into the $W^2$ terms in the form $$\begin{aligned} \sum_{K} W_{\alpha K} W^{\dagger}_{K \beta} \frac{ 1 }{ ( \Delta_{K} - h_{k} )^n }, \label{W2-scaling} \end{aligned}$$ where $n=1$ or 2. However, the leaking constant $\mathcal{C}_{\alpha \beta}$ becomes smaller by a factor of $N$ in the universal scaling model. In the $(3+1)$ model, $\mathcal{C}_{\alpha \beta}$ takes the largest value, the upper limit in eq. (\[Cab-bound\]). Because $\mathcal{C}_{\alpha \beta}$ is fourth order in $W$, it is evident that in the universal scaling model, $$\mathcal{C}_{\alpha \beta} = \frac{1}{N} \biggl( 1 - \sum_{j=1}^3 |U_{\alpha j}|^2 \biggr) \biggl( 1 - \sum_{j=1}^3 |U_{\beta j}|^2 \biggr), \label{Cab-USM}$$ which is the lower limit of (\[Cab-bound\]). 
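The scaling behaviour can be checked numerically. Here we take $\mathcal{C}_{\alpha\beta}=\sum_K |W_{\alpha K}|^2|W_{\beta K}|^2$ as a stand-in fourth-order combination (an assumption on our part, chosen to reproduce the limits quoted above), with toy values for the active row norms $\sum_j |U_{\alpha j}|^2$:

```python
import numpy as np

u = np.array([0.96, 0.94, 0.90])  # toy values of sum_j |U_{alpha j}|^2

def W_universal(u, N):
    """Universal scaling: W_{alpha K} = sqrt((1 - u_alpha)/N), K = 4,...,N+3."""
    return np.tile(np.sqrt((1.0 - u) / N)[:, None], (1, N))

def leak_C(W, a, b):
    """Fourth-order combination sum_K |W_{aK}|^2 |W_{bK}|^2 (assumed form)."""
    return float((W[a] ** 2 * W[b] ** 2).sum())

for N in (1, 2, 8):
    W = W_universal(u, N)
    # (3+N)-space unitarity: sum_K |W_{alpha K}|^2 = 1 - sum_j |U_{alpha j}|^2.
    assert np.allclose((W ** 2).sum(axis=1), 1.0 - u)
    # The leaking constant is suppressed by exactly 1/N under universal scaling.
    assert np.isclose(leak_C(W, 0, 1), (1.0 - u[0]) * (1.0 - u[1]) / N)
```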
[99]{} Z. Maki, M. Nakagawa a
linic, *P*2~1~/*c* Mo *K*α radiation, λ = 0.71073 Å Hall symbol: -P 2ybc Cell parameters from 27248 reflections *a* = 22.931 (2) Å θ = 3.0--27.5° *b* = 14.0395 (12) Å µ = 3.93 mm^−1^ *c* = 27.855 (3) Å *T* = 93 K β = 107.224 (1)° Block, yellow *V* = 8565.6 (14) Å^3^ 0.47 × 0.43 × 0.43 mm *Z* = 4 -------------------------------------------------- ---------------------------------------- Data collection {#tablewrapdatacollectionlong} =============== ------------------------------------------------------------- --------------------------------------- Rigaku R-AXIS RAPID diffractometer 19520 independent reflections Radiation source: Rotating Anode 18777 reflections with *I* \> 2σ(*I*) graphite *R*~int~ = 0.045 ω scans θ~max~ = 27.5°, θ~min~ = 3.0° Absorption correction: multi-scan (*ABSCOR*; Higashi, 1995) *h* = −29→20 *T*~min~ = 0.262, *T*~max~ = 0.281 *k* = −18→18 69256 measured reflections *l* = −35→36 ------------------------------------------------------------- --------------------------------------- Refinement {#tablewraprefinementdatalong} ========== ------------------------------------- -------------------------------------------------------------------------------------------------- Refinement on *F*^2^ Primary atom site location: structure-invariant direct methods Least-squares matrix: full Secondary atom site location: difference Fourier map *R*\[*F*^2^ \> 2σ(*F*^2^)\] = 0.039 Hydrogen site location: inferred from neighbou
19746 AREG 0.042790083 0.515091119 ELF4 0.001468499 0.516939945 NAB2 0.011359365 0.527142113 GPC1 0.030827537 0.529427807 SLC25A5 0.029858761 0.534298394 RYR1 0.014981235 0.538436907 PEG10 0.018255692 0.549699307 SLIT2 0.005701378 0.551929669 ESYT1 0.003009396 0.561757654 CRLF1 0.029356956 0.562850973 DPP4 0.018431425 0.567301902 CES2 0.003842477 0.571035541 CYCS 0.030715945 0.585863898 BASP1 0.026758782 0.594217929 KLHL21 0.022505203 0.599053201 LCN2 0.017669691 0.601124995 EEF1A2 0.018392179 0.602560272 TGIF1 0.030147532 0.605396529 TSPYL2 0.046736047 0.617648983 CKS2 0.026759687 0.623716633 SPAG5 0.027923235 0.624868361 CKMT2 0.043578063 0.645621487 ALDH1A3 0.010848438 0.653685247 ZNF148 0.031227695 0.661077991 UCHL1 0.003566593 0.663217388 ASNS 0.006539605 0.667304137 NPTXR 0.009448284 0.674187256 FKBP5 0.027541727 0.694255151 GOT1 0.001282557 0.730143285 IGFBP3 0.009116606 0.771327366 SERPINE2 0.020069598 0.785032654 SOX4 0.006616428 0.808876177 BSG 0.001981905 0.809699733 SCNN1A 0.027983691 0.825462627 NPC2 0.005654992 0.837641446 CDH2 0.038829964 0.862357097 MTHFD2 0.002917107 0.870767506 PAX1 0.040281329 0.936795356 SCG5 0.044745635 0.990761798 EPB41L3 0.000527208 1.020917517 ###### Significant regulation of mRNAs in the specific miRNA-mRNA interacting regulatory network. Gene P-value log~2~ (fold-change) ----------- ---------- ---------------------- TNFRSF11B 0.036234 −0.836892 BZRAP1 0.006324 −0.784391 TGFBR2 0.016942 −0.736951 SALL1 0.039302 −0.687134 LRRN3 0.006382 −0.685662 TRAM2 0.016639 −0.664343 RGS16 0.027819
an occur as irreducible summands of $S^\la$ are $S^{(a+b,3)}$ and $S^{(a+b-4,5,2)}$. Suppose $S^\la$ has an irreducible summand $S^{(u,v)}$ with $v>3$. Then $v\equiv3\ppmod4$ and $u-v\equiv7\ppmod8$, which means that $u+v\equiv5\ppmod8$ and hence $a+b\equiv2\ppmod8$. Furthermore, $(u,v)\dom\la{^{\operatorname{reg}}}$, which implies that $a\gs6$ and $b\gs4$. Similarly, if $S^\la$ has an irreducible summand $S^{(u,v,2)}$ with $v>5$, then $v\equiv1\ppmod4$, $u-v\equiv7\ppmod8$ and hence $u+v+2\equiv3\ppmod8$, which gives $a+b\equiv0\ppmod8$. Furthermore, the fact that $(u,v,2)\dom\la{^{\operatorname{reg}}}$ implies that $a\gs6$ and $b\gs4$. Concluding remarks {#concsec} ================== The results in this paper do not give anything like a complete picture; this work is intended as a re-awakening of a long-dormant subject. Given how small the first example of a decomposable Specht module in this paper is, it is surprising that it has taken thirty years for this example to be found. We hope that this paper will be the start of a longer study of decomposable Specht modules. We conclude the paper by making some speculations about decomposable Specht modules; these are based on calculations and observations, but we do not have enough evidence to make formal conjectures. Specht filtrations ------------------ Our main results show that in certain cases summands of Specht modules are isomorphic to irreducible Specht modules. In fact, reducible Specht modules can also occur as summands; for example, the first new decomposable Specht module $S^{(4,3,1^2)}$ found in this paper decomposes as $S^{(6,3)}\oplus S^{(4,3,2)}$, with the latter Specht module being reducible. However, it is certainly not the case that every summand of a decomposable Specht module is isomorphic to a Specht module. But in the cases we have been able to calculate, every summand appears to have a filtration by Specht modules. 
If this is true in general, it means that our main results are stronger, in that we have found all irreducible summands o
(-1)^b T(b,s,a \,; x-y,x) + (-1)^a T(s,a,b \,; y, y-x).$$ It is easy to see that $S (a,b,s \,; x,y) = S (a-1,b,s+1 \,; x,y) + S (a,b-1,s+1 \,; x,y)$ because of $T (a,b,s \,; x,y) = T (a-1,b,s+1 \,; x,y) + T (a,b-1,s+1 \,; x,y)$. Hence we have $$\begin{split} S (a,b,s \,; x,y) = &\sum_{j=1}^{a} \binom{a+b-j-1}{a-j} S (j,0,a+b+s-j \,; x,y) \\ &+ \sum_{j=1}^{b} \binom{a+b-j-1}{b-j} S (0,j,a+b+s-j \,; x,y). \end{split}$$ Now we consider the function $S (j,0,a+b+s-j \,; x,y)$ in the above formula. By the definition of $S (a,b,s \,; x,y)$ and the harmonic product formula, we have $$\begin{split} &S (j,0,a+b+s-j \,; x,y) = T (j,0,a+b+s-j \,; x,y) \\ &+ T(0,a+b+s-j,j \,; x-y,x) + (-1)^j T(a+b+s-j,j,0 \,; y, y-x) \\= \, &\zeta (a+b+s-j \, ; y ) \bigl( \zeta (j \, ; x-y) + (-1)^j \zeta (j \, ; y-x) \bigr) - \zeta (a+b+s \, ; x ) . \end{split}$$ By exchanging the parameters $x$ and $y$, we can repeat the same manner for $S (0,j,a+b+s-j \,; x,y)$. Therefore we obtain this theorem when $\Re (s) >1$ by the well-known formula $\sum_{j=1}^{a} \binom{a+b-j-1}{a-j} = \binom{a+b-1}{b}$. By the analytic continuation, we obtain Theorem \[th:1\]. By putting $s=0$ in (\[eq:th1\]), we have $$\zeta (a \,; x) \zeta (b \,; y) + (-1)^b T(b,0,a \,; x-y,x) + (-1)^a T(0,a,b \,; y, y-x) = K (a,b \,; x,y)$$ On the other hand, one has $$T(0,a,b \,; -y,x-y) + T(b,0,a \,; x-y,x) + \zeta (a+b \,; x-y) = \zeta (a \,; -y) \zeta (b \,; x)$$ by the harmonic product formula. Therefore we have Proposition \[pro:1\] by removing the term $T(b,0,a \,; x-y,x)$. 
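The binomial identity invoked in the last step, $\sum_{j=1}^{a} \binom{a+b-j-1}{a-j} = \binom{a+b-1}{b}$, is easy to confirm numerically; a quick sketch:

```python
from math import comb

def lhs(a, b):
    # Left-hand side: sum_{j=1}^{a} C(a+b-j-1, a-j)
    return sum(comb(a + b - j - 1, a - j) for j in range(1, a + 1))

# Check against the right-hand side C(a+b-1, b) over a small grid.
for a in range(1, 10):
    for b in range(1, 10):
        assert lhs(a, b) == comb(a + b - 1, b)
print("identity verified for 1 <= a, b <= 9")
```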
Recall the well-known formula $$\chi (n) = \frac{1}{\tau (\overline{\chi})} \sum_{l=1}^{k-1} \overline{\chi} (l) e^{2 \pi i ln/k} = \frac{\chi (-1)}{\tau (\overline{\chi})} \sum_{l=1}^{k-1} \overline{\chi} (l) e^{-2 \pi i ln/k}.$$ By using above formula and $\phi \chi \psi(-1) = (-1)^{a+b+1}$, we have $$\begin{split} &\tau (\overline{\phi}) \tau (\overline{\chi}) \tau (\overline{\psi}) L(0,a,b \,; \phi, \chi ,\psi) \\ &=\sum_{j=1}^{h-1} \sum_{l=1}^{k-1} \sum_{r=1}^{q-1} \overline{\phi}(j) \o
, a contradiction. [**Case**]{} $|L|=2$\ We may assume that $L=\{2,4\}$. Then $|\cS|=|\cS^*|+2+|Y_2|+|Y_4|$ and $Y=Y_2\cup Y_4$. From (\[prop:bi\]) we have $\cR^*\subseteq\{ \{2,3,4\},\{2,4,5\}\}$. We need only consider cases for which $|\cR|\ge 5+|Y|$.\ 1. $|Y_2|,|Y_4|\ge 2$\ From (\[prop:ylarge\]) we know that $\cR\setminus\cR^*=\cR(2,4)$. Using inequality (\[eq:individual\]) with $\{i,j\}=\{2,4\}$ and the fact that $5+|Y|\le|\cR|$, we get $|Y_i|+|S_i^*|+6+|Y|\le 1+|Y_i|+|\cS_i^*|+|\cR(2,4)|+|\cR^*|\le |\cS|=|\cS^*|+2+|Y_i|+|Y_j|$, which gives, for each $j\in\{2,4\}$, that $|Y|\le |\cS^*|-4+|Y_j|< |Y_j|$, a contradiction. 2. $|Y_2|=1$ and $|Y_4|\ge 2$ (the case $|Y_4|=1$ and $|Y_2|\ge 2$ is handled symmetrically)\ Without loss of generality, $Y_2=\{6\}$. From (\[prop:ylarge\]) we have $\cR(2,5)=\cR(3,5)=\emptyset$, and from (\[prop:ysmall\]) we know that $\cR(3,4)\subseteq\{\{3,4,6\}\}$ and $\cR\setminus\cR^*=\cR(3,4)\cup\cR(2,4)$. Thus, $\cR_4=\cR$. Also, $\cS^*\subseteq\{\{1,2,4\},\{1,2,5\},\{1,3,4\},\{1,3,5\}\}$. Set ${\cal P}=\cS^*\setminus\cS^*_4$. Since $\cR(2,4)\cup\cR(3,4)=\cR\setminus\cR^*\ne\emptyset$, we get from (\[prop:sstar\]) that $|{\cal P}|\le 1$. Thus, $\cI_4=\cI\setminus\left(\{\{1,2,6\},\{1,2,3\}\}\cup{\cal P}\right)$, so $|\cI|\le |\cI_4|+3$. Therefore the family $\cI_4\cup\{\{4\},\{1,4\}\}\cup\{\{4,y\}:y\in Y_4\}\}$ is a star subfamily of $\cH$ of size $|\cI_4|+2+|Y_4|\ge |\cI_4|+4> |\cI|$, a contradiction.\ 3. $|Y_2|=|Y_4|=1$ (so $1\le|Y|\le 2$)\ In this case $|\cS|=4+|\cS^*|$ and $\cR\subseteq\{\{2,3,4\},\{2,4,5\}\}$\ 1. $Y_2=Y_4=Y$\ Here we have $|Y|=1$ so, without loss of generality, $Y=\{6\}$. Then from \[prop:ysmall\] we learn that $\cR(2,5)\subseteq\{\{2,5,6\}\}$, $\cR(3,5)\subseteq\{\{3,5,6\}\}$ and $\cR(3,4)\subseteq\{\{3,4,6\}\}$. Therefore $4=6-2\le |\cR|-|\cR^*| = |\cR\setminus\cR^*|\le 3+|\cR(2,4)|$, and so $|\cR(2,4)|\ge 1$. 
By \[prop:sstar\] we know that $\{1,3,5\}\not\in\cS^*$, and thus $\cS^*\sse \cS^*_2\cup \{\{1,3,4\}\}$.\ 1. $
In \[Sec:intro\] the conditions for obtaining an appropriate SUSY threshold correction to the bottom quark mass in models with third family Yukawa unification were determined by an interpretation of the approximate formula given in \[Eq:common-app\]. We revisit this scenario here to offer a more accurate interpretation. In models with third family Yukawa unification, the SUSY threshold corrections to the bottom quark typically need to be $-\mathcal{O}$(few)%. For $\mu>0$, the common interpretation is $A_t$ needs to be large and negative in order for the chargino-stop contribution to overcome the $B_0^{\widetilde{g}}$ term from the gluino-sbottom contribution. By including the “missing" terms, which are positive, we see that the size of $A_t$ is underestimated when the approximate form of the corrections is used to interpret the size of the parameters. This is particularly true when the squarks are heavy. In this regime, the chargino-stop term is suppressed but the “missing" terms are not and so $A_t$ must be quite large to overcome both the suppression by the heavy stops and also the positive contribution from the missing terms. It has been pointed out in earlier works that light Higgsinos are disfavoured in Yukawa unified GUTs [@Baer:2012jp; @Anandakrishnan:2013tqa], and this can be traced back to \[fig:ch-muM2\], where we see that the corrections from the chargino are small for small $\mu$ and do not compensate for the large gluino corrections. For $\mu<0$, the common interpretation is that the parameters need not be large since the terms in \[Eq:common-app\] already have the needed minus sign. Including the “missing" terms introduces a positive contribution (these terms are not proportional to $\mu$) that is relatively large and so $A_t$ and/or $M_{\widetilde{g}}$ must be larger than expected in order to overcome the additional contributions. 
Higgs couplings to the bottom quark {#higgs-couplings-to-the-bottom-quark .unnumbered} ----------------------------------- The MSSM predicts four new physi
neighbourhoods of the SOM that are described below. 1. [**Gaussian SOM** ]{}\ The kernel neighbourhood is defined by the gaussian function\ $h_{ci}(t) =e^{\LARGE{\left(-{d^2_{ci}}/{2\sigma^2_t}\right)}}, $ where $d_{ci}$ is the distance between the winner unit $c$ and the unit $i$, and $\sigma_t$ is the neighbourhood radius. The results of the classification are shown in figure 2.a. The map is a $25\times25$ network and is trained with 300 epochs. Further increases in map size and epochs did not show any improved results. 2. [**Cutgaussian SOM** ]{}\ The kernel neighbourhood is defined by the cut-gaussian function\ $h_{ci}(t) = e^{\LARGE{\left(-{d^2_{ci}}/{2\sigma^2_t}\right)}}\cdot1\left(\sigma_t-d_{ci}\right). $ The results of the classification are shown in figure 2.b. The map is a $40\times30$ network and is trained with 300 epochs. Further increases in map size and epochs did not show improved results. The cut-gaussian kernel showed better performance than the gaussian kernel. We developed a C++ implementation of SOM with both kernel neighbourhoods. The SOM-trained results are visualized using the u-matrix technique implemented in SOM TOOLBOX 2.0 in the MATLAB environment [@kn:vesa]. Conclusions and Future Work =========================== In this article we classified the Monte Carlo gamma-ray data of the MAGIC experiment using MLP and SOM. Both networks showed good classification results. The advantage of the SOM algorithm is that it needs no labelled training vectors to find the groups in the data set, [*i.e.*]{} it clusters the data set in an automatic way, but the disadvantage of this technique is that it cannot label the data groups found. On the other hand, the MLP, being based on a supervised technique, identifies the group labels, but the training session can be longer. The proposal for future work is to combine the MLP and SOM techniques. The combination of both techniques could yield better results. 
First train the data set with SOM, which yields in a clustered data set then use this data set to train t
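The two neighbourhood kernels above can be sketched directly. A minimal sketch in Python (the paper's own implementation is in C++; the function names here are illustrative, and note that the Gaussian kernel must decay with distance, i.e. the exponent is negative):

```python
import math

def gaussian_kernel(d_ci: float, sigma_t: float) -> float:
    """Gaussian SOM neighbourhood: h_ci(t) = exp(-d_ci^2 / (2 sigma_t^2))."""
    return math.exp(-d_ci ** 2 / (2.0 * sigma_t ** 2))

def cut_gaussian_kernel(d_ci: float, sigma_t: float) -> float:
    """Cut-Gaussian neighbourhood: the Gaussian kernel truncated to zero
    for units farther from the winner than the radius sigma_t."""
    return gaussian_kernel(d_ci, sigma_t) if d_ci <= sigma_t else 0.0
```

The truncation is the only difference between the two variants: the cut-Gaussian kernel leaves units outside the shrinking radius $\sigma_t$ completely untouched at each update.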
putting your name on the title slide. As for the rest of the team, if they are still around, take a team photo with everyone and insert that on the last slide of the presentation. You can then list their names from left to right, however they happen to arrange themselves. Q: Flutter: multiple firebase projects in one app but showing incorrect data The last few days I spent a lot of time reading through several SO questions and tutorials. What I'm trying to achieve is that a user of my flutter app can choose a firebase project and log in with email/password. After the login, obviously, the correct data of the corresponding database should be shown. And that is where I fail. After a while of reading some sites and questions from SO, I went with the following site to get the first part of the login. https://firebase.googleblog.com/2016/12/working-with-multiple-firebase-projects-in-an-android-app.html After working through this article, I was able to successfully log in to my defined firebase projects. How did I know that the login was successful? I compared the user-uids from the projects with the print statement from my app in the console. That was the proof that my configuration for the non-default project is correct. But now to the main problem, which I can't solve. After the login, the data shown is always that of the default firebase project from the google-services.json. For state management, I chose the provider package, as recommended at I/O '19. So inside my main.dart, I wrap the whole application with MultiProvider: Widget build(BuildContext context) { return MultiProvider( providers: [ ChangeNotifierProvider<LoginModel>( builder: (_) => LoginModel(), ), ChangeNotifierProvider<Auth>( builder: (_) => Auth(), ), ], child: MaterialApp( title: 'Breaking News Tool', theme: ThemeData( primarySwatch: Colors.blue, ), home: RootPage(), ), ); } The provided Auth class is a service that con
rt’ is $O(D)$, where $D$ is a diameter of $\mathbf P$) repelling forces allows us to implement Jarvis wrapping algorithm [@jarvis1973identification]. We select a starting point which is extremal point of $\mathbf P$. We pull a rope to other extremal point. We continue until the set $\mathbf P$ is wrapped completely. We tested feasibility of the idea in laboratory experiments [@adamatzky2012slime]. In each experiment we arranged several half-pills in a random fashion near centre of a Petri dish and inoculated an oat flake colonised by Physarum few centimetres away from the set $\mathbf P$. Physarum propagates towards set $\mathbf P$ and starts enveloping the set with its body and the network of protoplasmic tubes. The plasmodium does not propagate inside configuration of pills. The plasmodium completes approximation of a shape by entirely enveloping $\mathbf P$ in a day or two. Computing circuits ================== Attraction-based logical gates ------------------------------ When two growing zones of separate Physarum cells meet they repel if there is a free space to deviate to. If there is no opportunity to deviate the cells merge. This feature is employed in the construction of Boolean logical gates — [not]{}, [or]{} and [and]{} —- in [@tsuda2004robust]. The gates are made of segments of agar gel along which the Physarum propagates. To implement input ‘1’ ([True]{}) in channel $x$ a piece of Physarum is inoculated in $x$ otherwise the input is considered to be ‘0’ ([False]{}). Attractants are placed in the end of the output channels to stimulate growth of the Physarum towards outputs. The Physarum propagates towards closest source of attractants along a shortest path. The gate [or]{} is a $ \begin{smallmatrix} \searrow & & \swarrow \\ & \downarrow & \end{smallmatrix} $ junction. Physarum placed in one of the inputs propagates towards the output. If each input contains the Physarum, the propagating cells merge and appear in the output as if they were a sing
y the day but it doesn't display correct data $VisitsTrends=DB::table('visits') ->select('created_at',\DB::raw('count(*) as total')) -> groupBy('created_at')->get(); dd($VisitsTrends); it just displays data by each time and counts differently A: You may use use Illuminate\Support\Facades\DB; //... $VisitsTrends = Visit::groupBy('date')->get(array( DB::raw('Date(created_at) as date'), DB::raw('COUNT(*) as "total"') )); Q: Referencing the next entry in RDD within a map function I have a stream of <id, action, timestamp, data>s to process. For example, (let us assume there's only 1 id for simplicity) id event timestamp ------------------------------- 1 A 1 1 B 2 1 C 4 1 D 7 1 E 15 1 F 16 Let's say TIMEOUT = 5. Because more than 5 seconds passed after D happened without any further event, I want to map this to a JavaPairDStream with two key : value pairs. id1_1: A 1 B 2 C 4 D 7 and id1_2: E 15 F 16 However, in my anonymous function object, PairFunction that I pass to mapToPair() method, incomingMessages.mapToPair(new PairFunction<String, String, RequestData>() { private static final long serialVersionUID = 1L; @Override public Tuple2<String, RequestData> call(String s) { I cannot reference the data in the next entry. In other words, when I am processing the entry with event D, I cannot look at the data at E. If this was not Spark, I could have simply created an array timeDifferences, store the differences in two adjacent timestamps, and split the array into parts whenever I see a time difference in timeDifferences that is larger than TIMEOUT. (Although, actually there's no need to explicitly create an array) How can I do this in Spark
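Setting Spark's distributed API aside for a moment, the timeout-splitting logic the question asks for can be sketched locally in plain Python (illustrative only; a Spark solution would additionally need to group by id and sort by timestamp within each group):

```python
def split_by_timeout(events, timeout=5):
    """Split time-ordered (event, timestamp) pairs into sessions whenever
    the gap between consecutive timestamps exceeds `timeout`."""
    sessions, current = [], []
    for ev, ts in events:
        if current and ts - current[-1][1] > timeout:
            sessions.append(current)  # close the session at the gap
            current = []
        current.append((ev, ts))
    if current:
        sessions.append(current)
    return sessions

stream = [("A", 1), ("B", 2), ("C", 4), ("D", 7), ("E", 15), ("F", 16)]
# the 8 s gap between D (t=7) and E (t=15) exceeds TIMEOUT = 5, so the
# stream splits into [A B C D] and [E F], matching id1_1 and id1_2 above
```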
d therefore, that $| x- \mu_j| \leq \frac{F_j(x) - F_j(\mu_j)}{M}$. By the DKW inequality and the union bound, with probability at least $1-1/n$, $$\label{eq:dkw.median} \max_{j \in {\widehat{S}}} \|F_{n,j} - F_j \|_\infty \leq \sqrt{\frac{ \log 2kn}{2n} }.$$ Thus, for any $j \in {\widehat{S}}$, $$\left| F_{n,j}( \delta_{(u)}(j)) - F_j( \delta_{(u)}(j))\right| \leq \sqrt{\frac{ \log 2kn}{2n} }.$$ Since $$F_{n,j}( \delta_{(u)}(j)) = \beta_u \leq 1/2 + \frac{1}{n} + \sqrt{\frac{1}{n}\log\left(\frac{2k}{\alpha}\right)} = F_j(\mu_j) + \frac{1}{n} + \sqrt{\frac{1}{n}\log\left(\frac{2k}{\alpha}\right)},$$ using , we conclude that, on the event and provided that $ \frac{1}{n} + \sqrt{\frac{1}{2n}\log\left(\frac{2k}{\alpha}\right)} + \sqrt{ \frac{ \log 2kn}{2n} } \leq \eta M$, $$| \mu_j - \delta_{(u)}(j) | \leq \frac{1}{M} \left( \frac{1}{n} + \sqrt{\frac{1}{2n}\log\left(\frac{2k}{\alpha}\right)} + \sqrt{ \frac{ \log 2kn}{2n} } \right).$$ Similarly, under the same conditions, $$| \mu_j - \delta_{(l)}(j) | \leq \frac{1}{M} \left( \frac{1}{n} + \sqrt{\frac{1}{2n}\log\left(\frac{2k}{\alpha}\right)} + \sqrt{ \frac{ \log 2kn}{2n} } \right).$$ The claim now follows by combining the last two displays. Notice that the result holds uniformly over all $j \in {\widehat{S}}$ and all distributions satisfying the conditions of the theorem. $\Box$ Appendix 3: Proof of the results in ==================================== [**Proof of Lemma \[lemma::est-accuracy\].**]{} The upper bounds are obvious. The lower bound (\[eq::lower1\]) is from Section 4 in [@sackrowitz1986evaluating]. We now show (\[eq::lower2\]). Let $\hat\beta =g(Y)$ be any estimator where $Y=(Y_1,\ldots, Y_n)$. Given any $Y$ and any $w(Y)$, $\hat\beta$ provides an estimate of $\beta(J)$ where $J= w(Y)$. Let $w_j$ be such that $w_j(X)=j$. Then define $\hat\beta = ( g(Y,w_1(Y)),\ldots, g(Y,w_D(Y)))$. Let $w_0(Y) = \operatorname*{argmax}_j |\beta(j)-\hat\beta(j)|$. Then $\mathbb{E}[|\hat \beta(J) - \beta(J)|]= \mathbb
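The DKW radius used in the bound above is easy to compute and to sanity-check numerically. A sketch using only the standard library (helper names are illustrative); the simulated check uses one seeded Uniform(0,1) sample, for which the event in the text holds except with small probability:

```python
import math, random

def dkw_radius(n: int, k: int) -> float:
    """sqrt(log(2kn) / (2n)): by DKW plus a union bound over k coordinates,
    max_j sup_x |F_{n,j}(x) - F_j(x)| <= this with probability >= 1 - 1/n."""
    return math.sqrt(math.log(2 * k * n) / (2 * n))

def sup_distance_uniform(sample) -> float:
    """Kolmogorov-Smirnov statistic sup_x |F_n(x) - x| for Uniform(0,1) data."""
    xs = sorted(sample)
    n = len(xs)
    return max(max(i / n - x, x - (i - 1) / n) for i, x in enumerate(xs, 1))

random.seed(0)
n, k = 2000, 5
sample = [random.random() for _ in range(n)]
assert sup_distance_uniform(sample) <= dkw_radius(n, k)
```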
--- abstract: 'Many deep learning algorithms can be easily fooled with simple adversarial examples. To address the limitations of existing defenses, we devised a probabilistic framework that can generate an exponentially large ensemble of models from a single model with just a linear cost. This framework takes advantage of neural network depth and stochastically decides whether or not to insert noise removal operators such as VAEs between layers. We show empirically the important role that model gradients have when it comes to determining transferability of adversarial examples, and take advantage of this result to demonstrate that it is possible to train models with limited adversarial attack transferability. Additionally, we propose a detection method based on metric learning in order to detect adversarial examples that have no hope of being cleaned of maliciously engineered noise.' author: - | George A. Adam\ Department of Computer Science\ University of Toronto\ Toronto, ON M5S 3G4\ `alex.adam@mail.utoronto.ca`\ Petr Smirnov\ Medical Biophysics\ University of Toronto\ Toronto, ON M5S 3G4\ David Duvenaud\ Department of Computer Science\ University of Toronto\ Toronto, ON M5S 3G4\ Benjamin Haibe-Kains\ Medical Biophysics\ University of Toronto\ Toronto, ON M5S 3G4 Anna Goldenberg\ Department of Computer Science\ University of Toronto\ Toronto, ON M5S 3G4\ bibliography: - 'sample.bib' - 'Zotero.bib' title: | Stochastic Combinatorial Ensembles for\ Defending Against Adversarial Examples --- Introduction ============ Deep Neural Networks (DNNs) perform impressively well in classic machine learning areas such as image classification, segmentation, speech recognition and language translation [@hinton_deep_2012; @krizhevsky_imagenet_2012; @sutskever_sequence_2014]. These results have led to DNNs being increasingly deployed in production settings, including self-driving cars, on-the-fly speech translation, and facial recognition
&f_1,\nonumber\\ -{{\frac{\partial (S_j\psi_j)}{\partial E}}}+\omega\cdot\nabla_x\psi_j+ \Sigma_j\psi_j - K_j\psi={}&f_j, \quad j=2,3, \label{desol10}\end{aligned}$$ holding a.e. on $G\times S\times I$, together with the inflow boundary and initial values $$\begin{aligned} {3} \psi_{|\Gamma_-}&=g && \quad {\rm a.e.\ on}\ \Gamma_-, \label{desol11} \\[2mm] \psi_j(\cdot,\cdot,E_m)&=0\quad && \quad {\rm a.e.\ on}\ G\times S,\ j=2,3. \label{desol12}\end{aligned}$$ The solution $\psi=(\psi_1,\psi_2,\psi_3)$ for this problem can be decomposed as follows. Let $u=(u_1,u_2,u_3)$ be the solution of the problem without collisions $$\begin{aligned} \omega\cdot\nabla_x u_1+\Sigma_1 u_1={}&f_1,\nonumber\\ -{{\frac{\partial (S_ju_j)}{\partial E}}}+\omega\cdot\nabla_x u_j+\Sigma_j u_j ={}& f_j,\quad j=2,3, \label{desol13}\end{aligned}$$ together with the inflow boundary and initial values u\_[|\_-]{}&=g, \[desol14\]\ u\_j(,,E\_m)&=0,j=2,3.\[desol15\] Furthermore, let $w=(w_1,w_2,w_3)$ be the solution of the problem $$\begin{aligned} \omega\cdot\nabla_x w_1+\Sigma_1 w_1-K_1w={}& K_1u\nonumber\\ -{{\frac{\partial (S_jw_j)}{\partial E}}}+\omega\cdot\nabla_x w_j +\Sigma_j w_j-K_jw ={}& K_ju,\quad j=2,3,\label{desol16}\end{aligned}$$ together with *homogeneous* inflow boundary and initial values w\_[|\_-]{}&=0,\[desol17\]\ w\_j(,,E\_m)&=0,j=2,3. \[desol18\] Then we find that $\psi=u+w$ is the solution of -. This corresponds to decomposing the evolution of the particle field $\psi$ obeying the full CSDA Boltzmann transport problem - in terms of the evolution of the *primary (uncollided) particles*, represented by $u$, and of *secondary (collided) particles*, represented by $w$. The method of decomposing $\psi=u+w$ in this way is useful e.g. in constructing numerical solutions, and is known, for example in neutron transport theory, under the name collided-uncollided split (cf. the recent work [@hauck2013coll] and references therein). 
To explain this terminology a bit, notice that the primary, uncollided field $u$ obeys which does n
n{array}{l l} v_i\cdot ({}^tg_{i, i}-\mathrm{Id}_{n_i}) & \quad \textit{if $L_i$ is \textit{free of type I}};\\ \delta_{i-1}v_{i-1}\cdot {}^tg_{i, i-1}+\delta_{i+1}v_{i+1}\cdot {}^tg_{i, i+1} & \quad \textit{if $L_i$ is \textit{bound of type I}}. \end{array} \right.$$ Here, - $v_{i}=(0,\cdots, 0, 1, 0)$ of size $1\times n_{i}$ and $\mathrm{Id}_{n_i}$ is the identity matrix of size $n_i \times n_i$. - $v_{i-1}$ (resp. $v_{i+1}$)$=(0,\cdots, 0, 1)$ of size $1\times n_{i-1}$ (resp. $1\times n_{i+1}$).\ Then each entry of the above matrix lies in the ideal $(\pi)$. If $L_i$ is *of type II*, then $A_i=B_i$ and $B_i^{\perp}=A_i^{\perp}$ so that there is no contribution.\ Construction of ** {#m} ------------------ We define a functor from the category of commutative flat $A$-algebras to the category of monoids as follows. For any commutative flat $A$-algebra $R$, let $$\underline{M}(R) \subset \{m \in \mathrm{End}_{B\otimes_AR}(L \otimes_A R)\}$$ to be the set of $m \in \mathrm{End}_{B\otimes_AR}(L \otimes_A R)$ satisfying the following conditions: - $m$ stabilizes $A_i\otimes_A R,B_i\otimes_A R,W_i\otimes_A R,X_i\otimes_A R$ for all $i$ and $Z_i\otimes_A R$ for all even integer $i$. - $m$ induces the identity on $A_i\otimes_A R/ B_i\otimes_A R$ for all $i$. - $m$ induces the identity on $W_i\otimes_A R/(X_i\cap Z_i)\otimes_A R$ for all even integer $i$. - $m$ induces the identity on $B_i^{\perp}\otimes_A R/ A_i^{\perp}\otimes_A R$ for all odd integer $i$. \[r31\] We give another description for the functor $\underline{M}$. Let us define a functor from the category of commutative flat $A$-algebras to the category of rings as follows: For any commutative flat $A$-algebra $R$, define $$\underline{M}^{\prime}(R) \subset \{m \in \mathrm{End}_{B\otimes_AR}(L \otimes_A R) \}$$ to be the set of $m\in \mathrm{End}_{B\otimes_AR}(L \otimes_A R)$ satisfying the following conditions: - $m$ stabilizes $A_i\otimes_A R,B_i\otimes_A R,W_i\otimes_A R,X_i\otimes_A R$ f
t( \sqrt{ \frac{\log k}{n}} \right)$, a fact made precise in the following result. \[cor:accuracy.LOCO\] With probability at least $ 1- \frac{1}{n}$, the maximal length of the sides of the hyper-rectangle $\tilde{C}_n$ is bounded by $$C \left(2(A + \tau) + \epsilon \right) \sqrt{ \frac{\log k}{n} \left( 1 + \frac{(4\log k + 2 \log n)^{1/2}}{n^{1/2}}\right)},$$ for a universal constant $C>0$, uniformly over all $P \in \mathcal{P}_n^{\mathrm{LOCO}}$. ### The Bootstrap {#the-bootstrap .unnumbered} We now demonstrate the coverage of the paired bootstrap version of the confidence set for $\gamma_{{\widehat{S}}}$ given above in . The bootstrap distribution is the empirical measure associated to the $n$ triplets $\left\{ (X_i,Y_i,\xi_i), i \in \mathcal{I}_{2,n} \right\}$ and conditionally on $\mathcal{D}_{1,n}$. Let $\hat{\gamma}^*_{{\widehat{S}}}$ denote the estimator of the LOCO parameters of the form computed from an i.i.d. sample of size $n$ drawn from the bootstrap distribution. Notice that $\mathbb{E}\left[ \hat{\gamma}^*_{{\widehat{S}}} \Big| (X_i,Y_i,\xi_i), i \in \mathcal{I}_{2,n} \right] = \hat{\gamma}_{{\widehat{S}}}$. 
For a given $\alpha \in (0,1)$, let $\hat{t}^*_\alpha$ be the smallest positive number such that $$\mathbb{P}\left( \sqrt{n} \| \hat{\gamma}^*_{{\widehat{S}}} - \hat{\gamma}_{{\widehat{S}}}\| \leq \hat{t}^*_\alpha \Big| (X_i,Y_i,\xi_i), i \in \mathcal{I}_{2,n} \right) \geq 1 - \alpha.$$ Next, let $(\tilde{t}^*_j, j \in {\widehat{S}})$ be such that $$\mathbb{P}\left( \sqrt{n} | \hat{\gamma}^*_{{\widehat{S}}}(j) - \hat{\gamma}_{{\widehat{S}}} (j) \leq \tilde{t}^*_j, \forall j \Big| (X_i,Y_i,\xi_i), i \in \mathcal{I}_{2,n} \right) \geq 1 - \alpha.$$ In particular, using the union bound, each $\tilde{t}^*_j$ can be chosen to be the largest positive number such that $$\mathbb{P}\left( \sqrt{n} | \hat{\gamma}^*_{{\widehat{S}}}(j) - \hat{\gamma}_{{\widehat{S}}} (j) > \tilde{t}^*_j, \Big| (X_i,Y_i,\xi_i), i \in \mathcal{I}_{2,n} \right) \leq \frac{\alpha}{k}.$$
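In practice $\hat{t}^*_\alpha$ is read off from Monte Carlo bootstrap replicates of the statistic $\sqrt{n}\|\hat{\gamma}^*_{{\widehat{S}}} - \hat{\gamma}_{{\widehat{S}}}\|$. A minimal sketch of the empirical-quantile step (the helper name is illustrative; `boot_stats` would hold the replicated sup-norm statistics):

```python
import math

def bootstrap_threshold(boot_stats, alpha):
    """Smallest t such that at least a (1 - alpha) fraction of the bootstrap
    replicates of sqrt(n) * ||gamma*_S - gamma_hat_S|| fall at or below t."""
    s = sorted(boot_stats)
    k = math.ceil((1 - alpha) * len(s))  # index of the required order statistic
    return s[k - 1]
```

The coordinate-wise thresholds $\tilde{t}^*_j$ follow the same recipe applied to each $\sqrt{n}|\hat{\gamma}^*_{{\widehat{S}}}(j) - \hat{\gamma}_{{\widehat{S}}}(j)|$ at level $\alpha/k$.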
ng characteristic (ROC) curve to evaluate the multiple ML approaches on the same dataset ([Table 4](#table4){ref-type="table"}). We found that the adaptive boosting neural network achieved the largest area under the ROC curve on the air quality data, tree bag on the weather data, and random forest on the combined weather and air quality data. In general, we discovered that the predictive performance of the ML approaches improves as more data variables are included. ###### Evaluation of machine learning approaches using receiver operating characteristic. Machine learning approaches Weather, AUC^a^ Air quality, AUC Weather and air quality, AUC ---------------------------------- ----------------- ------------------ ------------------------------ Generalized linear model 0.538 0.682 0.758 Support vector machine 0.500 0.494 0.621 Adaptive boosting neural network 0.611 0.698 0.734 Tree bag 0.714 0.680 0.780 Random forest 0.669 0.692 0.809 ^a^AUC: area under the curve. Discussion ========== Clinical Significance --------------------- Recent studies have shown that weather and air pollution have been a major problem leading to an increase in daily deaths and hospital admissions for chronic respiratory illnesses \[[@ref3]-[@ref5],[@ref27],[@ref28]\]. We focused on the distribution of daily patient visits for 2 years (ie, 2016 and 2017) ([Figure 2](#figure2){ref-type="fig"}). It is worth noting that peak days are more dominant from October to March, which indicates that haze is a strong predictor, as these months are mostly colder in Guangzhou. Thus, it is important to recognize the peak OED visits for respiratory conditions. ![Histogram of patients visiting outpatient and emergency rooms.](medinform_v8i3e13075_fig2){#figure2} Previous studies mainly focused on the peak event forecasting ED visits for patients with one
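The AUC values in the table estimate a simple probability: that a randomly chosen positive case is scored above a randomly chosen negative one. A minimal sketch of that rank-based (Mann-Whitney) computation, independent of any ML library:

```python
def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney statistic:
    P(score of a random positive > score of a random negative),
    counting ties as one half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```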
ate for $\mathscr{C}(X|\mathcal{O}, \mathcal{H})$, we need to determine an optimization *strategy* for picking the next set of parameters to test. ![The optimization of the evaporation stage of creating a BEC using the complex 16 parameter scheme. The first 20 evaluations are an initial training run using a simple Nelder-Mead algorithm. The machine learning algorithm (green) then quickly optimizes to BEC. The insets show the different regimes that the experiment goes through, from a large, completely thermal cloud through to the sharp edged BEC.](figure2.pdf){width="\columnwidth"} Consider the following homogeneous strategies: we could test parameters that minimize $M_{\hat{\mathscr{C}}}(X)$, but this strategy can get trapped in local minima; or we could test parameters that maximize $\Sigma_{\hat{\mathscr{C}}}^2(X)$ (i.e. where we are most uncertain), but this may require a large number of trials to map the space and would not prioritize refinement of the minima. We chose to implement an inhomogeneous strategy that repeatedly *sweeps* between these two extremes by minimizing a biased cost function: $B_{\hat{\mathscr{C}}}(X) \equiv b M_{\hat{\mathscr{C}}}(X) - (1 - b) \Sigma_{\hat{\mathscr{C}}}(X)$, where the value of $b$ is linearly increased from $0$ to $1$ in a cycle of length $Q$. This makes the learner change strategy from gathering maximum information ($b=0$) to looking for a new minimum, moving from a risk-seeking (small $b$) to a risk-neutral ($b=1$) preference. During testing with synthetic data, we found sweeping was more robust and efficient than the homogeneous approaches. When we minimized $B_{\hat{\mathscr{C}}}(X)$ we also put bounds on the search, set to 20% of each parameter's maximum-minimum range, relative to the last best measured $X$. We call these bounds a *leash*, as they restricted how fast the learner could change the parameters but did not stop it from exploring the full space (similar to trust-regions [@conn_trust_2000; @yuan_review_2000]). 
This was a technical requirement for our experiment: w
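Under the definitions above, the biased cost and the sweep of $b$ amount to only a few lines. A sketch with illustrative names (the model mean $M$ and uncertainty $\Sigma$ would come from the fitted learner):

```python
def biased_cost(mean: float, std: float, b: float) -> float:
    """B(X) = b * M(X) - (1 - b) * Sigma(X): minimising it at b = 0 seeks
    maximum uncertainty (exploration); at b = 1, the lowest predicted cost."""
    return b * mean - (1.0 - b) * std

def sweep_b(iteration: int, Q: int) -> float:
    """Linearly increase b from 0 to 1 over a cycle of length Q, restarting
    at the start of each cycle."""
    return (iteration % Q) / (Q - 1) if Q > 1 else 1.0
```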
nd choline performed. Paired samples from 20 dogs were available for urine selenium analysis. Those 20 samples were selected from the original 31 dogs in order of percentage weight lost (i.e., the 20 dogs with the greatest percentage of weight loss had their samples analysed). Weight loss characteristics for dogs included in the study did not differ from those of dogs that completed the protocol but were not ultimately included (data not shown). In the 31 dogs finally included in the study, percentage weight loss was 28.3% (16.0-40.1%) of starting body weight (SBW), over a period of 250 days (91--674 days); therefore, the median rate of weight loss was 0.8% (0.3-1.4%) SBW/week (Table [2](#T2){ref-type="table"}). Neither signalment nor weight loss outcomes differed between the 26 dogs having choline and amino acid analysis and those where it was not measured (*P* \> 0.12 for all; data not shown); similarly, signalment and weight loss outcomes did not differ between the 20 dogs where urinary selenium was measured and those where it was not measured (*P* \> 0.28 for all, data not shown). ###### Summary of weight loss in the study dogs ***Criterion*** **Result** --------------------------------------- ------------------------------ *Age (months)* 66 (12 to 132) *Sex* 17 NM, 2 F, 12 NF *Breed*
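The reported median rate follows directly from the median loss and duration (a quick arithmetic check; numbers taken from the text above):

```python
def weekly_loss_rate(pct_loss: float, days: float) -> float:
    """Rate of weight loss in % of starting body weight (SBW) per week."""
    return pct_loss / days * 7.0

# 28.3% SBW over 250 days gives roughly the reported 0.8% SBW/week
```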
1.1 8.6 $f_4$ $1603.071\pm0.004$ 623.8 1.1 8.1 $f_1^+?$ $3437.384\pm0.005$ 290.9 11.1 1.0 11.6 $f_5$ $7726.540\pm0.003$ 129.4 1.0 13.0 $f_4^-?$ $1595.481\pm0.004$ 626.8 7.6 0.8 6.1 $2f_1$ $6852.604\pm0.005$ 145.9 0.6 8.5 $f_6$ $3146.670\pm0.007$ 317.8 0.5 5.0 $f_7$ $3276.485\pm0.008$ 305.2 0.4 4.6 ---------- -------------------- ------- ------ ------ ------- : G 207-9: frequency content of the 2007 dataset. The errors were calculated by Monte Carlo simulations. $\delta f$ denotes the frequency differences of the closely spaced frequencies to $f_1$, $f_2$ or $f_4$.[]{data-label="table:g207freq"} ![image](g207prewh.eps){width="17.5cm"} ![G 207-9: comparison of the frequencies obtained in 1975 (*red dashed lines*) and in 2007 (*black solid lines*). The amplitudes of the 1975 observations are from the paper of @2006ApJ...640..956M.[]{data-label="fig:g207oldnew"}](g207oldnewfrek.eps){width="\columnwidth"} We checked the frequency content of the whole dataset by averaging three consecutive data points of the $10\,$s measurements as a test. That is, we created a new, more homogeneous dataset mimicking $30\,$s exposure times. We then compared the frequency solutions of this $30\,$s dataset with the frequencies of the original mixed $10$–$30\,$s data. Finally, we accepted as the frequencies characterizing the whole light curve the frequencies that could be determined in both datasets, that is, without 1d$^{-1}$ differences. This resulted in a reduced list of 12 frequencies. We list them in Table \[table:g207freq\]. Several closely spaced frequencies around three of the main frequencies ($f_1$, $f_2$ and $f_4$) still remain, with separations between $7.6$ and $11.7\,\mu$Hz. In the case of $f_1$ and $f_2$, these separations are close to $11.574\,\mu$Hz (1d$^{-1}$). It is possible that at least some of these frequency components are res
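The check described above, averaging three consecutive $10\,$s points to mimic $30\,$s exposures, is a simple rebinning (a sketch; the helper name is illustrative):

```python
def rebin(samples, n=3):
    """Average each block of n consecutive samples, mimicking an n-times
    longer exposure; a trailing partial block is dropped."""
    return [sum(samples[i:i + n]) / n
            for i in range(0, len(samples) - n + 1, n)]
```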
kappa_j(\kappa_j-1)} \sum_{a =1}^{\ell_j} \lambda_{j,a}(\kappa_j-p_{j,a}) \sum_{i<\i \in S_j} (e_i - e_{\i})(e_i - e_{\i})^\top \nonumber\\ &=& 2\gamma e^{-6b} L, \label{eq:positionl_expec}\end{aligned}$$ where we used $\gamma\leq (1-p_{j,\ell_j}/\kappa_j )^{\alpha_1-2}$ which follows for the definition in . follows from the definition of Laplacian $L $, defined for the comparison graph $\H $ in Definition \[def:comparison\_graph2\]. Using $\lambda_2(L) = (\alpha /(d-1)) \sum_{j = 1}^n \tau_j\ell_j$ from , we get the desired bound $\lambda_2(\E[M]) \geq 2 \gamma e^{-6b} (\alpha /(d-1)) \sum_{j = 1}^n \tau_j\ell_j$. Next we need to upper bound $\|\sum_{j =1}^n\E[(M^{j})^2]\|$ to bound the deviation of $M$ from its expectation. To this end, we prove an upper bound on $\P[\sigma_j^{-1}(i) = p_{j,a} \; | \;i \in S_j ]$ in the following lemma. \[lem:posl\_upperbound\] Under the hypotheses of Lemma \[lem:posl\_lowerbound\], $$\begin{aligned} \label{eq:posl_upperbound_eq} \P_{\theta}\Big[ \sigma^{-1}(i) = \ell \Big] \;\;\leq\;\; \frac{e^{6b}}{\kappa} \bigg(1 - \frac{\ell}{\kappa+\alpha_{i,\ell,\theta}} \bigg)^{\alpha_{i,\ell,\theta} -1} \;\; \leq \;\; \frac{e^{6b}}{\kappa-\ell} \;, \end{aligned}$$ where $0 \leq \alpha_{i,\ell,\theta} = {\left \lfloor{\widetilde{\alpha}_{i,\ell,\theta}} \right \rfloor}$, and $\widetilde{\alpha}_{i,\ell,\theta}$ is, $$\begin{aligned} \label{eq:posl_upper1} \widetilde{\alpha}_{i,\ell,\theta} \;\; \equiv \;\; \min_{\ell' \in [\ell]} \min_{\substack{\Omega \in S\setminus\{i\} \\ : |\Omega| = \kappa-\ell'+1}} \Bigg\{\frac{\exp(\theta_i)}{\big(\sum_{j\in \Omega} \exp(\theta_j)\big)/|\Omega|} \Bigg \}\;.\end{aligned}$$ In the worst case, $e^{-2b} \leq \widetilde{\alpha}_{i,\ell,\theta} \leq e^{2b}$. Note that $\alpha_{i,\ell,\theta} =0$ gives the worst upper bound. 
Therefore using Equation , for all $i \in [d]$, we have, $$\begin{aligned} \label{eq:hess_posl_16} \P\Big[\sigma_j^{-1}(i) \in \cP_j \Big] \leq \min \Bigg\{1, \frac{e^{6b}\ell_j}{\kappa_j - p_{j,\ell_j}} \Bigg\}
1 & i \\ 1 & -i \end{array} \right)},$$we observe that the matrices $\eta_{a_j}$ defined by (\[eq:3.28b\]) all vanish. Consequently, all trace conditions (\[eq:3.34\]) used to obtain solutions of the form (\[eq:4.10\]) are identically satisfied. We still have to consider the trace conditions (\[eq:3.30\]) where the matrices $\mathcal{A}^\mu$, $\mu=1,2,3$, are given by $$\mathcal{A}^1={\left( \begin{array}{cccc} 0 & -1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{array} \right)},\qquad \mathcal{A}^2={\left( \begin{array}{cccc} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array} \right)},\qquad \mathcal{A}^3={\left( \begin{array}{cccc} 0 & 0 & 0 & -1 \\ 0 & 0 & -1 & 0 \end{array} \right)}.$$Conditions (\[eq:3.30\]) take the form \[eq:4.11\] &(a)+--+i[( +++ )]{},\ &(b)++++i[( +-- )]{},\ &(c)+--+i[( +++ )]{}. Condition (\[eq:4.11\].a) implies that \[eq:4.12\] e[( )]{}=-m[( )]{}, while conditions (\[eq:4.11\].b) and (\[eq:4.11\].c) imply that the solutions for velocities $u$ and $v$ take the particular form \[eq:4.13\] u=h(r)+|[h]{}(|[r]{}),v=i[( h(r)-|[h]{}(|[r]{}) )]{}, where $r$ and $\bar{r}$ are given by (\[eq:4.9\]). Note that the most general solution for $u$ and $v$ of the trace conditions (\[eq:3.30\]-\[eq:3.32\]) is exactly the one given by the expression (\[eq:4.13\]). The function $h$ and its complex conjugate are also determined using condition (\[eq:4.12\]) and the PDE (\[eq:4.5\].a). This allows us to also determine the functions $\phi$ and $\psi$. For computational purposes, it is convenient to introduce velocities of the form (\[eq:4.13\]) in equation (\[eq:4.1\].c) in order to determine the angle $\theta$ in the form \[eq:4.14\] =[( i )]{}, where we denote $h^{(n)}=d^nh/dr^n$. Next, we substitute (\[eq:4.14\]) into equation (\[eq:4.2\]) in order to determine the function $h$ and its complex conjugate. Proceeding in this way, we find that $h$ and $\bar{h}$ satisfy the third-order O
for $i\in \mathcal{H}$. The proof to show that $\varphi=\prod_i \varphi_i$ is surjective is similar to that of Theorem 4.5 in [@C2] explained from the last paragraph of page 485 to the first paragraph of page 486 and so we skip it. Now it suffices to prove Equation (\[e41\]) made at the beginning of the proof, which is the next lemma. \[l46\] $\mathrm{Ker~}\varphi $ is smooth and unipotent of dimension $l$. In addition, the number of connected components of $\mathrm{Ker~}\varphi $ is $2^\beta$. Here, - $l$ is such that $$\textit{$l$ + $\sum_{i:\mathrm{even}}$ (dim $~\mathrm{O}(B_i/Z_i, \bar{q}_i)_{\mathrm{red}}$) + $\sum_{i:\mathrm{odd}}$ (dim $~\mathrm{Sp}(B_i/Y_i, h_i)$) = dim $\tilde{G} ~(=n^2)$.}$$ - $\beta$ is the number of integers $j$ such that $L_j$ is of type I and $L_{j+2}, L_{j+3}, L_{j+4}$ (resp. $L_{j-1}, L_{j+1},$ $L_{j+2}, L_{j+3}$) are of type II if $j$ is even (resp. odd). Recall that the zero lattice with $i$ even is *of type II*. If $i$ is odd, then the zero lattice is *of type II* only when both $L_{i-1}$ and $L_{i+1}$ are *of type II*. The proof is postponed to Appendix \[App:AppendixA\]. \[r47\] We summarize the description of Im $\varphi_i$ as follows. $$\begin{array}{c|c|c} \mathrm{type~of~lattice~} L_i & i & \mathrm{Im~} \varphi_i \\ \hline \textit{II, free}& even & \mathrm{O}(n_i, \bar{q}_i)\\ \textit{II, bound}& even & \mathrm{SO}(n_i+1, \bar{q}_i)\\ \textit{I}^o & even & \mathrm{SO}(n_i, \bar{q}_i)\\ \textit{I}^e & even & \mathrm{SO}(n_i-1, \bar{q}_i)\\ \textit{II} & odd & \mathrm{Sp}(n_i, h_i)\\ \textit{I, bound} & odd & \mathrm{Sp}(n_i, h_i)\\ \textit{I, free} & odd & \mathrm{Sp}(n_i-2, h_i)\\ \end{array}$$ Let $i$ be even and $L_i$ be *free of type II*. Then $B_i/Z_i=L_i/\pi L_i$ is a $\kappa$-vector space with even dimension. We now consider the question of whether the orthogonal group $\mathrm{O}(B_i/Z_i, \bar{q}_i) ~(=\mathrm{O}(n_i, \bar{q}_i))$ is split or nonsplit. By Theorem \[210\], we have tha
frac{1}{a}} \right)} dz + (k_{1} + k_{2}) b c \beta \int_{0}^{c / b} \exp{\left( - z^{\frac{1}{a}} \right)} dz. \end{aligned}$$ When $c < 0$, we have $$\begin{aligned} \operatorname{{E}}[\Pe(Z + c)] &= k_{2} b \beta \int_{- \infty}^{0} (- b z - c) \exp{\left( - (-z)^{\frac{1}{a}} \right)} dz + k_{2} b \beta \int_{0}^{- c / b} (- b z - c) \exp{\left( - z^{\frac{1}{a}} \right)} dz \\ &\quad + k_{1} b \beta \int_{- c / b}^{+\infty} (b z + c) \exp{\left( - z^{\frac{1}{a}} \right)} dz \allowdisplaybreaks \\ &= k_{2} b \beta \int_{0}^{+\infty} (b z - c) \exp{\left( - z^{\frac{1}{a}} \right)} dz + k_{2} b \beta \int_{0}^{- c / b} (- b z - c) \exp{\left( - z^{\frac{1}{a}} \right)} dz \\ &\quad + k_{1} b \beta \int_{- c / b}^{+\infty} (b z + c) \exp{\left( - z^{\frac{1}{a}} \right)} dz \allowdisplaybreaks \\ &= (k_{1} + k_{2}) b^{2} \beta \int_{- c / b}^{+\infty} z \exp{\left( - z^{\frac{1}{a}} \right)} dz \\ &\quad + (k_{1} - k_{2}) b c \beta \int_{0}^{+\infty} \exp{\left( - z^{\frac{1}{a}} \right)} dz - (k_{1} + k_{2}) b c \beta \int_{0}^{- c / b} \exp{\left( - z^{\frac{1}{a}} \right)} dz. \end{aligned}$$ From the above, for any $c \in \mathbb{R}$, we have $$\begin{aligned} \operatorname{{E}}[\Pe(Z + c)] &= (k_{1} + k_{2}) b^{2} \beta \int_{\lvert c / b \rvert}^{+\infty} z \exp{\left( - z^{\frac{1}{a}} \right)} dz \\ &\quad + (k_{1} - k_{2}) b c \beta \int_{0}^{+\infty} \exp{\left( - z^{\frac{1}{a}} \right)} dz + (k_{1} + k_{2}) b \lvert c \rvert \beta \int_{0}^{\lvert c / b \rvert} \exp{\left( - z^{\frac{1}{a}} \right)} dz. 
\end{aligned}$$ Now set $t := z^{\frac{1}{a}}$ to get $$\begin{aligned} \operatorname{{E}}[\Pe(Z + c)] &= (k_{1} + k_{2}) a b^{2} \beta \int_{c'}^{+\infty} t^{2a - 1} e^{-t} dt \\ &\quad + (k_{1} - k_{2}) a b c \beta \int_{0}^{+\infty} t^{a - 1} e^{-t} dt + (k_{1} + k_{2}) a b \lvert c \rvert \beta \int_{0}^{c'} t^{a - 1} e^{-t} dt \allowdisplaybreaks \\ &= (k_{1} + k_{2}) a b^{2} \beta \G(2a, c') + (k_{1} - k_{2}) a b c \beta \G(a) + (k_{1} + k_{2}) a b \lvert c \rvert \beta \
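The substitution $t = z^{\frac{1}{a}}$ (so $dz = a\,t^{a-1}\,dt$) behind the last display can be checked numerically; for example, $\int_0^{\infty} e^{-z^{1/a}}\,dz = a\,\Gamma(a)$. A sketch using only the standard library (a crude midpoint rule with a finite upper cutoff; the helper name is illustrative):

```python
import math

def integral_exp_power(a: float, upper: float = 200.0, steps: int = 200_000) -> float:
    """Midpoint-rule estimate of the integral of exp(-z**(1/a)) over [0, upper]."""
    h = upper / steps
    return sum(math.exp(-((i + 0.5) * h) ** (1.0 / a)) * h for i in range(steps))

# for a = 2: the integral of exp(-sqrt(z)) over [0, inf) is 2 * Gamma(2) = 2
assert abs(integral_exp_power(2.0) - 2.0 * math.gamma(2.0)) < 1e-3
```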
attering of particles from the given external region $G_{\rm e}$. The flux $u=(u_1,u_2,u_3)$ contributed by this inflow source is governed by the system of equations $$\begin{gathered} \omega\cdot\nabla_x u_1+\Sigma_1 u_1-K_{1}u=0,\label{ref9}\\ -{{\frac{\partial (S_{j}u_j)}{\partial E}}}+\omega\cdot\nabla_x u_j+\Sigma_{j} u_j-K_{j}u=0,\quad j=2,3,\label{ref10}\end{gathered}$$ on $G\times S\times I$, such that on $\Gamma_-$, \[ref11\] u\_[|\_-]{}=R\_[b]{}(\_[|\_+]{}), and for almost every $(x,\omega)\in G\times S$, \[ref12\] u\_j(x,,E\_[m]{})=0,j=2,3. We point out that if $\psi\in D(R_{\rm b})$ and if $u$ solves the problem -, and is such that \[nrep\] R\_[b]{}(u\_[|\_+]{})=0, holds, i.e. no particles are backscattered repeatedly from $G_{\rm e}$ into $G$, then for $\varphi:=\psi+u$ we have $$\varphi_{|\Gamma_-} =\psi_{|\Gamma_-}+u_{|\Gamma_-} =g+R_{\rm b}(\psi_{|\Gamma_+}) =g+R_{\rm b}((\psi+u)_{|\Gamma_+}) =g+R_{\rm b}(\varphi_{|\Gamma_+}),$$ and hence $\varphi=(\varphi_1,\varphi_2,\varphi_3)=\psi+u$ is a solution of the following problem on $G\times S\times I$, with boundary and initial conditions holding on $\Gamma_-$ and $G\times S$, respectively, $$\begin{gathered} \omega\cdot\nabla_x \varphi_1+\Sigma_1 \varphi_1-K_{1}\varphi=0, \label{ref13}\\[2mm] -{{\frac{\partial (S_{j}\varphi_j)}{\partial E}}}+\omega\cdot\nabla_x \varphi_j+\Sigma_{j}\varphi_j-K_{j}\varphi=0,\label{ref14} \\[2mm] \varphi_{|\Gamma_-}=R_{\rm b}(\varphi_{|\Gamma_+})+g, \label{ref15} \\[2mm] \varphi_j(\cdot,\cdot,E_{\rm m})=0, \label{ref16}\end{gathered}$$ where $j=2,3$. This shows that solving for $\varphi$ (in place of $\psi$) the problem , , under boundary condition , is equivalent to solving first for $\psi$ the problem -, and then for $u$ the problem - under the additional condition . We will not explore the question of the existence of solutions for the problem - (or for -) in this paper. 
Adjoint Transport Problem {#adjoint}
=========================

We will briefly discuss the adjoint version of the transport problem (\[csda1a\])
large as those based on *F*, and *R*- factors based on ALL data will be even larger.

Fractional atomic coordinates and isotropic or equivalent isotropic displacement parameters (Å^2^) {#tablewrapcoords}
==================================================================================================

  ------ ---------------- ---------------- ---------------- --------------------
         *x*              *y*              *z*              *U*~iso~\*/*U*~eq~
  W1     0.685156 (8)     0.527538 (12)    0.590934 (6)     0.00959 (5)
  W2     0.828154 (8)     0.329261 (13)    0.458854 (7)     0.01383 (5)
  Ag1    0.701283 (16)    0.55717 (2)      0.490031 (13)    0.01578 (7)
  Ag2    0.797387 (15)    0.41531 (2)      0.598751 (13)    0.01418 (7)
  Ag3    0.776781 (15)    0.20725 (2)      0.532723 (12)    0.01373 (7)
  Ag4    0.693052 (15)    0.34050 (2)      0.401396 (12)    0.01168 (7)
  S1     0.60897 (5)      0.52790 (8)      0.52055 (4)      0.0157 (2)
  S2     0.77015 (5)      0.59208 (8)      0.57931 (4)      0.0129 (2)
  S3     0.70064 (5)      0.37936 (8)      0.62017 (4)      0.0136 (2)
  S4     0.65859 (5)      0.61359 (8)      0.64489 (4)      0.0154 (2)
  S5     0.75460 (5)      0.37745 (8)      0.49394 (4)      0.0136 (2)
  S6     0.84065 (6)      0.17432 (9)      0.46535 (5)      0.0247 (3)
  S7     0.79958 (5)      0.36970 (9)      0.37903 (4)      0.0156 (2)
  S8     0.91157 (6)
(1-2x^2+5x^4) A_{22} \,, \nonumber \\ \left[ \rule{0pt}{9pt} \ldots \right] &= &2 A_{00} + (1+3x^2) (A_{11} + A_{22}) + 2 \, x (9x^2-5) \nonumber \\ & \times & \cos(\phi_{12})A_{12} + 2 \sqrt{2} (3x^2 - 1) \cos(\phi_{20}) A_{20} \, , \nonumber \end{aligned}$$ for $\{s_{1/2},p_{1/2},d_{3/2} \}$, $\{s_{1/2},p_{3/2},d_{5/2} \}$, and $\{s_{1/2},p_{3/2},$ $d_{3/2} \}$ sets of states respectively. The asymmetric term ($\sim x$) is present here either for $s$-$p$ interference only or for $p$-$d$ only or for neither. The angular distributions in energy bins (Fig. \[fig:distrib\]) provide a good indication that a [*narrow*]{} $p_{1/2}$ state is not populated in the reaction. Fig. \[fig:qualit\] shows qualitatively what happens with the angular distribution if there is a narrow resonance. The phase shift changes across the narrow resonance to a value close to $\pi$, and the character of the angular distribution should change drastically within this energy range. No trend of this kind is observed in Fig. \[fig:distrib\]. The phase shift for the [*broad*]{} $1/2^-$ state changes slowly and hardly achieves $\pi /2$ in our calculations. This allows us to explain the smooth behaviour of the asymmetry up to 3 MeV. ![Schematic illustration of possible behavior of angular distributions (shown by inserts for angles $\theta_{^8\text{He}}$ from $0^{\circ}$ to $180^{\circ}$) due to $s_{1/2}$-$p_{1/2}$ interference around a narrow resonance for different phases $\phi_{10}^{(0)}$.[]{data-label="fig:qualit"}](qualit){width="45.00000%"} The existing experimental data can be regarded as not contradicting our results. In Refs. [@set87; @boh99] the narrow states were observed with low statistics (15–40 events/state). If the narrow states really exist they should be observable even at such low counting rates. However, simulations show that if the cross section behavior is smooth, “statistically driven” narrow structures are quite probable in such a situation. A look at the data of Ref.
[@gol03] also shows that states (which should have ana
t.AspNetCore.Mvc.Infrastructure.ActionMethodExecutor.TaskOfIActionResultExecutor.Execute at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.<InvokeActionMethodAsync>g__Awaited|12_0 at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.<InvokeNextActionFilterAsync>g__Awaited|10_0 at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.Rethrow at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.Next at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.InvokeInnerFilterAsync at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeFilterPipelineAsync>g__Awaited|19_0 at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeAsync>g__Awaited|17_0 at Microsoft.AspNetCore.Routing.EndpointMiddleware.<Invoke>g__AwaitRequestTask|6_0 at Microsoft.AspNetCore.Authorization.AuthorizationMiddleware.Invoke at Microsoft.AspNetCore.Server.IIS.Core.IISHttpContextOfT`1.ProcessRequestAsync edit 2: I checked Debug console in Kudos for eventlog.xml file to find exception. (Please tell me if there's a better way to check for server exceptions.) Here's the actual exception: System.Net.NetworkInformation.PingException: An exception occurred during a Ping request. ---> System.ComponentModel.Win32Exception (5): Access is denied. 
at System.Net.NetworkInformation.Ping.InitialiseIcmpHandle() at System.Net.NetworkInformation.Ping.DoSendPingCore(IPAddress address, Byte[] buffer, Int32 timeout, PingOptions options, Boolean isAsync) at System.Net.NetworkInformation.Ping.GetAddressAndSend(String hostNameOrAddress, Int32 timeout, Byte[] buffer, PingOptions options) --- End of inner exception stack trace --- at System.Net.NetworkInformation.Ping.GetAddressAndSend(String hostNameOrAddress, Int32 timeout, Byte[] buffer, PingOptions options) at System.Net.NetworkInformation.Ping.Send(String hostNameOrAddress, Int32 timeout, Byte[] buffer, PingOptions options) at System.Net.NetworkInformation.Ping.Send(String hostNameOrAddress) at TestApi.Contr
be $\frac{1-C}{(1+\delta)C_1}$ for any given constant $0<C<1$. Clearly, for such an $\varepsilon$, we have $1-(1+\delta)\varepsilon C_1 = C$ and $$1-\frac{1+\delta}{4\varepsilon\alpha} \ge C \quad \Longleftrightarrow \quad \alpha \ge \frac{(1+\delta)^2C_1}{4(1-C)^2}.$$ Combining the above and letting $\alpha \ge \frac{(1+\delta)^2C_1}{4(1-C)^2}$, we have $$\begin{aligned} A(v, v) &= \sum_{K\in\mathcal{T}_h} \|\nabla v\|_K^2 - (1+\delta) \sum_{e\in\mathcal{E}_h}\langle\{\nabla v\},\, [v]\rangle_e + \alpha \sum_{e\in\mathcal{E}_h}\frac{1}{h_e} \|[v]\|_e^2 \\ &\ge \bigg( 1-(1+\delta)\varepsilon C_1\bigg)\sum_{K\in\mathcal{T}_h} \|\nabla v\|_K^2 + \bigg(1-\frac{1+\delta}{4\varepsilon\alpha}\bigg)\alpha \sum_{e\in\mathcal{E}_h}\frac{1}{h_e} \|[v]\|_e^2 \\ &\ge C \left( \sum_{K\in\mathcal{T}_h} \|\nabla v\|_K^2 + \alpha \sum_{e\in\mathcal{E}_h}\frac{1}{h_e} \|[v]\|_e^2 \right)\\ &\ge \frac{C}{1+C_1} \3bar v\3bar^2. \end{aligned}$$ Lemma \[lem:wellposedness\] guarantees the existence and uniqueness of the solution to Equation (\[eq:dg\]). In the rest of this paper, we shall always assume $\alpha \ge \frac{(1+\delta)^2C_1}{4(1-C)^2}$. Let $u$ and $u_h$ be the solutions to equations (\[eq:ellipticeq\]) and (\[eq:dg\]), respectively. By subtracting (\[eq:dg-exactsol\]) from (\[eq:ellipticeq\]), one gets the standard orthogonality property of the error, $$A(u-u_h,\, v_h) = 0\qquad\textrm{for all } v_h\in V_h.$$ Then clearly, for all $\chi_h\in V_h$, $$\begin{aligned} \3bar \chi_h - u_h\3bar^2 &\le \frac{1+C_1}{C}A(\chi_h - u_h,\, \chi_h - u_h) \\ &= \frac{1+C_1}{C}A(\chi_h - u,\, \chi_h - u_h) \\ &\le \frac{(1+C_1)(1+\alpha)}{C\alpha} \3bar \chi_h - u\3bar \, \3bar \chi_h - u_h\3bar. \end{aligned}$$ Then, using the triangle inequality, $$\begin{aligned} \3bar u-u_h\3bar &\le \inf_{\chi_h\in V_h} \bigg( \3bar u-\chi_h\3bar + \3bar \chi_h-u_h\3bar \bigg) \\ &\le \bigg(1 + \frac{(1+C_1)(1+\alpha)}{C\alpha} \bigg) \inf_{\chi_h\in V_h}\3bar u-\chi_h\3bar.
\end{aligned}$$ Combining this with assumption **I3**
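The first inequality in the coercivity estimate above is obtained edge by edge from Cauchy–Schwarz, Young's inequality, and a trace inequality; a sketch of this omitted step (using the same $\varepsilon$, $C_1$, and $\delta$ as above, and writing $K_e$, a notation introduced here, for the elements adjacent to edge $e$):

$$\begin{aligned}
(1+\delta)\,\big|\langle\{\nabla v\},\,[v]\rangle_e\big|
  &\le (1+\delta)\,\Big(h_e^{1/2}\,\|\{\nabla v\}\|_e\Big)\Big(h_e^{-1/2}\,\|[v]\|_e\Big) \\
  &\le (1+\delta)\,\varepsilon\, h_e\,\|\{\nabla v\}\|_e^2
     + \frac{1+\delta}{4\varepsilon\alpha}\cdot\frac{\alpha}{h_e}\,\|[v]\|_e^2 \\
  &\le (1+\delta)\,\varepsilon\, C_1 \sum_{K\in K_e}\|\nabla v\|_K^2
     + \frac{1+\delta}{4\varepsilon\alpha}\cdot\frac{\alpha}{h_e}\,\|[v]\|_e^2,
\end{aligned}$$

where the last step assumes the trace inequality $h_e\|\{\nabla v\}\|_e^2\le C_1\sum_{K\in K_e}\|\nabla v\|_K^2$. Summing over $e\in\mathcal{E}_h$ yields the coefficients $1-(1+\delta)\varepsilon C_1$ and $1-\frac{1+\delta}{4\varepsilon\alpha}$ appearing in the bound for $A(v,v)$.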
rly, the *persistent* state of frame ${\mathbf{f}}$ is $\phi = {{{{\operatorname{absc}{{\mathbf{f}}}}}}\negmedspace\mid\negmedspace{{{\operatorname{dom}{\Phi}}}}}$. Process concepts interpret into systems language. The reactive space ${\prod{\Psi}}$ contains the system stimulus. Sequential conjointness allows circumstantial interpretation of the choice space ${\prod{\Phi}}$. It is the system’s response in the context of the frame ending condition. To place ${\prod{\Phi}}$ in the context of the frame’s reactive state, the Cartesian product ${\prod{\Phi}} = \prod ({{\Psi}\negmedspace\mid\negmedspace{{{\operatorname{dom}{\Phi}}}}}) = (\thinspace\prod\Psi) \mid {{\operatorname{dom}{\Phi}}}$ \[by theorem \[T:CHC\_RSTR\_EQ\_RSTR\_CHC\]\] is the persistent state space. Using this nomenclature, sequential conjointness is summarized by the statement that each frame’s response becomes the next frame’s persistent state, symbolically ${{\operatorname{ord}{{\mathbf{f}}_{\,i}}}} = {{{{\operatorname{absc}{{\mathbf{f}}_{\,i+1}}}}}\negmedspace\mid\negmedspace{{{\operatorname{dom}{\Phi}}}}}$.

Procedure
---------

The notion of a procedure is useful for portraying a process frame as a transformation from the stimulus space to the response space. To distinguish such transformations from other mappings, we use the special term [functionality]{} and stipulate that the collection of functionalities is a finite set called a catalog. The term [catalog]{} will later be applied to resource sets identified with an automaton.

### Functionality {#S:FUNCTIONALITY}

The functionality generalizes the frame. If ${\mathbf{f}}_i = (\psi_i, \phi_i)$ is the $i^{\text{th}}$ process frame, this concept permits writing $\phi_i = {{\mathit{f}}}_i(\psi_i)$, where ${\mathit{f}}_i$ is some functionality belonging to catalog ${\mathscr{F}}$. \[D:FUNCTIONALITY\] A *functionality* is a mapping whose domain and codomain are choice spaces (definition \[D:CHOICE\_SPACE\]), with the codomain a subspace (definition \[D:SUBSPACE\]) of the domain.
\[L:BASIS\_FUNCTIONALITY\] Let $\langle
X$ into $Y$ as the prototype of a preimage continuous map. Clearly the topology of $Y$ may also contain open sets not in $e(X)$, and any subset in $Y-e(X)$ may be added to the topology of $Y$ without altering the preimage topology of $X$: *open sets of $Y$ not in $e(X)$ may be neglected in obtaining the preimage topology* as $e^{-}(Y-e(X))=\emptyset$. The final topology on a quotient set by the quotient map $Q\!:(X,\mathcal{U})\rightarrow X/\sim$, which is just the collection of $Q$-images of the $Q$-saturated open sets of $X$, known as the *quotient topology of $X/\sim$,* is the basic example of the image topology and the resulting space $(X/\sim,\textrm{FT}\{\mathcal{U};Q\})$ is called the *quotient space.* We take the generalization $q\!:(X,\mathcal{U})\rightarrow(Y,\textrm{FT}\{\mathcal{U};q\})$ of $Q$ as the prototype of an image continuous function. The following results are specifically useful in dealing with initial and final topologies; compare the corresponding results for open maps given later. **Theorem A2.1.** *Let $(X,\mathcal{U})$ and $(Y_{1},\mathcal{V}_{1})$ be topological spaces and let $X_{1}$ be a set. If $f\!:X_{1}\rightarrow(Y_{1},\mathcal{V}_{1})$, $q\!:(X,\mathcal{U})\rightarrow X_{1}$, and $h=f\circ q\!:(X,\mathcal{U})\rightarrow(Y_{1},\mathcal{V}_{1})$ are functions with the topology $\mathcal{U}_{1}$ of $X_{1}$ given by* $\textrm{FT}\{\mathcal{U};q\}$, *then* \(a) *$f$ is continuous iff $h$ is continuous.* \(b) *$f$ is image continuous iff* $\mathcal{V}_{1}=\textrm{FT}\{\mathcal{U};h\}$.$\qquad\square$ **Theorem A2.2.** *Let $(Y,\mathcal{V})$ and $(X_{1},\mathcal{U}_{1})$ be topological spaces and let $Y_{1}$ be a set.
If $f\!:(X_{1},\mathcal{U}_{1})\rightarrow Y_{1}$, $e\!:Y_{1}\rightarrow(Y,\mathcal{V})$ and $g=e\circ f\!:(X_{1},\mathcal{U}_{1})\rightarrow(Y,\mathcal{V})$ are functions with the topology $\mathcal{V}_{1}$ of $Y_{1}$ given by* $\textrm{IT}\{ e;\mathcal{V}\}$*, then* \(a) *$f$ is continuous iff $g$ is continuous.* \(b) *$f$ is preimage continuous iff* $\mathcal{U}_{
$C^\infty$-functions $h$ and $A:V_{y_0}\times S\to {\mathbb{R}}$ such that $$\begin{aligned} & A_f(x,\omega):={1\over{S_0(x)}}\omega\cdot \nabla_x f(x)=-h(x,\omega)A(x,\omega) \label{ass-on-G-1} \\[2mm] & (V_{y_0}\times S)\cap\Gamma_+'=\{(y,\omega)\in\partial G\times S\ |\ h(y,\omega)>0\}, \label{ass-on-G-2} \\[2mm] & (V_{y_0}\times S)\cap\Gamma_-'=\{(y,\omega)\in\partial G\times S\ |\ h(y,\omega)<0\}, \label{ass-on-G-3} \\[2mm] & (V_{y_0}\times S)\cap\Gamma_0'=\{(y,\omega)\in\partial G\times S\ |\ h(y,\omega)=0\}. \label{ass-on-G-4}\end{aligned}$$ Let $$A_h(x,\omega):={1\over{S_0(x)}}(\omega\cdot \nabla_x h)(x,\omega).$$ The additional assumption is $$A(y,\omega)\ \text{and}\ A_h(y,\omega)\ \text{are positive definite on}\ (V_{y_0}\times S)\cap\Gamma_0'. \label{add-ass}$$ For example, in the case of the ball $G=B(0,1)\subset{\mathbb{R}}^3$ we can choose $f(x)=1-{\left\Vert x\right\Vert}^2$. Then $A_f(y,\omega)=-2{1\over{S_0(y)}}(y\cdot \omega)$ and we choose $h(y,\omega)=2(y\cdot\omega)$, $A(y,\omega)={1\over{S_0(y)}}>0$. Moreover, we find that $A_h(y,\omega)=2{1\over{S_0(y)}}>0$. Hence the stated assumptions can be met for the case of the ball $G=B(0,1)$. The following result holds in this context. Suppose that $\partial G$ is in the class $C^1$ and that the assumption (\[add-ass\]) holds. Furthermore, suppose that $S_0\in C^1(\ol G\times I)$ such that $$\begin{aligned} S_0>0\quad {\rm on}\ \ol G\times I. \label{flp2}\end{aligned}$$ Then $$C_0^\infty(G\times S\times I^\circ)\subset R(I+P_0) \label{inf4}$$ and $$\langle P_0\psi,\psi\rangle_{L_2(G\times S\times I)}\geq 0,\quad \psi\in D(P_0). \label{inf5}$$ Since our equation is scalar valued it is symmetric in the sense of [@nishitani96]. We choose the linear subspace of [@nishitani96] by $$M(y,\omega)=\begin{cases} {\mathbb{R}},\quad & (y,\omega)\in \Gamma'_+\cup\Gamma'_0\\ \{0\},\quad & (y,\omega)\in\Gamma'_- \end{cases}.$$ Then we find that $M(y,\omega)$ is maximal positive in the sense of [@nishitani96].
By Theorem 5.5 of [@nishitani96], for any $f\in C_0^\infty(G\times S\times I^\circ)$ the equation ${{\frac{\partial \Phi}{\partial E}}}+Q(x,\omega,E,D)\Phi={\bf f}$ ha
elation $\langle f|Le \rangle = \langle L^\dag f|e \rangle^\ast$): $${{\mathcal C}}_{AB} = \big\{ L\colon {\cal H}_A \to {\cal H}_B\; \mbox{bounded antilinear}\, \big| \mathop{\mbox{tr}}(L^\dag L) < \infty \big\}. \label{eq:CBC}$$ ${{\mathcal C}}_{AB}$ forms a Hilbert space (the scalar product is $(L,L') = \mathop{\mbox{tr}}(L'^\dag L)$, which is conjugate linear in the first argument). It is shown in Ref. [@jmp41_638] that (\[eq:statdef\]) establishes a unitary isomorphism between ${{\mathcal C}}_{AB}$ and ${\cal H}_A\otimes {\cal H}_B$ in a natural way. Every pure bipartite state $|\Phi\rangle_{AB}$ can uniquely be described by an antilinear operator $L_{|\Phi\rangle}\in {{\mathcal C}}_{AB}$ such that $\mathop{\mbox{tr}}(L_{|\Phi\rangle}^\dag L_{|\Phi\rangle})=1$. Conversely, every such $L$ describes a pure bipartite state. Now let us characterize maximally entangled states and possible relative state representations. By a maximally entangled state we mean a pure bipartite state with maximally mixed partial trace (\[eq:parctrac\]). The partial traces of bipartite states over systems $A$ and $B$ are $LL^\dag$ and $L^\dag L$ respectively. Thus the state (\[eq:statdef\]) is maximally entangled if and only if $LL^\dag=N^{-1} I_B$ and $L^\dag L=N^{-1} I_A$. This is equivalent to saying that $\sqrt{N}L$ is antiunitary. On the other hand, the operator $L_{|\Phi\rangle}$ gives rise to a relative state representation if and only if $\{ L_{|\Phi\rangle}|i\rangle \}_{i=0,\ldots ,N-1}$ forms an orthogonal basis on ${\cal H}_B$, which means that $\sqrt{N}L_{|\Phi\rangle}$ is antiunitary. Relative state representations can be defined via maximally entangled states.

Probabilistic teleportation with partially entangled states {#sec:teleport}
===========================================================

In this section we apply the antilinear description of bipartite states introduced in Section \[sec:formalism\] to quantum teleportation.
Suppose that system A prepared in the unknown state $|\Phi\rangle_A$ is to be teleported, an
frak z$. It follows that $$\label{semisimple restriction} m_{T_G,V_{G,\lambda}}(\beta) = \begin{cases} m_{T_{G_{\operatorname{ss}}},V_{G_{\operatorname{ss}},\lambda_{\operatorname{ss}}}}(\beta_{\operatorname{ss}}) & \text{if $\lambda_z = \beta_z$},\\ 0 & \text{otherwise}, \end{cases}$$ where we write $\mu_{\operatorname{ss}}$ and $\mu_z$ for the restriction of a weight $\mu$ to the Cartan subalgebra of $[\mathfrak g,\mathfrak g]$ and to $\mathfrak z$, respectively. These multiplicities can therefore be evaluated by using . The Finite Difference Formula {#section:finite difference formula} ============================= Let $V$ be an arbitrary finite-dimensional representation of the compact, connected Lie group $G$. Clearly, we can compute the weight multiplicity function $m_{T_G,V}$ from the highest weight multiplicity function $m_{G,V}$ by using any of the classical formulas and , or by evaluating the vector partition function described in . By “inverting” the Weyl character formula, the converse can also be achieved: \[steinberg lemma\] The highest weight and weight multiplicity function of a finite-dimensional $G$-representation $V$ are related by $$m_{G,V} = {\left.\left(\prod_{\alpha \in R_{G,+}} - D_\alpha \right) m_{T_G,V}\vphantom{\big|}\right|_{\Lambda^*_{G,+}}},$$ where $(D_\alpha m)(\lambda) = m(\lambda + \alpha) - m(\lambda)$ is the finite-difference operator in direction $\alpha$. Note that any two of the operators $D_\alpha$ commute, so that their product is independent of the order of multiplication. By linearity, it suffices to establish the lemma for a single irreducible representation $V = V_{G,\lambda}$ of highest weight $\lambda$. 
The Weyl character formula can be rewritten in the form $$\label{weyl for steinberg} \prod_{\alpha > 0} \left( 1 - e^{-\alpha} \right) \operatorname{ch}V_{G,\lambda} = \sum_{w \in W_G} \det(w) \, e^{w(\lambda + \rho) - \rho}.$$ If we identify elements in ${\mathbb Z}[\Lambda^*_G]$ with functions on the weight lattice, applying finite-dif
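As a rank-one illustration (an example added here for concreteness, not part of the original argument): for $G=SU(2)$ there is a single positive root $\alpha$, and the lemma reduces to

$$m_{G,V}(\lambda) \;=\; (-D_\alpha\, m_{T_G,V})(\lambda) \;=\; m_{T_G,V}(\lambda) - m_{T_G,V}(\lambda+\alpha).$$

For the irreducible representation with weights $-n,-n+2,\dots,n$, each of multiplicity one, the right-hand side equals $1$ at $\lambda=n$ and $0$ at every other dominant weight, recovering the fact that this representation has the single highest weight $n$.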
…like this: a memDC is created, and a bitmap is selected into it. In the function intended for drawing (or even in several functions) we draw a pile of objects onto this memDC: for example, we fill the background and draw a hundred rectangles, ten ellipses, and some labels. Once the picture is ready, we call InvalidateRect for the program window. This leads to WM_PAINT being sent. Inside the WM_PAINT handler we obtain the current window context from BeginPaint and blit the memDC onto it. That is all; nothing else is done in WM_PAINT.

Q: open a file in a non existing directory c versus c++

I tried to open a file in a non existing directory in C++ and for some reason my application didn't crash. I find that a bit "weird" because I'm used to C, where the equivalent program does crash. Here is my C++ version followed by the equivalent C version:

$ cat main.cpp
#include <fstream>
int main(int argc, char **argv) {
    std::ofstream f("/home/oren/NON_EXISTING_DIR/file.txt");
    f << "lorem ipsum\n";
    f.close();
    printf("all good\n");
}
$ g++ main.cpp -o main
$ ./main
all good

When I try the equivalent thing with C I get a segfault:

$ cat main.c
#include <stdio.h>
int main(int argc, char **argv) {
    FILE *fl = fopen("/home/oren/NON_EXISTING_DIR/file.txt","w+t");
    fprintf(fl,"lorem ipsum\n");
    fclose(fl);
    printf("all good\n");
}
$ gcc main.c -o main
$ ./main
Segmentation fault (core dumped)

Why is that?

A: "I'm used to C", but then "[...] in C++" - well, C++ is not C, but that's irrelevant here. First, have a look at this - writes to an unopened stream (which is the case here) are ignored after setting the failbit. In your C example, however, the problem is that if fopen fails, it returns a NULL pointer. Then you pass it to fp
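A defensive version of the C program checks `fopen`'s return value before using the handle. The sketch below is illustrative (the helper name `safe_write` is introduced here, not taken from the question):

```c
#include <stdio.h>

/* Returns 0 on success, -1 if the file could not be opened.
   fopen() yields NULL when the target directory does not exist;
   passing that NULL straight to fprintf() is what made the
   original C program segfault. */
int safe_write(const char *path, const char *text) {
    FILE *fl = fopen(path, "w");
    if (fl == NULL) {
        perror("fopen");
        return -1;
    }
    fprintf(fl, "%s", text);
    fclose(fl);
    return 0;
}
```

In spirit this matches the C++ stream behaviour: the unopened `ofstream` silently sets `failbit` and ignores writes, while here the failure is surfaced explicitly to the caller.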
ert g\Vert _{\infty }.$$ Since $\eta -\kappa >d$, it follows that for every $m_0\geq 1$ (recall that $t<1$), $$\begin{aligned} d_{0}(\mu ^{\eta ,\kappa },\mu ^{\eta ,\kappa ,m_{0}}) \leq \sum_{m\geq m_{0}+1}\frac{(c_\kappa \rho)^m}{m!} \leq \frac{(c_\kappa \rho)^{m_0}}{{m_0}!}\, e^{c_\kappa \rho}.\end{aligned}$$ So, for $m\geq 1$ we have, for every $l$, $$\begin{aligned} \sup_{m\geq 1}d_{0}(\mu ^{\eta ,\kappa },\mu ^{\eta ,\kappa ,m})\times \theta _{l,t}(m)^{r} &\leq \frac{1}{(\lambda t)^{\theta _{0}(l+d+2\theta _{1})r}}\times \sup_{m\geq 1}\frac{(c_\kappa \rho Q^r)^{m}}{{m}!}\, e^{c_\kappa \rho}\nonumber\\ &\leq \frac{e^{c_\kappa \rho (1+Q^r)}}{(\lambda t)^{\theta _{0}(l+d+2\theta _{1})r}}. \label{J9-bis}\end{aligned}$$ We now use Lemma \[REG\] and we get $\mu ^{\eta ,\kappa }(dx,dy)=p^{\eta ,\kappa }(x,y)dxdy$ with $p^{\eta ,\kappa }\in C^{\infty }({\mathbb{R}}^{d}\times {\mathbb{R}}^{d}).$ And one concludes that $\overline{P}_{t}(x,dy)=\overline{p}_{t}(x,y)dxdy$ with $\overline{p}_{t}\in C^{\infty }({\mathbb{R}}^{d}\times {\mathbb{R}}^{d}).$ We will now obtain estimates of $\overline{p}_{t}.$ We fix $h\in {\mathbb{N}}$ (to be chosen sufficiently large, in a moment) and we recall that in (\[reg5\]) we have defined $\rho _{h}=(q+2d/p_{\ast })/2h$ (in our case $k=0$ and we work on ${\mathbb{R}}^{d}\times {\mathbb{R}}^{d}\sim {\mathbb{R}}^{2d}$). So, with the notation from (\[reg11\]) (with $n_\ast=1$) $$C_{h,1 }(\varepsilon )=\frac{{e^{c_\kappa \rho (1+Q^{\rho_h+\varepsilon})}}}{(\lambda t)^{\theta _{0}(2h+q+d+2\theta _{1})(\rho _{h}+\varepsilon)} }.$$ We have used here (\[J9-bis\]) with $l=2h+q$ and $r=\rho _{h}+\varepsilon .$ Then by (\[reg12\]) with $n_{\ast }=1 $, for every $\delta >0$, $$\left\Vert p^{\eta ,\kappa }\right\Vert _{q,p}\leq C(\Theta +\theta _{2h+q,t}^{\rho _{h}(1+\delta )}(1 )+C_{h,1 }(\varepsilon )).$$ Taking $h$ sufficiently large we have $$C_{h,1 }(\varepsilon )\leq \frac{e^{2c_\kappa \rho }}{(\lambda t)^{\theta _{0}(q+2d/p_{\ast })(1+\delta )}}.$$ and, for $\delta >0$,
$$\theta _{2h+q,t}^{\
presence of neutrino masses, we get an additional source of weak CP-violation coming from the complex phase of the PMNS matrix. In complete analogy with the quark sector, we can construct new CP-violating flavor structures which tune the PMNS-induced quark and lepton EDMs. In this case, quark EDMs have a bubble topology whereas lepton EDMs have a rainbow topology. For instance, the dominant diagrams for the PMNS-induced quark and lepton EDMs are shown in figure \[fig:DiracEDMs\]. ![PMNS-induced quark (on the left) and lepton (on the right) EDMs[]{data-label="fig:DiracEDMs"}](figures/FigDiracEDMs) They are tuned respectively by $J_{\mathcal{CP}}^{Dirac}$ and $Im(\textbf{X}_{e}^{Dirac})^{11}$, where $$\begin{aligned} J_{\mathcal{CP}}^{Dirac}= & \frac{1}{2i}\det\left[Y_{\nu}^{\dagger}Y_{\nu},Y_{e}^{\dagger}Y_{e}\right]\\ \textbf{X}_{e}^{Dirac}= & \left[Y_{\nu}^{\dagger}Y_{\nu},Y_{\nu}^{\dagger}Y_{\nu}Y_{e}^{\dagger}Y_{e}Y_{\nu}^{\dagger}Y_{\nu}\right]. \label{eq:XeDirac}\end{aligned}$$ In this scenario, $Im(\textbf{X}_{e}^{Dirac})^{11}$ is 11 orders of magnitude larger than $J_{\mathcal{CP}}^{Dirac}$ and they are correlated (strictly proportional).

Majorana neutrino masses
------------------------

Another way of generating neutrino masses is to consider Majorana masses. In this mechanism there are no additional RH neutrinos; we directly get a gauge-invariant but lepton-number-violating mass term for the left-handed (LH) neutrinos.
Indeed, we add to the SM Yukawa Lagrangian the effective dimension-five Weinberg operator: $$\mathcal{L}_{Yukawa}=\mathcal{L}_{Yukawa}^{SM}-\frac{1}{2v}(L^{I}H)(\Upsilon_{\nu})^{IJ}(L^{J}H)+h.c.,$$ which after spontaneous symmetry breaking collapses to a Majorana mass term for the LH neutrinos: $$\frac{1}{2v}(L^{I}H)(\Upsilon_{\nu})^{IJ}(L^{J}H)\overset{SSB}{\longrightarrow}\frac{v}{2}(\Upsilon_{\nu})^{IJ}\nu_{L}^{I}\nu_{L}^{J}.$$ $\Upsilon_{\nu}$ (a 3$\times$3 matrix in flavor space) is a new flavor structure purely of the Majorana type. In this model, we must redefine
rgence in topological spaces with a proof of the following theorem, which demonstrates the relationship that “eventually in” and “frequently in” bear to each other; Eq. (\[Eqn: net adh\]) below is the net-counterpart of the filter equation (\[Eqn: filter adh\]). **Theorem A1.5.** *If $\chi$ is a net in a topological space $X$, then* $x\in\textrm{adh}(\chi)$ *iff some subnet $\zeta(\beta)=\chi(\sigma(\beta))$ of $\chi(\alpha)$, with $\alpha\in\mathbb{D}$ and $\beta\in\mathbb{E}$, converges in $X$ to $x$; thus* $$\textrm{adh}(\chi)=\{ x\in X\!:(\exists\textrm{ a subnet }\zeta\preceq\chi\textrm{ in }X)(\zeta\rightarrow x)\}.\qquad\square\label{Eqn: net adh}$$ **Proof.** *Necessity.* Let $x\in\textrm{adh}(\chi)$. Define a subnet function $\sigma\!:\,_{\mathbb{D}}N_{\alpha}\rightarrow\mathbb{D}$ by $\sigma(N_{\alpha},\alpha)=\alpha$ where $_{\mathbb{D}}N_{\alpha}$ is the directed set of Eq. (\[Eqn: DirectedIndexed\]): (SN1) and (SN2) are quite evidently satisfied according to Eq. (\[Eqn: DirectionIndexed\]). Proceeding as in the proof of the preceding theorem, it follows that $x_{\beta}=\chi(\sigma(N_{\alpha},\alpha))=\zeta(N_{\alpha},\alpha)\rightarrow x$ is the required converging subnet that exists from Eq. (\[Eqn: adh net1\]) and the fact that $\chi(\mathbb{R}_{\alpha})\bigcap N_{\alpha}\neq\emptyset$ for every $N_{\alpha}\in\mathcal{N}_{x}$, by hypothesis. *Sufficiency.* Assume now that $\chi$ has a subnet $\zeta(N_{\alpha},\alpha)$ that converges to $x$. If $\chi$ does not adhere at $x$, there is a neighbourhood $N_{\alpha}$ of $x$ not frequented by it, in which case $\chi$ must be eventually in $X-N_{\alpha}$. Then $\zeta(N_{\alpha},\alpha)$ is also eventually in $X-N_{\alpha}$ so that $\zeta$ cannot be eventually in $N_{\alpha}$, a contradiction of the hypothesis that $\zeta(N_{\alpha},\alpha)\rightarrow x$.[^29]$\qquad\blacksquare$ Eqs.
(\[Eqn: net closure\]) and (\[Eqn: net adh\]) imply that the closure of a subset $A$ of $X$ is the class of $X$-adherences of all the (sub)nets of $X$ that are
confined phase. The expectation value $\langle \varphi\rangle$ in the center-broken deconfined phase is given by the transition point between the decreasing part of the potential for small $\varphi$ and the flat region in the middle of the plot. It can also be explicitly computed from . In the center-symmetric confined phase it is just given by the minimum at $\varphi=\pi$.\ ![Full effective potential $\hat V_{\rm eff}$, normalised to 0 at $\varphi =0$[]{data-label="fig:Veff"}](VeffforT.eps "fig:"){width="8cm"}\ The temperature-dependence of the order parameter $L[\langle A_0\rangle ] = \cos(\langle \varphi \rangle / 2)$ is shown in Fig. \[fig:LofT\], and we observe a second-order phase transition from the confined to the deconfined phase at a critical temperature $$\label{eq:Tc} T_c = 305^{+ 40}_{-55}\, {\rm MeV},\qquad \quad {T_c}/{\sqrt{\sigma}} =0.69^{+0.04}_{-0.12}\,,$$ with the string tension $\sqrt{\sigma}=440$ MeV. The corresponding value on the lattice is ${T_c}/{\sqrt{\sigma}}=0.709$, [@Fingberg:1992ju], and agrees within the errors with our result. The estimate of the systematic error in is dominated by that of the uncertainty of the identification of $k_{\rm phys}$, see Appendix \[app:match\]. We would also like to comment on the difference of the temperature-dependence of $L[\langle A_0\rangle ]$ depicted in Fig. \[fig:LofT\] and that of the Polyakov loop $\langle L[A_0]\rangle$. It has been shown in section \[sec:QCDinPol\] that in the confined phase they both vanish and both are non-zero in the deconfined phase. However, the Jensen inequality entails that the present observable $L[\langle A_0\rangle ]$ takes bigger values than the Polyakov loop $\langle L[A_0]\rangle$, which is in agreement with lattice results. ![Temperature dependence of the Polyakov loop $L[\langle A_0\rangle]=\cos(\langle \varphi \rangle / 2)$ in $SU(2)$[]{data-label="fig:LofT"}](ptrans.eps "fig:"){width="8cm"}\ The critical physics should not depend on this issue.
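The Jensen-inequality step can be made explicit (a one-line check added here; it uses that $\varphi/2$ ranges over $[0,\pi/2]$, where the cosine is concave):

$$\langle L[A_0]\rangle \,=\, \big\langle \cos(\varphi/2)\big\rangle \,\le\, \cos\!\big(\langle\varphi\rangle/2\big) \,=\, L[\langle A_0\rangle],$$

so the observable built from $\langle A_0\rangle$ indeed bounds the Polyakov loop $\langle L[A_0]\rangle$ from above, consistent with the lattice comparison.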
Here we compute the critical exponent $\nu$, a quantity we
th Tarski’s theory by reinterpreting them as theories of metalinguistic concepts that are different from truth (in the case of physical QL, the concept of *empirical justification* in QM). Secondly, we observe that our interpretation has some consequences that are intuitively satisfactory. For instance, for every state $S\in \mathcal{S}$, it attributes a justification value to every af in $\phi _{AD}^{Q}$, while it is well known that there are formulas in physical QL which have no truth value according to the standard interpretation of QL (Sec. 2.4).

Some remarks on a possible calculus for $\mathcal{L}_{QD}^{P}$
--------------------------------------------------------------

One may obviously wonder whether a calculus can be given for the language $\mathcal{L}_{QD}^{P}$ which is *pragmatically correct* (*p-correct*) and *pragmatically complete* (*p-complete*). This is not a difficult task if we limit ourselves to the general lattice structure of $(\phi _{AD}^{Q}/\approx ,\prec )$. Indeed, a set of axioms and/or inference rules which endow $\phi _{AD}^{Q}/\approx $ with the structure of an orthomodular lattice can easily be obtained by using the formal correspondence introduced in Sec. 3.7, since this correspondence allows one to translate the axioms and/or inference rules that are usually stated in order to provide a calculus for orthomodular QL into $\phi _{AD}^{Q}$ (of course, all the afs produced by this translation are p-valid afs of $\mathcal{L}_{QD}^{P}$). Here is a sample set of axioms of this kind (where, of course, $\delta $, $\delta _{1}$, $\delta _{2}$ and $\delta _{3}$ are afs of $\phi _{AD}^{Q}$) obtained by translating a set of rules provided by Dalla Chiara and Giuntini.$^{(32)}$

A$_{1}$. $\delta I_{Q}\delta $.

A$_{2}$. $($ $\delta _{1}K$ $\delta _{2})I_{Q}\delta _{1}$.

A$_{3}$. $($ $\delta _{1}K$ $\delta _{2})I_{Q}\delta _{2}$.

A$_{4}$. $\delta I_{Q}(NN\delta )$.

A$_{5}$. $(NN\delta )I_{Q}\delta $.

A$_{6}$.
$((\delta _{1}I_{Q}\delta _{2})K(\delta _{1}I_{Q}\delta _{3}))I_{Q}(\delta _{1}I_{Q}(\
qual to
$$\begin{gathered}
\alpha_1 \{ \dot xyx^3 \} +\alpha_2 \{ y\dot x x^3 \} +\alpha_3 \{ \dot xxyx^2 \} +\alpha_4 \{ \dot xyx^3 \} +\alpha_5 \{ yx^2\dot xx \} +\alpha_6 \{ y\dot xx^3 \} \\
- \alpha_1 \{ \dot xyxy^2 \} - \alpha_2 \{ y\dot xxy^2 \} - \alpha_3 \{ \dot x xy^3 \} - \alpha_4 \{ \dot xy^3x \} - \alpha_5 \{ y^3\dot xx \} - \alpha_6 \{ y\dot xy^2x \} \\
+\beta_1 \{ xy\dot x x^2 \} +\beta_2 \{ yx \dot x x^2 \} +\beta_3 \{ x\dot xyx^2 \} +\beta_4 \{ xyx^2 \dot x \} +\beta_5 \{ yx^3\dot x \} +\beta_6 \{ yx^3\dot x \} \\
- \beta_1 \{ xy\dot xy^2 \} - \beta_2 \{ yx \dot xy^2 \} - \beta_3 \{ x\dot xy^3 \} - \beta_4 \{ xy^3 \dot x \} - \beta_5 \{ y^3x\dot x \} - \beta_6 \{ yxy^2\dot x \} \\
+\gamma_1 \{ xyx\dot x x \} +\gamma_2 \{ yx^2\dot xx \} +\gamma_3 \{ x^2y\dot xx \} +\gamma_4 \{ xy\dot xx^2 \} +\gamma_5 \{ y\dot xx^3 \} +\gamma_6 \{ yx\dot xx^2 \} \\
+\gamma_1 \{ xyx^2\dot x \} +\gamma_2 \{ yx^3\dot x \} +\gamma_3 \{ x^2yx\dot x \} +\gamma_4 \{ xyx\dot xx \} +\gamma_5 \{ yx\dot xx^2 \} +\gamma_6 \{ yx^2\dot xx \} \\
-\gamma_1 \{ xyx\dot yy \} -\gamma_2 \{ yx^2\dot yy \} -\gamma_3 \{ x^2y\dot yy \} -\gamma_4 \{ xy\dot yyx \} -\gamma_5 \{ y\dot yyx^2 \} -\gamma_6 \{ yx\dot yyx \} \\
-\gamma_1 \{ xyxy\dot y \} -\gamma_2 \{ yx^2y\dot y \} -\gamma_3 \{ x^2y^2\dot y \} -\gamma_4 \{ xy^2\dot yx \} -\gamma_5 \{ y^2\dot yx^2 \} -\gamma_6 \{ yxy\dot yx \}
\end{gathered}$$
$$\begin{gathered}
+\delta_1\{\dot xxy^3\}+\delta_1\{x\dot xy^3\}-\delta_1\{\dot yy^4\}-\delta_1\{y\dot yy^3\} \\
+\delta_2\{y\dot xxy^2\}+\delta_2\{yx\dot xy^2\}-\delta_2\{y\dot y y^3\}-\delta_2\{y^2\dot y y^2\} \\
+\delta_3\{\dot xx^3y\}+\delta_3\{x\dot xx^2y\}-\delta_3\{\dot yyx^2y\}-\delta_3\{y\dot yx^2y\} \\
-\delta_3\{\dot xxy^3\}-\delta_3\{x\dot xy^3\}+\delta_3\{\dot yy^4\}+\delta_3\{y\dot yy^3\} \\
+\delta_4\{x^2\dot xxy\}+\delta_4\{x^3\dot xy\}-\delta_4\{x^2\dot yy^2\}-\delta_4\{x^2y\dot yy\} \\
-\delta_4\{y^2\dot xxy\}-\delta_4\{y^2x\dot xy\}+\delta_4\{y^2\dot yy^2\}+\delta_4\{y^3\dot yy\}
\end{gathered}$$
imulus 37V from the seven (left) and eight (right) MU models without post-process adjustment. Thin lines identify the contribution to the predictive for the indicated firing combinations associated with the final few MUs. In both cases, the first five MUs fire with near certainty. Most firing combinations with negligible predictive probabilities are omitted from the plot.[]{data-label="fig:PredDen_S37"}](figures/PredDen_D1_S37_version2.pdf){width="80.00000%"}

Under-estimation {#sec:SimStudy_under}
----------------

The second data set, D2, contains $u^*=8$ MUs and presents a period of alternation between 23–32V involving five MUs. The SMC-MUNE procedure, however, estimates $\hat{u}=7$ and assigns this a high posterior probability of 97.1% after applying the post-process adjustment at ${\mu_{\min}}=15$mN. The main source of this under-estimation is the over-estimation of the excitability scale parameter (Table \[tab:lamMAX\]) for the fourth MU ($\lambda_4$), which makes the stimulus interval of probabilistic firing behaviour nearly three times wider than it should be. Consequently, this incorrectly estimated MU acts as a surrogate for MU number $6$, which has similar twitch force properties. One potential solution is to reduce the upper bound for the scale parameter, ${\lambda_{\max}}$, in to constrain estimation against shallow excitability curves. Table \[tab:lamMAX\] presents scale parameter estimates for selected MUs at the original (${\lambda_{\max}}=14$V) and reduced (${\lambda_{\max}}=7$V) upper bounds. Under the reduced bound the 8 MU model becomes a member of the HPCS, but the MAP estimate remains at $\hat{u}=7$ with a high posterior probability of 94.6%. Although a further reduction of ${\lambda_{\max}}$ might be appealing, it is likely to be detrimental to obtaining good model fits.
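The widening effect of the scale parameter can be sketched numerically. A minimal sketch, assuming a logistic excitability curve (a hypothetical choice; the method does not fix the curve's functional form): the stimulus interval over which firing is genuinely probabilistic grows linearly in $\lambda$, so a scale parameter estimated three times too large yields an interval three times too wide.

```python
import math

def firing_probability(s, eta, lam):
    """P(MU fires | stimulus s), assuming a logistic excitability curve
    with threshold eta (V) and scale lam (V). Hypothetical curve choice."""
    return 1.0 / (1.0 + math.exp(-(s - eta) / lam))

def probabilistic_interval(eta, lam, lo=0.01, hi=0.99):
    """Width of the stimulus interval where the firing probability lies
    strictly between lo and hi, by inverting the logistic curve."""
    s_lo = eta + lam * math.log(lo / (1.0 - lo))
    s_hi = eta + lam * math.log(hi / (1.0 - hi))
    return s_hi - s_lo

# Tripling the scale parameter triples the probabilistic interval,
# which is how an over-estimated lambda lets one MU mimic a neighbour.
w1 = probabilistic_interval(eta=25.0, lam=1.0)
w3 = probabilistic_interval(eta=25.0, lam=3.0)
print(round(w3 / w1, 6))
```

The numeric values of `eta`, `lam`, `lo`, and `hi` here are illustrative only; the qualitative point is the proportionality of the interval width to the scale parameter.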
For example, the scale parameter of the first MU, which has a true value of 5.0V, is accurately estimated whether ${\lambda_{\max}}$ is $7$V or $14$V, principally because its excitability curve is wel