in Sec. \[sec:high-lowest-weight\], i.e. they form the highest-weight modules for ${\ensuremath{SL(2,\mathbb{R})\times U(1)}}\circlearrowleft \mathcal{M} $. Such a highest-weight module is infinite-dimensional, whereas the length of this paper must remain finite. We therefore give the highest three weights for the scalar bases, the highest two weights for the vector bases, and only the highest weight for the tensor bases. Note that all other basis functions can be generated by applying the lowering operator to the highest-weight basis, order by order. To allow comparison of the basis functions in different modules, we also give, in global coordinates, the expressions for the scalar bases obtained using the lowest-weight method.
All expressions in these appendices are also available in the companion <span style="font-variant:small-caps;">Mathematica</span> notebooks: `Sep-met-pert-in-NHEK-Poinc.nb`, `Sep-met-pert-in-NHEK-global.nb`, and precomputed quantities in `NHEK-precomputed.mx` [@NHEKsupplement].
Basis functions in Poincaré coordinates {#app:Poincare-basis}
---------------------------------------
### Scalar bases {#app:scalar-basis-Poincare}
The scalar bases in Poincaré coordinates are given by $$F^{(m\,h\,k)} \propto R^{h-k} e^{i m \Phi} \times f^{(m\,h\,k)}\,,$$ where $$\begin{aligned}
f^{(m\,h\,0)} =& 1\,, \\ {\nonumber}
f^{(m\,h\,1)} =& -2 (h R T+i m)\,, \\ {\nonumber}
f^{(m\,h\,2)} =& -2 \left[-2 i (2 h-1) m R T + h (1-2 h) R^2 T^2 + h + 2 m^2\right]\,.\end{aligned}$$
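As a quick cross-check against the companion notebooks, the three highest-weight scalar bases can be transcribed into SymPy. This is only a sketch: the overall normalization is fixed just up to the proportionality in the equation above, and the symbol names are ours.

```python
import sympy as sp

T, R, Phi = sp.symbols('T R Phi', real=True)
m, h = sp.symbols('m h')

# The polynomial parts f^{(m h k)} for k = 0, 1, 2, transcribed from above.
f = [
    sp.Integer(1),
    -2*(h*R*T + sp.I*m),
    -2*(-2*sp.I*(2*h - 1)*m*R*T + h*(1 - 2*h)*R**2*T**2 + h + 2*m**2),
]

def scalar_basis(k):
    """Highest-weight scalar basis F^{(m h k)}, up to normalization."""
    return R**(h - k) * sp.exp(sp.I*m*Phi) * f[k]
```

For $k=0$ this reduces to $R^{h} e^{i m \Phi}$, the highest-weight function itself.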
### Vector bases {#app:vector-basis-Poincare}
The covector bases in Poincaré coordinates can be decomposed using the dual basis one-forms $\{\text{d}T,\text{d}\Phi,\text{d}R\}$ via $$\mathbf{V}^{(m\,h\,k)}= V_{i}^{(m\,h\,k)}\text{d}x^i,\quad x^i\in\{T,\Phi,R\}\,.$$ The covector components are given by $$\begin{aligned}
V_{i}^{(m\,h\,k)} \propto
\begin{bmatrix}
v^{(m\,h\,k)}_{T}R^{+1} \\
v^{(m\,h\,k)}_{\Phi} R^{+0} \\
v^{(m\,h\,k)}_{R} R^{-1}
\end{bmatrix}
R^{h-k} e^{i m \Phi}
\,,\end{aligned}$$
$$\begin{array}{l l}
\mathcal{D}_{Tu} & -\frac{i m u \left(u^6+3 u^4-21 u^2+9\right)}{8 \left(u^4-1\right)^2} \\
\mathcal{D}_{\Phi R} & -\frac{i m \left(u^4+6 u^2-3\right)}{4 \left(u^2+1\right)^3} \\
\mathcal{D}_{\Phi u} & -\frac{i m u \left(u^2-3\right)}{2 \left(u^2-1\right) \left(u^2+1\right)^2} \\
\noalign{\bigskip}
 & C_{Tu}(u) \\
\noalign{\smallskip}
\hline \hline \noalign{\smallskip}
\mathcal{D}_{TT} & -\frac{2 i m u \left(u^2-1\right) \left(u^2+3\right)}{\left(u^2+1\right)^3} \\
\mathcal{D}_{T\Phi} & -\frac{i m u \left(u^4+4 u^2-5\right)}{\left(u^2+1\right)^3} \\
\mathcal{D}_{\Phi \Phi} & -\frac{4 i m u \left(u^2-1\right)}{\left(u^2+1\right)^3} \\
\mathcal{D}_{RR} & \frac{4 i m u}{\left(u^2+1\right)^2} \\
\mathcal{D}_{Ru} & \frac{i (h+1) m}{2 \left(u^2+1\right)} \\
\mathcal{D}_{uu} & -\frac{i m u}{u^4-1} \\
\mathcal{D}_{TR} & -\frac{2 u \left(u^4-14 u^2+h \left(u^2+1\right)^2+9\right)}{\left(u^2+1\right)^4} \\
\mathcal{D}_{Tu} & -\frac{4 h^2 \left(u^2-1\right) \left(u^2+1\right)^2+4 h \left(u^6-3 u^4+7 u^2-5\right)+\left(u^4+6 u^2-3\right) \left(-8 u^2+m^2 \left(u^2+1\right)^2+8\right)}{8 \left(u^2-1\right) \left(u^2+1\right)^3} \\
\mathcal{D}_{\Phi R} & -\frac{4 u \left(u^4-6 u^2+5\right)}{\left(u^2+1\right)^4} \\
\mathcal{D}_{\Phi u} & \frac{-m^2 \left(u^2+1\right)^2+4 h \left(u^2-1\right)+4 \left(u^2-1\right)}{2 \left(u^2+1\right)^3} \\
\end{array}$$
---
abstract: 'Let the group $G = AB$ be the product of subgroups $A$ and $B$, and let $p$ be a prime. We prove that $p$ does not divide the conjugacy class size (index) of any $p$-regular element of prime power order $x\in A\cup B$ if and only if $G$ is $p$-decomposable, i.e. $G=O_p(G) \times O_{p'}(G)$.'
author:
- |
María José Felipe[^1], Lev S. Kazarin[^2],\
Ana Martínez-Pastor, and Víctor Sotomayor
date: '*Dedicated to the memory of Carlo Casolo*'
title: On products of groups and indices not divisible by a given prime
---
Introduction and statement of results
=====================================
All
ponse. Further details can be found in [@Thompson66; @Wang95].
The habituation mechanism used in the system described here is Stanley’s model. The synaptic efficacy, $y(t)$, decreases according to the following equation:
$$\tau \frac{dy(t)}{dt} = \alpha \left[ y_0 - y(t) \right] - S(t),
\label{HabEqn}$$
where $y_0$ is the original value of $y$, $\tau$ and $\alpha$ are time constants governing the rate of habituation and recovery respectively, and $S$ is the stimulus presented. The effects of the equation are shown in figure \[curves\]. The principal difference between this and the model of Wang and Hsu is that the latter allows for long-term memory, so repeated training causes faster learning.
![ []{data-label="curves"}](habit.eps){width=".45\textwidth"}
Figure \[curves\] shows the synaptic efficacy increasing again at time 150, when the stimulus is removed. This is effectively a ‘forgetting’ effect, and is caused by a dishabituation mechanism which increases the strength of synapses that do not fire. In the implementation described here this effect can be removed. The experiments reported in section \[Results\] investigate effects of the filter both with and without forgetting.
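A forward-Euler integration of the habituation equation reproduces the qualitative shape described above: efficacy decays under a constant stimulus and recovers ("forgets") once the stimulus is removed at time 150. The constants below are illustrative only, not the values used in the experiments.

```python
import numpy as np

tau, alpha, y0 = 10.0, 1.0, 1.0   # illustrative constants, not the paper's values
dt, t_end = 0.01, 300.0

ts = np.arange(0.0, t_end, dt)
ys = np.empty_like(ts)
y = y0
for i, t in enumerate(ts):
    S = 1.0 if t < 150.0 else 0.0            # stimulus removed at t = 150
    y += dt / tau * (alpha * (y0 - y) - S)   # tau * dy/dt = alpha*(y0 - y) - S
    ys[i] = y
```

With these constants the efficacy has essentially fully habituated by $t=150$ and fully recovered by $t=300$; turning the dishabituation term off amounts to skipping the update when $S=0$.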
Using Habituation for a Novelty Filter\[NF\]
--------------------------------------------
![ []{data-label="hsom"}](HSOM.eps){width=".45\textwidth"}
The principle behind the novelty filter is that perceptions are classified by some form of clustering network, whose output is modulated by habituable synapses, so that the more frequently a neuron fires, the lower the efficacy of the synapse becomes. This means that only novel features will produce any noticeable output. If the habituable synapses receive zero input (rather than none) during turns when their neuron does not fire, the synapses will ‘forget’ the inhibition over time, providing that this forgetting mechanism (or dishabituation) is turned on.
The choice of clustering algorithm is very important and depends on the data being classified. In this paper, we co
(0.008)
Gender (= 1 if female) 0.054\ 0.050\ 0.041\
(0.080) (0.080) (0.077)
Age 0.007\ 0.006\ 0.006\
(0.010) (0.010) (0.009)
Trust (GSS) -0.017\ -0.017\ -0.012\
(0.097) (0.095) (0.098)
*σ*~*u*~ 0.147 0.137 0.149 0.154 0.146 0.158
*σ*~*e*~ 0.194 0.194 0.197 0.194 0.194 0.197
*ρ* 0.367 0.333 0.362 0.387 0.364 0.391
R-squared 0.042 0.060 0.050 0.069 0.076 0.068
Wald test 17.15\*\*\* 12.65\*\* 12.46\*\* 39.33\*\*\* 15.53\*\* 40.45\*\*
Observations 106 106 106 106 106 106
---------------------------------------------------------------------------------------------------------------------------
Note: Standard errors in parentheses are clustered by groups. The independent variable Trust (GSS) corresponds to the answer to the attitudinal survey question from the General Social Survey: "Generally speaking, would you say that most people can be trusted or that you can't be too careful in dealing with people?" (= 1 if most people can be trusted). Significance at the \*\*\* p\<0.01, \*\* p\<0.05, \* p\<0.1 level.
Overall, our results are in line with our previou
ight come with the risk of name clashes, and it might be difficult to associate them with their original counterparts.
Instead, I'd suggest you wrap all those identifiers into d['...'] to make them strings used for lookup in a dictionary. Then wrap the dictionary with the values into another accordingly named dictionary before passing it into the eval function. Do this after replacing the operators and only match uppercase letters so the operators (then lowercase) are not wrapped, too.
You can use re.sub with a callback function for the replacement. As regex, you can use e.g. TP\([^)]+\)|[-A-Z0-9]+. Note that the TP part has to come first.
expression = expression.replace("XOR", "is not").replace("OR", "or").replace("AND", "and").replace("NOT", "not")
expression = re.sub(r"TP\([^)]+\)|[-A-Z0-9]+", lambda m: "d['%s']" % m.group(), expression)
if eval(expression, {"d": opt_disp_dict_norm}):
print(value)
Note, however, that I'm still getting key errors for some of the IDs and the first of the expressions is missing a closing ). To handle missing keys, you could also replace with "d.get('%s', True)" or "d.get('%s', False)" to give those default values.
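A self-contained sketch of the suggested approach, using `d.get(..., False)` so missing keys get a default. The input expression and the lookup values below are made up for illustration; they are not the question's actual data.

```python
import re

expression = "A1 AND NOT TP(2-3) OR B2"                        # hypothetical input
opt_disp_dict_norm = {"A1": True, "TP(2-3)": False, "B2": False}

# 1. Replace the uppercase operators first, so only identifiers stay uppercase.
expression = (expression.replace("XOR", "is not").replace("OR", "or")
                        .replace("AND", "and").replace("NOT", "not"))

# 2. Wrap the remaining uppercase tokens; the TP(...) alternative must come first.
expression = re.sub(r"TP\([^)]+\)|[-A-Z0-9]+",
                    lambda m: "d.get(%r, False)" % m.group(), expression)

result = eval(expression, {"d": opt_disp_dict_norm})
```

Here `expression` becomes `d.get('A1', False) and not d.get('TP(2-3)', False) or d.get('B2', False)` before evaluation.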
Q:
R error at the start of Android project
I have started a new application project in Android and I am getting this strange error on the R file. I have done many projects, but this is the first time an R error has appeared, since the R file is generated automatically. There is no error in my xml or class files as I have just started the project. This is what it's showing:
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.main);
}
R cannot be resolved to a variable.
I tried creating the R class, but it shows up in the src folder and not in the gen folder. Please help me out here, anyone. Thanks in advance.
A:
Delete the R.java file from your source folder, then clean your project.
Also try this:
Right click on your project name >> Android Tools >> Fix Project Properties
Q:
Encrypt We
action with the environment affects software behavior, which is ultimately transmitted through response to changing volatile variables. Normal operations calls for a run of figuratively unbounded duration during which software experiences the usage pattern’s variation of volatile stimulus, in response to which possibly unbalanced service is demanded from its inventory of functions.
Systems engineering often augments what is here the automaton’s step poset with a transition network of modes. These modes symbolically encapsulate enabled or disabled capabilities. However, even though this augmentation facilitates visualization of behaviors, it fails to be mathematically definitive[^5].
#### Orbit {#S:ORBIT}
We have not defined normal operations, but every example would certainly constitute a walk (sequence of steps). A special walk illustrating normal operations (as described above) will be termed here an *orbit*. Without formal definition, use of this term sacrifices rigor.
The actuated automaton governs pure step transition logic, but an orbit also reflects a usage pattern.
#### Limit conjecture {#S:LIMIT_CONJECTURE}
Orbits may differ in specific sequence and content, but they have the same limit ratios. We consider a case drawn with the counting procedure of §\[S:COUNTING\].
\[T:LIMIT\_RATIO\] For different orbits ${\mathit{o}} = \{{\mathit{s}}_n\}$ and ${\mathit{o}}' = \{{\mathit{s}}'_n\}$ having the same usage pattern, $$\lim_{\;k \to \infty} \frac{N_U(\{{\mathit{s}}_n\}, k)}{N_Z(\{{\mathit{s}}_n\}, k)} =
\lim_{\;k \to \infty} \frac{N_U(\{{\mathit{s}}'_n\}, k)}{N_Z(\{{\mathit{s}}'_n\}, k)}$$ for sets of steps $\varnothing \neq U \subseteq Z \subseteq {\mathbb{S}}$.
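The limit-ratio claim can be illustrated numerically: two independently generated orbits driven by the same usage pattern yield (nearly) the same empirical ratio $N_U/N_Z$. The step alphabet, the sets $U \subseteq Z$, and the pattern below are all invented for the illustration.

```python
import random

STEPS = ["s1", "s2", "s3", "s4"]    # hypothetical step alphabet
WEIGHTS = [0.4, 0.3, 0.2, 0.1]      # shared usage pattern
U = {"s1"}                          # nonempty U subset of Z subset of STEPS
Z = {"s1", "s2", "s3"}

def limit_ratio(seed, k=200_000):
    """Empirical N_U / N_Z over the first k steps of one simulated orbit."""
    rng = random.Random(seed)
    n_u = n_z = 0
    for _ in range(k):
        s = rng.choices(STEPS, WEIGHTS)[0]
        n_u += s in U
        n_z += s in Z
    return n_u / n_z

r1, r2 = limit_ratio(1), limit_ratio(2)  # two different orbits, same pattern
```

Both ratios concentrate around $0.4/0.9$, the ratio of the pattern's mass on $U$ to its mass on $Z$.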
### Types of operational profile {#S:TYPES_OP_PROFILE}
A relative operational profile is the conditional probability that a step in an actuated automaton’s orbit coincides with a particular member of the reference set, given that it agrees with the reference set. We consider one other: an absolute operational profile is the time rate
[Strauss]{}, M. A., [Yahil]{}, A., & [Huchra]{}, J. P., 1994, , 927+
, K. B. & [Nusser]{}, A., 1996, , L1
, N. Y. & [Hui]{}, L., 1998, , 44+
, A. J. S., 1997, astro-ph 9708102
, A. F. & [Taylor]{}, A. N., 1995, , 483
, L., 1998, in preparation
, L. & [Gnedin]{}, N. Y., 1997, , 27
, L., [Gnedin]{}, N. Y., & [Zhang]{}, Y., 1997, , 599+
, L., [Kofman]{}, L., & [Shandarin]{}, S., 1998, in preparation
, L. & [Rutledge]{}, R. E., 1997, preprint, astro-ph 9709100
, N., 1987, , 1
, N. & [Peacock]{}, J. A., 1991, , 482
, J. & [Rees]{}, M., 1993, , 617+
, A. & [Haehnelt]{}, M., 1998, preprint, astro-ph 9806109
, J. A. & [Dodds]{}, S. J., 1994, , 1020+
, P. J. E., 1980, , Princeton University Press
, U., 1997, preprint, astro-ph 9711180
, G. B. & [Lightman]{}, A. P., 1979, , John Wiley & Sons
, R. & [Weinberg]{}, D., 1997, preprint, astro-ph 9712192
, R., 1998, in preparation
, A. N. & [Hamilton]{}, A. J. S., 1996, , 767
[^1]: Croft et al. [-@croft98] in fact differentiated the Gaussianized transmission power spectrum rather than the transmission power spectrum itself. Their investigation seems to indicate that the two give very similar results, except that the former yields smaller error-bars. We will consider the non-Gaussianized version of their method in this paper for simplicity.
[^2]: The $k_F$ here is equal to the $k_F$ in Gnedin & Hui divided by $\sqrt 2$.
[^3]: Note that an alternative would be to group the baryon-smoothing factor ${\rm exp} [-{k^2 / k_F^2}]$ together with $\tilde P^\rho (k)$ instead of with the rest of the terms in the distortion kernel $W^{f\rho}$. Our inversion procedure can then be viewed as an attempt to recover the baryon power spectrum ${\tilde P^\rho} (k) {\rm exp} [-{k^2 /
k_F^2}]$ rather than the mass power spectrum itself ${\tilde P^\rho}
(k)$. However, the two coincide on large scales.
[^4]: We ignore the spatial dependence of the thermal profile to simplify the discussion here; see eq. (\[tau\]).
---
abstract: 'The critical behaviour
hin the class
------------------------------------ ----------------------------------------------- ---------------------------- ------------------------------------------- -------------------------------------------- ------- -------
0 \- Not assigned 3 10 1.57 7.14
1 T Unambiguously assigned T 7 3 3.66 2.14
1 A Unambiguously assigned A 26 28 13.61 20.00
1 B Unambiguously assigned B 45 47 23.56 33.57
\>1 A Unambiguously assigned A 5 7 2.62 5.00
\>1 B Unambiguously assigned B 74 26 38.74 18.57
\>1 T/A Ambiguously assigned T/A 0 1 0.00 0.71
\>1 T/B Ambiguously assigned T/B 5 1 2.62 0.71
\>1
uracy, precision, recall, and F measure for nonpeak events and peak events, respectively. Evaluation of the ML approaches on the weather and air quality data is shown in [Table 3](#table3){ref-type="table"}. It showed that the developed random forest gave the best predictive performance, mainly because the collected data were better suited to the random forest.
######
Evaluation of machine learning approaches on weather and air quality.
--------------------------------------------------------------------------------
Machine learning approaches F1 measure Accuracy, % (n/N)
--------------------------------------- ---------------- ------------------- ---
**Generalized linear model** 85.6 (479/559)
\ Peak 0.667 \
\ Nonpeak 0.908 \
**Support vector machine** 80.2 (448/559)
\ Peak 0.289 \
\ Nonpeak 0.882 \
**Adaptive boosting neural networks** 84.7 (473/559)
\ Peak 0.667 \
\ Nonpeak 0.900 \
**Tree bag** 83.8 (468/559)
\ Peak 0.640 \
\ Nonpeak 0.895 \
**Random forest** 88.3 (494/559)
\ Peak 0.745 \
\ Nonpeak 0.924 \
--------------------------------------------------------------------------------
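For reference, the F1 values above combine per-class precision and recall in the usual way. A minimal helper follows; the confusion-matrix counts in the example are made up, not taken from Table 3.

```python
def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical peak-event counts: 38 true positives, 12 false positives, 14 misses.
score = f1_score(38, 12, 14)
```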
In addition, we used the receiver operati
Inclusion items Selection criteria
-------------------- ---------------------------------------------------------------
Tumor type Osteosarcoma
Sample type Tumor tissue or blood
Assay method qRT-PCR or FISH
Time of study January 2003 to September 2017
Follow-up (months) ≥60
Included results Multivariate analysis of OS and Kaplan--Meier survival curves
**Abbreviations:** FISH, fluorescence in situ hybridization; lncRNA, long noncoding RNA; OS, overall survival; qRT-PCR, quantitative reverse transcriptase polymerase chain reaction.
######
Basic information of included articles
Reference Year LncRNA Total patients (n) Survival analysis Follow-up (months)
-------------------------------- ------ ----------- -------------------- ------------------- --------------------
Wang et al[@b21-ott-10-5355] 2017 TUG1 44 OS 120
Wen et al[@b23-ott-10-5355] 2017 UCA1 151 OS 60
Zhou et al[@b30-ott-10-5355] 2017 CCAL 46 OS 60
Chen et al[@b24-ott-10-5355] 2016 BCAR4 60 OS 60
Cong et al[@b32-ott-10-5355] 2016 TUSC7 82 OS 120
Gao and Lian[@b27-ott-10-5355] 2016 MALAT1 162 OS 65
Ju et al[@b25-ott-10-5355] 2016 BCAR4 168 OS 68
Li et al[@b26-ott-10-5355] 2016 HIF2PUT 82 OS 60
Li et al[@b22-ott-10-5355] 2016 UCA1 135 OS 60
Ma et al[@b20-ott-10-5355] 2016 TUG1 76 OS 60
Uzan et al[@b33-ott-10-5355] 2016 HULC 33 OS 96
Xia et al[@b29-ott-10-5355] 2016 91H 67 OS
obot dynamics, then the weight matrix $W$ of the RBFN was initialized with zeros. Moreover, an unexpected disturbance, shown in Fig. \[fig5\], acting on the applied forces was taken into account to illustrate the robustness of the proposed approach.
![External disturbance[]{data-label="fig5"}](disturbance.png){width="0.8\linewidth"}
\[tab1\]
Dynamic model parameters
--------------------------------------------------------------------------------------------------------------------------------------------
${{m}_{1}}={{m}_{2}}={{m}_{3}}={{m}_{4}}=1.5\,(kg)$;
${{I}_{1}}={{I}_{2}}={{I}_{3}}={{I}_{4}}=0.18\,(kg{{m}^{2}})$;
${{l}_{1}}={{l}_{2}}={{l}_{3}}={{l}_{4}}=1.2\,(m)$;
${{k}_{1}}={{k}_{2}}={{k}_{3}}={{k}_{4}}=0.48\,(m)$;
${{b}_{1}}={{b}_{2}}={{b}_{3}}={{b}_{4}}=110\,(Nm/s)$;
${{d}_{1}}=0.25\,(m)$; ${{d}_{2}}=1.2\,(m)$; $\mu =0.35$; $m=1.5\,(kg)$
Reference trajectory parameters
$\left( {{x}_{i1}},{{y}_{i1}},{{x}_{i2}},{{y}_{i2}} \right)=\left( 0.76,\,0.6,\,-0.76,\,0.6 \right)$;
$\left( {{x}_{f1}},{{y}_{f1}},{{x}_{f2}},{{y}_{f2}} \right)=\left( -0.275,\,1.4,\,-0.525,\,1.4 \right)$;
$\left( {{x}_{0}},{{y}_{0}} \right)=\left( 0,\,1.4 \right)$; ${{r}_{m}}=0.4$;
${{\theta}_{1}}(0)=\frac{\pi }{6};\,\,{{\theta}_{2}}(0)=\frac{\pi }{2};\,\,{{\theta}_{3}}(0)=\pi ;\,\,{{\theta}_{4}}(0)=\frac{-2\pi }{3}$;
${{\dot{\theta}}_{1}}(0)={{\dot{\theta}}_{2}}(0)={{\dot{\theta}}_{3}}(0)={{\dot{\theta}}_{4}}(0)=0$
Controller parameters
$\lambda =dia
f the other methods provide any guarantees over unknown selection rules.
Numerical Examples {#section::simulation}
==================
In this section we briefly consider a few illustrative examples. In a companion paper, we provide detailed simulations comparing all of the recent methods that have been proposed for inference after model selection. It would take too much space, and go beyond the scope of the current paper, to include those comparisons here.
We focus on linear models, and in particular on inference for the projected parameter $\beta_{{\widehat{S}}}$ and the LOCO parameter $\gamma_{{\widehat{S}}}$ of and , respectively. The data are drawn from three distributions:
Setting A
: *Linear and sparse with Gaussian noise.* A linear model with $\beta_j\sim U[0,1]$ for $j=1,\dots,5$ and $\beta_j=0$ otherwise.
Setting B
: *Additive and sparse with $t$-distributed noise.* An additive model with a cubic and a quadratic term, as well as three linear terms, and $t_5$-distributed additive noise.
Setting C
: *Non-linear, non-sparse, $t$-distributed noise.* The variables from Setting $B$ are rotated randomly to yield a dense model.
In Settings A and B, $n=100$ (before splitting); in Setting C, $n=200$. In all settings $p=50$ and the noise variance is 0.5. The linear model $\hat{\beta}_{{\widehat{S}}}$ is selected on $\mathcal{D}_1$ by the lasso with $\lambda$ chosen using 10-fold cross-validation. For $\gamma_{{\widehat{S}}}(j)$, $\hat{\beta}_{{\widehat{S}}}(j)$ is estimated by reapplying the same selection procedure to $\mathcal{D}_1$ with the $j^{\mathrm{th}}$ variable removed. Confidence intervals are constructed using the pairs bootstrap procedure of Section 2 with $\alpha=0.05$.
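A minimal sketch of the Setting-A pipeline: generate sparse linear data, split, select a support by cross-validated lasso on $\mathcal{D}_1$, then form pairs-bootstrap percentile intervals for the projected parameter on $\mathcal{D}_2$. Sample sizes, seeds, and the bootstrap count are illustrative; this is not the paper's simulation code.

```python
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

rng = np.random.default_rng(0)
n, p = 100, 50
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = rng.uniform(0.0, 1.0, 5)          # linear and sparse signal
y = X @ beta + rng.normal(scale=np.sqrt(0.5), size=n)

# Split the data: select on D1, construct intervals on D2.
X1, y1, X2, y2 = X[:n//2], y[:n//2], X[n//2:], y[n//2:]
S = np.flatnonzero(LassoCV(cv=10).fit(X1, y1).coef_)
if S.size == 0:
    S = np.array([0])                        # guard for the sketch

# Pairs bootstrap (B = 200) for the projected parameter beta_S on D2.
boots = []
for _ in range(200):
    idx = rng.integers(0, n//2, n//2)
    boots.append(LinearRegression().fit(X2[idx][:, S], y2[idx]).coef_)
lo, hi = np.percentile(boots, [2.5, 97.5], axis=0)
```

The key design point is that selection happens only on $\mathcal{D}_1$, so the bootstrap on $\mathcal{D}_2$ treats the selected support as fixed.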
ial V(\rho_0)}|\rho_1-z|\Bigr\};$$ that is, the minimum of $\varepsilon$ and the orthogonal distance of $\rho_1$ from $\partial V(\rho_0)$. Next, define $\theta_1$, the angle that subtends at $\rho_0$ between $\rho_1$ and the origin $(0,\dots,0)$ and recall that symmetry implies that $\theta_1$ is uniformly distributed on $[0,2\pi]$. Simple geometric considerations tell us that $$\zeta_1=x_1 - r_1 \sin\left(\frac{\pi}{2} - \theta_1\right) = \zeta_0 - \zeta_0 \sin\left(\frac{\pi}{2} - \theta_1\right) = \zeta_0(1-\cos(\theta_1)).
\label{zeta1}$$ This provides an implicit expression for $\theta_1$ in terms of $\zeta_0$, the orthogonal distance of $\rho_0$ from the nearest tangent hyperplane. See Figure \[fig:class\_proofa\].
![Geometric setting of the proof[]{data-label="fig:class_proofa"}](proof1-fig1_new){width="0.8\linewidth"}
Assuming that $\zeta_0>\varepsilon$, thanks to isotropic symmetry, the walk-on-sphere algorithm will end at the first step if $\theta_1$ lies in a certain critical interval dictated by the choice of skin thickness $\varepsilon$. We can compute this critical (and obviously symmetric) interval as a function of $\zeta_0$, say $(-\theta^*(\zeta_0), \theta^*(\zeta_0))$, where $$\theta^*(\zeta_0) = \arccos\pp{\frac{\zeta_0 - \varepsilon}{\zeta_0}}.
\label{theta*}$$
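Since $\theta_1$ is uniform on $[0,2\pi]$, the walk terminates at the first step with probability $\theta^*(\zeta_0)/\pi$. A quick Monte Carlo check of this, with arbitrary illustrative values of $\zeta_0$ and $\varepsilon$:

```python
import numpy as np

def theta_star(zeta0, eps):
    # Critical half-angle from the display above (requires zeta0 > eps).
    return np.arccos((zeta0 - eps) / zeta0)

rng = np.random.default_rng(0)
zeta0, eps = 1.0, 0.05                     # arbitrary illustrative values
theta1 = rng.uniform(0.0, 2.0*np.pi, 1_000_000)
zeta1 = zeta0 * (1.0 - np.cos(theta1))     # the relation for zeta_1 above
p_hat = np.mean(zeta1 <= eps)              # empirical one-step termination rate
```

The empirical rate `p_hat` matches $\theta^*(\zeta_0)/\pi$ to Monte Carlo accuracy.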
A quantity that will be of interest to us in order to complete the proof is the expectation $
\mathbb{E}_{x}[\sqrt{\zeta_1}] = \mathbb{E}_{\rho_0}[\sqrt{\zeta_1}].
$ To this end, we compute $$\begin{aligned}
\mathbb{E}_{\rho_0}\left[\sqrt{\zeta_1}\right] & \leq \sqrt{\varepsilon}\,\mathbb{P}_{\rho_0}\pp{\theta_1\in (-\theta^*(\zeta_0), \theta^*(\zeta_0))}+
\mathbb{E}_{\rho_0}\left[ \mathbf{1}_{(\theta_1\not\in (-\theta^*(\zeta_0), \theta^*(\zeta_0)))}\sqrt{\zeta_1}\right]\notag\\
& = \sqrt{\varepsilon}\, \frac{\theta^*(\zeta_0)}{\pi}
+ \frac{1}{\pi}\int_{\theta^*(\zeta_0)}^\pi \sqrt{\zeta_0(1-\cos(
nly (step 4.2); pilot survey, data linkage and further contact (step 4.3); or pilot survey only and further contact (step 4.4). Those who consent to further contact, irrespective of data linkage consent, will be invited to take part in the HAGIS Wave 1 (step 6.0).
Survey instrument {#s2e}
-----------------
The HAGIS survey is largely based on the ELSA and NICOLA questionnaires and therefore is widely comparable (see online [supplementary appendices 4 and 5](#SP4 SP5){ref-type="supplementary-material"}). The survey instrument contains validated questions covering a wide range of topics including cognitive health, financial literacy, personality and standard of living (see online [supplementary appendix 4](#SP4){ref-type="supplementary-material"}). This will ensure that the study data will (1) create a valid and valuable data resource and (2) be harmonised with other global ageing studies to support cross-country comparisons. The topics covered in the questionnaires are outlined in [table 1](#T1){ref-type="table"}.
10.1136/bmjopen-2017-018802.supp4
######
Content of the main CAPI and self-completion questionnaires
Main CAPI questionnaire Self-completion questionnaire
---------------------------------------- ---------------------------------
Demographics Internet and TV use
Social circumstances Social activities
Employment Support from family and friends
Income and assets Transport
Expectations and retirement Current financial situation
Financial literacy Health and health behaviours
Physical health Personality
Cognitive health
Health behaviour
Activities of daily living and helpers
Social participation
CAPI, Computer Assisted Personal Interview.
Data transfer {#s2f}
-------------
The CAPI survey will be collected in state-of-the-a
tries of $m_{i-1, i}, m_{i+1, i}$, and $$m_{i\pm 2, i}^{\natural}= \left\{
\begin{array}{l l}
\textit{the $n_{i\pm 2}\times (n_i-1)$-th entry of $m_{i\pm 2, i}$} & \quad \textit{if $L_{i \pm 2}$ is of type $I^o$};\\
\textit{the $(n_{i\pm 2}-1)\times (n_i-1)$-th entry of $m_{i\pm 2, i}$} & \quad \textit{if $L_{i \pm 2}$ is of type $I^e$}. \end{array} \right.$$ In the right hand side of the equation including $\mathcal{X}_{i,2,2}(m)$, the term $1/2\cdot{}^tr_i\bar{a}_ir_i$ should be interpreted as follows. We formally compute $1/2\cdot{}^tr_i\bar{a}_ir_i$ and it is of the form $1/2(2X)$. Then the term $1/2\cdot{}^tr_i\bar{a}_ir_i$ is defined as the modified $X$ by letting each term having $\pi$ as a factor in $X$ be zero.
These equations are considered in $(B\otimes_AR)/(\pi\otimes 1)(B\otimes_AR)$. Since $m$ actually belongs to $\mathrm{Ker~}\varphi(R)/\tilde{G}^1(R)$, we have the following equations by the argument made at the beginning of this step: $$\label{ea27}
\left \{
\begin{array}{l}
\mathcal{X}_{i,1,2}(m)=\bar{a}_ir_i+{}^tv_i=\bar{b}_i=0;\\
\mathcal{X}_{i,1,3}(m)=\bar{a}_i t_i+{}^ty_i+{}^tv_iz_i+\mathcal{P}^i_{1, 3}=\bar{e}_i=0; \\
\mathcal{X}_{i,2,3}(m)={}^tr_i\bar{a}_it_i+x_i+z_i+ w_i+u_iz_i+\mathcal{P}^i_{2, 3}=\bar{d}_i=0;\\
\mathcal{X}_{i,2,2}(m)=u_i+\bar{\gamma}_iu_i^2+x_i^2+1/2\cdot{}^tr_i\bar{a}_ir_i+\left(\delta_{i-2}(m_{i-2, i}^{\natural})^2+\delta_{i+2}(m_{i+2, i}^{\natural})^2\right)=\bar{f}_i=0.
\end{array} \right.$$ Thus we get polynomials $\mathcal{X}_{i,1,2}, \mathcal{X}_{i,1,3}, \mathcal{X}_{i,2,3}, \mathcal{X}_{i,2,2}$ on $\mathrm{Ker~}\tilde{\varphi}/\tilde{M}^1$, vanishing on the subscheme $\mathrm{Ker~}\varphi/\tilde{G}^1$.\
6. Assume that $i$ is even and that $L_i$ is *of type I*. By Equation (\[ea19\]) which involves an element of $\tilde{M}^1(R)$, we have $c_i' = c_i+z_i'$. Since $c_i'\not\equiv c_i$, we cannot follow the argument used in the previous steps in the case of the $(2, 2)$-block (when $L_i$ is *of type
eps, to allow the network to settle to a steady state before stimulus presentation. For each filter channel, the response at the central receptive field was quantified and normalized to unit maximum before averaging. The average $R_1$ response and two exemplar units are displayed in Fig. \[end\_stopping\_R\]. As mentioned in the main text, $R_1$ did not have a significant length suppression effect, with some neurons showing length suppression (right panel) and others showing an opposite effect (middle panel).
![Length suppression analysis for $R_1$ units.[]{data-label="end_stopping_R"}](end_stopping-kitti_keras2_relu_R.pdf){width="100.00000%"}
Sequence Learning Effects in Visual Cortex
------------------------------------------
For the exposure training phase in the learned sequence experiment, the Adam [@Kingma_2014] optimizer was used with default parameters. Table \[sequence\_table\] contains the percent increase in response between predicted and unpredicted sequences for each layer.
Unit Type Layer $0$ Layer $1$ Layer $2$ Layer $3$
----------- ----------- ----------- ----------- -----------
E $308$ $90$ $109$ $108$
A N/A $78$ $109$ $108$
R N/A $18$ $19$ $30$
: Percent increase of response between predicted and unpredicted sequences[]{data-label="sequence_table"}
Norm-Based Coding of Faces
--------------------------
For the faces generated for the norm-based coding experiment, a caricature level of, say, $2$ corresponds to having all principal components with a coefficient of magnitude $2$ (either positive or negative). The hyperparameters of the tested PredNet model were chosen to match those of the rotating-faces model in the original paper [@Lotter_2017]. Fig. \[norm\_faces\_AR\] shows the responses of the $A$ and $R$ units to the caricature faces. Responses are calculated as an average per layer, and then averaged across layers. Training on rotating faces led to a much higher caricatur
- e_{\i})^\top \sum_{m = 1}^{\ell_j} \frac{(\kappa_j - m)}{\kappa_j(\kappa_j - 1)(\kappa_j - m +1)} \\
&\preceq& \ell\Big(1 - \frac{1}{\ell_j}\sum_{m= 1}^{\ell_j} \frac{1}{\kappa_{\max} - m +1}\Big) \underbrace{\sum_{j = 1}^n \frac{1}{\kappa_j(\kappa_j - 1)} \sum_{i<\i \in S_j} (e_i - e_{\i})(e_i - e_{\i})^\top}_{=L}\;,\end{aligned}$$ where $L$ is the Laplacian defined for the comparison graph $\H$, Definition \[def:comparison\_graph1\]. By Jensen’s inequality, we have $$\begin{aligned}
\sum_{i = 2}^d \frac{1}{\lambda_i(L)} \geq \frac{(d-1)^2}{\sum_{i = 2}^d \lambda_i(L)} = \frac{(d-1)^2}{\Tr(L)} = \frac{(d-1)^2}{n}.\end{aligned}$$
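The Jensen step here is the AM–HM inequality applied to the nonzero eigenvalues of $L$. A numeric sanity check on a small complete graph, our own choice of example (where all nonzero Laplacian eigenvalues coincide, so the bound holds with equality):

```python
import numpy as np

d = 6
A = np.ones((d, d)) - np.eye(d)              # adjacency of the complete graph K_6
L = np.diag(A.sum(axis=1)) - A               # graph Laplacian
lam = np.sort(np.linalg.eigvalsh(L))[1:]     # nonzero eigenvalues (drop lambda_1 = 0)

lhs = np.sum(1.0 / lam)                      # sum_{i >= 2} 1 / lambda_i
rhs = (d - 1) ** 2 / np.trace(L)             # (d-1)^2 / Tr(L)
```

For $K_6$ every nonzero eigenvalue equals $6$, so `lhs` and `rhs` agree; for a general connected graph `lhs >= rhs`.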
Proof of Theorem \[thm:bottoml\_upperbound\]
--------------------------------------------
We prove a slightly more general result that implies the desired theorem. For $\ell\geq 4$, we can choose $\beta_1=1/2$. Then, the condition that $\gamma_{\beta_1}\leq1$ implies $\ld\leq (\ell/2+1)(d-2)/(\kappa-2)$, which implies $\ld \leq \ell d / (2 \kappa)$. With the choice of $\ld = \ell d / (2 \kappa) $, this implies Theorem \[thm:bottoml\_upperbound\].
\[thm:bottoml\_upperbound\_general\] Under the bottom-$\ell$ separators scenario and the PL model, $n$ partial orderings are sampled over $d$ items parametrized by $\theta^* \in \Omega_b$. For any $\beta_1$ with $ 0 \leq \beta_1 \leq \frac{\ell-2}{\ell}$, define $$\begin{aligned}
\label{eq:bottoml_2_genreal}
\gamma_{\beta_1} \;\; \equiv \;\; \frac{\ld(\kappa-2)}{({\left \lfloor{\ell\beta_1} \right \rfloor}+1)(d-2)}, \;\;
\end{aligned}$$ and for $\gamma_{\beta_1}\leq1$, $$\begin{aligned}
\chi_{\beta_1} & \equiv& \big(1-{\left \lfloor{\ell\beta_1} \right \rfloor}/\ell\big)^2\Bigg(1 - \exp\bigg(-\frac{({\left \lfloor{\ell\beta_1} \right \rfloor}+1)^2(1-\gamma_{\beta_1})^2}{2(\kappa-2)}\bigg)\Bigg) \;.
\end{aligned}$$ If $$\begin{aligned}
\label{eq:bottoml_1_general}
n\ell \;\; \geq \;\; \bigg(\frac{2^{12}e^{8b}}{\chi_{\beta_1}^2}\frac{d^2}{{\ld}^2}\frac{\kappa}{\ell}\bigg) d\log d\;, \;\;
\end{ali
j$, if $i= j$ and $L_i$ is *of type $II$*, or if $L_i$ is *bound of type $I$* with odd $i$, $$m_{i,j}''=\sum_{k=1}^{N}\pi^{(max\{0, k-i\}+max\{0, j-k\}-max\{0, j-i\})}m_{i, k}m_{k, j}';$$
2. For $L_i$ *of type $I^o$* with $i$ even, we write $m_{i, i-1}m_{i-1, i}'+m_{i, i+1}m_{i+1, i}'=\begin{pmatrix} a_i''&b_i''\\ c_i''&d_i'' \end{pmatrix}$ and $m_{i, i-2}m_{i-2, i}'+m_{i, i+2}m_{i+2, i}'=\begin{pmatrix} \tilde{a}_i''&\tilde{b}_i''\\ \tilde{c}_i''&\tilde{d}_i'' \end{pmatrix}$ where $a_i''$ and $\tilde{a}_i''$ are $(n_i-1) \times (n_i-1)$-matrices, etc. Then $$\left\{
\begin{array}{l}
s_i''=s_is_i'+\pi a_i'';\\
y_i''=s_iy_i'+y_i+b_i''+\pi (y_iz_i'+ \tilde{b}_i'');\\
v_i''=v_is_i'+v_i'+c_i''+\pi (z_iv_i'+ \tilde{c}_i'');\\
z_i''=z_i+z_i'+d_i''+\pi (z_iz_i'+ v_iy_i'+ \tilde{d}_i'').
\end{array} \right.$$
3. When $L_i$ is *of type $I^e$* with $i$ even, we write $m_{i, i-1}m_{i-1, i}'+m_{i, i+1}m_{i+1, i}'=
\begin{pmatrix} a_i''&b_i''&c_i''\\ d_i''&e_i''&f_i''\\ g_i''&h_i''&k_i'' \end{pmatrix}$ and $m_{i, i-2}m_{i-2, i}'+m_{i, i+2}m_{i+2, i}'=
\begin{pmatrix} \tilde{a}_i''&\tilde{b}_i''&\tilde{c}_i''\\ \tilde{d}_i''&\tilde{e}_i''&\tilde{f}_i''\\ \tilde{g}_i''&\tilde{h}_i''&\tilde{k}_i'' \end{pmatrix}$ where $a_i''$ and $\tilde{a}_i''$ are $(n_i-2) \times (n_i-2)$-matrices, etc. Then $$\left\{
\begin{array}{l}
s_i''=s_is_i'+\pi (r_iy_i'+t_iv_i'+a_i'');\\
r_i''=s_ir_i'+r_i+\pi (r_ix_i'+t_iu_i'+b_i'') ;\\
t_i''=s_it_i'+r_iz_i'+t_i+c_i''+\pi (t_iw_i'+\tilde{c}_i'');\\
y_i''=y_is_i'+y_i'+z_iv_i'+d_i''+\pi (x_iy_i'+\tilde{d}_i'');\\
x_i''=x_i+x_i'+z_iu_i'+y_ir_i'+e_i''+\pi (x_ix_i'+\tilde{e}_i'');\\
z_i''=z_i+z_i'+f_i''+\pi (y_it_i'+x_iz_i'+z_iw_i'+\tilde{f}_i'');\\
v_i''=v_is_i'+v_i'+\pi (u_iy_i'+w_iv_i'+g_i'');\\
u_i''=u_i+u_i'+v_ir_i'+\pi(u_ix_i'+w_iu_i'+h_i'');\\
w_i''=w_i+w_i'+v_it_i'+u_iz_i'+k_i''+\pi (w_iw_i'+\tilde{k}_i'').
\end{array} \right.$$
4. When $L_i$ is *free of type $I$* with $i$ odd, we write
| 218
| 3,241
| 280
| 215
| null | null |
github_plus_top10pct_by_avg
|
ne{\mho}_\Lambda(\lbrace {\mathit{s}}_n \rbrace))(i) = \mho_\Lambda(\lbrace {\mathit{s}}_n \rbrace(i))$; that is, the ${{i}^{\text{th}}}$ term of the sequential path projection equals the locus projection of the ${{i}^{\text{th}}}$ step.
Analogous assertions are true of the remaining sequential projections: $(\overline{\mho}_{\mathbf{F}}(\lbrace {\mathit{s}}_n \rbrace))(i) = \mho_{\mathbf{F}}(\lbrace {\mathit{s}}_n \rbrace(i))$ and $(\overline{\mho}_{\mathscr{F}}(\lbrace {\mathit{s}}_n \rbrace))(i) = \mho_{\mathscr{F}}(\lbrace {\mathit{s}}_n \rbrace(i))$.
By definition \[D:EXTENDED\_PROJECTION\], the sequential path projection $\overline{\mho}_\Lambda \colon {\mathbb{S}}^{\mathscr{I}} \to \Lambda^{\mathscr{I}}$ is $\overline{\mho}_\Lambda(\lbrace {\mathit{s}}_n \rbrace) = \lbrace (i,\mho_\Lambda({\mathit{s}})) \colon
(i,{\mathit{s}}) \in \lbrace {\mathit{s}}_n \rbrace \rbrace$. The $i^\text{th}$ term of $\overline{\mho}_\Lambda(\lbrace {\mathit{s}}_n \rbrace)$ is $(\overline{\mho}_\Lambda(\lbrace {\mathit{s}}_n \rbrace))(i)$. From set builder notation we observe that the $i^\text{th}$ term of the expression $\lbrace (i,\mho_\Lambda({\mathit{s}})) \colon (i,{\mathit{s}}) \in \lbrace {\mathit{s}}_n \rbrace \rbrace$ is $\mho_\Lambda({\mathit{s}}_i) = \mho_\Lambda(\lbrace {\mathit{s}}_n \rbrace(i))$. Since the sequences are equal, each of their corresponding terms is equal: $(\overline{\mho}_\Lambda(\lbrace {\mathit{s}}_n \rbrace))(i) = \mho_\Lambda(\lbrace {\mathit{s}}_n \rbrace(i))$. The demonstration is similar for the other two sequential projections.
### Iterative operators {#S:ITERATIVE_OPERATOR}
\[D:ITERATIVE\_OPERATOR\] Let ${\mathbb{S}}$ be a step space with basis $\langle \Psi, \Phi \rangle$. Suppose the volatile excitation space $\Psi \setminus \Phi$ is non-empty. An *iterative* operator is a mapping $V \colon {\mathbb{S}} \to {\mathbb{S}}$.
#### Disambiguation
An iterative operator maps a step space into itself. One element of a step space is a reactive state space, having persistent and volati
| 219
| 1,497
| 561
| 289
| 4,000
| 0.768715
|
github_plus_top10pct_by_avg
|
$\mathcal{C}_{i2}$. In this section, we give an overview of existing class subset selection methods for nested dichotomies. Note that methods other than those listed here have been proposed for constructing nested dichotomies; these are not suitable for use with our method and are discussed later in Related Work.
Random Selection
----------------
The most basic form of class subset selection method, originally proposed in [@frank2004ensembles], is to split the set of classes into two subsets such that each member of the space of nested dichotomies has an equal probability of being sampled. This approach has several attractive qualities. It is simple to compute, and does not scale with the size of the dataset, making it suitable for datasets of any size. Furthermore, for an $n$-class problem, the number of possible nested dichotomies is very large, given by the recurrence relation $$\begin{aligned}
T(n) = (2n-3) \times T(n-1)\end{aligned}$$
where $T(1) = 1$. This ensures that, in an ensemble of nested dichotomies, there is a high level of diversity amongst ensemble members. We refer to this function that relates the number of classes to the size of the sample space of nested dichotomies for a given subset selection method as the *growth function*. Growth functions for each method discussed in this section are compared in Figure \[fig:growth\].
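The growth function for random selection is easy to evaluate directly from the recurrence; a minimal sketch (the function name `T` simply mirrors the notation in the text):

```python
def T(n):
    """Size of the sample space of nested dichotomies on n classes
    under random selection: T(n) = (2n - 3) * T(n - 1), with T(1) = 1."""
    t = 1
    for k in range(2, n + 1):
        t *= 2 * k - 3
    return t
```

Even for modest $n$ the sample space is huge (e.g. `T(5) == 105`), which is what gives ensembles of randomly sampled nested dichotomies their diversity.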
Balanced Selection
------------------
An issue with random selection is that it can produce very unbalanced tree structures. While the number of internal nodes (and therefore, binary models) is the same in any nested dichotomy for the same number of classes, an unbalanced tree often implies that the internal binary models are trained on large datasets near the leaves, which has a negative effect on the time taken to train the full model. Deeper subtrees also provide more opportunity for estimation errors to accumulate. Dong *et al.* mitigate this effect by enforcing $\mathcal{C}_i$ to be split into two subsets $\mathcal{C}_{i1}$ and $\mathcal{C}_{i2}$ such that
| 220
| 2,326
| 1,244
| 285
| 3,768
| 0.77019
|
github_plus_top10pct_by_avg
|
rrentTimeMillis();
Date startDate = new Date(now);
// X500Name dnName = new X500Name(subjectDN);
X500Name dnName = new X500Name("C = DE, O = Organization");
BigInteger certSerialNumber = new BigInteger(Long.toString(now));
Calendar calendar = Calendar.getInstance();
calendar.setTime(startDate);
calendar.add(Calendar.YEAR, 1);
Date endDate = calendar.getTime();
String signatureAlgorithm = "SHA512WithRSA";
ContentSigner contentSigner = new JcaContentSignerBuilder(signatureAlgorithm).build(this.privateKey);
JcaX509v3CertificateBuilder certBuilder = new JcaX509v3CertificateBuilder(dnName, certSerialNumber, startDate, endDate, dnName, this.publicKey);
BasicConstraints basicConstraints = new BasicConstraints(true);
certBuilder.addExtension(new ASN1ObjectIdentifier("2.5.29.19"), true, basicConstraints);
x509Certificate = new JcaX509CertificateConverter().setProvider(provider).getCertificate(certBuilder.build(contentSigner));
} catch (CertIOException | CertificateException | OperatorCreationException ex) {
x509Certificate = null;
}
return x509Certificate;
}
}
After debugging, the certificate itself appears to be created correctly. Could it be failing because I use Bouncy Castle as the provider when creating the certificate but JKS when creating and loading the Java keystore? And if not, where could the problem be? I've been investigating for a week and can't get any further; any ideas are appreciated. Regards.
A:
The problem is in the call to the createKeyStore() function followed by the call to loadKeyStore(). It created the file and then loaded it (I believe into a separate copy in memory, so the changes were not being saved).
Solution:
A single method that creates or loads the KeyStore depending on whether the KeyStore file already exists.
public void loadKeyStore(String name, String password) {
try {
| 221
| 109
| 134
| 135
| null | null |
github_plus_top10pct_by_avg
|
\alpha/(2s)} \sqrt{
\frac{\hat\Gamma_n(j,j)}{n} }, \hat\beta_S(j) + z_{\alpha/(2s)} \sqrt{
\frac{\hat\Gamma_n(j,j)}{n} }\right],$$ with $\hat\Gamma$ given by (\[eq::Ga\]) and $z_{\alpha/(2s)}$ the $1 -
\alpha/(2s)$ quantile of a standard normal variate. Notice that we use a Bonferroni correction to guarantee a nominal coverage of $1-\alpha$. Also note that $z_{\alpha/(2s)} = O(\sqrt{\log s})$ for each fixed $\alpha$. The coverage rate for this confidence set is derived in the next result.
\[thm::bonf\] Let $$\label{eq.Delta3.tilde}
\tilde{\Delta}_{n,3} = \min\left\{\Delta_{3,n}, \frac{ \aleph_n
z_{\alpha/(2s)}}{\underline{\sigma}^2 } \left(\sqrt{ 2 + \log(2s ) } + 2
\right) \right\}.$$ There exists a $C>0$, dependent only on $A$, such that $$\inf_{P \in \mathcal{P}_n} \mathbb{P}(\theta \in \tilde C_n) \geq
(1-\alpha) - C \Big( \Delta_{n,1} +
\Delta_{n,2} + \tilde \Delta_{n,3} + \frac{1}{n} \Big).$$
### Asymptotically honest confidence sets: the bootstrap approach {#asymptotically-honest-confidence-sets-the-bootstrap-approach .unnumbered}
To construct the confidence set , one has to compute the estimator $\hat{\Gamma}$ and the quantile $\hat{t}_\alpha$ in , which may be computationally inconvenient. Similarly, the hyper-rectangle requires computing the diagonal entries in $\hat{\Gamma}$.
Below we rely on the bootstrap to construct analogous confidence sets, centered at $\hat{\theta}$, which do not need knowledge of $\hat{\Gamma}$. We let $\hat{\psi}^*$ denote the sample average of an i.i.d. sample of size $n$ from the bootstrap distribution, which is the empirical measure associated to the sample $(W_1,\ldots,W_n)$. We also let $\hat{\theta}^* =
g(\hat{\psi}^*)$.
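The bootstrap construction of $\hat{t}^*_\alpha$ can be sketched as follows (a minimal illustration, not the paper's implementation: the sample, the smooth functional `g`, and all sizes are made-up assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n, B = 200, 500
W = rng.normal(size=(n, 3))            # i.i.d. sample W_1, ..., W_n

def g(psi):                            # hypothetical smooth functional
    return psi ** 2

theta_hat = g(W.mean(axis=0))          # theta_hat = g(psi_hat)

# Resample from the empirical measure and recompute the statistic.
stats = np.empty(B)
for b in range(B):
    Wb = W[rng.integers(0, n, size=n)]
    stats[b] = np.sqrt(n) * np.linalg.norm(g(Wb.mean(axis=0)) - theta_hat)

alpha = 0.1
# Smallest t with P*( sqrt(n) ||theta* - theta_hat|| <= t | W ) >= 1 - alpha.
t_star = np.quantile(stats, 1 - alpha)
```

The point of the construction is that only resampling is needed: neither $\hat{\Gamma}$ nor its diagonal entries are ever computed.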
For a fixed $\alpha \in (0,1)$, let $\hat{t}^*_\alpha$ be the smallest positive number such that $$\mathbb{P}\left( \sqrt{n} \| \hat{\theta}^* - \hat{\theta}\|
\leq \hat{t}^*_\alpha \Big| (W_1,\ldots,W_n) \right) \geq 1 - \alpha.$$ and let $(\tilde{t}^*_j, j =1,\ldots,s)$ be such that $$\mathbb{P}\left( \sqrt{n} |
| 222
| 1,257
| 346
| 302
| 1,670
| 0.786996
|
github_plus_top10pct_by_avg
|
{i-2, i}$ (resp. $y_{i+2, i}$) if $L_{i-2}$ (resp. $L_{i+2}$) is *of type* $\textit{I}$.
4. Assume that $i$ is odd. Consider the following $(1\times n_i)$-matrix: $$\left\{
\begin{array}{l l}
v_i\cdot y_{i, i} & \quad \textit{if $L_i$ is \textit{free of type I}};\\
\delta_{i-1}v_{i-1}\cdot y_{i-1, i}+\delta_{i+1}v_{i+1}\cdot y_{i+1, i} & \quad \textit{if $L_i$ is \textit{bound of type I}}.
\end{array} \right.$$ Here,
- $v_{i}=(0,\cdots, 0, 1, 0)$ of size $1\times n_{i}$.
- $v_{i-1}$ (resp. $v_{i+1}$)$=(0,\cdots, 0, 1)$ of size $1\times n_{i-1}$ (resp. $1\times n_{i+1}$).\
Then each entry of the above matrix lies in the ideal $(\pi)$.\
5. Assume that $i$ is odd. Consider the following $(1\times n_i)$-matrix: $$\left\{
\begin{array}{l l}
v_i\cdot {}^ty_{i, i} & \quad \textit{if $L_i$ is \textit{free of type I}};\\
\delta_{i-1}v_{i-1}\cdot {}^ty_{i, i-1}+\delta_{i+1}v_{i+1}\cdot {}^ty_{i, i+1} & \quad \textit{if $L_i$ is \textit{bound of type I}}.
\end{array} \right.$$ Here, $v_{i}, v_{i-1}, v_{i+1}$ are as described in the above Step (d). Then each entry of the above matrix lies in the ideal $(\pi)$.\
The functor $T_3$ is represented by a flat $A$-scheme which is isomorphic to an affine space by Lemma 3.1 of [@C1]. Moreover it is represented by a commutative group scheme since it is closed under addition. So far, we have defined three functors $T_1, T_2, T_3$ and these are represented by schemes. Therefore, we can talk about their $\bar{\kappa}$-points.
We identify $T_m$ with $T_1(\bar{\kappa})$ and $T_{\rho(m)}$ with $T_2(\bar{\kappa})$. The map $\rho_{\ast, m}:T_m \rightarrow T_{\rho(m)}$ is then $X \mapsto \sigma(m^t)\cdot h\cdot X + \sigma(X^t)\cdot h\cdot m$. For an explanation of these identifications and the map, we refer to the argument from the second half of page 475 to the top of page 477 in [@C2]. The explicit computation of the map is also given in [@C2], and we reproduce it here.
We
| 223
| 2,256
| 521
| 258
| 2,274
| 0.781219
|
github_plus_top10pct_by_avg
|
each cell by the complete number of transitions in the dataset. We illustrate these matrices as heatmaps to get insights into the most common transitions in the complete datasets. For tractability, we focus on a first-order analysis and turn to higher-order patterns later on.
{width="\textwidth"}
The heatmaps are illustrated in Figure \[fig:heatmaps\]. Predominantly, we can observe that self transitions seem to be very common as we can see from the high transition counts in the diagonals of the matrices. This means that users regularly seem to stay in the same topic while they navigate the Web[^15]. For the Wikigame (A) we can observe that the categories *Culture* and *Politics* are the most visited topics throughout the navigational paths. Most of the time the navigational paths start with a page belonging to the *People* topic, visible as the dark red cell from *RESET* to *People* (remember that the *RESET* state marks both the start and end of a path - see Section “”). However, as this is a game-based goal-oriented navigation scenario, the start node is always predefined. In our second goal-oriented navigation dataset (B) we can see that the paths are dominated by transitions from and to the categories *Science* and *Geography* and there are fewer transitions between other topics. In our MSNBC dataset (C) we can observe that most of the time users remain in the same topic while they navigate and globally no topic changes are dominant. This may be an artifact of the free navigation users practice on MSNBC. Perhaps unsurprisingly, users start with the frontpage most of the time while navigating but do not necessarily come back to it in the end.
{ref-type="fig"}*B*) and had a trend to have higher fasting insulin levels than their WT counterparts (data not shown), suggesting an alteration of their whole body insulin sensitivity. To address this last question, we performed conscious EU clamps coupled with \[^3^H\]glucose infusion in β-SG null and WT mice ([Fig. 2](#F2){ref-type="fig"}, *C--E*). EU clamps indicated that the glucose clearance was significantly diminished in the β-SG null mice as compared with the WT controls (22% decrease with *p* = 0.029, data not shown), confirming the glucose tolerance test data. The whole body glucose infusion rate was 30% lower (*p* = 0.03) in the β-SG null mice than in the controls ([Fig. 2](#F2){ref-type="fig"}*C*), demonstrating that the null animals were insulin-resistant. Interestingly, insulin resistance was accompanied by decreased insulin sensitivity in skeletal muscle only (35% decrease in insulin-stimulated glucose uptake in skeletal muscle of β-SG null mice as compared with that of WT age-matched controls, *p* = 0.016) ([Fig. 2](#F2){ref-type="fig"}*D*). Insulin sensit
| 225
| 2,651
| 1,815
| 312
| null | null |
github_plus_top10pct_by_avg
|
data set. A Phred value of 40 corresponds to a probability of 0.0001 that a read is incorrect. In contrast, the position is homogeneous in the isogenic reference genome. The same phenomenon was observed at other heterogeneity sites, including those at the genes encoding the sulfatase family protein, the lipoate-protein ligase A family protein, the penicillin-binding protein 3, the lantibiotic epidermin biosynthesis protein EpiC, the oxacillin resistance-related FmtC protein, and the putative fibronectin/fibrinogen binding protein. Furthermore, a majority of the heterogeneity sites are located in the single-copy DNA fragment in the isogenic reference genome.
######
Characterization of the genetic heterogeneity Sites and SNP in large gene families
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Maq Chrom Position Genotype profile at selected loci of SRX007711 Genotype profile at selected loci of FPR3757 Read Depth SRX007711 Mean Phred Values Max Phred Value Functional Description
----- ----------------------------------------------------------------------------------------------------------------------- ------------------------------------------------ ---------------------------------------------- ------------ ----------------------------- ----------------- ---------------------------------------------------------
I. Heterogeneity sites that passed the SNPfilter and have an average of per-base Phred value greater than 13
| 226
| 1,479
| 937
| 356
| null | null |
github_plus_top10pct_by_avg
|
------------ ------- ------- ------- ------- ------- -------
: Transferability of attacks between LeNet and triplet network.[]{data-label="table:transferability_classifier_detector"}
Jointly Fooling Classifier and Detector
---------------------------------------
If an adversary is unaware that a detector is in place, the task of detecting adversarial examples is much easier. To stay consistent with the white-box scenario considered in previous sections, we assume that the adversary is aware that a detector is in place, so they choose to jointly optimize fooling the VAE-defended classifier and detector. We follow the approach described in [@Carlini2017] where we add an additional output as follows
$$G(x)_i =
\begin{cases}
Z_F(x)_i & \text{if $i \leq N$} \\
(Z_D(x) + 1) \cdot \max\limits_{j} Z_F(x)_j & \text{if $i=N + 1$}
\end{cases}$$
where $Z_D(x)$ is the logit output by the detector, and $Z_F(x)$ is the logit output by the classifier. Table \[table:lenet\_detector\] shows the effectiveness of combining the detector and VAE-defended classifier. The undefended attack success rate for the CW2 attack is lower than that in Table \[table:transferability\_classifier\_detector\], probably because the gradient signal when attacking the joint model is weaker. Overall, less than 7% (0.702 - 0.635) of the perturbed images created using CW2 fool the combination of VAE defense and detector.
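The combined output $G(x)$ can be written down directly from the definition above; a minimal sketch (the logit values in the usage comment are made up):

```python
import numpy as np

def joint_logits(z_f, z_d):
    """Combine classifier logits z_f (length N) with a scalar detector
    logit z_d into the N+1 outputs G(x): the extra class N+1 gets
    (z_d + 1) * max_j z_f_j, so (assuming the top classifier logit is
    positive) it wins exactly when the detector logit is positive."""
    g = np.empty(len(z_f) + 1)
    g[:-1] = z_f
    g[-1] = (z_d + 1.0) * z_f.max()
    return g
```

For example, with `z_f = [1.0, 2.0, 3.0]` and `z_d = 0.5`, the extra output is `4.5` and class $N+1$ (the "adversarial" class) is the argmax, so an attacker must drive $Z_D$ negative while also changing the classifier's prediction.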
------------------------------ -------- --------- ------- ---------- ------------ --------------- ------------
Attack   Undef.   Determ.   Stoc.   Original   Undefended   Deterministic   Stochastic
FGS 0.197 0.178 0.179 0.906 0.941 0.957 0.962
IGS 0.323 0.265 0.146 0.903 0.938 0.949 0.967
| 227
| 734
| 555
| 305
| null | null |
github_plus_top10pct_by_avg
|
space dependent operators [@qc-bracket; @kcmqc]: The action of the operator $J$ in Eq. (\[eq:qc-l\]) can build and destroy coherence in the system by creating and destroying superposition of states. As explained above, this is a feature of a non-linear theory. Such a non-linear character is simply hidden in the operator version of quantum-classical dynamics and clearly manifested by the wave picture of the quantum-classical evolution, which has been introduced in this paper.
Since Eqs. (\[eq:c\]) and (\[eq:cstar\]) are non-linear, their numerical integration requires one either to adopt an iterative self-consistent procedure (in which one makes a first guess of $\rho_{\alpha\alpha^{\prime}}$, as dictated by Eq. (\[eq:rho-ansatz-ad\]), calculates the evolved $C_{\alpha}^{\iota}(X,t)$ and $C_{\alpha^{\prime}}^{\iota *}(X,t)$, and then iterates until numerical convergence is obtained) or to choose a definite form for $\rho_{\alpha\alpha^{\prime}}^G$, following physical intuition, and then to calculate the time evolution according to the form of Eqs. (\[eq:c\]) and (\[eq:cstar\]) obtained by using $\rho_{\alpha\alpha^{\prime}}^G$. This last method is already known within the Wigner formulation of quantum mechanics [@lee] as the method of *Wigner trajectories* [@wignertraj]. It is also important to find an importance sampling scheme for the phase space integral in Eq. (\[eq:qc-ave-ad\]). Such a sampling scheme may depend on the specific form $\chi_{\alpha\alpha^{\prime}}$ of the observable. It is interesting to note that Eqs. (\[eq:c\]), (\[eq:cstar\]), and (\[eq:qc-ave-ad\]) can be used to address both equilibrium and non-equilibrium problems on the same footing. However, the dynamical picture provided by Eqs. (\[eq:c\]) and (\[eq:cstar\]) is very different both from that of the usual surface-hopping schemes [@tully] and from that of the nonadiabatic evolution of quantum-classical operators [@kapral]. In order to appreciate this, for simplicity, one can consider
| 228
| 97
| 472
| 299
| null | null |
github_plus_top10pct_by_avg
|
);
for (var i in all) {
var cur = all[i];
if (cur.getAttribute('class') === "linkclass") {
return cur.getAttribute('href');
}
}
return undefined;
})();
Note: If there is only ever one element of that class it would be much more efficient to give the element a unique id instead of a class. The code would then be much simpler
var url = document.getElementById('theUniqueId').getAttribute('href');
Q:
Prove by induction that $7^{2n}-48n-1$ is divisible by 2304
$$P(n):2304\mid7^{2n}-48n-1$$
I've done the base case; $P(1)$ is true because the expression then evaluates to zero, which is divisible by 2304. Now I'm stuck on the inductive step: proving $P(m+1)$ true if $P(m)$ is true. I do know this though:
$$7^{2m+2}-48(m+1)-1
=49\cdot7^{2m}-48m-49$$
A:
$$7^{2(m+1)} - 48(m+1) - 1 = 49 \cdot 7^{2m} - 48m - 49 = 49\left(7^{2m} - 48m - 1\right) + 2304m$$
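A quick numeric check of both the inductive identity and the divisibility claim for small $n$ (note $2304 = 48^2$, which is why the binomial expansion of $49^n = (1+48)^n$ leaves only the $1 + 48n$ terms modulo $2304$):

```python
# Verify 7^(2(m+1)) - 48(m+1) - 1 = 49 * (7^(2m) - 48m - 1) + 2304 m,
# and that 7^(2m) - 48m - 1 is divisible by 2304 = 48^2.
for m in range(1, 50):
    lhs = 7 ** (2 * (m + 1)) - 48 * (m + 1) - 1
    rhs = 49 * (7 ** (2 * m) - 48 * m - 1) + 2304 * m
    assert lhs == rhs                                 # the inductive identity
    assert (7 ** (2 * m) - 48 * m - 1) % 2304 == 0    # divisibility
```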
Q:
How can I populate the details of all the products related to opportunity
I have to render a VF page as a PDF. It should display opportunity fields and a table showing related opportunity product details. I am getting a table for the opportunity, but I am not able to populate the details of all the products related to the opportunity. The table should populate the following opportunity product fields:
a) Date
b) Discount
c) List Price
d) Product Code
e) Quantity
f) Sales Price
g) Subtotal
h) Total Price
Can anyone help me out with this.
A:
<apex:page standardController="opportunity" renderAs="pdf">
<apex:form>
<apex:pageBlock>
<apex:pageBlockTable value="{!opportunity.opportunityLineitems}" var="product">
<apex:column value="{!product.Name}"/>
</apex:pageBlockTable>
</apex:pageBlock>
</apex:form>
</apex:page>
You need to use {!opportunity.opportunityLineitems}
Q:
Writing Html file with java duplicates the entry
I have a program to do some calculations in excel and writing the output in a table tag in html file. I am adding rows dynamically at runtim
| 229
| 22
| 212
| 177
| 128
| 0.823887
|
github_plus_top10pct_by_avg
|
erically search for HTML elements based on name and type. I'd approach this by selecting how to search for the element name based on a type parameter. The example below assumes that the target cell (in row sourceRow and column sourceCol) contains the element name, i.e. "media-body", and the cell to the right contains its type, i.e. "ClassName".
Public Function GetElement(targetSheet As Worksheet, doc As HTMLDocument, _
sourceRow As Long, sourceCol As Long) As IHTMLElement
With targetSheet
'Get the element name from the passed cell.
Dim elementName As String
elementName = .Cells(sourceRow, sourceCol)
'Get the element type from the adjacent cell.
Dim elementType As String
elementType = .Cells(sourceRow, sourceCol + 1)
Select Case elementType
Case "ClassName"
'getElementsByClassName returns a collection, not a single
'IHTMLElement, so take the first match
Set GetElement = doc.getElementsByClassName(elementName)(0)
Case "Id"
Set GetElement = doc.getElementById(elementName)
Case "Name"
'getElementsByName also returns a collection
Set GetElement = doc.getElementsByName(elementName)(0)
'...
End Select
End With
End Function
This could just as easily be accomplished with a comma delimited string in a single cell - something like "media-body,ClassName", or several other methods but this is the direction I'd go.
Q:
C# Webbrowser navigating links in order
I'm trying to teach myself C# and to start I'm trying to convert a program I originally wrote in Autoit.
I'm using a Windows Forms application, and the program is supposed to take one or two links as input, navigate to those two pages, grab some links from a table, then visit each of those pages to grab some content.
If only one link is entered, it goes to that page and grabs the links from the table like it is supposed to. If two links are entered, it only grabs the links from the second table.
So if two links are passed this method
private void getPageURLList(string site1, string site2)
{
getPageURLLi
| 230
| 5,930
| 82
| 58
| 177
| 0.820483
|
github_plus_top10pct_by_avg
|
$0.21$ $0.05\,\imath$ $-0.83\,\imath$ $0.20$ $0.34$
: \[tab:01\] Coupling constants in the three different frequency ranges (i)-(iii). According to Eq. (\[eq:s15\]), $\lambda^{\rm exp}_{\rm 50\Omega}$ and $\lambda^{\rm fit}_{\rm 50\Omega}$ should be compared to $\lambda_W=\sqrt{\lambda_{\rm oe}\lambda_{\rm hw}}$, see main text for discussion.
Finally, we perform a check on the coupling constants based on Eq. (\[eq:s15\]). Accordingly, the square root of the product of the coupling constants for open-end and hard-wall reflection should give $\lambda_{50\Omega}$. Table \[tab:01\] shows that for the frequency ranges (ii) and (iii) there is indeed good agreement between $\sqrt{\lambda_{\rm oe}\lambda_{\rm hw}}$, $\lambda^{\rm fit}_{\rm 50\Omega}$ and $\lambda^{\rm exp}_{\rm 50\Omega}$. In case (i), $\sqrt{\lambda_{\rm oe}\lambda_{\rm hw}}$ agrees quite well with the experimental parameter $\lambda^{\rm exp}_{\rm 50\Omega}$, but the fitting parameter is much larger. This deviation reconfirms our arguments presented in the above discussion of the fidelity plot shown in Fig. \[fig:03\](i).
Conclusions {#sec:conclusions}
===========
In this work, we have studied the influence of the coupling to the continuum on the decay of fidelity. This complements previous experiments of our group, where the fidelity decay under the influence of various types of geometrical perturbations was studied [@sch05b; @hoeh08a; @sch05d; @bod09a] but for closed systems exclusively. To get rid of an overall absorption we used the concept of scattering fidelity introduced by us previously [@sch05b], defined as the parametric cross-correlation function of $S$-matrix elements normalized to the corresponding autocorrelation function.
On the theoretical side we have developed a model description of the fidelity decay in terms of a modified VWZ approach. The parametric cross-correlation function of $S$-matrix elements for two different $\lambda \ne \lambda'$ can be reduced to an autoc
| 231
| 181
| 856
| 340
| null | null |
github_plus_top10pct_by_avg
|
}.\end{aligned}$$ Here $v_i$ is the standard vector which corresponds to $q_i$. Similarly, for the bases $\langle v_5, v_6, v_7\rangle$ and $\langle v_8, v_{9}, v_{10}\rangle$ we have the following matrices respectively: $$\begin{pmatrix}
\frac{Q-3}{(Q-1)(Q-4)} &\frac{Q-5}{3(Q-4)} &\frac{2(Q-2)}{3(Q-1)}\\
\frac{Q-3}{(Q-1)(Q-4)} &\frac{Q-5}{3(Q-4)} &\frac{2(Q-2)}{3(Q-1)}\\
\frac{Q-3}{(Q-1)(Q-4)} &\frac{Q-5}{3(Q-4)} &\frac{2(Q-2)}{3(Q-1)}
\end{pmatrix},\quad
\begin{pmatrix}
\frac{Q-1}{Q(Q-3)} &\frac{2(Q-4)}{3(Q-3)} &\frac{Q-1}{3Q}\\
\frac{Q-1}{Q(Q-3)} &\frac{2(Q-4)}{3(Q-3)} &\frac{Q-1}{3Q}\\
\frac{Q-1}{Q(Q-3)} &\frac{2(Q-4)}{3(Q-3)} &\frac{Q-1}{3Q}
\end{pmatrix}.$$
Definition of $\rho_{{\mbox{\boldmath $\alpha$}}}(s_i)$
-------------------------------------------------------
Finally, we define linear maps for $s_i$. Unfortunately, we do not have a uniform description for $\rho_{{\mbox{\boldmath $\alpha$}}}(s_i)$, except for “non-reductive” paths. So first we define $\rho_{{\mbox{\boldmath $\alpha$}}}(s_i)$ for the non-reductive paths. Then we define $\rho_{{\mbox{\boldmath $\alpha$}}}(s_1)$ and $\rho_{{\mbox{\boldmath $\alpha$}}}(s_2)$ for “reductive” paths one by one.
### Non-Reductive Case {#non-reductive-case .unnumbered}
In the following, we use notation $\mu{\mbox{$\vartriangleleft$}}\lambda$ if a Young diagram $\lambda$ is obtained from a Young diagram $\mu$ by adding one box.
For $1\leq j\leq i$, let ${\nu}$, ${\mu}$, ${\lambda}$ be Young diagrams of size $j-1$, $j$ and $j+1$ respectively such that $\nu{\mbox{$\vartriangleleft$}}\mu{\mbox{$\vartriangleleft$}}\lambda$. If a tableau $P$ of $\mathbb{T}({\mbox{\boldmath $\alpha$}})$ goes through $\widetilde{\nu}$, $\widetilde{\mu}$ and $\widetilde{\lambda}$ at the $(i-2)$-nd, the $(i-1)$-st and the $i$-th coordinate, then $P$ goes through $\widehat{\nu}$ and $\widehat{\mu}$ at the $(i-3/2)$-th and the $(i-1/2)$-th coordinate. We call such a tableau [*non-reductive*]{} at $i$. If a tableau $P$ does not satisfy the property above
| 232
| 908
| 470
| 291
| 2,443
| 0.779713
|
github_plus_top10pct_by_avg
|
ng at least once. We solve the convex program for $\theta$ restricted to the items that appear in rank-breaking at least once. The second figure of Figure \[fig:bottom\_l\_1\] is averaged over $1000$ instances.
![Under the bottom-$\ell$ separators scenario, accuracy is good only for the bottom 400 items (left). As predicted by Theorem \[thm:bottoml\_upperbound\], the mean squared error on the bottom 400 items scales as $1/n$, whereas the overall mean squared error does not decay (right).[]{data-label="fig:bottom_l_1"}](Plot9-eps-converted-to.pdf){width=".3\textwidth"} ![](Plot10-eps-converted-to.pdf){width=".3\textwidth"}
We make this observation precise in the following theorem. Applying rank-breaking only to those weakest $\ld$ items, we prove an upper bound on the achieved error rate that depends on the choice of $\ld$. Without loss of generality, we suppose the items are sorted such that $\theta^*_1 \leq \theta_2^* \leq \cdots \leq \theta_d^*$. For a choice of $\ld = \ell d/ (2 \kappa) $, we denote the weakest $\ld$ items by $\ltheta^* \in \reals^{\ld}$ such that $\ltheta_i^* = \theta^*_i - (1/\ld)\sum_{\i = 1}^{\ld} \theta^*_{\i}$, for $i \in [\ld]$. Since $\theta^* \in \Omega_b$, $\ltheta^* \in [-2b,2b]^{\ld}$. The space of all possible preference vectors for $[\ld]$ items is given by $\lOmega = \{ \ltheta \in \reals^{\ld} : \sum_{i =1}^{\ld} \ltheta_i = 0\}$ and $\lOmega_{2b} = \lOmega \cap [-2b,2b]^{\ld}$.
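The recentering of the weakest $\ld$ items can be sketched as follows (illustrative values only: $\ell$, $\kappa$, $d$, $b$ and the preference vector are made up, with $\ld = \ell d/(2\kappa)$ as defined above):

```python
import numpy as np

rng = np.random.default_rng(0)
ell, kappa, d, b = 8, 16, 1000, 1.0

theta = np.sort(rng.uniform(-b, b, size=d))  # sorted: theta_1 <= ... <= theta_d
theta -= theta.mean()                        # theta in Omega_b (sums to zero)

ld = ell * d // (2 * kappa)                  # \ld = ell * d / (2 * kappa)
weak = theta[:ld]                            # the weakest \ld items
ltheta = weak - weak.mean()                  # recenter so ltheta sums to zero
```

Each entry of `ltheta` is a difference of two values in $[-b, b]$ plus a mean shift, so it stays in $[-2b, 2b]$, matching the claim $\ltheta^* \in [-2b,2b]^{\ld}$.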
Although the analysis can be easily generalized, to simplify notations, we fix $\kappa_j = \kappa$ and $\ell_j = \ell$ and assume that the comparison sets $S_j$, $|S_j|
| 233
| 141
| 136
| 259
| 1,223
| 0.792478
|
github_plus_top10pct_by_avg
|
the others get other fourth roots of unity.
The ${\mathbb Z}_4$- and GSO-invariant states in this sector are of the form
  -----------------------------------------------------------------------------------------------------------------------------------------------
  State                                                                                      Count
  -----------------------------------------------------------------------------------------  --------------------------------------------------------------------------------------------------------------
  $|m=0,4,8\rangle \otimes \left( \psi^{1-2}_{-1/2}, \overline{\psi}^{1-2}_{-1/2} \right)$   spacetime vectors, in the ${\bf 1}$, ${\bf 1}$, $\wedge^4 {\bf 8} = {\bf 70}$ of $su(8)$

  $|m=2,6\rangle \otimes \left( \psi^{3-4}_{-1/2}, \overline{\psi}^{3-4}_{-1/2} \right)$     1 hypermultiplet in $\wedge^2 {\bf 8} = {\bf 28}$, $\wedge^2 {\bf \overline{8}} = {\bf \overline{28}}$ of $su(8)$
  -----------------------------------------------------------------------------------------------------------------------------------------------
The (R,NS) sector in $k=2$ is closely related. Here, fields have the following boundary conditions: $$\begin{aligned}
X^{1-4}(\sigma + 2\pi) & = & + X^{1-4}(\sigma), \\
\psi^{1-4}(\sigma + 2 \pi) & = & - \psi^{1-4}(\sigma), \\
\lambda^{1-8}(\sigma + 2 \pi) & = & + \lambda^{1-8}(\sigma), \\
\lambda^{9-16}(\sigma + 2 \pi) & = & - \lambda^{9-16}(\sigma).\end{aligned}$$ Just as in the (NS,NS) sector, $E_{\rm left} = 0$ and $E_{\rm right} = -1/2$. Here, the left Fock vacua form a spinor of the low-energy $so(16)$.
The ${\mathbb Z}_4$-invariant states in this sector are of the form
------------------------------------------------------------------------------------------------------
State Count
----------------------------------------------------- ------------------------------------------------
$(\mbox{spinor}) \otimes \left( \psi^{1-2}
| 234
| 1,709
| 490
| 310
| null | null |
github_plus_top10pct_by_avg
|
ed by $B$.
In Section \[selfdualsection\], we prove a result of independent interest, Theorem \[selfdual\], that finds the unavoidable minors for arbitrary large matroids that have two disjoint bases. A corollary is the following, which finds one of two specific minors in any matroid that is not close to being ‘trivial’.
\[unavoidable\] Let $s \ge 0$ be an integer and $k = 4^{4^{2s^2}}$. Then, for each matroid $M$, either
- $M$ has a $U_{s,2s}$-minor,
- $M$ has a minor isomorphic to the direct sum of $s$ copies of $U_{1,2}$, or
- there is a distance-$k$ perturbation of $M$ whose elements are all loops or coloops.
Structure Theory {#structure-theory .unnumbered}
----------------
Theorems \[main1\] and \[main2\] fit into a larger, mostly conjectural, regime of structure theory in minor-closed classes omitting a uniform matroid. The first of these conjectures predicts the unavoidable minors for very highly connected matroids. A matroid is *vertically $k$-connected* if, for every $A \subseteq E(M)$ with $\lambda_M(A) < k-1$, either $A$ or $E(M)-A$ is spanning in $M$. The following conjecture was posed in \[\[highlyconnected\]\].
\[highconn\] For all $n \ge 2$ there is an integer $k$ such that, if $M$ is a vertically $k$-connected matroid with $|M|\ge 2k$, then $M$ or $M^*$ has a minor isomorphic to one of $M(K_n),B(K_n),$ or $U_{n,2n}$.
While $M(K_n)^*$ and $B(K_n)^*$ are not even vertically $4$-connected themselves, they do contain minors with high vertical connectivity; indeed, for each $k$ there is a graph $G$ so that $M(G)^*$ and $B(G)^*$ are both vertically $k$-connected. To obtain such a graph one can take a $k$-regular Cayley graph with girth at least $k$ (see Margulis \[\[margulis\]\] for the construction); by \[\[gr\], Theorem 3.4.2\], these graphs are $k$-connected.
In any case, the dual outcomes in Conjecture \[highconn\] are perhaps not needed if $M$ has large co-rank.
For all $n \ge 2$ there is an integer $k$ so that, if $M$ is a vertically $k$-connected matroid with $|M|\ge
T_CODE AND A.HOURS_SUMMARY = B.HOURS_SUMMARY
WHERE A.EM_NUMBER = EMPLOYEE_ID OR B.EM_NUMBER = EMPLOYEE_ID
UNION
SELECT COUNT(*) AS ERROR_COUNT
FROM MPRLIB.V_REQHOURSUMM A EXCEPTION JOIN MPRLIB.V_TSHOURSUMM B
ON A.EM_NUMBER = B.EM_NUMBER AND A.TIMESHEET_CODE = B.TIMESHEET_CODE AND A.HOURS_SUMMARY = B.HOURS_SUMMARY
WHERE A.EM_NUMBER = EMPLOYEE_ID OR B.EM_NUMBER = EMPLOYEE_ID ) TABLE
It seems to work, but seems... excessive. Thoughts? Is there a better way?
Q:
Independence of functions of order statistics when the random variables are uniformly distributed
Let $X_1$, $X_2$, …, $X_n$ be $n$ i.i.d. random variables with pdf $f(x)$ and cdf $F(x)$ on the interval $[0,1]$, and suppose the $X_j$ are uniformly distributed. Let $X_{i:n}$ be the $i^{th}$ order statistic, so that $X_{1:n} \leq X_{2:n} \leq ... \leq X_{n:n}$. I wish to compute the expected value $\mathbb{E} [\frac{X_{(k-1):n} X_{i:n}}{X_{k:n}} ]$ for any $ k < i \leq n$. So the question is: are $\frac{X_{(k-1):n}}{X_{k:n}}$ and $X_{i:n}$ independent? Because if they are not, then the problem is non-trivial. By a standard result in the theory of order statistics, we already know that for any $i \leq n$, $\frac{X_{(i-1):n}}{X_{i:n}}$ and $X_{i:n}$ are independent.
A:
It is easy to show that given $X_{i:n} = x$, the order statistics $X_{1:n}, \dots, X_{(i-1):n}$ have the same joint distribution as the order statistics $X_{1:(i-1)}, \dots, X_{(i-1):(i-1)}$ of a sample from the uniform distribution on $[0,x]$, which, in turn, have the same distribution as $x$ times the order statistics of a sample from $[0,1]$. It follows, in particular, that for $k<i$, $\frac{X_{(k-1):n}}{X_{k:n}}$ is indeed independent of $X_{i:n}$ and
$$
\mathrm{E}\Big[\frac{X_{(k-1):n} X_{i:n}}{X_{k:n}} \Big] = \mathrm{E}\Big[\frac{X_{(k-1):n} }{X_{k:n}} \Big]\mathrm{E}[X_{i:n}] = \frac{k-1}k \cdot \frac{i}{n+1}.
$$
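The product formula can be sanity-checked by simulation; a quick Monte Carlo sketch (the parameter values are illustrative):

```python
import random

# Monte Carlo check that, for uniform order statistics with k < i <= n,
#   E[ X_{(k-1):n} X_{i:n} / X_{k:n} ] = (k-1)/k * i/(n+1).
rng = random.Random(0)
n, k, i = 5, 3, 5          # illustrative choice with k < i <= n
trials = 200_000
acc = 0.0
for _ in range(trials):
    xs = sorted(rng.random() for _ in range(n))   # X_{1:n} <= ... <= X_{n:n}
    acc += xs[k - 2] * xs[i - 1] / xs[k - 1]      # 0-based list indexing
estimate = acc / trials
theory = (k - 1) / k * i / (n + 1)                # = 2/3 * 5/6 = 5/9
```

With 200,000 trials the estimate lands within about a standard error of the closed-form value.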
Q:
javascript delete object safe for memory leak
This is my code. I do not know whether it is good for preventing memory leaks. How can I test this?
ere in the url:[your_servlet_path]
Q:
How to read tabular data from text file - Perl
We have a text file containing data in both plain and tabular form. I can read the plain data, but I am unable to read the data that is in tabular form.
Can anyone please help me read and extract the tabular data?
Text File Data :
225 Top Hitters
RT(ms) BRT(ms) TL(ms) l_mig_a l_mig_w b_mig_a b_mig_w l_b_mig_a l_b_mig_w b_l_mig_a b_l_mig_w
-------- --------- -------- --------- --------- --------- --------- ----------- ----------- ----------- -----------
11078.9 141.3 3754.8 418 7325 0 0 0 4 0 4
Total active inter-cluster migrations: 0
Total wakeup inter-cluster migrations: 8
Total active migrations: 418
Total wakeup migrations: 7333
My Code:
use strict;
use warnings;
my ($RT,$BRT,$TL ,$l_mig_a,$l_mig_w,$b_mig_a,$b_mig_w,$l_b_mig_a,$l_b_mig_w,$b_l_mig_a,$b_l_mig_w);
open (FH, "<", "file.txt") or die "could not open: $!";
my @lines = <FH>;
close FH;
foreach my $line (@lines) {
print "$line \n";
}
Expected Output :
$RT = 11078.9
$BRT = 141.3
$TL = 3754.8
$l_mig_a = 418
$l_mig_w = 7325
$b_mig_a = 0
$b_mig_w = 0
$l_b_mig_a = 0
$l_b_mig_w = 4
$b_l_mig_a = 0
$b_l_mig_w = 4
A:
You can "slurp" the whole file into a single string variable and use a regular expression to parse the tabular data. Below is a sample implementation, with the test data bundled with the code into a single script/file and a subroutine to simplify generating the regular expression.
use strict;
use warnings;
my $text;
{
# put all lines into single string
local $/ = undef;
$text = <DATA>;
}
my $regex = &make_regex(qw{RT(ms) BRT(ms) TL(ms) l_mig_a l_mig_w b_mig_a b_mig_w l_b_mig_a l_b_mig_w b_l_mig_a b_l_mig_w});
print "REGEX-START\n$regex\nREGEX-END\n"; # Debugging: show the generated regular expression
my ($RT,$BRT,$
MCMC-like algorithms. We also remark that @hoffer2017train proposed a different way of injecting noise, multiplying the sampled gradient by suitably scaled Gaussian noise.
[Satisfying the Assumptions]{}\[ss:example\_ass\]
Before presenting the experimental results, we remark on a particular way that a function $U(w)$ defined in , along with the stochastic sequence $w_k$ defined in , can satisfy the assumptions in Section \[ss:ass\].
Suppose first that we shift the coordinate system so that $\nabla U(0)=0$. Let us additionally assume that for each $i$, $U_i(w)$ has the form $$\begin{aligned}
U_i(w) = U_i'(w) + V(w),\end{aligned}$$ where $V(w):= m \lrp{\|w\|_2-R/2}^2$ is an $m$-strongly convex regularizer outside a ball of radius $R$, and each $U_i'(w)$ has $\LR$-Lipschitz gradients. Suppose further that $m \geq 4 \cdot \LR$. These additional assumptions make sense when we are only interested in $U(w)$ over $B_R(0)$, so $V(w)$ plays the role of a barrier function that keeps us within $B_R(0)$. Then it can immediately be verified that $U(w)$ satisfies Assumption \[ass:U\_properties\] with $L=m+\LR$.
The noise term $\xi$ in satisfies Assumption \[ass:xi\_properties\].1 by definition, and satisfies Assumption \[ass:xi\_properties\].3 with $L_\xi = \lrp{\sqrt{s} + 2\sigma} L$. Assumption \[ass:xi\_properties\].2 is satisfied if $\zeta(w,\eta)$ is bounded for all $w$, i.e. the sampled gradient does not deviate from the true gradient by more than a constant. We will need to assume directly Assumption \[ass:xi\_properties\].4, as it is a property of the distribution of $\nabla U_i(w)$ for $i=1, \ldots, n$.
![Relationship between test accuracy and the noise covariance of SGD algorithm. In each plot, the dots with the same color correspond to SGD runs with the same batch size but different step sizes.[]{data-label="fig:const_lr_acc_vs_var"}](sgd_runs.pdf){width="1\linewidth"}
[Experiments]{}\[subsection:experiments\]
{width="1.0\linewidth"}
{width="1.0\linewidth"}
In
0.098
mMSE 0.093 0.093 0.093 0.093 0.093 0.093 0.093
BLB($n^{0.6}$) 1.521 1.512 1.538 1.516 1.522 1.530 1.526
BLB($n^{0.8}$) 0.466 0.466 0.472 0.459 0.463 0.467 0.468
SDB($n^{0.6}$) 1.806 1.815 1.803 1.813 1.816 1.806 1.811
SDB($n^{0.8}$) 0.579 0.578 0.576 0.578 0.579 0.577 0.579
TB 0.168 0.167 0.168 0.169 0.167 0.168 0.167
: Lengths of confidence interval for Cases 4-6 in Example \[example2\]
\[table9\]
A real data {#sec4}
===========
In this section, we apply the proposed method to a census income data set, which aims to determine whether a person makes \$50K or more a year. The data can be obtained from <https://archive.ics.uci.edu/ml/datasets/census+income>, with 48,842 observations in total. As in [@Wang2018], the response variable is whether a person’s income exceeds \$50K a year. The explanatory variables are as follows:
- $X_{1}$: age
- $X_{2}$: final weight (Fnlwgt)
- $X_{3}$: highest level of education in numerical form (Education-num)
- $X_{4}$: capital loss (Capital-loss)
- $X_{5}$: hours worked per week (Hours-per-week).
There are 11,687 individuals (23.929%) in the data whose income exceeds \$50K a year. In order to eliminate the effect of scale, we have scaled and centered each explanatory variable to have mean 0 and variance 1. To evaluate the performance of the above methods, we replicate each method 500 times, since these methods split the sample randomly. We report the average estimate and the proportion of replications in which each method rejects the null hypothesis that the regression coefficient is zero.
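The centering and scaling step can be sketched as follows (this assumes the population-variance denominator; the text does not state which convention was used, and the sample values are illustrative):

```python
def standardize(column):
    # Center an explanatory variable to mean 0 and scale to variance 1
    n = len(column)
    mean = sum(column) / n
    var = sum((v - mean) ** 2 for v in column) / n  # population variance
    sd = var ** 0.5
    return [(v - mean) / sd for v in column]

# Illustrative ages, not values from the census data set
z = standardize([38.0, 50.0, 27.0, 45.0])
```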
Table \[table10\] shows the result. The traditional Logistic regression (TLR
type="table"}), FP4 with the previously mapped tags (FP4_Eland) covers 85-98% of Chen_Eland, and the intensity of overlapped peaks is strongly correlated. Thus, it is deemed that FP4 has reproduced Chen_Eland and extended it with novel peaks in different genomic locations. In contrast, FP4 with remapped tags shows relatively lower reproducibility, although peak intensities are still correlated with Chen_Eland except for Esrrb (Figure [1B](#F1){ref-type="fig"}). Similar observations can be found in an independent study \[[@B22]\].
######
Reproducibility of newly detected peaks
Fold Change Overlap of Chen Eland (%) Correlation of Peak Intensity
---------- ------------- --------------------------- ------------------------------- ------ ------- ------- ------- ------- ------ ------ ------ ------
c-Myc 1.01 3.26 2.25 3.41 95.12 78.23 77.53 79.78 1.00 0.97 0.98 0.98
E2f1 1.03 1.34 1.36 1.40 85.41 74.67 74.83 75.70 1.00 0.98 0.99 0.99
Esrrb 2.88 3.12 3.29 3.93 99.10 88.62 89.01 90.22 1.00 0.82 0.83 0.83
Klf4 2.30 3.56 3.54 3.83 97.00 91.66 90.90 92.48 1.00 0.94 0.95 0.95
Nanog 1.01 2.15 1.84 2.42 97.93 87.93 90.06 91.69 1.00 0.97 0.99 0.99
n-Myc 1.86 3.24 3.59 3.60 95.39 84.22 85.71 86.15 1.00 0.97 0.97 0.97
Oct4 2.39 6.21 6.72 6.78 96.89 84.26 84.53 87.58 1.00 0.97 0.98 0.98
Smad1 1.49 3.19 3.24 3.53 91.5
underwent surgical re-intervention for fibroid-related bleeding between 12 and 24 months (Table [3](#T3){ref-type="table"}): 4 hysterectomies and 2 hysteroscopic myomectomies. Follow-up pathology revealed multiple small fibroids with adenomyosis in four cases (patients 1, 3, 4, and 6), and a possible polyp (patient 5). Pathology studies were not available for Patient 2.
######
Surgical re-interventions (6/124, 4.8%) between 12 and 24 months post procedure
**Pt** **Treated fibroids**^**a**^ **Symp severity**^**b**^ **HRQL**^**c**^ **Reintervention** **Pathology**
----------------------------- -------------------------------------------- -------------------------- ----------------- -------------------- --------------- ---------------------------------------------------- --------------------------------------------------------------------------------------------------
**1** 6 Subserosals 5.6; 4.4; 1.5; 3.6; 1.9; 3.0 53.1 28.1 46.6 89.7 Hysterectomy at 16.5 months Multiple myomas ranging from 0.4 to 4.7 cm; focally irregular endo-myometrial junction
2 Intramurals 4.8: 4.7
1 Subserosal/Intramural 2.7
**2** 4 Intramurals 5.2; 1.8; 1.1; 1.8 68.8 28.1 10.3 82.8 Hysteroscopic myomectomy by resection at 15 months No pathology
**3** 3 Subserosals 5.0; 1.3; 1.9 78.1 43.8
re>
<figure class="pic-6"></figure>
<figure class="pic-7"></figure>
<figure class="pic-8"></figure>
<figure class="pic-9"></figure>
<figure class="pic-10"></figure>
<figure class="pic-11"></figure>
<figure class="pic-12"></figure>
<figure class="pic-13"></figure>
<figure class="pic-14"></figure>
<figure class="pic-15"></figure>
<figure class="pic-16"></figure>
<figure class="pic-17"></figure>
<figure class="pic-18"></figure>
<figure class="pic-19"></figure>
</div>
A:
It has to do with the time set for your animation. When I changed the time to 114s, it cycled through all the images.
If you need to make it faster or slower, you will have to go through and adjust the animation on the figure element and the animation-delay on each .pic-x element manually.
Q:
await method in task
I am working on a document downloader in a Windows Store app, and I have a problem with tasks.
So here is a sample of my code :
Task created and started
...
HttpDownloader httpDownloader = new HttpDownloader(server);
Action<object> action = (object doc ) => httpDownloader.DownloadDocument((Document)doc);
Task t1 = new Task(action,selection.doc);
t1.Start();
...
DownloadDocument method
...
FileSavePicker savePicker = new FileSavePicker();
savePicker.SuggestedStartLocation = PickerLocationId.DocumentsLibrary;
// Dropdown of file types the user can save the file as
savePicker.FileTypeChoices.Add("Application/pdf", new List<string>() { ".pdf" });
// Default file name if the user does not type one in or select a file to replace
savePicker.SuggestedFileName = doc.name+"_"+doc.version;
StorageFile file = await savePicker.PickSaveFileAsync(); // Here an exception is launch.
...
And every time I get :
Element not found (Exception de HRESULT : 0x80070490)
Without the task, my code works fine, but since I want to use tasks to manage the different downloads, I get this error.
A:
Your Action runs on a random pool thread, which is different from your main thread (as scheduled by Task.Start). There you access
$3"/>
<rewrite url="^/Membership/(.+)/(.+)" to="/Membership/Index.aspx?parentf=$1&f=$2"/>
So just reversing the order except that I have kept the first rule in the same position.
A:
Instead of all your posted rules, try this:
<rewrite url="^/Membership/([^/]+)$" to="/Membership/Index.aspx?f=$1"/>
<rewrite url="^/Membership/([^/]+/)*([^/]+)/([^/]+)$" to="/Membership/Index.aspx?parentf=$2&f=$3"/>
This should avoid the rule ambiguity, and has a general rule for arbitrary path depth.
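The matching behavior of these two patterns can be sketched in Python's `re` syntax (this mirrors the rules above; the real rewriting engine's semantics may differ in edge cases such as trailing slashes):

```python
import re

# First rule: a single path segment maps to ?f=...
SINGLE = re.compile(r"^/Membership/([^/]+)$")
# Second rule: the greedy ([^/]+/)* swallows any leading segments,
# leaving the last two segments as parentf and f.
NESTED = re.compile(r"^/Membership/([^/]+/)*([^/]+)/([^/]+)$")

def rewrite(url):
    m = SINGLE.match(url)
    if m:
        return "/Membership/Index.aspx?f=" + m.group(1)
    m = NESTED.match(url)
    if m:
        return "/Membership/Index.aspx?parentf=%s&f=%s" % (m.group(2), m.group(3))
    return url  # no rule matched
```

Note how backtracking makes the starred group yield segments so that the final two captures are always the last two components of the path, whatever the depth.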
Q:
How to loop a string array? class?
Ok, so this program allows the user to input five adjectives and nouns, which it outputs in a paragraph. There is a certain name for this game, usually found in kids' magazines... but the name escapes me right now. Ex. "Mary hopped on the ___ (adjective) horse and flew over the ___ (noun)."
I've created a class for both noun and adjectives.
class noun {
String noun;
noun (String _noun) {
noun = _noun;
}
}
class adjective {
String adjective;
adjective (String _adjective) {
adjective = _adjective;
}
}
ArrayList <adjective> small = new ArrayList <adjective>(5);
ArrayList <noun> office = new ArrayList <noun>(5);
There is some code here between above and below which adds information from textFields into array. Below, is the code that lists nouns and adjectives. Though as of now I'm only working with nouns, and will be incorporating the adjectives into paragraph later.
So I have this.
for (int x=0; x<=noun.length() - 1; x++) { //length is underlined
temp = temp + "paragraph" + noun.get(x).noun + "more paragraph"; //get underlined
}
paraTArea.setText(temp);
Now this has worked before when I was using integers (only I used "size()-1" instead of length) so I'm not sure if the code is freaking because I'm using strings and a class now.
Possible important note: When I did noun "dot" it wanted me to put 'class' after the dot. So I'm a little lost now.
And I just realized it will list all nouns in the place I assigned the nouns in the paragraph...but I'l
cannot interfere with a domain $v$ if no action performed by $u$ can influence subsequent outputs seen by $v$. The system is divided into a number of *domains*, and the allowed information flows between domains are specified by an information flow policy $\rightsquigarrow$, such that $u \rightsquigarrow v$ if information is allowed to flow from a domain $u$ to a domain $v$. Standard noninterference is too strong to model channel-control policies. Thus, intransitive noninterference was introduced; it uses a $sources(\alpha,u)$ function to identify those actions in an action sequence $\alpha$ whose domains may influence the domain $u$. Rushby [@rushby92] gives a standard definition of intransitive noninterference as follows.
$$\label{eq:nonitf}
\begin{aligned}
noninterference \equiv \forall \alpha \ u . (s_0 \lhd \alpha \stackrel{u}{\bumpeq} s_0 \lhd ipurge(\alpha,u))
\end{aligned}$$
where $ipurge(\alpha,u)$, defined in terms of $sources(\alpha,u)$, removes from the action sequence $\alpha$ the actions whose domains cannot interfere with $u$, directly or indirectly. A system is secure for the policy $\rightsquigarrow$ if, for each domain $u$ and each action sequence $\alpha$, the final states of executing $\alpha$ and $\alpha'$ from the initial state $s_0$ are observationally equivalent for $u$, where $\alpha'$ is the result of removing the actions whose domains cannot influence $u$.
The intransitive noninterference is usually chosen to formally verify information flow security of general purpose operating systems or separation kernels [@Murray12].
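A minimal executable sketch of $sources$ and $ipurge$, representing each action directly by its domain (the policy, the domain names, and the function signatures are illustrative, not from the formalization):

```python
def sources(alpha, u, interferes):
    # Rushby's sources(alpha, u): scan the action sequence backwards,
    # collecting every domain that can influence u via a later action.
    src = {u}
    for dom in reversed(alpha):
        if any(interferes(dom, v) for v in src):
            src.add(dom)
    return src

def ipurge(alpha, u, interferes):
    # Remove actions whose domains cannot interfere with u,
    # directly or indirectly.
    if not alpha:
        return []
    head, tail = alpha[0], alpha[1:]
    if head in sources(alpha, u, interferes):
        return [head] + ipurge(tail, u, interferes)
    return ipurge(tail, u, interferes)

# Illustrative intransitive policy: H ~> D and D ~> L, but H !~> L,
# so H may influence L only through the downgrader D.
POLICY = {("H", "D"), ("D", "L")}

def allowed(a, b):
    return a == b or (a, b) in POLICY
```

Under this policy, an H action alone is purged from L's view, while an H action followed by a D action is kept, which is exactly the channel-control behavior that transitive purging cannot express.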
Classical noninterference is concerned with the secrets that events introduce in the system state and that are possibly observed via outputs [@von04]. Although noninterference is adequate for some sorts of applications, there are many others considering the prevention of secret information leakage out of the domains it is intended to be confined to. Language-based information flow security typically considers information leakage and has t
ute-force computational enumeration, while the effect on random-pair selection is estimated.
{width="95.00000%"}
Analysis of error\[sec:theoretical\]
------------------------------------
In this section, we provide a theoretical analysis showing that the performance of each internal binary model is likely to be improved by adopting multiple subset evaluation. We also show empirically that the estimates of the performance improvements are accurate, even when the assumptions are violated.
Let $E$ be a random variable for the training root mean squared error (RMSE) of some classifier for a given pair of class subsets $\mathcal{C}_{i1}$ and $\mathcal{C}_{i2}$, and assume $E \sim N(\mu, \sigma^2)$ for a given dataset under some class subset selection scheme. For a given set of $\lambda$ selections of subsets $\mathcal{S} = \{(\mathcal{C}_{i1}, \mathcal{C}_{i2})_1, \dots, (\mathcal{C}_{i1}, \mathcal{C}_{i2})_\lambda\}$ and corresponding training errors $\mathcal{E} = \{E_1, \dots, E_\lambda\}$, let $\hat{E}_\lambda = \min(\mathcal{E})$. There is no closed-form expression for the expected value of $\hat{E}_\lambda$, the minimum of a set of normally distributed random variables, but an approximation is given by $$\mathbb{E}[\hat{E}_\lambda] \approx \mu + \sigma \Phi^{-1} \Bigg( \frac{1-\alpha}{\lambda-2\alpha + 1}\Bigg) \label{eqn:expected_order_statistics}$$ where $\Phi^{-1}(x)$ is the inverse normal cumulative distribution function [@royston1982algorithm], and the *compromise value* $\alpha$ is the value for $\lambda$ suggested by Harter ([-@harter1961expected]).[^1]
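The approximation can be evaluated with only the standard library; $\alpha = 0.375$ is used below as a representative compromise value (an assumption on our part — the analysis takes $\alpha$ from Harter's tables):

```python
import statistics

def expected_min(mu, sigma, lam, alpha=0.375):
    # Approximate E[min of lam iid N(mu, sigma^2) draws] via the
    # inverse normal CDF, as in the expression above.
    q = (1 - alpha) / (lam - 2 * alpha + 1)
    return mu + sigma * statistics.NormalDist().inv_cdf(q)

approx = [expected_min(0.0, 1.0, lam) for lam in range(1, 6)]
```

For $\lambda = 1$ the quantile is $1/2$ and the approximation recovers $\mu$; it then decreases monotonically as more subsets are evaluated.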
Figure \[fig:norm\_drawn\] illustrates how this expected value changes when increasing values of $\lambda$ from $1$ to $5$. The first two rows show the distribution of $E$ and estimated $\mathbb{E}[\hat{E}_\lambda]$ on the UCI dataset `mfeat-fourier`, for a logistic regression model trained on 1,000 random splits of the class set $\mathcal{C}$. These rows show the training and testing RMSE respectively, using 90% of the data for
lumn{1}{c|}{\ensuremath{a_{1}}} & \multicolumn{1}{c|}{} & \ensuremath{\mathbf{B}_{1,1}=+1} & \ensuremath{b_{1}-\beta_{1}} & \ensuremath{\beta_{1}} & \multicolumn{1}{c|}{\ensuremath{b_{1}}} & \tabularnewline\multicolumn{1}{|c|}{\ensuremath{\mathbf{A}_{1,1}=-1}} & \ensuremath{\alpha_{1}} & \ensuremath{1-a_{1}-\alpha_{1}} & \multicolumn{1}{c|}{\ensuremath{1-a_{1}}} & \multicolumn{1}{c|}{} & \ensuremath{\mathbf{B}_{1,1}=-1} & \ensuremath{\beta_{1}} & \ensuremath{1-b_{1}-\beta_{1}} & \multicolumn{1}{c|}{\ensuremath{1-b_{1}}} & \tabularnewline\cline{1-4} \cline{6-9} & \ensuremath{a_{1}} & \ensuremath{1-a_{1}} & & & & \ensuremath{b_{1}} & \ensuremath{1-b_{1}} & & \tabularnewline\cline{2-3} \cline{7-8} \multicolumn{1}{c}{} & & \multicolumn{1}{c}{} & & & \multicolumn{1}{c}{} & & \multicolumn{1}{c}{} & & \tabularnewline\cline{2-3} \cline{7-8} & \ensuremath{\mathbf{A}_{2,2}=+1} & \ensuremath{\mathbf{A}{}_{2,2}=-1} & & & & \ensuremath{\mathbf{B}_{2,2}=+1} & \ensuremath{\mathbf{B}_{2,2}=-1} & & \tabularnewline\cline{1-4} \cline{6-9} \multicolumn{1}{|c|}{\ensuremath{\mathbf{A}_{2,1}=+1}} & \ensuremath{a_{2}-\alpha_{2}} & \ensuremath{\alpha_{2}} & \multicolumn{1}{c|}{\ensuremath{a_{2}}} & \multicolumn{1}{c|}{} & \ensuremath{\mathbf{B}_{1,2}=+1} & \ensuremath{b_{2}-\beta_{2}} & \ensuremath{\beta_{2}} & \multicolumn{1}{c|}{\ensuremath{b_{2}}} & \tabularnewline\multicolumn{1}{|c|}{\ensuremath{\mathbf{A}_{2,1}=-1}} & \ensuremath{\alpha_{2}} & \ensuremath{1-a_{2}-\alpha_{2}} & \multicolumn{1}{c|}{\ensuremath{1-a_{2}}} & \multicolumn{1}{c|}{} & \ensuremath{\mathbf{B}_{1,2}=-1} & \ensuremath{\beta_{2}} & \ensuremath{1-b_{2}-\beta_{2}} & \multicolumn{1}{c|}{\ensuremath{1-b_{2}}} & \tabularnewline\cline{1-4} \cline{6-9} & \ensuremath{a_{2}} & \ensuremath{1-a_{2}} & & & & \ensuremath{b_{2}} & \ensuremath{1-b_{2}} & & \tabularnewline\cline{2-3} \cline{7-8} \end{tabular}\label{eq:conn}$$ if and only if $$\begin{array}{l}
s_{0}\!\left
ey provide analytics that leverage accurate entity counts and provide entity co-occurrence statistics which is helpful in analyzing semantically similar named-entities.
Research Objectives
===================
\[sec:problem\] Given the text corpora with semantic annotations, I describe three important research problems in this section: *i.* identifying important events; *ii.* using identified events for improving retrieval effectiveness; and *iii.* using identified events for analytics.
Notation
--------
Let us consider multiple corpora for the purpose of analysis. This allows us to capture frequently occurring events as well as to link similar events across corpora. Given corpora $$D = \bigcup_{k=1}^{N} D_k,$$ where each document $d \in D$ consists of word sequences $x$ at an appropriate granularity (e.g. paragraph or sentence): $$d = \bigcup_{i=1}^{n} x_i.$$ Further, each $x \in d$ contains semantic annotations in the form of *i.* named entities ($\mathcal{E}$), *ii.* geographical locations ($g$), and *iii.* temporal expressions ($t$). Additionally, $x$ consists of a bag of words $\mathcal{W}$ drawn from a vocabulary $\mathcal{V}$. Formally: $$x = \langle \mathcal{E}, g, t, \mathcal{W} \rangle$$
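In code, the notation might be represented as follows (a sketch; the field and function names are illustrative, not from the text):

```python
from dataclasses import dataclass, field

@dataclass
class Unit:
    """One granular unit x = <E, g, t, W> of a document."""
    entities: set = field(default_factory=set)    # named entities E
    location: str = ""                            # geographical annotation g
    time: str = ""                                # temporal expression t
    words: list = field(default_factory=list)     # bag of words W over V

def corpus_union(corpora):
    # D = union of the corpora D_1, ..., D_N; each document is a list of units
    return [doc for corpus in corpora for doc in corpus]

d1 = [Unit(entities={"Obama"}, location="US", time="2009", words=["elected"])]
d2 = [Unit(entities={"Merkel"}, location="DE", time="2009", words=["re-elected"])]
D = corpus_union([[d1], [d2]])
```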
Problem Definition
------------------
The objective is to design a family of algorithms: $$\textsc{Event*}(X,Q,\alpha)$$
where $X = \bigcup x$, $Q$ represents an input query, and $\alpha \in \mathds{R}^m$ is a set of parameters.
The input query $Q$ can be a combination of following input components: *i.* keyword query $q$, *ii.* time $q_{time}$, *iii.* geographical location $q_{geo}$, and *iv.* named entity $q_{entity}$.
Given the input, we need to design the algorithms $\textsc{Event*}$ according to the different problems. We discuss the design objectives for the three different purposes in this section.
**Identifying Important Events**. *Events* are the proposed building blocks for further text analysis. An *event* in our context is defined to be an activity or an act involving named entities th
eight difference between head and heart and eliminating air emboli during the surgery. *Green position*: the position at the first operation, *red dot*: heart position, y*ellow dot*: head position.](nmc-55-305-g4){#F4}
######
Differences between the old and new operating table
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Table Elevation (mm) Vertical rotation Horizontal plate Back plate Monitor functions
--------------------------------------------- ---------------- ---------------------------------------------- ------------------------------------------------ ----------------- -------------------
Existing table MST-7200BX 480--1,100 20 degrees above head, 45 degrees below head 30 degrees in each direction of left and right 90 degrees up\ None
30 degrees down
Newly developed operating table MST-7200BXD 510--1,300 20 degrees above head, 60 degrees below head 30 degrees in each direction of left and right 90 degrees up\ Provided
30 degrees down
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
######
Angle and height of four operative positions evaluated in this study
Position (approach) Table up/down (mm) Trend (Deg) Tilt (Deg) Back plate (Deg) Y-axis (head-leg) (mm) X-axis (right-left) (mm) Leg plate (De
ariance of the predictive distribution $p(f({\mathbf x}_*) \mid {\mathbf y})$ are given by [@Rasmussen2006]
\[eq:GPreg\] $$\begin{aligned}
\mathbb{E}[f({\mathbf x}_*) \mid {\mathbf y}] &=
{\mathbf k}_*^{\mathsf{T}}(K +\sigma^2I)^{-1}{\mathbf y}, \\
\mathbb{V}[f({\mathbf x}_*) \mid {\mathbf y}] &=
k({\mathbf x}_*,{\mathbf x}_*)-
{\mathbf k}_*^{\mathsf{T}}(K +\sigma^2I)^{-1}{\mathbf k}_*.
\end{aligned}$$
Here the vector ${\mathbf k}_*$ contains the covariances between $f({\mathbf x}_*)$ and each measurement, while the matrix $K$ contains the covariances between all pairs of measurements, such that
$$\begin{aligned}
({\mathbf k}_*)_i &= k({\mathbf x}_i,{\mathbf x}_*), \\
K_{ij} &= k({\mathbf x}_i,{\mathbf x}_j).\end{aligned}$$
An example of GP regression for a two-dimensional input is given in Figure \[fig:se\_ill\_post\]. The red stars indicate the measurements, while the shaded surface is the GP prediction. The blue line highlights a slice of the plot that is shown explicitly to the right, including the $95\%$ credibility region.
![Left: GP prediction (shaded surface) obtained from the measurements (red stars, also indicated by their deviation from the prediction). Right: slice plot of the blue line in the left figure, including the $95\%$ credibility region.[]{data-label="fig:se_ill_post"}](gp_ill.eps){width="\textwidth"}
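The posterior mean and variance above can be implemented directly. The sketch below is a minimal, dependency-free illustration with a squared-exponential kernel; the hyperparameters and data are illustrative, not from the text:

```python
import math

def sq_exp(x, y, ell=0.5, sf=1.0):
    # Squared-exponential covariance function k(x, y)
    return sf ** 2 * math.exp(-0.5 * ((x - y) / ell) ** 2)

def solve(A, b):
    # Gaussian elimination with partial pivoting, adequate for tiny systems
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def gp_predict(X, y, x_star, sigma=0.1):
    # Posterior mean and variance of f(x_star) given noisy data (X, y):
    #   mean = k_*^T (K + sigma^2 I)^{-1} y
    #   var  = k(x_*, x_*) - k_*^T (K + sigma^2 I)^{-1} k_*
    n = len(X)
    K = [[sq_exp(X[i], X[j]) + (sigma ** 2 if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    k_star = [sq_exp(xi, x_star) for xi in X]
    alpha = solve(K, y)
    mean = sum(ks * a for ks, a in zip(k_star, alpha))
    v = solve(K, k_star)
    var = sq_exp(x_star, x_star) - sum(ks * vi for ks, vi in zip(k_star, v))
    return mean, var

mean_at_1, var_at_1 = gp_predict([0.0, 1.0, 2.0], [0.0, 1.0, 0.0], 1.0)
```

Near a training input the predictive mean approaches the observed value and the predictive variance shrinks toward the noise floor, matching the narrowing of the credibility region in the slice plot.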
The Gaussian process for x-ray tomography {#GP Xray}
-----------------------------------------
In this section, we show how to apply the functional priors presented in Section \[functional priors\] to the x-ray tomography application. Since the x-ray measurements are line integrals of the unknown function $f({\mathbf x})$, they are linear functionals of the Gaussian process. Hence, we can define a linear functional $\mathcal{H}_{{\mathbf x},i}$ as follows: $$\begin{aligned}
\mathcal{H}_{{\mathbf x},i} f({\mathbf x}) = \int_{-R}^R f({\mathbf x}^0_i+s\hat{{\mathbf u}}_i) ds.\end{aligned}$$ and thus the GP regression problem becomes
\[eq:invprob\] $$\begin{aligned}
−53 (−78 to −18)
*Change in lean mass*^*2*^*(%)* −7 (−36 to 10)
*EI during weight loss*^*3*^*(units)* 259 (184 to 310) \[62 (44 to 74)\]
All data are expressed as median (range). M: male; NM: neutered male; F: female; NF: neutered female; CKCS: cavalier king Charles spaniel. ^1^Rate of weight loss expressed as percentage of starting body weight lost per week. ^2^Refers to the percentage change in starting mass calculated as follows: (\[start mass--end mass\] ÷ start mass) × 100*%*.^3^EI: energy intake expressed as metabolizable energy (in kJ \[kcal\]) per kg of metabolic body weight (BW^0.75^) per day.
Changes in clinical biochemistry before and after weight loss
-------------------------------------------------------------
Clinical biochemistry results are shown in Table [3](#T3){ref-type="table"}. Urea was higher after weight loss (P = 0.047), whilst albumin (P \< 0.001), cholesterol (P = 0.040), globulins (P = 0.012), and triglycerides (P = 0.015) were lower after weight loss. However, there was no difference in alanine aminotransferase (P = 0.893), alkaline phosphatase (P = 0.065), creatinine (P = 0.064), or glucose (P = 0.210).
######
Clinical biochemistry results before and after weight loss
***Serum biochemistry*** **Before weight loss** **After weight loss** **Reference range** **P value**
----------------------------------- ------------------------ ----------------------- --------------------- -------------
*Alanine aminotransferase (IU/L)* 44 (19--303), 3, 0 52 (18--238), 3, 0 7-100 0.893
*Alkaline phosphatase (IU/L)* 73 (32--383), 7, 0 56 (2
n and Length of Stay of Discharged Patients.\
In all four panels, X and Y axes in minutes.\
*AED*, tertiary care academic emergency department; *A2D*, admit request to departure for boarded patients awaiting hospital admission; *CED*, community emergency department; *D2P*, arrival to being seen by physician; *LOSD*, total length of stay for discharged patients;](wjem-21-647-g001){#f1-wjem-21-647}
######
Patient census data pre- to post-implementation of a physician in triage.
Outcome ED type Pre-PIT Post-PIT P-value
-------------------------------- ---------------- ---------------- ---------------- ---------
Median Daily Census (IQR) AED 284 (271, 300) 292 (275, 306) 0.01
CED 185 (174, 196) 199 (186, 210) \<0.01
Median daily admissions (IQR) AED 80 (74, 87) 84 (76, 91) \<0.01
CED 50 (45, 56) 55 (49, 61) \<0.01
Mean annual percent admit (SD) AED 28.2 (±2.8) 29.3 (±2.8) \<0.01
CED 26.7 % (±3.5) 27.6 % (± 3.7) \<0.01
Median daily LWBS (IQR) AED 11 (6, 19) 11 (6, 17) 0.13
CED 5 (2, 8) 4 (2, 9) 0.29
Mean annual percent LWBS (SD) AED 4.6 % (± 2.3) 4.1 % (± 2.3) 0.15
CED 3.2 % (±1.3) 2.9 % (± 1.2) 0.24
*ED*, emergency department; *AED*, tertiary care academic emergency department; *CED*, community emergency department; *LWBS*, left without being seen; *PIT*, physician in triage; *IQR*, interquartile range; *SD*, standard deviation.
######
Operational metrics pre- to post-implementation of a physician in triage.
Metric (min) ED type Pre-PIT Median (IQR) Post-PIT Median (IQR) P-value
--------------------- ---------------- --------------
{jupdate}(S, 1, m \bmod 4)$ $a = 0$ $[b = 1] (-1)^e$ $b {\leftarrow}b - m a$, with $1 \leq m \leq {\lfloor b/a \rfloor}$ \[li:jacobi-update-b\] $S {\leftarrow}\proc{jupdate}(S, 0, m \bmod 4)$ $b = 0$ $[a = 1] (-1)^e$
Correctness
-----------
Let $a_0$ and $b_0$ denote the original inputs to Algorithm \[alg:jacobi\]. Since the reduction steps and the stop condition are the same as in Algorithm \[alg:gcd\], it terminates after a finite number of steps. We now prove that it returns $(a_0 | b_0)$.
Algorithm \[alg:jacobi\] clearly maintains $\alpha = a \bmod 4$ and $\beta = b \bmod 4$. We next prove that the following holds at the start of each iteration:
If $d = 0$ we have $$(a_0 | b_0) = (-1)^e \times
\begin{cases}
(b | a) & \text{$\alpha$ odd} \\
(a | b) & \text{$\alpha$ even}
\end{cases}$$ and if $d = 1$ we have $$\label{eq:invariant-1}
(a_0 | b_0) = (-1)^e \times
\begin{cases}
(a | b) & \text{$\beta$ odd} \\
(b | a) & \text{$\beta$ even}
\end{cases}$$ This clearly holds at the start of the loop. To prove that it is maintained, consider the case $a \geq b$ (the case $a < b$ is analogous). Let $a$, $b$ (unchanged) and $S = (e, \alpha, \beta, d)$ denote the values of the variables before line \[li:update-a\]. There are several cases, depending on the state:
- If $\beta$ is odd and either $\alpha$ is even or $d = 1$, then $(a_0 |
b_0) = (-1)^e (a | b) = (-1)^e (a - m b | b)$.
- If $\alpha$ and $\beta$ are both odd and $d = 0$, then $(a_0 | b_0) =
(-1)^e (b | a) = (-1)^{e + (a-1)(b-1)/4} (a - m b | b)$.
- If $\beta \equiv 0 \pmod 4$, then $(a_0 | b_0) = (-1)^e (b |
  a) = (-1)^e (b | a - m b)$.
- If $\beta \equiv 2 \pmod 4$, then $(a_0 | b_0) = (-1)^e (b |
  a) = (-1)^{e + m(a-1)/2 + m(m-1)/2} (b | a - m b)$.
In each case, the call to $\proc{jupdate}$ makes the appropriate change to $e$, and the invariant holds after the iteration.
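For concreteness, the sign bookkeeping that the proof tracks through $e$ can be illustrated with a standard Jacobi-symbol routine (a minimal Python sketch using the classical reduction rules, not the paper's `jupdate` state-machine formulation):

```python
def jacobi(a, b):
    """Jacobi symbol (a | b) for odd positive b, via the usual reductions:
    factor out 2's with the (b mod 8) sign rule, swap using reciprocity
    with sign (-1)^((a-1)(b-1)/4), and reduce a mod b."""
    assert b > 0 and b % 2 == 1
    a %= b
    sign = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if b % 8 in (3, 5):   # (2 | b) = -1 iff b = 3, 5 (mod 8)
                sign = -sign
        a, b = b, a               # reciprocity swap: both operands now odd
        if a % 4 == 3 and b % 4 == 3:
            sign = -sign
        a %= b
    return sign if b == 1 else 0  # (0 | b) = 0 unless b = 1
```

For prime moduli this agrees with Euler's criterion, which makes the routine easy to sanity-check.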
Results
=======
The algorithm was implemented in -5.1.0, released 2012. In benchmarks at the time, comparing the old binary algorithm to the n
air of four-parameter (multivariate) Gaussian-gamma prior distributions are specified for the observation parameters: $$\begin{aligned}
{\bar{\nu}}& \sim \mathrm{Gam}\left({\bar{a}}_0,~ {\bar{b}}_0\right), \quad\quad {\bar{\mu}}|{\bar{\nu}}\sim \mathrm{N}\left({\bar{m}}_0,~ {\bar{\nu}}^{-1}{\bar{c}}_0\right),\nonumber\\
\nu & \sim \mathrm{Gam}\left(a_0,~ b_0\right), \quad\quad {\boldsymbol{\mu}}|\nu \sim \mathrm{MVN}_u\left({\mathbf{m}}_0,~ \nu^{-1}C_0\right).\label{eq:ObsPrior}\end{aligned}$$ All hyper-parameters are strictly positive scalars except for the real-valued scalar expectation ${\bar{m}}_0$, the $u$-vector ${\mathbf{m}}_0$ and the $u \times u$ positive definite matrix $C_0$. The prior distributions defined for the precision parameters are consistent with @Rid06. However, the priors for the baseline and MUTF expectations differ from the gamma definition of @Rid06. The tractability reasons for adopting Gaussian rather than gamma priors are detailed in Section \[sec:DetailObsProc\]; the problems that arise from the support now including the whole real line are addressed in Section \[sec:ML\].
The range of MUs to consider, $u=1, \ldots, {u_{\max}}$, defines a set of neuromuscular models. Previous Bayesian MUNE methods defined a uniform prior on the model space, assuming that each model is equally probable. However, there is typically a preference for identifying the simplest representation of the underlying process. This is of particular importance in the presence of alternation, where the data could be equally probable under two or more models. To impose an *a priori* preference for smaller models, the number of MUs is given a $\mathsf{Geom}(1/2)$ distribution, truncated at ${u_{\max}}$.
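As a small illustration (only the $\mathsf{Geom}(1/2)$ form and the truncation at ${u_{\max}}$ are stated above; the renormalisation step and the value of ${u_{\max}}$ here are assumptions for the sketch), the truncated prior on the model size can be tabulated as:

```python
U_MAX = 8  # illustrative truncation point

# P(U = u) ∝ (1/2)^u for u = 1, ..., U_MAX, renormalised after truncation,
# so each extra motor unit halves the prior mass of the model.
raw = [0.5 ** u for u in range(1, U_MAX + 1)]
z = sum(raw)
prior = [p / z for p in raw]
```

The halving ratio between consecutive model sizes is what encodes the preference for smaller models.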
Methodology for SMC-MUNE {#sec:Method}
========================
The methodology that defines the SMC-MUNE procedure detailed in this section is based on an approximation to the ideal model defined in Section \[sec:Model\] using, effectively, an approximation to the prior specification. The reasons for the approximat
fund\] (3). It follows that $\| \cdot \|$ is a seminorm and that $\| {\mathbf{x}}\|=0$ if and only if ${\mathbf{x}}\in L(\Lambda_+)$. Hence $\| \cdot \|$ is a norm if and only if $L(\Lambda_+)=\{0\}$.
The following theorem is a generalization of Theorem \[MSSmain\] to hyperbolic polynomials.
\[t1\] Let $k\geq 2$ be an integer and $\epsilon$ a positive real number. Suppose $h$ is hyperbolic with respect to ${\mathbf{e}}\in {\mathbb{R}}^n$, and let ${\mathbf{u}}_1, \ldots, {\mathbf{u}}_m \in \Lambda_{+}$ be such that
- $\rk({\mathbf{u}}_i) \leq 1$ for all $1\leq i \leq m$,
- $\tr({\mathbf{u}}_i) \leq \epsilon$ for all $1\leq i \leq m$, and
- ${\mathbf{u}}_1+ {\mathbf{u}}_2+\cdots+ {\mathbf{u}}_m={\mathbf{e}}$.
Then there is a partition of $S_1\cup S_2 \cup \cdots \cup S_k=[m]$ such that $$\label{sqbound}
\left\| \sum_{i \in S_j} {\mathbf{u}}_i \right\| \leq \frac 1 k \delta(k\epsilon, m),$$ for each $j \in [k]$, where $$\delta(\alpha, m):=\left( 1-\frac 1 m +\sqrt{\alpha - \frac 1 m \left(1-\frac 1 m\right)}\right)^2.$$
We recover (a slightly improved) Theorem \[MSSmain\] when $h= \det$ in Theorem \[t1\].
Compatible families of polynomials
==================================
Let $f$ and $g$ be two real–rooted polynomials of degree $n-1$ and $n$, respectively. We say that $f$ is an *interleaver* of $g$ if $$\beta_1 \leq \alpha_1\leq \beta_2 \leq \alpha_2 \leq \cdots \leq \alpha_{n-1} \leq \beta_n,$$ where $\alpha_1 \leq \cdots \leq \alpha_{n-1}$ and $\beta_1 \leq \cdots \leq \beta_{n}$ are the zeros of $f$ and $g$, respectively.
A family of polynomials $\{f_1(x), \ldots, f_m(x)\}$ of real–rooted polynomials of the same degree and the same sign of leading coefficients is called *compatible* if it satisfies any of the equivalent conditions in the next theorem. Theorem \[CS\] has been discovered several times. We refer to [@CS Theorem 3.6] for a proof.
\[CS\] Let $f_1(x), \ldots, f_m(x)$ be real–rooted polynomials of the same degree and with positive leading coefficients. The following are equiv
esults and further findings of the algorithm. Finally, the paper is concluded in section \[sec:conclusion\].
Related Work {#sec:related-work}
============
Data Anomaly Detection
----------------------
Statistical divergence has been applied mainly in classifiers for multimedia content [@park2005classification], especially as kernels in SVMs [@moreno2004kullback]. As a similarity measure, it has also been used for qualitative and quantitative analysis in image evaluation [@pheng2016kullback; @goldberger2003efficient]. [@amid2014unsupervised] adopted divergence to detect events in multimedia streams.
Anomaly detection, also known as outlier detection, has been studied for a long time across diverse research domains, such as fraud detection, intrusion detection, system monitoring, fault detection and event detection in sensor networks. Anomaly detection algorithms deal with input data in the form of points (or records), sequences, graphs, and spatial and geographical relationships [@chandola2009anomaly]. According to the relationships within data records, outliers can be classified into *point anomalies*, *contextual (or conditional) anomalies* and *collective anomalies* [@goldberger2000components].
Currently, distance-based [@cao2014scalable; @cao2017multi] and feature-evolving algorithms [@masud2013classification; @li2015discovery; @shao2014prototype] attract the most attention. Others have adopted tree isolation [@zhang2017lshiforest], model-based [@yin2016model] and statistical methods [@zhu2002statstream] in certain applications.
To detect collective anomalies, [@caudell1993adaptive] adopts *ART (Adaptive Resonance Theory)* neural networks to detect time-series anomalies. *Box Modeling* is proposed in [@chan2005modeling]. The *Longest Common Subsequence* was leveraged in [@budalakoti2006anomaly] as a similarity metric for symbolic sequences. Markovian modeling techniques are also popular in this domain [@ye2000markov; @warrender1999detecting; @pavlov2003sequence]. [@yu2015glad] depicts groups in social me
variables known and held constant, and unsubscripted variables free. Roots of the system represent discrete solutions. Let us look carefully at the further case that ${\mathit{s}} = (\lambda, {\mathit{f}}, {\mathbf{f}}) = (\lambda, {\mathit{f}}, (\psi, \phi)) \in \Lambda \times {\mathscr{F}} \times {\mathbf{F}}$. $$\begin{aligned}
\tilde{{\mathfrak{A}}}(\lambda_0, {\mathit{f}}_0, {\mathbf{f}}_0) &= \lbrace (\lambda, {\mathit{f}}, {\mathbf{f}}) \in {\mathbb{S}} \;\colon\;
{\mathfrak{A}}(\lambda, {\mathit{f}}, {\mathbf{f}}) = (\lambda_0, {\mathit{f}}_0, {\mathbf{f}}_0) \rbrace \\
\tilde{{\mathfrak{A}}}(\lambda_0, {\mathit{f}}_0, (\psi_0, \phi_0)) &= \lbrace (\lambda, {\mathit{f}}, (\psi, \phi))
\in {\mathbb{S}} \;\colon\; {\mathfrak{A}}(\lambda, {\mathit{f}}, (\psi, \phi)) = (\lambda_0, {\mathit{f}}_0, (\psi_0, \phi_0)) \rbrace.\end{aligned}$$
### Constraining equations {#S:CONSTRAINING_EQUATIONS}
State transition in an actuated automaton is built in three successive phases: locus state, functionality state, and frame state.
Definition \[D:ITERATIVE\_TRANSFORM\] presents rules governing forward state transition in the form of three equations, portraying current state as known and unknown future state as uniquely determined by formulas. This sense can be reversed, with current state known and feasible past states represented as unknowns.
The automaton-induced *forward* transformation ${\mathfrak{A}} \colon {\mathbb{S}} \to {\mathbb{S}}$ has been set (definition \[D:ITERATIVE\_TRANSFORM\]) as $$\begin{aligned}
\lambda' &= \Delta(\lambda, \psi),\\
{\mathit{f}}' &= (\ell(\Delta(\lambda, \psi)))({\mathit{f}}(\psi) \xi'),\\
{\mathbf{f}}\,' &= (\psi', \phi') =
([{\mathit{f}}(\psi) \xi'], [(\ell(\Delta(\lambda, \psi)))({\mathit{f}}(\psi) \xi')]([{\mathit{f}}(\psi) \xi'])).\end{aligned}$$
The respective governing *backwards* transformations are $$\begin{aligned}
\lambda_0 &= \Delta(\lambda, \psi),\\
{\mathit{f}}_0 &= (\ell(\Delta(\lambda, \psi)))({\mathit{f}}(\psi) \xi_0),\\
{\mathbf
approach?
A:
(A GUID is 128 bits, so it cannot safely be converted into a 32-bit integer.)
A better option might be to use a Dictionary<Guid, ItemTable_s>. Then you can still use GUIDs to index it.
var tempdata = new Dictionary<Guid, ItemTable_s>();
foreach(var anItem in datacontext.ItemTable_s)
{
tempdata.Add(anItem.ItemID, anItem);
}
Then the line that's currently raising an exception (s1 += tempdata[i].productname + "\n";) should work as-is.
This will also make your itemid array unnecessary. Alternatively you could keep the two arrays you're using. Then you would have to change that line to:
s1 += tempdata[itemid.IndexOf(i)].productname + "\n";
It looks like that is what you were intending to do. But that will be much slower than using a dictionary. It means you'll be doing a sequential search of the itemid array every time you need to figure out where an entry is in tempdata.
Q:
R calculated variable from integer variable - how?
I have a variable (HTB) which is an integer - it can have a value of either 1 or 2.
I have done some calculations which have involved aggregating HTB by user name - so I know how often the user got a 1 or a 2 response
The resulting data frame therefore displays variables HTB.1 and HTB.2
I would like to calculate the percentage of HTB=2 for each user but this
results$HTBpercent<-results$HTB.2/(results$HTB.1+results$HTB.2)*100
does not work (presumably because it is really one variable)
How do I do this?
A:
You probably have something like:
df <- aggregate(c(1,1,2,2), list(c(1,2,1,2)), FUN=table)
df
# Group.1 x.1 x.2
#1 1 1 1
#2 2 1 1
...where df$x is a matrix, like:
df$x
# 1 2
#[1,] 1 1
#[2,] 1 1
Therefore,
df$perc2 <- df$x[,"2"] / rowSums(df$x)
df
# Group.1 x.1 x.2 perc2
#1 1 1 1 0.5
#2 2 1 1 0.5
Q:
gh-pages -d build fails on 'npm run deploy'
I am trying to deploy my react app to the GitHub pages but I have encountered the following error:
The build folder is ready to be deployed.
To publish it at https://j
Then $$m +c(n(\lambda)-n(\lambda^t))+n(\lambda)\
=\ m+(c+1)n(\lambda) -cn(\lambda^t) \
\geq\ m+c(n(\mu)-n(\mu^t))+n(\mu),$$ with equality if and only if $\lambda=\mu$. This means that $\operatorname{\mathsf{triv}}$ appears in $\Delta_c(\lambda)$ in a higher degree than its first appearance in $\Delta_c(\mu)$. In particular, the simple quotient $L_c(\mu)$ of $\Delta_c(\mu)$ contains a copy of $\operatorname{\mathsf{triv}}$ and so it cannot be annihilated by $e$. This therefore completes the proof of and and hence proves the theorem.
General equivalences {#subsec-4.55}
--------------------
We now give the promised extension of Theorem \[morrat\] to more general values of $c$. Since it requires no extra work, and it is put to crucial use in [@BFG], we will also prove the result over more general base fields. Thus if $k$ is a subfield of ${\mathbb{C}}$, with $c\in k$, let $H(k)_c$ denote the $k$-algebra defined by the generators and relations from . We write $U(k)_c$, $Q(k)_c^{c+1}$, etc, for the corresponding objects defined over $k$.
\[morrat-hyp\] Set $\mathcal{C} = \{z: z=\frac{m}{d}\ \mathrm{where}\
m, d \in {\mathbb{Z}}\text{ with } 2\leq d\leq n \text{ and } z\notin {\mathbb{Z}}\}.$ Assume that $c\in {\mathbb{C}}$ is such that $c\notin \frac{1}{2} +
\mathbb{Z}$. If $c$ is a rational number with $-1<c<0$ assume further that $c\not\in \mathcal{C}$.
Corollary {#morrat-cor}
---------
*Let $k\subseteq {\mathbb{C}}$ be a field and assume that $c\in k$ satisfies Hypothesis \[morrat-hyp\].*
[(1)]{} $U(k)_c$ and $H(k)_{c}$ are Morita equivalent. If $c \notin (-2,-1)_{\mathcal{C}}
=\{z\in \mathcal{C} : -2<z<-1\}$, then $U(k)_c$ is Morita equivalent to $U(k)_{c+1}$.
[(2)]{} Let $a=-c$. Then $H(k)_{a}$ is Morita equivalent to $U(k)^-_{a} = e_-H(k)_{a}e_-$. If $a \notin (1,2)_{\mathcal{C}}$, then $U(k)^-_{a}$ is Morita equivalent to $U(k)^-_{a-1}$.
\(1) We start with the case $U_c=U({\mathbb{C}})_c$. If $c\not\in \mathcal{C}$ then it follows from [@BEGqi Theorem 8.1] and [@DJ2 Theorem 4.3] that $H_c
ortions were calculated after excluding missing data (if present) for a particular section---the denominator is always indicated to remove ambiguity.
Results {#s3}
=======
Since 2008, there have been 54 catastrophic injuries (24 in Juniors and 30 in Seniors) recorded in total in South Africa ([table 1](#BMJOPEN2012002475TB1){ref-type="table"}), the majority of which (n=45) were ASCIs. In Juniors, the highest number of injuries occurred in 2009 (n=8), while for Seniors the highest number (n=9) occurred in both 2009 and 2010.
######
Absolute numbers of serious/catastrophic injuries in Junior and Senior Rugby levels in South Africa by year, between 2008 and 2011 (4 years, inclusive)
                                           2008        2009        2010        2011        Total       Annual average
                                           Jun   Sen   Jun   Sen   Jun   Sen   Jun   Sen   Jun   Sen   Jun     Sen
  ---------------------------------------- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ------- -------
  Acute spinal cord injury (ASCI) (n=45)
  'Near miss' (full recovery/ambulant)     2     1     4     0     3     1     3     1     12    3     3.00    0.75
  Neurological deficit                     1     1     0     2     0     4     2     3     3     10    0.75    2.50
  Quadriplegics                            1     1     0     3     0     2     1     3     2     9     0.50    2.25
  Fatal                                    0     0     0     1     0     1     0     1     0     3     0       0.75
  Not provided                             1     0     0     1     1     0     0     0     2     1     0.50    0.25
  Traumatic brain injury (TBI) (n=7)
  Fully recovered                          0     0     2     0     0     0     0     0     2     0     0.50    0
  Disability                               1     0     0     0     0     0
or of $p$, ignoring poly-logarithmic factors.
Tighter Analysis for the Special Case of Top-$\ell$ Separators Scenario {#sec:topl}
-----------------------------------------------------------------------
The main result in Theorem \[thm:main2\] is general in the sense that it applies to any partial ranking data represented by positions of the separators. However, the bound can be quite loose, especially when $\gamma$ is small, i.e. when $p_{j,\ell_j}$ is close to $\kappa_j$. For some special cases, we can tighten the analysis to get a sharper bound. One caveat is that we use a slightly sub-optimal choice of parameters $\lambda_{j,a} = 1/\kappa_j$ instead of $1/(\kappa_j - a)$, to simplify the analysis while still obtaining the order-optimal error bound we want. Concretely, we consider the special case of the top-$\ell$ separators scenario, where each agent gives a ranked list of her $\ell_j$ most preferred alternatives among a set of $\kappa_j$ offered items. Precisely, the locations of the separators are $(p_{j,1},p_{j,2},\ldots,p_{j,\ell_j})=(1,2,\ldots,\ell_j)$.
\[thm:topl\_upperbound\] Under the PL model, $n$ partial orderings are sampled over $d$ items parametrized by $\theta^* \in \Omega_b$, where the $j$-th sample is a ranked list of the top-$\ell_j$ items among the $\kappa_j$ items offered to the agent. If $$\begin{aligned}
\label{eq:topl1}
\sum_{j = 1}^n \ell_j \;\; \geq \;\; \frac{2^{12}e^{6b}}{\beta\alpha^2} d\log d\,,\end{aligned}$$ where $b \equiv \max_{i,i'} |\theta^*_i - \theta^*_{i'}|$ and $\alpha,\beta$ are defined in and , then the [*rank-breaking estimator*]{} in with the choice of $\lambda_{j,a} = 1/{\kappa_j}$ for all $a\in[\ell_j]$ and $j\in[n]$ achieves $$\begin{aligned}
\label{eq:main_topl}
\frac{1}{\sqrt{d}}\big\|\widehat{\theta} - \theta^* \big\|_2 \;\; \leq \;\; \frac{16(1+ e^{2b})^2}{\alpha} \sqrt{\frac{d\, \log d}{\sum_{j=1}^n \ell_j}} \;,
\end{aligned}$$ with probability at least $ 1- 3e^3 d^{-3}$.
A proof is provided in Section \[sec:proof\_topl\_upperbound\]. In comparison to
05){#sensors-17-00869-f005}
{#sensors-17-00869-f006}
{#sensors-17-00869-f007}
{#sensors-17-00869-f008}
{#sensors-17-00869-f009}
sensors-17-00869-t001_Table 1
######
Gesture recognition accuracy and errors in hand movements decoding of the proposed controller.
  ---------------------- -------------- --------- ---------------- --------- ----------------
  **Healthy Subjects**                  **COMPLETE**               **REDUCED**
                         **Accuracy**   **SVs**   **FSM errors**   **SVs**   **FSM errors**
  S1                     88.01          178       0                124       0
  S2                     89.76          312       0                209       0
  S3                     89.21          229       0                155       0
  S4                     86.34          404       0                204       0
  S5                     83.49          311       0                206       0
  MEAN                   87.37          296       0                179       0
  **INAIL patients**                    **COMPLETE**               **REDUCED**
                         **Accuracy**   **SVs**   **FSM errors**   **SVs**   **FSM errors**
  S1                     94.86          166       0                55        0
  S2                     93.65          262       0                38        0
  S3                     81.38          393
thbb{F})$ is moved to a crank form expression of $fw$.
First consider the case $\{1, 2\}\subset M_k$ for some $k$. In this case, there exists an integer $i$ such that $i = x^{-1}_{\overline{\mathbb{M}}}(1)$ and $i+1 = x^{-1}_{\overline{\mathbb{M}}}(2)$. Hence in this case we have $fx_{\overline{\mathbb{M}}} = x_{\overline{\mathbb{M}}}f_i$ and $f_i{\cal C}_{\mathbb{M}}[k] = {\cal C}_{\mathbb{M}}[k]$. Thus we obtain $f{\cal C}(\mathbb{M}, \sigma, \mathbb{F})
= {\cal C}(\mathbb{M}, \sigma, \mathbb{F})$.
Next consider the case $1\in M_j$ and $2\in M_k$ ($j\neq k$). In the following we assume that $M_j$ and $M_k$ are both propagating; even if $M_j$ or $M_k$ or both are defective, a similar proof holds. Proposition \[prop:normalize\] implies that the standard expression ${\cal C}(\mathbb{M}, \sigma, \mathbb{F})$ is moved to a crank form expression ${\cal C}(\mathbb{M}', id, \mathbb{F}')$ so that the first and second components of $\mathbb{M}'$ are $M_j$ and $M_k$ respectively and the first and second components of $\mathbb{F}'$ are jointed to $M_j$ and $M_k$ respectively. Using the relations ($R2''$), ($R2$) and ($R12''$), we find that the first and second components of $\mathbb{M}'$ and those of $\mathbb{F}'$ are merged by the action of $f$. For example, if $|M_j| = 5$ and $|M_k| = 4$ then we have Figure \[fig:fwE\].
![Action of $f$ on $w$[]{data-label="fig:fwE"}](16.eps)
The merged propagating parts will be moved to a crank form expression ${\cal C}(\mathbb{M}'', id, \mathbb{F}'')$ by “bumping” as in Figure \[fig:bump\]. Here $\mathbb{M}''$ \[resp. $\mathbb{F}''$\] is a sequence of upper \[resp. lower\] parts obtained from $\mathbb{M}$ \[resp. $\mathbb{F}$\] by merging the first two components.
![Bumping[]{data-label="fig:bump"}](17.eps)
If ${\cal C}(\mathbb{M},\sigma,\mathbb{F})$ is the standard expression of a seat-plan $w$, then $e{\cal C}(\mathbb{M},\sigma,\mathbb{F})$ is moved to a crank form expression of $ew$.
By the same argument in the previous proposition, we m
ions are concentrated in the same values, while Matérn yields a higher estimate, $l = 10.14$.
Figure \[fig:ChestPhantomRec\](c)-(f) shows GP reconstructions of the 2D chest phantom using different covariance functions from 9 projections (uniformly spaced) over a 180$^\circ$ angle of view, with $185$ rays for each projection. The computation times for all numerical tests are reported in Table \[Computation time\]. The Metropolis–Hastings reconstruction shows a longer computation time due to the need to generate a large number of samples from the posterior distribution. However, the benefit of this algorithm is that it is easy to implement and it is reliable for sampling from high-dimensional distributions.
Target FBP SE Matérn Laplacian Tikhonov
--------------- ----- ------- -------- ----------- ----------
Chest phantom 0.5 11210 9676 9615 9615
: Computation times of chest phantom (in seconds)[]{data-label="Computation time"}
The numerical test of the simulated data reconstructions is compared against figures of merit, namely:
- the relative error (RE) $$\begin{aligned}
\frac{\|f_{\text{true}} - f_{\text{rec}} \|_2}{\| f_{\text{true}}\|_2},
\end{aligned}$$ where $f_{\text{rec}}$ is the image reconstruction, and
- the peak-signal-to-noise ratio (PSNR) $$\begin{aligned}
10\log_{10}\left(\frac{\mathrm{peakval}^2}{\mathrm{MSE}}\right),
\end{aligned}$$ where $\mathrm{peakval}$ is the maximum possible value of the image and $\mathrm{MSE}$ is the mean square error between $f_{\text{true}}$ and $f_{\text{rec}}$,
as shown in Table \[Figures of merit\].
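The two figures of merit above can be sketched directly from their definitions (a minimal Python/NumPy sketch; the function names and the choice of `peakval` as an explicit argument are illustrative, not from the paper):

```python
import numpy as np

def relative_error(f_true, f_rec):
    # RE = ||f_true - f_rec||_2 / ||f_true||_2
    return np.linalg.norm(f_true - f_rec) / np.linalg.norm(f_true)

def psnr(f_true, f_rec, peakval):
    # PSNR = 10 log10(peakval^2 / MSE), MSE = mean((f_true - f_rec)^2)
    mse = np.mean((f_true - f_rec) ** 2)
    return 10.0 * np.log10(peakval ** 2 / mse)
```

Both metrics are computed against the known ground-truth phantom, so they are only available for simulated data.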
In practice, image quality in CT depends on other parameters as well, such as image contrast, spatial resolution, and image noise [@goldman2007principles]. These parameters can be evaluated when the CT device is equipped with CT numbers for various materials, high-resolution image is available, and statistical fluctuations of image noise which require several
h part template $v$, which uses this template’s annotations on images $I\in{\bf I}_{v}\subset{\bf I}^{\textrm{ant}}$, as follows.
1\) We first enumerate all possible latent patterns corresponding to the $k$-th CNN conv-layer ($k=1,\ldots,K$), by sampling all pattern locations *w.r.t.* $D_{u}$ and $\overline{\bf p}_{u}$.
2\) Then, we sequentially compute $\Lambda_{u}$ and $Score(u)$ for each latent pattern.
3\) Finally, we sequentially select a total of $n_{k}$ latent patterns. In each step, we select $\hat{u}\!=\!{\arg\!\max}_{u\in Child(v)}\Delta{\bf L}_{v}$. *I.e.* we select latent patterns with top-ranked values of [$Score(u)$]{} as children of part template $v$.
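Step 3) above amounts to a greedy top-$n_k$ selection by score; a minimal Python sketch (the names are illustrative, and $Score(u)$ is assumed to be precomputed for each candidate latent pattern) is:

```python
def select_children(candidates, score, n_k):
    """Greedily keep the n_k latent patterns with the highest scores;
    `score` maps a pattern id to its precomputed Score(u) value."""
    return sorted(candidates, key=lambda u: score[u], reverse=True)[:n_k]
```

Because each selection step is independent of the others here, picking the top-ranked patterns one by one and sorting once give the same result.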
Learning via active question-answering {#sec:QA}
--------------------------------------
We propose a new learning strategy, *i.e.* active QA, which is more efficient than conventional batch learning. The QA-based learning algorithm actively detects blind spots in the feature representations of the model and asks questions for supervision. In general, blind spots in the AOG include 1) neural-activation patterns in the CNN that have not been encoded in the AOG and 2) inaccurate latent patterns in the AOG. The unmodeled neural patterns potentially reflect new part templates, while inaccurate latent patterns correspond to sub-optimized part templates.
As an interpretable representation of object parts, the AOG can represent blind spots using linguistic description. We design five types of answers to project these blind spots onto semantic details of objects. Our method selects and asks a series of questions. We then collect answers from human users, in order to incrementally grow new AOG branches to explain new part templates and refine existing AOG branches of part templates.
Our approach repeats the following QA process. As shown in Fig. \[fig:QA\], at first, we use the current AOG to localize object parts on all unannotated objects of a category. Based on localization results, the algorithm selects and asks about the object $I$, from which the AOG
sp. $\mathbb{F} = \mathbb{F}(\rho_w) = (T^F_{j_1}, \ldots, T^F_{j_v})$ ($j_1<\cdots<j_v$, $v\leq s$)\] omitting empty parts.
Using these data, we define [*cranks*]{} $C_{\mathbb{M}}[i]$, $C^*_{\mathbb{F}}[i]$ and $C^{\mathbb{M}}_{\mathbb{F}}[\sigma]$ as products of the generators as in Figures \[fig:mcrank\], \[fig:fcrank\] and \[fig:midcrank\] respectively. Here $\sigma$ is a word in the alphabet $\{s_1,\ldots, s_{|\pi(w)|-1}\}$.
![$C_{\mathbb{M}}[l]$[]{data-label="fig:mcrank"}](11.eps)
![$C^*_{\mathbb{F}}[l]$[]{data-label="fig:fcrank"}](12.eps)
![$C^{\mathbb{M}}_{\mathbb{F}}[\sigma]$[]{data-label="fig:midcrank"}](13.eps)
Further we define the “product of cranks” ${C}[\mathbb{M}]$ and ${C}[\mathbb{F}]$ by $${C}[\mathbb{M}] =
C_{\mathbb{M}}[1]C_{\mathbb{M}}[2]\cdots C_{\mathbb{M}}[u-1]$$ and $${C}^*[\mathbb{F}] =
C^*_{\mathbb{F}}[v-1]\cdots C^*_{\mathbb{F}}[2]C^*_{\mathbb{F}}[1]$$ respectively. We note that $C_{\mathbb{M}}[l]$ \[resp. $C^*_{\mathbb{F}}[l]$\] is defined by a composition $\mathbb{E} = (E_1,\ldots, E_s)$ of $n$ whose components have labels either “propagating” or “defective”. For example if $\mathbb{M} = (2,1,2,2,3)$, $(t(M_i))_{1\leq i \leq 5} = (0,1,0,1,1)$, $\mathbb{F} = (3,4,3)$, $(t(F_i))_{i=1,2,3} = (1,1,1)$ and $\sigma=(1,2)(2,3)\in\mathfrak{S}_3$, then the product of cranks ${C}[\mathbb{M}]C^{\mathbb{M}}_{\mathbb{F}}[\sigma]C^*[\mathbb{F}]$ is presented as in Figure \[fig:crank\].
![Product of cranks[]{data-label="fig:crank"}](14.eps)
Let $\overline{\mathbb M}$ be the sequence of $n$ symbols obtained from $\mathbb{M} = \mathbb{M}(\rho_w)$ by arranging all elements of $T^M_{i_k}$s in accordance with the sequence $\mathbb{M}$ so that all elements of each $T^M_{i_k}$ are increasingly lined up from left to right. For example, if $\mathbb{M} = (\{3,1,7\},\{6,4\},\{5,2\})$, then $\overline{\mathbb{M}} = (1,3,7,4,6,2,5)$. Similarly $\overline{\mathbb{F}}$ is defined from $\mathbb{F} = \mathbb{F}(\rho_w)$.
Then the following product becomes an expression of a seat-plan
ving no one condition repeated three times, while solving the problem mentioned above.
Or would there be any better ways?
A:
disclaimer: this solution is not perfect.
Ok, so my iterative approach is to create a permuted vector of all possible trials and then append each one to another vector if it doesn't create more than 3 consecutive conditions of the same type.
First, I'll set up some constants
N_CONDITIONS = 5;
TRIALS_PER_CONDITION = [10 10 10 7 9];
N_DUPS_ALLOWED = 3;
N_TOTAL = sum(TRIALS_PER_CONDITION);
Then I create a random permutation of all the trials:
randomInds = randperm(N_TOTAL);
% make vector containing all the replicates
conditionTrials = repelem(1:N_CONDITIONS, TRIALS_PER_CONDITION);
% permute the conditions
conditionTrials = conditionTrials(randomInds);
Then I prepare to loop over the conditionTrials vector element by element
% initialize the random trials vector
randomizedTrials = zeros(N_TOTAL, 1);
% pre assign the first allowable possible duplications
randomizedTrials(1:N_DUPS_ALLOWED) = conditionTrials(1:N_DUPS_ALLOWED);
% drop the used values
conditionTrials(1:N_DUPS_ALLOWED) = [];
Next, I setup the loop variables/counters and perform the loop:
% initialize counter
i = N_DUPS_ALLOWED + 1;
iterCounter = 1;
maxIter = 1000; % set me pretty low, but high enough for extra perms
while any(~randomizedTrials)
iterCounter = iterCounter + 1;
if iterCounter > maxIter
        fprintf(2, '\nMaximum iterations exceeded.\n');
break
end
% get the value we want to test
currentTrial = conditionTrials(1);
    % get the previous N_DUPS_ALLOWED values
previousConditions = randomizedTrials( i - (N_DUPS_ALLOWED:-1:1) );
% check if they're the same
if sum(previousConditions == currentTrial) == N_DUPS_ALLOWED
% reject this value because last 3 values == currentValue
% accepting would lead to > 3 consecutive trials
% create a new shuffle
newPermInds = randperm(length(conditionTrials));
conditionTrials = conditionTrials(newPermInds);
continue
end
% accept the r
ph at /Library/Perl/5.18/Graph/Easy/Parser.pm line 1302.
',798.1", lwidth=0.37, penwidth=0.8, rank=sink, style=filled, tooltip="package: github.com/syncthing/syncthing/lib/db" ]; "(*github.com/syncthing/syncthing/lib/db.VersionList)
...
112.31,203.1 154.04,203.1 237.26,203.1 299.56,203.1"]; } }' not recognized by Graph::Easy::Parser::Graphviz at /usr/local/bin/graph-easy line 93.
https://gist.github.com/quantonganh/d2052370bfcae6b1788465c9b5dcffd9#file-syncthing-cmd-stindex-dot-L45
Can you tell me what the problem is? Why does it always fail at the lp attribute in a nested subgraph?
A:
By adding --parse --debug=1:
# Parser: found subcluster 'cluster_github.com/syncthing/syncthing/lib/db'
# Creating new group 'cluster_github.com/syncthing/syncthing/lib/db'.
# remapping attributes 'HASH(0x7f89b1a0b7a0)' for graph
#$VAR1 = {
'fontsize' => '16',
'fillcolor' => 'lightyellow',
'label' => '[db',
'URL' => '/?f=github.com/syncthing/syncthing/lib/db',
'bb' => '265.57,734.1,493.03,810.1',
'fontname' => 'bold'
};
# Parser: new node '", lheight=0.22, lp="'
# Parser: Creating normal node from name ', lheight=0.22, lp='.
# Parser: new node '379.3'
# Parser: Creating normal node from name '379.3'.
# Parsing done.
# Parser cleanup pass
',798.1", lwidth=0.37, penwidth=0.8, rank=sink, style=filled, tooltip="package: github.com/syncthing/syncthing/lib/db" ]; "(*github.com/syncthing/syncthing/lib/db.VersionList)
...
If you look at the label value carefully, you will see that it is parsed as '[db' while its value is [db], so the following tokens are parsed incorrectly:
# Parser: new node '", lheight=0.22, lp="'
# Parser: Creating normal node from name ', lheight=0.22, lp='.
# Parser: new node '379.3'
# Parser: Creating normal node from name '379.3'.
# Parsing done.
I had to remove all square brackets around the label value so that it can be parsed completely:
# Parser: found subcluster 'cluster_github.com/syncthing/syncthing/lib/db'
# Creating new group 'cluster_github.com/syncthi
ticipants \[[@pone.0201732.ref012]\]. Finally, we assume that evacuees achieved their desired speed in the evacuation tunnel. For each evacuee we calculate their speed in the main tunnel as a percentage of the desired speed (understood as the speed of free movement in the evacuation tunnel). This makes it possible to analyze how the different conditions in experiments 1, 2 and 3 influenced pedestrians' speed. Results of this analysis are presented in [Table 3](#pone.0201732.t003){ref-type="table"}.
10.1371/journal.pone.0201732.t003
###### Movement speed in main tunnel as a percentage of the desired speed (movement speed in the evacuation tunnel).
In experiment 2, the first six persons decided to run in the main tunnel and then walk in the evacuation tunnel, which allowed them to achieve a higher speed in the main tunnel. We consider this as a different desired speed in different parts of the tunnel, and we present two versions of the experiment 2 analysis (with and without "runners"). Similarly to the assumptions in section 4.4, in the results for experiment 1 we excluded the first 9 persons, who after leaving the bus stopped and discussed which evacuation path to select (see: 4.2).
{#pone.0201732.t003g}
Experiment section Minimum Maximum Mean Std. deviation
------------------------------ --------- --------- -------- ----------------
experiment 1 51.15% 78.56% 62.56% 6.12%
experiment 2 all evacuees 57.83% 162.60% 80.73% 22.23%
experiment 2 without runners 57.83% 81.96% 73.17% 5.90%
experiment 3 26.44% 39.64% 31.46% 3.12%
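The percentage values in the table above can be reproduced from raw speed measurements as below; the numbers here are made-up stand-ins, not the experimental data:

```python
from statistics import mean, stdev

# Hypothetical per-evacuee speeds (m/s): main tunnel vs. evacuation tunnel
# (the latter taken as the desired, free-movement speed).
main_speed = [0.9, 1.1, 0.8, 1.0]
desired_speed = [1.5, 1.6, 1.4, 1.5]

# Speed in the main tunnel as a percentage of each evacuee's desired speed.
pct = [100 * m / d for m, d in zip(main_speed, desired_speed)]
print(f"mean = {mean(pct):.2f}%, std = {stdev(pct):.2f}%")
```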
In experiment 1 the mean speed of participants in the main tunnel equals 62.56% of their desired speed. Surprisingly, in experiment 2, despite lower visibility, evacuees were able to achieve 73.17% of their desired speed. Substantially smaller values are obtained with heavy smoke in experiment 3: evacuees were able to achieve only 31.46% of their desired speed. One should note the low dispersion (s
| 268
| 3,898
| 1,302
| 232
| null | null |
github_plus_top10pct_by_avg
|
. Assume that $L_i$ is *of type II* with $i$ even, or that $L_i$ is *bound of type I or type II* with $i$ odd. Then $m_{i,i}=\mathrm{id}$.
3. Let $i$ be even.
- If $L_i$ is *bound of type II*, then $\delta_{i-1}^{\prime}e_{i-1}\cdot m_{i-1, i}+\delta_{i+1}^{\prime}e_{i+1}\cdot m_{i+1, i}+\delta_{i-2}e_{i-2}\cdot m_{i-2, i}+\delta_{i+2}e_{i+2}\cdot m_{i+2, i}=0$.
- If $L_i$ is *of type I*, then $v_i(\mathrm{resp.~}(y_i+\sqrt{\bar{\gamma}_i}v_i))+(\delta_{i-2}e_{i-2}\cdot m_{i-2, i}+\delta_{i+2}e_{i+2}\cdot m_{i+2, i})\tilde{e_i}=0$ if $L_i$ is *of type* $\textit{I}^o$ (resp. *of type* $\textit{I}^e$).
Here, notations are as explained in Step (c) of the description of an element of $\mathrm{Ker~}\tilde{\varphi}(R)$ given at the paragraph following Lemma \[la2\].
4. If $i$ is even and $L_i$ is *of type I*, then $$z_i+\delta_{i-2}k_{i-2, i}+\delta_{i+2}k_{i+2, i}=0 ~~~ \left(=\pi z_i^{\ast} \right).$$ Here, notations are as explained in Step (d) of the description of an element of $\tilde{M}(R)$ given at the paragraph following Lemma \[la1\].
5. If $i$ is odd and $L_i$ is *bound of type I*, then $$\delta_{i-1}v_{i-1}\cdot m_{i-1, i}+\delta_{i+1}v_{i+1}\cdot m_{i+1, i}=0 ~~~ \left(=\pi m_{i,i}^{\ast}\right).$$ Here, notations are as explained in Step (e) of the description of an element of $\tilde{M}(R)$ given at the paragraph following Lemma \[la1\].
6. If $i$ is odd and $L_i$ is *bound of type I*, then $$\delta_{i-1}v_{i-1}\cdot {}^tm_{i, i-1}+\delta_{i+1}v_{i+1}\cdot {}^tm_{i, i+1}=0 ~~~ \left(=\pi m_{i,i}^{\ast\ast}\right).$$ Here, notations are as explained in Step (f) of the description of an element of $\tilde{M}(R)$ given at the paragraph following Lemma \[la1\].
\[ta6\] $\mathrm{Ker~}\varphi/\tilde{G}^1 $ is isomorphic to $ \mathbb{A}^{l^{\prime}}\times (\mathbb{Z}/2\mathbb{Z})^{\beta}$ as a $\kappa$-variety, where $\mathbb{A}^{l^{\prime}}$ is an affine space of dimension $l^{\prime}$. Here,
- $l^{\prime}$ is such that *$l^{\prime}$ + dim $\tilde{G}^1=l$.* Note that $l$ is def
| 269
| 617
| 460
| 317
| 2,544
| 0.778996
|
github_plus_top10pct_by_avg
|
Relatedness (O) Female 38 4.17 (0.99) −2.20\* 0.49 0.73
Male 51 4.56 (0.52)
Physical self-concept Female 66 52.19 (21.37) −5.16\*\*\* 0.90 0.99
Male 63 68.71 (14.49)
\*p \< 0.05, \*\*p \< 0.01, \*\*\*p \< 0.001; U, unstructured PA; O, organized PA; M, mean; SD, standard deviation; T, T-values (Independent Samples t-test); d, Cohen's d; P, observed power.
Basic Psychological Needs, Physical Self-Concept and PA {#S3.SS3}
-------------------------------------------------------
The different parameters of satisfaction of basic psychological needs show significant positive interrelations, as well as a significant positive relationship with physical self-concept ([Table 6](#T6){ref-type="table"}). None of the psychological variables (basic psychological needs and physical self-concept) shows a correlation with PA (average steps per week). High correlations (\>0.50) are observed between the three dimensions tested by the BPNES in relation to both organized and unstructured PA. Competence and relatedness show a high correlation (\>0.60) in relation to both types of activity. However, the same is not true of autonomy, for which the correlation, though still significant, is only 0.37.
######
Correlation analysis between the variables.
1 2 3 4 5 6 7 8
------------- ------------ ------------ ------------ ------------ ------------ ------------ ------ ----
\(1\) A (U) --
\(2\) C (U) 0.61\*\*\* --
\(3\) R (U) 0.60\*\*\* 0.55\*\*\* --
\(4\) A (O) 0.37\*\*\* 0.37\*\*\* 0.31\*\* --
| 270
| 483
| 1,026
| 449
| null | null |
github_plus_top10pct_by_avg
|
ties*. A property $E$ is actual in the state $S$ iff the assertion $\vdash E(x)$, with $x$ in $S$, is justified.
*Nonactual properties*. A property $E$ is nonactual in the state $S$ iff the assertion $\vdash E^{\bot }(x)$, with $x$ in $S$, is justified.
*Potential properties*. A property $E$ is potential in the state $S$ iff both assertions $\vdash E(x)$ and $\vdash E^{\bot }(x)$, with $x$ in $S$, are unjustified.
Physical preliminaries
======================
We introduce in this section a number of symbols, definitions and physical concepts that will be extensively used in Sec. 3 in order to supply an intuitive support and an intended interpretation for the pragmatic language that will be introduced there.
Basic notions and mathematical representations
----------------------------------------------
The following notions will be taken as primitive.
*Physical system* $\Omega $*.*
*Pure state* $S$* of* $\Omega $, and *set $\mathcal{S}$ of all pure states of* $\Omega $ (the word *pure* will be usually implied in the following).
*Testable property* $E$* of* $\Omega $, and *set $\mathcal{E}$ of all testable properties of* $\Omega $ (the word *testable* will be usually implied in the following).[^1]
States and properties will be interpreted operationally as follows.
A state $S\in \mathcal{S}$ is a class of physically equivalent[^2] preparing devices (briefly, *preparations*) which may prepare individual samples of $\Omega $ (*physical objects*). A physical object $x$ *is in the state* $S$ iff it is prepared by a preparation $\pi \in S$.
A property $E\in \mathcal{E}$ is a class of physically equivalent ideal dichotomic (outcomes 1, 0) registering devices (briefly, *registrations*) which may test physical objects.[^3]
The above notions do not distinguish between classical and quantum mechanics. The mathematical representations of physical systems, states and properties are, however, different in the two theories. Let us summarize these representations in the case of QM.
Every physical system $\Omega $ is asso
| 271
| 2,421
| 1,710
| 402
| 2,577
| 0.778739
|
github_plus_top10pct_by_avg
|
re\]
{#f1}
Given the high prevalence of additional antibiotic treatment, we also examined the pattern of antibiotic use. Overall, 70% of patients were treated with at least one additional antibiotic. The duration of antibiotic use, as well as the number and type of antibiotics prescribed were highly variable ( [Table 3](#T3){ref-type="table"}). While vancomycin was the most common concurrent antibiotic, used in 46% of patients, over 25 different medications were used in a variety of combinations. To further understand the antibiotic regimens observed, we next examined the available culture data. Of 91 patients diagnosed with SBP on neutrophil criteria, 13 were culture-positive. Of these, one patient had a documented infection resistant to ceftriaxone. This patient was excluded from the analysis. 14 patients had evidence of a secondary infection ( [Table 1](#T1){ref-type="table"}). These included pneumonia (diagnosed with chest x-ray), urinary tract infection (\>100,000 colonies on urine dipstick with positive urine culture), and cellulitis (clinical diagnosis documented in chart).
###### Types of inpatient antibiotics prescribed in addition to ceftriaxone, with number of patients and percentage of total population (n=138) and range of duration of inpatient antibiotic coverage (days).
-----------------------------------------------------------------
Antibiotic Number of patients\ Duration range\
N (%) (days)
------------------------- --------------------- -----------------
Vancomycin 63 (46) 1--22
Metronidazole 39 (28) 1--113
Piperacillin-tazobactam 26 (19) 1--23
Levofloxacin 21 (15) 1--43
Ciprofloxacin\* 20 (15) 1--14
Cefepime 13 (9) 1--14
| 272
| 3,447
| 735
| 178
| null | null |
github_plus_top10pct_by_avg
|
u
\frac{(t-r)(r+1-t)}{(2u+1)(t^2-r^2+x)^2} \nonumber \\
& & \times \prod_{k=1}^M\frac{1-T_k (t-r)}{\sqrt{1+2T_k r + T_k^2 x}} [\cdots]\,.\end{aligned}$$ Here and below, $t$ denotes the dimensionless time measured in units of the Heisenberg time $t_H=2\pi\hbar/\Delta$, where $\Delta$ is the mean level spacing.
The calculation of the correlator \[Eq. (\[eq:ss\_ft\])\] in the case of $\lambda \neq \lambda^\prime$ proceeds along the same lines as in [@ver85a], see Appendix \[app:theo\]. The result turns out to be formally given by the same VWZ expression (\[eq:VWZ\]), where the transmission coefficient $T_c$ in the varied channel $c$ has to be substituted by $$\label{eq:Teff}
T_c^{\mathrm{eff}} = \frac{2\left(\lambda+\lambda^{\prime *}\right) }{ \left(1+\lambda\right)\left(1+\lambda^{\prime *}\right)}\,,$$ while performing the integration \[Eq. (\[eq:I\])\]. The quantity $T_c^{\mathrm{eff}}$ may be considered as an effective transmission coefficient due to a parametric variation of the coupling strength in the channel $c$. Only if $\lambda=\lambda^\prime$, $T_c^{\mathrm{eff}}$ becomes equal to the conventional transmission coefficient \[Eq. (\[eq:Tc\])\]. In contrast to Eq. (\[eq:Tc\]), $T_c^{\mathrm{eff}}$ is generally complex and also $T_c^{\mathrm{eff}}\neq1-\langle S_{cc}\rangle \langle S_{cc}^{\prime*}\rangle$. We note, however, that Eq. (\[eq:Teff\]) can be cast in the following form $$\label{eq:Teff2}
T_c^{\mathrm{eff}} = 1 - S_{\mathrm{eff}}' S_{\mathrm{eff}}^*\,,$$ where $$\label{eq:Teff3}
S_{\mathrm{eff}}' = \frac{ 1-\lambda^{\prime *} }{ 1+\lambda }\,, \qquad
S_{\mathrm{eff}}^* = \frac{ 1-\lambda }{ 1+\lambda^{\prime *} }\,.$$ These quantities might be interpreted as the (average) parametric $S$-matrix amplitudes in the varied channel for the forward and backward time evolution, respectively.
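The identity between Eq. (\[eq:Teff\]) and Eqs. (\[eq:Teff2\]) and (\[eq:Teff3\]) is easy to verify numerically; a quick check, where the coupling values below are arbitrary test inputs rather than model parameters:

```python
# Numerical check that T_eff = 2(lam + lamp*) / ((1+lam)(1+lamp*))
# equals 1 - S'_eff * S*_eff, with
# S'_eff = (1 - lamp*)/(1 + lam) and S*_eff = (1 - lam)/(1 + lamp*).
lam, lamp = 0.3 + 0.2j, 0.5 - 0.1j   # arbitrary complex coupling parameters
lamps = lamp.conjugate()             # lambda'^*

T_eff = 2 * (lam + lamps) / ((1 + lam) * (1 + lamps))
S_fwd = (1 - lamps) / (1 + lam)      # forward amplitude S'_eff
S_bwd = (1 - lam) / (1 + lamps)      # backward amplitude S*_eff

assert abs(T_eff - (1 - S_fwd * S_bwd)) < 1e-12
print("identity holds")
```

The check mirrors the algebra: expanding $(1+\lambda)(1+\lambda'^*) - (1-\lambda'^*)(1-\lambda)$ gives exactly $2(\lambda+\lambda'^*)$.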
The subsequent evaluation of coupling fidelity cannot be done analytically and will be performed numerically.
Effective Hamiltonian description {#subsec:heff}
---------------------------------
The exp
| 273
| 1,092
| 679
| 384
| 1,930
| 0.784316
|
github_plus_top10pct_by_avg
|
eenshot of the output
A:
A second scrollbar should appear at the bottom but doesn't.
I set margin to true on the root layout and I removed the Panel.
The issue is fixed.
private VerticalLayout getResultLayout() {
VerticalLayout resultLayout = new VerticalLayout();
resultLayout.setWidth("1380px");
resultLayout.setStyleName("mwiWorksResultLayout");
resultLayout.setSizeUndefined();
for (int i = 0; i < 200; i++) {
Label l = new Label("test horizontal scrollbar right side not
shown totaly
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaattttttttttttttttttttttttttttttttttttttttttttttttttt
tttttttttttttttttttttttttXXXXXXXXXXXX");
resultLayout.addComponent(l);
}
return resultLayout;
}
It was a vaadin bug because without setting margin to true on the root layout, the bottom scrollbar doesn't appear.
I will create a ticket.
Q:
Go parse JSON array of array
I have data like this
"descriptionMap": [[[1,2], "a"], [[3,4], "b"]]
and I was trying to decode it with
DescriptionMap []struct {
OpcodeTableIdPair []int
OpcodeDescription string
} `json:"descriptionMap"`
but I keep on getting empty arrays,
[[{[] } {[] }]]
A:
You have a very unfortunate JSON schema which treats arrays as objects. The best you can do in this situation is something like this:
type Body struct {
DescriptionMap []Description `json:"descriptionMap"`
}
type Description struct {
IDPair []int
Description string
}
func (d *Description) UnmarshalJSON(b []byte) error {
arr := []interface{}{}
err := json.Unmarshal(b, &arr)
if err != nil {
return err
}
idPair := arr[0].([]interface{})
d.IDPair = make([]int, len(idPair))
for i := range idPair {
d.IDPair[i] = int(idPair[i].(float64))
}
d.Description = arr[1].(string)
return nil
}
Playground: https://play.golang.or
| 274
| 1,642
| 140
| 191
| 581
| 0.805173
|
github_plus_top10pct_by_avg
|
ads, it grabs the HTML from the field on the table that matches with the appropriate DIV and plugs it into place. Any changes to the content would be written back to that field, through a PHP script, where the new information would be permanently stored.
My trouble is, that I don't have the ability at the moment to run a server-side database to store all of the site content. So I am trying to come up with a solution to store all of my data without having to rely on a database server to do so. Are there ways that I can go about storing tables of information that a site can read/write to, without using a SQL server of some sort?
A:
Check out SQLite.
This is a whole database contained in a single file that resides anywhere in your file system. It doesn't require running a SQL server and still gives you all the benefits of using one! Support is built into PHP and using PDO you connect very easily:
$db = '/path/to.file/my_database.sqlite';
try {
$conn = new PDO('sqlite:' . $db);
} catch (PDOException $e) {
exit('Fatal error: ' . $e->getMessage());
}
This will even create the database file if it doesn't exist.
Q:
Swift/iOS 8: search bar raising fatal error: unexpectedly found nil while unwrapping an Optional value
Following this tutorial, I have just added a "Search Bar and Search Display Controller" to my Table View Controller.
As you can see from the following screenshot, the table and the search bar are correctly loaded:
by using "Cell" as cell reuse identifier.
Anyway, there are two problems:
1) When the search bar is tapped it simply disappears under the navigation bar even if it still accepts text to search
2) as soon as I start typing something in the search bar (even if it is "hidden") then an exception mentioned in title is raised.
Here is my code where the line with /***/ is where the exception raises:
import UIKit
class AllTasksViewController: UITableViewController, UISearchBarDelegate, UISearchDisplayDelegate {
var allTasks = [Task]()
var taskService = TaskService()
var organize
| 275
| 4,161
| 160
| 249
| 915
| 0.797758
|
github_plus_top10pct_by_avg
|
t the same topic. The Wikigame topic dataset consists of more distinct categories than the Wikispeedia and MSNBC datasets. Furthermore, the most frequently occurring topic in the Wikigame topic dataset is Culture, with around 13%. The Wikispeedia dataset is dominated by the two categories Science and Geography, each making up almost 25% of all clicks. Finally, the most frequent topic in the MSNBC dataset is the frontpage, with a frequency of around 22%.[]{data-label="fig:histograms"}](histograms){width="\textwidth"}
#### MSNBC dataset
This dataset[^11] consists of Web navigational paths from MSNBC[^12] for a complete day. Each single path is a sequence of page categories visited by a user within a time frame of 24 hours. The categories are available through the structure of the site and include categories such as *news*, *tech*, *weather*, *health*, *sports*, etc. In this dataset we also eliminate all paths with just a single click. Table \[tab:datasetfacts\] shows the basic statistics for this dataset and in Figure \[fig:histograms\] the frequency of all categories of this dataset are depicted (C).
#### Data preparation
Each dataset $D$ consists of a set of paths $\mathbb{P}$. A single path contains a single game in the Wikigame and Wikispeedia dataset or a single navigation session in the MSNBC dataset. A path $p$ is defined as a $n$-tuple $(v_1,\ldots,v_n)$ with $v_i
\in V, 1\leq i\leq n$ and $(v_i, v_{i+1}) \in E, 1 \leq i \leq n-1$ where $V$ is the set of all nodes in $\mathbb{P}$ and $E$ is the set of all observed transitions in $\mathbb{P}$. We also define the length of a path $len(p)$ as the length of the corresponding tuple $(v_1,\ldots,v_n)$. Additionally, we want to define ${\bf p} = \left\{ v_k | k
=1 \ldots n \right\}$ as the set of nodes in a path $p$. Note that $|{\bf
p}| \leq n$. The finite state set $S$ needed for Markov chain modeling is originally the set of vertices $V$ in a set of paths $\mathbb{P}$ given a specific dataset $D$. To prepare the paths for estimation of parameters
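The data-preparation step just described can be sketched as follows: a first-order Markov chain is estimated by counting the observed transitions $(v_i, v_{i+1})$ over all paths (the paths below are toy examples, not one of the datasets above):

```python
from collections import Counter, defaultdict

# Toy paths; each path p = (v_1, ..., v_n), and |set(p)| <= len(p).
paths = [("a", "b", "c"), ("a", "b", "b")]

counts = defaultdict(Counter)
for p in paths:
    for v, w in zip(p, p[1:]):   # consecutive transitions (v_i, v_{i+1})
        counts[v][w] += 1

# Maximum-likelihood transition probabilities over the state set S.
probs = {v: {w: c / sum(cs.values()) for w, c in cs.items()}
         for v, cs in counts.items()}
print(probs["a"]["b"])  # 1.0
```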
| 276
| 766
| 755
| 433
| 993
| 0.796127
|
github_plus_top10pct_by_avg
|
tent group by a theorem of Lazard which is stated at the beginning of Appendix \[App:AppendixA\].
Recall that we have defined the morphism $\varphi$ in Section \[red\]. The morphism $\varphi$ extends to an obvious morphism $$\tilde{\varphi} : \tilde{M} \longrightarrow \prod_{i:even}\mathrm{GL}_{\kappa}(B_i/Z_i) \times \prod_{i:odd}\mathrm{GL}_{\kappa}(B_i/Y_i)$$ such that $\tilde{\varphi}|_{\tilde{G}}=\varphi $. Note that $Y_i\otimes_AR$, when $i$ is odd, is preserved by an element of $\underline{M}(R)$ for a flat $A$-algebra $R$ (cf. Lemma \[l42\]). By using this, the construction of $\tilde{\varphi}$ is similar to Theorems \[t43\] and \[t44\] and thus we skip it. Let $R$ be a $\kappa$-algebra. Based on the description of the morphism $\varphi_i$ explained in Section \[red\], $\mathrm{Ker~}\tilde{\varphi}(R)$ is the subgroup of $\tilde{M}(R)$ defined by the following conditions:
1. If $i$ is even and $L_i$ is *of type I*, $s_i=\mathrm{id}$ mod $\pi \otimes 1$.
2. If $i$ is even and $L_i$ is *of type II*, $m_{i,i}=\mathrm{id}$ mod $\pi \otimes 1$.
3. Let $i$ be even and $L_i$ be *bound of type II*. Then $\delta_{i-1}^{\prime}e_{i-1}\cdot m_{i-1, i}+\delta_{i+1}^{\prime}e_{i+1}\cdot m_{i+1, i}+\delta_{i-2}e_{i-2}\cdot m_{i-2, i}+\delta_{i+2}e_{i+2}\cdot m_{i+2, i}=0$ mod $\pi \otimes 1$.\
Let $i$ be even and $L_i$ be *of type I*. Then $v_i(\mathrm{resp.~}(y_i+\sqrt{\bar{\gamma}_i}v_i))+(\delta_{i-2}e_{i-2}\cdot m_{i-2, i}+\delta_{i+2}e_{i+2}\cdot m_{i+2, i})\tilde{e_i}=0$ mod $\pi \otimes 1$ if $L_i$ is *of type* $\textit{I}^o$ (resp. *of type* $\textit{I}^e$).
Here,
- $
\delta_{j}^{\prime} = \left\{
\begin{array}{l l}
1 & \quad \textit{if $j$ is odd and $L_j$ is \textit{free of type I}};\\
0 & \quad \textit{otherwise}.
\end{array} \right.
$
- If $j$ is odd, then $e_{j}=(0,\cdots, 0, 1)$ of size $1\times n_{j}$.
- When $j$ is even, $e_{j}=(0,\cdots, 0, 1)$ (resp. $e_j=(0,\cdots, 0, 1, 0)$) of size $1\times n_
| 277
| 2,011
| 392
| 318
| 1,911
| 0.784456
|
github_plus_top10pct_by_avg
|
uniformly random choices correspond to setting $a=0$ in the expression for $p_k$, and then we can expect the graphs that pass the correlation threshold to be clustered around the points of $\rho\sim \rho_1$ and $S(X,Y)\sim 0$.
We also note a sharp variation in how the fixation probabilities of the graphs relate to the asymptotic fixation probabilities of the $K$-funnel as a mutant’s fitness is increased. For $r=1.1$, the graphs exhibiting the highest fixation probabilities, and also the highest slopes, are such that $\rho$ is somewhere between $\rho_2$ and $\rho_3$. For $r=2.0$, though, this happens between $\rho_1$ and $\rho_2$ ($=0.75$, not shown), therefore providing considerably less amplification. Part of the reason why this happens may be simply that the more potent amplifiers are harder to generate by our layer-selection mechanism as $r$ is increased. But it is also important to realize that, even for the $K$-funnel, achieving a fixation probability near $\rho_K$ requires progressively larger graphs as $r$ is increased. This is illustrated in Fig. \[fig:funnel\] for $K=3$ and the same two values of $r$.
![(Color online) Simulation results for the $3$-funnel. Dashed lines mark the values of $\rho_3$.[]{data-label="fig:funnel"}](funnel.eps)
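For reference, the baseline that such amplifiers are measured against is the fixation probability of a single mutant in the standard well-mixed Moran process; the formula below is textbook material, not code from the simulations above:

```python
def moran_fixation(r: float, n: int) -> float:
    """Fixation probability of one mutant of relative fitness r
    in a well-mixed Moran population of size n."""
    if r == 1.0:
        return 1.0 / n                 # neutral drift
    return (1 - 1 / r) / (1 - r ** -n)

# For large n the probability approaches 1 - 1/r:
print(round(moran_fixation(1.1, 10_000), 4))   # ~0.0909
```

Any graph whose fixation probability exceeds this value for r > 1 acts as an amplifier of selection.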
Additional simulation results, for the much larger case of $K=10$ and $n=10\,000$, are presented in Fig. \[fig:10layers\] for $r=1.1$ and $a=1,2,3,4$. Computationally, this case is much more demanding than those of Fig. \[fig:5layers\], owing mainly to the number of distinct networks that can occur, as discussed earlier (in fact, for $K=10$ and $n=10\,000$, this number is at least of the order of $10^{33}$). Consequently, many fewer graphs surpassing the $0.9$ correlation threshold were obtained. Even so, one possible reading is that results similar to those reported in Fig. \[fig:5layers\] can be expected, but this remains to be seen.
In summary, we have demonstrated that strongly connected layered networks can be grown for which the fixation probability signi
| 278
| 166
| 1,994
| 392
| null | null |
github_plus_top10pct_by_avg
|
ord, e.g. $B$, and when you paste the text in LyX, use Edit -> Paste special -> Paste from LaTeX. The LaTeX code for math mode ($ ... $) will be interpreted properly.
Q:
read socket from C
When I use read() socket to retrieval bytes from server I found something unexpected in the chars.
Below is my code:
char buf[80];
char output[80];
if ( (count = read(socket_d, buf, 80)) == -1) {
perror("Error on read call");
exit(1);
}
strcat(output, buf);
printf("Client read %d bytes\n", count);
while (count==80) {
memset(&buf, 0, sizeof(buf));
if ( (count = read(socket_d, buf, 80)) == -1) {
perror("Error on read call");
exit(1);
}
printf("Client read %d bytes\n", count);
strcat(output, buf);
}
/* print the received message */
printf("\n%s\n\n", output);
The output as below:
Client read 80 bytes
Client read 80 bytes
Client read 8 bytes
Your messages:
1:From User1, 04/21/16 02:12 AM, MSG1
2:From User2, 04/21/16 02:162 AM, MSG2
3:From User2, 04/21/16 02:12 AM, MSG3
4:From User1, 04/21/16 02:12 AM6, MSG4
The expect output should be:
Client read 80 bytes
Client read 80 bytes
Client read 8 bytes
Your messages:
1:From User1, 04/21/16 02:12 AM, MSG1
2:From User2, 04/21/16 02:12 AM, MSG2
3:From User2, 04/21/16 02:12 AM, MSG3
4:From User1, 04/21/16 02:12 AM, MSG4
It seems that an unexpected char ('6') shows up in buf[] in the second loop.
Since I define the size of each read() from the socket, I want to keep reading in a loop until the amount read is less than the limit size, and then print the output.
What should I do with buf[] during the loop to avoid the unexpected chars?
A:
1- output is used uninitialized in
strcat(output, buf);
2- memset wants a pointer and buf is already (decays into) a pointer
memset(&buf, 0, sizeof(buf));
should be
memset(buf, 0, sizeof(buf));
3- If count == 80 then you have no space for the trailing NUL in your buffer, declare bufas char buf[81]; (or read 79 bytes) and don't forget to end your string with \0 after read()
buf[count] = '\0';
Q:
printing te
| 279
| 5,035
| 100
| 222
| 109
| 0.825351
|
github_plus_top10pct_by_avg
|
assumed in the above theorem. This settles the question raised in [@HOX14] on whether it is possible to achieve optimal accuracy using rank-breaking under the top-$\ell$ separators scenario. Analytically, it was proved in [@HOX14] that under the top-$\ell$ separators scenario, naive rank-breaking with uniform weights achieves the same error bound as the MLE, up to a constant factor. However, we show that this constant-factor gap is not due to a weakness of the analyses, but to the choice of the weights. Theorem \[thm:topl\_upperbound\] provides a guideline for choosing the optimal weights, and the numerical simulation results in Figure \[fig:top\_l\] show that there is in fact no gap in practice, if we use the optimal weights. We use the same settings as those of the first figure of Figure \[fig:scaling\_l\_n\] for the figure below.
![The proposed data-driven rank-breaking achieves performance identical to the MLE, and improves over naive rank-breaking with uniform weights. []{data-label="fig:top_l"}](Plot5_new-eps-converted-to.pdf "fig:"){width=".3\textwidth"} (-171,50) (-115,-7 )[number of separators ]{} (-100,100)
To prove the order-optimality of the rank-breaking approach up to a constant factor, we can compare the upper bound to a Cramér-Rao lower bound on any unbiased estimators, in the following theorem. A proof is provided in Section \[sec:proof\_cramer\_rao\_topl\].
\[thm:cramer\_rao\_topl\] Consider ranking $\{\sigma_j(i)\}_{i \in [\ell_j]}$ revealed for the set of items $S_j$, for $j \in [n]$. Let $\mathcal{U}$ denote the set of all unbiased estimators of $\theta^*\in\Omega_b$. If $b >0$, then $$\begin{aligned}
\inf_{\widehat{\theta} \in \mathcal{U}} \sup_{\theta^* \in \Omega_b} \E[{\|\widehat{\theta} - \theta^*\|}^2]
\;\; \geq \;\; \Bigg(1 - \frac{1}{\ell_{\max}}\sum_{i= 1}^{\ell_{\max}} \frac{1}{\kappa_{\max} - i +1}\Bigg)^{-1} \sum_{i = 2}^d \frac{1}{\lambda_i(L)}
\;\; \geq \;\; \frac{(d-1)^2}{\sum_{j = 1}^n \ell_j}\;,
\label{eq:cramer_rao_topl}
\end{aligned}$$ where $\ell_{\max} =
| 280
| 188
| 428
| 343
| 2,769
| 0.777104
|
github_plus_top10pct_by_avg
|
ng the special elements and the basic relations $(R0)$-$(R4)$ and $(E1)$-$(E5)$.
The partition algebras $A_n(Q)$ were introduced in the early 1990s by Martin [@Ma1; @Ma2] and Jones [@Jo] independently and have been studied, for example, in the papers [@Ma3; @DW; @HR]. The theorem above has already been shown in the paper [@HR]. Here we give another proof, defining a "standard" expression of a word in the special elements of $A_{n}(Q)$ following the papers [@Ko1; @Ko3; @Ko4]. From this standard expression, we will find that the partition algebra $A_{n}(Q)$ is cellular in the sense of Graham and Lehrer [@GL]. Thus, applying the general representation theory of cellular algebras to the partition algebras, we will get a description of the irreducible modules of $A_{n}(Q)$ over any field of arbitrary characteristic. (For the cell representations, we also refer to the paper [@KL].)
Further, we can make the character table of $A_{n}(Q)$ using the standard expressions. These topics will be studied in the near future. For the present we refer to the notes [@Ko3; @Na1] and the results on the partition algebras [@DW; @Xi].
Local moves deduced from the basic relations
============================================
Let $${\cal L}_n^1 =
\{
s_1, s_2, \ldots, s_{n-1},
f_1, f_2, \ldots, f_{n-1},
e_1, e_2, \ldots, e_{n}
\}$$ be the set of symbols whose words satisfy the basic relations $(R0)$-$(R4)$ and $(E1)$-$(E5)$. There are many relations among these symbols which are deduced from the basic relations. These relations are pictorially expressed as local moves. Among them, we frequently use relations $f_{i+1}s_is_{i+1}=s_is_{i+1}f_i$ ($R0$), $f_is_{i+1}f_i = f_if_{i+1}$ ($R2''$) and $e_is_i = e_if_ie_{i+1} =s_ie_{i+1}$ ($E4''$) as in Figure \[fig:fss\],\[fig:fsf\] and \[fig:efe\] respectively. The latter two relations are deduced from the relations ($R0$)-($R3$) and ($R0$), ($R3$), ($E4$) respectively.
![$f_{i+1}s_is_{i+1}=s_is_{i+1}f_i$ ($R0$)[]{data-label="fig:fss"}](4.eps)
![$f_is_{i+1}f_i = f_if_{i+1}$ ($R2''$)[]{data-
| 281
| 158
| 1,092
| 395
| 818
| 0.799383
|
github_plus_top10pct_by_avg
|
], III.10.4, it suffices to show that, for any $m \in \underline{M}^{\ast}(\bar{\kappa})$, the induced map on the Zariski tangent space $\rho_{\ast, m}:T_m \rightarrow T_{\rho(m)}$ is surjective.
We define the two functors from the category of commutative flat $A$-algebras to the category of abelian groups as follows: $$T_1(R)=\{m-1 : m\in\underline{M}(R)\},$$ $$T_2(R)=\{f-h : f\in\underline{H}(R)\}.$$
The functor $T_1$ (resp. $T_2$) is representable by a flat $A$-algebra which is a polynomial ring over $A$ of $2n^2$ (resp. $n^2$) variables by Lemma 3.1 of [@C1]. Moreover, each of them is represented by a commutative group scheme since they are closed under addition. In fact, $T_1$ is the same as the functor $\underline{M}^{\prime}$ in Remark \[r31\].
We still need to introduce another functor on flat $A$-algebras. Define $T_3(R)$ to be the set of all $(n \times n)$-matrices $y$ over $B\otimes_AR$ satisfying the following conditions:
1. The $(i,j)$-block $y_{i,j}$ of $y$ has entries in $\pi^{max(i,j)}B\otimes_AR$ so that $$y=\begin{pmatrix} \pi^{max(i,j)}y_{i,j}\end{pmatrix}.$$ Here, the size of $y_{i,j}$ is $n_i\times n_j$.
2. Assume that $i$ is even.
- If $L_i$ is *of type* $\textit{I}^o$, then $y_{i,i}$ is of the form $$\begin{pmatrix} s_i&\pi y_i\\ \pi v_i&\pi z_i \end{pmatrix}\in \mathrm{M}_{n_i}(B\otimes_AR)$$ where $s_i$ is an $(n_i-1) \times (n_i-1)$ matrix, etc.
- If $L_i$ is *of type* $\textit{I}^e$, then $y_{i,i}$ is of the form $$\begin{pmatrix} s_i&r_i&\pi t_i\\ y_i&x_i&\pi w_i\\ \pi v_i&\pi u_i&\pi z_i \end{pmatrix}\in \mathrm{M}_{n_i}(B\otimes_AR)$$ where $s_i$ is an $(n_i-2) \times (n_i-2)$-matrix, etc.
3. Assume that $i$ is even and that $L_i$ is *of type I*. Then $$z_i+\delta_{i-2}k_{i-2, i}+\delta_{i+2}k_{i+2, i} \in (\pi).$$ Here,
- $z_i$ is in the $(n_i\times n_i)^{th}$-entry of $y_{i,i}$ as described in the above Step (b).
- $k_{i-2, i}$ (resp. $k_{i+2, i}$) is the $(n_{i-2}\times n_i)^{th}$-entry (resp. $(n_{i+2}\times n_i)^{th}$-entry) of the matrix $y_
| 282
| 1,506
| 645
| 324
| 3,797
| 0.770049
|
github_plus_top10pct_by_avg
|
- CDS or CDS -- 3′ UTR, the decoy site is assigned to the region in which the majority of the site is contained. The length of the 5′ UTR, CDS, and 3′ UTR of 286 transcripts is tallied respectively, and then the number of decoy sites for each feature is normalized to 1 kb sequence length. (**B**) Predicted decoy sites are classified into the 'bulge' type if there are bulges corresponding to miRNA bases 10--11, otherwise, they are classified into the 'mismatch' type, decoys of which have at least one mismatch to miRNA base 10 or 11.](pone.0021330.g007){#pone-0021330-g007}
10.1371/journal.pone.0021330.t001
###### Distribution of predicted decoys in functional categories.
{#pone-0021330-t001-1}
Category Loci Transcripts Decoy Sites
------------------------ --------- ------------- -------------
Protein Coding 230 286 292
Short Peptide 1 1 1
Pseudogene 5 5 5
⊤Transposable Elements 22 22 23
Known Decoy (IPS1) 1 1 1
Other RNA 1 2 2
**Total** **260** **317** **364**
miRNA decoy sites are predicted from *Arabidopsis* loci in different functional categories, including protein coding genes.
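The per-feature normalization described in the Fig. 7 caption reduces to a simple density calculation; the counts and sequence lengths below are illustrative stand-ins, not values from the study:

```python
# Normalize decoy-site counts to sites per 1 kb of sequence,
# separately for each transcript feature (illustrative numbers).
features = {
    "5' UTR": {"sites": 40,  "total_nt": 50_000},
    "CDS":    {"sites": 120, "total_nt": 400_000},
    "3' UTR": {"sites": 90,  "total_nt": 70_000},
}

per_kb = {name: f["sites"] / (f["total_nt"] / 1000)
          for name, f in features.items()}
print(per_kb["CDS"])  # 0.3 sites per kb
```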
Discussion {#s3}
==========
Typically, plants are genetically engineered to express single exogenous genes for traits, such as resistance to herbicides and insects [@pone.0021330-James1]. A global challenge is increasing agricultural productivity for a growing population. This challenge may be met by creating more productive crops through the addition of traits, including improved tolerance to abiotic stresses such as drought, or increased yield potential. Such traits are usually controlled by a number of environmentally regulated genes, and complex trait engineering may require finely regulated or coordinately modified expression o
| 283
| 3,300
| 1,181
| 278
| null | null |
github_plus_top10pct_by_avg
|
the other hand we perform the OPE of the operators $A$ and $C$, and then we perform the OPE of the result with $B$. Eventually take the regular limit $:x \to w:$ and add up the two terms. Additional details about these operations follow.
- First let us consider the OPE between $A(z)$ and $B(x)$. We evaluate the result at the point $x$ – otherwise taking the regular limit $:x \to w:$ would become cumbersome. Let us consider one term in the OPE between $A(z)$ and $B(x)$: $$\label{A:BC:step1} A(z) B(x) = \ldots + (z-x)^{\Delta_D - \Delta_A - \Delta_B} (\bar z-\bar x)^{\bar\Delta_D - \bar\Delta_A - \bar\Delta_B} D(x) + \ldots$$ where $\Delta_O$ (respectively $\bar \Delta_O$) stands for the holomorphic (respectively anti-holomorphic) conformal dimension of an operator $O$. For simplicity we consider a term in which no logarithm appears, but the generalization is straightforward. We have to perform the OPE of the right-hand side with the operator $C(w)$. Let us consider one term in the result: $$\begin{aligned}
&(z-x)^{\Delta_D - \Delta_A - \Delta_B}(\bar z-\bar x)^{\bar\Delta_D - \bar\Delta_A - \bar \Delta_B} D(x) C(w)
=\cr
&...+ (x-w)^{\Delta_E - \Delta_D - \Delta_C}(\bar x-\bar w)^{\bar\Delta_E - \bar\Delta_D - \bar \Delta_C}(z-x)^{\Delta_D - \Delta_A - \Delta_B}(\bar z-\bar x)^{\bar\Delta_D - \bar\Delta_A - \bar \Delta_B} E(w) +...
\nonumber
\end{aligned}$$ Now to take the normal ordered limit $:x \to w:$, we expand the functions depending on $x$ in the neighborhood of $w$; namely, for a generic exponent $\alpha$ we write: $$(z-x)^{\alpha} = (z-w)^{\alpha} - \alpha\,(x-w)\,(z-w)^{\alpha-1} + \ldots$$ and we keep only the terms that end up with no factor of $(x-w)$. The same manipulations have to be done for the anti-holomorphic factors. If both $\Delta_E - \Delta_D - \Delta_C$ and $\bar\Delta_E - \bar\Delta_D -
\bar \Delta_C$ are non-positive integers, then the term we isolated in the previous steps contributes to the OPE as: $$\begin{aligned}
\label{A:BC:step4}\lim_{z\to w}& A(z) :BC:(w) = ... + \# (z-w)^{\Delta_E - \Delta_A - \Delta_B-\Delta_C} (\bar z - \bar w)^{\bar\Delta_E - \bar\Delta_A - \ba
| 284
| 375
| 402
| 348
| 3,048
| 0.77521
|
github_plus_top10pct_by_avg
|
that are consistent with the predictions of inequality aversion.
{#pone.0204392.t003g}
*N* Prediction Observed
-------------------- ----- -------------------------- -------------
(10,10) vs (10,40) 17 *y*~10,10~ \< *y*~10,40~ 6 (35.29%)
(10,10) vs (40,10) 14 *y*~40,10~ \< *y*~10,10~ 4 (28.57%)
(10,40) vs (40,40) 17 *y*~40,40~ \< *y*~10,40~ 6 (35.29%)
(40,10) vs (40,40) 13 *y*~40,10~ \< *y*~40,40~ 8 (61.54%)
(10,40) vs (40,10) 30 *y*~40,10~ \< *y*~10,40~ 11 (36.67%)
Overall 91 35 (38.46%)
Overall, there are 91 situations in which the idea of inequality aversion predicts a change in the allocators' behavior when we vary the level of endowments. We observe that allocators behaved in the predicted direction less than 40% of the time (i.e., more than 60% of choices were inconsistent with inequality aversion). While this is a substantial proportion of choices, the binomial test rejects the hypothesis that the majority of the choices are in the direction predicted by inequality aversion (p-value \< 0.018, one-sided test).
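The one-sided binomial test quoted above can be reproduced with a short stdlib-only Python sketch (a sketch under the stated assumptions: 35 of 91 choices counted as "in the predicted direction", with a null success probability of 1/2):

```python
from math import comb

def binom_cdf(k, n, p=0.5):
    """P(X <= k) for X ~ Binomial(n, p), computed exactly from the pmf."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# 35 of the 91 choices were in the predicted direction; one-sided test of p = 1/2
p_value = binom_cdf(35, 91)
print(round(p_value, 4))
```

The exact tail probability agrees with the paper's reported bound of p \< 0.018.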
### Strong definition of inequality aversion {#sec010}
Next, we use the assumption in \[[@pone.0204392.ref017]\] that inequality-averse subjects want to restore strict equality to investigate heterogeneity across subjects. We consider three different behavioral patterns that may explain our data. Allocators can keep the available funds and transfer *y*\* = 0 if selfish. Allocators can also return what they have received from investors, *y*\* = 1/3, if reciprocal. Finally, allocators can be inequality-averse and return a proportion of the available funds that depends on the level of endowments and on what they have received from investors, *y*\*(*X*, *e*~*i*~, *e*~*a*~), as indicated in [Eq (3)](#pone.0204392.e005){ref-type="disp-formula"}. In [Fig 1](#pone.0204392.g001){ref-type="fig"}, we classify choices using the minimum distance to each of
| 285
| 3,965
| 769
| 255
| 2,984
| 0.775611
|
github_plus_top10pct_by_avg
|
object foo holds daily share price data for a stock starting from Monday 3 January 2011 and ending on Monday 20 September 2011. To aggregate this daily data I used:
tmp <- to.weekly(foo)
The above approach succeeds in that tmp now holds a series of weekly OHLC data points, as per the quantmod docs. The problem is that the series begins on Monday 3 January 2011 and each subsequent week also begins on Monday e.g. Monday 10 January, Monday 17 January and so on. I had expected the week to default to ending on Friday so that the weekly series started on Friday 7 January and ended on Friday 16 September.
I have experimented with adjusting the start and end of the data and using 'endof' or 'startof' together with the indexAt parameter, but I cannot get it to return weeks ending on Friday.
I am grateful for any insights received.
(Sorry, I could not find any way to attach dput file so data appears below)
foo:
2011-01-03 2802
2011-01-04 2841
2011-01-05 2883
2011-01-06 2948
2011-01-07 2993
2011-01-10 2993
2011-01-11 3000
2011-01-12 3000
2011-01-13 3025
2011-01-14 2970
2011-01-17 2954
2011-01-18 2976
2011-01-19 2992
2011-01-20 2966
2011-01-21 2940
2011-01-24 2969
2011-01-25 2996
2011-01-26 2982
2011-01-27 3035
2011-01-28 3075
2011-01-31 3020
tmp:
foo.Open foo.High foo.Low foo.Close
2011-01-03 2802 2802 2802 2802
2011-01-10 2841 2993 2841 2993
2011-01-17 3000 3025 2954 2954
2011-01-24 2976 2992 2940 2969
2011-01-31 2996 3075 2982 3020
A:
I've come up with something yielding only Close values; perhaps it can be hacked further to return an OHLC series.
Assuming that foo is an xts object, first we create a logical vector marking the Fridays:
fridays = as.POSIXlt(time(foo))$wday == 5
Then we prepend it with 0:
indx <- c(0, which(fridays))
And use period.apply:
period.apply(foo, INDEX=indx, FUN=last)
Result:
[,1]
2011-01-07 2993
2011-01-14 2970
2011-01-21 2940
2011-01-28 3075
Q:
How to start distributed Erlang app without start
| 286
| 2,014
| 127
| 204
| 1,773
| 0.785819
|
github_plus_top10pct_by_avg
|
1}
\left( \frac{ \rho E }{ 100 (\text{g/cm}^3) \mbox{GeV} }\right)^2.
\label{enhanced-case}\end{aligned}$$ It should be compared to (\[denominator-size\]). After taking account of $W^2$ suppression of $\sim 0.01$ (assuming $W \simeq 0.1$), $| \frac{ AA L }{ ( \Delta_{J} - h_{i} ) } W^2 | \sim 3 \times 10^{-2}$ at $E \sim 100$ GeV, assuming $\Delta m^2_{J i} =0.1$ eV$^2$.
![ The sum of the order $W^2$ correction terms (see eq. (\[P-beta-alpha-2nd-averaged\])) plus the probability leaking term $\mathcal{C}_{\mu \alpha}$ (see eq. (\[Cab\]) for definition) in $P(\nu_{\mu} \rightarrow \nu_{\alpha})$, namely, $\delta P(\nu_{\mu} \rightarrow \nu_{\alpha})
\equiv P(\nu_{\mu} \rightarrow \nu_{\alpha}) -
P(\nu_{\mu} \rightarrow \nu_{\alpha})^{(0)}$, are plotted assuming a common $m_J^2 = 0.1$ eV$^2$. The top, middle and bottom panels are for $\alpha = e, \tau$, and $\mu$, respectively. In each panel the three cases are shown: $N=1$ case with maximal $\mathcal{C}_{\mu \alpha}$ (solid line), the universal scaling model with $N=3$ (dotted line), and the order $W^2$ correction only (dashed line). The last case corresponds to the universal scaling model with $N=\infty$. The blue lines are for $E=10$ GeV, and the red for $E=100$ GeV. The leaking constants in the $N=1$ model (shown without superscript $(N=1)$ in the legend) have values $\mathcal{C}_{e \mu} = 2 \times 10^{-4}$, $\mathcal{C}_{\tau \mu} = 9.5 \times 10^{-4}$, and $\mathcal{C}_{\mu \mu} = 9.6 \times 10^{-5}$. []{data-label="fig:W-correction"}](Delta_Prob_W_corrections.pdf){width="120.00000%"}
To know more quantitatively their sizes, we fix the parameters as discussed in section \[sec:parameter-choice2\] with a common $m_J^2 = 0.1$ eV$^2$,[^20] and plot in figure \[fig:W-correction\] the order $W^2$ correction terms in $P(\nu_{\mu} \rightarrow \nu_{\alpha})$ in eq. (\[P-beta-alpha-2nd-averaged\]) plus the probability leaking term $\mathcal{C}_{\mu \alpha}$, $\alpha = e$ (top panel), $\alpha = \tau$ (middle panel), and $\alpha = \mu$ (bottom panel). Under the
| 287
| 117
| 425
| 403
| 1,473
| 0.78912
|
github_plus_top10pct_by_avg
|
ratio of a intermittent compound Poisson process is $\iota = \frac{\mu_\text{off}}{\mu_\text{on} + \mu_\text{off}}$.
Let $\lambda$ be the rate, $L$ be the loss random variable, and $\iota$ be the idle ratio of an intermittent compound Poisson process. The expectation of the ICPP for a time interval $t$ units long is $$\begin{aligned}
\E (\text{intermittent compound Poisson}) &= (1 - \iota) \cdot \lambda t \cdot \E(L) \\
&= (1 - \iota) \cdot \lambda t \cdot \mu_L.\end{aligned}$$
The statistical risk of an ICPP is $$\begin{aligned}
h &= \frac{d}{dt} \E(\text{intermittent compound Poisson}) \\
&= \frac{d}{dt} ((1 - \iota) \cdot \lambda t \cdot \mu_L + \iota \cdot 0 t \cdot 0) \\
&= (1 - \iota) \lambda \mu_L.\end{aligned}$$
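As a sanity check on the expectation formula, here is a small Monte-Carlo sketch in Python (our own illustration, not part of the model: parameter values are arbitrary, and each unit interval is treated independently as "on" with probability $1-\iota$, which simplifies the on/off sojourn structure):

```python
import math
import random

def poisson(lam, rng):
    """Draw from Poisson(lam) via Knuth's multiplication method."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def simulate_icpp(lam, mu_L, iota, t, rng):
    """One realization of total loss over t unit intervals of an intermittent CPP."""
    total = 0.0
    for _ in range(t):
        if rng.random() < 1 - iota:              # interval is 'on'
            for _ in range(poisson(lam, rng)):
                total += rng.expovariate(1 / mu_L)   # exponential losses, mean mu_L
    return total

rng = random.Random(42)
lam, mu_L, iota, t = 2.0, 1.0, 0.25, 10
runs = 2000
mean_loss = sum(simulate_icpp(lam, mu_L, iota, t, rng) for _ in range(runs)) / runs
expected = (1 - iota) * lam * t * mu_L           # the formula above: here 15.0
print(mean_loss, expected)
```

The empirical mean over many runs should land close to $(1-\iota)\,\lambda t\,\mu_L$.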
Indemnification {#S:INDEMNIFICATION_FORMULA}
---------------
Hypothesize that a software hazard is emulated by a compound Poisson process (CPP) having intensity $\lambda$ and expected loss $\mu_L$. Suppose further that the actual control mechanism is a cone convergent to the software point of exhibition of the hazard. We wish to consider statistical evidence that the hazard’s hypothetical description via the stochastic process is consistent with its mechanism as revealed by safety demonstration.
### Unification {#S:UNIFICATION}
Before undertaking the question of whether test data supports a hypothetical stochastic process, we must establish the theoretical conditions under which equality is expected.
#### Fundaments of the model
The compound Poisson process is a model stochastic process for occurrence of accidents. This model is used in safety analysis to quantify the occurrence and losses of accidents without considering their causes. MIL-STD-882 (see Appendix \[S:MIL-STD-882\]) is an important example. In a time interval of duration $t$, accidents converge stochastically in rate to expectation $\lambda t$ and in mean loss to $\mu_L$. This means an intensity of $\lambda$ accidents per time unit.
#### Fundaments of the mechanism
Th
| 288
| 2,818
| 687
| 336
| 3,437
| 0.772372
|
github_plus_top10pct_by_avg
|
ns (\[init1\]) and (\[init2\]). The approximated solutions described by (\[solution1\]) and (\[solution2\]).[]{data-label="fig:solution1"}](plot1.eps){width="7cm"}
We fix the boundary approximations at $\tau_1=-20$ and $\tau_2=-1$. The numerical solution gives us for these points $$\begin{aligned}
a|_{\tau_1=-20} &=& 0.536 l_{\text{Pl}} \ , \ a'|_{\tau_1=-20} = 0.007 l_{\text{Pl}} \\
a|_{\tau_2=-1} &=& 3.912 l_{\text{Pl}} \ , \ a'|_{\tau_2=-1} = 0.461 l_{\text{Pl}}.\end{aligned}$$ Now, with the use of expressions (\[fixing\]), we can fix the approximated solutions. However, when we use formula (\[fixing\]) directly to calculate the parameters in solution (\[solution1\]) for the outer state, we obtain a complex $\xi$. This is because the expression under the square root $(p=1/2)$ is negative. To avoid complex numbers, we redefine the exit solution in the form $$a=\kappa\sqrt{\tau+\zeta}
\label{solution2}$$ where $$\begin{aligned}
\kappa &=& -i \xi \\
\zeta &=& - \beta.\end{aligned}$$
We show in Fig. \[fig:solution1\] how these approximated solutions match the solutions obtained numerically. As we can see, the approximated solutions describe the evolution well in the neighbourhood of $a_*$.
Gravitational waves
===================
We have already mentioned in section \[sec:intro\] that gravitational waves can be abundantly produced during the accelerating phase. In this section we want to show in detail how this works and to calculate the properties of the produced gravitons. To describe the spectrum of gravitons it is common to use the parameter $$\Omega_{\text{gw}}(\nu) =\frac{\nu}{\rho_c}\frac{d \rho_{\text{gw}}}{d \nu}
\label{omegaGW}$$ where $\rho_{\text{gw}} $ is the energy density of gravitational waves and $\rho_c$ is present critical energy density. Our goal in this section is to calculate the function $\Omega_{\text{gw}}(\nu)$ for the gravitons produced during the super-inflationary phase.
The gravitational waves $h_{ij}$ are the perturbations of the background spacetime in the form $$ds^2=a^2(\tau) \left[ -d\t
| 289
| 2,961
| 698
| 339
| 2,325
| 0.780838
|
github_plus_top10pct_by_avg
|
came to 2nd-level distributions, the histograms became much coarser since the available data was highly limited, and thus its performance suffered dramatically.
MGoF tended to classify every distribution as an anomaly and therefore benefited most from a larger $\alpha$. It always classifies the first $c_{th}$ distributions supporting every null hypothesis as anomalous. Thus, when $\alpha$ increased, the proportion of misclassified normal collections also became larger, while the anomalies were still considered anomalous. Given that the total number of normal collections drops, the overall accuracy tended to increase as more instances were correctly classified as anomalous. However, the right half shows a different trend. One reason is that MGoF uses KLD rather than JSD. In the Koubei dataset, the discrete estimation of the distributions oscillated over a wide range, so the prerequisite of KLD is often unsatisfied. Thus the calculation of KLD may not give a correct measurement. Furthermore, the 2nd-level histogram provided fewer probability entries than the 1st level did, so it shows a more significant deviation from our expectation. The classifiers of MGoF therefore compromised to a high error rate, because more anomalies gathered together and the algorithm recognized them as clusters of normal data.
![Accuracy and F1 on Different Anomaly Magnitudes[]{data-label="fig:anomaly-magnitude"}](./PerformanceOnAnomalyMagnitude.pdf){width="\linewidth"}
From Fig. \[fig:anomaly-magnitude\] we can conclude that our algorithms are still the best, given that they are the most sensitive toward tiny anomalous variations. However, static SDD-E did not rise until $\nu > 1$; this is because it suffered from fluctuations in the trade environment in the meantime. MGoF is not sensitive toward minor anomalies either. For a relatively small magnitude of click farming, the classifiers of MGoF quickly degrade to be trivial. The rigid threshold could not automatically rise and was thus far from optimal.
Results on Synthetic Data Set
-------
| 290
| 21
| 1,662
| 320
| null | null |
github_plus_top10pct_by_avg
|
in \psi )^{-1}\\
* &
*&
w^{(m\,h\,k)}_{\psi\psi} (\sin \psi )^{-2}
\end{bmatrix}
(\sin \psi )^{-h} e^{i [(h-k) \tau + m \varphi] +m \psi}\,,\end{aligned}$$ where $$\begin{aligned}
w_{\tau\tau}^{(m\,h\,0)} &=\, +\frac{1}{16}(c_1 e^{-2 i \psi }+4 c_1 e^{2 i \psi }-6 c_2 e^{-2 i \psi }+16 c_3 e^{2 i \psi }+8 c_5 e^{-2 i \psi }+16 c_6 e^{2 i \psi }+4 c_1-8 c_2+16 c_3+8 c_4)\,, \\ {\nonumber}w_{\varphi\varphi}^{(m\,h\,0)} &=\, c_1 \,, \\ {\nonumber}w_{\psi\psi}^{(m\,h\,0)} &=\, +\frac{1}{16} (-8 c_4 +16 c_6 e^{2 i \psi }+c_1 e^{-2 i \psi }+2 c_2 e^{-2 i \psi }+8 c_5 e^{-2 i \psi })\,, \\ {\nonumber}w_{\tau\varphi}^{(m\,h\,0)} &=\, -\frac{1}{4} \left(2 c_1 e^{ i \psi }+4 c_3 e^{ i \psi }+c_1 e^{ -i \psi }-2 c_2e^{ -i \psi }\right)\,, \\ {\nonumber}w_{\varphi\psi}^{(m\,h\,0)} &=\, +\frac{1}{4} \left(4 c_3 e^{ i \psi }+c_1 e^{ -i \psi }+2 c_2 e^{ -i \psi }\right)\,, \\ {\nonumber}w_{\psi\tau}^{(m\,h\,0)} &=\, -\frac{1}{16} \left(2 c_1 +4 c_2 +8 c_3 +8 c_3 e^{2 i \psi }+16 c_6 e^{2 i \psi }+c_1 e^{-2 i \psi }+2 c_2 e^{-2 i \psi }-8 c_5e^{-2 i \psi }\right)\,.\end{aligned}$$
Expressions of $\mathcal{D}^{(m,h)}_{A}[\mathbf{C}(u)]$ in Maxwell systems {#app:G-function}
==========================================================================
We have decomposed the differential operators $\mathcal{D}^{(m,h)}_{A}[\mathbf{C}(u)],\,A\in\{T,\Phi,R,u\}$, introduced in Sec. \[sec:sep-vector\], by the coefficients multiplying the 2nd, 1st, and 0th derivatives of the $C-$functions. These coefficients are tabulated here in Table \[tab:maxwell\]. Expressions in this appendix can be computed using the companion <span style="font-variant:small-caps;">Mathematica</span> notebook `Sep-met-pert-in-NHEK-Poinc.nb` [@NHEKsupplement].
$$\begin{array}{c|cccc}
\mathcal{D}_A & C_T''(u) & C_\Phi''(u) & C_R''(u) & C_u''(u) \\
\noalign{\smallskip} \hline \hline \noalign{\smallskip}
\mathcal{D}_T & \frac{1-u^2}{u^2+1} & 0 & 0 & 0 \\
\mathcal{D}_\Phi & 0 & \frac{1-u^2}{u^2+1} & 0 & 0 \\
\mathcal{D}_R& 0 & 0 & \fr
| 291
| 2,418
| 418
| 326
| null | null |
github_plus_top10pct_by_avg
|
istribution being fixed, the reversible matrix which minimizes the mixing time. If we denote by $\pi$ the stationary measure and $\Pi=\mathrm{diag}(\pi)$, then $P$ is reversible if and only if $\Pi P={}^t\!P\,\Pi$. In particular, $\Pi^{\frac{1}{2}}P\Pi^{-\frac{1}{2}}$ is then symmetric and has the same eigenvalues as $P$. Finally, $p=(\sqrt{\pi_1},\dots,\sqrt{\pi_n})$ is an eigenvector of $\Pi^{\frac{1}{2}}P\Pi^{-\frac{1}{2}}$ associated to the eigenvalue $1$. The minimization problem can then be written as the following system:
$$\label{eq:mix2}
\left\{
\begin{array}{rcr}
\min\limits_{P} ||| (I_d-\frac{1}{n}\textbf{q}\textbf{q}^t)\Pi^{\frac{1}{2}}P\Pi^{-\frac{1}{2}}(I_d-\frac{1}{n}\textbf{q}\textbf{q}^t)|||\\
=|||\Pi^{\frac{1}{2}}P\Pi^{-\frac{1}{2}}-\frac{1}{n}\textbf{q}\textbf{q}^t|||\\
P(i,j) \geq 0, P*\textbf{1}=\textbf{1}, \Pi P=\Pi^t P \\
A(i,j)=0 \Rightarrow P(i,j)=0\\
\end{array}
\right.$$
When we implement this problem in Matlab with $\pi=\pi_{KS}$, we find a matrix $P_{mix}$ such that, naturally, $\lambda(P_{mix}) \leq \lambda(P_{KS})$. Moreover, we can compare both dynamics by evaluating $|||P_{KS}-P_{mix}|||$ against $|||P_{KS}|||$, which is approximately equal to $|||P_{mix}|||$. We remark that $|||P_{KS}-P_{mix}|||$ depends on the density $\rho$ of $0$ in the matrix $A$. For a density equal to $0$, the matrices $P_{KS}$ and $P_{mix}$ are equal, and the quantity $|||P_{KS}-P_{mix}|||$ increases continuously as $\rho$ increases. This is shown in (Fig. \[fig:KS2bis\]).
![$|||P_{KS}-P_{mix}|||/|||P_{KS}|||$ as a function of the density $\rho$ of $0$ present in $A$.[]{data-label="fig:KS2bis"}](normePksmoinsPmixfonctiondensitedeAm103.jpg){width="10cm"}
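The link between the second eigenvalue and the mixing time can be made concrete with a tiny pure-Python sketch (our own toy 3-state chains, not the $P_{KS}$ or $P_{mix}$ of the text): we iterate $\mu P^k$ and count the steps until the total-variation distance to the stationary distribution drops below a tolerance.

```python
def mat_vec(mu, P):
    """Row-vector times matrix: (mu P)_j = sum_i mu_i P[i][j]."""
    n = len(P)
    return [sum(mu[i] * P[i][j] for i in range(n)) for j in range(n)]

def tv_distance(mu, pi):
    """Total-variation distance between two distributions."""
    return 0.5 * sum(abs(a - b) for a, b in zip(mu, pi))

def steps_to_mix(P, pi, mu, eps=1e-3, max_steps=10_000):
    """Number of steps until ||mu P^k - pi||_TV < eps."""
    for k in range(max_steps):
        if tv_distance(mu, pi) < eps:
            return k
        mu = mat_vec(mu, P)
    return max_steps

# Two reversible chains with uniform stationary distribution on 3 states;
# P_fast has second eigenvalue 0, P_slow has second eigenvalue 0.85.
P_slow = [[0.9, 0.05, 0.05], [0.05, 0.9, 0.05], [0.05, 0.05, 0.9]]
P_fast = [[1/3, 1/3, 1/3], [1/3, 1/3, 1/3], [1/3, 1/3, 1/3]]
pi = [1/3, 1/3, 1/3]
start = [1.0, 0.0, 0.0]
print(steps_to_mix(P_slow, pi, start), steps_to_mix(P_fast, pi, start))
```

The chain with the smaller second eigenvalue reaches stationarity in far fewer steps, which is exactly the quantity the optimization over $P$ is driving down.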
From this, we conclude that the rules which maximize the KSE are close to those which minimize the mixing time. This becomes increasingly accurate as the fraction of removed links in $A$ gets smaller. Since the calculation of $P_{mix}$ quickly becomes tedious for fairly large values of $m$, we offer here a much cheaper alternative by computing $P_{KS}$
| 292
| 389
| 504
| 354
| 3,759
| 0.770272
|
github_plus_top10pct_by_avg
|
ipe(
take(40),
map(value => value +1),
map(value => {
if(value === 40) {
finish();
}
else if (value % 5 === 0){
return 'can devide by 5 we did some magic';
}else{
return value;
} })
);
const subscribe = example.subscribe(
val => console.log(val),
error => console.log("Error handled: " , error),
() => console.log('resolved'));
A:
Both answers mentioning takeUntil and take are correct, but another way is to use the subscription object to unsubscribe. It is just another option:
const subx= example.subscribe(val => {
console.log(val);
if (val == 40) {
subx.unsubscribe()
}
});
demo
Updated
In case you have many subscribers and you want to put the condition that completes the source observable in one place, the take operator can do the job here:
const source = interval(1000).pipe(take(5)); //
source.pipe(map(res => res * 10)).subscribe(val => {
console.log("", val);
});
source.subscribe(val => {
console.log(val);
});
demo
Q:
How to pseudo-randomize trials without repeating same condition more than three times
I know there are lots of pseudo-randomization skills but this one, I couldn't search it so I put it on here.
I am using MATLAB 2018a. I've been trying to set up a behavior experiment which has 10 conditions. Each condition has 50 trials. This results in 500 trials total.
I would like to pseudo-randomize the sequence of trials such that no same conditions appear more than three times consecutively.
I thought it would not be so difficult since I have many conditions, but some of the methods I found by googling had minor problems. One of the methods I used was extracting indexes using 'unique(find(diff(seq)==0))', re-randomizing them, and replacing them in the original redundant sequence. (Link) But this method had a problem: it would randomly change the total number of a condition. If you wanted 40 trials for each condition, it would result in 39 for some conditions and 41 for others.
My question would be how to improve this method to have constraints of ha
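One simple approach that keeps per-condition counts exact is rejection sampling: build the full sequence with exactly the desired counts, shuffle, and reshuffle until no condition appears more than three times in a row. A Python sketch of the idea (our own illustration, with the question's 10 conditions × 50 trials; the MATLAB translation is direct):

```python
import random

def max_run(seq):
    """Length of the longest run of identical consecutive items."""
    best = run = 1
    for a, b in zip(seq, seq[1:]):
        run = run + 1 if a == b else 1
        best = max(best, run)
    return best

def constrained_shuffle(n_conditions, n_trials, max_repeats=3, rng=random):
    """Exact counts per condition; reject any shuffle with a run longer than max_repeats."""
    seq = [c for c in range(n_conditions) for _ in range(n_trials)]
    while True:
        rng.shuffle(seq)
        if max_run(seq) <= max_repeats:
            return seq

seq = constrained_shuffle(10, 50, rng=random.Random(0))
print(max_run(seq) <= 3, all(seq.count(c) == 50 for c in range(10)))  # True True
```

With 10 conditions, a uniformly shuffled sequence rarely contains a run of four, so the rejection loop terminates after only a handful of attempts on average, and by construction the per-condition totals never drift.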
| 293
| 111
| 133
| 175
| null | null |
github_plus_top10pct_by_avg
|
eline and 24-month scores.
{#F3}
######
Improvements in UFS-QOL subscale scores from baseline to 24 months
**Subscale** **Baseline** **24 months** **Change in score** **95% confidence interval**
-------------------- -------------- --------------- --------------------- -----------------------------
Concern 24.7 ± 20.7 70.8 ± 28.6 45.6 39.9, 51.3
Activities 37.1 ± 24.1 81.1 ± 24.2 41.9 37.5, 48.2
Energy/Mood 38.1 ± 21.8 79.3 ± 22.9 39.6 34.6, 44.6
Control 45.5 ± 24.9 85.8 ± 22.1 39.1 33.4, 44.9
Self-consciousness 38.2 ± 28.3 82.1 ± 21.1 42.0 36.3, 47.7
Sexual function 45.5 ± 29.8 74.1 ± 29.1 29.2 22.7, 35.8
Mean health state scores (EQ-5D) improved from baseline to 3 months and then changed slightly over time from 85.0 to 84.0 (Figure [4](#F4){ref-type="fig"}). There was a significant improvement in the mean health state score between baseline and 3 months after treatment (p \< .001). Measurements at subsequent intervals showed no continued improvement, but remained statistically improved over baseline.
{#F4}
There was one serious adverse event, which occurred between 12 and 24 months and was possibly related to the procedure. One subject became pregnant and delivered a healthy, full-term baby by Cesarean section. However, during the Cesarean section, the subject lost 1400--1500 mL of blood. Approximately 48 hours later, she experienced abdominal pain with additional blood loss and tissue expulsion. Preliminary pathology indicated degenerative fibroid tissue. The patient received 6 units of blood altogether and was discharged from the hospital with oral iron therapy for her anemia.
Six patients (6/124, 4.8%)
| 294
| 353
| 573
| 430
| null | null |
github_plus_top10pct_by_avg
|
odes; then...:
>>> class z(v):
... def visit_Name(self, node): print 'Name:', node.id
...
>>> z().visit(t)
Module
AugAssign
Subscript
Name: d
Index
Name: x
Store
Add
Subscript
Name: v
Index
Tuple
Name: y
Name: x
Load
Load
But, NodeVisitor is a class because this lets it store information during a visit. Suppose all we want is the set of names in a "module". Then we don't need to override generic_visit any more, but rather...:
>>> class allnames(ast.NodeVisitor):
... def visit_Module(self, node):
... self.names = set()
... self.generic_visit(node)
... print sorted(self.names)
... def visit_Name(self, node):
... self.names.add(node.id)
...
>>> allnames().visit(t)
['d', 'v', 'x', 'y']
This kind of thing is a more typical use case than ones requiring overrides of generic_visit -- normally, you're only interested in a few kinds of nodes, like we are here in Module and Name, so we can just override visit_Module and visit_Name and let ast's visit do the dispatching on our behalf.
A:
Looking at the code in ast.py it's not that hard to copy paste and roll your own walker. E.g.
import ast
def str_node(node):
if isinstance(node, ast.AST):
fields = [(name, str_node(val)) for name, val in ast.iter_fields(node) if name not in ('left', 'right')]
rv = '%s(%s' % (node.__class__.__name__, ', '.join('%s=%s' % field for field in fields))
return rv + ')'
else:
return repr(node)
def ast_visit(node, level=0):
print(' ' * level + str_node(node))
for field, value in ast.iter_fields(node):
if isinstance(value, list):
for item in value:
if isinstance(item, ast.AST):
ast_visit(item, level=level+1)
elif isinstance(value, ast.AST):
ast_visit(value, level=level+1)
ast_visit(ast.parse('a + b'))
Prints out
Module(body=[<_ast.Expr object at 0x02808510>])
Expr(value=BinOp(op=Add()))
BinOp(op=Add())
Name(id='a', ctx=Load())
Load()
Add()
Name(id='b', ctx=Load())
| 295
| 4,741
| 152
| 288
| 229
| 0.817095
|
github_plus_top10pct_by_avg
|
dia as combinations of different “roles” and compare groups according to the proportion of each role within each group.
Wang et al. proposed a technique, *Multinomial Goodness-of-Fit* (MGoF), to analyze the likelihood ratio of distributions via the Kullback-Leibler divergence; it is fundamentally a hypothesis test on distributions [@wang2011statistical]. MGoF divides the observed data sequence into several windows. It quantizes the data in each window into a histogram and checks these estimated distributions against several hypotheses. If the target distribution rejects all provided hypotheses, it is considered an anomaly and preserved as a new candidate null hypothesis. If the target distribution fails to reject some hypothesis, it is considered supporting evidence for the one that yields the most similarity. Furthermore, if the number of pieces of supporting evidence is larger than a threshold $c_{th}$, it is classified as non-anomalous.
MGoF is the best competitor among the similar techniques, and we use it as the baseline for comparison with our approach.
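The core of such a window-based test can be sketched in a few lines of Python (our own illustrative simplification, not the authors' implementation: bin count, smoothing constant, and threshold are arbitrary choices here):

```python
import math

def histogram(data, bins, lo, hi, smooth=1e-6):
    """Smoothed, normalized histogram, so the KLD support prerequisite holds."""
    counts = [smooth] * bins
    width = (hi - lo) / bins
    for x in data:
        i = min(int((x - lo) / width), bins - 1)
        counts[i] += 1
    total = sum(counts)
    return [c / total for c in counts]

def kld(p, q):
    """Kullback-Leibler divergence D(p || q) between two histograms."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def is_anomalous(window_hist, null_hists, threshold):
    """MGoF-style decision: anomalous iff every null hypothesis is rejected."""
    return all(kld(window_hist, h) > threshold for h in null_hists)

ref = histogram([i % 10 for i in range(1000)], bins=10, lo=0, hi=10)
normal = histogram([i % 10 for i in range(500)], bins=10, lo=0, hi=10)
shifted = histogram([0] * 500, bins=10, lo=0, hi=10)
print(is_anomalous(normal, [ref], 0.1), is_anomalous(shifted, [ref], 0.1))
```

The smoothing term is what the text calls the KLD "prerequisite": without it, an empty bin in the reference histogram makes the divergence undefined, which is exactly the failure mode discussed for the Koubei data.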
Real World Problem: Click Farming Detection {#sec:related-realworld}
-------------------------------------------
Taobao possessed a market share of 50.6% to 56.2% in China by 2016 [@iresearch2016b2c]. Currently, there are more than 9.4 million sellers on Taobao, providing more than 1 billion different products. Under the intense pressure from massive competition, a number of sellers choose to use cheating techniques to raise their reputation and sales volumes and thereby improve their rankings in search lists.
The most popular approach to manipulating transaction and reputation data is *Click Farming*, where sellers use a large number of customer accounts to create fake transaction records and give high remarks on products. Professional click farmers are usually well-organized groups or companies containing thousands of people. Some companies even develop professional applications that can be deployed on common PCs to improve productivity [@zhao2016on].
There are two types of click
| 296
| 92
| 554
| 494
| null | null |
github_plus_top10pct_by_avg
|
t{\mu}\in{\mbox{\boldmath $\Lambda$}}_{i+\frac{1}{2}}$ if $\widehat{\mu}$ is obtained from $\widetilde{\lambda}$ by removing a box ($i = 0, 1, 2, \ldots n-1$) \[resp. ($i=0, 1, 2, \dots, n$)\],
- join $\widehat{\mu}\in{\mbox{\boldmath $\Lambda$}}_{i-\frac{1}{2}}$ and $\widetilde{\lambda}\in{\mbox{\boldmath $\Lambda$}}_i$ if $\widetilde{\lambda}$ is obtained from $\widehat{\mu}$ by adding a box ($i = 1, 2, \ldots n$).
For a pair of Young diagrams $({\mbox{\boldmath $\alpha$}}, {\mbox{\boldmath $\beta$}})$, if ${\mbox{\boldmath $\beta$}}$ is obtained from ${\mbox{\boldmath $\alpha$}}$ by one of the method above, we write this as ${\mbox{\boldmath $\alpha$}}\smile{\mbox{\boldmath $\beta$}}$.
Finally, we define the sets of the tableaux. For a half integer $n\in\frac{1}{2}\mathbb{Z}$ and ${\mbox{\boldmath $\alpha$}}\in{\mbox{\boldmath $\Lambda$}}_n$, we define ${\mathbb T}({\mbox{\boldmath $\alpha$}})$, [*tableaux of shape ${\mbox{\boldmath $\alpha$}}$*]{}, to be $$\begin{aligned}
{\mathbb T}({\mbox{\boldmath $\alpha$}})&=&
\{P = ({\mbox{\boldmath $\alpha$}}^{(0)}, {\mbox{\boldmath $\alpha$}}^{(1/2)}, \ldots, {\mbox{\boldmath $\alpha$}}^{(n)})\ |
\ {\mbox{\boldmath $\alpha$}}^{(j)} \in {\mbox{\boldmath $\Lambda$}}_j\ (j = 0, 1/2, \ldots, n),\\
& & \quad{\mbox{\boldmath $\alpha$}}^{(n)} = {\mbox{\boldmath $\alpha$}},
{\mbox{\boldmath $\alpha$}}^{(j)}\smile{\mbox{\boldmath $\alpha$}}^{(j+1/2)}
\ (j = 0, 1/2, \ldots, n-1/2)\}.\end{aligned}$$
Construction of representation {#sec:rep}
==============================
Now that we have defined the sets of tableaux, we define linear transformations among them.
Let ${\mathbb Q}$ be the field of rational numbers and $K_0 = {\mathbb Q}(Q)$ its extension. In the following, the linear transformations are defined over $K_0$. If they preserve the relations defined in the previous sections, they define representations of ${A}_n = {A}_n(Q)\otimes K_0$. Similar methods are used for example in the references [@AK; @GHJ; @Mu; @W1; @W2; @Ko2].
Let ${\mathbb V}(
| 297
| 2,676
| 498
| 306
| 2,965
| 0.775789
|
github_plus_top10pct_by_avg
|
B C^2 +
36 A^5 D^2 B C^2 + 2 A D^3 B C^2 +6 A^3 D^3 B C^2 \nonumber\\
\fl &+& 24 A^4 D^3 B C^2+
100 A^4 D B^2 C^2 + 88 A^5 D B^2 C^2 +
6 A D^2 B^2 C^2 \nonumber\\
\fl &+& 18 A^3 D^2 B^2 C^2 +
56 A^4 D^2 B^2 C^2 + 4 A^2 D^3 B^2 C^2 +
20 A^3 D^3 B^2 C^2 \nonumber\\
\fl &+& 160 A^4 D B^3 C^2+
36 A^2 D^2 B^3 C^2 + 32 A^2 D^3 B^3 C^2 +256 A^3 D B^4 C^2 \nonumber\\
\fl &+& 132 A^2 D^2 B^4 C^2 +
44 A D^3 B^4 C^2 + A^3 C^3 + 6 A^4 C^3 + 10 A^5 C^3 + 10 A^6 C^3 \nonumber\\
\fl &+& 6 A^7 C^3 + 8 A^3 D C^3 + 10 A^4 D C^3 +
14 A^5 D C^3 + 10 A^6 D C^3 \nonumber\\
\fl &+&4 A^4 D^2 C^3 +
6 A^5 D^2 C^3 + 2 A^3 D^3 C^3 +
6 A^4 D^3 C^3 + 8 A^4 B C^3 + 16 A^5 B C^3\nonumber\\
\fl &+& 20 A^6 B C^3 +
28 A^4 D B C^3 + 36 A^5 D B C^3 +
8 A^3 D^2 B C^3 + 12 A^4 D^2 B C^3 \nonumber\\
\fl &+&
4 A^2 D^3 B C^3 + 16 A^3 D^3 B C^3 + 12 A^3 B^2 C^3 +
18 A^5 B^2 C^3 + 12 A^2 D B^2 C^3 \nonumber\\
\fl &+& 44 A^4 D B^2 C^3 +
12 A^3 D^2 B^2 C^3 + 20 A^2 D^3 B^2 C^3 +
24 A^4 B^3 C^3\nonumber\\
\fl &+& 48 A^3 D B^3 C^3 + 24 A^2 D^2 B^3 C^3 + 24 A D^3 B^3 C^3
+ 3 A^3 C^4 + 10 A^4 C^4 \nonumber\\
\fl &+& 10 A^5 C^4 + 4 A^6 C^4 +A^2 D C^4
+ 2 A^3 D C^4 + 2 A^4 D C^4 +
12 A^4 B C^4 \nonumber\\
\fl &+& 8 A^5 B C^4 + 4 A^3 D B C^4 + 18 A^3 B^2 C^4 + 6 A^2 D B^2 C^4 + 2 A^2 D C^5\nonumber\\\fl&+&
4 A^3 D C^5 +
4 A^4 D C^5 + 8 A^3 D B C^5 + 12 A^2 D B^2 C^5 \> . \label{eq:A4b3}\end{aligned}$$ Equations (\[eq:Ab3\]) and (\[eq:Bb3\]) were found in [@Knezevic], and (\[eq:Cb3\]) in [@EKM].
For the $b=4$ case, the equations are too cumbersome to be quoted here, and they are available upon request from the authors.
Renormalization group equations for the CSAWs model \[app:CSAWsRG\]
===================================================================
It can be shown, via direct computer enumeration of the corresponding paths within the generator of the $b=2$ 3D SG fractal, that RG parameters $A_1, A_2, A_3, A_4, B_1$, and $B_2$ fulfil the following recursion relations $$\begin{aligned}
| 298
| 2,727
| 523
| 234
| null | null |
github_plus_top10pct_by_avg
|
$\bm{1}$ $\bm{1}$ $\bm{2}$ $\bm{2}$ $\bm{2}$
$U(1)_Y$ $\frac12$ $-1$ $0$ $\frac12$ $\frac12$ $\frac12$
$A_4$ ${(1,1',1'')}$ ${(1,1'',1')}$ $3$ $1$ $1$ $1$
$-k$ $0$ $0$ $-1$ $0$ $-1$ $-5$
$Z_3$ $1$ $1$ $\omega$ $1$ $\omega^2$ $\omega^2$
----------- --------------------------------------------------- ----------------------------------------- ---------- ------------ ------------ ------------
: Fermionic and bosonic field content of the model and their charge assignments under $SU(2)_L\times U(1)_Y\times A_4$, where $-k$ denotes the modular weight. The quark sector is the same as that in the SM.[]{data-label="tab:fields"}
------------ -------------------- -------------------- ------------------- -- -- --
$Y^{(4)}_{\bf1}$ $Y^{(2)}_{\bf3}$ $Y^{(6)}_{\bf3}$
[ $A_4$]{} ${\bf1}$ ${\bf3}$ ${\bf3}$
$-k$ $4$ $2$ $6$
------------ -------------------- -------------------- ------------------- -- -- --
: Modular weight assignments for Yukawa couplings.[]{data-label="tab:couplings"}
The modular forms of weight 2, [$(y_{1},y_{2},y_{3})$]{}, which transform as a triplet of $A_4$, are written in terms of the Dedekind eta-function, $\eta(\tau)$, and its derivative, $\eta'(\tau)$, as [@Feruglio:2017spp] $$\begin{aligned}
\label{eq:Y-A4}
y_{1}(\tau) &=& \frac{i}{2\pi}\left( \frac{\eta'(\tau/3)}{\eta(\tau/3)} +\frac{\eta'((\tau +1)/3)}{\eta((\tau+1)/3)}
+\frac{\eta'((\tau +2)/3)}{\eta
| 299
| 1,927
| 881
| 355
| null | null |
github_plus_top10pct_by_avg
|
Keyboard.Modifiers & ModifierKeys.Control) == ModifierKeys.Control)
{
MessageBox.Show("You hit ctrl + D");
}
}
private void dtg_view_InitNewRow(object sender, DevExpress.Xpf.Grid.InitNewRowEventArgs e)
{
dtg_tabletrial.SetCellValue(e.RowHandle, "UserName", "emre");
dtg_tabletrial.SetCellValue(e.RowHandle, "Surname", "newcompany");
dtg_tabletrial.SetCellValue(e.RowHandle, "Address", "new addres");
dtg_tabletrial.SetCellValue(e.RowHandle, "Phone", "new phone");
}
A:
Thanks again to Mike Strobel, but I also included another solution. I'm writing it down here for anyone who will need it.
Peace
private void dtg_tabletrial_KeyDown(object sender, KeyEventArgs e)
{
if (e.Key == Key.D && (Keyboard.Modifiers & ModifierKeys.Control) == ModifierKeys.Control)
{
DataGridTrialTable ff = new DataGridTrialTable();
ff.Address = dtgtrialTable.LastOrDefault().Address;
ff.UserName = dtgtrialTable.LastOrDefault().UserName;
ff.Phone = dtgtrialTable.LastOrDefault().Phone;
ff.Surname = dtgtrialTable.LastOrDefault().Surname;
dtgtrialTable.Add(ff);
}
}
Q:
Opening a document in wizard before install
My problem is that I would like to make a "hyperlink" (I know there is no such thing in Inno): when you click the label, a document (rtf) with instructions will open.
The problem: I DON'T want to copy this document along with the setup;
it should be inside the setup, and after the installation it is no longer
needed, thus it should be deleted or thrown out.
I can't use the {tmp} folder since it is accessed only in the [run] phase (that is, installation, if I am not mistaken) and I need it earlier.
Any suggestions?
A:
The temporary folder is not explicitly reserved for [Run] section. It can be used whenever needed (it is widely used e.g. for DLL libraries). And there is no such thing as a hyperlink label in Inno Setup as far as I know. I've made an example of a link labl
| 300
| 410
| 116
| 278
| 873
| 0.798516
|
github_plus_top10pct_by_avg
|