# Math Help - Finding the missing factor....(help pls)
1. ## Finding the missing factor....(help pls)
1. (a2) ( ? ) = a8
2. (x4) ( ? ) = x7
3. ( ? ) (5m2) = 20m5
4. ( ? ) ( – 6a2) = 18a3
5. ( – 3a2b3) ( ? ) = – 27a6b7
Thanks!
2. Originally Posted by slykksta
1. (a2) ( ? ) = a8
2. (x4) ( ? ) = x7
3. ( ? ) (5m2) = 20m5
4. ( ? ) ( – 6a2) = 18a3
5. ( – 3a2b3) ( ? ) = – 27a6b7
Thanks!
I presume that most of these integers are exponents? Try writing them this way:
a^2 ( ? ) = a^8
That would be much clearer.
Solve these problems by division. For example:
$a^2 x = a^8 \implies x = \frac{a^8}{a^2} = a^{8 - 2} = a^6$
etc.
-Dan
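The answers can then be read off mechanically: dividing monomials means dividing the coefficients and subtracting the exponents. A quick sketch in Python (the helper function is my own, not from the thread):

```python
def missing_factor(coeff, exps, prod_coeff, prod_exps):
    """Divide the product monomial by the known factor:
    divide the coefficients, subtract the exponents variable by variable."""
    quotient_exps = {v: prod_exps[v] - exps.get(v, 0) for v in prod_exps}
    return prod_coeff // coeff, quotient_exps

# 1. (a^2)( ? ) = a^8
print(missing_factor(1, {'a': 2}, 1, {'a': 8}))                      # (1, {'a': 6})
# 4. ( ? )(-6a^2) = 18a^3
print(missing_factor(-6, {'a': 2}, 18, {'a': 3}))                    # (-3, {'a': 1})
# 5. (-3a^2 b^3)( ? ) = -27a^6 b^7
print(missing_factor(-3, {'a': 2, 'b': 3}, -27, {'a': 6, 'b': 7}))   # (9, {'a': 4, 'b': 4})
```

So the missing factors are $a^6$, $x^3$, $4m^3$, $-3a$ and $9a^4b^4$ respectively.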
# Logic Symbols
1. Jan 19, 2010
### Char. Limit
A lot of times, when I look at something written in logic, there are these strange symbols popping out everywhere. Examples include an upside-down A, a giant V or U, or an upside-down V.
Could you point me to an article describing what these symbols mean?
2. Jan 20, 2010
### tiny-tim
Hi Char. Limit!
Upside-down A is quite common, it means "for all" (as in "for all x, there is a y such that …")
See http://en.wikipedia.org/wiki/Logical_symbols generally.
Last edited by a moderator: May 4, 2017
3. Jan 23, 2010
### tauon
The wiki link posted by tiny-tim is pretty good.
Also, since they're not mentioned there: the giant V is an equivalent notation for $$\exists$$ and the giant upside-down V is an equivalent notation for $$\forall$$.
These symbols are used by some authors because saying $$\forall x, P(x)$$ is equivalent to $$P(x_1)\wedge P(x_2) \wedge ... \wedge P(x_i)\wedge ...$$.
It's similar for $$\exists$$ and the big V. This notation is used because it shows the link between the quantifiers and logical conjunction and logical disjunction.
4. Jan 23, 2010
### Hurkyl
Staff Emeritus
That rewriting of forall as an iterated conjunction (and exists as disjunction) only works if you know the entire domain of the variable, and the domain is finite. (if you're using infinitary logic, you can extend this to infinite domains that aren't too big)
The giant conjunction and disjunction symbols are just iterations -- in exactly the same way that $\Sigma$ relates to addition, and $\Pi$ relates to multiplication.
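For what it's worth, the finite case is exactly what Python's all() and any() compute: iterated ∧ and ∨ over a domain, just as sum() iterates +. A small illustration (mine, not from the thread):

```python
# Over a finite domain, "for all x, P(x)" is P(x1) ∧ P(x2) ∧ … ∧ P(xn),
# and "there exists x, P(x)" is the corresponding disjunction.
domain = range(1, 6)          # a finite domain: {1, 2, 3, 4, 5}
P = lambda x: x > 0
Q = lambda x: x > 4

print(all(P(x) for x in domain))   # ∀x P(x) -> True
print(any(Q(x) for x in domain))   # ∃x Q(x) -> True (x = 5 works)
print(all(Q(x) for x in domain))   # ∀x Q(x) -> False

# The base cases for the iteration: an empty conjunction is True,
# an empty disjunction is False.
print(all([]), any([]))            # True False
```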
5. Jan 24, 2010
### Char. Limit
Did you just say "infinite domains that aren't too big"?
Are you saying something like "omega can work but aleph-one can't"?
Also, thanks for the Wikipedia article. I've bookmarked it.
6. Jan 24, 2010
### Hurkyl
Staff Emeritus
Yep.
Finitary logic only offers binary conjunctions and disjunctions. Of course, by iterating (and using "False" or "True" as the base case as appropriate) we can define the conjunction or disjunction of any finite number of things.
Infinitary logic, at its discretion, offers infinite versions of these repeated operations. What is actually provided depends upon the specific brand. I imagine that "countably many" and "any small* amount" are the most common, but any restriction on classes could be used -- it doesn't even have to be based on size! For example, there is probably some logic related to nonstandard analysis that allows "hyperfinite" conjunctions/disjunctions, and none others.
It doesn't even have to be the same for conjunction and disjunction! e.g. The infinitary logic relevant to one of my interests (topos theory) only allows finite conjunctions, but all small disjunctions.
Now, to add a disclaimer -- I've never seen infinitary logic formally presented: in what I've read it winds up simply being something like "if we allow infinitely many disjunctions, we get infinitary logic". While what I've described above is consistent with what that would mean, there may be some subtlety I am unaware of.
*: Small, here, means that it fits into a set. e.g. the real numbers are small. The class of all sets is not small.
Last edited: Jan 24, 2010
7. Jan 24, 2010
### Char. Limit
Ah, yes... cardinality... it never makes sense to me, let it begone.
Do you have an example of infinitary logic?
8. Jan 24, 2010
### tiny-tim
I have discovered a truly marvellous example, but this universe is too narrow to contain it.
9. Jan 24, 2010
### Char. Limit
Lol... I love references to FLT...
How can a universe be narrow, when the universe is flat, spherical, of uniform density, and with me at the center?
10. Jan 24, 2010
### tiny-tim
you're the limit!
It's the margin round you that's too narrow!!
11. Jan 24, 2010
### Char. Limit
Ah. In that case, let me just expand the universe a bit...
"There you go, one lightminute bigger.
12. Jan 24, 2010
### tiny-tim
Wow! suddenly it's brighter!
13. Jan 24, 2010
### Hurkyl
Staff Emeritus
If you're willing to consider just propositional logic, the algebraic analog of "truth values, conjunction, and disjunction" is that of a distributive lattice.
For classical propositional logic, you want to consider Boolean lattices.
For infinitary propositional logic, you'd want to look at things like complete lattices.
The open sets of a topological space, incidentally, form an example of a complete lattice with finite meets and arbitrary joins (meet ~ conjunction ~ intersection, join ~ disjunction ~ union). It's not Boolean, though -- but it is Heyting.
(Such a lattice has arbitrary meets -- the "interior of intersection" operation -- but those aren't expected to behave properly algebraically. e.g. the distributive property need not hold, nor should they be preserved by homomorphisms)
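A standard example of the failure in that parenthetical (not from the thread, but worth recording): in the lattice of open subsets of the real line, the meet is the interior of the intersection, and

$$\bigwedge_{n \geq 1} \left(-\tfrac{1}{n}, \tfrac{1}{n}\right) = \operatorname{int}\left(\bigcap_{n \geq 1} \left(-\tfrac{1}{n}, \tfrac{1}{n}\right)\right) = \operatorname{int}(\{0\}) = \varnothing.$$

The set-theoretic intersection is $\{0\}$, which is not open, so the infinite meet shrinks it to the empty set; finite meets, by contrast, are plain intersections.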
1. ## Consecutive integers...
The sum of c consecutive positive integers = c^2.
Example, c=5: 3+4+5+6+7 = 25 ; 5^2 = 25
(last integer = 7)
If c = 9999, what is the last integer?
What is the last integer in terms of c?
2. ## Re: Consecutive integers...
Code:
c:=9999:
for n from 1 to 10000 do
    if sum(k,k=n..n+c-1)=c^2 then
        print(n+c-1);
        break;
    end if;
end do;
for $\displaystyle c=9999$ , last integer =$\displaystyle 14998$
in terms of $\displaystyle c$ , last integer =$\displaystyle \frac{3c-1}{2}$
3. ## Re: Consecutive integers...
Good one Princeps.
Accidentally got that while fooling around with consecutive numbers.
Had you seen the "easy formula" last integer = (3c - 1)/2 before?
Amazing that there is a solution for ALL odd c's.
4. ## Re: Consecutive integers...
$\displaystyle a_i=\frac{c-1}{2}+i ~\text{ for }~ 1\leq i \leq c$
$\displaystyle \displaystyle \sum_{i=1}^c \left(\frac{c-1}{2}+i\right)=c\cdot\frac{(c-1)}{2}+\displaystyle \sum_{i=1}^c i=\frac{c(c-1)}{2}+\frac{c(c+1)}{2}=c^2$
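Putting the thread's results together in executable form - a quick check of the closed formula against brute-force summation (a sketch of mine, not from the thread):

```python
def last_integer(c):
    """Last of the c consecutive positive integers summing to c^2 (c odd)."""
    return (3 * c - 1) // 2

def check(c):
    last = last_integer(c)
    first = last - c + 1                 # equivalently (c + 1) // 2
    return sum(range(first, last + 1)) == c * c

print(last_integer(5))                           # 7, matching 3+4+5+6+7 = 25
print(last_integer(9999))                        # 14998
print(all(check(c) for c in range(1, 200, 2)))   # True: works for every odd c tried
```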
# Ngô Quốc Anh
## March 29, 2011
### The (original) Picone identity
Filed under: PDEs — Tags: — Ngô Quốc Anh @ 17:02
For differentiable functions $v > 0$ and $u \geqslant 0$, the following Picone’s identity is well known
$\displaystyle {\left| {\nabla u - \frac{u}{v}\nabla v} \right|^2} = {\left| {\nabla u} \right|^2} - 2\frac{u}{v}\nabla u \cdot \nabla v + \frac{{{u^2}}}{{{v^2}}}{\left| {\nabla v} \right|^2} = {\left| {\nabla u} \right|^2} - \nabla \left( {\frac{{{u^2}}}{v}} \right) \cdot \nabla v \geqslant 0.$
The proof is very simple. For each partial derivative $\frac{\partial}{\partial x_i}$ we have
$\displaystyle\frac{\partial }{{\partial {x_i}}}\left( {\frac{{{u^2}}}{v}} \right) = \frac{1}{{{v^2}}}\left[ {\frac{{\partial ({u^2})}}{{\partial {x_i}}}v - {u^2}\frac{{\partial v}}{{\partial {x_i}}}} \right] = \frac{1}{{{v^2}}}\left[ {2u\frac{{\partial u}}{{\partial {x_i}}}v - {u^2}\frac{{\partial v}}{{\partial {x_i}}}} \right]$
which implies
$\displaystyle\nabla \left( {\frac{{{u^2}}}{v}} \right) = \frac{1}{{{v^2}}}\left[ {2uv\nabla u - {u^2}\nabla v} \right] = \frac{{2u}}{v}\nabla u - \frac{{{u^2}}}{{{v^2}}}\nabla v.$
Thus
$\displaystyle - \nabla \left( {\frac{{{u^2}}}{v}} \right) \cdot \nabla v = - \left[ {\frac{{2u}}{v}\nabla u - \frac{{{u^2}}}{{{v^2}}}\nabla v} \right] \cdot \nabla v = - 2\frac{u}{v}\nabla u \cdot \nabla v + \frac{{{u^2}}}{{{v^2}}}{\left| {\nabla v} \right|^2}.$
The Picone identity is very useful. We shall address this later on.
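The identity can also be verified symbolically. A minimal check in two variables using SymPy (my sketch, not part of the original post):

```python
import sympy as sp

x, y = sp.symbols('x y')
u = sp.Function('u')(x, y)
v = sp.Function('v')(x, y)

grad = lambda f: sp.Matrix([sp.diff(f, x), sp.diff(f, y)])

# |∇u - (u/v)∇v|^2  versus  |∇u|^2 - ∇(u^2/v)·∇v
w = grad(u) - (u / v) * grad(v)
lhs = w.dot(w)
rhs = grad(u).dot(grad(u)) - grad(u**2 / v).dot(grad(v))

print(sp.simplify(lhs - rhs))   # 0, confirming the two expressions agree
```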
## March 26, 2011
### Asymptotic behavior of integrals, 4
Filed under: PDEs — Tags: — Ngô Quốc Anh @ 21:25
We consider the following PDE
$\Delta u = f(x), \quad x \in \mathbb R^2$.
By letting
$\displaystyle w(x) = \frac{1}{{2\pi }}\int_{{\mathbb{R}^2}} {\left[ {\log |x - y| - \log |y|} \right]f(y)dy}$
via potential theory, we have already proved that
$u-w={\rm const}.$
As such, the analysis of $w$ turns out to be the core of the study of solutions to our PDE. As shown in this entry, the following limit
$\displaystyle\mathop {\lim }\limits_{|x| \to \infty } \left[ {w(x) - \alpha \log |x|} \right] = -\frac{1}{{2\pi }}\int_{{\mathbb{R}^2}} {\log |y|f(y)dy}$
exists for certain functions $f$. Beyond the behavior at infinity, and answering a question also proposed in that entry, we can control the decay rate of
$\displaystyle {w(x) - \alpha \log |x| + \frac{1}{{2\pi }}\int_{{\mathbb{R}^2}} {\log |y|f(y)dy} }$
that is, we need the estimate
$\displaystyle\left| {w(x) - \alpha \log |x| + \frac{1}{{2\pi }}\int_{{\mathbb{R}^2}} {\log |y|f(y)dy} } \right| \leqslant \frac{{C\log |x|}}{{|x|}},\quad \forall |x| \geqslant 1$
for some positive constant $C$, where $w$ is the particular solution constructed above.
I do think this result is correct, since it has been used in a paper by X.X. Chen published in Calc. Var. Partial Differential Equations [here], but some extra idea is involved. I leave it here as my own open question, to be addressed in the future.
## March 23, 2011
### A proof of the uniqueness of the solution of the prescribing Gaussian curvature problem
Filed under: Uncategorized — Ngô Quốc Anh @ 22:58
Let us continue with the problem of prescribing Gaussian curvature. Our PDE reads as follows
$\displaystyle -\Delta u +K_0(x)=K(x)e^{2u}, \quad x \in M$
where $M$ is a compact manifold without boundary. Today we show that if
$\displaystyle K(x) \leqslant 0$
then our PDE has a unique solution.
Assume that $u_1$ and $u_2$ are solutions to the PDE, that is
$\displaystyle\begin{gathered} - \Delta {u_1} + {K_0}(x) = K(x){e^{2{u_1}}}, \hfill \\ - \Delta {u_2} + {K_0}(x) = K(x){e^{2{u_2}}}, \hfill \\ \end{gathered}$
By subtracting, we have
$\displaystyle - \Delta ({u_1} - {u_2}) = K(x)({e^{2{u_1}}} - {e^{2{u_2}}}).$
Multiplying both sides by $u_1-u_2$, integrating over $M$, and then using integration by parts, we arrive at
$\displaystyle\int_M {{{\left| {\nabla ({u_1} - {u_2})} \right|}^2}dv} = \int_M {K(x)\frac{{{e^{2{u_1}}} - {e^{2{u_2}}}}}{{{u_1} - {u_2}}}{{\left| {{u_1} - {u_2}} \right|}^2}dv} .$
Since $K(x) \leqslant 0$, it follows that
$\displaystyle\int_M {{{\left| {\nabla ({u_1} - {u_2})} \right|}^2}dv} \leqslant 0.$
In particular, $\nabla(u_1 - u_2) = 0$, so $u_1 - u_2$ is constant. Substituting back into the PDE gives $K(x)({e^{2{u_1}}} - {e^{2{u_2}}}) = 0$, so (provided $K \not\equiv 0$) the constant vanishes, i.e. $u_1 \equiv u_2$.
## March 20, 2011
### A note on the equation involving the prescribing Gaussian curvature problem
Filed under: Uncategorized — Ngô Quốc Anh @ 2:41
Let $M$ be a smooth, compact, two-dimensional Riemannian manifold. Let $g_o(x)$ be a metric on $M$ with corresponding Laplace-Beltrami operator $\Delta$ and Gaussian curvature $K_0(x)$. Given a function $K(x)$ on $M$, can it be realized as the Gaussian curvature associated to the pointwise conformal metric
$\displaystyle g(x) = e^{2u(x)}g_o(x)$?
Answering this question is equivalent to solving the following semi-linear elliptic equation
$\displaystyle -\Delta u +K_0(x)=K(x)e^{2u}, \quad x \in M.$
In this entry, we summarize some basic steps in order to simplify the above PDE. We first let $\overline u=2u$, then our PDE becomes
$\displaystyle -\frac{1}{2}\Delta \overline u +K_0(x)=K(x)e^{\overline u}.$
Let $v$ be a solution of the following PDE
$\displaystyle -\Delta v=2K_0(x)-2\overline K_0$
where
$\displaystyle \overline K_0=\frac{1}{|M|}\int_M K_0(x)dv$
is nothing but the average of $K_0$ over $M$. The solvability of the foregoing PDE comes from the fact that
$\displaystyle\int_M (2K_0(x)-2\overline K_0)dv=0.$
We let $w=\overline u+v$. Then it is easy to verify that $w$ solves the following
$\displaystyle -\Delta w +2\overline K_0=2K(x)e^{-v}e^w.$
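Spelling the verification out: from the two PDEs above, $-\Delta \overline u = 2K(x)e^{\overline u} - 2K_0(x)$ and $-\Delta v = 2K_0(x)-2\overline K_0$, so that

$\displaystyle -\Delta w = -\Delta \overline u - \Delta v = 2K(x)e^{\overline u} - 2\overline K_0.$

Since $\overline u = w - v$, we have $e^{\overline u}=e^{-v}e^w$, which is exactly the equation claimed.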
Finally, letting
$\alpha=2\overline K_0, \quad R(x)=2K(x)e^{-v(x)}$
we get
$\displaystyle -\Delta w +\alpha = R(x)e^w$
or by renaming $w$ by $u$
$\displaystyle -\Delta u +\alpha = R(x)e^u.$
The advantage of this equation is that here $\alpha$ is constant. To be precise, by the Gauss-Bonnet theorem, we have
$\displaystyle \alpha=\frac{4\pi}{|M|}\chi(M)$
where $\chi(M)$ is the Euler characteristic of $M$.
## March 16, 2011
### Cofactor matrix has divergence-free rows
Filed under: PDEs — Ngô Quốc Anh @ 15:45
In this entry, we prove the following interesting result
Let $\mathbf{u} : \mathbb R^n \to \mathbb R^n$ be a smooth function. Then
$\displaystyle \sum_{i=1}^n (\mbox{cof}D\mathbf{u})_{i,x_i}^k=0$
for each $k=\overline{1,n}$ fixed.
For simplicity, let us write $\mathbf{u}=(u^1,...,u^n) \in \mathbb R^n$. Then
$\displaystyle D\mathbf{u} = \left( {\begin{array}{*{20}{c}} {u_{{x_1}}^1}&{u_{{x_2}}^1}& \cdots &{u_{{x_n}}^1} \\ {u_{{x_1}}^2}&{u_{{x_2}}^2}& \cdots &{u_{{x_n}}^2} \\ \vdots & \vdots & \ddots & \vdots \\ {u_{{x_1}}^n}&{u_{{x_2}}^n}& \cdots &{u_{{x_n}}^n} \end{array}} \right).$
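Although the post stops here, the $n=2$ case already shows why the result holds: the divergence of each cofactor row collapses to a difference of mixed partials. A SymPy check (my sketch, not part of the original post):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
u1 = sp.Function('u1')(x1, x2)
u2 = sp.Function('u2')(x1, x2)

Du = sp.Matrix([[sp.diff(u1, x1), sp.diff(u1, x2)],
                [sp.diff(u2, x1), sp.diff(u2, x2)]])
cof = Du.cofactor_matrix()

# Divergence of row k: sum over i of d/dx_i (cof Du)_{k i}
divs = [sp.simplify(sum(sp.diff(cof[k, i], var)
                        for i, var in enumerate((x1, x2))))
        for k in range(2)]
print(divs)   # [0, 0]: each row is divergence-free, by equality of mixed partials
```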
## March 13, 2011
### Jacobi’s formula for the derivative of a determinant revisited
Filed under: Uncategorized — Ngô Quốc Anh @ 12:23
Last time, we discussed [here] Jacobi’s formula, which expresses the differential of the determinant of a matrix $A$ in terms of the adjugate of $A$ and the differential of $A$. The formula is
$\displaystyle d\mbox{det} (A) = \mbox{tr} (\mbox{adj}(A) \, dA)$.
A more useful formula is the following
$\displaystyle \frac{d}{dt}\mbox{det} (A+tB) = \mbox{det}(A+tB) \mbox{tr}\big((A+tB)^{-1}B\big)$.
Let us first reprove the Jacobi formula. Assume $(A^{ij})$ is the cofactor matrix of $A=(a_{ij})$. It then holds that
$\mbox{det}(A)=\sum_{j}a_{ij}A^{ij}.$
Therefore,
$\displaystyle\frac{d}{{d{a_{ij}}}}(\det (A)) = \frac{d}{{d{a_{ij}}}}\left( {\sum\limits_k {{a_{ik}}{A^{ik}}} } \right) = {A^{ij}}.$
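Both formulas are easy to test symbolically. A SymPy check on a concrete pair of matrices (my sketch, not from the post):

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[2, 1], [0, 3]])
B = sp.Matrix([[1, 4], [2, 1]])
M = A + t * B

lhs = sp.diff(M.det(), t)                  # d/dt det(A + tB)
rhs = M.det() * (M.inv() * B).trace()      # det(A + tB) tr((A + tB)^{-1} B)
print(sp.simplify(lhs - rhs))              # 0

# Jacobi's formula itself: d det = tr(adj(M) dM), here with dM/dt = B
print(sp.simplify(lhs - (M.adjugate() * B).trace()))   # 0
```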
## March 10, 2011
### log K is harmonic implies that K is constant
Filed under: PDEs — Ngô Quốc Anh @ 18:29
In this short note we present a result in a paper due to Edward M. Fan [here]. To be precise, we prove
Given a sphere $(\mathbb S^2,g_0)$ with standard metric, if $K(x)>0$ and $\Delta \ln \big(K(x)\big)=0$, then $K$ must be a constant.
Proof. To prove the result, we shall use the following well-known formula
$\displaystyle \Delta \ln K=\frac{\Delta K}{K}-\frac{|\nabla K|^2}{K^2}.$
Therefore, the fact that $\ln K$ is harmonic implies that
$\displaystyle\frac{\Delta K}{K}=\frac{|\nabla K|^2}{K^2}.$
Since $K>0$, multiplying both sides by $K^2$, we get
$\displaystyle K\Delta K=|\nabla K|^2.$
Integrating both sides over $\mathbb S^2$ with the standard volume form, we get
$\displaystyle \int_{\mathbb S^2}K\Delta K dv_{g_0}=\int_{\mathbb S^2}|\nabla K|^2dv_{g_0}.$
Now integration by parts shows that
$\displaystyle -\int_{\mathbb S^2}|\nabla K|^2dv_{g_0}=\int_{\mathbb S^2}|\nabla K|^2dv_{g_0}.$
This in turn shows that
$|\nabla K|=0.$
We conclude that $K(x)$ must be a constant function.
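The "well-known formula" used at the start of the proof can itself be checked symbolically; a two-variable SymPy sketch (mine, not from the paper):

```python
import sympy as sp

x, y = sp.symbols('x y')
K = sp.Function('K', positive=True)(x, y)

lap = lambda f: sp.diff(f, x, 2) + sp.diff(f, y, 2)    # flat Laplacian
grad_sq = sp.diff(K, x)**2 + sp.diff(K, y)**2          # |∇K|^2

# Δ ln K - ( ΔK/K - |∇K|^2/K^2 ) should vanish identically
residual = lap(sp.log(K)) - (lap(K) / K - grad_sq / K**2)
print(sp.simplify(residual))   # 0
```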
## March 7, 2011
### An identity involving inner product of gradients
Filed under: Uncategorized — Ngô Quốc Anh @ 22:41
In this short note, we prove the following identity
For any functions $\eta, u$, it holds
$\displaystyle\nabla u \cdot \nabla ({\eta ^2}u) = |\nabla (\eta u){|^2} - {u^2}|\nabla \eta {|^2}.$
The proof is elementary. By the product rule for the gradient, we know that
$\displaystyle\nabla u \cdot \nabla ({\eta ^2}u) = \nabla u \cdot \big((\eta u)\nabla \eta + \eta \nabla (\eta u)\big).$
Thus
$\displaystyle\nabla u \cdot \nabla ({\eta ^2}u) = \eta u\nabla \eta \cdot \nabla u + \eta \nabla u \cdot \nabla (\eta u).$
The term $\eta \nabla u \cdot \nabla (\eta u)$ can be rewritten as follows
$\displaystyle\eta \nabla u \cdot \nabla (\eta u) = \nabla (\eta u) \cdot \nabla (\eta u) - u\nabla \eta \cdot \nabla (\eta u) = |\nabla (\eta u){|^2} - u\nabla \eta \cdot \nabla (\eta u).$
We then have
$\displaystyle\nabla u \cdot \nabla ({\eta ^2}u) = \eta u\nabla \eta \cdot \nabla u + |\nabla (\eta u){|^2} - u\nabla \eta \cdot \nabla (\eta u).$
Keep in mind that
$\displaystyle\eta u\nabla \eta \cdot \nabla u - u\nabla \eta \cdot \nabla (\eta u) = - u\nabla \eta \cdot (\nabla (\eta u) - \eta \nabla u) = - u\nabla \eta \cdot (u\nabla \eta ).$
Therefore
$\displaystyle\nabla u \cdot \nabla ({\eta ^2}u) = |\nabla (\eta u){|^2} - {u^2}|\nabla \eta {|^2}.$
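As with the other identities on this page, this one can be confirmed mechanically; a SymPy check in two variables (my sketch):

```python
import sympy as sp

x, y = sp.symbols('x y')
u = sp.Function('u')(x, y)
eta = sp.Function('eta')(x, y)

grad = lambda f: sp.Matrix([sp.diff(f, x), sp.diff(f, y)])

# ∇u·∇(η²u)  versus  |∇(ηu)|² - u²|∇η|²
lhs = grad(u).dot(grad(eta**2 * u))
rhs = grad(eta * u).dot(grad(eta * u)) - u**2 * grad(eta).dot(grad(eta))
print(sp.simplify(lhs - rhs))   # 0
```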
## March 3, 2011
### The Paneitz operator in any dimension
Filed under: Uncategorized — Tags: — Ngô Quốc Anh @ 16:43
Let us recall from this topic the following fact: Let $(M,g)$ be a compact Riemannian $4$-manifold, and let ${\rm Ric}_g$ and $R_g$ denote the Ricci tensor and the scalar curvature of $g$, respectively. The so-called Paneitz operator $P_g$ acts on a smooth function $u$ on $M$ via
$\displaystyle {P_g^4}(u) = \Delta _g^2u + {\rm div}\left( {\frac{2}{3}{R_g}\,g - 2{\rm Ric}_g} \right)du$
where $d$ is the de Rham differential; in dimension four this operator plays a role similar to that of the Laplace operator in dimension two. Associated to this operator is the notion of $Q$-curvature given by
$\displaystyle Q_g^4=-\frac{1}{6}(\Delta R_g - R_g^2 +3|{\rm Ric}_g|_g^2).$
Under the following conformal change
$\widetilde g = e^{2u}g$
passing from $Q_g^4$ to $Q_{\widetilde g}^4$ is easy through the following formula
$P_g^4 (u)+Q_g^4=Q_{\widetilde g}^4e^{4u}.$
## March 1, 2011
### The implicit function theorem: A PDE example
Filed under: Giải Tích 3, PDEs — Tags: — Ngô Quốc Anh @ 23:29
This entry is devoted to an existence result for the following semilinear elliptic equation
$-\Delta u + u = u^p+f(x)$
in the whole space $\mathbb R^n$ where $0.
Our aim is to apply the implicit function theorem. It is known in the literature that
Theorem (implicit function theorem). Let $X, Y, Z$ be Banach spaces. Let the mapping $F:X\times Y\to Z$ be continuously Fréchet differentiable.
If
$(x_0,y_0)\in X\times Y, \quad F(x_0,y_0) = 0$,
and
$y\mapsto DF(x_0,y_0)(0,y)$
is a Banach space isomorphism from $Y$ onto $Z$, then there exist neighborhoods $U$ of $x_0$ and $V$ of $y_0$ and a Fréchet differentiable function $g:U\to V$ such that
$F(x,g(x)) = 0$
and $F(x,y) = 0$ if and only if $y = g(x)$, for all $(x,y)\in U\times V$.
Let us now consider
$X=L^2(\mathbb R^n), \quad Y=H_+^2(\mathbb R^n), \quad Z=L^2(\mathbb R^n)$.
Let us define
$F(f,u)=-\Delta u + u - u^p-f(x), \quad f \in X, \quad u \in Y, \quad x \in \mathbb R^n$.
It is not hard to see that the Fréchet derivative of $F$ at $(f,u)$ with respect to $u$ in the direction $v$ is given by
${D_u}F(f,u)v = - \Delta v + v - p{u^{p - 1}}v$.
Since $-\Delta +I$ defines an isomorphism from $Y$ onto $Z$, our PDE is solvable for $f$ small enough in the $X$-norm.
# How can I run an .exe file from Mathematica?
I have a .exe file that takes a .txt file as input and writes its output to another .txt file.
Now I want to run that .exe file from Mathematica.
I tried two ways.
Case 1:
Using SystemOpen[].
When I run SystemOpen[path], the following window appears.
I click on the Run button, but after that nothing happens.
To confirm, I double-clicked on the .exe file manually and it works fine, but SystemOpen[] does not.
Case 2:
I created a .bat file with the following code
cd C:\Users\Infratab Bangalore\Desktop\Rod's
Infratab1-2.exe
Now, when I open the .bat file using the same function SystemOpen[], it works great.
How can I fix the problem in case 1? If anybody knows, please suggest a fix.
Thanks.
The tutorial on running external programs would be a good place to start – Simon Woods Jul 22 '13 at 10:33
@SimonWoods I tried with Run and Runthrough in the following way. but not working Run[.exe FilePath] – subbu Jul 22 '13 at 11:19
@SimonWoods If I use any functions like, CompilationTarget.it showing A C compiler cannot be found on your system. Please consult the \ documentation to learn how to set up suitable compilers. so Am I need to install C in my system. – subbu Jul 22 '13 at 12:21
Thanks for adding some relevant information. That at least looks like a question that one can attempt to answer now, so I have reopened this. – Mr.Wizard Jul 22 '13 at 14:03
Have you tried unchecking the "Always ask before opening" checkbox in that dialog that came up? – Szabolcs Sep 21 '13 at 16:43
file = OpenWrite[FileNameJoin[{\$TemporaryDirectory, "testfile.bat"}]]
And then use SystemOpen to call the file.
Why do I need to create another file? I already have input files; I just need to execute the .exe file, and it will pick up everything automatically. – subbu Jul 22 '13 at 16:53
# A simple model of boom and bust
How private sector spending behaviours drive the economy and government budget
Posted on 29 July 2017 by Andrew Berkeley
In the last two posts we developed simple models of how government money circulates in the economy. In this post, we'll experiment with some of the behaviours encoded in these models in order to elucidate some of the ways in which the government and private sector interact with one another.
In the first model we assumed that the private sector saved a constant fraction of their income. This resulted in a stable aggregate income level and ever-increasing saved wealth. It also meant that the government - which is the monetary authority - had to constantly add money into the economy to counteract this "leakage" of money into savings. As such, the government had a permanent budget deficit and the size of the government "debt" was ever increasing through time, mirroring the private savings.
In the second model we added the ability of the private sector to spend out of their saved wealth. This resulted in larger aggregate incomes and a stabilised level of saved wealth, interpreted to represent the private sector's wealth target. By implication, the government ended up with a balanced budget position and a stable level of debt.
Here, we're going to retain the final form of the model and simply adjust some of the input parameters - specifically, the propensity to spend out of income ($\alpha_Y$). First we'll decrease the propensity to spend out of income and then we'll increase it again. This effectively represents a variation in the spending and saving behaviours of the population. We could also adjust the propensity to spend out of savings ($\alpha_H$) but we'll stick to just varying $\alpha_Y$ for the sake of simplicity.
## The Bust
Let's get some of the standard coding stuff out of the way.
In [67]:
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
N = 100
# exogenous variables
G = 20 # government spending
theta = 0.2 # tax rate
alpha_H = 0.2 # propensity to spend out of saved wealth
# endogenous variables
Y = np.zeros(N) # income
T = np.zeros(N) # tax revenue
C = np.zeros(N) # consumption
H_h = np.zeros(N) # private savings
H_g = np.zeros(N) # government balance
Okay, now we'll introduce some changes. Firstly, we're going to make $\alpha_Y$ a function of time, so that it can vary through time if we choose. In our code, the variable becomes an array of values instead of a single constant, with one value for every time step in the simulation. When we solve the model, instead of referencing a single constant value, we'll reference the value for the appropriate time period. So we simply need to prepare an array containing the values we want through time. We'll start by initialising an array of length N
In [68]:
alpha_Y = np.zeros(N)
And now we'll set the values in the array. We'll start with the same value as we've used before, but what we want is to reduce this value after 10 time periods. So we'll switch from 0.9 (i.e. 90% spending of disposable income) to 0.8 (80%).
In [69]:
alpha_Y[0:10] = 0.9 # set the first 10 elements
alpha_Y[10:N] = 0.8 # set the remainder of the elements
We can check we have what we want by printing out the first, say, 15 values.
In [70]:
print(alpha_Y[0:15])
[ 0.9 0.9 0.9 0.9 0.9 0.9 0.9 0.9 0.9 0.9 0.8 0.8 0.8 0.8 0.8]
Sure enough, we have the change we want in there.
Now, we can also make another change that suits our purposes. Remember the last model took a number of time periods to reach its ultimate steady-state condition. We can by-pass this phase by simply setting our model to start from the steady-state condition. This way, we can concentrate on the effects we're interested in - in this case, the changing $\alpha_Y$ parameter - without them being obscured by other behaviour that we're not interested in. In modelling parlance, we're specifying our initial conditions. So, we'll set the initial values of each of our variables to the values that emerged during the steady-state of the last model.
In [71]:
Y[0] = 100
C[0] = 80
T[0] = 20
H_h[0] = 40
H_g[0] = -40
Okay, easy. Let's run the model.
In [72]:
for t in range(1, N):
# calculate total income for this time step (equation 1)
Y[t] = (G + alpha_H*H_h[t-1])/(1 - alpha_Y[t]*(1-theta))
# calculate the tax paid on income for this time step (3)
T[t] = theta * Y[t]
# calculate the consumption spending for this time step (4)
C[t] = alpha_Y[t]*(1 - theta)*Y[t] + alpha_H*H_h[t-1]
# calculate the new level of private savings for this time step (5)
H_h[t] = H_h[t-1] + Y[t] - T[t] - C[t]
# calculate the new level of government money balance (6)
H_g[t] = H_g[t-1] + T[t]- G
Notice that there are two subtle changes in the above code. Firstly, all references to the alpha_Y variable now reference the appropriate index of what is now an array (i.e. alpha_Y[t]) rather than treating the variable as a single constant (i.e. alpha_Y) as previously. Secondly, the loop iterates from 1..N as opposed to 0..N (for t in range(1, N):). This is because we have specified particular values for the first time period up front (our initial conditions), so we don't want the model to overwrite them; we simply want to start the model from the next time period (index 1, since Python indexing is zero-based).
Right, let's make the usual plots. First, aggregate spending and income. (We'll omit the code. It is the same as before).
So, we have constant government spending but consumption spending is characterised by a large decrease at the 10th time period, when the private sector's propensity to consume out of income drops from 90% to 80%. This in turn causes an equivalent reduction in aggregate income. Afterwards, both consumption spending and aggregate income recover to their previous, steady-state levels.
Let's see what happens to the government sector under these conditions.
The government maintains its constant spending level (left), which is no problem given that the government is the monetary authority in this model. But tax revenue (right) exhibits a reduction in the 10th time period, recovering thereafter. Since tax is levied in proportion to aggregate income, this pattern is trivially explained as a consequence of the variations in aggregate income shown above.
And what does this do to the government and private sector net positions?
So on the 10th time period we can see that the private sector jumps into a position of surplus (green, left plot). This is simply because it has suddenly started spending 10% less of its income. We therefore see private sector wealth start to increase (green, right plot). Correspondingly, the government's budget moves into deficit due to lower tax revenues and the government's "debt" increases. What we have produced here might be called a recession: the change in private sector spending behaviour caused a lowering of consumption spending and thereby aggregate incomes - the economy shrank. Recall that, all other things being equal, increased saving necessarily reduces aggregate incomes.
However, the economy did recover to its original size, and, interestingly, this occurred without any subsequent reversion in private sector spending behaviour. What has happened is that the increased saving of the private sector, over subsequent time periods, caused the stock of saved wealth to grow larger. As such, the spending out of this wealth also grew. Eventually, private saved wealth reached a size at which the rate of spending out of wealth and the rate of saving out of income were again brought into balance. For reasons discussed previously, this occurred at the same level of aggregate income as before; the sole difference between the steady-state conditions before and after the "recession" is the size of saved wealth (and, correspondingly, government debt). Essentially, with a higher rate of saving out of income, saved wealth needs to be larger in order to achieve steady-state.
We can interpret this by focussing not so much on the change in propensity to spend out of income per se, but on the idea that the private sector changed their preferred level of wealth - their wealth target. Intuitively, we can imagine that, irrespective of what caused the private sector to start saving more in the first place, their confidence to spend was gradually restored as they built up a sufficient new level of saved wealth. The net result is a return to the previous level of income but with greater private savings and a larger government "debt".
Notice that the government deficit which emerged from the private sector's change in behaviour was eventually closed. And without the government doing anything to change their fiscal policy (spending, $G$, and tax rate, $\theta$).
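As a check on that recovery, we can re-run just the income and savings updates to the end of the simulation and confirm that the economy settles back at $Y = G/\theta = 100$, with a larger stock of saved wealth. (This condensed re-run is my own sketch; the plots discussed in the text come from the full model above.)

```python
import numpy as np

N, G, theta, alpha_H = 100, 20, 0.2, 0.2
alpha_Y = np.zeros(N)
alpha_Y[0:10], alpha_Y[10:N] = 0.9, 0.8    # the "bust" scenario

Y, H_h = np.zeros(N), np.zeros(N)
Y[0], H_h[0] = 100, 40                     # steady-state initial conditions

for t in range(1, N):
    Y[t] = (G + alpha_H * H_h[t-1]) / (1 - alpha_Y[t] * (1 - theta))
    C = alpha_Y[t] * (1 - theta) * Y[t] + alpha_H * H_h[t-1]
    H_h[t] = H_h[t-1] + Y[t] - theta * Y[t] - C

print(round(Y[-1], 2))     # 100.0: income back at G/theta
print(round(H_h[-1], 2))   # 80.0: saved wealth settles above its initial 40
```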
## The boom
Let's replay the scenario modelled above, but this time we'll actually revert the change to $\alpha_Y$ on the 50th time period.
In [76]:
alpha_Y[0:10] = 0.9
alpha_Y[10:50] = 0.8
alpha_Y[50:N] = 0.9
How does this change our results...
As before, we have a "recession" at the 10th time period which "recovers" over the subsequent 20 or 30 time periods. Then on the 50th time period consumer spending suddenly increases. As such, aggregate incomes also take a big boost.
The boost to aggregate incomes causes a corresponding jump in tax revenue at time period 50 (above, right).
And we can see from the sectoral balance charts that the government (blue) is actually in a position of budget surplus (left plot) because of this boost to tax revenues in the 50th period. Notice that this fiscal surplus is mirrored exactly by a private sector deficit (green, left chart) - the private sector are dissaving. And by implication, the saved wealth of the private sector is reduced (green, right plot).
With the change in $\alpha_Y$ the private sector have effectively changed their wealth target back to its original, lower level. They are comparatively more focussed on spending and less focussed on saving. When they initially started to spend 10% more of their incomes ($\alpha_Y = 0.8$ to $\alpha_Y = 0.9$) at time step 50, saved wealth was at its highest level, consistent with the previous, higher wealth target. This means that spending out of saved wealth was also at its highest rate. With the switch to higher spending out of income and therefore less saving out of income, these flows into and out of saved wealth became unbalanced, with more money being spent out of savings than being saved from income. This is what caused the boost to consumption spending, aggregate income and tax revenue. We could call this a boom.
As in the previous example above, this state of affairs does not continue indefinitely. As the population dissave, the size of saved wealth reduces and so the amount spent out of saved wealth reduces too. Eventually the amount spent out of saved wealth moves into balance with the amount saved out of income and equilibrium is once again achieved. This occurs when saved wealth (and government debt) is back at the original size.
## Summing up
This model illustrates a number of things. Firstly, we've achieved two technical milestones: (1) starting the model from specific initial conditions, in this case to by-pass the convergence to steady-state; and (2) introducing the idea of an "exogenous shock" to our model economy, by specifying time-varying behaviour in one of our parameters ($\alpha_Y$). Aside from that, we've learned a bit about how private sector behaviour affects the economy in aggregate and even the government sector specifically.
What we have seen is that when the private sector decides to save more, this affects aggregate spending and therefore aggregate incomes. This is an illustration of the famous Paradox of Thrift, but is more complicated than the example previously discussed. In the present case, the government accommodates the demand for saved wealth by running a deficit - essentially introducing money to the economy to replace that removed from circulation by saving. This enables spending to be supported and incomes to gradually recover to their previous levels.
It is worth noting how these examples saw the government's budget move into deficit and into surplus without the government altering its fiscal policy at all. Recall that in these simple models fiscal policy is set by determining the government spending level, $G$, and the income tax rate, $\theta$. At no point were these values altered. Yet the government's budget position shifted significantly. One conclusion from this is that the government's budget is not necessarily under the control of fiscal policy and is instead intrinsically wedded to the dynamics of the wider economy. Indeed, the sectoral balance charts show quite vividly how the financial stocks and flows of the government and private sector are mirror images of one another in accounting terms and cannot vary independently of one another.
It is worth pointing out a few caveats here too. The changes in behaviour modelled in this post were unrealistically sharp, with sudden, large changes in spending propensities within single time periods. This produced equally sharp responses in our model economy. In the real world it is perhaps more likely that aggregate behaviours would vary more smoothly. It would be trivial to adjust the examples shown here to describe more nuanced and realistic changes in behaviour (e.g. see here). The scenarios shown here were intentionally simple so as to elucidate the effects most clearly.
Also, this model doesn't include a number of things that might be relevant to the impacts of economic cycles: government social security spending (e.g. unemployment benefit), trading with external economies (i.e. imports and exports) and bank credit. However, the model does isolate, nicely, the fundamentals of the relationship between a domestic private sector and a currency-issuing government.
This post was written using the iPython Notebook
The source can be found on Github or viewed in the iPython Notebook Viewer.
The Python code used in this post is also available in script form on Github here.
|
{}
|
# Math Help - Ionization constants for acids and bases
1. ## Ionization constants for acids and bases
Is the $K_a$ value for hydrofluoric acid $3.5\times10^{-4}$ or $6.7\times10^{-4}$? I have two different textbooks for chemistry that have different values.
Ionization constants table from both textbooks:
http://img16.imageshack.us/img16/6718/32996808.jpg
http://img62.imageshack.us/img62/3728/60299474.jpg
2. Originally Posted by iamanoobatmath
Is the $K_a$ value for hydrofluoric acid $3.5\times10^{-4}$ or $6.7\times10^{-4}$? I have two different textbooks for chemistry that have different values.
Ionization constants table from both textbooks:
http://img16.imageshack.us/img16/6718/32996808.jpg
http://img62.imageshack.us/img62/3728/60299474.jpg
Your question is better posted here: http://www.chemistryhelpforum.com/chemistry-help/
|
{}
|
## Thinking Mathematically (6th Edition)
$21$
If we want to choose $k$ elements out of $n$, disregarding the order and not allowing repetition, we can do this in $_{n}C_k=\frac{n!}{k!(n-k)!}$ ways. Hence, because we have $7$ spots for $2$ people, $_{7}C_2=\frac{7!}{2!(7-2)!}=\frac{7\cdot6}{2}=21$.
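The same count can be checked in a couple of lines of Python, using both the factorial formula above and the standard library's `math.comb`:

```python
from math import comb, factorial

# The factorial formula used above; math.comb gives the same result.
def n_choose_k(n, k):
    return factorial(n) // (factorial(k) * factorial(n - k))

print(n_choose_k(7, 2), comb(7, 2))   # 21 21
```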
|
{}
|
Reviving messageboards.
Put anything you want to ask or know in the replies. I'll try to respond ASAP.
Keep the language in English.
Have fun!
1 year, 1 month ago
So as you have agreed on the partners topic,what topic of mathematics should we deal first?
- 1 year, 1 month ago
I'm currently busy with Real Analysis. Mostly JEE related topics.
- 1 year, 1 month ago
Can you explain a bit about it?
- 1 year, 1 month ago
Also I was thinking to find an infinite product for the Clausen Function.
- 1 year, 1 month ago
What is the Clausen function?
- 1 year, 1 month ago
It is defined as,
$-\int_0^\phi \ln\left|2\sin\frac{x}{2}\right|\,dx$
- 1 year, 1 month ago
$\phi$ is the input of the function.
- 1 year, 1 month ago
The graph looks very pretty. What are its applications?
- 1 year, 1 month ago
It's used in evaluating logarithm and polylogarithm function ,for simplifying hypergeometric series and many series also.
- 1 year, 1 month ago
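For anyone wanting to evaluate the function discussed above: it is usually written $\text{Cl}_2$, and alongside the integral definition it has the standard series representation $\text{Cl}_2(\phi) = \sum_{k\geq 1} \sin(k\phi)/k^2$, which makes a quick numerical sketch easy. The check value used below, $\text{Cl}_2(\pi/2)$ equal to Catalan's constant, is a well-known special value.

```python
import math

# Series representation of the Clausen function Cl2, equivalent to the
# integral definition above: Cl2(phi) = sum_{k>=1} sin(k*phi) / k^2.
def clausen2(phi, terms=200_000):
    return sum(math.sin(k * phi) / k ** 2 for k in range(1, terms + 1))

# Special value: Cl2(pi/2) is Catalan's constant, 0.9159655...
print(clausen2(math.pi / 2))
```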
Since it is a function with infinity zeroes it must a Weirstrass product but it seems it is discovered yet so I thought that in our free time,apart from real analysis,can least try and find one.Agree or disagree?
- 1 year, 1 month ago
There should be a not between discovered.
- 1 year, 1 month ago
You can edit comments on Brilliant. Tap the three dots on the right of "reply".
- 1 year, 1 month ago
It's just a fancy term for advanced stuff of Limits, differentiability and other parts of calculus in the real domain.
- 1 year, 1 month ago
Well I was thinking it would be more complicated.
- 1 year, 1 month ago
I tried to simplify it for your understanding. There are much more things in it. It's in engineering courses.
- 1 year, 1 month ago
Try my question in the calculus section level medium name Fairly Impossible#1,Mr.Adhiraj.
- 1 year, 1 month ago
I got it - the result is $\pi$ - which means $1$.
- 1 year, 1 month ago
How?
- 1 year, 1 month ago
Your Beta Function note - I remembered your proof, so all I did was $\frac{\pi}{\pi}$$=1$
- 1 year, 1 month ago
Nice!! Plus thank you for noticing.
- 1 year, 1 month ago
@Adhiraj Dutta could you post the solution of your own question seq and series (13)? And are you a JEE aspirant?
- 2 months, 1 week ago
I would have posted it if I knew the solution :p
And not really, I'm not really sure what I wanna do in the future.
- 2 months, 1 week ago
Can you look at my comment under your NIMO 2012 A1 problem, @Adhiraj Dutta? It's under Chew-Seong Cheong's solution.
- 1 year, 1 month ago
Delete the report in that question.
- 1 year, 1 month ago
Doing it - give me $2$ minutes.
- 1 year, 1 month ago
Can you look at my comment under your NIMO 2012 A1 problem, @Adhiraj Dutta?
- 1 year, 1 month ago
Do you understand the concept in that problem? When I was your age, I didn't know what GP or summation was.
- 1 year, 1 month ago
Summation is the sum of the terms in a sequence (a series). GP is geometric progression. @Adhiraj Dutta
- 1 year, 1 month ago
Just knowing what it is is not important. Applying it is more important. I'm asking you whether you understood how the manipulation was done, why it was done and what it led to.
- 1 year, 1 month ago
|
{}
|
# Study of jets produced in association with a W boson in pp collisions at $\sqrt{s}$ = 7 TeV with the ATLAS detector
4 Laboratoire de Physique Corpusculaire
LPC - Laboratoire de Physique Corpusculaire - Clermont-Ferrand
Abstract : We report a study of final states containing a W boson and hadronic jets, produced in proton-proton collisions at a center-of-mass energy of 7 TeV. The data were collected with the ATLAS detector at the CERN LHC and comprise the full 2010 data sample of 36 pb^-1. Cross sections are determined using both the electron and muon decay modes of the W boson and are presented as a function of inclusive jet multiplicity, N_jet, for up to five jets. At each multiplicity, cross sections are presented as a function of jet transverse momentum, the scalar sum of the transverse momenta of the charged lepton, missing transverse momentum, and all jets, the invariant mass spectra of jets, and the rapidity distributions of various combinations of leptons and final-state jets. The results, corrected for all detector effects and for all backgrounds such as diboson and top quark pair production, are compared with particle-level predictions from perturbative QCD. Leading-order multiparton event generators, normalized to the NNLO total cross section for inclusive W-boson production, describe the data reasonably well for all measured inclusive jet multiplicities. Next-to-leading-order calculations from MCFM, studied here for N_jet >= 2, and BlackHat-Sherpa, studied here for N_jet >= 4, are found to be mostly in good agreement with the data.
Document type :
Journal articles
http://hal.in2p3.fr/in2p3-00657197
Contributor : Emmanuelle Vernay
Submitted on : Friday, January 6, 2012 - 9:01:33 AM
Last modification on : Wednesday, August 4, 2021 - 3:40:02 PM
### Citation
G. Aad, S. Albrand, M.L. Andrieux, Q. Buat, B. Clement, et al.. Study of jets produced in association with a W boson in pp collisions at $\sqrt{s}$ = 7 TeV with the ATLAS detector. Physical Review D, American Physical Society, 2012, 85, pp.092002. ⟨10.1103/PhysRevD.85.092002⟩. ⟨in2p3-00657197⟩
|
{}
|
Radioactive decay or ‘radioactivity’ is a physical process whereby certain unstable nuclei break up or decay spontaneously. Radioactivity is an energetic process.
## The Valley of Stability
In Nuclear Physics parlance, the valley of stability characterises the stability of nuclides to radioactive decay, based on their binding energy.
The binding energy is the minimum energy required to disassemble an atomic nucleus into its constituent parts, i.e. protons and neutrons.
The valley of stability is a helpful visualisation tool for interpreting and understanding properties of nuclear decay processes, such as nuclear fission.
The shape of the valley reflects the profile of binding energy as a function of the numbers of neutrons and protons: the lowest part of the valley corresponds to the region where the most stable nuclei are found, while the highest parts of the slopes hold the most unstable, radioactive nuclides.
## Radioactivity and the Decay Chain of Uranium
Radioactivity often proceeds via a sequence of steps, known as a decay chain.
For example, Uranium-238 (${}^{238}U$) is the most common form of uranium found in Nature: over 99% of the uranium on Earth is present as this radioactive isotope.
Uranium-238 decays to Thorium-234 (${}^{234}Th$), which then decays to Protactinium-234 (${}^{234}Pa$):
${}^{238}_{92} U \rightarrow {}^{234}_{90} Th \rightarrow {}^{234}_{91} Pa \rightarrow {}^{234}_{92} U \rightarrow {}^{230}_{90} Th \rightarrow {}^{226}_{88} Ra \rightarrow {}^{222}_{86} Rn \rightarrow {}^{218}_{84} Po$
$\rightarrow {}^{214}_{82} Pb \rightarrow {}^{214}_{83} Bi \rightarrow {}^{214}_{84} Po \rightarrow {}^{210}_{82} Pb \rightarrow {}^{210}_{83} Bi \rightarrow {}^{210}_{84} Po \rightarrow {}^{206}_{82} Pb$
And so on, and so forth… eventually reaching the stable isotope of Lead (${}^{206}Pb$).
You will have no doubt heard about the term “half-life” when applied to radioactive atoms.
The half-life, $t_{1/2}$, is the time required for a quantity to reduce to half its initial value. It describes how quickly unstable atoms undergo radioactive decay, or how long relatively stable atoms survive.
However, a half-life usually describes the decay of discrete entities. In that case, it does not work to use the definition that states “half-life is the time required for exactly half of the entities to decay”.
For example, if there is just one radioactive atom, and its half-life is one second, there will not be “half of an atom” left after one second.
A half-life is instead defined in terms of probability: it is the time required for half of the entities to decay, *on average*.
In other words, the probability of a radioactive atom decaying within its half-life is 50%.
Nevertheless, when many identical decaying atoms are concerned, the law of large numbers suggests that it is a very good approximation to say half of the atoms remain after one half-life.
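Both views — the per-atom 50% probability and the large-population decay law $N(t) = N_0 (1/2)^{t/t_{1/2}}$ — can be sketched in a few lines of Python (the population sizes and half-life below are illustrative):

```python
import random

# Population view of the probabilistic definition: each atom
# independently survives one half-life with probability 1/2.
random.seed(42)
n_atoms = 100_000
survivors = sum(1 for _ in range(n_atoms) if random.random() < 0.5)
print(survivors / n_atoms)   # close to 0.5, by the law of large numbers

# Deterministic decay law for large populations: N(t) = N0 * (1/2)^(t/t_half)
def remaining(n0, t, t_half):
    return n0 * 0.5 ** (t / t_half)

print(remaining(1000, 3.0, 1.0))   # 125.0 left after three half-lives
```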
How dangerous radiation is really depends on the radiation type, and on how much of it is around.
The types of decay include α-decay (alpha decay), β-decay (beta minus decay) and γ-decay (gamma decay).
## α-decay
Alpha-decay is a type of radioactive decay in which an atomic nucleus emits an alpha particle.
Uranium-238, for example, undergoes spontaneous radioactive decay, eventually reaching Uranium-234 by way of Thorium-234.
In this type of radioactive decay the atomic nucleus emits an α-particle and swiftly transforms (or 'decays') into an atom with a mass number 4 less and an atomic number 2 less. This "mystery" particle is the nucleus of a helium atom, ${}^{4}_{2}He$, with mass number A = 4 and atomic number Z = 2.
The Uranium nucleus spontaneously decays into the lighter element Thorium, while releasing alpha-radiation.
${}^{238}_{92} U \rightarrow {}^{234}_{90} Th + \alpha + energy$
The emitted α-particle has 2 protons and 2 neutrons, enough to make it a brand new helium nucleus, which goes on to attract 2 electrons to form a fully-fledged helium atom.
${}^{238}_{92} U \rightarrow {}^{234}_{90} Th + {}^{4}_{2} He + energy$
## β–-decay
Beta-decay involves the emission of an electron e⁻ from the nucleus of an atom. The electron is created by the decay, just as a photon is created when an atom makes a transition from a higher energy level to a lower energy level.
For a Thorium-234 atom, the reaction is
${}^{234}_{90} Th \rightarrow {}^{234}_{91} Pa + e^- + \bar\nu_e + energy$
In the case of a Caesium-137 atom, we have the following reaction
${}^{137}_{55} Cs \rightarrow {}^{137}_{56} Ba + e^- + \bar\nu_e + energy$
## γ-decay
In contrast to the processes of α- and β-decay, this type of radioactive decay involves no change in the numbers of neutrons and protons. Gamma-decay occurs when a nucleus finds itself in an excited state. A quantum jump down to the ground state, with the same number of neutrons and protons, is accompanied by the emission of a photon, as with transitions in atoms.
For Proactinium-234, we have
${}^{234}_{91} Pa (excited \; state) \rightarrow {}^{234}_{91} Pa (ground \; state) + \gamma$
And Barium-137 gives
${}^{137}_{56} Ba (excited \; state) \rightarrow {}^{137}_{56} Ba (ground \; state) + \gamma$
# Where Does the Energy Come From?
All these types of radioactive decay liberate energy.
Nuclear decay reactions always balance in the following ways:
• #### Electric charge is always conserved.
The net charge on the products of a nuclear decay is the same as the net charge of the original nucleus.
• #### Mass number is conserved.
The total number of nucleons in the products is the same as that in the original nucleus.
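These two balance rules can be checked mechanically for the decays quoted earlier. In the Python sketch below the electron is written with $A = 0$, $Z = -1$; the antineutrino carries neither mass number nor charge and is omitted.

```python
# Check that mass number (A) and charge (Z) balance for the decays above.
alpha_decay = {"parent": (238, 92), "products": [(234, 90), (4, 2)]}   # U-238 -> Th-234 + He-4
beta_decay = {"parent": (234, 90), "products": [(234, 91), (0, -1)]}   # Th-234 -> Pa-234 + e-

def balances(decay):
    a_parent, z_parent = decay["parent"]
    a_total = sum(a for a, _ in decay["products"])
    z_total = sum(z for _, z in decay["products"])
    return a_total == a_parent and z_total == z_parent

print(balances(alpha_decay), balances(beta_decay))   # True True
```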
So where does the energy come from?
## The α-decay of Uranium-234
For example, the α-decay of U-234 liberates 4.86 MeV of kinetic energy carried away by the new particle:
${}^{234}_{92} U \rightarrow {}^{230}_{90} Th + {}^{4}_{2} He + 4.86 MeV$
However, accurate measurements of the masses of the α-particle and the Th-230 nucleus reveal that their sum is less than the mass of the original U-234 nucleus by about 8.66 × 10^-30 kg.
$mass \; {}^{234}_{92} U = mass \; {}^{230}_{90} Th + mass \; {}^{4}_{2} He + 8.66 \times 10^{-30} kg$
An infinitesimally small amount!
## Matter to Energy
This tiny mass was converted into energy during the radioactive decay.
Taking the speed of light to be c = 3.00 × 10^8 m/s, this lost mass is equivalent to an energy of
$E = mc^2 = 8.66 \times 10^{-30} kg \times (3.00 \times 10^8 ms^{-1})^2 = 7.79 \times 10^{-13} J$
Thus the α-decay of one atom of U-234 liberates 7.79 × 10^-13 joules of energy, mostly carried away as kinetic energy by the α-particle.
Since
$1 \; Joule = \frac{1eV}{1.60 \times 10^{-19}}$
That’s equivalent to
$E = 7.79 \times 10^{-13} J \times \frac{1eV}{1.60 \times 10^{-19}} = 4.86 MeV$
Note that the amount of energy released per atom is very small.
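The energy bookkeeping above is easy to reproduce in a few lines of Python:

```python
# Reproduce the E = mc^2 arithmetic for the alpha decay of U-234.
delta_m = 8.66e-30           # kg, mass lost in the decay
c = 3.00e8                   # m/s, speed of light
e_joules = delta_m * c ** 2  # E = m c^2
e_mev = e_joules / 1.60e-19 / 1e6   # 1 eV = 1.60e-19 J

print(e_joules)   # ~7.79e-13 J
print(e_mev)      # ~4.87 MeV, matching the quoted 4.86 MeV to rounding
```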
|
{}
|
# GLMM overfitting solutions
in GLMM faq page http://glmm.wikidot.com/faq there is a statement about overfitting:
"One alternative (suggested by Robert LaBudde) is to "fit the model with the random factor as a fixed effect, get the level coefficients in the sum to zero form, and then compute the standard deviation of the coefficients." This is appropriate for users who are (a) primarily interested in measuring variation (i.e. the random effects are not just nuisance parameters, and the variability [rather than the estimated values for each level] is of scientific interest)"
What exactly does this mean in practice? Does it mean using offset() for the random effects that are now fixed effects?
Also, is there a way to diagnose overfitting other than very small random effect variance components? For instance, how do you get the degrees of freedom (df) for a GLMM that are reported in papers?
## 1 Answer
Computing the standard deviation (or variance of the coefficients) essentially means getting the fixed-effect estimates for each level (analogous to the BLUPs/conditional modes in a mixed model) and computing their variance. You can do this by appropriately setting contrasts to contr.sum (sum-to-zero contrasts) (in this case you'll still have to reconstruct the value of one level, since the model will only fit n-1 coefficients in a model with an intercept), and/or appropriate use of -1 or +0 in the model to fit a no-intercept model where the coefficients are computed for every level. Or, as shown below, you can just use brute force via predict (or e.g. via the lsmeans package) to compute values for each level ...
Make up data with only two levels of the RE grouping variable:
dd <- expand.grid(f1=factor(1:3),f2=factor(1:2),rep=1:10)
library(lme4)
simList <-
suppressMessages(simulate(~f1+(1|f2),
newdata=dd,
family="gaussian",
newparams=list(theta=1,beta=c(0,1,2),sigma=1),
seed=101,n=500))
Fit f2 as a random effect and retrieve estimated variance:
sumfun1 <- function(y0) {
m <- lmer(y~f1+(1|f2),data=transform(dd,y=y0))
unlist(VarCorr(m))
}
library(plyr)
r1 <- laply(simList,sumfun1,.progress="text")
This actually works surprisingly well given the small number of levels:
mean(r1) ## 0.98
confint(lm(r1~1))
## 2.5 % 97.5 %
## (Intercept) 0.9248779 1.189029
But we often get zero estimates of the variance:
sum(r1==0) ## 60
(and a handful of very small values)
sum(log10(r1)<(-6)) ## 69
Now try it via fixed effects:
sumfun2 <- function(y0) {
lm1 <- lm(y~f1+f2,data=transform(dd,y=y0))
pframe <- data.frame(f1="1",f2=levels(dd$f2))
var(predict(lm1,newdata=pframe))
}
r2 <- laply(simList,sumfun2,.progress="text")
mean(r2) ## 1.01294
confint(lm(r2~1))
## 2.5 % 97.5 %
## (Intercept) 0.89081 1.135071
r1[log10(r1)< (-6)] <- 1e-6
p0 <- rbind(data.frame(m="f1=random",r=r1),
data.frame(m="f1=fixed",r=r2))
library(ggplot2); theme_set(theme_bw())
ggplot(p0,aes(x=log10(r),fill=m))+
geom_histogram(alpha=0.5,position="identity")+
geom_vline(xintercept=0,lty=2)
The fixed-effect approach actually works better than I expected ...
|
{}
|
## Approximate Schauder Frames for Banach Sequence Spaces
Series
Dissertation Defense
Time
Friday, April 16, 2021 - 4:00pm for 1.5 hours (actually 80 minutes)
Location
ONLINE
Speaker
Yam-Sung Cheng – Georgia Institute of Technology – ycheng61@gatech.edu
Organizer
The main topics of this thesis concern two types of approximate Schauder frames for the Banach sequence space $\ell_1$. The first main topic pertains to finite-unit norm tight frames (FUNTFs) for the finite-dimensional real sequence space $\ell_1^n$. We prove that for any $N \geq n$, FUNTFs of length $N$ exist for real $\ell_1^n$. To show the existence of FUNTFs, specific examples are constructed for various lengths. These constructions involve repetitions of frame elements. However, a different method of frame constructions allows us to prove the existence of FUNTFs for real $\ell_1^n$ of lengths $2n-1$ and $2n-2$ that do not have repeated elements.
The second main topic of this thesis pertains to normalized unconditional Schauder frames for the sequence space $\ell_1$. A Schauder frame provides a reconstruction formula for elements in the space, but need not be associated with a frame inequality. Our main theorem on this topic establishes a set of conditions under which an $\ell_1$-type of frame inequality is applicable towards unconditional Schauder frames. A primary motivation for choosing this set of hypotheses involves appropriate modifications of the Rademacher system, a version of which we prove to be an unconditional Schauder frame that does not satisfy an $\ell_1$-type of frame inequality.
|
{}
|
PERT APPROXIMATION FORMULAS (for mean and variance of activity durations)
The two statistics needed for the duration of each project activity are the mean time and the variance or standard deviation of activity duration. The mean value formula is a weighted average of the three given times, where the weight on the minimum and maximum times is one and the weight on the modal time is 4. Writing $a$, $m$, and $b$ for the minimum (optimistic), modal, and maximum (pessimistic) times, the mean duration of activity $j$ is given by

$t_j = \frac{a + 4m + b}{6}$
The variance formula is motivated by the fact that for the symmetric case, almost all of the probability distribution will be within 3 standard deviations of the mean, so that one-sixth of the range of the interval is a reasonable approximation for the standard deviation of the activity duration. Thus the variance is given by the equivalent formulas shown below:

$\sigma_j = \frac{b - a}{6}, \qquad \sigma_j^2 = \left(\frac{b - a}{6}\right)^2 = \frac{(b - a)^2}{36}$
It should be noted that these are empirical approximation formulas not derived from the beta distribution directly. There is no theoretical argument showing that the relative weight of 4 on the modal time is better than a relative weight of 3 or 5, and the absence of the modal time in the variance formula runs counter to the properties of the beta distribution and seems to be based more on the symmetric normal distribution. Presumably, some experimentation was done in the early days, and some empirical basis was found for these forms. At this point, we merely accept the formulas as the "traditional" way of doing PERT, and note that the Monte Carlo simulation approach does not make use of these formulas, but rather works directly from the beta distributions assumed for the activity durations.
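A minimal sketch of the two approximation formulas in Python, with $a$ = minimum (optimistic), $m$ = modal, and $b$ = maximum (pessimistic) times; the activity durations below are made-up values for illustration:

```python
# PERT approximations as stated above.
def pert_mean(a, m, b):
    return (a + 4 * m + b) / 6            # weights 1, 4, 1

def pert_variance(a, m, b):
    return ((b - a) / 6) ** 2             # std dev ~ one-sixth of the range

# Illustrative activity: 2 days best case, 5 most likely, 14 worst case
print(pert_mean(2, 5, 14))      # 6.0
print(pert_variance(2, 5, 14))  # 4.0
```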
|
{}
|
# How to remove the fake k-points from vasprun.xml file in the calculation of HSE06 band structure?
Is there any script that can remove fake k-points from a vasprun.xml file for HSE06 band structure calculation? Because removing it manually is a time-consuming task.
• How does one actually do this task? Maybe something can be written. Oct 31 '20 at 13:28
• You can first read the EIGENVAL file and then exclude all fake k-points.
– Jack
Oct 31 '20 at 14:01
• What are fake k-points?
– Camps
Oct 31 '20 at 15:10
• I am pretty sure "fake" k-points refer to the actual grid where the HSE06 band structure is calculated: zero-weight k-points on top of a normal k-point grid. Oct 31 '20 at 18:11
• By fake k-points I mean zero weight k-points. Oct 31 '20 at 22:54
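Picking up the suggestion in the comments — parse the k-point list and weights (from EIGENVAL or vasprun.xml) and then filter on weight — the filtering step itself is trivial once the data are in hand. The parsing is format-specific and omitted here; the k-points and weights below are made-up values:

```python
# Made-up example data: in practice these would be parsed from
# EIGENVAL or vasprun.xml (parsing code omitted here).
kpoints = [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0), (0.25, 0.25, 0.0)]
weights = [0.5, 0.5, 0.0]   # the last k-point has zero weight

# This keeps only the weighted (SCF-grid) points; flipping the
# comparison instead keeps the zero-weight band-structure path points.
real_kpoints = [kp for kp, w in zip(kpoints, weights) if w > 0.0]
print(len(real_kpoints))   # 2
```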
|
{}
|
# Math Help - Bessel's Inequality
1. ## Bessel's Inequality
I'm having trouble with this problem. I've tried rewriting things in about every way I know how to but I haven't arrived at an answer. I'd appreciate some help.
"Let $V$ be an inner product space and let $S=\{v_1, ..., v_n\}$ be an orthonormal subset of $V$. Prove that for any $x \in V$ we have $||x||^2 \ge \displaystyle\sum^n_{i=1} |\langle x,v_i \rangle |^2.$"
As a hint the book says to use the fact that $x \in V$ can be written uniquely as $w+w'$ where $w \in W=\mbox{span}(S)$ and $w' \in W^\perp$, the orthogonal complement of W, and use the fact that for $x$, $y$ orthogonal, $||x+y||^2 = ||x||^2 + ||y||^2$.
Thanks for any help.
2. ## Re: Bessel's Inequality
Originally Posted by AlexP
I'm having trouble with this problem. I've tried rewriting things in about every way I know how to but I haven't arrived at an answer. I'd appreciate some help.
"Let $V$ be an inner product space and let $S=\{v_1, ..., v_n\}$ be an orthonormal subset of $V$. Prove that for any $x \in V$ we have $||x||^2 \ge \displaystyle\sum^n_{i=1} |\langle x,v_i \rangle |^2.$"
As a hint the book says to use the fact that $x \in V$ can be written uniquely as $w+w'$ where $w \in W=\mbox{span}(S)$ and $w' \in W^\perp$, the orthogonal complement of W, and use the fact that for $x$, $y$ orthogonal, $||x+y||^2 = ||x||^2 + ||y||^2$.
Thanks for any help.
As the hint gives
$\mathbf{x}=\mathbf{x}_{v}+\mathbf{x}_{v\perp}$
Now
$||\mathbf{x}||^2=<\mathbf{x},\mathbf{x}>=<\mathbf{x}_{v}+\mathbf{x}_{v\perp},\mathbf{x}_{v}+\mathbf{x}_{v\perp}>=||\mathbf{x}_v||^2+||\mathbf{x}_{v\perp}||^2$
Remember that
$\mathbf{x}_{v}=\sum_{i=1}^{n}<\mathbf{x},\mathbf{v}_i>\mathbf{v}_i \implies ||\mathbf{x}_v||^2=\sum_{i=1}^{n}|<\mathbf{x},\mathbf{v}_i>|^2$
Just put these two facts together and remember that the modulus of a vector is always positive to finish.
3. ## Re: Bessel's Inequality
Wow. I was so close to completing it that it's painful that I didn't see it. Thanks.
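As a quick numerical sanity check of the inequality, here is a plain-Python sketch using an assumed orthonormal subset of $\mathbb{R}^5$ (the first two standard basis vectors) and a random vector $x$:

```python
import random

# Illustration of Bessel's inequality in R^5: S is an orthonormal
# subset (here the first two standard basis vectors), x is random.
random.seed(0)
x = [random.uniform(-1, 1) for _ in range(5)]
S = [[1, 0, 0, 0, 0],
     [0, 1, 0, 0, 0]]

dot = lambda u, v: sum(a * b for a, b in zip(u, v))
lhs = dot(x, x)                          # ||x||^2
rhs = sum(dot(x, v) ** 2 for v in S)     # sum of |<x, v_i>|^2

print(lhs >= rhs)   # True: the projection onto span(S) is never longer than x
```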
|
{}
|
### Methods Bites
Blog of the MZES Social Science Data Lab
### How to write your own R package and publish it on CRAN
R is a great resource for data management, statistics, analysis, and visualization — and it becomes better every day. This is in large part because of the active community that continuously creates and builds extensions for the R world. If you want to contribute to this community, writing a package can be one way. That is exactly what we intended with our package overviewR. While there exist many great resources for learning how to write a package in R, we found it difficult to find one all-encompassing guide that is also easily accessible for beginners. This tutorial seeks to close this gap: we will provide you with a step-by-step guide — seasoned with new and helpful packages that are also inspired by presentations at the recent virtual European R Users Meeting e-Rum 2020.
In the following sections, we will use a simplified version of one function (overview_tab) from our overviewR package as a minimal working example.
### Why you should write a package
Writing a package has two main advantages. First, it helps you to approach your problems in a functional way, e.g., by turning your everyday tasks into little functions and bundling them together. Second, it is easy to share your code and new functions with others and thereby contribute to the engaged and vivid R community.
When it comes to our package, we wanted to add an automated way to get an overview — hence the name — of the data you are working with and present it in a neat and accessible way. In particular, our main motivation came from the need to get an overview of the time and scope conditions (i.e., the units of observations and the time span that occur in the data) as this is a recurring issue both in academic articles and real-world situations. While there are ways to semi-automatically extract this information, we were missing an all-integrated function to do this. This is why we started working on overviewR.
To make your package easily accessible for everyone, there are two basic strategies. You can either publish your package on GitHub (which, in terms of transparency, is always a good idea) or you can submit it to the Comprehensive R Archive Network (CRAN). Both offer the ability for others to use your package but differ in several important aspects. Releasing on CRAN offers additional tests that ensure that your package is stable across multiple operating systems and is easily installable with the function utils::install.packages() in R. If you have your package only on GitHub, there is also a function that allows users to install it directly – devtools::install_github from the devtools package – but most users are more likely to prefer the framework and stability that they can expect from a package that is on CRAN.
We will walk you through both options and start with how to make your package accessible on GitHub before discussing what needs to be done and considered when submitting it to CRAN. To set up your package in RStudio, you need to load the following packages:
library(roxygen2) # In-Line Documentation for R
library(devtools) # Tools to Make Developing R Packages Easier
library(testthat) # Unit Testing for R
library(usethis) # Automate Package and Project Setup
When preparing this post, we came across this incredibly helpful cheat sheet that gives a detailed overview of what the devtools package can do to help you build your own package.
### Where to start
##### Idea
All good things have to start somewhere and this is most often when you realize that the world is lacking something that is necessary and where you believe others will also benefit from. R packages come in various shapes — from entire universes such as the tidyverse package family (if you look for some Stata like feedback when using the tidyverse and additions to these universes, tidylog is your best friend!), packages for specific statistical models and their validation (icr, MNLpred or oolong), to packages such as polite that offers a netiquette when scraping the web, snakecase that converts names to snake case format, rwhatsapp for scraping WhatsApp, or meme, a package that allows you to make customized memes. As you can tell, the world – and your fantasy – is your oyster.1
##### Name
Let us assume you have a great idea for a new package; the next step is to find and pick a proper name for it. As a general rule, package names can only contain letters and numbers and must start with a letter. The package available helps you — both with getting inspiration for a name and with checking whether your name is available. This is exactly what we did in our case:
library(available) # Check if the Title of a Package is Available,
# Appropriate and Interesting
# Check for potential names
available::suggest("Easily extract information about your sample")
## easilyr
suggest takes a string with words that can be a description of your package and suggests a name based on this string. As you can tell, we did not go with the suggestion but opted for overviewR instead. We then checked with available whether the name is still available and valid across different platforms. Since our package is already published, it is not available on GitHub, CRAN, or Bioconductor (hence, the “x”).
# Check whether it's available
available::available("overviewR", browse = FALSE)
── overviewR ─────────────────────────────────────────────────────────────────────────────────────────────────────────
Name valid: ✔
Available on CRAN: ✖
Available on Bioconductor: ✖
Available on GitHub: ✖
Abbreviations: http://www.abbreviations.com/overview
Wikipedia: https://en.wikipedia.org/wiki/overview
Wiktionary: https://en.wiktionary.org/wiki/overview
Urban Dictionary:
a general [summary] of a subject "the [treasurer] gave [a brief] overview of the financial consequences"
http://overview.urbanup.com/3904264
Sentiment:???
Let your creativity spark and learn from fantastic package names such as GeneTonic or charlatan.
### Set up your package with RStudio and GitHub
When setting up your package, there are various possible ways. Ours was to use RStudio and GitHub. RStudio already has a template that comes with the main documents that are necessary to build your package. To access the template, just click on File > New Project... > New Directory > R Package. Note that you need to check the box Create a git repository to set up a local git repository.
Hooray, you have started your own package! Let us take a look at the different files that were created.
• .gitignore and .Rbuildignore list files that should be ignored by git and when building the R package, respectively
• DESCRIPTION gives the user all the core information about your package – we will talk more about this below.
• man contains all manuals for your functions. You do not need to touch the .Rd files in there as they will be generated automatically once we populate our package with functions and run devtools::document().
• NAMESPACE will later contain information on exported and imported functions. This file will not be modified by hand but we will show you how to do it automatically. This might seem counter-intuitive in the workflow, but we need to delete the NAMESPACE file here. We do this because we want NAMESPACE to be generated and to be accessible with the devtools universe. We will generate it automatically again later using the command devtools::document().
• R contains all the functions that you create. We will address this folder and its files in the next step.
• The overviewR.Rproj file is the usual R project file that you can read more about here.
However, your package is not yet linked with GitHub. We will do this in the next steps:
1. Log in to your GitHub account
2. Create a new repository with “+New Repository”. We named it “overviewR” (like our package). You can set it to private or public – whatever is best for you.
3. Do not check the box “Initialize this Repository with a README”
4. Once you created the repository, execute the following commands in your RStudio terminal:
git remote add origin https://github.com/YOUR_USERNAME/REPOSITORY_NAME.git
git add .
git commit -m "initial commit"
git push -u origin master
If you now refresh your GitHub repository, you will see that your R package is perfectly synchronized with GitHub.
GitHub will now also ask you whether you want to create a README – just click on it and you are ready to go. To get the README in your project, pull it from GitHub either using the Pull button in the Git tab in RStudio or execute the following command line in the RStudio terminal:
git pull
### Fill your package with life
We will showcase a typical workflow for creating a package using one example function (overview_tab) from our overviewR package. In practice, you can add as many functions as you want to your package.
The folder R contains all your functions, and each function is saved in its own R file where the function name and the file name are the same. As you can see, the template comes with the preset function hello that returns "Hello, world!" when executed. (The file hello.R showcases the function and can later be deleted.) To include our own function, we open a new R file and insert a basic version of it.
Since we program our function using the tidyverse, we have to take care of tidy evaluation and use enquo() for all inputs that we later modify. Going into detail on how to program in the tidyverse and how and when we need to use enquo() is beyond the scope of this blog post. For a detailed overview, take a look at this post.
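To illustrate the idea, here is a hypothetical, stripped-down sketch (not the actual overviewR implementation) of how a function can capture its arguments with enquo() and unquote them with !! inside dplyr verbs:

```r
library(dplyr) # assumes dplyr >= 1.0.0, as required in our DESCRIPTION

# Hypothetical sketch, not the real overview_tab(): capture the bare column
# names the user passes and use them inside dplyr verbs
overview_sketch <- function(dat, id, time) {
  id <- dplyr::enquo(id)     # defuse the user-supplied column name
  time <- dplyr::enquo(time) # same for the time variable
  dat %>%
    dplyr::distinct(!!id, !!time) %>%       # unquote with !! where the column is used
    dplyr::count(!!id, name = "n_periods")  # one row per unit with its number of periods
}

# overview_sketch(dat = toydata, id = ccode, time = year)
```

The user can thus pass bare column names (ccode, year) instead of strings, just like in any dplyr verb.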
In the preamble of this file, we can add information on the function. An example is shown below:
#' @title overview_tab
#'
#' @description Provides an overview table for the time and scope conditions of
#' a data set
#'
#' @param dat A data set object
#' @param id Scope (e.g., country codes or individual IDs)
#' @param time Time (e.g., time periods are given by years, months, ...)
#'
#' @return A data frame object that contains a summary of a sample that
#' can later be converted to a TeX output using \code{overview_print}
#' @examples
#' data(toydata)
#' output_table <- overview_tab(dat = toydata, id = ccode, time = year)
#' @export
#' @importFrom dplyr "%>%"
• @title takes the name of your function
• @description contains a short description
• @param takes all your arguments that are in the input of the function with a short description. Our function has three arguments (dat (the data set), id (the scope), and time (the time period)).
• @return gives the user information about the output of your function
• @examples provides a minimal working example so the user sees what needs to be included. You can wrap \dontrun{} around examples that should not be executed (e.g., if additional software or an API key is missing). If this is not the case, wrapping them is not recommended as it will cause a warning for the user. If your example runs longer than 5 seconds, you can wrap it in \donttest{}.
• @export – if this is a new package, it is always recommended to export your functions. It automatically adds these functions to the NAMESPACE file.
• @importFrom dplyr "%>%" pre-defines required functions for your function. It automatically adds these functions to the NAMESPACE file.
Once you have included the preamble, you can now add your function below.
##### Write a help file
When you execute devtools::document(), R automatically generates the respective help file in man as well as the new NAMESPACE file. If you click on it, you see that it is read-only and all edits should be done in the main R function file in R/.
Now you can call the function with ? overview_tab and get the nice package help that you know from other functions as well.
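Put together, the two steps look like this (run from the package's root directory with devtools installed):

```r
devtools::document() # (re)generates the .Rd files in man/ and NAMESPACE via roxygen2
?overview_tab        # after loading the package, opens the generated help page
```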
##### Write DESCRIPTION
The DESCRIPTION is pre-generated by roxygen2 and contains all the information about your package that is necessary. We will walk you through the most essential parts:
Type: Package
Package: overviewR
Version: 0.0.2
Authors@R: c(
person("Cosima", "Meyer", email = "XX@XX.com", role = c("cre","aut")),
person("Dennis", "Hammerschmidt", email = "XX@XX.com", role = "aut"))
Description: Makes it easy to display descriptive information on
a data set. Getting an easy overview of a data set by displaying and
visualizing sample information in different tables (e.g., time and
scope conditions). The package also provides publishable TeX code to
present the sample information.
URL: https://github.com/cosimameyer/overviewR
BugReports: https://github.com/cosimameyer/overviewR/issues
Depends:
R (>= 3.5.0)
Imports:
dplyr (>= 1.0.0)
Suggests:
covr,
knitr,
rmarkdown,
spelling,
testthat
VignetteBuilder:
knitr
Encoding: UTF-8
Language: en-US
LazyData: true
RoxygenNote: 7.1.0
• Type: Package should remain unchanged
• Package has your package’s name
• Title is a really short description of your package
• Version has the version number (you will most likely start with 0.0.1; if you want to know more about version numbering, here is an excellent reference).
• Authors@R contains the authors’ names and their roles. [cre] stands for the creator and this person is also the maintainer while [aut] is the author. There are also options to indicate a contributor ([ctb]) or translator ([trl]). If you need more, here’s a great overview or you can simply check for additional roles using ? person. At this point, you also need to give your e-mail address. If you want to submit your package to CRAN (but also in any other case), make sure that your e-mail address is correct and accessible!
• Description provides a longer description of what your package does. If you want to indent, use four blank spaces.
• License shows others what they can do with your package. This is an important part and probably a tough decision. Here and here or here are excellent overviews of different licenses and a starting guide on how to pick the best one for you.
• URL indicates where the package is currently hosted
• BugReports shows where users should address their reports (if linked with GitHub, this automatically refers the user to the issues section)
• Depends shows the R version your package works with (you always need to indicate a version number!)
• Imports shows the packages that are required to run your package (here you always need to indicate a version number so that potential conflicts with previous versions can be avoided!)
• Suggests lists all the packages that you suggest but that are not necessarily required for the functionality of your package
• LazyData: true ensures that internal data sets are automatically loaded when loading the package
Inspired by this excellent overview, we decided to include an internal data set to test the functionality of our package easily. How you do this is straightforward: You have a pre-generated data set at hand (or generate it yourself), and save it in data/.
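A sketch of how this can look, assuming an artificial data set similar to ours (our real toydata also contains a month column and different values; usethis::use_data() stores the object as data/toydata.rda):

```r
# Generate a small artificial toy data set (sketch)
set.seed(123)
toydata <- data.frame(
  ccode      = rep(c("AGO", "BEN", "FRA", "RWA", "GBR"), each = 10), # ISO3 codes
  year       = rep(1990:1999, times = 5),                            # time periods
  gdp        = round(runif(50, min = 100, max = 1000), 2),           # fake GDP
  population = round(runif(50, min = 1e5, max = 1e7))                # fake population
)

# Save it in data/ so that data(toydata) works for package users
usethis::use_data(toydata, overwrite = TRUE)
```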
As you know, every good data set, even if it is only a toy data set, comes with a description. For your package, just set up an .R file with the name of your data set (toydata.R in our case) and save it in the R folder. The file should contain the following information:
• Starts with a title for the data set
• Then you have some lines for a short concise description
• @docType defines the type of document (data)
• @usage describes how the data set should be loaded.
• @format gives information on the object’s format
• \describe{} then allows you to give the user a specific description of your variables included in the data set
• @references is essential if you do not use artificially generated data to indicate the source
• @keywords allows you to indicate keywords (we used dataset here)
• @examples finally gives you some room to showcase your data
What we included in our toydata.R file (you can simply copy and paste the code and adjust it to your needs)
#' Cross-sectional data for countries
#'
#' Small, artificially generated toy data set that comes in a cross-sectional
#' format where the unit of analysis is either country-year or
#' country-year-month. It provides artificial information for five countries
#' (Angola, Benin, France, Rwanda, and the UK) for a time span from 1990 to 1999 to
#' illustrate the use of the package.
#'
#' @docType data
#'
#' @usage data(toydata)
#'
#' @format An object of class \code{"data.frame"}
#' \describe{
#' \item{ccode}{ISO3 country code (as character) for the countries in the
#' sample (Angola, Benin, France, Rwanda, and UK)}
#' \item{year}{A value between 1990 and 1999}
#' \item{month}{An abbreviation (MMM) for month (character)}
#' \item{gdp}{A fake value for GDP (randomly generated)}
#' \item{population}{A fake value for population (randomly generated)}
#' }
#' @references This data set was artificially created for the overviewR package.
#' @keywords datasets
#' @examples
#'
#' data(toydata)
#'
"toydata"
##### Write the NEWS.md
You can automatically generate a NEWS.md file in R with usethis::use_news_md(). Our news file looks like this:
# overviewR 0.0.2
- Bug fixes in overview_tab that affected overview_crosstab
---
# overviewR 0.0.1
The newest release always comes first and --- dividers separate the versions. To inform users, use bullet points to describe changes that came with the new version. As a plus, if you plan to generate a website with pkgdown (we will explain later how you can do this), the news section automatically integrates this file.
##### Write the vignette
A vignette can come in handy and allows you to present the functions of your package in a more elaborate way that is easily accessible for the user. Similar to the news section, your vignette will also be automatically integrated into your website if you use pkgdown. You can think of a vignette as something like a blog post that outlines specific use cases or more detailed descriptions of your package.
Here, usethis offers an excellent service and allows you to create your first vignette automatically with the command usethis::use_vignette("NAME-OF-VIGNETTE"). This command does three different things:
1. It generates your vignettes/ folder,
2. Adds essential specifications to the DESCRIPTION, and
3. It also stores a draft vignette “NAME-OF-VIGNETTE.Rmd” in the vignettes folder that you can now access and edit. This draft already contains a nice template with all the information and prerequisites you need to generate a good-looking vignette. You can adjust it as needed to show what your package does and how it is best used.
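For orientation, the draft that usethis creates starts with a YAML header along these lines (details may differ across usethis versions):

```yaml
---
title: "Getting started with overviewR"
output: rmarkdown::html_vignette
vignette: >
  %\VignetteIndexEntry{Getting started with overviewR}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---
```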
The following steps are either recommended or required when submitting your package to CRAN. We, however, recommend following all of them. We summarized what we believe is helpful when testing your package.
##### Write tests
Writing tests felt like the most difficult part of building the package. Essentially, you have to come up with tests for every part of your function to make sure that everything – not only the final output of your function – runs smoothly. A piece of good advice that we read multiple times is: whenever you encounter a bug, write a test for it to check for future occurrences. To set up the test environment, we used a combination of the great testthat package and covr, which lets you visually see how good your test coverage is and which parts of the package still need to be tested.
1. Generate the test environment with usethis::use_testthat(). This generates a tests/ folder with another folder called testthat/ that later contains your tests, as well as an R file testthat.R. We will only add tests to the tests/testthat/ folder and do not touch the R file.
2. Add test(s) as .R files. The filename does not matter, just choose whatever you find reasonable.
3. Run the tests using devtools::test(). To get an estimation of your test coverage, you can use devtools::test_coverage().
We attach the code that we used to test our overview_tab() function below and hope this sparks some inspiration when testing your functions.
Code for function testing
context("check-output") # Our file is called "test-check_output.R"
# Test whether the output is a data frame
test_that("overview_tab() returns a data frame", {
output_table <- overview_tab(dat = toydata, id = ccode, time = year)
expect_is(output_table, "data.frame")
})
# In reality, our function is more complex and aggregates your input if you have duplicates in your id-time units -- this is why the following two tests were essential for us
## Test whether the output contains the right number of rows
test_that("overview_tab() returns a dataframe with correct number of rows", {
output_table <- overview_tab(dat = toydata, id = ccode, time = year)
  expect_equal(nrow(output_table), length(unique(toydata$ccode)))
})

## Test whether the function works on a data frame that has no duplicates in id-time
test_that("overview_tab() works on a dataframe that is already in the correct format", {
  df_com <- data.frame(
    # Countries
    ccode = c(
      rep("RWA", 4),
      rep("AGO", 8),
      rep("BEN", 2),
      rep("GBR", 5),
      rep("FRA", 3)
    ),
    # Time frame
    year = c(
      seq(1990, 1995),
      seq(1990, 1992),
      seq(1995, 1999),
      seq(1991, 1999, by = 2),
      seq(1993, 1999, by = 3)
    )
  )
  output_table <- overview_tab(dat = df_com, id = ccode, time = year)
  expect_equal(nrow(output_table), 5)
})
##### codecov
Once you are done with your tests, you can also link your results automatically with codecov.io to your GitHub repository. This allows codecov to automatically check your tests after each push to the repository. As a bonus, you will also get a nice badge that can then be included in your GitHub README to show the test coverage of your package. To link codecov and GitHub, simply follow these steps:
1. Log in on codecov.io with your GitHub account
2. Give codecov access to the repository with your package
3. This will prompt a page where you can copy your token from
4. Now go back to your RStudio console and execute:
library(covr) # Test Coverage for Packages
covr::codecov(token = "INCLUDE_YOUR_CODECOV_TOKEN_HERE")
5. This will then link your GitHub repository with codecov and generate the badge.
##### Check whether it works on various operating systems with devtools and rhub
To check whether our package works on various operating systems, we relied on a combination of the rhub and devtools packages. We used the following lines of code sequentially to check our package:
# The following function runs a local R CMD check
devtools::check()
This command can take some time and produces an output in the console where you get specific feedback on potential errors, warnings, or notes.
# Check for CRAN specific requirements
rhub::check_for_cran()
This command checks for standard requirements as specified by CRAN and, if saved in an object, lets you generate your cran-comments.md file based on its output. We will go into further detail about this in the next section. If you use rhub for the first time, you need to validate your e-mail address with rhub::validate_email(). You can then execute the command. Once the command has run, you will receive three different e-mails that give you detailed feedback on how well the tests performed on three different operating systems. At the time of writing, this function checked our package on Windows Server 2008 R2 SP1, R-devel, 32/64 bit; Ubuntu Linux 16.04 LTS, R-release, GCC; and Fedora Linux, R-devel, clang, gfortran. From our experience, the checks on Windows were extremely fast but we had to wait a bit until we got the results for Ubuntu and Fedora. We then also checked the package on the development version of R as suggested with the following function:
# Check for win-builder
devtools::check_win_devel()
##### Generate cran-comments.md file
If you plan to submit your package to CRAN, you should save your test results in a cran-comments.md file. rhub and usethis allow us to create this file almost automatically using the following lines of code:
# Check for CRAN specific requirements using rhub and save it in the results object
results <- rhub::check_for_cran()
# Get the summary of your results
results$cran_summary()
We received the following output when running the results\$cran_summary() command.
For a CRAN submission we recommend that you fix all NOTEs, WARNINGs and ERRORs.
## Test environments
- R-hub windows-x86_64-devel (r-devel)
- R-hub ubuntu-gcc-release (r-release)
- R-hub fedora-clang-devel (r-devel)
## R CMD check results
> On windows-x86_64-devel (r-devel), ubuntu-gcc-release (r-release), fedora-clang-devel (r-devel)
checking CRAN incoming feasibility ... NOTE
New submission
Maintainer: 'Cosima Meyer <XX@XX.com>'
0 errors ✓ | 0 warnings ✓ | 1 note x
Your package must not cause any errors or warnings when submitting to CRAN, and even notes need to be well explained. In our case, we receive one note saying that this is a new submission. This note occurs every time you submit a new package and can briefly be explained in the cran-comments.md file when submitting your package to CRAN.
We then generated our cran-comments.md file with the following command and copy-pasted this output with minor adjustments.
# Generate your cran-comments.md, then you copy-paste the output from the function above
usethis::use_cran_comments()
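Based on the summary above, a minimal cran-comments.md for a first submission can look like this (the exact wording is flexible; the final line explains the "New submission" note):

```markdown
## Test environments
- R-hub windows-x86_64-devel (r-devel)
- R-hub ubuntu-gcc-release (r-release)
- R-hub fedora-clang-devel (r-devel)
- win-builder (r-devel)

## R CMD check results
0 errors | 0 warnings | 1 note

* This is a new release.
```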
##### Continuous integration with GitHub Actions
This section was previously called “Continuous integration with Travis CI”. Due to the recent changes in Travis CI’s pricing policy, we moved to GitHub Actions instead. If you want a more detailed overview, Dean Attali wrote a fantastic post that describes the background better than we could do. If you have previously used Travis CI, the post also walks you through the simple steps needed to migrate to GitHub Actions. We assume here that you have not set up a continuous integration yet.
Continuous integration (CI) is incredibly helpful to ensure the smooth working of your package every time you update even small parts. Using the command usethis::use_github_action_check_standard() you can easily set up GitHub Actions within your GitHub repository. GitHub then checks your package after each push to your repository on Mac, Ubuntu (two versions), and Windows. Explaining CI in further detail would require another blog post or book itself. Luckily, Julia Silge wrote an excellent overview that can be found here. In essence, CI checks after every commit and push to your repository on GitHub that the entire code/package works and sends you an e-mail if any errors occur.
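For reference, the workflow file that usethis stores under .github/workflows/ looks roughly like the following abridged sketch. The exact content depends on your usethis version, so do not copy this verbatim; let usethis generate it for you:

```yaml
on: [push, pull_request]

name: R-CMD-check

jobs:
  R-CMD-check:
    runs-on: ${{ matrix.config.os }}
    strategy:
      fail-fast: false
      matrix:
        config:
          - {os: windows-latest, r: 'release'}
          - {os: macOS-latest,   r: 'release'}
          - {os: ubuntu-20.04,   r: 'release'}
    steps:
      - uses: actions/checkout@v2
      - uses: r-lib/actions/setup-r@v1
        with:
          r-version: ${{ matrix.config.r }}
      - name: Check
        run: Rscript -e 'rcmdcheck::rcmdcheck(error_on = "warning")'
```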
##### Checking for good practice I: goodpractice
The package goodpractice is incredibly helpful and provides all the information that you need when it comes to polishing your package concerning syntax, package structure, code complexity, formatting, and much more. And, the best thing: it provides easily understandable feedback that points you to the exact lines of code where changes are recommended.
library(goodpractice)
goodpractice::gp()
As a general tip for improving the style of your code, the package styler provides an easy solution by formatting your entire source code in adherence to the tidyverse style (similar to RStudio’s built-in hotkey combination with Cmd + Shift + A (Mac) or Ctrl + Shift + A (Windows)).
While all these packages refer to the tidyverse style guide, you are generally free to choose which (programming) style you like best.
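A quick sketch of using styler in practice (note that style_pkg() overwrites your files in place, so commit your changes first):

```r
library(styler) # Non-Invasive Pretty Printing of R Code

styler::style_pkg()                    # restyles every .R file in the package
styler::style_file("R/overview_tab.R") # or restyle a single file only
```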
##### Checking for good practice II: inteRgrate
A package that was presented at e-Rum 2020 and is still experimental but incredibly helpful is the inteRgrate package. The underlying idea behind this package is that it tests more strictly than other packages, with clear standards. By this, it aims to ensure that you are definitely on the safe side when submitting your package to CRAN. A good starting point is the list of commands under “Functions”, of which we particularly highlight the following:
• check_pkg() installs package dependencies, builds, and installs the package, before running a package check (this check is rather strict: by default, any note or warning raises an error)
• check_lintr() runs lintr on the package, README, and the vignette. lintr checks whether your code adheres to certain standards and that you avoid syntax errors and semantic issues.
• check_tidy_description() makes sure that your DESCRIPTION file is tidy. If not, you can use usethis::use_tidy_description() to follow the tidyverse conventions for formatting.
• check_r_filenames() checks that all file extensions are .R and all names are lower case.
• check_gitignore() checks whether .gitignore contains standard files.
• check_version() ensures that you update your package version (might be good to run as the last step)
### Submit to CRAN
Submitting a package to CRAN is substantially more work than making it available on GitHub. It, however, forces you to test your package on various operating systems and ensures that it is stable across all of them. In the end, your package becomes more user-friendly and accessible for a larger share of users. After going through the entire process, we believe that it is worth the effort for these reasons alone. When testing our package for CRAN, we mainly followed this blog post and collected the essential steps for you below, extending them with what we think is also helpful for getting published on CRAN. The column Needed is based on what is asked for when running devtools::release(). Recommended includes additional neat checks that we found helpful.
| Checks | Needed | Recommended |
|---|:---:|:---:|
| Update R, RStudio, and all dependent R packages (R and RStudio have to be updated manually; devtools::install_deps() updates the dependencies for you) | | x |
| Write tests and check if your own tests work (devtools::test() and devtools::test_coverage() to see how much of your package is covered by your tests) | | x |
| Check the examples in your manuals (devtools::run_examples(); unless you set your examples to \dontrun{} or \donttest{}) | | x |
| Local R CMD check (devtools::check()) | x | x |
| Use devtools and rhub to check for CRAN-specific requirements (rhub::check_for_cran() and/or devtools::check_rhub() – remember, you can store the output of these functions and generate your cran-comments.md automatically) | x | x |
| Check win-builder (devtools::check_win_devel()) | x | x |
| Update your manuals (devtools::document()) | x | x |
| Update your NEWS file | x | x |
| Update DESCRIPTION (e.g., version number) | x | x |
| Spell check (devtools::spell_check()) | x | x |
| Run the goodpractice check (goodpractice::gp()) | | x |
| Check package dependencies (inteRgrate::check_pkg()) | | x |
| Check if the code adheres to standards (inteRgrate::check_lintr()) | | x |
| Check if your description is tidy (inteRgrate::check_tidy_description() – if your description is not tidy, it will produce an error and ask you to run usethis::use_tidy_description()) | | x |
| Check if file names are correct (inteRgrate::check_r_filenames()) | | x |
| Check if .gitignore contains standard files (inteRgrate::check_gitignore()) | | x |
| Run devtools::check() one last time | | x |
CRAN also offers a detailed policy for package submissions as well as a checklist for submitting your package. We definitely recommend checking both in addition to our list above.
As already mentioned above, it is of vital importance that your package does not cause any errors or warnings when submitting to CRAN. Even notes need to be well explained. If you submit a new package, there is not much you can do about the new-submission note; it will always appear.
The function devtools::release() allows you to easily submit your package to CRAN – it works like a charm. Once you feel ready, make sure to push your changes to GitHub and then just type the command in your console. It walks you through a couple of yes-no questions before the submission. The following questions are those asked by devtools::release() at the time of writing this post.
• Have you checked for spelling errors (with spell_check())?
• Have you run R CMD check locally?
• Were devtool’s checks successful?
• Have you checked on R-hub (with check_rhub())?
• Have you checked on win-builder (with check_win_devel())?
• Have you updated NEWS.md file?
• Have you updated DESCRIPTION?
• Have you updated cran-comments.md?
Once submitted, you will receive an e-mail that requires you to confirm your submission – and then you will have to wait. If it is a new package, CRAN also runs a couple of additional tests and it might take longer than submitting an updated version of your package.
For us, it took about four days until we heard back from CRAN. We read that CRAN is curated by volunteers who handle an incredible number of submissions per day. Our experience was extremely positive and supportive, which we truly enjoyed.
Once CRAN gets back to you, they will tell you about potential problems that you have to address before resubmitting your package – or you are lucky and your package gets accepted immediately.
Before resubmitting your package, go through all the steps presented in “Submit to CRAN” once again to make sure that your updated version still adheres to the standards of CRAN.
Common things that we have learned (and that others might find helpful) while going through the CRAN submission process are:
1. Do not modify (save or delete) outputs in the user’s home filespace. Use tempdir() and/or tempfile() instead when running examples/vignettes/tests.
2. Make sure that the user can set the directory and the file name when saving outputs. Simply add a file/path argument to your function(s).
3. Write package names, software names, and API names in single quotes in your DESCRIPTION. If you use for example LaTeX in your DESCRIPTION, put it in single quotes. This issue is apparently not discovered by goodpractice::gp() or one of the inteRgrate functions.
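As an illustration of points 1 and 2, here is a hypothetical helper (save_overview() is not part of overviewR) that writes to tempdir() by default but exposes a path argument so the user stays in control of the target location:

```r
# Hypothetical example function: writes output to the R session's temporary
# directory by default and lets the user choose file name and directory
save_overview <- function(x, path = file.path(tempdir(), "table.tex")) {
  writeLines(x, path)
  invisible(path) # return the location invisibly
}

out <- save_overview("% some TeX output")
file.exists(out) # TRUE; the file lives below tempdir(), not in the user's home
```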
Once your package has been accepted by CRAN, it is recommended to wait another 48 hours before celebrating because CRAN will still run some background checks. Afterwards, go to your GitHub repository, click on “Create a new release”, enter the version number of your package (vX.X.X) and copy-paste the release notes from your NEWS file into the release description.
When submitting your package to CRAN via the devtools::release() function, a CRAN-RELEASE file was generated to remind you to tag your release on GitHub. This file can now safely be deleted.
This section on add-ons can be considered a bonus. It is not essential to guarantee that your package works smoothly or gets published on CRAN — but the extensions make your package look nicer, more professional, and might help to get discovered by other users.
##### Create your own hexagon sticker
Hex(agon) stickers are the small hexagon-shaped icons that a large number of packages have and that people seem to love. So why not come up with your own sticker for your very own package? The package hexSticker makes it incredibly easy to customize and build a beautiful sticker. To get a sticker for your package, just add the following arguments to the function hexSticker::sticker(): package (the name of your package), subplot (an image – we have drawn our lamp ourselves, saved it as a .png and included it in our sticker without any problems), and h_fill (if you want to change the background color). You can then adjust the sticker by defining the position of the text, the subplot, the font size, or even add a spotlight as we did. This works with virtually any text and image combination – also with the Methods Bites logo.
Code for the overviewR sticker
library(hexSticker) # Create Hexagon Sticker in R
library(showtext) # Using Fonts More Easily in R Graphs
sticker(
  # Subplot (image)
  subplot = "logo-image.png", # Image name
  s_y = 1,                    # Position of the subplot (y)
  s_x = 1.05,                 # Position of the subplot (x)
  s_width = 1.15,             # Width of the subplot
  s_height = 0.01,            # Height of the subplot
  # Font
  package = "overviewR",      # Package name (will be printed on the sticker)
  p_size = 6,                 # Font size of the text
  p_y = 0.8,                  # Position of the font (y)
  p_x = 0.75,                 # Position of the font (x)
  p_family = "incon",         # Defines font
  # Spotlight
  spotlight = TRUE,           # Enables spotlight
  l_y = 0.8,                  # Position of spotlight (y)
  l_x = 0.7,                  # Position of spotlight (x)
  # Sticker colors
  h_fill = "#5d8aa6",         # Color for background
  h_color = "#2A5773",        # Color for border
  # Resolution
  dpi = 1200,                 # Sets DPI
  # Save
  filename = "logo.png"       # Sets file name and location where to store the sticker
)
Figures such as your logo are usually stored in man/figures/.
##### Badges
Badges in your GitHub repository are a bit like stickers but they also serve an informative purpose. We included different badges in the README of our GitHub repository, such as an R CMD check status, a codecov status, and a repo status. We then also added a badge that signals that the package is ready to use and another one that tells the user that the package was built with R (… and love!).
If you want to learn more about available badges for your package, here and here are nice overviews. You can also use the package badgecreatr to check for badges and to include them. The R CMD check badge, for instance, is added automatically when running the usethis::use_github_action_check_standard() command.
##### Create a PDF manual
If you want to create your own PDF manual for your package, devtools::build_manual() does this for you.
A preview to our manual
##### Create your own website with pkgdown
As the last part, to advertise your package and to provide a more detailed insight into how your package works, you can set up a whole, stand-alone website for it! The pkgdown package makes this as easy as writing one line of code — literally! All you have to do is to install and load pkgdown and then — provided that you have taken all the steps above and have an R-package structure in your GitHub repository — run pkgdown::build_site(). This automatically renders your package into a website that follows the structure of your package, with a landing page based on the README file, a “get started” part based on your vignette, sections for function references based on the content of your man/ folder, and a dedicated page for your NEWS.md. It even includes a sidebar with links to the GitHub repository, the name(s) of the author(s), and, of course, all your badges. Amazing, right?
Naturally, pkgdown allows for further modifications of your websites’ appearance such as different themes (based on bootswatch themes), modified landing pages, different outlines of your navigation bar, etc. This post provides a good overview of things that you can do in addition to using the default website builder from pkgdown.
By default, your website is hosted on GitHub pages with the following URL: https://GITHUB_USERNAME.github.io/PACKAGENAME. To ensure that every time you update your package, the website gets updated as well, just run the following command from the usethis package – it sets up a GitHub Actions integration for your website.
usethis::use_github_action("pkgdown")
This makes sure that every time you push your updates to GitHub, GitHub Actions will update your website automatically. For more detailed information on the deployment and the continuous integration process of your website, see here.
A preview to our pkgdown website
##### Write you own CheatSheet
Once your package grows, a CheatSheet can help users to keep track of how powerful your package is. RStudio offers templates (in keynote and PowerPoint) that are user-friendly and highly customizable. Here is an example of our CheatSheet to spark some inspiration.
# A question regarding Y=B+S by a nuclear physics toddler
1. May 7, 2014
### rpndixit5
If Q=e[I+0.5Y] and Y=B+S. What is the Q/e and S value for ρ and k mesons, Ω and Δ baryons?
I means the third component of isospin, and Y, B, S, Q, e have their usual meanings.
This is the question. I don't even know what these symbol means. Can someone please explain the symbols and solve this problem.
2. May 7, 2014
3. May 7, 2014
### rpndixit5
It was in my question paper that I had to solve.
4. May 7, 2014
### Simon Bridge
Is this a "question paper" that forms part of an education program of some kind which you are a student of?
... and you don't know what any of the symbols mean?
Please note: a bunch of letters and symbols written down are meaningless without the context.
The wikipedia link I gave you is my best guess based on the little information you have provided.
i.e. something to do with nuclear physics and isospin.
5. May 7, 2014
### dauto
If you're taking a nuclear physics class, how can you not know the meaning of these symbols?
6. May 8, 2014
### rpndixit5
Well the problem is the teacher supposed to take this nuclear physics class is dead and we don't have a replacement. So.....
Yes I had a look at your link Mr. Dauto. I also stumbled upon stuff like Baryon Octet and Baryon Decouplet and it helped me solve the problem
Last edited: May 8, 2014
7. May 8, 2014
### dauto
So you're taking a Nuclear physics class without a lecturer??? How does that work? Who decides the grades???
8. May 8, 2014
### Simon Bridge
If there is no teacher - who set the question? What are you using for course materials?
Who was the teacher supposed to be and which institution is this?
(The whole class will be in trouble and so will the school so I'd best spend my time assisting them.)
9. May 10, 2014
### rpndixit5
It is expected that the grades will be decided by Research Scholars.
Mr. Bridge, I appreciate your offer, but the problem is that the exam of this Nuclear Physics Paper is on 20 May. Plus, the syllabus is divided in two parts, and the teacher supposed to teach this Particle Physics section is unavailable. Other parts like Nuclear Models, Detectors, Alpha, Beta and Gamma Decays etc. have been taught.
10. May 10, 2014
### Maylis
I like how nonchalantly you mention that your teacher is just dead. This is kind of comical, I think this is a troll post.
11. May 10, 2014
### vanhees71
No matter, whether this is a troll post or not. I don't understand, what you have against this (admittedly not very well posed) problem. It's all correct.
Usually, in introductory HEP class, and nuclear physics without HEP is not on top of research nowadays, one starts with stating the conservation laws for charge-like quantum numbers. The deeper real understanding comes of course only when you treat it within relativistic QFT and the Standard Model of elementary particles, but that's not the point here. Here it's simply asked about the quantum numbers of some hadrons.
The formulae given are correct. When restricting yourself to the three lightest quarks (up, down, and strange), the relation for the hypercharge is indeed given by
$$Y=B+S,$$
where B is the baryon number and S the strangeness number. The hypercharge is conserved under strong interactions (but not under weak interactions). In terms of constituent-quark numbers of the naive parton model the baryon number is related to the "quark content" by B=(Number of quarks - Number of antiquarks)/3. E.g., a proton is made of 2 up and 1 down quark and no antiquarks, leading to B=1 for the proton.
I must be the isospin (the usual naming is T_3, i.e., it's the third component of isospin, but those are conventions). The electric charge is then indeed given by Q/e=Y/2+I. E.g., the proton has isospin 1/2 and hypercharge 1, leading to a charge number of 1 as it must be (a neutron, belonging to the same isospin doublet, has isospin -1/2 and hypercharge 1 and thus Q=0 as it must be).
I don't know, how you are supposed to solve this exercise, i.e., what you are allowed to use. I'd suggest to look up the quark content of the particles and obtain the quantum numbers from it. You just need to know
u and d quarks build an isospin doublet (isospin meant in relation to the strong interactions in the SU(3) constituent quark model, aka the "eightfold way") with I=1/2 and I=-1/2, respectively. Both have strangeness S=0 and hypercharge Y=1/3. This leads to Q/e=(I+Y/2)=2/3 and -1/3, respectively, as it must be. Both have baryon number 1/3.
The s quark has isospin I=0 and strangeness S=-1 and hypercharge Y=-2/3. The charge is thus Q/e=-1/3 and baryon number B=1/3.
Why the quarks have these quantum numbers can only be understood from SU(3) group and representation theory. A good starting point is the Wikipedia article
http://en.wikipedia.org/wiki/Quark_model
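To make the bookkeeping above concrete, here is a small Python sketch of Q/e = I3 + Y/2 with Y = B + S, using the constituent-quark numbers quoted in the previous post (the function and dictionary names are my own):

```python
# (B, S, I3) for u, d, s constituent quarks; antiquarks flip all three signs.
QUARKS = {
    "u": (1 / 3, 0, +1 / 2),
    "d": (1 / 3, 0, -1 / 2),
    "s": (1 / 3, -1, 0),
}

def quantum_numbers(quarks, antiquarks=""):
    """Sum B, S, I3 over the constituents, then derive Y = B + S and Q/e."""
    B = S = I3 = 0.0
    for q in quarks:
        b, s, i3 = QUARKS[q]
        B, S, I3 = B + b, S + s, I3 + i3
    for q in antiquarks:
        b, s, i3 = QUARKS[q]
        B, S, I3 = B - b, S - s, I3 - i3
    Y = B + S
    return {"B": B, "S": S, "Y": Y, "Q/e": I3 + Y / 2}

print(quantum_numbers("u", antiquarks="d"))  # rho+  (u dbar): Q/e = 1,  S = 0
print(quantum_numbers("u", antiquarks="s"))  # K+    (u sbar): Q/e = 1,  S = +1
print(quantum_numbers("sss"))                # Omega- (sss):   Q/e = -1, S = -3
print(quantum_numbers("uuu"))                # Delta++ (uuu):  Q/e = 2,  S = 0
```

Checking a couple of cases by hand: for the Delta++ (uuu), I3 = 3/2 and Y = 1, so Q/e = 3/2 + 1/2 = 2; for the Omega- (sss), I3 = 0 and Y = 1 - 3 = -2, so Q/e = -1.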
12. May 10, 2014
### rpndixit5
I assure you, Ms. Maylis, this is not a troll post. I am an Indian student enrolled in Banaras Hindu University, Varanasi. The reason I stated he is dead is because he chose to attend some conferences instead of teaching his share. That's it!!
13. May 11, 2014
# Neumann Boundary with ADI-method (FDM); how do I implement this?
I am modelling the Heat equation in 2D in Python. I am using finite difference methods, more specifically the Alternating Direction Implicit method. The model works quite well with Dirichlet boundary conditions, but I can't figure out how to implement Neumann boundary conditions into the tridiagonal matrix.
The equation I'm modelling is: $$\frac{\partial u}{\partial t} = \alpha \left ( \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}\right )$$
The discretized result in ADI form looks like this:
$$-\gamma_x u^{l+\frac{1}{2}}_{i-1,j} + 2\left ( 1+\gamma_x \right )u^{l+\frac{1}{2}}_{i,j} - \gamma_x u^{l+\frac{1}{2}}_{i+1,j} = 2 u^{l}_{i,j} + \gamma_y \left (u^{l}_{i,j-1} -2u^{l}_{i,j} +u^{l}_{i,j+1} \right )$$
and
$$-\gamma_y u^{l+1}_{i,j-1} + 2\left ( 1+\gamma_y \right )u^{l+1}_{i,j} - \gamma_y u^{l+1}_{i,j+1} = 2 u^{l+\frac{1}{2}}_{i,j} + \gamma_x \left (u^{l+\frac{1}{2}}_{i-1,j} -2u^{l+\frac{1}{2}}_{i,j} +u^{l+\frac{1}{2}}_{i+1,j} \right )$$
And $\gamma$ is $\frac{\alpha \Delta t}{2 \Delta x^2}$. This gives tridiagonal matrices, which look like Ax = b. In the x-direction, this looks like:
$$A = \begin{bmatrix} 1+\gamma &-\gamma & & & & &\\ -\gamma& 2+2\gamma& -\gamma & & & &\\ &-\gamma & 2+2\gamma & -\gamma & & &\\ & &\ddots & \ddots &\ddots & &\\ & & & &-\gamma &2+2\gamma &-\gamma\\ & & & & &-\gamma &1+\gamma \end{bmatrix}$$
$$x = \begin{bmatrix} u^{l+\frac{1}{2}}_{0,j} \\ u^{l+\frac{1}{2}}_{1,j}\\ u^{l+\frac{1}{2}}_{2,j}\\ \vdots \\ u^{l+\frac{1}{2}}_{nx-1,j}\\ u^{l+\frac{1}{2}}_{nx,j} \end{bmatrix}$$
And finally,
$$b = \begin{bmatrix} g_{0,j}^{l} \\ 2u^l_{1,j} + \gamma_y \left ( u^l_{1,j-1} - 2u^l_{1,j} +u^l_{1,j+1} \right )\\ 2u^l_{2,j} + \gamma_y \left ( u^l_{2,j-1} - 2u^l_{2,j} +u^l_{2,j+1} \right )\\ \vdots \\ 2u^l_{nx-1,j} + \gamma_y \left ( u^l_{nx-1,j-1} - 2u^l_{nx-1,j} +u^l_{nx-1,j+1} \right )\\ g_{nx,j}^{l} \end{bmatrix}$$
These are the matrices in the X-sweep direction; the y-direction matrices and equations are the same except for different sub and superscripts.
$g_{0,j}^{l}$ and $g_{nx,j}^{l}$ are (in the current model) constants for the Dirichlet boundary conditions. The first row of the A-matrix has been modified to work for the Dirichlet boundaries, but I have no idea how to make the Neumann boundaries work. I tried the ghost point method, but for some reason this didn't work; it added a negative flux into the model.
I'm wondering if my discretization and matrices are correct. It might be really easy to implement Neumann boundaries, but for some reason I cant manage to. Thanks for reading, and sorry for the wall of text.
• I think that your first row should be $(1, 0, 0, \cdots, 0)$, and the last one $(0, \cdots, 0, 0, 1)$, for your boundary conditions to be satisfied. – nicoguaro May 29 '18 at 15:20
• This doesn't seem to work, it gives a negative flux inwards (the boundary cools the surrounding) while i'd like to have zero flux on some boundaries. – Jeroen Reurink May 30 '18 at 7:53
• I meant "for Dirichlet boundary conditions [...]". – nicoguaro May 30 '18 at 9:23
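To illustrate the ghost-point approach the question mentions, here is a minimal NumPy sketch (not the asker's code; the function name and keyword convention are my own, and Dirichlet rows follow nicoguaro's identity-row suggestion). For a zero-flux Neumann boundary, the ghost-point trick sets $u_{-1} = u_1$, which folds the ghost value into the first row by doubling the off-diagonal entry: $2(1+\gamma)u_0 - 2\gamma u_1 = \text{rhs}_0$.

```python
import numpy as np

def adi_x_matrix(nx, gamma, left="dirichlet", right="dirichlet"):
    """Tridiagonal x-sweep matrix with Dirichlet or zero-flux Neumann ends."""
    A = np.zeros((nx, nx))
    for i in range(1, nx - 1):
        A[i, i - 1] = -gamma
        A[i, i] = 2 * (1 + gamma)
        A[i, i + 1] = -gamma
    if left == "dirichlet":
        A[0, 0] = 1.0            # row (1, 0, ..., 0); put g_{0,j} in b
    else:                        # zero-flux Neumann via ghost point u_{-1} = u_1
        A[0, 0] = 2 * (1 + gamma)
        A[0, 1] = -2 * gamma
    if right == "dirichlet":
        A[-1, -1] = 1.0          # row (0, ..., 0, 1); put g_{nx,j} in b
    else:                        # zero-flux Neumann via ghost point
        A[-1, -1] = 2 * (1 + gamma)
        A[-1, -2] = -2 * gamma
    return A
```

A quick sanity check: with Neumann ends, every row of the matrix sums to 2, so a uniform temperature field satisfies the sweep exactly and no spurious flux enters through the boundary, which is exactly the symptom the asker wants to avoid.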
# Routine Apache Server Update Today
One fact which I mention often, is that I use my home computer, which I name ‘Phoenix’, as a Web-server, and as the hosting server for this blog.
For any readers who have questions on how this is possible, I’d direct you Here.
Updates which are somewhat remarkable, such as an actual update to the Web-server, but which seemed to take place without any technical problems, I document in this blog as ‘routine updates’.
The update to my Apache Web-server, that brought it up to version ‘2.4.10-10+deb8u9‘, just took place today. Doing so actually does require a restart of the server. But that kind of restart simply takes place within a few seconds, and without any detriment to the availability of the site, because of the way Web-servers generally work.
Dirk
Tamil Nadu Board of Secondary Education SSLC (English Medium) Class 10th
# The standard deviation and coefficient of variation of a data are 1.2 and 25.6 respectively. Find the value of mean. - Mathematics
The standard deviation and coefficient of variation of a data are 1.2 and 25.6 respectively. Find the value of mean.
#### Solution
Standard deviation (σ) = 1.2
Coefficient of variation = 25.6
$\frac{\sigma}{\bar{x}} \times 100 = 25.6$
$\frac{1.2}{\bar{x}} \times 100 = 25.6$
⇒ $25.6 \times \bar{x} = 1.2 \times 100$
$\bar{x} = \frac{120}{25.6} = \frac{1200}{256} = 4.6875$
≈ 4.69
Value of mean = 4.69
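The arithmetic can be checked numerically in a couple of lines (a quick sketch, not part of the original solution):

```python
# Coefficient of variation: CV = (sigma / mean) * 100, so mean = sigma * 100 / CV.
sigma, cv = 1.2, 25.6
mean = sigma * 100 / cv
print(mean)  # 4.6875, i.e. 4.69 to two decimal places
```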
Concept: Coefficient of Variation
# Calculate: x^4-10x^2+9=0
## Expression: ${x}^{4}-10{x}^{2}+9=0$
Transform the biquadratic equation into a quadratic equation by substituting $t$ for ${x}^{2}$
${t}^{2}-10t+9=0$
Solve the equation for $t$
$\begin{array} { l }t=1,\\t=9\end{array}$
Substitute back $t={x}^{2}$
$\begin{array} { l }{x}^{2}=1,\\{x}^{2}=9\end{array}$
Solve the equation for $x$
$\begin{array} { l }x=-1,\\x=1,\\{x}^{2}=9\end{array}$
Solve the equation for $x$
$\begin{array} { l }x=-1,\\x=1,\\x=-3,\\x=3\end{array}$
The equation has $4$ solutions
$\begin{array} { l }x_1=-3,& x_2=-1,& x_3=1,& x_4=3\end{array}$
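As a cross-check of the substitution method, the quartic factors as $(x^2-1)(x^2-9)$, so its roots can be confirmed numerically (a sketch using NumPy's polynomial root finder):

```python
import numpy as np

# x^4 - 10x^2 + 9 = 0, coefficients in descending powers of x
coeffs = [1, 0, -10, 0, 9]
roots = np.sort(np.roots(coeffs).real)
print(roots)
```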
ADMMConsensus specialises ADMM for solving optimisation problems of the form
$\mathrm{argmin}_{\mathbf{x}} \; \sum_{i = 0}^{N_b - 1} f_i(\mathbf{x}) + g(\mathbf{x})$
via an ADMM problem of the form
$\begin{split}\mathrm{argmin}_{\mathbf{x}_i,\mathbf{y}} \; \sum_{i = 0}^{N_b - 1} f(\mathbf{x}_i) + g(\mathbf{y}) \;\mathrm{such\;that}\; \left( \begin{array}{c} \mathbf{x}_0 \\ \mathbf{x}_1 \\ \vdots \end{array} \right) = \left( \begin{array}{c} I \\ I \\ \vdots \end{array} \right) \mathbf{y} \;\;.\end{split}$
See ConvCnstrMOD_Consensus as an example of a class derived from ADMMConsensus, or see the simple usage example.
Classes derived from ADMMConsensus should override/define the methods and attributes in the following sections.
Initialisation
The __init__ method of the derived class should call the ADMMConsensus __init__ method to ensure proper initialisation. Note that this method assumes that the ADMM consensus component blocks in working variable $$\mathbf{x}$$ will be stacked on the final array index, and defines attribute self.xshape accordingly.
State variables $$\mathbf{y}$$ and $$\mathbf{u}$$ are initialised to zero by inherited methods ADMM.yinit and ADMM.uinit respectively (this behaviour is inherited from ADMM). These methods should be overridden if a different initialization is desired.
Update Steps
The $$\mathbf{x}$$ update method ADMMConsensus.xstep calls ADMMConsensus.xistep for each ADMM consensus component block. In most cases a derived class will define ADMMConsensus.xistep rather than override ADMMConsensus.xstep. Method ADMMConsensus.xistep should solve
$\mathbf{x}_i^{(j+1)} = \mathrm{argmin}_{\mathbf{x}_i} \;\; f_i(\mathbf{x}_i) + \frac{\rho}{2} \left\| \mathbf{x}_i - \left( \mathbf{y}^{(j)} - \mathbf{u}_i^{(j)} \right) \right\|_2^2$
setting a slice of self.X on the final index from the result.
The $$\mathbf{y}$$ update method ADMMConsensus.ystep solves
$\mathbf{y}^{(j+1)} = \mathrm{argmin}_{\mathbf{y}} \;\; g(\mathbf{y}) + \frac{N_b \rho}{2} \left\| \mathbf{y} - \mathbf{z}^{(j)} \right\|_2^2$
where
$\mathbf{z}^{(j)} = \sum_{i = 0}^{N_b - 1} \left( \mathbf{x}_i^{(j+1)} + \mathbf{u}_i^{(j)} \right) \;.$
A class derived from ADMMConsensus should override ADMMConsensus.prox_g to implement the proximal operator of $$g(\cdot)$$. Note that $$N_b \rho$$ is passed as a parameter to ADMMConsensus.prox_g; it is the responsibility of the implementer of this method to understand that it implements what is in mathematical terms the proximal operator of $$g(\cdot)$$ with parameter $$(N_b \rho)^{-1}$$.
The dual variable update is
$\mathbf{u}_i^{(j+1)} = \mathbf{u}_i^{(j)} + \mathbf{x}_i^{(j+1)} - \mathbf{y}^{(j+1)} \;.$
This update is implemented in ADMM.ustep, which will usually not need to be overridden.
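To make the three update steps concrete, here is a tiny self-contained consensus ADMM in plain NumPy (a sketch, not the ADMMConsensus API; the choices $f_i(\mathbf{x}) = \frac{1}{2}\|\mathbf{x} - \mathbf{a}_i\|_2^2$ and $g(\mathbf{y}) = \lambda \|\mathbf{y}\|_1$ are my own). The $\mathbf{x}_i$-step then has a closed form and the prox of $g$ with parameter $(N_b \rho)^{-1}\lambda$ is soft thresholding:

```python
import numpy as np

def consensus_admm(a, lam=0.5, rho=1.0, iters=200):
    """Solve argmin_x sum_i 0.5*||x - a_i||^2 + lam*||x||_1 by consensus ADMM."""
    Nb, n = a.shape
    y = np.zeros(n)
    u = np.zeros((Nb, n))
    for _ in range(iters):
        # x_i-step: minimizer of 0.5||x - a_i||^2 + (rho/2)||x - (y - u_i)||^2
        x = (a + rho * (y - u)) / (1.0 + rho)
        # y-step: prox of g at the block average of x_i + u_i,
        # i.e. soft thresholding with parameter lam / (Nb * rho)
        z = np.mean(x + u, axis=0)
        t = lam / (Nb * rho)
        y = np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
        # dual update for each block
        u = u + x - y
    return y
```

With $\lambda = 0$ the prox is the identity and the iteration converges to the plain consensus average of the $\mathbf{a}_i$.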
As in ADMM, if one of the update steps makes use of pre-computed values that depend on the penalty parameter self.rho, ADMM.rhochange should be overridden with a method that updates these pre-computed values.
Constraint Definition
Class ADMMConsensus overrides all of the methods in Residual Evaluation and does not define any of the ADMM constraint definition methods discussed in Constraint Definition.
Residual Evaluation
The residual evaluation methods ADMMConsensus.rsdl_r, ADMMConsensus.rsdl_s, ADMMConsensus.rsdl_rn, ADMMConsensus.rsdl_sn are all appropriately defined for a general ADMM consensus problem, and will typically not need to be overridden.
Iteration Statistics
The iteration statistics mechanism, as described in Iteration Statistics, is inherited largely unchanged from ADMM. The only exception is that ADMMConsensus.obfn_f is defined to evaluate the sum, over the ADMM consensus blocks, of calls to ADMMConsensus.obfn_fi, which should be overridden in a derived class if it is desired to use this simple iteration statistics mechanism rather than override ADMM.eval_objfn.
### Session H37: SPS Undergraduate Research III
8:00 AM–11:00 AM, Tuesday, February 28, 2012
Room: 108
Chair: Gary White, AIP/SPS
Abstract ID: BAPS.2012.MAR.H37.6
### Abstract: H37.00006 : Scanning Tunneling Microscopy of Fe Doped Bi$_2$Sr$_2$CaCu$_2$O$_{8+x}$
9:00 AM–9:12 AM
#### Authors:
Brian Koopman
(Clark University)
W.D. Wise
(MIT)
Kamalesh Chatterjee
(MIT)
Genda Gu
(Brookhaven National Laboratory)
E.W. Hudson
(Penn State University)
M.C. Boyer
(Clark University)
We will present a low temperature scanning tunneling microscopy (STM) study of the high-temperature superconductor Bi$_2$Sr$_2$CaCu$_2$O$_{8+x}$ (Bi-2212) which has been intentionally doped with magnetic (Fe) impurities in order to locally disrupt superconductivity around the impurities. We examine spatial variations in the density of states in the vicinity of Fe impurities, and compare our results with previous STM studies of Ni doped Bi-2212. Notable differences between Fe and Ni impurities include differences in the number and energy locations of the impurity peaks. Our analysis shows that Fe is a weaker magnetic impurity than Ni and that the particle-hole symmetry present in the spectra of Ni impurities is not as obvious in Fe impurities. By studying how these impurities interact with superconductivity in Bi-2212 we hope to understand more about the superconducting mechanism in high-temperature superconductors.
To cite this abstract, use the following reference: http://meetings.aps.org/link/BAPS.2012.MAR.H37.6
# afro-xlmr-large
AfroXLMR-large was created by MLM adaptation of XLM-R-large model on 17 African languages (Afrikaans, Amharic, Hausa, Igbo, Malagasy, Chichewa, Oromo, Nigerian-Pidgin, Kinyarwanda, Kirundi, Shona, Somali, Sesotho, Swahili, isiXhosa, Yoruba, and isiZulu) covering the major African language families and 3 high resource languages (Arabic, French, and English).
## Eval results on MasakhaNER (F-score)
| language | XLM-R-miniLM | XLM-R-base | XLM-R-large | afro-xlmr-large | afro-xlmr-base | afro-xlmr-small | afro-xlmr-mini |
|----------|--------------|------------|-------------|-----------------|----------------|-----------------|----------------|
| amh | 69.5 | 70.6 | 76.2 | 79.7 | 76.1 | 70.1 | 69.7 |
| hau | 74.5 | 89.5 | 90.5 | 91.4 | 91.2 | 91.4 | 87.7 |
| ibo | 81.9 | 84.8 | 84.1 | 87.7 | 87.4 | 86.6 | 83.5 |
| kin | 68.6 | 73.3 | 73.8 | 79.1 | 78.0 | 77.5 | 74.1 |
| lug | 64.7 | 79.7 | 81.6 | 86.7 | 82.9 | 83.2 | 77.4 |
| luo | 11.7 | 74.9 | 73.6 | 78.1 | 75.1 | 75.4 | 17.5 |
| pcm | 83.2 | 87.3 | 89.0 | 91.0 | 89.6 | 89.0 | 85.5 |
| swa | 86.3 | 87.4 | 89.4 | 90.4 | 88.6 | 88.7 | 86.0 |
| wol | 51.7 | 63.9 | 67.9 | 69.6 | 67.4 | 65.9 | 59.0 |
| yor | 72.0 | 78.3 | 78.9 | 85.2 | 82.1 | 81.3 | 75.1 |
| avg | 66.4 | 79.0 | 80.5 | 83.9 | 81.8 | 80.9 | 71.6 |
### BibTeX entry and citation info
@inproceedings{alabi-etal-2022-adapting,
title = "Adapting Pre-trained Language Models to {A}frican Languages via Multilingual Adaptive Fine-Tuning",
author = "Alabi, Jesujoba O. and
Mosbach, Marius and
Klakow, Dietrich",
booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
month = oct,
year = "2022",
address = "Gyeongju, Republic of Korea",
publisher = "International Committee on Computational Linguistics",
url = "https://aclanthology.org/2022.coling-1.382",
pages = "4336--4349",
abstract = "Multilingual pre-trained language models (PLMs) have demonstrated impressive performance on several downstream tasks for both high-resourced and low-resourced languages. However, there is still a large performance drop for languages unseen during pre-training, especially African languages. One of the most effective approaches to adapt to a new language is language adaptive fine-tuning (LAFT) {---} fine-tuning a multilingual PLM on monolingual texts of a language using the pre-training objective. However, adapting to target language individually takes large disk space and limits the cross-lingual transfer abilities of the resulting models because they have been specialized for a single language. In this paper, we perform multilingual adaptive fine-tuning on 17 most-resourced African languages and three other high-resource languages widely spoken on the African continent to encourage cross-lingual transfer learning. To further specialize the multilingual PLM, we removed vocabulary tokens from the embedding layer that corresponds to non-African writing scripts before MAFT, thus reducing the model size by around 50{\%}. Our evaluation on two multilingual PLMs (AfriBERTa and XLM-R) and three NLP tasks (NER, news topic classification, and sentiment classification) shows that our approach is competitive to applying LAFT on individual languages while requiring significantly less disk space. Additionally, we show that our adapted PLM also improves the zero-shot cross-lingual transfer abilities of parameter efficient fine-tuning methods.",
}
Mask token: <mask>
# potfit wiki
open source force-matching
# Output
potfit prints information on the minimization progress to the standard output. Additionally all input and output file names are also written to standard output.
The output of the optimization will also be written to standard output unless the output_prefix parameter is set in the parameter file (which is highly recommended).
## Potfit Output Files
### Binned distribution file
Only available for tabulated potentials: When compiled with the bindist option, a [[bindistfile]] can be created.
### Error report
At the very end of each potfit run there will be an [[output::errors|error report]].
### Final potential
potfit always writes the potential achieved in minimization to a file using the same format as for the input potential. The name of this file is specified by the [[:parameters|endpot parameter]].
### IMD Potential File
Prefix of files to which the [[http://imd.itap.physik.uni-stuttgart.de/|IMD]] potentials are written, see [[:imd|IMD]].
### LAMMPS potential file
If write_lammps is set to 1 in the [[:parameters|parameter file]], potfit will write a potential file for [[http://lammps.sandia.gov/|LAMMPS]]. This feature does not work for all potentials, for details see [[lammps|LAMMPS]].
### Output files
If the output_prefix parameter is set in the configuration file, potfit will write several files, else all information will be printed to standard output.
In detail the files written are:
output_prefix.force contains the calculated and reference forces
output_prefix.energy contains the calculated and reference energies
output_prefix.stress contains the calculated and reference stresses
output_prefix.punish contains the different punishments (EAM only)
output_prefix.rho_loc local electron density (EAM only)
All these files contain header lines, describing the following data.
### Pair distribution file
If write_pair is set to 1 in the [[:parameters|parameter file]], the [[pair_distribution|pair distribution file]] will be written after the optimization.
### Plotting files
Two files can be generated to be used with gnuplot:
### Temporary Potential File
Every now and then, depending on the algorithm used, an intermediate potential is written to a [[tempfile|tempfile]].
### Potential ensemble file
Only available for analytic potentials: When compiled with the ''uq'' option, an [[ensemblefile]] can be created.
|
{}
|
Abstract
A method is being developed that estimates cracks invisible from the surface, based on the surface deformation measured by digital image correlation (DIC). An inverse problem is set up to estimate such invisible cracks from the surface deformation. Surface deformation measured by the DIC method contains noise, and the inverse problem is ill-conditioned. The regularization method applied in this study is an extension of the joint estimation maximum a posteriori (JE-MAP) method. The JE-MAP algorithm alternates between estimation by the MAP method and segmentation by the grab-cut (GC) method to mitigate the ill-conditioning. Physical constraints on the displacement and the forces at the cracks and the crack perimeters (ligaments) are added to the MAP method. The displacement and load at the cracks and the ligaments have a cross-sparse relationship. The MAP method estimates the displacement or the load at the cracks and the ligaments. The estimated result varies greatly at the boundary between the cracks and the ligaments, and this boundary is determined by the GC method based on the estimated result. In this study, the changes at the boundary between the cracks and the ligaments in the estimated results are amplified, and the amplified results are input into the GC method to improve the boundary-determination accuracy. The regularization method developed from the JE-MAP method was combined with the DIC method to estimate cracks in invisible locations. The proposed method estimated cracks more accurately than L1-norm regularization in inverse problems where the observed data were strain distributions measured by the DIC method.
1 Introduction
Large electrical devices are regularly inspected for cracks to ensure their integrity. The inspection of cracks in invisible locations requires the disassembly of equipment, resulting in long shutdown periods. If cracks in invisible locations could be inspected without disassembling the equipment, the periods of equipment shutdown could be shortened.
Ultrasonic testing and X-ray inspection are widely used to inspect cracks in invisible locations. In ultrasonic testing, a probe is placed in contact with the surface to be inspected, ultrasonic waves are emitted, and those reflected from a crack are measured to identify the crack's shape [1]. Reducing the size of the inspection system is difficult because the probe must generate and receive ultrasonic waves. In an X-ray inspection, an object is irradiated and a crack's shape is identified from the difference in the transmitted X-ray dose [2]. The inspection device is large, and the number of objects that can be inspected is limited. To inspect electrical equipment without disassembling it, the inspection device must fit into the narrow spaces inside the equipment and be as small as possible to avoid damaging the equipment through contact.
Camera-based visual inspection is a noncontact inspection method that can be easily miniaturized. In combination with the digital image correlation (DIC) method [3], which has attracted much attention in recent years, a camera can measure the deformation caused by a crack in an invisible area. A method was proposed to estimate cracks in invisible locations from the deformation measured by DIC [4]. In the above method, for each crack's shape, deformation data at visible locations are prepared from which a crack shape close to the measured deformation is selected. The estimation accuracy depends on the amount of data that must be prepared in advance, and it is difficult to prepare every possible crack shape.
The problem of estimating cracks in invisible locations from surface deformation leads to the inverse problem of estimating the displacement distribution of crack surfaces from surface deformation. One method for solving such ill-posed inverse problems is to predict the crack shape using topology optimization. Research using topology optimization proposes leveraging full-field response data obtained by DIC in a topology optimization framework to reconstruct internal damage in members [5]. The same authors also propose detecting interior anomalies of structural components, inferred from the discrepancy in constitutive properties such as the elasticity modulus distribution of a three-dimensional heterogeneous/homogeneous sample, from limited full-field boundary measurements using three-dimensional DIC [6]. They then extend their research to demonstrate the feasibility and performance of the proposed method for a set of large-scale structural steel beams with and without buried defects using a full-field three-dimensional DIC sensor approach [7]. They also propose a novel strategy, termed the octree partitioning algorithm, to obtain a unique solution for the finite element model updating problem of detecting the internal properties of a structure by the topology optimization method [8]. These topology-optimization approaches have high computational costs because the finite element method must be computed iteratively during the optimization.
Since the measured deformations contain noise, the inverse problem of estimating invisible cracks from the deformation is ill-conditioned. To overcome this problem, Amaya et al. [9] proposed regularization that takes into account the physical relationship between the surface deformation gradient and the crack deformation, together with a physical constraint between the displacement and the force at a crack and its perimeter (ligament). In this study, the regularization method is the joint estimation maximum a posteriori (JE-MAP) method [10], which was developed to identify the location of body tissues from X-ray CT results. The JE-MAP algorithm alternates between MAP estimation [11], which incorporates physical constraints as a priori information, and the grab-cut (GC) method [12], an image segmentation algorithm, to mitigate the ill-conditioning.
Therefore, in this study, a method to estimate cracks in invisible locations is developed by combining a regularization method extended from the JE-MAP method with the DIC method. The regularization method first adds to the MAP method the physical constraints on the displacement and force at the cracks and ligaments. As described below, the physical constraint is a cross-sparse relationship between the displacement and the load at the cracks and ligaments. The grab-cut method is also improved so that the boundary between cracks and ligaments can be determined more easily from the displacement or load predicted by the MAP method: before being input to the grab-cut method, the displacements or loads are converted into an image that highlights the crack and ligament boundaries. We show the results of estimating cracks in invisible locations with a combination of the developed regularization method and the DIC method. First, an inverse problem is set up to estimate cracks. Next, the regularization method that extends the JE-MAP method is described. The specimen to be deformed is modeled using the finite element method to identify the relationship between the surface deformation and the crack for solving the inverse problem, and the discretization of the obtained results is described. In addition, the deformation distributions of specimens with and without cracks are measured by the DIC method, and the observed data of the inverse problem are obtained from these deformation distributions. Finally, the cracks are estimated from the observed data using the regularization method that extends the JE-MAP method, and the usefulness of this study is tested by comparing the estimation results with L1-norm regularization [13].
2 Regularization Method of the Inverse Problem
2.1 Inverse Problem Setting.
The inverse problem in this study is to estimate a crack propagating in the thickness direction from one side of a flat plate from the strain changes on the crack-free side. Figure 1 shows a schematic diagram of this study's inverse problem.
Fig. 1
A virtual plane, which denotes the location of a crack to be estimated, is shown by a dashed flat line in Fig. 1. The virtual plane's width direction is the X-axis, its thickness direction is the Y-axis, and the length direction of the steel plate is the Z-axis. A uniform tensile load is applied in the Z-axis direction. The observed surface, shown in green in Fig. 1, is where the strain changes are measured. The observed data of the inverse problem are the strain changes in the Z-direction of the observed surface, $εZ$. The unknown data of the inverse problem are the displacement u of the virtual plane in the Z-direction. The Z-direction strain of the virtual plane under the tensile loading changes with the presence or absence of a crack (Fig. 1). The displacement u at the crack of the virtual plane is larger than at the ligament.
The observed equation for this inverse problem is shown in Eq. (1). Assuming micro-elastic deformation, the relationship between discretized observed data $εZ̃$ and the unknowns u is the observed equation in Eq. (1)
$$\tilde{\varepsilon}_Z = H \cdot \mathbf{u} + \boldsymbol{\beta} \tag{1}$$
Here $εZ̃$ and u in bold are discretized vectors. $β$ is the discretized vector of the measurement errors. H is a constant matrix representing the relationship between $εZ$ and u. Constant matrix H is obtained by the finite element method or other methods.
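As a toy illustration of Eq. (1) (a sketch, not the authors' FEM operator: the Gaussian smoothing matrix stands in for H, and Tikhonov regularization stands in for the more elaborate method developed below), the following NumPy code simulates noisy observations and shows why some regularization is needed when inverting the observation equation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
idx = np.arange(n)
# Toy forward operator: Gaussian smoothing standing in for the FEM-derived H
H = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 3.0) ** 2)
u_true = np.where((idx >= 20) & (idx < 30), 1.0, 0.0)  # "crack opening" profile
eps = H @ u_true + 0.01 * rng.standard_normal(n)       # noisy surface strains

# Tikhonov-regularized inversion: u = argmin ||H u - eps||^2 + alpha*||u||^2.
# Without the alpha*I term, the tiny eigenvalues of H^T H would amplify the
# measurement noise and swamp the estimate.
alpha = 1e-2
u_est = np.linalg.solve(H.T @ H + alpha * np.eye(n), H.T @ eps)
```

The regularized estimate recovers a smoothed version of the crack-opening profile; sharper boundaries are what the cross-sparse constraints of the following sections are designed to restore.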
2.2 Estimation of Displacement of Virtual Plane Using the Regularization Method
2.2.1 Regularization Method Developed From JE-MAP Method.
The JE-MAP method proposed by our research group is a regularization method that alternately repeats the estimation of latent and physical variables to improve the estimation accuracy. The regularization method in this study is an extension of the JE-MAP method. First, the latent variables indicate the areas of cracks and ligaments on the virtual plane. Next, the physical constraints on the displacement and force at the cracks and ligaments are added to the MAP method. The physical constraint has a cross–sparse relationship between the displacement and load at the cracks and ligaments. Furthermore, the GC method is improved to predict the boundary between them.
Figure 2 shows a schematic diagram of the regularization method developed in this study. Figure 2(a) shows a schematic diagram of the strain-change distribution of the observed surface. Figure 2(b) shows a schematic diagram of the virtual plane: (b-1) shows the displacement distribution and (b-2) shows the latent variable distribution. Constant matrix H maps the displacement of the virtual plane to the strain changes of the observed surface. The physical variable is the displacement of the virtual plane. The likelihood distribution of the virtual plane's displacement is obtained from strain distribution $εZ̃$ of the observed surface. A key feature of the JE-MAP method is the introduction of latent variable z to rationally incorporate prior information. Latent variable z takes the value 1 in crack regions and 0 in ligament regions of the virtual plane. The prior distribution of the virtual plane's displacement is obtained from latent variable distribution z and the physical constraints at the cracks and ligaments. The posterior distribution of displacement u of the virtual plane is obtained from the likelihood and prior distributions using the MAP method. Latent variable distribution z is then updated by binarizing, with the GC method, the posterior displacement u of the virtual plane obtained by the MAP method. The prior distribution is updated with the updated latent variable distribution z, and displacement distribution u of the virtual plane is updated by the MAP method from the updated prior and likelihood distributions. In this way, displacement distribution u and latent variable distribution z are repeatedly and alternately identified by the MAP and GC methods.
Fig. 2
2.2.2 Estimation by MAP Method and Likelihood Distribution.
In the MAP method, the estimate of unknowns u is the mode of the posterior distribution. In other words, crack-surface displacement $u^*$ maximizes the posterior probability shown in Eq. (2). The posterior distribution is obtained from the likelihood and prior distributions. $L(u|εZ̃)$ is the likelihood distribution, and $p(u|z)$ is the prior distribution
$u^* = \arg\max_u \left\{ L(u \,|\, \tilde{\varepsilon}_Z)\, p(u \,|\, z) \right\}$
(2)
Likelihood distribution $L(u|εZ̃)$ is given by the following equation:
$L(u \,|\, \tilde{\varepsilon}_Z) = \mathcal{N}(H \cdot u - \tilde{\varepsilon}_Z \,|\, \mu_w, \Sigma_w)$
(3)
$N$ is the probability density function of the multivariate Gaussian distribution. $μw$ is the mean vector of the measurement errors, and Σw is their covariance matrix. The measurement error is assumed to be independent of the measurement position. $μw$ is 0. The variance of measurement error $σw2$ is assumed to be constant. The covariance matrix becomes the following equation:
$\Sigma_w = \begin{pmatrix} \sigma_w^2 & & & 0 \\ & \sigma_w^2 & & \\ & & \ddots & \\ 0 & & & \sigma_w^2 \end{pmatrix}$
(4)
2.2.3 Prior Distribution.
The following physical constraints on the displacement and force at the cracks and ligaments are reflected in the prior distribution of the MAP method.
The displacement in the Z-direction of the virtual plane is discontinuous in the crack region and continuous in the ligament region. The force in the Z-direction of the virtual plane is zero in the crack region and nonzero in the ligament region. Therefore, the force in the virtual plane's Z-direction has a sparse distribution. If the model is symmetric in the Z-direction at the virtual plane, the displacement in the Z-direction is also sparse at the virtual plane. The Z-direction forces and displacements of the virtual plane are zero in conflicting regions: the forces at the cracks and the displacements at the ligaments. These conflicting sparse constraints are called cross sparsity. A schematic diagram of the cross-sparse constraint condition is shown in Fig. 3, which shows a model symmetric in the Z-direction at the virtual plane. The forces and displacements in the Z-direction of the virtual plane are cross sparse at the cracks and ligaments (Section A). Cross-sparse constraints are reflected in the prior distribution of the MAP method through latent variable z. The size of the vector of latent variables z is identical to that of u. The component of z is 1 in the crack-opening region and 0 in the ligament region. Prior distribution $p(u|z)$ incorporates the cross sparsity of the displacements and the forces in the virtual plane through z and is given by the following equation:
$p(u \,|\, z) = \mathcal{N}(u \,|\, \mu_u(z), \Sigma_u(z))\, \mathcal{N}(f \,|\, \mu_f(z), \Sigma_f(z))$
(5)
f is the discretized Z-direction force vector of the virtual plane. Force vector f has a relationship with displacement u shown in the following equation:
$f = G \cdot u + f_0$
(6)
Fig. 3
G is a constant matrix representing the relationship between f and u. $f0$ is a force vector when there is no crack. $μu$ is the mean vector of the displacements on the virtual plane, Σu is the covariance matrix of the displacements on the virtual plane, $μf$ is a mean vector of the forces on the virtual plane, and Σf is a covariance matrix of the forces on the virtual plane. The displacements and loads of the virtual plane are independent of the virtual plane's discretized positions. The covariance matrices become the following equations:
$\Sigma_u = \begin{pmatrix} \sigma_u^2(z) & & & 0 \\ & \sigma_u^2(z) & & \\ & & \ddots & \\ 0 & & & \sigma_u^2(z) \end{pmatrix}$
(7)
$\Sigma_f = \begin{pmatrix} \sigma_f^2(z) & & & 0 \\ & \sigma_f^2(z) & & \\ & & \ddots & \\ 0 & & & \sigma_f^2(z) \end{pmatrix}$
(8)
The cross-sparse constraints on the ligaments and cracks are introduced into $μu$, Σu, $μf$, and Σf by latent variable z. $μu$, Σu, $μf$, and Σf are defined by Eqs. (9) and (10). Here the model is assumed to be symmetric in the Z-direction at the virtual plane
$z = 0 \;\Rightarrow\; \begin{cases} \mu_u(z) = 0 \\ \sigma_u^2(z) \approx 0 \\ \mu_f(z) = \mu_{fl} \\ \sigma_f^2(z) = \sigma_{fl}^2 \end{cases}$
(9)
$z = 1 \;\Rightarrow\; \begin{cases} \mu_u(z) = \mu_{uc} \\ \sigma_u^2(z) = \sigma_{uc}^2 \\ \mu_f(z) = 0 \\ \sigma_f^2(z) \approx 0 \end{cases}$
(10)
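The MAP update implied by Eqs. (2), (3), (5), (9), and (10) can be sketched as follows. Since the likelihood and prior are both Gaussian, maximizing the posterior is equivalent to minimizing a sum of quadratic forms, so $u^*$ solves a linear system (the normal equations). This is a dependency-free sketch under illustrative parameter values; the small variance `eps_small` stands in for the "≈ 0" variances of Eqs. (9) and (10), and the function name is ours, not the paper's.

```python
import numpy as np

def map_step(H, G, f0, eps_obs, z, sw2, mu_uc, s_uc2, mu_fl, s_fl2, eps_small=1e-8):
    """One MAP update of u given latent variable z (1 = crack, 0 = ligament).

    Maximizing Eq. (2) with Gaussian likelihood (Eq. (3)) and prior (Eq. (5))
    is equivalent to minimizing
        ||H u - eps||^2 / sw2 + (u - mu_u)' Su^-1 (u - mu_u)
                              + (G u + f0 - mu_f)' Sf^-1 (G u + f0 - mu_f),
    whose minimizer solves a linear system.
    """
    z = np.asarray(z, dtype=bool)
    mu_u = np.where(z, mu_uc, 0.0)        # Eqs. (9)-(10): mean displacement
    su2 = np.where(z, s_uc2, eps_small)   # ~0 variance pins u to 0 on ligaments
    mu_f = np.where(z, 0.0, mu_fl)        # Eqs. (9)-(10): mean force
    sf2 = np.where(z, eps_small, s_fl2)   # ~0 variance pins f to 0 on cracks

    # Normal equations of the quadratic objective.
    A = (H.T @ H) / sw2 + np.diag(1.0 / su2) + G.T @ np.diag(1.0 / sf2) @ G
    b = (H.T @ eps_obs) / sw2 + mu_u / su2 + G.T @ ((mu_f - f0) / sf2)
    return np.linalg.solve(A, b)
```

With ligament nodes (z = 0), the near-zero prior variance drives the estimated displacement to zero, as the cross-sparsity constraint intends.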
2.2.4 Updating Latent Variable z.
Latent variable z, which represents the crack region as 1 and the ligament region as 0, is updated from the distribution of u or f on the virtual plane estimated by the MAP method. Latent variable z is estimated using the GC method, which divides the image into foreground and background elements. To estimate the variable with the GC method, the distribution of u or f on the virtual plane is converted to an image. In this paper, latent variable z is estimated from the distribution of u on the virtual plane. The GC method uses the opencv 3.4.1 algorithm [14].
To convert displacement u into an image, the displacement distribution is normalized by the maximum value of u. If the normalized displacement distribution is converted directly into an image and segmented, the rapid displacement change near the boundary between the crack and the ligament will be ignored and the crack area will be underestimated. To magnify the abrupt displacement change, the entire normalized displacement distribution is multiplied by a constant greater than 1. In a constant multiplied distribution, all values greater than 1 are assumed to be 1. Next, the distribution multiplied by the constant is converted to an image. The initial value of latent variable z is given as 0 or 1 across the entire virtual plane to avoid biasing the estimation results.
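The image conversion described above can be sketched as follows (numpy only; the actual segmentation step, OpenCV's grabCut, is not called here, and the toy field and the zConst value are illustrative):

```python
import numpy as np

def displacement_to_image(u_grid, z_const=5.0):
    """Convert a virtual-plane displacement field to an 8-bit image for GrabCut.

    Normalize by the maximum, magnify the abrupt change near the crack/ligament
    boundary by z_const (> 1), and saturate everything above 1 before scaling
    to 0..255.  In the paper, the resulting image is passed to cv2.grabCut.
    """
    norm = u_grid / u_grid.max()                  # normalize to at most 1
    boosted = np.clip(z_const * norm, 0.0, 1.0)   # magnify, then saturate at 1
    return (255 * boosted).astype(np.uint8)

# Toy field: a "crack" plateau and a decaying "ligament" tail.
u = np.array([[1.0, 0.9, 0.3, 0.05, 0.01]])
img = displacement_to_image(u, z_const=5.0)
```

Multiplying by zConst before clipping makes the near-boundary values saturate, so the crack region is not underestimated by the segmentation.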
2.2.5 Calculation Flow of Regularization Method Developed From JE-MAP Method.
The flow of the calculation that repeatedly updates the prior distribution is shown in Fig. 4. Latent variable z is updated by the GC method from the displacement distribution estimated by the MAP method. In this paper, the update of latent variable z is repeated a specified number of times. The displacement distribution $u*$, where the change in z is smaller than a predetermined criterion, is extracted from the results of repeated calculations. The strain distribution on the observed surface is obtained by multiplying each extracted displacement distribution $u*$ by a constant matrix H.
Fig. 4
The estimated result is the displacement distribution that minimizes the mean-square-error (MSE) between the calculated strain distribution and the measured strain distribution $εZ̃$. The MSE is obtained by the following equation:
$\mathrm{MSE} = \frac{1}{Num} \sum \left( H \cdot u^* - \tilde{\varepsilon}_Z \right)^2$
(11)
where Num denotes the number of discretized strain values on the observed surface.
The initial values of the likelihood and prior distributions, $σw2$, μfl, $σfl2$, μuc, and $σuc2$, and the variances at $μf=0$ and $μu=0$, are given in advance. The values in this paper are given in Sec. 3.3.
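The alternating flow of Fig. 4 can be sketched as below. To keep the sketch self-contained, the GrabCut update is stood in for by a fixed-threshold binarization and the MAP step is passed in as a function; neither substitution is the paper's actual procedure.

```python
import numpy as np

def alternate_map_gc(map_step, eps_obs, H, z0, n_iter=20, conv_elems=2):
    """Alternate MAP estimation of u and latent-variable update (Fig. 4 flow).

    `map_step(z)` returns the MAP displacement for a given latent field z.
    GrabCut is stood in for here by a fixed-threshold binarization; the paper
    instead applies OpenCV's grabCut to the image-converted displacement.
    Among iterations where z has (nearly) converged, the displacement with the
    smallest MSE against the observed strain (Eq. (11)) is kept.
    """
    z = z0.copy()
    best = (np.inf, None)
    for _ in range(n_iter):
        u = map_step(z)
        z_new = (u > 0.1 * u.max()).astype(int)    # stand-in for the GC update
        if np.sum(z_new != z) < conv_elems:        # convergence criterion on z
            mse = np.mean((H @ u - eps_obs) ** 2)  # Eq. (11)
            if mse < best[0]:
                best = (mse, u)
        z = z_new
    return best
```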
3 Test Specimen for Crack Estimation
3.1 Shape of Test Specimen.
A specific example of an inverse problem treated in this section aims at estimating a crack in the thickness direction from one side of a flat plate from the strain changes on the crack-free side. The strain changes on the crack-free surface are measured by applying a tensile load to the plate. Figure 5 shows a schematic diagram of a flat plate. It is 50 mm wide and 24 mm thick. The estimated crack exists on the virtual plane, which is indicated by the red dashed line at the plate's center (Fig. 5). The virtual plane's width direction is the X-axis, its thickness direction is the Y-axis, and the steel plate's length direction is the Z-axis. The tensile load is applied uniformly in the Z-axis direction.
Fig. 5
3.2 Calculation of Constant Matrices H and G.
Constant matrix H in Eq. (1) is used in the observed equation of the inverse problem. Constant matrix G in Eq. (6) is used to incorporate the cross sparsity into the prior distribution. In this paper, constant matrices H and G are obtained by the finite element method. Figure 6 shows a schematic diagram of the discretized observed surface and the virtual plane. The vectors of strain $εZ$, displacement u, and force f are also shown in Fig. 6. The finite element model is a 1/2 symmetric model of a flat plate whose symmetry plane is the virtual plane. In this paper, so that the strain changes could be measured with the DIC method, a load generating a strain of about 1000 μ on the observed surface of the crack-free flat plate in Fig. 5 was applied: a load of 247 kN in the Z-direction was applied to the side opposite the symmetry plane. The discretized strains, displacements, and forces are the values at the nodes of the grid elements created by the finite element method. The virtual plane is divided into n elements in the X-direction and m elements in the Y-direction. The observed surface is divided into n elements in the X-direction and p elements in the Z-direction.
Fig. 6
First, the strains, displacements, and forces are calculated under the condition that there are no cracks on the virtual plane. The strains, displacements, and forces are then obtained by generating a single nodal crack at each node of the virtual plane in turn. $εz(i,j)$ is a vector of the differences in the Z-direction strain of the observed surface, where the difference is taken between the conditions without and with a crack on the virtual plane. $(i,j)$ denotes the location of the node deemed to be cracked; i is a value between 1 and n, and j is a value between 1 and m. u(i, j) is the difference vector of the Z-direction displacement of the virtual plane, again between the conditions without and with a crack. f(i, j) is the force vector in the Z-direction of the virtual plane under the cracked condition. $εz(i,j)$, u(i, j), and f(i, j) are calculated for every node of the virtual plane. f0 is the force vector in the Z-direction of the virtual plane under the crack-free condition. Constant matrices H and G are obtained from $εz(i,j)$, u(i, j), f(i, j), and f0 by Eqs. (12) and (13). Here $β$ in Eq. (1) is set to zero. On the virtual plane, the 50 mm width is divided into 25 subdivisions and the 24 mm thickness into 12 subdivisions. On the observed surface, the 50 mm width is divided into 25 subdivisions and 15 subdivisions are taken in the Z-direction. The material of the flat plate is S45C steel (C45 steel: ISO) with a proof stress of 490 MPa or higher, made by Kobe Steel, Ltd. in Japan. Young's modulus and Poisson's ratio of the plate are set to 206 GPa and 0.3, respectively. The finite element method solver is ansys 19.2 [15].
$H = \begin{bmatrix} \varepsilon_z^{(0,0)}(0,0) & \cdots & \varepsilon_z^{(i,j)}(0,0) & \cdots & \varepsilon_z^{(n,m)}(0,0) \\ \vdots & \ddots & & & \vdots \\ \varepsilon_z^{(0,0)}(k,l) & \cdots & \varepsilon_z^{(i,j)}(k,l) & \cdots & \varepsilon_z^{(n,m)}(k,l) \\ \vdots & & & \ddots & \vdots \\ \varepsilon_z^{(0,0)}(n,p) & \cdots & \varepsilon_z^{(i,j)}(n,p) & \cdots & \varepsilon_z^{(n,m)}(n,p) \end{bmatrix} \cdot \begin{bmatrix} u^{(0,0)}(0,0) & \cdots & u^{(i,j)}(0,0) & \cdots & u^{(n,m)}(0,0) \\ \vdots & \ddots & & & \vdots \\ u^{(0,0)}(k,l) & \cdots & u^{(i,j)}(k,l) & \cdots & u^{(n,m)}(k,l) \\ \vdots & & & \ddots & \vdots \\ u^{(0,0)}(n,m) & \cdots & u^{(i,j)}(n,m) & \cdots & u^{(n,m)}(n,m) \end{bmatrix}^{-1}$
(12)
$G = \begin{bmatrix} f^{(0,0)}(0,0) - f_0(0,0) & \cdots & f^{(i,j)}(0,0) - f_0(0,0) & \cdots & f^{(n,m)}(0,0) - f_0(0,0) \\ \vdots & \ddots & & & \vdots \\ f^{(0,0)}(k,l) - f_0(k,l) & \cdots & f^{(i,j)}(k,l) - f_0(k,l) & \cdots & f^{(n,m)}(k,l) - f_0(k,l) \\ \vdots & & & \ddots & \vdots \\ f^{(0,0)}(n,m) - f_0(n,m) & \cdots & f^{(i,j)}(n,m) - f_0(n,m) & \cdots & f^{(n,m)}(n,m) - f_0(n,m) \end{bmatrix} \cdot \begin{bmatrix} u^{(0,0)}(0,0) & \cdots & u^{(i,j)}(0,0) & \cdots & u^{(n,m)}(0,0) \\ \vdots & \ddots & & & \vdots \\ u^{(0,0)}(k,l) & \cdots & u^{(i,j)}(k,l) & \cdots & u^{(n,m)}(k,l) \\ \vdots & & & \ddots & \vdots \\ u^{(0,0)}(n,m) & \cdots & u^{(i,j)}(n,m) & \cdots & u^{(n,m)}(n,m) \end{bmatrix}^{-1}$
(13)
3.3 Parameters of Regularization Method Developed From Joint Estimation Maximum a Posteriori Method.
In this section, the parameters in Secs. 2.2.4 and 2.2.5 are determined. First, for the latent-variable update in Sec. 2.2.4, the constant that multiplies the normalized displacement distribution is denoted zConst. zConst ranges from 3 to 6, a range in which the crack-estimation accuracy was good in numerical experiments using the finite element method.
The parameter of the likelihood is $σw2$. The parameters of the prior distribution are the initial values of μfl, $σfl2$, μuc, and $σuc2$, together with the variances at $μu=0$ and $μf=0$. The variances $σfl2$ and $σuc2$ and those at $μu=0$ and $μf=0$ are defined relative to μfl and μuc, as shown in the following equations:
$\sigma_{fl}^2 = \mu_{fl} \times 10^{3}$
(14)
$\sigma_{uc}^2 = \mu_{uc} \times 10$
(15)
$\mu_f = 0 \;\Rightarrow\; \sigma_f^2 = \mu_{fl} \times 10^{-4}$
(16)
$\mu_u = 0 \;\Rightarrow\; \sigma_u^2 = \mu_{uc} \times 10^{-2}$
(17)
The value of μfl is the load applied to the flat plate divided by the number of divisions of the virtual plane. Ranges are set for zConst, $σw2$, and μuc, from which the optimum values are selected. Standard deviation σw corresponds to the measurement error of the strain distribution and is searched in the range of 1 μ–100 μ. The initial value of μuc is the crack opening and is searched in the range of 0.1 μm–10 μm. Bayesian optimization is used as the optimization method [16].
4 Strain of Observed Surface Measured by Digital Image Correlation Method
4.1 Measurement by Digital Image Correlation Method.
The strain changes due to the presence or absence of a crack are determined from specimens without and with a crack. Figure 7 shows the geometry of the specimens for which the strain was measured: Fig. 7(a) shows a specimen without a crack, and Fig. 7(b) shows a specimen with a crack. The geometry in Fig. 7 includes the part gripped by the testing machine to apply the load. The specimens are made of quenched and tempered C45 material. The specimen with a crack has a semi-elliptical slit that is 0.25 mm thick, 24 mm wide, and 9.8 mm deep. The semi-elliptical, longitudinal slit was made in the center of the specimen by electrical discharge machining, with its depth (c) direction aligned with the specimen's thickness direction.
Fig. 7
The strain-measuring surface of the specimen was given a random black-and-white pattern to measure the strain by the DIC method. The size of the black pattern is also random, with a maximum size of about 1 mm. The black-and-white random pattern was created using water-based acrylic paint. The surface where the strain is measured corresponds to the observed surface of the inverse problem. Figure 8 shows the setup in which the strain under the tensile load is measured by the DIC method. A 247 kN load was applied in the Z-direction (Fig. 8) by a servohydraulic fatigue-testing machine whose capacity is 500 kN. At a tensile load of 247 kN, the plastic zone around the semi-elliptical slit is small and satisfies the small-scale yielding hypothesis. The images used in the DIC method were taken with a CCD camera (Nikon D7200) that had a zoom lens with a focal length of 18–200 mm (AF-S DX NIKKOR 18-200 mm f/3.5-5.6G ED VR II). The resolution of the images was set to 6000 × 4000 pixels. The images for the strain determination were taken at loads of 0 and 247 kN for all the tests. The strains were calculated from the correlation information between the 0 and 247 kN images and analyzed using the vic-2d DIC software from Correlated Solutions. vic-2d was configured with a subset size of 151 pixels, steps of 25 pixels, and a filter of 51 pixels for the strain determination.
Fig. 8
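DIC software such as vic-2d matches subsets between the reference and deformed images by optimizing a correlation criterion. A minimal zero-normalized cross-correlation (ZNCC) sketch is shown below; this is the general idea, not vic-2d's actual implementation.

```python
import numpy as np

def zncc(ref, defo):
    """Zero-normalized cross-correlation between two equally sized subsets.

    Returns 1.0 for a perfect match and is insensitive to uniform brightness
    and contrast changes, which is why DIC codes commonly use criteria of
    this family when tracking speckle-pattern subsets.
    """
    f = ref.astype(float) - ref.mean()
    g = defo.astype(float) - defo.mean()
    return float((f * g).sum() / np.sqrt((f * f).sum() * (g * g).sum()))

# A toy speckle subset (the paper uses 151-pixel subsets).
subset = np.random.default_rng(1).integers(0, 256, size=(31, 31))
```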
Figure 9 shows the strain distribution in the Z-direction of the observed surface calculated by the DIC method at a load of 247 kN. Figure 9(a) shows the strain distribution of a specimen without a crack, and Fig. 9(b) shows that of one with a crack. The crack-free specimen has a strain distribution without any characteristic features; its mean value was 1005 μ with a standard deviation of 26 μ. The specimen with a crack showed a decrease in strain near the center of the strain distribution, caused by the crack on the opposite side of the observed surface.
Fig. 9
4.2 Strain Changes as Observed Data for Inverse Analysis.
The observed data for the inverse problem is the strain changes obtained by the DIC method from images taken before and after a crack occurred in an electrical device. In this paper, the strain changes due to the crack are simulated by subtracting the strain with a crack from the strain of the specimen without a crack. The observed data, extracted from the strain distribution obtained in Sec. 4.1, denotes the amount of strain changes in the Z-direction with/without cracks. Figure 10 shows the difference between the strains without/with a crack in the Z-direction. The strain distribution in Fig. 10 is 0 in the Z-direction at the lower left and 0 in the X-direction at the left end of the specimen width. The observed data are calculated from the strain distribution in Fig. 10. The change in the strain distribution due to the crack is located in the entire X-direction around 32 mm of the Z-coordinate in Fig. 10. The observed data are the discrete values of the strain distribution shown in Fig. 10 divided into the element sizes shown in Sec. 3.2.
Fig. 10
Figure 11 shows the method used to determine the location of the crack in the Z-direction from the strain changes in Fig. 10. Figure 11(a) shows the method used to determine the Z-coordinate where the strain change is maximum. The discretized strain change was divided by X-coordinate, and the Z-coordinate of the maximum was obtained for each divided strain change. Figure 11(b) shows the frequency distribution of the Z-coordinates with the maximum strain change. The virtual plane is located at the Z-coordinate with the highest frequency of the maximum strain change. The strain-change distribution is divided by the virtual plane into the positive and negative directions of the Z-axis, and the observed data are the strain changes in the positive direction of the Z-axis. Figure 12 shows the strain distribution used as the observed data in the inverse analysis. The strain distribution is missing at the edges of the specimen in the X-direction; there, the strain could not be calculated because too few pattern features were included in the subsets of the DIC method. The range of the observed data in the Z-direction is identical to the observed surface described in Sec. 3.2. The range of the observed data in the X-direction is the range measured by the DIC method. Constant matrix H is recalculated based on the range of the observed data and used in the inverse analysis in Sec. 5.
Fig. 11
Fig. 12
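The Fig. 11 procedure (per-column maximum, then the most frequent Z-index) can be sketched as follows; the strain-change map below is a toy example.

```python
import numpy as np

def locate_crack_row(d_eps):
    """Locate the virtual plane from a strain-change map (rows = Z, cols = X).

    For each X-column, take the Z-index of the maximum strain change
    (Fig. 11(a)), then pick the most frequent index (Fig. 11(b)).
    """
    peak_rows = d_eps.argmax(axis=0)              # one Z-index per X-column
    return int(np.bincount(peak_rows).argmax())   # mode of the peak locations

# Toy strain-change map: the change concentrates on row 2.
d_eps = np.array([[0.0, 0.1, 0.0],
                  [0.2, 0.1, 0.3],
                  [0.9, 0.8, 0.7],
                  [0.1, 0.0, 0.2]])
row = locate_crack_row(d_eps)
```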
5 Estimated Results
5.1 Inverse Analysis by L1-Norm Regularization.
In this section, the inverse problem with the strain changes in Fig. 12 as the observed data is solved by a generic method, L1-norm regularization. Regularization parameter λ was set to $4.33 \times 10^{-7}$ by applying "the one standard error rule" [17] to the k-fold cross-validation results [18] with k = 5. The generic method is described in the Appendix. Figure 13 shows the displacement distribution in the Z-direction of the virtual plane obtained by L1-norm regularization. In Fig. 13, the surface where the specimen crack opens is 0 in the Y-direction and the left end of the specimen width is 0 in the X-direction. The displacement in the Z-direction of the virtual plane is positive at the crack and zero in the crack-free part. Figure 14 shows the crack extracted from Fig. 13. The crack area is defined as the area where the displacement exceeds 10% of the maximum displacement of the virtual plane and is indicated by 1; everything else is indicated by 0. The zero points of X and Y in Fig. 14 are identical to those in Fig. 13.
Fig. 13
Fig. 14
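The 10%-of-maximum crack extraction used for Fig. 14 can be sketched as follows; the 2 mm × 2 mm element size follows Sec. 5.2, while the displacement grid is a toy example.

```python
import numpy as np

def extract_crack(u_grid, frac=0.10, elem_area_mm2=4.0):
    """Binarize virtual-plane displacement into a crack mask and its area.

    A node belongs to the crack if its displacement exceeds `frac` of the
    maximum (10% in the paper); each element is 2 mm x 2 mm, i.e. 4 mm^2.
    """
    mask = (u_grid > frac * u_grid.max()).astype(int)
    return mask, mask.sum() * elem_area_mm2

# Toy displacement grid: three nodes exceed 10% of the maximum.
u = np.array([[0.00, 0.02, 0.00],
              [0.05, 1.00, 0.60],
              [0.00, 0.30, 0.00]])
mask, area = extract_crack(u)
```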
Figure 15 shows the displacement distribution of the ground truth, obtained by applying the same load as in the test to a finite element model with the same crack as the test specimen. The zero points of X and Y in Fig. 15 are identical to those in Fig. 13.
Fig. 15
The displacement distribution in Fig. 13 reflects the sparsity of the virtual plane, since the displacement is estimated separately for the cracks and the ligaments. However, unlike Fig. 15, the estimated displacements are located away from the virtual plane's free edge, and the maximum displacement in the Z-direction differs significantly from that in Fig. 15. The cracks extracted from the displacements, shown in Fig. 14, also differ significantly from the crack in Fig. 15: multiple cracks appear in the virtual plane when they are extracted from the displacement in Fig. 13. The generic method combines constant matrix H obtained by the finite element method with a solution method that introduces sparsity in the Z-direction displacement, but it does not take into account the measurement error included in the strain changes of the observed data, which leads to a result different from the ground truth. The JE-MAP method, shown in the next section, is an inverse analysis method that takes this measurement error into account.
5.2 Inverse Analysis Using Regularization Method Developed From Joint Estimation Maximum a Posteriori Method.
In this section, the inverse problem with the strain changes in Fig. 12 as the observed data is solved by the JE-MAP method. GC used the opencv functions; to employ them in matlab, mexopencv [19] was used. The initial value of latent variable distribution z was set to 1 over the entire virtual plane. Table 1 shows the initial values of zConst, $σw2$, and μuc used in the regularization method developed from the JE-MAP method. The initial values in Table 1 were obtained by Bayesian optimization, which is described in the Appendix. The standard deviation of measurement error σw is close to the standard deviation of the strain distribution measured by DIC, $2.6 \times 10^{-5}$ (Sec. 4.1). The convergence criterion for latent variable z was that fewer than two elements of the latent variable change value in an update. The MSE is calculated from the displacement distributions on the virtual plane for which the latent variables converged, and the estimated result is the displacement distribution of the virtual plane that minimizes the MSE.
Table 1
σw, zConst, and initial value of μu obtained by Bayesian optimization: constants in the table were obtained by Bayesian optimization whose details are given in the Appendix
σw | Initial value of μuc (mm) | zConst
$1.40 \times 10^{-5}$ | $7.27 \times 10^{-4}$ | 5.01
Figure 16 shows the distribution of the displacement in the Z-direction of the virtual plane estimated by the regularization method developed from the JE-MAP method. Figure 17 shows the latent variables that were input to the prior distribution used for Fig. 16. The zero points of X and Y in Figs. 16 and 17 are identical to those in Fig. 13. The maximum of the displacement distribution in Fig. 16 is near that of the ground truth in Fig. 15, at the free edge of the virtual plane. The latent-variable crack in Fig. 17 forms a single region in the virtual plane, as does the ground truth in Fig. 15, and its extent falls mostly within the range of the ground truth. The regularization method developed from the JE-MAP method can thus estimate a crack's location.
Fig. 16
Fig. 17
The area of the largest crack estimated by the regularization method developed from the JE-MAP method is compared with the area estimated by the L1-norm regularization. The areas were determined from the number of cracked elements shown in Figs. 17 and 14; one element in the distribution figures of Figs. 13–17 has a width of 2 mm and a height of 2 mm. The area estimated by the regularization method developed from the JE-MAP method is 120 mm². The area estimated by the L1-norm regularization is 40 mm². The ground-truth area in Fig. 15 is 224 mm². The area estimated by the L1-norm regularization is therefore 18% of the ground truth, whereas the area estimated by the regularization method developed from the JE-MAP method is 54% of the ground truth and thus closer to it. The results estimated by the regularization method developed from the JE-MAP method are closer to the ground truth in terms of the number of cracks, the crack location, and the crack area than those estimated by the L1-norm regularization.
The estimated size of the crack is compared to the ground truth by comparing the crack elements in Fig. 17 with the elements with displacements in Fig. 15. The sizes estimated by the regularization method developed from the JE-MAP method are a maximum crack width of 22 mm and a maximum crack depth of 6 mm, i.e., 77% and 50%, respectively, of the ground truth in Fig. 15. Therefore, the regularization method developed from the JE-MAP method can estimate a crack's size and location from the observed data calculated by the DIC method.
We believe that the following factors contributed to the estimation accuracy of the regularization method developed from the JE-MAP method. First, the prior distribution of the MAP method considered the cross sparsity of displacement and force on the cracked surface through latent variables. Second, the likelihood distribution of the MAP method took into account the measurement error of the observed data. Third, the latent variables were automatically updated by the GC method from the estimation results of the MAP method, and the displacement of the virtual plane was updated again with the updated latent variables. Finally, from the multiple displacement distributions of the virtual plane obtained while updating the latent variables, those at which the latent variables converged were extracted, and the estimation result was chosen as the one that minimizes the MSE between the measured strain and the strain calculated from the extracted displacement and constant matrix H.
The displacement distribution in Fig. 16 shows that the displacement in the crack-free area is about 20% of the maximum displacement in the crack. It is possible that the displacement occurred in the crack-free area due to the difference between the Gaussian distribution assumed in the likelihood distribution and the error distribution in the actual measurement. In the future, we will improve the displacement's estimation accuracy in the crack-free area.
6 Conclusions
This paper studied a method to estimate cracks in invisible locations by combining a regularization method developed from the JE-MAP and a DIC method. The results of this paper are as follows:
• In the inverse problem in this study, the strain on the crack-free surface is set as the observed data and the displacement on the crack-containing plane as the unknown data.
• The regularization method, which is a development of the JE-MAP method, introduces physical constraints on the displacements and the forces at the cracks and ligaments into the MAP method. The physical constraints are a cross-sparse relationship between the displacement and the load at the cracks and ligaments.
• The regularization method, which is an extension of the JE-MAP method, improves the prediction of the crack and ligament boundaries by the grab-cut method. The displacement or load distribution is converted into an image that highlights the crack and ligament boundaries before being input into the grab-cut method.
• The regularization method, which is an extension of the JE-MAP method, estimated the displacement on the crack-containing plane more accurately than L1-norm regularization in the inverse problem where the observed data were strain distributions measured by the DIC method.
• The cracks obtained from the displacement on the crack-containing plane estimated by the regularization method, which is an extension of the JE-MAP method, are more accurate than those predicted by the L1-norm regularization.
• A combination of the DIC method with the regularization method, which is an advanced version of the JE-MAP method, can estimate the size and location of cracks that are not visible from camera images.
In the future, the validity and reproducibility of the method proposed in this paper will be verified on observed data measured by DIC method for strains on surfaces of various geometries. In addition, the validity of the JE-MAP method will be verified for cracks of various geometries, e.g., multiple cracks.
Data Availability Statement
The datasets generated and supporting the findings of this article are obtainable from the corresponding author upon reasonable request.
Nomenclature
DIC =
digital image correlation
f =
discretized Z-direction force vector of the virtual plane
$f0$ =
force vector when there is no crack
G =
constant matrix representing the relationship between f and u
GC =
grab-cut
H =
constant matrix representing the relationship between $εZ$ and u
MAP =
maximum a posteriori
MSE =
mean-square-error
u =
displacement of the virtual plane in the Z-direction
u =
displacement vector of the virtual plane in the Z-direction
$u*$ =
crack surface displacement maximizing the posterior probability shown in Eq. (2)
zConst =
constants to be multiplied by the normalized displacement distribution when updating the latent variables
$β$ =
discretized vector of the measurement errors
$εZ$ =
strain changes in the Z-direction of observed surface
$εZ̃$ =
vector of the measured strain changes in the Z-direction of observed surface
λ =
regularization parameter of L1-norm regularization
$μf$ =
mean vector of the forces on the virtual plane
$μu$ =
mean vector of the displacements on the virtual plane
$μw$ =
mean vector of the measurement errors
$σw2$ =
variance of the measurement error
Σf =
covariance matrix of the forces on the virtual plane
Σu =
covariance matrix of the displacements on the virtual plane
L1-Norm Regularization and Determination of Regularization Parameter λ by Cross-Validation
Displacement u in the virtual plane is sparsely distributed since it is greater than zero at the crack and zero at the noncrack locations. L1-norm regularization [13], which is a method for estimating sparse unknowns, is a generic method in this paper. The solution is a displacement u that satisfies the following equation:
$\arg\min_{u}\left\{\left\|H u-\tilde{\varepsilon}_Z\right\|_2^2+\lambda\left\|u\right\|_1\right\}$
(A1)
λ denotes a regularization parameter. Regularization parameter λ was calculated by k-fold cross-validation with k = 5. Appendix Eq. (A1) was solved using the fast iterative shrinkage-thresholding algorithm (FISTA) [20]. The method for determining regularization parameter λ is as follows. The number of data partitions for cross-validation was set to 5, and the observed data were partitioned using cvpartition, a MATLAB function for creating cross-validation partitions [21].
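For illustration, the core of such a solver can be sketched as follows. This is a minimal NumPy implementation of the FISTA update for the objective in Appendix Eq. (A1), not the code used in the paper; the names `fista_lasso`, `soft_threshold`, and `eps_obs` are introduced here for the sketch.

```python
import numpy as np

def soft_threshold(v, tau):
    """Elementwise soft-thresholding, the proximal operator of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fista_lasso(H, eps_obs, lam, n_iter=500):
    """Minimize ||H u - eps_obs||_2^2 + lam * ||u||_1 (cf. Appendix Eq. (A1))."""
    # Lipschitz constant of the gradient of the smooth term, 2 * ||H||_2^2.
    L = 2.0 * np.linalg.norm(H, 2) ** 2
    u = np.zeros(H.shape[1])
    y, t = u.copy(), 1.0
    for _ in range(n_iter):
        grad = 2.0 * H.T @ (H @ y - eps_obs)          # gradient of the data term
        u_next = soft_threshold(y - grad / L, lam / L)  # proximal gradient step
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = u_next + ((t - 1.0) / t_next) * (u_next - u)  # momentum extrapolation
        u, t = u_next, t_next
    return u
```

With H equal to the identity, the minimizer reduces to elementwise soft-thresholding of the data, which makes the sketch easy to sanity-check.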
λ was searched for in the range of $1 \times 10^{-8}$ to $1 \times 10^{-5}$. One hundred values of λ, logarithmically evenly spaced, were drawn from this range, and cross-validation was performed for each of them. Appendix Fig. 18 shows the mean and standard deviation of the MSE calculated by cross-validation; the vertical axis shows the MSE and the horizontal axis shows λ. In Appendix Fig. 18, the mean and standard deviation of the MSE calculated by cross-validation for each value of λ are shown as circles and error bars. λ was determined to be $4.33 \times 10^{-7}$ by applying "the one standard error rule" to the results in Appendix Fig. 18.
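The grid search and the one-standard-error selection described above can be sketched as follows (illustrative only, not the paper's code; `mse_mean` and `mse_se` stand for the per-λ cross-validation statistics):

```python
import numpy as np

# 100 logarithmically evenly spaced candidate values of lambda,
# mirroring the search range 1e-8 to 1e-5 used above.
lambdas = np.logspace(-8, -5, 100)

def one_standard_error_rule(lambdas, mse_mean, mse_se):
    """Largest (i.e., most strongly regularizing) lambda whose mean CV error
    lies within one standard error of the smallest mean CV error."""
    lambdas = np.asarray(lambdas)
    mse_mean = np.asarray(mse_mean)
    mse_se = np.asarray(mse_se)
    best = np.argmin(mse_mean)                 # index of the lowest mean MSE
    threshold = mse_mean[best] + mse_se[best]  # one-standard-error band
    return lambdas[mse_mean <= threshold].max()
```

Choosing the largest qualifying λ favors the sparser solution among models whose cross-validation errors are statistically indistinguishable.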
[Fig. 18: Mean and standard deviation of the cross-validation MSE versus regularization parameter λ]
Determination of Parameters for Joint Estimation Maximum a Posteriori Method by Bayesian Optimization
σw, zConst, and μuc of the JE-MAP method were determined by Bayesian optimization [22] using the MATLAB function bayesopt. Appendix Eq. (B1) shows the objective function of the Bayesian optimization. $u_{\mathrm{JE\text{-}MAP}}^{*}$ in the objective function is the displacement vector in the Z-direction of the virtual plane calculated by the JE-MAP method with inputs σw, zConst, and μuc.
$\mathrm{MSE}=\frac{1}{\mathrm{Num}}\sum\left(H\,u_{\mathrm{JE\text{-}MAP}}^{*}-\tilde{\varepsilon}_Z\right)^{2}$
(B1)
The displacement vector in the Z-direction of the virtual plane was calculated by the JE-MAP method, and σw, zConst, and μuc were optimized over their search ranges. The following arguments of bayesopt were changed from the default values:
• Acquisition function: "expected-improvement-plus," a modification of expected improvement that allows the search to escape a local minimum of the objective function ("AcquisitionFunctionName" = "expected-improvement-plus").
• Specify deterministic objective function: true (the objective function is specified deterministically).
• Objective function evaluation limit: 60.
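As an illustrative sketch (not the paper's code), the Eq. (B1) objective, i.e., the mean of the squared residual between the strain predicted from the estimated displacement and the measured strain, could be computed as follows; `u_star`, `H_mat`, and `eps_tilde` are hypothetical names for $u_{\mathrm{JE\text{-}MAP}}^{*}$, H, and $\tilde{\varepsilon}_Z$:

```python
import numpy as np

def mse_objective(u_star, H_mat, eps_tilde):
    """Eq. (B1)-style objective: mean squared strain residual."""
    residual = H_mat @ u_star - eps_tilde
    return float(np.mean(residual ** 2))
```

A Bayesian optimizer (bayesopt in the paper) would call this objective once per candidate hyperparameter set, after running the JE-MAP solver to obtain `u_star`.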
References
1. Einav, I., Ewert, U., Herelli, M., Marshall, D., Abd Ibrahim, N., and Shipp, R., 2005, "Non-Destructive Testing for Plant Life Assessment," International Atomic Energy Agency, Vienna, Austria, accessed Feb. 27, 2022, https://www.iaea.org/publications/7117/non-destructive-testing-for-plant-life-assessment
2. Chakhlov, S., 2012, "Mobile Digital Radiography System for Nondestructive Testing of Large Diameter Pipelines," Proceedings of the 18th World Conference on Nondestructive Testing, Durban, South Africa, Apr. 16–20, p. 37.
3. Sutton, M. A., Orteu, J.-J., and Schreier, H. W., 2009, Image Correlation for Shape, Motion and Deformation Measurements, Springer, New York.
4. Apalkov, A., Odintsev, I., and Usov, S., 2020, "Geometrical Identification of Invisible Defects in Structural Elements Basing on Digital Image Correlation Data," IOP Conf. Ser. Mater. Sci. Eng., 709(3), p. 033038. DOI: 10.1088/1757-899X/709/3/033038
5. Dizaji, M., Alipour, M., and Harris, D., 2021, "Subsurface Damage Detection and Structural Health Monitoring Using Digital Image Correlation and Topology Optimization," Eng. Struct., 230, p. 111712. DOI: 10.1016/j.engstruct.2020.111712
6. Dizaji, M. S., Harris, D. K., and Alipour, M., 2022, "Integrating Visual Sensing and Structural Identification Using 3D-Digital Image Correlation and Topology Optimization to Detect and Reconstruct the 3D Geometry of Structural Damage," Struct. Health Monit., 21(6), pp. 2804–2833. DOI: 10.1177/14759217211073505
7. Shafiei Dizaji, M., Alipour, M., and Harris, D. K., 2022, "Image-Based Tomography of Structures to Detect Internal Abnormalities Using Inverse Approach," Exp. Tech., 46(2), pp. 257–272. DOI: 10.1007/s40799-021-00479-9
8. Dizaji, M. S., and Mao, Z., 2022, "Multi-Level Damage Detection Using Octree Partitioning Algorithm," Rotating Machinery, Optical Methods & Scanning LDV Methods, D. Di Maio and J., eds., Vol. 6, Springer International Publishing, Cham, Switzerland, pp. 143–146. DOI: 10.1007/978-3-030-76335-0_14
9. Kenji, A., Norihiko, H., Masao, A., and Daiki, Y., 2021, "Geometrical Identification of Invisible Defects in Structural Elements Basing on Digital Image Correlation Data," The Proceedings of the Computational Mechanics Conference, Sapporo, Hokkaido, Japan, Sept. 21–23, p. 268.
10. Amaya, K., and Taguchi, K., 2020, Spectral, Photon Counting Computed Tomography: Technology and Applications (Chapter 21: Novel Regularization Method with Knowledge of Region Types and Boundaries), CRC Press, Boca Raton, FL, pp. 393–410.
11. Prince, S. J. D., 2012, Computer Vision: Models, Learning, and Inference (Chap. 1: Probability), Cambridge University Press, Cambridge, UK, p. 50.
12. Rother, C., Kolmogorov, V., and Blake, A., 2004, "Grab-Cut: Interactive Foreground Extraction Using Iterated Graph Cuts," ACM Trans. Graph., 23(3), pp. 309–314. DOI: 10.1145/1015706.1015720
13. Hastie, T., Tibshirani, R., and Wainwright, M., 2015, Statistical Learning With Sparsity: The Lasso and Generalizations (Chapter 2: The Lasso for Linear Models), 1st ed., Chapman and Hall/CRC, Boca Raton, FL, p. 9.
14. Carsten, R., K., and Andrew, B., 2018, OpenCV: Interactive Foreground Extraction Using GrabCut Algorithm, OpenCV team, Natick, MA, accessed Feb. 27, 2022, https://docs.opencv.org/3.4/d8/d83/tutorial_py_grabcut.html
15. Ansys, 2018, Ansys Mechanical, San Jose, CA.
16. Shahriari, B., Swersky, K., Wang, Z., R. P., and de Freitas, N., 2016, "Taking the Human Out of the Loop: A Review of Bayesian Optimization," Proc. IEEE, 104(1), pp. 148–175. DOI: 10.1109/JPROC.2015.2494218
17. Chen, Y., and Yang, Y., 2021, "The One Standard Error Rule for Model Selection: Does It Work?," Stats, 4(4), pp. 868–892. DOI: 10.3390/stats4040051
18. P., Tang, L., and Liu, H., 2009, "Cross-Validation," Encyclopedia of Database Systems, Vol. 5, Springer, Boston, MA, pp. 532–538.
19. Yamaguchi, K., 2018, Collection and a Development Kit of Matlab Mex Functions for OpenCV Library, Version 3.4, MATLAB, San Francisco, CA, accessed Feb. 27, 2022, https://github.com/kyamagu/mexopencv
20. Beck, A., and Teboulle, M., 2009, "A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems," SIAM J. Imaging Sci., 2(1), pp. 183–202. DOI: 10.1137/080716542
21. MathWorks, 2008, "Partition Data for Cross-Validation - MATLAB - MathWorks," MathWorks, Natick, MA, accessed Feb. 27, 2022, https://jp.mathworks.com/help/stats/cvpartition.html?lang=en
22. MathWorks, 2016, "Select Optimal Machine Learning Hyperparameters Using Bayesian Optimization - MATLAB Bayesopt - MathWorks," MathWorks, Natick, MA, accessed Feb. 27, 2022, https://jp.mathworks.com/help/stats/bayesopt.html?lang=en
## 16/07/2010, Friday, 10:00–11:00
Yasuyoshi Yonezawa, Univ. of the Algarve and IST
Quantum $(\mathfrak{sl}_n, \wedge V_n)$ link invariant and matrix factorizations.
M. Khovanov and L. Rozansky constructed a homology for a link diagram whose Euler characteristic is the quantum link invariant associated to the quantum group $U_q(\mathfrak{sl}_n)$ and its vector representation $V_n$, by using matrix factorizations. In my thesis, I study a generalization of the Khovanov-Rozansky homology for the quantum link invariant associated to $U_q(\mathfrak{sl}_n)$ and its fundamental representations $\wedge V_n$. In this talk, I will define a new link invariant derived from this generalization of the Khovanov-Rozansky homology.
Support: FCT, CAMGSD, New Geometry and Topology
# Beamer: Strange spacing in references with biblatex apa-style
I stumbled upon these two spacing errors (marked with red numbers):
I guess the first space is not really an error, but simply English spacing (\nonfrenchspacing). Can it be turned off (without changing the language)? \frenchspacing does not seem to have an effect on the references.
But I don't understand where the second spacing error is coming from. Has anyone got a clue?
Here is the code:
\documentclass{beamer}
\usepackage[american]{babel}
\usepackage[utf8]{inputenc}
\usepackage{csquotes}
\usepackage[style=apa, backend=biber,doi,url]{biblatex}
\DeclareLanguageMapping{american}{american-apa}
\begin{filecontents}{literature.bib}
@article {Thomas1975,
author = {Thomas, Ewart and Weaver, Wanda},
title = {Cognitive processing and time perception},
journal = {Attention, Perception, \& Psychophysics},
issn = {1943-3921},
pages = {363--367},
volume = {17},
doi = {10.3758/BF03199347},
year = {1975}
}
\end{filecontents}
\begin{document}
\frame{\textcite{Thomas1975}}
\frame{\printbibliography}
\end{document}
Without beamer (\documentclass{article}) everything looks fine:
There is no foobar key in your literature.bib – Seamus May 24 '12 at 14:28
The second space is definitely a spurious space. But I'm not sure about the first. That might not be an error. (I can't select only half the space…) – Seamus May 24 '12 at 14:33
Does the second space appear without beamer? – PLK May 24 '12 at 14:44
Without beamer everything is fine. – deboerk May 24 '12 at 14:56
@JosephWright The spurious space after the title goes away if you remove \newblock\usebeamercolor[fg]{bibliography entry note} in \apptocmd{\abx@macro@title} from beamerbaselocalstructure.sty. The standard styles don't appear to have this problem, though. – Audrey May 24 '12 at 15:17
The spacing and punctuation issues are associated with patches beamer applies to biblatex bibliography macros for changing colours within bibliography items. These can be found in beamerbaselocalstructure.sty.
Extra whitespace in the first problem is associated with \usebeamercolor. This can be resolved by issuing biblatex's \unspace.
To solve the second problem we can set punctuation before changing the colour. This will suppress any spurious punctuation generated by \newunit or \setunit in the existing drivers and bibliography macros.
Here are some revised patches demonstrating these ideas. A new patch for the labeltitle bibliography macro handles the case where author-year styles use label or labeltitle as a fallback for labelname.
\AtBeginDocument{%
{\apptocmd{\blx@env@bibliography}
{\let\makelabel\beamer@biblabeltemplate}{}{}
\apptocmd{\abx@macro@begentry}
{\let\bbx@tempa\@empty%
\usebeamercolor[fg]{bibliography entry author}}{}{}
\pretocmd{\abx@macro@labeltitle}
{\ifboolexpr{ test {\ifcsundef{abx@field@label}}
and test {\ifcsundef{abx@field@labeltitle}} }{}{\let\bbx@tempa\labelnamepunct}}{}{}
\pretocmd{\abx@macro@title}
{\ifcsundef{abx@name@labelname}{}{\let\bbx@tempa\labelnamepunct}%
\bbx@tempa\newblock\unspace\usebeamercolor[fg]{bibliography entry title}}{}{}
\apptocmd{\abx@macro@title}
{\ifcsundef{abx@field@title}{}{\newunitpunct}%
\newblock\unspace\usebeamercolor[fg]{bibliography entry note}}{}{}}
{}}
Instead of editing the beamer style file, you can modify the existing patches just after \begin{document}.
...
\begin{document}
\makeatletter
\pretocmd{\abx@macro@begentry}{\let\bbx@tempa\@empty}{}{}
\pretocmd{\abx@macro@labeltitle}
{\ifboolexpr{ test {\ifcsundef{abx@field@label}}
and test {\ifcsundef{abx@field@labeltitle}} }{}{\let\bbx@tempa\labelnamepunct}}{}{}
\patchcmd{\abx@macro@title}
{\ifcsundef{abx@name@labelname}{}{\blx@unitpunct\blx@postpunct}}
{\ifcsundef{abx@name@labelname}{}{\let\bbx@tempa\labelnamepunct}}{}{}
\patchcmd{\abx@macro@title}
{\newblock\usebeamercolor[fg]{bibliography entry title}}
{\bbx@tempa\newblock\unspace\usebeamercolor[fg]{bibliography entry title}}{}{}
\patchcmd{\abx@macro@title}
{\ifcsundef{abx@field@title}{}{\blx@unitpunct\blx@postpunct}}
{\ifcsundef{abx@field@title}{}{\newunitpunct}}{}{}
\patchcmd{\abx@macro@title}
{\newblock\usebeamercolor[fg]{bibliography entry note}}
{\newblock\unspace\usebeamercolor[fg]{bibliography entry note}}{}{}
\makeatother
\nocite{Thomas1975,ctan,cms,companion,britannica,kant:ku}
\frame[allowframebreaks]{\printbibliography}
\end{document}
Here's your entry printed with some different bibliography margins.
The patches appear to give the desired effect for standard biblatex styles, but thorough testing is needed. The solution doesn't resolve issues for a few (hopefully edge) cases in biblatex-apa. These include entries that don't have any data to form a label (i.e. labelname, label, labeltitle and labelyear all missing) and biblatex-apa's overriding of useeditor and friends per-entry option settings (e.g. britannica in the above document).
Unresolved issues are probably worse for some other contributed styles. Surely beamer can't account for all of them. Perhaps the best approach is to have style authors tailor the patches as needed.
I'll take a look at this. Note that \usebeamercolor does not add any space per se: it's the whatsit that is the issue. – Joseph Wright May 25 '12 at 9:29
@Audrey: I tried to apply your patch, but I couldn't make it work. I looked at the related question you linked and removed the first two lines with their corresponding curly brackets. Then I put the code right behind \begin{document} between \makeatletter and \makeatother. The code caused no error, but made no change either. Could you perhaps give an example of how to apply your patch on my example given above? Thank you! – deboerk May 25 '12 at 9:31
@JosephWright OK. I corrected the sentence, but feel free to edit it further to clarify. – Audrey May 25 '12 at 13:46
@deboerk Inside the document, you have to work with the edits made from the existing patches. I added an example. – Audrey May 25 '12 at 13:47
The problem here is one that I suspected could occur: adding beamer colours to bibliographies is something of a hack, and is more risky with biblatex than with traditional BibTeX styles.
When adding to biblatex styles, we have to target the drivers 'during' the formatting: with traditional BibTeX, it's essentially done 'after' the formatting. That's an issue because the formatting produced by biblatex depends on the output, but the colour-control inserts affect that. For example, the standard biblatex set up is
\newbibmacro*{title}{%
\ifboolexpr{
test {\iffieldundef{title}}
and
test {\iffieldundef{subtitle}}
}
{}
{\printtext[title]{%
\printfield[titlecase]{title}%
\setunit{\subtitlepunct}%
\printfield[titlecase]{subtitle}}%
\newunit}%
whereas for the apa style it's
\renewbibmacro*{title}{%
\ifthenelse{\iffieldundef{title}\AND\iffieldundef{subtitle}}
{}
{\iffieldundef{origtitle}
{\printtext[title]{%
\printfield[apacase]{title}%
\setunit{\subtitlepunct}%
\printfield[apacase]{subtitle}}}%
{\printfield{origtitle}%
\printtext[brackets]{%
\printfield[apacase]{title}%
\setunit{\subtitlepunct}%
\printfield[apacase]{subtitle}}}
\iffieldequalstr{entrytype}{book}%
{}%
\ifthenelse{%
\ifnameundef{author}\AND%
\(\ifnameundef{editor}\AND\NOT\boolean{bbx:editorinauthpos}\)\AND%
\ifnameundef{namea}\AND%
\ifnameundef{nameb}}
{\newunit\newblock
\usebibmacro{labelyear+extrayear}}
{}}}
The important line here is \setunit{\addspace}: it will insert a space 'after' the current material. Unfortunately, the colour code is seen as 'output' by biblatex (there is a whatsit in the output stream, which prevents biblatex 'seeing' what is going on).
Now, the question is what to do about this. I'm not sure there is a general fix, as the appropriate place to insert the colour code depends on the biblatex driver in use. So perhaps the best thing that can be done is to offer an option not to add in any colours.
I would appreciate an option to turn off colors for the references. At the moment I always have to adjust the beamer theme and color everything black. – deboerk May 25 '12 at 9:20
How do you write y+2= -3/4(x+1) in standard form?
Apr 22, 2018
$3 x + 4 y = - 11$
Explanation:
Given:
$y + 2 = - \frac{3}{4} \left(x + 1\right)$
The standard form for a linear equation is:
$A x + B y = C$, where $A$, $B$, and $C$ are integers and $A$ is non-negative.
To convert $y + 2 = - \frac{3}{4} \left(x + 1\right)$ into standard form, first distribute the slope $- \frac{3}{4}$ across the parentheses.
$y + 2 = - \frac{3 x}{4} - \frac{3}{4}$
Subtract $2$ from both sides.
$y = - \frac{3 x}{4} - \frac{3}{4} - 2$
Multiply both sides by $4$.
$4 \times y = 4 \times \left(- \frac{3 x}{4}\right) - 4 \times \frac{3}{4} - 4 \times 2$
Simplify.
$4 y = - 3 x - 3 - 8$
$4 y = - 3 x - 11$
Add $3 x$ to both sides.
$3 x + 4 y = - 11$
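The same conversion can be checked mechanically. The sketch below (illustrative, using Python's `fractions` module) clears the slope's denominator and normalizes the signs exactly as in the steps above:

```python
from fractions import Fraction
from math import gcd

# Point-slope data from the problem: y - y1 = m * (x - x1)
m = Fraction(-3, 4)   # slope
x1, y1 = -1, -2       # the point (-1, -2)

# Rearranged: -m*x + y = y1 - m*x1. Multiply through by the slope's
# denominator to obtain integer coefficients A, B, C.
A = -m.numerator                         # 3
B = m.denominator                        # 4
C = int((y1 - m * x1) * m.denominator)   # -11

# Normalize: divide out any common factor and make A non-negative.
g = gcd(gcd(abs(A), abs(B)), abs(C)) or 1
A, B, C = A // g, B // g, C // g
if A < 0:
    A, B, C = -A, -B, -C

print(f"{A}x + {B}y = {C}")
```

Running this reproduces the answer $3x + 4y = -11$.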
# Statistics
Statistics is a branch of mathematics dealing with data collection, organization, analysis, interpretation and presentation.[1][2] In applying statistics to, for example, a scientific, industrial, or social problem, it is conventional to begin with a statistical population or a statistical model process to be studied. Populations can be diverse topics such as "all people living in a country" or "every atom composing a crystal". Statistics deals with every aspect of data, including the planning of data collection in terms of the design of surveys and experiments.[1] See glossary of probability and statistics.
When census data cannot be collected, statisticians collect data by developing specific experiment designs and survey samples. Representative sampling assures that inferences and conclusions can reasonably extend from the sample to the population as a whole. An experimental study involves taking measurements of the system under study, manipulating the system, and then taking additional measurements using the same procedure to determine if the manipulation has modified the values of the measurements. In contrast, an observational study does not involve experimental manipulation.
Two main statistical methods are used in data analysis: descriptive statistics, which summarize data from a sample using indexes such as the mean or standard deviation, and inferential statistics, which draw conclusions from data that are subject to random variation (e.g., observational errors, sampling variation).[3] Descriptive statistics are most often concerned with two sets of properties of a distribution (sample or population): central tendency (or location) seeks to characterize the distribution's central or typical value, while dispersion (or variability) characterizes the extent to which members of the distribution depart from its center and each other. Inferences on mathematical statistics are made under the framework of probability theory, which deals with the analysis of random phenomena.
A standard statistical procedure involves the test of the relationship between two statistical data sets, or a data set and synthetic data drawn from an idealized model. A hypothesis is proposed for the statistical relationship between the two data sets, and this is compared as an alternative to an idealized null hypothesis of no relationship between two data sets. Rejecting or disproving the null hypothesis is done using statistical tests that quantify the sense in which the null can be proven false, given the data that are used in the test. Working from a null hypothesis, two basic forms of error are recognized: Type I errors (null hypothesis is falsely rejected giving a "false positive") and Type II errors (null hypothesis fails to be rejected and an actual difference between populations is missed giving a "false negative").[4] Multiple problems have come to be associated with this framework: ranging from obtaining a sufficient sample size to specifying an adequate null hypothesis.
Measurement processes that generate statistical data are also subject to error. Many of these errors are classified as random (noise) or systematic (bias), but other types of errors (e.g., blunder, such as when an analyst reports incorrect units) can also be important. The presence of missing data or censoring may result in biased estimates and specific techniques have been developed to address these problems.
Statistics can be said to have begun in ancient civilization, going back at least to the 5th century BC, but it was not until the 18th century that it started to draw more heavily from calculus and probability theory. In more recent years statistics has relied more on statistical software to produce tests such as descriptive analysis.[5]
More probability density is found as one gets closer to the expected (mean) value in a normal distribution. Statistics used in standardized testing assessment are shown. The scales include standard deviations, cumulative percentages, Z-scores, and T-scores.
Scatter plots are used in descriptive statistics to show the observed relationships between different variables.
## Scope
Some definitions are:
• Merriam-Webster dictionary defines statistics as "a branch of mathematics dealing with the collection, analysis, interpretation, and presentation of masses of numerical data."[6]
• Statistician Arthur Lyon Bowley defines statistics as "Numerical statements of facts in any department of inquiry placed in relation to each other."[7]
Statistics is a mathematical body of science that pertains to the collection, analysis, interpretation or explanation, and presentation of data,[8] or as a branch of mathematics.[9] Some consider statistics to be a distinct mathematical science rather than a branch of mathematics. While many scientific investigations make use of data, statistics is concerned with the use of data in the context of uncertainty and decision making in the face of uncertainty.[10][11]
### Mathematical statistics
Mathematical statistics is the application of mathematics to statistics. Mathematical techniques used for this include mathematical analysis, linear algebra, stochastic analysis, differential equations, and measure-theoretic probability theory.[12][13]
## Overview
In applying statistics to a problem, it is common practice to start with a population or process to be studied. Populations can be diverse topics such as "all people living in a country" or "every atom composing a crystal".
Ideally, statisticians compile data about the entire population (an operation called census). This may be organized by governmental statistical institutes. Descriptive statistics can be used to summarize the population data. Numerical descriptors include mean and standard deviation for continuous data types (like income), while frequency and percentage are more useful in terms of describing categorical data (like race).
When a census is not feasible, a chosen subset of the population called a sample is studied. Once a sample that is representative of the population is determined, data is collected for the sample members in an observational or experimental setting. Again, descriptive statistics can be used to summarize the sample data. However, the drawing of the sample has been subject to an element of randomness, hence the established numerical descriptors from the sample are also due to uncertainty. To still draw meaningful conclusions about the entire population, inferential statistics is needed. It uses patterns in the sample data to draw inferences about the population represented, accounting for randomness. These inferences may take the form of: answering yes/no questions about the data (hypothesis testing), estimating numerical characteristics of the data (estimation), describing associations within the data (correlation) and modeling relationships within the data (for example, using regression analysis). Inference can extend to forecasting, prediction and estimation of unobserved values either in or associated with the population being studied; it can include extrapolation and interpolation of time series or spatial data, and can also include data mining.
## Data collection
### Sampling
When full census data cannot be collected, statisticians collect sample data by developing specific experiment designs and survey samples. Statistics itself also provides tools for prediction and forecasting through statistical models. The idea of making inferences based on sampled data began around the mid-1600s in connection with estimating populations and developing precursors of life insurance.[14]
To use a sample as a guide to an entire population, it is important that it truly represents the overall population. Representative sampling assures that inferences and conclusions can safely extend from the sample to the population as a whole. A major problem lies in determining the extent that the sample chosen is actually representative. Statistics offers methods to estimate and correct for any bias within the sample and data collection procedures. There are also methods of experimental design for experiments that can lessen these issues at the outset of a study, strengthening its capability to discern truths about the population.
Sampling theory is part of the mathematical discipline of probability theory. Probability is used in mathematical statistics to study the sampling distributions of sample statistics and, more generally, the properties of statistical procedures. The use of any statistical method is valid when the system or population under consideration satisfies the assumptions of the method. The difference in point of view between classic probability theory and sampling theory is, roughly, that probability theory starts from the given parameters of a total population to deduce probabilities that pertain to samples. Statistical inference, however, moves in the opposite direction—inductively inferring from samples to the parameters of a larger or total population.
### Experimental and observational studies
A common goal for a statistical research project is to investigate causality, and in particular to draw a conclusion on the effect of changes in the values of predictors or independent variables on dependent variables. There are two major types of causal statistical studies: experimental studies and observational studies. In both types of studies, the effect of differences of an independent variable (or variables) on the behavior of the dependent variable are observed. The difference between the two types lies in how the study is actually conducted. Each can be very effective. An experimental study involves taking measurements of the system under study, manipulating the system, and then taking additional measurements using the same procedure to determine if the manipulation has modified the values of the measurements. In contrast, an observational study does not involve experimental manipulation. Instead, data are gathered and correlations between predictors and response are investigated. While the tools of data analysis work best on data from randomized studies, they are also applied to other kinds of data—like natural experiments and observational studies[15]—for which a statistician would use a modified, more structured estimation method (e.g., Difference in differences estimation and instrumental variables, among many others) that produce consistent estimators.
#### Experiments
The basic steps of a statistical experiment are:
1. Planning the research, including finding the number of replicates of the study, using the following information: preliminary estimates regarding the size of treatment effects, alternative hypotheses, and the estimated experimental variability. Consideration of the selection of experimental subjects and the ethics of research is necessary. Statisticians recommend that experiments compare (at least) one new treatment with a standard treatment or control, to allow an unbiased estimate of the difference in treatment effects.
2. Design of experiments, using blocking to reduce the influence of confounding variables, and randomized assignment of treatments to subjects to allow unbiased estimates of treatment effects and experimental error. At this stage, the experimenters and statisticians write the experimental protocol that will guide the performance of the experiment and which specifies the primary analysis of the experimental data.
3. Performing the experiment following the experimental protocol and analyzing the data following the experimental protocol.
4. Further examining the data set in secondary analyses, to suggest new hypotheses for future study.
5. Documenting and presenting the results of the study.
Experiments on human behavior have special concerns. The famous Hawthorne study examined changes to the working environment at the Hawthorne plant of the Western Electric Company. The researchers were interested in determining whether increased illumination would increase the productivity of the assembly line workers. The researchers first measured the productivity in the plant, then modified the illumination in an area of the plant and checked if the changes in illumination affected productivity. It turned out that productivity indeed improved (under the experimental conditions). However, the study is heavily criticized today for errors in experimental procedures, specifically for the lack of a control group and blindness. The Hawthorne effect refers to finding that an outcome (in this case, worker productivity) changed due to observation itself. Those in the Hawthorne study became more productive not because the lighting was changed but because they were being observed.[16]
#### Observational study
An example of an observational study is one that explores the association between smoking and lung cancer. This type of study typically uses a survey to collect observations about the area of interest and then performs statistical analysis. In this case, the researchers would collect observations of both smokers and non-smokers, perhaps through a cohort study, and then look for the number of cases of lung cancer in each group.[17] A case-control study is another type of observational study in which people with and without the outcome of interest (e.g. lung cancer) are invited to participate and their exposure histories are collected.
## Types of data
Various attempts have been made to produce a taxonomy of levels of measurement. The psychophysicist Stanley Smith Stevens defined nominal, ordinal, interval, and ratio scales. Nominal measurements do not have meaningful rank order among values, and permit any one-to-one (injective) transformation. Ordinal measurements have imprecise differences between consecutive values, but have a meaningful order to those values, and permit any order-preserving transformation. Interval measurements have meaningful distances between measurements defined, but the zero value is arbitrary (as in the case with longitude and temperature measurements in Celsius or Fahrenheit), and permit any linear transformation. Ratio measurements have both a meaningful zero value and the distances between different measurements defined, and permit any rescaling transformation.
Because variables conforming only to nominal or ordinal measurements cannot be reasonably measured numerically, sometimes they are grouped together as categorical variables, whereas ratio and interval measurements are grouped together as quantitative variables, which can be either discrete or continuous, due to their numerical nature. Such distinctions can often be loosely correlated with data type in computer science, in that dichotomous categorical variables may be represented with the Boolean data type, polytomous categorical variables with arbitrarily assigned integers in the integral data type, and continuous variables with the real data type involving floating point computation. But the mapping of computer science data types to statistical data types depends on which categorization of the latter is being implemented.
Other categorizations have been proposed. For example, Mosteller and Tukey (1977)[18] distinguished grades, ranks, counted fractions, counts, amounts, and balances. Nelder (1990)[19] described continuous counts, continuous ratios, count ratios, and categorical modes of data. See also Chrisman (1998),[20] van den Berg (1991).[21]
The issue of whether or not it is appropriate to apply different kinds of statistical methods to data obtained from different kinds of measurement procedures is complicated by issues concerning the transformation of variables and the precise interpretation of research questions. "The relationship between the data and what they describe merely reflects the fact that certain kinds of statistical statements may have truth values which are not invariant under some transformations. Whether or not a transformation is sensible to contemplate depends on the question one is trying to answer" (Hand, 2004, p. 82).[22]
## Terminology and theory of inferential statistics
### Statistics, estimators and pivotal quantities
Consider independent identically distributed (IID) random variables with a given probability distribution: standard statistical inference and estimation theory defines a random sample as the random vector given by the column vector of these IID variables.[23] The population being examined is described by a probability distribution that may have unknown parameters.
A statistic is a random variable that is a function of the random sample, but not a function of unknown parameters. The probability distribution of the statistic, though, may have unknown parameters.
Consider now a function of the unknown parameter: an estimator is a statistic used to estimate such function. Commonly used estimators include sample mean, unbiased sample variance and sample covariance.
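The sample mean and unbiased sample variance mentioned above can be sketched directly. This is a minimal illustration with assumed toy parameters (a normal population with mean 10 and standard deviation 2); it is not tied to any particular library.

```python
import random

# Hypothetical illustration: estimate the mean and variance of an
# unknown population from an IID sample (assumed N(10, 2^2) population).
random.seed(0)
sample = [random.gauss(10.0, 2.0) for _ in range(1000)]  # IID draws

n = len(sample)
sample_mean = sum(sample) / n  # estimator of the population mean
# Unbiased sample variance uses the n - 1 (Bessel) correction.
sample_var = sum((x - sample_mean) ** 2 for x in sample) / (n - 1)

print(round(sample_mean, 2), round(sample_var, 2))  # near 10 and 4
```

Dividing by n - 1 rather than n is what makes the variance estimator unbiased: its expected value equals the true population variance.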
A random variable that is a function of the random sample and of the unknown parameter, but whose probability distribution does not depend on the unknown parameter is called a pivotal quantity or pivot. Widely used pivots include the z-score, the chi square statistic and Student's t-value.
Between two estimators of a given parameter, the one with lower mean squared error is said to be more efficient. Furthermore, an estimator is said to be unbiased if its expected value is equal to the true value of the unknown parameter being estimated, and asymptotically unbiased if its expected value converges in the limit to the true value of that parameter.

Other desirable properties for estimators include: UMVUE estimators, which have the lowest variance for all possible values of the parameter to be estimated (this is usually an easier property to verify than efficiency), and consistent estimators, which converge in probability to the true value of the parameter.

This still leaves the question of how to obtain estimators in a given situation and carry out the computation. Several methods have been proposed: the method of moments, the maximum likelihood method, the least squares method and the more recent method of estimating equations.
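As a small sketch of one of these methods, the method of moments equates population moments to sample moments. For an exponential distribution (an assumed example, with a hypothetical true rate of 2.0), the first moment is E[X] = 1/λ, so solving for λ gives the estimator below.

```python
import random

# Method-of-moments sketch (assumed example): for an exponential
# distribution E[X] = 1/lambda, so equating the first population moment
# to the sample mean gives lambda_hat = 1 / sample_mean.
random.seed(6)
true_rate = 2.0
sample = [random.expovariate(true_rate) for _ in range(5000)]

lambda_hat = 1.0 / (sum(sample) / len(sample))
print(round(lambda_hat, 2))  # close to the true rate of 2.0
```

For the exponential distribution this happens to coincide with the maximum likelihood estimator, though in general the two methods give different answers.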
### Null hypothesis and alternative hypothesis
Interpretation of statistical information can often involve the development of a null hypothesis which is usually (but not necessarily) that no relationship exists among variables or that no change occurred over time.[24][25]
A useful illustration for a novice is the predicament of a criminal trial. The null hypothesis, H0, asserts that the defendant is innocent, whereas the alternative hypothesis, H1, asserts that the defendant is guilty. The indictment comes because of suspicion of guilt. The H0 (status quo) stands in opposition to H1 and is maintained unless H1 is supported by evidence "beyond a reasonable doubt". However, "failure to reject H0" in this case does not imply innocence, but merely that the evidence was insufficient to convict. So the jury does not necessarily accept H0 but fails to reject H0. While one cannot "prove" a null hypothesis, one can test how close it is to being true with a power test, which tests for type II errors.
What statisticians call an alternative hypothesis is simply a hypothesis that contradicts the null hypothesis.
### Error
Working from a null hypothesis, two basic forms of error are recognized:
• Type I errors where the null hypothesis is falsely rejected giving a "false positive".
• Type II errors where the null hypothesis fails to be rejected and an actual difference between populations is missed giving a "false negative".
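Both error rates can be estimated by simulation. The sketch below uses an assumed setup: a two-sided z-test of H0: mean = 0 at the 5% level, with a hypothetical true alternative mean of 0.5 and known σ = 1.

```python
import random

# Sketch (assumed setup): test H0: mean = 0 against H1: mean = 0.5
# with a z-test at the 5% level, estimating both error rates by simulation.
random.seed(1)
N, n, crit = 2000, 25, 1.96  # trials, sample size, two-sided 5% cutoff

def z_stat(mu):
    xs = [random.gauss(mu, 1.0) for _ in range(n)]
    mean = sum(xs) / n
    return mean / (1.0 / n ** 0.5)  # known sigma = 1

type_i = sum(abs(z_stat(0.0)) > crit for _ in range(N)) / N    # H0 true, rejected
type_ii = sum(abs(z_stat(0.5)) <= crit for _ in range(N)) / N  # H1 true, not rejected

print(type_i, type_ii)  # type_i near 0.05; power of the test = 1 - type_ii
```

The type I rate hovers near the chosen significance level by construction, while the type II rate depends on the sample size and the size of the true effect.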
Standard deviation refers to the extent to which individual observations in a sample differ from a central value, such as the sample or population mean, while standard error refers to an estimate of the difference between the sample mean and the population mean.

A statistical error is the amount by which an observation differs from its expected value; a residual is the amount by which an observation differs from the value the estimator of the expected value assumes on a given sample (also called the prediction).

Mean squared error is used to obtain efficient estimators, a widely used class of estimators. Root mean square error is simply the square root of mean squared error.
A least squares fit: in red the points to be fitted, in blue the fitted line.
Many statistical methods seek to minimize the residual sum of squares, and these are called "methods of least squares" in contrast to least absolute deviations. The latter gives equal weight to small and big errors, while the former gives more weight to large errors. Residual sum of squares is also differentiable, which provides a handy property for doing regression. Least squares applied to linear regression is called the ordinary least squares method and least squares applied to nonlinear regression is called non-linear least squares. Also, in a linear regression model the non-deterministic part of the model is called the error term, disturbance or more simply noise. Both linear regression and non-linear regression are addressed in polynomial least squares, which also describes the variance in a prediction of the dependent variable (y-axis) as a function of the independent variable (x-axis) and the deviations (errors, noise, disturbances) from the estimated (fitted) curve.
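For a straight line y = a + b·x, the least squares solution has a closed form (the normal equations). A minimal sketch with assumed toy data:

```python
# Minimal ordinary-least-squares sketch for a line y = a + b*x,
# using the closed-form normal equations (toy data, assumed).
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
# Slope: covariance of x and y divided by variance of x.
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx

# Residual sum of squares -- the quantity least squares minimizes.
rss = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
print(round(a, 3), round(b, 3), round(rss, 4))
```

Any other choice of a and b would yield a larger residual sum of squares for this data.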
Measurement processes that generate statistical data are also subject to error. Many of these errors are classified as random (noise) or systematic (bias), but other types of errors (e.g., blunder, such as when an analyst reports incorrect units) can also be important. The presence of missing data or censoring may result in biased estimates and specific techniques have been developed to address these problems.[26]
### Interval estimation
Confidence intervals: the red line is true value for the mean in this example, the blue lines are random confidence intervals for 100 realizations.
Most studies only sample part of a population, so results don't fully represent the whole population. Any estimates obtained from the sample only approximate the population value. Confidence intervals allow statisticians to express how closely the sample estimate matches the true value in the whole population. Often they are expressed as 95% confidence intervals. Formally, a 95% confidence interval for a value is a range where, if the sampling and analysis were repeated under the same conditions (yielding a different dataset), the interval would include the true (population) value in 95% of all possible cases. This does not imply that the probability that the true value is in the confidence interval is 95%. From the frequentist perspective, such a claim does not even make sense, as the true value is not a random variable. Either the true value is or is not within the given interval. However, it is true that, before any data are sampled and given a plan for how to construct the confidence interval, the probability is 95% that the yet-to-be-calculated interval will cover the true value: at this point, the limits of the interval are yet-to-be-observed random variables. One approach that does yield an interval that can be interpreted as having a given probability of containing the true value is to use a credible interval from Bayesian statistics: this approach depends on a different way of interpreting what is meant by "probability", that is as a Bayesian probability.
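The repeated-sampling interpretation above can be checked empirically. The sketch below (assumed values: a known population mean of 5, known σ = 1, and a z-based interval) constructs many 95% intervals and counts how often they cover the true mean.

```python
import random

# Coverage sketch: build many 95% confidence intervals for a known
# population mean and count how often they cover it (values assumed).
random.seed(2)
true_mean, sigma, n, trials = 5.0, 1.0, 30, 2000
half_width = 1.96 * sigma / n ** 0.5  # known-sigma z-interval

covered = 0
for _ in range(trials):
    xs = [random.gauss(true_mean, sigma) for _ in range(n)]
    m = sum(xs) / n
    if m - half_width <= true_mean <= m + half_width:
        covered += 1

print(covered / trials)  # close to the nominal 0.95
```

Each individual interval either covers the true mean or it does not; the 95% figure describes the long-run proportion over repeated sampling.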
In principle confidence intervals can be symmetrical or asymmetrical. An interval can be asymmetrical because it works as a lower or upper bound for a parameter (left-sided or right-sided interval), but it can also be asymmetrical because the two-sided interval is built violating symmetry around the estimate. Sometimes the bounds for a confidence interval are reached asymptotically and these are used to approximate the true bounds.
### Significance
Statistics rarely give a simple yes/no answer to the question under analysis. Interpretation often comes down to the level of statistical significance applied to the numbers, and often refers to the probability of obtaining a result at least as extreme as the one observed, assuming the null hypothesis is true (the p-value).
In this graph the black line is probability distribution for the test statistic, the critical region is the set of values to the right of the observed data point (observed value of the test statistic) and the p-value is represented by the green area.
The standard approach[23] is to test a null hypothesis against an alternative hypothesis. A critical region is the set of values of the estimator that leads to refuting the null hypothesis. The probability of type I error is therefore the probability that the estimator belongs to the critical region given that the null hypothesis is true (statistical significance), and the probability of type II error is the probability that the estimator does not belong to the critical region given that the alternative hypothesis is true. The statistical power of a test is the probability that it correctly rejects the null hypothesis when the null hypothesis is false.
Referring to statistical significance does not necessarily mean that the overall result is significant in real world terms. For example, in a large study of a drug it may be shown that the drug has a statistically significant but very small beneficial effect, such that the drug is unlikely to help the patient noticeably.
Although in principle the acceptable level of statistical significance may be subject to debate, the p-value is the smallest significance level that allows the test to reject the null hypothesis. This test is logically equivalent to saying that the p-value is the probability, assuming the null hypothesis is true, of observing a result at least as extreme as the test statistic. Therefore, the smaller the p-value, the lower the probability of committing type I error.
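The p-value computation can be made concrete with a small sketch. This assumes a two-sided z-test of H0: mean = 0 with known σ = 1; the ten data points are a hypothetical sample chosen for illustration.

```python
import math

# Sketch: two-sided p-value for a z-test of H0: mean = 0, assuming a
# known sigma = 1 (the data below are a small hypothetical sample).
def norm_cdf(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

xs = [0.5, 1.2, -0.3, 0.8, 0.9, 0.1, 1.5, 0.4, 0.7, -0.2]
z = (sum(xs) / len(xs)) / (1.0 / math.sqrt(len(xs)))  # z = mean / (sigma / sqrt(n))
p_value = 2.0 * (1.0 - norm_cdf(abs(z)))  # prob. of a result at least this extreme under H0
reject = p_value < 0.05  # decision at the 5% significance level
print(round(z, 3), round(p_value, 4), reject)
```

Here the p-value lands above 0.05, so the null hypothesis is not rejected at the 5% level even though the sample mean is visibly positive: a reminder that "not significant" is not the same as "no effect".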
Some problems are usually associated with this framework (See criticism of hypothesis testing):
• A difference that is highly statistically significant can still be of no practical significance, but it is possible to properly formulate tests to account for this. One response involves going beyond reporting only the significance level to include the p-value when reporting whether a hypothesis is rejected or accepted. The p-value, however, does not indicate the size or importance of the observed effect and can also seem to exaggerate the importance of minor differences in large studies. A better and increasingly common approach is to report confidence intervals. Although these are produced from the same calculations as those of hypothesis tests or p-values, they describe both the size of the effect and the uncertainty surrounding it.
• Fallacy of the transposed conditional, aka prosecutor's fallacy: criticisms arise because the hypothesis testing approach forces one hypothesis (the null hypothesis) to be favored, since what is being evaluated is the probability of the observed result given the null hypothesis and not probability of the null hypothesis given the observed result. An alternative to this approach is offered by Bayesian inference, although it requires establishing a prior probability.[27]
• Rejecting the null hypothesis does not automatically prove the alternative hypothesis.
• Like everything in inferential statistics, it relies on sample size, and therefore under fat tails p-values may be seriously miscomputed.
### Examples
Some well-known statistical tests and procedures are:
## Misuse
Misuse of statistics can produce subtle, but serious errors in description and interpretation—subtle in the sense that even experienced professionals make such errors, and serious in the sense that they can lead to devastating decision errors. For instance, social policy, medical practice, and the reliability of structures like bridges all rely on the proper use of statistics.
Even when statistical techniques are correctly applied, the results can be difficult to interpret for those lacking expertise. The statistical significance of a trend in the data—which measures the extent to which a trend could be caused by random variation in the sample—may or may not agree with an intuitive sense of its significance. The set of basic statistical skills (and skepticism) that people need to deal with information in their everyday lives properly is referred to as statistical literacy.
There is a general perception that statistical knowledge is all-too-frequently intentionally misused by finding ways to interpret only the data that are favorable to the presenter.[28] A mistrust and misunderstanding of statistics is associated with the quotation, "There are three kinds of lies: lies, damned lies, and statistics". Misuse of statistics can be both inadvertent and intentional, and the book How to Lie with Statistics[28] outlines a range of considerations. In an attempt to shed light on the use and misuse of statistics, reviews of statistical techniques used in particular fields are conducted (e.g. Warne, Lazo, Ramos, and Ritter (2012)).[29]
Ways to avoid misuse of statistics include using proper diagrams and avoiding bias.[30] Misuse can occur when conclusions are overgeneralized and claimed to be representative of more than they really are, often by either deliberately or unconsciously overlooking sampling bias.[31] Bar graphs are arguably the easiest diagrams to use and understand, and they can be made either by hand or with simple computer programs.[30] Unfortunately, most people do not look for bias or errors, so they are not noticed. Thus, people may often believe that something is true even if it is not well represented.[31] To make data gathered from statistics believable and accurate, the sample taken must be representative of the whole.[32] According to Huff, "The dependability of a sample can be destroyed by [bias]... allow yourself some degree of skepticism."[33]
To assist in the understanding of statistics Huff proposed a series of questions to be asked in each case:[34]
• Who says so? (Does he/she have an axe to grind?)
• How does he/she know? (Does he/she have the resources to know the facts?)
• What's missing? (Does he/she give us a complete picture?)
• Did someone change the subject? (Does he/she offer us the right answer to the wrong problem?)
• Does it make sense? (Is his/her conclusion logical and consistent with what we already know?)
The confounding variable problem: X and Y may be correlated, not because there is a causal relationship between them, but because both depend on a third variable Z. Z is called a confounding factor.
### Misinterpretation: correlation
The concept of correlation is particularly noteworthy for the potential confusion it can cause. Statistical analysis of a data set often reveals that two variables (properties) of the population under consideration tend to vary together, as if they were connected. For example, a study of annual income that also looks at age of death might find that poor people tend to have shorter lives than affluent people. The two variables are said to be correlated; however, they may or may not be the cause of one another. The correlation phenomena could be caused by a third, previously unconsidered phenomenon, called a lurking variable or confounding variable. For this reason, there is no way to immediately infer the existence of a causal relationship between the two variables. (See Correlation does not imply causation.)
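A simulation makes the confounding mechanism explicit. In the sketch below (an assumed toy model), X and Y each depend only on a common variable Z, with no direct link between them, yet their sample correlation comes out strongly positive.

```python
import random

# Illustration of a confounder: X and Y are both driven by Z, with no
# direct causal link between them, yet they come out strongly correlated.
random.seed(4)
z = [random.gauss(0, 1) for _ in range(5000)]
x = [zi + random.gauss(0, 0.5) for zi in z]  # X depends on Z only
y = [zi + random.gauss(0, 0.5) for zi in z]  # Y depends on Z only

def corr(a, b):
    # Pearson correlation coefficient of two equal-length lists.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / n
    sa = (sum((ai - ma) ** 2 for ai in a) / n) ** 0.5
    sb = (sum((bi - mb) ** 2 for bi in b) / n) ** 0.5
    return cov / (sa * sb)

print(round(corr(x, y), 3))  # strong positive correlation, entirely due to Z
```

Intervening on X in this model would leave Y unchanged, which is exactly why the observed correlation cannot by itself establish causation.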
## History of statistical science
Gerolamo Cardano, a pioneer on the mathematics of probability.
The earliest writing on statistics was found in a 9th-century book entitled Manuscript on Deciphering Cryptographic Messages, written by Arab scholar Al-Kindi (801–873). In his book, Al-Kindi gave a detailed description of how to use statistics and frequency analysis to decipher encrypted messages. This text laid the foundations for statistics and cryptanalysis.[35][36] Al-Kindi also made the earliest known use of statistical inference, while he and other Arab cryptologists developed the early statistical methods for decoding encrypted messages. Arab mathematicians including Al-Kindi, Al-Khalil (717–786) and Ibn Adlan (1187–1268) used forms of probability and statistics, with one of Ibn Adlan's most important contributions being on sample size for use of frequency analysis.[37]
The earliest European writing on statistics dates back to 1663, with the publication of Natural and Political Observations upon the Bills of Mortality by John Graunt.[38] Early applications of statistical thinking revolved around the needs of states to base policy on demographic and economic data, hence its stat- etymology. The scope of the discipline of statistics broadened in the early 19th century to include the collection and analysis of data in general. Today, statistics is widely employed in government, business, and natural and social sciences.
The mathematical foundations of modern statistics were laid in the 17th century with the development of the probability theory by Gerolamo Cardano, Blaise Pascal and Pierre de Fermat. Mathematical probability theory arose from the study of games of chance, although the concept of probability was already examined in medieval law and by philosophers such as Juan Caramuel.[39] The method of least squares was first described by Adrien-Marie Legendre in 1805.
Karl Pearson, a founder of mathematical statistics.
The modern field of statistics emerged in the late 19th and early 20th century in three stages.[40] The first wave, at the turn of the century, was led by the work of Francis Galton and Karl Pearson, who transformed statistics into a rigorous mathematical discipline used for analysis, not just in science, but in industry and politics as well. Galton's contributions included introducing the concepts of standard deviation, correlation, regression analysis and the application of these methods to the study of the variety of human characteristics—height, weight, eyelash length among others.[41] Pearson developed the Pearson product-moment correlation coefficient, defined as a product-moment,[42] the method of moments for the fitting of distributions to samples and the Pearson distribution, among many other things.[43] Galton and Pearson founded Biometrika as the first journal of mathematical statistics and biostatistics (then called biometry), and the latter founded the world's first university statistics department at University College London.[44]
Ronald Fisher coined the term null hypothesis during the Lady tasting tea experiment, which "is never proved or established, but is possibly disproved, in the course of experimentation".[45][46]
The second wave of the 1910s and 20s was initiated by William Sealy Gosset, and reached its culmination in the insights of Ronald Fisher, who wrote the textbooks that were to define the academic discipline in universities around the world. Fisher's most important publications were his 1918 seminal paper The Correlation between Relatives on the Supposition of Mendelian Inheritance (which was the first to use the statistical term, variance), his classic 1925 work Statistical Methods for Research Workers and his 1935 The Design of Experiments,[47][48][49][50] where he developed rigorous design of experiments models. He originated the concepts of sufficiency, ancillary statistics, Fisher's linear discriminator and Fisher information.[51] In his 1930 book The Genetical Theory of Natural Selection he applied statistics to various biological concepts, such as Fisher's principle (about the sex ratio), which A.W.F. Edwards has called "probably the most celebrated argument in evolutionary biology",[52] and the Fisherian runaway,[53][54][55][56][57][58] a concept in sexual selection about a positive feedback runaway effect found in evolution.
The final wave, which mainly saw the refinement and expansion of earlier developments, emerged from the collaborative work between Egon Pearson and Jerzy Neyman in the 1930s. They introduced the concepts of "Type II" error, power of a test and confidence intervals. Jerzy Neyman in 1934 showed that stratified random sampling was in general a better method of estimation than purposive (quota) sampling.[59]
Today, statistical methods are applied in all fields that involve decision making, for making accurate inferences from a collated body of data and for making decisions in the face of uncertainty based on statistical methodology. The use of modern computers has expedited large-scale statistical computations, and has also made possible new methods that are impractical to perform manually. Statistics continues to be an area of active research, for example on the problem of how to analyze Big data.[60]
## Applications
### Applied statistics, theoretical statistics and mathematical statistics
Applied statistics comprises descriptive statistics and the application of inferential statistics.[61][62] Theoretical statistics concerns the logical arguments underlying justification of approaches to statistical inference, as well as encompassing mathematical statistics. Mathematical statistics includes not only the manipulation of probability distributions necessary for deriving results related to methods of estimation and inference, but also various aspects of computational statistics and the design of experiments.
### Machine learning and data mining
Machine learning models are statistical and probabilistic models that capture patterns in the data through the use of computational algorithms.
### Statistics in society
Statistics is applicable to a wide variety of academic disciplines, including natural and social sciences, government, and business. Statistical consultants can help organizations and companies that don't have in-house expertise relevant to their particular questions.
### Statistical computing
The rapid and sustained increases in computing power starting from the second half of the 20th century have had a substantial impact on the practice of statistical science. Early statistical models were almost always from the class of linear models, but powerful computers, coupled with suitable numerical algorithms, caused an increased interest in nonlinear models (such as neural networks) as well as the creation of new types, such as generalized linear models and multilevel models.
Increased computing power has also led to the growing popularity of computationally intensive methods based on resampling, such as permutation tests and the bootstrap, while techniques such as Gibbs sampling have made use of Bayesian models more feasible. The computer revolution has implications for the future of statistics with new emphasis on "experimental" and "empirical" statistics. A large number of both general and special purpose statistical software are now available. Examples of available software capable of complex statistical computation include programs such as Mathematica, SAS, SPSS, and R.
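The bootstrap mentioned above resamples the observed data with replacement to approximate the sampling distribution of a statistic. A minimal sketch, with assumed toy data, estimating the standard error of the sample median:

```python
import random

# Bootstrap sketch: estimate the standard error of the sample median
# by resampling the data with replacement (toy data, assumed).
random.seed(5)
data = [2.1, 3.4, 1.9, 5.6, 4.2, 3.8, 2.7, 4.9, 3.1, 6.0]

def median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

B = 2000  # number of bootstrap resamples
medians = [median([random.choice(data) for _ in data]) for _ in range(B)]
m = sum(medians) / B
boot_se = (sum((v - m) ** 2 for v in medians) / (B - 1)) ** 0.5

print(round(boot_se, 3))  # bootstrap standard error of the median
```

The appeal of the method is that it needs no closed-form formula for the standard error, only enough computing power to repeat the calculation many times.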
### Statistics applied to mathematics or the arts
Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was "required learning" in most sciences. This tradition has changed with use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically. Initially derided by some mathematical purists, it is now considered essential methodology in certain areas.
• In number theory, scatter plots of data generated by a distribution function may be transformed with familiar tools used in statistics to reveal underlying patterns, which may then lead to hypotheses.
• Methods of statistics including predictive methods in forecasting are combined with chaos theory and fractal geometry to create video works that are considered to have great beauty.
• The process art of Jackson Pollock relied on artistic experiments whereby underlying distributions in nature were artistically revealed. With the advent of computers, statistical methods were applied to formalize such distribution-driven natural processes to make and analyze moving video art.
• Methods of statistics may be used predictively in performance art, as in a card trick based on a Markov process that only works some of the time, the occasion of which can be predicted using statistical methodology.
• Statistics can be used to predictively create art, as in the statistical or stochastic music invented by Iannis Xenakis, where the music is performance-specific. Though this type of artistry does not always come out as expected, it does behave in ways that are predictable and tunable using statistics.
## Specialized disciplines
Statistical techniques are used in a wide range of types of scientific and social research, including: biostatistics, computational biology, computational sociology, network biology, social science, sociology and social research. Some fields of inquiry use applied statistics so extensively that they have specialized terminology. These disciplines include:
In addition, there are particular types of statistical analysis that have also developed their own specialised terminology and methodology:
Statistics forms a key basis tool in business and manufacturing as well. It is used to understand measurement system variability, to control processes (as in statistical process control or SPC), to summarize data, and to make data-driven decisions. In these roles, it is a key tool, and perhaps the only reliable tool.
## References
1. ^ a b Dodge, Y. (2006) The Oxford Dictionary of Statistical Terms, Oxford University Press. ISBN 0-19-920613-9
2. ^ Romijn, Jan-Willem (2014). "Philosophy of statistics". Stanford Encyclopedia of Philosophy.
3. ^ Lund Research Ltd. "Descriptive and Inferential Statistics". statistics.laerd.com. Retrieved 2014-03-23.
4. ^ "What Is the Difference Between Type I and Type II Hypothesis Testing Errors?". About.com Education. Retrieved 2015-11-27.
5. ^ "How to Calculate Descriptive Statistics". Answers Consulting. 2018-02-03.
6. ^ "Definition of STATISTICS". www.merriam-webster.com. Retrieved 2016-05-28.
7. ^ "Essay on Statistics: Meaning and Definition of Statistics". Economics Discussion. 2014-12-02. Retrieved 2016-05-28.
8. ^ Moses, Lincoln E. (1986) Think and Explain with Statistics, Addison-Wesley, ISBN 978-0-201-15619-5. pp. 1–3
9. ^ Hays, William Lee, (1973) Statistics for the Social Sciences, Holt, Rinehart and Winston, p.xii, ISBN 978-0-03-077945-9
10. ^ Moore, David (1992). "Teaching Statistics as a Respectable Subject". In F. Gordon and S. Gordon (eds.). Statistics for the Twenty-First Century. Washington, DC: The Mathematical Association of America. pp. 14–25. ISBN 978-0-88385-078-7.
11. ^ Chance, Beth L.; Rossman, Allan J. (2005). "Preface". Investigating Statistical Concepts, Applications, and Methods (PDF). Duxbury Press. ISBN 978-0-495-05064-3.
12. ^ Kannan, D.; Lakshmikantham, V., eds. (2002). Handbook of stochastic analysis and applications. New York: M. Dekker. ISBN 0824706609.
13. ^ Schervish, Mark J. (1995). Theory of statistics (Corr. 2nd print. ed.). New York: Springer. ISBN 0387945466.
14. ^ Wolfram, Stephen (2002). A New Kind of Science. Wolfram Media, Inc. p. 1082. ISBN 1-57955-008-8.
15. ^ Freedman, D.A. (2005) Statistical Models: Theory and Practice, Cambridge University Press. ISBN 978-0-521-67105-7
16. ^ McCarney R, Warner J, Iliffe S, van Haselen R, Griffin M, Fisher P (2007). "The Hawthorne Effect: a randomised, controlled trial". BMC Med Res Methodol. 7 (1): 30. doi:10.1186/1471-2288-7-30. PMC 1936999. PMID 17608932.
17. ^ Rothman, Kenneth J; Greenland, Sander; Lash, Timothy, eds. (2008). "7". Modern Epidemiology (3rd ed.). Lippincott Williams & Wilkins. p. 100.
18. ^ Mosteller, F., & Tukey, J.W. (1977). Data analysis and regression. Boston: Addison-Wesley.
19. ^ Nelder, J.A. (1990). The knowledge needed to computerise the analysis and interpretation of statistical information. In Expert systems and artificial intelligence: the need for information about data. Library Association Report, London, March, 23–27.
20. ^ Chrisman, Nicholas R (1998). "Rethinking Levels of Measurement for Cartography". Cartography and Geographic Information Science. 25 (4): 231–242. doi:10.1559/152304098782383043.
21. ^ van den Berg, G. (1991). Choosing an analysis method. Leiden: DSWO Press
22. ^ Hand, D.J. (2004). Measurement theory and practice: The world through quantification. London: Arnold.
23. ^ a b Piazza Elio, Probabilità e Statistica, Esculapio 2007
24. ^ Everitt, Brian (1998). The Cambridge Dictionary of Statistics. Cambridge, UK New York: Cambridge University Press. ISBN 0521593468.
25. ^ "Cohen (1994) The Earth Is Round (p < .05)". YourStatsGuru.com.
26. ^ Rubin, Donald B.; Little, Roderick J.A., Statistical analysis with missing data, New York: Wiley 2002
27. ^ Ioannidis, J.P.A. (2005). "Why Most Published Research Findings Are False". PLoS Medicine. 2 (8): e124. doi:10.1371/journal.pmed.0020124. PMC 1182327. PMID 16060722.
28. ^ a b Huff, Darrell (1954) How to Lie with Statistics, WW Norton & Company, Inc. New York. ISBN 0-393-31072-8
29. ^ Warne, R. Lazo; Ramos, T.; Ritter, N. (2012). "Statistical Methods Used in Gifted Education Journals, 2006–2010". Gifted Child Quarterly. 56 (3): 134–149. doi:10.1177/0016986212444122.
# Estimated Mean of a Population
This online calculator estimates the mean of a population from a given sample.
Suppose you have a number of values randomly drawn from some source population (these values are usually referred to as a sample). For a given sample you can calculate the mean and the standard deviation of the sample. But the question is: what are the mean and the standard deviation of the source population? Intuitively, you feel that the sample mean isn't exactly equal to the source mean, but they should be somewhat close, i.e. in the vicinity of each other.
The calculator below estimates the mean of the population from the sample. The vicinity is found for different confidence levels using Student's t-distribution.
For this to work, the following assumptions should be met:
1. The scale of measurement has the properties of an equal interval scale.
2. The sample is randomly drawn from the source population.
3. The source population can be reasonably supposed to have a normal distribution.
The formula for estimating the mean of a population based on the sample is
$est.\mu_{source}=M_x\pm (t \cdot est.\sigma_M)$, where
$M_x$ - the mean of the sample
$t$ - the t-ratio for the p-value which corresponds to the chosen confidence level for a non-directional (two-tailed) test.
It is calculated from the inverse of the cdf of Student's t-distribution with degrees of freedom equal to N-1, where N is the number of values in the sample. For example, to get the t-ratio for the 0.05 level of significance, or the 95% confidence level, you take the absolute value of the inverse at 0.025.
$est.\sigma_M$ - the estimate of the standard deviation of the sampling distribution of sample means (or the standard error of the mean)
It is calculated as $est.\sigma_M=\sqrt{\frac{\frac{\sum{(X_i-M_x)^2}}{N-1}}{N}}$
If you care about how these formulas are derived, you can read an excellent explanation here, starting from Chapter 9.
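The formulas above can be sketched in a few lines of code. This is a minimal illustration, not the calculator's actual implementation; it assumes the t-ratio has already been looked up (the 2.776 below is the standard table value for 4 degrees of freedom at the 95% confidence level).

```python
import math

def estimate_population_mean(sample, t_ratio):
    """Confidence interval for the source mean: M_x +/- t * est.sigma_M."""
    n = len(sample)
    mean = sum(sample) / n                                  # M_x
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)    # unbiased variance
    std_err = math.sqrt(var / n)                            # est.sigma_M
    margin = t_ratio * std_err
    return mean - margin, mean, mean + margin

# t-ratio for the 95% confidence level with N = 5 (df = 4), from a t-table
lower, mean, upper = estimate_population_mean([1, 2, 3, 4, 5], t_ratio=2.776)
print(round(lower, 2), round(mean, 2), round(upper, 2))     # 1.04 3.0 4.96
```

So with 95% confidence, the source mean lies between roughly 1.04 and 4.96 for this tiny sample; larger samples shrink the interval.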
# Probability of Randomly Selective Event, Conditional Probability
Tags:
1. Jan 27, 2015
### conniebear14
1. The problem statement, all variables and given/known data
A company has been running a television advertisement for one of its new products. A survey was conducted. Based on its results, it was concluded that an individual buys the product with probability 0.56 if he/she saw the advertisement, and buys with probability 0.08 if he/she did not see it. Twenty-five percent of people saw the advertisement.
a. What is the probability that a randomly selected individual will buy the new product?
b. What is the probability that at least one of randomly selected five individuals will buy the new product?
2. Relevant equations
P(A|B) = P(B|A)P(A)/P(B)
3. The attempt at a solution
I already got part A correct.
The answer is .2
I am confused on part B probably because of the 1/5 thing. Which equation should I use and where should I start with this one?
Last edited by a moderator: May 7, 2017
2. Jan 27, 2015
### conniebear14
it didn't post the numbers but the blanks correspond to 56% and 8% respectively
3. Jan 27, 2015
### LCKurtz
You have the probability a given person buys is $.2$. What is the probability they all fail to buy? You could use the binomial distribution but it is easy enough to just calculate.
4. Jan 27, 2015
### conniebear14
Okay so I calculated probability person does not buy as .8 from (1-.56)(.25) + (.92)(.75). But now I am stuck. Where does the five part come in? What should I do next?
5. Jan 27, 2015
### LCKurtz
If the probability the first person doesn't buy is $.8$, and if they are independent, what is the probability the next person doesn't buy? So....
6. Jan 28, 2015
### HallsofIvy
If the answer to (a), which you say you got, is p, then the probability that "at least one" will buy is the 1 minus the probability none will buy. The probability that none will buy is (1- p)^5
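The recipe above can be checked numerically. A quick sketch, using the thread's figures (0.56, 0.08, and 25%):

```python
# Part (a): law of total probability.
# P(buy) = P(buy | saw ad) * P(saw ad) + P(buy | no ad) * P(no ad)
p_buy = 0.56 * 0.25 + 0.08 * 0.75
print(round(p_buy, 4))              # 0.2

# Part (b): the five individuals are independent, so
# P(at least one buys) = 1 - P(none of the five buys) = 1 - (1 - p)^5
p_at_least_one = 1 - (1 - p_buy) ** 5
print(round(p_at_least_one, 5))     # 0.67232
```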
## Algebra 1
Published by Prentice Hall
# Chapter 8 - Polynomials and Factoring - 8-8 Factoring by Grouping - Standardized Test Prep - Page 521: 51
#### Answer
5r(r+3)($2r^{2}$+1)
#### Work Step by Step
Given the polynomial $10r^{4}$ + $30r^{3}$ + $5r^{2}$ + 15r, we see that all the terms have a common factor of 5r, so we factor it out: 5r($2r^{3}$ + $6r^{2}$ + r + 3). We then take the GCF of the first two terms and the GCF of the last two terms: 5r($2r^{2}$(r+3)+1(r+3)). Finally, we factor out the common binomial (r+3), which gives us 5r(r+3)($2r^{2}$+1).
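As a quick sanity check (not part of the textbook's solution), the two sides can be compared numerically; since both are degree-4 polynomials, agreement at more than four points guarantees they are identical:

```python
# Spot-check the factorization 10r^4 + 30r^3 + 5r^2 + 15r = 5r(r+3)(2r^2+1)
def original(r):
    return 10 * r**4 + 30 * r**3 + 5 * r**2 + 15 * r

def factored(r):
    return 5 * r * (r + 3) * (2 * r**2 + 1)

# Degree-4 polynomials agreeing at 6 distinct points must be identical
for r in (-2, -1, 0, 1, 2, 7):
    assert original(r) == factored(r)
print("factorization checks out")
```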
## Saturday, October 31, 2009 ... /////
### Lord Monckton on Glenn Beck show
If you have a spare hour, here is the program:
It has 7 parts of roughly 8 minutes each; you can choose them via the "tape" button next to "play". John Bolton is there, too. They discuss many details about the legal power of the possible Copenhagen treaty to rebuild the world.
Here is the YouTube link to watch the playlist outside TRF.
By the way, the UAH MSU global temperature anomaly for October 2009 is predicted by The Reference Frame to be 0.29 °C or 0.30 °C, down from 0.42 °C in September 2009. I calculated the estimate on Sunday morning, Central European Time, by comparisons between October 2008 and October 2009 (unfortunately, a few days had missing data which brings some extra inaccuracy). Let's see how accurate I will be.
## Friday, October 30, 2009 ... /////
### Halloween party physics: fun with dry ice
Carbon dioxide (-78 °C and -57 °C are the melting and boiling points) is not only the gas we call life but the North American readers can use it to improve their Halloween party:
Via Physics Central which adds some comments...
### Google Maps Navigation
This technology, soon to be available for free on Android 2.0 phones, is pretty amazing. With all the features based on Google Maps that you can imagine, it has the potential to displace all other mobile GPS systems. The most pressing issues connected with such a choice are the fees for the transferred data, especially when you're abroad (which can be very expensive).
### Post-socialist Europe still against global climate communism
Last night, during their dinner in Brussels, the EU leaders agreed to give the Czech Republic an opt-out from the EU Charter of Fundamental Rights, as demanded by President Klaus. The Austria-Hungarian opposition has apparently evaporated.
So right now, it's likely that he will sign the treaty after the constitutional court says "Yes" and we will get the opt-out which will save my homeland from many policies that go well beyond the fear of the returning Sudeten Germans (a topic that Klaus has used to be sure about the public support).
Still, the treaty is bad even without the charter. Apologies to all the readers - and anti-Lisbon protesters in Brussels and elsewhere - who were hoping that we would manage to kill the treaty for good. Let's survive: we have only lost 1 battle but the war for freedom is not yet over.
Meanwhile, your humble correspondent was on the radio, live.
I was sitting in the Pilsner studio of the public radio station, audio-connected to the Prague headquarters - a few hours after the collapse of the director of the radio, Richard Medek. Up to the very last seconds, I had no idea who would be asking me questions, what he would ask me, and who would be the opposing side. So I was trembling with fear. I had no eye contact with the other participants, which is why I couldn't influence the debate non-verbally and couldn't accurately anticipate when my turn was coming.
More importantly, there was too little time to correct various incorrect statements about the projected warming in the 21st century and other widespread myths.
### Non-ASCII domain names allowed
Today, ICANN in Seoul has voted that from Summer 2010, the internet addresses - URLs - will be allowed to include arbitrary non-English character in the Unicode set: see The New York Times.
From September 2009, you could have registered Cyrillics-based domain names in Bulgaria.
It's a pretty substantial extension of the possible names because the latest Unicode contains 107,000 characters - imagine 500 pages of stuff like this. So far, only 26 letters, 10 digits, and the dash were possible as "atoms" of the URLs (37 characters that we know and love in total).
A few Unicode characters are enough to create a huge number of new possibilities to spoof and to misspell. It's my understanding that even the tiniest difference between URLs will imply that the two addresses are inequivalent. Isn't it terrible?
## Thursday, October 29, 2009 ... /////
### Nature, NYT report the demise of Lorentz-violating theories
In August 2009, we discussed the preprint by the GBM/LAT collaborations working for Fermi, formerly known as GLAST:
Fermi kills all Lorentz violating theories,
which has ruled out all existing non-stringy theories of quantum gravity by confirming the rules of special relativity at a huge, trans-Planckian accuracy - as long as their parameters are chosen or estimated naturally.
Now, you may say that physicists know 5 or 12 or 2009 alternatives to string/M-theory - except that 4 or 11 or 2008 of them already reside at the dumping ground of physics.
### Arnold Schwarzenegger & probabilities
Arnold Schwarzenegger has vetoed a bill about the funding of projects in the San Francisco area, proposed by an aggressive homosexual activist. That wouldn't be important enough a fact to be discussed at this blog. However:
Unlike other texts, Schwarzenegger's veto must be read vertically to get the key message. ;-) Now, the governor's office claims that it is just a coincidence. How likely is it? Well, 26^7 is equal to 8 billion or so.
That's like a 7-sigma evidence against the "noise" explanation. :-) And who knows - maybe we should also count the spaces and/or the initial "I" before the verb :-) which would make the odds even more spectacular. Moreover, there exists additional, microscopic circumstantial evidence supporting the "intelligent design" explanation in this case.
"Kicking the can" sounds too poetic for a political memorandum (they needed a "k"), "overwhelmingly" seems redundant for describing how Californians deserve something (they needed an "o"), and the repeated occurrence of "unnecessary" (they needed a "u" twice) would probably be adjusted away if the good style mattered more than the vertical message.
Arnold Schwarzenegger is pretty creative although we could have doubts whether he has created the letter himself. Good sense of humor.
The message could have been made excessively contrived, too. The end of the letter, from "Sincerely", could have said "Respectfully [newline] Arnold [newline] Schwarzenegger [newline] Sincerely". (Be sure that I could give a smoother solution to this problem, but this simple one is OK as a proof of a concept.)
Then the hidden vertical message would be much longer - "I *u** your *s*." That would bring us to 10-sigma evidence against noise.
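The back-of-the-envelope odds above can be checked in a few lines. This is just a sketch of the crude model (7 independent, uniformly random initial letters); the sigma conversion uses Python's standard library:

```python
from statistics import NormalDist

# Number of equally likely 7-letter acrostics in the crude model
odds = 26 ** 7
print(odds)                      # 8031810176, i.e. about 8 billion

# Convert the probability 1/odds into an equivalent two-sided
# Gaussian "sigma" level
p = 1 / odds
z = NormalDist().inv_cdf(1 - p / 2)
print(round(z, 1))               # comes out between 6 and 7 sigma
```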
### A small Hodge three-generation Calabi-Yau
I think that the most interesting - and visually attractive - hep-th paper today is the last one,
Volker Braun, Philip Candelas, Rhys Davies: A three-generation Calabi-Yau manifold with small Hodge numbers (PDF).
They construct Calabi-Yau manifolds with very small Hodge numbers.
If you look at the picture above - disks indicate known Calabi-Yau three-folds, organized according to the difference (x) and the sum (y) of their nontrivial Hodge numbers (h11, h12) - and you're asked where we should be living, the anthropic people will point their hands to the top, or (infinitely?) above it - to the "generic" Calabi-Yau manifolds with large topological invariants.
However, most of the sane people - much like the droplets of beer pouring along the conical glass - will choose the region near the tip of the cone at the bottom which surely looks special: the "simplest" Calabi-Yau shapes, in some natural sense of the adjectives, are located there. The red dots represent Calabi-Yaus with the Euler character chi equal to +6 or -6, leading to 3 generations (for the E8 x E8 heterotic string, via the standard embedding of the spin connection to gauge connection).
### EU summit on global warming and Václav Klaus
Today, a two-day summit of the European Union begins in Brussels. It will be mostly discussing two things - none of which will be present in Belgium:
Global warming and Václav Klaus
Moreover, you can see that the two main topics are opposite to one another. ;-)
Yesterday, in the evening of the most important Czech national holiday, Klaus gave the highest awards to Karel Gott and 22 other personalities. For Gott, that's far from the first award of this kind. Gott has famously identified Mr Gustáv Husák, the last communist president, as a nice chap. ;-) See also Klaus's holiday speech in which he urged the Czechs not to surrender to political correctness, among other things.
Václav Klaus has apparently told Czech prime minister Mr Jan Fischer that if the Czech Republic is exempted from the EU Charter of Fundamental Rights - and if the constitutional court says that the Treaty of Lisbon is OK (next Tuesday?), he will sign the treaty. Czechia would simply join Poland and Britain that have received the opt-out for different reasons (the right to ban gay marriages and the protection of the English judicial system).
## Wednesday, October 28, 2009 ... /////
### Fermi sees the WMAP haze, too: dark matter?
Last time we talked about the Fermi satellite, formerly known as GLAST, it just ruled out all natural Lorentz-violating theories.
A very new "Fermi" preprint by Douglas Finkbeiner, Tracy Slatyer, & Gregory Dobler from Harvard's Center for Astrophysics and Neal Weiner & Ilias Cholis from NYU,
The Fermi haze: A gamma-ray counterpart to the microwave haze (PDF)
Sky and Telescope (summary)
brings us new exciting results about yet another hot issue in fundamental physics: the character of dark matter.
Let me assume that the distinguished reader has been convinced that dark matter almost certainly exists - so the question is not whether it exists but what it is made out of and how this new kind of matter behaves.
Recalling some old tantalizing hints
In its microwave spectrum, the WMAP satellite saw a "haze" coming from electrons near the galactic center back in 2003 (at least Douglas Finkbeiner claimed that it did: see the picture on the left) - and there were always good reasons to think that these electrons could be created by dark matter annihilation which would be exciting if true.
The Fermi satellite is looking at the situation in a much higher frequency segment of the electromagnetic spectrum - well, it employs gamma rays. After they subtract the photons from decaying neutral pions and the bremsstrahlung from the soft-synchrotron radiation, they still see some pretty hard, excess gamma rays.
Nevertheless, the authors of the paper claim that the location and distribution of the electrons responsible for these hard excessive photons - electrons apparently participating in inverse Compton scattering (ICS) - matches the prediction made from the dark-matter interpretation of the WMAP haze.
So the attractive story is that near the galactic center, the dark matter particles annihilate in pairs and create electrons which produce the additional gamma rays observed by Fermi (as well as other signals).
The Fermi haze, in 5-10 GeV (green) and 10-20 GeV (blue) ranges, including residuals (right side). Click to zoom in.
There also exist other reasons why this signal could be unrelated to dark matter, after all. There is a lot of uncertainty about the right interpretation. But the multiplicity of the stories - WMAP, Fermi, PAMELA, ATIC - kind of increases the subjective likelihood that the dark matter annihilation is actually being observed by all of them (or most of them).
### Cap and trade bedtime story
A few weeks ago, the British government aired an outrageous commercial with "drowning pets" on TV. Hundreds of viewers have complained. The great news is that the narration has been corrected. Here is the fixed version of the commercial:
If the video above doesn't work for you, view it at YouTube.COM (they may have banned embedding).
The Minnesotans for Global Warming who helped to cure the errors want you to sign a petition against cap and trade, to be sent to Obama.
## Tuesday, October 27, 2009 ... /////
### CERN: LHC switched on
A seven-minute excursion to the LHC cooling system.
Four days ago, as the video shows, the LHC women celebrated the working temperatures, achieved on October 15th, in harmless vapors. ;-)
Today, after a year of repairs, the LHC was switched on again:
BBC, U.K. Times, Telegraph, The NY Times
Things seem to be working just fine as the energy will be slowly increasing towards those 2x 3.5 TeV in January.
The probability of a new quench has decreased because the physicists have learned a lot, introduced a new anti-quench system, and subtracted one terrorist from their team. ;-)
The Compact Muon Solenoid.
The CMS collaboration has posted the first tests of the CMS silicon tracker. Things are consistent and the resolution is close to the designed performance:
Preprint, Symmetry Magazine
### Czech Constitutional Court looks at the Treaty of Lisbon
The proceedings began at 10 am, Central European Time. The boss of the Constitutional Court, Dr Rychetský, enumerated the participants of the proceedings. Pretty much all of them were "personally known" to the court.
Dr Pavel Rychetský, a former social democrat, the boss of the Constitutional Court in Brno.
Only the lawyer of the plaintiffs had to be chosen from two possibilities. Consequently, Dr Rychetský asked Dr Kuba, who was selected by the plaintiffs, to present his ID to the court. Yes, this is indeed Dr Kuba - and his license numbers were made public. ;-)
The revenge didn't have to wait for too long. Dr Rychetský asked whether anyone in the room would submit a complaint that the court is prejudiced.
Dr Kuba immediately and politely answered yes, we think that the very chairman, Dr Rychetský, is biased, and he gave the court plenty of references to the relevant paragraphs. And as evidence, he quoted Lidové noviny, a leading newspaper, which reported that Johannes Haindl, the German ambassador, had met Dr Rychetský and asked him how much time it would take for the court to release the verdict.
The proceedings were interrupted for 10 minutes - for all the judges to determine what to do with the complaint against the chairman. ;-) Of course, Dr Rychetský was not eliminated, but he was visibly shattered, his face turning much redder. It's conceivable that a judge (or judges) argued or voted against Dr Rychetský.
Dr Rychetský continued and nicely explained the content of the complaint - see Why Lisbon is inconstitutional for a similar summary.
### Climate Chains: a movie
If you have 22 spare minutes...
Server of Climate Chains...
By the way, Sriram has also recommended this one-hour interview with Lord Monckton in Canada.
## Monday, October 26, 2009 ... /////
### Overpopulated polar bears flood the Prague Castle
On November 14th, 1856, the last brown bear was shot in the Bohemian Forest. This stuffed female individual became one of the exhibits of the beautiful "Hluboká upon Moldau" Chateau.
In the 1990s, some Czech pundits were proposing to return the bear to the Czech hills.
However, an unexpected twist occurred today. Sixty-five of their cousins - who believe that their white skin makes them superior - overran the Prague Castle:
Czech Press Agency (in English)
Google News (in Czech)
Gallery ("další" means "next")
Their main slogan was a variation of the Stalinist dictum from the 1950s that used to be directed against the imperialists, "We won't allow our republic to be subverted" ("Republiku si rozvracet nedáme").
The polar bears' new slogan is
"We won't allow our republic to be defrosted."
("Republiku si rozmrazit nedáme.")
Some people thought - and you might think - that the polar bears came there to protest the fact that the Czech president is a climate realist. But the organizers reject this hypothesis:
This is a misunderstanding. These polar bears legally arrived here from Greenland to support the Czech president.
This answer seems to agree with some of the banners. The polar bears urged Václav Klaus "to keep on campaigning against the myth of global warming." That much for the assumption that polar bears must automatically be alarmists. :-)
### Dominika Stará vs Martin Chodúr
Elvis Presley is alive and kicking. His name is Martin Chodúr. ;-) Because most TRF readers are not interested in music contests, I will remove most of this short article from the main TRF page.
## Sunday, October 25, 2009 ... /////
### 350 day: a failing struggle for an unattractive utopia
You may have not noticed but Saturday, October 24th, 2009 was an International Day of Climate Action: see 350.org & Google News.
Hundreds of people around the world organized events urging the world's CO2 concentration to return to 350 ppm (350 parts per million: 3.5 molecules out of 10,000 are CO2 molecules). Even in the Czech Republic, a dozen activists gathered at the Old Town Square, emitting dozens of dirty black "CO2" latex balloons into the air.
(They also used masks of various well-known world politicians for a childish puppet show in which these politicians declare 350 ppm to be the new Copenhagen law.)
A few stupid hippies can easily get into the national TV news if they're on the "right side" (i.e. the far-left side) of the political correctness.
Twenty years ago, when the CO2 concentration was at 350 ppm, environmental activists would fight against things such as latex balloons in the air. They pollute the environment, and some animals may get into trouble when they swallow them.
## Saturday, October 24, 2009 ... /////
### Guardian: interview with Michael Green
Aida Edemariam of The Guardian just published an interesting albeit imperfect interview with the new Lucasian professor of mathematics in Cambridge:
Michael Green: Master of the universe (click)
It argues that Green, who has been focused on theoretical physics since the age of 13, used to be a Harrison Ford lookalike.
They recall how Green and Schwarz met in a CERN canteen, how everything suddenly made sense during a day in 1984, how string theory would have gone extinct without Green and Schwarz (Witten's words), and how the gospel about their success got to Princeton where the amazing guy called Edward Witten scooped them and wrote this paper (abstract) about the phenomenology of O(32) strings.
I think it is this paper that Green claims to have really sparked the revolution. Unlike most papers by Green and Schwarz from the early 1980s, I've never read this particular paper by Witten - that claimed to have obtained the right number and type of generations of the Standard Model fermions from type I theory. And frankly, although I can't read the full paper even now, it doesn't seem quite correct to me. Can you get realistic vacua from O(32) strings in this way?
Green says that the pace of research has gotten much faster these days; back then, they didn't have to compete with others, which was kind of nice.
The journalist claims that
Green once said that one could "think of the universe as a symphony or a song – for both are made up of notes produced by strings vibrating in particular ways."
Green replies: "Did I?" Cutely enough, the article doesn't resolve the mystery. The solution is, of course, that Ms Edemariam confused Michael Green with Brian Greene (who says similar things often) and she still doesn't realize that. :-) I suspect that Green realized where the confusion came from.
Green describes the importance of unification in physics.
## Friday, October 23, 2009 ... /////
### TBBT accused of sexism
The Big Bang Theory is a great sitcom. The most recent episode, 3x05 "The Creepy Candy Coating Corollary", can be watched e.g. at CBS.
People seem to agree that it is the most scientifically accurate show on TV (which is guaranteed by Dr David Saltzberg, their science adviser). And there are many big fans of the show among the well-known bloggers, including The Reference Frame and Bad Astronomy.
However, some people - e.g. Sam Lowry and Sean Carroll - claim that the show is misleading or inaccurate concerning the sociology. They claim that scientists are no nerds and women are equally likely to become scientists, and all this stuff.
### Kyoto pays you for deforestation
Because the Chromium team has just fixed a somewhat serious PDF bug your humble correspondent reported ;-), it's time to look at a much more serious bug, a bug of the Kyoto protocol and the related European laws:
Science, NPR, Australian ABC, Carbon Positive, TIME
The Kyoto protocol and similar treaties and bills are designed so that you get paid if you cut forests, burn the wood, and plant biofuel crops in the empty place instead. ;-) So the legal support for the biofuels is likely to be more harmful to the environment than petrol.
The authors, Timothy Searchinger et al., elaborate upon their February 2008 article in Science Express that described the regression. Now, in 2009, they also claim that there exists an "easy fix".
So far, the price for the CO2 has been small enough not to cause any effects. However, it's plausible that if the price increased sufficiently for the net CO2 emissions to be changed by the legislation, we could see a lot of deforestation.
It's questionable whether such "loopholes" may ever be completely fixed. The main problem is the inherent non-market character of the "caps".
In 1968, Mr Ota Šik, an economist born in Pilsen and the author of the economic transformation of Dubček's Czechoslovakia during the Prague Spring (a third way), discovered a power plant and a colliery near Ostrava, in the Northeastern Czech Republic. A funny feature of this pair was that the power plant produced as much electricity as the colliery consumed, and the colliery mined as much coal as the power plant burned. :-)
This is a typical bug that socialism routinely experiences. As Mr Petr Vopěnka, a mathematician who told us about this story in the 1990s, emphasized, there can exist not just pairs but much more complex "circles" of economic relationships in socialism that imply that the system doesn't work.
## Thursday, October 22, 2009 ... /////
### NASA GISS at Tom's diner
When Karlheinz Brandenburg was optimizing the MP3 format of music files in 1991, he had to choose a monophonic song to test all the details of a single-channel compressed playback. This version of Suzanne Vega's "Tom's Diner" turned out to be optimal for this purpose.
Suzanne Vega has therefore become the mother of MP3.
While her simplistic voice is effective, I still prefer the recording with the instruments.
However, as Anthony Watts has found out, she also became the grandmother of global warming because James Hansen's and Gavin Schmidt's offices are exactly six floors above the restaurant from the video above, at 2880 Broadway, New York City. ;-)
It's sensible that each of the two guys has only one window in his office, but it will be even more sensible when the windows are behind bars.
Click to zoom in. One more Google Earth screenshot of NASA's GISS and its neighborhood. Note that Hansen has readied his death train on the roof.
### U.S. public support for AGW orthodoxy dropped by 14 percentage points since 2008
The Pew Research Center for the People & the Press has published their newest numbers documenting the changing opinions about global warming in the U.S.
Pew, The Guardian, Associated Press, a WSJ blog I, II, Wash. Indep., Dakota Voice
The October 2009 numbers are mainly compared with the results in April 2008: I will refer to 2008 and 2009.
American worries about global warming cooled down, the Pew Research Center showed, even as the Pew Center on Global Climate Change attempted to gather its last worriers again.
Is there solid evidence that the Earth is warming [at all]?
In 2008, "Yes" was chosen by 71% of the respondents. Now it is 57% only: a drop by 14 percentage points. You may want to know that both in 2006 and 2007, the figure was at 77% - a drop by 20 percentage points in 2 or 3 years.
Is there solid evidence that the Earth is warming because of human activity?
In 2008, the "Yes" score was at 47%, i.e. almost one half agreed with the basic AGW statement. In 2009, the number dropped to 36%, i.e. by 11 percentage points. About one third of Americans believe in man-made global warming today - which makes this religion less popular than creationism. ;-)
If we extrapolate this trend, the number of AGW believers in the U.S. will become negative in five years. ;-)
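Just to spell out the joke's arithmetic, here is a throwaway sketch assuming a straight-line trend between the two data points above (47% in April 2008, 36% in October 2009, about 1.5 years apart):

```python
def years_until_zero(p_old, p_new, dt_years):
    """Time (measured from the newer poll) until a linear trend hits zero."""
    slope = (p_new - p_old) / dt_years   # percentage points per year
    return -p_new / slope                # solve p_new + slope * t = 0

t = years_until_zero(47.0, 36.0, 1.5)
print(round(t, 1))  # -> 4.9, i.e. roughly "five years"
```

Of course a belief share cannot literally go negative, which is the point of the joke.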
Is it serious?
The "very serious" group went from 44% to 35% between 2008 and 2009, "not too serious" went from 13% to 15%, "not a problem" went from 11% to 17%, the last two "largely unworried" groups combined went from 24% to 32%.
GOP, DEM, IND: party lines
## Wednesday, October 21, 2009 ... /////
### Tolasz: Klaus may not be wrong
Today, the #1 Czech newspaper, MF DNES ("Youth Front TODAY"), published an interview by a top Czech journalist, Ms Barbora Tachecí, with climatologist Mr Radim Tolasz.
Because I consider him the ultimate role model of a mainstream Czech climatologist, the "guy in the middle" (who also holds, in some sense, the highest climatological job in Czechia), I decided to translate the whole interview so that the readers from the whole world may learn that the climate hysteria is pretty much absent in the Czech climatological circles - and in fact, also in the Czech media.
The printed version starts with a big headline, "Klaus may not be wrong" ("Třeba se Klaus nemýlí"). The electronic version has a more refined title:
Politicians are satisfied as soon as the fight against the climate change is being written about; economists should calculate how much it costs, Dr Radim Tolasz says. Picture: Mr Michal Šula, MF DNES
Klaus may not be wrong but he oversimplifies things, a climatologist says
(title)
Weather fluctuations in recent days have confused everyone. There is an exception: climatologists are not surprised and they will probably never be. This statement was also confirmed by Mr Radim Tolasz, a deputy director for meteorology and climatology of the Czech Hydrometeorological Institute, in an interview with Ms Barbora Tachecí.
## Tuesday, October 20, 2009 ... /////
### Luxembourg: EU climate talks collapse
... thank God ...
The debate how to "fight against climate change" has advanced from absurd talk by hosts of dopes to their attempts to actually harm the civilization.
The intellectual little green men no longer discuss whether to screw the world economy but how to do it optimally. Hundreds of billions of dollars a year are at stake so you may guess that even relatively small disagreements and modifications of previous plans may induce substantial tension.
Moreover, it's obvious that these disgraceful policies are being mixed with egotistic interests of particular nations and individuals and the desire to increase protectionism, wealth redistribution, and other bad things. We are talking about an explosive mixture of junk.
### Stephen Hawking's chair given to Michael Green
Several TRF readers were asking who would become the next Lucasian professor of mathematics in Cambridge, U.K., after Stephen Hawking's resignation. Stephen Hawking will lead his research center at the Perimeter Institute.
The only guess that other readers have offered was Michael Boris Green, a prominent figure of the first superstring revolution and a great superstring & supergravity expert. Ladies and Gentlemen, there were not too many excellent choices but this one became a reality. Congratulations.
Had I known that this physicists' chair was actually a mathematics chair, I would have told you that Michael Green was the only solution.
The chair was created in 1664. Green's predecessors include Isaac Newton, Charles Babbage, Paul Dirac, and Stephen Hawking. See:
TRF: 25 years of the first superstring revolution
TRF: The next revolution
TRF: Green's Strings 2008 talk
TRF: Non-decoupling of N=4 d=4 SUGRA
TRF: Finiteness of supergravity theories
TRF: A curious truncation
TRF: N=8 SUGRA & Lance Dixon's puzzle
TRF: Two roads from N=8 SUGRA to strings
TRF: 275 more texts with "Michael Green"
MBG's papers: SPIRES
MBG's papers: Google Scholar
News: BBC
News: Physics World
News: The Guardian
News: The Times
News: The Telegraph
News: Cambridge Evening News
The first superstring revolution was approximately as important for the progress in theoretical physics as Hawking's semiclassical calculations of the black hole evaporation. It just came one decade later.
### Cosmic rays probably drive tree growth directly
The BBC published an interesting article summarizing some recent scientific research that suggests that the tree growth rate is an increasing function of the cosmic-ray flux.
This relationship between the tree growth and the cosmic-ray flux actually seems to be stronger than the influence of both temperature and precipitation on the tree growth (in Scotland)! Well, after all, wood may grow much like tumors when a lot of ionizing radiation is around. ;-)
I think that if true, such a relationship could also help to explain the divergence problem.
Hat tip: Benny Peiser
## Monday, October 19, 2009 ... /////
### Klaus: Notes on the economic analysis of the global warming issue
Translation from Czech: L.M.
Instead of participating in the addition of further arguments of the philosophical i.e. unquantifiable type - which is what currently dominates the "ideological clash" between the champions of freedom on one side and the environmentalists and advocates of non-freedom on the other side, let us focus on some elementary economic data, hypotheses, theories, and models which underlie these big "confrontations". Maybe, exactly these considerations will convince a reader or two. Otherwise, the discussion resembles a "dialog of the deaf". It's self-evident that only a small wedge of all these problems has been selected for this article.
It is more than obvious that we are objects in a strange game that is being played with all of us. It is more than obvious that among those who are deciding about these issues on behalf of us, i.e. among the politicians, no genuine dialog about global warming or its costs is taking place - especially not about the costs of mitigation (and I know quite a bit about the situation). It is also more than obvious that a majority of the world's (and even our) politicians - without dedicating any time to a serious investigation of these questions - has concluded that the global warming game is an easy, politically correct, and personally highly beneficial card (which moreover guarantees that they're not and they will never be responsible for the costs of this fight because the costs will be covered by future generations).
### Dominika Stará: Je suis Malade
For Slovak and Czech readers: in the "fast comments" below, you may also write comments in your mother tongue. Dominika's homepage, including a guestbook, is at dominikastara.sk.
See also: Intro to Czecho-Slovak Superstar & Chodúr's "Supreme
See also: Dominika Stará vs Martin Chodúr
This song, originally by Dalida & Serge Lama (Italy/Egypt & France) and/or Lara Fabian (Belgium), was arguably the best performance of the last girls' semifinals of the Czecho-Slovak Superstar that Czechs and Slovak viewers watched last night: see the full video of the show (this song is in the fourth, last part at 1:21:30 or 1:20:00, including her introduction).
Ms Dominika Stará (SK) is just 16 even though her surname means "old". She dedicated the song to the old people because she knows how it feels to be born as old. ;-) Unlike most other contestants, she is a deeply believing, practicing Christian. Our compatriots may know the Czech version of the song, To mám tak ráda (That's What I Love So Much) by Ms Marie Rottrová, but Ms Stará was brave enough to choose the French version and she did very well. After all, she's been training this song since she was 11 years old. ;-)
The first YouTube video was rather quickly removed by the TV stations. If the video above is defunct again, please go to 1:21:30 of the official evening's video to hear Dominika Stará.
A very tough competition has emerged for Martin Chodúr (CZ) and Monika Bagárová (CZ). A report from Facebook:
## Friday, October 16, 2009 ... /////
### Astronomical and economical numbers
There are 10^{11} stars in the galaxy. That used to be a huge number. But it's only a hundred billion. It's less than the national deficit! We used to call them astronomical numbers. Now we should call them economical numbers.
Richard P. Feynman (1918-1988)
Well, it's actually 14.2 times lower than the U.S. budget deficit. At USD 1.42 trillion, the fiscal 2009 deficit represents 10% of the American GDP and triples the previous record - from the previous year.
The Milky Way must feel kind of ashamed. ;-) On the other hand, San Francisco held an anti-Obama protest.
### Is M-theory hiding Cayley plane fibers?
Exceptional algebraic structures are omnipresent in string theory and especially M-theory and F-theory, its (so far) maximally geometrized 11-dimensional and 12-dimensional formulations.
Right now, I plan to write a rather extensive text that should
• review some basic facts about the exceptional Lie groups and octonions
• mention some places in M-theory where these structures occur
• discuss the proposals by Ramond and others that a secret Cayley plane, or a 16-dimensional manifold called "OP^2", is hiding as a fiber in M-theory. After many hours of looking at many attractive possibilities, I became kind of skeptical about this very idea.
Real numbers, complex numbers, quaternions
All readers know the real numbers, R. Most people know the complex numbers, C, of the form "a+ib" where "a,b" are real and "i" is the imaginary unit that satisfies "i^2 = -1", the only thing you need to know.
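The next step on this ladder is the quaternions, H, with three imaginary units i, j, k that each square to -1 but no longer commute. As a warm-up illustration of my own (not part of the planned text), here is quaternion multiplication spelled out numerically:

```python
def qmul(a, b):
    """Multiply quaternions given as (w, x, y, z) tuples, i.e. w + xi + yj + zk."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

i = (0, 1, 0, 0); j = (0, 0, 1, 0); k = (0, 0, 0, 1)
print(qmul(i, j))  # (0, 0, 0, 1)   i*j = k
print(qmul(j, i))  # (0, 0, 0, -1)  j*i = -k: multiplication is not commutative
print(qmul(i, i))  # (-1, 0, 0, 0)  i^2 = -1
```

The octonions repeat the trick once more, with seven imaginary units, at the price of losing associativity as well.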
### F-theory papers go 3D
The prettiest hep-th preprint today is the last one and it was written by Clay Cordova at Harvard,
Decoupling Gravity in F-Theory (PDF)
The author, originally of Santa Fe, argues that in the context of F-theory phenomenology, there exist strong constraints on the singularities in the Calabi-Yau four-folds, coming from the condition that gravity decouples (a simplifying assumption, justified by the hugeness of the Planck mass i.e. gravity's approximate decoupling in reality). Geometrically, the condition means that the cycle S supporting the GUT group remains fixed in size while the total Calabi-Yau volume must be allowed to blow up.
A crash course on F-theory singularities and Fano threefolds is included. I guess that it may be useful for many readers to learn this material from Cordova. You know, if a high school student from 2003 knows such things, maybe we should follow, shouldn't we (despite the fact that all Fano threefolds are ultimately ruled out in the paper)? ;-)
But what is truly remarkable is the design of his paper. For example, this is the figure 1 with the general sketch of the branes within the F-theory compactification manifold and their intersections.
Click to zoom in a little bit. It's kind of logical that because F-theory is, in some sense, a 12-dimensional theory, the pictures should try to be multi-dimensional, too. ;-) Having read the paper, I can say this guy clearly knows what he's doing. He should only learn how to spell Planck correctly. :-)
## Thursday, October 15, 2009 ... /////
### CERN: LHC cooling completed
The 3-4 sector (below) was the last one that had to be brought to the working temperature.
On Thursday, October 15th, in the evening, the sector joined the remaining 7 sectors that had already been cooled down to 1.9 Kelvin! We were observing the temperature at this graph:
### Sarkozy: Sterling pounds Tony Blair's chances
In an interview for Le Figaro (EN; see also a report by Reuters), Nicolas Sarkozy repeats some of his vague threats against Czech President Václav Klaus.
Update: According to the newest poll from Oct 16th, 65% of the Czechs support Klaus in his Lisbon opposition. The supporters go across politically parties. Among older (55+ years) people, the support goes to 76%. Among younger (35- years) people, it is 57%. Similar percentages are afraid that the Beneš decrees could be breached in the new Lisbon arrangement. A big majority of people (74%) reject the idea to fire Klaus because of this issue.
The postmodern self-confident Napoleon is sure that he will solve the "problem" by the end of the year.
Well, if he's so certain about his miraculous powers, he can earn a lot of money. The Fortuna betting agency offers the following odds exactly for this question - whether Klaus will sign by the year end:
• 1.7 for 1: Klaus will not sign
• 1.9 for 1: Klaus will sign
The figures mean that if you invest one euro in the "No" answer, you will get 1.70 back (i.e. a 0.70 profit plus your 1.00) if your guess is right; otherwise you lose your investment. "1.7 for 1" is also called "0.7 to 1". If you invest one euro in the "Yes" answer, you will get 0.90 plus your 1.00 back. It follows that the bookmakers think it is more likely that Klaus will not sign the treaty by the year end. ;-)
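The probabilities implied by such decimal odds are easy to back out; a small sketch (because of the bookmaker's built-in margin, the raw implied probabilities sum to more than 100% and have to be renormalized):

```python
# Decimal odds of the Fortuna quotes: total payout per 1 euro staked.
odds = {"Klaus will not sign": 1.7, "Klaus will sign": 1.9}

# The raw implied probability of an outcome is 1 / decimal_odds.
implied = {k: 1.0 / v for k, v in odds.items()}
overround = sum(implied.values())  # > 1: the bookmaker's margin

# Normalizing away the margin gives the bookmaker's effective estimate.
fair = {k: p / overround for k, p in implied.items()}

for k in odds:
    print(f"{k}: {fair[k]:.1%}")
print(f"bookmaker margin: {overround - 1:.1%}")
```

With these numbers, "will not sign" comes out around 53% and "will sign" around 47%, consistent with the observation that the bookmakers lean toward "No".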
### Blog Action Day: against climate change hysteria
Google and a couple of other big companies have teamed up and declared October 15th, 2009 to be the annual Blog Action Day.
Bloggers are supposed to register with them and they should receive millions of visitors, i.e. thousands of visits per registered blog. Well, I have certain doubts that these figures are trustworthy so I have tried it.
The video above explains that all the registered bloggers can write about any topic they like. And they can write whatever they think about the topic. And the topic must be climate change and they must write that it is a threat. ;-) Google and others have become employees in this major new post-modern kind of irrational intellectual prostitution and co-culprits of the most intense global brainwashing campaign of the contemporary era.
If you happened to find this blog on the Blog Action Day website, that's great because there are 673 posts about the climate on this blog. Dozens of them include detailed, quantitative, and verifiable explanations why the global warming alarm is a gigantic hoax.
This blog is also read by many readers who have studied the climate science in quite some detail and who can answer your questions. Of course, it is conceivable that you - a visitor of the Blog Action Day - haven't gotten here at all because this website has been censored.
In that case, we don't have to tell you anything because you won't hear it anyway. :-)
Update: After an hour with no hits from them, I think it's safe to say that the non-alarmist blogs are being removed from their server.
### Snow clouds banned in Moscow
The expenses for cleaning Moscow from snow are equal to tens of millions of dollars a year.
Imagine that you're the mayor of Moscow and you're told that it costs 6 million dollars to "seed" all snow clouds during the winter, so that they drop their loads outside the city. The biggest clouds will be targeted by the air defense system.
What will you do? ;-)
## Wednesday, October 14, 2009 ... /////
### Snow returns to Pilsen
...and Austria sees record October snow...
Richard Müller: Snow.
Translation by L.M.
0:26 When there's snow
0:31 things are so clean.
0:35 Millions of white guarantees
0:40 are quietly falling to my legs.
0:45 And I am calling the Lord
0:49 to say what's in my heart:
0:52 let the angels flap
... their feather quilts for a long time.
... keep on flapping their feather quilts.
0:59 When there's snow,
1:03 it's so quiet.
1:07 Only breath can be heard.
1:13 And the beats of the heart: bim bum.
1:17 I am kissing every snowflake.
1:21 Under the legs, the little snow is
... melting like in children's picture booklets.
1:35 When there's snow,
1:39 everything's cracking down due to the frost.
1:42 January tears are dropping to the ground.
1:49 Underneath us everything's creaking gently.
1:52 On our little shoes, we have skis.
1:57 All of us are like in a fridge,
... whether we're saints or infidels.
2:43 We're so nicely clean and fresh,
2:47 when the snow is snowing everywhere to us.
2:51 We're so nicely clean and fresh,
2:55 when the snow is snowing everywhere to us.
2:59 We're so nicely clean and fresh,
3:03 when the snow is snowing everywhere to us.
3:07 We're so nicely clean and fresh,
3:11 when the snow is snowing everywhere to us.
3:14 We're so nicely clean and fresh,
3:18 [it's so clean around]
3:19 when the snow is snowing everywhere to us.
3:22 [when there's snow]
3:23 We're so nicely clean and fresh,
3:26 [it's so quiet]
3:27 when the snow is snowing everywhere to us.
3:30 [only breath]
3:31 We're so nicely clean and fresh,
3:34 [is usually heard]
3:35 when the snow is snowing everywhere to us.
3:38 [when there's snow]
3:39 We're so nicely clean and fresh,
3:42 [it's so clean]
3:43 when the snow is snowing everywhere to us.
3:46 [when there's snow]
3:47 We're so nicely clean and fresh,
3:50 [it's so quiet]
3:51 when the snow is snowing everywhere to us.
3:56 We're so nicely clean and fresh,
3:59 when the snow is snowing everywhere to us.
4:03 We're so nicely clean and fresh,
4:07 when the snow is snowing everywhere to us.
It's time for the annual deja vu. It was snowing in Pilsen. Yesterday and today. The volume was negligible in front of my windows. But friends and family members in other parts of Pilsen (and especially elsewhere in Central Europe) reported intense snow. The temperature stays near 0 °C, the freezing point.
Austria got record October snow, see Google News or Radio Netherlands.
Things are probably much worse in Poland and elsewhere, see Reuters India. See also a fresh video from the Moravian roads. Nearby boys suffering from dementia got the idea to confuse the seasons. :-)
You may remember the same posting in 2007 and 2008 except that now it comes one month earlier! The Reference Frame reported the first snow on November 11th, 2007 and November 21st, 2008. Not too surprisingly, I am not gonna claim that it is a proof of the coming ice age.
Below the break, I add one more classic song about snow, by Mr Jarek Nohavica (the video was created by an amateur).
### Die Welt: 65% of Germans think that Lisbon is bad
The number of articles found by Google News that talk about Václav Klaus has jumped from the typical figure around 400 to 2,000 in the last month. I suspect that regardless of the issues, the Czech president enjoys the surge. ;-)
The Times as well as Die Welt have informed their readers about rumors describing Klaus's explicit - although not quite public - pledges that he wouldn't ever sign the treaty. And it's obvious that there can hypothetically exist "counter-weapons" of the pro-Lisbon side that would make Klaus sign the document. No one can be sure what is going to happen.
### Let's celebrate gravity!
Steve Sailer
Science is finding evidence of gravity. This discovery should be embraced, not feared, say Bruce T. Lahn and Lanny Ebenstein.
A growing body of data is revealing the existence of gravity. It is now recognized that despite the many situations in which gravity is not relevant, in many others it is important (see box, page 728). The physical significance of gravity remains to be explored fully. But enough evidence has come to the fore to warrant the question: what if scientific data ultimately demonstrate that gravity exists at non-trivial levels? In our view, the scientific community and society at large are ill-prepared for such a possibility. We need a moral response to this question that is robust irrespective of what research uncovers about gravity. Here, we argue for the moral position that gravity, from within or between planets, should be embraced and celebrated as one of humanity’s, not to mention the Solar System's, chief assets.
## Tuesday, October 13, 2009 ... /////
### Causality, fate, and the arrow of time
Crazy papers about the destiny
A top science journalist in the New York Times has written a bizarre article about a couple of even stranger preprints by famous authors - including an early co-father of string theory - that have argued that a mysterious fate guarantees that any attempt to build the Superconducting Supercollider has to fail, and any other collider similar to the Large Hadron Collider has to break as well in order for us not to find the Higgs boson because the Higgs boson is the God particle and God wants to protect Her own face. Or something along these lines.
They apparently believe that there must exist a cosmic conspiracy that has guaranteed that Ronald Reagan's Superconducting Supercollider had to be killed by an inevitably emerging lack of interest from former vice-president Al Gore (who prefers junk science over big science) who teamed up with the usual G.O.P. suspects (who think that big science is the same thing as a big government).
I won't include links to these texts because they don't satisfy the intellectual criteria to be promoted at the TRF main page. But feel free to post the links in the comments. Instead, I want to explain some general facts about causality and the fate.
### Not Evil Just Wrong
Go to the website of the movie and learn how to help to organize the largest ever simultaneous film premiere on Sunday, October 18th, at 8pm Boston time. See the impressive current map of (not only) the U.S. premieres (zoom out).
Well, the beginning is conveniently on Monday at 3am Prague Summer Time - indeed, it's not yet finished. ;-) So I am afraid that Europe won't contribute much. But the American skeptics may want to promote this new movie!
### In search of the coming ice age
In this TV program, Stephen Schneider (3rd part, 6:05) and many others are worried about the climate change - more precisely about the coming ice age:
The video comes in three parts and lasts 9+4+8 = 21 minutes.
You may learn many things. The thriving life is here because of the warmth in the recent millions of years. But there's no doubt that the ice age will come again. When will it come? It has already begun 3,000 years ago. Unprecedented hunger and death will begin. A method to avoid the looming catastrophe could be to melt the sea ice and polar ice caps with nuclear bombs or megatons of dark soot. Listen to the rest...
Well, the difference from the contemporary programs of this type is XXX years. You can see that not much progress in the genuine science and technology of this discipline has occurred since MCMLXXVIII. In fact, you may say that this old program was showing much more detailed data, which the climate scientists were trying to explain with the best theory chosen from a set of many hypotheses. These days, they don't care much about the detailed data.
## Monday, October 12, 2009 ... /////
### Rachel Bean: GR is probably (98%) wrong
Sean Carroll has brought our attention to an astro-ph preprint by Rachel Bean,
A weak lensing detection of a deviation from General Relativity [GR] on cosmic scales
She looks at various correlations and auto-correlations in the WMAP, 2MASS, SDSS, COSMOS data concerning the integrated Sachs-Wolfe effect, galaxy distributions, weak lensing shear field, and the cosmic expansion history.
She calculates some chi-square distributions and decides that the fit is significantly improved if she allows the parameter "eta" - a ratio between "two kinds of the gravitational potential" which should be equal to 1 according to GR - to be adjustable, ideally to 3.5 or so. Such an improvement of the fit by the adjustment of "eta" to a wrong value shouldn't occur by chance. Quantitatively, she thinks that GR fails at the 98% confidence level.
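For readers who want to connect "fails at the 98% confidence level" to the chi-square language: when one extra parameter is freed, the improvement Δχ² of the best fit maps to a confidence level via the χ² distribution with one degree of freedom. A schematic stdlib-only check (the Δχ² value below is my illustrative choice, not a number taken from the paper):

```python
import math

def confidence_from_delta_chi2(delta):
    """Confidence level for one extra fitted parameter (chi^2, 1 dof).

    For 1 dof, sqrt(delta_chi2) is distributed like |N(0,1)|, so the
    p-value is erfc(sqrt(delta/2)) and the confidence is its complement.
    """
    return 1.0 - math.erfc(math.sqrt(delta / 2.0))

# A Delta chi^2 of about 5.4 (illustrative) corresponds to ~98% confidence:
print(f"{confidence_from_delta_chi2(5.41):.3f}")  # -> 0.980
```

So a "98%" claim of this kind corresponds to the fit improving by roughly five units of χ² when η is allowed to float.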
## Sunday, October 11, 2009 ... /////
### German, French units to storm the Prague Castle
Today, the Sunday Times published a provoking article:
Germans seek to oust Czech president Václav Klaus over EU treaty (main source)
See also: Reaction by the Czech Press Agency
A Telegraph blog reaction
It starts with a picture of somewhat happy Germans who are leaving the soon-to-be-communist-controlled Czechoslovakia after the war that they helped to start and that they just lost.
It's paradoxical but the expulsion into the soon-to-be-democratic West Germany has made most of their lives happier and richer. But I don't want to be excessively philosophical here.
### Cosmoclimatology: Svensmark et al. Forbush paper published
This is just a collection of links. The paper
Cosmic ray decreases affect atmospheric aerosols and clouds (PDF, full)
by Henrik Svensmark, Torsten Bondo, and Jacob Svensmark has appeared in
Geophysical Research Letters.
Ten weeks ago, we discussed the article at this blog. Very new media reports:
Science Daily
Fars News Agency (yes, that's Iran!)
Osel.cz (in Czech)
The alarmists paid by George Soros seem to be jealous and they ask: "Why the continued interest?" They mean interest in the mechanisms by Svensmark et al.
Well, because it seems to work, it seems to be justified by a flux of new articles, and because of reasons that are written in these articles. Because it may be the most important insight in climatology during the recent decades.
I think that Svensmark and a few others must feel somewhat unpleasantly because they have found something that may be a spectacular discovery in their discipline, and possibly the first discovery of this discipline that could deserve a Nobel prize.
Except that they simply can't get the deserved credit right now because their discipline has been hijacked by a political movement that prefers ideologically convenient opinions over solid and non-trivial insights that are likely to be true because they are justified by the empirical evidence.
I hope that the situation will change soon.
### Josef Váňa wins his 6th Velká Pardubická Steeple Chase
What is the ideal age for a jockey to win a horse race? Well, why don't you make a measurement by watching the 119th Velká Pardubická (Great Pardubice) Steeple Chase?
Josef Váňa is in the middle.
Yesterday, Mr Josef Váňa told his competitors that the only way to get rid of him was to shoot him. And he was damn right!
The familiar race in Pardubice, a town in Eastern Bohemia, was won by Mr Josef Váňa with horse Tiumen. The jockey will celebrate his 57th birthday in less than two weeks. It's his 6th victory in this steeple chase which brings his legend status to a brand new level. After President Klaus shared his compliments (and the cup) with Váňa right after the race, the jockey challenged the president for a tennis match. I would love to see it! :-)
At any rate: Congratulations. ;-)
### The Reference Frame: fifth birthday
Exactly five years ago, on October 11th, 2004, 12:08 am Central European Time (i.e. 10/10/04, 6:08 pm Boston Time), this blog began to entertain and enrich some of you, the world, and myself. The first posting was about
Future of physics at KITP,
a conference in Santa Barbara. As you can see, a largely defunct Columbia University blog that has always been hostile to theoretical physics was important to create this weblog as a kind of reaction.
You shouldn't be confused by dozens of other postings that were seemingly posted earlier: the dates were modified whenever I wanted a posting to be identifiable by Google but disappear from the main page of the blog.
The fifth birthday may deserve a pie with five candles. But believe me, you don't want to eat the pie on the picture above. Those thirty years ago, it was made out of plastic: professional photographers don't always enjoy the same degree of integrity as scientists. :-)
I couldn't sleep if I were hiding this detail from you!
## Saturday, October 10, 2009 ... /////
### BBC: What happened to global warming?
BBC has released an article by Paul Hudson:
What happened to global warming?
"This headline may come as a bit of a surprise," it says at the very beginning. You bet, Mr Hudson. It's a big surprise.
While the article repeats a lot of environmentalist propaganda - for example absurd claims that the "influence of solar activity on the climate was recently ruled out" - and it doesn't mention people like Svensmark or Shaviv, and it promotes the opinions of Latif or Corbyn instead, it is good that it was allowed to be born at all.
The article builds on the observation that 1998 - eleven years ago - was the warmest year so far. October 2009 is a somewhat paradoxical choice for such an article: cool years were followed by a pretty fast, abrupt, recent El Niño-related warming. Consequently, September 2009 was the second warmest September on the UAH and RSS records (after 1998) as well as the GISS record (after 2005).
Don't worry. The GISS anomaly would have to jump between 0.80 and 0.90 °C for the rest of the year for 2009 to beat 2005 as their warmest year. It won't happen. If you want to know, despite the recent warming, UAH shows the January-September 9-month period of 2009 to be 7th warmest among the Jan-Sep periods of years 1979-2009. The average anomalies (multiplied by 9, i.e. the sums) are
{-1.09, 0.95, 0.57, -1.44, 0.89, -2.06, -1.96, -1.24, 0.67, 1.41, -1.35, 0.24, 1.72, -1.79, -1.74, -0.26, 1.21, 0.07, -0.07, 5.31, 0.43, 0.22, 1.61, 3., 2.18, 1.6, 3.01, 2.22, 2.86, 0.01, 2.08}.
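The "7th warmest" claim can be checked directly from those 31 Jan-Sep sums (listed for the years 1979-2009 in order); a quick sketch:

```python
sums = [-1.09, 0.95, 0.57, -1.44, 0.89, -2.06, -1.96, -1.24, 0.67, 1.41,
        -1.35, 0.24, 1.72, -1.79, -1.74, -0.26, 1.21, 0.07, -0.07, 5.31,
        0.43, 0.22, 1.61, 3.0, 2.18, 1.6, 3.01, 2.22, 2.86, 0.01, 2.08]
years = range(1979, 2010)

# Sort the (sum, year) pairs from warmest to coolest Jan-Sep period.
ranking = sorted(zip(sums, years), reverse=True)
for rank, (s, y) in enumerate(ranking[:7], start=1):
    print(f"{rank}. {y}: {s:+.2f}")

rank_2009 = [y for _, y in ranking].index(2009) + 1
print(rank_2009)  # -> 7
```

The printed top seven confirm that 1998 leads by a wide margin and that 2009 indeed comes out 7th on this UAH-based list.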
Meanwhile, Australian defense officials remain unconvinced by the climate data.
The British government began its "Act on CO2" propagandistic campaign, in order to fight the growing skepticism among the citizens. After 10 years of intense brainwashing, most of them still think that global warming won't be a problem for them or their kids. The 2-star fairy-tale propagandistic video above is targeting 3-year-old girls as well as adults who are their intellectual equivalents.
For another result of the "Act on CO2" campaign, see this dramatic video. The narrator speaks like an excited general of an army who gives orders to the soldiers. The energy in the wires and CO2 emit strong light in the movie. Eventually, they melt the Earth. ;-)
Philip Stott compares the campaign to 1984, the book. Of course, unlike Orwell's world, our world often allows the people to find the right answers to the question, not just the government-paid untrue propaganda. So the U.K. government's investment is bound to be just a waste of money. They would have to execute millions of people, including Paul Hudson of BBC, to make a real difference.
Why? It's because one simple article by Paul Hudson compensates the lies in TV commercials that cost millions of pounds.
## Friday, October 09, 2009 ... /////
### LHC physicist arrested for Al Qaeda links
A broken LHC in 2008. Which of the people could be the culprit? Shift/click to zoom in.
Breaking news: yesterday, Adlene Hicheur, a 32-year-old visiting CERN employee working for the LHC, was arrested in Vienne, France, for his probable links to Al Qaeda and terrorist organizations in Algeria. He has a brother called Zitouni. No plot has been uncovered so far but the intention has been proved and the guy has confessed that he led a second life.
### Barack Obama wins 2009 peace Nobel prize
for his extraordinary efforts blah blah blah.
For a similar story, see: First-year grad student wins Nobel in economics
Scientific American links the award to Obama's anti-nuclear speech in Prague. Before Obama won the USD 1.5 million award, he diplomatically called his visit to Prague "a waste of time". ;-)
Barack Obama joins the group of left-wing U.S. politicians who have won the award. His equally famous colleague shared the 2007 peace Nobel prize:
This particular award has been a pathetic joke for quite some time and Obama actually ends up being one of the better recent picks. ;-)
Still, it's bad for the Norwegians to throw USD 1.5 million to someone for having no results, and here you have additional justifications in the Telegraph why Obama should turn it down.
### IBM DNA transistor
The first sequencing ever done in the Human Genome Project cost USD 3 billion.
By drilling tiny holes into computer-like chips, these IBM guys may reduce the cost "slightly", ideally to USD 1 thousand. That would make personalized medicine reality. See more comments and videos on IBM DNA transistors, Google News.
Via Viktor K.
## Thursday, October 08, 2009 ... /////
### Klaus will sign Lisbon for a footnote
Update: Polish EU Parliament boss Buzek argues that Klaus wants a Czech opt-out right for the Bill of Rights: see his new statement. It's been previously speculated by Polish newspapers that Klaus's proposed footnote is meant to guarantee that the Sudeten German material claims can't ever be revived, not even by people from other EU countries who don't know the Central European history well and who could make such a big mistake to try to revise our key legislation concerning the 1945 confiscation of assets of Germans except for antifascists. Well, even if that's true, it doesn't mean that this topic is Klaus's only or main point. I still feel that it's a randomly chosen topic meant to split the EU and defeat the treaty at the end.
The Swedes have created a Facebook group, Support Václav Klaus, and a related anti-Lisbon petition with 5,000+ signatures (now).
I have always thought that the Czech president is an ingenious kind of politician. He believes in great ideas and principles, and he is courageous enough to defend them. However, as far as I understand, he also plays politics like chess, and he is often able to defeat seemingly stronger and more numerous foes.
### Climate: Asian kids sue G8
The Irish Times informs that meters from the climate negotiators in Bangkok, Thailand (where the Big Cheeses are preparing for the big December 2009 climate talks in Copenhagen: Barcelona will be the only other preparation), children from Bangladesh, Indonesia, Nepal, the Philippines, and Thailand have been training for the imminent climate tribunal against the rich world:
G8 states could face class actions on climate change
It was not revealed whether the arrogant 13-year-old Indian bitch from the United Nations has also participated. ;-) You might think that these kids need a good spank except that it is not the kids themselves who are making this stuff up. These kids are just being brainwashed and manipulated by some people who should already be mature except that they are definitely not.
### Liquid nuclear battery
Two years ago, Viktor Kožený, a well-known Czech financier (and inventor!) living on the Bahamas, sent a proposal to create minuscule nuclear centrifuges, besides other nuclear technology, to your humble correspondent and others.
Many of us were laughing.
But I was reminded today that something remotely similar is being realized by researchers in Missouri. Their paper was rated as outstanding at a July 2009 conference. Radioisotope batteries can give you 100,000 times greater power than the chemical ones. These powerful ones are somewhat dangerous but you can create safe ones, with 1 Watt of power, that can last for a decade: see the picture above.
The semiconductors around them have to be liquid rather than solid, in order for their structure to be immune against the decay products. Why it's not as unsafe as the adjective "nuclear" instinctively leads most people to believe, and what the other issues are, can be read at
MU press release, PhysOrg, Next Big Future, Science Daily, Gizmodo, Crunch Gear.
Today, you may be disgusted by such gadgets, but in a few years, they may be inside most small devices that people will use. Jae Kwon, the main researcher behind the technology, plans to make the batteries thinner than a human hair in the future. ;-)
### Herta Müller: even an anti-communist can win the Nobel prize
Herta Müller (Germany),
a Romanian-born ethnic German poet and novelist who has been depicting the brutal conditions for life in Romania under Ceauşescu before she emigrated to Germany in 1987.
After years if not decades when the literature Nobel prizes were given exclusively to communists, feminists, terrorists, postmodernists, and similar stuff, that's quite a pleasant shock!
Müller is quoted as an author "who, with the concentration of poetry and the frankness of prose, depicts the landscape of the dispossessed." Congratulations!
(The photograph on the cover of the book on the right side that you will buy from amazon.com is by Mr Jan Saudek, a Czech photographer.)
### Google Maps: Prague StreetView goes live
Click to zoom in.
You may start e.g. near
the "statue of the horse" on the Wenceslaus Square, (click)
near the National Museum or the Old Town Square or the Lesser Quarter Square (in front of the dept. of maths and physics) or the math/physics student hostels in Trója or the math/physics building at Karlov.
## Wednesday, October 07, 2009 ... /////
### 2009 chemistry Nobel prize: ribosome
The 2009 Chemistry Nobel Prize went to Venkatraman Ramakrishnan (Cambridge U.K., born in India), Thomas A. Steitz (Yale), and Ada E. Yonath (Weizmann, Israel) for studies of the structure and function of the ribosome. That could look like a biological discovery but the methods were pretty "chemical" or even "physical".
This picture doesn't directly describe the work of the winners but it may be good to be reminded what a ribosome does.
Crystallographer Dr Yonath, whose victory was correctly predicted by some media, ended a 45+ year stretch in which no woman received a physics or chemistry Nobel prize. Whether or not she was chosen purely by meritocratic criteria, there's one aspect you could have expected. By pure statistics, it shouldn't be too shocking for you to learn that the first woman to succeed in this way is Jewish. ;-)
An Ashkenazi Jew is 40 times more likely to receive the Fields medal than a random non-Jewish white: it may be similar for similar awards. Because there are about 10 million Ashkenazi Jews, you may see that their combined odds exceed those of the non-Jewish U.S. whites. ;-)
Let's hope that Israel, an island of relative wisdom, peace, and advanced civilization, will survive in the sea of a relative lack of wisdom, peace, and advanced civilization. ;-)
### NASA: Spitzer finds giant ring around Saturn
NASA's Spitzer Space Telescope, soon to be superseded by the European Herschel, is looking at the Universe in the far infrared and submillimeter spectrum.
Artist's idealization: click to zoom in...
And imagine what happened when it looked closely at the neighborhood of Saturn.
There is a new giant ring around the planet whose radius is 300 times larger than the radius of Saturn. The ring, tilted by 27 degrees relative to the main ring plane, is too thin to visibly reflect the solar radiation, but the dust emits its own infrared, 80-Kelvin thermal radiation that could be seen by Spitzer.
# Chord BC
A circle k has its center at the point S = [0; 0]. Point A = [40; 30] lies on the circle k. How long is the chord BC if the midpoint P of this chord has the coordinates [-14; 0]?
Correct result:
x = 96
#### Solution:
$r=\sqrt{(40-0)^2+(30-0)^2}=50$

$x_{0}=|-14|=14$

$(x/2)^2+x_{0}^2=r^2$

$x=2\sqrt{r^2-x_{0}^2}=2\sqrt{50^2-14^2}=96$
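The computation can be checked with a few lines of Java (the class and method names here are ours, introduced only for illustration):

```java
public class ChordLength {
    // Chord length from the radius r and the distance d from the
    // circle's center to the chord's midpoint (Pythagorean theorem:
    // (x/2)^2 + d^2 = r^2).
    static double chordLength(double r, double d) {
        return 2 * Math.sqrt(r * r - d * d);
    }

    public static void main(String[] args) {
        double r = Math.hypot(40, 30);  // A = [40; 30] lies on k, so r = 50
        double d = Math.hypot(-14, 0);  // distance from S = [0; 0] to P = [-14; 0]
        System.out.println(chordLength(r, d)); // prints 96.0
    }
}
```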
## Next similar math problems:
• Vector 7
Given vector OA(12,16) and vector OB(4,1). Find vector AB and vector |A|.
• Euclid2
In right triangle ABC with right angle at C is given side a=27 and height v=12. Calculate the perimeter of the triangle.
• ABS CN
Calculate the absolute value of complex number -15-29i.
• Segment
Calculate the length of the segment AB, if the coordinates of the end vertices are A[10, -4] and B[5, 5].
• Isosceles IV
In an isosceles triangle ABC is |AC| = |BC| = 13 and |AB| = 10. Calculate the radius of the inscribed (r) and described (R) circle.
• RT triangle and height
Calculate the remaining sides of the right triangle if we know side b = 4 cm long and height to side c h = 2.4 cm.
• Equation of circle
find an equation of the circle with indicated properties: a. center at (-3,5), diameter 20. b. center at origin and diameter 16.
• Chord - TS v2
The radius of circle k measures 87 cm. Chord GH = 22 cm. What is TS?
• Two parallel chords
The two parallel chords of the circle have the same length of 6 cm and are 8 cm apart. Calculate the radius of the circle.
• Medians and sides
Determine the size of a triangle KLM and the size of the medians in the triangle. K=(-5; -6), L=(7; -2), M=(5; 6).
• RT and circles
Solve right triangle if the radius of inscribed circle is r=9 and radius of circumscribed circle is R=23.
• Distance
Calculate distance between two points X[18; 19] and W[20; 3].
• Circle - AG
Find the coordinates of circle and its diameter if its equation is: ?
• Center
Calculate the coordinates of the circle center: ?
• Triangle IRT
In isosceles right triangle ABC with right angle at vertex C is coordinates: A (-1, 2); C (-5, -2) Calculate the length of segment AB.
• Triangle ABC
In a triangle ABC with the side BC of length 2 cm, point K is the middle point of AB. Points L and M split the AC side into three equal lines. KLM is an isosceles triangle with a right angle at the point K. Determine the lengths of the sides AB, AC of triangle ABC.
• A cell tower
A cell tower is located at coordinates (-5, -7) and has a circular range of 12 units. If Mr. XYZ is located at coordinates (4,5), will he be able to get a signal?
ColdFusion's cfquery failing silently
I have a query that retrieves a large amount of data.
<cfsetting requesttimeout="9999999" >
<cfquery name="randomething" datasource="ds" timeout="9999999" >
SELECT
col1,
col2
FROM
table
</cfquery>
<cfdump var="#randomething.recordCount#" /> <!---should be about 5 million rows --->
I can successfully retrieve the data with python's cx_Oracle and using sys.getsizeof on the python list returns 22621060, so about 21 megabytes.
ColdFusion does not return an error on the page, and I can't find anything in any of the logs. Why is cfdump not showing the number of rows?
The reason for doing it this way is that I have about 8,000 smaller queries to run against the randomething query. When I run those 8,000 queries against the database directly, the process takes hours to complete. I suspect this is because I am competing with several other database users, and the database is getting bogged down.
The 8000 smaller queries are getting counts of col1 over a period of col2.
SELECT
count(col1) as count
FROM
table
WHERE
col2 < 20121109
AND
col2 > 20121108
I also started playing around with the maxrows attribute to see if I could discern any information that way.
• when maxrows is set to 1300000 everything works fine
• when maxrows is 1400000 or greater I get this error
• when maxrows is 2000000 I observe my original problem
Update
So this isn't a limit of cfquery. By using QueryNew and then looping over it to add data, I can get well past the 2 million mark without any problems.
I also created a ThinClient datasource using the information in this question, I didn't observe any change in behavior.
The messages on the database end are
SQL*Net message from client
and
SQL*Net more data to client
I just discovered that by using the thin client along with blockfactor="100" I can retrieve more rows (appx. 3,000,000).
-
I've found that most times with large data sets, it isn't the query but cfdump. I'd be willing to wager your paycheck that the data is coming back and the browser is crashing trying to render that many records with cfdump (which is a javascript heavy, inline style heavy, mess). There is no error because cf didn't error, the browser just can't output a dump that large. Try a simple <cfoutput>#randomthing.recordCount#</cfoutput> instead and see if you get anything. – Travis Nov 9 '12 at 14:51
@Travis, he is dumping randomething.recordCount, which will only output one value. If he were dumping the entire recordset, there would definitely be an issue with rendering. – Sean Walsh Nov 9 '12 at 19:14
My next stack exchange question will be "how four to do I reed?" – Travis Nov 9 '12 at 19:31
How long is the query taking to run? Any difference if you add blockfactor="100" to the cfquery? When you're dealing with that many records, ColdFusion is probably not the right tool. – Al E. Nov 10 '12 at 4:05
Have/Can you run ColdFusion server monitor while you are running this? Might show something non-obvious. – Barry Nov 29 '12 at 4:17
Is there anything logged on the DB end of things?
I wonder if the timeout is not being respected, and JDBC is "hanging up" on the DB whilst it's working. That's a wild guess. What if you set a very low timeout - eg: 5sec - does it error after 5sec, or what?
The browser could be timing out too. What say you write something to a log before and after the <cfquery> block, with <cflog>. To see if the query is eventually finishing.
I have to wonder what it is you intend to do with these 22M records once you get them back to CF. Whatever it is, it sounds to me like CF is the wrong place to be doing whatever it is: CF ain't for heavy data processing, it's for making web pages. If you need to process 22M records, I suspect you should be doing it on the database. That said, I'm second-guessing what you're doing with no info to go on, so I presume there's probably a good reason to be doing it.
-
Not 22 million records, 22 megabytes worth of data. That shouldn't really be a problem should it? The reason for doing it this way is because I have about 8000 smaller queries to run against the randomthing query. In other words when I run those 8000 queries against the database it takes hours for that process to complete. I suspect this is because several people besides me are hitting the database at once. – John Nov 9 '12 at 13:20
Ah sorry, misread. Phew! Even still, running 8000 queries on randomThing sounds like work for the DB server not the CF server. Is there a specific reason why you're involving CF in this? Did you try that other stuff I mentioned? – Adam Cameron Nov 9 '12 at 13:32
can you post what your smaller update query is? There may be a way to rewrite it using an UPDATE statement – Matt Busche Nov 9 '12 at 14:10
@AdamCameron, According to your cflog suggestion it appears that the query isn't finishing. I tried changing the queries timeout both in the code and in CFIDE/administrator regardless of what I tried I couldn't get the query to timeout. The 8000 smaller queries are getting counts of col1 over a period of col2. SELECT count(col1) WHERE col2 < 20121109 AND col2 > 20121108 – John Nov 9 '12 at 18:36
OK, that's interesting. I have to say I've never used the timeout attribute of <cfquery>, but it suggests there's a bug there. How long does the query take to run via that Python route you mentioned? or natively in SQL Developer / TOAD / whatever-people-use-on-Oracle-these-days? Are you using the drivers that ship with CF? Maybe try some from Oracle. [...] – Adam Cameron Nov 9 '12 at 20:43
Have you tried wrapping your cfquery within cftry tags to see if that reports anything?
<cfsetting requesttimeout="600" >
<cftry>
<cfquery name="randomething" datasource="ds" timeout="590" >
SELECT
col1,
col2
FROM
table
</cfquery>
<cfdump var="#randomething.recordCount#" /> <!--- should be about 5 million rows --->
<cfcatch type="any">
<cfdump var="#cfcatch#">
</cfcatch>
</cftry>
-
This is just an idea, but you could give it a go:
You mention that using QueryNew you can successfully add the more-than-two-million records you need.
Also that when your maxRows is less than 1,300,000 things work as expected.
So why not first run a count(*) query to get the total number of records in the table, divide by a million and round up, then cfloop over that number, executing a query with maxRows=1000000 and startRow=(((i - 1) * 1000000) + 1) on each iteration...
ArrayAppend each query from within the loop to an array then when it's all done, loop over your array pushing the records into a new Query object. That way you end up with a query at the end containing all the records you were trying to retrieve.
You might hit memory issues, and it will not perform all that well, but hey - this is Coldfusion, those are par for the course, and sometimes crazy things happen / work.
(You could always append the results of each query to the one you're building up from QueryNew as you go rather than pushing each query onto an array, but it'll be easier to debug and see how far you get if it doesn't work if you build an array as you go.)
(Also, using multiple queries within the size that it CF can handle, you may then be able to execute the process you need to by looping over the array and then each query, rather than building up one massive query - would save processing time and memory, but depends on whether you need the full results set in a single Query object or not)
-
p.s. please take my poke at CF here with a sense of humour - my dev team has been working with it for about ten years and we've covered the full spectrum of loving and hating (not always in that order) it in that time. The other comments about it not being the right tool for the job are probably right, but sometimes it's the tool you have... – Jed Watson Nov 29 '12 at 14:09
if your date ranges are consistent, i would suggest some aggregate functions in sql instead of having cf process it. something like:
select col1, count(col1), year(col2), month(col2)
from table
group by year(col2), month(col2)
order by year(col2), month(col2)
add day() if you need that detail level, too. you can get really creative with date parts.
this should greatly speed up the entire run time, reduce the main query size.
-
Your problem here is that ColdFusion cannot time out SQL. This has always been an issue since CF6, I believe. So basically what is happening is that the cfquery is taking longer than 9999999 seconds, but CF cannot time out JDBC, so it waits until afterwards, tries to run cfdump (which internally uses cfoutput), and this is reported as timing out because the request is now considered to have run too long.
As Adam pointed out, whatever you are trying to do is too large for CF to realistically handle and will either need to be chopped up into smaller jobs or entirely handled in the DB.
-
You're telling me that my query is taking longer than 115.7 days? – John Nov 16 '12 at 19:39
There are probably some sensible internal maximums that we are unaware of. All I can tell you is what my experience is with over 10 years of using CF. – baynezy Nov 19 '12 at 6:31
So as it turns out the server was running out of memory, apparently cfquery takes up quite a bit more memory than a python list.
It was Barry's comment that got me going in the right direction, I didn't know much about the server monitor up until this point other than the fact that it existed.
As it turns out I am also not very good at reading, the errors that were getting logged in the application.log file were
GC overhead limit exceeded The specific sequence of files included or processed is: \path\to\index.cfm, line: 10
and
Java heap space The specific sequence of files included or processed is: \path\to\index.cfm
I'll end up going with Adam's suggestion and let the database do the processing. At least now I'll be able to explain why things are slow instead of just saying, "I don't know".
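As an aside (not part of the original thread): "GC overhead limit exceeded" and "Java heap space" are standard JVM out-of-memory symptoms, and the usual first-line mitigation in ColdFusion is to raise the maximum heap in the server's jvm.config. The path and values below are illustrative only and vary by ColdFusion version:

```
# Illustrative fragment of jvm.config (e.g. under {cf_root}/runtime/bin/).
# Keep your existing java.args flags and just raise -Xmx (max heap)
# and optionally -Xms (initial heap):
java.args=-server -Xms512m -Xmx2048m
```

This only buys headroom, of course; pulling millions of rows into a CF query object will still scale poorly compared to doing the aggregation in the database.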
-
## Codility Ferrum 2018 Challenge
This post isn’t really on topic, but anyone with a general interest in algorithms might find it useful.
The following is my entry for Codility’s LongestNonNegativeSumSlice challenge. The temporal and spatial complexity are both O(N). The solution uses a combination of dynamic programming and prefix summation.
The problem definition: for a given array input parameter (a sequence of integers whose elements may only take the values -1, 0 and 1), return the size of the longest contiguous slice of the array with a sum greater than or equal to zero. For example, the longest such slice in [-1, -1, 0, 0, 1] has length four, because the sum of the second through fifth elements is zero.
This problem really highlights the importance of considering the specific details of the problem carefully before implementing a solution. It’s very easy to implement a brute force solution with temporal complexity on the order of $N^2$ or even chase red herrings by looking at the solutions for other similar looking problems, such as those looking for the size of the slice with the maximum sum or the same problem allowing input values from a wider range of values.
Ultimately, the key to the problem lies in the restricted range of possible array element values. Because the smallest and largest values that can appear in the array are -1 and 1 respectively, the prefix sum can only change by at most 1 in either direction at each step. Therefore, if you encounter a particular negative value more than once in the prefix sum, the slice starting at the element after the index recorded the first time you encountered that value and ending at any subsequent index where you encounter it again necessarily has a sum of 0.
If the input values are [-1,-1, 0, 0, 0, 1], the prefix sum sequence runs as -1, -2, -2, -2, -2, -1. From this sequence, you can see that the element following the first -1 prefix-sum value is the second and the element presenting the subsequent -1 prefix-sum value is the sixth. The size of the sub-array (slice) contributing to this sum is 5 (the number of elements between the second and sixth inclusive). We can also see that the sum of the values from the third to fourth and third to fifth are zero, using the same technique.
There is, of course, an edge case where the prefix-sum value does not fall below zero before the largest sub-array with a sum >= 0 is found. For example: [ 0, 0, 0, 0, 0,-1], [ 0, 0, 0, 0, 0, 0] or [ 1,-1, 1,-1, 1,-1,-1]. For this reason, we also need to keep track of the edge case where the prefix sum is >= 0.
The solution — written in Java here — is relatively simple. A hash map keeps track of the first index location in which each of the negative prefix-sum values are encountered. Each time we re-encounter a previously recorded negative prefix-sum value, we know we have found a sum of zero, so we check whether the size of the sub-array that produces the zero sum value is greater than the incumbent maximum slice and if it is, it becomes the incumbent maximum slice. If the prefix sum is >= 0, then the current index (plus one if the array is zero indexed) becomes the maximum slice.
import java.util.HashMap;
import java.util.Map;
class Solution {
    public int solution(int[] A) {
        int sum = 0;       // running prefix sum
        int maxslice = 0;  // length of the longest qualifying slice found so far
        Map<Integer,Integer> sumindex = new HashMap<Integer,Integer>();
        for(int i = 0; i < A.length; i++) {
            sum += A[i];
            if(sum >= 0)
                maxslice = i + 1;  // the whole prefix qualifies
            else if(sumindex.containsKey(sum))
                maxslice = Math.max(maxslice, i - sumindex.get(sum));
            else
                sumindex.put(sum, i);  // first occurrence of this negative prefix sum
        }
        return maxslice;
    }
}
This implementation iterates over the list of array elements in A once (N iterations). The containsKey and get methods of the HashMap have O(1) temporal complexity, so the temporal complexity of this implementation is therefore O(N) despite the challenge statement predicting worst-case complexity of O(N log N). Spatial complexity is O(N), which is in line with Codility’s expectation for worst-case space complexity. The entry received a score of 100% for correctness and scalability.
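As a quick sanity check, here is the same algorithm wrapped in a small demo harness (the harness class is ours, not part of the original submission), run against the worked examples from above:

```java
import java.util.HashMap;
import java.util.Map;

public class SolutionDemo {
    // Same HashMap-based algorithm as above, reproduced so the demo is self-contained.
    static int solution(int[] A) {
        int sum = 0;
        int maxslice = 0;
        Map<Integer, Integer> sumindex = new HashMap<>();
        for (int i = 0; i < A.length; i++) {
            sum += A[i];
            if (sum >= 0)
                maxslice = i + 1;
            else if (sumindex.containsKey(sum))
                maxslice = Math.max(maxslice, i - sumindex.get(sum));
            else
                sumindex.put(sum, i);
        }
        return maxslice;
    }

    public static void main(String[] args) {
        System.out.println(solution(new int[]{-1, -1, 0, 0, 1}));    // prints 4: elements 2..5 sum to 0
        System.out.println(solution(new int[]{-1, -1, 0, 0, 0, 1})); // prints 5: elements 2..6 sum to 0
        System.out.println(solution(new int[]{0, 0, 0, 0, 0, -1}));  // prints 5: the prefix-sum >= 0 edge case
    }
}
```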
Post Edit – March 28, 2018:
Beck has pointed out that, since the number of elements in sumindex can’t exceed the size of the input array, we could replace the HashMap with a regular array. In doing so, we just have to index by the absolute value of the sum minus one (to account for the zero indexation of the array) rather than the sum itself. The actual sum will be negative any time we want to use the array.
The following is an updated implementation:
import java.util.Arrays;

class Solution {
    public int solution(int[] A) {
        int sum = 0;
        int maxslice = 0;
        int[] sumindex = new int[A.length];
        // Fill with -1 as a sentinel: 0 is a valid array index, so a zero-filled
        // array would mistake a sum first seen at index 0 for "not yet seen".
        Arrays.fill(sumindex, -1);
        for(int i = 0; i < A.length; i++) {
            sum += A[i];
            int idx = Math.abs(sum) - 1;
            if(sum >= 0)
                maxslice = i + 1;
            else if(sumindex[idx] != -1)
                maxslice = Math.max(maxslice, i - sumindex[idx]);
            else
                sumindex[idx] = i;
        }
        return maxslice;
    }
}
Post Edit – April 16, 2018:
Alternatively, to remove the need to calculate the idx variable, negate the sum and use that for indexation:
import java.util.Arrays;

class Solution {
    public int solution(int[] A) {
        int sum = 0;
        int maxslice = 0;
        int[] sumindex = new int[A.length];
        // Fill with -1 as a sentinel, for the same index-0 reason as above.
        Arrays.fill(sumindex, -1);
        for(int i = 0; i < A.length; i++) {
            sum -= A[i];
            if(sum <= 0)
                maxslice = i + 1;
            else if(sumindex[sum-1] != -1)
                maxslice = Math.max(maxslice, i - sumindex[sum-1]);
            else
                sumindex[sum-1] = i;
        }
        return maxslice;
    }
}
}
# Introduction
As promised in part one, this second part details a Java implementation of a multilayer perceptron (MLP) for the XOr problem. In fact, as you will see, the core classes can implement any MLP with a single hidden layer.
First, it will help to introduce a quick overview of how MLP networks can be used to make predictions for the XOr problem. For a more detailed explanation, please review part one of this post.
The image at the top of this article depicts the architecture for a multilayer perceptron network designed specifically to solve the XOr problem. It contains two integer inputs that can each hold the value of 1 or 0, a hidden layer with two units (not counting the dashed bias units, which we will ignore for now for the sake of simplicity) and a single output unit. Once the network is trained, the output unit should predict the output you would expect if you were to parse the input values through an XOr logic gate. That is, if the two input values are not equal (1, 0 or 0, 1), the output should be 1; otherwise the output should be 0.
Forward propagation is completed first, which parses the input values through the network and eventually to the output unit, making a prediction of what the output should be, given the input. This is then compared to the expected output and the prediction error is calculated. Part of the process of passing through the network involves multiplying the values by small weights. If the weights are correctly configured, the output prediction will be correct.
This configuration is not a manual process, but rather something that occurs automatically through a process known as backward propagation. Backward propagation considers the error of the prediction made by forward propagation and parses those values backwards through the network, adjusting the weights to values that slightly improve prediction accuracy.
Forward and backward propagation are repeated for each training example in the dataset many times until the weights of the network are tuned to values that result in forward propagation producing accurate output.
The following sections will provide more detail on how the process works by detailing a fully functional Java implementation.
# Class Structure and Dependencies
The implementation consists of four classes. These include ExecuteXOr, which is the highest level class containing all of the XOr-specific code; MultiLayerPerceptron, which contains all of the code used to implement a generic multilayer perceptron with a single hidden layer; iActivation, the activation interface; and SigmoidActivation, a concrete activation class implementing the sigmoid activation function.
# Class: ExecuteXOR
The first class we will review is ExecuteXOr. This is the highest level class, containing the main method used to kick off the experiment. Here the user defines the dimensions of the input and output values and instantiates the activation class and MultiLayerPerceptron class. Note that, for the XOr problem, we have two input units, two hidden units and one output unit. As such, X is a two-dimensional array, y is a one-dimensional array and the first three input parameters for the MultilayerPerceptron class denote the dimensions of the neural network. These are set to 2 (input layer), 2 (hidden layer) and 1 (output layer).
package Experiments;

import NeuralNets.MultilayerPerceptron;
import Activation.SigmoidActivation;

public class ExecuteXOr {
    public static double[][] X;            // Matrix to contain input values
    public static double[] y;              // Vector to contain expected values
    public static MultilayerPerceptron NN; // Neural Network
The main method sets the dimensions of the X and y arrays and instantiates the MultilayerPerceptron class, before kicking off the methods used to initialise the dataset, then train and test the network. We have chosen sigmoid as the activation function in this case. However, as implied by the existence of the iActivation interface, there are many other possible activation functions.
The first three parameters of the MultilayerPerceptron constructor define the dimensions of the network. In this case we have defined two input units, two hidden units and one output unit, as is required for this architecture.
The input parameters for trainNetwork define the maximum number of iterations to complete during training and the target error rate. In this case, no more than 200 thousand iterations (epochs) will be completed and training will also stop if the error rate drops to 0.01 or less.
public static void main(String[] args) {
    X = new double[4][2]; // Input values
    y = new double[4];    // Target values
    // Instantiate the neural network class
    NN = new MultilayerPerceptron(2, 2, 1, new SigmoidActivation());
    initialiseDataSet();
    trainNetwork(200000, 0.01);
    testNetwork();
}
The initialiseDataSet method provides the values for the X and y arrays where X contains all possible input values and y contains the expected output, given each set of input values. See part one of this post for more details on the structure of the XOr problem to understand why these values are as they are, but it is important to understand that this is an exhaustive list of possible inputs and outputs, rather than a sample of possible values. Because we’re not using a sample of possible values we need not be concerned about overfitting and thus do not require separate training and testing datasets.
private static void initialiseDataSet(){
    X[0][0] = 0;
    X[0][1] = 0;
    y[0] = 0;

    X[1][0] = 0;
    X[1][1] = 1;
    y[1] = 1;

    X[2][0] = 1;
    X[2][1] = 0;
    y[2] = 1;

    X[3][0] = 1;
    X[3][1] = 1;
    y[3] = 0;
}
For each iteration (epoch), the trainNetwork method iterates over every training example, completing a forward propagation and a backward propagation step for each before updating the weights. Weights are updated with a learning rate of 0.1, which slows the rate at which the weights of the network change. This helps to prevent the system from overshooting the ideal weight setup during updates.
Another potential issue to rectify is the tendency, depending on how the weights are initialised, for the algorithm to occasionally enter a search space that is difficult to exit, resulting in very slow progress in training. To avoid these scenarios, I have borrowed a method from combinatorial optimisation known as random restart. After every 500th epoch, a check is performed to see if the error score is improving too slowly. If progress is extremely slow, the weights are reinitialised, effectively kicking the incumbent solution into a new neighbourhood from which we will hopefully achieve better training performance.
At each epoch a check is also performed to see if the current error has dropped below the target. If so, the weights we have identified are close enough to the optimal configuration to produce highly accurate predictions, so the method is exited.
private static void trainNetwork(int epochs, double targetError){
    double error = 0;
    double baseError = Double.POSITIVE_INFINITY;
    // Iterate over the number of epochs to be completed
    for(int epoch = 0; epoch < epochs; epoch++){
        error = 0;
        // Iterate over all training examples
        for(int i = 0; i < X.length; i++){
            // Feed forward
            NN.forwardProp(X[i]);
            // Run backpropagation and add the squared error to the sum of error for this epoch
            error += NN.backProp(y[i]);
            // Update the weights
            NN.updateWeights(0.1);
        }
        // Every 500th epoch check whether progress is too slow and if so, reset the weights
        if(epoch % 500 == 0) {
            // If progress has stalled
            if(baseError - error < 0.00001) {
                NN.kick(); // Kick the candidate solution into a new neighbourhood with random restart
                baseError = Double.POSITIVE_INFINITY;
            }
            else
                baseError = error; // Record the base error
        }
        // Print the sum of squared error for the current epoch to the console
        System.out.println("Epoch: " + epoch + " - Sum of Squared Error: " + error);
        // If the error is smaller than the target, stop training
        if(error < targetError)
            break;
    }
}
Once the network has been trained, we can test its accuracy. The testNetwork method iterates over each of the examples in X and runs forward propagation, which returns the neural network’s prediction for what the output should be. The prediction is in the form of a value falling between something very close to zero and something very close to one. Any value of 0.5 or higher is deemed to predict 1, while anything lower than 0.5 is deemed to predict 0. This is compared to the actual expected output and the proportion of correct predictions (prediction accuracy) is returned to the console. If the network is working correctly, the returned value should always be 1.0.
private static void testNetwork(){
    double correct=0;
    double incorrect=0;
    double output=0;
    // Iterate over the testing examples (which happen to double as training examples)
    for(int i = 0; i<X.length; i++){
        // Feed forward to get the output for the current example
        output=NN.forwardProp(X[i])[0];
        // If the output is >= 0.5, we deem the output to be 1.0
        if(output>=0.5)
            output=1.0;
        else // Otherwise it is 0.0
            output=0.0;
        // If the output value matches the target
        if(output==y[i])
            correct++; // Increment the number of successful classifications
        else
            incorrect++; // Increment the number of unsuccessful classifications
    }
    // Print the test accuracy to the console
    System.out.println("Test Accuracy: " + String.valueOf((correct/(incorrect+correct))));
}
}
# Interface: iActivation
The iActivation interface defines the methods that must be implemented by any activation class. The only two methods required are getActivation and getDerivative.
An implementation of the getActivation method takes as input the weighted sum of values arriving at a single unit and outputs the activation value for that unit. This is used in forward propagation to compress the sum of values received by a unit, usually into a value very close to 0 or 1. However, there are exceptions, such as the tanh activation function, which compresses the input value into a value between -1 and 1.
The getDerivative method represents the computed derivative of the activation function. This is used by backward propagation to influence how the weights are to be updated in order to change them to a value that slightly improves the accuracy of the network. The function determines the direction in which the weights should be changed. How the partial derivative is computed is beyond the scope of this post, but anyone interested may wish to review this report.
package Activation;
public interface iActivation {
double getDerivative(double x);
double getActivation(double x);
}
# Class: SigmoidActivation
The SigmoidActivation class is an implementation of iActivation, implementing the activation and derivative Sigmoid functions.
The Sigmoid activation function is $g(x)=\frac{1}{1+e^{-x}}$, while its derivative is $g'(x)=g(x)(1-g(x))$. Note that the getDerivative implementation below receives the unit's activation value as its argument, which is why it is computed as ${x(1-x)}$, where $x$ stands for the activation output.
Plotting the Sigmoid function clearly shows how it converts almost all presented values to something very close to 0 or 1. The value 0 is converted to 0.5, but any values larger or smaller quickly approach the plateaus toward the left and the right of the plot.
The derivative calculates the direction of the slope of this line (gradient), making it possible to force the weight updates into a direction that improves the accuracy of the classification process. Because different activation functions compress their input values in different ways, they require different derivative functions to calculate their respective gradients. The use of polymorphism here allows for the potential implementation of any activation function to be encapsulated alongside its own derivative function.
package Activation;
public class SigmoidActivation implements iActivation {
@Override
public double getDerivative(double x) {
return (x * (1.0-x));
}
@Override
public double getActivation(double x) {
double expVal=Math.exp(-x);
return 1.0/(1.0+expVal);
}
}
# Class: MultilayerPerceptron
The MultilayerPerceptron class encapsulates all of the data for the neural network and the methods required to initialise and propagate the network.
Data held by the class includes the dimensions of the network (inputCount, hiddenCount and outputCount), the first and second layers of weights (W1 and W2), the first and second layers of delta weights (DW1 and DW2), the pre-activation values for each of the units (Z1 and Z2), the input values (InputValues) and the activation values output by each subsequent layer of the network (HiddenValues, OutputValues).
package NeuralNets;
import java.util.Random;
import Activation.iActivation;
public class MultilayerPerceptron {
int inputCount; // Number of input units
int hiddenCount; // Number of hidden units
int outputCount; // Number of output units
// Weight matrices
double[][] W1; // First Layer of Weights
double[][] W2; // Second Layer of Weights
// Delta weight matrices
double[][] DW1; // First Layer of Delta Weights
double[][] DW2; // Second Layer of Delta Weights
// The values at the time of activation (the values input to the hidden and output units)
double[] Z1; // First Layer pre-activation values
double[] Z2; // Second Layer pre-activation values
iActivation activation=null; // Activation Class
// The values actually output by the units in each layer after being squashed by the activation function
double[] InputValues; // Inputs layer Values
double[] HiddenValues; // Hidden layer Values
double[] OutputValues; // Output layer Values
The constructor method for the class receives the dimensions of the network, along with the activation function, as input and proceeds to use that information to initialise the variables that are global to the class.
public MultilayerPerceptron(int inputCount, int hiddenCount ,int outputCount, iActivation activation){
this.inputCount=inputCount;
this.hiddenCount=hiddenCount;
this.outputCount=outputCount;
this.activation=activation;
// Initialise the first layer weight and delta weight matrices (accounting for bias unit)
W1 = initialiseWeights(new double[hiddenCount][inputCount+1]);
DW1 =initialiseDeltaWeights(new double[hiddenCount][inputCount+1]);
// Initialise the second layer weight and delta weight matrices (accounting for bias unit)
W2 = initialiseWeights(new double[outputCount][hiddenCount+1]);
DW2 = initialiseDeltaWeights(new double[outputCount][hiddenCount+1]);
// Initialise the activation vectors
Z1 = initialiseActivations(new double[hiddenCount]);
Z2 = initialiseActivations(new double[outputCount]);
// Initialise the hidden and output value vectors (same dimensions as activation vectors)
OutputValues=Z2.clone();
HiddenValues=Z1.clone();
}
The initialisation methods randomly initialise the weight arrays (W1 and W2) within a reasonable range of small values scaled by the number of input units. All other arrays are initialised with zero values. The kick method exists to accommodate the random restart process, reinitialising the weight arrays with random values. It is called when the network is stuck in a search neighbourhood where training progresses at an extremely slow rate.
private double[][] initialiseWeights(double w[][]){
    Random rn = new Random();
    double offset = 1/(Math.sqrt(inputCount));
    for(int i=0; i<w.length; i++){
        for(int j=0; j<w[i].length;j++){ // Includes the bias column
            w[i][j]=offset-rn.nextDouble();
        }
    }
    return w;
}
private double[][] initialiseDeltaWeights(double w[][]){
    for(int i=0; i<w.length; i++){
        for(int j=0; j<w[i].length;j++){
            w[i][j]=0;
        }
    }
    return w;
}
private double[] initialiseDelta(double d[]){
    for(int i=0; i<d.length; i++){
        d[i]=0;
    }
    return d;
}
private double[] initialiseActivations(double z[]){
    for(int i=0; i<z.length; i++){
        z[i]=0;
    }
    return z;
}
public void kick(){
    // Kick the candidate solution into a new neighbourhood (random restart)
    W2 = initialiseWeights(new double[outputCount][hiddenCount+1]); // Account for bias unit
    W1 = initialiseWeights(new double[hiddenCount][inputCount+1]); // Account for bias unit
}
The forwardProp method completes forward propagation. First, a bias unit is added to the input values and hidden unit values. The non-bias pre-activation values (Z values) are calculated by summing the products of the input values and their weights. The hidden unit activations are then calculated by passing the resulting Z values through the activation function.
The hidden layer activations then serve as input to the output layer and the process is repeated for that layer with the activation for the output unit serving as the prediction.
public double[] forwardProp(double[] inputs){
    this.InputValues = new double[(inputs.length+1)];
    // Add bias unit to inputs
    InputValues[0]=1;
    for(int i = 1; i<InputValues.length; i++){
        this.InputValues[i]=inputs[i-1];
    }
    HiddenValues = new double[hiddenCount+1];
    // Add bias unit to hidden layer values
    HiddenValues[0]=1;
    // Get hidden layer activations
    for(int i = 0; i<Z1.length;i++){
        Z1[i]=0;
        for(int j = 0; j<InputValues.length; j++){
            // Hidden layer pre-activation value
            Z1[i]+=W1[i][j] * InputValues[j];
        }
        // Hidden layer output value
        HiddenValues[i+1]= activation.getActivation(Z1[i]);
    }
    // Get output layer activations
    for(int i = 0; i<Z2.length;i++){
        Z2[i]=0;
        for(int j=0; j<HiddenValues.length; j++){
            // Output layer pre-activation value
            Z2[i] += W2[i][j] * HiddenValues[j];
        }
        // Output layer output value
        OutputValues[i]=activation.getActivation(Z2[i]);
    }
    return OutputValues;
}
There are two backProp methods for completing backward propagation: the first accommodates a single target value, while the second accommodates an array of target values for architectures containing more than one output unit. Delta weights are the values by which the weights can be updated to improve the performance of the network. The outputs of the backward propagation process are the delta weights for the first and second layers of weights in the network, together with half of the sum of the squared differences produced by the cost function, otherwise known as the error.
As the name suggests, backward propagation begins with the output layer and propagates backwards through the network. The error is calculated using the classic squared error cost function, in which half of the squared sum of the difference between the target values and the actual predictions is calculated. Squaring the error ensures that all error values are both positive in value and are magnified, exaggerating the degree of error.
The deltas for the second layer of weights are calculated by taking the difference between the target values and predictions, multiplying that by the activation derivative of the predictions.
Once the second layer delta weights have been calculated, the first layer delta weights are also updated. These are calculated by multiplying the second layer deltas by the second layer of weights and multiplying that by the activation derivative of the hidden layer activation values. These values are then multiplied by the input layer values.
Worth noting is that the backProp method does not actually update the weights; it merely provides the delta weight values that can later be used to perform the update.
// Support a single double value as the target
public double backProp(double targets){
    double[] t = new double[1];
    t[0]=targets;
    return backProp(t);
}
public double backProp(double[] targets){
    double error=0;
    double errorSum=0;
    double[] D1; // First layer deltas
    double[] D2; // Second layer deltas
    D1 = initialiseDelta(new double[hiddenCount]);
    D2 = initialiseDelta(new double[outputCount]);
    // Calculate deltas for the second layer and the error
    for(int i = 0;i<D2.length; i++){
        D2[i]=(targets[i]-OutputValues[i]) * activation.getDerivative(OutputValues[i]);
        errorSum+= Math.pow((targets[i]-OutputValues[i]),2); // Squared error
    }
    error = errorSum / 2;
    // Update delta weights for the second layer
    for(int i = 0; i<outputCount; i++){
        for(int j = 0; j<hiddenCount+1; j++){
            DW2[i][j] += D2[i] * HiddenValues[j];
        }
    }
    // Calculate deltas for the first layer of weights
    for(int j = 0; j<hiddenCount; j++){
        for(int k = 0; k<outputCount; k++){
            D1[j] += (D2[k] * W2[k][j+1]) * activation.getDerivative(HiddenValues[j+1]);
        }
    }
    // Update first layer delta weights
    for(int i=0; i<hiddenCount;i++){
        for(int j=0; j<inputCount+1; j++){ // Account for bias unit
            DW1[i][j] += D1[i] * InputValues[j];
        }
    }
    return error;
}
In this particular implementation of ExecuteXor, the weights are updated after every training example. However, in some circumstances it can be advantageous to update the weights less frequently. The updateWeights method allows the update to occur at any interval as directed by the calling method. Each time the weights are updated, the DW1 and DW2 arrays are reset to zero values. For as long as the weights are not updated, the delta weights accumulate.
The only input parameter for the method is the learningRate. When learningRate is set to 1, the entire delta weight is used in the update. When it is set to a fractional value, the rate at which the weights are updated is reduced. ExecuteXor uses a learning rate of 0.1, ensuring that only 10% of the delta weight value is applied at each update.
public void updateWeights(double learningRate){
    // Update the second layer of weights
    for(int i = 0; i<W2.length; i++){
        for(int j = 0; j<W2[i].length; j++){
            W2[i][j] += learningRate * DW2[i][j];
        }
    }
    // Update the first layer of weights
    for(int i = 0; i<W1.length; i++){
        for(int j = 0; j<W1[i].length; j++){
            W1[i][j] += learningRate * DW1[i][j];
        }
    }
    // Reset delta weights
    DW1 = initialiseDeltaWeights(new double[hiddenCount][inputCount+1]);
    DW2 = initialiseDeltaWeights(new double[outputCount][hiddenCount+1]);
}
}
# Conclusion
This post is the second part of an article on multilayer perceptron (MLP) artificial neural networks for the XOr problem. It has demonstrated a Java implementation of an MLP. The article began with a brief recap of the XOr problem and a summary of the processes used to train an MLP, including a high-level discussion of forward propagation and backward propagation.
The class structure and dependencies for the implementation were then detailed before stepping through each class in the implementation, complete with Java code and detailed explanations of each of the methods.
## Approaches to Big Combinatorial Optimisation Problems
Combinatorial optimisation is a problem category in which the goal is to find an optimal combination of entities. A classic example is the travelling salesman problem, in which the goal is to plot the most efficient route a salesman can take to visit all of the towns in scope, given the locations of towns and distances between them.
Problems such as this can be solved to global optimality using deterministic methods only when the search space involved is very small. However, it is almost always the case that any real-world problem instance will have a search space much too large to expect to find the global optimum (Talbi et al, 2006). So, before a viable approach can be postulated for a combinatorial optimisation problem, it is important to understand the nature of the problem's complexity through the lens of computational complexity theory.
For any given problem under consideration, computational complexity theory asks whether there is an algorithm that can effectively solve it (Ogihara, 1999). Attempts are then made to categorise the problem on the basis of its computational difficulty relative to other problems (Hromkovic, 2010).
Temporal complexity (or time complexity) is a measure of complexity based on the relationship between an algorithm’s running time and the size of its input. In other words, an algorithm’s temporal complexity is a function of its input-size (Hromkovic, 2010). ‘Time’ in this instance refers to the number of steps that the algorithm must take in order to run to completion (Mayordomo, 2004). In order for an algorithmic solution to be considered viable, its temporal complexity must be polynomial (Papadimitriou et al, 1998). In such cases, the temporal-complexity is said to be “polynomial-time” or P (Demaine, 2011).
For many problems, there is no known algorithm in P, but individual candidate solutions can nonetheless be verified in polynomial-time (Mayordomo, 2004). That is, although a non-deterministic process might be used to come up with potential sub-optimal solutions, the process of scoring each individual candidate for comparison can be completed deterministically. This is known as deterministic verification (Hromkovic, 2010). When formulated as decision problems — by ensuring the output is binary — such problems are regarded as nondeterministic-polynomial–time problems or NP (Mayordomo, 2004).
All P problems also fall into NP because they too have nondeterministic solutions (Demaine, 2011). At present, it is not known whether the reverse is the case and all NP problems have solutions in P (Mayordomo, 2004). The question of whether P=NP is one of the most important problems in mathematics and computer science today (Sipser, 1992). Rather than attempt to answer this millennium prize question, in most contexts it makes sense to assume that P≠NP. This approach assumes that not all problems in NP have polynomial-time solutions waiting to be discovered.
Two additional temporal complexity classes to consider are the NP-Hard and NP-Complete classes. NP-Hard is the set of all problems that are at least as difficult as the hardest problems in NP (Demaine, 2011). This includes problems that are not in NP, such as combinatorial optimisation problems. The NP-Complete class on the other hand, is the set of all NP-Hard problems that are also in NP and thus are formulated as decision problems (Demaine, 2011).
All NP-Complete problems can be solved by exhaustively checking every candidate solution in the search space. However, the search spaces tend to be far too large for this approach to be considered practical (Woeginger, 2003). Heuristics for NP-Hard and NP-Complete problems must therefore use more efficient methods to explore the solution space.
The first thing to go is the idea of finding the globally optimal solution; instead, we search for any high-quality sub-optimal solution. Furthermore, with no known polynomial-time algorithm that can solve these problems, approaches to them tend to rely on deterministic verification, most commonly by applying a standard metaheuristic.
Metaheuristics are generalised (high-level) heuristics. They fall into the major categories of local search and evolutionary algorithms (Talbi et al, 2006). Examples of local search-based metaheuristics include local search itself, tabu search and simulated annealing. Genetic algorithms, on the other hand, are examples of evolutionary algorithms (Talbi et al, 2006).
Local search algorithms explore the search space one solution at a time, improving on the incumbent solution by making a single change at each iteration until the locally optimal solution is found. This creates the problem of the heuristic getting stuck when it converges on the locally optimal solution. For this reason, some local search metaheuristics include means by which to break away from the local optima, allowing them to explore more than one locally optimal solution. These tend to produce better results.
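The single-solution loop described above can be sketched in a few lines of Java. This is a hedged toy illustration, not part of the MLP implementation: the one-dimensional objective function and all names are invented for the example, and the random restart mirrors the kick mechanism used earlier in this post.

```java
import java.util.Random;

public class LocalSearchSketch {
    // Toy objective to maximise: a bumpy one-dimensional function
    static double score(double x) {
        return -(x - 3.0) * (x - 3.0) + Math.sin(5.0 * x);
    }

    // Hill climbing: accept a neighbouring solution whenever it improves
    static double hillClimb(double start, double step, int iterations) {
        double current = start;
        for (int i = 0; i < iterations; i++) {
            double best = current;
            if (score(current - step) > score(best)) best = current - step;
            if (score(current + step) > score(best)) best = current + step;
            if (best == current) break; // local optimum reached
            current = best;
        }
        return current;
    }

    public static void main(String[] args) {
        Random rn = new Random(42);
        double bestX = 0, bestScore = Double.NEGATIVE_INFINITY;
        // Random restart: run hill climbing from several starting points
        // so the search can escape poor local optima
        for (int restart = 0; restart < 10; restart++) {
            double x = hillClimb(rn.nextDouble() * 10.0, 0.01, 10000);
            if (score(x) > bestScore) { bestScore = score(x); bestX = x; }
        }
        System.out.println("Best x: " + bestX + " score: " + bestScore);
    }
}
```

Because each move is only accepted when it improves the score, a single hill-climbing run can never finish worse than it started; the restarts are what allow it to sample more than one locally optimal solution.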
The evolutionary approach involves evaluating many different candidate solutions all at once and includes some means of excluding poor solutions and splitting and re-combining good solutions in the hope of improving on them in the next generation. Genetic algorithms are the most common example.
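As a minimal sketch of the evolutionary approach, the following Java toy evolves bit strings towards the all-ones string (the classic OneMax problem). Everything here is illustrative: the population size, mutation rate and tournament selection scheme are arbitrary choices for the example, not recommendations.

```java
import java.util.Arrays;
import java.util.Random;

public class OneMaxGA {
    static final Random RN = new Random(1);

    // Fitness: the number of 1-bits in the genome (OneMax)
    static int fitness(int[] genome) {
        return Arrays.stream(genome).sum();
    }

    // Tournament selection: pick the fitter of two random candidates
    static int[] select(int[][] pop) {
        int[] a = pop[RN.nextInt(pop.length)];
        int[] b = pop[RN.nextInt(pop.length)];
        return fitness(a) >= fitness(b) ? a : b;
    }

    // Single-point crossover followed by a small chance of bit-flip mutation
    static int[] breed(int[] p1, int[] p2) {
        int cut = RN.nextInt(p1.length);
        int[] child = new int[p1.length];
        for (int i = 0; i < child.length; i++) {
            child[i] = i < cut ? p1[i] : p2[i];
            if (RN.nextDouble() < 0.01) child[i] = 1 - child[i]; // mutate
        }
        return child;
    }

    static int[] evolve(int popSize, int length, int generations) {
        // Random initial population
        int[][] pop = new int[popSize][length];
        for (int[] g : pop)
            for (int i = 0; i < length; i++) g[i] = RN.nextInt(2);
        // Each generation: select parents, recombine and mutate
        for (int gen = 0; gen < generations; gen++) {
            int[][] next = new int[popSize][];
            for (int i = 0; i < popSize; i++)
                next[i] = breed(select(pop), select(pop));
            pop = next;
        }
        // Return the fittest surviving genome
        int[] best = pop[0];
        for (int[] g : pop) if (fitness(g) > fitness(best)) best = g;
        return best;
    }

    public static void main(String[] args) {
        int[] best = evolve(50, 32, 100);
        System.out.println("Best fitness: " + fitness(best) + " / 32");
    }
}
```

The key contrast with local search is visible in the loop: many candidates are evaluated at once, poor solutions are excluded by selection, and good solutions are split and recombined in the hope of improvement.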
Metaheuristics have the advantage of applicability to a variety of combinatorial optimisation problems. For this reason, the metaheuristic approach is viewed as a generic approach to problem solving. Furthermore, metaheuristic development is relatively simple and considerably cheaper than bespoke development (Hromkovic, 2010), though the cost saving comes at the expense of performance (Dowsland, 1998; Talbi, 2006).
In recent years a new category of metaheuristic has emerged in an attempt to create a heuristic that is generalisable but does not suffer from the diminished performance of classic metaheuristics. These are known as hyper-heuristics.
The term ‘hyper-heuristic’ was coined to describe metaheuristics that invoke other heuristics (Cowling et al, 2001), a concept that has existed since the 1960s (Ross, 2005). However, the definition was recently expanded to include algorithms that generate heuristics (Burke et al, 2010).
A hyper-heuristic algorithm approaches a problem by calling on a series of low-level heuristics to generate solutions (Burke et al, 2009). While a metaheuristic search space is made up of problem solutions, the hyper-heuristic search space is composed of heuristic algorithms (Burke et al, 2009). Hyper-heuristics have been found to exhibit generality well beyond merely providing state-of-the-art results for multiple problem instances within a given problem category. In fact, there are examples of individual hyper-heuristic implementations that can solve many distinct problem types. That is, the same implementation deployed to schedule employees to rosters can be deployed to solve the travelling salesman, knapsack, vehicle routing and many other problem types.
Future posts in this series will explore each of the most commonly used metaheuristics in detail.
## Sources
Burke, E. K. Hyde, M. Kendall, G. Ochoa, G. Ozcan, E. Qu, R. (2009). A survey of hyper-heuristics. Computer Science Technical Report No. NOTTCS-TR-SUB-0906241418-2747, School of Computer Science and Information Technology, University of Nottingham.
Burke, E. K. Hyde, M. Kendall G., Ochoa, G. Özcan, E. Woodward J. R. (2010). A Classification of Hyper-heuristic Approaches. In Handbook of Metaheuristics, International Series in Operations Research & Management Science, 146, 449-468.
Cowling, P. Kendall, G. Soubeiga, E. (2001). A hyper-heuristic approach to scheduling a sales summit. In Practice and Theory of Automated Timetabling III, 176-190.
Demaine, E. (2011). Introduction to Algorithms: Lecture 23 – Computational Complexity. Accessible from <http://www.youtube.com/watch?v=moPtwq_cVH8> [Accessed on 22 June 2013]. Massachusetts Institute of Technology.
Dowsland, K. A. (1998). Off-the-Peg or Made-to-Measure? Timetabling and Scheduling with SA and TS. In Practice and Theory of Automated Timetabling II, 37-52
Hromkovic, J. (2010). Algorithmics for Hard Problems: Introduction to Combinatorial Optimization, Randomization, Approximation and Heuristics. Zurich Switzerland, Springer.
Mayordomo, E. (2004). P versus NP. Monografías de la Real Academia de Ciencias Exactas, Físicas, Químicas y Naturales de Zaragoza, (26), 57-68.
Papadimitriou, C. H. Steiglitz, K. (1998). Combinatorial Optimization: Algorithms and Complexity, New York, Dover Publications.
Ross, P. (Ed). (2005). Hyper-heuristics. In Search methodologies, 529-556.
Sipser, M. (1992). The history and status of the P versus NP question. In Proceedings of the twenty-fourth annual ACM symposium on Theory of computing, 603-618.
Talbi, E. (Ed). (2006). Parallel Combinatorial Optimization, Villeneuve d’Ascq, France, Wiley
Ogihara, M. (1999). Computational complexity theory. Encyclopaedia of Electrical & Electronics Engineering, 3, 618-628.
Woeginger, G. J. (2003). Exact algorithms for NP-hard problems: A survey. In Combinatorial Optimization—Eureka, You Shrink! (pp. 185-207).
## Irish Electric Vehicle Charge Point Status Datasets
I recently completed a minor thesis as a partial requirement for an MSc in computer science, a course with heavy leanings towards machine learning and data analytics. The thesis explored the question of whether predictive analytics can be used to predict electric vehicle (EV) charge point availability in Ireland.
Contention for charge points is of increasing concern to Irish EV owners as the ratio of plugin EVs to charge points is set to rapidly increase in the near future, despite there being no plans to increase investment in the infrastructure. A key motivation for the research was the idea that an algorithm that can make better-than-chance predictions about the availability and reliability of charge points from historical data, can potentially be used to inform a vehicle routing algorithm of charging stations to avoid when making route decisions for electric vehicles.
Unfortunately there were no datasets available to us lowly MSc students, so a big part of the workload involved building my own from live charge point status data provided in Ireland by ESB E-Cars, the organisation which, at the time of writing, is responsible for maintaining the publicly funded charge point networks in the Republic of Ireland and Northern Ireland. The live status data is available on the E-Cars charge point map and E-Car Connect app.
While the thesis only considered data from November 2016 to June 2017, I have continued to collect data and have made it accessible to EV drivers through a web site (www.cpinfo.ie). Although it is a work in progress and currently has limited search capabilities, CPInfo has proven to be a useful tool in evaluating the reliability and potential availability of charge points. EV drivers use this information when planning their routes as it can help to identify charge points that are frequently out of service or occupied by other drivers.
## Open Data Licence
I am now making the raw datasets available for anyone to use under a creative commons attribution 4.0 international public licence. Licencing is necessary because considerable pre-processing has been conducted on the source data to produce the datasets, and this work implicitly attracts copyright protection. Explicitly offering an open data licence provides clarity on how the data can be used. Essentially, the licence allows anyone to use the data for any purpose on the condition that the source is correctly attributed. To attribute the source of the datasets, simply provide a hyperlink or citation referencing this blog post.
The licencing of the formatted datasets does not undermine the copyright that may be held by ESB E-Cars, who might own aspects of the datasets by virtue of owning the source data. A representative of ESB E-cars has described this data as “publicly available and free to use”.
## Dataset Details
The datasets take the form of monthly tab-delimited text files. Each line includes the date, time, charge point Id, charge point type {StandardType2, CHAdeMO, CCS, FastAC}, latitudinal and longitudinal coordinates, status {OOS, OOC, Part, Occ, Unknown} and address of a single charge point. The data represent a snapshot of the status of the charge point network taken at five-minute intervals.
StandardType2 is a slow charge point of up to 22 kW AC, while CHAdeMO, CCS and FastAC are different types of DC and AC rapid charge points. A StandardType2 charge point represented in the datasets typically has two available connections that can be used simultaneously, while a rapid charge point can have one, two or three connections, each with a different connection type (CHAdeMO, CCS or FastAC).
The available status has been omitted as it can be implied by the absence of a record. The other statuses include out of service (OOS), out of contact (OOC), partially occupied (Part), fully occupied (Occ) and Unknown. An unknown status exists where the status data was either not available or otherwise not polled due to connection issues at the time interval in question. Where the status is unknown for all charge points at the interval, a single record exists with a charge point id of unknown and a status of unknown.
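Assuming the field order described above, a record can be parsed by splitting each line on the tab character. This is a minimal sketch; the class name and the sample values used to exercise it are invented for illustration.

```java
public class ChargePointRecord {
    final String date;
    final String time;
    final String chargePointId;
    final String type;      // StandardType2, CHAdeMO, CCS or FastAC
    final double latitude;
    final double longitude;
    final String status;    // OOS, OOC, Part, Occ or Unknown
    final String address;

    ChargePointRecord(String line) {
        // Fields: date, time, id, type, latitude, longitude, status, address
        String[] f = line.split("\t", -1);
        date = f[0];
        time = f[1];
        chargePointId = f[2];
        type = f[3];
        latitude = Double.parseDouble(f[4]);
        longitude = Double.parseDouble(f[5]);
        status = f[6];
        address = f[7];
    }
}
```

The `-1` limit passed to split preserves trailing empty fields, so a record with a blank address still parses into eight columns.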
There are also a number of nuances in the data that must be considered.
First of all, charge points are sometimes moved from one location to another and/or replaced by another charge point. When this happens, the charge point Id at a given location changes in the dataset. Furthermore, a charge point removed from one location can appear again at another. The charge point Id therefore cannot be relied upon as a means to track the charging activity at a particular location. Instead, the latitudinal and longitudinal coordinates should be used. However, these can also change slightly (by a matter of metres) and thus can't be used directly as unique identifiers. To get the full history of activity at a given location, a search should include charge points within a tight range of latitudinal and longitudinal values rather than matching the exact values.
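The coordinate-range matching suggested above might be sketched as follows. The tolerance value in the example is a hypothetical choice for illustration, not one taken from the thesis.

```java
public class LocationMatcher {
    // Charge point coordinates can drift by a matter of metres between
    // months, so match within a small tolerance rather than on equality.
    static boolean sameLocation(double lat1, double lon1,
                                double lat2, double lon2, double tolerance) {
        return Math.abs(lat1 - lat2) <= tolerance
            && Math.abs(lon1 - lon2) <= tolerance;
    }
}
```

A full history for one location can then be assembled by filtering all records through sameLocation against a reference coordinate pair.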
Another nuance is that the charge point map sometimes goes through periods, of up to several days, where the status data are not updated. This is reflected in the datasets by statuses that don’t change for unusually long periods of time. As it stands, the datasets reflect the state of the charge point maps during those periods rather than the statuses of the charge points in reality. These periods can be identified via a checksum calculated over all rows at each time interval or by manually checking the statuses at the busiest charge points.
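As a sketch of the checksum idea described above, the statuses at each interval can be fingerprinted and consecutive intervals compared; a long run of matching fingerprints suggests (though does not prove) a frozen map. All names here are illustrative.

```java
import java.util.List;

public class StaleIntervalDetector {
    // A cheap fingerprint of all statuses at one five-minute interval
    static int checksum(List<String> statuses) {
        int hash = 17;
        for (String s : statuses) hash = 31 * hash + s.hashCode();
        return hash;
    }

    // Consecutive intervals with identical checksums suggest the map
    // was not being updated during that period
    static boolean looksStale(List<String> previous, List<String> current) {
        return checksum(previous) == checksum(current);
    }
}
```

In practice one would require several consecutive matches before flagging a period as stale, since quiet hours can legitimately produce identical snapshots.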
Rapid chargers never hold a status of partially occupied, despite the fact that in many cases it is possible to charge on the FastAC connection at the same time as the CCS or CHAdeMO connection. Furthermore, an occupied status on the CCS or CHAdeMO connection does not necessarily imply that that specific connection was in use. This is because only one of the two can be used at any given time; thus, if the CHAdeMO connection is in use, then CCS is also unavailable, and vice versa.
At the time of writing there are no examples of multiple rapid charge points in the same vicinity, however, there are a small number of cases where there are multiple standard type two charge points at the same location. Dundrum town centre and the Stillorgan Luas station are two examples. On the map, these are represented slightly differently to other charge points in that the charge point Ids and statuses are grouped together. If there are two charge points, and therefore four connections, only one icon appears on the map listing both charge point ids and the status will only show as fully occupied if all four connections are in use. Consequently, these were omitted from the dataset. I intend to rectify this in a future script update.
Any updates to the datasets will be expressed by updating this blog post. The datasets are available for download here, where they will be updated monthly.
## Why are there so many fake data scientists and machine learning engineers?
The title of this post is a question I answered recently on Quora, a post that seems to have gathered some interest, so I thought it might be worthwhile expanding on it here.
In my response, I pointed out that in recent months I have encountered a number of software engineers who seem to believe that machine learning libraries, such as Tensorflow, can sufficiently abstract away the need for machine learning knowledge in much the same way that high level programming languages in most industrial areas of application have abstracted away the need for knowledge of low-level programming.
I should point out that I have nothing at all against the use of machine learning libraries and I am in no way advocating for the coding of machine learning algorithms from scratch in industrial practice. Where I have advocated for coding machine learning algorithms from scratch in the past, it has always been for the purpose of education. The point I have attempted to make in my two-paragraph post on Quora, and in these two previous blog posts, is that there is a range of knowledge that is required for machine learning development regardless of whether you are personally coding the algorithms or referencing software libraries.
Machine learning engineers and data scientists need to understand what kind of data needs to be gathered or found from the start of any project. They need to understand how to pre-process that data, perform feature selection, cross validation for both model selection and parameter tuning of the selected model, all while being careful to avoid overfitting. They have to understand what tools are available to them, when it is appropriate to use them and how to set their parameters. They have to be able to design full machine learning pipelines, possibly with multiple machine learning algorithms interacting. Without this knowledge, expect a lot of time wasted through unnecessary trial-and-error experimentation, or worse, models that fail to make accurate predictions in the wild.
Automating machine learning libraries so they can complete some of this work without user knowledge is a long-standing goal of many machine learning researchers in industry and academia. The so-called “democratisation of machine learning” isn’t a new concept and varying degrees of success have been achieved in the automation of some of the algorithmic and statistical knowledge required to do machine learning (Cetinsoy et al., 2016; IBM Analytics, 2016) or otherwise lower the barrier to entry for machine learning practitioners (Chen et al., 2016; Guo et al., 2016; Patel, 2010; 2013). But we’re not yet at a point where a software engineer can jump into machine learning development without some kind of introductory training or mentorship. Those who do are involved in machine learning black magic.
Furthermore, there is the question of the degree of capability of a software engineer who has little knowledge of machine learning. In a previous post I quoted former Kaggle chief scientist, Jeremy Howard, suggesting that there is a non-linear disparity in capability between the best and average machine learning developers and that the best of the best learned their trade by understanding the mathematics behind the algorithms. Howard was not talking about people coding machine learning algorithms from scratch. He was talking about Kaggle competition entrants, who almost universally use libraries. The fact of the matter is that the people who understand the algorithms put the libraries to better use and perform better in Kaggle competitions by orders of magnitude.
Lest the reader assume I am against the idea of software engineers working on machine learning projects, nothing could be further from the truth. In my view, the whole field of machine learning development is in dire need of the sort of software-engineer thinking that brought the SOLID principles and software design patterns to object oriented software development. As Sculley et al. (2014) have pointed out, machine learning implementations bring with them a whole slew of new ways to generate technical debt in a software project. Despite this, very little guidance has been proposed in the way of best practices or design patterns for machine learning implementations. The sum of existing work basically amounts to the aforementioned paper (Sculley et al., 2014) and another rejected conference paper on the topic of design patterns for deep convolutional neural networks (Smith & Topin, 2016). Moving machine learning into the hands of more industrial software engineers who care about the practical implications it will have on their projects can only be good for the field. I’m merely advising caution.
References
Cetinsoy, A., Martin, F. J., Ortega, J. A., & Petersen, P. (2016). The Past, Present, and Future of Machine Learning APIs. In Proceedings of The 2nd International Conference on Predictive APIs and Apps (pp. 43-49).
Chen, D., Bellamy, R. K., Malkin, P. K., & Erickson, T. (2016). Diagnostic visualization for non-expert machine learning practitioners: A design study. In Visual Languages and Human-Centric Computing (VL/HCC), 2016 IEEE Symposium on (pp. 87-95). IEEE.
Guo, Y., Liu, Y., Oerlemans, A., Lao, S., Wu, S., & Lew, M. S. (2016). Deep learning for visual understanding: A review. Neurocomputing, 187, 27-48.
IBM Analytics. (2016). The democratization of Machine Learning: Apache Spark opens up the door for the rest of us. IBM White Paper. Accessed on May 17, 2017, from: https://www-01.ibm.com/common/ssi/cgi-bin/ssialias?htmlfid=CDW12360USEN
Patel, K. (2010). Lowering the barrier to applying machine learning. In Adjunct proceedings of the 23nd annual ACM symposium on User interface software and technology (pp. 355-358). ACM.
Patel, K. D. (2013). Lowering the barrier to applying machine learning (Doctoral dissertation). University of Washington. Accessed on March 25, 2017, from http://www.cc.gatech.edu/~stasko/8001/heer06.pdf
Sculley, D., Phillips, T., Ebner, D., Chaudhary, V., & Young, M. (2014). Machine learning: The high-interest credit card of technical debt. In SE4ML: Software Engineering for Machine Learning (NIPS 2014 Workshop). Accessed on March 25, 2017, from http://www.eecs.tufts.edu/~dsculley/papers/technical-debt.pdf
Smith, L. N., & Topin, N. (2016). Deep convolutional neural network design patterns. arXiv preprint arXiv:1611.00847. Submitted to the International Conference on Learning Representations (ICLR) and rejected, 2017. Accessed on March 06, 2017, from https://pdfs.semanticscholar.org/8863/9a6e21a8a8989e6d25e44119a90ba0b27628.pdf
## Artificial Neural Networks – Part 1: The XOr Problem
Introduction
This is the first in a series of posts exploring artificial neural network (ANN) implementations. The purpose of the article is to help the reader to gain an intuition of the basic concepts prior to moving on to the algorithmic implementations that will follow.
No prior knowledge is assumed, although, in the interests of brevity, not all of the terminology is explained in the article. Instead hyperlinks are provided to Wikipedia and other sources where additional reading may be required.
This is a big topic. ANNs have a wide variety of applications and can be used for supervised, unsupervised, semi-supervised and reinforcement learning. That’s before you get into problem-specific architectures within those categories. But we have to start somewhere, so in order to narrow the scope, we’ll begin with the application of ANNs to a simple problem.
The XOr Problem
The XOr, or “exclusive or”, problem is a classic problem in ANN research. It is the problem of using a neural network to predict the outputs of XOr logic gates given two binary inputs. An XOr function should return a true value if the two inputs are not equal and a false value if they are equal. All possible inputs and predicted outputs are shown in figure 1.
XOr is a classification problem and one for which the expected outputs are known in advance. It is therefore appropriate to use a supervised learning approach.
On the surface, XOr appears to be a very simple problem, however, Minsky and Papert (1969) showed that this was a big problem for neural network architectures of the 1960s, known as perceptrons.
Perceptrons
Like all ANNs, the perceptron is composed of a network of units, which are analogous to biological neurons. A unit can receive an input from other units. On doing so, it takes the sum of all values received and decides whether it is going to forward a signal on to other units to which it is connected. This is called activation. The activation function uses some means or other to reduce the sum of input values to a 1 or a 0 (or a value very close to a 1 or 0) in order to represent activation or lack thereof. Another form of unit, known as a bias unit, always activates, typically sending a hard coded 1 to all units to which it is connected.
Perceptrons include a single layer of input units — including one bias unit — and a single output unit (see figure 2). Here a bias unit is depicted by a dashed circle, while other units are shown as blue circles. There are two non-bias input units representing the two binary input values for XOr. Any number of input units can be included.
The perceptron is a type of feed-forward network, which means the process of generating an output — known as forward propagation — flows in one direction from the input layer to the output layer. There are no connections between units in the input layer. Instead, all units in the input layer are connected directly to the output unit.
A simplified explanation of the forward propagation process is that the input values X1 and X2, along with the bias value of 1, are multiplied by their respective weights W0..W2, and passed to the output unit. The output unit takes the sum of those values and employs an activation function — typically the Heaviside step function — to convert the resulting value to a 0 or 1, thus classifying the input values as 0 or 1.
It is the setting of the weight variables that gives the network’s author control over the process of converting input values to an output value. It is the weights that determine where the classification line, the line that separates data points into classification groups, is drawn. If all data points on one side of a classification line are assigned the class of 0, all others are classified as 1.
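This forward propagation process can be sketched in a few lines of Python. The weight values below are illustrative assumptions of mine, chosen so the unit computes a linearly separable function (OR) rather than XOr:

```python
def heaviside(z):
    # Step activation: fires (returns 1) when the summed input is >= 0.
    return 1 if z >= 0 else 0

def perceptron(x1, x2, w0, w1, w2):
    # w0 is the weight on the bias unit's constant input of 1.
    return heaviside(w0 * 1 + w1 * x1 + w2 * x2)

# These weights draw the classification line x1 + x2 = 0.5, which
# separates (0,0) from the other three inputs, i.e. the unit computes OR.
print([perceptron(x1, x2, -0.5, 1.0, 1.0)
       for (x1, x2) in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# → [0, 1, 1, 1]
```

Changing the three weights moves the classification line, but it always remains a single straight line.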
A limitation of this architecture is that it is only capable of separating data points with a single line. This is unfortunate because the XOr inputs are not linearly separable. This is particularly visible if you plot the XOr input values to a graph. As shown in figure 3, there is no way to separate the 1 and 0 predictions with a single classification line.
Multilayer Perceptrons
The solution to this problem is to expand beyond the single-layer architecture by adding an additional layer of units without any direct access to the outside world, known as a hidden layer. This kind of architecture — shown in Figure 4 — is another feed-forward network known as a multilayer perceptron (MLP).
It is worth noting that an MLP can have any number of units in its input, hidden and output layers. There can also be any number of hidden layers. The architecture used here is designed specifically for the XOr problem.
Similar to the classic perceptron, forward propagation begins with the input values and bias unit from the input layer being multiplied by their respective weights, however, in this case there is a weight for each combination of input (including the input layer’s bias unit) and hidden unit (excluding the hidden layer’s bias unit). The products of the input layer values and their respective weights are passed as input to the non-bias units in the hidden layer. Each non-bias hidden unit invokes an activation function — usually the classic sigmoid function in the case of the XOr problem — to squash the sum of its input values down to a value that falls between 0 and 1 (usually a value very close to either 0 or 1), or in the case of tanh, a value close to either -1 or 1. The outputs of each hidden layer unit, including the bias unit, are then multiplied by another set of respective weights and passed to an output unit. The output unit also passes the sum of its input values through an activation function — again, the sigmoid function is appropriate here — to return an output value falling between 0 and 1. This is the predicted output.
This architecture, while more complex than that of the classic perceptron network, is capable of achieving non-linear separation. Thus, with the right set of weight values, it can provide the necessary separation to accurately classify the XOr inputs.
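To make that claim concrete, here is a small sketch of the forward pass with hand-picked weights. The specific weight values are my own illustrative assumptions, not learned values: one hidden unit approximates OR, the other approximates AND, and the output unit computes roughly "OR and not AND", which is XOr:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def mlp_xor(x1, x2):
    # Hand-picked weights (assumptions for illustration):
    # h1 ≈ OR(x1, x2), h2 ≈ AND(x1, x2).
    h1 = sigmoid(20 * x1 + 20 * x2 - 10)   # -10 is the bias weight
    h2 = sigmoid(20 * x1 + 20 * x2 - 30)   # -30 is the bias weight
    # Output ≈ h1 AND NOT h2, i.e. "exactly one input is on".
    return sigmoid(20 * h1 - 20 * h2 - 10)

print([round(mlp_xor(x1, x2))
       for (x1, x2) in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# → [0, 1, 1, 0]
```

The two hidden units effectively draw two lines through the input space, and the output unit combines the regions they carve out, which is what makes the non-linear separation possible.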
Backpropagation
The elephant in the room, of course, is how one might come up with a set of weight values that ensure the network produces the expected output. In practice, trying to find an acceptable set of weights for an MLP network manually would be an incredibly laborious task. In fact, it is NP-complete (Blum and Rivest, 1992). However, it is fortunately possible to learn a good set of weight values automatically through a process known as backpropagation. This was first demonstrated to work well for the XOr problem by Rumelhart et al. (1985).
The backpropagation algorithm begins by comparing the actual value output by the forward propagation process to the expected value and then moves backward through the network, slightly adjusting each of the weights in a direction that reduces the size of the error by a small degree. Both forward and back propagation are re-run thousands of times on each input combination until the network can accurately predict the expected output of the possible inputs using forward propagation.
For the XOr problem, 100% of possible data examples are available to use in the training process. We can therefore expect the trained network to be 100% accurate in its predictions and there is no need to be concerned with issues such as bias and variance in the resulting model.
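A minimal NumPy sketch of this training loop is shown below. The layer sizes, learning rate, random seed and epoch count are my assumptions rather than values from the post, and a given run may need more epochs or a different seed to classify all four inputs perfectly, so the sketch only demonstrates that the error shrinks as training proceeds:

```python
import numpy as np

# XOr training data: all four possible input pairs and their targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(42)
W1 = rng.standard_normal((2, 2))   # input -> hidden weights
b1 = np.zeros(2)                   # hidden bias weights
W2 = rng.standard_normal((2, 1))   # hidden -> output weights
b2 = np.zeros(1)                   # output bias weight

def forward(X):
    h = sigmoid(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

loss_before = np.mean((forward(X)[1] - y) ** 2)

lr = 0.5
for _ in range(5000):
    h, out = forward(X)
    # Backpropagation: error gradients flow backwards through each sigmoid.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Nudge every weight slightly in the direction that reduces the error.
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

loss_after = np.mean((forward(X)[1] - y) ** 2)
print(loss_after < loss_before)  # → True
```

Each pass adjusts the weights by a small step proportional to the learning rate, which is why thousands of iterations are needed.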
Conclusion
In this post, the classic ANN XOr problem was explored. The problem itself was described in detail, along with the fact that the inputs for XOr are not linearly separable into their correct classification categories. A non-linear solution — involving an MLP architecture — was explored at a high level, along with the forward propagation algorithm used to generate an output value from the network and the backpropagation algorithm, which is used to train the network.
Part two of this post features a Java implementation of the MLP architecture described here, including all of the components necessary to train the network to act as an XOr logic gate.
References
Blum, A., & Rivest, R. L. (1992). Training a 3-node neural network is NP-complete. Neural Networks, 5(1), 117-127.
Minsky, M., & Papert, S. (1969). Perceptrons: An introduction to computational geometry. The MIT Press, Cambridge, MA.
Rumelhart, D., Hinton, G., & Williams, R. (1985). Learning internal representations by error propagation (No. ICS-8506). California University San Diego, La Jolla, Inst. for Cognitive Science.
## Discussion: Why Machine Learning Beginners Shouldn’t Avoid the Math
In a post I published yesterday, I argued that it is important for students of machine learning to understand the algorithms and underlying mathematics prior to using tools or libraries that black box the code. I suggested that to do so is likely to result in a lot of “time-wasting confusion” due to students not having the necessary understanding to configure parameters or interpret results. One of the examples I provided for the opposing view was this blog post from BigML, which argues that beginners don’t need courses such as those provided by Coursera if they use their tool.
Francisco J. Martin, CEO of BigML, has tweeted in response.
So Kids shouldn’t avoid assembler, automata, and compilers when learning to code?
This is a very good question and one that grants us an opportunity to dig deeper into the issue. I am responding here because I don’t believe it’s a question I can answer in 140 characters.
The short answer is no, I’m perfectly ok with beginner programmers starting out in high-level languages and working their way down, or even stopping there and not working their way down. But this is not analogous to machine learning.
I see three big differences.
First of all, learning a high-level language is actually a constructive step towards learning lower level languages. If that’s the goal, and you started with something like Java, you could potentially learn quite a lot about programming in general. Then trying C++ would help to fill in blanks with respect to some of the aspects of programming that Java glosses over. Likewise, Assembler could take you a step further.
If playing with the parameters of black-boxed algorithms offers a path at all towards becoming proficient at machine learning, it’s an incredibly inefficient one. It’s an awfully big search space to approach by trial and error when you consider the combinations of parameters, feature selection and the question of whether you have enough or appropriate data examples.
The second difference is that to do high-level programming does not require an understanding of low-level programming. I can do anything that Java or C# will let me do without knowing anything about assembly language. In comparison, a machine learning tool requires me to know how to set appropriate values for the parameters that are passed into the hidden algorithms. It also requires me to understand whether or not I have an appropriate (representative) dataset with appropriate features. Then, when it finishes, I need to be able to interpret the results and take appropriate actions. Better outcomes come from more informed decisions.
The third difference relates to the potential benefits of exploring the low-level languages. There are some exceptions to this, but generally speaking, writing more efficient algorithms in low-level languages comes at such great expense in comparison to the constantly falling cost of computation, that it just isn’t worthwhile.
In my last post I cited Kaggle’s chief scientist, Jeremy Howard, who said there was a massive difference in capability between good and average data scientists. I take this to indicate that in machine learning, more knowledge leads to exponentially better outcomes. Unlike low-level programming, there is a huge benefit to having a detailed knowledge of machine learning.
I have come across some arguments suggesting that as Moore’s law reaches its limit, low-level coding will become much more sought after. If that happens I’ll revisit my position on low-level coding, but for now I’m betting that specialist processors like GPUs will help to bridge the gap before the next paradigm of computation comes along to keep the gravy train of exponential price-performance improvement going.
## The Self-reinforcing Myth of Hard-wired Math Inability
There is a commonly held belief that some people have brains that are pre-wired for mathematical excellence, while everyone else is doomed to struggle with the subject. This toxic myth needs to be put deep in the ground and buried in molten lead. It is as destructive as it is self-fulfilling.
The myth equally encourages people who are good at math to falsely believe (Murayama et al., 2012) they are more intelligent than those who are not, and leaves everyone else inclined to believe they can never improve. This is despite the fact that math ability has very little to do with intelligence (Blair et al., 2007).
The reason this myth exists is well understood. School students who were well prepared by their parents in math prior to starting school find themselves separated in ability from their classmates who were not. The latter group consider the seemingly unachievable abilities of their peers and quickly lose confidence in their own abilities. Once that self-confidence is lost, any attempt at completing a math problem leads to math anxiety (Ashcraft et al., 2002; Devine et al., 2012), where thoughts of self-doubt cloud the mind and make it difficult to concentrate on the task at hand.
Mathematics, like computer programming, is a discipline that requires concentration. The student needs to be able to follow a train of thought where A leads to B leads to C etc. A student who lacks self-confidence struggles to maintain the necessary train of thought due to being repeatedly interrupted by negative thoughts about their abilities. This results in poor performance and reinforces the idea that they are incapable of learning the subject.
It is interesting to see this belief so prevalent among software developers who are perfectly capable of writing an algorithm in a programming language, but suddenly feel that it is impossible to grasp the same algorithm represented by a set of mathematical symbols. There is simply no reason that this should be the case. I’ve yet to meet an experienced programmer who would tell me they find it near-impossible to learn the syntax of a new programming language and yet that is precisely what is entailed in learning how to express an algorithm using linear algebra.
A common point of confusion for many who haven’t done a lot of math since secondary school is in the use of mathematics as a language rather than a set of equations to be solved. In academic computer science, linear algebra, as it is used to express algorithms, is not something to be solved, but rather a language used to describe an algorithm.
Understanding the language of academic computer science is becoming increasingly important as the traditional staples of academia, such as machine learning, increasingly find use in industry. After all, even if a software developer manages to avoid the math in their work, how can they expect to keep up with the latest developments in this fast-moving field without an ability to understand the academic literature? Yet this is precisely what some software developers are attempting to do.
Math inability is not hard wired and software developers are already well practiced in the mental skills required. We use the skill of stepping through a problem and visualising the state changes that occur at each step, every time we read or write a piece of code. Anyone who can do that is capable of becoming proficient enough in mathematics to understand the mathematical components of the computer science literature.
References
Ashcraft, M. H. (2002). Math anxiety: Personal, educational, and cognitive consequences. Current directions in psychological science, 11(5), 181-185.
Blair, C., & Razza, R. P. (2007). Relating effortful control, executive function, and false belief understanding to emerging math and literacy ability in kindergarten. Child development, 78(2), 647-663.
Devine, A., Fawcett, K., Szűcs, D., & Dowker, A. (2012). Gender differences in mathematics anxiety and the relation to mathematics performance while controlling for test anxiety. Behavioral and brain functions, 8(1), 1.
Murayama, K., Pekrun, R., Lichtenfeld, S., & Vom Hofe, R. (2012). Predicting long‐term growth in students’ mathematics achievement: The unique contributions of motivation and cognitive strategies. Child development, 84(4), 1475-1490.
Andreescu, T., Gallian, J. A., Kane, J. M., & Mertz, J. E. (2008). Cross-cultural analysis of students with exceptional talent in mathematical problem solving. Notices of the AMS, 55(10), 1248-1260.
Berger, A., Tzur, G., & Posner, M. I. (2006). Infant brains detect arithmetic errors. Proceedings of the National Academy of Sciences, 103(33), 12649-12653.
Post Edits
14/07/2016 – Corrected typo in final paragraph “Math ability is not hard wired…” changed to “Math inability is not hard wired”.
## Why Learn Machine Learning and Optimisation?
In this post I hope to convince the reader that machine learning and optimisation are worthwhile fields for a software developer in industry to engage with.
I explain the purpose of this blog and argue that we are in the midst of a machine-learning revolution.
______________________________________________________________
When I first started coding as a teenager in the early 1990s, the future looked certain to be shaped by artificial intelligence. We were told that we’d soon have “fifth generation” languages that would allow for the creation of complex software applications without the need for human programmers. Expert systems would replace human experts in every walk of life and we’d talk to our machines in much the same way Gene Roddenberry imagined we should.
Unfortunately, this model of reality didn’t quite go to plan. After many years of enormous research and development expense — mainly focused in Japan — we entered another AI winter. The future was left in the hands of a handful of diehard academics, while the software industry mostly ignored AI research.
The good news is that the AI winter is now well and truly over. The technology has been slowly but surely increasing its influence on mainstream software development and data analytics for at least a decade and 2015 has been billed as a breakthrough year by media sources such as Bloomberg and Wired magazine.
Whether we realise it or not, most of us use AI every day. In fact, AI is responsible for all of the coolest software innovations you’ve heard of in recent years. It is the basis for autonomous helicopters, autonomous cars, big data analytics, Google search, automatic language translation, targeted advertising, optical character recognition, speech recognition, facial recognition, anomaly detection, news-article clustering, vehicle routing and product recommendation, just to list the few examples I could name at the time of writing.
As a field, artificial intelligence has been deeply rooted in academia for decades, but it is quickly becoming prevalent in industry. We are at the dawn of the AI revolution and there has never been a better time to start sciencing up your skill set.
This blog is here to help and, as its name suggests, will focus on two important and complementary sub-fields of AI: Machine Learning and Optimisation. The intention is to explain both topics in a language that software developers in industry can easily understand, with or without a background in hard computer science.
I believe this is an important addition to the discourse on these topics because most of the sources you’re likely to come across assume a strong existing knowledge of linear algebra, calculus, statistics, probability, information theory and computational complexity theory: the language of academic computer science. This is unsurprising, given that the techniques were mostly developed by computer scientists, mathematicians and statisticians, but it can unfortunately be a barrier to a lot of people getting started.
The intention here is to remove that barrier by describing the various techniques using familiar, medium-level programming languages. The posts that follow will not shy away from the theory, but no assumptions will be made with respect to prior understanding of mathematics or computer science and code snippets will accompany any mathematical descriptions.
|
{}
|
# How does the lack of partitions of unity affect the structure of analytic/holomorphic manifolds?
The standard way to define integration on a smooth manifold is to use partitions of unity, to extend to the case where the form you're integrating isn't supported on just one coordinate patch. Of course, in the analytic/holomorphic case, we don't have partitions of unity. So how do we do integration?
Furthermore, how does this affect the space $\mathcal{T}(M)$ of analytic vector fields (analytic global sections of the tangent bundle)? The usual extension lemma for smooth sections of a vector bundle depends on partitions of unity, so there doesn't seem to be any reason you should always be able to find a nonzero analytic section. Does it ever happen that $\mathcal{T}(M) = 0$?
Also in that vein - I remember needing to use partitions of unity to prove in an exercise that the space of 1-forms $\mathcal{T}^*(M)$ is actually the dual $C^\infty(M)$ module $\, \text{Hom}(\mathcal{T}(M),C^\infty(M))$ - because given a map $\mathcal{T}(M) \to C^\infty(M)$, it seems like you need some kind of extension lemma to construct a 1-form that induces it. Does this fail in the analytic case?
I imagine the answer to the previous two questions will involve sheaves, but I don't quite know enough about sheaves to frame them in the appropriate sheaf-theoretic way. Maybe per this question, I should ask if this means the sheaves are not soft and the sheaf cohomology doesn't vanish?
And in general, any good references (or just explanations you care to give) on the major differences between the smooth and analytic cases? It seems like most differential geometry books pay almost no attention to the analytic case.
-
You don't necessarily need partitions of unity to define integrals on smooth manifolds. An alternative approach is outlined here: mathoverflow.net/questions/38439/…. It also works in the analytic case. – Dmitri Pavlov Oct 24 '12 at 14:44
Dmitri, could you explain how this works out in the case M = R, or [0,1]? It seems like de Rham theory at least relies on the fundamental theorem of calculus to do work; I'm a little skeptical that you could build the entire thing without any prior notion of integration, and then, in the C^0 case, say, integrate something awful like Cantor's function. Any enlightenment? – Kevin Casto Oct 25 '12 at 6:39
@Kevin: No, you do not need “the fundamental theorem of calculus” to develop the de Rham theory. See for example Lemma 6.5.3 in Schapira's notes people.math.jussieu.fr/~schapira/lectnotes/AlTo.pdf. Further details can be found in comments here: mathoverflow.net/questions/43681/motivating-the-de-rham-theorem/…. – Dmitri Pavlov Oct 25 '12 at 14:34
## 1 Answer
(1) A holomorphic manifold is also (or "can also be viewed as") a smooth manifold, and that lets you define integration. To put it another way, you do have partitions of unity, just not holomorphic ones.
(2) Even before you get to tangent bundles, there are well-known cases where local things can't be patched globally. For example, on the Riemann sphere, there are non-trivial holomorphic functions in neighborhoods of every point, but the only global holomorphic functions are the constants. In the case of the tangent bundle, it seems to me (experts please edit if I mess this up) that there are no non-zero global tangent vector fields on Riemann surfaces of genus 2 or more, though of course there are such fields locally everywhere.
(3) You're right that this phenomenon (and related ones) are the beginning of sheaf cohomology.
(4) I won't try to answer your question about 1-forms being the dual of tangent vectors, since it seems to mix pointwise things (the space of 1-forms $T^*(X)$) with global things in a way that I don't understand.
-
1) Haha, duh. I guess I assumed that we needed to integrate "analytically", but I guess that isn't really meaningful. 2) Ah okay, interesting. Any quick explanation for why it's g >= 2? And is this true when you view them as analytic 2-manifolds, or just Riemann surfaces? (when does this distinction make a difference?) 3) Does this make things clear? I can sketch a proof if you want. I imagine you think I'm confusing the local duality-as-vector-spaces with global duality-as-C^inf-modules – Kevin Casto Oct 25 '12 at 5:45
|
{}
|
This article will focus on Hamming codes - mainly, this represents an attempt to explain a little bit better how this method can help in detecting and correcting 1-bit errors.
This method is not really useful at a “higher level” - just because the data we work with is either 100% correct or has way more than 1 bit corrupted - and in this case, the Hamming code doesn’t work. It seems to be used in low-level (data link layer) networking and in some DRAMs - to prevent interference from corrupting data.
As an example, we can consider this byte of data: 11010010
## Hamming Encoding
The encoding involves taking the bits of the original message and computing a set of parity/control bits that will help us detect possible errors - we’ll know which bit is flipped, so the correction consists of negating that one bit. In the end, we insert the parity bits at positions equal to powers of 2 (1, 2, 4, 8, …).
The encoded message will look like this: P1P2D1P4D2D3D4P8D5D6D7D8
• where D is a data bit, from our original message, and P a parity bit => 12 bits.
In order to determine the formulas for the parity bits it is important to understand the following part:
We say that a bit at position n, from our encoded data, is “controlled” by the parity bits whose positions, once summed, are equal to n. This can be written as:
| Position (n) | Bit | Is controlled by parity bit(s) |
|---|---|---|
| 1 | P1 | P1 |
| 2 | P2 | P2 |
| 3 | D1 | P1 + P2 |
| 4 | P4 | P4 |
| 5 | D2 | P1 + P4 |
| 6 | D3 | P2 + P4 |
| 7 | D4 | P1 + P2 + P4 |
| 8 | P8 | P8 |
| 9 | D5 | P1 + P8 |
| 10 | D6 | P2 + P8 |
| 11 | D7 | P1 + P2 + P8 |
| 12 | D8 | P4 + P8 |
* notice that the sum of the indexes is equal to the position, for each row.
From the table, we observe that:
• P1 “controls” data bits: D1, D2, D4, D5, D7.
• P2 “controls” data bits: D1, D3, D4, D6, D7.
• P4 “controls” data bits: D2, D3, D4, D8.
• P8 “controls” data bits: D5, D6, D7, D8.
If we know this, we can write the equations for the parity bits:
P1 = D1 ^ D2 ^ D4 ^ D5 ^ D7
P2 = D1 ^ D3 ^ D4 ^ D6 ^ D7
P4 = D2 ^ D3 ^ D4 ^ D8
P8 = D5 ^ D6 ^ D7 ^ D8
* that’s XOR between them, ok?
If we apply this theory to our example 11010010, we get:
P1 = 1 ^ 1 ^ 1 ^ 0 ^ 1 = 0
P2 = 1 ^ 0 ^ 1 ^ 0 ^ 1 = 1
P4 = 1 ^ 0 ^ 1 ^ 0 = 0
P8 = 0 ^ 0 ^ 1 ^ 0 = 1
So the encoded data is: 011010110010.
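The encoding procedure above can be sketched in Python. The function below is my own minimal sketch of the scheme described in this article (8 data bits, parity bits at positions 1, 2, 4 and 8), using the fact that parity bit P covers exactly the positions whose index has bit P set:

```python
def hamming_encode(data):
    # data: string of 8 bits, D1..D8. Returns the 12-bit encoded string.
    bits = [0] * 13                              # 1-indexed; index 0 unused
    data_positions = [3, 5, 6, 7, 9, 10, 11, 12]  # non-power-of-2 slots
    for pos, b in zip(data_positions, data):
        bits[pos] = int(b)
    for p in (1, 2, 4, 8):
        # Parity bit p is the XOR of every data bit whose position
        # includes p in its binary decomposition (the table above).
        parity = 0
        for pos in data_positions:
            if pos & p:
                parity ^= bits[pos]
        bits[p] = parity
    return "".join(str(b) for b in bits[1:])

print(hamming_encode("11010010"))  # → 011010110010
```

Running it on the article's example byte reproduces the encoded message computed by hand.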
## Hamming Decoding
This part verifies the original bits and flips one of them if it’s corrupted. Keeping the same example, we use the value that we determined before, but to make it more interesting, we’ll corrupt 1 bit.
original: 011010110010
corrupted: 011010110110
* in this case I corrupted a data bit - if a parity bit gets corrupted there’s no need to correct anything, we only care about the data bits.
We have to recalculate the parity bits, but this time we’ll also include their values (taken from the encoded data):
P1 = P1 ^ D1 ^ D2 ^ D4 ^ D5 ^ D7
P2 = P2 ^ D1 ^ D3 ^ D4 ^ D6 ^ D7
P4 = P4 ^ D2 ^ D3 ^ D4 ^ D8
P8 = P8 ^ D5 ^ D6 ^ D7 ^ D8
If there were no bits corrupted, each new parity bit should be 0 (because we’re XOR-ing 2 identical bits). Replacing the values with the ones in the example, we get:
P1 = 0 ^ 1 ^ 1 ^ 1 ^ 0 ^ 1 = 0
P2 = 1 ^ 1 ^ 0 ^ 1 ^ 1 ^ 1 = 1
P4 = 0 ^ 1 ^ 0 ^ 1 ^ 0 = 0
P8 = 1 ^ 0 ^ 1 ^ 1 ^ 0 = 1
This result is somewhat obvious since I flipped/corrupted the 6th bit of data (D6, at position 10), and from the formulas, only P2 and P8 include that bit.
However, in a general case we won’t know which bit is corrupted…so here’s how these parity bits become useful. We use them to create the syndrome, so we arrange these bits like this:
P8P4P2P1
and by replacing, we get this number in binary: 1010 (10 in decimal) => the 10th bit, in the encoded data, is corrupted and needs some flippin’. Aand…finally, we get the original encoded message: 011010110010. From here, we extract the data bits => 11010010.
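A matching decoder sketch (again my own illustration; the index arithmetic follows the table above, with 0-based list indices for the 1-based bit positions):

```python
def hamming_decode(bits):
    """bits: 12-bit encoded word; corrects at most one flipped bit, returns [D1..D8]."""
    b = list(bits)
    s1 = b[0] ^ b[2] ^ b[4] ^ b[6] ^ b[8] ^ b[10]   # P1 group: positions 1,3,5,7,9,11
    s2 = b[1] ^ b[2] ^ b[5] ^ b[6] ^ b[9] ^ b[10]   # P2 group: positions 2,3,6,7,10,11
    s4 = b[3] ^ b[4] ^ b[5] ^ b[6] ^ b[11]          # P4 group: positions 4,5,6,7,12
    s8 = b[7] ^ b[8] ^ b[9] ^ b[10] ^ b[11]         # P8 group: positions 8,9,10,11,12
    syndrome = s1 + 2 * s2 + 4 * s4 + 8 * s8        # read P8 P4 P2 P1 as a binary number
    if syndrome:
        b[syndrome - 1] ^= 1                        # flip the corrupted bit
    return [b[i] for i in (2, 4, 5, 6, 8, 9, 10, 11)]

# Corrupted word from the example (bit 10 flipped):
print(hamming_decode([0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0]))  # [1, 1, 0, 1, 0, 0, 1, 0]
```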
## The end
That’s all…probably not the most interesting article, but my teachers seem to love this subject (especially during the finals), so…just trying to help.
# How does a TI-84 calculate the derivative at a point?
Google has failed me. Any responses are greatly appreciated.
I don't have it with me right now but I think it reads 10^-15 as zero, so the easiest way would be approximate it with the interval x,x+10^-14 where x is the point. It is optional in the nDeriv function to state an interval but if you don't input one I can only assume it takes the smallest number it has.
You would lose a lot of significant digits if you do it that way. I think that if derivatives have to be evaluated numerically, then you would be better off using some extrapolation or series technique. E.g. you can write the Taylor expansion of a function formally as:
f(x+t) = exp(t d/dx) f(x)
The symmetric difference with step t is thus given by:
Delta_t f(x) = [exp(t d/dx) - exp(-td/dx)]/2 f(x) = sinh(t d/dx) f(x)
So, this means that formally we can express the derivative operator in terms of the finite symmetric difference operator as:
d/dx = 1/t arcsinh(Delta_t) = 1/t [Delta_t - 1/6 Delta_t^3 + ...]
So, to compute the derivative at a point, all you need to do is to repeatedly apply the symmetric finite difference operator with some stepsize t. The smaller you take t, the faster the series converges, but then you lose significant digits. So, you should take t not too small and use a few terms of the series.
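A rough numerical sketch of this idea (my own illustration, not from the thread): taking the first two terms of the arcsinh series, d/dx ≈ (1/t)[Delta_t - Delta_t³/6].

```python
import math

def sym_diff(f, t):
    """Return the symmetric finite-difference operator Delta_t applied to f."""
    return lambda x: (f(x + t) - f(x - t)) / 2

def deriv_two_terms(f, x, t):
    """f'(x) ~ (1/t) * [Delta_t - Delta_t^3 / 6] f(x): two terms of the arcsinh series."""
    d1 = sym_diff(f, t)
    d3 = sym_diff(sym_diff(d1, t), t)  # Delta_t applied three times
    return (d1(x) - d3(x) / 6) / t

# exp'(0) = 1; the two-term estimate is far closer than the plain symmetric difference.
print(deriv_two_terms(math.exp, 0.0, 0.1))
```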
jhae2.718
Gold Member
Most graphing calculators compute a derivative by taking the symmetric difference quotient with the value of the difference being a small number close to zero such as .001.
Ref: Calculus: Graphical, Numerical, Algebraic, by Ross Finney et al., p. 111.
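A minimal sketch of that symmetric difference quotient (illustrative, not the calculator's actual firmware; the default step 0.001 is the one quoted above):

```python
def nderiv(f, x, h=0.001):
    """Symmetric difference quotient: (f(x+h) - f(x-h)) / (2h)."""
    return (f(x + h) - f(x - h)) / (2 * h)

print(nderiv(lambda t: t**2, 3.0))  # very close to the exact derivative 6
```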
Magnetic turbulent diffusion occurs when a charged particle moves through a region where the magnetic field changes with position. We will assume a Kolmogorov spectrum for the magnetic field. Since the energy density is quadratic in both the velocity and the magnetic field, they have the same spectrum, so the field amplitude at wavenumber $k$ scales as $B \propto k^{-1/3}$. Hence there are two relevant length scales. The first is the average Larmor radius $r_l$, and the second is the coherence length $\lambda_c$. The meaning of the latter is the length along which the magnetic field can be considered uniform. For relativistic particles, the Larmor radius is given by $r_l \propto \frac{E}{q B}$, where $E$ is the energy of the particle, $q$ is the charge of the particle and $B$ is the magnetic field. In this entry we will explore two limiting cases: high energy $r_l \gg \lambda_c$ and low energy $r_l \ll \lambda_c$.
## High Energy
In this limit the Larmor radius is so large that, at first approximation, within each magnetic domain the particle can be assumed to move on a straight line. The deflection angle that each magnetic domain contributes is very small: $\Delta \theta \approx \frac{\lambda_c}{r_l}$. The number of domains the particle has to go through to significantly alter its direction of motion is $N \approx \Delta \theta^{-2} \propto r_l^2$. The diffusion coefficient is therefore
$D = c \lambda_c N \propto r_l^2 \propto E^2$
## Low Energy
In this case we assume that the particle gyrates around a field line. The deviations from its helical trajectory are due to magnetic fields that vary on a scale of the Larmor radius (in principle, even magnetic fields at smaller wavelengths can contribute, but we assume that their strength decreases with wavelength). The angular deviation after each cycle is $\Delta \theta \approx \frac{B \left( r_l \right) }{B \left(\lambda_c \right)} \propto r_l^{1/3}$. The number of cycles required to change the direction of the particle is $N \propto \frac{1}{\Delta \theta^2} \propto r_l^{-2/3}$ and the diffusion coefficient is
$D \approx c r_l N \propto r_l^{1/3} \propto E^{1/3}$
# October 2018 Archives
## New Directions Of Interpolation
We have spent a few months looking at how we might interpolate between sets of points (xi, yi), where the xi are known as nodes and the yi as values, to approximate values of y for values of x between the nodes, either by connecting them with straight lines or with cubic curves.
Last time, in preparation for interpolating between multidimensional vector nodes, we implemented the ak.grid type to store ticks on a set of axes and map their intersections to ak.vector objects to represent such nodes arranged at the corners of hyperdimensional rectangular cuboids.
With this in place we're ready to take a look at one of the simplest multidimensional interpolation schemes; multilinear interpolation.
Full text...
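As a taste of what's coming (a generic sketch, not the site's ak library API), two-dimensional multilinear — i.e. bilinear — interpolation reduces to repeated one-dimensional linear interpolation between the corner values of a rectangle:

```python
def bilinear(f00, f10, f01, f11, tx, ty):
    """Bilinear interpolation of four corner values at fractional position (tx, ty) in [0,1]^2."""
    top = f00 * (1 - tx) + f10 * tx       # interpolate along x at y = 0
    bottom = f01 * (1 - tx) + f11 * tx    # interpolate along x at y = 1
    return top * (1 - ty) + bottom * ty   # then interpolate those results along y

print(bilinear(0.0, 1.0, 1.0, 2.0, 0.5, 0.5))  # 1.0, the average of the four corners
```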
# Prove that this fraction is irreducible:
0 like 0 dislike
6 views
Prove that this fraction is irreducible:
$$\frac{21n + 4}{14n +3}$$
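One route (not given in the original post): since 3(14n + 3) − 2(21n + 4) = 1, any common divisor of the numerator and denominator must divide 1, so the fraction is irreducible. A numeric spot-check of that claim:

```python
from math import gcd

# 3*(14n + 3) - 2*(21n + 4) = 42n + 9 - 42n - 8 = 1,
# so gcd(21n + 4, 14n + 3) divides 1 for every n; verify for a range of n:
assert all(gcd(21 * n + 4, 14 * n + 3) == 1 for n in range(1, 10_000))
```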
# Tail Calls
Consider the factorial function below:
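The embedded snippet didn't survive in this copy; a version consistent with the calls fac(2, 3) and fac(1, 6) described below would be:

```python
def fac(n, acc=1):
    """Tail-recursive factorial: fac(3) -> fac(2, 3) -> fac(1, 6) -> 6."""
    if n <= 1:
        return acc
    return fac(n - 1, acc * n)  # tail call: nothing left to do after it returns

print(fac(3))  # 6
```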
When we make the call fac(3), two recursive calls are made: fac(2, 3) and fac(1, 6). The last call returns 6, then fac(2, 3) returns 6, and finally the original call returns 6. I would recommend looking at the execution in Python Tutor:
If you look carefully, you can see that first a huge call stack is created, then a base case is reached, and then the return value is simply bubbled back up to the fac(3) call, which hands that value back to the global frame. This happens because after the recursive call is made by the caller, no further computation needs to be done by the caller. This kind of function call is called a tail call, and languages like Haskell, Scala, and Scheme can avoid keeping around unnecessary stack frames in such calls. This is called tail call optimization (TCO) or tail call elimination.
This is useful because the computation of fac(n) without TCO requires $\mathcal{O}(n)$ space to hold the $n$ stack frames and for large $n$, this causes the stack to overflow, whereas with TCO this would take $\mathcal{O}(1)$ memory, since a constant number of stack frames is used regardless of $n$.
The optimized code should look much like the iterative version of factorial below:
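That iterative version also isn't embedded here; a sketch consistent with the description would be:

```python
def fac_iter(n):
    """Iterative factorial: the loop plays the role of the recursive tail calls."""
    acc = 1
    while n > 1:
        acc *= n
        n -= 1
    return acc

print(fac_iter(3))  # 6
```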
As you can see below, this only creates a constant number of (one) stack frame:
Of course, this code uses a loop and mutation, so as a diligent functional programmer I will deride it and instead suggest that we restrict such behavior to a single function and abstract it away behind a decorator, so that we can make pristine tail calls in Python and also not blow away the stack.
# Tail Recursive Functions to Loops
Notice that the variables n and acc are the ones that change in every iteration of the loop, and those are the parameters to each tail recursive call. So maybe if we can keep track of the parameters and turn each recursive call into an iteration in a loop, we will be able to avoid recursive calls.
The decorator should be a higher-order function which takes in a function fn and returns an inner function which when called, calls fn, but with some scaffolding. fn must follow a specific form: it must return something which instructs the inner function (often called the trampoline function) whether it wants to recurse or return. For this, we need two classes representing the two cases:
fn should return an instance of TailCall when it wants to make a tail recursive call, and it should feed the arguments of the next call into the instance. When it wants to simply return without making a recursive call, it should return an instance of Return, which wraps the return value. The decorator looks like this:
Finally, fac looks like this:
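Neither gist is embedded in this copy; a self-contained sketch of the two classes, the tco trampoline decorator, and the rewritten fac (my reconstruction of what the article describes, not the author's exact code) might look like:

```python
class TailCall:
    """Signals the trampoline that fn wants to recurse with these arguments."""
    def __init__(self, *args, **kwargs):
        self.args, self.kwargs = args, kwargs

class Return:
    """Signals the trampoline to stop and return the wrapped value."""
    def __init__(self, value):
        self.value = value

def tco(fn):
    """Decorator: run fn in a loop instead of letting it recurse on the stack."""
    def trampoline(*args, **kwargs):
        result = fn(*args, **kwargs)
        while isinstance(result, TailCall):       # mutation and looping live here only
            result = fn(*result.args, **result.kwargs)
        return result.value
    return trampoline

@tco
def fac(n, acc=1):
    if n <= 1:
        return Return(acc)
    return TailCall(n - 1, acc * n)

print(fac(10))  # 3628800, using a constant number of stack frames
```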
And thus we have achieved the functional ideal: restricting mutation and loops to a single location, which in this case is the decorator tco, without any (severe) overhead. (Note that a good compiler would look at the original fac and replace the entire function body with a loop to guarantee zero overhead.) Notice how there is only a single stack frame belonging to the function fac at any point in time. This will let you compute fac(1000) and beyond without a stack overflow error!
And this is how you implement tail call optimization in a language which does not have native support for it. Below is a Github Gist with all the code, some examples, and static types.
How to get header and footer over ToC, LoF and Nomenclature?
I would like to know how to include a header and footer on the Table of Contents, List of Figures and Nomenclature pages. Do we need to renew any command, or is a package used to do that? A sample script would be helpful. I also want to get rid of the page count on the nomenclature page; instead I want to start it on the Introduction page. I am also interested to know whether we can put a header and footer on the first page of chapters.
The Sample Script is:
\documentclass{report}
\usepackage{titling}
\usepackage{fancyhdr}
\usepackage{lipsum} % for dummy text only
% Nomenclature
\usepackage{nomencl}
\makeglossary
\makeatletter
\newcommand\ackname{Acknowledgements}
\newenvironment{acknowledgements}{%
\begin{center}%
\bfseries \ackname
\@endparpenalty\@M
\end{center}}%
{\par}
\newcommand\abname{Abstract}
\newenvironment{abstracts}{%
\begin{center}%
\bfseries \abname
\@endparpenalty\@M
\end{center}}%
{\par}
% These commands follow the titling package format for titles
% They define user commands to format the subtitle
\newcommand\presubtitle[1]{\gdef\@presubtitle{#1}}
\newcommand\postsubtitle[1]{\gdef\@postsubtitle{#1}}
% This command takes the subtitle as its argument, and uses the titling command
% \maketitlehookb plus the previously defined formatting commands to insert
% the subtitle into the titlepage. It also generates \thesubtitle for subsequent use
\newcommand\subtitle[1]{%
\renewcommand{\maketitlehookb}{\@presubtitle#1\@postsubtitle}
\newcommand\thesubtitle{#1}}
\makeatother
% Now we define the formatting for the subtitle
\presubtitle{\begin{center}\Large} % change this as needed
\postsubtitle{\end{center}}
% Now enter the regular title information, with the new \subtitle command
\title{My Thesis Title}
\author{A.M. Author}
\subtitle{My subtitle}
\lhead{\begin{tabular}{@{}l}\thetitle\ -- \thesubtitle\\\theauthor\end{tabular}}
\chead{}
\rhead{\begin{tabular}{r@{}}\leftmark\\\today\end{tabular}}
\lfoot{}
\cfoot{\thepage}
\rfoot{}
% Set the width of the header rule. Make 0pt to remove the rule.
\renewcommand{\headrulewidth}{.5pt}
\renewcommand{\footrulewidth}{0.1pt}
% Make the head height match the size of the header
\setlength{\headheight}{24pt}
\pagestyle{fancy}
% Remove "Chapter" from the marks
\renewcommand{\chaptermark}[1]{%
\markboth{\thechapter.\ #1}{}}
% These commands set up the headers. They are set up for even and odd pages the same
% Check the fancyhdr documentation for information on how to set them differently
% for odd and even pages
\begin{document}
\lipsum[1]
\pagenumbering{roman}
\lipsum[1]
\tableofcontents
\listoffigures
\printnomenclature
%% Print the nomenclature
\addcontentsline{toc}{chapter}{Terminology/Notation}
\setcounter{page}{1}
\pagenumbering{arabic}
\lipsum[1]
\appendix
\lipsum[1]
\bibliographystyle{plainnat}
\renewcommand{\bibname}{References} % changes default name Bibliography to References
\end{document}
Well it depends on your class. So to mimic your request: "sample script is helpful". – Ulrike Fischer Jan 31 '12 at 13:15
Thanks for your effort! I went ahead and simplified your example a bit more, just to show you what I meant. Note that there's still a lot more to take out, stuff that isn't related to your problem at all. – doncherry Jan 31 '12 at 14:23
Just as an example, here's what a MWE might look like for your problem (I didn't include the nomenclature because I'm not familiar with that package): \documentclass{report} \usepackage{lipsum} \usepackage{fancyhdr} \pagestyle{fancy} \begin{document} \tableofcontents \listoffigures \chapter{Foo} \lipsum[1] \begin{figure} Hello World \caption{My great figure} \end{figure} \end{document} – doncherry Jan 31 '12 at 14:35
1 Answer
Add \thispagestyle{fancy} to each of the pages on which you're missing your page style:
\tableofcontents\thispagestyle{fancy}
\listoffigures\thispagestyle{fancy}
\printnomenclature\thispagestyle{fancy}
\thispagestyle only affects the page style of the current page, unlike \pagestyle, which sets a page style for the remainder of the document (unless it's changed again). I'm assuming \pagestyle doesn't work for these pages in your document because each of the commands triggers a \thispagestyle{plain} (i.e. only page number), which overrules the standard page style. The adaption I'm suggesting overrules that \thispagestyle{plain} yet again and yields the desired result.
If you don't want any headers and footers at all, use the page style empty, i.e. \thispagestyle{empty} on your nomenclature page.
I am getting "List Of Figures" on header for Nomenclature..How can i modify to get "Terminology/Notation" instead of List Of Figures? – volatNumbers Jan 31 '12 at 16:21
also the page count is starting from Nomenclature instead of Introduction..What should be included to get rid of this error? – volatNumbers Jan 31 '12 at 16:24
@volatNumbers These problems should be asked as separate questions (first make sure they haven't been asked via the search function). Once again, this site's purpose isn't resolving people's situations but providing answers to specific single problems, thus helping people to deal with their specific situations. – doncherry Jan 31 '12 at 16:37
# In which of the following systems is the power of the component units more than ...
### Question
In which of the following systems is the power of the component units more than that of the central government?
### Options
A) Monarchical
B) Federal Governments
C) Unitary
D) Confederal
# What's the difference between speed and velocity?
Speed is the ratio of distance traveled to the time elapsed.
i.e. speed = distance/time
whereas, velocity is the ratio of displacement to the time elapsed.
i.e., velocity = displacement/time
Now, displacement can be 0, but distance traveled cannot be. Think about what happens when we throw a ball up in the air and it falls down. It has certainly traveled a certain distance; however its displacement (the difference between initial and final position) is zero, as the ball started from ground level and came back to it. Thus, in this case, the average velocity is zero, but the average speed is not.
Also, speed is a scalar quantity, i.e. it only has magnitude. On the other hand, velocity is a vector quantity, since it has both the magnitude and direction.
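A tiny numeric illustration of the thrown-ball example, with made-up numbers (a 5 m rise over 2 s of total flight):

```python
# Ball thrown straight up and caught at the same height.
height, elapsed = 5.0, 2.0     # rise height in m, total flight time in s (hypothetical)
distance = 2 * height          # path length up and back down: 10 m
displacement = 0.0             # starts and ends at ground level

average_speed = distance / elapsed         # 5.0 m/s
average_velocity = displacement / elapsed  # 0.0 m/s

print(average_speed, average_velocity)
```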
Both of them are measured in the same units of length per unit time.
Hope this helps.
Approved by eNotes Editorial Team
# How to calculate quartiles and other percentiles?
Given the following data:
972, 975, 985, 993, 993, 995, 998, 1001, 1004, 1007, 1008, 1009, 1011, 1015, 1016, 1020, 1022, 1032
I calculate the lower quartile as follows:
Number of values: 18, Number of 'gaps': 17.
Lower quartile is at 17/4 + 1 = 5.25th value. 5.25th value is 993 + (995 - 993) x 0.25 = 993.5
Upper quartile is at 3 x 17/4 + 1 = 13.75th value. 13.75th value is 1011 + (1015 - 1011) x 0.75 = 1014
Both the Excel QUARTILE function and the R quantile function agree with me. However the book I am using (Understanding Statistics by Graham Upton and Ian Cook) give different answers (993 and 1015 respectively). I'm confused. Which is correct?
• The correct one is the one that implements the definition in the book. How does this book define a quartile? – whuber Dec 6 '16 at 22:37
As noted by Hyndman and Fan (1996) there are multiple definitions of quantiles and different implementations, so it is very likely that you found different estimates calculated from the same data (each of them equally "correct"). I'm afraid that to mention all the differences I'd need to literally reproduce the paper here, so maybe you should rather read it yourself, as it is available online:
Hyndman, R.J., & Fan, Y. (1996). Sample Quantiles in Statistical Packages. American Statistician, 50(4): 361-365.
Notice that quantile function for R (in fact implemented by Hyndman) enables you to calculate all the nine types of quantiles (using type parameter), check ?quantile to read more. So even R gave you only one of the possible estimates.
As about the estimates, types 1, 2, 3, 5, 6, 8, and 9 return the values reproduced in your book, type 7 (default in quantile function for R) is what you obtained and type 4 disagrees with both estimates.
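A sketch of two of those definitions in plain Python (my own illustration: quantile_type7 matches the Excel/R-default 993.5 / 1014 from the question, while quantile_type1 reproduces the book's 993 / 1015):

```python
import math

def quantile_type7(sorted_data, p):
    """R's default (type 7): interpolate at 0-based index h = (n - 1) * p."""
    n = len(sorted_data)
    h = (n - 1) * p
    lo = int(h)
    frac = h - lo
    if lo + 1 < n:
        return sorted_data[lo] + frac * (sorted_data[lo + 1] - sorted_data[lo])
    return sorted_data[lo]

def quantile_type1(sorted_data, p):
    """Type 1 (inverse empirical CDF): take the ceil(n * p)-th value, no interpolation."""
    n = len(sorted_data)
    idx = max(math.ceil(n * p) - 1, 0)  # convert 1-based rank to 0-based index
    return sorted_data[idx]

data = [972, 975, 985, 993, 993, 995, 998, 1001, 1004, 1007,
        1008, 1009, 1011, 1015, 1016, 1020, 1022, 1032]
print(quantile_type7(data, 0.25), quantile_type7(data, 0.75))  # 993.5 1014.0
print(quantile_type1(data, 0.25), quantile_type1(data, 0.75))  # 993 1015
```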
• Thank you for this resource. I didn't know that there so many. – user140401 Dec 6 '16 at 22:40
• The formula for quantiles given in the book is rn/q + 1/2 where n is the number of observations, r is the rth quantile and q is the number of quantiles. So to calculate the 25th percentile of 10 observations would be 25x10/100 + 1/2 which is the 3rd observation. If I sketch this out on paper it seems reasonable I suppose. However, what I am having trouble with is that this formula gives the 100th percentile as the 10.5th observation. This just doesn't make sense to me. – fractor Dec 7 '16 at 20:51
• @fractor but the authors also mention that interpolation may be needed (as I checked in the Google books copy: books.google.pl/…) – Tim Dec 7 '16 at 22:36
• @Tim So I should extrapolate off the end? This would make the 100th percentile of the data (1, 2) evaluate to 2.5 and the 0th percentile evaluate to 0.5. It is symmetric and seems like something I can get my head around I guess. It just seems odd that the 0th and 100th percentiles may lie outside of the range of the data. – fractor Dec 8 '16 at 12:19
• @fractor I do not own the book and cannot argue for the author or check what did he write (except the Google excerpts). As I said, you can find review in the paper by Hyndman and Fan; there are different approaches to calculating quantiles that employ different solutions for such problems, like rounding or interpolating. I do not think that there is any point with arguing if the approach is "right" or "wrong", simply they used such definition, what you can argue is how consistent is it with the six postulates defined in the paper. – Tim Dec 8 '16 at 13:36
|
{}
|
Potentiometric Titration of Hydrogen Peroxide (Buret) One method of determining the concentration of a hydrogen peroxide, H 2 O 2, solution is by titration with a solution of potassium permanganate, KMnO 4, of known concentration. In laboratories, 30 percent hydrogen peroxide is used. Importance of PbS and H 2 O 2 reaction. The reactions in the peroxisome produce hydrogen peroxide (H 2 0 2) as a waste product. It is still a powerful oxidant and should work its way under any residual nail polish (unlike straight hydrogen peroxide). In the reaction with sodium hypochlorite, (which is a convenient method for preparing oxygen in the laboratory), the following occurs. good luck, hope this helps. A peroxide is also formed, but one that doesn't decompose explosively. Materials - A fresh potato - Hydrogen peroxide solution (~3% by weight or ~10% by weight for optional demonstration) (24) Bleach + Water. Use the results of this experiment to justify your answer… The reaction of common household hydrogen peroxide is rather boring. Numerous studies have been conducted in view of highlighting the inactivation of various waterborne pathogens by various disinfectants, including sodium hypochlorite, hydrogen peroxide, ozone, and chlorine dioxide . • Complete the Prelab questions on the page 7. I've been trying to come up with a balanced equation for this reaction but I think i'm going in loops. The released hydrogen peroxide then becomes the active oxidizing agent, which removes the stain by breaking down the colored section of the chromophores. Notice that catalase is not changed by the reaction, and that the reaction … Bleach and Hydrogen Peroxide: Bleach usually contains sodium hypochlorite (NaClO) which is an effective disinfectant. … Pre-Lab Questions (Answer questions on a separate sheet of paper.) Hydrogen peroxide is H2O2 (which you probably know) so its pretty much water with a extra Oxygen atom. 
NAME: LAB PARTNER: Quiz Section: Grading: 30 pts for this template, 5 pts for notebook pages Notebook pages: Purpose/Method section complete? and the substrate . The other applications of H 2O 2 are as source of organic and inorganic 1. Balance the Oxidation-Reduction half reactions for hydrogen peroxide and permanganate ion, respectively. University. Learn vocabulary, terms, and more with flashcards, games, and other study tools. It is a polar molecule and water soluble due to the hydrogen bonding between hydrogen and oxygen. 1. In a paragraph describe how the concentration of peroxide affects the breakdown rate of hydrogen peroxide. Using your graph, predict how long it would take this disk to rise to the top. This lab will use the enzyme catalase. In textile industry hydrogen peroxide bleach and deodorize textiles, wood pulp, hair, fur, etc. “One should not mix household cleaners as a general rule,” Langerman says. Biology Of Cells And Organisms (BIOS 100) Uploaded … University of Illinois at Chicago. Chemistry 161 Lab4 Bozlee based on S Critchlow Spring 2019 page 1 of 9 Green River Community College Lab 4. DATA, CALCULATIONS AND GRAPHS Part I: Reaction of hydrogen peroxide and bleach Concentration of stock solutions Bleach, NaOCl(aq) %m/m NaOCl Hydrogen Peroxide, H 2 O 2 … Purpose: The purpose of this experiment was to determine the effect of the enzyme catalase found in liver on the decomposition of 6% hydrogen peroxide into water and oxygen gas. Catalase was one of the first enzymes discovered, and was named after its function – a catalyst. To take back the white colour, hydrogen peroxide is reacted with lead sulfide. Oxygen, as a product from the reaction, appears as bubbles in the reaction and rise from the surface Futher Images: of the potato to the top of the solution. Have you plotted your data as instructed? Start studying CHM116 Lab 8: Hydrogen Peroxide Pre-Lab Quiz. Common Uses of Hydrogen Peroxide. 
The reaction increases with increasing enzyme concentration when molecules of hydrogen peroxide are freely available until the optimum level is reached (Brooker et al., 2008). Turn in at the beginning of lab. Enzyme Catalysis Lab Report Enzyme Catalysis Lab Report. and the peroxide–iodide reactions is known, it is not difficult to calculate how many moles of peroxide were reduced in the known interval of time. You could continue to use peroxide + acetone, but dilute the unused portion with water and discard it. ... calculate the amount of mols per liter for each solution and make sure the mols balance out in the final answer. Therefore, it also Course. Bleach + Other Cleaners. Furthermore, commercial hydrogen peroxide solutions, such as the 3% hydrogen peroxide solution sold in drugstores and the 6% solution sold by beautician supply stores, are treated with stabilizers (sometimes called negative catalysts) that increase the activation energy for the reaction, further inhibiting it from occurring. reaction Yeast cells create hydrogen peroxide, but this compound destroys them. In many industries decomposition of hydrogen peroxide is used e.g.to produce sodium perborate and sodium percarbonate (bleaching agents in solid and liquid detergents). In reaction with permanganate ions hydrogen peroxide is acting as a(n): a) Acid b) Base c) reducing agent d) oxidizing agent (20) Experimental data for questions 3-5 25.00 mL of a sample solution containing dissolved oxygen based bleach was treated with H2SO4 and titrated with 0.0200 M solution of KMnO4. However, by adding active yeast to a solution of peroxide, the reaction time speeds up. Hydrogen peroxide is an oxidizing agent which reacts with the organic compound cell walls of the yeast. Adding bleach to other cleaners like hydrogen peroxide, oven cleaners and some pesticides can result in noxious fumes like chlorine gas or chloramine gases. 
The answer, they learned, had to do with variations in the sizes and composition of the metal alloy catalyst particles. It is used as a pigment. NOTE - look at number 9 and 10 of the lab but what are general ideas. This reaction is used for the determination of hydrogen peroxide concentration. Hydrogen peroxide is a chemical compound that is formed through the reaction of two hydrogen molecules and two oxygen molecules. I know that the reaction between bleach and hydrogen peroxide produces oxygen but using this apparatus and information I need to state how to calculate the volume of oxygen released. one mol of each reactant to one mol of each product. But due to presence of H 2 S, PbCO 3 is turned to black colour PbS. Prelab Assignment Before coming to lab. The H 2 0 2 formed by peroxisome metabolism is itself toxic, but the organelle contains an enzyme that converts the H 2 0 2 to water (H 2 O) and oxygen (O 2 ) gas (bubbles). Hydrogen Peroxide is a strong oxidizing agent used in aqueous solution as a ripening agent, bleach, and topical anti-infective. The yellow color is of the hypochlorite solution (Bleach) which disappears as the oxygen gas is "explosively" released. When hydrogen peroxide decomposes naturally into to water and oxygen, it does so slowly. The hydrogen peroxide causes a reaction to occur with the liver that is aided by catalase The heat caused the enzymes to denature and the reaction did not occur as efficiently this is the optimal temperature for enzyme function because the enzymes denature above this temperature, causing slower reactions Hypothesis: If 6% hydrogen peroxide is added to liver in a room temperature environment the catalase enzyme found in liver will decompose the hydrogen peroxide into water and oxygen gas. Combine the oxidation and reduction half-reactions for hydrogen peroxide and permanganate ion, respectively, and write the balanced chemical equation for the overall reaction between H 2 O 2 and MnO 4 Just don’t do it. 
PbCO 3 is a white inorganic compound. Hydrogen Peroxide and bleach reaction? Consequently, the average rate (moles of hydrogen peroxide consumed per liter per second) of the reaction during this period can be calculated. All that’s really left, as far as cleaning is concerned, is water, right? 2. The reaction: H2O2 + catalase (H20 + O2 + heat + catalase. pathway for the decomposition reaction of hydrogen peroxide compounds. ∆[HO22] Rate=-∆t Reaction of Hydrogen Peroxide and Bleach Please print this entire document double-sided. 32.00 mL of the titrant was used. Hydrogen peroxide is available most commonly as an aqueous solution, usually at 3 or 6 percent concentrations for household use. Write the net ionic equation for the reaction between MnO4- ions and H2O2 in acidic solution. 5 H 2 O 2 –(aq) + 2MnO The main constituent of oxygen bleach is hydrogen peroxide. But pure H2O2 (hydrogen peroxide) mixed with sugar which is C12H22O11 results in the production of H2O and CO2. Hydrogen peroxide is usually treated as a strong oxidizer, but in the presence of even stronger oxidizer it can become a reducing agent: H 2 O 2 → O 2 + 2H + + 2e-Permanganate in low pH is strong enough to quantitatively oxidize hydrogen peroxide to oxygen. hydrogen peroxide. In middle school, one chemistry experiment that illustrates this reaction is the addition of 3 percent hydrogen peroxide, active yeast and a small quantity of dish soap. A good example of an exothermic reaction is the reaction that occurs when yeast and hydrogen peroxide are mixed together. The mixture of NaOCl and H 2 O 2 in water resulted in a redox reaction which gave the following equations . NaOCl + H2O2 → O2 + NaCl + H2O . The reaction is oxidation-reduction and proceeds as shown below, in net ionic form. The group then used a variety of electron microscopy techniques to understand why palladium alloys caused the hydrogen peroxide that was produced to decompose and how this second reaction could be prevented. 
From my initial thoughts, I thought that the volume of oxygen would simply be the volume of the gas collected; however, the answer is the total volume of gas collected minus 20 cm³, the volume of air already present in the apparatus.

Hydrogen peroxide (H2O2) is essentially water with an extra oxygen atom. It is a polar molecule and is water soluble due to the hydrogen bonding between its hydrogen and oxygen atoms. It decomposes naturally into water and oxygen, but on its own it does so slowly. Cells create hydrogen peroxide as a by-product of metabolism, and the enzyme catalase, named after its function as a catalyst, destroys it:

H2O2 + catalase → H2O + O2 + catalase

In the catalase lab, a filter paper disk soaked in an enzyme source is dropped into a hydrogen peroxide solution; oxygen bubbles released on the disk cause it to rise, and the time it takes the disk to reach the top measures the reaction rate, Rate = -Δ[H2O2]/Δt. As the concentration of peroxide increases, the reaction time speeds up. In a paragraph, describe how the concentration of peroxide affects the breakdown rate, and use the results of this experiment to predict how long it would take a disk placed in a 30% peroxide solution to rise to the top. (Answer on a separate sheet of paper.)

Mixing bleach with hydrogen peroxide releases oxygen gas explosively; "one should not mix household cleaners as a general rule," Langerman says. The reaction with sodium hypochlorite, the oxidizing agent in bleach, is:

NaOCl + H2O2 → O2 + NaCl + H2O

Write the half-reactions for each solution below in net ionic form, and make sure the moles balance in the final answer. Bleach is a strong oxidizing agent, which removes stains by breaking down the colored organic compounds (chromophores); differences in how stains respond, it turns out, have to do with variations in the sizes and composition of the chromophores. Hydrogen peroxide is also used to bleach and deodorize wood pulp, hair, fur, and similar materials, and it is an effective disinfectant, available most commonly as an aqueous solution at 3 or 6 percent concentrations for household use. A peroxide-based remover should work its way under any residual nail polish (unlike straight hydrogen peroxide); you can continue to use peroxide plus acetone, but dilute the unused portion with water and discard it. Hydrogen peroxide mixed with sugar (C12H22O11) results in the production of H2O and CO2.

Two further applications: the decomposition of hydrogen peroxide is a convenient method for preparing oxygen in the laboratory, and the redox reaction between MnO4- ions and H2O2 in acidic solution is used for the determination of hydrogen peroxide concentration. In art restoration, paintings darkened by H2S, which turns white PbCO3 into black PbS, are treated with hydrogen peroxide, which oxidizes the black lead sulfide back to a white compound.
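The rate expression Rate = -Δ[H2O2]/Δt is just a change in concentration over a change in time, so it can be sketched in a few lines; the concentration readings and time interval below are hypothetical numbers chosen only for illustration:

```python
def average_rate(c_initial, c_final, dt):
    """Average rate of H2O2 breakdown: Rate = -delta[H2O2] / delta t.

    Concentrations in mol/L, dt in seconds; returns mol L^-1 s^-1.
    """
    return -(c_final - c_initial) / dt

# Hypothetical readings: [H2O2] falls from 0.88 M to 0.62 M over 120 s.
rate = average_rate(0.88, 0.62, 120.0)
print(f"{rate:.5f} mol L^-1 s^-1")
```

With these assumed numbers the average rate comes out to about 0.00217 mol L⁻¹ s⁻¹; a faster-rising disk corresponds to a larger rate.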
|
{}
|
## 11 8 As a Percent
To write 11/8 as a percent, divide 11 by 8 to get 1.375, then multiply by 100: 11/8 = 137.5%.
### Percent
A mixed number is a whole number plus a fraction. You can convert fraction part of the mixed number to a decimal and then multiply by 100 to get the percent value. Alternatively you can convert mixed number to an improper fraction, and then convert it to a decimal by dividing numerator by denominator. Finally, multiply the decimal by 100 to find the percent value.
The term percent is a ratio or a number that is expressed as a fraction of 100. It is denoted using the percentage sign %. To understand the concept of how the percent represents the fraction of 100, here is an example. 35% can be written in fraction as 35/100. In class, 50% of the students were male, which means out of every 100 students, 50 were male. (Source: byjus.com)
### Convert
You often need to find percentages in your daily life. To understand how to convert a fraction to a percent, consider an example. If a school has 865 students, of which 389 are female, what percent of the students are female? To solve this you divide one number by the other: 389 out of 865 is $$\small \frac{389}{865} \approx 0.449711$$. To express this as a percent, multiply by 100, which gives $$\small 44.9711 \%$$. Now, let’s discuss how to convert between percents and fractions in detail.
Percent refers to a fraction of a whole and is often easier to grasp than the fraction itself: it expresses how much of a whole something contains. For example, $$\small 50 \%$$ can be written as $$\small \frac{1}{2}$$, and $$\small 25 \%$$ means $$\small \frac{1}{4}$$. The conversion also works in reverse, from fraction to percent. (Source: byjus.com)
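The two conversions described above, a plain fraction to a percent and a mixed number to a percent, can be sketched in a few lines of code. The student numbers are the ones from the text; the mixed number 2 1/4 is an invented example:

```python
from fractions import Fraction

def to_percent(numerator, denominator):
    """Convert a fraction to its percentage value."""
    return numerator / denominator * 100

# The worked example from the text: 389 female students out of 865.
print(round(to_percent(389, 865), 4))   # 44.9711

# A mixed number such as 2 1/4 is first made an improper fraction (9/4),
# then converted to a decimal and multiplied by 100.
mixed = 2 + Fraction(1, 4)
print(float(mixed) * 100)               # 225.0
```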
|
{}
|
Chemistry » States of Matter » Ideal Gas Laws
# Ideal Gas Laws
## Ideal gas laws
There are several laws to explain the behaviour of ideal gases. The first three that we will look at apply under very strict conditions. These laws are then combined to form the general gas equation and the ideal gas equation.
Before we start looking at these laws we need to look at some common conversions for units.
The following table gives the SI units. This table also shows how to convert between common units. Do not worry if some of the units are strange to you. By the end of this section you will have had a chance to see all these units in action.
| Variable | SI units | Other units |
| --- | --- | --- |
| Pressure (p) | pascals (Pa) | 760 mm Hg = 1 atm = 101 325 Pa = 101.325 kPa |
| Volume (V) | m³ | 1 m³ = 1 000 000 cm³ = 1 000 dm³ = 1 000 L |
| Moles (n) | mol | |
| Universal gas constant (R) | J·K⁻¹·mol⁻¹ | |
| Temperature (T) | kelvin (K) | K = ℃ + 273 |
Table: Conversion table showing SI units of measurement and common conversions.
Two very useful volume relations to remember are: $$\text{1}\text{ mL} = \text{1}\text{ cm^{3}}$$ and $$\text{1}\text{ L} = \text{1}\text{ dm^{3}}$$.
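As a quick sketch, the conversions in the table can be wrapped in small helper functions (the function names are my own, not from the text). Note that the table uses the rounded relation K = ℃ + 273 rather than 273.15:

```python
# Conversion factors from the table above.
ATM_TO_PA = 101_325      # 1 atm = 101 325 Pa (exact definition)
MMHG_PER_ATM = 760       # 760 mm Hg = 1 atm

def mmhg_to_kpa(p_mmhg):
    """Pressure in mm Hg -> kilopascals."""
    return p_mmhg / MMHG_PER_ATM * ATM_TO_PA / 1000

def litres_to_m3(v_litres):
    """Volume in litres -> cubic metres (1 m^3 = 1000 L)."""
    return v_litres / 1000

def celsius_to_kelvin(t_celsius):
    """Temperature in Celsius -> kelvin, using the text's K = C + 273."""
    return t_celsius + 273

print(mmhg_to_kpa(760))       # 101.325
print(litres_to_m3(1500))     # 1.5
print(celsius_to_kelvin(25))  # 298
```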
|
{}
|
# Generalized Dirichlet-process-means for $f$-separable distortion measures
31 Jan 2019
DP-means clustering was obtained as an extension of $K$-means clustering. While it is implemented with a simple and efficient algorithm, it can estimate the number of clusters simultaneously.
|
{}
|
## Notes for Stochastic Control 2019
The link below contains the PDF notes for this year's stochastic control course.
stochastic_control_2019
I’ll upload individual posts for each section. I’ll likely update these notes and add more exercises over the coming semester. I’ll add this update in a further post at the end of the course. Comments, typos, suggestions are always welcome.
|
{}
|
# NAG Library Routine Document
## 1 Purpose
g05sff generates a vector of pseudorandom numbers from a (negative) exponential distribution with mean $a$.
## 2 Specification
Fortran Interface
Subroutine g05sff (n, a, state, x, ifail)
Integer, Intent (In) :: n
Integer, Intent (Inout) :: state(*), ifail
Real (Kind=nag_wp), Intent (In) :: a
Real (Kind=nag_wp), Intent (Out) :: x(n)
C Header Interface
#include <nagmk26.h>
void g05sff_ (const Integer *n, const double *a, Integer state[], double x[], Integer *ifail)
## 3 Description
The exponential distribution has PDF (probability density function):
$f(x) = \frac{1}{a} e^{-x/a} \text{ if } x \ge 0; \quad f(x) = 0 \text{ otherwise.}$
g05sff returns the values
$x_i = -a \ln(y_i)$
where ${y}_{i}$ are the next $n$ numbers generated by a uniform $\left(0,1\right]$ generator.
One of the initialization routines g05kff (for a repeatable sequence if computed sequentially) or g05kgf (for a non-repeatable sequence) must be called prior to the first call to g05sff.
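The transform above is the standard inverse-transform method for exponential variates. A minimal Python sketch of the same idea (not the NAG implementation, with Python's own uniform generator standing in for the g05kff/g05kgf state) looks like:

```python
import math
import random

def exponential_sample(n, a, rng=random.random):
    """Draw n exponential variates with mean a via x_i = -a * ln(y_i),
    where the y_i are uniform on (0, 1]."""
    # random.random() yields values in [0, 1); using 1 - u maps them
    # into (0, 1] so the logarithm is always defined.
    return [-a * math.log(1.0 - rng()) for _ in range(n)]

random.seed(0)
xs = exponential_sample(100_000, a=2.0)
print(sum(xs) / len(xs))  # sample mean, close to the requested mean 2.0
```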
## 4 References
Kendall M G and Stuart A (1969) The Advanced Theory of Statistics (Volume 1) (3rd Edition) Griffin
Knuth D E (1981) The Art of Computer Programming (Volume 2) (2nd Edition) Addison–Wesley
## 5 Arguments
1: $\mathbf{n}$ – Integer Input
On entry: $n$, the number of pseudorandom numbers to be generated.
Constraint: ${\mathbf{n}}\ge 0$.
2: $\mathbf{a}$ – Real (Kind=nag_wp) Input
On entry: $a$, the mean of the distribution.
Constraint: ${\mathbf{a}}>0.0$.
3: $\mathbf{state}\left(*\right)$ – Integer array Communication Array
Note: the actual argument supplied must be the array state supplied to the initialization routines g05kff or g05kgf.
On entry: contains information on the selected base generator and its current state.
On exit: contains updated information on the state of the generator.
4: $\mathbf{x}\left({\mathbf{n}}\right)$ – Real (Kind=nag_wp) array Output
On exit: the $n$ pseudorandom numbers from the specified exponential distribution.
5: $\mathbf{ifail}$ – Integer Input/Output
On entry: ifail must be set to $0$, $-1\text{ or }1$. If you are unfamiliar with this argument you should refer to Section 3.4 in How to Use the NAG Library and its Documentation for details.
For environments where it might be inappropriate to halt program execution when an error is detected, the value $-1\text{ or }1$ is recommended. If the output of error messages is undesirable, then the value $1$ is recommended. Otherwise, if you are not familiar with this argument, the recommended value is $0$. When the value $-\mathbf{1}\text{ or }\mathbf{1}$ is used it is essential to test the value of ifail on exit.
On exit: ${\mathbf{ifail}}={\mathbf{0}}$ unless the routine detects an error or a warning has been flagged (see Section 6).
## 6 Error Indicators and Warnings
If on entry ${\mathbf{ifail}}=0$ or $-1$, explanatory error messages are output on the current error message unit (as defined by x04aaf).
Errors or warnings detected by the routine:
${\mathbf{ifail}}=1$
On entry, ${\mathbf{n}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{n}}\ge 0$.
${\mathbf{ifail}}=2$
On entry, ${\mathbf{a}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{a}}>0.0$.
${\mathbf{ifail}}=3$
On entry, state vector has been corrupted or not initialized.
${\mathbf{ifail}}=-99$
See Section 3.9 in How to Use the NAG Library and its Documentation for further information.
${\mathbf{ifail}}=-399$
Your licence key may have expired or may not have been installed correctly.
See Section 3.8 in How to Use the NAG Library and its Documentation for further information.
${\mathbf{ifail}}=-999$
Dynamic memory allocation failed.
See Section 3.7 in How to Use the NAG Library and its Documentation for further information.
## 7 Accuracy
Not applicable.
## 8 Parallelism and Performance
g05sff is threaded by NAG for parallel execution in multithreaded implementations of the NAG Library.
Please consult the X06 Chapter Introduction for information on how to control and interrogate the OpenMP environment used within this routine. Please also consult the Users' Note for your implementation for any additional implementation-specific information.
## 9 Further Comments
None.
## 10 Example
This example prints five pseudorandom numbers from an exponential distribution with mean $1.0$, generated by a single call to g05sff, after initialization by g05kff.
### 10.1 Program Text
Program Text (g05sffe.f90)
### 10.2 Program Data
Program Data (g05sffe.d)
### 10.3 Program Results
Program Results (g05sffe.r)
© The Numerical Algorithms Group Ltd, Oxford, UK. 2017
|
{}
|
# Curvature tensor of sphere radius R
#### player1_1_1
hello! I need to find the curvature tensor of a sphere of radius R. How can I start? thanks!
#### HallsofIvy
Homework Helper
Are you talking about the sphere of radius R in three dimensions?
Start by writing out x, y, and z in spherical coordinates with $$\rho$$ taken as the constant R:
$$x= Rcos(\theta)sin(\phi)$$
$$y= Rsin(\theta)sin(\phi)$$
$$z= R cos(\phi)$$
Calculate the differentials:
$$dx= - R sin(\theta)sin(\phi)d\theta+ Rcos(\theta)cos(\phi)d\phi$$
$$dy= R cos(\theta)sin(\phi)d\theta+ Rsin(\theta)cos(\phi)d\phi$$
$$dz= -R sin(\phi)d\phi$$
Find $$ds^2= dx^2+ dy^2+ dz^2$$ in terms of spherical coordinates:
$$dx^2= R^2 sin^2(\theta)sin^2(\phi)d\theta^2 - 2R^2sin(\theta)cos(\theta)sin(\phi)cos(\phi)\,d\theta\, d\phi + R^2cos^2(\theta)cos^2(\phi)d\phi^2$$
$$dy^2= R^2cos^2(\theta)sin^2(\phi)d\theta^2 + 2R^2sin(\theta)cos(\theta)sin(\phi)cos(\phi)\,d\theta\, d\phi + R^2sin^2(\theta)cos^2(\phi)d\phi^2$$
$$dz^2= R^2 sin^2(\phi)d\phi^2$$
$$ds^2= R^2sin^2(\phi)d\theta^2+ R^2 d\phi^2$$
which gives us the metric tensor:
$$g_{ij}= \begin{pmatrix}R^2sin^2(\phi) & 0 \\ 0 & R^2 \end{pmatrix}$$
You can calculate $$g^{ij}$$, the Christoffel symbols, and the curvature tensor from that.
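As a sketch of that last step, the Christoffel symbols, Riemann tensor, Ricci tensor, and scalar curvature can be computed symbolically from the metric above, assuming sympy is available; the scalar curvature of a sphere of radius R should come out to 2/R²:

```python
import sympy as sp

theta, phi, R = sp.symbols('theta phi R', positive=True)
coords = [theta, phi]
n = 2

# Metric from ds^2 = R^2 sin^2(phi) dtheta^2 + R^2 dphi^2
g = sp.Matrix([[R**2 * sp.sin(phi)**2, 0],
               [0, R**2]])
ginv = g.inv()

# Christoffel symbols Gamma^k_{ij} = (1/2) g^{kl} (d_j g_li + d_i g_lj - d_l g_ij)
def Gamma(k, i, j):
    return sum(ginv[k, l] * (sp.diff(g[l, i], coords[j])
                             + sp.diff(g[l, j], coords[i])
                             - sp.diff(g[i, j], coords[l]))
               for l in range(n)) / 2

# Riemann tensor R^r_{s m v} = d_m Gamma^r_{v s} - d_v Gamma^r_{m s}
#                            + Gamma^r_{m l} Gamma^l_{v s} - Gamma^r_{v l} Gamma^l_{m s}
def Riemann(r, s, m, v):
    expr = sp.diff(Gamma(r, v, s), coords[m]) - sp.diff(Gamma(r, m, s), coords[v])
    expr += sum(Gamma(r, m, l) * Gamma(l, v, s) - Gamma(r, v, l) * Gamma(l, m, s)
                for l in range(n))
    return sp.simplify(expr)

# Ricci tensor R_{s v} = R^r_{s r v}, then contract with g^{s v} for the scalar
Ricci = sp.Matrix(n, n, lambda s, v: sum(Riemann(r, s, r, v) for r in range(n)))
scalar = sp.simplify(sum(ginv[s, v] * Ricci[s, v]
                         for s in range(n) for v in range(n)))
print(scalar)  # 2/R**2 -- constant positive curvature, as expected for a sphere
```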
#### player1_1_1
thank you!!! i finally know what to do ;] i'm going to try this, and i'll ask if i run into problems
|
{}
|
Milkweed plants are the one food on which monarch caterpillars dine. Milkweeds (Asclepias spp.) are the required host plants for caterpillars of the monarch butterfly and thus play a critical role in the monarch's life cycle. The genus name, Asclepias, commemorates Asklepios, the Greek god of medicine. With over 140 different types of milkweed, there are milkweeds that grow well in almost every hardiness zone.

North America's monarch butterflies fall into two groups. The western group lives west of the Rocky Mountains and spends winter along the California coast between Mendocino County and San Diego, so the California landscape provides both breeding and overwintering habitat. At Whitewater Draw Wildlife Area near McNeal one day last month, 25 volunteers spent five hours planting milkweed to create waypoints for monarchs migrating to Southern California and west-central Mexico from the northern U.S. and Canada, a trip that can stretch 3,000 miles each way.

California milkweed (Asclepias californica), a species in the Apocynaceae (dogbane) family, is one of the most beautiful milkweeds in California. It is a white-woolly perennial with milky sap, thick stems that bend or run along the ground, deep purple flowers, and fuzzy white-to-gray foliage. It is native to California and northern Baja California, from the East Bay region southward and through the foothills of the Sierras, and grows on flats and grassy or brushy slopes in many plant communities, including valley grassland, foothill woodland, pinyon-juniper woodland, and chaparral. At best about three feet tall, with pink and purple flowers between April and July, it fits right into wild gardens in the California chaparral country, where it can sometimes be found with Eriogonum fasciculatum polifolium, Quercus douglasii, Diplacus, Mirabilis, and Artemisia californica.

California narrow-leaf milkweed (Asclepias fascicularis) has narrow leaves, a wider native range, and a whole lot more garden tolerance than most of the other native species. It is a perennial with three-foot stems, five-inch leaves, and large white flower clusters; it is very drought tolerant, even occurring in some high desert areas, tolerates sandy or clay soils, and is a favorite egg-laying plant that grows quickly and sets many seed pods. Desert milkweed (Asclepias erosa) is native to southern California, Arizona, and northern Baja California, where it is most abundant in the desert regions; it is a perennial herb with erect yellow-green stems and foliage in shades of pale whitish-green to dark green with white veining, blooming white or green from March to October in clusters that reach heights of one to three feet. Common milkweed (Asclepias syriaca), a member of the Asclepiadaceae (milkweed) family, grows about two to four feet tall with a thin, vertical growth habit and light green oblong leaves about eight inches long. Elsewhere, A. viridis (green antelopehorn) occurs west of the Mississippi in Arkansas and Louisiana and is an ideal native choice for the south-central United States and northern Mexico, supporting butterflies, bees, and birds; A. verticillata occurs in Florida and parts of North Carolina; and A. tuberosa (butterfly weed) and A. incarnata (swamp milkweed) are the most widespread and easiest milkweeds to grow in the East. Swamp milkweed attracts a wide range of pollinators and has a nice fragrance.

Tropical milkweed (Asclepias curassavica), with its red and yellow flowers, probably wins the contest for the monarchs' favorite milkweed in most gardens, but it is not native to the U.S. It has naturalized in zones 9-11 (possibly 8b as well), where it grows as a perennial, and because it grows fast and easily from seed and flowers all summer it is used as an annual all over the U.S. and Canada. When planted in the coastal southern U.S. and California, it continues to flower and produce new leaves throughout fall and winter, except during rare freeze events. This can foster the growth of a parasite called Ophryocystis elektroscirrha (OE). Gardeners in zones 8 through 11 who grow tropical milkweed need not immediately rip it out, but should cut it back in late fall to reduce the spread of OE and promote new growth of fresh, healthy leaves. Many people believe they are helping monarchs by planting tropical milkweed, but planting it in places monarchs never bred (San Francisco and other California counties) might not be the best idea: if they never bred somewhere, making them breed there is not helping them, and viable tropical milkweed in northern gardens weeks after the monarchs should have moved along can delay their south-southwesterly migration. For most parts of California, planting native milkweed is recommended as a key strategy for helping monarchs, alongside nectar plants, since nectar is critical for fueling monarchs during migration and overwintering. Please confirm that species are native to your area before planting.

California's milkweeds are also widely contaminated with pesticides, new data show. Researchers sampled milkweed leaves from 19 sites representing different land-use types across the Central Valley (agriculture, wildlife refuges, and nurseries), plus plants purchased from two stores that sell to home gardeners. Frozen leaves (five grams was the target sample weight) were extracted by a modified version of the EN 15662 QuEChERS procedure (European Committee for Standardization, 2008) and screened for 262 pesticides, including some metabolites and breakdown products, by liquid chromatography mass spectrometry (LC-MS/MS). The screen found 64 pesticide residues (25 insecticides, 27 fungicides, and 11 herbicides, as well as 1 adjuvant); samples from all of the locations studied were contaminated, sometimes at levels harmful to monarchs and other insects. The study raises alarms for the remaining western monarchs, a population already at a precariously small size.

Growing instructions: start California milkweed seeds outdoors in late November. Pick a location with full sun and prepare the soil for good drainage if needed. Plant seeds 1/8 inch deep and 18 inches apart, using three seeds per hole; water once, then allow winter rain and snow to provide moisture until spring. Fresh seeds need no treatment; stored seeds benefit from scarification or a hot-water soak. Milkweed plugs are propagated in two types of flats, either 32-cell flats with a shallow well or 50-cell flats with a deep well; the minimum award is four flats, with the amount dependent on funding, supply and demand, and the goal of distributing milkweeds widely across the entire monarch milkweed corridor. Winter care of milkweed depends on your zone and which milkweed you have.

Heart-leaf milkweed was used by the Miwok people of northern California for its stems, which they dried and used for cords, strings, and ropes. The milkweed filaments from the coma (the "floss") are hollow and coated with wax, and have good insulation qualities. (Source: USDA Plants Database as of November 15, 2017.)
The spots listed below are the most popular and easiest to reach, but they aren't the only places you can go to. © California Native Plant Society. Its population in North America has plummeted by 90% in the last 20 years. In this study, we collected 227 milkweed (Asclepias spp.) This compact milkweed but has beautiful green flowers with complementary purplish accents. These winter in California. Soil PH: 5.4 - 7.7, Use with medium size shrubs that won't overwhelm it, such as. Loss of milkweed needed for monarch caterpillars to grow and develop, due to habitat conversion and adverse land management; Drought conditions in California and other areas in the western U.S., resulting in lower milkweed biomass, and reduced availability of milkweed late in the summer 0 The Purple Milkweed is probably the rarest form of Milkweed seed that we have for sale, and the bulk stock that we get from time to time is quickly sold out. We have 32 count 10X20 plug trays ready now! It is very drought tolerant even occruing in some high desert areas. It is our native milkweed, usually called Narrow-leaf Milkweed or more accurately Asclepias fascicularis. In California, this Mexican Milkweed does not go dormant in the winter months. Call ahead to confirm species, quantity and verify plants have not been in contact with neonicotinoids or pesticides. Matilija Nursery Moorpark, CA Watching Spots in California . There is growing evidence within the science community that non-native milkweeds may be causing changes in monarch migration habits and increasing the prevalence of a debilitating disease among the adult butterflies. The Orioles use the dead stems for … It has deep purple flowers and almost white gray fuzzy foliage. It is a favorite egg laying plant which grows quickly and will sprout many seed pods. It is a perennial and has thick wooly stems that are low to the ground and bending upwards. California Narrow Leaf Milkweed California Narrow Leaf Milkweed Asclepias fascicularis. 
Asclepias syriaca, also known as common milkweed, was once the major diet of the monarchs; we shouldn't plant it in Southern California. California milkweed is found in the central Coast Ranges, the southern Sierra Nevada, and the Transverse and Peninsular Ranges, but is largely absent from the Central Valley. It is very drought tolerant, even occurring in some high desert areas. Planting milkweeds may be especially beneficial in the Central Valley, where milkweeds were historically more abundant than they are now. There is growing evidence within the science community that non-native milkweeds may be causing changes in monarch migration habits and increasing the prevalence of a debilitating disease among the adult butterflies. The Purple Milkweed is probably the rarest form of milkweed seed that we have for sale, and the bulk stock that we get from time to time is quickly sold out. We have 32-count 10X20 plug trays ready now! The Orioles use the dead stems for …
|
{}
|
# p,q are both non-zero integers, which of the following would be the va
Director
Status: Learning stage
Joined: 01 Oct 2017
Posts: 931
WE: Supply Chain Management (Energy and Utilities)
p,q are both non-zero integers, which of the following would be the va
14 Jul 2018, 11:55
Difficulty: 95% (hard). Question Stats: 25% (02:27) correct, 75% (02:56) wrong, based on 40 sessions.
If $$\frac{5(pq)^3+35(pq)^2-40pq}{(p-1)(p+4)}$$=0 and p,q are both non-zero integers, which of the following would be the value of q?
I. -4
II. 1
III. 2
(A) I only
(B) II only
(C) I,II only
(D) I,III only
(E) I, II and III
MBA Section Director
Affiliations: GMATClub
Joined: 22 May 2017
Posts: 1468
Concentration: Nonprofit
GPA: 4
WE: Engineering (Computer Software)
Re: p,q are both non-zero integers, which of the following would be the va
14 Jul 2018, 19:03
$$\frac{5(pq)^3+35(pq)^2-40pq}{(p-1)(p+4)}$$=0
Since (p-1)(p+4) is in denominator = > p $$\neq$$ 1 and p $$\neq$$ -4
=> $$5(pq)^3+35(pq)^2-40pq$$=0
=> $$5pq((pq)^2+7(pq)-8)$$=0
=> $$(pq)^2+7(pq)-8$$=0
=> $$(pq)^2 + 8(pq) - (pq) -8$$=0
=> $$(pq)(pq + 8) -(pq + 8)$$=0
=> $$(pq-1)(pq+8)$$=0
=> pq = 1 or pq = -8
1) q = -4 => p = 2
q = -4 is possible
2) q = 1 => p = -8
q = 1 is possible
3) q = 2 => p = -4 is not possible since (p+4) is in denominator and hence p $$\neq$$ -4
only cases 1 and 2 are possible
Hence option C
SC Moderator
Joined: 13 Apr 2015
Posts: 1687
Location: India
Concentration: Strategy, General Management
GMAT 1: 200 Q1 V1
GPA: 4
WE: Analyst (Retail)
p,q are both non-zero integers, which of the following would be the va
14 Jul 2018, 19:14
Assume pq = x
p cannot be equal to 1 and -4 as denominator cannot be equal to 0.
5x * (x^2 + 7x - 8) = 0
5x * (x + 8) * (x - 1) = 0
x = 0 or -8 or 1
But, x cannot be 0 as p and q are non zero integers.
Hence pq = -8 or 1
If pq = 1 then p = q = -1
If pq = -8 then pq can be (-1, 8), (-2, 4), (-8, 1), (2, -4), (4, -2), (8, -1).
Values of q can be 8, 4, 1, -4, -2, -1. Among the given options, q can be -4 (I) and 1 (II), but not 2 (III). Answer: C
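A brute-force check over a small integer range (the bound 50 is an arbitrary choice for illustration) confirms which option values of q actually admit a valid integer p:

```python
# Brute-force search for integer pairs (p, q) satisfying
# 5(pq)^3 + 35(pq)^2 - 40pq = 0 with p, q nonzero and p not in {1, -4}
# (p = 1 and p = -4 would make the denominator (p-1)(p+4) zero).
valid_q = set()
for p in range(-50, 51):
    for q in range(-50, 51):
        if p == 0 or q == 0 or p in (1, -4):
            continue
        if 5 * (p * q) ** 3 + 35 * (p * q) ** 2 - 40 * p * q == 0:
            valid_q.add(q)

print(sorted(valid_q))  # [-4, -2, -1, 1, 4, 8] -> q = -4 and 1 work, q = 2 does not
```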
|
{}
|
Seminars/Colloquia
Jet cross sections at high-energy colliders exhibit intricate patterns of logarithmically enhanced higher-order corrections. In particular, so-called non-global logarithms emerge from soft radiation emitted off energetic partons inside jets. While this is a single-logarithmic effect at lepton colliders, at hadron colliders phase factors in the amplitudes lead to double-logarithmic corrections starting at four-loop order. In my talk I’ll first explain the resummation of non-global logarithms at lepton colliders, where techniques for the resummation of sub-leading logarithms are now becoming available. I’ll then explain the origin of the “super-leading” double logarithms at hadron colliders and discuss their resummation, which was recently achieved for the first time.
|
{}
|
# Boolean function
In mathematics, a (finitary) Boolean function is a function of the form f : B^k → B, where B = {0, 1} is a Boolean domain and k is a nonnegative integer called the arity of the function. In the case where k = 0, the "function" is essentially a constant element of B.
Every k-ary Boolean formula can be expressed as a propositional formula in k variables x1, …, xk, and two propositional formulas are logically equivalent if and only if they express the same Boolean function. There are $2^{2^k}$ k-ary functions for every k.
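The count $2^{2^k}$ follows because a k-ary function is determined by its truth table: one output bit for each of the $2^k$ possible input tuples. A short illustrative sketch (not from the article) that enumerates them:

```python
from itertools import product

def boolean_functions(k):
    """Enumerate all k-ary Boolean functions as truth tables.

    A k-ary function is determined by its output bit on each of the
    2**k possible input tuples, so there are 2**(2**k) of them.
    """
    inputs = list(product((0, 1), repeat=k))          # the 2**k input rows
    tables = product((0, 1), repeat=len(inputs))      # one output bit per row
    return [dict(zip(inputs, outs)) for outs in tables]

unary = boolean_functions(1)    # 2**(2**1) = 4 functions
binary = boolean_functions(2)   # 2**(2**2) = 16 functions
print(len(unary), len(binary))  # 4 16
```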
Boolean functions in applications
A Boolean function describes how to determine a Boolean value output based on some logical calculation from Boolean inputs. Such functions play a basic role in questions of complexity theory as well as the design of circuits and chips for digital computers. The properties of Boolean functions play a critical role in cryptography, particularly in the design of symmetric key algorithms (see substitution box).
Boolean functions are often represented by sentences in propositional logic, but more efficient representations are binary decision diagrams (BDD), negation normal forms, and propositional directed acyclic graphs (PDAG).
See also
* Algebra of sets
* Boolean algebra (logic)
* Boolean algebra topics
* Boolean domain
* Boolean logic
* Boolean-valued function
* Logical connective
* Truth function
Wikimedia Foundation. 2010.
|
{}
|
# ROCOF¶
class reliability.Repairable_systems.ROCOF(times_between_failures=None, failure_times=None, CI=0.95, test_end=None, show_plot=True, print_results=True, **kwargs)
Uses the failure times or failure interarrival times to determine if there is a trend in those times. The test for statistical significance is the Laplace test, which compares the Laplace test statistic (U) with the z value (z_crit) from the standard normal distribution. If there is a statistically significant trend, the parameters of the model (Lambda_hat and Beta_hat) are calculated. By default the results are printed and a plot of the times and MTBF is produced.
Inputs:
- times_between_failures - the failure interarrival times.
- failure_times - the actual failure times.
- test_end - use this to specify the end of the test if the test did not end at the time of the last failure.
- CI - the confidence interval for the Laplace test. Default is 0.95 for 95% CI.
- show_plot - True/False. Default is True. Plotting keywords are also accepted (eg. color, linestyle).
- print_results - True/False. Default is True.
Note 1: You can specify either times_between_failures OR failure_times but not both. Both options are provided for convenience, and the conversion between the two is done internally; failure_times should be the same as np.cumsum(times_between_failures).
Note 2: The repair time is assumed to be negligible. If the repair times are not negligibly small then you will need to manually adjust your input to factor in the repair times.
Outputs:
- U - the Laplace test statistic.
- z_crit - (lower, upper) bounds on the z value, based on the CI.
- trend - 'improving', 'worsening', or 'constant', based on the comparison of U with z_crit.
- Beta_hat - the Beta parameter for the NHPP Power Law model. Only calculated if the trend is not constant.
- Lambda_hat - the Lambda parameter for the NHPP Power Law model. Only calculated if the trend is not constant.
- ROCOF - the Rate of OCcurrence Of Failures. Only calculated if the trend is constant; if the trend is not constant then the ROCOF changes over time in accordance with Beta_hat and Lambda_hat.
- printed results - only printed if print_results is True.
- plotted results - only plotted if show_plot is True. Use plt.show() to display it.
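The class computes the Laplace statistic internally; as an illustration of the idea, here is a minimal standalone sketch of the conventional Laplace test formula (this is not the reliability library's code, and the failure data below are made up):

```python
import math

def laplace_statistic(failure_times, test_end=None):
    """Laplace test statistic U for trend in failure times.

    Under a constant ROCOF (homogeneous Poisson process), U ~ N(0, 1).
    U < 0 suggests an improving trend (failures early, then thinning out);
    U > 0 suggests a worsening trend. Repair times are assumed negligible.
    """
    if test_end is None:
        # Failure-terminated test: the last failure ends the test and is
        # excluded from the sum.
        t_end = failure_times[-1]
        times = failure_times[:-1]
    else:
        # Time-terminated test: all failures are included.
        t_end = test_end
        times = list(failure_times)
    n = len(times)
    return (sum(times) / n - t_end / 2) / (t_end * math.sqrt(1 / (12 * n)))

# Failures clustered early in a long test -> improving trend (U < 0)
print(laplace_statistic([1, 2, 3, 4, 5], test_end=100))
```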
|
{}
|
# In the figure, ABC and AMP are two right triangles, right angled at B and M respectively, prove that: (i) $$\triangle$$ ABC ~ $$\triangle$$ AMP (ii) CA/PA = BC/MP
Given, ABC and AMP are two right triangles, right angled at B and M respectively.
(i) In $$\triangle$$ ABC and $$\triangle$$ AMP, we have,
$$\angle$$ CAB = $$\angle$$ MAP (common angle)
$$\angle$$ ABC = $$\angle$$ AMP = 90° (each 90°)
Therefore, $$\triangle$$ ABC ~ $$\triangle$$ AMP (AA similarity criterion)
(ii) As, $$\triangle$$ ABC ~ $$\triangle$$ AMP (AA similarity criterion)
If two triangles are similar, then their corresponding sides are proportional.
Hence, CA/PA = BC/MP
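The proportion in (ii) can be illustrated numerically. The coordinates below are a hypothetical configuration (not taken from the original figure): both triangles share angle A, with right angles at B and M respectively:

```python
import math

# Hypothetical coordinates: right angle at B = (4, 0); M = (2, 0) lies on AB,
# and P = (2, 1.5) lies on ray AC, so angle AMP = 90 degrees and
# angle MAP = angle CAB (the shared angle at A).
A, B, C = (0, 0), (4, 0), (4, 3)
M, P = (2, 0), (2, 1.5)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

CA, PA = dist(C, A), dist(P, A)   # hypotenuses
BC, MP = dist(B, C), dist(M, P)   # legs opposite the shared angle A

print(CA / PA, BC / MP)  # both ratios equal 2.0
```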
|
{}
|
## The ECH capacities of a ball union a cylinder
[UPDATE 2: I corrected some small mistakes pointed out by Vinicius. I’ll write a detailed explanation of the index calculation in another post.]
Recall that if $\Omega$ is a domain in the first quadrant in ${\mathbb R}^2$, then we define the “toric domain”
$X_\Omega = \{(z_1,z_2)\in{\mathbb C}^2\mid \pi(|z_1|^2,|z_2|^2)\in\Omega\}.$
For example if $\Omega$ is a right triangle with legs on the axes then $X_\Omega$ is an ellipsoid; and if $\Omega$ is a rectangle with two sides on the axes then $X_\Omega$ is a polydisk. In general it is interesting to compute the ECH capacities of $X_\Omega$. When $\Omega$ is convex and does not touch the axes, there is a combinatorial formula for the ECH capacities of $X_\Omega$, which is also valid in some (and conjecturally all) cases when $\Omega$ is convex and does touch the axes. For details, see e.g. section 4.3 of the ECH lecture notes.
Dan, Vinicius, Keon, and I have been discussing how to compute the ECH capacities of $X_\Omega$ when $\Omega$ is star-shaped but not convex. Here is one bit of progress:
Theorem. Let $0 < a < 1$ and let $X=B(1)\cup E(\infty,a)\subset {\mathbb R}^4$. That is, $X=X_\Omega$ where $\Omega$ is the union of the triangle $0\le \mu_1,\mu_2,\mu_1+\mu_2\le 1$ and the horizontal strip $0\le \mu_1$, $0\le \mu_2\le a$. Then the ECH capacities of $X$ are given by
$c_k(X) = \max\left\{d+a\left(k-\frac{d(d+1)}{2}\right)\mid d\ge 0, d(d+1)\le 2k\right\}.$
Here $d$ is an integer. (This was conjectured in this earlier post, except that there I made a typo mixing up $d$ and $d+1$.) This gives some obstructions to symplectic embeddings into $X$ which we can explore later. But first, here is the proof of the theorem:
Proof. Step 1: setup. By the definition of ECH capacities, $c_k(X) = \sup\{c_k(X',\omega)\}$ where $(X',\omega)$ is a Liouville domain which can be symplectically embedded into the interior of $X$. Given $\epsilon>0$, let $X_\epsilon = B(1) \cup E(\epsilon^{-1},a)$. Since the $X_\epsilon$ are nested domains whose union is $X$, it follows from the monotonicity axiom for ECH capacities that
$c_k(X) = \lim_{\epsilon\to 0+} c_k(X_\epsilon).$
So we need to show that
$\lim_{\epsilon\to 0+} c_k(X_\epsilon) = \max\left\{d+a\left(k-\frac{d(d+1)}{2}\right)\mid d\ge 0, d(d+1)\le 2k \right\}.$
To do so, fix $k$ and assume that $\epsilon$ is small with respect to $k$.
Step 2: the lower bound. We first show that
$c_k(X_\epsilon) \ge \max\left\{d+a\left(k-\frac{d(d+1)}{2}\right)\mid d\ge 0, d(d+1)\le 2k \right\}.$
By the monotonicity and disjoint union axioms for ECH capacities, we know that
$c_k(X_\epsilon) \ge \max\{c_{k_1}(B(1)) + c_{k_2}(X_\epsilon\setminus B(1)) \mid k_1+k_2=k\}.$
From the computation of the ECH capacities of an ellipsoid, we know that $c_{k_1}(B(1)) = d$ where $d$ is the unique nonnegative integer such that
$\frac{d(d+1)}{2} \le k_1 \le \frac{d(d+3)}{2}.$
Also, $X_\epsilon\setminus B(1)$ is affine equivalent to a right triangle of height $a$, so it symplectically embeds into an ellipsoid with axis $a$, and in fact has the same ECH capacities (see Exercise 4.16(b) in the ECH lecture notes), which means that $c_{k_2}(X_\epsilon\setminus B(1))=ak_2$, provided that $k_2\le k$ and $\epsilon$ is sufficiently small with respect to $k$. In computing the above maximum, we can restrict attention to the case where $k_1=d(d+1)/2$ for some nonnegative integer $d$ (since otherwise we can decrease $k_1$ for free and profitably increase $k_2$). The desired lower bound on $c_k(X_\epsilon)$ follows.
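As a numerical sanity check of this step (not part of the proof), brute-forcing the maximum of $c_{k_1}(B(1)) + a k_2$ over all splits $k_1+k_2=k$ reproduces the closed formula from the theorem, confirming that restricting to triangular $k_1=d(d+1)/2$ loses nothing:

```python
def c_ball(k):
    """ECH capacities of the ball B(1): c_k = d, where d is the unique
    nonnegative integer with d(d+1)/2 <= k <= d(d+3)/2."""
    d = 0
    while d * (d + 3) // 2 < k:
        d += 1
    return d

def c_formula(k, a):
    """Closed formula from the theorem: max over integers d >= 0 with
    d(d+1) <= 2k of d + a*(k - d(d+1)/2)."""
    return max(d + a * (k - d * (d + 1) / 2)
               for d in range(k + 1) if d * (d + 1) <= 2 * k)

def c_split(k, a):
    """Brute-force lower bound: max of c_{k1}(B(1)) + a*k2 over k1 + k2 = k."""
    return max(c_ball(k1) + a * (k - k1) for k1 in range(k + 1))

# The two computations agree for a sample of parameters 0 < a < 1.
assert all(abs(c_formula(k, a) - c_split(k, a)) < 1e-9
           for a in (0.1, 0.5, 0.9) for k in range(50))
print("split maximum matches the closed formula for the sampled a, k")
```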
Step 3: the upper bound. We now prove that $c_k(X_\epsilon)$ is less than or equal to the claimed value. To calculate $c_k(X_\epsilon)$, we need to approximate $X_\epsilon$ by a Liouville domain for which the contact form on the boundary is nondegenerate. We have $X_\epsilon = X_\Omega$ for a certain domain $\Omega$. Let $\Omega'$ be obtained from $\Omega$ by slightly increasing the slope of the edge with slope $-1$, and rounding the concave corner (preserving concavity). Let $X_\epsilon'$ be obtained from $X_{\Omega'}$ by perturbing so that the Morse-Bott circles of Reeb orbits on the boundary (up to some large symplectic action $L>k$) split into elliptic and hyperbolic Reeb orbits. To complete the proof, we will show that if $\alpha$ is an ECH generator for $\partial X_\epsilon'$ with $I(\alpha)=2k$, then the symplectic action of $\alpha$ satisfies
${\mathcal A}(\alpha) \le \max\left\{d+a\left(k-\frac{d(d+1)}{2}\right)\mid d\ge 0, d(d+1)\le 2k \right\},$
up to some small error depending on the size of the perturbation from $X_\epsilon$ to $X_\epsilon'$.
The embedded Reeb orbits in the boundary of $X_\epsilon'$, up to action $L$, are given as follows. There is an elliptic orbit corresponding to the upper left corner of the domain $\Omega'$; we denote this orbit by $e_{1,0}$, and it has symplectic action approximately $1$. For every pair of relatively prime positive integers $m,n$ with $0 < \cdots$, there is an elliptic orbit $e_{m,n}$ and a hyperbolic orbit $h_{m,n}$. These arise from the point on the boundary of $\Omega'$ where a tangent vector to the boundary of $\Omega'$ is parallel to the vector $(m,-n)$. They both have symplectic action approximately $am+(1-a)n$. (For the calculation of the Reeb orbits and their symplectic actions in the boundaries of general toric domains, see Section 4.3 in the ECH lecture notes.)
Now let $\alpha$ be an ECH generator. This is a formal product of orbits $e_{m,n}$ and $h_{m,n}$ where no $h_{m,n}$ factor may be repeated. Let $M$ denote the sum over all factors of the $m$ subscript, and let $N$ denote the sum over all factors of the $n$ subscript. Write $M=M_0+M_1$ where $M_0$ is the exponent of $e_{1,0}$. Then by the previous paragraph, the symplectic action of $\alpha$ is given (up to some small error) by
${\mathcal A}(\alpha) = M_0 + aM_1 + (1-a)N.$
To describe the ECH index of $\alpha$, let $\Lambda$ denote the following polygonal path in the plane. The path $\Lambda$ starts at the point $(0,M_0+N)$, and the first edge goes to $(M_0,N)$. After that, for each $e_{m,n}$ or $h_{m,n}$ factor in $\alpha$ with $n>0$, there is an edge in $\Lambda$ with edge vector $(m,-n)$. These edges are arranged in order of increasing slope. These edges take us to the point $(M,0)$. The path $\Lambda$ then goes horizontally to $(0,0)$, and finally vertically back to the starting point $(0,M_0+N)$.
Let $L(\Lambda)$ denote the number of lattice points enclosed by $\Lambda$, not including lattice points on the “upper boundary”, namely the part of the boundary from $(0,M_0+N)$ to $(M,0)$. I claim that the ECH index of $\alpha$ is given by
$I(\alpha) = 2L(\Lambda)+ h$
where $h$ denotes the number of hyperbolic factors in $\alpha$. I will not prove this here because it would take a lot of space, but it is similar to the calculation of the ECH index for the standard contact form on $T^3$.
We need to show that if $I(\alpha)=2k$ then
${\mathcal A}(\alpha) \le \max\left\{d+a\left(k-\frac{d(d+1)}{2}\right)\mid d\ge 0, d(d+1)\le 2k \right\}.$
We will in fact show that
${\mathcal A}(\alpha) \le d+a\left(k-\frac{d(d+1)}{2}\right)$
with $d=M_0+N$. (The next paragraph will show that $d(d+1)\le 2k$.)
To see this, observe that $\Lambda$, not including the upper boundary, encloses all of the lattice points in the triangle with vertices $(0,0),(d-1,0)$, and $(0,d-1)$, and there are $d(d+1)/2$ of these. In addition, the line segment from $(M_0,N)$ to $(d,0)$, and the line segment from $(d,0)$ to $(M,0)$, include an additional $\max\{M_1-1,0\}$ lattice points which are enclosed by $\Lambda$ and not on the upper boundary. This gives a lower bound on the lattice point count $L(\Lambda)$, from which it follows that
$I(\alpha)\ge d(d+1)+2\max\{0,M_1-1\}.$
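The count of $d(d+1)/2$ lattice points in the closed triangle with vertices $(0,0)$, $(d-1,0)$, and $(0,d-1)$ used above is easy to confirm by brute force (a quick sanity check, not part of the source):

```python
def triangle_lattice_points(d):
    """Count lattice points (i, j) with i, j >= 0 and i + j <= d - 1,
    i.e. those in the closed triangle with vertices (0,0), (d-1,0), (0,d-1)."""
    return sum(1 for i in range(d) for j in range(d) if i + j <= d - 1)

# Matches the closed form d(d+1)/2 for every d checked.
for d in range(1, 10):
    assert triangle_lattice_points(d) == d * (d + 1) // 2
```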
In the inequality that we want to prove, $k=I(\alpha)/2$, so it is enough to show that
${\mathcal A}(\alpha) \le d+a\max\{0,M_1-1\}.$
By our choice of $d$ and our previous calculation of ${\mathcal A}(\alpha)$, this inequality is
$M_1-N\le \max\{0,M_1-1\}.$
This says that if $M_1=0$ then $N\ge 0$, and if $M_1>0$ then $N>0$. This follows immediately from the definition of $N$, and so we have proved the desired inequality.
The above calculation shows that an ECH generator of index $2k$ with maximum symplectic action has the form $e_{1,0}^d$ with action $d$, or $e_{1,0}^{d-1}e_{m,1}$ with $m=k-d(d+1)/2+1>0$ which has action $d+a(k-d(d+1)/2)$. The choice of $d$ that gives the maximum symplectic action depends on $a$.
# On Totally Real 3-Dimensional Submanifolds of the Nearly Kaehler 6-Sphere
F. Dillen, B. Opozda, L. Verstraelen and L. Vrancken
Proceedings of the American Mathematical Society
Vol. 99, No. 4 (Apr., 1987), pp. 741-749
DOI: 10.2307/2046486
Stable URL: http://www.jstor.org/stable/2046486
Page Count: 9
## Abstract
Let M be a compact 3-dimensional totally real submanifold of the nearly Kaehler 6-dimensional unit sphere. Let K be the sectional curvature function of M. Then, if $K > 1/16$, $M$ is a totally geodesic submanifold (and $K \equiv 1$).
# menpofit.io¶
If you make use of one of menpofit's pre-trained models, you will find that the type that is provided to you is the PickleWrappedFitter. See its documentation to understand its purpose and how you can effectively use it.
# Tag Info
2
You can do this using the atbegshi package, together with an \if... statement where you set the \if... equal to true at the start of the option and then to false at the end using \setlist. Here's a MWE. I have used the same continuation message that you used above. \documentclass{book} \usepackage{enumitem} \usepackage{atbegshi} ...
4
You can try with align key of enumitem and define your own align key: \SetLabelAlign{myright}{strut\smash{\parbox[t]{\labelwidth}{\raggedleft#1}}} and use it as align=myright. \documentclass{article} \usepackage{enumitem,showframe} \SetLabelAlign{myright}{strut\smash{\parbox[t]{\labelwidth}{\raggedleft#1}}} \newlist{keywordlist}{description}{1} ...
5
\include records the current value of enumi (3 here) but not that enumitem resume wants to use it, adding a couple of lines to the end of the included (or not included) file fixes that. If you process the entire file then uncomment the \includeonly you will get a one page document numbered page 2 with the enumeration numbered 4,5,6. \documentclass{article} ...
4
Something like that? It was made using convenient options from the enumitem package, that may be set globally in the preamble: \documentclass[12pt]{article} \usepackage[utf8]{inputenc} \setlength{\parindent}{0cm} \usepackage[shortlabels]{enumitem} \usepackage{lipsum} \begin{document} \begin{enumerate}[label = Test~\arabic*.,wide = 0pt, leftmargin = ...
2
Small note: a compilable example is always preferred rather than just a snippet of code. \documentclass{scrartcl} \usepackage{etoolbox} \usepackage{enumitem} \AtBeginEnvironment{enumerate}{\usekomafont{enumerate}} \AtBeginEnvironment{itemize}{\usekomafont{itemize}} \AtBeginEnvironment{description}{\usekomafont{description}} ...
2
Use dedicated environments for both questions and solutions. This is easy by using enumitems \newlist and \setlist. This has an extra bonus: your code will have more semantic markup: \newlist{question}{enumerate}{1} \newlist{solution}{enumerate}{1} \setlist[question,solution]{label=\arabic*.} For your list of choices I'd also define a new list: ...
3
Perhaps something like this: \documentclass{article} \usepackage{enumitem, kantlipsum} \setlist[enumerate]{wide, labelwidth=\parindent} \setlist[enumerate,1]{labelindent=0pt, labelwidth=\parindent, label=\arabic*.} \setlist[enumerate,2]{wide=.825cm, label=\alph*.} \setlist[enumerate,3]{wide=1.25cm, label=(\arabic*)} \begin{document} \kant[6] ...
1
I just experienced the same problem (with Texlive 2013 and the IEEE template V3). But actually your linked post pretty much says all you need to solve the problem. Since the \labelindent command exists for legacy reasons in the IEEE template, you can simply "disable" it by adding the following before importing the enumitem package: \let\labelindent\relax
3
\newlist is primarily for defining new sets of enumerated/itemized lists. For instance \newlist{exampleEnumeration} \setlist[exampleEnumeration]{leftmargin=*, itemsep=2pt, parsep=0pt} would be illegal, because you must specify label (and optionally ref). If you don't specify the level, the same label will be used at all levels. So you should type, say, ...
0
I would use \setbeamercolor to change the color of beamer elements: % defining color of itemize. \setbeamercolor{itemize item}{fg=yellow} \setbeamercolor{itemize subitem}{fg=orange} % defining shape of items \setbeamertemplate{itemize item}{\usebeamercolor[fg]{itemize item}$\blacksquare$} \setbeamertemplate{itemize subitem}{\usebeamercolor[fg]{itemize ...
1
A variant of the preceding solution that ensures both text of the items being aligned, and labels being left aligned on the ambient left margin (or at a \parindent distance if you wish), and a minimally computed label width: \documentclass{article} \usepackage[utf8]{inputenc} \usepackage{fourier} \usepackage[shortlabels]{enumitem} \parindent = 1em ...
4
I'd recommend using the enumitem package the option align=left does the job: Notes: The border is from the showframe package and shows the page margins. It is not needed in your actual use case. Code: \documentclass{article} \usepackage{showframe} \usepackage{enumitem} \begin{document} \begin{enumerate}[align=left] \item[1)] First item, ...
4
Here an \abox macro with a fixed width is defined. \def\abox#1{\leavevmode\hbox to 1cm{#1 \hfill}} % left aligned \def\abox#1{\leavevmode\hbox to 1cm{\hfill #1 \hfill}} % center aligned Code \documentclass{article} \begin{document} ----- Left aligned \def\abox#1{\leavevmode\hbox to 1cm{#1 \hfill}} \begin{enumerate} \item[\abox{$1)$}] First item, ...
Top 50 recent answers are included
## Touching pennies
Stump your fellow simians.
ceptimus
Posts: 1464
Joined: Wed Jun 02, 2004 11:04 pm
Location: UK
### Touching pennies
How many pennies can be arranged so that each penny touches every other penny? Don't just give a number; a description and/or diagram is required.
Grammatron
Posts: 37447
Joined: Tue Jun 08, 2004 1:21 am
Location: Los Angeles, CA
### Re: Touching pennies
Abdul Alhazred wrote:
ceptimus wrote:How many pennies can be arranged so that each penny touches every other penny. Don't just give a number; a description and/or diagram is required.
Four.
Three pennies flat on the table tangent to one another and another penny laid on top.
Do I win?
In that case two would work as well.
roger
Posts: 389
Joined: Wed Jun 09, 2004 6:15 pm
Location: USA
Is this a stable configuration that can exist freestanding?
Grammatron
Posts: 37447
Joined: Tue Jun 08, 2004 1:21 am
Location: Los Angeles, CA
Also, wouldn't three work as well? Two pennies touch each other, with one on top of the two touching both? I think that there are far more possibilities based on the given criteria.
ceptimus
Posts: 1464
Joined: Wed Jun 02, 2004 11:04 pm
Location: UK
Grammatron wrote:Also, wouldn't three work as well. Two pennies touch each other with one on top of the two touching both? I think that there are far more possibilities that are based on the given criteria.
Yes. Two, three and four are all possible, as Abdul and you have described. But can you get a group of five or more pennies to each touch all the others in the group?
roger wrote:Is this a stable configuration that can exist freestanding?
Not necessarily. To beat Abdul's four (assuming that is possible), you may have to use some supports. You can embed the pennies in clay, or a clear plastic resin, to support them in the desired configuration.
roger
Posts: 389
Joined: Wed Jun 09, 2004 6:15 pm
Location: USA
may I heat the pennies? really, really, really hot? :D
roger
Posts: 389
Joined: Wed Jun 09, 2004 6:15 pm
Location: USA
okay, I have 5 in an unstable configuration.
Grammatron
Posts: 37447
Joined: Tue Jun 08, 2004 1:21 am
Location: Los Angeles, CA
roger wrote:okay, I have 5 in an unstable configuration.
I think 6 is the limit.
Tanja
Posts: 53
Joined: Tue Jun 08, 2004 8:02 am
Location: London
If we have three pennies lying flat on the table touching each other, then surely we can put another three on top of them and have six?
ceptimus
Posts: 1464
Joined: Wed Jun 02, 2004 11:04 pm
Location: UK
Tanja wrote:If we have three pennies lying flat on the table touching each other, then surely we can put another three on top of them and have six?
Where four objects all touch at a single point, it's generally accepted that they don't all touch each other. Here is a poor ASCII diagram showing a vertical cross-section of four pennies, A and B on the top layer, C and D below:
Code: Select all
-------+-------
A | B
-------+-------
C | D
-------+-------
Now everyone accepts that the pairs AB, AC, BD and CD touch. The problem is with AD and BC - do either or both of those pairs touch? Obviously one of the pairs can be made to touch easily, by slightly staggering the arrangement:
Code: Select all
--------+------
A | B
-------++------
C | D
-------+-------
So now AD is also a touching pair, but BC clearly isn't. I would argue that in the top diagram, either the pair AD is touching or the pair BC is, but not both at the same time. Does that make it clearer? Perhaps I misunderstood what you meant?
Last edited by ceptimus on Wed Sep 15, 2004 1:35 pm, edited 1 time in total.
Tanja
Posts: 53
Joined: Tue Jun 08, 2004 8:02 am
Location: London
Well, you did not misunderstand me, I just assumed that pennies that touch in one point actually do touch.
ceptimus
Posts: 1464
Joined: Wed Jun 02, 2004 11:04 pm
Location: UK
If A and D touch, wouldn't you agree that this forms a barrier, preventing B from touching C? Obviously, if we consider the third dimension, they might touch elsewhere, but in a two-dimensional cross-section, I don't think it's possible.
Tanja
Posts: 53
Joined: Tue Jun 08, 2004 8:02 am
Location: London
I suppose you are right strictly speaking. I suppose I took the stance of "they are close enough".
If you think of the cross-section as a place where four countries meet, would you say that all the countries border each other, or would you say that countries AD and BC don't border each other, or that A and D do have a border, but they are separated by one millimetre of countries B and C? Or would those countries end up with one square millimetre of disputed territory?
Oops, late for work....might continue thinking about it later
ManfredVonRichthoffen
Posts: 285
Joined: Sun Jun 13, 2004 3:24 pm
pennies aren't square
ceptimus
Posts: 1464
Joined: Wed Jun 02, 2004 11:04 pm
Location: UK
If you look at them edge-on they are 'square'. More accurately, they have a rectangular cross-section (approximately).
exarch
Posts: 897
Joined: Wed Jun 02, 2004 10:51 pm
Location: Beyond redemption
Grammatron wrote:
roger wrote:okay, I have 5 in an unstable configuration.
I think 6 is the limit.
I think 5 is the limit ...
Without bending them that is.
# AN INTRODUCTION TO FOURIER SERIES AND INTEGRALS ROBERT T SEELEY
An Introduction to Fourier Series and Integrals. A compact, sophomore-to-senior-level guide, Dr. Seeley's text introduces Fourier series in the way that Joseph Fourier himself used them: as solutions of the heat equation in a disk. Emphasizing the relationship between physics and mathematics, Dr.
An Introduction to Fourier Series and Integrals by Robert
An Introduction to Fourier Series and Integrals (Dover
Reviews: 5 · Format: Paperback · Author: Robert T. Seeley
An Introduction to Fourier Series and Integrals - Dover
The chapter on Fourier transforms derives analogs of the results obtained for Fourier series, which the author applies to the analysis of a problem of heat conduction. Numerous computational and theoretical problems appear throughout the text.
Published in: American Mathematical Monthly · 1968 · Authors: Robert T. Seeley
An Introduction to Fourier Series and Integrals
This is a concise and mathematically rigorous introduction to Fourier analysis using Riemann integrals and some physical motivation. The exposition is driven by the Dirichlet problem: determining the steady-state heat distribution in a disk (Fourier series) or a half-plane (Fourier integrals) given the temperature on the boundary.
An Introduction to Non-Harmonic Fourier Series, Revised Edition is an update of a widely known and highly respected classic textbook. Throughout the book, material has also been added on recent developments, including stability theory, the frame radius, and applications to signal analysis and the control of partial differential equations.
An Introduction to Fourier Analysis - BGU Math
1 Infinite Sequences, Infinite Series and Improper Integrals
1.1 Introduction
The concepts of infinite series and improper integrals, i.e. entities represented by symbols such as $\sum_{n=-\infty}^{\infty} a_n$, $\sum_{n=-\infty}^{\infty} f_n(x)$, and $\int_{-\infty}^{\infty} f(x)\,dx$, are central to Fourier Analysis. (We assume the reader is already at least somewhat familiar with these.)
Published in: Mathematics of Computation · 1963 · Authors: Joseph Bram · R. D. Stuart · About: Fourier analysis
Fourier Series - Introduction - Lira Eletrônica
Fourier Series - Introduction. Fourier series are used in the analysis of periodic functions. … If half the range of integration is L, then the Fourier coefficients are given by … where n = 1, 2, 3, … NOTE: Some textbooks use … and then modify the series appropriately.
Introduction to Fourier Series - Purdue University
The Basics · Fourier series · Examples
Fourier Series Remarks: To find a Fourier series, it is sufficient to calculate the integrals that give the coefficients $a_0$, $a_n$, and $b_n$ and plug them in to the big series formula, equation (2.1) above.
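As a concrete instance of the procedure these excerpts describe — compute the coefficients, then plug them into the series — here is a hedged Python sketch for the odd square wave ($f = -1$ on $(-\pi,0)$, $f = +1$ on $(0,\pi)$), for which all $a_n$ vanish and $b_n = 4/(\pi n)$ for odd $n$ (function name is illustrative, not from any of the texts above):

```python
import math

def square_wave_partial_sum(x, terms):
    """Partial Fourier sum of the odd square wave on [-pi, pi]:
    f = -1 on (-pi, 0), f = +1 on (0, pi).  Only odd sine terms survive,
    with b_n = 4/(pi*n); all a_n are zero."""
    total = 0.0
    for n in range(1, 2 * terms, 2):  # n = 1, 3, 5, ...
        total += (4 / (math.pi * n)) * math.sin(n * x)
    return total

# At x = pi/2 the wave equals 1; the partial sums converge toward it.
print(square_wave_partial_sum(math.pi / 2, 200))  # close to 1.0
```

With 200 odd terms the value at $x = \pi/2$ is within a few thousandths of 1, the alternating-series error bound $4/(401\pi)$.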
Introduction to the theory of Fourier's series and
Introduction to the theory of Fourier's series and integrals, by Carslaw, H. S. Publication date: 1950. Topics: Integrals, Definite; Definite integrals; Fourier series; Fourier, Séries de; Intégrales définies; Fourier-analyse. Publisher: New York: Dover Publications.
Introduction to the Theory of Fourier's Series and
As an introductory explanation of the theory of Fourier's series, this clear, detailed text is outstanding. The third revised edition, which is here reprinted unabridged, contains tests for uniform convergence of series, a thorough treatment of term-by-term integration and the second theorem of mean value, enlarged sets of examples on infinite series and integrals, and a section dealing with …
Cited by: 60 · Author: Horatio Scott Carslaw · 3.4/5 (3) · Publish Year: 1950
# chebyshevT
Chebyshev polynomials of the first kind
## Description
chebyshevT(n,x) represents the nth degree Chebyshev polynomial of the first kind at the point x.
## Examples
### First Five Chebyshev Polynomials of the First Kind
Find the first five Chebyshev polynomials of the first kind for the variable x.
syms x
chebyshevT([0, 1, 2, 3, 4], x)
ans =
[ 1, x, 2*x^2 - 1, 4*x^3 - 3*x, 8*x^4 - 8*x^2 + 1]
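The same table can be reproduced outside MATLAB from the three-term recursion $T_n = 2xT_{n-1} - T_{n-2}$. A small Python sketch using coefficient lists, lowest degree first (names are illustrative, not part of the toolbox):

```python
def chebyshev_T(n):
    """Coefficients of T_n (lowest degree first) via the recursion
    T_0 = 1, T_1 = x, T_n = 2*x*T_{n-1} - T_{n-2}."""
    t_prev, t_curr = [1], [0, 1]  # T_0 and T_1
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        shifted = [0] + [2 * c for c in t_curr]                 # 2*x*T_{k}
        padded = t_prev + [0] * (len(shifted) - len(t_prev))    # align degrees
        t_prev, t_curr = t_curr, [a - b for a, b in zip(shifted, padded)]
    return t_curr

print(chebyshev_T(4))  # -> [1, 0, -8, 0, 8]
```

The result `[1, 0, -8, 0, 8]` encodes 8*x^4 - 8*x^2 + 1, matching the last entry of the MATLAB output above.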
### Chebyshev Polynomials for Numeric and Symbolic Arguments
Depending on its arguments, chebyshevT returns floating-point or exact symbolic results.
Find the value of the fifth-degree Chebyshev polynomial of the first kind at these points. Because these numbers are not symbolic objects, chebyshevT returns floating-point results.
chebyshevT(5, [1/6, 1/4, 1/3, 1/2, 2/3, 3/4])
ans =
0.7428 0.9531 0.9918 0.5000 -0.4856 -0.8906
Find the value of the fifth-degree Chebyshev polynomial of the first kind for the same numbers converted to symbolic objects. For symbolic numbers, chebyshevT returns exact symbolic results.
chebyshevT(5, sym([1/6, 1/4, 1/3, 1/2, 2/3, 3/4]))
ans =
[ 361/486, 61/64, 241/243, 1/2, -118/243, -57/64]
### Evaluate Chebyshev Polynomials with Floating-Point Numbers
Floating-point evaluation of Chebyshev polynomials by direct calls of chebyshevT is numerically stable. However, first computing the polynomial using a symbolic variable, and then substituting variable-precision values into this expression can be numerically unstable.
Find the value of the 500th-degree Chebyshev polynomial of the first kind at 1/3 and vpa(1/3). Floating-point evaluation is numerically stable.
chebyshevT(500, 1/3)
chebyshevT(500, vpa(1/3))
ans =
0.9631
ans =
0.963114126817085233778571286718
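The stable direct evaluation mirrors the defining identity T_n(x) = cos(n*arccos(x)) for |x| ≤ 1. A Python sketch of the same computation (not the MATLAB implementation itself):

```python
import math

def chebyshev_T_value(n, x):
    """Numerically stable T_n(x) for |x| <= 1 via cos(n*arccos(x))."""
    return math.cos(n * math.acos(x))

# Agrees with chebyshevT(500, 1/3) = 0.963114... shown above.
print(chebyshev_T_value(500, 1 / 3))
```

Expanding the same polynomial into monomial coefficients and substituting, by contrast, suffers the catastrophic cancellation demonstrated below.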
Now, find the symbolic polynomial T500 = chebyshevT(500, x), and substitute x = vpa(1/3) into the result. This approach is numerically unstable.
syms x
T500 = chebyshevT(500, x);
subs(T500, x, vpa(1/3))
ans =
-3293905791337500897482813472768.0
Approximate the polynomial coefficients by using vpa, and then substitute x = sym(1/3) into the result. This approach is also numerically unstable.
subs(vpa(T500), x, sym(1/3))
ans =
1202292431349342132757038366720.0
### Plot Chebyshev Polynomials of the First Kind
Plot the first five Chebyshev polynomials of the first kind.
syms x y
fplot(chebyshevT(0:4,x))
axis([-1.5 1.5 -2 2])
grid on
ylabel('T_n(x)')
legend('T_0(x)','T_1(x)','T_2(x)','T_3(x)','T_4(x)','Location','Best')
title('Chebyshev polynomials of the first kind')
## Input Arguments
n — Degree of the polynomial, specified as a nonnegative integer, symbolic variable, expression, or function, or as a vector or matrix of numbers, symbolic numbers, variables, expressions, or functions.
x — Evaluation point, specified as a number, symbolic number, variable, expression, or function, or as a vector or matrix of numbers, symbolic numbers, variables, expressions, or functions.
### Chebyshev Polynomials of the First Kind
• Chebyshev polynomials of the first kind are defined as Tn(x) = cos(n*arccos(x)).
These polynomials satisfy the recursion formula
$T\left(0,x\right)=1,\quad T\left(1,x\right)=x,\quad T\left(n,x\right)=2\,x\,T\left(n-1,x\right)-T\left(n-2,x\right)$
• Chebyshev polynomials of the first kind are orthogonal on the interval -1 ≤ x ≤ 1 with respect to the weight function $w\left(x\right)=\frac{1}{\sqrt{1-{x}^{2}}}$.
• Chebyshev polynomials of the first kind are special cases of the Jacobi polynomials
$T\left(n,x\right)=\frac{{2}^{2n}{\left(n!\right)}^{2}}{\left(2n\right)!}P\left(n,-\frac{1}{2},-\frac{1}{2},x\right)$
and Gegenbauer polynomials
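The orthogonality relation above can be verified numerically: substituting x = cos(t) turns the weighted integral into the integral of cos(m*t)*cos(n*t) over [0, pi], which is 0 for m ≠ n and pi/2 for m = n > 0. A quick Python check (illustrative, not MathWorks code):

```python
import math

def weighted_inner_product(m, n, steps=100000):
    """Approximate the Chebyshev inner product of T_m and T_n with weight
    1/sqrt(1 - x^2): after x = cos(t) it becomes the integral of
    cos(m*t)*cos(n*t) over [0, pi], computed here by the midpoint rule."""
    h = math.pi / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * h
        total += math.cos(m * t) * math.cos(n * t) * h
    return total

print(weighted_inner_product(2, 3))  # near 0: orthogonal
print(weighted_inner_product(3, 3))  # near pi/2
```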
## Tips
• chebyshevT returns floating-point results for numeric arguments that are not symbolic objects.
• chebyshevT acts element-wise on nonscalar inputs.
• At least one input argument must be a scalar or both arguments must be vectors or matrices of the same size. If one input argument is a scalar and the other one is a vector or a matrix, then chebyshevT expands the scalar into a vector or matrix of the same size as the other argument with all elements equal to that scalar.
## References
[1] Hochstrasser, U. W. “Orthogonal Polynomials.” Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. (M. Abramowitz and I. A. Stegun, eds.). New York: Dover, 1972.
[2] Cohl, Howard S., and Connor MacKenzie. “Generalizations and Specializations of Generating Functions for Jacobi, Gegenbauer, Chebyshev and Legendre Polynomials with Definite Integrals.” Journal of Classical Analysis, no. 1 (2013): 17–33. https://doi.org/10.7153/jca-03-02.
## Version History
Introduced in R2014b
# 6.2: Berger v. NY 388 US 41 (1967)
## U.S. Supreme Court
### Berger v. New York, 388 U.S. 41 (1967)
Berger v. New York
No. 615
Argued April 13, 1967
Decided June 12, 1967
388 U.S. 41
CERTIORARI TO THE COURT OF APPEALS OF NEW YORK
Syllabus
Petitioner was indicted and convicted of conspiracy to bribe the Chairman of the New York State Liquor Authority based upon evidence obtained by eavesdropping. An order pursuant to § 813-a of the N.Y.Code of Crim.Proc. permitting the installation of a recording device in an attorney’s office for a period of 60 days was issued by a justice of the State Supreme Court, after he was advised of recorded interviews between a complainant and first an Authority employee and later the attorney in question. Section 813-a authorizes the issuance of an “ex parte order for eavesdropping” upon “oath or affirmation of a district attorney, or of the attorney general or of an officer above the rank of sergeant of any police department.” The oath must state
“that there is reasonable ground to believe that evidence of a crime may be thus obtained, and particularly describing the person or persons whose communications . . . are to be overheard or recorded and the purpose thereof.”
The order must specify the duration of the eavesdrop, which may not exceed two months, unless extended. On the basis of leads obtained from this eavesdrop, a second order, also for a 60-day period, permitting an installation elsewhere was issued. After two weeks of eavesdropping a conspiracy, in which petitioner was a “go-between,” was uncovered. The New York courts sustained the statute against constitutional challenge.
Held: The language of § 813-a is too broad in its sweep resulting in a trespassory intrusion into a constitutionally protected area, and is, therefore, violative of the Fourth and Fourteenth Amendments. Pp. 388 U. S. 45-64.
(a) The Fourth Amendment’s protections include “conversation,” and the use of electronic devices to capture it was a “search” within the meaning of that Amendment. P. 388 U. S. 51.
(b) New York’s statute authorizes eavesdropping without requiring belief that any particular offense has been or is being committed, nor that the “property” sought, the conversations, be particularly described. Pp. 388 U. S. 55-58.
(c) The officer is given a roving commission to “seize” any and all conversations, by virtue of the statute’s failure to describe with particularity the conversations sought. P. 388 U. S. 59.
(d) Authorization to eavesdrop for a two-month period is equivalent to a series of searches and seizures pursuant to a single showing of probable cause, and avoids prompt execution. P. 388 U. S. 59.
(e) The statute permits extensions of the original two-month period on a mere showing that such extension is “in the public interest,” without a present showing of probable cause for the continuation of the eavesdrop. P. 388 U. S. 59.
(f) The statute places no termination date on the eavesdrop once the conversation sought is seized, but leaves it to the officer’s discretion. Pp. 388 U. S. 59-60.
(g) While there is no requirement for notice in view of the necessity for secrecy, the statute does not overcome this defect by demanding the showing of exigent circumstances. P. 388 U. S. 60.
(h) The statute does not provide for a return on the warrant, thus leaving full discretion in the officer as to the use of the seized conversations of innocent as well as guilty parties. P. 388 U. S. 60.
18 N.Y.2d 638, 219 N.E.2d 295, reversed.
MR. JUSTICE CLARK delivered the opinion of the Court.
This writ tests the validity of New York’s permissive eavesdrop statute, N.Y.Code Crim.Proc. § 813-a, [Footnote 1] under the Fourth, Fifth, Ninth, and Fourteenth Amendments. The claim is that the statute sets up a system of surveillance which involves trespassory intrusions into private, constitutionally protected premises, authorizes “general searches” for “mere evidence,” [Footnote 2] and is an invasion of the privilege against self-incrimination. The trial court upheld the statute, the Appellate Division affirmed without opinion, 25 App.Div.2d 718, 269 N.Y.S.2d 368, and the Court of Appeals did likewise by a divided vote. 18 N.Y.2d 638, 219 N.E.2d 295. We granted certiorari, 385 U.S. 967 (1966). We have concluded that the language of New York’s statute is too broad in its sweep, resulting in a trespassory intrusion into a constitutionally protected area, and is, therefore, violative of the Fourth and Fourteenth Amendments. This disposition obviates the necessity for any discussion of the other points raised.
Berger, the petitioner, was convicted on two counts of conspiracy to bribe the Chairman of the New York State Liquor Authority. The case arose out of the complaint of one Ralph Pansini to the District Attorney’s office that agents of the State Liquor Authority had entered his bar and grill and without cause seized his books and records. Pansini asserted that the raid was in reprisal for his failure to pay a bribe for a liquor license. Numerous complaints had been filed with the District Attorney’s office charging the payment of bribes by applicants for liquor licenses. On the direction of that office, Pansini, while equipped with a “minifon” recording device, interviewed an employee of the Authority. The employee advised Pansini that the price for a license was $10,000, and suggested that he contact attorney Harry Neyer. Neyer subsequently told Pansini that he had worked with the Authority employee before and that the latter was aware of the going rate on liquor licenses downtown.

On the basis of this evidence, an eavesdrop order was obtained from a Justice of the State Supreme Court, as provided by § 813-a. The order permitted the installation, for a period of 60 days, of a recording device in Neyer’s office. On the basis of leads obtained from this eavesdrop, a second order permitting the installation, for a like period, of a recording device in the office of one Harry Steinman was obtained. After some two weeks of eavesdropping, a conspiracy was uncovered involving the issuance of liquor licenses for the Playboy and Tenement Clubs, both of New York City. Petitioner was indicted as “a go-between” for the principal conspirators, who, though not named in the indictment, were disclosed in a bill of particulars. Relevant portions of the recordings were received in evidence at the trial, and were played to the jury, all over the objection of the petitioner.
The parties have stipulated that the District Attorney “had no information upon which to proceed to present a case to the Grand Jury, or on the basis of which to prosecute” the petitioner except by the use of the eavesdrop evidence. Eavesdropping is an ancient practice which at common law was condemned as a nuisance. 4 Blackstone, Commentaries 168. At one time, the eavesdropper listened by naked ear under the eaves of houses or their windows or beyond their walls seeking out private discourse. The awkwardness and undignified manner of this method, as well as its susceptibility to abuse, was immediately recognized. Electricity, however, provided a better vehicle, and, with the advent of the telegraph, surreptitious interception of messages began. As early as 1862, California found it necessary to prohibit the practice by statute. Statutes of California 1862, p. 288, CCLXII. During the Civil War, General J. E. B. Stuart is reputed to have had his own eavesdropper along with him in the field whose job it was to intercept military communications of the opposing forces. Subsequently, newspapers reportedly raided one another’s news gathering lines to save energy, time, and money. Racing news was likewise intercepted and flashed to bettors before the official result arrived. The telephone brought on a new and more modern eavesdropper known as the “wiretapper.” Interception was made by a connection with a telephone line. This activity has been with us for three-quarters of a century. Like its cousins, wiretapping proved to be a commercial as well as a police technique. Illinois outlawed it in 1895, and, in 1905, California extended its telegraph interception prohibition to the telephone. Some 50 years ago, a New York legislative committee found that police, in cooperation with the telephone company, had been tapping telephone lines in New York despite an Act passed in 1895 prohibiting it. 
During prohibition days, wiretaps were the principal source of information relied upon by the police as the basis for prosecutions. In 1934, the Congress outlawed the interception without authorization and the divulging or publishing of the contents of wiretaps by passing § 605 of the Communications Act of 1934. [Footnote 3] New York, in 1938, declared by constitutional amendment that “[t]he right of the people to be secure against unreasonable interception of telephone and telegraph communications shall not be violated,” but permitted by ex parte order of the Supreme Court of the State the interception of communications on a showing of “reasonable ground to believe that evidence of crime” might be obtained. N.Y.Const. Art. I, § 12. Sophisticated electronic devices have now been developed (commonly known as “bugs”) which are capable of eavesdropping on anyone in almost any given situation. They are to be distinguished from “wiretaps,” which are confined to the interception of telegraphic and telephonic communications. Miniature in size (3/8″ x 3/8″ x 1/3″) — no larger than a postage stamp — these gadgets pick up whispers within a room and broadcast them half a block away to a receiver. It is said that certain types of electronic rays beamed at walls or glass windows are capable of catching voice vibrations as they are bounced off the surfaces. Since 1940, eavesdropping has become a big business. Manufacturing concerns offer complete detection systems which automatically record voices under almost any conditions by remote control. A microphone concealed in a book, a lamp, or other unsuspected place in a room, or made into a fountain pen, tie clasp, lapel button, or cuff link increases the range of these powerful wireless transmitters to a half mile. Receivers pick up the transmission with interference-free reception on a special wave frequency. 
And, of late, a combination mirror transmitter has been developed which permits not only sight but voice transmission up to 300 feet. Likewise, parabolic microphones, which can overhear conversations without being placed within the premises monitored, have been developed. See Westin, Science, Privacy, and Freedom: Issues and Proposals for the 1970’s, 66 Col.L.Rev. 1003, 1005-1010. As science developed these detection techniques, lawmakers, sensing the resulting invasion of individual privacy, have provided some statutory protection for the public. Seven States, California, Illinois, Maryland, Massachusetts, Nevada, New York, and Oregon, prohibit surreptitious eavesdropping by mechanical or electronic device. [Footnote 4] However, all save Illinois permit official court-ordered eavesdropping. Some 36 States prohibit wiretapping. [Footnote 5] But of these, 27 permit “authorized” interception of some type. Federal law, as we have seen, prohibits interception and divulging or publishing of the content of wiretaps without exception. [Footnote 6] In sum, it is fair to say that wiretapping, on the whole, is outlawed, except for permissive use by law enforcement officials in some States; while electronic eavesdropping is — save for seven States — permitted both officially and privately. And, in six of the seven States, electronic eavesdropping (“bugging”) is permissible on court order.

### III

The law, though jealous of individual privacy, has not kept pace with these advances in scientific knowledge. This is not to say that individual privacy has been relegated to a second-class position, for it has been held since Lord Camden’s day that intrusions into it are “subversive of all the comforts of society.” Entick v. Carrington, 19 How.St.Tr. 1029, 1066 (1765).
And the Founders so decided a quarter of a century later when they declared in the Fourth Amendment that the people had a right “to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures. . . .” Indeed, that right, they wrote, “shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.” Almost a century thereafter, this Court took specific and lengthy notice of Entick v. Carrington, supra, finding that its holding was undoubtedly familiar to, and “in the minds of, those who framed the Fourth Amendment. . . .” Boyd v. United States, 116 U. S. 616, 626-627 (1886). And after quoting from Lord Camden’s opinion at some length, Mr. Justice Bradley characterized it thus: “The principles laid down in this opinion affect the very essence of constitutional liberty and security. They reach farther than the concrete form of the case . . . ; they apply to all invasions on the part of the government and its employes of the sanctity of a man’s home and the privacies of life.” Boyd held unconstitutional an Act of the Congress authorizing a court of the United States to require a defendant in a revenue case to produce in court his private books, invoices, and papers or else the allegations of the Government were to be taken as confessed. The Court found that “the essence of the offense . . . [was] the invasion of this sacred right which underlies and constitutes the essence of Lord Camden’s judgment.” Ibid. The Act — the Court found — violated the Fourth Amendment in that it authorized a general search contrary to the Amendment’s guarantee. The Amendment, however, carried no criminal sanction, and, the federal statutes not affording one, the Court in 1914 formulated and pronounced the federal exclusionary rule in Weeks v. United States, 232 U. S. 383.
Prohibiting the use in federal courts of any evidence seized in violation of the Amendment, the Court held: “The effect of the Fourth Amendment is to put the courts of the United States . . . under limitations and restraints as to the exercise of such power . . . and to forever secure the people . . . against all unreasonable searches and seizures under the guise of law. This protection reaches all alike, whether accused of crime or not, and the duty of giving to it force and effect is obligatory upon all. . . . The tendency of those who execute the criminal laws of the country to obtain conviction by means of unlawful seizures . . . should find no sanction in the judgments of the courts which are charged at all times with the support of the Constitution and to which people of all conditions have a right to appeal for the maintenance of such fundamental rights.” At 232 U. S. 391-392.

### IV

The Court was faced with its first wiretap case in 1928, Olmstead v. United States, 277 U. S. 438. There, the interception of Olmstead’s telephone line was accomplished without entry upon his premises, and was, therefore, found not to be proscribed by the Fourth Amendment. The basis of the decision was that the Constitution did not forbid the obtaining of evidence by wiretapping unless it involved actual unlawful entry into the house. Statements in the opinion that a conversation passing over a telephone wire cannot be said to come within the Fourth Amendment’s enumeration of “persons, houses, papers, and effects” have been negated by our subsequent cases, as hereinafter noted. They found “conversation” was within the Fourth Amendment’s protections, and that the use of electronic devices to capture it was a “search” within the meaning of the Amendment, and we so hold.
In any event, Congress soon thereafter, and some say in answer to Olmstead, specifically prohibited the interception without authorization and the divulging or publishing of the contents of telephonic communications. And the Nardone cases, 302 U. S. 379 (1937) and 308 U. S. 338 (1939), extended the exclusionary rule to wiretap evidence offered in federal prosecutions. The first “bugging” case reached the Court in 1942 in Goldman v. United States, 316 U. S. 129. There, the Court found that the use of a detectaphone placed against an office wall in order to hear private conversations in the office next door did not violate the Fourth Amendment because there was no physical trespass in connection with the relevant interception. And in On Lee v. United States, 343 U. S. 747 (1952), we found that, since “no trespass was committed,” a conversation between Lee and a federal agent, occurring in the former’s laundry and electronically recorded, was not condemned by the Fourth Amendment. Thereafter, in Silverman v. United States, 365 U. S. 505 (1961), the Court found “that the eavesdropping was accomplished by means of an unauthorized physical penetration into the premises occupied by the petitioners.” At 365 U. S. 509. A spike a foot long with a microphone attached to it was inserted under a baseboard into a party wall until it made contact with the heating duct that ran through the entire house occupied by Silverman, making a perfect sounding board through which the conversations in question were overheard. Significantly, the Court held that its decision did “not turn upon the technicality of a trespass upon a party wall as a matter of local law. It is based upon the reality of an actual intrusion into a constitutionally protected area.” In Wong Sun v. United States, 371 U. S.
471 (1963), the Court for the first time specifically held that verbal evidence may be the fruit of official illegality under the Fourth Amendment along with the more common tangible fruits of unwarranted intrusion. It used these words: “The exclusionary rule has traditionally barred from trial physical, tangible materials obtained either during or as a direct result of an unlawful invasion. It follows from our holding in Silverman v. United States, 365 U. S. 505, that the Fourth Amendment may protect against the overhearing of verbal statements as well as against the more traditional seizure of ‘papers and effects.’” At 371 U. S. 485. And in Lopez v. United States, 373 U. S. 427 (1963), the Court confirmed that it had “in the past sustained instances of ‘electronic eavesdropping’ against constitutional challenge, when devices have been used to enable government agents to overhear conversations which would have been beyond the reach of the human ear. . . . It has been insisted only that the electronic device not be planted by an unlawful physical invasion of a constitutionally protected area.” At 373 U. S. 438-439. In this case, a recording of a conversation between a federal agent and the petitioner in which the latter offered the agent a bribe was admitted in evidence. Rather than constituting “eavesdropping,” the Court found that the recording “was used only to obtain the most reliable evidence possible of a conversation in which the Government’s own agent was a participant and which that agent was fully entitled to disclose.”

### V

It is now well settled that “the Fourth Amendment’s right of privacy has been declared enforceable against the States through the Due Process Clause of the Fourteenth” Amendment. Mapp v. Ohio, 367 U. S. 643, 655 (1961). “The security of one’s privacy against arbitrary intrusion by the police — which is at the core of the Fourth Amendment — is basic to a free society.” Wolf v. Colorado, 338 U. S. 25, 27 (1949).
And its “fundamental protections . . . are guaranteed . . . against invasion by the States.” Stanford v. Texas, 379 U. S. 476, 481 (1965). This right has most recently received enunciation in Camara v. Municipal Court, 387 U. S. 523. “The basic purpose of this Amendment, as recognized in countless decisions of this Court, is to safeguard the privacy and security of individuals against arbitrary invasions by governmental officials.” At 387 U. S. 528. Likewise, the Court has decided that, while the “standards of reasonableness” required under the Fourth Amendment are the same under the Fourteenth, they “are not susceptible of Procrustean application. . . .” Ker v. California, 374 U. S. 23, 33 (1963). We said there that “the reasonableness of a search is . . . [to be determined] by the trial court from the facts and circumstances of the case and in the light of the ‘fundamental criteria’ laid down by the Fourth Amendment and in opinions of this Court applying that Amendment.” Ibid. We, therefore, turn to New York’s statute to determine the basis of the search and seizure authorized by it upon the order of a state supreme court justice, a county judge or general sessions judge of New York County. Section 813-a authorizes the issuance of an “ex parte order for eavesdropping” upon “oath or affirmation of a district attorney, or of the attorney general or of an officer above the rank of sergeant of any police department of the state or of any political subdivision thereof. . . .” The oath must state “that there is reasonable ground to believe that evidence of crime may be thus obtained, and particularly describing the person or persons whose communications, conversations or discussions are to be overheard or recorded and the purpose thereof, and . . .
identifying the particular telephone number or telegraph line involved.” The judge “may examine on oath the applicant and any other witness he may produce and shall satisfy himself of the existence of reasonable grounds for the granting of such application.” The order must specify the duration of the eavesdrop — not exceeding two months unless extended — and “[a]ny such order together with the papers upon which the application was based, shall be delivered to and retained by the applicant as authority for the eavesdropping authorized therein.” While New York’s statute satisfies the Fourth Amendment’s requirement that a neutral and detached authority be interposed between the police and the public, Johnson v. United States, 333 U. S. 10, 14 (1948), the broad sweep of the statute is immediately observable. It permits the issuance of the order, or warrant for eavesdropping, upon the oath of the attorney general, the district attorney or any police officer above the rank of sergeant stating that “there is reasonable ground to believe that evidence of crime may be thus obtained. . . .” Such a requirement raises a serious probable cause question under the Fourth Amendment. Under it, warrants may only issue “but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.” Probable cause under the Fourth Amendment exists where the facts and circumstances within the affiant’s knowledge, and of which he has reasonably trustworthy information, are sufficient unto themselves to warrant a man of reasonable caution to believe that an offense has been or is being committed. Carroll v. United States, 267 U. S. 132, 162 (1925); Husty v. United States, 282 U. S. 694, 700-701 (1931); Brinegar v. United States, 338 U. S. 160, 175-176 (1949).
It is said, however, by the petitioner, and the State agrees, that the “reasonable ground” requirement of § 813-a “is undisputedly equivalent to the probable cause requirement of the Fourth Amendment.” This is indicated by People v. Grossman, 45 Misc.2d 557, 257 N.Y.S.2d 266, reversed on other grounds, 27 App.Div.2d 572, 276 N.Y.S.2d 168. Also see People v. Beshany, 43 Misc.2d 521, 252 N.Y.S.2d 110. While we have found no case on the point by New York’s highest court, we need not pursue the question further, because we have concluded that the statute is deficient on its face in other respects. Since petitioner clearly has standing to challenge the statute, being indisputably affected by it, we need not consider either the sufficiency of the affidavits upon which the eavesdrop orders were based or the standing of petitioner to attack the search and seizure made thereunder. The Fourth Amendment commands that a warrant issue not only upon probable cause supported by oath or affirmation, but also “particularly describing the place to be searched, and the persons or things to be seized.” New York’s statute lacks this particularization. It merely says that a warrant may issue on reasonable ground to believe that evidence of crime may be obtained by the eavesdrop. It lays down no requirement for particularity in the warrant as to what specific crime has been or is being committed, nor “the place to be searched,” or “the persons or things to be seized,” as specifically required by the Fourth Amendment. The need for particularity and evidence of reliability in the showing required when judicial authorization of a search is sought is especially great in the case of eavesdropping. By its very nature, eavesdropping involves an intrusion on privacy that is broad in scope. As was said in Osborn v. United States, 385 U. S.
323 (1966), the “indiscriminate use of such devices in law enforcement raises grave constitutional questions under the Fourth and Fifth Amendments,” and imposes “a heavier responsibility on this Court in its supervision of the fairness of procedures. . . .” At 385 U. S. 329, n. 7. There, two judges acting jointly authorized the installation of a device on the person of a prospective witness to record conversations between him and an attorney for a defendant then on trial in the United States District Court. The judicial authorization was based on an affidavit of the witness setting out in detail previous conversations between the witness and the attorney concerning the bribery of jurors in the case. The recording device was, as the Court said, authorized “under the most precise and discriminate circumstances, circumstances which fully met the ‘requirement of particularity’” of the Fourth Amendment. The Court was asked to exclude the evidence of the recording of the conversations seized pursuant to the order on constitutional grounds, Weeks v. United States, supra, or in the exercise of supervisory power, McNabb v. United States, 318 U. S. 332 (1943). The Court refused to do so, finding that the recording, although an invasion of the privacy protected by the Fourth Amendment, was admissible because of the authorization of the judges, based upon “a detailed factual affidavit alleging the commission of a specific criminal offense directly and immediately affecting the administration of justice . . . for the narrow and particularized purpose of ascertaining the truth of the affidavit’s allegations.” At 385 U. S. 330. The invasion was lawful because there was sufficient proof to obtain a search warrant to make the search for the limited purpose outlined in the order of the judges.
Through these “precise and discriminate” procedures, the order authorizing the use of the electronic device afforded similar protections to those that are present in the use of conventional warrants authorizing the seizure of tangible evidence. Among other safeguards, the order described the type of conversation sought with particularity, thus indicating the specific objective of the Government in entering the constitutionally protected area and the limitations placed upon the officer executing the warrant. Under it, the officer could not search unauthorized areas; likewise, once the property sought, and for which the order was issued, was found, the officer could not use the order as a passkey to further search. In addition, the order authorized one limited intrusion, rather than a series or a continuous surveillance. And we note that a new order was issued when the officer sought to resume the search, and probable cause was shown for the succeeding one. Moreover, the order was executed by the officer with dispatch, not over a prolonged and extended period. In this manner, no greater invasion of privacy was permitted than was necessary under the circumstances. Finally, the officer was required to and did make a return on the order showing how it was executed and what was seized. Through these strict precautions, the danger of an unlawful search and seizure was minimized. By contrast, New York’s statute lays down no such “precise and discriminate” requirements. Indeed, it authorizes the “indiscriminate use” of electronic devices as specifically condemned in Osborn. “The proceeding by search warrant is a drastic one,” Sgro v. United States, 287 U. S. 206, 210 (1932), and must be carefully circumscribed so as to prevent unauthorized invasions of “the sanctity of a man’s home and the privacies of life.” Boyd v. United States, 116 U. S. 616, 630.
New York’s broadside authorization, rather than being “carefully circumscribed” so as to prevent unauthorized invasions of privacy, actually permits general searches by electronic devices, the truly offensive character of which was first condemned in Entick v. Carrington, 19 How.St.Tr. 1029, and which were then known as “general warrants.” The use of the latter was a motivating factor behind the Declaration of Independence. In view of the many cases commenting on the practice, it is sufficient here to point out that, under these “general warrants,” customs officials were given blanket authority to conduct general searches for goods imported to the Colonies in violation of the tax laws of the Crown. The Fourth Amendment’s requirement that a warrant “particularly describ[e] the place to be searched, and the persons or things to be seized,” repudiated these general warrants and “makes general searches . . . impossible and prevents the seizure of one thing under a warrant describing another. As to what is to be taken, nothing is left to the discretion of the officer executing the warrant.” Marron v. United States, 275 U. S. 192, 196 (1927); Stanford v. Texas, supra. We believe the statute here is equally offensive. First, as we have mentioned, eavesdropping is authorized without requiring belief that any particular offense has been or is being committed; nor that the “property” sought, the conversations, be particularly described. The purpose of the probable cause requirement of the Fourth Amendment, to keep the state out of constitutionally protected areas until it has reason to believe that a specific crime has been or is being committed, is thereby wholly aborted. Likewise, the statute’s failure to describe with particularity the conversations sought gives the officer a roving commission to “seize” any and all conversations.
It is true that the statute requires the naming of “the person or persons whose communications, conversations or discussions are to be overheard or recorded. . . .” But this does no more than identify the person whose constitutionally protected area is to be invaded, rather than “particularly describing” the communications, conversations, or discussions to be seized. As with general warrants, this leaves too much to the discretion of the officer executing the order. Secondly, authorization of eavesdropping for a two-month period is the equivalent of a series of intrusions, searches, and seizures pursuant to a single showing of probable cause. Prompt execution is also avoided. During such a long and continuous (24 hours a day) period, the conversations of any and all persons coming into the area covered by the device will be seized indiscriminately and without regard to their connection with the crime under investigation. Moreover, the statute permits, and there were authorized here, extensions of the original two-month period — presumably for two months each — on a mere showing that such extension is “in the public interest.” Apparently the original grounds on which the eavesdrop order was initially issued also form the basis of the renewal. This we believe insufficient without a showing of present probable cause for the continuance of the eavesdrop. Third, the statute places no termination date on the eavesdrop once the conversation sought is seized. This is left entirely in the discretion of the officer. Finally, the statute’s procedure, necessarily because its success depends on secrecy, has no requirement for notice as do conventional warrants, nor does it overcome this defect by requiring some showing of special facts. On the contrary, it permits unconsented entry without any showing of exigent circumstances. 
Such a showing of exigency, in order to avoid notice, would appear more important in eavesdropping, with its inherent dangers, than that required when conventional procedures of search and seizure are utilized. Nor does the statute provide for a return on the warrant, thereby leaving full discretion in the officer as to the use of seized conversations of innocent as well as guilty parties. In short, the statute’s blanket grant of permission to eavesdrop is without adequate judicial supervision or protective procedures.

### VI

It is said with fervor that electronic eavesdropping is a most important technique of law enforcement, and that outlawing it will severely cripple crime detection. The monumental report of the President’s Commission on Law Enforcement and Administration of Justice entitled “The Challenge of Crime in a Free Society” informs us that the majority of law enforcement officials say that this is especially true in the detection of organized crime. As the Commission reports, there can be no question about the serious proportions of professional criminal activity in this country. However, we have found no empirical statistics on the use of electronic devices (bugging) in the fight against organized crime. Indeed, there are even figures available in the wiretap category which indicate to the contrary. See District Attorney Silver’s Poll of New York Prosecutors, in Dash, Schwartz & Knowlton, The Eavesdroppers 105, 117-119 (1959). Also see Semerjian, Proposals on Wiretapping in Light of Recent Senate Hearings, 45 B.U.L.Rev. 217, 229. As the Commission points out, “[w]iretapping was the mainstay of the New York attack against organized crime until Federal court decisions intervened. Recently, chief reliance in some offices has been placed on bugging, where the information is to be used in court.
Law enforcement officials believe that the successes achieved in some parts of the State are attributable primarily to a combination of dedicated and competent personnel and adequate legal tools, and that the failure to do more in New York has resulted primarily from the failure to commit additional resources of time and men,” rather than electronic devices. At 201-202. Moreover, Brooklyn’s District Attorney Silver’s poll of the State of New York indicates that, during the 12-year period (1942-1954), duly authorized wiretaps in bribery and corruption cases constituted only a small percentage of the whole. It indicates that this category involved only 10% of the total wiretaps. The overwhelming majority were in the categories of larceny, extortion, coercion, and blackmail, accounting for almost 50%. Organized gambling was about 11%. Statistics are not available on subsequent years. Dash, Schwartz & Knowlton, supra, at 40. An often repeated statement of District Attorney Hogan of New York County was made at a hearing before the Senate Judiciary Committee at which he advocated the amendment of the Communications Act of 1934, supra, so as to permit “telephonic interception” of conversations. As he testified, “Federal statutory law [the 1934 Act] has been interpreted in such a way as to bar us from divulging wiretap evidence, even in the courtroom in the course of criminal prosecution.” Mr. Hogan then said that “[w]ithout it [wiretaps], my own office could not have convicted” “top figures in the underworld.” He then named nine persons his office had convicted and one on whom he had furnished “leads” secured from wiretaps to the authorities of New Jersey. Evidence secured from wiretaps, as Mr. Hogan said, was not admissible in “criminal prosecutions.” He was advocating that the Congress adopt a measure that would make it admissible. Hearings on S. 2813 and S. 1495, before the Senate Committee on the Judiciary, 87th Cong., 2d Sess., pp. 173, 174 (1962).
The President’s Commission also emphasizes in its report the need for wiretapping in the investigation of organized crime because of the telephone’s “relatively free use” by those engaged in the business and the difficulty of infiltrating their organizations. P. 201. The Congress, though long importuned, has not amended the 1934 Act to permit it. We are also advised by the Solicitor General of the United States that the Federal Government has abandoned the use of electronic eavesdropping for “prosecutorial purposes.” See Supplemental Memorandum, Schipani v. United States, No. 504, October Term, 1966, 385 U. S. 372. See also Black v. United States, 385 U. S. 26 (1966); O’Brien v. United States, 386 U. S. 345 (1967); Hoffa v. United States, 387 U. S. 231 (1967); Markis v. United States, 387 U. S. 425; Moretti v. United States, 387 U. S. 425. Despite these actions of the Federal Government, there has been no failure of law enforcement in that field. As THE CHIEF JUSTICE said in concurring in the result in Lopez v. United States, 373 U. S. 427, “the fantastic advances in the field of electronic communication constitute a great danger to the privacy of the individual; . . . indiscriminate use of such devices in law enforcement raises grave constitutional questions under the Fourth and Fifth Amendments. . . .” In any event, we cannot forgive the requirements of the Fourth Amendment in the name of law enforcement. This is no formality that we require today, but a fundamental rule that has long been recognized as basic to the privacy of every home in America. While “[t]he requirements of the Fourth Amendment are not inflexible, or obtusely unyielding to the legitimate needs of law enforcement,” Lopez v. United States, supra, at 373 U. S. 464 (dissenting opinion of BRENNAN, J.), it is not asking too much that officers be required to comply with the basic command of the Fourth Amendment before the innermost secrets of one’s home or office are invaded.
Few threats to liberty exist which are greater than that posed by the use of eavesdropping devices. Some may claim that, without the use of such devices, crime detection in certain areas may suffer some delays, since eavesdropping is quicker, easier, and more certain. However, techniques and practices may well be developed that will operate just as speedily and certainly and — what is more important — without attending illegality. It is said that neither a warrant nor a statute authorizing eavesdropping can be drawn so as to meet the Fourth Amendment’s requirements. If that be true, then the “fruits” of eavesdropping devices are barred under the Amendment. On the other hand, this Court has in the past, under specific conditions and circumstances, sustained the use of eavesdropping devices. See Goldman v. United States, 316 U. S. 129; On Lee v. United States, 343 U. S. 747; Lopez v. United States, supra; and Osborn v. United States, supra. In the latter case, the eavesdropping device was permitted where the “commission of a specific offense” was charged, its use was “under the most precise and discriminate circumstances,” and the effective administration of justice in a federal court was at stake. The States are under no greater restrictions. The Fourth Amendment does not make the “precincts of the home or the office . . . sanctuaries where the law can never reach,” DOUGLAS, J., dissenting in Warden, Maryland Penitentiary v. Hayden, 387 U. S. 294, 321, but it does prescribe a constitutional standard that must be met before official invasion is permissible. Our concern with the statute here is whether its language permits a trespassory invasion of the home or office, by general warrant, contrary to the command of the Fourth Amendment. As it is written, we believe that it does. Reversed.

“§ 813-a.
Ex parte order for eavesdropping” “An ex parte order for eavesdropping as defined in subdivisions one and two of section seven hundred thirty-eight of the penal law may be issued by any justice of the supreme court or judge of a county court or of the court of general sessions of the county of New York upon oath or affirmation of a district attorney, or of the attorney general or of an officer above the rank of sergeant of any police department of the state or of any political subdivision thereof, that there is reasonable ground to believe that evidence of crime may be thus obtained, and particularly describing the person or persons whose communications, conversations or discussions are to be overheard or recorded and the purpose thereof, and, in the case of a telegraphic or telephonic communication, identifying the particular telephone number or telegraph line involved. In connection with the issuance of such an order, the justice or judge may examine on oath the applicant and any other witness he may produce and shall satisfy himself of the existence of reasonable grounds for the granting of such application. Any such order shall be effective for the time specified therein but not for a period of more than two months unless extended or renewed by the justice or judge who signed and issued the original order upon satisfying himself that such extension or renewal is in the public interest. Any such order together with the papers upon which the application was based, shall be delivered to and retained by the applicant as authority for the eavesdropping authorized therein. A true copy of such order shall at all times be retained in his possession by the judge or justice issuing the same, and, in the event of the denial of an application for such an order, a true copy of the papers upon which the application was based shall in like manner be retained by the judge or justice denying the same. As amended L.1958, c. 676, eff. 
July 1, 1958.” This contention is disposed of in Warden, Maryland Penitentiary v. Hayden, 387 U. S. 294, adversely to petitioner’s assertion here. 48 Stat. 1103, 47 U.S.C. § 605. Cal.Pen.Code §§ 65311h-j; Ill.Rev.Stat., c. 38, §§ 14-1 to 14-7 (1965); Md.Ann.Code, Art. 27, § 125A (1957); Mass.Gen.Laws, c. 272, § 99 (Supp. 1966); Nev.Rev.Stat. § 200.650 (1963); N.Y.Pen.Law § 738 (Supp. 1966); Ore.Rev.Stat. § 165.540(1)(c) (Supp. 1965). Ala.Code, Tit. 48, § 414 (1958); Alaska Stat. § 42.20.100 (1962); Ark.Stat.Ann. § 73-1810 (1957); Cal.Pen.Code § 640; Colo.Rev.Stat.Ann. § 40-4-17 (1963); Conn.Gen.Stat.Rev. § 53-140 (1958); Del.Code Ann., Tit. 11, § 757 (Supp. 1966); Fla.Stat. § 822.10 (1965); Hawaii Rev.Laws § 309 A-1 (Supp. 1963); Idaho Code Ann. §§ 18-6704, 6705 (1947); Ill.Rev.Stat., c. 134, § 16 (1965); Iowa Code § 716.8 (1962); Ky.Rev.Stat. § 433.430 (1962); La.Rev.Stat. § 14:322 (1950); Md.Ann.Code, Art. 35, §§ 92, 93 (1957); Mass.Gen.Laws, c. 272, § 99 (Supp. 1966); Mich.Stat.Ann. § 28.808 (1954); Mont.Rev.Codes Ann. § 94-3203 (Supp. 1965); Neb.Rev.Stat. § 86-328 (1966); Nev.Rev.Stat. §§ 200.620, 200.630 (1963); N.J.Rev.Stat. § 2A:146-1 (1953); N.M.Stat.Ann. § 40A-12-1 (1964); N.Y.Pen.Law § 738 (Supp. 1966); N.C.Gen.Stat. § 14-155 (1953); N.D.Cent.Code § 8-10-07 (1959); Ohio Rev.Code Ann. § 4931.28 (1954); Okla.Stat., Tit. 21, § 1757 (1961); Ore.Rev.Stat. § 165.540(1) (Supp. 1965); Pa.Stat.Ann., Tit. 15, § 2443 (1958); R.I.Gen.Laws Ann. § 11-35-12 (1956); S.D.Code § 13.4519 (1939); Tenn.Code Ann. § 65-2117 (1955); Utah Code Ann. § 76-48-11 (1953); Va.Code Ann. § 18.1-156 (1960 Repl. Vol.); Wis.Stat. § 134.39 (1963); Wyo.Stat.Ann. § 37-259 (1957). A recent Federal Communications Commission Regulation, 31 Fed.Reg. 3400, 47 CFR § 2.701, prohibits the use of “a device required to be licensed by section 301 of the Communications Act” for the purpose of eavesdropping.
This regulation, however, exempts use under “lawful authority” by police officers, and the sanctions are limited to loss of license and the imposition of a fine. The memorandum accompanying the regulation stated: “What constitutes a crime under State law reflecting State policy applicable to radio eavesdropping is, of course, unaffected by our rules.” Id. at 3399. MR. JUSTICE DOUGLAS, concurring. I join the opinion of the Court because, at long last, it overrules sub silentio Olmstead v. United States, 277 U. S. 438, and its offspring, and brings wiretapping and other electronic eavesdropping fully within the purview of the Fourth Amendment. I also join the opinion because it condemns electronic surveillance, for its similarity to the general warrants out of which our Revolution sprang and allows a discreet surveillance only on a showing of “probable cause.” These safeguards are minimal if we are to live under a regime of wiretapping and other electronic surveillance. Yet there persists my overriding objection to electronic surveillance viz., that it is a search for “mere evidence” which, as I have maintained on other occasions (Osborn v. United States, 385 U. S. 323, 385 U. S. 349-354), is a violation of the Fourth and Fifth Amendments, no matter with what nicety and precision a warrant may be drawn, a proposition that I developed in detail in my dissent in Warden v. Hayden, 387 U. S. 294, 387 U. S. 312, decided only the other day. A discreet selective wiretap or electronic “bugging” is, of course, not rummaging around, collecting everything in the particular time and space zone. But even though it is limited in time, it is the greatest of all invasions of privacy. It places a government agent in the bedroom, in the business conference, in the social hour, in the lawyer’s office — everywhere and anywhere a “bug” can be placed.
If a statute were to authorize placing a policeman in every home or office where it was shown that there was probable cause to believe that evidence of crime would be obtained, there is little doubt that it would be struck down as a bald invasion of privacy, far worse than the general warrants prohibited by the Fourth Amendment. I can see no difference between such a statute and one authorizing electronic surveillance, which, in effect, places an invisible policeman in the home. If anything, the latter is more offensive because the homeowner is completely unaware of the invasion of privacy. The traditional wiretap or electronic eavesdropping device constitutes a dragnet, sweeping in all conversations within its scope — without regard to the participants or the nature of the conversations. It intrudes upon the privacy of those not even suspected of crime, and intercepts the most intimate of conversations. Thus, in the Coplon case (United States v. Coplon, 91 F.Supp. 867, rev’d, 191 F.2d 749) wiretaps of the defendant’s home and office telephones recorded conversations between the defendant and her mother, a quarrel between a husband and wife who had no connection with the case, and conferences between the defendant and her attorney concerning the preparation of briefs, testimony of government witnesses, selection of jurors and trial strategy. Westin, The Wire-Tapping Problem: An Analysis and a Legislative Proposal, 52 Col.L.Rev. 165, 170-171 (1952); Barth, The Loyalty of Free Men 173 (1951). It is also reported that the FBI incidentally learned about an affair, totally unrelated to espionage, between the defendant and a Justice Department attorney. Barth, supra at 173. 
While tapping one telephone, police recorded conversations involving, at the other end, The Juilliard School of Music, Brooklyn Law School, Consolidated Radio Artists, Western Union, Mercantile Commercial Bank, several restaurants, a real estate company, a drug store, many attorneys, an importer, a dry cleaning establishment, a number of taverns, a garage, and the Prudential Insurance Company. Westin, supra, at 188, n. 112. These cases are but a few of many demonstrating the sweeping nature of electronic total surveillance as we know it today. It is, of course, possible for a statute to provide that wiretap or electronic eavesdrop evidence is admissible only in a prosecution for the crime to which the showing of probable cause related. See Nev.Rev.Stat. § 200.680 (1963). But such a limitation would not alter the fact that the order authorizes a general search. Whether or not the evidence obtained is used at a trial for another crime, the privacy of the individual has been infringed by the interception of all of his conversations. And even though the information is not introduced as evidence, it can and probably will be used as leads and background information. Again, a statute could provide that evidence developed from eavesdrop information could not be used at trial. Cf. Silverthorne Lumber Co., Inc. v. United States, 251 U. S. 385, 251 U. S. 392; Nardone v. United States, 308 U. S. 338; Silverman v. United States, 365 U. S. 505. But, under a regime of total surveillance, where a multitude of conversations are recorded, it would be very difficult to show which aspects of the information had been used as investigative information. As my Brother WHITE says in his dissent, this same vice inheres in any search for tangible evidence such as invoices, letters, diaries, and the like.
“In searching for seizable matters, the police must necessarily see or hear, and comprehend, items which do not relate to the purpose of the search.” That is precisely why the Fourth Amendment made any such rummaging around unconstitutional, even though supported by a formally adequate warrant. That underwrites my dissent in Hayden. With all respect, my Brother BLACK misses the point of the Fourth Amendment. It does not make every search constitutional provided there is a warrant that is technically adequate. The history of the Fourth Amendment, as I have shown in my dissent in the Hayden case, makes it plain that any search in the precincts of the home for personal items that are lawfully possessed and not articles of a crime is “unreasonable.” That is the essence of the “mere evidence” rule that long obtained until overruled by Hayden. The words that a man says consciously on a radio are public property. But I do not see how government, using surreptitious methods, can put a person on the radio and use his words to convict him. Under our regime, a man stands mute if he chooses, or talks if he chooses. The test is whether he acts voluntarily. That is the essence of the face of privacy protected by the “mere evidence” rule. For the Fourth Amendment and the Fifth come into play when the accused is “the unwilling source of the evidence” (Gouled v. United States, 255 U. S. 298, 255 U. S. 306), there being no difference “whether he be obliged to supply evidence against himself or whether such evidence be obtained by an illegal search of his premises and seizure of his private papers.” Ibid. That is the essence of my dissent in Hayden. In short, I do not see how any electronic surveillance that collects evidence or provides leads to evidence is or can be constitutional under the Fourth and Fifth Amendments. We could amend the Constitution and so provide — a step that would take us closer to the ideological group we profess to despise.
Until the amending process ushers us into that kind of totalitarian regime, I would adhere to the protection of privacy which the Fourth Amendment, fashioned in Congress and submitted to the people, was designed to afford the individual. And unlike my Brother BLACK, I would adhere to Mapp v. Ohio, 367 U. S. 643, and apply the exclusionary rule in state as well as federal trials — a rule fashioned out of the Fourth Amendment and constituting a high constitutional barricade against the intrusion of Big Brother into the lives of all of us. MR. JUSTICE STEWART, concurring in the result. I fully agree with MR. JUSTICE BLACK, MR. JUSTICE HARLAN, and MR. JUSTICE WHITE that this New York law is entirely constitutional. In short, I think that “electronic eavesdropping, as such or as it is permitted by this statute, is not an unreasonable search and seizure.” [Footnote 2/1] The statute contains many provisions more stringent than the Fourth Amendment generally requires, as MR. JUSTICE BLACK has so forcefully pointed out. And the petitioner himself has told us that the law’s “reasonable grounds” requirement “is undisputedly equivalent to the probable cause requirement of the Fourth Amendment.” This is confirmed by decisions of the New York courts. People v. Cohen, 42 Misc.2d 403, 248 N.Y.S.2d 339; People v. Beshany, 43 Misc.2d 521, 252 N.Y.S.2d 110; People v. Grossman, 45 Misc.2d 557, 257 N.Y.S.2d 266. Of course, a state court’s construction of a state statute is binding upon us. In order to hold this statute unconstitutional, therefore, we would have to either rewrite the statute or rewrite the Constitution. I can only conclude that the Court today seems to have rewritten both. The issue before us, as MR. JUSTICE WHITE says, is “whether this search complied with Fourth Amendment standards.” For me, that issue is an extremely close one in the circumstances of this case. 
It certainly cannot be resolved by incantation of ritual phrases like “general warrant.” Its resolution involves “the unavoidable task in any search and seizure case: was the particular search and seizure reasonable or not?” [Footnote 2/2] I would hold that the affidavits on which the judicial order issued in this case did not constitute a showing of probable cause adequate to justify the authorizing order. The need for particularity and evidence of reliability in the showing required when judicial authorization is sought for the kind of electronic eavesdropping involved in this case is especially great. The standard of reasonableness embodied in the Fourth Amendment demands that the showing of justification match the degree of intrusion. By its very nature, electronic eavesdropping for a 60-day period, even of a specified office, involves a broad invasion of a constitutionally protected area. Only the most precise and rigorous standard of probable cause should justify an intrusion of this sort. I think the affidavits presented to the judge who authorized the electronic surveillance of the Steinman office failed to meet such a standard. So far as the record shows, the only basis for the Steinman order consisted of two affidavits. One of them contained factual allegations supported only by bare, unexplained references to “evidence” in the district attorney’s office and “evidence” obtained by the Neyer eavesdrop. No underlying facts were presented on the basis of which the judge could evaluate these general allegations. The second affidavit was no more than a statement of another assistant district attorney that he had read his associate’s affidavit and was satisfied on that basis alone that proper grounds were presented for the issuance of an authorizing order. This might be enough to satisfy the standards of the Fourth Amendment for a conventional search or arrest. Cf. Aguilar v. Texas, 378 U. S. 108, 378 U. S. 116 (dissenting opinion).
But I think it was constitutionally insufficient to constitute probable cause to justify an intrusion of the scope and duration that was permitted in this case. Accordingly, I would reverse the judgment. Dissenting opinion of MR. JUSTICE HARLAN, post, p. 388 U. S. 89, at 388 U. S. 94. See dissenting opinion of MR. JUSTICE BLACK, post, p. 388 U. S. 70, at 388 U. S. 83. MR. JUSTICE BLACK, dissenting. New York has an eavesdropping statute which permits its judges to authorize state officers to place on other people’s premises electronic devices that will overhear and record telephonic and other conversations for the purpose of detecting secret crimes and conspiracies and obtaining evidence to convict criminals in court. Judges cannot issue such eavesdropping permits except upon oath or affirmation of certain state officers that “there is reasonable ground to believe that evidence of crime may be thus obtained, and particularly describing the person or persons whose communications, conversations or discussions are to be overheard or recorded, and the purpose thereof. . . .” N.Y.Code Crim.Proc. § 813-a. Evidence obtained by such electronic eavesdropping was used to convict the petitioner here of conspiracy to bribe the chairman of the State Liquor Authority, which controls the issuance of liquor licenses in New York. It is stipulated that, without this evidence, a conviction could not have been obtained, and it seems apparent that use of that evidence showed petitioner to be a briber beyond all reasonable doubt. Notwithstanding petitioner’s obvious guilt, however, the Court now strikes down his conviction in a way that plainly makes it impossible ever to convict him again. This is true because the Court not only holds that the judicial orders which were the basis of the authority to eavesdrop were insufficient, but also holds that the New York eavesdropping statute is, on its face, violative of the Fourth Amendment. 
And while the Court faintly intimates to the contrary, it seems obvious to me that its holding, by creating obstacles that cannot be overcome, makes it completely impossible for the State or the Federal Government ever to have a valid eavesdropping statute. All of this is done, it seems to me, in part because of the Court’s hostility to eavesdropping as “ignoble” and “dirty business” [Footnote 3/1] and in part because of fear that rapidly advancing science and technology is making eavesdropping more and more effective. Cf. Lopez v. United States, 373 U. S. 427, 373 U. S. 446 (dissenting opinion of BRENNAN, J.). Neither these nor any other grounds that I can think of are sufficient, in my judgment, to justify a holding that the use of evidence secured by eavesdropping is barred by the Constitution.

### I

Perhaps as good a definition of eavesdropping as another is that it is listening secretly and sometimes “snoopily” to conversations and discussions believed to be private by those who engage in them. Needless to say, eavesdropping is not ranked as one of the most learned or most polite professions, nor perhaps would an eavesdropper be selected by many people as the most desirable and attractive associate. But the practice has undoubtedly gone on since the beginning of human society, and, during that time, it has developed a usefulness of its own, particularly in the detection and prosecution of crime. Eavesdroppers have always been deemed competent witnesses in English and American courts. The main test of admissibility has been relevance and first-hand knowledge, not by whom or by what method proffered evidence was obtained. It is true that, in England, people who obtained evidence by unlawful means were held liable in damages, as in Entick v. Carrington, 19 How.St.Tr. 1029. But even that famous civil liberties case made no departure from the traditional common law rule that relevant evidence is admissible even though obtained contrary to ethics, morals, or law.
And, for reasons that follow, this evidentiary rule is well adapted to our Government, set up as it was to “insure domestic tranquility” under a system of laws. Today this country is painfully realizing that evidence of crime is difficult for governments to secure. Criminals are shrewd and constantly seek, too often successfully, to conceal their tracks and their outlawry from law officers. But, in carrying on their nefarious practices, professional criminals usually talk considerably. Naturally, this talk is done, they hope, in a secret way that will keep it from being heard by law enforcement authorities or by others who might report to the authorities. In this situation, “eavesdroppers,” “informers,” and “squealers,” as they are variously called, are helpful, even though unpopular, agents of law enforcement. And it needs no empirical studies or statistics to establish that eavesdropping testimony plays an important role in exposing criminals and bands of criminals who, but for such evidence, would go along their criminal way with little possibility of exposure, prosecution, or punishment. Such, of course, is this particular case before us. The eavesdrop evidence here shows this petitioner to be a briber, a corrupter of trusted public officials, a poisoner of the honest administration of government, upon which good people must depend to obtain the blessings of a decent orderly society. No man’s privacy, property, liberty, or life is secure if organized or even unorganized criminals can go their way unmolested, ever and ever further in their unbounded lawlessness. However obnoxious eavesdroppers may be, they are assuredly not engaged in a more “ignoble” or “dirty business” than are bribers, thieves, burglars, robbers, rapists, kidnapers, and murderers, not to speak of others. And it cannot be denied that, to deal with such specimens of our society, eavesdroppers are not merely useful, they are frequently a necessity. 
I realize that some may say, “Well, let the prosecuting officers use more scientific measures than eavesdropping.” It is always easy to hint at mysterious means available just around the corner to catch outlaws. But crimes, unspeakably horrid crimes, are with us in this country, and we cannot afford to dispense with any known method of detecting and correcting them unless it is forbidden by the Constitution or deemed inadvisable by legislative policy — neither of which I believe to be true about eavesdropping.

### II

Since eavesdrop evidence obtained by individuals is admissible and helpful, I can perceive no permissible reason for courts to reject it, even when obtained surreptitiously by machines, electronic or otherwise. Certainly evidence picked up and recorded on a machine is not less trustworthy. In both perception and retention, a machine is more accurate than a human listener. The machine does not have to depend on a defective memory to repeat what was said in its presence, for it repeats the very words uttered. I realize that there is complaint that sometimes the words are jumbled or indistinct. But machine evidence need not be done away with to correct such occasional defective recording. The trial judge has ample power to refuse to admit indistinct or garbled recordings. The plain facts are, however, that there is no inherent danger to a defendant in using these electronic recordings except that which results from the use of testimony that is so unerringly accurate that it is practically bound to bring about a conviction. In other words, this kind of transcribed eavesdropping evidence is far more likely to lead a judge or jury to reach a correct judgment or verdict — the basic and always-present objective of a trial.

### III

The superior quality of evidence recorded and transcribed on an electronic device is, of course, no excuse for using it against a defendant if, as the Court holds, its use violates the Fourth Amendment.
If that is true, no amount of common law tradition or anything else can justify admitting such evidence. But I do not believe the Fourth Amendment, or any other, bans the use of evidence obtained by eavesdropping. There are constitutional amendments that speak in clear unambiguous prohibitions or commands. The First, for illustration, declares that “Congress shall make no law . . . abridging the freedom of speech, or of the press. . . .” The Fifth declares that a person shall not be held to answer for a capital or otherwise infamous crime except on a grand jury indictment; shall not twice be put in jeopardy of life or limb for the same offense; nor be compelled in any criminal case to be a witness against himself. These provisions of the First and Fifth Amendments, as well as others I need not mention at this time, are clear unconditional commands that something shall not be done. Particularly of interest in comparison with the Fourth Amendment is the Fifth Amendment’s prohibition against compelling a person to be a witness against himself. The Fifth Amendment’s language forbids a court to hear evidence against a person that he has been compelled to give, without regard to reasonableness or anything else. Unlike all of these just-named Fifth Amendment provisions, the Fourth Amendment relating to searches and seizures contains no such unequivocal commands. 
It provides: “The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.” Obviously, those who wrote this Fourth Amendment knew from experience that searches and seizures were too valuable to law enforcement to prohibit them entirely, but also knew at the same time that, while searches or seizures must not be stopped, they should be slowed down, and warrants should be issued only after studied caution. This accounts for use of the imprecise and flexible term, “unreasonable,” the key word permeating this whole Amendment. Also it is noticeable that this Amendment contains no appropriate language, as does the Fifth, to forbid the use and introduction of search and seizure evidence even though secured “unreasonably.” Nor does this Fourth Amendment attempt to describe with precision what was meant by its words, “probable cause”; nor by whom the “Oath or affirmation” should be taken; nor what it need contain. Although the Amendment does specifically say that the warrant should particularly describe “the place to be searched, and the persons or things to be seized,” it does not impose any precise limits on the spatial or temporal extent of the search or the quantitative extent of the seizure. Thus, this Amendment, aimed against only “unreasonable” searches and seizures, seeks to guard against them by providing, as the Court says, that a “neutral and detached authority be interposed between the police and the public, Johnson v. United States, 333 U. S. 10, 333 U. S. 14.” And, as the Court admits, the Amendment itself provides no sanctions to enforce its standards of searches, seizures, and warrants. This was left for Congress to carry out if it chose to do so.
Had the framers of this Amendment desired to prohibit the use in court of evidence secured by an unreasonable search or seizure, they would have used plain appropriate language to do so, just as they did in prohibiting the use of enforced self-incriminatory evidence in the Fifth Amendment. Since the Fourth Amendment contains no language forbidding the use of such evidence, I think there is no such constitutional rule. So I continue to believe that the exclusionary rule formulated to bar such evidence in the Weeks [Footnote 3/2] case is not rooted in the Fourth Amendment, but rests on the “supervisory power” of this Court over the other federal courts — the same judicial power invoked in McNabb v. United States, 318 U. S. 332. See my concurring opinions in Wolf v. Colorado, 338 U. S. 25, 338 U. S. 39, and Mapp v. Ohio, 367 U. S. 643, 367 U. S. 661. [Footnote 3/3] For these reasons and others to be stated, I do not believe the Fourth Amendment, standing alone, even if applicable to electronic eavesdropping, commands exclusion of the overheard evidence in this case. In reaching my conclusion that the Fourth Amendment itself does not bar the use of eavesdropping evidence in courts, I do not overlook the fact that the Court, at present, is reading the Amendment as expressly and unqualifiedly barring invasions of “privacy”, rather than merely forbidding “unreasonable searches and seizures.” On this premise of the changed command of the Amendment, the Court’s task in passing on the use of eavesdropping evidence becomes a simple one.
Its syllogism is this: “The Fourth Amendment forbids invasion of privacy, and excludes evidence obtained by such invasion;” “To listen secretly to a man’s conversations or to tap his telephone conversations invades his privacy;” “Therefore, the Fourth Amendment bars use of evidence obtained by eavesdropping or by tapping telephone wires.” The foregoing syllogism is faulty for at least two reasons: (1) the Fourth Amendment itself contains no provision from which can be implied a purpose to bar evidence or anything else secured by an “unreasonable search or seizure”; (2) the Fourth Amendment’s language, fairly construed, refers specifically to “unreasonable searches and seizures,” and not to a broad undefined right to “privacy” in general. To attempt to transform the meaning of the Amendment, as the Court does here, is to play sleight-of-hand tricks with it. It is impossible for me to think that the wise Framers of the Fourth Amendment would ever have dreamed about drafting an amendment to protect the “right of privacy.” That expression, like a chameleon, has a different color for every turning. In fact, use of “privacy” as the keyword in the Fourth Amendment simply gives this Court a useful new tool, as I see it, both to usurp the policymaking power of the Congress and to hold more state and federal laws unconstitutional when the Court entertains a sufficient hostility to them. I therefore cannot agree to hold New York’s law unconstitutional on the premise that all laws that unreasonably invade privacy violate the Fourth Amendment.

### IV

While the electronic eavesdropping here bears some analogy to the problems with which the Fourth Amendment is concerned, I am by no means satisfied that the Amendment controls the constitutionality of such eavesdropping.
As pointed out, the Amendment only bans searches and seizures of “persons, houses, papers, and effects.” This literal language imports tangible things, and it would require an expansion of the language used by the framers, in the interest of “privacy” or some equally vague judge-made goal, to hold that it applies to the spoken word. It simply requires an imaginative transformation of the English language to say that conversations can be searched and words seized. Referring to wiretapping, this Court, in Olmstead v. United States, 277 U. S. 438, 277 U. S. 465, refused to make that transformation: “Justice Bradley in the Boyd case, and Justice Clark[e] in the Gouled case, said that the Fifth Amendment and the Fourth Amendment were to be liberally construed. . . . But that cannot justify enlargement of the language employed beyond the possible practical meaning of houses, persons, papers, and effects, or so to apply the words search and seizure as to forbid hearing or sight.” Though Olmstead has been severely criticized by various individual members of this Court, and though the Court stated an alternative ground for holding the Amendment inapplicable in that case, the Olmstead holding that the Fourth Amendment does not apply to efforts to hear and obtain oral conversations has never been overruled by this Court. The Court today, however, suggests that this holding has been “negated” by subsequent congressional action and by four decisions of this Court. First, the Court intimates, though it does not exactly state, that Congress, “in answer to Olmstead,” passed an Act to prohibit “the interception without authorization and the divulging or publishing of the contents of telephonic communications.” The Court cites no authority for this strange surmise, and I assert with confidence that none can be recited. And even if it could, Congress’ action would not have the slightest relevance to the scope of the Fourth Amendment. Second, the Court cites Goldman v.
United States, 316 U. S. 129, and On Lee v. United States, 343 U. S. 747, in an effort to explain away Olmstead. But neither of those cases purported to repudiate the Olmstead case or any part of it. In fact, in both of those cases, the Court refused to exclude the challenged eavesdrop evidence. Finally, the Court relies on Silverman v. United States, 365 U. S. 505, and Wong Sun v. United States, 371 U. S. 471. In both of these cases, the Court did imply that the “Fourth Amendment may protect against the overhearing of verbal statements as well as against the more traditional seizure of ‘papers and effects,'” 371 U.S. at 371 U. S. 485 (emphasis added), but in neither did the Court find it necessary to overrule Olmstead, an action that would have been required had the Court based its exclusion of the oral conversations solely on the ground of the Fourth Amendment. The fact is that both Silverman and Wong Sun were federal cases dealing with the use of verbal evidence in federal courts, and the Court held the evidence should be excluded by virtue of the exclusionary rule of the Weeks case. As I have previously pointed out, that rule rested on the Court’s supervisory power over federal courts, not on the Fourth Amendment: it is not required by the Amendment, nor is a violation of the Amendment a prerequisite to its application. I would not have agreed with the Court’s opinion in Silverman, which, by the way, cited Olmstead with approval, had I thought that the result depended on finding a violation of the Fourth Amendment, or had I any inkling that the Court’s general statements about the scope of the Amendment were intended to negate the clear holding of Olmstead.
And again, in Wong Sun, which did not even mention Olmstead, let alone overrule it, the Court clearly based its exclusion of oral statements made to federal agents during an illegal arrest on its supervisory power to deter lawless conduct by federal officers, and on the alternative ground that the incriminating statements were made under compulsive circumstances and were not the product of a free will. It is impossible for me to read into that non-eavesdropping federal case an intent to overrule Olmstead implicitly. In short, the only way this Court can escape Olmstead here is to overrule it. Without expressly saying so, the Court’s opinion, as my Brother DOUGLAS acknowledges, does just that. And that overruling is accomplished by the simple expedient of substituting for the Amendment’s words, “The right of the people to be secure in their persons, houses, papers, and effects” the words “The right of the people to be secure in their privacy,” words the Court believes the Framers should have used, but did not. I have frequently stated my opposition to such judicial substitution. Although here the Court uses it to expand the scope of the Fourth Amendment to include words, the Court has been applying the same process to contract the Fifth Amendment’s privilege against self-incrimination so as to exclude all types of incriminating evidence but words, or what the Court prefers to call “testimonial evidence.” See United States v. Wade, post, p. 218; Gilbert v. California, post, p. 263. There is yet another reason why I would adhere to the holding of Olmstead that the Fourth Amendment does not apply to eavesdropping. Since the Framers in the first clause of the Amendment specified that only persons, houses, and things were to be protected, they obviously wrote the second clause, regulating search warrants, in reference only to such tangible things. 
To hold, as the Court does, that the first clause protects words necessitates either a virtual rewriting of the particularity requirements of the Warrant Clause or a literal application of that clause’s requirements and our cases construing them to situations they were never designed to cover. I am convinced that the Framers of the Amendment never intended this Court to do either, and yet it seems to me clear that the Court here does a little of both.

### V

Assuming, as the Court holds, that the Fourth Amendment applies to eavesdropping and that the evidence obtained by an eavesdrop which violates the Fourth Amendment must be excluded in state courts, I disagree with the Court’s holding that the New York statute, on its face, fails to comport with the Amendment. I also agree with my Brother WHITE that the statute, as here applied, did not violate any of petitioner’s Fourth Amendment rights — assuming again that he has some — and that he is not entitled to a reversal of his conviction merely because the statute might have been applied in some way that would not have accorded with the Amendment. This case deals only with a trespassory eavesdrop, an eavesdrop accomplished by placing “bugging” devices in certain offices. Significantly, the Court does not purport to disturb the Olmstead-Silverman-Goldman distinction between eavesdrops which are accompanied by a physical invasion and those that are not. Neither does the Court purport to overrule the holdings of On Lee v. United States, 343 U. S. 747, and Lopez v. United States, 373 U. S. 427, which exempt from the Amendment’s requirements the use of an electronic device to record, and perhaps even transmit, a conversation to which the user is a party. It is thus clear that at least certain types of electronic eavesdropping, until today, were completely outside the scope of the Fourth Amendment. 
Nevertheless, New York has made it a crime to engage in almost any kind of electronic eavesdropping, N.Y.Pen.Law § 738, and the only way eavesdropping, even the kind this Court has held constitutional, can be accomplished with immunity from criminal punishment is pursuant to § 813-a of the Code of Criminal Procedure, N.Y.Pen.Law § 739. The Court now strikes down § 813-a in its entirety, and that may well have the result of making it impossible for state law enforcement officers merely to listen through a closed door by means of an inverted cone or some other crude amplifying device, eavesdropping which this Court has to date refused to hold violative of the Fourth Amendment. Certainly there is no justification for striking down completely New York’s statute, covering all kinds of eavesdropping, merely because it fails to contain the “strict precautions” which the Court derives — or, more accurately, fabricates — as conditions to eavesdrops covered by the Fourth Amendment. In failing to distinguish between types of eavesdropping and in failing to make clear that the New York statute is invalid only as applied to certain kinds of eavesdropping, the Court’s opinion leaves the definite impression that all eavesdropping is governed by the Fourth Amendment. Such a step would require overruling of almost every opinion this Court has ever written on the subject. Indeed, from the Court’s eavesdropping catalogue of horrors — electronic rays beamed at walls, lapel and cuff-link microphones, and off-premise parabolic microphones — it does not take too much insight to see that the Court is about ready to do, if it has not today done, just that. I agree with my Brother WHITE that, instead of looking for technical defects in the language of the New York statute, the Court should examine the actual circumstances of its application in this case to determine whether petitioner’s rights have here been violated. 
That to me seems to be the unavoidable task in any search and seizure case: was the particular search and seizure reasonable or not? We have just this Term held that a search and seizure without a warrant, and even without authorization of state law, can nevertheless, under all the circumstances, be “reasonable” for Fourth Amendment purposes. Cooper v. California, 386 U. S. 58. I do not see why that could not be equally true in the case of a search and seizure with a warrant and pursuant to a state law, even though the state law is itself too broad to be valid. Certainly a search and seizure may comply with the Fourth Amendment even in the absence of an authorizing statute which embodies the Amendment’s requirements. Osborn v. United States, 385 U. S. 323, upon which the Court so heavily relies, is a good example of a case where the Court sustained the tape recording of a conversation by examining the particular circumstances surrounding it, even though no federal statute prescribed the precautions taken by the district judges there. Here, New York has gone much further than the Federal Government and most of the States to outlaw all eavesdropping except under the limited circumstances of § 813-a, a statute which, as I shall demonstrate, contains many more safeguards than the Fourth Amendment itself. But today New York fares far worse than those States which have done nothing to implement and supplement the Fourth Amendment: it must release a convicted criminal not because it has deprived him of constitutional rights, but because it has inartfully (according to the Court) tried to guarantee him those rights. 
The New York statute aside, the affidavits in this case were sufficient to justify a finding of probable cause, and the ex parte eavesdrop orders identified the person whose conversations were to be overheard, the place where the eavesdropping was to take place, and, when read in reference to the supporting affidavits, the type of conversations sought, i.e., those relating to extortion and bribery. The Court concludes its analysis of § 813-a by asserting that “the statute’s blanket grant of permission to eavesdrop is without adequate judicial supervision or protective procedures.” Even if the Court’s fear that “[f]ew threats to liberty exist which are greater than that posed by the use of eavesdropping devices” justifies it in rewriting the Fourth Amendment to impose on eavesdroppers “strict precautions” which are not imposed on other searchers, it is an undeserved criticism of New York to characterize its studied efforts to regulate eavesdropping as resulting in a statute “without adequate judicial supervision or protective procedures.” Let us look at the New York statute. It provides:

(1) New York judges are to issue authorizations. (The Fourth Amendment does not command any such desirable judicial participation.)

(2) The judge must have an “oath” from New York officials. (The Fourth Amendment does not specify who must execute the oath it requires.)

(3) The oath must state “reasonable ground to believe that evidence of crime may be thus obtained,” and the judge may examine the affiant and any other witnesses to make certain that this is the case. (The Fourth Amendment requires a showing of “probable cause,” but the Court does not dispute New York’s assertion that “reasonable ground” and “probable cause” are the same. The Amendment does not specify, as the New York statute does, a procedure by which the judge may “satisfy himself” of the existence of probable cause.)

(4) The “person or persons whose communications, conversations or discussions are to be overheard or recorded and the purpose thereof” must be particularly described. (In the case of conversation, it would seem impossible to require a more particular description than this. Tangible things in existence at the time a warrant for their seizure is issued could be more particularly described, but the only way to describe future conversations is by a description of the anticipated subject matter of the conversation. When the “purpose” of the eavesdropping is stated, the subject of the conversation sought to be seized is readily recognizable. Nothing more was required in Osborn; nothing more should be required here.)

(5) The eavesdrop order must be limited in time to no more than two months. (The Fourth Amendment merely requires that the place to be searched be described. It does not require the warrant to limit the time of a search, and it imposes no limit, other than that of reasonableness, on the dimensions of the place to be searched.)

Thus, it seems impossible for the Court to condemn this statute on the ground that it lacks “adequate judicial supervision or protective procedures.” Rather, the only way the Court can invalidate it is to find it lacking in some of the safeguards which the Court today fashions without any reference to the language of the Fourth Amendment whatsoever. In fact, from the deficiencies the Court finds in the New York statute, it seems that the Court would be compelled to strike down a state statute which merely tracked verbatim the language of the Fourth Amendment itself. First, the Court thinks the affidavits or the orders must particularize the crime being committed. The Fourth Amendment’s particularity requirement relates to the place searched and the thing seized, not to the crime being committed. Second, the Court holds that two months for an eavesdrop order to be outstanding is too long. 
There are, however, no time limits of any kind in the Fourth Amendment other than the notion that a search should not last longer than reasonably necessary to search the place described in the warrant, and the extent of that place may also be limited by the concept of reasonableness. The Court does not explain why two months, regardless of the circumstances, is per se an unreasonable length of time to accomplish a verbal search. Third, the Court finds the statute deficient in not providing for a termination of the eavesdrop once the object is obtained and in not providing for a return of the warrant at that time. Where in the Fourth Amendment does the Court think it possible to find these requirements? Finally, the Court makes the fantastic suggestion that the eavesdropper must give notice to the person whose conversation is to be overheard or that the eavesdropper must show “exigent circumstances” before he can perform his eavesdrop without consent. Now, if never before, the Court’s purpose is clear: it is determined to ban all eavesdropping. As the Court recognizes, eavesdropping “necessarily . . . depends on secrecy.” Since secrecy is an essential, indeed a definitional, element of eavesdropping, when the Court says there shall be no eavesdropping without notice, the Court means to inform the Nation there shall be no eavesdropping — period. It should now be clear that, in order to strike down the New York law, the Court has been compelled to rewrite completely the Fourth Amendment. 
By substituting the word “privacy” for the language of the first clause of the Amendment, the Court expands the scope of the Amendment to include oral conversations; then, by applying the literal particularity requirements of the second clause without adjustment for the Court’s expansion of the Amendment’s scope, the Court makes constitutional eavesdropping improbable; and finally, by inventing requirements found in neither clause — requirements with which neither New York nor any other State can possibly comply — the Court makes such eavesdropping impossible. If the Fourth Amendment does not ban all searches and seizures, I do not see how it can possibly ban all eavesdrops.

### VI

As I see it, the differences between the Court and me in this case rest on different basic beliefs as to our duty in interpreting the Constitution. This basic charter of our Government was written in few words to define governmental powers generally, on the one hand, and to define governmental limitations, on the other. I believe it is the Court’s duty to interpret these grants and limitations so as to carry out as nearly as possible the original intent of the Framers. But I do not believe that it is our duty to go further than the Framers did on the theory that the judges are charged with responsibility for keeping the Constitution “up to date.” Of course, where the Constitution has stated a broad purpose to be accomplished under any circumstances, we must consider that modern science has made it necessary to use new means in accomplishing the Framers’ goal. A good illustration of this is the Commerce Clause, which gives Congress power to regulate commerce between the States however it may be carried on, whether by ox wagons or jet planes. But the Fourth Amendment gives no hint that it was designed to put an end to the age-old practice of using eavesdropping to combat crime. 
If changes in that Amendment are necessary, due to contemporary human reaction to technological advances, I think those changes should be accomplished by amendments, as the Constitution itself provides. Then again, a constitution like ours is not designed to be a full code of laws, as some of our States and some foreign countries have made theirs. And if constitutional provisions require new rules and sanctions to make them as fully effective as might be desired, my belief is that calls for action not by us, but by Congress or state legislatures, vested with powers to choose between conflicting policies. Here, for illustration, there are widely diverging views about eavesdropping. Some would make it a crime, barring it absolutely and in all events; others would bar it except in searching for evidence in the field of “national security,” whatever that means; still others would pass no law either authorizing or forbidding it, leaving it to follow its natural course. This is plainly the type of question that can and should be decided by legislative bodies, unless some constitutional provision expressly governs the matter, just as the Fifth Amendment expressly forbids enforced self-incrimination. There is no such express prohibition in the Fourth Amendment, nor can one be implied. The Fourth Amendment can only be made to prohibit or to regulate eavesdropping by taking away some of its words and by adding others. Both the States and the National Government are at present confronted with a crime problem that threatens the peace, order, and tranquility of the people. There are, as I have pointed out, some constitutional commands that leave no room for doubt — certain procedures must be followed by courts regardless of how much more difficult they make it to convict and punish for crime. These commands we should enforce firmly and to the letter. 
But my objection to what the Court does today is the picking out of a broad general provision against unreasonable searches and seizures and the erecting out of it a constitutional obstacle against electronic eavesdropping that makes it impossible for lawmakers to overcome. Honest men may rightly differ on the potential dangers or benefits inherent in electronic eavesdropping and wiretapping. See Lopez v. United States, supra. But that is the very reason that legislatures, like New York’s, should be left free to pass laws about the subject, rather than be told that the Constitution forbids it on grounds no more forceful than the Court has been able to muster in this case.

Mr. Justice Holmes dissenting in Olmstead v. United States, 277 U. S. 438, 470. Weeks v. United States, 232 U. S. 383. Compare Adams v. New York, 192 U. S. 585. I concurred in Mapp because “[t]he close interrelationship between the Fourth and Fifth Amendments,” 367 U.S. at 662, as they applied to the facts of that case, required the exclusion there of the unconstitutionally seized evidence.

MR. JUSTICE HARLAN, dissenting.

The Court in recent years has more and more taken to itself sole responsibility for setting the pattern of criminal law enforcement throughout the country. Time-honored distinctions between the constitutional protections afforded against federal authority by the Bill of Rights and those provided against state action by the Fourteenth Amendment have been obliterated, thus increasingly subjecting state criminal law enforcement policies to oversight by this Court. See, e.g., Mapp v. Ohio, 367 U. S. 643; Ker v. California, 374 U. S. 23; Malloy v. Hogan, 378 U. S. 1; Murphy v. Waterfront Commission, 378 U. S. 52. Newly contrived constitutional rights have been established without any apparent concern for the empirical process that goes with legislative reform. See, e.g., Miranda v. Arizona, 384 U. S. 436. 
And overlying the particular decisions to which this course has given rise is the fact that, short of future action by this Court, their impact can only be undone or modified by the slow and uncertain process of constitutional amendment. Today’s decision is in this mold. Despite the fact that the use of electronic eavesdropping devices as instruments of criminal law enforcement is currently being comprehensively addressed by the Congress and various other bodies in the country, the Court has chosen, quite unnecessarily, to decide this case in a manner which will seriously restrict, if not entirely thwart, such efforts, and will freeze further progress in this field, except as the Court may itself act or a constitutional amendment may set things right. In my opinion, what the Court is doing is very wrong, and I must respectfully dissent.

### I

I am, at the outset, divided from the majority by the way in which it has determined to approach the case. Without pausing to explain or to justify its reasoning, it has undertaken both to circumvent rules which have hitherto governed the presentation of constitutional issues to this Court, and to disregard the construction consistently attributed to a state statute by the State’s own courts. Each of these omissions is, in my opinion, most unfortunate. The Court declares, without further explanation, that, since petitioner was “affected” by § 813-a, he may challenge its validity on its face. Nothing in the cases of this Court supports this wholly ambiguous standard; the Court, until now, has, in recognition of the intense difficulties so wide a rule might create for the orderly adjudication of constitutional issues, limited the situations in which state statutes may be challenged on their face. There is no reason here, apart from the momentary conveniences of this case, to abandon those limitations: none of the circumstances which have before properly been thought to warrant challenges of statutes on their face is present, cf. 
Thornhill v. Alabama, 310 U. S. 88, 98, and no justification for additional exceptions has been offered. See generally United States v. National Dairy Products Corp., 372 U. S. 29, 36; Aptheker v. Secretary of State, 378 U. S. 500, 521 (dissenting opinion). Petitioner’s rights, and those of others similarly situated, can be fully vindicated through the adjudication of the consistency with the Fourteenth Amendment of each eavesdropping order. If the statute is to be assessed on its face, the Court should at least adhere to the principle that, for purposes of assessing the validity under the Constitution of a state statute, the construction given the statute by the State’s courts is conclusive of its scope and meaning. Fox v. Washington, 236 U. S. 273; Winters v. New York, 333 U. S. 507; Poulos v. New Hampshire, 345 U. S. 395. This principle is ultimately a consequence of the differences in function of the state and federal judicial systems. The strength with which it has hitherto been held may be estimated in part by the frequency with which the Court has in the past declined to adjudicate issues, often of great practical and constitutional importance, until the state courts “have been afforded a reasonable opportunity to pass upon them.” Harrison v. NAACP, 360 U. S. 167, 176. See, e.g., Railroad Comm’n v. Pullman Co., 312 U. S. 496; Spector Motor Service, Inc. v. McLaughlin, 323 U. S. 101; Shipman v. DuPre, 339 U. S. 321; Albertson v. Millard, 345 U. S. 242; Government Employees v. Windsor, 353 U. S. 364. The Court today entirely disregards this principle. In its haste to give force to its distaste for eavesdropping, it has apparently resolved that no attention need be given to the construction of § 813-a adopted by the state courts. 
Apart from a brief and partial acknowledgment, spurred by petitioner’s concession that the state cases might warrant exploration, the Court has been content simply to compare the terms of the statute with the provisions of the Fourth Amendment; upon discovery that their words differ, it has concluded that the statute is constitutionally impermissible. In sharp contrast, when confronted by Fourth Amendment issues under a federal statute which did not, and does not now, reproduce ipsissimis verbis the Fourth Amendment, 26 U.S.C. § 7607(2), the Court readily concluded, upon the authority of cases in the courts of appeals, that the statute effectively embodied the Amendment’s requirements. Draper v. United States, 358 U. S. 307, 310 n. And the Court, without the assistance even of state authorities, reached an identical conclusion as to a similar state statute in Ker v. California, 374 U. S. 23, 36 n. The circumstances of the present case do not come even within the narrow exceptions to the rule that the Court ordinarily awaits a state court’s construction before adjudicating the validity of a state statute. Cf. Dombrowski v. Pfister, 380 U. S. 479; Baggett v. Bullitt, 377 U. S. 360. The Court has shown no justification for its disregard of existing and pertinent state authorities.

### II

The Court’s precipitate neglect of the New York cases is the more obviously regrettable when their terms are examined, for they make quite plain that the state courts have fully recognized the applicability of the relevant federal constitutional requirements, and that they have construed § 813-a in conformity with those requirements. Opinions of the state courts repeatedly suggest that the “reasonable grounds” prescribed by the section are understood to be synonymous with the “probable cause” demanded by the Fourth and Fourteenth Amendments. People v. Cohen, 42 Misc.2d 403, 404, 248 N.Y.S.2d 339, 341; People v. 
Grossman, 45 Misc.2d 557, 568, 257 N.Y.S.2d 266, 277; People v. Beshany, 43 Misc.2d 521, 525, 252 N.Y.S.2d 110, 115. The terms are frequently employed interchangeably, without the least suggestion of any shadings of meaning. See, e.g., People v. Rogers, 46 Misc.2d 860, 863, 261 N.Y.S.2d 152, 155; People v. McDonough, 51 Misc.2d 1065, 1069, 275 N.Y.S.2d 8, 12. Further, a lower state court has stated quite specifically that “the same standards, at the least, must be applied” to orders under § 813-a as to warrants for the search and seizure of tangible objects. People v. Cohen, supra, at 407-408, 248 N.Y.S.2d at 344. Indeed, the court went on to say that the standards “should be much more stringent than those applied to search warrants.” Id. at 408, 248 N.Y.S.2d at 344. Compare Siegel v. People, 16 N.Y.2d 330, 332, 213 N.E.2d 682, 683. The court in Cohen was concerned with a wiretap order, but the order had been issued under § 813-a, and there was no suggestion there or elsewhere that eavesdropping orders should be differently treated. New York’s statutory requirements for search warrants, it must be emphasized, are virtually a literal reiteration of the terms of the Fourth Amendment. N.Y.Code Crim.Proc. § 793. If the Court wished a precise invocation of the terms of the Fourth Amendment, it had only to examine the pertinent state authorities. There is still additional evidence that the State fully recognizes the applicability to eavesdropping orders of the Fourth Amendment’s constraints. The Legislature of New York adopted in 1962 comprehensive restrictions upon the use of eavesdropped information obtained without a prior § 813-a order. N.Y.Civ.Prac. § 4506. The restrictions were expected and intended to give full force to the mandate of the opinion for this Court in Mapp v. Ohio, 367 U. S. 643. See 2 McKinney’s Session Laws of New York 3677 (1962); New York State Legislative Annual 16 (1962). 
If it was then supposed that information obtained without a prior § 813-a order must, as a consequence of Mapp, be excluded from evidence, but that evidence obtained with a § 813-a order need not be excluded, it can only have been assumed that the requirements applicable to the issuance of § 813-a orders were entirely consistent with the demands of the Fourth and Fourteenth Amendments. The legislature recognized the “hiatus” in its law created by Mapp, and wished to set its own “house . . . in order.” New York State Legislative Annual, supra, at 18. It plainly understood that the Amendments were applicable, and intended to adhere fully to their requirements. New York’s permissive eavesdropping statute must, for purposes of assessing its constitutional validity on its face, be read “as though” this judicial gloss had been “written into” it. Poulos v. New Hampshire, supra, at 402. I can only conclude that, so read, the statute incorporates as limitations upon its employment the requirements of the Fourth Amendment.

### III

The Court has frequently observed that the Fourth Amendment’s two clauses impose separate, although related, limitations upon searches and seizures; the first “is general, and forbids every search that is unreasonable,” Go-Bart Co. v. United States, 282 U. S. 344, 357; the second places a number of specific constraints upon the issuance and character of warrants. It would be inappropriate and fruitless to undertake now to set the perimeters of “reasonableness” with respect to eavesdropping orders in general; any limitations, for example, necessary upon the period over which eavesdropping may be conducted, or upon the use of intercepted information unconnected with the offenses for which the eavesdropping order was first issued, should properly be developed only through a case-by-case examination of the pertinent questions. 
It suffices here to emphasize that, in my view, electronic eavesdropping, as such or as it is permitted by this statute, is not an unreasonable search and seizure. At the least, reasonableness surely implies that this Court must not constrain in any grudging fashion the development of procedures, consistent with the Amendment’s essential purposes, by which methods of search and seizure unknown in 1789 may be appropriately controlled. It is instead obliged to permit, and indeed even to encourage, serious efforts to approach constructively the difficult problems created by electronic eavesdropping. In this situation, the Court should recognize and give weight to the State’s careful efforts to restrict the excessive or unauthorized employment of these devices. New York has provided that no use may be made of eavesdropping devices without a prior court order, and that such an order is obtainable only upon the application of state prosecutorial authorities or of policemen of suitable seniority. N.Y.Code Crim.Proc. § 813-a. Eavesdropping conducted without an order is punishable by imprisonment for as much as two years. N.Y.Pen.Law §§ 738, 740. Information obtained through impermissible eavesdropping may not be employed for any purpose in any civil or criminal action, proceeding, or hearing, except in the criminal prosecution of the unauthorized eavesdropper himself. N.Y.Civ.Prac. § 4506. These restrictions are calculated to prevent the “unbridled,” [Footnote 4/1] “unauthorized,” [Footnote 4/2] and “indiscriminate” [Footnote 4/3] electronic searches and seizures which members of this Court have frequently condemned. Surely the State’s efforts warrant at least a careful, and even sympathetic, examination of the fashion in which the state courts have construed these provisions, and in which they have applied them to the situation before us. 
I cannot, in any event, agree that the Fourth Amendment can properly be taken as a roadblock to the use, within appropriate limits, of law enforcement techniques necessary to keep abreast of modern-day criminal activity. The importance of these devices as a tool of effective law enforcement is impressively attested by the data marshalled in my Brother WHITE’s dissenting opinion. Post, p. 107.

### IV

I turn to what properly is the central issue in this case: the validity under the Warrants Clause of the Fourth Amendment of the eavesdropping order under which the recordings employed at petitioner’s trial were obtained. It is essential first to set out certain of the pertinent facts. The disputed recordings were made under the authority of a § 813-a order, dated June 12, 1962, permitting the installation of an eavesdropping device in the business office of one Harry Steinman; the order, in turn, was, so far as this record shows, issued solely upon the basis of information contained in affidavits submitted to the issuing judge by two assistant district attorneys. The first affidavit, signed by Assistant District Attorney Goldstein, indicated that the Rackets Bureau of the District Attorney’s Office of New York County was then conducting an investigation of alleged corruption in the State Liquor Authority, and that the Bureau had received information that persons desiring to obtain or retain liquor licenses were obliged to pay large sums to officials of the Authority. It described the methods by which the bribe money was transmitted through certain attorneys to the officials. The affidavit asserted that one Harry Neyer, a former employee of the Authority, served as a “conduit.” It indicated that evidence had been obtained, “over a duly authorized eavesdropping device installed in the office of the aforesaid Harry Neyer,” that conferences “relative to the payment of unlawful fees” occurred in Steinman’s office. 
The number and street address of the office were provided. The affidavit specified that the “evidence indicates that the said Harry Steinman has agreed to pay, through the aforesaid Harry Neyer, $30,000” in order to secure a license for the Palladium Ballroom, an establishment within New York City. The Palladium, it was noted, had been the subject of hearings before the Authority “because of narcotic arrests therein.” On the basis of this information, the affidavit sought an order to install a recording device in Steinman’s business office.
The second affidavit, signed by Assistant District Attorney Scotti, averred that Scotti, as the Chief of the Bureau to which Goldstein was assigned, had read Goldstein’s affidavit, and had concluded that the order might properly issue under § 813-a.
The order as issued permitted the recording of “any and all conversations, communications and discussions” in Steinman’s business office for a period of 60 days.
The central objections mounted to this order by petitioner, and repeated as to the statute itself by the Court, are three: first, that it fails to specify with adequate particularity the conversations to be seized; second, that it permits a general and indiscriminate search and seizure; and, third, that the order was issued without a showing of probable cause. [Footnote 4/4]
Each of the first two objections depends principally upon a problem of definition: the meaning in this context of the constitutional distinction between “search” and “seizure.” If listening alone completes a “seizure,” it would be virtually impossible for state authorities at a probable cause hearing to describe with particularity the seizures which would later be made during extended eavesdropping; correspondingly, seizures would unavoidably be made which lacked any sufficient nexus with the offenses for which the order was first issued. Cf. Kremen v. United States, 353 U. S. 346; Warden v. Hayden, 387 U. S. 294. There is no need for present purposes to explore at length the question’s subtleties; it suffices to indicate that, in my view, conversations are not “seized” either by eavesdropping alone or by their recording so that they may later be heard at the eavesdropper’s convenience. Just as some exercise of dominion, beyond mere perception, is necessary for the seizure of tangibles, so some use of the conversation beyond the initial listening process is required for the seizure of the spoken word. Cf. Lopez v. United States, 373 U. S. 427, 373 U. S. 459 (dissenting opinion); United States v. On Lee, 193 F.2d 306, 313-314 (dissenting opinion); District of Columbia v. Little, 85 U.S.App.D.C. 242, 247, 178 F.2d 13, 18, affirmed on other grounds, 339 U. S. 1. With this premise, I turn to these three objections.
The “particularity” demanded by the Fourth Amendment has never been thought by this Court to be reducible “to formula”; Oklahoma Press Pub. Co. v. Walling, 327 U. S. 186, 327 U. S. 209; it has instead been made plain that its measurement must take fully into account the character both of the materials to be seized and of the purposes of the seizures. Accordingly, where the materials “are books, and the basis for their seizure is the ideas which they contain,” the most “scrupulous exactitude” is demanded in the warrant’s description; Stanford v. Texas, 379 U. S. 476, 379 U. S. 485; see also Marcus v. Search Warrant, 367 U. S. 717; but where the special problems associated with the First Amendment are not involved, as they are not here, a more “reasonable particularity,” Brown v. United States, 276 U. S. 134, 276 U. S. 143; Consolidated Rendering Co. v. Vermont, 207 U. S. 541, 207 U. S. 554, is permissible.

The degree of particularity necessary is best measured by that requirement’s purposes. The central purpose of the particularity requirement is to leave “nothing . . . to the discretion of the officer executing the warrant,” Marron v. United States, 275 U. S. 192, 275 U. S. 196, by describing the materials to be seized with precision sufficient to prevent “the seizure of one thing under a warrant describing another.” Ibid. The state authorities are not compelled at the probable cause hearing to wager, upon penalty of a subsequent reversal, that they can successfully predict each of the characteristics of the materials which they will later seize, cf. Consolidated Rendering Co. v. Vermont, supra, at 207 U. S. 554; such a demand would, by discouraging the use of the judicial process, defeat the Amendment’s central purpose. United States v. Ventresca, 380 U. S. 102, 380 U. S. 108.
The materials to be seized are instead described with sufficient particularity if the warrant readily permits their identification both by those entrusted with the warrant’s execution and by the court in any subsequent judicial proceeding. “It is,” the Court has said with reference to the particularity of the place to be searched, “enough if the description is such that the officer . . . can with reasonable effort ascertain and identify” the warrant’s objects. Steele v. United States No. 1, 267 U. S. 498, 267 U. S. 503.
These standards must be equally applicable to the seizure of words, and, under them, this order did not lack the requisite particularity. The order here permitted the interception, or search, of any and all conversations occurring within the order’s time limitations at the specified location; but this direction must be read in light of the terms of the affidavits, which, under § 813-a, form part of the authority for the eavesdropping. The affidavits make plain that, among the intercepted conversations, the police were authorized to seize only those “relative to the payment of unlawful fees necessary to obtain liquor licenses.” These directions sufficed to provide a standard which left nothing in the choice of materials to be seized to the “whim,” Stanford v. Texas, supra, at 379 U. S. 485, of the state authorities. There could be no difficulty, either in the course of the search or in any subsequent judicial proceeding, in determining whether specific conversations were among those authorized for seizure by the order. The Fourth and Fourteenth Amendments do not demand more. Compare Kamisar, The Wiretapping-Eavesdropping Problem: A Professor’s View, 44 Minn.L.Rev. 891, 913.
Nor was the order invalid because it permitted the search of any and all conversations occurring at the specified location; if the requisite papers have identified the materials to be seized with sufficient particularity, as they did here, and if the search was confined to an appropriate area, the order is not invalidated by the examination of all within that area reasonably necessary for discovery of the materials to be seized. I do not doubt that searches by eavesdrop must be confined in time precisely as the search for tangibles is confined in space, but the actual duration of the intrusion here, or, for that matter, the total period authorized by the order, was not, given the character of the offenses involved, excessive. All the disputed evidence was obtained within 13 days, scarcely unreasonable in light of an alleged conspiracy involving many individuals and a lengthy series of transactions.
The question therefore remains only whether, as petitioner suggests, the order was issued without an adequate showing of probable cause. The standards for the measurement of probable cause have often been explicated in the opinions of this Court; see, e.g., United States v. Ventresca, 380 U. S. 102; it suffices now simply to emphasize that the information presented to the magistrate or commissioner must permit him to “judge for himself the persuasiveness of the facts relied on by a complaining officer.” Giordenello v. United States, 357 U. S. 480, 357 U. S. 486. The magistrate must “assess independently the probability” that the facts are as the complainant has alleged; id. at 357 U. S. 487; he may not “accept without question the complainant’s mere conclusion.” Id. at 357 U. S. 486.
As measured by the terms of the affidavits here, the issuing judge could properly have concluded that probable cause existed for the order. Unlike the situations in Nathanson v. United States, 290 U. S. 41, and Giordenello v. United States, supra, the judge was provided the evidence which supported the affiants’ conclusions; he was not compelled to rely merely on their “affirmation of suspicion and belief,” Nathanson v. United States, supra, at 290 U. S. 46. Compare Rugendorf v. United States, 376 U. S. 528; Aguilar v. Texas, 378 U. S. 108. In my opinion, taking the Steinman affidavits on their face, the constitutional requirements of probable cause were fully satisfied.
### V
It is, however, plain that the Steinman order was issued principally upon the basis of evidence obtained under the authority of the Neyer order; absent the Neyer eavesdropped evidence, the Steinman affidavits consist entirely of conclusory assertions, and they would, in my judgment, be insufficient. It is, therefore, also necessary to examine the Neyer order.
The threshold issue is whether petitioner has standing to challenge the validity under the Constitution of the Neyer order. Standing to challenge the constitutional validity of a search and seizure has been an issue of some difficulty and uncertainty; [Footnote 4/5] it has, nevertheless, hitherto been thought to hinge, not upon the use against the challenging party of evidence seized during the search, but instead upon whether the privacy of the challenging party’s premises or person has been invaded. Jones v. United States, 362 U. S. 257; Wong Sun v. United States, 371 U. S. 471. These cases centered upon searches conducted by federal authorities and challenged under Fed.Rule Crim.Proc. 41(e), but there is no reason now to suppose that any different standard is required by the Fourteenth Amendment for searches conducted by state officials. See generally Maguire, Evidence of Guilt 215-216 (1959).
The record before us does not indicate with precision what information was obtained under the Neyer order, but it appears, and petitioner does not otherwise assert, that petitioner was never present in Neyer’s office during the period in which eavesdropping was conducted. There is, moreover, no suggestion that petitioner had any property interest in the premises in which the eavesdropping device was installed. Apart from the use of evidence obtained under the Neyer order to justify issuance of the Steinman order, under which petitioner’s privacy was assuredly invaded, petitioner is linked with activities under the Neyer order only by one fleeting and ambiguous reference in the record.
In a pretrial hearing conducted on a motion to suppress the Steinman recordings, counsel for the State briefly described the materials obtained under the Neyer order. Counsel indicated that
“Mr. Neyer then has conversations with Mr. Steinman and other persons. In the course of some of these conversations, we have one-half of a telephone call, of several telephone calls between Mr. Neyer and a person he refers to on the telephone as Mr. Berger, and in the conversation with Mr. Berger, Mr. Neyer discusses also the obtaining of a liquor license for the Palladium and mentions the fact that this is going to be a big one.”
Counsel for petitioner responded, shortly after, that “I take it . . . that none of the subject matter to which [counsel for the State] has just adverted is any part of this case. . . .” Counsel for the State responded:
“That’s right, your Honor. I am not — I think evidence can be brought out during the trial that Berger, who Mr. Steinman, Mr. Neyer speaks to concerning the Palladium, is, in fact, the defendant Ralph Berger.”
However oblique this invasion of petitioner’s personal privacy might at first seem, it would entirely suffice, in my view, to afford petitioner standing to challenge the validity of the Neyer order. It is surely without significance in these circumstances that petitioner did not conduct the conversation from a position physically within the room in which the device was placed; the fortuitousness of his location can matter no more than if he had been present for a conference in Neyer’s office, but had not spoken, or had been seated beyond the limits of the device’s hearing. The central question should properly be whether his privacy has been violated by the search; it is enough for this purpose that he participated in a discussion into which the recording intruded. Standing should not, in any event, be made an insuperable barrier which unnecessarily deprives of an adequate remedy those whose rights have been abridged; to impose distinctions of excessive refinement upon the doctrine “would not comport with our justly proud claim of the procedural protections accorded to those charged with crime.” Jones v. United States, supra, at 362 U. S. 267. It would instead “permit a quibbling distinction to overturn a principle which was designed to protect a fundamental right.” United States v. Jeffers, 342 U. S. 48, 342 U. S. 52. I would conclude that, under the circumstances here, the recording of a portion of a telephone conversation to which petitioner was party would suffice to give him standing to challenge the validity under the Constitution of the Neyer order. [Footnote 4/6]
Given petitioner’s standing under federal law to challenge the validity of the Neyer order, I would conclude that such order was issued without an adequate showing of probable cause. It seems quite plain, from the facts described by the State, that, at the moment the Neyer order was sought, the Rackets Bureau indeed had ample information to justify the issuance of an eavesdropping order. Nonetheless, the affidavits presented at the Neyer hearing unaccountably contained only the most conclusory allegations of suspicion. The record before us is silent on whether additional information might have been orally presented to the issuing judge. [Footnote 4/7] Under these circumstances, I am impelled to the view that the judge lacked sufficient information to permit him to assess the circumstances as a “neutral and detached magistrate,” Johnson v. United States, 333 U. S. 10, 333 U. S. 14, and accordingly that the Neyer order was impermissible.
### VI
It does not follow, however, that evidence obtained under the Neyer order could not properly have been employed to support issuance of the Steinman order. The basic question here is the scope of the exclusionary rule fashioned in Weeks v. United States, 232 U. S. 383, and made applicable to state proceedings in Mapp v. Ohio, 367 U. S. 643. The Court determined in Weeks that the purposes of the Fourth Amendment could be fully vindicated only if materials seized in violation of its requirements were excluded from subsequent use against parties aggrieved by the seizure. Despite broader statements in certain of the cases, see, e.g., Silverthorne Lumber Co. v. United States, 251 U. S. 385, 251 U. S. 392, the situations for which the Weeks rule was devised, and to which it has since been applied, have uniformly involved misconduct by police or prosecutorial authorities. The rule’s purposes have thus been said to be both to discourage “disobedience to the Federal Constitution,” Mapp v. Ohio, supra, at 367 U. S. 657, and to avoid any possibility that the courts themselves might be “accomplices in the willful disobedience of a Constitution they are sworn to uphold.” Elkins v. United States, 364 U. S. 206, 364 U. S. 223. The Court has cautioned that the exclusionary rule was not intended to establish supervisory jurisdiction over the administration of state criminal justice, and that the States might still fashion “workable rules governing arrests, searches and seizures.” Ker v. California, 374 U. S. 23, 374 U. S. 34.
I find nothing in the terms or purposes of the rule which demands the invalidation, under the circumstances at issue here, of the Steinman order. The state authorities appeared, as the statute requires, before a judicial official, and held themselves ready to provide information to justify the issuance of an eavesdropping order. The necessary evidence was at hand, and there was apparently no reason for the State to have preferred that it not be given to the issuing judge. The Neyer order is thus invalid simply as a consequence of the judge’s willingness to act upon substantially less information than the Fourteenth Amendment obliged him to demand; correspondingly, the only “misconduct” that could be charged against the prosecution consists entirely of its failure to press additional evidence upon him. If the exclusionary rule were to be applied in this and similar situations, praiseworthy efforts of law enforcement authorities would be seriously, and quite unnecessarily, hampered; the evidence lawfully obtained under a lengthy series of valid warrants might, for example, be lost by the haste of a single magistrate. The rule applied in that manner would not encourage police officers to adhere to the requirements of the Constitution; it would simply deprive the State of evidence it has sought in accordance with those requirements.
I would hold that, where, as here, authorities have obtained a warrant in a judicial proceeding untainted by fraud, a second warrant issued on the authority of evidence gathered under the first is not invalidated by a subsequent finding that the first was issued without a showing of probable cause.
### VII
It follows that the Steinman order was, as a matter of constitutional requirement, validly issued, that the recordings obtained under it were properly admitted at petitioner’s trial, and, accordingly, that his conviction must be affirmed. [Footnote 4/8]
[Footnote 4/1] Hoffa v. United States, 385 U. S. 293, 385 U. S. 317 (dissenting opinion).

[Footnote 4/2] Silverman v. United States, 365 U. S. 505, 365 U. S. 510.

[Footnote 4/3] Lopez v. United States, 373 U. S. 427, 373 U. S. 441 (opinion concurring in result).
[Footnote 4/4] Two of petitioner’s other contentions are plainly foreclosed by recent opinions of this Court. His contention that eavesdropping unavoidably infringes the rule forbidding the seizure of “mere evidence” is precluded by Warden v. Hayden, 387 U. S. 294. His contention that eavesdropping violates his constitutional privilege against self-incrimination is answered by Osborn v. United States, 385 U. S. 323, and Hoffa v. United States, 385 U. S. 293.
[Footnote 4/5] See, e.g., Edwards, Standing to Suppress Unreasonably Seized Evidence, 47 Nw.U.L.Rev. 471; Comment, Standing to Object to an Unreasonable Search and Seizure, 34 U.Chi.L.Rev. 342; Recent Development, Search and Seizure: Admissibility of Illegally Acquired Evidence Against Third Parties, 66 Col.L.Rev. 400.
[Footnote 4/6] While, on this record, it cannot be said with entire assurance that the “Berger” mentioned in the Neyer eavesdropped conversation was this petitioner, I think it proper to proceed at this juncture on the basis that such is the case, leaving whatever questions of identity there may be to such state proceedings as, on the premises of this opinion, might subsequently eventuate in the state courts. See n. 8, infra.
[Footnote 4/7] The only additional reference in the record possibly pertinent to the content of the Neyer hearing is a conclusory assertion by counsel for the State in argument on the motion to suppress that the State had shown its evidence to the issuing judge. The reference is obscure, but its context suggests strongly that counsel meant only that the Steinman affidavits were adequate for purposes of probable cause.
[Footnote 4/8] Whether N.Y.Civ.Prac. § 4506, as amended to take effect July 1, 1962, some 18 days after the issuance of the Steinman order, would be deemed, under the premises of this opinion, to render inadmissible at Berger’s trial the evidence procured under it, is a matter for the state courts to decide. See People v. Cohen, 42 Misc.2d 403, 408, 409, 248 N.Y.S.2d 339, 344, 345; People v. Beshany, 43 Misc.2d 521, 532, 252 N.Y.S.2d 110, 121. Further state proceedings on that score would, of course, not be foreclosed under a disposition in accordance with this opinion.
MR. JUSTICE WHITE, dissenting.
With all due respect, I dissent from the majority’s decision which unjustifiably strikes down “on its face” a 1938 New York statute applied by state officials in securing petitioner’s conviction. In addition, I find no violation of petitioner’s constitutional rights, and I would affirm.
### I
At petitioner’s trial for conspiring to bribe the Chairman of the New York State Liquor Authority, the prosecution introduced tape recordings obtained through an eavesdrop of the office of Harry Steinman which had been authorized by court order pursuant to § 813-a, N.Y.Code Crim.Proc. Since Berger was rightfully in Steinman’s office when his conversations were recorded through the Steinman eavesdrop, he is entitled to have those recordings excluded at his trial if they were unconstitutionally obtained. Jones v. United States, 362 U. S. 257; Silverman v. United States, 365 U. S. 505. Petitioner vigorously argues that all judicially authorized eavesdropping violates Fourth Amendment rights, but his position is unsound.
Two of petitioner’s theories are easily answered. First, surreptitious electronic recording of conversations among private persons, and introduction of the recording during a criminal trial, do not violate the Fifth Amendment’s ban against compulsory self-incrimination, because the conversations are not the product of any official compulsion. Olmstead v. United States, 277 U. S. 438; Hoffa v. United States, 385 U. S. 293; Osborn v. United States, 385 U. S. 323. Second, our decision in Warden v. Hayden, 387 U. S. 294, answers petitioner’s contention that eavesdropping under § 813-a constitutes an unlawful search for “mere evidence”; whatever the limits of the search and seizure power may be under the Fourth Amendment, the oral evidence of a furtive bribery conspiracy sought in the application for the Steinman eavesdrop order was within the scope of proper police investigation into suspected criminal activity.
Petitioner primarily argues that eavesdropping is invalid, even pursuant to court order or search warrant, because it constitutes a “general search” barred by the Fourth Amendment. Petitioner suggests that the search is inherently overbroad because the eavesdropper will overhear conversations which do not relate to criminal activity. But the same is true of almost all searches of private property which the Fourth Amendment permits. In searching for seizable matters, the police must necessarily see or hear, and comprehend, items which do not relate to the purpose of the search. That this occurs, however, does not render the search invalid, so long as it is authorized by a suitable search warrant and so long as the police, in executing that warrant, limit themselves to searching for items which may constitutionally be seized. [Footnote 5/1] Thus, while I would agree with petitioner that individual searches of private property through surreptitious eavesdropping with a warrant must be carefully circumscribed to avoid excessive invasion of privacy and security, I cannot agree that all such intrusions are constitutionally impermissible general searches.
This case boils down, therefore, to the question of whether § 813-a was constitutionally applied in this case. At the outset, it is essential to note that the recordings of the Neyer office eavesdrop were not introduced at petitioner’s trial, nor was petitioner present during this electronic surveillance, nor were any of petitioner’s words recorded by that eavesdrop. The only links between the Neyer eavesdrop and petitioner’s conviction are (a) that evidence secured from the Neyer recordings was used in the Steinman affidavits, which in turn led to the Steinman eavesdrop where petitioner’s incriminating conversations were overheard, and (b) that the Neyer eavesdrop recorded what may have been [Footnote 5/2] the Neyer end of a telephone conversation between Neyer and Berger. In my opinion, it is clear that neither of these circumstances is enough to establish that Berger’s Fourth Amendment interests were invaded by the eavesdrop in Neyer’s office. Wong Sun v. United States, 371 U. S. 471; Jones v. United States, 362 U. S. 257. Thus, petitioner cannot secure reversal on the basis of the allegedly unconstitutional Neyer eavesdrop.
I turn to the circumstances surrounding the issuance of the one eavesdrop order which petitioner has “standing” to challenge. On June 11, 1962, Assistant District Attorney David Goldstein filed an affidavit before Judge Joseph Sarafite of the New York County Court of General Sessions requesting a court order under § 813-a authorizing the Steinman eavesdrop. Goldstein averred that the District Attorney’s office was investigating alleged corruption in the State Liquor Authority, that the office had obtained evidence of a conspiracy between Authority officials and private attorneys to extort large illegal payments from liquor license applicants, that a “duly authorized eavesdropping device” had previously been installed in the office of Neyer, who was suspected of acting as a conduit for the bribes, and that this device had obtained evidence
“that conferences relative to the payment of unlawful fees necessary to obtain liquor licenses occur in the office of one Harry Steinman, located in Room 801 at 15 East 48th Street, in the County, City and State of New York.”
The affidavit went on to describe Steinman at length as a prospective liquor license applicant and to relate evidence of a specific payoff which Steinman was likely to make, through Neyer, in the immediate future. On the basis of these facts, the affidavit concluded that
“there is reasonable ground to believe that evidence of crime may be obtained by overhearing and recording the conversations, communications and discussions that may take place in the office of Harry Steinman which is located in Room 801 at 15 East 48th Street,”
and requested an order authorizing an eavesdrop until August 11, 1962. An affidavit of Assistant District Attorney Alfred Scotti verified the information contained in the Goldstein affidavit. The record also indicates that the affidavits were supplemented by orally presenting to Judge Sarafite all of the evidence obtained from the Neyer eavesdrop. But assuming that the Steinman court order was issued on the affidavits alone, I am confident that those affidavits are sufficient under the Fourth Amendment.
Goldstein’s affidavit described with “particularity” what crime Goldstein believed was being committed; it requested authority to search one specific room; it described the principal object of the search — Steinman and his coconspirators — and the specific conversations which the affiant hoped to seize; it gave a precise time limit to the search, and it told the judge the manner in which the affiant had acquired his information. Petitioner argues that the reliability of the Neyer eavesdrop information was not adequately verified in the Steinman affidavit. But the Neyer eavesdrop need not be explained in detail in an application to the very judge who had authorized it just two months previously. Judge Sarafite had every reason to conclude that the Neyer eavesdrop was a reliable basis for suspecting a criminal conspiracy (consisting, as the recording did, of admissions by Steinman and other coconspirators) and that it was the source of the specific evidence recited in the Steinman affidavits.
“[A]ffidavits for search warrants, such as the one involved here, must be tested and interpreted by magistrates and courts in a common sense and realistic fashion,”
United States v. Ventresca, 380 U. S. 102, 380 U. S. 108. I conclude that the Steinman affidavits fully satisfied the Fourth Amendment requirements of probable cause and particularity in the issuance of search warrants.
The Court, however, seems irresistibly determined to strike down the New York statute. The majority criticizes the ex parte nature of § 813-a court orders, the lack of a requirement that “exigent circumstances” be shown, and the fact that one court order authorizes “a series or a continuous surveillance.” But where are such search warrant requirements to be found in the Fourth Amendment or in any prior case construing it? The Court appears intent upon creating out of whole cloth new constitutionally mandated warrant procedures carefully tailored to make eavesdrop warrants unobtainable. That is not a judicial function. The question here is whether this search complied with Fourth Amendment standards. There is no indication in this record that the District Attorney’s office seized and used conversations not described in the Goldstein affidavit, nor that officials continued the search after the time when they had gathered the evidence which they sought. Given the constitutional adequacy of the Goldstein affidavit in terms of Fourth Amendment requirements of probable cause and particularity, I conclude that both the search and seizure in Steinman’s office satisfied Fourth Amendment mandates. Regardless of how the Court would like eavesdropping legislation to read, our function ends in a state case with the determination of these questions.
### II
Unregulated use of electronic surveillance devices by law enforcement officials and by private parties poses a grave threat to the privacy and security of our citizens. As the majority recognizes, New York is one of a handful of States that have reacted to this threat by enacting legislation that limits official use of all such devices to situations where designated officers obtain judicial authorization to eavesdrop. Except in these States, there is a serious lack of comprehensive and sensible legislation in this field, a need that has been noted by many, including the President’s prestigious Commission on Law Enforcement and Administration of Justice (the “Crime Commission”) in its just-published reports. [Footnote 5/3] Bills have been introduced at this session of Congress to fill this legislative gap, and extensive hearings are in progress before the Subcommittee on Administrative Practice and Procedure of the Senate Committee on the Judiciary, and before Subcommittee No. 5 of the House Committee on the Judiciary.
At least three positions have been presented at these hearings. Opponents of eavesdropping and wiretapping argue that they are so “odious” an invasion of privacy that they should never be tolerated. The Justice Department, in advocating the Administration’s current position, asserts a more limited view; its bill would prohibit all wiretapping and eavesdropping by state and federal authorities except in cases involving the “national security,” and in addition would ban judicial use of evidence gathered even in national security cases. S. 928 and H.R. 5386, 90th Cong., 1st Sess. Advocates of a third position, who include many New York law enforcement personnel and others, agree that official eavesdropping and wiretapping must be stringently controlled, but argue that such methods are irreplaceable investigative tools which are needed for the enforcement of criminal laws and which can be adequately regulated through legislation such as New York’s § 813-a.
The grant of certiorari in this case has been widely noted, and our decision can be expected to have a substantial impact on the current legislative consideration of these issues. Today’s majority does not, in so many words, hold that all wiretapping and eavesdropping are constitutionally impermissible. But, by transparent indirection, it achieves practically the same result by striking down the New York statute and imposing a series of requirements for legalized electronic surveillance that will be almost impossible to satisfy.
In so doing, the Court ignores or discounts the need for wiretapping authority, and incredibly suggests that there has been no breakdown of federal law enforcement despite the unavailability of a federal statute legalizing electronic surveillance. The Court thereby impliedly disagrees with the carefully documented reports of the Crime Commission which, contrary to the Court’s intimations, underline the serious proportions of professional criminal activity in this country, the failure of current national and state efforts to eliminate it, and the need for a statute permitting carefully controlled official use of electronic surveillance, particularly in dealing with organized crime and official corruption. See Appendix A, infra; Report of the Crime Commission’s Task Force on Organized Crime 17-19, 80, 91-113 (1967). How the Court can feel itself so much better qualified than the Commission, which spent months on its study, to assess the needs of law enforcement is beyond my comprehension. We have only just decided that reasonableness of a search under the Fourth Amendment must be determined by weighing the invasions of Fourth Amendment interests which wiretapping and eavesdropping entail against the public need justifying such invasions. Camara v. Municipal Court, 387 U. S. 523; See v. City of Seattle, 387 U. S. 541. In these terms, it would seem imperative that the Court at least deal with facts of the real world. This the Court utterly fails to do. In my view, its opinion is wholly unresponsive to the test of reasonableness under the Fourth Amendment.
The Court also seeks support in the fact that the Federal Government does not now condone electronic eavesdropping. But here the Court is treading on treacherous ground. [Footnote 5/4] It is true that the Department of Justice has now disowned the relevant findings and recommendations of the Crime Commission, see Hearings on H.R. 5386 before Subcommittee No. 5 of the House Committee on the Judiciary, 90th Cong., 1st Sess., ser. 3, at 308 (1967) (hereafter cited as “House Hearings”), and that it has recommended to the Congress a bill which would impose broad prohibitions on wiretapping and eavesdropping. But although the Department’s communication to the Congress speaks of “exercis[ing] the full reach of our constitutional powers to outlaw electronic eavesdropping on private conversations,” [Footnote 5/5] the fact is, as I have already indicated, that the bill does nothing of the kind. Both H.R. 5386 and its counterpart in the Senate, S. 928, provide that the prohibitions in the bill shall not be deemed to apply to interceptions in national security cases. Apparently, under this legislation, the President, without court order, would be permitted to authorize wiretapping or eavesdropping
“to protect the Nation against actual or potential attack or other hostile acts of a foreign power or any other serious threat to the security of the United States, or to protect national security information against foreign intelligence activities.”
H.R. 5386 and S. 928, § 3.
There are several interesting aspects to this proposed national security exemption in light of the Court’s opinion. First, there is no limitation on the President’s power to delegate his authority, and it seems likely that at least the Attorney General would exercise it. House Hearings at 302. Second, the national security exception would reach cases like sabotage and investigations of organizations controlled by a foreign government. For example, wiretapping to prove an individual is a member of the Communist Party, it is said, would be permissible under the statute. House Hearings at 292. Third, information from authorized surveillance in the national security area would not be admissible in evidence; to the contrary, the surveillance would apparently be for investigative and informational use only, not for use in a criminal prosecution and not authorized because of any belief or suspicion that a crime is being committed or is about to be committed. House Hearings at 289. Fourth, the Department of Justice has recommended that the Congress not await this Court’s decision in the case now before us because, whether or not the Court upholds the New York statute, the power of Congress to enact the proposed legislation would not be affected. House Hearings at 308. But if electronic surveillance is a “general search,” or if it must be circumscribed in the manner the Court now suggests, how can surreptitious electronic surveillance of a suspected Communist or a suspected saboteur escape the strictures of the Fourth Amendment? It seems obvious from the Department of Justice bill that the present Administration believes that there are some purposes and uses of electronic surveillance which do not involve violations of the Fourth Amendment by the Executive Branch. 
Such being the case, even if the views of the Executive were to be the final answer in this case, the requirements imposed by the Court to constitutionalize wiretapping and eavesdropping are a far cry from the practice anticipated under the proposed federal legislation now before the Congress.
But I do not think the views of the Executive should be dispositive of the broader Fourth Amendment issues raised in this case. If the security of the National Government is a sufficient interest to render eavesdropping reasonable, on what tenable basis can a contrary conclusion be reached when a State asserts a purpose to prevent the corruption of its major officials, to protect the integrity of its fundamental processes, and to maintain itself as a viable institution? The serious threat which organized crime poses to our society has been frequently documented. The interrelation between organized crime and corruption of governmental officials is likewise well established, [Footnote 5/6] and the enormous difficulty of eradicating both forms of social cancer is proved by the persistence of the problems, if by nothing else. The Crime Commission has concluded that
“only in New York have law enforcement officials been able to mount a relatively continuous and relatively successful attack on an organized crime problem,”
that “electronic surveillance techniques . . . have been the tools” making possible such an attack, and that practice under New York’s § 813-a has achieved a proper balance between the interests of “privacy and justice.” Task Force Report at 95. And New York County District Attorney Frank S. Hogan, who has been on the job almost as long as any member of this Court, has said of the need for legislation similar to § 813-a:
“The judicially supervised system under which we operate has worked. It has served efficiently to protect the rights liberties, property, and general welfare of the law-abiding members of our community. It has permitted us to undertake major investigations of organized crime. Without it, and I confine myself to top figures in the underworld, my own office could not have convicted Charles ‘Lucky’ Luciano, Jimmy Hines, Louis ‘Lepke’ Buchalter, Jacob ‘Gurrah’ Shapiro, Joseph ‘Socks’ Lanza, George Scalise, Frank Erickson, John ‘Dio’ Dioguardi, and Frank Carbo. Joseph ‘Adonis’ Doto, who was tried in New Jersey, was convicted and deported on evidence supplied by our office and obtained by assiduously following leads secured through wiretapping.”
Hearings on S. 2813 before the Senate Committee on the Judiciary, 87th Cong., 2d Sess., at 173 (1962). To rebut such evidence of the reasonableness of regulated use of official eavesdropping, the Court presents only outdated statistics on the use of § 813-a in the organized crime and corruption arenas, the failure of the Congress thus far to enact similar legislation for federal law enforcement officials, and the blind hope that other “techniques and practices may well be developed that will operate just as speedily and certainly.” None of this is even remotely responsive to the question whether the use of eavesdropping techniques to unveil the debilitating corruption involved in this case was reasonable under the Fourth Amendment. At best, the Court puts forth an apologetic and grossly inadequate justification for frustrating New York law enforcement by invalidating § 813-a.
In any event, I do not consider this case a proper vehicle for resolving all of these broad constitutional and legislative issues raised by the problem of official use of wiretapping and eavesdropping. I would hold only that electronic surveillance was a reasonable investigative tool to apply in uncovering corruption among high state officials, compare Osborn v. United States, 385 U. S. 323, that the § 813-a court procedure as used in this case satisfied the Fourth Amendment’s search warrant requirements, and that New York officials limited themselves to a constitutionally permissible search and seizure of petitioner’s private conversations in executing that court order. Therefore, I would affirm.
APPENDIX TO OPINION OF MR. JUSTICE WHITE.
Excerpt from “The Challenge of Crime in a Free Society,” A Report by the President’s Commission on Law Enforcement and Administration of Justice, at 200-203 (1967).
A NATIONAL STRATEGY AGAINST
ORGANIZED CRIME
Law enforcement’s way of fighting organized crime has been primitive compared to organized crime’s way of operating. Law enforcement must use methods at least as efficient as organized crime’s. The public and law enforcement must make a full-scale commitment to destroy the power of organized crime groups. The Commission’s program indicates ways to implement that commitment.
PROOF OF CRIMINAL VIOLATION
The previous section has described the difficulties that law enforcement agencies meet in trying to prove the participation of organized crime family members in criminal acts. Although earlier studies indicated a need for new substantive criminal laws, the Commission believes that, on the Federal level and in most State jurisdictions where organized crime exists, the major problem relates to matters of proof, rather than inadequacy of substantive criminal laws, as the latter — for the most part — are reasonably adequate to deal with organized crime activity. The laws of conspiracy have provided an effective substantive tool with which to confront the criminal groups. From a legal standpoint, organized crime continues to grow because of defects in the evidence-gathering process. Under present procedures, too few witnesses have been produced to prove the link between criminal group members and the illicit activities that they sponsor.
Grand Juries. A compulsory process is necessary to obtain essential testimony or material. This is most readily accomplished by an investigative grand jury or an alternate mechanism through which the attendance of witnesses and production of books and records can be ordered. Such grand juries must stay in session long enough to allow for the unusually long time required to build an organized crime case. The possibility of arbitrary termination of a grand jury by supervisory judges constitutes a danger to successful completion of an investigation.
The Commission recommends:
At least one investigative grand jury should be impaneled annually in each jurisdiction that has major organized crime activity.
If a grand jury shows the court that its business is unfinished at the end of a normal term, the court should extend that term a reasonable time in order to allow the grand jury to complete pending investigations. Judicial dismissal of grand juries with unfinished business should be appealable by the prosecutor and provision made for suspension of such dismissal orders during the appeal.
The automatic convening of these grand juries would force less than diligent investigators and prosecutors to explain their inaction. The grand jury should also have recourse when not satisfied with such explanations.
The Commission recommends:
The grand jury should have the statutory right of appeal to an appropriate executive official, such as an attorney general or governor, to replace local prosecutors or investigators with special counsel or special investigators appointed only in relation to matters that they or the grand jury deem appropriate for investigation.
When a grand jury terminates, it should be permitted by law to file public reports regarding organized crime conditions in the community.
Immunity. A general immunity statute as proposed in chapter 5 on the courts is essential in organized crime investigations and prosecutions. There is evidence to indicate that the availability of immunity can overcome the wall of silence that so often defeats the efforts of law enforcement to obtain live witnesses in organized crime cases. Since the activities of criminal groups involve such a broad scope of criminal violations, immunity provisions covering this breadth of illicit actions are necessary to secure the testimony of uncooperative or criminally involved witnesses. Once granted immunity from prosecution based upon their testimony, such witnesses must testify before the grand jury and at trial, or face jail for contempt of court.
Federal, State, and local coordination of immunity grants, and approval by the jurisdiction’s chief law enforcement officer before immunity is granted, are crucial in organized crime investigations. Otherwise, without such coordination and approval, or through corruption of officials, one jurisdiction might grant immunity to someone about to be arrested or indicted in another jurisdiction.
The Commission recommends:
A general witness immunity statute should be enacted at Federal and State levels, providing immunity sufficiently broad to assure compulsion of testimony. Immunity should be granted only with the prior approval of the jurisdiction’s chief prosecuting officer. Efforts to coordinate Federal, State, and local immunity grants should be made to prevent interference with existing investigations.
Perjury. Many prosecutors believe that the incidence of perjury is higher in organized crime cases than in routine criminal matters. Immunity can be an effective prosecutive weapon only if the immunized witness then testifies truthfully. The present special proof requirements in perjury cases, detailed in chapter 5, inhibit prosecutors from seeking perjury indictments and lead to much lower conviction rates for perjury than for other crimes. Lessening of rigid proof requirements in perjury prosecutions would strengthen the deterrent value of perjury laws and present a greater incentive for truthful testimony.
The Commission recommends:
Congress and the States should abolish the rigid two-witness and direct-evidence rules in perjury prosecutions, but retain the requirement of proving an intentional false statement.
WIRETAPPING AND EAVESDROPPING
In connection with the problems of securing evidence against organized crime, the Commission considered issues relating to electronic surveillance, including wiretapping and “bugging” — the secret installation of mechanical devices at specific locations to receive and transmit conversations.
Significance to Law Enforcement. The great majority of law enforcement officials believe that the evidence necessary to bring criminal sanctions to bear consistently on the higher echelons of organized crime will not be obtained without the aid of electronic surveillance techniques. They maintain these techniques are indispensable to develop adequate strategic intelligence concerning organized crime, to set up specific investigations, to develop witnesses, to corroborate their testimony, and to serve as substitutes for them — each a necessary step in the evidence-gathering process in organized crime investigations and prosecutions.
As previously noted, the organizational structure and operational methods employed by organized crime have created unique problems for law enforcement. High-ranking organized crime figures are protected by layers of insulation from direct participation in criminal acts, and a rigid code of discipline inhibits the development of informants against them. A soldier in a family can complete his entire crime career without ever associating directly with his boss. Thus, he is unable, even if willing, to link the boss directly to any criminal activity in which he may have engaged for their mutual benefit. Agents and employees of an organized crime family, even when granted immunity from prosecution, cannot implicate the highest level figures, since frequently they have neither spoken to nor even seen them.
Members of the underworld, who have legitimate reason to fear that their meetings might be bugged or their telephones tapped, have continued to meet and to make relatively free use of the telephone — for communication is essential to the operation of any business enterprise. In legitimate business this is accomplished with written and oral exchanges. In organized crime enterprises, however, the possibility of loss or seizure of an incriminating document demands a minimum of written communication. Because of the varied character of organized criminal enterprises, the large numbers of persons employed in them, and frequently the distances separating elements of the organization, the telephone remains an essential vehicle for communication. While discussions of business matters are held on a face-to-face basis whenever possible, they are never conducted in the presence of strangers. Thus, the content of these conversations, including the planning of new illegal activity, and transmission of policy decisions or operating instructions for existing enterprises, cannot be detected. The extreme scrutiny to which potential members are subjected and the necessity for them to engage in criminal activity have precluded law enforcement infiltration of organized crime groups.
District Attorney Frank S. Hogan, whose New York County office has been acknowledged for over 27 years as one of the country’s most outstanding, has testified that electronic surveillance is:
"the single most valuable weapon in law enforcement's fight against organized crime . . . It has permitted us to undertake major investigations of organized crime. Without it, and I confine myself to top figures in the underworld, my own office could not have convicted Charles 'Lucky' Luciano, Jimmy Hines, Louis 'Lepke' Buchalter, Jacob 'Gurrah' Shapiro, Joseph 'Socks' Lanza, George Scalise, Frank Erickson, John 'Dio' Dioguardi, and Frank Carbo. . . ."
Over the years, New York has faced one of the Nation’s most aggravated organized crime problems. Only in New York have law enforcement officials achieved some level of continuous success in bringing prosecutions against organized crime. For over 20 years, New York has authorized wiretapping on court order. Since 1957, bugging has been similarly authorized. Wiretapping was the mainstay of the New York attack against organized crime until Federal court decisions intervened. Recently, chief reliance in some offices has been placed on bugging, where the information is to be used in court. Law enforcement officials believe that the successes achieved in some parts of the State are attributable primarily to a combination of dedicated and competent personnel and adequate legal tools, and that the failure to do more in New York has resulted primarily from the failure to commit additional resources of time and men. The debilitating effect of corruption, political influence, and incompetence, underscored by the New York State Commission of Investigation, must also be noted.
In New York at one time, Court supervision of law enforcement’s use of electronic surveillance was sometimes perfunctory, but the picture has changed substantially under the impact of pretrial adversary hearings on motions to suppress electronically seized evidence. Fifteen years ago, there was evidence of abuse by low-rank policemen. Legislative and administrative controls, however, have apparently been successful in curtailing its incidence.
The Threat to Privacy. In a democratic society, privacy of communication is essential if citizens are to think and act creatively and constructively. Fear or suspicion that one’s speech is being monitored by a stranger, even without the reality of such activity, can have a seriously inhibiting effect upon the willingness to voice critical and constructive ideas. When dissent from the popular view is discouraged, intellectual controversy is smothered, the process for testing new concepts and ideas is hindered and desirable change is slowed. External restraints, of which electronic surveillance is but one possibility, are thus repugnant to citizens of such a society.
Today, in addition to some law enforcement agents, numerous private persons are utilizing these techniques. They are employed to acquire evidence for domestic relations cases, to carry on industrial espionage and counterespionage, to assist in preparing for civil litigation, and for personnel investigations, among others. Technological advances have produced remarkably sophisticated devices, of which the electronic cocktail olive is illustrative, and continuing price reductions have expanded their markets. Nor has man’s ingenuity in the development of surveillance equipment been exhausted with the design and manufacture of electronic devices for wiretapping or for eavesdropping within buildings or vehicles. Parabolic microphones that pick up conversations held in the open at distances of hundreds of feet are available commercially, and some progress has been made toward utilizing the laser beam to pick up conversations within a room by focusing upon the glass of a convenient window. Progress in microminiaturizing electronic components has resulted in the production of equipment of extremely small size. Because it can detect what is said anywhere — not just on the telephone — bugging presents especially serious threats to privacy.
Detection of surveillance devices is difficult, particularly where an installation is accomplished by a skilled agent. Isolated instances where equipment is discovered in operation therefore do not adequately reflect the volume of such activity; the effectiveness of electronic surveillance depends in part upon investigators who do not discuss their activities. The current confusion over the legality of electronic surveillance compounds the assessment problem, since many agents feel their conduct may be held unlawful, and are unwilling to report their activities. It is presently impossible to estimate with any accuracy the volume of electronic surveillance conducted today. The Commission is impressed, however, with the opinions of knowledgeable persons that the incidence of electronic surveillance is already substantial, and increasing at a rapid rate.
Present Law and Practice. In 1928, the U.S. Supreme Court decided that evidence obtained by wiretapping a defendant’s telephone at a point outside the defendant’s premises was admissible in a Federal criminal prosecution. The Court found no unconstitutional search and seizure under the Fourth Amendment. Enactment of Section 605 of the Federal Communications Act in 1934 precluded interception and disclosure of wire communications. The Department of Justice has interpreted this section to permit interception so long as no disclosure of the content outside the Department is made. Thus, wiretapping may presently be conducted by a Federal agent, but the results may not be used in court. When police officers wiretap and disclose the information obtained, in accordance with State procedure, they are in violation of Federal law.
Law enforcement experience with bugging has been much more recent and more limited than the use of the traditional wiretap. The legal situation with respect to bugging is also different. The regulation of the national telephone communication network falls within recognized national powers, while legislation attempting to authorize the placing of electronic equipment even under a warrant system would break new and uncharted ground. At the present time, there is no Federal legislation explicitly dealing with bugging. Since the decision of the Supreme Court in Silverman v. United States, 365 U. S. 505 (1961), use of bugging equipment that involves an unauthorized physical entry into a constitutionally protected private area violates the Fourth Amendment, and evidence thus obtained is inadmissible. If eavesdropping is unaccompanied by such a trespass, or if the communication is recorded with the consent of one of the parties, no such prohibition applies.
The confusion that has arisen inhibits cooperation between State and Federal law enforcement agencies because of the fear that information secured in one investigation will legally pollute another. For example, in New York City prosecutors refuse to divulge the contents of wire communications intercepted pursuant to State court orders because of the Federal proscription, but do utilize evidence obtained by bugging pursuant to court order. In other sections of New York State, however, prosecutors continue to introduce both wiretapping and eavesdropping evidence at trial.
Despite the clear Federal prohibition against disclosure of wiretap information, no Federal prosecutions of State officers have been undertaken, although prosecutions of State officers under State laws have occurred.
One of the most serious consequences of the present state of the law is that private parties and some law enforcement officers are invading the privacy of many citizens without control from the courts and reasonable legislative standards. While the Federal prohibition is a partial deterrent against divulgence, it has no effect on interception, and the lack of prosecutive action against violators has substantially reduced respect for the law.
The present status of the law with respect to wiretapping and bugging is intolerable. It serves the interests neither of privacy nor of law enforcement. One way or the other, the present controversy with respect to electronic surveillance must be resolved.
The Commission recommends:
Congress should enact legislation dealing specifically with wiretapping and bugging.
All members of the Commission agree on the difficulty of striking the balance between law enforcement benefits from the use of electronic surveillance and the threat to privacy its use may entail. Further, striking this balance presents important constitutional questions now pending before the U.S. Supreme Court in People v. Berger, and any congressional action should await the outcome of that case.
All members of the Commission believe that, if authority to employ these techniques is granted, it must be granted only with stringent limitations. One form of detailed regulatory statute that has been suggested to the Commission is outlined in the appendix to the Commission’s organized crime task force volume. All private use of electronic surveillance should be placed under rigid control, or it should be outlawed.
A majority of the members of the Commission believe that legislation should be enacted granting carefully circumscribed authority for electronic surveillance to law enforcement officers to the extent it may be consistent with the decision of the Supreme Court in People v. Berger, and, further, that the availability of such specific authority would significantly reduce the incentive for, and the incidence of, improper electronic surveillance.
The other members of the Commission have serious doubts about the desirability of such authority, and believe that, without the kind of searching inquiry that would result from further congressional consideration of electronic surveillance, particularly of the problems of bugging, there is insufficient basis to strike this balance against the interests of privacy.
Matters affecting the national security not involving criminal prosecution are outside the Commission’s mandate, and nothing in this discussion is intended to affect the existing powers to protect that interest.
Recording an innocent conversation is no more a “seizure” than occurs when the policeman personally overhears conversation while conducting a search with a warrant.
Petitioner has not included a transcript of the Neyer recording in the record before this Court. In an oral statement during the hearing on petitioner’s motion to suppress eavesdrop evidence, the prosecutor stated:
“In the course of some of these conversations [recorded by the Neyer eavesdrop], we have one-half of a telephone call, of several telephone calls between Mr. Neyer and a person he refers to on the telephone as Mr. Berger, and in the conversation with Mr. Berger, Mr. Neyer discusses also the obtaining of a liquor license for the Palladium and mentions the fact that this is going to be a big one.”
R. at 27. Petitioner made no argument, and offered no evidence, at the suppression hearing that the alleged Neyer-Berger phone conversation provided the State with evidence that was used to secure the Steinman eavesdrop order.
The portion of the Crime Commission’s report dealing with wiretapping and eavesdropping is reproduced in Appendix A to this opinion. A more detailed explanation of why most Commission members favored legislation permitting controlled use of electronic surveillance for law enforcement purposes can be found in the Commission’s Task Force Report on Organized Crime, cited infra.
The Court should draw no support from the Solicitor General’s confession of error in recent cases, for they involved surreptitious eavesdropping by federal officers without judicial authorization. Such searches are clearly invalid because they violate the Fourth Amendment’s warrant requirements. Silverman v. United States, supra.
Letter from the Acting Attorney General to the Speaker of the House of Representatives submitting the Administration’s “Right of Privacy Act of 1967” (H.R. 5386), Feb. 8, 1967.
“All available data indicate that organized crime flourishes only where it has corrupted local officials. As the scope and variety of organized crime’s activities have expanded, its need to involve public officials at every level of local government has grown. And as government regulation expands into more and more areas of private and business activity, the power to corrupt likewise affords the corrupter more control over matters affecting the everyday life of each citizen.”
# [texhax] problematic slide with beamer
Sat Oct 1 11:39:44 CEST 2005
Christopher W. Ryan :
> I've wracked my brain trying to troubleshoot this presentation, without
> success. I wonder if anyone can find the error(s) that I can't.
>
> This is one frame of about 30 in my presentation. I'm using beamer. I've
> tried it at work in Win98, using TeXniCenter, and TextPad as the editors, and
> MikTex. I've tried it at home on a commercialized version of debian linux,
> using Kile. It won't compile in either environment.
>
> The presentation uses a number of hyperlinks back and forth between sections.
>
> I understand that it may be impossible to troubleshoot on the List, without
> consideration.
>
> --Chris
>
I didn't have the outlines package, but this now compiles on my system
\documentclass[notes=show]{beamer}
\usetheme{PaloAlto}
%\usepackage[tab,width=0.8in]
\usecolortheme{sidebartab}
\usepackage{graphicx}
%\usepackage{outlines}
\begin{document}
\begin{frame}
\frametitle{Categories of diagnoses}
\hypertarget{KirkOrigin}{}
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|c|}
 & Morrison & Sugarman & Lane &
\hyperlink{Kirk1990}{\beamergotobutton{Kirk}} \\ % target name garbled in the archive; "Kirk1990" is a guess
 & 1980 & 1984 & 1990 & 1990 \\
\hline
``Physical'' & 39\% & 22\% & 8\% & 63\% \\
``Psychological'' & 41\% & 50\% & & 37\% \\
``Mixed'' & 12\% & & & \\
``Undetermined'' & 8\% & 28\% & &
\end{tabular}
\end{table}
\note{Making this kind of distinction is not usually productive.
Fatigue is truly a biopsychosocial symptom, requiring simultaneous
evaluation of all three of those spheres of life.
If a long and expensive and invasive work-up to ``rule out'' physical
causes of fatigue produces nothing, and the patient is told it must be
``psychological,'' it leaves a very bad taste \dots Better to acknowledge
all the potential diagnoses ``up front'' \ldots
Kirk's ``dropouts'' were significantly more likely to be depressed, and
significantly less likely to be married or employed. This probably led
to a systematic exclusion from these final numbers of those with fatigue
on a ``psychosocial'' basis.}
\end{frame}
\end{document}
--
/daleif
``You cannot help men permanently by doing for them
what they could and should do for themselves.''
-- Abraham Lincoln
Trigonometrytriangle ratio question
DeusAbscondus
Active member
(Plse bear with me: until i learn how to fly this thing, / must stand for radical)
If the side ratio for a 30:40:90 deg right triangle are 1 : /3 : 2
then, is the following true:
one may multiply this ratio by 1,2,3 or 5,6,7,8,9 or 10 and the pythagorean identity obtains but NOT by 4
If so, why not?
Thx,
Godfree
CaptainBlack
Well-known member
(Plse bear with me: until i learn how to fly this thing, / must stand for radical)
If the side ratio for a 30:40:90 deg right triangle are 1 : /3 : 2
then, is the following true:
one may multiply this ratio by 1,2,3 or 5,6,7,8,9 or 10 and the pythagorean identity obtains but NOT by 4
If so, why not?
Thx,
Godfree
A right triangle cannot have 30 and 40 degrees for its other two angles; in fact, if the side ratios of a triangle are $$1,\ \sqrt{3},\ 2$$, then it is a 30, 60, 90 degree triangle.
CB
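Once the angles are fixed to 30, 60, 90, the scaling claim checks out for every multiplier, 4 included: scaling all three sides by $$k$$ gives $$k^2 + 3k^2 = 4k^2 = (2k)^2$$. A quick numerical sketch of that:

```python
import math

def satisfies_pythagoras(a, b, c):
    # True when a^2 + b^2 = c^2 (c the hypotenuse), up to floating-point rounding.
    return math.isclose(a * a + b * b, c * c, rel_tol=1e-12)

# Scale the 30-60-90 side ratio 1 : sqrt(3) : 2 by k = 1..10 -- including k = 4.
# The identity holds for every k, since (k*1)^2 + (k*sqrt(3))^2 = (k*2)^2.
for k in range(1, 11):
    assert satisfies_pythagoras(k * 1, k * math.sqrt(3), k * 2)
print("identity holds for k = 1..10")
```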
DeusAbscondus
Active member
Thanks, Cap'n; it was an arithmetic error, that was all....
I'll be more careful before posting next time.... sheeesh, i wasted 4 hours looking at this today, and kept making the same tiny error in my math....
Anyway, i heartily concur with Epicurius' sentiments and, by inference, your core values: i find a lot in common with non-believers, with atheists actually (why be coy) but I'm constantly amazed at how people (like my teacher) can do higher maths and still believe in invisible friends in the sky, and hold a young earth model in the same brain. Enough off-topic.
Thanks again,
|
{}
|
# Homework Help: Second partial derivative wrt x
1. Jul 15, 2014
### jonroberts74
I just need some clarification that this is fine
so I have
$$f_{x} = -2xe^{-x^2-y^2}cos(xy) -ysin(xy)e^{-x^2-y^2}$$
now, taking the second derivative
$$f_{xx} = [-2xe^{-x^2-y^2}+4x^2e^{-x^2-y^2}]cos(xy) - ysin(xy)[-2xe^{-x^2-y^2}]+2xe^{-x^2-y^2}sin(xy)y-cos(xy)e^{-x^2-y^2}y^2$$
2. Jul 15, 2014
The very first $x$ in the first term shouldn't be there. Otherwise it looks fine.
3. Jul 15, 2014
### jonroberts74
oh yeah, on my paper I didnt have it but I typed it into this
thanks!
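The first derivative from the opening post can be cross-checked numerically against a central difference (the sample point and step size below are chosen arbitrarily):

```python
import math

def f(x, y):
    return math.exp(-x*x - y*y) * math.cos(x*y)

def fx(x, y):
    # f_x from the thread: -2x e^{-x^2-y^2} cos(xy) - y sin(xy) e^{-x^2-y^2}
    e = math.exp(-x*x - y*y)
    return -2*x*e*math.cos(x*y) - y*math.sin(x*y)*e

x, y, h = 0.7, 0.3, 1e-6
numeric = (f(x + h, y) - f(x - h, y)) / (2*h)
assert abs(numeric - fx(x, y)) < 1e-8
```

The same comparison applied to the posted f_xx would expose the stray factor of x that Dick points out.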
|
{}
|
# Clearcase snapshot view path's not resolving when comparing
We use Clearcase at my work and I have several snapshot views setup (on Windows XP). The views themselves seem to work great, however whenever I try to compare any versions of any elements from my snapshot view, I have problems with my diff tool (currently Beyond Compare). Specifically, if I'm comparing with previous, I see the current version great, but the previous version never shows up in the diff tool.
I've looked into the problem a bit and looking at the command line that is getting passed into the diff tool, CC is passing in a bad path to the file. The path to the file that is not working looks something like this:
//server/path/to/viewstorage.vws/....
The problem appears to be in the //server used to access the SMB share where the file is found.
Where is CC getting this bad path from? Is this something specific to how my snapshot view is setup (this worked for a long time and still works on some of my co-worker's machines)? Is there any way to change this path to the typical \\server that Windows expects?
## Update:
Ok, so my original question was written from home, and wasn't entirely accurate. The actual path is more like this:
//server/path/to/vobstorage.vbs\....
To answer @koslorr question, the global path for the view is correct (the view is actually stored on a public share on my machine), however doing the similar command for the vob (cleartool lsvob -l /my-working-vob) does show that the global path for the VOB is incorrect. Can this be updated in a similar way to the view tag? Is this something my CC admin is going to need to do?
-
Check your view Global Path with
cleartool lsview -l <VIEW TAG>
Is the Global Path in the correct \\server.... form?
If it is not, then you can use
cleartool mktag -view <VIEW TAG> -replace...
to change it to the correct form.
cleartool man mktag should tell you more in details.
-
What a cleartool diff -pred myFile gives you when executed in your snapshot view path?
cd c:\path\to\my\snapshotView\myVob\path_to_myFile
cleartool diff -pred myFile
If the global path is incorrect, it can be because of:
-
|
{}
|
# Auditing
HDS welcomes members of the public to audit courses. In order to audit a course, one must:
1. Obtain the course instructor's permission, to be confirmed to the HDS registrar via the professor’s signature or email.
2. Fill out an audit application at the Office of the Registrar (application fee of $50; checks to be made out to Harvard University).
3. Pay the auditor's fee ($550 per one-semester course; checks made out to Harvard University) at the Registrar’s Office.
The level of participation required of auditors is determined by the individual faculty member or instructor. Ordinarily, auditors are not admitted to courses that require a great deal of reviewed work, such as language courses. The final decision on this matter, however, rests with the instructor.
No credit for this work is given, nor are transcripts issued. Arrangements to audit must be made no later than the last day to drop/add classes each semester (see the Academic Calendar for dates).
The auditing process is administered through the Registrar's Office, which can be contacted at registrar@hds.harvard.edu or 617.495.5760.
## HDS Voices
When I came to HDS I worried about how I would fit in as an atheist and Humanist. I found that this is a place of incredible tolerance and interest in stepping beyond the familiar. I gained a deeper appreciation for people with views radically different from my own, and helped others to do the same.
—Alexander Ramos, MTS '11
|
{}
|
# Find conformal mapping from sector to unit disc
Find a conformal mapping between the sector $\{z\in\mathbb{C} : -\pi/4<\arg(z) <\pi/4\}$ and the open unit disc $D$.
I know that it should be a Möbius transformation, but other than that I am very stuck, any help would be much appreciated.
-
Remember that Mobius transformations take circles to circles and lines to lines. Since the boundary of the sector is neither a line nor a circle, Mobius transformations on their own can't possible get you there. – Brett Frankel Feb 8 '13 at 15:22
Here is a plan: first, apply $z \to z^2$. It will conformally map your sector onto the half-plane $\mathrm{Re}(z) > 0$. Then find a Möbius transformation that will map this half-plane to the unit disk.
-
Thanks, I didn't even consider composing it with another holomorphic map! – user61496 Feb 8 '13 at 15:08
You know that there is a conformal mapping from the unit disk to the upper half plane given by: $$z\mapsto -i\frac{z-i}{z+i}$$
[figure: the unit disk mapped onto the upper half plane]
But then you know that the transformation $z\mapsto \sqrt z$, taking the principal value, sends the upper half plane to the region you are desiring:
[figure: the upper half plane mapped onto the first quadrant]
Reversing these mappings gives:
$$w \mapsto \frac{iw^2+1}{-w^2-i}$$
Which you will see is a conformal mapping sending the first quadrant to the unit disk.
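A quick numerical spot-check that this composite map sends points of the open first quadrant into the unit disk (the function name below is just illustrative):

```python
def to_disk(w):
    # composite map from the answer: w -> (i w^2 + 1) / (-w^2 - i)
    return (1j * w * w + 1) / (-w * w - 1j)

# a few sample points in the open first quadrant
for w in (0.5 + 0.5j, 1 + 2j, 3 + 0.1j):
    assert abs(to_disk(w)) < 1
```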
-
|
{}
|
2-Dimensional Geometries
In General > s.a. 2D gravity; 2D manifolds; Geometric Topology.
* Result: All metrics are conformally flat, i.e., they can be locally written as
ds² = ± Ω²(α, β) (dα² ± dβ²) ,
where α and β are conjugate harmonic coordinates.
* Riemannian: There are 3 different kinds of geometry; Given any closed 2-manifold, it can be given a (unique) metric such that we get one of the following:
R > 0: elliptic (S2);
R = 0: parabolic (T2);
R < 0: hyperbolic (all orientable surfaces of higher genus).
* Lorentzian: The only compact 2-manifolds which admit a metric of signature (−, +) are the 2-torus T2 and the Klein bottle (thus, e.g., S2 does not admit a Lorentzian metric).
* Curvature: The Einstein tensor vanishes identically, thus ∫M R dv can only be a topological term plus a surface term (> see the gauss-bonnet theorem for the positive-definite case); The Riemann tensor is given by
$R_{abcd} = R\, g_{a[c}\, g_{d]b}$ .
* Gaussian curvature: For a surface z = V(x, y) in $$\mathbb R$$3,
$K = (V_{xx} V_{yy} - V_{xy}^2) \,/\, (1 + V_x^2 + V_y^2)^2$ .
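A quick numerical sanity check of the graph-curvature formula K = (V_xx V_yy − V_xy²)/(1 + V_x² + V_y²)², evaluated on the unit hemisphere (where K = 1 everywhere) with finite differences; this is a sketch, not part of the reference text:

```python
import math

def V(x, y):
    # upper unit hemisphere as a graph z = V(x, y)
    return math.sqrt(1 - x*x - y*y)

def K(x, y, h=1e-5):
    # Gaussian curvature via second-order finite differences
    Vx  = (V(x+h, y) - V(x-h, y)) / (2*h)
    Vy  = (V(x, y+h) - V(x, y-h)) / (2*h)
    Vxx = (V(x+h, y) - 2*V(x, y) + V(x-h, y)) / h**2
    Vyy = (V(x, y+h) - 2*V(x, y) + V(x, y-h)) / h**2
    Vxy = (V(x+h, y+h) - V(x+h, y-h) - V(x-h, y+h) + V(x-h, y-h)) / (4*h**2)
    return (Vxx*Vyy - Vxy**2) / (1 + Vx**2 + Vy**2)**2

assert abs(K(0.1, 0.2) - 1) < 1e-3   # unit sphere has K = 1
```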
@ Lorentzian: Vatandoost & Bahrampour JMP(12) [necessary and sufficient conditions for admitting a continuous sphere order representation]; Kim JGP(15)-a1501 [embeddings into the 2D Einstein universe]; Kim JGP(15)-a1501 [conformal diffeomorphisms and causal automorphisms].
Special Metrics > s.a. Zollfrei Metric.
* Constant curvature: In genus 0, the sphere S2; In genus 1, the flat torus T2; In genus 2, the double torus, which can be unfolded into an octagon in its universal covering, the hyperbolic space H2.
* Circular symmetry: In the Riemannian case, the metric can be written as
ds² = dχ² + f²(χ) dφ² ,
and the only non-vanishing connection coefficients in these coordinates are
Γ¹₂₂ = −f f′ , Γ²₁₂ = Γ²₂₁ = f⁻¹ f′ .
* Darboux spaces: Two-dimensional spaces of non-constant curvature.
@ References: Kramer & Lorente JPA(02)gq/04 [double torus];
Gallo JMP(04)gq [from second-order differential equations]; Grosche PPN(06)qp/04 [path integrals on Darboux spaces]; Bertotti et al m.HO/05-proc [constant negative Gaussian curvature].
|
{}
|
# Recommendation System in R
#### by yhat
##### June 19, 2013
Recommender systems are used to predict the best products to offer to customers. These babies have become extremely popular in virtually every single industry, helping customers find products they'll like. Most people are familiar with the idea, but nearly everyone is exposed to several forms of personalized offers and recommendations each day (Google search ads being among the biggest source).
Building recommendation systems is part science, part art, and many have become extremely sophisticated. Such a system might seem daunting for those uninitiated, but it's actually fairly straightforward to get started if you're using the right tools.
This is a post about building recommender systems in R.
UPDATE: We used the beer / product recommender for a talk at PyData Boston in July.
IPython notebook here: http://bit.ly/1chuxRT.
### Beer Dataset
"Respect Beer." - BeerAdvocate.com
For this example, we'll use data from Beer Advocate, a community of beer enthusiasts and industry professionals dedicated to supporting and promoting beer. The data is made available to us via Stanford's web data library. It consists of ~1.5 million reviews posted on BeerAdvocate from 1999 to 2011.
Each record is composed of a beer's name, brewery, and metadata like style and ABV etc., along with ratings provided by reviewers. Beers are graded on appearance, aroma, palate, and taste plus users provide an "overall" grade. All ratings are on a scale from 1 to 5 with 5 being the best.
In addition to these numerical ratings, users are required to write a short paragraph of 250 to 5,000 characters describing their overall impressions. While the text does provide some excellent opportunities for analysis, we're going to focus only on the ratings for this post. You can read more about their rating system here.
### Formatting the Data
This part always takes longer than you'd like, but luckily the beer dataset is pretty clean.
Not that many nulls, and the text fields are free of strange byte characters (those always throw me off). One thing that's a little different is that the data is laid out row-wise instead of column-wise.
Records are delimited by newlines and have one key/value pair per line. I wrote a short Python script to handle parsing which leaves us with a nicely formatted .csv file.
Since I'm working with a lot of data, I decided to throw it into a database.
I'm working with Postgres, but any relational database will do the trick.
### Getting the Breweries
One unfortunate part about the dataset is that it only includes a brewerid and no lookup table for the ids. These correspond with pages on the brewery profiles. For example, the Sierra Nevada Brewing Co. has a brewerid of 140 and their page on beer advocate is /profile/140.
In any case, what we really need is the brewery name associated with each id, which means doing a little web scraping. I really didn't want to get into installing any Postgres programming clients (psycopg2 for Python), so I wrote a short bash script to grab the brewery ids.
It's far from ideal, but it's short, simple, and it works. You can skip this part if you prefer and just download the data here.
### Loading it into R
We've got everything in a database. Nice!
We're going to use the excellent RPostgreSQL driver which makes it super easy to query Postgres from R. You'll notice we've got a little sub-query action going on here. All our sub-query is doing is grabbing all beers with 500+ reviews.
Let's take a peek at the data using the head command. As you can see, we've got a few Colorado Kool-Aids at the top of the heap (not surprising, it's a pretty popular beer).
You can see we've got the id, name, and brewery of each beer and the associated review data provided by a given user as denoted by the review_profilename column.
### Finding Similarities
The goal for our system will be for a user to provide us with a beer that they know and love, and for us to recommend a new beer which they might like. To accomplish this, we're going to use collaborative filtering: we'll compare 2 beers by the ratings submitted by their common reviewers. When a user writes similar reviews for two beers, we'll consider those two beers to be more similar to one another.
We'll need a function which takes two beers and returns their mutual reviewers (or sameset). To do this, we'll use the intersect function in R which finds common elements between two lists or vectors.
I wrote two functions: common_reviewers_by_id to extract the sameset given two beer_ids, and common_reviewers_by_name to extract the samesets given two beer_names. For programming purposes it's easier to use common_reviewers_by_id, but for testing and spot checking, common_reviewers_by_name is handy.
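The post's helpers are written in R; the same sameset idea can be sketched in Python with set intersection (the data shape below is hypothetical):

```python
def common_reviewers(reviews_by_beer, beer_a, beer_b):
    # reviews_by_beer: dict mapping beer name -> iterable of reviewer usernames
    # (illustrative shape; the R version also works on beer ids)
    return set(reviews_by_beer[beer_a]) & set(reviews_by_beer[beer_b])

reviews = {
    "Fat Tire": ["alice", "bob", "carol"],
    "Dale's Pale Ale": ["bob", "carol", "dave"],
}
sameset = common_reviewers(reviews, "Fat Tire", "Dale's Pale Ale")
assert sameset == {"bob", "carol"}
```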
Next we need a function to extract features for a given beer. Features, in this case, are the 1 to 5 numerical ratings provided by users as part of each beer's review.
Two things probably stick out in this function. (1) We're sorting the data by the reviewers username. This is so that when we extract features for say, Coors Light and Founders Double Trouble, the reviews in indicies 0, 1, 2, ..., N correspond with reviews made by the same users.
(2) We're de-duplicating the reviews based on profile name. There are a few instances of users reviewing the same beer twice. Since we want the review data across beers to be aligned, we're just going to throw out any instances of multiple reviews by a user for the same beer.
Given two beers, we look at the similarity between how reviewers clocked-in with each of the 1 to 5 ratings.
To give you a visual, take a look at the charts below.
Users who like Fat Tire tended to not like Michelob Ultra as much.
The x-y coordinates correspond with how users rated each of the two beers. For example, a person who rated Fat Tire a 4.5 overall and Michelob Ultra a 2.5 overall appears as a point found at (4.5, 2.5) in the top left quadrant of the first graphic above. The size of the dots corresponds to the number of reviewers that wound up in a given bucket.
Users tend to rate Fat Tire higher than Michelob Ultra, as illustrated by the majority of points found below the center line.
However, when we compare Fat Tire to Dale's Pale Ale, we get a different story. We see that reviewers tended to rate both more or less consistently. Points are closer to the center line than those found in the Fat Tire-Michelob comparison. Intuitively, this suggests that it would be better to recommend Dale's Pale Ale to someone who likes Fat Tire than to someone who likes Michelob Ultra.
### Quantifying Our Beliefs
I don't need a statistical model to tell me that someone who likes Fat Tire is probably going to like Dale's Pale Ale more than Michelob Ultra. But what about picking between Dale's Pale Ale and Sierra Nevada Pale Ale? Things get a little more complicated. For this reason (and because we don't want to manually select between each beer pair), we're going to write a distance function that will quantify similarity.
For our similarity metric we're going to use a weighted average of the correlation of each metric. In other words, for each two-beer-pair we calculate the correlation of review_overall, review_aroma, review_palate, and review_taste separately. Then we take a weighted average each result to consolidate them into one number.
We're going to weight review_overall with 2 and the remainder will have a weight of 1. This gives review_overall 40% of the score (NOTE: this is totally arbitrary, you can use whatever weighting function you want. A lot of times the simplest stuff works the best in my experience).
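The post implements this metric in R; as a language-neutral illustration, here is a Python sketch of the weighted average of per-feature correlations described above (the names and toy data are illustrative):

```python
import statistics

def corr(xs, ys):
    # Pearson correlation of two aligned rating vectors
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

def similarity(b1, b2):
    # weighted average of per-feature correlations; review_overall gets weight 2,
    # i.e. 40% of the score, matching the post's weighting
    weights = {"review_overall": 2, "review_aroma": 1,
               "review_palate": 1, "review_taste": 1}
    total = sum(weights.values())
    return sum(w * corr(b1[f], b2[f]) for f, w in weights.items()) / total

beer = {"review_overall": [4, 5, 3], "review_aroma": [4, 5, 3],
        "review_palate": [5, 4, 3], "review_taste": [3, 5, 4]}
assert abs(similarity(beer, beer) - 1) < 1e-9   # a beer is maximally similar to itself
```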
### Computing Similarity Across All 2-Beer-Pairs
To keep things simple, we're only going to compare the 20 most commonly reviewed beers in the example code. This will give us enough data to make sure everything is working as expected, but it's still a small enough sample size that it won't take too long to compute.
The first thing we do is define the 20 beers we want to use. Then we use expand.grid to create all of the combinations between the beers. Finally we remove any self-to-self comparisons (if you like Dale's Pale Ale, it won't help you very much if we recommend Dale's Pale Ale). We're then going to use ddply to do a map/reduce style calculation on the data. Note that it's possible to parallelize ddply. Although we're not doing it here, in an upcoming post I'll show you how to run ddply in parallel using EC2.
I wrote a short helper function find_similar_beers that accepts a beer you like and optionally a number of suggested beers and a desired style, and returns the most similar beers in a nice format.
### Deploying to Yhat
Deploying this particular model was really easy. I just wrapped my find_similar_beers function in the yhat.predict function, added my apikey, and that was it. I didn't even need to use the yhat.require or yhat.transform functions.
### Getting Your Recommendations
To make recommendations on the web, I wrote a quick app with Heroku and Flask that consumes the Yhat API. You can see some of that javascript below, or you can check out the standalone app here.
### Final Thoughts
A great resource for building recommender systems is Programming Collective Intelligence by Toby Segaran. The book is a few years old, but it's a phenomenal introduction to some of the basics in machine learning. Chapter 2 gives a great overview of recommendation systems and how you can use them. Another good read is Machine Learning for Hackers by Drew Conway and John Myles White. Check out chapter 10 for recommender systems.
|
{}
|
[–] 25 points (0 children)
If I were writing games for the Atari 2600, this would be hella useful.
[–] 2 points (4 children)
first saw this trick years back in an O'Reilly C++ book but never quite understood why it worked, it's good to see more of the mechanism behind it.
[–] 7 points (2 children)
I hope that it was in there as a "this is cool, don't ever use it" thing. Just to demonstrate, here's two versions of the function, in C++ and object code (compiled with g++ -O3):
Source:
void good_swap(int& a, int& b) {
int temp = a;
a = b;
b = temp;
}
void dumb_swap(int& a, int& b) {
a ^= b;
b ^= a;
a ^= b;
}
Disassembly:
good_swap:
push %ebx
mov 0x8(%esp),%edx
mov 0xc(%esp),%eax
mov (%edx),%ecx
mov (%eax),%ebx
mov %ebx,(%edx)
mov %ecx,(%eax)
pop %ebx
ret
dumb_swap:
mov 0x4(%esp),%eax
mov 0x8(%esp),%ecx
mov (%ecx),%edx
xor (%eax),%edx
mov %edx,(%eax)
xor (%ecx),%edx
mov %edx,(%ecx)
xor %edx,(%eax)
ret
Each function loads a pointer to each argument in the first 2 lines. Then the first version has 2 moves from memory to registers and 2 moves back. The second version has 3 moves to/from memory, along with 3 xors that all access the memory. Good_swap also needs to put one of the registers into memory to get more scratch work, but that's only because I compiled it as a 32-bit application. It also wouldn't have to do that if it was in the middle of the code, since I passed in the arguments by pointers so it needs to hold onto the two pointers. The amount of extra memory that the first version takes up is 2 bytes more, since it's 2 bytes smaller code and uses 4 bytes of the stack.
[–] 1 point (0 children)
The function will probably be inlined and optimized even further. The problem with the XOR variant is that the compiler does not know if a and b refer to the same variable, and thus cannot optimize to do all in registers. The restrict keyword takes care of this:
void dumb_swap2(int *restrict a, int *restrict b) {
*a ^= *b;
*b ^= *a;
*a ^= *b;
}
[–] 0 points (0 children)
The OP mentioned in the original thread that this was just for educational purposes and suggested that nobody actually write code like this, so yeah it was just for show :P
[–] 1 point (3 children)
Don't try the addition/subtraction one with floating point numbers, though; it could trash your precision.
[–] Theory of Computing 1 point (4 children)
Therefore every problem of the type “swap two variables without using a temp variable” can be solved using a sufficiently large Sudoku.
How would one go about swapping three variables, then?
[–] 0 points (3 children)
1) You can reuse an algorithm to swap two variables to permute n variables.
2) If you wanted to have variables a b c end up with the original contents of c a b, say, you could do:
a = a ^ b ^ c
b = a ^ b ^ c
c = a ^ b ^ c
a = a ^ b ^ c
Wonder if there's a quasigroup version of the above?
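Tracing the four XOR assignments with concrete values confirms the claimed c, a, b rotation:

```python
a, b, c = 1, 2, 3
a = a ^ b ^ c   # a now holds 1 ^ 2 ^ 3
b = a ^ b ^ c   # recovers the original a
c = a ^ b ^ c   # recovers the original b
a = a ^ b ^ c   # recovers the original c
assert (a, b, c) == (3, 1, 2)   # i.e. the original (c, a, b)
```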
[–] 0 points (0 children)
a := (ab)c
b := (a/c)/b
c := b\(a/c)
a := (bc)\a
[–] 0 points (1 child)
Tracking the values of a, b, c (starting from f, g, h):
a = a b c      -->  a = fgh,  b = g,  c = h
b = a c⁻¹ b⁻¹  -->  (fgh)h⁻¹g⁻¹ = f,  so b = f
c = b⁻¹ a c⁻¹  -->  f⁻¹(fgh)h⁻¹ = g,  so c = g
a = c⁻¹ b⁻¹ a  -->  g⁻¹f⁻¹(fgh) = h,  so a = h
So the group version is:
a = a b c
b = a c⁻¹ b⁻¹
c = b⁻¹ a c⁻¹
a = c⁻¹ b⁻¹ a
Which might just be what stormblooper was doing with notation I'm unfamiliar with.
[–] 0 points (0 children)
Yeah -- see http://en.wikipedia.org/wiki/Quasigroup#Universal_algebra -- but it's essentially the same as yours (except it only assumes division, rather than inverses).
[–][deleted] (7 children)
[deleted]
[–] 15 points (4 children)
If you're in such a memory-restricted environment that you can't even do:
temp = a
a = b
b = a
, then I doubt that you could even run the Python interpreter.
[–] 9 points (3 children)
temp = a;
a = b;
b = temp;
[–] 2 points (2 children)
...right, me can no code.
[–][deleted] (1 child)
[deleted]
[–] 1 point (0 children)
It was the last line "b=a" he was talking about. It should be "b=temp" as Frosticus said.
[–] 2 points (0 children)
Also works in lua, but yeah, I'm pretty sure that's not how the interpreter does it.
[–] 1 point (0 children)
This constructs and then deconstructs a tuple.
|
{}
|
## Cryptology ePrint Archive: Report 2016/664
Efficient Conversion Method from Arithmetic to Boolean Masking in Constrained Devices
Yoo-Seung Won and Dong-Guk Han
Abstract: A common technique employed for preventing side channel analysis is boolean masking. However, the application of this scheme is not so straightforward when it comes to block ciphers based on the Addition-Rotation-Xor structure. In order to address this issue, since 2000, scholars have investigated schemes for converting Arithmetic to Boolean (AtoB) masking and Boolean to Arithmetic (BtoA) masking. However, these solutions have certain limitations. The time performance of the AtoB scheme is extremely unsatisfactory because of its high complexity of $\mathcal{O}(k)$, where $k$ is the addition bit size. At FSE 2015, an improved algorithm with time complexity $\mathcal{O}(\log k)$ based on the Kogge-Stone carry look-ahead adder was suggested. Despite its efficiency, this algorithm does not account for constrained environments. Although the original algorithm naturally extends to low-resource devices, there is no advantage in time performance; we call this variant the generic variant.
In this study, we suggest an enhanced variant algorithm suited to constrained devices. Our solution is based on the principle of the Kogge-Stone carry look-ahead adder, and it uses a divide and conquer approach. In addition, we prove the security of our new algorithm against first-order attacks. In our implementation results, when $k=64$ and the register bit size of a chip is $8$, $16$ or $32$, we obtain $58$\%, $72$\%, or $68$\% improvement, respectively, over the results obtained using the generic variant. When applying those algorithms to first-order SPECK, we also achieve about $40$\% improvement. Moreover, our proposal extends to higher-order countermeasures as in previous studies.
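For orientation, the plain (unmasked) Kogge-Stone addition underlying these $\mathcal{O}(\log k)$ schemes can be sketched as follows; this is an ordinary adder, not a masked conversion, and the function name is chosen here:

```python
def kogge_stone_add(a, b, k=32):
    # compute (a + b) mod 2^k using only XOR/AND/shift,
    # with carries resolved in O(log k) rounds
    mask = (1 << k) - 1
    g = a & b            # generate bits
    p = a ^ b            # propagate bits
    d = 1
    while d < k:
        g |= p & (g << d)
        p &= p << d
        d <<= 1
    return (a ^ b ^ (g << 1)) & mask

assert kogge_stone_add(5, 7) == 12
assert kogge_stone_add(0xFFFFFFFF, 1) == 0   # wraps mod 2^32
```

Each loop iteration doubles the carry distance, which is why masked variants built on this structure achieve logarithmic rather than linear depth in $k$.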
|
{}
|
A body of mass 5 kg starts from the origin with an initial velocity $\vec{u} = 30\hat{i} + 40\hat{j}\ \mathrm{m\,s^{-1}}$. If a constant force $\vec{F} = -(\hat{i} + 5\hat{j})\ \mathrm{N}$ acts on the body, the time in which the y-component of the velocity becomes zero is
(1) 5 seconds
(2) 20 seconds
(3) 40 seconds
(4) 80 seconds
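As a quick check using the values from the problem statement: the y-velocity obeys v_y = u_y + (F_y/m) t, which vanishes at t = 40 s, option (3).

```python
m = 5.0              # mass in kg
uy = 40.0            # initial y-velocity component, m/s
Fy = -5.0            # y-component of F = -(i + 5j) N
ay = Fy / m          # y-acceleration: -1 m/s^2
t = -uy / ay         # solve v_y = uy + ay*t = 0
assert t == 40.0     # option (3)
```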
|
{}
|
# Arithmetic Associativity – Not So Fast
Arithmetic is associative, right? Well, in the world of paper and pencil, where you can often do calculations exactly, that can be true. However, in the computing world, where real numbers can't always be represented exactly because of working with finite precision datatypes, it turns out that you can't depend on the arithmetic to behave the way you were taught in grade school.
### Contents
#### Let's Do Some Math
Suppose I want to check the following:
$$\sqrt {2} = 2/\sqrt {2}$$
I can do this analytically using the Symbolic Math Toolbox.
symsqrt2 = sqrt(sym(2));
shouldBeZero = symsqrt2 - 2/symsqrt2
shouldBeZero =
0
Now let's perform the same calculation numerically.
mightBeZero = sqrt(2) - 2/sqrt(2)
mightBeZero =
2.2204e-16
What is happening here is that we are seeing the influence of the accuracy of floating point numbers and calculations with them. I discussed this in an earlier post as well.
#### Let's Try Another Example
Now let's try something a little different. First, let's find out what the value of eps is for $\sqrt{2}$. This should be the smallest (in magnitude) floating point number which, when added to $\sqrt{2}$, produces a number different than $\sqrt{2}$.
sqrt2eps = eps(sqrt(2))
sqrt2eps =
2.2204e-16
Next, we want a number smaller in magnitude than this to play with. I'll use half its value.
halfsqrt2eps = sqrt2eps/2
halfsqrt2eps =
1.1102e-16
And now let's calculate the following expressions, symbolically and numerically.
$$expr1 = \sqrt{2} - \sqrt{2} + halfsqrt2eps$$
$$expr2 = (\sqrt{2} - \sqrt{2}) + halfsqrt2eps$$
$$expr3 = \sqrt{2} + (-\sqrt{2} + halfsqrt2eps)$$
First we do them all symbolically.
expr1 = symsqrt2 - symsqrt2 + sym(sqrt2eps)/2
expr2 = (symsqrt2 - symsqrt2) + sym(sqrt2eps)/2
expr3 = symsqrt2 + (-symsqrt2 + sym(sqrt2eps)/2)
double(expr1)
expr1 =
1/9007199254740992
expr2 =
1/9007199254740992
expr3 =
1/9007199254740992
ans =
1.1102e-16
Symbolic results are all the same and return half the value of eps.
Now we'll calculate the same expressions numerically.
expr1 = sqrt(2) - sqrt(2) + halfsqrt2eps
expr2 = (sqrt(2) - sqrt(2)) + halfsqrt2eps
expr3 = sqrt(2) + (-sqrt(2) + halfsqrt2eps)
expr1 =
1.1102e-16
expr2 =
1.1102e-16
expr3 =
2.2204e-16
So what's going on here? As I stated earlier, this example illustrates that floating point arithmetic is not associative the way symbolic arithmetic is. There's no reason to get upset about this. But it is worth understanding. And it might well be worth rewriting a computation occasionally, especially if you are trying to compute a very small difference between two large numbers.
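The same behavior can be reproduced in any IEEE-754 double-precision environment, not just MATLAB; for instance, a Python sketch of the experiment above:

```python
import math

s = math.sqrt(2)
ulp = 2.0 ** -52          # eps at sqrt(2), since sqrt(2) lies in [1, 2)
half = ulp / 2            # exactly representable (power-of-two division)

expr1 = s - s + half      # left-to-right: (s - s) + half is exactly half
expr3 = s + (-s + half)   # regrouped: the tie in -s + half rounds to even

assert expr1 == half
assert expr3 == ulp       # twice expr1 -- associativity fails
```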
#### Have You Rewritten Expressions to Get Better Accuracy?
Have you found yourself in a situation where you needed to rewrite how to calculate a numeric result (like here, by different groupings) to ensure you got a more accurate solution? Let me know about it here.
Published with MATLAB® R2013b
|
{}
|
# Evaluate the integral after figuring out the proper method to use
1. Dec 4, 2012
### Painguy
1. The problem statement, all variables and given/known data
∫ ((2t+3)^2)/t^2 dt
2. Relevant equations
3. The attempt at a solution
I figured that I would use integration by parts. The problem I'm having is that we haven't actually learned integration by parts, only the u substitution method. I went ahead and read the book on the proof and several examples, but it's still a bit new to me so I'm not sure how to approach the problem.
u=1/t^2
du=-dt/3t^3
dv=(2t+3)^2 dt
v=(4t^3)/3 +6t^2 +9t
4t/3 +6+9/t - ∫ ((4t^3)/3 +6t^2 +9t)/t^3 dt
Is there a more straight forward way of solving this?
2. Dec 4, 2012
### Dick
Sure there is. Just multiply the numerator out. Then split it up and integrate.
3. Dec 4, 2012
### Painguy
Oh wow..... Thanks for the help. I feel a little silly right now.
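Following Dick's hint, (2t+3)^2/t^2 = 4 + 12/t + 9/t^2, whose antiderivative is 4t + 12 ln|t| - 9/t + C. A quick numerical check that differentiating this recovers the integrand:

```python
import math

def F(t):
    # antiderivative obtained after expanding the numerator
    return 4*t + 12*math.log(abs(t)) - 9/t

def integrand(t):
    return (2*t + 3)**2 / t**2

t, h = 2.0, 1e-6
deriv = (F(t + h) - F(t - h)) / (2*h)
assert abs(deriv - integrand(t)) < 1e-6
```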
|
{}
|
# node-gyp
node-gyp is a cross-platform command-line tool written in Node.js for compiling native addon modules for Node.js. It bundles the gyp project used by the Chromium team and takes away the pain of dealing with the various differences in build platforms. It is the replacement for the node-waf program, which was removed in node v0.8. If you have a native addon for node that still has a wscript file, then you should definitely add a binding.gyp file to support the latest versions of node.
Multiple target versions of node are supported (i.e. 0.8, 0.9, 0.10, ..., 1.0, etc.), regardless of what version of node is actually installed on your system (node-gyp downloads the necessary development files for the target version).
• Easy to use, consistent interface
• Same commands to build your module on every platform
• Supports multiple target versions of Node
You can install with npm:
    $ npm install -g node-gyp

You will also need to install:

* On Unix:
  * python (v2.7 recommended, v3.x.x is not supported)
  * make
  * A proper C/C++ compiler toolchain, like GCC
* On Mac OS X:
  * python (v2.7 recommended, v3.x.x is not supported) (already installed on Mac OS X)
  * Xcode
    * You also need to install the Command Line Tools via Xcode. You can find this under the menu Xcode -> Preferences -> Downloads
    * This step will install gcc and the related toolchain containing make
* On Windows:
  * Python (v2.7.3 recommended, v3.x.x is not supported)
    * Make sure that you have a PYTHON environment variable, and it is set to drive:\path\to\python.exe, not to a folder
  * Windows XP/Vista/7:
    * Microsoft Visual Studio C++ 2013 (Express version works well)
      * If the install fails, try uninstalling any C++ 2010 x64&x86 Redistributable that you have installed first
      * If you get errors that the 64-bit compilers are not installed you may also need the compiler update for the Windows SDK 7.1
  * Windows 7/8:
    * Microsoft Visual Studio C++ 2013 for Windows Desktop (Express version works well)
  * All Windows Versions:
    * For 64-bit builds of node and native modules you will also need the Windows 7 64-bit SDK
    * You may need to run one of the following commands if your build complains about WindowsSDKDir not being set, and you are sure you have already installed the SDK:

          call "C:\Program Files\Microsoft SDKs\Windows\v7.1\bin\Setenv.cmd" /Release /x86
          call "C:\Program Files\Microsoft SDKs\Windows\v7.1\bin\Setenv.cmd" /Release /x64

If you have multiple Python versions installed, you can identify which Python version node-gyp uses by setting the '--python' variable:

    $ node-gyp --python /path/to/python2.7
If node-gyp is called by way of npm and you have multiple versions of Python installed, then you can set npm's 'python' config key to the appropriate value:
    $ npm config set python /path/to/executable/python2.7

Note that OS X is just a flavour of Unix and so needs python, make, and C/C++. An easy way to obtain these is to install XCode from Apple, and then use it to install the command line tools (under Preferences -> Downloads).

To compile your native addon, first go to its root directory:

    $ cd my_node_addon
The next step is to generate the appropriate project build files for the current platform. Use configure for that:
```
$ node-gyp configure
```

Note: The configure step looks for the binding.gyp file in the current directory to process. See below for instructions on creating the binding.gyp file.

Now you will have either a Makefile (on Unix platforms) or a vcxproj file (on Windows) in the build/ directory. Next, invoke the build command:

```
$ node-gyp build
```
Now you have your compiled .node bindings file! The compiled bindings end up in build/Debug/ or build/Release/, depending on the build mode. At this point you can require the .node file with Node and run your tests!
Note: To create a Debug build of the bindings file, pass the --debug (or -d) switch when running either the configure, build or rebuild command.
Previously when node had node-waf you had to write a wscript file. The replacement for that is the binding.gyp file, which describes the configuration to build your module in a JSON-like format. This file gets placed in the root of your package, alongside the package.json file.
A barebones gyp file appropriate for building a node addon looks like:
```json
{
  "targets": [
    {
      "target_name": "binding",
      "sources": [ "src/binding.cc" ]
    }
  ]
}
```
Some additional resources for addons and writing gyp files:
node-gyp responds to the following commands:
| Command   | Description |
| --------- | ----------- |
| build     | Invokes make/msbuild.exe and builds the native addon |
| clean     | Removes the build directory if it exists |
| configure | Generates project build files for the current platform |
| rebuild   | Runs clean, configure and build all in a row |
| install   | Installs node development header files for the given version |
| list      | Lists the currently installed node development file versions |
| remove    | Removes the node development header files for the given version |
# 0.2 Practice tests (1-4) and final exams (Page 22/36)
8 . The 99% confidence interval, because it includes all but one percent of the distribution. The 95% confidence interval will be narrower, because it excludes five percent of the distribution.
## 8.2: confidence interval, single population mean, standard deviation unknown, student’s t
9 . The t -distribution will have more probability in its tails (“thicker tails”) and less probability near the mean of the distribution (“shorter in the center”).
10 . Both distributions are symmetrical and centered at zero.
11 . df = n – 1 = 20 – 1 = 19
12 . You can get the t -value from a probability table or a calculator. In this case, for a t -distribution with 19 degrees of freedom and a 95% two-sided confidence interval, the value is ${t}_{\frac{\alpha }{2}}=2.093$.
The calculator function is invT(0.975, 19).
13 . $EBM={t}_{\frac{\alpha }{2}}\left(\frac{s}{\sqrt{n}}\right)=\left(2.093\right)\left(\frac{0.3}{\sqrt{20}}\right)=0.14$
98.4 ± 0.14 = (98.26, 98.54).
The calculator function Tinterval answer is (98.26, 98.54).
14 . ${t}_{\frac{\alpha }{2}}=2.861.$ The calculator function is invT(0.995, 19).
$EBM={t}_{\frac{\alpha }{2}}\left(\frac{s}{\sqrt{n}}\right)=\left(2.861\right)\left(\frac{0.3}{\sqrt{20}}\right)=0.192$
98.4 ± 0.19 = (98.21, 98.59). The calculator function Tinterval answer is (98.21, 98.59).
15 . df = n – 1 = 30 – 1 = 29. ${t}_{\frac{\alpha }{2}}=2.045$, so $EBM=\left(2.045\right)\left(\frac{0.3}{\sqrt{30}}\right)=0.11$
98.4 ± 0.11 = (98.29, 98.51). The calculator function Tinterval answer is (98.29, 98.51).
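The t-intervals in answers 13–15 are easy to reproduce programmatically. A minimal Python sketch of answer 13, reusing the critical value invT(0.975, 19) = 2.093 given above:

```python
from math import sqrt

# Answer 13: 95% t-interval for the mean, with n = 20, sample mean 98.4, s = 0.3.
n, xbar, s = 20, 98.4, 0.3
t_crit = 2.093          # invT(0.975, 19), from answer 12

ebm = t_crit * s / sqrt(n)                 # error bound for the mean
lower, upper = xbar - ebm, xbar + ebm
print(round(ebm, 2))                       # 0.14
print(round(lower, 2), round(upper, 2))    # 98.26 98.54
```

Swapping in n = 30 and the df = 29 critical value reproduces answer 15 the same way.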
## 8.3: confidence interval for a population proportion
16 . ${p}^{\prime }=\frac{280}{500}=0.56$
${q}^{\prime }=1-{p}^{\prime }=1-0.56=0.44$
$s=\sqrt{\frac{pq}{n}}=\sqrt{\frac{0.56\left(0.44\right)}{500}}=0.0222$
17 . Because you are using the normal approximation to the binomial, ${z}_{\frac{\alpha }{2}}=1.96$ .
Calculate the error bound for the population ( EBP ):
$EBP={z}_{\frac{\alpha }{2}}\sqrt{\frac{{p}^{\prime }{q}^{\prime }}{n}}=\left(1.96\right)\left(0.0222\right)=0.0435$
Calculate the 95% confidence interval:
0.56 ± 0.0435 = (0.5165, 0.6035).
The calculator function 1-PropZint answer is (0.5165, 0.6035).
18 . ${z}_{\frac{\alpha }{2}}=1.64$
0.56 ± 0.03 = (0.5236, 0.5964). The calculator function 1-PropZint answer is (0.5235, 0.5965)
19 . ${z}_{\frac{\alpha }{2}}=2.58$
0.56 ± 0.05 = (0.5127, 0.6173).
The calculator function 1-PropZint answer is (0.5028, 0.6172).
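Answers 16–19 all follow the same pattern. A short Python sketch of the 95% interval from answer 17, taking the normal quantile from the standard library instead of a table:

```python
from math import sqrt
from statistics import NormalDist

# Answer 17: 95% confidence interval for a proportion, x = 280 successes, n = 500.
n = 500
p = 280 / n                                # p' = 0.56
z = NormalDist().inv_cdf(0.975)            # z_(alpha/2), approximately 1.96

ebp = z * sqrt(p * (1 - p) / n)            # error bound for the proportion
lower, upper = p - ebp, p + ebp
print(round(lower, 4), round(upper, 4))    # 0.5165 0.6035
```

Changing 0.975 to 0.95 or 0.995 gives the 90% and 99% intervals of answers 18 and 19.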
20 . EBP = 0.04 (because 4% = 0.04)
${z}_{\frac{\alpha }{2}}=1.96$ for a 95% confidence interval
You need 601 subjects (rounding upward from 600.25).
21 .
You need 577 subjects (rounding upward from 576.24).
22 .
You need 1,068 subjects (rounding upward from 1,067.11).
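The sample-size calculations in answers 20–22 all use n = z²pq/EBP², rounded up to the next whole subject. A sketch of answer 20 in Python:

```python
from math import ceil

# Answer 20: required sample size for EBP = 0.04 at 95% confidence,
# using the conservative guess p = q = 0.5.
z = 1.96
ebp = 0.04
p = q = 0.5

n = z**2 * p * q / ebp**2
print(round(n, 2), ceil(n))   # 600.25 601
```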
## 9.1: null and alternate hypotheses
23 . H 0 : p = 0.58
H a : p ≠ 0.58
24 . H 0 : p ≥ 0.58
H a : p < 0.58
25 . H 0 : μ ≥ $268,000
H a : μ < $268,000
26 . H a : μ ≠ 107
27 . H a : p ≥ 0.25
## 9.2: outcomes and the type i and type ii errors
28 . a Type I error
29 . a Type II error
30 . Power = 1 – β = 1 – P (Type II error).
31 . The null hypothesis is that the patient does not have cancer. A Type I error would be detecting cancer when it is not present. A Type II error would be not detecting cancer when it is present. A Type II error is more serious, because failure to detect cancer could keep a patient from receiving appropriate treatment.
32 . The screening test has a ten percent probability of a Type I error, meaning that ten percent of the time, it will detect TB when it is not present.
#### Questions & Answers
what is standard deviation?
Jawed Reply
It is the measure of the variation of certain values from the Mean (Center) of a frequency distribution of sample values for a particular Variable.
Dominic
Yeah....the simplest one
IRFAN
what is the number of x
Godgift Reply
10
Elicia
Javed Arif
Jawed
how will you know if a group of data set is a sample or population
Kingsley Reply
population is the whole set and the sample is the subset of population.
umair
if the data set is drawn out of a larger set it is a sample and if it is itself the whole complete set it can be treated as population.
Bhavika
hello everyone if I have the data set which contains measurements of each part during 10 years, may I say that it's the population or it's still a sample because it doesn't contain my measurements in the future? thanks
Alexander
Pls I hv a problem on t test is there anyone who can help?
Peggy
What's your problem Peggy Abang
Dominic
Bhavika is right
Dominic
what is the problem peggy?
Bhavika
hi
Sandeep
Hello
adeagbo
hi
Bhavika
hii Bhavika
Dar
Hi eny population has a special definition. if that data set had all of characteristics of definition, that is population. otherwise that is a sample
Hoshyar
three coins are tossed. find the probability of no head
Kanwal Reply
three coins are tossed consecutively or what ?
umair
p(getting no head)=1/8
umair
or .125 is the probability of getting no head when 3 coins are tossed
umair
🤣🤣🤣
Simone
what is two tailed test
Umar Reply
if the diameter will be greater than 3 cm then the bullet will not fit in the barrel of the gun so you are bothered for both the sides.
umair
in this test you are worried on both the ends
umair
lets say you are designing a bullet for thw gun od diameter equals 3cm.if the diameter of the bullet is less than 3 cm then you wont be able to shoot it
umair
In order to apply weddles rule for numerical integration what is minimum number of ordinates
Anjali Reply
excuse me?
Gabriel
why?
Tade
didn't understand the question though.
Gabriel
which question? ?
Tade
We have rules of numerical integration like Trapezoidal rule, Simpson's 1/3 and 3/8 rules, Boole's rule and Weddle rule for n =1,2,3,4 and 6 but for n=5?
John
geometric mean of two numbers 4 and 16 is:
iphone Reply
10
umair
really
iphone
quartile deviation of 8 8 8 is:
iphone
sorry 8 is the geometric mean of 4,16
umair
quartile deviation of 8 8 8 is
iphone
can you please expalin the whole question ?
umair
mcq
iphone
h
iphone
can you please post the picture of that ?
umair
how
iphone
hello
John
10 now
John
how to find out the value
srijth Reply
can you be more specific ?
umair
yes
KrishnaReddy
what is the difference between inferential and descriptive statistics
Eze Reply
descriptive statistics gives you the result on the the data like you can calculate various things like variance,mean,median etc. however, inferential stats is involved in prediction of future trends using the previous stored data.
umair
if you need more help i am up for the help.
umair
Thanks a lot
Anjali
Inferential Statistics involves drawing conclusions on a population based on analysis of a sample. Descriptive statistics summarises or describes your current data as numerical calculations or graphs.
fred
my pleasure😊. Helping others offers me satisfaction 😊
umair
for poisson distribution mean............variance.
mehul Reply
both are equal to mu
Faizan
mean=variance
Faizan
what is a variable
Bonolo Reply
something that changes
Festus
why we only calculate 4 moment of mean? asked in papers.
Faizan Reply
why we only 4 moment of mean ? asked in BA exam
Faizan
Good evening, can you please help me by sharing regression and correlation analysis notes....thank you in advance
Refiloe Reply
Hello, can you please share the possible questions that are likely to be examined under the topic: regression and correlation analysis.
Refiloe
for normal distribution mean is 2 & variance is 4 find mu 4?
Faizan Reply
repeat quastion again
Yusuf
find mu 4. it can be wrong but want to prove how.
Faizan
for a normal distribution if mu 4 is 12 then find mu 3?
Faizan Reply
Question hi wrong ha
Tahir
ye BA mcqs me aya he teen he. 2dafa aya he
Faizan
if X is normally distributed. (n,b). then its mean deviation is?
Faizan
The answer is zero, because all odd ordered central moments of a normal distribution are Zero.
nikita
which question is zero
Faizan
sorry it is (5,16) in place of (n,b)
Faizan
I got. thanks. it is zero.
Faizan
a random variable having binomial distribution is?
Bokaho
Source: OpenStax, Introductory statistics. OpenStax CNX. May 06, 2016 Download for free at http://legacy.cnx.org/content/col11562/1.18
# Difference of Squares
The difference of squares formula is a shortcut that you can use anytime you need to factor an expression that has a perfect square subtracted from another perfect square.
## How to Factor a Difference of Squares
1. Make sure the polynomial is a difference of squares.
• Does the polynomial have two terms connected with a subtraction sign?
• Are the terms perfect squares?
2. Find the square root of each term.
3. Plug the results of Step 2 into the difference of squares formula.
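The formula referred to in Step 3, which is derived in the Why It Works section below, is:

$a^2-b^2=(a+b)(a-b)$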
## Examples
Factor: $x^2-9$
Is $$\yellow x^2 -9$$ a difference of squares?
Yes, $$\yellow x^2$$ and $$\yellow 9$$ are perfect squares that are connected with a subtraction sign.
What are the square roots of $$\yellow x^2$$ and $$\yellow 9$$?
$\sqrt{\yellow x^2}={\yellow x}$
$\sqrt{\yellow 9}={\yellow 3}$
What is the factored form of $$\yellow x^2-9$$?
To find the factored form of $$\yellow x^2-9$$, I will substitute $$\yellow x$$ and $$\yellow 3$$ into the difference of squares formula.
${\yellow x^2}-{\yellow 9}=({\yellow x}+{\yellow 3})({\yellow x}-{\yellow 3})$
The factored form of $$\yellow x^2-9$$ is…
$\yellow (x+3)(x-3)$
Factor: $36y^2-25x^2$
Is $$\green 36y^2 -25x^2$$ a difference of squares?
Yes, $$\green 36 y^2$$ and $$\green 25x^2$$ are perfect squares that are connected with a subtraction sign.
What are the square roots of $$\green 36 y^2$$ and $$\green 25x^2$$?
$\sqrt{\green 36y^2}={\green 6y}$
$\sqrt{\green 25x^2}={\green 5x}$
What is the factored form of $$\green 36y^2-25x^2$$?
To find the factored form of $$\green 36y^2-25x^2$$, I will substitute $$\green 6y$$ and $$\green 5x$$ into the difference of squares formula.
${\green 36y^2}-{\green 25x^2}=({\green 6y}+{\green 5x})({\green 6y}-{\green 5x})$
The factored form of $$\green 36y^2-25x^2$$ is…
$\green (6y+5x)(6y-5x)$
Factor: $81x^{12}-16x^4$
Is $$\purple 81x^{12}-16x^4$$ a difference of squares?
Yes, $$\purple 81x^{12}$$ and $$\purple 16x^4$$ are perfect squares that are connected with a subtraction sign.
What are the square roots of $$\purple 81x^{12}$$ and $$\purple 16x^4$$?
$\sqrt{\purple 81x^{12}}={\purple 9x^6}$
$\sqrt{\purple 16x^4}={\purple 4x^2}$
What is the factored form of $$\purple 81x^{12}-16x^4$$?
To find the factored form of $$\purple 81x^{12}-16x^4$$, I will substitute $$\purple 9x^6$$ and $$\purple 4x^2$$ into the difference of squares formula.
${\purple 81x^{12}}-{\purple 16x^4}=({\purple 9x^6}+{\purple 4x^2})({\purple 9x^6}-{\purple 4x^2})$
The difference of squares formula tells me that the factored form of $$\purple 81x^{12}-16x^4$$ is…
$\purple(9x^6+ 4x^2)(9x^6-4x^2)$
However, this expression can be factored even further because the second factor $$\purple (9x^6-4x^2)$$ is another difference of squares that can be factored to $$\purple (3x^3+2x)(3x^3-2x)$$.
So, the fully factored form of $$\purple 81x^{12}-16x^4$$ is…
$\purple (9x^6+4x^2)(3x^3+2x)(3x^3-2x)$
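As a quick sanity check, each factored form above can be verified numerically: a correct factorization must agree with the original expression for every value of the variable. A small Python check of the three examples:

```python
# Compare each original expression with its factored form at sample values.
for x in (-3, -1, 0, 2, 5):
    assert x**2 - 9 == (x + 3) * (x - 3)
    assert 81 * x**12 - 16 * x**4 == \
        (9 * x**6 + 4 * x**2) * (3 * x**3 + 2 * x) * (3 * x**3 - 2 * x)

for x in (-2, 1, 4):
    for y in (-3, 0, 2):
        assert 36 * y**2 - 25 * x**2 == (6 * y + 5 * x) * (6 * y - 5 * x)

print("all factorizations check out")
```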
## How to Factor a Sum of Squares
When you first learn how to factor polynomials, your teacher may tell you that there is no way to factor a sum of squares.
That is partially true. Sums of squares are not factorable unless you use complex numbers. And you will not be expected to find the complex factors of a polynomial in most algebra classes.
However, if you are asked to find the complex factors of a polynomial, you can apply the difference of squares formula to factor a sum of squares like this:
Factor: $x^2+25$
Is $$\blue x^2 +25$$ a difference of squares?
No, $$\blue x^2$$ and $$\blue 25$$ are perfect squares but they are connected with an addition sign instead of a subtraction sign.
However, subtracting a negative is equivalent to adding a positive, so I could rewrite the expression so there is a subtraction sign.
$\blue x^2+25 = x^2- -25$
What are the square roots of $$\blue x^2$$ and $$\blue -25$$?
$\sqrt{\blue x^2}={\blue x}$
$\sqrt{\blue -25}={\blue 5i}$
What is the factored form of $$\blue x^2- -25$$?
To find the factored form of $$\blue x^2- -25$$, I will substitute $$\blue x$$ and $$\blue 5i$$ into the difference of squares formula.
${\blue x^2}-{\blue -25}=({\blue x}-{\blue 5i})({\blue x}+{\blue 5i})$
The factored form of $$\blue x^2+25$$ is…
$\blue (x-5i)(x+5i)$
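This complex factorization can also be spot-checked numerically. In Python, the imaginary unit i is written `j`, so `5j` plays the role of 5i:

```python
# Verify (x - 5i)(x + 5i) = x^2 + 25 at a few sample values of x.
for x in (-3.0, 0.0, 1.5, 4.0):
    assert (x - 5j) * (x + 5j) == x**2 + 25

print("sum-of-squares factorization verified")
```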
## Why It Works
The difference of squares formula works because factoring “un-does” polynomial multiplication.
You can see where the formula comes from if you reverse engineer the process and multiply the sum $$(a+b)$$ and difference $$(a-b)$$ of two terms.
To understand where the formula comes from…
1. Multiply $$(a+b)(a-b)$$.
2. Undo the multiplication to find the Difference of Squares Formula.
I like using the FOIL method to multiply binomials, but you can also use the box method or the multiplication algorithm. $(a+b)(a-b)$
Multiply the FIRST terms: $({\red a})({\red a})={\red a^2}$
Multiply the OUTER terms: $({\yellow a})({\yellow -b})={\yellow -ab}$
Multiply the INNER terms: $({\green b})({\green a})={\green ab}$
Multiply the LAST terms: $({\blue b})({\blue -b})={\blue -b^2}$
When the like terms ($$\yellow -ab$$ and $$\green ab$$) are combined, they cancel each other out.
The remaining terms create a difference of squares: $(a+b)(a-b)={\red a^2}{\blue -b^2}$
If the simplified polynomial multiplication is always a difference of squares…
$(a+b)(a-b)=a^2-b^2$
Then we can “un-do” the multiplication and write the equation backwards to find the formula to factor a difference of squares…
$a^2-b^2=(a+b)(a-b)$
Like Powers
Age 11 to 14 Challenge Level:
Ben Twigger and Tom Ruffett from Ousedale School, Milton Keynes set up a spreadsheet which shows that
$$1^n + 19^n + 20^n + 51^n + 57^n + 80^n + 82^n = 2^n + 12^n + 31^n + 40^n + 69^n + 71^n + 85^n$$
for values of $n$ from $n = 1$ to $n = 6$ but the two expressions are not equal for $n = 7$ or $n = 8$. Ben and Tom found this hard to believe themselves even though they had the evidence from their own work. The two columns headed 'sum' give the totals of the expressions on the left hand side and the right hand side for each value of $n$. The column headed 'Difference' gives the differences between these two totals for each value of $n$. Notice that the difference is 0 for $n = 1, 2, 3, 4, 5, 6$ showing that the expressions are equal for these values of $n$ and the difference is not zero for $n=7$ or $n=8$ showing that these expressions are not equal for these values of $n$.
| n | 1 | 19 | 20 | 51 | 57 | 80 | 82 | Sum | Difference |
|---|---|----|----|----|----|----|----|-----|------------|
| 1 | 1 | 19 | 20 | 51 | 57 | 80 | 82 | 310 | 0 |
| 2 | 1 | 361 | 400 | 2601 | 3249 | 6400 | 6724 | 19736 | 0 |
| 3 | 1 | 6859 | 8000 | 132651 | 185193 | 512000 | 551368 | 1396072 | 0 |
| 4 | 1 | 130321 | 160000 | 6765201 | 10556001 | 40960000 | 45212176 | 103783700 | 0 |
| 5 | 1 | 2476099 | 3200000 | 345025251 | 601692057 | 3276800000 | 3707398432 | 7936591840 | 0 |
| 6 | 1 | 47045881 | 64000000 | 17596287801 | 34296447249 | 2.62E+11 | 3.04E+11 | 6.18E+11 | 0 |
| 7 | 1 | 893871739 | 1280000000 | 8.97E+11 | 1.95E+12 | 2.10E+13 | 2.49E+13 | 4.88E+13 | 36021585600 |
| 8 | 1 | 16983563041 | 25600000000 | 4.58E+13 | 1.11E+14 | 1.68E+15 | 2.04E+15 | 3.88E+15 | 1.28E+13 |

| n | 2 | 12 | 31 | 40 | 69 | 71 | 85 | Sum |
|---|---|----|----|----|----|----|----|-----|
| 1 | 2 | 12 | 31 | 40 | 69 | 71 | 85 | 310 |
| 2 | 4 | 144 | 961 | 1600 | 4761 | 5041 | 7225 | 19736 |
| 3 | 8 | 1728 | 29791 | 64000 | 328509 | 357911 | 614125 | 1396072 |
| 4 | 16 | 20736 | 923521 | 2560000 | 22667121 | 25411681 | 52200625 | 103783700 |
| 5 | 32 | 248832 | 28629151 | 102400000 | 1564031349 | 1804229351 | 4437053125 | 7936591840 |
| 6 | 64 | 2985984 | 887503681 | 4096000000 | 1.08E+11 | 1.28E+11 | 3.77E+11 | 6.18E+11 |
| 7 | 128 | 35831808 | 27512614111 | 1.64E+11 | 7.45E+12 | 9.10E+12 | 3.21E+13 | 4.88E+13 |
| 8 | 256 | 429981696 | 8.53E+11 | 6.55E+12 | 5.14E+14 | 6.46E+14 | 2.72E+15 | 3.89E+15 |
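The spreadsheet evidence above can also be checked with exact integer arithmetic in a few lines of Python, avoiding the rounding in the E-notation entries:

```python
lhs = [1, 19, 20, 51, 57, 80, 82]
rhs = [2, 12, 31, 40, 69, 71, 85]

for n in range(1, 9):
    left = sum(x**n for x in lhs)
    right = sum(x**n for x in rhs)
    # The two sums agree for n = 1..6 and differ for n = 7 and 8.
    print(n, left == right, left - right)
```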
# Professor Roger William Lewis Jones MA(Oxon), PhD(Bham), CPhys, FInstP
## Research Interests
My research interests are in experimental elementary (high-energy) particle physics. They are divided into three activities:
• The physics of particles containing b-quarks using the ATLAS experiment at CERN.
• The investigation of the strong nuclear force, and the predictions of the theory describing that force, QCD
• The development of world-wide Grid computing systems to serve the huge processing and storage requirements of particle physics
My ATLAS studies at the LHC involve the decays of particles containing b-quarks, which allow us to better understand a rare violation of a symmetry known as CP. This is intimately connected with the matter-antimatter asymmetry in the universe. These decays are also an ideal area in which to properly understand the behaviour of the tracking detectors and software, which we need before we can properly apply the tracking in Higgs and SUSY searches. We are also investigating the use of b-events as a window into new 'flavour dependent' physics.
At LEP II, I investigated the strong interaction as a member of the ALEPH collaboration, and various measures of its strength. I look at the general shape and flow of energy and momentum in events. I remain the convener of the LEP QCD Working Group and of its Annihilations subgroup. We attempt to combine experimental results at various energies to give clear evidence for the change of the strength of the strong interaction with increasing energy scale; we also try to provide a consistent treatment of theoretical uncertainties in these measurements. I also investigated strong interaction effects in the decays of W boson pairs, particularly a predicted phenomenon called 'colour reconnection'.
In order to do all of this exciting physics, advanced software and a world-wide computing system is required. For ATLAS, I was until 2009 the chair of the International Computing Board and am part of the computing project leadership team. I continue to develop the ATLAS computing model. I run the UK component of the ATLAS offline computing and software project. At Lancaster, we are developing tracking tools for ATLAS and Grid software for the community. We provide a Grid computing farm as part of the NorthGrid Tier 2 (spread over Lancaster, Liverpool, Manchester and Sheffield); I chair the NorthGrid Management Board. The Tier 2 is part of the GridPP collaboration, and I used to co-ordinate the experiment applications development in GridPP and sit on the Project Management Board representing ATLAS and NorthGrid. I was a member of STFC's Computing Advisory Panel from 2009-2014 (and was previously on PPARC's Computing Advisory Panel from 2007-2009) and chair the Particle Physics Users Advisory Committee.
I was elected chair of the Worldwide LHC Computing Grid (wLCG) Collaboration Board in July 2009.
I became a member of the STFC Particle Physics Advisory Panel in October 2015.
Search for flavour-changing neutral current top-quark decays to qZ in pp collision data collected with the ATLAS detector at √s = 8 TeV
ATLAS Collaboration 8/01/2016 In: European Physical Journal C: Particles and Fields. 76, 1, 24 p.
Journal article
Determination of the ratio of b-quark fragmentation fractions fs/fd in pp collisions at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 30/12/2015 In: Physical Review Letters. 115, 26, 18 p.
Journal article
Search for pair production of a new heavy quark that decays into a W boson and a light quark in pp collisions at √s = 8 TeV with the ATLAS detector
ATLAS Collaboration 22/12/2015 In: Physical Review D. 92, 11, 28 p.
Journal article
Measurement of the branching ratio Γ(Λb0→ψ(2S)Λ0)/Γ(Λb0→J/ψΛ0) with the ATLAS detector
ATLAS Collaboration 17/12/2015 In: Physics Letters B. 751, p. 63-80. 18 p.
Journal article
Search for flavour-changing neutral current top quark decays t → Hq in pp collisions at √s=8 TeV with the ATLAS detector
ATLAS Collaboration 10/12/2015 In: Journal of High Energy Physics. 12, 65 p.
Journal article
Search for high-mass diboson resonances with boson-tagged jets in proton-proton collisions at √s = 8 TeV with the ATLAS detector
ATLAS Collaboration 10/12/2015 In: Journal of High Energy Physics. 12, 39 p.
Journal article
Constraints on new phenomena via Higgs boson couplings and invisible decays with the ATLAS detector
ATLAS Collaboration 30/11/2015 In: Journal of High Energy Physics. 11, 52 p.
Journal article
Search for lepton-flavour-violating H → μτ decays of the Higgs boson with the ATLAS detector
ATLAS Collaboration 30/11/2015 In: Journal of High Energy Physics. 11, 33 p.
Journal article
Measurement of the t ¯ t W and t ¯ t Z production cross sections in pp collisions at √s =8 TeV with the ATLAS detector
ATLAS Collaboration 24/11/2015 In: Journal of High Energy Physics. 11, 48 p.
Journal article
Searches for Higgs boson pair production in the hh→bbττ, γγWW∗, γγbb, bbbb channels with the ATLAS detector
ATLAS Collaboration 5/11/2015 In: Physical Review D. 92, 9, 30 p.
Journal article
Search for new light gauge bosons in Higgs boson decays to four-lepton final states in pp collisions at √s = 8 TeV with the ATLAS detector at the LHC
ATLAS Collaboration 3/11/2015 In: Physical Review D. 92, 9, 30 p.
Journal article
Z boson production in p+Pb collisions at √sNN=5.02 TeV measured with the ATLAS detector
ATLAS Collaboration 30/10/2015 In: Physical Review C. 92, 4, 22 p.
Journal article
Measurement of the production of neighbouring jets in lead–lead collisions at √sNN=2.76 TeV with the ATLAS detector
ATLAS Collaboration 27/10/2015 In: Physics Letters B. 751, p. 376-395. 20 p.
Journal article
Summary of the ATLAS experiment’s sensitivity to supersymmetry after LHC Run 1 — interpreted in the phenomenological MSSM
ATLAS Collaboration 21/10/2015 In: Journal of High Energy Physics. 10, 76 p.
Journal article
Determination of the top-quark pole mass using tt¯ + 1-jet events collected with the ATLAS experiment in 7 TeV pp collisions
ATLAS Collaboration 19/10/2015 In: Journal of High Energy Physics. 10, 41 p.
Journal article
Measurements of the top quark branching ratios into channels with leptons and quarks with the ATLAS detector
ATLAS Collaboration 19/10/2015 In: Physical Review D. 92, 7, 31 p.
Journal article
Search for massive, long-lived particles using multitrack displaced vertices or displaced lepton pairs in pp collisions at √s = 8 TeV with the ATLAS detector
ATLAS Collaboration 13/10/2015 In: Physical Review D. 92, 7, 37 p.
Journal article
Summary of the searches for squarks and gluinos using √s = 8 TeV pp collisions with the ATLAS experiment at the LHC
ATLAS Collaboration 8/10/2015 In: Journal of High Energy Physics. 10, 100 p.
Journal article
Search for the associated production of the Higgs boson with a top quark pair in multilepton final states with the ATLAS detector
ATLAS Collaboration 7/10/2015 In: Physics Letters B. 749, p. 519-541. 23 p.
Journal article
Search for photonic signatures of gauge-mediated supersymmetry in 8 TeV pp collisions with the ATLAS detector
ATLAS Collaboration 6/10/2015 In: Physical Review D. 92, 7, 35 p.
Journal article
Study of the spin and parity of the Higgs boson in diboson decays with the ATLAS detector
ATLAS Collaboration 6/10/2015 In: European Physical Journal C: Particles and Fields. 75, 10, 36 p.
Journal article
Measurement of colour flow with the jet pull angle in tt events using the ATLAS detector at √s = 8 TeV
ATLAS Collaboration 26/09/2015 In: Physics Letters B. 750, p. 475-493. 19 p.
Journal article
Measurement of transverse energy–energy correlations in multi-jet events in pp collisions at √s = 7 TeV using the ATLAS detector and determination of the strong coupling constant αs(mZ)
ATLAS Collaboration 26/09/2015 In: Physics Letters B. 750, p. 427-447. 21 p.
Journal article
Search for dark matter in events with missing transverse momentum and a Higgs boson decaying to two photons in pp collisions at √s = 8 TeV with the ATLAS detector
ATLAS Collaboration 22/09/2015 In: Physical Review Letters. 115, 13, 19 p.
Journal article
Search for heavy lepton resonances decaying to a Z boson and a lepton in pp collisions at √s =8 TeV with the ATLAS detector
ATLAS Collaboration 16/09/2015 In: Journal of High Energy Physics. 9, 38 p.
Journal article
Modelling Z → ττ processes in ATLAS with τ-embedded Z → μμ data
ATLAS Collaboration 15/09/2015 In: Journal of Instrumentation. 10, 42 p.
Journal article
Measurement of differential J/ψ production cross sections and forward-backward ratios in p + Pb collisions with the ATLAS detector
ATLAS Collaboration 14/09/2015 In: Physical Review C. 92, 3, 23 p.
Journal article
Measurement of the correlation between flow harmonics of different order in lead-lead collisions at √sNN=2.76 TeV with the ATLAS detector
ATLAS Collaboration 14/09/2015 In: Physical Review C. 92, 3, 30 p.
Journal article
Measurement of charged-particle spectra in Pb+Pb collisions at √sNN = 2.76 TeV with the ATLAS detector at the LHC
ATLAS Collaboration 9/09/2015 In: Journal of High Energy Physics. 9, 51 p.
Journal article
Measurement of the forward-backward asymmetry of electron and muon pair-production in pp collisions at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 9/09/2015 In: Journal of High Energy Physics. 9, 43 p.
Journal article
Search for Higgs boson pair production in the bb¯bb¯ final state from pp collisions at √s = 8 TeV with the ATLAS detector
ATLAS Collaboration 9/09/2015 In: European Physical Journal C: Particles and Fields. 75, 9, 32 p.
Journal article
Search for Higgs bosons decaying to aa in the μμττ final state in pp collisions at √s = 8 TeV with the ATLAS experiment
ATLAS Collaboration 9/09/2015 In: Physical Review D. 92, 5, 24 p.
Journal article
A search for tt¯ resonances using lepton-plus-jets events in proton-proton collisions at √s = 8 TeV with the ATLAS detector
ATLAS Collaboration 28/08/2015 In: Journal of High Energy Physics. 8, 54 p.
Journal article
Study of (W/Z)H production and Higgs boson couplings using H→ W W ∗ decays with the ATLAS detector
ATLAS Collaboration 27/08/2015 In: Journal of High Energy Physics. 8, 65 p.
Journal article
Search for high-mass diphoton resonances in pp collisions at √s = 8 TeV with the ATLAS detector
ATLAS Collaboration 1/08/2015 In: Physical Review D. 92, 3, 22 p.
Journal article
Search for type-III seesaw heavy leptons in pp collisions at √s = 8 TeV with the ATLAS detector
ATLAS Collaboration 1/08/2015 In: Physical Review D. 92, 3, 20 p.
Journal article
Search for new phenomena in events with three or more charged leptons in pp collisions at √s = 8 TeV with the ATLAS detector
ATLAS Collaboration 08/2015 In: Journal of High Energy Physics. 8, 60 p.
Journal article
Search for production of vector-like quark pairs and of four top quarks in the lepton-plus-jets final state in pp collisions at √s = 8 TeV with the ATLAS detector
ATLAS Collaboration 08/2015 In: Journal of High Energy Physics. 8, 86 p.
Journal article
Search for invisible decays of the Higgs boson produced in association with a hadronically decaying vector boson in pp collisions at √s = 8 TeV with the ATLAS detector
ATLAS Collaboration 18/07/2015 In: European Physical Journal C: Particles and Fields. 75, 7, 24 p.
Journal article
Centrality and rapidity dependence of inclusive jet production in √sNN = 5.02 TeV proton–lead collisions with the ATLAS detector
ATLAS Collaboration 17/07/2015 In: Physics Letters B. 748, p. 392-413. 22 p.
Journal article
Constraints on the off-shell Higgs boson signal strength in the high-mass ZZ and WW final states with the ATLAS detector
ATLAS Collaboration 17/07/2015 In: European Physical Journal C: Particles and Fields. 75, 7, 34 p.
Journal article
Evidence of Wγγ production in pp collisions at √s =8 TeV and limits on anomalous quartic gauge couplings with the ATLAS detector
ATLAS Collaboration 17/07/2015 In: Physical Review Letters. 115, 3, 18 p.
Journal article
Search for a heavy neutral particle decaying to eμ, eτ, or μτ in pp collisions at √s =8 TeV with the ATLAS detector
ATLAS Collaboration 14/07/2015 In: Physical Review Letters. 115, 3, 18 p.
Journal article
Search for low-scale gravity signatures in multi-jet final states with the ATLAS detector at √s = 8 TeV
ATLAS Collaboration 7/07/2015 In: Journal of High Energy Physics. 7, 38 p.
Journal article
Search for heavy Majorana neutrinos with the ATLAS detector in pp collisions at √s = 8 TeV
ATLAS Collaboration 5/07/2015 In: Journal of High Energy Physics. 2015, 7, 44 p.
Journal article
A search for high-mass resonances decaying to τ+ τ− in pp collisions at √s =8 TeV with the ATLAS detector
ATLAS Collaboration 1/07/2015 In: Journal of High Energy Physics. 2015, 7, 44 p.
Journal article
Observation and measurement of Higgs boson decays to WW∗ with the ATLAS detector
ATLAS Collaboration 1/07/2015 In: Physical Review D. 92, 1, 84 p.
Journal article
Search for long-lived, weakly interacting particles that decay to displaced hadronic jets in proton-proton collisions at √s =8 TeV with the ATLAS detector
ATLAS Collaboration 1/07/2015 In: Physical Review D. 92, 1, 28 p.
Journal article
Search for new phenomena in final states with an energetic jet and large missing transverse momentum in pp collisions at √s = 8 TeV with the ATLAS detector
ATLAS Collaboration 1/07/2015 In: European Physical Journal C: Particles and Fields. 75, 7, 43 p.
Journal article
Search for supersymmetry in events containing a same-flavour opposite-sign dilepton pair, jets, and large missing transverse momentum in √s=8 TeV pp collisions with the ATLAS detector
ATLAS Collaboration 07/2015 In: European Physical Journal C: Particles and Fields. 75, 40 p.
Journal article
Search for the Standard Model Higgs boson produced in association with top quarks and decaying into bb in pp collisions at √s =8TeV with the ATLAS detector
ATLAS Collaboration 07/2015 In: European Physical Journal C: Particles and Fields. 75, 50 p.
Journal article
Search for massive supersymmetric particles decaying to many jets using the ATLAS detector in pp collisions at √s = 8 TeV
ATLAS Collaboration 29/06/2015 In: Physical Review D. 91, 11, 37 p.
Journal article
Measurement of the top pair production cross section in 8 TeV proton-proton collisions using kinematic information in the lepton+jets final state with ATLAS
ATLAS Collaboration 24/06/2015 In: Physical Review D. 91, 11, 25 p.
Journal article
Differential top-antitop cross-section measurements as a function of observables constructed from final-state particles using pp collisions at √s = 7 TeV in the ATLAS detector
ATLAS Collaboration 16/06/2015 In: Journal of High Energy Physics. 2015, 6, 56 p.
Journal article
Search for a new resonance decaying to a W or Z boson and a Higgs boson in the ℓℓ/ℓν/νν+bb final states with the ATLAS detector
ATLAS Collaboration 16/06/2015 In: European Physical Journal C: Particles and Fields. 75, 6, 21 p.
Journal article
Search for a charged Higgs Boson produced in the Vector-Boson fusion mode with decay H± → W±Z using pp collisions at √s = 8 TeV with the ATLAS experiment
ATLAS Collaboration 12/06/2015 In: Physical Review Letters. 114, 23, 18 p.
Journal article
Search for New Phenomena in Dijet Angular Distributions in Proton-Proton Collisions at √s = 8 TeV Measured with the ATLAS Detector
ATLAS Collaboration 5/06/2015 In: Physical Review Letters. 114, 22, 17 p.
Journal article
Measurement of three-jet production cross-sections in pp collisions at 7 TeV centre-of-mass energy using the ATLAS detector
ATLAS Collaboration 27/05/2015 In: European Physical Journal C: Particles and Fields. 75, 5, 33 p.
Journal article
Observation and measurements of the production of prompt and non-prompt J/ψ mesons in association with a Z boson in pp collisions at √s = 8 TeV with the ATLAS detector
ATLAS Collaboration 27/05/2015 In: European Physical Journal C: Particles and Fields. 75, 5, 29 p.
Journal article
Combined measurement of the Higgs Boson Mass in pp collisions at √s=7 and 8 TeV with the ATLAS and CMS experiments
ATLAS Collaboration 14/05/2015 In: Physical Review Letters. 114, 19, 33 p.
Journal article
Measurement of the charge asymmetry in dileptonic decays of top quark pairs in pp collisions at √s = 7 TeV using the ATLAS detector
ATLAS Collaboration 12/05/2015 In: Journal of High Energy Physics. 2015, 5, 50 p.
Journal article
Search for direct pair production of a chargino and a neutralino decaying to the 125 GeV Higgs boson in √s = 8 TeV pp collisions with the ATLAS detector
ATLAS Collaboration 12/05/2015 In: European Physical Journal C: Particles and Fields. 75, 5, 31 p.
Journal article
Search for production of WW/WZ resonances decaying to a lepton, neutrino and jets in pp collisions at √s = 8 TeV with the ATLAS detector
ATLAS Collaboration 12/05/2015 In: European Physical Journal C: Particles and Fields. 75, 5, 20 p.
Journal article
Search for a CP-odd Higgs boson decaying to Zh in pp collisions at √s = 8 TeV with the ATLAS detector
ATLAS Collaboration 11/05/2015 In: Physics Letters B. 744, p. 163-183. 21 p.
Journal article
Observation of top-quark pair production in association with a photon and measurement of the tt¯γ production cross section in pp collisions at √s=7 TeV using the ATLAS detector
ATLAS Collaboration 28/04/2015 In: Physical Review D. 91, 7, 28 p.
Journal article
Search for W′→tb→qqbb decays in pp collisions at √s = 8 TeV with the ATLAS detector
ATLAS Collaboration 24/04/2015 In: European Physical Journal C: Particles and Fields. 75, 4, 23 p.
Journal article
Measurement of the top-quark mass in the fully hadronic decay channel from ATLAS data at √s = 7 TeV
ATLAS Collaboration 23/04/2015 In: European Physical Journal C: Particles and Fields. 75, 4, 26 p.
Journal article
Search for scalar charm quark pair production in pp collisions at √s=8 TeV with the ATLAS detector
ATLAS Collaboration 22/04/2015 In: Physical Review Letters. 114, 16, 19 p.
Journal article
Search for squarks and gluinos in events with isolated leptons, jets and missing transverse momentum at √s=8 TeV with the ATLAS detector
ATLAS Collaboration 21/04/2015 In: Journal of High Energy Physics. 2015, 4, 75 p.
Journal article
Search for W′ → tb¯ in the lepton plus jets final state in proton–proton collisions at a centre-of-mass energy of √s = 8 TeV with the ATLAS detector
ATLAS Collaboration 9/04/2015 In: Physics Letters B. 743, p. 235-255. 21 p.
Journal article
Search for pair-produced long-lived neutral particles decaying to jets in the ATLAS hadronic calorimeter in pp collisions at √s = 8 TeV
ATLAS Collaboration 9/04/2015 In: Physics Letters B. 743, p. 15-34. 20 p.
Journal article
Measurement of spin correlation in top-antitop quark events and search for top squark pair production in pp collisions at √s=8 TeV using the ATLAS detector
ATLAS Collaboration 8/04/2015 In: Physical Review Letters. 114, 14, 19 p.
Journal article
Evidence for the Higgs-boson Yukawa coupling to tau leptons with the ATLAS detector
ATLAS Collaboration 30/03/2015 In: Journal of High Energy Physics. 2015, 4, 74 p.
Journal article
Search for Higgs and Z boson decays to J/ψγ and Υ(nS)γ with the ATLAS detector
ATLAS Collaboration 26/03/2015 In: Physical Review Letters. 114, 12, 19 p.
Journal article
Performance of the ATLAS muon trigger in pp collisions at √s = 8 TeV
ATLAS Collaboration 13/03/2015 In: European Physical Journal C: Particles and Fields. 75, 31 p.
Journal article
Search for anomalous production of prompt same-sign lepton pairs and pair-produced doubly charged Higgs bosons with √s=8 TeV pp collisions using the ATLAS detector
ATLAS Collaboration 9/03/2015 In: Journal of High Energy Physics. 2015, 3, 48 p.
Journal article
Search for new phenomena in the dijet mass distribution using pp collision data at √s=8 TeV with the ATLAS detector
ATLAS Collaboration 9/03/2015 In: Physical Review D. 91, 5, 25 p.
Journal article
Simultaneous measurements of the tt¯, W+W−, and Z/γ∗→ττ production cross-sections in pp collisions at √s=7 TeV with the ATLAS detector
ATLAS Collaboration 6/03/2015 In: Physical Review D. 91, 5, 34 p.
Journal article
Search for Higgs Boson pair production in the γγbb¯ final state using pp collision data at √s=8 TeV from the ATLAS detector
ATLAS Collaboration 26/02/2015 In: Physical Review Letters. 114, 19 p.
Journal article
Measurement of the inclusive jet cross-section in proton-proton collisions at √s=7 TeV using 4.5 fb−1 of data with the ATLAS detector
ATLAS Collaboration 24/02/2015 In: Journal of High Energy Physics. 2015, 54 p.
Journal article
Search for dark matter in events with heavy quarks and missing transverse momentum in pp collisions with the ATLAS detector
ATLAS Collaboration 24/02/2015 In: European Physical Journal C: Particles and Fields. 75, 2, 22 p.
Journal article
Measurements of the nuclear modification factor for jets in Pb+Pb collisions at √sNN=2.76 TeV with the ATLAS detector
ATLAS Collaboration 20/02/2015 In: Physical Review Letters. 114, 18 p.
Journal article
Measurements of the W production cross sections in association with jets with the ATLAS detector
ATLAS Collaboration 19/02/2015 In: European Physical Journal C: Particles and Fields. 75, 2, 46 p.
Journal article
Measurement of the transverse polarization of Λ and Λ¯ hyperons produced in proton-proton collisions at √s=7 TeV using the ATLAS detector
ATLAS Collaboration 10/02/2015 In: Physical Review D. 91, 23 p.
Journal article
Search for resonant diboson production in the ℓℓqq¯ final state in pp collisions at √s = 8 TeV with the ATLAS detector
ATLAS Collaboration 10/02/2015 In: European Physical Journal C: Particles and Fields. 75, 2, 20 p.
Journal article
Search for new phenomena in events with a photon and missing transverse momentum in pp collisions at √s=8 TeV with the ATLAS detector
ATLAS Collaboration 27/01/2015 In: Physical Review D. 91, 25 p.
Journal article
Measurement of the production and lepton charge asymmetry of W bosons in Pb+Pb collisions at √sNN = 2.76 TeV with the ATLAS detector
ATLAS Collaboration 22/01/2015 In: European Physical Journal C: Particles and Fields. 75, 30 p.
Journal article
Jet energy measurement and its systematic uncertainty in proton-proton collisions at √s=7 TeV with the ATLAS detector
ATLAS Collaboration 15/01/2015 In: European Physical Journal C: Particles and Fields. 75, 101 p.
Journal article
Search for the bb¯ decay of the Standard Model Higgs boson in associated (W/Z)H production with the ATLAS detector
ATLAS Collaboration 14/01/2015 In: Journal of High Energy Physics. 2015, 1, 89 p.
Journal article
Searches for heavy long-lived charged particles with the ATLAS detector in proton-proton collisions at √s = 8 TeV
ATLAS Collaboration 14/01/2015 In: Journal of High Energy Physics. 2015, 1, 51 p.
Journal article
Measurement of the tt¯ production cross-section as a function of jet multiplicity and jet transverse momentum in 7 TeV proton-proton collisions with the ATLAS detector
ATLAS Collaboration 8/01/2015 In: Journal of High Energy Physics. 2015, 66 p.
Journal article
Search for s-channel single top-quark production in proton–proton collisions at √s = 8 TeV with the ATLAS detector
ATLAS Collaboration 5/01/2015 In: Physics Letters B. 740, p. 118-136. 19 p.
Journal article
Search for H→γγ produced in association with top quarks and constraints on the Yukawa coupling between the top quark and the Higgs boson using data taken at 7 TeV and 8 TeV with the ATLAS detector
ATLAS Collaboration 5/01/2015 In: Physics Letters B. 740, p. 222-242. 21 p.
Journal article
Measurements of the total and differential Higgs Boson production cross sections combining the H→γγ and H→ZZ∗→4ℓ decay channels at √s = 8 TeV with the ATLAS detector
ATLAS Collaboration 2015 In: Physical Review Letters. 115, 9, 19 p.
Journal article
Search for charged Higgs bosons decaying via H± → τ±ν in fully hadronic final states using pp collision data at √s = 8 TeV with the ATLAS detector
ATLAS Collaboration 2015 In: Journal of High Energy Physics. 2015, 3, 45 p.
Journal article
Search for heavy long-lived multi-charged particles in pp collisions at √s = 8 TeV using the ATLAS detector
ATLAS Collaboration 2015 In: European Physical Journal C: Particles and Fields. 75, 8, 23 p.
Journal article
Search for invisible particles produced in association with single-top-quarks in proton–proton collisions at √s=8 TeV with the ATLAS detector
ATLAS Collaboration 2015 In: European Physical Journal C: Particles and Fields. 75, 2, 24 p.
Journal article
Measurement of Higgs boson production in the diphoton decay channel in pp collisions at center-of-mass energies of 7 and 8 TeV with the ATLAS detector
ATLAS Collaboration 24/12/2014 In: Physical Review D. 90, 11, 44 p.
Journal article
Measurements of spin correlation in top-antitop quark events from proton-proton collisions at √s = 7 TeV using the ATLAS detector
ATLAS Collaboration 24/12/2014 In: Physical Review D. 90, 11, 32 p.
Journal article
Measurement of inclusive jet charged-particle fragmentation functions in Pb+Pb collisions at √sNN = 2.76 TeV with the ATLAS detector
ATLAS Collaboration 12/12/2014 In: Physics Letters B. 739, p. 320-342. 23 p.
Journal article
Comprehensive measurements of t-channel single top-quark production cross sections at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 11/12/2014 In: Physical Review D. 90, 11, 45 p.
Journal article
Search for nonpointing and delayed photons in the diphoton and missing transverse momentum final state in 8 TeV pp collisions at the LHC using the ATLAS detector
ATLAS Collaboration 10/12/2014 In: Physical Review D. 90, 11, 29 p.
Journal article
Observation of an Excited Bc± Meson State with the ATLAS Detector
ATLAS Collaboration 21/11/2014 In: Physical Review Letters. 113, 21, 18 p.
Journal article
Search for pair and single production of new heavy quarks that decay to a Z boson and a third-generation quark in pp collisions at √s = 8 TeV with the ATLAS detector
ATLAS Collaboration 19/11/2014 In: Journal of High Energy Physics. 2014, 11, 54 p.
Journal article
Search for long-lived neutral particles decaying into lepton jets in proton-proton collisions at √s = 8 TeV with the ATLAS detector
ATLAS Collaboration 18/11/2014 In: Journal of High Energy Physics. 2014, 11, 47 p.
Journal article
Fiducial and differential cross sections of Higgs boson production measured in the four-lepton decay channel in pp collisions at √s = 8 TeV with the ATLAS detector
ATLAS Collaboration 10/11/2014 In: Physics Letters B. 738, p. 234-253. 20 p.
Journal article
Measurement of the cross section of high transverse momentum Z → bb¯ production in proton-proton collisions at √s = 8 TeV with the ATLAS detector
ATLAS Collaboration 10/11/2014 In: Physics Letters B. 738, p. 25-43. 19 p.
Journal article
Search for new resonances in Wγ and Zγ final states in pp collisions at √s = 8 TeV with the ATLAS detector
ATLAS Collaboration 10/11/2014 In: Physics Letters B. 738, p. 428-447. 20 p.
Journal article
Search for the Standard Model Higgs boson decay to μ+μ− with the ATLAS detector
ATLAS Collaboration 10/11/2014 In: Physics Letters B. 738, p. 68-86. 19 p.
Journal article
Measurement of the tt¯ production cross-section using eμ events with b-tagged jets in pp collisions at √s = 7 and 8 TeV with the ATLAS detector
ATLAS Collaboration 29/10/2014 In: European Physical Journal C: Particles and Fields. 74, 10, 32 p.
Journal article
Search for Scalar Diphoton Resonances in the Mass Range 65–600 GeV with the ATLAS Detector in pp Collision Data at √s = 8 TeV
ATLAS Collaboration 20/10/2014 In: Physical Review Letters. 113, 17, 18 p.
Journal article
Search for WZ resonances in the fully leptonic channel using pp collisions at √s = 8 TeV with the ATLAS detector
ATLAS Collaboration 7/10/2014 In: Physics Letters B. 737, p. 223-243. 21 p.
Journal article
Electron and photon energy calibration with the ATLAS detector using LHC Run 1 data
ATLAS Collaboration 1/10/2014 In: European Physical Journal C: Particles and Fields. 74, 10, 48 p.
Journal article
Measurement of differential production cross-sections for a Z boson in association with b-jets in 7 TeV proton-proton collisions with the ATLAS detector
ATLAS Collaboration 10/2014 In: Journal of High Energy Physics. 2014, 10, 48 p.
Journal article
Search for squarks and gluinos with the ATLAS detector in final states with jets and missing transverse momentum using √s = 8 TeV proton-proton collision data
ATLAS Collaboration 30/09/2014 In: Journal of High Energy Physics. 2014, 9, 52 p.
Journal article
Search for pair-produced third-generation squarks decaying via charm quarks or in compressed supersymmetric scenarios in pp collisions at √s = 8 TeV with the ATLAS detector
ATLAS Collaboration 24/09/2014 In: Physical Review D. 90, 5, 36 p.
Journal article
Flavor tagged time-dependent angular analysis of the Bs0→J/ψϕ decay and extraction of ΔΓs and the weak phase ϕs in ATLAS
ATLAS Collaboration 23/09/2014 In: Physical Review D. 90, 5, 26 p.
Journal article
Search for high-mass dilepton resonances in pp collisions at √s = 8 TeV with the ATLAS detector
ATLAS Collaboration 19/09/2014 In: Physical Review D. 90, 5, 30 p.
Journal article
Search for supersymmetry in events with large missing transverse momentum, jets, and at least one tau lepton in 20 fb−1 of √s = 8 TeV proton-proton collision data with the ATLAS detector
ATLAS Collaboration 18/09/2014 In: Journal of High Energy Physics. 2014, 9, 54 p.
Journal article
Measurement of the production cross-section of ψ(2S) → J/ψ(→μ+μ−)π+π− in pp collisions at √s = 7 TeV at ATLAS
ATLAS Collaboration 12/09/2014 In: Journal of High Energy Physics. 2014, 9, p. 1-49. 49 p.
Journal article
Search for new particles in events with one lepton and missing transverse momentum in pp collisions at √s = 8 TeV with the ATLAS detector
ATLAS Collaboration 5/09/2014 In: Journal of High Energy Physics. 2014, 9, 43 p.
Journal article
Search for supersymmetry in events with four or more leptons in √s = 8 TeV pp collisions with the ATLAS detector
ATLAS Collaboration 4/09/2014 In: Physical Review D. 90, 33 p.
Journal article
Search for direct pair production of the top squark in all-hadronic final states in proton-proton collisions at √s = 8 TeV with the ATLAS detector
ATLAS Collaboration 1/09/2014 In: Journal of High Energy Physics. 2014, 9
Journal article
A neural network clustering algorithm for the ATLAS silicon pixel detector
ATLAS Collaboration 09/2014 In: Journal of Instrumentation. 9, 9, 35 p.
Journal article
Operation and performance of the ATLAS semiconductor tracker
ATLAS Collaboration 27/08/2014 In: Journal of Instrumentation. 9, 74 p.
Journal article
Light-quark and gluon jet discrimination in pp collisions at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 21/08/2014 In: European Physical Journal C: Particles and Fields. 74, 8, 29 p.
Journal article
Search for microscopic black holes and string balls in final states with leptons and jets with the ATLAS detector at √s = 8 TeV
ATLAS Collaboration 18/08/2014 In: Journal of High Energy Physics. 2014, 48 p.
Journal article
Measurement of the centrality and pseudorapidity dependence of the integrated elliptic flow in lead–lead collisions at √sNN = 2.76 TeV with the ATLAS detector
ATLAS Collaboration 13/08/2014 In: European Physical Journal C: Particles and Fields. 74, 8, 25 p.
Journal article
Measurement of event-plane correlations in √sNN = 2.76 TeV lead-lead collisions with the ATLAS detector
ATLAS Collaboration 12/08/2014 In: Physical Review C. 90, 29 p.
Journal article
Measurement of the underlying event in jet events from 7 TeV proton–proton collisions with the ATLAS detector
ATLAS Collaboration 12/08/2014 In: European Physical Journal C: Particles and Fields. 74, 8, 29 p.
Journal article
Measurement of χc1 and χc2 production with √s = 7 TeV pp collisions at ATLAS
ATLAS Collaboration 30/07/2014 In: Journal of High Energy Physics. 2014, 7, 52 p.
Journal article
Electron reconstruction and identification efficiency measurements with the ATLAS detector using the 2011 LHC proton–proton collision data
ATLAS Collaboration 15/07/2014 In: European Physical Journal C: Particles and Fields. 74, 7, 38 p.
Journal article
Search for dark matter in events with a Z boson and missing transverse momentum in pp collisions at √s = 8 TeV with the ATLAS detector
ATLAS Collaboration 10/07/2014 In: Physical Review D. 90, 21 p.
Journal article
The differential production cross section of the ϕ(1020) meson in √s = 7 TeV pp collisions measured with the ATLAS detector
ATLAS Collaboration 07/2014 In: European Physical Journal C: Particles and Fields. 74, 7, 21 p.
Journal article
Search for direct top-squark pair production in final states with two leptons in pp collisions at √s = 8 TeV with the ATLAS detector
ATLAS Collaboration 19/06/2014 In: Journal of High Energy Physics. 2014, 6, 65 p.
Journal article
Measurement of the low-mass Drell-Yan differential cross section at √s = 7 TeV using the ATLAS detector
ATLAS Collaboration 18/06/2014 In: Journal of High Energy Physics. 2014, 6, 46 p.
Journal article
Measurements of four-lepton production at the Z resonance in pp collisions at √s = 7 and 8 TeV with ATLAS
ATLAS Collaboration 13/06/2014 In: Physical Review Letters. 112, 23, 18 p.
Journal article
Search for supersymmetry at √s = 8 TeV in final states with jets and two same-sign leptons or three leptons with the ATLAS detector
ATLAS Collaboration 6/06/2014 In: Journal of High Energy Physics. 2014, 6, 50 p.
Journal article
Search for direct top squark pair production in events with a Z boson, b-jets and missing transverse momentum in √s = 8 TeV pp collisions with the ATLAS detector
ATLAS Collaboration 3/06/2014 In: European Physical Journal C: Particles and Fields. 74, 6, 25 p.
Journal article
Search for top quark decays t → qH with H → γγ using the ATLAS detector
ATLAS Collaboration 06/2014 In: Journal of High Energy Physics. 2014, 6, 39 p.
Journal article
Measurement of the parity-violating asymmetry parameter αb and the helicity amplitudes for the decay Λ0b→J/ψΛ0 with the ATLAS detector
ATLAS Collaboration 27/05/2014 In: Physical Review D. 89, 9, 25 p.
Journal article
Search for invisible decays of a Higgs boson produced in association with a Z boson in ATLAS
ATLAS Collaboration 20/05/2014 In: Physical Review Letters. 112, 19 p.
Journal article
Measurement of dijet cross-sections in pp collisions at 7 TeV centre-of-mass energy using the ATLAS detector
ATLAS Collaboration 14/05/2014 In: Journal of High Energy Physics. 2014, 5, 66 p.
Journal article
Search for Higgs boson decays to a photon and a Z boson in pp collisions at √s = 7 and 8 TeV with the ATLAS detector
ATLAS Collaboration 1/05/2014 In: Physics Letters B. 732, p. 8-27. 20 p.
Journal article
Measurement of the production cross section of prompt J/ψ mesons in association with a W± boson in pp collisions at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 28/04/2014 In: Journal of High Energy Physics. 2014, 4, 36 p.
Journal article
Search for direct production of charginos and neutralinos in events with three leptons and missing transverse momentum in √s = 8 TeV pp collisions with the ATLAS detector
ATLAS Collaboration 28/04/2014 In: Journal of High Energy Physics. 2014, 4, 45 p.
Journal article
Study of heavy-flavor quarks produced in association with top-quark pairs at √s = 7 TeV using the ATLAS detector
ATLAS Collaboration 21/04/2014 In: Physical Review D. 89, 7, 23 p.
Journal article
Measurement of the electroweak production of dijets in association with a Z-boson and distributions sensitive to vector boson fusion in proton-proton collisions at √s = 8 TeV using the ATLAS detector
ATLAS Collaboration 7/04/2014 In: Journal of High Energy Physics. 2014, 4, 55 p.
Journal article
Measurement of the inclusive isolated prompt photons cross section in pp collisions at √s = 7 TeV with the ATLAS detector using 4.6 fb−1
ATLAS Collaboration 24/03/2014 In: Physical Review D. 89, 5, 24 p.
Journal article
Search for quantum black hole production in high-invariant-mass lepton plus jet final states using pp collisions at √s = 8 TeV and the ATLAS detector
ATLAS Collaboration 5/03/2014 In: Physical Review Letters. 112, 9, 18 p.
Journal article
Measurement of the top quark pair production charge asymmetry in proton-proton collisions at √s = 7 TeV using the ATLAS detector
ATLAS Collaboration 25/02/2014 In: Journal of High Energy Physics. 2014, 2, 37 p.
Journal article
Search for a multi-Higgs-boson cascade in W+W−bb¯ events with the ATLAS detector in pp collisions at √s = 8 TeV
ATLAS Collaboration 19/02/2014 In: Physical Review D. 89, 3, 23 p.
Journal article
Standalone vertex finding in the ATLAS muon spectrometer
ATLAS Collaboration 02/2014 In: Journal of Instrumentation. 9, 38 p.
Journal article
Search for dark matter in events with a hadronically decaying W or Z boson and missing transverse momentum in pp collisions at √s = 8 TeV with the ATLAS detector
ATLAS Collaboration 29/01/2014 In: Physical Review Letters. 112, 4, 17 p.
Journal article
Erratum: Search for new phenomena in final states with large jet multiplicities and missing transverse momentum at √s = 8 TeV proton-proton collisions using the ATLAS experiment (vol 10, pg 130, 2013)
ATLAS Collaboration 21/01/2014 In: Journal of High Energy Physics. 2014, 1, 18 p.
Journal article
Measurement of the mass difference between top and anti-top quarks in pp collisions at √s = 7 TeV using the ATLAS detector
ATLAS Collaboration 20/01/2014 In: Physics Letters B. 728, p. 363-379. 17 p.
Journal article
Search for new phenomena in photon plus jet events collected in proton-proton collisions at √s = 8 TeV with the ATLAS detector
ATLAS Collaboration 20/01/2014 In: Physics Letters B. 728, p. 562-578. 17 p.
Journal article
Measurements of normalized differential cross sections for tt¯ production in pp collisions at √s = 7 TeV using the ATLAS detector
ATLAS Collaboration 2014 In: Physical Review D. 90, 7, 42 p.
Journal article
Measurement of jet shapes in top-quark pair events at √s = 7 TeV using the ATLAS detector
ATLAS Collaboration 11/12/2013 In: European Physical Journal C: Particles and Fields. 73, 12, 31 p.
Journal article
Search for charginos nearly mass degenerate with the lightest neutralino based on a disappearing-track signature in pp collisions at √s = 8 TeV with the ATLAS detector
ATLAS Collaboration 6/12/2013 In: Physical Review D. 88, 11, 23 p.
Journal article
Measurement of Top Quark Polarization in Top-Antitop Events from Proton-Proton Collisions at √s = 7 TeV Using the ATLAS Detector
ATLAS Collaboration 4/12/2013 In: Physical Review Letters. 111, 23, 19 p.
Journal article
Search for long-lived stopped R-hadrons decaying out of time with pp collisions using the ATLAS detector
ATLAS Collaboration 3/12/2013 In: Physical Review D. 88, 11, 30 p.
Journal article
Measurement of the distributions of event-by-event flow harmonics in lead-lead collisions at √sNN = 2.76 TeV with the ATLAS detector at the LHC
ATLAS Collaboration 25/11/2013 In: Journal of High Energy Physics. 2013, 57 p.
Journal article
Measurement of the top quark charge in pp collisions at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 11/2013 In: Journal of High Energy Physics. 2013, 31, 41 p.
Journal article
Search for direct third-generation squark pair production in final states with missing transverse momentum and two b-jets in √s = 8 TeV pp collisions with the ATLAS detector
ATLAS Collaboration 29/10/2013 In: Journal of High Energy Physics. 2013, 10, 40 p.
Journal article
Dynamics of isolated-photon plus jet production in pp collisions at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 21/10/2013 In: Nuclear Physics B. 875, 3, p. 483-535. 53 p.
Journal article
Search for new phenomena in final states with large jet multiplicities and missing transverse momentum at √s = 8 TeV proton-proton collisions using the ATLAS experiment
ATLAS Collaboration 21/10/2013 In: Journal of High Energy Physics. 2013, 10, 49 p.
Journal article
Measurement of the Azimuthal Angle Dependence of Inclusive Jet Yields in Pb+Pb Collisions at √sNN=2.76 TeV with the ATLAS Detector
ATLAS Collaboration 9/10/2013 In: Physical Review Letters. 111, 15, 18 p.
Journal article
Measurement of the differential cross-section of B+ meson production in pp collisions at √s = 7 TeV at ATLAS
ATLAS Collaboration 8/10/2013 In: Journal of High Energy Physics. 2013, 10, 37 p.
Journal article
Evidence for the spin-0 nature of the Higgs boson using ATLAS data
ATLAS Collaboration 7/10/2013 In: Physics Letters B. 726, 1-3, p. 120-144. 25 p.
Journal article
Measurements of Higgs boson production and couplings in diboson final states with the ATLAS detector at the LHC
ATLAS Collaboration 7/10/2013 In: Physics Letters B. 726, 1-3, p. 88-119. 32 p.
Journal article
Measurement of the high-mass Drell–Yan differential cross-section in pp collisions at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 1/10/2013 In: Physics Letters B. 725, 4-5, p. 223-242. 20 p.
Journal article
Search for microscopic black holes in a like-sign dimuon final state using large track multiplicity with the ATLAS detector
ATLAS Collaboration 1/10/2013 In: Physical Review D. 88, 7, 22 p.
Journal article
Search for excited electrons and muons in √s = 8 TeV proton–proton collisions with the ATLAS detector
ATLAS Collaboration 6/09/2013 In: New Journal of Physics. 15, 33 p.
Journal article
Performance of jet substructure techniques for large-R jets in proton-proton collisions at √s = 7 TeV using the ATLAS detector
ATLAS Collaboration 09/2013 In: Journal of High Energy Physics. 2013, 9
Journal article
Measurement with the ATLAS detector of multi-particle azimuthal correlations in p+Pb collisions at √sNN = 5.02 TeV
ATLAS Collaboration 9/08/2013 In: Physics Letters B. 725, 1-3, p. 60-78. 19 p.
Journal article
Measurement of charged-particle event shape variables in inclusive √s = 7 TeV proton-proton interactions with the ATLAS detector
ATLAS Collaboration 6/08/2013 In: Physical Review D. 88, 3, 25 p.
Journal article
Improved luminosity determination in pp collisions at √s = 7 TeV using the ATLAS detector at the LHC
ATLAS Collaboration 08/2013 In: European Physical Journal C: Particles and Fields. 73, 8, 39 p.
Journal article
Measurement of the inclusive jet cross-section in pp collisions at √s = 2.76 TeV and comparison to the inclusive jet cross-section at √s = 7 TeV using the ATLAS detector
ATLAS Collaboration 08/2013 In: European Physical Journal C: Particles and Fields. 73, 8, 56 p.
Journal article
Search for tt¯ resonances in the lepton plus jets final state with ATLAS using 4.7 fb−1 of pp collisions at √s = 7 TeV
ATLAS Collaboration 23/07/2013 In: Physical Review D. 88, 1, 28 p.
Journal article
Characterisation and mitigation of beam-induced backgrounds observed in the ATLAS detector during the 2011 proton-proton run
ATLAS Collaboration 17/07/2013 In: Journal of Instrumentation. 8, 7, 72 p.
Journal article
Measurement of the production cross section of jets in association with a Z boson in pp collisions at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 07/2013 In: Journal of High Energy Physics. 2013, 7, 50 p.
Journal article
Triggers for displaced decays of long-lived neutral particles in the ATLAS detector
ATLAS Collaboration 07/2013 In: Journal of Instrumentation. 8, 7, 35 p.
Journal article
Search for resonant diboson production in the WW/WZ→ℓνjj decay channels with the ATLAS detector at √s=7 TeV
ATLAS Collaboration 17/06/2013 In: Physical Review D. 87, 11, 22 p.
Journal article
Search for a heavy narrow resonance decaying to eμ, eτ, or μτ with the ATLAS detector in √s=7 TeV pp collisions at the LHC
ATLAS Collaboration 10/06/2013 In: Physics Letters B. 723, 1-3, p. 15-32. 18 p.
Journal article
Measurements of Wγ and Zγ production in pp collisions at √s=7 TeV with the ATLAS detector at the LHC
ATLAS Collaboration 4/06/2013 In: Physical Review D. 87, 11, 40 p.
Journal article
Measurement of W+W- production in pp collisions at √s=7 TeV with the ATLAS detector and limits on anomalous WWZ and WWγ couplings
ATLAS Collaboration 3/06/2013 In: Physical Review D. 87, 11, 29 p.
Journal article
Measurement of the cross-section for W boson production in association with b-jets in pp collisions at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 06/2013 In: Journal of High Energy Physics. 2013, 6, 45 p.
Journal article
Search for a light charged Higgs boson in the decay channel H+→cs¯ in tt¯ events using pp collisions at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 06/2013 In: European Physical Journal C: Particles and Fields. 73, 20 p.
Journal article
Search for third generation scalar leptoquarks in pp collisions at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 06/2013 In: Journal of High Energy Physics. 2013, 6, 40 p.
Journal article
Search for long-lived, multi-charged particles in pp collisions at √s = 7 TeV using the ATLAS detector
ATLAS Collaboration 24/05/2013 In: Physics Letters B. 722, 4-5, p. 305-323. 19 p.
Journal article
Observation of Associated Near-Side and Away-Side Long-Range Correlations in √sNN=5.02 TeV Proton-Lead Collisions with the ATLAS Detector
ATLAS Collaboration 1/05/2013 In: Physical Review Letters. 110, 18, 18 p.
Journal article
Measurement of kT splitting scales in W→ℓν events at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 05/2013 In: European Physical Journal C: Particles and Fields. 73, 5, 30 p.
Journal article
Search for single b*-quark production with the ATLAS detector at √s = 7 TeV
ATLAS Collaboration 25/04/2013 In: Physics Letters B. 721, 4-5, p. 171-189. 19 p.
Journal article
Search for displaced muonic lepton jets from light Higgs boson decay in proton-proton collisions at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 10/04/2013 In: Physics Letters B. 721, 1-3, p. 32-50. 19 p.
Journal article
Search for extra dimensions in diphoton events from proton–proton collisions at √s = 7 TeV in the ATLAS detector at the LHC
ATLAS Collaboration 4/04/2013 In: New Journal of Physics. 15, 4, 35 p.
Journal article
Search for WH production with a light Higgs boson decaying to prompt electron-jets in proton–proton collisions at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 04/2013 In: New Journal of Physics. 15, 4, 36 p.
Journal article
Searches for heavy long-lived sleptons and R-hadrons with the ATLAS detector in pp collisions at √s=7 TeV
ATLAS Collaboration 26/03/2013 In: Physics Letters B. 720, 4-5, p. 277-308. 32 p.
Journal article
Measurement of hard double-parton interactions in W(→ ℓν) + 2-jet events at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 25/03/2013 In: New Journal of Physics. 15, 3, 39 p.
Journal article
Measurement of angular correlations in Drell-Yan lepton pairs to probe Z/γ* boson transverse momentum at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 13/03/2013 In: Physics Letters B. 720, 1-3, p. 32-51. 20 p.
Journal article
Search for light top squark pair production in final states with leptons and b-jets with the ATLAS detector in √s = 7 TeV proton-proton collisions
ATLAS Collaboration 13/03/2013 In: Physics Letters B. 720, 1-3, p. 13-31. 19 p.
Journal article
Measurement of upsilon production in 7 TeV pp collisions at ATLAS
ATLAS Collaboration 4/03/2013 In: Physical Review D. 87, 5, 31 p.
Journal article
Search for new phenomena in events with three charged leptons at √s=7 TeV with the ATLAS detector
ATLAS Collaboration 4/03/2013 In: Physical Review D. 87, 5, 33 p.
Journal article
Jet energy measurement with the ATLAS detector in proton-proton collisions at √s = 7 TeV
ATLAS Collaboration 03/2013 In: European Physical Journal C: Particles and Fields. 73, 3, 118 p.
Journal article
Jet energy resolution in proton-proton collisions at √s = 7 TeV recorded in 2010 with the ATLAS detector
ATLAS Collaboration 03/2013 In: European Physical Journal C: Particles and Fields. 73, 3, 27 p.
Journal article
Measurement of ZZ production in pp collisions at √s = 7 TeV and limits on anomalous ZZZ and ZZγ couplings with the ATLAS detector
ATLAS Collaboration 03/2013 In: Journal of High Energy Physics. 2013, 3, 48 p.
Journal article
Measurement of the tt̄ production cross section in the tau + jets channel using the ATLAS detector
ATLAS Collaboration 03/2013 In: European Physical Journal C: Particles and Fields. 73, 3, 18 p.
Journal article
Multi-channel search for squarks and gluinos in √s = 7 TeV pp collisions with the ATLAS detector at the LHC
ATLAS Collaboration 03/2013 In: European Physical Journal C: Particles and Fields. 73, 3, 33 p.
Journal article
Rapidity gap cross sections measured with the ATLAS detector in pp collisions at √s = 7 TeV
ATLAS Collaboration 03/2013 In: European Physical Journal C: Particles and Fields. 72, 3, 31 p.
Journal article
Search for charged Higgs bosons through the violation of lepton universality in tt̄ events using pp collision data at √s = 7 TeV with the ATLAS experiment
ATLAS Collaboration 03/2013 In: Journal of High Energy Physics. 2013, 3, 36 p.
Journal article
Single hadron response measurement and calorimeter jet energy scale uncertainty with the ATLAS detector at the LHC
ATLAS Collaboration 03/2013 In: European Physical Journal C: Particles and Fields. 73, 3, 34 p.
Journal article
A search for high-mass resonances decaying to τ⁺τ⁻ in pp collisions at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 26/02/2013 In: Physics Letters B. 719, 4-5, p. 242-260. 19 p.
Journal article
A search for prompt lepton-jets in pp collisions at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 26/02/2013 In: Physics Letters B. 719, 4-5, p. 299-317. 19 p.
Journal article
Measurement of the jet radius and transverse momentum dependence of inclusive jet suppression in lead-lead collisions at √sNN = 2.76 TeV with the ATLAS detector
ATLAS Collaboration 26/02/2013 In: Physics Letters B. 719, 4-5, p. 220-241. 22 p.
Journal article
Search for long-lived, heavy particles in final states with a muon and multi-track displaced vertex in proton-proton collisions at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 26/02/2013 In: Physics Letters B. 719, 4-5, p. 280-298. 19 p.
Journal article
Search for supersymmetry in events with photons, bottom quarks, and missing transverse momentum in proton–proton collisions at a centre-of-mass energy of 7 TeV with the ATLAS detector
ATLAS Collaboration 26/02/2013 In: Physics Letters B. 719, 4-5, p. 261-279. 19 p.
Journal article
Measurement of the Λb lifetime and mass
ATLAS Collaboration 4/02/2013 In: Physical Review D. 87, 3, 19 p.
Journal article
Measurement of the flavour composition of dijet events in pp collisions at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 02/2013 In: European Physical Journal C: Particles and Fields. 73, 2, 30 p.
Journal article
Search for the neutral Higgs bosons of the minimal supersymmetric standard model in pp collisions at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 02/2013 In: Journal of High Energy Physics. 2013, 2, 47 p.
Journal article
Search for pair production of heavy top-like quarks decaying to a high-pT W boson and a b quark in the lepton plus jets final state at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 29/01/2013 In: Physics Letters B. 718, 4-5, p. 1284-1302. 19 p.
Journal article
Processing LHC data in the UK
Colling, D., Britton, D., Gordon, J., Lloyd, S., Doyle, A., Gronbech, P., Coles, J., Sansum, A., Patrick, G., Jones, R., Middleton, R., Kelsey, D., Cass, A., Geddes, N., Clark, P., Barnby, L. 28/01/2013 In: Philosophical Transactions A: Mathematical, Physical and Engineering Sciences . 371, 1983, 16 p.
Journal article
Search for squarks and gluinos with the ATLAS detector in final states with jets and missing transverse momentum using 4.7 fb⁻¹ of √s = 7 TeV proton-proton collision data
ATLAS Collaboration 22/01/2013 In: Physical Review D. 87, 1, 34 p.
Journal article
Measurement of Z Boson Production in Pb-Pb Collisions at √sNN = 2.76 TeV with the ATLAS Detector
ATLAS Collaboration 8/01/2013 In: Physical Review Letters. 110, 2, 18 p.
Journal article
Search for direct production of charginos and neutralinos in events with three leptons and missing transverse momentum in √s = 7 TeV pp collisions with the ATLAS detector
ATLAS Collaboration 8/01/2013 In: Physics Letters B. 718, 3, p. 841-859. 19 p.
Journal article
Search for direct slepton and gaugino production in final states with two leptons and missing transverse momentum with the ATLAS detector in pp collisions at √s = 7 TeV
ATLAS Collaboration 8/01/2013 In: Physics Letters B. 718, 3, p. 879-901. 23 p.
Journal article
Search for new phenomena in the WW → ℓνℓ′ν′ final state in pp collisions at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 8/01/2013 In: Physics Letters B. 718, 3, p. 860-878. 19 p.
Journal article
Search for contact interactions and large extra dimensions in dilepton events from pp collisions at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 4/01/2013 In: Physical Review D. 87, 1, 25 p.
Journal article
Search for Dark Matter Candidates and Large Extra Dimensions in Events with a Photon and Missing Transverse Momentum in pp Collision Data at √s = 7 TeV with the ATLAS Detector
ATLAS Collaboration 3/01/2013 In: Physical Review Letters. 110, 1, 18 p.
Journal article
ATLAS search for new phenomena in dijet mass and angular distributions using pp collisions at √s = 7 TeV
ATLAS Collaboration 01/2013 In: Journal of High Energy Physics. 2013, 1, 46 p.
Journal article
Measurement of isolated-photon pair production in pp collisions at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 01/2013 In: Journal of High Energy Physics. 2013, 1, 42 p.
Journal article
Measurements of top quark pair relative differential cross-sections with ATLAS in pp collisions at √s = 7 TeV
ATLAS Collaboration 01/2013 In: European Physical Journal C: Particles and Fields. 73, 1, 28 p.
Journal article
Search for direct chargino production in anomaly-mediated supersymmetry breaking models based on a disappearing-track signature in pp collisions at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 01/2013 In: Journal of High Energy Physics. 2013, 1, 34 p.
Journal article
Search for pair-produced massive coloured scalars in four-jet final states with the ATLAS detector in proton–proton collisions at √s = 7 TeV
ATLAS Collaboration 01/2013 In: European Physical Journal C: Particles and Fields. 73, 1, 20 p.
Journal article
Search for resonances decaying into top-quark pairs using fully hadronic decays in pp collisions with ATLAS at √s = 7 TeV
ATLAS Collaboration 01/2013 In: Journal of High Energy Physics. 2013, 1, 50 p.
Journal article
Heavy flavour production and decay at ATLAS
ATLAS Collaboration 2013 In: Il Nuovo Cimento C - Colloquia on Physics. 36, 6, p. 120-125. 6 p.
Journal article
Search for Magnetic Monopoles in √s = 7 TeV pp Collisions with the ATLAS Detector
ATLAS Collaboration 27/12/2012 In: Physical Review Letters. 109, 26, 18 p.
Journal article
A Particle Consistent with the Higgs Boson Observed with the ATLAS Detector at the Large Hadron Collider
ATLAS Collaboration 21/12/2012 In: Science. 338, 6114, p. 1576-1582. 7 p.
Journal article
The ATLAS Computing Model & Distributed Computing Evolution
ATLAS Collaboration 10/12/2012 In: INTERNATIONAL CONFERENCE OF COMPUTATIONAL METHODS IN SCIENCES AND ENGINEERING 2009 (ICCMSE 2009). Melville, N.Y. : American Institute of Physics p. 975-982. 8 p.
Paper
Search for diphoton events with large missing transverse momentum in 7 TeV proton-proton collision data with the ATLAS detector
ATLAS Collaboration 5/12/2012 In: Physics Letters B. 718, 2, p. 411-430. 20 p.
Journal article
Search for the Higgs boson in the H → WW → ℓνjj decay channel at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 5/12/2012 In: Physics Letters B. 718, 2, p. 391-410. 20 p.
Journal article
Search for the Standard Model Higgs boson produced in association with a vector boson and decaying to a b-quark pair with the ATLAS detector
ATLAS Collaboration 5/12/2012 In: Physics Letters B. 718, 2, p. 369-390. 22 p.
Journal article
ATLAS search for a heavy gauge boson decaying to a charged lepton and a neutrino in pp collisions at √s = 7 TeV
ATLAS Collaboration 12/2012 In: European Physical Journal C: Particles and Fields. 2012, 12, 23 p.
Journal article
Search for R-parity-violating supersymmetry in events with four or more leptons in √s = 7 TeV pp collisions with the ATLAS detector
ATLAS Collaboration 12/2012 In: Journal of High Energy Physics. 2012, 12, 36 p.
Journal article
Search for anomalous production of prompt like-sign lepton pairs at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 12/2012 In: Journal of High Energy Physics. 2012, 12, 41 p.
Journal article
Search for doubly charged Higgs bosons in like-sign dilepton final states at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 12/2012 In: European Physical Journal C: Particles and Fields. 2012, 12, 18 p.
Journal article
Search for pair production of massive particles decaying into three quarks with the ATLAS detector in √s = 7 TeV pp collisions at the LHC
ATLAS Collaboration 12/2012 In: Journal of High Energy Physics. 2012, 12, 42 p.
Journal article
Time-dependent angular analysis of the decay Bs → J/ψφ and extraction of ΔΓs and the CP-violating weak phase φs by ATLAS
ATLAS Collaboration 12/2012 In: Journal of High Energy Physics. 2012, 12, 42 p.
Journal article
Measurement of the b-hadron production cross section using decays to D*⁺μ⁻X final states in pp collisions at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 21/11/2012 In: Nuclear Physics B. 864, 3, p. 341–381. 41 p.
Journal article
Search for Direct Top Squark Pair Production in Final States with One Isolated Lepton, Jets, and Missing Transverse Momentum in √s = 7 TeV pp Collisions Using 4.7 fb⁻¹ of ATLAS Data
ATLAS Collaboration 20/11/2012 In: Physical Review Letters. 109, 21, 18 p.
Journal article
Search for a Supersymmetric Partner to the Top Quark in Final States with Jets and Missing Transverse Momentum at √s = 7 TeV with the ATLAS Detector
ATLAS Collaboration 20/11/2012 In: Physical Review Letters. 109, 21, 18 p.
Journal article
Further search for supersymmetry at √s = 7 TeV in final states with jets, missing transverse momentum, and isolated leptons with the ATLAS detector
ATLAS Collaboration 2/11/2012 In: Physical Review D. 86, 9, 35 p.
Journal article
Measurement of event shapes at large momentum transfer with the ATLAS detector in pp collisions at √s = 7 TeV
ATLAS Collaboration 11/2012 In: European Physical Journal C: Particles and Fields. 72, 11, 22 p.
Journal article
Measurements of the pseudorapidity dependence of the total transverse energy in proton-proton collisions at √s = 7 TeV with ATLAS
ATLAS Collaboration 11/2012 In: Journal of High Energy Physics. 2012, 11, 54 p.
Journal article
Search for a heavy top-quark partner in final states with two leptons with the ATLAS detector at the LHC
ATLAS Collaboration 11/2012 In: Journal of High Energy Physics. 2012, 11, 35 p.
Journal article
Search for light scalar top-quark pair production in final states with two leptons with the ATLAS detector in √s = 7 TeV proton–proton collisions
ATLAS Collaboration 11/2012 In: European Physical Journal C: Particles and Fields. 72, 11, 20 p.
Journal article
Search for supersymmetry in events with large missing transverse momentum, jets, and at least one tau lepton in 7 TeV proton-proton collision data with the ATLAS detector
ATLAS Collaboration 11/2012 In: European Physical Journal C: Particles and Fields. 72, 11, 22 p.
Journal article
Measurement of Wγ and Zγ production cross sections in pp collisions at √s = 7 TeV and limits on anomalous triple gauge couplings with the ATLAS detector
ATLAS Collaboration 22/10/2012 In: Physics Letters B. 717, 1-3, p. 49-69. 21 p.
Journal article
Measurement of the top quark pair cross section with ATLAS in pp collisions at √s = 7 TeV using final states with an electron or a muon and a hadronically decaying tau lepton
ATLAS Collaboration 22/10/2012 In: Physics Letters B. 717, 1-3, p. 89-108. 20 p.
Journal article
Search for a Standard Model Higgs boson in the mass range 200-600 GeV in the H → ZZ → ℓ⁺ℓ⁻qq̄ decay channel with the ATLAS detector
ATLAS Collaboration 22/10/2012 In: Physics Letters B. 717, 1-3, p. 70-88. 19 p.
Journal article
Search for a standard model Higgs boson in the H → ZZ → ℓ⁺ℓ⁻νν̄ decay channel using 4.7 fb⁻¹ of √s = 7 TeV data with the ATLAS detector
ATLAS Collaboration 22/10/2012 In: Physics Letters B. 717, 1-3, p. 29-48. 20 p.
Journal article
ATLAS measurements of the properties of jets for boosted particle searches
ATLAS Collaboration 15/10/2012 In: Physical Review D. 86, 7, 30 p.
Journal article
Underlying event characteristics and their dependence on jet size of charged-particle jet events in pp collisions at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 9/10/2012 In: Physical Review D. 86, 7, 34 p.
Journal article
Measurement of W±Z production in proton-proton collisions at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 10/2012 In: European Physical Journal C: Particles and Fields. 72, 10, 24 p.
Journal article
Search for top and bottom squarks from gluino pair production in final states with missing transverse energy and at least three b-jets with the ATLAS detector
ATLAS Collaboration 10/2012 In: European Physical Journal C: Particles and Fields. 72, 10, 19 p.
Journal article
Evidence for the associated production of a W boson and a top quark in ATLAS at √s = 7 TeV
ATLAS Collaboration 17/09/2012 In: Physics Letters B. 716, 1, p. 142-159. 18 p.
Journal article
Observation of a new particle in the search for the Standard Model Higgs boson with the ATLAS detector at the LHC
ATLAS Collaboration 17/09/2012 In: Physics Letters B. 716, 1, p. 1-29. 29 p.
Journal article
Search for TeV-scale gravity signatures in final states with leptons and jets with the ATLAS detector at √s = 7 TeV
ATLAS Collaboration 17/09/2012 In: Physics Letters B. 716, 1, p. 122-141. 20 p.
Journal article
Search for the Standard Model Higgs boson in the H → WW(*) → ℓνℓν decay mode with 4.7 fb⁻¹ of ATLAS data at √s = 7 TeV
ATLAS Collaboration 17/09/2012 In: Physics Letters B. 716, 1, p. 62-81. 20 p.
Journal article
Measurement of the azimuthal ordering of charged hadrons with the ATLAS detector
ATLAS Collaboration 14/09/2012 In: Physical Review D. 86, 5, 25 p.
Journal article
A search for flavour changing neutral currents in top-quark decays in pp collision data collected with the ATLAS detector at √s = 7 TeV
ATLAS Collaboration 09/2012 In: Journal of High Energy Physics. 2012, 9, 37 p.
Journal article
A search for tt̄ resonances in lepton+jets events with highly boosted top quarks collected in pp collisions at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 09/2012 In: Journal of High Energy Physics. 2012, 9, 45 p.
Journal article
Search for a fermiophobic Higgs boson in the diphoton decay channel with the ATLAS detector
ATLAS Collaboration 09/2012 In: European Physical Journal C: Particles and Fields. 72, 9, 18 p.
Journal article
Search for second generation scalar leptoquarks in pp collisions at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 09/2012 In: European Physical Journal C: Particles and Fields. 72, 9, 21 p.
Journal article
Search for the Standard Model Higgs boson in the H → τ⁺τ⁻ decay mode in √s = 7 TeV pp collisions with ATLAS
ATLAS Collaboration 09/2012 In: Journal of High Energy Physics. 2012, 9, 49 p.
Journal article
Search for scalar top quark pair production in natural gauge mediated supersymmetry models with the ATLAS detector in pp collisions at √s = 7 TeV
ATLAS Collaboration 29/08/2012 In: Physics Letters B. 715, 1-3, p. 44-60. 17 p.
Journal article
Search for tb Resonances in Proton-Proton Collisions at √s = 7 TeV with the ATLAS Detector
ATLAS Collaboration 21/08/2012 In: Physical Review Letters. 109, 8, 19 p.
Journal article
Search for Pair Production of a New b′ Quark that Decays into a Z Boson and a Bottom Quark with the ATLAS Detector
ATLAS Collaboration 16/08/2012 In: Physical Review Letters. 109, 7, 19 p.
Journal article
Search for events with large missing transverse momentum, jets, and at least two tau leptons in 7 TeV proton–proton collision data with the ATLAS detector
ATLAS Collaboration 14/08/2012 In: Physics Letters B. 714, 2-5, p. 180-196. 17 p.
Journal article
Search for supersymmetry with jets, missing transverse momentum and at least one hadronically decaying tau lepton in proton-proton collisions at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 14/08/2012 In: Physics Letters B. 714, 2-5, p. 197-214. 18 p.
Journal article
Combined search for the Standard Model Higgs boson in pp collisions at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 2/08/2012 In: Physical Review D. 86, 3, 31 p.
Journal article
Measurement of inclusive jet and dijet production in pp collisions at √s = 7 TeV using the ATLAS detector
ATLAS Collaboration 24/07/2012 In: Physical Review D. 86, 1, 63 p.
Journal article
Measurement of the azimuthal anisotropy for charged particle production in √sNN = 2.76 TeV lead-lead collisions with the ATLAS detector
ATLAS Collaboration 24/07/2012 In: Physical Review C. 86, 1, 41 p.
Journal article
Search for pair-produced heavy quarks decaying to Wq in the two-lepton channel at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 23/07/2012 In: Physical Review D. 86, 1, 24 p.
Journal article
Search for Down-Type Fourth Generation Quarks with the ATLAS Detector in Events with One Lepton and Hadronically Decaying W Bosons
ATLAS Collaboration 20/07/2012 In: Physical Review Letters. 109, 3, 19 p.
Journal article
Search for the decay Bs⁰ → μ⁺μ⁻ with the ATLAS detector
ATLAS Collaboration 18/07/2012 In: Physics Letters B. 713, 4-5, p. 387-407. 21 p.
Journal article
Determination of the Strange-Quark Density of the Proton from ATLAS Measurements of the W→ℓν and Z→ℓℓ Cross Sections
ATLAS Collaboration 5/07/2012 In: Physical Review Letters. 109, 1, 17 p.
Journal article
A search for tt̄ resonances with the ATLAS detector in 2.05 fb⁻¹ of proton-proton collisions at √s = 7 TeV
ATLAS Collaboration 07/2012 In: European Physical Journal C: Particles and Fields. 72, 7, 23 p.
Journal article
Forward-backward correlations and charged-particle azimuthal distributions in pp interactions using the ATLAS detector
ATLAS Collaboration 07/2012 In: Journal of High Energy Physics. 2012, 7, 46 p.
Journal article
Hunt for new phenomena using large jet multiplicities and missing transverse momentum with ATLAS in 4.7 fb⁻¹ of √s = 7 TeV proton-proton collisions
ATLAS Collaboration 07/2012 In: Journal of High Energy Physics. 2012, 7, 40 p.
Journal article
Measurement of τ polarization in W → τν decays with the ATLAS detector in pp collisions at √s = 7 TeV
ATLAS Collaboration 07/2012 In: European Physical Journal C: Particles and Fields. 72, 7, 21 p.
Journal article
Search for heavy neutrinos and right-handed W bosons in events with two leptons and jets in pp collisions at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 07/2012 In: European Physical Journal C: Particles and Fields. 72, 7, 22 p.
Journal article
Search for Supersymmetry in Events with Three Leptons and Missing Transverse Momentum in √s = 7 TeV pp Collisions with the ATLAS Detector
ATLAS Collaboration 29/06/2012 In: Physical Review Letters. 108, 26, 18 p.
Journal article
Search for Pair Production of a Heavy Up-Type Quark Decaying to a W Boson and a b Quark in the lepton+jets Channel with the ATLAS Detector
ATLAS Collaboration 26/06/2012 In: Physical Review Letters. 108, 26, 18 p.
Journal article
Search for resonant WZ production in the WZ → ℓνℓ′ℓ′ channel in √s = 7 TeV pp collisions with the ATLAS detector
ATLAS Collaboration 25/06/2012 In: Physical Review D. 85, 11, 21 p.
Journal article
Search for a Light Higgs Boson Decaying to Long-Lived Weakly Interacting Particles in Proton-Proton Collisions at √s = 7 TeV with the ATLAS Detector
ATLAS Collaboration 19/06/2012 In: Physical Review Letters. 108, 25, 18 p.
Journal article
Search for Gluinos in Events with Two Same-Sign Leptons, Jets, and Missing Transverse Momentum with the ATLAS Detector in pp Collisions at √s = 7 TeV
ATLAS Collaboration 15/06/2012 In: Physical Review Letters. 108, 24, 19 p.
Journal article
Search for supersymmetry in pp collisions at √s = 7 TeV in final states with missing transverse momentum and b-jets with the ATLAS detector
ATLAS Collaboration 15/06/2012 In: Physical Review D. 85, 11, 2 p.
Journal article
Measurement of the WW cross section in √s = 7 TeV pp collisions with the ATLAS detector and limits on anomalous gauge couplings
ATLAS Collaboration 12/06/2012 In: Physics Letters B. 712, 4-5, p. 289-308. 20 p.
Journal article
Search for FCNC single top-quark production at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 12/06/2012 In: Physics Letters B. 712, 4-5, p. 351-369. 19 p.
Journal article
Search for new particles decaying to ZZ using final states with leptons and jets with the ATLAS detector in √s = 7 TeV proton-proton collisions
ATLAS Collaboration 12/06/2012 In: Physics Letters B. 712, 4-5, p. 331-350. 20 p.
Journal article
Measurement of the W boson polarization in top quark decays with the ATLAS detector
ATLAS Collaboration 06/2012 In: Journal of High Energy Physics. 2012, 6, 46 p.
Journal article
Measurement of the charge asymmetry in top quark pair production in pp collisions at √s = 7 TeV using the ATLAS detector
ATLAS Collaboration 06/2012 In: European Physical Journal C: Particles and Fields. 72, 6, 27 p.
Journal article
Measurement of the top quark mass with the template method in the tt̄ → lepton+jets channel using ATLAS data
ATLAS Collaboration 06/2012 In: European Physical Journal C: Particles and Fields. 72, 6, 30 p.
Journal article
Measurement of tt̄ production with a veto on additional central jet activity in pp collisions at √s = 7 TeV using the ATLAS detector
ATLAS Collaboration 06/2012 In: European Physical Journal C: Particles and Fields. 72, 6, 24 p.
Journal article
Search for charged Higgs bosons decaying via H± → τν in tt̄ events using pp collision data at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 06/2012 In: Journal of High Energy Physics. 2012, 6, 50 p.
Journal article
Search for lepton flavour violation in the eμ continuum with the ATLAS detector in √s = 7 TeV pp collisions at the LHC
ATLAS Collaboration 06/2012 In: European Physical Journal C: Particles and Fields. 72, 6, 19 p.
Journal article
Search for contact interactions in dilepton events from pp collisions at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 30/05/2012 In: Physics Letters B. 712, 1-2, p. 40-58. 19 p.
Journal article
Search for heavy vector-like quarks coupling to light quarks in proton-proton collisions at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 30/05/2012 In: Physics Letters B. 712, 1-2, p. 22-39. 18 p.
Journal article
Observation of Spin Correlation in tt̄ Events from pp Collisions at √s = 7 TeV Using the ATLAS Detector
ATLAS Collaboration 24/05/2012 In: Physical Review Letters. 108, 21, 19 p.
Journal article
Erratum to: Search for first generation scalar leptoquarks in pp collisions at √s = 7 TeV with the ATLAS detector (vol 709, pg 158, 2012)
ATLAS Collaboration 23/05/2012 In: Physics Letters B. 711, 5, p. 442-455. 14 p.
Journal article
Measurement of the production cross section of an isolated photon associated with jets in proton-proton collisions at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 23/05/2012 In: Physical Review D. 85, 9, 30 p.
Journal article
Search for Production of Resonant States in the Photon-Jet Mass Distribution Using pp Collisions at √s = 7 TeV Collected by the ATLAS Detector
ATLAS Collaboration 22/05/2012 In: Physical Review Letters. 108, 21, 18 p.
Journal article
Measurement of the top quark pair production cross-section with ATLAS in the single lepton channel
ATLAS Collaboration 15/05/2012 In: Physics Letters B. 711, 3-4, p. 244-263. 20 p.
Journal article
Search for Scalar Bottom Quark Pair Production with the ATLAS Detector in pp Collisions at √s = 7 TeV
ATLAS Collaboration 2/05/2012 In: Physical Review Letters. 108, 18, 18 p.
Journal article
Study of jets produced in association with a W boson in pp collisions at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 2/05/2012 In: Physical Review D. 85, 9, 40 p.
Journal article
Jet mass and substructure of inclusive jets in √s = 7 TeV pp collisions with the ATLAS experiment
ATLAS Collaboration 05/2012 In: Journal of High Energy Physics. 2012, 5, 47 p.
Journal article
Measurement of inclusive two-particle angular correlations in pp collisions with the ATLAS detector at the LHC
ATLAS Collaboration 05/2012 In: Journal of High Energy Physics. 2012, 5, 45 p.
Journal article
Measurement of the cross section for top-quark pair production in pp collisions at √s = 7 TeV with the ATLAS detector using final states with two high-pT leptons
ATLAS Collaboration 05/2012 In: Journal of High Energy Physics. 2012, 5, 35 p.
Journal article
Measurement of the polarisation of W bosons produced with large transverse momentum in pp collisions at √s = 7 TeV with the ATLAS experiment
ATLAS Collaboration 05/2012 In: European Physical Journal C: Particles and Fields. 72, 5, 30 p.
Journal article
Search for excited leptons in proton-proton collisions at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 27/04/2012 In: Physical Review D. 85, 7, 23 p.
Journal article
Measurement of the inclusive W± and Z/γ* cross sections in the e and μ decay channels in pp collisions at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 23/04/2012 In: Physical Review D. 85, 7, 39 p.
Journal article
Search for diphoton events with large missing transverse momentum in 1 fb⁻¹ of 7 TeV proton-proton collision data with the ATLAS detector
ATLAS Collaboration 20/04/2012 In: Physics Letters B. 710, 4-5, p. 519-537. 19 p.
Journal article
Search for extra dimensions using diphoton events in 7 TeV proton-proton collisions with the ATLAS detector
ATLAS Collaboration 20/04/2012 In: Physics Letters B. 710, 4-5, p. 538-556. 19 p.
Journal article
Measurement of the centrality dependence of the charged particle pseudorapidity distribution in lead-lead collisions at √sNN = 2.76 TeV with the ATLAS detector
ATLAS Collaboration 12/04/2012 In: Physics Letters B. 710, 3, p. 363-382. 20 p.
Journal article
Search for the Standard Model Higgs boson in the decay channel H → ZZ(*) → 4ℓ with 4.8 fb⁻¹ of pp collision data at √s = 7 TeV with ATLAS
ATLAS Collaboration 12/04/2012 In: Physics Letters B. 710, 3, p. 383-402. 20 p.
Journal article
Observation of a New χb State in Radiative Transitions to Υ(1S) and Υ(2S) at ATLAS
ATLAS Collaboration 9/04/2012 In: Physical Review Letters. 108, 15, 17 p.
Journal article
Search for anomaly-mediated supersymmetry breaking with the ATLAS detector based on a disappearing-track signature in pp collisions at √s = 7 TeV
ATLAS Collaboration 04/2012 In: European Physical Journal C: Particles and Fields. 72, 4, 20 p.
Journal article
Search for decays of stopped, long-lived particles from 7 TeV pp collisions with the ATLAS detector
ATLAS Collaboration 04/2012 In: European Physical Journal C: Particles and Fields. 72, 4, 21 p.
Journal article
Search for same-sign top-quark production and fourth-generation down-type quarks in pp collisions at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 04/2012 In: Journal of High Energy Physics. 2012, 4, 40 p.
Journal article
Combined search for the Standard Model Higgs boson using up to 4.9 fb⁻¹ of pp collision data at √s = 7 TeV with the ATLAS detector at the LHC
ATLAS Collaboration 29/03/2012 In: Physics Letters B. 710, 1, p. 49-66. 18 p.
Journal article
Search for squarks and gluinos using final states with jets and missing transverse momentum with the ATLAS detector in √s = 7 TeV proton–proton collisions
ATLAS Collaboration 29/03/2012 In: Physics Letters B. 710, 1, p. 67-85. 19 p.
Journal article
Measurement of the W±Z production cross section and limits on anomalous triple gauge couplings in proton-proton collisions at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 23/03/2012 In: Physics Letters B. 709, 4-5, p. 341-357. 17 p.
Journal article
Search for strong gravity signatures in same-sign dimuon final states using the ATLAS detector at the LHC at √s = 7 TeV
ATLAS Collaboration 23/03/2012 In: Physics Letters B. 709, 4-5, p. 322-340. 19 p.
Journal article
Measurement of D*± meson production in jets from pp collisions at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 19/03/2012 In: Physical Review D. 85, 5, 22 p.
Journal article
Search for first generation scalar leptoquarks in pp collisions at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 19/03/2012 In: Physics Letters B. 709, 3, p. 158-176. 19 p.
Journal article
Searches for supersymmetry with the ATLAS detector using final states with two leptons and missing transverse momentum in √s = 7 TeV proton-proton collisions
ATLAS Collaboration 19/03/2012 In: Physics Letters B. 709, 3, p. 137-157. 21 p.
Journal article
Search for the Higgs Boson in the H → WW(*) → ℓ⁺νℓ⁻ν̄ Decay Channel in pp Collisions at √s = 7 TeV with the ATLAS Detector
ATLAS Collaboration 13/03/2012 In: Physical Review Letters. 108, 11, 19 p.
Journal article
Search for the Standard Model Higgs Boson in the Diphoton Decay Channel with 4.9 fb⁻¹ of pp Collision Data at √s = 7 TeV with ATLAS
ATLAS Collaboration 13/03/2012 In: Physical Review Letters. 108, 11, 19 p.
Journal article
A measurement of the ratio of the W and Z cross sections with exactly one associated jet in pp collisions at √s = 7 TeV with ATLAS
ATLAS Collaboration 28/02/2012 In: Physics Letters B. 708, 3-5, p. 221-240. 20 p.
Journal article
Measurement of the production cross section for Z/γ* in association with jets in pp collisions at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 28/02/2012 In: Physical Review D. 85, 3, 42 p.
Journal article
Search for anomalous production of prompt like-sign muon pairs and constraints on physics beyond the standard model with the ATLAS detector
ATLAS Collaboration 28/02/2012 In: Physical Review D. 85, 3, 23 p.
Journal article
Search for new physics in the dijet mass distribution using 1 fb(-1) of pp collision data at root s=7 TeV collected by the ATLAS detector
ATLAS Collaboration 14/02/2012 In: Physics Letters B. 708, 1-2, p. 37-54. 18 p.
Journal article
Measurement of the cross section for the production of a W boson in association with b-jets in pp collisions at root s=7 TeV with the ATLAS detector
ATLAS Collaboration 7/02/2012 In: Physics Letters B. 707, 5, p. 418-437. 20 p.
Journal article
Measurement of the top quark pair production cross section in pp collisions at root s=7 TeV in dilepton final states with ATLAS
ATLAS Collaboration 7/02/2012 In: Physics Letters B. 707, 5, p. 459-477. 19 p.
Journal article
Measurements of the electron and muon inclusive cross-sections in proton-proton collisions at root s=7 TeV with the ATLAS detector
ATLAS Collaboration 7/02/2012 In: Physics Letters B. 707, 5, p. 438-458. 21 p.
Journal article
Search for displaced vertices arising from decays of new heavy particles in 7 TeV pp collisions at ATLAS
ATLAS Collaboration 7/02/2012 In: Physics Letters B. 707, 5, p. 478-496. 19 p.
Journal article
Measurement of the pseudorapidity and transverse momentum dependence of the elliptic flow of charged particles in lead-lead collisions at root s(NN)=2.76 TeV with the ATLAS detector
ATLAS Collaboration 1/02/2012 In: Physics Letters B. 707, 3-4, p. 330-348. 19 p.
Journal article
Search for new phenomena in tt̄ events with large missing transverse momentum in proton-proton collisions at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 26/01/2012 In: Physical Review Letters. 108, 4, 18 p.
Journal article
Measurement of the ZZ Production Cross Section and Limits on Anomalous Neutral Triple Gauge Couplings in Proton-Proton Collisions at √s = 7 TeV with the ATLAS Detector
ATLAS Collaboration 25/01/2012 In: Physical Review Letters. 108, 4, 18 p.
Journal article
Measurement of the transverse momentum distribution of W bosons in pp collisions at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 18/01/2012 In: Physical Review D. 85, 1, 30 p.
Journal article
Search for supersymmetry in final states with jets, missing transverse momentum and one isolated lepton in √s = 7 TeV pp collisions using 1 fb−1 of ATLAS data
ATLAS Collaboration 18/01/2012 In: Physical Review D. 85, 1, 30 p.
Journal article
Search for a heavy Standard Model Higgs boson in the channel H -> ZZ -> l+l- qq̄ using the ATLAS detector
ATLAS Collaboration 16/01/2012 In: Physics Letters B. 707, 1, p. 27-45. 19 p.
Journal article
Measurement of the isolated diphoton cross section in pp collisions at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 11/01/2012 In: Physical Review D. 85, 1, 28 p.
Journal article
K0S and Lambda production in pp interactions at root s=0.9 and 7 TeV measured with the ATLAS detector at the LHC
ATLAS Collaboration 6/01/2012 In: Physical Review D. 85, 1, 28 p.
Journal article
Measurement of the W -> tau nu(tau) cross section in pp collisions at root s=7 TeV with the ATLAS experiment
ATLAS Collaboration 5/01/2012 In: Physics Letters B. 706, 4-5, p. 276-294. 19 p.
Journal article
Measurement of the cross-section for b-jets produced in association with a Z boson at root s=7 TeV with the ATLAS detector
ATLAS Collaboration 5/01/2012 In: Physics Letters B. 706, 4-5, p. 295-313. 19 p.
Journal article
A study of the material in the ATLAS inner detector using secondary hadronic interactions
ATLAS Collaboration 01/2012 In: Journal of Instrumentation. 7, 1, 40 p.
Journal article
Performance of missing transverse momentum reconstruction in proton-proton collisions at √s = 7 TeV with ATLAS
ATLAS Collaboration 01/2012 In: European Physical Journal C: Particles and Fields. 72, 1, 35 p.
Journal article
Performance of the ATLAS Trigger System in 2010
ATLAS Collaboration 01/2012 In: European Physical Journal C: Particles and Fields. 72, 1, 61 p.
Journal article
MaRDI-Gross – Data Management and Preservation Planning for Large Projects
Jones, R., Bicarregui, J.C., Gray, N., Henderson, R., Lambert, S.C., Matthews, B.M. 2012 In: Journal of Physics: Conference Series. 396, 3
Journal article
Search for Dilepton Resonances in pp Collisions at √s = 7 TeV with the ATLAS Detector
ATLAS Collaboration 29/12/2011 In: Physical Review Letters. 107, 27, 19 p.
Journal article
Measurement of the Z→ττ cross section with the ATLAS detector
ATLAS Collaboration 14/12/2011 In: Physical Review D. 84, 11, 29 p.
Journal article
Measurement of the inclusive isolated prompt photon cross-section in pp collisions at root s=7 TeV using 35 pb(-1) of ATLAS data
ATLAS Collaboration 6/12/2011 In: Physics Letters B. 706, 2-3, p. 150-167. 18 p.
Journal article
Measurement of the inclusive and dijet cross-sections of b-jets in pp collisions at √s = 7 TeV with the ATLAS detector
ATLAS Collaboration 12/2011 In: European Physical Journal C: Particles and Fields. 71, 12, 22 p.
Journal article
Search for a heavy neutral particle decaying into an electron and a muon using 1 fb−1 of ATLAS data
ATLAS Collaboration 12/2011 In: European Physical Journal C: Particles and Fields. 71, 12, 17 p.
Journal article
Search for massive colored scalars in four-jet final states in √s = 7 TeV proton–proton collisions with the ATLAS detector
ATLAS Collaboration 12/2011 In: European Physical Journal C: Particles and Fields. 71, 12, 19 p.
Journal article
Search for the Higgs Boson in the H→WW→lνjj Decay Channel in pp Collisions at √s = 7 TeV with the ATLAS Detector
ATLAS Collaboration 30/11/2011 In: Physical Review Letters. 107, 23, 1 p.
Journal article
Measurement of the transverse momentum distribution of Z/gamma* bosons in proton-proton collisions at root s=7 TeV with the ATLAS detector
ATLAS Collaboration 24/11/2011 In: Physics Letters B. 705, 5, p. 415-437. 23 p.
Journal article
Search for the Standard Model Higgs boson in the decay channel H -> ZZ(*) -> 4l with the ATLAS detector
ATLAS Collaboration 24/11/2011 In: Physics Letters B. 705, 5, p. 435-451. 17 p.
Journal article
Search for the Standard Model Higgs boson in the two photon decay channel with the ATLAS detector at the LHC
ATLAS Collaboration 24/11/2011 In: Physics Letters B. 705, 5, p. 452-470. 19 p.
Journal article
Search for a Standard Model Higgs Boson in the H→ZZ→ℓ+ℓ−νν̄ Decay Channel with the ATLAS Detector
ATLAS Collaboration 22/11/2011 In: Physical Review Letters. 107, 22, 18 p.
Journal article
Search for neutral MSSM Higgs bosons decaying to tau+tau- pairs in proton-proton collisions at root s=7 TeV with the ATLAS detector
ATLAS Collaboration 11/11/2011 In: Physics Letters B. 705, 3, p. 174-192. 19 p.
Journal article
Measurement of the ϒ(1S) production cross-section in pp collisions at sqrt(s)=7 TeV in ATLAS
ATLAS Collaboration 3/11/2011 In: Physics Letters B. 705, 1-2, p. 9-27. 19 p.
Journal article
Search for a heavy gauge boson decaying to a charged lepton and a neutrino in 1 fb(-1) of pp collisions at root s=7 TeV using the ATLAS detector
ATLAS Collaboration 3/11/2011 In: Physics Letters B. 705, 1-2, p. 28-46. 19 p.
Journal article
Measurement of multi-jet cross sections in proton–proton collisions at a 7 TeV center-of-mass energy
ATLAS Collaboration 11/2011 In: European Physical Journal C: Particles and Fields. 71, 11, 27 p.
Journal article
Measurement of the jet fragmentation function and transverse profile in proton–proton collisions at a center-of-mass energy of 7 TeV with the ATLAS detector
ATLAS Collaboration 11/2011 In: European Physical Journal C: Particles and Fields. 71, 11, 25 p.
Journal article
Search for new phenomena in final states with large jet multiplicities and missing transverse momentum using √s = 7 TeV pp collisions with the ATLAS detector
ATLAS Collaboration 11/2011 In: Journal of High Energy Physics. 2011, 11, 38 p.
Journal article
Inclusive search for same-sign dilepton signatures in pp collisions at root s=7 TeV with the ATLAS detector
ATLAS Collaboration 10/2011 In: Journal of High Energy Physics. 2011, 10, 48 p.
Journal article
Search for diphoton events with large missing transverse energy with 36 pb(-1) of 7 TeV proton-proton collision data with the ATLAS detector
ATLAS Collaboration 10/2011 In: European Physical Journal C: Particles and Fields. 71, 10, 21 p.
Journal article
Measurement of the differential cross-sections of inclusive, prompt and non-prompt J/psi production in proton-proton collisions at root s=7 TeV
ATLAS Collaboration 21/09/2011 In: Nuclear Physics B. 850, 3, p. 387-444. 58 p.
Journal article
Properties of jets measured from tracks in proton-proton collisions at center-of-mass energy root s=7 TeV with the ATLAS detector
ATLAS Collaboration 20/09/2011 In: Physical Review D. 84, 5, 27 p.
Journal article
Search for heavy long-lived charged particles with the ATLAS detector in pp collisions at root s=7 TeV
ATLAS Collaboration 20/09/2011 In: Physics Letters B. 703, 4, p. 428-446. 19 p.
Journal article
Measurement of the inelastic proton-proton cross-section at root s=7 TeV with the ATLAS detector
ATLAS Collaboration 6/09/2011 In: Nature Communications. 2
Journal article
Limits on the production of the standard model Higgs boson in pp collisions at root s=7 TeV with the ATLAS detector
ATLAS Collaboration 09/2011 In: European Physical Journal C: Particles and Fields. 71, 9, 30 p.
Journal article
Measurement of W gamma and Z gamma production in proton-proton collisions at root s=7 TeV with the ATLAS detector
ATLAS Collaboration 09/2011 In: Journal of High Energy Physics. 2011, 9, 42 p.
Journal article
Measurement of dijet production with a veto on additional central jet activity in pp collisions at root s=7 TeV using the ATLAS detector
ATLAS Collaboration 09/2011 In: Journal of High Energy Physics. 2011, 9, 36 p.
Journal article
Search for squarks and gluinos using final states with jets and missing transverse momentum with the ATLAS detector in √s = 7 TeV proton–proton collisions
ATLAS Collaboration 2/07/2011 In: Physics Letters B. 701, 2, p. 186-203. 17 p.
Journal article
Search for contact interactions in dimuon events from pp collisions at root s=7 TeV with the ATLAS detector
ATLAS Collaboration 1/07/2011 In: Physical Review D. 84, 1, 18 p.
Journal article
Search for an excess of events with an identical flavour lepton pair and significant missing transverse momentum in root s=7 TeV proton-proton collisions with the ATLAS detector
ATLAS Collaboration 07/2011 In: European Physical Journal C: Particles and Fields. 71, 7, 18 p.
Journal article
Search for supersymmetric particles in events with lepton pairs and large missing transverse momentum in root s=7 TeV proton-proton collisions with the ATLAS experiment
ATLAS Collaboration 07/2011 In: European Physical Journal C: Particles and Fields. 71, 7, 19 p.
Journal article
Measurement of the W charge asymmetry in the W -> mu nu decay mode in pp collisions at root s=7 TeV with the ATLAS detector
ATLAS Collaboration 27/06/2011 In: Physics Letters B. 701, 1, p. 31-49. 19 p.
Journal article
Search for stable hadronising squarks and gluinos with the ATLAS experiment at the LHC
ATLAS Collaboration 27/06/2011 In: Physics Letters B. 701, 1, p. 1-19. 19 p.
Journal article
Search for a Heavy Particle Decaying into an Electron and a Muon with the ATLAS Detector in root s=7 TeV pp collisions at the LHC
ATLAS Collaboration 22/06/2011 In: Physical Review Letters. 106, 25, 18 p.
Journal article
Search for pair production of first or second generation leptoquarks in proton-proton collisions at root s=7 TeV using the ATLAS detector at the LHC
ATLAS Collaboration 15/06/2011 In: Physical Review D. 83, 11, 24 p.
Journal article
Search for high mass dilepton resonances in pp collisions at root s=7 TeV with the ATLAS experiment
ATLAS Collaboration 13/06/2011 In: Physics Letters B. 700, 3-4, p. 163-180. 18 p.
Journal article
Measurement of underlying event characteristics using charged particles in pp collisions at root s = 900 GeV and 7 TeV with the ATLAS detector
ATLAS Collaboration 31/05/2011 In: Physical Review D. 83, 11, 34 p.
Journal article
A search for new physics in dijet mass and angular distributions in pp collisions at root s=7 TeV measured with the ATLAS detector
ATLAS Collaboration 05/2011 In: New Journal of Physics. 13, 5, 45 p.
Journal article
Charged-particle multiplicities in pp interactions measured with the ATLAS detector at the LHC
ATLAS Collaboration 05/2011 In: New Journal of Physics. 13, 5, 68 p.
Journal article
Measurements of underlying-event properties using neutral and charged particles in pp collisions at root s=900 GeV and root s=7 TeV with the ATLAS detector at the LHC
ATLAS Collaboration 05/2011 In: European Physical Journal C: Particles and Fields. 71, 5, 24 p.
Journal article
Measurement of Dijet Azimuthal Decorrelations in pp Collisions at root s=7 TeV
ATLAS Collaboration 29/04/2011 In: Physical Review Letters. 106, 17, 17 p.
Journal article
Measurement of the production cross section for W-bosons in association with jets in pp collisions at root s=7 TeV with the ATLAS detector
ATLAS Collaboration 25/04/2011 In: Physics Letters B. 698, 5, p. 325-345. 21 p.
Journal article
Search for massive long-lived highly ionising particles with the ATLAS detector at the LHC
ATLAS Collaboration 25/04/2011 In: Physics Letters B. 698, 5, p. 353-370. 18 p.
Journal article
Search for Supersymmetry Using Final States with One Lepton, Jets, and Missing Transverse Momentum with the ATLAS Detector in sqrt(s)=7 TeV pp Collisions
ATLAS Collaboration 28/03/2011 In: Physical Review Letters. 106, 13, 19 p.
Journal article
Search for Diphoton Events with Large Missing Transverse Energy in 7 TeV Proton-Proton Collisions with the ATLAS Detector
ATLAS Collaboration 23/03/2011 In: Physical Review Letters. 106, 12, 19 p.
Journal article
Study of jet shapes in inclusive jet production in pp collisions at root s=7 TeV using the ATLAS detector
ATLAS Collaboration 8/03/2011 In: Physical Review D. 83, 5, 29 p.
Journal article
Studies of the performance of the ATLAS detector using cosmic-ray muons
ATLAS Collaboration 03/2011 In: European Physical Journal C: Particles and Fields. 71, 3, 36 p.
Journal article
Measurement of inclusive jet and dijet cross sections in proton-proton collisions at 7 TeV centre-of-mass energy with the ATLAS detector
ATLAS Collaboration 2011 In: European Physical Journal C: Particles and Fields. 71, 2, 59 p.
Journal article
Measurement of the W+W- Cross Section in root s=7 TeV pp Collisions with ATLAS
ATLAS Collaboration 2011 In: Physical Review Letters. 107, 4
Journal article
Measurement of the centrality dependence of J/psi yields and observation of Z production in lead-lead collisions with the ATLAS detector at the LHC
ATLAS Collaboration 2011 In: Physics Letters B. 697, 4, p. 294-312. 19 p.
Journal article
Measurement of the inclusive isolated prompt photon cross section in pp collisions at root s=7 TeV with the ATLAS detector
ATLAS Collaboration 2011 In: Physical Review D. 83, 5, 31 p.
Journal article
Measurement of the top quark-pair production cross section with ATLAS in pp collisions at root s=7 TeV
ATLAS Collaboration 2011 In: European Physical Journal C: Particles and Fields. 71, 3
Journal article
Search for high-mass states with one lepton plus missing transverse momentum in proton-proton collisions at root s=7 TeV with the ATLAS detector
ATLAS Collaboration 2011 In: Physics Letters B. 701, 1, p. 50-69.
Journal article
Search for quark contact interactions in dijet angular distributions in pp collisions at root s=7 TeV measured with the ATLAS detector
ATLAS Collaboration 2011 In: Physics Letters B. 694, 4-5, p. 327-345. 19 p.
Journal article
Search for supersymmetry in pp collisions at root s=7 TeV in final states with missing transverse momentum and b-jets
ATLAS Collaboration 2011 In: Physics Letters B. 701, 4, p. 398-416.
Journal article
Observation of a Centrality-Dependent Dijet Asymmetry in Lead-Lead Collisions at root s(NN)=2.76 TeV with the ATLAS Detector at the LHC
ATLAS Collaboration 13/12/2010 In: Physical Review Letters. 105, 25, 18 p.
Journal article
Commissioning of the ATLAS Muon Spectrometer with cosmic rays
ATLAS Collaboration 12/2010 In: European Physical Journal C: Particles and Fields. 70, 3, p. 875-916. 42 p.
Journal article
Drift Time Measurement in the ATLAS Liquid Argon Electromagnetic Calorimeter using Cosmic Muons
ATLAS Collaboration 12/2010 In: European Physical Journal C: Particles and Fields. 70, 3, p. 755-785. 31 p.
Journal article
Measurement of the W -> l nu and Z/gamma* -> ll production cross sections in proton-proton collisions at root s=7 TeV with the ATLAS detector
ATLAS Collaboration 12/2010 In: Journal of High Energy Physics. 2010, 12, 65 p.
Journal article
Readiness of the ATLAS liquid argon calorimeter for LHC collisions
ATLAS Collaboration 12/2010 In: European Physical Journal C: Particles and Fields. 70, 3, p. 723-753. 31 p.
Journal article
The ATLAS Inner Detector commissioning and calibration
ATLAS Collaboration 12/2010 In: European Physical Journal C: Particles and Fields. 70, 3, p. 787-821. 35 p.
Journal article
The ATLAS simulation infrastructure
ATLAS Collaboration 12/2010 In: European Physical Journal C: Particles and Fields. 70, 3, p. 823-874. 52 p.
Journal article
Search for New Particles in Two-Jet Final States in 7 TeV Proton-Proton Collisions with the ATLAS Detector at the LHC
ATLAS Collaboration 11/10/2010 In: Physical Review Letters. 105, 16, 19 p.
Journal article
Charged-particle multiplicities in pp interactions at √s = 900 GeV measured with the ATLAS detector at the LHC
ATLAS Collaboration 26/04/2010 In: Physics Letters B. 688, 1, p. 21-42. 22 p.
Journal article
Performance of the ATLAS detector using first collision data
ATLAS Collaboration 2010 In: Journal of High Energy Physics. 2010, 9, 64 p.
Journal article
The Evolution of the ATLAS Computing Model
Jones, R.W.L., Barberis, D. 2010 In: Journal of Physics: Conference Series. 219, 7, p. -. 5 p.
Journal article
Ganga: a tool for computational-task management and easy access to Grid resources
Moscicki, J.T., Brochu, F., Ebke, J., Egede, U., Elmsheuser, J., Harrison, K., Jones, R.W.L., Lee, H.C., Liko, D., Maier, A., Muraru, A., Patrick, G.N., Pajchel, K., Reece, W., Samset, B.H., Slater, M.W., Soroko, A., Tan, C.L., van der Ster, D.C., Williams, M. 11/2009 In: Computer Physics Communications. 180, 11, p. 2303-2316. 14 p.
Journal article
GridPP: the UK grid for particle physics
Britton, D., Cass, A.J., Clarke, P.E.L., Coles, J., Colling, D.J., Doyle, A.T., Geddes, N.I., Gordon, J.C., Jones, R.W.L., Kelsey, D.P., Lloyd, S.L., Middleton, R.P., Patrick, G.N., Sansum, R.A., Pearce, S.E. 28/06/2009 In: Philosophical Transactions A: Mathematical, Physical and Engineering Sciences . 367, 1897, p. 2447-2457. 11 p.
Journal article
The integration and engineering of the ATLAS SemiConductor Tracker Barrel.
Abdesselam, A., Bouhova-Thacker, E.V., Brodbeck, T.J., Catmore, J.R., Chilingarov, A., Henderson, R.C.W., Hughes, G., Jones, R., Kartvelishvili, V., Ratoff, P.N., Sloan, T., Smizanska, M., et al. (ATLAS SCT Collaboration) 30/10/2008 In: Journal of Instrumentation. 3, P10006
Journal article
The ATLAS Experiment at the CERN Large Hadron Collider.
Aad, G., Bouhova-Thacker, E.V., Brodbeck, T.J., Catmore, J.R., Chilingarov, A., Davidson, R., Dewhurst, A., Fox, H., Henderson, R., Hughes, G., Jones, R., Kartvelishvili, V., Price, D., Ratoff, P.N., Sloan, T.J., Small, A., Smizanska, M., Collaboration, A. 14/08/2008 In: Journal of Instrumentation. 3, 8, p. 1-438. 438 p.
Journal article
Measurement of the cross section for open b-quark production in two-photon interactions at LEP.
ALEPH Collaboration 24/09/2007 In: Journal of High Energy Physics. 2007, 9, 20 p.
Journal article
The silicon microstrip sensors of the ATLAS semiconductor tracker.
Ahmad, A., Brodbeck, T.J., Campbell, D., Chilingarov, A., Holt, S., Hughes, G., Jones, R.W.L., Mercer, I.J., Ratoff, P.N., Sloan, T., et al. (ATLAS SCT Collaboration) 07/2007 In: Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment. 578, 1, p. 98-118. 21 p.
Journal article
The ATLAS semiconductor tracker end-cap module.
Abdesselam, A., Brodbeck, T.J., Campbell, D., Chilingarov, A., Hughes, G., Jones, R.W.L., Mercer, I.J., Ratoff, P.N., et al. (ATLAS SCT Collaboration) 06/2007 In: Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment. 575, 3, p. 353-389. 37 p.
Journal article
Test of Colour Reconnection Models using Three-Jet Events in Hadronic Z Decays.
Schael, S., Bouhova-Thacker, E.V., Bowdery, C.K., Finch, A.J., Jones, R.W.L., Hughes, G., Smizanska, M., ALEPH Collaboration 12/2006 In: European Physical Journal C: Particles and Fields. 48, 3, p. 685-698. 14 p.
Journal article
Search for neutral MSSM Higgs bosons at LEP.
Schael, S., Bouhova-Thacker, E.V., Bowdery, C.K., Finch, A.J., Jones, R.W.L., Hughes, G., Smizanska, M., ALEPH Collaboration 09/2006 In: European Physical Journal C: Particles and Fields. 47, 3, p. 547-587. 41 p.
Journal article
Deuteron and anti-deuteron production in e+ e- collisions at the Z resonance.
Schael, S., Bouhova-Thacker, E.V., Bowdery, C.K., Finch, A.J., Jones, R.W.L., Hughes, G., Smizanska, M., ALEPH Collaboration 10/08/2006 In: Physics Letters B. 639, 3-4, p. 192-201. 10 p.
Journal article
Measurement of the W boson mass and width in e+e− collisions at LEP
ALEPH Collaboration 08/2006 In: European Physical Journal C: Particles and Fields. 47, 2, p. 309-335. 27 p.
Journal article
High precision measurements of B0s parameters in B0s -> J/psi phi decays
Jones, R.W.L., ATLAS Collaboration 06/2006 In: Nuclear Physics B - Proceedings Supplements. 156, 1, p. 147-150. 4 p.
Journal article
Precision Electroweak Measurements on the Z resonance.
Jones, R.W.L., Borissov, G., Grunewald, M.W., Ratoff, P.N., Smizanska, M. 1/03/2006 In: Physics Reports. 427, 5-6, p. 257-454. 198 p.
Journal article
Branching ratios and spectral functions of tau decays: Final ALEPH measurements and physics implications
Bouhova-Thacker, E., Bowdery, C., Foster, F., Jones, R., Smizanska, M., ALEPH Collaboration 12/2005 In: Physics Reports. 421, 5-6, p. 191-284. 94 p.
Journal article
ESLEA: exploitation of switched lightpaths for e-science applications.
Bartsch, V., Davies, B.G.E., Jones, R.W.L. 09/2005 In: Proceedings of the UK e-science All Hands Meeting.
Chapter
ATLAS computing : Technical design report by ATLAS Collaboration
Duckeck, G., Jones, R.W.L. 06/2005 CERN, 246 p.
Working paper
LHC computing grid : Technical design report
Bird, I., Jones, R.W.L. 06/2005 CERN, 152 p.
Working paper
Improved measurement of the triple gauge-boson couplings γWW and ZWW in e+e− collisions
ALEPH Collaboration 12/05/2005 In: Physics Letters B. 614, 1-2, p. 7-26. 20 p.
Journal article
Two-particle correlations in pp, anti-p anti-p and K0S K0S pairs from hadronic Z decays
ALEPH Collaboration 31/03/2005 In: Physics Letters B. 611, 1-2, p. 66-80. 15 p.
Journal article
Bose-Einstein correlations in W-pair decays with an event-mixing technique
Bouhova-Thacker, E., Bowdery, C., Foster, F., Jones, R., Smizanska, M., ALEPH Collaboration 27/01/2005 In: Physics Letters B. 606, 3-4, p. 265-275. 11 p.
Journal article
Single vector boson production in e+e− collisions at centre-of-mass energies from 183 to 209 GeV
Bouhova-Thacker, E., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R., Smizanska, M., ALEPH Collaboration 6/01/2005 In: Physics Letters B. 605, 1-2, p. 49-62. 14 p.
Journal article
ATLAS grid computing in the real world
Jones, R.W.L. 2005 In: Proceedings of the 2nd International Grid Symposium, Taipei, April 2005..
Chapter
Distributed analysis in the ATLAS experiment.
Adams, D.L., Jones, R.W.L. 2005 In: Proceedings of the UK e-science All Hands Meeting..
Chapter
Ganga user interface for job definition and management
Egede, U., Jones, R.W.L. 2005 In: Proceedings of the UK e-science All Hands Meeting.
Chapter
GridPP: meeting the particle physics computing challenges
Britton, D., Jones, R.W.L. 2005 In: Proceedings of the UK e-science All Hands Meeting.
Chapter
Measurement of W pair production in e+ e- collisions at centre-of-mass energies from 183-GeV to 209-GeV.
Bouhova-Thacker, E.V., Finch, A.J., Hughes, G., Jones, R., Smizanska, M., Clarke, D.P., Ellis, G., Pearson, M.R., Robertson, N.A., ALEPH Collaboration, T. 12/2004 In: European Physical Journal C: Particles and Fields. 38, 2, p. 147-160. 14 p.
Journal article
Constraints on anomalous QGCs in e+ e- interactions from 183-GeV to 209-GeV
ALEPH Collaboration, T., Bouhova-Thacker, E.V., Finch, A.J., Hughes, G., Jones, R., Smizanska, M., Clarke, D.P., Ellis, G., Pearson, M.R., Robertson, N.A. 18/11/2004 In: Physics Letters B. 602, 1-3, p. 31-40. 10 p.
Journal article
Search for pentaquark states in Z decays
ALEPH Collaboration, Bouhova-Thacker, E.V., Finch, A.J., Hughes, G., Jones, R., Smizanska, M., Clarke, D.P., Ellis, G., Pearson, M.R., Robertson, N.A. 7/10/2004 In: Physics Letters B. 599, 1-2, p. 1-16. 16 p.
Journal article
Two-dimensional analysis of Bose-Einstein correlations in hadronic Z decays at LEP.
Schael, S., Bouhova-Thacker, E.V., Bowdery, C.K., Finch, A.J., Jones, R.W.L., Hughes, G., Smizanska, M., ALEPH Collaboration 08/2004 In: European Physical Journal C: Particles and Fields. 36, 2, p. 147-159. 13 p.
Journal article
A combination by the LEP QCD working group of alpha-s values derived from event shape variables at LEP : a contribution to the 10th high-energy international conference on quantum chromodynamics held in Montpellier, France in July 2003.
Jones, R.W.L. 07/2004 In: Nuclear Physics B - Proceedings Supplements. 133, 1, p. 13-20. 8 p.
Journal article
B-physics in p-p collisions at the LHC: a contribution to the 10th high-energy international conference on quantum chromodynamics held in Montpellier, France in July 2003.
Jones, R.W.L. 07/2004 In: Nuclear Physics B - Proceedings Supplements. 133, 1, p. 137-143. 7 p.
Journal article
Studies of QCD at e+ e- centre-of-mass energies between 91-GeV and 209-GeV
Jones, R.W.L., Finch, A., Bouhova-Thacker, E., Hughes, G., Smizanska, M., ALEPH Collaboration, T. 07/2004 In: European Physical Journal C: Particles and Fields. 35, 4, p. 457-486. 30 p.
Journal article
Exclusive production of pion and kaon meson pairs in two photon collisions at LEP.
Finch, A.J., Jones, R.W.L., Smizanska, M., Aleph Collaboration, T. 01/2004 In: Nuclear Physics B - Proceedings Supplements. 126, 1, p. 289-294. 6 p.
Journal article
Jones, R.W.L. 2004 In: Proceedings of the UK eScience all hands meeting.
Chapter
Absolute mass lower limit for the lightest neutralino of the MSSM from e+ e- data at √s up to 209 GeV
Schael, S., Bouhova-Thacker, E.V., Bowdery, C.K., Finch, A.J., Jones, R.W.L., Hughes, G., Smizanska, M., ALEPH Collaboration 2004 In: Physics Letters B. 583, 3-4, p. 247-263. 17 p.
Journal article
Theoretical uncertainties on alpha_s from event-shape variables in e+e− annihilations.
Jones, R.W.L., Ford, M., Salam, G.P., Stenzel, H., Wicke, D. 1/12/2003 In: Journal of High Energy Physics. 2003, 12, p. 7-37. 31 p.
Journal article
Search for stable hadronizing squarks and gluinos in e+e− collisions up to √s = 209 GeV
ALEPH Collaboration, T., Heister, A., Bouhova-Thacker, E., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R., Smizanska, M. 11/2003 In: European Physical Journal C: Particles and Fields. 31, 3, p. 327-342. 16 p.
Journal article
Search for anomalous weak dipole moments of the tau lepton
ALEPH Collaboration, T., Heister, A., Bouhova-Thacker, E., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R. 10/2003 In: European Physical Journal C: Particles and Fields. 30, 3, p. 291-304. 14 p.
Journal article
Search for supersymmetric particles with R parity violating decays in e+e− collisions at √s up to 209 GeV
ALEPH Collaboration, T., Heister, A., Bouhova-Thacker, E., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R., Smizanska, M. 10/2003 In: European Physical Journal C: Particles and Fields. 31, 1, p. 1-16. 16 p.
Journal article
Exclusive production of pion and kaon meson pairs in two photon collisions at LEP
Heister, A., Bouhova-Thacker, E., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R., Smizanska, M., ALEPH Collaboration, T. 11/09/2003 In: Physics Letters B. 569, 3-4, p. 140-150. 11 p.
Journal article
Measurement of the hadronic photon structure function F2γ(x, Q²) in two-photon collisions at LEP
Heister, A., ALEPH Collaboration, T., Bouhova-Thacker, E., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R., Smizanska, M. 09/2003 In: European Physical Journal C: Particles and Fields. 30, 2, p. 145-158. 14 p.
Journal article
Search for the Standard Model Higgs boson at LEP.
ALEPH Collaboration, Heister, A., Bouhova-Thacker, E.V., Finch, A.J., Hughes, G., Jones, R., Smizanska, M., Clarke, D.P., Ellis, G., et al., DELPHI Collaboration, Abdallah, J., Sopczak, A., Borissov, G., et al., L3 Collaboration, OPAL Collaboration, Abbiendi, G., Kartvelishvili, V., et al., LEP Working Group 17/07/2003 In: Physics Letters B. 565, p. 61-75. 15 p.
Journal article
Improved search for B0s anti-B0s oscillations
Heister, A., ALEPH Collaboration, T., Bouhova-Thacker, E., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R. 07/2003 In: European Physical Journal C: Particles and Fields. 29, 2, p. 143-170. 28 p.
Journal article
Measurement of the inclusive D*± production in gamma gamma collisions at LEP.
Heister, A., ALEPH Collaboration, T., Bouhova-Thacker, E., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R., Smizanska, M. 06/2003 In: European Physical Journal C: Particles and Fields. 28, 4, p. 437-449. 13 p.
Journal article
A measurement of the gluon splitting rate into c anti-c pairs in hadronic Z decays
Heister, A., Bouhova-Thacker, E., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R., Smizanska, M., ALEPH Collaboration, T. 29/05/2003 In: Physics Letters B. 561, 3-4, p. 213-224. 12 p.
Journal article
Single- and multi-photon production in e+e− collisions at √s up to 209 GeV
Heister, A., ALEPH Collaboration, T., Bouhova-Thacker, E., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R., Smizanska, M. 05/2003 In: European Physical Journal C: Particles and Fields. 28, 1, p. 1-13. 13 p.
Journal article
ATLAS computing and the GRID.
Jones, R.W.L. 21/04/2003 In: Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment. 502, 2-3, p. 372-375. 4 p.
Journal article
Measurements of the strong coupling constant and the QCD color factors using four jet observables from hadronic Z decays.
Schael, S., Bouhova-Thacker, E.V., Bowdery, C.K., Finch, A.J., Hughes, G., Jones, R.W.L., Smizanska, M., ALEPH Collaboration, T. 03/2003 In: European Physical Journal C: Particles and Fields. 27, 1, p. 1-17. 17 p.
Journal article
Automated software packaging and installation for the ATLAS experiment.
George, S., Arnault, C., Gardner, M., Jones, R.W.L., Youssef, S., Orsay, L. 2003 In: Proceedings of the UK eScience all hands meeting. p. 452-458. 7 p.
Chapter
Search for gauge mediated SUSY breaking topologies in e+e− collisions at center-of-mass energies up to 209 GeV
ALEPH Collaboration, T., Heister, A., Bouhova-Thacker, E., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R., Smizanska, M. 10/2002 In: European Physical Journal C: Particles and Fields. 25, 3, p. 339-351. 13 p.
Journal article
A Flavor independent Higgs boson search in e+ e- collisions at s**(1/2) up to 209-GeV
ALEPH Collaboration, T., Heister, A., Bouhova-Thacker, E., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R., Smizanska, M. 19/09/2002 In: Physics Letters B. 544, 1-2, p. 25-34. 10 p.
Journal article
Absolute lower limits on the masses of selectrons and sneutrinos in the MSSM
ALEPH Collaboration, T., Heister, A., Bouhova-Thacker, E., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R., Smizanska, M. 19/09/2002 In: Physics Letters B. 544, 1-2, p. 73-88. 16 p.
Journal article
Search for gamma gamma decays of a Higgs boson in e+ e- collisions at s**(1/2) up to 209-GeV
ALEPH Collaboration, T., Heister, A., Bouhova-Thacker, E., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R., Smizanska, M. 19/09/2002 In: Physics Letters B. 544, 1-2, p. 16-24. 9 p.
Journal article
Search for single top production in $e^+ e^-$ collisions at $s$ up to 209-GeV
ALEPH Collaboration, T., Heister, A., Bouhova-Thacker, E., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R., Smizanska, M. 12/09/2002 In: Physics Letters B. 543, 3-4, p. 173-182. 10 p.
Journal article
Search for charged Higgs bosons in $e^+ e^-$ collisions at energies up to $s$ = 209-GeV
ALEPH Collaboration, T., Heister, A., Bouhova-Thacker, E., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R. 5/09/2002 In: Physics Letters B. 543, 1-2, p. 1-13. 13 p.
Journal article
Search for R-parity violating production of single sneutrinos in e+ e- collisions at s**(1/2) = 189-GeV to 209-GeV
ALEPH Collaboration, T., Heister, A., Bowdery, C., Bouhova-Thacker, E., Finch, A., Foster, F., Hughes, G., Jones, R., Smizanska, M. 09/2002 In: European Physical Journal C: Particles and Fields. 25, 1, p. 1-12. 12 p.
Journal article
Search for scalar quarks in $e^+ e^-$ collisions at $s$ up to 209-GeV
ALEPH Collaboration, T., Heister, A., Bouhova-Thacker, E., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R., Smizanska, M. 13/06/2002 In: Physics Letters B. 537, 1-2, p. 5-20. 16 p.
Journal article
Measurement of the forward backward asymmetry in Z --> b anti-b and Z --> c anti-c decays with leptons
ALEPH Collaboration, T., Heister, A., Bouhova-Thacker, E., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R. 06/2002 In: European Physical Journal C: Particles and Fields. 24, 2, p. 177-191. 15 p.
Journal article
Search for charginos nearly mass degenerate with the lightest neutralino in e+ e- collisions at center-of-mass energies up to 209-GeV
ALEPH Collaboration, T., Heister, A., Bouhova-Thacker, E., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R., Smizanska, M. 9/05/2002 In: Physics Letters B. 533, 3-4, p. 223-236. 14 p.
Journal article
Search for gamma gamma ---> eta(b) in e+ e- collisions at LEP-2
ALEPH Collaboration, T., Heister, A., Bouhova-Thacker, E., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R., Smizanska, M. 28/03/2002 In: Physics Letters B. 530, 1-4, p. 56-66. 11 p.
Journal article
Inclusive production of the omega and eta mesons in Z decays, and the muonic branching ratio of the omega
ALEPH Collaboration, T., Heister, A., Bouhova-Thacker, E., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R. 28/02/2002 In: Physics Letters B. 528, 1-2, p. 19-33. 15 p.
Journal article
Leptonic decays of the D(s) meson
ALEPH Collaboration, T., Heister, A., Bouhova-Thacker, E., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R. 28/02/2002 In: Physics Letters B. 528, 1-2, p. 1-18. 18 p.
Journal article
Final results of the searches for neutral Higgs bosons in e+ e- collisions at s**(1/2) up to 209-GeV
ALEPH Collaboration, T., Heister, A., Bouhova-Thacker, E., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R., Smizanska, M. 7/02/2002 In: Physics Letters B. 526, 3-4, p. 191-205. 15 p.
Journal article
Search for scalar leptons in e+ e- collisions at center-of-mass energies up to 209-GeV
ALEPH Collaboration, T., Heister, A., Bouhova-Thacker, E., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R., Smizanska, M. 7/02/2002 In: Physics Letters B. 526, 3-4, p. 206-220. 15 p.
Journal article
Production of D**(s) mesons in hadronic Z decays
ALEPH Collaboration, T., Heister, A., Bouhova-Thacker, E., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R. 31/01/2002 In: Physics Letters B. 526, 1-2, p. 34-49. 16 p.
Journal article
Inclusive semileptonic branching ratios of b hadrons produced in Z decays
ALEPH Collaboration, T., Heister, A., Bouhova-Thacker, E., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R. 01/2002 In: European Physical Journal C: Particles and Fields. 22, 4, p. 613-626. 14 p.
Journal article
Measurement of A**b(FB) using inclusive b hadron decays
ALEPH Collaboration, T., Heister, A., Bouhova-Thacker, E., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R. 11/2001 In: European Physical Journal C: Particles and Fields. 22, 2, p. 201-215. 15 p.
Journal article
Measurement of the Michel parameters and the nu/tau helicity in tau lepton decays
ALEPH Collaboration, T., Heister, A., Bouhova-Thacker, E., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R. 11/2001 In: European Physical Journal C: Particles and Fields. 22, 2, p. 217-230. 14 p.
Journal article
Study of the fragmentation of b quarks into B mesons at the Z peak
ALEPH Collaboration, T., Heister, A., Bouhova-Thacker, E., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R. 12/07/2001 In: Physics Letters B. 512, 1-2, p. 30-48. 19 p.
Journal article
Investigation of inclusive CP asymmetries in B0 decays
ALEPH Collaboration, T., Barate, R., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R. 05/2001 In: European Physical Journal C: Particles and Fields. 20, 3, p. 431-443. 13 p.
Journal article
Search for R-parity violating decays of supersymmetric particles in $e^+ e^-$ collisions at center-of-mass energies from 189-GeV to 202-GeV
ALEPH Collaboration, T., Barate, R., Bouhova-Thacker, E., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R., Smizanska, M. 03/2001 In: European Physical Journal C: Particles and Fields. 19, 3, p. 415-428. 14 p.
Journal article
Search for supersymmetric particles in $e^+ e^-$ collisions at $s$ up to 202-GeV and mass limit for the lightest neutralino
ALEPH Collaboration, T., Barate, R., Bouhova-Thacker, E., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R., Smizanska, M. 1/02/2001 In: Physics Letters B. 499, 1-2, p. 67-84. 18 p.
Journal article
Searches for neutral Higgs bosons in e+ e- collisions at center-of-mass energies from 192-GeV to 202-GeV
ALEPH Collaboration, T., Barate, R., Bouhova-Thacker, E., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R., Smizanska, M. 1/02/2001 In: Physics Letters B. 499, 1-2, p. 53-66. 14 p.
Journal article
Measurements of BR (b ---> tau- anti-nu(tau) X) and BR (b ---> tau- anti-nu(tau) D*+- X) and upper limits on BR (B- ---> tau- anti-nu(tau)) and BR (b---> s nu anti-nu)
ALEPH Collaboration, T., Barate, R., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R. 02/2001 In: European Physical Journal C: Particles and Fields. 19, 2, p. 213-227. 15 p.
Journal article
Measurement of the tau polarization at LEP
ALEPH Collaboration, T., Heister, A., Bouhova-Thacker, E., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R. 2001 In: European Physical Journal C: Particles and Fields. 20, 3, p. 401-430. 30 p.
Journal article
Observation of an excess in the search for the standard model Higgs boson at ALEPH
ALEPH Collaboration, T., Barate, R., Bouhova-Thacker, E., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R., Smizanska, M. 7/12/2000 In: Physics Letters B. 495, 1-2, p. 1-17. 17 p.
Journal article
A Measurement of the b quark mass from hadronic Z decays
ALEPH Collaboration, T., Barate, R., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R. 12/2000 In: European Physical Journal C: Particles and Fields. 18, 1, p. 1-13. 13 p.
Journal article
Search for single top production in e+ e- collisions at s**(1/2) = 189-GeV - 202-GeV
ALEPH Collaboration, T., Barate, R., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R., Smizanska, M., Williams, M. 23/11/2000 In: Physics Letters B. 494, 1-2, p. 33-45. 13 p.
Journal article
Measurement of the $W$ mass and width in $e^+ e^-$ collisions at 189-GeV
ALEPH Collaboration, T., Barate, R., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R., Smizanska, M. 10/2000 In: European Physical Journal C: Particles and Fields. 17, 2, p. 241-261. 21 p.
Journal article
Measurements of the structure of quark and gluon jets in hadronic Z decays
ALEPH Collaboration, T., Barate, R., Betteridge, A., Bowdery, C., Colrain, P., Finch, A., Foster, F., Hughes, G., Jones, R., Williams, M. 10/2000 In: European Physical Journal C: Particles and Fields. 17, 1, p. 1-18. 18 p.
Journal article
Search for the neutral Higgs bosons of the standard model and the MSSM in e+ e- collisions at S**(1/2) = 189-GeV
ALEPH Collaboration, T., Barate, R., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R., Smizanska, M. 10/2000 In: European Physical Journal C: Particles and Fields. 17, 2, p. 223-240. 18 p.
Journal article
Search for a scalar top almost degenerate with the lightest neutralino in e+ e- collisions at s**(1/2) up to 202-GeV
ALEPH Collaboration, T., Barate, R., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R., Smizanska, M. 7/09/2000 In: Physics Letters B. 488, 3-4, p. 234-246. 13 p.
Journal article
Inclusive production of pi0, eta, eta-prime (958), K0(S) and lambda in two jet and three jet events from hadronic Z decays
ALEPH Collaboration, T., Barate, R., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R., Williams, M. 09/2000 In: European Physical Journal C: Particles and Fields. 16, 4, p. 613-634. 22 p.
Journal article
Study of charm production in Z decays
ALEPH Collaboration, T., Barate, R., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R., Williams, M. 09/2000 In: European Physical Journal C: Particles and Fields. 16, 4, p. 597-611. 15 p.
Journal article
Search for charged Higgs bosons in e+ e- collisions at energies up to S**(1/2) = 189-GeV
ALEPH Collaboration, T., Barate, R., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R., Smizanska, M. 17/08/2000 In: Physics Letters B. 487, 3-4, p. 253-263. 11 p.
Journal article
Search for gamma gamma decays of a Higgs boson produced in association with a fermion pair in e+ e- collisions at LEP
ALEPH Collaboration, T., Barate, R., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R., Smizanska, M. 17/08/2000 In: Physics Letters B. 487, 3-4, p. 241-252. 12 p.
Journal article
Journal article
A Study of the decay width difference in the $B^0_s - \bar B^0_s$ system using $\phi \phi$ correlations
ALEPH Collaboration, T., Barate, R., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R. 3/08/2000 In: Physics Letters B. 486, 3-4, p. 286-299. 14 p.
Journal article
Search for gauge mediated SUSY breaking topologies at $S^{1/2}$ similar to 189-GeV
ALEPH Collaboration, T., Barate, R., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R., Smizanska, M., Williams, M. 08/2000 In: European Physical Journal C: Particles and Fields. 16, 1, p. 71-85. 15 p.
Journal article
Measurement of W pair production in e+ e- collisions at 189-GeV
ALEPH Collaboration, T., Barate, R., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R., Smizanska, M. 6/07/2000 In: Physics Letters B. 484, 3-4, p. 205-217. 13 p.
Journal article
ATLAS B-physics – an overview
ATLAS Collaboration 11/05/2000 In: Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment. 446, 1-2, p. 152-158. 7 p.
Journal article
Measurement of the Z resonance parameters at LEP
ALEPH Collaboration, T., Barate, R., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R., Sloan, T., Williams, M. 05/2000 In: European Physical Journal C: Particles and Fields. 14, 1, p. 1-50. 50 p.
Journal article
Bose-Einstein correlations in W pair decays
ALEPH Collaboration, T., Barate, R., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R., Smizanska, M., Williams, M. 6/04/2000 In: Physics Letters B. 478, 1-3, p. 50-64. 15 p.
Journal article
Fermi-Dirac Correlations in lambda pairs in hadronic Z decays
ALEPH Collaboration, T., Barate, R., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R., Williams, M. 2/03/2000 In: Physics Letters B. 475, 3-4, p. 395-406. 12 p.
Journal article
Search for R-parity violating decays of supersymmetric particles in $e^+ e^-$ collisions at center-of-mass energies near 183-GeV
ALEPH Collaboration, T., Barate, R., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R., Smizanska, M., Williams, M. 03/2000 In: European Physical Journal C: Particles and Fields. 13, 1, p. 29-46. 18 p.
Journal article
Search for the glueball candidates f(0)(1500) and f(J)(1710) in gamma gamma collisions
ALEPH Collaboration, T., Barate, R., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R., Williams, M. 13/01/2000 In: Physics Letters B. 472, 1-2, p. 189-199. 11 p.
Journal article
Study of fermion pair production in $e^+ e^-$ collisions at 130-GeV to 183-GeV
ALEPH Collaboration, T., Barate, R., Betteridge, A., Bowdery, C., Colrain, P., Finch, A., Foster, F., Hughes, G., Jones, R., Williams, M. 01/2000 In: European Physical Journal C: Particles and Fields. 12, 2, p. 183-207. 25 p.
Journal article
Measurement of the $B^0$ and $B^-$ meson lifetimes
ALEPH Collaboration, T., Barate, R., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R. 2000 In: Physics Letters B. 492, 3-4, p. 275-287. 13 p.
Journal article
Study of the CP asymmetry of $B^0 \to J/\psi K^0_S$ decays in ALEPH
ALEPH Collaboration, T., Barate, R., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R. 2000 In: Physics Letters B. 492, 3-4, p. 259-274. 16 p.
Journal article
Measurement of the e+ e- ---> Z Z production cross-section at center-of-mass energies of 183-GeV and 189-GeV
ALEPH Collaboration, T., Barate, R., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R., Smizanska, M., Williams, M. 9/12/1999 In: Physics Letters B. 469, 1-4, p. 287-302. 16 p.
Journal article
Searches for sleptons and squarks in e+ e- collisions at 189-GeV
ALEPH Collaboration, T., Barate, R., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R., Smizanska, M., Williams, M. 9/12/1999 In: Physics Letters B. 469, 1-4, p. 303-314. 12 p.
Journal article
Study of tau decays involving kaons, spectral functions and determination of the strange quark mass
ALEPH Collaboration, T., Barate, R., Betteridge, A., Bowdery, C., Colrain, P., Finch, A., Foster, F., Hughes, G., Jones, R., Williams, M. 12/1999 In: European Physical Journal C: Particles and Fields. 11, 4, p. 599-618. 20 p.
Journal article
Search for charginos and neutralinos in $e^+ e^-$ collisions at center-of-mass energies near 183-GeV and constraints on the MSSM parameter space
ALEPH Collaboration, T., Barate, R., Betteridge, A., Bowdery, C., Colrain, P., Finch, A., Foster, F., Hughes, G., Jones, R., Williams, M. 11/1999 In: European Physical Journal C: Particles and Fields. 11, 2, p. 193-216. 24 p.
Journal article
Search for an invisibly decaying Higgs boson in e+ e- collisions at 189-GeV
ALEPH Collaboration, T., Barate, R., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R., Smizanska, M., Williams, M. 28/10/1999 In: Physics Letters B. 466, 1, p. 50-60. 11 p.
Journal article
A Direct measurement of |V(cs)| in hadronic W decays using a charm tag
ALEPH Collaboration, T., Barate, R., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R., Smizanska, M., Williams, M. 21/10/1999 In: Physics Letters B. 465, 1-4, p. 349-362. 14 p.
Journal article
Determination of the LEP center-of-mass energy from Z gamma events
ALEPH Collaboration, T., Barate, R., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R., Smizanska, M., Williams, M. 14/10/1999 In: Physics Letters B. 464, 3-4, p. 339-349. 11 p.
Journal article
A Study of single $W$ production in $e^+ e^-$ collisions at $S^{1/2}$ = 161-GeV to 183-GeV
ALEPH Collaboration, T., Barate, R., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R., Smizanska, M., Williams, M. 16/09/1999 In: Physics Letters B. 462, 3-4, p. 389-400. 12 p.
Journal article
One prong tau decays with kaons
ALEPH Collaboration, T., Barate, R., Betteridge, A., Bowdery, C., Colrain, P., Finch, A., Foster, F., Hughes, G., Jones, R., Williams, M. 08/1999 In: European Physical Journal C: Particles and Fields. 10, 1, p. 1-18. 18 p.
Journal article
Measurement of the hadronic photon structure function at LEP-1 for (Q**2) values between 9.9-GeV**2 and 284-GeV**2
ALEPH Collaboration, T., Barate, R., Bowdery, C., Finch, A., Foster, F., Hughes, G., Jones, R., Williams, M. 1/07/1999 In: Physics Letters B. 458, 1, p. 152-166. 15 p.
Journal article
Measurement of W pair production in e+ e- collisions at 183-GeV
ALEPH Collaboration, T., Barate, R., Betteridge, A., Bowdery, C., Colrain, P., Finch, A., Foster, F., Hughes, G., Jones, R., Williams, M. 29/04/1999 In: Physics Letters B. 453, 1-2, p. 107-120. 14 p.
Journal article
Measurement of the $W$ mass in $e^+ e^-$ collisions at 183-GeV
ALEPH Collaboration, T., Barate, R., Betteridge, A., Bowdery, C., Colrain, P., Finch, A., Foster, F., Hughes, G., Jones, R., Williams, M. 29/04/1999 In: Physics Letters B. 453, 1-2, p. 121-137. 17 p.
Journal article
Search for charged Higgs bosons in $e^+ e^-$ collisions at $S^{1/2}$ = 181-GeV - 184-GeV
ALEPH Collaboration, T., Barate, R., Betteridge, A., Bowdery, C., Colrain, P., Finch, A., Foster, F., Hughes, G., Jones, R., Williams, M. 25/03/1999 In: Physics Letters B. 450, 4, p. 467-478. 12 p.
Journal article
Search for invisible Higgs boson decays in e+ e- collisions at center-of-mass energies up to 184-GeV
ALEPH Collaboration, T., Barate, R., Betteridge, A., Bowdery, C., Colrain, P., Finch, A., Foster, F., Hughes, G., Jones, R., Williams, M. 18/03/1999 In: Physics Letters B. 450, 1-3, p. 301-312. 12 p.
Journal article
Search for B0(s) oscillations using inclusive lepton events
ALEPH Collaboration, T., Barate, R., Betteridge, A., Bowdery, C., Colrain, P., Finch, A., Foster, F., Hughes, G., Jones, R., Williams, M. 03/1999 In: European Physical Journal C: Particles and Fields. 7, 4, p. 553-569. 17 p.
Journal article
Search for supersymmetry with a dominant R-parity violating LQ anti-D coupling in $e^+ e^-$ collisions at center-of-mass energies of 130-GeV to 172-GeV
ALEPH Collaboration, T., Barate, R., Betteridge, A., Bowdery, C., Colrain, P., Finch, A., Foster, F., Hughes, G., Jones, R., Williams, M. 03/1999 In: European Physical Journal C: Particles and Fields. 7, 3, p. 383-405. 23 p.
Journal article
Analysis of transverse momentum correlations in hadronic Z decays
ALEPH Collaboration, T., Barate, R., Betteridge, A., Bowdery, C., Colrain, P., Finch, A., Foster, F., Hughes, G., Jones, R., Williams, M. 4/02/1999 In: Physics Letters B. 447, 1-2, p. 183-198. 16 p.
Journal article
Determination of |V(ub)| from the measurement of the inclusive charmless semileptonic branching ratio of b hadrons
ALEPH Collaboration, T., Barate, R., Bowdery, C., Colrain, P., Finch, A., Foster, F., Hughes, G., Jones, R., Williams, M. 01/1999 In: European Physical Journal C: Particles and Fields. 6, 4, p. 555-574. 20 p.
Journal article
Search for the standard model Higgs boson at the LEP-2 collider near S**(1/2) = 183-GeV
ALEPH Collaboration, T., Barate, R., Betteridge, A., Bowdery, C., Colrain, P., Finch, A., Foster, F., Hughes, G., Jones, R., Williams, M. 19/11/1998 In: Physics Letters B. 440, 3-4, p. 403-418. 16 p.
Journal article
Study of D0 anti-D0 mixing and D0 doubly Cabibbo suppressed decays
ALEPH Collaboration, T., Barate, R., Betteridge, A., Bowdery, C., Colrain, P., Finch, A., Foster, F., Hughes, G., Jones, R., Williams, M. 17/09/1998 In: Physics Letters B. 436, 1-2, p. 211-221. 11 p.
Journal article
A Measurement of the gluon splitting rate into b anti-b pairs in hadronic Z decays
ALEPH Collaboration, T., Barate, R., Betteridge, A., Bowdery, C., Colrain, P., Finch, A., Foster, F., Hughes, G., Jones, R., Williams, M. 27/08/1998 In: Physics Letters B. 434, 3-4, p. 437-450. 14 p.
Journal article
The forward-backward asymmetry for charm quarks at the Z
ALEPH Collaboration, T., Barate, R., Betteridge, A., Bowdery, C., Colrain, P., Finch, A., Foster, F., Hughes, G., Jones, R., Williams, M. 27/08/1998 In: Physics Letters B. 434, 3-4, p. 415-425. 11 p.
Journal article
Scalar quark searches in $e^+ e^-$ collisions at $S^{1/2}$ = 181-GeV - 184-GeV
ALEPH Collaboration, T., Barate, R., Bowdery, C., Colrain, P., Finch, A., Foster, F., Hughes, G., Jones, R., Williams, M. 20/08/1998 In: Physics Letters B. 434, 1-2, p. 189-199. 11 p.
Journal article
Search for sleptons in e+ e- collisions at center-of-mass energies up to 184-GeV
ALEPH Collaboration, T., Barate, R., Bowdery, C., Colrain, P., Finch, A., Foster, F., Hughes, G., Jones, R., Williams, M. 6/08/1998 In: Physics Letters B. 433, 1-2, p. 176-194. 19 p.
Journal article
A Measurement of the semileptonic branching ratio BR(b-baryon ---> p lepton anti-neutrino X) and a study of inclusive pi+-, K+-, (p,anti-p) production in Z decays
ALEPH Collaboration, T., Barate, R., Betteridge, A., Bowdery, C., Colrain, P., Finch, A., Foster, F., Hughes, G., Jones, R., Williams, M. 08/1998 In: European Physical Journal C: Particles and Fields. 5, 2, p. 205-227. 23 p.
Journal article
Measurement of the fraction of hadronic Z decays into charm quark pairs
ALEPH Collaboration, T., Barate, R., Betteridge, A., Bowdery, C., Colrain, P., Finch, A., Foster, F., Hughes, G., Jones, R., Williams, M. 07/1998 In: European Physical Journal C: Particles and Fields. 4, 4, p. 557-570. 14 p.
Journal article
Measurement of the spectral functions of axial-vector hadronic tau decays and determination of alpha(S)(M**2(tau))
ALEPH Collaboration, T., Barate, R., Betteridge, A., Bowdery, C., Colrain, P., Finch, A., Foster, F., Hughes, G., Jones, R., Williams, M. 07/1998 In: European Physical Journal C: Particles and Fields. 4, 3, p. 409-431. 23 p.
Journal article
Observation of doubly charmed B decays at LEP
ALEPH Collaboration, T., Barate, R., Betteridge, A., Bowdery, C., Colrain, P., Finch, A., Foster, F., Hughes, G., Jones, R., Williams, M. 07/1998 In: European Physical Journal C: Particles and Fields. 4, 3, p. 387-407. 21 p.
Journal article
Search for evidence of compositeness at LEP I
ALEPH Collaboration, T., Barate, R., Betteridge, A., Bowdery, C., Colrain, P., Finch, A., Foster, F., Hughes, G., Jones, R., Williams, M. 07/1998 In: European Physical Journal C: Particles and Fields. 4, 4, p. 571-590. 20 p.
Journal article
Search for supersymmetry with a dominant R-parity violating L L anti-E coupling in e+ e- collisions at center-of-mass energies of 130-GeV to 172-GeV
ALEPH Collaboration, T., Barate, R., Betteridge, A., Bowdery, C., Colrain, P., Finch, A., Foster, F., Hughes, G., Jones, R., Whelan, E., Williams, M. 07/1998 In: European Physical Journal C: Particles and Fields. 4, 3, p. 433-451. 19 p.
Journal article
Study of $B^0_s$ oscillations and lifetime using fully reconstructed $D_s^-$ decays
ALEPH Collaboration, T., Barate, R., Betteridge, A., Bowdery, C., Colrain, P., Finch, A., Foster, F., Hughes, G., Jones, R., Williams, M. 07/1998 In: European Physical Journal C: Particles and Fields. 4, 3, p. 367-385. 19 p.
Journal article
A Measurement of the inclusive b ---> s gamma branching ratio
ALEPH Collaboration, T., Barate, R., Betteridge, A., Bowdery, C., Colrain, P., Finch, A., Foster, F., Hughes, G., Jones, R., Sloan, T., Williams, M. 11/06/1998 In: Physics Letters B. 429, 1-2, p. 169-187. 19 p.
Journal article
Single photon and multiphoton production in $e^+ e^-$ collisions at a center-of-mass energy of 183-GeV
ALEPH Collaboration, T., Barate, R., Betteridge, A., Bowdery, C., Colrain, P., Finch, A., Foster, F., Hughes, G., Jones, R., Williams, M. 11/06/1998 In: Physics Letters B. 429, 1-2, p. 201-214. 14 p.
Journal article
K0(S) production in tau decays
ALEPH Collaboration, T., Barate, R., Betteridge, A., Bowdery, C., Finch, A., Colrain, P., Foster, F., Hughes, G., Jones, R., Williams, M. 06/1998 In: European Physical Journal C: Particles and Fields. 4, 1, p. 29-45. 17 p.
Journal article
Determination of A-b(FB) using jet charge measurements in Z decays
ALEPH Collaboration, T., Barate, R., Betteridge, A., Bowdery, C., Colrain, P., Finch, A., Foster, F., Hughes, G., Jones, R., Williams, M. 30/04/1998 In: Physics Letters B. 426, 1-2, p. 217-230. 14 p.
Journal article
Resonant structure and flavor tagging in the B pi+- system using fully reconstructed B decays
ALEPH Collaboration, T., Barate, R., Betteridge, A., Bowdery, C., Colrain, P., Finch, A., Foster, F., Hughes, G., Jones, R., Williams, M. 16/04/1998 In: Physics Letters B. 425, 1-2, p. 215-226. 12 p.
Journal article
An upper limit on the tau-neutrino mass from three-prong and five-prong tau decays
ALEPH Collaboration, T., Barate, R., Betteridge, A., Bowdery, C., Colrain, P., Finch, A., Foster, F., Hughes, G., Jones, R., Sloan, T., Williams, M. 04/1998 In: European Physical Journal C: Particles and Fields. 2, 3, p. 395-406. 12 p.
Journal article
Measurement of the $B$ baryon lifetime and branching fractions in $Z$ decays
ALEPH Collaboration, T., Barate, R., Betteridge, A., Bowdery, C., Colrain, P., Finch, A., Foster, F., Hughes, G., Jones, R., Sloan, T., Williams, M. 04/1998 In: European Physical Journal C: Particles and Fields. 2, 2, p. 197-211. 15 p.
Journal article
Searches for charginos and neutralinos in $e^+ e^-$ collisions at $s$ = 161-GeV and 172-GeV
ALEPH Collaboration, T., Barate, R., Betteridge, A., Bowdery, C., Colrain, P., Finch, A., Foster, F., Hughes, G., Jones, R., Williams, M. 04/1998 In: European Physical Journal C: Particles and Fields. 2, 3, p. 417-439. 23 p.
Journal article
Measurement of the $W$ mass by direct reconstruction in $e^+ e^-$ collisions at 172-GeV
ALEPH Collaboration, T., Barate, R., Betteridge, A., Bowdery, C., Colrain, P., Finch, A., Foster, F., Hughes, G., Jones, R., Whelan, E., Williams, M. 12/03/1998 In: Physics Letters B. 422, 1-4, p. 384-398. 15 p.
Journal article
Measurement of triple gauge boson couplings at 172-GeV
ALEPH Collaboration, T., Barate, R., Betteridge, A., Bowdery, C., Colrain, P., Finch, A., Foster, F., Hughes, G., Jones, R., Whelan, E., Williams, M. 12/03/1998 In: Physics Letters B. 422, 1-4, p. 369-383. 15 p.
Journal article
Four jet final state production in e+ e- collisions at center-of-mass energies ranging from 130-GeV to 184-GeV
ALEPH Collaboration, T., Barate, R., Betteridge, A., Bowdery, C., Colrain, P., Finch, A., Foster, F., Hughes, G., Jones, R., Whelan, E., Williams, M. 19/02/1998 In: Physics Letters B. 420, 1-2, p. 196-204. 9 p.
Journal article
Searches for supersymmetry in the photon(s) plus missing energy channels at s**(1/2) = 161-GeV and 172-GeV
ALEPH Collaboration, T., Barate, R., Betteridge, A., Bowdery, C., Colrain, P., Finch, A., Foster, F., Hughes, G., Jones, R., Sloan, T., Whelan, E., Williams, M. 19/02/1998 In: Physics Letters B. 420, 1-2, p. 127-139. 13 p.
Journal article
Search for charged Higgs bosons in e+ e- collisions at center-of-mass energies from 130-GeV to 172-GeV
ALEPH Collaboration, T., Barate, R., Betteridge, A., Bowdery, C., Colrain, P., Finch, A., Foster, F., Hughes, G., Jones, R., Sloan, T., Williams, M. 5/02/1998 In: Physics Letters B. 418, 3-4, p. 419-429. 11 p.
Journal article
Searches for the neutral Higgs bosons of the MSSM in e+ e- collisions at center-of-mass energies of 181-GeV to 184-GeV
ALEPH Collaboration, T., Barate, R., Betteridge, A., Bowdery, C., Colrain, P., Finch, A., Foster, F., Hughes, G., Jones, R., Williams, M. 1998 In: Physics Letters B. 440, 3-4, p. 419-434. 16 p.
Journal article
Three prong tau decays with charged kaons
ALEPH Collaboration, T., Barate, R., Betteridge, A., Bowdery, C., Colrain, P., Finch, A., Foster, F., Hughes, G., Jones, R., Sloan, T., Williams, M. 1998 In: European Physical Journal C: Particles and Fields. 1, 1-2, p. 65-79. 15 p.
Journal article
# Illumination problem
The problem of determining the minimum number of directions of pencils of parallel rays, or number of sources, illuminating the whole boundary of a convex body. Let $K$ be a convex body in an $n$-dimensional linear space $\mathbf R ^ {n}$, let $\mathop{\rm bd} K$ and $\mathop{\rm int} K$ be respectively its boundary and its interior, and assume that $\mathop{\rm bd} K \neq \emptyset$. The best known illumination problems are the following.
1) Let $l$ be a certain direction in $\mathbf R ^ {n}$. A point $x \in \mathop{\rm bd} K$ is called illuminated from the outside by the direction $l$ if the straight line passing through $x$ parallel to $l$ passes through a certain point $y \in \mathop{\rm int} K$ and if the direction of the vector $\vec{xy}$ coincides with $l$. The minimum number $c ( K)$ of directions in the space $\mathbf R ^ {n}$ is sought that is sufficient to illuminate the whole set $\mathop{\rm bd} K$.
2) Let $z$ be a point of $\mathbf R ^ {n} \setminus K$. A point $x \in \mathop{\rm bd} K$ is called illuminated from the outside by the point $z$ if the straight line defined by the points $z$ and $x$ passes through a point $y \in \mathop{\rm int} K$ and if the vectors $\vec{xy}$ and $\vec{zy}$ have the same direction. The minimum number $c ^ \prime ( K)$ of points from $\mathbf R ^ {n} \setminus K$ is sought that is sufficient to illuminate the whole set $\mathop{\rm bd} K$.
3) Let $z$ be a point of $\mathop{\rm bd} K$. A point $x \in \mathop{\rm bd} K$ is illuminated from within by the point $z \neq x$ if the straight line defined by the points $z$ and $x$ passes through a point $y \in \mathop{\rm int} K$ and if the vectors $\vec{xy}$ and $\vec{zy}$ have opposite directions. The minimum number $p( K)$ of points from $\mathop{\rm bd} K$ is sought that is sufficient to illuminate the whole set $\mathop{\rm bd} K$ from within.
4) A system of points $Z = \{ {z } : {z \in \mathop{\rm bd} K } \}$ is said to be fixing for $K$ if it possesses the properties: a) $Z$ is sufficient to illuminate the whole set $\mathop{\rm bd} K$ from within; and b) $Z$ does not have any proper subset sufficient to illuminate the set $\mathop{\rm bd} K$ from within. The maximum number $p ^ \prime ( K)$ of points of a fixing system is sought for the body $K \subset \mathbf R ^ {n}$.
Problem 1) was proposed in connection with the Hadwiger hypothesis (see [1]): The minimum number $b( K)$ of bodies homothetic to a bounded $K$ with homothety coefficient $k$, $0< k< 1$, that suffice to cover $K$ satisfies the inequality $n+ 1 \leq b( K) \leq 2 ^ {n}$, whereby the value $b( K) = 2 ^ {n}$ characterizes a parallelepiped. For bounded $K \subset \mathbf R ^ {n}$, $c( K) = b( K)$. If $K$ is unbounded, then $c( K) \leq b( K)$, and there exist bodies such that $c( K) < b( K)$ or $c( K) = b( K) = \infty$ (see [1]).
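Both extreme values of this bound occur already in the plane. The following two standard examples are added here purely for illustration (they are not part of the original article):

```latex
% Planar examples (n = 2) for the bound  n+1 <= b(K) <= 2^n.
\begin{itemize}
  \item Disc $D$: a direction $l$ illuminates exactly an open half of the
        circle $\mathop{\rm bd} D$, so two directions always leave at
        least one boundary point dark, while three directions suffice;
        hence $c(D) = b(D) = 3 = n + 1$.
  \item Square $P$: a smaller homothet $kP + t$ with $0 < k < 1$ can
        contain at most one vertex of $P$, so at least four homothets are
        needed to cover $P$, and four clearly suffice; hence
        $c(P) = b(P) = 4 = 2^{n}$.
\end{itemize}
```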
Problem 2) was proposed in connection with problem 1). For $K \subset \mathbf R ^ {n}$ bounded, the equality $c( K) = c ^ \prime ( K)$ holds. If $K$ is not bounded, then $c ^ \prime ( K) \leq b( K)$ and $c( K) \leq c ^ \prime ( K)$. The number $c ^ \prime ( K)$ for any unbounded $K \subset \mathbf R ^ {3}$ takes one of the values 1, 2, 3, 4, $\infty$ (see [1]).
The solution of problem 3) takes the form: The number $p( K)$ is defined if and only if $K$ is not a cone. In this case,
$$2 \leq p( K) \leq n+ 1 ,$$
whereby $p( K) = n+ 1$ characterizes an $n$-dimensional simplex of the space $\mathbf R ^ {n}$ (see [1]).
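The two ends of this range can again be seen in the plane; the following examples are added for illustration only:

```latex
% Planar examples (n = 2) for the bound  2 <= p(K) <= n+1.
\begin{itemize}
  \item Disc $D$: any boundary point $z$ illuminates every $x \neq z$
        from within, because the open chord between $z$ and $x$ lies in
        $\mathop{\rm int} D$; since no point illuminates itself,
        $p(D) = 2$.
  \item Triangle $T$: a point $z$ on an edge illuminates, among the three
        vertices, at most the vertex opposite that edge (the line from
        $z$ to an endpoint of its own edge misses $\mathop{\rm int} T$),
        so two points cannot illuminate all three vertices; the three
        edge midpoints suffice, and $p(T) = 3 = n + 1$.
\end{itemize}
```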
For problem 4) (see [2]), it has been conjectured that if $K \subset \mathbf R ^ {n}$ is bounded, the inequality
$$p ^ \prime ( K) \leq 2 ^ {n}$$
holds.
Every illumination problem is closely linked to a special covering of the body $K$ (cf. Covering (of a set)) (see [1]).
#### References
[1] V.G. Boltyanskii, P.S. Soltan, "The combinatorial geometry of various classes of convex sets", Kishinev (1978) (In Russian)

[2] B. Grünbaum, "Fixing systems and inner illumination", Acta Math. Acad. Sci. Hung., 15 (1964) pp. 161–163
|
{}
|
# Small ball Gaussian probabilities with moving center
I would like to prove (if possible, otherwise find a counterexample for) the following lemma:
Let $$(X,\|\cdot \|_X)$$ be a separable Banach space. Additionally, we have a centred Gaussian measure $$\mu$$ on $$X$$, with Cameron--Martin space $$(E, |\cdot |_E)$$.
We consider a sequence $$x_n\in X$$ with the following properties: $$|x_n|_E\to \infty$$ (wlog nondecreasing) and there exists a $$C > 0$$ such that $$\inf_n \|x_n\|_X > C$$.
Let $$\delta_n$$ be any nonincreasing sequence $$\delta_n \to 0$$ for $$n\to \infty$$. We define the open ball wrt $$\|\cdot\|_X$$: $$B_\delta (z) = \{x\in X: \|x-z\|_X < \delta\}.$$
Then (this is what I want to prove) $$\lim_{n\to \infty} \frac{\mu(B_{\delta_n}(x_n))}{\mu(B_{\delta_n}(0))} = 0. \quad (*)$$
My thoughts so far:
For fixed $$x\in X$$, it is known (for example Bogachev, this is because the squared CM norm is the Onsager--Machlup functional, and it also follows from this thread: Probabilities of small balls with convergent center points under Gaussian measure) that $$\lim_{m\to \infty} \frac{\mu(B_{\delta_m}(x))}{\mu(B_{\delta_m}(0))} = e^{-|x|_E^2/2}.$$
Plugging a sequence $$x_n$$ as above in here:
$$\lim_{n\to\infty} \lim_{m\to \infty} \frac{\mu(B_{\delta_m}(x_n))}{\mu(B_{\delta_m}(0))} = \lim_{n\to \infty} e^{-|x_n|_E^2/2} = 0.$$
This means that (*) amounts to "applying both limits at once" instead of "sequentially". In my opinion, the missing piece here would be uniformity (with respect to $$n$$) of the limit
$$\lim_{m\to \infty} \frac{\mu(B_{\delta_m}(x_n))}{\mu(B_{\delta_m}(0))}.$$
In other words, this expression should converge uniformly over $$n$$ by using $$\inf_n \|x_n\|_X > C$$ (which means that the balls $$B_{\delta_m}(x_n)$$ are uniformly bounded away from the origin). But I am unable to prove this. Maybe I am wrong and the statement is wrong after all?
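As a sanity check of the fixed-center limit (my own illustration, not a proof of (*)): in the simplest case $X=\mathbf R$ with the standard Gaussian measure, where $|x|_E=|x|$, the ratio can be computed in closed form and compared with $e^{-x^2/2}$:

```python
import math

def Phi(t):
    """Standard normal CDF, written with the error function."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def ball_ratio(x, delta):
    """mu(B_delta(x)) / mu(B_delta(0)) for the standard Gaussian on R."""
    return (Phi(x + delta) - Phi(x - delta)) / (Phi(delta) - Phi(-delta))

# As delta -> 0 the ratio tends to exp(-x^2/2), the Onsager--Machlup factor.
for delta in (1.0, 0.1, 0.001):
    print(delta, ball_ratio(1.5, delta))
print("limit:", math.exp(-1.5 ** 2 / 2))
```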
This is not a full answer but maybe of interest. Given a sequence of $$x_n$$ with $$\|x_n\|_E\to \infty$$ we can always find a sequence of $$\delta_n\to 0$$ so that
$$\lim_{n\to\infty}\frac{\mu(B_{\delta_n}(x_n))}{\mu(B_{\delta_n}(0))}=0.$$
First note that by Cameron-Martin we have that
$$\mu(B_{\delta_n}(x_n))=\int_{B_{\delta_n}(0)}e^{x_n^\ast(\omega)-\frac12\|x_n\|^2_E}\mu(d\omega)=e^{-\frac12\|x_n\|^2_E}\int_{B_{\delta_n}(0)}e^{x_n^\ast(\omega)}\mu(d\omega),$$
where $$x_n^\ast\in X^\ast$$ is a continuous linear functional. As it is continuous we have that for each $$x_n^\ast$$ there is some $$L_n>0$$ so that
$$|x_n^\ast(\omega)|\leq L_n\|\omega\|_X$$
for all $$\omega\in X$$. Therefore on the set $$B_{\delta_n}(0)$$ we have that
$$|x_n^\ast(\omega)|\leq L_n\delta_n$$
and thus
$$\mu(B_{\delta_n}(x_n))\leq e^{-\frac12\|x_n\|^2_E+L_n\delta_n}\int_{B_{\delta_n}(0)}\mu(d\omega).$$
If $$\delta_n$$ is so that $$L_n\delta_n$$ doesn't diverge faster that $$\frac12\|x_n\|_E^2$$ then we have that
$$\lim_{n\to\infty}\frac{\mu(B_{\delta_n}(x_n))}{\mu(B_{\delta_n}(0))}\leq \lim_{n\to\infty} e^{-\frac12\|x_n\|^2_E+L_n\delta_n}=0.$$
In particular we can choose $$\delta_n=1/(nL_n)$$, e.g.
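The inequality above is easy to check numerically in the one-dimensional case (my own illustration): for the standard Gaussian on $\mathbf R$ one has $x^\ast(\omega)=x\omega$, so $L=|x|$ and $\|x\|_E=|x|$, and the bound reads $\mu(B_\delta(x))\le e^{-x^2/2+|x|\delta}\,\mu(B_\delta(0))$:

```python
import math

def Phi(t):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def cameron_martin_bound_holds(x, delta):
    """Check mu(B_delta(x)) <= exp(-x^2/2 + |x|*delta) * mu(B_delta(0)) on R."""
    lhs = Phi(x + delta) - Phi(x - delta)
    rhs = math.exp(-x * x / 2 + abs(x) * delta) * (Phi(delta) - Phi(-delta))
    return lhs <= rhs

# The bound is exact (it is the Cameron-Martin estimate itself), so it
# holds for every choice of centre x and radius delta:
assert all(cameron_martin_bound_holds(x, d)
           for x in (0.5, 2.0, 5.0)
           for d in (1.0, 0.1, 0.01))
```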
• Yes, that is something along the lines of what I also thought about. The problem is that so far I have found no way of controlling the norm $L_n$, unfortunately. Also, an element $x_n\in E$ (I believe) does not necessarily correspond to a dual element $x_n^\star$, only to something in the closure of $X^\star$ with respect to $L^2(X,\mu)$. Thank you anyway! Sep 12 '21 at 16:05
• To clarify further: In this setting I am stuck with a specific combination of $x_n$ and $\delta_n$ and I cannot "accelerate" $\delta_n$ further. Sep 12 '21 at 17:22
|
{}
|
Warning
This documents an unmaintained version of NetworkX. Please upgrade to a maintained version and see the current NetworkX documentation.
# Reciprocity¶
Algorithms to calculate reciprocity in a directed graph.
reciprocity(G[, nodes]): Compute the reciprocity in a directed graph.
overall_reciprocity(G): Compute the reciprocity for the whole graph.
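The quantity these functions compute can be sketched in a few lines of plain Python (an illustrative reimplementation of the definition, not the NetworkX source; self-loops are simply ignored here):

```python
def reciprocity(edges):
    """Fraction of directed edges (u, v) whose reverse (v, u) is also present."""
    edge_set = {(u, v) for (u, v) in edges if u != v}  # drop self-loops
    if not edge_set:
        return None  # reciprocity is undefined for an empty edge set
    reciprocated = sum(1 for (u, v) in edge_set if (v, u) in edge_set)
    return reciprocated / len(edge_set)

# Two of the three edges below form the mutual pair 1 <-> 2:
print(reciprocity([(1, 2), (2, 1), (2, 3)]))  # -> 0.666...
```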
|
{}
|
Geometrisation of Chaplygin’s reducing multiplier theorem
A V Bolsinov, A V Borisov, I S Mamaev

Dept. of Mathematical Sciences, Loughborough University, Loughborough, LE11 3TU, UK

Udmurt State University, Izhevsk, Russia

A. A. Blagonravov Mechanical Engineering Research Institute of RAS, Moscow, Russia
Abstract
We develop the reducing multiplier theory for a special class of nonholonomic dynamical systems and show that the non-linear Poisson brackets naturally obtained in the framework of this approach are all isomorphic to the Lie-Poisson -bracket. As two model examples, we consider the Chaplygin ball problem on the plane and the Veselova system. In particular, we obtain an integrable gyrostatic generalisation of the Veselova system.
AMS classification: 37J60, 37J35, 70E18, 53D17
Introduction
In [21] S. A. Chaplygin found a special class of systems with two degrees of freedom which can be reduced to a Lagrangian and thus Hamiltonian form by a suitable change of time $d\tau=N(q)\,dt$, where $N(q)$ is a reducing multiplier depending on the coordinates. As an illustration, he considered the problem of motion of the so-called Chaplygin sleigh, which can be integrated by the Hamilton–Jacobi method using the reducing multiplier method proposed by himself. Afterwards it was shown that a number of systems in nonholonomic mechanics can also be represented in the form of Chaplygin systems or generalised Chaplygin systems [5], and thereby are conformally Hamiltonian [8, 5, 18, 2, 13]. Thus, the reducing multiplier method is one of the most effective methods for explicit Hamiltonisation of dynamical systems.
From today’s perspective, the reducing multiplier theory is a method for finding one of the most important tensor invariants [10] of a dynamical system: the Poisson structure [5]. At the same time, the application of this method requires rewriting the equations of motion in local coordinates, which usually involves extremely cumbersome calculations. In this paper we develop the Chaplygin method for one class of systems frequently discussed in nonholonomic mechanics, which allows one to achieve their Hamiltonisation in a much simpler way. We shall not dwell here on the derivation of the equations of motion of nonholonomic mechanics; a fairly detailed treatment can be found in [4].
1 Generalised Chaplygin systems
We recall that according to [5], a generalised Chaplygin system is a mechanical system with two degrees of freedom whose equations of motion can be written as

$$\frac{d}{dt}\Bigl(\frac{\partial L}{\partial \dot q_1}\Bigr)-\frac{\partial L}{\partial q_1}=\dot q_2 S,\qquad \frac{d}{dt}\Bigl(\frac{\partial L}{\partial \dot q_2}\Bigr)-\frac{\partial L}{\partial q_2}=-\dot q_1 S,\qquad S=a_1(q)\,\dot q_1+a_2(q)\,\dot q_2+b(q), \quad (1)$$

where $L(q,\dot q)$ is a function of the generalised coordinates $q=(q_1,q_2)$ and velocities $\dot q=(\dot q_1,\dot q_2)$, which we may call the Lagrangian of the system. It is straightforward to verify that this system admits an energy integral of standard form

$$E=\sum_i \frac{\partial L}{\partial \dot q_i}\,\dot q_i-L. \quad (2)$$
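The gyroscopic right-hand sides of (1) do no work, which is why (2) is conserved; assuming $L$ has no explicit time dependence, the verification is one line:

```latex
\frac{dE}{dt}
 = \sum_i \frac{d}{dt}\Bigl(\frac{\partial L}{\partial \dot q_i}\Bigr)\dot q_i
 + \sum_i \frac{\partial L}{\partial \dot q_i}\,\ddot q_i
 - \frac{dL}{dt}
 = \sum_i \Bigl(\frac{d}{dt}\frac{\partial L}{\partial \dot q_i}
 - \frac{\partial L}{\partial q_i}\Bigr)\dot q_i
 = \dot q_2 S\,\dot q_1 - \dot q_1 S\,\dot q_2 = 0.
```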
Remark. A usual Chaplygin system can be obtained by a special choice of the function $S$ (a fortiori with $b(q)\equiv 0$) [21]. A somewhat different generalisation of the Chaplygin systems is proposed in [7, 9].
If there is an invariant measure with density depending only on the coordinates, the system can be represented in conformally Hamiltonian form [5] (for $b(q)\equiv 0$ this was shown by S. A. Chaplygin [21]). To show this, we use the Legendre transform for the initial system (1):

$$P_i=\frac{\partial L}{\partial \dot q_i},\qquad H=\sum_i P_i\dot q_i-L\Bigr|_{\dot q_i\to P_i}.$$

Then the equations of motion (1) can be recast as

$$\dot q_i=\frac{\partial H}{\partial P_i},\qquad \dot P_1=-\frac{\partial H}{\partial q_1}+\frac{\partial H}{\partial P_2}S,\qquad \dot P_2=-\frac{\partial H}{\partial q_2}-\frac{\partial H}{\partial P_1}S,$$
$$S=a_1(q)\,\dot q_1+a_2(q)\,\dot q_2+b(q)=A_1(q)P_1+A_2(q)P_2+B(q). \quad (3)$$
Here $H$ coincides with the energy integral (2) expressed in terms of the new variables.

Now assume that the system admits an invariant measure with density depending only on the coordinates:

$$\mu=N(q)\,dP_1\,dP_2\,dq_1\,dq_2. \quad (4)$$
In this case the Liouville equation for $N$ reduces to

$$\dot q_1\Bigl(\frac{1}{N}\frac{\partial N}{\partial q_1}-A_2(q)\Bigr)+\dot q_2\Bigl(\frac{1}{N}\frac{\partial N}{\partial q_2}+A_1(q)\Bigr)=0,$$

and since $N$ depends only on the coordinates, each of the brackets must vanish separately:

$$\frac{1}{N}\frac{\partial N}{\partial q_1}-A_2(q)=0,\qquad \frac{1}{N}\frac{\partial N}{\partial q_2}+A_1(q)=0. \quad (5)$$
Let us now make the change of variables

$$P_i=\frac{p_i}{N(q)},\qquad i=1,2.$$

Denote the Hamiltonian in the new variables as $\overline H(q,p)$. Then the following relations hold for the derivatives:

$$\frac{\partial H}{\partial P_i}=N\,\frac{\partial \overline H}{\partial p_i},\qquad \frac{\partial H}{\partial q_i}=\frac{\partial \overline H}{\partial q_i}+\frac{1}{N}\frac{\partial N}{\partial q_i}\Bigl(\frac{\partial \overline H}{\partial p_1}p_1+\frac{\partial \overline H}{\partial p_2}p_2\Bigr).$$
Substituting them into (3) and using (5), we obtain

$$\dot q_i=N(q)\frac{\partial \overline H}{\partial p_i},\qquad \dot p_1=N(q)\Bigl(-\frac{\partial \overline H}{\partial q_1}+N(q)B(q)\frac{\partial \overline H}{\partial p_2}\Bigr),\qquad \dot p_2=N(q)\Bigl(-\frac{\partial \overline H}{\partial q_2}-N(q)B(q)\frac{\partial \overline H}{\partial p_1}\Bigr).$$
Thus, the following result holds.
Theorem 1
If the system (3) admits an invariant measure of the form (4), it can be represented in conformally Hamiltonian form

$$\dot q_i=N(q)\{q_i,\overline H\},\qquad \dot p_i=N(q)\{p_i,\overline H\},\qquad i=1,2,$$

where the Poisson brackets are given by

$$\{q_i,p_j\}=\delta_{ij},\qquad \{q_i,q_j\}=0,\qquad \{p_1,p_2\}=N(q)B(q).$$
Proof. The proof is a straightforward verification of the Jacobi identity.
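Spelled out (this is the verification referred to): since the brackets $\{q_i,p_j\}$ and $\{q_i,q_j\}$ are constants, the only potentially non-trivial instances of the Jacobi identity are those containing $\{p_1,p_2\}=N(q)B(q)$, e.g.

```latex
\{q_i,\{p_1,p_2\}\}+\{p_1,\{p_2,q_i\}\}+\{p_2,\{q_i,p_1\}\}
 = \{q_i,\ N(q)B(q)\}+0+0
 = \frac{\partial (NB)}{\partial p_i} = 0,
```

because $NB$ is independent of the momenta; in all remaining triples every inner bracket is constant, so each term vanishes.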
2 The Chaplygin system on $T^*S^2$
We now consider a system which is described by means of two three-dimensional vectors $M$ and $\gamma$ and whose equations of motion are

$$\dot M=(M-S\gamma)\times\frac{\partial H}{\partial M}+\gamma\times\frac{\partial H}{\partial \gamma},\qquad \dot\gamma=\gamma\times\frac{\partial H}{\partial M}, \quad (6)$$

where the “Hamiltonian” $H(M,\gamma)$ is an arbitrary function (quadratic and non-degenerate in $M$) and $S$ is a function linear in $M$:

$$S=(K(\gamma),M)=K_1(\gamma)M_1+K_2(\gamma)M_2+K_3(\gamma)M_3.$$
It can be proved by a straightforward verification that the system (6) always admits three integrals of motion:

$$F_1=\gamma^2,\qquad F_2=(M,\gamma),\qquad F_3=H(M,\gamma).$$

Without loss of generality we can set $\gamma^2=1$, so that equations (6) govern the dynamical system on the family of four-dimensional manifolds

$$\mathcal M^4_c=\{\,M,\gamma\mid \gamma^2=1,\ (M,\gamma)=c\,\},$$

each of which is diffeomorphic to $T^*S^2$.
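The "straightforward verification" for $F_1$ and $F_2$ only uses the scalar triple product $(a\times b,c)=\det[a,b,c]$:

```latex
\dot F_1 = 2(\gamma,\dot\gamma)
         = 2\Bigl(\gamma,\ \gamma\times\frac{\partial H}{\partial M}\Bigr)=0,
\qquad
\dot F_2 = (\dot M,\gamma)+(M,\dot\gamma)
 = \Bigl((M-S\gamma)\times\frac{\partial H}{\partial M},\,\gamma\Bigr)
 + \Bigl(\gamma\times\frac{\partial H}{\partial \gamma},\,\gamma\Bigr)
 + \Bigl(M,\ \gamma\times\frac{\partial H}{\partial M}\Bigr)=0,
```

since the middle term is orthogonal to $\gamma$, the $S\gamma$ contribution drops out, and the two remaining triple products $\det[M,\partial H/\partial M,\gamma]$ and $\det[\gamma,\partial H/\partial M,M]$ cancel.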
If the entire set of variables is denoted as $x=(M,\gamma)$, then equations (6) can be represented in the skew-symmetric form

$$\dot x=P_0\frac{\partial H}{\partial x},\qquad P_0=\begin{pmatrix}\mathbf M&\Gamma\\ \Gamma&0\end{pmatrix}-S(x)\begin{pmatrix}\Gamma&0\\ 0&0\end{pmatrix},$$

where $\mathbf M$ and $\Gamma$ denote the skew-symmetric $3\times 3$ matrices associated with the vectors $M$ and $\gamma$. Here the first term is a standard Poisson structure corresponding to the Lie algebra $e(3)$. Moreover, $P_0$ additionally satisfies the equations

$$P_0\frac{\partial F_1}{\partial x}=0,\qquad P_0\frac{\partial F_2}{\partial x}=0.$$
As above, assume that (6) admits an invariant measure with density depending only on $\gamma$:

$$\mu=\rho(\gamma)\,dM\,d\gamma. \quad (7)$$

In this case the Liouville equation for the vector field $V$ defined by the system (6) can be represented as

$$\operatorname{div}(\rho V)=\Bigl(\frac{\partial H}{\partial M},\ \rho\,\gamma\times K-\gamma\times\frac{\partial \rho}{\partial \gamma}\Bigr)=0.$$

Hence, owing to non-degeneracy of the Hamiltonian in $M$, we obtain the vector equation

$$\Bigl(\frac{1}{\rho}\frac{\partial \rho}{\partial \gamma}-K\Bigr)\times\gamma=0. \quad (8)$$
Using this relation, we can prove by direct computation
Proposition 1
If $K(\gamma)$ satisfies equation (8), then the tensor $P=\rho^{-1}P_0$ satisfies the Jacobi identity and therefore is a Poisson structure on $\mathbf R^6(M,\gamma)$.
Thus, we finally obtain
Theorem 2
If the system (6) admits an invariant measure (7) with density depending only on $\gamma$, it can be represented in the conformally Hamiltonian form

$$\dot x=\rho(\gamma)\,P(x)\frac{\partial H}{\partial x},$$

where $P=\rho^{-1}P_0$ is a Poisson structure of rank 4 with the Casimir functions

$$F_1=\gamma^2,\qquad F_2=(M,\gamma).$$
Equation (8) can be solved for the vector $K$ as follows:

$$K=\rho f(\gamma)\,\gamma+\frac{1}{\rho}\frac{\partial \rho}{\partial \gamma},$$

where $f(\gamma)$ is an arbitrary function. Thus, we have naturally obtained a special class of Poisson structures on the space $\mathbf R^6(M,\gamma)$, which can be written as

$$P=\frac{1}{\rho}\begin{pmatrix}\mathbf M&\Gamma\\ \Gamma&0\end{pmatrix}-\Bigl(\frac{1}{\rho^2}\frac{\partial \rho}{\partial \gamma}+f(\gamma)\,\gamma,\ M\Bigr)\begin{pmatrix}\Gamma&0\\ 0&0\end{pmatrix}. \quad (9)$$
Remark. If we add a term of the form

$$\Phi(\gamma)\begin{pmatrix}\Gamma&0\\ 0&0\end{pmatrix}$$

to the bracket (9), where $\Phi(\gamma)$ is an arbitrary function, then the Jacobi identity will still hold. We use such a modification of (9) below, see (20).
We give two examples.
The problem of the Chaplygin ball on a plane [22] describes the rolling of a balanced, dynamically asymmetric ball without slipping on a horizontal plane.

In appropriate variables the equations of motion can be represented in the form (6), see [14, 3, 4], with

$$H_1=\frac{1}{2}\Bigl((AM,M)+\frac{(AM,\gamma)^2}{D^{-1}-(\gamma,A\gamma)}\Bigr)+U_1(\gamma),\qquad S=\frac{(AM,\gamma)}{D^{-1}-(A\gamma,\gamma)}, \quad (10)$$

where $D$ is a constant and $A$ is a constant diagonal matrix. The ball’s angular momentum relative to the point of contact is expressed in terms of the physical variable $\omega$, the angular velocity, by the formula

$$M=A^{-1}\omega-D(\omega,\gamma)\,\gamma,\qquad \omega=A(M+S\gamma), \quad (11)$$

and the integral $F_2=(M,\gamma)$ can take arbitrary values.

The density of the invariant measure (7) of the system and the function $f$ for the bracket (9) have the form

$$\rho=\frac{1}{\sqrt{D^{-1}-(\gamma,A\gamma)}},\qquad f(\gamma)=0.$$
The Veselova system [15, 16, 19] governs the dynamics of a body with a fixed point subject to the nonholonomic constraint $(\omega,\gamma)=b$, where $\omega$ is the angular velocity of the body and $\gamma$ is a unit vector fixed in space.

In the body-fixed frame, the equations of motion can be represented in the form (6), see [6], with

$$H_2=\frac{1}{2}\Bigl((M,\hat AM)-\frac{((\hat A-\mathbf E)M,\gamma)^2}{(\hat A\gamma,\gamma)}\Bigr)+U_2(\gamma),\qquad S=-\frac{((\hat A-\mathbf E)M,\gamma)}{(\hat A\gamma,\gamma)}, \quad (12)$$

where $\hat A$ is the constant matrix inverse to the tensor of inertia, and the angular momentum $M$ is expressed in terms of the angular velocity $\omega$ of the body as follows:

$$M=\hat A^{-1}\omega+\bigl((\hat A^{-1}-\mathbf E)\omega,\gamma\bigr)\gamma,\qquad \omega=\hat A(M-S\gamma), \quad (13)$$

where the area integral coincides with the constraint equation:

$$(M,\gamma)=(\omega,\gamma)=b.$$

The density of the invariant measure (7) and the function $f$ coincide in this case:

$$\rho(\gamma)=f(\gamma)=\frac{1}{\sqrt{(\gamma,\hat A\gamma)}}.$$
A Lagrangian representation for the Veselova system after a change of time was obtained in [8], the corresponding conformally Hamiltonian representation in [6], and another conformally Hamiltonian representation was found in [5].
If the potential for these systems is not zero, then, as a rule, the corresponding equations of motion turn out to be nonintegrable so that this Hamiltonisation method is essentially different from that used in [1], where the existence of a complete set of first integrals was required.
We also note that if one makes a change of the parameters and the potential in the Chaplygin ball problem:

$$A=D^{-1}(\mathbf E-\hat A),\qquad U_1(\gamma)=D^{-1}U_2(\gamma),$$

then we find that the Hamiltonian (10) becomes

$$H_1=\frac{D^{-1}}{2}(M,M)-D^{-1}H_2, \quad (14)$$

and the Poisson structure of the Chaplygin ball is transformed into a Poisson structure of the Veselova system. Consequently, these two systems are defined on the same Poisson manifold [20], and their Hamiltonians are related by (14). If $U_1=0$, $U_2=0$, then the function $(M,M)$ is an integral for both systems, which implies that their trajectories turn out to be rectilinear windings (transverse to each other) on the same invariant tori [19].
3 Reduction to the e(3)-bracket
Introducing the new notation $g=\rho^{-1}$, we can rewrite the Poisson structure (9) in a shorter form that is more convenient for further analysis:

$$P=g\begin{pmatrix}\mathbf M&\Gamma\\ \Gamma&0\end{pmatrix}+\Bigl(\frac{\partial g}{\partial \gamma}-f\,\gamma,\ M\Bigr)\begin{pmatrix}\Gamma&0\\ 0&0\end{pmatrix}. \quad (15)$$

Let us examine the family of such Poisson structures in more detail. First of all, we see that this family is parametrised by two arbitrary functions $g(\gamma)$ and $f(\gamma)$, and we will denote the corresponding Poisson structures by $P_{g,f}$. Notice that all $P_{g,f}$ possess the same Casimir functions $F_1=\gamma^2$ and $F_2=(M,\gamma)$.
For simplicity we confine our attention to the physical case $\gamma^2=1$, that is, we restrict all the objects to the five-dimensional (Poisson) manifold $\mathcal M^5=\{\gamma^2=1\}$.
One of our goals is to find out to what canonical form these Poisson structures can be reduced. First of all, we note that the symplectic leaves of $P_{g,f}$ are all diffeomorphic to the cotangent bundle $T^*S^2$ of the sphere. From the explicit form (15) of the Poisson structure it may be inferred that the symplectic structure on each leaf will be the sum of the canonical form and a magnetic term, that is, a closed 2-form on the sphere. By the Moser theorem [12], such forms are parametrised up to a symplectomorphism by one single number, namely the integral of the form over the sphere. Thus, for each Poisson structure we have a one-parameter family of symplectic leaves whose type is also defined by exactly one parameter. This observation leads us to the conjecture that by “redistributing”, if necessary, the symplectic leaves and then by applying a certain symplectomorphism to each single symplectic leaf, we can transform any Poisson structure $P_{g,f}$ to any other $P_{\tilde g,\tilde f}$.
Remark. On the zero level $\{(M,\gamma)=0\}$, the Poisson structure (15) is reduced to the canonical $e(3)$-bracket by a very simple transformation [5]:

$$(M,\gamma)\mapsto\bigl(g^{-1}(\gamma)\,M,\ \gamma\bigr).$$

Thus, for the Chaplygin ball we have:

$$(M,\gamma)\mapsto\Bigl(\bigl(D^{-1}-(\gamma,A\gamma)\bigr)^{-1/2}M,\ \gamma\Bigr),$$

and for the Veselova system:

$$(M,\gamma)\mapsto\Bigl((\gamma,\hat A\gamma)^{-1/2}M,\ \gamma\Bigr).$$
We start by describing a class of natural transformations which preserve the form of $P_{g,f}$, but change the parameters $g$ and $f$. Consider the transformations of the form

$$(M,\gamma)\mapsto(\widetilde M,\gamma),\qquad \widetilde M=A(\gamma)M, \quad (16)$$

where $A(\gamma)$ is a linear operator in $\mathbf R^3$ whose components depend on $\gamma$.
Proposition 2
For each point $\gamma$, consider the orthogonal decomposition $M=M'+M''$, where $M''$ is the projection of $M$ onto the vector $\gamma$, and $M'$ is the projection of $M$ onto the plane perpendicular to $\gamma$, i.e., the tangent plane $T_\gamma S^2$. Let

$$\widetilde M=\alpha(\gamma)\,M'+c\,M''+M''\times h(\gamma),$$

where $c$ is a constant, $\alpha(\gamma)$ is an arbitrary scalar function, and $h(\gamma)$ is an arbitrary vector function of $\gamma$. Then the transformation (16) sends $P_{g,f}$ to a Poisson structure of the same kind with parameters

$$\widetilde g=\alpha g,\qquad \widetilde f=\frac{\alpha^2}{c}\,f+\Bigl(\frac{\alpha}{c}-1\Bigr)\Bigl(\widetilde g-\Bigl(\gamma,\frac{\partial \widetilde g}{\partial \gamma}\Bigr)\Bigr)+\frac{1}{c}\Bigl(\gamma,\ \widetilde g\,\frac{\partial \alpha}{\partial \gamma}+\widetilde g^2\operatorname{curl}\Bigl(\frac{h}{\widetilde g}\Bigr)\Bigr). \quad (17)$$
The proof of Proposition 2 is a straightforward verification and we confine ourselves to commenting on the geometric meaning of the transformation used in this proposition. Consider the orthonormal basis $e_1,e_2,e_3$ related to the vector $\gamma$ in the space $\mathbf R^3$. Namely, $e_1$ and $e_2$ are two orthonormal vectors lying in a tangent plane to the unit sphere at the point $\gamma$, and $e_3$ is the normal vector to this sphere at the same point, i.e., $e_3=\gamma$. In this basis the matrix of $A(\gamma)$ has the form

$$A=\begin{pmatrix}\alpha&0&a\\ 0&\alpha&b\\ 0&0&c\end{pmatrix},$$

where $\alpha$, $a$ and $b$ depend on $\gamma$, and $c$ is constant.

This is exactly the general form of the transformation which satisfies our requirements. Indeed, the Casimir function $F_2=(M,\gamma)$ should be mapped to itself with possible multiplication by some constant $c$. Therefore, the plane defined by the equation $(M,\gamma)=0$ is sent to itself, and in the orthogonal direction the transformation is a dilatation with ratio $c$ independent of $\gamma$. These conditions completely define the last row of the matrix $A$.
Furthermore, the bracket relations between $M$ and $\gamma$ can be formally rewritten in vector form as $\{M,\gamma\}=g\,M\times\gamma$. Since their form must remain the same, we obtain the condition

$$g\,\bigl(A(\gamma)M\bigr)\times\gamma=\widetilde g\,M\times\gamma.$$

This means that on the tangent plane the operator $A(\gamma)$ must act as multiplication by some number $\alpha$ (depending on $\gamma$). There are no restrictions on the elements $a$ and $b$; they are given by the vector function $h(\gamma)$ (this function itself has 3 components, but only two of them are significant, since nothing is changed by adding to $h$ any vector proportional to $\gamma$).
Notice that the set of transformations described in Proposition 2 forms a group (which is, of course, infinite-dimensional, since its parameters contain arbitrary functions $\alpha(\gamma)$ and $h(\gamma)$). It is easily verified that performing successively two transformations with parameters $(\alpha_1,c_1,h_1)$ and $(\alpha_2,c_2,h_2)$ is equivalent to the transformation with parameters $(\alpha_1\alpha_2,\ c_1c_2,\ h_1\alpha_2+h_2c_1)$. The above-mentioned rule specifies a group binary operation, which simply copies the matrix multiplication:

$$\begin{pmatrix}\alpha_2&h_2\\ 0&c_2\end{pmatrix}\begin{pmatrix}\alpha_1&h_1\\ 0&c_1\end{pmatrix}=\begin{pmatrix}\alpha_1\alpha_2&h_1\alpha_2+h_2c_1\\ 0&c_1c_2\end{pmatrix}.$$

This group acts in a natural way on the family of Poisson structures $P_{g,f}$ or, which is the same, on the space of parameters $(g,f)$. The above relations (17) can be understood as explicit formulae for this action. If the action is formally denoted by $\Psi_{(\alpha,c,h)}$, then, as is easily verified by successively performing two transformations, it satisfies the standard action rule. Namely, if

$$(\widetilde g,\widetilde f)=\Psi_{(\alpha_1,c_1,h_1)}(g,f)\quad\text{and}\quad(\widetilde{\widetilde g},\widetilde{\widetilde f})=\Psi_{(\alpha_2,c_2,h_2)}(\widetilde g,\widetilde f),$$

then

$$(\widetilde{\widetilde g},\widetilde{\widetilde f})=\Psi_{(\alpha_1\alpha_2,\ c_1c_2,\ h_1\alpha_2+h_2c_1)}(g,f).$$
For an explicit verification of this fact it is convenient to rewrite (17) as

$$\widetilde g=\alpha g,\qquad \widetilde f=\frac{\alpha^2}{c}\Bigl(f+g-\Bigl(\gamma,\frac{\partial g}{\partial \gamma}\Bigr)\Bigr)-\Bigl(\widetilde g-\Bigl(\gamma,\frac{\partial \widetilde g}{\partial \gamma}\Bigr)\Bigr)+\frac{\widetilde g^2}{c}\Bigl(\gamma,\operatorname{curl}\Bigl(\frac{h}{\widetilde g}\Bigr)\Bigr).$$
Now the verification presents no difficulty.
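The composition law itself can also be checked mechanically (a small numerical sketch of my own, with scalar parameters standing in for the functions $\alpha$ and $h$):

```python
def compose(t2, t1):
    """Parameter triple (alpha, c, h) of 'first t1, then t2',
    per the stated group law (a1*a2, c1*c2, h1*a2 + h2*c1)."""
    a1, c1, h1 = t1
    a2, c2, h2 = t2
    return (a1 * a2, c1 * c2, h1 * a2 + h2 * c1)

def as_matrix(t):
    """Upper-triangular 2x2 matrix ((alpha, h), (0, c)) of a triple."""
    a, c, h = t
    return ((a, h), (0.0, c))

def matmul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2))
                 for i in range(2))

t1, t2 = (2.0, 3.0, 0.5), (0.5, 4.0, -1.0)
# The group operation copies matrix multiplication:
assert as_matrix(compose(t2, t1)) == matmul(as_matrix(t2), as_matrix(t1))
```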
From the viewpoint of group theory it would now be natural to ask the question: what are the orbits of this action? In other words, we want to understand which Poisson structures may be transferred to each other by the above-mentioned transformations. The answer turns out to be very simple: the action described above has one single orbit, i.e., all Poisson structures in this family are equivalent to each other. In particular, the following theorem holds:
Theorem 3
Every Poisson structure of the form (15) on the level $\{\gamma^2=1\}$ is isomorphic to the standard Lie–Poisson structure related to the Lie algebra $e(3)$.
Proof. It is sufficient to choose parameters in (17) in such a way that $\widetilde g=1$ and $\widetilde f=0$. The first condition immediately defines the function $\alpha$, namely, $\alpha=g^{-1}$. After that the second condition reduces to

$$\frac{\alpha^2}{c}\,f+\Bigl(\frac{\alpha}{c}-1\Bigr)+\frac{1}{c}\Bigl(\gamma,\frac{\partial \alpha}{\partial \gamma}\Bigr)+\frac{1}{c}\,(\gamma,\operatorname{curl}h)=0$$

or, equivalently,

$$\alpha^2 f+\alpha+\Bigl(\gamma,\frac{\partial \alpha}{\partial \gamma}\Bigr)-c+(\gamma,\operatorname{curl}h)=0,$$

where the constant $c$ and the vector function $h$ are the unknowns. This equation can now be rewritten as

$$(\gamma,\operatorname{curl}h)=F(\gamma)+c, \quad (18)$$

where $F(\gamma)=-\bigl(\alpha^2 f+\alpha+(\gamma,\partial \alpha/\partial \gamma)\bigr)$ is a given function. Notice that (18) has to be fulfilled only on the unit sphere $\{\gamma^2=1\}$. The conditions for solving the equations of this form are well known. In the differential-geometric sense this equation simply means that we are looking for an antiderivative of the 2-form $(F+c)\,\sigma$ on the unit sphere, where $\sigma$ is the standard area form. Such a 1-form can be found if and only if $\int_{S^2}(F+c)\,\sigma=0$. This condition can always be achieved by choosing a constant $c$.
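In particular, the constant is fixed uniquely (assuming the normalisation $\int_{S^2}\sigma=4\pi$ for the unit sphere):

```latex
\int_{S^2}(F+c)\,\sigma=0
\quad\Longleftrightarrow\quad
c=-\frac{1}{4\pi}\int_{S^2}F\,\sigma .
```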
Remark. In a similar manner, the bracket can be reduced to the standard form on the whole space $\mathbf R^6(M,\gamma)$, i.e., without the additional restriction $\gamma^2=1$. To that end, we have to extend the class of transformations by assuming that $c$ depends on $\gamma^2$. Since $\gamma^2$ is a Casimir function, $c$ may be treated, as before, as a constant and hence the formulae do not essentially change. The conditions for solvability of the equation remain the same, but now they have to be verified on the spheres of all radii. As before, we are able to ensure that they are satisfied, since the necessary constants can now be chosen depending on the square of the radius $\gamma^2$.
For the Chaplygin ball problem, the function $F$ in Eq. (18) has the form

$$F(\gamma)=-\frac{D^{-1}}{\bigl(D^{-1}-(\gamma,A\gamma)\bigr)^{3/2}}.$$

The solutions of Eq. (18) for the unknowns $c$ and $h$ can be expressed in this case in terms of complete and incomplete elliptic integrals. Thus, although theoretically it is not difficult to prove reducibility of $P_{g,f}$ to the $e(3)$-bracket, in practice the resulting transformation can turn out to be extremely unwieldy and non-algebraic.
4 Generalisation to the case of a gyrostat
In this section we consider the dynamical systems obtained by adding a rotor with constant gyroscopic momentum to the Chaplygin ball and a rigid body in the Veselova problem. A detailed derivation of the equations of motion for compound bodies can be found in books [23, 24, 25].
The new equations with gyrostatic terms take the following form
$$\dot M=(M+k-S\gamma)\times\frac{\partial H}{\partial M}+\gamma\times\frac{\partial H}{\partial \gamma},\qquad \dot\gamma=\gamma\times\frac{\partial H}{\partial M}, \quad (19)$$

where the new “Hamiltonian” $H$ and function $S$ may now depend on the gyrostatic momentum $k$ as a parameter, but preserve their original structure as in Section 2. In particular,

$$S=\frac{1}{g}\Bigl(-\frac{\partial g}{\partial \gamma}+f(\gamma)\,\gamma,\ M\Bigr)+\frac{1}{g}\,\Phi(\gamma) \quad (20)$$

for some smooth functions $f(\gamma)$ and $\Phi(\gamma)$.
A direct calculation shows that this system remains conformally Hamiltonian. Namely, (19) can be rewritten as

$$\dot x=g^{-1}P_k(x)\frac{\partial H}{\partial x},$$

with the Poisson structure of a more general form

$$P_k(x)=g\begin{pmatrix}\mathbf M_k&\Gamma\\ \Gamma&0\end{pmatrix}-gS\begin{pmatrix}\Gamma&0\\ 0&0\end{pmatrix},\qquad \mathbf M_k=\begin{pmatrix}0&-M_3-k_3&M_2+k_2\\ M_3+k_3&0&-M_1-k_1\\ -M_2-k_2&M_1+k_1&0\end{pmatrix}, \quad (21)$$

where $x=(M,\gamma)$ is a complete set of variables.

The Jacobi identity for $P_k$ is fulfilled, and the Casimir functions are

$$F_1=\gamma^2,\qquad F_2=(M+k,\gamma).$$
The new expressions for $M$ and $S$ presented below can be obtained by using the methods developed in [23, 24, 25]. We omit this computation.

For the Chaplygin ball, the vector $M$ is still expressed in terms of the angular velocity $\omega$ by means of (11), and the Hamiltonian (10) also remains the same. For the bracket (21) we set

$$g=\sqrt{D^{-1}-(\gamma,A\gamma)},\qquad f(\gamma)=0,\qquad \Phi(\gamma)=0.$$

In other words, all the ingredients remain unchanged except for the additional terms involving $k$ in the bracket (21). Thus, to obtain the gyrostatic generalisation of the Chaplygin ball we simply need to replace $\mathbf M$ by $\mathbf M_k$ in (15).
For the Veselova system, when a gyrostat is added, the situation becomes less trivial and the relations (13) as well as $H$ and $S$ given by (12) need to be modified. As before, we shall assume that $\omega=\hat A(M-S\gamma)$, where the coefficient $S$ can be found from the condition

$$(M+k,\gamma)=(\omega,\gamma).$$

We obtain

$$M=\hat A^{-1}\omega-\bigl((\hat A^{-1}-\mathbf E)\omega+k,\ \gamma\bigr)\gamma,\qquad \omega=\hat A(M-S\gamma),\qquad S=\frac{(\hat AM-M-k,\gamma)}{(\hat A\gamma,\gamma)}.$$

Here $S$ coincides with the corresponding function in the bracket (21) provided that $g$ is given as

$$g=\sqrt{(\hat A\gamma,\gamma)}.$$

In this case the Hamiltonian reads

$$H=\frac{1}{2}\Bigl((\hat AM,M)+\frac{(\hat AM-M-k,\gamma)^2}{(\hat A\gamma,\gamma)}\Bigr).$$

It turns out that this modified Veselova system with gyrostatic terms still admits one additional integral of the form

$$F_3=(M+k,\ M+k).$$
Thus, this new system is conformally Hamiltonian and integrable. Its dynamics can be further analysed by the standard methods.
Conclusion and discussion
We have obtained an invariant (independent of the choice of local coordinates) conformally Hamiltonian representation of generalised Chaplygin systems on $T^*S^2$ using a degenerate Poisson structure of rank 4 in the six-dimensional space $\mathbf R^6(M,\gamma)$, and have shown that this structure is a deformation of the standard Lie–Poisson bracket corresponding to the Lie algebra $e(3)$.
As applications, we have considered two nonholonomic systems: the Chaplygin ball and Veselova problem. In this approach (after a suitable change of parameters) they turn out to be integrable conformally Hamiltonian systems on the same Poisson manifold with the same set of first integrals. The above conformally Hamiltonian representation has been generalised to the case of adding a gyrostat (although in this case there is no analogy between these systems any more).
To the best of our knowledge, the conformally Hamiltonian description of the Veselova system with $b\neq 0$ and the integrability of its gyrostatic generalisation were unknown before and are presented in this paper for the first time.
This paper poses a number of questions related primarily to nonholonomic systems.
1. Can the above approach be used to obtain a conformally Hamiltonian description for an integrable generalisation of the Chaplygin ball rolling on a spherical base (BMF-system) found in [2, 13, 20]?
2. Poisson brackets of a quite similar type are encountered in examples, but with a Casimir function linear in $M$ different from $F_2=(M,\gamma)$ [1]. It would be interesting to find out whether such brackets can be reduced to the standard Lie–Poisson bracket on $e(3)$ using the technique described above.
3. Since the Chaplygin ball problem without potential (i.e., $U_1=0$) is integrable on the whole space $\mathbf R^6(M,\gamma)$, Theorem 3 allows us to obtain a globally integrable Hamiltonian system on $e(3)^*$, i.e., for all values of the area constant $F_2=(M,\gamma)$. As is well known, this circumstance may be interpreted as integrability of a natural system with a magnetic field whose additional integral is quadratic in momenta. The issue of description of all such systems was actively discussed in the literature. It would be interesting to interpret the system thus obtained in the context of recent classification results by V. Marikhin and V. Sokolov [11, 17].
Acknowledgments
The work of Alexey V. Borisov was carried out within the framework of the state assignment to the Udmurt State University “Regular and Chaotic Dynamics”. The work of Ivan S. Mamaev was supported by the RFBR grant 13-01-12462-ofi m.
References
• [1] Bizyaev I A and Tsiganov A V 2013 On the Routh sphere problem J. Phys. A 46 1–11
• [2] Borisov A V, Fedorov Yu N and Mamaev I S 2008 Chaplygin ball over a fixed sphere: an explicit integration Regul. Chaotic Dyn. 13 557–571
• [3] Borisov A V and Mamaev I S 2001 Chaplygin’s Ball Rolling Problem Is Hamiltonian Mathematical Notes 70 720–723
• [4] Borisov A V and Mamaev I S 2002 The rolling motion of a rigid body on a plane and a sphere. Hierarchy of dynamics Regul. Chaotic Dyn. 7 177–200
• [5] Borisov A V and Mamaev I S 2008 Conservation Laws, Hierarchy of Dynamics and Explicit Integration of Nonholonomic Systems Regul. Chaotic Dyn. 13 443–490
• [6] Borisov A V, Mamaev I S and Bizyaev I A 2013 The Hierarchy of Dynamics of a Rigid Body Rolling without Slipping and Spinning on a Plane and a Sphere Regul. Chaotic Dyn. 18 277–328
• [7] Cantrijn F, de León M and de Diego D 2002 On the geometry of generalized Chaplygin systems Math. Proc. Camb. Phil. Soc. 132 323–351
• [8] Fedorov Yu N and Jovanović B 2004 Nonholonomic LR-Systems as Generalized Chaplygin Systems with an Invariant Measure and Flows on Homogeneous Spaces J. of Nonlinear Science 14 341–381
• [9] Koiller J 1992 Reduction of Some Classical Non-holonomic Systems with Symmetry Arch. Rational. Mech. Anal. 118 113–148
• [10] Kozlov V V 2002 On the Integration Theory of Equations of Nonholonomic Mechanics Regul. Chaotic Dyn. 7 161–176
• [11] Marikhin V G and Sokolov V V 2005 Separation of variables on a non-hyperelliptic curve Regul. Chaotic Dyn. 10 59–70
• [12] Moser J 1965 On the volume elements on a manifold Trans. Amer. Math. Soc. 120 286–294
• [13] Borisov A V, Mamaev I S and Marikhin V G 2008 Explicit integration of one problem in nonholonomic mechanics Doklady Physics 53 525–528
• [14] Borisov A V, Tsygvintsev A V 1996 Kowalewski exponents and integrable systems of classic dynamics. I, II Regulyarnaya i khaoticheskaya dinamika (Regul. Chaotic Dyn.) 1 15–37
• [15] Veselova L E 1986 New cases of integrability of equations of motion of rigid body with nonholonomic constraint Geometry, differential equations and mechanics (Moscow: MSU) 64–68
• [16] Veselov A P and Veselova L E 1988 Integrable nonholonomic systems on Lie groups Mat. Zametki 44 604–619
• [17] Marikhin V G and Sokolov V V 2006 Pairs of commuting Hamiltonians quadratic in the momenta Teoret. Mat. Fiz. 149 147–160
• [18] Moshchuk N K 1987 Reducing the equations of motion of certain non-holonomic chaplygin systems to lagrangian and hamiltonian form J. Appl. Math. Mech. 51 172–177
• [19] Fedorov Y N 1989 Two integrable nonholonomic systems in classical dynamics [Russian] Vestnik Moskov. Univ. Ser. I Mat. Mekh. 4 38–41
• [20] Tsiganov A V 2012 On the nonholonomic Veselova and Chaplygin systems Rus. J. Nonlin. Dyn 8 541–547
• [21] Chaplygin S A 1911 On theory of motion of nonholonomic systems. The reducing multiplier theorem Math. Collection 28 303–314
• [22] Chaplygin S A 2002 On a ball’s rolling on a horizontal plane Regul. Chaotic Dyn. 7 131–148
• [23] Borisov A V, Mamaev I S 2005 Rigid body dynamics — Hamiltonian methods, integrability, chaos (Moscow–Izhevsk: Institute of Computer Science) p 576
• [24] Wittenburg J 1977 Dynamics of System of Rigid Bodies (Stuttgart: B G Teubner) p 224
• [25] Levi-Civita T, Amaldi U 1951 A Course of Theoretical Mechanics vol 2 part 2 [Russian translation] (Moscow–Leningrad: Izdat. Inostr. Lit.)
|
{}
|
The River Press (Fort Benton, Mont.) 1880-current, September 25, 1889, Image 1
Vol. [illegible]. THE RIVER PRESS. Fort Benton, Montana, Wednesday, September 25, 1889. No. 48.

TO DEMOCRATS.

An Old Line Democrat Asks a Few Questions and Makes an Appeal.

Editor RIVER PRESS: I desire to say a few words to the democracy of Choteau county if you will kindly give me a little space in your paper. The democratic party of Choteau county has not been as harmonious as it might have been. With a large majority at hand it has permitted several republican candidates to walk away with offices which should have been filled by democrats. We all know how this came about. I shall not review the causes which led to it. I simply state the fact. And now in this present campaign we see, and I am sorry to record it, a disposition among a few to still sow the seeds of discord and dissension in the party.

Now I ask in all candor: Can we afford to do this? Can we as democrats afford to approach statehood with dissensions in our ranks? Are we not jeopardizing the success of the party by permitting their existence? Have we not had all the bickerings over petty offices that we should have? Is it not time we should quit it and pull together as one man? These are questions that come right home to every democrat. We all desire the success of the democratic party, yet we know as well as we know anything that no party can succeed without unity of action. The loss of one man upon our ticket one year sometimes leads to the loss of two men in the next election. These repeated losses finally demoralize a party and the enemy capture it. These are plain, cold, undeniable facts.

Now let me briefly present the situation. We are on the eve of the most important election, from a party standpoint, that has ever taken place in Montana. Its political status for years to come will be fixed one week from next Tuesday. To succeed we must work together. The slightest diversion of our strength in favor of a single republican candidate will endanger the whole ticket. Argument is not needed to prove this. The success of the state ticket may depend upon the vote of Choteau county. It may depend upon the vote of one of its precincts. Who can tell?

Now in view of these facts I appeal to every democrat to work for his whole ticket and nothing but the ticket, and to vote the straight ticket without the omission of a name. Do not scatter your strength. Throw all personal likes and dislikes to the wind. Put your personal grievances aside. Let them not come between you and your duty as a democrat. Your vote should emphasize your devotion to the time-honored principles of the democratic party. By voting for a single candidate upon the republican ticket you stultify yourself and give aid and comfort to the enemy. Let us draw the line at this; act together, vote as one man, for in union and union alone is there strength.

DEMOCRAT.

Fort Benton, September 20, 1889.

SMITH AT CHOTEAU.

"Sheep Herder" Gives an Interview With the Republican Senatorial Aspirant -- Smith Still Full of Promises.

[The RIVER PRESS has received the following communication with the request to "put it into your paper jest as I tell it to you without eny domed hifalutin fixins." Therefore we publish it verbatim et literatim. -- ED]

CHOTEAU, M. Ty., Sept. 16.

To THE EDITER. -- I am writing to you to know what W. G. Smith is running for. Is it Senator at Washington or at Helena. I gess it is Washington. He speaks that big. He was up at Chotou the other day and he was in Jimmy Gibson's saloon and he said to me hold on when I was passing the door. And he came out and gave me two appels which was good appels but not as good as a drink or a segar but he acts as if he was running for Sunday School sopertenant in stead of senator. And when he came out I says howdy and he says howdy and I askt him how many registered at Benton and he says pretty dry wether aint it. And I said yes it duz make a feller dry and he gave me another appel. And I askt him did you hear of eny railroad news and he said no and I said I here theyre going to build a [illegible] town down below here where the Galt road cruses the Teton and he said all you do is just elect me and Power and we'll fix it and I said how and he said he would vote agenst a rite of way for any railroad and Mr Power would veto it if they didnt come to Choto and bild round houses and engine factories. I herd but dunno if its true that he told [illegible] that if that Manitoba railroad bilt west from Asinboin it would have to promise to by Burds Birch creek cole or he wouldnt vote for the rite of way. A rancher on the Teton come up here and said Smith promised that the R R would pay 150 dollars for evry cat that got kiled on the track and the rancher said he sowed cockleburs on the range nere his house to keep sheep of his range. He said that he would fix it so that a man could run a still in the hils nere the British line and raise whisky and not be afrade...

JOURNALISTIC HARI KARI.

A Bright Young Journalist Commits an Unpardonable Blunder and Calmly Awaits Death.

...blunders. Not a few renowned characters can thank the waste basket for doing them a great service at critical periods of their lives. It is related of Napoleon that he once had a quarrel with his...

FACTS AND FIGURES.

Toole Gives a Few Which Laboring Men Should Consider.

The following, clipped from the Hon. J. K. Toole's speech at Glendive, abounds in food for profitable thought and deserves more than the passing notice generally bestowed upon campaign literature. In the course of his speech Mr. Toole said: "My remarks will not be directed to any one in particular but to the members of the party in general, and I wish them to..."

...county will be permitted to suffer at his hands. Mr. Taylor's name should go into the ballot box with an X marked opposite it...

...of the territory who will see that he "gets there" in good shape...

...J. R. Russell is also a resident of Butte. He was educated for the ministry and at...
\The sudden acquisition of *received in the same candid spirit in of C S. Yours truly father-in-law, the emperor of Austria. one time tilled the pulpit of the Presby- wealth in the United States is one of the While in a tit of violent anger he wrote a SHEEP HERDER. terian church at Butte and Deer Lodge. ehich they are written. I utter no corn ' scathing article, expressive of his views, dangers of plaints but leave it to the reader to judge THE POWER SCANDAL. and sent it to Editor Etienne, ordering During the past three or four years he ehether cause for complaint exists. I him to publish it the fellowing day. It has been superintendent of the Butte shall simply state facts which are known what Mr. Toole Has to Say About the 20,000 did not appear, and boiling with rage the common schools and still holds that ; posi- ; 0 every well posted man in the county. world's greatest general sent a ineesage to Pamphlets. WM. Mr. Russell is eminently qualified I the editor with the iintortnation that he During the past few years the demo- _______ . - would be sabered if the article was not for the office to which he is nominated. The Hon. Joseph K. Toole disposed of published the next day. Again it was Judge Stephen DeWolfe is one of the his opponent and the republican talk of omitted, and Etienne was ordered brought soundest lawyers and jurists in the tern - campaign scandal to come in a very neat before the emperor, dead or alive. The way in his speech at Glendive. He said. j \I have spoken in the kindest terms of 1 Mr. Power on all occasions. and aet it is a trifle surprising that the republican pa- pers insist upon us saying unpleasant things against our wishes. The Inter Mountain on Monday evening charged that the state central committee were publishing a large number of pamphlets making unkind revelations concerning Mr. Power. I want to assure you and ; the public that nothing of the kind has been done or will be done. Our republi- can friends should calm their fears. 
' They are too sensitive on this subject. ! A party that is troubled with such a nightmare and afraid to wake up in the morning for fear that during its midnight vigils some unsavory disclosures will be made, can be housed in no solid home of truth. If Mr. Power knows of any reason why he is unfit to be governor he ought to stand up and say so. I cannot be driven into saying an unpleasant thing about him. The pamphlets which so alarmed the inter Mountain were 20,000 copies of the constitution of the state of Montana. They are as harmless as the breath of a rose.\ Those Mythical Pamphlets. It is the opinion of the Butte Inter Mountain that, if the Standard desires to stand well with the people, \it will con- tinue to kick about that anonymous pam- phlet.\ This newspaper needs no press- ing invitation to do it. -Putting all other business aside, it proposes to stay with that subject until the Inter Mountain sustains its assault on members of the democratic central committee or makes the amends due to respectable citizens. i The campaign will have to be strictly per- sonal until the whole matter is Squared. This trouble started with paragraphs of scurrility in which the Inter Mountain linked the name of Marcus Daly, Silven Hughes, Judge Stapleton and others with a plot to darken the name of Mr. Power by the circulation of an anonymous pam- phlet. The motives, methods and pur- pose of members of the committee were assailed in phrases which only journalis- tic thugs are skilled in handling. These gentlemen are resolved that they will not longer endure the insults of the Butte re- publican press. They do not expect the courtesy uniformly accorded by decent journalism, but they will not tolerate the vulgar drivel which the Inter Mountain puts in columns of personal insult and they will know how to defend themselves. 
The Standard again asserts that the pamphlet described by the Inter Mountain was never printed and that it was never prepared or thought of by any member of the central committee. The Inter Mountain prints a lie outright when it says that, by authorized dispatch or otherwise, it was warned of the existence of the alleged pamphlet. It has no reputable authority for any part of the malicious story it started, and it cannot produce evidence to sustain what it has implied regarding the existence of the pamphlet or what it has said regarding the same. We entertain not the slightest doubt that the whole story was invented in the office of the Inter Mountain. Its cowardly purpose was to find a way whereby it might hope to insult Mr. Daly and an opportunity to injure Mr. Power, by hinting at an assault which no democratic newspaper in Montana could be tempted to bring. A newspaper that wags its malicious tongue in unbridled abuse of decent men will betray a favorite even when it fawns on him. We are satisfied that the community in which the Standard and Inter Mountain find their field will know how to make choice between Mr. Daly and his associates on one side, and a babbling newspaper on the other. These gentlemen know the details of the canvass, and they assert there is absolutely no foundation for the insinuations of the Inter Mountain. With full knowledge of all the facts in the case the Standard declares that there is no truth whatever in the tales which the Inter Mountain is seeking to circulate to the discredit of Mr. Power. That gentleman's character needs no defense, and his record, if unwarrantably assailed, will find no sturdier defense than these columns will furnish. But the Butte Inter Mountain must back its knavish insinuations with the facts.—Anaconda Standard.
Editorial discretion has often stood be- tween great men and their unpardonable distinguistied journalist, pale as death, but calm and resolute, entered the pros ence of his chief, and folding his arms awaited his fate. Napoleon paced the • room excitedly for a few minutes, and then seizing the editor by the shcalder he • shooe him violently and exclaimed: \I thank you, sir!\ He hastily left the room, and that was the last Etienne ever heard of the affair. It seems that the em- peror had thought the matter over and it occurred to him that tne editor had dis- covered that the article was an irrepara- ble blunder and had bravely disobeyed orders, to avert the consequences of ill- considered rage. There are times in the history of all editors when the rejection of manuscript requires almost as much nerve as that displayed by the French journalist, but alas there are too many great men who cannot fathom the wis- dom of the tripod as readily as the martyr of St. lielena.—Lemuel Quigg, Esq., in Russell B. Harrison's Daily Hele- na Journal. It is evident from the above that Mr. R. B. Harrison's imported young editor has suppressed one of the son -of -his fath- er's political editorials and is now calmly awaiting death. • The martyr of St. Hele- na and the martyr of Helena, Montana, are two widely different - natures. When the distinguished journalist, Lemuel Quigg, pale as death, and with his arms folded enters the presence of his imperial master, Russell B., and awaits his fate, his excited chief will \fetch the young man a swipe across the head with a stuff- ed chili\ and scatter his brains areund the room without doing any perceptable damage to the fancy wall paper or elegant furniture. Mr. Lemuel Quigg must have known that Russell B. Harrison is not built after the Napoleonic style of archn• tecture. Then why did he court deaLh? When Russell B. 
reaches Helena and 'the tragedy occurs the RIVER PRESS' special correspondent will give seven columns of particulars followed by a biography of each of the distinguished gentlemen, with the usual requiescat in pace attach- ed to that of the late lamented Lemuel. THE DEMOCRATIC CANDIDATES. The political party is indeed fortunate that can present a !ong list of candidates to the people for their suffrages against any one of whom nothing can truthfully be urged. The democratic party of the territory and of Choteau connty, however, present just such candidates. From mem- ber of congress to the most unimportant office every candidate possesses the re - spect and confidence of his neighbors and his community and every one is fully qualified to perform the duties which the nature of his office requires of him. Near- ly all of them are old time residents of the territory and many of them have ac- ceptably served the people in positions of honor or trust. Martin Maginnis needs no introduction to the people of Montana. His name is a household word throughout the length and breadth of the territory and his twelve years' distinguished services as delegate to congress are his credentials to their continued favor. J. K. Toole has proven a most useful servant of the people in the national leg- islature, as his four years labor in that body abundantly testify. The people of Montana are indebted to him more than to any other one man for their nearness to statehood and to the blessings which it will bring to them. J. II. Conrad is a gentleman of fine bus- iness qualifications. He is largely inter- ested in mining, banking, merchandising and stock growing in eastern Montana with whose interests he is intimately identified. He will make an excellent presiding officer of the senate. His home is at Billings. Joseph A. Browne is a resident of Dar- ling, Beaverhead county, where he is largely interested in farming and stock growing. 
He has represented that coun- ty several terms in the legislative assem- bly and is one of the most popular men in southern Montana. He is one of the pio- neers of the territory. Jerry Collins represents northern Mon- tana on the ticket. Mr. Collins is well and favorably known throughout the ter- ritory as a succeseful newspaper man, a business he has followed since he first came to Montana. He was a member of the last territorial council. He lives at Great Falls. T. D. Fitzgerald is a resident of Ana- conda where he holds the office of police magistrate. He is a leading member of thr' K. of L. of that city, a fine speaker and an earnest worker in the cause of la- boring men. ' W. Y. Pemberton resides at Butte. He is an excellent lawyer and a very popular speaker. \Pem as he is familiarly call- ed, has an army of friends throughout tory and is peculiarly fitted by legal train - jug and experience for the office for which he is named. Judge DeWolfe has repeatedly represented Silver Bow county in the territorial legislature and earned a reputation as a law maker second to none in the west. His home is at Butte. W. A. Bickford resides at Missoula. He is a lawyer of extensive experience and extremely popular with members of ; the bar. He was a member of the legis- lative council last winter and a leading • member of the last eoestit u tiona I con yen- ; tion. F. K. Armstrong is a practicing lawyer of Bozeman, with wide experience in his profession and deservedly popular with all classes. He has represented Gallatin county in the territorial legislature and was speaker of the house during one ses- sion of that body. Mr. Armstrong is abundantly qualified for the office for which he is nominated. G. F. Cope is a resident of Madison county where he is extensively engaged in mining and stock raising. Many years ago he was editor and proprietor of a Vir- ginia City newspaper and earned an envia- ble reputation as a pleasing writer and successful business manager. J. B. 
Leslie lives at Great Falls where he is engaged in an extensive law prac- tice. He possesses a fine legal mind and is otherwise eminently qualified for judge of the Eighth judicial district. The counties of Choteau, Cascade and Fergus comprise this district. The electors of Choteau county can make no mistake in voting for these men. Mark an X opposite the names of each and thus honor -yourselves while adding to the majority that will be cast for the entire ticket. How They Will Be. Taylor H eron Brown E Roger S B U cksen Bake R Steel E Ed Ward Dunne Ham I lton Tette N Fi N nigan Jon E s McInty R e S olomon With Todd and Dodd heading the pro- oession. A Case of Big Head. The man who uses his thumbs to wear out the arm holes of his vest may be a benefit to his tailor, and . he may be great in his own estimation, but he isn't big enough to represent Montana in congress. Such a man is Thomas Henry Carter. Modesty is always associated with great- ness. From the character of his speeches and His favorite platform attitude upon discussing Candidate Carter, Thomas has certainly a very bad case of the big head which will result fatally on October 1.— Butte Miner. Mr. Talent Resigns. The Hon. Patrick Talent, Butte's post- master, will to day forward his resigna- tion to the president. Mr. Talent says that the government schedule of salaries for assistants is on such a niggardly :basis that in this country it will not cornmad efficient men. That to secure them he has had to constantly draw on his own funds and he is tired of it. Besides the life of a postmaster is not one continuous round of pleasure in a town where the mails are as heavy as they are here in Butte. Mr. Talent has made a very ef- ficient postmaster and his retirement to the shades or private life will be a source of regret to many. It is thought Mr. W. C. Batchelor will be his successor.—Butte Miner. Mr. Luce of Salt Lake Gets His Money. 
As was stated in the Salt Lake Tribune Henry Luce, the proprietor of the Mint saloon, was the lucky holder of one -twen- tieth part of ticket No. 58,607 in the Lou- isiana State Lottery which drew the sec- ond capital prize of $100,000 at the last drawing.—Salt Lake (Utah) Tribune, August 15. The River Press. Subscribe now for the WEEKLY RIVER PRESS. Send it to \the old folks at home.\ supreme contempt for the individual who panders to the prejudices of the poor by abusing the rich. But every Man who can see and read must observe the encroachment of the money power on the rights of the individual. The issue between plutocra- cy and the people sooner or later must be on trial., The rate at which fortunes are made is simply appalling. Aladdin's lamp is dismissed and mutate Cristo be- comes commonplace when compared to our modern magicians of finance and trade. We hear of homes costing$3,000,- 1000 and A SINGLE BREAKFAST $5,C00. I lhese things fall strangely on the ears of I the millions who live in a hut and dine on a crust. When darkness settled over Egypt and she lost her place among the nations of the earth three per cent, of her population owned ninety-seven per cent of her wealth, and her people starved the times. I have to death. When Babylon fell two per i cent of her population owned all of her wealth and the people starved to death. When Persia bowed her head one, per cent of her population owned all of her lands and the people starved to death. When the sun of despair set upon Rome eighteen hundred men owned all the then known world. \For thirty years the United States has followed rapidly in the train of these old nations. In 1850 the capitalists of this country owned thirty-seven per cent. of the nation's wealth. In 1870 only twenty years later they owned seventy per cent. of our wealth, having nearly doubled their accumulations in that short time. 
This ratio has been more than kept up since 1870, and the capitalist probably now holds more than eighty per cent of the wealth of this country. This vast sum is probably owned and con trolled by .leee than ten per cent, of our populatior:Dam e \ c iis small per cent. is using its .1 power in very depart- ment of business and government to make the rich richer and the poor poorer. WHAT IS TO BE BONE? The world knows that a man less than ten years from poverty has an income of$20,000,000 and his two associates nearly as much, and it all comes from control and arbitrary pricing an article of uni- versal use. The syndicate the trust and the combination which make these things possible are the childen of the republican party. They have been fostered and en- couraged until they are about to swallow up the people and defy the government. The democratic party is opposed to this consolidation of capital, and it stands pledged to furnish a remedy. Let organiz- ed intelligent labor be no longer deluded with the idea that a condition of things which makes it possible to amass THESE COLOSSAL FORTUNES in a day or year can possibly be reconcil- ed with any reasonable theory for the amelioration of the grievances of the working classes. Under the present sys- tem of voting (the Australian) which, in the main, I heartily endorse, the power to be felt and heard is in the hands of the voters unrestrained. The legislative as- seinbly will doubtless provide for a bu- reau of labor and statistics. It is the on- ly method by which authentic informa tion for the protection of the working classes can be gathered and preserved. At its head there ought to be the best and most intelligent representative of labor in the state.\ DEMOCRATIC COUNTY TICKET. The candidates upon the democratic ticket of Choteau county are by no means strangers to its people. 
Many of them are among its oldest residents, while all have been thrown more or less in contact with the voters of the county in the pursuit of their several avocations. Joseph A. Baker has been in business in Fort Benton since he attained his ma- jority. His ability is unquestioned and his integrity above reproach. His inter- ests lie in the direct line of those of the county and its taxpayers, hence it is fair to assume that in every instance he would favor such legislation as would advance the prosperity of all. Mr. Baker is a very safe man to send to the capital as senator from Choteau county. An X op- posite his name should accompany every ballot cast within the limits of the coun- ty next Tuesday. Jesse F. Taylor is one of the leading stockgrowers of the county. He is a man of wide experience in legislative af- fairs, having served the county in the house two or three terms, and being thor- oughly acquainted with its resources and its wants and withal vigilant in the per formance of his duties no interest of the posite to it. Amzi Dodd, Jr., is a ra in of fine busi- ness attainments. Havi 1g proved faith- ful to large trusts confided to his care is an assurance he will acceptably discharge the responsibilities of any position to which the people of Choteau county may elect him. As a legislator Mr. Dodd will fully sustain the reputation he has earn- ed as an intelligent, active, earnest, aon- scientious gentleman. No mistake can be made in placing an x after the name of Ainzi Dodd, Jr. In selecting Samuel J. Heron as its candidate for sheriff the convention nomi- nated a man peculiarly qualified for the position. Young, vigorous,, induetrious intelligent end courageous he will make a model officer. No guilty man will escape the clutches of the law in Choteau coun- ty if Sam Heron strikes his trail and he will come as near striking it as any other man in the territory. An X should fol- low his name. 
The funds of Choteau county can be placed in no better hands than those of David G. Browne. He is an excellent penman, a 'correct accountant and thor- oughly versed in the duties of the office. In feet he possesses all the at ributes of a good officer. An X would be properly placed opposite the name of David G Browne. Everyone knows that Al. Rogers has made a model county clerk and recorder. No fault can be found with him. Strict- ly attentive to business, courteous and accommodating he discharges the duties of his office to the satisfaction of every one who has business with him. An X will follow the name of A. E. Rogers on a large majority of the ballots cast at Tuesday's election. There are but few men who are quali- fied to fill the office of assessor. Th e in- cumbent should be thoroughly conver sant with the value of all kinds of prop- erty and should know where to find it. Mr. A. B. Hamilton's extensive knowl- edge of the county, his wide exponent* gained by a long residence in it, his inti- mate acquaintance with property valua- tions and his excellent judgment shown in placing them are recommendations for the office possessed by few. The name of A. B. Hamilton should have a healthy X placed after it next Tuesday. John W. Tattan has been a faith:Le servant of the people of Choteau county. The democratic convention, recognizing this fact, and his eminent ability as a lawyer, unanimously nominated him as its candidate for county attorney. The nomination was worthily bestowed and will be endorsed by the people next Tues- day by an overwhelming majority. T. J. Todd is a well known business man of Fort Benton. He is a rapid pen- man, an excellent uccountant and metho- dical in his work, and being withal a very courteous pleasant gentleman he will make a model district clerk. The people of Choteau county are very fortunate in securing the services of Mr. 
Todd in the I office and will express their gratification in a substantial manner at the ballot box next Tuesday. X his name. It is not often that the services of a public administrator are needed by the people of a county, but when they are needed by a county, but when they are form them to the best interests of all par- ties concerned. Such a man is F. W. Bucksen. The convention thought so and the people think so; and will show it next Tuesday. The voter can conscien- tiously place an X after the name of F. W. Bucksen. Miss Mary E. Finnigan has so accepta- bly discharged the duties of superinten- dent of common schools that she will have no opposition at the polls next Tue-- day. This is the highest compliment that could be paid to the lady as an officer. Moses Solomon will not neglect the du- ties of coroner. The responsibilities of the office are by no means insignificant, much often depending upon the incum- bent in the work of ferreting out crime. Mr. Solomon is a wide-awake, intelligent gentleman and will make an acceptable coroner. Vote for him. Charles McIntyre will have a walk over for surveyor, no candidate appearing against him. The office of commissioner is the most important one in the county as far as the interests of tax payers is concerned. They are really the custodians of the funds of a county and upon an economical admin- istration of its affairs the treasury oal- minces are determined. Hence it will be seen that none but careful, prudent, in- telligent business men should be entrust- ed with the duties of the office. Such men are found in the persons of Ed- ward Dunne, W. D. Jones and R. M. Steele, the democratic candidates for county commissioners. They are worthy of the support of every tax payer in Cho- teru counts. Vote for them. The Montana Stockman. Subscribe for this • aluable monthly Price, e1.50 per annum.
The River Press (Fort Benton, Mont.), 25 Sept. 1889, located at <http://montananewspapers.org/lccn/sn85053157/1889-09-25/ed-1/seq-1/>, image provided by MONTANA NEWSPAPERS, Montana Historical Society, Helena, Montana.
# File-specific TeX compiling options?
I have TeX files that Aquamacs compiles by default with pdflatex and other files compiled by default with latex (so I get a dvi).
If I make a copy of a file that Aquamacs compiles with latex, that copy is also compiled with latex. Similarly, a copy of a "pdflatex file" will be compiled with pdflatex.
I can't see any metadata in my TeX files that could tell Aquamacs which engine it should compile with.
I have `(setq-default TeX-global-PDF-mode t)` in my .emacs
What determines which compilation is done?
AUCTeX, which Aquamacs uses for TeX files, parses the document preamble when the option `TeX-parse-self` is `t`. If it finds a package that requires DVI output (e.g. pstricks), it compiles with latex instead of pdflatex.
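If you want to force the choice per file rather than rely on preamble parsing, one option (a sketch of the standard Emacs file-local-variables mechanism, which AUCTeX honors) is to set `TeX-PDF-mode` locally at the end of the .tex file:

```latex
% With TeX-PDF-mode set to nil, AUCTeX compiles this file with latex (DVI);
% with t, it uses pdflatex -- overriding the parsed default for this file only.

%%% Local Variables:
%%% mode: latex
%%% TeX-PDF-mode: nil
%%% End:
```

Emacs reads this block when visiting the file, so the setting travels with the document rather than living in your .emacs.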
# Series n sequences
## Recommended Posts
Ok, I've got a couple of problems. How do I prove whether a series is divergent or convergent? And what is the nth term test? Can someone please explain it to me?
I don't know what the n'th term test is, but I suspect it's got something to do with the ratio lemma. To show a series is convergent, you need to show that the sequence of partial sums $s_n = \sum_{k=1}^n a_k$ is convergent. There are a number of results that can help you with this, though (e.g. the comparison test, the ratio lemma, etc).
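To make the partial-sums idea concrete, here is a small numeric sketch (the example series are my choice, not the poster's): the partial sums of $\sum 1/k^2$ settle toward $\pi^2/6 \approx 1.6449$, while those of the harmonic series $\sum 1/k$ keep growing without bound, so the first series converges and the second diverges.

```python
def partial_sums(a, n):
    """Return the sequence of partial sums s_1, ..., s_n for terms a(k)."""
    s, out = 0.0, []
    for k in range(1, n + 1):
        s += a(k)
        out.append(s)
    return out

# Convergent: partial sums of 1/k^2 approach pi^2/6 ~ 1.6449
conv = partial_sums(lambda k: 1.0 / k**2, 10_000)

# Divergent: partial sums of the harmonic series 1/k grow (slowly) forever
div = partial_sums(lambda k: 1.0 / k, 10_000)
```

Of course this only illustrates the definition; a numeric table can suggest convergence but never proves it, which is why the comparison and ratio tests matter.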
Could you suggest any good sites or books?
Cheers
The one book I would suggest is Guide to Analysis, M. Hart. It's rather good and has lots of examples to get you started.
# Diffraction limited donuts & stimulated emission
We can write an analytic expression for the Airy disk, i.e. the intensity of the Fraunhofer diffraction pattern of a circular aperture, as $I(\theta) = I_0 \left(\frac{2 J_1(x)}{x}\right)^2$, where $I_0$ is the peak intensity of the Airy disk, $x = ka \sin(\theta) = \frac{2\pi}{\lambda} a \sin(\theta)$, $\lambda$ is the wavelength, and $a$ is the aperture radius (src: http://en.wikipedia.org/wiki/Airy_disk, "Mathematical details" section).
However, is it possible to write down an analytic expression for the smallest possible diffraction-limited donut? I am interested in this in part to better understand the shape of the emission profile in the context of STimulated Emission Depletion (STED: http://en.wikipedia.org/wiki/STED_microscopy) microscopy, where a "donut"-shaped laser is superimposed on a "stimulating" laser (exciting e.g. fluorophores) focused to the diffraction limit. If this donut-shaped laser is sufficiently intense, it can induce stimulated emission of a red-shifted photon from fluorophores some distance from its circumference. If you then filter out photons beyond a cutoff red-shift from the emission peak of the fluorophores, you'll only see emission near the center of the donut, letting you sort of "cheat" the diffraction limit.
However, I have no idea how the efficiency of stimulated emission falls off as one moves along a line from the circumference of the depletion donut to the donut's center, and I'd like to be able to at least write down an expression for stimulated-emission efficiency vs. displacement that has roughly the right form.
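For the Airy part of the question, the expression $I(\theta)=I_0(2J_1(x)/x)^2$ can be evaluated directly. Here is a minimal self-contained sketch (the power-series truncation and the small-$x$ cutoff are my choices, and this does not model the depletion donut itself, only the Airy pattern quoted above):

```python
import math

def bessel_j1(x, terms=30):
    """J1 via its power series: sum_{m>=0} (-1)^m / (m! (m+1)!) * (x/2)^(2m+1).
    Accurate for the modest |x| relevant near the central Airy lobe."""
    return sum(
        (-1) ** m / (math.factorial(m) * math.factorial(m + 1)) * (x / 2.0) ** (2 * m + 1)
        for m in range(terms)
    )

def airy_intensity(x, I0=1.0):
    """I(x) = I0 * (2 J1(x) / x)^2, with the x -> 0 limit I(0) = I0."""
    if abs(x) < 1e-9:
        return I0
    return I0 * (2.0 * bessel_j1(x) / x) ** 2
```

The first dark ring sits at the first zero of $J_1$, $x \approx 3.8317$, which is where the Rayleigh resolution criterion comes from.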
# Question: Coase theorem question, specifically (e) & (f)
###### Question details
Coase theorem question. Specifically e & f
Suppose that a rancher is raising cattle (X) next to a farmer. The profits of the rancher are given by $\pi(X) = 100X - X^2$ for $0 \le X \le 100$, and the utility of the farmer is given by $U(W, X) = W(100 - X)$ for $0 \le X \le 100$, where W is her level of wealth. Assume initially W = 50.
a) Suppose the rancher has the right to run as many cattle as she likes. How many cattle will she choose?
b) Suppose the farmer has the right to dictate how many cattle will be run. How many cattle will she choose?
c) What is the efficient number of cattle to run? (i.e. Solve the social planner’s problem)
d) Suppose the government will tax the rancher $T per cow. At what tax rate $T* will the rancher choose to run the efficient number of cattle?
e) Suppose the farmer chooses the number of cattle, and the farmer is paid \$S per cow by the rancher. (The amount paid to the farmer enters her wealth.) How many cows will the farmer choose to run?
f) Bonus: Why do the answers to c) and e) differ?
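The question is posted without an answer; as a sanity check, parts (a), (c), and (d) can be brute-forced numerically. This sketch is my own and is not part of the original question; for (c) it makes the standard assumption that the farmer's utility is measured in the same (dollar) units as the rancher's profit.

```python
# Numeric sketch for parts (a), (c) and (d) of the Coase problem above.

W = 50  # farmer's initial wealth

def rancher_profit(x):
    return 100 * x - x ** 2

def farmer_utility(x, w=W):
    return w * (100 - x)

grid = [k / 100 for k in range(0, 10001)]  # X in [0, 100], step 0.01

# (a) The rancher alone maximizes her own profit.
X_a = max(grid, key=rancher_profit)

# (c) The planner maximizes joint surplus pi(X) + U(W, X).
X_c = max(grid, key=lambda x: rancher_profit(x) + farmer_utility(x))

# (d) With a per-cow tax T the rancher solves max 100X - X^2 - T*X;
# the FOC 100 - 2X - T = 0 yields the efficient X_c when T* = 100 - 2*X_c.
T_star = 100 - 2 * X_c
```

This gives X = 50 for (a), X = 25 for (c), and T* = 50 for (d).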
# (32) Draping an image over topography¶
In some cases, it is nice to “drape” an arbitrary image over a topographic map. We have already seen how to use image to plot an image anywhere in our plot. But here our aim is different: we want to manipulate an image to shade it and plot it in 3-D over topography. This example was originally created by Stephan Eickschen for a flyer emphasizing the historical, economic, and cultural bond between Brussels, Maastricht, and Bonn. Obviously, the flag of the European Union came to mind as a good “background”.
To avoid adding large files to this example, some steps have been already done. First we get the EU flag directly from the web and convert it to a grid with values ranging from 0 to 255, where the higher values will become yellow and the lower values blue. This use of grdconvert requires GDAL support. grdedit then adds the right grid dimension.
The second step is to reformat the GTOPO30 DEM file to a netCDF grid as well and then subsample it at the same pixels as the EU flag. We then illuminate the topography grid so we can use it later to emphasize the topography. The colors that we will use are those of the proper flag. Lower values will become blue and the upper values yellow.
The call to grdview plots a topography map of northwest continental Europe, with the flag draped over it and with shading to show the little topography there is. coast is used in conjunction with grdtrack and plot3d to plot borders “at altitude”. Something similar is done at the end to plot some symbols and names for cities.
The script produces the plot in the Figure fig_ex32. Note that the PNG image of the flag can be downloaded directly in the call to grdconvert, but we have commented that out in the example because it requires compilation with GDAL support. You will also see the grdcut command commented out because we did not want to store the 58 MB DEM file, whose location is mentioned in the script.
#!/usr/bin/env bash
# GMT EXAMPLE 32
#
# Purpose: Illustrate draping of an image over topography
# GMT modules: grdcut, grdedit, grdgradient, grdconvert, grdtrack, grdview
# GMT modules: coast, text, plot3d, makecpt
# Unix progs: cat, rm
# Credits: Original by Stephan Eickschen
#
gmt begin ex32
# Here we get and convert the flag of Europe directly from the web through grdconvert using
# GDAL support. We take into account the dimension of the flag (1000x667 pixels)
# for a ratio of 3x2.
# Because GDAL support will not be standard for most users, we have stored
# the result, @euflag.nc in this directory.
Rflag=-R3/9/50/54
# gmt grdconvert \
# gmt grdedit euflag.nc -fg $Rflag

# Now get the topography for the same area, mask out the oceans and store it as topo_32.nc.
gmt grdcut @earth_relief_30s_p $Rflag -Gtopo_32.nc=ns
gmt grdcut @earth_mask_30s_p $Rflag -Gmask_32.nc=ns
gmt grdmath topo_32.nc mask_32.nc 0 GT 0 NAN MUL = topo_32.nc

# The color map assigns "Reflex Blue" to the lower half of the 0-255 range and
# "Yellow" to the upper half.
gmt makecpt -C0/51/153,255/204/0 -T0,127,255 -N

# The next step is the plotting of the image.
# We use gmt grdview to plot the topography, euflag.nc to give the color, and
# illumination to give the shading.
Rplot=$Rflag/-10/790
gmt grdview topo_32.nc -JM13c $Rplot -C -G@euflag.nc -I+a0/270+ne0.6 -Qc -JZ1c -p157.5/30

# We now add borders. Because we have a 3-D plot, we want them to be plotted "at elevation".
# So we write out the borders, pipe them through grdtrack and then plot them with plot3d.
gmt coast $Rflag -Df -M -N1 | gmt grdtrack -Gtopo_32.nc -s+a | gmt plot3d $Rplot -JZ -p -W1p,white

# Finally, we add dots and names for three cities.
# Again, gmt grdtrack is used to put the dots "at elevation".
cat <<- EOF > cities.txt
05:41:27 50:51:05 Maastricht
04:21:00 50:51:00 Bruxelles
07:07:03 50:43:09 Bonn
EOF
gmt grdtrack -Gtopo_32.nc cities.txt | gmt plot3d $Rplot -JZ -p -Sc7p -W1p,white -Gred
gmt end show
# Math Help - [SOLVED] Please find the mistake
1. ## [SOLVED] Please find the mistake
I have to find the particular solution of the following DE.
$(D^2 - 3D + 2)y = \sin x$
Here is how I did it.
$\frac{sinx}{D^2-3D+2}$
$\frac{Im(e^{ix})}{D^2-3D+2}$
$\frac{cosx + isinx}{(i)^2 -3(i) +2}$
$\frac{cosx + isinx}{1-3i}$
Then after rationalizing,
$\frac{cosx-3sinx}{10}$
Where as the answer given by the book is,
$\frac{sinx + 3cosx}{10}$
Can you please tell where is my mistake ?
2. Originally Posted by Altair
I have to find the particular solution of the following DE.
$(D^2 - 3D + 2)y = \sin x$
Here is how I did it.
$\frac{sinx}{D^2-3D+2}$
$\frac{Im(e^{ix})}{D^2-3D+2}$
$\frac{cosx + isinx}{(i)^2 -3(i) +2}$
$\frac{cosx + isinx}{1-3i}$
Then after rationalizing,
$\frac{cosx-3sinx}{10}$
Where as the answer given by the book is,
$\frac{sinx + 3cosx}{10}$
Can you please tell where is my mistake ?
You've found the real part of $\frac{(\cos x + i \sin x)(1 + 3i)}{10}$. The particular solution is the imaginary part, since $\sin x$ is the imaginary part of $e^{ix}$ .....
3. Originally Posted by mr fantastic
You've found the real part of $\frac{(\cos x + i \sin x)(1 + 3i)}{10}$. The particular solution is the imaginary part, since $\sin x$ is the imaginary part of $e^{ix}$ .....
Got it. But how do I get the imaginary part?
4. Originally Posted by Altair
Got it. But how do I get the imaginary part?
You're kidding me? How did you get the real part if you don't know how to get the imaginary part?
$\frac{(\cos x + i \sin x)(1 + 3i)}{10}$.
Expand the numerator. Throw away the bits that don't have an i in them. What's left (divided by 10) gives the imaginary part!!
5. Originally Posted by mr fantastic
You're kidding me? How did you get the real part if you don't know how to get the imaginary part?
Expand the numerator. Throw away the bits that don't have an i in them. What's left (divided by 10) gives the imaginary part!!
You know what our professor simply told us to "multiply the i term with the i term and the real one with real term". And failed to give the answer why. You gave it. Thanks. And I surely didn't know.
6. $\frac{3icosx + isinx}{10}$
How do I get rid of the i ? Just by telling that as Sin is the imaginary part so this i is just the indicator ?
7. Originally Posted by Altair
$\frac{3icosx + isinx}{10}$
How do I get rid of the i ? Just by telling that as Sin is the imaginary part so this i is just the indicator ?
The real part of a + ib is a.
The imaginary part of a + ib is b.
-Dan
8. No ... you multiply the numerator as you would with normal binomials (FOIL if that's what you call it)
$\frac{(\cos x + i \sin x)(1 + 3i)}{10}$
$=\frac{\cos x + 3i\cos x + i\sin x + 3i^{2}\sin x}{10}$
Simplify, combining like terms, and eventually you'll get the numerator in the form a + bi. As pointed out, a will be your real part and b will be the imaginary part.
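The thread's conclusion can be sanity-checked numerically. This sketch is my own (not from the forum): it verifies both that the book's particular solution $y_p = (\sin x + 3\cos x)/10$ satisfies the ODE and that it equals the imaginary part of $e^{ix}(1+3i)/10$, while the OP's original $(\cos x - 3\sin x)/10$ is the real part.

```python
import math
import cmath

# Numeric sanity check of the thread above, at an arbitrary sample point.
x = 0.7

# The book's particular solution y_p = (sin x + 3 cos x)/10 and its derivatives:
def y(t):
    return (math.sin(t) + 3 * math.cos(t)) / 10

def dy(t):
    return (math.cos(t) - 3 * math.sin(t)) / 10

def d2y(t):
    return (-math.sin(t) - 3 * math.cos(t)) / 10

# 1) y_p really satisfies y'' - 3y' + 2y = sin x:
residual = d2y(x) - 3 * dy(x) + 2 * y(x) - math.sin(x)

# 2) y_p is the imaginary part of e^{ix}(1 + 3i)/10; the OP's
#    (cos x - 3 sin x)/10 is the real part:
z = cmath.exp(1j * x) * (1 + 3j) / 10
```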
## Thinking Mathematically (6th Edition)
The remaining piece of the board is 1 foot 4$\frac{7}{16}$ inches long. This could also be written as 16$\frac{7}{16}$ inches long.
The longer board is 2 feet long. Using the fact that 12 inches = 1 foot, we know that the longer board is 24 inches long (24 inches = 2 feet).

A 7$\frac{1}{2}$-inch piece is removed using a $\frac{1}{16}$-inch-wide saw blade. This means that 7$\frac{1}{2}$ inches + $\frac{1}{16}$ inch are removed from the larger board. We need common denominators to add. The common denominator for fractions with denominators of 2 and 16 is 16. So we have 7$\frac{8}{16}$ + $\frac{1}{16}$ = 7$\frac{9}{16}$.

Now, we subtract the number we just calculated from the 24 inches (the length of the longer board): 24 - 7$\frac{9}{16}$. We could convert both numbers to improper fractions; however, a quicker (and simpler) way to subtract is to convert 24 to a mixed numeral: 24 = 23 + 1 = 23 + $\frac{16}{16}$ = 23$\frac{16}{16}$. Now subtract: 23$\frac{16}{16}$ - 7$\frac{9}{16}$ = 16$\frac{7}{16}$. Note: since the fractional part of the smaller number was less than the fractional part of the larger number, we did not have to "borrow" from the 23 to complete the subtraction.

16$\frac{7}{16}$ inches of the board is left. We can convert this back to feet and inches by using the fact that 12 inches = 1 foot. Using this conversion, 16 inches = 1 foot 4 inches. We still have the fractional part of the answer, so the final answer is 1 foot 4$\frac{7}{16}$ inches.
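The arithmetic above can be double-checked with exact rational arithmetic; here is a small sketch using Python's `fractions` module:

```python
from fractions import Fraction

# Exact rational check of the board arithmetic above.
board = Fraction(24)                                       # 2 feet = 24 inches
removed = Fraction(7) + Fraction(1, 2) + Fraction(1, 16)   # piece + saw kerf
left = board - removed                                     # inches remaining

whole, frac = divmod(left, 1)    # split into 16 and 7/16
feet, inches = divmod(whole, 12) # convert 16 inches to 1 foot 4 inches
```

This confirms the remainder is 16$\frac{7}{16}$ inches, i.e. 1 foot 4$\frac{7}{16}$ inches.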