# Find a complex number a + bi such that a² + b² is irrational

Real and imaginary numbers combine to form complex numbers. The imaginary unit i (iota) denotes a square root of −1 and multiplies the imaginary part of a complex number. A complex number is typically written in rectangular, or standard, form as a + ib. For example, 420 + 69i is a complex number in which 420 is the real part and 69 is the imaginary part.

### Modulus

When a complex number is presented on a graph, its real part is plotted on the x-axis and the imaginary part on the y-axis. [Figure omitted: z = a + ib plotted as a point P, with A and B the feet of the perpendiculars from P to the x- and y-axes.] If the number is represented by the point P, the triangles OPA and OPB are both right-angled. Clearly, in the right triangle POA, OP is the hypotenuse, OA is the base, and PA is the perpendicular. Using the Pythagorean theorem, we have:

OP² = OA² + PA²

OP = √(a² + b²)

The absolute value of a complex number is regarded as its modulus. It is the square root of the sum of squares of its real and imaginary parts. In the above case, OP is the modulus of the complex number of the form z = a + ib, and is denoted by r.

### Find a complex number a + bi such that a² + b² is irrational.

Solution:

An irrational number is one which cannot be expressed in the form a/b with integers a and b, b ≠ 0; examples are √2, √3, etc.

Say we are given the complex number 1 + ∛4 i. Comparing this with the form a + ib, we have: a = 1, b = ∛4 = 2^(2/3).

Now, a² + b² = 1 + 4^(2/3).

Since 4^(2/3) cannot be expressed as a ratio of two integers, it is irrational, and hence so is the sum.

### Similar Problems

Question 1. Prove that the square of the modulus of ∛2 i is irrational.

Solution: Comparing this with the form a + ib, we have: a = 0, b = ∛2. Now, a² + b² = 2^(2/3). Since 2^(2/3) cannot be expressed as a ratio of two integers, it is irrational.

Question 2. Prove that the square of the modulus of ⁵√2 i is irrational.

Solution: Comparing this with the form a + ib, we have: a = 0, b = ⁵√2. Now, a² + b² = 2^(2/5). Since 2^(2/5) cannot be expressed as a ratio of two integers, it is irrational.

Question 3. Prove that the square of the modulus of 3 + ∛2 i is irrational.

Solution: Comparing this with the form a + ib, we have: a = 3, b = ∛2. Now, a² + b² = 9 + 2^(2/3). Since 2^(2/3) is irrational, so is 9 + 2^(2/3).

Question 4. Prove that the square of the modulus of 4 + √4 i is rational.

Solution: Comparing this with the form a + ib, we have: a = 4, b = √4 = 2. Now, a² + b² = 4² + 2² = 20. Since 20 can be expressed as 20/1, it is a rational number.

Question 5. Prove that the square of the modulus of 10 + ∛2 i is irrational.

Solution: Comparing this with the form a + ib, we have: a = 10, b = ∛2. Now, a² + b² = 100 + 2^(2/3). Since 2^(2/3) is irrational, so is 100 + 2^(2/3).
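The solutions above assert that powers such as 2^(2/3) are "clearly" irrational. A short supporting argument, added here for completeness (it is the same descent classically used for √2):

Suppose 2^(2/3) = p/q for integers p and q in lowest terms, q ≠ 0. Cubing both sides gives

2² = p³/q³, that is, p³ = 4q³.

Then p³ is even, so p is even; writing p = 2m gives 8m³ = 4q³, that is,

q³ = 2m³,

so q is even as well, contradicting the assumption that p and q have no common factor. Hence 2^(2/3) is irrational. The other values follow similarly: if 4^(2/3) = 2^(4/3) were rational, its square 2^(8/3) = 4 · 2^(2/3) would be rational too, forcing 2^(2/3) to be rational; and 2^(2/5) yields to the same descent with fifth powers.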
# Composition of Linear Transformation and Matrix Multiplication

1. Mar 30, 2009

### jeff1evesque

Theorem 2.15: Let A be an m x n matrix with entries from F. Then the left-multiplication transformation $$L_A: F^n \to F^m$$ is linear. Furthermore, if M is any other m x n matrix (with entries from F) and B and D are the standard ordered bases for $$F^n$$ and $$F^m$$, respectively, then we have the following properties.

(d.) If $$T: F^n \to F^m$$ is linear, then there exists a unique m x n matrix C such that $$T = L_C$$. In fact, $$C = [T]_B^D$$.

Proof: Let $$C = [T]_B^D$$. By Theorem 2.14, we have $$[T(x)]_D = [T]_B^D[x]_B$$, or T(x) = Cx = $$L_C(x)$$ for all x in $$F^n$$. So T = $$L_C$$.

In particular I don't understand how T(x) = Cx.

Thanks, JL

2. Mar 30, 2009

### slider142

You previously defined C to be the matrix of T with respect to the bases B and D. By Theorem 2.14, you have the equivalence to $[T]_B^D[x]_B$; the next line is just replacing symbols with their equivalent matrix/vector forms.

3. Mar 31, 2009

### jeff1evesque

thanks.

Last edited: Mar 31, 2009
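The step in question becomes transparent once one notes that, for the standard ordered bases, a vector is its own coordinate vector: $[x]_B = x$ and $[T(x)]_D = T(x)$, so the Theorem 2.14 identity $[T(x)]_D = [T]_B^D [x]_B$ literally reads $T(x) = Cx$. A concrete instance (my own illustration, not from the textbook): take $T: F^2 \to F^2$, $T(x_1, x_2) = (x_1 + 2x_2,\ 3x_1)$. The columns of $C = [T]_B^D$ are the images of the standard basis vectors, $T(e_1) = (1, 3)$ and $T(e_2) = (2, 0)$, so

$$C = \begin{bmatrix} 1 & 2 \\ 3 & 0 \end{bmatrix}, \qquad Cx = \begin{bmatrix} 1 & 2 \\ 3 & 0 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} x_1 + 2x_2 \\ 3x_1 \end{bmatrix} = T(x),$$

which is exactly the assertion $T = L_C$.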
Hello,

Suppose that $k$ is an algebraically closed field of char. 0. Let $X$ be a smooth connected variety over $k$. Then I have the category $A$ of regular singular smooth $D$-modules on $X$ (i.e. algebraic vector bundles equipped with regular singular algebraic flat connections). For a category $B$, I am less sure. I would like to say "local systems of $k$-vector spaces for the etale topology on $X$", but maybe this is not good, as $k$ is not finite or of $l$-adic nature in general. So:

1) Can one make sense of a category $B$, and can one expect $A$ and $B$ to be equivalent?

2) In any case, there seems to be a functor from the category of local systems (of finite sets) on the etale topology to $A$, by "tensoring" with the constant $D$-module (as $D$-modules are etale local). What can one say about this functor?

3) On a "decategorified" level, what can one say about the etale fundamental group versus the group which we get by Tannakian formalism from $A$?

Thank you, Sasha

Comment: Correctly stated (over $\mathbb{C}$), this comes down to comparing $\mathbb{Z}$-representations of the topological fundamental group with $\mathbb{Z}_{\ell}$-representations of the algebraic fundamental group. The algebraic fundamental group is the profinite completion of the topological fundamental group, so there is a lot one can say. – anon Mar 22 '13 at 15:03

Answer (Lars): I cannot say much about the $\ell$-adic side. I will give "classical" answers to 1)-3). As you know, the Riemann-Hilbert correspondence says that on a smooth complex variety $X$ the category $A$ of vector bundles with flat regular singular connection is equivalent to the category of representations of the topological fundamental group $\pi_1^{\operatorname{top}}(X)$ on finite-dimensional complex vector spaces. Let's write this category $\operatorname{Repf}_{\mathbb{C}} \pi_1^{\operatorname{top}}(X)$ (neglecting base points).

Since $\pi_1^{\operatorname{et}}(X)$ is the profinite completion of the abstract group $\pi_1^{\operatorname{top}}(X)$, a representation $\pi_1^{\operatorname{top}}(X)\rightarrow GL(V)$ which factors through a finite quotient can be thought of as a representation $\pi_1^{\operatorname{et}}(X)\rightarrow GL(V)$ which is continuous with respect to the profinite topology on the left and the discrete topology on the right. Hence, given an etale covering $f:Y\rightarrow X$, Galois theory associates with it a finite $\pi_1^{\operatorname{et}}(X)$-set, which we can linearize to get a representation, and then a $\mathcal{D}$-module. But what does this mean concretely? It is not difficult to check that $f_*\mathcal{O}_Y$ is an $\mathcal{O}_X$-coherent $\mathcal{D}_X$-module (hence a vector bundle), and it is a theorem that it is regular singular (Gauss-Manin).

About your third question: the pro-algebraic affine group scheme associated with the Tannaka category $\operatorname{Repf}_{\mathbb{C}} \pi_1^{\operatorname{top}}(X)$ is by definition the pro-algebraic completion of the finitely generated group $\pi_1^{\operatorname{top}}(X)$. The etale fundamental group is the profinite completion of this group. And amazingly, the profinite completion "controls" the pro-algebraic completion:

Theorem: Let $f:G\rightarrow H$ be a morphism of finitely generated (abstract) groups. Then $f$ induces an isomorphism on pro-algebraic completions if and only if it induces an isomorphism on profinite completions.

I am told that this was first discovered by Malcev, and then independently rediscovered by Grothendieck.
Grothendieck had precisely the application to the Riemann-Hilbert correspondence in mind. See: Grothendieck, Alexander. Représentations linéaires et compactification profinie des groupes discrets (French, English summary). Manuscripta Math. 2 (1970), 375–396.

Answer: One has big problems trying to make the category $B$ work. Standard Lefschetz principle arguments justify Lars' use of $\mathbb C$ even if $k$ is not, in fact, $\mathbb C$. Thus, we can write $A$ as the category of representations $\pi_1^{\operatorname{top}}(X) \to GL_n(\mathbb C)$.

Any $l$-adic construction is going to be about continuous representations $\pi_1^{\operatorname{et}}(X) \to GL_n(\mathbb C)$ for some topology on $GL_n(\mathbb C)$. To get an equivalence of categories in any kind of nice way, we clearly need every representation $\pi_1^{\operatorname{top}}(X) \to GL_n(\mathbb C)$ to extend to a continuous representation of $\pi_1^{\operatorname{et}}(X)$, so its image must lie in a compact subgroup. But $GL_n(\mathbb Q_l)$ has elements, like $\frac{1}{l} I$, that do not lie in any compact subgroup. I don't see any way to modify this construction to fix that bug.

For 3), you may find my answer here interesting.
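A standard example that makes the functor in question 2) completely explicit (my addition, not part of the answers above): take $X = \mathbb{G}_m$ with coordinate $s$, and let $f : \mathbb{G}_m \to \mathbb{G}_m$, $t \mapsto t^n = s$, be the Kummer cover, with $n$ invertible in $k$. Then the pushforward decomposes as

$$f_*\mathcal{O} \;\cong\; \bigoplus_{k=0}^{n-1} \left(\mathcal{O}_{\mathbb{G}_m},\ d + \tfrac{k}{n}\,\tfrac{ds}{s}\right),$$

a direct sum of rank-one flat connections, each regular singular (at $0$ and $\infty$) with finite monodromy in $\mu_n$; over $\mathbb{C}$, the summand with index $k$ has monodromy $e^{-2\pi i k/n}$, since its horizontal sections are multiples of $s^{-k/n}$. This is the linearization of the finite $\pi_1^{\operatorname{et}}$-set attached to the $\mu_n$-torsor $f$.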
# Spatial Additive in SuperCollider

The following example creates a spatially distributed sound through additive synthesis. A defined number (40) of partials is routed to individual virtual sound sources which are rendered to a 3rd order Ambisonics signal.

## A Partial SynthDef

A SynthDef for a single partial with amplitude and frequency as arguments. In addition, the output bus can be set. The sample rate is considered to avoid aliasing for high partial frequencies.

(
SynthDef(\spatial_additive,
{ |outbus = 16, freq = 100, amp = 1|
    // anti-aliasing safety: mute partials at or above the Nyquist frequency
    var gain = amp * (freq < (SampleRate.ir * 0.5));
    var sine = gain * SinOsc.ar(freq);
    Out.ar(outbus, sine);
}).send;
)

## The Partial Synths

Create an array with 40 partial Synths, using integer multiples of the 100 Hz fundamental frequency. Their amplitude decreases towards higher partials. An audio bus with 40 channels receives all partial signals separately. All synths are added to a dedicated group to ease control over the node order.

~partial_GROUP = Group(s);
~npart = 40;
~partial_BUS = Bus.audio(s, ~npart);

(
~partials = Array.fill(~npart, { arg i;
    Synth(\spatial_additive,
        [\outbus, ~partial_BUS.index + i, \freq, 100 * (i + 1), \amp, 1/(1+i*~npart*0.1)],
        ~partial_GROUP)
});
)

s.scope(16, ~partial_BUS.index);

## The Encoder SynthDef

A simple encoder SynthDef with dynamic input bus and the control parameters azimuth and elevation.

~ambi_BUS = Bus.audio(s, 16);

(
SynthDef(\encoder,
{ |inbus = 0, azim = 0, elev = 0|
    Out.ar(~ambi_BUS, HOAEncoder.ar(3, In.ar(inbus), azim, elev));
}).send;
)

## The Encoder Synths

An array of 40 encoder synths, one per partial, is created in a dedicated encoder group. This group is added after the partial group to ensure the correct order of the synths. Each encoder synth receives a single partial from the partial bus. All encoded signals are summed onto a 16-channel audio bus, since a 3rd order Ambisonics signal has (3+1)^2 = 16 channels.

~encoder_GROUP = Group(~partial_GROUP, addAction: 'addAfter');

(
~encoders = Array.fill(~npart, { arg i;
    Synth(\encoder, [\inbus, ~partial_BUS.index + i, \azim, i * 0.1], ~encoder_GROUP)
});
)

~ambi_BUS.scope

## The Decoder Synth

A decoder is added after the encoder group and fed with the encoded Ambisonics signal. The binaural output is routed to outputs 0, 1 - left and right.

// load binaural IRs for the decoder

(
~decoder = {
    Out.ar(0, HOABinaural.ar(3, In.ar(~ambi_BUS.index, 16)));
}.play;
)

~decoder.moveAfter(~encoder_GROUP);

Exercise I

Create arrays of LFOs or other modulation signals to implement a varying spatial image. Use an individual control rate bus for each parameter to be controlled. (A sketch follows below.)

Exercise II

Modulate the timbre (relative partial amplitudes) with the modulation signals.
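A possible starting point for Exercise I (a sketch, not a reference solution; the names ~azim_BUS and ~lfos are my own, and the LFO rates are arbitrary): give every encoder its own control-rate bus and LFO, then map the azim control to the matching bus.

// Sketch for Exercise I: per-encoder azimuth LFOs (assumes the setup above).
~azim_BUS = Bus.control(s, ~npart);

(
~lfos = Array.fill(~npart, { arg i;
    {
        // slow sine LFO per partial, slightly detuned and phase-offset
        // by index, scaled to the full circle [0, 2pi]
        Out.kr(~azim_BUS.index + i,
            SinOsc.kr(0.05 + (i * 0.01), i * 0.3).range(0, 2pi));
    }.play;
});

// map each encoder's azimuth argument to its dedicated control bus
~encoders.do({ arg enc, i; enc.map(\azim, ~azim_BUS.index + i); });
)

The same pattern applies to Exercise II: a second control bus per partial, mapped to the amp argument of the partial synths with ~partials.do and .map, modulates the relative partial amplitudes and hence the timbre.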
# Finite Subgroups of $GL_2(\mathbb Q)$

I want to prove that the only finite subgroups of $GL_2(\mathbb Q)$ are $C_1, C_2, C_3, C_4, C_6, V_4, D_6, D_8,$ and $D_{12}$.

First, we determine all possible finite orders of elements. Now, an element of order $n$ will have a minimal polynomial that divides $x^n-1$, so its (complex) roots will be distinct, and so the matrix will be diagonalizable over $\mathbb C$. This implies that all eigenvalues are $n$th roots of unity and that at least one is primitive, so the $n$th cyclotomic polynomial divides the minimal polynomial. But the minimal polynomial can only have degree $1$ or $2$, since we're dealing with $2\times 2$ matrices, so the only possible orders are those $n$ for which $\phi(n)=1$ or $2$; that is, the only possible orders are $1, 2, 3, 4$, and $6$. Thus, if $G$ is a finite subgroup of $GL_2(\mathbb Q)$, then $|G|=2^a3^b$.

Now, $G$ contains a Sylow $3$-subgroup of order $3^b$, and any $3$-group contains subgroups of every possible order, so once we show that $C_3\times C_3$ is not a subgroup of $GL_2(\mathbb Q)$, we can conclude that $b=0$ or $1$, since we already saw that $C_9$ cannot be a subgroup (there is no element of order $9$).

WLOG, let $g=\begin{bmatrix} 0&-1\\1&-1\end{bmatrix}$, which is the rational canonical form for the minimal polynomial $x^2+x+1$, and so has order $3$. We seek to show that there is no matrix $h$ such that $h$ has order $3$, commutes with $g$, and isn't a power of $g$. So assume $h=\begin{bmatrix} a&b\\c&d\end{bmatrix}$. Then $gh=\begin{bmatrix} -c&-d\\a-c&b-d\end{bmatrix}$ and $hg=\begin{bmatrix} b&-(a+b)\\d&-(c+d)\end{bmatrix}$. Equating these, we get $c=-b$ and $d=a+b$, so $h=\begin{bmatrix} a&b\\-b&a+b\end{bmatrix}$. For $h$ to have order $3$, the trace must be $-1$ and the determinant must be $1$, just like in the rational canonical form, so $2a+b=-1$ and $a^2+ab+b^2=1$. The solutions are $a=-1, b=1$ and $a=0, b=-1$. The latter gives $h=g$ and the former gives $h=g^2$, so there can be no subgroup isomorphic to $C_3\times C_3$, and thus $9$ does not divide the order of the group.

Now, the next step would be to restrict the exponent of $2$, but I'm not quite sure how to do this. One part of the problem tells me to show that the only noncyclic abelian subgroup is $V_4$, the Klein $4$-group. If we let $g=\begin{bmatrix} 0&1\\1&0\end{bmatrix}$, the rational canonical form with minimal polynomial $x^2-1$, then the only order $2$ matrices that commute with it are $-g$ and $-g^2=-I$, which together with $g$ and $I$ form a Klein $4$-group. This means that $C_2^3$ is not a subgroup. Also, if we let $g=\begin{bmatrix} 0&-1\\1&0\end{bmatrix}$ be the rational canonical form of $x^2+1$, then the only order $2$ element that commutes with it is $-I=g^2$, so there is also no subgroup isomorphic to $C_4\times C_2$.

In a previous exercise, I showed that $Q_8$ is not a subgroup of $GL_2(\mathbb R)$, and thus isn't a subgroup over $\mathbb Q$ either. But this doesn't seem to prevent subgroups of order $16$, since $D_8$ is in fact a subgroup. Checking all $14$ groups of order $16$, it seems that all of them have an order $8$ subgroup besides $D_8$, but is there a cleaner way to rule out groups of order $16$ without classifying them? Apparently I'm supposed to conclude that the order of $G$ divides $24=2^3\cdot 3$ from the fact that the Klein $4$-group is the only noncyclic abelian subgroup.

So now, if I assume as known that the order of $G$ divides $24$, then the possible orders for $G$ are $1, 2, 3, 4, 6, 8, 12$, and $24$. I can find $C_1, C_2, C_3, C_4, C_6, V_4, D_6, D_8,$ and $D_{12}$.
This takes care of all the orders except $12$ and $24$. I can show that $C_{12}$ and $C_{6}\times C_2$ are not subgroups, but I'm not sure how to rule out the nonabelian groups of order 12, $A_4$ and $C_3\rtimes C_4$, or the groups of order $24$. So in summary, I'm a bit stuck on ruling out the nonabelian groups of order $12, 16,$ and $24$. Is there a more elegant way to do this than to look at the classifications of these groups and find subgroups which I have already shown to be impossible?

• The article of Mackiw is really useful for your question! It is easy to reduce the question to the finite subgroups of $GL_2(\mathbb{Z})$. Jun 21, 2014 at 18:21
• Wow, that article was great! So how do I reduce from $\mathbb Q$ to $\mathbb Z$? Jun 21, 2014 at 18:55
• For another way of excluding $C_3\times C_3$ I proffer the observation that over $\Bbb{C}$ they become simultaneously diagonalizable. Therefore such a group necessarily has a non-trivial element that has $1$ as an eigenvalue (the other eigenvalue being a non-trivial cubic root of unity). Consequently the minimal polynomial of such a matrix over $\Bbb{Q}$ would necessarily be cubic, which is a no-no. Jun 21, 2014 at 20:41

One approach is the following: Let $M = M_{2}(\mathbb{Q}).$ Note that if $x$ is an element of order $4$ or $3$ in $G,$ then the subalgebra $C_{M}(x)$ is a division ring by Schur's Lemma, as $x$ acts irreducibly. Hence $C_{G}(x)$ is cyclic. In particular, a Sylow $3$-subgroup of $G$ is cyclic, of order $3,$ as you have shown already.

Let $S$ be a Sylow $2$-subgroup of $G,$ and let $A$ be a maximal Abelian normal subgroup of $S$. If $A$ has exponent $2,$ then $A$ is elementary Abelian, and hence has order at most $4.$ Otherwise, $A$ contains an element $x$ of order $4,$ and then $A$ is cyclic as $C_{G}(x)$ is cyclic. Also, as you have shown, $|A| \leq 4$ in that case. Now as $A$ is maximal Abelian normal in $S,$ we have $C_{S}(A) = A,$ and $S/A$ is isomorphic to a subgroup of ${\rm Aut}(A).$ If $A$ is a Klein $4$-group, then ${\rm Aut}(A) \cong S_{3},$ and if $A$ is cyclic of order $4,$ then ${\rm Aut}(A)$ has order $2.$ Hence in either case, $|S| \leq 8.$ Since you have already ruled out a quaternion group of order $8,$ the only possibility if $S$ is non-Abelian of order $8$ is that it is dihedral (and we already know that if $|S| = 8,$ it is non-Abelian).

By the way, you can't rule out all non-Abelian groups of order $12,$ since a dihedral group with $12$ elements does occur.

I don't know how much group theory you know, but I would finish as follows: let $F = F(G),$ the Fitting subgroup of $G.$ If $F$ contains an element of order $3,$ then $C_{G}(F) = F$ is cyclic of order $3$ or $6$ (for $F$ contains a central element of order $3$ in that case). Then $G/F$ has order dividing $2$. I won't give all details, but if $|G/F| = 2,$ that leads to $G$ dihedral with $6$ or $12$ elements ($G = F$ leads to $G$ cyclic of order $3$ or $6$ in this case). There remains the case that $F$ is a $2$-group. If $F$ is Abelian, we have $F$ cyclic of order $4$ or a Klein $4$-group. The first possibility leads to $G = F.$ The second also leads to $G = F,$ e.g. by Clifford's theorem: there can be no element of order $3$ in $G$ in that case. If $F$ is non-Abelian of order $8,$ then we must also have $G = F$, because a dihedral group of order $8$ has no automorphism of order $3.$

• Wait, why exactly is the centralizer of $x$ cyclic?
Jun 21, 2014 at 22:52 • Wait, but $Q_8$ is a subgroup of the multiplicative group of $\mathbb H$, but it's definitely not cyclic... Jun 22, 2014 at 0:18 • Sorry, that statement was not quite correct. I should have said that every finite Abelian subgroup of the multiplicative group of a division ring is cyclic. That is why I had to start with $x$ of order $3$ or $4,$ so that I knew that $C_{G}(x)$ was Abelian. Jun 22, 2014 at 5:33
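For completeness (this is my addition, not part of the thread): the largest group on the list really does occur, already over $\mathbb Z$. The symmetry group of the hexagonal lattice realizes $D_{12}$ inside $GL_2(\mathbb Z)$:

$$A=\begin{bmatrix}1&-1\\1&0\end{bmatrix},\qquad B=\begin{bmatrix}0&1\\1&0\end{bmatrix}.$$

Here $A$ has characteristic polynomial $x^2-x+1$, so $A^2=A-I$, hence $A^3=-I$ and $A$ has order $6$; $B^2=I$; and a direct computation gives $BAB^{-1}=A^{-1}$. Therefore $\langle A,B\rangle$ is dihedral of order $12$.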
# Convergence of $\sum _{n=1}^{\infty }\left\{ 1-n\log \dfrac {2n+1} {2n-1}\right\}$

I am investigating the convergence of $$\sum _{n=1}^{\infty }\left\{ 1-n\log \dfrac {2n+1} {2n-1}\right\}.$$ Now, as per Cauchy's root test for absolute convergence: if $\overline {\lim _{n\rightarrow \infty }}\left| u_{n}\right| ^{1/n} < 1$, then $\sum _{n=1}^{\infty }u_{n}$ converges absolutely, and obviously if $\overline {\lim _{n\rightarrow \infty }}\left| u_{n}\right| ^{1/n} > 1$, then $\sum _{n=1}^{\infty }u_{n}$ does not converge. I observed $$\overline {\lim _{n\rightarrow \infty }}\left| \log \left( \dfrac {2n+1} {2n-1}\right) ^{-n}\right| = \overline {\lim _{n\rightarrow \infty }}\left| \log \left( 1+\dfrac {1} {n-\dfrac {1} {2}}\right) ^{-n}\right|$$ Could I take the negative power of $n$ outside the absolute-value brackets here? I guess even if I could establish that the log part converges, that would only show that the overall series diverges, right? Is that the correct result? Any alternative lines of attack on this problem would be much appreciated.

I don't think you used the convergence test correctly. – Beni Bogosel Mar 7 '12 at 19:12
Are you saying it is not possible to take the negative power of $n$ outside the absolute brackets here? Because if that were allowed, I'd say that the expression should be less than one as $n$ goes to infinity. – Hardy Mar 7 '12 at 19:20
The limit you are trying to calculate in the last part of your question is in fact equal to $1$. And you can take the negative power of $n$ in front of the logarithm, but you can't take the negative power of $n$ out of the modulus. – Beni Bogosel Mar 7 '12 at 19:27
The power series of $\log$ may help. – quartz Mar 7 '12 at 19:33
Try $$1-n \log \left( \frac{2n+1}{2n-1} \right) = 1 - n \log \left( \frac{1+\frac1{2n}}{1-\frac1{2n}} \right) = 1 - 2n \left( \frac1{2n} + \frac1{3(2n)^3} + \frac1{5(2n)^5} + \cdots \right) =\\ - \sum_{k=1}^{\infty} \frac1{(2k+1)(2n)^{2k}}.$$ After this you need to argue the swapping of the infinite sums and hence show it converges. – user17762 Mar 7 '12 at 19:41

You could prove the convergence of the series using a comparison criterion. For example, calculate the limit of $|a_n|/\frac{1}{n^2}$. You should then calculate $$\lim_{n \to \infty}\left| n^2-n^3\log \left(\frac{2n+1}{2n-1} \right)\right|.$$ For this calculation, the simplest method I could think of was expanding in Taylor series: $$\log(x+1)-\log(x-1)=\log \left(1+\frac{1}{x}\right)-\log\left(1-\frac{1}{x}\right)=2\sum_{k \text{ odd}}\frac{1}{kx^k}$$ Then you have to calculate $$\lim_{n \to \infty} \left|n^2-n^3\cdot 2\left(\frac{1}{2n}+\frac{1}{3(2n)^3}+\frac{1}{5(2n)^5}+\cdots \right)\right|=\frac{1}{12}$$ Therefore $\sum a_n$ is absolutely convergent, and in particular convergent.

Nicely done, though I didn't follow how you got $1/12$ as the result. – Hardy Mar 7 '12 at 19:57
Also, as per comparison of series, don't we need to find a series which forms an upper bound to our series and show that, since that converges absolutely, ours would too? In the answer you posted I am unsure what series that would be, as we only seem to change our series and operate on that. – Hardy Mar 7 '12 at 20:04
There are three comparison criteria: two with inequalities, and the last one, which says that if $a_n/b_n$ tends to a positive, finite limit, then the series $\sum a_n, \sum b_n$ have the same nature. The comparison criteria with inequalities are obviously equivalent to the one with the limit. – Beni Bogosel Mar 7 '12 at 21:24
As for the limit in my answer, try to open the bracket and see which terms remain.
It's quite easy. – Beni Bogosel Mar 7 '12 at 21:25
+1. Nicely done. This also quantifies how the series converges, like the series $\sum \frac1{n^2}$. – user17762 Mar 7 '12 at 22:15

$1-n\log\left(\frac{2n+1}{2n-1}\right)= 1-n\log\left(1+\frac{1}{n-\frac{1}{2}}\right)\sim 1-\frac{n}{n-\frac{1}{2}}+\frac{n}{2\left(n-\frac{1}{2}\right)^{2}}-\frac{n}{3\left(n-\frac{1}{2}\right)^{3}}=\frac{1}{(2n-1)^{2}}-\frac{n}{3\left(n-\frac{1}{2}\right)^{3}}\sim \frac{1}{4n^{2}}-\frac{1}{3n^{2}}=-\frac{1}{12n^{2}}$ (note that the cubic term of the logarithm contributes at the same order and must be kept), so $|a_n|\sim\frac{1}{12n^{2}}$ and the series converges.

HINT: Relate $\lim_{n \to \infty}\left| n^2-n^3\log \left(\frac{2n+1}{2n-1} \right)\right|$ to $\lim_{n \to \infty} \left(n-n^2\log \frac{n+1}{n}\right)$, whose limit is $\frac{1}{2}$ and which can be proved almost elementarily by replacing $n$ by $\frac{1}{x}$ and letting $x \rightarrow 0$. The second limit comes from a high-school book, 11th grade, and is often met during courses.

This question has an interesting aspect that merits a comment, which is that we can compute a closed form for the sum using Mellin transforms and harmonic sums. Introduce the sum $S(x)$ given by $$S(x) = \sum_{n\ge 1} \left(1- xn \log\frac{2xn+1}{2xn-1}\right)$$ so that we are interested in $S(1).$ As mentioned before, the sum term is harmonic and may be evaluated by inverting its Mellin transform. Recall the harmonic sum identity $$\mathfrak{M}\left(\sum_{k\ge 1} \lambda_k g(\mu_k x);s\right) = \left(\sum_{k\ge 1} \frac{\lambda_k}{\mu_k^s} \right) g^*(s)$$ where $g^*(s)$ is the Mellin transform of $g(x).$ In the present case we have $$\lambda_k = 1, \quad \mu_k = k \quad \text{and} \quad g(x) = 1-x\log\frac{2x+1}{2x-1} = 1-x\log\left(1+\frac{2}{2x-1}\right).$$ The abscissa of convergence of $$\sum_{k\ge 1} \frac{\lambda_k}{\mu_k^s} = \zeta(s)$$ is $\Re(s)>1.$ Now to find the Mellin transform $g^*(s)$ of $g(x)$, the fundamental strip being $\langle 0,2\rangle$ (which contains the abscissa of convergence of the Dirichlet series sum term), we first use integration by parts to get $$\int_0^\infty \left(-x\log\left(1+\frac{2}{2x-1}\right)\right) x^{s-1} dx \\= \left[\left(-x\log\left(1+\frac{2}{2x-1}\right)\right) \frac{x^{s+1}}{s+1}\right]_0^\infty - \int_0^\infty \frac{4}{4x^2-1} \frac{x^{s+1}}{s+1} dx = - \int_0^\infty \frac{4}{4x^2-1} \frac{x^{s+1}}{s+1} dx,$$ with the fundamental strip being $\langle -1, 0\rangle$. It becomes evident that we require the following Mellin transform: $$h^*(s) = \int_0^\infty h(x) x^{s-1} dx$$ where $$h(x) = \frac{4}{4x^2-1}.$$ This transform integral is not, strictly speaking, convergent, but we can compute its principal value by using a semicircular contour in the upper half plane that is traversed clockwise and picks up half the residues at the two poles at $\pm 1/2.$ This gives $$h^*(s) (1-e^{i\pi s}) = \frac{1}{2} \times 2 \pi i \left(\mathrm{Res}(h(x) x^{s-1}; x=1/2) + \mathrm{Res}(h(x) x^{s-1}; x=-1/2)\right)$$ which gives $$h^*(s) (1-e^{i\pi s}) = \frac{1}{2} \times 2 \pi i ((1/2)^{s-1}-(-1/2)^{s-1}) = 2^{-s} \times 2 \pi i (1+(-1)^s).$$ This yields $$h^*(s) = 2^{-s} \times 2 \pi i \frac{1+e^{i\pi s}}{1-e^{i\pi s}} = 2^{-s} \times 2 \pi i \frac{e^{-i\pi s/2}+e^{i\pi s/2}}{e^{-i\pi s/2}-e^{i\pi s/2}} = -\frac{2}{2^s} \pi \cot(\pi s/2).$$ Returning to $g^*(s)$ we have $$g^*(s) = \frac{2}{2^{s+2}} \frac{1}{s+1} \pi \cot(\pi (s+2)/2) = \frac{1}{2^{s+1}} \frac{1}{s+1} \pi \cot(\pi s / 2).$$ Therefore the Mellin transform $Q(s)$ of $S(x)$ is given by $$\frac{1}{2^{s+1}} \frac{1}{s+1} \pi \cot(\pi s / 2)\zeta(s).$$ The Mellin inversion integral here is $$\frac{1}{2\pi i} \int_{3/2-i\infty}^{3/2+i\infty} Q(s)/x^s\, ds$$ which we evaluate by shifting it to the right for an expansion about infinity.
We must sum the residues at the poles at the positive even integers of $Q(s)/x^s$ which are $$\mathrm{Res}(Q(s)/x^s; s=2q) = \frac{1}{2^{2q+1}} \frac{1}{2q+1} \times 2 \times \zeta(2q) \frac{1}{x^{2q}} \\ =\frac{1}{2^{2q+1}} \frac{1}{2q+1} \times 2 \times \frac{(-1)^{q+1} B_{2q} (2\pi)^{2q}}{2\times (2q)!} \frac{1}{x^{2q}} = \frac{1}{2} \times \frac{(-1)^{q+1} B_{2q} \pi^{2q}}{(2q+1)!} \frac{1}{x^{2q}}.$$ We thus require the sum $$- \frac{1}{2} \sum_{q\ge 1} \frac{i^{2q} B_{2q} \pi^{2q}}{(2q+1)!} \frac{1}{x^{2q}}.$$ The exponential generating function of the even Bernoulli numbers is $$-1 + \frac{1}{2} t + \frac{t}{e^t-1} = \sum_{q\ge 1} B_{2q} \frac{t^{2q}}{(2q)!}.$$ Integrate this to obtain $$-t - \frac{t^2}{4} + t\log(1-e^t) + \mathrm{Li}_2(e^t).$$ The constant that appeared during the integration was the well-known zeta function value $\mathrm{Li}_2(1) = \zeta(2) = \pi^2/6$ so that we finally have $$\sum_{q\ge 1} B_{2q} \frac{t^{2q}}{(2q+1)!} = \frac{1}{t} \left(-\frac{\pi^2}{6}-t - \frac{t^2}{4} + t\log(1-e^t) + \mathrm{Li}_2(e^t)\right).$$ To conclude use another well known zeta function value which is $\mathrm{Li}_2(-1) = -\pi^2/12$ (derived from $\mathrm{Li}_2(1)$) and put $t=i\pi /x = i\pi$ to obtain that (we lose the minus sign because we are shifting to the right) $$S(1) = \frac{1}{2} \frac{1}{i\pi} \left(-\frac{\pi^2}{6}-i\pi + \frac{\pi^2}{4} + i\pi\log 2 + \mathrm{Li}_2(-1)\right) \\ = \frac{1}{2} \frac{1}{i\pi} \left(-i\pi + i\pi \log 2 \right) = \frac{1}{2} (\log 2 - 1).$$ Remark. The integral of the generating function of the Bernoulli numbers is easily verified by differentiation: $$\left(-\frac{t^2}{2} + t\log(1-e^t) + \mathrm{Li}_2(e^t)\right)' = -t + \frac{t}{1-e^t} (-e^t) + \log(1-e^t) + \left(\sum_{n\ge 1} \frac{e^{nt}}{n^2}\right)' \\ = \frac{-t+te^t}{1-e^t} - \frac{te^t}{1-e^t} + \log(1-e^t) + \sum_{n\ge 1} \frac{e^{tn}}{n} \\ = \frac{-t}{1-e^t} + \log(1-e^t) - \log(1-e^t) = \frac{t}{e^t-1}.$$ -
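To spell out the asymptotic behind the comparison answers (a worked step added for clarity; it is the same expansion the accepted answer uses): since $\log\frac{2n+1}{2n-1} = 2\operatorname{artanh}\frac{1}{2n}$,

$$\log\frac{2n+1}{2n-1} = \frac{1}{n} + \frac{1}{12n^3} + O\!\left(\frac{1}{n^5}\right) \quad\Longrightarrow\quad 1-n\log\frac{2n+1}{2n-1} = -\frac{1}{12n^2} + O\!\left(\frac{1}{n^4}\right).$$

Hence $n^2|a_n| \to \frac{1}{12}$, every term of the series is negative, and comparison with $\sum \frac{1}{n^2}$ gives absolute convergence. This is consistent with the closed form: the partial sums indeed approach $\frac{1}{2}(\log 2 - 1) \approx -0.1534$.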
Hugh MacColl. # Symbolic logic and its applications online . (page 10 of 11) Online LibraryHugh MacCollSymbolic logic and its applications → online text (page 10 of 11) Font size 53_67 Y 9~ = t r 1 (477-469) p + t r 2 (469-477) p ,forQ p = (63Q) p = x 1 e + aw = x 1 (see § 11, Formula? 22, 23). Thus we get AB = ,i' a - 12 = .r .i- From tne aata AB there- fore we infer that x lies between x a and ,i\ ; that is. 53 between positive infinity and — . 53 ,_4 greater than — - or 7-. ,67 53V In other words, x is 122 SYMBOLIC LOGIC [§§ 139, 140 Now, here evidently the formula of § 135 was not wanted ; for it is evident by mere inspection that u\ is greater than ,r 2 , so that a\ being therefore the nearest inferior limit, the limit ,r 2 is superseded and may be left out of account. In fact A implies B, so that we get AB = A = ,r aU . 140. Given that 7x — 53 is negative and 07 — 9* positive ; required the limits of x. Let A denote the first datum, and B the second. We get— A = (7£-53) N = (x- 53 x - — ■ x, 53 x 1 - 7 67 x 2 - 9 2'./3- Hence, we get AxS = Xy^ £#?2'. p ~~ ^'l'. 2' . 0. By Formula 2 of § 1 3 5 we get — 53 67\ N , / 7 53 = %(477 - 4G9) N +,%(469 - 477) N = Xyij + x 2 ,e = x% (see §11, Formulae 22, 23). This shows that the nearer superior limit x 2 super- sedes the more distant superior limit x\ ; so that we get A-b — ,Vy 2 '. — ®%. 07 Thus x lies between the superior limit x 2 (or — ] and negative infinity. 141] CALCULUS OF LIMITS 123 CHAPTER XVI 141. We will now consider the limits of two variables, and first with only numerical constants (see § 156). Suppose we have given that the variables x and y are both positive, while the expressions 2y — 3# — 2 and 3^ + 2^ — 6 are both negative; and that from these data we are required to find the limits of y and x in the order y, x. Table op Limits. Let A denote our whole data. We have A = y r x p (2y - 3x - 2) N (3?/ + 2x -6) N . Beginning with the first bracket factor, we get* (2y - 3x -2Y = (y-^x-lJ = (y- ? A ) N = Vv Then, taking the second bracket factor, we get o 2/i = ^ + l _ 6 2 y 2 =2 - X o 2 X 2 — o a? 3 ~ o (3?/ + 2x - 6) N = ( y + - x - 2 I = Also 2/V = 2/ a '. a- a '.o ( seG §§ 137 ' 139), so that A = y a '. o»a'. <#i#2' = Va.: v. 2'. o ;v v. o = Vi>. 2'. a' «'. o ; for the nearer superior limits y 1 and y 2 supersede the more distant limit y a . Applying Formula (2) of § 135 to the statement y v v , we get /13 Vv. * = vvivi - VzY + y-Ay-2 - ?a) n = yA ^ - 1 + y 2 ^x-lj=y r (x-^ + !h\ x ~ 13 * The limits are registered in the table, one after another, as they are found, so that the table grows as the process proceeds. 124 SYMBOLIC LOGIC [§141 Substituting this alternative for y v 2 . in the expression for A, we get A = (y r x v + y^\)y^. o = (yv. 0% + feiftK-.o = VV. V C a. V . + y<H. ti C a'. 1. == 2/l'. 0^1'. ' 2/2'. O^a'. 1 '1 omitting in the first term the superior limit x a because it is superseded by the nearer superior limit x x ; and omit- ting in the second term the limit x , because it is super- seded by the nearer limit x v The next step is to apply Formula (3) of § 135 to the ^-factors y vo and y z . We get yv. = Vv. 0(2/1 - y<>Y = yv. 0(2/1)* = yv. d -® + l = yv.o(3x + 2) 1 ' = y v Jx + ^ J = yi'. (* - x ^f — 2/1'. 0^2 ! y%. = 2/2'. 0(2/2 - ?7o) P = 2/2'. o(2/ 2 ) P = 2/2'. o( 2 - -x J = y 2 , (6 - 2^ = ^,0(3 -xf = y^ Q (x- 3) N = 2/2'.0<%- Substituting these equivalents of ?y r and ?/ 2 . in A, we get A = }Jx. cftv. 2.0 "J~ 2/2'. O^a'. 3'. 1 = 2/l'. 0^1'. 1 2/2'. O'^V. 
1 > for evidently x is a nearer inferior limit than ,r 2 , and therefore supersedes ,v 2 ; while x 3 is a nearer superior limit than x a (which denotes positive infinity), and there- fore supersedes x a . We have now done with the ?/-state- ments, and it only remains to apply Formula (3) of § 135 to the ^'-statements x vo and x si . It is evident, however, by mere inspection of the table, that this is needless, as it would introduce no new factor, nor discover any incon- sistency, since x x is evidently greater than x , that is, than zero, and x 3 is evidently greater than x x . The process therefore here terminates, and the limits are fully deter- §141] CALCULUS OF LIMITS 1 25 mined. We have found that either x varies between x x and zero, and y between y 1 and zero ; or else x varies between x B and x v and y between y 2 and zero. The figure below will illustrate the preceding process and table of reference. The symbol x denotes the distance of any point P (taken at random out of those in the shaded figure) from the line x , and the symbol y denotes the distance of the point P from the line y . The first equivalent of the data A is the statement x z x o x r llv 2- o x o> which asserts that y 1 and y 2 are superior limits of y, that y (or zero) is an inferior limit of y, and that x (or zero) is an inferior limit of x. It is evident that this compound statement A is true for every point P in the shaded portion of the figure, and that it is not true for any point outside the shaded portion. The final equivalent of the data A is the alternative y v% x r _ + Vv. o tl V. i> the first term of which is true for every point P in the quadrilateral contained by the lines y v y , x v x Q ; and the second term of which is true for the triangle contained by the lines y 2 , y 0) x v 126 SYMBOLIC LOGIC [§142 Table of Limits. 142. Given tliat y 2 — 4./.' is negative and y + 2x — 4 positive ; required the limits of y and x. Let A denote our data. We get A = (v/-4 t r)-\y + 2,,;-4) p = (7/ 2 -4 tt -) N (y-yi)"; tf - ± x y = {(y-2 JxXy + 2 Jx)Y = {y-2 s /xr(y+2 s fxy for (y — 2 s/^YiV + 2 x/^) N * s impossible. We therefore get a = 2/2'. 3(2/ - 2/i) p = 2/2'. s2/i = y-2.3. i By Formula (1) of § 135 we get 2/ 3 . i = 2/3(2/3 - Vif + y/yi - 7hY = y 3 (2tf - 2 ^ - 4) p + Vl {2x -2jx- 4)* = y 3 (# - x/« - 2 ) p + y^a? - s/x - 2 ) N (see §§ 126, 127) slx-l ^ X ~~l) ~i* Y -"((•"-D-lM^-i)-!}' = ? / 3 (.j-4) p +2/i(*-4) n = y 3 (# ~ «i) P + ^ - ^'i) N = 2/3^i + 2/r*r- Therefore A = 2/2'.3^1+//2'.l^l'- We now apply Formula (3) of § 135, thus !h. 3 = 2/2'. 3(2/2 - VsY = Ik. s( 2 */« + 2 xA')'' = y*. 3 e 2/2'. 1 = 2/2'. 1(2/2 - 2/i) r = 2/2'. i(2# + 2 V'/' - 4) P = yr. i(* + «/* - -)" = V*. i{( V* + 2J - (2) } = 2/2'. i(« - 1 ) r = h: M' ~ x -zY = 2/2'. i#2- §§ 142, 143] CALCULUS OF LIMITS 127 Thus the application of Formula (3) of § 135 to y 2 , 3 introduces no new factor, but its application to the other compound statement y 2 , 1 introduces the new statement x 2 , and at the same time the new limit x 2 . Hence we finally get (since Form 3 of § 135 applied to x a . a and Xy 2 makes no change) A^y.,.3^+^1%.2 (see §§137, 138). This result informs us that " either x lies between x a (positive infinity) and x ., and y between the superior oc jc z limit y 2 and the inferior limit y 3 ; or else x lies be- tween £&, and x 2 , and y between y 2 and y v The above figure will show the position of the limits. 
With this geometrical interpretation of the symbols x, y, &c., all the points marked will satisfy the conditions expressed by the statement A, and so will all other points bounded by the upper and lower branches of the para- bole, with the exception of the blank area cut off by the line y v 143. Given that y 2 — ±x is negative, and y + 2x — 4 also negative ; required the limits of y and x. Here the required limits (though they may be found 128 SYMBOLIC LOGIC [§§ 143-145 independently as before) may be obtained at once from the diagram in § 142. The only difference between this problem and that of § 142 is that in the present case y + 2x — 4 is negative, instead of being, as before, positive. Since y 2 — 4a; is, as before, negative, y. 2 will be, as before, a superior limit, and y 3 an inferior limit of y ; so that, as before, all the points will be restricted within the two branches of the parabola. But since y + 2x — 4 has now changed sign, all the admissible points, while still keeping between the two branches of the parabola, will cross the line y v The result will be that the only admissible points will now be restricted to the blank portion of the parabola cut off by the line y v instead of being, as before, restricted to the shaded portion within the two branches and extending indefinitely in the positive direction towards positive infinity. A glance at the diagram of § 142 will show that the required result now is 1J-2'. 3'%. ' V\'. 3^1'. 2> with, of course, the same table of limits. CHAPTER XVII A 144. The symbol — , when the numerator and denomi- nator denote statements, expresses the chance that A is true on the assumption that B is true; B being some state- ment compatible with the data of our problem, but not necessarily implied by the data. A 145. The symbol denotes the chance that A is true e when nothing is assumed but the data of our 'problem. This is what is usually meant when we simply speak of the " chance of A." § 146, 147] CALCULUS OF LIMITS 129 146. The symbol^—, or its synonym S(A, B), denotes B A A — — — ; and this is called the dependence* of the statement A upon the statement B. It indicates the increase, or (when negative) the decrease, undergone by the absolute chance — when the supposition B is added to our data. The symbol <5° D , or its synonym S°(A, B), asserts that the B dependence of A upon B is zero. In this case the state- E E E Fig. 1. Fig. 2. Fig. 3. ment A is said to be independent oj the statement B ; which implies, as will be seen further on (see S 149), that B is independent of A. 147. The symbols a, b, c, &c. (small italics) respectively ABC represent the chances—, -, — , &c. (see S 145); and the € € € symbols a! ', I/, c ', &c, respectively denote the chances — , — , — , &c, so that we get e e e 1 = n + a' = b + b' = c + c' = &c. * Obscure ideas about ' dependence ' and ' independence ' in pro- bability have led some writers (including Boole) into serious errors. The definitions here proposed are, I believe, original. 130 SYMBOLIC LOGIC [§148 148. The diagrams on p. 129 will illustrate the pre- ceding conventions and definitions. Let the symbols A, B assert respectively as propositions that a point P, taken at random out of the total number of points in the circle E, will be in the circle A, that it will be in the circle B. Then AB will assert that P will be in both circles A and B ; AB' will assert that P will be in the circle A, but not in the circle B ; and similarly for the statements A'B and A'B'. In Fig. 1 we have A_ _ 3 . 
A'_ ,_10 7 _ft ~T3' 7~~ a ~1S AB_ 1 e "l3 : AB' 2 ' ~T~13 In Fig. 2 we have A_ _ 3 . A'_ ,_ <J . 7~ ~12' T~ ~12' AB 1 e "li 1 AB'_ 2 ~€ 12' In Fig. 3 we have A_ _ 3 . A'_ ,_ 8 . e " ~fl ' T" ~ii ' AB 1 t ~ n ; AB'_ 2 ~€ li' It is evident also that 111 ™. - 1/A „ x A A AB A 1 3 1 Fig. 1, d(A, B)=- — - = -_ = _ — _= +_ ; b K J B e B e 4 13 52' in .big. 2, d(A, B)= — — - = — _ = =0; ° ' y ' ' B e B e 4 12 F „ }/ . B A A AB A 1 3 1 in Fig. 3. d(A, B) = - — _= — — = - = - S B e B e 4 11 44 Similarly, we get in Fig. 1, J(B,A)=+1; in Fig. 2, 5(B, A)=0; in Fig. 3, \$(B,A)=-± §§ 149, 150] CALCULUS OF LIMITS 133 149. The following formulae are easily verified :— <•>£-*-?■£(•> *-}& The second of the above eight formulae shows that if any statement A is independent of another statement B, then B is independent of A ; for, by Formula (2), it is clear that <S°(A, B) implies S°(B, A). To the preceding eight formulae may be added the following : — AB = A B = B A AB_A B _B A e " e *A e"B ; (10) Q^~Q'AQ~QBQ ; (11)^± B = A + B _ AB . (12) A + B = A + B _^? 150. Let A be any statement, and let x be any positive proper fraction; then A x is short for the statement A — =%), which asserts that the chance of A is x. \ € / AB \ Similarly, (AB)* means — = x); and so on. This convention gives us the following formulae, in which A B a and b (as before) are short for — and -. e e (1) A^:^=^- A V (2) A-B^AB/^A + B)-*-; (3) (AB) x (A + By>:(x + y = a + b); (4) S°(A, B) = (AB) f '»; (5) (AB)" = (A + B) a+& ; 132 SYMBOLIC LOGIC [§§150,151 < 6 >(s4)=(s=f)=*( A - B >; „ /A B\ /A \ (7) [B = A) : \B = ! + {a = b):(AB)V + (a = h) - It is easy to prove all these formulae, of which the last may be proved as follows : A_B\ /A_Z> A\ /K_b A\° (A/ 6\)° B~Ay' ; \B~a'B/ : \B a'B/ : \ B\ X ~ a/ J \ A V /A \ : jjj(a-&)| :( B = 0J + («-^)°:(ABr+(a = &). The following chapter requires some knowledge of the integral calculus. CHAPTER XVIII 151. In applying the Calculus of Limits to multiple integrals, it will be convenient to use the following notation, which I employed for the first time rather more than twenty years ago in a paper on the " Limits of Multiple Integrals " in the Proc. of the Math. Society. The symbols ^>{x)x m!n and x m - n (p(x), which differ in the relative positions of <p(x) and x m >. n , differ also in meaning. The symbol <J>(%)% m >. n is short for the integra- tion (p(x)dx, taken between the superior limit x m and the inferior limit x n \ an integration which would be ex commonly expressed either in the form ' m dx(p(x) or fX ' ™<p(x)dx. The symbol x m . n <p(x), with the symbol ' v m\n to the left, is short for (j>(x m )— <p(% n )- For example, suppose we have j <p(x)dx = ^(x). Then, by substitution of notation, we get \ l m< p{ , ') ( ^ l ' = ( p{x)x m , n J x n = #m.»' v K#) = ^GO - ^G''») I so that we can thus entirely §§ 151-153] CALCULUS OF LIMITS 133 dispense with the symbol of integration, /, as in the following concrete example. Let it be required to evaluate the integral C z C'¥ C' 1 ' Table op Limits. I "'-I dy I dx, J«a JVi J-'o z a = c Vl = X «! = « V2 = h h'o =0 the limits being as in the given table. The full process is as follows, the order of variation being z, y, x. Integral z r . 2 y v . &. . = (z 1 - z 2 )y v . 2 x r . = (// - c)y v . 9 x v , = ?/r . 2 (k 2 - ^/>% . o = { (hA -cy-d- (hvl ~ cy 2 ) } x v , = { (I.* 2 - «b) - (W - cb) } x v . = (h^ 2 - ex - \tf + bc)x r . = #i\ o(^^ 3 — ikr 2 — iH> 2 # + &«') = £a 3 — lea 2 — \b % ci + bca. 152. 
The following formulae of integration are self- evident : — ( 1 ) *W . n = - %n' . m ; (2) #«>*V . „ = ~ <£OX'. m J (3) *W. „<£(■») = -X n '.rn<t>{z)\ (4) ^' m <. n + X n , mt = X ni . r J ( 5 ) #(«)(#»' . n + *»» . r) = 0(^>m' . r I ( 6 ) fe . n + #„' . r )<£<>) = a? m . . ,#*') ; \ ' / //?»' . n' ' r' . s i/n' . rnfir' . s 2/m' . n^s' . >• — 2/«' . mP^s' . r 5 '. / '' m' . 71 ~r" "■ V . s "m' . s • " J r > . n ' ( 9 ) (x m . . „ + x r , . s )(p(x) = (x m . . , + ^ , n )(p(x) ; ( 1 0) <p(x)(x m > . n + *V . s) = 0(#)(#m< . . + <?V. „)• 153. As already stated, the symbol , when A and B are propositions, denotes the chance that A is true on the assumption that B is true. Now, let x and y be any numbers or ratios. The symbol - means - x — ; and 3/B y B when either of these two numbers is missing, Ave may suppose the number 1 understood. r PU xA x A A 1 A Ihus, - means - x — ; and — means - x — . B IB xB x B V34> SYMBOLIC LOGIC [§§ 154, 155 x x =l "l =1 z=A •' 2 =1-2/ y + z — l y 8 =l -a ! e = A: = aj i'.o^i'.o 2! i'. D 154. The symbol IntA(x, y, z) denotes the integral Idxldyjdz, subject to the restrictions of the statement A, the order of variation being x, y, z. The symbol hit A, or sometimes simply A, may be used as an abbreviation for Int A(x, y, z) when the context leaves no doubt as to the meaning of the abbreviation. 155. Each of the Table op Limits. variables x, y, z is taken at random be- tween 1 and ; what is the chance that the . . z( 1 — x — y) traction — 1-y-yz will also be between 1 and ? Let the symbol Q, as a proposition, assert that the value of the fraction in question will lie between 1 and ; and let A denote our data .1',,?/,'^%. We have to find -, Avhich here =- 1 • (W 1 1 -0 A (see § 145). Also, let N denote the numerator z(l — x — y), and D the denominator 1 — y — yz of the fraction in ques- tion ; while, this time, to avoid ambiguity, the letter n will denote negative, and p positive (small italics instead of, as before, capitals). We get Q = N^D p (N - J)) n + N n D'\N - Bf. Taking the order of variation x, y, z, as in the table, we get, since z is given positive, W={l-x-yY = {x-{l-y)Y=Xt N" = ( 1 - x - yf ={.v-(l-y)Y = ,/- 2 E> p = (l -y-yzf= \y{\ +z)-l \-» = y 2 , D ,l = (l-?/-F) n =<?/(l+^)-l} P = ?/ 2 (N-T>r = (z-z,v + y-iy = (z,r- // -:+iy> y + z- 1 = ■':, (N - Vy = (z - zx + y- l) p = (z,c -y-z+l) n = x 3 § 155] CALCULUS OF LIMITS 135 Substituting these results in our expression for Q, we shall have Multiplying by the given certainty x v -0 (see table), we get X V. oH == lV i\ 1'. 3. 0^2' + -'V. r. 2.02/-2- Applying Formulae (1) and (2) of § 135, we get (see § 137) #3. = X l X Z - '<0 )" + *oK - ^ = ^3 + •% t% _ r = x s (3C 3 - xj* + ar^ - x 3 ) n = X^e + x v n = x s , X 2. = ff«te f - ^ + • ?, o('' - V = X 2* + ^ = «* Substituting these results in our expression for x v Q,, we get #r. oQ = %(%2/3 + A W3')y2' + •'3' . 2^/2 == X 2' . 31/2' . 3 "r ^2' . 0^3' . 2' "•" #3 '. 2^2' We now apply Formula (3) of § 135 to the statements "*2\ 3' ^2'.o' ''3'. 2' **nUS '2' . 3 = ' V -l' . 3(^2 X i) = < r -2' . \$%' *2' .0 == ^2' . Q\ X 2 ^0) = ''V . e "^3' . 2 = ■% . 2V^3 — ,?, 2 ) = X S . iVl- This shows that the application of § 135, Form 3, intro- duces no new statement in y ; so that we have finished with the limits of x, and must now apply the formulas of §135 to find the limits of y. Multiplying the expres- sion found for tr ro Q by the datum y v Q , we get #i'.o 7 /i'<r* ==, ''2 . s/Ar. r. 3.0 + '''2 . 0//3 . 2'.r. * x 3. 2?/i'.2. 
o- By applying the formulae of § 135, or by simple inspec- tion of the table, we get y% v = y.y ; y 3 . = y z ; y\$ . 2 < . r = Vz' '■> y. 2 = y 20 and substituting these results in the right-hand side of the last equivalence, we get <'V.2yi\oQ = ''V.3y2'.3 + <>2\oy3\o + <'V.2/'r.2 136 SYMBOLIC LOGIC [§§ 155, 156 The application of § 135, Form 3, to the y-statements will introduce no fresh statements in z, nor destroy any term by showing that it contains an impossible factor tj. We have therefore found the nearest limits of y ; and it only remains to find the limits of z. Multiplying the last expression by the datum z ro we get QA = Q,?v _ Q y v . oZ r . = Ov . 3y 2 . . 3 + dfe . o2/ 3 - . o + % . s#r . 2>r . o- The application of § 135, Form 3, to the factor z r _ will effect no change, since {z x — z ) p is a certainty. The pro- cess of finding the limits is therefore over ; and it only remains to evaluate the integrals. We get A Int A ^ = Int(/e.r. \$2 . 3 + x* . y s . o + %? . Hl\- . 2K' . for Int A = Int x v ,& v . f fs l .. =l. The integrations are easy, and the result is log 2 (Naperian base), which is 5 a little above -. 9 156. Given that a is positive, that n is a positive whole number, and that the variables x and y are each taken at random between a and — a, what is the chance that {(x + y'T - a} is negative and {{x + y) n+1 - «} positive ? Let A denote our data y Y _ 2 x r#2 (see Table); let Q de- note the proposition {(x+y) n -a} s , and let R denote the proposition {(.?• + y) n+1 - a} p , in which the exponent N denotes negative, and the exponent P positive. „ , , . QR , . , Int QRA We have to find the chance — ^-, which = . In this problem we have only to find the limits of integration (or variation) for the numerator from the compound statement QRA, the limits of integration for the denominator being already known, since A = ?/ 1 .2^1.2. 156] CALCULUS OF LIMITS Vtf Table of Limits y 1 = a y 2 = ~ a i y 3 = a n — X 1 y i = — a n — x i 7i =a n + 1 —x •J 5 1 y, = - « n+l - x x x = a ,r 2 = -a i x 3 = a n — a i x. = a — a n 4 1 x =a + a n + 1 i x e = a + a n 6 1 x 7 = a n+1 — a i x 8 = — a — a n + 1 i a? g = « — a"+i n+l " 3 = (±) B A = e = 3/r.2 a a'.2 We take the order of integration y, #. The limits being registered in the table, one after another, as they are found, the table grows as the process goes on. For convenience of reference the table should be on a separate slip of paper. We will first suppose n to be even. Then Q = j {pc + y) - a n \ \ (as + y) + a" I = y - » — a?)| K | S I !/ + <a« +.!■)■, =y t ;. = \t* R= \(as+y) — «"•"*" x ;■ — ) p [ / 1 ) t //o- Hence QR = 2/ 3U ?/ 5 = y 3 , 4i5 ; and multiplying by the datum 7/V.2' we § e ^ Q%r . 2 = y 3 - . r . 2 . 4 . 5 = Os" ''3 + yvXgXy&A . 6 + y^e) = (//3->'s + yv^3')(y^5 + y& x h) = y\$. ^ + y^.^s . a + yr. s% ; for by application of the formulae of § 135, ,r 4 5 = ,r 5 138 SYMBOLIC LOGIC [§156 #5 . 3 = «'■ s ; <'V. 5 = 1 ( an impossibili ty) ; and Xg . 5 - = .r 3 , For when a> 1 we have ( 2a — a n ] , and when ct< 1 we have ( a"+i — a Tl ] ; so that x 6 — a; 3 is always positive. We must now apply § 135, Form 3, to the statements in y. We get y s . 2 = y&. &#\ yz>. 5 z =yv.5 a i> Vv.^ — yv.^n- Substituting these results, we get Q%1'. 2 = VS. 2<?6\ 5 + y&. 5%. 3«1 + y V . 5%. 7- Having found the limits of the variable y, we must apply the three formulae of § 135 to the statements in x. Multiplying by the datum x Vti , we get Q%V. g»l'. 2 = 3fa. 2'' V. 6' . 5. 2 + 2/ 3 '. B®1'. 5'. 3 . 2«1 + ?/l'. 5'?V. 1'. 7 . 
2 = 2/3'. 5^1'. 3 ft l + llv. b X Z'. V. 7 I for x x i 5 = »/ ; ajj/ 5 - = ajji ; x Zm 2 = '''3 ! #7. 2 = ,: ^7* We obtain these results immediately by simple in- spection of the Table of Limits, without having recourse to the formulae of § 13 5. Applying the formulae of § 135 to the statements in x which remain, we get x Vm v = ./v",, + x r a 2 , ; # r 3 = x Vi 3 a. 2 ; •%. 7 = iV 3'. n a \ I iV i' . 7 = '''v. 7 a 3- Substituting these values, we get Q% r . 2 Xy. 2 = QRA = // 3 - 5 X V . 3 «. ! . 2 + //r. 5 (#3'. 7«2 . 1 "1 #1'. ?"*. 3> == . ? /3'.5' ?, l'.3 a l "I" VM.ffiv.'fl r \ — (Vb'.S^V. 3 + yv.sfls'. 7/ (6 l J for ai.2 = «i = a 2.i3 an( i %.3 = > ? ( an impossibility). This is the final step in the process of finding the limits, and the result informs us that, when n is even, QRA is only possible when a x ( which =1) is an inferior limit of a. In other words, when n is even and a is not greater than 1, the chance of QR is zero. To find the chance when n is even and a is greater than 1, we have § L56] CALCULUS OF LIMITS 139 only to evaluate the integrals, employing the abbreviated notation of § 151. Thus Integral A = Int y v 2 ,?; r 2 = (y x — y 2 )^v. 2 — ( 2a)# r> 2 = x v , 2 {2ax) = 2cuc 1 — 2ax 2 = 4a 2 Integral QRA = y w 5 # r< 3 + y Vm 6 # 3 ,. 7 = (y-s - y^ v v. 3 + (Vi - y 5 )'%.7 = ( a Tl — a 1l + l W,_ 3 + ( a — ««+i+ ,i ] W 7 = ®V. 3 a?l ~ aH + 1 F + ' r 3'-7 *" ~ an+1 V + i^ = a n - a** 1 (^j - ^ 3 ) + ( a - a w +* )(a? 8 - x 7 ) + £(#?, - ^) = '( a n — a n + l Y 2a - £ a " - | a ™+ 1 QR = Int QRA _ Int QRA A 7w« A 4a 2 = — ( a" _ a ^+i Y 4a - a Tl - a"* 1 We have now to find the chance when % is odd. By the same process as before we get QR A = (y 3 , 5 ,/' r- 3 + y v . 5( %. 7 K + y & . &&. 2 «3- Here we have £wo inferior limits of a, namely, a x and a 3 , so that the process is not yet over. To separate the different possible cases, we must multiply the result obtained by the certainty (a 1 +a r )(a 3 + a\$), which here reduces to a x +a v 3 + %, since a x is greater than a y For shortness sake let M x denote the bracket co- efficient (or co-factor) of a x in the result already obtained Online LibraryHugh MacCollSymbolic logic and its applications → online text (page 10 of 11)
# Cryptic Five-words (-----|||||)

Inspired by Aggie Kidd's redux of the original "Four-words" puzzle by Prem, I decided to create a similar "Five-words" puzzle. Some clues are intentionally somewhat vague and/or misleading, to keep it from being too easy, but they should all make sense when the solution is determined.

My first's what ant calls a cricket, no doubt
My second is third when you really zoom out
My third will protect you from villain or foe
My fourth won't get verdant when it's on the go
My fifth's what you did with the game that you lost
My whole is a square with five words that are crossed

B E A S T
E A R T H
A R M O R
S T O N E
T H R E W

Explanation:

My first's what ant calls a cricket, no doubt
A cricket would be a beast to an ant.

My second is third when you really zoom out
The earth is the third planet if we zoom out on the solar system.

My third will protect you from villain or foe
Armor protects you from everyone.

My fourth won't get verdant when it's on the go
Verdant is green, moss is green, a rolling stone gathers no moss.

My fifth's what you did with the game that you lost
You threw the game.

• Looks like you got it! +1 – Rand al'Thor Jun 24 '15 at 17:36
• Excellent! I was actually thinking of FEAST for the first one, but your answer is just as good. – GentlePurpleRain Jun 24 '15 at 17:51
• Good job! Nice to know I had a good idea with a few of them; Stone was the one that I really wanted to use. – Kingrames Jun 24 '15 at 18:38
• Seems like there could be a better clue; the rest are wonderful. – dennisdeems Jun 24 '15 at 19:00
• Yes, @AggieKidd got what I was thinking. It did seem a little weak, but every other clue I could think of for "feast" seemed too obvious. I never thought of using "beast" (or "yeast" or "least", etc.) – GentlePurpleRain Jun 24 '15 at 19:16

Closest I can figure out is

G I A N T
I _ R _ H
A R M O R
N _ O _ E
T H R E W

But I am lost on 4, which should be a synonym for stone. Got no clue for 2.

• I put your answer into a fixed-width font for readability - hope you don't mind! – Rand al'Thor Jun 24 '15 at 17:35
• It might help to know that there probably aren't any words of the format I_R_H. Litscape is a useful tool. The fourth word could only be NOOSE. – Engineer Toast Jun 24 '15 at 17:38
• I had to start with stone and work my way back. – Aggie Kidd Jun 24 '15 at 17:40
• Without seeing your answer, I got the same - except I didn't even get stone :p – Alok Jun 24 '15 at 22:56

My first's what ant calls a cricket, no doubt
JUMPY (compared to an ant, a cricket does a lot of jumping).
My second is third when you really zoom out
My third will protect you from villain or foe
My fourth won't get verdant when it's on the go
Dynamic algorithms: new worst-case and instance-optimal bounds via new connections

KTH, School of Electrical Engineering and Computer Science (EECS), Theoretical Computer Science, TCS. ORCID iD: 0000-0003-3694-740X

2018 (English) Doctoral thesis, comprehensive summary (Other academic)

##### Abstract [en]

This thesis studies a series of questions about dynamic algorithms, which are algorithms for quickly maintaining some information about input data undergoing a sequence of updates. The first question asks *how small the update time for handling each update can be* for each dynamic problem. To obtain fast algorithms, several relaxations are often used, including allowing amortized update time, allowing randomization, or even assuming an oblivious adversary. Hence, the second question asks *whether these relaxations and assumptions can be removed* without sacrificing the speed. Some dynamic problems are successfully solved by fast dynamic algorithms without any relaxation. The guarantee of such algorithms, however, is for a worst-case scenario. This leads to the last question, which asks for *an algorithm whose cost is nearly optimal for every scenario*, namely an instance-optimal algorithm. This thesis shows new progress on all three questions.

For the first question, we give two frameworks for showing the inherent limitations of fast dynamic algorithms. First, we propose a conjecture called the Online Boolean Matrix-vector Multiplication Conjecture (OMv). Assuming this conjecture, we obtain new *tight* conditional lower bounds on update time for more than ten dynamic problems, even when algorithms are allowed to have large polynomial preprocessing time. Second, we establish the first analogue of "NP-completeness" for dynamic problems, and show that many natural problems are "NP-hard" in the dynamic setting. This hardness result is based on the hardness of all problems in a huge class that includes a number of natural and hard dynamic problems. All previous conditional lower bounds for dynamic problems are based on the hardness of specific problems/conjectures.

For the second question, we give an algorithm for maintaining a minimum spanning forest in an $n$-node graph undergoing edge insertions and deletions using $n^{o(1)}$ worst-case update time with high probability. This significantly improves the long-standing $O(\sqrt{n})$ bound by [Frederickson STOC'83; Eppstein, Galil, Italiano and Nissenzweig FOCS'92]. Previously, a spanning forest (possibly not minimum) could be maintained in polylogarithmic update time if either amortized update time is allowed or an oblivious adversary is assumed. Therefore, our work shows how to eliminate these relaxations without slowing down updates too much.

For the last question, we show two main contributions to the theory of instance-optimal dynamic algorithms. First, we use the forbidden submatrix theory from combinatorics to show that a binary search tree (BST) algorithm called *Greedy* has almost optimal cost when its input *avoids a pattern*. This is significant progress towards the Traversal Conjecture [Sleator and Tarjan JACM'85] and its generalization.
Second, we initiate the theory of instance optimality of heaps by showing a general transformation between BSTs and heaps and then transferring the rich analogous theory of BSTs to heaps. Via this connection, we discover a new heap, called the *smooth heap*, which is very simple to implement, yet inherits most guarantees from the BST literature on being instance-optimal on various kinds of inputs.

The common approach behind all our results is to make new connections between dynamic algorithms and other fields, including fine-grained and classical complexity theory, approximation algorithms for graph partitioning, local clustering algorithms, and forbidden submatrix theory.

##### Place, publisher, year, edition, pages
KTH Royal Institute of Technology, 2018. p. 51
##### Series
TRITA-EECS-AVL; 2018:51
##### National Category
Computer Sciences
##### Research subject
Computer Science
##### Identifiers
ISBN: 978-91-7729-865-6 (print); OAI: oai:DiVA.org:kth-232471; DiVA, id: diva2:1234277
##### Public defence
2018-08-27, F3, Kungl Tekniska högskolan, Lindstedtsvägen 26, Stockholm, 13:00 (English)
##### Note
QC 20180725. Available from: 2018-07-25. Created: 2018-07-24. Last updated: 2018-07-25. Bibliographically approved.

##### List of papers

1. Unifying and Strengthening Hardness for Dynamic Problems via the Online Matrix-Vector Multiplication Conjecture

2015 (English) In: STOC '15 Proceedings of the Forty-Seventh Annual ACM on Symposium on Theory of Computing, ACM Press, 2015, p. 21-30. Conference paper, Published paper (Refereed)

##### Abstract [en]

Consider the following Online Boolean Matrix-Vector Multiplication problem: We are given an $n \times n$ matrix $M$ and will receive $n$ column-vectors of size $n$, denoted by $v_1, \ldots, v_n$, one by one. After seeing each vector $v_i$, we have to output the product $Mv_i$ before we can see the next vector. A naive algorithm can solve this problem using $O(n^3)$ time in total, and its running time can be slightly improved to $O(n^3/\log^2 n)$ [Williams SODA'07]. We show that a conjecture that there is no truly subcubic ($O(n^{3-\varepsilon})$) time algorithm for this problem can be used to exhibit the underlying polynomial time hardness shared by many dynamic problems. For a number of problems, such as subgraph connectivity, Pagh's problem, d-failure connectivity, decremental single-source shortest paths, and decremental transitive closure, this conjecture implies tight hardness results. Thus, proving or disproving this conjecture will be very interesting, as it will either imply several tight unconditional lower bounds or break through a common barrier that blocks progress with these problems. This conjecture might also be considered as strong evidence against any further improvement for these problems, since refuting it will imply a major breakthrough for combinatorial Boolean matrix multiplication and other long-standing problems if the term "combinatorial algorithms" is interpreted as "Strassen-like algorithms" [Ballard et al. SPAA'11]. The conjecture also leads to hardness results for problems that were previously based on diverse problems and conjectures -- such as 3SUM, combinatorial Boolean matrix multiplication, triangle detection, and multiphase -- thus providing a uniform way to prove polynomial hardness results for dynamic algorithms; some of the new proofs are also simpler or even become trivial. The conjecture also leads to stronger and new, non-trivial, hardness results, e.g., for the fully-dynamic densest subgraph and diameter problems.
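For orientation, here is a minimal sketch of the naive algorithm against which the conjecture is stated (my own illustration, not part of the DiVA record; the function name `omv_naive` and the NumPy encoding are arbitrary choices):

```python
import numpy as np

def omv_naive(M, vectors):
    """Answer online Boolean matrix-vector products one query at a time.

    Each query costs O(n^2) Boolean operations, so n queries cost O(n^3)
    in total; the OMv conjecture asserts that no algorithm achieves
    O(n^(3 - eps)) total time for any constant eps > 0.
    """
    for v in vectors:
        # Boolean product: (Mv)_i = OR over k of (M[i, k] AND v[k]).
        yield (M & v).any(axis=1)

# Toy usage with a random 4 x 4 Boolean matrix and 4 query vectors.
rng = np.random.default_rng(0)
M = rng.integers(0, 2, size=(4, 4)).astype(bool)
for answer in omv_naive(M, rng.integers(0, 2, size=(4, 4)).astype(bool)):
    print(answer.astype(int))
```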
ACM Press, 2015
##### National Category
Computer Sciences
##### Identifiers
urn:nbn:se:kth:diva-165846 (URN); 10.1145/2746539.2746609 (DOI); 2-s2.0-84958762655 (Scopus ID)
##### Conference
STOC 2015: 47th Annual Symposium on the Theory of Computing, Portland, OR, June 15-17, 2015
##### Note
QC 20150811. Available from: 2015-04-29. Created: 2015-04-29. Last updated: 2018-07-24. Bibliographically approved.

2. Nondeterminism and Completeness for Dynamic Algorithms

##### National Category
Computer Sciences
##### Identifiers
urn:nbn:se:kth:diva-232470 (URN)
##### Note
QC 20180724. Available from: 2018-07-24. Created: 2018-07-24. Last updated: 2018-07-24. Bibliographically approved.

3. Dynamic Minimum Spanning Forest with Subpolynomial Worst-case Update Time

2017 (English) In: 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS), IEEE, 2017, p. 950-961. Conference paper, Published paper (Refereed)

##### Abstract [en]

We present a Las Vegas algorithm for dynamically maintaining a minimum spanning forest of an $n$-node graph undergoing edge insertions and deletions. Our algorithm guarantees an $O(n^{o(1)})$ worst-case update time with high probability. This significantly improves the two recent Las Vegas algorithms by Wulff-Nilsen [2] with update time $O(n^{0.5-\epsilon})$ for some constant $\epsilon > 0$ and, independently, by Nanongkai and Saranurak [3] with update time $O(n^{0.494})$ (the latter works only for maintaining a spanning forest). Our result is obtained by identifying the common framework that both previous algorithms rely on, and then improving and combining the ideas from both works. There are two main algorithmic components of the framework that are newly improved and critical for obtaining our result. First, we improve the update time from $O(n^{0.5-\epsilon})$ in [2] to $O(n^{o(1)})$ for decrementally removing all low-conductance cuts in an expander undergoing edge deletions. Second, by revisiting the "contraction technique" by Henzinger and King [4] and Holm et al. [5], we show a new approach for maintaining a minimum spanning forest in connected graphs with very few (at most $(1 + o(1))n$) edges. This significantly improves the previous approach in [2], [3], which is based on Frederickson's 2-dimensional topology tree [6], and illustrates a new application of this old technique.

IEEE, 2017
##### Series
Annual IEEE Symposium on Foundations of Computer Science, ISSN 0272-5428
##### National Category
Electrical Engineering, Electronic Engineering, Information Engineering; Computer Sciences
##### Identifiers
urn:nbn:se:kth:diva-220661 (URN); 10.1109/FOCS.2017.92 (DOI); 000417425300083; 2-s2.0-85041099602 (Scopus ID); 978-1-5386-3464-6 (ISBN)
##### Conference
58th IEEE Annual Symposium on Foundations of Computer Science (FOCS), OCT 15-17, 2017, Berkeley, CA
##### Note
QC 20180108. Available from: 2018-01-08. Created: 2018-01-08. Last updated: 2018-07-25. Bibliographically approved.
4. Smooth Heaps and a Dual View of Self-adjusting Data Structures

2018 (English) In: Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing, ACM, 2018, p. 801-814. Conference paper, Published paper (Refereed)

##### Abstract [en]

We present a new connection between self-adjusting binary search trees (BSTs) and heaps, two fundamental, extensively studied, and practically relevant families of data structures (Allen, Munro, 1978; Sleator, Tarjan, 1983; Fredman, Sedgewick, Sleator, Tarjan, 1986; Wilber, 1989; Fredman, 1999; Iacono, Özkan, 2014). Roughly speaking, we map an arbitrary heap algorithm within a broad and natural model, to a corresponding BST algorithm with the same cost on a dual sequence of operations (i.e. the same sequence with the roles of time and key-space switched). This is the first general transformation between the two families of data structures. There is a rich theory of dynamic optimality for BSTs (i.e. the theory of competitiveness between BST algorithms). The lack of an analogous theory for heaps has been noted in the literature (e.g. Pettie; 2005, 2008). Through our connection, we transfer all instance-specific lower bounds known for BSTs to a general model of heaps, initiating a theory of dynamic optimality for heaps. On the algorithmic side, we obtain a new, simple and efficient heap algorithm, which we call the smooth heap. We show the smooth heap to be the heap-counterpart of Greedy, the BST algorithm with the strongest proven and conjectured properties from the literature, widely believed to be instance-optimal (Lucas, 1988; Munro, 2000; Demaine et al., 2009). Assuming the optimality of Greedy, the smooth heap is also optimal within our model of heap algorithms. Intriguingly, the smooth heap, although derived from a non-practical BST algorithm, is simple and easy to implement (e.g. it stores no auxiliary data besides the keys and tree pointers). It can be seen as a variation on the popular pairing heap data structure, extending it with a "power-of-two-choices" type of heuristic. For the smooth heap we obtain instance-specific upper bounds, with applications in adaptive sorting, and we see it as a promising candidate for the long-standing question of a simpler alternative to Fibonacci heaps. The paper is dedicated to Raimund Seidel on occasion of his sixtieth birthday.

ACM, 2018
STOC 2018
##### Keywords
binary search trees, heaps, self-adjusting data structures, sorting
##### National Category
Computer Sciences
##### Identifiers
urn:nbn:se:kth:diva-232468 (URN); 10.1145/3188745.3188864 (DOI); 2-s2.0-85049889551 (Scopus ID)
##### Conference
50th Annual ACM Symposium on Theory of Computing, STOC 2018; Los Angeles; United States; 25 June 2018 through 29 June 2018
##### Note
QC 20180814. Available from: 2018-07-24. Created: 2018-07-24. Last updated: 2018-10-30. Bibliographically approved.

5. Distributed Exact Weighted All-Pairs Shortest Paths in $\tilde{O}(n^{5/4})$ Rounds

2017 (English) In: 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS), IEEE, 2017, p. 168-179. Conference paper, Published paper (Refereed)

##### Abstract [en]

We study computing all-pairs shortest paths (APSP) on distributed networks (the CONGEST model).
The goal is for every node in the (weighted) network to know the distance from every other node using communication. The problem admits $(1+o(1))$-approximation $\tilde{O}(n)$-time algorithms [2], [3], which are matched with $\tilde{\Omega}(n)$-time lower bounds [3], [4], [5]. No $\omega(n)$ lower bound or $o(m)$ upper bound were known for exact computation. In this paper, we present an $\tilde{O}(n^{5/4})$-time randomized (Las Vegas) algorithm for exact weighted APSP; this provides the first improvement over the naive $O(m)$-time algorithm when the network is not so sparse. Our result also holds for the case where edge weights are asymmetric (a.k.a. the directed case where communication is bidirectional). Our techniques also yield an $\tilde{O}(n^{3/4}k^{1/2} + n)$-time algorithm for the $k$-source shortest paths problem, where we want every node to know distances from $k$ sources; this improves Elkin's recent bound [6] when $k = \tilde{\omega}(n^{1/4})$. We achieve the above results by developing distributed algorithms on top of the classic scaling technique, which we believe is used for the first time for distributed shortest paths computation. One new algorithm which might be of independent interest is for the reversed $r$-sink shortest paths problem, where we want each of the $r$ sinks to know its distances from all other nodes, given that every node already knows its distance to every sink. We show an $\tilde{O}(n\sqrt{r})$-time algorithm for this problem. Another new algorithm is called short range extension, where we show that in $\tilde{O}(n\sqrt{h})$ time the knowledge about distances can be "extended" for additional $h$ hops. For this, we use weight rounding to introduce small additive errors, which can be fixed later. Remark: Independently from our result, Elkin recently observed in [6] that the same techniques from an earlier version of the same paper (https://arxiv.org/abs/1703.01939v1) led to an $O(n^{5/3} \log^{2/3} n)$-time algorithm.

IEEE, 2017
##### Series
Annual IEEE Symposium on Foundations of Computer Science, ISSN 0272-5428
##### Keywords
distributed graph algorithms, all-pairs shortest paths, exact distributed algorithms
##### National Category
Electrical Engineering, Electronic Engineering, Information Engineering
##### Identifiers
urn:nbn:se:kth:diva-220659 (URN); 10.1109/FOCS.2017.24 (DOI); 000417425300015; 2-s2.0-85041116469 (Scopus ID); 978-1-5386-3464-6 (ISBN)
##### Conference
58th IEEE Annual Symposium on Foundations of Computer Science (FOCS), OCT 15-17, 2017, Berkeley, CA
##### Funder
EU, Horizon 2020, 715672; Swedish Research Council, 2015-04659
##### Note
QC 20170109. Available from: 2018-01-09. Created: 2018-01-09. Last updated: 2018-07-24. Bibliographically approved.
# Parametric Equations homework

1. Nov 26, 2006

### sherlockjones

1. If $$x = t^{3} - 12t$$, $$y = t^{2} - 1$$, find $$\frac{dy}{dx}$$ and $$\frac{d^{2}y}{dx^{2}}$$. For what values of $t$ is the curve concave upward?

So $$\frac{dy}{dx} = \frac{2t}{3t^{2}-12}$$ and $$\frac{d^{2}y}{dx^{2}} = \frac{2}{3t^{2}-12}$$

So $$3t^{2}-12 > 0$$ and $$t > 2$$ for the curve to be concave upward? Is this correct?

2. If $$x = 2\cos \theta$$ and $$y = \sin 2\theta$$, find the points on the curve where the tangent is horizontal or vertical.

So $$\frac{dy}{dx} = -\frac{\cos 2\theta}{\sin \theta}$$. The tangent is horizontal when $$-\cos 2\theta = 0$$ and vertical when $$\sin \theta = 0$$. So $$\theta = \frac{\pi}{4}+ \pi n$$ when the tangent is horizontal and $$\theta = \pi n$$ when the tangent is vertical? Is this correct?

3. At what point does the curve $$x = 1-2\cos^{2} t$$, $$y = (\tan t )(1-2\cos^{2}t)$$ cross itself? Find the equations of both tangents at that point.

So I set $$\tan t = 0$$ and $$1-2\cos^{2}t = 0$$

Last edited: Nov 26, 2006

2. Nov 26, 2006

### HallsofIvy

Staff Emeritus

Yes, $$\frac{dy}{dx}= \frac{\frac{dy}{dt}}{\frac{dx}{dt}}$$

How did you get this? $$\frac{d^2y}{dx^2}= \frac{\frac{d(\frac{dy}{dx})}{dt}}{\frac{dx}{dt}}$$ I get $$\frac{d^2y}{dx^2}= -\frac{2}{3}\frac{1}{t^2- 4}$$

Okay, that looks good.

Why? Are you assuming that y = 0 where the curve crosses itself? "Crossing itself" only means that x and y are the same for two or more different values of t. Solve the pair of equations $1- 2\cos^2 t= 1- 2\cos^2 s$ and $(\tan t)(1- 2\cos^2 t)= (\tan s)(1- 2\cos^2 s)$ for t and s.

Last edited: Nov 27, 2006

3. Nov 26, 2006

### benorin

For parametric equations, we have

$$\frac{dy}{dx} = \frac{\frac{dy}{dt}}{\frac{dx}{dt}} = \frac{2t}{3t^{2}-12}$$

so $$\frac{dy}{dx} = \frac{2t}{3t^{2}-12}$$ is correct, but for the second derivative, we consider y as a function of x and apply the chain rule to the first derivative to get

$$\frac{d}{dt}\left(\frac{dy}{dx}\right) = \frac{d}{dt}\left(\frac{\frac{dy}{dt}}{\frac{dx}{dt}}\right)$$

$$\Rightarrow\frac{d^2y}{dx^2}\cdot\frac{dx}{dt} = \frac{\frac{d^2y}{dt^2}\frac{dx}{dt}- \frac{dy}{dt}\frac{d^2x}{dt^2}}{\left(\frac{dx}{dt}\right)^2 }$$

which gives

$$\frac{d^2y}{dx^2}= \frac{\frac{d^2y}{dt^2}\frac{dx}{dt}- \frac{dy}{dt}\frac{d^2x}{dt^2}}{\left(\frac{dx}{dt}\right)^3} = \frac{2(3t^2-12)- 2t(6t)}{(3t^2-12)^3} =-\frac{6}{27}\frac{t^2+4}{(t^2-4)^3}$$

So $$t^{2}-4 < 0$$, which gives $$-2 < t < 2$$ for the curve to be concave upward.

Absolutely. Notice that $$y = x\tan t$$. Now for two points to be the same for different values of t, say $$t_1$$ and $$t_2$$, the y and x values must be the same for those values of t, so we have

$$y-y = x\left(\tan (t_1)-\tan (t_2)\right) \Rightarrow \tan (t_1)-\tan (t_2)=0$$

so we solve $$\tan (t_1)=\tan (t_2),\, t_1<>t_2$$, which has the solution $$t_1 =t_2+k\pi,\, \, k=\pm 1, \pm2,\ldots$$; you can get it from here...

4. Nov 26, 2006

### sherlockjones

So $$y = x\tan t_{1}$$ and $$y = x\tan t_{2} \Rightarrow \tan t_{1} = \tan t_{2}$$. What does $$t_1<>t_2$$ mean? Also, how did you get the solution $$t_1 =t_2+k\pi,\, \, k=\pm 1, \pm2,\ldots$$? Do I just substitute this back in for the equations for $$x$$ and $$y$$? How would you find the equations of both tangents? I have only one point.

Thanks

Last edited: Nov 26, 2006

5. Nov 27, 2006

### HallsofIvy

Staff Emeritus

$t_1<>t_2$ means "t1 is NOT equal to t2". It's a computer programming notation.
He got $t_1 =t_2+k\pi,\, \, k=\pm 1, \pm2,\ldots$ by remembering that tan(x) is periodic with period $\pi$!

Well, you can't substitute $t_1 =t_2+k\pi,\, \, k=\pm 1, \pm2,\ldots$ because you don't know what $t_2$ is! Substitute those back into your other equation to solve for both $t_1$ and $t_2$. Use either to determine x and y.

Last edited: Nov 27, 2006

6. Nov 27, 2006

### sherlockjones

Sorry for my stupidity, but do you mean to do this: $$\tan t_{1} = \tan (t_{1}-k\pi)$$? $$t_{1} = \arctan (t_{1}-k\pi)$$

7. Nov 27, 2006

### HallsofIvy

Staff Emeritus

Yes, that's true, because tangent has period $\pi$.

No, that's not true. Even though $$\arctan(\tan t_1)= \arctan(\tan(t_1- k\pi))$$, concluding that $t_1= t_1- k\pi$ isn't correct. Since tangent is periodic, it is not one-to-one, and arctan is not a true inverse.
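As a sanity check on problem 1 (not part of the original thread), the two derivatives and the concavity interval can be verified symbolically; this sketch assumes SymPy is available, and the printed forms may differ slightly between versions:

```python
import sympy as sp

t = sp.symbols('t', real=True)
x = t**3 - 12*t
y = t**2 - 1

# For a parametric curve, dy/dx = (dy/dt) / (dx/dt).
dydx = sp.cancel(sp.diff(y, t) / sp.diff(x, t))

# And d2y/dx2 = d/dt(dy/dx) / (dx/dt).
d2ydx2 = sp.simplify(sp.diff(dydx, t) / sp.diff(x, t))

print(dydx)                     # 2*t/(3*t**2 - 12)
print(d2ydx2)                   # -2*(t**2 + 4)/(9*(t**2 - 4)**3)
print(sp.solve(d2ydx2 > 0, t))  # (-2 < t) & (t < 2): concave up for -2 < t < 2
```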
# Analytic continuation and convergence of a Riemann zeta related function

The functions in question are
$$L(s)=\sum_{k=1}^\infty \frac{\lambda(k)}{k^s}=\frac{\zeta(2s)}{\zeta(s)} \mbox{ and } L^*(s)=\frac{1}{2}\sum_{k=1}^\infty \frac{\lambda(k)+(-1)^{k+1}}{k^s}=\frac{L(s)+\eta(s)}{2},$$
where

• $$\lambda(k) = (-1)^{\Omega(k)}$$ is the Liouville function
• $$\Omega(k)$$ is the prime omega function, counting the number of prime factors of $$k$$ including multiplicity
• $$\zeta$$ is the Riemann zeta function and $$\eta$$ is the Dirichlet eta function
• $$s=\sigma + it$$ is a complex number
• $$p_1, p_2,\dots$$ denote the primes, with $$p_1=2$$

An equivalent discussion could be based on the sister Möbius and Mertens functions, but here our focus is on Liouville. The notations $$L_n$$ and $$L_n^*$$ are used to denote the sum of the first $$n$$ terms of the series representing, respectively, $$L(s)$$ and $$L^*(s)$$ when $$s=0$$. The asymptotic behavior of $$L_n$$ has well-known deep implications for the Riemann Hypothesis (RH), discussed later in this post.

Note: the series defining $$L^*(s)$$ might or might not converge if $$\frac{1}{2}<\Re(s)\leq 1$$ (that's the purpose of my question); however, the formula $$L^*(s)=(L(s)+\eta(s))/2$$ is valid only for $$\Re(s)>1$$, because the series defining $$L(s)$$ converges only for $$\Re(s)>1$$ (at least according to Wikipedia, see here). My main question can be stated in simple words:

My question

Does the series representing $$L^*(s)$$ converge, and is it an analytic function, if $$\Re(s)>\frac{1}{2}$$?

Probabilistic arguments in favor of a positive answer are discussed in the appendix. Assuming the answer is yes, then $$L(s)=2L^*(s)-\eta(s)$$ is an analytic continuation of the original $$L(s)$$, from $$\Re(s)>1$$ to $$\Re(s)>\frac{1}{2}$$. Consequences are discussed in the next section. I computed $$L^*(s)$$ for various $$s$$ with $$\frac{1}{2}<\Re(s)\leq 1$$ using the first million terms of the series, and compared with $$L^*(s)=\frac{1}{2}\Big(\eta(s)+\frac{\zeta(2s)}{\zeta(s)}\Big)$$ computed by Mathematica: the first three digits are identical. In particular, $$L^*(1)=\frac{\log 2}{2}$$, and the analytic continuation gives $$L(1)=0$$.

Discussion

The sequence $$\{\lambda(k)\}$$ consists of $$+1$$'s and $$-1$$'s that seem rather randomly distributed, though not perfectly randomly, but supposedly randomly enough (see here) as to imply the Riemann Hypothesis (RH), still a conjecture at this point. For instance, if you look at runs of $$+1$$ or $$-1$$ (subsequences of consecutive $$+1$$ or consecutive $$-1$$), the probability for a run to be of length $$m>0$$ is equal to $$2^{-m}$$, as in a sequence of i.i.d. Bernoulli trials. More about this is discussed in the appendix. Yet this is not enough to make the series for $$L(s)$$ converge if $$\Re(s)\leq 1$$.

Also, it seems there is a little bias in the sequence $$\{\lambda(k)\}$$, which might favor $$L_n$$ being negative more frequently than positive, unlike perfect random walks. For instance, if $$1<n<906380357$$, then $$L_n$$ is always negative (see here), but not at $$n=906380357$$, thus disproving Polya's conjecture. That bias might disappear when $$n$$ becomes extremely large.

Now, if you replace $$\lambda(k)$$ by $$\lambda^*(k)=(\lambda(k)+(-1)^{k+1})/2$$, corresponding to the terms in the series defining $$L^*(s)$$, the distribution of run lengths (if you ignore the terms $$\lambda^*(k)$$ equal to zero) is significantly altered. This would also be true, with the exact same impact, if you applied the same trick to i.i.d.
Bernoulli trials or other sequences with a similar distribution of $$+1$$'s and $$-1$$'s. In the sequence $$\{\lambda^*(k)\}$$, assuming you omit the zero terms, which do not contribute to the sums $$L_n^*$$ or $$L^*(s)$$, the probability for a run to be of length $$m>0$$ is now equal to $$2\cdot 3^{-m}$$, down from $$2^{-m}$$ for the original sequence $$\{\lambda(k)\}$$. This means that, on average, runs are now much shorter, to the point that it makes the series $$L^*(s)$$ converge even if $$\frac{1}{2}<\Re(s)\leq 1$$. In essence, our trick is what gets us an analytic continuation from $$\Re(s)>1$$ to $$\Re(s)>\frac{1}{2}$$. Transforming $$L(s)$$ into $$L^*(s)$$ is the same trick as transforming $$\zeta(s)$$ into $$\eta(s)$$, though in the latter case, it extends analyticity from $$\Re(s)>1$$ to $$\Re(s)>0$$.

An interesting question is whether you can infer some useful result about $$\lim \sup_{n \rightarrow \infty} L_n$$ from the smoother $$L^*_n$$, as the $$\lim\sup$$ in question is intimately connected to the Riemann Hypothesis (see my previous post here).

Another curious fact is
$$\sum_{k=1}^n \lambda(k)\Big\lfloor \frac{n}{k}\Big\rfloor =\lfloor \sqrt{n}\rfloor ,$$
where $$\lfloor \cdot \rfloor$$ denotes the integer part function. This formula is mentioned here, and it allows you to compute $$\lambda(k)$$ recursively without using a table of prime numbers.

Finally,
$$\zeta(s)=\frac{\zeta(2s)}{2L^*(s)-\eta(s)} = \frac{1}{2L^*(s)-\eta(s)}\cdot\prod_{k=1}^\infty \frac{1}{L(2^k s)}$$
gives (by successive applications of the first equality) an infinite product for $$\zeta(s)$$ converging (and analytic) if $$\Re(s)>\frac{1}{2}$$. The roots of $$\zeta(s)$$ in the critical strip $$\frac{1}{2}<\Re(s)<1$$ (if any; RH says that there are none) are identical to the poles of $$L^*(s)$$, or in other words, to the roots of $$1/L^*(s)$$ in the same strip.

Appendix

Here we show what would happen if the numbers $$\lambda(k)$$ were replaced by independent random variables $$X_k$$ taking the value $$1$$ with probability $$\frac{1}{2}$$, and $$-1$$ with the same probability. This is pretty much (but not exactly) the way the $$\lambda(k)$$'s behave. Let's define
$$Z=\sum_{k=1}^\infty \frac{X_k}{k^s}.$$
Here $$Z$$ is a complex random variable, and $$s=\sigma +it$$. Its expectation is zero, and its variance is given by
$$Var[\Re(Z)]=Var[X_1]\cdot\sum_{k=1}^\infty \frac{\cos^2(t\log k)}{k^{2\sigma}},\\ Var[\Im(Z)]=Var[X_1]\cdot\sum_{k=1}^\infty \frac{\sin^2(t\log k)}{k^{2\sigma}}.$$
Both series converge if $$\sigma=\Re(s)>\frac{1}{2}$$. In that case, $$Var[Z]=Var[\Re(Z)]+Var[\Im(Z)] =Var[X_1]\cdot \zeta(2\sigma)$$. A formula related to the characteristic function, for a real number $$\tau$$, is the following:
$$E[\exp(i\tau Z)]=\prod_{k=1}^\infty \cos(\tau k^{-s}) .$$
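As an aside, the floor-sum identity quoted above can be turned directly into a recursion for $$\lambda(k)$$; here is a small sketch (my own illustration, quadratic time, so for small $$n$$ only):

```python
import math

def liouville_upto(n):
    """lambda(1..n) from sum_{k<=m} lambda(k)*floor(m/k) = floor(sqrt(m)).

    Since floor(m/m) = 1, the identity can be solved for lambda(m)
    once lambda(1), ..., lambda(m-1) are known; no prime table needed.
    """
    lam = [0] * (n + 1)
    lam[1] = 1
    for m in range(2, n + 1):
        lam[m] = math.isqrt(m) - sum(lam[k] * (m // k) for k in range(1, m))
    return lam[1:]

lam = liouville_upto(10)
print(lam)  # [1, -1, -1, 1, -1, 1, -1, -1, 1, 1]
```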
• Since the series for $\eta$ converges, the series for $L^*$ converges iff the one for $L$ converges. You've stumbled upon a statement equivalent to RH again. – Wojowu May 7 at 18:18
• The function $L$ is equivalent to $\zeta$ in $\Re s >1$ in a well-defined sense (Bohr equivalence), and Turan preferred it in his RH research; for $C(x)=\sum_{n \le x}\frac{\lambda(n)}{n}$ there are famous results of Turan relating this function with RH (or weaker versions); it is known that if $C(x)$ were nonnegative for large $x$, RH would follow, but that result is false, as $C$ takes infinitely many negative values; however, estimates of how negative $C$ can be imply results about the zeroes of RZ (e.g., if for some $a,c>0$ we have $C(x) > -c\frac{\log^a x}{\sqrt x}, x \ge x_0$, then RH is true) – Conrad May 7 at 18:59
• Have you read Titchmarsh's book yet? (Probably not, because it takes about a year to read such a book.) – GH from MO May 7 at 21:18
• If you have a series of the form $\sum a_k+b_k$ and you know $\sum a_k$ converges, then $\sum a_k+b_k$ converges iff $\sum b_k$ converges (and then $\sum a_k+b_k = \sum a_k+\sum b_k$). – Wojowu May 7 at 21:51
• Questions are good - if they are of research level. Your questions are usually not of research level, and they tend to be lengthy, exactly because you are not familiar with the background theory. I don't want to discourage you from being curious and asking original questions. Those are very good things and part of being a scientist. It's just that it takes a lot of studying before one can come up with good questions. – GH from MO May 7 at 23:50

Let $$s\in\mathbb{C}$$ be any point with $$\Re s>0$$. The Dirichlet series of $$\eta(s)$$ converges, hence $$L(s)$$ converges if and only if $$L^*(s)$$ converges. For $$\sigma_0>1/2$$, it is also known that $$L(s)$$ converges in the half-plane $$\Re s>\sigma_0$$ if and only if $$\zeta(s)$$ has no zero in that half-plane. Combining the previous two statements, it follows that $$L^*(s)$$ converges in the half-plane $$\Re s>1/2$$ if and only if the RH is true.

The above results do not change when we restrict to real numbers $$s$$.

P.S. The Wikipedia page does not say that $$L(s)$$ only converges for $$\Re s>1$$.

• I was talking about the third display in the section "Dirichlet Series" in the Wikipedia entry. It says that $|z|<2$ and $Re(s)>1$ (I assume for convergence), and here $z=-1$. But they could be wrong; they got the product wrong anyway, I think: it should be $1-z/p^s$, not $1+z/p^s$, in the product, unless I am mistaken. I will redo my numerical computations anyway, because (unless there is some mistake) I could not get $L(0.9)$ to converge, but got $L^*(0.9)$ to converge to the correct value. – Vincent Granville May 7 at 22:32
• @VincentGranville: The Wikipedia page defines the Dirichlet series for $\Re s>1$, because in that range we know convergence, and for $z=1$ you cannot replace $\Re s>1$ by $\Re s>0.999$, say. For $z=-1$ we conjecture that the Dirichlet series actually converges for $\Re s>1/2$: this is equivalent to the RH. – GH from MO May 7 at 23:08
# Algorithms and Architectures for Security

Authored by: Mostafa Hashem Sherif

# Protocols for Secure Electronic Commerce

Print publication date: May 2016
Online publication date: May 2016
Print ISBN: 9781482203745
eBook ISBN: 9781482203776
DOI: 10.1201/b20160-4

#### Abstract

The security of electronic commerce (e-commerce) transactions covers the security of access to the service, the correct identification and authentication of participants (to provide them the services that they have subscribed to), the integrity of the exchanges, and, if needed, their confidentiality. It may be necessary to preserve evidence to resolve disputes and litigation. All these protective measures may counter users' expectations regarding anonymity and nontraceability of transactions.

#### Algorithms and Architectures for Security

This chapter contains a short review of the architectures and algorithms used to secure electronic commerce. In particular, the chapter deals with the following themes: definition of security services in open networks, security functions and their possible location in the various layers of the distribution network, mechanisms to implement security services, certification of the participants, and the management of encryption keys. Some potential threats to security are highlighted, particularly as they relate to implementation flaws.

This chapter has three appendices. Appendices 3A and 3B contain a general overview of the symmetric and public key encryption algorithms, respectively. Described in Appendix 3C are the main operations of the Digital Signature Algorithm (DSA) of the American National Standards Institute (ANSI) X9.30:1 and the Elliptic Curve Digital Signature Algorithm (ECDSA) of ANSI X9.62, first published in 1998 and revised in 2005.

#### 3.1 Security of Open Financial Networks

Commercial transactions depend on the participants' trust in their mutual integrity, in the quality of the exchanged goods, and in the systems for payment transfer or for purchase delivery. Because the exchanges associated with electronic commerce take place mostly at a distance, the climate of trust that is conducive to commerce must be established without the participants meeting in person, even if they use dematerialized forms of money or digital currencies. The security of the communication networks involved is indispensable: those that link the merchant and the buyer, those that link the participants with their banks, and those linking the banks together. The network architecture must be capable of withstanding potential faults without significant service degradation, and the physical protection of the network must be ensured against fires, earthquakes, flooding, vandalism, or terrorism. This protection will primarily cover the network equipment (switches, trunks, information systems) but can also be extended to user-end terminals. However, the procedures to ensure such protection are beyond the scope of this chapter.
Recommendation X.800 (1991) from the International Telecommunication Union Telecommunication Standardization Sector (ITU-T) categorizes the specific informational threats into two main categories: passive and active attacks.

Passive attacks consist in the following:

1. Interception of the identity of one or more of the participants by a third party with mischievous intent.
2. Data interception through clandestine monitoring of the exchanges by an outsider or an unauthorized user.

Active attacks take several forms, such as the following:

• Replay of a previous message, in its entirety or in part.
• Accidental or criminal manipulation of the content of an exchange by a third party by substitution, insertion, deletion, or any unauthorized reorganization of the user's data.
• Users' repudiation or denial of their participation in part or in all of a communication exchange.
• Misrouting of messages from one user to another (the objective of the security service would be to mitigate the consequences of such an error as well).
• Analysis of the traffic and examination of the parameters related to a communication among users (i.e., absence or presence, frequency, direction, sequence, type, volume); this analysis would be made more difficult by encryption.
• Masquerade, whereby one entity pretends to be another entity.
• Denial of service and the impossibility of accessing the resources usually available to authorized users, following the breakdown of communication, link congestion, or the delay imposed on time-critical operations.

Based on the preceding threats, the objectives of security measures in electronic commerce are as follows:

• Prevent any party other than the participants from reading or manipulating the contents or the sequences of the exchanged messages without being detected. In particular, that third party must not be allowed to play back old messages, replace blocks of information, or insert messages from multiple distinct exchanges without detection.
• Impede the falsification of Payment Instructions or the generation of spurious messages by users with dubious intentions. For example, dishonest merchants or processing centers must not be capable of reutilizing information about their clients' bank accounts to generate fraudulent orders. They should not be able to initiate the processing of Payment Instructions without expediting the corresponding purchases. At the same time, the merchants will be protected from excessive revocation of payments or malicious denials of orders.
• Satisfy the legal requirements on, for example, payment revocation, conflict resolution, consumer protection, privacy protection, and the exploitation of data collected on clients for commercial purposes.
• Ensure reliable access to the e-commerce service, according to the terms of the contract.
• For a given service, provide the same level of service to all customers, irrespective of their location and the environmental variables.

The International Organization for Standardization (ISO) standard ISO 7498-2:1989 (ITU-T Recommendation X.800, 1991) describes a reference model for security services in open networks. This model will be the framework for the discussion in the next section.

#### 3.2.1 OSI Reference Model

It is well known that the Open Systems Interconnection (OSI) reference model of data networks establishes a structure for exchanges in seven layers (ISO/IEC 7498-1:1994):
1. The physical layer is where the electrical, mechanical, and functional properties of the interfaces are defined (signal levels, rates, structures, etc.).
2. The link layer defines the methods for orderly and error-free transmission between two network nodes.
3. The network layer is where the functions for routing, multiplexing of packets, flow control, and network supervision are defined.
4. The transport layer is responsible for the reliable transport of the traffic between the two network endpoints, as well as the assembly and disassembly of the messages.
5. The session layer handles the conversation between the processes at the two endpoints.
6. The presentation layer manages the differences in syntax among the various representations of information at both endpoints by putting the data into a standardized format.
7. The application layer ensures that two application processes cooperate to carry out the desired information processing at the two endpoints.

The following section provides details about some cryptographic security functions that have been assigned to each layer.

#### 3.2.2 Security Services: Definitions and Location

Security services for exchanges used in e-commerce employ mathematical functions to reshuffle the original message into an unreadable form before it is transmitted. After the message is received, the authenticated recipient must restore the text to its original status. The security consists of six services (Baldwin and Chang, 1997):

1. Confidentiality, that is, the exchanged messages are not divulged to a nonauthorized third party. In some applications, the confidentiality of addresses may be needed as well, to prevent the analysis of traffic patterns and the derivation of side information that could be used.
2. Integrity of the data, that is, proof that the message was not altered after it was expedited and before the moment it was received. This service guarantees that the received data are exactly what was transmitted by the sender and that the data were not corrupted, either intentionally or by error, in transit in the network. Data integrity is also needed for network management data, such as configuration files and accounting and audit information.
3. Identification, that is, the verification of a preestablished relation between a characteristic (e.g., a password or cryptographic key) and an entity. This allows control of access to the network resources or to the offered services based on the privileges associated with a given identity. One entity may possess several distinct identifiers. Furthermore, some protection against denial-of-service attacks can be achieved using access control.
4. Authentication of the participants (users, network elements, and network element systems), which is the corroboration of the identity that an entity claims, with the guarantee of a trusted third party. Authentication is necessary to ensure nonrepudiation of users as well as of network elements.
5. Access control, to ensure that only the authorized participants, whose identities have been duly authenticated, can gain access to the protected resources.
6. Nonrepudiation, which is the service that offers an irrefutable proof of the integrity of the data and of their origin in a way that can be verified by a third party, for example, the nonrepudiation that the sender sent the message or that a receiver received the message. This service may also be called authentication of the origin of the data.

The implementation of the security services can be made over one or more layers of the OSI model.
The choice of the layer depends on several considerations, as explained in the following text.

If the protection has to be accorded to all the traffic flow in a uniform manner, the intervention has to be at the physical or the link layer. The only cryptographic service that is available at this level is confidentiality, by encrypting the data or similar means (frequency hopping, spread spectrum, etc.). The protection of the traffic at the physical layer covers the whole flow, not only user data but also the information related to network administration: alarms, synchronization, updates of routing tables, and so on. The disadvantage of protection at this level is that a successful attack will destabilize the whole security structure, because the same key is utilized for all transmissions.

At the link layer, encryption can be end to end, based on the source/destination pair, provided that the same technology is used all the way through.

Network layer encipherment achieves selective bulk protection that covers all the communications associated with a particular subnetwork, from one end system to another end system. Security at the network layer is also needed to secure the communication among the network elements, particularly for link-state protocols, where updates to the routing tables are automatically generated on the basis of received information and then flooded to the rest of the network.

For selective protection with recovery after a fault, or if the network is not reliable, the security services will be applied at the transport layer. The services of this layer apply end to end, either singly or in combination. These services are authentication (whether simple, by passwords, or strong, by signature mechanisms or certificates), access control, confidentiality, and integrity.

If more granular protection is required, or if the nonrepudiation service has to be ensured, the encryption will be at the application layer. It is at this level that most of the security protocols for commercial systems operate, which frees them from a dependency on the lower layers. All security services are available. It should be noted that there are no services at the session layer. In contrast, the services offered at the presentation layer are confidentiality, which can be selective, such as by a given data field, authentication, integrity (in whole or in part), and nonrepudiation with a proof of origin or proof of delivery. As an example, the Secure Sockets Layer (SSL)/Transport Layer Security (TLS) protocols are widely used to secure the connection between a client and a server. With respect to the OSI reference model, SSL/TLS lie between the transport layer and the application layer and will be presented in Chapter 5.

In some cases, it may be sufficient for an attacker to discover that a communication is taking place among partners and then attempt to guess, for example:

• The characteristics of the goods or services exchanged
• The conditions for acquisition, such as delivery intervals, conditions, and means of settlement
• The financial settlement

The establishment of an enciphered channel or "tunnel" between two points at the network layer can constitute a shield against such types of attack. It should be noted, however, that other clues, such as the relative time needed to execute the cryptographic operations, or variations in electric consumption or electromagnetic radiation, can permit an analysis of the encrypted traffic and ultimately lead to the breaking of the encryption algorithms (Messerges et al., 1999).
#### 3.3 Security Services at the Link Layer

FIGURE 3.1 Layer 2 tunneling with L2TP.

Although L2TP does not provide any security services, it is possible to use Internet Protocol Security (IPSec) to secure the layer 2 tunnel, because L2TP runs over IP. This is shown in the following section.

#### 3.4 Security Services at the Network Layer

The security services at this layer are offered from one end of the network to the other. They include network access control, authentication of the users and/or hosts, and authentication and integrity of the exchanges. These services are transparent to applications and end users, and their responsibility falls on the administrators of network elements.

Authentication at the network layer can be simple or strong. Simple authentication uses a name and password pair (the password may be a one-time password), while strong authentication utilizes digital signatures or the exchange of certificates issued by a recognized certification authority (CA). The use of strong authentication requires the presence of encryption keys at all network nodes, which imposes the physical protection of all these nodes.

IPSec is a protocol suite defined to secure communications at the network layer between two peers. The most recent road map to the IPSec documentation is available in IETF RFC 6071 (2011). The overall security architecture of IPSec-v2 is described in IETF RFC 2401; the architecture of IPSec-v3 is described in RFC 4301 (2005). IPSec offers authentication, confidentiality, and key management and is not tied to specific cryptographic algorithms.

The Authentication Header (AH) protocol, defined in IETF RFCs 4302 (2005) and 7321 (2014), provides authentication and integrity services for the payload as well as the routing information in the original IP header. The Encapsulating Security Payload (ESP) protocol is described in IETF RFCs 4303 (2005) and 7321 (2014), which define IPSec-v3. ESP focuses on the confidentiality of the original payload and the authentication of the encrypted data as well as the ESP header. Both IPSec protocols provide some protection against replay attacks with the help of a monotonically increasing sequence number that is 64 bits long. The key exchange is performed with Internet Key Exchange (IKE) version 2, the latest version of which is defined in RFCs 7296 (2014) and 7427 (2015).

IPSec operates in one of two modes: the transport mode and the tunnel mode. In the transport mode, the protection covers the payload and the transport header only, while the tunnel mode protects the whole packet, including the IP addresses. The transport mode secures the communication between two hosts, while the tunnel mode is useful when one or both ends of the connection are trusted entities, such as firewalls, which provide the security services to an originating device. The tunnel mode is also employed when a router provides the security services to the traffic that it is forwarding (Doraswamy and Harkins, 1999). Both modes are used to secure virtual private networks with IPSec, as shown in Figure 3.2. The AH protocol is used for the transport mode only, while ESP is applicable to both modes.

FIGURE 3.2 Securing virtual private networks with IPSec.

Illustrated in Figure 3.3 is the encapsulation in both cases. In this figure, the IPSec header represents either the ESP header or both the ESP and the AH headers.
Thus, routing information associated with the private or corporate network can be encrypted after the establishment of a Transmission Control Protocol (TCP) tunnel between the firewall at the originating side and the one at the destination side. Note that ESP with no encryption (i.e., with a NULL algorithm) is equivalent to the AH protocol.

FIGURE 3.3 Encapsulation for IPSec modes.

In verifying the integrity, the contents of fields in the IP header that change in transit (e.g., the "time to live") are considered to be zero. With respect to transmission overheads, the length of the AH is at least 12 octets (a multiple of 4 octets for IPv4 and of 6 octets for IPv6). Similarly, the length of the ESP header is 8 octets. However, the total overhead for ESP includes 4 octets for the initialization vector (if it is included in the payload field), as well as an ESP trailer of at least 6 octets that comprises padding and authentication data.

The protection of L2TP layer 2 tunnels with the IPSec protocol suite is described in IETF RFC 3193 (2001). When both IPSec and L2TP are used together, the various headers are organized as shown in Figure 3.4.

FIGURE 3.4 Encapsulation for secure network access with L2TP and IPSec.

IPSec AH (RFC 4302) and IPSec ESP (RFC 4303) define an antireplay mechanism using a sliding window that limits how far out of order a packet can be, relative to the authenticated packet with the highest sequence number. A received packet with a sequence number outside that window is dropped. In contrast, the window is advanced each time a packet is received with a sequence number within the acceptable range. RFCs 4302 and 4303 define minimum window sizes of 32 and 64 packets.
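To make the antireplay mechanism concrete, here is a minimal sketch of a bitmap-based sliding window in the spirit of RFC 4302/4303 (an illustration only; the class name, the default window size of 64, and the bookkeeping details are my own choices, not text from the RFCs):

```python
class AntiReplayWindow:
    """Sliding-window replay check in the style of RFC 4302/4303.

    Tracks the highest authenticated sequence number seen so far and a
    bitmap of the `size` numbers below it; anything older is rejected.
    """

    def __init__(self, size=64):
        self.size = size
        self.highest = 0   # highest sequence number accepted so far
        self.bitmap = 0    # bit i set <=> (highest - i) was received

    def check_and_update(self, seq):
        """Return True if seq is fresh; False for replays or too-old packets."""
        if seq == 0:
            return False                      # sequence numbers start at 1
        if seq > self.highest:                # window slides forward
            shift = seq - self.highest
            self.bitmap = ((self.bitmap << shift) | 1) & ((1 << self.size) - 1)
            self.highest = seq
            return True
        offset = self.highest - seq
        if offset >= self.size:               # to the left of the window
            return False
        if self.bitmap & (1 << offset):       # already seen: replay
            return False
        self.bitmap |= 1 << offset
        return True

w = AntiReplayWindow()
print([w.check_and_update(s) for s in (1, 3, 2, 3, 200, 150)])
# [True, True, True, False, True, True]
```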
#### 3.5 Security Services at the Application Layer

The majority of the security protocols for e-commerce operate at the application layer, which makes them independent of the lower layers. The whole gamut of security services is now available:

1. Confidentiality, total or selective by field or by traffic flow
2. Data integrity
3. Peer entity authentication
4. Peer entity authentication of the origin
5. Access control
6. Nonrepudiation of transmission with proof of origin
7. Nonrepudiation of reception with proof of reception

The Secure Shell (SSH®), for example, provides security at the application layer and allows a user to log on, execute commands, and transfer files securely. (Secure Shell and SSH are registered trademarks of SSH Communications Security, Ltd. of Finland.)

Additional security mechanisms are specific to a particular usage or to the end-user application at hand. For example, several additional parameters are considered to secure electronic payments, such as the ceiling of allowed expenses or withdrawals within a predefined time interval. Fraud detection and management depend on the surveillance of the following (Sabatier, 1997, p. 85):

• Activities at the points of sale (merchant terminals, vending machines, etc.)
• Short-term events
• Long-term trends, such as the behavior of a subpopulation, within a geographic area and in a specific time interval

In these cases, audit management takes into account the choice of events to collect and/or register, the validation of an audit trail, the definition of the alarm thresholds for suspected security violations, and so on.

The rights of intellectual property to dematerialized articles sold online pose an intellectual and technical challenge. The aim is to prevent the illegal reproduction of what is easily reproducible, using "watermarks" incorporated in the product. The means used differ depending on whether the products protected are ephemeral (such as news), consumer oriented (such as films, music, books, articles, or images), or for production (such as enterprise software).

Next, we give an overview of the mechanisms used to implement security services. The objective is to present sufficient background for understanding the applications and not to give an exhaustive review. More comprehensive discussions of the mathematics of cryptography are available elsewhere (Schneier, 1996; Menezes et al., 1997; Ferguson et al., 2010; Paar and Pelzl, 2010).

#### 3.6 Message Confidentiality

Confidentiality guarantees that information will be communicated solely to the parties that are authorized for its reception. Concealment is achieved with the help of encryption algorithms. There are two types of encryption: symmetric encryption, where the operations of message obfuscation and revelation use the same secret key, and public key encryption, where the encryption key is public and the revelation key is secret.

#### 3.6.1 Symmetric Cryptography

Symmetric cryptography is the tool employed in classical systems. The key that the sender of a secret message utilizes to encrypt the message is the same as the one that the legitimate receiver uses to decrypt it. Obviously, key exchange among the partners has to occur before the communication, and this exchange takes place through other secured channels. The operation is illustrated in Figure 3.5.

FIGURE 3.5 Symmetric encryption.

Let M be the message to be encrypted with a symmetric key K in the encryption process E. The result will be the ciphertext C such that

$$E[K(M)] = C$$

The decryption process D is the inverse function of E that restores the clear text:

$$D(C) = M$$

There are two main categories of symmetric encryption algorithms: block encryption algorithms and stream cipher algorithms. Block encryption acts by transforming a block of data of fixed size, generally 64 bits, into encrypted blocks of the same size. Stream ciphers convert the clear text one bit at a time, by combining the stream of bits in the clear text with the stream of bits from the encryption key using an exclusive OR (XOR).
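As a toy illustration of the stream-cipher principle just described (deliberately simplified and not secure; a real system would derive the keystream from one of the algorithms in Table 3.1 rather than from `os.urandom`):

```python
import os

def xor_stream(data: bytes, keystream: bytes) -> bytes:
    # XOR is an involution: applying it twice with the same keystream
    # restores the original, so the same function encrypts and decrypts.
    return bytes(d ^ k for d, k in zip(data, keystream))

message = b"pay 100 euros"
keystream = os.urandom(len(message))  # stand-in for a proper keystream generator

ciphertext = xor_stream(message, keystream)    # E[K(M)] = C
recovered = xor_stream(ciphertext, keystream)  # D(C) = M
assert recovered == message
```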
Table 3.1 presents, in alphabetical order, the main algorithms for symmetric encryption used in e-commerce applications.

### TABLE 3.1 Symmetric Encryption Algorithms in E-Commerce

| Algorithm | Name and Description | Block Size in Bits | Key Length in Bits | Standard |
|---|---|---|---|---|
| AES | Advanced Encryption Standard | 128 | 128, 192, or 256 | FIPS 197 |
| DES | Data Encryption Standard | 64 | 56 | FIPS 81, ANSI X3.92, X3.105, X3.106, ISO 8372, ISO/IEC 10116 |
| IDEA (Lai and Massey, 1991a,b) | International Data Encryption Algorithm | 64 | 128 | |
| RC2 | Developed by Ronald Rivest (Schneier, 1996, pp. 319-320) | 64 | Variable (previously limited to 40 bits for export from the United States) | No; proprietary |
| RC4 | Developed by R. Rivest (Schneier, 1996, pp. 397-398) | Stream cipher | 40 or 128 | No, but posted on the Internet in 1994 |
| RC5 | Developed by R. Rivest (1995) | 32, 64, or 128 | Variable, up to 2048 bits | No; proprietary |
| SKIPJACK | Developed for applications with the PCMCIA card Fortezza | 64 | 80 | Declassified algorithm; version 2.0 available at http://csrc.nist.gov/groups/ST/toolkit/documents/skipjack/skipjack.pdf, last accessed January 25, 2016 |
| Triple DES | Also called TDEA | 64 | 112 | ANSI X9.52/NIST SP 800-67 (National Institute of Standards and Technology, 2012a) |

Note: FIPS, Federal Information Processing Standard.

Fortezza is the Cryptographic Application Programming Interface (CAPI) that the National Security Agency (NSA) defined for security applications running on PCMCIA (Personal Computer Memory Card International Association) cards. The SKIPJACK algorithm is used for encryption, and the Key Exchange Algorithm (KEA) is the algorithm for key exchange. The experimental specifications of IETF RFC 2773 (2000) describe the use of SKIPJACK and KEA for securing file transfers.

The main drawback of symmetric cryptography systems is that both parties must obtain, one way or another, the unique encryption key. This is possible without too much trouble within a closed organization; on open networks, however, the exchange can be intercepted. Public key cryptography, proposed in 1976 by Diffie and Hellman, is one solution to the problem of key exchange.

#### 3.6.2 Public Key Cryptography

Algorithms of public key cryptography introduce a pair of keys for each participant: a private key SK and a public key PK. The keys are constructed in such a way that it is practically impossible to reconstitute the private key from the knowledge of the public key. Consider two users, A and B, each having a pair of keys, $(PK_A, SK_A)$ and $(PK_B, SK_B)$, respectively. Thus:

1. To send a secret message x to B, A encrypts it with B's public key and then transmits the encrypted message to B. This is represented by
$$e = PK_B(x)$$
2. B recovers the information using his or her private key $SK_B$. It should be noted that only B possesses $SK_B$, which can be used to identify B. The decryption operation can be represented by
$$x = SK_B(e)$$
3. B can respond to A by sending a new secret message x′ encrypted with the public key $PK_A$ of A:
$$e' = PK_A(x')$$
4. A obtains x′ by decrypting e′:
$$x' = SK_A(e')$$

The diagram in Figure 3.6 summarizes these exchanges.

FIGURE 3.6 Confidentiality of messages with public key cryptography. (From ITU-T Recommendation X.509 (ISO/IEC 9594-8), Information technology – Open systems interconnection – The directory: Public-key and attribute certificate frameworks, 2012, 2000. With permission.)

It is worth noting that the preceding exchange can be used to verify the identity of each participant. More precisely, A and B are identified by the possession of the decryption key, $SK_A$ or $SK_B$, respectively. A can determine whether B possesses the private decryption key $SK_B$ if the initial message x is included in the returned message x′ that B sends. This indicates to A that the communication has been made with the entity that possesses $SK_B$. B can also confirm the identity of A in a similar way.

The de facto standard for public key encryption is the algorithm RSA, invented by Rivest et al. (1978). In many new applications, however, elliptic curve cryptography (ECC) offers significant advantages, as described in Appendix 3B.
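Steps 1 and 2 can be sketched with a concrete algorithm; the following assumes the third-party Python `cryptography` package and uses RSA with OAEP padding, which is one possible realization rather than the chapter's reference construction:

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# B generates a key pair (PK_B, SK_B); PK_B can be published.
sk_b = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pk_b = sk_b.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Step 1: A encrypts x with B's public key: e = PK_B(x).
x = b"order 1234: 100 euros"
e = pk_b.encrypt(x, oaep)

# Step 2: only B, holding SK_B, can recover x = SK_B(e).
assert sk_b.decrypt(e, oaep) == x
```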
#### 3.7 Data Integrity

The objective of the integrity service is to eliminate all possibilities of nonauthorized modification of messages during their transit from the sender to the receiver. The traditional way to achieve this security is to stamp the letter envelope with the wax seal of the sender. Transposing this concept to electronic transactions, the seal will be a sequence of bits associated univocally with the document to be protected. This sequence of bits will constitute a unique and unfalsifiable "fingerprint" that will accompany the document sent to the destination. The receiver will then recalculate the value of the fingerprint from the received document and compare the value obtained with the value that was sent. Any difference will indicate that the message integrity has been violated.

The fingerprint can be made to depend on the message content only by applying a hash function. A hash function converts a sequence of characters of any length into a chain of characters of a fixed length, L, usually smaller than the original length, called the hash value. However, if the hash algorithm is known, any entity can calculate the hash value from the message using the hash function. For security purposes, the hash value is made to depend on the message content and the sender's private key, in the case of a public key encryption algorithm, or on a secret key that only the sender and the receiver know, in the case of a symmetric encryption algorithm. In the first case, anyone who knows the hash function can verify the fingerprint with the public key of the sender; in the second case, only the intended receiver will be able to verify the integrity.

It should be noted that lack of integrity can be used to break confidentiality. For example, the confidentiality of some algorithms may be broken through attacks on the initialization vectors.

The hash value has many names: compression, contraction, message digest, fingerprint, cryptographic checksum, message integrity check (MIC), and so on (Schneier, 1996, p. 31).

#### 3.7.1 Verification of the Integrity with a One-Way Hash Function

A one-way hash function is a function that can be calculated relatively easily in one direction but with considerable difficulty in the inverse direction. A one-way hash function is sometimes called a compression function or a contraction function. To verify the integrity of a message whose fingerprint has been calculated with the hash function H(), this function should also be a one-way function; that is, it should meet the following properties:

1. Absence of collisions: the probability of obtaining the same hash value from two different texts should be almost null. Thus, for a given message x1, the probability of finding a different message x2 such that H(x1) = H(x2) is extremely small. For the collision probability to be negligible, the size of the hash value L should be sufficiently large.
2. Impossibility of inversion: given the fingerprint h of a message x, it is practically impossible to calculate x such that H(x) = h.
3. A wide spread among the output values: a small difference between two messages should yield a large difference between their fingerprints. Thus, any slight modification in the original text should, on average, affect half of the bits of the fingerprint.
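Property 3, in particular, is easy to observe with Python's standard hashlib and hmac modules (a quick illustration, not from the chapter; the messages and key are arbitrary):

```python
import hashlib
import hmac

# Property 3: a one-character change flips roughly half of the output bits.
h1 = hashlib.sha256(b"pay 100 euros").digest()
h2 = hashlib.sha256(b"pay 900 euros").digest()
flipped = sum(bin(a ^ b).count("1") for a, b in zip(h1, h2))
print(f"{flipped} of 256 bits differ")  # typically around 128

# Keyed fingerprint (the symmetric-key variant described above): only
# holders of the shared secret can produce or verify the MIC.
key = b"shared-secret-key"
mic = hmac.new(key, b"pay 100 euros", hashlib.sha256).hexdigest()
print(mic)
```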
Consider a message X divided into n blocks, each consisting of B bits. If needed, padding bits are appended to the message, according to a defined scheme, so that the length of the last block also reaches B bits. The operations for cryptographic hashing can then be described with a compression function f() applied according to the following recursive relationship:

$h_i = f(h_{i-1}, x_i), \quad i = 1, 2, \ldots, n$

In this equation, $h_0$ is a vector that contains an initial value of L bits, and x = {x1, x2, …, xn} is the message subdivided into n vectors of B bits each; the hash value of X is the final output $h_n$. The hash algorithms that are commonly used in e-commerce are listed in Table 3.2 in alphabetical order.

### TABLE 3.2   Hash Functions Utilized in E-Commerce Applications

| Algorithm | Name | Signature Length (L) in Bits | Block Size (B) in Bits | Standardization |
|---|---|---|---|---|
| AR/DFP | Hashing algorithms of German banks | | | German Banking Standards |
| DSMR | Digital signature scheme giving message recovery | | | ISO/IEC 9796 |
| MCCP | Banking key management by means of public key algorithms using the RSA cryptosystem; signature construction by means of a separate signature | | | ISO 11166-2 |
| MD4 | Message digest algorithm | 128 | 512 | No, but described in RFC 1320 |
| MD5 | Message digest algorithm | 128 | 512 | No, but described in RFC 1321 |
| NVB7.1, NVBAK | Hashing functions used by Dutch banks | | | Dutch Banking Standard, published in 1992 |
| RIPEMD | Extension of MD4, developed during the European project RIPE (Menezes et al., 1997, p. 380) | 128 | 512 | |
| RIPEMD-128 | Dedicated hash function #2 | 128 | 512 | ISO/IEC 10118-3 |
| RIPEMD-160 | Improved version of RIPEMD (Dobbertin et al., 1996) | 160 | 512 | |
| SHA | Secure Hash Algorithm (replaced by SHA-1) | 160 | 512 | FIPS 180 |
| SHA-1 | Dedicated hash function #3 (revision and correction of the Secure Hash Algorithm) | 160 | 512 | ISO/IEC 10118-3; FIPS 180-1 (National Institute of Standards and Technology, 1995); FIPS 180-4 (National Institute of Standards and Technology, 2012b) |
| SHA-2 | | 224, 256; 384, 512 | 512 for the 224- and 256-bit variants; 1024 for the 384- and 512-bit variants | FIPS 180-4 |

For the MD5 and SHA-1 hashing algorithms, the message is divided into blocks of 512 bits. The padding consists in appending to the last block a binary "1," followed by as many "0" bits as necessary for the length of the last block, with padding, to reach 448 bits. Next, a suffix of 8 octets is added that contains the length of the initial message (before padding) coded over 64 bits, which brings the total size of the last block to 512 bits, that is, 64 octets.

For a hash algorithm with an output of n bits, there are $N = 2^n$ possible hash values, which are assumed to be equally probable. Based on the generalization of the birthday problem, the event that there are no collisions after q hashes has a probability given by (Barthélemy et al., 2005, pp. 390–391; Paar and Pelzl, 2010, pp. 299–303)

$P(\text{no collision}) \approx e^{-q(q-1)/2N}$

Therefore, the probability of at least one collision is given by

$1 - e^{-q(q-1)/2N}$

Now, set the probability of at least one collision to 50%. Taking the natural logarithm of both sides and solving for q gives

$\frac{q(q-1)}{2N} = \ln 2$

Since q is large, q − 1 ≈ q; therefore,

$q \approx \sqrt{2N \ln 2} \approx 1.18\sqrt{N}$

Thus, the number of messages to hash to find a collision is roughly the square root of the number of possible output values. For a cryptographic hash function with an output of 256 bits, this so-called birthday attack indicates that the probability of a collision exceeds 50% after around $2^{128}$ hashes.
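As a numerical check of this estimate, a few lines of Python recover the figure of $2^{128}$ hashes for a 256-bit output:

```python
import math

n = 256                              # hash output length in bits
N = 2 ** n                           # number of possible hash values
q = math.sqrt(2 * N * math.log(2))   # hashes needed for a 50% collision chance

print(math.log2(q))                  # about 128.2, i.e., roughly 2**(n/2)
```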
In 1994, two researchers, van Oorschot and Wiener, were able to detect collisions in the output of MD5 (van Oorschot and Wiener, 1994), which explains its gradual replacement with SHA-1. In 2007, Stevens et al. (2009) exploited the vulnerability of MD5 to collisions to construct two X.509 certificates for SSL/TLS traffic with the identical MD5 signature but different public keys, belonging to different entities, without the knowledge and/or assistance of the relevant certification authority. The knowledge that rogue SSL/TLS certificates can be forged in this way accelerated the migration of certification authorities away from MD5.

Federal Information Processing Standard (FIPS) 180-2 contains the specifications of both SHA-1 and SHA-2. SHA-2 is a set of cryptographic hash functions (SHA-224, SHA-256, SHA-384, SHA-512, SHA-512/224, SHA-512/256) published in 2001 by the National Institute of Standards and Technology (NIST) as a U.S. Federal Information Processing Standard; it includes a significant number of changes from its predecessor, SHA-1.

In 2007, NIST announced a competition for a new hash standard, SHA-3, to which 200 cryptographers from around the world submitted 64 proposals. In December 2008, 51 candidates were retained for the first round; in July 2009, the candidate list was reduced to 14 entries; and five finalists were selected in December 2010. After a final round of evaluations, NIST selected Keccak as SHA-3 in October 2012. Keccak was developed by a team of four researchers from STMicroelectronics, a European semiconductor company, and is optimized for hardware but can be implemented in software as well. It uses an entirely different technique from previous cryptographic hash algorithms, which process fixed-length blocks of data with a compression function: Keccak applies a permutation process to extract a fingerprint from the data, a technique given the name of a "sponge function" (Harbert, 2012). Table 3.3 provides a comparison of the parameters of the various hash algorithms.

### TABLE 3.3   Comparison of the Hash Algorithms

| Algorithm | Variant | Output Size (Bits) | Block Size (Bits) |
|---|---|---|---|
| MD5 | | 128 | 512 |
| SHA-1 | | 160 | 512 |
| SHA-2 | SHA-224 | 224 | 512 |
| | SHA-256 | 256 | 512 |
| | SHA-384 | 384 | 1024 |
| | SHA-512 | 512 | 1024 |
| | SHA-512/224 | 224 | 1024 |
| | SHA-512/256 | 256 | 1024 |
| SHA-3 | SHA3-224 | 224 | 1152 |
| | SHA3-256 | 256 | 1088 |
| | SHA3-384 | 384 | 832 |
| | SHA3-512 | 512 | 576 |
| | SHAKE128 | Arbitrary | 1344 |
| | SHAKE256 | Arbitrary | 1088 |

#### 3.7.2  Verification of the Integrity with Public Key Cryptography

An encryption algorithm with a public key is called "permutable" if the decryption and encryption operations can be inverted, that is, if

$PK_X(SK_X(M)) = SK_X(PK_X(M)) = M$

In the case of encryption with a permutable public key algorithm, an information element M that is encrypted with the private key SK_X of an entity X can be read by any user possessing the corresponding public key PK_X. A sender can, therefore, sign a document by encrypting it with a private key reserved for the signature operation, to produce the seal that accompanies the message. Any person who knows the corresponding public key will be able to decipher the seal and verify that it corresponds to the received message.

Another way of producing the signature with public key cryptography is to encrypt the fingerprint of the document. Because the encryption of a long document with a public key algorithm imposes substantial computations and introduces excessive delays, it is beneficial to apply the encryption to a digest of the initial message instead. This digest is produced with a one-way hash function that calculates the fingerprint, which is then encrypted with the sender's private key. At the destination, the receiver recomputes the fingerprint from the received message. With the public key of the sender, the receiver also decrypts the fingerprint that was sent and verifies whether the received hash value is identical to the computed hash value. If both are identical, the signature is valid.
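A minimal sketch of this hash-then-encrypt signature, again with the third-party `cryptography` package (the PKCS#1 v1.5 padding and the SHA-256 digest are illustrative choices):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

sk = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pk = sk.public_key()

message = b"transfer 100 to account 4711"

# Sender: hash the message, then encrypt the digest with the private key.
signature = sk.sign(message, padding.PKCS1v15(), hashes.SHA256())

# Receiver: recompute the digest and compare it with the decrypted seal;
# verify() raises an exception if the two fingerprints differ.
pk.verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
```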
The block diagram in Figure 3.7 represents this verification of integrity with public key encryption of the hash. In the figure, h represents the hash function, C represents the encryption of the hash with the sender's private key, and D represents the corresponding decryption with the sender's public key, used to extract the hash for comparison with the recalculated hash at the receiving end.

FIGURE 3.7   Computation of the digital signature using public key algorithms and hashing.

The public key algorithms that are frequently used to calculate digital signatures are listed in Table 3.4.

### TABLE 3.4   Standard Public Key Algorithms for Digital Signatures

| Algorithm | Description | Signature Length in Bits | Standard |
|---|---|---|---|
| DSA | The Digital Signature Algorithm, a variant of the ElGamal algorithm, published in the Digital Signature Standard (DSS) proposed by NIST (National Institute of Standards and Technology) in 1994. | 320, 448, 512 | FIPS 186-4 (National Institute of Standards and Technology, 2013) |
| ECDSA | Elliptic Curve Digital Signature Algorithm, first standardized in 1998. | 384, 488, 512, 768, 1024 | ANSI X9.62:2005 |
| RSA | The de facto standard algorithm for public key encryption; it can also be used to calculate signatures. | 512–1024 | ISO/IEC 9796 |

Even though this method allows the verification of the message integrity, it does not guarantee that the identity of the sender is authentic. In the case of a public key algorithm, a signature produced from a message with the signer's private key and then verified with the signer's corresponding public key is sometimes called a "signature scheme with appendix" (IETF RFC 3447, 2003).

#### 3.7.3  Blind Signature

A blind signature is a special procedure by which a notary signs a message with the RSA algorithm for public key cryptography without revealing the message content (Chaum, 1983, 1989). One possible utilization of this technique is to time-stamp digital payments.

Consider a debtor who would like to have a payment blindly signed by a bank. The bank has a public key e, a private key d, and a public modulus N. The debtor chooses a random number k between 1 and N and keeps this number secret. The payment p is "enveloped" by computing

$p \times k^e \bmod N$

before sending the message to the bank. The bank signs it with its private key so that

$(p \times k^e)^d \bmod N = p^d \times k \bmod N$

and returns the signed payment to the debtor. The debtor can now extract the signed note $p^d \bmod N$ by dividing the preceding number by k. To verify that the note received from the bank is the one that was sent, the debtor can raise it to the power e, because (as will be shown in Appendix 3B)

$(p^d)^e \bmod N = p$

The various payment protocols for digital money take advantage of blind signatures to satisfy the conditions of anonymity.
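The arithmetic can be traced end to end with deliberately tiny RSA parameters (toy numbers for illustration only; real moduli run to thousands of bits; Python 3.8+ is assumed for the modular inverse `pow(k, -1, N)`):

```python
# Bank's toy RSA parameters: N = 61 * 53, and e * d = 1 mod lcm(60, 52).
N, e, d = 3233, 17, 413

p = 1234          # the payment to be signed blindly, p < N
k = 71            # debtor's secret blinding factor, coprime with N

blinded = (p * pow(k, e, N)) % N      # "envelope" p * k^e mod N
signed = pow(blinded, d, N)           # bank computes (p * k^e)^d = p^d * k mod N
note = (signed * pow(k, -1, N)) % N   # debtor divides by k to extract p^d mod N

assert pow(note, e, N) == p           # raising to the power e recovers p
```

The bank never sees p itself, only the blinded value, yet the unblinded note carries a valid signature on p.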
#### 3.7.4  Verification of the Integrity with Symmetric Cryptography

The message authentication code (MAC) is the result of a one-way hash function that depends on a secret key; this mechanism guarantees simultaneously the integrity of the message content and the authentication of the sender. (As previously mentioned, some authors call the MAC the "integrity check value" or the "cryptographic checksum.") The most obvious way of constructing a MAC is to encrypt the hash value with a symmetric block encryption algorithm. The MAC is then affixed to the initial message, and the whole is sent to the receiver. The receiver recomputes the hash value by applying the same hash function to the received message and compares the result with the decrypted MAC value; the equality of both results confirms the data integrity. The block diagram in Figure 3.8 depicts these operations, where h represents the hash function, C the encryption function, and D the decryption function.

FIGURE 3.8   Digital signature with symmetric encryption algorithms.

Another variant of this method is to append the secret key to the message before condensing the whole with the hash function. It is also possible to perform the computations with the compression function f(), using the vector of the secret key k, of length L bits, as the initial value in the following recursion:

$k_0 = k$

$k_i = f(k_{i-1}, x_i), \quad i = 1, 2, \ldots, n$

where x = {x1, x2, …, xn} is the message subdivided into n vectors, each of B bits. The MAC is the value of the final output $k_n$.

The procedure that several U.S. and international standards advocate (e.g., ANSI X9.9 (1986) for the authentication of banking messages, as well as ISO 8731-1 (1987) and ISO/International Electrotechnical Commission (IEC) 9797-2 (2002) for implementing a one-way hash function) is to encrypt the message with a symmetric block encryption algorithm in either the cipher block chaining (CBC) or the cipher feedback (CFB) mode. The MAC is the last encrypted block, which is encrypted one more time in the same CBC or CFB mode.

The following keyed hashing method augments the speed of computation in software implementations and increases the protection, even when the one-way hash algorithm experiences some rare collisions (Bellare et al., 1996). Consider the message X subdivided into n vectors of B bits each, and two keys, k1 and k2, each of L bits. Padding bits are added to the end of the initial message according to a determined pattern. The hashing operations can then be described with the help of two compression functions f1() and f2():

$\mathrm{NMAC}_{k_1, k_2}(x) = f_1\big(k_1, f_2(k_2, x)\big)$

where k1 and k2 are used as the initial values of f1() and f2(), respectively, and x = {x1, x2, …, xn}. The result that this method yields is denoted as the Nested Message Authentication Code (NMAC). It is, in effect, constructed by applying the compression functions in sequence: the first on the padded initial message and the second on the product of the first operation after padding. The disadvantage of this method is that it requires access to the source code of the compression functions to change the initial values. In addition, it requires the usage of two secret keys. This explains the current popularity of the keyed-hash message authentication code (HMAC), which is described in IETF RFC 2104 (1997) and NIST FIPS 198-1 (2008). This method uses a single key k of L bits. Assuming that the function H() represents the initial hash function, the value of the HMAC is computed in the following manner:

$\mathrm{HMAC}(x) = H\big[(\bar{k} \oplus \mathit{opad}) \parallel H\big[(\bar{k} \oplus \mathit{ipad}) \parallel x\big]\big]$

In this construction, $\bar{k}$ is the vector formed from the key k, of minimum length L bits, after padding with a series of 0 bits to reach a total length of B bits. The variables ipad and opad are constants for inner padding and outer padding, respectively: ipad is the octet 0x36 repeated as many times as needed to constitute a block of B bits, and opad is the octet 0x5C repeated the same number of times. For MD5, SHA-1, and SHA-256, the number of repetitions is 64; for SHA-512, whose block size is 128 octets, it is 128. Finally, the symbols ∥ and ⊕ in the previous equation denote, respectively, the concatenation and exclusive OR operations. It should be noted that with the representation

$k_1 = f\big(h_0, \bar{k} \oplus \mathit{opad}\big), \quad k_2 = f\big(h_0, \bar{k} \oplus \mathit{ipad}\big)$

where $h_0$ is the fixed initial value of the hash function, the HMAC becomes the same as the nested MAC.
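The construction can be verified against the HMAC implementation in Python's standard library; in the sketch below, the block size B = 64 octets and the hash function SHA-256 are illustrative choices:

```python
import hashlib
import hmac

B = 64  # block size of SHA-256 in octets

def hmac_sha256(key: bytes, msg: bytes) -> bytes:
    if len(key) > B:                        # keys longer than B are hashed first
        key = hashlib.sha256(key).digest()
    k_bar = key.ljust(B, b"\x00")           # pad k with 0 bits up to B octets
    ipad = bytes(b ^ 0x36 for b in k_bar)   # inner padding: octet 0x36 repeated
    opad = bytes(b ^ 0x5C for b in k_bar)   # outer padding: octet 0x5C repeated
    inner = hashlib.sha256(ipad + msg).digest()
    return hashlib.sha256(opad + inner).digest()

key, msg = b"shared secret k", b"message x"
assert hmac_sha256(key, msg) == hmac.new(key, msg, hashlib.sha256).digest()
```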
#### 3.8  Identification of the Participants

Identification is the process of ascertaining the identity of a participant (whether a person or a machine) by relying on a uniquely distinguishing feature. This contrasts with authentication, which is the confirmation that the distinctive identifier indeed corresponds to the declared user.

Authentication and identification of a communicating entity take place simultaneously when one party sends to the other, in private, a secret that is shared only between them, for example, a password or a secret encryption key. Another possibility is to pose a series of challenges that only the legitimate user is supposed to be capable of answering. A digital signature is the usual means of identification, because it associates a party (a user or a machine) with a secret. Other methods of simultaneous identification and authentication of human users exploit distinctive physiological and behavioral characteristics, such as fingerprints, voiceprints, the shape of the retina, the form of the hand, signature, gait, and so on. The biometric identifiers used in electronic commerce are elaborated upon in the following section.

#### 3.9  Biometric Identification

Biometric systems apply pattern recognition algorithms to specific physiological and/or behavioral characteristics of individuals. Biometric identification systems establish the identity of an individual by matching the features extracted from a biometric image against the entries in a database of templates; identification is used mostly in forensic and law enforcement applications. Verification systems, in contrast, authenticate a person's claimed identity by comparing a newly captured biometric image with that person's biometric template stored in the system or in the user's credential (or identity card) (Maltoni et al., 2003, p. 3). Verification is the operation that controls access to resources such as a bank account or a payment system.

Biometrics was reserved until recently for forensic and military applications but is now used in many civilian applications. Biometric systems present several advantages over traditional security methods based on what a person knows (a password or personal identification number [PIN]) or possesses (a card, a pass, or a physical key). In traditional systems, the user must carry several keys or cards, one for each facility, and remember different passwords to access each system. These keys or cards can be damaged, lost, or stolen, and long passwords can be easily forgotten, particularly if they are difficult and/or changed at regular intervals. When a physical card or key or a password is compromised, it is not possible to distinguish between a legitimate user and one who has acquired access illegally. Finally, the use of biological attributes for identification and authentication bypasses some of the problems associated with cryptography (e.g., key management).

There are two main categories of biometric features. The first category relates to behavioral patterns and acquired skills, such as speech, handwriting, or keystroke patterns. The second category comprises physiological characteristics, such as facial features, iris morphology, retinal texture, hand geometry, or fingerprints. Methods based on gait, odor, or genetics using deoxyribonucleic acid (DNA) have limited applications for electronic payment systems, and methods using vascular patterns, palm prints, palm veins, or ear features have not been applied on a wide scale in electronic commerce.

Biometric verification consists of five steps: image acquisition during the registration phase, feature extraction, storage, matching, and decision-making. The digital image of the person under examination originates from a sensor in the acquisition device (e.g., a scanner or a microphone). This image is processed to extract a compact profile that should be unique to that person. This profile, or signature, is then archived in a reference database that can be centralized or distributed according to the architecture of the system. In most cases, registration must be done in person, in the presence of an operator, to record the necessary biometric template.

The accuracy of a biometric system is typically measured in terms of its false acceptance rate (an impostor is accepted) and its false rejection rate (a legitimate user is denied access). These rates are interdependent and are adjusted according to the required level of security. Other measures distinguish between the decision error rates and the matching errors, in the cases where the system allows multiple attempts or includes multiple templates for the same user (Faundez-Zanuy, 2006).
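The interplay between these two error rates can be illustrated with a toy matcher; the feature vectors and the similarity score below are entirely hypothetical, and real systems use far more elaborate models. Moving the decision threshold up reduces false acceptances at the price of more false rejections:

```python
import random

random.seed(2016)

def score(template, sample):
    """Toy similarity score in (0, 1]: higher means a closer match."""
    d = sum(abs(a - b) for a, b in zip(template, sample)) / len(template)
    return 1.0 / (1.0 + d)

template = [random.random() for _ in range(16)]            # enrolled features
genuine = [f + random.gauss(0.0, 0.05) for f in template]  # same user, new capture
impostor = [random.random() for _ in range(16)]            # a different user

threshold = 0.85   # tuning it trades false acceptances against false rejections
print("genuine score :", round(score(template, genuine), 3))
print("impostor score:", round(score(template, impostor), 3))
print("accept genuine?", score(template, genuine) >= threshold)
print("accept impostor?", score(template, impostor) >= threshold)
```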
The choice of a particular system depends on several factors, such as the following:

1. The accuracy and reliability of the identification or verification; the result should not be affected by the environment or by aging.
2. The cost of installation, maintenance, and operation.
3. The scale of applicability of the technique; for example, handwriting recognition is not useful for illiterate people.
4. The ease of use.
5. The reproducibility of the results; in general, physiological characteristics are more reproducible than behavioral characteristics.
6. The resistance to counterfeits and attacks.

ISO/IEC 19794 consists of several parts specifying the data formats for biometric data, as follows:

1. Part 2: Finger minutiae data
2. Part 3: Finger pattern spectral data
3. Part 4: Finger image data
4. Part 5: Face image data
5. Part 6: Iris image data
6. Part 7: Signature time series data
7. Part 8: Finger pattern skeletal data
8. Part 9: Vascular image data
9. Part 10: Hand geometry silhouette data

Part 11, which concerns dynamic data collected during a manual signature, is currently being prepared.

#### 3.9.1  Fingerprint Recognition

Fingerprinting is a method of identification and identity verification based on the ridge patterns of the fingertips. The method rests on the belief that these ridge patterns are unique to each individual and that, in normal circumstances, they remain stable over an individual's life. In the past, fingerprints were collected by coating the fingertips with a special ink and pressing them on paper to record a negative image. Fingerprint examiners would then look for specific features, or minutiae, in that image as the distinguishing characteristics that identify an individual.

Today, fingerprints are captured electronically by placing the finger on a small scanner using optical, electric, thermal, or optoelectronic transducers (Pierson, 2007, pp. 82–87). Optical transducers are charge-coupled devices (CCDs) that measure the reflections of a light source on the finger, focused through a prism. Electric transducers measure the fluctuations in the capacitance between the user's fingers and sensors on the surface of a special mouse or home button; another electric technique measures the changes in the electric field between a resin plate on which the finger rests and the derma, with a low-tension alternating current injected into the finger pulp. Thermal techniques rely on a transducer tracking the temperature gradient between the ridges and the valleys of the fingerprint. Optoelectronic methods record the image of the fingerprint on a layer of polymers that converts it into a proportional electric current.
Finally, ultrasound sensors are better able to penetrate the epidermis to image the ridge pattern underneath it, but they are not yet ready for mass-market applications.

During the enrollment phase, the image of the user's fingerprint is recorded and then processed to extract the features that form the reference template describing the user's minutiae. The template must consist of stable and reliable indices, insensitive to defects in the image that could have been introduced by dirty fingers, wounds, or deformities. Depending on the application, the template may be stored in a central database or recorded locally on a magnetic card or an integrated circuit card issued to the individual. ISO/IEC 19794-2 (2011) defines three standard data formats for the data elements of the templates.

During verification, fingerprints may be used alone or to supplement another identifier, such as a password, a PIN, or a badge. The system processes the new image to extract the relevant features, and the matcher conducts a one-to-one comparison with the user's template retrieved from storage.

There are two well-known categories of fingerprint recognition algorithms. Most algorithms are minutiae based: they measure a variety of features, such as ridge endings and bifurcations (where a ridge splits in two), and compare them to the stored minutiae templates. Image-based methods use other characteristics, such as orientation, ridge shape, and texture. The latter approach is particularly useful when the establishment of a reliable minutiae map is difficult, for example, because of a bad-quality imprint or a damaged fingerprint. The verification algorithm must be insensitive to potential translations, rotations, and distortions. The degree of similarity between the features extracted from the captured image and those in the stored template is described by an index that varies from 0% to 100% or from 1 to 5. The traditional measure was a count of the number of minutiae that corresponded, typically 12–17 (Ratha et al., 2001; Pierson, 2007, pp. 168–169).

NIST and the Federal Bureau of Investigation (FBI) collaborated to produce a large database of fingerprints gathered from crime scenes, with their corresponding minutiae, to train and evaluate new algorithms for automatic fingerprint recognition. NIST also provides the NIST Biometric Image Software (NBIS) for biometric processing and analysis, free of charge and with no licensing requirements (a previous version was called NIST Fingerprint Image Software, or NFIS). It includes a fingerprint image quality algorithm, NFIQ, that analyzes a fingerprint image and assigns it a quality value from 1 (highest quality) to 5 (lowest quality).

NIST has conducted several evaluations of fingerprint technologies. In the latest evaluations, in 2014, several databases from the federal government and law enforcement were used, with different fingerprint combinations varying from a single index finger up to 10 fingers, with impressions on flats and plain and rolled impressions. (Rolled impressions are obtained by rolling the finger from side to side to capture the full width of the pattern.) It was found that, for a false-positive identification rate of $10^{-3}$, the false-negative identification rates (i.e., the miss rates) ranged from 1.9% for a single index finger down to 0.1% for 10 fingers, plain and rolled; that is, more fingers improved accuracy.
The most accurate results were achieved with 10 fingers, with searches in datasets of 3 and 5 million subjects (Watson et al., 2014).

There are now several large-scale applications using fingerprints as biometrics. In 2002, the International Civil Aviation Organization (ICAO) adopted the following biometrics for machine-readable travel documents: facial recognition, fingerprint, and iris recognition. The templates are recorded as 2D PDF417 barcodes or encoded in magnetic stripes, in integrated circuit cards (with contacts or contactless), or on optical memory (International Civil Aviation Organization, 2004, 2011). In 2003, the International Labor Organization (ILO) added two finger minutiae–based biometric templates to the international identity document of seafarers, encoded in the PDF417 stacked code format of ISO/IEC 15438 (International Labor Organization, 2006). In 2009, India launched the Aadhaar national identity scheme, using the fingerprints of the 10 digits and iris biometrics to assign each citizen a unique 12-digit identification number (Unique Identification Authority of India, 2009).

Scanned fingerprints are also used in many payment applications. A typical approach is to capture the fingerprint by placing the finger on the reader for identification and then to enter a PIN to authorize the payment. Citibank launched a biometric credit card in Singapore in 2006 using equipment from Pay By Touch; this technology was also used by several grocery chains in the United States and in the United Kingdom. In 2013, Apple incorporated a fingerprint scanner in its iPhone 5S under the brand name Touch ID. The percentage of false rejections in commercial systems reaches about 3%, and the rate of false acceptances is less than one per million.

Fingerprint spoofing has a long tradition but was more difficult with analog techniques. Since 2001, it has been shown that automatic fingerprint recognition with digital images is vulnerable to reconstructions of fake fingers made of gelatin ("gummy fingers") or of silicone rubber filled with conductive carbon ("conductive silicone fingers"). The patterns imprinted on the artificial fingers can be extracted from the negative imprints of real fingers; from latent fingerprints lifted with a specialized toolkit, digitized with a scanner, and then used to imprint the artificial finger; or from high-resolution photos of the true fingerprints (Matsumoto et al., 2002; Maltoni et al., 2003, pp. 286–291; Pierson, 2007, pp. 116–120; Galbally et al., 2010, 2011). A more recent technique is to use minutiae templates to reconstruct a digital image that is similar to the original fingerprint (Galbally et al., 2010). One advantage of thermal and capacitive sensors is that the quality of fake fingerprint images from gummy fingers is low, which could help in identifying spoofing attempts (Galbally et al., 2011). However, less than a month after the introduction of Apple's Touch ID, which uses a capacitive transducer, the Chaos Computer Club in Germany announced that the system could be fooled with a fingerprint photographed on a glass surface (Chaos Computer Club, 2013; Musil, 2013).

It is worth noting that these results invalidate the assumption that the ILO made in 2006 when it allowed visible biometric data, namely, that "it is sufficiently difficult to reconstitute from the biometric data that will be stored in the barcode either an actual fingerprint (..)
or a fraudulent device that could be used to misrepresent seafarer intent or presence" (International Labor Organization, 2006, § 5.1, p. 9). It should also be noted that some subjects do not have fingerprints or have badly damaged fingerprints.

Finally, the individuality of fingerprints remains an assumption. Most human experts and automatic fingerprint recognition algorithms declare that fingerprints have a common source if they are "sufficiently" similar in a given representation scheme, based on a similarity metric (Maltoni et al., 2003, chapter 8). This has important implications in forensic investigation but will probably be less significant in payment applications.

#### 3.9.2  Iris Recognition

The iris is the colored area between the white of the eye and the pupil. Its texture is an individual characteristic that remains constant for many years. The digital image of the iris texture can be encoded as an IrisCode and used for the identification of individuals with high accuracy, with an error probability of the order of 1 in 1.2 million. It is even possible to distinguish among identical twins and to separate the two irises of the same person (Flom and Safir, 1987; Daugman, 1994, 1999; Wildes, 1997). The inspection is less invasive than a retinal scan. The technique was developed by John Daugman and was patented by Iridian Technologies (previously known as IriScan, Inc., a company formed by two ophthalmologists and a computer scientist). This patent expired in 2004 in the United States and in 2007 in the rest of the world, leaving the field open to new entrants. Since 2006, Iridian has been a division of Viisage Technology. A major supplier of the technology is currently the Lithuanian company Neurotechnology.

During image acquisition, the person merely faces a camera connected to a computer at a distance of about 1 m. Iris scanning software can also be downloaded to mobile phones and smartphones. Some precautions need to be respected during image capture, particularly to avoid reflections by ensuring uniform lighting; contact lenses produce a regular structure in the processed image. Image acquisition can be accomplished in special kiosks or with desktop cameras. The latter are cheaper but have proven difficult for some users, because they must look into a hole to locate an illuminated ring. ISO/IEC 19794-6 (2011) defines the format of the captured data.

Iris recognition is typically used as a secondary identifier in addition to fingerprint imaging. Other potential applications include the identification of users of automatic bank teller machines and the control of access either to a physical building or equipment or to network resources. It is also being evaluated as a means to increase the speed of passenger processing at airports. As mentioned earlier, India is collecting iris images, together with the 10 fingerprints and an image of the face, for its new national identity scheme (Unique Identification Authority of India, 2009).

It was shown, however, that the accuracy of the current generation of commercial handheld iris recognition devices degrades significantly in tactical environments (Faddis et al., 2013). Furthermore, it was demonstrated that the templates, or IrisCodes, contain sufficient information to allow the reconstruction of synthetic images that closely match the images collected from real subjects and can then deceive commercial iris recognition systems. The success rate of the synthetic images ranges from 75% to 95%, depending on the level set for the false acceptance rate.
The attack succeeds even though the reconstructed image of the iris is not based on the stored templates that the system uses for recognition (Galbally et al., 2012). This creates the possibility of stealing someone's identity through his or her iris image.

#### 3.9.3  Face Recognition

Police and border control agencies have been using facial recognition to scan passports. This is done on the basis of a template that encodes information derived from proprietary mathematical representations of features of the face image, such as the distance between the eyes, the gap between the nostrils, and the dimensions of the mouth. U.S. government initiatives, starting with the Face Recognition Technology (FERET) program in September 1993, have provided the necessary stimulus to improve the speed and the accuracy of the technology. ISO/IEC 19794-5 (2011) defines the format for the captured data.

Some consumer applications, such as Apple's iPhoto or Google's Picasa, have included facial recognition to identify previously defined faces in photo albums. Another consumer application is Churchix (http://churchix.com, last accessed January 26, 2016), designed to help church administrators and event managers track their members' attendance by comparison with a database of reference photos. Nowadays, almost all mobile phones have a photographic camera, and newer programs allow people to take a picture of a person and then search the Internet for possible matches to that face. Polar Rose, a company from Malmö, Sweden, which produced such software, was acquired by Apple in 2010. There are also tools to help search for pictures on Facebook, and other applications allow users to identify a person in a photo and then search the web for other pictures in which that person appears (Palmer, 2010).

Facial recognition, however, is not very accurate in less controlled situations. The accuracy is sensitive to accessories such as sunglasses, to beards or mustaches, to grins, and to head tilts (yaw) of 15° or more. Facial recognition can be combined with other biometrics, such as fingerprints (Yang, 2010; de Oliveira and Motta, 2011) or keystrokes (Popovici et al., 2014), to increase the overall system accuracy.

An international contest for face verification algorithms was organized in conjunction with the 2004 International Conference on Biometric Authentication (ICBA). The National Institute of Standards and Technology (NIST) has tracked the progress of face recognition technologies in a series of benchmark evaluations of prototype algorithms from universities and industrial research laboratories in 1997, 2002, 2006, 2010, and 2013. In recent years, many participants were major commercial suppliers from the United States, Japan, Germany, France, and Lithuania, as well as Chinese academic institutions. The 2010 results showed that an algorithm from NEC was the most accurate, with a chance of about 92% of identifying an unknown subject from a database of 1.6 million records, the search lasting about 0.4 seconds. The search duration, however, increased linearly with the size of the enrolled population, while the search duration of less accurate algorithms increased more slowly with the size of the database (Grother et al., 2010). In general, the differences between the best and worst performances can be significant.
The National Police Academy in Japan followed the NIST study with a focused investigation of the NEC algorithm to evaluate the influence of several factors: (1) the effect of age, due to the time lapse between the recorded picture and the subject; (2) the shooting angle, that is, the yaw; (3) the effect of facial expression; and (4) the effect of accessories such as caps, sunglasses, beards, and mustaches. The results confirmed that, from 2001 to 2010, the top-of-the-line face recognition algorithms improved significantly. The overall recognition rate was 95% (compared to 48% in 2001); in particular, 98% of the images without glasses were classified correctly (compared to 73% in 2001). Similar improvements were observed for the effects of the yaw (provided that the angle is less than 30°), of the expression, and of the various accessories, such as spectacles (Horiuchi and Hada, 2013). It should be noted that these results were obtained with pictures taken under good lighting conditions, which is not always the case for police investigations, which often start with low-quality images taken at bad angles, under poor lights, and/or with small face sizes.

The NIST (2013) study used reasonable-quality law enforcement mug shot images, poor-quality webcam images collected in similar detention operations, and moderate-quality visa application images. The range thus covered high quality, as in identity documents (passports, visa applications, driver licenses), down to poorly controlled surveillance situations. The 2013 study confirmed that NEC had the most accurate set of algorithms, with an error rate of less than half that of the second-place algorithm. It was followed by those supplied by Morpho (a Safran company), which merged its algorithms with those acquired from L1 Identity Solutions in 2011. Other leading suppliers are Toshiba, Cognitec Systems, and 3M/Cogent. Algorithms with lesser accuracy (error rates higher than 12%) are those from Neurotechnology, Zhuhai Yisheng, HP, Decatur, and Ayonix, while some Chinese universities provided high-accuracy algorithms as well. The ranking of performance across algorithms, however, must be weighed against application-specific requirements; some algorithms are more suited to the recognition of difficult webcam images (Grother and Ngan, 2014).

#### 3.9.4  Voice Recognition

Depending on the context, voice recognition systems have one of two functions:

1. Speaker identification: This is a one-to-many comparison to establish the identity of the speaker. A digital voiceprint, newly acquired from the end user, is compared with a set of stored acoustic references to determine the person from his or her utterances.
2. Speaker verification: This consists in verifying that the voiceprint matches the acoustic references of the person that the speaker claims to be.

The size of the voice template that characterizes an individual varies depending on the compression algorithm and the duration of the recording. Speaker verification is implemented in payment systems as follows. The voice template that characterizes a subject is extracted during registration through the pronunciation of one or several passwords or passphrases. During the verification phase, the speaker utters one of these passwords, and the system attempts to match the features extracted from the new voice sample with the previously recorded voice template. After identity verification, the person is allowed access. Bad sound quality can cause failures.
In remote applications, this quality depends on several factors, such as the type of telephone handset, the ambient noise (particularly in the case of hands-free telephony), and the type of connection (wireline or wireless). Voice quality is also affected by a person's health, stress, emotions, and so on. Some actors have the capability of mimicking other voices. Furthermore, there are commercial personalized text-to-speech systems that produce voice automation prompts using familiar voices (Burg, 2014). Speech synthesis algorithms require about 10–40 hours of professionally recorded material of the voice to be mimicked; using archival recordings, they could even bring voices back from the dead. An easier way to defraud the system would be to play back recordings of authentic commands. This is why automatic speaker recognition systems must be supplemented with other means of identification.

#### 3.9.5  Signature Recognition

Handwritten signatures have long been established as the most widespread means of identity verification, particularly for administrative and financial institutions. They are a behavioral biometric that changes over time and is affected by many physical and emotional conditions. ISO/IEC 19794-7 specifies two formats for the time series describing captured signatures: one for general use and a more compact one for use with smart cards.

Systems for automatic signature recognition rely on the so-called permanent characteristics of an individual's handwriting to match the acquired signature with a prerecorded sample of the handwriting of the person whose identity is to be verified. It goes without saying that handwritten signatures do not work in areas with high illiteracy, and they are not persistent before 16 years of age (Blanco-Gonzalo et al., 2013).

Signature recognition can be static or dynamic. In static verification, also called offline verification, the signature is compared with an archived image of the signature of the person to be authenticated. In dynamic signature recognition, also called online verification, signing is done with a stylus on a digitizing tablet, or with the fingertip on a pad connected to a computer or on the screen of a tablet or mobile phone. The algorithms are based on techniques such as dynamic time warping, which can compare two time series even when their timescales differ. The parameters under consideration are derived from the written text and from various movement descriptors, such as the pressure exerted on the pad or screen, the speed and direction of the movement, the accelerations and decelerations, the angles of the letters, and the azimuth and altitude of the pen with respect to the plane of the pad.

The static approach is more susceptible to forgeries, because the programs have yet to include all the heuristics that graphologists take into consideration based on their experience. There are enormous differences in the way people from different countries sign, and there are specific approaches for non-Latin scripts (Impedovo and Pirlo, 2008). Even though users prefer stylus-based devices to finger-based systems, an initial evaluation showed that their performances are comparable, and finger signing on an Apple iPad 2 can even exceed that of stylus-based devices, although there is no obvious explanation (Blanco-Gonzalo et al., 2013).
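As an illustration of the dynamic time warping technique mentioned above, the following sketch computes the alignment cost between a reference trace and two candidates; the pen-pressure values are made up, and a production system would combine many more movement descriptors:

```python
def dtw(a, b):
    """Dynamic time warping distance between two numeric time series."""
    inf = float("inf")
    d = [[inf] * (len(b) + 1) for _ in range(len(a) + 1)]
    d[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Each cell extends the cheapest of the three allowed alignments.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[-1][-1]

# The same signature written at two speeds, and a forged attempt.
reference = [0.0, 0.4, 0.9, 1.0, 0.6, 0.2]
slower = [0.0, 0.2, 0.4, 0.7, 0.9, 1.0, 0.8, 0.5, 0.2]
forgery = [0.9, 0.1, 0.8, 0.0, 0.7, 0.3]

print(dtw(reference, slower))   # small despite the different timescales
print(dtw(reference, forgery))  # noticeably larger
```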
#### 3.9.6  Keystroke Recognition

Keystroke recognition is a technique based on an individual's typing style in terms of rhythm, speed, duration, and pressure of keystrokes. It is based on the premise that each person's typing style is distinct enough to verify a claimed identity. Keystroke measures are based on several repetitions of a known sequence of characters (e.g., the login and the password) (Obaidat and Sadoun, 1999; Dowland et al., 2002). There are no standards for the data collected, and the user profile is based on a small dataset, which could result in high error rates. Typically, the samples are collected using structured texts at the initiation of a session to complement a classic login approach, and the keystrokes are monitored unobtrusively as the person is keying in information. This approach, however, does not protect against session hijacking, that is, the seizing of control of the session of a legitimate user after a valid access. There is active research on the continuous verification of the user's identity using free text, that is, what users type during their normal interaction with a computing device (Ahmed and Traore, 2014; Popovici et al., 2014). Another possibility to reduce the error rate and to avoid session hijacking is to combine keystroke dynamics with face recognition (Giot et al., 2010). One important security risk of keystroke monitoring is that any software positioned at an input device can also leak sensitive information, such as passwords (Shah et al., 2006).

#### 3.9.7  Hand Geometry

In the past several years, hand geometry recognition has been used in large-scale commercial applications to control access to enterprises, customs, hospitals, military bases, prisons, and so on. The principle is that some features related to the shape of the human hand, such as the lengths of the fingers, are relatively stable and peculiar to each individual. The image acquisition requires the subject's cooperation. The user positions the hand, with outstretched fingers, flat on a plate facing the lens of a digital camera; the fingers are spread, resting against guiding pins soldered onto the plate. The plate is surrounded by mirrors on three sides to capture the frontal and side-view images of the hand. The time for taking one picture is about 1.2 seconds. Several pictures (three to five) are taken, and the average is stored in memory as the reference for the individual. One commercial implementation uses a 3D model with 90 input parameters to describe the hand geometry with a template of 9 octets.

#### 3.9.8  Retinal Recognition

The retina is a special tissue of the eye that responds to light pulses by generating proportional electrical discharges to the optic nerve. It is supplied by a network of blood vessels according to a configuration that is characteristic of each individual and that is stable throughout life; the retina can even distinguish among twins. A retinal map can be drawn by recording the reflections of a low-intensity infrared beam with the help of a charge-coupled device (CCD) to form a descriptor of 35 octets. The equipment used is relatively large and costly, and image acquisition requires the cooperation of the subject: it entails looking through an eyepiece and concentrating on an object while a low-power laser beam is injected into the eye. As a consequence, the technique is restricted to access control for high-security areas, such as military installations, nuclear plants, high-security prisons, bank vaults, and network operation centers. Currently, this technique is not suitable for remote payment systems or for large-scale deployment.
#### 3.9.9  Standards for Biometrics

FIPS 201 (National Institute of Standards and Technology, 2013) defines the procedures for the personal identity verification of federal employees and contractors, including proof of identity, registration, identity card issuance and reissuance, chains of trust, and identity card usage. The biometrics used are fingerprints, iris images, and facial images. NIST Special Publication (SP) 800-76-2 contains the technical specifications for the biometric data used in the identification cards; in particular, the biometric data use the Common Biometric Exchange File Format (CBEFF) of ANSI INCITS 398 (Podio et al., 2014).

ISO/IEC 7816-11 (2004) specifies the commands and data objects used in identity verification with biometric means stored in integrated circuit cards (smart cards). ITU-T Recommendation X.1084 (2008) defines the handshake protocol between the client (the card reader or the terminal) and the verifier, which is an extension of TLS.

ANSI INCITS 358, first published in 2002 and later revised, contains BioAPI 2.0, a standard interface that allows a software application to communicate with and utilize the services of one or more biometric technologies. (ISO/IEC 19784-1 [2006] is the corresponding international standard.) ITU-T Recommendation X.1083 (ISO/IEC 24708) specifies the BioAPI Interworking Protocol (BIP), that is, the syntax, semantics, and encodings of the set of messages ("BIP messages") that enable a BioAPI-conforming application to request biometric operations from BioAPI-conforming biometric service providers. It also specifies extensions to the architecture and the BioAPI framework (specified in ISO/IEC 19784-1) that support the creation, processing, sending, and reception of BIP messages.

BioAPI 2.0 is the culmination of several U.S. initiatives, starting in 1995 with the Biometric Consortium (BC). The BC was chartered to be the focal point for the U.S. government on the research, development, testing, evaluation, and application of biometric-based systems for personal identification/verification. In parallel, the U.S. Department of Defense (DOD) initiated a program to develop a Human Authentication–Application Program Interface (HA–API). Both activities were merged, and version 1.0 of the BioAPI was published in March 2000. The BioAPI specification was also the basis of ANSI 358-2002[R2012], first published in 2002 and revised in 2007 and 2013.

ANSI X9.84 (2010) describes the security framework for using biometrics for the authentication of individuals in financial services. The scope of the specification is the management of the biometric data throughout their life cycle: collection, storage, retrieval, and destruction. Finally, the Fast Identity Online (FIDO) Alliance was established in July 2012 to develop technical specifications that define scalable and interoperable mechanisms to authenticate users by a hand gesture, a biometric identifier, or the press of a button.

#### 3.9.10  Summary and Evaluation

There is a distinction between the flawless image of biometric technology presented by promoters and futuristic authors and the actual performance in terms of robustness and reliability. For example, the hypothesis that biometric traits are individual and that they are sufficiently stable in the long term is just an empirical observation. Systems that perform well in many applications have yet to scale up to cases involving tens of millions of users.
False match rates that may be acceptable for specific tasks in small-scale applications can be too costly or even dangerous in applications designed to provide a high level of security. For some techniques, the biometric images obtained during data acquisition vary significantly from one session to another.

Attacks on biometric systems can be grouped into two main categories: those that are common to all information systems and those that are specific to biometrics. The first category comprises the following:

1. Attacks on the communication channels to intercept the exchanges, for example, to snoop on the image of the biometrics in question or on the features extracted from that image. The stolen information can then be replayed as if it came from a legitimate user.
2. Attacks on the various hardware and software modules, such as physical tampering with the sensors or the insertion of malware at the feature extraction module or the matcher module.
3. Attacks on the database where the templates are stored.

Attacks specific to biometrics constitute the second category: the automatic recognition system is deceived with the use of fake fingers or reconstructed iris patterns. In total, there are eight places in a generic biometric system where attacks may occur (Ratha et al., 2001); they are identified in the following list:

1. Fake biometrics are presented to the sensor, such as a fake finger, a copy of a signature, or a face mask.
2. Replay attacks bypass the sensor with previously digitized biometric signals.
3. An override of the feature extraction process through a malware attack allows the attacker to control the features to be processed.
4. A more difficult attack involves tampering with the template extracted from the input signal before it reaches the matcher. This is possible if the feature extractor and the matcher are not collocated but are connected over a long-haul network. Another condition for this attack to succeed is that the method for coding the features into a template be known.
5. An attack on the matcher makes it produce preselected results.
6. Manipulation of the stored template, particularly if it is stored on a smart card to be presented to the authentication system.
7. Interception and modification of the template retrieved from the database on its way to the matcher.
8. Overriding the final decision by changing the displayed results.

Encryption addresses the problem of data confidentiality, and digital signatures solve the problem of data integrity. Replay attacks can be thwarted by time-stamping the exchanges or by including nonces. If the matcher and the database are collocated in a secure location, many of these attacks can be thwarted. However, automated biometric authentication is still susceptible to attacks against the sensor that replay previous signals or use fake samples. Current research activities are aimed at solving these problems. For example, since 2011, the University of Cagliari in Italy and Clarkson University in the United States have hosted LivDet, an annual competition for fingerprint "liveness" (i.e., aliveness) detection. The goal of the competition is to compare different algorithms for separating "fake" from "live" fingerprint images. One of the criteria used in software-based techniques of liveness detection is the quality of the image, on the assumption that the degree of sharpness, the color and luminance levels, the local artifacts, and other aspects of a fake image will be of lower quality than those of a real sample acquired under normal operating conditions (Galbally et al., 2014).
Hardware-based techniques add specific sensors to detect living characteristics, such as sweat, blood pressure, or specific reflection properties of the eye.

In 2014, to improve the scientific basis of forensic evidence in courts, NIST and the U.S. Department of Justice jointly established the Forensic Science Standards Board (FSSB) within NIST's structure to foster the development of forensic standards. The board comprises members from U.S. universities and professional forensic associations.

One of the most problematic issues is that once a biometric image or template is compromised, it is compromised forever and cannot be reissued, updated, or destroyed. Reissuing a new PIN or password is possible, but producing a new set of fingerprints is very difficult.

#### 3.10  Authentication of the Participants

The purpose of the authentication of the participants is to reduce, if not eliminate, the risk that intruders might masquerade under legitimate appearances to pursue unauthorized operations.

As stated previously, when the participants utilize a symmetric encryption algorithm, they are the only ones who share a secret key. As a consequence, the utilization of this algorithm guarantees, in theory, the confidentiality of the messages, the correct identification of the correspondents, and their authentication. The key distribution servers also act as authentication servers (ASs), and the good functioning of the system depends on the capability of all participants to protect the encryption key.

In contrast, when the participants utilize a public key algorithm, a user is considered authentic when that user can prove that he or she holds the private key that corresponds to the public key attributed to the user. A certificate issued by a certification authority certifies the association of the public key (and therefore of the corresponding private key) with the recognized identity. In this manner, identification and authentication proceed in two different ways: identification with the digital signature and authentication with the certificate. Without such a guarantee, a hostile user could create a pair of private/public keys and then distribute the public key as if it were that of the legitimate user. Although the same public key of a participant could equally serve to encrypt the messages addressed to that participant (confidentiality service) and to verify the electronic signature of the documents that the participant transmits (integrity and identification services), in practice a different public key is used for each set of services.

According to the authentication framework defined by ITU-T Recommendations X.500 (2001) and X.811 (1995), simple authentication may be achieved by one of several means:

1. Name and password in the clear
2. Name, password, and a random number or a time stamp, with integrity verification through a hash function
3. Name, password, a random number, and a time stamp, with integrity verification using a hash function

Strong authentication requires a certification infrastructure that includes the following entities:

1. Certification authorities to back the users' public keys with "sealed" certificates (i.e., certificates signed with the private key of the certification authority) after verification of the physical identity of the owner of each public key.
2. A database of authentication data (a directory) that contains all the data relative to the public keys, such as their values, their durations of validity, and the identities of their owners.
Any user should be able to query such a database to obtain the public key of a correspondent or to verify the validity of the certificate that the correspondent presents.

3. A naming or registering authority, which may be distinct from the certification authority. Its principal role is to define and assign unique distinguished names to the different participants.

The certificate guarantees the correspondence between a given public key and the entity whose unique distinguished name is contained in the certificate. The certificate is sealed with the private key of the certification authority. When the certificate owner signs documents with the private signature key, the partners can verify the validity of the signature with the help of the corresponding public key contained in the certificate. Similarly, to send a confidential message to a certified entity, it is sufficient to query the directory for the public key of that entity and then use that key to encrypt the messages, which only the holder of the associated private key will be able to decipher.

#### 3.11  Access Control

Access control is the process by which only authorized entities are allowed access to resources, as defined in the access control policy. It is used to counter the threat of unauthorized operations, such as the unauthorized use, disclosure, modification, or destruction of protected data or the denial of service to legitimate users. ITU-T Recommendation X.812 (1995) defines the framework for access control in open networks. Accordingly, access control can be exercised with the help of a supporting authentication mechanism at one or more of the following layers: the network layer, the transport layer, or the application layer. Depending on the layer, the corresponding authentication credentials may be X.509 certificates, Kerberos tickets, or simple identity and password pairs.

There are two types of access control mechanisms: identity based and role based. Identity-based access control uses the authenticated identity of an entity to determine and enforce its access rights. In contrast, for role-based access control, access privileges depend on the job function and its context; additional factors may thus be considered in the definition of the access policy, for example, the strength of the encryption algorithm, the type of operation requested, or the time of day. Role-based access control provides an indirect means of bestowing privileges through three distinct phases: the definition of roles, the assignment of privileges to roles, and the distribution of roles among users. This facilitates the maintenance of access control policies, because it is sufficient to change the definition of roles to make global updates, without revising the distribution from top to bottom.

At the network layer, access control in IP networks is based on packet filtering using the protocol information in the packet header, specifically the source and destination IP addresses and the source and destination port numbers. Access control is achieved through "line interruption" by a certified intermediary or a firewall that intercepts and examines all exchanges before allowing them to proceed. The intermediary is thus located between the client and the server, as indicated in Figure 3.9. Furthermore, the firewall can be charged with other security services, such as encrypting the traffic for confidentiality at the network level or verifying integrity using digital signatures. It can also inspect incoming and outgoing exchanges before forwarding them, to enforce the security policies of a given administrative domain. However, the intervention of the trusted third party must be transparent to the client.

FIGURE 3.9   Authentication by line interruption at the network layer.
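Reduced to its simplest form, such a filter matches header fields against an ordered rule list. The following sketch uses only the Python standard library; the networks, ports, and rules are hypothetical:

```python
from ipaddress import ip_address, ip_network

# Ordered rule list: (source network, destination port or None for any, action).
RULES = [
    (ip_network("192.0.2.0/24"), 443, "allow"),    # internal clients to HTTPS
    (ip_network("203.0.113.0/24"), None, "deny"),  # blacklisted network
]

def filter_packet(src_ip: str, dst_port: int) -> str:
    """Apply the first matching rule; deny by default."""
    for net, port, action in RULES:
        if ip_address(src_ip) in net and port in (None, dst_port):
            return action
    return "deny"

print(filter_packet("192.0.2.7", 443))     # allow
print(filter_packet("192.0.2.7", 25))      # deny (no rule matches)
print(filter_packet("203.0.113.9", 443))   # deny (blacklisted)
```

The weakness discussed next is already visible in this sketch: the decision rests entirely on header fields that a spoofed packet can forge.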
The firewall can also inspect incoming and outgoing exchanges before forwarding them, to enforce the security policies of a given administrative domain. However, the intervention of this trusted third party must be transparent to the client.

FIGURE 3.9   Authentication by line interruption at the network layer.

The success of packet filtering is vulnerable to packet spoofing if the address information is not protected and if individual packets are treated independently of the other packets of the same flow. As a remedy, the firewall can include a proxy server or an application-level gateway that implements a subset of application-specific functions. The proxy is capable of inspecting all packets in light of previous exchanges of the same flow before allowing their passage in accordance with the security policy in place; this is called stateful inspection. Thus, by filtering incoming and outgoing electronic mail, file transfers, exchanges of web applications, and so on, application gateways can block unauthorized operations and protect against malicious code such as viruses. The filter uses a list of keywords, the size and nature of the attachments, the message text, and so on. Configuring the gateway is a delicate undertaking, because the intervention of the gateway should not hamper daily operation.

A third approach is to centralize the management of access control for a large number of clients and users with different privileges in a dedicated server. Several protocols have been defined to regulate the exchanges between network elements and Access Control Servers. Remote Authentication Dial In User Service (RADIUS), originally specified in RFC 2865 (2000) and extended in RFC 6929 (2013), provides client authentication, authorization, and the collection of accounting information on the calls. In RFC 1492 (1993), Cisco described a protocol called Terminal Access Controller Access Control System (TACACS), which was later updated as TACACS+. Both RADIUS and TACACS+ require a secret key between each network element and the server.

Depicted in Figure 3.10 is the operation of RADIUS in terms of a client/server architecture. The RADIUS client resides within the Access Control Server, while the RADIUS server relies on an X.509 directory accessed through the Lightweight Directory Access Protocol (LDAP). Both X.509 and LDAP will be presented later in this chapter.

FIGURE 3.10   Remote access control with RADIUS.

Note that both server-to-client authentication and user-to-client authentication are outside the scope of RADIUS. Also, because RADIUS does not include provisions for congestion control, large networks may suffer degraded performance and data loss. It should also be noted that there are some known vulnerabilities in RADIUS or in its implementations (Hill, 2001).

Commercial systems implement two basic approaches for end-user authentication: one-time password and challenge response (Forrester et al., 1998). In a typical one-time password system, each user has a device that generates a number periodically (usually every minute) using the current time, the card serial number, and a secret key held in the device. The generated number is the user's one-time password. This procedure requires that the time reference of the Access Control Server be synchronized with the card so that the server can regenerate an identical number.
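The sketch below illustrates this style of time-based one-time password: it hashes the current minute and the card serial number with the secret key held in the device. The field layout and the six-digit output are assumptions of this sketch (commercial tokens, and the later TOTP standard of RFC 6238, differ in detail).

```python
import hashlib
import hmac
import time

def one_time_password(secret_key: bytes, serial: str, t=None) -> str:
    """Derive a 6-digit code from the current minute, the card serial
    number, and a secret key held in the token."""
    window = int((time.time() if t is None else t) // 60)  # changes every minute
    msg = serial.encode() + window.to_bytes(8, "big")
    digest = hmac.new(secret_key, msg, hashlib.sha256).digest()
    return f"{int.from_bytes(digest[:4], 'big') % 1_000_000:06d}"

# The server, whose clock is synchronized with the token, regenerates
# the same code and compares it with the submitted value.
key, serial = b"shared-secret", "CARD-0042"
assert one_time_password(key, serial, 1_700_000_000) == \
       one_time_password(key, serial, 1_700_000_030)  # same minute, same code
```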
In challenge–response systems, the user enters a personal identification number to activate a handheld authenticator (HHA) and then initiates a connection to an Access Control Server. The Access Control Server, in turn, provides the user with a random number (a challenge), and the user enters this number into the handheld device to generate a unique response. This response depends on both the challenge and a secret key shared between the user's device and the server. It is returned to the Access Control Server, which compares it with the expected response and decides accordingly.

Two-factor authentication is also used in many services, particularly in the mobile commerce domain. In addition to the regular password, the site to be accessed sends the user requesting access a text message with a new one-time code at each login attempt.

#### 3.12  Denial of Service

Denial-of-service attacks prevent normal network usage by blocking the access of legitimate users to the network resources they are entitled to, by overwhelming the hosts with additional or superfluous tasks to prevent them from responding to legitimate requests, or by slowing their response time below satisfactory limits. In a sense, denial of service results from the failure of access control. Nevertheless, these attacks are inherently associated with IP networks for two reasons: network control data and user data share the same physical and logical bandwidths, and IP is a connectionless protocol for which the concept of admission control does not apply. As a consequence, when the network size exceeds a few hundred nodes, network control traffic (due, e.g., to the exchange of routing tables) may, under some circumstances, occupy a significant portion of the available bandwidth. Further, inopportune or ill-intentioned user packets may be able to bring down a network element (e.g., a router), thereby affecting not only all endpoints that rely on this network element for connectivity but also all other network elements that depend on it to update their view of the network status. Finally, in distributed denial-of-service (DDoS) attacks, a sufficient number of compromised hosts send useless packets toward a victim around the same time, thereby exhausting the victim's resources or bandwidth or both (Moore et al., 2001; Chang, 2002).

As a point of comparison, the public switched telephone network uses an architecture called common channel signaling (CCS), whereby user data and network control data travel on totally separate networks and facilities. It is worth noting that CCS was introduced to protect against fraud. In the older architecture, called channel-associated signaling (CAS), the network data and the user data used separate logical channels on the same physical support.

Denial-of-service attacks work in two principal ways: forcing a protocol state machine into a deadlock and overwhelming the processing capacity of the receiving station. One of the classical examples of protocol deadlock is the SYN flooding attack that perturbs the functioning of the TCP protocol (Schuba et al., 1997). The handshake in TCP is a three-way exchange: a connection request with the SYN packet, an acknowledgment of that request with a SYN/ACK packet, and, finally, a confirmation from the first party with the ACK packet (Comer, 1995, p. 216). Unfortunately, the handshake imposes asymmetric memory and computational loads on the two endpoints, the destination being required to allocate large amounts of memory without authenticating the initial request. Thus, an attacker can paralyze the target machine by exhausting its available resources with a massive number of fake SYN packets.
These packets carry spoofed source addresses so that the acknowledgments are sent to hosts that the victim cannot reach or that do not exist. Otherwise, the attack may fail, because unsolicited SYN/ACK packets arriving at accessible hosts provoke the transmission of RST packets, which, upon arrival, allow the victim to release the resources allocated for a connection attempt.

Internet Control Message Protocol (ICMP) is a protocol that allows an arbitrary machine to return control and error information to the presumed source of a packet. To flood the victim's machine with messages that overwhelm its capacities, an ICMP echo request ("ping") is sent to all the machines of a given network using the subnet broadcast address, but with the victim's address falsely indicated as the source.

The Code Red worm is an example of attacks that exploit defects in the software implementation of some web servers. In this case, Hypertext Transfer Protocol (HTTP) GET requests larger than the regular size (in particular, a payload of 62 octets instead of 60 octets) would cause, under specific conditions, a buffer overflow and an upsurge in HTTP traffic. Neighboring machines with the same defective software will also be infected, thereby increasing network traffic and causing a massive disruption (CERT/CC CA-2001-19, 2002).

Given that IP does not separate user traffic from that of the network, the best solution would be to authenticate all exchanges with trusted certificates. However, authentication of all exchanges increases the computational load, which may be excessive in commercial applications. Short of this, defense mechanisms have to be developed on a case-by-case basis to address specific problems as they arise. For example, resource exhaustion due to the SYN attack can be alleviated by limiting the number of concurrent pending TCP connections, reducing the time-out for the arrival of the ACK packet before calling off the connection establishment, and blocking outgoing packets whose source addresses do not belong to the local network, since those are most likely spoofed.

Another approach is to rebalance the computational load between the two parties by asking the requesting client to solve a puzzle, in the form of a simple cryptographic problem, before allocating the resources needed to establish a connection. To avoid replay attacks, these problems are formulated using the current time, a server secret, and additional information from the client request (Juels and Brainard, 1999). This approach, however, requires that programs for solving puzzles specific to each application be incorporated in the client browser. A similar approach is to require the sender to do some hash calculation before sending a message, as a proof of work that makes spamming uneconomical (Warren, 2012). The difficulty of the proof of work is made proportional to the size of the message, and each message is time-stamped to protect the network against denial-of-service attacks that flood the network with old messages.

The aforementioned solutions would require a complete overhaul of the Internet architecture. Yet, in their absence, electronic commerce remains vulnerable to interference from botnets, which can be thwarted with ad hoc solutions only.
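Before turning to botnets, here is a minimal sketch of the hash-based proof-of-work idea described above; the rule tying the number of required zero bits to the message size is an assumption for illustration.

```python
import hashlib
import time

def proof_of_work(message: bytes, bits: int):
    """Search for a nonce such that SHA-256(stamp || message || nonce)
    starts with `bits` zero bits: costly to produce, cheap to verify."""
    stamp = int(time.time()).to_bytes(8, "big")   # deters replay of old messages
    nonce = 0
    while True:
        h = hashlib.sha256(stamp + message + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(h, "big") >> (256 - bits) == 0:
            return stamp, nonce
        nonce += 1

def check(message: bytes, stamp: bytes, nonce: int, bits: int) -> bool:
    """Receiver side: a single hash suffices to verify the work."""
    h = hashlib.sha256(stamp + message + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(h, "big") >> (256 - bits) == 0

msg = b"an ordinary mail message"
bits = 12 + len(msg) // 256          # assumed rule: difficulty grows with size
stamp, nonce = proof_of_work(msg, bits)
assert check(msg, stamp, nonce, bits)
```

The asymmetry is the point of the design: the sender performs thousands of hash trials on average, while the receiver verifies the result with a single hash.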
A botnet is a virtual network constituted by millions of terminals infected with a specialized virus or worm called a bot. The bot does not destroy data on the infected terminal, nor does it affect any of the typical services; all it does is give the bot master remote control. Each client of the botnet can then be instructed to send messages surreptitiously in spamming campaigns or in distributed denial-of-service attacks. The bot master is also responsible for bot maintenance: fixing errors, ensuring that the bot remains undetected, and changing the IP address of the master server periodically to prevent tracing. Once all is in place, the bot master offers the botnet services in an online auction to the highest bidder, which can be a political opponent or a commercial competitor of the website under attack.

#### 3.13  Nonrepudiation

Nonrepudiation is a service that prevents a person who has accomplished an act from denying it later, in part or as a whole. Nonrepudiation is a legal concept to be defined through legislation. The role of informatics is to supply the necessary technical means to support the service offered according to the law. The building blocks of nonrepudiation include the electronic signature of documents, the intervention of a third party as a witness, time stamping, and sequence numbers. Among the mechanisms for nonrepudiation are a security token sealed with the secret key of the verifier that accompanies the transaction record, time stamping, and sequence numbers. Depending on the system design, the security token sealed with the verifier's secret key can be stored in a tamper-resistant cryptographic module. The generation and verification of the evidence often require the intervention of one or more entities external to the parties to the transaction, such as a notary, a verifier, and an adjudicator of disputes.

ITU-T Recommendation X.813 (1996) defines a general framework for nonrepudiation in open systems. Accordingly, the service comprises the following measures:

• Generation of the evidence
• Recording of the evidence
• Verification of the evidence generated
• Retrieval and reverification of the evidence

There are two types of nonrepudiation services:

1. Nonrepudiation at the origin. This service protects the receiver by preventing the sender from denying having sent the message.
2. Nonrepudiation at the destination. This service plays the inverse role of the preceding function. It protects the sender by demonstrating that the addressee has received the message.

Threats to nonrepudiation include the compromise of keys and the unauthorized modification or destruction of evidence. In public key cryptography, each user is the sole and unique owner of the private key. Thus, unless the whole system has been penetrated, a given user cannot repudiate the messages that are accompanied by his or her electronic signature. In contrast, nonrepudiation is not readily achieved in systems that use symmetric cryptography. A user can deny having sent the message by alleging that the receiver has compromised the shared secret or that the key distribution server has been successfully attacked. A trusted third party would have to verify each transaction to be able to testify in cases of contention. Nonrepudiation at the destination can be obtained using the same mechanisms, but in the reverse direction.

#### 3.13.1  Time Stamping and Sequence Numbers

Time stamping of messages establishes a link between each message and the date of its transmission. This permits the tracing of exchanges and prevents attacks by replaying old messages. If clock synchronization of both parties is difficult, a trusted third party can intervene as a notary and use its own clock as a reference.
The intervention of the "notary" can be in either of the following modes:

• Offline, to fulfill functions such as certification, key distribution, and verification if required, without intervening in the transaction.
• Online, as an intermediary in the exchanges or as an observer collecting the proofs that might be required to resolve contentions. This role is similar to that of a trusted third party at the network layer (firewall) or at the application layer (proxy) but with a different set of responsibilities.

Let us assume that a trusted third party combines the functions of the notary, the verifier, and the adjudicator. Each entity encrypts its messages with the secret key that it has established with the trusted third party before sending the message. The trusted third party decrypts the message with the help of the secret shared with the sending party, time-stamps it, and then reencrypts it with the key shared with the other party. This approach requires the establishment of a secret key between each entity and the trusted third party, which acts as a delivery messenger. Notice, however, that time-stamping procedures have not been normalized, and each system has its own protocol.

Detection of duplication, replay, as well as the addition, suppression, or loss of messages is achieved with the use of a sequence number added before encryption. Another mechanism is to add a random number to the message before encryption. All these means give the addressee the ability to verify that the exchanges genuinely took place during the time interval that the time stamp defines.

#### 3.14  Secure Management of Cryptographic Keys

Key management is a process that continues throughout the life cycle of the keys to thwart unauthorized disclosures, modifications, substitutions, reuse of revoked or expired keys, or unauthorized utilization. Security at this level is a recursive problem, because the same security properties that are required of the cryptographic system must be satisfied in turn by the key management system. The secure management of cryptographic keys relates to key production, storage, distribution, utilization, withdrawal from circulation, deletion, and archiving (Fumy and Landrock, 1993). These aspects are crucial to the security of any cryptographic system. SP 800-57 (National Institute of Standards and Technology, 2012c) is a three-part recommendation from NIST that provides guidance on key management. Each part is tailored to a specific audience: Part 1 is for system developers and system administrators, Part 2 is aimed at system or application owners, while Part 3 is more general and targets system installers, end users, as well as people making purchasing decisions.

#### 3.14.1  Production and Storage

Key production must be done in a random manner and at regular intervals, depending on the degree of security required. Protection of the stored keys has a physical aspect and a logical aspect. Physical protection consists of storing the keys in safes or in secured buildings with controlled access, whereas logical protection is achieved with encryption. In the case of symmetric encryption algorithms, only the secret key is stored. For public key algorithms, storage encompasses the user's private and public keys, the user's certificate, and a copy of the public key of the certification authority. The certificates and the keys may be stored on the hard disk of the certification authority, but there is some risk of attacks or of loss due to hardware failure.
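As a small illustration of these two requirements, the sketch below draws a key from a cryptographically strong random source and stores it only after encryption under a key-encryption key. The use of the third-party `cryptography` package and the file name are choices of this sketch, not of the text.

```python
import secrets
from cryptography.fernet import Fernet  # pip install cryptography

# Production: keys must come from a cryptographically strong random
# source, never from predictable values such as the time of day.
session_key = secrets.token_bytes(32)

# Logical protection of stored keys: encrypt them under a
# key-encryption key (KEK) before writing them to disk.
kek = Fernet.generate_key()            # in practice, kept in a safe or an HSM
wrapped = Fernet(kek).encrypt(session_key)

with open("keystore.bin", "wb") as f:  # placeholder file name
    f.write(wrapped)

# Recovery of the stored key is only possible with the KEK.
assert Fernet(kek).decrypt(wrapped) == session_key
```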
In the case of microprocessor cards, the information related to security, such as the certificate and the keys, is inserted during card personalization. Access to this information is then controlled with a confidential code.

#### 3.14.2  Distribution

The security policy defines the manner in which keys are distributed to entitled entities. Manual distribution by mail or special dispatch (sealed envelopes, tamper-resistant modules) is a slow and costly operation that should only be used for the distribution of the root key of the system. This is the key that the key distributor utilizes to send each participant their keys. An automatic key distribution system must satisfy all of the following security criteria:

• Confidentiality
• Identification of the participants
• Data integrity, by giving proof that the key has not been altered during transmission or replaced by a fake key
• Authentication of the participants
• Nonrepudiation

Automatic distribution can be either point to point or point to multipoint. The Diffie–Hellman key exchange method (Diffie and Hellman, 1976) allows two partners to construct a master key from elements that have been previously exchanged in the clear. A symmetric session key is formed next on the basis of data encrypted with this master key, or with a key derived from it, and exchanged during the identification phase. IKE is a common automated key management mechanism designed specifically for use with IPSec. To distribute keys to several customers, an authentication server can also play the role of a trusted third party and distribute the secret keys to the different parties. These keys will be used to protect the confidentiality of the messages carrying the information on the key pairs.

#### 3.14.3  Utilization, Withdrawal, and Replacement

The unauthorized duplication of a legitimate key is a threat to the security of key distribution. To prevent this type of attack, a unique parameter can be concatenated to the key, such as a time stamp or a sequence number that increases monotonically (up to a certain modulo). The risk that a key is compromised increases proportionately with time and with usage. Therefore, keys have to be replaced regularly without causing service interruption. A common solution that does not impose a significant load is to distribute the session keys on the same communication channels used for the user data. For example, in the SSL/TLS protocol, the initial exchanges provide the necessary elements to form keys that remain valid throughout the session at hand. These elements flow encrypted with a secondary key, called a key encryption key, to preserve their confidentiality. Key distribution services have the authority to revoke a key before its date of expiration after a key loss or because of the user's misbehavior.

#### 3.14.4  Key Revocation

If a user loses the right to employ a private key, if this key is accidentally revealed, or, more seriously, if the private key of a certification authority has been broken, all the associated certificates must be revoked without delay. Furthermore, these revocations have to be communicated to all the verifying entities in the shortest possible time. Similarly, the use of the revoked key by a hostile user should not be allowed. Nevertheless, the user will not be able to repudiate the documents already signed and sent before the revocation of the key pair.
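Returning to the mechanisms of Section 3.14.3, the sketch below binds a monotonically increasing sequence number into each derived session key, so that every key is unique and periodic replacement does not require redistributing the long-term key. The derivation layout is an assumption of this sketch, not a standardized key derivation function.

```python
import hashlib
import hmac

def derive_session_key(key_encryption_key: bytes, seq: int) -> bytes:
    """Derive session key number `seq` from a long-term key encryption
    key; the sequence number makes every derived key unique and allows
    the detection of duplicated or replayed keys."""
    info = b"session-key" + seq.to_bytes(8, "big")
    return hmac.new(key_encryption_key, info, hashlib.sha256).digest()

kek = b"long-term key established out of band"
k1 = derive_session_key(kek, 1)
k2 = derive_session_key(kek, 2)   # periodic replacement: just advance seq
assert k1 != k2
```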
#### 3.14.5  Deletion, Backup, and Archiving

Key deletion implies the destruction of all memory registers and magnetic or optical media that contain either the key or the elements needed for its reconstruction. Backup applies only to encryption keys and not to signature keys; otherwise, the entire structure for nonrepudiation would be put into question. The keys utilized for nonrepudiation services must be preserved in secure archives to accommodate legal delays that may extend for up to 30 years. These keys must be easily recoverable in case of need, for example, in response to a court order. This means that the storage applications must include mechanisms to prevent unrecoverable errors from affecting the ciphertext.

#### 3.14.6  A Comparison between Symmetric and Public Key Cryptography

Systems based on symmetric key algorithms pose the problem of ensuring the confidentiality of key distribution. This translates into the use of a separate secure distribution channel preestablished between the participants. Furthermore, each entity must hold as many keys as there are participants with whom it will enter into contact, so the number of symmetric keys to manage grows quadratically with the number of participants. Public key algorithms avoid such difficulties because each entity owns only one pair of private and public keys. Unfortunately, the computations for public key procedures are more intensive than those for symmetric cryptography. The use of public key cryptography to ensure confidentiality is only practical when the messages are short, even though data compression before encryption with the public key often succeeds in speeding up the computations. Thus, public key cryptography can complement symmetric cryptography to ensure the safe distribution of the secret key, particularly when safer means, such as a direct encounter of the participants or the intervention of a trusted third party, are not feasible. A new symmetric key can then be distributed at the start of each new session and, in extreme cases, at the start of each new exchange.

#### 3.15  Exchange of Secret Keys: Kerberos

Kerberos is the most widely known system for the automatic exchange of keys using symmetric encryption. Its name comes from the three-headed dog that, according to Greek mythology, guarded the gates of the underworld. Kerberos comprises the services of online identification and authentication as well as access control using symmetric cryptography (Neuman and Ts'o, 1994). It allows the management of access to the resources of an open network from nonsecure machines, such as the management of student access to the resources of a university computing center (files, printers, etc.). Kerberos has been the default authentication option since Windows 2000. Since Windows Vista and Windows Server 2008, Microsoft's implementation of the Kerberos authentication protocol enables the use of Advanced Encryption Standard (AES) 128 and AES 256 encryption.

The development of Kerberos started in 1978 within the Athena project at the Massachusetts Institute of Technology (MIT), financed by the Digital Equipment Corporation (DEC) and IBM. Version 5 of the Kerberos protocol was published in 1994 and remains in use. RFC 4120 (2005) provides an overview and the protocol specifications. Release 1.13.2 is the latest edition and was published in May 2015.
The system is built around a Kerberos key distribution center that enjoys the total trust of all participants, with each of whom it has already established a symmetric encryption key. Symmetric keys are attributed to individual users, one for each of their accounts, when they register in person. Initially, the algorithm used for symmetric encryption was the Data Encryption Standard (DES); AES was added in 2005 as per RFC 3962, and DES and other weak cryptographic algorithms were later deprecated (RFC 6649, 2012).

The key distribution center consists of an authentication server (AS) and a ticket-granting server (TGS). The AS controls access to the TGS, which, in turn, controls access to specific resources. Every server shares a secret key with every other server. Finally, during the registration of the users in person, a secret key is established with the AS for each user's account. With this arrangement, a client has access to multiple resources during a session after one successful authentication, instead of repeating the authentication process for each resource.

The operation is explained as follows. After identifying the end user with the help of a login and password pair, the authentication server (AS) sends the client a symmetric session key to encrypt the data exchanges between the client and the TGS. This session key is encrypted with the symmetric key shared between the user and the authentication server. The session key is also contained in the session ticket, which is encrypted with the key preestablished between the TGS and the AS. The session ticket, also called a ticket-granting ticket, is valid for a short period, typically a few hours. During this time period, it can be used to request access to specific services, which is why it is also called an initial ticket. The client presents the TGS with two items of identification: the session ticket and an authentication title encrypted with the session key. The TGS compares the data in both items to verify the client's authenticity and its access privileges before granting access to the specific server requested. Figure 3.11 depicts the interactions among the four entities: the client, the AS, the TGS, and the desired merchant server or resource S.

FIGURE 3.11   Authentication and access control in Kerberos.

The exchanges are now explained.

#### 3.15.1  Message (1): Request of a Session Ticket

A client C that desires to access a specific server S first requests an entrance ticket to the session from the Kerberos AS. To do so, the client sends a message consisting of an identifier (e.g., a login and a password), the identifier of the server S to be addressed, a time stamp $H_1$, and a random number $Rnd$, the last two to prevent replay attacks.

#### 3.15.2  Message (2): Acquisition of a Session Ticket

The Kerberos authentication server responds by sending a message formed of two parts: the first contains the session key $K_{C,TGS}$ and the number $Rnd$ from the first message, both encrypted with the client's secret key $K_C$; the second includes the session ticket $T_{C,TGS}$ destined for the TGS and encrypted with the secret key shared between the TGS and the Kerberos authentication server. The session ticket (ticket-granting ticket) includes several pieces of information, such as the client name C, its network address $Ad_C$, the time stamp $H_1$, the period of validity of the ticket $Val$, and the session key $K_{C,TGS}$.
All these items, with the exception of the server identity TGS, are encrypted with the long-term key $K_{TGS}$ that the TGS shares with the AS. Thus,

$T_{C,TGS} = TGS, K_{TGS}\{C, Ad_C, H_1, Val, K_{C,TGS}\}$

and the message sent to the client is

$K_C\{K_{C,TGS}, Rnd\}, T_{C,TGS}$

where $K\{x\}$ indicates encryption of the message x with the shared secret key K. The client decrypts the message with its secret key $K_C$ to recover the session key $K_{C,TGS}$ and the random number. The client verifies that the random number received is the same as the one sent, as a protection from replay attacks. The time stamp $H_1$ is also used to protect from replay attacks. Although the client will not be able to read the session ticket, because it is encrypted with $K_{TGS}$, it can extract it and relay it to the server. By default, the session ticket $T_{C,TGS}$ is valid for 8 hours. During this time, the client can obtain several service tickets to different services without the need for a new authentication.

#### 3.15.3  Message (3): Request of a Service Ticket

The client constructs an authentication title $Auth$ that contains its identity C, its network address $Ad_C$, the service requested S, a new time stamp $H_2$, and another random number $Rnd_2$ and then encrypts it with the session key $K_{C,TGS}$. The encrypted authentication title can be represented in the following form:

$Auth = K_{C,TGS}\{C, Ad_C, S, H_2, Rnd_2\}$

The request of the service ticket consists of the encrypted authentication title and the session ticket $T_{C,TGS}$:

$Auth, T_{C,TGS}$

#### 3.15.4  Message (4): Acquisition of the Service Ticket

The TGS decrypts the ticket content with its secret key $K_{TGS}$, deduces the shared session key $K_{C,TGS}$, and extracts the data related to the client's service request. With knowledge of the session key, the TGS can decrypt the authentication title and compare the data in it with those that the client has supplied. This comparison gives formal proof that the client is the entity to which the AS issued the session ticket. The time stamps confirm that the message is not an old message that has been replayed. Next, the TGS returns a service ticket for accessing the specific server S. The exchanges described by Messages (3) and (4) can be repeated for all other servers available to the user as long as the validity of the session ticket has not expired.

The message from the TGS has two parts: part one contains a service key $K_{C,S}$ between the client and the server S and the number $Rnd_2$, both encrypted with the shared session key $K_{C,TGS}$; part two contains the service ticket $T_{C,S}$ destined for the server S and encrypted with the secret key $K_{S,TGS}$ shared between the server S and the TGS. As earlier, the service ticket destined for the server S includes several pieces of information, such as the identity of the server S, the client name C, its network address $Ad_C$, a time stamp $H_3$, the period of validity of the ticket $Val$, and, if confidentiality is desired, the service key $K_{C,S}$. All these items, with the exception of the server identity S, are encrypted with the long-term key $K_{S,TGS}$ that the TGS shares with the specific server. Thus,

$T_{C,S} = S, K_{S,TGS}\{C, Ad_C, H_3, Val, K_{C,S}\}$

and the message sent to the client is

$K_{C,TGS}\{K_{C,S}, Rnd_2\}, T_{C,S}$

The client decrypts the message with the shared session key $K_{C,TGS}$ to recover the service key $K_{C,S}$ and the random number. The client verifies that the random number received is the same as the one sent, as a protection from replay attacks.
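To fix ideas, here is a toy simulation of Messages (1) and (2), with the Fernet construction of the third-party `cryptography` package standing in for the symmetric cipher. The message layout, the JSON encoding, and the key handling are simplifications of this sketch, not the Kerberos wire format.

```python
import json
import time
from cryptography.fernet import Fernet  # pip install cryptography

# Long-term keys: one per principal, established at registration.
k_c = Fernet.generate_key()      # client's secret key K_C
k_tgs = Fernet.generate_key()    # key shared by the AS and the TGS, K_TGS

def as_issue_tgt(client: str, address: str, rnd: int):
    """Authentication server: build Message (2) from Message (1)."""
    k_c_tgs = Fernet.generate_key()          # session key K_C,TGS
    ticket = Fernet(k_tgs).encrypt(json.dumps({
        "client": client, "address": address,
        "issued": time.time(), "validity_s": 8 * 3600,
        "session_key": k_c_tgs.decode(),
    }).encode())                             # T_C,TGS: opaque to the client
    for_client = Fernet(k_c).encrypt(json.dumps({
        "session_key": k_c_tgs.decode(), "rnd": rnd,
    }).encode())
    return for_client, ticket

# Client side: decrypt its part, check the random number, keep the
# ticket to present to the TGS later.
for_client, tgt = as_issue_tgt("alice", "203.0.113.5", rnd=314159)
part = json.loads(Fernet(k_c).decrypt(for_client))
assert part["rnd"] == 314159     # replay-protection check
```

Messages (3) through (6) repeat the same pattern one level down, with the TGS playing the role of the AS and the service key $K_{C,S}$ playing the role of the session key.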
#### 3.15.5  Message (5): Service Request

The client constructs a new authentication title $Auth_2$ that contains its identity C, its network address $Ad_C$, a new time stamp $H_4$, and another random number $Rnd_3$ and then encrypts it with the service key $K_{C,S}$. The encrypted authentication title can be represented as follows:

$Auth_2 = K_{C,S}\{C, Ad_C, H_4, Rnd_3\}$

The request of the service consists of the encrypted new authentication title and the service ticket $T_{C,S}$:

$Auth_2, T_{C,S}$

#### 3.15.6  Message (6): Optional Response of the Server

The server decrypts the content of the service ticket with the key $K_{S,TGS}$ it shares with the TGS to derive the service key $K_{C,S}$ and the data related to the client. With knowledge of the service key, the server can verify the authenticity of the client. The time stamps confirm that the message is not a replay of old messages. If the client has requested the server to authenticate itself, the server will return the random number $Rnd_3$ encrypted with the service key $K_{C,S}$. Without knowledge of the secret key $K_{S,TGS}$, the server would not have been able to extract the service key $K_{C,S}$ and produce this response.

The preceding description shows that Kerberos is mostly suitable for networks administered by a single administrative entity. In particular, the Kerberos key distribution center fulfills the following roles:

• It maintains a database of all secret keys (except the key between the client and the server, $K_{C,S}$). These keys have a long lifetime.
• It keeps a record of users' login identities, passwords, and access privileges. To fulfill this role, it may need access to an X.509 directory.
• It produces and distributes the encryption keys and ticket-granting tickets to be used for a session.

#### 3.16  Public Key Kerberos

The utilization of a central depot for all symmetric keys increases the potential for traffic congestion due to the simultaneous arrival of many requests. In addition, centralization threatens the whole security infrastructure, because a successful penetration of the storage could put all the keys in danger (Sirbu and Chuang, 1997). Finally, the management of the symmetric keys (distribution and update) becomes a formidable task when the number of users increases.

The public key version of Kerberos simplifies key management, because the server authenticates the client directly using the session ticket and the client's certificate sealed by the Kerberos certification authority. The session ticket itself is sealed with the client's private key and then encrypted with the server's public key. The authentication title in the service request to the server can thus be described as follows:

$Auth = C, certificate, [K_r, S, PK_C, T_{auth}]_{SK_C}$

where

• $T_{auth}$ is the initial time for authentication
• $K_r$ is a one-time random number that the server will use as a symmetric key to encrypt its answer
• {…} represents encryption with the server's public key $PK_S$, while […] represents the seal computed with the client's private key $SK_C$

This architecture improves speed and security.
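The sketch below mimics the seal-and-encrypt pattern of the authentication title with primitives from the `cryptography` package: an Ed25519 signature stands in for the seal computed with $SK_C$, and RSA-OAEP encryption under $PK_S$ protects the tuple. For size reasons, the seal travels alongside the encrypted tuple here rather than inside it; that layout, like the algorithm choices, is an assumption of this sketch.

```python
import json
import secrets
import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ed25519, padding, rsa

# Client signature key pair (SK_C/PK_C) and server encryption key pair.
client_sk = ed25519.Ed25519PrivateKey.generate()
server_sk = rsa.generate_private_key(public_exponent=65537, key_size=2048)

kr = secrets.token_bytes(32)     # one-time key K_r for the server's answer
payload = json.dumps({"kr": kr.hex(), "server": "S",
                      "t_auth": time.time()}).encode()

seal = client_sk.sign(payload)   # [...]_SK_C: seal with the client private key
enc = server_sk.public_key().encrypt(        # {...}: confidentiality under PK_S
    payload,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)

# The request would carry C, the client certificate, enc, and seal; the
# server decrypts with SK_S, then verifies the seal with PK_C taken from
# the certificate. verify() raises InvalidSignature on tampering.
client_sk.public_key().verify(seal, payload)
```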
#### 3.16.1  Where to Find Kerberos?

The official web page for Kerberos is located at http://web.mit.edu/kerberos/www/index.html. A Frequently Asked Questions (FAQ) file on Kerberos can be consulted at the following address: ftp://athena-dist.mit.edu/pub/kerberos/KERBEROS.FAQ. Tung (1999) is a good compendium of information on Kerberos.

The Swedish Institute of Computer Science distributes a free version of Kerberos, called Heimdal, written by Johan Danielsson and Assar Westerlund. The differences between the Heimdal and MIT APIs are listed at http://web.mit.edu/kerberos/krb5-1.13/doc/appdev/h5l_mit_apidiff.html.

In general, commercial vendors offer the same Kerberos code that is available at MIT for enterprise solutions. Their main function is to provide support for the installation and maintenance of the code, and their offers may include administration tools, more frequent updates and bug fixes, and prebuilt (and guaranteed to work) binaries. They may also provide integration with various smart cards for more secure and portable user authentication. A partial list of commercial implementations is available at http://web.ornl.gov/~jar/commerce.htm.

#### 3.17.1  The Diffie–Hellman Exchange

The Diffie–Hellman algorithm is the first algorithm for the exchange of public keys. It exploits the difficulty of calculating discrete logarithms in a finite field, as compared with the ease of calculating exponentials in the same field. The technique was first published in 1976 and entered the public domain in March 1997. The key exchange comprises the following steps:

1. The two parties agree on two large integers, p and g, such that p is prime and g is a generator (primitive root) modulo p. These two numbers do not have to be hidden, but their choice can have a substantial impact on the strength of the security achieved.
2. A chooses a large random integer x and sends to B the result of the computation:
   $X = g^x \bmod p$
3. B chooses another large random integer y and sends to A the result of the computation:
   $Y = g^y \bmod p$
4. A computes
   $k = Y^x \bmod p = g^{xy} \bmod p$
5. Similarly, B computes
   $k = X^y \bmod p = g^{xy} \bmod p$

The value k is the secret key that the two correspondents have exchanged.

The aforementioned protocol does not protect from man-in-the-middle attacks. A secure variant is as follows (Ferguson et al., 2010, pp. 183–193):

1. The two parties agree on (p, q, g) such that p and q are prime numbers, typically $2^{255} < q < 2^{256}$ (i.e., q is a 256-bit prime), $p = Nq + 1$ for some even N, and $g = \alpha^N \bmod p$ for some $\alpha$ in the finite field modulo p, such that $g \neq 1$ and $g^q = 1 \bmod p$.
2. A chooses a large random integer $x \in [1, \dots, q-1]$ and sends B the result of the computation:
   $X = g^x \bmod p$
3. B verifies that $X \in [2, \dots, p-1]$, that q is a divisor of $(p - 1)$, that $g \neq 1$ and $g^q = 1 \bmod p$, and that $X^q = 1 \bmod p$.
4. Next, B chooses a large random integer $y \in [1, \dots, q-1]$ and sends A the result of the computation:
   $Y = g^y \bmod p$
5. A verifies that $Y \in [2, \dots, p-1]$ and that $Y^q = 1 \bmod p$ and then computes
   $k = Y^x \bmod p = g^{xy} \bmod p$
6. Similarly, B computes
   $k = X^y \bmod p = g^{xy} \bmod p$

SSL/TLS uses the method called ephemeral Diffie–Hellman, where the exchange is short-lived, thereby achieving perfect forward secrecy, that is, a key cannot be recovered after its deletion.

The Key Exchange Algorithm (KEA) was developed in the United States by the National Security Agency (NSA) on the basis of the Diffie–Hellman scheme. All calculations in KEA are based on a prime modulus of 1024 bits with a key of 1024 bits and an exponent of 160 bits.
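A toy run of the basic exchange using Python's built-in modular exponentiation; the tiny prime and the choice of g are assumptions for illustration only, since real deployments use standardized groups of 2048 bits or more.

```python
import secrets

# Publicly agreed parameters (toy sizes; real groups are >= 2048 bits).
p = 2**61 - 1          # a Mersenne prime, used here purely for illustration
g = 3                  # assumed generator for this sketch

x = secrets.randbelow(p - 2) + 1       # A's secret exponent
y = secrets.randbelow(p - 2) + 1       # B's secret exponent

X = pow(g, x, p)       # A -> B, in the clear
Y = pow(g, y, p)       # B -> A, in the clear

k_a = pow(Y, x, p)     # computed by A
k_b = pow(X, y, p)     # computed by B
assert k_a == k_b      # both now hold k = g^(xy) mod p
```

An eavesdropper sees p, g, X, and Y but, without x or y, recovering k requires solving the discrete logarithm problem.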
#### 3.17.2  Internet Security Association and Key Management Protocol

IETF RFC 4306 (2005) combines contents that were previously described in several separate documents. One of these is the Internet Security Association and Key Management Protocol (ISAKMP), a generic framework to negotiate point-to-point security associations and to exchange key and authentication data between two parties. In ISAKMP, the term security association has two meanings. It is used to describe the secure channel established between two communicating entities. It can also be used to define a specific instance of that secure channel, that is, the services, mechanisms, protocol, and protocol-specific set of parameters associated with the encryption algorithms, the authentication mechanisms, the key establishment and exchange protocols, and the network addresses.

ISAKMP specifies the formats of the messages to be exchanged and their building blocks (payloads). A fixed header precedes a variable number of payloads chained together to form a message. This provides a uniform management layer for security at all layers of the ISO protocol stack, thereby reducing the amount of duplication within each security protocol. This centralization of the management of security associations has several advantages. It reduces connection setup time, improves the reliability of software, and allows for future evolution when improved security mechanisms are developed, particularly if new attacks against current security associations are discovered. To avoid subtle mistakes that can render a key exchange protocol vulnerable to attacks, ISAKMP includes five default exchange types. Each exchange specifies the content and the ordering of the messages during communications between the peers.

Although ISAKMP can run over TCP or UDP, many implementations use UDP on port 500. Because transport over UDP is unreliable, reliability is built into ISAKMP. The header includes, among other information, two 8-octet "cookies" (also called "syncookies" because of their role against TCP SYN flooding), which constitute an anticlogging mechanism. Each side generates a cookie specific to the two parties and assigns it to the remote peer entity. The cookie is constructed, for example, by hashing the IP source and destination addresses, the UDP source and destination ports, and a locally generated secret random value. ISAKMP recommends including the date and the time in this secret value. The concatenation of the two cookies identifies the security association and gives some protection against the replay of old packets or SYN flooding attacks. The protection against SYN flooding assumes that the attacker will not intercept the SYN/ACK packets sent to the spoofed addresses used in the attack. As was explained earlier, the arrival of unsolicited SYN/ACK packets at a host that is accessible to the victim elicits the transmission of an RST packet, thereby telling the victim to free the allocated resources (Juels and Brainard, 1999; Simpson, 1999).

The negotiation in ISAKMP comprises two phases: the establishment of a secure channel between the two communicating entities and the negotiation of security associations over that secure channel. For example, in the case of IPSec, the Phase I negotiation defines a key exchange protocol, such as the Internet Key Exchange (IKE), and its attributes. The Phase II negotiation concerns the actual cryptographic algorithms that achieve the IPSec functionality.

IKE is an authenticated exchange of keys consistent with ISAKMP. It is a hybrid protocol that combines aspects of the Oakley Key Determination Protocol and of SKEME. Oakley utilizes the Diffie–Hellman key exchange mechanism with signed temporary keys to establish the session keys between the host machines and the network routers (Cheng, 2001). SKEME is an authenticated key exchange that uses public key encryption for anonymity and nonrepudiation and provides means for quick key refreshment (Krawczyk, 1996).
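A sketch of the anticlogging cookie construction described above: the addresses, the UDP ports, and a locally generated secret that folds in the date and time are hashed, and the first 8 octets are retained. The exact ordering of the fields is an assumption of this sketch, since the specification leaves the construction to the implementation.

```python
import hashlib
import os
import time

# Local secret; ISAKMP recommends folding the date and time into it.
SECRET = hashlib.sha256(os.urandom(32) + str(time.time()).encode()).digest()

def isakmp_cookie(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> bytes:
    """8-octet anticlogging cookie bound to a specific pair of peers."""
    material = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode() + SECRET
    return hashlib.sha256(material).digest()[:8]

# The concatenation of the initiator and responder cookies identifies
# the security association; a flooder cannot forge valid cookies
# without receiving them at the spoofed address.
print(isakmp_cookie("198.51.100.7", "203.0.113.5", 500, 500).hex())
```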
IKE is the default key exchange protocol for IPSec. The protocol was first specified in 1998, and the latest revision, IKEv2, is defined in RFC 7296 (2014) and RFC 7427 (2015). None of the data used for key generation is stored, and a key cannot be recovered after deletion, thereby achieving perfect forward secrecy. The price is a heavy cryptographic load, which becomes more important the shorter the duration of the exchanges. Therefore, to minimize the risks from denial-of-service attacks, ISAKMP postpones the computationally intensive steps until authentication is established.

ISAKMP is implemented in OpenBSD, Solaris, Linux, and Microsoft Windows as well as in some IBM products. Cisco routers implement ISAKMP for VPN negotiation using the cryptographic library from Cylink Corporation. A denial-of-service vulnerability in this implementation was discovered and fixed in 2012 (Cisco, 2012).

#### 3.18  Certificate Management

When a server receives a request signed with a public key algorithm, it must first authenticate the declared identity that is associated with the key. Next, it verifies whether the authenticated entity is allowed to perform the requested action. Both verifications rely on a certificate that a certification authority has signed. As a consequence, certification and certificate management are the cornerstones of e-commerce on open networks.

Certification can be decentralized or centralized. Decentralized certification utilizes PGP (Pretty Good Privacy) (Garfinkel, 1995) or OpenPGP. Decentralization is popular among those concerned about privacy, because each user determines the credence accorded to a public key and assigns a confidence level to the certificates that the owner of this public key has issued. Similarly, a user can recommend a new party to members of the same circle of trust. This mode of operation also eliminates the vulnerability to attacks on a central point and prevents the potential abuse of a single authority. However, the users have to manage the certificates by themselves (update, revocation, etc.). Because that load grows rapidly with the number of participants, this mode of operation is impractical for large-scale operations such as online commerce.

In centralized certification, a root certification authority issues the certificates of subordinate or intermediate certification authorities, which in turn certify other secondary authorities or end entities. This is denoted as X.509 certification, after the name of the relevant recommendation from the ITU-T. ITU-T Recommendation X.509 is identical to ISO/IEC 9594-8, a joint standard from the ISO and the IEC. It was initially approved in November 1988, and its seventh edition was published in October 2012. It is one of a series of joint ITU-T and ISO/IEC specifications that describe the architecture and operations of public key infrastructures (PKI). Some wireless communication systems, such as IEEE 802.16, use X.509 certificates and RSA public key encryption to perform key exchanges.

X.500 (ISO/IEC 9594-1) provides a general view of the architecture of the directory, its access capabilities, and the services it supports. X.501 (ISO/IEC 9594-2) presents the different information and administrative models used in the directory. X.509 (ISO/IEC 9594-8) defines the base specifications for public key certificates, both identity certificates and attribute certificates. X.511 (ISO/IEC 9594-3) defines the abstract services of the directory (search, creation, deletion, error messages, etc.).
X.518 (ISO/IEC 9594-4) covers searches and referrals in a distributed directory system using the Directory System Protocol (DSP). X.519 (ISO/IEC 9594-5) specifies four protocols. The Directory Access Protocol (DAP) provides a directory user agent (DUA) at the client side with access to retrieve or modify information in the directory. The Directory System Protocol (DSP) provides for the chaining of requests to the directory system agents (DSAs) that constitute a distributed directory. The Directory Information Shadowing Protocol (DISP) provides for the shadowing of information held on one DSA to another DSA. Finally, the Directory Operational Binding Management Protocol (DOP) provides for the establishment, modification, and termination of bindings between pairs of DSAs. X.520 (ISO/IEC 9594-6) and X.521 (ISO/IEC 9594-7) specify selected attribute types (keywords) and selected object classes to ensure compatibility among implementations. X.525 (ISO/IEC 9594-9) covers the sharing of information through replication of the directory using DISP. The relationships among these different protocols are shown in Figure 3.12.

FIGURE 3.12   Communication protocols among the components of the X.500 directory system.

A simplified version of DAP is the Lightweight Directory Access Protocol (LDAP). An LDAP server is an application process that is part of the directory and responds to requests conforming to the LDAP protocol. The LDAP server may have the information stored in its local database or may forward the request to another DSA that understands the LDAP protocol. The latest specification of LDAP is defined in IETF RFCs 4511, 4512, and 4513 (2006). The main simplifications are as follows:

1. LDAP is carried directly over the TCP/IP stack, thereby avoiding some of the OSI protocols at the application layer.
2. It uses simplified information models and object classes.
3. Being restricted to the client side, LDAP does not address what happens on the server side, for example, the duplication of the directory or the communication among servers.
4. Some directory queries are not supported.
5. Finally, Version 3 of LDAP (LDAPv3) does not mandate the strong authentication mechanisms of X.509; strong authentication is achieved on a session basis with the TLS protocol.

IETF RFC 4513 (2006) specifies a minimum subset of security functions common to all implementations of LDAPv3 that use the SASL (Simple Authentication and Security Layer) mechanism defined in IETF RFC 4422 (2006). SASL adds authentication services and, optionally, integrity and confidentiality. Simple authentication is based on the name/password pair, concatenated with a random number and/or a time stamp, with integrity protection using MD5.

#### 3.18.1  Basic Operation

After receiving over an open network a request encrypted using public key cryptography, a server has to accomplish the following tasks before answering the request:

1. Obtain the certificate of the requester.
2. Verify the signature by the certification authority.
3. Extract the requester's public key from the certificate.
4. Verify the requester's signature on the request message.
5. Verify the certificate's validity by comparison with the Certificate Revocation List (CRL).
6. Establish a certification path between the public key certificate to be validated and an authority recognized by the relying party, for example, the root authority. That certification path, or chain of trust, starts from an end entity and ends at the authority that validates the path (the root certification authority or the trust anchor, as explained later).
7. Extract the name of the requester.
8. Determine the privileges that the requester enjoys.

The certificate permits the accomplishment of Tasks 1 through 7 of the preceding list. In the case of payments, the last step consists of verifying the financial data relating to the requester, in particular, whether the account mentioned has sufficient funds. In the general case, the problem is much more complex, especially if the set of possible queries is large. The most direct method is to assign a key to each privilege, which increases the complexity of key management.

The Certificate Management Protocol (CMP) of IETF RFC 4210 (2005) specifies the interactions between the various components of a public key infrastructure for the management of X.509 certificates (request, creation, revocation, etc.). The Online Certificate Status Protocol (OCSP) of IETF RFC 6960 (2013) specifies the data exchanges between an application seeking the status of one or more certificates and the server providing the corresponding status. This functionality is preferable to sending a full CRL, because it saves bandwidth. IETF RFC 2585 (1999) describes how to use the File Transfer Protocol (FTP) and HTTP to obtain certificates and certificate revocation lists from their respective repositories. During online verification of the certificates, end entities or registration authorities submit their public keys using the certification request format of PKCS #10, as defined in IETF RFC 2986 (2000).

#### 3.18.2  Description of an X.509 Certificate

An X.509 certificate is a record of the information needed to verify the identity of an entity. This record includes the distinguished name of the user, which is a unique name that ties the certificate owner to its public key. The certificate contains additional fields to locate its owner's identity more precisely. The certificate is signed with the private key of the certification authority.

There are three versions of X.509 certificates, Versions 1, 2, and 3, the default being Version 1. X.509 Versions 2 and 3 certificates have an extension field that allows the addition of new fields while maintaining compatibility with the previous versions. The Version 1 pieces of information are listed in Table 3.5.

### TABLE 3.5   Content of a Version 1 X.509 Certificate

| Field Name | Description |
| --- | --- |
| Version | Version of the X.509 (2001) certificate |
| Serial number | Certificate serial number assigned by the certification authority |
| Signature | Identifier of the algorithm and hash function used to sign the certificate |
| Issuer | Distinguished name of the certification authority |
| Validity | Duration of the validity of the certificate |
| Subject | References for the entity whose public key is certified, such as the distinguished name, unique identifier (optional), and so on |
| Subject public key info | Information concerning the algorithm that this public key is an instance of and the public key itself |

Usually, a separate key is used for each security function (signature, identification, encryption, etc.) so that, depending on the function, the same entity may hold several certificates from the same authority.
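For readers who want to inspect the fields of Table 3.5 programmatically, the snippet below reads them from a PEM-encoded certificate with the third-party `cryptography` package; the file name is a placeholder.

```python
from cryptography import x509  # pip install cryptography

with open("certificate.pem", "rb") as f:   # placeholder file name
    cert = x509.load_pem_x509_certificate(f.read())

# The Version 1 fields of Table 3.5:
print(cert.version)                                  # Version
print(cert.serial_number)                            # Serial number
print(cert.signature_algorithm_oid)                  # Signature algorithm
print(cert.issuer.rfc4514_string())                  # Issuer distinguished name
print(cert.not_valid_before, cert.not_valid_after)   # Validity
print(cert.subject.rfc4514_string())                 # Subject distinguished name
print(cert.public_key())                             # Subject public key info
```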
There are two primary types of public key certificates: end entity certificates and certification authority (CA) certificates. The subject of an end entity public key certificate is not allowed to issue other public key certificates, while the subject of a CA certificate is another certification authority. CA certificates fall into the following categories:

• Self-issued certificates, where the issuer and the subject are the same certification authority.
• Self-signed certificates, where the private key that the certification authority used for signing corresponds to the public key that the certificate certifies. This form is typically used to advertise the public key or any other information that the certification authority wishes to make available.
• Cross-certificates, where the issuer is one certification authority and the subject is another certification authority. In a strict hierarchy, the issuer authorizes the subject certification authority to issue certificates, whereas in a distributed trust model, one certification authority recognizes the other.

Cross-certifications are essential for business partners to validate each other's credentials. For cross-certification, the policies and policy constraints must be similar or equivalent on both sides. This requires a mixture of technical, political, and managerial steps. On the technical level, NIST Special Publication 800-15 provides the minimum interoperability specifications for PKI components (MISPC) (Burr et al., 1997).

#### 3.18.3  Attribute Certificates

X.509 defines a third type of public key certificate, called attribute certificates, which are digitally signed by an attribute authority to bind certain prerogatives, such as access rights, to an identity, separately from the authentication of that identity. More than one attribute certificate can be associated with the same identity.

Attribute certificates are managed by a Privilege Management Infrastructure (PMI). This is the infrastructure that supports a comprehensive authorization service in relation to a public key infrastructure. The PKI and the PMI are separate logical and/or physical infrastructures and may be established independently, but they are related. When a single entity acts as both a certification authority and an attribute authority, different keys are used for each kind of certificate. With hierarchical role-based access control (RBAC), higher levels inherit the permissions accorded to their subordinates.

The source of authority (SOA) is the ultimate authority for assigning a set of privileges. It plays a role similar to that of the root certification authority and can be certified by that authority. The SOA may also authorize the further delegation of these privileges, in part or in full, along a delegation path. There may be restrictions on the delegation capability; for example, the length of the delegation path can be bounded, and the scope of the privileges allowed can be restricted downstream. To validate the delegation path, each attribute authority along the path must be checked to verify that it was duly authorized to delegate its privileges.

Although it is quite possible to use public key identity certificates to define what the holder of the certificate may be entitled to, a separate attribute certificate may be useful in some cases, for example:

1. The authority for privilege assignment is distinct from the certification authority.
2. A variety of authorities will be defining access privileges to the same subject.
3. The same subject may have different access permissions depending on the role that the individual plays.
4. There is the possibility of delegation of privileges, in full or in part.
5. The duration of validity of the privilege is shorter than that of the public key certificate.
Conversely, the public key identity certificate may suffice for assigning privileges whenever the following occur:

1. The same physical entity combines the roles of certification authority and attribute authority.
2. The expiration of the privileges coincides with that of the public key certificate.
3. Delegation of privileges is not permitted or, if permitted, all privileges are delegated at once.

#### 3.18.4  Certification Path

The idea behind X.509 is to allow each user to retrieve the public key of certified correspondents so that they can proceed with the necessary verifications. It is sufficient, therefore, to request the closest certification authority to send the public key of the communicating entity in a certificate sealed with the digital signature of that authority. This authority, in turn, relays the request to its own certifying authority, and this permits an escalation through the chain of authorities, or certification path, until reaching the top of the certification pyramid (the root authority, RA). Figure 3.13 is a depiction of this recursive verification.

FIGURE 3.13   Recursive verification of certificates. (Adapted from Ford, W. and Baum, M.S., Secure Electronic Commerce, Pearson Education, Inc., Upper Saddle River, NJ, 1997. With permission.)

Armed with the public key of the destination entity, the sender can include a secret encrypted with the public key of the correspondent and corroborate that the partner is the one whose identity is declared, because, without the private key associated with the public key used in the encryption, the destination will not be able to extract the secret. Obviously, for the two parties to authenticate themselves mutually, both users have to construct the certification path back to a common certification authority. Thus, a certification path is formed by a continuous series of certification authorities between the two users. This series is constructed with the help of the information contained in the directory by going back to a common point of confidence. In Figure 3.13, authorities C, B, and A are the intermediate certification authorities.

From the end entity's perspective, the root authority and an intermediate certification authority perform the same function, that is, they are functionally equivalent. However, the reliability of the system requires that each authority in the chain ensure that the information in the certificates it issues is correct. In other words, security requires that none of the intermediate certification authorities be deficient or compromised. It should be noted that the various intermediate authorities do not have to reside in the same country as each other, nor in the country of the root authority, and either may differ from the country where the data are stored. End users are therefore vulnerable to the different privacy laws of the various jurisdictions. Moreover, the government of any country along the certification path has the possibility of initiating man-in-the-middle attacks by forcing the CAs in its jurisdiction to issue false certificates that can be used to intercept users' data (Soghoian and Stamm, 2010).

Registration relates to the approval or rejection of certificate applications and to requests for the revocation or renewal of certificates; it is distinct from key management and certificate management. In some systems, the registration authority is different from the certification authority; for example, registration may be handled by the human resources (HR) department of a company while certification is handled by the IT department.
In such a case, the HR department will have its own intermediate certification authority. The tree structure of the certification path can be hierarchical or nonhierarchical, as explained next.

#### 3.18.5  Hierarchical Certification Path

According to a notational convention used in earlier versions of X.509, a certificate is denoted by

$\mathrm{authority} \langle\langle \mathrm{entity} \rangle\rangle$

Thus, $X_1 \langle\langle X_2 \rangle\rangle$ indicates the certificate for entity $X_2$ that authority $X_1$ has issued, while

$X_1 \langle\langle X_2 \rangle\rangle\, X_2 \langle\langle X_3 \rangle\rangle \ldots X_n \langle\langle X_{n+1} \rangle\rangle$

represents the certification path connecting the end entity $X_{n+1}$ to authority $X_1$. In other words, this notation is functionally equivalent to $X_1 \langle\langle X_{n+1} \rangle\rangle$, which is the certificate that authority $X_1$ would have issued to the end entity $X_{n+1}$. By constructing this path, another end entity would be able to retrieve the public key of end entity $X_{n+1}$, if that other end entity knows $X_{1p}$, the public key of authority $X_1$. The elementary operation is represented by

$X_{2p} = X_{1p} \cdot X_1 \langle\langle X_2 \rangle\rangle$

where $\cdot$ is an infix operator whose left operand is the public key, $X_{1p}$, of authority $X_1$, and whose right operand is the certificate $X_1 \langle\langle X_2 \rangle\rangle$ delivered to $X_2$ by that same certification authority. The result is the public key of entity $X_2$.

In the example depicted in Figure 3.14, assume that user A wants to construct the certification path toward another user B. A can retrieve the public key of authority W with the certificate signed by X. At the same time, with the help of the certificate of V that W has issued, it is possible to extract the public key of V. In this manner, A would be able to obtain the chain of certificates:

FIGURE 3.14   Hierarchical certification path according to X.509. (From ITU-T Recommendation X.509 (ISO/IEC 9594-8), Information technology—Open systems Interconnection—The directory: Public key and attribute certificate frameworks, 2012, 2000. With permission.)

$X \langle\langle W \rangle\rangle,\ W \langle\langle V \rangle\rangle,\ V \langle\langle Y \rangle\rangle,\ Y \langle\langle Z \rangle\rangle,\ Z \langle\langle B \rangle\rangle$

This itinerary, represented by A→B, is the forward certification path that allows A to extract the public key $B_p$ of B by repeated application of the operation $\cdot$ in the following manner:

$B_p = X_p \cdot (A \rightarrow B) = X_p \cdot X \langle\langle W \rangle\rangle\, W \langle\langle V \rangle\rangle\, V \langle\langle Y \rangle\rangle\, Y \langle\langle Z \rangle\rangle\, Z \langle\langle B \rangle\rangle$

In general, the end entity A also has to acquire the certificates for the return certification path B→A, to send them to its partner:

$Z \langle\langle Y \rangle\rangle,\ Y \langle\langle V \rangle\rangle,\ V \langle\langle W \rangle\rangle,\ W \langle\langle X \rangle\rangle,\ X \langle\langle A \rangle\rangle$

When the end entity B receives these certificates from A, it can unwrap them, starting from $Z_p$, the public key of its own authority Z, to extract the public key of A, $A_p$:

$A_p = Z_p \cdot (B \rightarrow A) = Z_p \cdot Z \langle\langle Y \rangle\rangle\, Y \langle\langle V \rangle\rangle\, V \langle\langle W \rangle\rangle\, W \langle\langle X \rangle\rangle\, X \langle\langle A \rangle\rangle$

As was previously mentioned, such a system does not necessarily impose a unique hierarchy worldwide. In the case of electronic payments, two banks or the fiscal authorities of two countries can mutually certify each other. In the preceding example, the intermediate certification authorities X and Z are cross-certified. If A wants to verify the authenticity of B, it is sufficient to obtain $X \langle\langle Z \rangle\rangle,\ Z \langle\langle B \rangle\rangle$ to form the forward certification path and $Z \langle\langle X \rangle\rangle$ to construct the reverse certification path. This permits the clients of the two banks to be satisfied with the certificates supplied by their respective banks.

A root certification authority may require all its intermediary authorities to keep an audit trail of all exchanges and events, such as key generation, requests for certification, and validation, suspension, or revocation of certificates.
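As a rough illustration of the unwrap operation $\cdot$, the following sketch walks a certification path, verifying each certificate with the public key recovered at the previous step. The Cert structure and the toy signature scheme are invented stand-ins; real X.509 certificates and signature verification are far more involved.

```python
# Minimal sketch of unwrapping a certification path, in the spirit of
# Bp = Xp . X<<W>> W<<V>> ... Z<<B>>. The "certificates" and the
# signature scheme below are mock objects, not real X.509 structures.
from dataclasses import dataclass

@dataclass
class Cert:
    subject: str
    subject_pub: int          # toy public key of the subject
    issuer: str
    signature: int            # toy signature over (subject, subject_pub)

def toy_issue(issuer: str, issuer_pub: int,
              subject: str, subject_pub: int) -> Cert:
    # Stand-in for real signing: binds the subject's key to the issuer's key.
    return Cert(subject, subject_pub, issuer,
                hash((subject, subject_pub)) ^ issuer_pub)

def toy_verify(issuer_pub: int, cert: Cert) -> bool:
    # Stand-in for real signature verification.
    return cert.signature == hash((cert.subject, cert.subject_pub)) ^ issuer_pub

def unwrap_path(root_pub: int, path: list[Cert]) -> int:
    """Apply the infix operator '.' along the path: each certificate is
    verified with the public key recovered at the previous step, and the
    subject's public key becomes the left operand for the next step."""
    pub = root_pub
    for cert in path:
        if not toy_verify(pub, cert):
            raise ValueError(f"invalid certificate for {cert.subject}")
        pub = cert.subject_pub
    return pub                 # public key of the end entity

# X certifies W, and W certifies B; knowing Xp, we recover Bp.
Xp, Wp, Bp = 111, 222, 333
path = [toy_issue("X", Xp, "W", Wp), toy_issue("W", Wp, "B", Bp)]
assert unwrap_path(Xp, path) == Bp
```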
Finally, it should be noted that cross-certification applies to public key certificates and not to attribute certificates.

#### 3.18.6  Distributed Trust Model

If certification authorities are not organized hierarchically, the end entities themselves have to construct the certification path. In practice, the number of operations to be carried out can be reduced with various strategies, for example:

1. Two end entities served by the same certification authority have the same certification path and can exchange their certificates directly. This is the case for the end entities C and A in Figure 3.15.
2. If one end entity is constantly in touch with users that a particular authority has certified, that end entity can store the forward and return certification paths in memory. This reduces the effort for obtaining the other users' certificates to a query into the directory.
3. Two end entities that have each other's certificates can mutually authenticate themselves without querying the directory. This reverse certification is based on the confidence that each end entity has in its certification authority.

Later revisions of X.509 have introduced a new entity called a trust anchor. This is an entity that a party relies on to validate certificates along the certification path. In complex environments, the trust anchor may be different from the root authority, for example, if the length of the certification path is restricted for efficiency.

#### 3.18.7  Certificate Classes

In general, service providers offer several classes of certificates according to the strength of the link between the certificate and the owner's identity. Each class of certificates has its own root authority and possibly registration authorities. Consider, for example, a three-class categorization.

In this case, Class 1 certificates confirm that the distinguished name that the user presents is unique and unambiguous within the certification authority's domain and that it corresponds to a valid e-mail address. They are typically used for domain registration. Class 1 certificates are used for a modest enhancement of security through confidentiality and integrity verification. They cannot be used to verify an identity or to support nonrepudiation services.

Class 2 certificates are also restricted to individuals. They indicate that the information that the user has submitted during the registration process is consistent with the information available in business records or in "well-known" consumer databases. In North America, one such reference database is maintained by Equifax.

Class 3 certificates are given to individuals and to organizations. To obtain a certificate of this class, an individual has to appear physically before an authority, with the public key in their possession, to confirm the identity of the applicant with a formal proof of identity (passport, identity card, electricity or telephone bill, etc.) and the association of that identity with the given public key. If the individual is to be certified as a duly authorized representative of an organization, then the necessary verifications have to be made. Similarly, an enterprise will have to prove its legal existence. The authorities will have to verify these documents by querying the databases for enterprises and by confirming the collected data by telephone or by mail. Class 3 certificates have many business applications.
#### 3.18.8  Certificate Revocation

The correspondence between a public key and an identity lasts only for a period of time. Therefore, certification authorities must maintain revocation lists of certificates that have expired or have been revoked. These lists are continuously updated. Table 3.6 shows the format of the revocation list that Version 1 of X.509 defined. The third revision of X.509 added other optional entries, such as the date of the certificate revocation and the reason for revocation.

### TABLE 3.6   Basic Format of the X.509 Revocation List

| Field | Comment |
| --- | --- |
| Signature | Identifier of the algorithm used to sign the certificates and the parameters used |
| Issuer | Name of the certification authority |
| thisUpdate | Date of the current update of the revocation list |
| nextUpdate | Date of the next update of the revocation list |
| revokedCertificates | References of the revoked certificates, including the revocation date |

The certification practice statement (CPS) describes the circumstances under which certification of end users and of the various intermediate authorities can be revoked and defines who can request that revocation. To inform all the entities of the PKI, certificate revocation lists (CRLs) are published at regular intervals—or when the certificate of an authority is revoked—with the digital signature of the certification authority to ensure their integrity. Among other information, the CRL indicates the issuer's name, the date of issue, the date of the next scheduled CRL, the revoked certificates' serial numbers, and the specific times and reasons for revocation.

In principle, each certification authority has to maintain at least two revocation lists: a dated list of the certificates that it has issued and revoked and a dated list of all the certificates revoked by the authorities that it recognizes. The root certification authority and each of its delegate authorities must be able to access these lists to verify the instantaneous state of all the certificates handled within the authentication system.

Revocation can be periodic or exceptional. When a certificate expires, the certification authority withdraws it from the directory (but retains a copy in a special directory, to be able to arbitrate any conflict that might arise in the future). Replacement certificates have to be ready and supplied to the owner to ensure the continuity of the service. The root authority (or one of its delegated authorities) may cancel a certificate before its expiration date, for example, if the certificate owner's private key was compromised or if there was abuse in usage. In the case of secure payments, the notion of solvency, that is, that the user has the necessary funds available, is obviously one of the essential considerations.

Processing of the revocation lists must be done quickly to alert users and, in certain countries, the authorities, particularly if the revocation occurs before the expiration date. Perfect synchronization among the various authorities must be attained to avoid questioning the validity of documents signed or encrypted before the withdrawal of the corresponding certificates. Users must also be able to access the various revocation lists; this is not always possible because current client programs do not query these lists.

#### 3.18.9  Archival

Following certificate expiration or revocation, the records associated with a certificate are retained for specified minimum time periods. Table 3.7 shows the current intervals for Symantec.
Thus, archival of Class 1 certificates lasts for at least 5 years after expiration of the certificate or its revocation, while the durations for Class 2 and 3 certificates are 10.5 years each (Symantec Corporation, 2015).

### TABLE 3.7   Symantec Archival Period per Certificate Class

| Certificate Class | Duration in Years |
| --- | --- |
| 1 | 5 |
| 2 | 10.5 |
| 3 | 10.5 |

#### 3.18.10  Recovery

Certification authorities implement procedures to recover from computing failures, corruption of data (such as when a user's private key is compromised), and natural or man-made disasters. A disaster recovery plan addresses the gradual restoration of information services and business functions. Minimal operations can be recovered within 24 hours; they include certificate issuance or revocation, publication of revocation information, and recovery of key information for enterprise customers.

#### 3.18.11  Banking Applications

A bank can certify its own clients to allow them access to their bank accounts across the Internet. Once access has been given, the operation continues as if the client were in front of an automatic teller machine. The interoperability of bank certificates can be achieved with interbank agreements, analogous to those that have permitted the interoperability of bank cards. Each financial institution certifies its own clients and is assured that the other institutions will honor that certificate.

As the main victims of fraud, financial institutions have partnered to establish their own certification infrastructures. In the United States, several banks including Bank of America, Chase Manhattan, Citigroup, and Deutsche Bank formed Identrus in 2000. The main purpose was to enable a trusted business-to-business e-commerce marketplace with financial institutions as the key trust providers. The institution changed its name to IdenTrust (http://www.identrust.com) in 2006 and was acquired by HID in 2014. It is one of two vendors accredited by the General Services Administration (GSA) to issue digital certificates. At the same time, about 800 European institutions joined forces to form a Global Trust Authority (GTA), a nonprofit organization whose mission was to put in place an infrastructure of trust that could be used, by all sectors, to conduct cross-border e-business. In 2009, the project was suspended (IDABC, 2009).

#### 3.19.1  Procedures for Strong Authentication

Having obtained the certification path and the other side's authenticated public key, X.509 defines three procedures for authentication: one-way or unidirectional authentication, two-way or bidirectional authentication, and three-way or tridirectional authentication.

#### 3.19.1.1  One-Way Authentication

One-way authentication takes place through the transfer of information from User A to User B according to the following steps:

• A generates a random number $R_A$ used to detect replay attacks.
• A constructs an authentication token $M = (T_A, R_A, I_B, d)$, where $T_A$ represents the time stamp of A (date and time) and $I_B$ is the identity of B. $T_A$ comprises two chronological indications, for example, the generation time of the token and its expiration date, and d is arbitrary data. For additional security, the message can be encrypted with the public key of B.
• A sends to B the message:

$B \| A,\ A\{(T_A, R_A, I_B, d)\}$

where
• $B \| A$ is the certification path
• $A\{M\}$ represents the message M signed (encrypted) with the private key of A

B carries out the following operations:

• Obtain the public key of A, $A_p$, from $B \| A$, after verifying that the certificate of A has not expired.
• Recover the signature by decrypting the message $A\{M\}$ with $A_p$. B then verifies that this signature is identical to the message hash, thereby ascertaining simultaneously the signature and the integrity of the signed message.
• Verify that B is the intended recipient.
• Verify that the time stamp is current.
• Optionally, verify that $R_A$ has not been previously used.

These exchanges prove the following:

• The authenticity of A, that is, the authentication token was generated by A
• The authenticity of B, that is, the authentication token was intended for B
• The integrity of the identification token
• The originality of the identification token, that is, it has not been previously utilized

#### 3.19.1.2  Two-Way Authentication

The procedure for two-way authentication adds to the previous unidirectional exchanges similar exchanges in the reverse direction. Thus, the following applies:

• B generates another random number $R_B$.
• B constructs the message $M' = (T_B, R_B, I_A, R_A, d)$, where $T_B$ represents the time stamp of B (date and time), $I_A$ is the identity of A, and $R_A$ is the random number received from A. $T_B$ consists of one or two chronological indications as previously described. For security, the message can be encrypted with the public key of A.
• B sends to A the message:

$B\{(T_B, R_B, I_A, R_A, d)\}$

where $B\{M'\}$ represents the message $M'$ signed (encrypted) with the private key of B.
• A carries out the following operations:
  • Extracts the public key of B from the certification path and uses it to decrypt $B\{M'\}$ and recover the signature of the message that B has produced; A next verifies that the signature is the same as the hashed message, thereby ascertaining the integrity of the signed information
  • Verifies that A is the intended recipient
  • Checks the time stamp to verify that the message is current
  • As an option, verifies that $R_B$ has not been previously used

#### 3.19.1.3  Three-Way Authentication

Protocols for three-way authentication introduce a third exchange from A to B. The advantage is the avoidance of time stamping and, as a consequence, of a trusted third party. The steps are the same as for two-way authentication but with $T_A = T_B = 0$. Then:

• A verifies that the value of the received $R_A$ is the same as that sent to B.
• A sends to B the message:

$A\{(R_B, I_B)\}$

signed (encrypted) with the private key of A.
• B performs the following operations:
  • Verifies the signature and the integrity of the received information
  • Verifies that the received value of $R_B$ is the same as was sent

#### 3.20  Security Cracks

We have reviewed in this chapter the main concepts of how cryptographic algorithms may be used to support multiple security services. On a system level, however, the security of a system depends on many factors, such as

1. The rigorousness of the authentication criteria
2. The degree of trust placed in the root authority and/or intermediate certification authorities
3. The strength of the end entity's credentials (e.g., passport, birth certificate, driver's license)
4. The strength of the cryptographic algorithms used
5. The strength of the key establishment protocols
6. The care with which end entities (i.e., users) protect their keys
This is why the design of security systems that provide the desired degree of security requires great skill, expertise, and attention to detail. For example, if workstations are not properly configured, users can be denied access or signed e-mails may appear invalid (Department of Defense Public Key Enabling and Public Key Infrastructure Program Management Office, 2010).

#### 3.20.1  Problems with Certificates

Certification is essential for authenticating participants to prevent intruders from impersonating either side to spy on the encrypted exchanges. When an entity produces a certificate signed by a certification authority, this means that the CA attests that the information in the certificate is correct. As a corollary, the entry for that entity in the directory that the certification authority maintains has the following properties:

1. It establishes a relationship between the entity and a pair of public and private cryptographic keys.
2. It associates a unique distinguished name in the directory with the entity.
3. It establishes that, at a certain time, the authority was able to guarantee the correspondence between that unique distinguished name and the pair of keys.

Each certification authority describes its practices and policies in a Certification Practice Statement (CPS). The CPS covers the obligations and liabilities, including liability caps, of the various entities. For example, one obligation is to keep their cryptographic technology current and to protect the integrity of their physical and logical operations, including key management. There is no standard CPS, but IETF RFC 3647 (2003) offers guidance on how to write such a statement.

The accreditation criteria of entities are also not standardized, and there is no code of conduct for certification authorities. Each operator defines its own conduct, rights, and obligations, operates at its own discretion, and is not obliged to justify a refusal to accredit an individual or an entity. There are also no standard criteria to evaluate the performance of a certification authority, so that browser vendors are left to their own discretion as to which certification authorities they should trust. There are almost no laws to prevent a PKI operator from cashing in on the data collected on individuals and their purchasing habits by passing the information to all those that might be interested (merchants, secret services, political adversaries, etc.). Thus, should the certification authorities fail to perform their role correctly, whether willingly, by accident, or through negligence, the whole security edifice is called into question.

The American Institute of Certified Public Accountants and the Canadian Institute of Chartered Accountants have developed a program to evaluate the risks of conducting commerce through electronic means. The CPA WebTrustSM is a seal that is supposed to indicate that a site is subject to quarterly audits of the procedures to protect the integrity of the transactions and the confidentiality of the information. However, there are limits to what audits can uncover, as shown in the notorious case of DigiNotar, a certification authority accredited and audited by the Dutch government as part of its PKI program.
In August 2011, however, following an investigation by the Dutch government, DigiNotar was forced to admit that intruders had compromised its servers and used them to issue more than 500 fraudulent SSL/TLS certificates. In particular, the fraudulent certificate for google.com was used to spy on the TLS/SSL sessions of some 300,000 Iranians using Gmail accounts. DigiNotar had detected and revoked some of the fraudulent certificates without notifying browser manufacturers such as Apple, Google, Microsoft, Mozilla, and Opera; once the breach became known, these manufacturers had to issue updates to block access to sites secured with DigiNotar certificates. Finally, in September 2011, the Dutch Independent Post and Telecommunications Authority (OPTA) revoked DigiNotar as a certification authority (Schoen, 2010; Keizer, 2011; Nightingale, 2011).

The case of DigiNotar exposed a fundamental problem in the way certificates are used in consumer applications. Browsers maintain a list of trusted certification authorities or rely on the list that the operating system provides. Each of these authorities has the power to certify additional certification authorities. Thus, a browser ends up trusting hundreds of certification authorities, some of which do not necessarily have the same policies. Because the authorities on the chain of trust may be distributed over several countries, different government agencies may legally compel any of these certification authorities to issue false certificates to intercept and hijack individuals' secure channels of communication. In fact, some commercial covert surveillance devices operate on that principle (Soghoian and Stamm, 2010).

Browsers contact the CAs to verify that a certificate that a server presents has not been revoked, but they currently do not track changes in the certificate, for example, by comparing hashes (although there are Firefox add-ons that perform this function). In principle, browser suppliers work with certification authorities to respond to and contain breaches by blocking fraudulent certificates. Experience has shown, however, that certification authorities do not always notify the various parties of security breaches promptly.

#### 3.20.2  Underground Markets for Passwords

The spread of online electronic commerce has multiplied the number of times individuals have to log in to different systems and applications. In many cases, the back-end authentication systems are different, so users have to manage an increasing number of passwords, particularly since many systems force users to replace their passwords periodically. In a typical large application, such as for a bank, the password database may include around 100 or 200 million passwords. Increasingly, users face the problem of creating and remembering multiple user names and passwords and often end up reusing passwords. One of the consequences of all these factors is the rise in cybercrime, fueled by underground markets for stolen passwords and for tools ("bots") that attempt automatic logins on many websites, trying the passwords in a file until access is achieved.

#### 3.20.3  Encryption Loopholes

Encryption is a tool to prevent undesirable access to a secret message. While the theoretical properties of the cryptographic algorithms are important, how the fundamentals of cryptography are implemented and used is just as essential. In brute-force attacks, the assailant systematically tries all possible encryption keys until finding the one that reveals the plaintext.
Table 3.8 provides the estimated time for successful brute-force attacks with exhaustive searches on symmetric encryption algorithms with different key lengths for the current state of technology (Paar and Pelzl, 2010, p. 12).

### TABLE 3.8   Estimated Time for Successful Brute-Force Attacks on Symmetric Encryption for Different Key Lengths with Current Technology

| Key Length in Bits | Estimated Time with Current Technology |
| --- | --- |
| 56–64 | A few days |
| 112–128 | Several years |
| 256 |  |

As a consequence, a long key is a necessary but not sufficient condition for secure symmetric encryption. Cryptanalysis focuses on finding design errors, implementation flaws, or operational deficiencies to break the encryption and retrieve the messages, even without knowledge of the encryption key. GSM, IEEE 802.11b, IS-41, and so on are known to have faulty or deliberately weakened protection schemes. The most common types of cryptological attacks include the following (Ferguson et al., 2010, pp. 31–36):

1. Attacks on the encrypted text assuming that the clear text has a known structure, for example, the systematic presence of a header with a known format (as is the case for e-mail messages) or the repetition of known keywords.
2. Attacks starting with chosen plaintexts that are encrypted with the unknown key so as to deduce the key itself.
3. Attacks by replaying old legitimate messages to evade the defense mechanisms and to short-circuit the encryption.
4. Attacks by interception of the messages (man-in-the-middle attacks), where the interceptor inserts its eavesdrop at an intermediate point between the two parties. After intercepting an exchange of a secret key, for example, the interceptor will be able to decipher the exchanged messages while the participants think they are communicating in complete security. The attacker may also be able to inject fake messages that would be treated as legitimate by the two parties.
5. Attacks by measuring the duration of encryption operations, electromagnetic emissions, and so on, to deduce the complexity of the operations and hence their form (side-channel attacks).
6. Attacks on the network itself, for example, corruption of the DNS, to direct the traffic to a spurious site.

In some cases, the physical protection of the whole cryptographic system (cables, computers, smart cards, etc.) may be needed. For example, bending an optical fiber results in the dispersion of 1%–10% of the signal power; therefore, well-placed acousto-optic devices can capture the diffraction pattern for later analysis.

A catalog of the causes of vulnerability includes the following (Schneier, 1997, 1998a; Fu et al., 2001):

1. Nonverification of partial computations.
2. Use of defective random number generators, because the keys and the session variables depend on a good supply source of nonpredictable bits.
3. Improper reutilization of random parameters.
4. Misuse of a hash function in a way that increases the chances for collisions.
5. Structural weakness of the telecommunications network.
6. Nonsystematic destruction of the clear text after encryption and of the keys used in encryption.
7. Retention of the password or the keys in virtual memory.
8. No checking of the correct range of operation; this is particularly the case when buffer overflows can cause security flaws.
9. Misuse of a protocol, which can lead to an authenticator traveling in plaintext.
For example, IETF RFC 2109 (1997)—now obsolete—specified that when the authenticator is stored in a cookie, the server has to set the Secure flag in the cookie header so that the client returns the cookie only after a secure connection has been established with SSL/TLS. It was found that some web servers neglected to set this flag, thereby negating that protection. The authenticator can also leak if the client software continues to use it even after the authentication is successful.

It is also possible to take advantage of other implementation details that are not directly related to encryption. For example, when a program deletes a file, most commercial operating systems merely eliminate the corresponding entry in the index file. This allows recovery of the file, at least partially, with off-the-shelf software. The only means to guarantee total elimination of the data is to systematically rewrite each of the bits that the deleted file was using. Similarly, the use of virtual memory in commercial systems exposes another vulnerability, because a secret document may momentarily reside in the clear on the disk.

Systems for e-commerce that are for the general public must be easily accessible and affordably priced. As a consequence, many compromises will be made to improve response time and ease of use. However, if one starts from the principle that, sooner or later, any system is susceptible to unexpected attacks with unanticipated consequences, it is important that the system make it possible to detect attacks and to accumulate proofs that are accepted by law enforcement personnel and the courts. The main point is to have an accurate definition of the types of expected threats and possible attacks. Such a realistic evaluation of threats and risks permits a precise understanding of what should be protected, against whom, and for how long.

#### 3.20.4  Phishing, Spoofing, and Pharming

Phishing, spoofing, and pharming have been used to trick users into voluntarily revealing their credentials. These terms are neologisms that describe various tricks played on users. Although they are often used interchangeably, we attempt here to distinguish among them for the sake of clarity.

Phishing is a deceitful message to trick users into revealing their credentials (bank account numbers, passwords, payment card details, etc.) to unauthorized entities. The term was coined to indicate that fraudsters are "fishing" for the online banking details of customers through e-mail. The sender typically impersonates a reputable company such as a bank or a financial intermediary. For example, messages purportedly from a bank would alert the recipient that there is a security problem with the recipient's account that requires immediate attention and then ask the recipient to click on a link that seems legitimate under some false pretense (to restore a compromised account, to verify identity, etc.). However, by using HTML forms in the body of the text or a JavaScript event handler, the link displayed in the e-mail can differ from the actual link destination. Once the link is clicked, a window opens on what appears to be the bank's website but is a fake copy that asks for account details or authentication data. When the user types the credentials, they are captured and used to siphon the user's account (Drake et al., 2004).
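The mismatch between the link text shown to the user and the underlying destination can be detected mechanically. Below is a minimal sketch using Python's standard html.parser; the domain names are hypothetical, and comparing host names is only a crude heuristic compared with what real mail filters do.

```python
# Minimal sketch: flag HTML anchors whose visible text looks like a URL
# but points to a different host than the actual href target -- the
# phishing trick described above. Heuristic only, for illustration.
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.href = None
        self.text = ""
        self.suspicious = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.href = dict(attrs).get("href", "")
            self.text = ""

    def handle_data(self, data):
        if self.href is not None:       # accumulate the anchor's visible text
            self.text += data

    def handle_endtag(self, tag):
        if tag == "a" and self.href is not None:
            shown = self.text.strip()
            # Visible text is itself a URL but names a different host.
            if shown.startswith("http") and \
               urlparse(shown).netloc != urlparse(self.href).netloc:
                self.suspicious.append((shown, self.href))
            self.href = None

checker = LinkChecker()
checker.feed('<a href="http://evil.example/login">http://www.mybank.com</a>')
print(checker.suspicious)
# [('http://www.mybank.com', 'http://evil.example/login')]
```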
After convincing the recipient that the e-mail is credible and originated from a trusted institution, phishing exploits the whole gamut of human emotions and credulity to persuade the recipient to divulge personal and financial information under the guise of verifying an account, paying taxes, assisting a friend mugged in a foreign land, and so on. Alternatively, malware (malicious software) may be injected into the target device to record all keyboard inputs and e-mail the collected data periodically to what is called a mail drop (Schneier, 2004; Levin et al., 2012). An extreme form of this malware is called ransomware, whereby hackers seize control of the target system until a ransom is paid. This happened in June 2015 to the police force of Tewksbury, Massachusetts (a suburb of Boston), which was able to regain access to its data only after paying a ransom in Bitcoin.

In February 2015, the Russian antivirus firm Kaspersky Lab issued a report detailing how a group calling themselves Carbanak was able to penetrate bank networks using phishing e-mails and steal over $500 million from banks in several countries and their private customers. In some instances, cash machines were instructed to dispense their contents to associates. In other cases, the attackers altered databases to fraudulently increase the balances of existing accounts and then pocketed the difference without the knowledge of the account owner (Economist, 2015b).

Finally, pharming is the systematic exploitation of a vulnerability in the DNS servers due, for instance, to a coding error or to implanted malware. As a result, instead of translating web names into their corresponding IP addresses, the servers return spurious IP addresses associated with a fake site. In other words, the effects of spoofing and pharming are the same, but in spoofing the victim is a willing albeit gullible participant, while in pharming the victim is unaware of the redirection.

To conduct such a campaign, an attacker registers a new domain with an Internet domain registrar using contact details similar to those of the target company, as obtained with a whois lookup on the real domain. The attacker then creates a DNS record that points to the newly registered domain, which will post a fake landing page to give the e-mail recipients the impression of genuineness so that they willingly enter their credentials. The deceptive site may even be a compromised website of another victim to which the attacker has uploaded the phishing kit. Some kits are freely available on the Internet, while others are distributed within closed communities in hacking forums.

The root cause of all these problems is the lack of authentication in electronic mail exchanges. E-mail was originally intended for communication among collaborators who knew each other, not for users distributed among more than 70 million domains communicating without any prior screening. Only partial solutions have been offered thus far.

RFC 4871 (2007) defines an authentication framework, called DomainKeys Identified Mail (DKIM), which is a synthesis of two proposals from Cisco and Yahoo! to associate a domain name identity with a message through public key cryptography. This can be applied at the message transfer agents (MTAs) located in the various network nodes or at the e-mail clients, that is, readers or, more formally, mail user agents (MUAs). The sending MTA checks if the source is authorized to send mail using its domain.
If the message is authorized, the MTA signs the body of the message and selected header fields and inserts the body hash and the signature in a new DKIM-Signature header field. The MTA of the receiving domain verifies the integrity of the message and the message headers. The signatures are calculated through hashing with SHA-1 or SHA-256 and encryption with RSA, with key sizes ranging from 512 to 2048 bits. RFC 4871 avoids a public key certification infrastructure and uses the DNS to maintain the public keys of the claimed senders. Accordingly, verifiers have to request the sender's public key from the DNS.

The scheme in RFC 4871 does not protect from replay attacks. For example, a spammer can send a message to an accomplice while the original message satisfies all the criteria set in RFC 4871. The accomplice inserts the extra fields needed to forward the message, but the signatures remain valid, fooling the DKIM tool at the receiving end (see § 8.5 of RFC 4871). The end user will still be unable to determine whether the message is a phishing attack or whether it contains a malicious attachment. The information, however, may be useful for the receiving MTA to establish that a specific e-mail address is being used for spamming (Selzer, 2013). Also, because the DNS is not fully equipped for key management, an attacker can publish key records in the DNS that are intentionally malformed.

The Sender Policy Framework (SPF) of RFC 7208 (2014) is another approach, based on IP addresses. Work on the specifications took more than 10 years, but the concept was implemented and deployed before the RFC was published. Here, a domain administrator populates the SPF record of that domain's entry in the DNS with the IP addresses that are allowed to send e-mail from that domain and publishes that SPF data as a DNS TXT (type 16) resource record per RFC 1035 (1987). Any mail server or spam filter on the user's machine can query the DNS for the domain's SPF record to verify the source of e-mail messages as they arrive. Only messages whose originating IP address matches one of the addresses authorized for the domain indicated in the MAIL FROM field will be accepted. Thus, other mail servers can stop fake mail claiming to originate from that domain. The only benefit for the domain is to protect its reputation. Furthermore, there is no obligation for receiving mail servers to filter their incoming mail based on SPF, which some skip to avoid going through multiple DNS lookups.

There are attempts at understanding the problem and tracking the fraudsters down. For example, the University of Alabama at Birmingham established the Computer Forensics Research Lab in 2007 to store spam mails in a Spam Data Mine available to investigators. The lab has developed automated and manual techniques to analyze the mail identity and the phishing website and to determine the tool kits used to create the phishing website. The university's Phishing Operations group works with law enforcement and corporate investigators to identify groups of related phishing sites, particularly when the financial loss is significant (Levin et al., 2012). Finally, there are websites that help distinguish between legitimate e-mails and traps designed to steal personal information and/or money. PayPal offers a specific address ([email protected]) where users can inquire whether a message purportedly originating from PayPal is authentic.

#### 3.21  Summary

There are two types of attacks: passive and active. Protection can be achieved with suitable mechanisms and appropriate policies.
Recently, security has leaped to the forefront in priority because of changes in the regulatory environment and in technology. The fragmentation of operations that were once vertically integrated has increased the number of participants in end-to-end information transfer. In virtual private networks, customers are allowed some control of their part of the public infrastructure. Finally, security must be retrofitted in IP networks to compensate for the inherent difficulties of having user traffic and network control traffic within the same pipe.

Security mechanisms can be implemented in one or more layers of the OSI model. The choice of the layer depends on the security services to be offered and the coverage of protection. Confidentiality guarantees that only the authorized parties can read the information transmitted. This is achieved by cryptography, whether symmetric or asymmetric. Symmetric cryptography is faster than asymmetric cryptography but has a limitation in terms of the secure distribution of the shared secret. Asymmetric (or public key) cryptography overcomes this problem; this is why the two can be combined. In online systems, public key cryptography is used for sending the shared secret that can be used later for symmetric encryption. Two public key schemes used for sharing secrets are Diffie–Hellman and RSA. As mentioned earlier, ISAKMP is a generic framework to negotiate point-to-point security and to exchange key and authentication data between two parties.

Data integrity is the service for preventing nonauthorized changes to the message content during transmission. A one-way hash function is used to produce a signature of the message that can be verified to ascertain integrity. A blind signature is a special procedure for signing a message without revealing its content.

The identification of participants depends on whether the cryptography is symmetric or asymmetric. In asymmetric schemes, there is a need for authentication using certificates. In the case of human users, biometric features can be used for identification in specific situations. Kerberos is an example of a distributed system for online identification and authentication using symmetric cryptography.

Access control is used to counter the threats of unauthorized operations. There are two types of access control mechanisms: identity based and role based. Both can be managed through certificates defined by ITU-T Recommendation X.509. Denial of service is the consequence of a failure of access control. These attacks are inherently associated with IP networks, where network control data and user data share the same physical and logical bandwidths. The best solution is to authenticate all communications by means of trusted certificates. Short of this, defense mechanisms will be specific to the problem at hand.

Nonrepudiation is a service that prevents a person who has accomplished an act from denying it later. This is a legal concept that is defined through legislation. The service comprises the generation of evidence and its recording and subsequent verification. The technical means to ensure nonrepudiation include the electronic signature of documents, the intervention of third parties as witnesses, time stamping, and sequence numbering of the transactions.
#### 3A Appendix: Principles of Symmetric Encryption

#### 3A.1 Block Encryption Modes of Operation

The principal modes of operation of block ciphers are the electronic codebook (ECB) mode, cipher block chaining (CBC) mode, cipher feedback (CFB) mode, output feedback (OFB) mode, and counter (CTR) mode (National Institute of Standards and Technology, SP 800-38A, 2001).

The ECB mode is the most obvious, because each clear block is encrypted independently of the other blocks. However, this mode is susceptible to attacks by replaying or reordering blocks without detection. This is the reason this mode is only used to encrypt random data, such as the encryption of keys during authentication. A recent example of the incorrect use of ECB is Version 1 of Bitmessage, a peer-to-peer messaging system built on Bitcoin, which made it vulnerable (Buterin, 2012; Lerner, 2012; Warren, 2012).

The CBC, CFB, and OFB modes use a feedback loop to protect against such types of attacks. They also have the additional property that they need an initialization vector to start the computations. This is a dummy initial ciphertext block whose value must be shared with the receiver. It is typically a random number or is generated by encrypting a nonce (Ferguson et al., 2010, pp. 66–67). The difference among the three feedback modes resides in the way the clear text is mixed, partially or in its entirety, with the preceding encrypted block.

In the CBC mode, the input to the encryption algorithm is the exclusive OR of the next block of plaintext and the preceding block of ciphertext. This is called "chaining" the plaintext blocks, as shown in Figure 3.15. Figure 3.16 depicts the decryption operation. In these figures, $P_i$ represents the ith block of the clear message, while $C_i$ is the corresponding encrypted block. Thus, the encrypted block $C_i$ is given by

$C_i = E_K(P_i \oplus C_{i-1})$

where
• $E_K()$ represents the encryption with the secret key K
• ⊕ is the exclusive OR operation

FIGURE 3.15   Encryption in the CBC mode.

The starting value $C_0$ is the initialization vector. The initialization vector does not need to be secret, but it must be unpredictable and its integrity protected. For example, it can be generated by applying $E_K()$ to a nonce (a contraction of "number used once") or by using a random number generator (National Institute of Standards and Technology, 2001, p. 20). Also, the CBC mode requires the input to be a multiple of the cipher's block size, so padding may be needed. The decryption operation, shown in Figure 3.16, is described by

$P_i = D_K(C_i) \oplus C_{i-1}$

where $D_K()$ denotes decryption with the same secret key K.

FIGURE 3.16   Decryption in the CBC mode.

Any subset of a CBC message will be decrypted correctly. The CBC mode is efficient in the sense that a stream of infinite length can be processed in constant memory in linear time. It is useful for the non-real-time encryption of files and to calculate the signature of a message (or its MAC) as specified for financial and banking transactions in ANSI X9.9 (1986), ANSI X9.19 (1986), ISO 8731-1 (1987), and ISO/IEC 9797-1 (1999). Transport Layer Security (TLS) also uses the CBC mode but, in general, the CFB and OFB modes are often used for the real-time encryption of a character stream, such as in the case of a client connected to a server.

In the CFB mode, the input is processed s bits at a time. A clear text block of b bits is encrypted in units of s bits (s = 1, 8, or 64 bits), with s ≤ b, that is, in $n = \lceil b/s \rceil$ cycles, where $\lceil x \rceil$ is the smallest integer greater than or equal to x. The clear message block, P, is divided into n segments, $\{P_1, P_2, \ldots, P_n\}$ of s bits each, with extra bits padded to the trailing end of the data string if needed.
In each cycle, an input vector $I_i$ is encrypted into an output $O_i$. The segment $P_i$ is combined, through an exclusive OR function, with the most significant s bits of that output $O_i$ to yield the ciphertext $C_i$ of s bits. The ciphertext $C_i$ is fed back and concatenated to the previous input $I_i$ in a shift register, and all the bits of this register are shifted s positions to the left. The s left-most bits of the register are discarded, while the remainder of the register content becomes the new input vector $I_{i+1}$ to be encrypted in the next round. The CFB encryption is described as follows:

$O_i = E_K(I_{i-1}), \quad C_i = P_i \oplus \mathrm{MSB}_s(O_i), \quad I_i = \mathrm{LSB}_{b-s}(I_{i-1}) \,\|\, C_i$

where
• $I_0$ is the initialization vector
• $\mathrm{LSB}_j(x)$ is the least significant j bits of x
• $\mathrm{MSB}_j(x)$ is the most significant j bits of x
• $E_K()$ represents the encryption with the secret key K
• ‖ denotes concatenation

The decryption operation is identical, with the roles of $P_i$ and $C_i$ transposed, that is,

$P_i = C_i \oplus \mathrm{MSB}_s(O_i)$

Depicted in Figure 3.17 is the encryption, and illustrated in Figure 3.18 is the decryption.

FIGURE 3.17 Encryption in the CFB mode of a block of b bits and s bits of feedback.

FIGURE 3.18 Decryption in the CFB mode of a block of b bits with s bits in the feedback loop.

Similar to CBC encryption, the initialization vector need not be secret, but it should preferably be unpredictable and must be changed for each message. The chaining mechanism causes the ciphertext block $C_i$ to depend on both $P_i$ and the preceding output. The decryption operation is sensitive to bit errors, because one bit error in the encrypted text affects the decryption of n blocks.

If s = b, the shift register can be eliminated and the encryption is done as illustrated in Figure 3.19. Thus, the encrypted block $C_i$ is given by

$C_i = P_i \oplus E_K(C_{i-1})$

where $E_K()$ represents the encryption with the secret key K.

FIGURE 3.19 Encryption in the CFB mode for a block of b bits with a feedback of b bits.

The decryption is obtained with another exclusive OR operation as follows:

$P_i = C_i \oplus E_K(C_{i-1})$

which is shown in Figure 3.20.

FIGURE 3.20 Decryption in the CFB mode for a block of b bits with a feedback of b bits.

The CFB mode can be used to calculate the MAC of a message. This method is also indicated in ANSI X9.9 (1986) for the authentication of banking messages, as well as in ANSI X9.19 (1986), ISO 8731-1 (1987), and ISO/IEC 9797-1 (1999).

For the ECB, CBC, and CFB modes, the plaintext must be a sequence of one or more complete data blocks. If the data string to be encrypted does not satisfy this property, some extra bits, called padding, are appended to the plaintext. The padding bits must be selected so that they can be removed unambiguously at the receiving end. For example, the padding can consist of an octet with value 128 followed by as many zero octets as required to complete the block (Ferguson et al., 2010, p. 64).

In the OFB mode, the input to the block cipher is not the clear text but a pseudo-random stream, called the key stream, obtained by repeatedly encrypting the initialization vector; it depends on neither the plaintext nor the ciphertext. The clear text message itself is not an input to the block cipher. This encryption scheme is called a "stream cipher." The encryption in the OFB mode is described as follows:

$O_i = E_K(I_{i-1}), \quad C_i = P_i \oplus \mathrm{MSB}_s(O_i), \quad I_i = \mathrm{LSB}_{b-s}(I_{i-1}) \,\|\, \mathrm{MSB}_s(O_i)$

where $I_0$ is the initialization vector. Each initialization vector must be used only once; otherwise, confidentiality may be compromised. In a stream cipher, there is no padding, so if the last output block is a partial block of u bits, only the most significant u bits of that block are used in the exclusive OR operation. The lack of padding reduces the overhead, which is especially important with small messages.
The decryption process is exactly the same operation as the encryption and is described as

$P_i = C_i \oplus \mathrm{MSB}_s(O_i)$

This is illustrated in Figures 3.21 and 3.22 for the encryption and decryption, respectively.

FIGURE 3.21 Encryption in OFB mode of a block of b bits with a feedback of s bits.

FIGURE 3.22 Decryption in OFB mode of a block of b bits with a feedback of s bits.

In the OFB mode, the input to the decryption depends on the preceding output only (i.e., it does not include the previous ciphertext), so errors are not propagated. This makes it suitable for situations where the transmission is noisy. In this case, a single bit error in the ciphertext affects only one bit in the recovered text, provided that the values in the shift registers at both ends remain identical to maintain synchronization. Thus, any system that incorporates the OFB mode must be able to detect synchronization loss and have a mechanism to reinitialize the shift registers on both sides with the same value.

In the case where s = b, the encryption is illustrated in Figure 3.23 and is described by

$O_i = E_K(O_{i-1}), \quad C_i = P_i \oplus O_i$

with $O_0$ equal to the initialization vector.

FIGURE 3.23 Encryption in OFB mode with a block of b bits and a feedback of b bits.

The decryption is described by

$P_i = C_i \oplus O_i$

and is depicted in Figure 3.24.

FIGURE 3.24 Decryption in OFB mode for a block of b bits with a feedback of b bits.

For security reasons, the OFB mode with s = b, that is, with the feedback size equal to the block size, is recommended (National Institute of Standards and Technology, 2001b; Barthélemy et al., 2005, p. 98).

Finally, the counter (CTR) mode uses a set of input blocks, called counters, instead of the initialization vector. The cipher is applied to the counter blocks to produce a sequence of output blocks that are used to encrypt the plaintext. Given a sequence of counters, $T_1, T_2, \ldots, T_n$, the encryption operation is defined in SP 800-38A as follows (National Institute of Standards and Technology, 2001b):

$O_i = E_K(T_i), \quad C_i = P_i \oplus O_i$

Each counter in the sequence must be different from every other, that is, it must never repeat for any given key. According to Appendix B of SP 800-38A, the counter block can be either a simple sequential counter incremented from an initial string (i.e., $T_i = (T_1 + i - 1) \bmod 2^b$) or a combination of a nonce for each message and a counter. For a 128-bit block size, a typical approach is to use $T_i$ as the concatenation of a 48-bit message number, 16 bits of additional nonce data, and 64 bits for the block counter i, so that one key can be used for encrypting at most $2^{48}$ different messages (Ferguson et al., 2010, p. 70). Because the nonce must be changed for every message, a plaintext message is limited to $(2^{64} \times 128)/8 = 2^{68}$ octets. Should the last output block be a partial block of u bits, only the most significant u bits of that block are used in the exclusive OR operation; that is, there is no need for padding. The decryption operation is as follows:

$O_i = E_K(T_i), \quad P_i = C_i \oplus O_i$

In both CTR encryption and CTR decryption, the output blocks $O_i$ can be calculated in parallel and before the plaintext or the ciphertext is available. Also, any message block $P_i$ can be recovered independently of the other message blocks, provided that the corresponding counter block is known; this allows parallel encryption, which makes the mode suitable for high-speed data transmission. Another advantage is that the last block can be of arbitrary length, and no padding is needed.
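To tie the CBC and CTR equations together, the following sketch implements both chaining disciplines around a deliberately toy block transformation (a keyed XOR plus byte reversal) standing in for DES or AES. The block size, key handling, and nonce/counter split are simplified assumptions for illustration, not a usable cipher.

```python
# Minimal sketch of the CBC and CTR equations above, with a toy 8-octet
# "block cipher" standing in for AES or DES. Illustration only -- a real
# system would use a vetted cipher from a cryptographic library.
import os

BLOCK = 8

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def toy_encrypt_block(key: bytes, block: bytes) -> bytes:
    # NOT a secure cipher: just a reversible keyed transformation.
    return xor(block, key)[::-1]

def toy_decrypt_block(key: bytes, block: bytes) -> bytes:
    return xor(block[::-1], key)

def cbc_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    assert len(plaintext) % BLOCK == 0        # padding assumed done already
    out, prev = b"", iv                       # C0 is the initialization vector
    for i in range(0, len(plaintext), BLOCK):
        prev = toy_encrypt_block(key, xor(plaintext[i:i+BLOCK], prev))
        out += prev                           # Ci = EK(Pi xor Ci-1)
    return out

def cbc_decrypt(key: bytes, iv: bytes, ciphertext: bytes) -> bytes:
    out, prev = b"", iv
    for i in range(0, len(ciphertext), BLOCK):
        block = ciphertext[i:i+BLOCK]
        out += xor(toy_decrypt_block(key, block), prev)  # Pi = DK(Ci) xor Ci-1
        prev = block
    return out

def ctr_crypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """CTR encryption and decryption are the same operation:
    Ci = Pi xor EK(Ti), with counter blocks Ti = nonce || block index."""
    out = b""
    for i in range(0, len(data), BLOCK):
        counter = nonce + (i // BLOCK).to_bytes(BLOCK - len(nonce), "big")
        keystream = toy_encrypt_block(key, counter)
        chunk = data[i:i+BLOCK]
        out += xor(chunk, keystream[:len(chunk)])  # last block may be short
    return out

key, iv = os.urandom(BLOCK), os.urandom(BLOCK)
msg = b"16-octet message"                     # exactly two blocks
assert cbc_decrypt(key, iv, cbc_encrypt(key, iv, msg)) == msg
nonce = os.urandom(4)                         # fresh nonce for each message
assert ctr_crypt(key, nonce, ctr_crypt(key, nonce, b"any length text")) == b"any length text"
```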
The CCM mode (counter with CBC-MAC) is a derived mode of operation that provides both authentication and confidentiality for cryptographic block ciphers with a block length of 128 bits, such as the Advanced Encryption Standard (AES). In this way, the CCM mode avoids the need to use two systems: a MAC for authentication and a block cipher encryption for privacy. This results in a lower computational cost compared to the application of separate algorithms for both.

The inputs to the encryption are a nonce N, some additional data A (for example, a network packet header), and the plaintext P of length m. The length $k_n$ of N is less than the block length $k_b$ and is a multiple of 8 between 56 and 112 bits, while the tag length $k_t$ is a multiple of 16 between 32 and 128 bits. The additional data and the plaintext are authenticated, while the CTR mode encryption is applied to the plaintext only.

First, a tag T of length $k_t \le k_b$ is calculated by applying CBC-MAC to the nonce, the additional data, and the plaintext. CBC-MAC acts on blocks of bit length $k_b$ (typically 128 bits); if the last block is a short block of j bits, the ciphertext of the last full block is encrypted again, and the left-most j bits of the result are exclusive ORed with the short block of the message to generate the tag T. Next, the tag T and the message P are encrypted with the CTR mode to form a ciphertext c of length $m + k_t$. The nonce N must not have been used in a previous CCM encryption during the lifetime of the key. The counter blocks $\mathrm{CTR}_i$ are generated from the nonce N and the block index i using a function π:

$\mathrm{CTR}_i = \pi(N, i)$

This mode was developed to avoid the intellectual property issues around the use of another mode, the offset codebook (OCB) mode, proposed for the IEEE 802.11i standard. The OCB mode, however, is still optional in that standard. The CCM mode is described in RFC 3610 (2003).

#### 3A.2 Examples of Symmetric Block Encryption Algorithms

#### 3A.2.1 DES and Triple DES

DES, also known as the Data Encryption Algorithm (DEA), was widely used in the commercial world for applications such as the encryption of financial documents, the management of cryptographic keys, and the authentication of electronic transactions. The algorithm was developed by IBM and then adopted as a U.S. standard in 1977. It was published in FIPS 46 and then adopted by ANSI in ANSI X3.92 under the name Data Encryption Algorithm (DEA). However, by 2008, commercial hardware costing less than $15,000 could break DES keys in less than a day on average. DES operates by encrypting blocks of 64 bits of clear text to produce blocks of 64 bits of ciphertext. The key length is 64 bits, with 8 bits for parity control, which gives an effective length of 56 bits. The encryption and decryption are based on the same algorithm, with some minor differences in the generation of the subkeys. In 2005, the National Institute of Standards and Technology (NIST) finally withdrew the DES standard following the development of the AES (Advanced Encryption Standard).

The vulnerability of DES to an exhaustive attack forced the search for an interim solution until a replacement algorithm could be developed and deployed. Given the considerable investment in the software and hardware implementations of DES, triple DES, also known as TDEA (Triple Data Encryption Algorithm), is based on the use of DES three successive times (Schneier, 1996, pp. 359–360). The operation of triple DES with two different 56-bit keys is represented in Figure 3.25.

FIGURE 3.25   Operation of triple DES (TDEA) with two keys.
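The compatibility argument for the encrypt–decrypt–encrypt (EDE) sequence can be checked in a few lines. The toy E and D below (a simple XOR) are stand-ins for the real DES operations, chosen only to exhibit the cancellation property.

```python
# Minimal sketch of the encrypt-decrypt-encrypt (EDE) construction with
# toy block functions E/D standing in for DES. With K1 = K2, the first
# two stages cancel, which is what preserves compatibility with plain DES.
def E(key: int, block: int) -> int:      # toy "encryption": NOT DES
    return block ^ key

def D(key: int, block: int) -> int:      # toy "decryption": inverse of E
    return block ^ key

def ede_encrypt(k1: int, k2: int, block: int) -> int:
    return E(k1, D(k2, E(k1, block)))    # two-key triple operation

block, k1, k2 = 0x1234, 0xAAAA, 0x5555
assert ede_encrypt(k1, k1, block) == E(k1, block)   # same keys -> single DES
assert ede_encrypt(k1, k2, block) != E(k1, block)   # distinct keys differ
```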
The use of three stages doubles the effective length of the key to 112 bits. The "encryption–decryption–encryption" sequence aims at preserving compatibility with DES, because if the same key is used in all operations, the first two operations cancel each other. Three independent 56-bit keys are highly recommended in federal applications (National Institute of Standards and Technology, 2012c, p. 37). In this case, the operation becomes as illustrated in Figure 3.26.

FIGURE 3.26   Operation of triple DES (TDEA) with three keys.

#### 3A.2.2  AES

The Advanced Encryption Standard (AES) is the symmetric encryption algorithm that replaced DES. It is published by NIST as FIPS 197 (National Institute of Standards and Technology, 2001a) and is based on the Rijndael algorithm developed by Joan Daemen of Proton World International and Vincent Rijmen of the Catholic University of Leuven (Katholieke Universiteit Leuven). It is a block cipher with a block size of 128 bits and key lengths of 128, 192, or 256 bits; its internal operations are based on arithmetic in the finite field GF($2^8$).

The selection in October 2000 came after two rounds of testing following NIST's invitation for submissions from cryptographers around the world. In the first round, 15 algorithms were retained for evaluation. In the second round of evaluation, five finalists were retained: RC6, MARS, Rijndael, Serpent, and Twofish. All the second-round algorithms showed a good margin of security. The criteria used to separate them related to algorithmic performance: speed of computation in software and hardware implementations (including specialized chips), suitability to smart cards (low memory requirements), and so on. The results of the evaluation and the rationale for the selection have been documented in a public report by NIST (Nechvatal et al., 2000). SAFER+ (Secure And Fast Encryption Routine) was one of the candidate block ciphers, based on a nonproprietary algorithm developed by James Massey for Cylink Corporation (Schneier, 1996, pp. 339–341). An enhanced version of that algorithm with a 128-bit key, SAFER+ 128, is used in Bluetooth networks for confidentiality and mutual authentication.

AES GCM (Galois/Counter Mode) and AES CCM (Counter with Cipher Block Chaining—Message Authentication Code Mode) are two modes of operation of AES with key lengths of 128, 192, or 256 bits (National Institute of Standards and Technology, 2001b, 2007). These modes provide Authenticated Encryption with Associated Data (AEAD), where the plaintext is simultaneously encrypted and its integrity protected. The input may be of any length, but the ciphered output is generally larger than the input because of the integrity check value. CCM is described in RFC 5116 (2008). AES-GCM provides an effective defense against side-channel attacks, but at the expense of performance.

As of now, there are no known attacks that would break a correct implementation of encryption with AES. AES is free of royalties and highly secure, and today it is often the first choice for symmetric encryption. The AES competition is generally viewed as having provided a tremendous boost to the cryptographic research community's understanding of block ciphers and a tremendous increase in confidence in the security of some block ciphers.

#### 3A.2.3  RC4

The stream cipher RC4 was originally designed by Ron Rivest and became public in 1994. It uses a variable key length ranging from 8 to 2048 bits.
It has the advantage of being extremely fast when implemented in software; as a consequence, it is commonly used in cryptosystems such as SSL, TLS, the wireless security protocols WEP (Wired Equivalent Privacy) of IEEE 802.11 and WPA (Wi-Fi Protected Access) of IEEE 802.11i, and some Kerberos encryption modes used in Microsoft Windows. The statistical distribution of single octets in the RC4-encrypted stream shows deviations from a uniform distribution. In other words, there are strong biases toward particular values at specific positions, and some bit patterns occur in the output stream more frequently than others. In 2013, it was shown how to use the 65,536 single-octet statistical biases in the initial 256 octets of the RC4 ciphertext to mount an attack on SSL/TLS via Bayesian analysis (AlFardan et al., 2013c). The attack, however, requires access to between 2^28 and 2^32 copies of the same data encrypted under different keys. This could be done with a browser infected with JavaScript malware that makes this many connections to a server and, accordingly, gives the attacker enough data. This attack may not always be practical, but it is quite possible that other attacks are known but not revealed, which may explain why RC4 was never approved as a Federal Information Processing Standard (FIPS).

#### 3A.2.4  New European Schemes for Signature, Integrity, and Encryption

The following algorithms have been selected by the New European Schemes for Signature, Integrity, and Encryption (NESSIE) project:

1. MISTY: This is a block encryption algorithm. The block size is 64 bits and the key is 128 bits.
2. AES with 128 bits.
3. Camellia: The block length is fixed at 128 bits, but the key size can be 128, 192, or 256 bits.
4. SHACAL-2: The block size is 256 bits and the key is 512 bits.

Camellia is also used in Japanese government systems and is included in the specifications of the TV-Anytime Forum for high-capacity storage in consumer systems.

#### 3A.2.5  eSTREAM

In 2004, ECRYPT, a Network of Excellence funded by the European Union, announced the eSTREAM competition to select new stream ciphers suitable for widespread adoption. This call attracted 34 submissions. Following hundreds of security and performance evaluations, the eSTREAM committee selected a portfolio containing several stream ciphers.

#### 3A.2.6  IDEA

IDEA was invented by Xuejia Lai and James Massey circa 1991. The algorithm takes blocks of 64 bits of the clear text, divides them into subblocks of 16 bits each, and encrypts them with a key 128 bits long. The same algorithm is used for encryption and decryption. IDEA is clearly superior to DES but has not been a commercial success. The patent is held by the Swiss company Ascom-Tech AG and is not subject to U.S. export control.

#### 3A.2.7  SKIPJACK

SKIPJACK is an algorithm developed by the NSA for several single-chip processors such as Clipper, Capstone, and Fortezza. Clipper is a tamper-resistant very large-scale integration (VLSI) chip used to encrypt voice conversations. Capstone provides the cryptographic functions needed for secure e-commerce and is used in Fortezza applications. SKIPJACK is an iterative block cipher with a block size of 64 bits and a key of 80 bits. It can be used in any of the four modes ECB, CBC, CFB (with a feedback of 8, 16, 32, or 64 bits), and OFB with a feedback of 64 bits.
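The AEAD modes described in this appendix are available in mainstream cryptographic libraries. As a minimal illustration, here is a sketch of AES-CCM encryption and decryption in Python using the pyca/cryptography package; the library choice, the 128-bit key, and the 13-octet nonce are assumptions made for this example, not prescriptions from the text.

```python
# A minimal AES-CCM sketch (pyca/cryptography). The nonce must never be
# reused under the same key, as noted in the description of CCM above.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESCCM

key = AESCCM.generate_key(bit_length=128)  # random 128-bit key
aesccm = AESCCM(key, tag_length=16)        # 16-octet (128-bit) authentication tag

nonce = os.urandom(13)                     # nonce N, unique per encryption
header = b"packet header"                  # additional data A: authenticated only
plaintext = b"payload to protect"          # plaintext P: authenticated and encrypted

ciphertext = aesccm.encrypt(nonce, plaintext, header)  # len(P) + 16 octets
recovered = aesccm.decrypt(nonce, ciphertext, header)  # raises InvalidTag on tampering
assert recovered == plaintext
```

Note how the additional data is authenticated but not encrypted, and how the ciphertext is longer than the plaintext by the tag length, matching the description of CCM and AEAD above.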
#### 3B  Appendix: Principles of Public Key Encryption

The most popular algorithms for public key cryptography are those of Rivest, Shamir, and Adleman (RSA) (1978), Rabin (1979), and ElGamal (1985). Nevertheless, the overwhelming majority of proposed systems in commercial systems are based on the RSA algorithm. It should be noted that RSADSI was founded in 1982 to commercialize the RSA algorithm for public key cryptography. However, its exclusive rights ended with the expiration of the patent on September 20, 2000.

#### 3B.1  RSA

Consider two odd prime numbers p and q whose product N = p × q. N is the modulus used in the computation; it is public, while the values p and q are kept secret. Let φ(N) be the Euler totient function of N. By definition, φ(N) is the number of elements formed by the complete set of residues that are relatively prime to N. This set is called the reduced set of residues modulo N. If N is a prime, φ(N) = N − 1. However, because N = p × q by construction, with p and q primes, then

$\varphi(N) = (p - 1)(q - 1)$

According to Fermat's little theorem, if m is a prime and a is not a multiple of m, then $a^{m-1} \equiv 1 \pmod{m}$. Euler generalized this theorem in the form

$a^{\varphi(N)} \equiv 1 \pmod{N}$

valid whenever a is relatively prime to N. Choose the integers e, d, both less than φ(N), such that the greatest common divisor of (e, φ(N)) = 1 and e × d ≡ 1 mod (φ(N)) = 1 mod ((p − 1)(q − 1)). Let X, Y be two numbers less than N:

$Y = X^e \bmod N \quad \text{with } 0 \le X < N$
$X = Y^d \bmod N \quad \text{with } 0 \le Y < N$

The second equation recovers X because e × d = kφ(N) + 1 for some integer k and, by applying Euler's theorem,

$Y^d \bmod N = (X^e)^d \bmod N = X^{ed} \bmod N = X \cdot (X^{\varphi(N)})^k \bmod N = X \bmod N$

To start the process, a block of data is interpreted as an integer. To do so, the total block is considered as an ordered sequence of bits (of length, say, λ). The integer is the weighted sum of the bits, giving the first bit the weight of 2^(λ−1), the second bit the weight of 2^(λ−2), and so on until the last bit, which will have the weight of 2^0 = 1. The block size must be such that the largest number does not exceed the modulus N. Incomplete blocks must be completed by padding with either 1 or 0 bits. Further padding blocks may also be added.

The public key of the algorithm PK is the number e, along with N, while the secret key SK is the number d. RSA achieves its security from the difficulty of factoring N. The number of bits of N is considered to be the key size of the RSA algorithm. The selection of the primes p and q must make this factorization as difficult as possible. Once the keys have been generated, it is preferred that, for reasons of security, the values of p and q and all intermediate values such as the product (p − 1)(q − 1) be deleted. Nevertheless, the preservation of the values of p and q locally can double or even quadruple the speed of decryption.

#### 3B.1.1  Chosen-Ciphertext Attacks

It is known that plain RSA is susceptible to a chosen-ciphertext attack (Davida, 1982). An attacker who wishes to find the decryption m ≡ c^d mod N of a ciphertext c chooses a random integer s and asks for the decryption of the innocent-looking value c′ ≡ s^e c mod N. The answer is

$(s^e c)^d \bmod N \equiv s^{ed} c^d \bmod N \equiv s \cdot c^d \bmod N$

from which the attacker recovers m ≡ c^d mod N through multiplication by s^(−1) mod N. It is also known that a number of attacks, such as timing and power analysis attacks, can leak bits of the private key; exposure of around ¼ of the bits of the private key can be enough to compromise it. Also, the strength of the cryptosystem depends on the choice of the primes (Pellegrini et al., 2010).
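To make the arithmetic above concrete, here is a toy RSA key generation, encryption, and decryption in Python. The tiny primes and the specific exponent are illustrative assumptions only; real deployments use moduli of 2048 bits or more together with padding schemes such as OAEP (see Section 3B.1.2).

```python
# Toy RSA sketch following Section 3B.1. Illustrative only: the primes are
# far too small to be secure, and no padding is applied.
from math import gcd

p, q = 61, 53                 # two odd primes (kept secret)
N = p * q                     # public modulus, N = 3233
phi = (p - 1) * (q - 1)       # Euler totient of N, phi = 3120

e = 17                        # public exponent; gcd(e, phi) must be 1
assert gcd(e, phi) == 1
d = pow(e, -1, phi)           # private exponent: e * d = 1 (mod phi), Python 3.8+

X = 65                        # a message block, 0 <= X < N
Y = pow(X, e, N)              # encryption: Y = X^e mod N
assert pow(Y, d, N) == X      # decryption: X = Y^d mod N
```

The built-in three-argument pow() performs modular exponentiation efficiently, and pow(e, -1, phi) computes the multiplicative inverse, that is, the d with e × d ≡ 1 mod φ(N).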
#### 3B.1.2  Practical Considerations

To increase the speed of signature verification, suggested values for the exponent e of the public key are 3 or 2^16 + 1 (65,537) (Menezes et al., 1997, p. 437). Other variants designed to speed up decryption and signing are discussed in Boneh and Shacham (2002). For short-term confidentiality, the modulus N should be at least 768 bits. For long-term confidentiality (5–10 years), at least 1024 bits should be used. Currently, it is believed that confidentiality with a key of 2048 bits would last about 15 years. In practice, RSA is used with padding schemes to avoid some of the weaknesses of the RSA algorithm as described earlier. Padding techniques such as Optimal Asymmetric Encryption Padding (OAEP) are standardized as discussed in the next section.

#### 3B.2  Public Key Cryptography Standards

The public key cryptography standards (PKCS) are business standards developed by RSA Laboratories in collaboration with many other companies working in the area of cryptography. They have been used in many aspects of public key cryptography that are based on the RSA algorithm. At the time of writing this section, their number has reached 15.

PKCS #1 (RFC 2437 [1998]; RFC 3447 [2003]) defines the mechanisms for data encryption and signature using the RSA algorithm. These procedures are then utilized for constructing the signatures and electronic envelopes described in PKCS #7. Following the presentation of an adaptive chosen-ciphertext attack on PKCS #1 (Bleichenbacher, 1998), more secure encodings are available with PKCS #1v1.5 and PKCS #1v2.1, the latter of which is defined in RFC 3447. In particular, PKCS #1v2.1 defines an encryption scheme based on the Optimal Asymmetric Encryption Padding (OAEP) of Bellare and Rogaway (1994). PKCS #2 and #4 are incorporated in PKCS #1.

PKCS #3 defines the key exchange protocol using the Diffie–Hellman algorithm.

PKCS #5 describes a method for encrypting information using a secret key derived from a password. For hashing, the method utilizes either MD2 or MD5 to compute the key starting with the password and then encrypts the key with DES in the CBC mode.

PKCS #6 defines a syntax for extended X.509 certificates.

PKCS #7 (RFC 2315, 1998) defines the syntax of a message encrypted using the Basic Encoding Rules (BER) of ASN.1 (Abstract Syntax Notation 1) (Steedman, 1993) of ITU-T Recommendation X.209 (1988). These messages are formed with the help of six content types:

1. Data, for clear data
2. SignedData, for signed data
3. EnvelopedData, for clear data with numeric envelopes
4. SignedAndEnvelopedData, for data that are signed and enveloped
5. DigestedData, for digests
6. EncryptedData, for encrypted data

The secure messaging protocol S/MIME (Secure Multipurpose Internet Mail Extensions) and the messages of the SET protocol, designed to secure bank card payments over the Internet, utilize the PKCS #7 specifications.

PKCS #8 describes a format for sending information related to private keys.

PKCS #9 defines the optional attributes that could be added to other protocols of the series. The following items are considered: the certificates of PKCS #6, the electronically signed messages of PKCS #7, and the information on private keys as defined in PKCS #8.

PKCS #10 (RFC 2986, 2000) describes the syntax for certification requests to a certification authority.
The certification request must contain details on the identity of the candidate for certification, the distinguished name of the candidate, his or her public key, and, optionally, a list of supplementary attributes, a signature of the preceding information to verify the public key, and an identifier of the algorithm used for the signature so that the authority can proceed with the necessary verifications. The version adopted by the IETF is called CMS (Cryptographic Message Syntax).

PKCS #11 defines a cryptographic interface called Cryptoki (Cryptographic Token Interface Standard) between portable devices such as smart cards or PCMCIA cards and the security layers.

PKCS #12 describes a syntax for the storage and transport of public keys, certificates, and other users' secrets. In enterprise networks, key pairs are transmitted to individuals via password-protected PKCS #12 files.

PKCS #13 describes a cryptographic system using elliptic curves.

PKCS #15 describes a format to allow the portability of cryptographic credentials, such as keys, certificates, passwords, and PINs, among applications and among portable devices such as smart cards.

The specifications of PKCS #1, #7, and #10 are not IETF standards because they mandate the utilization of algorithms that RSADSI does not offer free of charge. Also note that in PKCS #11 and #15, the word token is used to indicate a portable device capable of storing persistent data.

#### 3B.3  PGP and OpenPGP

Pretty Good Privacy is considered to be the commercial system whose security is closest to the military grade. It is described in one of the IETF documents, namely, RFC 1991 (1996). PGP consists of six functions:

1. Public key exchange using RSA with MD5 hashing
2. Data compression with ZIP, which reduces the file size and redundancies before encryption (Reduction of the size augments the speed for both processing and transmission, while reduction of the redundancies makes cryptanalysis more difficult.)
3. Message encryption with IDEA
4. Encryption of the user's secret key using the digest of a sentence instead of a password
5. ASCII "armor" to protect the binary message from any mutilations that might be caused by Internet messaging systems (This armor is constructed by dividing the bits of three consecutive octets into four groups of 6 bits each and then by coding each group using a 7-bit character according to a given table. A checksum is then added to detect potential errors.)
6. Message segmentation

The IETF did not adopt PGP as a standard because it incorporates proprietary protocols. OpenPGP is based on PGP but avoids these intellectual property issues. It is specified in RFC 4880 (2007). RFC 6637 (2012) describes how to use elliptic curve cryptography (ECC) with OpenPGP. OpenPGP is currently the most widely used e-mail encryption standard. Companies and organizations that implement OpenPGP have formed the OpenPGP Alliance to promote it and to ensure interoperability. The Free Software Foundation has developed its own OpenPGP-conformant program called GNU Privacy Guard (abbreviated GnuPG or GPG). GnuPG is freely available together with all source code under the GNU General Public License (GPL).

#### 3B.4  Elliptic Curve Cryptography

Elliptic curve cryptography (ECC) is a public key cryptosystem where the computations take place on an elliptic curve. Elliptic curves have been applied in factoring integers, in proving primality, in coding theory, and in cryptography (Menezes, 1993).
Variants of the Diffie–Hellman and DSA algorithms on elliptic curves are the Elliptic Curve Diffie–Hellman algorithm (ECDH) and the Elliptic Curve Digital Signature Algorithm (ECDSA), respectively. They are used to create digital signatures and to establish keys for symmetric cryptography. Diffie–Hellman and ECDH are comparable in speed, but RSA is much slower. The advantage of elliptic curve cryptography is that key lengths are shorter than for existing public key schemes that provide equivalent security. For example, the level of security of 1024-bit RSA can be achieved with elliptic curves with a key size in the range of 171–180 bits (Wiener, 1998). This is an important factor in wireless communications and whenever bandwidth is a scarce resource.

Elliptic curves are defined over the finite field of the integers modulo a prime number p [the Galois field GF(p)] or that of binary polynomials [GF(2^m)]. The key size is the size of the prime number or the binary polynomial in bits. Cryptosystems over GF(2^m) appear to be slower than over GF(p), but there is no consensus on that point. Their main advantage, however, is that addition over GF(2^m) does not require integer multiplications, which reduces the cost of the integrated circuits implementing the computations. NIST has standardized a list of 15 elliptic curves of varying key lengths (NIST, 1999). Ten of these curves are for binary fields and five are for prime fields. These curves provide confidentiality equivalent to symmetric encryption with keys of length 80, 112, 128, 192, and 256 bits and beyond.

Table 3.9 shows the comparison of the key lengths of RSA and elliptic curve cryptography for the same level of security measured in terms of the effort to break the system (Menezes, 1993).

### TABLE 3.9   Comparison of Public Key Systems in Terms of Key Length in Bits for the Same Security Level

| RSA    | Elliptic Curve | Reduction Factor RSA/ECC |
|--------|----------------|--------------------------|
| 512    | 106            | 5:1                      |
| 1,024  | 160            | 7:1                      |
| 2,048  | 211            | 10:1                     |
| 5,120  | 320            | 16:1                     |
| 21,000 | 600            | 35:1                     |

Source: Menezes, A., Elliptic Curve Public Key Cryptosystems, Kluwer, Dordrecht, the Netherlands, 1993.

Table 3.10 gives the key sizes recommended by the National Institute of Standards and Technology for equivalent security using symmetric encryption algorithms (e.g., AES, DES, or SKIPJACK) or public key encryption with RSA, Diffie–Hellman, and elliptic curves.

### TABLE 3.10   NIST Recommended Key Lengths in Bits

| Symmetric | RSA and Diffie–Hellman | Elliptic Curve |
|-----------|------------------------|----------------|
| 80        | 1,024                  | 160–223        |
| 112       | 2,048                  | 224–255        |
| 128       | 3,072                  | 256–383        |
| 192       | 7,680                  | 384–511        |
| 256       | 15,360                 | 512+           |

Source: Barker, E. et al., National Institute of Standards and Technology (NIST), Recommendation for Key Management – Part 1: General (Revision 3), NIST Special Publication SP 800-57, July 2012c.

Thus, for the same level of security per the currently known attacks, elliptic curve–based systems can be implemented with smaller parameters. This computational efficiency is compared to Diffie–Hellman in Table 3.11 for several key sizes.

### TABLE 3.11   Relative Computation Costs of Diffie–Hellman and Elliptic Curves

| Symmetric Key Size in Bits | Diffie–Hellman Cost : Elliptic Curve Cost |
|----------------------------|-------------------------------------------|
| 80                         | 3:1                                       |
| 112                        | 6:1                                       |
| 128                        | 10:1                                      |
| 192                        | 32:1                                      |
| 256                        | 64:1                                      |

Source: National Security Agency (NSA), The case for elliptic curve cryptography, Central Security Service, January 15, 2009, https://www.nsa.gov/business/programs/elliptic_curve.shtml, last accessed June 20, 2015, no longer accessible.

Another related aspect is the channel overhead for key exchanges and digital signatures on a communications link.
In channel-constrained environments, or when the computing power, memory, and battery life of devices are critical, such as in wireless communication, elliptic curves offer a much better solution than RSA or Diffie–Hellman. As a result, many companies in wireless communications have embraced elliptic curve cryptography. Furthermore, many nations (e.g., the United States, the United Kingdom, and Canada) have adopted elliptic curve cryptography in the new generation of equipment to protect classified information.

One disadvantage of ECC is that it increases the size of the encrypted message significantly more than RSA encryption. Furthermore, the ECC algorithm is more complex and more difficult to implement than RSA, which increases the likelihood of implementation errors, thereby reducing the security of the algorithm. Another problem is that various aspects of elliptic curve cryptography have been patented, notably by the Canadian company Certicom, which holds over 130 patents related to elliptic curves and public key cryptography in general.

#### 3C  Appendix: Principles of the Digital Signature Algorithm and the Elliptic Curve Digital Signature Algorithm

According to the Digital Signature Algorithm (DSA) defined in ANSI X9.30:1 (1997), the signature of a message M is the pair of numbers r and s computed as follows:

$r = (g^k \bmod p) \bmod q, \qquad s = k^{-1}(\mathrm{SHA}(M) + x \cdot r) \bmod q$

The following are given:

• p and q are primes such that $2^{511} < p < 2^{1024}$ and $2^{159} < q < 2^{160}$, and q is a prime divisor of (p − 1), that is, (p − 1) = mq for some integer m.
• $g = h^{(p-1)/q} \bmod p$ is a generator modulo p of order q. The variable h is an integer, 1 < h < (p − 1), such that $h^{(p-1)/q} \bmod p > 1$. By Fermat's little theorem, $g^q = h^{p-1} \equiv 1 \pmod{p}$. Thus, each time the exponent is a multiple of q, the result will be equal to 1 (mod p).
• k is a random integer in the interval 0 < k < q and is different for each signature.
• $k^{-1}$ is the multiplicative inverse of k mod q, that is, $(k^{-1} \times k) \bmod q = 1$.
• SHA() is the SHA-1 hash function.
• x, a random integer with 0 < x < q, is the private key of the sender, while the public key is (p, q, g, y) with $y = g^x \bmod p$.

The DSA signature consists of the pair of integers (r, s). To verify the signature, the verifier computes

$w = s^{-1} \bmod q, \quad u_1 = \mathrm{SHA}(M) \cdot w \bmod q, \quad u_2 = r \cdot w \bmod q, \quad v = (g^{u_1} y^{u_2} \bmod p) \bmod q$

The signature is valid if v = r. To show this, note that the definition of s gives $k \equiv s^{-1}(\mathrm{SHA}(M) + x r) \equiv u_1 + x u_2 \pmod{q}$; since g has order q modulo p, $g^{u_1} y^{u_2} = g^{u_1 + x u_2} = g^k \pmod{p}$, and therefore $v = (g^k \bmod p) \bmod q = r$.

The strength of the algorithm is heavily dependent on the choice of the random numbers. Because r is derived from the per-signature random value k, the choice of k can also establish a subliminal channel: verifiers who additionally know the signer's private key can recover k and thereby receive extra information passed through its value.

The Elliptic Curve Digital Signature Algorithm (ECDSA) of ANSI X9.62:2005 is used for digital signing, while ECDH can be used to secure online key exchange. Typical key sizes are in the range of 160–200 bits. The ECDSA keys are generated as follows (Paar and Pelzl, 2010, pp. 283–284):

• Use an elliptic curve E with modulus p, coefficients a and b, and a point A that generates a cyclic group of prime order q, that is, qA = O (the identity element), and all the elements of this cyclic group can be generated with A.
• Choose a random integer y such that 0 < y < q and compute B = yA.
• The private key is y and the public key is (p, a, b, q, A, B).

The ECDSA signature for a message M is computed as follows (the corresponding library calls are sketched after this list):

• Choose an integer k randomly with 0 < k < q.
• Compute R = kA and assign the x-coordinate of R to the variable r.
• Compute $s = (H(M) + y \cdot r) k^{-1} \bmod q$, with H() a hash function.
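Before turning to the verification steps, here is how ECDSA signing and verification look in practice with the Python pyca/cryptography package. The curve and hash choices (P-256, SHA-256) are assumptions made for this example; the library generates the per-signature random k and performs the modular arithmetic above internally.

```python
# ECDSA sign/verify sketch (pyca/cryptography). The signature encodes the
# pair (r, s) from Appendix 3C in DER form.
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

private_key = ec.generate_private_key(ec.SECP256R1())  # private y, public B = yA
public_key = private_key.public_key()

message = b"message M"
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

try:
    public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
    print("signature valid")
except InvalidSignature:
    print("signature invalid")
```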
The signature verification proceeds as follows:

• Compute $w = s^{-1} \bmod q$.
• Compute $u_1 = w \cdot H(M) \bmod q$ and $u_2 = w \cdot r \bmod q$.
• Compute $P = u_1 A + u_2 B$.
• The signature is valid if and only if $x_P$, the x-coordinate of the point P, satisfies the condition

$x_P = r \bmod q$

#### Questions

1. What are the major security vulnerabilities in a client/server communication?
2. What are the services needed to secure data exchanges in e-commerce?
3. Compare the tunnel mode and transport mode of IPSec.
4. Why is the Authentication Header (AH) less used with IPSec than the Encapsulating Security Payload (ESP) protocol?
5. What factors affect the strength of encryption?
6. Discuss some potential applications for blind signatures.
7. Discuss the difference between biometric identification and biometric verification.
8. Compare and contrast the following biometrics: fingerprint, face recognition, iris recognition, voice recognition, and hand geometry.
9. Discuss some of the vulnerabilities of biometric identification systems.
10. What is needed to offer nonrepudiation services?
11. What conditions favor denial-of-service attacks?
12. Which of the following items is not in a digital public key certificate?
 • (a) the subject's public key
 • (b) the digital signature of the certification authority
 • (c) the subject's private key
 • (d) the digital certificate serial number
13. What is cross-certification and why is it used? What caveats need to be considered for it to work correctly?
14. What is the Sender Policy Framework (SPF)? Describe its advantages and drawbacks.
15. What is DomainKeys Identified Mail (DKIM)? Describe its advantages and drawbacks.
16. Using the case of AES as a starting point, define a process to select a new encryption algorithm.
17. Compare public key encryption and symmetric encryption in terms of advantages and disadvantages. How can the strong points of both be combined?
18. What are the reasons for the current interest in elliptic curve cryptography (ECC)?
19. Explain why the length of the encryption key cannot be used as the sole measure of the strength of an encryption scheme.
20. Speculate on the reasons that led to the declassification of the SKIPJACK algorithm.
21. What are the problems facing cross-certification? How are financial institutions attempting to solve them?
22. What is the counter mode of block cipher encryption? What are its advantages?
23. With the symbols as explained in Appendix 3C, show that a valid ECDSA signature satisfies the condition $r = x_P \bmod q$.
24. List the advantages and disadvantages of controlling the export of strong encryption algorithms from the United States.
{}
The co-ordinates of points are given by $X=\frac{u+v}{\sqrt{2}}, Y=\frac{u-v}{\sqrt{2}}, Z=uv$. (i) Explain why the totality of points given by the above equations forms a surface. (ii) Find the unit normal to the surface at any point (u, v).
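A sketch of one way to answer both parts (my own working, not part of the original post):

```latex
% (i) The map r(u,v) = ((u+v)/\sqrt{2}, (u-v)/\sqrt{2}, uv) is smooth, and
% eliminating the parameters gives X^2 - Y^2 = 2uv = 2Z, so the image lies on
% the hyperbolic paraboloid x^2 - y^2 = 2z. Moreover r_u x r_v is never zero
% (its z-component is -1), so the parametrization is regular and the image
% is a smooth surface.
%
% (ii) Tangent vectors and their cross product:
\[
\mathbf{r}_u = \Bigl(\tfrac{1}{\sqrt{2}}, \tfrac{1}{\sqrt{2}}, v\Bigr), \qquad
\mathbf{r}_v = \Bigl(\tfrac{1}{\sqrt{2}}, -\tfrac{1}{\sqrt{2}}, u\Bigr),
\]
\[
\mathbf{r}_u \times \mathbf{r}_v
  = \Bigl(\tfrac{u+v}{\sqrt{2}}, \tfrac{v-u}{\sqrt{2}}, -1\Bigr), \qquad
\lVert \mathbf{r}_u \times \mathbf{r}_v \rVert = \sqrt{u^{2}+v^{2}+1},
\]
\[
\hat{\mathbf{n}}
  = \frac{1}{\sqrt{u^{2}+v^{2}+1}}
    \Bigl(\tfrac{u+v}{\sqrt{2}}, \tfrac{v-u}{\sqrt{2}}, -1\Bigr).
\]
```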
{}
PARAGRAPH "X"

Let S be the circle in the xy-plane defined by the equation x² + y² = 4. (For Ques. No 15 and 16.)

Let E1E2 and F1F2 be the chords of S passing through the point P0(1, 1) and parallel to the x-axis and the y-axis, respectively. Let G1G2 be the chord of S passing through P0 and having slope −1. Let the tangents to S at E1 and E2 meet at E3, the tangents to S at F1 and F2 meet at F3, and the tangents to S at G1 and G2 meet at G3. Then, the points E3, F3, and G3 lie on the curve

(a) x + y = 4 (b) (x − 4)² + (y − 4)² = 16 (c) (x − 4)(y − 4) = 4 (d) xy = 4
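A worked sketch (my own; it assumes the reconstruction above, namely circle x² + y² = 4, P0 = (1, 1), and slope −1, which matches the JEE Advanced 2018 version of this paragraph):

```latex
% Tangent to x^2 + y^2 = 4 at (x_1, y_1): x x_1 + y y_1 = 4.
%
% Horizontal chord through P0: y = 1 meets the circle at E_{1,2} = (\pm\sqrt{3}, 1);
% the tangents \sqrt{3}x + y = 4 and -\sqrt{3}x + y = 4 meet at
\[ E_3 = (0, 4). \]
% Vertical chord through P0: x = 1 gives F_{1,2} = (1, \pm\sqrt{3}) and, by symmetry,
\[ F_3 = (4, 0). \]
% Chord of slope -1 through P0: x + y = 2 meets the circle at (0,2) and (2,0);
% the tangents there are y = 2 and x = 2, so
\[ G_3 = (2, 2). \]
% All three points satisfy x + y = 4, i.e., option (a).
```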
{}
# Symmetric Economic Curves

When I try to create symmetric curves to the left of the y-axis, I get weird results. Does anyone have any idea what to do to get symmetric results?

\begin{figure}[h!] \caption{???} \centering \begin{tikzpicture}[scale=0.5] \draw[thick] (0,11) node[left]{$P$}--(0,0)--(12,0) node[below]{$Y$}; \draw[thick] (0,-11) node[left]{$L$}--(0,0)--(12,0); \draw[thick] (-11,0) node[left]{$W/P$}--(0,0)--(12,0); \draw (1,8) to [out=280,in=175] (8,1); \node [right] at (8,1) {$AD(M_0)$}; \draw (3,8) to [out=280,in=175] (8,3); \node [right] at (8,3) {$AD(M_1)$}; \draw(5,0)--(5,9)node[right]{$AS$}; \node[left] at (0,4.2){$P_1$}; \draw[dotted](0,4.2)--(5,4.2); \node[left] at (0,1.9){$P_1$}; \draw[dotted](0,1.9)--(5,1.9); \draw (-1,8) to [out=-0] (-8,1); \node [left] at (-8,1) {$W_0$}; \draw (-3,8) to [out=0,in=175] (-8,3); \node [right] at (-8,3) {$W_1)$}; \draw(5,0)--(5,9)node[right]{$AS$}; \foreach \Point/\PointLabel in {(5,6)/A, (6,5)/B} \draw[fill=black] \Point circle (0.05) node[above] {$\PointLabel$}; \end{tikzpicture} \label{fig:chart3} \end{figure}

• welcome -- please see the answer below -- in addition you may like to correct the labels A and B by editing the syntax -- \draw[fill=black] \Point circle (0.05) node[right] {$\PointLabel$}; – js bibra Oct 24 at 15:07

If you want to mirror something, you can also invoke the magic mirr— ...ehm, I mean a scope and apply in your case xscale=-1. Then you don't need to calculate anything. Of course whether this approach is convenient or not depends on your case, but for your question it is a possible solution. By the way, I have fixed some of your commands that were redundant; for example, you can attach a node to a \draw without having to explicitly write it as a separate one.

## Code

\documentclass[tikz, margin=10pt]{standalone} \begin{document} \begin{tikzpicture}[scale=0.5] \draw[thick] (0,11) node[left]{$P$}--(0,0)--(12,0) node[below]{$Y$}; \draw[thick] (0,-11) node[left]{$L$}--(0,0)--(12,0); \draw[thick] (-11,0) node[left]{$W/P$}--(0,0)--(12,0); \draw (1,8) to [out=280,in=175] (8,1) node[right] {$AD(M_0)$}; \draw (3,8) to [out=280,in=175] (8,3) node[right] {$AD(M_1)$}; \draw(5,0)--(5,9)node[right]{$AS$}; \draw[dotted] (0,4.2) -- (5,4.2) node[at start, left] {$P_1$}; \draw[dotted] (0,1.9) -- (5,1.9) node[at start, left] {$P_1$}; % MAGIC MIRROR \begin{scope}[xscale=-1] \draw (1,8) to [out=280,in=175] (8,1) node[left] {$W_0$}; \draw (3,8) to [out=280,in=175] (8,3) node[left] {$W_1$}; \end{scope} \foreach \Point/\PointLabel in {(5,6)/A, (6,5)/B} \draw[fill=black] \Point circle (0.05) node[above] {$\PointLabel$}; \end{tikzpicture} \end{document}

Is it better now? I have also corrected the label W1 -- it was misaligned.

\begin{tikzpicture}[scale=0.5] \draw[thick] (0,11) node[left]{$P$}--(0,0)--(12,0) node[below]{$Y$}; \draw[thick] (0,-11) node[left]{$L$}--(0,0)--(12,0); \draw[thick] (-11,0) node[left]{$W/P$}--(0,0)--(12,0); \draw (1,8) to [out=280,in=175] (8,1); \node [right] at (8,1) {$AD(M_0)$}; \draw (3,8) to [out=280,in=175] (8,3); \node [right] at (8,3) {$AD(M_1)$}; \draw(5,0)--(5,9)node[right]{$AS$}; \node[left] at (0,4.2){$P_1$}; \draw[dotted](0,4.2)--(5,4.2); \node[left] at (0,1.9){$P_1$}; \draw[dotted](0,1.9)--(5,1.9); \draw (-1,8) to [out=260,in=15] (-8,1); \node [left] at (-8,1) {$W_0$}; \draw (-3,8) to [out=260,in=15](-8,3); \node [below left] at (-8,3) {$W_1$}; \draw(5,0)--(5,9)node[right]{$AS$}; \foreach \Point/\PointLabel in {(5,6)/A, (6,5)/B} \draw[fill=black] \Point circle (0.05) node[above] {$\PointLabel$};
\end{tikzpicture}
{}
# Evolutionary Rates & Selection Strength

#### 2022-05-12

This vignette explains how to extract evolutionary rate parameters estimated from relaxed clock Bayesian inference analyses produced by the program Mr. Bayes. It also shows how to use evolutionary rate based inference of selection strength (or mode) adapted to clock-based rates, as introduced by Simões and Pierce (2021).

## Evolutionary Rates Statistics and Plots

In this section, we will extract evolutionary rate parameters for each node from a Bayesian clock (time-calibrated) summary tree produced by Mr. Bayes. The functions below will store them in a data frame, produce summary statistics tables, and create different plots showing how rates are distributed across morphological partitions and clades.

Load the EvoPhylo package

library(EvoPhylo)

### 1. Get rates from the clock tree and create a rate table

First, import a Bayesian clock tree using treeio's function read.mrbayes() (= read.beast()).

## Import summary tree with three clock partitions produced by
## Mr. Bayes (.t or .tre files) from your local directory
tree3p <- treeio::read.mrbayes("Tree3p.t")

Below, we use the example tree tree3p that accompanies EvoPhylo.

data(tree3p)

Subsequently, using get_clockrate_table(), users can extract mean or median rate values for each node in the summary tree that were annotated by Mr. Bayes when creating the summary tree with Mr. Bayes "sumt" command. These mean or median rate values are calculated by Mr. Bayes taking into account all trees from the posterior sample. This works for any summary tree produced by Mr. Bayes: a majority rule consensus or the fully resolved maximum compatible tree (the latter is used in the examples here). Please note that analyses must have reached the stationarity phase and independent runs must have converged for the summary statistics in each node to be meaningful summaries of the posterior sample.

## Get table of clock rates with summary stats for each node in
## the tree for each relaxed clock partition
rate_table_means_no_clades3 <- get_clockrate_table(tree3p, summary = "mean")

### 2. Export the rate table

This is a necessary step to subsequently open the rate table spreadsheet locally (e.g., using Microsoft Office Excel) and customize the table with clade names associated with each node in the tree for downstream analysis.

## Export the rate tables
write.csv(rate_table_means_no_clades3, file = "RateTable_Means3.csv")

### 3. Plot tree node labels

To visualize the node values in the tree, you can use ggtree().

## Plot tree node labels
library(ggtree)
tree_nodes <- ggtree(tree3p, branch.length = "none", size = 0.05) +
geom_tiplab(size = 2, linesize = 0.01, color = "black", offset = 0.5) +
geom_label(aes(label = node), size = 2, color="purple", position = "dodge")
tree_nodes

## Save your plot to your working directory as a PDF
ggplot2::ggsave("Tree_nodes.pdf", width = 10, height = 10)

### 4. Get summary statistics table and plots

Import the rate table with clade membership (new "clade" column added)

## Import rate table with clade membership (new "clade" column added)
## from your local directory
rate_table_clades_means3 <- read.csv("RateTable_Means3_Clades.csv", header = TRUE)

Below, we use the rate table with clade membership rate_table_clades_means3 that accompanies EvoPhylo.
data(rate_table_clades_means3)
head(rate_table_clades_means3)

## clade nodes rates1 rates2 rates3
## 1 Dipnomorpha 1 0.943696 0.981486 1.006164
## 2 Dipnomorpha 2 1.065326 0.772074 0.913194
## 3 Dipnomorpha 3 1.182460 0.656872 0.813618
## 4 Dipnomorpha 4 1.229767 0.523709 0.722519
## 5 Dipnomorpha 5 1.230564 0.517773 0.720479
## 6 Other 6 0.658855 0.717277 0.663950

Obtain the summary statistics table and plots for each clade by clock using clockrate_summary(). Supplying a file path to file saves the output to that file.

## Get summary statistics table for each clade by clock
clockrate_summary(rate_table_clades_means3, file = "Sum_RateTable_Means3.csv")

Rate table summary statistics

| clade | clock | n | mean | sd | min | Q1 | median | Q3 | max |
|---|---|---|---|---|---|---|---|---|---|
| Dipnomorpha | 1 | 8 | 1.10 | 0.11 | 0.94 | 1.02 | 1.10 | 1.19 | 1.23 |
| Elpisostegalia | 1 | 14 | 1.61 | 0.22 | 1.13 | 1.45 | 1.68 | 1.80 | 1.81 |
| Osteolepididae | 1 | 11 | 0.63 | 0.26 | 0.16 | 0.44 | 0.81 | 0.84 | 0.87 |
| Rhizodontidae | 1 | 14 | 0.57 | 0.30 | 0.03 | 0.33 | 0.67 | 0.83 | 0.89 |
| Tristichopteridae | 1 | 21 | 0.71 | 0.04 | 0.61 | 0.69 | 0.72 | 0.74 | 0.78 |
| Other | 1 | 11 | 0.89 | 0.36 | 0.54 | 0.69 | 0.78 | 0.94 | 1.81 |
| Dipnomorpha | 2 | 8 | 0.75 | 0.18 | 0.52 | 0.62 | 0.75 | 0.89 | 0.98 |
| Elpisostegalia | 2 | 14 | 1.36 | 0.10 | 1.03 | 1.36 | 1.38 | 1.41 | 1.42 |
| Osteolepididae | 2 | 11 | 0.34 | 0.15 | 0.07 | 0.28 | 0.38 | 0.45 | 0.53 |
| Rhizodontidae | 2 | 14 | 0.33 | 0.18 | 0.02 | 0.17 | 0.38 | 0.44 | 0.56 |
| Tristichopteridae | 2 | 21 | 0.34 | 0.06 | 0.27 | 0.32 | 0.33 | 0.33 | 0.55 |
| Other | 2 | 11 | 0.75 | 0.25 | 0.39 | 0.61 | 0.72 | 0.78 | 1.35 |
| Dipnomorpha | 3 | 8 | 0.87 | 0.11 | 0.72 | 0.79 | 0.89 | 0.95 | 1.01 |
| Elpisostegalia | 3 | 14 | 0.83 | 0.16 | 0.63 | 0.67 | 0.89 | 0.99 | 1.00 |
| Osteolepididae | 3 | 11 | 0.32 | 0.13 | 0.07 | 0.27 | 0.33 | 0.42 | 0.49 |
| Rhizodontidae | 3 | 14 | 0.32 | 0.17 | 0.02 | 0.21 | 0.40 | 0.43 | 0.52 |
| Tristichopteridae | 3 | 21 | 0.52 | 0.08 | 0.37 | 0.44 | 0.54 | 0.59 | 0.64 |
| Other | 3 | 11 | 0.73 | 0.17 | 0.47 | 0.64 | 0.70 | 0.81 | 1.00 |

### 5. Plot rates by clock partition and clade

Plot distributions of rates by clock partition and clade with clockrate_dens_plot().

## Overlapping plots
clockrate_dens_plot(rate_table_clades_means3, stack = FALSE, nrow = 1, scales = "fixed")

Sometimes using stacked plots provides a better visualization, as it avoids overlapping distributions.

## Stacked plots
clockrate_dens_plot(rate_table_clades_means3, stack = TRUE, nrow = 1, scales = "fixed")

It is also possible to append extra layers using ggplot2 functions, such as for changing the color scale. Below, we change the color scale to the viridis scale.

## Stacked plots with viridis color scale
clockrate_dens_plot(rate_table_clades_means3, stack = TRUE, nrow = 1, scales = "fixed") +
ggplot2::scale_color_viridis_d() +
ggplot2::scale_fill_viridis_d()

### 6. Rate linear models

We can also plot linear model regressions between rates from two or more clocks with clockrate_reg_plot().

## Plot regressions of rates from two clocks
p12 <- clockrate_reg_plot(rate_table_clades_means3, clock_x = 1, clock_y = 2)
p13 <- clockrate_reg_plot(rate_table_clades_means3, clock_x = 1, clock_y = 3)
p23 <- clockrate_reg_plot(rate_table_clades_means3, clock_x = 2, clock_y = 3)

library(patchwork) #for combining plots
p12 + p13 + p23 + plot_layout(ncol = 2)

## Save your plot to your working directory as a PDF
ggplot2::ggsave("Plot_regs.pdf", width = 8, height = 8)

### Rates from single clock analysis

You can also explore clock rates for summary trees including a single clock shared among all character partitions (or an unpartitioned analysis):

## Import summary tree with a single clock partition produced by
## Mr. Bayes (.t or .tre files) from examples directory
tree1p <- treeio::read.mrbayes("Tree1p.t")

Below, we use the example tree tree1p that accompanies EvoPhylo.
data(tree1p)

Then, get the table of clock rates with summary stats for each node in the tree for the single relaxed clock partition.

rate_table_means_no_clades1 <- get_clockrate_table(tree1p, summary = "mean")

## Export the rate tables
write.csv(rate_table_means_no_clades1, file = "RateTable_Means1.csv")

## Import rate table after adding clade membership (new "clade" column added)
rate_table_clades_means1 <- read.csv("RateTable_Means1_Clades.csv", header = TRUE)

data(rate_table_clades_means1)

## Get summary statistics table for each clade by clock
clockrate_summary(rate_table_clades_means1, file = "Sum_RateTable_Medians1.csv")

Rate table summary statistics

| clade | n | mean | sd | min | Q1 | median | Q3 | max |
|---|---|---|---|---|---|---|---|---|
| Dipnomorpha | 8 | 0.57 | 0.28 | 0.22 | 0.37 | 0.54 | 0.78 | 0.95 |
| Elpisostegalia | 14 | 0.91 | 0.25 | 0.44 | 0.77 | 0.85 | 1.03 | 1.35 |
| Osteolepididae | 11 | 0.23 | 0.10 | 0.03 | 0.18 | 0.23 | 0.30 | 0.38 |
| Rhizodontidae | 14 | 0.18 | 0.15 | 0.00 | 0.04 | 0.20 | 0.29 | 0.42 |
| Tristichopteridae | 21 | 0.39 | 0.43 | 0.05 | 0.11 | 0.19 | 0.34 | 1.32 |
| Other | 11 | 0.41 | 0.26 | 0.20 | 0.25 | 0.28 | 0.45 | 1.00 |

## Stacked plots with viridis color scale
clockrate_dens_plot(rate_table_clades_means1, stack = TRUE, nrow = 1, scales = "fixed") +
ggplot2::scale_color_viridis_d() +
ggplot2::scale_fill_viridis_d()

## Selection strength (mode)

In this section, we will use evolutionary rate based inference of selection strength (or mode), as first introduced by Baker et al. (2016) for continuous traits, and later adapted to clock-based rates by Simões and Pierce (2021).

### 1. Import and transform table

## Import rate table with clade membership (new "clade" column added)
## from your local directory with "mean" values
rate_table_clades_means3 <- read.csv("RateTable_Means3_Clades.csv", header = TRUE)

Below, we use the rate table with clade membership rate_table_clades_means3 that accompanies EvoPhylo.

data(rate_table_clades_means3)

It is necessary to transform the table from wide to long format with clock_reshape().

## Transform table from wide to long format
rates_by_clade <- clock_reshape(rate_table_clades_means3)

### 2. Import combined log file from all runs

This is produced by using combine_log(). Alternatively, users can also use LogCombiner from the BEAST2 software package. The first argument passed to combine_log() should be a path to the folder containing the log files to be imported and combined.

## Import all log (.p) files from all runs and combine them, with burn-in = 25%
## and downsampling to 1,000 trees in each log file
posterior3p <- combine_log("LogFiles3p", burnin = 0.25, downsample = 1000)

Below, we use the posterior dataset posterior3p that accompanies EvoPhylo.

data(posterior3p)

## Show first 10 lines of combined log file
head(posterior3p, 10)

### 3. Pairwise t-tests of rate values

The function get_pwt_rates() will produce a table of pairwise t-tests for differences between the mean clockrate value in the posterior and the absolute rate for each tree node.
## Get table of pairwise t-tests for difference between the posterior
## mean and the rate for each tree node
rate_sign_tests <- get_pwt_rates(rate_table_clades_means3, posterior3p)

## Show first 10 lines of table
head(rate_sign_tests, 10)

Pairwise t-tests of rate values (first 10 rows)

| clade | nodes | clock | relative rate | absolute rate (mean) | null | p.value |
|---|---|---|---|---|---|---|
| Dipnomorpha | 1 | 1 | 0.943696 | 0.0118443 | 0.0118443 | 0 |
| Dipnomorpha | 2 | 1 | 1.065326 | 0.0133709 | 0.0133709 | 0 |
| Dipnomorpha | 3 | 1 | 1.182460 | 0.0148411 | 0.0148411 | 0 |
| Dipnomorpha | 4 | 1 | 1.229767 | 0.0154348 | 0.0154348 | 0 |
| Dipnomorpha | 5 | 1 | 1.230564 | 0.0154448 | 0.0154448 | 0 |
| Other | 6 | 1 | 0.658855 | 0.0082693 | 0.0082693 | 0 |
| Other | 7 | 1 | 0.603090 | 0.0075694 | 0.0075694 | 0 |
| Osteolepididae | 8 | 1 | 0.843373 | 0.0105852 | 0.0105852 | 0 |
| Osteolepididae | 9 | 1 | 0.872012 | 0.0109446 | 0.0109446 | 0 |
| Osteolepididae | 10 | 1 | 0.811473 | 0.0101848 | 0.0101848 | 0 |

## Export the table
write.csv(rate_sign_tests, file = "RateSign_tests.csv")

### 4. Plot selection gradient on the summary tree

Using different thresholds, identify the selection strength (or mode) across branches in the tree for each clock partition with plot_treerates_sgn().

## Plot tree using various thresholds for clock partition 1
A1 <- plot_treerates_sgn(
tree3p, posterior3p,
clock = 1, #Show rates for clock partition 1
summary = "mean", #sets summary stats to get from summary tree nodes
branch_size = 1.5, tip_size = 3, #sets size for tree elements
xlim = c(-450, -260), nbreaks = 8, geo_size = list(3, 3), #sets limits and breaks for geoscale
threshold = c("1 SD", "2 SD")) #sets threshold for selection mode
A1

Plot trees using various thresholds for the other clock partitions and combine them.

## Plot trees using various thresholds for the other clock partitions and combine them
A2 <- plot_treerates_sgn(
tree3p, posterior3p,
clock = 2, #Show rates for clock partition 2
summary = "mean", #sets summary stats to get from summary tree nodes
branch_size = 1.5, tip_size = 3, #sets size for tree elements
xlim = c(-450, -260), nbreaks = 8, geo_size = list(3, 3), #sets limits and breaks for geoscale
threshold = c("1 SD", "2 SD")) #sets threshold for selection mode

A3 <- plot_treerates_sgn(
tree3p, posterior3p,
clock = 3, #Show rates for clock partition 3
summary = "mean", #sets summary stats to get from summary tree nodes
branch_size = 1.5, tip_size = 3, #sets size for tree elements
xlim = c(-450, -260), nbreaks = 8, geo_size = list(3, 3), #sets limits and breaks for geoscale
threshold = c("1 SD", "2 SD")) #sets threshold for selection mode

library(patchwork)
A1 + A2 + A3 + plot_layout(nrow = 1)

## Save your plot to your working directory as a PDF
ggplot2::ggsave("Tree_Sel_3p.pdf", width = 20, height = 8)

## References

Baker, Joanna, Andrew Meade, Mark Pagel, and Chris Venditti. 2016. "Positive Phenotypic Selection Inferred from Phylogenies." Biological Journal of the Linnean Society 118 (1): 95–115.

Simões, Tiago R., and Stephanie E. Pierce. 2021. "Sustained High Rates of Morphological Evolution During the Rise of Tetrapods." Nature Ecology & Evolution 5 (10): 1403–14. https://doi.org/10.1038/s41559-021-01532-x.
{}
# How to Prepare GATE Exam for Maths

Exam pattern

| Paper section | Weight |
|---|---|
| Subject questions | 85% of total marks |
| General aptitude | 15% of total marks |

Syllabus

Linear algebra: Finite and infinite dimensional vector spaces, basis, linear transformations and their matrix representations, range space and null space, determinants, eigenvalues and eigenvectors, minimal polynomial, Cayley–Hamilton theorem, diagonalization, inner product spaces, norms, Jordan canonical form.

Abstract algebra: Binary operations, groups and subgroups, cyclic groups, permutation groups, normal subgroups, homomorphism and isomorphism, Sylow's theorems, rings and subrings, ideals, prime and maximal ideals, quotient rings, unique factorization domains, principal ideal domains, Euclidean domains, polynomial rings and their irreducibility, fields and their extensions.

Real analysis: Sets, countable and uncountable sets, supremum and infimum, sequences and series and their convergence, continuous and differentiable functions, uniform continuity, conditional convergence and uniform convergence, Riemann integral, power series and radius of convergence, double and triple integration, surface and line integrals, Gauss theorem, Lebesgue measure, measurable functions, Lebesgue integrals.

Complex analysis: Analytic functions, bilinear transformations, conformal mapping, complex integration, Cauchy's integral formula, Liouville's theorem, maximum modulus principle, zeros and singularities, Taylor and Laurent series, argument principle and residue theorem.

Ordinary differential equations: First order ordinary differential equations, uniqueness and existence theorems for initial value problems, singular and particular solutions, homogeneous and non-homogeneous equations, Bernoulli equation, Laplace transform, linearly independent solutions of higher order differential equations.

Partial differential equations: Linear and quasi-linear first order partial differential equations, Dirichlet and Neumann problems, boundary value problems, solutions of heat and wave equations.

Topology: General concepts of topology, bases and subbases, subspace topology, order topology and product topology, compactness and connectedness.

Probability and statistics: Basic definitions of probability, conditional probability, Bayes' theorem, independence, random variables, sample space, standard probability distributions (discrete uniform, binomial, Poisson, geometric, normal, exponential), probability distribution functions, weak and strong laws of large numbers, central limit theorem, sampling distributions, interval estimation, testing of hypotheses.

Linear programming: Linear programming problems, convex sets, graphical methods for linear programming problems, feasible solutions, simplex method, big-M and two-phase methods, infeasible and unbounded linear programming problems, dual problems and duality theorems, dual simplex method.
Average marks from each section

| Section | Average marks |
|---|---|
| Real analysis | 12–18 |
| Complex analysis | 4–8 |
| Linear algebra | 10–18 |
| Abstract algebra | 6–10 |
| Ordinary differential equations | 10–12 |
| Partial differential equations | 2–4 |
| Numerical methods | 8–12 |
| Topology | 2–4 |
| Probability and statistics | 6–20 |
| LPP | 4–8 |
| General aptitude | 10 |

Tips to crack GATE (Maths)

• Prepare 3 or 4 topics thoroughly (choose according to the average marks).
• Don't leave out linear algebra, because it carries an average of 10 marks and is the easiest topic.
• MSc (Maths) students should not prepare probability and statistics, because this section favors MSc (Statistics) students.
• Practice previous years' questions (it is necessary).
{}
What happens when trans-2-butene reacts with aqueous bromine in the presence of chloride? $$\ce{Br2-H2O}$$, containing some $$\ce{NaCl, AlCl3}$$?

I initially thought that there would be anti addition of Br and OH to give 3-bromobutan-2-ol, but then I also got the intuition that NaCl might ionize in water and the $$\ce{Cl-}$$ might be involved here as well (obviously, with different stereochemical products).

But when I referred to my book, it showed the following products only:

Why does this happen?

• First, $\ce{Br2}$ forms a positive ion with one $\ce{Br}$ atom fixed on the double bond. It makes a positively charged triangular structure with the two central carbon atoms and one bromine atom. Then a negative ion may react with this positive ion. And here there are two possibilities: either the bromide ion remaining after this first step, or a chloride ion from the solution. The probability that chloride reacts is greater. So $\ce{Cl-}$ attacks the triangular cation and gets fixed from the opposite side of the triangle. – Maurice Mar 23 at 16:51

• @Maurice Yes, I know that the bromonium ion is formed as an intermediate. What I'm asking is: Why does $\ce{OH-}$ also participate and attack the cation? – Prajwal Tiwari Mar 23 at 17:00

• @PrajwalTiwari OH- is present in very low concentrations in water. Usually, it's H2O which attacks the bromonium intermediate; however, in this case there are lots of Cl- around, and Cl- is a better nucleophile than water, so Cl- is the one that attacks (mostly) – Shoubhik R Maiti Mar 23 at 19:40
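To summarize the mechanism discussed in the comments, here is a sketch of the competing steps in my own notation; the major/minor labels follow the relative nucleophile concentrations argued above, not a quantitative claim:

```latex
% Step 1: Br2 adds to the C=C bond to give a cyclic bromonium ion plus Br^-:
\[
\ce{CH3CH=CHCH3 + Br2 ->} \;\text{bromonium ion}^{+} \; \ce{+ Br-}
\]
% Step 2: a nucleophile opens the ring from the back side (anti addition).
% With NaCl dissolved, Cl^- is abundant and a better nucleophile than H2O:
\[
\text{bromonium}^{+} \; \ce{+ Cl- -> CH3CHBrCHClCH3}
\quad \text{(2-bromo-3-chlorobutane, major)}
\]
\[
\text{bromonium}^{+} \; \ce{+ H2O -> CH3CHBrCH(OH)CH3 + H+}
\quad \text{(3-bromobutan-2-ol, minor)}
\]
```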
{}
Commenter: Think of an equation that relates the things discussed here. SohCahToa! Then you take the derivative, but the derivative of your angle becomes dθ.

OP: Where does d(hypotenuse) fit in?

Commenter: That is what you want to solve for. You have your values for everything else (or can calculate them from the original equation).

OP: So I have cos θ dθ = −20 dc/c², then dc = c² cos θ dθ/(−20)?
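Reading the last exchange back, the algebra is consistent with a setup where the side opposite θ is fixed at 20 and c is the hypotenuse (an assumption on my part, since the thread never states the full problem):

```latex
\[
\sin\theta = \frac{20}{c}
\;\Longrightarrow\;
\cos\theta \, d\theta = -\frac{20}{c^{2}}\, dc
\;\Longrightarrow\;
dc = -\frac{c^{2}\cos\theta}{20}\, d\theta .
\]
```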
{}
# Lesson 4. Crop Spatial Raster Data With a Shapefile in Python

## Learning Objectives

• Crop a raster dataset in Python using a vector extent object derived from a shapefile.
• Open a shapefile in Python.

In previous lessons, you reclassified a raster in Python; however, the edges of your raster dataset were uneven. In this lesson, you will learn how to crop a raster - to create a new raster object / file that you can share with colleagues and / or open in other tools such as a Desktop GIS tool like QGIS.

Cropping (sometimes also referred to as clipping) is when you subset or make a dataset smaller by removing all data outside of the crop area or spatial extent. In this case you have a large raster - but let's pretend that you only need to work with a smaller subset of the raster. You can use the crop_image function to remove all of the data outside of your study area. This is useful as it:

1. Makes the data smaller and
2. Makes processing and plotting faster

In general, when you can, it's often a good idea to crop your raster data!

To begin, let's load the libraries that you will need in this lesson.

import os
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from shapely.geometry import mapping
import geopandas as gpd
import rasterio as rio
from rasterio.plot import plotting_extent
import earthpy as et
import earthpy.spatial as es
import earthpy.plot as ep

# Prettier plotting with seaborn
sns.set(font_scale=1.5)

# Get data and set working directory
os.chdir(os.path.join(et.io.HOME, 'earth-analytics'))

## Open Raster and Vector Layers

In the previous lessons, you worked with a raster layer that looked like the one below. Notice that the data have an uneven edge on the left hand side. Let's pretend this edge is outside of your study area and you'd like to remove it or clip it off using your study area extent. You can do this using the crop_image() function in earthpy.spatial.

## Open Vector Layer

To begin your clip, open up a vector layer that contains the crop extent that you want to use to crop your data. To open a shapefile you use the gpd.read_file() function from geopandas. You will learn more about vector data in Python in a few weeks.

aoi = os.path.join("data", "colorado-flood", "spatial", "boulder-leehill-rd", "clip-extent.shp")

# Open crop extent (your study area extent boundary)
crop_extent = gpd.read_file(aoi)

For the comparison below, also open the lidar canopy height model (CHM) raster that you created in the previous lesson.

lidar_chm_path = os.path.join("data", "colorado-flood", "spatial", "boulder-leehill-rd", "outputs", "lidar_chm.tif")
lidar_chm = rio.open(lidar_chm_path)
lidar_chm_im = lidar_chm.read(1, masked=True)

Next, view the coordinate reference system (CRS) of both of your datasets. Remember that in order to perform any analysis with these two datasets together, they will need to be in the same CRS.

print('crop extent crs: ', crop_extent.crs)
print('lidar crs: ', lidar_chm.crs)

crop extent crs: epsg:32613
lidar crs: EPSG:32613

# Plot the crop boundary layer
# Note this is just an example so you can see what it looks like
# You don't need to plot this layer in your homework!
fig, ax = plt.subplots(figsize=(6, 6))
crop_extent.plot(ax=ax)
ax.set_title("Shapefile Crop Extent", fontsize=16)

Text(0.5, 1.0, 'Shapefile Crop Extent')

Now that you have imported the shapefile, you can use the crop_image function from earthpy.spatial to crop the raster data using the vector shapefile.

fig, ax = plt.subplots(figsize=(10, 8))
ep.plot_bands(lidar_chm_im, cmap='terrain',
              extent=plotting_extent(lidar_chm),
              ax=ax,
              title="Raster Layer with Shapefile Overlayed",
              cbar=False)
crop_extent.plot(ax=ax, alpha=.8)
ax.set_axis_off()

## Crop Data Using the crop_image Function

If you want to crop the data, you can use the crop_image function in earthpy.spatial.
When you crop the data, you can then export it and share it with colleagues, or use it in another analysis. IMPORTANT: You do not need to read the data in before cropping! Cropping the data can be your first step. To perform the crop you:

1. Create a connection to the raster dataset that you wish to crop
2. Open your shapefile as a geopandas object. This is what EarthPy needs to crop the data to the extent of your vector shapefile.
3. Crop the data using the crop_image() function.

Without EarthPy, you would have to perform this with a GeoJSON object. GeoJSON is a format that is worth becoming familiar with. It's a text, structured format that is used in many online applications. We will discuss it in more detail later in the class. For now, have a look at the output below.

lidar_chm_path = os.path.join("data", "colorado-flood", "spatial",
                              "boulder-leehill-rd", "outputs", "lidar_chm.tif")

with rio.open(lidar_chm_path) as lidar_chm:
    lidar_chm_crop, lidar_chm_crop_meta = es.crop_image(lidar_chm, crop_extent)
    # Grab the metadata for the export step further below
    lidar_chm_meta = lidar_chm.profile

lidar_chm_crop_affine = lidar_chm_crop_meta["transform"]

# Create spatial plotting extent for the cropped layer
lidar_chm_extent = plotting_extent(lidar_chm_crop[0], lidar_chm_crop_affine)

Finally, plot the cropped data. Does it look correct?

# Plot your data
ep.plot_bands(lidar_chm_crop[0],
              extent=lidar_chm_extent,
              cmap='Greys',
              title="Cropped Raster Dataset",
              scale=False)
plt.show()

## OPTIONAL – Export Newly Cropped Raster

Once you have cropped your data, you may want to export it. In the subtract rasters lesson you exported a raster that had the same shape and transformation information as the parent rasters. However, in this case you have cropped your data. You will have to update several things to ensure your data export properly:

1. The width and height of the raster: You can get this information from the shape of the cropped numpy array, and
2. The transformation information of the affine object. The crop_image() function provides this inside the metadata object it returns!
3. Finally, you may want to update the nodata value. In this case you don't have any nodata values in your raster. However, you may have them in a future raster!

# Update with the new cropped affine info and the new width and height
lidar_chm_meta.update({'transform': lidar_chm_crop_affine,
                       'height': lidar_chm_crop.shape[1],
                       'width': lidar_chm_crop.shape[2],
                       'nodata': -999.99})
lidar_chm_meta

{'driver': 'GTiff', 'dtype': 'float64', 'nodata': -999.99, 'width': 3490, 'height': 2000, 'count': 1, 'crs': CRS.from_epsg(32613), 'transform': Affine(1.0, 0.0, 472510.0, 0.0, -1.0, 4436000.0), 'tiled': False, 'compress': 'lzw', 'interleave': 'band'}

Once you have updated the metadata, you can write out your new raster. (A quick way to verify the export is shown at the end of this lesson.)

# Write data
path_out = os.path.join("data", "colorado-flood", "spatial", "boulder-leehill-rd",
                        "outputs", "lidar_chm_cropped.tif")

with rio.open(path_out, 'w', **lidar_chm_meta) as ff:
    ff.write(lidar_chm_crop[0], 1)

## Optional Challenge: Crop Change Over Time Layers

In the previous lesson, you created 2 plots:

1. A classified raster map that shows positive and negative change in the canopy height model before and after the flood. To do this you will need to calculate the difference between two canopy height models.
2. A classified raster map that shows positive and negative change in terrain extracted from the pre and post flood Digital Terrain Models before and after the flood.
Create the same two plots except this time CROP each of the rasters that you plotted using the shapefile: data/week-03/boulder-leehill-rd/crop_extent.shp

For each plot, be sure to:

• Add a legend that clearly shows what each color in your classified raster represents.
• Use colors that make positive and negative change easy to distinguish.
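Finally, as mentioned in the export section above, one way to sanity-check the cropped GeoTIFF you wrote out is to re-open it and confirm its shape, bounds, and CRS; this short snippet assumes the path_out variable defined earlier.

```python
# Re-open the exported raster and confirm the crop worked as expected
with rio.open(path_out) as src:
    print("shape:", src.shape)    # (rows, cols) of the cropped raster
    print("bounds:", src.bounds)  # should match the crop extent
    print("crs:", src.crs)        # should still be EPSG:32613
```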
{}
# Where does the energy from a nuclear bomb come from?

I'll break this down to two related questions:

With a fission bomb, Uranium or Plutonium atoms are split by a high energy neutron, thus releasing energy (and more neutrons). Where does the energy come from? Most books I've ever come across simply state E = mc² and leave it at that. Is it that matter (say a proton or neutron) is actually converted into energy? That would obviously affect the elements produced by the fission, since there would be less matter at the end. It would also indicate exactly how much energy could be released per atom, given the fixed weight of a subatomic particle. I remember hearing once that the energy released is actually the binding energy that previously held the nucleus together.

With a fusion bomb, two hydrogen isotopes are pushed together to form helium and release energy - same question: where does this energy come from? Is matter actually converted or are we talking about something else here?

Sorry if this is rather basic - I haven't done physics since high school.

-

## 4 Answers

Energy of a fission nuclear bomb comes from the gravitational energy of the stars.

Protons and neutrons can coalesce into different kinds of bound states. We call these states atomic nuclei. The ones with the same number of protons are called isotopes; the ones with different numbers are nuclei of atoms of different kinds. There are many possible different stable states (that is, stable nuclei), with different numbers of nucleons and different binding energies. However, there are also some general tendencies for the specific binding energy per nucleon (proton or neutron) in the nuclei. States of simple nuclei (like hydrogen or helium) have the lowest specific nucleon binding energy among all elements, but the higher the atomic number, the higher the specific binding energy gets. However, for the very heavy nuclei the specific binding energy starts to drop again. The well-known curve of binding energy per nucleon sums this up: it rises steeply through the light elements, peaks near iron, and declines slowly toward the heaviest nuclei.

It means that when nucleons are in medium-atomic-number nuclei, they have the highest possible binding energy. When they sit in very light elements (hydrogen) or very heavy ones (uranium), they have weaker binding. Thus, one can say that at low "every-day" temperatures, the very heavy elements (like the very light ones) are quasistable in a sense.

A fission bomb effectively "lets" the very heavy atomic nuclei (plutonium or uranium) resettle into nuclei with lower numbers of nucleons, that is, with higher binding energies. The released binding-energy difference produces the notorious effect. In terms of the curve described above, it corresponds to nucleons moving from the right end closer to the peak.

Yet this is not the only way to let nucleons switch to a higher binding energy state than the initial one. We can "resettle" very light elements (like hydrogen) and let nucleons move to the peak from the left. That would be fusion.

Heavy nuclei emerge in the stars. Here the gravitational energy is high enough to let the nucleons "unite" into whatever nuclei they like. Stars usually are formed from the very light elements, and the nucleons inside, again, tend to get to the states with lower energies and form more "medium-number" nuclei. The energy difference powers stars, and we see the light emission, high temperatures, and all other fun effects. However, sometimes the temperatures in the stars are so high that nucleons form the very heavy nuclei from the medium-number nuclei, even though there is no immediate "energy" benefit.
These heavy elements then disseminate everywhere with the death of the star. This stored star energy can then be released in the fission bomb. - I suppose that this is technically correct, but it is very misleading. That first sentence is going to give people some very strange ideas. –  Colin K Mar 2 '11 at 21:19 Care to elaborate? What strange ideas? –  Dmitry Borzov Mar 3 '11 at 1:23 The first sentence does sound odd at first glance, but it is absolutely correct and sums up the situation across cosmological time-scales very neatly. Gravitational collapse leads to star formation and the formation of heavy elements. The energy liberated via their decay is thus precisely the gravitational potential energy of the atoms of the primordial gas before it underwent gravitational collapse. But, yes, it can trip you up on a first read. –  user346 Mar 3 '11 at 4:00 The first sentence is great, except to prejudiced people –  HDE Mar 3 '11 at 15:17 Atomic nuclei > Fe are simply very efficient rechargeable batteries –  Martin Beckett Mar 18 '11 at 18:39 Yes, matter of mass $m$ is directly converted to energy $E=mc^2$ which literally means that the weight of the remnants of the atomic bomb is smaller than the weight of the atomic bomb at the beginning. Fission reduces the mass by 0.1 percent or so; for fusion, you may get closer to 1 percent of mass difference. In principle, you may release 100 percent of the $E=mc^2$ energy stored in the mass $m$ - for example by annihilation of matter with antimatter; or by creating a black hole and waiting until it evaporates into pure Hawking radiation - which is pure energy. And yes, the differences in the energies - and the corresponding masses because $E=mc^2$ is true universally (in the rest frame), there is really just one independent quantity - may be attributed to the (changing) interaction energy between the nucleons (or quarks). Alternatively, you may imagine that the extra energy was stored in extra gluons and quark-antiquark pairs inside the nuclei. Those two descriptions are actually equivalent although this fact is not self-evident. - So, in the case of splitting a plutonium atom, what gets lost? Presumably electrons are not touched. So, do we lose protons or neutrons? That would determine the elements produced. I assume the resultant atoms would be in plasma form given the temperatures involved. –  dave Mar 2 '11 at 19:35 And in the case of fusion: we have tritium + deuterium fusing to make helium + a neutron + energy. Doesn't that leave us with still 3 neutrons and 2 protons at the end, ie: the same at the start? –  dave Mar 2 '11 at 19:37 @dave: you are confusing particles and mass. No particles need to be destroyed (although they can be) for the mass of the reaction products to be lower. The binding energy you mentioned before is the source for the released energy, and it also represents the mass in the fission fuel which is lost. E=mc^2 is true in a very literal sense. –  Colin K Mar 2 '11 at 19:42 The mass of a nucleus is not equal to the sum of the masses of the protons and neutrons that make it up. So during a radioactive decay, the mass can change even if no particles are created or destroyed. Strange but true. Moreover, the same thing is true in principle for chemical reactions: the mass of an H$_2$O molecule is less than the masses of two H's and an O separately. But in chemical reactions, unlike nuclear reactions, that mass difference is too small to measure. 
–  Ted Bunn Mar 2 '11 at 20:15 @Dave - The Making of the Atomic Bomb, a book by Richard Rhodes, is an excellent historical and popular technical account of physics in the first half of the 20th century and culminates with the Manhattan Project and the fission bomb. The sequel, Dark Sun, is about the fusion bomb, Teller, Ulam and the Soviets. –  Gordon Mar 2 '11 at 20:55 I'm not sure what you're looking for here, but I'll try a different (and simpler) approach than the other answers: In a "traditional" chemical bomb, the energy comes from the electromagnetic force: you're breaking bonds between atoms and making more stable (lower-energy) bonds. In a fission bomb, the energy comes from the strong nuclear force: you're breaking bonds between nucleons, producing final products that are in a lower-energy state*. In a fusion bomb, the energy is also coming from the strong nuclear force: you're making bonds between nucleons, producing final products that are in a lower-energy state. *To go up one level of complexity (and accuracy), Janne808 is correct: in fission bombs, the electromagnetic force is a significant contributor. In fact, in the absence of coulomb repulsion between protons, I can't think of any fission reactions that would be exothermic. - Coulomb repulsion between like charges is the biggest contributor to the amount of energy released. - This is a case where a downvote should really include a comment. The answer is not obviously flawed, it's not a troll, the grammar is fine, etc. So why the downvote? Is it simply incorrect? What's the deal? –  Colin K Mar 2 '11 at 21:18 I don't think this claim is defensible: the nuclear binding energy clearly exceeds the Coulomb repulsion or the original nucleus would never have existed in the first place. –  dmckee Mar 2 '11 at 22:40 Clearly, but the attractive Yukawa potential falls off exponentially and the repulsive Coulomb potential dominates by far at short distances. Seems to make sense, although the short scale issues and many-body physics isn't 100% clear yet. –  Janne808 Mar 2 '11 at 23:54
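To make the mass-energy bookkeeping above concrete, here is a rough back-of-the-envelope check (standard textbook numbers, not taken from the thread): a single fission of a $^{235}$U nucleus plus the absorbed neutron releases roughly 200 MeV, so the mass defect is

$$\Delta m \approx \frac{E}{c^2} = \frac{200\ \mathrm{MeV}}{931.5\ \mathrm{MeV}/u} \approx 0.21\,u, \qquad \frac{0.21\,u}{236\,u} \approx 0.09\,\%,$$

which is consistent with the "0.1 percent or so" figure quoted above.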
{}
However, most of the time there are limits placed on such things by the actual administrators of the notebook. See, for instance, the Sage Server instructions on the wiki, where several ulimit options are used. The one of most interest to you would be -t 36000, which means that after 10 hours of CPU time the server will kick you off under that model.
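For comparison only, a rough Python equivalent of what `ulimit -t 36000` enforces (the actual Sage servers set this from the shell, not from Python):

```python
# Cap this process at 36000 CPU-seconds (10 hours), mirroring `ulimit -t 36000`.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_CPU)
resource.setrlimit(resource.RLIMIT_CPU, (36000, hard))  # SIGXCPU after 10 h of CPU time
```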
{}
# Write Each of the Following Polynomials in the Standard Form. Also, Write Their Degree: $x^2 + 3 + 6x + 5x^4$ - Mathematics

Write each of the following polynomials in the standard form. Also, write their degree. $x^2 + 3 + 6x + 5x^4$

#### Solution

$\text{Standard form of the given polynomial can be expressed as:}$
$(5x^4 + x^2 + 6x + 3) \text{ or } (3 + 6x + x^2 + 5x^4)$
$\text{The degree of the polynomial is 4.}$

#### APPEARS IN

RD Sharma Class 8 Maths Chapter 8 Division of Algebraic Expressions Exercise 8.1 | Q 3.1 | Page 2
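As a quick cross-check (not part of the textbook solution), sympy's Poly prints terms in descending powers, which is exactly the standard form:

```python
# Verify the standard form and degree with sympy
import sympy as sp

x = sp.symbols("x")
p = sp.Poly(x**2 + 3 + 6*x + 5*x**4, x)
print(p.as_expr())  # 5*x**4 + x**2 + 6*x + 3
print(p.degree())   # 4
```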
{}
# A very special light The word LASER is the acronym for Light Amplification by Stimulated Emission of Radiation. A laser is a very coherent light source. The high degree of spatial coherence allows the beam to be highly collimated and also very well focusable. Furthermore, the spectral (longitudinal) coherence is very high: all photons have very similar energies and the light is therefore monochromatic. These properties make laser light very suitable for forming the standing light wave at which the molecules are diffracted. Laser light can be highly polarized. In the KDTLI we use polarization optics to adjust the light power reaching the molecules and to separate the departing and returning laser beams. Extra: Polarisation optics Our laser is linearly polarized – the electrical light field oscillates in one plane. This property can be used to guide and modulate the light field. Polarizing beam splitters direct light of a predefined polarization (e.g. horizontal orientation of the electric field) out of the beam and allow the perpendicularly polarized light (vertical orientation of the electric field) to pass. By rotating the polarization axis of the incoming laser beam we can use the polarizing beam splitter as a precision attenuator. The intensity can be continuously divided between the two exits. To rotate the polarization we use half-wave plates, in which the phase between two orthogonal components of the light field is shifted by $$\pi$$. We could also change the laser power with the current, but this usually also affects other beam parameters (profile, wavelength, …). Therefore it is advantageous to use polarization optics. A quarter-wave plate transforms linearly polarized light to circularly polarized light and vice versa. If a beam passes this element twice (forwards, and backwards after reflection at a mirror), its polarization is rotated by 90 degrees. We use that in the KDTLI to protect the laser head from damage caused by back-reflected light. Many lasers have a cross-sectional intensity profile that can be described by a Gaussian function. In the interferometer we need a beam focussed to only $$20 \times 1000 \, \mathrm{\mu m}^2$$. Accordingly, we use a cylindrical lens to obtain a homogeneous field distribution over the whole molecular beam. Extra: Gaussian beams A Gaussian intensity profile has this form: $$I=I_0 \cdot \exp(- 2r^2/w_0^2)$$, where $$r$$ is the distance from the optical axis and the waist $$w_0$$ is the distance at which the intensity has dropped to $$1/e^2$$. This particular beam profile has some optical properties we want to memorize: 1. Its wave nature (diffraction) makes it impossible to focus light to an arbitrarily small point. The smallest focus of an extended light wave (wavelength $$\lambda$$, beam diameter $$D$$) behind a lens with focal length $$f$$ is: $$w_1= \frac{f \lambda}{\pi D}$$. 2. A laser beam cannot be kept small over arbitrarily long distances. Immediately after the waist the beam diverges: $$w (z)= w_0 \sqrt{1+(z/z_R)^2}$$,  with the Rayleigh length  $$z_R = \pi w_0^2/\lambda$$ ## Experimental challenge: Light grating height Go to the laboratory and follow the instructions. Once you have accomplished your task, continue here.
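For orientation, a small numerical sketch of the two formulas above; the wavelength, beam diameter and focal length are made-up example values, not the actual KDTLI parameters:

```python
# Smallest focus w1 = f*lambda/(pi*D) and Rayleigh length z_R = pi*w1^2/lambda
import math

wavelength = 532e-9   # m (assumed example value)
D = 2e-3              # m, collimated beam diameter before the lens
f = 50e-3             # m, focal length of the lens

w1 = f * wavelength / (math.pi * D)   # smallest achievable waist behind the lens
z_R = math.pi * w1**2 / wavelength    # distance over which the beam stays roughly this small

print(f"focused waist w1  = {w1 * 1e6:.1f} um")   # ~4.2 um
print(f"Rayleigh length   = {z_R * 1e6:.0f} um")  # ~106 um
```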
{}
# BoxLeastSquaresPeriodogram

class lightkurve.periodogram.BoxLeastSquaresPeriodogram(*args, **kwargs)

Subclass of Periodogram representing a power spectrum generated using the Box Least Squares (BLS) method.

Attributes Summary

• depth_at_max_power: Returns the depth corresponding to the highest peak in the periodogram.
• duration_at_max_power: Returns the duration corresponding to the highest peak in the periodogram.
• frequency_at_max_power: Returns the frequency corresponding to the highest peak in the periodogram.
• max_power: Returns the power of the highest peak in the periodogram.
• period: Returns the array of periods, i.e. 1/frequency.
• period_at_max_power: Returns the period corresponding to the highest peak in the periodogram.
• transit_time_at_max_power: Returns the transit time corresponding to the highest peak in the periodogram.

Methods Summary

• bin([binsize, method]): Bins the power spectrum.
• compute_stats([period, duration, transit_time]): Computes commonly used vetting statistics for a transit model.
• copy(): Returns a copy of the Periodogram object.
• flatten(**kwargs): Estimates the Signal-To-Noise (SNR) spectrum by dividing out an estimate of the noise background.
• from_lightcurve(lc, **kwargs): Creates a Periodogram from a LightCurve using the Box Least Squares (BLS) method.
• get_transit_mask([period, duration, transit_time]): Computes the transit mask using the BLS, returns a lightkurve.LightCurve.
• get_transit_model([period, duration, transit_time]): Computes the transit model using the BLS, returns a lightkurve.LightCurve.
• plot(**kwargs): Plot the BoxLeastSquaresPeriodogram spectrum using matplotlib's plot method.
• show_properties(): Prints a summary of the non-callable attributes of the Periodogram object.
• smooth(**kwargs): Smooths the power spectrum using the 'boxkernel' or 'logmedian' method.
• to_table(): Exports the Periodogram as an Astropy Table.

Attributes Documentation

depth_at_max_power
Returns the depth corresponding to the highest peak in the periodogram.

duration_at_max_power
Returns the duration corresponding to the highest peak in the periodogram.

frequency_at_max_power
Returns the frequency corresponding to the highest peak in the periodogram.

max_power
Returns the power of the highest peak in the periodogram.

period
Returns the array of periods, i.e. 1/frequency.

period_at_max_power
Returns the period corresponding to the highest peak in the periodogram.

transit_time_at_max_power
Returns the transit time corresponding to the highest peak in the periodogram.

Methods Documentation

bin(binsize=10, method='mean')
Bins the power spectrum.
Parameters:
binsize : int
The factor by which to bin the power spectrum, in the sense that the power spectrum will be smoothed by taking the mean in bins of size N / binsize, where N is the length of the original frequency array. Defaults to 10.
method : str, one of 'mean' or 'median'
Method to use for binning. Default is 'mean'.
Returns:
binned_periodogram : a Periodogram object
Returns a new Periodogram object which has been binned.

compute_stats(period=None, duration=None, transit_time=None)
Computes commonly used vetting statistics for a transit model. See astropy.stats.bls docs for further details.
Parameters:
period : float or Quantity
Period of the transits. Default is period_at_max_power.
duration : float or Quantity
Duration of the transits. Default is duration_at_max_power.
transit_time : float or Quantity
Transit midpoint of the transits. Default is transit_time_at_max_power.
Returns:
stats : dict
Dictionary of vetting statistics.

copy()
Returns a copy of the Periodogram object.
This method uses the copy.deepcopy function to ensure that all objects stored within the Periodogram are copied.
Returns:
pg_copy : Periodogram
A new Periodogram object which is a copy of the original.

flatten(**kwargs)
Estimates the Signal-To-Noise (SNR) spectrum by dividing out an estimate of the noise background.
This method divides the power spectrum by a background estimated using a moving filter in log10 space by default. For details on the method and filter_width parameters, see Periodogram.smooth().
Dividing the power through by the noise background produces a spectrum with no units of power. Since the signal is divided through by a measure of the noise, we refer to this as a Signal-To-Noise spectrum.
Parameters:
method : str, one of 'boxkernel' or 'logmedian'
Background estimation method passed on to Periodogram.smooth(). Defaults to 'logmedian'.
filter_width : float
If method = 'boxkernel', this is the width of the smoothing filter in units of frequency. If method = 'logmedian', this is the width of the smoothing filter in log10(frequency) space.
return_trend : bool
If True, then the background estimate, alongside the SNR spectrum, will be returned.
Returns:
snr_spectrum : Periodogram object
Returns a periodogram object where the power is an estimate of the signal-to-noise of the spectrum, created by dividing the powers with a simple estimate of the noise background using a smoothing filter.
bkg : Periodogram object
The estimated power spectrum of the background noise. This is only returned if return_trend = True.

static from_lightcurve(lc, **kwargs)
Creates a Periodogram from a LightCurve using the Box Least Squares (BLS) method.

get_transit_mask(period=None, duration=None, transit_time=None)
Computes the transit mask using the BLS, returns a lightkurve.LightCurve that is True where there are no transits.
Parameters:
period : float or Quantity
Period of the transits. Default is period_at_max_power.
duration : float or Quantity
Duration of the transits. Default is duration_at_max_power.
transit_time : float or Quantity
Transit midpoint of the transits. Default is transit_time_at_max_power.
Returns:
mask : np.array of Bool
Mask that removes transits. Mask is True where there are no transits.

get_transit_model(period=None, duration=None, transit_time=None)
Computes the transit model using the BLS, returns a lightkurve.LightCurve. See astropy.stats.bls docs for further details.
Parameters:
period : float or Quantity
Period of the transits. Default is period_at_max_power.
duration : float or Quantity
Duration of the transits. Default is duration_at_max_power.
transit_time : float or Quantity
Transit midpoint of the transits. Default is transit_time_at_max_power.
Returns:
model : lightkurve.LightCurve
Model of transit.

plot(**kwargs)
Plot the BoxLeastSquaresPeriodogram spectrum using matplotlib's plot method. See Periodogram.plot for details on the accepted arguments.
Parameters:
kwargs : dict
Dictionary of arguments to be passed to Periodogram.plot.
Returns:
ax : matplotlib.axes._subplots.AxesSubplot
The matplotlib axes object.

show_properties()
Prints a summary of the non-callable attributes of the Periodogram object. Prints in order of type (ints, strings, lists, arrays and others). Prints in alphabetical order.

smooth(**kwargs)
Smooths the power spectrum using the 'boxkernel' or 'logmedian' method.
If method is set to 'boxkernel', this method will smooth the power spectrum by convolving with a numpy Box1DKernel with a width of filter_width, where filter width is in units of frequency.
This is best for filtering out noise while maintaining seismic mode peaks. This method requires the Periodogram to have an evenly spaced grid of frequencies. A ValueError exception will be raised if this is not the case.
If method is set to 'logmedian', it smooths the power spectrum using a moving median which moves across the power spectrum in steps of log10(x0) + 0.5 * filter_width, where filter width is in log10(frequency) space. This is best for estimating the noise background, as it filters over the seismic peaks.
Periodograms that are unsmoothed have multiplicative noise that is distributed as chi squared with 2 degrees of freedom. This noise distribution has a well defined mean and median, but the two are not equivalent. The mean of a chi squared 2 dof distribution is 2, but the median is 2(8/9)**3 (see https://en.wikipedia.org/wiki/Chi-squared_distribution). In order to maintain consistency between 'boxkernel' and 'logmedian', a correction factor of (8/9)**3 is applied to the median values (i.e., the median is divided by this factor).
In addition to consistency with the 'boxkernel' method, the correction of the median values is useful when applying the periodogram flatten method. The flatten method divides the periodogram by the smoothed periodogram using the 'logmedian' method. By applying the correction factor we follow the asteroseismic convention that the signal-to-noise power has a mean value of unity. (Note the signal-to-noise power is really the signal plus noise divided by the noise and hence should be unity in the absence of any signal.)
Parameters:
method : str, one of 'boxkernel' or 'logmedian'
The smoothing method to use. Defaults to 'boxkernel'.
filter_width : float
If method = 'boxkernel', this is the width of the smoothing filter in units of frequency. If method = 'logmedian', this is the width of the smoothing filter in log10(frequency) space.
Returns:
smoothed_pg : Periodogram object
Returns a new Periodogram object in which the power spectrum has been smoothed.

to_table()
Exports the Periodogram as an Astropy Table.
Returns:
table : An Astropy Table with columns 'frequency', 'period', and 'power'.
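A hedged usage sketch tying the methods above together (API names as in recent lightkurve releases; the target, period grid and duration are arbitrary examples, so check them against your installed version):

```python
# Build a BLS periodogram from a light curve and pull out the best-fit transit
import lightkurve as lk
import numpy as np

lc = lk.search_lightcurve("Kepler-10", mission="Kepler", quarter=3).download()
lc = lc.remove_nans().flatten()

periods = np.linspace(0.5, 10, 10000)                        # trial periods (days)
bls = lc.to_periodogram(method="bls", period=periods, duration=0.1)

print(bls.period_at_max_power, bls.depth_at_max_power)
model = bls.get_transit_model()   # LightCurve of the best-fit box model
mask = bls.get_transit_mask()     # True where there are no transits (see above)
stats = bls.compute_stats()       # vetting statistics at the peak period
```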
{}
Research Article. Volume 3, ISSUE 6, e00329, June 2017

Mechanical properties in crumple-formed paper derived materials subjected to compression

Open Access. Published: June 16, 2017

Abstract

The crumpling of precursor materials to form dense three dimensional geometries offers an attractive route towards the utilisation of minor-value waste materials. Crumple-forming results in a mesostructured system in which the mechanical properties of the material are governed by complex cross-scale deformation mechanisms. Here we investigate the physical and mechanical properties of dense compacted structures fabricated by the confined uniaxial compression of a cellulose tissue to yield crumpled mesostructuring. A total of 25 specimens of various densities were tested under compression. Crumple-formed specimens exhibited densities in the range 0.8–1.3 g cm−3 and showed high strength-to-weight characteristics, achieving ultimate compressive strength values of up to 200 MPa under both quasi-static and high strain rate loading conditions, and deformation energy that compares well to engineering materials of similar density. The materials fabricated in this work and their mechanical attributes demonstrate the potential of crumple-forming approaches in the fabrication of novel energy-absorbing materials from low-cost precursors such as recycled paper. Stiffness and toughness of the materials exhibit density dependence, suggesting this forming technique further allows controllable impact energy dissipation rates in dynamic applications.

1. Introduction

In recent decades mesostructural design has emerged as an important approach in materials science. With the term mesostructure we generally refer to a distinguishable structure within a bulk material that may exist at length scales larger than the material's identifiable microstructure. For a given bulk material geometry and composition, by utilizing an interplay between mesostructure and microstructure one is able to achieve improved material properties that would otherwise be difficult to obtain. Mesostructured systems have most commonly been discussed in terms of tailored three dimensional structures in alloys and oxide systems [Corma et al.; Grosso et al.; Ogawa and Masukawa]; however, such systems are by no means the only type of system in which the control of structure at an intermediate scale can be used to tailor material performance. The fabrication of bulk material from flat or sheet-like precursors is frequently exploited with a view to imparting enhanced mechanical properties. Numerous artificial and natural materials are formed by the parallel layering of component materials to form stratified structures. Examples from nature include nacre, leaf-sheaths and bone structures [Barthelat et al.; Ghosh; Subramaniam et al.]. The concept of stratified or layered structures finds widespread application in engineering materials such as toughened glass, fibre reinforced composites and armour [Subramaniam et al.; Grujicic et al.; Guo and Noda; Flores-Johnson et al.]. In such materials, mechanisms acting at layer interfaces serve to enhance the mechanical properties of bulk systems through the distribution of stress, disruption of crack propagation and accommodation of strain. 
Further to layering, the disordered crumpling of a sheet-like material presents an alternative approach to the synthesis of a mesostructured bulk material from a 2D precursor. Recent years have seen growing interest in the morphological and mechanical aspects of structures formed by crumpling processes. Such systems exhibit structural complexity at intermediate length scales arising through self-avoidance and non-linear localised deformations. The mechanical attributes that are consequently imparted by such mesostructuring have been the subject of numerous emerging studies [Aharoni and Sharon; Cambou and Menon; Tallinen et al.; Vliegenthart and Gompper]. The simple process of crumpling results in the formation of a highly complex network of folds and facets, and its topology cannot be described in a straightforward manner [Rohmer et al.]. Sheet-like materials in crumpled form are considered to be locally flat nearly everywhere, with the exception of a network of creases or ridges [Miller]. Through various mechanisms, these creases and surfaces deform under load and influence each other, potentially giving rise to high strength in the bulk. If we assume the thickness of a precursor sheet to be negligible relative to its other two dimensions, then a significant fraction of voids is expected in a crumpled ball even after a high level of volumetric compression [Bevilacqua]. Crumple-formed structures, in terms of ridge networks, scaling behaviour and physical properties, have been investigated to date using paper [Blair and Kudrolli; Andresen et al.], aluminium foil [Cambou and Menon; Lin et al.; Bouaziz et al.] and graphene [Cranford and Buehler] precursors. In such structures, morphological features exist across a range of length scales and are reported to exhibit self-similar scaling, or fractality. For a given confining force, the relationship between the volume of a crumpled structure and the linear size of the precursor sheet is described by the system's fractal dimension Df, which describes the size and density of vertices in the structure [Gomes; Balankin and Huerta; Balankin et al.] and has been shown to depend on the plasticity of the precursor material [Tallinen et al.]. Most contemporary studies into morphologies arising from crumple forming have dealt with low density structures, such as those exhibited by crumpled paper balls [Cambou and Menon; Miller; Gomes]. In such systems an unusually high resistance to compression is found for systems of surprisingly low solids fraction; that is to say, a crumple-formed material consisting primarily of empty space is found to exhibit high compressive strength not typically encountered for materials of such density [Cambou and Menon]. However, crumple-formed materials of higher density, that is to say density comparable to that of typical engineering materials, are seldom investigated. An important recent study by Bai et al. utilized a high pressure system to synthesize dense crumpled materials. 
These results suggest that crumple formed materials should exhibit good mechanical performance under conditions of high impact loading; however, experimental studies of such systems are not reported. Crumple-forming represents an unexplored avenue towards the fabrication of low-cost engineering materials with good weight-specific mechanical attributes from diverse precursors including waste material. This fabrication methodology offers new potential approaches towards materials recycling and can facilitate the more efficient utilisation of waste streams, in particular waste paper and cardboard, allowing greater value extraction and minimisation of landfill disposal. With this motivation, in the present work we report the first investigation of the mechanical properties of crumple formed materials. Here we examine the high pressure crumple-forming of bulk specimens from tissue paper precursor materials and study the compressive mechanical attributes achievable in these materials under conditions of quasi-static and impact loading.

2. Materials and methods

2.1 Specimen fabrication

It was found that crumple forming of rigid materials with densities comparable to engineering polymers (∼1.0 ± 0.5 g cm−3) can be achieved by hydraulic uniaxial confinement of randomly crumpled tissue paper, yielding specimens in a variety of sizes and geometries as illustrated in Fig. 1. Cylindrical crumple-formed specimens used in the present work for quasi-static and high strain-rate compression characterization were manufactured from dust free tissue paper (Kimtech, Kimberly-Clark, USA), which consists of 100% virgin fibre cellulose with an average areal density of 0.02 kg m−2. Materials were fabricated at three different compression levels, with four specimens produced at each condition. For each specimen, a sheet measuring 150 × 300 mm was manually crumpled and packed into a steel die with an inner diameter of 12 mm. This dimension was chosen as it is compatible with both testing methodologies employed. Subsequently, uniaxial compression of the die was applied and maintained for durations of 20 seconds by means of a loading frame with applied pressure controlled in the regime 27–220 MPa, to yield materials of varied density in the regime 0.86–1.34 g cm−3 as outlined in Table 1. A schematic illustration of the fabrication process is given in Fig. 2. The mass and dimensions of specimens were recorded subsequent to fabrication and these were then maintained in airtight containers prior to mechanical analysis in order to minimise material degradation as the result of humidity and/or temperature fluctuations. No measurable changes in specimen dimensions were observed between removal from the fabrication apparatus and mechanical testing, which was conducted in a matter of days subsequent to fabrication.

Table 1: Fabrication parameters for crumpled paper specimens. Four specimens were fabricated for each applied pressure.

Density classification | Specimen density range (g cm−3) | Applied pressure (MPa) | Mean thickness (mm) | Thickness range (mm)
Low | 0.86–0.90 | 27 | 6.35 | 5.82–7.00
Medium | 1.100–1.275 | 133 | 6.00 | 5.80–6.12
High | 1.300–1.340 | 220 | 5.87 | 5.68–6.24

2.2 Mechanical characterisation

In order to understand the behaviour of crumple-formed structures under a range of stress conditions, rate effects must be considered in their characterization. Both quasi-static and high strain rate regimes were therefore implemented to determine the true stress-strain response of the materials; a sketch of the corresponding data reduction is given below.
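A minimal sketch (not code from the paper) of that data reduction: true stress as force over the current contact area, true strain as the logarithm of the height ratio, and the volume-specific deformation energy UT (Section 3.3) as the integral of true stress over true strain. The force history and the volume-conservation assumption for the contact area are placeholders; in the paper the actual area comes from digital image analysis.

```python
# Placeholder data reduction for a compression test; arrays are synthetic.
import numpy as np

h0 = 6.0e-3                                # initial specimen height (m), cf. Table 1
d0 = 12.0e-3                               # initial diameter (m), from the 12 mm die

disp = np.linspace(0.0, 2.0e-3, 200)       # platen displacement (m), synthetic
force = 4.0e4 * (disp / disp.max())**1.5   # hypothetical force history (N)

h = h0 - disp                              # current specimen height
area = (np.pi * d0**2 / 4) * (h0 / h)      # assumed volume-conserving contact area

true_strain = np.log(h0 / h)
true_stress = force / area                 # Pa

# Volume-specific deformation energy U_T = integral of sigma d(epsilon)
U_T = np.trapz(true_stress, true_strain)   # J m^-3
print(f"U_T = {U_T / 1e6:.2f} J cm^-3")    # 1 J cm^-3 = 1e6 J m^-3
```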
2.2.1 Quasi-static testing

Quasi-static compressive testing was carried out using an MTS Criterion 43 loading frame with specimens positioned between two platens, which had been graphite-lubricated in order to minimise specimen barrelling, as shown in Fig. 3. A constant compression rate of 0.5 mm min−1 was maintained for all tests, with vertical displacement monitored via a linear variable differential transducer (LVDT). Testing was undertaken up to a maximum compressive load of 40 kN. The acquired displacement and force data were used to determine values of engineering stress and engineering strain, defined as the nominal acquired values. Digital image analysis was used to capture the longitudinal and transverse deformation of the specimen at intervals of 2.5% strain and synchronize the specimen expansion profile with the recorded compressive load. This enables the extraction of true stress and true strain, defined respectively as the actual force applied over the actual contact area and the natural logarithm of the ratio of deformed size to original specimen height. A typical post-failure specimen is depicted in Fig. 3(c). Four samples corresponding to each density range (low, medium and high) were considered, giving rise to a total of 12 quasi-static tests. Owing to variations in specimen thicknesses, the effective applied strain rate varied from 0.00099 s−1 to 0.00144 s−1 with an average strain rate of 0.00130 s−1.

2.2.2 High strain-rate testing

A split Hopkinson pressure bar (SHPB) was utilized for high strain-rate response characterisation of crumpled specimens. Both incident and transmission bars were composed of Aluminium 7075 alloy, with dimensions of 1500 mm in length and a diameter of 15 mm. A foam pulse shaper was further utilized, and thus, using a 300 mm long Al striker bar launched at a pressure of 7 bar, strain rates in the range 2800 to 4300 s−1 were achieved in the specimen. The incident, transmitted and reflected pulses were acquired through strain gauges placed on the incident and transmitted bars, allowing the determination of stress/strain data through Equations (1)–(3) [Gama et al.]:

$$\sigma_s(t) = \frac{A}{A_s}\,E\,\varepsilon_t(t), \qquad (1)$$

$$\varepsilon_s(t) = \frac{2 C_o}{l_s}\int_0^t \varepsilon_r(t)\,\mathrm{d}t, \qquad (2)$$

$$\dot{\varepsilon}_s(t) = \frac{2 C_o}{l_s}\,\varepsilon_r(t), \qquad (3)$$

where $$\varepsilon_r(t)$$ and $$\varepsilon_t(t)$$ respectively denote the reflected and transmitted strains, Co is the bar wave velocity, ls is the initial specimen length, E is the modulus of elasticity of the bar material and A/As denotes the ratio of the bar area to the specimen area. A total of 13 specimens of the three density classifications (Table 1) were tested by SHPB. In order to determine the true stress-strain behaviour, a high speed camera recording at 1 × 10⁵ frames per second was used to capture specimen deformation as a function of time, which was synchronized with the sample stress history.

3. Results

3.1 Quasi-static compression

Fig. 4 shows the true stress-true strain curves for crumple-formed specimens of different densities under quasi-static conditions. It can be seen that for the four high-density specimens, maximum true stress levels in the range of 150–200 MPa (with an average of 172 MPa) are achieved prior to specimen failure at around 0.35 strain. Fig. 3(c) shows a yielded specimen, exhibiting shear-type failure indicated by inclined fracture lines. Two distinctive regimes are observed in high-density specimens before peak stress: a pseudo-elastic regime up to a strain of ∼0.15 with a stress of around 100 MPa, and a pseudo-plastic regime up to the failure strain. 
This behaviour can be understood if we consider crumple-formed specimens as a cellular material that densifies with compressive strain, leading to greater frictional energy dissipation arising from an increasing contact area. A further contribution to the load resistance is expected also from ridge formation with associated plastic deformation [Tallinen et al.]. For the four specimens of low density material, the behaviour is similar to that of a typical foam in which crushing is the main mechanism of energy absorption; however, the contribution of friction and the creation of ridges results in a hardening-like regime up to ∼0.3 strain. Following this, a densification regime is identified where the stress increases rapidly with further increase of strain. Specimens of lower density do not exhibit distinguishable yield points, rather deforming plastically as voids are closed and overall hardness increases gradually. This results in different energy dissipation levels for a given strain, as shown in Fig. 4(b). The behaviour of these materials is advantageous for impact absorption applications, as the density of crumpled structures and the topology of voids and ridges can be tailored to yield energy dissipation at different targeted rates [Jung et al.].

3.2 High strain-rate compression

True stress-strain data from the high compressive strain-rate characterisation of specimens using a split Hopkinson pressure bar is shown in Fig. 5. For the four high density specimens, it can be seen that true maximum stress levels are in the range of 150–210 MPa (with an average of 191 MPa), which is somewhat higher than the stress levels observed in quasi-static compressive tests; however, failure through specimen rupture occurs at ∼0.2 strain, as shown in the sequence of frames from high speed imaging in Fig. 6. This behaviour indicates that at high strain rates the high density crumpled material becomes stiffer, absorbing energy at a higher rate than in the quasi-static case; however, the material also becomes brittle and fails at lower strain levels. For low density specimens, the behaviour is similar to that observed in quasi-static loading conditions, although still exhibiting a higher energy absorption rate. The strain-rate effect on stiffness may be due to entrapped air, as in foams [Deshpande and Fleck], or due to the strain rate dependence of cellulose [Eichhorn et al.].

3.3 Density effects

Fig. 7 illustrates the significance of density on the mechanical behaviour exhibited by materials in the present work. In general, materials of lower density exhibit more gradual energy dissipation and lower initial stiffness, Ki, given as the stiffness up to 10% strain. Total deformation energy UT, a parameter of importance in materials for impact absorbing applications, is higher in quasi-static conditions, as can be seen in Fig. 7(b). This parameter is defined as the volume-specific energy dissipated during compression, calculated from an integration of the true stress/strain data. In contrast to quasi-static conditions, materials of higher density do not absorb significantly more energy per unit volume under high strain rate conditions representative of shock impact.

4. Discussion

The mechanics of crumpled systems have been examined in earlier studies, suggesting a porous structure comprising fractal networks of ridges and facets may impart high compressive strength through localised deformation. 
However, these investigations were not conducted with a view towards materials processing or recycling applications. The present work constitutes the first time that this approach to mesostructuring has been applied to the fabrication of higher density materials through high pressure confinement, resulting in materials with densities comparable to many engineering materials. Crumpling processes were applied using uniaxial compression in the present work; however, isostatic compression processing techniques (Cold Isostatic Pressing, CIP) would similarly be applicable for the implementation of such crumple-forming. The range of pressures applied here is well within the range attainable by industrial CIP fabrication systems [Koizumi and Nishihara], suggesting similar materials could be produced on a large scale in a diverse range of geometries using waste material feedstock. Naturally, the actual applied pressure required for crumple-fabrication in CIP methods would be contingent on precursor material yield characteristics and the product geometry. Medium and high density crumple formed materials were found here to have ultimate compressive stress values in the approximate regime 150–210 MPa under both quasi-static and high strain rate loading conditions. In these experiments, the true stress was calculated on the basis of the actual true cross-sectional area (Section 2.2) and is considered to be homogeneous. It should be noted that at high strains, specimens exhibited some barrelling (see Figs. 3(b) and 6(b)) owing to friction between specimen and jigs, introducing some heterogeneity in the pseudo-cellular material's stress state and resulting in a complex, non-uniform stress distribution inside specimens subjected to large compressions. The Ashby map of Fig. 8 shows that the compressive stress values, taken from specimens exhibiting distinct failure, compare favourably with reviewed performance data for engineering materials considering parameters of density and compressive strength [Callister and Rethwisch; Jones and Ashby; Bauer et al.; Baumeister et al.; Bauwens-Crowet et al.; Walley et al.; Raghava et al.; Wisnom; Zhang et al.]. While certain polyaramides and carbon fibre reinforced polymers do exhibit superior compressive performance at only slightly higher densities, the high cost of these materials is a clear disadvantage relative to the materials tested in the present work, which can readily be fabricated from tissue paper, and potentially from pulp or recycled materials. The crumple-formed materials fabricated in the present work were prepared using a simple cellulose tissue paper precursor, a material of minor engineering significance. At the microscale this type of material does not exhibit particularly advantageous mechanical properties and exhibits similar mechanical behaviour to wood [Moilanen et al.] or short-chain polymeric compounds [Zhang et al.; Henriksson]. 
Through mesoscale structuring, achieved by confinement at high pressure, these materials showed surprisingly high levels of energy dissipation coupled with high levels of strain to failure in the regime 0.2–0.4, with deformation energies in the range 20–35 J cm−3 and 35–75 J cm−3 for high strain rate and quasi-static conditions respectively under the given testing conditions. Moreover, the rate of impact dissipation, evident in the steepness of the deformation energy curve under conditions of high strain rate loading, demonstrates a dependence on the material's density. This suggests that similar materials can be designed to impart tailorable dissipation rates in impact protection and vibration damping applications where the moderation of deceleration rates is of great importance [Jung et al.]. It is likely the energy dissipation seen in these crumple formed materials occurs through frictional mechanisms distributed across many scales and orientations within the 3-D structure. The mechanical properties found in the present work further suggest that crumple forming is a feasible and versatile approach towards fabricating low-cost engineering materials from waste material feedstock in a broad range of macroscopic geometries. The materials fabricated in the present work further demonstrate the potential to utilize sheet-like precursors in a crumple-forming process as a means of recycling waste materials into rigid materials with enhanced mechanical properties.

5. Conclusions

Crumple formed materials were fabricated by the uniaxial compression of a cellulose tissue precursor. Fabricated specimens demonstrate the efficacy of this forming technique for the production of rigid materials from low-cost sheet-form precursors. The materials' mechanical response was studied in quasi-static and high strain-rate compression conditions. True stress/strain behaviour was found to compare favourably with engineering materials of similar density, suggesting crumple-forming is an attractive route to the fabrication of acceptably strong structural materials from low-cost and/or recycled input materials. High strain rate behaviour indicates that controlling the density of crumple-formed materials, by varying the applied pressure in fabrication, can serve as an effective route to moderate impact dissipation rates, offering a new approach to tailorable deceleration in protective systems. The findings of the work presented here highlight the attractiveness of crumple-forming in materials processing and recycling and provide an impetus to the furthering of our understanding of similar materials in terms of forming methods, precursor materials and mechanistic aspects of energy dissipation.

Declarations

Author contribution statement

D. Hanaor: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper.
E. F.-Johnson: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Wrote the paper.
S. Wang, S. Quach and K. D.-Torre: Performed the experiments; Analyzed and interpreted the data.
Y. Gan and L. Shen: Conceived and designed the experiments; Analyzed and interpreted the data; Wrote the paper.

Competing interest statement

The authors declare no conflict of interest.

Funding statement

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. 
No additional information is available for this paper. References • Corma A. • et al. Nat. Mater. 2004; 3: 394 • Grosso D. • et al. Adv. Funct. Mater. 2004; 14: 309 • Ogawa M. • Masukawa N. Microporous Mesoporous Mater. 2000; 38: 35 • Barthelat F. • et al. J. Mech. Phys. Solids. 2007; 55: 306 • Ghosh T.K. J. R. Soc. Interface. 2011; 8: 761 • Subramaniam A.B. • et al. Langmuir. 2006; 22: 10204 • Grujicic M. • et al. Mater. Des. 2012; 34: 808 • Guo L.-C. • Noda N. Mech. Mater. 2008; 40: 81 • Flores-Johnson E.A. • et al. Compos. Sci. Technol. 2014; 96: 13 • Flores-Johnson E.A. • et al. Compos. Struct. 2015; 126: 329 • Aharoni H. • Sharon E. Nat. Mater. 2010; 9: 993 • Cambou A.D. • Menon N. Chaos: Interdiscip. J. Nonlinear Sci. 2009; 19: 041109 • Cambou A.D. • Menon N. Proc. Natl. Acad. Sci. 2011; 108: 14741 • Tallinen T. • et al. Nat. Mater. 2008; 8: 25 • Tallinen T. • et al. Comput. Phys. Commun. 2009; 180: 512 • Vliegenthart G. • Gompper G. Nat. Mater. 2006; 5: 216 • Rohmer S. • et al. Geometric and topological modelling of 3d crumpled structures. in: DS 75-4: Proceedings of the 19th International Conference on Engineering Design (ICED13), Design for Harmonies, Vol. 4: Product, Service and Systems Design, Seoul, Seoul, Korea2013: 4-75 (19-22.08. 2013) • Miller J.L. Phys. Today. 2011; 64: 15 • Bevilacqua L. Appl. Math. Model. 2004; 28: 547 • Blair D.L. • Kudrolli A. Phys. Rev. Lett. 2005; 94: 166107 • Andresen C.A. • et al. Phys. Rev. E. 2007; 76: 026108 • Lin Y.-C. • et al. Phys. Rev. E. 2009; 80: 066114 • Bouaziz O. • et al. Mater. Sci. Eng.: A. 2013; 570: 1 • Cranford S.W. • Buehler M.J. Phys. Rev. B. 2011; 84 (ARTN 205451): 205451 • Gomes M. Am. J. Phys. 1987; 55: 649 • Balankin A.S. • Huerta O.S. Fractal geometry and mechanics of randomly folded thin sheets. IUTAM Symposium on Scaling in Solid Mechanics. Springer, 2009: 233 • Balankin A.S. • et al. Phys. Rev. E. 2010; 81: 061126 • Bai W. • et al. Phys. Rev. E Stat. Nonlin Soft Matter Phys. 2010; 82: 066112 • Gama B.A. • et al. Appl. Mech. Rev. 2004; 57: 223 • Jung A. • et al. Mater. Des. 2015; 87: 36 • Deshpande V. • Fleck N. Int. J. Impact Eng. 2000; 24: 277 • Eichhorn S. • et al. J. Mater. Sci. 2001; 36: 3129 • Koizumi M. • Nishihara M. Isostatic Pressing: Technology and Applications. Springer Science & Business Media, 1991 • Callister W.D. • Rethwisch D.G. Materials Science and Engineering: An Introduction. Wiley, New York2007 • Jones D.R. • Ashby M.F. Engineering Materials 2: An Introduction to Microstructures, Processing and Design. Butterworth-Heinemann, 2005 • Bauer J. • et al. Nat. Mater. 2016; 15: 438 • Baumeister J. • et al. Mater. Des. 1997; 18: 217 • Bauwens-Crowet C. • et al. J. Mater. Sci. 1972; 7: 176 • Walley S. • et al. Int. J. Impact Eng. 2004; 30: 31 • Raghava R. • et al. J. Mater. Sci. 1973; 8: 225 • Wisnom M. Composites. 1990; 21: 403 • Zhang X. • et al. Compos. Part A: Appl. Sci. Manuf. 2010; 41: 632 • Moilanen C.S. • et al. Cellulose. 2016; 23: 873 • Henriksson M. Preparation, Structure and Properties KTH Chemical Science and Engineering. 2008
{}
# Translating Natural Language to LTL Formulae I'm brand new to LTL and working on becoming better with LTL formulae. I've got two examples where I am unsure whether my LTL formula is correct. I'm given the sentences, and my assumption is that $$l$$ is true: 1. $$l$$ is always false after $$m$$ LTL Translation: $$G(m \to G(\neg l))$$ i.e. on all paths, $$m$$ implies that on all subsequent paths not $$l$$ 2. $$l$$ is false between $$m$$ and $$n$$ LTL Translation: $$G((m \land Fn) \to (\neg l \, U \, n))$$ i.e. on all paths, $$m$$ and eventually $$n$$ implies not $$l$$ until $$n$$ Is my thinking correct in the examples? Thanks for the help! • $G$ had better be translated as on all subsequent paths. For 1. isn't p a typo and should be $l$? And its translation seems very straightforward. For 2. again isn't p a typo and should be $l$? And $F$ had better be translated as "eventually somewhere on the subsequent path". Overall sounds fine to me. Nov 9, 2021 at 22:38 • @mohottnad I am thinking my last formula is wrong because it's not 'always' that $l$ is false between $m$ and $n$. Would it be $(m \land Fn) \to \neg l \space Un$? I just removed the G. Nov 9, 2021 at 23:20 • But your spec requirement of 2 clearly states "$l$ is false between $m$ and $n$", I don't understand why it's not always so? Your goal is to write such a sentence to satisfy the spec, right? In general in LTL we need unary or binary operators for the whole sentence. Nov 9, 2021 at 23:36 • If you use $U$ as above then $l$ must be always false between $m$ and $n$. If you interpret your spec as $l$ only needs sometimes false between $m$ and $n$, then you may try something like $G((m∧F(¬l))→ Fn)$... Nov 10, 2021 at 1:11
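One way to sanity-check such translations (not part of the question) is to evaluate them on short traces. Real LTL semantics is over infinite paths; the sketch below uses finite-trace (LTLf-style) semantics, so treat it only as an approximation for quick experiments:

```python
# Finite-trace evaluation of the two candidate formulas; a trace is a list of
# sets of atomic propositions that hold at each step.
def always(pred, t, i=0):                      # G
    return all(pred(t, j) for j in range(i, len(t)))

def eventually(pred, t, i=0):                  # F
    return any(pred(t, j) for j in range(i, len(t)))

def until(p, q, t, i=0):                       # p U q
    for j in range(i, len(t)):
        if q(t, j):
            return True
        if not p(t, j):
            return False
    return False

l = lambda t, i: 'l' in t[i]
m = lambda t, i: 'm' in t[i]
n = lambda t, i: 'n' in t[i]
not_l = lambda t, i: not l(t, i)

# 1. G(m -> G(not l))
spec1 = lambda t: always(lambda t, i: not m(t, i) or always(not_l, t, i), t)

# 2. G((m and F n) -> (not l U n))
spec2 = lambda t: always(
    lambda t, i: not (m(t, i) and eventually(n, t, i)) or until(not_l, n, t, i), t)

print(spec1([{'m'}, {'l'}]))            # False: l becomes true after m
print(spec2([{'m'}, {'l'}, {'n'}]))     # False: l is true between m and n
print(spec2([{'m'}, set(), {'n'}]))     # True
```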
{}
Atmos. Chem. Phys., 18, 13987–14003, 2018
https://doi.org/10.5194/acp-18-13987-2018

Research article | 02 Oct 2018

# Composition of ice particle residuals in mixed-phase clouds at Jungfraujoch (Switzerland): enrichment and depletion of particle groups relative to total aerosol

Stine Eriksen Hammer1, Stephan Mertes2, Johannes Schneider3, Martin Ebert1, Konrad Kandler1, and Stephan Weinbruch1

• 1 Institute of Applied Geosciences, Darmstadt University of Technology, Schnittspahnstraße 9, 64287 Darmstadt, Germany
• 2 Leibniz-Institute for Tropospheric Research, Permoserstraße 15, 04318 Leipzig, Germany
• 3 Particle Chemistry Department, Max Planck Institute for Chemistry, Hahn-Meitner-Weg 1, 55128 Mainz, Germany

Abstract

Ice particle residuals (IRs) and the total aerosol particle population were sampled in parallel during mixed-phase cloud events at the high-altitude research station Jungfraujoch in January–February 2017. Particles were sampled behind an ice-selective counterflow impactor (Ice-CVI) for IRs and a heated total inlet for the total aerosol particles. A dilution set-up was used to collect total particles with the same sampling duration as for IRs, to prevent overloading of the substrates. About 4000 particles from 10 Ice-CVI samples (from 7 days of cloud events at site temperatures between −10 and −18 °C) were analysed and classified with operator-controlled scanning electron microscopy. Contamination particles (identified by their chemical composition), most likely originating from abrasion in the Ice-CVI and collection of secondary ice, were excluded from further analysis. Approximately 3000 total aerosol particles (IRs and interstitial particles) from 5 days in clouds were also analysed. Enrichment and depletion of the different particle groups (within the IR fraction relative to the total aerosol reservoir) are presented as an odds ratio relative to aluminosilicate (particles consisting only of Al, Si, and O), which was chosen as reference due to the large enrichment of this group relative to total aerosol and the relatively high number concentration of this group in both the total aerosol and the IR samples. Complex secondary particles and soot are the major particle groups in the total aerosol samples but are not found in the IR fraction and are hence strongly depleted. C-rich particles (most likely organic particles) showed an enrichment smaller than that of aluminosilicates by a factor of ∼20. The particle groups with enrichment similar to aluminosilicate are silica, Fe aluminosilicates, Ca-rich particles, Ca sulfates, sea-salt-containing particles, and metal/metal oxide. Other aluminosilicates – consisting of variable amounts of Na, K, Ca, Si, Al, O, Ti, and Fe – are somewhat more enriched (factor ∼2), and Pb-rich particles are more strongly enriched (factor ∼8) than aluminosilicates. None of the sampled IR groups showed a temperature or size dependence with respect to ice activity, which might be due to the limited sampling temperature interval and the similar size of the particles. 
Footprint plots and wind roses could explain the different total aerosol composition in one sample (carbonaceous particle emission from the urban/industrial area of the Po Valley), but this did not affect the IR composition. Taking into account the relative abundance of the particle groups in total aerosol and the ice nucleation ability, we found that silica, aluminosilicates, and other aluminosilicates were the most important ice particle residuals at Jungfraujoch during the mixed-phase cloud events in winter 2017.

1 Introduction

Mixed-phase clouds are important because they have an impact on the hydrological cycle and cloud electrification and because they influence the atmospheric radiation balance (Storelvmo, 2017). Ice-nucleating particles (INPs) can initiate cloud glaciation, which may cause precipitation (Myhre et al., 2013). The order of magnitude of the effect from aerosol–cloud interaction on the "second indirect aerosol effect" and "semi-indirect effect" is still uncertain (Myhre et al., 2013; Flato et al., 2013; Korolev et al., 2017). In nature, spontaneous freezing of supercooled droplets occurs at temperatures below −38 °C and a relative humidity with respect to ice >∼140 % (Kanji et al., 2017), termed homogeneous ice nucleation (Vali et al., 2015). At higher temperatures, a surface – like a particle surface – can lower the free energy and thereby assist the phase transition to ice when relative humidity allows for this, termed heterogeneous ice nucleation. Heterogeneous ice nucleation can occur in different hypothesized modes: (1) deposition nucleation, (2) immersion freezing, (3) contact freezing, and (4) condensation freezing. A detailed description of the different modes is found elsewhere (Vali et al., 2015; Kanji et al., 2017). Mixed-phase cloud temperatures range between −40 and 0 °C (Storelvmo, 2017), with immersion and contact freezing as the dominating ice formation modes (Lohmann and Diehl, 2006). Ice nucleation ability was studied offline and online in many laboratory and field experiments as well as by modelling (Hoose et al., 2010; Hoose and Möhler, 2012; Kanji et al., 2017, and references therein). Summarizing the laboratory studies (Hoose and Möhler, 2012): biological particles seem to dominate the ice activity at temperatures above −10 °C, whereas mineral dust is found to be mostly ice active below −10 °C, and organic particles and soot nucleate ice below −30 °C, close to homogeneous freezing. A model study of mixed-phase clouds on a global scale by Hoose et al. (2010) shows that the main component of INPs is mineral dust particles. The findings of field experiments at different locations globally are presented by Kanji et al. (2017) as a function of nucleation temperature. In this paper only broadly defined classes are given to characterize the ice nucleation efficiency from INP concentrations in different environments. To summarize: biological particles from rural areas dominate at higher temperatures (−5 to −20 °C), and marine particles from coastal areas show a lower ice activity in the higher temperature range than biological particles (−5 to −30 °C). Particles from Arctic and Antarctic locations seem to have a relatively high INP abundance between −17 and −25 °C, and particles from areas with biomass burning show high INP concentrations between −10 and −30 °C. Mineral-dust-rich regions show particles with the highest ice activity in the range of −10 to −40 °C, and these particles seem to be the most ice-active component. 
Exact number concentrations are found in Kanji et al. (2017) and references therein. Particle groups determined based on chemical composition in cirrus clouds are reported as sulfates, organics, sea salt, mineral dust or fly ash, metal particles, soot, and biological material in the ice particle residual (IR) fraction (Heintzenberg et al., 1996; Cziczo et al., 2004, 2013). Twohy and Poellot (2005) found the highest abundance of salts and industrial particles in cirrus, followed by crustal, organic and soot particles. In mixed-phase clouds, at the high-altitude research station Jungfraujoch in Switzerland, different IR groups were reported to act as ice nuclei. With the use of electron microscopy and looking at the enrichment relative to interstitial aerosol, Ebert et al. (2011) interpreted complex secondary aerosol, Pb-bearing particles, and complex mixtures as ice nuclei. In contrast, Worringen et al. (2015) considered only those particle groups as ice nuclei which were found with three different techniques (FINCH + PCVI, Ice-CVI, and ISI). These groups included silicates, Ca-rich particles, carbonaceous particles, metal/metal oxide, and soot. Using single-particle mass spectrometry, Schmidt et al. (2017) considered all particles observed in the IR fraction as INPs (biological, soil dust, minerals, sea salt/cooking, aged material, engine exhaust, soot, lead-containing particles, industrial metals, Na- and K-dominated particles, and others). Kamphus et al. (2010) report mineral dust and fly ash (with and without some volatiles), metallic particles, and black carbon as the most ice-active particles, measured with two different mass spectrometers behind the Ice-CVI. Cozic et al. (2008a) investigated black carbon enrichment with two particle soot absorption photometers simultaneously behind the Ice-CVI and a total inlet, and by aerosol mass spectrometry (AMS) and a single-particle mass spectrometer (measuring particles between 200 nm and 2 µm) behind the Ice-CVI during cloud events. They concluded, based on the enrichment, that black carbon is ice active. In situ cloud measurements of IRs can be performed with an aircraft for pure ice clouds, such as cirrus clouds, with the use of a counterflow virtual impactor (CVI) (Ogren et al., 1985; Heintzenberg et al., 1996; Ström and Ohlsson, 1998; Twohy et al., 2003; Froyd et al., 2010; Cziczo and Froyd, 2014; Cziczo et al., 2017, and references therein). In situ IR sampling in mixed-phase clouds requires an extra step to separate ice crystals from droplets and has, therefore, up to now been restricted to ground-based measurements. A dedicated inlet system (Ice-CVI) was developed by Mertes et al. (2007) to sample freshly produced ice particles in mixed-phase clouds and, after sublimating the ice, deliver the residuals (IRs) to connected sampling or analysing instruments. As described in Mertes et al. (2007), a residual particle can be interpreted as its original INP only when sampling small ice crystals. There are three reasons for this size restriction, which leads to the sampling of rather young ice particles. The first reason is that only the small ice particles grow by water vapour diffusion; in contrast, larger ice particles could further grow by riming. Moreover, larger and older ice particles experience impaction scavenging by interstitial particles. Both processes add more aerosol particles to the ice crystal, and thus the original INP cannot be identified any more after ice sublimation in the Ice-CVI. 
Last is the technical reason that larger ice particles would shatter and break up at the inner surfaces of the Ice-CVI sampling system. The major aims of our paper are to improve the sampling approach and to study the variation in IRs in mixed-phase clouds. In contrast to previous work (Worringen et al., 2015; Ebert et al., 2011; Kamphus et al., 2010; Schmidt et al., 2017), IR and total aerosol were collected in parallel. This allows us to examine the ice nucleation efficiency of the various particle groups and to investigate the dependence on temperature, particle size, and air mass history.

# 2 Experimental

## 2.1 Sampling

In January–February 2017 an extensive field campaign was conducted by INUIT (Ice Nucleation Research Unit, funded by the German Research Foundation, DFG) at the high-altitude research station Jungfraujoch in Switzerland (3580 m a.s.l.). The campaign lasted for 5 weeks with the aim of investigating IRs from mixed-phase clouds, which are considered to be the original true INPs. During mixed-phase cloud events, IRs were separated from other cloud constituents like interstitial aerosol particles, supercooled droplets, and large ice aggregates by use of the Ice-CVI (Mertes et al., 2007). Total aerosol particles (interstitial particles and IRs) were sampled in parallel. Particles were sampled with multi MINI cascade impactors of the same design as described in Ebert et al. (2016) and Schütze et al. (2017), but with the use of only one stage with a lower 50 % cut-off diameter of approximately 0.1 µm (aerodynamic). The multi MINI cascade impactor is equipped with a purge flow, and 5 min flushing of the system was always performed prior to sampling to avoid carryover of particles from previous samples. The particles were collected on boron substrates to allow detection of light elements including carbon (Choël et al., 2005; Ebert et al., 2016).

## 2.2 Total aerosol sampling

Total aerosol particles were sampled in parallel to IRs behind a heated inlet (Weingartner et al., 1999) to study IR enrichment and depletion, identify contaminants, and characterize the air masses present. Total aerosol samples were collected with a dilution set-up (Fig. 1) to match the longer sampling time (up to 5 h) of the Ice-CVI. The dilution unit consists of two valves that control the air stream in and out of the system, making it possible to send air through two filters to dilute the incoming aerosol flow. Without this dilution, these samples would be overloaded due to the much higher concentration of total particles and would not be suited for single-particle analysis.

Figure 1. Illustration of the dilution unit behind the heated total inlet: (a) an inlet tube attached to the total inlet, (b) diluter, (c) multi MINI impactor, (d) pump, (e) valve to control outflow, (f) valve to control air going back into the system, (g) pre-filter (Whatman, Sigma-Aldrich), and (h) main filter (Millipore, Sigma-Aldrich). Arrows indicate the air flow direction.

## 2.3 Ice-CVI

The Ice-CVI is a modified counterflow virtual impactor which can separate freshly formed ice particles in mixed-phase clouds; for details see Mertes et al. (2007). The inlet consists of several components to separate: (a) large precipitating ice crystals >50 µm by the 90° inlet, (b) large ice particles >20 µm with a virtual impactor, (c) supercooled droplets >5 µm with two cold impaction plates, where the droplets freeze and the ice crystals bounce off, and (d) interstitial particles <5 µm, which are removed by a counterflow virtual impactor.
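To make the size selection explicit, the following toy R sketch summarizes the staged rejection logic described above (this is an illustration, not instrument software; the size thresholds in micrometres follow Mertes et al., 2007, and the helper name is our own):

```r
# Toy sketch of the Ice-CVI selection logic; sizes d_um in micrometres.
# Only small ice particles of roughly 5-20 um reach the sampler.
ice_cvi_fate <- function(constituent, d_um) {
  if (d_um > 50) return("removed: 90-degree inlet (large precipitating ice)")
  if (d_um > 20) return("removed: virtual impactor (large ice particles)")
  if (d_um < 5)  return("removed: counterflow (interstitial particles)")
  if (constituent == "droplet") return("removed: frozen onto cold impaction plates")
  "sampled: small ice particle (~5-20 um); residual analysed after ice sublimation"
}

ice_cvi_fate("ice", 10)      # small ice crystal -> sampled
ice_cvi_fate("droplet", 8)   # supercooled droplet -> removed on the plates
```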
## 2.4 Scanning electron microscopy

Size, morphology, chemical composition, and mixing state of IRs and total aerosol particles were investigated by scanning electron microscopy using a FEI Quanta 400 ESEM FEG instrument (FEI, Eindhoven, the Netherlands) equipped with an energy-dispersive X-ray detector (Oxford, Oxfordshire, UK). All analyses were carried out manually, referred to as operator-controlled scanning electron microscopy (SEM), using an acceleration voltage of 15 kV and a sample chamber pressure of around $1\times 10^{-5}$ mbar. The multipoint feature “point&ID” in the Oxford software Aztec (version 3.3 SP1) was used for the operator-controlled single-particle analysis. On each sample, about 500 particles were measured with 5 s of counting time for X-ray microanalysis. To ensure unbiased results, all particles in an image frame with an equivalent projected area diameter ≥100 nm were investigated. The particles were classified based on chemical composition, mixing state, morphology, and stability under the electron beam. Classification criteria and possible sources are given in Table 1. Particles that could not be assigned to any of the defined classes were grouped as “other”. This group contains, for example, Mg-rich, Zn-rich, and Ag-containing particles. Four groups are interpreted as contamination particles: pure salt, alumina, and Cu-rich and Ni-rich particles.

Table 1. Classification criteria and possible sources/explanations for particle groups for both total aerosol and ice particle residuals. * Most likely contamination. ** Uncertain origin because the chemical characterization and/or morphology was not typical for this particle group.

Figure 2. Temperature (°C) and sampling times in February 2017 behind the Ice-CVI and the total inlet. Sample numbers are given above the bars. Blue bars indicate sampling periods with parallel samples, and green bars periods for which only IR samples could be analysed. Temperature data were received from the Federal Office of Meteorology and Climatology (MeteoSwiss; https://www.meteoswiss.admin.ch, last access: 17 October 2017).

## 2.5 Sampling days, meteorology, and footprint plots

During 7 days, 10 Ice-CVI samples were taken in clouds at site temperatures between −10 and −18 °C. Sampling day, time, and site temperatures are presented in Fig. 2 and as a table in the electronic Supplement (Table S1). Temperatures were measured at the station and can differ from the onset ice nucleation temperature of the particles, depending on where in the mixed-phase cloud nucleation occurred. Six parallel total aerosol samples were successfully collected. The other four total samples are either overloaded or do not have enough particles on the substrate. During the whole campaign, north-easterly and south-westerly winds were the dominating local wind directions, in accordance with the topography at Jungfraujoch. Footprint plots, showing the probable air mass residence time at the surface, were calculated with the FLEXPART model (Stohl et al., 1998, 2005; Stohl and Thomson, 1999; Seibert and Frank, 2004). These plots are calculated with 10-day back trajectories and a potential emission sensitivity to determine the probable emission region of the particles arriving at Jungfraujoch. Wind roses and footprint plots are presented in Fig. 3.
Figure 3. Wind rose (a) and footprint plots (b) calculated with the FLEXPART model, http://lagrange.empa.ch/FLEXPART_browser/ (last access: 17 January 2018) (Stohl et al., 1998, 2005; Stohl and Thomson, 1999; Seibert and Frank, 2004). Horizontal wind direction and speed were obtained from the Federal Office of Meteorology and Climatology (MeteoSwiss; https://www.meteoswiss.admin.ch, last access: 17 October 2017).

## 2.6 Methodological problems

### 2.6.1 Sampling artefacts

The observed alumina, pure salt, and Ni-rich and Cu-rich particles are regarded as sampling artefacts. The IR samples are heavily loaded with artefacts (40 %–78 % of the particles: alumina, Ni-rich particles, and pure salt), which are easily characterized and removed in further analysis. The Cu-rich particles are a part of the substrates and can in principle be found in both IR samples and total aerosol samples. Alumina particles are found in all IR samples at relatively high number abundances between 25 % and 70 %, despite the fact that the Ice-CVI was coated with Ni before the present campaign to avoid this contamination. The relative abundance of alumina particles in IR samples is higher in our campaign compared to two previous campaigns at Jungfraujoch using the same instrumentation but without the Ni coating of the Ice-CVI (Ebert et al., 2011; Worringen et al., 2015). This might be explained by the fact that we only focused on the sub-micrometre particles and/or by differences in meteorology, sampling time, and particle load, all of which influence the relative composition of contamination particles. In contrast to previous work, we sampled IR and total aerosol in parallel to be able to clearly distinguish instrumental artefacts from IRs. As we did not detect a single alumina particle in the total aerosol samples, this particle group is regarded as contamination. Alumina particles are easily recognized and were subtracted from the real IRs. Nevertheless, their presence helped substantially to locate the impaction spot on the boron substrates. Secondary ice processes can produce ice crystals in the critical size range selected by the Ice-CVI. The low temperature during sampling does not support rime splintering via the Hallett–Mossop process (Hallett and Mossop, 1974), but other secondary processes producing ice crystals in the critical size range, like ice-crystal break-up, blown snow, or crystal–crystal collisions, are plausible (Mertes et al., 2007). We hypothesize that pure salt is an artefact due to sampling of ice from the mentioned secondary ice production processes in clouds. Sodium and chloride present in ice crystals that previously acted as cloud condensation nuclei can form solid NaCl in the sampling line or on the substrate after evaporation of the water. This hypothesis remains inconclusive because pure salt is not observed in the total aerosol fraction, where only aged and mixed salt are present. This might be explained by evaporation of ice crystals in the heated inlet and the longer sampling line and by the relatively low number concentration of these particles compared to the dominating groups (soot and complex secondary particles) in the total aerosol samples. It should be mentioned here that sea salt was also considered to be an artefact in the IR fraction by Worringen et al. (2015). A few Ni-rich particles (1 %–7 % relative by number) were encountered in the IR fraction but not in the total aerosol. The Ni-rich particles most likely stem from the Ni coating of some parts of the Ice-CVI.
The few Cu-rich particles found, in both the total aerosol and the IR samples, stem from the boron substrate, in which boron is embedded in copper.

### 2.6.2 Accuracy of particle group abundance

The accuracy of the particle group abundance depends on three different factors: (1) separation of IRs from the rest of the aerosol particles by the Ice-CVI and deposition losses behind both inlets, (2) detection of particles in the SEM, and (3) the classification procedure. Sampling issues like abrasion, deposition losses, and ice crystal break-up may occur in the Ice-CVI (Mertes et al., 2007). Abrasion particles were easily recognized, as discussed in the previous paragraph. Sampling of secondary ice may have led to the relatively high abundance (∼2 %–28 %) of pure salt particles in the IR fraction, also discussed in the previous paragraph. As we regard pure salt particles as an artefact, they are not included in the sea-salt-containing particle group. Deposition loss can generally not be excluded. Three of the total aerosol samples (S-3b, S-4b, and S-6b) were sampled under conditions in which the concentration (measured with condensation particle counters) behind the total inlet was lower than behind the interstitial inlet. There are two possible explanations for this: deposition loss in the total aerosol inlet and/or a leak in the interstitial inlet. The relative abundance of the different particle groups in these samples is, however, comparable to previous findings at Jungfraujoch (Cozic et al., 2008b; Kamphus et al., 2010; Fröhlich et al., 2015). A possible deposition loss leading to a systematic bias in the concentration measurements does not seem to change the relative abundance of the different particle groups. Our conclusions are thus not affected, as we do not discuss number concentrations. For most particle groups we do not expect significant detection artefacts in the SEM. These particle groups are detected with high efficiency, in both the total aerosol and the IR fraction. However, C-rich particles and soot may be interchanged in total aerosol samples because the image quality can be reduced by evaporating complex secondary particles, leading to less efficient detection of carbonaceous species, which have a low contrast in SEM images. Usually, evaporation of complex secondary particles is not a problem because the particles are observed at the start of the analysis. Nevertheless, in one sample, complex secondary particles were lost prior to observation because this sample was erroneously left in the chamber for a longer time before it was analysed. However, these effects seem to be small because we observed abundances of carbonaceous particles and complex secondary aerosol particles (in total aerosol) comparable to previous work (Cozic et al., 2008b). The classification criteria used (Table 1) may lead to problems for small (below approximately 150 nm equivalent projected area diameter) carbonaceous particles. Due to the limited lateral resolution of the instrument, the typical morphology of soot may not be recognized for small particles. In this case, soot would be misclassified as C-rich particles. Still, the sum of both particle groups should be accurate. This problem is, however, only significant for the total aerosol samples because evaporating secondary aerosol in these samples leads to deterioration of the image quality. Misclassification of soot as C-rich particles would imply that soot is even more depleted in the IR fraction.
## 2.7 Statistical analysis

To calculate enrichment and depletion of the different particle groups in the IR fraction relative to total aerosol, all particle group abundances are normalized to the abundance of the aluminosilicate group. We have chosen this group as a reference as it has the highest relative abundance in both the IR samples and the total aerosol. We do not show a simple ratio of proportions (e.g. the proportion of aluminosilicates in IRs divided by the proportion of this group in total aerosol) because the proportion is constrained to values between 0 and 1. This is generally referred to as closed data (Aitchison, 2003; Van den Boogaart and Tolosana-Delgado, 2013) and implies that only ratios of two groups can be interpreted (i.e. not the proportion of one group alone). Furthermore, we do not discuss differences in proportions between IR and total aerosol as in Ebert et al. (2011), as this difference is strongly dependent on the relative abundance of a particle group. To overcome these problems, only aluminosilicate-normalized particle group abundances are used to quantify enrichment/depletion of a particle group in the IR fraction. This measure is termed the odds ratio in the statistical literature. The odds ratio (OR) is calculated in the following way:

$$\mathrm{OR}_i=\frac{\left(n_i/n_{\mathrm{AlSi}}\right)_{\mathrm{IR}}}{\left(n_i/n_{\mathrm{AlSi}}\right)_{\mathrm{total}}},\qquad\text{(1)}$$

with $n_i$ the absolute number of particles in particle group $i$ and $n_{\mathrm{AlSi}}$ the absolute number of particles in the group of aluminosilicates, in both the IR and the total aerosol fraction. For particle groups which did not contain a single particle, one particle (which is the detection limit) was added to the respective group in order to calculate an odds ratio. For these groups the odds ratios shown in Fig. 8 represent an upper or lower limit. The odds ratios represent enrichment or depletion of a particle group normalized to aluminosilicates when the IR fraction is compared to the total aerosol. Enrichment relative to aluminosilicates is discussed for each group that is present in the IR. The two groups of complex secondary particles and soot are interpreted as depleted because these particles are not found in the IR fraction. These two particle groups are hence depleted compared to aluminosilicates and absolutely depleted compared to total aerosol. The Fisher test was applied to estimate confidence intervals for the odds ratio and was calculated with RStudio (RStudio Team, 2016). Figures 5, 7, and 8 were plotted in RStudio with the package “ggplot2” (Wickham, 2009). Wind roses (Fig. 3) were plotted with the RStudio package “openair” (Carslaw and Ropkins, 2012).
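As a concrete illustration of Eq. (1) and the Fisher confidence interval, the following R sketch reproduces the calculation with hypothetical particle counts (not the campaign data), including the detection-limit substitution for empty groups:

```r
# Hypothetical counts: particles of group i and of the aluminosilicate
# reference group, in the IR fraction and in the total aerosol.
n_i_ir     <- 12    # group i among ice residuals
n_alsi_ir  <- 100   # aluminosilicates among ice residuals
n_i_tot    <- 30    # group i in total aerosol
n_alsi_tot <- 250   # aluminosilicates in total aerosol

# Detection-limit substitution: a group with zero particles is set to 1.
counts <- pmax(c(n_i_ir, n_alsi_ir, n_i_tot, n_alsi_tot), 1)

# Point estimate, Eq. (1).
or <- (counts[1] / counts[2]) / (counts[3] / counts[4])

# Fisher test on the 2x2 table for a 95 % confidence interval.
tab <- matrix(counts, nrow = 2, byrow = TRUE,
              dimnames = list(c("IR", "total"), c("group_i", "AlSi")))
ft <- fisher.test(tab)
c(odds_ratio = or, lower = ft$conf.int[1], upper = ft$conf.int[2])
```

Note that `fisher.test` reports a conditional maximum-likelihood estimate of the odds ratio, which can differ slightly from the sample odds ratio of Eq. (1); only its confidence interval is of interest here.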
# 3 Results

## 3.1 Total aerosol

Particle groups observed in the total aerosol samples include complex secondary particles, soot, C-rich particles, Ca-rich particles, Ca sulfates, silica, aluminosilicates, Fe aluminosilicates, other aluminosilicates, metal/metal oxide, sea-salt-containing particles (aged and mixed), and other particles (Fig. 4).

Figure 4. Relative number abundance of the different particle groups within the total aerosol samples. Sample S-2b shows a combustion event with air mass history from the Po Valley, and sample S-5b is influenced by an analytical artefact from the loss of volatile particles.

Figure 5. Size of total aerosol particles. A few fly ash particles were detected in the metal/metal oxides group.

In addition, one group of artefact particles (Cu-rich particles) originating from the substrate was found and excluded from further analysis. Four of the six samples are dominated by secondary aerosol, which consists of sulfates and of highly unstable particles (under vacuum and/or electron bombardment) for which no X-ray spectrum could be obtained. Still, the remains of these particles are easily seen in the secondary electron images. The highly unstable particles are classified based on the fact that they evaporated during the operator-controlled X-ray analysis. In contrast to the IR fraction, we observed two groups of carbonaceous particles. Carbon-dominated particles without typical morphology are classified as C-rich particles (Fig. S1). Chain-like or more compacted agglomerates of spherical primary carbonaceous particles are interpreted as soot, in accordance with previous literature, e.g. Wentzel et al. (2003), Buseck et al. (2014), and Weinbruch et al. (2018). Sample S-2b was taken during night-time and consists of two separate samples taken directly one after the other (for 3 h each). The unusually high abundance of carbonaceous particles within this sample most likely results from urban/industrial sources of the Po Valley, seen in the footprint plot (Fig. 3). Sample S-5b shows a high relative abundance of mineral particles, which may be the result of having lost complex secondary particles in the instrument, as this sample was exposed to the vacuum of the electron microscope for a much longer time than the other samples. Most of the total aerosol particles have a geometric diameter below 500 nm (Fig. 5). The mineral groups of aluminosilicates, Fe aluminosilicates, and other aluminosilicates are somewhat larger than the rest of the particle groups. The size distribution (dN/dlogDp vs. particle diameter) is shown in the electronic Supplement (Fig. S3).
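For readers unfamiliar with the dN/dlogDp representation used in Figs. S3 and S4, the following R sketch (with made-up diameters, purely for illustration) shows how such a distribution is built from single-particle sizes: counts are binned into logarithmically spaced size bins and normalized by the bin width in log10 space.

```r
# Minimal sketch with hypothetical diameters (nm): building dN/dlogDp
# from single-particle sizes, as plotted in Figs. S3 and S4.
d <- c(120, 150, 180, 220, 300, 340, 410, 520, 780, 1200)  # diameters, nm
breaks <- 10^seq(2, 3.5, by = 0.25)                        # log-spaced bins, 100 nm to ~3.2 um
counts <- hist(d, breaks = breaks, plot = FALSE)$counts    # particles per bin (dN)
dNdlogDp <- counts / diff(log10(breaks))                   # normalize by dlogDp (= 0.25 here)
midpoints <- sqrt(head(breaks, -1) * tail(breaks, -1))     # geometric bin centres for plotting
```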
## 3.2 Ice particle residuals

The following particle groups were observed in the IR samples (Fig. 6): minerals (silica, aluminosilicates, Fe aluminosilicates, other aluminosilicates, Ca sulfates, and Ca-rich particles), sea-salt-containing particles (aged and mixed salt), C-rich particles, Pb-rich particles, metal/metal oxide, and other particles. In addition, four groups of sampling artefacts were found: pure salt, alumina, and Ni-rich and Cu-rich particles. The sampling artefacts are regarded as contamination (see Sect. 2.6) and are thus not included in the figures. The composition including contamination particles is given in the electronic Supplement (Fig. S5). Mineral particles are of highest relative abundance (between 60 % and 90 % by number) in all samples (Fig. 6) and mainly consist of silica, aluminosilicates, and other aluminosilicates, as well as smaller fractions of Fe aluminosilicates, Ca sulfates, and Ca-rich particles. A small percentage (≤7 % by number) of Pb-rich particles – PbCl or particles containing heterogeneous Pb inclusions – is found in eight of the samples. Sea-salt-containing particles are present in all samples in variable amounts of up to 12 %. The C-rich particles observed in the IR fraction can be excluded from soot because they do not show the typical morphology of chain-like or more compacted agglomerates of primary particles (see Fig. S1). Instead, these particles are most probably organic particles. The group of metal/metal oxide particles includes Fe oxides/hydroxides, Ti oxides, and steel particles (Fe, Cr, Mn alloys).

Figure 6. Relative number abundance of the different particle groups of IRs sampled in mixed-phase clouds at site temperatures between −10 and −18 °C. Sampling artefacts (pure salt, alumina, and Ni-rich and Cu-rich particles) are not shown.

Figure 7. Size of IRs. Three outliers of other aluminosilicates are shown (2.7, 2.9, and 3.4 µm).

Most IRs have an equivalent projected area diameter below 500 nm (Fig. 7). The groups of Fe aluminosilicates and other aluminosilicates are somewhat larger and show a higher variation than the rest of the particle groups. The size distribution (dN/dlogDp vs. particle diameter) is shown in the electronic Supplement in Fig. S4.

## 3.3 IR vs. total aerosol

For six sample pairs (simultaneous sampling of total aerosol and IR), the enrichment or depletion of the particle groups compared to aluminosilicates is shown in Fig. 8 as the odds ratio. Complex secondary particles and soot are always strongly depleted in the IR fraction, as not a single particle of either group was observed as an IR. An upper limit for the depletion relative to aluminosilicates can be obtained by setting the number of particles in the IR fraction for both groups equal to 1 (the detection limit). With this assumption it can be seen that soot is depleted in the IR fraction relative to aluminosilicates by at least a factor of 700 and secondary aerosol particles by a factor of at least 4200. Both particle groups are also depleted in the IR fraction relative to total aerosol. C-rich particles are less enriched in the IR fraction than aluminosilicates, by a factor of approximately 20.

Figure 8. Enrichment or depletion of the different particle groups within the IR fraction expressed as the odds ratio (see text for details). The 95 % confidence interval (CI) of the odds ratio is shown as error bars. For soot and complex secondary particles the lower limit of the CI, and for Pb-rich particles the upper limit of the CI, cannot be defined precisely due to counting statistics. Thus they are marked by arrows.

Pb-rich particles and other aluminosilicates are enriched (relative to aluminosilicates) within the IR fraction. However, the enrichment factor has large uncertainties due to counting statistics. The remaining particle groups are, within counting error, enriched in the IR fraction similarly to aluminosilicates (for the latter group the odds ratio is 1 by definition).

# 4 Discussion

The major finding of our paper is that sea-salt-containing particles, Ca-rich particles, Ca sulfates, silica, Fe aluminosilicates, and metal/metal oxides are similarly ice active to aluminosilicates at Jungfraujoch in warm mixed-phase clouds (−10 to −18 °C). Other aluminosilicates and Pb-rich particles seem to be even more ice active than aluminosilicates. In contrast, soot and complex secondary particles are strongly depleted (compared to aluminosilicates and absolutely compared to total aerosol) in the ice residuals. C-rich particles are less enriched than aluminosilicates by a factor of approximately 20. Thus, it is concluded that their ice nucleation ability under these conditions is significantly lower. The ice nucleation activities of the different particle groups are discussed in Sect. 4.2.

## 4.1 Composition of total aerosol

Four of the six total aerosol samples are dominated by complex secondary particles (Fig. 4), which seems to be typical for Jungfraujoch (Cozic et al., 2008b; Fröhlich et al., 2015). Two samples (S-2b and S-5b) have a different composition (Fig. 4).
The first sample (S-2b) shows a higher carbonaceous fraction, and the second sample (S-5b) a higher fraction of mineral particles and C-rich particles. The high soot and C-rich particle abundance of the first sample may be explained by footprint plots showing that the air mass had a longer surface residence time over the Po Valley (Italy), an urban/industrial area with abundant sources of carbonaceous particles. The potential artefact in the second sample does not influence the enrichment factors of the other particle groups; the odds ratio of complex secondary particles shown in Fig. 8 will merely be somewhat lower. Our general conclusion that complex secondary particles are inefficient ice nuclei under the investigated conditions is not changed. Most particles of the total aerosol have sizes below approximately 1 µm, which is in good agreement with Herrmann et al. (2015). Overall, our total aerosol samples consist of complex secondary particles (60 % by number), C-rich particles (16 %), soot (10 %), and mineral particles (14 %). This composition is similar to previous findings at Jungfraujoch during winter. According to Cozic et al. (2008b), the total aerosol is dominated by organic matter and secondary aerosol (87 % by mass), with smaller contributions of black carbon (4 %) and a non-determined mass fraction (9 %, reported as “assumed to be composed of insoluble compounds such as silicate from mineral dust”). It was also shown by Kamphus et al. (2010) that the main components of the ambient aerosol at Jungfraujoch in winter (2007) are sulfate and organics, and only a small fraction (between 1 % and 17 %) is classified as mineral particles. With respect to ice nucleation, mineral dust particles are of the greatest importance (see Sect. 4.2). Aluminosilicates are the most abundant group of mineral particles in the total aerosol, with almost twice the abundance of silica. This fits well with the distribution of different minerals in soils presented by Hoose et al. (2008), in which kaolinite and illite show a higher abundance than calcite and quartz in the clay fraction worldwide. Other aluminosilicates and Ca-rich particles are present in four of the six samples at a low number abundance (1 %–2 %). Ca-containing particles at Jungfraujoch were also found by Cozic et al. (2008b), albeit mainly in the coarse mode. The footprint plots (Fig. 3) were quite similar, with high particle residence time over the North Atlantic Ocean. None of the samples were taken during mineral dust events, which normally occur in spring at Jungfraujoch (Coen et al., 2007). One total aerosol sample with a higher fraction of carbonaceous particles had a higher surface residence time over the Po Valley than the rest.

## 4.2 Ice nucleation activity of different particle groups

IRs mainly consist of mineral particles (Fig. 6). The classes of Fe aluminosilicates, Ca sulfates, Ca-rich particles, silica, sea-salt-containing particles, and metal/metal oxides are enriched similarly to aluminosilicates (odds ratio ∼1). Other aluminosilicates are more enriched than aluminosilicates by a factor of ∼2. The mineral particle abundance between 60 % and 85 % in the IR fraction is in good agreement with previous findings for mixed-phase clouds at Jungfraujoch (Kamphus et al., 2010; Ebert et al., 2011; Worringen et al., 2015). Mineral particles are also reported as ice active in cirrus clouds (DeMott et al., 2003; Cziczo and Froyd, 2014).
Studies of IRs in cirrus clouds are sometimes mentioned in the discussion to show which kinds of IRs are found in the environment, independent of the cloud regime. It has to be emphasized here that this is not meant as a direct comparison, as the temperature and freezing regimes are quite different; note that deposition nucleation dominates in cirrus clouds (Cziczo et al., 2013). The size of IRs varies between the detection limit (100 nm) and 3.4 µm (Fig. 7). The size distribution is comparable to previous findings by Worringen et al. (2015) showing a maximum around 300 nm. We did not find a relationship between the size of the particles and the enrichment factor (odds ratio), presumably because the particle sizes did not differ much. The sampling temperature at the site varied between −10 and −18 °C (Fig. 2). Temperature was measured at the station and can differ from the onset ice nucleation temperature of the particles, depending on where in the mixed-phase cloud nucleation occurred. None of the particle group abundances in the IR fraction showed a systematic temperature dependence. However, based on the limited number of samples and the relatively small temperature range, no definite conclusion regarding the temperature dependence can be drawn. The importance of a given particle group for ice nucleation in the atmosphere depends on the ice nucleation ability and on the abundance of this group in the total aerosol. Both parameters will be discussed in the following. Complex secondary aerosol particles and soot were not found in the IR fraction, in contrast to previous work at Jungfraujoch (Cozic et al., 2008a; Ebert et al., 2011; Worringen et al., 2015; Schmidt et al., 2017), even though these groups dominate the total aerosol fraction. Thus, their ice-nucleating ability under the conditions of our campaign can be assumed to be very low. One explanation for this difference might be the higher site temperatures during our campaign. In the present study, complex secondary particles are defined by the presence of an S peak in the X-ray spectrum and/or instability under electron bombardment. It must be emphasized here that this particle group most likely also contains a substantial fraction of organics and nitrates (Vester et al., 2007); see Table 1. C-rich particles were observed in the total aerosol and the IR fraction but are less ice active than aluminosilicates (odds ratio ∼0.04). C-rich particles were reported in previous studies of mixed-phase clouds at Jungfraujoch (Mertes et al., 2007; Cozic et al., 2008; Kamphus et al., 2010; Ebert et al., 2011; Worringen et al., 2015; Schmidt et al., 2017). Our results are also in agreement with the findings of many cirrus cloud field studies (see the recent review by Knopf et al., 2018, and references therein), which show that organic aerosol is found in the IR fraction but is depleted relative to total aerosol. Aluminosilicates are enriched in all samples and have the highest relative number abundance in the IR fraction. Aluminosilicates were also found to be efficient ice nuclei in other field experiments (Cziczo et al., 2013; Worringen et al., 2015; Iwata and Matsuki, 2018). Among the aluminosilicates, kaolinite is reported as an efficient ice nucleus in laboratory studies (Zimmermann et al., 2007; Murray et al., 2011; Wex et al., 2014; Freedman, 2015). As aluminosilicates often have a high abundance in the total aerosol and in the IR samples, they are the most important particle group for ice nucleation.
Therefore, the enrichment or depletion of the particle groups was normalized to this group. Silica is the second most abundant mineral particle group in the IR samples and the only mineral group which seems to have a somewhat lower ice activity than aluminosilicates (upper limit of the 95 % confidence interval of the odds ratio <1). However, keeping in mind the counting error for aluminosilicates, silica is statistically similarly enriched. This observation is in agreement with Atkinson et al. (2013) but in contradiction to Eastwood et al. (2008), who concluded that quartz is less ice active than kaolinite and montmorillonite. The silica fraction in the IR samples varies between 1 % and 30 %. Boose et al. (2016) point out that quartz is always present in atmospheric dust in all size ranges, even in the smallest size fraction, which is dominated by clay minerals. They conclude that quartz is an important atmospheric INP component because it is present in the size fraction with the longest atmospheric residence time. Despite the fact that the enrichment of silica is somewhat lower than that of aluminosilicates, its relatively high abundance in the IR fraction in our samples confirms this conclusion. Fe aluminosilicates are enriched in the IR fraction similarly to aluminosilicates. Fe aluminosilicates were reported as cloud residuals by Matsuki et al. (2010). As these authors did not differentiate between droplets and ice crystals, nothing can be said about the ice nucleation ability of Fe aluminosilicates. This mineral group is not present at high relative abundance at Jungfraujoch; thus, it will not contribute much to ice nucleation at this location. The group of other aluminosilicates most likely consists of different minerals like feldspars, illite, and smectite. Laboratory studies (Atkinson et al., 2013; Iwata and Matsuki, 2018) showed that K feldspar and clay minerals (Zimmermann et al., 2008; Hiranuma et al., 2015; Boose et al., 2016) have a high ice nucleation ability compared to other minerals. A high ice nucleation ability of clay minerals is also reported from field experiments (Targino et al., 2006; Worringen et al., 2015). Our field study also shows an enrichment of other aluminosilicates in the IR fraction, indicating a high ice nucleation ability. However, as feldspar is less common in the smallest dust fraction, it was concluded by Boose et al. (2016) that at least the feldspar group is generally of minor importance. Ca-rich and Ca-sulfate particles are relatively low in number abundance, in both the total aerosol and the IR samples. Similar to quartz, calcium-containing particles showed different ice nucleation abilities in previous laboratory studies (Zimmermann et al., 2008; Atkinson et al., 2013). In field experiments, however, Ca-rich particles and Ca sulfates were observed in the IR fraction (Ebert et al., 2011; Worringen et al., 2015; Iwata and Matsuki, 2018). Based on chemistry, three subgroups of salt can be distinguished in the IR samples: pure salt, aged sea salt, and mixed sea salt. Pure salt is regarded as an artefact (see Sect. 2.6.1) and is thus excluded from the further analysis. Due to their low number abundance, the two other salt subgroups are combined into the sea-salt-containing particle group. Sea-salt-containing particles are enriched similarly to aluminosilicates. The ice activity of salt and sea salt is still controversial due to discrepancies among different laboratory studies (Wise et al., 2012; Niehaus and Cantrell, 2015; Ladino et al., 2016). Kanji et al.
(2017) attribute these differences to the experimental set-up, i.e. different sizes, compositions, and particle generation methods. In field experiments, however, salts are present in the IR fraction of both cirrus and mixed-phase clouds (Targino et al., 2006; Ebert et al., 2011; Cziczo et al., 2013; Worringen et al., 2015; Iwata and Matsuki, 2018). It is advocated by Iwata and Matsuki (2018) that pure NaCl is not ice active due to molar depression of the freezing point. Sea-salt-containing particles may act as INPs due to the presence of organics (Wilson et al., 2015; DeMott et al., 2016; Iwata and Matsuki, 2018). However, with our measurement technique we cannot determine where on a particle ice nucleation occurs, i.e. in pores or on a thin coating. The enrichment of metals and metal oxides is similar to that of aluminosilicates. The ice activity of different metal and metal oxide particles varies with their chemical composition (Kanji et al., 2017). Our samples are dominated by FeCrMn (steel), Ti oxide, and Fe oxide. The literature regarding the metal/metal oxide group is ambiguous. Hematite was reported as ice active by Zimmermann et al. (2008). In contrast, hematite, magnetite, and rutile were found not to be very ice active in deposition mode by Yakobi-Hancock et al. (2013). Even so, metals and metal oxides are often found in IR samples from cirrus and mixed-phase clouds (Kamphus et al., 2010; DeMott et al., 2003; Ebert et al., 2011; Worringen et al., 2015; Schmidt et al., 2017). Pb-containing particles are present in the IR fraction, as already reported in previous work at Jungfraujoch (Cziczo et al., 2009; Kamphus et al., 2010; Ebert et al., 2011; Worringen et al., 2015; Schmidt et al., 2017). In the present study, Pb-rich particles are the most enriched particle group. A high enrichment of Pb-rich particles among IRs was also reported by Ebert et al. (2011). In addition, laboratory work showed that Pb can increase the ice activity of mineral particles considerably (Cziczo et al., 2009; Yakobi-Hancock et al., 2013). Helicopters and small aircraft were discussed as local sources of Pb at Jungfraujoch by Kamphus et al. (2010) and Ebert et al. (2011). As the samples were collected during in-cloud conditions, we do not expect freshly emitted on-site Pb-rich particles from the mentioned sources. A time delay between emission and sampling results in relatively low concentrations of Pb in the ambient air in clouds at Jungfraujoch. However, Kamphus et al. (2010) and Schmidt et al. (2017) detected Pb-bearing particles with mass spectrometry in both ambient air and IRs. Keeping in mind the better counting statistics of mass spectrometry, it seems plausible that the total aerosol contains a small amount of Pb-rich particles which were missed in our total samples. To summarize, the two particle groups of complex secondary particles and soot are strongly depleted compared to aluminosilicates as well as absolutely depleted compared to the total aerosol. Despite an uncertainty due to potential misclassification, the C-rich group is less enriched than aluminosilicates. Other aluminosilicates and Pb-rich particles are enriched compared to aluminosilicates. The high enrichment of Pb-rich particles indicates that this group is more ice active than the rest of the groups present in the IR fraction. All other particle groups (silica, Fe aluminosilicates, Ca sulfates, Ca-rich particles, sea-salt-containing particles, and metal/metal oxides) are enriched similarly to aluminosilicates.
The relatively high abundance of artefacts was identified by comparing the IR and total aerosol fractions, showing how important parallel sampling is for the identification of IRs. Taking into account the relative abundance of the particle groups in total aerosol and the ice nucleation ability, we conclude that silica, aluminosilicates, and other aluminosilicates were the most important ice-nucleating particles in mixed-phase clouds at site temperatures between −10 and −18 °C during the campaign at Jungfraujoch in winter 2017.

Data availability. The data set is available for the community and can be accessed by request to Stine Eriksen Hammer (sehammer@geo.tu-darmstadt.de) of the Technical University Darmstadt.

Author contributions. SEH collected the samples, analysed the particles by electron microscopy, performed data analysis, and prepared the paper. ME contributed to electron microscopy and data analysis. KK designed the dilution unit and contributed to data analysis. SM designed, improved, and operated the Ice-CVI during the campaign. JS organized the field campaign at Jungfraujoch and contributed to data analysis. SW contributed to data analysis and paper preparation.

Competing interests. The authors declare that they have no conflict of interest.

Acknowledgements. Stine Eriksen Hammer would like to thank Annette Worringen and Nathalie Benker for discussion and support and Thomas Dirsch for building the dilution unit. We thank the whole INUIT-JFJ team for discussions and support. The authors thank MeteoSwiss for meteorological data and the International Foundation HFSJG, who made it possible to carry out the experiment at the high-altitude research station Jungfraujoch. The authors also gratefully acknowledge the German Research Foundation for financial support within the research unit INUIT (FOR 1525) and within grant KA 2280/2-1. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement no. 654109.

Edited by: Allan Bertram. Reviewed by: two anonymous referees.

References

Aitchison, J.: A concise guide to compositional data analysis, Cairndow, UK, 2003. Atkinson, J. D., Murray, B. J., Woodhouse, M. T., Whale, T. F., Baustian, K. J., Carslaw, K. S., Dobbie, S., O'Sullivan, D., and Malkin, T. L.: The importance of feldspar for ice nucleation by mineral dust in mixed-phase clouds, Nature, 498, 355–358, https://doi.org/10.1038/nature12278, 2013. Boose, Y., Welti, A., Atkinson, J., Ramelli, F., Danielczok, A., Bingemer, H. G., Plötze, M., Sierau, B., Kanji, Z. A., and Lohmann, U.: Heterogeneous ice nucleation on dust particles sourced from nine deserts worldwide – Part 1: Immersion freezing, Atmos. Chem. Phys., 16, 15075–15095, https://doi.org/10.5194/acp-16-15075-2016, 2016. Buseck, P. R., Adachi, K., Gelencsér, A., Tompa, É., and Pósfai, M.: Ns-Soot: A Material-Based Term for Strongly Light-Absorbing Carbonaceous Particles, Aerosol Sci. Tech., 48, 777–788, https://doi.org/10.1080/02786826.2014.919374, 2014. Carslaw, D. C. and Ropkins, K.: Openair – an R package for air quality data analysis, Environ. Modell. Softw., 27–28, 52–61, https://doi.org/10.1016/j.envsoft.2011.09.008, 2012.
Choël, M., Deboudt, K., Osán, J., Flament, P., and Van Grieken, R.: Quantitative Determination of Low-Z Elements in Single Atmospheric Particles on Boron Substrates by Automated Scanning Electron Microscopy-Energy-Dispersive X-ray Spectrometry, Anal. Chem., 77, 5686–5692, https://doi.org/10.1021/ac050739x, 2005. Coen, M. C., Weingartner, E., Nyeki, S., Cozic, J., Henning, S., Verheggen, B., Gehrig, R., and Baltensperger, U.: Long-term trend analysis of aerosol variables at the high-alpine site Jungfraujoch, J. Geophys. Res.-Atmos., 112, D13213, https://doi.org/10.1029/2006JD007995, 2007. Cozic, J., Mertes, S., Verheggen, B., Cziczo, D. J., Gallavardin, S. J., Walter, S., Baltensperger, U., and Weingartner, E.: Black carbon enrichment in atmospheric ice particle residuals observed in lower tropospheric mixed phase clouds, J. Geophys. Res.-Atmos., 113, D15209, https://doi.org/10.1029/2007JD009266, 2008a. Cozic, J., Verheggen, B., Weingartner, E., Crosier, J., Bower, K. N., Flynn, M., Coe, H., Henning, S., Steinbacher, M., Henne, S., Collaud Coen, M., Petzold, A., and Baltensperger, U.: Chemical composition of free tropospheric aerosol for PM1 and coarse mode at the high alpine site Jungfraujoch, Atmos. Chem. Phys., 8, 407–423, https://doi.org/10.5194/acp-8-407-2008, 2008b. Cziczo, D. J. and Froyd, K. D.: Sampling the composition of cirrus ice residuals, Atmos. Res., 142, 15–31, https://doi.org/10.1016/j.atmosres.2013.06.012, 2014. Cziczo, D. J., DeMott, P. J., Brooks, S. D., Prenni, A. J., Thomson, D. S., Baumgardner, D., Wilson, J. C., Kreidenweis, S. M., and Murphy, D. M.: Observations of organic species and atmospheric ice formation, Geophys. Res. Lett., 31, L12116, https://doi.org/10.1029/2004GL019822, 2004. Cziczo, D. J., Stetzer, O., Worringen, A., Ebert, M., Weinbruch, S., Kamphus, M., Gallavardin, S. J., Curtius, J., Borrmann, S., and Froyd, K. D.: Inadvertent climate modification due to anthropogenic lead, Nat. Geosci., 2, 333–336, https://doi.org/10.1038/ngeo499, 2009. Cziczo, D. J., Froyd, K. D., Hoose, C., Jensen, E. J., Diao, M., Zondlo, M. A., Smith, J. B., Twohy, C. H., and Murphy, D. M.: Clarifying the Dominant Sources and Mechanisms of Cirrus Cloud Formation, Science, 340, 1320–1324, https://doi.org/10.1126/science.1234145, 2013. Cziczo, D. J., Ladino, L., Boose, Y., Kanji, Z. A., Kupiszewski, P., Lance, S., Mertes, S., and Wex, H.: Measurements of Ice Nucleating Particles and Ice Residuals, Meteor. Mon., 58, 8.1–8.13, https://doi.org/10.1175/amsmonographs-d-16-0008.1, 2017. DeMott, P., Cziczo, D., Prenni, A., Murphy, D., Kreidenweis, S., Thomson, D., Borys, R., and Rogers, D.: Measurements of the concentration and composition of nuclei for cirrus formation, P. Natl. Acad. Sci. USA, 100, 14655–14660, https://doi.org/10.1073/pnas.2532677100, 2003. DeMott, P. J., Hill, T. C. J., McCluskey, C. S., Prather, K. A., Collins, D. B., Sullivan, R. C., Ruppel, M. J., Mason, R. H., Irish, V. E., Lee, T., Hwang, C. Y., Rhee, T. S., Snider, J. R., McMeeking, G. R., Dhaniyala, S., Lewis, E. R., Wentzell, J. J. B., Abbatt, J., Lee, C., Sultana, C. M., Ault, A. P., Axson, J. L., Diaz Martinez, M., Venero, I., Santos-Figueroa, G., Stokes, M. D., Deane, G. B., Mayol-Bracero, O. L., Grassian, V. H., Bertram, T. H., Bertram, A. K., Moffett, B. F., and Franc, G. D.: Sea spray aerosol as a unique source of ice nucleating particles, P. Natl. Acad. Sci. USA, 113, 5797–5803, https://doi.org/10.1073/pnas.1514034112, 2016. Eastwood, M. L., Cremel, S., Gehrke, C., Girard, E., and Bertram, A. 
K.: Ice nucleation on mineral dust particles: Onset conditions, nucleation rates and contact angles, J. Geophys. Res.-Atmos., 113, D22203, https://doi.org/10.1029/2008JD010639, 2008. Ebert, M., Worringen, A., Benker, N., Mertes, S., Weingartner, E., and Weinbruch, S.: Chemical composition and mixing-state of ice residuals sampled within mixed phase clouds, Atmos. Chem. Phys., 11, 2805–2816, https://doi.org/10.5194/acp-11-2805-2011, 2011. Ebert, M., Weigel, R., Kandler, K., Günther, G., Molleker, S., Grooß, J.-U., Vogel, B., Weinbruch, S., and Borrmann, S.: Chemical analysis of refractory stratospheric aerosol particles collected within the arctic vortex and inside polar stratospheric clouds, Atmos. Chem. Phys., 16, 8405–8421, https://doi.org/10.5194/acp-16-8405-2016, 2016. Flato, G., Marotzke, J., Abiodun, B., Braconnot, P., Chou, S. C., Collins, W. J., Cox, P., Driouech, F., Emori, S., Eyring, V., Forest, C., Gleckler, P., Guilyardi, E., Jakob, C., Kattsov, V., Reason, C., and Rummukainen, M.: Evaluation of Climate Models, in: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, Assessment Reports of IPCC, edited by: Stocker, T. F., Qin, D., Plattner, G.-K., Tignor, M., Allen, S. K., Boschung, J., Nauels, A., Xia, Y., Bex, V., and Midgley, P. M., Cambridge University Press, Cambridge, UK and New York, NY, USA, 741–866, available at: https://www.ipcc.ch/pdf/assessment-report/ar5/wg1/WG1AR5_Chapter09_FINAL.pdf (last access: 13 March 2017), 2013. Freedman, M. A.: Potential Sites for Ice Nucleation on Aluminosilicate Clay Minerals and Related Materials, J. Phys. Chem. Lett., 6, 3850–3858, https://doi.org/10.1021/acs.jpclett.5b01326, 2015. Fröhlich, R., Cubison, M. J., Slowik, J. G., Bukowiecki, N., Canonaco, F., Croteau, P. L., Gysel, M., Henne, S., Herrmann, E., Jayne, J. T., Steinbacher, M., Worsnop, D. R., Baltensperger, U., and Prévôt, A. S. H.: Fourteen months of on-line measurements of the non-refractory submicron aerosol at the Jungfraujoch (3580 m a.s.l.) – chemical composition, origins and organic aerosol sources, Atmos. Chem. Phys., 15, 11373–11398, https://doi.org/10.5194/acp-15-11373-2015, 2015. Froyd, K. D., Murphy, D. M., Lawson, P., Baumgardner, D., and Herman, R. L.: Aerosols that form subvisible cirrus at the tropical tropopause, Atmos. Chem. Phys., 10, 209–218, https://doi.org/10.5194/acp-10-209-2010, 2010. Hallett, J. and Mossop, S. C.: Production of secondary ice particles during the riming process, Nature, 249, 26–28, https://doi.org/10.1038/249026a0, 1974. Heintzenberg, J., Okada, K., and Ström, J.: On the composition of non-volatile material in upper tropospheric aerosols and cirrus crystals, Atmos. Res., 41, 81–88, https://doi.org/10.1016/0169-8095(95)00042-9, 1996. Herrmann, E., Weingartner, E., Henne, S., Vuilleumier, L., Bukowiecki, N., Steinbacher, M., Conen, F., Collaud Coen, M., Hammer, E., and Jurányi, Z.: Analysis of long-term aerosol size distribution data from Jungfraujoch with emphasis on free tropospheric conditions, cloud influence, and air mass transport, J. Geophys. Res.-Atmos., 120, 9459–9480, https://doi.org/10.1002/2015JD023660, 2015. Hiranuma, N., Augustin-Bauditz, S., Bingemer, H., Budke, C., Curtius, J., Danielczok, A., Diehl, K., Dreischmeier, K., Ebert, M., Frank, F., Hoffmann, N., Kandler, K., Kiselev, A., Koop, T., Leisner, T., Möhler, O., Nillius, B., Peckhaus, A., Rose, D., Weinbruch, S., Wex, H., Boose, Y., DeMott, P. J., Hader, J.
D., Hill, T. C. J., Kanji, Z. A., Kulkarni, G., Levin, E. J. T., McCluskey, C. S., Murakami, M., Murray, B. J., Niedermeier, D., Petters, M. D., O'Sullivan, D., Saito, A., Schill, G. P., Tajiri, T., Tolbert, M. A., Welti, A., Whale, T. F., Wright, T. P., and Yamashita, K.: A comprehensive laboratory study on the immersion freezing behavior of illite NX particles: a comparison of 17 ice nucleation measurement techniques, Atmos. Chem. Phys., 15, 2489–2518, https://doi.org/10.5194/acp-15-2489-2015, 2015. Hoose, C., Lohmann, U., Erdin, R., and Tegen, I.: The global influence of dust mineralogical composition on heterogeneous ice nucleation in mixed-phase clouds, Environ. Res. Lett., 3, 025003, https://doi.org/10.1088/1748-9326/3/2/025003, 2008. Hoose, C., Kristjánsson, J. E., and Burrows, S. M.: How important is biological ice nucleation in clouds on a global scale?, Environ. Res. Lett., 5, 024009, https://doi.org/10.1088/1748-9326/5/2/024009, 2010. Hoose, C. and Möhler, O.: Heterogeneous ice nucleation on atmospheric aerosols: a review of results from laboratory experiments, Atmos. Chem. Phys., 12, 9817–9854, https://doi.org/10.5194/acp-12-9817-2012, 2012. Iwata, A. and Matsuki, A.: Characterization of individual ice residual particles by the single droplet freezing method: a case study in the Asian dust outflow region, Atmos. Chem. Phys., 18, 1785–1804, https://doi.org/10.5194/acp-18-1785-2018, 2018. Kamphus, M., Ettner-Mahl, M., Klimach, T., Drewnick, F., Keller, L., Cziczo, D. J., Mertes, S., Borrmann, S., and Curtius, J.: Chemical composition of ambient aerosol, ice residues and cloud droplet residues in mixed-phase clouds: single particle analysis during the Cloud and Aerosol Characterization Experiment (CLACE 6), Atmos. Chem. Phys., 10, 8077–8095, https://doi.org/10.5194/acp-10-8077-2010, 2010. Kanji, Z. A., Ladino, L. A., Wex, H., Boose, Y., Burkert-Kohn, M., Cziczo, D. J., and Krämer, M.: Overview of Ice Nucleating Particles, Meteor. Mon., 58, 1.1–1.33, https://doi.org/10.1175/AMSMONOGRAPHS-D-16-0006.1, 2017. Knopf, D. A., Alpert, P. A., and Wang, B.: The Role of Organic Aerosol in Atmospheric Ice Nucleation: A Review, ACS Earth Space Chem., 2, 168–202, https://doi.org/10.1021/acsearthspacechem.7b00120, 2018. Korolev, A., McFarquhar, G., Field, P. R., Franklin, C., Lawson, P., Wang, Z., Williams, E., Abel, S. J., Axisa, D., Borrmann, S., Crosier, J., Fugal, J., Krämer, M., Lohmann, U., Schlenczek, O., Schnaiter, M., and Wendisch, M.: Mixed-Phase Clouds: Progress and Challenges, Meteor. Mon., 58, 5.1–5.50, https://doi.org/10.1175/amsmonographs-d-17-0001.1, 2017. Ladino, L. A., Yakobi-Hancock, J. D., Kilthau, W. P., Mason, R. H., Si, M., Li, J., Miller, L. A., Schiller, C. L., Huffman, J. A., Aller, J. Y., Knopf, D. A., Bertram, A. K., and Abbatt, J. P. D.: Addressing the ice nucleating abilities of marine aerosol: A combination of deposition mode laboratory and field measurements, Atmos. Environ., 132, 1–10, https://doi.org/10.1016/j.atmosenv.2016.02.028, 2016. Lohmann, U. and Diehl, K.: Sensitivity Studies of the Importance of Dust Ice Nuclei for the Indirect Aerosol Effect on Stratiform Mixed-Phase Clouds, J. Atmos. Sci., 63, 968–982, https://doi.org/10.1175/jas3662.1, 2006. Matsuki, A., Schwarzenboeck, A., Venzac, H., Laj, P., Crumeyrolle, S., and Gomes, L.: Cloud processing of mineral dust: direct comparison of cloud residual and clear sky particles during AMMA aircraft campaign in summer 2006, Atmos. Chem. Phys., 10, 1057–1069, https://doi.org/10.5194/acp-10-1057-2010, 2010. 
Mertes, S., Verheggen, B., Walter, S., Connolly, P., Ebert, M., Schneider, J., Bower, K. N., Cozic, J., Weinbruch, S., Baltensperger, U., and Weingartner, E.: Counterflow Virtual Impactor Based Collection of Small Ice Particles in Mixed-Phase Clouds for the Physico-Chemical Characterization of Tropospheric Ice Nuclei: Sampler Description and First Case Study, Aerosol Sci. Tech., 41, 848–864, https://doi.org/10.1080/02786820701501881, 2007. Murray, B. J., Broadley, S. L., Wilson, T. W., Atkinson, J. D., and Wills, R. H.: Heterogeneous freezing of water droplets containing kaolinite particles, Atmos. Chem. Phys., 11, 4191–4207, https://doi.org/10.5194/acp-11-4191-2011, 2011. Myhre, G., Shindell, D., Bréon, F.-M., Collins, W., Fuglestvedt, J., Huang, J., Koch, D., Lamarque, J.-F., Lee, D., Mendoza, B., Nakajima, T., Robock, A., Stephens, G., Takemura, T., and Zhang, H.: Anthropogenic and Natural Radiative Forcing, in: Climate Change 2013: The Physical Science Basis, Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, edited by: Stocker, T. F., Qin, D., Plattner, G.-K., Tignor, M., Allen, S. K., Boschung, J., Nauels, A., Xia, Y., Bex, V., and Midgley, P. M., Cambridge University Press, Cambridge, UK and New York, NY, USA, 2013. Niehaus, J. and Cantrell, W.: Contact Freezing of Water by Salts, J. Phys. Chem. Lett., 6, 3490–3495, https://doi.org/10.1021/acs.jpclett.5b01531, 2015. Ogren, J. A., Heintzenberg, J., and Charlson, R. J.: In-situ sampling of clouds with a droplet to aerosol converter, Geophys. Res. Lett., 12, 121–124, https://doi.org/10.1029/GL012i003p00121, 1985. RStudio Team: RStudio: Integrated Development for R, RStudio Inc., Boston, MA, available at: http://www.rstudio.com/ (last access: 9 March 2018), 2016. Schmidt, S., Schneider, J., Klimach, T., Mertes, S., Schenk, L. P., Kupiszewski, P., Curtius, J., and Borrmann, S.: Online single particle analysis of ice particle residuals from mountain-top mixed-phase clouds using laboratory derived particle type assignment, Atmos. Chem. Phys., 17, 575–594, https://doi.org/10.5194/acp-17-575-2017, 2017. Schütze, K., Wilson, J. C., Weinbruch, S., Benker, N., Ebert, M., Günther, G., Weigel, R., and Borrmann, S.: Sub-micrometer refractory carbonaceous particles in the polar stratosphere, Atmos. Chem. Phys., 17, 12475–12493, https://doi.org/10.5194/acp-17-12475-2017, 2017. Seibert, P. and Frank, A.: Source-receptor matrix calculation with a Lagrangian particle dispersion model in backward mode, Atmos. Chem. Phys., 4, 51–63, https://doi.org/10.5194/acp-4-51-2004, 2004. Stohl, A., Hittenberger, M., and Wotawa, G.: Validation of the lagrangian particle dispersion model FLEXPART against large-scale tracer experiment data, Atmos. Environ., 32, 4245–4264, https://doi.org/10.1016/S1352-2310(98)00184-8, 1998. Stohl, A. and Thomson, D. J.: A Density Correction for Lagrangian Particle Dispersion Models, Bound.-Lay. Meteorol., 90, 155–167, https://doi.org/10.1023/a:1001741110696, 1999. Stohl, A., Forster, C., Frank, A., Seibert, P., and Wotawa, G.: Technical note: The Lagrangian particle dispersion model FLEXPART version 6.2, Atmos. Chem. Phys., 5, 2461–2474, https://doi.org/10.5194/acp-5-2461-2005, 2005. Storelvmo, T.: Aerosol Effects on Climate via Mixed-Phase and Ice Clouds, Annu. Rev.
Earth Planet. Sci., 45, 199–222, https://doi.org/10.1146/annurev-earth-060115-012240, 2017. Ström, J. and Ohlsson, S.: Real-time measurement of absorbing material in contrail ice using a counterflow virtual impactor, J. Geophys. Res.-Atmos., 103, 8737–8741, https://doi.org/10.1029/98JD00425, 1998. Targino, A. C., Krejci, R., Noone, K. J., and Glantz, P.: Single particle analysis of ice crystal residuals observed in orographic wave clouds over Scandinavia during INTACC experiment, Atmos. Chem. Phys., 6, 1977–1990, https://doi.org/10.5194/acp-6-1977-2006, 2006. Twohy, C. H., Strapp, J. W., and Wendisch, M.: Performance of a Counterflow Virtual Impactor in the NASA Icing Research Tunnel, J. Atmos. Ocean. Tech., 20, 781–790, https://doi.org/10.1175/1520-0426(2003)020<0781:poacvi>2.0.co;2, 2003. Twohy, C. H. and Poellot, M. R.: Chemical characteristics of ice residual nuclei in anvil cirrus clouds: evidence for homogeneous and heterogeneous ice formation, Atmos. Chem. Phys., 5, 2289–2297, https://doi.org/10.5194/acp-5-2289-2005, 2005. Vali, G., DeMott, P. J., Möhler, O., and Whale, T. F.: Technical Note: A proposal for ice nucleation terminology, Atmos. Chem. Phys., 15, 10263–10270, https://doi.org/10.5194/acp-15-10263-2015, 2015. Van den Boogaart, K. G. and Tolosana-Delgado, R.: Analyzing compositional data with R, Springer, Berlin, 2013. Vester, B. P., Ebert, M., Barnert, E. B., Schneider, J., Kandler, K., Schütz, L., and Weinbruch, S.: Composition and mixing state of the urban background aerosol in the Rhein-Main area (Germany), Atmos. Environ., 41, 6102–6115, https://doi.org/10.1016/j.atmosenv.2007.04.021, 2007. Weinbruch, S., Benker, N., Kandler, K., Schütze, K., Kling, K., Berlinger, B., Thomassen, Y., Drotikova, T., and Kallenborn, R.: Source identification of individual soot agglomerates in Arctic air by transmission electron microscopy, Atmos. Environ., 172, 47–54, https://doi.org/10.1016/j.atmosenv.2017.10.033, 2018. Weingartner, E., Nyeki, S., and Baltensperger, U.: Seasonal and diurnal variation of aerosol size distributions (10<D<750 nm) at a high-alpine site (Jungfraujoch 3580 m a.s.l.), J. Geophys. Res.-Atmos., 104, 26809–26820, https://doi.org/10.1029/1999JD900170, 1999. Wentzel, M., Gorzawski, H., Naumann, K. H., Saathoff, H., and Weinbruch, S.: Transmission electron microscopical and aerosol dynamical characterization of soot aerosols, J. Aerosol Sci., 34, 1347–1370, https://doi.org/10.1016/S0021-8502(03)00360-4, 2003. Wex, H., DeMott, P. J., Tobo, Y., Hartmann, S., Rösch, M., Clauss, T., Tomsche, L., Niedermeier, D., and Stratmann, F.: Kaolinite particles as ice nuclei: learning from the use of different kaolinite samples and different coatings, Atmos. Chem. Phys., 14, 5529–5546, https://doi.org/10.5194/acp-14-5529-2014, 2014. Wickham, H.: ggplot2: Elegant Graphics for Data Analysis, Springer-Verlag New York, New York, 2009. Wilson, T. W., Ladino, L. A., Alpert, P. A., Breckels, M. N., Brooks, I. M., Burrows, S. M., Carslaw, K. S., Huffman, J. A., Judd, C., and Kilthau, W. P.: A marine biogenic source of atmospheric ice-nucleating particles, Nature, 525, 234–238, https://doi.org/10.1038/nature14986, 2015. Wise, M. E., Baustian, K. J., Koop, T., Freedman, M. A., Jensen, E. J., and Tolbert, M. A.: Depositional ice nucleation onto crystalline hydrated NaCl particles: a new mechanism for ice formation in the troposphere, Atmos. Chem. Phys., 12, 1121–1134, https://doi.org/10.5194/acp-12-1121-2012, 2012.
Worringen, A., Kandler, K., Benker, N., Dirsch, T., Mertes, S., Schenk, L., Kästner, U., Frank, F., Nillius, B., Bundke, U., Rose, D., Curtius, J., Kupiszewski, P., Weingartner, E., Vochezer, P., Schneider, J., Schmidt, S., Weinbruch, S., and Ebert, M.: Single-particle characterization of ice-nucleating particles and ice particle residuals sampled by three different techniques, Atmos. Chem. Phys., 15, 4161–4178, https://doi.org/10.5194/acp-15-4161-2015, 2015. Yakobi-Hancock, J. D., Ladino, L. A., and Abbatt, J. P. D.: Feldspar minerals as efficient deposition ice nuclei, Atmos. Chem. Phys., 13, 11175–11185, https://doi.org/10.5194/acp-13-11175-2013, 2013. Zimmermann, F., Ebert, M., Worringen, A., Schütz, L., and Weinbruch, S.: Environmental scanning electron microscopy (ESEM) as a new technique to determine the ice nucleation capability of individual atmospheric aerosol particles, Atmos. Environ., 41, 8219–8227, https://doi.org/10.1016/j.atmosenv.2007.06.023, 2007. Zimmermann, F., Weinbruch, S., Schütz, L., Hofmann, H., Ebert, M., Kandler, K., and Worringen, A.: Ice nucleation properties of the most abundant mineral dust phases, J. Geophys. Res.-Atmos., 113, D23204, https://doi.org/10.1029/2008JD010655, 2008.
{}
# Isotope Notation

Isotope notation uses a symbol to convey information about an isotope of a particular element, for example $^{23}_{11}\text{Na}$.

The mass number (the superscript, 23) is the number of protons plus the number of neutrons. This number tells us the kind of isotope, and it is NOT on the periodic table.

The atomic number (the subscript, 11) is the number of protons (p+) in the atom, AND the number of electrons (e−) IF the atom is electrically neutral (i.e., not an ion). This number is on the periodic table.

For the neutral isotope $^{23}_{11}\text{Na}$: 11 p+ and 11 e− give a net charge of 0. For the ion $^{23}_{11}\text{Na}^{+}$: 11 p+ and 10 e− give a net charge of 1+.

To calculate the number of neutrons for this isotope: 23 (neutrons + protons) − 11 protons = 12 neutrons.

Atomic mass is the weighted average mass of all the isotopes of one element, in atomic mass units (amu); in other words, it is the mass of one average atom. Two measurements are needed in order to calculate this average: the percent abundance (% abundance, i.e., how often the isotope occurs in nature) and the mass of one atom of each isotope. Atomic mass is also known as the weighted average atomic mass.

Here is the formula you need to know in order to calculate the average atomic mass:

Average atomic mass = [(mass of isotope 1) × (% abundance of isotope 1)] + [(mass of isotope 2) × (% abundance of isotope 2)] + … and so on for the 3rd and remaining isotopes.

The average atomic mass is written on the periodic table.

Mass number vs. atomic mass: the mass number = # protons + # neutrons and is specific to one isotope; the atomic mass is the weighted average mass of all the isotopes of one element in atomic mass units (amu), and it is the number found on the periodic table.
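As a quick illustration of the average atomic mass formula, here is a minimal sketch in Python using chlorine's two stable isotopes (the rounded masses and abundances are standard reference values, used here purely for illustration):

```python
# Weighted average atomic mass: sum of (isotope mass x fractional abundance).
# Chlorine's two stable isotopes, with rounded illustrative values.
isotopes = [
    (34.969, 0.7576),  # Cl-35: mass in amu, fractional abundance
    (36.966, 0.2424),  # Cl-37
]

average_mass = sum(mass * abundance for mass, abundance in isotopes)
print(f"Average atomic mass of Cl: {average_mass:.3f} amu")  # ~35.45 amu, as on the periodic table
```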
# Can the Darboux theorem be strengthened?

I recently learned the following formulation of the Darboux theorem in a class.

Theorem: Suppose $\omega_t$ is a smoothly varying family of symplectic forms on a closed manifold $M$ such that the cohomology class of $\omega_t$ is independent of $t$. Then there is a smoothly varying family of diffeomorphisms $F_t$ of $M$ such that $F_0$ is the identity and $F_t^* \omega_t = \omega_0$.

The classical formulation of the Darboux theorem is obtained as a corollary in the following way. The condition that a closed 2-form is nondegenerate is an open condition, so given a symplectic form $\omega_0$ on $M$ there is a neighborhood in the space of representatives of the cohomology class of $\omega_0$ which consists only of nondegenerate forms. This neighborhood can be taken to be path connected, so the theorem guarantees that for any symplectic form $\omega_1$ which is sufficiently close to $\omega_0$ and which belongs to the same cohomology class, there is automatically a diffeomorphism which pulls back $\omega_1$ to $\omega_0$.

My question: is the assumption that $\omega_1$ is sufficiently close to $\omega_0$ really necessary? In the proof that I learned, we really do need a path $\omega_t$ of nondegenerate forms, because the idea is to use the Poincaré lemma to write $\omega_t = \omega_0 + d \beta_t$ and then obtain a time-dependent vector field $X_t$ satisfying $\iota_{X_t}\omega_t = -\frac{d}{dt}\beta_t$. To say that $X_t$ exists we need $\omega_t$ to be nondegenerate for all time, and the diffeomorphisms $F_t$ are obtained from the flow of $X_t$.

I guess the real problem here is that I don't know all that many interesting examples of symplectic manifolds to begin with, and even on those examples that I know I can never produce more than one symplectic structure. Can anyone help?

• Could you make this question more precise? When you ask if the assumption is necessary, necessary for what? – Michael Bächtold Oct 16 '10 at 12:18

I don't fully understand your question, in particular since I don't know which formulation of Darboux's theorem you are concerned with. The version I would describe as the "classical" one is that each point of a symplectic manifold admits a neighbourhood diffeomorphic to a standard symplectic ball in $\mathbb{R}^{2n}$. In the proof I know, you need to shrink the size of your neighbourhood twice: the first time, in order to have that $\omega_1$ and $\omega_0$ are connected through symplectic forms, and the second time, to guarantee that the flow $F_t$ exists to time $t=1$. Whatever ways you may have of sidestepping one or both of these, you definitely need to shrink the size of the neighbourhood somehow.

Indeed, the answer to the more global question (how big a neighbourhood can you embed?) is a very subtle one, related to symplectic capacities and to symplectic packing problems. For instance, Gromov's non-squeezing theorem shows that in the case of the symplectic cylinder $D^2 \times \mathbb{R}^2$, you can't get a symplectic ball any bigger than the one of radius 1. You can, however, easily smoothly embed a ball of much bigger radius (even in a volume-preserving way). When you pull back the symplectic form to this ball, you will not be able to deform it to the standard one, except in a neighbourhood of a given point.
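For completeness, here is the short Moser-trick computation behind the proof sketched in the question; this is an added note, not part of the original thread. With $\omega_t = \omega_0 + d\beta_t$ closed and $X_t$ chosen so that $\iota_{X_t}\omega_t = -\frac{d}{dt}\beta_t$, the flow $F_t$ of $X_t$ satisfies

$$\frac{d}{dt} F_t^* \omega_t = F_t^*\left( \mathcal{L}_{X_t}\omega_t + \frac{\partial \omega_t}{\partial t} \right) = F_t^*\left( d\,\iota_{X_t}\omega_t + d\,\frac{d\beta_t}{dt} \right) = F_t^*\, d\left( \iota_{X_t}\omega_t + \frac{d\beta_t}{dt} \right) = 0,$$

using Cartan's formula $\mathcal{L}_{X}\omega = d\,\iota_X \omega + \iota_X \, d\omega$ together with $d\omega_t = 0$; hence $F_t^*\omega_t = F_0^*\omega_0 = \omega_0$ for all $t$.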
### Theory:

Algebra plays a major role in real-life situations like finding a person's age, the cost of items, etc. Now, we shall learn a real-life application of linear equations in three variables.

The position of an object such as a ship, an airplane, or any other object on Earth can be located using latitude, longitude and altitude. To find these $3$ unknowns, $3$ satellites are positioned to obtain three equations. Of these $3$ equations, $2$ are linear equations and $1$ is a quadratic equation. Thus, to find the position of an object, we can solve for the variables latitude, longitude and altitude and find the values, respectively. This concept is the basis of the Global Positioning System (GPS).
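As a toy illustration of solving three equations in three unknowns (the linear case only; real GPS positioning also involves the quadratic range equation and clock corrections), here is a minimal sketch where the coefficients are made up purely for illustration:

```python
import numpy as np

# Hypothetical system of three linear equations in three unknowns
# (latitude, longitude, altitude), written in matrix form as A @ x = b.
# The numbers below are illustrative, not real satellite data.
A = np.array([
    [1.0, 2.0, 1.0],
    [2.0, -1.0, 3.0],
    [1.0, 1.0, -1.0],
])
b = np.array([10.0, 5.0, 2.0])

lat, lon, alt = np.linalg.solve(A, b)
print(f"latitude={lat:.3f}, longitude={lon:.3f}, altitude={alt:.3f}")
```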
# Chimney

The world's tallest chimney, of GRES-2 in Ekibastuz, Kazakhstan (419.7 metres). A chimney remaining after the destruction of a 19th-century two-story house (Mount Solon, Virginia).

A chimney is a structure which provides ventilation for hot flue gases or smoke from a boiler, stove, furnace or fireplace to the outside atmosphere. Chimneys are typically vertical, or as near as possible to vertical, to ensure that the gases flow smoothly, drawing air into the combustion in what is known as the stack, or chimney, effect. The space inside a chimney is called a flue. Chimneys may be found in buildings, steam locomotives and ships. In the United States, the term smokestack (colloquially, stack) is also used when referring to locomotive chimneys or ship chimneys, and the term funnel can also be used.[1][2]

The height of a chimney influences its ability to transfer flue gases to the external environment via stack effect. Additionally, the dispersion of pollutants at higher altitudes can reduce their impact on the immediate surroundings. In the case of chemically aggressive output, a sufficiently tall chimney can allow for partial or complete self-neutralization of airborne chemicals before they reach ground level. The dispersion of pollutants over a greater area can reduce their concentrations and facilitate compliance with regulatory limits.

## History

A smoke hood in the Netherlands. Image: Cultural Heritage Agency of the Netherlands. Chimney pots in London, England, seen from the tower of Westminster Roman Catholic cathedral. Seagull sits on top of a hot gas cooling chimney at The World of Glass, St. Helens, UK.

Romans used tubes inside the walls to draw smoke out of bakeries, but chimneys only appeared in large dwellings in northern Europe in the 12th century. The earliest extant example of an English chimney is at the keep of Conisbrough Castle in Yorkshire, which dates from 1185 AD.[3] They did not become common in houses until the 16th and 17th centuries.[4] Smoke hoods were an early method of collecting the smoke into a chimney (see image). Another step in the development of chimneys was the use of built-in ovens which allowed the household to bake at home. Industrial chimneys became common in the late 18th century.

Chimneys in ordinary dwellings were first built of wood and plaster or mud. Since then chimneys have traditionally been built of brick or stone, both in small and large buildings. Early chimneys were of a simple brick construction. Later chimneys were constructed by placing the bricks around tile liners, a system invented by Malik. To control downdrafts, venting caps (often called chimney pots) with a variety of designs are sometimes placed on the top of chimneys.

In the 18th and 19th centuries, the methods used to extract lead from its ore produced large amounts of toxic fumes. In the north of England, long near-horizontal chimneys were built, often more than 3 km (2 mi) long, which typically terminated in a short vertical chimney in a remote location where the fumes would cause less harm.
Lead and silver deposits formed on the inside of these long chimneys, and periodically workers would be sent along the chimneys to scrape off these valuable deposits.[5] ## Construction A section of a large late Georgian four storey house, showing the advantage of using a mechanical sweeper over climbing boys As a result of the limited ability to handle transverse loads with brick, chimneys in houses were often built in a "stack", with a fireplace on each floor of the house sharing a single chimney, often with such a stack at the front and back of the house. Today's central heating systems have made chimney placement less critical, and the use of non-structural gas vent pipe allows a flue gas conduit to be installed around obstructions and through walls. In fact, most modern high-efficiency heating appliances do not require a chimney. Such appliances are generally installed near an external wall, and a noncombustible wall thimble allows a vent pipe run directly through the external wall. On a pitched roof where a chimney penetrates a roof, flashing is used to seal the joints. The down-slope piece is called an apron, the sides receive step flashing and a cricket is used to divert water around the upper side of the chimney underneath the flashing.[6] Carved brick chimneys characteristic of late Gothic Tudor buildings, at Thornbury Castle, 1514 Industrial chimneys are commonly referred to as flue gas stacks and are generally external structures, as opposed to those built into the wall of a building. They are generally located adjacent to a steam-generating boiler or industrial furnace and the gases are carried to them with ductwork. Today the use of reinforced concrete has almost entirely replaced brick as a structural component in the construction of industrial chimneys. Refractory bricks are often used as a lining, particularly if the type of fuel being burned generates flue gases containing acids. Modern industrial chimneys sometimes consist of a concrete windshield with a number of flues on the inside. The 300 metre chimney at Sasol Three consists of a 26 metre diameter windshield with four 4.6 metre diameter concrete flues which are lined with refractory bricks built on rings of corbels spaced at 10 metre intervals. The reinforced concrete can be cast by conventional formwork or sliding formwork. The height is to ensure the pollutants are dispersed over a wider area to meet legal or other safety requirements. ## Residential flue liners A chimney with two clay-tile flue liners A flue liner is a secondary barrier in a chimney that protects the masonry from the acidic products of combustion, helps prevent flue gas from entering the house, and reduces the size of an over-sized flue. Newly built chimneys have been required by building codes to have a flue liner in many locations since the 1950s. Chimneys built without a liner can usually have a liner added, but the type of liner needs to match the type of appliance it is servicing. Flue liners may be clay tile, metal, concrete tiles, or poured in place concrete. Clay tile flue liners are very common in the United States. However, this is the only liner which does not meet Underwriters Laboratories 1777 approval and frequently have problems such as cracked tiles and improper installation.[7] Clay tiles are usually about 3 feet (0.91 m) long, various sizes and shapes, and are installed in new construction as the chimney is built. A refractory cement is used between each tile. 
Metal liners may be stainless steel, aluminum, or galvanized iron and may be flexible or rigid pipes. Stainless steel is made in several types and thicknesses. Type 304 is used with firewood, wood pellet fuel, and non-condensing oil appliances; types 316 and 321 with coal; and type AL 29-4C is used with non-condensing gas appliances. Stainless steel liners must have a cap and be insulated if they service solid-fuel appliances, and the manufacturer's instructions must be followed carefully.[7] Aluminum and galvanized steel chimneys are known as class A and class B chimneys. Class A are either an insulated, double-wall stainless steel pipe or a triple-wall, air-insulated pipe, often known by its genericized trade name Metalbestos. Class B are uninsulated double-wall pipes, often called B-vent, and are only used to vent non-condensing gas appliances. These may have an aluminum inside layer and a galvanized steel outside layer. Condensing boilers do not need a chimney.

Concrete flue liners are like clay liners but are made of a refractory cement and are more durable than the clay liners. Poured-in-place concrete liners are made by pouring special concrete into the existing chimney with a form. These liners are highly durable, work with any heating appliance, and can reinforce a weak chimney, but they are irreversible.

## Chimney pots, caps and tops

Rows of chimney pots in an English town, 1974.

A chimney pot is placed on top of the chimney to expand the length of the chimney inexpensively and to improve the chimney's draft. A chimney with more than one pot on it indicates that there is more than one fireplace on different floors sharing the chimney.

A chimney cowl is placed on top of the chimney to prevent birds and squirrels from nesting in the chimney. They often feature a rain guard to prevent rain or snow from going down the chimney. A metal wire mesh is often used as a spark arrestor to keep burning debris from rising out of the chimney and landing on the roof. Although the masonry inside the chimney can absorb a large amount of moisture which later evaporates, rainwater can collect at the base of the chimney. Sometimes weep holes are placed at the bottom of the chimney to drain out collected water.

Spanish Conquistador style wind-directional cowl found on many homes along the windy Oregon coast.

A chimney cowl or wind-directional cap is a helmet-shaped chimney cap that rotates to align with the wind and prevent a backdraft of smoke and wind back down the chimney.

An H-style cap (cowl) is a chimney top constructed from chimney pipes shaped like the letter H. It is an age-old method to regulate draft in situations where prevailing winds or turbulence cause downdraft and backpuffing. Although the H cap has a distinct advantage over most other downdraft caps, it fell out of favor because of its bulky design. It is found mostly in marine use but has been regaining popularity due to its energy-saving functionality. The H-cap stabilizes the draft rather than increasing it. Other downdraft caps are based on the Venturi effect, solving downdraft problems by constantly increasing the updraft, resulting in much higher fuel consumption.

A chimney damper is a metal plate that can be positioned to close off the chimney when not in use and prevent outside air from entering the interior space, and can be opened to permit hot gases to exhaust when a fire is burning.
A top damper or cap damper is a metal spring door placed at the top of the chimney with a long metal chain that allows one to open and close the damper from the fireplace. A throat damper is a metal plate at the base of the chimney, just above the firebox, that can be opened and closed by a lever, gear, or chain to seal off the fireplace from the chimney. The advantage of a top damper is the tight weather-proof seal that it provides when closed, which prevents cold outside air from flowing down the chimney and into the living space — a feature that can rarely be matched by the metal-on-metal seal afforded by a throat damper. Additionally, because the throat damper is subjected to intense heat from the fire directly below, it is common for the metal to become warped over time, thus further degrading the ability of the throat damper to seal. However, the advantage of a throat damper is that it seals off the living space from the air mass in the chimney, which, especially for chimneys positioned on an outside wall of the home, is generally very cold. It is possible in practice to use both a top damper and a throat damper to obtain the benefits of both. The two top damper designs currently on the market are the Lyemance (pivoting door) and the Lock Top (translating door).

In the late Middle Ages in Western Europe, the design of crow-stepped gables arose to allow maintenance access to the chimney top, especially for tall structures such as castles and great manor houses.

## Chimney draught or draft

The stack effect in chimneys: the gauges represent absolute air pressure and the airflow is indicated with light grey arrows. The gauge dials move clockwise with increasing pressure. (See the Flue gas stack article for more details.)

When coal, oil, natural gas, wood or any other fuel is combusted in a stove, oven, fireplace, hot water boiler or industrial furnace, the hot combustion product gases that are formed are called flue gases. Those gases are generally exhausted to the ambient outside air through chimneys or industrial flue gas stacks (sometimes referred to as smokestacks).

The combustion flue gases inside the chimneys or stacks are much hotter than the ambient outside air and therefore less dense than the ambient air. That causes the bottom of the vertical column of hot flue gas to have a lower pressure than the pressure at the bottom of a corresponding column of outside air. That higher pressure outside the chimney is the driving force that moves the required combustion air into the combustion zone and also moves the flue gas up and out of the chimney. That movement or flow of combustion air and flue gas is called "natural draught/draft", "natural ventilation", "chimney effect", or "stack effect". The taller the stack, the more draught or draft is created.

There can be cases of diminishing returns: if a stack is overly tall in relation to the heat being sent out of the stack, the flue gases may cool before reaching the top of the chimney. This condition can result in poor drafting, and in the case of wood-burning appliances, the cooling of the gases before emission can cause creosote to condense near the top of the chimney. The creosote can restrict the exit of flue gases and may pose a fire hazard.

Designing chimneys and stacks to provide the correct amount of natural draught or draft involves a number of design factors, many of which require iterative trial-and-error methods.
As a "first guess" approximation, the following equation can be used to estimate the natural draught/draft flow rate by assuming that the molecular mass (i.e., molecular weight) of the flue gas and the external air are equal and that the frictional pressure and heat losses are negligible:

$Q = C\; A\; \sqrt{2\;g\;H\;\frac{T_i - T_e}{T_e}}$

where:
• $Q$ = chimney draught/draft flow rate, m³/s
• $A$ = cross-sectional area of chimney, m² (assuming it has a constant cross-section)
• $C$ = discharge coefficient (usually taken to be from 0.65 to 0.70)
• $g$ = gravitational acceleration, 9.807 m/s²
• $H$ = height of chimney, m
• $T_i$ = average temperature inside the chimney, K
• $T_e$ = external air temperature, K

Combining two flows into a chimney: $A_t + A_f < A$, where $A_t$ = 7.1 in² is the minimum required flow area from the water heater tank and $A_f$ = 19.6 in² is the minimum flow area from a furnace of a central heating system.
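To make the formula concrete, here is a minimal sketch that evaluates it for an example chimney; the input values are arbitrary illustrations, not taken from the article:

```python
import math

def draught_flow_rate(area_m2, height_m, t_inside_k, t_outside_k, c=0.67, g=9.807):
    """Approximate natural draught flow rate Q (m^3/s) for a chimney,
    using Q = C * A * sqrt(2 * g * H * (Ti - Te) / Te)."""
    return c * area_m2 * math.sqrt(2 * g * height_m * (t_inside_k - t_outside_k) / t_outside_k)

# Illustrative values: 0.25 m^2 flue, 10 m tall, 400 K flue gas, 290 K outside air.
q = draught_flow_rate(area_m2=0.25, height_m=10.0, t_inside_k=400.0, t_outside_k=290.0)
print(f"Q = {q:.2f} m^3/s")  # roughly 1.4 m^3/s for these inputs
```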
• shifting foundations, which may degrade integrity of chimney masonry • nesting or infestation by unwanted animals such as squirrels, racoons, or chimney swifts • chimney leaks • drafting issues, which may allow smoke inside building[10] • issues with fireplace or heating appliance may cause unwanted degradation or hazards to chimney Modernist chimneys on the Casa Milà (Barcelona, Spain), by Antonio Gaudí. ## Dual-use chimneys Some very high chimneys are used for carrying antennas of mobile phone services and low power FM/TV-transmitters. Special attention must be paid to possible corrosion problems if these antennas are near the exhaust of the chimney. In some cases the chimneys of power stations are used also as pylons. However this type of construction, which is used at several power stations in the former Soviet Union, is not very common, because of corrosion problems of conductor cables. The Dům Dětí a Mládeže v Modřanech in Prague, Czech Republic is equipped with an observation deck. The chimney of Pei Tou Incinerator carries a revolving restaurant. ### Cooling tower used as an industrial chimney At some power stations, which are equipped with plants for the removal of sulfur dioxide and nitrogen oxides, it is possible to use the cooling tower as a chimney. Such cooling towers can be seen in Germany at the Power Station Staudinger Grosskrotzenburg and at the Power Station Rostock. At power stations that are not equipped for removing sulfur dioxide, such usage of cooling towers could result in serious corrosion problems. ## References 1. ^ C.F. Saunders (1923), The Southern Sierras of California 2. ^ Jules Verne (1872), Around the World in Eighty Days 3. ^ James Burke, Connections (Little, Brown and Co.) 1978/1995, ISBN 0-316-11672-6, p. 159 4. ^ Sparrow, Walter Shaw. The English house: how to judge its periods and styles. London: Eveleigh Nash, 1908. 85-86. 5. ^ "Lead Mining". The Northern Echo. Newsquest Media Group. Retrieved 4/10/2012. 6. ^ Roofing, flashing & waterproofing. Newtown, CT: Taunton Press, 2005. 43-50. 7. ^ a b Bliss, Stephen, ed.. Troubleshooting guide to residential construction: the diagnosis and prevention of common building problems. Richmond, VT: Builderburg Group, 1997. 197. Print. 8. ^ Chimney Problems and Warnings Signs 9. ^ Bringing an old chimney up to par 10. ^
# Laboratoire de Mécanique des Fluides et d'Acoustique - UMR 5509

LMFA - UMR 5509, Lyon, France

Article in C. R. Mécanique (2016)

## Influence of a magnetic field on the stability of a binary fluid with Soret effect

Mokhtar Ben Sassi, Slim Kaddeche, Ali Abdennadher, Daniel Henry, Hamda Ben Hadid & Abdelkader Mojtabi

The effect of both the magnitude and the orientation of a uniform magnetic field on the critical transition occurring within an electrically conducting binary fluid layer, stratified in temperature and concentration, taking into account the Soret effect, is investigated numerically. For such a configuration, the results show that the critical thresholds corresponding to an arbitrarily oriented magnetic field can be derived from those obtained for a vertical magnetic field, and that the axes of the marginal cells are aligned with the horizontal component of the magnetic field. Moreover, an analytical study is conducted to investigate the impact of the magnetic field on long-wavelength instabilities. The effect of the magnetic field on such instabilities reveals a new phenomenon consisting in major changes of the unstable modes, which lose their unicellular nature and regain their multi-roll character, as is the case without a magnetic field for $\psi<\psi_{\ell_0}=131\,Le/(34-131\,Le)$. For a binary fluid characterized by a Lewis number $Le$ and a separation factor $\psi>\psi_{\ell_0}$, the value of the Hartmann number $Ha_\ell(\psi,Le)$ corresponding to the transition responsible for a significant change in mass and heat transfer can be determined from the analytical relations derived in this work.
# Help with sequence formula

1. Mar 10, 2006

### Karla

Hi all,

I'm doing some work with the triangular numbers, pretty basic stuff:

1 3 6 10 15

Now I'm trying to understand how to get the formula for calculating the nth term; the formula is:

n(n+1)/2

I have tried doing a difference table, only to find that the second difference is 1, which led me to think the formula must start with 2N, but this is incorrect.

Please can someone show me the easiest method for working out basic sequence formulas.

Thanks,
Karla

2. Mar 10, 2006

### HallsofIvy

Staff Emeritus

It's not clear to me why the fact that the second difference is 1 would lead you to think that the formula must start with 2N! Newton's divided difference formula says that if $f(1)= a_0$, the first difference (at 1) is $a_1$, the second difference $a_2$, the third difference $a_3$, etc., then
$$f(n)= a_0+ a_1(n-1)+ \frac{a_2}{2!}(n-1)(n-2)+ \frac{a_3}{3!}(n-1)(n-2)(n-3)+ \dots$$
If the second difference is a constant, then all succeeding differences are 0 and the formula gives a polynomial.

In particular, for the sequence of triangular numbers, the first differences are just the sequence of counting numbers and the second difference is 1 for all n. At n= 1, the value is 1, the first difference is 2 and the second difference is 1, with all succeeding differences 0. Newton's divided difference formula gives
$$1+ 2(n-1)+ \frac{1}{2}(n-1)(n-2) = 1+ 2n- 2+ \frac{1}{2}(n^2- 3n+ 2) = 2n+ \frac{1}{2}n^2- \frac{3}{2}n = \frac{1}{2}n^2+ \frac{1}{2}n = \frac{n(n+1)}{2}$$

3. Mar 12, 2006

### robert Ihnot

There is a great way to work out that formula for the first n numbers. This method was supposedly used by Gauss when he was 10. His teacher, wanting to leave the classroom for a time, told the students to add up the first 100 numbers and get the total. (This is the 100th triangular number.) But, before the teacher could get out of the room, Gauss presented the answer!

His method: Consider the series 1, 2, 3, ..., 100. Now reverse this on the next line: 100, 99, 98, ..., 1, and add the two series together term by term. The result is 101 written 100 times, and the correct answer is 1/2 of that, which is 50 x 101 = 5050.

http://www.cut-the-knot.org/Curriculum/Algebra/GaussSummation.shtml

4. Sep 16, 2011

Here is what I found to be a nice way of finding an equation for a sequence, assuming it is a sequence and not just a random set of numbers. I had it as a PowerPoint but had to save it as a PDF file to attach it. Let me know if it helps.

Jim

PS: For your example:

Sequence:            1   3   6   10   15
First differences:     2   3   4    5        (3-1=2, 6-3=3, 10-6=4, 15-10=5)
Second differences:      1   1   1           (3-2=1, 4-3=1, 5-4=1)

The '1' repeats at the second row, so we start with T = (1/2!) n^2. And (1/2) n^2 has the sequence 1/2, 2, 9/2, 8. We take our sequence, subtract these to get a remainder sequence, and then repeat the first part again:

Remainders:          1/2   1   3/2   2
First differences:      1/2   1/2   1/2

This one started repeating in the first row, so we add (1/2) n to the first part we found and get T = (1/2) n^2 + (1/2) n, and we have no remainder this time, so we are done.

I explained these steps a lot more thoroughly in the PDF file.

Attached Files: Solve a Sequence.pdf (329.2 KB)

5. Sep 17, 2011

### HallsofIvy

Staff Emeritus

Yes, that is precisely the "Newton's divided difference formula" that both I and robert Ihnot referred to five years ago!
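The finite-difference procedure described in the posts above is easy to mechanize. Here is a minimal sketch (my own illustration, not from the thread) that recovers n(n+1)/2 from the first few triangular numbers using Newton's forward-difference formula:

```python
from fractions import Fraction
from math import comb

def newton_forward_coeffs(values):
    """Given f(1), f(2), ..., return the leading entries a_k of the
    difference table (the k-th forward differences at n=1), so that
    f(n) = sum_k a_k * C(n-1, k)  -- Newton's forward-difference formula."""
    row = [Fraction(v) for v in values]
    leading = []
    while row:
        leading.append(row[0])
        row = [b - a for a, b in zip(row, row[1:])]
    return leading

def evaluate(leading, n):
    return sum(a * comb(n - 1, k) for k, a in enumerate(leading))

triangular = [1, 3, 6, 10, 15]
coeffs = newton_forward_coeffs(triangular)        # [1, 2, 1, 0, 0]
print([int(evaluate(coeffs, n)) for n in range(1, 9)])
# -> [1, 3, 6, 10, 15, 21, 28, 36], i.e. n(n+1)/2
```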
# One-to-one

A function $f \colon X \rightarrow Y$ is called one-to-one (or injective) if for all $a, b \in X$, $f(a)=f(b)$ implies that $a=b$.
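As a quick added illustration of how the definition is applied (not part of the original entry): the map $f \colon \mathbb{R} \rightarrow \mathbb{R}$, $f(x) = 2x + 1$, is one-to-one, since
$$f(a) = f(b) \implies 2a + 1 = 2b + 1 \implies a = b,$$
whereas $g(x) = x^2$ on $\mathbb{R}$ is not one-to-one, because $g(-1) = g(1)$ while $-1 \neq 1$.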
Synopsis

Boosting the Performance of Quantum Repeaters

Physics 15, s29

Multiplexing techniques could boost the chances of achieving end-to-end entanglement of a signal in a trapped-ion-based quantum-computer network.

To create a large-scale network of quantum computers, researchers need to develop devices that can amplify and reinforce the signals that these networks will carry (see Research News: The Key Device Needed for a Quantum Internet). These devices, known as quantum repeaters, play an important role in distributing entanglement over the networks. Now, Prajit Dhara of the University of Arizona and colleagues have investigated how multiplexing techniques impact the rates of end-to-end entanglement in a network of quantum repeaters made of trapped ions [1]. The researchers say that their investigation helps pave the way for implementing trapped-ion quantum repeaters in quantum networks.

The quantum-repeater-network design that Dhara and colleagues considered is based on repeaters made of trapped-ion qubits, one of the front-running qubit types for quantum computers. The network of repeaters incorporates both spatial and temporal multiplexing—techniques for transmitting multiple, simultaneous signals within a communication channel. In spatial multiplexing, each pair of adjacent repeaters makes multiple, parallel attempts to entangle with one another to improve the chances of achieving entanglement across the network. In temporal multiplexing, the network as a whole iterates the entanglement attempts, balancing the benefit of multiple attempts with the downside of longer wait times.

Dhara and colleagues find that the network design that they considered has increased rates of entanglement compared to those predicted in previous studies. The team also provides predictions for the ion resources that multiplexed quantum repeaters need in order to boost entanglement rates, calculating the best entanglement rates achievable for a given set of resources.

–Erika K. Carlson

Erika K. Carlson is a Corresponding Editor for Physics based in New York City.

References

1. P. Dhara et al., "Multiplexed quantum repeaters based on dual-species trapped-ion systems," Phys. Rev. A 105, 022623 (2022).
Two balls, of masses $m_A = 28$ g and $m_B = 68$ g, are suspended as shown in Figure 7-44. The lighter ball is pulled away to a 60° angle with the vertical and released.

Figure 7-44

(a) What is the velocity of the lighter ball before impact? (Take the right to be positive.)
_______ m/s

(b) What is the velocity of each ball after the elastic collision?
ball A _____ m/s
ball B _____ m/s

(c) What will be the maximum height of each ball (above the collision point) after the elastic collision?
ball A _____ m
ball B _____ m
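Figure 7-44 (and with it the pendulum length) is not reproduced here, so as a hedged sketch, here is how the three parts would be computed, with the cord length L left as an assumed parameter:

```python
import math

g = 9.8                    # m/s^2
m_a, m_b = 0.028, 0.068    # kg
theta = math.radians(60)
L = 0.30                   # m -- ASSUMED cord length; the actual value is given in Figure 7-44

# (a) Energy conservation on the swing down: v = sqrt(2 g L (1 - cos(theta)))
v_a = math.sqrt(2 * g * L * (1 - math.cos(theta)))

# (b) 1-D elastic collision with ball B initially at rest:
v_a_after = (m_a - m_b) / (m_a + m_b) * v_a   # negative: ball A rebounds
v_b_after = 2 * m_a / (m_a + m_b) * v_a

# (c) Each ball rises to h = v^2 / (2 g) above the collision point.
h_a = v_a_after**2 / (2 * g)
h_b = v_b_after**2 / (2 * g)

print(f"(a) v_A = {v_a:.3f} m/s")
print(f"(b) v_A' = {v_a_after:.3f} m/s, v_B' = {v_b_after:.3f} m/s")
print(f"(c) h_A = {h_a:.4f} m, h_B = {h_b:.4f} m")
```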
Version: 2.2.1

# CSV import tool

CSV is a universal and very versatile data format used to store large quantities of data. Each Memgraph database instance includes a CSV import tool called mg_import_csv. The CSV import tool should be used for initial bulk ingestion of data into the database. Upon ingestion, the CSV importer creates a snapshot that will be used by the database to recover its state on its next startup.

If you are already familiar with the Neo4j bulk import tool, then using the mg_import_csv tool should be easy. The CSV import tool is fully compatible with the Neo4j CSV format. If you already have a pipeline set up for Neo4j, you should only replace neo4j-admin import with mg_import_csv.

info: For more detailed information about the CSV import tool, check our Reference guide.

Importing CSV data using mg_import_csv should be a one-time operation done before running Memgraph. In other words, this tool should not be used to import data into an already running Memgraph instance. If you are using Docker, before the import, you need to transfer the CSV files to where the Docker container can see them.

Please check the examples below to find out how to use the import tool based on the complexity of your data.

## Examples

Here are two examples of how to use the CSV import tool depending on the complexity of your data:

### One type of nodes and relationships

Let's import a simple dataset. Download the people_nodes.csv file with the following content:

```
id:ID(PERSON_ID),name:string,:LABEL
100,Daniel,Person
101,Alex,Person
102,Sarah,Person
103,Mia,Person
104,Lucy,Person
```

Download the people_relationships.csv file with the following content:

```
:START_ID(PERSON_ID),:END_ID(PERSON_ID),:TYPE
100,102,IS_FRIENDS_WITH
103,101,IS_FRIENDS_WITH
102,103,IS_FRIENDS_WITH
101,104,IS_FRIENDS_WITH
104,100,IS_FRIENDS_WITH
101,102,IS_FRIENDS_WITH
100,103,IS_FRIENDS_WITH
```

Let's import the dataset using the CSV import tool. We will be importing 2 CSV files.

danger: Your existing snapshot and WAL data will be considered obsolete, and Memgraph will load the new dataset. This means that all of your existing data will be lost and replaced with the newly imported data. If your Memgraph Docker container is running, you need to stop it before starting the import process.

If you are using Docker, first copy the CSV files to where the Docker container can see them:

```
docker container create --user memgraph --name mg_import_helper -v mg_import:/import-data busybox
docker cp people_nodes.csv mg_import_helper:/import-data
docker cp people_relationships.csv mg_import_helper:/import-data
docker rm mg_import_helper
```

Then, run the import tool with the following command, but be careful of three things:

1. Check that the image name you are using is correct:
   - If you downloaded Memgraph Platform, leave the current image name memgraph/memgraph-platform.
   - If you downloaded MemgraphDB, replace the current image name with memgraph.
   - If you downloaded MAGE, replace the current image name with memgraph/memgraph-mage.
2. If you are using Docker on Windows and execute commands in PowerShell, change the line-continuation character from \ to ` (a backtick).
3. Check that the paths of the files you want to import are correct.

```
docker run --user="memgraph" -v mg_lib:/var/lib/memgraph -v mg_import:/import-data \
  --entrypoint=mg_import_csv memgraph/memgraph-platform \
  --nodes /import-data/people_nodes.csv \
  --relationships /import-data/people_relationships.csv
```

If you get a "--nodes flag is required!" error, the paths to the files are incomplete or you are missing them completely.
The next time you run Memgraph, the dataset will be loaded:

```
docker run -it -p 7687:7687 -p 7444:7444 -p 3000:3000 -v mg_lib:/var/lib/memgraph memgraph/memgraph-platform
```

For information on other options, run:

```
docker run --entrypoint=mg_import_csv memgraph/memgraph-platform --help
```

After the import, the graph in Memgraph should look like this:

### Multiple types of nodes and relationships

The previous example showcases a simple graph with one node type and one relationship type. If we have more complex graphs, the procedure is similar. Download the four CSV files to define a dataset. You can check the contents of the files and their descriptions in the tabs below.

The people_nodes.csv file contains the people nodes with name, age, city and label properties:

```
id:ID(PERSON_ID),name:string,age:int,city:string,:LABEL
100,Daniel,30,London,Person
101,Alex,15,Paris,Person
102,Sarah,17,London,Person
103,Mia,25,Zagreb,Person
104,Lucy,21,Paris,Person
105,Adam,23,New York,Person
```

Let's import the 4 files using the CSV import tool. If you are using Docker, first copy the CSV files to where the Docker container can see them:

```
docker container create --user memgraph --name mg_import_helper -v mg_import:/import-data busybox
docker cp people_nodes.csv mg_import_helper:/import-data
docker cp people_relationships.csv mg_import_helper:/import-data
docker cp restaurants_nodes.csv mg_import_helper:/import-data
docker cp restaurants_relationships.csv mg_import_helper:/import-data
docker rm mg_import_helper
```

Then, run the import tool with the following command, but be careful of three things:

1. Check that the image name you are using is correct:
   - If you downloaded Memgraph Platform, leave the current image name memgraph/memgraph-platform.
   - If you downloaded MemgraphDB, replace the current image name with memgraph.
   - If you downloaded MAGE, replace the current image name with memgraph/memgraph-mage.
2. If you are using Docker on Windows and execute commands in PowerShell, change the line-continuation character from \ to ` (a backtick).
3. Check that the paths of the files you want to import are correct.

```
docker run --user="memgraph" -v mg_lib:/var/lib/memgraph -v mg_etc:/etc/memgraph -v mg_import:/import-data \
  --entrypoint=mg_import_csv memgraph/memgraph-platform \
  --nodes /import-data/people_nodes.csv \
  --nodes /import-data/restaurants_nodes.csv \
  --relationships /import-data/people_relationships.csv \
  --relationships /import-data/restaurants_relationships.csv
```

The next time you run Memgraph, the dataset will be loaded:

```
docker run -it -p 7687:7687 -p 7444:7444 -p 3000:3000 -v mg_lib:/var/lib/memgraph memgraph/memgraph-platform
```

For information on other options, run:

```
docker run --entrypoint=mg_import_csv memgraph/memgraph-platform --help
```

After the import, the graph in Memgraph should look like this:
# Euler formulas

Formulas connecting the exponential and trigonometric functions:

$$e^{iz}=\cos z+i\sin z,$$

$$\cos z=\frac{e^{iz}+e^{-iz}}{2},\quad\sin z=\frac{e^{iz}-e^{-iz}}{2i}.$$

These hold for all values of the complex variable $z$. In particular, for a real value $z=x$ the Euler formulas become

$$\cos x=\frac{e^{ix}+e^{-ix}}{2},\quad\sin x=\frac{e^{ix}-e^{-ix}}{2i}$$

These formulas were published by L. Euler in [1].

#### References

[1] L. Euler, Miscellanea Berolinensia, 7 (1743) pp. 193–242
[2] L. Euler, "Einleitung in die Analysis des Unendlichen", Springer (1983) (Translated from Latin)
[3] A.I. Markushevich, "A short course on the theory of analytic functions", Moscow (1978) (In Russian)

#### References

[a1] K.R. Stromberg, "An introduction to classical real analysis", Wadsworth (1981)

This article was adapted from an original article by E.D. Solomentsev (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.
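As a quick numerical sanity check of these identities (an added illustration, not part of the entry):

```python
import cmath

z = 0.7 + 1.3j  # arbitrary complex test value

# e^{iz} = cos z + i sin z
lhs = cmath.exp(1j * z)
rhs = cmath.cos(z) + 1j * cmath.sin(z)
print(abs(lhs - rhs) < 1e-12)  # True

# cos z and sin z recovered from exponentials
cos_z = (cmath.exp(1j * z) + cmath.exp(-1j * z)) / 2
sin_z = (cmath.exp(1j * z) - cmath.exp(-1j * z)) / (2j)
print(abs(cos_z - cmath.cos(z)) < 1e-12,
      abs(sin_z - cmath.sin(z)) < 1e-12)  # True True
```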
# Inequality 3 (George Basdekis)

Problem: Let $a,b,c$ be positive real numbers. Prove that $\displaystyle \frac{a}{bc}+\frac{1}{a}+\frac{b}{ca}+\frac{1}{b}+\frac{c}{ab}+\frac{1}{c}\geq \frac{1}{2}\left(\frac{a+b}{b^2+c^2}+\frac{b+c}{c^2+a^2}+\frac{c+a}{a^2+b^2}\right)$.

1st solution: The left-hand side equals $\displaystyle \frac{a^2+b^2+c^2+ab+bc+ca}{abc}$, so it suffices to prove the stronger inequality $\displaystyle \frac{a^2+b^2+c^2+ab+bc+ca}{2abc}\geq \sum_{cyc}\frac{a+b}{b^2+c^2}$, that is, $\displaystyle \sum_{cyc}\frac{a^2+ab}{2abc}\geq \sum_{cyc}\frac{a+b}{b^2+c^2}$. This is true according to the AM-GM inequality: $b^2+c^2\geq 2bc$, $c^2+a^2\geq 2ca$ and $a^2+b^2\geq 2ab$ hold for all non-negative numbers, so $\displaystyle \frac{a+b}{b^2+c^2}\leq\frac{a+b}{2bc}=\frac{a^2+ab}{2abc}$ for each cyclic term.

2nd solution: Bringing everything to the left-hand side in the stronger inequality above, we need $\displaystyle \frac{a^2+b^2+c^2+ab+bc+ca}{2abc}-\sum_{cyc}\frac{a+b}{b^2+c^2}\geq 0$. But this holds because the difference is, up to the positive factor $\frac{1}{2abc}$, of the form $\displaystyle \sum_{cyc}\frac{a(a+b)(b-c)^{2}}{b^2+c^2}\geq 0$.

Equality in the stronger inequality occurs if and only if $a=b=c$, Q.E.D.
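As a quick numerical sanity check of the inequality (my addition, not part of the post), one can test random positive triples:

```python
import random

def lhs(a, b, c):
    return a/(b*c) + 1/a + b/(c*a) + 1/b + c/(a*b) + 1/c

def rhs(a, b, c):
    return 0.5 * ((a+b)/(b*b+c*c) + (b+c)/(c*c+a*a) + (c+a)/(a*a+b*b))

random.seed(0)
for _ in range(100_000):
    a, b, c = (random.uniform(0.01, 10) for _ in range(3))
    assert lhs(a, b, c) >= rhs(a, b, c)
print("No counterexample found.")
```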
# Homework Help: Elliptic Functions, same principal parts, finding additive C

1. Apr 13, 2017

### binbagsss

1. The problem statement, all variables and given/known data

See attached. The solution of part e) is $C=4\psi(a)$.

I am looking at part e, the answer to part d being that the principal parts around the poles $z=0$ and $z=-a$ are the same.

2. Relevant equations

3. The attempt at a solution

Since we already know that the negative powers of $z$ have the same expansions, and $C$ corresponds to the $z^0$ term, expanding $f_a(z)^2$ about $z=0$ gives $\frac{4}{z^2}+4\psi(a)z^2+4\psi(a)$, and so the relevant term is $4\psi(a)$. Looking at the expansion of $f_a(z)^2$ about $z=-a$, there is no $z^0$ term, so I conclude $C=4\psi(a)$.

QUESTION - This doesn't really seem like a proper approach, i.e., to break it down to considering the expansions of $f_a(z)$ about $z=0$ and $z=-a$ separately, whereas I am considering the RHS as a function over the entire complex plane. (If I weren't considering the RHS or LHS over the entire complex plane, then the theorem that gives the additive constant $C$ would not work, so I don't really see how you can break it down on either the LHS or RHS to consider only an expansion about a single pole?)
# zbMATH — the first resource for mathematics

Stochastic linear quadratic regulation for discrete-time linear systems with input delay. (English) Zbl 1175.93246

Summary: This paper considers the stochastic linear Quadratic Regulation (LQR) problem for systems with input delay and stochastic parameter uncertainties in the state and input matrices. The problem is known to be difficult due to the presence of interactions among the delayed input channels and the stochastic parameter uncertainties in the channels. The key to our approach is to convert the LQR control problem into an optimization one in a Hilbert space for an associated backward stochastic model and then obtain the optimal solution to the stochastic LQR problem by exploiting the dynamic programming approach. Our solution is given in terms of two generalized Riccati difference equations of the same dimension as that of the plant.

##### MSC:
93E20 Optimal stochastic control (systems)
49N10 Linear-quadratic optimal control problems
93C55 Discrete-time control systems
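Since the summary states the solution in terms of Riccati difference equations, here is a hedged sketch of the standard (deterministic, delay-free) finite-horizon discrete-time LQR Riccati recursion for comparison; the paper's generalized equations extend this to input delay and multiplicative noise, and the matrices below are arbitrary illustrations:

```python
import numpy as np

# Standard backward Riccati recursion for finite-horizon discrete-time LQR:
#   P_k = Q + A' P_{k+1} A - A' P_{k+1} B (R + B' P_{k+1} B)^{-1} B' P_{k+1} A
# with terminal condition P_N = Q. Illustrative 2-state, 1-input system.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[0.5]])
N = 50

P = Q.copy()  # terminal cost
for _ in range(N):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # feedback gain u = -K x
    P = Q + A.T @ P @ A - A.T @ P @ B @ K

print("Converged feedback gain K:", K)
```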
# Tikzmark; Tangent arrows

I am happy with this but I'd like to make the two drawn arrows tangent. Here is my minimal example:

\documentclass{article}
\usepackage{amsmath,tikz}
\usetikzlibrary{tikzmark}
\begin{document}
\begin{align*}
\textcolor{blue}{1\tikzmarknode{A}{}2}\bigg ( \dfrac{x\tikzmarknode{B}{+}4}{4}-\dfrac{x\tikzmarknode{C}{-}3}{3} \bigg )&=\textcolor{blue}{12}\bigg ( \dfrac{11}{12} \bigg ) \\
\end{align*}
\begin{tikzpicture}[overlay,remember picture]
\draw[->,blue,thick,smooth,shorten >=1pt,shorten <=1pt,out=65,in=110,distance=.6cm] ([yshift=6pt]A.north) to ([yshift=2pt]B.north);
\draw[->,blue,thick,smooth,shorten >=1pt,shorten <=1pt,out=65,in=110,distance=.6cm] ([yshift=6pt]A.north) to ([yshift=2pt]C.north);
\end{tikzpicture}
\end{document}

This program gives:

BUT I am trying to get the two drawn lines tangent as in this image:

• Try out=90 for both and play with in until you like the result... the second arrow's in should be somewhat greater than the first's... for example first in 110, second in 125 – koleygr Oct 17 '18 at 21:09
• You've not provided a MWE that we can compile. Also, at least for my version of tikzmark, there is no command \tikzmarknode. – A.Ellett Oct 17 '18 at 21:12
• Understood, it is the newest version and not available on CTAN yet. My apologies. – MathScholar Oct 17 '18 at 21:27
• Technically, the lines are tangent. It's just that they don't stay tangent for long. Try using the looseness key on the longer arrow to get it to sweep out a bit more. – Andrew Stacey Oct 17 '18 at 21:35

I've taken a few liberties with your original posting. But here's a result. I elaborate below:

\documentclass[border=6pt]{standalone}
\usepackage{amsmath,tikz}
\usetikzlibrary{calc}
\newcommand\tikzmark[1]{\tikz[remember picture,overlay] \node[inner sep=0pt] (#1) {};}
\newcommand\tikzdouble[2]{\tikzmark{#11}#2\tikzmark{#12}}
\begin{document}
\begin{minipage}{5in}
\begin{align*}
\textcolor{blue}{\tikzdouble{A}{12}}
\bigg ( \dfrac{\tikzdouble{B}{x+4}}{4}
-\dfrac{\tikzdouble{C}{x-3}}{3} \bigg )
&=
\textcolor{blue}{12}
\bigg ( \dfrac{11}{12} \bigg ) \\
\end{align*}
\end{minipage}
\begin{tikzpicture}[overlay,remember picture,
  my arrow style/.style={->,blue,thick,smooth,shorten >=1pt,shorten <=1pt}]
\foreach \myn in {A,B,C}
{
  \coordinate (\myn) at ($(\myn1)!0.5!(\myn2)$);
}
\draw[my arrow style] ([yshift=8pt]A) .. controls ++(60:12pt) and ++(120:12pt) .. ([yshift=5pt]B);
\draw[my arrow style] ([yshift=8pt]A) .. controls ++(60:28pt) and ++(120:28pt) .. ([yshift=5pt]C);
\end{tikzpicture}
\end{document}

So, the first thing I've done is use the standalone class, and because of that I have to put the align environment inside a minipage so it knows what the paper width is.

Next, I'm not sure what \tikzmarknode is supposed to actually do. I made my best guess and made my own command \tikzdouble. I'm also guessing that \tikzmarknode does some of the work I'm doing in the tikzpicture environment on its own.

I've also defined a style for the arrows to make the code a bit more readable.

The primary gist of what I've done is to use control points in lieu of the in= and out= keys for the directive to.

# Update

Here I've added a bit more muscle to \tikzdouble to do more of the grunt work for placing the nodes appropriately. But, it has been set up to operate in math mode.
The code:

\documentclass[border=6pt]{standalone}
\usepackage{amsmath,tikz}
\usetikzlibrary{calc,fit}
%% this tikzdouble has been designed specifically for use in math mode,
%% particularly where it comes to measuring the height and depth of the text
%% being marked on either side with a node.
\newcommand\tikzdouble[2]{%%
  \pgfmathsetmacro\aetmpA{depth("$#2$")}%%
  \pgfmathsetmacro\aetmpB{height("$#2$")}%%
  \tikz[remember picture,overlay] \node[inner sep=0pt] at (0,-\aetmpA pt) (#1/tmp/1) {};%%
  #2%%
  \tikz[remember picture,overlay] \node[inner sep=0pt] at (0,\aetmpB pt) (#1/tmp/2) {};%%
  \tikz[remember picture,overlay] \node[fit=(#1/tmp/1) (#1/tmp/2),inner sep=0pt] (#1) {};%%
}
\begin{document}
\begin{minipage}{5in}
\begin{align*}
\textcolor{blue}{\tikzdouble{A}{12}}
\bigg ( \dfrac{\tikzdouble{B}{x+4}}{4}
-\dfrac{\tikzdouble{C}{x-3}}{3} \bigg )
&=
\textcolor{blue}{12}
\bigg ( \dfrac{11}{12} \bigg )
\end{align*}
\end{minipage}
\begin{tikzpicture}[overlay,remember picture,
  my arrow style/.style={->,blue,thick,smooth,shorten >=1pt,shorten <=1pt}]
\draw[my arrow style] (A.north) .. controls ++(60:12pt) and ++(120:12pt) .. (B.north);
\draw[my arrow style] (A.north) .. controls ++(60:32pt) and ++(120:24pt) .. (C.north);
\end{tikzpicture}
\end{document}

• The result is what I am looking for. I need some time to try and adapt it to my original posting. I forgot about the new version. It is discussed in this post: tex.stackexchange.com/questions/450135/… . I believe Loopspace created it. – MathScholar Oct 17 '18 at 21:57
Back to all Elements

Math formulas in CMS

Writing inline (LaTeX) math formulas anywhere on a static or dynamic page using MathJax.

CMS Code Integration

The script allows math formulas to be written anywhere within the HTML code. To start inline math, you use either a dollar sign $ or a backslash and an open parenthesis "\(". To end the inline math, you use another dollar sign or a backslash and a closing parenthesis "\)". Enclosing a formula in double dollar signs puts the math on a separate line.

So, let's get to it. First we add the MathJax JS library and the Polyfill.js dependency to our page:

<script src="https://polyfill.io/v3/polyfill.min.js?features=es6"></script>
<script type="text/javascript" id="MathJax-script" async src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js"></script>

Now we add the MathJax configuration to initiate the script and define what counts as a math formula:

<script>
MathJax = {
  tex: {
    inlineMath: [['$', '$'], ['\\(', '\\)']]
  },
};
</script>

If you need to use a literal dollar sign in the text, you can put a backslash before the dollar sign: "\$".

Dollar sign before and after: $y=mx+1$
Parentheses: \(y=mx+2\)
Double dollar signs: $$y=mx+3$$
Hard brackets: \[y=mx+5\]

In a sentence: When $a \ne 0$, there are two solutions to \(ax^2 + bx + c = 0\) and they are $$x = {-b \pm \sqrt{b^2-4ac} \over 2a}.$$
# Help me understand the logic behind x - y in binary by boolean?

x and y are 4-bit signed numbers (2's complement). x - y can be obtained by:

!(!x + y)

I know that in 2's complement, -y = (!y + 1), so I can pretty much understand how this works:

x - y = x + (!y + 1)

But what sort of magic is going on in here? How does this work:

!(!x + y)

I mean, when I do the math with pen and paper all is fine, but I do not know why it is working.

## 1 Answer

If $z$ is an $n$-bit number then $!z=2^n-1-z$. So
$$!(!x+y)=2^n-1-(2^n-1-x+y) = x-y.$$
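A brute-force check of the identity, my own sketch rather than part of the original thread: it assumes `!` means bitwise NOT and that all arithmetic is taken modulo $2^4$, as is implicit for 4-bit registers.

```python
# Verify x - y == !(!x + y) for all 4-bit values, with arithmetic modulo 2^4.
N = 4
MASK = (1 << N) - 1          # 0b1111; reduces every result modulo 2^N

def bnot(z):
    """Bitwise NOT within N bits: !z = 2^N - 1 - z."""
    return ~z & MASK

assert all(
    ((x - y) & MASK) == bnot((bnot(x) + y) & MASK)
    for x in range(1 << N)
    for y in range(1 << N)
)
print("identity holds for all 4-bit x, y")
```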
## 16.2 Mathematical explanation

Now we can discuss the same idea from the mathematical point of view. We estimated the following simple model:

$$y_j = \mu_{y} + \epsilon_j, \tag{16.1}$$

assuming normal distribution of the residuals (see Section 4.3). In order to make things closer to the regression context, we will introduce a changing location, which is defined by the regression line (thus, it is conditional on the set of $k-1$ explanatory variables):

$$y_j = \mu_{y,j} + \epsilon_j, \tag{16.2}$$

where $\mu_{y,j}$ is the population regression line, defined via:

$$\mu_{y,j} = \beta_0 + \beta_1 x_{1,j}+ \beta_2 x_{2,j} + \dots + \beta_{k-1} x_{k-1,j} . \tag{16.3}$$

The typical assumption in the regression context is that $\epsilon_j \sim \mathcal{N}(0, \sigma^2)$ (normal distribution with zero mean and fixed variance), which means that $y_j \sim \mathcal{N}(\mu_{y,j}, \sigma^2)$. We can use this assumption in order to calculate the point likelihood value for each observation based on the PDF of the Normal distribution (Subsection 4.3):

$$\mathcal{L} (\mu_{y,j}, \sigma^2 | y_j) = f(y_j | \mu_{y,j}, \sigma^2) = \frac{1}{\sqrt{2 \pi \sigma^2}} \exp \left( -\frac{\left(y_j - \mu_{y,j} \right)^2}{2 \sigma^2} \right). \tag{16.4}$$

Very roughly, what the value (16.4) shows is how likely it is that the specific observation comes from the assumed model with specified parameters (we know that in the real world data does not come from any model, but this interpretation is easier to work with). Note that the likelihood is not the same as probability, because for any continuous random variable the probability for it to be equal to any specific number is equal to zero (as discussed in Section 4.1).

The point likelihood (16.4) is not very helpful on its own, but we can get $n$ values like that, based on our sample of data. We can then summarise them in one number that would characterise the whole sample, given the assumed distribution, applied model and selected values of parameters:

$$\mathcal{L} (\boldsymbol{\theta}, {\sigma}^2 | \mathbf{y}) = \prod_{j=1}^n \mathcal{L} (\mu_{y,j}, \sigma^2 | y_j) = \prod_{j=1}^n f(y_j | \mu_{y,j}, \sigma^2), \tag{16.5}$$

where $\boldsymbol{\theta}$ is the vector of all parameters in the model (in our example, it is $k+1$ of them: all the coefficients of the model and the scale $\sigma^2$). We take the product of likelihoods in (16.5) because we need to get the joint likelihood for all observations and because we can typically assume that the point likelihoods are independent of each other (for example, the value on observation $j$ will not be influenced by the value on $j-1$). The value (16.5) shows roughly how likely on average it is that the data comes from the assumed model with specified parameters.

Remark. Technically speaking, the "on average" element will be achieved if we divide (16.5) by the number of observations $n$.

Having this value, we can change the values of parameters of the model, getting a different value of (16.5) (as we did in the example in Section 16.1). Using an iterative procedure, we can get such estimates of parameters that would maximise the likelihood (16.5). These estimates of parameters are called "Maximum Likelihood Estimates" (MLE).
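To make the idea concrete, here is a small numerical sketch (mine, not from the book) that evaluates the joint likelihood (16.5) for a simple location model on a grid of candidate locations and picks the maximiser; the crude grid search stands in for the iterative procedure mentioned above:

```python
import numpy as np

rng = np.random.default_rng(0)
y = 10 + rng.normal(0, 2, size=100)   # a sample from the simple model (16.1)
sigma2 = 4.0                          # scale treated as known, for simplicity

def likelihood(mu):
    # product of the point likelihoods (16.4), i.e. equation (16.5)
    return np.prod(np.exp(-(y - mu) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2))

grid = np.linspace(8, 12, 4001)
mu_hat = grid[np.argmax([likelihood(m) for m in grid])]
print(round(mu_hat, 3), round(y.mean(), 3))  # the maximiser sits at the sample mean
```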
However, working with the products in formula (16.5) is challenging, so typically we linearise it using the natural logarithm, obtaining the log-likelihood. For the normal distribution, it can be written as:

$$\ell (\boldsymbol{\theta}, {\sigma}^2 | \mathbf{y}) = \log \mathcal{L} (\boldsymbol{\theta}, {\sigma}^2 | \mathbf{y}) = -\frac{n}{2} \log(2 \pi \sigma^2) -\sum_{j=1}^n \frac{\left(y_j - \mu_{y,j} \right)^2}{2 \sigma^2} . \tag{16.6}$$

Based on that, we can find some of the parameters of the model analytically. For example, we can derive the formula for the estimation of the scale based on the provided sample. Given that we are estimating the parameter, we should substitute $\sigma^2$ with $\hat{\sigma}^2$ in (16.6). We can then take the derivative of (16.6) with respect to $\hat{\sigma}^2$ and equate it to zero in order to find the value that maximises the log-likelihood function in our sample:

$$\frac{d \ell (\boldsymbol{\theta}, \hat{\sigma}^2 | \mathbf{y})}{d \hat{\sigma}^2} = -\frac{n}{2} \frac{1}{\hat{\sigma}^2} + \frac{1}{2 \hat{\sigma}^4}\sum_{j=1}^n \left(y_j - \mu_{y,j} \right)^2 =0 , \tag{16.7}$$

which after multiplication of both sides by $2 \hat{\sigma}^4$ leads to:

$$n \hat{\sigma}^2 = \sum_{j=1}^n \left(y_j - \mu_{y,j} \right)^2 , \tag{16.8}$$

or

$$\hat{\sigma}^2 = \frac{1}{n}\sum_{j=1}^n \left(y_j - \mu_{y,j} \right)^2 . \tag{16.9}$$

The value (16.9) is in fact the Mean Squared Error (MSE) of the model. If we calculate the value of $\hat{\sigma}^2$ using the formula (16.9), we will maximise the likelihood with respect to the scale parameter. In fact, we can insert (16.9) in (16.6) in order to obtain the so-called "concentrated" (or profile) log-likelihood for the normal distribution:

$$\ell^* (\boldsymbol{\theta} | \mathbf{y}) = -\frac{n}{2}\left( \log(2 \pi e) + \log \hat{\sigma}^2 \right) . \tag{16.10}$$

Remark. Sometimes, statisticians drop the $2 \pi e$ part from (16.10), because it does not affect any inferences, as long as one works only with the Normal distribution. However, in general, it is not recommended to do so (Burnham and Anderson, 2004), because this makes the comparison with other distributions impossible.

This function is useful because it simplifies some calculations and also demonstrates the condition for which the likelihood is maximised: the first part on the right hand side of the formula does not depend on the parameters of the model; it is only $\log \hat{\sigma}^2$ that does. So, the maximum of the concentrated log-likelihood (16.10) is obtained when $\hat{\sigma}^2$ is minimised, implying the minimisation of MSE, which is the mechanism behind the "Ordinary Least Squares" (OLS, from Section 10.1) estimation method. By doing this, we have just demonstrated that if we assume normality in the model, then the estimates of its parameters obtained via the maximisation of the likelihood coincide with the values obtained from OLS.

So, why bother with MLE when we have OLS? First, the finding above holds for the Normal distribution only. If we assume a different distribution, we would get different estimates of parameters. In some cases, it might not be possible or reasonable to use OLS, but MLE would be a plausible option (for example, for logistic, Poisson and other non-standard models). Second, the MLEs of parameters have good statistical properties: they are consistent (Subsection 6.3.3) and efficient (Subsection 6.3.2). These properties hold almost universally for many likelihoods under very mild conditions.
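To tie the derivation together, a small numerical check (mine, not from the book): the full log-likelihood (16.6) evaluated at the $\hat{\sigma}^2$ from (16.9) coincides with the concentrated form (16.10), and that $\hat{\sigma}^2$ is exactly the MSE.

```python
import numpy as np

rng = np.random.default_rng(42)
y = 10 + rng.normal(0, 2, size=200)   # data from a simple location model
mu = y.mean()                          # fitted location
sigma2 = np.mean((y - mu) ** 2)        # equation (16.9): MLE of the scale = MSE
n = len(y)

# Full normal log-likelihood (16.6), evaluated at the MLE of sigma^2
loglik = -n / 2 * np.log(2 * np.pi * sigma2) - np.sum((y - mu) ** 2) / (2 * sigma2)

# Concentrated log-likelihood (16.10)
loglik_conc = -n / 2 * (np.log(2 * np.pi * np.e) + np.log(sigma2))

print(np.isclose(loglik, loglik_conc))  # True: the two forms coincide at the MLE
```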
Note that the MLEs of parameters are not necessarily unbiased (Subsection 6.3.1), but after estimating the model, one can de-bias some of them (for example, calculate the standard deviation of the error via division of the sum of squared errors by the number of degrees of freedom $n-k$ instead of $n$, as discussed in Section 11.2).

Third, the likelihood can be used for model assessment, even when standard statistics such as $R^2$ or the F-test are not available. We do not discuss these aspects in this textbook, but the interested reader is directed to the topic of likelihood ratios.

Finally, likelihood permits model selection (which will be discussed in Section ??) via information criteria. In general, this is not possible unless you assume a distribution and maximise the respective likelihood. In some statistical literature, you may notice that information criteria are calculated for models estimated via OLS, but what the authors of such resources do not tell you is that there is still an assumption of normality behind this (see the link between OLS and the MLE of the Normal distribution above).

Note that the likelihood approach assumes that all parameters of the model are estimated, including the location, scale, shape, shift of the distribution, etc. So it typically has more parameters to estimate than, for example, OLS. This is discussed in some detail in Section 16.3.

### References

- Burnham, K.P., Anderson, D.R., 2004. Model Selection and Multimodel Inference. Springer New York. https://doi.org/10.1007/b97636
# Multinomial distribution proof

(The original page embedded an interactive chi-squared demonstration: a simulated die with six equal category probabilities and a histogram of the resulting chi-squared statistic. Only the surviving prose is reconstructed below.)

In statistical mechanics and combinatorics, if one has a number distribution of labels then the multinomial coefficients … (truncated in the source).

Suppose each of $n$ independent trials results in one of $k$ possible types of outcome, and let $p_i$ be the probability that the outcome is of type $i$. The categories are disjoint and exhaustive: each datum must fall in some category, so $p_1 + \dots + p_k = 100\%$. The counts of outcomes of each type are the canonical example of random variables with a multinomial joint distribution.

Under the null hypothesis, the expected number of outcomes in category $i$ is $e_i = n p_i$. The chi-squared statistic is the sum over categories of $(\text{observed} - \text{expected})^2 / \text{expected}$; each term measures how far the observed count is from its expected value. This is called the chi-square test for goodness of fit.

When the sample size is large, the probability histogram of the chi-squared statistic is approximated by the chi-squared curve with $k - 1$ degrees of freedom, where $k$ is the number of categories. How good the approximation is depends on the number of trials, the number of categories, and the probability of each category; as a rule of thumb, it is adequate if the expected count in every category is 10 or greater. When the sample size is small, the observed histogram is rough and the approximation poor. The (approximate) P-value is the area under the curve to the right of the observed chi-squared value: for a die with six equal category probabilities, it is the chance of a chi-squared statistic at least this large if the die really is fair.

We can define quantiles of the chi-square curve just as we did quantiles of other distributions: for $a$ between 0 and 1, the $a$ quantile of the chi-square curve with $k-1$ degrees of freedom is denoted $x_{k-1,a}$, and the critical value of the test at level $a$ is $x_{k-1,1-a}$.
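For illustration, a self-contained sketch (mine; the page's interactive demo is gone) of the goodness-of-fit statistic for 1000 rolls of a die tested against six equal category probabilities:

```python
import random

random.seed(1)
n, k = 1000, 6
p = [1 / k] * k                        # null hypothesis: fair die
rolls = [random.randrange(k) for _ in range(n)]

observed = [rolls.count(i) for i in range(k)]
expected = [n * pi for pi in p]        # e_i = n * p_i

chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(f"chi-squared = {chi2:.2f} with {k - 1} degrees of freedom")
# Compare against the chi-squared curve with k-1 = 5 degrees of freedom;
# its 95th percentile is about 11.07, so larger values are evidence
# against fairness at the 5% level.
```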
# Automorphism groups and etale topological stacks

Recall that an etale topological stack is a stack $\mathscr{X}$ over the category of topological spaces (and open covers) which admits a representable local homeomorphism $X \to \mathscr{X}$ from a topological space. Equivalently, it is a topological stack arising from an etale topological groupoid. It is well known that a differentiable stack is etale if and only if all of its automorphism groups are discrete, but the proof involves foliation theory. It seems this proof cannot be extended to the topological setting. However, clearly every etale topological stack has discrete isotropy groups. This begs the question:

If a topological stack has all of its isotropy groups discrete, is it necessarily etale?

EDIT: By a topological stack, I mean a stack $\mathscr{X}$ over the category of topological spaces (and open covers) which admits a representable epimorphism $X \to \mathscr{X}$ (not necessarily a local homeomorphism). This is equivalent to saying $\mathscr{X}$ is the stack of torsors for a topological groupoid.

Remark: This question is equivalent to asking if a topological groupoid all of whose isotropy groups are discrete must be Morita equivalent to an etale topological groupoid.
# SICP section 1.3 - formulating abstractions with higher-order procedures

## Exercise 1.32

The generalised form of sum and product can be defined (recursively) as follows:

```scheme
(define (accumulate combiner null-value term a next b)
  (if (> a b)
      null-value
      (combiner (term a)
                (accumulate combiner null-value term (next a) next b))))
```

Redefining sum and product using accumulate:

```scheme
(define (sum term a next b)
  (accumulate + 0 term a next b))

(define (product term a next b)
  (accumulate * 1 term a next b))
```

The iterative version of accumulate uses an internal procedure with two state variables - one tracks the value of $a$; the other accumulates the result, which is returned when the terminating condition $a > b$ is met.

```scheme
(define (accumulate-iter combiner null-value term a next b)
  (define (iter a result)
    (if (> a b)
        result
        (iter (next a) (combiner result (term a)))))
  (iter a null-value))
```

## Exercise 1.33

filtered-accumulate is a generalised version of accumulate that only combines terms that satisfy a given condition. Its implementation is similar to accumulate, with an added conditional.

```scheme
(define (filtered-accumulate combiner null-value term a next b filter)
  (define (next-fa)
    (filtered-accumulate combiner null-value term (next a) next b filter))
  (if (> a b)
      null-value
      (if (filter a)
          (combiner (term a) (next-fa))
          (next-fa))))
```

Using filtered-accumulate to define a procedure that computes the sum of the squares of prime numbers:

```scheme
(define (sum-square-prime a b)
  (filtered-accumulate + 0 square a inc b prime?))
```

…and a procedure that computes the product of integers less than $n$ that are relatively prime to $n$:

```scheme
(define (gcd a b)
  (if (= b 0)
      a
      (gcd b (remainder a b))))

(define (identity n) n)

(define (product-rel-prime n)
  (define (rel-prime? a)
    (= (gcd a n) 1))
  (filtered-accumulate * 1 identity 1 inc (- n 1) rel-prime?))
```

## Exercise 1.34

Attempting to evaluate (f f) produces the following error:

```
mit-scheme: The object 2 is not applicable.
dr-racket:  application: not a procedure; expected a procedure that can be
            applied to arguments given: 2 arguments...:
```

We can use the substitution model to get a clearer picture of the error:

```scheme
(define (f g) (g 2))

(f f)
(f 2)
(2 2)
```

The second invocation of f attempts to apply $2$, which is not a procedure, to $2$. No bueno.

## Exercise 1.37

For this exercise, I found that rewriting the k-term finite continued fraction as a single line helped me see more clearly how to implement it as a recursive process:

$$f = N_1 / (D_1 + (N_2 / (D_2 + (N_3 / (D_3 + (\ldots + (N_k / D_k)))))))$$

```scheme
(define (cont-frac n d k)
  (define (frac x)
    (if (= x k)
        (/ (n x) (d x))
        (/ (n x) (+ (d x) (frac (+ x 1))))))
  (frac 1))
```

To obtain an approximation of $1/\varphi$ accurate to 4 decimal places, $k = 10$.

```scheme
(cont-frac (lambda (i) 1.0)
           (lambda (i) 1.0)
           10)
```

The iterative version of cont-frac decrements a counter $x$ that begins at $k$, accumulates the result in $acc$, and returns the result when $x = 0$:

```scheme
(define (cont-frac-iter n d k)
  (define (frac x acc)
    (if (= x 0)
        acc
        (frac (- x 1) (/ (n x) (+ (d x) acc)))))
  (frac k 0))
```

## Exercise 1.38

For this fraction, we know that all values of $N_i$ are $1$. $D_i$, however, is slightly less straightforward. The (convoluted) pattern I derived from the $D_i$ series is $2(i + 1)/3$ if $(i + 1) \bmod 3 = 0$, for all $i > 1$.
```scheme
(define (d i)
  (cond ((= i 1) 1)
        ((not (= (remainder (+ i 1) 3) 0)) 1)
        (else (/ (* 2 (+ i 1)) 3))))

(+ (cont-frac (lambda (i) 1.0) d 10) 2)
```

## Exercise 1.39

Fairly obvious patterns in both series: $N_i = -x^2$ for $i > 1$, and $D_i = 2i - 1$.

```scheme
(define (tan-cf x k)
  (define (n k)
    (if (= k 1)
        x
        (- (square x))))
  (define (d k)
    (- (* 2.0 k) 1))
  (cont-frac n d k))
```

## Exercise 1.41

double can be defined as follows:

```scheme
(define (double f)
  (lambda (x) (f (f x))))
```

Applying the substitution model shows us that the procedure makes $2^4$ inc calls, which gives us the answer $5 + 16 = 21$.

```scheme
(((double (double double)) inc) 5)
(((double (lambda (x) (double (double x)))) inc) 5)
(((lambda (x) (double (double (double (double x))))) inc) 5)
((double (double (double (double inc)))) 5)
((double (double (double (lambda (x) (inc (inc x)))))) 5)
... ; Expands to 16 inc calls
21
```

## Exercise 1.42

Procedure composition!

```scheme
(define (compose f g)
  (lambda (x) (f (g x))))
```

## Exercise 1.43

Using compose from the previous exercise:

```scheme
(define (repeated f n)
  (if (= n 1)
      (lambda (x) (f x))
      (compose (repeated f (- n 1)) f)))
```

## Exercise 1.44

…and using repeated from the previous exercise. Note that it is smooth itself that must be repeated n times, and the resulting n-fold smoother is then applied to f:

```scheme
(define dx 0.00001)

(define (smooth f)
  (lambda (x)
    (/ (+ (f (- x dx)) (f x) (f (+ x dx))) 3.0)))

(define (n-fold-smooth f n x)
  (((repeated smooth n) f) x))
```

## Exercise 1.45

Analysis:

| Root | Damps |
|------|-------|
| 2    | 1     |
| 3    | 1     |
| 4    | 2     |
| 5    | 2     |
| 6    | 2     |
| 7    | 2     |
| 8    | 3     |
| 9    | 3     |
| 10   | 3     |
| 11   | 3     |
| 12   | 3     |
| 13   | 3     |
| 14   | 3     |
| 15   | 3     |
| 16   | 4     |

From this table, we can determine that $average\_damps = \log_2(root)$, rounded down to the nearest integer. Scheme does not have a log procedure that allows us to specify a base. To get around this, we can use the log change-of-base formula: $\log_b n = \log_a n / \log_a b$.

```scheme
(define (average x y) (/ (+ x y) 2.0))

(define (average-damp f)
  (lambda (x) (average x (f x))))

(define (expt b n)
  (if (= n 0)
      1
      (* b (expt b (- n 1)))))

(define (log-b2 n)
  (/ (log n) (log 2)))

(define (nth-root x n)
  (fixed-point ((repeated average-damp
                          (inexact->exact (floor (log-b2 n))))
                (lambda (y) (/ x (expt y (- n 1)))))
               1.0))
```

## Exercise 1.46

As specified in the exercise, the procedure that iterative-improve should return is one that keeps improving the guess until it is good enough, indicating that it needs to be a procedure that repeatedly calls itself. For a procedure to call itself, it needs a name - which is why we define an internal procedure and return it. If there is a way to solve this exercise using a lambda, I have not figured out how.

```scheme
(define (iterative-improve good-enough? improve)
  (define (improve-procedure guess)
    (if (good-enough? guess)
        guess
        (improve-procedure (improve guess))))
  improve-procedure)
```

Using iterative-improve to redefine sqrt:

```scheme
(define (average x y) (/ (+ x y) 2))
(define (square x) (* x x))

(define (sqrt x)
  ((iterative-improve
    (lambda (guess) (< (abs (- (square guess) x)) 0.001))
    (lambda (guess) (average guess (/ x guess))))
   1.0))
```

…and fixed-point:

```scheme
; Fixed point
(define tolerance 0.00001)

(define (fixed-point f first-guess)
  ((iterative-improve
    (lambda (guess) (< (abs (- guess (f guess))) tolerance))
    (lambda (guess) (f guess)))
   first-guess))
```
1. Mar 9, 2014
### Maylis

1. The problem statement, all variables and given/known data

A small adiabatic air compressor is used to pump air into a 20-m3 insulated tank. The tank initially contains air at 25°C and 101.33 kPa, exactly the conditions at which air enters the compressor. The pumping process continues until the pressure in the tank reaches 1,000 kPa. If the process is adiabatic and if compression is isentropic, what is the shaft work of the compressor? Assume air to be an ideal gas for which CP = (7/2)R and CV = (5/2)R.

2. Relevant equations

3. The attempt at a solution

I derived the relationship for the enthalpy of an adiabatic as dH = V dP. However, I also know ΔH = CpΔT. When I calculate the two enthalpies, I get different answers for the change in enthalpy, which I know should be equal to the isentropic work.

First off, how do I know my equation is only for an adiabatic and isentropic process, and would not work for something that is anisentropic and adiabatic? I realize the change in enthalpy is the change in molar enthalpy times the number of moles, but the number of moles is changing throughout the process until it reaches 1000 kPa. The other expression seems to bypass worrying about that. I am wondering which one, if either, is correct and why? In other words, how do I calculate 'isentropic enthalpy change' vs. enthalpy change that is not isentropic?

2. Mar 9, 2014
### Staff: Mentor

In my judgement, the best way to approach this problem is to focus first on the tank contents at the final set of conditions. We know the volume of air and we know its pressure, but we don't know the number of moles n or the final temperature T. However, we do know that all the air in the tank started out at 101.33 kPa and that it was compressed isentropically. From this information we can calculate the initial volume that the tank contents occupied. You know the relationship between pressure and volume for an isentropic compression. What do you calculate for the initial volume of the final tank contents?

All this volume of air was at 101.33 kPa and 25 C. What is the number of moles of air n in the tank at the final conditions? What was the number of moles of air within the tank at the initial conditions (before running the compressor)?

Another thing you know is the relationship between pressure and temperature for an isentropic compression. So, from one form or another of this relationship, what is the final temperature in the tank?

Let's take stock. You now know the initial and final number of moles of air in the tank, and you also know the initial and final temperatures of the air in the tank. This gives you enough information to calculate the change in internal energy between the initial and final states of the tank. What is the change in internal energy for the tank?

No mass of air exited the tank between the initial and final states. However, air did enter the tank from the compressor. Using the open system form of the first law, you should be able to calculate the integrated enthalpy of all the air that entered the tank, and, more importantly, you should be able to calculate the change in enthalpy for all the gas that left the turbine minus the enthalpy of all the gas that entered. What is that change?

Chet

3. Mar 9, 2014
### Maylis

I am curious, is something wrong with my V dP approach? Also, why would the gas in the tank initially occupy a space less than the volume of the tank? The gas will expand and occupy the volume of its container.
I calculate the final temperature of the tank to be the same temperature as the stream of air coming in from the compressor, 573.2 K. Is this right?

4. Mar 10, 2014
### Maylis

Here is my second attempt with analysis. I disregarded the statement about the volume, because I think it is a typo. I didn't actually have to solve for the internal energy change, as I knew the initial moles in the tank plus what was added was the final number of moles.

Attached: 5.1 attempt 2.pdf

5. Mar 10, 2014
### Staff: Mentor

I may have misinterpreted the problem, regarding the pressure of the air exiting the compressor. I was going to assume that the compressor is run in such a way that the gas exiting the compressor at any instant is only slightly higher in pressure than the gas within the tank. That is probably not what the problem intended. In the case that you have considered, there is a valve at the entrance to the tank that drops the pressure down from the compressor exit pressure to the tank pressure. I think that that's what they intended.

Chet

6. Mar 10, 2014
### Maylis

Would it be right to say the final temperature of the tank is the same as the temperature of the stream exiting the compressor?

7. Mar 10, 2014
### Staff: Mentor

No. You need to use the flow version of the first law to figure it out. The unknowns are the number of moles that enter and the final temperature in the tank. The temperature is going to be higher than the exit temperature of the compressor. Your equations are the flow version of the first law and the ideal gas law. You know the entering specific enthalpy, and the initial internal energy in the tank.

Chet

8. Mar 10, 2014
### Maylis

Am I to assume that the air is an ideal gas at the final state, at 1000 kPa? That is quite a high pressure, so I am not sure if I should assume it to be ideal.

I think I am having confusion because there are effectively two compressions going on here. First of all, and most obvious, the compressor is compressing the gas that flows into it. Secondly, the gas in the tank is being compressed by the other air that was already compressed.

Do they mean that both the compressor and filling up the tank are isentropic? The problem statement says the process is adiabatic and compression is isentropic. This makes me confused because it doesn't specify which one is adiabatic and isentropic. Are they saying the processes of the compressor as well as filling up the tank are both adiabatic and isentropic?

I am just using the equation here relating T and P for isentropic compression: http://www.grc.nasa.gov/WWW/k-12/airplane/compexp.html

That is how I get 573.2 K. That is what I think is the temperature of the stream leaving the compressor and entering the tank. However, if I do the isentropic compression equation for the tank, I get the exact same result, because the stream entering the compressor is at the same conditions as the tank was initially, and the pressure in the tank at the final state is the same as the pressure of the gas leaving the compressor.

9. Mar 10, 2014
### Maylis

Also, I do not know the initial internal energy of the tank, nor do I know the specific enthalpy entering the tank. I could find the change in enthalpy of the gas from the entrance to the compressor to the exit stream, but no absolute values.
I know the change in enthalpy of the gas going through the compressor, so all I need is the moles that entered the tank to figure out my shaft work, knowing that the moles entering multiplied by the change in enthalpy is the shaft work.

10. Mar 10, 2014
### Staff: Mentor

The problem statement says to assume the air is an ideal gas. The entropy is not constant in the injection of the high pressure air into the tank. The air has to pass through the inlet valve, and this is not a constant entropy process. However, it is a constant enthalpy process.

11. Mar 10, 2014
### Maylis

I'm stuck with my mass and energy balance. My unknowns are the initial and final internal energy, the enthalpy entering the tank, and the number of moles of air entering. I have no idea how to find the enthalpy entering the tank, nor the initial internal energy of the tank.

12. Mar 10, 2014
### Staff: Mentor

You need to express the internal energy relative to a reference state. The form of the first law applicable to the tank is:
$$nh_{in}=(n+n_0)u-n_0u_0$$
where hin is the enthalpy per mole of the gas entering through the valve, n is the number of moles of gas that enters through the valve, n0 is the moles of gas in the tank to begin with, u is the final internal energy per mole of the gas in the tank, and u0 is the initial internal energy per mole of the gas in the tank.

If we take as the reference state air at 25 C and 1 atm, then

hin = uin + RTin = 5R(573.2-298.2)/2 + 573.2R
u0 = 0
u = 5R(T-298.2)/2

You use the ideal gas law to express n+n0 in terms of the final pressure, the tank volume, and the unknown temperature T, and you already know n0. This will give you what you need to solve for T.

Chet

13. Mar 10, 2014
### Maylis

I'm not seeing where you are getting the expression for H_in. I would say H_in is 7/2 R (573.2-298). I didn't realize that I should set the air as the reference state; that is very good to know.

I did it using my expression for H_in, and got T = 1796.4 K, quite warm. Using your expression, I get T = 1966.8 K.

http://www.wolframalpha.com/input/?i=solve+%282.406*10^5%2Fx%29*%2820.785%28x-298%29%29-%282.522*10^10%2Fx%29%2B8.58*10^6%3D0

14. Mar 11, 2014
### Maylis

Here is my analysis for my 3rd attempt at this problem. Getting so close to the answer... or maybe it's right?

Attached: 5.1 attempt 3.pdf

15. Mar 11, 2014
### Staff: Mentor

My first term is the internal energy, and my second term is Pvin, where vin is the molar volume of the feed. This is equal to RTin (since Pv = RT for an ideal gas). Another way of writing this is 7/2 R (573.2-298) + 298R, where 298R is the value of Pv in the reference state (which is equal to the enthalpy in the reference state, since u in the reference state is zero). I haven't run the calculation yet, but I will do so, and see if my result checks with yours.

Chet

16. Mar 11, 2014
### Staff: Mentor

I wasn't able to wade through the details of your math, but I did the calculation and got T2 = 686 K.

Chet

17. Mar 11, 2014
### Maylis

I was talking to the graduate student instructor, who was not able to solve the problem either. We ended up looking at the instructor's solution manual, and it seems like the manual suggests that the tank is compressed isentropically as well. Its final temperature is the same as the entering stream. Their energy balance includes the shaft work, which makes absolutely no sense to me.
The control volume is the tank. They say d(nU) = n_in*H_in + Ws.

18. Mar 11, 2014
### Staff: Mentor

I'm as confused as you are. I'm confident that we did it correctly for the interpretation of the problem that we employed.

Chet

19. Mar 11, 2014
### Staff: Mentor

This last equation looks like the equation I was going to use in my original interpretation of the problem. The system here would include both the compressor and the tank (so that the shaft work would be included) and the "in"s would apply to the inlet of the compressor. The equation, which includes shaft work, could not apply to the tank alone.

Chet

20. Mar 11, 2014
### Maylis

Yes. It turns out they chose the control volume to be the tank and compressor. They say the final tank temperature is the same as the stream entering the tank. Here is the solution with the correct answer, according to the solution manual.
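For reference, a sketch (mine, not from the thread) that reproduces the numbers discussed above under Chet's interpretation: the compressor exit is isentropic at 573 K, the inlet valve is isenthalpic, and the tank balance from post #12 is solved for the final temperature by bisection.

```python
# Solve n*h_in = (n + n0)*u(T) for the final tank temperature T, where
#   n + n0 = Pf V / (R T)    (ideal gas, final tank contents)
#   n0     = P0 V / (R T0)   (initial tank contents)
#   h_in   = (5/2) R (T_in - T0) + R T_in  (molar enthalpy, with u(T0) = 0)
#   u(T)   = (5/2) R (T - T0)              (molar internal energy)
R = 8.314                    # J/(mol K)
V = 20.0                     # m^3
T0 = 298.2                   # K (25 C reference state)
P0, Pf = 101330.0, 1.0e6     # Pa

# Isentropic compressor exit: T_in = T0 * (Pf/P0)^(R/cp), cp = 7R/2
T_in = T0 * (Pf / P0) ** (2 / 7)   # ~573 K, matching the thread

n0 = P0 * V / (R * T0)
h_in = 2.5 * R * (T_in - T0) + R * T_in
u = lambda T: 2.5 * R * (T - T0)

def residual(T):
    n_total = Pf * V / (R * T)
    return (n_total - n0) * h_in - n_total * u(T)

lo, hi = 300.0, 1500.0       # bracket found by inspection
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if residual(lo) * residual(mid) <= 0:
        hi = mid
    else:
        lo = mid
print(f"T_in = {T_in:.1f} K, final tank T = {mid:.0f} K")  # ~686 K, as in post #16
```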
# How to draw industry schematic diagram in LaTeX?

Is it possible to create the following diagram in LaTeX? I am interested in the main body; any colour inside the box is fine.

- Hi giannis :) and welcome to TeX.sx. In its current form, your question might not receive many answers. Please take a look at the How to Ask page and try to improve your question according to the guidance found there. This may require you to show some effort on your part in terms of attempting a solution. If you have questions about what to do or if you don't quite understand what this means, please ask for clarification using the add comment function. Posting a MWE may satisfy the downvoter. – texenthusiast Apr 2 '13 at 15:19
- Usually, we don't put a greeting or a "thank you" in our posts to keep questions very concise. Accepting and upvoting answers is the preferred way here to say "thank you" to users who helped you. I have rephrased the question and its tone to get it noticed. – texenthusiast Apr 2 '13 at 15:41
- I'm gonna really consider all the above. Thanks for your comments. – giannis Apr 2 '13 at 20:26

## 1 Answer

```latex
\documentclass{standalone}
\usepackage{tikz}
\usetikzlibrary{positioning}
\usetikzlibrary{decorations.pathreplacing}
\begin{document}
\begin{tikzpicture}[align=left]
\node [draw] (home) {Home\\(capital-abundant)};
\draw (home.east) -- ++(2,0) node [above] (cloth) {Cloth}
      -- ++(2,0) node [above] (food) {Food}
      -- ++(1,0) node [coordinate] (topend) {};
\node[below=of home] (mid) {};
\draw[dashed] (cloth |- mid) -- (topend |- mid);
\node [draw, below=of mid] (foreign) {Foreign\\(labor-abundant)};
\draw (foreign.east) -- (topend |- foreign);
\draw[->] (cloth) -- (cloth |- foreign);
\draw[->] (food |- mid) -- (food);
\node [coordinate,right=0.1cm of cloth] (c1) {};
\draw[->] (c1 |- foreign) -- (c1 |- mid);
\draw[decorate,decoration={brace, amplitude=4pt}]
      (topend) -- (topend |- mid) node[midway, right=3pt]{Interindustry};
\draw[decorate,decoration={brace, amplitude=4pt}]
      (topend |- mid) -- (topend |- foreign) node[midway, right=3pt]{Intraindustry};
\end{tikzpicture}
\end{document}
```

- Welcome to TeX.SX. Nice answer, but it is better to turn the code snippet into a minimal working example (MWE) starting with \documentclass{...} and ending with \end{document}. Also showing the result helps in making the answer clearer. :) – Claudio Fiandrino Apr 2 '13 at 15:54
- Would be lovely if we could integrate tex previews directly into the site. – u0b34a0f6ae Apr 2 '13 at 20:15
- @u0b34a0f6ae It's work in progress thru online compiler – texenthusiast Apr 2 '13 at 20:19
- @u0b34a0f6ae thank you very much. I'm gonna try it! – giannis Apr 2 '13 at 20:25
- giannis, it should be easy to use the line widths and background colors you'd like. – u0b34a0f6ae Apr 2 '13 at 20:27
# Potassium tetraoxalate preparation [closed]

How is potassium tetraoxalate produced industrially? In fact, how can I limit the reaction of $\ce{H2C2O4}$ and $\ce{KOH}$? Or is there any other route for this purpose?

If you react potassium hydroxide with oxalic acid in a 1:1 mole ratio, you will get potassium hydrogenoxalate ($\ce{KHC2O4}$). Its hydrate form exists over a specific temperature range. This salt contains the hydrogenoxalate anion and hence is known as acid potassium oxalate. But if you add an excess of concentrated oxalic acid while maintaining a temperature below $\pu{50 °C}$, the much less soluble potassium tetraoxalate ($\ce{K+[C2HO4]^{-}.C2H2O4}$) forms and precipitates out of solution, since it is sparingly soluble in water. So your target compound depends on the concentration of the reagents and the temperature.
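In equation form, the two regimes described above are (my summary, in the same mhchem notation the question already uses; anhydrous formulas shown for simplicity):

```latex
% 1:1 mole ratio: neutralisation stops at the hydrogenoxalate
\ce{KOH + H2C2O4 -> KHC2O4 + H2O}

% excess oxalic acid, below ~50 °C: the sparingly soluble tetraoxalate precipitates
\ce{KHC2O4 + H2C2O4 -> KHC2O4*H2C2O4 v}
```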
# Philosophy of Utility Maximisation

Over the weekend, I skimmed through the Springer Undergraduate Mathematics Series book 'Game Theory' by James Webb. It's a book that has been sitting on my bookcase for a while, and a topic which sits at only one step removed from what I am particularly interested in. But somehow, possibly because of its almost complete absence from the Cambridge maths course, I had never got round to reading the book or otherwise finding out much about the field. This has now changed, so I am writing three short remarks on some aspects I thought were of particular interest.

——–

Consider the following. I offer you the following bet. I toss a fair coin: if it comes up heads, you have to pay me £1, but if it comes up tails, I will give you £3. Would you take the bet? Obviously you are the only person who can actually answer that, but I'd expect the only obstacle to you taking the bet might be indifference. Why? Well, your expected profit from this bet is £1, which is positive, and so it certainly seems that you are getting a better deal.

Now let's up the stakes. If the coin comes up heads, you have to pay me £1,000,000, but if it comes up tails, you get £3m. Do you still take the bet? Well, unless you are in a very fortunate financial state, you shouldn't be taking this bet, because you are not in a position to honour your commitment in the event (with probability 50%) that you lose.

But what about an intermediate bet? Suppose we change the stakes to £100 and £300. Or to £1,000 and £3,000. Would you take the bet? Without getting too bogged down in what the answer is, and what personal circumstances might lead to each possibility, suffice it to say this: each of the adjusted bets still offers a substantial positive expected profit, but the real risk of losing a large sum of money makes one less inclined to take up the offer.

Traditionally, we can describe this apparent logical problem like this. How much we value sums of money is not always proportional to the sum itself. Although some mathematicians might disagree, we don't value numbers intrinsically; instead we value the effects that sums of money can have on our lives. More concretely, winning the lottery is awesome, and winning the lottery twice is even more awesome. However, it probably isn't twice as good as winning it once, since many of the best things about winning the lottery, like retiring and paying off your mortgage or your student loan, can only be done once.

Mathematically, this is best represented by a utility function. This is a map from the set of outcomes of the bet or experiment to the reals, specifying the relative value the agent assigns to different outcomes (monetary or otherwise). We need the utility function to satisfy a few obvious rules. If it is a function of the profit or return, we expect it to be strictly increasing. That is, everyone places a higher utility on a larger sum of money. If we are considering utility as a function of (possibly non-numerical) outcomes, e.g. elements in a general probability space, then we need it to be transitive. That is, it describes a partial ordering on the set. We would also expect that all outcomes can be compared, so it is in fact a total ordering. The agent's aim is then instead to maximise their expected utility $\mathbb{E}U(x)$.

There are a couple of questions that are natural to ask:

1) Can all factors influencing a decision really be expressed numerically?

2) Is maximising some utility function actually the right thing to do?
We ask question 1) because we might think that some factors that influence our decisions, such as fear, superstition or some code of morality, cannot be quantified explicitly. We ask question 2) because at first it doesn't seem as if maximising expected utility is any different to maximising expected monetary value. This needs some clarification.

The best way to do this is through the notion of risk aversion. This is an alternative way to think about the original problem with the £1,000 vs £3,000 bet. The risk or uncertainty of a random variable corresponding to a financial return is described by the variance. In general, we assume that most investors are risk-averse, that is, they prefer a guaranteed return of £10 rather than a non-trivial random variable with expectation equal to £10. So instead of maximising expectation, we maximise $\mathbb{E}X-\alpha \mathrm{Var}(X)$, where $\alpha$ is some positive constant. Of course, if an agent for some reason prefers risky processes, then choose $\alpha<0$. So the question we are asking is: might this be happening for utility maximisation as well? More precisely, perhaps we need to consider risk-aversion even for utilities?

The way to resolve the problem is to reverse how we are looking at the situation. Tempting though it is to define the utility as a relative value assigned to outcomes, this forces us into a position where we have to come up with some method for assigning these in specific cases, most of which suggest problems of some kind. Although in practice the previous definition is fine and easy to explain, what we actually want is the following. When we make a choice, we must be maximising something. Define the utility to be the function such that maximising the expectation of this function corresponds to the agent's decision process.

This is much more satisfactory, provided such a function is actually well-defined. Von Neumann and Morgenstern showed that it is, provided the agent's preferences have a little bit more structure than discussed above. The further conditions required are continuity and independence, and are completely natural, but I don't want to define them now because I haven't done much of the necessary notation here. In many ways, this is much more satisfying. We don't always want precise enumerations of all contributory factors, but it is reassuring to know that, subject to some entirely reasonable conditions, there is some structure under which we are acting optimally whenever we make a decision.
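To make the risk-aversion story concrete, here is a small numerical sketch of my own (the wealth level is an illustrative assumption, not something from the book). With a concave utility $U(x) = \log x$ applied to final wealth, the agent takes the small bets but declines the £1,000/£3,000 one, even though all three have positive expected monetary value:

```python
import math

WEALTH = 1100.0   # assumed current wealth; purely illustrative

def eu_take(loss, win, p=0.5):
    """Expected log-utility of final wealth if the bet is taken."""
    return p * math.log(WEALTH - loss) + (1 - p) * math.log(WEALTH + win)

eu_decline = math.log(WEALTH)
for loss, win in [(1, 3), (100, 300), (1000, 3000)]:
    decision = "take" if eu_take(loss, win) > eu_decline else "decline"
    print(f"pay {loss} / win {win}: {decision}")
# Expected profit is positive in all three cases, yet the largest bet is declined.
```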
## Auxiliary Programs — EnergyPlus 8.6

### Auxiliary Programs

The following files are input to the EnergyPlus program.

The input data dictionary (IDD) is an ASCII (text) file containing a list of all possible EnergyPlus objects and a specification of the data each object requires. This file is analogous to the DOE-2 keyword file. The Guide for Interface Developers contains a full description of the input data dictionary.

The input data file (IDF) is an ASCII file containing the data describing the building and HVAC system to be simulated. The Guide for Interface Developers shows examples of IDF input. Many example files are installed as part of the EnergyPlus installation.

The input macro file (IMF) is an ASCII file that is formatted for the EP-Macro program. Output from the EP-Macro program will be the standard in.idf format. IMF files are not directly read by EnergyPlus.

Energy+.ini is the EnergyPlus initialization file. It is an optional ASCII input file that allows the user to specify the path for the directory containing Energy+.idd. This file, using the actual directories of the install, will be created during the install. An example is:

```
[program]
dir = C:\EnergyPlus

[weather]
dir =

[BasementGHT]
dir = PreProcess\GrndTempCalc

[SlabGHT]
dir = PreProcess\GrndTempCalc
```

Under [program], dir should indicate the folder where EnergyPlus is installed (e.g. C:\Program Files\EnergyPlusV2-0-0 or C:\EnergyPlusV2-0-0). This is automatically generated during the install and may be the "shortened form" of these folder names. The [weather] portion of the initialization file is unused for normal EnergyPlus. [BasementGHT] and [SlabGHT] are used by the EP-Launch program when the Utilities tab is used to execute the Basement and Slab programs, respectively.

The EnergyPlus weather file is an ASCII file containing the hourly or sub-hourly weather data needed by the simulation program. The data format is described in this document in the section: EnergyPlus Weather File (EPW) Data Dictionary.

More, and more up-to-date, information about output files is given in the Output Details and Examples document.

The error file is a text file containing the error messages issued by EnergyPlus. This is the first output that should be examined after a simulation. Error messages are issued by EnergyPlus during its input phase or during the simulation. There are three levels of error severity: fatal, severe, and warning, as well as simple "message" lines. A fatal error causes the program to terminate immediately. The following table illustrates the necessary actions.

Error Message Levels - Required Actions:

| Error Level   | Action |
|---------------|--------|
| "Information" | Informative, usually a follow-on to one of the others. No action required. |
| Warning       | Take note. Fix as applicable. |
| Severe        | Should fix. |
| Fatal         | Program will abort. |

An example of an error message due to an input syntax error is:

```
** Severe  ** Did not find " DessignDay" in list of Objects
**  Fatal  ** Errors occurred on processing IDF file - probable incorrect IDD file.
             View "audit.out" for details.
************* EnergyPlus Terminated--Error(s) Detected.
```

The audit file is a text file which echoes the IDD and IDF files, flagging syntax errors in either file. Note that both eplusout.err and eplusout.audit will show the error messages caused by input syntax errors; however, only eplusout.err will show errors issued during the actual simulation. eplusout.audit can be used when you need to see the context of the error message to fully ascertain the cause.
The EnergyPlus Standard Output (ESO) is a text file containing the time-varying simulation output. The format of the file is discussed in the Guide for Interface Developers and the InputOutputReference. The contents of the file are controlled by Report Variable commands in the IDF file. Although the ESO is a text file, it is not easily interpretable by a human. Usually postprocessing will be done on this file in order to put it in a format that can be read by a spreadsheet; however, a quick visual inspection of the file does show whether the expected variables are output at the desired time step.

The EnergyPlus Meter Output (MTR) is a text file containing the time-varying simulation output. The format of the file is similar to the ESO file. Meters are a powerful reporting tool in EnergyPlus. Values are grouped onto logical meters and can be viewed the same way that the ESO variables are used. The contents of the file are controlled by Report Meter commands in the IDF file. Although the MTR is a text file, it is not easily interpretable by a human. Usually postprocessing will be done on this file in order to put it in a format that can be read by a spreadsheet; however, a quick visual inspection of the file does show whether the expected variables are output at the desired time step.

The EnergyPlus Invariant Output (EIO) is a text file containing output that does not vary with time. For instance, location information (latitude, longitude, time zone, altitude) appears in this file.

The Report (variable) Data Dictionary (RDD) is a text file listing those variables available for reporting (on the ESO or MTR) for this particular simulation. Which variables are available for output on the ESO or MTR depends on the actual simulation problem described in the IDF. A simulation with no chiller would not permit the output of any chiller report variables. The user may need to examine the RDD to find out which report variables are available in a particular simulation. The RDD is written only if Output:VariableDictionary, <either Regular or IDF>; appears in the input (IDF) file.

The debug file is a text file containing debug output for use by EnergyPlus developers. Generally, developers will add debug print statements wherever in the code they wish. There is a "standard" debug output that prints out conditions at all the HVAC nodes. This output is triggered by placing DEBUG OUTPUT,1; in the IDF file. If DEBUG OUTPUT, 0 is entered, you will get an empty eplusout.dbg file.

The DXF output is a file in AutoCAD DXF format showing all the surfaces defined in the IDF file. It provides a means of viewing the building geometry. The DXF file from EnergyPlus highlights different building elements (shading, walls, subsurfaces) in differing colors. A number of programs can read and display DXF files. One that works well is Volo View Express, available free from the Autodesk web site. Output of this file is triggered by Report, Surfaces, DXF; in the IDF.

There is also a text file containing the coordinates of the vertices of the surfaces in the IDF. Output of this file is triggered by Report, Surfaces, Lines; in the IDF.
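Putting the report triggers mentioned above in one place, a minimal IDF fragment might look like the following. This is illustrative only, assembled from the directives quoted in this section; `!` starts a comment in IDF files.

```
! Request the report-variable dictionary (RDD)
Output:VariableDictionary, Regular;

! Request the DXF drawing of all surfaces
Report, Surfaces, DXF;

! Request the surface vertex listing
Report, Surfaces, Lines;
```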
# The MIT Encyclopedia of Communication Disorders

Edited by Raymond D. Kent

- Hardcover | $100.00 Short | £68.95 | ISBN: 9780262112789 | 648 pp. | 8.5 x 11 in | 128 illus. | September 2003
- Ebook | $70.00 Short | ISBN: 9780262252423 | 648 pp. | 8.5 x 11 in | 128 illus. | September 2003

## Overview

A massive reference work on the scale of MITECS (The MIT Encyclopedia of Cognitive Sciences), The MIT Encyclopedia of Communication Disorders will become the standard reference in this field for both research and clinical use. It offers almost 200 detailed entries, covering the entire range of communication and speech disorders in children and adults, from basic science to clinical diagnosis.

MITECD is divided into four sections that reflect the standard categories within the field (also known as speech-language pathology and audiology): Voice, Speech, Language, and Hearing. Within each category, entries are organized into three subsections: Basic Science, Disorders, and Clinical Management. Basic Science includes relevant information on normal anatomy and physiology, physics, psychology and psychophysics, and linguistics; this provides a scientific foundation for entries in the other subsections. The entries that appear under Disorders offer information on the definition and characterization of specific disorders, and tools for their identification and assessment. The Clinical Management subsection describes appropriate interventions, including behavioral, pharmacological, surgical, and prosthetic.

Because the approach to communication disorders can be quite different for children and adults, many topics include separate entries reflecting this. Although some disorders that are first diagnosed in childhood may persist in some form throughout adulthood, many disorders can have an onset in either childhood or adulthood, and the timing of onset can have many implications for both assessment and intervention.

Topics covered in MITECD include cochlear implants for children and adults, pitch perception, tinnitus, alaryngeal voice and speech rehabilitation, neural mechanisms of vocalization, holistic voice therapy techniques, computer-based approaches to children's speech and language disorders, neurogenic mutism, regional dialect, agrammatism, global aphasia, and psychosocial problems associated with communicative disorders.
# calc_longterm_percentile

Calculates the long-term percentiles from a daily streamflow data set. Calculates statistics from all values, unless specified. Returns a tibble with statistics.

```r
calc_longterm_percentile(
  data,
  dates = Date,
  values = Value,
  groups = STATION_NUMBER,
  station_number,
  percentiles,
  roll_days = 1,
  roll_align = "right",
  water_year_start = 1,
  start_year,
  end_year,
  exclude_years,
  complete_years = FALSE,
  months = 1:12,
  transpose = FALSE
)
```

## Arguments

- `data`: Data frame of daily data that contains columns of dates, flow values, and (optional) groups (e.g. station numbers). Leave blank if using the `station_number` argument.
- `dates`: Name of column in `data` that contains dates formatted YYYY-MM-DD. Only required if the dates column name is not 'Date' (default). Leave blank if using the `station_number` argument.
- `values`: Name of column in `data` that contains numeric flow values, in units of cubic metres per second. Only required if the values column name is not 'Value' (default). Leave blank if using the `station_number` argument.
- `groups`: Name of column in `data` that contains unique identifiers for different data sets, if applicable. Only required if the groups column name is not 'STATION_NUMBER'. Function will automatically group by a column named 'STATION_NUMBER' if present. Remove the 'STATION_NUMBER' column beforehand to remove this grouping. Leave blank if using the `station_number` argument.
- `station_number`: Character string vector of seven-digit Water Survey of Canada station numbers (e.g. "08NM116") of which to extract daily streamflow data from a HYDAT database. Requires the tidyhydat package and a HYDAT database. Leave blank if using the `data` argument.
- `percentiles`: Numeric vector of percentiles (e.g. c(5,10,25,75)) to calculate. Required.
- `roll_days`: Numeric value of the number of days to apply a rolling mean. Default 1.
- `roll_align`: Character string identifying the direction of the rolling mean from the specified date, either by the first ('left'), last ('right'), or middle ('center') day of the rolling n-day group of observations. Default 'right'.
- `water_year_start`: Numeric value indicating the month (1 through 12) of the start of the water year for analysis. Default 1.
- `start_year`: Numeric value of the first year to consider for analysis. Leave blank to use the first year of the source data.
- `end_year`: Numeric value of the last year to consider for analysis. Leave blank to use the last year of the source data.
- `exclude_years`: Numeric vector of years to exclude from analysis. Leave blank to include all years.
- `complete_years`: Logical value indicating whether to include only years with complete data in analysis. Default FALSE.
- `months`: Numeric vector of months to include in analysis (e.g. 6:8 for Jun-Aug). Leave blank to summarize all months (default 1:12).
- `transpose`: Logical value indicating whether to transpose rows and columns of results. Default FALSE.

## Value

A tibble data frame of long-term percentiles of selected years and months.

## Examples

```r
# Run if HYDAT database has been downloaded (using tidyhydat::download_hydat())

# Calculate the 20th percentile flow value from a flow record
calc_longterm_percentile(station_number = "08NM116",
                         percentile = 20)

# Calculate the 90th percentile flow value with custom years
calc_longterm_percentile(station_number = "08NM116",
                         start_year = 1980,
                         end_year = 2010,
                         percentile = 90)
```

```
#> Warning: Calculation ignored missing values in data. Filter data to complete years or months if desired.
#> # A tibble: 1 x 2
#>   STATION_NUMBER   P90
#>   <chr>          <dbl>
#> 1 08NM116         19.3
```
## Paper Summary

- Abandon RNNs and use only CNNs for the Seq2Seq (machine translation) task; even so, CNN-versus-RNN comparisons run through the whole paper.
- RNNs have a chain structure and cannot be trained in parallel; CNNs (hierarchical structure) can, which greatly reduces computational complexity.

(This post mainly expands on the content of Section 3 of the paper. It is written as a walkthrough: I quote one sentence at a time, then explain my understanding of it, picking out the sentences I consider important. It is best read with some familiarity with the paper, or with the paper open on the left and this post on the right. I have read many papers before, sketching notes on paper as I went; this is the first paper note I have written up.)

## 1. Introduction

CNN vs RNN:

1. Chain vs hierarchical structure; parallel computation
2. Context dependence
3. Fixed input/output length
4. Computational complexity

RNNs are chain-structured; CNNs are hierarchical.

> Compared to recurrent layers, convolutions create representations for fixed size contexts, however, the effective context size of the network can easily be made larger by stacking several layers on top of each other. This allows to precisely control the maximum length of dependencies to be modeled.

- A CNN's input is restricted to a fixed size, but longer text can be handled simply by stacking more layers on top of each other.
- This structure makes the maximum dependency length precisely controllable.

> Convolutional networks do not depend on the computations of the previous time step and therefore allow parallelization over every element in a sequence. This contrasts with RNNs which maintain a hidden state of the entire past that prevents parallel computation within a sequence.

- Every word in the sentence is computed in parallel, without depending on the previous word's computation; this is the opposite of the RNN's hidden state.

> Multi-layer convolutional neural networks create hierarchical representations over the input sequence in which nearby input elements interact at lower layers while distant elements interact at higher layers. Hierarchical structure provides a shorter path to capture long-range dependencies compared to the chain structure modeled by recurrent networks, e.g. we can obtain a feature representation capturing relationships within a window of n words by applying only O(n/k) convolutional operations for kernels of width k, compared to a linear number O(n) for recurrent neural networks.

> Inputs to a convolutional network are fed through a constant number of kernels and non-linearities, whereas recurrent networks apply up to n operations and non-linearities to the first word and only a single set of operations to the last word.

- CNN: a constant number of kernels and non-linearities per input.
- RNN: up to n operations; the first word passes through n non-linearities, while the last word gets only a single set of operations.

> In this paper we propose an architecture for sequence to sequence modeling that is entirely convolutional. Our model is equipped with gated linear units (Dauphin et al., 2016) and residual connections (He et al., 2015a). We also use attention in every decoder layer and demonstrate that each attention layer only adds a negligible amount of overhead.

- Entirely convolutional.
- Along the way: GLUs, residual connections, and attention.

## 2. Recurrent Sequence to Sequence Learning

- Input sequence: $x = (x_1,\dots,x_m)$
- Encoder embedding: $w = (w_1,\dots,w_m)$
- State representation: $z = (z_1,\dots,z_m)$

================================================= [Encoder]

- Conditional input: $c = (c_1,\dots,c_i,\dots)$

================================================= [Decoder]

- Hidden state: $h = (h_1,\dots,h_n)$
- Decoder embedding: $g = (g_1,\dots,g_n)$
- Output sequence: $y = (y_1,\dots,y_n)$

Because this is an encoder-decoder architecture, the computations mirror each other on the two sides.
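A quick sanity check of the O(n/k) claim (my own sketch, not from the paper): stacking L stride-1 convolutions with kernel width k grows the receptive field to k + (L-1)(k-1), so covering a window of n words needs roughly (n-1)/(k-1), i.e. O(n/k), layers.

```python
def receptive_field(layers, k):
    """Receptive field of `layers` stacked stride-1 convolutions, kernel width k."""
    return k + (layers - 1) * (k - 1)

n, k = 25, 5
layers = 1
while receptive_field(layers, k) < n:
    layers += 1
print(layers, receptive_field(layers, k))  # 6 layers cover a 25-word window: O(n/k)
```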
{}
# Rational and Irrational Numbers

## Definition

Rational numbers are numbers that can be expressed as the ratio of two integers. Rational numbers follow the rules of arithmetic and all rational numbers can be reduced to the form $\frac{a}{b}$, where $a$ and $b$ are integers, $b\neq0$ and $\gcd(a,b)=1$. Rational numbers are often denoted by $\mathbb{Q}$. These numbers are a subset of the real numbers, which comprise the complete number line and are often denoted by $\mathbb{R}$. Real numbers that cannot be expressed as the ratio of two integers are called irrational numbers.

## Worked Examples

### 1. Determine the value of $0.\overline{238095}$.

The line over $238095$ denotes that it is a repeating decimal, of the form $0.238095238095238095\ldots$.

Solution. Let $S = 0.\overline{238095}$. Then $1000000S = 238095.\overline{238095}$, and subtracting the first equation from the second, we obtain

$$\begin{aligned} 1000000S & = 238095.\overline{238095} \\ S & = \phantom{23809}0.\overline{238095} \\ \hline 999999S & = 238095 \end{aligned}$$

Hence, $S = \frac {238095}{999999} = \frac {5}{21}$.

### 2. Show that $\sqrt{2} + \sqrt{3}$ is not rational.

Solution: We give a proof by contradiction. If $\sqrt{2}+\sqrt{3}$ is rational, then $(3-2) \times \frac {1}{\sqrt{2} + \sqrt{3}} = \sqrt{3}-\sqrt{2}$, implying $\sqrt{3} - \sqrt{2}$ is also rational. Since $(\sqrt{3} + \sqrt{2}) - (\sqrt{3}-\sqrt{2}) = 2 \sqrt{2}$, we obtain that $2 \sqrt{2}$ is rational. Thus, $2 \sqrt{2} \times \frac {1}{2} = \sqrt{2}$ is also rational, which is a contradiction.

This generalizes to the result that $\sqrt{D}$ is rational if and only if $D$ is a perfect square.

### Theorem.

Given integers $n$ and $m$, if $n^{\frac {1}{m}}$ is rational, then $n^{\frac {1}{m}}$ is an integer. In particular, the only rational $m^{th}$ roots of integers $n$ are the integers.

Proof: Let $n^{\frac {1}{m}}= \frac {a}{b}$, where $a$ and $b$ are coprime integers. Then taking $m^{th}$ powers and clearing denominators gives $b^m n = a^m$. If $p$ is a prime that divides $b$, then $p$ divides $b^m n$, so $p$ divides $a^m$ and thus $p$ divides $a$. Since $a$ and $b$ are coprime, there is no prime that divides both $a$ and $b$. Hence, no prime divides $b$, implying $b=1$. Therefore, $n^{\frac {1}{m}} = a$ is an integer. $_\square$

### Rational Root Theorem.

If $f(x) = p_n x^n + p_{n-1} x^{n-1} + \ldots + p_1 x + p_0$ has a rational root of the form $r = \frac {a}{b}$ with $\gcd (a,b)=1$, then $a \vert p_0$ and $b \vert p_n$.

Proof: Suppose $\frac {a}{b}$ is a root of $f(x)$. Then

$p_n \left(\frac {a}{b} \right)^n + p_{n-1} \left(\frac {a}{b} \right)^{n-1} + \ldots + p_1 \frac {a}{b} + p_0 = 0.$

By shifting the $p_0$ term to the right hand side, and multiplying throughout by $b^n$, we obtain $p_n a^n + p_{n-1} a^{n-1} b + \ldots + p_1 ab^{n-1} = -p_0 b^n$. Notice that the left hand side is a multiple of $a$, thus $a| p_0 b^n$. Since $\gcd(a, b)=1$, Euclid's Lemma implies $a | p_0$.

Similarly, if we shift the $p_n$ term to the right hand side and multiply throughout by $b^n$, we obtain $p_{n-1} a^{n-1} b + p_{n-2} a ^{n-2} b^2 + \ldots + p_1 a b^{n-1} + p_0 b^n = -p_n a^n.$ Note that the left hand side is a multiple of $b$, thus $b | p_n a^n$. Since $\gcd(a,b)=1$, Euclid's Lemma implies $b | p_n$. $_\square$

In particular, this tells us that if we want to check for 'nice' rational roots of a polynomial $f(x)$, we only need to check finitely many numbers of the form $\pm \frac {a}{b}$, where $a | p_0$ and $b | p_n$. This is a great tool for factorizing polynomials, as the sketch below illustrates.
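As a quick illustration (my own sketch, not part of the original note), this R snippet enumerates and tests the finitely many candidates for the cubic $f(x) = 2x^3 + 7x^2 + 5x + 1$, which is factorized by hand in the worked examples below:

```r
divisors <- function(n) { n <- abs(n); (1:n)[n %% (1:n) == 0] }
coeffs <- c(1, 5, 7, 2)                      # p0, p1, p2, p3
f <- function(x) sum(coeffs * x^(0:(length(coeffs) - 1)))
cands <- unique(as.vector(outer(divisors(coeffs[1]),
                                divisors(coeffs[length(coeffs)]), "/")))
cands <- c(cands, -cands)                    # the candidates +/- a/b
cands[sapply(cands, f) == 0]                 # prints -0.5, i.e. x = -1/2
```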
### Integer Root Theorem.

If $f(x)$ is a monic polynomial (leading coefficient of 1), then the rational roots of $f(x)$ must be integers.

Proof: By the rational root theorem, if $r = \frac {a}{b}$ is a root of $f(x)$, then $b | p_n$. But since $p_n = 1$ by assumption, we have $b=1$ and thus $r=a$ is an integer. $_\square$

## Worked Examples

### 1. Factorize the cubic polynomial $f(x) = 2x^3 + 7x^2 + 5x + 1$.

By the rational root theorem, any rational root of $f(x)$ has the form $r = \frac {a}{b}$ where $a | 1$ and $b | 2$. Thus, we only need to try the numbers $\pm \frac {1}{1}, \pm \frac {1}{2}$. We see that

$f(1) >0,$

$f(\frac {1}{2}) > 0,$

$f(-\frac {1}{2}) = -\frac {2}{8} + \frac {7}{4} - \frac {5}{2} + 1 = 0,$

$f(-1) = -2 + 7 - 5 + 1 = 1 \neq 0.$

Hence, by the remainder-factor theorem, $(2x+1)$ is a factor of $f(x)$, implying $f(x) = (2x+1) (x^2 + 3x + 1)$. We can then use the quadratic formula to factorize the quadratic if irrational roots are desired.

### 2. Using the rational root theorem, show that $\sqrt{2}$ is not rational.

Since $\sqrt{2}$ is a root of the polynomial $f(x) = x^2-2$, the rational root theorem tells us that the rational roots of $f(x)$ can only be $\pm 1$ or $\pm 2$. It is easy to check that none of these are roots of $f(x)$, hence $f(x)$ has no rational roots. Thus, $\sqrt{2}$ is not rational.

### 3. Show that if $x$ is a positive rational such that $x^2 + x$ is an integer, then $x$ must be an integer.

Let $x^2 + x = n$, where $n$ is an integer. This is equivalent to finding the roots of $f(x) = x^2+x-n$. Since $f(x)$ is a monic polynomial, by the integer root theorem, if $x$ is a rational root of $f(x)$, then it is an integer root.

### 4. Prove Theorem 1 using the Rational Root Theorem.

Consider the polynomial $f(x) = x^m -n$. By the rational root theorem, if $r = \frac {a}{b}$ is a rational root of $f(x)$, then we must have $b| 1$, and hence $b=1$ (or $-1$). Thus, $r = a$ is an integer.

[Pop Quiz: Give a 1 line proof of Theorem 1 using the Integer Root Theorem].

Note by Calvin Lin, 6 years, 6 months ago
## Comments

But tell me, who was the first mathematician to discover rational numbers? Please answer this question as soon as possible because this is very urgent.

- 6 years, 5 months ago
{}
Question 1

Assume that when answering a multiple-choice question, a student knows the correct answer with probability 50%. Otherwise, he guesses, and when guessing he still has a 20% probability of choosing the correct answer. Given that he chooses the correct answer, what is the probability that he guessed?

Solution: Our main event is answering correctly, say \(A\). Event \(B_1\) is knowing and event \(B_2\) is guessing. So \(P(A)=P(B_1)P(A|B_1)+P(B_2)P(A|B_2) = 0.5 * 1 + 0.5 * 0.2 = 0.6\). By Bayes' theorem,

$P(B_2|A) = \dfrac{P(B_2)P(A|B_2)}{P(B_1)P(A|B_1)+P(B_2)P(A|B_2)} = \dfrac{0.5*0.2}{0.6} = 0.167$

Corresponding R code can be written as follows.

pB1 = 0.5 #Probability of knowing the answer
pB2 = 0.5 #Probability of guessing
pAgB1 = 1 #Probability of answering correctly if the student knows
pAgB2 = 0.2 #Probability of answering correctly if the student guesses
pA = pB1*pAgB1 + pB2*pAgB2 #Probability of answering correctly
pB2gA = (pB2*pAgB2)/pA #Probability of guessing given a correct answer
pB2gA

Question 2

Two sisters, Anne and Zoe, play chess or backgammon every day. Anne is better than Zoe at chess and wins 75% of the time, but Zoe is better at backgammon, so there Anne wins only 25% of the time. Each week they play chess on 4 randomly chosen days out of 7 and backgammon on the other days. Suppose Anne won the game yesterday. What is the probability that they played chess?

Solution: Let's denote \(P(A)\) as the probability of Anne winning the game. So \(P(A|Chess) = 0.75\), \(P(A|Backgammon) = 0.25\), \(P(Chess) = 4/7\), \(P(Backgammon) = 3/7\). We want to know \(P(Chess|A)\).

$P(Chess|A) = \dfrac{P(A|Chess)P(Chess)}{P(A|Chess)P(Chess)+P(A|Backgammon)P(Backgammon)} = \dfrac{0.75*4/7}{0.75*4/7 + 0.25*3/7} = 0.8$

Corresponding R code can be written as follows.

pAgChess = 0.75
pAgBackg = 0.25
pChess = 4/7
pBackg = 3/7
pChessgA = (pChess*pAgChess)/(pChess*pAgChess + pBackg*pAgBackg)
pChessgA

## [1] 0.8

Question 3

Consider a fuse system in two sections (the original question refers to a figure, omitted here). There are three fuses in section I, each functioning with probability 0.5, and two fuses in section II, functioning with probabilities 0.6 and 0.4. In order for the system to succeed, at least one fuse from each section should work. What is the probability that the system works?

Solution: Let's denote the probability of the system working as \(P(A)\), section I functioning as \(P(B1)\) and section II functioning as \(P(B2)\). \(P(A) = P(B1)P(B2)\). \(P(B1)\) can be calculated as \(1-P(B1')\) where \(B1'\) indicates the failure of section I. \(P(B1') = (1-0.5)(1-0.5)(1-0.5) = 0.125\) and \(P(B1) = 0.875\). Similarly for \(B2\), \(P(B2) = 1-P(B2') = 1-(1-0.6)(1-0.4) = 0.76\). Finally, \(P(A) = P(B1)P(B2) = 0.875*0.76 = 0.665\).
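Question 3 has no R code above; a matching snippet (my addition, following the pattern of the earlier questions) would be:

pB1 = 1 - (1-0.5)*(1-0.5)*(1-0.5) #Probability that section I works
pB2 = 1 - (1-0.6)*(1-0.4) #Probability that section II works
pA = pB1*pB2 #Probability that the system works
pA

## [1] 0.665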
{}
# Entanglement detection

@inproceedings{Guhne2008EntanglementD, title={Entanglement detection}, author={Otfried Guhne and G{\'e}za T{\'o}th}, year={2008} }

• Published 18 November 2008 • Physics

1,292 Citations

## Figures and Tables from this paper

What Criterion Can We Get From Precise Entanglement Witnesses? • Physics IEEE Journal on Selected Areas in Communications • 2020
A thorough study of a three-qubit Greenberger-Horne-Zeilinger (GHZ) state mixed with a W state and white noise finds a connection between Wootters formula and the matched entanglement witness.

Detecting entanglement of unknown quantum states with random measurements • Computer Science • 2015
This work investigates the use of random local measurements, from which entanglement witnesses are then constructed via semidefinite programming methods, and proposes a scheme of successively increasing the number of measurements until the presence of entanglement can be unambiguously concluded.

Quantitative entanglement witnesses of isotropic and Werner classes via local measurements • Physics • 2011
Quantitative entanglement witnesses allow one to bound the entanglement present in a system by acquiring a single expectation value. In this paper, we analyze a special class of such observables

Nonlinear improvement of qubit-qudit entanglement witnesses • Physics • 2020
The entanglement witness is an important and experimentally applicable tool for entanglement detection. In this paper, we provide a nonlinear improvement of any entanglement witness for $2\otimes d$

Measurement-device-independent entanglement witnesses for all entangled quantum states. • Physics Physical review letters • 2013
This work introduces the concept of measurement-device-independent entanglement witnesses (MDI-EWs), which allow one to demonstrate entanglement of all entangled quantum states with untrusted measurement apparatuses, and shows how to systematically obtain such MDI-EWs from standard entanglement witnesses.

Geometry of Faithful Entanglement. • Physics Physical review letters • 2021
This work proves a structural result on the corresponding fidelity-based entanglement witnesses, resulting in a simple condition for faithfulness of a two-party state, and shows that faithful entanglement is, in a certain sense, useful entanglement.

Device-Independent Entanglement Detection
Entanglement is one of the most intriguing features of quantum physics. It allows several particles to be in a state which cannot be understood as a concatenation of the state of each particle.

Device-Independent Entanglement Certification of All Entangled States. • Physics, Computer Science Physical review letters • 2018
We present a method to certify the entanglement of all entangled quantum states in a device-independent way. This is achieved by placing the state in a quantum network and constructing a correlation

Design and experimental performance of local entanglement witness operators • Computer Science Physical Review A • 2020
This work proposes and analyzes a class of entanglement witnesses that detect the presence of entanglement in subsystems of experimental multi-qubit stabilizer states, theoretically establishes the noise tolerance of the proposed witnesses, and benchmarks their practical performance by analyzing the local entanglement structure of an experimental seven-qubit quantum error correction code.
The structure of ultrafine entanglement witnesses • Physics, Philosophy Journal of Physics A: Mathematical and Theoretical • 2018
An entanglement witness is an observable with the property that a negative expectation value signals the presence of entanglement. The question arises how a witness can be improved if the expectation

## References

SHOWING 1-10 OF 1,311 REFERENCES

Experimental detection of entanglement via witness operators and local measurements • Physics • 2003
Abstract In this paper we address the problem of detection of entanglement using only few local measurements when some knowledge about the state is given. The idea is based on an optimized

Characterizing Entanglement
Quantum entanglement is at the heart of many tasks in quantum information. Apart from simple cases (low dimensions, few particles, pure states), however, the mathematical structure of entanglement is

Quantitative entanglement witnesses • Physics • 2006
All these tests—based on the very same data—give rise to quantitative estimates in terms of entanglement measures, and if a test is strongly violated, one can also infer that the state was quantitatively very much entangled, in the bipartite and multipartite setting.

Characterizing entanglement with geometric entanglement witnesses
We show how to detect entangled, bound entangled and separable bipartite quantum states of arbitrary dimension and mixedness using geometric entanglement witnesses. These witnesses are constructed

Entanglement detection by local orthogonal observables. • Physics Physical review letters • 2005
A family of entanglement witnesses and corresponding positive maps that are not completely positive based on local orthogonal observables that can be physically realized by measuring a Hermitian correlation matrix of local orthogonal observables.

Novel schemes for directly measuring entanglement of general states. • Physics, Computer Science Physical review letters • 2008
It is demonstrated that rank-1 local factorizable projective measurements, which are achievable with only one copy of an entangled state involved at a time in a sequential way, are sufficient to directly determine the concurrence of an arbitrary two-qubit entangled state.

Detecting quantum entanglement: entanglement witnesses and uncertainty relations
This thesis deals with methods of the detection of entanglement. After recalling some facts and definitions concerning entanglement and separability, we investigate two methods of the detection of

Entanglement witnesses and a loophole problem • Physics • 2007
We consider a possible detector-efficiency loophole in experiments that detect entanglement via the local measurement of witness operators. Here, only local properties of the detectors are known. We

Optimal entanglement criterion for mixed quantum states.
It is shown that Phi detects many entangled states with a positive partial transposition (PPT) and that it leads to a class of optimal entanglement witnesses, which implies that there are no other witnesses which can detect more entangled PPT states.

Experimental Determination of Entanglement by a Projective Measurement • Physics • 2007
We describe a method in which the entanglement of any pure quantum state can be experimentally determined by a simple projective measurement, provided the state is available in a twofold copy. We
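Many of the listed works build on fidelity-based witnesses of the form $W = \alpha \mathbb{1} - |\psi\rangle\langle\psi|$. As a minimal numeric sketch (my own illustration, not code from any cited paper), for the two-qubit Bell state $|\Phi^+\rangle$ the standard choice $\alpha = 1/2$ gives an observable with nonnegative expectation on every separable state and a negative expectation on the Bell state itself:

```r
phi <- c(1, 0, 0, 1) / sqrt(2)   # |Phi+> in the computational basis
P   <- phi %*% t(phi)            # projector |Phi+><Phi+|
W   <- diag(4) / 2 - P           # witness W = I/2 - |Phi+><Phi+|
rho <- P                         # test state: the Bell state itself
sum(diag(W %*% rho))             # trace(W rho) = -0.5 < 0: detected
```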
{}
# How do you factor 6xy+15x?

Apr 9, 2017

$3 x \left(2 y + 5\right)$

#### Explanation:

Both terms have the variable $x$,

$x \left(6 y + 15\right)$

Both terms can be factored by $3$,

$3 x \left(2 y + 5\right)$

You can check if you factored correctly by multiplying through the brackets,

$\left(3 x \cdot 2 y\right) + \left(3 x \cdot 5\right)$

$= 6 x y + 15 x$

Apr 9, 2017

$3 x \left(2 y + 5\right)$

#### Explanation:

Let's expand everything we can and then factor out common factors:

$6 x y + 15 x$

$2 \cdot \textcolor{blue}{3} \cdot \textcolor{green}{x} \cdot y + \textcolor{blue}{3} \cdot 5 \cdot \textcolor{green}{x}$

There are $x$s in both terms and a $3$ in both. Let's factor those out:

$3 \cdot x \left(2 \cdot y + 5\right)$

$3 x \left(2 y + 5\right)$.

Just to make sure the new expression is still equal to the old one, let's distribute the $3 x$ into $2 y + 5$. We should get $6 x y + 15 x$:

$3 x \left(2 y + 5\right)$

$2 y \cdot 3 x + 5 \cdot 3 x$

$6 x y + 15 x$

Yep, it's still the same! Good job!
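A quick numeric spot-check (my addition, in R) that the factored form matches the original for sample values:

```r
x <- 2; y <- 3
c(6 * x * y + 15 * x, 3 * x * (2 * y + 5))   # both give 66
```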
{}
# m0515

Intern (Joined: 17 Jan 2009) — posted 30 Jan 2009, 07:11

If x is a positive integer, is sqrt{x} < 2.5x - 5?

1. x < 3
2. x is a prime number

The answer I chose was E but the correct answer is A. My rationale:

From Statement 1, we know x can only be 1 or 2, since x is a positive integer. We need to test if the answer is the same for both x. If x = 1, the answer to the question is CANNOT DETERMINE: sqrt{1} = +/- 1: sqrt{1} < 2.5*1 - 5. For x = 2: sqrt{2} < 0. The answer is CANNOT DETERMINE. Thus, Statement 1 is insufficient.

Statement 2 does not help much as x can be 2 or 11. The inequality does not hold consistently for both of these values. Insufficient.

Statements 1 and 2 together: x = 2.

Thanks

CEO (Joined: 17 Nov 2007) — posted 30 Jan 2009, 08:57

abdemt wrote: ...the answer to the question is CANNOT DETERMINE sqrt{1} = +/- 1: sqrt{1} < 2.5*1 - 5 ....

\(\sqrt{1}=1\), not \(\pm1\).

This was discussed in some threads: \(\sqrt{n}\) represents the positive root, and the solution of the equation \(x^2=n\) is \(\pm\sqrt{n}\), where \(\sqrt{n}\) is a positive number.

Intern (Joined: 17 Jan 2009) — posted 30 Jan 2009, 09:09

Thank you - that clears my doubt. Regards

SVP (Joined: 29 Aug 2007) — posted 30 Jan 2009, 14:27

abdemt wrote: If x is a positive integer, is \(sqrt{x}\) < (2.5x - 5)? 1. x < 3 2. x is a prime number. The answer I chose was E but the correct answer is A. Thanks

http://gmatclub.com/forum/59-p564448?t=75172#p564448
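A quick numeric check of the key point (my addition, in R): for the only values allowed by statement 1 (x = 1 or x = 2), the inequality is false in both cases, so statement 1 alone gives a definite "No" and is sufficient.

```r
x <- c(1, 2)
sqrt(x) < 2.5 * x - 5
## [1] FALSE FALSE
```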
{}
Released Conference Paper

#### Discrepancy of Products of Hypergraphs

Doerr, Benjamin (Algorithms and Complexity, MPI for Informatics, Max Planck Society); Hebbinghaus, Nils (Algorithms and Complexity, MPI for Informatics, Max Planck Society)

##### Citation

Doerr, B., & Hebbinghaus, N. (2005). Discrepancy of Products of Hypergraphs. In 2005 European Conference on Combinatorics, Graph Theory and Applications (EuroComb '05) (pp. 323-328). Nancy, France: DMTCS. Cite as: http://hdl.handle.net/11858/00-001M-0000-000F-2640-C

##### Abstract

For a hypergraph $\mathcal{H} = (V,\mathcal{E})$, its $d$-fold symmetric product is $\Delta^{d} \mathcal{H} = (V^{d},\{ E^{d} \mid E \in \mathcal{E} \})$. We give several upper and lower bounds for the $c$-color discrepancy of such products. In particular, we show that the bound $\textrm{disc}(\Delta^{d} \mathcal{H},2) \leq \textrm{disc}(\mathcal{H},2)$, proven for all $d$ in [B. Doerr, A. Srivastav, and P. Wehr, Discrepancy of Cartesian products of arithmetic progressions, Electron. J. Combin. 11 (2004), Research Paper 5, 16 pp.], cannot be extended to more than $c = 2$ colors. In fact, for any $c$ and $d$ such that $c$ does not divide $d!$, there are hypergraphs having arbitrarily large discrepancy and $\textrm{disc}(\Delta^{d} \mathcal{H},c) = \Omega_{d}(\textrm{disc}(\mathcal{H},c)^{d})$. Apart from constant factors (depending on $c$ and $d$), in these cases the symmetric product behaves no better than the general direct product $\mathcal{H}^{d}$, which satisfies $\textrm{disc}(\mathcal{H}^{d},c) = O_{c,d}(\textrm{disc}(\mathcal{H},c)^{d})$.
{}
Lemma 10.30.4. Let $R$ be a domain. Let $\varphi : R \to S$ be a ring map. The following are equivalent:

1. The ring map $R \to S$ is injective.
2. The image $\mathop{\mathrm{Spec}}(S) \to \mathop{\mathrm{Spec}}(R)$ contains a dense set of points.
3. There exists a prime ideal $\mathfrak q \subset S$ whose inverse image in $R$ is $(0)$.

Proof. Let $K$ be the field of fractions of the domain $R$. Assume that $R \to S$ is injective. Since localization is exact we see that $K \to S \otimes _ R K$ is injective. Hence there is a prime mapping to $(0)$ by Lemma 10.17.9. Note that $(0)$ is dense in $\mathop{\mathrm{Spec}}(R)$, so that the last condition implies the second. Suppose the second condition holds. Let $f \in R$, $f \not= 0$. As $R$ is a domain we see that $V(f)$ is a proper closed subset of $\mathop{\mathrm{Spec}}(R)$. By assumption there exists a prime $\mathfrak q$ of $S$ such that $\varphi (f) \not\in \mathfrak q$. Hence $\varphi (f) \not= 0$. Hence $R \to S$ is injective. $\square$
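For a quick added illustration of the equivalences (an example of mine, not part of the Stacks Project text): consider the injective inclusion $\mathbf{Z} \to \mathbf{Z}[x]$. The prime $\mathfrak q = (x) \subset \mathbf{Z}[x]$ pulls back to $(0) \subset \mathbf{Z}$, witnessing (3), and its image, the generic point $(0)$, is dense in $\mathop{\mathrm{Spec}}(\mathbf{Z})$, witnessing (2).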
{}
Hardcover | $40.00 Short | £32.95 | 304 pp. | 6 x 9 in | September 2017 | ISBN: 9780262036719
eBook | $28.00 Short | September 2017 | ISBN: 9780262342148

Real Hallucinations: Psychiatric Illness, Intentionality, and the Interpersonal World

Overview

In Real Hallucinations, Matthew Ratcliffe offers a philosophical examination of the structure of human experience, its vulnerability to disruption, and how it is shaped by relations with other people. He focuses on the seemingly simple question of how we manage to distinguish among our experiences of perceiving, remembering, imagining, and thinking. To answer this question, he first develops a detailed analysis of auditory verbal hallucinations (usually defined as hearing a voice in the absence of a speaker) and thought insertion (somehow experiencing one's own thoughts as someone else's). He shows how thought insertion and many of those experiences labeled as "hallucinations" consist of disturbances in a person's sense of being in one type of intentional state rather than another. Ratcliffe goes on to argue that such experiences occur against a backdrop of less pronounced but wider-ranging alterations in the structure of intentionality. In so doing, he considers forms of experience associated with trauma, schizophrenia, and profound grief. The overall position arrived at is that experience has an essentially temporal structure, involving patterns of anticipation and fulfillment that are specific to types of intentional states and serve to distinguish them phenomenologically. Disturbances of this structure can lead to various kinds of anomalous experience. Importantly, anticipation-fulfillment patterns are sustained, regulated, and disrupted by interpersonal experience and interaction. It follows that the integrity of human experience, including the most basic sense of self, is inseparable from how we relate to other people and to the social world as a whole.
{}
• CommentRowNumber1. • CommentAuthorTodd_Trimble • CommentTimeSep 27th 2016

I added a Definition section to Burnside ring (and made Burnside rig redirect to it).

• CommentRowNumber2. • CommentAuthorDavid_Corfield • CommentTimeSep 27th 2016

And what can be said about the relationship to the Burnside category? Wikipedia has it as a categorification. Also, Burnside rigs can be defined more generally, e.g., for distributive categories.

• CommentRowNumber3. • CommentAuthorTodd_Trimble • CommentTimeSep 27th 2016

I wasn't aware of Schanuel's terminology – thanks. I'm curious when the Burnside ring coincides with the representation ring (say working over the ground field $\mathbb{C}$). Jim Dolan once showed me some fairly convincing empirical evidence that the Burnside ring of the symmetric group $S_n$ is the same as the representation ring; an example is given at the page Gram-Schmidt process. But I never seriously attempted to find a proof.

• CommentRowNumber4. • CommentAuthorUrs • CommentTimeSep 10th 2018

added more and original references for the statement that the Burnside ring is isomorphic to the equivariant stable cohomotopy of the point, $A(G) \simeq \mathbb{S}_G(\ast)$. Am adding this also at equivariant stable cohomotopy.

• CommentRowNumber5. • CommentAuthorUrs • CommentTimeSep 11th 2018 • (edited Sep 11th 2018)

The canonical map of spectra $\mathbb{S} \longrightarrow KU$ becomes, when restricted to the point with a $G$-action, the ring homomorphism from the Burnside ring $A(G)$ to the representation ring $R(G)$

$A(G) \longrightarrow R(G)$

which sends a finite $G$-set $S$ to the vector space spanned by $S$ and equipped with the induced linear permutation representation.

This is just the kind of map that underlies Dolan-Baez's old "groupoidification" idea. There the goal was (I suppose) to see how much of the linear algebra can be understood to arise from the combinatorics somehow in a conceptual way.

Here one might want to ask a similar question: How much of $KU$ is "induced" by $\mathbb{S}$? Or to start with, how much of $R(G)$ is induced by $A(G)$? Is the former something we'd obtain from the latter by following some god-given path?

• CommentRowNumber6. • CommentAuthorTodd_Trimble • CommentTimeSep 11th 2018

Here one might want to ask a similar question: How much of $KU$ is "induced" by $\mathbb{S}$? Or to start with, how much of $R(G)$ is induced by $A(G)$? Is the former something we'd obtain from the latter by following some god-given path?

There are related questions asked at Gram-Schmidt process, where the toy example is of representations of symmetric groups. When I saw Jim Dolan and Simon Burton back in February, we were musing about the coincidence of the number of Young diagrams being the same as the number of conjugacy classes, and I thought I heard Jim suggesting that the Young diagrams might be better understood in terms of conjugacy classes of a (Langlands) dual. Anyway, I'd like to understand this myself.

• CommentRowNumber7. • CommentAuthorUrs • CommentTimeSep 11th 2018

Thanks for the pointer! Oh, wow, am I reading this correctly as directly implying that the homomorphism $A(S_n) \longrightarrow R(S_n)$ is in fact surjective?!

• CommentRowNumber8.
• CommentAuthorDavid_Corfield • CommentTimeSep 11th 2018

Re #6, I had a similar idea back here after David Ben-Zvi told us about Langlands duality:

A thought after all these months: if symmetric groups fall into this picture by being $GL(n,\mathbb{F}_1)$, and if GLs are Langlands self-dual, this explains how Young diagrams parameterize conjugacy classes and at the same time irreducible representations.

• CommentRowNumber9. • CommentAuthorUrs • CommentTimeSep 11th 2018 • (edited Sep 11th 2018)

Interesting, thanks. Next I'd like to know how much of all this survives changing the group to a finite subgroup of $SU(2)$. But need to run now…

• CommentRowNumber10. • CommentAuthorTodd_Trimble • CommentTimeSep 11th 2018

Re #7: that's what I had understood, but I certainly don't have a proof for general $n$. There may be a more categorified way of saying this, considering a suitable category of "virtual permutation representations", where the map $A(S_n) \to R(S_n)$ is promoted to an essentially surjective functor, but I'd need to think more how I'd like to say it.

• CommentRowNumber11. • CommentAuthorUrs • CommentTimeSep 11th 2018

I see, thanks. How about taking $G = A_4$ the alternating group. Might the construction still give a surjection $A(A_4) \to R(A_4)$?

• CommentRowNumber12. • CommentAuthorUrs • CommentTimeSep 11th 2018 • (edited Sep 11th 2018)

Actually, I would be interested in knowing it for the "binary alternating group" $2 A_4$, i.e. the "double cover" of the tetrahedral group. Really I'd like to know it for all the finite subgroups of $SU(2)$, but here I am thinking that maybe the (binary) alternating ones are close enough to the symmetric groups that it would be easy to adapt the proof. (?)

• CommentRowNumber13. • CommentAuthorTodd_Trimble • CommentTimeSep 11th 2018

Urs, I believe Simon Burton may have carried out calculations of the categorified Gram-Schmidt process for groups like that (we were discussing the dihedral group of order $8$). I'll ask him.

• CommentRowNumber14. • CommentAuthorDavidRoberts • CommentTimeSep 12th 2018

For what it's worth, this article has the details for the Burnside ring of the icosahedral group.

• CommentRowNumber15. • CommentAuthorDavidRoberts • CommentTimeSep 12th 2018 • (edited Sep 12th 2018)

OK, even better. This page says that the map from the Burnside ring to the representation ring for $S_4$ is not injective. In fact, the page here says that the comparison map is only surjective for the trivial group and any two-element group, and is "seldom" injective.

EDIT: one last one. In Burnside's book, on pdf page 272, there is the 'table of marks' for $A_4$ and what seems to be at least part of the multiplication structure of the Burnside ring.

EDIT again: tom Dieck gives information about $A_5$. If I find something about the binary versions I will link to it.

• CommentRowNumber16. • CommentAuthorUrs • CommentTimeSep 12th 2018 • (edited Sep 12th 2018)

Thanks for all the pointers! Am reading… But now I am confused: Isn't Todd's discussion at "Categorified Gram-Schmidt process" saying that $\beta$ is surjective, for $S_4$?

• CommentRowNumber17. • CommentAuthorUrs • CommentTimeSep 12th 2018

But your first link, at the very bottom, confirms Todd's computation. Clearly this makes beta surjective, or else I am missing something basic.

• CommentRowNumber18. • CommentAuthorUrs • CommentTimeSep 12th 2018 • (edited Sep 12th 2018)

There is

• Alex Bartel, Tim Dokchitser, Rational representations and permutation representations of finite groups, Math. Ann. 364 no.
1 (2016), 539-558 (arXiv:1405.6616)

which seems to discuss the surjectivity, or not, head on. But need to run now.

• CommentRowNumber19. • CommentAuthorDavidRoberts • CommentTimeSep 12th 2018

Hmm, a mystery indeed! I'm not sure what's going on. I agree that it looks like the formulas at the bottom of this page show that every basis vector in the representation ring $R(S_4)$ is in the image of $\beta$. Worse, PlanetMath cites a theorem of Segal that says for $p$-groups, and the representation ring over $\mathbb{Q}$, $\beta$ is always surjective! (and an isomorphism for such groups iff they are cyclic)

• CommentRowNumber20. • CommentAuthorUrs • CommentTimeSep 12th 2018

Thanks! So it's that one page www.maths.manchester.ac.uk/~jm/wiki/Representations/Burnside which disagrees with all other sources that we have seen so far, on the surjectivity of $\beta$. Might it be that the error there is caused by the evident glitch of not thinking about virtual representations, but plain representations?

• CommentRowNumber21. • CommentAuthorUrs • CommentTimeSep 12th 2018 • (edited Sep 12th 2018)

Now that I am back on my laptop machine: Indeed, the article from #18

• Alex Bartel, Tim Dokchitser, Rational representations and permutation representations of finite groups, Math. Ann. 364 no. 1 (2016), 539-558 (arXiv:1405.6616)

has detailed references regarding surjectivity of $\beta$ or not: see the paragraph in the middle of the first page. In particular they cite what must be Segal's result that PlanetMath is alluding to (at planetmath.org/RepresentationRingVsBurnsideRing), but together with a bunch more. PlanetMath also says that $\beta$ is an iso for all cyclic groups! This is most interesting in view of my quest for $\beta$ in the case of ADE groups. But I haven't tracked down a real reference for this statement yet.

• CommentRowNumber22. • CommentAuthorTodd_Trimble • CommentTimeSep 12th 2018

(To be sure, in the nLab I was considering the ground field to be $\mathbb{C}$ or an algebraically closed field.)

• CommentRowNumber23. • CommentAuthorUrs • CommentTimeSep 12th 2018

Oh, the statement that $\beta$ is surjective for all cyclic groups follows from the statement for $p$-groups from the fundamental theorem of cyclic groups! Right?

• CommentRowNumber24. • CommentAuthorUrs • CommentTimeSep 12th 2018

I am starting a proposition collecting the known surjectivity results here. To be expanded and improved (need to add assumptions on ground field. For the moment everything is in char 0, I suppose.)

• CommentRowNumber25. • CommentAuthorUrs • CommentTimeSep 12th 2018 • (edited Sep 12th 2018)

have briefly been trying to find (in the literature, that is :-) the proof that $\beta$ is an iso for cyclic groups. From the way this is stated on PlanetMath (here) I gather this is meant to be evident from the proof that Segal gives of surjectivity, or else from the formulas in Hambleton-Taylor 99, but if so, I need to spend more time with it.

• CommentRowNumber26. • CommentAuthorUrs • CommentTimeSep 12th 2018 • (edited Sep 12th 2018)

just discovered that Ben Webster gave the simple argument why $\beta$ is injective precisely for the cyclic groups, here! Together with the surjectivity from Segal's theorem, this shows that $\beta$ is an isomorphism for cyclic groups. Probably the author of that PlanetMath entry found the injectivity argument too trivial to mention here.

[ have added that to the entry, in this prop. ]

• CommentRowNumber27.
• CommentAuthorUrs • CommentTimeSep 12th 2018 • (edited Sep 12th 2018)

Okay, so I have now much of what I was after, but only for rational representations. Need to think about how much of this carries over to complex representations…

• CommentRowNumber28. • CommentAuthorUrs • CommentTimeSep 12th 2018 • (edited Sep 12th 2018)

Oh, I see now. $\mathbb{Q}$ is a splitting field only for $\mathbb{Z}/2$, but for none of the other cyclic groups (here). Hence surjectivity of $\beta$ over $\mathbb{Q}$ breaks after passage to $\mathbb{C}$ for all $\mathbb{Z}/n$ with $n \gt 2$. I suppose this resolves the apparent contradiction above in #20! (Maybe that's what Todd was trying to tell us in #22. Sorry for being slow.)

• CommentRowNumber29. • CommentAuthorUrs • CommentTimeSep 13th 2018

I have emailed James Montaldi on the issue with the webpage www.maths.manchester.ac.uk/~jm/wiki/Representations/Burnside mentioned around #20 above. He agreed that this was in error and has removed the statement about surjectivity now.

• CommentRowNumber30. • CommentAuthorTim_Porter • CommentTimeSep 13th 2018 • (edited Sep 13th 2018)

I do not know if it helps, but I had colleagues who used to work on structures related to lambda rings on the Burnside ring, and looking up lambda rings I find a reference to James Borger's paper on Lambda rings and the field with one element. The Burnside ring does have a pre-$\lambda$-ring structure if I remember rightly, but not a 'special' one. One of the papers was 'Adams operations and λ-operations in β-rings' by I. Morris and C.D. Wensley; another is 'Computing Adams operations on the Burnside ring of a finite group', J. Reine Angew. Math. 341 (1983), 87–97, by G. Morris and the other two authors. These are mentioned in this MO question. This relates to a conjecture in Knutson, Donald (1973), λ-rings and the representation theory of the symmetric group, Lecture Notes in Mathematics, 308, which does not seem to quite make sense. Does this relate to the problem of the surjectivity etc., as the representation ring is a Lambda ring?

• CommentRowNumber31. • CommentAuthorUrs • CommentTimeSep 13th 2018 • (edited Sep 13th 2018)

The point in Guillot 06 "Adams operations in cohomotopy" is to show that it is not quite a $\lambda$-ring, but a "$\beta$-ring".

• CommentRowNumber32. • CommentAuthorTim_Porter • CommentTimeSep 13th 2018

I believe the initial notion of $\beta$-ring is, in fact, due to another ex-colleague! (Remember Dudley Littlewood was Ronnie Brown's predecessor at Bangor and he was central in the development of permutation representation theory.) I really should look back over that stuff, and will if I have a moment. Guillot's paper looks interesting. Thanks.

• CommentRowNumber33. • CommentAuthorUrs • CommentTimeSep 16th 2018 • (edited Sep 16th 2018)

added remark that Segal's theorem $A(G) \overset{\simeq}{\longrightarrow} \mathbb{S}_G(\ast)$ is a special case of the tom Dieck splitting theorem

• CommentRowNumber34. • CommentAuthorDavid_Corfield • CommentTimeSep 26th 2018

There might be some interesting ideas about $G$-equivariant $\mathbb{F}_1$-theory in

• Snigdhayan Mahanta, $G$-theory of $\mathbb{F}_1$-algebras I: the equivariant Nishida problem, (arXiv:1110.6001)

• CommentRowNumber35. • CommentAuthorUrs • CommentTimeSep 26th 2018

Thanks for the pointer. He attributes the observation that the Barratt–Priddy–Quillen theorem may be read as saying $\mathbb{S} \simeq K \mathbb{F}_1$ to

• Y. Manin. Lectures on zeta functions and motives (according to Deninger and Kurokawa),
Astérisque, (228):4, 121–163, 1995. Columbia University Number Theory Seminar (New York, 1992) (pdf)

I have looked through that pdf. While I see it talk about $\mathbb{F}_1$, I didn't find a remark yet concerning Barratt–Priddy–Quillen…

• CommentRowNumber36. • CommentAuthorUrs • CommentTimeOct 7th 2018 • (edited Oct 7th 2018)

added pointer to

• CommentRowNumber37. • CommentAuthorUrs • CommentTimeJan 27th 2019

added a Properties-section (here) on expressing the Burnside product in terms of the table of marks

1. Nice! Tweaked slightly to make the statement hopefully completely clear.

• CommentRowNumber39. • CommentAuthorUrs • CommentTimeJun 3rd 2020 • (edited Jun 3rd 2020)

added re-publication data to:

• CommentRowNumber40. • CommentAuthorUrs • CommentTimeJun 3rd 2020 • (edited Jun 3rd 2020)

Hm, where exactly in Burnside's original book does he actually introduce the Burnside ring? Or does he even?

• CommentRowNumber41. • CommentAuthorDavid_Corfield • CommentTimeJun 3rd 2020

I doubt anyone this early is calling anything a ring other than collections of numbers or polynomials. Abstract rings certainly come later. Or do you just mean where he talks about something we'd recognise as a ring?

• CommentRowNumber42. • CommentAuthorUrs • CommentTimeJun 3rd 2020

I'd like to cite the invention of the Burnside ring, in whatever guise. Is there a page in that book on which we can recognize the concept being conceived?

• CommentRowNumber43. • CommentAuthorDavid_Corfield • CommentTimeJun 3rd 2020

According to the commentary in his collected papers, the origins are to be found in this 1901 paper, On the Representation of a Group of Finite Order as a Permutation Group, and on the Composition of Permutation Groups, and then sections 184-185 of the second edition (1911) of his monograph.

• CommentRowNumber44. • CommentAuthorDavid_Corfield • CommentTimeJun 3rd 2020

Some commentary here, which claims Solomon coined the name.

• CommentRowNumber45. • CommentAuthorUrs • CommentTimeJun 3rd 2020

Thanks!! I'll look into it in a moment…

• CommentRowNumber46. • CommentAuthorUrs • CommentTimeJun 3rd 2020

Hm, in rev 12 I had added a line saying

The concept was named by Dress, following [ Burnside 1897 ]

I must have read this somewhere, but I forget where.

• CommentRowNumber47. • CommentAuthorDavid_Corfield • CommentTimeJun 3rd 2020

In The Burnside algebra of a finite group, Solomon writes

The isomorphism classes of $G$-sets may be added and multiplied in natural fashion and generate a commutative ring $\mathcal{B}[G]$ which, since it seems to have been defined for the first time in Burnside's book [3, Secs. 184-5], we call the Burnside ring of $G$.

That's the second edition importantly denoted [3].

• CommentRowNumber48. • CommentAuthorUrs • CommentTimeJun 3rd 2020

Thanks again. Since that second edition of the book is from 1911, the earliest reference might indeed be that other article you found:

• William Burnside, On the Representation of a Group of Finite Order as a Permutation Group, and on the Composition of Permutation Groups, Proceedings of the London Mathematical Society 1901 (doi:10.1112/plms/s1-34.1.159)

where the Burnside product appears as equation (i). After doing some fun translation:

• his "permutation group $G$" is our "$G$-set" (!),
• his "compound" is our "subset",
• his "compounding" is our "Cartesian product".

• CommentRowNumber49. • CommentAuthorDavid_Corfield • CommentTimeJun 3rd 2020

It looks like it is Solomon then who names it in 1967.
Hazewinkel claims it's Dress ("According to some the Burnside ring was introduced by Andreas Dress in [117]"), but that's to a 1969 paper, and Dress doesn't call it the Burnside ring there anyway.

• CommentRowNumber50. • CommentAuthorUrs • CommentTimeJun 3rd 2020

Excellent, thanks! Am editing this into the entry now.

Coming back to that book: our pdf link seems to be to the first edition then, since Sections 184-185 there are about polygons. Do we have a pdf-copy of that second edition (or later)?

• CommentRowNumber51. • CommentAuthorUrs • CommentTimeJun 3rd 2020

Okay, I have now expanded the beginning of the References-section (here) to read as follows:

The Burnside product seems to first appear as equation (i) in:

• {#Burnside01} William Burnside, On the Representation of a Group of Finite Order as a Permutation Group, and on the Composition of Permutation Groups, Proceedings of the London Mathematical Society 1901 (doi:10.1112/plms/s1-34.1.159)

(beware the terminology: a G-set is called a "permutation group $G$" in that article, a subset is called a "compound" and the Cartesian product of $G$-sets is called their "compounding").

It is then included (not in the first but) in the second edition (Sections 184-185) of:

The term "Burnside ring" as well as "Burnside algebra" is then due to (see NMT 04, Vol. 1 p. 60 for historical comments)

• CommentRowNumber52. • CommentAuthorDavid_Corfield • CommentTimeJun 3rd 2020

I found a copy so replaced the link

• {#Burnside1897} William Burnside, Theory of Groups of Finite Order, Second edition Cambridge 1911 (pdf)

• CommentRowNumber53. • CommentAuthorUrs • CommentTimeJun 3rd 2020

Thanks! Interesting: I find the sections 184-185 in that 1911 textbook version considerably less clear than the 1901 article. Not easy to spot the concept of the Burnside ring here, even if one knows what one is looking for and where to look for it. Anyway, so I'll be citing the 1901 article. That's what I wanted to know.

• CommentRowNumber54. • CommentAuthorUrs • CommentTimeMar 17th 2021

finally added pointer to

Will give this book its own category:reference-entry, to make fully transparent that this and Transformation Groups and Representation Theory are two distinct books, albeit by the same author…
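To make comment 37's point concrete, here is a small computational sketch (an editorial illustration in R, not part of the thread, for the toy case $G = \mathbb{Z}/2$): the mark of a $G$-set at a subgroup $H$ is its number of $H$-fixed points, marks multiply pointwise, and the table of marks is triangular, hence invertible, so the Burnside product can be computed by passing to mark vectors and back.

```r
M <- matrix(c(2, 0,    # marks of the free orbit  [G/e] at H = e, Z/2
              1, 1),   # marks of the point       [G/G] at H = e, Z/2
            nrow = 2, byrow = TRUE)
burnside_prod <- function(x, y) {   # x, y: coefficients in the orbit basis
  marks <- (x %*% M) * (y %*% M)    # marks multiply pointwise
  solve(t(M), as.vector(marks))     # convert back to the orbit basis
}
burnside_prod(c(1, 0), c(1, 0))     # [G/e]^2 = 2 [G/e], i.e. c(2, 0)
```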
{}
# Unification of expressions involving sets

Let's let $\def\OP#1#2{\left\langle#1,#2\right\rangle}\OP xy$ represent the set $\{\{x\},\{x,y\}\}$ as is usual, per Kuratowski. Then:

$$\begin{eqnarray} \OP{\OP ab}c & = & \{\{\{\{a\}, \{a,b\}\}\}, \{\{\{a\}, \{a, b\}\}, c\}\} \\ \OP x{\OP yz} & = & \{\{x\}, \{x, \{\{y\}, \{y,z\}\}\}\} \end{eqnarray}$$

I would like to unify these two equations to produce a set of relations on $a,b,c,x,y,z$ that give the most general conditions for which $\OP{\OP ab}c = \OP x{\OP yz}$.

The unification algorithm I know works on expressions. Applying it to $\OP{\OP ab}c = \OP x{\OP yz}$ directly gives $x=\OP ab, c=\OP yz$. This is correct, but may not be the most general possible set of conditions.

I can also abbreviate $\{x\}$ as $Sx$ and $\{x,y\}$ as $Dxy$, so that $\OP xy = \{\{x\}, \{x, y\}\} = DSxDxy$. Then the two set expressions I want to unify become $DSDSaDabDDSaDabc$ and $DSxDxDSyDyz$. I can unify these; the result is the same as in the previous paragraph. But this may not be fully general, because it fails to capture the fact that for sets, $\{a, a\} = \{a\}$. With $\{x\}$ abbreviated as $Sx$ and $\{y,z\}$ as $Dyz$, attempting to unify $\{x\}$ and $\{y, z\}$ with this method fails outright. But I want it to succeed and to yield the equations $x=y, x=z$.

How can I fix either the unification algorithm or my representation of the expressions to allow $\{x\}$ to unify with $\{y, z\}$?

I can make some progress on this particular problem by representing $\{x\}$ as $Dxx$. Then $\{x\}$ unifies with $\{y,z\}$ because $Dxx$ unifies with $Dyz$, giving $x=y, x=z$ as I wanted. But this is not quite enough either, because it doesn't understand that $\{x, y\} = \{y, x\}$. I want the unification of $\{x, y\}$ with $\{a, b\}$ to give me not simply $x=a, y=b$ but $[x=a, y=b] \lor [x=b, y=a]$.

The problem grows worse if any of the sets have more than two elements. There are several possible ways in which $\{x, y\}$ could unify with $\{a,b,c\}$. For example, we might have $x=a=c, y=b$, or $x=c, y=a=b$. Is there a modified version of the unification algorithm that can handle unification of sets in this way?

When the ordered pair $\langle x, y \rangle$ is defined as the set $\{\{x\}, \{x, y\}\}$, your original unification algorithm is correct and gives the most general conditions for equality; the fact that $\{ x ,x \}=\{x\}$ is already accounted for.

In particular, $\langle x, y \rangle$ always contains exactly one singleton element, $\{ x \}$, containing the first half of the ordered pair. If it contains a second element, that element is $\{x, y\}$ (with $x\neq y$), from which the second half of the pair can be read off (since we already know the first half). If it does not contain a second element, then the second half of the ordered pair is the same as the first.

Since the first and second halves of each ordered pair can be deduced from its corresponding set, $\langle a, b \rangle = \langle c, d \rangle$ if and only if $a=c$ and $b=d$.

I understand. Thanks. –  MJD Jul 8 '12 at 1:38
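For the final question, one way to see the search space (my own sketch of the enumeration only, not a full unification-modulo-set-axioms algorithm): the candidate unifiers of $\{x, y\}$ with $\{a, b, c\}$ correspond to the surjections from $\{a, b, c\}$ onto $\{x, y\}$, each of which reads off as a system of equalities.

```r
# Enumerate assignments of each of a, b, c to x or y, and keep those
# that use both x and y; each surviving row is one candidate system.
maps <- expand.grid(a = c("x", "y"), b = c("x", "y"), c = c("x", "y"),
                    stringsAsFactors = FALSE)
onto <- maps[apply(maps, 1, function(r) all(c("x", "y") %in% r)), ]
apply(onto, 1, function(r) paste(names(onto), r, sep = " = ", collapse = ", "))
# e.g. "a = x, b = x, c = y" (i.e. x = a = b, y = c); six systems in all
```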
{}
# Say a stock has increased from $4.34 a share to $6.55 a share. How much would that be percentage-wise?

Aug 9, 2016

A 50.92% increase.

#### Explanation:

Calculate a percentage increase or a percentage decrease in the same way: find the difference, whether an increase or a decrease, and divide by the original value.

Percent change = (change / original value) × 100%

Change = $6.55 - $4.34 = $2.21

(2.21 / 4.34) × 100% = 50.92% increase
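A one-line check of the arithmetic (my addition, in R):

```r
(6.55 - 4.34) / 4.34 * 100   # 50.92166..., i.e. about a 50.92% increase
```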
{}
### JEE Mains Previous Years Questions with Solutions

1

### AIEEE 2004

The binding energy per nucleon of deuteron $\left( {{}_1^2\,H} \right)$ and helium nucleus $\left( {{}_2^4\,He} \right)$ is $1.1$ $MeV$ and $7$ $MeV$ respectively. If two deuteron nuclei react to form a single helium nucleus, then the energy released is

A $23.6\,\,MeV$ B $26.9\,\,MeV$ C $13.9\,\,MeV$ D $19.2\,\,MeV$

## Explanation

The nuclear reaction for this process is $2\,{}_1^2H \to {}_2^4He$.

Energy released $= 4 \times \left( 7 \right) - 4\left( {1.1} \right) = 23.6\,MeV$

2

### AIEEE 2003

In the nuclear fusion reaction $${}_1^2H + {}_1^3H \to {}_2^4He + n$$ given that the repulsive potential energy between the two nuclei is $\sim 7.7 \times {10^{ - 14}}J$, the temperature at which the gases must be heated to initiate the reaction is nearly [ Boltzmann's Constant $k = 1.38 \times {10^{ - 23}}\,J/K$ ]

A ${10^7}\,\,K$ B ${10^5}\,\,K$ C ${10^3}\,\,K$ D ${10^9}\,\,K$

## Explanation

The average kinetic energy per molecule is ${3 \over 2}kT$. This kinetic energy should be able to provide the repulsive potential energy.

$\therefore$ ${3 \over 2}kT = 7.7 \times {10^{ - 14}}$

$\Rightarrow T = {{2 \times 7.7 \times {{10}^{ - 14}}} \over {3 \times 1.38 \times {{10}^{ - 23}}}} = 3.7 \times {10^9}\,K$

3

### AIEEE 2003

Which of the following atoms has the lowest ionization potential?

A ${}_7^{14}N$ B ${}_{55}^{133}\,Cs$ C ${}_{18}^{40}\,Ar$ D ${}_8^{16}\,O$

## Explanation

The ionisation potential increases from left to right in a period and decreases from top to bottom in a group. Therefore caesium will have the lowest ionisation potential.

4

### AIEEE 2003

Which of the following cannot be emitted by radioactive substances during their decay?

A Protons B Neutrinos C Helium nuclei D Electrons

## Explanation

Radioactive substances emit $\alpha$-particles (helium nuclei), $\beta$-particles (electrons) and neutrinos.
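As a quick arithmetic check of the temperature estimate in the second explanation (my addition, in R):

```r
# (3/2) k T = 7.7e-14 J  =>  T = 2 * 7.7e-14 / (3 * 1.38e-23) K
2 * 7.7e-14 / (3 * 1.38e-23)   # about 3.7e9, i.e. of order 10^9 K (option D)
```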
{}
XI: 12, 132-195, LNM 581 (1977) MEYER, Paul-André. Le dual de $H^1({\bf R}^\nu)$: démonstrations probabilistes (Potential theory, Applications of martingale theory)

This is a self-contained exposition and proof of the celebrated (Fefferman-Stein) result that the dual of $H^1(R^n)$ is $BMO$, using methods adapted from the probabilistic Littlewood-Paley theory (of which this is a kind of limiting case). Some details of the proof are interesting in their own right.

Comment: Though the proof is complete, it misses an essential point in the Fefferman-Stein theorem, namely, it depends on the Cauchy (Poisson) semigroup, while the original result allows quite general smooth functions in its definition of $H^1$. Similar methods were used by Bakry in the case of spheres, see 1818. The reasoning around (3.1) p. 178 needs to be corrected.

Keywords: Harmonic functions, Hardy spaces, Poisson kernel, Carleson measures, $BMO$, Riesz transforms
{}
A transition function expansion for a diffusion model with selection

Barbour, A D; Ethier, S; Griffiths, R (2000). A transition function expansion for a diffusion model with selection. Annals of Applied Probability, 10(1):123-162.

Abstract

Using duality, an expansion is found for the transition function of the reversible $K$-allele diffusion model in population genetics. In the neutral case, the expansion is explicit but already known. When selection is present, it depends on the distribution at time $t$ of a specified $K$-type birth-and-death process starting at "infinity". The latter process is constructed by means of a coupling argument and characterized as the Ray process corresponding to the Ray–Knight compactification of the $K$-dimensional nonnegative-integer lattice.

Keywords: Finite-dimensional diffusion process; population genetics; duality; reversibility; multitype birth-and-death process; coupling; Ray-Knight compactification

Journal Article, refereed, original work. Institute of Mathematical Statistics, ISSN 1050-5164. https://doi.org/10.1214/aoap/1019737667. Permanent URL: https://doi.org/10.5167/uzh-22062
{}
# absolute continuity on $R^{n}$

I know the definition of absolute continuity if there is a function $f:(a,b)\rightarrow R$. I wonder what the analogue of this concept is if we have a function $f:A\rightarrow R$, where $A\subset R^{n}$ is an open set.

I guess it may depend on exactly which property of absolutely continuous functions you think is most important to keep, or to put it another way, exactly which definition you prefer in one dimension. For me the most commonly useful property of absolutely continuous functions is that they map sets of Lebesgue measure zero to sets of Lebesgue measure zero.

Pulling in roughly equal parts from my memory of real analysis and what Wikipedia and EoM tell me, the story seems to be that a function $f\colon [a,b] \to \mathbb{R}$ is absolutely continuous if and only if all three of the following hold (Banach–Zaretskii theorem):

1. $f$ is continuous;
2. $f$ is of bounded variation;
3. the Luzin N property holds: if $E$ has Lebesgue measure $0$, then so does $f(E)$.

Each of these 3 properties generalises to higher dimensions. The first and third are immediate; the second requires a slightly different definition of variation than in one dimension, but is a completely standard thing. Thus one could say that a function $f\colon \mathbb{R}^m \to \mathbb{R}^n$ is "absolutely continuous" if those three properties hold, and to me this seems a very reasonable generalisation of the usual definition. (Of course one could extend this to maps between smooth manifolds, where you also have a notion of zero volume.)

This seems to be different from the definition that Malý uses in the paper Tapio Rajala referred to in his answer. From a quick glance at that paper, there seem to be a number of different generalisations out there, and this seems to be another example of the phenomenon wherein various notions that are distinct in higher dimensions happen to all coincide in the lowest-dimensional case, so that you can generalise some aspects of the familiar setting, but not all. Which generalisation is useful depends on what your purpose is.

Notice also that all three properties listed by Vaughn make perfect sense for example in PI-spaces (metric spaces with a doubling measure and a local Poincaré inequality). So, one could go beyond the case of smooth manifolds. Of course one could also take the definition by Malý to the metric space setting. –  Tapio Rajala Feb 1 '12 at 12:44

There is a generalization of absolute continuity for mappings $f \colon A \to \mathbb{R}^d$. This is called $n$-absolute continuity. It was introduced by Jan Malý in Absolutely continuous functions of several variables, J. Math. Anal. Appl. 231 (1999), 492–508.

A mapping $f\colon A \to \mathbb{R}^d$ is called $n$-absolutely continuous if for every $\epsilon > 0$ there exists $\delta > 0$ such that for any disjoint finite family $\{B_i\}$ of closed balls in $A$ we have $$\sum_i \mathcal{L}^n(B_i) < \delta \quad\Longrightarrow\quad \sum_i \left(\text{osc}_{B_i}(f)\right)^n < \epsilon.$$
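A standard example that may help calibrate the three conditions (my addition, not from the thread): the Cantor function is continuous and of bounded variation on $[0,1]$, yet it maps the measure-zero Cantor set onto a set of full measure, so it fails the Luzin N property and is not absolutely continuous. This shows that condition 3 in the list above is genuinely independent of conditions 1 and 2.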
{}
Chemistry: The Molecular Science (5th Edition)

This is a redox reaction. Oxidizing agent: $Ti$. Reducing agent: $Mg$.

Before the reaction, $Ti$ was in $TiCl_4$. Each chlorine has an oxidation number of $-1$, so $Ti$ has $+4$. After the reaction, $Ti$ is represented as $Ti(s)$, so it has an oxidation number of $0$. $Ti$ is reduced ($+4 \rightarrow 0$), making it the oxidizing agent.

Before the reaction, $Mg$ was present as $Mg(l)$; therefore, it had an oxidation number of $0$. After the reaction, $Mg$ is in $MgCl_2$. Each chlorine has an oxidation number of $-1$, so $Mg$ has $+2$. $Mg$ is oxidized ($0 \rightarrow +2$), making it the reducing agent.
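Putting the two oxidation-number changes together, the half-reactions and the overall balanced equation implied by this analysis (the standard Kroll-process stoichiometry) are: $$Ti^{4+} + 4e^{-} \rightarrow Ti \qquad \text{and} \qquad 2 \times \left( Mg \rightarrow Mg^{2+} + 2e^{-} \right),$$ $$TiCl_4 + 2Mg(l) \rightarrow Ti(s) + 2MgCl_2.$$ The four electrons gained by titanium are supplied by two magnesium atoms, which is why the balanced equation requires two $Mg$ per $TiCl_4$.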
{}
Science:Math Exam Resources/Courses/MATH102/December 2016/Question A 08/Solution 1

The correct choice is (iii): at an inflection point the function changes its concavity, so the tangent line moves from being above the graph to being below the graph (or vice versa), and hence must cross the graph from one side to the other.

(i) is not correct. See Question B 01: although the zero ${\displaystyle z_{2}}$ is closer than the other zero ${\displaystyle z_{1}}$ to the initial point ${\displaystyle B}$, Newton's method starting at ${\displaystyle B}$ finds the zero ${\displaystyle z_{1}}$.

(ii) is not correct. The tangent line of ${\displaystyle f(x)=\cos x}$ at a point ${\displaystyle a}$ is ${\displaystyle y=f'(a)(x-a)+f(a)=-\sin a(x-a)+\cos a}$. Assume that there is a tangent line which goes through ${\displaystyle (0,2)}$. This means that there exists a point ${\displaystyle a\in [-\pi ,\pi ]}$ such that ${\displaystyle 2=-\sin a(0-a)+\cos a=a\sin a+\cos a}$. However, there is no such point ${\displaystyle a\in [-\pi ,\pi ]}$, because the maximum of ${\displaystyle a\sin a+\cos a}$ on this interval is ${\displaystyle {\frac {\pi }{2}}<2}$. This is a contradiction. We find the maximum in the following way. Let ${\displaystyle g(a)=a\sin a+\cos a}$. Then its derivative is ${\displaystyle g'(a)=a\cos a}$, which vanishes at ${\displaystyle a=0,\pm {\frac {\pi }{2}}}$; indeed, these are the critical points. The maximum on ${\displaystyle [-\pi ,\pi ]}$ is therefore achieved either at the critical points ${\displaystyle 0,\pm {\frac {\pi }{2}}}$ or at the end points ${\displaystyle \pm \pi }$. Since ${\displaystyle g(0)=1,g\left({\frac {\pi }{2}}\right)={\frac {\pi }{2}},g(\pm \pi )=-1}$, the maximum is ${\displaystyle {\frac {\pi }{2}}}$.

(iv) is not correct, because we can take ${\displaystyle f(x)=|x|}$, ${\displaystyle g(x)=-|x|}$, and ${\displaystyle a=0}$: obviously, ${\displaystyle f}$ and ${\displaystyle g}$ are NOT differentiable at ${\displaystyle x=a=0}$, but ${\displaystyle h(x)=f(x)+g(x)=|x|-|x|=0}$ is a constant function and hence is differentiable at ${\displaystyle x=a}$.

(Proof for (iii)) We need to show that ${\displaystyle g(x)=f(x)-({\text{tangent line}})=f(x)-f'(a)(x-a)-f(a)}$ changes sign around ${\displaystyle x=a}$. Since ${\displaystyle g(a)=0}$, it is enough to show that ${\displaystyle g}$ is either increasing or decreasing around ${\displaystyle x=a}$. (If it is increasing, the sign changes from negative to positive; otherwise it changes from positive to negative.) For this purpose, we calculate the derivative ${\displaystyle g'(x)=f'(x)-f'(a)=\int _{a}^{x}f''(t)dt}$. Since ${\displaystyle f''}$ changes sign at ${\displaystyle x=a}$, the derivative ${\displaystyle g'}$ keeps the same sign on both sides of ${\displaystyle x=a}$. For example, in the case that ${\displaystyle f''}$ changes its sign from negative to positive at ${\displaystyle x=a}$: for ${\displaystyle x<a}$ sufficiently close to ${\displaystyle a}$ we have ${\displaystyle g'(x)=\int _{a}^{x}f''(t)dt=-\int _{x}^{a}f''(t)dt>0}$, and for ${\displaystyle x>a}$ sufficiently close to ${\displaystyle a}$ we have ${\displaystyle g'(x)=\int _{a}^{x}f''(t)dt>0}$. Therefore ${\displaystyle g}$ is either increasing or decreasing around ${\displaystyle x=a}$, which proves the claim.

Answer: ${\displaystyle \color {blue}(iii)}$
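As a quick numerical cross-check of the maximum computed in (ii), here is a short sketch (assuming NumPy is available):

import numpy as np

# Sample g(a) = a*sin(a) + cos(a) densely on [-pi, pi] and locate its maximum.
a = np.linspace(-np.pi, np.pi, 100001)
g = a * np.sin(a) + np.cos(a)

print(g.max())    # ~1.5708, i.e. pi/2
print(np.pi / 2)  # 1.5707963..., and pi/2 < 2 as claimed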
{}
Equation Equivalence

1. Jun 20, 2012 hammonjj

1. The problem statement, all variables and given/known data Show that: 7x ≡ 3 (mod 15)

2. Relevant equations From the given above I think it should be: 7x - 3 = 15n

3. The attempt at a solution I tried factoring this in various ways to show that either side was a factor of the other, but I'm struggling here. I don't know what to do from here. I actually have several of these problems, but I assume that once I know how to do the first one, they will be easy. Thoughts? Thanks! James

2. Jun 20, 2012 Curious3141 Well, for starters, it isn't true in general (for all x). Counterexample: for x = 3, 7*3 = 21 = 6 mod 15

3. Jun 20, 2012 Staff: Mentor So, since the equation isn't generally true, maybe the aim of the problem was to find the values of x for which it is true.

4. Jun 21, 2012 Curious3141 In which case "Show that:", etc. is a terrible phrasing for it.

5. Jun 21, 2012 HallsofIvy Yes, it is! Hammonjj, you want to solve 7x ≡ 3 (mod 15) for x. Of course, that is the same as x ≡ 3/7 (mod 15), so you really just want to know how to write 3/7 in this mod-15 system. Notice that 7(2) = 14 ≡ -1 (mod 15), so that 7(-2) ≡ 1 (mod 15). And, since 15 - 2 = 13, 1/7 ≡ -2 ≡ 13 (mod 15). Now, what is 3/7 (mod 15)?

6. Jun 21, 2012 dimension10 Always verify with 0
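Following HallsofIvy's hint, the computation can be checked in a few lines (a sketch; the brute-force inverse search is just for illustration):

# Find the multiplicative inverse of 7 modulo 15 by trying every residue.
inv7 = next(k for k in range(15) if (7 * k) % 15 == 1)
print(inv7)          # 13, matching 1/7 = -2 = 13 (mod 15)

# Then x = 3/7 = 3 * 13 (mod 15).
x = (3 * inv7) % 15
print(x)             # 9
print((7 * x) % 15)  # 3, confirming 7*9 = 63 = 3 (mod 15)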
{}
# Generating Concordances

This notebook shows how you can generate a concordance using lists. First we see what text files we have.

In [1]:
ls *.txt

Hume Enquiry.txt negative.txt positive.txt Hume Treatise.txt obama_tweets.txt

We are going to use "Hume Treatise.txt" from the Gutenberg Project. You can use whatever text you want. We print the first 50 characters to check.

In [2]:
theText2Use = "Hume Treatise.txt"
with open(theText2Use, "r") as fileToRead:
    theText = fileToRead.read()  # read the whole file into one string
print("This string has " + str(len(theText)) + " characters.")
print(theText[:50])

This string has 1344061 characters.
The Project Gutenberg EBook of A Treatise of Human

## Tokenization

Now we tokenize the text, producing a list called "listOfTokens", and check the first words. This eliminates punctuation and lowercases the words.

In [3]:
import re
listOfTokens = re.findall(r"\b\w[\w'-]*\b", theText.lower())  # lowercase the text and keep word tokens only (one simple tokenizer)
print(listOfTokens[:10])

['the', 'project', 'gutenberg', 'ebook', 'of', 'a', 'treatise', 'of', 'human', 'nature']

## Input

Now we get the word you want a concordance for and the context wanted.

In [4]:
word2find = input("What word do you want collocates for? ").lower() # Ask for the word to search for
context = input("How much context do you want? ") # This asks for the context of words on either side to grab

What word do you want collocates for? truth
How much context do you want? 10

In [5]:
type(context)

Out[5]: str

In [7]:
contextInt = int(context)
type(contextInt)

Out[7]: int

In [9]:
len(listOfTokens)

Out[9]: 228958

## Main function

Here is the main function that does the work, populating a new list with the lines of the concordance. We check the last 5 concordance lines.

In [10]:
def makeConc(word2conc, list2FindIn, context2Use, concList):
    end = len(list2FindIn)
    for location in range(end):
        if list2FindIn[location] == word2conc:
            # Here we check whether we are at the very beginning or end
            if (location - context2Use) < 0:
                beginCon = 0
            else:
                beginCon = location - context2Use
            if (location + context2Use) > end:
                endCon = end
            else:
                endCon = location + context2Use + 1
            theContext = (list2FindIn[beginCon:endCon])
            concordanceLine = ' '.join(theContext)
            # print(str(location) + ": " + concordanceLine)
            concList.append(str(location) + ": " + concordanceLine)

theConc = []
makeConc(word2find, listOfTokens, int(context), theConc)
theConc[-5:]

Out[10]: ['220330: a reason why the faculty of recalling past ideas with truth and clearness should not have as much merit in it', '223214: confessing my errors and should esteem such a return to truth and reason to be more honourable than the most unerring', '223680: from the other this therefore being regarded as an undoubted truth that belief is nothing but a peculiar feeling different from', '224382: mind and he will evidently find this to be the truth secondly whatever may be the case with regard to this', '225925: by their different feeling i should have been nearer the truth end of project gutenberg s a treatise of human nature']

## Output

Finally, we output to a text file.

In [11]:
nameOfResults = word2find.capitalize() + ".Concordance.txt"
with open(nameOfResults, "w") as fileToWrite:
    for line in theConc:
        fileToWrite.write(line + "\n")
print("Done")

Done

Here we check that the file was created.

In [12]:
ls *.Concordance.txt

Truth.Concordance.txt

## Next Steps

Onwards to our final utility example: Exploring a text with NLTK

CC BY-SA From The Art of Literary Text Analysis by Stéfan Sinclair & Geoffrey Rockwell. Edited and revised by Melissa Mony. Created September 30th, 2016 (Jupyter 4.2.1)
{}
# What is the difference between matter & spacetime? [duplicate]

If the universe is expanding, why doesn't the matter in it expand proportionally, making it seem as if the universe is static? Alternatively, as spacetime expands, why does it not just slide past matter, leaving matter unmoved? What anchors matter to a particular point in spacetime?

However, this is a misleading idea of what spacetime is. Spacetime isn't a physical object; it's a mathematical structure$^1$ that tells us how to calculate the distance between objects, so matter can't slide over spacetime.

$^1$ a combination of a manifold and a metric
{}
# Hypothesis Testing

Hypothesis: if you can take an idea which most people thought was the most hated concept in America this side of al-Qaeda's mission statement and turn that idea into a book which stays on the New York Times bestseller list for almost a year, publishers will let you write your own ticket at least once.

Evidence: Richard Dawkins is being paid $3.5 million for his next book, Only A Theory? (to be published in 2009), which will lay out evidence for evolution.

Hypothesis: People who have a vested interest in portraying Dawkins and other "uppity atheists" as detrimental to science education and/or Western civilization will ignore or undervalue this book, along with all other attempts by "uppity atheists" to parlay their notoriety for good causes.

Evidence: Consider the reception (or non-reception) of Dawkins's television series, The Enemies of Reason (2007). This is a lamentable oversight, since the struggle to keep pseudoscience out of medicine is surely an area on which atheists and at least some moderate theists can find a genuine common ground.

Hypothesis: I could sure have a lot of fun with $3.5 million.

Evidence: Forthcoming. Come on, in the interests of science, pretty please, with blueberries on top? (Tip o' the fedora to Tyler.)

## 6 thoughts on "Hypothesis Testing"

1. Pseudonym says: On one hand, it's been a long time coming. For ages, we've needed a single book that lays out all the evidence for the general reader. If anyone can do that, Richard Dawkins can. He's one of the best popular science authors ever. On the other hand, I can't help thinking that his previous book (you know, the one that WASN'T a science book) will tarnish its reputation, and cause people to dismiss it out of hand as part of the Evil Atheist Conspiracy. And back to the first hand, it is very good that he's using his notoriety for niceness instead of evil, as you said. And back to the second hand, I can't help thinking that maybe that was the point. Perhaps Prof. Dawkins' lasting legacy will not be convincing anyone, but rather shifting the Overton window. Either way, this book will be very welcome, if only because it's a return to what Dawkins does best.

2. CleveDan says: "On the other hand, I can't help thinking that his previous book (you know, the one that WASN'T a science book) will tarnish its reputation, and cause people to dismiss it out of hand as part of the Evil Atheist Conspiracy." ……..not much we can do about that one considering that simple biology textbooks are considered part of the evil Atheist Conspiracy by science deniers

3. Yeah — if basic science didn't get people upset, we wouldn't be in the situation we are now. Whatta world. . . . About the "Overton window" thing: I suspect that different sectors of society have their windows in different positions, that these windows can migrate with varying speeds, and that what we're seeing now is an increasing variety of those positions, rather than a shift of a single, National Window to a new location.

4. Pseudonym says: Oh, good point. I forgot that in the Bizarro universe, Ken Miller is also part of the Evil Atheist Conspiracy.

5. And, of course, "atheists" hate God and worship Satan.

6. manigen says: "And, of course, "atheists" hate God and worship Satan." Hey, everybody needs a hobby.
{}
1. ## Integral problem

Given: $\int_0^{2\pi} \cos^2{6\theta} \, d\theta$

_____________________________________________

Attempt: $\int_0^{2\pi} \cos^2{6\theta} \, d\theta = \int_0^{2\pi} \frac{1-\sin^2{12\theta}}{2}\, d\theta = \int_0^{2\pi} \frac{1}{2}\, d\theta - \frac{1}{2}\int_0^{2\pi} 2\sin{6\theta}\, \cos{6\theta}\, d\theta = \frac{1}{2}( 2\pi - 2) = \pi - 1$

The answer is supposed to be just $\pi$. Can I get any help here? Thanks.
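For reference, a sketch of the standard evaluation. The slip in the attempt is in the identity: $\cos^2 x = \frac{1+\cos{2x}}{2}$, whereas $\frac{1-\sin^2{2x}}{2} = \frac{\cos^2{2x}}{2} \ne \cos^2 x$. Using the correct identity: $\int_0^{2\pi} \cos^2{6\theta}\, d\theta = \int_0^{2\pi} \frac{1+\cos{12\theta}}{2}\, d\theta = \frac{1}{2}\left[\theta + \frac{\sin{12\theta}}{12}\right]_0^{2\pi} = \frac{1}{2}(2\pi + 0) = \pi$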
{}
Expand and collect like terms: x(x+7) + 4(x^2-5) = x^2 + 7x + 4x^2 - 20 = 5x^4 + 7x - 20. I'm not sure if I was supposed to add the exponents together to make 5x^4 or 5x^2.

2. It should have been $5x^2$, but apart from that you are completely right.

3. Do not add the exponents! x^2 + 4x^2 = 5x^2

4. thanks guys, really appreciate the help
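A one-line check of the expansion (a sketch assuming SymPy is installed):

from sympy import symbols, expand

x = symbols('x')
print(expand(x*(x + 7) + 4*(x**2 - 5)))  # 5*x**2 + 7*x - 20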
{}
Climate input format

Format of climate input data

iLand uses climate data on a daily basis. Climate data is stored in a SQLite database with a fixed naming convention. The location of the climate database as well as additional options are configured in the project file. The CO2 concentration is considered to be constant for the whole simulated landscape and is set using the model.climate.co2concentration setting.

year: absolute year (e.g. 2009)
month: month of the year. January=1, December=12
day: day of the month (1..31). February has 29 days in leap years.
min_temp: minimum temperature of the day (°Celsius)
max_temp: maximum temperature of the day (°Celsius)
prec: precipitation of the day (mm)
rad: daily sum of global radiation per m² (MJ/m2/day)
vpd: average of the vapour pressure deficit of that day (kPa)

The mean temperature of the light hours (Eq. 1) and the daily mean temperature (Eq. 2) are calculated as follows (Floyd and Braddock, 1984; Landsberg, 1986):

\begin{aligned} t_{day} = 0.212\left ( t_{max}-t_{mean} \right )+t_{mean} \end{aligned} Eq. 1

\begin{aligned} t_{mean}=\frac{t_{min}+t_{max}}{2} \end{aligned} Eq. 2

Older format (< v0.7)

The following table gives an overview of the old format without maximum temperature (used until version 0.7, but it can still be used):

year: absolute year (e.g. 2009)
month: month of the year. January=1, December=12
day: day of the month (1..31). February has 29 days in leap years.
temp: average temperature of the light hours of the day (°Celsius)
min_temp: minimum temperature of the day (°Celsius)
prec: precipitation of the day (mm)
rad: daily sum of global radiation per m² (MJ/m2/day)
vpd: average of the vapour pressure deficit of that day (kPa)

Using climate data in the simulation

Spatial pattern

The finest possible spatial grain for climate data is the resource unit, i.e. 100x100m grid cells. However, different resource units may share the same climate input, thus considerably reducing the amount of climate data that needs to be provided by the user and that needs to be processed by the model. It can be set up as follows: The system.database.climate key refers to a (potentially large) SQLite database that contains (potentially many) database tables. Each database table contains one time series of climate data (daily basis, see above for details). A time series can be used for one or many resource units in the simulation: this is defined by an environment file. The environment file contains for each resource unit the *name* of the table with the climate data. iLand detects multiple uses of climate tables and reads the data only once.

Temporal pattern

By default, iLand uses the first year of climate data in the database tables for the first simulation year, the second year in the data for the second simulation year, and so forth. If the climate record is finished, it is rolled over and started again from the beginning. However, this behavior can be modified in various ways.

• starting with a specific year of the climate data: use the filter setting to filter out all climate data that should be omitted (e.g. if filter is 'year>1959', then no climate data from the 50s is loaded).
• sample randomly from the available climate data: enable the feature by setting randomSamplingEnabled to true. This tells iLand to batch-load a number of years (see batchYears) and subsequently randomly select from that list of years.
The order of sampled years can be either fixed (randomSamplingList) or randomly picked by the model (if randomSamplingList is empty). In both cases, the same sequence of years is used for the whole landscape. A current limitation is that sampling is limited to the climate data that is loaded "batched".

Importing data into a climate database

There are several ways to get data into a SQLite database. One option is to use the ClimateConverter JavaScript object that is built into iLand. Another option is to use database management tools, which often have a data import functionality (e.g. Sqliteman). Yet another powerful and flexible option is to use R and the R package RSQLite, which is also very useful for analyzing output data of iLand. Below is an example for importing data into a SQLite climate database that can be used by iLand:

################################################################
### Create tables in the climate database for iLand using R ####
################################################################
# we use the R library RSQLite for all database accesses
# look up the help for the available features.
library(RSQLite)
#
# connect to an existing database, or create a new one
db.conn <<- dbConnect(RSQLite::SQLite(), dbname="e:/Daten/iLand/projects/AFJZ_experiment/database/new_database.sqlite")

## set up a data frame of climate data using the right columns and units
# see http://iland.boku.ac.at/climatedata for the required columns
# here load a test data set:
summary(test.data)

# set up the data frame
iland.climate <- data.frame(year=test.data$year,
                            month=test.data$month,
                            day=test.data$day,
                            min_temp=test.data$tmin,
                            max_temp=test.data$tmax,
                            prec=test.data$prec,
                            rad=test.data$rad,
                            vpd=test.data$vpd)
summary(iland.climate)

### save into the database: ####
## the table name is just an example.
## However, this name is referred to in the project file.
dbWriteTable(db.conn, "climate014369", iland.climate, row.names=F)
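A tiny illustration of Eq. 1 and Eq. 2 in Python (a sketch, independent of the R workflow above):

def t_mean(t_min, t_max):
    # Eq. 2: daily mean temperature
    return (t_min + t_max) / 2.0

def t_day(t_min, t_max):
    # Eq. 1: mean temperature of the light hours
    tm = t_mean(t_min, t_max)
    return 0.212 * (t_max - tm) + tm

print(t_mean(10.0, 20.0))  # 15.0
print(t_day(10.0, 20.0))   # 0.212 * (20 - 15) + 15 = 16.06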
{}
# Lambda Expressions in Java

#### Software Engineering Java

Lambda expressions were added in Java 8 along with functional programming. Before understanding lambda expressions, or lambdas for short, a prerequisite is to understand what a Functional Interface is. A functional interface is nothing but a simple interface with a single abstract method in it. An example of a functional interface is as follows:

interface SampleFunctionalInterface {
    abstract void abstractMethod();
}

An important thing to note is that, besides the single abstract method, static and default methods are also allowed in a functional interface. Example as follows:

interface AnotherFunctionalInterface {
    abstract void abstractMethod();
    static void staticMethod(){
        //implementation goes here
    }
    default void defaultMethod(){
        //default implementation goes here
    }
}

Learning about functional interfaces was a piece of cake, right?! Let's move ahead to know about a lambda. For that, let's go through the following code:

import java.util.ArrayList;
import java.util.List;

/* A class to represent an animal with different traits */
class Animal {
    private String name;
    private boolean canHop;
    private boolean canSwim;

    public Animal(String name, boolean canHop, boolean canSwim){
        this.name = name;
        this.canHop = canHop;
        this.canSwim = canSwim;
    }

    boolean canHop(){
        return canHop;
    }

    boolean canSwim(){
        return canSwim;
    }
}

/* A functional interface to test traits of animals */
interface TestTraits {
    boolean test(Animal a);
}

/* A class that provides an implementation to test the hopping trait of an animal */
class TestCanHop implements TestTraits {
    public boolean test(Animal a){
        return a.canHop();
    }
}

/* A class that represents a researcher who wants to test certain animals for certain traits */
class Researcher {
    public static void main(String[] args){
        List<Animal> animals = new ArrayList<>();
        animals.add(new Animal("cat", true, false)); // a sample animal, assumed here for illustration
        printHoppers(animals, new TestCanHop());
    }

    /* A method which prints animals which can hop */
    static void printHoppers(List<Animal> animals, TestTraits testTraits){
        for (Animal a : animals){
            if (testTraits.test(a)){
                System.out.println(a);
            }
        }
    }
}

Using the above class TestCanHop, our researcher could determine that, from the given animals, a cat can hop. Now the researcher wants to identify animals which can swim. So, how should we change our code? We would need to create another class, TestCanSwim, which implements TestTraits, followed by creating another method printSwimmers in the Researcher class. Not a lot of work, right?! But what if our curious researcher wants to explore animals for a lot of other traits? Is there a way to avoid the effort of creating an entire class to implement a single method from an interface? Lambda expressions are our saviors! This is how the call to printHoppers looks now: printHoppers(animals, new TestCanHop()); Using a lambda, it looks like this: printHoppers(animals, a -> a.canHop()); Does this a -> a.canHop() look similar to the method body of the test method from TestCanHop? You got it right! Using a lambda, we eliminated the entire method and thus the need to create a class. As you can see, code elimination is the apparent benefit of using lambdas. Now, the printHoppers method was expecting an instance of the TestTraits interface. But we pass this funky-looking lambda to it, so how does it work? Well, Java does the work for us! Java maps the lambda to the interface: since there is only one method without any implementation, a.k.a. an abstract method, in the interface, a.k.a. a functional interface, the mapping is evident for Java.
Here is the general syntax of a lambda, and here is the detailed syntax of a lambda expression (figures in the original post). As you can see, the arrow is used to separate the method parameters from the method body of the test method from the TestCanHop class. Thus, by using lambdas, we improve code readability. In a nutshell, a lambda is a minimal way to write a method. It is an expression that can be passed as a method parameter. Lambdas are also known as 'anonymous functions', since, despite being a function, a lambda does not have a name.

Now there are specific rules for writing a lambda expression:

1. Local variables captured from the enclosing scope are not allowed to be modified inside a lambda; they must be effectively final. For example, if count is a local variable of the enclosing method, a lambda containing count++ does not compile.
2. Other variables, such as instance variables and static variables, are accessible from within a lambda.
3. If there is more than one parameter, we need to write the parameters in parentheses.
4. Similarly, if there is more than one statement in the method body, we need to write the statements in curly braces.
5. If we are using curly braces, the body should be a valid code block. Thus, we need to write a 'return' keyword and end every statement with a semicolon.
6. Note the optional items for a lambda: a parameter type is optional, and curly braces are optional if the method body contains only one statement. (This rule is common in Java; it also applies to if/else structures and loops.)

Let's go through the above rules once again before trying to solve the following question.

## Question

#### To get used to the lambda syntax, let's find which of these are valid lambdas:

option a: print((String a, String b) -> a.startsWith("test"));
option b: print(a -> { return a.startsWith("test") });
option c: print(a, b -> a.startsWith("test"));
option d: (a, b) -> {b=1; return b;}

Did you think that option a is the right answer? Awesome! Congrats, you got it correct! If not, don't worry, you will get there.

1. option a is a valid lambda with two parameters. Note that curly braces are optional; if braces are not there, the return keyword and semicolon are also omitted.
2. option b is wrong and will not compile because it is simply missing a semicolon. Without a semicolon, it is not a valid Java statement. When braces are present, we should make sure to add a return keyword and a semicolon.
3. option c is wrong because it does not put more than one parameter in parentheses.
4. option d is invalid where it stands: unlike options a-c, the lambda is not passed to a method such as print, so there is no target type from which Java can infer a functional interface, and a lambda expression cannot appear as a statement on its own.
{}
# Beverton–Holt model

The Beverton–Holt model is a classic discrete-time population model which gives the expected number $n_{t+1}$ (or density) of individuals in generation $t + 1$ as a function of the number of individuals in the previous generation,

$n_{t+1} = \frac{R_0 n_t}{1+ n_t/M}.$

Here $R_0$ is interpreted as the proliferation rate per generation and $K = (R_0 - 1) M$ is the carrying capacity of the environment. The Beverton–Holt model was introduced in the context of fisheries by Beverton & Holt (1957). Subsequent work has derived the model under other assumptions such as contest competition (Brännström & Sumpter 2005) or within-year resource limited competition (Geritz & Kisdi 2004). The Beverton–Holt model can be generalized to include scramble competition (see the Ricker model, the Hassell model and the Maynard Smith–Slatkin model). It is also possible to include a parameter reflecting the spatial clustering of individuals (see Brännström & Sumpter 2005).

Despite being nonlinear, the model can be solved explicitly, since it is in fact an inhomogeneous linear equation in $1/n$. The solution is

$n_t = \frac{K n_0}{n_0 + (K - n_0) R_0^{-t}}.$

Because of this structure, the model can be considered as the discrete-time analogue of the continuous-time logistic equation for population growth introduced by Verhulst; for comparison, the logistic equation is

$\frac{dN}{dt} = rN \left( 1 - \frac{N}{K} \right),$

and its solution is

$N(t) = \frac{K N(0)}{N(0) + (K - N(0)) e^{-rt}}.$

## References

• Beverton, R. J. H.; Holt, S. J. (1957), On the Dynamics of Exploited Fish Populations, Fishery Investigations Series II Volume XIX, Ministry of Agriculture, Fisheries and Food
• Geritz, Stefan A. H.; Kisdi, Éva (2004), "On the mechanistic underpinning of discrete-time population models with complex dynamics", J. Theor. Biol. 228 (2): 261–269, doi:10.1016/j.jtbi.2004.01.003, PMID 15094020
• Ricker, W. E. (1954), "Stock and recruitment", J. Fisheries Res. Board Can. 11: 559–623
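To see the recursion and its closed-form solution agree numerically, here is a short Python sketch with illustrative parameter values:

# Iterate the Beverton-Holt map and compare with the explicit solution.
R0, M, n0, T = 2.0, 100.0, 5.0, 20
K = (R0 - 1.0) * M  # carrying capacity

n = n0
for t in range(T):
    n = R0 * n / (1.0 + n / M)

closed_form = K * n0 / (n0 + (K - n0) * R0 ** (-T))
print(n, closed_form)  # the two values agree up to floating-point error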
{}
• ### Transverse polarization of $\Lambda$ hyperons from quasireal photoproduction on nuclei(1406.3236) Oct. 3, 2014 hep-ex, nucl-ex The transverse polarization of $\Lambda$ hyperons was measured in inclusive quasireal photoproduction for various target nuclei ranging from hydrogen to xenon. The data were obtained by the HERMES experiment at HERA using the 27.6 GeV lepton beam and nuclear gas targets internal to the lepton storage ring. The polarization observed is positive for light target nuclei and is compatible with zero for krypton and xenon. • ### Measurement of the virtual-photon asymmetry A2 and the spin-structure function g2 of the proton(1112.5584) June 21, 2013 hep-ex A measurement of the virtual-photon asymmetry A_2(x,Q^2) and of the spin-structure function g_2(x,Q^2) of the proton are presented for the kinematic range 0.004 < x < 0.9 and 0.18 GeV^2 < Q^2 < 20 GeV^2. The data were collected by the HERMES experiment at the HERA storage ring at DESY while studying inclusive deep-inelastic scattering of 27.6 GeV longitudinally polarized leptons off a transversely polarized hydrogen gas target. The results are consistent with previous experimental data from CERN and SLAC. For the x-range covered, the measured integral of g_2(x) converges to the null result of the Burkhardt-Cottingham sum rule. The x^2 moment of the twist-3 contribution to g_2(x) is found to be compatible with zero. • ### The HERMES Recoil Detector(1302.6092) May 6, 2013 hep-ex, physics.ins-det For the final running period of HERA, a recoil detector was installed at the HERMES experiment to improve measurements of hard exclusive processes in charged-lepton nucleon scattering. Here, deeply virtual Compton scattering is of particular interest as this process provides constraints on generalised parton distributions that give access to the total angular momenta of quarks within the nucleon. The HERMES recoil detector was designed to improve the selection of exclusive events by a direct measurement of the four-momentum of the recoiling particle. It consisted of three components: two layers of double-sided silicon strip sensors inside the HERA beam vacuum, a two-barrel scintillating fibre tracker, and a photon detector. All sub-detectors were located inside a solenoidal magnetic field with an integrated field strength of 1 T. The recoil detector was installed in late 2005. After the commissioning of all components was finished in September 2006, it operated stably until the end of data taking at HERA end of June 2007. The present paper gives a brief overview of the physics processes of interest and the general detector design. The recoil detector components, their calibration, the momentum reconstruction of charged particles, and the event selection are described in detail. The paper closes with a summary of the performance of the detection system. • ### Azimuthal distributions of charged hadrons, pions, and kaons produced in deep-inelastic scattering off unpolarized protons and deuterons(1204.4161) Jan. 18, 2013 hep-ex, nucl-ex The azimuthal cos{\phi} and cos2{\phi} modulations of the distribution of hadrons produced in unpolarized semi-inclusive deep-inelastic scattering of electrons and positrons off hydrogen and deuterium targets have been measured in the HERMES experiment. For the first time these modulations were determined in a four-dimensional kinematic space for positively and negatively charged pions and kaons separately, as well as for unidentified hadrons. 
These azimuthal dependences are sensitive to the transverse motion and polarization of the quarks within the nucleon via, e.g., the Cahn, Boer-Mulders and Collins effects. • Hadron multiplicities in semi-inclusive deep-inelastic scattering were measured on neon, krypton and xenon targets relative to deuterium at an electron(positron)-beam energy of 27.6 GeV at HERMES. These ratios were determined as a function of the virtual-photon energy nu, its virtuality Q2, the fractional hadron energy z and the transverse hadron momentum with respect to the virtual-photon direction pt . Dependences were analysed separately for positively and negatively charged pions and kaons as well as protons and antiprotons in a two-dimensional representation. Compared to the one-dimensional dependences, some new features were observed. In particular, when z > 0.4 positive kaons do not show the strong monotonic rise of the multiplicity ratio with nu as exhibited by pions and negative kaons. Protons were found to behave very differently from the other hadrons. • ### Inclusive Measurements of Inelastic Electron and Positron Scattering from Unpolarized Hydrogen and Deuterium Targets(1103.5704) May 2, 2011 hep-ex Results of inclusive measurements of inelastic electron and positron scattering from unpolarized protons and deuterons at the HERMES experiment are presented. The structure functions $F_2^p$ and $F_2^d$ are determined using a parameterization of existing data for the longitudinal-to-transverse virtual-photon absorption cross-section ratio. The HERMES results provide data in the ranges $0.006\leq x\leq 0.9$ and 0.1 GeV$^2\leq Q^2\leq$ 20 GeV$^2$, covering the transition region between the perturbative and the non-perturbative regimes of QCD in a so-far largely unexplored kinematic region. They are in agreement with existing world data in the region of overlap. The measured cross sections are used, in combination with data from other experiments, to perform fits to the photon-nucleon cross section using the functional form of the ALLM model. The deuteron-to-proton cross-section ratio is also determined. • ### Ratios of Helicity Amplitudes for Exclusive rho-0 Electroproduction(1012.3676) May 2, 2011 hep-ex Exclusive rho^0-meson electroproduction is studied in the HERMES experiment, using a 27.6 GeV longitudinally polarized electron/positron beam and unpolarized hydrogen and deuterium targets in the kinematic region 0.5 GeV^2 < Q^2 < 7.0 GeV^2, 3.0 GeV < W < 6.3 GeV, and -t' < 0.4 GeV^2. Real and imaginary parts of the ratios of the natural-parity-exchange helicity amplitudes T_{11} (\gamma^*_T --> \rho_T), T_{01} (\gamma^*_T --> \rho_L), T_{10} (\gamma^*_L --> \rho_T), and T_{1-1} (\gamma^*_{-T} -->\rho_T) to T_{00} (\gamma^*_L --> \rho_L) are extracted from the data. For the unnatural-parity-exchange amplitude U_{11}, the ratio |U_{11}/T_{00}| is obtained. The Q^2 and t' dependences of these ratios are presented and compared with perturbative QCD predictions. • ### Measurement of azimuthal asymmetries associated with deeply virtual Compton scattering on a longitudinally polarized deuterium target(1008.3996) May 2, 2011 hep-ex Azimuthal asymmetries in exclusive electroproduction of a real photon from a longitudinally polarized deuterium target are measured with respect to target polarization alone and with respect to target polarization combined with beam helicity and/or beam charge. 
The asymmetries appear in the distribution of the real photons in the azimuthal angle $\phi$ around the virtual photon direction, relative to the lepton scattering plane. The asymmetries arise from the deeply virtual Compton scattering process and its interference with the Bethe-Heitler process. The results for the beam-charge and beam-helicity asymmetries from a tensor polarized deuterium target with vanishing vector polarization are shown to be compatible with those from an unpolarized deuterium target, which is expected for incoherent scattering dominant at larger momentum transfer. Furthermore, the results for the single target-spin asymmetry and for the double-spin asymmetry are found to be compatible with the corresponding asymmetries previously measured on a hydrogen target. For coherent scattering on the deuteron at small momentum transfer to the target, these findings imply that the tensor contribution to the cross section is small. Furthermore, the tensor asymmetry is found to be compatible with zero. • ### Exclusive Leptoproduction of Real Photons on a Longitudinally Polarised Hydrogen Target(1004.0177) April 1, 2010 hep-ex Polarisation asymmetries are measured for the hard exclusive leptoproduction of real photons from a longitudinally polarised hydrogen target. These asymmetries arise from the deeply virtual Compton scattering and Bethe-Heitler processes. From the data are extracted two asymmetries in the azimuthal distribution of produced real photons about the direction of the exchanged virtual photon: A_UL with respect to the target polarisation and A_LL with respect to the product of the beam and target polarisations. Results for both asymmetries are compared to the predictions from a generalised parton distribution model. The sin(phi) and cos(0*phi) amplitudes observed respectively for the A_UL and A_LL asymmetries are compatible with the sizeable predictions from the model. Unexpectedly, a sin(2*phi) modulation in the A_UL asymmetry with a magnitude similar to that of the sin(phi) modulation is observed.
{}
# [OS X TeX] Unknown graphics extension: ps and latex209 files

Claudio Chagas claudchagas at gmail.com
Sat Apr 22 00:14:13 CEST 2006

Hello, I'm a novice in LaTeX, and the more I learn the more I like it. There are still some hurdles to overcome, though. I have a basic installation of TeX on my Mac and I believe I've got all the necessary packages, including graphicx. On trying to compile the file below, I first got an error message:

!LaTeX Error: Unknown graphics extension: ps

============ Original source text code =============
\documentstyle[graphicx,12pt]{jennymanletter} %need docstyle, latex209
\frenchspacing
\oddsidemargin0.5cm
\evensidemargin-0.6cm
\textwidth14.5cm
\textheight21cm
\setcounter{secnumdepth}{5}
\parskip1.5ex plus0.5ex minus0.5ex
\parindent0em
\telephone{51}
\node{hep.man.ac.uk}
\userid{jenny}
\begin{document}
\signature{Jenny Williams}
\opening{Dear}
\closing{Regards,}
\end{letter}
\end{document}
============

On a second attempt to typeset the file, after clicking with the cursor inside the console window, TeXShop skips the error message and produces the PDF without the ps image. It also displayed the message below, which warns that the file in question is probably very, very old and needs to be updated. I got that type of !LaTeX Error: Unknown graphics extension: ps, with LaTeX not being able to include the image into the document. I wonder if that's related to the document class only or if I'm missing any package in my TeX installation. Any help on what I should do would be appreciated. Thanks!

======
This is pdfeTeX, Version 3.141592-1.30.4-2.2 (Web2C 7.5.5) \write18 enabled. entering extended mode (./jennylett.tex LaTeX2e <2003/12/01> Babel <v3.8d> and hyphenation patterns for american, french, german, ngerman, dutch, italian, norsk, portuges, spanish, swedish, nohyphenation, loaded. (/usr/local/teTeX/share/texmf.tetex/tex/latex/base/latex209.def Entering LaTeX 2.09 COMPATIBILITY MODE ************************************************************* !!WARNING!! !!WARNING!! !!WARNING!! !!WARNING!! This mode attempts to provide an emulation of the LaTeX 2.09 author environment so that OLD documents can be successfully processed. It should NOT be used for NEW documents! New documents should use Standard LaTeX conventions and start with the \documentclass command. Compatibility mode is UNLIKELY TO WORK with LaTeX 2.09 style files that change any internal macros, especially not with those that change the FONT SELECTION or OUTPUT ROUTINES. Therefore such style files MUST BE UPDATED to use Current Standard LaTeX: LaTeX2e. If you suspect that you may be using such a style file, which is probably very, very old by now, then you should attempt to get it updated by sending a copy of this error message to the author of that file. ************************************************************* (/usr/local/teTeX/share/texmf.tetex/tex/latex/base/tracefnt.sty) (/usr/local/teTeX/share/texmf.tetex/tex/latex/base/latexsym.sty) (/usr/local/teTeX/share/texmf.tetex/tex/latex/base/latex209.cfg) (/usr/local/teTeX/share/texmf.tetex/tex/latex/tools/rawfonts.sty (/usr/local/teTeX/share/texmf.tetex/tex/latex/tools/somedefs.sty) (/usr/local/teTeX/share/texmf.tetex/tex/latex/base/ulasy.fd))) (./jennymanletter.sty Document Style `jennyletter' <4 September 95>.
) (/usr/local/teTeX/share/texmf.tetex/tex/latex/graphics/graphicx.sty (/usr/local/teTeX/share/texmf.tetex/tex/latex/graphics/keyval.sty) (/usr/local/teTeX/share/texmf.tetex/tex/latex/graphics/graphics.sty (/usr/local/teTeX/share/texmf.tetex/tex/latex/graphics/trig.sty) (/usr/local/teTeX/share/texmf.tetex/tex/latex/graphics/graphics.cfg) (/usr/local/teTeX/share/texmf.tetex/tex/latex/graphics/pdftex.def))) </usr/local/teTeX/share/texmf.tetex/fonts/type1/bluesky/cm/cmsy8.pfb></usr/local/teTeX/share/texmf.tetex/fonts/type1/bluesky/cm/cmr8.pfb></usr/local/teTeX/share/texmf.tetex/fonts/type1/bluesky/cm/cmr10.pfb></usr/local/teTeX/share/texmf.tetex/fonts/type1/bluesky/cm/cmr7.pfb> Output written on jennylett.pdf (1 page, 31582 bytes). Transcript written on jennylett.log. (45.52458pt too wide) in paragraph at lines 21--21 | $[]$ [1] See the LaTeX manual or LaTeX Companion for explanation. Type H <return> for immediate help. ... l.19 \opening{Dear} ? ===============
{}
# Unable to evaluate reasonable max expression

Consider the following statement:

Max[0, Sqrt[1 - Cos[4 \[Theta]]]]

You'll find that Mathematica won't evaluate this, because it doesn't know the range of $\theta$. Okay, that makes sense, so change it to:

Simplify[Max[0, Sqrt[1 - Cos[4 \[Theta]]]], {0 <= \[Theta] <= 2 \[Pi]}]

This evaluates happily. As it should. But then consider this not-impactful adjustment:

Simplify[Max[0, Sqrt[1 - Cos[4 \[Theta]]]/ Sqrt[2]], {0 <= \[Theta] <= 2 \[Pi]}]

This doesn't evaluate. I don't know why; it seems quite obvious that it should be exactly the same as the previous case, right? (The constant factor of 1/Sqrt[2] can't change the fact that it is still $\geq 0$.) Any thoughts on how to fix this? Of course, in my case I want to actually keep the Max ..., but I don't know the exact form of the other side, so I can't just arbitrarily remove constants ...

Simplify[Max[0, Simplify[Sqrt[1 - Cos[4 \[Theta]]]/Sqrt[2]]], {0 <= \[Theta] <= 2 \[Pi]}] seems to work. –  Yves Klett Sep 26 '12 at 7:11

...and what happens if you use FullSimplify[] instead? –  J. M. Sep 26 '12 at 7:14

Thanks for the Accept, but I encourage all users to wait 24 hours before Accepting as answer so that other users have a chance to read and answer the question before it appears concluded. Quick Accepts may prevent the posting of other, potentially better answers. –  Mr.Wizard Sep 26 '12 at 7:20

Thanks @Mr.Wizard; I did try and un-accept yesterday, but for some reason actions on this site occasionally don't work (like voting and commenting.) –  Noon Silk Sep 26 '12 at 22:15

## 1 Answer

There are many potential simplifications that Simplify and FullSimplify do not make, presumably because they are deemed too costly to attempt. In this case it appears that parts are at too deep a level for the required simplifications to be made:

expr = Max[0, Sqrt[1 - Cos[4 t]]/Sqrt[2]];
simp = FullSimplify[#, {0 <= t <= 2 Pi}] &;
simp @ expr

Max[0, 1/Sqrt[Csc[2 t]^2]]

If you apply the simplification function to all the subexpressions, further transformations are made:

simp //@ expr

Abs[Sin[2 t]]

Nice, a very rare occurrence of MapAll! –  Sjoerd C. de Vries Sep 26 '12 at 7:16

Indeed, MapAll[] is always a good idea when simplifying tricky things. –  J. M. Sep 26 '12 at 7:18

Funny enough, your solution does not work with Simplify, but the less specific (and inferior) Simplify[Max[0, Simplify[Sqrt[1 - Cos[4 \[Theta]]]/Sqrt[2]]], {0 <= \[Theta] <= 2 \[Pi]}] does? –  Yves Klett Sep 26 '12 at 7:22

@Yves I am not surprised; there is a large element of chance involved in this and I certainly do not mean to imply that //@ is a panacea -- far from it in fact. It only so happens that it produces the needed string of transformations in this case. Search MathGroup for VOISimplify for a good illustration of the order dependence of Simplify that often manifests as apparent capriciousness. –  Mr.Wizard Sep 26 '12 at 7:36

An insightful statement by Andrzej Kozlowski in that thread: ... there are just too many different groupings and rearrangements that would have to be tried to get to a simpler form. Moreover, Mathematica will only apply a transformation if it immediately leads to a decrease in complexity. Sometimes the only way to transform an expression to a simpler form is by first transforming it to a more complex one ... –  Mr.Wizard Sep 26 '12 at 7:40
{}
# Image Translation

mesakarghm Sep 16 2021 Computer Vision

The translation of an image is the process of moving or relocating an image or object from one location to another. Using a predefined transformation matrix, we can relocate the image in any direction. The following transformation matrix can be used for image translation:

$$\begin{bmatrix} 1 & 0 & t_x\\ 0 & 1 & t_y \end{bmatrix}$$

where $t_x$ and $t_y$ are the shift distances. If the value of $t_x$ is negative, the image is shifted to the left; for a positive value of $t_x$, the image is shifted to the right. Similarly, the image is shifted up for negative values of $t_y$ and shifted down for positive values of $t_y$. To find where each pixel lands in the translated image, we take the dot product of the current pixel position (in homogeneous coordinates) with this transformation matrix; this gives the position at which the pixel value is placed in the output. Below I provide a Python implementation for image translation. The given pictures show the translation_img() function in action, where the image is shifted using a shift distance of (50, 50).

Before:

After:

Learn and practice this concept here: https://mlpro.io/problems/img-translation/

import numpy as np

def translation_img(src_img, shift_distance, shape_of_out_img):
    h, w = src_img.shape[:2]
    x_distance = shift_distance[0]
    y_distance = shift_distance[1]
    # 2x3 transformation matrix for a pure translation
    ts_mat = np.array([[1, 0, x_distance], [0, 1, y_distance]])
    out_img = np.zeros(shape_of_out_img, dtype='u1')
    out_h, out_w = out_img.shape[:2]
    for i in range(h):
        for j in range(w):
            origin_x = j
            origin_y = i
            # the pixel position in homogeneous coordinates
            origin_xy = np.array([origin_x, origin_y, 1])
            # dot product with the transformation matrix gives the new position
            new_xy = np.dot(ts_mat, origin_xy)
            new_x = new_xy[0]
            new_y = new_xy[1]
            # copy the pixel only if it lands inside the output image
            if (0 <= new_x < out_w) and (0 <= new_y < out_h):
                out_img[new_y, new_x] = src_img[i, j]
    return out_img
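A minimal usage sketch (assuming OpenCV is available for image I/O; the file names are placeholders):

import cv2

img = cv2.imread("input.png")  # placeholder file name
shifted = translation_img(img, (50, 50), img.shape)
cv2.imwrite("shifted.png", shifted)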
{}
# Dynamics and optimal control of chemotherapy for low grade gliomas: Insights from a mathematical model

• We discuss the optimization of chemotherapy treatment for low-grade gliomas using a mathematical model. We analyze the dynamics of the model and study the stability of solutions. The dynamical model is incorporated into an optimal control problem for which different objective functionals are considered. We establish the existence of optimal controls and give a detailed discussion of the necessary optimality conditions. Since the control variable appears linearly in the control problem, optimal controls are concatenations of bang-bang and singular arcs. We derive a formula for the singular control in terms of state and adjoint variables. Using discretization and optimization methods we compute optimal drug protocols in a number of scenarios. For small treatment periods, the optimal control is bang-bang, whereas for larger treatment periods we obtain both bang-bang and singular arcs. In particular, singular controls illustrate metronomic chemotherapy.

Mathematics Subject Classification: Primary: 49J15, 49K15; Secondary: 92B05.
{}
# Trying to understand mixed states

I took a basic quantum chemistry course (McQuarrie's "Quantum Chemistry"), but it never dealt with mixed states -- only pure states (or if it did, we never got to it in class). So I'm trying to understand them on my own. Consider a situation where Bob is in the lab and flips a coin. If it is heads, he prepares the system in the pure state $|\psi_1\rangle$. If the coin is tails, he prepares the system in the pure state $|\psi_2\rangle$. Now he invites Dave into the room. Bob knows which way the coin landed, but Dave doesn't. All Dave knows is that the system is either in the pure state $|\psi_1\rangle$ or $|\psi_2\rangle$, each with 50% probability. ...so could you say the system is in a pure state to Bob and a mixed state to Dave? Or am I way off base here?

Yes, you are absolutely correct. – Mark Mitchison May 22 '13 at 10:08

"Purity" or "mixedness" (if you permit these words) is a property of the system and not the observer. A system is said to be in a pure state if it is in one of the allowed states $|\psi_i\rangle$, $i = 1 \ldots n$, or in a linear superposition $|\phi\rangle = \sum_{i=1}^{i=n}\alpha_i|\psi_i\rangle$ of such states. If it is in a mixed state, then such a representation is not possible and we have to express it as a density matrix $\hat{\rho} = \sum_{i=1}^{i=n}\beta_i|\psi_i\rangle\langle\psi_i|$. What is the difference? The expectation value of an observable $\hat{A}$ is $\langle\phi|\hat{A}|\phi\rangle$ for a pure state and $Tr(\hat{\rho}\hat{A})$ for a mixed state. Polarized light is an example of a system in a pure state, while light from an incandescent bulb is in a mixed state.
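A small numerical illustration of the difference (a sketch; the qubit states $|0\rangle$ and $|+\rangle$ stand in for $|\psi_1\rangle$ and $|\psi_2\rangle$):

import numpy as np

psi1 = np.array([1, 0], dtype=complex)               # |0>
psi2 = np.array([1, 1], dtype=complex) / np.sqrt(2)  # |+>

# Dave's description: a 50/50 classical mixture of the two preparations.
rho_mixed = 0.5 * np.outer(psi1, psi1.conj()) + 0.5 * np.outer(psi2, psi2.conj())

# Contrast: an equal coherent superposition, which is a pure state.
sup = psi1 + psi2
sup = sup / np.linalg.norm(sup)
rho_pure = np.outer(sup, sup.conj())

# The purity Tr(rho^2) equals 1 for a pure state and is < 1 for a proper mixture.
print(np.trace(rho_mixed @ rho_mixed).real)  # 0.75
print(np.trace(rho_pure @ rho_pure).real)    # 1.0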
{}
Estimating Cumulative Distribution Functions with Maximum Likelihood to Sample Data Sets of a Sea Floater Model

Yim, Jeong-Bin; Yang, Won-Jae;

Abstract

This paper describes evaluation procedures and experimental results for the estimation of Cumulative Distribution Functions (CDF) giving the best fit to the sample data in the Probability-based risk Evaluation Techniques (PET), which are used to assess the risks of a small-sized sea floater. The CDF in the PET provides the reference values of the risk acceptance criteria used to evaluate the risk level of the floater, and it can be estimated from sample data sets of motion response functions such as Roll, Pitch and Heave in the floater model. Using Maximum Likelihood Estimates and the eight kinds of regulated distribution functions, evaluation tests for the CDF having maximum likelihood to the sample data are carried out in this work. Through goodness-of-fit tests to the distribution functions, it is shown that the Beta distribution is best-fit to the Roll and Pitch sample data, with the smallest averaged probability errors $\bar{\delta}$ $(0 \leq \bar{\delta} \leq 1.0)$ of 0.024 and 0.022, respectively, and the Gamma distribution is best-fit to the Heave sample data, with the smallest $\bar{\delta}$ of 0.027. The proposed method in this paper can be expected to be adopted in various application areas for estimating best-fit distributions to sample data.

Keywords: sea floater; risk evaluation; risk acceptance criteria; cumulative distribution function; maximum likelihood estimates

Language: Korean
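The paper's selection procedure can be sketched in a few lines of Python (an illustrative reconstruction, not the authors' code; SciPy is assumed, and a synthetic sample stands in for the floater's Roll/Pitch/Heave data):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.beta(2.0, 5.0, size=1000)  # synthetic stand-in for a motion-response sample

candidates = {"beta": stats.beta, "gamma": stats.gamma, "norm": stats.norm}

# Fit each candidate family by maximum likelihood and keep the one
# with the highest log-likelihood on the sample.
best_name, best_dist = max(
    candidates.items(),
    key=lambda kv: np.sum(kv[1].logpdf(sample, *kv[1].fit(sample))),
)
print(best_name)  # name of the best-fitting family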
{}
# Into how many mls of water should you pour 100.0 ml of 9.00 M sulfuric acid to get a 3.00 M solution?

Oct 17, 2016

You should pour it into a $200 \cdot mL$ volume of water.

#### Explanation:

I am glad you got the order of addition right. You ALWAYS add acid to water. It is well known that if you spit in acid it spits back. Anyway, the dilution factor is simply $\frac{1}{3}$.

$\text{Concentration} = \frac{\text{moles of acid}}{\text{volume of solution}} = \frac{0.100 \cdot L \times 9.00 \cdot mol \cdot L^{-1}}{0.100 \cdot L + 0.200 \cdot L} = \frac{0.900 \cdot mol}{0.300 \cdot L} = 3.00 \cdot mol \cdot L^{-1}$ as required.

And thus we dilute the initial $100 \cdot mL$ volume of acid with a $200 \cdot mL$ volume of water.
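The same arithmetic as a quick check (a sketch using $C_1 V_1 = C_2 V_2$):

c1, v1, c2 = 9.00, 0.100, 3.00  # initial molarity, initial volume (L), target molarity
v2 = c1 * v1 / c2               # total volume needed
print(v2, v2 - v1)              # 0.3 L total, so 0.2 L (200 mL) of water to add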
{}
# Generics and Compile-Time in Rust

Brian Anderson September 17, 2020

The Rust programming language compiles fast software slowly. In this series we explore Rust’s compile times within the context of TiKV.

## Rust Compile-time Adventures with TiKV: Episode 2

In the previous post in the series we covered Rust’s early development history, and how it led to a series of decisions that resulted in a high-performance language that compiles slowly. Over the next few posts we’ll describe in more detail some of the designs in Rust that make compile time slow. This time, we’re talking about monomorphization.

## Comments on the last episode

After the previous episode of this series, people made a lot of great comments on HackerNews, Reddit, and Lobste.rs.

• The compile times we see for TiKV aren’t so terrible, and are comparable to C++.
• What often matters is partial rebuilds since that is what developers experience most in their build-test cycle.

Some subjects I hadn’t considered:

• WalterBright pointed out that data flow analysis (DFA) is expensive (quadratic). Rust depends on data flow analysis. I don’t know how this impacts Rust compile times, but it’s good to be aware of.
• kibwen reminded us that faster linkers have an impact on build times, and that LLD may be faster than the system linker eventually.

## A brief aside about compile-time scenarios

It’s tempting to talk about “compile-time” broadly, without any further clarification, but there are many types of “compile-time”, some that matter more or less to different people. The four main compile-time scenarios in Rust are:

• development profile / full rebuilds
• development profile / partial rebuilds
• release profile / full rebuilds
• release profile / partial rebuilds

The “development profile” entails compiler settings designed for fast compile times, slow run times, and maximum debuggability. The “release profile” entails compiler settings designed for fast run times, slow compile times, and, usually, minimum debuggability. In Rust, these are invoked with cargo build and cargo build --release respectively, and are indicative of the compile-time/run-time tradeoff.

A full rebuild is building the entire project from scratch, and a partial rebuild happens after modifying code in a previously built project. Partial rebuilds can notably benefit from incremental compilation.

In addition to those there are also

• test profile / full rebuilds
• test profile / partial rebuilds
• bench profile / full rebuilds
• bench profile / partial rebuilds

These are mostly similar to development mode and release mode respectively, though the interactions in cargo between development / test and release / bench can be subtle and surprising. There may be other profiles (TiKV has more), but those are the obvious ones for Rust, as built-in to cargo.

Beyond that there are other scenarios, like typechecking only (cargo check), building just a single project (cargo build -p), single-core vs. multi-core, local vs. distributed, local vs. CI.

Compile time is also affected by human perception — it’s possible for compile time to feel bad when it’s actually decent, and to feel decent when it’s actually not so great. This is one of the premises behind the Rust Language Server and rust-analyzer — if developers are getting constant, real-time feedback in their IDE then it doesn’t matter as much how long a full compile takes.
So it's important to keep in mind through this series that there is a spectrum of tunable possibilities from "fast compile / slow run" to "fast run / slow compile", that there are different scenarios that affect compile time in different ways, and that there are ways in which compile time affects perception differently.

It happens that for TiKV we've identified that the scenario we care most about with respect to compile time is "release profile / partial rebuilds". More about that in future installments.

The rest of this post details some of the major designs in Rust that cause slow compile time. I describe them as "tradeoffs", as there are good reasons Rust is the way it is, and language design is full of awkward tradeoffs.

## Monomorphized generics

Rust's approach to generics is the most obvious language feature to blame on bad compile times, and understanding how Rust translates generic functions to machine code is important to understanding the Rust compile-time/run-time tradeoff.

Generics generally are a complex topic, and Rust generics come in a number of forms. Rust has generic functions and generic types, and they can be expressed in multiple ways. Here I'm mostly going to talk about how Rust calls generic functions, but there are further compile-time considerations for generic type translations. I ignore other forms of generics (like impl Trait), as either they have similar compile-time impact, or I just don't know enough about them.

As a simple example for this section, consider the ToString trait and the following generic function print:

```rust
use std::string::ToString;

fn print<T: ToString>(v: T) {
    println!("{}", v.to_string());
}
```

print will print to the console anything that can be converted to a String type. We say that "print is generic over type T, where T implements ToString". Thus I can call print with different types:

```rust
fn main() {
    print("hello, world");
    print(101);
}
```

The way a compiler translates these calls to print to machine code has a huge impact on both the compile-time and run-time characteristics of the language.

When a generic function is called with a particular set of type parameters it is said to be instantiated with those types.

In general, for programming languages, there are two ways to translate a generic function:

1. translate the generic function for each set of instantiated type parameters, calling each trait method directly, but duplicating most of the generic function's machine instructions, or
2. translate the generic function just once, calling each trait method through a function pointer (via a "vtable").

The first results in static method dispatch, the second in dynamic (or "virtual") method dispatch. The first is sometimes called "monomorphization", particularly in the context of C++ and Rust, a confusingly complex word for a simple idea.

### An example in Rust

The previous example uses Rust's type parameters (<T: ToString>) to define a statically-dispatched print function. In this section we present two more Rust examples, the first with static dispatch, using references to impl trait instances, and the second with dynamic dispatch, with references to dyn trait instances.
```rust
use std::string::ToString;

#[inline(never)]
fn print(v: &impl ToString) {
    println!("{}", v.to_string());
}

fn main() {
    print(&"hello, world");
    print(&101);
}
```

```rust
use std::string::ToString;

#[inline(never)]
fn print(v: &dyn ToString) {
    println!("{}", v.to_string());
}

fn main() {
    print(&"hello, world");
    print(&101);
}
```

Notice that the only difference between these two cases is that the first print's argument is type &impl ToString, and the second's is &dyn ToString. The first is using static dispatch, and the second dynamic.

In Rust &impl ToString is essentially shorthand for a type parameter argument that is only used once, like in the earlier example fn print<T: ToString>(v: T).

Note that in these examples we have to use #[inline(never)] to defeat the optimizer. Without this it would turn these simple examples into the exact same machine code. I'll explore this phenomenon further in a future episode of this series.

Below is an extremely simplified and sanitized version of the assembly for these two examples. If you want to see the real thing, the playground links above can generate them by clicking the buttons labeled ... -> ASM.

```
print::hffa7359fe88f0de2:
    ...
    callq *::core::fmt::write::h01edf6dd68a42c9c(%rip)
    ...

print::ha0649f845bb59b0c:
    ...
    callq *::core::fmt::write::h01edf6dd68a42c9c(%rip)
    ...

main::h6b41e7a408fe6876:
    ...
    callq print::hffa7359fe88f0de2
    ...
    callq print::ha0649f845bb59b0c
```

And the dynamic case:

```
print::h796a2cdf500a8987:
    ...
    callq *::core::fmt::write::h01edf6dd68a42c9c(%rip)
    ...

main::h6b41e7a408fe6876:
    ...
    callq print::h796a2cdf500a8987
    ...
    callq print::h796a2cdf500a8987
```

The important thing to note here is the duplication of functions or lack thereof, depending on the strategy. In the static case there are two print functions, distinguished by a hash value in their names, and main calls both of them. In the dynamic case, there is a single print function that main calls twice. The details of how these two strategies actually handle their arguments at the machine level are too intricate to go into here.

These two strategies represent a notoriously difficult tradeoff: the first creates lots of machine instruction duplication, forcing the compiler to spend time generating those instructions, and putting pressure on the instruction cache, but — crucially — dispatching all the trait method calls statically instead of through a function pointer. The second saves lots of machine instructions and takes less work for the compiler to translate to machine code, but every trait method call is an indirect call through a function pointer, which is generally slower because the CPU can't know what instruction it is going to jump to until the pointer is loaded.

It is often thought that the static dispatch strategy results in faster machine code, though I have not seen any research into the matter (we'll do an experiment on this subject in a future edition of this series). Intuitively, it makes sense — if the CPU knows the address of all the functions it is calling it should be able to call them faster than if it has to first load the address of the function, then load the instruction code into the instruction cache.
There are though factors that make this intuition suspect:

• first, modern CPUs have invested a lot of silicon into branch prediction, so if a function pointer has been called recently it will likely be predicted correctly the next time and called quickly;
• second, monomorphization results in huge quantities of machine instructions, a phenomenon commonly referred to as "code bloat", which could put great pressure on the CPU's instruction cache;
• third, the LLVM optimizer is surprisingly smart, and with enough visibility into the code can sometimes turn virtual calls into static calls.

C++ and Rust both strongly encourage monomorphization, both generate some of the fastest machine code of any programming language, and both have problems with code bloat. This seems to be evidence that the monomorphization strategy is indeed the faster of the two. There is though a curious counter-example: C. C has no generics at all, and C programs are often both the slimmest and fastest in their class. Reproducing the monomorphization strategy in C requires using ugly C macro preprocessor techniques, and modern object-orientation patterns in C are often vtable-based.

Takeaway: it is broadly thought by compiler engineers that monomorphization results in somewhat faster generic code while taking somewhat longer to compile.

Note that the monomorphization-compile-time problem is compounded in Rust because Rust translates generic functions in every crate (generally, "compilation unit") that instantiates them. That means that if, given our print example, crate a calls print("hello, world"), and crate b also calls print("hello, world, or whatever"), then both crate a and b will contain their own monomorphized copy of print for &str — the compiler does all the type-checking and translation work twice. This is partially mitigated today at lower optimization levels by shared generics, though there are still duplicated generics in sibling dependencies, and generics are not shared at higher optimization levels.

All that is only touching on the surface of the tradeoffs involved in monomorphization. I passed this draft by Niko, the primary type theorist behind Rust, and he had some words to say about it:

niko: so far, everything looks pretty accurate, except that I think the monomorphization area leaves out a lot of the complexity. It's definitely not just about virtual function calls.
niko: it's also things like foo.bar where the offset of bar depends on the type of foo
niko: many languages sidestep this problem by using pointers everywhere (including generic C, if you don't use macros)
niko: not to mention the construction of complex types like iterators, that are basically mini-programs fully instantiated and then customizable – though this can be reproduced by a sufficiently smart compiler
niko: (in particular, virtual calls can be inlined too, though you get less optimization; I remember some discussion about this at the time … how virtual call inlining happens relatively late in the pipeline)
brson: does the field offset issue only come in to play with associated types?
niko: no
niko: struct Foo<T> { x: u32, y: T, z: f64 }
niko: fn bar<T>(f: Foo<T>) -> f64 { f.z }
niko: as I recall, before we moved to monomorphization, we had to have two paths for everything: the easy, static path, where all types were known to LLVM, and the horrible, dynamic path, where we had to generate the code to dynamically compute the offsets of fields and things
niko: unsurprisingly, the two were only rarely in sync
niko: which was a common source of bugs
niko: I think a lot of this could be better handled today – we have e.g. a reasonably reliable bit of code that computes Layout, we have MIR which is a much simpler target – so I am not as terrified of having to have those two paths
niko: but it'd still be a lot of work to make it all work
niko: there was also stuff like the need to synthesize type descriptors on the fly (though maybe we could always get by with stack allocation for that)
niko: e.g., fn foo<T>() { bar::<Vec<T>>(); } fn bar<U>() { .. }
niko: here, you have a type descriptor for T that was given to you dynamically, but you have to build the type descriptor for Vec<T>
niko: and then we can make it even worse
niko: fn foo<T>() { bar::<Vec<T>>(); } fn bar<U: Debug>() { .. }
niko: now we have to reify all the IMPLS of Debug
niko: so that we can do trait matching at runtime
niko: because we have to be able to figure out Vec<T>: Debug, and all we know is T: Debug
niko: we might be able to handle that by bubbling up the Vec<T> to our callers…

## In the next episode of Rust Compile-time Adventures with TiKV

In the next episode of this series we'll discuss compilation units – the bundles of code that a compiler processes at a single time – and how selecting compilation units affects compile time.

Stay Rusty, friends.

## Thanks

A number of people helped with this blog series. Thanks especially to Niko Matsakis for the feedback, and Calvin Weng for proofreading and editing.
{}
Homework Help: Formula deduction $V^2 = V_0^2 + 2g(y - y_0)$

1. Sep 26, 2011

iZnoGouD

I need to make the deduction of this formula $V^2 = V_0^2 \pm 2g(y - y_0)$. Could you guys help me? This is so hard for me.

2. Sep 26, 2011

Ignea_unda

Welcome to PF, iZnoGouD! Could you clarify what you mean by "making the deduction"? I'm not clear what you're being asked to do. And, to clarify for myself, the equation is:

$V^2 = V_{0}^2 \pm 2g(y-y_0)$

Correct? And finally, where are you getting stuck? Can you show us your work so far?
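For reference, one standard route to this identity — a sketch only, assuming constant acceleration $a$, and not necessarily the deduction the original poster's course expects — is to eliminate time between the two basic constant-acceleration relations:

```latex
v = v_0 + a t, \qquad y = y_0 + v_0 t + \tfrac{1}{2} a t^2
\;\Rightarrow\;
t = \frac{v - v_0}{a}, \quad
y - y_0 = v_0 \frac{v - v_0}{a} + \frac{1}{2} a \left( \frac{v - v_0}{a} \right)^2
        = \frac{v^2 - v_0^2}{2a}
```

Rearranging gives $v^2 = v_0^2 + 2a(y - y_0)$; setting $a = \pm g$ (the sign depending on whether $y$ is measured along or against gravity) yields the formula in the thread title.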
{}
# How can I create new tables out of a table and plot them

I have the following problem. I have a table, call it m, which is labelled by 3 indices (say i,j,k), each of them running from 1 up to 100. I would like to be able to construct a new table which would consist of elements taken from m under the following condition: For any p, such that 2 < p < 301, construct the table consisting of pairs, where the first term in all pairs for a given p is always the same, equal to p*0.06:

```mathematica
Table[{0.06*p, m[[i, j, k]]}, {p, 2, 301}]
```

WITH THE CONSTRAINT i + j + k = p

It is the constraint which puzzles me. I would also like to plot all these Lists on one graph (using ListPlot?). I managed to do something like this for a table with two indices using a Do loop, but for three indices, it seems I would have to use a double loop and I don't know how to do that. Looking forward to any suggestions!

- You might want to look at either of FrobeniusSolve[] or IntegerPartitions[] for generating the indices that can be fed into m. – Guess who it is. Apr 22 '13 at 23:09

You could list out the entire table then use GatherBy to reorganise it:

```mathematica
gathered = GatherBy[
   Flatten[
    Table[{i + j + k, m[[i, j, k]]}, {i, 100}, {j, 100}, {k, 100}],
    2],
   First];
```

This gathers together all the elements where the first value (the sum of the indices) is the same. You'll have big lists around the sum of the average index value, and small ones at the extremes.

```mathematica
ListPlot[Length /@ gathered]
```

You can plot a particular list (say p = 150) with ListPlot[Last /@ gathered[[150]]]. But I don't have an interesting m to plot.

- this seems to work very well. Thanks a lot!!! – user7058 Apr 23 '13 at 12:17

```mathematica
tab = Cases[
   Flatten[
    Table[{p == i + j + k, 0.06 p, m[[i, j, k]]},
     {i, 100}, {j, 100}, {k, 100}, {p, 2, 301}],
    3],
   {True, __}][[;; , 2 ;;]]
```

Having Flattened the table before finding Cases you should be able to plot this directly.

```mathematica
ListPlot[tab]
```

In my opinion, the new table should have the same dimension as m, since every i,j,k in the range of 1 to 100 satisfies the relationship 2 < p < 301. So we only need to prepend the sum of the index to the elements in m.

```mathematica
m = Table[RandomReal[{1, 10}], {100^3}];
ls = Flatten /@ Transpose[{
     Flatten[
      Table[{0.06 (i + j + k)}, {i, 1, 100}, {j, 1, 100}, {k, 1, 100}],
      2],
     m}];
```

a list of 10^6 is too large for ListPlot, so I only plot part of it:

```mathematica
ListPlot[ls[[1 ;; -1 ;; 1000]], Joined -> True]
```

-
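For comparison, here is a rough Python analogue of the GatherBy answer above — a sketch only, where the random array m and the size n = 20 are stand-in values (kept small so the triple loop stays quick; the question uses 100):

```python
from collections import defaultdict

import matplotlib.pyplot as plt
import numpy as np

n = 20  # stand-in size; the question uses 100
m = np.random.rand(n, n, n)  # stand-in for the user's table m

# Group the entries m[[i, j, k]] by p = i + j + k (1-based indices,
# so 3 <= p <= 3n), pairing each value with the abscissa 0.06 * p.
groups = defaultdict(list)
for i in range(1, n + 1):
    for j in range(1, n + 1):
        for k in range(1, n + 1):
            p = i + j + k
            groups[p].append((0.06 * p, m[i - 1, j - 1, k - 1]))

# Plot all groups on one graph, analogous to ListPlot over the gathered lists.
for p, pairs in groups.items():
    xs, ys = zip(*pairs)
    plt.plot(xs, ys, '.', markersize=2)
plt.show()
```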
{}
Recent questions tagged inequality

With what expression must $27x^3+1$ be divided to get a quotient of $3x+1$?
With what expression must $(a-2b)$ be multiplied to get a product of $a^3-8b^3$?
For what integer $a$ does the compound inequality $a < \sqrt{48} + \sqrt{140} < a + 1$ hold?
If $a^3 + 12ab^2 = 679$ and $9a^2b + 12b^3 = 978$, find $a^2 − 4ab + 4b^2$
Given that $0 < b < a$ and $a^2 + b^2 = 6ab$, what is the value of $\dfrac{a − b}{a + b}$?
Why do some people have higher income than others?
Sometimes people complain about "economic inequality". What does this term refer to?
Write the number of integers that satisfy this inequality: $-3 \le 3m < 12$
Given that $-11 \le 3m - 8 < 4$, solve for $m$.
Show me how to solve for $x$ in the following inequality: $-4 \le -x/2 < 5$. Represent your answer on a number line.
I am struggling to solve for $x$ in this inequality: $x^2 - 16 > 2x - 1$
Anyone able to solve this inequality: $x^2 - x < 12$.
Help me to determine the value of $x$ here: a. $\sqrt{2x + 1} = x - 1$; b. $x(4 - x) < 0$
Solve for $x$ in each of the following: a. $2^x + 3(2^{x + 1}) = 56$; b. $4x - 9 < 5x + 4$
Help me to show that the following expression is true for all real values of $x$: $x^2 - x + 9 > 5$
Given the following inequality: $(x + 2)(x - 3) < -3x + 2$. a. Solve for $x$. b. Hence or otherwise, determine the sum of all the integers sa…
Solve for $x$: a. $x^2 < 25$; b. $-x \le 2x^2 - 3$; c. $(x + 3)(x - 5) \le -12$
What is the simplex algorithm?
Write the simplified value of $12 \times 0.00361$ in scientific notation without any rounding.
Given $f(x)=x(x+2)$, solve for $x$ if $f(x) = 0$ and $f(x) \geq 0$, and then represent the solution on a number line.
Let $a, b, c$ be the side lengths, and $h_a, h_b, h_c$ be the altitudes, respectively, and $r$ be the inradius of a triangle. Prove the inequality…
Draw the graphical representation of the following inequalities $$x-y > 2 \\ x+2y \geq -2$$
Derive the system of inequalities that shows the number of hours he can spend on research and teaching.
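As a worked sample for the two related integer-counting questions above (an editor's illustration of the standard method, not an answer taken from the site): the second inequality reduces to the first one.

```latex
-11 \le 3m - 8 < 4
\;\Rightarrow\; -3 \le 3m < 12
\;\Rightarrow\; -1 \le m < 4
```

The integers satisfying either form are therefore $m \in \{-1, 0, 1, 2, 3\}$, i.e., five of them.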
{}
Here are a few interesting links I browsed recently, listed in no particular order:

"Mathematicians Tame Turbulence in Flattened Fluids" [^]. The operative word here, of course, is: "flattened." But even then, it's an interesting read. Another thing: though the essay is pop-sci, the author gives the Navier-Stokes equations, complete with fairly OK explanatory remarks about each term in the equation. (But I don't understand why every pop-sci write-up gives the NS equations only in the Lagrangian form, never Eulerian.)

"A Twisted Path to Equation-Free Prediction" [^]. … "Empirical dynamic modeling." Hmmm….

"Machine Learning's 'Amazing' Ability to Predict Chaos" [^]. Click-bait: They use data science ideas to predict chaos! 8 Lyapunov times is impressive. But ignore the other, usual kind of hype: "…the computer tunes its own formulas in response to data until the formulas replicate the system's dynamics." [italics added.]

"Your Simple (Yes, Simple) Guide to Quantum Entanglement" [^]. Click-bait: "Entanglement is often regarded as a uniquely quantum-mechanical phenomenon, but it is not. In fact, it is enlightening, though somewhat unconventional, to consider a simple non-quantum (or "classical") version of entanglement first. This enables us to pry the subtlety of entanglement itself apart from the general oddity of quantum theory." Don't dismiss the description in the essay as being too simplistic; the author is Frank Wilczek.

"A theoretical physics FAQ" [^]. Click-bait: Check your answers with those given by an expert! … Do spend some time here…

Tensor product versus Cartesian product. If you are an engineer and if you get interested in quantum entanglement, beware of the easily confusing terms: The tensor product and the Cartesian product. The tensor product, you might think, is like the Cartesian product. But it is not. See mathematicians' explanations. Essentially, the basis sets (and the operations) are different. [^] [^]. But what the mathematicians don't do is to take some simple but non-trivial examples, and actually work everything out in detail. Instead, they just jump from this definition to that definition. For example, see: "How to conquer tensorphobia" [^] and "Tensorphobia and the outer product" [^]. Read any of these last two articles. Any one is sufficient to give you tensorphobia even if you never had it! You will never run into a mathematician who explains the difference between the two concepts by first giving you a vague feel: a good worked-out example in the context of finite sets (including enumeration of all the set elements) that illustrates the key difference, i.e. the addition vs. the multiplication of the unit vectors (aka members of basis sets). (A small NumPy sketch to this effect appears a little further below.) A third-class epistemology when it comes to explaining, mathematicians typically have.

A Song I Like:
(Marathi) "he gard niLe megha…"
Music: Rushiraj
Lyrics: Muralidhar Gode

[As usual, a little streamlining may occur later on.]

# Some suggested time-pass (including ideas for Python scripts involving vectors and tensors)

Actually, I am busy writing down some notes on scalars, vectors and tensors, which I will share once they are complete. No, nothing great or very systematic; these are just a few notings here and there taken down mainly for myself. More like a formulae cheat-sheet, but the topic is complicated enough that it was necessary that I have them in one place. Once ready, I will share them.
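Coming back to the tensor-product-vs-Cartesian-product point made above: here is the kind of minimal, fully enumerated example I mean — my own sketch in NumPy, not taken from any of the linked articles. For finite-dimensional spaces, the Cartesian product (read as a direct sum) adds the basis sizes, while the tensor product multiplies them:

```python
import numpy as np

# Two vectors living in spaces of dimension 2 and 3 respectively.
u = np.array([1.0, 2.0])
v = np.array([3.0, 4.0, 5.0])

# Cartesian-product-style pairing (direct sum): basis sizes ADD, 2 + 3 = 5.
# The pair (u, v) is fully described by 5 independent components.
cart = np.concatenate([u, v])
print(cart, cart.shape)  # [1. 2. 3. 4. 5.] (5,)

# Tensor product: basis sizes MULTIPLY, 2 * 3 = 6.
# Every component of u multiplies every component of v.
tens = np.kron(u, v)  # the same numbers as np.outer(u, v), flattened
print(tens, tens.shape)  # [ 3.  4.  5.  6.  8. 10.] (6,)
```

The 5 components of cart are just u and v laid side by side; the 6 components of tens are every product of a component of u with a component of v — which is exactly the addition-vs-multiplication of basis elements mentioned above.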
(They may get distributed as extra material on my upcoming FDP (faculty development program) on CFD, too.)

While I remain busy in this activity, and thus stay away from blogging, you can do a few things:

1. Think about it: You can always build a unique tensor field from any given vector field, say by taking its gradient. (Or, you can build yet another unique tensor field, by taking the Kronecker product of the vector field variable with itself. Or, yet another one by taking the Kronecker product with some other vector field, even just the position field!). And, of course, as you know, you can always build a unique vector field from any scalar field, say by taking its gradient. So, you can write a Python script to load a B&W image file (or load a color .PNG/.BMP/even .JPEG, and convert it into a gray-scale image). You can then interpret the gray-scale intensities of the individual pixels as the local scalar field values existing at the centers of cells of a structured (squares) mesh, and numerically compute the corresponding gradient vector and tensor fields. Alternatively, you can also interpret the RGB (or HSL/HSV) values of a color image as the x-, y-, and z-components of a vector field, and then proceed to calculate the corresponding gradient tensor field. Write the output in XML format.

2. Think about it: You can always build a unique vector field from a given tensor field, say by taking its divergence. Similarly, you can always build a unique scalar field from a vector field, say by taking its divergence. So, you can write a Python script to load a color image, and interpret the RGB (or HSL/HSV) values now as the $xx$-, $xy$-, and $yy$-components of a symmetrical 2D tensor, and go on to write the code to produce the corresponding vector and scalar fields.

Yes, as my resume shows, I was going to write a paper on a simple, interactive, pedagogical software tool called "ToyDNS" (from Toy + Displacements, Strains, Stresses). I had written an extended abstract, and it had even got accepted in a renowned international conference. However, at that time, I was in an industrial job, and didn't get the time to write the software or the paper. Even later on, the matter kept slipping. I now plan to surely take this up on priority, as soon as I am done with (i) the notes currently in progress, and immediately thereafter, (ii) my upcoming stress-definition paper (see my last couple of posts here and the related discussion at iMechanica). Anyway, the ideas in the points 1. and 2. above were, originally, a part of my planned "ToyDNS" paper.

3. You can induce a "zen-like" state in you, or if not that, then at least a "TV-watching" state (actually, something better than that), simply by pursuing this URL [^], and pouring all your valuable hours into it. … Or who knows, you might also turn into a closet meteorologist, just like me. [And don't tell anyone, but what they show here is actually a vector field.]

4. You can listen to this song in the next section…. It's one of those flowy things which have come to us from that great old Grand-Master, viz., SD Burman himself! … Other songs falling in this same sub-sub-genre include "yeh kisine geet chheDaa," and "ThanDi hawaaein," both of which I have run before. So, now, you go enjoy yet another one of the same kind—and quality. …

A Song I Like:

[It's impossible to figure out whose contribution is greater here: SD's, Sahir's, or Lata's.
So, this is one of those happy circumstances in which the order of the listing of the credits is purely incidental … Also recommended is the video of this song. Mona Singh (aka Kalpana Kartik (i.e. Dev Anand's wife, for the new generation)) is sooooo magical here, simply because she is so… natural here…]

(Hindi) "phailee huyi hai sapanon ki baahen"
Music: S. D. Burman
Lyrics: Sahir
Singer: Lata Mangeshkar

But don't forget to write those Python scripts…. Take care, and bye for now…

# Exactly what does this script show?

Update on 02 March 2018, 15:34 IST: I have now added another, hopefully better, version of the script (but also kept the old one intact); see in the post below. The new script too comes without comments.

Here is a small little Python script which helps you visualize something about a state of stress in 2D. If interested in understanding the concept of stress, then do run it, read it, try to understand what it does, and then, if still interested in the concept of stress, try to answer this "simple" little question:

Exactly what does this script show? Exactly what is it that you are visualizing, here?

I had written a few more notes and inline comments in the script, but have deliberately deleted most of them—or at least the ones which might have given you a clue towards answering the above question. I didn't want to spoil your fun, that's why. Once you all finish giving it a try, I will then post another blog-entry here, giving my answer to that question (and in the process, bringing back all the deleted notes and comments). Anyway, here is the script:

```python
'''
A simple script to help visualize *something* about
a 2D stress tensor.
--Ajit R. Jadhav. Version: 01 March 2018, 21:39 HRS IST.
'''
import math
import numpy as np
import matplotlib.pyplot as plt

# Specifying the input stress
# Note:
# While plotting, we set the x- and y-limits to -150 to +150,
# and enforce the aspect ratio of 1. That is to say, we do not
# allow MatPlotLib to automatically scale the axes, because we
# want to appreciate the changes in the shapes as well as sizes
# in the plot.
#
# Therefore, all the input stress-components should be kept
# to within the -100 to +100 (both inclusive) range.
#
# Specify the stress state in this order: xx, xy; yx, yy
# The commas and the semicolon are necessary.
sStress = "-100, 45; 90, 25"

axes = plt.axes()
axes.set_xlim((-150, 150))
axes.set_ylim((-150, 150))
axes.set_aspect('equal', 'datalim')
plt.title(
    "A visualization of *something* about\n"
    "the 2D stress-state [xx, xy; yx, yy] = [%s]"
    % sStress)

mStress = np.matrix(sStress)
mStressT = np.transpose(mStress)
mUnitNormal = np.zeros((2, 1))
mTraction = np.zeros((2, 1))

nOrientations = 18
dIncrement = 360.0 / float(nOrientations)
for i in range(0, nOrientations):
    # the orientation angle for this step, and the unit normal for it
    dThetaDegrees = float(i) * dIncrement
    dThetaRads = math.radians(dThetaDegrees)
    mUnitNormal[0, 0] = math.cos(dThetaRads)
    mUnitNormal[1, 0] = math.sin(dThetaRads)
    mTraction = mStressT.dot(mUnitNormal)
    if i == 0:
        plt.plot((0, mTraction[0, 0]), (0, mTraction[1, 0]),
                 'black', linewidth=1.0)
    else:
        plt.plot((0, mTraction[0, 0]), (0, mTraction[1, 0]),
                 'gray', linewidth=0.5)
    plt.plot(mTraction[0, 0], mTraction[1, 0], marker='.',
             markeredgecolor='gray', markerfacecolor='gray',
             markersize=5)
    plt.text(mTraction[0, 0], mTraction[1, 0], '%d' % dThetaDegrees)
    plt.pause(0.05)
plt.show()
```

Update on 02 March 2018, 15:34 IST: Here is a second version of a script that does something similar (but continues to lack explanatory comments).
One advantage with this version is that you can copy-paste the script to some file, say, MyScript.py, and invoke it from the command line, giving the stress components and the number of orientations as command-line inputs, e.g.,

python MyScript.py "100, 0; 0, 50" 12

which makes it easier to try out different states of stress. The revised code is here:

```python
'''
A simple script to help visualize *something* about
a 2D stress tensor.

History:
06 March 2018, 10:43 IST: In computeTraction(), changed the
    mUnitNormal code to make it np.matrix() rather than python array
02 March 2018, 15:39 IST: Published the code
'''
import sys
import math
import numpy as np
import matplotlib.pyplot as plt

# Specifying the input stress
# Note:
# While plotting, we set the x- and y-limits to -150 to +150,
# and enforce the aspect ratio of 1. That is to say, we do not
# allow MatPlotLib to automatically scale the axes, because we
# want to appreciate the changes in the shapes as well as sizes
# in the plot.
#
# Therefore, all the input stress-components should be kept
# to within the -100 to +100 (both inclusive) range.
#
# Specify the stress state in this order: xx, xy; yx, yy
# The commas and the semicolon are necessary.
# If you run the program from a command-line, you can also
# specify the input stress string in quotes as the first
# command-line argument, and no. of orientations, as the
# second. e.g.:
# python MyScript.py "100, 50; 50, 0" 12

##################################################
gsStress = "-100, 45; 90, 25"
gnOrientations = 18

##################################################
def plotArrow(vTraction, dThetaDegs, clr, axes):
    dx = round(vTraction[0], 6)
    dy = round(vTraction[1], 6)
    if not (math.fabs(dx) < 10e-6 and math.fabs(dy) < 10e-6):
        # draw the traction vector as a line from the origin, and
        # label its tip with the orientation angle in degrees
        axes.plot((0, dx), (0, dy), color=clr, linewidth=0.5)
        axes.annotate('%d' % dThetaDegs, xy=(dx, dy), color=clr)

##################################################
def computeTraction(mStressT, dThetaDegs):
    # build the unit normal for this orientation...
    dThetaRads = math.radians(dThetaDegs)
    vUnitNormal = [math.cos(dThetaRads), math.sin(dThetaRads)]
    mUnitNormal = np.reshape(vUnitNormal, (2, 1))
    # ...and apply the (transposed) stress matrix to it
    mTraction = mStressT.dot(mUnitNormal)
    vTraction = np.squeeze(np.asarray(mTraction))
    return vTraction

##################################################
def main():
    axes = plt.axes()
    axes.set_label("label")
    axes.set_xlim((-150, 150))
    axes.set_ylim((-150, 150))
    axes.set_aspect('equal', 'datalim')
    plt.title(
        "A visualization of *something* about\n"
        "the 2D stress-state [xx, xy; yx, yy] = [%s]"
        % gsStress)

    mStress = np.matrix(gsStress)
    mStressT = np.transpose(mStress)

    # the zero-degree orientation, highlighted in red
    vTraction = computeTraction(mStressT, 0)
    plotArrow(vTraction, 0, 'red', axes)

    # the remaining orientations, in gray
    dIncrement = 360.0 / float(gnOrientations)
    for i in range(1, gnOrientations):
        dThetaDegs = float(i) * dIncrement
        vTraction = computeTraction(mStressT, dThetaDegs)
        plotArrow(vTraction, dThetaDegs, 'gray', axes)
        plt.pause(0.05)
    plt.show()

##################################################
if __name__ == "__main__":
    nArgs = len(sys.argv)
    if nArgs > 1:
        gsStress = sys.argv[1]
    if nArgs > 2:
        gnOrientations = int(sys.argv[2])
    main()
```

OK, have fun, and if you care to, let me know your answers, guess-works, etc…..

Oh, BTW, I have already taken a version of my last post also to iMechanica, which led to a bit of an interaction there too… However, I had to abruptly cut short all the discussions on the topic because I unexpectedly got way too busy in the affiliation- and accreditation-related work. It was only today that I've got a bit of a breather, and so could write this script and this post. Anyway, if you are interested in the concept of stress—issues like what it actually means and all that—then do check out my post at iMechanica, too, here [^].

… Happy Holi, take care to use only safe colors—and also take care not to bother those people who do not want to be bothered by you—by your "play", esp. the complete strangers… OK, take care and bye for now. ….
A Song I Like:
(Marathi [Am I right?]) "rang he nave nave…"
Singer: Shasha Tirupati
Lyrics: Yogesh Damle

# Stress is defined as the quantity equal to … what?

Update on 01 March 2018, 21:27 IST: I had posted a version of this post also at iMechanica, which led to a bit of a very interesting interaction there [^] too. Check it out, if you want… Also see my today's post concerning the idea of stress, here [^].

In this post, I am going to note a bit from my personal learning history. I am going to note what had happened when a clueless young engineering student (that was me) was trying hard to understand the idea of tensors, during my UG years, and then for quite some time even after my UG days. Maybe for a decade or even more….

There certainly were, and are likely to be even today, many students like [the past] me. So, in the further description, I will use the term "we." Obviously, the "we" here is the collegial "we," perhaps even the pedagogical "we," but certainly neither the pedestrian nor the royal "we."

What we would like to understand is the idea of tensors; the question of what these beasts are really, really like.

As with developing an understanding of any new concept, we first go over some usage examples involving that idea, some instances of that concept. Here, there is not much of a problem; our mind easily picks up the stress as a "simple" and familiar example of a tensor. So, we try to understand the idea of tensors via the example of the stress tensor. [Turns out that it becomes far more difficult this way… But read on, anyway!]

Not a bad decision, we think. After all, even if the tensor algebra (and tensor calculus) was an achievement wrought only in the closing decade(s) of the 19th century, Cauchy was already up and running with the essential idea of the stress tensor right by 1822—i.e., more than half a century earlier. We come to know of this fact, say via James Rice's article on the history of solid mechanics. Given this bit of history, we become confident that we are on the right track. After all, if the stress tensor could not only be conceived of, but even a divergence theorem for it could be spelt out, and the theorem even used in applications of engineering importance, all some half a century before any other tensors were even conceived of, then developing a good understanding of the stress tensor ought to provide a sound pathway to understanding tensors in general.

So, we begin with the stress tensor, and try [very hard] to understand it.

We recall what we have already been taught: stress is defined as force per unit area. In symbolic terms, read for the very first time in our XI standard physics texts, the equation reads:

$\sigma \equiv \dfrac{F}{A}$               … Eq. (1)

But given this way of putting things as the starting point, the only direction which we could at all possibly be pursuing, would be nothing but the following:

The 3D representation ought to be just a simple generalization of Eq. (1), i.e., it must look something like this:

$\overline{\overline{\sigma}} = \dfrac{\vec{F}}{\vec{A}}$                … Eq. (2)

where the two overlines over $\sigma$ represent the idea that it is to be taken as a tensor quantity.

But obviously, there is some trouble with the Eq. (2). This way of putting things can only be wrong, we suspect.

The reason behind our suspicion, well-founded in our knowledge, is this: The operation of a division by a vector is not well-defined, at least, it is not at all noted in the UG vector-algebra texts.
[And, our UG maths teachers would happily fail us in examinations if we tried an expression of that sort in our answer-books.]

For that matter, from what we already know, even the idea of "multiplication" of two vectors is not uniquely defined: We have at least two "product"s: the dot product [or the inner product], and the cross product [a case of the outer or the tensor product]. The absence of divisions and unique multiplications is what distinguishes vectors from complex numbers (including phasors, which are often noted as "vectors" in the EE texts).

Now, even if you attempt to "generalize" the idea of divisions, just the way you have "generalized" the idea of multiplications, it still doesn't help a lot.

[To speak of a tensor object as representing the result of a division is nothing but to make an indirect reference to the very operation [viz. that of taking a tensor product], and the very mathematical structure [viz. the tensor structure] which itself is the object we are trying to understand. … "Circles in the sand, round and round… ." In any case, the student is just as clueless about divisions by vectors, as he is about tensor products.]

But, still being under the spell of what had been taught to us during our XI-XII physics courses, and later on, also in the UG engineering courses—their line and method of developing these concepts—we then make the following valiant attempt. We courageously rearrange the same equation, obtain the following, and try to base our "thinking" in reference to the rearrangement it represents:

$\overline{\overline{\sigma}} \vec{A} = \vec{F}$                  … Eq. (3)

It takes a bit of time and energy, but then, very soon, we come to suspect that this too could be a wrong way of understanding the stress tensor. How can a mere rearrangement lead from an invalid equation to a valid equation? That's for the starters.

But a more important consideration is this one: Any quantity must be definable via an equation that follows the following format:

the quantity being defined, and nothing else but that quantity, as appearing on the left hand-side
= some expression involving some other quantities, as appearing on the right hand-side.

Let's call this format Eq. (4).

Clearly, Eq. (3) does not follow the format of Eq. (4).

So, despite the rearrangement from Eq. (2) to Eq. (3), the question remains:

How can we define the stress tensor (or for that matter, any tensors of similar kind, say the second-order tensors of strain, conductivity, etc.) such that its defining expression follows the format given in Eq. (4)?

Can you answer the above question?

If yes, I would love to hear from you… If not, I will post the answer by way of an update/reply/another blog post, after some time. …

Happy thinking…

A Song I Like:
(Hindi) "ye bholaa bhaalaa man meraa kahin re…"
Singers: Kishore Kumar, Asha Bhosale
Music: Kishore Kumar
Lyrics: Majrooh Sultanpuri

[I should also be posting this question at iMechanica, though I don't expect that they would be interested too much in it… Who knows, someone, say some student somewhere, may be interested in knowing more about it, just may be… Anyway, take care, and bye for now…]
{}
Future Post Race Impound and Pre Race Tech Inspections (Ideas)

Recommended Posts

I know there are items within ChampCar we can do better or be more consistent on, and I have a couple of ideas, but I want to see what the member feedback is on what you all are looking for in the future. I would like to get some ideas to bring back to discuss with the Board on how we can improve going forward, so here are my ideas:

1) Post Race Impound:
* Option 1: Tech looks at all of the cars in post race impound to make sure that all the cars have everything claimed that is on their tech sheet or log book. They can add laps or DQ a car with authority from the Event Director on their findings. Members can still file written protests as well.
* Option 2: Tech does not look over cars at impound, and it relies on the members to file protests in paper form, after which tech will go and look at the car to see what is claimed or not claimed.
* Option 3: Other - you tell me what you'd like to see for a consistent procedure

Also within this one, if a team is found with unclaimed parts, do they get DQ'd automatically and sent to EC for that race and asked to fix items for the next race to be in the ChampCar class racing, or do we just add laps to their finish with the claimed part point total?

2) Pre Race Inspection
* Option 1: We continue what we are doing, with tech looking at the safety of the cars and giving points to items on cars whether they are in the rule book or not.
* Option 2: Tech only checks the safety of the cars to make sure everything is ready to race (seat belts, window nets, roll cages, fire suppression systems, battery boxes, etc). It is up to the team captain to make sure everything is claimed on their tech sheet. Items not claimed can be protested at impound, where Tech and the Event Director would make the ruling.

Also in the spring I had an idea on the rule interpretations: having tech fill out a paper to submit with all of their findings on cars for the weekend. The CEO would then sign off on this, then it would be published to the membership. If not the CEO, then I suggested the TAC. It's a checks and balances system that I thought could work, so there is another option too.

I want to get good feedback and criticism on how we are doing things now and what we can improve on, and here are just some ideas that I had that I wanted to share with the membership on the forum.

My thoughts:

1. Option 1 - what we are doing now - I think this should always be a team effort. We shouldn't rely solely on competitors or tech. I think if a team is found with un-declared parts, they should not be in the standings and moved to EC. If it's a 2 day race and they are able to make the car legal by removing parts or whatever, allow them reasonable options to do so and get a brief inspection to verify that before the second day/race. It should also be noted in the logbook before the next race weekend and should be re-teched at that time as well. Continuing to be lenient and forgiving on this has been bad for the series, and the limp wristed response encourages more attempts at not declaring parts.

2. Option 1 - I think this is what we are doing now? Don't limit the pre-race inspection only to safety. A brief glance over everything before the race starts is a good idea to catch large anomalies.
One additional thing I'd like to add - I think any parts subject to 'special pre-approval' from tech should be declared and listed on the tech sheet (at whatever point value was agreed upon, even if it's zero..), and either a printed email thread or a form signed by tech should be included. It would help to make things more transparent and look less like 'backdoor deals'.

Taking this logic further, we could also NOT allow email threads and ONLY the signed (yet to exist) form from tech. This could be a form filled out by the competitor and then signed off by tech at the first event they attend with the part installed. The ChampCar event director/CEO/BoD/TAC/whoever we think will get a copy of all these signed forms so there is a clear paper trail on what deviations/interpretations have been allowed. We could do with this information as we think appropriate, whether condensing it into the 'tech interpretations' document or listing it somewhere else. If you don't have this form signed and available in impound, no hearsay or previous conversations will be considered, and the competitor will be moved to EC. This may seem slightly cumbersome, but at this point we need a process in place to get a firm handle on these sorts of deals because they seem to be out of control.
Edited by Slugworks Paul

I would love to see a summary of special allowances, rules clarifications done via email, new interpretations, etc. shared in a single thread online. It would be great to know what is available to ALL teams, not just those working it out with ChampCar officials. I'm not proposing giving away the secret sauce; maybe a generic description in certain cases would suffice. But the 2.5 point hubs thing should have been shared with everyone, since it can be a great source of pain for some teams. Knowing that hubs are a safety item and were discounted points should have been info given to everyone.

It isn't fair to rely solely on tech to know everything about every car. I think it's still up to the competitors to know what they are looking for and if it is worth challenging, then an automatic move to EC for the weekend if they are not in compliance. As far as the form or process, having a piece of paper you drag around with you everywhere doesn't make sense in 2019. An electronic log of all protests, which could also be posted online with the eventual findings, would be the way to go. Then the series would know who was challenged, by whom, for what, and whether or not it was found in compliance and why.

I believe keeping option 1 for both questions is our best course of action. Both Tech and other teams should be able to protest a car that they believe to be out of compliance with the rules, and Tech should have the ability to tell someone proactively that they need to make a correction before another team has to call them out on the issue.

When it comes to the question of how to handle cheating, or misinterpreting, or otherwise falling outside of compliance with the rule book, I believe strongly that a DQ is in order. If we are just making someone claim more points, the smart play would be to not claim 3 or 4 things and show up with a 400 point car, get called out on one or two, and finish with 480 points and two unnoticed items. Aside from the NFL and professional cycling, I can't think of a sporting organization that does not DQ a competitor for cheating. The penalty is not high enough otherwise to discourage the behavior.
If you show up with the intent to deceive the organization and your competition, and you are not punished to a level that prevents the behavior moving forward, there are teams that will continue the behavior.

I would like to see how folks feel about creating a searchable resource of Tech decisions. What I mean by this is: an email to Tech would be given a reference number, the decision by Tech for or against the interpretation would be attached, and that question/answer would be uploaded to a location where ALL teams can see the question asked and response given. Yes, that would mean that a team that figures something out to exploit a rule to their benefit is sharing a bit of their secret sauce, but what we are seeing with wildly different interpretations of the rules would be greatly reduced if we could see what Tech had to say about questions being posed. The only unknowns that should be popping up in the tech line or in protest after an event are things that people didn't even think to ask prior to seeing it.

I like option 1. As for DQ or move to EC, I'm fine with either, but lean more towards DQ. If moved to EC, then they shouldn't be able to win that either; that's why I favor DQ. In the end they cheated...

As for backdoor deals, these need to be eliminated, period, as they are never communicated to all members. But besides that fact, these 'special' deals need to be discussed by the BOD prior to allowing any team special consideration or implementing a new rule for a competitor.

I like the idea of CC being able to tech cars before and after. I thought the bumper air dam rule was all the front, and was corrected and had to take points on that in tech. I would much rather have it corrected prior to a race than after in a protest.

Post race: Why not have both CC and members look over cars? The real issue is we need everyone to look, as we have so many different cars. I do not know Porsches, so I cannot tell what is legit or not. We need many perspectives to keep it all good. My thought is if it is an illegal part, who cares who finds it; it is still illegal.

On the DQ or sent to EC, isn't that the same thing really? My thought is if it is blatantly cheaty, then for sure DQ. What if something is just not known or overlooked? Like if I did not take the points on the plastic above the bumper line, thinking that was part of the air dam points? I think there is the person that can be technically illegal and not know it, so no intention of cheating, versus a person knowingly skirting the rule book.

Pre race: I want CC to look at my car and go over safety items first. I want and need to be safe. Then look over paperwork and see things. It is much better to find things out prior to a race than after. I believe most of us are not out to cheat, so an oversight is just that. It is much better to find that ahead of time.

On the hidden rules, I want to know. I would have liked to have free springs, as I take penalty laps for them and start out behind most of you. It would have made a difference for me at PIRC and Barber. I did not know about the loophole rule. If those are out there, we should all know and then can get the same fairness, or as a collective we can close that loophole to also be fair.

Just my 2 cents

none of it matters unless the series is willing to actually enforce its own rules, which currently it is not. Options 1 and 2 did not work last weekend. There should be ZERO special deals or 1 make/model rules. That is complete bull cookies.
The same rules for EVERYONE. 1 rule set. The series should be closing any loopholes that pop up instead of opening Pandora's box every time and giving everything away for free. It seems management would rather allow bull cookies interpretations vs. upsetting someone and telling them to take points.
Edited by Snake

1. You can't JUST give points/laps to unclaimed parts, otherwise over 500pt teams have no incentive to claim them.
2. Tech's approval via text or email should be rock solid in impound. After the race that ruling can change.
3. If tech will formally be scrutinizing cars, I would expect a pre-race inspection option and I would want that to hold up in impound.
4. We need to be careful of CCES leadership intentionally targeting specific cars/teams. There is a preponderance of evidence that this has occurred in the past and, frankly, it sucks.
5. I am aware of rampant rules violations in the mid and lower pack. If these teams are going to be scrutinized in the future, this might be a worm hole.
6. I personally feel that the TAC should be given more power and that Tech should be under them in the org chart, and both of those under the board or a chairman.
7. CCES needs to report on impound. At Road America I learned that the Wing and Splitter widths should be limited to OEM bodywork, not bodywork as the rule is written. I am cutting mine to comply. That means that teams that bought bigger wings when we had to roll fenders out like I did will either get tossed in impound OR will have an advantage over our team since they have not been warned yet. I think tech handled this properly, but this ruling should be public.
8. There are two types of unclaimed parts: blatant to get an advantage, and unintentional or an interpretation issue. I know this CAN be subjective, but I think there is a difference in how this should be treated. IE: A team at 480pts forgets to claim a sqft of metal vs. a 500 pt team running swaybars.

4 minutes ago, LuckyKid said:
1. You can't JUST give points/laps to unclaimed parts, otherwise over 500pt teams have no incentive to claim them.
AGREED
2. Tech's approval via text or email should be rock solid in impound. After the race that ruling can change.
AGREED
3. If tech will formally be scrutinizing cars, I would expect a pre-race inspection option and I would want that to hold up in impound.
AGREED
4. We need to be careful of CCES leadership intentionally targeting specific cars/teams. There is a preponderance of evidence that this has occurred in the past and, frankly, it sucks.
☹️
5. I am aware of rampant rules violations in the mid and lower pack. If these teams are going to be scrutinized in the future, this might be a worm hole.
As a true mid-pack, sub-500 pt team, I wasn't aware of that. And to be honest, it sucks to have a good battle with someone who turns out to not be following the rules.
6. I personally feel that the TAC should be given more power and that Tech should be under them in the org chart, and both of those under the board or a chairman.
AGREED
7. CCES needs to report on impound. At Road America I learned that the Wing and Splitter widths should be limited to OEM bodywork, not bodywork as the rule is written. I am cutting mine to comply. That means that teams that bought bigger wings when we had to roll fenders out like I did will either get tossed in impound OR will have an advantage over our team since they have not been warned yet.
I think tech handled this properly, but this ruling should be public.
AGREED
8. There are two types of unclaimed parts: blatant to get an advantage, and unintentional or an interpretation issue. I know this CAN be subjective, but I think there is a difference in how this should be treated. IE: A team at 480pts forgets to claim a sqft of metal vs. a 500 pt team running swaybars.
AGREED

1 hour ago, E. Tyler Pedersen said:
Also within this one, if a team is found with unclaimed parts, do they get DQ'd automatically and sent to EC for that race and asked to fix items for the next race to be in the ChampCar class racing, or do we just add laps to their finish with the claimed part point total?

Hey, once again, I'm just a silly citrus that likes to hang out with Champ when they come close to my neck of the woods. My thinking is that you claim your parts and take your laps at tech, or if found later you are DQ'd and/or pushed into EC (also raced). Giving teams laps based on the unclaimed parts only entices teams to not claim parts and hope no one notices. If you can't be honest and claim what you have, then the "punishment" should be worse than having claimed them in the first place. It should also earn a team an enhanced "look" at tech the next race. Now my car will never qualify to race other than EC, as my engine swap adds a few (1400) too many points.

2 hours ago, E. Tyler Pedersen said:
Also within this one, if a team is found with unclaimed parts, do they get DQ'd automatically and sent to EC for that race and asked to fix items for the next race to be in the ChampCar class racing, or do we just add laps to their finish with the claimed part point total?

This is covered in 1.3.8 of the BCCR. The "may" in that rule should be changed to "will".
Edited by Snake

DQ. Not sent to EC. Sending to EC is most likely going to get them a trophy anyway.

2 hours ago, E. Tyler Pedersen said:
Also in the spring I had an idea on the rule interpretations: having tech fill out a paper to submit with all of their findings on cars for the weekend. The CEO would then sign off on this, then it would be published to the membership. If not the CEO, then I suggested the TAC. It's a checks and balances system that I thought could work, so there is another option too.

I think I remember seeing on the SVRA site that they had a link to protests (called Tech Zone) etc; basically it was a red sea scroll listing protests or otherwise checks done at impound post race. It was short and simply stated what was challenged and then the official finding. E.g., car weight, car, team, owner etc, actual weight, required weight (min in their case), and then judgement, over or under, and the resulting penalty if applicable: could have been under min weight, car DQ'd from finishing position, etc. That gets it out there for all to see, like some have asked. It's a simple documentation process; the info should be compiled if there was an actual process, so the added step would seem to be ownership of that page and keeping it updated. I like the idea honestly.... It shows they were challenged and were either cleared or were guilty and penalized; you see both outcomes and it is public. There was a separate page for each race, if I remember correctly.
Edited by 67Mustang

Rules are great, but if the organization that created the rules doesn't understand them, what good are they?
Here it is

27 minutes ago, scottyk said:
Rules are great, but if the organization that created the rules doesn't understand them, what good are they?

Hey now, they apply (some of) the rules. But only when they are used to preserve the victory of a high-budget team with obvious non-declared parts. Then they advertise that the 'protest was withdrawn'. Facepalm. You really can't even make this stuff up.

Edited by Slugworks Paul

In addition to the $50 there needs to be a bond for anything invasive, to pay for engine/car reassembly/gaskets. Obviously it goes back to the protester if parts are found illegal. See the SCCA rules for some tips:

8.3.3. Actions Against Cars
"The protestor may request that the car be disassembled, inspected, or any other test made, provided he posts a tear down bond (also referred to as "bond") with the SOM sufficient to cover the total expenses of disassembly, inspection, reassembly, and other costs associated with the protest. A Protest may be reduced in scope but not added to at the time the bond is set. Unless the protestor wholly or partially withdraws his Protest, the stipulated inspections will be completed after the bond is set and received."

I don't know that I'd advocate for teardowns, but I do think we need to clarify how far they go. I've now heard stories that both:
1. Included removal of the valve cover
2. Told the protestor 'we don't get into the engine' and the valve cover wasn't asked to be removed.

This thread is leaning back towards hearsay....

Trying to recenter this... Tyler, I think option 1, tech going back over the car, makes a ton of sense. Even as loose as NASCAR is (for the longest time no wins were ever removed), the penalty for not passing tech has been worse than not showing up to the race. I could see a limit: protested parts must equal 2 items or 10 points (whichever is lower), so we aren't protesting square inches of material. Or a list of protestable items (most things). Broken record, but if we aren't going to allow competitors access under/over/close to the cars, you need to post pictures of the cars online along with all of the info they presented to tech to get the little yellow logbook. Nothing in the logbook/tech sheet/swap sheet/etc. is that much of a trade secret. If you think so, that is the cost of winning. NASCAR used to broadcast post-race teardown to the other teams, and we could see the car disassembled piece by piece. We still managed to keep lots of IP secret; you guys will be able to as well. How many of you guys were sharing pics taken illegally "close" or "under" the 601/602 cars to discuss their legality? My point exactly. That access to the cars is needed to get this right; if we can't be there for safety reasons, send one person with a camera...

Trying to not fully derail this.... There are other elements wrapped up into this weekend, about where the series should grow to or stay, who it should cater to... For another thread, another time, but maybe being on the losing side of the preferential treatment will open some eyes to what it feels like to be a loser in the swap-calc weight, fuel capacity, VPI "corrections" downward, etc. 5 years ago we all raced cheap shitty cars 5-10 sec a lap slower.... what happened?
2) Pre-Race Inspection

Prior to the event (or when teams register), add an online form that allows teams to upload their tech information, plus a Q&A; tech can add notes from conversations or text messages/phone calls about legal and illegal parts. When it comes to the physical check at the races, tech will have some idea about the car prior to rolling up, and documentation for use during impound. I am assuming that most of the tech infractions are not malicious, and as the cars become newer and more complicated, there will probably be more research and communication needed in general. I still think that Option #1 impound is needed for the checks and balances.

Most of my racing experience has been in spec series. During the prime of that series everything was tech'd and there was a claimer rule. A new lazy tech guy took over and started a random selection on what was being tech'd, people started to cheat, they started to add complicated rules, and the series eventually imploded. Unfortunately I don't think the claimer rule would work in ChampCar, although it would be interesting to be able to buy a Porsche Boxster for $500 😋

Edited by trigun7469

28 minutes ago, wvumtnbkr said:
This thread is leaning back towards hearsay....

Most certainly happened. red0 can talk about valve cover removal and snorman can talk about the ruling that it can't be done.

1 hour ago, wvumtnbkr said:
DQ. Not sent to EC. Sending to EC is most likely going to get them a trophy anyway.

I agree. It would suck to be racing in EC and have enough of a gap over your next nearest competitor that you start taking it easy, thinking hey, we are going to finish in whatever place and get a trophy, and BAM, during impound a new car shows up in EC knocking you out of contention. DQ is a pretty harsh penalty, but moving to EC has the potential to affect other competitors unfairly. Also, Epstein didn't kill himself.

1 hour ago, Slugworks Paul said:
Most certainly happened. red0 can talk about valve cover removal and snorman can talk about the ruling that it can't be done.

Hearsay definition... The report of another person's words by a witness, which is usually disallowed as evidence in a court of law.

Personally, if there are unclaimed parts at impound, an automatic DQ is a little harsh. There is always room for interpretation, and if tech misses it pre-race and it gets discovered post-race and you're DQ'd, that is not entirely fair. There does need to be some recognition of where the team ended up. If the car in question is many laps ahead of the next-placed car, you need to look at whether it would have mattered, so adding laps solves the issue. In post-race discovery, if you add the appropriate points and subsequent laps, you are no different than if you had claimed them at the start. Again, a DQ at this point is pretty mean. Also, if I have a 400-point car and another 20 points worth of parts is discovered in impound, big deal, I'm still under 500. A DQ is pretty harsh. Perhaps as a compromise, post-race discoveries of unclaimed parts are worth twice the points or twice the laps they would have garnered pre-race.
# Python script losing arguments when run from PATH on Windows

I know my title is not descriptive, so let me try to explain it here. Normally I execute my Python script like this:

D:\github\Miscellaneous-Programs\Python>python check.py -h
hello
['check.py', '-h']

Then I added the folder D:\github\Miscellaneous-Programs\Python to my Windows PATH environment variable and tried to execute my script like this:

C:\Users\noob>check -h
hello
['D:\\github\\Miscellaneous-Programs\\Python\\check.py']

As you can see, it didn't show the -h argument I supplied to it.

My check.py:

import sys
print "hello"
print sys.argv

If I remove print sys.argv from the above-mentioned Python script, it works fine in both cases I mentioned above, i.e., it prints "hello" just fine. So, my question is: how does one execute a Python script that accepts some command-line arguments after the script's folder is added to the PATH environment variable? My purpose is to execute my Python script from anywhere in the Windows command prompt, which is somewhat similar to chmod +x check.py. I tried the chmod option in Cygwin and it works fine in both cases.

Cygwin output:

noob@noob-PC ~ $ chmod +x check.py
noob@noob-PC ~ $ ./check.py h
['./check.py', 'h']

- Make a .bat file containing python full\path\to\check.py and add the directory containing that .bat to Path (you might want to put echo off at the beginning of the .bat). – khachik Apr 23 '12 at 13:43
- I'd suggest changing the title to something a little more descriptive, e.g. "Running a Python script from PATH drops all but the first command-line argument", so other people having the same problem will find it. – mensi Apr 23 '12 at 14:04
- Was it really C:\Users\noob>check -h and not C:\Users\noob>check.py -h? – Piotr Dobrogost Apr 23 '12 at 18:45

Windows does not have a notion of executable script files with the interpreter given via a #! line, so what you intend to do cannot work. What Windows does is call the WinAPI function ShellExecute, which does the following:

"However, it is more commonly used to launch an application that operates on a particular file. For instance, .txt files can be opened by Microsoft WordPad. The open verb for a .txt file would thus correspond to something like the following command:

"C:\Program Files\Windows NT\Accessories\Wordpad.exe" "%1"

see MSDN

As you can see, only the first parameter is supplied to the application. In your case, this translates to something along the lines of:

"C:\Program Files\Python\Python.exe" "D:\github\Miscellaneous-Programs\Python\check.py"

What you can do to avoid this is to create a little .bat file named check.bat:

python check.py %*

(See this SO question for more details. You might also have to supply an absolute path for check.py or python if they cannot be found.)

- Thanks for the reply, it worked as you suggested. I will accept your answer; just waiting for some other alternative suggestions :D. – Noob Apr 23 '12 at 13:57
- "Windows does not have a notion of executable script files with the interpreter given as a #!, so what you intend to do cannot work." Windows doesn't have a notion of shebang indeed, but this doesn't mean you can't run various scripts by giving their names on the command line. Btw, there is a nice tool which makes shebang work in Windows – Python Launcher. – Piotr Dobrogost Apr 23 '12 at 18:50
- ShellExecute is a WinAPI function used by the system in the process of executing a command like script.py -h, and without knowing how the system uses this function you can't describe the whole process.
Your answer is wrong. – Piotr Dobrogost Apr 23 '12 at 18:58

Putting the folder on the PATH does not influence the way the system acts when you run some script by writing script.py -h at the command line. What happens is the system reads the registry to find out how to run the command you gave. You can display this information by first running reg query HKCR\.py /ve and then taking the result (which normally is Python.File) and running reg query HKCR\Python.File\shell\open\command /ve. The output on my system is:

"C:\Program Files\Python Launcher (64-bit)\py.exe" "%1" %*

This means that when the system sees the script.py -h command, it runs the py.exe program with the first parameter being the name of the script (that's what "%1" means) and the rest of the parameters being the ones given to the script (that's what %* means). I guess your problem is caused by the lack of the %* part in the appropriate registry entry.

- My output for reg query: (Default) REG_SZ "C:\Python26\python.exe" "%1" %*. So, it doesn't lack %*. – Noob Apr 24 '12 at 1:03
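The registry check described in this answer can also be done from Python itself. Here is a minimal sketch (assuming Python 2 on Windows, matching the post; the module was renamed winreg in Python 3), which only reads the association and changes nothing:

import _winreg

# Read the default value of HKCR\Python.File\shell\open\command --
# the command template Windows uses when you type "script.py args".
cmd = _winreg.QueryValue(_winreg.HKEY_CLASSES_ROOT,
                         r'Python.File\shell\open\command')
print cmd
# A template that forwards arguments ends with "%1" %*.
if '%*' not in cmd:
    print 'No %* in the template: arguments after the script name are dropped.'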
# Science: the toroidal pyramid

Chad Orzel gripes about this month's Scientific American special issue on "The Future of Physics" — which is actually extremely good, but which turns out to be exclusively about the future of high-energy particle physics. Not surprisingly, the commenters on Chad's blog reignite the ancient debate about which science is more fundamental than which other one, and whether all sciences besides particle physics are stamp collecting. I started writing a comment myself, but then I realized I hadn't posted anything to my own blog in quite some time, so being nothing if not opportunistic, I decided to put it here instead.

To me, one of the most delicious things about computer science is the way it turns the traditional "pyramid of sciences" on its head. We all know, of course, that math and logic are more fundamental than particle physics (even particle physicists themselves will, if pressed, grudgingly admit as much), and that particle physics is in turn more fundamental than condensed-matter physics, which is more fundamental than chemistry, which is more fundamental than biology, which is more fundamental than psychology, anthropology, and so on, which still are more fundamental than grubby engineering fields like, say, computer science … but then you find out that computer science actually has as strong a claim as math to be the substrate beneath physics, that in a certain sense computer science is math, and that until you understand what kinds of machines the laws of physics do and don't allow, you haven't really understood the laws themselves … and the whole hierarchy of fundamental-ness gets twisted into a circle and revealed as the bad nerd joke that it always was. That was a longer sentence than I intended.

Note (Jan. 25): From now on, all comments asking what I think of the movie "Teeth" will be instantly deleted. I'm sick of the general topic, and regret having ever brought it up. Thank you for your understanding.

118 Responses to "Science: the toroidal pyramid"

1. michael vassar Says:

2. Dave Bacon Says: "We all know, of course, that math and logic are more fundamental than particle physics" Doh. Lost me at step 1. BTW, it's turtles all the way down. (With quantum ones somewhere along the way.)

3. Scott Says: Sure — and so did economics. After all, if you don't know Bayesian probability and decision theory, then at least according to the Overcoming Bias folks you couldn't possibly know what to believe about anything, including particle physics.

4. Isabel Lugo Says: All sciences besides mathematics are stamp collecting. No, I don't seriously believe this. And some parts of mathematics might be called stamp collecting as well, including large swathes of combinatorics, which is what I do sometimes.

5. Chiral Says: I was thinking about the fabled "Theory-Of-Everything" tee-shirt the other day. I'd like a tee-shirt that says:

Theory Of Everything:
(K x y) -> x
(S x y z) -> (x z (y z))
Universe = stamp collecting exercise left to physicists

Of course, it has no more content than "the universe is computable", but if you're talking about nerd jokes …
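For readers meeting S and K for the first time: those two rewrite rules on the tee-shirt really do form a complete programming language. A minimal sketch in Python, with each combinator curried as nested one-argument functions; deriving the identity combinator as I = S K K is the classic first exercise:

# The two rewrite rules from the tee-shirt, as curried Python functions.
K = lambda x: lambda y: x                     # (K x y) -> x
S = lambda x: lambda y: lambda z: x(z)(y(z))  # (S x y z) -> (x z (y z))

# Identity is derivable: S K K x -> K x (K x) -> x.
I = S(K)(K)
assert I(42) == 42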
6. Tim Says: Since combinatorics is stamp collecting, and therefore not the most fundamental science, I should procrastinate by posting a comment to Shtetl-Optimized instead of working on my combinatorics homework. I think it was Feynman (or Gell-Mann) who said that physics is to mathematics as sex is to masturbation, so clearly mathematics can't be the fundamental science. Clearly neuroscience is the most fundamental science, because we do everything, including science, with our brains. Clearly x is the most fundamental science, where x is what I work on. After all, the only reason I work on it is because it's the most important, and if it's the most important it must be the most fundamental. Clearly the philosophy of fundamental science is the most fundamental science, since we clearly don't know what the most fundamental science is, and working on anything else is a sub-optimal use of resources. Exercise for the reader: Expand the argument above to show that 99.99% of scientists are irrational.

7. Marcus Says: Yeah, this whole "x is more fundamental than y" thing is pretty retarded when you stop to realize that there is no unifying theory anyway. Until there is, everything is more fundamental than everything else, quite reasonably. Or maybe you just haven't done enough acid ;-D

8. Robin Kothari Says: "We all know, of course, that math and logic are more fundamental than particle physics" Although the statement seems obvious to me, I have often found it difficult to convince people that this statement is true. Is there a good way to convince people about this?

9. Math is Physic's language, so Math is to Physics like English is to Shakespeare's Hamlet.

10. Nick Ernst Says: Hmmm. Well, while I have my disagreements with David Deutsch, I'm rather fond of his take on reductionism: "…scientific knowledge consists of explanations, and the structure of scientific explanations does not reflect the reductionist hierarchy." (larger excerpt linked here).

11. Peter de Blanc Says: I think it may be a mistake to say that chemistry is more fundamental than biology. At least, I'm sure it is a mistake to say that chemistry is more fundamental than evolution; chemistry is a medium which can support evolution, but there are other media which can as well.

12. Jonathan Vos Post Says: The assumptions of Reductionism are more deeply flawed than the anomalies given. For one thing, as Philosophers of Science have argued, there's the implicit but unjustified assumption that there is ONLY ONE REAL WORLD. If one assumes that (usually implicitly, in one's Kuhnian paradigm), then one falls into the trap of kludging together a metric for the "distance" between one's pet discipline and that putative unique real world. The unjustified metaphysical assumption is usually made that there is, in total, ONLY ONE SCIENCE (meaning the set of all science disciplines) that describes the putative unique real world. I've discussed at length with Geoffrey Landis the possibility (which would seem to make some fun Science Fiction) that we meet the extraterrestrials and, even after years of effort, cannot understand their science or technology, nor they ours, as both of us have something that works, yet is intrinsically irreconcilable. By the way, in teaching a history of scientific revolution course to several hundred adult students over the years, I first gave a quiz, and then again at the end, which plots metaphysical stance. Returning the first quiz, I point out that before Kuhn, essentially all scientists would have answered "true" to each of the 10 questions, and Kuhn himself answered "no." Indeed, the quizzes were never all "yes" nor all "no" in this class, and tended to skew more Kuhnian after I presented Kuhn and Lakatos and engaged in a week of conversation with the class.

13. Bram Cohen Says: Scott, here's a somewhat unrelated question.
If you were to build a time machine and go back and explain to Fermat Karatsuba's multiplication algorithm and how to do a Diffie-Hellman key exchange, what do you think he would have made of it?

14. Perhaps another nice idea my professor in solid state physics pointed out: For non-relativistic energies, the Hamiltonian of N electrons and M nuclei (as point charges) is a Theory of (almost) Everything, since it completely determines the wavefunction of the system, which in turn completely determines all expectation values. So from a mathematical point of view, the exercise of actually solving that Hamiltonian is left to the devoted reader. Dirac, however, pointed out that there are two issues with that view: a) The WF of 50 electrons in space representation is a 150-dimensional function. If one attempts to sample that function on a grid with but one bit of information per gridpoint and just 10 points in each "direction", this would mean a space requirement of 10^150 bits. I guess that is more than there are atoms in the universe… b) By containing ALL of the information about our system, the WF contains too much information, so in the end it gives us no information at all. In particular, such a treatment does not satisfyingly explain where superconductivity comes from, or why there are conductors and semiconductors, and it does not allow for general observations. So the reductionist approach does not work here. Of course, all behaviour of the solid state system relies on the underlying Hamiltonian, but this observation does not do much to help us understand the solid state system.

15. Scott Says: "We all know, of course, that math and logic are more fundamental than particle physics" "Although the statement seems obvious to me, I have often found it difficult to convince people that this statement is true. Is there a good way to convince people about this?" I don't think so — especially since (as my post was trying to explain) such statements actually have no clear meaning! I do get annoyed (in fact, angry) when people claim that if physics were different then math and logic would also be. And then I try to explain why that claim makes no sense, how it confuses knowledge of mathematical truths with the truths themselves and is ultimately self-undermining ("if logic only works in our local patch of the universe, why should I listen to your logic about the other patches?"), and they respond, "oh, you must be a Platonist." I don't even really know what Platonism means, but I'm ready to cast my lot with it, just to stick it to the people who treat that word like "racist" or "child-molester."

16. Scott Says: "Math is Physic's language, so Math is to Physics like English is to Shakespeare's Hamlet." Or perhaps one should say: math is to good physics like English is to Hamlet, and is to bad physics like English is to a C1ali$ spam. But actually I'm not happy with this metaphor, since where do discoveries in math fit into it? Are they like discoveries in linguistics?

17. Scott Says: "If you were to build a time machine and go back and explain to Fermat Karatsuba's multiplication algorithm and how to do a Diffie-Hellman key exchange, what do you think he would have made of it?" Bram, that's a tough one; as I ponder it, it seems the crucial issue is how much time I'd have with Fermat. If I had only five minutes he'd probably write me off as a crackpot (assuming I'm not allowed to show him my laptop or other such artifacts, nor the time machine itself). If I had a few hours I could completely blow his mind.
(Assuming I spoke French, which I don't.)

18. mick Says: "BTW, it's turtles all the way down. (With quantum ones somewhere along the way.)" – Well, it's one turtle with 4 giant elephants on its back anyway.

19. oz Says: "I think it was Feynman (or Gell-Mann) who said that physics is to mathematics as sex is to masturbation, so clearly mathematics can't be the fundamental science." I highly disagree – we all know which is more fun(damental).

20. John Preskill Says: Sorry — I am reposting because my comment got a bit garbled for some reason. Is (M=mathematics) > (P=physics), where ">" means "more fundamental than"? I'm inclined to answer "of course," but it is tricky to define the question precisely. Perhaps x > y means that we can change y without changing x, but not the other way around. Then unless we believe that there is a unique consistent physical theory, we can easily imagine changing the Hamiltonian of the world without changing the laws of logic, so it seems M > P. Even if we think "1+1=2" is a statement about physical objects, if it is a statement about physical objects in all possible models of physics, then it is really a mathematical statement, isn't it? On the other hand, if there is a "unique physical theory," doesn't that mean that (once we accept some suitable axiom) the form of the theory follows from logic alone, so that at best M = P (an overstatement, perhaps, since we could still change P by adopting different axioms), but not P > M? Is there a sensible way to frame the question so that a plausible answer would be P > M?

21. Kurt Says: "I do get annoyed (in fact, angry) when people claim that if physics were different then math and logic would also be." Maybe I'm not understanding what you're saying here, but it seems to me that if the physics of our universe were radically different, then our logic and math would be too. For example, if in our world the disjunctive syllogism failed to hold for macroscopic phenomena, our logic would certainly be different. You might counter that while our "standard" logic might be different, what we currently consider standard logic would still exist as a non-standard branch of logic in this other world. And to some extent I would agree with that. However, our brains have evolved to be able to deal with the universe in which we live, and I'm sure that there are limits to what we can comprehend in principle. If our universe were different, those limits would be different, and the intersection with what we can comprehend in this world might be rather small.

22. Bobby Says: To Robin (@#8): One of the issues with communicating with "normal" people about the fundamental nature of math is that when you say math, they think you mean manipulating numbers. Most people have no idea of math as the study of all predictable systems. In fact, I've never portrayed it that way — maybe that could work, if you can convince people to accept your definition. I've tried talking about math as the study of all typographical manipulation systems, but that shoots past them too…

23. Isabel Lugo Says: To Bobby (@#22): I think that the claim that mathematics is the study of all typographical manipulation systems is true but misleading. It's true in that all mathematical arguments (as far as I know) can be expressed as derivations in a suitable typographical manipulation system, etc. But it's misleading in that it gives people the idea that mathematicians think about their work in terms of pushing symbols around.
24. Scott Says: Kurt, what would it mean for the disjunctive syllogism to fail for macroscopic phenomena (or for that matter, microscopic phenomena)? The disjunctive syllogism says that (P or Q) and (not P) imply Q, which is as true in quantum mechanics as in anything else. (The claim that quantum mechanics "changes the laws of logic" is so confused I barely know where to start with it.)
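Scott's parenthetical can be checked mechanically: the syllogism is a tautology, true under all four truth assignments. A quick brute-force sketch in Python, with implication written out as its truth-functional definition:

# Verify that ((P or Q) and (not P)) -> Q holds for every assignment.
def implies(a, b):
    return (not a) or b  # material implication

assert all(implies((p or q) and not p, q)
           for p in (False, True)
           for q in (False, True))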
25. Blake Stacey Says: When I first heard about the "hierarchy of sciences" (physics -> chemistry -> biology, or whatever), back in the seventh grade or thereabouts, the question immediately jumped into my mind, "Well, where do you put mathematics, then?" Feynman's Character of Physical Law, which I found a few years later, also left me with the image of the "hierarchy" closing in upon itself to form an Olisbian strange loop. Speaking of bad nerd jokes, I'm sure that other people here can come up with better riffs than I did for the new James Bond movie, Quantum of Solace.

26. Moshe Says: Oh, this argument about what is "fundamental" sounds like something from the 1970s. Chad has a bi-weekly post about that issue, but he is usually very thin on examples; it is usually some subtlety in the wording of a sentence somewhere, or some anonymous comment on the web. Maybe I am blind to this, but is there a recent example of anyone serious and prominent who claims HEP or cosmology is more important than other fields of physics?

27. Koray Says: I am very likely to be wrong here, but I've always thought that the reason that we humans trace our mathematical reasoning down to the same logical axioms is because our brains share the same physical structure. Then the mathematics that we do is not independent of the physics of the universe; it could even be bounded by just the physics of our own brains. We could be like blind DFA's that develop a grand theory of acoustics with the sort of calculations they can perform, agreeing with each other along the way that this is all there is, not being able to study the Turing machines that they cannot see.

28. John Sidles Says: I recently wrote an essay (to myself) called The Spooky Mysteries of Classical Physics … is there anyone else who considers the ontology of classical physics to be just as spooky and ambiguous as that of quantum physics? I did a literature search and found no similar essay. 🙂

29. DrRainicorn Says: To Koray (#27): Of course the way we talk about mathematics is affected by the specifics of our brains, but that's not fundamentally what mathematics is. Even if we were to use different axioms, the axioms we use now would still be true; for example, the disjunctive syllogism is true not because it makes sense to our brains, but because the way the operators it consists of are defined forces it to be true. That is, if someone in another universe were to encounter a situation that encodes those operators, the truth of the disjunctive syllogism could influence events, even if they don't discover it. And yes, the computing power afforded to us by physics may determine the amount of math we are able to discover (seemingly nothing that requires a halting oracle, for example). However, the math we do discover still applies in any universe that encodes the mathematical objects we have studied (which seems very likely, given the apparent generality of such things as numbers and Turing Machines).

30. Scott Says: Koray, yes, you're conflating two issues. It's possible that superintelligent aliens would ask mathematical questions far beyond anything we could comprehend. But to whatever extent we asked the same questions, we'd necessarily agree on the answers.

31. Cynthia Says: Scott, slightly on, slightly off topic… Since you're a man of complexity, I'd like to hear your answer to this question: Because Godel's incompleteness theorems more or less affirm that math (and thus science) won't ever be able to get a grip on infinity, do you think this adds credence to the faith-based thinkers who subscribe to the notion that faith/religion will always rule supreme over reason/science? Even if you can't provide me with a complete answer, I'll be happy to receive an incomplete one! ;~)

32. matt Says: Yes, I definitely feel that the ontology of classical physics is just as bad, if not worse, than that of quantum physics. When asked "why do we perceive the world as classical when it is actually quantum?", the logical answer is "why should we perceive a world as classical if it is actually classical?". Both questions deal with the question of us "perceiving" the world in certain ways, namely a question of our conscious awareness, which classical mechanics says nothing about. You can ask the question "why do we, and other animals, model the world as classical when it is actually quantum?" and I think the answer is the same as "if you're designing a bridge, why do you model the forces in it as if classical elasticity theory was right, when it's actually quantum?" Namely, it's the easiest way to do it.

33. Hatem Abdelghani Says: Dr. Aaronson, I have two points to mention regarding your post: 1- The mathematics which is more fundamental than particle physics is meant to be pure mathematics, while computer science is applied mathematics. So they don't form a circle. 2- Is it true to define math and logic as science at all? I don't think so. They are more about philosophy and less about observations. Perhaps they form the junction between philosophy and science.

34. Scott Says: Cynthia, I don't even really buy the presupposition of your question. The incompleteness theorems do indeed put limits on the power of any given formal system — but ironically, the undecidable sentences you get from those theorems have nothing to do with infinity! (You might be thinking about the independence of the continuum hypothesis, which is something different.) But let's ignore that. The mistake you make is an ancient one: namely, to assume that if science can't answer a given question, then religion automatically has the "upper hand." Why should we assume that? If religion is to be our guide in infinite set theory, shouldn't theologians first have to answer the questions that mathematicians can't?

35. John Sidles Says: Matt says: "Yes, I definitely feel that the ontology of classical physics is just as bad, if not worse, than that of quantum physics … why do we, and other animals, model the world as classical when it is actually quantum?" Heck, Matt, we can say something stronger than that! Let's take Scott's toroidal principle seriously. As I read it, the toroidal principle predicts that if we think about it, we will discover that classical mechanics is just as ontologically spooky and ambiguous as quantum mechanics. And the prediction turns out to be true: it is easy to construct "spooky" classical ontologies that link seamlessly to "spooky" quantum ontologies (two diagrams here). The idea here is not to make quantum ontology less spooky, but instead to specify a (nonstandard, but consistent) classical ontology that is more spooky. And this exercise turns out to be a lot of fun!
Classical spookiness makes perfect sense, when you think about it. Given the central importance of the spookiness of quantum mechanics, it would be astonishing indeed if this spookiness were wholly invisible at the classical level! 🙂

36. Cynthia Says: Scott, my bad! ;~( Roughly speaking, Cantor's continuum hypothesis is to infinite sets as Godel's incompleteness theorems are to formal systems… Anyways, the *only* reason I ask you this question is because I've heard IDers argue the following: Since Godel's incompleteness theorems prove that mathematically-based science will forever be plagued by either incompleteness or inconsistencies, it must look to theology/faith to obtain consistency/completeness.

37. milkshake Says: Math/linguistic analogies get you only so far – a common problem of making analogies between unrelated fields. Math to me is more like a set of tools – the constructs that have been built and thoroughly checked for bugs – and guaranteed to work if you use them correctly (that's why syntax is part of the business).

38. Len Ornstein Says: Perhaps this will stimulate some responses that will lead to clarifications? Conventional languages, logic and math require users to commit to 'belief' in a set of fundamentals: axiomatic rules and definitions. By following the rules, a model (conjecture) can be proven either to be true or false (or undecidable, if the model, or set of axiomatics, is incomplete or poorly specified). All commitment, explicit or implicit, involves accepting axiomatic 'rules' without logical proof of their 'truth'. In this quite general sense, all deductive reasoning and the "proving of absolute logical truths" is ultimately 'faith-based'.

Hume showed us that, using axiomatically-based deductive reasoning, an inflexible definition of an object, class or process, from any extrapolation or interpolation from observation of only a sample of its parts, is UNDECIDABLE because of INCOMPLETENESS. So there's no logical way to justify a 'deductive level of belief' in the truth of any facts generated by inductive reasoning. So, to some degree, we all take facts 'on faith'. Science depends completely on both deductive and inductive reasoning. Therefore the argument that science dispenses with the need for any elements of faith or belief – supposedly in contrast to religion – is overly simplistic. Since faith and belief are defined as metaphysical, this argument, which also is equivalent to the claim of the extensive literature of the Positivists that science is, and must remain, free of metaphysics, is hard to accept.

However, there is an important difference between the discipline required in science, on the one hand, and that of religion, mathematics and classical logic, on the other: Science requires commitment to axiomatics in order to design robust models of reality (by carefully following the rules) so that the models can be dependably communicated and understood. But science always adds the additional injunction: a degree of belief in each model can be established only with the weight of supporting factual evidence; the more and 'better' the evidence, the greater the belief. This is the important distinction between science and religion – and even between science and mathematics. (And it helps to assure that a model itself remains distinguishable from the 'reality' it's been designed to simulate.) External or internal (biological) worlds are accessed through our senses – either directly, or 'through' intervening 'instruments'.
All the models (ideas, hypotheses, theories, laws) constructed by theoretical scientists to generate order out of such observations – preferably, but not necessarily, carefully reasoned, starting with a conjecture and often ending with a proven theorem – nonetheless all remain agnostically tentative guesses about the nature of reality. To inspire any degree of belief that a model 'explains' such reality, science requires separate evidence that the model, to some degree, matches some previously unobserved (empirical) aspect of those worlds. That's to say, to receive anything more than the most tentative consideration, an apparent deductive truth about the 'world' must be supported by (usually 'new') matching, empirical, inductive 'truths' (largely discounting prior similar evidence, if it has contributed to the abduction and construction of the model). This is quite unlike what's required for establishing belief in the purely deductive proof of a typical theorem in mathematics.

The number of provable mathematical conjectures (consistent models; theorems) is enormous. But the fraction of those that can be matched to worldly observations is infinitesimal. Both theoretical and experimental scientists sometimes discover, and in any case use, this tiny fraction of mathematics as extremely valuable intellectual tools. But it's misleading to consider most of mathematics as a kind of science. Most of the time, mathematics avoids logically undecidable models, and therefore needs no empirical matching. It 'gets away with' "pure reason" to prove absolutely true theorems. Science can't! It 'must' tie all of its models to messy, logically undecidable facts – and always ends up with at least some tiny residual uncertainty.

It's possible to generate hierarchies of models and theories of science. Strictly speaking, mathematics isn't part of such a hierarchy of science. Locations in such hierarchic classifications are based on the generality of each component model, and the confidence with which a model has been confirmed decides whether it's to be included. The more 'general', the nearer to 'the trunk' of a hierarchical tree of science.

Many mathematicians and some scientific theoreticians – typical inventors of models – are Platonists. They (want to?) believe that some of their theories are revealing absolute and fundamental truths about 'some reality'. But the natures of deductive and inductive truth, reviewed above, from which none can escape, make it 'impossible' to establish absolute truths about 'external reality' – and therefore, within science, that issue should be moot. The Platonic allure of THE most general scientific discipline may provide the motivation – the challenge – to discover central 'ultimate truth'. But within which scientific discipline will we find the most general: the reductionism of particle physics? astronomy? the physical evolution of cosmology? evolutionary biology? neuroscience? or communication/information theory and computational complexity? It depends upon how you choose to design your hierarchic tree. For example, since all mental discipline and reason which gives rise to all science must begin with language, perhaps linguistics/cognitive science and its evolutionary, neurobiological core should be the trunk? What could be more fundamental 😉

The bottom line – probably what a scientist strongly believes will decide how each values a scientific model: first, observational testability and degree of confirmation?
second, economy ('simplicity' and/or 'beauty' – the test of Occam's Razor)? and then generality? Socially pragmatic scientists – those particularly concerned with human survival and comfort – might also require high utility, and even place utility first, or second?

39. Tyler DiPietro Says: "Since Godel's incompleteness theorems prove that mathematically-based science will forever be plagued by either incompleteness or inconsistencies, it must look to theology/faith to obtain consistency/completeness." Actually, if we're allowed to handwave, we needn't resort to such intricate and turgid pablum as theology. We can just posit an oracle and be done with it.

40. Tyler DiPietro Says: "Conventional languages, logic and math require users to commit to 'belief' in a set of fundamentals: axiomatic rules and definitions. By following the rules, a model (conjecture) can be proven either to be true or false (or undecidable, if the model, or set of axiomatics, is incomplete or poorly specified). All commitment, explicit or implicit, involves accepting axiomatic 'rules' without logical proof of their 'truth'. In this quite general sense, all deductive reasoning and the 'proving of absolute logical truths' is ultimately 'faith-based'." The problem with this argument is that it reduces "faith" to "assumption", and thus generates a much weaker definition of the term than we intuitively associate with those things we consider "faith-based". IMO, it's not a very profound statement to say that we must assume certain things in science/math (think of the Cartesian demon), so the argument doesn't really illuminate anything that we don't already acknowledge.

41. Job Says: What's not Math? Anything that is not describable? And what's not Physics? Anything that I can't feel or think about? From a human perspective they're both fundamental. We'll never come across something that's not both Physics and Math, or one and not the other – by definition. Math doesn't have a scope, and Physics does, but has one at the edge of the spectrum. Other sciences have intermediate scopes – i.e., they run from 68 to 33. So I don't think that there is a fundamentalness circle. Physics is at the bottom, and Math is everywhere.

42. Tyler DiPietro Says: One thing I'd like to add to the above is that while theology employs an abstract and deductive system of reasoning that is superficially similar to mathematics, the real line of demarcation comes from the presuppositional nature of theology. The "axioms" of theology are contingent upon culture; e.g., a Christian theologian must accept the divine exceptionalism of Jesus of Nazareth in one form or another. In mathematics the axioms are chosen according to their power in illuminating mathematical questions.

43. Job Says: In Theology there really is only one axiom – everything else follows. God is the axiom-explaining axiom.

44. K. B. Nikto Says: Job, axioms are not explained. They are assumed without proof. To explain a proposition is to refer it to another proposition, one which is already known.

45. cody Says: I like your explanation response there, Scott, to Cynthia. A related question: does our inability to have a complete mathematical system (strong enough to generate the reals) inhibit our ability to have a complete (or maybe better called 'universal') physical theory? This isn't a question I've thought about before, so I'll be curious what people think.
Personally, I view mathematics and physics as being extremely similar, with the prime difference being that in mathematics we get to define our axioms and see what sort of behavior follows, whereas in physics we are given the behavior and we are attempting to recover the axioms. Unfortunately we have no way of knowing if we have ever found all the axioms, or really, any of them. But the more our physical theories can explain, and the fewer assumptions they make, the more confident we can be that we are at least approximating the 'ultimate' 'rules'. What really piques my interest in computer science is that it sits somewhere between math and physics: we don't really just define our axioms as we please, we choose ones that somehow make sense (this is usually true in mathematics too, but there seems to be more room to explore the unlikely in mathematics, which is good in many ways). I'd love to write more on this, but the post is really too long already.

46. John Sidles Says: As a toroidal confection, I recommend the Nebula-winning novelette Tower of Babylon by Ted Chiang. Maybe other folks can name their favorite story having a "toroidal" mathematical theme? Thinking about toroidal stories suggests that it might be fun to look for toroidal theorems. An example of such a theorem concerns the Hilbert transform. The kernel $H$ of the Hilbert transform is real-valued, yet it satisfies the defining relation of what Kahlerian geometers call a complex structure, namely $H^2 = -I$. The Hilbert transform theorem suggests that we can choose to traverse the classical-to-quantum transition by descent rather than ascent (to echo Chiang's wonderful story). Namely, we impose a Hilbert-transform invariance upon classical real dynamics, mirror the resulting complex structure in the Kahlerian geometry of the state-space, and introduce complex numbers either never, or only when we feel the practical need (feh!) to impose a coordinate system on the state-space. The result is a world in which we perceive that "spooky quantum mysteries" are manifest everywhere, even in classical physics.
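Comment 46's identity is easy to check numerically. A minimal sketch, assuming a zero-mean discrete signal (since the DC bin, which the transform annihilates, would otherwise spoil the identity): realize $H$ in Fourier space as multiplication by $-i\,\mathrm{sign}(f)$ and confirm that applying it twice negates the signal.

import numpy as np

def hilbert(x):
    """Discrete Hilbert transform: scale each Fourier bin by
    -1j * sign(frequency); the DC bin is zeroed."""
    X = np.fft.fft(x)
    h = -1j * np.sign(np.fft.fftfreq(len(x)))
    return np.fft.ifft(h * X).real

t = np.arange(256)
x = np.sin(2 * np.pi * 3 * t / 256) + 0.5 * np.cos(2 * np.pi * 7 * t / 256)

# Real in, real out -- yet H^2 = -I, just like multiplication by i.
assert np.allclose(hilbert(hilbert(x)), -x)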
But actually I’m not happy with this metaphor, since where do discoveries in math fit into it? Are they like discoveries in linguistics? Shakespeare again provides the best example: one can invent in English as in math. The coining of new words and metaphors, with such art that without explanation, the audience knows exactly what is meant. And centuries after the author’s passed away, no-one recalls how anyone before could do without those handy turns of phrase. Math, like English, is but another tongue, except that it’s an artificial one, in which to make metaphors most precise. 52. Job Says: My point was that if you make God an axiom then you can always explain things further. You can always say because “God causes it to be that way”. Why are mixed fabrics evil? Because God causes it to be that way. 53. Job Says: K. B. Nikto, i was saying that if you add in the axiom of God to an existing set of axioms those axioms disappear. I’m familiar with what an axiom is – i looked it up on Wikipedia before i posted. 54. Anwar Shiekh Says: This Computer Science verses Physics; when is it going to stop? They are not adversaries. 55. K. B. Nikto Says: By equating the evil of homosexuality with the wearing of mixed fabrics, Scott has trivialized the important distinction between a potentially deadly sexual aberration and a sartorial choice. 56. Tim Says: IMO, it’s not a very profound statement to say that we must assume certain things in science/math (think of the Cartesian demon), so the argument doesn’t really illuminate anything that we don’t already acknowledge. It seems like one to the kind of people who think that an important distinction between (their) religion and science is that science has to assume things and (their) religion doesn’t (or at least not anything very important). A few lessons on basic philosophy, the history of skepticism, Descartes’ demon, and the fact that the exploration of every non-trivial domain of human knowledge requires making assumptions and is incapable of producing real certainty would be so much more useful than the useless “observation + theory + experiment = science” crap that is the closest most people ever get to the philosophy of science (or, horrors, anything like epistemology). A lot of people pick it up more or less by by osmosis (or just assume it from the beginning, the lucky bastards), but a lot of people don’t. I didn’t get it until I was eighteen, and it took a lot of work (and blew apart my worldview). Maybe I’m unusual, but I’m willing to bet that the people who think this stuff goes without saying are not as numerous as many of them think. 57. Tyler DiPietro Says: “By equating the evil of homosexuality with the wearing of mixed fabrics, Scott has trivialized the important distinction between a potentially deadly sexual aberration and a sartorial choice.” You’re kidding, right? 58. John Sidles Says: Nikto posts: … by equating the evil of homosexuality with the wearing of mixed fabrics … Gosh that’s a ugly, hateful post. Posts like that do not belong on a good-spirited forum. No, I am not interested in excuses or debate … this is just to go on record. 59. Ryan Budney Says: The “Pyramid of Science” I’ve always thought of as a comforting fiction that gets told to people who want those kinds of things, similar to the Easter Bunny or Santa Claus. The distinctions between the various sciences are artificial to begin with, which makes The Pyramid a fiction of a higher order altogether, and seemingly impossible to pin down. 
While we’re working with this vague notion of “more fundamental than”, there’s compelling arguments to say that biology is far more fundamental than mathematics, computer science and particle physics since it deals with things that people readily identify as being important. Certainly an indicator of “fundamental” should be “does it affect me in any way I’d consider important?” 60. Scott Says: Gosh that’s a ugly, hateful post. Posts like that do not belong on a good-spirited forum. John, I completely agree (assuming Nikto was serious). Beyond that, I’m glad to learn there’s something you hate! 61. Cynthia Says: Nikto, Since being gay is not any deadlier than being straight, then I say you’ve got it backwards. Wearing mixed fabric is a potentially deadly sexual aberration, whereas homosexuality is merely a sartorial choice! 62. John Sidles Says: … I’m glad to learn there’s something you hate! … With apologies to Alexander Pope: Abuse is a monster of so frightful mien, As to be hated needs but to be seen; Yet seen too oft, familiar with its face, We first endure, then pity, then embrace. This goes double for bad Powerpoint presentations! 🙂 63. K. B. Nikto Says: Cynthia, your comment is facetious and may denote a paucity of information. 64. Raoul Ohio Says: Cynthia: “~( Roughly speaking, Cantor’s continuum hypothesis is to infinite sets as Godel’s incompleteness theorems are to formal systems…” Other than being tough issues in axiomatic mathematics, I don’t think Cantor’s continuum hypothesis has anything at all to do with Godel’s incompleteness theorems: CCH: The ZF set theory axioms and logic do not tell us what the size of R (the real numbers) is. GIT: No formal system can prove all of math. Keep in mind that R is basically a construction to enable one to prove things in calculus. E.g., when learning calculus, the intermediate value theorem sounds pretty obvious. As a math major, you learn to push around Dedekind cuts and Cauchy sequences and prove stuff like the IVT. In Topology you generalize these constructions as much as you can, and prove lots of theorems about whatever. But who cares? No one has any idea how well R maps to any concept of space or time. This is where studying modern topology is almost as bad as being a philosophy major: it encourages bright young kids to waste their time working on pointless issues when they could be discovering better approximate algorithms for NP hard problems or better numerical schemes for NS equations. Hardy had it backwards: Useful mathematics is harder than pure math. 65. Tyler DiPietro Says: Methinks Nikto is starting to sound a bit trollish. 66. John Sidles Says: Quotes arrayed upon a slippery continuum … —————————————- G. H. Hardy: “Mathematicians may be justified in rejoicing that there is one science at any rate, and that their own, whose very remoteness from ordinary human activities should keep it gentle and clean” Peter Lax: “Pure mathematics is a branch of applied mathematics.” Karl Friedrichs: “Applied mathematics consists in solving exact problems approximately and approximate problems exactly.” John Bardeen: “Invention does not occur in a vacuum. Most advances are made in response to a need, so that it is necessary to have some sort of practical goal in mind while the basic research is being done; otherwise it may be of little value.” Deak Parsons (to Robert Oppenheimer): “Ruthless, brutal people must band together to force the FM [Fat Man] components to dovetail in time and space. 
They must feel that they have a mandate to circumvent or crush opposition from above and below, animate or inanimate – even nuclear!"

67. Cynthia Says: K. B. Nikto, don't get me wrong, I like men way too much to be gay! But IMO, you're being somewhat naive to believe that gays have a monopoly on deadliness, whether it involves micro- or macroorganisms…

68. Cynthia Says: Cody and Len Ornstein, assuming I'm reading some of what you're saying right, let me restate a little bit of it in more down-to-earth terms… Since physics is merely a subset of math, then just because math is bound to run into either incompleteness or inconsistencies doesn't necessarily mean that physics (along with the hierarchy of science) will run into either of the two as well.

69. Len Ornstein Says: Cynthia: Physics USES, but isn't a subset of, math. Science differs from math (and religion) by REQUIRING matching empirical confirmatory evidence, established with some significant degree of confidence, to consider that any conjecture is a candidate as a model of 'reality'. And that candidacy can NEVER grow to absolute certainty. Math (with Godelian caveats) requires no more than 'pure reason' to prove the 'absolute truth' of many conjectures. That's a BIG difference!

70. cody Says: Cynthia, I really agree with what Len just said. As an example, I would say it doesn't really mean much to talk about a universe where gravity falls off as 1/r^3, because our measurements seem to rule out that behavior. In contrast, there is nothing 'right' or 'wrong' about imagining a world with finitely or infinitely many points when studying geometry. They lead to radically different 'worlds', but neither one need exist for us to study it.

71. John Sidles Says: Only in three dimensions do harmonic gravitational potentials yield periodic astronomical orbits. This is sometimes cited as an anthropic reason that our universe is 3D: because no other universes support the billions of astronomical orbits needed for the highest form of life … mathematicians … to evolve. 🙂 That is what the Life-universe professors deduce, anyway!

72. Stas Says: Somewhat off topic, but how fundamental would this science be? 😉 Scott, it looks like the first author is from your lab…

73. Deja vu Says: "The disjunctive syllogism says that (P or Q) and (not P) imply Q, which is as true in quantum mechanics as in anything else. (The claim that quantum mechanics 'changes the laws of logic' is so confused I barely know where to start with it.)" Let's for a moment assume we lived in a universe in which nuclear fission was so easy to achieve that every time you put together two objects you actually got only 1.5 times as much mass, with the rest being dissipated in all manner of heat and radiation. Then we would have discovered first, and studied for far longer, the system of mathematics in which 1+1=1.5. Sure, the axioms stating that 1+1=2 would still hold, just like the axioms stating 3+5=2 mod 6 still hold. However, their role in mathematics would be less central. Or to use a more real example, non-Euclidean geometries were a curiosity within math for quite a while, until we discovered that we might actually live in one, after which their role in math became more important.
74. Scott Says: "Let's for a moment assume we lived in a universe in which nuclear fission was so easy to achieve that every time you put together two objects you actually got only 1.5 times as much mass … Then we would have discovered first, and studied for far longer, the system of mathematics in which 1+1=1.5." Deja vu, that sounds cool if you don't think about it too much, but I think it completely collapses if you do. What exactly is "the system of mathematics in which 1+1=1.5"? Your use of those words doesn't conjure such a system into existence (much less a unique consistent one). Here's one way of explaining the difficulty: even to state the rule that "every time you put together two objects you actually get only 1.5 times as much mass," you needed the ordinary concepts of 2 and 1.5, didn't you?

75. Scott Says: Stas: Thanks for the link! I feel that my colleagues' work richly deserves, and will quite possibly receive, an Ig Nobel Prize.

76. Hatem Abdelghani Says: "You realize, don't you, that you're in a den of theoretical computer scientists?" Sure, but that doesn't make the cycle closed yet. Theoretical computer science, which lies at the beginning of the chain of fundamentalness, is not the same as applied computer science, which lies at the end of that chain and which belongs to engineering more than it belongs to mathematics. That's my opinion anyway.

77. Cynthia Says: Cody and Len Ornstein, at risk of falling into the black hole of semantics, let me extract this tidbit from both of your comments… Math is a tool for both science and religion. But while science uses math to grasp reality, religion uses it to create fantasy.

78. Deja vu Says: "I think it completely collapses if you do. What exactly is 'the system of mathematics in which 1+1=1.5'?" Easy: "plus", instead of describing the natural (there's that pesky word again) addition, defines a completely formal function f of two variables that has the properties f(x,x)=1.5x, f(x,0)=x, f(x,y)=f(y,x), etc. "Your use of those words doesn't conjure such a system into existence (much less a unique consistent one)." I just did, and it is fully consistent. I can now study the theorems and properties behind the very useful function f. Is it associative? Is it continuous? "Here's one way of explaining the difficulty: even to state the rule that 'every time you put together two objects you actually get only 1.5 times as much mass,' you needed the ordinary concepts of 2 and 1.5, didn't you?" Gee, that means nothing: say, if I'm an Eskimo and I need to describe a camel for the first time, I will use concepts that are familiar to the Eskimo (say, it's like a bear, but with longer legs and a hump like a whale). If on the other hand I'm an Australian aborigine, I'll use concepts and words familiar to those who live in the outback. So you see, the use of a given set of words does not prove their centrality or universality, but simply our familiarity with them. Don't get me wrong: the deep ideas of math are universal, but the specific systems we study, the shorthands we use to describe them and the order in which we study them are very much a reflection of our physical reality.

79. Scott Says: Deja vu: You haven't fully defined the function f. Can you please do so? What is f(2,1), for example?
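Scott's question has teeth: the three constraints above are consistent but nowhere near a definition. A quick sketch exhibiting two functions that both satisfy f(x,x) = 1.5x, f(x,0) = x, and f(x,y) = f(y,x), yet disagree at (2,1); the particular formulas f1 and f2 are illustrative choices, not anything proposed in the thread:

# Two symmetric functions with f(x, x) = 1.5x and f(x, 0) = x.
def f1(x, y):
    return max(x, y) + min(x, y) / 2.0

def f2(x, y):
    return (x*x + y*y + x*y) / float(x + y) if x + y else 0.0

for f in (f1, f2):
    assert f(5, 5) == 7.5 and f(5, 0) == 5 and f(2, 1) == f(1, 2)

# f1(2, 1) == 2.5 while f2(2, 1) == 2.333... -- the axioms don't decide.
assert f1(2, 1) != f2(2, 1)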
Sure, it seems like the technology developed off the science is huge, but how much more is out there that we aren’t aware of? How do you know but that ev’ry bird that cuts the airy way is an immense world of delight, closed to your senses five? Maybe mathematics and logic as we know them are only a subset of the sort of thinking that really gets things done – derivatives of a truly fundamental way of approaching knowledge of our universe. Maybe physics has been leading us down only a marginally useful path for the last 300 years. Maybe…

81. John Sidles Says: 12Quarts Says: What basis is there for believing that all of science has developed anything but a slightly more “fundamental” understanding than was achieved by 12th-century theology? That is a serious question, and so quoting comedic lines from old Blackadder episodes is IMHO not the most appropriate answer. I would say that a major advance is humanity’s still-growing and increasingly multidimensional understanding of our own natural history. This theme unites cosmology, biology, cognitive science, and evolution … it unites every branch of science and mathematics. Humanity’s understanding of our own history and literature is deepening too … and this includes our understanding of Scripture. To read the books of Moses and *then* read Frans de Waal’s Chimpanzee Politics is to participate in an adventure that was possible to no previous generation. As with any adventure, a considerable degree of effort, discomfort, and inconvenience is entailed, and of course the destination of the journey is uncertain too (this uncertainty is a major theme of Donald Knuth’s writing on this topic). Obviously too, our understanding is a work in progress … we are still wandering in the desert, and no one knows when (or even if) we will reach the Promised Land! 🙂

82. Raoul Ohio Says: Since I was a kid I have toyed with the question of how much of our math and science will turn out to be the same as that of alien civilizations when we establish radio communication. What is fundamental and what is arbitrary? Assuming enough math to support radio engineering, here is my model: 1. Counting is pretty basic to intelligence. I will guess a 99% likelihood of the same N we have. 2. Adding and subtracting are also basic: 95% for the same Z. 3. How do they think about division? Many reasonable systems, such as an extension of IEEE 754 FP numbers, would work fine: 90% for the same Q. 4. I think our reals are a lot more arbitrary. Do you even need to consider the solution to x^2 − 2 = 0 to be a number? 50% for roughly the same (Dedekind cuts, Cauchy sequences) R. 5. If you are working with radio, you probably have discovered something like complex numbers: 75% they make the same extension as we do to whatever they use for R. If anyone wants to post their estimates, I offer a $1 bet about who comes closer with whatever civilization contacts us first. It might take a while to find out who wins. Those going to Heaven (surely including St. Don) can keep tabs for us. Finally, what are the technical concepts you would have to stumble onto as you develop technology? Fourier transforms, Hilbert space; heap sort, quick sort; quantum (mechanics, computing, info th)? Anyone wishing to study this might assess independent rediscovery as a measure of how obvious a concept is. Shell sort seems to be kind of an outlier. Does anyone know if it has been frequently rediscovered?

83. cody Says: Cynthia, I don’t see math as a tool for religion at all.
I would say that mathematics is the language in which science is ‘spoken’, though math on its own is more than just a language or tool. And yes, I’d agree that science uses math to ‘grasp reality’. But ‘religion’ in general does not use math at all; to the contrary, it is heavily reliant on tradition and fantastic lies to explain reality (poorly).

84. Joseph Hertzlinger Says: When I was an undergraduate, I came up with a “proof” that all science was circular based on the following: Psychology is really biology. Biology is really chemistry. Chemistry is really physics. Physics is really mathematics. Mathematics is really logic. Logic is really philosophy. Philosophy is really psychology. Psychology is …

85. John Sidles Says: There is also a toroidal aspect to the cardinal tenets of the Enlightenment, as conceived by mathematicians and scientists in the 1600s and 1700s: reason as the sole criterion of what is true; rejection of the supernatural; the equality of mankind; ethics stressing equity, justice, and charity; comprehensive toleration and freedom of thought; personal liberty; freedom of political criticism and the press; democratic republicanism as the most legitimate form of politics. (This particular list is from Israel’s Enlightenment Contested.) These tenets were conceived by the authors of the Enlightenment—who included the most distinguished mathematicians and scientists of those centuries—as uniquely elevating humanity “above” the animal kingdom. But as we learned more about our own evolutionary history, we have come to see these tenets not as moral verities founded upon mathematical logic (Leibniz’ view), but rather as a contingent mirror of our own evolutionary history. In other words, the moral principles of the Enlightenment, being devised by primates, are satisfactory to primates, but would be less satisfactory to societies descended from cows, and wholly unsatisfactory to societies descended from ants. Ed Wilson famously made this point when, upon viewing the movie Aliens, he remarked “Whoever made this movie knew a lot about social insect morality!” And we can add, primate morality too … the whole movie can be viewed as an extended morality play. So in seeking to escape our primate heritage, we toroidally come to understand and embrace it more thoroughly. Even today, many folks (including me) seek via mathematics to embrace simplicity, morality, and truth … The good news is, we get to “pick any two.” 🙂

86. Cynthia Says: I totally agree with you, Cody! Theologians aren’t mathematical heavyweights by any stretch of the imagination. But many of them are masters of numerology and other such cryptic nonsense.

87. Deja vu Says: Deja vu: You haven’t fully defined the function f. Can you please do so? What is f(2,1), for example? Our imaginary reality would give you the definition. Just like our current reality tells you that bodies in free fall do so with a constant acceleration (you can imagine a universe in which bodies fall at, say, a quadratic acceleration, can’t you?)

88. John Sidles Says: To defend Deja vu’s point of view, IMHO he is absolutely right to intuit that nonstandard classical models of reality exist that are both mathematically consistent and perfectly useful as practical models of reality. A well-known example is the Feynman-Wheeler (FW) classical electrodynamics, in which (classical) electrodynamic propagators are half-advanced and half-retarded.
The FW picture of reality agrees with experiment, and is mathematically consistent too … it simply is not the way that we are conventionally taught that classical reality “works.” For example, FW physics has the “spooky” property that we are all subject to “classically real” electric and magnetic fields that impinge upon us … from the future! We are taught that electric and magnetic fields are “physically real”, but as FW physics explicitly demonstrates, this seemingly clear definition of classical reality has some mighty spooky aspects. And when we study noisy classical systems, the notion of “simple classical reality” becomes even spookier. It is characteristic of Feynman’s way of thinking that he noticed—and took seriously—the notion that classical reality is similarly spooky to quantum reality. The good news for students is, classical reality is still mighty spooky today, fifty years after Feynman began wondering about it. We really have not made much progress in understanding this.

89. Scott Says: Yes, there are useful nonstandard models, but the point is that Deja vu can’t even give me a model! He’s not going to tell me what f(2,1), f(3,2), etc. are, and for good reason: as soon as he fully defined his nonexistent universe, I would catch him in an absurdity. It’s like this: if I’m having a strange nightmare, all I need to do to wake up is ask one pointed question: “No, why are you telling me this, if before you were telling me that?” The nightmare-generating part of my brain can’t compute an answer to the question, and the entire nightmare crashes. I’m trying to use the same tactic to wake up from Deja vu’s nightmare where 1+1=1.5. The only problem is that he’s not answering the questions.

90. John Sidles Says: Scott, to respect both you and Deja vu, it is surely true (as you point out) that most nonstandard models of reality simply don’t make sense, being inconsistent and/or incompletely defined and/or at odds with experience, so why waste time with them? Life is too short! Yet Deja vu is right too (IMHO) to point out that the number of models of reality that *do* make sense is strictly larger than one, even at the classical level. On the rare occasions that new models can be found that work, they are fertile sources of new mathematics and physics. So much so, that the discovery of a new model is a pretty good working definition of what constitutes “new mathematics and physics”. Example: Turing machines as models of computation and cognition. No one claims that cognition actually works this way … but what a great new model! The 21st century too will find new models … and IMHO the most novel aspect of these 21st-century models is likely to be their informatic and algorithmic content. If only we knew what these new models are going to be … 🙂

91. harrison Says: Even though there are plenty of nonstandard models out there, I think they’re pretty boring. Noncommutative rings/fields (coughquaternionscough) are really the only interesting “different arithmetics” I’ve ever seen. A lot of what mathematics is, is saying “if you want a model that satisfies this and this and this axiom, then you’re stuck with something isomorphic to this.” So a system of addition in which 1+1 = 1.5 (whatever that even means) wouldn’t follow the “normal” rules of addition. Also: C’mon, people, no one’s made the obvious Ricoh “non-standard model” joke? This is a travesty.

92. Job Says: Joseph Hertzlinger, I don’t agree with your jump from Logic to Philosophy.
I find that arguments for the circularity of the sciences tend to use a little bit of word play (derived from a certain ambiguity) and are more poetic than rational. For example, to argue that Chemistry is more-fundamental-than Biology, you seem to be using the fact that Chemistry is-used-by Biology. But when you go from Logic to Philosophy you seem to be using the makes-use-of relation, because I don’t see how Philosophy is-used-by Logic. I think that you are jumping from Logic to Philosophy because the first is a subset of the second, but that jump is not in keeping with the previous jumps, or at least I don’t see how it is. Psychology makes-use-of biology. Biology makes-use-of chemistry. Chemistry makes-use-of physics. Physics makes-use-of mathematics. Mathematics makes-use-of logic. Logic makes-use-of philosophy? No: logic is-a-subset-of philosophy, which is not coherent with the previous jumps. Philosophy makes-use-of psychology. Psychology makes-use-of … In addition, Philosophy makes use of everything and Math is used by everything, so I would leave these two out, because they don’t add anything meaningful and just lead to confusion and circular arguments.

93. John Sidles Says: Harrison says: Even though there are plenty of nonstandard models out there, I think they’re pretty boring. Isn’t that simply because a really successful nonstandard model pretty swiftly becomes the standard model? Rapid progress in biology in recent years supplies plenty of examples. For example, a closed gene pool has become the standard definition of what constitutes a species. However, it is far less clear (to me) that this same process works in mathematics, and perhaps it does not work at all. It is downright disheartening, for example, to read in the introduction to Joe Harris’ Algebraic Geometry that a textbook so long that (in Harris’ words) “only someone who was truly gluttonous, masochistic, or compulsive would read every example” is merely an introduction to a more advanced mathematical ontology that focuses upon sheaf cohomology and scheme theory as expressing the “real” essence of algebraic geometry. Yikes! There’s not much empirical evidence of toroidal cognitive topology in algebraic geometry! Instead, the mathematics of algebraic geometry (and perhaps many other branches of mathematics too) is apparently becoming endlessly richer, without becoming simpler.

94. Greg Egan Says: In the interests of finding out precisely where Scott vs. Deja vu is going, can I propose a candidate for f? f(x,y) = 3(x+y)/(3+xy). This has f(1,1)=1.5, is commutative and associative, but not distributive over ordinary multiplication. It has the usual identity, 0, and the usual additive inverse, f(x,−x)=0. It’s not defined when xy=−3, though one remedy for that would be to assume a universe where the relevant physical quantities must always be less than √3 in magnitude, a property that f will preserve. I thought Deja vu’s point was that it’s conceivable, as a matter of alternative physics and culture, that a society might arise that considers f to be a more “elementary” or “intuitive” operation than our addition, and would construct their axioms and definitions accordingly. But maybe I’m misunderstanding the debate, because that possibility seems uncontroversial to me.

95. Scott Says: Greg, thanks for suggesting an explicit model for f! (An interesting one too.)
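Greg’s f is concrete enough to sanity-check by machine. A minimal sketch in Python (the function name, tolerances, and test values are my own arbitrary choices, not anything proposed in the comments):

```python
import math

def f(x, y):
    # Greg Egan's candidate "addition": f(x, y) = 3(x + y) / (3 + xy),
    # defined whenever xy != -3.
    return 3 * (x + y) / (3 + x * y)

assert math.isclose(f(1, 1), 1.5)                      # the "1 + 1 = 1.5" property
assert math.isclose(f(0.7, 0), 0.7)                    # 0 is the identity
assert math.isclose(f(0.7, -0.7), 0.0, abs_tol=1e-12)  # usual additive inverse
assert math.isclose(f(0.3, 0.8), f(0.8, 0.3))          # commutative
assert math.isclose(f(f(0.2, 0.5), 0.9),
                    f(0.2, f(0.5, 0.9)))               # associative
s = math.sqrt(3)
assert abs(f(0.9 * s, 0.99 * s)) < s                   # (-sqrt(3), sqrt(3)) is closed under f
```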
Your model reminds me of the velocity addition rule in special relativity (with √3 as a “speed of light”) — something that might also be proposed as “more basic than addition” for some alternate civilization. But on further reflection, I wish directly to controvert the possibility that seems uncontroversial to you! That is, I wish to propose not only that addition will have the usual properties for every civilization, but also that addition will be seen as more basic than f or the SR addition rule by every civilization. My argument, as usual, rests on the impossibility of even talking about the other operations in a coherent way without already knowing what addition is. There are many ways to flesh out this argument; here’s one of them: can you give an algorithm for computing 3(x+y)/(3+xy) which does not invoke ordinary addition, or anything trivially similar to it, as a subroutine? (Since the notion of an algorithm is conceptually prior to that of addition, I don’t think there’s any circularity in this question.)

96. Scott Says: Incidentally, in giving an algorithm to compute 3(x+y)/(3+xy) that doesn’t use addition, no fair exponentiating and then multiplying and taking logs! 🙂 A civilization that has those operations already has a structure that’s trivially isomorphic to addition. One other remark: as an unexpected corollary of my “algorithmic argument,” I can imagine a civilization that would discover the integers mod 2 before it discovered the integers, or the bitwise-AND and bitwise-XOR before it discovered addition. It’s just functions like 3(x+y)/(3+xy) that I’m having severe difficulties with.

97. Sumwan Says: Well, if you consider the trigonometric identity tan(a+b) = (tan a + tan b)/(1 − tan a tan b) (I hope it is correct), perhaps there is a civilization that considers that the operation of summing two numbers is just getting the tangent of the sum of their arctangents. One can find flaws in this argument, e.g. that you need to have geometry to define tangents, that to have geometry you need lines, to have lines you need real numbers, to have real numbers you need rational numbers (at least in the constructions I know of), to have rationals you need natural numbers, and to have natural numbers you need the successor operation, which is the same as adding one. But intuitively, it seems possible to me that an outer-space civilization having access to some kind of analog computer in their bodies might view this operation, or one similar to it, as an elementary one.

98. Deja vu Says: I thought Deja vu’s point was that it’s conceivable as a matter of alternative physics and culture that a society might arise that considers f to be a more “elementary” or “intuitive” operation than our addition, and would construct their axioms and definitions accordingly. But maybe I’m misunderstanding the debate, because that possibility seems uncontroversial to me. Exactly. Even more so if we assume that such a culture arose in a hypothetical universe where the laws of physics are different. I can imagine a civilization that would discover the integers mod 2 before it discovered the integers, or the bitwise-AND and bitwise-XOR before it discovered addition. This is exactly my point. They would then describe addition in terms of modulo-2 operations, and the fundamental axioms would be given in these terms.
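That last step, at least, is easy to make concrete: ordinary addition of nonnegative integers really is recoverable from XOR and AND alone, via the standard ripple-carry construction. A short Python sketch (purely illustrative; the naming is mine):

```python
def add(x, y):
    # Ordinary addition of nonnegative integers built only from bitwise
    # XOR, AND, and shift (the standard ripple-carry construction).
    while y:
        carry = (x & y) << 1  # positions where both bits are 1 carry leftward
        x = x ^ y             # bitwise sum ignoring carries (mod-2 addition)
        y = carry
    return x

assert add(13, 29) == 42
assert all(add(a, b) == a + b for a in range(64) for b in range(64))
```

Here XOR is exactly mod-2 addition and AND generates the carries; nothing beyond bitwise operations and shifts is invoked.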
If we had a universe in which things added up like the f function I suggested, sooner or later they would discover our addition (as well as mod operations), but the description would be in the other direction: ‘+’ is just like f but instead of giving 1.5x gives 2x!! In fact, for all we know, the truly central concept in mathematics might be, say, complex numbers, and we are left having to describe them in terms of square roots of negative numbers (square roots being a notion that we discovered through geometric areas) simply because our grasp of electricity is not as built-in as our grasp of spatial geometry. If we were eels, perhaps we would have discovered i first and then described the naturals as an odd subcase of complex numbers in which the imaginary part is 0.

99. John Sidles Says: With respect, maybe you guys aren’t posing hard enough math questions? Suppose the aliens send us a multiple-choice mathematical test (the same one they send to thousands of civilizations). But the aliens grade harshly: if we answer wrong, then they send out the Berserker-type von Neumann machines.

100. Scott Says: Just like our hypothetical aliens, the Sumerians, Babylonians, and ancient Chinese knew of many cases where “addition” did behave in nonstandard ways. For example, they knew that distances at right angles add as √(x²+y²), and that the fraction taken if you take an x fraction and then a y fraction of what remains adds as x+y−xy. Yet without exception (so far as I know), it was ordinary addition, and not these other operations, that they took as fundamental. Why? Of course, the burden of answering this question doesn’t lie with me; it lies with those who think math could as easily be based on the other operations as on addition. Personally, I think the answer has almost nothing to do with physics or the human brain, and almost everything to do with the structure of mathematical ideas. What I’m proposing here is a falsifiable hypothesis: you could falsify it by building a mathematical theory that makes as much internal sense as the usual one, but that takes √(x²+y²) or x+y−xy or 3(x+y)/(3+xy) as a basic operation and x+y as a complicated derived one. But I can’t stress enough that you’d actually have to build this theory: it’s not enough to claim that you can imagine it being built.

101. cody Says: Is anyone else here a strict physicalist, tending to think that the whole of the universe, and all its associated phenomena, are (potentially) explainable with a set of (quite possibly unobtainable) physical laws? Because that has always been my motivation for the hierarchy of the sciences (physics before chemistry before biology before psychology). In my mind, computer science is almost independent of reality somehow, but more concrete than plain mathematics.

102. Kurt Says: Scott said: Kurt, what would it mean for the disjunctive syllogism to fail for macroscopic phenomena (or for that matter, microscopic phenomena)? I don’t know. That’s the whole point. Now, the disjunctive syllogism was just the first logical inference rule that popped into my mind; there may be more interesting examples that could be used. But since it can be thought of as kind of an operational definition of the word “or”, a universe without the disjunctive syllogism would be a universe without the concept of “or-ness”. Could such a thing exist? I don’t know, and again, that’s the whole point. Deja vu is getting beat up a bit in the comments, because his/her example is not nearly radical enough.
How about this: our human language is structured around a distinction between noun and verb, actor and action. Our brains have evolved this capacity (presumably) because the universe is this way: clumps of matter coalesce into discernible entities which can interact with each other in various prescribed ways. What if our universe did not have at its base matter, energy, space and time, but something totally different? I cannot conceive of what that might be like, because I am of this universe. However, I am not willing to therefore conclude that it couldn’t be otherwise. That is what I mean when I suggest that our logic and mathematics are a result of our physics. (By the way, my inclusion of the word “macroscopic” was not meant to imply anything about quantum mechanics, but only to make explicit that I wasn’t trying to say anything about quantum mechanics. I seem to have set off some kind of woo-sensor of yours.)

103. Scott Says: …I wasn’t trying to say anything about quantum mechanics. I seem to have set off some kind of woo-sensor of yours. Yes, sorry — when you work on foundational issues in quantum mechanics, you have to check your woo-sensor like a Chernobyl cleanup crew checks its Geiger counter. While I won’t say that the world you describe can’t exist, there’s nothing whatsoever we can say about it, precisely because it’s outside all of our linguistic categories by definition. So we might as well go back to discussing something else! 🙂

104. Job Says: If we were to assume that our Universe can be simulated with a minimum of two distinct data states and two distinct operations (whatever those are), with an arbitrary amount of space/time, then a given Universe would only be different from ours if it either did not support the required minimum states/operations, or if it supported some state or operation not derivable from the two fundamental states/operations. Right?

105. Greg Egan Says: Scott (#95): yes, I stole the SR velocity addition rule for f, though I didn’t want to highlight that because I didn’t want to drag along all the physical baggage of velocity addition. And of course Sumwan’s example (#97) is very similar, because it’s just the Euclidean version of the Lorentzian velocity addition rule. But my choice of f was a bad one for motivating exotic mathematics, because it’s isomorphic to ordinary addition, i.e. ((−√3, √3), f) is isomorphic as a group to (R,+), with G(z) = √3 tanh(z) as the map from (R,+). In a world where objects of weight x and y tended to meld together into composites with weight f(x,y), I suspect that the notation and language would just reflect the origins of objects, rather than their final weights, and hence it would amount to working in (R,+). Normally, they’d simply talk about “two standard-weight glubs having been melded together”, as a way of describing f(1,1), rather than comparing the melded weight to an unmelded sum and needing to compute 3(x+y)/(3+xy). In other words (translating their notation, whatever it is, into our binary), their “101” would mean a weight of f(4,1), and their “110” would mean a weight of f(4,2), but they would still add these numbers by a process isomorphic to our binary addition, and get “1011”. Most of the time, f(…) would just be an implied “wrapper” around the notation; it wouldn’t need to be explicitly evaluated.
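Greg’s isomorphism claim is likewise easy to confirm numerically: G carries ordinary addition over to f, in the sense that f(G(a), G(b)) = G(a+b). A sketch (again, function names and test values are mine):

```python
import math

def f(x, y):
    return 3 * (x + y) / (3 + x * y)

def G(z):
    # Greg's proposed map from (R, +) onto ((-sqrt(3), sqrt(3)), f)
    return math.sqrt(3) * math.tanh(z)

for a, b in [(0.1, 0.4), (-1.3, 2.2), (5.0, -0.7)]:
    assert math.isclose(f(G(a), G(b)), G(a + b))  # G turns + into f
```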
Of course, when they did want to make the comparison between melded weights and the arcane concept of a sum of unmelded weights, they might indeed compute something which, in their notation, amounted to expressing x+y in terms of f (and I think a second operation would be needed as well). But for this example, at least, I have to come down on Scott’s side: I think this culture would just work with the variables that made things simple, and end up doing ordinary addition.

106. Alex Says: Scott, why do you think the idea that quantum mechanics changes the laws of logic is confused? Isn’t that the whole idea behind the field of quantum logic (as in orthomodular lattices and non-distributive logics, not the “quantum logic” of C-NOT and Hadamard circuits)? Here’s a quote from the Stanford Encyclopedia of Philosophy: “At its core, quantum mechanics can be regarded as a non-classical probability calculus resting upon a non-classical propositional logic.”

107. Scott Says: Alex: Yes, I’m familiar with quantum logic, and I can explain why many of the claims made for it are hogwash. The fundamental problem is that the outcome of a measurement you could’ve performed but didn’t is not an event. So if you plug it into logical expressions and manipulate it as if it were an event, you shouldn’t be the slightest bit surprised if you run into apparent paradoxes. By analogy, suppose a squirrel is known to be hiding under one of two bushes, and suppose you look under one of the bushes and don’t find it there — but suppose the very fact of your looking disturbs the squirrel, causing it to run away from the other bush (where it was previously hiding), so that you don’t find it under the other bush either. Have you therefore disproved the Disjunctive Syllogism, that (P or Q) and (not P) imply Q? The question itself is so ridiculous, it sounds like the setup for a Marx Brothers skit — yet from my point of view, many of the philosophical claims made on behalf of quantum logic are just as willfully dumb. None of this is to attack the mathematical study of the lattice of subspaces of Hilbert space, which some people enjoy (I’ve never been one of them). Just please don’t confuse people by calling the subspaces “propositions”!

108. Scott Says: Greg: Thanks for pointing out what should’ve been obvious to me right away, that your 3(x+y)/(3+xy) system contains a subsystem isomorphic to (R,+)! In retrospect, that was of course inevitable, as soon as you defined a commutative, associative operation f: R² → R with identity and with basic continuity properties. (Exercise for readers: which assumptions are actually necessary?)

109. Alex Says: Thanks for the nice illustration. Why have there been so many papers published on the subject (just wondering)?

110. Deja vu Says: Yet without exception (so far as I know), it was ordinary addition, and not these other operations, that they took as fundamental. Why? Because most people add objects first and foremost. Children at a very early age (2-year-olds) discover on their own the concept of addition. What I’m proposing here is a falsifiable hypothesis: you could falsify it by building a mathematical theory that makes as much internal sense as the usual one, but that takes √(x²+y²) or x+y−xy or 3(x+y)/(3+xy) as a basic operation and x+y as a complicated derived one. That has already been done. Many times over. Set theory, for one, makes x+y a complicated operation.
The otherwise natural and simple concepts of integer and addition become a nested mess of sets of the empty set (4 := {{{{{}}}}}). Here’s another one: non-Euclidean geometry makes as much sense as classical geometry, yet basic concepts such as lines, geodesics, parallelism and angles behave in different ways. Here’s one more: you can define logic in terms of the basic AND and OR operators, as classical mathematicians did, or you can build it around NAND and XOR, as computer scientists did. AND now becomes a “complicated derived operation”. Of course, the burden of answering this question doesn’t lie with me; it lies with those who think math could as easily be based on the other operations as on addition. Contrary to what you say, the burden of proof is on you. You are making an outlandish claim: the uniqueness of mathematical axioms. I, on the contrary, believe that, failing formal proof, it is much easier to imagine alternative consistent models, and in fact I have given you examples of such. In fact, it takes no effort to see that this is the case. Let us imagine for a moment that the Earth had a much smaller radius. Would classical geometry, where parallel lines do not intersect, have been discovered first? Of course not. The geometry of the sphere would have been studied first, with its many interesting and, yes, self-consistent properties.

111. Scott Says: Deja vu, your approach is similar to that of someone who claims that cheese graters can grant eternal life, and then, when the evidence starts stacking up against that hypothesis, indignantly exclaims, “but science doesn’t know everything!” No, but that wasn’t the question. Of course addition can be built out of the successor function, or out of Boolean AND and XOR operations. In an earlier post, I said myself that I could imagine a civilization that discovered Boolean logic prior to addition. But we weren’t talking about that. Nor were we talking about whether non-Euclidean geometry could have been discovered before Euclidean geometry, which is an interesting but separate question. We were talking, specifically, about your hypothetical system of arithmetic where 1+1=1.5. Let me repeat: we were not talking about the uniqueness of axiom systems in general (of course they’re not unique, in general). We were talking about whether your “weird addition” could have been discovered before ordinary addition. To fill you in, we’ve made some actual progress on this question. After you repeatedly refused to give me a concrete model for your nonstandard addition function, Greg Egan was kind enough to do so. However, Greg then realized that while his function has many nice properties, precisely because of those properties it’s isomorphic to ordinary addition (the mapping being the hyperbolic tangent function). (Earlier, I had pointed out that the hypothetical civilization presumably couldn’t compute his function without using ordinary addition as a subroutine.) I’m now interested in the question of exactly what nice properties an addition function can have without being isomorphic to the ordinary one, and/or without requiring ordinary addition to compute. So that’s where the discussion is at. If you’re able to contribute to it, please do so. If you keep raising irrelevant strawmen, I’ll have no choice but to block you for trolling.
112. Joe Shipman Says: You can have a “nonstandard model” (countably infinite) of addition and multiplication which satisfies all the same sentences as the ordinary integers under the usual + and *, and you can make either operation computable, and the possible resulting submodels for addition-only or multiplication-only are well understood. What you can’t do is have BOTH the nonstandard addition and the nonstandard multiplication be computable. So it’s hard to point to an explicit “nonstandard model”.

113. John Sidles Says: Recognizing that nonstandard models of reality are hard to construct, Wheeler and Feynman deserve credit for a pretty good try in their 1949 article Classical Electrodynamics in Terms of Direct Interparticle Action. Even though their nonstandard model of (classical) reality was not as successful as they hoped, the article is still a nice case study in how to go about constructing such models. One lesson learned is that such enterprises require a *huge* effort and a *lot* of calculation. Also, if space aliens tried to confuse us by transmitting an electrodynamics textbook written in this nonstandard idiom, they would probably succeed! 🙂

114. Greg Egan Says: Given that every Lie group has subgroups isomorphic either to (R,+) or U(1) — and I’d describe addition on U(1) as so close to that on R as to be morally equivalent — it’s hard to imagine a culture where the vital concept was a Lie group not considering (R,+) or U(1) even more fundamental. But suppose the most important physics and culture revolved around, say, the Klein 4-group. That’s {0,a,b,c}, with a commutative addition such that x+x=0 for all x, and a+b=c and all permutations thereof. Sure, Klein addition is simple, but it seems to me conceptually independent of conventional integer addition; you could argue that x+x=0 is integer addition modulo 2, but I’d say that’s a degenerate case more primitive than the general concept of addition.

115. John Sidles Says: Greg says: Given that every Lie group has subgroups isomorphic either to (R,+) or U(1) … Greg, I’m no expert, but if memory serves, isn’t the addition of points on elliptic curves also associated with an (Abelian) Lie subgroup? That embedding Lie group being the (heh! heh!) toroidal Lie group associated with the (doubly periodic) Weierstrass elliptic function? So let’s imagine an alien civilization that is wholly focused upon acquiring one scarce resource … WiFi bandwidth. And within that civilization, the one way to acquire that bandwidth is to break each other’s public-key exchange algorithms. The resulting civilization would experience rapid biological evolution, via the well-known “Peacock’s Tail” mechanism: “Hey baby, check out my high-bandwidth big-screen internet connection! It looks so good cuz I’m tapping into every WiFi transceiver in the neighborhood!” 🙂 Over the millennia, their biological brains would become hard-wired (in the Chomsky sense) to conceive of the addition of points on elliptic curves as being the most “natural” and “obvious” element of mathematics. We might find their alien mathematics to be mighty hard to decipher. Indeed, their initial communication to us might look very much like a string of random numbers. The idea being (from their alien point of view) that we humans have to prove our sapience by recognizing and responding to it as the first half of a public-key exchange algorithm. Cuz duh, that’s obviously the first thing that civilized galactic races do … exchange public keys!
There being no other logically possible basis for inter-species trust and cooperation! Indeed, supposing that this race knows a whole lot more about information theory than we humans do, such that they base their civilization’s key exchange protocols not upon elliptic curves, but upon “obviously better” algorithms that we humans just haven’t conceived (as yet), then it might be mighty tough for us humans to recognize their initial communication as anything other than noise. That’s pretty much how I feel about modern algebraic geometers, anyway. Folks like Alexander Grothendieck are obviously from some other planet! 🙂

116. John Sidles Says: Another thread bites the dust … here’s a summary. 🙂

117. KWRegan Says: I’ve actually opined that Grothendieck’s work may hold a key to progress on P vs. NP, though along that line, Ketan Mulmuley’s joint work at least starts on our terra firma. One can flatten Scott’s pyramid into the “Penrose Triangle”—see “On Math, Matter and Mind”, which addresses the circularity issue. My own interpretation is that if the reductionist hypotheses that give rise to Joseph Hertzlinger’s chain (comments 84, 92 here) are taken all together, then one gets a stack whose bottom is not a turtle or elephant, but rather “universe-is-a-computer” in Seth Lloyd’s sense.

118. Jonathan Vos Post Says: “… Over the millennia, their biological brains would become hard-wired (in the Chomsky sense) to conceive of the addition of points on elliptic curves as being the most “natural” and “obvious” element of mathematics….” And this lobe of their brains hosts a highly optimized factorization algorithm for large semiprimes. Vernor Vinge (who retired as a full-time math prof at San Diego State U. to devote himself to being the great full-time science fiction author that he is) told me (I think at last year’s Westercon) that he’s been imagining aliens with transfinite computational brains, who, if you ask them ANY question in integer arithmetic (i.e., Diophantine), find the answer instantly obvious. “Ummm, about those Gödel numbers…?” I said. He just grinned.
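For readers who haven’t met it, the “addition of points on elliptic curves” that Sidles and Vos Post invoke above makes a fitting closing example for this thread: a perfectly rigorous commutative, associative “addition” that looks nothing like +. A minimal sketch over a small prime field (the curve, prime, and sample points are arbitrary illustrative choices; this is the textbook affine-coordinate formula, not production cryptography, and pow(u, -1, m) for modular inverses needs Python 3.8+):

```python
# Point addition on the elliptic curve y^2 = x^3 + 2x + 3 over F_97.
# None plays the identity element (the "point at infinity").
P_MOD, A = 97, 2

def ec_add(P, Q):
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                                     # P + (-P) = identity
    if P == Q:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD)  # tangent-line slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD)         # chord slope
    x3 = (s * s - x1 - x2) % P_MOD
    y3 = (s * (x1 - x3) - y1) % P_MOD
    return (x3, y3)

P, Q = (3, 6), (0, 10)   # two points on the curve
assert ec_add(P, Q) == ec_add(Q, P)                        # commutative
R = ec_add(P, P)
assert ec_add(ec_add(P, Q), R) == ec_add(P, ec_add(Q, R))  # associative
```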
# Can a Peano Set have two or more zeros?

I repeat the Peano Axioms:

1. Zero is a number.
2. If a is a number, the successor of a is a number.
3. Zero is not the successor of a number.
4. Two numbers of which the successors are equal are themselves equal.
5. If a set S of numbers contains zero and also the successor of every number in S, then every number is in S.

Suppose we have two isomorphic "copies" of the natural numbers, $\mathbb{N}':=\{0',1',2',...\}$ and $\mathbb{N}'':=\{0'',1'',2'',...\}$. Then the set $NUMBERS:=\mathbb{N}'\cup \mathbb{N}''$, with "Zero"$:=0'$ and the "natural" successor for each element in either of the two sets, seems to satisfy the axioms.

Yes, P5 is very strange now, because it says that a set which I know to contain at least $0'$ and every successor of the numbers in it automatically contains $0''$, which is not a successor of any number. If this way of reasoning were allowed, we could also use a family of copies of $\mathbb{N}$ indexed by a continuous index, so there would be TWO NON-ISOMORPHIC Peano sets. Because this sounds very strange to me, it's possible that there is a problem in my argument. What do you think?

- Actually, $\mathbb N'\cup\mathbb N''$ does not satisfy $5$. – Thomas Andrews Jun 25 at 22:43
- Basically, stating "$0$ is a number" is a statement about a single constant. That is the nature of the (more formalized) statement. – Thomas Andrews Jun 25 at 22:45
- Related question about alternative models of the axioms you stated: math.stackexchange.com/questions/637693/… – MJD Jun 25 at 23:02

I'm not fully sure I understand your question, but perhaps you will find this argument enlightening: consider the set $$\mathbb{N}' = \{0', s(0'), s(s(0')), \ldots\}$$ which contains $0'$ and all its successors. (This is the same $\mathbb{N}'$ you defined in the question.) Hopefully it is clear that $\mathbb{N}'$ contains the successor of every element in $\mathbb{N}'$: $$\forall n\in\mathbb{N}'\ s(n)\in\mathbb{N}'$$ So axiom 5 applies to $\mathbb{N}'$, which tells us that $\mathbb{N}'$ contains all numbers; in other words, axiom 5 tells us that if $n$ is a number, then $n\in\mathbb{N}'$. That in turn means anything not in $\mathbb{N}'$ is not a number. Now, if you want, you can postulate the existence of another object, like $0''$, and even a set $\mathbb{N}''$ of that object and its successors. But you cannot claim that $0''$ (or any of its successors) is a number (as defined in the Peano axioms) without contradicting the result of the preceding argument, and thus contradicting axiom 5. You can construct sets which contain numbers and other things, like $\mathbb{N}'\cup\mathbb{N}''$. This set contains every number, because every number is in $\mathbb{N}'$, and by the definition of the union operation, anything in $\mathbb{N}'$ is in the union of $\mathbb{N}'$ and any other set. It also contains a bunch of other things which are not numbers, namely the objects in the set $\mathbb{N}''$. Nothing says that every object in the set $S$ referenced in axiom 5 must be a number.

- Corrections appreciated if I'm misusing the notation. – David Z Jun 26 at 1:54

Note that from the fifth axiom it follows that if $x\neq 0$ then $x=S(y)$ for some $y$. Therefore if $x,y$ are both $0$'s (and are distinct), then at least one of them is a successor, which means that one of them is not a $0$. I think that the source of the confusion here is the fact that $0$ is seen as just any number. The truth is that $0$ is a constant in the formal language of arithmetic.
And in a given structure there can only be one interpretation of a constant. Not two or more.

- What you say is not explicit in the axiom. Couldn't it be a (very common) interpretation? – Benzio Jun 25 at 22:38
- @Benzio Given a zero $0$, let $S$ be the set $\{x\mid x=0$ or $x$ is a successor$\}$. By P5, every number is in $S$; that is, every number not equal to $0$ is a successor, and in particular you cannot have more than one zero. Do you understand why P5 implies that $S$ contains all numbers? Do you agree with this? Otherwise you are just reading the axioms in a manner different from that which is intended. For instance, you may be reading P5 as saying that $S$ must contain "all zeros", not just the specific one I called $0$. This is not the intended reading of the axiom. – Andres Caicedo Jun 25 at 22:43
- @Benzio You seem to be very confused. Stating the axioms in the language of formal logic, and using formal logic rather than the informal way we argue in English, may be needed here. Several books contain details of this formalized presentation, and it may be to your advantage to look for one and read it carefully. If you are not familiar with formal logic, start with a book on the subject, before specializing to one covering (formal) Peano Arithmetic. I suggest Kleene's "Introduction to Metamathematics", as more modern treatments may skip some of the formal details needed here. – Andres Caicedo Jun 25 at 22:51
- @Benzio: The axioms in some sense try to "define" the natural numbers by stating that they satisfy all these properties. Therefore, by the last axiom, whenever a collection of objects satisfies the property that "the object defined as 0 is inside and the collection is closed under S", the collection is all that there is. As seen in Andres' comment, the collection of objects which are either 0 or the successor of some number satisfies this property. Therefore, the collection of objects that are "either 0 or the successor of some number" is all the natural numbers. – Burak Jun 25 at 22:59
- You can prove that every number is either 0 or is a successor. So in particular any mythical $0'$ must either be $0$ or a successor. So there cannot be two distinct numbers $0$ and $0'$ neither of which is a successor. So it is axiom 5 that prevents the situation of this mythical $0'$. – Carl Mummert Jun 25 at 23:01

In logic, $0$ is a constant, not a property. That is, we can say "$x=0$" rather than "$x$ is a zero." Even in second-order logic, the statement of induction is $0\in S$, not "for all zeros $z$, $z\in S$." You can, of course, interpret lots of language in lots of different ways. But mathematics uses "$0$ is a natural number" in a very specific way, and if you want to interpret that phrase differently, you are free to do so, but you are likely to confuse mathematicians and fail to communicate with them unless you are very explicit.

- You can easily prove that the zero of any Peano set (defined in the obvious way) is unique. See my answer. – Dan Christensen Jun 26 at 4:52
- Well, not if you take the obtuse interpretation of Peano that the OP was taking: that the axiom of induction means if you've proved it for all zeros... @DanChristensen – Thomas Andrews Jun 26 at 5:10

Theorem. Suppose we have zeroes $0$ and $0'$ in a Peano set such that:

1. $0$ is a number.
2. $0'$ is a number.
3. If $x$ is a number, the successor of $x$ is a number.
4. $0$ is not the successor of any number.
5. $0'$ is not the successor of any number.
6. Two numbers of which the successors are equal are themselves equal.
7. If a set $S$ of numbers contains $0$ and also the successor of every number in $S$, then every number is in $S$.
8. If a set $S$ of numbers contains $0'$ and also the successor of every number in $S$, then every number is in $S$.

Then $0=0'$, so there is a unique zero in every Peano set (defined in the obvious way).

Proof. Using (7), we can easily prove by induction that every non-$0$ number has a predecessor. Suppose $0\neq 0'$. Then $0'$ must have a predecessor. But this contradicts (5). Therefore, we must have $0=0'$. $\blacksquare$
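Both halves of this page are mechanical enough to check. First, the structure $NUMBERS=\mathbb{N}'\cup\mathbb{N}''$ from the question really does violate axiom 5, exactly as Thomas Andrews' first comment says: the subset $\mathbb{N}'$ contains zero and the successor of each of its members, yet is not all of $NUMBERS$. A finite-depth illustration in Python (the tagged-pair encoding is my own):

```python
# Model NUMBERS = N' ∪ N'' as tagged pairs; successor acts on the index.
def succ(x):
    tag, n = x
    return (tag, n + 1)

def in_N1(x):                # membership in the candidate set S = N'
    return x[0] == "'"

zero = ("'", 0)
assert in_N1(zero)           # S contains zero...
x = zero
for _ in range(1000):        # ...and the successor of each of its members,
    x = succ(x)
    assert in_N1(x)
assert not in_N1(("''", 0))  # yet S omits 0'', so axiom 5 fails here
```

And Dan Christensen's uniqueness argument formalizes directly. A sketch in Lean 4, using the built-in Nat in place of an abstract Peano set (so this illustrates the argument rather than formalizing the axioms above):

```lean
-- Any "zero" (a number that is nobody's successor) equals 0, because
-- case analysis (i.e. induction) shows every Nat is 0 or a successor.
theorem zero_unique (z : Nat) (hz : ∀ n : Nat, z ≠ Nat.succ n) : z = 0 := by
  cases z with
  | zero => rfl
  | succ n => exact absurd rfl (hz n)
```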