Let $G$ be a finitely generated (in my case also amenable) group and $f:G\to[0,1]$. Suppose that there is a finitely additive probability measure $\mu$ on $G\times G$ and a real number $L$ such that $\int f(xsy)\,d\mu(x,y)\geq L$ for all $s\in G$. Question: does there exist a finitely additive probability measure $\nu$ on $G$ such that $\int f(xs)\,d\nu(x)\geq L$ for all $s\in G$? If the group is abelian, the answer is positive: approximate $\mu$ in the weak* topology with countably additive measures $\mu_\alpha$, define $\nu_\alpha(x)=\sum_y\mu_\alpha(xy^{-1},y)$, and take $\nu$ to be a weak* limit point of the $\nu_\alpha$. It works basically because, by commutativity, I can put the $s$ after the $y$. One more datum that may help is that I know something about $\mu$: precisely, it is an iterated integral with respect to two finitely additive measures $\sigma$ and $\lambda$ on $G$, where $\lambda$ is left-invariant (this explains why, in my hypotheses, $\mu$ is only centrally invariant; in fact that integral equals $L$ for all $s$). Adding this datum, one easily sees that my question has a positive answer for all amenable groups in which every left-invariant mean is also right-invariant, since the argument above still works. By an old theorem of Paterson, this class is exactly the class of groups in which every conjugacy class is finite. Therefore my question has a positive answer for all these groups, and in particular for all finite groups. This is also why it is not that easy to find a counterexample. Thanks in advance for any help, Valerio
Little explorations with HP calculators (no Prime) 03-23-2017, 01:23 PM (This post was last modified: 03-23-2017 01:23 PM by pier4r.) Post: #21 RE: Little explorations with the HP calculators (03-23-2017 12:19 PM)Joe Horn Wrote: I see no bug here. Variables which are assigned values should never be used where formal variables are required. Managing them is up to the user. Ok (otherwise a variable could not be fed into a function from a program), but then how come that when I set the flags that let the function return reals, the variable is purged? The behavior could be more consistent. Nothing bad, but quirks like this, when not easy to spot, may lead to other solutions (see the advice of John Keit that I followed). Wikis are great, Contribute :) 03-24-2017, 01:45 PM (This post was last modified: 03-24-2017 03:23 PM by pier4r.) Post: #22 RE: Little explorations with the HP calculators Quote: Brilliant.org Is there a way to solve this without using a wrapping program? (hp 50g) I'm trying some functions (e.g. MSLV) with no luck, so I post this while I dig more into the manual and into search engines focused on "site:hpmuseum.org" or "comp.sys.hp48" (the official hp forums are too chaotic, so I won't search there, although they store great contributions as well). edit: I don't mind inserting new starting values manually; I mean that there is a function to find at least one solution, and then the user can find the others by changing the starting values. Edit: It seems that the numeric solver for one equation can do it; one has to set values for the variables and then press solve, even if one variable already has a value (it was not so obvious from the manual; I thought that variables with given values could not change). The point is that one variable will change while the others stay constant. In this way one can find all the solutions. Wikis are great, Contribute :) 03-24-2017, 03:23 PM (This post was last modified: 03-24-2017 03:24 PM by pier4r.) 
Post: #23 RE: Little explorations with the HP calculators Quote: Same site as before. This can be solved with multiple applications of SOLVEVX (hp 50g) on parts of the equation with proper observations. So it is not that difficult; I just found it nice and wanted to share. Wikis are great, Contribute :) 03-24-2017, 10:04 PM (This post was last modified: 03-24-2017 10:05 PM by pier4r.) Post: #24 RE: Little explorations with the HP calculators How do you solve this using a calculator to compute the number? I solved it by translating it into a formula after a bit of tinkering, and I got a number that I may write as "x + 0.343 + O(0.343)" if I'm not mistaken. I used the numeric solver on the hp 50g as a helper. I also needed to prove to myself that the center of the circle is at a particular location before building the final equation. Wikis are great, Contribute :) 03-24-2017, 10:54 PM (This post was last modified: 03-24-2017 11:07 PM by Dieter.) Post: #25 RE: Little explorations with the HP calculators I think a calculator is the very last thing required here. (03-24-2017 10:04 PM)pier4r Wrote: I solved it by translating it into a formula after a bit of tinkering, and I got a number that I may write as "x + 0.343 + O(0.343)" if I'm not mistaken. I used the numeric solver on the hp 50g as a helper. Numeric solver? The radius can be determined with a simple closed form solution. Take a look at the diagonal through B, O and D, which is 6√2 units long. So 6√2 = 2√2 + r + r√2, which directly leads to r = 4 / (1 + 1/√2) = 2,343... Dieter 03-25-2017, 08:17 AM (This post was last modified: 03-25-2017 08:19 AM by pier4r.) Post: #26 RE: Little explorations with the HP calculators As always, first comes the mental work to create the formula or the model, but to compute the final number one needs a calculator most of the time. Quote: Numeric solver? The radius can be determined with a simple closed form solution. Take a look at the diagonal through B, O and D, which is 6√2 units long. 
I used the numeric solver because instead of grouping r on the left, I just used the formula from one step before (the one without grouping) to find the value. Anyway, one cannot just use the diagonal because the picture happens to be well drawn; one has to prove that O is on the diagonal (nothing difficult, but required), otherwise it is a step taken for granted. Wikis are great, Contribute :) 03-27-2017, 12:14 PM Post: #27 RE: Little explorations with the HP calculators Quote: Brilliant.org This one defeated me for the moment. My rusty memory of mathematical relations did not help me. In the end, having the hp50g, I tried to use some visual observations to write down the cartesian coordinates of the points defining the inner square, or to observe the lengths of the sides: if the side of the inner square is 2r, then the sides of the triangle are "s+2r" and "s", from which one can say that "s^2+(s+2r)^2=1". This plus the knowledge that 4 times the triangle area plus the inner square area add up to 1. Still, those were not enough for a solution (with or without the hp50g); I end up with too ugly/tedious formulae. Wikis are great, Contribute :) 03-27-2017, 12:54 PM (This post was last modified: 03-27-2017 03:42 PM by Thomas Okken.) Post: #28 RE: Little explorations with the HP calculators Consider a half-unit circle jammed into the corner of the first quadrant (so its center is at (0.5, 0.5)). Now consider a radius of that circle sweeping the angles from 0 to pi/2. The tangent on the circle where it meets that radius will intersect the X axis at 1 + tan(phi), and the Y axis at 1 + cot(phi) or 1 + 1 / tan(phi). The triangle formed by the X axis, the Y axis, and this tangent is like the four triangles in the puzzle, and the challenge is to find phi such that X = Y + 1 (or X = Y - 1). The answer to the puzzle is then obtained by scaling everything down so that the hypotenuse of the triangle OXY becomes 1, and then the diameter of the circle is 1 / sqrt(X^2 + Y^2). 
EDIT: No, I screwed up. The intersections at the axes are not at 1 + tan(phi), etc.; that relationship is not quite that simple. Back to the drawing board! Second attempt: Consider a unit circle jammed into the corner of the first quadrant (so its center is at (1, 1)). Now consider a radius of that circle sweeping the angles from 0 to pi/2. The point P on the circle where that radius intersects it is at (1 + cos(phi), 1 + sin(phi)). The tangent on the circle at that point will have a slope of -1 / tan(phi), and so it will intersect the X axis at Px + Py * tan(phi), or (1 + sin(phi)) * tan(phi) + 1 + cos(phi), and it will intersect the Y axis at (1 + cos(phi)) / tan(phi) + 1 + sin(phi). The triangle formed by the X axis, the Y axis, and this tangent is like the four triangles in the puzzle, and the challenge is to find phi such that X = Y + 2 (or X = Y - 2). The answer to the puzzle is then obtained by scaling everything down so that the hypotenuse of the triangle OXY becomes 1, and then the radius of the circle is 1 / sqrt(X^2 + Y^2). Because of symmetry, sweeping the angles from 0 to pi/2 is actually not necessary; you can restrict yourself to 0 through pi/4 and the case that X = Y - 2. 03-27-2017, 02:27 PM (This post was last modified: 03-27-2017 02:28 PM by pier4r.) Post: #29 RE: Little explorations with the HP calculators (03-27-2017 12:54 PM)Thomas Okken Wrote: Consider a unit circle jammed into the corner of the first quadrant (so its center is at (1, 1)). Now consider a radius of that circle sweeping the angles from 0 to pi/2. The point P on the circle where that radius intersects it is at (1 + cos(phi), 1 + sin(phi)). I'm a bit blocked. http://i.imgur.com/IW4QIeU.jpg Would it be possible to add a quick sketch? Wikis are great, Contribute :) 03-27-2017, 03:44 PM (This post was last modified: 03-27-2017 03:44 PM by Thomas Okken.) 
Post: #30 RE: Little explorations with the HP calculators (03-27-2017 02:27 PM)pier4r Wrote:(03-27-2017 12:54 PM)Thomas Okken Wrote: Consider a unit circle jammed into the corner of the first quadrant (so its center is at (1, 1)). Now consider a radius of that circle sweeping the angles from 0 to pi/2. The point P on the circle where that radius intersects it is at (1 + cos(phi), 1 + sin(phi)). OK; I attached a sketch to my previous post. 03-27-2017, 03:55 PM Post: #31 RE: Little explorations with the HP calculators Thanks, an interesting approach. On brilliant.org there were dubious solutions (that did not prove their assumptions) and just one really cool one, making use of a known relationship for circles inscribed in right triangles. Wikis are great, Contribute :) 03-27-2017, 05:29 PM (This post was last modified: 03-27-2017 05:31 PM by pier4r.) Post: #32 RE: Little explorations with the HP calculators Quote: Brilliant.org For this I wrote a quick program, remembering the property that the running mean stabilizes after enough iterations (I should find the exact statement, though; the law of large numbers, presumably). Code: But I'm not sure about the correctness of the approach. I'm pretty sure there is a way to compute this with an integral and then a closed form too. Anyway this is the result at the moment: Wikis are great, Contribute :) 03-27-2017, 06:01 PM (This post was last modified: 03-27-2017 07:33 PM by Dieter.) Post: #33 RE: Little explorations with the HP calculators (03-27-2017 12:14 PM)pier4r Wrote: ...if the side of the inner square is 2r then the sides of the triangle are "s+2r" and "s", from which one can say that "s^2+(s+2r)^2=1". This plus the knowledge that 4 times the triangle area plus the inner square area add up to 1. Still, those were not enough for a solution Right, in the end you realize that both formulas are the same. ;-) The second constraint for s and r could be the formula for a circle inscribed in a triangle. This leads to two equations in the two variables s and r. 
Or with d = 2r you'll end up with something like this: (d² + d)/2 + (sqrt((d² + d)/2) + d)² = 1 I did not try an analytic solution, but using a numeric solver returns d = 2r = (√3–1)/2 = 0,36603 and s = 1/2 = 0,5. Edit: finally this seems to be the correct solution. ;-) Dieter 03-27-2017, 06:14 PM (This post was last modified: 03-27-2017 06:15 PM by pier4r.) Post: #34 RE: Little explorations with the HP calculators (03-27-2017 06:01 PM)Dieter Wrote: Right, in the end you realize that both formulas are the same. ;-) How? One should come from Pythagoras' theorem, a^2+b^2=c^2 (where I use the two sides of the triangle to get the hypotenuse); the other is the composition of the area of the square, made up of 4 triangles and one inner square. To me they sound like different models for different measurements. Could you explain to me why those are the same? Anyway, for me even the numerical solution is great (actually the one that I sought with the hp50g), but I cannot tell you whether it is right or not because I did not solve it myself; other reviewers are needed. Edit: anyway, I remember that a discussed solution mentioned the relationship for a circle inscribed in a triangle, so I guess your direction is right. Wikis are great, Contribute :) 03-27-2017, 06:30 PM Post: #35 RE: Little explorations with the HP calculators (03-27-2017 06:14 PM)pier4r Wrote:(03-27-2017 06:01 PM)Dieter Wrote: Right, in the end you realize that both formulas are the same. ;-) Just do the math. On the one hand, \( s^2 + (s + 2r)^2 = 1\) from Pythagoras' theorem, as you observed. And your other observation is that \[ 4 \cdot \underbrace{\frac{1}{2} \cdot s \cdot (s+2r)}_{\text{area of }\Delta} + \underbrace{(2r)^2}_{\text{area of } \Box} = 1 \] Simplify the left hand side: \[ \begin{align} 4 \cdot \frac{1}{2} \cdot s \cdot (s+2r) + (2r)^2 & = 2s^2+4rs + 4r^2 \\ & = s^2 + s^2 + 4rs + 4r^2 \\ & = s^2 + (s+2r)^2 \end{align} \] Hence, both formulas are the same. 
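Han's expansion can also be sanity-checked numerically; here is a quick Python sketch (the function names are mine, purely for illustration):

```python
import random

def pythagoras_form(s, r):
    # s^2 + (s + 2r)^2, from the Pythagorean theorem
    return s**2 + (s + 2*r)**2

def area_form(s, r):
    # 4 triangles of area (1/2)*s*(s+2r) plus the inner square (2r)^2
    return 4 * 0.5 * s * (s + 2*r) + (2*r)**2

# The two expressions agree for every s and r, confirming the expansion above
for _ in range(1000):
    s, r = random.uniform(0, 10), random.uniform(0, 10)
    assert abs(pythagoras_form(s, r) - area_form(s, r)) < 1e-9
```

With Dieter's values s = 1/2 and 2r = (√3−1)/2, both expressions evaluate to 1, as the puzzle requires.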
Graph 3D | QPI | SolveSys 03-27-2017, 06:57 PM (This post was last modified: 03-27-2017 06:57 PM by pier4r.) Post: #36 RE: Little explorations with the HP calculators Thanks. I had not worked through the formula; I was more stuck (and somewhat still am) on the fact that they should represent different objects/results. But then again, the square built on the side is the square itself. So now I see it. I wanted to see it in terms of "represented objects", not only formulae. Wikis are great, Contribute :) 03-27-2017, 07:07 PM Post: #37 RE: Little explorations with the HP calculators (03-27-2017 06:57 PM)pier4r Wrote: Are you familiar with the geometric proofs of Pythagoras' theorem? What I wrote above is just a variation of one of the geometric proofs using areas of polygons (triangles, rectangles, squares). A few geometric proofs: http://www.cut-the-knot.org/pythagoras/ Graph 3D | QPI | SolveSys 03-27-2017, 07:22 PM Post: #38 RE: Little explorations with the HP calculators (03-27-2017 07:07 PM)Han Wrote: Are you familiar with the geometric proofs of Pythagoras' theorem? What I wrote above is just a variation of one of the geometric proofs using areas of polygons (triangles, rectangles, squares). Maybe my choice of words was not the best. I wanted to convey that if I model two different events (or objects, in this case) and I get the same formula, for me it is not immediate to say "oh, ok, then they are the same object"; I have to, how can I say, "see it". So in the case of the problem, I saw it when I realized that the 1^2 is not merely equal to the area; it is exactly the area, because it models the area of the square itself. (I was visually building 1^2 outside the square, like a duplicate.) Anyway, the link you shared is great. I looked briefly and I can say: - long ago I saw proof #1 - in school I saw proof #9 - oh look, proof #34 would have helped, as someone mentioned - how many! Great! 
Wikis are great, Contribute :) 03-27-2017, 07:24 PM (This post was last modified: 03-27-2017 07:28 PM by Joe Horn.) Post: #39 RE: Little explorations with the HP calculators (03-27-2017 05:29 PM)pier4r Wrote:Quote:Brilliant.org After running 100 million iterations several times in UBASIC, I'm surprised that each run SEEMS to be converging, but each run ends with a quite different result: 10 randomize 20 T=0:C=0 30 repeat 40 T+=sqr((rnd-rnd)^2+(rnd-rnd)^2):C+=1 50 until C=99999994 60 repeat 70 T+=sqr((rnd-rnd)^2+(rnd-rnd)^2):C+=1 80 print C;T/C 90 until C=99999999 run 99999995 0.5214158234249566646569152059 99999996 0.5214158242970253667174680247 99999997 0.5214158240318481570747604814 99999998 0.5214158247892039896051570164 99999999 0.5214158253601312510245695897 OK run 99999995 0.5213642776110289008920452545 99999996 0.5213642752079475043717958065 99999997 0.52136427197858201293861314 99999998 0.5213642744828552963477424429 99999999 0.5213642759132547792130043215 OK run 99999995 0.5213770659191193073147616413 99999996 0.5213770610000764506616015052 99999997 0.5213770617149058467216528505 99999998 0.5213770589414874167694264508 99999999 0.5213770570854305903944611055 OK So it SEEMS to be zeroing in on something close to -LOG(LOG(2)), but I give up. <0|ɸ|0> -Joe- 03-27-2017, 07:35 PM (This post was last modified: 03-27-2017 07:37 PM by pier4r.) Post: #40 RE: Little explorations with the HP calculators (03-27-2017 07:24 PM)Joe Horn Wrote: 99999999 0.5213770570854305903944611055 Interestingly, your number is quite different from mine. Granted, you have a couple more iterations, but my average was pretty stable in the first 3 digits. I wonder why the discrepancy. Moreover, if you round to the 4th decimal place you always get 0.5214, with the usual rule of rounding up when the first excluded digit is 5 or higher. Wikis are great, Contribute :)
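Joe's UBASIC simulation is easy to reproduce. The quantity being estimated is the mean distance between two uniform random points in the unit square, a classic constant with the known closed form $(2+\sqrt{2}+5\,\operatorname{asinh}1)/15 \approx 0.5214054$, which is close to but not exactly $-\mathrm{LOG}(\mathrm{LOG}(2)) \approx 0.52139$. A Python sketch (function name is mine):

```python
import math
import random

def mean_distance(samples, seed=1):
    """Monte Carlo estimate of the mean distance between two
    uniform random points in the unit square."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        dx = rng.random() - rng.random()
        dy = rng.random() - rng.random()
        total += math.sqrt(dx * dx + dy * dy)
    return total / samples

# Known closed form: (2 + sqrt(2) + 5*asinh(1)) / 15 ≈ 0.5214054
exact = (2 + math.sqrt(2) + 5 * math.asinh(1)) / 15
print(mean_distance(200_000), exact)
```

With 10^8 samples, as in Joe's runs, the standard error is about 2.5e-5, which explains why his runs agree only in the first three or four digits.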
We have already seen that if $t$ is time and an object's location is given by ${\bf r}(t)$, then the derivative ${\bf r}'(t)$ is the velocity vector ${\bf v}(t)$. Just as ${\bf v}(t)$ is a vector describing how ${\bf r}(t)$ changes, so is ${\bf v}'(t)$ a vector describing how ${\bf v}(t)$ changes; namely, ${\bf a}(t)={\bf v}'(t)={\bf r}''(t)$ is the acceleration vector. Example 15.4.1 Suppose ${\bf r}(t)=\langle \cos t,\sin t,1\rangle$. Then ${\bf v}(t)=\langle -\sin t,\cos t,0\rangle$ and ${\bf a}(t)=\langle -\cos t,-\sin t,0\rangle$. This describes the motion of an object traveling on a circle of radius 1, with constant $z$ coordinate 1. The velocity vector is of course tangent to the curve; note that ${\bf a}\cdot{\bf v}=0$, so ${\bf v}$ and ${\bf a}$ are perpendicular. In fact, it is not hard to see that ${\bf a}$ points from the location of the object to the center of the circular path at $(0,0,1)$. Recall that the unit tangent vector is given by ${\bf T}(t)={\bf v}(t)/|{\bf v}(t)|$, so ${\bf v}=|{\bf v}|{\bf T}$. If we take the derivative of both sides of this equation we get $$\eqalignno{{\bf a}&=|{\bf v}|'{\bf T}+|{\bf v}|{\bf T}'.&(15.4.1)\cr}$$ Also recall the definition of the curvature, $\kappa=|{\bf T}'|/|{\bf v}|$, or $|{\bf T}'|=\kappa|{\bf v}|$. Finally, recall that we defined the unit normal vector as ${\bf N}={\bf T}'/|{\bf T}'|$, so ${\bf T}'=|{\bf T}'|{\bf N}=\kappa|{\bf v}|{\bf N}$. Substituting into equation 15.4.1 we get $$\eqalignno{{\bf a}&=|{\bf v}|'{\bf T}+\kappa|{\bf v}|^2{\bf N}.&(15.4.2)\cr}$$ The quantity $|{\bf v}(t)|$ is the speed of the object, often written as $v(t)$; $|{\bf v}(t)|'$ is the rate at which the speed is changing, or the scalar acceleration of the object, $a(t)$. Rewriting equation 15.4.2 with these gives us $${\bf a}=a{\bf T}+\kappa v^2{\bf N}=a_{T}{\bf T}+a_{N}{\bf N};$$ $a_T$ is the tangential component of acceleration and $a_N$ is the normal component of acceleration. 
We have already seen that $a_T$ measures how the speed is changing; if you are riding in a vehicle with large $a_T$ you will feel a force pulling you into your seat. The other component, $a_N$, measures how sharply your direction is changing with respect to time. So it naturally is related to how sharply the path is curved, measured by $\kappa$, and also to how fast you are going. Because $a_N$ includes $v^2$, note that the effect of speed is magnified; doubling your speed around a curve quadruples the value of $a_N$. You feel the effect of this as a force pushing you toward the outside of the curve, the "centrifugal force.'' In practice, if we want $a_N$ we would use the formula for $\kappa$: $$a_N=\kappa |{\bf v}|^2= {|{\bf r}'\times{\bf r}''|\over |{\bf r}'|^3}|{\bf r}'|^2={|{\bf r}'\times{\bf r}''|\over|{\bf r}'|}.$$ To compute $a_T$ we can project ${\bf a}$ onto ${\bf v}$: $$a_T={{\bf v}\cdot{\bf a}\over|{\bf v}|}={{\bf r}'\cdot{\bf r}''\over |{\bf r}'|}.$$ Example 15.4.2 Suppose ${\bf r}=\langle t,t^2,t^3\rangle$. Compute ${\bf v}$, ${\bf a}$, $a_T$, and $a_N$. Taking derivatives we get ${\bf v}=\langle 1,2t,3t^2\rangle$ and ${\bf a}=\langle 0,2,6t\rangle$. Then $$a_T={4t+18t^3\over \sqrt{1+4t^2+9t^4}} \quad\hbox{and}\quad a_N={\sqrt{4+36t^2+36t^4}\over\sqrt{1+4t^2+9t^4}}.$$ Exercises 15.4 Ex 15.4.1 Let ${\bf r}=\langle \cos t,\sin t,t\rangle$. Compute ${\bf v}$, ${\bf a}$, $a_T$, and $a_N$.(answer) Ex 15.4.2 Let ${\bf r}=\langle \cos t,\sin t,t^2\rangle$. Compute ${\bf v}$, ${\bf a}$, $a_T$, and $a_N$.(answer) Ex 15.4.3 Let ${\bf r}=\langle \cos t,\sin t,e^t\rangle$. Compute ${\bf v}$, ${\bf a}$, $a_T$, and $a_N$.(answer) Ex 15.4.4 Let ${\bf r}=\langle e^t,\sin t,e^t\rangle$. Compute ${\bf v}$, ${\bf a}$, $a_T$, and $a_N$.(answer) Ex 15.4.5 Suppose an object moves so that its acceleration is given by ${\bf a}=\langle -3\cos t,-2\sin t,0\rangle$. At time $t=0$ the object is at $(3,0,0)$ and its velocity vector is $\langle0,2,0\rangle$. 
Find ${\bf v}(t)$ and ${\bf r}(t)$ for the object.(answer) Ex 15.4.6 Suppose an object moves so that its acceleration is given by ${\bf a}=\langle -3\cos t,-2\sin t,0\rangle$. At time $t=0$ the object is at $(3,0,0)$ and its velocity vector is $\langle0,2.1,0\rangle$. Find ${\bf v}(t)$ and ${\bf r}(t)$ for the object.(answer) Ex 15.4.7 Suppose an object moves so that its acceleration is given by ${\bf a}=\langle -3\cos t,-2\sin t,0\rangle$. At time $t=0$ the object is at $(3,0,0)$ and its velocity vector is $\langle0,2,1\rangle$. Find ${\bf v}(t)$ and ${\bf r}(t)$ for the object.(answer) Ex 15.4.8 Suppose an object moves so that its acceleration is given by ${\bf a}=\langle -3\cos t,-2\sin t,0\rangle$. At time $t=0$ the object is at $(3,0,0)$ and its velocity vector is $\langle0,2.1,1\rangle$. Find ${\bf v}(t)$ and ${\bf r}(t)$ for the object.(answer) Ex 15.4.9 Describe a situation in which the normal component of acceleration is 0 and the tangential component of acceleration is non-zero. Is it possible for the tangential component of acceleration to be 0 while the normal component of acceleration is non-zero? Explain. Finally, is it possible for an object to move (not be stationary) so that both the tangential and normal components of acceleration are 0? Explain.
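The formulas for $a_T$ and $a_N$ in this section can be cross-checked numerically. A Python sketch using finite differences on Example 15.4.2 (the helper name is mine, and numpy is assumed available):

```python
import numpy as np

def accel_components(r, t, h=1e-5):
    """Numerically compute a_T and a_N for a path r(t), using
    central finite differences for r'(t) and r''(t)."""
    v = (r(t + h) - r(t - h)) / (2 * h)           # approximates r'(t)
    a = (r(t + h) - 2 * r(t) + r(t - h)) / h**2   # approximates r''(t)
    speed = np.linalg.norm(v)
    a_T = np.dot(v, a) / speed                    # tangential component
    a_N = np.linalg.norm(np.cross(v, a)) / speed  # normal component
    return a_T, a_N

# Example 15.4.2: r(t) = <t, t^2, t^3>
r = lambda t: np.array([t, t**2, t**3])
t = 1.0
a_T, a_N = accel_components(r, t)

# Closed forms derived in the text
a_T_exact = (4*t + 18*t**3) / np.sqrt(1 + 4*t**2 + 9*t**4)
a_N_exact = np.sqrt(4 + 36*t**2 + 36*t**4) / np.sqrt(1 + 4*t**2 + 9*t**4)
```

The numeric values agree with the closed forms to several digits, which is a convenient way to check answers to the exercises as well.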
Let $T(n)$ denote the total number of updates to the variable when $n$ people have entered the room. For example, with $n=3$ there will be $3!=6$ possible orders where the heights are $1, 2, 3$. Notice that once the height-3 person enters, no further updates will occur, so let's group the six possible arrangements by when the height-3 person enters:$$\begin{array}{ccc}\mathbf{3}21 & \mathbf{3}12 \\1\mathbf{3}2 & 2\mathbf{3}1 \\12\mathbf{3} & 21\mathbf{3} \\\end{array}$$For each of these arrangements, let's count the number of updates we have to make before we see the tallest person. In the two arrangements where the height-3 person arrives first, there are 0 earlier updates. In the second row above, there will be 1 update in each arrangement before the tallest person arrives, and in the third row there will be either 1 or 2 updates before the tallest person arrives: arrangement $12\mathbf{3}$ requires 2 updates (one for the height-1 person and another for the height-2 person) and arrangement $21\mathbf{3}$ requires 1 update, for the height-2 person. In this $n=3$ case, then, we'll have $0+2+3=5$ total prior updates (0 in row 1, 2 in row 2, and 3 in row 3). To this add the $3!=6$ updates for the tallest person (one in each arrangement), giving us a total $T(3)=5+6=11$. Now let's look at the general case for $n$ people. As we did above, let's group the $n!$ possible arrangements by when the height-$n$ person arrived. Denote this tallest person by $X$ and the rest of the people by dots. As we did above, we'll defer counting the last update until later. 
We'll have these possibilities, each of which can happen in $(n-1)!$ ways:$$\begin{array}{cc}\text{arrival of }X & \text{arrangement}\\1 & X\circ\dotsc\circ \\2 & \circ X\circ\dotsc\circ \\3 & \circ\circ X\circ\dotsc\circ\\\vdots & \vdots \\n & \circ\dotsc\circ X\end{array}$$ For each row above we'll count the total updates before the tallest person arrives. Obviously if the tallest person arrives first, we'll have no prior updates, so the first row above will give a total count of 0. Each of the following rows will look like this:$$\underbrace{\circ\dotsc\circ}_k\ X\ \underbrace{\circ\dotsc\circ}_{n-1-k}$$ The first set of $k$ dots can be filled with $k$ of the heights $\{1, 2, \dotsc, n-1\}$; there are $\binom{n-1}{k}$ ways to choose such a set, each of which contributes $T(k)$ updates (totaled over the $k!$ orders of the chosen set). The last set of $n-1-k$ dots can be filled with the remaining heights in $(n-1-k)!$ ways. Thus each row in the table will contribute$$\binom{n-1}{k}T(k)(n-1-k)!$$to the total of prior updates. Adding these and including the $n!$ updates for the tallest person we have the recurrence relation$$\begin{align}T(n) &= n! 
+ \sum_{k=1}^{n-1}\binom{n-1}{k}T(k)(n-1-k)!\\ &= n!+ \sum_{k=1}^{n-1}\frac{(n-1)!}{k!(n-1-k)!}T(k)(n-1-k)!\\ &= n!+ \sum_{k=1}^{n-1}\frac{(n-1)!}{k!}T(k)\\ &= n!+ (n-1)!\sum_{k=1}^{n-1}\frac{T(k)}{k!}\\\end{align}$$Divide both sides by $n!$ to get$$\frac{T(n)}{n!} = 1 + \frac{1}{n}\sum_{k=1}^{n-1}\frac{T(k)}{k!}$$and remember that the average number of updates is $U(n)=T(n)/n!$, which gives us$$U(n) = 1 + \frac{1}{n}\sum_{k=1}^{n-1}U(k)$$Now we'll find a simple closed form for this:$$\begin{align}U(n) &= 1+\frac{1}{n}\sum_{k=1}^{n-1}U(k)\\ &= 1+\frac{1}{n}\left(\sum_{k=1}^{n-2}U(k)\right)+\frac{1}{n}U(n-1) &\text{pulling out the last term}\\ &= 1+\frac{n-1}{n}\left(\frac{1}{n-1}\sum_{k=1}^{n-2}U(k)\right)+\frac{1}{n}U(n-1)\\ &= 1+\frac{n-1}{n}(U(n-1)-1)+\frac{1}{n}U(n-1) & \text{definition of }U(n-1)\\ &= U(n-1)\left(\frac{n-1}{n}+\frac{1}{n}\right)+\left(1-\frac{n-1}{n}\right)\\ &= U(n-1) + \frac{1}{n}\end{align}$$Whaddya know? With $U(1)=1$ we just have$$U(n) = 1 + \frac{1}{2}+\frac{1}{3}+\dotsm+\frac{1}{n} = H(n),$$the $n$-th harmonic number; it is well known that $H(n)$ is asymptotic to $\ln n$.
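The closed form $U(n)=H(n)$ is easy to verify by brute force for small $n$; a Python sketch (function names are mine):

```python
import math
from fractions import Fraction
from itertools import permutations

def average_updates(n):
    """Average number of updates to the running maximum,
    over all n! possible arrival orders."""
    total = 0
    for order in permutations(range(1, n + 1)):
        tallest = 0
        for height in order:
            if height > tallest:  # a new tallest person: one update
                tallest = height
                total += 1
    return Fraction(total, math.factorial(n))

def harmonic(n):
    return sum(Fraction(1, k) for k in range(1, n + 1))

# U(n) equals the n-th harmonic number, exactly, for every n checked
for n in range(1, 8):
    assert average_updates(n) == harmonic(n)
```

In particular `average_updates(3)` returns 11/6, matching the hand count $T(3)=11$ above.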
What is the Jacobian matrix? What are its applications? What is its physical and geometrical meaning? Can someone please explain with examples? Here is an example. Suppose you have two implicit differentiable functions $$F(x,y,z,u,v)=0,\qquad G(x,y,z,u,v)=0$$ and the functions, also differentiable, $u=f(x,y,z)$ and $v=g(x,y,z)$ such that $$F(x,y,z,f(x,y,z),g(x,y,z))=0,\qquad G(x,y,z,f(x,y,z),g(x,y,z))=0.$$ If you differentiate $F$ and $G$, you get \begin{eqnarray*} \frac{\partial F}{\partial x}+\frac{\partial F}{\partial u}\frac{\partial u}{ \partial x}+\frac{\partial F}{\partial v}\frac{\partial v}{\partial x} &=&0\qquad \\ \frac{\partial G}{\partial x}+\frac{\partial G}{\partial u}\frac{\partial u}{ \partial x}+\frac{\partial G}{\partial v}\frac{\partial v}{\partial x} &=&0. \end{eqnarray*} Solving this system you obtain $$\frac{\partial u}{\partial x}=-\frac{\det \begin{pmatrix} \frac{\partial F}{\partial x} & \frac{\partial F}{\partial v} \\ \frac{\partial G}{\partial x} & \frac{\partial G}{\partial v} \end{pmatrix}}{\det \begin{pmatrix} \frac{\partial F}{\partial u} & \frac{\partial F}{\partial v} \\ \frac{\partial G}{\partial u} & \frac{\partial G}{\partial v} \end{pmatrix}}$$ and similar for $\dfrac{\partial u}{\partial y}$, $\dfrac{\partial u}{\partial z}$, $\dfrac{\partial v}{\partial x}$, $\dfrac{\partial v}{\partial y}$, $\dfrac{\partial v}{\partial z}$. The compact notation for the denominator is $$\frac{\partial (F,G)}{\partial (u,v)}=\det \begin{pmatrix} \frac{\partial F}{\partial u} & \frac{\partial F}{\partial v} \\ \frac{\partial G}{\partial u} & \frac{\partial G}{\partial v} \end{pmatrix}$$ and similar for the numerator. 
Then $$\dfrac{\partial u}{\partial x}=-\dfrac{\dfrac{\partial (F,G)}{\partial (x,v)}}{\dfrac{\partial (F,G)}{\partial (u,v)}}$$ where $\dfrac{\partial (F,G)}{\partial (x,v)},\dfrac{\partial (F,G)}{\partial(u,v)}$ are Jacobians (after the 19th-century German mathematician Carl Jacobi). The absolute value of the Jacobian of a coordinate system transformation is also used to convert a multiple integral from one system into another. In $\mathbb{R}^2$ it measures how much the unit area is distorted by the given transformation, and in $\mathbb{R}^3$ this factor measures the unit volume distortion, etc. Another example: the following coordinate transformation (due to Beukers, Calabi and Kolk) $$x=\frac{\sin u}{\cos v}$$ $$y=\frac{\sin v}{\cos u}$$ For this transformation you get (see Proof 2 in this collection of proofs by Robin Chapman) $$\dfrac{\partial (x,y)}{\partial (u,v)}=1-x^2y^{2}.$$ Jacobian sign and orientation of closed curves. Assume you have two small closed curves, one around $(x_0,y_0)$ and another around $(u_0,v_0)$, the latter being the image of the former under the mapping $u=f(x,y)$, $v=g(x,y)$. If the sign of $\dfrac{\partial (x,y)}{\partial (u,v)}$ is positive, then both curves will be travelled in the same sense. If the sign is negative, they will have opposite senses. (See Oriented Regions and their Orientation.) The Jacobian $df_p$ of a differentiable function $f : \mathbb{R}^n \to \mathbb{R}^m$ at a point $p$ is its best linear approximation at $p$, in the sense that $f(p + h) = f(p) + df_p(h) + o(|h|)$ for small $h$. This is the "correct" generalization of the derivative of a function $f : \mathbb{R} \to \mathbb{R}$, and everything we can do with derivatives we can also do with Jacobians. In particular, when $n = m$, the determinant of the Jacobian at a point $p$ is the factor by which $f$ locally dilates volumes around $p$ (since $f$ acts locally like the linear transformation $df_p$, which dilates volumes by $\det df_p$). 
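The volume-dilation interpretation is easy to illustrate numerically. A Python sketch using central finite differences (the helper `jacobian_det` is mine, not a library routine):

```python
import math

def jacobian_det(f, u, v, h=1e-6):
    """2x2 Jacobian determinant of f: R^2 -> R^2 at (u, v),
    estimated with central finite differences."""
    fu = [(a - b) / (2 * h) for a, b in zip(f(u + h, v), f(u - h, v))]
    fv = [(a - b) / (2 * h) for a, b in zip(f(u, v + h), f(u, v - h))]
    return fu[0] * fv[1] - fu[1] * fv[0]

# Polar coordinates: the determinant is r, the familiar factor
# in the area element dA = r dr dtheta.
polar = lambda r, th: (r * math.cos(th), r * math.sin(th))
print(jacobian_det(polar, 2.0, 0.7))  # close to 2.0

# The Beukers-Calabi-Kolk map quoted above: determinant 1 - x^2 y^2.
bck = lambda u, v: (math.sin(u) / math.cos(v), math.sin(v) / math.cos(u))
u0, v0 = 0.3, 0.4
x0, y0 = bck(u0, v0)
print(jacobian_det(bck, u0, v0), 1 - x0**2 * y0**2)
```

Both checks agree with the stated closed forms to many digits, at a sample point; of course this is a spot check, not a proof.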
This is the reason that the Jacobian appears in the change of variables formula for multivariate integrals, which is perhaps the basic reason to care about the Jacobian. For example this is how one changes an integral in rectangular coordinates to cylindrical or spherical coordinates. The Jacobian specializes to the most important constructions in multivariable calculus. It immediately specializes to the gradient, for example. When $n = m$ its trace is the divergence. And a more complicated construction gives the curl. The rank of the Jacobian is also an important local invariant of $f$; it roughly measures how "degenerate" or "singular" $f$ is at $p$. This is the reason the Jacobian appears in the statement of the implicit function theorem, which is a fundamental result with applications everywhere. In single variable calculus, if $f:\mathbb R \to \mathbb R$, then \begin{equation} f'(x) = \lim_{\Delta x \to 0} \frac{f(x + \Delta x) - f(x)}{\Delta x}. \end{equation} A very useful way to think about $f'(x)$ is this: \begin{equation} \tag{$\spadesuit$} f(x + \Delta x) \approx f(x) + f'(x) \Delta x. \end{equation} One of the advantages of equation $(\spadesuit)$ is that it still makes perfect sense in the case where $f:\mathbb R^n \to \mathbb R^m$: \begin{equation} f(\underbrace{x}_{n \times 1} + \underbrace{\Delta x}_{n\times 1}) \approx \underbrace{f(x)}_{m \times 1} + \underbrace{f'(x)}_{?} \underbrace{\Delta x}_{n \times 1}. \end{equation} You see, if $f'(x)$ is now an $m \times n$ matrix, then this equation makes perfect sense. So, with this idea, we can extend the idea of the derivative to the case where $f:\mathbb R^n \to \mathbb R^m$. This is the first step towards developing calculus in a multivariable setting. The matrix $f'(x)$ is called the "Jacobian" of $f$ at $x$, but maybe it's more clear to simply call $f'(x)$ the derivative of $f$ at $x$. 
The matrix $f'(x)$ allows us to approximate $f$ locally by a linear function (or, technically, an "affine" function). Linear functions are simple enough that we can understand them well (using linear algebra), and often understanding the local linear approximation to $f$ at $x$ allows us to draw conclusions about $f$ itself. (I know this is slightly late, but I think the OP may appreciate this.) As an application, in the field of control engineering the use of Jacobian matrices allows the local (approximate) linearisation of non-linear systems around a given equilibrium point and so allows the use of linear systems techniques, such as the calculation of eigenvalues (and thus an indication of the type of the equilibrium point). Jacobians are also used in the estimation of the internal states of non-linear systems in the construction of the extended Kalman filter, and also if the extended Kalman filter is to be used to provide joint state and parameter estimates for a linear system (since this is a non-linear analysis, due to the products of what are then effectively inputs and outputs of the system). I found the most beautiful usage of Jacobian matrices in studying differential geometry, when one abandons the idea that analysis can be done "only on balls of $\mathbb{R}^n$". The tangent space at a point $p$ of a manifold $M$ can be defined via the kernel of the Jacobian of a suitable submersion, or via the image of the differential of a suitable immersion from an open set $U\subseteq\mathbb{R}^{\dim M}$. Quite a simple example, but when I was an undergrad four years ago it gave me the "right" idea of what a linear transformation does in a differential (analytical) framework. This is not a rigorous explanation, but here is the best intuitive explanation/motivation for the Jacobian matrix. Start with an interval $[x_1,x_2] \subset \mathbb{R}$. What is a common measurement of space for this interval? It is length. 
To find the length of $[x_1,x_2]$, take $x_2-x_1$. Now suppose I define an invertible linear transformation $T:\mathbb{R} \rightarrow \mathbb{R}$, where $$T(x)=\begin{bmatrix}a\end{bmatrix}x,$$ where $\begin{bmatrix}a\end{bmatrix}$ is a $1\times 1$ matrix with a nonzero entry $a$. The image of $[x_1,x_2]$ under $T$ is the interval $[ax_1,ax_2]$ (for $a>0$; if $a<0$ the endpoints swap), and the length of this new interval is $ax_2-ax_1=a(x_2-x_1)$. Now we ask ourselves this question: how does the length of the new interval relate to the length of the old interval? The length of $[ax_1,ax_2]$ is $|a|$ times the length of $[x_1,x_2]$. But notice that: $$|a|=\left |\det\begin{bmatrix}a\end{bmatrix}\right |.$$ Now suppose you are doing $u$-substitution to evaluate an integral in the form $$\int_{S} f(x) dx.$$ We define $x=x(u)$ and the differential $dx$ becomes $\frac{dx}{du}du$. If you view $dx$ and $du$ as vectors in $\mathbb{R}$, you get $$dx=\begin{bmatrix}\frac{dx}{du}\end{bmatrix}du.$$ The determinant of $\begin{bmatrix}\frac{dx}{du}\end{bmatrix}$ plays the same role as $a$ in that it is a scaling factor between different "infinitesimal" interval lengths. The higher-dimensional analogue of the interval in $\mathbb{R}$ is a parallelepiped in $\mathbb{R}^n$. Measurement of space in $\mathbb{R}^n$ is the $n$-dimensional volume. If you define an invertible linear transformation $T:\mathbb{R}^n \rightarrow \mathbb{R}^n$, and if you write $T(x)=Ax$, where $A$ is an $n \times n$ matrix, the absolute value of $\det A$ scales the volume of a parallelepiped.
Similarly, if you are dealing with the multidimensional integral: $$\int_{S}f(x_1,...,x_n)dx_1...dx_n$$ and wish to use the change of variables: $$x_i=x_i(u_1,...,u_n),\quad 1 \leq i \leq n,$$ you can regard $dx=(dx_1,...,dx_n),du=(du_1,...,du_n)$ as vectors in $\mathbb{R}^n$ and relate them by $$dx=\begin{bmatrix}\frac{\partial x_i}{\partial u_j}\end{bmatrix}_{ij}du.$$ The Jacobian matrix here is: $$\begin{bmatrix}\frac{\partial x_i}{\partial u_j}\end{bmatrix}_{ij},$$ and the notation means the $i$th row and $j$th column entry is $\frac{\partial x_i}{\partial u_j}$. The absolute value of the determinant of the Jacobian matrix is a scaling factor between different "infinitesimal" parallelepiped volumes. Again, this explanation is merely intuitive. It is not rigorous as one would present it in a real analysis course. I don't know much about this, but I know it is used in robotics programming for transforming between two frames of reference. The equations become very simple: moving from one frame to another is just a product of Jacobian matrices. A very short contribution for the applicability question: it is a matrix of partial derivatives. One of the applications is to find local solutions of a system of nonlinear equations. When you have a system of nonlinear equations, the $x$'s that solve the system are not easy to find, because it is difficult to invert the matrix of nonlinear coefficients of the system. However, you can take the partial derivatives of the equations, find the local linear approximation near some value, and then solve the system. Because the system becomes locally linear, you can solve it using linear algebra. The simplest answer I can give is: the Jacobian matrix is used when a change of variables is required in more than one dimension. One of the explanations above illustrates this in the single-variable setting.
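The scaling role of $|\det J|$ can be sketched numerically: integrate $e^{-(x^2+y^2)}$ over a disk directly, and again in polar coordinates, where the Jacobian determinant is $r$ (SciPy assumed; the radius is an arbitrary choice):

```python
import numpy as np
from scipy import integrate

R = 2.0  # radius of the disk (arbitrary choice)

# Direct integration of exp(-(x^2+y^2)) over the disk x^2 + y^2 <= R^2.
# Note dblquad integrates func(y, x); this integrand is symmetric, so order is safe.
direct, _ = integrate.dblquad(
    lambda y, x: np.exp(-(x**2 + y**2)),
    -R, R,
    lambda x: -np.sqrt(R**2 - x**2),
    lambda x: np.sqrt(R**2 - x**2),
)

# Polar coordinates x = r cos t, y = r sin t: dx dy = |det J| dr dt = r dr dt
polar, _ = integrate.dblquad(
    lambda r, t: np.exp(-r**2) * r,  # integrand times the Jacobian factor r
    0.0, 2.0 * np.pi,
    lambda t: 0.0,
    lambda t: R,
)

print(direct, polar)  # both equal pi * (1 - exp(-R^2))
```

Without the extra factor of $r$ in the polar integrand, the two numbers would disagree; that factor is exactly $|\det J|$ for the polar change of variables.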
You are given an undirected unweighted graph consisting of $$$n$$$ vertices and $$$m$$$ edges. You have to write a number on each vertex of the graph. Each number should be $$$1$$$, $$$2$$$ or $$$3$$$. The graph becomes beautiful if for each edge the sum of numbers on vertices connected by this edge is odd. Calculate the number of possible ways to write numbers $$$1$$$, $$$2$$$ and $$$3$$$ on vertices so the graph becomes beautiful. Since this number may be large, print it modulo $$$998244353$$$. Note that you have to write exactly one number on each vertex. The graph does not have any self-loops or multiple edges. The first line contains one integer $$$t$$$ ($$$1 \le t \le 3 \cdot 10^5$$$) — the number of tests in the input. The first line of each test contains two integers $$$n$$$ and $$$m$$$ ($$$1 \le n \le 3 \cdot 10^5, 0 \le m \le 3 \cdot 10^5$$$) — the number of vertices and the number of edges, respectively. Next $$$m$$$ lines describe edges: $$$i$$$-th line contains two integers $$$u_i$$$, $$$v_i$$$ ($$$1 \le u_i, v_i \le n; u_i \neq v_i$$$) — indices of vertices connected by $$$i$$$-th edge. It is guaranteed that $$$\sum\limits_{i=1}^{t} n \le 3 \cdot 10^5$$$ and $$$\sum\limits_{i=1}^{t} m \le 3 \cdot 10^5$$$. For each test print one line, containing one integer — the number of possible ways to write numbers $$$1$$$, $$$2$$$, $$$3$$$ on the vertices of given graph so it becomes beautiful. Since answers may be large, print them modulo $$$998244353$$$.

Input:
2
2 1
1 2
4 6
1 2
1 3
1 4
2 3
2 4
3 4

Output:
4
0

Possible ways to distribute numbers in the first test: In the second test there is no way to distribute numbers.
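A standard solution sketch (mine, not part of the statement): the sum on an edge is odd iff one endpoint gets the even number 2 and the other an odd number (1 or 3). So each connected component must be bipartite; if its two sides have sizes $a$ and $b$, either side may be the "even" side, giving $2^a + 2^b$ labelings, and the per-component counts multiply. A minimal Python sketch:

```python
from collections import deque

MOD = 998244353

def count_beautiful(n, edges):
    adj = [[] for _ in range(n + 1)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    color = [-1] * (n + 1)
    ans = 1
    for s in range(1, n + 1):
        if color[s] != -1:
            continue
        # BFS 2-coloring of the component containing s
        color[s] = 0
        cnt = [1, 0]  # vertices of each color in this component
        q = deque([s])
        ok = True
        while q:
            u = q.popleft()
            for v in adj[u]:
                if color[v] == -1:
                    color[v] = color[u] ^ 1
                    cnt[color[v]] += 1
                    q.append(v)
                elif color[v] == color[u]:
                    ok = False  # odd cycle: component is not bipartite
        if not ok:
            return 0
        # either color class can carry the odd labels {1, 3}
        ans = ans * (pow(2, cnt[0], MOD) + pow(2, cnt[1], MOD)) % MOD
    return ans

print(count_beautiful(2, [(1, 2)]))                                   # sample 1 -> 4
print(count_beautiful(4, [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]))  # sample 2 -> 0
```

An isolated vertex contributes $2^1 + 2^0 = 3$, matching the three labels it may freely take.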
Joint work with Øystein Linnebo, University of Oslo. J. D. Hamkins and Ø. Linnebo, “The modal logic of set-theoretic potentialism and the potentialist maximality principles,” to appear in Review of Symbolic Logic, 2018. @ARTICLE{HamkinsLinnebo:Modal-logic-of-set-theoretic-potentialism, author = {Hamkins, Joel David and Linnebo, \O{}ystein}, title = {The modal logic of set-theoretic potentialism and the potentialist maximality principles}, journal = {to appear in Review of Symbolic Logic}, year = {2018}, volume = {}, number = {}, pages = {}, month = {}, note = {}, abstract = {}, keywords = {to-appear}, source = {}, eprint = {1708.01644}, archivePrefix = {arXiv}, primaryClass = {math.LO}, url = {http://wp.me/p5M0LV-1zC}, doi = {}, } Abstract.We analyze the precise modal commitments of several natural varieties of set-theoretic potentialism, using tools we develop for a general model-theoretic account of potentialism, building on those of Hamkins, Leibman and Löwe (Structural connections between a forcing class and its modal logic), including the use of buttons, switches, dials and ratchets. Among the potentialist conceptions we consider are: rank potentialism (true in all larger $V_\beta$); Grothendieck-Zermelo potentialism (true in all larger $V_\kappa$ for inaccessible cardinals $\kappa$); transitive-set potentialism (true in all larger transitive sets); forcing potentialism (true in all forcing extensions); countable-transitive-model potentialism (true in all larger countable transitive models of ZFC); countable-model potentialism (true in all larger countable models of ZFC); and others. In each case, we identify lower bounds for the modal validities, which are generally either S4.2 or S4.3, and an upper bound of S5, proving in each case that these bounds are optimal. The validity of S5 in a world is a potentialist maximality principle, an interesting set-theoretic principle of its own. 
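For readers less familiar with the modal systems named in the abstract, these are their standard axiomatizations (my summary of the usual textbook formulations, not taken from the paper itself):

```latex
\begin{align*}
\textsf{S4}   &= \textsf{K} + (\Box\varphi\to\varphi) + (\Box\varphi\to\Box\Box\varphi)\\
\textsf{S4.2} &= \textsf{S4} + (\Diamond\Box\varphi\to\Box\Diamond\varphi)
  && \text{(directedness)}\\
\textsf{S4.3} &= \textsf{S4} + \bigl(\Box(\Box\varphi\to\psi)\lor\Box(\Box\psi\to\varphi)\bigr)
  && \text{(linearity)}\\
\textsf{S5}   &= \textsf{S4} + (\Diamond\varphi\to\Box\Diamond\varphi)
\end{align*}
```

So the lower bounds S4.2 and S4.3 mentioned in the abstract correspond roughly to directed and linearly ordered potentialist systems, respectively.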
The results can be viewed as providing an analysis of the modal commitments of the various set-theoretic multiverse conceptions corresponding to each potentialist account. Set-theoretic potentialism is the view in the philosophy of mathematics that the universe of set theory is never fully completed, but rather unfolds gradually as parts of it increasingly come into existence or become accessible to us. On this view, the outer reaches of the set-theoretic universe have merely potential rather than actual existence, in the sense that one can imagine “forming” or discovering always more sets from that realm, as many as desired, but the task is never completed. For example, height potentialism is the view that the universe is never fully completed with respect to height: new ordinals come into existence as the known part of the universe grows ever taller. Width potentialism holds that the universe may grow outwards, as with forcing, so that already existing sets can potentially gain new subsets in a larger universe. One commonly held view amongst set theorists is height potentialism combined with width actualism, whereby the universe grows only upward rather than outward, and so at any moment the part of the universe currently known to us is a rank initial segment $V_\alpha$ of the potential yet-to-be-revealed higher parts of the universe. Such a perspective might even be attractive to a Platonistically inclined large-cardinal set theorist, who wants to hold that there are many large cardinals, but who also is willing at any moment to upgrade to a taller universe with even larger large cardinals than had previously been mentioned. Meanwhile, the width-potentialist height-actualist view may be attractive for those who wish to hold a potentialist account of forcing over the set-theoretic universe $V$. On the height-and-width-potentialist view, one views the universe as growing with respect to both height and width. 
A set-theoretic monist, in contrast, with an ontology having only a single fully existing universe, will be an actualist with respect to both width and height. The second author has described various potentialist views in previous work. Although we are motivated by the case of set-theoretic potentialism, the potentialist idea itself is far more general, and can be carried out in a general model-theoretic context. For example, the potentialist account of arithmetic is deeply connected with the classical debates surrounding potential as opposed to actual infinity, and indeed, perhaps it is in those classical debates where one finds the origin of potentialism. More generally, one can provide a potentialist account of truth in the context of essentially any kind of structure in any language or theory. Our project here is to analyze and understand more precisely the modal commitments of various set-theoretic potentialist views. After developing a general model-theoretic account of the semantics of potentialism and providing tools for establishing both lower and upper bounds on the modal validities for various kinds of potentialist contexts, we shall use those tools to settle exactly the propositional modal validities for several natural kinds of set-theoretic height and width potentialism. Here is a summary account of the modal logics for various flavors of set-theoretic potentialism. In each case, the indicated lower and upper bounds are realized in particular worlds, usually in the strongest possible way that is consistent with the stated inclusions, although in some cases, this is proved only under additional mild technical hypotheses. Indeed, some of the potentialist accounts are only undertaken with additional set-theoretic assumptions going beyond ZFC. 
For example, the Grothendieck-Zermelo account of potentialism is mainly interesting under the assumption that there is a proper class of inaccessible cardinals, and countable-transitive-model potentialism is more robust under the assumption that every real is an element of a countable transitive model of set theory, which can be thought of as a mild large-cardinal assumption. The upper bound of S5, when it is realized, constitutes a potentialist maximality principle, for in such a case, any statement that could possibly become actually true in such a way that it remains actually true as the universe unfolds is already actually true. We identify necessary and sufficient conditions, for each of the concepts of potentialism, for a world to fulfill this potentialist maximality principle. For example, in rank potentialism, a world $V_\kappa$ satisfies S5 with respect to the language of set theory with arbitrary parameters if and only if $\kappa$ is $\Sigma_3$-correct. And it satisfies S5 with respect to the potentialist language of set theory with parameters if and only if it is $\Sigma_n$-correct for every $n$. Similar results hold for each of the potentialist concepts. Finally, let me mention the strong affinities between set-theoretic potentialism and set-theoretic pluralism, particularly with the various set-theoretic multiverse conceptions currently in the literature. Potentialists may regard themselves mainly as providing an account of truth ultimately for a single universe, gradually revealed, the limit of their potentialist system. Nevertheless, the universe fragments of their potentialist account can often naturally be taken as universes in their own right, connected by the potentialist modalities, and in this way, every potentialist system can be viewed as a multiverse.
Indeed, the potentialist systems we analyze in this article—including rank potentialism, forcing potentialism, generic-multiverse potentialism, countable-transitive-model potentialism, countable-model potentialism—each align with corresponding natural multiverse conceptions. Because of this, we take the results of this article as providing not only an analysis of the modal commitments of set-theoretic potentialism, but also an analysis of the modal commitments of various particular set-theoretic multiverse conceptions. Indeed, one might say that it is possible (ahem), in another world, for this article to have been entitled, “The modal logic of various set-theoretic multiverse conceptions.” For more, please follow the link to the arxiv where you can find the full article.
Please forgive me if this question is ill-defined. It's late here and I want to ask the question whilst it's still fresh in my mind. Motivation: Suppose we have a group $G$ given by a presentation $$P_G=\langle a, b\mid a^2, b^3, (ab)^2\rangle.$$ Then $P_G$ has the (oriented) graph $\Gamma_G$ in Figure 1: (This image can be found on page 58 of Magnus et al.'s Combinatorial Group Theory [...].) The group $H_{\Gamma_G}$ of symmetries of $\Gamma_G$ is $\Bbb Z_3$. If we were to extend the graph into the three-dimensional world to create a prism-like shape $\Pi_G$, then the group $H_{\Pi_G}$ of symmetries of $\Pi_G$ would be $\Bbb Z_3\times\Bbb Z_2$. So here is my motivating question: What are $H_{\Gamma_G}$ and $H_{\Pi_G}$ called with respect to $G$? I'm more interested in $H_{\Pi_G}$ because it seems more "optimal". (I hope you can see why.) What I mean by "with respect to $G$" is something like "the Shaun group(s) of $G$", if you'll forgive me using my name as a hypothetical example. The Question: What is the terminology for the group or groups of symmetries of the Cayley graph(s) of a group $G$? Context: I'm not sure how to provide additional context for a terminology question but here goes . . . I'm studying for a PhD in combinatorial group theory and I'm in the first year. I haven't found anything ibid. nor in Lyndon & Schupp's book "Combinatorial Group Theory". I have no training in graph theory. An answer that defines the group(s) in question and provides examples of their use in group theory literature would be ideal and greatly appreciated. Why ask? I'm just curious. Please help :)
A dispersion relation tells you the form of $\omega(k)$. Since $E = \hbar \omega$ and $p = \hbar k$, you can see it as a relation between energy and momentum. Since we have from special relativity that $$ E^{2} = p^{2}c^{2} + m^{2}c^4,$$ it is clear that $E = pc$ for a photon. Also, since the total energy of a free electron is $E=\gamma m c^2$, its kinetic energy is $E = (\gamma -1)mc^2$, which reduces to $p^2/2m$ for $v\ll c$. This way you can see how special relativity tells us that mass has a role in the dispersion relation, since rest (invariant) mass is the same for all observers in all reference frames. (It is the norm of the energy-momentum 4-vector in Minkowski space.) Returning to your question, you can see that photons follow the wave equation $$ \partial^2_t \Psi = v^2\nabla^2 \Psi, $$ whose solutions in vacuum are transverse waves, whereas free electrons follow the Schrödinger equation: $$ i\hbar\partial_t\Psi = -\frac{\hbar^2}{2m}\nabla^2\Psi, $$ whose solutions are plane waves. The dispersion relation is medium-dependent; for instance, light is dispersionless in vacuum but not in matter, so in general $$v(n) = c/n,$$ where $n$ is the medium's refractive index. For matter waves following Schrödinger's equation, the dispersion relation $\omega = \hbar k^2/2m$ follows from the nonrelativistic limit $E = p^2/2m$ of the relativistic relation above. This is why massive particles have a different dispersion than electromagnetic waves, for example, and because massive particles have a phase velocity $v_\phi = \omega/k$ that depends upon the wavelength, their wave packets broaden as they propagate. I do not answer questions often, so I hope this is helpful.
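A small numeric sketch of the contrast (my own illustration, with arbitrary sample wavenumbers): for light in vacuum $\omega = ck$, so $v_\phi = \omega/k = c$ for every $k$; for a free nonrelativistic electron $\omega = \hbar k^2/2m$, so $v_\phi$ grows with $k$, which is why electron wave packets spread.

```python
import numpy as np

c = 2.998e8       # speed of light, m/s
hbar = 1.055e-34  # reduced Planck constant, J s
m_e = 9.109e-31   # electron mass, kg

k = np.array([1e9, 2e9, 4e9])  # sample wavenumbers, 1/m (arbitrary values)

# Photon in vacuum: omega = c k, so the phase velocity omega/k is constant.
v_photon = (c * k) / k

# Free nonrelativistic electron: omega = hbar k^2 / (2 m), so omega/k grows with k.
v_electron = (hbar * k**2 / (2 * m_e)) / k

print(v_photon)    # the same value c for every k: no dispersion
print(v_electron)  # proportional to k: dispersive, so wave packets spread
```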
In Markowitz' portfolio theory we can construct portfolios with the minimum variance for a given expected return (or vice versa). Varying the target return traces out the well-known efficient frontier. To find the so-called tangency portfolio, we look to solve: $$\max_x \frac{(\mu - r_f\mathbf{1})^T x}{\sqrt{x^T Q x}}.$$ Following Tütüncü (section 5.2), this can be reformulated under a change of variables $y = \kappa x$ to a simpler quadratic optimisation problem: $$\min_{y,\kappa} y^T Q y \qquad \text{where} \quad (\mu-r_f\mathbf{1})^T y = 1,\; \kappa > 0.$$ I've solved the problem and got values for $y$. However, $\kappa$ is defined in terms of $x$... So, whilst I'm sure this is a stupid question, how do we actually translate the $y$ vector to recover the true portfolio weights $x$? The only thing I can think of is that I did not include a constraint for $\kappa$. This is for the same reason as above (that it is defined in terms of $x$, and so not available), and because the KKT conditions suggested in this answer also ignore the $\kappa >0$ term.
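What I believe resolves this (a sketch, assuming the only remaining constraint is full investment, $\mathbf{1}^T x = 1$): since $y = \kappa x$ with $\kappa > 0$, summing gives $\kappa = \mathbf{1}^T y$, so the weights are recovered by normalizing, $x = y/(\mathbf{1}^T y)$. Numerically, with a toy covariance matrix and return vector of my own choosing:

```python
import numpy as np

# Toy inputs (illustrative only)
Q = np.array([[0.04, 0.01], [0.01, 0.09]])  # covariance matrix
mu = np.array([0.08, 0.12])                 # expected returns
rf = 0.02                                   # risk-free rate

# The minimizer of y^T Q y subject to (mu - rf 1)^T y = 1 has the closed form
# y = Q^{-1}(mu - rf 1) / [(mu - rf 1)^T Q^{-1} (mu - rf 1)]  (Lagrange conditions)
excess = mu - rf
z = np.linalg.solve(Q, excess)
y = z / (excess @ z)

# Recover the portfolio weights: kappa = 1^T y, so x = y / (1^T y)
x = y / y.sum()

print(x)        # tangency weights, proportional to Q^{-1}(mu - rf 1)
print(x.sum())  # sums to 1
```

Note the normalization step is exactly why the $\kappa > 0$ constraint can be ignored in the solve: it only fixes the scale of $y$, which drops out of $x$.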
Variable Importance in Random Forests can suffer from severe overfitting

Predictive vs. interpretational overfitting

There appears to be broad consensus that random forests rarely suffer from the "overfitting" which plagues many other models. (We define overfitting as choosing a model flexibility which is too high for the data-generating process at hand, resulting in non-optimal performance on an independent test set.) By averaging many (hundreds of) separately grown deep trees, each of which inevitably overfits the data, one often achieves a favorable balance in the bias-variance tradeoff. For similar reasons, the need for careful parameter tuning also seems less essential than in other models. This post does not attempt to contribute to this long-standing discussion (see e.g. https://stats.stackexchange.com/questions/66543/random-forest-is-overfitting) but points out that random forests' immunity to overfitting is restricted to the predictions only, and not to the default variable importance measure! We assume the reader is familiar with the basic construction of random forests, which are averages of large numbers of individually grown regression/classification trees. The random nature stems from both "row and column subsampling": each tree is based on a random subset of the observations, and each split is based on a random subset of candidate variables. The tuning parameter \(mtry\), which for popular software implementations has the default \(\lfloor p/3 \rfloor\) for regression and \(\sqrt{p}\) for classification trees, can have profound effects on prediction quality as well as on the variable importance measures outlined below. At the heart of the randomForest library is the CART algorithm, which chooses the split for each node such that maximum reduction in overall node impurity is achieved.
Due to the bootstrap row sampling, \(36.8\%\) of the observations are (on average) not used for an individual tree; those "out of bag" (OOB) samples can serve as a validation set to estimate the test error, e.g.:

\[\begin{equation} E\left( Y - \hat{Y}\right)^2 \approx OOB_{MSE} = \frac{1}{n} \sum_{i=1}^n{\left( y_i - \overline{\hat{y}}_{i, OOB}\right)^2} \end{equation}\]

where \(\overline{\hat{y}}_{i, OOB}\) is the average prediction for the \(i\)th observation from those trees for which this observation was OOB.

Variable Importance

The default method to compute variable importance is the mean decrease in impurity (or Gini importance) mechanism: at each split in each tree, the improvement in the split criterion is the importance measure attributed to the splitting variable, and it is accumulated over all the trees in the forest, separately for each variable. Note that this measure is quite like the \(R^2\) of a regression on the training set. The widely used alternative, permutation importance, is defined as follows:

\[\begin{equation} \label{eq:VI} \mbox{VI} = OOB_{MSE, perm} - OOB_{MSE} \end{equation}\]

Gini importance can be highly misleading

We use the well-known titanic data set to illustrate the perils of putting too much faith into the Gini importance, which is based entirely on training data – not on OOB samples – and makes no attempt to discount impurity decreases in deep trees that are pretty much frivolous and will not survive in a validation set. In the following model we include PassengerId as a feature along with the more reasonable Age, Sex and Pclass:

randomForest(Survived ~ Age + Sex + Pclass + PassengerId, data = titanic_train[!naRows,], ntree = 200, importance = TRUE, mtry = 2)

The figure below shows both measures of variable importance, and surprisingly PassengerId turns out to be ranked number 2 by the Gini importance (right panel). This unexpected result is robust to random shuffling of the ID.
The permutation-based importance (left panel) is not fooled by the irrelevant ID feature. This is maybe not unexpected, as the IDs should bear no predictive power for the out-of-bag samples.

Noise Feature

Let us go one step further and add a Gaussian noise feature, which we call PassengerWeight:

titanic_train$PassengerWeight = rnorm(nrow(titanic_train), 70, 20)
rf4 = randomForest(Survived ~ Age + Sex + Pclass + PassengerId + PassengerWeight, data = titanic_train[!naRows,], ntree = 200, importance = TRUE, mtry = 2)

Again, the blatant "overfitting" of the Gini variable importance is troubling, whereas the permutation-based importance (left panel) is not fooled by the irrelevant features. (Encouragingly, the importance measures for ID and weight are even negative!) In the remainder we investigate whether other libraries suffer from similar spurious variable importance measures.

h2o library

Unfortunately, the h2o random forest implementation does not offer permutation importance. Coding passenger ID as integer is bad enough; coding passenger ID as factor makes matters worse. Let's look at a single tree from the forest: if we scramble the ID, does it hold up?
partykit

Conditional inference trees are not being fooled by ID, and the variable importance in cforest is indeed unbiased.

python's sklearn

Unfortunately, like h2o, the python random forest implementation offers only Gini importance, but this insightful post offers a solution:

Gradient Boosting

Boosting is highly robust against frivolous columns:

mdlGBM = gbm(Survived ~ Age + Sex + Pclass + PassengerId + PassengerWeight, data = titanic_train, n.trees = 300, shrinkage = 0.01, distribution = "gaussian")

Conclusion

Sadly, this post is 12 years behind: it has been known for a while now that the Gini importance tends to inflate the importance of continuous or high-cardinality categorical variables:

the variable importance measures of Breiman's original Random Forest method … are not reliable in situations where potential predictor variables vary in their scale of measurement or their number of categories.

Single Trees

I am still struggling with the extent of the overfitting. It is hard to believe that passenger ID could be chosen as a split point early in the tree-building process, given the other informative variables! Let us inspect a single tree:

## rowname left daughter right daughter split var split point status
## 1 1 2 3 Pclass 2.5 1
## 2 2 4 5 Pclass 1.5 1
## 3 3 6 7 PassengerId 10.0 1
## 4 4 8 9 Sex 1.5 1
## 5 5 10 11 Sex 1.5 1
## 6 6 12 13 PassengerId 2.5 1
## prediction
## 1 <NA>
## 2 <NA>
## 3 <NA>
## 4 <NA>
## 5 <NA>
## 6 <NA>

This tree splits on passenger ID at the second level! Let us dig deeper. The help page states:

For numerical predictors, data with values of the variable less than or equal to the splitting point go to the left daughter node.

So we have the 3rd-class passengers on the right branch.
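For completeness, newer versions of scikit-learn (0.22+) do ship permutation importance via `sklearn.inspection.permutation_importance`. Here is a minimal synthetic sketch of the ID-versus-signal contrast discussed above (my own toy data, not the titanic set):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
signal = rng.integers(0, 2, n)  # informative binary feature
ids = np.arange(n)              # high-cardinality "passenger ID" style feature
y = (signal ^ (rng.random(n) < 0.1)).astype(int)  # label = signal with 10% noise
X = np.column_stack([signal, ids])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

gini = rf.feature_importances_  # impurity-based, computed on training data
perm = permutation_importance(rf, X_te, y_te, n_repeats=10, random_state=0)

print(gini)                   # the ID column soaks up nontrivial impurity importance
print(perm.importances_mean)  # on held-out data the ID importance collapses toward 0
```

The deep trees fit the 10% label noise by splitting on the ID, so it accrues impurity importance; the permutation measure, evaluated on held-out rows, discounts it.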
Compare subsequent splits on (i) sex and (ii) passengerID, starting with a parent-node Gini impurity of 0.184.

Splitting on sex yields a Gini impurity of 0.159:

        1    2
   0   72  303
   1   71   50

Splitting on passengerID yields a Gini impurity of 0.183:

     FALSE TRUE
   0     2  373
   1     3  118

And how could passenger ID accrue more importance than sex?
Let me start off by saying that I am a complete newbie to Mathematica, so I don't really know what I'm doing. For my assignment I have to find the numerical probability of a particle in a harmonic oscillator potential for quantum numbers $n=0$ to $n=5$. For simplicity's sake, I am only trying to find the probability for $n=0$. The wave function for $n=0$ in a harmonic oscillator is: $\Psi(x) = N_0H_0 e^{-x^2/2}$. So the probability of finding a particle with the given wave function is: $\int \Psi^2(x)\, dx$. The classically allowed region is bounded by $y= \frac 12{kx^2}$.

ClearAll["Global`*"];
norm[n_] := (1/(Sqrt[π] 2^n n!))^(1/2)
u[x_, n_] := 1/Sqrt[a] norm[n] HermiteH[n, x/a] exp^[-(a*x^2/2)]
b[x_] := 0.5 kx^2
φ1[x_, n_] := 1/Sqrt[2 π] NIntegrate[u[x, n]^2, {x, -1, 1}, {n, 0, 5}]

However, this does not result in any output. I am wondering how you could format this to result in a valid output, and how to get the numerical probability within the bounds of $y= \frac 12{kx^2}$ and $E_v = \hbar \omega (v + \tfrac{1}{2})$.
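For cross-checking whatever the Mathematica code eventually returns, here is the same $n=0$ computation as a Python sketch in dimensionless units ($a=1$): $\psi_0(x)=\pi^{-1/4}e^{-x^2/2}$, and the classical turning points for $E_0=\tfrac12\hbar\omega$ satisfy $\tfrac12 x^2 = \tfrac12$, i.e. $x=\pm 1$, so the classically allowed probability is $\int_{-1}^{1}\psi_0^2\,dx=\operatorname{erf}(1)$.

```python
import numpy as np
from scipy import integrate
from scipy.special import erf

# Ground-state wave function in dimensionless units (a = 1)
psi0 = lambda x: np.pi ** -0.25 * np.exp(-x**2 / 2)

# Probability of finding the particle between the classical turning points x = ±1
prob, _ = integrate.quad(lambda x: psi0(x) ** 2, -1, 1)

print(prob, erf(1.0))  # both ~ 0.8427: inside the allowed region ~84% of the time
```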
I'm calculating expected loss on fixed-income using actual default probabilities and risk-free rate as the discount factor. I understand this is not theoretically correct. In absence of risk-neutral probabilities, are there any alternative rates I can use (besides risk-free rate) as the discount factor?

Yes, I solve this in a paper I am about to submit. It is important that you understand what it means to say "actual probabilities." Most people use Pearson-Neyman type statistics, and these do not and cannot give rise to actual probabilities, except in the narrow case of "exact" tests when the null is actually true. The reason is that Pearson-Neyman methods use minimax distributions. Minimax distributions are worst-case distributions, but they allow you to guarantee that you will not have false positives more than $\alpha$ percent of the time as your sample size goes to infinity. The result is that any statistics generated are not coherent, where the technical definition of coherence is that a bookie offering to buy risks could not be gamed into taking a sure loss by a crafty actor or set of actors. Coherent probabilities, in essence, are ones you can gamble on: they are the true probabilities, given the information available. If you were to use Pearson-Neyman probabilities, then someone who is crafty would be capable of gaming your solution. This is very surprising, but also true. When Pearson and Neyman put together their system of thinking, which rolls into economics, they did not intend it to be used for gambling, but instead to direct behavior. Should you keep the lot coming off the assembly line or reject the lot based on a sample? It is a wonderful system when used as intended. Let us begin by estimating the default rate, and to simplify the argument we will drop things like logistic regression, because it nicely adds dimensionality but detracts from the simplicity of the explanation.
Note that the only thing that would change in this discussion is the form of the likelihood function and the dimensionality of the prior distribution. How you would do it would change, but the logic is unchanged. Let us also narrowly define what a default is, acknowledging that a wide definition would merely add dimensionality to the problem. So, for example, in a real model you would separately define the probability of a temporary suspension of payments, the probability of a permanent suspension with full recovery of principal, the probability of a permanent suspension with partial recovery, and the probability of a permanent suspension with no recovery. For partial recoveries, whether of interest or principal, you would also need to model the likely recovery given that that type of default has happened. For our narrow discussion, we will pretend we live in an all-or-nothing world, wherein default results in no recovery of any kind. In essence, this is now a Bernoulli trial. Let us also make this a zero-coupon bond so that we do not need to model separate defaults on each coupon. I know I just shrank the problem down tremendously, but there is no loss of generality. You would then use Bayesian decision theory, and there is no issue of risk neutrality, risk aversion, loss aversion or risk-loving preferences. The first issue is to set the prior distribution. A flat or non-informative prior is not a truly credible solution, as it would imply that the chance of default for a bond is equally likely to be 99% and 0.1%. It would also imply a 50% chance of default as the prior expectation. That is ridiculous. There are two ways to set a prior. The first would be to observe actual historical rates. Since we are dropping logistic regression, we are assuming true independence of defaults rather than dependence on accounting values or the state of the economy. In this pretend world we can get away with that.
Let us imagine that you found a large data set of one thousand observations and in that set there was one default. That would create a prior density for $\theta$, where $\theta$ is the probability of default, equal to $999(1-\theta)^{998}$, by assuming the prior is a beta distribution. This may not be true, but for our purposes it is very convenient mathematically. If you have a better way to construct the prior, then don't do this. To be clear, this density is not the probability of default, but rather the density of the belief that $\theta=k$, for each $k\in(0,1)$, where $\theta$ is the true chance of default. A second solution could be to work backward from rates using relationships such as $i=r^*+p_h+p_m+p_d$, where you would factor out inflation, maturity and the Fisher effect from bond rates. There are other solutions too. Let us stick with the simple one above, though. Now let's assume you buy a portfolio of 100 bonds and none of them default. We are also going to assume you are not collecting outside data, though the process would be the same, as no one cares where the data came from. Random sampling is not required here. The binomial likelihood is $(1-\theta)^{100}$. The beta prior above and the binomial likelihood are multiplied together and normalized to a density. The resulting density $\Pr(\theta|data)=1099(1-\theta)^{1098}$ is the posterior density of $\theta$. You are not interested in the location of the parameter, but rather the probability there will be zero defaults, one default, two defaults and so forth. To do this, you would integrate out any unknowns, leaving only the knowns, which are the data and the prior knowledge. This is often expressed as $\Pr(\tilde{x}|X)$, where $X$ is the data. To solve this prediction you integrate out the uncertainty as follows: $\Pr(\tilde{x}=k|X)=\int_\theta\Pr(\tilde{x}=k|\theta)\Pr(\theta|X)\,\mathrm{d}\theta$. In this specific case, the solution is analytic and well known.
It is called the beta-binomial distribution. If $\alpha$ is the number of defaults, in this case 1, and $\beta$ is the number of surviving bonds, in this case 1099, and $n$ is the future number of bonds purchased and $k$ is a possible default level, then you can construct a discrete mass function for each level and take the expectation. The analytic solution is $\Pr(default=k|\alpha, \beta) = \frac{\Gamma(n+1)}{\Gamma(k+1)\Gamma(n-k+1)} \frac{\Gamma(k+\alpha)\Gamma(n-k+\beta)}{\Gamma(n+\alpha+\beta)} \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)},$ where $\Gamma$ is the gamma function, and then the expectation is trivial to take as the chance of two defaults is very small. You would then discount at the institution's marginal cost of funds, which is the deposit rate in the case of a bank. There need only be a subjective cost of funds. You end up with $$\frac{E(k)}{1+i}$$ where $i$ is the marginal cost of funds for the actor. This is the present value of your expected losses, though not expressed as a rate. The above outcome has a known solution in the literature. It is $$\frac{\frac{n\alpha}{\alpha+\beta}}{1+i}=\frac{100/1100}{1+i}\approx\frac{.09}{1+i}.$$ So in one hundred future bonds you would expect to lose nine hundredths of one bond. Now, if I were doing it, I would ignore everything I just told you, as it would make computation very costly, unless I wanted to write a do loop and calculate the factorials in logs and invert the process. For a large enough sample size, the normal approximation would be better to work with in theoretical work. If I were serious about the computation, then I would use Bayesian logistic regression because it does not have the assumption violations so well documented in Nwogugu at: Nwogugu, Michael; Decision-making, risk and corporate governance: A critique of methodological issues in bankruptcy/recovery prediction models; Applied Mathematics and Computation; 2007;(185); pp. 178-196.
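As a quick sanity check on the algebra, here is a minimal Python sketch (my own illustration; the variable names and the 5% cost of funds are assumptions, not part of the original argument) that evaluates the beta-binomial mass function via log-gammas and recovers the closed-form expectation $n\alpha/(\alpha+\beta)$:

```python
from math import lgamma, exp

def beta_binomial_pmf(k, n, a, b):
    """P(k defaults among n future bonds) under a Beta(a, b) posterior,
    evaluated via log-gammas to avoid overflowing the factorials."""
    return exp(lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
               + lgamma(k + a) + lgamma(n - k + b) - lgamma(n + a + b)
               + lgamma(a + b) - lgamma(a) - lgamma(b))

alpha, beta, n = 1, 1099, 100                 # posterior counts and future portfolio size
pmf = [beta_binomial_pmf(k, n, alpha, beta) for k in range(n + 1)]
expected_defaults = sum(k * p for k, p in enumerate(pmf))
i = 0.05                                      # illustrative marginal cost of funds
pv_expected_loss = expected_defaults / (1 + i)
```

The computed expectation agrees with the closed form $n\alpha/(\alpha+\beta)=100/1100\approx 0.09$ bonds.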
Still, you would take the expectation of the Bayesian predictive distribution divided by the subjective cost of funds. It would just now be very complex due to the true dimensionality of the problem, and you would almost certainly be forced to use a Metropolis-Hastings algorithm to solve the problem. If you are not risk neutral, then you would not use quadratic loss, as it assigns equal, quadratically growing costs to mislocating the point estimate of $E(k)$ downward as upward. If you were loss averse, then you would underweight underestimates of $k$ and overweight overestimates of $k$. This is why I said it may be easier to do with the normal distribution. It may be easier to solve as a traditional risk aversion problem with something like CARA or CRRA. There is a good book on both Pearson-Neyman and Bayesian decision theory at: I do apologize that this is so long, but you would likely ask the question in a different manner if you were accustomed to thinking in terms of decision theory. Also, you wanted to know about the actual default probabilities and not the abstracted ones you would get if the parameters were truly known at a risk-free rate. In the real world, I used Bayesian methods to test 78 models of bankruptcy and found two that worked, although all are statistically significant. All significance tells you is that the results are unlikely to be due to chance if the null is true; it does nothing to tell you how useful a model is.
I've mostly worked with superconducting quantum computers, so I am not really familiar with the experimental details of photonic quantum computers that use photons to create continuous-variable cluster states, such as the one that the Canadian startup Xanadu is building. How are gate operations implemented in these types of quantum computers? And what is the universal quantum gate set in this case? Take an $n$-mode simple harmonic oscillator (SHO) in a (Fock) space $\mathcal F = \bigotimes_k\mathcal H_k$, where $\mathcal H_k$ is the Hilbert space of a SHO on mode $k$. This gives the usual annihilation operator $a_k$, which acts on a number state as $a_k\left|n\right> = \sqrt n\left|n-1\right>$ for $n\geq 1$ and $a_k\left|0\right> = 0$, and the creation operator on mode $k$ as $a_k^\dagger$, acting on a number state as $a_k^\dagger\left|n\right> = \sqrt{n+1}\left|n+1\right>$. The Hamiltonian of the SHO is $H = \omega\left(a_k^\dagger a_k+\frac 12\right)$ (in units where $\hbar = 1$). We can then define the quadratures $$X_k = \frac{1}{\sqrt 2}\left(a_k + a_k^\dagger\right)$$ $$P_k = -\frac{i}{\sqrt 2}\left(a_k - a_k^\dagger\right)$$ which are observables. At this point there are various operations (Hamiltonians) that can be performed. The effect of such an operation on the quadratures can be found by using the time evolution of an operator $A$, namely $\dot A = i\left[H, A\right]$. Applying these for time $t$ gives: $$X:P\mapsto P-t$$ $$P:X\mapsto X+t$$ $$\frac 12\left(X^2 + P^2\right): X\mapsto \cos t X - \sin t P,\, P\mapsto \cos t P + \sin t X,$$ which is just the Hamiltonian of a SHO with $\omega = 1$ and gives a phase shift. $$\pm S = \pm\frac 12\left(XP+PX\right): X\mapsto e^{\pm t}X,\, P\mapsto e^{\mp t}P,$$ which is known as the squeezing operator, where $+S\,\left(-S\right)$ squeezes $P\,\left(X\right)$. Any Hamiltonian of the form $aX+bP+c$ can be built by applying $X$ and $P$. Adding $S$ and $H$ allows for any quadratic Hamiltonian to be built.
Further adding the (nonlinear) Kerr Hamiltonian $$\left(X^2 + P^2\right)^2$$ allows for any polynomial Hamiltonian to be created. Finally, we include the beamsplitter operation (on two modes $j$ and $k$) $$\pm B_{jk} = \pm\left(P_jX_k - X_jP_k\right): A_j\mapsto \cos tA_j + \sin tA_k,\, A_k\mapsto \cos tA_k - \sin tA_j$$ for $A_j = X_j, P_j$ and $A_k = X_k, P_k$, which acts as a beamsplitter on the two modes. The above operations form the universal gate set for continuous-variable quantum computing. More details can be found in e.g. here. To implement these unitaries: how each operation is applied is generally hinted at in its name. Coupling a current acts as the displacement operator $D\left(\alpha\left(t\right)\right)$ where, for an electric field $\varepsilon$ and current $j$, $\alpha\left(t\right) = i\int_{t_0}^t\int j\left(r, t'\right)\cdot\varepsilon e^{-i\left(k\cdot r - w_kt'\right)} dr\, dt'$. The displacement operator shifts $X$ by the real part of $\alpha$ and $P$ by the imaginary part of $\alpha$. A phase shift can be applied by simply letting the system evolve by itself, as the system is a harmonic oscillator. It can also be performed by using a physical phase shifter. Squeezing is the hard bit and is something that needs to be improved experimentally. Such methods can be found in e.g. here, and here is one experiment using a limited amount of squeezed light. One possible way of squeezing is using a Kerr $\left(\chi^{\left(3\right)}\right)$ nonlinearity. This same nonlinearity also allows for the Kerr Hamiltonian to be implemented. The beamsplitter operation is, unsurprisingly, performed using a beamsplitter.
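To make the phase-space action of the Gaussian operations concrete, here is a minimal sketch (my own illustrative code, assuming the quadrature maps written above) of the phase shift, squeezer, and beamsplitter as matrices acting on the vector of quadrature means:

```python
import numpy as np

def phase_shift(t):
    # H = (X^2 + P^2)/2 applied for time t: a rotation in phase space
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

def squeeze(t):
    # H = +S = (XP + PX)/2: X -> e^{t} X, P -> e^{-t} P
    return np.array([[np.exp(t), 0.0],
                     [0.0, np.exp(-t)]])

def beamsplitter(t):
    # H = P_j X_k - X_j P_k acting on the vector (X_j, P_j, X_k, P_k)
    c, s = np.cos(t), np.sin(t)
    return np.array([[ c, 0, s, 0],
                     [ 0, c, 0, s],
                     [-s, 0, c, 0],
                     [ 0, -s, 0, c]])

# example: squeezing scales the quadrature means (X, P) = (1, 1)
squeezed = squeeze(0.5) @ np.array([1.0, 1.0])   # -> (e^{0.5}, e^{-0.5})
```

This is only the action on means of Gaussian states; the Kerr nonlinearity has no such finite matrix representation, which is exactly why it is the hard, non-Gaussian ingredient.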
D. D. Blair, J. D. Hamkins, and K. O’Bryant, “Representing Ordinal Numbers with Arithmetically Interesting Sets of Real Numbers,” Mathematics arXiv, 2019. (under review) @ARTICLE{BlairHamkinsOBryant:Representing-ordinal-numbers-with-arithmetically-interesting-sets-of-real-numbers, author = {D. Dakota Blair and Joel David Hamkins and Kevin O'Bryant}, title = {Representing Ordinal Numbers with Arithmetically Interesting Sets of Real Numbers}, journal = {Mathematics arXiv}, year = {2019}, volume = {}, number = {}, pages = {}, month = {}, note = {under review}, abstract = {}, keywords = {under-review}, source = {}, doi = {}, url = {https://wp.me/p5M0LV-1Tg}, eprint = {1905.13123}, archivePrefix = {arXiv}, primaryClass = {math.NT}, } Abstract. For a real number $x$ and set of natural numbers $A$, define $x∗A = \{xa \mod 1 \mid a \in A\}\subseteq [0,1)$. We consider relationships between $x$, $A$, and the order-type of $x∗A$. For example, for every irrational $x$ and order-type $\alpha$, there is an $A$ with $x ∗ A \simeq\alpha$, but if $\alpha$ is an ordinal, then $A$ must be a thin set. If, however, $A$ is restricted to be a subset of the powers of $2$, then not every order type is possible, although arbitrarily large countable well orders arise. Burak Kaya successfully defended his dissertation, “Cantor minimal systems from a descriptive perspective,” on March 24, 2016, earning his Ph.D. degree at Rutgers University under the supervision of Simon Thomas. The dissertation committee consisted of Simon Thomas, Gregory Cherlin, Grigor Sargsyan and myself, as the outside member. 
The defense was very nice, with an extremely clear account of the main results, and the question session included a philosophical discussion on various matters connected with the dissertation, including the principle attributed to Gao that any collection of mathematical structures that has a natural Borel representation has a unique such representation up to Borel isomorphism, a principle that was presented as a Borel-equivalence-relation-theory analogue of the Church-Turing thesis. Burak Kaya | MathOverflow profile | ar$\chi$iv profile Abstract. In recent years, the study of the Borel complexity of naturally occurring classification problems has been a major focus in descriptive set theory. This thesis is a contribution to the project of analyzing the Borel complexity of the topological conjugacy relation on various Cantor minimal systems. We prove that the topological conjugacy relation on pointed Cantor minimal systems is Borel bireducible with the Borel equivalence relation $\newcommand\d{\Delta^+_{\mathbb{R}}}\d$. As a byproduct of our analysis, we also show that $\d$ is a lower bound for the topological conjugacy relation on Cantor minimal systems. The other main result of this thesis concerns the topological conjugacy relation on Toeplitz subshifts. We prove that the topological conjugacy relation on Toeplitz subshifts with separated holes is a hyperfinite Borel equivalence relation. This result provides a partial affirmative answer to a question asked by Sabok and Tsankov. As pointed Cantor minimal systems are represented by properly ordered Bratteli diagrams, we also establish that the Borel complexity of equivalence of properly ordered Bratteli diagrams is $\d$.
Having $$f(n) = \sum_{k=0}^n g_n(k), \; g_n(x) = \min(2^x, 2^{2^{n-x}})$$ I want to know whether $\mathcal O(f(n)) \subsetneq \mathcal O(2^n)$. Since $g_n(x) \le 2^x$ it is at least $f(n) \in \mathcal O(2^{n+1}-1) = \mathcal O(2^n)$. Let $x_n$ be the maximum point of $g_n(x)$, which is where $2^x = 2^{2^{n-x}}$. We get $$f(n) = \sum_{k=0}^{\lfloor x_n \rfloor} 2^k + \sum_{k=0}^{n - \lfloor x_n \rfloor} 2^{2^k}$$ Since $x_n$ solves $x + \log_2(x) = n$ we get for the second sum $$\sum_{k=0}^{n - \lfloor x_n \rfloor} 2^{2^k} = \sum_{k=0}^{ \lfloor \log_2(x_n) \rfloor} 2^{2^k} \le \sum_{k=0}^{ \lfloor x_n \rfloor} 2^k$$ Therefore $f(n) \in \mathcal O \big( \sum_{k=0}^{\lfloor x_n \rfloor} 2^k\big) = \mathcal O \big(2^{\lfloor x_n \rfloor} \big)$. I'm quite unsure about the following lines, especially since this is the first time I've come across the Lambert-W-function, which is needed to solve $x + \log_2(x) = n$ (according to WolframAlpha): $$x_n = {W(2^n \log(2)) \over\log(2)}$$ From this paper, section 2.8, I've got $$ W(x) \in \mathcal O \Bigg(\log(x)-\log(\log(x))+{\log(\log(x)) \over \log(x)}\Bigg)$$ And plugging everything together (omitting constants): $$ f(n) \in \mathcal O\big(2^{ W(2^n)}\big) = \mathcal O\Bigg(2^{ \log(2^n)-\log(\log(2^n))+{\log(\log(2^n)) \over \log(2^n)}}\Bigg) = \mathcal O\Big(2^n \cdot 2^{-\log(n)}\cdot \sqrt[n]{2^{\log(n)}}\Big) = \mathcal O\Big(2^n \cdot {1 \over n} \cdot 1\Big) = \mathcal O\Big({2^n \over n}\Big)$$ Is this correct? Is there some easy way to do it without Lambert-W?
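Not a proof, but a quick numeric sanity check of the conclusion: the ratio $f(n)\,n/2^n$ stays bounded, consistent with $f(n) \in \mathcal O(2^n/n)$. (The term is computed via the smaller exponent, so the astronomically large $2^{2^{n-k}}$ is never actually built.)

```python
def f(n):
    # min(2^k, 2^(2^(n-k))) equals 2^min(k, 2^(n-k)), so we compare
    # exponents instead of materializing the huge second power
    return sum(2 ** min(k, 2 ** (n - k)) for k in range(n + 1))

ratios = [f(n) * n / 2 ** n for n in range(4, 61)]
```

The ratios start at $3.25$ for $n=4$ and drift downward, supporting the $\mathcal O(2^n/n)$ bound derived above.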
Refractive index manifestly plays a role in Mie scattering: if the suspended colloids have the same refractive index and characteristic impedance as the surrounding fluid, the whole system is electromagnetically homogeneous, and there is no scattering. For nonmagnetic materials, this statement is the same as that of a homogeneous refractive index. So I believe your observations will mostly be explained by the dependence of Mie scattering on refractive index. The following is a summary of Chapter 13 of Born and Wolf, "Principles of Optics". You calculate the power scattered from a spherical, dielectric particle of refractive index $n_s$ steeped in a medium of index $n_0$ through its effective scattering cross section: $$\bar{Q}(\frac{n_s}{n_0},\,q) = \frac{2}{q^2}\operatorname{Re}\left(\sum\limits_{\ell=1}^\infty\,(-i)^{\ell+1}\,(\ell+1)\left(\mathscr{E}_\ell\left(\frac{n_s}{n_0},\,q\right)+\mathscr{M}_\ell\left(\frac{n_s}{n_0},\,q\right)\right)\right)$$ and you use this quantity by multiplying the actual cross-sectional area of the sphere presented to an incoming plane wave by $\bar{Q}(\frac{n_s}{n_0},\,q)$ and the reflexion intensity calculated from the normal-incidence Fresnel equations. The size parameter for the sphere is: $$q=\frac{2\,\pi\,n_0\,r_s}{\lambda}$$ i.e. the sphere's radius expressed as the corresponding radian delay in the suspending liquid.
$\mathscr{E}_\ell$ and $\mathscr{M}_\ell$ are the complicated expressions: $$\mathscr{E}_\ell(\rho,\,q) = i^{\ell+1}\,\frac{2\,\ell+1}{\ell\,(\ell+1)}\,\frac{\rho\,{\rm Re}(\zeta_\ell^\prime(q))\,{\rm Re}(\zeta_\ell(\rho\,q))-{\rm Re}(\zeta_\ell(q))\,{\rm Re}(\zeta_\ell^\prime(\rho\,q))}{\rho\,\zeta_\ell^\prime(q)\,{\rm Re}(\zeta_\ell(\rho\,q))-\zeta_\ell(q)\,{\rm Re}(\zeta_\ell^\prime(\rho\,q))}$$ $$\mathscr{M}_\ell(\rho,\,q) = i^{\ell+1}\,\frac{2\,\ell+1}{\ell\,(\ell+1)}\,\frac{\rho\,{\rm Re}(\zeta_\ell(q))\,{\rm Re}(\zeta_\ell^\prime(\rho\,q))-{\rm Re}(\zeta_\ell^\prime(q))\,{\rm Re}(\zeta_\ell^\prime(\rho\,q))}{\rho\,\zeta_\ell(q)\,{\rm Re}(\zeta_\ell(\rho\,q))-\zeta_\ell^\prime(q)\,{\rm Re}(\zeta_\ell(\rho\,q))}$$ where $\zeta(z) = \sqrt{\frac{\pi\,z}{2}}\,H_\ell^{(1)}(z)$ and $H_\ell^{(1)}(z) = J_\ell(z)+i\,Y_\ell(z)$ is the Hankel function of the first kind. I was interested in Mie scattering by conductive spheres, which means $\rho = \frac{n_s}{n_0}$ is complex, and I used the Lentz-Thompson recurrence method to calculate the complex-argument Bessel functions stably. Here are my results: The asymptotic normalised cross section as $q\to\infty$ in all cases is $2$. This is multiplied by the simple-minded Fresnel coefficient: $$|\Gamma|=\left(\frac{n_s-n_0}{n_s+n_0}\right)^2$$ So not only does the asymptotic strength of the reflexion $|\Gamma|$ get very small for $n_s\approx n_0$, the colloids need to be very big relative to the light wavelength for the reflexion to approach its asymptotic value: for very small index difference, the Rayleigh behaviour (i.e. scattering is small and varies like $1/\lambda^4$) prevails even for very large colloids.
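As an illustration of how weak the scattering gets near index matching, here is a small sketch (the polystyrene-in-water numbers and wavelength are my own illustrative assumptions) computing the size parameter and the asymptotic Fresnel factor from the formulas above:

```python
import math

def size_parameter(r_s, wavelength, n0):
    # q = 2 pi n0 r_s / lambda: the radius as a radian delay in the liquid
    return 2 * math.pi * n0 * r_s / wavelength

def fresnel_factor(ns, n0):
    # asymptotic (large-q) reflexion strength, normal-incidence Fresnel
    return ((ns - n0) / (ns + n0)) ** 2

q = size_parameter(500e-9, 532e-9, 1.33)       # 500 nm bead, green light, water
gamma_mismatched = fresnel_factor(1.59, 1.33)  # polystyrene in water
gamma_matched = fresnel_factor(1.34, 1.33)     # nearly index-matched colloid
```

Shrinking the index contrast from polystyrene-in-water to a nearly matched colloid cuts the asymptotic reflexion strength by more than two orders of magnitude, before even accounting for the slower approach to the $q\to\infty$ limit.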
The integral $$\int_0^\infty \frac{dx}{1 + x^4} = \frac{\pi}{2\sqrt2}$$ can be evaluated both by a complex method (residues) and by a real method (partial fraction decomposition). The complex method works also for the integral $$\int_0^\infty \frac{dx}{1 + x^3} = \frac{2\pi}{3\sqrt3}$$ but partial fraction decomposition does not give convergent integrals. I would like to know if there is some real method for evaluating this last integral. Make the substitution $x = \frac{1}{t}$ and you get $$ \int_{0}^{\infty} \frac{t}{1+t^3} \text{d}t$$ Write the one you want as $$ \int_{0}^{\infty} \frac{1}{1+t^3} \text{d}t$$ Now you can add both and cancel that pesky $1+t$ factor. btw, a straightforward approach using partial fractions also works. You consider $$F(x) = \int_{0}^{x} \frac{1}{1+t^3} \text{d}t$$ Using partial fractions you can find that (I used Wolfram Alpha, I admit) $$F(x) = \frac{1}{6}\left(2\log(x+1) - \log(x^2 - x + 1) + 2\sqrt{3} \arctan\left(\frac{2x-1}{\sqrt{3}}\right)\right) + \frac{\pi}{6\sqrt{3}}$$ Now as $x \to \infty$, we have that $2\log(x+1) - \log(x^2 - x + 1) \to 0$.
Note that for $a > 0$, $$\int_0^N \frac{1}{x+a}\ dx = \ln(N+a) - \ln(a) = \ln(N) - \ln(a) + o(1)\ \text{as} \ N \to \infty$$ while $$\eqalign{\int_0^N \frac{x+a}{(x+a)^2 + b^2}\ dx &= \frac{1}{2} \left(\ln((N+a)^2+b^2) - \ln(a^2+b^2)\right)\cr &= \ln(N) - \frac{1}{2}\ln(a^2+b^2) + o(1) \ \text{as} \ N \to \infty\cr}$$ and (if $b > 0$) $$ \eqalign{\int_0^N \frac{1}{(x+a)^2+b^2}\ dx = \frac{\arctan\left(\frac{N+a}{b}\right) - \arctan\left(\frac{a}{b}\right)}{b} = \frac{\pi}{2b} - \frac{\arctan\left(\frac{a}{b}\right)}{b} + o(1) \ \text{as} \ N \to \infty\cr}$$ In particular, from the partial fraction decomposition $$ \frac{1}{1+x^3} = \frac{1/3}{x+1} + \frac{(2-x)/3}{x^2 - x + 1} = \frac{1/3}{x+1} + \frac{1/2}{(x-1/2)^2+3/4} - \frac{(x-1/2)/3}{(x-1/2)^2 + 3/4}$$ you get $$ \int_0^N \frac{1}{1+x^3} \ dx = \frac{\ln(N) - \ln(1)}{3} + \frac{\pi/2 + \arctan(1/\sqrt{3})}{\sqrt{3}} - \frac{\ln(N) - \frac{1}{2}\ln((1/2)^2 + 3/4)}{3} + o(1)$$ i.e. $$\int_0^\infty \frac{1}{1+x^3} \ dx = \frac{\pi}{2\sqrt{3}} + \frac{\arctan(1/\sqrt{3})}{\sqrt{3}} = \frac{2 \pi}{3 \sqrt{3}}$$ For what it's worth: Your integral evaluates in terms of the sine function: $$\int\limits_0^\infty \frac{1}{1+x^a}=\frac{\pi}{a}\csc\frac{\pi}{a}$$ refer to this question and the link in it. I would like to know if there is some real method for evaluating this last integral. Actually, all integrals of the form $\displaystyle\int_0^\infty\frac{x^n}{1+x^m}dx$ can be solved by substituting $t=\dfrac1{1+x^m}$ , and then recognizing the expression of the beta function in the new integral, which can be written as a product of gamma functions. Then we use the reflection formula in order to finally arrive at the desired result, $I=\dfrac\pi m\cdot\csc\left[(n+1)\dfrac\pi m\right]$ — See my answer here for more information.
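A quick numeric check of the result: folding $[1,\infty)$ onto $[0,1]$ via $x=1/t$ reduces the integral to $\int_0^1 \frac{1+t}{1+t^3}\,dt = \int_0^1 \frac{dt}{t^2-t+1}$, which a composite Simpson's rule confirms equals $\frac{2\pi}{3\sqrt3}$:

```python
import math

def simpson(g, a, b, n=2000):
    # composite Simpson's rule with n (even) panels
    h = (b - a) / n
    s = g(a) + g(b) + sum((4 if i % 2 else 2) * g(a + i * h) for i in range(1, n))
    return s * h / 3

# fold [1, oo) onto [0, 1] via x = 1/t, so I = int_0^1 (1+t)/(1+t^3) dt
approx = simpson(lambda t: (1 + t) / (1 + t ** 3), 0.0, 1.0)
exact = 2 * math.pi / (3 * math.sqrt(3))
```

The folded integrand is smooth on $[0,1]$, so Simpson's rule converges rapidly and matches the closed form to many digits.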
☕ Mathematical calculations to cool coffee How many ice cubes do you need to quickly cool down a hot beverage to its perfect drinking temperature? Let's use physics to find the exact number of ice cubes to drop into a scalding hot cup of coffee or tea to make it the perfect temperature. The beverage The hot beverage must lose a certain amount of energy in order to decrease its temperature to a warm temperature. The energy lost is determined by the difference in temperature. The equation is $$Q_{beverage} = c_{water} \times m_{beverage} \times \Delta T_{beverage}$$ The specific heat of water, $c_{water}$, is 4.2 $J / g / °C$. The mass of a typical cup of coffee or tea, $m_{beverage}$, is typically around 480 grams. The change in temperature, $\Delta T_{beverage}$, is the change between the initial brewing temperature and the final drinking temperature. $$Q_{beverage} = c_{water} \times m_{beverage} \times (T_{i}-T_{f})$$ The ice Ice melts by absorbing heat, or energy. The heat absorbed by ice is given by $$ Q_{ice} = m_{ice} \times Q_{fusion}$$ where $Q_{fusion}$ is the enthalpy of fusion for ice, which is 334 $J/g$. The enthalpy of fusion is simply the amount of energy needed to transform ice into cold water. Once the ice changes into cold water, it will begin to warm up by absorbing heat from the beverage. As before, the water will absorb heat according to the equation $$Q_{ice-water} = c_{water} \times m_{ice} \times \Delta T_{ice}$$ In this case, the change in temperature, $\Delta T_{ice}$, is simply the final temperature, $T_{f}$, since ice has a temperature of 0°C. $$Q_{ice-water} = c_{water} \times m_{ice} \times T_{f}$$ The beverage + ice When we reach equilibrium, it means that the total energy lost by the beverage equals the sum of the energy of the ice and the cold water (once the ice has transformed), $Q_{beverage}= Q_{ice-water} + Q_{ice}$.
When we plug in the equations above we get$$ c_{water} \times m_{beverage} \times (T_{i}-T_{f}) = m_{ice} \times Q_{fusion} \\ + c_{water} \times m_{ice} \times T_{f} $$ The only unknown is the mass of the ice, $m_{ice}$, which we can solve for to get $$ m_{ice} = \frac{c_{water} \times m_{beverage} \times (T_{i}-T_{f})}{Q_{fusion} + c_{water} \times T_{f}} $$ How many ice cubes do you need? Coffee is initially at a temperature of 85°C. Tea has a similarly high, undrinkable temperature. The perfect drinking temperature is 50°C. Given a typical cup of tea is about 480 grams, you need about 4 regular-sized ice cubes. (A regular ice cube from an ice tray is about 30 g.) How fast does an ice cube melt? How much time does it take for the ice to melt? There are several modes of heat transfer - conduction, convection, radiation. It turns out that the melting here is described reasonably well by conduction across a thin boundary layer, as given by Fourier's law of thermal conduction: $$ \dot{Q}_{melt} = k A \frac{d T}{d x} $$ where $\dot{Q}_{melt}$ is the heat flow and $k$ is the thermal conductivity. For water, the thermal conductivity is 0.591 $W / m / K$. Ice cube area The area of the ice cube can be approximated by a cube of the same mass. That is, $$ A = 6 \times (m_{ice})^{2/3} / 100^2 $$ where the mass of each one is $m_{ice}$ and the $100^2$ is for converting $cm^2$ to $m^2$. Though, really, the surface area of the ice cubes depends on time (the ice cube shrinks). Temperature gradient The gradient of the temperature is between the surface of the ice and the rest of the hot water. Though this gradient can be pretty complicated (heat diffusion occurs), we can approximate it from the literature. Here is an infrared image of ice melting at room temperature 1: As you can see, the ice-air boundary is typically 0.002 meters, so we can approximate the ice-water boundary as $dx = 0.002$.
The temperature difference is just the difference between the water and the ice, which is at 0°C, so $dT = T_{i}$. How long does it take to melt? Finally, to get the time we can just use the equation from before and divide: $$ t_{melt} = Q_{ice} / \dot{Q}_{melt} $$ so that $$ t_{melt} = \frac{m_{ice} \times Q_{fusion}}{k \times T_{i} \times 6 \times m_{ice}^{2/3} / 100^2 / dx } $$ which is measured in seconds. A typical ice cube weighs about 30 grams and a typical initial temperature for heated water is about 85°C. So a typical ice cube melts in about 70 seconds. Theory versus Experiment I hooked up an Arduino to a DS18B20 temperature probe and measured the change in temperature over time of 480 grams of water, with (red line) and without (blue line) ice. The ice melted in about 65 seconds and brought the temperature down to 50.8°C. This is in good agreement with the theoretical time to melt of 70 seconds and the theoretical final temperature of 50°C. In comparison, the water without ice, which started at about 77°C, took over 15 minutes to get to the same temperature! References Badarch, Ayurzana. (2017). Application of macro and mesoscopic numerical models to hydraulic problems with solid substances. 10.13140/RG.2.2.27837.36325. [return]
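For anyone who wants to reproduce the numbers, here is a short sketch plugging the post's values into the energy balance (with the melted-ice term $c_{water} m_{ice} T_f$) and the melt-time estimate:

```python
c_water = 4.2       # J/g/°C, specific heat of water
m_bev = 480.0       # g, mass of the beverage
T_i, T_f = 85.0, 50.0   # °C, brewing and drinking temperatures
Q_fusion = 334.0    # J/g, enthalpy of fusion of ice
k = 0.591           # W/m/K, thermal conductivity of water
dx = 0.002          # m, boundary-layer thickness
m_cube = 30.0       # g, one regular ice cube

# ice needed: beverage heat loss = melting + warming the melt up to T_f
m_ice = c_water * m_bev * (T_i - T_f) / (Q_fusion + c_water * T_f)
n_cubes = m_ice / m_cube

# melt time for one cube: latent heat / conductive heat flow
area = 6 * m_cube ** (2 / 3) / 100 ** 2   # m^2, cube of the same mass
q_dot = k * T_i * area / dx               # W
t_melt = m_cube * Q_fusion / q_dot        # s
```

This reproduces roughly 130 g of ice (a bit over 4 cubes) and a melt time of about 70 seconds, matching the experiment.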
The genetic variance of a quantitative trait (the quantitative trait in question is fitness) can be expressed as the sum of two components, the dominance and additive variance: $$\sigma_D^2 + \sigma_A^2 = \sigma^2$$ , where $\sigma^2$ is the genetic variance, $\sigma_D^2$ is the dominance variance and $\sigma_A^2$ is the additive variance. $\sigma_D^2$ and $\sigma_A^2$ are given by $$\sigma_D^2 = x^2(1-x)^2(2 \cdot W_{12} - W_{11} - W_{22})^2$$ $$\sigma_A^2 = 2x(1-x)(xW_{11}+(1-2x)W_{12} - (1-x)W_{22})^2$$ , where $W_{11}$, $W_{12}$ and $W_{22}$ are the fitnesses of the three possible genotypes and $x$ and $1-x$ give the allele frequencies. Question The above definition makes sense for one bi-allelic locus only. How are $\sigma_D^2$, $\sigma_A^2$ and $\sigma^2$ defined for $n$ bi-allelic loci? Is it: $$\sigma^2 = \sum_{i=1}^n \sigma_i^2$$ $$\sigma_A^2= \sum_{i=1}^n \sigma_{Ai}^2$$ $$\sigma_D^2 = \sum_{i=1}^n \sigma_{Di}^2$$ Here is a related question
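One sanity check that is easy to run for the single-locus case: at Hardy-Weinberg genotype frequencies, the two components as written do sum to the total genotypic variance. A short sketch with illustrative allele frequency and fitness values (my own numbers, not from the question):

```python
x = 0.3                          # frequency of one allele (illustrative)
W11, W12, W22 = 1.0, 0.9, 0.5    # genotype fitnesses (illustrative)

var_D = x**2 * (1 - x)**2 * (2 * W12 - W11 - W22) ** 2
var_A = 2 * x * (1 - x) * (x * W11 + (1 - 2 * x) * W12 - (1 - x) * W22) ** 2

# total genotypic variance at Hardy-Weinberg frequencies x^2 : 2x(1-x) : (1-x)^2
freqs = [x**2, 2 * x * (1 - x), (1 - x) ** 2]
fits = [W11, W12, W22]
mean_fit = sum(f * w for f, w in zip(freqs, fits))
var_total = sum(f * (w - mean_fit) ** 2 for f, w in zip(freqs, fits))
```

Here `var_A + var_D` equals `var_total` exactly, which is the decomposition the question's formulas encode for one locus.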
There is no mistake on your exercise sheet. To find a solution, you have to postulate a linear dependence of the $y^i$ on the $x^j$, $y^i= a^i_j x^j$, where the matrix $a$ of coefficients $a^i_j$ is constant. You have $x^j = (a^{-1})^j_i y^i$. Now simply express the initial terms as functions of the $y^i$ and impose the diagonality constraints; you have: $\sum\limits_{j=1}^3a_j^i a_j^k= \delta ^{ik} \tag{1}$ for the differential term constraint, and: $\sum\limits_{j=1}^3(a^{-1})^j_i (a^{-1})^{[j+1]}_k= 0 \tag{2}$ for the quadratic $y^2$ term constraint, where $[j+1]=(j+1) \mod 3$. By noting that $a_j^k = (a^t) _k^j$ and $(a^{-1})^{[j+1]}_k = (a^{(-1) t})_{[j+1]}^k$, $(1)$ and $(2)$ can be rewritten: $\sum\limits_{j=1}^3a_j^i (a^t) _k^j= \delta ^i_k\tag{3}$ and: $\sum\limits_{j=1}^3(a^{-1})^j_i (a^{(-1)t})_{[j+1]}^k= 0 \tag{4}$ Equation $(3)$ means simply that $aa^t= \mathbb Id$, so $a$ is an orthogonal matrix. For simplicity, we may search for a matrix such that $a=a^t=a^{-1}$, so we have only to check equation $(4)$, rewritten: $\sum\limits_{j=1}^3a^j_i a_{[j+1]}^k= 0 \tag{5}$ A solution is: $a = \dfrac{1}{3}\begin {pmatrix} 1&-2&-2\\-2&1&-2\\-2&-2&1 \end {pmatrix}$
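The orthogonality claim can be verified exactly with rational arithmetic; this short sketch (my own check, not part of the original answer) confirms that $a$ is symmetric and satisfies $a^2 = \mathbb{Id}$, hence $a = a^t = a^{-1}$ and condition $(1)$/$(3)$ holds:

```python
from fractions import Fraction

# the candidate matrix a = (1/3) [[1,-2,-2],[-2,1,-2],[-2,-2,1]]
a = [[Fraction(v, 3) for v in row]
     for row in ([1, -2, -2], [-2, 1, -2], [-2, -2, 1])]

def matmul(p, q):
    return [[sum(p[i][j] * q[j][k] for j in range(3)) for k in range(3)]
            for i in range(3)]

identity = [[Fraction(int(i == k)) for k in range(3)] for i in range(3)]
aa = matmul(a, a)   # a is symmetric, so a a^t = a^2
```

Using `Fraction` avoids any floating-point tolerance: `aa` is exactly the identity matrix.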
Answer $$\frac{\cot\alpha+1}{\cot\alpha-1}=\frac{1+\tan\alpha}{1-\tan\alpha}$$ The identity is proved by representing the left side in terms of $\tan\alpha$. Work Step by Step $$\frac{\cot\alpha+1}{\cot\alpha-1}=\frac{1+\tan\alpha}{1-\tan\alpha}$$ We find that the left side comprises only $\cot\alpha$, while the right side comprises only $\tan\alpha$, so we only have to choose one side and represent it in terms of the other. Here I would choose to represent the left side in terms of $\tan\alpha$. $$A=\frac{\cot\alpha+1}{\cot\alpha-1}$$ We would do so by using the identity $$\cot\alpha=\frac{1}{\tan\alpha}$$ which means $$A=\frac{\frac{1}{\tan\alpha}+1}{\frac{1}{\tan\alpha}-1}$$ $$A=\frac{\frac{1+\tan\alpha}{\tan\alpha}}{\frac{1-\tan\alpha}{\tan\alpha}}$$ $$A=\frac{1+\tan\alpha}{\tan\alpha}\times\frac{\tan\alpha}{1-\tan\alpha}$$ $$A=\frac{1+\tan\alpha}{1-\tan\alpha}$$ The left side is equal to the right side. The trigonometric expression thus is an identity.
Let $A \in M_n(\Bbb R)$. How can I prove that 1) if $ \forall {b \in \Bbb R^n}, b^{t}Ab>0$, then all eigenvalues are $>0$; 2) if $A$ is orthogonal, then all eigenvalues are equal to $-1$ or $1$? (1): Let $Av=\lambda v$ with $v\not=0$, i.e. $\lambda$ is an eigenvalue of $A$. Then $$0<v^t Av=\lambda v^t v=\lambda \|v\|^2.$$ Since $\|v\|^2>0$, we get $\lambda>0$. (2): You certainly mean that the determinant of $A$ is $\pm 1$, since the statement about the eigenvalues is not true; consider the orthogonal matrix $$\begin{pmatrix}\cos\theta & -\sin\theta\\ \sin\theta & \cos\theta\end{pmatrix}$$ This represents a rotation and therefore has complex eigenvalues. But if $A$ is orthogonal, then $A^tA=AA^t=I$; therefore, applying $\det$ to both sides and using the multiplication law for determinants, we obtain $$(\det A)^2 = 1$$ Therefore $\det A=\pm 1$. Let $\lambda$ be an eigenvalue of $A$ and $Ax=\lambda{x}$. (1) ${x}^{t}Ax=\lambda{x^tx}>0.$ Because $x^tx>0$, then $\lambda>0$. (2) $|\lambda|^2x^tx=(Ax)^{t}Ax={x}^{t}A^{t}Ax=x^tx.$ So $|\lambda|=1$. Then $\lambda=-1$ or $1$. Attention: $A$ in (2) may have complex eigenvalues with absolute value 1. The first part of the problem is well solved above, so I want to focus on the second part, which was only partially solved. An orthogonal transformation is either a rotation or a reflection. I will focus on 3D, which has lots of practical use. Let us then assume that $A$ is an orthogonal matrix in $\mathbb{R}^{3 \times 3}$. We find that its eigenvalues are either $1, \text{e}^{\pm i \theta}$ for a rotation or $\pm 1$ for a reflection.
For a rotation: We have the following sequence of equalities (since $\det A = 1$) \begin{eqnarray*} \det(I -A) &=& \det(A) \det (I-A) \\ &=& \det(A^T) \det(I-A) \\ &=& \det(A^T - I) \\ &=& \det(A - I) \\ &=& -\det(I-A) \quad , \text{since 3 is odd}, \end{eqnarray*} So $\det(I-A)=0$ and $\lambda_1=1$ is an eigenvalue of $A$. Now, let $u_1$ be a unit eigenvector for $\lambda_1$, so $A u_1 = u_1$. We show that the matrix $A$ is a rotation by an angle $\theta$ around this axis $u_1$. Let us form a new coordinate system using $u_1, u_2, u_3 = u_1 \times u_2$, where $u_2$ is a vector orthogonal to $u_1$, so the new system is right-handed (has determinant = 1). The transformation between the old system $\{e_1, e_2, e_3\}$ and the new system is given by the matrix $B$ with column vectors $u_1, u_2$, and $u_3$. So we have that $B e_i = u_i$. Let us call the coordinate transformation (similarity) matrix $C = B^{-1}AB$. Then \begin{eqnarray*} C e_1 = B^{-1} A B e_1 = B^{-1} A u_1 = B^{-1} u_1 = e_1. \end{eqnarray*} The fact that $C e_1 = e_1$ means two things: The first column of $C$ is $(1,0,0)^T$. The first row of $C$ is $(1,0,0)$, since $C$ is orthogonal (being a product of orthogonal matrices). Then $C$ is a matrix of the type \begin{eqnarray*} C = \left ( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & a & b \\ 0 & c & d \\ \end{array} \right ) \end{eqnarray*} Since $A$ is orthogonal, $C$ is orthogonal, and so the vectors $(a,c)^T$ and $(b,d)^T$ are orthogonal, and since \begin{eqnarray*} 1 = \det A = \det C = ad - bc \end{eqnarray*} we have that the minor matrix with entries $a,b,c,d$ is a rotation (orthogonal with determinant 1).
Rotations in 2D are of the form \begin{eqnarray*} \left ( \begin{array}{cc} a & b \\ c & d \end{array} \right ) = \left ( \begin{array}{cc} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{array} \right ) \end{eqnarray*} then \begin{eqnarray*} B^{-1}A B = \left ( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & \cos \theta & -\sin \theta \\ 0 & \sin \theta & \cos \theta \end{array} \right ) \end{eqnarray*} But since the eigenvalues of $B^{-1}AB$ are the same as those of $A$, we find the eigenvalues of this matrix and those are the eigenvalues of $A$. For the eigenvalues of this matrix we have that \begin{eqnarray*} \det ( C - \lambda I)= 0 \implies (\cos \theta - \lambda)^2 + \sin^2 \theta = 0. \end{eqnarray*} That is, \begin{eqnarray*} 1 -2 \lambda \cos \theta + \lambda^2 = 0 \end{eqnarray*} Then \begin{eqnarray*} \lambda = \frac{2 \cos \theta \pm \sqrt{4 \cos^2 \theta - 4}}{2} = \cos \theta \pm i \sin \theta = \text{e}^{\pm i \theta}. \end{eqnarray*} This is the interesting result by Euler, who showed that every rigid-body rotation can be written as a rotation about one axis (here $u_1$, by some angle $\theta$). Eigenvalues of a reflection. If we repeat all the steps above for a reflection, we find that now \begin{eqnarray*} B^{-1}A B = \left ( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & \cos 2 \theta & \sin 2 \theta \\ 0 & \sin 2 \theta & -\cos 2 \theta \end{array} \right ) \end{eqnarray*} Now \begin{eqnarray*} \det ( C - \lambda I)= 0 \implies -(\cos 2 \theta - \lambda)(\cos 2 \theta + \lambda ) - \sin^2 2 \theta = 0. \end{eqnarray*} and then \begin{eqnarray*} -\sin^2 2 \theta - \cos^2 2 \theta + \lambda^2 =0 \end{eqnarray*} and from here \begin{eqnarray*} \lambda = \pm 1. \end{eqnarray*} Please see the following reference which I used for this answer: Orthogonal Matrices Both statements are false as currently written. The following matrix serves as a counter-example for both.
$$R = \begin{pmatrix}\cos 1 & -\sin 1 \\ \sin 1 & \cos 1\end{pmatrix}$$ The first statement needs to be modified so that the matrix has all real eigenvalues, otherwise it is false, as noted by Algebraic Pavel in the comments. Let us express an arbitrary non-zero vector $\mathbf{b}$ in polar form $$\mathbf{b} = r\begin{pmatrix}\cos \phi \\ \sin\phi \end{pmatrix}$$ Then for the above matrix $R$, we get $$\mathbf{b}^\mathrm{T}R\mathbf{b}= r^2\left( \cos(\phi + 1)\cos\phi + \sin(\phi+1)\sin\phi\right) = r^2\cos 1 > 0 $$ The eigenvalues are both complex however. If we assume that all eigenvalues are real, then the arguments given by TooOldForMath and gaoxinge work fine. The second statement should say that the determinant of an orthogonal matrix is $\pm 1$ and not the eigenvalues themselves. $R$ is an orthogonal matrix, but its eigenvalues are $e^{\pm i}$. The eigenvalues of an orthogonal matrix need to have modulus one. If the eigenvalues happen to be real, then they are forced to be $\pm 1$. Otherwise though, they are free to lie anywhere on the unit circle.
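The counter-example is easy to verify numerically: the eigenvalues of $R$ have modulus one but are not real, while $\mathbf{b}^{\mathrm T}R\,\mathbf{b} = r^2\cos 1 > 0$ for every nonzero $\mathbf b$. A quick check (my own sketch):

```python
import numpy as np

R = np.array([[np.cos(1.0), -np.sin(1.0)],
              [np.sin(1.0),  np.cos(1.0)]])

eigvals = np.linalg.eigvals(R)   # e^{+i} and e^{-i}, both of modulus one

# quadratic form b^T R b for a few random nonzero vectors b (as columns)
rng = np.random.default_rng(0)
b = rng.standard_normal((2, 5))
quad_forms = np.einsum('ij,ik,kj->j', b, R, b)   # equals cos(1) * |b|^2
```

Every quadratic form comes out equal to $\cos(1)\,\|\mathbf b\|^2$, positive even though no eigenvalue of $R$ is a positive real number.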
The first series is positive, so it either converges absolutely or diverges. My gut reaction was to try some back-of-the-envelope comparison. Let's see: $\sum \frac{1}{e^{k}}$ converges, and $\frac{1}{e^{k^3}}\leq \frac{1}{e^k}$, so $\sum\frac{1}{e^{k^3}}$ converges. The exponential dominates $\sqrt{k}$, so personally, I would expect the series to converge. I would try an integral test, but it looks like an annoying function to integrate. So perhaps we can find a dominating series $\sum\frac{p(k)}{e^{k^3}}$ for some function $p(k)$ such that $\int_1^{\infty}p(x)e^{-x^3}\,dx$ is easy to compute. This is doable, but as user6312 pointed out, the Ratio Test will do the job as well. The second series is (eventually) alternating. Try the Alternating Series Test. To check for absolute convergence, try the Integral Test; you'll need a change of variable. Hint: if you set $u=\ln x$, what happens?
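Assuming the first series is $\sum_{k\ge 1}\sqrt{k}\,e^{-k^3}$ (an assumption; the series itself is not shown in this excerpt), a quick numeric sketch shows the Ratio Test ratios collapsing to zero and the partial sums stabilizing almost immediately:

```python
import math

# Terms a_k = sqrt(k) * exp(-k^3); ratios a_{k+1}/a_k vanish extremely fast.
def a(k):
    return math.sqrt(k) * math.exp(-k ** 3)

ratios = [a(k + 1) / a(k) for k in range(1, 6)]
assert all(r < 1 for r in ratios)
assert ratios[0] < 0.01          # already tiny at k = 1

# Partial sums stabilize almost immediately: the k = 6 term is ~4e-94.
s5 = sum(a(k) for k in range(1, 6))
s50 = sum(a(k) for k in range(1, 51))
assert abs(s50 - s5) < 1e-90
```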
Using any mathematical symbols and operations, combine the numbers 3, 5, 6, and 7, once each, to get 100. $\sqrt{7^6}-3^5$ $100=(57+3)/.6$ if a decimal point is allowed. With the numbers in order: $\frac{(3!)! - 5!}{6 \mod 7}$ EDIT: Added some "any processes" answers, some sillier than others: $76 - 3 + 5 = 100$ (octal numerals) $S(3*5*7-6)$ (successor function) $$-\frac{\log\left[\left(\log\underbrace{\sqrt{\sqrt{\cdots\sqrt6}}}_{100}\right) / \log(3!)\right]}{\log{(7-5)}}$$ (generic method for obtaining any positive integer with 3, 5, 6 and 7; layout borrowed from ffao) $(3 + 7) ^ {\left\lceil \frac{6}{5} \right\rceil}$ If the "floor" function is acceptable: $\left\lfloor 3\times5\times6.7 \right\rfloor$ $7!!-5×6/3!$ using the double factorial $35+67-2$ (the question doesn't say we can't use other digits) ${{6}\choose{3}} * 5 + \dot{7}$ $$ \left \lceil 357 * \sin(-6) \right \rceil = 100$$
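A quick sanity check (not part of the original answers) that several of the proposed expressions really evaluate to 100:

```python
import math

# sqrt(7^6) - 3^5 = 343 - 243
assert math.isqrt(7 ** 6) - 3 ** 5 == 100

# (57 + 3) / .6, allowing a decimal point
assert abs((57 + 3) / 0.6 - 100) < 1e-9

# ((3!)! - 5!) / (6 mod 7) = (720 - 120) / 6
assert (math.factorial(math.factorial(3)) - math.factorial(5)) // (6 % 7) == 100

# floor(3 * 5 * 6.7) = floor(100.5)
assert math.floor(3 * 5 * 6.7) == 100

# 7!! - 5 * 6 / 3!  where 7!! = 7 * 5 * 3 * 1 = 105
double_fact_7 = 7 * 5 * 3 * 1
assert double_fact_7 - 5 * 6 // math.factorial(3) == 100

# ceil(357 * sin(-6)), angle in radians: 357 * 0.2794... = 99.75...
assert math.ceil(357 * math.sin(-6)) == 100
```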
Actually, there is an exact meaning, but it is not always used in that sense. For two functors $\mathsf F,\mathsf G:\mathscr A\to \mathscr B$ a natural transformation is a morphism of functors $\eta:\mathsf F\to\mathsf G$ that is compatible with the functors in the obvious (sic!) way. For instance if $\mathsf F={\rm id}$ is the identity and $\mathsf G=(\underline{\quad} )^{*}$ is the dual of finite-dimensional vector spaces, then even though $\mathsf F(V)\simeq \mathsf G(V)$, there is no natural transformation between $\mathsf F$ and $\mathsf G$ that gives this isomorphism. On the other hand $\mathsf F$ is naturally isomorphic to $\mathsf D:=\mathsf G\circ\mathsf G$ via the natural transformation induced by the usual map to the double dual. Of course, often people say "there is a natural choice of" whatever. That usually means that the "choice" actually does not involve a "choice". In other words, two different people would be expected to make the same choice. There is however a danger involved that some authors overlook. There are situations when there are more than one natural choices to make. In other words, just because two choices are both natural, one should not assume that they are necessarily the same. For instance, often one can end up with $(-1)$-times the other choice (say for a map) and then whether they are equal or their sum is zero makes a big difference. For an example, consider choosing a generator of the infinite cyclic group, a.k.a., $(\mathbb Z,+)$. One might be led to believe that "the" natural choice of a generator for $(\mathbb Z,+)$ is $1\in \mathbb Z$, but as long as this is only a group there is no way to distinguish $1\in \mathbb Z$ and $-1\in \mathbb Z$. They both generate the group and they are each other's inverses. In other words, there are two natural choices of a generator of the group $(\mathbb Z,+)$. 
Of course, once it is a ring, then $1\in \mathbb Z$ is the unity, while $-1\in \mathbb Z$ is not, and there is only one choice for the unity, but actually then the question of this choice being natural is moot since there is only one choice at all, so it's kind of silly to say it is natural. Addendum (to answer the question raised in the comment below by unknown) A two dimensional vector space does not have a "natural" inner product. A two dimensional vector space with a chosen basis does: If $V$ is a vector space (over the field $k$) with basis $\mathbf v_1,\mathbf v_2\in V$ then one can define a "natural" inner product by $<\mathbf a,\mathbf b>:=a_1b_1+a_2b_2$ where $\mathbf a=a_1\mathbf v_1+a_2\mathbf v_2$ and $\mathbf b=b_1\mathbf v_1+b_2\mathbf v_2$. But this is only natural after the basis has been chosen. Basically the problem is that the definition of the inner product depends on the choice of the basis, so this definition is only natural if there is a natural choice of a basis (assuming there is no other structure present that could give a natural inner product; for more on this see the example of the wedge product below). For instance if $V$ is given with a basis as above, then there is a natural choice of a basis (called the dual basis) for $V^*$ the dual space of $V$: Let $\phi_1: V\to k$ be defined by $\phi_1(\mathbf v_1)=1$, $\phi_1(\mathbf v_2)=0$ and extended by linearity and similarly $\phi_2: V\to k$ be defined by $\phi_2(\mathbf v_1)=0$, $\phi_2(\mathbf v_2)=1$ and extended by linearity. So, if you already have a chosen basis on $V$ you can find a natural choice of a basis on $V^*$ and with that you can find a natural choice of an inner product, but this will not be a natural choice if you consider the vector space without the given basis. Otherwise, I agree, it is hard to tell what people mean by "natural". 
As said by many people in various answers, the essence is whether you can do the construction without making a sort of a random choice when choosing a different element would be equally good. In this example, to give an inner product you need to give a basis and unless you have some extra structure, choosing any given basis is equal to choosing any other, so the choice is non-natural. On the other hand if there is some extra structure on the vector space then there may be a natural choice for an inner product, or more generally for a bilinear form. For instance if your vector space is a space of differential forms on a manifold, then there is no natural choice of basis, but there is a natural choice of an alternating bilinear form: the wedge product. This is actually pretty good, because this means that it is possible to define this on a manifold locally: picking a chart is a non-natural choice and defining the wedge product of two differential forms locally seems like it depends on a lot of choices, but it ends up being independent of these in the sense that choosing a different chart you get the same wedge product it just looks different because it is in a different basis. Addendum 2 (I realized that this might be an interesting comment while writing this answer to another MO question). Here is an example of the importance of naturality: Suppose $M$ is a manifold and $U\subseteq M$ and open set. Then there is a natural homomorphism from the ring of regular functions on $M$ to the ring of regular functions on $U$. (If you like adjust "regular" for your favourite category; continuous, smooth, holomorphic, etc.). It can happen that this homomorphism is an isomorphism and then it has nice consequences. However, it is often important that in order to get the nice consequence one needs that the homomorphism induced by the embedding is an isomorphism and not that there exists some isomorphism. 
In other words, one could say that the natural homomorphism (=the one induced by the embedding $U\subseteq M$) is an isomorphism. For an explicit example where this matters see the above mentioned answer.
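To make the double-dual example above concrete (this is the standard construction, spelled out here for reference): for each finite-dimensional vector space $V$, define $$\eta_V : V \to V^{**}, \qquad \eta_V(v)(\phi) = \phi(v) \quad \text{for all } \phi \in V^*.$$ Naturality means that for every linear map $f : V \to W$ the square commutes: $$f^{**} \circ \eta_V = \eta_W \circ f.$$ No basis is chosen anywhere in this definition, which is exactly what makes $\eta$ natural; by contrast, an isomorphism $V \simeq V^*$ requires choosing a basis, and different choices give different isomorphisms.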
I found an algorithm that can compute the distance between two quantum states. It is based on a subroutine known as the swap test (a fidelity estimator, or inner product of two states; by the way, I don't understand what fidelity means). My question is about the inner product. How can I calculate the inner product of two quantum registers which contain different numbers of qubits? The description of the algorithm is found in this paper. Based on the 3rd step that appears in the image, I want to prove it by giving an example. Let: $|a| = 5$, $|b| = 5 $, and $ Z = 50 $ $$|a\rangle = \frac{3}{5}|0\rangle + \frac{4}{5}|1\rangle$$ $$|b\rangle = \frac{4}{5}|0\rangle + \frac{3}{5}|1\rangle $$ All we want is the fidelity of the two states $|\psi\rangle$ and $|\phi\rangle$, and the distance between $|a\rangle$ and $|b\rangle$ is given as $ {|a-b|}^2 = 2Z|\langle\phi|\psi\rangle|^2$, so $$|\psi\rangle = \frac{3}{5\sqrt{2}}|00\rangle + \frac{4}{5\sqrt{2}}|01\rangle + \frac{4}{5\sqrt{2}}|10\rangle + \frac{3}{5\sqrt{2}}|11\rangle$$ $$|\phi\rangle = \frac{5}{\sqrt{50}} (|0\rangle + |1\rangle) $$ Then how do I compute $$\langle\phi|\psi\rangle = ??$$
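The following sketch (my own, not taken from the paper) shows one consistent way the numbers work out, under two assumptions not visible in this excerpt: the usual convention $|\phi\rangle=(|a|\,|0\rangle-|b|\,|1\rangle)/\sqrt Z$ with a relative minus sign, and $\langle\phi|\psi\rangle$ understood as contracting only the first (ancilla) qubit of $|\psi\rangle$, leaving a one-qubit vector whose squared norm enters the distance formula:

```python
import math

# Unnormalized vectors a, b with |a| = |b| = 5, so Z = |a|^2 + |b|^2 = 50.
a = [3.0, 4.0]
b = [4.0, 3.0]
Z = sum(x * x for x in a) + sum(x * x for x in b)  # 50

# |psi> = (|0>|a_hat> + |1>|b_hat>)/sqrt(2); amplitudes on |00>,|01>,|10>,|11>
na = math.sqrt(sum(x * x for x in a))
nb = math.sqrt(sum(x * x for x in b))
psi = [a[0] / (na * math.sqrt(2)), a[1] / (na * math.sqrt(2)),
       b[0] / (nb * math.sqrt(2)), b[1] / (nb * math.sqrt(2))]

# |phi> = (|a| |0> - |b| |1>)/sqrt(Z)   (note the assumed minus sign)
phi = [na / math.sqrt(Z), -nb / math.sqrt(Z)]

# <phi|psi> contracts the first qubit only, leaving a 1-qubit vector:
v = [phi[0] * psi[0] + phi[1] * psi[2],   # coefficient of |0> in register 2
     phi[0] * psi[1] + phi[1] * psi[3]]   # coefficient of |1> in register 2

overlap_sq = v[0] ** 2 + v[1] ** 2
dist_sq = sum((x - y) ** 2 for x, y in zip(a, b))  # |a - b|^2 = 2
assert abs(2 * Z * overlap_sq - dist_sq) < 1e-12   # 2Z|<phi|psi>|^2 = |a-b|^2
```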
Answer $$\frac{\sec^4\theta-\tan^4\theta}{\sec^2\theta+\tan^2\theta}=\sec^2\theta-\tan^2\theta$$ We simplify the left side and find that the expression is an identity. Work Step by Step $$\frac{\sec^4\theta-\tan^4\theta}{\sec^2\theta+\tan^2\theta}=\sec^2\theta-\tan^2\theta$$ The left side is more complicated, so we simplify it. $$A=\frac{\sec^4\theta-\tan^4\theta}{\sec^2\theta+\tan^2\theta}$$ We have $a^4-b^4=(a^2-b^2)(a^2+b^2)$. So, $$A=\frac{(\sec^2\theta-\tan^2\theta)(\sec^2\theta+\tan^2\theta)}{\sec^2\theta+\tan^2\theta}$$ $$A=\sec^2\theta-\tan^2\theta$$ The two sides are thus equal (in fact, by the Pythagorean identity, both sides equal $1$). The expression is an identity.
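A numeric spot-check of the identity (not part of the original solution), at a few angles where $\sec\theta$ and $\tan\theta$ are defined:

```python
import math

# Check (sec^4 - tan^4)/(sec^2 + tan^2) == sec^2 - tan^2, which equals 1
# by the Pythagorean identity sec^2 = 1 + tan^2.
for theta in [0.3, 1.0, 2.0, 4.0]:
    sec, tan = 1 / math.cos(theta), math.tan(theta)
    lhs = (sec ** 4 - tan ** 4) / (sec ** 2 + tan ** 2)
    rhs = sec ** 2 - tan ** 2
    assert abs(lhs - rhs) < 1e-9
    assert abs(rhs - 1.0) < 1e-9
```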
Hello. There is no agreement on the meaning of the terms electrochemical potential and chemical potential (see for example http://web.mit.edu/6.730/www/ST04/Lectures/Lecture26.pdf). Proper definitions would call the chemical potential [tex]\mu\equiv\left(\frac{\partial U}{\partial n}\right)_{neutral}[/tex] (i.e., the variation in energy if the mass were not charged) and the electrochemical potential or Fermi level [tex]\overline{\mu}\equiv\left(\frac{\partial U}{\partial n}\right)_{charged}=\left(\frac{\partial U}{\partial n}\right)_{neutral} +q\phi \equiv F_n[/tex] (i.e., the actual variation in energy taking into account that the mass is charged, which is the only measurable observable). Ashcroft (p. 593) seems totally misleading because he uses the letter [tex]\mu[/tex] to refer to the electrochemical potential and then defines an electrochemical potential as [tex]\mu_e=\mu - e\phi[/tex]. The equilibrium condition (deduced from the fundamental thermodynamic relation in energetic form) [tex]dU=TdS - pdV+ \mu dn + Fz\phi dn=TdS - pdV + \overline{\mu} dn[/tex] imposes that the electrochemical potential be constant along the semiconductor, so the actual picture is the electrochemical potential being constant along the semiconductor while the valence and conduction band energies bend wherever an electric field exists. Has anyone else observed (suffered) this misleading (wrong?) point in Ashcroft? Thanks
Rabbits, triangles, and triplets. These three things are linked by an important series, often seen in shells and flowers. But what do the Fibonacci numbers (and their sequence) have to offer the world of finance? In mathematics, the Fibonacci series refers to the ordered sequence of numbers described by Leonardo of Pisa, an Italian mathematician born in the 12th century. \(0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, …\) Each element in the series is known as a Fibonacci number. The history of the Fibonacci sequence This sequence was described by Fibonacci as the solution to a rabbit breeding problem: “a certain man has a pair of rabbits in a closed space and wants to know how many are created from this pair in a year when, according to nature, each couple requires one month to grow old and each subsequent month procreates another couple.” (Laurence Sigler, Fibonacci’s Liber Abaci, pg. 404). The answer to this question is as follows: 1st month: we start from a pair of rabbits. 2nd month: the couple grows older but does not procreate. 3rd month: the pair procreates another pair (that is, we now have two couples). 4th month: the first couple procreates and the second ages without procreating (we now have three couples). 5th month: the two older couples procreate, while the new pair ages (five pairs in total). Etc. Schematically, this would be: Where: Black arrow: the pair of rabbits ages. Red arrow: the pair of rabbits ages for the first time (and therefore doesn’t procreate). Green arrow: the pair of rabbits procreates. How are the Fibonacci numbers calculated? 
There are different ways to calculate Fibonacci numbers: From the numbers \(0\) and \(1\), the Fibonacci numbers are defined by the recurrence:$$ \begin{align} f_{n}&=f_{n-1} +f_{n-2}\\ f_0&=0\\ f_1&=1\\ f_{2}&=f_{1}+f_{0}=1\\ f_{3}&=f_{2}+f_{1}=2\\ … \end{align} $$ A generating function for any sequence \(a_{0},a_{1},a_{2},…\) is the function \(f(x)=a_{o}+a_{1}x+a_{2}x^{2}+…\), that is, a formal power series where each coefficient is an element of the sequence. Fibonacci numbers have the generating function: $$f(x) = \frac{x}{1-x-x^{2}}$$ Explicit formula: this way of calculating Fibonacci numbers uses the golden number expression: $$f_{n}=\frac{\left(\frac{1+\sqrt{5}}{2}\right)^{n}-\left(1-\frac{1+\sqrt{5}}{2}\right)^{n}}{\sqrt{5}}=\frac{\left(\frac{1+\sqrt{5}}{2}\right)^{n}-\left(\frac{1-\sqrt{5}}{2}\right)^{n}}{\sqrt{5}}$$ Fibonacci numbers in Mathematics Golden numbers The golden number, gold number or divine proportion, is the numerical value of the proportion held by two segments of line \(a\) and \(b\) (with \(a\) longer than \(b\)): the total length is to segment \(a\), as \(a\) is to segment \(b\). One property stands out among many: the number itself, its square and its inverse all have the same decimal figures: $$ \begin{align} \phi&=1.\color{red}{6180339887}\ldots\\ \phi^2&=\phi+1=2.\color{red}{6180339887}\ldots\\ \frac{1}{\phi}&=\phi-1=0.\color{red}{6180339887}\ldots\\ \end{align} $$ The ratio or quotient between each Fibonacci term and the immediately preceding one varies continuously, but stabilizes at the golden number: $$ \lim_{n \rightarrow \infty}\frac{f_{n+1}}{f_{n}}=\phi\approx1.6180339887 $$ Pascal’s triangle Pascal’s triangle is a representation of the binomial coefficients ordered in triangular form. 
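The recurrence and Binet's explicit formula above can be checked against each other with a short script (a sketch, using the formulas as given):

```python
import math

# Fibonacci by the recurrence f_n = f_{n-1} + f_{n-2}
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

phi = (1 + math.sqrt(5)) / 2  # the golden number

# Binet's explicit formula agrees with the recurrence
for n in range(20):
    binet = (phi ** n - (1 - phi) ** n) / math.sqrt(5)
    assert round(binet) == fib(n)

# Ratios of consecutive terms stabilize at the golden number
assert abs(fib(20) / fib(19) - phi) < 1e-7
```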
That is, each row of the triangle gives the coefficients of the monomials that appear in the expansion of the binomial \((a+b)^{n}\) (taking the top \(1\) as the power \(n=0\)); in other words, the coefficients appearing in Newton’s binomial coincide with the elements of the corresponding row of Pascal’s triangle. The triangle’s construction is as follows: We put a \(1\) in the triangle’s top vertex. Then, in the next row, we place a \(1\) on the right and a \(1\) on the left. In the lower rows, place 1s at the ends and, for the other entries, the sum of the two numbers directly above on either side. This triangle has a number of curious properties: If we add the elements of each row, we get the powers of \(2: 1, 2, 4, 8, 16,\) etc. Adding two consecutive elements of the diagonal \(1-3-6-10-15,\) etc., we get a perfect square: \(1, 4, 9, 16, 25,\) etc. If the first number in a row (after \(1\)) is a prime number, then all other numbers in that row are divisible by that prime number (excluding the 1s). For example, in row \(1-7-21-35-35-21-7-1\), the first number is \(7\), which is prime. The rest \((7, 21, 35)\) are all divisible by \(7\). But the main curiosity is the property relating to the Fibonacci numbers: the sums along the “shallow” diagonals of Pascal’s triangle are exactly the Fibonacci numbers \(1, 1, 2, 3, 5, 8,\) etc. Pythagorean triples A Pythagorean triple consists of three elements (\(a, b, c\)) that satisfy \(a^{2}+b^{2}=c^{2}\) (Pythagorean theorem). There’s a close relationship between the Fibonacci numbers and the Pythagorean triples. If we take four consecutive numbers from the Fibonacci sequence, \((w, x, y, z)\), we can get a Pythagorean triple if we make the following assignments: Let \(a\) be the product of the numbers belonging to the extremes: \(a = wz\). Let \(b\) be double the product of the intermediate numbers: \(b = 2xy\). Let \(c\) be the sum of the product of the odd-position numbers and the product of the even-position numbers: \(c = wy + xz\). Then (\(a, b, c\)) is a Pythagorean triple. 
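With the assignments \(a=wz\), \(b=2xy\), \(c=wy+xz\) (product of the extremes, double the product of the middle terms, and sum of products of alternate terms), a brute-force check over windows of four consecutive Fibonacci numbers:

```python
def fib_window(start, length=4):
    # Four consecutive Fibonacci numbers beginning at index `start`
    seq = [0, 1]
    while len(seq) < start + length:
        seq.append(seq[-1] + seq[-2])
    return seq[start:start + length]

# Any four consecutive Fibonacci numbers (w, x, y, z) yield a Pythagorean triple
for start in range(1, 10):
    w, x, y, z = fib_window(start)
    a = w * z          # product of the extremes
    b = 2 * x * y      # double product of the middle terms
    c = w * y + x * z  # sum of products of alternate terms
    assert a * a + b * b == c * c

# e.g. (1, 2, 3, 5) gives the triple (5, 12, 13)
```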
Fibonacci numbers in trading techniques In trading, Fibonacci numbers appear in so-called Fibonacci studies. Fibonacci studies encompass a series of analysis tools based on sequence and Fibonacci ratios, which represent geometric laws of nature and human behaviour applied to financial markets. The most popular of these tools are Fibonacci retracements, extensions, arcs, fan and time zones. Other tools include the Fibonacci eclipse, spiral and canals. If you want to know how some of these tools work in financial markets, read our post “Fibonacci retracement and extensions“. Read this post in Spanish.
ABC for Socks The central concept of Bayesian inference is the posterior distribution $$\pi(\theta\mid x^{\text{obs}}) \propto \pi(\theta)\, f(x^{\text{obs}}\mid \theta).$$ Exploration of the posterior $\pi(\theta\mid x^{\text{obs}})$ may require producing samples distributed according to $\pi(\theta\mid x^{\text{obs}})$. MCMC is the workhorse of practical Bayesian analysis, except when this product is well-defined but numerically unavailable or too costly to compute. ABC stands for approximate (wrong likelihood) Bayesian (right prior) computation (producing a parameter sample). When the likelihood $f(x\mid \theta)$ is not in closed form, we can use the following likelihood-free rejection technique. ABC-AR algorithm For an observation $x^{\text{obs}}\sim f(x\mid \theta)$, under the prior $\pi(\theta)$, keep jointly simulating $\theta' \sim \pi(\theta)$ and $z \sim f(z\mid \theta')$ until the auxiliary variable $z$ is equal to the observed value $x^{\text{obs}}$. Example: Socks Draw 11 socks out of $m$ socks made of $f$ orphans and $g$ pairs, i.e., $f+2g=m$; the number $k$ of socks drawn from the orphan group is hypergeometric $H(11, m, f)$, from which the probability of observing 11 distinct socks follows. Prior Sock Distributions We know that $n_{\text{socks}}$ must be positive and discrete. A reasonable choice would perhaps be to use a Poisson distribution as a prior, but the Poisson is problematic in that both its mean and its variance are set by the same parameter. Instead we could use the more flexible cousin of the Poisson, the negative binomial. 
More specifically, put a negative binomial prior on $n_{\text{socks}}$ with mean 30 and standard deviation 15, and a $\text{Beta}(15, 2)$ prior over the proportion of socks that come in pairs (these are the hyperparameters used in the code below). Implementation

using Distributions
using Plots
using StatsBase

function sim_socks(nrep = 1000, n_picked = 11)
    # prior of n_socks
    prior_mu = 30
    prior_sd = 15
    prior_p = 1 - prior_mu / prior_sd^2
    prior_size = prior_mu * (1 - prior_p) / prior_p
    prior_n_socks = NegativeBinomial(prior_size, 1 - prior_p)
    # prior on the proportion
    prior_prop_pairs = Beta(15, 2)
    # simulate
    res = zeros(nrep, 6)
    for i = 1:nrep
        n_socks = rand(prior_n_socks)
        prop_pairs = rand(prior_prop_pairs)
        n_pairs = round(Int, n_socks * prop_pairs / 2)
        n_orphan = n_socks - n_pairs * 2
        socks = vcat(collect(1:n_orphan+n_pairs), collect(1:n_pairs))
        picked_socks = sample(socks, min(n_picked, n_socks))
        if length(picked_socks) == 0
            res[i, :] = [0, 0, n_socks, n_pairs, n_orphan, prop_pairs]
        else
            freq_socks = values(countmap(picked_socks))
            res[i, :] = [sum(freq_socks .== 1), sum(freq_socks .== 2),
                         n_socks, n_pairs, n_orphan, prop_pairs]
        end
    end
    return res
end

res = sim_socks(100000)
idx = (res[:, 1] .== 11) .& (res[:, 2] .== 0)
post_samples = res[idx, :]

The parametrization of NegativeBinomial() in Julia follows Wolfram’s definition, which differs from Wikipedia’s (R adopts the latter): just replace the success probability $p$ in Wikipedia’s definition with $1-p$.
Answer $\theta $ lies in the Second Quadrant or Quadrant-II. Work Step by Step The trigonometric ratios are as follows: $\sin \theta =\dfrac{y}{r} \\ \cos \theta =\dfrac{x}{r} \\ \tan \theta =\dfrac{y}{x}\\ \csc \theta =\dfrac{r}{y} \\ \sec \theta =\dfrac{r}{x} \\ \cot \theta =\dfrac{x}{y}$ where, $ r=\sqrt {x^2+y^2}$ It has been seen that $ x $ is negative and $ y $ is positive; this implies that the angle $\theta $ lies in the Second Quadrant or Quadrant-II.
Roughly speaking there are two ways for a series to converge: As in the case of $\sum 1/n^2$, the individual terms get small very quickly, so that the sum of all of them stays finite, or, as in the case of $\ds \sum (-1)^{n-1}/n$, the terms don't get small fast enough ($\sum 1/n$ diverges), but a mixture of positive and negative terms provides enough cancellation to keep the sum finite. You might guess from what we've seen that if the terms get small fast enough to do the job, then whether or not some terms are negative and some positive the series converges. Theorem 13.6.1 If $\ds\sum_{n=0}^\infty |a_n|$ converges, then $\ds\sum_{n=0}^\infty a_n$ converges. Proof. Note that $\ds 0\le a_n+|a_n|\le 2|a_n|$ so by the comparison test $\ds\sum_{n=0}^\infty (a_n+|a_n|)$ converges. Now $$ \sum_{n=0}^\infty (a_n+|a_n|) -\sum_{n=0}^\infty |a_n| = \sum_{n=0}^\infty \left((a_n+|a_n|)-|a_n|\right) = \sum_{n=0}^\infty a_n $$ converges by theorem 13.2.2. So given a series $\sum a_n$ with both positive and negative terms, you should first ask whether $\sum |a_n|$ converges. This may be an easier question to answer, because we have tests that apply specifically to series with non-negative terms. If $\sum |a_n|$ converges then you know that $\sum a_n$ converges as well. If $\sum |a_n|$ diverges then it still may be true that $\sum a_n$ converges; you will have to do more work to decide the question. Another way to think of this result is: it is (potentially) easier for $\sum a_n$ to converge than for $\sum |a_n|$ to converge, because the latter series cannot take advantage of cancellation. If $\sum |a_n|$ converges we say that $\sum a_n$ converges absolutely; to say that $\sum a_n$ converges absolutely is to say that any cancellation that happens to come along is not really needed, as the terms already get small so fast that convergence is guaranteed by that alone. If $\sum a_n$ converges but $\sum |a_n|$ does not, we say that $\sum a_n$ converges conditionally. 
For example $\ds\sum_{n=1}^\infty (-1)^{n-1} {1\over n^2}$ converges absolutely, while $\ds\sum_{n=1}^\infty (-1)^{n-1} {1\over n}$ converges conditionally. Example 13.6.2 Does $\ds\sum_{n=2}^\infty {\sin n\over n^2}$ converge? In example 13.5.2 we saw that $\ds\sum_{n=2}^\infty {|\sin n|\over n^2}$ converges, so the given series converges absolutely. Example 13.6.3 Does $\ds\sum_{n=0}^\infty (-1)^{n}{3n+4\over 2n^2+3n+5}$ converge? Taking the absolute value, $\ds\sum_{n=0}^\infty {3n+4\over 2n^2+3n+5}$ diverges by comparison to $\ds\sum_{n=1}^\infty {3\over 10n}$, so if the series converges it does so conditionally. It is true that $\ds\lim_{n\to\infty}(3n+4)/(2n^2+3n+5)=0$, so to apply the alternating series test we need to know whether the terms are decreasing. If we let $\ds f(x)=(3x+4)/(2x^2+3x+5)$ then $\ds f'(x)=-(6x^2+16x-3)/(2x^2+3x+5)^2$, and it is not hard to see that this is negative for $x\ge1$, so the terms are decreasing and by the alternating series test the series converges. Exercises 13.6 Determine whether each series converges absolutely, converges conditionally, or diverges. Ex 13.6.1 $\ds\sum_{n=1}^\infty (-1)^{n-1}{1\over 2n^2+3n+5}$ (answer) Ex 13.6.2 $\ds\sum_{n=1}^\infty (-1)^{n-1}{3n^2+4\over 2n^2+3n+5}$ (answer) Ex 13.6.3 $\ds\sum_{n=1}^\infty (-1)^{n-1}{\ln n\over n}$ (answer) Ex 13.6.4 $\ds\sum_{n=1}^\infty (-1)^{n-1} {\ln n\over n^3}$ (answer) Ex 13.6.5 $\ds\sum_{n=2}^\infty (-1)^n{1\over \ln n}$ (answer) Ex 13.6.6 $\ds\sum_{n=0}^\infty (-1)^{n} {3^n\over 2^n+5^n}$ (answer) Ex 13.6.7 $\ds\sum_{n=0}^\infty (-1)^{n} {3^n\over 2^n+3^n}$ (answer) Ex 13.6.8 $\ds\sum_{n=1}^\infty (-1)^{n-1} {\arctan n\over n}$ (answer)
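A numeric illustration of the distinction (a sketch, not part of the text): partial sums of the alternating harmonic series approach $\ln 2$, while the series of absolute values, the harmonic series, grows without bound like $\ln N$:

```python
import math

N = 100_000
alt = sum((-1) ** (n - 1) / n for n in range(1, N + 1))
absolute = sum(1 / n for n in range(1, N + 1))

# The alternating harmonic series converges (conditionally) to ln 2 ...
assert abs(alt - math.log(2)) < 1e-4
# ... while the partial sums of absolute values track ln N + gamma (divergence)
assert abs(absolute - (math.log(N) + 0.5772156649)) < 1e-3
assert absolute > 12
```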
We turn first to counting. While this sounds simple, perhaps too simple to study, it is not. When we speak of counting, it is shorthand for determining the size of a set, or more often, the sizes of many sets, all with something in common, but different sizes depending on one or more parameters. For example: how many outcomes are possible when a die is rolled? Two dice? $n$ dice? As stated, this is ambiguous: what do we mean by "outcome''? Suppose we roll two dice, say a red die and a green die. Is "red two, green three'' a different outcome than "red three, green two''? If yes, we are counting the number of possible "physical'' outcomes, namely 36. If no, there are 21. We might even be interested simply in the possible totals, in which case there are 11 outcomes. Even the quite simple first interpretation relies on some degree of knowledge about counting; we first make two simple facts explicit. In terms of set sizes, suppose we know that set $A$ has size $m$ and set $B$ has size $n$. What is the size of $A$ and $B$ together, that is, the size of $A\cup B$? If we know that $A$ and $B$ have no elements in common, then the size of $A\cup B$ is $m+n$; if they do have elements in common, we need more information. A simple but typical problem of this type: if we roll two dice, how many ways are there to get either 7 or 11? Since there are 6 ways to get 7 and two ways to get 11, the answer is $6+2=8$. Though this principle is simple, it is easy to forget the requirement that the two sets be disjoint, and hence to use it when the circumstances are otherwise. This principle is often called the addition principle. This principle can be generalized: if sets $A_1$ through $A_n$ are pairwise disjoint and have sizes $m_1,\ldots m_n$, then the size of $A_1\cup\cdots\cup A_n$ is $\sum_{i=1}^n m_i$. This can be proved by a simple induction argument. Why do we know, without listing them all, that there are 36 outcomes when two dice are rolled? 
We can view the outcomes as two separate outcomes, that is, the outcome of rolling die number one and the outcome of rolling die number two. For each of 6 outcomes for the first die the second die may have any of 6 outcomes, so the total is $6+6+6+6+6+6=36$, or more compactly, $6\cdot6=36$. Note that we are really using the addition principle here: set $A_1$ is all pairs $(1,x)$, set $A_2$ is all pairs $(2,x)$, and so on. This is somewhat more subtle than is first apparent. In this simple example, the outcomes of die number two have nothing to do with the outcomes of die number one. Here's a slightly more complicated example: how many ways are there to roll two dice so that the two dice don't match? That is, we rule out 1-1, 2-2, and so on. Here for each possible value on die number one, there are five possible values for die number two, but they are a different five values for each value on die number one. Still, because the count is the same for each value, the result is $5+5+5+5+5+5=30$, or $6\cdot 5=30$. In general, then, if there are $m$ possibilities for one event, and $n$ for a second event, the number of possible outcomes for both events together is $m\cdot n$. This is often called the multiplication principle. In general, if $n$ events have $m_i$ possible outcomes, for $i=1,\ldots,n$, where each $m_i$ is unaffected by the outcomes of other events, then the number of possible outcomes overall is $\prod_{i=1}^n m_i$. This too can be proved by induction. Example 1.2.1 How many outcomes are possible when three dice are rolled, if no two of them may be the same? The first two dice together have $6\cdot 5=30$ possible outcomes, from above. For each of these 30 outcomes, there are four possible outcomes for the third die, so the total number of outcomes is $30\cdot 4=6\cdot 5\cdot 4=120$. 
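The addition and multiplication principles in the dice examples above can be checked by brute-force enumeration (a sketch, not part of the text):

```python
from itertools import product

# All ordered outcomes of two dice: 6 * 6 = 36
pairs = list(product(range(1, 7), repeat=2))
assert len(pairs) == 36

# Addition principle: ways to roll a total of 7 or 11 (disjoint events)
assert sum(1 for a, b in pairs if a + b == 7) == 6
assert sum(1 for a, b in pairs if a + b == 11) == 2
assert sum(1 for a, b in pairs if a + b in (7, 11)) == 8

# Non-matching dice: 6 * 5 = 30
assert sum(1 for a, b in pairs if a != b) == 30

# Three dice, all different: 6 * 5 * 4 = 120
triples = list(product(range(1, 7), repeat=3))
assert sum(1 for t in triples if len(set(t)) == 3) == 120
```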
(Note that we consider the dice to be distinguishable, that is, a roll of 6, 4, 1 is different than 4, 6, 1, because the first and second dice are different in the two rolls, even though the numbers as a set are the same.) Example 1.2.2 Suppose blocks numbered 1 through $n$ are in a barrel; we pull out $k$ of them, placing them in a line as we do. How many outcomes are possible? That is, how many different arrangements of $k$ blocks might we see? This is essentially the same as the previous example: there are $k$ "spots'' to be filled by blocks. Any of the $n$ blocks might appear first in the line; then any of the remaining $n-1$ might appear next, and so on. The number of outcomes is thus $n(n-1)(n-2)\cdots(n-k+1)$, by the multiplication principle. In the previous example, the first "spot'' was die number one, the second spot was die number two, the third spot die number three, and $6\cdot5\cdot4=6(6-1)(6-2)$; notice that $6-2=6-3+1$. This is quite a general sort of problem: Definition 1.2.3 The number of permutations of $n$ things taken $k$ at a time is $$P(n,k)=n(n-1)(n-2)\cdots(n-k+1)={n!\over (n-k)!}.$$ A permutation of some objects is a particular linear ordering of the objects; $P(n,k)$ in effect counts two things simultaneously: the number of ways to choose and order $k$ out of $n$ objects. A useful special case is $k=n$, in which we are simply counting the number of ways to order all $n$ objects. This is $n(n-1)\cdots(n-n+1)=n!$. Note that the second form of $P(n,k)$ from the definition gives $${n!\over (n-n)!}={n!\over 0!}.$$ This is correct only if $0!=1$, so we adopt the standard convention that this is true, that is, we define $0!$ to be $1$. Suppose we want to count only the number of ways to choose $k$ items out of $n$, that is, we don't care about order. In example 1.2.1, we counted the number of rolls of three dice with different numbers showing. The dice were distinguishable, or in a particular order: a first die, a second, and a third. 
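Definition 1.2.3 can be checked against its closed form with a small sketch (not part of the text):

```python
from math import factorial

def P(n, k):
    # number of ordered selections of k out of n: n (n-1) ... (n-k+1)
    result = 1
    for i in range(k):
        result *= n - i
    return result

# Agrees with the closed form n! / (n - k)!
for n in range(8):
    for k in range(n + 1):
        assert P(n, k) == factorial(n) // factorial(n - k)

assert P(6, 3) == 120   # three dice, all different
assert P(5, 5) == 120   # 5! orderings of five objects
assert P(4, 0) == 1     # empty product, consistent with 0! = 1
```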
Now we want to count simply how many combinations of numbers there are, with 6, 4, 1 now counting as the same combination as 4, 6, 1. Example 1.2.4 Suppose we were to list all 120 possibilities in example 1.2.1. The list would contain many outcomes that we now wish to count as a single outcome; 6, 4, 1 and 4, 6, 1 would be on the list, but should not be counted separately. How many times will a single outcome appear on the list? This is a permutation problem: there are $3!$ orders in which 1, 4, 6 can appear, and all 6 of these will be on the list. In fact every outcome will appear on the list 6 times, since every outcome can appear in $3!$ orders. Hence, the list is too big by a factor of 6; the correct count for the new problem is $120/6=20$. Following the same reasoning in general, if we have $n$ objects, the number of ways to choose $k$ of them is $P(n,k)/k!$, as each collection of $k$ objects will be counted $k!$ times by $P(n,k)$. Definition 1.2.5 The number of subsets of size $k$ of a set of size $n$ (also called an $n$-set) is $$C(n,k)={P(n,k)\over k!}={n!\over k!(n-k)!}={n\choose k}.$$ The notation $C(n,k)$ is rarely used; instead we use $n\choose k$, pronounced "$n$ choose $k$''. Example 1.2.6 Consider $n=0,1,2,3$. It is easy to list the subsets of a small $n$-set; a typical $n$-set is $\{a_1,a_2,\ldots,a_n\}$. A $0$-set, namely the empty set, has one subset, the empty set; a $1$-set has two subsets, the empty set and $\{a_1\}$; a $2$-set has four subsets, $\emptyset$, $\{a_1\}$, $\{a_2\}$, $\{a_1,a_2\}$; and a $3$-set has eight: $\emptyset$, $\{a_1\}$, $\{a_2\}$, $\{a_3\}$, $\{a_1,a_2\}$, $\{a_1,a_3\}$, $\{a_2,a_3\}$, $\{a_1,a_2,a_3\}$. 
From these lists it is then easy to compute $n\choose k$: $$\begin{array}{c|cccc} n\backslash k & 0 & 1 & 2 & 3\\ \hline 0 & 1 & & & \\ 1 & 1 & 1 & & \\ 2 & 1 & 2 & 1 & \\ 3 & 1 & 3 & 3 & 1\\ \end{array}$$ You probably recognize these numbers: this is the beginning of Pascal's Triangle. Each entry in Pascal's triangle is generated by adding two entries from the previous row: the one directly above, and the one above and to the left. This suggests that ${n\choose k}={n-1\choose k-1}+{n-1\choose k}$, and indeed this is true. To make this work out neatly, we adopt the convention that ${n\choose k}=0$ when $k< 0$ or $k>n$. Proof. A typical $n$-set is $A=\{a_1,\ldots,a_n\}$. We consider two types of subsets: those that contain $a_n$ and those that do not. If a $k$-subset of $A$ does not contain $a_n$, then it is a $k$-subset of $\{a_1,\ldots,a_{n-1}\}$, and there are $n-1\choose k$ of these. If it does contain $a_n$, then it consists of $a_n$ and $k-1$ elements of $\{a_1,\ldots,a_{n-1}\}$; since there are $n-1\choose k-1$ of these, there are $n-1\choose k-1$ subsets of this type. Thus the total number of $k$-subsets of $A$ is ${n-1\choose k-1}+{n-1\choose k}$. Note that when $k=0$, ${n-1\choose k-1}={n-1\choose -1}=0$, and when $k=n$, ${n-1\choose k}={n-1\choose n}=0$, so that ${n\choose 0}={n-1\choose 0}$ and ${n\choose n}={n-1\choose n-1}$. These values are the boundary ones in Pascal's Triangle. Many counting problems rely on the sort of reasoning we have seen. Here are a few variations on the theme. Example 1.2.8 Six people are to sit at a round table; how many seating arrangements are there? It is not clear exactly what we mean to count here. If there is a "special seat'', for example, it may matter who ends up in that seat. If this doesn't matter, we only care about the relative position of each person.
Then it may or may not matter whether a certain person is on the left or right of another. So this question can be interpreted in (at least) three ways. Let's answer them all. First, if the actual chairs occupied by people matter, then this is exactly the same as lining six people up in a row: 6 choices for seat number one, 5 for seat two, and so on, for a total of $6!$. If the chairs don't matter, then $6!$ counts the same arrangement too many times, once for each person who might be in seat one. So the total in this case is $6!/6=5!$. Another approach to this: since the actual seats don't matter, just put one of the six people in a chair. Then we need to arrange the remaining 5 people in a row, which can be done in $5!$ ways. Finally, suppose all we care about is who is next to whom, ignoring right and left. Then the previous answer counts each arrangement twice, once for the counterclockwise order and once for clockwise. So the total is $5!/2=P(5,3)$. We have twice seen a general principle at work: if we can overcount the desired set in such a way that every item gets counted the same number of times, we can get the desired count just by dividing by the common overcount factor. This will continue to be a useful idea. A variation on this theme is to overcount and then subtract the amount of overcount. Example 1.2.9 How many ways are there to line up six people so that a particular pair of people are not adjacent? Denote the people $A$ and $B$. The total number of orders is $6!$, but this counts those orders with $A$ and $B$ next to each other. How many of these are there? Think of these two people as a unit; how many ways are there to line up the $AB$ unit with the other 4 people? We have 5 items, so the answer is $5!$. Each of these orders corresponds to two different orders in which $A$ and $B$ are adjacent, depending on whether $A$ or $B$ is first. So the $6!$ count is too high by $2\cdot5!$ and the count we seek is $6!-2\cdot 5!=4\cdot5!$.
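The counts in example 1.2.4, the Pascal-triangle recurrence, and example 1.2.9 are all small enough to confirm by brute force; here is a Python sketch (mine, not part of the text):

```python
from itertools import combinations, permutations
from math import comb, factorial

# Example 1.2.4: 20 unordered choices of 3 different faces out of 6
assert len(list(combinations(range(6), 3))) == 120 // 6 == 20

# the Pascal recurrence C(n,k) = C(n-1,k-1) + C(n-1,k), checked directly
assert all(comb(n, k) == comb(n - 1, k - 1) + comb(n - 1, k)
           for n in range(1, 10) for k in range(1, n))

# Example 1.2.9: orders of six people with persons 0 and 1 not adjacent
count = sum(1 for order in permutations(range(6))
            if abs(order.index(0) - order.index(1)) != 1)
assert count == factorial(6) - 2 * factorial(5) == 480
```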
Exercises 1.2

Ex 1.2.1 How many positive factors does $2\cdot3^4\cdot7^3\cdot11^2\cdot47^5$ have? How many does $p_1^{e_1}p_2^{e_2}\cdots p_n^{e_n}$ have, where the $p_i$ are distinct primes?

Ex 1.2.2 A poker hand consists of five cards from a standard 52 card deck with four suits and thirteen values in each suit; the order of the cards in a hand is irrelevant. How many hands consist of 2 cards with one value and 3 cards of another value (a full house)? How many consist of 5 cards from the same suit (a flush)?

Ex 1.2.3 Six men and six women are to be seated around a table, with men and women alternating. The chairs don't matter, only who is next to whom, but right and left are different. How many seating arrangements are possible?

Ex 1.2.4 Eight people are to be seated around a table; the chairs don't matter, only who is next to whom, but right and left are different. Two people, X and Y, cannot be seated next to each other. How many seating arrangements are possible?

Ex 1.2.5 In chess, a rook attacks any piece in the same row or column as the rook, provided no other piece is between them. In how many ways can eight indistinguishable rooks be placed on a chess board so that no two attack each other? What about eight indistinguishable rooks on a $10\times 10$ board?

Ex 1.2.6 Suppose that we want to place 8 non-attacking rooks on a chessboard. In how many ways can we do this if the 16 most `northwest' squares must be empty? How about if only the 4 most `northwest' squares must be empty?

Ex 1.2.7 A "legal'' sequence of parentheses is one in which the parentheses can be properly matched, like $()(())$. It's not hard to see that this is possible precisely when the number of left and right parentheses is the same, and every initial segment of the sequence has at least as many left parentheses as right. For example, $())\ldots$ cannot possibly be extended to a legal sequence.
Show that the number of legal sequences of length $2n$ is $C_n={2n\choose n}-{2n\choose n+1}$. The numbers $C_n$ are called the Catalan numbers.
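For small $n$ the claimed formula can at least be checked numerically (a Python sketch of mine, not a substitute for the proof the exercise asks for):

```python
from itertools import product
from math import comb

def is_legal(seq):
    """A paren sequence is legal iff every prefix has at least as many
    '(' as ')' and the totals are equal."""
    depth = 0
    for ch in seq:
        depth += 1 if ch == '(' else -1
        if depth < 0:
            return False
    return depth == 0

# brute-force count of legal sequences of length 2n vs. the formula
for n in range(1, 6):
    brute = sum(is_legal(s) for s in product('()', repeat=2 * n))
    assert brute == comb(2 * n, n) - comb(2 * n, n + 1)
```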
Digital to Analogue Converter (DAC) DAC Theory A digital to analogue converter takes a series of digital inputs (a string of 1s and 0s, in our case there will be 8 of them like 10011001) and converts it into an analogue output. You see DACs in every digital audio device (MP3 players, CD players) as these all store music in digital form, but need to drive a speaker with an analogue signal. Hence the need to convert the digital data into an analogue signal. Here’s an example of how digitization works (figures from the Wikipedia article on DACs). To digitize an analogue signal like a wave we sample it at a typically fixed frequency (taken to be sufficiently high so that we do not hear artifacts due to the sampling) and save the samples in digital form: The DAC does the reverse: given the samples in digital form, re-create the analogue waveform: Of course, this is approximate. The steps will always be present, but as long as they are small enough, they may be smoothed out. To make them small enough we need to be able to sample accurately and use a large number of bits to represent the signal. For example, if we use only 4 bits to sample the waveform, we will have a resolution of only $2^4 = 16$ levels. With 8 bits the number of levels we can represent increases to $2^8 = 256$ levels. This is still not good enough for a commercial audio system, but it will be good enough for this experiment. Our goal is to build an 8-bit DAC that accepts a sequence of 8 binary digits (a byte) and outputs an analogue representation of that sequence. Binary numbers. Each binary number consists of a sequence of bits. These are 0s and 1s. A sequence of 8 bits is called a byte. In this experiment we will work with bytes. How do we count in binary? Here is a sequence of binary numbers and their decimal equivalents (I’ve used 4-bit sequences for convenience). The subscripts indicate the number system in use: $2$ means binary, and $10$ means decimal.
\begin{eqnarray*} 0000_{2} & = & 0_{10} \\ 0001_{2} & = & 1_{10} \\ 0010_{2} & = & 2_{10} \\ 0011_{2} & = & 3_{10} \\ 0100_{2} & = & 4_{10} \\ 0101_{2} & = & 5_{10} \\ \end{eqnarray*} How do we convert from binary to decimal? Consider the binary number $(b_3 b_2 b_1 b_0)_{2}$. The rule for converting it to decimal is: \begin{eqnarray*} (b_3 b_2 b_1 b_0)_{2} & = & b_3 \times 2^3 + b_2 \times 2^2 + b_1 \times 2^1 + b_0 \times 2^0 \end{eqnarray*} Or, more generally, for the $n$-bit binary number: \begin{eqnarray*} (b_{n-1}\cdots b_2 b_1 b_0)_{2} & = & \sum_{i = 0}^{n-1} b_i \times 2^i \end{eqnarray*} On a computer a $1$ is represented as a HIGH voltage (5V on the Arduino) and $0$ as a LOW voltage (0V on the Arduino, though on some systems all that may be necessary is a sufficiently low voltage). For more on binary numbers see the Wikipedia article. Cool fact: if you counted in binary using your fingers you’d be able to count from 0 to 1023. In our setup, we will use the Arduino Uno to set the digital pin values. This will be done in software. The DAC will be made of resistors in what is known as an R-2R network. You will find more information about R-2R networks on the links provided below. If you are interested in learning how an R-2R network works have a look at these links: The description given in the above article doesn’t directly apply to our circuit, though you will see why it does work. If you’d like to dig more into our circuit, have a look at this article which uses Thevenin’s Theorem to simplify a DAC similar to the one we use. I’ve placed a PDF file of this article here. Resistor Ladder from the Wikipedia. This is a shorter article. Remember that this is a real-world experiment. You are guided through it, but you are not spoon-fed. The reasons why we do certain things in particular ways are sometimes found elsewhere. You should make an attempt to read the references, particularly the recommended one. Parts In the first part of the experiment we will use 5% resistors in the DAC.
Subsequently we will re-construct it using 1% resistors.

9 x 20 kOhm resistors, 5%/1% metal film
7 x 10 kOhm resistors, 5%/1% metal film
Arduino Uno
wire jumpers

The actual values of the resistors may vary. What is important is that one set of resistors has resistances twice the other. Circuit Here is a breadboard view of the DAC circuit. The second breadboard is included as we will place components here as the circuit develops. Also, there is a thin wire from one of the 20K resistors to ground. Ignore it. Click on the images for a larger view. Programming the Arduino Having constructed the DAC what we now need to do is program the Arduino to send a byte (8 bits) of data to the eight inputs of the DAC. Here is a code which does this:

/* DAC: Single byte
   A. J. Misquitta */
void setup(){
  //set digital pins 0-7 as outputs
  for (int i=0; i<8; i++){
    pinMode(i,OUTPUT);
  }
}

void loop(){
  digitalWrite(0, HIGH); // 1 : we start from the rightmost bit
  digitalWrite(1, LOW);  // 0
  digitalWrite(2, LOW);  // 0
  digitalWrite(3, LOW);  // 0
  digitalWrite(4, HIGH); // 1
  digitalWrite(5, HIGH); // 1
  digitalWrite(6, LOW);  // 0
  digitalWrite(7, HIGH); // 1
}

For details of Arduino sketches see the Arduino Getting Started page. If you haven’t already gone over the material and examples on that page, please do so before proceeding. Let’s look at what this sketch is up to. Recall that every Arduino sketch has a setup part and a loop part. The instructions in the setup part are executed only once, but the instructions in the loop part are executed till the device is switched off. The first bit of code sets up digital pins 0 to 7 as outputs. We have used a for loop to do this:

void setup(){
  //set digital pins 0-7 as outputs
  for (int i=0; i<8; i++){
    pinMode(i,OUTPUT);
  }
}

This bit of code is equivalent to

void setup(){
  pinMode(0,OUTPUT);
  pinMode(1,OUTPUT);
  pinMode(2,OUTPUT);
  ...
  pinMode(7,OUTPUT);
}

but the former is clearly a lot more compact!
Next we enter the loop part in which we write the byte 10110001 to the DAC input pins. This is done by setting appropriate pins to HIGH (= 1) and others to LOW (= 0):

void loop(){
  digitalWrite(0, HIGH); // 1 : we start from the rightmost bit
  digitalWrite(1, LOW);  // 0
  digitalWrite(2, LOW);  // 0
  digitalWrite(3, LOW);  // 0
  digitalWrite(4, HIGH); // 1
  digitalWrite(5, HIGH); // 1
  digitalWrite(6, LOW);  // 0
  digitalWrite(7, HIGH); // 1
}

The general form of the digitalWrite (the upper case ‘W’ is essential!) is

digitalWrite(pin,VALUE);

where VALUE = HIGH or LOW. This is all very well, but how would you read out the result of the DAC? We have a few options: we could use a multimeter to do the reading (do it), or an oscilloscope, or use the Arduino itself. Let’s now explore the latter option. Using the Analogue Input pins on the Arduino The Arduino Uno includes 6 analogue input pins labeled ‘A0’ through ‘A5’. Locate these on the board. Each of these can read and digitize an analogue signal from 0 to 5 Volts using a 10 bit Analogue to Digital converter (ADC). A 0V signal will correspond to Bin(0000000000) and a 5V signal to Bin(1111111111). Notice that these are 10-bit binary numbers, so while a 0V signal will correspond to Dec(0), a 5V signal will be Dec(1023) ($1023 = 2^{10} - 1$). So what we will do now is to take the output of our 8-bit DAC and send it to port A0 on the Arduino. Then get the Arduino to read and digitize the input on port A0 and display it using the Serial Monitor. Here is the program that does this:

//DAC: Single byte
//A. J. Misquitta
/*
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 3 of the License, or
 * (at your option) any later version.
 */

void setup(){
  //set digital pins 0-7 as outputs
  for (int i=0; i<8; i++){
    pinMode(i,OUTPUT);
  }
  pinMode(A0,INPUT);
  Serial.begin(9600);  // Output to Serial
  while (!Serial);     // Wait until Serial is ready
  Serial.println("DAC: Single Byte!");
}

void loop(){
  digitalWrite(0, HIGH); // 1 : remember we start from the rightmost bit
  digitalWrite(1, LOW);  // 0
  digitalWrite(2, LOW);  // 0
  digitalWrite(3, LOW);  // 0
  digitalWrite(4, HIGH); // 1
  digitalWrite(5, HIGH); // 1
  digitalWrite(6, LOW);  // 0
  digitalWrite(7, HIGH); // 1

  //delayMicroseconds(50); //wait 50us if using an oscilloscope
  //delay(100);            //wait 100ms if using a multimeter/serial port

  /* Having written the byte to the digital pins, we will read the
     output of the DAC using the analogue pin A0 of the Arduino Uno.
     The analogue pins are Analogue to Digital converters (ADC).
     Read the DAC output into variable DACout and write it to the
     serial port. */
  int DACout = analogRead(A0);
  if (Serial.available()) {
    Serial.println(DACout);
  }
  delay(100);
}

Set up the circuit to read the DAC output on A0 and run this code. You will need to start the serial monitor to see the output of the code. The serial monitor may not start till you hit a key. After that you will see a string of numbers. They will fluctuate slightly. Recall that we sent the byte 10110001 to our 8-bit DAC. Earlier you were asked to figure out the decimal value of this byte. Does the reading on the serial port correspond to this value? If not, why not? Hint: Our DAC has an 8 bit output while the ADC on the Arduino is a 10 bit converter. You may need to do a bit of scaling to get the expected value. Using Thevenin’s theorem (see the article by Alan Wolke at Tektronix) it can be shown that the output voltage of our DAC is given by \begin{equation} V^{\rm DAC} = 5 \times \left( \sum_{i = 0}^{7} b_i \left(\frac{1}{2}\right)^{(8-i)} \right) ~{\rm V} \end{equation} where the $b_i$ are the bits in the binary number.
So, for example, for the binary number 10000011 we have \begin{equation} V^{\rm DAC}(10000011) = 5 \times \left( 1\times \left(\frac{1}{2}\right)^8 + 1\times \left(\frac{1}{2}\right)^7 + 0\times \left(\frac{1}{2}\right)^6 + 0\times \left(\frac{1}{2}\right)^5 + 0\times \left(\frac{1}{2}\right)^4 + 0\times \left(\frac{1}{2}\right)^3 + 0\times \left(\frac{1}{2}\right)^2 + 1\times \left(\frac{1}{2}\right)^1 \right) ~ {\rm V} \end{equation} or $V^{\rm DAC}(10000011) = 5 \times ( 0.5117) ~ {\rm V} = 2.5586 ~ {\rm V}$ Digital writes to PORTD The following explanation of PORTD has been lifted verbatim from Amanda’s article. Now we know how to send a byte of data to the DAC and get an analogue output. From the introduction to digitization given above we know that to create a waveform all we need to do is send in a sequence of bytes to our DAC. Let’s see how we could send the DAC two bytes, say the sequence 10110001 & 00001111:

// 10110001
digitalWrite(0, HIGH); // 1 : remember we start from the rightmost bit
digitalWrite(1, LOW);  // 0
digitalWrite(2, LOW);  // 0
digitalWrite(3, LOW);  // 0
digitalWrite(4, HIGH); // 1
digitalWrite(5, HIGH); // 1
digitalWrite(6, LOW);  // 0
digitalWrite(7, HIGH); // 1

// 00001111
digitalWrite(0, HIGH); // 1 : remember we start from the rightmost bit
digitalWrite(1, HIGH); // 1
digitalWrite(2, HIGH); // 1
digitalWrite(3, HIGH); // 1
digitalWrite(4, LOW);  // 0
digitalWrite(5, LOW);  // 0
digitalWrite(6, LOW);  // 0
digitalWrite(7, LOW);  // 0

This would work, but to generate a sine wave we would need a lot of bytes of data (the more, the smoother the waveform). Clearly this approach to sending data to the DAC is of limited value.
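The Thevenin-derived formula and the binary-to-decimal rule from earlier are easy to check numerically; the following Python sketch (my own, with invented function names, not part of the original write-up) reproduces the worked example:

```python
def bits_to_decimal(bits):
    """sum_i b_i * 2**i, with b_0 the rightmost bit, as derived earlier."""
    return sum(int(b) * 2 ** i for i, b in enumerate(reversed(bits)))

def dac_voltage(bits, vref=5.0):
    """V = vref * sum_i b_i * (1/2)**(8-i) for the 8-bit R-2R DAC."""
    assert len(bits) == 8
    return vref * sum(int(b) * 0.5 ** (8 - i)
                      for i, b in enumerate(reversed(bits)))

assert bits_to_decimal("10110001") == 177            # the byte sent earlier
assert abs(dac_voltage("10000011") - 2.5586) < 1e-3  # the worked example
```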
An alternative is to loop over the bits in each byte:

int seq1[8] = {1,0,0,0,1,1,0,1}; // bits b0..b7 of 10110001
for (int i = 0; i < 8; i++) {
  digitalWrite(i, seq1[i] ? HIGH : LOW); // 0 ==> LOW, 1 ==> HIGH
}

int seq2[8] = {1,1,1,1,0,0,0,0}; // bits b0..b7 of 00001111
for (int i = 0; i < 8; i++) {
  digitalWrite(i, seq2[i] ? HIGH : LOW); // 0 ==> LOW, 1 ==> HIGH
}

Can you see how this works? It is shorter, and we could have made it even shorter by using a function call. But there is an even simpler way of transmitting bytes to our DAC: the Arduino allows us to write an entire byte to pins 0 to 7 via PORTD. PORTD is simply short-hand for pins 7,6,5,4,3,2,1,0 (in this order). We can write an 8-bit binary number (i.e., a byte) directly to pins 0 to 7 like so:

Writing to PORTD
// Write the byte 10110001 to pins 76543210 in that order:
PORTD = B10110001;

Here the B in front of 10110001 says that the number is in binary form. We could write two bytes as follows:

Writing to PORTD in Binary
PORTD = B10110001;
PORTD = B00001111;

This is clearly a much better option! The code is shorter and also more readable. We can see the bytes we are writing to the DAC. Additionally, the Arduino ATmega328 chip is able to write to all eight pins simultaneously and not sequentially as was done when we used the digitalWrite() commands. Since PORTD has 8 pins on it (digital pins 0-7) we can send it one of $2^8 = 256$ possible values ($0-255$) to control the pins. We could send the data to the DAC (via PORTD) in decimal form as follows:

Writing to PORTD in Decimal
PORTD = 177; // decimal form of 10110001
PORTD = 15;  // decimal form of 00001111

In the following pieces of code, whenever we want to send data to the DAC we send a value between 0 and 255 to PORTD; it looks like this:

PORTD = 125; //send data to DAC

This is called addressing the port directly. On the Arduino, digital pins 0-7 are all on port D of the ATmega328 chip. The PORTD command lets us tell pins 0-7 to go HIGH or LOW in one line (instead of having to use digitalWrite() eight times).
Not only is this easier to code, it’s much faster for the Arduino to process and it causes the pins to all change simultaneously instead of one by one (you can only talk to one pin at a time with digitalWrite()). For example, if we wrote the following line:

PORTD = 0;

it would set pins 0-7 LOW. With the DAC set up on pins 0-7 this will output 0V. If we sent the following:

PORTD = 255;

it would set pins 0-7 HIGH. This will cause the DAC to output 5V. We can also send combinations of LOW and HIGH states to output a voltage between 0 and 5V from the DAC. For example:

PORTD = 125;

125 = 01111101 in binary. This sets pin 7 low (the most significant bit (MSB) is 0), pins 6-2 high (the next five bits are 1), pin 1 low (the next bit is 0), and pin 0 high (the least significant bit (LSB) is 1). You can read more about how this works here. To calculate the voltage that this will output from the DAC, we use the following equation:

voltage output from DAC = [ (value sent to PORTD) / 255 ] * 5 V

so for PORTD = 125:

voltage output from DAC = ( 125 / 255 ) * 5V = 2.45V

The code below sends out several voltages between 0 and 5V from the main loop() function and holds each for a short time, to demonstrate the concepts described above.
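The loop code itself is not reproduced in this extract; as a stand-in, here is a Python sketch (my own, not part of the tutorial) of the PORTD-value-to-voltage mapping such a loop would step through, using the article's value/255 convention:

```python
def portd_voltage(value, vref=5.0):
    """Voltage out of the DAC for a given PORTD value,
    using the article's convention V = (value / 255) * vref."""
    assert 0 <= value <= 255
    return value / 255 * vref

# a demo sweep like the one the text describes: several held voltages
sweep = [portd_voltage(v) for v in (0, 64, 125, 192, 255)]

assert sweep[0] == 0.0 and sweep[-1] == 5.0
assert abs(portd_voltage(125) - 2.45) < 0.005   # the worked example
```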
In my research, I work with certain finitely presented quotients of Coxeter groups. These are the automorphism groups of abstract polytopes, which are combinatorial generalizations of "usual" polytopes. (Essentially, an abstract polytope is an incidence complex.) Now, in this context, there is a useful combinatorial operation that has a nice effect on the automorphism groups. In fact, it's easy to generalize the operation on groups, so I'm curious whether any work has been done with this. Let $G = \langle X \mid R \rangle$ and $H = \langle X \mid S \rangle$ be finitely presented groups. (I'm not sure that the finiteness of the presentation is essential, but let's assume it for now.) In other words, we have that $G = F(X) / \overline{R}$ and $H = F(X) / \overline{S}$, where $F(X)$ is the free group on $X$ and $\overline{R}$ is the normal closure of $R$ in $F(X)$. Then if $K$ naturally covers $G$ and $H$ (that is, if the identity map on $X$ extends to (surjective) homomorphisms from $K$ to $G$ and $K$ to $H$), we have that $K$ covers the group $F(X) / (\overline{R} \cap \overline{S})$. Similarly, if $G$ and $H$ naturally cover $K$, then the group $F(X) / (\overline{R} \overline{S})$ with presentation $\langle X \mid R \cup S \rangle$ naturally covers $K$ as well. Therefore, the group $F(X) / (\overline{R} \cap \overline{S})$ is the minimal natural cover of $G$ and $H$, and $F(X) / (\overline{R} \overline{S})$ is the maximal natural quotient of $G$ and $H$. The first group is the fibre product of $G$ and $H$ over the second group. These seem like such natural operations that I would guess they have been studied before, but I am having trouble finding anything. Any references would be greatly appreciated. EDIT: Let me expand a little bit. As Mark Sapir points out, I'm essentially looking at the lattice of normal subgroups of F(X). 
I've looked a little at the general theory of subgroup lattices, but I'm really only interested in normal subgroups of F(X) or other finitely presented groups. Also, I find it difficult to work with the normal subgroups of F(X) directly, whereas it's not too hard to work with the quotients by these normal subgroups via presentations. So I'm hoping to find something that's somewhat more specific than just subgroup lattices; ideally, something that works with presentations. Here are some examples of the type of questions I'm interested in: Is there a simple way to write down a presentation for $F(X) / (\overline{R} \cap \overline{S})$ without changing the generators? Given G and H, what are some conditions on the relations under which $\overline{R} \overline{S} = F(X)$? (This obviously won't be all-inclusive, but some instructive examples would be nice.)
FINAL EDIT: This edit cleans up the first proof (and simplifies it -- there are no longer any references to the free nilpotent group) and adds some remarks to the second proof following the discussion in the comments. PROOF 1. Here's a low-tech way to see that a surface group is not free (though cohomology is secretly lurking in the background). Let $G_g = \langle a_1,b_1,\ldots,a_g,b_g\ |\ [a_1,b_1]\cdots [a_g,b_g]=1 \rangle$ be the surface group. Form the group $$\tilde{G}_g = \langle a_1,b_1,\ldots,a_g,b_g,t\ |\ [a_1,b_1]\cdots [a_g,b_g]=t, [a_i,t]=1, [b_i,t]=1\ \text{for all $1 \leq i \leq g$} \rangle$$ The subgroup of $\tilde{G}_g$ generated by $t$ is contained in the center and the quotient is $G_g$. Below I will show that this subgroup is infinite cyclic. We thus have a central extension $$1 \longrightarrow \mathbb{Z} \longrightarrow \tilde{G}_g \longrightarrow G_g \longrightarrow 1.$$ If $G_g$ were free, then this would split as a direct product. However, since $t$ becomes zero when we abelianize $\tilde{G}_g$, there is no splitting homomorphism $\tilde{G}_g \rightarrow \mathbb{Z}$. Thus $G_g$ cannot be free. It remains to show that the subgroup generated by $t$ is infinite cyclic. Let $H$ be the $3$-dimensional Heisenberg group, i.e., the group of upper-triangular $3 \times 3$ integer matrices with $1$'s on the diagonal. As is well-known, $H$ has a presentation $$H = \langle x,y,z\ |\ [x,y]=z, [x,z]=1, [y,z]=1 \rangle.$$ Examining the presentations, there is a homomorphism $\psi : \tilde{G}_g \rightarrow H$ with $$\psi(a_1) = x \quad \text{and} \quad \psi(b_1) = y \quad \text{and} \quad \psi(t) = z$$ and $$\psi(a_i) = \psi(b_i) = 1 \quad \quad (2 \leq i \leq g)$$ Since $z$ generates an infinite cyclic subgroup of $H$ (as a matrix, $z$ is the matrix with $1$'s on the diagonal and at position $(1,3)$ and $0$'s elsewhere), it follows that $t$ generates an infinite cyclic subgroup of $\tilde{G}_g$. PROOF 2. It is known that free groups are Hopfian, i.e.
that all surjections from a finitely generated free group to itself are isomorphisms. A simple-minded cancellation-based proof (using Nielsen reduction) can be found in Proposition 2.7 of Lyndon and Schupp's book "Combinatorial group theory". Alternatively, Malcev proved that all finitely generated residually finite groups are Hopfian (this can also be found in Lyndon and Schupp), and there are many proofs that free groups are residually finite; see the answers to the question Why are free groups residually finite? This implies that if $F$ is a free group on $n$ generators and $S$ is a generating set for $F$ which has $n$ elements, then $S$ is a free generating set. But this implies the result -- letting $G_g$ be the surface group as above, by abelianizing we see that if $G_g$ were a free group, then it would be free on $2g$ generators. But $a_1,b_1,\ldots,a_g,b_g$ is a generating set of size $2g$ which is not free since it satisfies a relation. Thus $G_g$ is not free.
What is the difference between Base Correlation and Implied Correlation for a CDO tranche? An implied correlation $\rho_i(k_1,k_2)$ is a correlation that matches the $(k_1,k_2)$ tranche price $P_{k_1}^{k_2}$ (usually computed under a Gaussian or Student-t copula) $$ C(k_1,k_2,\rho_i(k_1,k_2)) = P_{k_1}^{k_2} $$ For mezzanine tranches, there can sometimes be two different implied correlations matching the tranche price. A base correlation $b_i(k_2)$ is a correlation that matches the price of the tranche, plus all higher-risk tranches "beneath" it, so we can write it as $$ b_i(k_2) = \rho_i(0,k_2) $$ where we obtain $P_{0}^{k_2}$ as $$ P_{0}^{k_2} = \sum_{k_i\leq{k_2}}P_{k_{i-1}}^{k_i} $$ The pricing function $C(0,k_2,\rho)$ is monotonic in $\rho$, hence the base correlation is unique. This allows practitioners to think about correlations a bit more like they previously thought about implied volatility (and volatility skews) for options. The super-senior tranche has (trivially) a base correlation that matches the price of the entire underlying instrument, since it is $\rho_i(0,1)$.
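As an illustration of the uniqueness claim, here is a hedged Python sketch (entirely my own, not part of the answer; it uses a toy large-homogeneous-pool pricer with the expected tranche loss standing in for the market quote, and invented parameters $p = 0.05$, $k = 0.03$) that backs out a base correlation by bisection on the monotone pricing function:

```python
from statistics import NormalDist

N01 = NormalDist()

def equity_tranche_el(rho, p=0.05, k=0.03, n_grid=2001):
    """Expected loss (fraction of tranche notional) of the [0, k] base
    tranche under a one-factor Gaussian copula, large-homogeneous-pool
    approximation; monotone decreasing in rho for an equity tranche."""
    c = N01.inv_cdf(p)
    s = (1.0 - rho) ** 0.5
    lo, hi = -8.0, 8.0
    dm = (hi - lo) / (n_grid - 1)
    total = 0.0
    for i in range(n_grid):
        m = lo + i * dm
        cond_loss = N01.cdf((c - rho ** 0.5 * m) / s)  # pool loss given factor m
        total += min(cond_loss, k) * N01.pdf(m) * dm
    return total / k

def base_correlation(target_el, lo=1e-6, hi=0.999, tol=1e-8):
    """The unique rho with equity_tranche_el(rho) == target_el, by bisection."""
    f_lo = equity_tranche_el(lo) - target_el
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        f_mid = equity_tranche_el(mid) - target_el
        if f_lo * f_mid <= 0:
            hi = mid
        else:
            lo, f_lo = mid, f_mid
    return 0.5 * (lo + hi)

# round-trip: price at rho = 0.30, then recover it from the "quote"
quote = equity_tranche_el(0.30)
assert abs(base_correlation(quote) - 0.30) < 1e-3
```

The bisection relies only on the monotonicity of the pricing function, which is exactly why base correlations are unique while mezzanine implied correlations need not be.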
The Hypergeometric Distribution is usually explained via an urn analogy and formulated as the ratio of “favorable outcomes” to all possible outcomes: \[ \displaystyle \boxed{P(x=a;N,A,n,a) = \frac{{A \choose a} \cdot {N-A \choose n-a} }{{N \choose n}}} \] where \(N\) is the total number of balls in the urn, \(A\) the number of “red” balls in the urn, and \(n\) the sample size. The question answered by the above expression is the probability of finding \(x=a\) red balls in the sample. However, a different – equally worthy – viewpoint is that of a tree with conditional probabilities. Here is an example for \(N=10, A=4, n=3\): So there are 3 leaves with exactly one red ball in the sample. Following the tree we multiply the conditional probabilities of the tree edges to get to the “and” probability of the leaves. It should be clear that all three probabilities are identical – but for the order of multiplication: \[ P (\mbox{one red}) = 3 \cdot P_{leaf} = 3 \cdot \frac{4}{10} \cdot \frac{6}{9} \cdot \frac{5}{8} = 3 \cdot \frac{4 \cdot 6 \cdot 5}{10 \cdot 9 \cdot 8} \] Alternative formula How many leaves contain exactly one red ball? Exactly \({n \choose a} = {3 \choose 1} = 3\). The probability for the precise event “\(a\) red balls followed by \(n-a\) blue balls” is: \[ \frac{A}{N} \cdot \frac{A-1}{N-1} \cdots \frac{A-a+1}{N-a+1} \cdot \frac{N-A}{N-a} \cdot \frac{N-A-1}{N-a-1} \cdots \frac{N-A-(n-a-1)}{N-a-(n-a-1)} \] The last denominator is simply \(N-n+1\), i.e. the full denominator is \(_NV_n=N!/(N-n)!\) The left numerator is simply \(_AV_a = A!/(A-a)!\), and by analogy the right numerator is \(_{N-A}V_{n-a} = (N-A)!/(N-A-(n-a))!\) So, all in all \[ {n \choose a} \cdot \frac{_AV_a \cdot _{N-A}V_{n-a}}{_NV_n} = {n \choose a} \cdot \frac{A! \cdot (N-A)! \cdot (N-n)!}{(A-a)! \cdot (N-A-(n-a))! \cdot N!} \] \[ = {n \choose a} \cdot \frac{\frac{A!}{(A-a)!} \cdot \frac{(N-A)!}{(N-A-(n-a))!} }{\frac{N!}{(N-n)!}} = \frac{{A \choose a} \cdot {N-A \choose n-a} }{{N \choose n}} \] which leaves us with \[ \displaystyle \boxed{P(x=a;N,A,n,a) = {n \choose a} \cdot \frac{_AV_a \cdot _{N-A}V_{n-a}}{_NV_n} } \]
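Both boxed forms of the pmf, and the tree reasoning behind them, can be verified numerically; here is a Python sketch (mine, not part of the post) using the worked example \(N=10, A=4, n=3\):

```python
from math import comb, perm
from itertools import combinations

def pmf_urn(N, A, n, a):
    """Classic favorable/total form of the hypergeometric pmf."""
    return comb(A, a) * comb(N - A, n - a) / comb(N, n)

def pmf_tree(N, A, n, a):
    """Tree form: (n choose a) leaves, each with the same
    falling-factorial path probability."""
    return comb(n, a) * perm(A, a) * perm(N - A, n - a) / perm(N, n)

# brute-force check: balls 0..3 are red, sample 3 of 10 without order
red = set(range(4))
samples = list(combinations(range(10), 3))
brute = sum(len(red & set(s)) == 1 for s in samples) / len(samples)

assert abs(pmf_urn(10, 4, 3, 1) - 0.5) < 1e-12
assert abs(pmf_tree(10, 4, 3, 1) - brute) < 1e-12
```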
Originally posted on November 12th, Gordon Kane's answers added in the comments on November 16th By Gordon Kane, Distinguished University Professor of Physics, University of Michigan and a Lilienfeld Prize winner I want to thank Luboš Motl for his interest and for inviting me to summarize our rare decay predictions for the data that is appearing this week. Basically the prediction for supersymmetry based on compactified string/M theories is that any rare decay rate should equal the Standard Model one within an accuracy of a few per cent. Intro by LM: The HCP 2012 conference in Kyoto, Japan began today. Pallab Ghosh of BBC immediately told us that "SUSY has been certainly put in the hospital". This statement boils down to the recently exposed measurements at the LHCb detector of the rarest decay of the B-mesons so far (paper in PDF, public info), namely \(B_s^0\to\mu^+\mu^-\), whose observed branching ratio \(3.2^{+1.5}_{-1.2}\times 10^{-9}\) (about 99% certainty it is nonzero) agrees with the Standard Model's prediction of \((3.54\pm0.30)\times 10^{-9}\). See also a few days old LHCb paper on another decay and texts by Harry Cliff at the Science Museum Discovery blog, Michael Schmitt, Tommaso Dorigo, and Prof Matt Strassler on the muon decay. But what do actual supersymmetric stringy compactifications predict about this decay? Are they dead? Although many string/M theory predictions cannot yet be made accurately, some can, in particular the prediction for \(B_s\to \mu^+\mu^-\). The short summary of the argument is that compactified string/M theories have moduli that describe the shapes and sizes of the small dimensions. The moduli fields have quanta, scalar particles, that decay gravitationally so they have long lifetimes. In order to not destroy the successes of nucleosynthesis the moduli have to be heavier than about \(30\TeV\).
One can show that the lightest eigenvalue of the moduli mass matrix is connected to the gravitino mass in theories with softly broken supersymmetry, and in turn that in such theories the squark and slepton (and Higgs scalar) masses are essentially equal to the gravitino masses. Thus the squarks and sleptons are heavier than about \(30 \TeV\), and they are predicted to be too heavy to observe at LHC or via the rare decays. The LHCb result agrees with this prediction. While the scalars are too heavy to be seen easily, gluinos and neutralinos and one chargino should be seen at LHC. LHCb detector coils A review article summarizing compactified M-theory predictions including those for the \(B_s\to\mu^+\mu^-\), many of which also hold for other corners of string theory, was recently published by Bobby Acharya, Piyush Kumar, and myself; it includes a prediction (before the data) of the Higgs boson mass to be \(126\pm 2\GeV\) including all supergravity constraints (details in the review article). Some people used phenomenological arguments to suggest scalars were heavy and thus expected similar results for rare decays. It should be emphasized that our predictions start from the Planck scale M/string theory and derive the results for supersymmetric \(\TeV\) scale theories. Getting the Standard Model result for \(B_s\to \mu^+\mu^-\) is the only prediction derived in a supersymmetric theory, and adds to the evidence for supersymmetry and for M/string theories beginning to become a meaningful predictive and explanatory framework for particle physics and cosmology. P.S. by LM: On page 26 of the April 2012 review above, you may read that the branching ratio has SUSY contributions going like \((\tan\beta)^6\), but because the result is virtually indistinguishable from the Standard Model, one may only conclude that \(\tan\beta\lessapprox 20\). The upper limit "twenty" may get reduced a bit in the wake of the new LHCb measurement.
What is electrostatic self-energy? The self-energy of a particle is the energy it possesses due to interactions between the particle and the system it is part of. It is simply the electrostatic potential energy stored in the system of charges. In electrostatics, the self-energy of a particular charge distribution is the energy required to assemble the charges from infinity into that particular configuration, without accelerating the charges. For a simple example, consider an electrostatic field $\vec{E}$ due to some charge $q$. We need to know the electrostatic energy stored in the system of the charge $q$ and another charge $Q$. We assume we start from infinity (where the electric field due to the charge $q$ is zero). To bring the charge $Q$ from infinity to a point at a distance $r$ from the charge $q$, we need to do work against the electric field $\vec{E}$. For electrostatic fields we have $\vec{E}=-\nabla V$, where $V$ is a scalar function called the electric potential. The charge $Q$ at any point in the electric field of $q$ experiences a force $\vec{F}=Q\vec{E}$, and work must be done against this force to assemble the charges in the required configuration. Hence the work done is $$\begin{align}W=-\int_\infty^r \vec{F}\cdot d\vec{r} &=-Q\int_\infty^r \vec{E}\cdot d\vec{r}\\&=-Q\int_\infty^r (-\nabla V)\cdot d\vec{r}\\&=-Q\int_\infty^r dV\\&=Q[V(r)-V(\infty)]\end{align}$$ Assuming $V(\infty)=0$, we have the work done $$\bbox[5px,border:2px solid green]{W=QV(r)\qquad(1) }$$ This work done is stored as the potential energy of the system of charges. We can extend the result to any number of charges and to any configuration. For example, we can find the electrostatic self-energy, i.e. the energy needed to assemble a system of four charges at the corners of a square.
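As an aside, the four-charges-on-a-square example is easy to check numerically: the assembly energy is the sum of $kq_iq_j/r_{ij}$ over all unordered pairs. A small illustrative sketch (the charge values and positions below are arbitrary, not from the original text):

```python
import itertools

K = 8.9875517923e9  # Coulomb constant 1/(4*pi*eps0) in N*m^2/C^2

def pairwise_energy(charges, positions):
    """Electrostatic assembly energy of point charges.

    Each unordered pair is summed exactly once via itertools.combinations,
    which absorbs the 1/2 prefactor of the double-sum formula.
    """
    total = 0.0
    for (qi, ri), (qj, rj) in itertools.combinations(zip(charges, positions), 2):
        r = sum((a - b) ** 2 for a, b in zip(ri, rj)) ** 0.5
        total += K * qi * qj / r
    return total

# Four unit charges at the corners of a unit square:
q = [1.0] * 4
pos = [(0, 0), (1, 0), (1, 1), (0, 1)]
E = pairwise_energy(q, pos)  # = K * (4 + sqrt(2)) for these positions
```

For the unit square the six pair separations are four sides of length $1$ and two diagonals of length $\sqrt{2}$, so the sum of $1/r_{ij}$ is $4+\sqrt{2}$.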
We know that the electric potential $V$ is the potential energy per unit charge, and the electric potential at some distance $r$ from a charge $q$ is $$V(r)=\frac{1}{4\pi\epsilon_0}\frac{q}{r}\qquad(2)$$ Hence equation $(1)$ becomes $$\bbox[5px,border:2px solid red]{W=\frac{1}{4\pi\epsilon_0}\frac{qQ}{r}\qquad(3)}$$ This is the electrostatic self-energy of the configuration. Equation $(3)$ can be generalized to a system of $N$ point charges $q_i$ located at position vectors $r_i$ ($i=1,2,\ldots,N$): $$\bbox[5px,border:2px solid blue]{W=\frac{1}{2}\frac{1}{4\pi\epsilon_0}\sum_{i=1}^N\sum_{j\neq i}\frac{q_iq_j}{r_{ij}}\qquad(4)}$$ where $r_{ij}$ is the separation between $r_i$ and $r_j$. Would this be equivalent to integrating $kQ/r$ over the volume of a sphere? By definition, the electrostatic self-energy in this case would be the work done in assembling the charge continuously throughout the volume of the sphere of radius $R$. We need to find the electric potential at the surface of the sphere (i.e., at $r=R$). Since we have a continuous system of charges here, we have to replace the summation in equation $(4)$ by integration. But it is not equivalent to integrating $kQ/r$ over the volume of a sphere. What we are going to do is as follows: we build up the sphere by adding successive infinitesimal layers of charge (carried from infinite distance). From Gauss's theorem we know that, for a uniformly charged sphere having charge density $\rho$, radius $r$, and total charge $q=q(r)=\rho(4\pi r^3/3)$, the field and the potential outside the sphere are those of a point charge $q$ located at the center. In building the sphere, we add infinitesimal layers of charge $dq=\rho4\pi r^2dr$, thereby increasing the radius of the sphere from $0$ to $R$.
Hence the total energy is (from equation ($1$)) $$\begin{align}W=\int V(r)dQ&=\int_0^R k\frac{q(r)}{r}\rho 4\pi r^2dr\\&=k\int_0^R\frac{\rho}{r} \left(\frac{4}{3}\pi r^3\right)\rho\, 4\pi r^2dr\\&=\frac{4\pi\rho^2R^5}{15\epsilon_0}\\&=\frac{3k}{5}\frac{Q^2}{R}\end{align}$$ where we have substituted $\rho=Q/(4\pi R^3/3)$. So the electrostatic self-energy in your problem is $$\bbox[5px,border:2px solid pink]{W=\frac{3k}{5}\frac{Q^2}{R}\qquad(5)}$$ where $Q$ is the total charge enclosed by the sphere. Equation ($5$) gives the energy stored in the system of charges assembled continuously throughout a spherical volume of radius $R$. All we have made use of is equation ($1$).
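As a sanity check on equation $(5)$, the shell-by-shell assembly can be carried out numerically and compared with $3kQ^2/(5R)$. A minimal sketch (the values of $Q$ and $R$ below are arbitrary, purely for illustration):

```python
import math

K = 8.9875517923e9  # Coulomb constant 1/(4*pi*eps0)

def sphere_self_energy_numeric(Q, R, n=200000):
    """Assemble a uniform sphere shell by shell:
    W = sum of k*q(r)/r * dq, with dq = rho*4*pi*r^2*dr (midpoint rule)."""
    rho = Q / (4.0 / 3.0 * math.pi * R ** 3)
    dr = R / n
    W = 0.0
    for i in range(1, n + 1):
        r = (i - 0.5) * dr
        q_r = rho * 4.0 / 3.0 * math.pi * r ** 3   # charge already assembled
        dq = rho * 4.0 * math.pi * r ** 2 * dr     # charge of the new shell
        W += K * q_r / r * dq
    return W

Q, R = 1e-6, 0.1
W_num = sphere_self_energy_numeric(Q, R)
W_exact = 3.0 / 5.0 * K * Q ** 2 / R
```

The numerical sum converges to the closed form as the shell thickness shrinks.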
For the massless case we can decompose the momentum bispinor into Weyl sub-parts as $$p_{a\dot a} = \lambda_{a}\tilde \lambda_{\dot a}.$$ But can I do something like this for massive fermions, i.e. decompose them into Weyl sub-parts with some additional terms? If so, how? Why do I need it? I am performing a twistor transform for the equation of the process $q\bar q \to gg$, so I have to write the amplitude and the 4d delta function of the momentum and then Fourier transform $\lambda$ and $\tilde \lambda$ separately.
Local wellposedness for the critical nonlinear Schrödinger equation on $ \mathbb{T}^3 $ Department of Mathematics, University of California, Los Angeles, Los Angeles, CA 90095, USA For $ p\geq 2 $, we prove local wellposedness for the nonlinear Schrödinger equation $ (i\partial _t + \Delta)u = \pm|u|^pu $ on $ \mathbb{T}^3 $ with initial data in $ H^{s_c}(\mathbb{T}^3) $, where $ \mathbb{T}^3 $ is a rectangular irrational $ 3 $-torus and $ s_c = \frac{3}{2} - \frac{2}{p} $ is the scaling-critical regularity. This extends work of earlier authors on the local Cauchy theory for NLS on $ \mathbb{T}^3 $ with power nonlinearities where $ p $ is an even integer. Mathematics Subject Classification: Primary: 35Q55. Citation: Gyu Eun Lee. Local wellposedness for the critical nonlinear Schrödinger equation on $ \mathbb{T}^3 $. Discrete & Continuous Dynamical Systems - A, 2019, 39 (5) : 2763-2783. doi: 10.3934/dcds.2019116
Linearity Theory There are three definitions we discussed in class for linearity. Definition 1 A system is called linear if for any constants $a, b \in \mathbb{C}$ and for any input signals $x_1(t)$, $x_2(t)$ with responses $y_1(t)$, $y_2(t)$, respectively, the system's response to $ax_1(t) + bx_2(t)$ is $ay_1(t) + by_2(t)$. Definition 2 If $ x_1(t) \rightarrow \begin{bmatrix} system \end{bmatrix} \rightarrow y_1(t) $ $ x_2(t) \rightarrow \begin{bmatrix} system \end{bmatrix} \rightarrow y_2(t) $ then $ ax_1(t) + bx_2(t) \rightarrow \begin{bmatrix} system \end{bmatrix} \rightarrow ay_1(t) + by_2(t) $ for any $a, b \in \mathbb{C}$ and any $x_1(t)$, $x_2(t)$, then we say the system is linear. Definition 3 Applications Linearity can be used to simplify the Fourier transform. Integration and differentiation are also linear. Once a non-linear system is linearized, complex systems are easier to model mathematically. True linear systems are virtually unknown in the real world, but over a small range of variables, systems can be modeled as linear.
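One way to make Definition 2 concrete is to test the superposition property numerically on sampled inputs and scalars. This sketch is illustrative only (the test inputs $\sin t$ and $t^2$ are arbitrary choices, and a sampled check can refute linearity but never prove it); it compares a finite-difference differentiator, which is linear, against a squaring system, which is not:

```python
import math
import random

random.seed(42)  # deterministic trials

def is_linear(system, trials=100, tol=1e-6):
    """Empirically test ax1 + bx2 -> ay1 + by2 on random a, b, t.

    `system` maps an input signal (a function of t) to an output signal.
    Returns False at the first observed violation of superposition."""
    x1 = lambda t: math.sin(t)
    x2 = lambda t: t ** 2
    for _ in range(trials):
        a = random.uniform(-5, 5)
        b = random.uniform(-5, 5)
        t = random.uniform(-5, 5)
        combo = lambda tt: a * x1(tt) + b * x2(tt)
        lhs = system(combo)(t)
        rhs = a * system(x1)(t) + b * system(x2)(t)
        if abs(lhs - rhs) > tol:
            return False
    return True

# Approximate differentiator (linear) and a squarer (non-linear):
derivative_like = lambda x: (lambda t: (x(t + 1e-5) - x(t)) / 1e-5)
squarer = lambda x: (lambda t: x(t) ** 2)
```

Here `is_linear(derivative_like)` returns `True` while `is_linear(squarer)` returns `False`, matching the fact that differentiation is linear and squaring is not.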
Let $R$ be a DVR (discrete valuation ring) with uniformizer $\pi$; prove that $R[\sqrt{\pi}]$ is a DVR. How shall I begin? First, do I have to find a candidate for the uniformizing element of $R[\sqrt{\pi}]$, and what about $\sqrt{\pi}$? There are also $3$ conditions for a DVR: $\bullet$ Noetherian property $\bullet$ Integrally closed $\bullet$ It must have exactly $1$ nonzero prime ideal For the first property: if $R$ is Noetherian, then so is $R[T]$ by the Hilbert basis theorem, and any quotient by an ideal is also Noetherian. Then there is another theorem, which states: there is an isomorphism $R[T]/(f)\to R[\alpha]$ (provided $\alpha$ is an algebraic element in some extension of the field of fractions of $R$). Is it useful? Am I on the right track? $\bf{EDIT}:$ After the hint of Mathmo123, one can write $a+b\sqrt{\pi}$, an element of $R[\sqrt{\pi}]$, as $u\pi^v+x\pi^y\sqrt{\pi}=u\left(\sqrt{\pi}\right)^{2v}+x\left(\sqrt{\pi}\right)^{2y+1}$, but what is required, I think, is to express it as $w\left(\sqrt{\pi}\right)^z$, or not?
Taylor series A Taylor series is a representation of a function as an infinite sum of terms that are calculated from the values of the function's derivatives at a single point: $$f(x) \approx \sum\limits_{n=0}^{\infty}{\frac{f^{(n)}(a)}{n!}(x-a)^n}$$ where $f^{(n)}(a)$ denotes the $n^{th}$ derivative of $f$ evaluated at the point $a$. And here's a very intuitive example: The exponential function $e^x$ (in blue), and the sum of the first $n + 1$ terms of its Taylor series at $0$ (in red). Newton's method Overview In calculus, Newton's method is an iterative method for finding the roots of a differentiable function $f$. In machine learning, we apply Newton's method to the derivative $f'$ of the cost function $f$. One-dimensional version In the one-dimensional problem, Newton's method attempts to construct a sequence $\{x_n\}$ from an initial guess $x_0$ that converges towards some value $\hat{x}$ satisfying $f'(\hat{x})=0$. In other words, we are trying to find a stationary point of $f$. The second-order Taylor expansion $f_T(x)$ of $f$ around $x_n$ is $$f_T(x)=f_T(x_n+\Delta x) \approx f(x_n) + f'(x_n) \Delta x + \frac{1}{2}f''(x_n) \Delta x^2$$ So now we look for the $\Delta x$ that sets the derivative of this last expression with respect to $\Delta x$ equal to zero, which means $$\frac{\rm{d}}{\rm{d} \Delta x}\left(f(x_n) + f'(x_n) \Delta x + \frac{1}{2}f''(x_n) \Delta x^2\right) = f'(x_n)+f''(x_n)\Delta x = 0$$ Clearly, $\Delta x = -\frac{f'(x_n)}{f''(x_n)}$ is the solution. So it can be hoped that $x_{n+1} = x_n+\Delta x = x_n - \frac{f'(x_n)}{f''(x_n)}$ will be closer to a stationary point $\hat{x}$. High-dimensional version Again, the second-order Taylor expansion $f_T(x)$ of $f$ around $x_n$ is $$f_T(x)=f_T(x_n+\Delta x) \approx f(x_n) + \Delta x^{\mathsf{T}} \nabla f(x_n) + \frac{1}{2} \Delta x^{\mathsf{T}} {\rm H} f(x_n) \Delta x$$ where ${\rm H} f(x)$ denotes the Hessian matrix of $f(x)$ and $\nabla f(x)$ denotes the gradient.
(See more about the multivariable Taylor expansion at https://en.wikipedia.org/wiki/Taylor_series#Taylor_series_in_several_variables) So $\Delta x = -[{\rm H}f(x_n)]^{-1}\nabla f(x_n)$ should be a good choice. Limitation The time complexity of computing $A^{-1}$ for an $n \times n$ matrix $A$ is $O(n^3)$. So when the data set has many dimensions, the algorithm works quite slowly. Advantage The reason steepest descent goes wrong is that the gradient for one weight gets messed up by simultaneous changes to all the other weights. The Hessian matrix captures the sizes of these interactions, so Newton's method takes them into account as much as possible.
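The one-dimensional update $x_{n+1} = x_n - f'(x_n)/f''(x_n)$ derived above can be sketched in a few lines; the quartic test function below is an arbitrary choice for illustration:

```python
def newton_minimize(fp, fpp, x0, tol=1e-10, max_iter=50):
    """One-dimensional Newton iteration x_{n+1} = x_n - f'(x_n)/f''(x_n).

    fp and fpp are the first and second derivatives of the cost function.
    Stops when the step size falls below tol."""
    x = x0
    for _ in range(max_iter):
        step = fp(x) / fpp(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: f(x) = x^4 - 3x^2 + 2, so f'(x) = 4x^3 - 6x, f''(x) = 12x^2 - 6.
# Starting from x0 = 1, the iteration converges to the stationary point sqrt(3/2).
x_star = newton_minimize(lambda x: 4 * x**3 - 6 * x,
                         lambda x: 12 * x**2 - 6,
                         x0=1.0)
```

Note that the iteration finds a stationary point of $f$, not necessarily a minimum; which one it converges to depends on the starting guess.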
I am trying to simulate the prices of bond indexes (e.g. Barclays Aggregate, IBOXX sovereign, IBOXX corporates) using Monte Carlo, assuming that they follow the SDE given by the Hull-White (one-factor) model: $ dS_t = (\theta_t - \alpha_t S_t)dt +\sigma_t dW_t $ where $\theta, \alpha$ and $\sigma$ are time-dependent. First, I treat $\alpha_t$ as the mean of my process ($\approx\mu_t$). I estimate and forecast $\mu_t$ using the local linear trend state-space model and the Kalman filter: $\begin{align} Y_t &= \mu_t + \epsilon_t & \epsilon_t &\sim NID(0,\sigma_{\epsilon}^{2})\\ \mu_{t+1} &= \mu_{t} + \beta_t + \xi_t & \xi_t &\sim NID(0,\sigma_{\xi}^{2})\\ \beta_{t+1}&= \beta_{t} + \zeta_t & \zeta_t &\sim NID(0, \sigma_{\zeta}^{2}) \\ \end{align}$ Afterwards, I apply a GARCH(1,1) model in order to estimate the volatility $\sigma_t$ one time-step forward, where: $\sigma^2_t = \omega + \gamma \epsilon^2_{t-1} + \delta \sigma^2_{t-1} $ Then comes my problem. Since $\theta$ changes over time, I cannot estimate it directly. If it were constant I could simply substitute all the known elements and run a regression to find its value; but now $\theta$ is inside an integral. Is there a way for me to estimate $\theta_t$ and use it to simulate the process using Monte Carlo simulation? I am using R to code it. Is there a package that I can use? Any code would be much appreciated.
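Not an answer to the $\theta_t$ estimation problem, but for the Monte Carlo part: once fitted paths $\hat\theta_t$, $\hat\alpha_t$ and $\hat\sigma_t$ are in hand, the SDE can be stepped forward with a standard Euler-Maruyama scheme. A minimal sketch (in Python rather than R; the constant coefficient values below are purely illustrative placeholders for the fitted paths):

```python
import math
import random

def euler_maruyama(theta, alpha, sigma, s0, dt, n_steps, seed=0):
    """Simulate dS = (theta_t - alpha_t * S) dt + sigma_t dW by Euler-Maruyama.

    theta, alpha, sigma are sequences of length n_steps holding the
    (separately estimated) time-dependent coefficients."""
    rng = random.Random(seed)
    s = s0
    path = [s]
    for k in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))           # Brownian increment
        s += (theta[k] - alpha[k] * s) * dt + sigma[k] * dw
        path.append(s)
    return path

# Hypothetical constant coefficients, just to exercise the scheme:
n = 250
path = euler_maruyama([0.05] * n, [0.1] * n, [0.02] * n,
                      s0=1.0, dt=1 / 250, n_steps=n)
```

In practice one would generate many such paths (varying the seed) and average the quantity of interest over them.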
Now showing items 21-30 of 167 Long-range angular correlations of π, K and p in p–Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (Elsevier, 2013-10) Angular correlations between unidentified charged trigger particles and various species of charged associated particles (unidentified particles, pions, kaons, protons and antiprotons) are measured by the ALICE detector in ... Anisotropic flow of charged hadrons, pions and (anti-)protons measured at high transverse momentum in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2013-03) The elliptic, $v_2$, triangular, $v_3$, and quadrangular, $v_4$, azimuthal anisotropic flow coefficients are measured for unidentified charged particles, pions, and (anti-)protons in Pb–Pb collisions at $\sqrt{s_{NN}}$ = ... Multi-strange baryon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2014-01) The production of ${\rm\Xi}^-$ and ${\rm\Omega}^-$ baryons and their anti-particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been measured using the ALICE detector. The transverse momentum spectra at ... The ALICE Transition Radiation Detector: Construction, operation, and performance (Elsevier, 2018-02) The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ... Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV (Springer, 2015-01-10) The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y|< 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ...
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20) The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ... Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV (Springer Berlin Heidelberg, 2015-04-09) The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ... Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06) We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ... Measurement of charged jet suppression in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Springer, 2014-03) A measurement of the transverse momentum spectra of jets in Pb–Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV is reported. Jets are reconstructed from charged particles using the anti-kT jet algorithm with jet resolution parameters R ... Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV (Springer, 2015-05-27) The measurement of primary π±, K±, p and $\rm\bar{p}$ production at mid-rapidity (|y|< 0.5) in proton–proton collisions at √s = 7 TeV performed with A Large Ion Collider Experiment (ALICE) at the Large Hadron Collider (LHC) is reported. ...
Trying to perform system identification in the following state-space model $$ \begin{bmatrix} x_{1}(n)\\ x_{2}(n) \\ x_{3}(n)\end{bmatrix}=\begin{bmatrix} a_{11} && a_{12} && a_{13} \\ a_{21} && a_{22} && a_{23} \\ a_{31} && a_{32} && a_{33} \end{bmatrix} \begin{bmatrix} x_{1}(n-1)\\ x_{2}(n-1) \\ x_{3}(n-1)\end{bmatrix} +\begin{bmatrix} b_{11} && b_{12} \\ b_{21} && b_{22} \\ b_{31} && b_{32}\end{bmatrix} \begin{bmatrix} u(n) \\ u(n-1) \end{bmatrix} $$ $$ y(n) = \begin{bmatrix} c_{1} && c_{2} && c_{3}\end{bmatrix}\begin{bmatrix} x_{1}(n)\\ x_{2}(n) \\ x_{3}(n)\end{bmatrix} $$ $u$ is the input sequence and $y$ is the output sequence. If $p$ is any parameter in the model, an 'LMS rule' is used for estimating the parameter, which requires the gradient of the output with respect to the parameter $p$. It is possible to compute these gradients with standard expressions (see e.g. the reference linked in the original post). Experimentally I have verified that this works. So far so good. All the parameters in the model I'm considering are computed from 6 underlying parameters $\theta_1, \theta_2, \theta_3, \theta_4, \theta_5, \theta_6$, which are the parameters that really need to be estimated/identified. In a synthetic setup I'm keeping $\theta_3, \theta_4, \theta_5, \theta_6$ constant (pretending they are known) and only trying to estimate $\theta_1, \theta_2$. Both $a_{11}$ and $a_{12}$ depend on $\theta_1, \theta_2$, so I can write $a_{11}(\theta_1, \theta_2)$ and $a_{12}(\theta_1, \theta_2)$. Other parameters in the model also depend on $\theta_1, \theta_2$.
My question is why I can't compute the gradients $\frac{\partial y(n)}{\partial \theta_1}$ and $\frac{\partial y(n)}{\partial \theta_2}$ by $$ \begin{bmatrix} \frac{\partial y(n)}{\partial \theta_1}\\ \frac{\partial y(n)}{\partial \theta_2} \end{bmatrix} = \begin{bmatrix} \frac{\partial a_{11}(\theta_1, \theta_2)}{\partial \theta_1} && \frac{\partial a_{12}(\theta_1, \theta_2)}{\partial \theta_1} \\ \frac{\partial a_{11}(\theta_1, \theta_2)}{\partial \theta_2} && \frac{\partial a_{12}(\theta_1, \theta_2)}{\partial \theta_2} \end{bmatrix} \begin{bmatrix} \frac{\partial y(n)}{\partial a_{11}}\\ \frac{\partial y(n)}{\partial a_{12}} \end{bmatrix} $$ It may be a stupid bug but after serious debugging I'm starting to wonder if the above approach is fundamentally flawed, I just don't quite understand why it shouldn't work.
1) Let $L$ be a language. Let $k \in \mathbb{N}$ and let $R$ be a $k$-ary relational symbol that is not in $L$. Set $L' = L \cup \{ R \}$. Let $M$ be an $L$-structure and $F[x_1,\ldots,x_k]$ be a formula. Let $M'$ be an expansion by definition of $M$ in the language $L'$. Show by induction on formulas that for any $L'$-formula $G[y_1,\ldots,y_n]$, there is an $L$-formula $\tilde{G}[y_1,\ldots,y_n]$ such that for any $(a_1,\ldots,a_n) \in M^n$, we have $M' \vDash G[a_1,\ldots,a_n] \iff M \vDash \tilde{G}[a_1,\ldots,a_n]$. Conclude that any set definable in $M'$ is definable in $M$. 2) Do the same thing for a constant symbol $c$ not appearing in $L$, with $L' = L \cup \{ c \}$. 3) The same thing for a function symbol $f$. Attempt: I think this problem should be a very straightforward induction on formulas, but somehow I couldn't find any direction, probably because I'm still confused by all the new terminology. I will focus on (1), as (2) and (3) I believe are very similar. The problem is: given a $G$, how should we define $\tilde{G}$? So I start with an example. Say we have $L=\{ < \}$ and let $M = (\mathbb{N},<)$. Let $L' = L \cup \{>\}=\{<,>\}$ and so $M'=(\mathbb{N},<,>)$, an expansion by definition of $M$. Now in $L$, say we have the atomic formula $x<y$. In $L'$, to write $x>y$, we just negate it, i.e., $\neg((x<y)\vee(x\simeq y))$. Thus, my guess is that for $G=Rt_1\ldots t_k$ where $R$ is not in $L$, we put $\tilde{G}=\neg Rt_1\ldots t_k$. Then $M' \vDash G[a_1,\ldots,a_n] \iff\ldots$ and this is where I got stuck, because I couldn't use the fact that $M'$ is an expansion by definition of $M$. Could someone give me some hints? Thanks! UPDATE: Let $G=Rt_1\ldots t_k$ and assume that $R$ (defined by $F[x_1,\ldots,x_k]$ in $M$) is not part of $L$; otherwise, we are done. Put $\tilde{G} = F[t_1/x_1,\ldots,t_k/x_k]$.
Then we have $M' \vDash G[a_1,\ldots,a_n] \iff (t^{M'}_1[a_1,\ldots,a_n],\ldots,t_k^{M'}[a_1,\ldots,a_n]) \in R^{M'}$ (by the definition of satisfaction for atomic formulas) $\iff M \vDash F[t^{M'}_1[a_1,\ldots,a_n],\ldots,t_k^{M'}[a_1,\ldots,a_n]]$ (my explanation for this step is that because $R^{M'}$ is defined in $M$ by $F$, $M$ must satisfy $F$ with this tuple plugged in; but this is where I have a problem: it looks right, but is it formal?) $\iff M \vDash \tilde{G}[a_1,\ldots,a_n]$ (again, it looks right, but is it correct according to the formal approach of logic?)
Power of Generator of Cyclic Group is Generator iff Power is Coprime with Order Theorem Let $C_n = \gen a$, that is, let $C_n$ be generated by $a$. Then: $C_n = \gen {a^k} \iff k \perp n$ Proof Necessary Condition Let $k \perp n$. Then $\exists u, v \in \Z: 1 = u k + v n$. So for all $m \in \Z$ we have: $a^m = a^{m u k + m v n} = a^{m u k}$ (as $a^{m v n} = e$) $= \paren {a^k}^{m u}$ Thus $a^k$ generates $C_n$. $\Box$ Sufficient Condition Let $C_n = \gen {a^k}$. That is, let $a^k$ generate $C_n$. Then $\exists u \in \Z: a = \paren {a^k}^u$, as $a$ is an element of the group generated by $a^k$. Hence $u k \equiv 1 \pmod n$, so $\exists v \in \Z: 1 = u k + v n$, and therefore $k \perp n$ by Integer Combination of Coprime Integers. $\blacksquare$ Sources 1964: Walter Ledermann: Introduction to the Theory of Finite Groups (5th ed.) ... (previous) ... (next): $\S 9$: Cyclic Groups 1971: Allan Clark: Elements of Abstract Algebra ... (previous) ... (next): Chapter $2$: Subgroups and Cosets: $\S 44 \alpha$ 1978: Thomas A. Whitelaw: An Introduction to Abstract Algebra ... (previous) ... (next): Chapter $6$: An Introduction to Groups: Exercise $19$
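The theorem is easy to verify computationally for small $n$: a brute-force search for the exponents $k$ with $\gen{a^k} = C_n$ matches exactly the $k$ coprime with $n$. A small illustrative sketch, using additive notation via $C_n \cong \Z_n$:

```python
from math import gcd

def generators(n):
    """Exponents k in 1..n-1 for which a^k generates C_n, by brute force.

    In additive notation, a^k corresponds to the residue k in Z_n, and
    the subgroup it generates is {k*m mod n : m = 0..n-1}."""
    full = set(range(n))
    return [k for k in range(1, n)
            if {(k * m) % n for m in range(n)} == full]

# The theorem predicts exactly the k coprime with n:
n = 12
assert generators(n) == [k for k in range(1, n) if gcd(k, n) == 1]
```

For example, `generators(12)` yields `[1, 5, 7, 11]`, the residues coprime with $12$.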
I've been trying to implement the Metropolis-Hastings algorithm for a while, but there seems to be something weird going on. The algorithm does not require much statistics background to understand, and it has a step size h. I am working with the interval $(-\infty,\infty)$; if the interval is $[a,b]$, the module defines the function to be zero outside $[a,b]$ using the UnitStep command. Here's the Metropolis algorithm used: 1. Choose $x_{0}$ as starting point. 2. Choose a step size $h$. 3. Generate $x_{trial}$ = RandomVariate[UniformDistribution[{$x_{0}$-$\dfrac{h}{2}$,$x_{0}$+$\dfrac{h}{2}$}],1][[1]]. 4. r = w($x_{trial}$)/w($x_{0}$). 5. If r$\geq$1 then $x_{1}$=$x_{trial}$. 6. If r$<$1 then the trial is accepted with probability r: choose a random number $\eta \in [0,1]$; if $\eta<r$ then $x_{1}$=$x_{trial}$, else $x_{1}$=$x_{0}$. 7. Repeat. The function I'm using is $\dfrac{0.8}{1 - e^{-0.8 \pi}}e^{-0.8 x}$. It's the $f$ in the modules below. The points are sampled from a uniform distribution defined on the interval $[0, \pi]$; in the module $a=0$ and $b=\pi$. There is literally one difference between the two codes, and that's step 4: in the Metropolis-Hastings code you have the extra part added in the second code block, but in the Metropolis code there isn't such a thing. The only reason the Metropolis algorithm works for this function is that I have added a step function to make areas outside the interval $[0,\pi]$ zero. Now, for the weirdness. If I simply use the Metropolis algorithm, I get the following histogram, which looks fine. Here's the code for this histogram (you can copy it below). However, when I run the Metropolis-Hastings algorithm, I get the following histogram, which has slight overestimations and underestimations. If I run the code with T=100000, I get the following histogram, where the difference is quite significant. This does not happen with the first code block. Here's the code for this histogram (you can copy it below).
I don't get why they are so different - why doesn't the second histogram match the graph well? I did a lot of research and couldn't figure it out. Maybe it's something with my code. Any help would be appreciated. Here's the code you can copy. Code 1: MCHastings[f_, a_, b_, h_, X0_, T_] := Module[{w, X, data, xtrial, r, i, \[Eta], p1, p2}, w = (1 - UnitStep[a - x] - UnitStep[-b + x])*f; p1 = Plot[w, {x, Max[a - 1, -10], Min[b + 1, 10]}, PlotStyle -> {Thick}, PlotRange -> All]; X = X0;(*Step 1*) data = {X}; Do[ xtrial = RandomVariate[UniformDistribution[{X - h/2, X + h/2}], 1][[ 1]];(*Step 3*) r = (w /. x -> xtrial)/(w /. x -> X) ;(*Step 4*) If[r >= 1, X = xtrial, {\[Eta] = RandomReal[{0, 1}], If[\[Eta] < r, X = xtrial, X = X]}];(*Steps 5 and 6*) AppendTo[data, X]; , {i, 2, T}];(*Step 7*) p2 = Histogram[data, 50, "PDF"]; Return[Show[p2, p1, BaseStyle -> Medium, AxesOrigin -> {0, 0}]]; ] Code 2: MCHastings[f_, a_, b_, h_, X0_, T_] := Module[{w, X, data, xtrial, r, i, \[Eta], p1, p2}, w = (1 - UnitStep[a - x] - UnitStep[-b + x])*f; p1 = Plot[w, {x, Max[a - 1, -10], Min[b + 1, 10]}, PlotStyle -> {Thick}, PlotRange -> All]; X = X0;(*Step 1*) data = {X}; Do[ xtrial = RandomVariate[UniformDistribution[{X - h/2, X + h/2}], 1][[ 1]];(*Step 3*) r = (w /. x -> xtrial)/(w /. x -> X) * RandomVariate[UniformDistribution[{X - h/2, X + h/2}], 1][[1]]/ RandomVariate[UniformDistribution[{xtrial - h/2, xtrial + h/2}], 1][[1]]; (*Step 4*) If[r >= 1, X = xtrial, {\[Eta] = RandomReal[{0, 1}], If[\[Eta] < r, X = xtrial, X = X]}];(*Steps 5 and 6*) AppendTo[data, X]; , {i, 2, T}];(*Step 7*) p2 = Histogram[data, 50, "PDF"]; Return[Show[p2, p1, BaseStyle -> Medium, AxesOrigin -> {0, 0}]]; ]
Is it true that the cardinality of every maximal linearly independent subset of a finitely generated free module $A^{n}$ is equal to $n$ (not just at most $n$, but in fact $n$)? Here $A$ is a nonzero commutative ring. I know that it's true if $A$ is Noetherian or an integral domain. I thought it was not true in general, but I came up with something that looks like a proof and I can't figure out where it went wrong. I think I have a counter-example. Let $A$ be the ring of functions $f$ from $\mathbb{C}^2 \setminus (0,0) \to \mathbb{C}$ such that there is a polynomial $\widetilde{f} \in \mathbb{C}[x,y]$ with $\widetilde{f}(x,y)=f(x,y)$ for all but finitely many $(x,y)$ in $\mathbb{C}^2$. Map $A$ into $A^2$ by $f \mapsto (fx, fy)$. We check that this is injective: if $fx=0$ then $f$ is zero off of the $y$-axis. Similarly, if $fy=0$, then $f$ is zero off of the $x$-axis. So $(fx, fy) = (0,0)$ implies that $f$ is zero everywhere on $\mathbb{C}^2 \setminus (0,0)$. We now claim that there does not exist $(u,v)$ in $A^2$ such that $(f,g) \mapsto (fx+gu, \ fy+gv)$ is injective. Suppose such a $(u,v)$ exists. Let $\widetilde{u}$ and $\widetilde{v}$ be the polynomials in $\mathbb{C}[x,y]$ which coincide with $u$ and $v$ at all but finitely many points. Let $\Delta=\widetilde{u} y - \widetilde{v} x$. Since $\Delta$ is a polynomial which vanishes at $(0,0)$, it is not a non-zero constant. Thus, $\Delta$ vanishes on an infinite subset of $\mathbb{C}^2$. Let $(p,q)$ be a point in $\mathbb{C}^2 \setminus (0,0)$ such that $\Delta(p,q)=0$, $\widetilde{u}(p,q)= u(p,q)$ and $\widetilde{v}(p,q)=v(p,q)$. So $q u(p,q) - p v(p,q) =0$. Since $(p,q) \neq (0,0)$, there is some $k \in \mathbb{C}$ such that $(u(p,q), v(p,q)) = (kp, kq)$. Take $f$ to be $-k$ at $(p,q)$ and $0$ elsewhere; let $g$ be $1$ at $(p,q)$ and $0$ elsewhere. Then $(fx+gu, fy+gv)=0$, and the map $(f,g) \mapsto (fx+gu, \ fy+gv)$ is not injective. We have to prove that $m \leq n$ if there is a monomorphism $A^m \to A^n$.
Since such a monomorphism is given by an $n \times m$ matrix with entries in $A$, and every finitely generated ring (i.e. finitely generated $\mathbb{Z}$-algebra, such as the subring generated by the matrix entries) is noetherian, it is enough to consider the case that $A$ is noetherian. Now you already know the proof for this case, but I just add it. Pick a minimal prime ideal $\mathfrak{p} \subseteq A$. This exists since $A \neq 0$. Now localize at $\mathfrak{p}$. Then we may replace $A$ by $A_{\mathfrak{p}}$, and thereby assume that $A$ is a $0$-dimensional noetherian ring, thus artinian. For such a ring it is known that the length of finitely generated modules is finite, and additive on short exact sequences. In particular $m \cdot l(A) \leq n \cdot l(A)$. Since $l(A) \neq 0$ is finite, we get $m \leq n$. By the way, the assertion can be generalized to the infinite case: Let $M$ be a free module with basis $B$ and $L \subseteq M$ a linearly independent subset. Then $|L| \leq |B|$. Proof: Let $B$ be infinite. Representing elements of $L$ as linear combinations of elements of $B$ yields a map $f : L \to E(B)$, where $E(B)$ denotes the set of finite subsets of $B$. Now let $F$ be such a finite subset with $n$ elements. The finite case yields that there are at most $n$ linearly independent elements in $\langle F \rangle$, thus also in $f^{-1}(F)$. Now we use cardinal arithmetic: $|L| = \sum_{n > 0} \sum_{F \in E(B), |F|=n} |f^{-1}(F)| \leq \sum_{n > 0} |B^n| = \sum_{n > 0} |B| = |B|.$ EDIT: See the comments; this does not answer kwan's question yet. I have spotted the mistake in my proof. So here is the "wrong" proof: Let $v_{1},\ldots,v_{m}$ be linearly independent elements in $A^{n}$, where $m\lt n$. Write them as $n$-tuples of elements of $A$, thereby forming an $n$-by-$m$ matrix. Linear independence of the $v_{i}$'s means that the rank of this matrix is $m$. So there is an $m$-by-$m$ minor with non-zero determinant. By exchanging rows if necessary, bring these $m$ rows to the top part of the matrix.
Now add a column to the right side of the matrix whose entries are $0$ except at the $(m+1)$th position, where the entry is $1$. Then the new $n$-by-$(m+1)$ matrix has rank $m+1$ and hence the $(m+1)$ columns, the first $m$ of which are the $v_{i}\ $s, are linearly independent. The mistake was the notion of rank of a matrix. When the entries are not from an integral domain, the proper definition should be the largest integer $m$ such that no nonzero element of the ring annihilates the determinants of all $m$-by-$m$ minors. In the above example, I can't conclude that the rank of the $n$-by-$(m+1)$ matrix is $m+1$. With this, I can now exhale a sigh of relief and continue believing that this is not true in general. (By the way, I also know that it is true for free modules of infinite rank.) Assuming that $A$ has a maximal ideal $\mathfrak{m}$ (for example, by using Zorn's Lemma), one can proceed as follows: if $M$ is a free $A$-module with basis $(v_i)_{i\in I}$, then $M \cong A^I$, whence $M / \mathfrak{m} M \cong A^I / \mathfrak{m} A^I \cong (A / \mathfrak{m} A)^I$. This is a vector space over $k := A / \mathfrak{m} A$ of dimension $|I|$. Since over fields all bases of the same vector space have the same cardinality, and since the $k$-vector space structure of $M / \mathfrak{m} M$ is independent of the choice of the basis, this shows that all $A$-bases of $M$ have the same cardinality. I don't remember where I first saw this though... maybe someone else has a reference? I saw this first in the case that $A = \mathbb{Z}$ and $\mathfrak{m} = (2)$ for free abelian groups $M$, to show that the rank is well-defined.
I'm interested in maximizing a function $f(\mathbf \theta)$, where $\theta \in \mathbb R^p$. The problem is that I don't know the analytic form of the function, or of its derivatives. The only thing that I can do is to evaluate the function point-wise, by plugging in a value $\theta_*$ and getting a NOISY estimate $\hat{f}(\theta_*)$ at that point. If I want, I can decrease the variability of these estimates, but I have to pay increasing computational costs. Here is what I have tried so far: Stochastic steepest descent with finite differences: it can work but it requires a lot of tuning (e.g. gain sequence, scaling factor) and it is often very unstable. Simulated annealing: it works and it is reliable, but it requires lots of function evaluations, so I found it quite slow. So I'm asking for suggestions/ideas about possible alternative optimization methods that can work under these conditions. I'm keeping the problem as general as possible in order to encourage suggestions from research areas different from mine. I must add that I would be very interested in a method that could give me an estimate of the Hessian at convergence. This is because I could use it to estimate the uncertainty of the parameters $\theta$. Otherwise I'll have to use finite differences around the maximum to get an estimate.
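One family worth a look, adjacent to your first bullet, is simultaneous perturbation stochastic approximation (SPSA): it perturbs all coordinates at once, so each gradient estimate costs only two noisy evaluations regardless of the dimension $p$, and an adaptive variant (2SPSA) even builds the Hessian estimate you mention. A minimal sketch — the quadratic test objective, the gain constants, and the decay exponents (0.602 and 0.101, the commonly recommended values from Spall's work) are all illustrative assumptions, not a tuned implementation:

```python
import random

# Hypothetical noisy objective: the true maximum is at theta = (0, 0).
def f_noisy(theta):
    return -sum(t * t for t in theta) + random.gauss(0.0, 0.01)

def spsa_maximize(theta, iters=2000, a=0.1, c=0.1):
    for k in range(1, iters + 1):
        ak, ck = a / k**0.602, c / k**0.101          # decaying SPSA gains
        # Perturb every coordinate at once with random +/-1 signs.
        delta = [random.choice((-1.0, 1.0)) for _ in theta]
        plus  = f_noisy([t + ck * d for t, d in zip(theta, delta)])
        minus = f_noisy([t - ck * d for t, d in zip(theta, delta)])
        # Simultaneous-perturbation gradient estimate: 2 evaluations total.
        ghat = [(plus - minus) / (2.0 * ck * d) for d in delta]
        theta = [t + ak * g for t, g in zip(theta, ghat)]  # ascent step
    return theta

random.seed(0)
theta_hat = spsa_maximize([1.0, -1.0])
```

The per-step cost being independent of $p$ is the main draw over coordinate-wise finite differences; it still shares their sensitivity to the gain sequence, though.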
(This answer uses the second link you gave.) $\newcommand{\Like}{\text{L}}\newcommand{\E}{\text{E}}$Recall the definition of likelihood: $$\Like[\theta | X] = \Pr[X| \theta] = \sum_Z \Pr[X, Z | \theta]$$ where in our case $\theta = (\theta_A, \theta_B)$ are the estimators for the probability that coins A and B respectively land heads, $X = (X_1, \dotsc, X_5)$ are the outcomes of our experiments, each $X_i$ consisting of 10 flips, and $Z = (Z_1, \dotsc, Z_5)$ are the coins used in each experiment. We want to find the maximum likelihood estimator $\hat{\theta}$. The Expectation-Maximization (EM) algorithm is one such method to find (an at least locally maximal) $\hat{\theta}$. It works by finding the conditional expectation, which is then used to maximize $\theta$. The idea is that by continually finding a more likely (i.e. more probable) $\theta$ in each iteration we will continually increase $\Pr[X,Z|\theta]$, which in turn increases the likelihood function. There are three things that need to be done before going forward designing an EM-based algorithm: Construct the model. Compute the conditional expectation under the model (E-Step). Maximize our likelihood by updating our current estimate of $\theta$ (M-Step). Construct the Model Before we go further with EM we need to figure out what exactly it is we are computing. In the E-step we are computing exactly the expected value for $\log \Pr[X,Z|\theta]$. So what is this value, really? Observe that, with the coin indicators $Z_i$ summed out of the marginal likelihood, $$\begin{align*}\log \Pr[X|\theta] &= \sum_{i=1}^5 \log\sum_{C\in \{A,B\}}\Pr[X_i, Z_i=C| \theta]\\&=\sum_{i=1}^5 \log\sum_{C\in \{A,B\}} \Pr[Z_i=C | X_i, \theta] \cdot \frac{\Pr[X_i, Z_i=C| \theta]}{\Pr[Z_i=C | X_i, \theta]}\\&\geq \sum_{i=1}^5 \sum_{C\in \{A,B\}} \Pr[Z_i=C | X_i, \theta] \cdot \log\frac{\Pr[X_i, Z_i=C| \theta]}{\Pr[Z_i=C | X_i, \theta]}.\end{align*}$$ The reason is that we have 5 experiments to account for, and we don't know which coin was used in each. The inequality is due to $\log$ being concave and applying Jensen's inequality.
The reason we need that lower bound is that we cannot directly compute the arg max of the original equation. However, we can compute it for the final lower bound. Now what is $\Pr[Z_i=C|X_i,\theta]$? It is the probability that we see coin $C$ given experiment $X_i$ and $\theta$. Using conditional probabilities we have, $$\Pr[Z_i=C| X_i, \theta] = \frac{\Pr[X_i, Z_i = C|\theta]}{\Pr[X_i|\theta]}.$$ While we've made some progress, we're not done with the model just yet. What is the probability that a given coin flipped the sequence $X_i$? Letting $h_i = \#\text{heads in } X_i$, $$\Pr[X_i, Z_i = C| \theta] = \frac{1}{2} \cdot \theta_C^{h_i} (1 - \theta_C)^{10 - h_i},\ \text{ for } \ C \in \{A, B\}.$$ Now $\Pr[X_i|\theta]$ is clearly just the probability under both possibilities, $Z_i=A$ or $Z_i=B$. Since $\Pr[Z_i = A] = \Pr[Z_i = B] = 1/2$ we have, $$\Pr[X_i|\theta] = 1/2 \cdot (\Pr[X_i |Z_i = A, \theta] + \Pr[X_i |Z_i = B, \theta]).$$ E-Step Okay... that wasn't so fun, but we can start doing some EM work now. The EM algorithm begins by making some random guess for $\theta$. In this example we have $\theta^0 = (0.6,0.5)$. We compute $$\Pr[Z_1=A|X_1,\theta] = \frac{1/2 \cdot (0.6^5 \cdot 0.4^5)}{1/2 \cdot ((0.6^5 \cdot 0.4^5) + (0.5^5 \cdot 0.5^5))} \approx 0.45.$$ This value lines up with what is in the paper.
Now we can compute the expected number of heads in $X_1 = (H,T,T,T,H,H,T,H,T,H)$ from coin $A$, $$\E[\# \text{heads by coin }A | X_1, \theta] = h_1 \cdot \Pr[Z_1=A|X_1,\theta] = 5 \cdot 0.45 \approx 2.2.$$ Doing the same thing for coin $B$ we get, $$\E[\# \text{heads by coin }B | X_1, \theta] = h_1 \cdot \Pr[Z_1=B|X_1,\theta] = 5 \cdot 0.55 \approx 2.8.$$ We can compute the same for the number of tails by replacing $h_1$ with $10 - h_1$. This continues for all other values of $X_i$ and $h_i$, $1 \leq i \leq 5$. Thanks to linearity of expectation we can figure out $$\E[\#\text{heads by coin } A|X ,\theta] = \sum_{i=1}^5 \E[\# \text{heads by coin }A | X_i, \theta].$$ M-Step With our expected values in hand, now comes the M-step where we want to maximize $\theta$ given our expected values. This is done by simple normalization! $$\theta_A^1 = \frac{\E[\#\text{heads over } X \text{ by coin } A|X ,\theta]}{\E[\#\text{heads and tails over } X \text{ by coin } A|X ,\theta]} = \frac{21.3}{21.3 + 8.6} \approx 0.71.$$ Likewise for $B$. This process begins again with the E-Step and $\theta^1$, and continues until the values for $\theta$ converge (or to some allowable threshold). In this example we have 10 iterations and $\hat{\theta} = \theta^{10} = (0.8, 0.52)$. In each iteration the value of $\Pr[X,Z|\theta]$ increases, due to the better estimate of $\theta$. Now in this case the model was fairly simplistic. Things can get much more complicated pretty quickly; however, the EM algorithm will always converge, and will always produce a maximum likelihood estimator $\hat{\theta}$. It may be a local estimator, but to get around this we can just restart the EM process with a different initialization. We can do this a constant amount of times and retain the best results (i.e., those with the highest final likelihood).
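The whole recipe fits in a few lines of code. The sketch below replays the two-coin example, taking the per-experiment head counts (5, 9, 8, 4, 7 out of 10 flips each) from the paper's figure — an assumption, since the raw flips are not reproduced in this answer — and using the equal priors $\Pr[Z_i=A]=\Pr[Z_i=B]=1/2$, which cancel in the E-step ratio:

```python
# Two-coin EM, following the E-step and M-step above.
heads = [5, 9, 8, 4, 7]   # heads per 10-flip experiment (from the paper)
flips = 10

def em_step(theta_a, theta_b):
    ha = ta = hb = tb = 0.0
    for h in heads:
        # E-step: Pr[Z = A | X_i, theta]; the 1/2 priors (and any binomial
        # coefficients) cancel in the ratio.
        like_a = theta_a**h * (1 - theta_a)**(flips - h)
        like_b = theta_b**h * (1 - theta_b)**(flips - h)
        w_a = like_a / (like_a + like_b)
        w_b = 1.0 - w_a
        ha += w_a * h; ta += w_a * (flips - h)   # expected counts for coin A
        hb += w_b * h; tb += w_b * (flips - h)   # expected counts for coin B
    # M-step: normalize the expected counts.
    return ha / (ha + ta), hb / (hb + tb)

theta = (0.6, 0.5)          # theta^0 from the example
for _ in range(10):
    theta = em_step(*theta)
print(theta)                # ~ (0.80, 0.52), as in the paper
```

The first iteration reproduces the numbers worked out above ($\approx 21.3$ expected heads for coin $A$ and $\theta_A^1 \approx 0.71$).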
For the following (related to a binary tree complexity question): $$f(n) = \sum_{h=0}^{\lg{}n} h2^h$$ Is there any way to express this only in terms of $n$? Or approximate it? Put another way, I figure at worst for an upper bound, we could guess at it in the following way: $f(n) = 0 + (1 \cdot 2) + (2 \cdot 2^2) + (3 \cdot 2^3) + \dots + (\lg{}n \cdot 2^{\lg{}n})$ Since the index runs from $0$ to $\lg{}n$ and the largest term reduces to $n\lg{}n$, I could write $\mathcal{O}(n\lg^2{}n)$. Supposedly $f(n)$ reduces to $\Theta(n\lg{}n)$ or better (by better I mean closer to $\Theta(1)$) and I'm struggling to show that... and I don't know if it's even possible. I can't prove it either way, so I can't claim it's not; reducing the summation, if possible, would hopefully lead to a quick answer.
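For what it's worth, this sum has a standard closed form, $\sum_{h=0}^{k} h\,2^h = (k-1)\,2^{k+1} + 2$, so with $k = \lg n$ (for $n$ a power of two) we get $f(n) = 2n(\lg n - 1) + 2 = \Theta(n \lg n)$. A quick sanity check of the closed form:

```python
# Verify sum_{h=0}^{k} h*2^h == (k-1)*2^(k+1) + 2 for small k; with
# k = lg n this gives f(n) = 2n(lg n - 1) + 2, i.e. Theta(n lg n).
def f_direct(k):
    return sum(h * 2**h for h in range(k + 1))

def f_closed(k):
    return (k - 1) * 2**(k + 1) + 2

assert all(f_direct(k) == f_closed(k) for k in range(30))
```

The closed form follows from differentiating the geometric series $\sum_h x^h$ and evaluating at $x = 2$, which is why the answer is $\Theta(n \lg n)$ rather than $\Theta(n \lg^2 n)$.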
The moment of inertia can be calculated for any axis. Knowledge about one axis can help in calculating the moment of inertia for a parallel axis. Let \(I_{xx}\) be the moment of inertia about axis \(xx\), which passes through the center of mass/area. The moment of inertia for axis \(x'\) is \[I_{x'x'} = \int_{A} r'^{2} dA = \int_{A}\left(y'^{2} + z'^{2}\right)dA = \int_{A}\left[\left(y+\Delta y\right)^{2} + \left(z + \Delta z\right)^{2}\right]dA\tag{23}\] Equation 23 can be expanded as \[I_{x'x'} = \int_{A}\left(y^{2} + z^{2}\right)dA + 2\int_{A}\left(y\Delta y + z \Delta z\right)dA + \int_{A} \left(\left(\Delta y\right)^{2} + \left(\Delta z\right)^{2}\right)dA \tag{24}\] The first term on the right hand side of equation 24 is the moment of inertia about axis \(x\), and the second term is zero because the coordinates \(y\) and \(z\) are measured from the centroid, so their integrals over the area vanish. The third term is a new term and can be written as \[\int_{A}\left(\left(\Delta y\right)^{2} + \left(\Delta z\right)^{2}\right)dA = \left(\left(\Delta y\right)^{2} + \left(\Delta z\right)^{2}\right)\int_{A} dA = r^{2} A \tag{25}\] Hence, the relationship between the moment of inertia at \(xx\) and parallel axis \(x'x'\) is Parallel Axis Equation \[I_{x'x'} = I_{xx} + r^{2}A\tag{26}\] Fig. 3.5. The schematic to explain the summation of moment of inertia. The moment of inertia of several areas is the sum of the moments of inertia of each area, see Figure 3.5, and therefore \[I_{xx} = \sum_{i=1}^{n} I_{xxi}\tag{27}\] If the \(n\) areas are all identical, then \[I_{xx} = \sum_{i=1}^{n} I_{xxi} = nI_{xxi}\tag{28}\] Fig. 3.6. Cylinder with an element for calculation moment of inertia. Equation 28 is very useful in the calculation of the moment of inertia utilizing the moments of inertia of known bodies. For example, the moment of inertia of half a circle is half that of the whole circle for an axis at the center of the circle. The moment of inertia can then be moved to the center of area with the parallel axis theorem.
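As a quick numeric illustration of equation 26 (not part of the original text), take a \(b \times h\) rectangle: its moment of inertia about the base equals the centroidal value \(bh^3/12\) plus \(A\,d^2\) with \(d = h/2\), which together give \(bh^3/3\). The dimensions below are arbitrary:

```python
# Numeric check of the parallel-axis equation (eq. 26) for a b x h rectangle.
b, h = 2.0, 3.0
n = 100_000
dy = h / n
# Direct midpoint-rule integral of y^2 dA about the base of the rectangle.
I_base = sum(b * dy * ((i + 0.5) * dy) ** 2 for i in range(n))
I_centroid = b * h**3 / 12          # known centroidal moment of inertia
A, d = b * h, h / 2                 # area and centroid offset
assert abs(I_base - (I_centroid + A * d**2)) < 1e-3
print(I_base)                       # ~ b*h^3/3 = 18.0
```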
OK, my book has a proof that a continuous function defined on $[0,1]$ attains all values between $f(0)$ and $f(1)$ using some ultra case-bashy stuff, but I have two different proofs; are those correct? (a) Let the desired value be $m$. We prove that there exists a sequence of reals $\{a_i\}_{i=0}^{\infty}$ such that $\lim a_i$ exists, $0 \leq a_i \leq 1$ for all $i$, and $\lim f(a_i) = m$. Construction: From the definition of continuity, for $x \in [0,1]$, given any $\epsilon$ there exists a nonzero $h(\epsilon, x):= \delta$ such that for all $y$ with $|y - x| < \delta$, $|f(y) - f(x)| < \epsilon$. Let $g(\epsilon) := \min\{h(\epsilon, x) \mid x \in [0,1]\}$. Now, set $\epsilon_0 := 10^{-10000}$, split $[0,1]$ into $\lfloor \frac{1}{g(\epsilon_0)} \rfloor$ equal intervals $I_1, I_2, \cdots, I_{\text{A big number}}$, and let $a_0$ be the lower bound of an interval $I_i$ for which $\max\{f(x) \mid x \in I_i\} \geq m \geq \min\{f(x) \mid x \in I_i\}$. Now set $\epsilon_{1} = \epsilon_0^{100000}$, divide $I_i$ into $\lfloor \frac{1}{g(\epsilon_1)} \rfloor$ equal intervals, and choose $a_1$ to be the lower bound of the interval $I_{i_{j}}$ for which $\max\{f(x) \mid x \in I_{i_{j}}\} \geq m \geq \min\{f(x) \mid x \in I_{i_{j}}\}$. Repeat the process. (b) Another proof: Assume WLOG $f(0) < f(1)$. Let the desired number be $m$. If $f(0) = m$ or $f(1) = m$, then we're done. Otherwise, divide the reals in $[0,1]$ into sets $L$ and $R$ such that: $y \in L$ if and only if $\max\{ f(x) \mid 0 \leq x \leq y \} \leq m$; otherwise, put $y$ in $R$. Now, $L$ is nonempty because, as $m \neq f(0)$, we can pick a very small $\epsilon$ such that for $0 \leq x \leq \epsilon$, $f(x) < m$, and $R$ is nonempty by the analogous argument on $f(1)$. Now, it's well known that there exists a number $y$ such that every member of $L$ is smaller than or equal to it, and every member of $R$ is larger than or equal to it. As $y$ must be inside $[0,1]$, we're done.
The absolute viscosity of many fluids is relatively insensitive to pressure but very sensitive to temperature. For isothermal flow, the viscosity can be considered constant in many cases. The variations of air and water viscosity as a function of the temperature at atmospheric pressure are plotted in Figures 1.8 and 1.9. Some common materials (pure and mixture) have expressions that provide an estimate. For many gases, Sutherland's equation is used; according to the literature, it provides reasonable results over the range of \(-40^{\circ}C\) to \(1600^{\circ}C\). \[\mu = \mu_{0} \frac{0.555\, T_{i0} + Suth}{0.555\, T_{in} + Suth} \left(\frac{T}{T_0}\right)^{\frac{3}{2}}\] Where \(\mu\) is the viscosity at the input temperature \(T\), \(\mu_{0}\) is the reference viscosity at the reference temperature \(T_{i0}\), \(T_{in}\) is the input temperature in degrees Kelvin, \(T_{i0}\) is the reference temperature in degrees Kelvin, and \(Suth\) is Sutherland's constant (presented in Table 1.1). Example 1.3 Calculate the viscosity of air at 800K based on Sutherland's equation. Use the data provided in Table 1.1. Solution 1.3 Applying the constants from Sutherland's table provides \[ \mu = 0.00001827 \times \dfrac{ 0.555\times524.07+120}{0.555\times800+120} \times \left( \dfrac{800}{524.07}\right)^{\dfrac{3}{2}} \ \sim 2.51\times{10}^{-5} \left[\dfrac{N\, sec}{m^2}\right] \] The observed viscosity is about \(\sim 3.7\times{10}^{-5}\left[\dfrac{N\, sec}{m^2}\right]\). Table 1.2 Viscosity of selected gases. Substance Chemical formula Temperature, \(T\,[^{\circ}C]\) Viscosity, \(\left[\dfrac{N\, sec}{m^2} \right]\) Isobutane \(i-C_4\,H_{10}\) 23 0.0000076 Methane \(CH_4\) 20 0.0000109 Oxygen \(O_2\) 20 0.0000203 Mercury Vapor \(Hg\) 380 0.0000654 Table 1.3 Viscosity of selected liquids.
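Example 1.3 can be checked numerically; the short sketch below just re-evaluates Sutherland's equation with the air constants used in the worked solution above (reference viscosity \(1.827\times10^{-5}\), reference temperature 524.07, and \(Suth = 120\)); the function and variable names are ours:

```python
# Re-evaluating Example 1.3 with the constants from the solution above.
def sutherland(T, mu0=0.00001827, T0=524.07, suth=120.0):
    # mu = mu0 * (0.555*T0 + Suth)/(0.555*T + Suth) * (T/T0)^(3/2)
    return mu0 * (0.555 * T0 + suth) / (0.555 * T + suth) * (T / T0) ** 1.5

mu_800 = sutherland(800.0)
print(mu_800)    # ~ 2.51e-5 [N sec/m^2]
```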
Substance Chemical formula Temperature, \(T\,[^{\circ}C]\) Viscosity, \(\left[\dfrac{N\, sec}{m^2} \right]\) \((C_2H_5)O\) 20 0.000245 \(C_6H_6\) 20 0.000647 \(Br_2\) 26 0.000946 \(C_2H_5OH\) 20 0.001194 \(Hg\) 25 0.001547 \(H_2SO_4\) 25 0.01915 Olive Oil 25 0.084 Castor Oil 25 0.986 Glucose 25 5-20 Corn Oil 20 0.072 SAE 30 - 0.15-0.200 SAE 50 \(\sim25^{\circ}C\) 0.54 SAE 70 \(\sim25^{\circ}C\) 1.6 Ketchup \(\sim20^{\circ}C\) 0.05 Ketchup \(\sim25^{\circ}C\) 0.098 Benzene \(\sim20^{\circ}C\) 0.000652 Firm glass - \(\sim 1\times10^7\) Glycerol 20 1.069 Fig. 1.10. Liquid metals' viscosity as a function of the temperature. Liquid Metals Liquid metal can be considered a Newtonian fluid for many applications. Furthermore, many aluminum alloys behave as a Newtonian liquid until the first solidification appears (assuming steady state thermodynamic properties). Even when there is solidification (mushy zone), the metal behavior can be estimated as a Newtonian material (further reading can be done in this author's book ``Fundamentals of Die Casting Design''). Figure 1.10 exhibits several liquid metals (from The Reactor Handbook, Atomic Energy Commission AECD-3646, U.S. Government Printing Office, Washington D.C., May 1955, p. 258.) The General Viscosity Graphs In the case of ``ordinary'' fluids where information is limited, Hougen et al suggested using a graph similar to the compressibility chart. In this graph, if one point is well documented, other points can be estimated. Furthermore, this graph also shows the trends. In Figure 1.11 the relative viscosity \(\mu_{r} = \mu / \mu_{c}\) is plotted as a function of the relative temperature, \(T_{r}\). \(\mu_{c}\) is the viscosity at the critical condition and \(\mu\) is the viscosity at any given condition. The lines of constant relative pressure \(P_{r} = P / P_{c}\) are drawn. The lowest pressure is, for practical purposes, \(\sim1[bar]\). Table 1.4 The properties at the critical point for selected materials.
Chemical component Molecular Weight \(T_c\)[K] \(P_c\)[Bar] \(\mu_c\)\(\left[\dfrac{N\,sec}{m^2}\right]\) \(H_2\) 2.016 33.3 12.9696 3.47 \(He\) 4.003 5.26 2.289945 2.54 \(Ne\) 20.183 44.5 27.256425 15.6 \(Ar\) 39.944 151 48.636 26.4 \(Xe\) 131.3 289.8 58.7685 49. Air ``mixed'' 28.97 132 36.8823 19.3 \(CO_2\) 44.01 304.2 73.865925 19.0 \(O_2\) 32.00 154.4 50.358525 18.0 \(C_2H_6\) 30.07 305.4 48.83865 21.0 \(CH_4\) 16.04 190.7 46.40685 15.9 Water 18.01528 647.096 220.64 - The critical viscosity can be evaluated in the following three ways. The simplest way is by obtaining the data from Table 1.4 or similar information. The second way, if the information is available and is close enough to the critical point, is to obtain the critical viscosity as \[\mu_{c} = \frac{\mu}{\mu_{r}}\tag{21}\] The third way, when neither is available, is by utilizing the following approximation \[\mu_{c} = \sqrt{MT_{c}}\,\tilde{v}_{c}^{2/3}\tag{22}\] Where \(\tilde{v}_{c}\) is the critical molecular volume and \(M\) is the molecular weight. Or \[\mu_{c} = \sqrt{M}P_{c}^{2/3}T_{c}^{-1/6}\tag{23}\] Calculate the reduced pressure and the reduced temperature, and from Figure 1.11 obtain the reduced viscosity. Example 1.4 Estimate the viscosity of oxygen, \(O_2\), at \(100^{\circ}C\) and 20[Bar]. Solution 1.4 \(P_c = 50.35[Bar]\), \(T_c=154.4[K]\), and therefore \(\mu_c=18 \left[ \dfrac{N\,sec}{m^2}\right]\). The value of the reduced temperature is \[T_r \sim \dfrac{373.15}{154.4} \sim 2.41 \tag{24}\] The value of the reduced pressure is \[P_r \sim \dfrac{20}{50.35} \sim 0.4 \tag{25}\] From Figure 1.11 it can be obtained that \(\mu_r\sim 1.2\) and the predicted viscosity is \[\mu = \mu_c \, \overbrace{\left( \dfrac{\mu}{\mu_c}\right)} ^{Table } = 18 \times 1.2 = 21.6\left[\dfrac{N\, sec}{m^2}\right] \tag{26}\] Fig. 1.11. Reduced viscosity as a function of the reduced temperature. Fig. 1.12. Reduced viscosity as a function of the reduced temperature.
Viscosity of Mixtures In general the viscosity of a liquid mixture has to be evaluated experimentally. Even for a homogeneous mixture, there is no silver bullet to estimate the viscosity. In this book, an analytical expression is discussed only for mixtures of low density gases. For most cases, the following Wilke's correlation for gas at low density provides a result in a reasonable range. \[\mu_{mix} = \sum_{i=1}^n \frac{x_{i}\mu_{i}}{\sum_{j=1}^n x_{j}\phi_{ij}}\tag{27}\] where \(\phi_{ij}\) is defined as \[\phi_{ij} = \frac{1}{\sqrt{8}}\left(1+\frac{M_i}{M_j}\right)^{-\frac{1}{2}}\left(1+\left(\frac{\mu_i}{\mu_j}\right)^{\frac{1}{2}}\left(\frac{M_j}{M_i}\right)^{\frac{1}{4}}\right)^{2}\tag{28}\] Here, \(n\) is the number of chemical components in the mixture, \(x_{i}\) is the mole fraction of component \(i\), and \(\mu_{i}\) is the viscosity of component \(i\). The dimensionless parameter \(\phi_{ij}\) is equal to one when \(i=j\). The mixture viscosity is a highly nonlinear function of the fractions of the components. Example 1.5 Calculate the viscosity of a mixture (air) made of 20% oxygen, \(O_2\), and 80% nitrogen, \(N_2\), for a temperature of \(20^{\circ}C\). Solution 1.5 The following table summarizes the known details Table summary 1. Component Molecular Weight, \(M\) Fraction, \(x\) Viscosity, \(\mu\) \(O_2\) 32. 0.2 0.0000203 \(N_2\) 28. 0.8 0.00001754 Table summary 2. i j \(M_i/M_j\) \(\mu_i/\mu_j\) \(\Phi_{ij}\) 1 1 1.0 1.0 1.0 1 2 1.143 1.157 1.0024 2 1 0.875 0.86 0.996 2 2 1.0 1.0 1. \[ \mu_{mix} \sim \dfrac{0.2\times 0.0000203}{0.2\times1.0 + 0.8\times 1.0024} + \dfrac{0.8\times 0.00001754}{0.2\times0.996 + 0.8\times 1.0} \sim 0.0000181 \left[\dfrac{N\,sec}{m^2}\right] \] The observed value is \(\sim0.0000182 \left[\dfrac{N\,sec}{m^2}\right]\). In very low pressure, in theory, the viscosity of a gas with a ``simple'' molecular structure is a function of the temperature only. For gases with a very long or complex molecular structure these formulas cannot be applied.
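As a cross-check of Example 1.5, the script below evaluates Wilke's correlation directly for the 20% \(O_2\) / 80% \(N_2\) mixture (the variable names are ours; the interaction parameters are computed from the standard Wilke expression rather than read from the table):

```python
from math import sqrt

# Example 1.5 via Wilke's correlation (eqs. 27-28).
M  = [32.0, 28.0]              # molecular weights: O2, N2
x  = [0.2, 0.8]                # mole fractions
mu = [0.0000203, 0.00001754]   # component viscosities [N sec/m^2]

def phi(i, j):
    # Wilke interaction parameter; phi(i, i) == 1.
    return (1 + sqrt(mu[i] / mu[j]) * (M[j] / M[i]) ** 0.25) ** 2 \
           / (sqrt(8.0) * sqrt(1 + M[i] / M[j]))

mu_mix = sum(x[i] * mu[i] / sum(x[j] * phi(i, j) for j in range(2))
             for i in range(2))
print(mu_mix)    # ~ 1.81e-5 [N sec/m^2]
```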
For some mixtures of two liquids it was observed that at low shear stress the viscosity is dominated by the liquid with the higher viscosity, and at high shear stress by the liquid with the lower viscosity. The higher viscosity is more dominant at low shear stress. Reiner and Philippoff suggested the following formula: \[\frac{dU_{x}}{dy} = \left(\frac{1}{\mu_{\infty} + \frac{\mu_{0} - \mu_{\infty}}{1 + \left(\frac{\tau_{xy}}{\tau_{s}}\right)^2}}\right)\tau_{xy}\tag{29}\] Where the term \(\mu_{\infty}\) is the experimental value at high shear stress and the term \(\mu_0\) is the experimental viscosity at shear stress approaching zero. The term \(\tau_s\) is the characteristic shear stress of the mixture. Example values for this formula, for molten sulfur at a temperature of \(120^{\circ}C\), are \(\mu_{\infty} = 0.0215 \left({N\,sec}/{m^2}\right)\), \(\mu_{0} = 0.00105 \left({N\,sec}/{m^2}\right)\), and \(\tau_s = 0.0000073 \left({kN}/{m^2}\right)\). Equation (29) provides a reasonable value only up to \(\tau = 0.001 \left({kN}/{m^2}\right)\). Figure 1.12 can be used for a crude estimate of dense gas mixtures. To estimate the viscosity of a mixture with \(n\) components, Hougen and Watson's method for pseudocritical properties is adapted. In this method the mixed critical pressure is defined as \[{P_c}_{mix} = \sum_{i=1}^{n} \, x_i \,{P_c}_i \tag{30}\] the mixed critical temperature is \[{T_c}_{mix} = \sum_{i=1}^{n} \,x_i\, {T_c}_i \tag{31}\] and the mixed critical viscosity is \[{\mu_c}_{mix} = \sum_{i=1}^{n} \,x_i\, {\mu_c}_i \tag{32}\] Example 1.6 A liquid fills the gap between two concentric cylinders, the inner of 0.1 [m] and the outer of 0.101 [m] radius; the cylinders' length is 0.2 [m]. It is given that a moment of 1 [\(N\times m\)] is required to maintain an angular velocity of 31.4 [rad/second] (these numbers represent only an academic question, not real values of an actual liquid).
Estimate the liquid viscosity used between the cylinders. Solution 1.6 The moment, or torque, is transmitted through the liquid to the outer cylinder. A control volume around the inner cylinder shows that the moment is a function of the area and the shear stress. The shear stress can be estimated by assuming a linear velocity profile between the two concentric cylinders. The velocity at the inner cylinder's surface is \[ \label{concentricCylinders:Ui} U_i = r\,\omega = 0.1\times 31.4[rad/second] = 3.14 [m/s] \] The velocity at the outer cylinder surface is zero. The velocity gradient may be assumed to be linear, hence, \[ \label{concentricCylinders:dUdr} \dfrac{dU}{dr} \cong \dfrac{3.14 - 0}{0.101 - 0.1} = 3140\, sec^{-1} \] The applied moment is \[ \label{concentricCylinders:M1} M = \overbrace{2\,\pi\,r_i\,h}^{A} \overbrace{\mu \dfrac{dU}{dr}}^{\tau} \,\overbrace{r_i}^{\ell} \] or the viscosity is \[ \label{concentricCylinders:M} \mu = \dfrac{M}{ {2\,\pi\,{r_i}^2\,h} \, { \dfrac{dU}{dr}} } = \dfrac{1}{2\times\pi\times{0.1}^2 \times 0.2 \times 3140} \cong 0.0253 \left[\dfrac{N\,sec}{m^2}\right] \] Example 1.7 A square block weighing 1.0 [kN] with a side surface area of 0.1 [\(m^2\)] slides down an inclined surface with an angle of 20\(^{\circ}\). The surface is covered with an oil film. The oil creates a distance between the block and the inclined surface of \(1\times10^{-6}[m]\). What is the speed of the block at steady state? Assume a linear velocity profile in the oil and that the whole oil is in steady state. The viscosity of the oil is \(3 \times 10^{-5} [N\,sec/m^2]\).
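Example 1.6 can be replayed in a few lines, taking the velocity gradient across the thin gap as \(U_i/(r_o - r_i)\) for the assumed linear profile (the variable names are ours):

```python
from math import pi

# Example 1.6: M = tau * A * r_i with A = 2*pi*r_i*h and tau = mu * dU/dr.
M, r_i, r_o, h, omega = 1.0, 0.1, 0.101, 0.2, 31.4   # SI units, omega in rad/s
U_i = r_i * omega                    # velocity of the inner surface, 3.14 m/s
dUdr = U_i / (r_o - r_i)             # linear profile across the gap, 3140 1/s
mu = M / (2 * pi * r_i**2 * h * dUdr)
print(mu)                            # ~ 0.0253 [N sec/m^2]
```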
Solution 1.7 The shear stress at the surface is estimated for steady state by \[ \label{slidingBlock:shear} \tau = \mu \dfrac{dU}{dx} = 3 \times 10^{-5} \times \dfrac{U}{1\times10^{-6}} = 30 \, U \] The total friction force is then \[ \label{slidingBlock:frictionForce} f = \tau\, A = 0.1 \times 30\,U = 3\,U \] The component of the gravity force acting along the incline is equal to the friction force, hence \[ \label{slidingBlock:solPre1} F_g\,\sin 20^{\circ} = f = 3\,U\Longrightarrow U = \dfrac{F_g\,\sin\,20^{\circ}}{3} \] Or, with the weight \(F_g = 1000\,[N]\), the solution is \[ \label{slidingBlock:solPre} U = \dfrac{1000\times\sin\,20^{\circ}}{3} \cong 114\,[m/s] \] Example 1.8 A disc rotates parallel to a stationary flat surface with liquid filling the thin gap between them; derive the torque required to maintain the rotation. The edge effects can be neglected. The gap is given and equal to \(\delta\) and the rotation speed is \(\omega\). The shear stress can be assumed to be linear. Solution 1.8 In this case the shear stress is a function of the radius, \(r\), and an expression has to be developed. Additionally, the differential area also increases with \(r\). The shear stress can be estimated as \[ \label{discRotating:tau} \tau \cong \mu \,\dfrac{U}{\delta} = \mu\,\dfrac{\omega \, r}{\delta} \] This torque can be integrated over the entire area as \[ \label{discRotating:F} T = \int_0^R r\, \tau \,dA = \int_0^R \overbrace{r}^{\ell} \, \overbrace{\mu\, \dfrac{\omega \, r}{\delta}}^{\tau} \, \overbrace{2\,\pi\,r\,dr}^{dA} \] The result of the integration is \[ \label{discRotating:I} T = \dfrac{\pi\,\mu\,\omega\,R^4}{2\,\delta} \] Contributors Dr. Genick Bar-Meir. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or later or Potto license.
We can write any wave function as $$\psi(\vec x, t) = \sqrt{\rho(\vec x,t)}\exp{\left[\frac{iS(\vec x,t)}{\hbar}\right]}$$ for $S$ real and $\rho >0$. Here we interpret $\rho$ as the probability density. With the definition of the probability flux as $$\vec j(\vec x,t) \propto \mathrm{Im}\left(\psi^*\nabla\psi\right),$$ Sakurai shows that for the wavefunction above $$\vec j = \frac{\rho\nabla S}{m}.$$ The point here being that the probability flux depends on the spatial variation of the phase. Next he states that the direction of $\vec j$ at some point $\vec x$ is always normal to the surface of constant phase that goes through that point. He then gives the example of a plane wave: $$\psi(\vec x,t) \propto \exp{\left(\frac{i\vec p\cdot\vec x}{\hbar}-\frac{iEt}{\hbar}\right)}$$ in which it is stated that $$\nabla S = \vec p.$$ Question: How can we show that the last equation is true? In the context of the first equation, I interpret $$S(\vec x,t) = \vec p\cdot\vec x-Et$$ and thus $$\nabla S = \nabla(\vec p\cdot\vec x).$$ Surely we need not use a vector dot product identity. What am I missing? Consider the 2D (XY) case for simplicity, $\vec x = x\vec e_x + y\vec e_y$, where $\vec e_x$ is the unit vector along the x-axis.
By definition, the gradient is the following operator acting upon a scalar: $\nabla=\frac{\partial}{\partial x}\vec{e_x} + \frac{\partial}{\partial y}\vec{e_y}$ Apply this operator to $\vec p\cdot\vec x$: $\nabla(\vec{p}\cdot \vec x)=\left(\frac{\partial}{\partial x}\vec{e_x} + \frac{\partial}{\partial y}\vec{e_y}\right)(p_x x + p_y y)$ Then we consider the first part: $\frac{\partial (p_x x)}{\partial x}\vec{e_x}=\frac{\partial p_x}{\partial x}x\vec{e_x}+\frac{\partial x}{\partial x}p_x\vec{e_x}=\frac{\partial x}{\partial x}p_x\vec{e_x}$, since $p_x$ is not explicitly a function of $x$. Also, $\frac{\partial x}{\partial x}=1$. Likewise, $p_y$ is not explicitly a function of $x$, so the partial derivative $\frac{\partial(p_y y)}{\partial x}$ is zero. The same goes for $\frac{\partial }{\partial y}$. This yields: $\nabla(\vec{p}\cdot \vec x)=\frac{\partial x}{\partial x}p_x\vec{e_x}+\frac{\partial y}{\partial y}p_y\vec{e_y}=\vec p$ As I remember, such manipulations are widely used across the field.
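If in doubt, the identity $\nabla(\vec p \cdot \vec x) = \vec p$ is easy to confirm numerically with central differences for some fixed $\vec p$ (the values below are chosen arbitrarily for illustration):

```python
# Central-difference check that grad(p . x) = p for a fixed vector p.
p = [1.5, -2.0, 0.25]            # arbitrary fixed components of p

def S(x):
    # S(x) = p . x (the t-independent part of the phase)
    return sum(pk * xk for pk, xk in zip(p, x))

x0, h = [0.3, 0.7, -1.1], 1e-6   # arbitrary evaluation point, small step
grad = []
for k in range(3):
    xp = [xj + (h if j == k else 0.0) for j, xj in enumerate(x0)]
    xm = [xj - (h if j == k else 0.0) for j, xj in enumerate(x0)]
    grad.append((S(xp) - S(xm)) / (2 * h))
assert all(abs(g - pk) < 1e-8 for g, pk in zip(grad, p))
```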
Why are the magnetic moment and the angular momentum related? I've always read everywhere that they are related but have found nowhere a satisfactory explanation of the cause. Let's first look at the classical situation. A charged particle moving round a circular loop has an angular momentum and, because it is also a current, it produces a magnetic moment. Therefore, it can be considered to be a magnetic dipole with a moment $\vec{m}$. The magnetic moment and the angular momentum are proportional to each other, with the constant of proportionality called the gyromagnetic ratio. Going to the quantum world, some particles are observed to have an intrinsic magnetic moment the way they can have a mass or charge. We can define a quantity $\vec{S}$, the intrinsic angular momentum, from $\vec{m}$ using an appropriate gyromagnetic ratio. It is experimentally confirmed that we need $\vec{S}$ for angular momentum conservation. That is, the orbital angular momentum $\vec{L}$ alone is not conserved but the total angular momentum $\vec{J} = \vec{L} + \vec{S}$ is. You can understand it through the Einstein–de Haas effect or, the other way around, the Barnett effect. In classical mechanics, when an object of mass $m$ moves in a circle, it gives rise to angular momentum. Similarly, when a charged particle moves in a circle it gives rise to a magnetic moment. Charged particles are not massless; they do have some mass, and their orbital motion results in this angular momentum. The above effect means the following: you have a ferromagnetic rod hung by a thin fibre. Now, when you magnetise the ferromagnetic rod, it starts rotating the other way, because of conservation of the angular momentum arising from the moving charged masses inside the rod. Magnetic moment, angular momentum, and charge are related, because the magnetic field is how the electromagnetic interaction carries angular momentum.
If there were an intrinsic relationship between magnetic moment and angular momentum, you would expect the neutrino to have a magnetic moment. The current PDG reports an upper limit $\mu_\nu < 29\times10^{-12}\,\mu_B$ from experiments with reactor neutrinos, quite different from the electron's magnetic moment $\mu_e \approx \mu_B$. Note that an electron-type neutrino will spend part of its time as an $e^-$-$W^+$ loop (and similarly for neutrinos with contributions from the other flavors), which will give it some minuscule magnetic moment whose predicted value I don't know. However, at tree level the neutrino's coupling to the photon is exactly zero, which means it carries angular momentum without a magnetic moment. The way I understand it, a charged particle which has an angular momentum will have a magnetic moment associated with it. This could be because an angular momentum is associated with some kind of rotation. For a charged particle, this rotation could be thought to constitute a current loop. And a current loop can always be associated with a magnetic moment at its centre. Thus charge + rotation (angular momentum) --> magnetic moment. The moment comes out to be proportional to the angular momentum. For the quantum case, the angular momentum is probably linked to some rotation we don't understand yet, which could be giving rise to the magnetic moment... In some sense, this is just a longer version of Amey Joshi's answer, but I hope it helps. (Though I AM a year late! :P)
Abstract $\displaystyle H_n(\Omega BG {}^{^\wedge}_p;k)$ $\displaystyle \cong \mathrm{Tor}_{n-1}^{e.kG.e}(kG.e,e.kG),$ $\displaystyle H^n(\Omega BG{}^{^\wedge}_p;k)$ $\displaystyle \cong \mathrm{Ext}^{n-1}_{e.kG.e}(e.kG,e.kG).$ Further algebraic structure is examined, such as products and coproducts, restriction and Steenrod operations. Cite this TY - JOUR T1 - An algebraic model for chains on ΩBG^p AU - Benson, Dave PY - 2009/4 Y1 - 2009/4 N2 - We provide an interpretation of the homology of the loop space on the $ p$-completion of the classifying space of a finite group in terms of representation theory, and demonstrate how to compute it. We then give the following reformulation. If $ f$ is an idempotent in $ kG$ such that $ f.kG$ is the projective cover of the trivial module $ k$, and $ e=1-f$, then we exhibit isomorphisms for $ n\ge 2$:$\displaystyle H_n(\Omega BG {}^{^\wedge}_p;k)$ $\displaystyle \cong \mathrm{Tor}_{n-1}^{e.kG.e}(kG.e,e.kG),$ $\displaystyle H^n(\Omega BG{}^{^\wedge}_p;k)$ $\displaystyle \cong \mathrm{Ext}^{n-1}_{e.kG.e}(e.kG,e.kG).$ Further algebraic structure is examined, such as products and coproducts, restriction and Steenrod operations. AB - We provide an interpretation of the homology of the loop space on the $ p$-completion of the classifying space of a finite group in terms of representation theory, and demonstrate how to compute it. We then give the following reformulation. If $ f$ is an idempotent in $ kG$ such that $ f.kG$ is the projective cover of the trivial module $ k$, and $ e=1-f$, then we exhibit isomorphisms for $ n\ge 2$:$\displaystyle H_n(\Omega BG {}^{^\wedge}_p;k)$ $\displaystyle \cong \mathrm{Tor}_{n-1}^{e.kG.e}(kG.e,e.kG),$ $\displaystyle H^n(\Omega BG{}^{^\wedge}_p;k)$ $\displaystyle \cong \mathrm{Ext}^{n-1}_{e.kG.e}(e.kG,e.kG).$ Further algebraic structure is examined, such as products and coproducts, restriction and Steenrod operations.
U2 - 10.1090/S0002-9947-08-04728-4 DO - 10.1090/S0002-9947-08-04728-4 M3 - Article VL - 361 SP - 2225 EP - 2242 JO - Transactions of the American Mathematical Society JF - Transactions of the American Mathematical Society SN - 0002-9947 IS - 4 ER -
For a city we have simplified its weather forecasting as follows. If it rains, then the probability of rain the next day is $0.2$. If it's sunny, then the probability of a sunny day the next day is $0.7$. The vector $$x_{k}=\begin{bmatrix}\text{probability for sunny weather at day } k \\ \text{probability for rainy weather at day } k\end{bmatrix}$$ gives the probabilities of sunny and rainy weather. At day $k+1$ we get from day $k$'s probabilities that $$x_{k+1}=\begin{bmatrix} 0.7 & 0.8 \\ 0.3 & 0.2 \end{bmatrix} x_k$$ What's the probability that it rains on a random day? I have a hint that I can assume $x_0 = [1\;0]^T$. So \begin{align} x_{0+1}&=\begin{bmatrix} 0.7 & 0.8 \\ 0.3 & 0.2 \end{bmatrix} \begin{bmatrix} 1 \\ 0\end{bmatrix} = \begin{bmatrix} 0.7 \\ 0.3 \end{bmatrix} \\ x_{1+1}&=\begin{bmatrix} 0.7 & 0.8 \\ 0.3 & 0.2 \end{bmatrix} \begin{bmatrix} 0.7 \\ 0.3\end{bmatrix} = \begin{bmatrix} 0.73 \\ 0.27 \end{bmatrix} \end{align} Should I continue this for $k\to \infty$ or use some other method?
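One way to see where the iteration is heading is simply to run it. A quick sketch (not part of the original exercise; names are mine) that iterates $x_{k+1} = P x_k$ until it settles:

```python
# Sketch: iterate the weather transition matrix to its stationary distribution.
P = [[0.7, 0.8],
     [0.3, 0.2]]

def step(x):
    # x_{k+1} = P x_k
    return [P[0][0] * x[0] + P[0][1] * x[1],
            P[1][0] * x[0] + P[1][1] * x[1]]

x = [1.0, 0.0]            # the hint: start from a sunny day
for _ in range(50):
    x = step(x)

print(x)  # converges to [8/11, 3/11], i.e. rain with probability ~0.273
```

The limit can also be found directly without iterating: a stationary vector satisfies $x = Px$ together with $x_1 + x_2 = 1$, which gives $x = [8/11,\ 3/11]^T$.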
Proving the formula with Taylor series

The power series and the Taylor series: First, let's see the definition of a power series at 0: $$f(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + ...$$ Which is: $$f(x) = \sum_{n=0}^\infty a_n x^n$$

How to find a Taylor series? For functions whose derivatives eventually end up resulting in the function itself (maybe changed by constants), we can create lots of equations of different orders, by differentiating our known function and the sum. By taking advantage of the fact that setting x=0 in those equations will always leave a single constant (all other constants will vanish or be multiplied by 0), we can find the values of the constants and identify a pattern that will lead us to know all the values of the constants $a_n$ up to infinity. See: $$\ f(x) = a_0 + a_1 x +\ a_2 x^2 + \ a_3 x^3 \ + ... + a_n x^n \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $$$$f'(x) = 0 + \ \ a_1 \ + 2 a_2 x + 3a_3 x^2 + ... + \ \ \ na_n x^{n-1}\ \ \ \ \ \ \ \ \ $$$$f''(x) = 0 + \ \ 0\ \ + \ \ 2 a_2\ + 6a_3 x \ + ... + \ n(n-1)a_n x^{n-2}$$ And if we make x=0: $$f(0) = a_0$$$$f'(0) = a_1$$$$f''(0) = 2a_2$$$$f'''(0) = 6a_3$$$$\frac{d^nf(0)}{dx^n} = (n!)a_n$$

Finding the series of the exponential

The easiest one, whose derivative is itself: $$f(x) = f'(x) = f''(x) = ... = e^x$$ From which we get, for x=0: $$a_0 = a_1 = 2a_2 = 6a_3 = ... = 1$$ Then: $$a_n = 1/n! \ \ \ \Longrightarrow \ \ \ e^x = \sum_{n=0}^\infty \frac{x^n}{n!}$$

Exponential with an imaginary exponent: $$f(0) = e^{0i} = +1 = a_0$$$$f'(0) = ie^{0i} = +i = a_1$$$$f''(0) = i^2e^{0i} = -1 = 2!a_2$$$$f'''(0) = i^3e^{0i} = -i = 3!a_3$$$$f''''(0) = i^4e^{0i} = +1 = 4!a_4$$ Notice the pattern: +1; +i; -1; -i; +1; +i; -1; -i; ... We see we've got shifting signs, and real and imaginary parts: the real part taking even values of n and the imaginary part taking odd values of n.
$$e^{ix} = \sum_{n=0}^\infty \frac{(-1)^n x^{2n} }{(2n)!} + i \sum_{n=0}^\infty \frac{(-1)^n x^{2n+1}}{(2n+1)!}$$

Sine and cosine: Now, here is the magic! Follow the same process to get the Taylor series for these two. For the sine: $$f(0) = +sin(0) = 0 = a_0$$$$f'(0) = +cos(0) = +1 = a_1$$$$f''(0) = -sin(0) = 0 = 2!a_2$$$$f'''(0) = -cos(0) = -1 = 3!a_3$$$$f''''(0) = +sin(0) = 0 = 4!a_4$$ Look at the shifting pattern! And look at the skipped constants (the zeros). $$sin(x) = \sum_{n=0}^\infty \frac{(-1)^n x^{2n+1}}{(2n+1)!}$$ And the cosine: $$f(0) = +cos(0) = +1 = a_0$$$$f'(0) = -sin(0) = 0 = a_1$$$$f''(0) = -cos(0) = -1 = 2!a_2$$$$f'''(0) = +sin(0) = 0 = 3!a_3$$$$f''''(0) = +cos(0) = +1 = 4!a_4$$ Exactly the complement of the sine: $$cos(x) = \sum_{n=0}^\infty \frac{(-1)^n x^{2n}}{(2n)!}$$

Conclusion

And voilà! Like a charm: $$e^{ix} = cos(x) + i*sin(x)$$ This conclusion does make me wonder and wonder. And I really can't understand what it means, but it's certainly one of the most magical and mysterious things about math I've seen... I mean... I don't even know what an imaginary number is supposed to be, and then you make it an exponent... But the secret lies in the fact that, differentiating all the functions, a cyclic pattern appears. The exponential generates increasing powers of i, which create the +1, +i, -1, -i cycle. The sine and the cosine also cycle when differentiated: +sin, +cos, -sin, -cos. It happens that both cycles match.
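The series manipulation can also be checked numerically. A small sketch (my own check, not part of the argument) that sums a truncated Taylor series of $e^{ix}$ and compares it with $\cos x + i\sin x$:

```python
import math

# Sum the Taylor series of e^{ix} term by term and compare it with
# cos(x) + i*sin(x); 40 terms is far more than needed for x = 1.2.
def exp_series(z, terms=40):
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term = term * z / (n + 1)   # next term: z^(n+1) / (n+1)!
    return total

x = 1.2
lhs = exp_series(1j * x)
rhs = math.cos(x) + 1j * math.sin(x)
print(abs(lhs - rhs))  # essentially zero
```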
As @Trevor Wilson said, $\vdash$, which is named “turnstile” or “right tack”, belongs to the meta-language. However, it is not always a syntactic consequence operator; it can also be a semantic consequence (what @Trevor Wilson described is $\models$), at least in type theory. The "models" symbol is not only ⊨ (U+22A8, named TRUE in the Unicode database); it is also ⊧ (U+22A7, precisely named MODELS in the Unicode database). TeX and Unicode disagree here (about ⊨ and ⊧). As @Peter Smith said, $\rightarrow$ is a functional implication. I'm adding that it's related to $\supset$ in the sense that the former is an interpretation of the latter (one domain to another domain).

Examples

$$\frac{\Gamma (x) = \tau}{\Gamma \vdash x : \tau}$$ The above is from type theory; $\vdash$ stands for a semantic consequence, something which can't be inferred from the syntax. In this example, $\Gamma$ stands for a typing context, which is a semantic context, not a syntactic one; there, $\Gamma$ is like a function, something computed at the meta-level. $$A \supset B = A \rightarrow B$$ The above is from the Curry-Howard correspondence. To make it clearer, it appears in a context which also asserts this: $$A \land B = A \times B$$ … where $A \times B$ is a term of a functional language and of lambda-calculus, and so is $A \rightarrow B$. $A \rightarrow B$ belongs to the domain of $A \times B$, while $A \Rightarrow B$ or $A \supset B$ belongs to the domain of $A \land B$. The same interpretation holds for: $$A \Rightarrow B = A \rightarrow B$$ The distinction is less strong than the object-level / meta-level distinction; it is a domain distinction. Comparing “$\vdash$ vs $\Rightarrow$” to “$\Rightarrow$ vs $\rightarrow$” is a bit like comparing explicit type conversion to implicit type conversion… a comparison to be used with a lot of care, just to give a picture.

Counter-examples (as the OP wished)

$$A \rightarrow B $$Where a context is $\Gamma$, the above may make sense.

$$A \vdash B $$Where a context is $\Gamma$, the above makes no sense at all (it means nothing). $$A \Rightarrow B$$$$A \rightarrow B$$In typed lambda-calculus (simply typed or more expressive), both of the above may be seen as the same (the latter is originally an interpretation of the former, but in a valid type theory the reverse interpretation is also valid). In untyped lambda-calculus, the former does not exist, and the latter may make sense, or may make no sense. $$\vdash A $$You may see the above; it makes sense (to be read as “derivable from the empty context”). $$\rightarrow A $$$$\Rightarrow A $$$$\supset A $$You will never see the above; it makes no sense. May be completed with other examples and counter-examples some future day.
Let $M$ be a complex manifold admitting a smooth, positive, proper plurisubharmonic exhaustion $\rho:M\to[0,\infty)$ satisfying the complex Monge-Ampère equation $(\partial\bar\partial \rho)^n=0$. Patrizio, Giorgio and Wong, Pit Mann, in "Stability of the Monge-Ampère foliation", Mathematische Annalen, March 1983, Volume 263, Issue 1, pp. 13-29, showed that if the volume function is continuous on the level sets of $\rho$, then the leaf space $\mathcal L$ admits a Kähler form $\omega$, so that, if $\pi:M\to\mathcal L $ is the projection, we have $$\partial\bar \partial \log \rho=\pi^*\omega.$$ We always have the following theorem:

Theorem: If $\omega$ is non-negative, $\omega^{n-1}\neq 0$, $\omega^n=0$, and $d\omega=0$, then $$\mathcal F=\text{ann}(\omega)=\{W\in TX\mid\omega(W,\bar W)=0\}$$ defines a foliation $\mathcal F$ on $X$, with each leaf of $\mathcal F$ a Riemann surface (which is always Kähler).

You can see my short note about fiberwise Calabi-Yau foliations and the semi-Ricci-flat metric introduced by Greene, Shapere, Vafa, and Yau: https://hal.archives-ouvertes.fr/hal-01551080

Take a holomorphic foliation map $\pi:X\to Y$ such that the leaves of the foliation coincide with the fibers of $\pi$; then the pull-back of any Kähler metric on $Y$ to $X$ gives rise to a homogeneous holomorphic Monge-Ampère foliation, and the degenerate Kähler form can be the pull-back of a Kähler metric on $Y$. See Proposition 6.4 of the paper of Ruan below. In fact, by Theorem 1.3 in the same reference, when the homogeneous Monge-Ampère equation comes from a collapsing, the foliation is holomorphic. Holomorphic foliations correspond to the Cheeger-Fukaya-Gromov theory of collapsing Riemannian manifolds.

If you want to study the collapsing part of a degeneration of Kähler-Einstein metrics, then you are dealing with a holomorphic foliation (see Wei-Dong Ruan's paper below) and also with a fiberwise Kähler-Einstein foliation (which is a foliation in the fiber direction and may not be a foliation in the horizontal direction; see this preprint).

Wei-Dong Ruan, On the convergence and collapsing of Kähler metrics, J. Differential Geom., Volume 52, Number 1 (1999), 1-40.
Trigonometry

Trigonometry (from the Greek trigonon = three angles and metron = measure) is a part of elementary mathematics dealing with angles, triangles and trigonometric functions such as sine (abbreviated sin), cosine (abbreviated cos) and tangent (abbreviated tan). It has some connection to geometry, although there is disagreement on exactly what that connection is; for some, trigonometry is just a section of geometry.

Overview and definitions

Trigonometry uses a large number of specific words to describe parts of a triangle. Some of the definitions in trigonometry are:

Right-angled triangle - A right-angled triangle is a triangle that has an angle that is equal to 90 degrees. (A triangle cannot have more than one right angle.) The standard trigonometric ratios can only be used on right-angled triangles.

Hypotenuse - The hypotenuse of a triangle is the longest side, and the side that is opposite the right angle. For example, for the triangle on the right, the hypotenuse is side c.

Opposite of an angle - The opposite side of an angle is the side that does not intersect with the vertex of the angle. For example, side a is the opposite of angle A in the triangle to the right.

Adjacent of an angle - The adjacent side of an angle is the side that intersects the vertex of the angle but is not the hypotenuse. For example, side b is adjacent to angle A in the triangle to the right.
Trigonometric ratios

Sine (sin) - The sine of an angle is equal to the [math]{Opposite \over Hypotenuse}[/math]
Cosine (cos) - The cosine of an angle is equal to the [math]{Adjacent \over Hypotenuse}[/math]
Tangent (tan) - The tangent of an angle is equal to the [math]{Opposite \over Adjacent}[/math]

The reciprocals of these ratios are:

Cosecant (csc) - The cosecant of an angle is equal to the [math]{Hypotenuse \over Opposite}[/math] or [math]\csc \theta = {1 \over \sin \theta}[/math]
Secant (sec) - The secant of an angle is equal to the [math]{Hypotenuse \over Adjacent}[/math] or [math]\sec \theta = {1 \over \cos \theta}[/math]
Cotangent (cot) - The cotangent of an angle is equal to the [math]{Adjacent \over Opposite}[/math] or [math]\cot \theta = {1 \over \tan \theta}[/math]

Students often use a mnemonic to remember this relationship. The sine, cosine, and tangent ratios in a right triangle can be remembered by representing them as strings of letters, such as SOH-CAH-TOA:

Sine = Opposite ÷ Hypotenuse
Cosine = Adjacent ÷ Hypotenuse
Tangent = Opposite ÷ Adjacent

or: Silly Old Hitler Couldn't Advance His Troops Over America
or: Sitting (or Sex) On Hard Concrete Always Hurts Try Other Alternatives

Purpose of trigonometry

With sines and cosines one can answer virtually all questions about triangles. One can work out the remaining angles and sides of any triangle as soon as two sides and their included angle, or two angles and a side, or three sides are known. These laws are useful in all branches of geometry, since every polygon may be described as a combination of triangles.
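As a quick sanity check of the ratios (a sketch of my own, not part of the article), take a 3-4-5 right triangle:

```python
import math

# SOH-CAH-TOA on a 3-4-5 right triangle, where angle A is the angle
# opposite the side of length 3.
opposite, adjacent, hypotenuse = 3.0, 4.0, 5.0
A = math.atan2(opposite, adjacent)          # recover angle A

print(math.sin(A), opposite / hypotenuse)   # both 0.6
print(math.cos(A), adjacent / hypotenuse)   # both 0.8
print(math.tan(A), opposite / adjacent)     # both 0.75
```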
Basic Electric Guitar Circuits 2: Potentiometers & Tone Capacitors

Part 2: Potentiometers and Tone Capacitors

What is a Potentiometer?

Potentiometers, or "pots" for short, are used for volume and tone control in electric guitars. They allow us to alter the electrical resistance in a circuit at the turn of a knob.

[Figure: drawing of a physical potentiometer, showing terminals 1, 2 and 3]
[Figure: potentiometer schematic symbol, showing terminals 1, 2 and 3]

It is useful to know the fundamental relationship between voltage, current and resistance known as Ohm's Law when understanding how electric guitar circuits work. The guitar pickups provide the voltage and current source, while the potentiometers provide the resistance. From Ohm's Law we can see how increasing resistance decreases the flow of current through a circuit, while decreasing the resistance increases the current flow. If two circuit paths are provided from a common voltage source, more current will flow through the path of least resistance.

Ohm's Law$$V = I \times R$$ where ~V~ = voltage, ~I~ = current, and ~R~ = resistance

Alternative functional terminal names: Terminal 1 "Cold", Terminal 2 "Wiper", Terminal 3 "Hot"

[Figure: a visual representation of how a potentiometer works, based on a 300 degree rotation]

We can visualize the operation of a potentiometer from the drawing above. Imagine a resistive track connected from terminal 1 to 3 of the pot. Terminal 2 is connected to a wiper that sweeps along the resistive track when the potentiometer shaft is rotated from 0° to 300°. This changes the resistance from terminals 1 to 2 and 2 to 3 simultaneously, while the resistance from terminal 1 to 3 remains the same. As the resistance from terminal 1 to 2 increases, the resistance from terminal 2 to 3 decreases, and vice-versa.
Tone Control: Variable Resistors & Tone Capacitors

Tone pots are connected using only terminals 1 and 2 for use as a variable resistor whose resistance increases with a clockwise shaft rotation. The tone pot works in conjunction with the tone capacitor ("cap") to serve as an adjustable high-frequency drain for the signal produced by the pickups.

Tone Circuit

The tone pot's resistance is the same for all signal frequencies; however, the capacitor has AC impedance which varies depending on both the signal frequency and the value of capacitance as shown in the equation below.$$\text{Capacitor Impedance} = Z_{\text{capacitor}} = \frac{1}{2 \pi f C}$$ where ~f~ = frequency and ~C~ = capacitance

Capacitor impedance decreases if capacitance or frequency increases. High frequencies see less impedance from the same capacitor than low frequencies. The table below shows impedance calculations for three of the most common tone cap values at a low frequency (100 Hz) and a high frequency (5 kHz):

~C~ (Capacitance) | ƒ (Frequency) | ~Z~ (Impedance)
.022 μF | 100 Hz | 72.3 kΩ
.022 μF | 5 kHz | 1.45 kΩ
.047 μF | 100 Hz | 33.9 kΩ
.047 μF | 5 kHz | 677 Ω
.10 μF | 100 Hz | 15.9 kΩ
.10 μF | 5 kHz | 318 Ω

When the tone pot is set to its maximum resistance (e.g. 250kΩ), all of the frequencies (low and high) have a relatively high path of resistance to ground. As we reduce the resistance of the tone pot to 0Ω, the impedance of the capacitor has more of an impact and we gradually lose more high frequencies to ground through the tone circuit. If we use a higher value capacitor, we lose more high frequencies and get a darker, fatter sound than if we use a lower value.

Volume Control: Variable Voltage Dividers

Volume pots are connected using all three terminals in a way that provides a variable voltage divider for the signal from the pickups.
The voltage produced by the pickups (input voltage) is connected between the volume pot terminals 1 and 3, while the guitar's output jack (output voltage) is connected between terminals 1 and 2.

Voltage divider equation:$$V_{\text{out}} = V_{\text{in}} \times \frac{R_2}{R_1 + R_2}$$

From the voltage divider equation we can see that if ~R_1 = 0\text{Ω}~ and ~R_2 = 250\text{kΩ}~, then the output voltage will be equal to the input voltage (full volume).$$V_{\text{out}} = V_{\text{in}} \times \frac{250\text{kΩ}}{0 + 250\text{kΩ}} = V_{\text{in}} \times \frac{250\text{kΩ}}{250\text{kΩ}}$$$$V_{\text{out}} = V_{\text{in}}$$ If ~R_1 = 250\text{kΩ}~ and ~R_2 = 0\text{Ω}~, then the output voltage will be zero (no sound).$$V_{\text{out}} = V_{\text{in}} \times \frac{0}{250\text{kΩ} + 0} = V_{\text{in}} \times \frac{0}{250\text{kΩ}}$$$$V_{\text{out}} = 0$$

Two Resistor Voltage Divider Schematic Example:$$V_{\text{in}} = 60\text{mV} \text{, } R_1 = 125\text{kΩ} \text{, } R_2 = 125\text{kΩ}$$$$V_{\text{out}} = V_{\text{in}} \times \frac{R_2}{(R_1 + R_2)}$$$$V_{\text{out}} = 60\text{mV} \times \frac{125\text{kΩ}}{(125\text{kΩ} + 125\text{kΩ})}$$$$V_{\text{out}} = 60\text{mV} \times \frac{1}{2}$$$$V_{\text{out}} = 30\text{mV}$$

Potentiometer Taper

The taper of a potentiometer indicates how the output to input voltage ratio will change with respect to the shaft rotation. The two taper curves below are examples of the two most common guitar pot tapers as they would be seen on a manufacturer data sheet. The rotational travel refers to turning the potentiometer shaft clockwise from 0° to 300° as in the previous visual representation drawing. How do you know when to use an audio or linear taper potentiometer? The type of potentiometer you should use will depend on the type of circuit you are designing for. Typically, for audio circuits the audio taper potentiometer is used.
This is because the audio taper potentiometer functions on a logarithmic scale, which is the scale on which the human ear perceives sound. Even though the taper chart appears to have a sudden increase in volume as the rotation increases, in fact the perception of the sound increase will occur on a gradual scale. The linear scale will actually (counterintuitively) have a more significant sudden volume swell effect because of how the human ear perceives the scale. However, linear potentiometers are often used for other functions in audio circuits which do not directly affect audio output. In the end, both types of potentiometers will give you the same range of output (from 0 to full), but the rate at which that range changes varies between the two.

How do you know what value of potentiometer to use? The actual value of the pot itself does not affect the input to output voltage ratio, but it does alter the peak frequency of the pickup. If you want a brighter sound from your pickups, use a pot with a larger total resistance. If you want a darker sound, use a smaller total resistance. In general, 250kΩ pots are used with single-coil pickups and 500kΩ pots are used with humbucking pickups.

Specialized Pots

Potentiometers are used in all types of electronic products, so it is a good idea to look for potentiometers specifically designed to be used in electric guitars. If you do a lot of volume swells, you will want to make sure the rotational torque of the shaft feels good to you, and most pots designed specifically for guitar will have taken this into account. When you start looking for guitar-specific pots, you will also find specialty pots like push-pull pots, no-load pots and blend pots, which are all great for getting creative and customizing your guitar once you understand how basic electric guitar circuits work.

By Kurt Prange (BSEE), Sales Engineer for Antique Electronic Supply - based in Tempe, AZ. Kurt began playing guitar at the age of nine in Kalamazoo, Michigan.
He is a guitar DIY'er and tube amplifier designer who enjoys helping other musicians along in the endless pursuit of tone.
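The two formulas used in the article, the tone-cap impedance $Z = 1/(2\pi f C)$ and the volume-pot voltage divider $V_{out} = V_{in} \times R_2/(R_1+R_2)$, are easy to check numerically. A small sketch (the function names are mine):

```python
import math

# Tone circuit: capacitor impedance Z = 1 / (2*pi*f*C)
def cap_impedance(f_hz, c_farads):
    return 1.0 / (2 * math.pi * f_hz * c_farads)

# Volume circuit: voltage divider Vout = Vin * R2 / (R1 + R2)
def divider_out(v_in, r1, r2):
    return v_in * r2 / (r1 + r2)

# Reproduce two rows of the impedance table
print(cap_impedance(100, 0.022e-6))      # ~72.3 kOhm
print(cap_impedance(5000, 0.022e-6))     # ~1.45 kOhm

# Reproduce the worked divider example: 60 mV in, R1 = R2 = 125 kOhm
print(divider_out(60e-3, 125e3, 125e3))  # 0.03 V = 30 mV
```

Running `cap_impedance` at 100 Hz for all three cap values reproduces the 72.3 kΩ, 33.9 kΩ and 15.9 kΩ entries of the table.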
Solutions are homogeneous mixtures containing one or more solutes in a solvent. The solvent is the substance that makes up most of the solution, whereas a solute is the substance that is dissolved inside the solvent.

Relative Concentration Units

Concentrations are often expressed in terms of relative units (e.g. percentages), with three different types of percentage concentrations commonly used:

Mass Percent: The mass percent is used to express the concentration of a solution when the mass of a solute and the mass of a solution is given: \[\text{Mass Percent}=\dfrac{\text{Mass of Solute}}{\text{Mass of Solution}} \times 100\% \label{1}\]

Volume Percent: The volume percent is used to express the concentration of a solution when the volume of a solute and the volume of a solution is given: \[\text{Volume Percent}= \dfrac{\text{Volume of Solute}}{\text{Volume of Solution}} \times 100\% \label{2}\]

Mass/Volume Percent: Another version of a percentage concentration is mass/volume percent, which measures the mass of solute (e.g., in grams) vs. the volume of solution (e.g., in mL). An example would be a 0.9% (w/v) \(NaCl\) solution in medical saline solutions that contains 0.9 g of \(NaCl\) for every 100 mL of solution (see figure below). The mass/volume percent is used to express the concentration of a solution when the mass of the solute and the volume of the solution are given. Since the numerator and denominator have different units, this concentration unit is not a true relative unit (e.g. percentage); however, it is often used as an easy concentration unit, since volumes of solvents and solutions are easier to measure than weights. Moreover, since the density of dilute aqueous solutions is close to 1 g/mL, if the volume of a solution is measured in mL (as per the definition), then this well approximates the mass of the solution in grams (making it a true relative unit (m/m)).
\[\text{Mass/Volume Percent}= \dfrac{\text{Mass of Solute (g)}}{\text{Volume of Solution (mL)}} \times 100\% \label{3}\]

Example \(\PageIndex{1}\): Alcohol "Proof" as a Unit of Concentration

In the United States, alcohol content in spirits is defined as twice the percentage of alcohol by volume (v/v), called proof. What is the concentration of alcohol in Bacardi 151 spirits that is sold at 151 proof (hence the name)?

SOLUTION

It will have an alcohol content of 75.5% (v/v), per the definition of "proof".

When calculating these percentages, the units of the solute and solution should be equivalent units (and weight/volume percent (w/v %) is defined in terms of grams and milliliters):

You CANNOT plug in (2 g Solute) / (1 kg Solution); you CAN plug in (2 g Solute) / (1000 g Solution) or (0.002 kg Solute) / (1 kg Solution).
You CANNOT plug in (5 mL Solute) / (1 L Solution); you CAN plug in (5 mL Solute) / (1000 mL Solution) or (0.005 L Solute) / (1 L Solution).
You CANNOT plug in (8 g Solute) / (1 L Solution); you CAN plug in (8 g Solute) / (1000 mL Solution) or (0.008 kg Solute) / (1 L Solution).

Dilute Concentration Units

Sometimes when solutions are too dilute, their percentage concentrations are too low. So, instead of using really low percentage concentrations such as 0.00001% or 0.000000001%, we choose another way to express the concentrations. This next way of expressing concentrations is similar to cooking recipes. For example, a recipe may tell you to use 1 part sugar, 10 parts water. As you know, this allows you to use amounts such as 1 cup sugar + 10 cups water in your recipe. However, instead of using the recipe's "1 part per ten" amount, chemists often use parts per million, parts per billion or parts per trillion to describe dilute concentrations.

Parts per Million: A concentration of a solution that contained 1 g solute and 1000000 mL solution (same as 1 mg solute and 1 L solution) would create a very small percentage concentration.
Because a solution like this would be so dilute, the density of the solution is well approximated by the density of the solvent; for water that is 1 g/mL (but would be different for different solvents). So, after doing the math and converting the milliliters of solution into grams of solution (assuming water is the solvent): \[\dfrac{\text{1 g solute}}{\text{1000000 mL solution}} \times \dfrac{\text{1 mL}}{\text{1 g}} = \dfrac{\text{1 g solute}}{\text{1000000 g solution}}\] We get (1 g solute)/(1000000 g solution). Because the solute and the solution are now both expressed in terms of grams, it could now be said that the solute concentration is 1 part per million (ppm). \[\text{1 ppm}= \dfrac{\text{1 mg Solute}}{\text{1 L Solution}}\] The ppm unit can also be used in terms of volume/volume (v/v) instead (see example below).

Parts per Billion: Parts per billion (ppb) is almost like ppm, except 1 ppb is 1000-fold more dilute than 1 ppm. \[\text{1 ppb} = \dfrac{1\; \mu \text{g Solute}}{\text{1 L Solution}}\]

Parts per Trillion: Just like ppb, the idea behind parts per trillion (ppt) is similar to that of ppm. However, 1 ppt is 1000-fold more dilute than 1 ppb and 1000000-fold more dilute than 1 ppm. \[\text{1 ppt} = \dfrac{ \text{1 ng Solute}}{\text{1 L Solution}}\]

Example \(\PageIndex{2}\): ppm in the Atmosphere

Here is a table with the volume percent of different gases found in air. Volume percent means that for 100 L of air, there are 78.084 L nitrogen, 20.946 L oxygen, 0.934 L argon and so on. Note that volume percent is different from the composition by mass or the composition by amount of moles.
Element | Volume Percent (v/v) | ppm (v/v)
Nitrogen | 78.084 | 780,840
Oxygen | 20.946 | 209,460
Water Vapor | 4.0 | 40,000
Argon | 0.934 | 9,340
Carbon Dioxide | 0.0379 | 379* (but growing rapidly)
Neon | 0.008 | 8.0
Helium | 0.000524 | 5.24
Methane | 0.00017 | 1.7
Krypton | 0.000114 | 1.14
Ozone | 0.000001 | 0.1
Dinitrogen Monoxide | 0.00003 | 0.305

Concentration Units Based on Moles

Mole Fraction: The mole fraction of a substance is the fraction of all of its molecules (or atoms) out of the total number of molecules (or atoms). It can also come in handy sometimes when dealing with the \(PV=nRT\) equation. \[\chi_A= \dfrac{\text{number of moles of substance A}}{\text{total number of moles in solution}}\] Also, keep in mind that the sum of each of the solution's substances' mole fractions equals 1. \[\chi_A + \chi_B + \chi_C \;+\; ... \;=1\]

Mole Percent: The mole percent (of substance A) is \(\chi_A\) in percent form. \[\text{Mole percent (of substance A)}= \chi_A \times 100\%\]

Molarity: The molarity (M) of a solution is used to represent the amount of moles of solute per liter of the solution. \[M= \dfrac{\text{Moles of Solute}}{\text{Liters of Solution}}\]

Molality: The molality (m) of a solution is used to represent the amount of moles of solute per kilogram of the solvent. \[m= \dfrac{\text{Moles of Solute}}{\text{Kilograms of Solvent}}\]

The molarity and molality equations differ only in their denominators. However, this is a huge difference. As you may remember, volume varies with temperature: at higher temperatures, volumes of liquids increase, while at lower temperatures, volumes of liquids decrease. Therefore, molarities of solutions also vary at different temperatures. This creates an advantage for using molality over molarity: using molalities rather than molarities for lab experiments would best keep the results within a closer range. Because volume is not part of its equation, molality is independent of temperature.
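As a quick illustration of the dilute-concentration units above (a sketch with names of my own choosing), assuming a dilute aqueous solution whose density is approximately 1 g/mL:

```python
# ppm for a dilute aqueous solution: the solute/solution mass ratio, times 10^6.
# Assumes the solution's density is ~1 g/mL, as discussed in the text.
def ppm_aqueous(mass_solute_g, volume_solution_mL):
    mass_solution_g = volume_solution_mL * 1.0   # 1 g per mL of solution
    return mass_solute_g / mass_solution_g * 1e6

print(ppm_aqueous(1.0, 1_000_000))  # 1 g in 10^6 mL -> 1.0 ppm
print(ppm_aqueous(0.001, 1000))     # 1 mg per litre -> 1.0 ppm, as in the text
```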
Practice Problems

1. In a solution, there are 111.0 mL (110.605 g) of solvent and 5.24 mL (6.0508 g) of solute. Find the mass percent, volume percent and mass/volume percent of the solute.
2. With the solution shown in the picture below, find the mole percent of substance C.
3. A 1.5 L solution is composed of 0.25 g NaCl dissolved in water. Find its molarity.
4. 0.88 g NaCl is dissolved in 2.0 L water. Find its molality.

Solutions

1. Mass Percent = (Mass of Solute) / (Mass of Solution) x 100% = (6.0508 g) / (110.605 g + 6.0508 g) x 100% = 5.19%

Volume Percent = (Volume of Solute) / (Volume of Solution) x 100% = (5.24 mL) / (111.0 mL + 5.24 mL) x 100% = 4.51%

Mass/Volume Percent = (Mass of Solute) / (Volume of Solution) x 100% = (6.0508 g) / (111.0 mL + 5.24 mL) x 100% = 5.21%

2. Moles of C = (5 C molecules) x (1 mol C / 6.022x10^23 C molecules) = 8.30x10^-24 mol C

Total moles = (24 molecules) x (1 mol / 6.022x10^23 molecules) = 3.99x10^-23 mol total

X_C = (8.30x10^-24 mol C) / (3.99x10^-23 mol) = 0.2083

Mole Percent of C = X_C x 100% = 20.8%

3. Moles of NaCl = (0.25 g) / (22.99 + 35.45 g/mol) = 0.004277 mol NaCl

Molarity = (Moles of Solute) / (Liters of Solution) = (0.004277 mol NaCl) / (1.5 L) = 0.0029 M

4. Moles of NaCl = (0.88 g) / (22.99 + 35.45 g/mol) = 0.01506 mol NaCl

Mass of water = (2.0 L) x (1000 mL / 1 L) x (1 g / 1 mL) x (1 kg / 1000 g) = 2.0 kg water

Molality = (Moles of Solute) / (kg of Solvent) = (0.01506 mol NaCl) / (2.0 kg) = 0.0075 m

Contributors

Christian Rae Figueroa (UCD)
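Problems 3 and 4 can be double-checked with a few lines of arithmetic. A sketch, using the same molar mass (22.99 + 35.45 g/mol for NaCl) as the written solutions:

```python
# Double-check practice problems 3 and 4.
M_NACL = 22.99 + 35.45            # g/mol, Na + Cl

# Problem 3: 0.25 g NaCl in 1.5 L of solution
molarity = (0.25 / M_NACL) / 1.5
print(round(molarity, 4))         # 0.0029 M

# Problem 4: 0.88 g NaCl in 2.0 L of water = 2.0 kg of solvent
molality = (0.88 / M_NACL) / 2.0
print(round(molality, 4))         # 0.0075 m
```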
Since you want "to convert regex to DFA in less than 30 minutes", I suppose you are working by hand on relatively small examples. In this case you can use Brzozowski's algorithm $[1]$, which computes directly the Nerode automaton of a language (which is known to be equal to its minimal deterministic automaton). It is based on a direct computation of the derivatives and it also works for extended regular expressions allowing intersection and complementation. The drawback of this algorithm is that it requires to check the equivalence of the expressions computed along the way, an expensive process. But in practice, and for small examples, it is very efficient. Left quotients. Let $L$ be a language of $A^*$ and let $u$ be a word. Then $$u^{-1}L = \{v \in A^* \mid uv \in L \}$$The language $u^{-1}L$ is called a left quotient (or left derivative) of $L$. Nerode automaton. The Nerode automaton of $L$ is the deterministic automaton $\mathcal{A}(L) = (Q, A, \cdot, L, F)$ where $Q = \{u^{-1}L \mid u \in A^*\}$, $F = \{u^{-1}L \mid u \in L\}$ and the transition function is defined, for each $a \in A$, by the formula$$ (u^{-1}L)\cdot a = a^{-1}(u^{-1}L)=(ua)^{-1}L$$Beware of this rather abstract definition. Each state of $\mathcal{A}$ is a left quotient of $L$ by a word, and hence is a language of $A^*$. The initial state is the language $L$, and the set of final states is the set of all left quotients of $L$ by a word of $L$. Brzozowski's algorithm. Let $a, b$ be letters. 
One can compute the left quotients using the following formulas:\begin{align*}a^{-1}1 &= 0 & a^{-1}b &= \begin{cases} 1 &\text{if $a = b$}\\ 0 &\text{if $a \not= b$}\\\end{cases}\\a^{-1}(L_1 \cup L_2) &= a^{-1}L_1 \cup a^{-1}L_2,&a^{-1}(L_1 \setminus L_2) &= a^{-1}L_1 \setminus a^{-1}L_2,\\a^{-1}(L_1 \cap L_2) &= a^{-1}L_1 \cap a^{-1}L_2, &a^{-1}L^* &= (a^{-1}L)L^* \end{align*}\begin{align*}a^{-1}(L_1L_2) &=\begin{cases} (a^{-1}L_1)L_2 &\text{if $1 \notin L_1$,}\\ (a^{-1}L_1)L_2 \cup a^{-1}L_2 &\text{if $1 \in L_1$}\\\end{cases}\\%\\v^{-1}(u^{-1}L) &= (uv)^{-1}L.\end{align*} Example. For $L = (a(ab)^*)^* \cup (ba)^*$, we get successively:\begin{align*}1^{-1}L &= L=L_1\\a^{-1}L_1 &=(ab)^*(a(ab)^*)^*=L_2\\b^{-1}L_1 &= a(ba)^*=L_3\\a^{-1}L_2 &= b(ab)^*(a(ab)^*)^* \cup (ab)^*(a(ab)^*)^*=bL_2 \cup L_2=L_4\\b^{-1}L_2 &=\emptyset \\a^{-1}L_3 &=(ba)^*=L_5\\b^{-1}L_3 &=\emptyset \\a^{-1}L_4 &= a^{-1}(bL_2 \cup L_2)=a^{-1}L_2=L_4 \\b^{-1}L_4 &= b^{-1}(bL_2 \cup L_2)= L_2\cup b^{-1}L_2 = L_2 \\a^{-1}L_5 &= \emptyset\\b^{-1}L_5 &=a(ba)^*=L_3\end{align*}which gives the following minimal automaton. $[1]$ J. Brzozowski, Derivatives of Regular Expressions, J.ACM 11(4), 481–494, 1964. Edit. (April 5, 2015) I just discovered that a similar question: What algorithms exist for construction a DFA that recognizes the language described by a given regex? was asked on cstheory. The answer partly addresses complexity issues.
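The derivative formulas translate almost line by line into code. Below is a sketch in Python (all names are mine; it implements only union, concatenation and star, and omits the simplification modulo similarity that Brzozowski's DFA construction needs to guarantee finitely many states):

```python
from dataclasses import dataclass

class Re:
    pass

@dataclass(frozen=True)
class Empty(Re): pass        # the empty language, written 0

@dataclass(frozen=True)
class Eps(Re): pass          # the language {empty word}, written 1

@dataclass(frozen=True)
class Sym(Re):
    c: str                   # a single letter

@dataclass(frozen=True)
class Union(Re):
    l: Re
    r: Re

@dataclass(frozen=True)
class Cat(Re):
    l: Re
    r: Re

@dataclass(frozen=True)
class Star(Re):
    e: Re

def nullable(e):
    """Does L(e) contain the empty word?"""
    if isinstance(e, (Eps, Star)):
        return True
    if isinstance(e, Union):
        return nullable(e.l) or nullable(e.r)
    if isinstance(e, Cat):
        return nullable(e.l) and nullable(e.r)
    return False             # Empty, Sym

def deriv(e, a):
    """The left quotient a^{-1} L(e), following the formulas above."""
    if isinstance(e, Sym):
        return Eps() if e.c == a else Empty()
    if isinstance(e, Union):
        return Union(deriv(e.l, a), deriv(e.r, a))
    if isinstance(e, Star):
        return Cat(deriv(e.e, a), e)
    if isinstance(e, Cat):
        d = Cat(deriv(e.l, a), e.r)
        return Union(d, deriv(e.r, a)) if nullable(e.l) else d
    return Empty()           # Empty, Eps

def matches(e, word):
    for a in word:
        e = deriv(e, a)      # compute u^{-1}L letter by letter
    return nullable(e)

ba_star = Star(Cat(Sym('b'), Sym('a')))   # (ba)*
print(matches(ba_star, ""), matches(ba_star, "baba"), matches(ba_star, "ab"))
```

To build the DFA itself, one would memoize the derivatives modulo similarity (associativity, commutativity and idempotence of $\cup$), taking each distinct derivative as a state, exactly as in the Nerode automaton described above.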
I know that for Stochastic Gradient Descent, one picks a data point $(x_n, y_n)$ at random from the training set $S_N$ and then updates the parameter of the model in question. If the cost function is: $$J(w; S_N) = \frac{1}{N} \sum^{N}_{n = 1} J(w;x_n,y_n) $$ where $J(w;x_n,y_n)$ is the loss (or cost) on the single example $(x_n,y_n)$, a typical SGD update would look as follows: $$ w := w - \gamma \nabla J(w; x_n,y_n)$$ However, if there is a regularizer present in the cost function, then it is not 100% clear to me how stochastic gradient descent (SGD) would actually work. Consider a regularized optimization problem that we want to optimize via SGD: $$ H[w] = J(w; S_N) + \lambda R(w) = \frac{1}{N} \sum^{N}_{n = 1} J(w;x_n,y_n) + \lambda R(w) $$ My conjecture is that the correct way to do stochastic gradient descent is by doing: $$ w := w - \gamma ( \nabla J(w; x_n,y_n) + \lambda \nabla R(w) )$$ for a data point $(x_n, y_n)$ chosen at random. The justification I have for this is the following (and I wanted to check it with the community). The random direction that we move the parameters in depends on which data point we draw. However, what is the expected step? That is, what is: $$ \mathbb{E}_{n}[ \nabla J(w; x_n,y_n) + \lambda \nabla R(w) ] $$ where the expectation is with respect to choosing a data point uniformly at random from $S_N$? Since we are picking a data point uniformly at random, and the regularizer term does not depend on $n$, the expectation becomes: $$ \mathbb{E}_{n}[ \nabla J(w; x_n,y_n) + \lambda \nabla R(w) ] = \frac{1}{N} \sum^{N}_{n = 1} \nabla J(w;x_n,y_n) + \lambda \nabla R(w) = \nabla H[w] $$ which is the same gradient as if we tried to optimize the objective function in a batch way. Using this logic, this seems to me to be the way to use SGD with regularization.
What do people think?
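The expectation argument above can also be sanity-checked numerically. Here is a minimal sketch (Python; the linear model, data, step size and $\lambda$ are my own illustrative choices, not from any particular library): each stochastic step applies one sampled loss gradient plus the full regularizer gradient, and the iterates settle near the regularized least-squares solution.

```python
import random

def sgd_step(w, x, y, grad_loss, grad_reg, lam, gamma):
    # one update: stochastic loss gradient + full gradient of the regularizer
    g = grad_loss(w, x, y)
    r = grad_reg(w)
    return [wi - gamma * (gi + lam * ri) for wi, gi, ri in zip(w, g, r)]

# Least squares J(w; x, y) = (w.x - y)^2 with L2 regularizer R(w) = ||w||^2 / 2
def grad_loss(w, x, y):
    err = sum(wi * xi for wi, xi in zip(w, x)) - y
    return [2 * err * xi for xi in x]

grad_reg = lambda w: list(w)   # gradient of ||w||^2 / 2 is w

random.seed(1)
data = [([1.0, i / 10], 3.0 + 2.0 * (i / 10)) for i in range(20)]  # y = 3 + 2x
w = [0.0, 0.0]
for _ in range(20000):
    x, y = random.choice(data)       # pick (x_n, y_n) uniformly at random
    w = sgd_step(w, x, y, grad_loss, grad_reg, lam=0.01, gamma=0.01)
# w ends up near (3, 2), shrunk slightly toward 0 by the regularizer
```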
Recall that any Riemannian manifold $(M,g)$ admits a unique symmetric metric connection $\nabla$ by the fundamental theorem of Riemannian geometry, and it is determined by the Koszul formula: $$2g(\nabla_Y X, Z) =Xg(Y, Z) + Yg(X, Z) - Zg(X, Y) -g([X, Z], Y) - g([Y, Z], X) - g([X, Y], Z)$$ Suppose now that we had a compact Lie group $G$ with a bi-invariant metric $\eta$. There's a special class $\mathcal{G}$ of vector fields on $G$ which are left-invariant, i.e., $X \in \mathcal{G}$ iff $dL_g X_h = X_{gh}$ where $L_g : G \to G$ is multiplication by $g$. Any element $X$ of $\mathcal{G}$ is obtained by left-translating a tangent vector $X_e \in \mathfrak{g}$ to all of $G$. Notice that left-invariant vector fields have constant inner product (or angles between them stay globally constant) because for any two left-invariant vector fields $U$ and $V$, $$\eta(U_{gh}, V_{gh}) = \eta(dL_gU_h, dL_g V_h) = \eta(U_h, V_h)$$ where the second equality follows from the left-invariance of $\eta$. Therefore if $X, Y, Z$ are vector fields in $\mathcal{G}$, $\nabla$ a symmetric connection on $G$ compatible with $\eta$, we invoke the Koszul formula. By the previous comment, $X\eta(Y, Z), Y\eta(X, Z)$ and $Z\eta(X, Y)$ are all zero so we are left with $$2\eta(\nabla_Y X, Z) = -\eta([X, Z], Y) - \eta([Y, Z], X) - \eta([X, Y], Z)$$ Up until this point we have not needed the right-invariance of $\eta$. But the following lemma is the relevant key: If $U, V, W$ are vector fields in $\mathcal{G}$, then $\eta([U, V], W) = \eta(U, [V, W])$. To see this, let $g_t$ be the element of $G$ defined as $\partial_t g_t = V(g_t)$ with initial condition $g_0 = e$ (so it's a flowline of $V$ starting at $e$). Write $\eta(U, W) = \eta(dR_{g_t}U, dR_{g_t}W)$ using right-invariance of $\eta$. Differentiate with respect to $t$ at $t = 0$ to obtain $0 = \eta(\mathcal{L}_V U, W)+\eta(U, \mathcal{L}_V W)$, and writing the Lie derivatives as brackets gets the desired formula.
Therefore $\eta([X, Z], Y) = -\eta([X, Y], Z)$ and $\eta([Y, Z], X) = \eta([X, Y], Z)$, which upon plugging in the previous formula we obtain $2\eta(\nabla_Y X, Z) = -\eta([X, Y], Z)$. Since this holds for all left-invariant vector fields $Z$, we conclude the formula (after using $[X, Y] = -[Y, X]$) $$\boxed{2\nabla_Y X = [Y, X] = \mathcal{L}_Y X}$$ In particular your question is easily answered, as $\nabla_X X = 1/2 \mathcal{L}_X X = 0$, implying every left-invariant vector field $X$ on $(G, \eta)$ is parallel along itself. One can completely understand the geodesics of $G$ using this fact. Consider the Riemannian exponential map $\text{exp} : \mathfrak{g} \to G$ given by $\exp(v) = \gamma(1)$ where $\gamma$ is the solution to the ordinary differential equation $\nabla_{\gamma'} \gamma' = 0$ with initial condition $\gamma(0) = e$, $\gamma'(0) = v$. If we define the left-invariant vector field $V$ on $G$ as $V_g = dL_g v_e$, then consider the integral curve $\gamma_V$ of $V$ defined as the solution to the ordinary differential equation $\gamma_V'(t) = V(\gamma_V(t))$ with initial condition $\gamma_V(0) = e$. Definitionally, the tangent field to $\gamma_V$ is $V$, and $\nabla_V V = 0$ as $V$ is left-invariant, so $\gamma_V$ is a solution to the initial value problem, therefore by uniqueness $\gamma_V = \gamma$. So alternatively we can define $\exp : \mathfrak{g} \to G$ as $\exp(v) = \gamma(1)$ where $\gamma$ is a one-parameter subgroup of $G$ with $\gamma(0) = e$ and $\gamma'(0) = v$ ($\gamma$ is in fact an integral curve of the left-invariant field $V$). The geodesics of $G$ are exactly the left translates of one-parameter subgroups of $G$. As an addendum, here's one argument to construct a bi-invariant metric on any compact connected Lie group $G$ (a hands-on construction is given in do Carmo, "Riemannian Geometry", problem $7$ of Ch. $1$). Pick any Riemannian metric $g(-, -)$ on $G$, and let $\mu$ be the Haar measure on $G$ (which is a bi-invariant measure).
Define $\eta$ by the averaging formula $$\eta(X, Y) = \int_{G \times G} g(dL_g dR_h X, dL_g dR_h Y)\; d\mu(g) d\mu(h)$$ Clearly $\eta$ is bi-invariant by construction, and it is positive definite because $g$ is and $\mu$ is a positive measure (this argument is in Milnor, "Morse theory", Ch. $21$).
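The key lemma $\eta([U,V],W) = \eta(U,[V,W])$ can be checked concretely for a matrix group. Below is a small numerical sketch for $\mathfrak{u}(2)$ (skew-Hermitian $2\times 2$ matrices) with the Ad-invariant inner product $\langle X, Y\rangle = -\operatorname{Re}\operatorname{tr}(XY)$; the choice of group and inner product is my own illustrative example, not taken from the argument above.

```python
import random

def rand_skew():
    # random element of u(2): A satisfies conj(A).T == -A
    a, b, c, d = (random.uniform(-1, 1) for _ in range(4))
    return [[1j * a, b + 1j * c], [-b + 1j * c, 1j * d]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def bracket(A, B):
    AB, BA = mul(A, B), mul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]

def ip(A, B):
    # <X, Y> = -Re tr(XY): an Ad-invariant inner product on u(2)
    return -sum(mul(A, B)[i][i] for i in range(2)).real

random.seed(0)
U, V, W = rand_skew(), rand_skew(), rand_skew()
lhs, rhs = ip(bracket(U, V), W), ip(U, bracket(V, W))
assert abs(lhs - rhs) < 1e-12   # <[U,V], W> == <U, [V,W]>
```

(For matrices the identity is just cyclicity of the trace, which is exactly what the differentiation argument recovers in general.)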
Plotting the surfaces, it's pretty much clear that we will have the following figure. Stokes' theorem states that the path integral of a vector field around the dotted curve is equal to the surface integral of the curl of the field over any surface bounded by it, i.e. $$ \oint_\gamma \vec F \cdot dr = \iint_\Omega \nabla \times \vec F \cdot \hat n d\sigma $$ First, let's calculate the left part, the path integral along the curve following the route $OA \to AB \to BO$. Along $OA$, the curve is $z=x^2$ and $y=0$. $$\int_{OA} ( y^2 \hat i + 2xy \hat j + xz^2 \hat k) \cdot dr = \int_{OA} xz^2 \hat k \cdot (dx \hat i + dz \hat k ) = \int_{OA} xz^2 dz $$ Putting $z=x^2$ (so $dz = 2x\,dx$) with $x = 0 \to 1$, we get $$\int_0^1 2x^6dx = \frac 2 7 \hspace{1cm} (1)$$ Now along $AB$, $z=1$ is constant. $$\int_{AB} ( y^2 \hat i + 2xy \hat j + xz^2 \hat k) \cdot dr = \int_{AB} ( y^2 \hat i + 2xy \hat j + xz^2 \hat k) \cdot ( dx \hat i + dy \hat j)$$ The circular arc can be parametrized by $x = \cos\theta$ and $y = \sin\theta$, with $\theta = 0 \to \frac \pi 2 $ along the curve; this gives $dx = -\sin\theta \, d\theta$ and $dy = \cos\theta \, d\theta$, and simplifying we get $$\int_0^{\frac \pi 2 } \left( -\sin^3 \theta + 2 \sin\theta \cos^2 \theta \right) d\theta = 0 \hspace{1cm} (2)$$ Along $BO$ we proceed in the same way as along $OA$, but here $x=0$ and we get $$\int_{BO} \vec F \cdot dr = 0 \hspace{1cm} (3)$$ Adding $(1), (2)$ and $(3)$, we get $\displaystyle \oint_\gamma \vec F \cdot dr = \frac 2 7 $.
The right part: The curl of the field turns out to be $- z^2 \hat j $ and the integral is $$ \iint_\Sigma (-z^2 \hat j)\cdot \hat n \, ds $$ The surface $\displaystyle z = x^2 + y^2$ can be parameterized as $S = (r \cos\theta, r\sin\theta, r^2)$ where $r =0\to 1$ and $\theta = 0 \to \frac \pi 2$, and the surface integral can be evaluated as $$\int_0^1 \int_0^{\frac \pi 2 } -r^4 \hat j \cdot \left( \frac{\partial S}{\partial r } \times \frac{\partial S}{\partial \theta } \right ) d\theta \; dr $$ The cross product works out to $-2 r^2 \cos (\theta ) \hat i -2 r^2 \sin (\theta ) \hat j + \left(r \sin ^2(\theta )+r \cos ^2(\theta )\right) \hat k = -2 r^2 \cos\theta \, \hat i -2 r^2 \sin\theta \, \hat j + r \hat k$, and the integral becomes $$ \int_0^1\int_0^{\frac \pi 2 } 2r^6 \sin \theta \, d\theta \, dr = \frac 2 7 $$ ADDED: For plotting the surfaces, just plot all the surfaces and take the part of the paraboloid bounded by $x=0, y=0, z=1$ in the first octant, which looks like the strip above.
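Both sides can be double-checked numerically. The sketch below (Python, plain midpoint quadrature; grid sizes are arbitrary choices of mine) re-evaluates the circulation and the flux and confirms they agree at $2/7 \approx 0.2857$:

```python
import math

def line_integral(F, path, dpath, t0, t1, n=20000):
    # midpoint rule for the integral of F(r(t)) . r'(t) dt
    h = (t1 - t0) / n
    total = 0.0
    for i in range(n):
        t = t0 + (i + 0.5) * h
        Fx, Fy, Fz = F(*path(t))
        dx, dy, dz = dpath(t)
        total += (Fx * dx + Fy * dy + Fz * dz) * h
    return total

F = lambda x, y, z: (y * y, 2 * x * y, x * z * z)

circ = (line_integral(F, lambda t: (t, 0.0, t * t),
                         lambda t: (1.0, 0.0, 2 * t), 0, 1)              # OA
      + line_integral(F, lambda t: (math.cos(t), math.sin(t), 1.0),
                         lambda t: (-math.sin(t), math.cos(t), 0.0),
                         0, math.pi / 2)                                 # AB
      + line_integral(F, lambda t: (0.0, 1 - t, (1 - t) ** 2),
                         lambda t: (0.0, -1.0, -2 * (1 - t)), 0, 1))     # BO

# flux of curl F = (0, -z^2, 0) through S(r,th) = (r cos th, r sin th, r^2):
# the integrand (curl F) . (S_r x S_th) reduces to 2 r^6 sin(th)
n = 500
flux = 0.0
hr, hth = 1.0 / n, (math.pi / 2) / n
for i in range(n):
    r = (i + 0.5) * hr
    for j in range(n):
        th = (j + 0.5) * hth
        flux += 2 * r ** 6 * math.sin(th) * hr * hth

print(circ, flux)  # both come out around 2/7 = 0.285714...
```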
Is there a "simple" mathematical proof that is fully understandable by a 1st year university student that impressed you because it is beautiful? closed as primarily opinion-based by Daniel W. Farlow, Najib Idrissi, user91500, LutzL, Jonas Meyer Apr 7 '15 at 3:40 Here's a cute and lovely theorem. There exist two irrational numbers $x,y$ such that $x^y$ is rational. Proof. If $x=y=\sqrt2$ is an example, then we are done; otherwise $\sqrt2^{\sqrt2}$ is irrational, in which case taking $x=\sqrt2^{\sqrt2}$ and $y=\sqrt2$ gives us: $$\left(\sqrt2^{\sqrt2}\right)^{\sqrt2}=\sqrt2^{\sqrt2\sqrt2}=\sqrt2^2=2.\qquad\square$$ (Nowadays, using the Gelfond–Schneider theorem we know that $\sqrt2^{\sqrt2}$ is irrational, and in fact transcendental. But the above proof, of course, doesn't care for that.) How about the proof that $$1^3+2^3+\cdots+n^3=\left(1+2+\cdots+n\right)^2$$ I remember being impressed by this identity and the proof can be given in a picture: Edit: Substituted $\frac{n(n+1)}{2}=1+2+\cdots+n$ in response to comments. Cantor's Diagonalization Argument, proof that there are infinite sets that can't be put one to one with the set of natural numbers, is frequently cited as a beautifully simple but powerful proof. Essentially, with a list of infinite sequences, a sequence formed from taking the diagonal numbers will not be in the list. I would personally argue that the proof that $\sqrt 2$ is irrational is simple enough for a university student (probably simple enough for a high school student) and very pretty in its use of proof by contradiction! Prove that if $n$ and $m$ can each be written as a sum of two perfect squares, so can their product $nm$.
Proof: Let $n = a^2+b^2$ and $m=c^2+d^2$ ($a, b, c, d \in\mathbb Z$). Then, there exist some $x,y\in\mathbb Z$ such that $$x+iy = (a+ib)(c+id)$$ Taking the magnitudes of both sides and squaring gives $$x^2+y^2 = (a^2+b^2)(c^2+d^2) = nm$$ I would go for the proof by contradiction that there are infinitely many primes, which is fairly simple: Assume that there is a finite number of primes. Let $G$ be the set of all primes $P_1,P_2,...,P_n$. Compute $K = P_1 \times P_2 \times ... \times P_n + 1$. If $K$ is prime, then it is obviously not in $G$. Otherwise, none of its prime factors are in $G$. Conclusion: $G$ is not the set of all primes. I think I learned that both in high-school and at 1st year, so it might be a little too simple... By the concavity of the $\sin$ function on the interval $\left[0,\frac{\pi}2\right]$ we deduce these inequalities: $$\frac{2}\pi x\le \sin x\le x,\quad \forall x\in\left[0,\frac\pi2\right].$$ The first player in Hex has a winning strategy. There are no draws in Hex, so one player must have a winning strategy. If player two has a winning strategy, player one can steal that strategy by placing the first stone in the center (additional pieces on the board never hurt your position) and then using player two's strategy. You cannot have two dice (with numbers $1$ to $6$) biased so that when you throw both, the sum is uniformly distributed in $\{2,3,\dots,12\}$. For easier notation, we use the equivalent formulation "You cannot have two dice (with numbers $0$ to $5$) biased such that when you throw both, the sum is uniformly distributed in $\{0,1,\dots,10\}$." Proof: Assume that such dice exist. Let $p_i$ be the probability that the first die gives an $i$ and $q_i$ be the probability that the second die gives an $i$. Let $p(x)=\sum_{i=0}^5 p_i x^i$ and $q(x)=\sum_{i=0}^5 q_i x^i$. Let $r(x)=p(x)q(x) = \sum_{i=0}^{10} r_i x^i$. We find that $r_i = \sum_{j+k=i}p_jq_k$. But hey, this is also the probability that the sum of the two dice is $i$.
Therefore, $$ r(x)=\frac{1}{11}(1+x+\dots+x^{10}). $$ Now $r(1)=1\neq0$, and for $x\neq1$, $$ r(x)=\frac{x^{11}-1}{11(x-1)}, $$ which clearly is nonzero when $x\neq 1$. Therefore $r$ does not have any real zeros. But $p$ and $q$ are real polynomials of degree exactly $5$ (their leading coefficients satisfy $p_5q_5 = r_{10} = \frac1{11}\neq0$), and every real polynomial of odd degree has a real zero. Therefore $r(x)=p(x)q(x)$ has a real zero. A contradiction. Given a square consisting of $2n \times 2n$ tiles, it is possible to cover this square with pieces that each cover $2$ adjacent tiles (like domino bricks). Now imagine you remove two tiles, from two opposite corners of the original square. Prove that it is now no longer possible to cover the remaining area with domino bricks. Proof: Imagine that the square is a checkerboard. Each domino brick will cover two tiles of different colors. When you remove tiles from two opposite corners, you will remove two tiles of the same color. Thus, it is no longer possible to cover the remaining area. (Well, it may be too "simple." But you did not state that it had to be a university student of mathematics. This one might even work for liberal arts majors...) One little-known gem at the intersection of geometry and number theory is Aubry's reflective generation of primitive Pythagorean triples, i.e. coprime naturals $\,(x,y,z)\,$ with $\,x^2 + y^2 = z^2.\,$ Dividing by $\,z^2$ yields $\,(x/z)^2+(y/z)^2 = 1,\,$ so each triple corresponds to a rational point $(x/z,\,y/z)$ on the unit circle. Aubry showed that we can generate all such triples by a very simple geometrical process. Start with the trivial point $(0,-1)$. Draw a line to the point $\,P = (1,1).\,$ It intersects the circle in the rational point $\,A = (4/5,3/5)\,$ yielding the triple $\,(3,4,5).\,$ Next reflect the point $\,A\,$ into the other quadrants by taking all possible signs of each component, i.e. $\,(\pm4/5,\pm3/5),\,$ yielding the inscribed rectangle below.
As before, the line through $\,A_B = (-4/5,-3/5)\,$ and $P$ intersects the circle in $\,B = (12/13, 5/13),\,$ yielding the triple $\,(12,5,13).\,$ Similarly the points $\,A_C,\, A_D\,$ yield the triples $\,(20,21,29)\,$ and $\,(8,15,17).\,$ We can iterate this process with the new points $\,B,C,D,\,$ doing the same as we did for $\,A,\,$ obtaining further triples. Iterating this process generates the primitive triples as a ternary tree $\qquad\qquad$ Descent in the tree is given by the formula $$\begin{eqnarray} (x,y,z)\,\mapsto &&(x,y,z)-2(x\!+\!y\!-\!z)\,(1,1,1)\\ = &&(-x-2y+2z,\,-2x-y+2z,\,-2x-2y+3z)\end{eqnarray}$$ e.g. $\ (12,5,13)\mapsto (12,5,13)-8(1,1,1) = (4,-3,5),\ $ yielding $\,(4/5,3/5)\,$ when reflected into the first quadrant. Ascent in the tree is by inverting this map, combined with trivial sign-changing reflections: $\quad\quad (-3,+4,5) \mapsto (-3,+4,5) - 2 \; (-3+4-5) \; (1,1,1) = ( 5,12,13)$ $\quad\quad (-3,-4,5) \mapsto (-3,-4,5) - 2 \; (-3-4-5) \; (1,1,1) = (21,20,29)$ $\quad\quad (+3,-4,5) \mapsto (+3,-4,5) - 2 \; (+3-4-5) \; (1,1,1) = (15,8,17)$ See my MathOverflow post for further discussion, including generalizations and references. I like the proof that there are infinitely many Pythagorean triples. Theorem:There are infinitely many integers $ x, y, z$ such that $$ x^2+y^2=z^2 $$ Proof:$$ (2ab)^2 + ( a^2-b^2)^2= ( a^2+b^2)^2 $$ One cannot cover a disk of diameter 100 with 99 strips of length 100 and width 1. Proof: project the disk and the strips on a semi-sphere on top of the disk. The projection of each strip would have area at most 1/100th of the area of the semi-sphere. If you have any set of 51 integers between $1$ and $100$, the set must contain some pair of integers where one number in the pair is a multiple of the other. Proof: Suppose you have a set of $51$ integers between $1$ and $100$. If an integer is between $1$ and $100$, its largest odd divisor is one of the odd numbers between $1$ and $99$.
There are only $50$ odd numbers between $1$ and $99$, so your $51$ integers can’t all have different largest odd divisors — there are only $50$ possibilities. So two of your integers (possibly more) have the same largest odd divisor. Call that odd number $d$. You can factor those two integers into prime factors, and each will factor as (some $2$’s)$\cdot d$. This is because if $d$ is the largest odd divisor of a number, the rest of its factorization can’t include any more odd numbers. Of your two numbers with largest odd factor $d$, the one with more $2$’s in its factorization is a multiple of the other one. (In fact, the multiple is a power of $2$.) In general, let $S$ be the set of integers from $1$ up to some even number $2n$. If a subset of $S$ contains more than half the elements of $S$, the subset must contain a pair of numbers where one is a multiple of the other. The proof is the same, but it’s easier to follow if you see it for a specific $n$ first. The proof that an isosceles triangle $ABC$ (with $AC$ and $AB$ having equal length) has equal angles $ABC$ and $BCA$ is quite nice: Triangles $ABC$ and $ACB$ are (mirrored) congruent (since $AB = AC$, $BC = CB$, and $CA = BA$), so the corresponding angles $ABC$ and (mirrored) $ACB$ are equal. This congruency argument is nicer than that of cutting the triangle up into two right-angled triangles. Parity of the sine and cosine functions using Euler's formula: $e^{-i\theta} = \cos(-\theta) + i\sin(-\theta)$ $e^{-i\theta} = \frac 1 {e^{i\theta}} = \frac 1 {\cos\theta + i\sin\theta} = \frac {\cos\theta - i\sin\theta} {\cos^2\theta + \sin^2\theta} = \cos\theta - i\sin\theta$ $\cos(-\theta) + i\sin(-\theta) = \cos\theta + i(-\sin\theta)$ Thus $\cos(-\theta) = \cos\theta$ and $\sin(-\theta) = -\sin\theta$ $\blacksquare$ The proof is actually just the first two lines. I believe Gauss was tasked with finding the sum of all the integers from $1$ to $100$ in his very early schooling years.
He tackled it quicker than his peers or his teacher could: $$\sum_{n=1}^{100}n=1+2+3+4 +\dots+100$$ $$=100+99+98+\dots+1$$ $$\therefore 2 \sum_{n=1}^{100}n=(100+1)+(99+2)+\dots+(1+100)$$ $$=\underbrace{101+101+101+\dots+101}_{100 \text{ times}}$$ $$=101\cdot 100$$ $$\therefore \sum_{n=1}^{100}n=\frac{101\cdot 100}{2}$$ $$=5050.$$ Hence he showed that $$\sum_{k=1}^{n} k=\frac{n(n+1)}{2}.$$ If $H$ is a subgroup of $(\mathbb{R},+)$ and $H\cap [-1,1]$ is finite and contains a positive element, then $H$ is cyclic. Fermat's little theorem from noting that modulo a prime $p$ we have for $a\neq 0$: $$1\times2\times3\times\cdots\times (p-1) = (1\times a)\times(2\times a)\times(3\times a)\times\cdots\times \left((p-1)\times a\right)$$ Proposition (No universal set): There does not exist a set which contains all sets (including itself). Proof: Suppose to the contrary that such a set exists. Let $X$ be the universal set; then one can construct, by the axiom schema of specification, the set $$C=\{A\in X: A \notin A\}$$ of all sets in the universe which do not contain themselves. As $X$ is universal, clearly $C\in X$. But then $C\in C \iff C\notin C$, a contradiction. Edit: Assuming that one is working in ZF (as almost everywhere :P) (In particular this proof really impressed me the first time, and it is also very simple.) Most proofs concerning the Cantor set are simple but amazing. Its total length (measure) is zero. It is uncountable. Every number in the set can be represented in ternary using just 0 and 2. No number with a 1 in it (in ternary) appears in the set. The Cantor set contains as many points as the interval from which it is taken, yet itself contains no interval of nonzero length. The irrational numbers have the same property, but the Cantor set has the additional property of being closed, so it is not even dense in any interval, unlike the irrational numbers which are dense in every interval.
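The "total length zero" claim can be watched happening step by step: at stage $n$ of the middle-thirds construction, $2^n$ intervals survive with total length $(2/3)^n \to 0$. A small sketch with exact rational arithmetic:

```python
from fractions import Fraction

def cantor_stage(n):
    """Intervals remaining after n rounds of removing middle thirds."""
    intervals = [(Fraction(0), Fraction(1))]
    for _ in range(n):
        nxt = []
        for a, b in intervals:
            third = (b - a) / 3
            nxt += [(a, a + third), (b - third, b)]  # keep outer thirds only
        intervals = nxt
    return intervals

stage = cantor_stage(10)
total = sum(b - a for a, b in stage)
assert len(stage) == 2 ** 10
assert total == Fraction(2, 3) ** 10   # total length (2/3)^n, shrinking to 0
```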
The Menger sponge, which is a 3d extension of the Cantor set, simultaneously exhibits an infinite surface area and encloses zero volume. The derivation of the derivative from first principles is so amazing, easy, useful and simply outstanding in all aspects. I put it here: Suppose we have a quantity $y$ whose value depends upon a single variable $x$, and is expressed by an equation defining $y$ as some specific function of $x$. This is represented as: $y=f(x)$ This relationship can be visualized by drawing a graph of the function $y = f (x)$, regarding $y$ and $x$ as Cartesian coordinates, as shown in Figure(a). Consider the point $P$ on the curve $y = f (x)$ whose coordinates are $(x, y)$ and another point $Q$ whose coordinates are $(x + Δx, y + Δy)$. The slope of the line joining $P$ and $Q$ is given by: $\tan θ = \frac{Δy}{Δx} = \frac{(y + Δy ) − y}{Δx}$ Suppose now that the point $Q$ moves along the curve towards $P$. In this process, $Δy$ and $Δx$ decrease and approach zero, though their ratio $\frac{Δy}{Δx}$ will not necessarily vanish. What happens to the line $PQ$ as $Δy→0$, $Δx→0$? You can see that this line becomes a tangent to the curve at point $P$ as shown in Figure(b). This means that $\tan θ$ approaches the slope of the tangent at $P$, denoted by $m$: $m=\lim_{Δx→0} \frac{Δy}{Δx} = \lim_{Δx→0} \frac{(y+Δy)-y}{Δx}$ The limit of the ratio $Δy/Δx$ as $Δx$ approaches zero is called the derivative of $y$ with respect to $x$ and is written as $dy/dx$. It represents the slope of the tangent line to the curve $y=f(x)$ at the point $(x, y)$. Since $y = f (x)$ and $y + Δy = f (x + Δx)$, we can write the definition of the derivative as: $\frac{dy}{dx}=\frac{d{f(x)}}{dx} = \lim_{Δx→0} \frac{f(x+Δx)-f(x)}{Δx}$, which is the required formula. This proof that $n^{1/n} \to 1$ as integral $n \to \infty$: By Bernoulli's inequality (which is $(1+x)^n \ge 1+nx$), $(1+n^{-1/2})^n \ge 1+n^{1/2} > n^{1/2} $.
Raising both sides to the $2/n$ power, $n^{1/n} <(1+n^{-1/2})^2 = 1+2n^{-1/2}+n^{-1} < 1+3n^{-1/2} $. Can a chess knight starting at any corner move to touch every space on the board exactly once, ending in the opposite corner? The solution turns out to be childishly simple. Every time the knight moves (up two, over one), it will hop from a black space to a white space, or vice versa. Assuming the knight starts on a black corner of the board, it will need to touch 63 other squares, 32 white and 31 black. To touch all of the squares, it would need to end on a white square, but the opposite corner is also black, making it impossible. The eigenvalues of a skew-Hermitian matrix are purely imaginary. The eigenvalue equation is $A\vec x = \lambda\vec x$, and forming the inner product with $\vec x$ gives $$\lambda \|\vec x\|^2 = \lambda\left<\vec x, \vec x\right> = \left<\lambda \vec x,\vec x\right> = \left<A\vec x,\vec x\right> = \left<\vec x, A^{T*}\vec x\right> = \left<\vec x, -A\vec x\right> = -\lambda^* \|\vec x\|^2$$ and since $\|\vec x\|^2 > 0$, we can divide both sides by it, leaving $\lambda = -\lambda^*$, i.e. $\lambda$ is purely imaginary. The second to last step uses the definition of skew-Hermitian. Using the definition for Hermitian or unitary matrices instead yields corresponding statements about the eigenvalues of those matrices. I like the proof that not every real number can be written in the form $a e + b \pi$ for some integers $a$ and $b$. I know it's almost trivial in one way; but in another way it is kind of deep.
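The skew-Hermitian eigenvalue claim above can be checked concretely in the $2\times2$ case: any matrix $\begin{pmatrix} ia & b+ic \\ -b+ic & id\end{pmatrix}$ with real $a,b,c,d$ satisfies $A^{T*} = -A$, and the quadratic formula produces purely imaginary eigenvalues (random parameters are my own illustrative choice):

```python
import cmath
import random

def eigs(a, b, c, d):
    """Eigenvalues of the skew-Hermitian matrix [[ia, b+ic], [-b+ic, id]]."""
    tr = 1j * (a + d)                 # trace: purely imaginary
    det = -a * d + b * b + c * c      # determinant: real
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

random.seed(0)
for _ in range(1000):
    a, b, c, d = (random.uniform(-5, 5) for _ in range(4))
    for lam in eigs(a, b, c, d):
        assert abs(lam.real) < 1e-9   # purely imaginary, as the proof predicts
```

Here the discriminant $\mathrm{tr}^2 - 4\det = -(a-d)^2 - 4(b^2+c^2) \le 0$ is never positive, which is exactly why no real part can appear.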
I want to solve a second order differential equation on the interval $[-1,1]$ which does not have an analytic solution: \begin{eqnarray} y''(x) &=& k \phi^2(x)y(x) \\ \phi(x) &=& \frac{1}{2}\left(1-\sin\left[ \frac{\pi}{2}x\right] \right)\\ y'[1] &=& 1 \\ y[1] &=& 1 \end{eqnarray} It is possible to get a numerical solution of the equation using NDSolve. But I want to approach the solution iteratively, starting with an initial guess function, for example $y_0(x) = \frac{1}{2}(1+x)$, with appropriate boundary conditions: \begin{eqnarray} y_{n+1}''(x) &=& k\phi^2(x)y_n(x) \end{eqnarray} To do this I have the following code:

ClearAll;
k = 3955/100;
W = 1;
phi[x_] := (1/2)*(1 - Sin[(Pi/2)*x/W]);
vE[x_] := Piecewise[{{0, -W <= x < 0}, {x/W, 0 <= x <= W}}];
pcw[x_] := (W + x)/(2*W);
so = NDSolve[{u''[x] == k*phi[x]*phi[x]*u[x], u[-W] == 0, u[W] == 1}, u, {x, -W, W}, WorkingPrecision -> 22, InterpolationOrder -> All];
dd[x_] = Q[x] /. First@DSolve[{Q''[x] == k*phi[x]*phi[x]*pcw[x], Q[W] == 1, Q'[W] == 1}, Q, x];
Plot[Evaluate[{dd[t], u[t] /. so, vE[t], pcw[t]}], {t, -W, W}]
dd[x_] = Q[x] /. First@DSolve[{Q''[x] == k*phi[x]*phi[x]*dd[x], Q[W] == 1, Q'[W] == 1}, Q, x];
Plot[Evaluate[{dd[t], u[t] /. so, vE[t], pcw[t]}], {t, -W, W}]
dd[x_] = Q[x] /. First@DSolve[{Q''[x] == k*phi[x]*phi[x]*dd[x], Q[W] == 1, Q'[W] == 1}, Q, x];
Plot[Evaluate[{dd[t], u[t] /. so, vE[t], pcw[t]}], {t, -W, W}]
dd[x_] = Q[x] /. First@DSolve[{Q''[x] == k*phi[x]*phi[x]*dd[x], Q[W] == 1, Q'[W] == 1}, Q, x];
Plot[Evaluate[{dd[t], u[t] /. so, vE[t], pcw[t]}], {t, -W, W}]

This is certainly not an elegant way of getting things done, and it is also taking much longer to run than I expected. How can I use recursion in this case? The numeric solution is: Plot[u[x] /. so, {x, -W, W}]
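The same fixed-point (Picard) iteration can be done without any symbolic solves, which avoids the repeated DSolve calls entirely: since $y(1)=y'(1)=1$, each sweep is just $y_{n+1}(x) = 1 + (x-1) + \int_1^x\!\int_1^s k\,\phi^2(t)\,y_n(t)\,dt\,ds$. Here is a rough Python sketch on a uniform grid (trapezoidal integration from $x=1$ backwards; the grid size and iteration count are arbitrary choices of mine). In Mathematica itself, wrapping one such sweep in a function and applying NestList would play the same role.

```python
import math

k, N = 3955 / 100, 400                       # same k as above; N grid intervals on [-1, 1]
h = 2 / N
xs = [-1 + i * h for i in range(N + 1)]
phi2 = [((1 - math.sin(math.pi * x / 2)) / 2) ** 2 for x in xs]

def picard_step(y):
    """One sweep: integrate y'' = k*phi^2*y_n twice, from x = 1 leftwards."""
    g = [k * p * v for p, v in zip(phi2, y)]
    w = [0.0] * (N + 1); w[N] = 1.0          # w = y', with y'(1) = 1
    for i in range(N - 1, -1, -1):
        w[i] = w[i + 1] - h * (g[i] + g[i + 1]) / 2
    ynew = [0.0] * (N + 1); ynew[N] = 1.0    # y(1) = 1
    for i in range(N - 1, -1, -1):
        ynew[i] = ynew[i + 1] - h * (w[i] + w[i + 1]) / 2
    return ynew

y = [(1 + x) / 2 for x in xs]                # initial guess y_0
for _ in range(30):
    ynew = picard_step(y)
    diff = max(abs(a - b) for a, b in zip(y, ynew))
    y = ynew
# the Volterra-type iteration converges; after ~30 sweeps, diff is
# negligible relative to max |y|
```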
A vector is a quantity consisting of a non-negative magnitude and a direction. We could represent a vector in two dimensions as $(m,\theta)$, where $m$ is the magnitude and $\theta$ is the direction, measured as an angle from some agreed upon direction. For example, we might think of the vector $\ds(5,45^\circ)$ as representing "5 km toward the northeast''; that is, this vector might be a displacement vector, indicating, say, that your grandmother walked 5 kilometers toward the northeast to school in the snow. On the other hand, the same vector could represent a velocity, indicating that your grandmother walked at 5 km/hr toward the northeast. What the vector does not indicate is where this walk occurred: a vector represents a magnitude and a direction, but not a location. Pictorially it is useful to represent a vector as an arrow; the direction of the vector, naturally, is the direction in which the arrow points; the magnitude of the vector is reflected in the length of the arrow. It turns out that many, many quantities behave as vectors, e.g., displacement, velocity, acceleration, force. Already we can get some idea of their usefulness using displacement vectors. Suppose that your grandmother walked 5 km NE and then 2 km SSE; if the terrain allows, and perhaps armed with a compass, how could your grandmother have walked directly to her destination? We can use vectors (and a bit of geometry) to answer this question. We begin by noting that since vectors do not include a specification of position, we can "place'' them anywhere that is convenient. So we can picture your grandmother's journey as two displacement vectors drawn head to tail: The displacement vector for the shortcut route is the vector drawn with a dashed line, from the tail of the first to the head of the second.
With a little trigonometry, we can compute that the third vector has magnitude approximately 4.62 and direction $\ds 21.43^\circ$, so walking 4.62 km in the direction $\ds 21.43^\circ$ north of east (approximately ENE) would get your grandmother to school. This sort of calculation is so common, we dignify it with a name: we say that the third vector is the sum of the other two vectors. There is another common way to picture the sum of two vectors. Put the vectors tail to tail and then complete the parallelogram they indicate; the sum of the two vectors is the diagonal of the parallelogram: This is a more natural representation in some circumstances. For example, if the two original vectors represent forces acting on an object, the sum of the two vectors is the net or effective force on the object, and it is nice to draw all three with their tails at the location of the object. We also define scalar multiplication for vectors: if $\bf A$ is a vector $(m,\theta)$ and $a\ge 0$ is a real number, the vector $a{\bf A}$ is $(am,\theta)$, namely, it points in the same direction but has $a$ times the magnitude. If $a< 0$, $a{\bf A}$ is $(|a|m,\theta+\pi)$, with $|a|$ times the magnitude and pointing in the opposite direction (unless we specify otherwise, angles are measured in radians). Now we can understand subtraction of vectors: ${\bf A}-{\bf B}={\bf A}+(-1){\bf B}$: Note that as you would expect, ${\bf B} + ({\bf A}-{\bf B}) = {\bf A}$. We can represent a vector in ways other than $(m,\theta)$, and in fact $(m,\theta)$ is not generally used at all. How else could we describe a particular vector? Consider again the vector $\ds (5,45^\circ)$. Let's draw it again, but impose a coordinate system. If we put the tail of the arrow at the origin, the head of the arrow ends up at the point $\ds (5/\sqrt2,5/\sqrt2)\approx(3.54, 3.54)$. In this picture the coordinates $(3.54,3.54)$ identify the head of the arrow, provided we know that the tail of the arrow has been placed at $(0,0)$.
Then in fact the vector can always be identified as $(3.54,3.54)$, no matter where it is placed; we just have to remember that the numbers 3.54 must be interpreted as a change from the position of the tail, not as the actual coordinates of the arrow head; to emphasize this we will write $\langle 3.54,3.54\rangle$ to mean the vector and $(3.54,3.54)$ to mean the point. Then if the vector $\langle 3.54,3.54\rangle$ is drawn with its tail at $(1,2)$ it looks like this: Consider again the two part trip: 5 km NE and then 2 km SSE. The vector representing the first part of the trip is $\ds \langle 5/\sqrt2,5/\sqrt2\rangle$, and the second part of the trip is represented by $\langle 2\cos(-3\pi/8),2\sin(-3\pi/8)\rangle \approx\langle 0.77,-1.85 \rangle$. We can represent the sum of these with the usual head to tail picture: It is clear from the picture that the coordinates of the destination point are $\ds (5/\sqrt2+2\cos(-3\pi/8),5/\sqrt2+2\sin(-3\pi/8))$ or approximately $(4.3,1.69)$, so the sum of the two vectors is $\ds \langle 5/\sqrt2+2\cos(-3\pi/8),5/\sqrt2+2\sin(-3\pi/8)\rangle \approx \langle 4.3,1.69\rangle$. Adding the two vectors is easier in this form than in the $(m,\theta)$ form, provided that we're willing to have the answer in this form as well. It is easy to see that scalar multiplication and vector subtraction are also easy to compute in this form: $a\langle v,w\rangle=\langle av,aw\rangle$ and $\ds \langle v_1,w_1\rangle - \langle v_2,w_2\rangle =\langle v_1-v_2,w_1-w_2\rangle$. What about the magnitude? The magnitude of the vector $\langle v,w\rangle$ is still the length of the corresponding arrow representation; this is the distance from the origin to the point $(v,w)$, namely, the distance from the tail to the head of the arrow. We know how to compute distances, so the magnitude of the vector is simply $\ds \sqrt{v^2+w^2}$, which we also denote with absolute value bars: $\ds |\langle v,w\rangle|=\sqrt{v^2+w^2}$.
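Componentwise arithmetic is also exactly what one would program. A quick sketch (Python; the numbers reproduce the grandmother's two-leg trip from above):

```python
import math

def add(v, w):    return tuple(a + b for a, b in zip(v, w))
def sub(v, w):    return tuple(a - b for a, b in zip(v, w))
def scale(a, v):  return tuple(a * x for x in v)
def magnitude(v): return math.sqrt(sum(x * x for x in v))

ne  = (5 / math.sqrt(2), 5 / math.sqrt(2))                         # 5 km NE
sse = (2 * math.cos(-3 * math.pi / 8), 2 * math.sin(-3 * math.pi / 8))  # 2 km SSE
trip = add(ne, sse)
print(magnitude(trip))   # about 4.62 km, heading about 21.43 degrees north of east
```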
In three dimensions, vectors are still quantities consisting of a magnitude and a direction, but of course there are many more possible directions. It's not clear how we might represent the direction explicitly, but the coordinate version of vectors makes just as much sense in three dimensions as in two. By $\langle 1,2,3\rangle$ we mean the vector whose head is at $(1,2,3)$ if its tail is at the origin. As before, we can place the vector anywhere we want; if it has its tail at $(4,5,6)$ then its head is at $(5,7,9)$. It remains true that arithmetic is easy to do with vectors in this form: $$\eqalign{ &a\langle v_1,v_2,v_3\rangle=\langle av_1,av_2,av_3\rangle\cr &\langle v_1,v_2,v_3\rangle + \langle w_1,w_2,w_3\rangle =\langle v_1+w_1,v_2+w_2,v_3+w_3\rangle\cr &\langle v_1,v_2,v_3\rangle - \langle w_1,w_2,w_3\rangle =\langle v_1-w_1,v_2-w_2,v_3-w_3\rangle\cr} $$ The magnitude of the vector is again the distance from the origin to the head of the arrow, or $\ds |\langle v_1,v_2,v_3\rangle|=\sqrt{v_1^2+v_2^2+v_3^2}$. Three particularly simple vectors turn out to be quite useful: ${\bf i}=\langle1,0,0\rangle$, ${\bf j}=\langle0,1,0\rangle$, and ${\bf k}=\langle0,0,1\rangle$. These play much the same role for vectors that the axes play for points. In particular, notice that $$\eqalign{ \langle v_1,v_2,v_3\rangle &= \langle v_1,0,0\rangle + \langle 0,v_2,0\rangle + \langle 0,0,v_3\rangle\cr &=v_1\langle1,0,0\rangle + v_2\langle0,1,0\rangle + v_3\langle0,0,1\rangle\cr &= v_1{\bf i} + v_2{\bf j} + v_3{\bf k}\cr }$$ We will frequently want to produce a vector that points from one point to another. That is, if $P$ and $Q$ are points, we seek the vector $\bf x$ such that when the tail of $\bf x$ is placed at $P$, its head is at $Q$; we refer to this vector as $\ds \overrightarrow{\strut PQ}$. If we know the coordinates of $P$ and $Q$, the coordinates of the vector are easy to find. Example 14.2.1 Suppose $P=(1,-2,4)$ and $Q=(-2,1,3)$. 
The vector $\ds \overrightarrow{\strut PQ}$ is $\langle -2-1,1-(-2),3-4\rangle=\langle -3,3,-1\rangle$ and $\ds \overrightarrow{\strut QP}=\langle 3,-3,1\rangle$. Exercises 14.2 Ex 14.2.1 Draw the vector $\langle 3,-1\rangle$ with its tail at the origin. Ex 14.2.2 Draw the vector $\langle 3,-1,2\rangle$ with its tail at the origin. Ex 14.2.3 Let ${\bf A}$ be the vector with tail at the origin and head at $(1,2)$; let ${\bf B}$ be the vector with tail at the origin and head at $(3,1)$. Draw ${\bf A}$ and ${\bf B}$ and a vector ${\bf C}$ with tail at $(1,2)$ and head at $(3,1)$. Draw $\bf C$ with its tail at the origin. Ex 14.2.4 Let ${\bf A}$ be the vector with tail at the origin and head at $(-1,2)$; let ${\bf B}$ be the vector with tail at the origin and head at $(3,3)$. Draw ${\bf A}$ and ${\bf B}$ and a vector ${\bf C}$ with tail at $(-1,2)$ and head at $(3,3)$. Draw $\bf C$ with its tail at the origin. Ex 14.2.5 Let ${\bf A}$ be the vector with tail at the origin and head at $(5,2)$; let ${\bf B}$ be the vector with tail at the origin and head at $(1,5)$. Draw ${\bf A}$ and ${\bf B}$ and a vector ${\bf C}$ with tail at $(5,2)$ and head at $(1,5)$. Draw $\bf C$ with its tail at the origin.
Ex 14.2.6 Find $|{\bf v}|$, ${\bf v}+{\bf w}$, ${\bf v}-{\bf w}$, $|{\bf v}+{\bf w}|$, $|{\bf v}-{\bf w}|$ and $-2{\bf v}$ for ${\bf v} = \langle 1,3\rangle$ and ${\bf w} = \langle -1,-5\rangle$. (answer) Ex 14.2.7 Find $|{\bf v}|$, ${\bf v}+{\bf w}$, ${\bf v}-{\bf w}$, $|{\bf v}+{\bf w}|$, $|{\bf v}-{\bf w}|$ and $-2{\bf v}$ for ${\bf v} = \langle 1,2,3\rangle$ and ${\bf w} = \langle -1,2,-3\rangle$. (answer) Ex 14.2.8 Find $|{\bf v}|$, ${\bf v}+{\bf w}$, ${\bf v}-{\bf w}$, $|{\bf v}+{\bf w}|$, $|{\bf v}-{\bf w}|$ and $-2{\bf v}$ for ${\bf v} = \langle 1,0,1\rangle$ and ${\bf w} = \langle -1,-2,2 \rangle$. (answer) Ex 14.2.9 Find $|{\bf v}|$, ${\bf v}+{\bf w}$, ${\bf v}-{\bf w}$, $|{\bf v}+{\bf w}|$, $|{\bf v}-{\bf w}|$ and $-2{\bf v}$ for ${\bf v} = \langle 1,-1,1\rangle$ and ${\bf w} = \langle 0,0,3\rangle$. (answer) Ex 14.2.10 Find $|{\bf v}|$, ${\bf v}+{\bf w}$, ${\bf v}-{\bf w}$, $|{\bf v}+{\bf w}|$, $|{\bf v}-{\bf w}|$ and $-2{\bf v}$ for ${\bf v} = \langle 3,2,1\rangle$ and ${\bf w} = \langle -1,-1,-1\rangle$. (answer) Ex 14.2.11 Let $P=(4,5,6)$, $Q=(1,2,-5)$. Find $\ds \overrightarrow{\strut PQ}$. Find a vector with the same direction as $\ds \overrightarrow{\strut PQ}$ but with length 1. Find a vector with the same direction as $\ds \overrightarrow{\strut PQ}$ but with length 4. (answer) Ex 14.2.12 If $A, B$, and $C$ are three points, find $\ds \overrightarrow{\strut AB}+\overrightarrow{\strut BC}+\overrightarrow{\strut CA}$. (answer) Ex 14.2.13 Consider the 12 vectors that have their tails at the center of a clock and their respective heads at each of the 12 digits. What is the sum of these vectors? What if we remove the vector corresponding to 4 o'clock? What if, instead, all vectors have their tails at 12 o'clock, and their heads on the remaining digits? (answer) Ex 14.2.14 Let $\bf a$ and $\bf b$ be nonzero vectors in two dimensions that are not parallel or anti-parallel.
Show, algebraically, that if $\bf c$ is any two dimensional vector, there are scalars $s$ and $t$ such that ${\bf c}=s{\bf a}+t{\bf b}$. Ex 14.2.15 Does the statement in the previous exercise hold if the vectors $\bf a$, $\bf b$, and $\bf c$ are three dimensional vectors? Explain.
The pre-exponential factor (\(A\)) is part of the Arrhenius equation, which was formulated by the Swedish chemist Svante Arrhenius in 1889. The pre-exponential factor is also known as the frequency factor, and represents the frequency of collisions between reactant molecules. Although often described as temperature independent, it actually depends on temperature, because it is related to the molecular collision frequency, which is a function of temperature. The units of the pre-exponential factor vary with the order of the reaction, and thus with the units of the rate constant; in a first order reaction, the units of the pre-exponential factor are reciprocal seconds. Because the pre-exponential factor depends on the frequency of collisions, it is related to collision theory and transition state theory. The Arrhenius equation relates the rate constant to \(A\), \(E_a\) and \(T\), where \(A\) is the pre-exponential factor, \(E_a\) is the activation energy, and \(T\) is the temperature. The pre-exponential factor, \(A\), is a constant that can be derived experimentally or numerically; in empirical settings it is treated as constant. In collision theory, the pre-exponential factor is defined as \(Z\), and its equation can be derived by considering the factors that affect the frequency of collision for a given molecule. Consider the most elementary bimolecular reaction: \[A + A \rightarrow Product\] An underlying factor in the frequency of collisions is the space or volume in which the reaction is allowed to occur: intuitively, the frequency of collisions between two molecules depends on the dimensions of their container.
By this logic, \(Z\) is defined as \[Z = \dfrac{(\text{volume of the collision cylinder}) \times (\text{density of the particles})}{\text{time}}\] Using this relationship, an equation for the collision frequency, \(Z\), of molecule \(A\) with \(A\) can be derived: \[Z_{AA} = 2N^2_Ad^2 \sqrt{\dfrac{\pi{k_{b}T}}{m_a}}\] For a more complex collision, such as one between \(A\) and \(B\): \[A + B \rightarrow Product\] the same reasoning yields the collision frequency of molecules \(A\) and \(B\): \[Z_{AB} = N_AN_Bd^2_{AB} \sqrt{\dfrac{8{k_{b}T}}{\mu}}\] Substituting the collision factor back into the original Arrhenius equation yields: \[k = Z_{AB}e^{\frac{-E_a}{RT}} = N_A\, N_B\, d^2_{AB} \sqrt{\dfrac{8{k_{b}T}}{\mu}}\,e^{\frac{-E_a}{RT}}\] This equation produces a rate constant with the standard units of \(\mathrm{M^{-1}\,s^{-1}}\); however, on a molecular level, a rate constant with molecular units would be more useful. To obtain this constant, both sides are divided by \(N_A\,N_B\), which produces a rate constant with units of \(\mathrm{m^3\,molecule^{-1}\,s^{-1}}\): \[\dfrac{k}{N_AN_B} = d^2_{AB} \sqrt{\dfrac{8 k_b T}{\mu}}e^{\frac{-E_a}{RT}}\] Writing \[z_{AB} = \dfrac{Z_{AB}}{N{_A}N{_B }}\] and substituting back into the Arrhenius equation gives \[k = z_{AB}e^{\frac{-E_a}{RT}}\] The pre-exponential factor is now defined within collision theory as \(d^2_{AB} \sqrt{\dfrac{8{k_{b}T}}{\mu}}\). \(A\) and \(Z\) are practically interchangeable terms for the collision frequency; often, however, \(A\) is the preferred variable when the factor is determined experimentally, and \(Z\) when it is determined mathematically. The derivation for \(Z\), while mostly accurate, ignores the steric effect of molecules. For a reaction to occur, two molecules must collide in the correct orientation.
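The collision-theory expressions above are easy to evaluate numerically. The sketch below uses the formulas exactly as printed (note that other texts carry an extra factor of \(\pi\) inside the square root); the diameter, reduced mass, temperature, and activation energy are made-up illustrative numbers, not values from any source:

```python
import math

kB = 1.380649e-23   # Boltzmann constant, J/K
R  = 8.314          # gas constant, J/(mol K)

def z_AB(d_AB, mu, T):
    # per-pair collision frequency factor, as printed above:
    # d_AB^2 * sqrt(8 kB T / mu)
    return d_AB**2 * math.sqrt(8 * kB * T / mu)

def k_rate(d_AB, mu, T, Ea):
    # k = z_AB * exp(-Ea / (R T)), Ea in J/mol
    return z_AB(d_AB, mu, T) * math.exp(-Ea / (R * T))

# illustrative (hypothetical) numbers: ~3 angstrom diameter,
# an N2-like reduced mass, room temperature, Ea = 50 kJ/mol
d, mu, T, Ea = 3e-10, 2.3e-26, 298.0, 5.0e4
print(z_AB(d, mu, T), k_rate(d, mu, T, Ea))
```

The exponential factor only attenuates the prefactor, and the square-root temperature dependence mentioned in the text shows up directly: quadrupling \(T\) doubles \(z_{AB}\).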
Not every collision occurs with the proper orientation, and thus some collisions do not yield product. To account for this steric effect, the variable \(P\), which represents the probability of two molecules colliding with the proper orientation, is introduced, and the Arrhenius equation becomes \[k = Pze^{\frac{-E_a}{RT}}\] The probability factor \(P\) is very difficult to assess, and still leaves the Arrhenius equation imperfect. Transition State Theory Collision theory deals with gases and neglects structural complexities in atoms and molecules, so its estimate of the probability is not accurate for species other than gases. Transition state theory attempts to resolve this discrepancy. It uses the foundations of thermodynamics to give a pre-exponential factor that yields the corresponding rate, derived through laws concerning Gibbs free energy, enthalpy and entropy: \[k = \dfrac{k_bT}{h} e^{\frac{\Delta S^o}{R}} e^{\frac{-\Delta H^o}{RT}}(M^{1-m})\] Table 1. Pre-exponential factor by approach: \(A\) (empirical); \(d^2_{AB} \sqrt{\dfrac{8{k_{b}T}}{\mu}}\) (collision theory); \(\dfrac{k_bT}{h}\) (transition state theory). The pre-exponential factor is a function of temperature. As indicated in Table 1, the factors for collision theory and transition state theory are both responsive to temperature changes: the collision theory factor is proportional to the square root of \(T\), whereas that of transition state theory is proportional to \(T\). The empirical factor is also sensitive to temperature: as temperature increases, molecules move faster, are more likely to collide, and therefore increase the collision frequency \(A\). Sources Atkins, Peter, and Julio De Paula. Physical Chemistry for the Life Sciences. Alexandria, VA: Not Avail, 2006. Chang, Raymond. Physical Chemistry for the Biosciences. Sausalito, CA: University Science, 2005.
Contributors Golshani (UCD)
It is not very clear to me what you mean by "intersection of Hilbert Class Fields [...] is discussed". The theory of Complex Multiplication (see, for instance, Serre's short note in Cassels and Frohlich's Algebraic Number Theory, or Silverman's Advanced Topics in the Arithmetic of Elliptic Curves, or directly the bible from Shimura, Introduction to the Arithmetic Theory of Automorphic Functions) tells you that there is an explicit way, given an imaginary quadratic field $K=\mathbb{Q}(\sqrt{d})$, to construct not only its Hilbert class field, but all ray class fields of different conductors. For instance, for the Hilbert class field, you can first create the elliptic curve $\mathbb{C}/\mathcal{O}_K$: it is an elliptic curve with complex multiplication by $\mathcal{O}_K$. Then, the theory will tell you that the smallest extension of $\mathbb{Q}$ containing $\sqrt{d}$ over which this curve is defined is the Hilbert Class Field of $K$. Just to be convinced that what I say is believable (besides being true, for which you might look at the references), observe that there is an elliptic curve with CM by $\mathbb{Z}[i]$ which admits a Weierstrass equation $$y^2=x^3+x\;.$$ This is defined over $\mathbb{Q}$, so the smallest field of definition which contains $\sqrt{-1}$ is $\mathbb{Q}(i)$, which is indeed its own Hilbert class field. In more concrete terms, if you are given a squarefree $d<0$ then the Hilbert class field of $K=\mathbb{Q}(\sqrt{d})$ is $K(j(\sqrt{d}))$, where $j$ is the modular function $$j(q)=\frac{1}{q}+744+196884q+\dots$$ whose definition you'll find in the above references, or at http://en.wikipedia.org/wiki/J-invariant . All the fun is in proving that $j(\sqrt{d})$ is actually an algebraic number (it is even an algebraic integer!); then, of course, one proves the abelian+unramified property of the (now, finite!) extension $K(j(\sqrt{d}))/K$.
Quite remarkably, it is an algebraic number "because" its conjugates are precisely the $j$-invariants of the elliptic curves $\mathbb{C}/\mathfrak{a}_i$ for $\mathfrak{a}_i$ running through a set of integral representatives of the ideal class group. This, already, shows that the degree of the extension coincides with the class number (and rapidly leads to it being at least Galois). In your case, life is easier: you only want to be sure that if $d\equiv 1\pmod{4}$ is negative and squarefree, then $K(i)/K$ is unramified (being abelian, this would force it to lie inside the Hilbert class field). For this, it is enough to observe that $K(i)/\mathbb{Q}$ is a biquadratic extension with Galois group $(\mathbb{Z}/2)^2$ and has therefore three quadratic subfields: $\mathbb{Q}(i),K$ and $F=\mathbb{Q}(\sqrt{-d})$. Now pick a prime $\ell$ dividing $d$: it is necessarily odd. Its ramification degree is $2$ both in $K$ and in $F$, while it is unramified in $\mathbb{Q}(i)$. Therefore it must be unramified in $K(i)/K$, since ramification degrees are multiplicative in towers of extensions. Similarly, $2$ is unramified in $F/\mathbb{Q}$ and cannot ramify in $K(i)/K$. As for the infinite primes, observe that $K$ is already totally complex, so no ramification can occur. Done!
Ingo Blechschmidt already explained in the comments why we should expect a negative answer to the question (for most readings of "constructive logic"). Namely, if classical arithmetic proves $\forall n \in \mathbb{N} . \exists k \in \mathbb{N} . \phi(n, k)$, where $\phi(n,k)$ is quantifier-free, then so does intuitionistic arithmetic. So then, if intuitionistic arithmetic proves $\lnot\lnot \forall n \in \mathbb{N} . \exists k \in \mathbb{N} . \phi(n, k)$, then so does classical arithmetic, but classically we may remove the $\lnot\lnot$, and then go back to intuitionistic logic to get $\forall n \in \mathbb{N} . \exists k \in \mathbb{N} . \phi(n, k)$. The statement "Turing machine $M$ halts on every input" is of this form, namely $$\forall n \in \mathbb{N} . \exists k \in \mathbb{N} . T(m, n, k),$$ where $m$ is a code of $M$ and $T$ is Kleene's predicate $T$. What I would really like to explain is that in a sense this is the wrong question to ask. Let $H(m)$ be the statement that the Turing machine encoded by $m$ always halts, i.e., $$H(m) \iff \forall n \in \mathbb{N} . \exists k \in \mathbb{N} . T(m, n, k).$$ In terms of $H$, the question is: "Is there a number $m$ such that intuitionistic logic proves $\lnot\lnot H(m)$ but does not prove $H(m)$?" This seems to indicate to me that the author of the question is trying to imagine how $$\forall m \in \mathbb{N} . (\lnot\lnot H(m) \Rightarrow H(m))$$ might fail, and he expects to be able to find an instance of $m$ in which the implication fails. But in intuitionistic logic this is not the right way to think of quantification and implication! The classical reading of $\forall x \in A . \psi(x)$ is "$\psi(a)$ holds for every element $a \in A$", whereas the intuitionistic reading of the same statement is "there is a procedure which takes as input any $a \in A$ and outputs evidence of $\psi(a)$".
Here the word "procedure" is not fixed: it cold mean a computable map, or a continuous map, or computable with respect to an oracle, etc. But the point is this: in intuitionistic logic $\forall x \in A . \psi(x)$ may fail because there is no procedure, and not because there is a specific $b \in A$ for which $\lnot \psi(b)$ holds. Applying the last paragraph to Markov's principle, we see that the "correct" question to ask was: Is there a procedure which takes as input (the code of) a Turing machine $M$ that never runs forever, and a number $n \in \mathbb{N}$, and halts and outputs the running time of $M(n)$?
I was trying to compute the product $$ P_{a,b} = \prod_{n=1}^\infty(an + b), $$ after I computed $$ P_{1,b} = \prod_{n=1}^\infty(n + b) = \frac{\sqrt{2\pi}}{\Gamma(b+1)}, $$ and the well-known $$ \prod_{n=1}^\infty a = \exp\left\{\log(a)\sum_{n=1}^\infty n^0 \right\} = \exp\left\{\log(a)\zeta(0) \right\} = a^{-1/2}. $$ So I have $$ P_{a,b} = \prod_{n=1}^\infty a \prod_{n=1}^\infty\left(n + \frac{b}{a} \right) = a^{-1/2}\frac{\sqrt{2\pi}}{\Gamma\left(1+\frac{b}{a}\right)}. $$ However, I found the article Quine, Heydari and Song 1993, which states the same $P_{1,b}$ as mine, but $$ P_{a,b} = a^{-1/2 - b/a}\frac{\sqrt{2\pi}}{\Gamma \left( 1+\frac{b}{a}\right )}. \tag{18} $$ Of course this formula is not compatible with splitting the product into a product of infinite products as I did, but it, rather than mine, seems to work when computing certain partition functions by path integrals, such as $$ \int\mathcal{D}[\phi,\phi^\dagger]\exp\left\{-\int_0^\beta\mathrm{d}t\,\phi^\dagger(t)(\partial_t + w)\phi(t) \right\}, $$ with $\phi,\phi^\dagger$ bosonic fields. Notice that in this case $$ \phi(t) = \sum_{n=-\infty}^\infty\phi_n e^{\frac{2\pi i}{\beta}n t} $$ so that the evaluation of the path integral boils down to a Gaussian one. Can anyone help me?
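The finite part behind $P_{1,b}$ can at least be seen numerically. The sketch below assumes that the zeta regularization subtracts the standard Stirling counterterm $(N+b+\tfrac12)\ln N - N$ from $\ln\prod_{n=1}^N (n+b) = \ln\Gamma(N+1+b) - \ln\Gamma(1+b)$; the $b$-dependent piece of that counterterm is exactly the kind of scheme choice that separates the two candidate formulas for $P_{a,b}$:

```python
import math

def regularized_log_product(b, N=10**6):
    # log of prod_{n=1}^{N} (n+b) = lgamma(N+1+b) - lgamma(1+b),
    # minus the divergent Stirling part (N+b+1/2) ln N - N
    return (math.lgamma(N + 1 + b) - math.lgamma(1 + b)
            - ((N + b + 0.5) * math.log(N) - N))

b = 0.5
lhs = regularized_log_product(b)
rhs = math.log(math.sqrt(2 * math.pi) / math.gamma(1 + b))
print(lhs, rhs)  # agree to roughly 1e-6 at N = 10^6
```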
I try to calculate the age of the universe with the FLRW model: $$ H(a) = H_0 \sqrt{\Omega_{\mathrm{R},0} \left(\frac{a_0}{a}\right)^4 + \Omega_{\mathrm{M},0} \left(\frac{a_0}{a}\right)^3 + (1-\Omega_{\mathrm{T},0}) \left(\frac{a_0}{a}\right)^2 + \Omega_{\Lambda,0}}. $$ I set $\Omega_{\mathrm{M},0} = 0.317$ (matter density) and $\Omega_{\Lambda,0} = 0.683$ (dark energy), as delivered by Planck 2013; $\Omega_{\mathrm{T},0} = 1.02$ (space curvature), according to this site; and $\Omega_{\mathrm{R},0} = 4.8\times10^{-5}$ (radiation density), according to this document. For the time $t(a)$ I take the scale factor $a$ and divide it by the averaged recessional velocity $$ t(a) = \frac{a}{\int_0^a{H(a')a'\ \mathrm{d}a'}/(a-0)} $$ and finally simplify to $$ t(a) = \frac{a^2}{\int_0^a{H(a')a'\ \mathrm{d}a'}}. $$ But the problem is, I then get about $8\times10^9$ years for the age of the universe, but it should be around $12\times10^9$ years (which I get when I set $\Omega_{\mathrm{R},0}$ to zero): (plots for $\Omega_{\mathrm{R},0} = 4.8\times10^{-5}$ and for $\Omega_{\mathrm{R},0} = 0 \to 10^{-5}$ omitted) Do I have to use some model other than FLRW/ΛCDM, or is one of my parameters outdated?
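For comparison, the standard age integral $t_0=\int_0^1 \mathrm{d}a/(a\,H(a))$ can be evaluated numerically with the density parameters quoted in the question. Note that $H_0$ is not stated there, so the value below (67.3 km/s/Mpc, a Planck-2013-like number) is an assumption:

```python
import math

H0 = 67.3 * 1.022e-3          # km/s/Mpc -> 1/Gyr (1 km/s/Mpc ~ 1.022e-3 /Gyr)
O_r, O_m, O_t, O_l = 4.8e-5, 0.317, 1.02, 0.683

def H(a):
    # the question's H(a), with a0 = 1
    return H0 * math.sqrt(O_r/a**4 + O_m/a**3 + (1 - O_t)/a**2 + O_l)

# t0 = integral_0^1 da / (a H(a)), midpoint rule
N = 100_000
t0 = sum(1.0 / ((i + 0.5)/N * H((i + 0.5)/N)) for i in range(N)) / N
print(f"{t0:.1f} Gyr")
```

With these inputs the result lands near the accepted ~13.8 Gyr, which suggests the discrepancy comes from the $t(a)$ formula rather than from the density parameters.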
This is again a question in the context of this paper about the Exact Renormalization Group. On p. 23 and the following few pages, it is explained that for a $\lambda \phi^4$ bare action at the bare scale $\Lambda_0$, after integrating out degrees of freedom and assuming a small coupling $\lambda$, the effective action at the larger scale $\Lambda$ can be written as the sum of a perturbative series plus nonperturbative power corrections (Eq. 2.6) $$ S_{\Lambda,\Lambda_0}[\phi] = \sum\limits_{i=0}^{\infty}\lambda^{i-1}S_i[\phi] +O(\Lambda/\Lambda_0) $$ Naively taking $\Lambda_0 \rightarrow \infty$ makes the second part disappear and leaves the first part, which is self-similar, meaning that the theory is renormalizable. However, as stated in the paper, UV renormalons can make the perturbation series ambiguous, such that the power corrections, and therefore the bare scale, cannot be removed, and the theory would then not be renormalizable. The issue is explained by the following mathematical argument: in order to obtain a unique finite value for the perturbative series, one makes use of the so-called Borel transformation, which defines a function with the same power series expansion as an integral in the complex Borel plane. This integral exists only if there are no poles on the real axis; otherwise the contour of integration around these poles can be deformed, which makes the integral, and the function defined by it, ambiguous. If this is the case, the power correction terms have to be kept to restore the uniqueness of the integral, which means that the bare scale cannot be removed. In the paper it is said that the poles on the real axis are, for example, due to UV renormalons, which arise from large loop momenta in certain Feynman diagrams. My question now is: What are these renormalons from a physics point of view? How do they enter the Lagrangian of the theory? Are they some kind of unphysical auxiliary fields that appear in the mentioned Feynman diagrams?
And what do the Feynman diagrams that contain them look like?
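The Borel mechanism referred to above can be illustrated numerically, independently of the paper's specific theory. The divergent series $\sum_n (-1)^n n!\,x^n$ has Borel transform $1/(1+t)$, whose pole sits at $t=-1$, off the integration contour, so its Borel sum $\int_0^\infty e^{-t}/(1+xt)\,\mathrm{d}t$ is unambiguous and agrees with the optimally truncated series. Flipping the sign of the series moves the pole to $t=1/x$ on the positive axis, which is precisely the renormalon-type ambiguity:

```python
import math

x = 0.1

# Borel sum of sum_n (-1)^n n! x^n : integral_0^infty e^{-t} / (1 + x t) dt
# (midpoint rule; the integrand is negligible beyond t = 50)
N, T = 200_000, 50.0
h = T / N
borel = sum(math.exp(-(i + 0.5) * h) / (1 + x * (i + 0.5) * h)
            for i in range(N)) * h

# optimally truncated partial sum (truncate near n ~ 1/x)
partial, term = 0.0, 1.0
for n in range(10):
    partial += term
    term *= -(n + 1) * x

print(borel, partial)  # both ~0.9156
```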
In a previous question, link, I asked how I could most effectively do a Fourier transform of a radial function given at certain values, whose asymptotic behaviour we know. The Fourier transform reads $$\frac{4\pi}{q}\int^{\infty}_0 dr\, r \sin(qr) f(r).$$ I tried several ways and ended up choosing FFT, which approximates it by calculating the DFT over the interval $[0,\Delta r]$ with $N$ points. (I used in particular the NAG subroutine C06FAF.) I still get some issue around $q=0$. Indeed, as can be seen in the figure, I have some weird peak at very low frequencies. The black curve that is flat at $q=0$ is the analytical result, while the other curves are FFT calculations with increasing $\Delta r$ ($\color{blue}{\Delta r} >\color{red}{\Delta r} > {\bf \color{black}{\Delta r}} > \color{magenta}{\Delta r} $, the dashed one being the highest $\Delta r$). As can be seen, this peak is narrower when $\Delta r$ gets larger. The question is: where does this peak come from? And how could I get rid of it? I already tried to take the mean of the function and subtract it from the function, but it does not change anything. One subsidiary question is the following: the subroutine calculates the integral. I then have to divide by $q$ to get this $4\pi/q$ factor. Though, at $q=0$, this can't be done for obvious reasons. So, what can be done instead? EDIT: the problem of the wide peak was simply due to the fact that I was doing a bad conversion between $q$ and the frequency from the routine. As far as the problem of the division by $q=0$ is concerned, I'm happy with Endulum's answer.
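As a sanity check independent of the NAG routine, the transform can be evaluated by direct quadrature for a function with a known closed form: $f(r)=e^{-r}$ has $\hat f(q) = 8\pi/(1+q^2)^2$. At $q=0$ the limit $\sin(qr)/q \to r$ turns the expression into $4\pi\int_0^\infty r^2 f(r)\,\mathrm{d}r$, which is one way to handle the division-by-$q$ issue:

```python
import math

def radial_ft(f, q, rmax=40.0, N=100_000):
    """(4*pi/q) * integral_0^rmax r sin(q r) f(r) dr, midpoint rule.
    At q = 0, use the limit sin(q r)/q -> r."""
    h = rmax / N
    if q == 0.0:
        s = sum(((i + 0.5) * h)**2 * f((i + 0.5) * h) for i in range(N))
        return 4 * math.pi * s * h
    s = sum((i + 0.5) * h * math.sin(q * (i + 0.5) * h) * f((i + 0.5) * h)
            for i in range(N))
    return 4 * math.pi / q * s * h

f = lambda r: math.exp(-r)
print(radial_ft(f, 1.0), 8 * math.pi / (1 + 1.0**2)**2)  # both ~6.283
```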
In quantum computation, what is the equivalent model of a Turing machine? It is quite clear to me how quantum circuits can be constructed out of quantum gates, but how can we define a quantum Turing machine (QTM) that can actually benefit from quantum effects, namely, perform on high-dimensional systems? (Note: the full description is a bit complex and has several subtleties which I preferred to ignore. The following is merely the high-level idea of the QTM model.) When defining a quantum Turing machine (QTM), one would like to have a simple model, similar to the classical TM (that is, a finite state machine plus an infinite tape), but allow the new model the advantage of quantum mechanics. Similarly to the classical model, a QTM has: $Q=\{q_0,q_1,..\}$ - a finite set of states. Let $q_0$ be an initial state. $\Sigma=\{\sigma_0,\sigma_1,...\}$, $\Gamma=\{\gamma_0,..\}$ - the input and working alphabets. an infinite tape and a single "head". However, when defining the transition function, one should recall that any quantum computation must be reversible. Recall that a configuration of a TM is the tuple $C=(q,T,i)$ denoting that the TM is at state $q\in Q$, the tape contains $T\in \Gamma^*$ and the head points to the $i$th cell of the tape. Since, at any given time, the tape contains only a finite number of non-blank cells, we define the (quantum) state of the QTM as a unit vector in the Hilbert space $\mathcal{H}$ generated by the configuration space $Q\times\Gamma^*\times \mathbb{Z}$. The specific configuration $C=(q,T,i)$ is represented as the state $$|C\rangle = |q\rangle |T\rangle |i\rangle.$$ (Remark: therefore, every cell of the tape is a $|\Gamma|$-dimensional Hilbert space.)
The QTM is initialized to the state $|\psi(0)\rangle = |q_0\rangle |T_0\rangle |1\rangle$, where $T_0\in \Gamma^*$ is the concatenation of the input $x\in\Sigma^*$ with as many "blanks" as needed (there is a subtlety here in determining the maximal length, but I ignore it). At each time step, the state of the QTM evolves according to some unitary $U$ $$|\psi(i+1)\rangle = U|\psi(i)\rangle$$ Note that the state at any time $n$ is given by $|\psi(n)\rangle = U^n|\psi(0)\rangle$. $U$ can be any unitary that "changes" the tape only where the head is located and moves the head one step to the right or left. That is, $\langle q',T',i'|U|q,T,i\rangle$ is zero unless $i'= i \pm 1$ and $T'$ differs from $T$ only at position $i$. At the end of the computation (when the QTM reaches a state $q_f$) the tape is measured (in, say, the computational basis). The interesting thing to notice is that at each step the QTM's state is a superposition of possible configurations, which gives the QTM its "quantum" advantage. The answer is based on Masanao Ozawa, On the Halting Problem for Quantum Turing Machines. See also David Deutsch, Quantum theory, the Church-Turing principle and the universal quantum computer. As the notes indicate, the way to define a QTM is to define the transition function as a unitary transform of state and letter. So in each step, you imagine multiplying the (state, letter) vector by a transformation to get a new (state, letter). It's not particularly convenient, but it can be defined.
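The flavour of a unitary step acting on configurations can be sketched with the closely related discrete-time quantum walk (internal coin state standing in for $q$, walker position for the head index $i$). This is only an illustration of unitarity and of the superposition over configurations, not a full QTM:

```python
import math

s = 1 / math.sqrt(2)

def step(psi):
    """One unitary step: a Hadamard on the internal state, then the
    head moves right on |R> and left on |L>.
    psi maps configurations (state, position) -> amplitude."""
    out = {}
    for (c, i), a in psi.items():
        # Hadamard: |R> -> (|R>+|L>)/sqrt2,  |L> -> (|R>-|L>)/sqrt2
        branches = [('R', s * a), ('L', s * a if c == 'R' else -s * a)]
        for c2, amp in branches:
            j = i + 1 if c2 == 'R' else i - 1
            out[(c2, j)] = out.get((c2, j), 0.0) + amp
    return out

psi = {('R', 0): 1.0}          # initial configuration |R>|0>
for _ in range(3):
    psi = step(psi)

norm = sum(abs(a)**2 for a in psi.values())
print(len(psi), round(norm, 12))  # several configurations in superposition, norm 1.0
```

Amplitudes from different branches add and can cancel (interference), while the total norm is preserved at every step, which is exactly the constraint the transition unitary $U$ must satisfy.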
Here is an elementary proof of your equality. In the picture below, you see that the segment $A_2 B$ is the path-length difference between the ray reaching the point $P$ from the slit $A_2$ and the ray from the slit $A_1$. In the triangle $A_1 PB$ the edges $PA_1$ and $PB$ are equal, and if the angle $A_1 PB$ is small, the line $OP$ can be considered perpendicular to the line $A_1 B$. That means, since the segment $OA_1$ is also perpendicular to $OC$, that the angles $\theta_1$ and $\theta_2$ are equal. So, the triangles $OA_1 D$ and $A_1 PD$ are similar, and we have the relation $\dfrac {PC}{OD} = \dfrac{PD}{OA_1}$. Translating to your symbols, $$\frac{1}{2} \dfrac{s}{OD} \approx 2\dfrac S d\tag{1}\, .$$ (When equating $PD$ with $S$, I neglected $OD$ in comparison with $PD$.) I wrote $\dfrac s2$ because you need the distance between two maxima, and $PC$ is only half of it. Now, in order to have a maximum of intensity at the point $P$, the distance $A_2 B$ has to be an integer multiple of $\dfrac \lambda 2$. Notice that, again for small angles $\theta$, $A_2 B \approx 2 \times OD$. Introducing this in (1) we get $$\dfrac{s}{A_2 B} \approx \dfrac{2S}{d}\tag{2}\, ,$$ which implies your equality $$\dfrac s \lambda \approx \dfrac S d\, .$$
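The small-angle result can also be checked against the exact path-difference geometry. The sketch below uses hypothetical values ($S$ for the slit-to-screen distance, $d$ for the slit separation) and locates the first maximum, where the exact path difference equals $\lambda$, by bisection:

```python
import math

lam, d, S = 500e-9, 1e-4, 1.0    # wavelength, slit separation, screen distance

def path_diff(y):
    # exact |A2 P| - |A1 P| for slits at heights +/- d/2 on the slit plane
    return (math.sqrt(S**2 + (y + d/2)**2)
            - math.sqrt(S**2 + (y - d/2)**2))

# locate the first maximum (path difference = lambda) by bisection
lo, hi = 0.0, 0.1
while hi - lo > 1e-15:
    mid = (lo + hi) / 2
    if path_diff(mid) < lam:
        lo = mid
    else:
        hi = mid

s = lo          # fringe spacing = position of the first maximum
print(s / lam, S / d)   # both ~1e4, confirming s/lambda ~ S/d
```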
Strengthening weak measurements for qubit tomography and multitime correlators Justin Dressel Schmid College of Science and Technology Institute for Quantum Studies Chapman University January 16, 2019 Justin Dressel (PI) José Raúl Gonzales Alonso (postdoc) Razieh Mohseninia (postdoc) Shiva Barzili (grad student) Lucas Burns (grad student, spring 2019) Amy Lam (undergrad) Luis Pedro García-Pintos (postdoc - now at UMB) Taylor Lee Patti (undergrad - now at Harvard) William Parker (undergrad - now at U Oregon) Aaron Grisez (undergrad - founder of Qhord.com) Chapman Crew Outline Strengthening Weak Value Tomography Weak Value Tutorial Weak Value Tomography Strong Weak Values with Neutrons Strengthening Qubit Correlator Measurements Weakly Measured Correlators Generalized Qubit Measurements Strongly Measured Correlators What is a Weak Value? Roughly, it is a conditioned expectation value under weak detector coupling that can have surprising properties Aharonov, Albert, Vaidman (AAV), PRL 1988 What is a Conditioned Expectation Value? A conditioned expectation value is the conditioned response of an indirectly coupled and properly calibrated meter Pang, Dressel, Brun, PRL 113, 030401 (2014) The detector response is encapsulated in the "measurement operator", which is a partial matrix element of the interaction propagator Pang, Dressel, Brun, PRL 113, 030401 (2014) What is a Conditioned Expectation Value? What is a Weak Value? Weak values appears as the key complex system parameters that characterize the detector response. Generally all infinite orders \((A^n)_w\) are needed, but for weaker coupling the first order suffices Pang, Dressel, Brun, PRL 113, 030401 (2014) Kofman, Ashab, Nori, Physics Reports 520, 43 (2012) Dressel, Jordan, PRL 109, 230402 (2012) Detector parameters: System parameter: Pang, Dressel, Brun, PRL 113, 030401 (2014) What is a Weak Value? 
Typically, only the linear response regime is considered, where the weak value is directly proportional to the detector response What is a Weak Value? Even keeping only the first-order weak value, the nonlinear expression is remarkably accurate even for moderately large coupling strengths Example: Sagnac Interferometer Prototype experiment: Howell lab, Rochester PRL 102, 173601 (2009) Ultra-sensitive to beam deflection: ~560 femto-radians of tilt detected Weak Value Analysis Angular tilt (transverse momentum) amplified by large weak value. Weak value regime Dark port has a single lobe that approximates a displaced Gaussian centered at: Tiny beam deflections can be distinguished, but with low output intensity. Exact Collimated Analysis Original profile of beam becomes modulated. JD et al., PRA 88, 023801 (2013) Collimated Dark Port Profiles Left: Wavefront tilt mechanism producing spatial modulation Right: Asymmetric dark port profiles in different regimes Dashed envelope: input beam intensity Solid curve: dark port intensity Top right: weak value regime Middle right: double lobe regime Bottom right: misaligned regime Weak Value Tomography Measurable weak values Determining these also determines the state, up to a simple renormalization Unnormalized initial state: Arbitrary constant, in terms of unbiased "postselection" basis Unknown state: Dressel, et al. RMP 86, 307 (2014) Task: Measure the path state of a neutron inside a neutron interferometer using its coupled spin degree of freedom as a probe Denkmayr, Dressel, et al.
PRL 118, 010402 (2017) Spin-up neutrons preselected by silicon crystal Bragg condition of interferometer DC coil (ST1) rotates spin to +X (initial spin state prepared at this point) Sapphire slab (PS) phase shifts relative paths (initial path state prepared at this point) Helmholtz coils (HCs) perform path-dependent spin rotations by tunable angle alpha After interferometer, DC coil (ST2) and CoTi supermirror (A) spin-polarizes neutrons Only neutrons at O-Det are analyzed (postselection completed here) Measuring average spin response allows determination of path weak values, and thus the intermediate path state inside the interferometer Denkmayr, Dressel, et al. PRL 118, 010402 (2017) Expressions exact for any relative spin-splitting angle alpha Usual linear response regime when alpha small Angle alpha selected here Spin measurement axis selected here Arbitrary-strength Path Weak Values Relative phase of path state varied, keeping relative magnitudes the same (only imaginary weak value nonzero) Weak (left) and Strong (right) measurements compared Different-looking responses yield same weak value reconstructions Denkmayr, Dressel, et al. Physica B (2018) Same Weak Values : Stronger has better accuracy and precision Denkmayr, Dressel, et al. Physica B (2018) Outline Strengthening Weak Value Tomography Weak Value Tutorial Weak Value Tomography Strong Weak Values with Neutrons Strengthening Qubit Correlator Measurements Weakly Measured Correlators Generalized Qubit Measurements Strongly Measured Correlators Recall Weak Measurement What happens when you make several weak measurements in a row? 
Pang, Dressel, Brun, PRL 113, 030401 (2014) Crude idea: measurement operator approximately linearizes in weak measurement regime (linear response regime) Post-interaction system state couples to anticommutator to linear order in coupling: Sequences of measurements create nested anticommutator terms to lowest order in the joint couplings Benefit: Isolating such a nested commutator term gives access to interesting temporal correlation information that is sensitive to coherence (e.g., quasiprobability characterizations of the dynamics: Yunger Halpern, Swingle, Dressel PRA 97, 042105 (2018)) Problem: Isolating this term means subtracting away unwanted terms, including the higher-order coupling terms, which is difficult in practice with realistic measurements Miracle: For qubits, this term can be isolated exactly with any measurement strengths (Dressel, et al. PRA 98, 012032 (2018)) Arbitrary Strength Qubit Measurements Informative Measurement: Non-Informative Measurement: Dressel, et al. PRA 98, 012032 (2018) Generalized spectral decomposition (works for observables A that square to the identity) Arbitrary Strength Qubit Measurements Informative Measurement: White, Mutus, Dressel, et al. npj Quantum Information 2, 15022 (2016) Generalized spectral decomposition Generalized measurement method already used experimentally with superconducting qubits (Google) Arbitrary Strength Qubit Measurements Informative Measurement: White, Mutus, Dressel, et al. npj Quantum Information 2, 15022 (2016) Arbitrary Strength Qubit Measurements Informative Measurement: Dressel, et al. PRA 98, 012032 (2018) Useful corollaries: Anticommutators isolated perfectly for any coupling strength Nested anticommutator averages can be measured directly for any coupling strength Arbitrary Strength Qubit Measurements Non-Informative Measurement: Dressel, et al.
PRA 98, 012032 (2018) Useful corollaries: Commutators also isolated perfectly for any coupling strength Nested commutator averages can also be measured directly for any coupling strength Arbitrary Strength Qubit Measurements Dressel, et al. PRA 98, 012032 (2018) Why does this work? What about state collapse? (Lindblad) decoherence terms from the measurement cancel in the average Any marginalization of results exposes the collapse-based disturbance Strongly Measured 2-Point Correlator Dressel, et al. PRA 98, 012032 (2018) 2-point temporal qubit correlators may be directly measured using any measurement strength These are common theoretical objects that are usually inaccessible in measurements directly Bracketing measurement by unitary evolutions probes a Heisenberg-evolved operator The final unitary may be omitted as inconsequential to the measured average Strongly Measured 4-Point Correlator Dressel, et al. PRA 98, 012032 (2018) 4-point temporal qubit out-of-time-ordered correlators may be directly measured using any measurement strength These are recently explored theoretical objects connected with information scrambling and chaos (one reversed evolution needed) Conclusions Weak values are better when made strong Nested (anti)commutators are accessible for qubits at any strength 2- and 4-point qubit correlators can be directly measured Follow-up research in the works: Measuring qubit quasiprobabilities strongly with multiple setups Measuring qutrit correlators with arbitrary strength Investigation of out-of-time-ordered correlators experimentally Thank you! Superconducting Qubits: Review and recent results Superconducting Qubits Mesoscopic quantum coherence of collective charge motion at \(\mu\)m scale EM Fields produced by charge motion described by Circuit QED Lowest levels of anharmonic oscillator potentials treated as artificial atoms Typical Transmon Parameters Microwave Measurement Note : Without a quantum-limited amplifier, this doesn't work! 
The Josephson Parametric Amplifier (JPA) and Traveling Wave Parametric Amplifier (TWPA) boost signal enough for later HEMTs in the readout chain to resolve the information. "3D Transmon" Cavity mode: Detuned (dispersive) regime (RWA): X-X Coupling: Korotkov group, Phys. Rev. A 92, 012325 (2015) Martinis group, Phys. Rev. Lett. 117, 190503 (2016) 2D planar chip schematic Bus acts as Purcell filter, coupled to traveling wave parametric amplifier (TWPA) Similar parameters: v1 v3 Coming soon: two-layer design of 20+ qubits separated from control circuitry (similar to Google Bristlecone + IBM Q) Now on v8+ UCB 2D planar chip Multiplexed 10 qubit control and readout Single shot "projective" readout : typical quantum computing goal Microwave Measurement A Brief Tour of Recent Results Individual quantum state trajectories filtered from the readout are verified via spot checking predicted subensembles with tomography Approach: Perform a random tomographic pulse at the end of each data run Spot check subensembles of data to verify tomography of final state Quantum Trajectories Murch et al., Nature 502, 211 (2013) Monitored Rabi Drive JD and Siddiqi Group, Nature 511, 570 (2014) Partial collapses compete with unitary dynamics Ensemble-averaging the stochastic evolution recovers the usual Lindblad dynamics Conditioned State Dynamics JD and Siddiqi Group, Nature 511, 570 (2014) Experimental most probable path matches ODE solution derived from stochastic path integral JD and Jordan Group, PRA 88, 042110 (2013) Tracking Drifting Parameters Maximum likelihood techniques allow extraction of parameter drifts from stochastic records with reasonable precision JD and Jordan Group, PRA 95, 012314 (2017) Feedback State Stabilization Linear feedback (with very small temporal delays) can stabilize the qubit state to targeted regions of the Bloch sphere.
JD and Jordan Group, PRA 96, 022311 (2017) Symmetrically detuned pumps Beats stroboscopically measure rotating qubit Yields displacement coupling Allows tunable measurement axis Multiple cavity modes = multiple observables Displacement coupling: Siddiqi group, Nature 538, 491 (2016) Stroboscopic Displacement Coupling Rotating frame: Incoherent Zeno-Dragging Qubit Gate Idea : use time-varying measurement axes to drag the quantum state around the Bloch sphere using the quantum Zeno effect The record tracks the state well in this regime, so can be used as a herald for high-fidelity gates Non-unitary gate (measurement-based) Stroboscopic displacement coupling can be time-varying JD, Siddiqi group, PRL 120, 020505 (2018) Jumps : Faster drag speeds allow trajectories to jump to the opposite pole, decreasing ensemble-averaged dragging fidelity Jump-axis : Dragging dynamics causes lag of the actual Zeno-pinned state behind the measurement axis by a fixed angle JD, Siddiqi group, PRL 120, 020505 (2018) Pinned to poles : Other than the jumps, the state remains pinned to the lagged measurement poles Incoherent Zeno-Dragging Qubit Gate State collapses to jump-axis JD, Siddiqi group, PRL 120, 020505 (2018) Incoherent Zeno-Dragging Qubit Gate Post-selecting on trajectories with an average readout with a value >1 keeps only trajectories that did not jump, heralding a reasonably high-fidelity dragging gate for that subset Alternatively, the jump may be observed, then corrected later JD, Siddiqi group, PRL 120, 020505 (2018) Incoherent Zeno-Dragging Qubit Gate Siddiqi group, Nature 538, 491 (2016) 4 pumps, symmetrically detuned from 2 resonator modes 2 simultaneous noncommuting observables Partial collapses compete with each other, preventing full collapse to a stationary state If observables are maximally non-commuting, creates persistent phase-diffusion in Bloch sphere Noncommuting Observable Measurement Siddiqi group, Nature 538, 491 (2016) State purifies, but diffuses randomly Basins of attraction
if measurement axes are nearly aligned Noncommuting Observable Measurement Siddiqi group, Nature 538, 491 (2016) State disturbance can be measured Result agrees with the lower bound set by the Maccone-Pati relation involving the sum of variances: Uncertainty relation forces the random state diffusion when measuring incompatible axes Noncommuting Observable Measurement Circuit Quantum Electrodynamics Ideas: Size scale of device (\(\mu\)m) is smaller than the characteristic wavelength of superconducting Cooper pairs: quantum coherence is relevant to the motion Cooper pairs (bosons) condense into collective charge motion that is well described by an effective "center of charge", much like rigid bodies are described by an effective center of mass in classical mechanics Collective charge motion along superconducting wires can be described by the currents and voltages it produces, which are then easily related to measurable capacitances, inductances, and impedances Quantization of the conjugate variables of flux and charge follows from the usual quantization of the electromagnetic field How do we model collective mesoscopic quantum coherence? Circuit QED (Technical Details) Definitions: Map hardware into a circuit of nodes connected by branches (e.g., capacitor, inductor, etc.)
Define voltage and current for each branch via EM fields, as well as the flux and charge stored in each element: Define ground node and tree of active nodes connecting both capacitors and inductors - the fluxes to ground are the dynamical circuit variables Branches \(b\in\mathcal{B}\) in path connecting node \(n\) to ground through capacitors charge conjugate to node flux \(\phi_n\) (\(+1\) capacitive, \(-1\) inductive) Primary circuit elements: capacitor, inductor, and Josephson junction "Kinetic" energy: "Potential" energy: Canonical quantization : (equivalent to quantizing \(\vec{E},\vec{B}\) ) Circuit QED (Technical Details) (An)harmonic Oscillators Resonator: Transmon: Josephson junction shunted by large capacitance: \(E_J/E_C \sim 100, \; E_J = \frac{(\hbar/2e)^2}{L_J}, \; E_C = \frac{e^2}{2C_J} \) Qubit - Resonator Coupling Resonator + Transmon: Dispersive approximation, including rotating-wave approx (RWA): Resonator frequency depends on transmon energy levels Transmission Line Coupling Resonator + Transmon + Transmission Line: doi: 10.1002/cta.2359 (2017). (OUT: to amplifier and detector) (IN: from signal generator)
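Using the energy scales quoted on the slide (\(E_J = (\hbar/2e)^2/L_J\), \(E_C = e^2/2C_J\)) together with the standard transmon approximations \(hf_{01} \approx \sqrt{8E_JE_C} - E_C\) and anharmonicity \(\approx -E_C/h\) (Koch et al., PRA 76, 042319 (2007)), here is a small numeric sketch; the capacitance and inductance values are illustrative assumptions, not taken from the slides:

```python
import numpy as np

h = 6.62607015e-34        # Planck constant [J*s]
e = 1.602176634e-19       # electron charge [C]

# Illustrative (assumed) transmon element values
C_J = 90e-15              # shunt capacitance [F]
L_J = 8e-9                # Josephson inductance [H]

E_C = e**2 / (2 * C_J)                          # charging energy [J]
E_J = (h / (2 * np.pi) / (2 * e))**2 / L_J      # Josephson energy, (hbar/2e)^2 / L_J

# Standard transmon approximations
f_01 = (np.sqrt(8 * E_J * E_C) - E_C) / h       # qubit frequency [Hz]
anharmonicity = -E_C / h                        # f_12 - f_01 [Hz]

print(E_J / E_C)              # ~95: in the transmon regime E_J/E_C ~ 100
print(f_01 / 1e9)             # a few GHz, typical of the devices described
print(anharmonicity / 1e6)    # roughly -200 MHz
```

The point of the sketch is only to show that "typical transmon parameters" (tens of fF, a few nH) land in the usual 4-6 GHz, \(E_J/E_C \sim 100\) regime the slides refer to.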
I have a question regarding these strikingly similar problems with contradicting solutions. This is somewhat long, so prepare.

Problem 1 Consider a bag of ten coins: nine are fair, but one is weighted with both sides heads. You randomly select a coin and toss it five times. Let $2s$ denote the event of selecting the weighted coin (that is, the 2-sided coin), let $N$ be the event you select a regular coin, and let $5H$ be the event of getting five heads in a row. What is a) $P(5H | 2s)$ b) $P(5H | N)$ c) $P(5H)$ d) $P(2s | 5H)$

Solution 1 a) Simply 1 b) $\frac{1}{2^5}$ c) $\frac{1}{2^5}\cdot\frac{9}{10}+ \frac{1}{10} = \frac{41}{320}$ d) $P(2s|5H) = \dfrac{P(5H|2s)P(2s)}{P(5H)} = \frac{32}{41}$

From Solution 1, it seems that $P(2s|5H) \neq P(2s)$. That is, the event of picking out the weighted coin affects the probability of getting $5H$. Here is part of my question: isn't there also some tiny probability of getting $5H$ from picking a normal coin as well? It doesn't make sense to me why the events of picking the coin and getting $5H$ are dependent. Read on to the next question.

Problem 2 A diagnostic test for an eye disease is accurate 88% of the time, and 2.4% of the population actually has the disease. Let $ED$ be the event of having the eye disease and $p$ be the event of testing positive. Find the probability that a) the patient tests positive b) the patient has the disease and tests positive

Solution 2 Here is a tree diagram a) $0.02112 + 0.11712 = 0.13824$ b) $P(ED | p) = \dfrac{P(\text{ED and p})}{P(p)} =\frac{0.02112}{0.13824 }= 0.1528$

From Solution 2, it looks like $P(\text{ED and p}) = P(\text{ED})P(p)$, which means that having the eye disease and testing positive are independent events?

After trying out the same formula from Problem 1, it also seems that $$P(\text{ED | p}) = \dfrac{P(\text{ED and p})}{P(p)} = \dfrac{P(\text{p | ED})P(ED)}{P(p)} = 0.1528$$ Also, when the question asks "the patient has the disease and tests positive", how do I know that it is $P(ED | p)$ and not $P(p | ED)$? I am very confused in general with this. Could anyone clarify for me? Thanks
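For what it's worth, Solution 1 can be checked exactly with rational arithmetic (my own addition, not part of the original question):

```python
from fractions import Fraction

p_2s = Fraction(1, 10)               # pick the two-headed coin
p_N = Fraction(9, 10)                # pick a normal coin
p_5h_given_2s = Fraction(1)          # P(5H | 2s): it always lands heads
p_5h_given_N = Fraction(1, 2) ** 5   # P(5H | N)

# Law of total probability, then Bayes' rule
p_5h = p_5h_given_2s * p_2s + p_5h_given_N * p_N
p_2s_given_5h = p_5h_given_2s * p_2s / p_5h

print(p_5h)                          # 41/320
print(p_2s_given_5h)                 # 32/41

# Independence check: is P(2s and 5H) equal to P(2s) * P(5H)?
p_joint = p_5h_given_2s * p_2s
print(p_joint == p_2s * p_5h)        # False: the events are dependent
```

The last line is the actual test of independence: $P(2s \cap 5H) = \frac{1}{10}$ while $P(2s)P(5H) = \frac{41}{3200}$, so the two events are dependent, which is exactly why observing $5H$ raises the posterior for the weighted coin.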
Hey guys! I built the voltage multiplier with an alternating square wave from a 555 timer as a source (which measures 4.5V on my multimeter) but the voltage multiplier doesn't seem to work. I tried first making a voltage doubler and it showed 9V (which is correct I suppose) but when I try a quadrupler, for example, the voltage starts from about 6V and goes down around 0.1V per second. Oh! I found a mistake in my wiring and fixed it. Now it seems to show 12V and instantly starts to go down by 0.1V per sec. But you really should ask the people in Electrical Engineering. I just had a quick peek, and there was a recent conversation about voltage multipliers. I assume there are people there who've made high voltage stuff, like rail guns, which need a lot of current, so a low current circuit like yours should be simple for them. So what did the guys in the EE chat say... The voltage multiplier should be ok on a capacitive load. It will drop the voltage on a resistive load, as mentioned in various Electrical Engineering links on the topic. I assume you have thoroughly explored the links I have been posting for you... A multimeter is basically an ammeter. To measure voltage, it puts a stable resistor into the circuit and measures the current running through it. Hi all! There is a theorem that links the imaginary and the real part of a time-dependent analytic function. I forgot its name. It's named after some Dutch(?) scientist and is used in solid state physics; who can help? The Kramers–Kronig relations are bidirectional mathematical relations, connecting the real and imaginary parts of any complex function that is analytic in the upper half-plane. These relations are often used to calculate the real part from the imaginary part (or vice versa) of response functions in physical systems, because for stable systems, causality implies the analyticity condition, and conversely, analyticity implies causality of the corresponding stable physical system.
The relation is named in honor of Ralph Kronig and Hans Kramers. In mathematics these relations are known under the names... I have a weird question: The output of an astable multivibrator will be shown on a multimeter as half the input voltage (for example we have 9V-0V-9V-0V... and the multimeter averages it out and displays 4.5V). But then if I put that output to a voltage doubler, the voltage should be 18V, not 9V, right? Since the voltage doubler will output DC. I've tried hooking up a transformer (9V to 230V, 0.5A) to an astable multivibrator (which operates at 671Hz) but something starts to smell burnt and the components of the astable multivibrator get hot. How do I fix this? I checked it afterwards and the astable multivibrator still works. I searched the whole god damn internet, asked every god damn forum and I can't find a single schematic that converts 9V DC to 1500V DC without using giant transformers and power stage devices that weigh 1 billion tons.... something so "simple" turns out to be hard as duck In Peskin's book of QFT the sum over zero point energy modes is an infinite c-number; fortunately, its experimental evidence doesn't appear, since experimentalists measure the difference in energy from the ground state. According to my understanding the zero point energy is the same as the ground state, isn't it? If so, it is always possible to subtract a finite number (a higher excited state, e.g.) from this zero point energy (which is infinite); it follows that experimentally we always obtain an infinite spectrum.
They are clearly fairly new to physics and don't quite grasp yet that most "novel" ideas have been thought of to death by someone; likely 100+ years ago if it's classical physics I have recently come up with a design of a conceptual electromagnetic field propulsion system which should not violate any conservation laws, particularly the Law of Conservation of Momentum and the Law of Conservation of Energy. In fact, this system should work in conjunction with these two laws ... I remember that Gordon Freeman's thesis was "Observation of Einstein-Podolsky-Rosen Entanglement on Supraquantum Structures by Induction Through Nonlinear Transuranic Crystal of Extremely Long Wavelength (ELW) Pulse from Mode-Locked Source Array " @ACuriousMind What confuses me is Peskin's interpretation of this infinite c-number and the experimental fact. He said the second term is the sum over zero point energy modes, which is infinite as you mentioned. He added, "fortunately, this energy cannot be detected experimentally, since the experiments measure only the difference from the ground state of H". @ACuriousMind Thank you, I understood your explanations clearly.
However, regarding what Peskin mentioned in his book, there is a contradiction between what he said about the infinity of the zero point energy/ground state energy, and the fact that this energy is not detectable experimentally because the measurable quantity is the difference in energy between the ground state (which is infinite, and this is the confusion) and a higher level. It's just the first encounter with something that needs to be renormalized. Renormalizable theories are not "incomplete", even though you can take the Wilsonian standpoint that renormalized QFTs are effective theories cut off at a scale. According to the author, the energy difference is always infinite according to two facts: first, the ground state energy is infinite; secondly, the energy difference is defined by subtracting a higher level energy from the ground state one. @enumaris That is an unfairly pithy way of putting it. There are finite, rigorous frameworks for renormalized perturbation theories following the work of Epstein and Glaser (buzzword: Causal perturbation theory). Just like in many other areas, the physicist's math sweeps a lot of subtlety under the rug, but that is far from unique to QFT or renormalization The classical electrostatics formula $H = \int \frac{\mathbf{E}^2}{8 \pi} dV = \frac{1}{2} \sum_a e_a \phi(\mathbf{r}_a)$ with $\phi_a = \sum_b \frac{e_b}{R_{ab}}$ allows for $R_{aa} = 0$ terms i.e. dividing by zero to get infinities also, the problem stems from the fact that $R_{aa}$ can be zero due to using point particles, overall it's an infinite constant added to the particle that we throw away just as in QFT @bolbteppa I understand the idea that we need to drop such terms to be in consistency with experiments. But I cannot understand why the experiment didn't predict such infinities that arose in the theory?
These $e_a/R_{aa}$ terms in the big sum are called self-energy terms, and are infinite, which means a relativistic electron would also have to have infinite mass if taken seriously, and relativity forbids the notion of a rigid body so we have to model them as point particles and can't avoid these $R_{aa} = 0$ values.
I want to understand the concept of an étale morphism of schemes using the following definition: A morphism $f: X \to Y$ is étale iff it is flat (1), locally of finite presentation (2), and unramified (3): the separable residue field condition. Here (1), (2) and (3) mean: (1) For every $x \in X$ the induced morphism $f_x ^{\#}:\mathcal{O}_{Y, f(x)} \to \mathcal{O}_{X,x}$ is flat (2) There exist an open affine neighbourhood $U_x =Spec(R)$ of $x$ and an o. a. n. $V_{f(x)}= Spec(S)$ of $f(x)$ with $f(U_x) \subset V_{f(x)}$ such that the induced ring map $S \to R$ is of finite presentation (3) Let $m_x \subset \mathcal{O}_{X,x}$ be the unique maximal ideal of the local ring $\mathcal{O}_{X,x}$ and respectively $m_{f(x)} \subset \mathcal{O}_{Y,f(x)}$ the unique maximal ideal of $\mathcal{O}_{Y,f(x)}$: Then $m_{f(x)}\mathcal{O}_{X,x} = m_x$ and the induced finite field extension $\mathcal{O}_{Y,f(x)}/m_{f(x)} \to \mathcal{O}_{X,x}/m_x$ is separable. I'm trying to acquire the intuition for étaleness by considering the following four examples (a) $Spec(\mathbb{C}[T, T^{-1}]) \to Spec(\mathbb{C}[T])$ (b) $Spec(\mathbb{C}[T] /(T^d - 2))\to Spec(\mathbb{C}[T])$ (c) $Spec(\mathbb{C}[T, Y]/(Y^d - T)) \to Spec(\mathbb{C}[T])$ (d) $Spec(\mathbb{C}[T, T^{-1},Y]/(Y^d - T)) \to Spec(\mathbb{C}[T])$ My attempts: (a) It is flat since it is just the localization of $\mathbb{C}[T]$ at $T$. Or a second argument: open embeddings are étale. But what about (b), (c) and (d)? All induced ring maps $\mathbb{C}[T] \to \dots$ in (b), (c), (d) are of finite presentation. Since being of finite presentation behaves well under base changes / localisations, condition (2) holds. What about conditions (1) and (3)? Why does it suffice to consider in all cases only the localisation at $p= (T)$? The main problem here for me is to analyze what happens on stalks, therefore what happens with the localisations of the ring map $\mathbb{C}[T] \to \dots$ with respect to an arbitrary point / prime ideal $p =(T - \lambda)$ for $\lambda \in \mathbb{C}$.
I guess that one can distinguish two relevant cases 1. $\lambda=0$ and 2. $\lambda \neq 0$ arbitrary. Could anybody explain how to argue exactly for (1) and (3)?
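One sanity check I can offer (my own addition, not part of the original question): compute the scheme-theoretic fibers in example (c), using the standard fact that a finite flat morphism is étale at a point exactly when the fiber through it is reduced there (a finite product of separable field extensions):

```latex
% Fiber of (c): Spec(C[T,Y]/(Y^d - T)) -> Spec(C[T]) over the point T = \lambda
\mathbb{C}[T,Y]/(Y^d - T) \otimes_{\mathbb{C}[T]} \mathbb{C}[T]/(T - \lambda)
  \;\cong\; \mathbb{C}[Y]/(Y^d - \lambda)
  \;\cong\;
  \begin{cases}
    \prod_{i=1}^{d} \mathbb{C}, & \lambda \neq 0
      \quad (d \text{ distinct simple roots: unramified}),\\[2pt]
    \mathbb{C}[Y]/(Y^d), & \lambda = 0
      \quad (\text{non-reduced for } d > 1\text{: ramified}).
  \end{cases}
```

So (c) is ramified precisely over $\lambda = 0$, and (d) removes exactly that fiber, which is consistent with the guess that the two cases $\lambda = 0$ and $\lambda \neq 0$ behave differently.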
OFDM belongs to the class of multicarrier modulation schemes. OFDM decomposes the transmission frequency band into a group of narrower contiguous subbands (carriers), and each carrier is individually modulated. You can implement this type of modulation with an inverse fast Fourier transform (IFFT). By using narrow orthogonal subcarriers, the OFDM signal gains robustness over a frequency-selective fading channel and eliminates adjacent subcarrier crosstalk. At the receiving end, you can demodulate the OFDM signal with a fast Fourier transform (FFT) and equalize it with a complex gain at each subcarrier. Combining OFDM with MIMO can improve communication speed without increasing the frequency band. In the figure above, the waveforms of single-carrier modulation and multicarrier modulation are represented in the frequency domain (top) and the time domain (bottom). Since the multiple data streams can be transmitted simultaneously with multiple carriers, OFDM is not influenced by noise to the same degree as single-carrier modulation. That’s because the time per symbol can be lengthened by the number of carriers. The Principles of OFDM An OFDM signal aggregates the information in orthogonal single-carrier frequency-domain waveforms into a time-domain waveform that can be transmitted over the air. The subcarriers use QPSK or QAM as the primary modulation method. The inverse discrete Fourier transform equation for this is: $$f(x) = { 1 \over N} \sum_{t=0}^{N-1} F(t) e^{i \frac{2 \pi xt}{N}} $$ In OFDM, the subcarriers are spaced at intervals of 1/(symbol time), so that at the frequency where each subcarrier's spectrum reaches its maximum, the spectra of all other subcarriers are zero; this preserves orthogonality and prevents interference between subcarriers. Moreover, OFDM as a multicarrier transmission is effective in multipath environments because the influence of multipath is concentrated on specific subcarriers, compared with a single-carrier transmission.
In the case of a single-carrier transmission, multipath affects the entire signal. The arrival time difference between the direct wave and the reflected wave increases when the signal is transmitted over a long range. In that situation, more subcarriers (and hence a longer symbol time) are used than in a smaller service range. OFDM Technology in 5G Systems During the specification of the 5G standard, various technologies based on OFDM were considered. CP-OFDM (cyclic prefix OFDM) is used in LTE and was also selected for the 3GPP Release 15 standard. This technique adds a guard interval called a cyclic prefix to the beginning of the OFDM symbol. CP-OFDM suppresses intersymbol interference (ISI) and intercarrier interference (ICI) by inserting the data for a certain period of time from the trailing end of the OFDM symbol as the cyclic prefix at the beginning of the OFDM symbol. Pros and Cons of OFDM Advantages of OFDM Multiple users can be assigned to OFDM subcarriers. Spectrum is used efficiently because the subcarriers are orthogonal (spaced at 1 / symbol time). It is resistant to transmission distortion due to multipath, making demodulation possible by error correction without using a complicated equalizer. Disadvantages of OFDM Because the amplitude of the signal changes significantly, the waveform has a high peak-to-average power ratio: the transmit amplifier must either be backed off below its rated power or have a wide dynamic range. In addition, when the carrier interval is narrow, OFDM becomes more sensitive to Doppler shift, which degrades the orthogonality between subcarriers. OFDM Using MATLAB MATLAB ® and related toolboxes, including Communications Toolbox™, WLAN Toolbox™, LTE Toolbox™, and 5G Toolbox™, provide functions to implement, analyze, and test OFDM waveforms and perform link simulation.
The toolboxes also provide end-to-end transmitter/receiver system models with configurable parameters and wireless channel models to help evaluate the wireless systems that use OFDM waveforms. Specifically, as a part of wireless communication system design, you can use these OFDM capabilities to analyze link performance, robustness, system architecture options, channel effects, channel estimation, channel equalization, signal synchronization, and subcarrier modulation selections. MATLAB functions and Simulink ® blocks for OFDM modulation provide adjustable parameters such as training signal, pilot signal, 0 padding, cyclic prefix, and points of FFT. It is also possible to generate and analyze standard-compliant and custom OFDM waveforms over the air by using the Wireless Waveform Generator app in Communications Toolbox with Instrument Control Toolbox™ to connect MATLAB to RF test and measurement instruments.
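The IFFT/CP/FFT pipeline described above can be sketched in a few lines of NumPy. This is an illustrative toy, not toolbox code; the subcarrier count, cyclic-prefix length, and channel taps are my own assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sc, cp = 64, 16                     # subcarriers, cyclic-prefix length

# Random QPSK symbols, one per subcarrier
bits = rng.integers(0, 4, n_sc)
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))

# Transmitter: IFFT, then prepend the symbol's tail as the cyclic prefix
time_sym = np.fft.ifft(qpsk)
tx = np.concatenate([time_sym[-cp:], time_sym])

# Frequency-selective multipath channel, shorter than the CP (3-tap FIR)
h = np.array([1.0, 0.4 + 0.3j, 0.2])
rx = np.convolve(tx, h)[:len(tx)]

# Receiver: drop the CP, FFT, then a one-tap complex equalizer per subcarrier
H = np.fft.fft(h, n_sc)               # channel gain at each subcarrier
eq = np.fft.fft(rx[cp:]) / H

print(np.allclose(eq, qpsk))          # True: CP turns the channel into a
                                      # circular convolution, diagonal in frequency
```

Because the channel's delay spread fits inside the cyclic prefix, linear convolution becomes circular convolution over the FFT block, so the channel is a single complex gain per subcarrier, which is exactly the "equalize with a complex gain at each subcarrier" step the article mentions.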
This looks right, although I would emphasise that it is not really best practice to have to ask this question at this stage. The whole point of doing a particularly simple example is so that you can confirm that it's doing what you've already calculated analytically. It's quite important to do the analytic bit first to avoid confirmation bias. Anyway, let's see what we should get. I'm going to assume a first register (the one that we do the Fourier transform etc on) contains $t$ qubits. I'm guessing that $t=7$ from your output. We start with the system in $|0\rangle^{\otimes t}|1\rangle$, and apply the Hadamard transform to the first register:$$\frac{1}{\sqrt{2^t}}\sum_{j=0}^{2^t-1}|j\rangle|1\rangle.$$Then we apply the modular exponentiation$$\frac{1}{\sqrt{2^t}}\sum_{j=0}^{2^t-1}|j\rangle|2^j\text{ mod }5\rangle.$$We can simplify this as$$\frac{1}{\sqrt{2^t}}\sum_{k=0}^3\left(\sum_{j=0}^{2^{t-2}-1}|k+4j\rangle\right)|2^k\text{ mod }5\rangle.$$ Now, instead of simply applying the inverse Quantum Fourier Transform, I prefer to think about its action on states. The QFT (not inverse) transforms basis states$$|x\rangle\rightarrow|\psi_x\rangle=\frac{1}{\sqrt{2^t}}\sum_{y=0}^{2^t-1}\omega^{xy}|y\rangle$$where $\omega=e^{2\pi i/2^t}$, so we want to describe our states such as $|0\rangle+|4\rangle+|8\rangle+|12\rangle+\ldots$ in terms of $|\psi_x\rangle$ to know what values of $x$ we might get as output from the inverse QFT. But we can easily observe that$$|0\rangle+|4\rangle+|8\rangle+|12\rangle+\ldots=\frac{1}{2}\sum_{x=0}^3|\psi_{2^{t-2}x}\rangle.$$Hence, for the $k=0$ case, we get each of the 4 answers $0,2^{t-2},2^{t-1},3\times 2^{t-2}$ with equal probability. These are the 4 bit strings that you're seeing on output.
Similarly, the other 3 values of $k$ can be decomposed also in terms of $\{|\psi_{2^{t-2}x}\rangle\}_{x=0}^3$, just with different multiplicative factors (fourth roots of unity), the point being that these 4 vectors all have period 4 and they form a basis for the states $|0\rangle$ to $|3\rangle$.
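A quick numerical check of the last step (my own addition): building the $k=0$ superposition for $t=7$ and Fourier transforming it with the same sign convention as $|\psi_x\rangle$ shows support exactly on multiples of $2^{t-2}=32$:

```python
import numpy as np

t = 7                        # qubits in the first register, as assumed above
N = 2**t
# Uniform superposition over |0>, |4>, |8>, ... (the k = 0 branch)
psi = np.zeros(N, dtype=complex)
psi[::4] = 1 / np.sqrt(N // 4)

# Transform matching |psi_x> = (1/sqrt(N)) sum_y omega^{xy} |y>:
# numpy's ifft uses the positive exponent exp(+2*pi*i*x*y/N) with a 1/N factor,
# so rescaling by sqrt(N) gives the unitary convention used in the answer.
F = np.fft.ifft(psi) * np.sqrt(N)
probs = np.abs(F)**2

support = np.nonzero(probs > 1e-12)[0]
print(support)               # the four outcomes 0, 2^(t-2), 2^(t-1), 3*2^(t-2)
print(probs[support])        # each with probability 1/4
```

This reproduces the claim above: the four equally likely outcomes are $0, 32, 64, 96$, i.e. the bit strings seen on the simulator output.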
Club sets and stationary sets Closed and unbounded subsets of ordinals, more commonly referred to as club sets, play a prominent role in modern set theory. We intuitively think of clubs as the "large" subsets of $\kappa$ and the stationary subsets as the "not small" subsets of $\kappa$, though this is sort of a boring way to look at them. They arise from considering the natural topology on the class of ordinals and often exhibit substantial reflection properties. Given an ordinal $\kappa$, the basic open intervals are pairs of ordinals $(\alpha, \beta)=\{\gamma : \alpha <\gamma < \beta\}$ where $\beta <\kappa$. Closed intervals are defined similarly, so closed intervals are topologically closed. Considering a typical example of an interval of ordinals $[\lambda, \lambda+1, \lambda+2, \dots)$ it appears there are more successor ordinals than limits, but club (and also stationary) sets favor limit ordinals in the sense that they concentrate on them. Hence the opposite view-point is more useful when considering club sets, i.e., there are "more" limit ordinals. Club sets Although the definition can be applied to all infinite ordinals, we assume $\kappa >\omega$ is a regular cardinal for this and subsequent sections. A set $C\subseteq \kappa$ is closed unbounded or club in $\kappa$ if and only if $C$ is unbounded in $\kappa$: for every $\alpha <\kappa$ there is some $\beta \in C$ with $\beta$ occurring above $\alpha$ in the natural ordering; and $C$ is also closed: if $B\subseteq C$ is nonempty and bounded in $\kappa$ (i.e., there is some $\gamma\in \kappa$ with $\beta\leq \gamma$ for each $\beta\in B$), then $\sup(B)\in C$. Equivalently, if $\lambda < \kappa$ is a limit with $C\cap\lambda$ unbounded in $\lambda$, then $\lambda\in C$. Typical examples of club sets include the collection of limit ordinals below $\kappa$, the collection of limits of limit ordinals below $\kappa$, and also all "tails" in $\kappa$: $\{\lambda : \alpha\leq \lambda <\kappa\}$ for each $\alpha <\kappa$.
It is fairly straightforward to construct a club subset of $\kappa$: build an increasing sequence $\langle\gamma_\alpha\rangle_{\alpha < \kappa}$ of ordinals smaller than $\kappa$ by picking each successor value $\gamma_{\alpha +1}$ arbitrarily (any ordinal below $\kappa$ above $\gamma_\alpha$ will do) and, at limit stages, taking the supremum of the sequence already constructed; the range of the resulting continuous increasing sequence is club in $\kappa$ (regularity of $\kappa$ keeps the suprema below $\kappa$). It is clear that club subsets of $\kappa$ all have size $\kappa$ and their enumeration functions $f:\kappa\rightarrow\kappa$ are all continuous and increasing. The intersection of two club subsets of $\kappa$ is also club in $\kappa$. In fact, given any sequence of fewer than $\kappa$-many club subsets of $\kappa$, their intersection is also club in $\kappa$. Further, the collection of club subsets of $\kappa$ is closed under diagonal intersections of $\kappa$-many clubs, a fact used in characterizing the stationary subsets of $\kappa$. In particular, the club subsets of $\kappa$ form a filter over $\kappa$. Note that the intersection of $\kappa$-many clubs might be empty, so this filter is not an ultrafilter in general. Stationary sets A set $S\subseteq \kappa$ is stationary in $\kappa$ if $S$ intersects all club subsets of $\kappa$. As mentioned above, one intuitively thinks of the collection of stationary subsets of $\kappa$ as the ``not small" subsets of $\kappa$. Several facts about stationary sets are immediate: all club subsets of $\kappa$ are also stationary in $\kappa$; the supremum of a stationary subset of $\kappa$ is $\kappa$; the intersection of a club set with a stationary set is stationary; if $S$ is a stationary set and also the union of fewer than $\kappa$-many sets $S_\alpha$, then at least one such set is also stationary, in other words, stationary subsets of $\kappa$ cannot be partitioned into a small number of small sets. For a given regular cardinal $\kappa$, particular stationary sets Fodor's Lemma (improving upon Alexandrov-Urysohn, 1929) is the basic, fundamental result concerning the concept of stationarity.
Call a function $f:\kappa\to\kappa$ regressive if $f(\alpha) < \alpha$ for all non-zero ordinals $\alpha$ smaller than $\kappa$. Fodor's lemma reads: If $f$ is a regressive function with domain a stationary subset $S$ of $\kappa$, then there is some stationary subset $S'$ of $S$ on which $f$ is constant. Using Fodor's lemma, Solovay proved that each stationary subset of $\kappa$ can be split into two, in fact into $\kappa$-many, disjoint stationary sets. Another application of Fodor's lemma is used to prove a result concerning families of sets that are as different as possible, i.e., any two distinct sets in the family have the same intersection. The result is more popularly known as the $\Delta$-system Lemma (originally established by Marczewski): given a family of $\kappa$-many finite sets (for infinite sets one usually requires additional hypotheses such as CH), there is a subfamily of size $\kappa$ which forms a $\Delta$-system. Generalized notions References
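Since diagonal intersections were mentioned above, here is a short sketch (my own addition, not part of the original article) of how they yield Fodor's lemma:

```latex
% Sketch: suppose f is regressive on a stationary S \subseteq \kappa, but for
% every \gamma < \kappa the preimage f^{-1}(\{\gamma\}) is non-stationary.
% Choose clubs C_\gamma with C_\gamma \cap f^{-1}(\{\gamma\}) = \emptyset,
% and form the diagonal intersection
C \;=\; \triangle_{\gamma<\kappa} C_\gamma
  \;=\; \bigl\{\alpha < \kappa \;:\; \alpha \in
        \textstyle\bigcap_{\gamma<\alpha} C_\gamma \bigr\},
% which is again club. Pick a non-zero \alpha \in S \cap C. For every
% \gamma < \alpha we have \alpha \in C_\gamma, hence f(\alpha) \neq \gamma;
% so f(\alpha) \geq \alpha, contradicting regressivity. Therefore some
% preimage f^{-1}(\{\gamma\}) is stationary, i.e., f is constant on a
% stationary subset of S.
```

The same pressing-down argument is the engine behind Solovay's splitting theorem mentioned next.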
In insurance mathematics, one often models the underlying of an insurance policy with a Black Scholes model on a filtered probability space $(\Omega,\mathbb{Q},\mathcal{F},\mathbb{F}=(\mathcal{F}_{t}))$ with $\mathbb{Q}$ being the risk-neutral measure. For example, one now would like to value a pure endowment product, i.e. at a fixed time $T$ the prevailing stock price $S(T)$ is paid out if the policyholder is alive at time $T$, else there is no payout. Additionally, such products often contain some further financial guarantee, i.e. a minimal payout. But this is not essential for my questions. Therefore, one also has to model mortality. For this, one often considers $T_{x}$, the future lifespan of an $x$-year old, and sets $\mathcal{G}_{t}:=\sigma(\mathbb{1}_{\{T_{x}\leq s \}}\vert s\leq t)$, which defines the "insurance filtration" $\mathbb{G}=(\mathcal{G}_{t})$. Then one considers the enlarged filtration $\mathbb{H}=\mathbb{F}\vee\mathbb{G}$ and works on the filtered space $(\Omega,\mathbb{Q},\mathcal{F},\mathbb{H})$. The survival probability is then defined as $p_{x+t}(t,T):=\mathbb{Q}(T_{x}>T\vert \mathcal{H}_{t})$. Unfortunately, I have never found a good and formal general account of this. Many things seem to be implicitly assumed. My questions: Are there any good references for this general modeling approach? Why can the risk-neutral measure even be extended to the enlarged space and, in particular, be used to measure mortality? Or are there any special conditions needed? If we assume that mortality is independent of financial markets, do we need any of this anyway? Thanks a lot for the help.
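On the last question: under the independence assumption the valuation factorizes into a survival probability times an ordinary risk-neutral expectation, which a short Monte Carlo sketch illustrates. All parameters below, including the constant force of mortality, are my own toy assumptions, not part of the question:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy (assumed) parameters
S0, r, sigma, T, G = 100.0, 0.02, 0.2, 10.0, 100.0
mu_mort = 0.015                          # assumed constant force of mortality
p_surv = np.exp(-mu_mort * T)            # Q(T_x > T) under this toy hazard

# Risk-neutral GBM terminal values
n = 200_000
Z = rng.standard_normal(n)
S_T = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)

# Pure endowment with minimum guarantee G, assuming mortality is
# independent of the market under Q: value factorizes as
#   p_surv * E_Q[ e^{-rT} max(S_T, G) ]
payoff = np.maximum(S_T, G)
price = p_surv * np.exp(-r * T) * payoff.mean()
print(round(price, 2))
```

If mortality and the market are truly independent under $\mathbb{Q}$, this factorization is all one needs; the filtration-enlargement machinery matters precisely when that independence (or the martingale property under the enlarged filtration, i.e. the H-hypothesis) is in doubt.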
Answer a) .098 meters b) The box will not slide back down. Work Step by Step We first must find the acceleration on the ramp. We know that the two forces causing the block to slow down are the component of gravity along the incline and the force of friction. Thus, we find: $a = \frac{-F_g\sin\theta - F_f}{m} \\ a = \frac{-F_g\sin\theta - F_n \mu}{m} \\ a = \frac{-mg\sin\theta - mg\cos\theta \, \mu}{m} \\ a = -g\sin\theta - g\cos\theta \, \mu \\ a = -9.81\sin 22^{\circ}-9.81\cos 22^{\circ}(.7)=-10.04 \: \text{m/s}^2$ We can now find the change in distance: $v_f^2 = v_0^2 + 2a\Delta x \\ 0 = 1.4^2 + 2(-10.04) \Delta x \\ \Delta x = \fbox{.098 meters}$ To see if the box will slide back down, we must compare the force of friction to the component of gravity along the incline: $F_f = \mu mg \cos 22^{\circ} = .65 mg $ $F_g=mg\sin 22^{\circ}=.37mg$ Thus, we see that the friction force is greater than the gravitational component. Since static friction is generally at least as strong as kinetic friction, this means that the box will not slide back down.
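The arithmetic above can be checked with a few lines of Python (our own sketch; the slight difference from hand-rounded intermediate values does not change the final answer):

```python
import math

# Given values from the problem (angle in degrees, mu = 0.7 as used above)
g = 9.81            # m/s^2
theta = math.radians(22.0)
mu = 0.7
v0 = 1.4            # m/s

# Deceleration magnitude going up: gravity component plus kinetic friction
a = g * math.sin(theta) + g * math.cos(theta) * mu   # ~10.04 m/s^2

# Stopping distance from v_f^2 = v_0^2 - 2*a*dx with v_f = 0
dx = v0**2 / (2 * a)                                 # ~0.098 m

# The box slides back only if the gravity component exceeds max friction
slides_back = math.sin(theta) > mu * math.cos(theta)
```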
In efforts to reduce gas consumption from oil, ethanol is often added to regular gasoline. It has a high octane rating and burns more slowly than regular gas. This "gasohol" is widely used in many countries. It produces somewhat lower carbon monoxide and carbon dioxide emissions, but does increase air pollution from other materials. Molar Heat of Combustion Many chemical reactions are combustion reactions. It is often important to know the energy produced in such a reaction so we can determine which fuel might be the most efficient for a given purpose. The molar heat of combustion \(\left( He \right)\) is the heat released when one mole of a substance is completely burned. Typical combustion reactions involve the reaction of a carbon-containing material with oxygen to form carbon dioxide and water as products. If methane is burned in air, we have: \[\ce{CH_4} + 2 \ce{O_2} \rightarrow \ce{CO_2} + 2 \ce{H_2O} \: \: \: \: \: He = 890 \: \text{kJ/mol}\] In this case, two moles of oxygen react with one mole of methane to form one mole of carbon dioxide and two moles of water. It should be noted that inorganic substances can also undergo a form of combustion reaction: \[2 \ce{Mg} + \ce{O_2} \rightarrow 2 \ce{MgO}\] In this case there is no water and no carbon dioxide formed. These reactions are generally not what we would be talking about when we discuss combustion reactions. Example 17.14.1 Heats of combustion are usually determined by burning a known amount of the material in a bomb calorimeter with an excess of oxygen. By measuring the temperature change, the heat of combustion can be determined. A 1.55 gram sample of ethanol is burned and produces a temperature increase of \(55^\text{o} \text{C}\) in 200 grams of water. Calculate the molar heat of combustion. Solution: Step 1: List the known quantities and plan the problem.
Known Mass of ethanol \(= 1.55 \: \text{g}\) Molar mass of ethanol \(= 46.1 \: \text{g/mol}\) Mass of water \(= 200 \: \text{g}\) \(c_p\) water \(= 4.184 \: \text{J/g}^\text{o} \text{C}\) Temperature increase \(= 55^\text{o} \text{C}\) Unknown \(He\) of ethanol Step 2: Solve. Amount of ethanol used: \[\frac{1.55 \: \text{g}}{46.1 \: \text{g/mol}} = 0.0336 \: \text{mol}\] Energy generated: \[4.184 \: \text{J/g}^\text{o} \text{C} \times 200 \: \text{g} \times 55^\text{o} \text{C} = 46024 \: \text{J} = 46.024 \: \text{kJ}\] Molar heat of combustion: \[\frac{46.024 \: \text{kJ}}{0.0336 \: \text{mol}} = 1369 \: \text{kJ/mol}\] Step 3: Think about your result: The burning of ethanol produces a significant amount of heat. Summary The molar heat of combustion is defined. Calculations using the molar heat of combustion are described. Contributors CK-12 Foundation by Sharon Bewick, Richard Parsons, Therese Forsythe, Shonna Robinson, and Jean Dupon.
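The three steps of the worked example — heat absorbed by the water, moles of ethanol, heat per mole — can be reproduced in a few lines (our own sketch, using the same input values):

```python
# Reproduce the worked example: q = c_p * m * dT, n = m / M, He = q / n
mass_ethanol = 1.55      # g
molar_mass = 46.1        # g/mol (ethanol)
mass_water = 200.0       # g
cp_water = 4.184         # J/(g C)
delta_t = 55.0           # C

moles = mass_ethanol / molar_mass                  # ~0.0336 mol
heat_kj = cp_water * mass_water * delta_t / 1000   # ~46.0 kJ absorbed by water
he = heat_kj / moles                               # ~1369 kJ/mol
```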
Seminar Parent Program: Location: MSRI: Simons Auditorium Let $F=\{g_t\}$ be a one-parameter diagonal subgroup of $SL_n(\mathbb R)$. We assume $F$ has no nonzero invariant vectors in $\mathbb R^n$. Let $x\in X$, $\varphi\in C_c(X)$, and let $\mu$ be the probability Haar measure on $X$. For a certain proper subgroup $U$ of the unstable horospherical subgroup of $g_1$, we show that for almost every $u\in U$ \[ \frac{1}{T}\int_0^T\varphi({g_tux})dt \to \int_X\varphi d\mu. \] If $\varphi$ is moreover smooth, we obtain a rate of convergence. The rate is ineffective due to the use of the Borel-Cantelli lemma.
I am reading Probabilistic counting algorithms for database applications. In the introduction, an algorithm for computing an intersection is specified: sort A, then search for each element of B in A and retain it if it appears in A. It is claimed that if $a, b$ are the numbers of elements in A and B, and $\alpha, \beta$ are the numbers of distinct elements in A and B, then the complexity of this algorithm is $O(a\log\alpha + b\log\alpha)$. My question is: why does the cost of sorting A depend only on the number of distinct elements? Is there some kind of algorithm I am not aware of? If so, why could the same algorithm not be used for the second strategy? The second strategy is: sort A and B, then use a merge-like operation to discard duplicates. For this algorithm the complexity is $O(a\log a + b\log b + a + b)$, which makes sense to me.
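For concreteness, here is a small sketch of the first strategy (our own illustration). Note that a plain comparison sort of A costs $O(a \log a)$; one plausible reading of the $O(a\log\alpha)$ bound is a structure whose per-element cost depends only on the number of distinct keys — e.g. inserting the $a$ elements into a balanced search tree that stores each distinct value once (with a count) costs $O(a \log \alpha)$, since the tree never holds more than $\alpha$ nodes. That reading is an assumption on our part, which is exactly what the question is probing.

```python
import bisect

def intersect(a, b):
    """First strategy: sort A, then binary-search each element of B in A."""
    a_sorted = sorted(a)                       # O(a log a) with a comparison sort
    result = set()
    for x in b:                                # b binary searches, O(b log a)
        i = bisect.bisect_left(a_sorted, x)
        if i < len(a_sorted) and a_sorted[i] == x:
            result.add(x)
    return result
```

Example: `intersect([3, 1, 4, 1, 5], [5, 9, 2, 1])` returns `{1, 5}`.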
Contents Before finally looking at the Monte Carlo method itself (next lesson), we need to introduce the important concept of estimator. But let's first start with a quick refresher on the things we have learned so far. A Quick Review In the last chapters we introduced the concept of population parameter. We can use a simple average to compute the mean of a population. This mean is denoted \(\mu\). However, when for practical reasons it is impossible to measure this mean directly, we can resort to sampling to estimate it. The idea is to make a series of observations, which take the form of a series of random variables noted \(X_i\), and average their values. This is called a sample mean:$$\bar X = \dfrac{\sum_{i=1}^n X_i}{n}.$$ The variance of the population and the variance of the sample can be computed using the equations:$$\sigma^2 = { {\sum_{i=1}^N {(x_i - \mu)^2}}\over N} = {{\sum_{i=1}^N x_i^2}\over N} - \mu^2,$$ and$$S_n^2 = { {\sum_{i=1}^n (x_i - \bar X)^2 } \over n} = {{\sum_{i=1}^n x_i^2}\over n} - \bar X^2,$$ respectively. We will speak again about the variance of the sample mean in this chapter. Because the sample mean is a random variable itself (it is an average of random variables and hence random itself), we can measure its mean and its variance using the same method as the one we used for the population. The mean of the sample mean is just a simple average of all the sample means. As for the variance of the sample mean, we found and proved in the last chapter that it relates to the variance of the population through the equation:$$\bar \sigma_n^2 = { \sigma^2 \over n },$$ where \(\sigma^2 \) is the population variance and \(n\) is the sample size.
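The relation \(\bar \sigma_n^2 = \sigma^2 / n\) is easy to check empirically. Here is a quick sketch (our own illustration, not from the lesson) that builds a finite population, draws many samples of size \(n\) with replacement, and compares the observed variance of the sample means to \(\sigma^2 / n\):

```python
import random

def mean(xs):
    return sum(xs) / len(xs)

rng = random.Random(0)

# A finite population drawn uniformly from [0, 1): sigma^2 is about 1/12
population = [rng.uniform(0.0, 1.0) for _ in range(100_000)]
mu = mean(population)
sigma2 = sum((x - mu) ** 2 for x in population) / len(population)

# Draw many samples of size n (with replacement) and record their means
n, trials = 100, 2000
means = [mean([rng.choice(population) for _ in range(n)]) for _ in range(trials)]
mean_of_means = mean(means)
var_of_means = sum((m - mean_of_means) ** 2 for m in means) / trials

# var_of_means should be close to sigma2 / n
```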
We also showed in the last chapter that, by the Law of Large Numbers, the sample mean \(\bar X \) approaches the population mean \(\mu\) in probability (and in fact almost surely):$$\bar X \xrightarrow{p} \mu \quad \text{ for } n \rightarrow \infty.$$ The superscript `p` over the right arrow means "converges in probability". Finally, we also learned a few things about the distribution of the sample mean itself. We know that the distribution of a statistic is called a sampling distribution, that its expected value (the expected value of the distribution of sample means) is the population mean \(\mu\), and that \(\bar X\) converges in distribution to a normal distribution (of mean \(\mu\) and variance \(\sigma^2 / n\)). We also know its rate of convergence is proportional to \(1 / \sqrt{n}\). To summarize, we have: the population mean \(\color{red}{\mu}\) and variance \(\color{red}{\sigma^2}\), the sample mean \(\color{red}{\bar X_n}\) and its variance \(\color{red}{S_n^2}\), the expected value of the distribution of means \(\color{red}{\mu_{\bar X}}\) and the variance of the distribution of means \(\color{red}{ \sigma_{\bar X}^2 }\). Estimate and Estimator The concept of estimator is simple: it is, in a way, just a generalization of the concept of sample mean. Obviously, in statistics the terminology used when it comes to estimators differs from what we have been using so far. A parameter of a population will now be given the Greek letter \(\theta\) (theta) instead of \(\mu\). As usual, our goal is to use some statistical method to estimate the value of \(\theta \), which is unknown (for example, \(\theta\) can be the average height of the adult population living in the Bahamas). What we used to do before, in the previous chapter, was to estimate this value by sampling the population and averaging the results. The result is called the sample mean and can be written as:$$\bar X = \color{red}{\dfrac{1}{n}} (\color{green}{X_1} \color{red}{+} ...
\color{red}{+} \color{green}{X_n}) = {\sum_{i=1}^n \dfrac{X_i}{n}},$$ where \(n\) is the sample size and the \(X_i\) are random variables or, if you prefer, some observable data. What needs to be noticed here is that the sample mean is just some sort of function (a sum and an average) of a collection of observable data. In other words, if you look at the equation of the sample mean above, the terms in green are the observable data, and the terms and mathematical operators in red manipulate this data to produce a result which is an estimate of the population's parameter \(\theta \). All these terms and operators form a function of which the data is an argument. We can formalize this idea by writing:$$\theta \approx \delta(x_1, ..., x_n),$$ where the Greek letter \(\delta\) (lower case delta) denotes a (real-valued) function taking as argument a collection of observable data \(x_1, ... x_n\). This function \(\delta\) is what we call an estimator of the parameter \(\theta\), and the result of \(\delta(x_1, ..., x_n)\) is called an estimate of \(\theta\). The sample mean is just an example of such an estimator, and we will learn in future lessons that other estimators exist. It is important to realize that an estimator is a function of data, and consequently: because an estimator \(\delta(X_1, . . . , X_n)\) is a function of the random variables \(X_1, . . . , X_n\), the estimator itself is a random variable (which, by the way, is what we call a statistic). We have been repeating this many times during the course of this lesson, but we have now formalized the idea. Try to see the difference between an estimator and an estimate (even though it is subtle): an estimate is a specific value \(\delta(x_1,...,x_n)\) of the estimator, which we can determine by using observed values \(x_1, ..., x_n\). The estimator is a function \(\delta(X)\) of the random vector \(X\), while again, an estimate is just a specific value \(\delta(x)\).
In the chapter on expected value, we mentioned that random variables can be manipulated algebraically pointwise. This is important because it justifies the fact that an estimator can be considered a random variable itself: the result of adding together some random variables is a random variable. It also suggests that there is no restriction on the type of function you can use as an estimator and, as mentioned before, different estimators will be studied in the next lessons. The "attractiveness" of an estimator (compared to others) depends on its properties, such as its mean square error (MSE, which we briefly talked about in the previous chapter), its consistency and its asymptotic distribution. We briefly touched on these concepts in the previous chapter. We will review them in detail in a future lesson on estimators (the topic is important enough in rendering to have its own lesson). An additional property we haven't talked about yet which is important is unbiasedness; we will look at this concept now. Unbiased and Biased Estimator The concept of biasedness and unbiasedness is important in rendering. If you are interested in computer graphics and rendering, you are likely to have come across articles or posts in which the authors talked about biased or unbiased path tracing. The question of what that means is also very often asked on forums. The term is not actually related to the field of rendering but to the field of statistics. The term "unbiased path tracing" was coined to designate a certain type of rendering algorithm based on unbiased statistical methods. First we will explain what this means, and then, for fun and to get back to the field of rendering, we will look at the definition of unbiased rendering given by Wikipedia and show you that now, with all the information you have been given in this lesson, you can understand every single word of this definition. The concept is in fact pretty simple.
Earlier on in this chapter, we introduced the concept of estimator \(\delta(X)\). The sample mean is a form of estimator, but in the general sense, an estimator is a function operating on observable data and returning an estimate of the population's parameter value \(\theta\) (we will be using \(\theta\) in this chapter to denote the unknown parameter we want to estimate). In the chapter on expected value, we showed that the sample mean converges in probability to the population mean as the sample size approaches infinity:$$\bar X_n \xrightarrow{p} \theta \quad \text{ for } n \rightarrow \infty.$$ We could also express this relationship in terms of the expected value of the sample mean:$$E[\bar X_n] - \theta = 0.$$ This is an important result, and since the method by which we compute a sample mean is an estimator itself, we can substitute \(\delta(X)\) for \(\bar X_n\):$$E[\delta_{unbiased}(X)] - \theta = 0.$$ It says that the difference between the expected value of the estimator and the population parameter is 0. But while this is true in the particular case where the estimator is a simple average of random variables, you can perfectly well design an estimator which has some interesting properties but whose expected value is different from the parameter \(\theta\). In other words, such an estimator would produce the following result:$$E[\delta_{biased}(X)] - \theta \neq 0.$$ The difference between the expected value of the estimator and the parameter is what we call bias. In other words, we can write the above relationship as:$$E[\delta_{biased}(X)] - \theta = \text{ bias }.$$ And of course you have already guessed that if the bias is 0, then we say that the estimator is unbiased, and logically, when this is not true (when the bias is different from 0), we say that the estimator is a biased estimator.
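To make the distinction concrete, here is a small simulation (our own illustration, not part of the original lesson) of a classic pair of estimators of the population variance: dividing the sum of squared deviations by \(n\) gives a biased estimator, dividing by \(n-1\) an unbiased one.

```python
import random

def var_biased(xs):
    """Divide by n: E[estimate] = (n-1)/n * sigma^2, so the bias is -sigma^2/n."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def var_unbiased(xs):
    """Divide by n-1: E[estimate] = sigma^2, no bias."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

rng = random.Random(42)
n, trials = 5, 200_000   # small samples from a standard normal (sigma^2 = 1)

b = sum(var_biased([rng.gauss(0, 1) for _ in range(n)]) for _ in range(trials)) / trials
u = sum(var_unbiased([rng.gauss(0, 1) for _ in range(n)]) for _ in range(trials)) / trials

# With n = 5: b averages close to (n-1)/n = 0.8 (bias of about -0.2),
# while u averages close to the true value 1.0.
```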
In fact, to be perfectly complete, we should add to this definition that to be an unbiased estimator, the estimator has to be unbiased for any possible value that \(\theta\) can take on. For a sample from a normal distribution with unknown mean \(\boldsymbol{ \theta }\), \(\boldsymbol{ \bar X_n } \) is an unbiased estimator of \(\boldsymbol{ \theta}\) because \(\boldsymbol{ E[\bar X_n] = \theta } \) for \(\boldsymbol{ -\infty < \theta < \infty }\). You may ask: wouldn't it be a significant problem for an estimator to have an expected value different from the parameter we try to estimate (in other words, you may think of the result as being wrong)? In the general case this can indeed be considered undesirable behaviour, but consider an estimator whose rate of convergence is much better than that of other estimators. Even if its expected value is slightly off from the real value of \(\theta\), as long as it is close enough to be considered a valid estimate, the fact that you get an acceptable estimation much "quicker" than with other estimators, even though biased, can be very advantageous (imagine a system in which speed is more important than visual accuracy or fidelity). Whether you consider the result you get from using that "biased" estimator acceptable compared to the result you would get from using an "unbiased" estimator is completely left to your own appreciation. Keep in mind that one of the main goals of computer graphics is to be able to generate high-quality antialiased images (we will see what this means soon) with the smallest possible number of samples. For this reason, estimators having a fast rate of convergence are often preferred to others even if they produce biased results. Properties of Estimators The unbiasedness (or biasedness) of an estimator is a property we have already talked about. Variance is another property: it relates to the estimator's rate of convergence (how quickly you get to the true mean as you increase the sample size).
For unbiased estimators this variance is measured as \(E[(\bar X_n - \theta)^2]\) (which you can also write as \(E[(\delta(X) - \theta)^2]\)), which, as briefly mentioned in the previous chapter, is also called the estimator's mean square error (or M.S.E.). As a general rule, the smaller the estimator's variance, the better (it converges faster to the result). Consistency is the last property we will review in this chapter. You will see this term being used often. An estimator is said to be consistent if the probability of the estimator being close to the population parameter \(\theta\) increases as the sample size \(n\) increases. We know from the Law of Large Numbers that the sample mean \(\bar X\) converges in probability to \(\theta \) as \(n \rightarrow \infty\), hence consistency is verified in this case. As a general rule, a good estimator is one that is both unbiased and has the lowest possible variance or M.S.E. Often, though, biased estimators have a variance lower than that of unbiased estimators (as we shall see in our study of various estimators). As a final exercise, take the time to read the following Wikipedia definition of an unbiased renderer. To your great satisfaction, you should be pretty amazed by the fact that every single idea this definition refers to now fully makes sense to you. We broke it down to insert comments: You know that the goal of a renderer is to compute a quantity which is called radiance. We talked about radiance extensively in the lesson Introduction to Shading. The method this line refers to is actually the estimator that we will be using to evaluate this radiance (by means of sampling), and if this estimator is unbiased then we can say that our renderer is unbiased as well. Because the estimator is unbiased, we know that if the sample size is high enough it will eventually converge to the true radiance. Thus images produced by unbiased renderers can be used as reference images (i.e.
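As a concrete illustration of the trade-off just described (our own example, assuming normally distributed samples): for normal data, dividing the sum of squared deviations by \(n+1\) instead of \(n-1\) yields a variance estimator that is biased yet has a strictly lower M.S.E. than the unbiased one.

```python
import random

rng = random.Random(7)
n, trials = 5, 200_000   # small samples from a standard normal (true variance = 1)

def mse(divisor):
    """Monte Carlo estimate of E[(estimate - true_variance)^2] for a given divisor."""
    total = 0.0
    for _ in range(trials):
        xs = [rng.gauss(0, 1) for _ in range(n)]
        m = sum(xs) / n
        est = sum((x - m) ** 2 for x in xs) / divisor
        total += (est - 1.0) ** 2
    return total / trials

mse_unbiased = mse(n - 1)   # classical unbiased estimator: MSE = 2/(n-1) = 0.5
mse_shrunk = mse(n + 1)     # biased, but for normal data MSE = 2/(n+1), about 0.33
```

The biased estimator "wins" on M.S.E. precisely because its lower variance more than compensates for the squared bias it introduces.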
compared to images produced by biased renderers, for instance). We proved in the last chapters that the sample mean \(\bar X_n\) gets closer to the parameter \(\theta\) for \(n \rightarrow \infty\). We explained in the last chapter that the variance (or M.S.E.) of the distribution of means \(\sigma_{\bar X}^2 \) is equal to the population's variance divided by the sample size \(n\). Since the standard deviation is just the square root of the variance, we also showed that we need four times more samples to halve the standard deviation (which you can see as a measure of the error). We also explained that unbiased estimators are generally preferred to biased ones; however, biased estimators can converge more quickly to \(\theta\), making them more attractive than unbiased estimators, particularly when speed is preferred over accuracy (or when accuracy is not essential). The property of being unbiased is not as important as the property of being consistent in the evaluation of an estimator. Convergence is important regardless of whether or not the estimate is biased (assuming the number of samples is large enough to produce an image with very small variance, we at least know that the only difference between the unbiased and the biased image is due to bias and not to any other error that would have been introduced by the biased estimator). In other words, bias is generally acceptable, error is not. To see how all these concepts are used in practice, check the lesson entitled "Introduction to Light Transport". Wrapping Up! Congratulations if you have read all the chapters so far! This concludes (momentarily) our journey in the world of statistics and probability. We will regularly come back to issues related to statistics in the next lessons. No matter how painful and difficult you found that journey, we promise that it will pay off! We are now finally ready to look at the Monte Carlo method itself, and considering everything you know about statistics, this should present no difficulty at all.
Let ($A_n : n \in \mathbb{N} $) be a sequence of events in some probability space $( \Omega, \mathcal{F}, \mathbb{P} )$. Set $A = \{ \omega \in \Omega : \omega \in A_n \text{ infinitely often} \} $ , $B = \{ \omega \in \Omega : \omega \in A_n \text{ for all sufficiently large } n \} $ Show that $ B = \cup_{n=1}^{\infty} \cap_{k=n}^{\infty} A_k $ I tried getting my head round what this question means, or is even asking, but this is too wacky...
For which non-constant rational functions $f(x)$ in $\mathbb{Q}(x)$ is there $\alpha$, algebraic over $\mathbb{Q}$, such that $\alpha$ and $f(\alpha) \neq \alpha$ are algebraic conjugates? More generally, can one describe the set of such $\alpha$ (empty/non-empty, finite/infinite etc.) if one is given $f$? Examples: $f(x)=-x$: These are precisely the square roots of algebraic numbers $\beta$ such that there is no square root of $\beta$ in $\mathbb{Q}(\beta)$. There are infinitely many $\alpha$ and even infinitely many of degree $2$. $f(x)=x^2$: These are precisely the roots of unity of odd order, since $H(\alpha)=H(\alpha^2)=H(\alpha)^2$ implies $H(\alpha)=1$, so $\alpha$ a root of unity. Here $H(\alpha)$ is the absolute multiplicative Weil height of $\alpha$. There are infinitely many $\alpha$, but only finitely many of degree $\leq D$ for any $D$. $f(x)=x+1$: There is no $\alpha$. If there was and $P(x)$ was its minimal polynomial, then $P(x+1)$ would be another irreducible polynomial, vanishing at $\alpha$, with the same leading coefficient and hence $P(x+1)=P(x)$. Looking at the coefficients of the second highest power of $x$ now leads to a contradiction. Analogously for $f(x)=x+a$ if $a$ is any non-zero rational number. So the existence of such an $\alpha$ and the set of all possible $\alpha$ seem to depend rather intricately on $f(x)$, which seems interesting to me. As I found no discussion of this question in the literature, I post it here. UPDATE: Firstly thanks to all who have contributed so far! As Eric Wofsey pointed out, any solution $\alpha$ will satisfy $f^n(\alpha)=\alpha$ for some $n>1$, where $f^n$ is the $n$-th iterate of $f$. So one should consider solutions of the equation $f^n(x)-x=0$ or $f^p(x)-x=0$ for $p$ prime. If the degree of $f$ is at least 2, one can always find irrational such solutions $\alpha$ with $f(\alpha) \neq \alpha$ by the answer of Joe Silverman. 
However, for his proof to work, we'd need to know that $f^k(\alpha)$ and $\alpha$ are conjugate for some $k$ with $0 < k <p$. I'm not enough of an expert to follow through with his hints for proving this, but if someone does, I'd be very happy about an answer! If the degree of $f$ is 1, then $f$ is a Möbius transformation and all $f^n$ will have the same fixed points as $f$ (so there's no solution) unless $f$ is of finite order. In that case, if $f(x) \neq x$, the order is 2, 3, 4 or 6 (see http://home.wlu.edu/~dresdeng/papers/nine.pdf). By the same reference, in the latter three cases, $f$ is conjugate to $\frac{-1}{x+1}$, $\frac{x-1}{x+1}$ or $\frac{2x-1}{x+1}$, so it suffices to consider these $f$, which give rise to the minimal polynomials $x^3-nx^2-(n+3)x-1$ (closely related to the polynomial in GNiklasch's answer), $x^4+nx^3-6x^2-nx+1$ and $x^6-2nx^5+5(n-3)x^4+20x^3-5nx^2+2(n-3)x+1$, if my calculations are correct. If the order is 2, the map is of the form $\frac{-x+a}{bx+1}$ or $\frac{a}{x}$, which leads to $x^2+(bn-a)x+n$ or $x^2+nx+a$ respectively. So this case is somewhat degenerate, which explains the unusual behavior of $f(x)=-x$ and $f(x)=x+1$ above.
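As a sanity check on the cubic family above (these are Shanks' "simplest cubic" polynomials): \(f(x) = \frac{-1}{x+1}\) permutes the roots of \(P_n(x) = x^3 - nx^2 - (n+3)x - 1\) because of the polynomial identity \((x+1)^3\, P_n(-1/(x+1)) = -P_n(x)\). The snippet below verifies it numerically; since both sides have degree 3 in \(x\) and degree 1 in \(n\), checking at enough sample points pins down the identity.

```python
def P(x, n):
    """Shanks' 'simplest cubic' x^3 - n x^2 - (n+3) x - 1."""
    return x**3 - n * x**2 - (n + 3) * x - 1

# Verify (x+1)^3 * P(-1/(x+1)) == -P(x) at a grid of sample points;
# the identity implies f(x) = -1/(x+1) maps roots of P to roots of P.
for n in range(-3, 4):
    for x in [0.3, 1.7, -2.6, 4.1]:
        assert abs((x + 1) ** 3 * P(-1 / (x + 1), n) + P(x, n)) < 1e-8
```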