The integral representation converges for $\operatorname{Re} s > 0$. As an improper integral, convergence is indeed uniform for (real) $s \in [\alpha,\beta]$. However, as you suspect, convergence is not uniform for $s \in (0,\infty)$. Problems arise from singular behavior both at $x = 0$ when $s < 1$ and as $x \to \infty$ when $s > 1$. To show non-uniform convergence, consider the splitting $$\Gamma(s) = \Phi(s) + \Psi(s),$$ where $$\Phi(s) = \int_0^1 x^{s-1} e^{-x} \, dx, \qquad \Psi(s) = \int_1^\infty x^{s-1} e^{-x} \, dx.$$ We can show that $\Psi(s)$ fails to converge uniformly for $s \in [\alpha, \infty).$ Note that for any sequence $(s_n)$ with $s_n \to \infty$ and $s_n > 1 + n/\log n,$ we have $$\left| \int_n^\infty x^{s_n-1} e^{-x} \, dx\right| > n^{s_n-1}\int_n^\infty e^{-x} \, dx = n^{s_n-1}e^{-n} > 1.$$ Thus, $$\lim_{n \to \infty}\int_n^\infty x^{s_n-1} e^{-x} \, dx \neq 0,$$ and convergence is not uniform. Similarly, we can show that $\Phi(s)$ fails to converge uniformly for $s \in (0, \beta).$ Note that for $s_n < 1$, since $x^{s_n-1}$ is decreasing and $e^{-x} \ge e^{-1}$ on $[1/2n, 1/n]$, $$\left| \int_{1/2n}^{1/n} x^{s_n-1} e^{-x} \, dx\right| > e^{-1} \left(\frac{1}{n}\right)^{s_n-1}\frac{1}{2n} = \frac{1}{2en^{s_n}}.$$ Now take $s_n = 1/n$. In this case $s_n \to 0$ and $n^{s_n} = n^{1/n} \to 1$. For all $n$ sufficiently large, we have $n^{1/n} < 2$ and $$\left| \int_{1/2n}^{1/n} x^{s_n-1} e^{-x} \, dx\right| > \frac{1}{4e}.$$
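Both estimates are easy to probe numerically. The sketch below (mine, not part of the argument: the trapezoid helper, the cutoff $n + 80$ for the tail, and the choice $s_n = 1 + n/\log n + 0.1$ are illustrative assumptions) checks that the tail integral stays above $1$ and the head integral stays above $1/(4e)$:

```python
import math

def trapezoid(f, a, b, steps=20000):
    # Simple composite trapezoid rule over [a, b].
    h = (b - a) / steps
    total = 0.5 * (f(a) + f(b))
    for k in range(1, steps):
        total += f(a + k * h)
    return total * h

def tail(n):
    # Tail integral from n to (effectively) infinity at
    # s_n = 1 + n/log n + 0.1, slightly above the threshold in the text.
    s = 1 + n / math.log(n) + 0.1
    return trapezoid(lambda x: x ** (s - 1) * math.exp(-x), n, n + 80.0)

def head(n):
    # Integral over [1/(2n), 1/n] at s_n = 1/n.
    s = 1.0 / n
    return trapezoid(lambda x: x ** (s - 1) * math.exp(-x),
                     1.0 / (2 * n), 1.0 / n)

for n in (5, 10, 20):
    # Both comparisons should print True for each n.
    print(n, tail(n) > 1.0, head(n) > 1 / (4 * math.e))
```

The tail values in fact grow without bound along such a sequence, which is what rules out uniform convergence on $[\alpha, \infty)$.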
Hi, can someone provide me some self-study reading material for condensed matter theory? I've done QFT previously, for which I could happily read Peskin supplemented with David Tong. Can you please suggest some references along those lines? Thanks @skullpatrol The second one was in my MSc and covered considerably less than my first and (I felt) didn't do it in any particularly great way, so distinctly average. The third was pretty decent - I liked the way he did things and was essentially a more mathematically detailed version of the first :) 2. A weird particle or state that is made of a superposition of a torus region with clockwise momentum and anticlockwise momentum, resulting in one that has no momentum along the major circumference of the torus but still nonzero momentum in directions that are not pointing along the torus Same thought as you, however I think the major challenge of such a simulator is the computational cost. GR calculations, with their highly nonlinear nature, might be more costly than a computation of a protein. However I can see some ways of approaching it. Recall how Slereah was building some kind of spacetime database; that could be the first step. Next, one might look to machine learning techniques to help with the simulation by using the classifications of spacetimes, as machines are known to perform very well on sign problems, as a recent paper has shown. Since the GR equations are ultimately a system of 10 nonlinear PDEs, it might be possible that the solution strategy has some relation to the class of spacetime under consideration, which might help heavily reduce the parameters needed to simulate them I just mean this: the EFE is a tensor equation relating a set of symmetric 4 × 4 tensors. Each tensor has 10 independent components.
The four Bianchi identities reduce the number of independent equations from 10 to 6, leaving the metric with four gauge-fixing degrees of freedom, which correspond to the freedom to choose a coordinate system. @ooolb Even if that is really possible (I can always talk about things from a non-joking perspective), the issue is that 1) unlike other people, I cannot incubate my dreams for a certain topic due to Mechanism 1 (conscious desires have a reduced probability of appearing in dreams), and 2) in 6 years, my dreams have yet to show any sign of revisiting the exact same idea, and there are no known instances of either sequel dreams or recurring dreams @0celo7 I felt this aspect can be helped by machine learning. You can train a neural network on some PDEs of a known class with some known constraints, and let it figure out the best solution for some new PDE after, say, training it on 1000 different PDEs Actually that makes me wonder: is the space of all coordinate choices larger than that of all possible moves of Go? enumaris: From what I understood from the dream, the warp drive shown here may be some variation of the Alcubierre metric with a global topology that has 4 holes in it, whereas the original Alcubierre drive, if I recall, doesn't have holes orbit stabilizer: h bar is my home chat, because this is the first SE chat I joined. The maths chat is the 2nd one I joined, followed by periodic table, biosphere, factory floor and many others Btw, since gravity is nonlinear, do we expect that a region where spacetime is frame-dragged in the clockwise direction, superimposed on a spacetime that is frame-dragged in the anticlockwise direction, will result in a spacetime with no frame drag? (one possible physical scenario where I can envision this occurring is when two massive rotating objects with opposite angular velocities are on course to merge) Well, I'm a beginner in the study of General Relativity, ok?
My knowledge of the subject is based on books like Schutz, Hartle, Carroll and introductory papers. My knowledge of quantum mechanics is still poor. So, what I meant by "gravitational double-slit experiment" is: is there a gravitational analogue of the double-slit experiment for gravitational waves? @JackClerk the double-slit experiment is just interference of two coherent sources, where we get the two sources from a single light beam using the two slits. But gravitational waves interact so weakly with matter that it's hard to see how we could screen a gravitational wave to get two coherent GW sources. But if we could figure out a way to do it then yes, GWs would interfere just like light waves. Thank you @Secret and @JohnRennie. But to conclude the discussion, I want to put a "silly picture" here: imagine a huge double-slit plate in space close to a strong source of gravitational waves. Then, as with water waves and light, would we see the pattern? So, if the source (like a black hole binary) is sufficiently far away, then in the regions of destructive interference spacetime would have a flat geometry, and if we put a spherical object in this region the metric will become Schwarzschild-like. Pardon, I just spent some naive-philosophy time here with these discussions. The situation was even more dire for Calculus and I managed! This is a neat strategy I have found - revision becomes more bearable when I have The h Bar open on the side. In all honesty, I actually prefer exam season! At all other times - as I have observed in this semester, at least - there is nothing exciting to do. This system of tortuous panic, followed by a reward, is obviously very satisfying.
My opinion is that I need you, Kaumudi, to decrease the probability of h bar having software-infrastructure conversations, which confuse me like hell and are why I took refuge in the maths chat a few weeks ago (Not that I have questions to ask or anything; like I said, it is a little relieving to be with friends while I am panicked. I think it is possible to gauge how much of a social recluse I am from this, because I spend some of my free time hanging out with you lot, even though I am literally inside a hostel teeming with hundreds of my peers) that's true. though back in high school, regardless of the language, our teacher taught us to always indent your code to allow easy reading and troubleshooting. We were also taught the 4-space indentation convention @JohnRennie I wish I could just tab because I am also lazy, but sometimes tab inserts 4 spaces while other times it inserts 5-6 spaces, thus screwing up a block of if-then conditions in my code, which is why I had no choice I currently automate almost everything from job submission to data extraction, and later on, with the help of the machine learning group in my uni, we might be able to automate a GUI library search thingy I can do all tasks related to my work without leaving the text editor (of course, that text editor is emacs). The only inconvenience is that some websites don't render in an optimal way (but most of the work-related ones do) Hi to all. Does anyone know where I could write MATLAB code online (for free)? Apparently another one of my institution's great inspirations is to have a MATLAB-oriented computational physics course without having MATLAB on the university's PCs. Thanks. @Kaumudi.H Hacky way: the 1st thing is that $\psi\left(x, y, z, t\right) = \psi\left(x, y, t\right)$, so there is no propagation in the $z$-direction. Now, in '$1$ unit' of time, it travels $\frac{\sqrt{3}}{2}$ units in the $y$-direction and $\frac{1}{2}$ units in the $x$-direction.
Use this to form a triangle and you'll get the answer with simple trig :) @Kaumudi.H Ah, it was okayish. It was mostly memory-based. Each small question was worth 10-15 marks. No idea what they expect me to write for questions like "Describe acoustic and optic phonons" for 15 marks!! I only wrote two small paragraphs...meh. I don't like this subject much :P (physical electronics). Hope to do better in the upcoming tests so that there isn't a huge effect on the GPA. @Blue Ok, thanks. I found a way by connecting to the university's servers (the program isn't installed on the PCs in the computer room, but by connecting to the university's server - which means running another environment remotely - I found an older version of MATLAB). But thanks again. @user685252 No; I am saying that it has no bearing on how good you actually are at the subject - it has no bearing on how good you are at applying knowledge; it doesn't test problem-solving skills; it doesn't take into account that, if I'm sitting in the office having forgotten the difference between different types of matrix decomposition or something, I can just search the internet (or a textbook), so it doesn't say how good someone is at research in that subject; it doesn't test how good you are at deriving anything - someone can write down a definition without any understanding, while someone who can derive it, but has forgotten it, probably won't have time in an exam situation. In short, testing memory is not the same as testing understanding. If you really want to test someone's understanding, give them a few problems in that area that they've never seen before and give them a reasonable amount of time to do it, with access to textbooks etc.
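The triangle trick in the exchange above can be sanity-checked numerically. A minimal sketch (the variable names are mine): with displacements $\frac{1}{2}$ along $x$ and $\frac{\sqrt{3}}{2}$ along $y$ per unit time, the propagation direction makes a $60°$ angle with the $x$-axis:

```python
import math

# Displacements per unit time, as given in the hint above.
vx, vy = 0.5, math.sqrt(3) / 2
# Angle of the propagation direction, measured from the x-axis.
angle = math.degrees(math.atan2(vy, vx))
print(round(angle, 6))  # 60.0
```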
From the “Simple English Wikipedia” 1: The Lorentz Factor is the name of the factor by which time, length, and “relativistic mass” change for an object while that object is moving, and is often written γ (gamma). This number is determined by the object’s speed in the following way: $\gamma = \frac{1}{\sqrt{1 - \frac{v^2}{c^2}}}$ where $v$ is the speed of the object and $c$ is the speed of light (expressed in the same units as your speed). The quantity $(v/c)$ is often labeled β (beta), and so the above equation can be rewritten: $\gamma = \frac{1}{\sqrt{1 - \beta^2}}$ Let’s examine the Lorentz equation to see what it is actually describing. In the Reciprocal System, the speed of light, c, is unity (1.0) in natural units of space and time—one unit of space per one unit of time. Conventional science uses “man-made” units that are derived by some kind of consensus. For example, the meter (meaning “measure”) was defined as one ten-millionth of the distance between the North Pole and the Equator. For the most part, conventional units are arbitrary. However, the Reciprocal System’s natural units are a consequence of the structure of nature, inherent in everything. Starting with the velocity component, the factor $v/c$ is simply normalization to unity, much like converting a range of values to percentages. Since the value of c is 1.0 in the Reciprocal System, the velocity in natural units is already normalized and this can just be reduced to $v$, making the concept of β unnecessary, as β represents the same consequence for arbitrary units. We now have a normalized system of $1 - v^2$. Knowing that unity is the speed of light, this part of the equation is actually saying: $c - v^2$ (in natural units). Because $c = 1$, $c^n = 1$ and $n$ can have any value, so this part of the equation is actually $c^n - v^2$.
But when the square root function is considered, it becomes apparent that $n = 2$, and this Lorentz Factor is nothing more than the disguised equation of a right triangle that has been adjusted to express the speed of light as a unit hypotenuse: $\frac{1}{\gamma} = \sqrt{c^2-v^2}$ $\left( \frac{1}{\gamma} \right)^2 = c^2-v^2$ $c^2 = \left( \frac{1}{ \gamma} \right)^2 + v^2$ Alas, science also tends to overlook one of the more interesting properties of the square root—that the function returns two solutions, a positive one and a negative one. The negative one is ignored (though the absolute value is never included in the Lorentz equation), because it would indicate that time, length and relativistic mass could also be negative. But if you consider both solutions simultaneously, then a bigger problem arises… they cancel each other out and you end up with the classic “division by zero” problem that allows you to do things like proving 2 = 1. 2 (So don’t mention it and hope nobody notices.) One should also note that the equation for a right triangle is also the equation for a circle: $x^2 + y^2 = r^2$, where r is the radius. Because $r = c = 1$, this is a unit circle, with the velocity on the x axis and the Lorentz factor being the corresponding value on the y axis. By plotting $(v, 1/\gamma)$ in its entirety, the reciprocal relationship becomes clearer. What immediately stands out is that a velocity can drop all the way to -1, the speed of light running backwards. That may sound a bit strange, but once identified in conventional terms, it is a very familiar concept. In the Reciprocal System, the speed of light (unit speed) is the fulcrum between motion in space and motion in time. As such, it is the upper limit of both of those motions, essentially being the maximum speed of the universe, which is referred to as the progression of the natural reference system. You can only slow down from this speed. In space, you add time, so the speed of $1s/1t$ becomes $1s/nt$.
In time, you add space, going from $1t/1s$ to $1t/ns$, remembering that when you cross the unit speed boundary, inversion takes place and speed, $s/t$, becomes energy, $t/s$. This is indicated in the Lorentz Factor, because any value where $v > 1$ becomes undefined—there is no solution to the equation, because you would be moving at a velocity that is faster than the fastest velocity possible for the Universe. The system is only solvable if $-1 \le v \le +1$. Anything (photons, particles, atoms, molecules, etc.) being carried by this progression will be moving at this “maximum speed of the universe.” Photons, having no net displacement in space or time (in a vacuum), have no resistance to this speed and will therefore be carried at this maximum speed, which is why we call it the speed of light, or in the Reciprocal System, unit speed, and why the speed of light is constant in all reference frames in Relativity. Photons are not actually moving on their own; they are just being carried by the progression—no motion relative to the speed of the progression. This is why the speed of light, the maximum speed of the Universe, cannot be exceeded by any velocity in space. 3 It has nothing to do with “infinite mass” or an object shrinking into nonexistence, which is how the Lorentz Factor is interpreted. The relations in the Lorentz Factor, understood as a unit circle, do occur in the Reciprocal System—but under different names. Larson unknowingly uses it as the basis of his initial motions. The problem is better understood in the complex plane, where the gamma function represents the imaginary axis (1/γ = -γ). By default, the Universe is expanding at unit speed, having the coordinates (+1,0) on the diagram. Larson then introduces the concept of a direction reversal, which results in a linear vibration. This is moving inward (left on the v axis) to the coordinates (0,±1).
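Whatever one makes of the interpretation, the identity underlying the unit-circle reading is ordinary algebra: with $c = 1$, $1/\gamma = \sqrt{1 - v^2}$, so the point $(v, 1/\gamma)$ lies on the unit circle and is real only for $-1 \le v \le +1$. A minimal check (sketch; the sample velocities are arbitrary):

```python
import math

for v in [-0.99, -0.5, 0.0, 0.5, 0.99]:
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    # (v, 1/gamma) lies on the unit circle: v^2 + (1/gamma)^2 = 1.
    assert abs(v * v + (1.0 / gamma) ** 2 - 1.0) < 1e-12
print("all sampled points lie on the unit circle")
```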
The progression velocity appears to stop ($v = 0$), but there is now a split across the gamma axis, which is “imaginary” and rotational, creating the two oppositely-directed rotations that are known as a birotation. 4 The resolution of this birotation can be expressed by Euler’s formula, using the exponential functions: $\frac{e^{+i \gamma} + e^{-i \gamma}}{2} = \cos(\gamma)$ So this “direction reversal” results in a cosine function, which Larson defines as a photon—the core of his rotating systems. 5 Now that he has this ±γ “line” to rotate, Larson adds an inward scalar rotation to the photon, moving the net motion to the (-1,0) coordinate with a single speed solution, creating the rotational base, whose net motion opposes the progression at the same velocity, the speed of light running backwards that we call gravity, a very familiar concept. Essentially, the Lorentz Factor is just a kludge hiding the use of imaginary quantities to describe a gravitational field structure, in a fashion similar to the imaginary quantities used to describe electric and magnetic fields. This gravitational opposition to the progression is what gives the appearance of increasing mass—even though mass remains constant—since a “heavier” object must have more gravitational pull and be harder to move. The RS2 Approach The Lorentz Fudge is a 1-dimensional solution to a 2-dimensional problem, as is Larson’s definition of the rotational base. However, the Universe is 3-dimensional and, as William Hamilton discovered, it takes 4 dimensions to solve a 3-dimensional rotation: the quaternion. The RS2 solution was to upgrade the complex plane of the corrected Lorentz Factor and replace it with a quaternion. This, however, changes Larson’s 2-unit approach of speed and energy into a 4-unit system of +1, i, i.j and i.j.k = -1.
This resulted in a far more accurate representation of the photon, changing it from a linear vibration to a quaternion rotation with similar characteristics, but including electromagnetic properties, with a 1-dimensional, electric rotation (k) combined with a 2-dimensional, magnetic rotation (i.j). Since i.j = k, a birotation can be formed along electromagnetic lines using i.j.(-k), providing similar behavior to Prof. KVK Nehru’s original birotation model. This will be elaborated on in a future paper, but I just wanted to note the RS/RS2 difference. Summary The Lorentz Factor is the equation of a right triangle, where speed is normalized for a unit speed of light. Ignoring the negative roots and velocities of the equation conceals the fact that the Lorentz Factor is actually just a unit circle. Unit speed is the maximum speed the physical universe is capable of, expressed in the Reciprocal System as the outward progression of the natural reference system. 6 The minimum speed is negative unity, the inward motion expressed by gravitation. The default speed of the Universe is unity. When a conventional object “at rest” is accelerated, what is actually happening is that the inward motion of gravity is being neutralized. A rocket isn’t increasing its speed by thrust—the thrust is reducing the effect gravitation is having upon it, allowing it to return to the default speed of unity (the speed of light). It is impossible to accelerate an object past the speed of light in space, because you are not adding velocity; you are reducing resistance, and once that resistance is gone, you are done. This is the situation in particle accelerators and why electromagnetic systems cannot accelerate a particle past the speed of light. All they can do is reduce the resistance preventing the particle from moving at the speed of light. The circular form of the Lorentz Factor produces similar results to Larson’s construction of the rotational base.
When the 1-dimensional interpretation is upgraded to three dimensions, the linear vibration of the photon becomes a quaternion rotation possessing electromagnetic characteristics, such as TE, TM and TEM modes. What the Lorentz Factor comes down to is a device that is used to try to understand the inward, “backwards speed of light” motion of gravitation, similar to Ptolemy's epicycle description of the reversal of planetary motion. But when placed in the proper context, one can see past the illusions of mathematics and understand the underlying concepts. 2 Let $a = b$. Then $a^2 = ab$; $a^2 + a^2 = a^2 + ab$; $2a^2 = a^2 + ab$; $2a^2 - 2ab = a^2 + ab - 2ab$; $2a^2 - 2ab = a^2 - ab$; $2(a^2 - ab) = 1(a^2 - ab)$; cancelling $(a^2 - ab)$ from both sides gives $2 = 1$. 3 I did qualify that, because faster-than-light motions are commonplace in the Reciprocal System, but manifest differently than “warp drive.” The translational velocities are always less than or equal to unit speed. 5 Larson’s solution is 2-dimensional; the 3-dimensional solution proposed by RS2 uses a quaternion rotation to accomplish the reversal, resulting in a more complex structure of the photon. 6 Known to astronomers as the Hubble Expansion.
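Two of the ordinary mathematical facts invoked above can be checked directly: the birotation identity $(e^{+i\gamma} + e^{-i\gamma})/2 = \cos\gamma$, and the reason footnote 2's "proof" of $2 = 1$ fails, namely that the cancelled factor $a^2 - ab$ is zero when $a = b$, so the cancellation is a hidden division by zero. A minimal sketch (the test angle and values are arbitrary):

```python
import cmath
import math

# Birotation identity: (e^{+i g} + e^{-i g}) / 2 == cos(g).
g = 0.7  # arbitrary test angle
birotation = (cmath.exp(1j * g) + cmath.exp(-1j * g)) / 2
assert abs(birotation - math.cos(g)) < 1e-12

# Footnote 2: with a = b, the factor cancelled from both sides is zero,
# so the step "cancel (a^2 - ab)" divides by zero.
a = b = 7
factor = a * a - a * b
print(factor)  # 0
```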
I came across John Duffield on Quantum Computing SE via this hot question. I was curious to see an account with 1 reputation and a question with hundreds of upvotes. It turned out that the reason why he has so little reputation despite a massively popular question is that he was suspended. May I ... @Nelimee Do we need to merge? Currently, there's just one question with "phase-estimation" and another question with "quantum-phase-estimation". Might we as well use just one tag? (say just "phase-estimation") @Blue 'merging', if I'm getting the terms right, is a specific single action that does exactly that and is generally preferable to editing tags on questions. Having said that, if it's just one question, it doesn't really matter, although performing a proper merge is still probably preferable. Merging is taking all the questions with a specific tag and replacing that tag with a different one, on all those questions, on a tag level, without permanently changing anything about the underlying tags @Blue yeah, you could do that. It generally requires votes, so it's probably not worth bothering when only one question has that tag @glS "Every Hermitian matrix satisfies this property: more specifically, all and only Hermitian matrices have this property" hah? I thought it was only a subset of the set of valid matrices ^^ Thanks for the precision :) @Nelimee if you think about it, it's quite easy to see. Unitary matrices are the ones with phases as eigenvalues, while Hermitian matrices have real eigenvalues. Therefore, if a matrix is not Hermitian (does not have real eigenvalues), then its exponential will not have eigenvalues of the form $e^{i\phi}$ with $\phi\in\mathbb R$.
Although I'm not sure whether there could be exceptions for non-diagonalizable matrices (if $A$ is not diagonalizable, then the above argument doesn't work) This is an elementary question, but a little subtle, so I hope it is suitable for MO. Let $T$ be an $n \times n$ square matrix over $\mathbb{C}$. The characteristic polynomial $T - \lambda I$ splits into linear factors like $T - \lambda_iI$, and we have the Jordan canonical form:$$ J = \begin... @Nelimee no! Unitarily diagonalizable matrices are all and only the normal ones (satisfying $AA^\dagger =A^\dagger A$). For general diagonalizability, if I'm not mistaken, one characterization is that the sum of the dimensions of the eigenspaces has to match the total dimension @Blue I actually agree with Nelimee here that it's not that easy. You get $UU^\dagger = e^{iA} e^{-iA^\dagger}$, but if $A$ and $A^\dagger$ do not commute, it's not straightforward that this doesn't give you an identity I'm getting confused. I remember there being some theorem about one-to-one mappings between unitaries and Hermitians provided by the exponential, but it was some time ago and I may be confusing things in my head @Nelimee if there is a $0$ there then it becomes the normality condition. Otherwise it means that the matrix is not normal, therefore not unitarily diagonalizable, but still the product of exponentials is relatively easy to write @Blue you are right indeed. If $U$ is unitary then for sure you can write it as the exponential of a Hermitian matrix (times $i$). This is easily proven because $U$ is ensured to be unitarily diagonalizable, so you can simply compute its logarithm through the eigenvalues. However, logarithms are tricky and multivalued, and there may be logarithms which are not diagonalizable at all. I've actually recently asked some questions on math.SE on related topics @Mithrandir24601 indeed, that was also what @Nelimee showed with an example above. I believe my argument holds for unitarily diagonalizable matrices.
If a matrix is only generally diagonalizable (so it's not normal) then it's not true also probably even more generally without $i$ factors so, in conclusion, it does indeed seem that $e^{iA}$ unitary implies $A$ Hermitian. It therefore also seems that $e^{iA}$ unitary implies $A$ normal, so that my argument passing through the spectra also works (though one has to show that $A$ is ensured to be normal) Now what we need to look for is 1) the exact set of conditions under which the matrix exponential $e^A$ of a complex matrix $A$ is unitary, and 2) the exact set of conditions under which the matrix exponential $e^{iA}$ of a real matrix $A$ is unitary @Blue fair enough - as with @Semiclassical, I was thinking about it with the $t$ parameter, as that's what we care about in physics :P I can possibly come up with a number of non-Hermitian matrices that give unitary evolution for a specific $t$ Or rather, the exponential of which is unitary for $t+n\tau$, although I'd need to check If you're afraid of the density of diagonalizable matrices, simply triangularize $A$. You get $$A=P^{-1}UP,$$ with $U$ upper triangular and the eigenvalues $\{\lambda_j\}$ of $A$ on the diagonal. Then $$\mbox{det}\;e^A=\mbox{det}(P^{-1}e^UP)=\mbox{det}\;e^U.$$ Now observe that $e^U$ is upper ... There's 15 hours left on a bountied question, but the person who offered the bounty is suspended and his suspension doesn't expire until about 2 days, meaning he may not be able to award the bounty himself? That's not fair: it's a 300-point bounty, the largest bounty ever offered on QCSE. Let h...
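The conclusion of the exchange above can be probed numerically: for Hermitian $A$, $e^{iA}$ is unitary, while for a non-normal $A$ it generally is not. A sketch under those assumptions (the truncated-Taylor `expm_taylor` helper is mine, adequate only for the small-norm matrices used here):

```python
import numpy as np

def expm_taylor(M, terms=60):
    # Truncated Taylor series for the matrix exponential;
    # fine for the small-norm matrices below.
    out = np.eye(M.shape[0], dtype=complex)
    term = np.eye(M.shape[0], dtype=complex)
    for k in range(1, terms):
        term = term @ M / k
        out += term
    return out

rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
H = (B + B.conj().T) / 2               # Hermitian by construction
U = expm_taylor(1j * H)
print(np.allclose(U @ U.conj().T, np.eye(3)))   # True: e^{iH} is unitary

N = np.array([[0.0, 1.0], [0.0, 0.0]])  # non-normal (nilpotent)
V = expm_taylor(1j * N)
print(np.allclose(V @ V.conj().T, np.eye(2)))   # False
```

For the nilpotent example one can even check by hand: $e^{iN} = I + iN$, and $(I + iN)(I + iN)^\dagger \ne I$, matching the $UU^\dagger = e^{iA}e^{-iA^\dagger}$ discussion above.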
Abstract We give a necessary and sufficient geometric structural condition, which we call the $\alpha$-Structural Hypothesis, for a stable codimension 1 integral varifold on a smooth Riemannian manifold to correspond to an embedded smooth hypersurface away from a small set of generally unavoidable singularities. The $\alpha$-Structural Hypothesis says that no point of the support of the varifold has a neighborhood in which the support is the union of three or more embedded $C^{1, \alpha}$ hypersurfaces-with-boundary meeting (only) along their common boundary. We establish that whenever a stable integral $n$-varifold on a smooth $(n+1)$-dimensional Riemannian manifold satisfies the $\alpha$-Structural Hypothesis for some $\alpha \in (0, 1/2)$, its singular set is empty if $n \leq 6$, discrete if $n = 7$ and has Hausdorff dimension $\leq n-7$ if $n \geq 8$; in view of well-known examples, this is the best possible general dimension estimate on the singular set of a varifold satisfying our hypotheses. We also establish compactness of mass-bounded subsets of the class of stable codimension 1 integral varifolds satisfying the $\alpha$-Structural Hypothesis for some $\alpha \in (0, 1/2)$. The $\alpha$-Structural Hypothesis on an $n$-varifold for any $\alpha \in (0, 1/2)$ is readily implied by either of the following two hypotheses: (i) the varifold corresponds to an absolutely area minimizing rectifiable current with no boundary, (ii) the singular set of the varifold has vanishing $(n-1)$-dimensional Hausdorff measure. Thus, our theory subsumes the well-known regularity theory for codimension 1 area minimizing rectifiable currents and settles the long-standing question as to which weakest size hypothesis on the singular set of a stable minimal hypersurface guarantees the validity of the above regularity conclusions. An optimal strong maximum principle for stationary codimension 1 integral varifolds follows from our regularity and compactness theorems.
@article {SS, MRKEY = {0634285}, AUTHOR = {Schoen, Richard and Simon, Leon}, TITLE = {Regularity of stable minimal hypersurfaces}, JOURNAL = {Comm. Pure Appl. Math.}, FJOURNAL = {Communications on Pure and Applied Mathematics}, VOLUME = {34}, YEAR = {1981}, NUMBER = {6}, PAGES = {741--797}, ISSN = {0010-3640}, CODEN = {CPAMAT}, MRCLASS = {49F22 (53C42 58E15)}, MRNUMBER = {0634285}, MRREVIEWER = {F. J. Almgren, Jr.}, DOI = {10.1002/cpa.3160340603}, ZBLNUMBER = {0497.49034}, } [SJ] J. Simons, "Minimal varieties in riemannian manifolds," Ann. of Math., vol. 88, pp. 62-105, 1968. @article {SJ, MRKEY = {0233295}, AUTHOR = {Simons, James}, TITLE = {Minimal varieties in riemannian manifolds}, JOURNAL = {Ann. of Math.}, FJOURNAL = {Annals of Mathematics. Second Series}, VOLUME = {88}, YEAR = {1968}, PAGES = {62--105}, ISSN = {0003-486X}, MRCLASS = {53.04 (35.00)}, MRNUMBER = {0233295}, MRREVIEWER = {W. F. Pohl}, DOI = {10.2307/1970556}, ZBLNUMBER = {0181.49702}, } [S1] L. Simon, Lectures on Geometric Measure Theory, Canberra: Australian National University Centre for Mathematical Analysis, 1983, vol. 3. @book {S1, MRKEY = {0756417}, AUTHOR = {Simon, Leon}, TITLE = {Lectures on Geometric Measure Theory}, SERIES = {Proc. Centre Math. Anal. Austral. Nat. Univ.}, VOLUME = {3}, PUBLISHER = {Australian National University Centre for Mathematical Analysis}, ADDRESS = {Canberra}, YEAR = {1983}, PAGES = {vii+272}, ISBN = {0-86784-429-9}, MRCLASS = {49-01 (28A75 49F20)}, MRNUMBER = {0756417}, MRREVIEWER = {J. S. Joel}, ZBLNUMBER = {0546.49019}, } [S4] L. Simon, "A strict maximum principle for area minimizing hypersurfaces," J. Differential Geom., vol. 26, iss. 2, pp. 327-335, 1987. @article {S4, MRKEY = {0906394}, AUTHOR = {Simon, Leon}, TITLE = {A strict maximum principle for area minimizing hypersurfaces}, JOURNAL = {J. 
Differential Geom.}, FJOURNAL = {Journal of Differential Geometry}, VOLUME = {26}, YEAR = {1987}, NUMBER = {2}, PAGES = {327--335}, ISSN = {0022-040X}, CODEN = {JDGEAS}, MRCLASS = {49F10 (35J60 49F20 53C42)}, MRNUMBER = {0906394}, MRREVIEWER = {Michael Gr{ü}ter}, URL = {http://projecteuclid.org/euclid.jdg/1214441373}, ZBLNUMBER = {0625.53052}, } [S] L. Simon, "Cylindrical tangent cones and the singular set of minimal submanifolds," J. Differential Geom., vol. 38, iss. 3, pp. 585-652, 1993. @article {S, MRKEY = {1243788}, AUTHOR = {Simon, Leon}, TITLE = {Cylindrical tangent cones and the singular set of minimal submanifolds}, JOURNAL = {J. Differential Geom.}, FJOURNAL = {Journal of Differential Geometry}, VOLUME = {38}, YEAR = {1993}, NUMBER = {3}, PAGES = {585--652}, ISSN = {0022-040X}, CODEN = {JDGEAS}, MRCLASS = {58E15 (49Q20)}, MRNUMBER = {1243788}, MRREVIEWER = {J. E. Brothers}, URL = {http://projecteuclid.org/euclid.jdg/1214454484}, ZBLNUMBER = {0819.53029}, } [S3] L. Simon, Theorems on Regularity and Singularity of Energy Minimizing Maps, Basel: Birkhäuser, 1996. @book {S3, MRKEY = {1399562}, AUTHOR = {Simon, Leon}, TITLE = {Theorems on Regularity and Singularity of Energy Minimizing Maps}, SERIES = {Lectures Math. ETH Zürich}, PUBLISHER = {Birkhäuser}, ADDRESS = {Basel}, YEAR = {1996}, PAGES = {viii+152}, ISBN = {3-7643-5397-X}, MRCLASS = {58E20 (35J60 49N60 58G03)}, MRNUMBER = {1399562}, MRREVIEWER = {Nathan Smale}, DOI = {10.1007/978-3-0348-9193-6}, ZBLNUMBER = {0864.58015}, } [SW1] L. Simon and N. Wickramasekera, "Stable branched minimal immersions with prescribed boundary," J. Differential Geom., vol. 75, iss. 1, pp. 143-173, 2007. @article {SW1, MRKEY = {2282727}, AUTHOR = {Simon, Leon and Wickramasekera, Neshan}, TITLE = {Stable branched minimal immersions with prescribed boundary}, JOURNAL = {J. 
Differential Geom.}, FJOURNAL = {Journal of Differential Geometry}, VOLUME = {75}, YEAR = {2007}, NUMBER = {1}, PAGES = {143--173}, ISSN = {0022-040X}, CODEN = {JDGEAS}, MRCLASS = {53C42 (49Q05 53A07)}, MRNUMBER = {2282727}, MRREVIEWER = {Giandomenico Orlandi}, URL = {http://projecteuclid.org/euclid.jdg/1175266256}, ZBLNUMBER = {1109.53064}, } [SoW] B. Solomon and B. White, "A strong maximum principle for varifolds that are stationary with respect to even parametric elliptic functionals," Indiana Univ. Math. J., vol. 38, iss. 3, pp. 683-691, 1989. @article {SoW, MRKEY = {1017330}, AUTHOR = {Solomon, Bruce and White, Brian}, TITLE = {A strong maximum principle for varifolds that are stationary with respect to even parametric elliptic functionals}, JOURNAL = {Indiana Univ. Math. J.}, FJOURNAL = {Indiana University Mathematics Journal}, VOLUME = {38}, YEAR = {1989}, NUMBER = {3}, PAGES = {683--691}, ISSN = {0022-2518}, CODEN = {IUMJAB}, MRCLASS = {49F20}, MRNUMBER = {1017330}, MRREVIEWER = {Martin Fuchs}, DOI = {10.1512/iumj.1989.38.38032}, ZBLNUMBER = {0711.49059}, } [W1] N. Wickramasekera, "A rigidity theorem for stable minimal hypercones," J. Differential Geom., vol. 68, iss. 3, pp. 433-514, 2004. @article {W1, MRKEY = {2144538}, AUTHOR = {Wickramasekera, Neshan}, TITLE = {A rigidity theorem for stable minimal hypercones}, JOURNAL = {J. Differential Geom.}, FJOURNAL = {Journal of Differential Geometry}, VOLUME = {68}, YEAR = {2004}, NUMBER = {3}, PAGES = {433--514}, ISSN = {0022-040X}, CODEN = {JDGEAS}, MRCLASS = {53C24 (49Q05)}, MRNUMBER = {2144538}, MRREVIEWER = {Mohammad Reza Pakzad}, URL = {http://projecteuclid.org/euclid.jdg/1115669592}, ZBLNUMBER = {1085.53055}, }
How to Use the Beam Envelopes Method for Wave Optics Simulations

In the wave optics field, it is difficult to simulate large optical systems in a way that rigorously solves Maxwell’s equations. This is because the waves that appear in the system need to be resolved by a sufficiently fine mesh. The beam envelopes method in the COMSOL Multiphysics® software is one option for this purpose. In this blog post, we discuss how to use the Electromagnetic Waves, Beam Envelopes interface and handle its restrictions.

Comparing Methods for Solving Large Wave Optics Models

In electromagnetic simulations, the wavelength always needs to be resolved by the mesh in order to find an accurate solution of Maxwell’s equations. This requirement makes it difficult to simulate models that are large compared to the wavelength. There are several methods for stationary wave optics problems that can handle large models. These methods include the so-called diffraction formulas, such as the Fraunhofer, Fresnel-Kirchhoff, and Rayleigh-Sommerfeld diffraction formulas, and the beam propagation method (BPM), such as paraxial BPM and the angular spectrum method (Ref. 1). Most of these methods use certain approximations to the Helmholtz equation. They can handle large models because they are based on a propagation approach that solves for the field in a plane from a known field in another plane. So you don’t have to mesh the entire domain; you just need a 2D mesh for the desired plane. Compared to these methods, the Electromagnetic Waves, Beam Envelopes interface in COMSOL Multiphysics (which we will refer to as the Beam Envelopes interface for the rest of the blog post) solves for the exact solution of the Helmholtz equation in a domain. It can handle large models; i.e., the meshing requirement can be significantly relaxed if a certain restriction is satisfied.

A beam envelopes simulation for a lens with a millimeter-range focal length for a 1-um wavelength beam.
We discuss the Beam Envelopes interface in more detail below.

Theory Behind the Beam Envelopes Interface

Let’s take a look at the math that the Beam Envelopes interface computes “under the hood”. If you add this interface to a model, click the Physics Interface node, and change Type of phase specification to User defined, you’ll see the following in the Equation section: Here, \bf E1 is the dependent variable that the interface solves for, called the envelope function. In the phasor representation of a field, \bf E1 corresponds to the amplitude and \phi_1 to the phase, i.e., {\bf E} = {\bf E1} \exp(-i \phi_1). The first equation, the governing equation for the Beam Envelopes interface, can be derived by substituting this second definition of the electric field into the Helmholtz equation. If we know \phi_1, the only unknown is \bf E1 and we can solve for it. The phase, \phi_1, needs to be given a priori in order to solve the problem. With the second equation, we assume a form such that the fast oscillating part, the phase, can be factored out from the field. If that’s true, the envelope \bf E1 is “slowly varying”, so we don’t need to resolve the wavelength. Instead, we only need to resolve the slow wave of the envelope. Because of this, simulating large-scale wave optics problems is possible on personal computers. A common question is: “When do you want the envelope rather than the field itself?” Lens simulation is one example. Sometimes you may need the intensity rather than the complex electric field. Actually, the square of the norm of the envelope gives the intensity. In such cases, it suffices to compute the envelope function.

What Happens If the Phase Function Is Not Accurately Known?

The math behind the beam envelopes method introduces more questions: What if the phase is not accurately known? Can we use the Beam Envelopes interface in such cases? Are the results correct? To answer these questions, we need to do a little more math.
1D Example

Let’s take the simplest test case: a plane wave, Ez = \exp(-i k_0 x), where k_0 = 2\pi / \lambda_0 for wavelength \lambda_0 = 1 um, propagating in a rectangular domain of 20 um length. (We intentionally use a short domain for illustrative purposes.) The out-of-plane wave enters from the left boundary and exits through the right boundary without reflection. This can be simulated in the Beam Envelopes interface by adding a Matched boundary condition with excitation on the left and without excitation on the right, while adding a Perfect Magnetic Conductor boundary condition on the top and bottom (meaning we don’t care about the y direction). The correct setting for the phase specification is shown in the figure below. We know the answer Ez = \exp(-i k_0 x), so the correct phase function k_0 x, or the wave vector (k_0,0), is known a priori. Substituting this phase function in the second equation, we inversely get E1z = 1, the constant function. How many mesh elements do we need to resolve a constant function? Only one! (See this previous blog post on high-frequency modeling.) The following results show the envelope function \bf E1 and the norm of \bf E, ewbe.normE, which is equal to |{\bf E1}|. Here, we can see that we get the correct envelope function — the constant one — if we give the exact phase function, for any number of mesh elements, as expected. For confirmation purposes, the phase of \bf E1z, arg(E1z), is also plotted. It is zero, also as expected. Now, let’s see what happens if our guess for the phase function is a little bit off — say, (0.95k_0,0) instead of the exact (k_0,0). What kind of solution do we get? Let’s take a look: What we see here for the envelope function is the so-called beating. It’s obvious that everything depends on the mesh size. To understand what’s going on, we need a pencil, paper, and patience. We knew the answer was Ez = \exp(-i k_0 x), but we “intentionally” gave an incorrect estimate in the COMSOL® software.
Substituting the wrong phase function in the second equation, we get \exp(-i k_0 x)={\bf E1z} \exp(-0.95i k_0 x). This results in {\bf E1z} = \exp(-0.05i k_0 x), which is no longer the constant one. This is a wave with a wavelength of \lambda_b= 2\pi/(0.05 k_0) = 20 um, which is called the beat wavelength. Let’s take a look at the plot above for six mesh elements. We get exactly what is expected (red line), i.e., {\bf E1z} = \exp(-0.05i k_0 x). The plot automatically takes the real part, showing {\bf E1z} = \cos(-0.05 k_0 x). The plots for the lower resolutions still show an approximate solution of the envelope function. This is as expected for finite element simulations: a coarser mesh gives more approximate results. This shows that if we make a wrong guess for the phase function, we get a wrong (beat-convoluted) envelope function. Because of the wrong guess, the envelope function acquires the phase of the beating (green line), which is -0.05 k_0 x. What about the norm of \bf E? Look at the blue line in the plots above. It looks like the COMSOL Multiphysics software generated a correct solution for ewbe.normE, which is the constant one. Let’s calculate: Substituting both the wrong (analytical) phase function and the wrong (beat-convoluted) envelope function in the second equation, we get {\bf Ez} = \exp(-0.05i k_0 x) \times \exp(-0.95i k_0 x) = \exp(-i k_0 x), which is the correct fast field! If we take the norm of \bf E, we get the correct solution, the constant one. This is what we wanted. Note that we can’t display \bf E itself because the domain can be too large, but we can find \bf E analytically and display the norm of \bf E with a coarse mesh. This is not a trick. Instead, we see that if the phase function is off, the envelope function will also be off, since it becomes beat-convoluted. However, the norm of the electric field can still be correct.
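The algebra above is easy to check numerically outside of any solver (a sketch; the 0.95 factor and the 20 um domain are taken from the example above):

```python
import numpy as np

lam0 = 1.0                        # wavelength, um
k0 = 2 * np.pi / lam0
x = np.linspace(0.0, 20.0, 2001)  # the 20 um domain

E = np.exp(-1j * k0 * x)          # exact fast field
phi = 0.95 * k0 * x               # deliberately wrong phase guess
E1 = E * np.exp(1j * phi)         # envelope implied by E = E1 * exp(-i*phi)

# The envelope beats with beat wavelength 2*pi / (0.05*k0) = 20 um ...
beat_wavelength = 2 * np.pi / (0.05 * k0)
# ... yet reassembling the field recovers |E| = 1 everywhere.
E_re = E1 * np.exp(-1j * phi)
```

The envelope comes out as \exp(-0.05i k_0 x) exactly as in the calculation above, while the norm of the reassembled field stays at the constant one.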
Therefore, it is important that the beat-convoluted envelope function be correctly computed in order to get the correct electric field. The above plots clearly show that. The six-element mesh case gives the completely correct electric field norm because it fully resolves the beat-convoluted envelope function. The other meshes give an approximate solution to the beat-convoluted envelope function depending on the mesh size. They also do so for the field norm. This is a general consequence that holds true for arbitrary cases. No matter what phase function we use in COMSOL Multiphysics, we are okay as long as we correctly solve the first equation for \bf E1 and as long as the phase function is continuous over the domain. When there are multiple materials in a domain, the continuity of the phase function is also critical to the solution accuracy. We may discuss this in a future blog post, but it is also mentioned in this previous blog post on high-frequency modeling.

2D Example

So far, we have discussed a scalar wave number. More generally, the phase function is specified by the wave vector. When the wave vector is not guessed correctly, the beating has vector-valued consequences. Suppose we have the same plane wave from the first example, but we make a wrong guess for the phase, i.e., k_0(x \cos \theta + y \sin \theta) instead of k_0 x . In this case, the wave number is correct but the wave vector is off. This time, the beating takes place in 2D. Let’s start by performing the same calculations as in the 1D example. We have \exp(-i k_0 x)= {\bf E1z}(x,y) \exp(-i k_0 (x \cos \theta+y \sin \theta) ) and the envelope function is now calculated to be {\bf E1z}(x,y) = \exp(-i k_0 (x (1-\cos \theta) -y \sin \theta) ) , which is a tilted wave propagating in the direction (1-\cos \theta, -\sin \theta) , with the beat wave number k_b = 2 k_0 \sin (\theta/2) and the beat wavelength \lambda_b=\lambda_0/(2\sin (\theta/2)).
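The beat wave number quoted here follows from the magnitude of the envelope’s wave vector k_0(1-\cos \theta, -\sin \theta) via the half-angle identity; a quick numerical check (a sketch, using the θ = 15° of the next plots):

```python
import numpy as np

lam0 = 1.0
k0 = 2 * np.pi / lam0
theta = np.deg2rad(15.0)

# wave vector of the beat-convoluted envelope E1z(x, y)
kb_vec = k0 * np.array([1.0 - np.cos(theta), -np.sin(theta)])
kb = np.linalg.norm(kb_vec)   # beat wave number
lam_b = 2 * np.pi / kb        # beat wavelength
# half-angle identity: sqrt(2 - 2*cos(theta)) = 2*sin(theta/2),
# so kb = 2*k0*sin(theta/2) and lam_b = lam0 / (2*sin(theta/2))
```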
The following plots are the results for θ = 15° for a domain of 3.8637 um x 29.348 um for different maximum mesh sizes. The same boundary conditions are given as in the previous 1D example. The only difference is that the incident wave on the left boundary is {\bf E1z}(0,y) = \exp(i k_0 y \sin \theta) . (Note that we have to give the correspondingly “wrong” boundary condition because our phase guess is wrong.) In the result for the finest mesh (rightmost), we can confirm that \bf E1z is computed just as we analyzed in the above calculation and the norm of \bf Ez is computed to be the constant one. These results are consistent with the 1D example. The electric field norm (top) and the envelope function (bottom) for the wrong phase function k_0(x \cos\theta +y \sin\theta ), computed for different mesh sizes. The color range represents the values from -1 to 1.

Simulating a Lens Using the Beam Envelopes Interface

The ultimate goal here is to simulate an electromagnetic beam through optical lenses in a millimeter-scale domain with the Beam Envelopes interface. How can we achieve this? We already discussed how to compute the right solution. The following example is a simulation of a hard-apertured flat-top beam incident on a plano-convex lens with a radius of curvature of 500 um and a refractive index of 1.5 (approximately 1 mm focal length). Here, we use \phi_1 = k_0 x, which is not accurate at all. In the region before the lens, there is a reflection, which creates an interference pattern. In the lens, there are multiple reflections. After the lens, the phase is spherical so that the beam focuses into a spot. So this phase function is far from what is actually happening around the lens. Still, we have a clue. If we plot \bf E1z, we see the beating. Plot of \bf E1z. The inset shows the finest beat wavelength inside the lens. As can be seen in the plot, prominent beating occurs in the lens (see the inset).
Actually, the finest beat wavelength is \lambda_0/2 in front of the lens. To prove this, we can perform the same calculations as in the previous examples. This finest beat is due to the interference between the incident beam and the reflected beam, but we can ignore it because it doesn’t contribute to the forward propagation. We can see that the mesh doesn’t resolve the beating before the lens, but let’s ignore this for now. The beat wavelength in the lens is 3\lambda_0/2 for the backward beam and 2\lambda_0 for the forward beam for n = 1.5, which we can also prove in the same way as in the previous examples. Again, we ignore the backward beam. In the plot, what’s visible is the 2\lambda_0 beating for the forward beam. (The backward beam is only a fraction, approximately 4% for n = 1.5, of the incident beam, so it’s not visible.) The following figure shows the mesh resolving the beat inside the lens with 10 mesh elements. The beat wavelength inside the lens. The mesh resolves the beat with 10 mesh elements. Other than the beating for the propagating beam in the lens, the beating in the subsequent air domain is pretty large, so we can use a coarse mesh there. This may not hold for faster lenses, which have a more rapid quadratic phase and can have a very short beat wavelength. In this example, we must use a finer mesh only in the lens domain to resolve the fastest beating. The computed field norm is shown at the top of this blog post. To verify the result, we can compute the field at the lens exit surface by using the Frequency Domain interface, and then use the Fresnel diffraction formula to calculate the field at the focus. The results for the field norm agree very well. Comparison between the Beam Envelopes interface and the Fresnel diffraction formula. The mesh resolves the beat inside the lens with 10 mesh elements. The following comparison shows the mesh size dependence.
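Two of the numbers quoted above are easy to reproduce by hand (a sketch; it assumes normal-incidence Fresnel reflectance at the air/glass interface and the phase guess \phi_1 = k_0 x inside the lens):

```python
n = 1.5     # refractive index of the lens
lam0 = 1.0  # vacuum wavelength, um

# normal-incidence Fresnel (power) reflectance at an air/glass interface:
# R = ((n - 1) / (n + 1))^2 = 0.04, i.e. the ~4% quoted above
R = ((n - 1) / (n + 1)) ** 2

# inside the lens the forward field goes as exp(-i*n*k0*x); with the phase
# guess phi1 = k0*x the envelope goes as exp(-i*(n-1)*k0*x), so the forward
# beat wavelength is lam0 / (n - 1) = 2*lam0
beat_forward = lam0 / (n - 1)
```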
We get a pretty good result with our standard recommendation, \lambda_b/6, which here is equal to \lambda_0/3. This makes it easier to mesh the lens domain. Mesh size dependence of the field norm at the focus. As of version 5.3a of the COMSOL® software, the Fresnel Lens tutorial model includes a computation with the Beam Envelopes interface. Fresnel lenses are typically extremely thin (on the order of the wavelength). Even if there is diffraction in and around the lens surface discontinuities, the fine mesh around the lens does not significantly impact the total number of mesh elements.

Concluding Remarks

In this blog post, we discussed what the Beam Envelopes interface does “under the hood” and how we can get accurate solutions for wave optics problems. Even if we get beating, the beat wavelength can be much longer than the wavelength, which makes it possible to simulate large optical systems. Although it seems tedious to check the mesh size to resolve beating, this is not extra work that is only required for the Beam Envelopes interface. When you use the finite element method, you always need to check the mesh size dependence to ensure accurately computed solutions.

Next Steps

Try it yourself: Download the file for the millimeter-range focal length lens by clicking the button below.

References

J. Goodman, Introduction to Fourier Optics, Roberts and Company Publishers, 2005.
I'm solving the exercises of chapter 14 in the book Representations and Characters of Groups (Gordon James, Martin Liebeck), always working over $\mathbb{R}$ or $\mathbb{C}$. One of them says: Suppose that $\chi$ is a non-zero, non-trivial character of $G$, and that $\chi(g)$ is a non-negative real number for all $g$ in $G$. Prove that $\chi$ is reducible. At this stage of the book, I think that the natural procedure is to calculate $\langle\chi,\chi\rangle$ and see that it is $\neq 1$. I know that non-trivial means $\exists g \in G$ such that $\chi(g)\neq 1$, but since $\chi(1_G)\in \mathbb{N}$, I don't know what they want to say with the non-zero condition. Anyway, I've done the following: By definition, $\langle\chi,\chi\rangle=\displaystyle\dfrac{1}{|G|}\sum_{g\in G}\chi(g)\chi(g^{-1})$. I know that there exists a basis $\mathcal{B}$ where $[g]_{\mathcal{B}}$ is diagonal, so $\chi(g)$ is a sum of $m$th roots of unity (where $m$ is the order of $g$), and hence $\chi(g^{-1})=\overline{\chi(g)}$. But $\chi(g)$ is a non-negative real number for all $g$ in $G$, so we obtain $\langle\chi,\chi\rangle=\displaystyle\dfrac{1}{|G|}\sum_{g\in G}\chi(g)^2$. Since $\chi$ is non-trivial, $\exists g \in G$ such that $\chi(g)\neq 1$, but I'm stuck here. Maybe I'm missing something related to the non-zero condition. Any advice would be welcome.
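(One possible route past that point, sketched for concreteness: instead of $\langle\chi,\chi\rangle$, compute the multiplicity of the trivial character. Since every term is non-negative and $\chi \neq 0$ forces $\chi(1_G) > 0$,

$$\langle \chi, 1_G \rangle = \frac{1}{|G|}\sum_{g\in G}\chi(g) \geq \frac{\chi(1_G)}{|G|} > 0.$$

But $\langle \chi, 1_G \rangle$ is the multiplicity of the trivial character in $\chi$, so it is a positive integer and the trivial character is a constituent of $\chi$. If $\chi$ were irreducible, this would force $\chi = 1_G$, contradicting non-triviality — and this is exactly where the non-zero hypothesis is used.)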
In this article we investigate some alternative ways of representing integers and performing arithmetic operations directly on these representations. We start from an observation of Edouard Zeckendorf that leads to a representation using sums of non-adjacent Fibonacci numbers. Later we show connections of this representation to a positional numeral system with the irrational base of the golden ratio.

Fibonacci representation

Fibonacci numbers are defined as follows: $F_0 = 0, \quad F_1 = 1, \quad F_i = F_{i-1} + F_{i-2} \ \ \ \text{for}\ \ i \geq 2,$ thus forming an infinite sequence $0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55\ldots$ Any natural number can be represented as a sum of distinct Fibonacci numbers, e.g. $12 = 5 + 3 + 2 + 1 + 1 = F_5 + F_4 + F_3 + F_2 + F_1 =$ $\phantom{00}=8 + 3 + 1 = F_6 + F_4 + F_2$ Edouard Zeckendorf noticed that under certain conditions this representation is unique. First, we are not allowed to use $F_0$ nor $F_1$ (the former is equal to $0$ anyway, and the latter is equal to $F_2$, so we would have to choose one of them). Second, we cannot use any adjacent numbers, so of any two adjacent $F_i$ and $F_{i+1}$ at most one can be used. We can encode which Fibonacci numbers were taken using a binary string. Suppose that we limit ourselves to Fibonacci numbers smaller than $F_{k+2}$. Then we can encode it as a binary string $a_{k-1} \ldots a_1 a_0$ of length $k$. If there are no adjacent $1$s in this string, it uniquely represents the number $a_{k-1} \cdot F_{k+1} + \ldots + a_1 \cdot F_3 + a_0 \cdot F_2.$ It is easy to see that there are exactly $F_{k+2}$ such strings. The proof is easily done by induction: for $k=1$ we have two strings ($0$ and $1$), and for $k=2$ we have three ($00$, $01$, and $10$; string $11$ is invalid, since it contains adjacent $1$s). That corresponds to $F_3 = 2$ and $F_4 = 3$.
For $k\geq 3$ we investigate the most significant bit $a_{k-1}$: either it is $0$, and then the string $a_{k-2} \ldots a_1 a_0$ of length $k-1$ can be chosen arbitrarily (resulting in $F_{k+1}$ possibilities), or $a_{k-1}=1$, and then $a_{k-2}=0$ and the string $a_{k-3} \ldots a_1 a_0$ can be chosen arbitrarily (resulting in $F_k$ possibilities). Thus we have in total $F_{k+1} + F_k = F_{k+2}$ possibilities, which concludes the proof. Following the same argument, we show that each $k$-bit string represents a different natural number smaller than $F_{k+2}$. For $k\leq 2$ it is easy to check. For $k\geq 3$ numbers beginning with $a_{k-1}=0$ are distinct and smaller than $F_{k+1}$, and numbers beginning with $a_{k-1}=1$ are $F_{k+1} + x$, where the $x$ are distinct and smaller than $F_k$; thus these numbers range from $F_{k+1}$ (inclusive) to $F_{k+1}+F_k = F_{k+2}$ (exclusive). This way we have proved Zeckendorf's theorem: using a $k$-bit string we can represent all integers from $0$ to $F_{k+2}-1$.

Basic arithmetic

In modern computers we represent integers using binary representation (positional notation with a base of $2$). One of its advantages is a quite straightforward implementation of basic arithmetic operations. But this is not the only way – there are many possible numeral systems (including those with bases greater than $2$, or even negative, irrational or imaginary bases; or redundant binary representations). Thus, inspired by Zeckendorf's observation, we could imagine a computer that stores integers in Fibonacci representation. (This representation could have some advantages, e.g. it would improve the sturdiness of punch cards, should such a computer use them: no adjacent $1$s means no adjacent holes.) That leads to a question: how do we perform basic arithmetic operations using this representation? One possible way to deal with this question is to devise algorithms for conversion between Fibonacci representation and binary representation.
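One direction of such a conversion — from an ordinary integer to the Fibonacci representation — is already easy: the greedy strategy suggested by the proof above always works, because after taking the largest Fibonacci number $F_i \leq n$ the remainder is smaller than $F_{i+1} - F_i = F_{i-1}$, so the adjacent $F_{i-1}$ can never be taken next. A sketch in Python (index $0$ of the list is the least significant bit, weighting $F_2$):

```python
def zeckendorf(n):
    """Greedy Zeckendorf encoding of n >= 0 as bits a_0, a_1, ...
    where a_i weights F_{i+2}; the result has no adjacent 1s."""
    # Fibonacci numbers F_2, F_3, ... not exceeding n
    fibs = [1, 2]
    while fibs[-1] + fibs[-2] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    bits = [0] * len(fibs)
    # repeatedly take the largest Fibonacci number that still fits
    for i in reversed(range(len(fibs))):
        if fibs[i] <= n:
            bits[i] = 1
            n -= fibs[i]
    return bits
```

For example, `zeckendorf(12)` returns `[1, 0, 1, 0, 1]`, i.e. $12 = F_2 + F_4 + F_6$, matching the decomposition above.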
Then to perform an operation in Fibonacci representation, we first convert the arguments to binary, then perform the operation in binary, and finally convert the result back to Fibonacci representation. However, we do not know efficient algorithms for conversion (that is, faster than $O(k^2)$ for $k$-bit numbers), so we need to perform operations directly on the Fibonacci representation.

Incrementation

Let's start with incrementation by one. We have a $k$-bit number $a_{k-1} \ldots a_1 a_0$ representing $x$ and we want to obtain the number $c_k c_{k-1} \ldots c_1 c_0$ representing $x+1$. This representation has an additional bit $c_k$, called the carry (or overflow) flag, since if $x=F_{k+2}-1$, then $x+1=F_{k+2}$ is not representable by a $k$-bit number. To do the operation it suffices to perform the following transformation $a_1a_0 \to c_1c_0$ on the two least significant bits: $00. \to 01. \qquad 01. \to 10. \qquad 10. \to 11.$ and leave the remaining bits unchanged. Unfortunately, this could create a pair of adjacent $1$s. We need to fix the representation and remove such a pair. We will use the most natural transformation, which is a direct consequence of the Fibonacci recurrence, and thus does not change the value of the represented integer: $011 \to 100$ If we use it on a pair of adjacent $1$s, it will remove this pair, possibly producing a new pair of adjacent $1$s two positions to the left. Thus we just apply this transformation from right to left to subsequent triplets of bits. This way we get an algorithm for incrementing a $k$-bit integer, working in $O(k)$ time.

Addition

A more complicated operation is adding a single Fibonacci number $F_j$ to $a_{k-1} \ldots a_1 a_0$. If $a_j = 0$, then we only need to increment $a_j$ and take care of possible adjacent $1$s that were created. If one pair was created, we just use the idea from the incrementation algorithm. If a triplet $111$ was created, we use this idea to remove the rightmost pair.
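As an aside, the $O(k)$ incrementation procedure from the previous section is small enough to sketch in full (a sketch; index $0$ of the list is the least significant bit, weighting $F_2$, and the list may grow by one carry position):

```python
def fib_increment(bits):
    """Increment a valid Zeckendorf bit list (bits[0] least significant,
    weighting F_2) by one, in O(k) time."""
    c = bits + [0, 0]                      # room for the carry flag
    # transform the two least significant bits: 00 -> 01, 01 -> 10, 10 -> 11
    if c[0] == 0 and c[1] == 0:
        c[0] = 1
    elif c[0] == 1:
        c[0], c[1] = 0, 1
    else:                                  # c[0] == 0, c[1] == 1
        c[0] = 1
    # fix-up sweep from right to left: 011 -> 100 removes an adjacent pair,
    # possibly recreating one two positions to the left
    for i in range(len(c) - 2):
        if c[i] == 1 and c[i + 1] == 1:
            c[i], c[i + 1], c[i + 2] = 0, 0, 1
    while len(c) > 1 and c[-1] == 0:       # trim leading zeros
        c.pop()
    return c
```

Starting from `[0]` and applying `fib_increment` repeatedly enumerates the Zeckendorf representations of 1, 2, 3, …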
If $a_j = 1$, then we temporarily increment $a_j$ to $2$. Now we need to remove this $2$, but fortunately, it must be surrounded on both sides by $0$s (or be the last digit on the right). Thus for $j\geq 2$ we can use one of the following transformations, which don't change the value of the number: $0200 \to 0111 \to 1001 \qquad 0201 \to 0112 \to 1002$ The former removes the $2$ completely, the latter moves it two positions to the right. We can represent these two transformations in a short form, where $x \in \{0,1\}$ and $\bar{x} = x+1$: $020x \to 100\bar{x}$ If the $2$ arrives at the right end of the string, we can use other transformations (a dot follows the least significant bit): $020. \to 101. \qquad 02. \to 10.$ After that we end up with a string consisting of $0$s and $1$s, but there could be some adjacent pairs of $1$s. In fact there can be at most two such pairs – one generated by the first application of the general rule (in $a_{j+2}a_{j+1}$ if $a_{j+2}=1$), and one generated by the last application. The latter must be preceded by $00$, thus it is easily removed using $011 \to 100$, and the former can be removed by a recursive fix-up as in the incrementation algorithm. That gives us an $O(k)$ algorithm for adding $F_j$. Therefore if we want to add two $k$-bit numbers $a_{k-1} \ldots a_1 a_0$ and $b_{k-1} \ldots b_1 b_0$, we can apply this algorithm $O(k)$ times (once for each $b_j=1$). That gives us an $O(k^2)$ algorithm for adding two $k$-bit numbers in Fibonacci representation. How can we do it faster? We can follow the same idea: do some incrementation and then clean up unwanted patterns. But now we increment all bits at once, i.e., we add both numbers position-wise, obtaining $c_j = a_j + b_j$. The number $c_{k-1} \ldots c_1 c_0$ will contain digits from the set $\{0,1,2\}$. This number could have two problems: it could contain any number of $2$s (but each $2$ must be surrounded by two $0$s), and any number of runs of adjacent $1$s of arbitrary length.
First we remove all $2$s, ignoring adjacent $1$s for a while. Note that the $020x \to 100\bar{x}$ transformation alone is not enough, since $x$ could be equal to $2$, leaving us with yet another digit $\bar{x} = 3$. It turns out that this is not a big problem, since we can easily remove it using the following transformation: $030x \to 021\bar{x} \to 110\bar{x}$ But there is another wrinkle: since we can have adjacent $1$s, after the transition $0201 \to 1002$ we can end up with a $2$ followed by a $1$. Similarly, with multiple $2$s we can end up with a $1$ followed by a $2$ after the transition $0200 \to 1001$. Thus we introduce two additional transitions to remove such patterns: $021x \to 110x \qquad 012x \to 101x$ Overall we have the following list of transitions: $020x \to 100\bar{x} \qquad 030x \to 110\bar{x} \qquad 021x \to 110x \qquad 012x \to 101x$ where $x\in\{0,1,2\}$ and $\bar{x}=x+1$. Again, we need some additional transformations for the right end of the string. But instead of hard-coding these cases, we can temporarily extend the representation with two bogus bits $c_{-1} = c_{-2} = 0$ and perform only the general transformations. Since both of these are $0$s, we remove the $2$ for sure, though one of the bogus bits may become $1$. If it is $c_{-2}$, we can ignore it completely, since it corresponds to $F_0 = 0$. After this phase we have a string consisting of $0$s and $1$s, but it can contain arbitrarily long runs of adjacent $1$s. To fix this, we apply a sweep of the transformation $011 \to 100$ twice. First from left to right, which is equivalent to making the following transformations for $y\in\{0,1\}$: $y01^{2s}0 \to y(10)^s00 \qquad y01^{2s+1}0 \to y(10)^s010$ After that, every group of adjacent $1$s has length at most two, and we can remove them all with another sweep from right to left. Also, $c_0$ and $c_{-1}$ cannot both equal $1$ (and $F_1 = F_2 = 1$), so if either of these bits is $1$, we can simply set $c_0$ to $1$ and drop $c_{-1}$.
This way we obtain an algorithm for adding two $k$-bit numbers in Fibonacci representation that runs in $O(k)$ time. To be continued…
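For testing routines like these, a plain greedy converter between integers and Fibonacci representation is handy (my reference code — it does exactly the conversion the fast algorithms are designed to avoid):

```python
# Reference converters between integers and Fibonacci (Zeckendorf)
# representation, useful as an independent check of the in-place algorithms.
# bits[0] is the digit for F_2 = 1.

def zeckendorf_encode(n):
    """Greedy encoding: repeatedly subtract the largest Fibonacci number."""
    fibs = [1, 2]                      # F_2, F_3, ...
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    bits = [0] * len(fibs)
    for i in range(len(fibs) - 1, -1, -1):
        if fibs[i] <= n:
            bits[i] = 1
            n -= fibs[i]
    return bits

def zeckendorf_decode(bits):
    """Integer value of a Fibonacci bit string."""
    f, g, total = 1, 2, 0
    for b in bits:
        total += b * f
        f, g = g, f + g
    return total
```

The greedy choice never selects two consecutive Fibonacci numbers, so the output is always a valid (adjacent-$1$-free) representation.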
So, I need to prove the identity $$\int_{-\infty}^\infty \cos t^2 \, dt = \int_{-\infty}^\infty \sin t^2 \, dt = \sqrt{\frac{\pi}{2}}$$ and as a hint I have the Gaussian integral $$\int_{-\infty}^\infty e^{-xt^2} \, dt = \sqrt{\frac{\pi}{x}} \;\;\;\forall x>0.$$ I suspect I have to take the real/imaginary part of $e^{it^2}$ at some point, but I can't quite figure out how. I.e., $\int e^z \, dz = e^z$ gives me nothing. So, how do I do it?
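For what it's worth, one standard route from the hint (a sketch; justifying the limit takes a separate argument, e.g. analytic continuation to $\operatorname{Re} x > 0$ plus continuity up to the boundary, or a contour-rotation argument):

```latex
% Both sides of the Gaussian formula are analytic in x for \Re x > 0,
% so (with justification) we may let x \to -i:
\int_{-\infty}^{\infty} e^{-xt^2}\,dt = \sqrt{\frac{\pi}{x}}
\;\xrightarrow{\;x \to -i\;}\;
\int_{-\infty}^{\infty} e^{it^2}\,dt
  = \sqrt{\frac{\pi}{-i}}
  = \sqrt{\pi}\, e^{i\pi/4}
  = \sqrt{\frac{\pi}{2}}\,(1 + i).
% Comparing real and imaginary parts of e^{it^2} = \cos t^2 + i \sin t^2:
\int_{-\infty}^{\infty} \cos t^2\,dt
  = \int_{-\infty}^{\infty} \sin t^2\,dt
  = \sqrt{\frac{\pi}{2}}.
```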
Background Consider $BU=colim \, BU_k$ where we take $BU_k$ to be the specific model of classifying space for the group $U(k)\subseteq O(2k)$ given by the quotient space of the infinite real Stiefel manifold $V_{2k}$ by the action of $U(k)$. The spaces $BU_k$ as described come with maps $f_k : BU_k \rightarrow BO_{2k}$ that are fibrations. With the above setup, we can define a $(BU,f)$-structure on a stable vector bundle $\xi : X \rightarrow BO_{2k}$ as a particular lift $\tilde{\xi} : X \rightarrow BU_k$. We consider lifts $\tilde{\xi}_0, \tilde{\xi}_1$ equivalent if there is $k>>0$ and a fiberwise homotopy $H:X\times [0,1] \rightarrow BU_k$ between the two lifts. (Fiberwise homotopy means $f_k \circ H = \xi$). There is a map $I: BO \rightarrow BO$ given by sending a subspace $A\subseteq \mathbb{R}^n$ to its orthogonal complement $A^{\perp} \subseteq \mathbb{R}^n$. The question In Stong's notes on cobordism theory, he shows that a $(BU,f)$-structure on the stable normal bundle is equivalent to an $(I^*BU,f^*)$-structure on the stable tangent bundle; this is OK. Is it possible, though, to construct a bijection between $(BU,f)$-structures on the stable normal bundle and $(BU,f)$-structures on the stable tangent bundle? Some thoughts For other kinds of $(B,f)$-structures it is doable, I believe. Certainly for $(BO,1)$-structures you can do it. Also, for $(BSO,f)$-structures you can do it. If $TX$ is the tangent bundle of $X$ and $N$ is the normal bundle to $X$ for some embedding in $\mathbb{R}^{n+k}$, $k>>0$, one has a canonical trivialization $TX \oplus N \cong \epsilon^{n+k}$. As the trivial bundle has a canonical choice of orientation, given an orientation of $TX$, we can get an orientation on $N$ by requiring the induced orientation on $\epsilon^{n+k}$ agrees with the canonical one. One can do the same in the other direction. A note in Davis & Kirk claims you can do it for complex structures (Exercise 137), but I don't think the discussion is correct. 
It works for complex vector bundles, but that is weaker than having complex structures. E.g. the case of $X=pt$, with a trivial 2-dimensional bundle $\epsilon^2$. There are two possible (inequivalent) lifts of the bundle to $BU_1$ as defined above, but only one lift to $G_1(\mathbb{C})$.
On the positive solutions for a perturbed negative exponent problem on $\mathbb{R}^3$

Sanjiban Santra, Department of Basic Mathematics, Centro de Investigación en Matemáticas, Guanajuato, Mexico

We consider the problem
$$\left\{\begin{aligned} \Delta^2 u&=-\frac{15}{16}(1+ \varepsilon Q)u^{-7} &&\text{ in } \mathbb{R}^3,\\ u &>0 &&\text{ in } \mathbb{R}^3,\\ u(x) &\sim |x| &&\text{ as } |x|\to \infty, \end{aligned}\right.$$
where $Q$ is a $C^{1}$ function on $\mathbb{R}^3$ and $\varepsilon >0$.

Mathematics Subject Classification: Primary: 35G20; Secondary: 35B09, 35B20.

Citation: Sanjiban Santra. On the positive solutions for a perturbed negative exponent problem on $\mathbb{R}^3$. Discrete & Continuous Dynamical Systems - A, 2018, 38 (3): 1441-1460. doi: 10.3934/dcds.2018059
Piotr Indyk, Ali Vakilian, Tal Wagner, David P. Woodruff; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1723-1751, 2019. Abstract A distance matrix $A \in \mathbb{R}^{n \times m}$ represents all pairwise distances, $A_{i,j} = d(x_i,y_j)$, between two point sets $x_1,\dotsc,x_n$ and $y_1,\dotsc,y_m$ in an arbitrary metric space $(\mathcal{Z},d)$. Such matrices arise in various computational contexts such as learning image manifolds, handwriting recognition, and multi-dimensional unfolding. In this work we study algorithms for low-rank approximation of distance matrices. Recent work by Bakshi and Woodruff (NeurIPS 2018) showed it is possible to compute a rank-$k$ approximation of a distance matrix in time $O((n+m)^{1+\gamma}) \mathrm{poly}(k,1/\epsilon)$, where $\epsilon>0$ is an error parameter and $\gamma>0$ is an arbitrarily small constant. Notably, their bound is sublinear in the matrix size, which is unachievable for general matrices. We present an algorithm that is both simpler and more efficient. It reads only $O((n+m)k/\epsilon)$ entries of the input matrix, and has a running time of $O(n+m) \cdot \mathrm{poly}(k,1/\epsilon)$. We complement the sample complexity of our algorithm with a matching lower bound on the number of entries that must be read by any algorithm.
We provide experimental results to validate the approximation quality and running time of our algorithm.
Consider the following example: $$ - \Delta u = f \mbox{ in } \Omega, \qquad u = 0 \mbox{ on } \Gamma, $$ where $\Gamma$ is the boundary of $\Omega$. To produce the weak formulation we multiply by an arbitrary $v$ from $H^1(\Omega)$, integrate over $\Omega$ and apply integration by parts: $$ \int_{\Omega} \nabla u \cdot \nabla v \, dx - \int_{\Gamma} \frac{\partial u}{\partial n} v \, ds = \int_{\Omega} f v \, dx. $$ Because we don't have information about $\partial u /\partial n$ on $\Gamma$, we restrict $v$ to lie in $V = \{ v \in H^1(\Omega): v|_\Gamma = 0 \}$. This newly constructed space may not be complete, so we need to complete it with respect to the Sobolev norm (otherwise we won't be able to apply the Lax–Milgram theorem and prove existence of a weak solution). But after this procedure we might have functions in $V$ which violate the boundary condition $v|_\Gamma = 0$. Why, after completing $V$ with respect to the Sobolev norm, don't we run into functions with $v|_\Gamma \neq 0$? All the textbooks I've consulted skip this point, probably because it is obvious. So first of all, $H^1$ does not really have a restriction map, even to interior points, much less boundary points (except in one dimension, where there is a continuous embedding into $C^0$). The better way of thinking about this is to start out with smooth test functions, then look at the equation that you get and identify the solution space and the test function space appropriately. The space that you get in both cases is denoted by $H^1_0$. This is essentially what we mean by $\{ f \in H^1 : \left. f \right |_\Gamma = 0 \}$. More formally it can be identified either as the completion of $C^\infty_c$ in $H^1$ or as the kernel of the map $T : H^1(\Omega) \to L^2(\Gamma)$ which is the continuous extension of the restriction map. This map is called the trace, and its existence requires a little bit of regularity of the boundary $\Gamma$.
(As I recall Lipschitz is enough, but references like Evans tend to assume $C^1$ or at least piecewise $C^1$ boundary to simplify the proof.) It is usual to assume some regularity property for $\Gamma$. For example, in Evans's PDE book, if $\Omega$ is bounded and has a $C^1$ boundary, then there is a (bounded) trace operator $$T : H^1(\Omega) \to L^2(\Gamma)$$ such that $T u = u|_\Gamma$ if $u \in H^1(\Omega) \cap C(\overline\Omega)$. It is also proved that $$\{ u \in H^1(\Omega) : Tu = 0\} = H^1_0(\Omega),$$ thus $\{ u \in H^1(\Omega) : Tu = 0\}$ is already a complete space.
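The crux, spelled out in one line (my phrasing, not the original answer's): boundedness of the trace makes its kernel closed, hence complete.

```latex
% T is bounded: \|Tu\|_{L^2(\Gamma)} \le C \|u\|_{H^1(\Omega)}.
% Hence V := \ker T = \{u \in H^1(\Omega) : Tu = 0\} is closed in the
% complete space H^1(\Omega), and a closed subspace of a complete space
% is itself complete. In particular, if u_n \to u in H^1(\Omega) with
% Tu_n = 0, then
\|Tu\|_{L^2(\Gamma)} = \lim_{n\to\infty} \|Tu_n\|_{L^2(\Gamma)} = 0,
% so the completion cannot contain functions with nonzero trace.
```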
Definition:Lower Closure/Element Definition Let $\left({S, \preccurlyeq}\right)$ be an ordered set. Let $a \in S$. The lower closure of $a$ (in $S$) is defined as: $a^\preccurlyeq := \left\{{b \in S: b \preccurlyeq a}\right\}$ Also known as The lower closure of an element $a$ is also known as: the down-set of $a$ the down set of $a$ the lower set of $a$ the set of preceding elements to $a$ The terms weak lower closure and weak down-set are also encountered, so as explicitly to distinguish this from the strict lower closure of $a$. When $\left({S, \preccurlyeq}\right)$ is a well-ordered set, the term weak initial segment is often used, and defined as a separate concept in its own right. The notations $S_a$ or $\bar S_a$ are frequently then seen. Some authors use the term (weak) initial segment to refer to the lower closure on a general ordered set. $a^\preccurlyeq := \left\{{b \in S: b \preccurlyeq a}\right\}$: the lower closure of $a \in S$: everything in $S$ that precedes $a$ $a^\succcurlyeq := \left\{{b \in S: a \preccurlyeq b}\right\}$: the upper closure of $a \in S$: everything in $S$ that succeeds $a$ $a^\prec := \left\{{b \in S: b \preccurlyeq a \land a \ne b}\right\}$: the strict lower closure of $a \in S$: everything in $S$ that strictly precedes $a$ $a^\succ := \left\{{b \in S: a \preccurlyeq b \land a \ne b}\right\}$: the strict upper closure of $a \in S$: everything in $S$ that strictly succeeds $a$.
$\displaystyle T^\preccurlyeq := \bigcup \left\{{t^\preccurlyeq: t \in T}\right\}$: the lower closure of $T \subseteq S$: everything in $S$ that precedes some element of $T$ $\displaystyle T^\succcurlyeq := \bigcup \left\{{t^\succcurlyeq: t \in T}\right\}$: the upper closure of $T \subseteq S$: everything in $S$ that succeeds some element of $T$ $\displaystyle T^\prec := \bigcup \left\{{t^\prec: t \in T}\right\}$: the strict lower closure of $T \subseteq S$: everything in $S$ that strictly precedes some element of $T$ $\displaystyle T^\succ := \bigcup \left\{{t^\succ: t \in T}\right\}$: the strict upper closure of $T \subseteq S$: everything in $S$ that strictly succeeds some element of $T$. The astute reader may point out that, for example, $a^\preccurlyeq$ is ambiguous as to whether it means: The lower closure of $a$ with respect to $\preccurlyeq$ The upper closure of $a$ with respect to the dual ordering $\succcurlyeq$ By Lower Closure is Dual to Upper Closure and Strict Lower Closure is Dual to Strict Upper Closure, the two are seen to be equal. Also denoted as Other notations for closure operators include: ${\downarrow} a, {\bar \downarrow} a$ for lower closure of $a \in S$ ${\uparrow} a, {\bar \uparrow} a$ for upper closure of $a \in S$ ${\downarrow} a, {\dot \downarrow} a$ for strict lower closure of $a \in S$ ${\uparrow} a, {\dot \uparrow} a$ for strict upper closure of $a \in S$ However, as there is considerable inconsistency in the literature as to exactly which of these arrow notations is being used at any one time, its use is not endorsed on $\mathsf{Pr} \infty \mathsf{fWiki}$. Also see
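A finite-poset illustration of these definitions (my code, not ProofWiki's), using the divisibility order on $\{1, \ldots, 12\}$:

```python
# Lower/upper closures in a finite ordered set (S, leq).
# Function names are ad hoc, not ProofWiki notation.

def lower_closure(S, leq, a):
    """Weak lower closure of a: everything in S that precedes a."""
    return {b for b in S if leq(b, a)}

def strict_lower_closure(S, leq, a):
    """Strict lower closure: predecessors of a other than a itself."""
    return {b for b in S if leq(b, a) and b != a}

def upper_closure(S, leq, a):
    """Weak upper closure of a: everything in S that succeeds a."""
    return {b for b in S if leq(a, b)}

def lower_closure_of_subset(S, leq, T):
    """Lower closure of a subset T: union of lower closures of its elements."""
    return {b for b in S for t in T if leq(b, t)}

divides = lambda a, b: b % a == 0   # the divisibility ordering
S = set(range(1, 13))
```

For example, the lower closure of 12 under divisibility is its set of divisors, {1, 2, 3, 4, 6, 12}.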
But if you don't want to have a Google account: Chrome is really good. Much faster than FF (I can't run FF on either of the laptops here) and more reliable (it restores your previous session if it crashes with 100% certainty). And Chrome has a Personal Blocklist extension which does what you want. : ) Of course you already have a Google account but Chrome is cool : ) Guys, I feel a little defeated in trying to understand infinitesimals. I'm sure you all think this is hilarious. But if I can't understand this, then I'm yet again stalled. How did you guys come to terms with them, later in your studies? do you know the history? Calculus was invented based on the notion of infinitesimals. There were serious logical difficulties found in it, and a new theory developed based on limits. In modern times using some quite deep ideas from logic a new rigorous theory of infinitesimals was created. @QED No. This is my question as best as I can put it: I understand that lim_{x->a} f(x) = f(a), but then to say that the gradient of the tangent curve is some value, is like saying that when x=a, then f(x) = f(a). The whole point of the limit, I thought, was to say, instead, that we don't know what f(a) is, but we can say that it approaches some value. I have problem with showing that the limit of the following function$$\frac{\sqrt{\frac{3 \pi}{2n}} -\int_0^{\sqrt 6}(1-\frac{x^2}{6}+\frac{x^4}{120})^ndx}{\frac{3}{20}\frac 1n \sqrt{\frac{3 \pi}{2n}}}$$equal to $1$, with $n \to \infty$. @QED When I said, "So if I'm working with function f, and f is continuous, my derivative dy/dx is by definition not continuous, since it is undefined at dx=0." I guess what I'm saying is that (f(x+h)-f(x))/h is not continuous since it's not defined at h=0. @KorganRivera There are lots of things wrong with that: dx=0 is wrong. dy/dx - what/s y? "dy/dx is by definition not continuous" it's not a function how can you ask whether or not it's continous, ... etc. 
In general this stuff with 'dy/dx' is supposed to help as some kind of memory aid, but since there's no rigorous mathematics behind it - all it's going to do is confuse people in fact there was a big controversy about it since using it in obvious ways suggested by the notation leads to wrong results @QED I'll work on trying to understand that the gradient of the tangent is the limit, rather than the gradient of the tangent approaches the limit. I'll read your proof. Thanks for your help. I think I just need some sleep. O_O @NikhilBellarykar Either way, don't highlight everyone and ask them to check out some link. If you have a specific user which you think can say something in particular feel free to highlight them; you may also address "to all", but don't highlight several people like that. @NikhilBellarykar No. I know what the link is. I have no idea why I am looking at it, what should I do about it, and frankly I have enough as it is. I use this chat to vent, not to exercise my better judgment. @QED So now it makes sense to me that the derivative is the limit. What I think I was doing in my head was saying to myself that g(x) isn't continuous at x=h so how can I evaluate g(h)? But that's not what's happening. The derivative is the limit, not g(h). @KorganRivera, in that case you'll need to be proving $\forall \varepsilon > 0,\,\,\,\, \exists \delta,\,\,\,\, \forall x,\,\,\,\, 0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon.$ by picking some correct L (somehow) Hey guys, I have a short question a friend of mine asked me which I cannot answer because I have not learnt about measure theory (or whatever is needed to answer the question) yet. He asks what is wrong with $\int_0^{2 \pi} \frac{d}{dn} e^{inx} \, dx$ when he applies Lebesgue's dominated convergence theorem, because apparently, if he first integrates and then differentiates, the result is 0, but if he first differentiates and then integrates it's not 0. Does anyone know?
Simulate Neyman-Scott Point Process with Variance Gamma cluster kernel

Generate a random point pattern, a simulated realisation of the Neyman-Scott process with Variance Gamma (Bessel) cluster kernel.

Usage

rVarGamma(kappa, nu, scale, mu, win = owin(), thresh = 0.001, nsim=1, drop=TRUE, saveLambda=FALSE, expand = NULL, ..., poisthresh=1e-6, saveparents=TRUE)

Arguments

kappa: Intensity of the Poisson process of cluster centres. A single positive number, a function, or a pixel image.
nu: Shape parameter for the cluster kernel. A number greater than -1.
scale: Scale parameter for cluster kernel. Determines the size of clusters. A positive number in the same units as the spatial coordinates.
mu: Mean number of points per cluster (a single positive number) or reference intensity for the cluster points (a function or a pixel image).
win: Window in which to simulate the pattern. An object of class "owin" or something acceptable to as.owin.
thresh: Threshold relative to the cluster kernel value at the origin (parent location) determining when the cluster kernel will be treated as zero for simulation purposes. Will be overridden by argument expand if that is given.
nsim: Number of simulated realisations to be generated.
drop: Logical. If nsim=1 and drop=TRUE (the default), the result will be a point pattern, rather than a list containing a point pattern.
saveLambda: Logical. If TRUE then the random intensity corresponding to the simulated parent points will also be calculated and saved, and returned as an attribute of the point pattern.
expand: Numeric. Size of window expansion for generation of parent points. By default determined by calling clusterradius with the numeric threshold value given in thresh.
…
poisthresh: Numerical threshold below which the model will be treated as a Poisson process. See Details.
saveparents: Logical value indicating whether to save the locations of the parent points as an attribute.
Details

This algorithm generates a realisation of the Neyman-Scott process with Variance Gamma (Bessel) cluster kernel, inside the window win. The process is constructed by first generating a Poisson point process of "parent" points with intensity kappa. Then each parent point is replaced by a random cluster of points, the number of points in each cluster being random with a Poisson(mu) distribution, and the points being placed independently according to a Variance Gamma kernel. The shape of the kernel is determined by the dimensionless index nu. This is the parameter \(\nu^\prime = \alpha/2-1\) appearing in equation (12) on page 126 of Jalilian et al (2013). The scale of the kernel is determined by the argument scale, which is the parameter \(\eta\) appearing in equations (12) and (13) of Jalilian et al (2013). It is expressed in units of length (the same as the unit of length for the window win). In this implementation, parent points are not restricted to lie in the window; the parent process is effectively the uniform Poisson process on the infinite plane. This model can be fitted to data by the method of minimum contrast, maximum composite likelihood or Palm likelihood using kppm. The algorithm can also generate spatially inhomogeneous versions of the cluster process:

The parent points can be spatially inhomogeneous. If the argument kappa is a function(x,y) or a pixel image (object of class "im"), then it is taken as specifying the intensity function of an inhomogeneous Poisson process that generates the parent points.

The offspring points can be inhomogeneous. If the argument mu is a function(x,y) or a pixel image (object of class "im"), then it is interpreted as the reference density for offspring points, in the sense of Waagepetersen (2007).
When the parents are homogeneous (kappa is a single number) and the offspring are inhomogeneous (mu is a function or pixel image), the model can be fitted to data using kppm, or using vargamma.estK or vargamma.estpcf applied to the inhomogeneous \(K\) function. If the pair correlation function of the model is very close to that of a Poisson process, deviating by less than poisthresh, then the model is approximately a Poisson process, and will be simulated as a Poisson process with intensity kappa * mu, using rpoispp. This avoids computations that would otherwise require huge amounts of memory.

Value

A point pattern (an object of class "ppp") if nsim=1, or a list of point patterns if nsim > 1. Additionally, some intermediate results of the simulation are returned as attributes of this point pattern (see rNeymanScott). Furthermore, the simulated intensity function is returned as an attribute "Lambda", if saveLambda=TRUE.

References

Jalilian, A., Guan, Y. and Waagepetersen, R. (2013) Decomposition of variance for spatial Cox processes. Scandinavian Journal of Statistics 40, 119-137.

Waagepetersen, R. (2007) An estimating function approach to inference for inhomogeneous Neyman-Scott processes. Biometrics 63, 252-258.

See Also

Aliases rVarGamma

Examples

# homogeneous
X <- rVarGamma(30, 2, 0.02, 5)
# inhomogeneous
ff <- function(x,y){ exp(2 - 3 * abs(x)) }
Z <- as.im(ff, W = owin())
Y <- rVarGamma(30, 2, 0.02, Z)
YY <- rVarGamma(ff, 2, 0.02, 3)

Documentation reproduced from package spatstat, version 1.55-1, License: GPL (>= 2)
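The parent-then-offspring construction described in Details can be sketched outside R as well. Below is a minimal Python sketch (mine, not spatstat's); for brevity it uses an isotropic Gaussian displacement kernel (a Thomas process) as a stand-in for the variance-gamma kernel, which only changes the displacement distribution:

```python
import math
import random

def poisson(lam, rng):
    """Knuth's Poisson sampler (fine for modest lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def r_neyman_scott(kappa, mu, sigma, win=(0.0, 1.0, 0.0, 1.0),
                   expand=0.1, seed=None):
    """Simulate a Neyman-Scott cluster process on a rectangle.

    Parents form a Poisson process with intensity kappa on the expanded
    window (mimicking spatstat's window expansion); each parent gets
    Poisson(mu) offspring, displaced by a Gaussian kernel with sd sigma.
    Returns the offspring points clipped to the original window.
    """
    rng = random.Random(seed)
    x0, x1, y0, y1 = win
    ex0, ex1 = x0 - expand, x1 + expand
    ey0, ey1 = y0 - expand, y1 + expand
    area = (ex1 - ex0) * (ey1 - ey0)
    points = []
    for _ in range(poisson(kappa * area, rng)):      # parent points
        px = rng.uniform(ex0, ex1)
        py = rng.uniform(ey0, ey1)
        for _ in range(poisson(mu, rng)):            # offspring cluster
            qx = px + rng.gauss(0.0, sigma)
            qy = py + rng.gauss(0.0, sigma)
            if x0 <= qx <= x1 and y0 <= qy <= y1:    # clip to the window
                points.append((qx, qy))
    return points
```

For example, `r_neyman_scott(20, 5, 0.03, seed=42)` yields roughly `20 * 5 = 100` clustered points in the unit square.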
Faddeeva Package

From AbInitio (revision as of 22:47, 29 October 2012)

Faddeeva / complex error function

Steven G. Johnson has written free/open-source C++ code (with wrappers for other languages) to compute the scaled complex error function w(z) = e^{-z^2} erfc(-iz), also called the Faddeeva function (and also the plasma dispersion function), for arbitrary complex arguments z to a given accuracy. Given the Faddeeva function, one can easily compute Voigt functions, the Dawson function, and similar related functions.
Download the source code from: http://ab-initio.mit.edu/Faddeeva_w.cc (updated 29 October 2012)

Usage

To use the code, add the following declaration to your C++ source (or header file):

#include <complex>
extern std::complex<double> Faddeeva_w(std::complex<double> z, double relerr=0);

The function Faddeeva_w(z, relerr) computes w(z) to a desired relative error relerr. Omitting the relerr argument, or passing relerr=0 (or any relerr less than machine precision ε≈10^-16), corresponds to requesting machine precision, and in practice a relative error < 10^-13 is usually achieved. Specifying a larger value of relerr may improve performance (at the expense of accuracy). You should also compile Faddeeva_w.cc and link it with your program, of course. In terms of w(z), some other important functions are:

erfcx(z) = e^{z^2} erfc(z) = w(iz) (scaled complementary error function)
erfc(z) = e^{-z^2} w(iz) (complementary error function)
erf(z) = 1 - e^{-z^2} w(iz) (error function)
erfi(x) = -i erf(ix) = -i[e^{x^2} w(x) - 1] (imaginary error function)
F(z) = (i√π/2)[e^{-z^2} - w(z)] (Dawson function)
Voigt(x,y) = Re[w(x+iy)] (real Voigt function, up to scale factor)

Note that in the case of erf and erfc, we provide different equations for positive and negative x, in order to avoid numerical problems arising from multiplying exponentially large and small quantities.

Wrappers: Matlab, GNU Octave, and Python

Wrappers are available for this function in other languages. Matlab (also available here): A function Faddeeva_w(z, relerr), where the arguments have the same meaning as above (the relerr argument is optional), can be downloaded from Faddeeva_w_mex.cc (along with the help file Faddeeva_w.m). Compile it into a MEX file with:

mex -output Faddeeva_w -O Faddeeva_w_mex.cc Faddeeva_w.cc

GNU Octave: A function Faddeeva_w(z, relerr), where the arguments have the same meaning as above (the relerr argument is optional), can be downloaded from Faddeeva_w_oct.cc.
Compile it into an Octave plugin with:

mkoctfile -DMPICH_SKIP_MPICXX=1 -DOMPI_SKIP_MPICXX=1 -s -o Faddeeva_w.oct Faddeeva_w_oct.cc Faddeeva_w.cc

Python: Our code is used to provide scipy.special.wofz in SciPy starting in version 0.12.0 (see here).

Algorithm

This implementation uses a combination of different algorithms. For sufficiently large |z|, we use a continued-fraction expansion for w(z) similar to those described in:

Walter Gautschi, "Efficient computation of the complex error function," SIAM J. Numer. Anal. 7(1), pp. 187–198 (1970).

G. P. M. Poppe and C. M. J. Wijers, "More efficient computation of the complex error function," ACM Trans. Math. Soft. 16(1), pp. 38–46 (1990); this is TOMS Algorithm 680.

Unlike those papers, however, we switch to a completely different algorithm for smaller |z|:

Mofreh R. Zaghloul and Ahmed N. Ali, "Algorithm 916: Computing the Faddeyeva and Voigt Functions," ACM Trans. Math. Soft. 38(2), 15 (2011). Preprint available at arXiv:1106.0151.

(I initially used this algorithm for all z, but the continued-fraction expansion turned out to be faster for larger |z|. On the other hand, Algorithm 916 is competitive or faster for smaller |z|, and appears to be significantly more accurate than the Poppe & Wijers code in some regions, e.g. in the vicinity of |z|=1 [although comparison with other compilers suggests that this may be a problem specific to gfortran]. Algorithm 916 also has better relative accuracy in Re[w(z)] for some regions near the real-z axis. You can switch back to using Algorithm 916 for all z by changing USE_CONTINUED_FRACTION to 0 in the code.)

Note that this is SGJ's independent re-implementation of these algorithms, based on the descriptions in the papers only. In particular, we did not refer to the authors' Fortran or Matlab implementations (respectively), which are under restrictive "semifree" ACM copyright terms and are therefore unusable in free/open-source software.
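For intuition, the large-|z| continued-fraction idea can be sketched in a few lines of Python. This is not the package's actual implementation, only the classical Laplace continued fraction that Gautschi- and Poppe-Wijers-type methods build on, evaluated from the inside out; it is accurate only away from the real axis and for largish |z|:

```python
import math

def faddeeva_cf(z, nterms=60):
    """Approximate w(z) = exp(-z^2) * erfc(-iz) for Im(z) > 0 via the
    classical Laplace continued fraction (accurate only for largish |z|):
        w(z) ~ (i/sqrt(pi)) / (z - (1/2)/(z - 1/(z - (3/2)/(z - ...))))
    """
    t = z
    for n in range(nterms, 0, -1):   # evaluate the fraction from the inside out
        t = z - (n / 2.0) / t
    return 1j / math.sqrt(math.pi) / t

# Sanity check on the imaginary axis, where w(iy) = erfcx(y) = exp(y^2)*erfc(y):
for y in (3.0, 5.0):
    exact = math.exp(y * y) * math.erfc(y)
    approx = faddeeva_cf(complex(0.0, y), nterms=150)
    print(y, abs(approx - exact) / exact)
```

For small |z| this fraction converges too slowly to be useful, which is exactly why the package switches to Algorithm 916 there.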
Algorithm 916 requires an external complementary error function erfc(x) for real arguments x to be supplied as a subroutine. More precisely, it requires the scaled function erfcx(x) = e^{x^2} erfc(x). Here, we use an erfcx routine written by SGJ that uses a combination of two algorithms: a continued-fraction expansion for large x and a lookup table of Chebyshev polynomials for small x. (I initially used an erfcx function derived from the DERFC routine in SLATEC, modified by SGJ to compute erfcx instead of erfc, but the new erfcx routine is much faster.)

Test program

To test the code, a small test program is included at the end of Faddeeva_w.cc which tests w(z) against several known results (from Wolfram Alpha) and prints the relative errors obtained. To compile the test program, #define FADDEEVA_W_TEST in the file (or compile with -DFADDEEVA_W_TEST on Unix) and compile Faddeeva_w.cc. The resulting program prints SUCCESS at the end of its output if the errors were acceptable.

License

The software is distributed under the "MIT License", a simple permissive free/open-source license:

Copyright © 2012 Massachusetts Institute of Technology

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
I'd like to start off by saying I am aware of this post, but I think my question is different enough to warrant its own post. Some notation, because I don't think this is standard: $N^X_\epsilon(x)$ is just the $\epsilon$-ball centered at $x$ in $X$. So say $X \subseteq \mathbb{R}$, we would have $N^X_\epsilon(x) = (x - \epsilon, x + \epsilon) \cap X$. "$\overset{\text{op}}{\subseteq}$" and "$\overset{\text{cl}}{\subseteq}$" mean open and closed subsets respectively. My lecturer supplied me with this proof of the unit interval being connected: Let $U \subseteq I$ be an open and closed subset containing $0$. We want to show that $U = I$: if $q \in I$, we must show $q \in U$. Since $U \overset{\text{cl}}{\subseteq} I \overset{\text{cl}}{\subseteq} \mathbb{R}$, we have that $U \overset{\text{cl}}{\subseteq} \mathbb{R}$. We get that $U \cap [0, q] \subseteq \mathbb{R}$ is non-empty, closed and bounded (by $1$), and thus has a maximum $z \in U \cap [0, q]$. If we show that $z = q$ we have succeeded in showing $q \in U$. Since $U \overset{\text{op}}{\subseteq} I$, there is an $\epsilon > 0$ such that $N_\epsilon^I(z) = (z - \epsilon, z + \epsilon) \cap I \subseteq U$, i.e., such that $$N^{[0, q]}_\epsilon(z) = (z - \epsilon, z + \epsilon) \cap [0, q] \subseteq U \cap [0, q]$$ I can follow the proof up until this part, but then he picks up his pace and suddenly says The fact that $z$ is a maximum of $U \cap [0, q]$ then rules out the case when $z < q$; so $z = q. \quad \Box$ I fail to see why this last line is true, and what it has to do with the fact that $U$ is open in $I$. I think there's a lot happening between the lines here, and I wonder if anyone of you can explain exactly what's happening?
VIII - Neural Networks: Representation

8.1 - Non-linear Hypothesis

If we train a logistic regression algorithm with n features, including all the quadratic features \(x_ix_j\), we get approximately \(\frac{n^2}{2}\) features in total.

8.2 - Neurons and the Brain

Origin of neural networks: try to mimic the brain. Was widely used in the 80s and early 90s. Right now it is the state of the art for many applications. If we rewire the visual signal to the auditory cortex or somatosensory cortex, that cortex learns to see! (these are called neuro-rewiring experiments). We can learn to see with our tongue.

8.3 - Model Representation 1

Neuron inputs: the dendrites. Neuron output: the axon. Neurons communicate with pulses of electricity.

⇒ Add single neuron drawing here

Usually when drawing the neuron inputs we only draw x1, x2, x3, etc., not x0. x0 is called the bias unit (x0 = 1). In neural networks, we sometimes use weights instead of parameters (\(\theta\)).

⇒ Add neural network drawing here

Layer 1 is called the input layer and the final layer is called the output layer. In-between layers are called hidden layers.

\(a_i^{(j)}\) = “activation” of unit i in layer j

\(\Theta^{(j)}\) = matrix of weights controlling the function mapping from layer j to layer j+1.

So, on the previous drawing we have:

\[a_1^{(2)} = g(\Theta_{10}^{(1)}x_0+\Theta_{11}^{(1)}x_1+\Theta_{12}^{(1)}x_2+\Theta_{13}^{(1)}x_3)\]
\[a_2^{(2)} = g(\Theta_{20}^{(1)}x_0+\Theta_{21}^{(1)}x_1+\Theta_{22}^{(1)}x_2+\Theta_{23}^{(1)}x_3)\]
\[a_3^{(2)} = g(\Theta_{30}^{(1)}x_0+\Theta_{31}^{(1)}x_1+\Theta_{32}^{(1)}x_2+\Theta_{33}^{(1)}x_3)\]
\[h_\Theta(x) = a_1^{(3)} = g(\Theta_{10}^{(2)}a_0^{(2)}+\Theta_{11}^{(2)}a_1^{(2)}+\Theta_{12}^{(2)}a_2^{(2)}+\Theta_{13}^{(2)}a_3^{(2)})\]

If a network has \(s_j\) units in layer j and \(s_{j+1}\) units in layer j+1, then \(\Theta^{(j)}\) will be of dimension \(s_{j+1} \times (s_j+1)\).
8.4 - Model Representation 2

We define \(z_1^{(2)} = \Theta_{10}^{(1)}x_0+\Theta_{11}^{(1)}x_1+\Theta_{12}^{(1)}x_2+\Theta_{13}^{(1)}x_3\), and we define \(z_2^{(2)}\) and \(z_3^{(2)}\) similarly. So we have the vector \(x = \begin{bmatrix} x_0 \\ x_1 \\ x_2 \\ x_3 \end{bmatrix}\), and we define \(z^{(2)} = \begin{bmatrix} z_1^{(2)} \\ z_2^{(2)} \\ z_3^{(2)} \end{bmatrix}\); we also define \(a^{(2)}\) similarly. Then we can use the vectorized computation: \(z^{(2)} = \Theta^{(1)}x\) and \(a^{(2)} = g(z^{(2)})\).

Now to make things a bit easier, we can just define \(a^{(1)} = x\), so that we get \(z^{(2)} = \Theta^{(1)}a^{(1)}\). Also note that to compute the next layer we must add the bias component \(a_0^{(2)} = 1\). Then we compute \(z^{(3)} = \Theta^{(2)}a^{(2)}\).

The process of computing \(h_\Theta(x)\) is called forward propagation.

⇒ Neural networks are learning their own features.

The way the units are connected in a neural network is called the architecture.

8.5 - Examples and Intuitions 1

We consider here y = x1 XNOR x2 (i.e., y = NOT (x1 XOR x2)). We can compute the AND function and the OR function with a single neuron (weights -30,20,20 for AND and -10,20,20 for OR).

8.6 - Examples and Intuitions 2

To compute negation, we can also use a single neuron with the weights (10,-20).

8.7 - Multiclass Classification

We just use multiple output units, where each output unit should be “1” when a specific class is found.
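As an unofficial sketch in plain Python, the single-neuron weights from 8.5 and 8.6 can be wired together to verify the XNOR network by forward propagation. The (NOT x1) AND (NOT x2) unit with weights (10, -20, -20) is my addition, following the same pattern as the NOT example:

```python
import math

def g(z):
    """Sigmoid activation g(z) = 1 / (1 + e^{-z})."""
    return 1.0 / (1.0 + math.exp(-z))

def forward(theta, a):
    """One step of forward propagation: prepend the bias unit a0 = 1,
    then return g(Theta * a) componentwise."""
    a = [1.0] + list(a)
    return [g(sum(w * v for w, v in zip(row, a))) for row in theta]

# Layer 1 -> 2: an AND unit (-30, 20, 20) and an assumed
# (NOT x1) AND (NOT x2) unit (10, -20, -20).
theta1 = [[-30, 20, 20], [10, -20, -20]]
# Layer 2 -> 3: an OR unit (-10, 20, 20) combining the two hidden units.
theta2 = [[-10, 20, 20]]

for x1 in (0, 1):
    for x2 in (0, 1):
        h = forward(theta2, forward(theta1, (x1, x2)))[0]
        print(x1, x2, round(h))   # rounds to 1 exactly when x1 == x2 (XNOR)
```

The network outputs are very close to 0 or 1 because the weights push the sigmoid far into its saturated regions.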
A tetrahedral snake, sometimes called a Steinhaus snake, is a collection of tetrahedra, linked face to face. Steinhaus showed in 1956 that the last tetrahedron in the snake can never be a translation of the first one. This is a consequence of the fact that the group generated by the four reflexions in the faces of a tetrahedron forms the free product $C_2 \ast C_2 \ast C_2 \ast C_2$. For a proof of this, see Stan Wagon’s book The Banach-Tarski paradox, starting at page 68. The thread $(3|3)$ is the spine of the $(9|1)$-snake which involves the following lattices \[ \xymatrix{& & 1 \frac{1}{3} \ar@[red]@{-}[dd] & & \\ & & & & \\ 1 \ar@[red]@{-}[rr] & & 3 \ar@[red]@{-}[rr] \ar@[red]@{-}[dd] & & 1 \frac{2}{3} \\ & & & & \\ & & 9 & &} \] It is best to look at the four extremal lattices as the vertices of a tetrahedron with the lattice $3$ corresponding to its center of gravity. The congruence subgroup $\Gamma_0(9)$ fixes each of these lattices, and the arithmetic group $\Gamma_0(3|3)$ is the conjugate of $\Gamma_0(1)$ \[ \Gamma_0(3|3) = \{ \begin{bmatrix} \frac{1}{3} & 0 \\ 0 & 1 \end{bmatrix}.\begin{bmatrix} a & b \\ c & d \end{bmatrix}.\begin{bmatrix} 3 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} a & \frac{b}{3} \\ 3c & d \end{bmatrix}~|~ad-bc=1 \} \] We know that $\Gamma_0(3|3)$ normalizes the subgroup $\Gamma_0(9)$ and we need to find the moonshine group $(3|3)$ which should have index $3$ in $\Gamma_0(3|3)$ and contain $\Gamma_0(9)$. So, it is natural to consider the finite group $A=\Gamma_0(3|3)/\Gamma_0(9)$ which is generated by the cosets of \[ x = \begin{bmatrix} 1 & \frac{1}{3} \\ 0 & 1 \end{bmatrix} \qquad \text{and} \qquad y = \begin{bmatrix} 1 & 0 \\ 3 & 1 \end{bmatrix} \] To determine this group we look at its action on the lattices in the $(9|1)$-snake. It will fix the central lattice $3$ but will move the other lattices.
Recall that it is best to associate to the lattice $M.\frac{g}{h}$ the matrix \[ \alpha_{M,\frac{g}{h}} = \begin{bmatrix} M & \frac{g}{h} \\ 0 & 1 \end{bmatrix} \] and then the action is given by right-multiplication. \[ \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.x = \begin{bmatrix} 1 & \frac{1}{3} \\ 0 & 1 \end{bmatrix}, \quad \begin{bmatrix} 1 & \frac{1}{3} \\ 0 & 1 \end{bmatrix}.x = \begin{bmatrix} 1 & \frac{2}{3} \\ 0 & 1 \end{bmatrix}, \quad \begin{bmatrix} 1 & \frac{2}{3} \\ 0 & 1 \end{bmatrix}.x=\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \] That is, $x$ corresponds to a $3$-cycle $1 \rightarrow 1 \frac{1}{3} \rightarrow 1 \frac{2}{3} \rightarrow 1$ and fixes the lattice $9$ (so is rotation around the axis through the vertex $9$). To compute the action of $y$ it is best to use an alternative description of the lattice, interchanging the roles of the base-vectors $\vec{e}_1$ and $\vec{e}_2$. These lattices are projectively equivalent \[ \mathbb{Z} (M \vec{e}_1 + \frac{g}{h} \vec{e}_2) \oplus \mathbb{Z} \vec{e}_2 \quad \text{and} \quad \mathbb{Z} \vec{e}_1 \oplus \mathbb{Z} (\frac{g’}{h} \vec{e}_1 + \frac{1}{h^2M} \vec{e}_2) \] where $g.g’ \equiv~1~(mod~h)$. So, we have equivalent descriptions of the lattices \[ M,\frac{g}{h} = (\frac{g’}{h},\frac{1}{h^2M}) \quad \text{and} \quad M,0 = (0,\frac{1}{M}) \] and we associate to the lattice in the second normal form the matrix \[ \beta_{M,\frac{g}{h}} = \begin{bmatrix} 1 & 0 \\ \frac{g’}{h} & \frac{1}{h^2M} \end{bmatrix} \] and then the action is again given by right-multiplication. In the tetrahedral example we have \[ 1 = (0,\frac{1}{3}), \quad 1\frac{1}{3}=(\frac{1}{3},\frac{1}{9}), \quad 1\frac {2}{3}=(\frac{2}{3},\frac{1}{9}), \quad 9 = (0,\frac{1}{9}) \] and \[ \begin{bmatrix} 1 & 0 \\ \frac{1}{3} & \frac{1}{9} \end{bmatrix}.y = \begin{bmatrix} 1 & 0 \\ \frac{2}{3} & \frac{1}{9} \end{bmatrix},\quad \begin{bmatrix} 1 & 0 \\ \frac{2}{3} & \frac{1}{9} \end{bmatrix}.
y = \begin{bmatrix} 1 & 0 \\ 0 & \frac{1}{9} \end{bmatrix}, \quad \begin{bmatrix} 1 & 0 \\ 0 & \frac{1}{9} \end{bmatrix}. y = \begin{bmatrix} 1 & 0 \\ \frac{1}{3} & \frac{1}{9} \end{bmatrix} \] That is, $y$ corresponds to the $3$-cycle $9 \rightarrow 1 \frac{1}{3} \rightarrow 1 \frac{2}{3} \rightarrow 9$ and fixes the lattice $1$, so it is a rotation around the axis through $1$. Clearly, these two rotations generate the full rotation-symmetry group of the tetrahedron \[ \Gamma_0(3|3)/\Gamma_0(9) \simeq A_4 \] which has a unique subgroup of index $3$, generated by the rotations with angle $180^{\circ}$ around the axes through the midpoints of opposite edges, that is, by $x.y$ and $y.x$. The moonshine group $(3|3)$ is therefore the subgroup generated by \[ (3|3) = \langle \Gamma_0(9),\begin{bmatrix} 2 & \frac{1}{3} \\ 3 & 1 \end{bmatrix},\begin{bmatrix} 1 & \frac{1}{3} \\ 3 & 2 \end{bmatrix} \rangle \]
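As an aside (my own check, not part of the original argument), the permutation actions computed above can be verified mechanically: represent $x$ and $y$ by their action on the four extremal lattices and confirm that they generate a group of order 12 (the rotation group $A_4$), with $x.y$ and $y.x$ generating its unique index-$3$ subgroup of order 4:

```python
def compose(p, q):
    """Apply permutation p first, then q (permutations as image tuples)."""
    return tuple(q[i] for i in p)

def closure(gens):
    """The subgroup generated by gens, by repeated right multiplication."""
    group = {(0, 1, 2, 3)}               # identity
    changed = True
    while changed:
        changed = False
        for g in list(group):
            for h in gens:
                gh = compose(g, h)
                if gh not in group:
                    group.add(gh)
                    changed = True
    return group

# Index the extremal lattices as 0: 1, 1: 1 1/3, 2: 1 2/3, 3: 9.
x = (1, 2, 0, 3)   # 3-cycle 1 -> 1 1/3 -> 1 2/3 -> 1, fixing 9
y = (0, 2, 3, 1)   # 3-cycle 9 -> 1 1/3 -> 1 2/3 -> 9, fixing 1

print(len(closure([x, y])))                          # 12 = |A4|
print(len(closure([compose(x, y), compose(y, x)])))  # 4, the Klein subgroup
```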
Probability Seminar

Revision as of 14:52, 30 April 2019

Spring 2019

Thursdays in 901 Van Vleck Hall at 2:25 PM, unless otherwise noted. We usually end for questions at 3:15 PM.

If you would like to sign up for the email list to receive seminar announcements then please send an email to join-probsem@lists.wisc.edu

January 31, Oanh Nguyen, Princeton

Title: Survival and extinction of epidemics on random graphs with general degrees

Abstract: We establish the necessary and sufficient criterion for the contact process on Galton-Watson trees (resp. random graphs) to exhibit the phase of extinction (resp. short survival). We prove that the survival threshold $\lambda_1$ for a Galton-Watson tree is strictly positive if and only if its offspring distribution has an exponential tail, settling a conjecture by Huang and Durrett. On the random graph with degree distribution $D$, we show that if $D$ has an exponential tail, then for small enough $\lambda$ the contact process with the all-infected initial condition survives for polynomial time with high probability, while for large enough $\lambda$ it runs over exponential time with high probability.
When $D$ is subexponential, the contact process typically displays long survival for any fixed $\lambda>0$. Joint work with Shankar Bhamidi, Danny Nam, and Allan Sly.

Wednesday, February 6 at 4:00pm in Van Vleck 911, Li-Cheng Tsai, Columbia University

Title: When particle systems meet PDEs

Abstract: Interacting particle systems are models that involve many randomly evolving agents (i.e., particles). These systems are widely used in describing real-world phenomena. In this talk we will walk through three facets of interacting particle systems, namely the law of large numbers, random fluctuations, and large deviations. Within each facet, I will explain how Partial Differential Equations (PDEs) play a role in understanding the systems.

Title: Fluctuations of the KPZ equation in d\geq 2 in a weak disorder regime

Abstract: We will discuss some recent work on the Edwards-Wilkinson limit of the KPZ equation with a small coupling constant in d\geq 2.

February 14, Timo Seppäläinen, UW-Madison

Title: Geometry of the corner growth model

Abstract: The corner growth model is a last-passage percolation model of random growth on the square lattice. It lies at the nexus of several branches of mathematics: probability, statistical physics, queueing theory, combinatorics, and integrable systems. It has been studied intensely for almost 40 years. This talk reviews properties of the geodesics, Busemann functions and competition interfaces of the corner growth model, and presents some new qualitative and quantitative results. Based on joint projects with Louis Fan (Indiana), Firas Rassoul-Agha and Chris Janjigian (Utah).

February 21, Diane Holcomb, KTH

Title: On the centered maximum of the Sine beta process

Abstract: There has been a great deal of recent work on the asymptotics of the maximum of characteristic polynomials of random matrices. Other recent work studies the analogous result for log-correlated Gaussian fields.
Here we will discuss a maximum result for the centered counting function of the Sine beta process. The Sine beta process arises as the local limit in the bulk of a beta-ensemble, and was originally described as the limit of a generalization of the Gaussian Unitary Ensemble by Valko and Virag, with an equivalent process identified as a limit of the circular beta ensembles by Killip and Stoiciu. A brief introduction to the Sine process as well as some ideas from the proof of the maximum will be covered. This talk is on joint work with Elliot Paquette.

Title: Quantitative homogenization in a balanced random environment

Abstract: Stochastic homogenization of discrete difference operators is closely related to the convergence of random walk in a random environment (RWRE) to its limiting process. In this talk we discuss non-divergence form difference operators in an i.i.d. random environment and the corresponding process: a random walk in a balanced random environment in the integer lattice Z^d. We first quantify the ergodicity of the environment viewed from the point of view of the particle. As consequences, we obtain algebraic rates of convergence for the quenched central limit theorem of the RWRE and for the homogenization of both elliptic and parabolic non-divergence form difference operators. Joint work with J. Peterson (Purdue) and H. V. Tran (UW-Madison).

Wednesday, February 27 at 1:10pm, Jon Peterson, Purdue

Title: Functional Limit Laws for Recurrent Excited Random Walks

Abstract: Excited random walks (also called cookie random walks) are a model for self-interacting random motion where the transition probabilities are dependent on the local time at the current location. While self-interacting random walks are typically very difficult to study, many results for (one-dimensional) excited random walks are remarkably explicit.
In particular, one can easily (by hand) calculate a parameter of the model that will determine many features of the random walk: recurrence/transience, non-zero limiting speed, limiting distributions and more. In this talk I will prove functional limit laws for one-dimensional excited random walks that are recurrent. For certain values of the parameters in the model the random walks under diffusive scaling converge to a Brownian motion perturbed at its extremum. This was known previously for the case of excited random walks with boundedly many cookies per site, but we are able to generalize this to excited random walks with periodic cookie stacks. In this more general case, it is much less clear why perturbed Brownian motion should be the correct scaling limit. This is joint work with Elena Kosygina.

March 21, Spring Break, No seminar

March 28, Shamgar Gurevitch, UW-Madison

Title: Harmonic Analysis on GLn over finite fields, and Random Walks

Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters. For evaluating or estimating these sums, one of the most salient quantities to understand is the character ratio: $$ \text{trace}(\rho(g))/\text{dim}(\rho), $$ for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant rank. This talk will discuss the notion of rank for $GL_n$ over finite fields, and apply the results to random walks. This is joint work with Roger Howe (Yale and Texas A&M).
April 4, Philip Matchett Wood, UW-Madison

Title: Outliers in the spectrum for products of independent random matrices

Abstract: For fixed positive integers m, we consider the product of m independent n by n random matrices with iid entries, in the limit as n tends to infinity. Under suitable assumptions on the entries of each matrix, it is known that the limiting empirical distribution of the eigenvalues is described by the m-th power of the circular law. Moreover, this same limiting distribution continues to hold if each iid random matrix is additively perturbed by a bounded rank deterministic error. However, the bounded rank perturbations may create one or more outlier eigenvalues. We describe the asymptotic location of the outlier eigenvalues, which extends a result of Terence Tao for the case of a single iid matrix. Our methods also allow us to consider several other types of perturbations, including multiplicative perturbations. Joint work with Natalie Coston and Sean O'Rourke.

April 11, Eviatar Procaccia, Texas A&M

Title: Stabilization of Diffusion Limited Aggregation in a Wedge

Abstract: We prove a discrete Beurling estimate for the harmonic measure in a wedge in $\mathbf{Z}^2$, and use it to show that Diffusion Limited Aggregation (DLA) in a wedge of angle smaller than $\pi/4$ stabilizes. This allows one to consider the infinite DLA and questions about the number of arms, growth and dimension. I will present some conjectures and open problems.

April 18, Andrea Agazzi, Duke

Title: Large Deviations Theory for Chemical Reaction Networks

Abstract: The microscopic dynamics of well-stirred networks of chemical reactions are modeled as jump Markov processes. At large volume, one may expect in this framework to have a straightforward application of large deviation theory.
This is not at all true, for the jump rates of this class of models are typically neither globally Lipschitz, nor bounded away from zero, with both blowup and absorption as quite possible scenarios. In joint work with Amir Dembo and Jean-Pierre Eckmann, we utilize Lyapunov stability theory to bypass these challenges and to characterize a large class of network topologies that satisfy the full Wentzell-Freidlin theory of asymptotic rates of exit from domains of attraction. Under the assumption of positive recurrence these results also allow for the estimation of transition times between metastable states of this class of processes.

April 25, Kavita Ramanan, Brown

Title: Beyond Mean-Field Limits: Local Dynamics on Sparse Graphs

Abstract: Many applications can be modeled as a large system of homogeneous interacting particle systems on a graph in which the infinitesimal evolution of each particle depends on its own state and the empirical distribution of the states of neighboring particles. When the graph is a clique, it is well known that the dynamics of a typical particle converges in the limit, as the number of vertices goes to infinity, to a nonlinear Markov process, often referred to as the McKean-Vlasov or mean-field limit. In this talk, we focus on the complementary case of scaling limits of dynamics on certain sequences of sparse graphs, including regular trees and sparse Erdos-Renyi graphs, and obtain a novel characterization of the dynamics of the neighborhood of a typical particle. This is based on various joint works with Ankan Ganguly, Dan Lacker and Ruoyu Wu.

Friday, April 26, Colloquium, Van Vleck 911 from 4pm to 5pm, Kavita Ramanan, Brown

Title: Tales of Random Projections

Abstract: The interplay between geometry and probability in high-dimensional spaces is a subject of active research.
Classical theorems in probability theory such as the central limit theorem and Cramer’s theorem can be viewed as providing information about certain scalar projections of high-dimensional product measures. In this talk we will describe the behavior of random projections of more general (possibly non-product) high-dimensional measures, which are of interest in diverse fields, ranging from asymptotic convex geometry to high-dimensional statistics. Although the study of (typical) projections of high-dimensional measures dates back to Borel, only recently has a theory begun to emerge, which in particular identifies the role of certain geometric assumptions that lead to better behaved projections. A particular question of interest is to identify what properties of the high-dimensional measure are captured by its lower-dimensional projections. While fluctuations of these projections have been studied over the past decade, we describe more recent work on the tail behavior of multidimensional projections, and associated conditional limit theorems.

Tuesday, May 7, Van Vleck 901, 2:25pm, Duncan Dauvergne (Toronto)
In a paper by Joos and Zeh, Z Phys B 59 (1985) 223, they say: This 'coming into being of classical properties' appears related to what Heisenberg may have meant by his famous remark [7]: 'Die "Bahn" entsteht erst dadurch, dass wir sie beobachten.' (roughly: 'The "path" only comes into being because we observe it.') Google Translate says this means something ... @EmilioPisanty Tough call. It's technical language, so you wouldn't expect every German speaker to be able to provide a correct interpretation—it calls for someone who knows how German is used in talking about quantum mechanics. Litmus are a London-based space rock band formed in 2000 by Martin (bass guitar/vocals), Simon (guitar/vocals) and Ben (drums), joined the following year by Andy Thompson (keyboards, 2001–2007) and Anton (synths). Matt Thompson joined on synth (2002–2004), while Marek replaced Ben in 2003. Oli Mayne (keyboards) joined in 2008, then left in 2010, along with Anton. As of November 2012 the line-up is Martin Litmus (bass/vocals), Simon Fiddler (guitar/vocals), Marek Bublik (drums) and James Hodkinson (keyboards/effects). They are influenced by mid-1970s Hawkwind and Black Sabbath, amongst others.They... @JohnRennie Well, they repeatedly stressed their model is "trust work time" where there are no fixed hours you have to be there, but unless the rest of my team are night owls like I am I will have to adapt ;) I think u can get a rough estimate, COVFEFE is 7 characters, probability of a 7-character length string being exactly that is $(1/26)^7\approx 1.2\times 10^{-10}$ so I guess you would have to type approx a billion characters to start getting a good chance that COVFEFE appears. @ooolb Consider the hyperbolic space $H^n$ with the standard metric.
Compute $$\inf\left\{\left(\int u^{2n/(n-2)}\right)^{-(n-2)/n}\left(4\frac{n-1}{n-2}\int|\nabla u|^2+\int Ru^2\right): u\in C^\infty_c\setminus\{0\}, u\ge0\right\}$$ @BalarkaSen sorry if you were in our discord you would know @ooolb It's unlikely to be $-\infty$ since $H^n$ has bounded geometry so Sobolev embedding works as expected. Construct a metric that blows up near infinity (incomplete is probably necessary) so that the inf is in fact $-\infty$. @Sid Eating glamorous and expensive food on a regular basis and not as a necessity would mean you're embracing consumer fetish and capitalism, yes. That doesn't inherently prevent you from being a communist, but it does have an ironic implication. @Sid Eh. I think there's plenty of room between "I think capitalism is a detrimental regime and think we could be better" and "I hate capitalism and will never go near anything associated with it", yet the former is still conceivably communist. Then we can end up with people arguing in favor of "Communism" who distance themselves from, say, the USSR and red China, and people arguing in favor of "Capitalism" who distance themselves from, say, the US and the European Union. since I come from a rock n' roll background, the first thing is that I prefer a tonal continuity. I don't like beats as much as I like a riff or something atmospheric (that's mostly why I don't like a lot of rap) I think I liked Madvillainy because it had nonstandard rhyming styles and Madlib's composition Why is the graviton spin 2, beyond hand-waving? My sense is, you do the gravitational waves thing of reducing $R_{00} = 0$ to $g^{\mu \nu} g_{\rho \sigma,\mu \nu} = 0$ for a weak gravitational field in harmonic coordinates, with solution $g_{\mu \nu} = \varepsilon_{\mu \nu} e^{ikx} + \varepsilon_{\mu \nu}^* e^{-ikx}$, then magic?
I am very new to this particular branch of probability theory; I try to be as formal as possible. In this question I consider Bernoulli percolation as it is usually introduced as a first model (see for instance Geoffrey Grimmett). Problem: Let $x,y \in \mathbb{Z}^d$. Prove that $f(p):= \mathbb{P}_p( x \leftrightarrow y)$ is strictly increasing in $p \in [0,1]$. My approach: First off, let me just state that it is clear that $f(p)$ is increasing in $p$, both intuitively and rigorously. The event $ x \leftrightarrow y$ (there exists an open path from $x$ to $y$) is an increasing event, i.e. opening up edges is beneficial for the event $x \leftrightarrow y$, and it is a straightforward result from percolation theory that if $A$ is an increasing event, then $p \in [0,1] \mapsto \mathbb{P}_p(A)$ is increasing. The issue of course being that I want to establish that $\mathbb{P}_p(A)< \mathbb{P}_q(A)$ for $p<q$. Here I consider it to be a good idea to use Russo's formula: Theorem (Russo's formula): Let $A \in \mathcal{F}_E$ be an increasing event depending on the edges in a finite subset $F \subset E$ only. Then $p \mapsto \mathbb{P}_p(A)$ is differentiable, and \begin{align} \frac{d}{dp} \mathbb{P}_p(A) = \sum_{e \in F} \mathbb{P}_p(e \text{ is pivotal for }A) \tag{*}\end{align} If I can use this formula to prove that $f'(p)>0$, then indeed $f$ is strictly increasing. Of course the event $A = \{x \leftrightarrow y \}$ is increasing and depends only on finitely many edges. Consider dimension $d=2$; then I identified pivotal edges $e \in E$ as follows: An edge $e$ is pivotal (essential) for the event $x \leftrightarrow y$ if and only if there exists an open path from $x$ to $y$ going through the open edge $e$ (say $\gamma$) and there exists a dual open path (say $\gamma^*$) that connects the two endpoints of $e^*$ and is a circuit containing the point $x$. The picture below (taken from P.
Nolin Percolation) is related to said situation, it depicts the event $0 \leftrightarrow \partial B_n$ and shows the paths $\gamma$ and $\gamma^*$. My Question: How do I complete the proof? I would be happy to understand it even just in the case of $d=2$. So far I have: $$ f'(p) = \sum_{e \in F} \mathbb{P}_p( \exists \gamma, \exists \gamma^*) \overset{FKG}\geq\sum_{e \in F} \mathbb{P}_p( \exists \gamma) \mathbb{P}_p( \exists \gamma^*) \overset{?}>0 $$
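Not a proof, but the monotonicity under discussion is easy to see numerically. A quick Monte Carlo sketch (the grid size, endpoints, and function names are all mine, chosen for illustration), estimating the connection probability for bond percolation restricted to a small box:

```python
import random

def connected(p, n=5, src=(0, 0), dst=(2, 2), rng=random):
    """One sample of bond percolation on an n-by-n box: open each
    nearest-neighbour edge independently with probability p, then
    depth-first search from src over open edges."""
    open_edge = {}
    for x in range(n):
        for y in range(n):
            if x + 1 < n:
                open_edge[((x, y), (x + 1, y))] = rng.random() < p
            if y + 1 < n:
                open_edge[((x, y), (x, y + 1))] = rng.random() < p
    seen, stack = {src}, [src]
    while stack:
        x, y = stack.pop()
        for w in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= w[0] < n and 0 <= w[1] < n and w not in seen:
                # edges are stored with the lexicographically "lower" endpoint first
                e = ((x, y), w) if ((x, y), w) in open_edge else (w, (x, y))
                if open_edge[e]:
                    seen.add(w)
                    stack.append(w)
    return dst in seen

def estimate_f(p, trials=2000, seed=0):
    """Monte Carlo estimate of P_p(src <-> dst) within the box."""
    rng = random.Random(seed)
    return sum(connected(p, rng=rng) for _ in range(trials)) / trials
```

This only samples connectivity inside a finite box, so it is a stand-in for $f(p)$ rather than the quantity itself, but the estimates increase visibly in $p$.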
Are there hidden relations between mathematical and physical constants such as $\frac{e^2}{4 \pi \epsilon_0 \hbar c} \sim \frac{1}{137} $, or are these numerical relations mere accidents? A couple of years ago, Pierre Cartier proposed in his paper A mad day’s work: from Grothendieck to Connes and Kontsevich: the evolution of concepts of space and symmetry that there are many reasons to believe in a cosmic Galois group acting on the fundamental constants of physical theories and responsible for relations such as the one above. The Euler-Zagier numbers are infinite sums over $n_1 > n_2 > \dots > n_r \geq 1 $ of the form $\zeta(k_1,\dots,k_r) = \sum n_1^{-k_1} \dots n_r^{-k_r} $, and there are polynomial relations with rational coefficients between these, such as the product relation $\zeta(a)\zeta(b)=\zeta(a+b)+\zeta(a,b)+\zeta(b,a) $. It is conjectured that all polynomial relations among Euler-Zagier numbers are consequences of these product relations and similar explicitly known formulas. A consequence of this conjecture would be that $\zeta(3),\zeta(5),\dots $ are all transcendental! Drinfeld introduced the Grothendieck-Teichmuller group-scheme over $\mathbb{Q} $ whose Lie algebra $\mathfrak{grt}_1 $ is conjectured to be the free Lie algebra on infinitely many generators which correspond in a natural way to the numbers $\zeta(3),\zeta(5),\dots $. The Grothendieck-Teichmuller group itself plays the role of the Galois group for the Euler-Zagier numbers, as it is conjectured to act by automorphisms on the graded $\mathbb{Q} $-algebra whose degree $d $-term consists of the linear combinations of the numbers $\zeta(k_1,\dots,k_r) $ with rational coefficients and such that $k_1+\dots+k_r=d $. The Grothendieck-Teichmuller group also appears mysteriously in non-commutative geometry. For example, the set of all Kontsevich deformation quantizations has a symmetry group which Kontsevich conjectures to be isomorphic to the Grothendieck-Teichmuller group. See section 4 of his paper Operads and motives in deformation quantization for more details. It also appears in the renormalization results of Alain Connes and Dirk Kreimer. A very readable introduction to this is given by Alain Connes himself in Symmetries Galoisiennes et renormalisation. Perhaps the latest news on Cartier’s dream of a cosmic Galois group is the paper by Alain Connes and Matilde Marcolli posted last month on the arXiv: Renormalization and motivic Galois theory. A good web-page on all of this, including references, can be found here.
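The product relation above can be checked numerically. If every sum is truncated at the same cutoff $N$, the identity holds exactly (it is just the partition of the finite double sum into the regions $m>n$, $m<n$, and $m=n$), so floating point is the only source of error. A sketch; the function names are mine:

```python
def zeta1(s, N):
    """Truncated single zeta value: sum of n^(-s) for 1 <= n <= N."""
    return sum(n ** -s for n in range(1, N + 1))

def zeta2(k1, k2, N):
    """Truncated double zeta value: sum over N >= n1 > n2 >= 1 of
    n1^(-k1) * n2^(-k2)."""
    return sum(n1 ** -k1 * n2 ** -k2
               for n1 in range(2, N + 1)
               for n2 in range(1, n1))

# Check zeta(2) zeta(3) = zeta(5) + zeta(2,3) + zeta(3,2) at cutoff N:
N = 200
lhs = zeta1(2, N) * zeta1(3, N)
rhs = zeta1(5, N) + zeta2(2, 3, N) + zeta2(3, 2, N)
```

The two sides agree to machine precision at every cutoff, while each individual term still drifts toward its true value as $N$ grows.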
In a Hilbert space of dimension $d$, how do I calculate the largest number $N(\epsilon, d)$ of vectors $\{V_i\}$ that satisfy the following properties? Here $\epsilon$ is small but finite compared to 1. $$\langle V_i|V_i\rangle = 1$$ $$|\langle V_i|V_j\rangle| \leq \epsilon, \quad i \neq j$$ Some examples are as follows. 1. $N(0, d) = d$. 2. $N\left(\frac{1}{2}, 2\right) = 3$; this can be seen by explicit construction of vectors using the Bloch sphere. 3. $N\left(\frac{1}{\sqrt{2}}, 2\right) = 6$, again using the same logic. How do I obtain any general formula for $N(\epsilon, d)$? Even an approximate form for $N(\epsilon, d)$ in the large $d$ and small $\epsilon$ limit works fine for me. EDIT: The question is now resolved. See the answer at https://mathoverflow.net/a/336395/78150
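For intuition on example 3: the six eigenstates of the Pauli operators (the three mutually unbiased bases of a qubit) give an explicit witness that $N(1/\sqrt{2}, 2) \geq 6$. A small numerical check, not part of the original question:

```python
import math

s = 1 / math.sqrt(2)
# The six Pauli eigenstates |0>, |1>, |+>, |->, |+i>, |-i> as amplitude pairs:
vectors = [
    (1, 0), (0, 1),
    (s, s), (s, -s),
    (s, s * 1j), (s, -s * 1j),
]

def overlap(u, v):
    """|<u|v>| for two-component complex vectors."""
    return abs(sum(complex(a).conjugate() * b for a, b in zip(u, v)))
```

Every pair has overlap either $0$ or $1/\sqrt{2}$, so all six satisfy the $\epsilon = 1/\sqrt{2}$ constraint.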
Definition:Linearly Dependent/Set/Real Vector Space Definition Let $\left({\R^n,+,\cdot}\right)_{\R}$ be a real vector space. Let $S \subseteq \R^n$. Then $S$ is a linearly dependent set if and only if there exists a nontrivial linear combination of elements of $S$ equal to the zero vector. That is, such that: $\displaystyle \exists \left\{{\lambda_k: 1 \le k \le n}\right\} \subseteq \R: \sum_{k \mathop = 1}^n \lambda_k \mathbf v_k = \mathbf 0$ where $\left\{{\mathbf v_1, \mathbf v_2, \ldots, \mathbf v_n}\right\} \subseteq S$, and such that at least one of $\lambda_k$ is not equal to $0$.
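As a computational aside (not part of the definition): a finite list of vectors in $\R^n$ is linearly dependent precisely when the matrix formed from them has rank less than the number of vectors. A small Gaussian-elimination sketch; all names are mine:

```python
def rank(rows, tol=1e-9):
    """Row-reduce a list of row vectors and count the nonzero pivots."""
    A = [list(map(float, r)) for r in rows]
    r = 0
    for c in range(len(A[0]) if A else 0):
        # find a pivot row for column c among the not-yet-used rows
        pivot = next((i for i in range(r, len(A)) if abs(A[i][c]) > tol), None)
        if pivot is None:
            continue
        A[r], A[pivot] = A[pivot], A[r]
        for i in range(len(A)):
            if i != r and abs(A[i][c]) > tol:
                f = A[i][c] / A[r][c]
                A[i] = [x - f * y for x, y in zip(A[i], A[r])]
        r += 1
    return r

def linearly_dependent(vectors):
    """True iff some nontrivial combination of the vectors is zero,
    i.e. rank < number of vectors."""
    return rank(vectors) < len(vectors)
```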
Trying to align equal signs, and I get an error message saying "Missing } inserted". I can't figure out what I've done wrong?

\begin{align}($b^nn^\alpha$)$^{-1}$=\large$\frac{a^n}{n^\alpha}$$\geq\frac{a^n}{n^p}$\end{align}
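For what it's worth, one way this can be made to compile (assuming the intent is a single displayed inequality) is to stay in math mode throughout: `align` is already math mode, so the inner `$...$` toggles and the text-mode `\large` are what trip TeX up. A sketch of a fixed version:

```latex
\begin{align}
  (b^n n^\alpha)^{-1} = \frac{a^n}{n^\alpha} \geq \frac{a^n}{n^p}
\end{align}
```

To actually align several relations at the equals sign, `align` expects an `&` before each alignment point, e.g. `... &= ... \\ &\geq ...`.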
AI News: When Bayes, Ockham, and Shannon come together to define machine learning (Monday, September 17, 2018)

It is somewhat surprising that among all the high-flying buzzwords of machine learning, we don’t hear much about the one phrase which fuses some of the core concepts of statistical learning, information theory, and natural philosophy into a single three-word combo. And you may be thinking what the heck that is… Let’s peel the layers off and see how useful it is… We start (not chronologically) with Reverend Thomas Bayes, who, by the way, never published his idea about how to do statistical inference, but was later immortalized by the eponymous theorem. In his essay, Bayes described — in a rather frequentist manner — the simple theorem concerning joint probability which gives rise to the calculation of inverse probability, i.e. $P(h \mid D) = \frac{P(D \mid h)\, P(h)}{P(D)}$. This essentially says that you update your belief (prior probability) after seeing the data/evidence (likelihood), and assign the updated degree of belief to the term posterior probability. In the context of machine learning, a "hypothesis" can be thought of as any set of rules (or logic or process) which, we believe, can give rise to the examples or training data we are given, for learning the hidden nature of this mysterious process. It will take many a volume to describe the genius and strange life of Claude Shannon, who almost single-handedly laid the foundation of information theory and ushered us into the age of modern high-speed communication and information exchange. Shannon's master’s thesis in electrical engineering has been called the most important MS thesis of the 20th century: in it the 22-year-old Shannon showed how the logical algebra of 19th-century mathematician George Boole could be implemented using electronic circuits of relays and switches.
This most fundamental feature of digital computers’ design — the representation of “true” and “false” and “0” and “1” as open or closed switches, and the use of electronic logic gates to make decisions and to carry out arithmetic — can be traced back to the insights in Shannon’s thesis. Shannon defined the quantity of information produced by a source — for example, the quantity in a message — by a formula similar to the equation that defines thermodynamic entropy in physics. Sir Isaac Newton: “We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances.” Bertrand Russell: “Whenever possible, substitute constructions out of known entities for inferences to unknown entities.” Need an example of what the length of a hypothesis means? On the other hand, if you create a complex (and long) hypothesis, you may be able to fit your training data really well, but this actually may not be the right hypothesis, as it runs against the MAP principle of having a hypothesis with small entropy. What MDL shows is that if a representation of hypotheses is chosen so that the size of hypothesis h is $-\log_2 P(h)$, and if a representation for exceptions (errors) is chosen so that the encoding length of D given h is equal to $-\log_2 P(D \mid h)$, then the MDL principle produces MAP hypotheses. It short-circuits the (often) infinitely large hypothesis space and leads us towards a highly probable set of hypotheses, which we can optimally encode and work towards finding the set of MAP hypotheses among. It is a wonderful fact that such a simple set of mathematical manipulations over a basic identity of probability theory can result in such a profound and succinct description of the fundamental limitation and goal of supervised machine learning.
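The MDL/MAP correspondence described here is easy to sanity-check numerically: with code lengths $L(h) = -\log_2 P(h)$ and $L(D \mid h) = -\log_2 P(D \mid h)$, minimizing total description length selects the same hypothesis as maximizing $P(h)\,P(D \mid h)$, because $-\log_2$ is monotone decreasing. A toy sketch; the hypothesis names and probabilities are made up:

```python
import math

# Toy two-hypothesis space with made-up prior and likelihood values.
priors = {"h_simple": 0.7, "h_complex": 0.3}        # P(h)
likelihoods = {"h_simple": 0.04, "h_complex": 0.08}  # P(D | h)

def description_length(h):
    """Total code length L(h) + L(D|h) in bits."""
    return -math.log2(priors[h]) - math.log2(likelihoods[h])

# MAP: maximize the (unnormalized) posterior P(h) P(D|h).
map_h = max(priors, key=lambda h: priors[h] * likelihoods[h])
# MDL: minimize the total description length.
mdl_h = min(priors, key=description_length)
```

Both criteria pick the same hypothesis, whatever numbers are plugged in.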
Bayesian inference (Tuesday, September 18, 2018)

Bayesian inference is a method of statistical inference in which Bayes' theorem is used to update the probability for a hypothesis as more evidence or information becomes available. Bayesian inference has found application in a wide range of activities, including science, engineering, philosophy, medicine, sport, and law. Bayesian inference derives the posterior probability as a consequence of two antecedents: a prior probability and a 'likelihood function' derived from a statistical model for the observed data; the posterior probability of a hypothesis is proportional to its prior probability (its inherent likeliness) and the newly acquired likelihood (its compatibility with the new observed evidence), the evidence entering through the factor $P(E \mid H)/P(E)$. Ian Hacking noted that traditional 'Dutch book' arguments did not specify Bayesian updating: they left open the possibility that non-Bayesian updating rules could avoid Dutch books. Note that this is expressed in words as 'posterior is proportional to likelihood times prior', or sometimes as 'posterior = likelihood times prior, over evidence'. Bayesian theory calls for the use of the posterior predictive distribution to do predictive inference, i.e., to predict the distribution of a new, unobserved data point. By comparison, prediction in frequentist statistics often involves finding an optimum point estimate of the parameter(s)—e.g., by maximum likelihood or maximum a posteriori estimation (MAP)—and then plugging this estimate into the formula for the distribution of a data point. For example, confidence intervals and prediction intervals in frequentist statistics when constructed from a normal distribution with unknown mean and variance are constructed using a Student's t-distribution.
This correctly estimates the variance, due to the fact that (1) the average of normally distributed random variables is also normally distributed; (2) the predictive distribution of a normally distributed data point with unknown mean and variance, using conjugate or uninformative priors, has a Student's t-distribution. In Bayesian statistics, however, the posterior predictive distribution can always be determined exactly—or at least, to an arbitrary level of precision, when numerical methods are used. In fact, if the prior distribution is a conjugate prior, and hence the prior and posterior distributions come from the same family, it can easily be seen that both prior and posterior predictive distributions also come from the same family of compound distributions. The only difference is that the posterior predictive distribution uses the updated values of the hyperparameters (applying the Bayesian update rules given in the conjugate prior article), while the prior predictive distribution uses the values of the hyperparameters that appear in the prior distribution. If evidence is simultaneously used to update belief over a set of exclusive and exhaustive propositions, Bayesian inference may be thought of as acting on this belief distribution as a whole, e.g. on a parametrized prior $p(\mathbf{\theta} \mid \mathbf{\alpha})$. If $P(E \mid M)/P(E) > 1$, the evidence supports the model and $P(M \mid E) > P(M)$; if $P(E \mid M)/P(E) = 1$, then $P(M \mid E) = P(M)$. For sufficiently nice prior probabilities, the Bernstein-von Mises theorem gives that in the limit of infinite trials, the posterior converges to a Gaussian distribution independent of the initial prior, under some conditions first outlined and rigorously proven by Joseph L. Doob. However, if the random variable has an infinite but countable probability space (i.e., corresponding to a die with infinitely many faces), the 1965 paper demonstrates that for a dense subset of priors the Bernstein-von Mises theorem is not applicable.
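The conjugate-prior mechanics mentioned above can be sketched for the Beta-binomial case, where the posterior stays in the Beta family and the predictive probability of the next success is the posterior mean. A toy sketch; the function names are mine:

```python
def beta_update(alpha, beta, successes, failures):
    """Beta(alpha, beta) prior + binomial data -> Beta posterior (conjugacy):
    the hyperparameters simply absorb the counts."""
    return alpha + successes, beta + failures

def predictive_success(alpha, beta):
    """P(next trial succeeds) under a Beta(alpha, beta) belief:
    the mean alpha / (alpha + beta)."""
    return alpha / (alpha + beta)

a, b = 1, 1                      # uniform prior, Beta(1, 1)
a, b = beta_update(a, b, 7, 3)   # observe 7 successes and 3 failures
```

After the update the belief is Beta(8, 4), so the posterior predictive probability of a success is 8/12; the prior predictive used the original hyperparameters (1/2), exactly the "same family, updated hyperparameters" pattern described in the text.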
To summarise, there may be insufficient trials to suppress the effects of the initial choice, and especially for large (but finite) systems the convergence might be very slow. There are other methods of estimation that minimize the posterior risk (expected-posterior loss) with respect to a loss function, and these are of interest to statistical decision theory using the sampling distribution ('frequentist statistics').[12] It is expected that if the site were inhabited during the early medieval period, then 1% of the pottery would be glazed and 50% of its area decorated, whereas if it had been inhabited in the late medieval period then 81% would be glazed and 5% of its area decorated. Writing $G$/$\bar G$ for glazed/unglazed, $D$/$\bar D$ for decorated/undecorated, and $C$ for the century of inhabitation, each shard contributes evidence through the factor $P(E = e \mid C = c)/P(E = e)$, where $P(E = e) = \int_{11}^{16} P(E = e \mid C = c)\, f_C(c)\, dc$. By calculating the area under the relevant portion of the graph for 50 trials, the archaeologist can say that there is practically no chance the site was inhabited in the 11th and 12th centuries, about 1% chance that it was inhabited during the 13th century, 63% chance during the 14th century and 36% during the 15th century.
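A discrete toy version of the pottery example, using the glazing rates from the text but a hypothetical 50/50 prior over just the two periods (the real example integrates over centuries):

```python
# Hypothetical flat prior over the two periods; glazing rates from the text.
prior = {"early": 0.5, "late": 0.5}
p_glazed = {"early": 0.01, "late": 0.81}  # P(glazed shard | period)

# Bayes update after finding a single glazed shard:
evidence = sum(prior[h] * p_glazed[h] for h in prior)          # P(E)
posterior = {h: prior[h] * p_glazed[h] / evidence for h in prior}
```

One glazed shard already pushes the posterior to roughly 99% in favor of late-medieval habitation, which is the qualitative behaviour the full example exhibits as shards accumulate.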
Note that the Bernstein-von Mises theorem asserts here the asymptotic convergence to the 'true' distribution because the probability space corresponding to the discrete set of events $\{GD, G\bar D, \bar G D, \bar G \bar D\}$ is finite. Wald characterized admissible procedures as Bayesian procedures (and limits of Bayesian procedures), making the Bayesian formalism a central technique in such areas of frequentist inference as parameter estimation, hypothesis testing, and computing confidence intervals.[15][16][17] There is also an ever-growing connection between Bayesian methods and simulation-based Monte Carlo techniques, since complex models cannot be processed in closed form by a Bayesian analysis, while a graphical model structure may allow for efficient simulation algorithms like Gibbs sampling and other Metropolis–Hastings algorithm schemes.[22] Solomonoff's universal prior probability of any prefix p of a computable sequence x is the sum of the probabilities of all programs (for a universal computer) that compute something starting with p. Given some p and any computable but unknown probability distribution from which x is sampled, the universal prior and Bayes' theorem can be used to predict the yet unseen parts of x in optimal fashion.[24][25] Bayesian inference can be used by jurors to coherently accumulate the evidence for and against a defendant, and to see whether, in totality, it meets their personal threshold for 'beyond a reasonable doubt'.[26][27][28] The Court of Appeal upheld the conviction, but it also gave the opinion that 'To introduce Bayes' Theorem, or any similar method, into a criminal trial plunges the jury into inappropriate and unnecessary realms of theory and complexity, deflecting them from their proper task.' It has also been argued that the criterion on which a verdict in a criminal trial should be based is not the probability of guilt, but rather the probability of the evidence, given that the defendant is innocent (akin to a frequentist p-value).
According to this view, a rational interpretation of Bayesian inference would see it merely as a probabilistic version of falsification, rejecting the belief, commonly held by Bayesians, that high likelihood achieved by a series of Bayesian updates would prove the hypothesis beyond any reasonable doubt, or even with likelihood greater than 0. The problem considered by Bayes in Proposition 9 of his essay, 'An Essay towards solving a Problem in the Doctrine of Chances', is the posterior distribution for the parameter a (the success rate) of the binomial distribution. However, it was Pierre-Simon Laplace (1749–1827) who introduced a general version of the theorem and used it to approach problems in celestial mechanics, medical statistics, reliability, and jurisprudence.[37] Early Bayesian inference, which used uniform priors following Laplace's principle of insufficient reason, was called 'inverse probability' (because it infers backwards from observations to parameters, or from effects to causes[38]). In the subjective or 'informative' current, the specification of the prior depends on the belief (that is, propositions on which the analysis is prepared to act), which can summarize information from experts, previous studies, etc. In the 1980s, there was a dramatic growth in research and applications of Bayesian methods, mostly attributed to the discovery of Markov chain Monte Carlo methods, which removed many of the computational problems, and an increasing interest in nonstandard, complex applications.[40]

Related videos (Monday, September 23, 2019):
The Bayesian Trap: Bayes' theorem explained with examples and implications for life (Veritasium).
Probability Theory, The Math of Intelligence #6: building a spam detector using a machine learning model called a Naive Bayes classifier; a first real dip into probability theory in the series.
17. Bayesian Statistics: MIT 18.650 Statistics for Applications, Fall 2016 (Prof. Philippe Rigollet).
21. Bayesian Statistical Inference I: MIT 6.041 Probabilistic Systems Analysis and Applied Probability, Fall 2010 (John Tsitsiklis).
Lecture 4, Conditional Probability (Statistics 110): conditional probability, independence of events, and Bayes' rule.
What Is The Bayesian Theory?: Bayes' theorem as a way of finding a probability when we know certain other probabilities.
Introducing Bayes factors and marginal likelihoods: Bayes factors for model comparison and the marginal likelihoods needed to compute them.
Prior And Posterior: part of an online Intro to Statistics course.
Birthday probability problem (Khan Academy): the probability that at least 2 people in a room of 30 share the same birthday.
Conditional probability and combinations (Khan Academy): the probability that I picked a fair coin given that I flipped 4 out of 6 heads.
Definition:Many-to-One Relation Definition Let $\mathcal R \subseteq S \times T$ be a relation. Then $\mathcal R$ is many-to-one if and only if: $\forall x \in \Dom {\mathcal R}: \forall y_1, y_2 \in \Cdm {\mathcal R}: \tuple {x, y_1} \in \mathcal R \land \tuple {x, y_2} \in \mathcal R \implies y_1 = y_2$ Let $f \subseteq S \times T$ be a many-to-one relation. Also known as Such a relation is also referred to as: a rule of assignment, a functional relation, a right-definite relation, a right-unique relation, or a partial mapping. Some sources break with mathematical convention and call this a (partial) function. These sources subsequently define a total function to be what on $\mathsf{Pr} \infty \mathsf{fWiki}$ is called a mapping. None of these names is as intuitively obvious as many-to-one relation, so the latter is the preferred term on $\mathsf{Pr} \infty \mathsf{fWiki}$. However, it must be noted that a one-to-one relation is an example of a many-to-one relation, which may confuse. Some approaches, for example 1999: András Hajnal and Peter Hamburger: Set Theory, use this as the definition for a mapping from $S$ to $T$, and then separately specify the requisite left-total nature of the conventional definition by restricting $S$ to the domain. However, this approach is sufficiently different from the mainstream approach that it will not be used on $\mathsf{Pr} \infty \mathsf{fWiki}$ and is limited to this mention. Also see Results about many-to-one relations can be found here. Sources 1975: T.S. Blyth: Set Theory and Abstract Algebra: $\S 4$. Relations; functional relations; mappings 1999: András Hajnal and Peter Hamburger: Set Theory: $1$. Notation, Conventions: $10$: Definition $1.3$ 2000: James R. Munkres: Topology (2nd ed.): $1$: Set Theory and Logic: $\S 2$: Functions 2012: M. Ben-Ari: Mathematical Logic for Computer Science (3rd ed.): Appendix $\text{A}.4$: Definition $\text{A}.23$
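For a finite relation given as a set of ordered pairs, the defining condition (each $x$ relates to at most one $y$) is straightforward to test. A sketch; the function name is mine:

```python
def is_many_to_one(relation):
    """True iff no x is paired with two distinct y values in the relation,
    i.e. the relation satisfies the defining condition above."""
    image = {}
    for x, y in relation:
        if x in image and image[x] != y:
            return False  # x relates to two distinct elements
        image[x] = y
    return True
```

Note that a one-to-one relation passes this test too, which is exactly the potential confusion mentioned above.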
Update (21 May 18): It turns out this post is one of the top hits on google for “python travelling salesmen”! That means a lot of people who want to solve the travelling salesman problem in python end up here. While I tried to do a good job explaining a simple algorithm for this, it was for a challenge to make a program in 10 lines of code or fewer. That constraint means it’s definitely not the best code around for use in a demanding application, and not the best for learning to write good, readable code. On the other hand, it is simple and short, and I explain each line. So stick around if you’re up for that. Otherwise, while I haven’t used it myself, Google has a library “OR-Tools” that has a nice page about solving the travelling salesman problem in python via their library. So maybe check that out! Update (29 May 19): I wrote a small Julia package to exactly solve small TSP instances: https://github.com/ericphanson/TravelingSalesmanExact.jl. A few weeks ago I got an email about a high performance computing course I had signed up for; the professor wanted all of the participants to send him the “most complicated” 10 line Python program they could, in order to gauge the level of the class (and to submit 10 blank lines if we didn’t know any Python!). I had an evening free and wanted to challenge myself a bit, and came up with the idea of trying to write an algorithm for approximating a solution to the traveling salesman problem. A long time ago, I had followed a tutorial for implementing a genetic algorithm in java for this and thought it was a lot of fun, so I tried a genetic algorithm first and quickly found it was hard to fit in ten lines. I changed to a simulated annealing approach and found a nice pedagogical tutorial here: theprojectspot.com//simulated_annealing (in java, however). I should mention: I don’t really know python, and haven’t done any non-tutorial-level programming in general.
So I’m sure there’s a lot to improve here, and I hope the reader proceeds at their own risk. Warnings aside, here’s the result (which has been fixed up a little since then):

```python
1  import random, numpy, math, copy, matplotlib.pyplot as plt
2  cities = [random.sample(range(100), 2) for x in range(15)];
3  tour = random.sample(range(15),15);
4  for temperature in numpy.logspace(0,5,num=100000)[::-1]:
5      [i,j] = sorted(random.sample(range(15),2));
6      newTour = tour[:i] + tour[j:j+1] + tour[i+1:j] + tour[i:i+1] + tour[j+1:];
7      if math.exp( ( sum([ math.sqrt(sum([(cities[tour[(k+1) % 15]][d] - cities[tour[k % 15]][d])**2 for d in [0,1] ])) for k in [j,j-1,i,i-1]]) - sum([math.sqrt(sum([(cities[newTour[(k+1) % 15]][d] - cities[newTour[k % 15]][d])**2 for d in [0,1] ])) for k in [j,j-1,i,i-1]])) / temperature) > random.random():
8          tour = copy.copy(newTour);
9  plt.plot([cities[tour[i % 15]][0] for i in range(16)], [cities[tour[i % 15]][1] for i in range(16)], 'xb-');
10 plt.show()
```

which you can download here if you want. A few sample outputs: Images generated by four consecutive runs of the python program. Note (thanks to Monish Kaul for pointing out this problem!): if you’re using your own list of cities, it can help to rescale the coordinates so they run between 0 and 100 by a simple affine transformation:

```python
xmin = min(pair[0] for pair in cities)
xmax = max(pair[0] for pair in cities)
ymin = min(pair[1] for pair in cities)
ymax = max(pair[1] for pair in cities)

def transform(pair):
    x = pair[0]
    y = pair[1]
    return [(x-xmin)*100/(xmax - xmin), (y-ymin)*100/(ymax - ymin)]

rescaled_cities = [transform(b) for b in cities]
```

To illustrate, I generated some cities via

```python
cities = [(random.random()/10, random.random()/10) for x in range(15)];
```

which gives cities with coordinates between 0 and 0.1.
Running the code with these cities gives a terrible result. But if we run the code with rescaled_cities (and plot with the original coordinates), we find a much nicer result.

A line by line reading

As the professor mentioned in his reply, the code is completely unreadable. I thought it would be good to document it somewhere, and why not here.

```python
1 import random, numpy, math, copy, matplotlib.pyplot as plt
```

This first line is just Python imports to use different commands.

```python
2 cities = [random.sample(range(100), 2) for x in range(15)];
```

The goal here is to make a list of “cities”, each of which is simply a list of two coordinates, chosen as random integers from 0 to 100. I think a better practice usually would be to make a constant for the grid size and the number of cities, and use that throughout the script. But with the 10 line requirement, all that will have to be hardcoded. To explain the command itself, starting from the innermost command, range(100) returns the list [0,1,...,99]. Then random.sample( ,2) randomly chooses a set of size 2 from the list. This in fact makes it so that for each city, its first and second coordinates will never be the same. Since we just want to come up with some cities for the algorithm to run on, this doesn’t really matter; we could’ve hardcoded a list of cities here instead. (The [... for x in range(15)] construct is a list comprehension; I see them as allowing math-y set-like notation, but I’ll leave the explanation to the experts on the other side of the link. I’ll use them a lot to save space.) This last piece thus repeats the random sampling 15 times and forms a list of these pairs of coordinates, which I called cities.

```python
3 tour = random.sample(range(15),15);
```

A “tour” will just be a list of 15 numbers indicating an order to visit the cities. We’ll assume you need a closed loop, so the last city will be automatically connected to the first. Here we just choose a random order to start off with.
Python also has a random.shuffle() command, but then we would need two lines: one to create a list, and another to shuffle it. By asking for a random sample of 15 numbers from a list of 15 elements, we get a shuffled list created for us in one line.

```python
4 for temperature in numpy.logspace(0,5,num=100000)[::-1]:
```

We start off a for-loop. In the tutorial I was reading (linked above), they do something like

```python
while (temp > 1):
    ...
    temp *= 1 - coolingRate
```

and this was my attempt to write that in one line. I don’t actually really know why you want an exponentially decreasing temperature in a simulated annealing algorithm (briefly searching, I found this paper which might be relevant?); I just followed the tutorial. The command numpy.logspace(0,5,num=100000) gives a list of 100,000 numbers between \(10^0\) and \(10^5\) so that the (base 10) logarithm of these numbers is evenly spaced. So choosing numbers in this alone should recover exponentially distributed temperatures. However, they are in lowest-first order. Since we want the temperature to start high, I added [::-1], which reverses the list.

```python
5 [i,j] = sorted(random.sample(range(15),2));
6 newTour = tour[:i] + tour[j:j+1] + tour[i+1:j] + tour[i:i+1] + tour[j+1:];
```

I’ll group these two lines because together they do a single task which I had hoped to do in one line. The objective here is to make a new tour by randomly swapping two cities in tour. I do this by choosing two numbers i,j from the 15 possible cities via random.sample( ,2) as before (now glad that the two numbers will be distinct), and then order them via sorted( ). Then I piece together the new tour manually, by copying the old tour until (but not including) index i, concatenating the jth city from the old tour, continuing copying the old tour until index j where we swap in the ith city, and finishing with the rest of the old tour. These two lines bugged me alot (Allie Brosh, “The Alot is Better Than You at Everything,” Hyperbole and a Half (2010), hyperboleandahalf)
, because it seemed like an inelegant way to swap two elements of the list, and because it needed two lines. Looking back now though, I don’t really mind it. Seems like the simple way to go.

```python
7 if math.exp( ( sum([ math.sqrt(sum([(cities[tour[(k+1) % 15]][d] - cities[tour[k % 15]][d])**2 for d in [0,1] ])) for k in [j,j-1,i,i-1]]) - sum([ math.sqrt(sum([(cities[newTour[(k+1) % 15]][d] - cities[newTour[k % 15]][d])**2 for d in [0,1] ])) for k in [j,j-1,i,i-1]])) / temperature) > random.random():
8     tour = copy.copy(newTour);
```

This one is terribly long, but only because I skipped using variables to save a few lines. If we define

```python
oldDistances = sum([ math.sqrt(sum([(cities[tour[(k+1) % 15]][d] - cities[tour[k % 15]][d])**2 for d in [0,1] ])) for k in [j,j-1,i,i-1]])
newDistances = sum([ math.sqrt(sum([(cities[newTour[(k+1) % 15]][d] - cities[newTour[k % 15]][d])**2 for d in [0,1] ])) for k in [j,j-1,i,i-1]])
```

then the code becomes a lot cleaner:

```python
7 if math.exp( ( oldDistances - newDistances) / temperature) > random.random():
8     tour = copy.copy(newTour);
```

which is a little easier to understand. First, if we accept the condition on the if statement, we will update our old tour to this new tour. The idea is that in the metaphor with statistical mechanics, the sum of the distances between all the cities is the energy of the system, which we wish to minimize. We consider the Gibbs factor \(\mathrm{e}^{-\Delta E / T}\), the exponential of the (negative) change in energy over temperature, which should be something like the probability of transitioning to the new state from the old one. I only sum the distances from and to the ith and jth cities, because the rest of the distances are the same for the two tours and will cancel. If this factor has \(\mathrm{e}^{-\Delta E / T}>1\) then the new energy is lower, and we should definitely take the new tour. Otherwise, even if it’s worse, we want to take the new tour with some probability so that we don’t get stuck in a local minimum.
We will choose it with probability \(\mathrm{e}^{-\Delta E / T}\) by choosing a uniformly random number \(r \in [0,1]\) via Python’s random.random() and asking for \(\mathrm{e}^{-\Delta E / T} > r\). This was a little confusing to me at first, but if I just think of \(\mathrm{e}^{-\Delta E / T} = \frac{3}{4}\), then we accept as long as \(\frac{3}{4}\) is larger than our uniformly random number, which should happen \(\frac{3}{4}\) of the time. (Yes, somehow this convinces me when the same formula with a variable did not.) We actually finish the algorithm here. We choose to update our tour or not as described above, lower the temperature, randomly swap two cities, and try again until we run out of temperatures (here, I put 100,000 of them). At the end, whatever is in the variable tour is our best guess as to the optimal route. I have no idea how to figure out analytically any type of convergence result or confidence in this output. This is partly why we have the next two lines. (Thanks to Vincent for suggesting an improvement in line 9!)

```python
9 plt.plot([cities[tour[i % 15]][0] for i in range(16)], [cities[tour[i % 15]][1] for i in range(16)], 'xb-');
10 plt.show()
```

In these two lines, the Python module matplotlib plots the cities and connects them according to our best guess tour. I’m pretty impressed that it’s a two line problem! The pictures are nice, and for a small number of cities, fairly convincing to the eye that it’s at least a pretty good route. That is, the algorithm did something! Considering that we used \(10^5\) loop iterations and a brute force solution of searching over all possible \(15! \sim 10^{12}\) tours would take much longer (though would be guaranteed to be optimal), I’m happy with the result. (Happy, though perhaps not content: of course, I want something better than “it looks pretty good”.) As for the code itself, we just have to get everything in order.
First, the list comprehension [cities[tour[i % 15]] for i in range(16) ] writes out the cities in order according to the tour, and includes the first one again at the end (using that % in Python is modulo). Then we select the first or second coordinate by adding a [0] or [1] index: [cities[tour[i % 15]][0] for i in range(16) ] The string 'xb-' tells matplotlib to draw x’s on the points and connect them with blue lines. Lastly, the command plt.plot( ) generates this plot, and plt.show() displays it. And that’s it! In retrospect, I think simulated annealing was a good fit for the ten-line constraint. Looking at the code, lines 1-3 are just mandatory import statements and choosing an instance of the problem to solve. Lines 4-8 are the whole algorithm, and it is almost a transcription of pseudocode. In the tutorial actually, they save the best route as they go, because sometimes it’s better than the last route, and there’s basically no cost to doing so in that context. I drop that here to save space, but for a small number of cities and a large number of temperatures, it still works out fine. I originally submitted the code with ten cities and 10,000 temperatures, and the professor told me that it was quite fast, even compared to lengthier implementations. I think that may be an artifact of the small number of cities, and that the code probably doesn’t scale as well. At ten cities the speed-up over brute force is only \(10! \sim 10^6\) versus \(10^4\) loops, so it becomes much more impressive at the fifteen-city mark. On my computer it still only takes a few seconds for the \(10^5\) loops, but of course it takes about ten times longer for each additional number in that exponent, so it’s not really feasible to check the scaling much past fifteen cities (and with more cities, it becomes harder to see by eye how good or bad the final route is).
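For reference, here is a self-contained sketch of the whole algorithm in one place. This is my own transcription, not the original ten-line submission: the city list, random seed, and cooling schedule are made up, and unlike the original (which only sums the four affected edges) it recomputes full tour lengths for clarity.

```python
import copy
import math
import random

random.seed(0)
n = 15
cities = [(random.random(), random.random()) for _ in range(n)]

def dist(p, q):
    return math.sqrt((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

def tour_length(t):
    # total length of the closed tour, wrapping around at the end
    return sum(dist(cities[t[k]], cities[t[(k + 1) % n]]) for k in range(n))

tour = list(range(n))
temperature = 1.0
for _ in range(20000):
    i, j = random.sample(range(n), 2)
    newTour = copy.copy(tour)
    newTour[i], newTour[j] = newTour[j], newTour[i]   # swap two cities
    delta = tour_length(newTour) - tour_length(tour)  # change in "energy"
    # accept improvements always, worse tours with the Gibbs probability
    if delta <= 0 or math.exp(-delta / temperature) > random.random():
        tour = newTour
    temperature *= 0.9995                             # cool down
```

The `delta <= 0` guard also avoids overflowing `math.exp` once the temperature gets very small.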
Going forward, it would be interesting to try to learn about estimating the optimality of the results of such an algorithm, and more about the choice of temperature schedule. I’m not sure I will any time soon though. Another interesting thing would be trying to write a parallel version, maybe for a GPU. Since the size of the solution space grows factorially with the number of cities, I think there should be a big regime where it’s feasible to store all of the cities on each computational node yet have too many cities to find even an approximate solution in a reasonable amount of time with a single node. The algorithm I implemented here has a global temperature and current route, and each iteration of the loop needs the result of the one before it. But that doesn’t really seem essential. Each node could run its own copy of the algorithm (with its own temperature and current route), and just broadcast its best-yet route to all (or just some of) the other nodes every so often. Each node could then adopt the best route it receives as its current route, and continue iterating from there. Since we are trying to iterate towards the optimal solution by random swaps in this huge solution space, the likelihood of two nodes duplicating work should be very low, so this method could provide a decent speed-up. I’m not sure how it would scale with the number of nodes, however. Anyway, I’m sure all of this is old territory in the optimization world. Lots to learn.
Here’s a tiny problem illustrating our limited knowledge of finite fields: “Imagine an infinite queue of Knights $\{ K_1,K_2,K_3,\ldots \} $, waiting to be seated at the unit-circular table. The master of ceremony (that is, you) must give Knights $K_a $ and $K_b $ a place at an odd root of unity, say $\omega_a $ and $\omega_b $, such that the seat at the odd root of unity $\omega_a \times \omega_b $ must be given to the Knight $K_{a \otimes b} $, where $a \otimes b $ is the Nim-multiplication of $a $ and $b $. Which place would you offer to Knight $K_{16} $, or Knight $K_n $, or, if you’re into ordinals, Knight $K_{\omega} $?” What does this have to do with finite fields? Well, consider the simplest of all finite fields $\mathbb{F}_2 = \{ 0,1 \} $ and consider its algebraic closure $\overline{\mathbb{F}_2} $. Last year, we’ve run a series starting here, identifying the field $\overline{\mathbb{F}_2} $, following John H. Conway in ONAG, with the set of all ordinals smaller than $\omega^{\omega^{\omega}} $, given the Nim addition and multiplication. I know that ordinal numbers may be intimidating at first, so let’s just restrict to ordinary natural numbers for now. The Nim-addition of two numbers $n \oplus m $ can be calculated by writing the numbers n and m in binary form and adding them without carrying. For example, $9 \oplus 1 = 1001+1 = 1000 = 8 $. Nim-multiplication is slightly more complicated and is best expressed using the so-called Fermat powers $F_n = 2^{2^n} $. We then demand that $F_n \otimes m = F_n \times m $ whenever $m < F_n $, and that $F_n \otimes F_n = \frac{3}{2}F_n $. Distributivity with respect to $\oplus $ can then be used to calculate arbitrary Nim-products. For example, $8 \otimes 3 = (4 \otimes 2) \otimes (2 \oplus 1) = (4 \otimes 3) \oplus (4 \otimes 2) = 12 \oplus 8 = 4 $. Conway’s remarkable result asserts that the ordinal numbers, equipped with Nim addition and multiplication, form an algebraically closed field of characteristic two.
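These rules are easy to put on a machine. Here is a small sketch (my own helper names): Nim-addition is bitwise XOR, and Nim-multiplication splits both numbers along the relevant Fermat power $F$ and recombines using $F \otimes F = F \oplus \tfrac{1}{2}F$:

```python
from functools import lru_cache

def nim_add(a, b):
    # addition without carrying is just XOR
    return a ^ b

@lru_cache(maxsize=None)
def nim_mul(a, b):
    if a < 2 or b < 2:
        return a * b
    # smallest Fermat power F = 2^(2^k) with a, b < F*F
    F = 2
    while F * F <= max(a, b):
        F *= F
    a1, a0 = divmod(a, F)   # high and low halves (numbers below F form a field)
    b1, b0 = divmod(b, F)
    high = nim_mul(a1, b1)
    mid = nim_mul(a1, b0) ^ nim_mul(a0, b1)
    # recombine using F (x) F = F (+) F/2
    return (high ^ mid) * F ^ nim_mul(high, F // 2) ^ nim_mul(a0, b0)

print(nim_add(9, 1))  # 8
print(nim_mul(8, 3))  # 4
print(nim_mul(2, 2))  # 3, i.e. (3/2)*2
```

Both examples from the text check out, as does $F_0 \otimes F_0 = 3$.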
The closure $\overline{\mathbb{F}_2} $ is identified with the subfield of all ordinals smaller than $\omega^{\omega^{\omega}} $. For those of you who don’t feel like going transfinite, the subfield $(\mathbb{N},\oplus,\otimes) $ is identified with the quadratic closure of $\mathbb{F}_2 $. The connection between $\overline{\mathbb{F}_2} $ and the odd roots of unity has been advocated by Alain Connes in his talk before a general public at the IHES: “L’ange de la géométrie, le diable de l’algèbre et le corps à un élément” (the angel of geometry, the devil of algebra and the field with one element). He describes its content briefly in this YouTube video. At first it was unclear to me which ‘coupling problem’ Alain meant, but this has been clarified in his paper together with Caterina Consani, Characteristic one, entropy and the absolute point. The non-zero elements of $\overline{\mathbb{F}_2} $ can be identified with the set of all odd roots of unity. For, if x is such a unit, it belongs to a finite subfield of the form $\mathbb{F}_{2^n} $ for some n, and, as the group of units of any finite field is cyclic, x is an element of order dividing $2^n-1 $. Hence, $\mathbb{F}_{2^n} - \{ 0 \} $ can be identified with the set of $(2^n-1) $-th roots of unity, with $e^{2 \pi i/(2^n-1)} $ corresponding to a generator of the unit group. So, all elements of $\overline{\mathbb{F}_2} $ correspond to odd roots of unity. The observation that we get indeed all odd roots of unity may take you a couple of seconds (if m is odd, then (2,m)=1 and so 2 is a unit in the finite cyclic group $(\mathbb{Z}/m\mathbb{Z})^* $, whence $2^n \equiv 1 \pmod{m} $ for some n, so the m-th roots of unity lie among those of order $2^n-1 $). Assuming we succeed in fixing a one-to-one correspondence between the non-zero elements of $\overline{\mathbb{F}_2} $ and the odd roots of unity $\mu_{odd} $ respecting multiplication, how can we recover the addition on $\overline{\mathbb{F}_2} $?
Well, here’s Alain’s coupling function: he ties up an element x of the algebraic closure with the element s(x)=x+1 (and as we are in characteristic two, this is an involution, so the element tied up with x+1 is s(x+1)=(x+1)+1=x). The clue is that multiplication together with the coupling map s allows us to compute any sum of two elements as $x+y=x \times s(\frac{y}{x}) = x \times (\frac{y}{x}+1) $. For example, all information about the finite field $\mathbb{F}_{2^4} $ is encoded in this identification with the 15th roots of unity, together with the pairing s depicted in the picture above. Okay, we now have two identifications of the algebraic closure $\overline{\mathbb{F}_2} $: the smaller ordinals equipped with Nim addition and Nim multiplication, and the odd roots of unity with complex multiplication and the Connes coupling s. The question we started from asks for a general recipe to identify these two approaches. To those of you who are convinced that finite fields (LOL, even characteristic two!) are objects far too trivial to bother thinking about: as far as I know, NOBODY knows how to do this explicitly, even restricting the ordinals to merely the natural numbers! Please feel challenged! To get you started, I’ll show you how to place the first 15 Knights and give you a procedure (though far from explicit) to continue. Here’s the Nim-picture compatible with that above. To verify this, and to illustrate the general strategy, I’d better hand you the Nim-tables of the first 16 numbers. Here they are. It is known that the finite subfields of $(\mathbb{N},\oplus,\otimes) $ are precisely the sets of numbers smaller than the Fermat powers $F_n $. So, the first one is all numbers smaller than $F_1=4 $ (check!). The smallest generator of the multiplicative group (of order 3) is 2, so we take this to correspond to the unit root $e^{2 \pi i/3} $. The next subfield is all numbers smaller than $F_2 = 16 $ and its multiplicative group has order 15.
Now, choose the smallest integer k which generates this group, compatible with the condition that $k^{\otimes 5}=2 $. Verify that this number is 4 and that this forces the identification and coupling given above. The next finite subfield would consist of all natural numbers smaller than $F_3=256 $. Hence, in this field we are looking for the smallest number k generating the multiplicative group of order 255 and satisfying the extra condition that $k^{\otimes 17}=4 $, which would fix an identification at that level. Then, the next level would be all numbers smaller than $F_4=65536 $, and again we would like to find the smallest number generating the multiplicative group such that the appropriate power is equal to the aforementioned k, and so on. Can you give explicit (even inductive) formulae to achieve this? I guess even the problem of placing Knight 16 will give you a couple of hours to think about… (to be continued).
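Though no explicit formula is known, the first level of this search is easy to carry out by brute force. The sketch below (with my own helper names, and repeating the recursive Nim-multiplication routine so it is self-contained) finds the smallest generator k of the order-15 multiplicative group satisfying $k^{\otimes 5}=2$:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def nim_mul(a, b):
    # recursive Nim-multiplication via Fermat-power splitting
    if a < 2 or b < 2:
        return a * b
    F = 2
    while F * F <= max(a, b):
        F *= F
    a1, a0 = divmod(a, F)
    b1, b0 = divmod(b, F)
    high = nim_mul(a1, b1)
    mid = nim_mul(a1, b0) ^ nim_mul(a0, b1)
    return (high ^ mid) * F ^ nim_mul(high, F // 2) ^ nim_mul(a0, b0)

def nim_pow(k, e):
    r = 1
    for _ in range(e):
        r = nim_mul(r, k)
    return r

def nim_order(k):
    # multiplicative order of k in its finite subfield
    e, r = 1, k
    while r != 1:
        r, e = nim_mul(r, k), e + 1
    return e

# smallest k below 16 generating the order-15 group with k^{(x)5} = 2
k = next(k for k in range(2, 16) if nim_order(k) == 15 and nim_pow(k, 5) == 2)
print(k)  # 4, fixing the identification of the 16-element subfield
```

The same loop, run over range(2, 256) with the condition nim_pow(k, 17) == 4, would fix the next level.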
Before we go deeper into Conway’s M(13) puzzle, let us consider a more commonly known sliding puzzle: the 15-puzzle. A heated discussion went on a couple of years ago at sci.physics.research, starting with this message. Lubos Motl argued that group theory is sufficient to analyze the problem and that there is no reason to resort to groupoids (‘The human(oids) who like groupoids…’ and other goodies, in pre-blog but vintage Motl-speak), whereas ‘Jason’ defended his viewpoint that a groupoid is the natural symmetry for this puzzle. I’m mostly with Lubos on this. All relevant calculations are done in the symmetric group $S_{16} $, and (easy) group-theoretic results such as the distinction between even and odd permutations or the generation of the alternating groups really crack the puzzle. At the same time, if one wants to present this example in class, one has to be pretty careful to avoid confusion between permutations encoding positions and those corresponding to slide-moves. In making such a careful analysis, one is bound to come up with a structure which isn’t a group, but is precisely what some people prefer to call a groupoid (if not a 2-group…). Groupoids are no recent invention but date back to 1926, when Heinrich Brandt defined what we now know as the ‘Brandt groupoid’ in his study of noncommutative number theory. He was studying central simple algebras (the noncommutative counterpart of number fields) in which there usually is no unique ‘ring of integers’ (in noncommutative parlance, a maximal order); fractional ideals have a left- and a right-maximal order associated to them, leading naturally to left and right unit elements and the notion of a groupoid. The algebraic notion of a groupoid is a set G with a partial multiplication and an everywhere defined inverse satisfying associativity $a \ast (b \ast c) = (a \ast b) \ast c $ whenever the terms are defined.
Further, whenever $a \ast b $ is defined one has $a^{-1} \ast a \ast b = b $ and $a \ast b \ast b^{-1} = a $, and finally all $a^{-1} \ast a $ and $a \ast a^{-1} $ are defined (but may be different elements). The categorical definition of a groupoid is even simpler: it is a category in which every morphism is an isomorphism. Both notions are equivalent. Recall that the 15-puzzle is a 4×4 slide-puzzle with initial configuration with the hole at the bottom-right square (see left), and one can slide the hole one place at a time in the vertical or horizontal direction. For example, if one slides the hole along the path 12-11-7-6-2 one ends up with the situation on the right $\begin{array}{|c|c|c|c|} \hline 1 & 2 & 3 & 4 \\ \hline 5 & 6 & 7 & 8 \\ \hline 9 & 10 & 11 & 12 \\ \hline 13 & 14 & 15 & \\ \hline \end{array} $ (initial position) $\begin{array}{|c|c|c|c|} \hline 1 & & 3 & 4 \\ \hline 5 & 2 & 6 & 8 \\ \hline 9 & 10 & 7 & 11 \\ \hline 13 & 14 & 15 & 12 \\ \hline \end{array} $ (position after 12-11-7-6-2) The mathematical aim is to determine the allowed positions, that is, those which can be reached from the initial position by making legal slide-moves. The puzzle aim is to return to the initial position starting from an allowed position. We will determine the number of allowed positions and why they are the elements of a groupoid. We don’t want to draw arrays all the time, so we need a way to encode a position. Giving the hole label 16, we can record a position by writing down the permutation on 16 letters describing which label of the given position replaces each label of the initial position. For example, the situation on the right arises by leaving 1 in position 1; 2 is replaced by 16; 3, 4 and 5 are left in their positions, but 6 is replaced by 2, and so on. So, we can encode this position by the permutation $\sigma = (2,16,12,11,7,6) $ and conversely, given such a permutation, we can fill in the entire position encoded by it.
We will denote the array or position corresponding to a permutation $\tau \in S_{16} $ by the boxed symbol $\boxed{\tau} $. Next, we turn to slide-moves. A basic move interchanges the hole (label 16) with a square labeled i (provided i is a horizontal or vertical neighbor of the hole in the position), so it can be represented by the transposition $(16,i) $. We can iterate this procedure: a legal move from a position $\boxed{\tau} $ will be a succession of basic moves, written from right to left as is usual in composing permutations, $(16,i_k) \cdots (16,i_2)(16,i_1) $, where legality implies that at each step the label $i_{m+1} $ must be a vertical or horizontal neighbor of the hole in the position reached from $\boxed{\tau} $ after applying the move $(16,i_m)(16,i_{m-1}) \cdots (16,i_2)(16,i_1) $. Hence, we’d better have a method to compute the position we obtain from a given position by applying a legal sequence of slide-moves. The rule is: multiply the slide-move permutation with the position permutation in the group $S_{16} $ to get the code for the obtained position. In symbols, $(16,i_k) \cdots (16,i_2)(16,i_1) \boxed{\tau} = \boxed{(16,i_k) \cdots (16,i_2)(16,i_1) \tau} $. For example, the initial position corresponds to the identity permutation, that is, is $\boxed{()} $, and applying to it the legal sequence of slide-moves along the path 12-11-7-6-2 as before we get the position with code $(16,2)(16,6)(16,7)(16,11)(16,12) \boxed{()} = \boxed{(16,2)(16,6)(16,7)(16,11)(16,12)} = \boxed{(16,12,11,7,6,2)} $, which is indeed the code of the position obtained above on the right. Right, the basic ingredients for a full understanding of this puzzle are hence the combinations of an allowed position together with a legal move-sequence starting from it.
Therefore, we will take as our elements all possible combinations $\sigma \boxed{\tau} $ with $\sigma,\tau \in S_{16} $, where $\tau $ is the code of a reachable position and $\sigma = (16,i_l) \cdots (16,i_1) $ is a legal move from that position. On this set of elements we only have a partially defined composition rule, for we can only make sense of the composition of moves $\sigma_1 \boxed{\tau_1} \ast \sigma_2 \boxed{\tau_2} = \sigma_1 \sigma_2 \boxed{\tau_2} $ provided $\tau_1 $ is the code of the position reached from $\boxed{\tau_2} $ after applying the move-sequence $\sigma_2 $; that is, the multiplication above is defined if and only if $\tau_1 = \sigma_2 \tau_2 $ in $S_{16} $. All conditions of the algebraic notion of a groupoid are satisfied. For example, every element has an inverse $(\sigma \boxed{\tau})^{-1} = \sigma^{-1} \boxed{\omega} $ where $\omega = \sigma \tau $ in $S_{16} $, and it is easy to check that all conditions are indeed satisfied. In the categorical definition, the groupoid is the category having as objects the reachable positions, and morphisms $\boxed{\tau_1} \rightarrow \boxed{\tau_2} $ are of the form $\sigma_1 \boxed{\tau_1} $ with $\sigma_1 \tau_1 = \tau_2 $ (hence, all morphisms are isomorphisms and there is just one morphism between two objects, namely the one corresponding to $\sigma_1 = \tau_2 \tau_1^{-1} \in S_{16} $). For example, each object $\boxed{\tau} $ also has an identity morphism $() \boxed{\tau} $, and again all categorical requirements are met. This groupoid we will call the 15-puzzle groupoid, and next time we will determine that it has exactly $\frac{1}{2} 16! $ objects.
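The multiplication rule for slide-moves is easy to check by machine. Here is a small sketch (my own encoding: squares are numbered 1 to 16 row by row, and tau[s] is the label on square s): applying the slide path 12-11-7-6-2 to the initial position should reproduce the code $(16,12,11,7,6,2)$ computed above.

```python
def apply_moves(tau, moves):
    """Left-multiply the position code tau by the basic moves (16, i),
    in order, checking that each move is legal (i adjacent to the hole)."""
    tau = tau[:]
    for i in moves:
        hole, sq = tau.index(16), tau.index(i)
        hr, hc = divmod(hole - 1, 4)
        sr, sc = divmod(sq - 1, 4)
        assert abs(hr - sr) + abs(hc - sc) == 1, "illegal slide-move"
        tau[hole], tau[sq] = i, 16
    return tau

initial = [None] + list(range(1, 17))   # pad so squares are 1-indexed
pos = apply_moves(initial, [12, 11, 7, 6, 2])
print(pos[1:])  # [1, 16, 3, 4, 5, 2, 6, 8, 9, 10, 7, 11, 13, 14, 15, 12]
```

The printed code has the hole (16) on square 2, label 2 on square 6, and so on, which is exactly the cycle $(16,12,11,7,6,2)$; running the reversed path brings the position back to the identity.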
McNemar's test statistic is given by: $\chi^{2} = \frac{\left(|r-s|-1\right)^{2}}{r+s}$, where $r$ and $s$ are the counts of discordant pairs ((0,1) versus (1,0)); the statistic is distributed $\chi^{2}$ with 1 degree of freedom under the null hypothesis. I am having a hard time parsing Sribney on the sign test: The test statistic for the sign test is the number $n_{+}$ of observations greater than zero. Assuming that the probability of an observation being equal to zero is exactly zero, then, under the null hypothesis, $n_{+} \sim \text{Binomial}(n, p=\frac{1}{2})$, where $n$ is the total number of observations. But what do we do if we have some observations that are zero? Fisher’s Principle of Randomization We have a ready answer to this question if we view the test from the perspective of Fisher’s Principle of Randomization (Fisher 1935). Fisher’s idea (stated in a modern way) was to look at a family of transformations of the observed data such that the a priori likelihood (under the null hypothesis) of the transformed data is the same as the likelihood of the observed data. The distribution of the test statistic is then produced by calculating its value for each of the transformed “randomization” data sets, considering each data set equally likely. For the sign test, the “data” are simply the set of signs of the observations. Under the null hypothesis of the sign test, $P(X_{i}>0)= P(X_{i}<0)$, so we can transform the observed signs by flipping any number of them and the set of signs will have the same likelihood. The $2^{n}$ possible sign changes form the family of randomization data sets. If we have no zeros, this procedure again leads to $n_{+} \sim \text{Binomial}(n, p=\frac{1}{2})$. If we do have zeros, changing their signs leaves them as zeros. So if we observe $n_{0}$ zeros, each of the $2^{n}$ sign-change data sets will also have $n_{0}$ zeros.
Hence, the values of $n_{+}$ calculated over the sign-change data sets range from 0 to $n-n_{0}$, and the “randomization” distribution of $n_{+}$ is $n_{+} \sim \text{Binomial}(n-n_{0}, p=\frac{1}{2})$. Because this seems to be saying go ahead and ignore zeros. But then later in the paper, Sribney provides an adjustment for the sign-rank test that accounts for zeros just along the lines I am wondering about: The adjustment for zeros is the change in the variance when the ranks for the zeros are signed to make $r_{j}=0$; i.e., the variance is reduced by $\frac{1}{4}\sum_{i=1}^{n_{0}}{i^{2}}=\frac{n_{0}\left(n_{0}+1\right)\left(2n_{0}+1\right)}{24}$. Should I instead be asking whether or not to apply the signed-rank test to individually-matched case-control data? A simple made-up example will illustrate why ignoring zeros presents a problem. Imagine you've paired data with no differences equal to zero (this would correspond to data for a McNemar's test with only discordant pairs present). With a sample size of, say, 20, you find 15 positive signs of differences and 5 negative signs of differences, and conclude a significant difference. Now imagine that you have 1000 observed differences equal to zero in addition to those 15 positive and 5 negative signs of differences: now, intuitively, the difference should no longer be judged significant. If McNemar's test is conducted on 1020 pairs, 1000 of which are zeros, and with discordant pairs of 15 and 5, we should not reject the null hypothesis (e.g. at $\alpha = 0.05$). There is an adjustment to the sign test to correct for observed zero differences based upon Fisher’s "Principle of Randomization" (Sribney, 1995). Is there a way of improving on McNemar's test that addresses the effect of observed zero differences (i.e. by accounting for the number of concordant pairs relative to the number of discordant pairs)? How? What about the asymptotic z approximation for the sign test? References Sribney WM.
(1995) Correcting for ties and zeros in sign and rank tests. Stata Technical Bulletin. 26:2–4.
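The numbers in the made-up example above can be checked with an exact binomial computation. This is a sketch with my own helper name; the two-sided p-value is taken as twice the upper tail, capped at 1:

```python
from math import comb

def sign_test_p(n_pos, n_neg):
    """Exact two-sided sign-test p-value on the discordant pairs only
    (zeros already dropped): X ~ Binomial(n_pos + n_neg, 1/2) under the null."""
    n = n_pos + n_neg
    k = max(n_pos, n_neg)
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# 15 positive vs 5 negative differences: p ~ 0.041, "significant" at 0.05
print(round(sign_test_p(15, 5), 4))  # 0.0414
# Adding 1000 zero differences leaves this p-value untouched when zeros are
# simply dropped, which is exactly the behavior being questioned above.
```

The zeros never enter the computation, so the naive sign test stays significant no matter how many concordant (zero-difference) pairs are added.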
In the 15-puzzle groupoid 1 we have seen that the legal positions of the classical 15-puzzle are the objects of a category in which every morphism is an isomorphism (a groupoid). Today, we will show that there are exactly 10461394944000 objects (legal positions) in this groupoid. The crucial fact is that positions with the hole in a fixed place can be identified with the elements of the alternating group $A_{15} $, a fact first proved by William Edward Story in 1879 in a note published in the American Journal of Mathematics. Recall from last time that the positions reachable from the initial position can be encoded as $\boxed{\tau} $, where $\tau $ is the permutation on 16 elements (the 15 numbered squares and 16 for the hole) such that $\tau(i) $ tells what number in the position lies on square $i $ of the initial position. The set of all reachable positions are the objects of our category. A morphism $\boxed{\tau} \rightarrow \boxed{\sigma} $ is a legal sequence of slide-moves starting from position $\boxed{\tau} $ and ending at position $\boxed{\sigma} $. That is, $\boxed{\sigma} = (16,i_k)(16,i_{k-1}) \cdots (16,i_2)(16,i_1) \boxed{\tau} $, where for every m between 1 and k-1 the number $i_{m+1} $ is a horizontal or vertical neighbor of the hole in position $\boxed{(16,i_m)\cdots (16,i_1) \tau} $ (and $i_1 $ is a neighbor of the hole in $\boxed{\tau} $). When we identify such a morphism with the corresponding element $(16,i_k)\cdots (16,i_2)(16,i_1) \in S_{16} $, we see that it must be the unique element $\sigma \tau^{-1} $; hence there is just one morphism between two objects and they are all invertible, so our category is indeed a groupoid. Can we say something about the length k of such a sequence of slide-moves?
Well, consider the OXO-drawing on our 4×4 square $\begin{array}{|c|c|c|c|} \hline O & X & O & X \\ \hline X & O & X & O \\ \hline O & X & O & X \\ \hline X & O & X & O \\ \hline \end{array} $ One legal slide-move brings an O-hole to an X-hole and an X-hole to an O-hole, so if the holes in $\boxed{\sigma} $ and $\boxed{\tau} $ are of the same type (both O-holes or both X-holes) then the length k of a legal sequence must be even and therefore the permutation $\sigma \tau^{-1} = (16,i_k) \cdots (16,i_1) $ belongs to the simple alternating group $A_{16} $. In particular, if we take $\tau = () $, the original position, we see that if a reachable position $\sigma $ has the hole in the bottom-right corner (and hence $\sigma $ fixes 16, so is an element of $S_{15} $) then $\sigma \in A_{16} \cap S_{15} = A_{15} $ and, in particular, Loyd’s 14-15 puzzle has no solution (as it corresponds to the transposition $\sigma=(14,15) \notin A_{15} $). This argument first appeared in print in W.W. Johnson, “Note on the ‘15’ puzzle”, Amer. J. Math. 2 (1879) 397-399. We can compose legal sequences leading to positions having their hole at the bottom right in the groupoid, showing that such positions can be identified with a subgroup of $A_{15} $. Note that we do NOT claim that we can multiply any two sequences of even length $(16,i_k) \cdots (16,i_1) $ with $(16,j_l) \cdots (16,j_1) $ (which would give us the whole of $A_{16} $) but only composable morphisms in the groupoid! W.E. Story then went on to show that this subgroup is the full alternating group $A_{15} $, which comes down to finding enough reachable positions, with the hole at the bottom right, to generate the group. We will sketch a more recent argument due to Aaron Archer (Math. Monthly 106 (1999) 793-799). He starts out with another encoding of reachable positions, disregarding the exact placement of the hole. He records the 15 numbers in order along a snakelike path, disregarding the hole.
$\begin{array}{|c|c|c|c|} \hline \rightarrow & \rightarrow & \rightarrow & \downarrow \\ \hline \downarrow & \leftarrow & \leftarrow & \leftarrow \\ \hline \rightarrow & \rightarrow & \rightarrow & \downarrow \\ \hline \leftarrow & \leftarrow & \leftarrow & \leftarrow \\ \hline \end{array} $ so the position $\begin{array}{|c|c|c|c|} \hline 1 & 2 & 3 & 4 \\ \hline 5 & 6 & 7 & 8 \\ \hline & 15 & 12 & 14 \\ \hline 13 & 9 & 11 & 10 \\ \hline \end{array} $ is encoded as $[1,2,3,4,8,7,6,5,15,12,14,10,11,9,13] $. The point being that we can slide the hole along the snakelike path to get a uniquely determined position having the same code but with the hole at another position. For example, sliding the hole along the path upwards to the third square of the upper row we get the position $\begin{array}{|c|c|c|c|} \hline 1 & 2 & & 3 \\ \hline 6 & 7 & 8 & 4 \\ \hline 5 & 15 & 12 & 14 \\ \hline 13 & 9 & 11 & 10 \\ \hline \end{array} $ having the same code. This gives a natural one-to-one correspondence between reachable positions having their hole at spot i and those having the hole at spot j, so in order to determine the number of objects in our groupoid, it suffices to count the number of reachable positions with the hole at a specified spot. They are just all the codes, and as they form a subgroup of $A_{15} $ it is enough to calculate the permutations induced on a code by just one slide-move. If the slide-move is along the snakelike path, it will not alter the code, so we only have to compute the 9 remaining slide-moves S(1,8), S(2,7), S(3,6), S(7,10), S(6,11), S(5,12), S(9,16), S(10,15) and S(11,14), where the numbers correspond to the order in which we encounter the squares along the snakelike path. For example, S(1,8) is the slide-move changing the hole at position (1,1) to position (2,1).
This move has the following effect on a position $\begin{array}{|c|c|c|c|} \hline & a_1 & a_2 & a_3 \\ \hline a_7 & a_6 & a_5 & a_4 \\ \hline a_8 & a_9 & a_{10} & a_{11} \\ \hline a_{15} & a_{14} & a_{13} & a_{12} \\ \hline \end{array} $ moving to $\begin{array}{|c|c|c|c|} \hline a_7 & a_1 & a_2 & a_3 \\ \hline & a_6 & a_5 & a_4 \\ \hline a_8 & a_9 & a_{10} & a_{11} \\ \hline a_{15} & a_{14} & a_{13} & a_{12} \\ \hline \end{array} $ whence it has the effect of changing the code $[a_1,a_2,a_3,a_4,a_5,a_6,a_7,a_8,a_9,a_{10},a_{11},a_{12},a_{13},a_{14},a_{15}] $ to the code $[a_7,a_1,a_2,a_3,a_4,a_5,a_6,a_8,a_9,a_{10},a_{11},a_{12},a_{13},a_{14},a_{15}] $, and therefore it corresponds to the permutation $S(1,8)=(1,7,6,5,4,3,2) $. Similarly, one calculates that the other slide-moves determine the following permutations $S(2,7)=(2,6,5,4,3), S(3,6)=(3,5,4), S(5,12)=(5,11,10,9,8,7,6) $ $ S(6,11)=(6,10,9,8,7), S(7,10)=(7,9,8), S(9,16)=(9,15,14,13,12,11,10) $ $ S(10,15)=(10,14,13,12,11), S(11,14)=(11,13,12) $ (I’ve replaced the permutations in Archer’s paper by their inverses because I want to have left actions rather than right ones). The only thing left to do is to fire up GAP (update: or use Michel’s comment below) and verify that these permutations do indeed generate the full alternating group $A_{15} $. Summarizing, there are precisely $\frac{1}{2} 15! $ reachable positions having their hole in a specified place, and as there are 16 possible places for the hole, we get that the total number of reachable positions (or, if you prefer, the number of objects in our groupoid) is equal to $16 \times \frac{1}{2} 15! = \frac{1}{2} 16! = 10461394944000 $. The whole point of the careful group versus groupoid analysis is that one should not conclude that the positions form the alternating group $A_{16} $ just because their number happens to equal the number of elements of $A_{16} $.
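Short of re-running the full GAP computation, two pieces of the argument are easy to sanity-check in a few lines. This sketch verifies that every listed slide-move permutation is a cycle of odd length (hence an even permutation, consistent with landing in $A_{15}$), and that the object count matches the formula:

```python
from math import factorial

# the nine extra slide-moves as cycles, copied from the text; all other
# moves follow the snakelike path and leave the code unchanged
cycles = [
    (1, 7, 6, 5, 4, 3, 2), (2, 6, 5, 4, 3), (3, 5, 4),
    (5, 11, 10, 9, 8, 7, 6), (6, 10, 9, 8, 7), (7, 9, 8),
    (9, 15, 14, 13, 12, 11, 10), (10, 14, 13, 12, 11), (11, 13, 12),
]

# a cycle of odd length is an even permutation
assert all(len(c) % 2 == 1 for c in cycles)

# object count of the groupoid: 16 hole positions times |A15| = 15!/2
assert 16 * factorial(15) // 2 == factorial(16) // 2 == 10461394944000
```

This does not replace the generation check (that the nine cycles generate all of $A_{15}$), which still needs GAP or a Schreier-Sims computation.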
For those who don’t like categories but prefer the algebraic notion of a groupoid: their groupoid has $(10461394944000)^2 = 109440784174348763136000000 $ elements, as there is exactly one morphism between each ordered pair of objects. References Aaron F. Archer, “A Modern Treatment of the 15 Puzzle”, Amer. Math. Monthly 106 (1999) 793-799. W.E. Story, “Note on the ‘15’ puzzle”, Amer. J. Math. 2 (1879) 399-404.
So that finally brings us to our ZSM viewer on the next page of this tutorial. When you look at the viewer page, you'll see three graphs, two in the top left and one on the right. The two on the top left are time waveforms: the top graph shows the voltage of the three phases as they change over a commutation cycle, with any PWM frequency filtered out, and the bottom graph shows the PWM waveforms corresponding to one commutation angle. The top graph also shows the zero-sequence component in light blue, which equals the average of the three phase voltage waveforms. The graph on the right shows the average voltage output of the three phases in a cube. Still don't believe it's a cube? On the ZSM viewer page, click on the cube with your mouse and drag it around to rotate the viewpoint. There's a lot going on here, so let's go over what's being displayed:

- The edges of the cube, with its axes labeled A, B, and C.
- The long diagonal of the cube, with that axis labeled 0 (for the zero-sequence component).
- The perpendicular axes X and Y (equivalent to α and β).
- A black point within the cube showing the instantaneous voltage output of the three-phase bridge.
- A gray hexagon, showing the projection of the cube on the α-β (X-Y) plane, shifted up along the cube's diagonal so that it contains the output voltage point.
- D and Q axes within the plane of the gray hexagon, with the output voltage point on the D-axis.
- A blue curve, showing the trajectory of this voltage point within the cube.
- A blue-dashed circle, showing that trajectory projected onto the α-β plane through the midpoint of the cube diagonal (0.5, 0.5, 0.5); this is the resulting trajectory of sine-triangle PWM.

You can also choose one of several types of zero-sequence modulation:

1. No Shift: This is plain sine-triangle modulation, and the phase voltages are all sinusoidal. Line-to-line amplitudes above $\frac{\sqrt{3}}{2} \approx 0.866$ cannot be achieved; beyond this point, the trajectory would extend outside the cube.

2. Midpoint Clamp: This is conventional space vector PWM (CSVPWM). There's a reason for calling it "midpoint clamp," which we'll discuss in a little bit.

3. Third Harmonic: The zero-sequence component is a third-harmonic waveform equal to $-\frac{A}{6}\sin 3\theta$, where A is the amplitude of the output voltage and θ is the commutation angle. This doesn't get used much, and we'll discuss why in a little bit.

4. Top Clamp (also called "flat-top"): The zero-sequence component is chosen to move the highest output voltage to the positive rail of the DC link. This is sometimes used to reduce switching losses, since the phase with maximum output voltage is at 100% duty cycle and does not need to be switched for 120° of the commutation cycle.

5. Bottom Clamp (also called "flat-bottom"): The zero-sequence component is chosen to move the lowest output voltage to the negative rail of the DC link. This is also used to reduce switching losses. In low-voltage drives that use bootstrapped gate drives, it is more practical than top clamp, since 100% duty cycle on the high-side switches cannot be maintained for many PWM cycles.

6. Top and Bottom Clamp: The zero-sequence component is chosen to use top clamp or bottom clamp, whichever requires the smaller shift in zero-sequence voltage. This distributes the switching losses evenly among the six switches. Like top clamp, however, it is not practical with bootstrapped gate drives, because it requires 100% duty cycle on the high-side switches for extended periods of time.

7. Minimum Shift: In theory, it is possible to choose the smallest possible adjustment in zero-sequence voltage that keeps the output voltages within realizable limits (within the cube). For line-to-line amplitudes below $\frac{\sqrt{3}}{2} \approx 0.866$, the zero-sequence voltage is fixed and the output waveforms are sinusoidal. There's no real reason to use this method, however, and the choice of zero-sequence voltage becomes ambiguous in the overmodulation region (when distortion is permitted in order to achieve larger output voltage amplitudes).

8. Nip & Tuck: The zero-sequence component is chosen to hold the phase voltage farthest from the mean at a constant value, so that the waveforms are flattened at a constant minimum and maximum duty cycle for 60° of the commutation cycle; this follows the waveforms in the King patent. It is very close to the third-harmonic method, and like the third-harmonic method, there are practical reasons not to use it. I am including it here merely because it looks nice, and I call it "nip & tuck" for the same reason. A side image of the cube is not shown; it looks very similar to that of the third-harmonic method. The trajectory consists of six 90° circular arcs lying in different planes parallel to the cube faces.
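For what it's worth, most of the shift rules above are one-liners. Here's a rough Python sketch (my own illustration, not code from this tutorial; the normalization and function names are my assumptions) of how several of them could be computed, with phase voltages normalized so the DC link spans -0.5 to +0.5:

```python
import math

def phase_voltages(theta, amplitude):
    """Three sinusoidal line-to-neutral voltages; DC link normalized to [-0.5, +0.5].
    `amplitude` is the line-to-line amplitude relative to the DC link voltage."""
    peak = amplitude / math.sqrt(3)  # line-to-neutral peak = line-to-line / sqrt(3)
    return [peak * math.cos(theta - k * 2 * math.pi / 3) for k in range(3)]

def zero_sequence(phases, method):
    """Zero-sequence offset, added equally to all three phase voltages."""
    hi, lo = max(phases), min(phases)
    if method == "none":        # 1: plain sine-triangle, no shift
        return 0.0
    if method == "midpoint":    # 2: midpoint clamp (CSVPWM): center the envelope
        return -(hi + lo) / 2
    if method == "top":         # 4: clamp the highest phase to the positive rail
        return 0.5 - hi
    if method == "bottom":      # 5: clamp the lowest phase to the negative rail
        return -0.5 - lo
    if method == "topbottom":   # 6: whichever clamp needs the smaller shift
        zt, zb = 0.5 - hi, -0.5 - lo
        return zt if abs(zt) <= abs(zb) else zb
    raise ValueError(method)

phases = phase_voltages(0.3, 1.0)
shifted = [v + zero_sequence(phases, "midpoint") for v in phases]
print(max(shifted) + min(shifted))  # close to 0: envelope centered on the DC link
```

Note how the midpoint clamp keeps the shifted voltages inside the cube for line-to-line amplitudes all the way up to 1.0, which is exactly why CSVPWM extends the linear range beyond sine-triangle's 0.866.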
Rate of change in calculus is a series I have been wanting to do for a while. If you are a student and just starting out, then you're in the right place. This is the very beginning of calculus and where you want to start your learning. Since there are so many topics in school that require some calculus, I felt it was important to get some basics up here for you. Beginning Calculus: Rate of Change In the examples that follow, I try to provide any explanations that seem necessary. I want to be clear and comment on what I do and why I do it. I am hoping this method will help others understand. If it seems a little long or self-explanatory, then I apologize. My goal is to be consistent and thorough. We are dealing with functions in this first chapter and the rate at which they change. To get the most out of these examples, I would encourage you to try each problem first by yourself. If you have issues, then refer to how it is worked. Hopefully that, along with my comments, will fill in any gaps that you might have in your knowledge. Problem 1. Find the average rate of change of the function over the given intervals. \[f(x)= 3x^3+3\] Using the intervals: [2,4] [-2,2] Use the given function and the first interval. You are evaluating the function at both endpoints in order to find the difference. Once you do that, you divide by the length of the interval. For the interval [2,4] \[ \Longrightarrow \frac{f(4)-f(2)}{4-2} \] Substitute in all of your values to find the rate of change. \[ \Longrightarrow \frac{((3)(4)^3 +3)-((3)(2)^3 +3)}{4-2} = 84 \] For the interval [-2,2] \[ \Longrightarrow \frac{f(2)-f(-2)}{2-(-2)}\] Evaluate the equation with those values. \[ \Longrightarrow \frac{((3)(2)^3+3)-((3)(-2)^3+3)}{2-(-2)} = 12\] Problem 2. Find the average rate of change of the function \(R(\theta)\) over the given interval: [0,8] \[ \Longrightarrow R(\theta) = \sqrt{3\theta+1} \] Using this formula, we want to evaluate the function at the endpoints of the given interval.
How this is done is simple. In the numerator we use the function values at the endpoints of the interval. In the denominator, we just have the interval difference. Then we divide the two. \[ \Longrightarrow \frac{R(\theta_2) - R(\theta_1)}{\theta_2 - \theta_1} \] I will now go ahead and set up the equation. \[ \Longrightarrow \frac{R(8) - R(0)}{8-0} \] Now just substitute the endpoints into the formula and start evaluating. \[ \Longrightarrow \frac{\sqrt{3\cdot 8+1}-\sqrt{3\cdot 0+1}}{8-0} \] This gives us a simple expression to evaluate. \[ \Longrightarrow \frac {5-1}{8-0} \] We are left with: \[ \Longrightarrow \frac {1}{2} \] This is your rate of change. Problem 3. Find the slope of the curve at the given point and an equation of the tangent line at that point. $$(-2,-9)$$ \[ y=7-4x^2 \] Approach this problem as change in y divided by change in x. \[ \Longrightarrow \frac{(7-4(-2+h)^2) -( 7-4(-2)^2)}{h} \] Distribute everything out. \[ \Longrightarrow \frac{16h-4h^2}{h} \] Now simplify and set h=0. The slope at (-2,-9) is: 16 Now to find the equation of the tangent line. You use this formula for that: \[ \Longrightarrow y-y_1 = m(x-x_1) \] Substitute in your points and you get: \[ \Longrightarrow y-(-9) = 16(x-(-2)) \] Simplify a bit. \[ \Longrightarrow y+9 = 16(x+2) \] Keep simplifying. This is the equation of the tangent line. \[ \Longrightarrow y = 16x + 23 \] Problem 4. Find the slope of the curve at the given point by finding the limiting value of the slopes of the secants through the point. Also find an equation of the tangent line to the curve at the same point. \[ y = x^3 - 6x \] Point = (1,-5) This is just another form of what we have done before. You have an equation and a point to go by. Put the point into the difference quotient and evaluate it. \[ \Longrightarrow \frac{(1+h)^3 - 6(1+h) - ((1)^3 - 6(1))}{h} \] Now distribute. Pay careful attention to where everything goes, as it is a lot to deal with.
\[ \Longrightarrow \frac{(1+3h+3h^2 +h^3-6-6h) - (-5)}{h} \] Reduce it down some. \[ \Longrightarrow \frac{h^3 + 3h^2 -3h}{h} \] Simplify it now. \[ \Longrightarrow h^2 + 3h - 3 \] Then just assign h=0 and see what you have left. That is your slope, of course, as h approaches 0. \[ \Longrightarrow -3 \] Now to find the equation of the tangent line. We do the same process as before. We have the point-slope equation. \[ \Longrightarrow y-y_1 = m(x-x_1) \] Use the slope that you got above. Then put in your point where it asks for an x or a y. \[ \Longrightarrow y-(-5) = -3(x-1) \] Simplify the equation. \[ \Longrightarrow y + 5 = -3x + 3 \] Solve for y. This is your equation of the tangent line. \[ \Longrightarrow y = -3x - 2 \] Conclusion This could be your first exposure to calculus and limits. If it is, then it can certainly look strange. What to do and when to do it are not always clear. The best way is to just examine problems and how to do them. Work as many as you can after that. Problems can be easy and they can be hard. Seeing several of each kind will help. The main thing to take away from this lesson is that you are finding slopes and equations. Hopefully you can see how easy it is. It is just one step at a time. Don't get intimidated if you are new. Take it step by step and enjoy the easy parts!
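If you want to check answers like the ones above on your own, a few lines of Python will do it. This is just my quick numerical sketch for verification, not part of the lesson itself:

```python
def avg_rate(f, a, b):
    """Average rate of change of f over [a, b]."""
    return (f(b) - f(a)) / (b - a)

def slope_at(f, x, h=1e-6):
    """Slope of the secant through x and x+h; approximates the tangent slope."""
    return (f(x + h) - f(x)) / h

f = lambda x: 3 * x**3 + 3           # Problem 1
print(avg_rate(f, 2, 4))             # 84.0
print(avg_rate(f, -2, 2))            # 12.0

r = lambda t: (3 * t + 1) ** 0.5     # Problem 2
print(avg_rate(r, 0, 8))             # 0.5

y1 = lambda x: 7 - 4 * x**2          # Problem 3: slope at x = -2
print(round(slope_at(y1, -2), 3))    # 16.0

y2 = lambda x: x**3 - 6 * x          # Problem 4: slope at x = 1
print(round(slope_at(y2, 1), 3))     # -3.0
```

The `slope_at` helper is exactly the difference quotient from Problems 3 and 4, except that instead of simplifying algebraically and setting h=0, it just plugs in a very small h.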
ACMS Abstracts: Spring 2018 Thomas Fai (Harvard) The Lubricated Immersed Boundary Method Many real-world examples of fluid-structure interaction, including the transit of red blood cells through the narrow slits in the spleen, involve the near-contact of elastic structures separated by thin layers of fluid. The separation of length scales between these fine lubrication layers and the larger elastic objects poses significant computational challenges. Motivated by the challenge of resolving such multiscale problems, we introduce an immersed boundary method that uses elements of lubrication theory to resolve thin fluid layers between immersed boundaries. We apply this method to two-dimensional flows of increasing complexity, including eccentric rotating cylinders and elastic vesicles near walls in shear flow, to show its increased accuracy compared to the classical immersed boundary method. We present preliminary simulation results of cell suspensions, a problem in which near-contact occurs at multiple levels, such as cell-wall, cell-cell, and intracellular interactions, to highlight the importance of resolving thin fluid layers in order to obtain the correct overall dynamics. Michael Herty (RWTH-Aachen) Opinion Formation Models and Mean field Games Techniques Mean-Field Games are games with a continuum of players that incorporate the time dimension through a control-theoretic approach. Recently, simpler approaches relying on reply strategies have been proposed. Based on an example in opinion formation modeling, we explore the link between differentiability notions and mean-field game approaches. For numerical purposes, a model predictive control framework is introduced, consistent with the mean-field game setting, that allows for efficient simulation. Numerical examples are presented, as well as stability results on the derived control.
Lee Panetta (Texas A&M) Traveling waves and pulsed energy emissions seen in numerical simulations of electromagnetic wave scattering by ice crystals The numerical simulation of single-particle scattering of electromagnetic energy plays a fundamental role in remote sensing studies of the atmosphere and oceans, and in efforts to model aerosol "radiative forcing" processes in a wide variety of models of atmospheric and climate dynamics. I will briefly explain the main challenges in the numerical simulation of single-particle scattering and describe how work with 3-d simulations of scattering of an incident Gaussian pulse, using a Pseudo-Spectral Time Domain method to numerically solve Maxwell's equations, led to an investigation of episodic bursts of energy that were observed at various points in the near field during the decay phase of the simulations. The main focus of the talk will be on simulations in dimensions 1 and 2, simple geometries, and a single refractive index (ice at 550 nanometers). The periodic emission of pulses is easy to understand and predict on the basis of Snell's law in the 1-d case considered. In the much more interesting 2-d cases, simulations show traveling waves within the crystal that give rise to pulsed emissions of energy when they interact with each other or when they enter regions of high surface curvature. The time-dependent simulations give a more dynamical view of "photonic nanojets" reported earlier in steady-state simulations in other contexts, and of energy release in "morphology-dependent resonances." Haizhao Yang (National University of Singapore) A Unified Framework for Oscillatory Integral Transform: When to use NUFFT or Butterfly Factorization?
This talk introduces fast algorithms for the matvec $g=Kf$ for $K\in \mathbb{C}^{N\times N}$, which is the discretization of the oscillatory integral transform $g(x) = \int K(x,\xi) f(\xi)d\xi$ with a kernel function $K(x,\xi)=\alpha(x,\xi)e^{2\pi i\Phi(x,\xi)}$, where $\alpha(x,\xi)$ is a smooth amplitude function and $\Phi(x,\xi)$ is a piecewise smooth phase function with $O(1)$ discontinuous points in $x$ and $\xi$. A unified framework is proposed to compute $Kf$ with $O(N\log N)$ time and memory complexity via the non-uniform fast Fourier transform (NUFFT) or the butterfly factorization (BF), together with an $O(N)$ fast algorithm to determine whether NUFFT or BF is more suitable. This framework works for two cases: 1) explicit formulas for the amplitude and phase functions are known; 2) only indirect access to the amplitude and phase functions is available. Especially in the case of indirect access, our main contributions are: 1) an $O(N\log N)$ algorithm for recovering the amplitude and phase functions, based on a new low-rank matrix recovery algorithm; 2) a new stable and nearly optimal BF with amplitude and phase functions in the form of a low-rank factorization (IBF-MAT), used to evaluate the matvec $Kf$. Numerical results are provided to demonstrate the effectiveness of the proposed framework. Eric Keaveny (Imperial College London) Linking the micro- and macro-scales in populations of swimming cells Swimming cells and microorganisms are as diverse in their collective dynamics as they are in their individual shapes and swimming mechanisms. They are able to propel themselves through simple viscous fluids, as well as through more complex environments where they must interact with other microscopic structures. In this talk, I will describe recent simulations that explore the connection between dynamics at the scale of the cell with that of the population in the case where the cells are sperm.
In particular, I will discuss how the motion of the sperm's flagella can greatly impact the overall dynamics of their suspensions. Additionally, I will discuss how, in complex environments, the density and stiffness of the structures with which the cells interact impact the effective diffusion of the population. Molei Tao (Georgia Tech) Explicit high-order symplectic integration of nonseparable Hamiltonians: algorithms and long time performance Symplectic integrators preserve the phase-space volume and have favorable performance in long time simulations. Methods for explicit symplectic integration have been extensively studied for separable Hamiltonians (i.e., H(q,p)=K(p)+V(q)), and they lead to both accurate and efficient simulations. However, nonseparable Hamiltonians also model important problems, such as non-Newtonian mechanics and nearly integrable systems in action-angle coordinates. Unfortunately, implicit methods had been the only available symplectic approach for general nonseparable systems. This talk will describe a recent result that constructs explicit and arbitrarily high-order symplectic integrators for arbitrary Hamiltonians. Based on a mechanical restraint that binds two copies of phase space together, these integrators have good long time performance. More precisely, based on backward error analysis, KAM theory, and some additional multiscale analysis, a pleasant error bound is established for integrable systems. This bound is then demonstrated on a conceptual example and the Schwarzschild geodesics problem. For nonintegrable systems, some numerical experiments with the nonlinear Schrödinger equation will be discussed. Boualem Khouider (UVic) Title TBA Abstract TBA
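The "mechanical restraint that binds two copies of phase space together" in Tao's abstract can be sketched in a few lines. The following is my own toy 1-D implementation of that idea (the second-order Strang splitting, step size, binding strength, and test Hamiltonian are all my choices; consult the published method for the real construction):

```python
import math

def tao_step(q, p, x, y, dt, dHdq, dHdp, omega):
    """One second-order step for a nonseparable Hamiltonian H(q, p), using two
    copies of phase space, (q, p) and (x, y), bound by a harmonic restraint of
    strength omega. Composition: phiA - phiB - phiC - phiB - phiA."""
    def phiA(q, p, x, y, d):      # exact flow of H(q, y): q and y are frozen
        return q, p - d * dHdq(q, y), x + d * dHdp(q, y), y
    def phiB(q, p, x, y, d):      # exact flow of H(x, p): x and p are frozen
        return q + d * dHdp(x, p), p, x, y - d * dHdq(x, p)
    def phiC(q, p, x, y, d):      # exact rotation generated by the binding term
        c, s = math.cos(2 * omega * d), math.sin(2 * omega * d)
        u, v = q - x, p - y       # differences rotate; sums are invariant
        u, v = c * u + s * v, -s * u + c * v
        return (q + x + u) / 2, (p + y + v) / 2, (q + x - u) / 2, (p + y - v) / 2
    q, p, x, y = phiA(q, p, x, y, dt / 2)
    q, p, x, y = phiB(q, p, x, y, dt / 2)
    q, p, x, y = phiC(q, p, x, y, dt)
    q, p, x, y = phiB(q, p, x, y, dt / 2)
    q, p, x, y = phiA(q, p, x, y, dt / 2)
    return q, p, x, y

# A nonseparable test Hamiltonian, H(q, p) = (q^2 + 1)(p^2 + 1)/2 (my choice)
H = lambda q, p: (q * q + 1) * (p * p + 1) / 2
dHdq = lambda q, p: q * (p * p + 1)
dHdp = lambda q, p: p * (q * q + 1)

q, p = 0.0, 1.0
x, y = q, p                       # start the two copies together
E0 = H(q, p)
for _ in range(2000):
    q, p, x, y = tao_step(q, p, x, y, 0.005, dHdq, dHdp, omega=20.0)
print(abs(H(q, p) - E0))          # small: energy is nearly conserved
```

Every substep is explicit because each partial Hamiltonian depends on only half of the doubled variables, and the restraint keeps the two copies shadowing each other.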
This splitting occurs due to hyperfine coupling (the EPR analog of NMR's J coupling) and further splits the fine structure (which arises from spin-orbit interaction and relativistic effects) of the spectra of atoms with unpaired electrons. Although hyperfine splitting applies to multiple spectroscopic techniques such as NMR, this splitting is essential and most relevant in electron paramagnetic resonance (EPR) spectroscopy. Introduction Hyperfine splitting is utilized in EPR spectroscopy to provide information about a molecule, most often a radical. The number and identity of nuclei can be determined, as well as the distance of a nucleus from the unpaired electron in the molecule. Hyperfine coupling is caused by the interaction between the magnetic moments arising from the spins of both the nucleus and the electrons in atoms. As shown in Figure \(\PageIndex{1}\), in a single-electron system the electron, with its own magnetic moment, moves within the magnetic dipole field of the nucleus. This spin interaction in turn causes splitting of the fine structure of spectral lines into smaller components called hyperfine structure. Hyperfine structure is approximately 1000 times smaller than fine structure. Figure \(\PageIndex{2}\) shows a comparison of fine structure with hyperfine structure splitting for hydrogen, though this is not to scale. With regard to hyperfine structure, the total angular momentum of the atom is represented by F, found through the relation F = J + I, where I is the nuclear spin quantum number and J is the total electronic angular momentum quantum number. Results of Nuclear-Electron Interactions These hyperfine interactions between dipoles are especially relevant in EPR. The spectra of EPR are derived from a change in the spin state of an electron. Without the additional energy levels arising from the interaction of the nuclear and electron magnetic moments, only one line would be observed for single-electron spin systems.
This process is known as hyperfine splitting (hyperfine coupling) and may be thought of as a Zeeman effect occurring due to the magnetic dipole moment of the nucleus inducing a magnetic field. The coupling patterns due to hyperfine splitting are identical to those of NMR. The number of peaks resulting from hyperfine splitting of radicals may be predicted by the following equations, where \(M_i\) is the number of equivalent nuclei: \( \text{# of peaks}=2M_{i}I+1 \) for radicals with one set of equivalent nuclei \( \text{# of peaks}=(2M_{1}I_{1}+1)(2M_{2}I_{2}+1).... \) for radicals with multiple sets of equivalent nuclei For example, in the case of a methyl radical, 4 lines would be observed in the EPR spectrum. A methyl radical has 3 equivalent protons interacting with the unpaired electron, each with nuclear spin I = 1/2, yielding 2(3)(1/2) + 1 = 4 peaks. The relative intensities of certain radicals can also be predicted. When I = 1/2, as is the case for 1H, 19F, and 31P, the intensities of the lines produced follow Pascal's triangle. Using the methyl radical example, the 4 peaks would have relative intensities of 1:3:3:1. The following figures show the different splitting that results from interaction between equivalent versus nonequivalent protons. It is important to note that the spacing between peaks is 'a', the hyperfine coupling constant. This constant is equal for both protons in the equivalent system but unequal for the nonequivalent protons. The Hyperfine Coupling Constant The hyperfine coupling constant (\(a\)) is directly related to the distance between peaks in a spectrum, and its magnitude indicates the extent of delocalization of the unpaired electron over the molecule. This constant may also be calculated. The following equation shows the total energy related to electron transitions in EPR.
\[ \Delta E=g_e \mu _e M_s B + \sum_i g_{N_i} \mu_{N_i}M_{I_i}(1-\sigma _{i})+\sum_{i}a_i M_s M_{I_i} \] The first two terms correspond to the Zeeman energy of the electron and the nucleus of the system, respectively. The third term is the hyperfine coupling between the electron and nucleus where \(a_i\) is the hyperfine coupling constant. Figure \(\PageIndex{5}\) shows splitting between energy levels and their dependence on magnetic field strength. In this figure, there are two resonances where frequency equals energy level splitting at magnetic field strengths of \(B_1\) and \(B_2\). These parameters are essential in the derivation of the hyperfine coupling constant. By manipulating the total energy equation (those interested in the entire derivation, refer to the first outside link), the following two relations may be derived. \[ B_1= \dfrac {h\nu -a/2} {g\mu_e} \] \[ B_2= \dfrac{h\nu +a/2}{g\mu_e} \] From this, the hyperfine coupling constant (\(a\)) may be derived where \(g\) is the g-factor. \[\begin{align} \Delta B &=B_{2}-B_{1} \\[4pt] &=\dfrac {h\nu +a/2} {g\mu_{e}}- \dfrac {h\nu -a/2}{g\mu_{e}} \end{align}\] so solving for hyperfine coupling constant results in the following relationship: \[ a= g\mu_e \Delta B \] Isotropic and Anisotropic Interactions Electron-nuclei interactions have several mechanisms, the most prevalent being Fermi contact interaction and dipole interaction. Dipole interactions occur between the magnetic moments of the nucleus and electron as an electron moves around a nucleus. However, as an electron approaches a nucleus, it has a magnetic moment associated with it. As this magnetic moment moves very close to the nucleus, the magnetic field associated with that nucleus is no longer entirely dipolar. The resulting interaction of these magnetic moments while the electron and nucleus are in contact is radically different from the dipolar interaction of the electron when it is outside the nucleus. 
This non-dipolar interaction of a nucleus and electron spin in contact is the Fermi contact interaction. A comparison of the two is shown in Figure \(\PageIndex{6}\). The sum of these interactions is the overall hyperfine coupling of the system. Fermi contact interactions predominate in isotropic interactions, meaning sample orientation relative to the magnetic field does not affect the interaction. Because this interaction only occurs when the electron is inside the nucleus, only electrons in s orbitals exhibit this kind of interaction. All other orbitals (p, d, f) contain a node at the nucleus and can never have an electron at that node. The hyperfine coupling constant in isotropic interactions is denoted 'a'. Dipole interactions predominate in anisotropic interactions, meaning sample orientation does change the interaction. These interactions depend on the distance between the electron and the nuclei as well as on the orbital shape. The typical scheme is shown in Figure \(\PageIndex{7}\). Dipole interactions make it possible to locate paramagnetic species in solid lattices. The hyperfine coupling constant in anisotropic interactions is denoted 'B'. Superhyperfine Splitting Further splitting may occur if the unpaired electron is subject to the influence of multiple sets of equivalent nuclei. The number of lines follows the same 2nI + 1 pattern, and the effect is known as superhyperfine splitting. As hyperfine structure splits fine structure into smaller components, superhyperfine structure further splits hyperfine structure. These interactions are extremely small but are useful, as they can serve as direct evidence for covalency. The more covalent character a molecule exhibits, the more apparent its hyperfine splitting. For example, in a CH2OH radical, an EPR spectrum would show a triplet of doublets. The triplet would arise from the two equivalent CH2 protons, and superhyperfine splitting by the OH proton would split each of those lines further into doublets.
This is because the unpaired electron spends a different length of time near each set of protons; in the methanol radical example, the electron lingers most on the CH2 protons but does move occasionally to the OH proton. References Bunce, N. "Introduction to the interpretation of electron spin resonance spectra of organic radicals." J. Chem. Educ., 1987, 64 (11), p 907. Griffiths, D. "Hyperfine splitting in the ground state of hydrogen." Am. J. Phys., 1982, 50 (8), p 698. Gasiorowicz, Stephen. Quantum Physics. New York: Wiley, 1974. Problems What is the number of peaks of a benzene radical in EPR due to hyperfine coupling, and what are their relative intensities? What is the number of peaks for CH2(OCH3), the methoxymethyl radical, in EPR due to hyperfine coupling? Why are only s orbitals considered in Fermi contact interactions? Answers: 7 lines, 1:6:15:20:15:6:1 (benzene has 6 equivalent protons, so the number of peaks is 2MI + 1 = 2(6)(1/2) + 1 = 7; the intensities come from Pascal's triangle) 12 lines (there are two different sets of equivalent nuclei, one with two protons and one with three, so (2M1I1 + 1)(2M2I2 + 1) = (2 + 1)(3 + 1) = 12 peaks) p, d, and f orbitals have nodes at the nucleus and so do not exhibit Fermi contact interactions Contributors Stephanie Gray
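As a footnote to the practice problems above: the counting rules are easy to automate. The sketch below is my own illustration (the function names are mine) and reproduces the peak counts and Pascal's-triangle intensities used in the answers:

```python
from math import comb

def n_peaks(groups):
    """Number of EPR lines from hyperfine splitting.
    groups: list of (M, I) pairs, i.e. M equivalent nuclei of nuclear spin I."""
    n = 1
    for M, I in groups:
        n *= int(2 * M * I + 1)   # one factor of (2MI + 1) per set of equivalent nuclei
    return n

def intensities(M):
    """Relative line intensities for M equivalent I = 1/2 nuclei (Pascal's triangle)."""
    return [comb(M, k) for k in range(M + 1)]

print(n_peaks([(3, 0.5)]))            # methyl radical, 3 equivalent H: 4
print(intensities(3))                 # [1, 3, 3, 1]
print(n_peaks([(6, 0.5)]))            # benzene radical, 6 equivalent H: 7
print(n_peaks([(2, 0.5), (3, 0.5)]))  # methoxymethyl radical, 2 H and 3 H: 12
```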
A classical description of the vibration of a diatomic molecule is needed because the quantum mechanical description begins with replacing the classical energy with the Hamiltonian operator in the Schrödinger equation. It also is interesting to compare and contrast the classical description with the quantum mechanical picture. The motion of two particles in space can be separated into translational, vibrational, and rotational motions. The internal motions of vibration and rotation for a two-particle system can be described by a single reduced particle with a reduced mass μ located at r. For a diatomic molecule, Figure \(\PageIndex{1}\), the vector r corresponds to the internuclear axis. The magnitude or length of r is the bond length, and the orientation of r in space gives the orientation of the internuclear axis in space. Changes in the orientation correspond to rotation of the molecule, and changes in the length correspond to vibration. The change in the bond length from the equilibrium bond length is the normal vibrational coordinate Q for a diatomic molecule. Figure \(\PageIndex{1}\): The diagram shows the coordinate system for a reduced particle. R1 and R2 are vectors to \(m_1\) and \(m_2\); R is the resultant and points to the center of mass. (b) Shows the center of mass as the origin of the coordinate system, and (c) shows the system expressed as a reduced particle. We can use Newton's equation of motion \[\vec{F}= m \vec{a} \label {6-8}\] to obtain a classical description of how a diatomic molecule vibrates. In this equation, the mass, m, is the reduced mass μ of the molecule; the acceleration, a, is \(d^2Q/dt^2\); and the force, F, is the force that pulls the molecule back to its equilibrium bond length. If we consider the bond to behave like a spring, then this restoring force is proportional to the displacement from the equilibrium length, which is Hooke's Law \[ F = - kQ \label {6-9}\] where k is the force constant.
Hooke's Law says that the force is proportional to, but in opposite direction to, the displacement, Q. The force constant, k, reflects the stiffness of the spring. The idea incorporated into the application of Hooke's Law to a diatomic molecule is that when the atoms move away from their equilibrium positions, a restoring force is produced that increases proportionally with the displacement from equilibrium. The potential energy for such a system increases quadratically with the displacement. (See Exercise 6.9 below.) \[ V (Q) = \dfrac {1}{2} k Q^2 \label {6-10}\] Hooke's Law or the harmonic (i.e. quadratic) potential given by Equation \(\ref{6-10}\) is a common approximation for the vibrational oscillations of molecules. The magnitude of the force constant \(k\) depends upon the nature of the chemical bond in molecular systems just as it depends on the nature of the spring in mechanical systems. The larger the force constant, the stiffer the spring or the stiffer the bond. Since it is the electron distribution between the two positively charged nuclei that holds them together, a double bond with more electrons has a larger force constant than a single bond, and the nuclei are held together more tightly. In fact IR and other vibrational spectra provide information about the molecular composition of substances and about the bonding structure of molecules because of this relationship between the electron density in the bond and the bond force constant. Note that a stiff bond with a large force constant is not necessarily a strong bond with a large dissociation energy. Example \(\PageIndex{1}\) Show that minus the first derivative of the harmonic potential energy function in Equation \(\ref{6-10}\) with respect to Q is the Hooke's Law force. Show that the second derivative is the force constant, k. At what value of Q is the potential energy a minimum; at what value of Q is the force zero? 
Sketch graphs to compare the potential energy and the force for a system with a large force constant to one with a small force constant. In view of the above discussion, Equation \(\ref{6-8}\) can be rewritten as \[\dfrac {d^2 Q(t)}{dt^2} + \dfrac {k}{\mu} Q(t) = 0 \label {6-11}\] Equation \(\ref{6-11}\) is the equation of motion for a classical harmonic oscillator. It is a linear second-order differential equation that can be solved by the standard method of factoring and integrating as described in Chapter 5. Example \(\PageIndex{2}\) Substitute the following functions into Equation \(\ref{6-11}\) to show that they are both possible solutions to the classical equation of motion. \[Q(t) = Q_0 e^{i \omega t} \quad \text {and} \quad Q(t) = Q_0 e^{-i \omega t}\] where \[ \omega = \sqrt {\dfrac {k}{\mu}}\] Note that the Greek symbol ω for frequency represents the angular frequency \(2\pi\nu\). Example \(\PageIndex{3}\) Show that sine and cosine functions also are solutions to Equation \(\ref{6-11}\). Example \(\PageIndex{4}\) Using the sine function, sketch a graph showing the displacement of the bond from its equilibrium length as a function of time. Such motion is called harmonic. Show how your graph can be used to determine the frequency of the oscillation. Obtain an equation for the velocity of the object as a function of time, and plot the velocity on your graph as well. Note that momentum is mass times velocity, so you know both the momentum and position at all times. Example \(\PageIndex{5}\) Identify what happens to the frequency of the motion as the force constant increases in one case and as the mass increases in another case. If the force constant is increased by a factor of 9 and the mass is increased by a factor of 4, by what factor does the frequency change? The energy of the vibration is the sum of the kinetic energy and the potential energy.
The momentum associated with the vibration is \[P_Q = \mu \dfrac {dQ}{dt} \label {6-12}\] so the energy can be written as \[ E = T + V = \dfrac {P^2_Q}{2 \mu} + \dfrac {k}{2} Q^2 \label {6-13}\] Example \(\PageIndex{6}\) What happens to the frequency of the oscillation as the vibration is excited with more and more energy? What happens to the maximum amplitude of the vibration as it is excited with more and more energy? Example \(\PageIndex{7}\) If a molecular vibration is excited by collision with another molecule and is given a total energy \(E_{hit}\) as a result, what is the maximum amplitude of the oscillation? Is there any constraint on the magnitude of energy that can be introduced? We can generalize this discussion to any normal mode in a polyatomic molecule. The normal coordinate associated with a normal mode can be thought of as a vector Q, with each component giving the displacement amplitude of a particular atom in a particular direction. Equation \(\ref{6-11}\) then applies to the length of this vector, Q = |Q|. As Q increases, the displacements of all the atoms that move in that normal mode increase, and the restoring force increases as well. Contributors Adapted from "Quantum States of Atoms and Molecules" by David M. Hanson, Erica Harvey, Robert Sweeney, Theresa Julia Zielinski
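As a numerical companion to the exercises above (my own sketch, not part of the original text), the following checks that \(Q(t) = Q_0 \sin(\omega t)\) satisfies the equation of motion \(\ddot{Q} + (k/\mu)Q = 0\), and that scaling \(k\) by 9 and \(\mu\) by 4 multiplies the frequency by \(\sqrt{9/4} = 3/2\); the constants are arbitrary choices:

```python
import math

def omega(k, mu):
    """Angular frequency of the harmonic oscillator, omega = sqrt(k/mu)."""
    return math.sqrt(k / mu)

k, mu, Q0 = 2.0, 0.5, 1.0      # arbitrary force constant, reduced mass, amplitude
w = omega(k, mu)

def Q(t):
    """Harmonic solution Q(t) = Q0 sin(w t)."""
    return Q0 * math.sin(w * t)

# Central-difference second derivative of Q at an arbitrary time t
t, h = 0.7, 1e-4
Qpp = (Q(t + h) - 2 * Q(t) + Q(t - h)) / h**2

print(abs(Qpp + (k / mu) * Q(t)) < 1e-5)    # True: the equation of motion holds
print(omega(9 * k, 4 * mu) / omega(k, mu))  # 1.5
```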
I am trying to prove the following: Let $X$ be a normed linear space satisfying the property: $\forall \left\{x_n\right\}, \left\{y_n\right\} \subseteq X $, we have $\|x_n\|=\|y_n\|=1, \|x_n+y_n\|\rightarrow 2 \Rightarrow \|x_n-y_n\|\rightarrow 0.$ If $\left\{z_n\right\} \subseteq X$ converges to $z\in X$ weakly (meaning $\displaystyle \lim_{n\rightarrow \infty} f(z_n)=f(z)$ for all $f\in X^*$) and $\|z_n\| \rightarrow \|z\|$, then $\|z_n-z\|\rightarrow 0$. Here is what I am trying to do: I can consider $\left\{z \right\}$ as a constant sequence in $X$. I want to show that $\|z_n+z\|\rightarrow 2$. Well, assuming $\|z\|=1$, since $\|z_n\| \rightarrow \|z\|=1$ and $\|z_n+z\|\leq\|z_n\|+\|z\|$, we get $\displaystyle \limsup_{n\rightarrow \infty} \|z_n+z\| \leq 2\|z\|=2$. I can't figure out how to possibly show that $\displaystyle \liminf_{n\rightarrow \infty} \|z_n+z\| \geq 2$. How would I even incorporate the weak convergence assumption? Any help would be greatly appreciated! Thank you.
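A standard way to obtain the missing lower bound (a hint, not part of the original question): by Hahn-Banach there is an $f \in X^*$ with $\|f\| = 1$ and $f(z) = \|z\|$. Then

$$\|z_n + z\| \geq f(z_n + z) = f(z_n) + f(z) \longrightarrow 2f(z) = 2\|z\|,$$

which is exactly where the weak convergence enters. (The case $z = 0$ is immediate, since then $\|z_n\| \to 0$; otherwise one can normalize $x_n = z_n/\|z_n\|$ and $y_n = z/\|z\|$ before applying the hypothesis.)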
This is from pages 17-18 of Gilbarg and Trudinger. Let $\Omega$ be a domain for which the divergence theorem holds, and let $\Gamma(x-y)$ be the normalized fundamental solution of Laplace's equation. Then Green's representation formula reads $$u(y)=\int_{\partial\Omega}\bigg(u\frac{\partial\Gamma}{\partial\nu}(x-y)-\Gamma(x-y)\frac{\partial u}{\partial \nu}\bigg)\text{d}s+\int_\Omega\Gamma(x-y)\Delta u\,\text{d}x$$ If $u$ has compact support in $\mathbb{R}^n$, then this yields $$u(y) = \int \Gamma(x-y)\Delta u(x)\,\text{d}x$$ I do not really understand this. I assume the first term disappears by taking a domain, say $\Sigma$, large enough so that $u$ is $0$ on $\partial\Sigma$, but then $\Delta u$ may not necessarily be well defined on $\partial\Omega$, so why does it hold?
Applied/ACMS/absS18 Contents 1 ACMS Abstracts: Spring 2018 ACMS Abstracts: Spring 2018 Thomas Fai (Harvard) The Lubricated Immersed Boundary Method Many real-world examples of fluid-structure interaction, including the transit of red blood cells through the narrow slits in the spleen, involve the near-contact of elastic structures separated by thin layers of fluid. The separation of length scales between these fine lubrication layers and the larger elastic objects poses significant computational challenges. Motivated by the challenge of resolving such multiscale problems, we introduce an immersed boundary method that uses elements of lubrication theory to resolve thin fluid layers between immersed boundaries. We apply this method to two-dimensional flows of increasing complexity, including eccentric rotating cylinders and elastic vesicles near walls in shear flow, to show its increased accuracy compared to the classical immersed boundary method. We present preliminary simulation results of cell suspensions, a problem in which near-contact occurs at multiple levels, such as cell-wall, cell-cell, and intracellular interactions, to highlight the importance of resolving thin fluid layers in order to obtain the correct overall dynamics. Michael Herty (RWTH-Aachen) Opinion Formation Models and Mean field Games Techniques Mean-Field Games are games with a continuum of players that incorporate the time dimension through a control-theoretic approach. Recently, simpler approaches relying on reply strategies have been proposed. Based on an example in opinion formation modeling we explore the link between differentiability notions and mean-field game approaches. For numerical purposes a model predictive control framework is introduced consistent with the mean-field game setting that allows for efficient simulation. Numerical examples are also presented as well as stability results on the derived control. 
Lee Panetta (Texas A&M) Traveling waves and pulsed energy emissions seen in numerical simulations of electromagnetic wave scattering by ice crystals The numerical simulation of single particle scattering of electromagnetic energy plays a fundamental role in remote sensing studies of the atmosphere and oceans, and in efforts to model aerosol "radiative forcing" processes in a wide variety of models of atmospheric and climate dynamics. I will briefly explain the main challenges in the numerical simulation of single particle scattering and describe how work with 3-d simulations of scattering of an incident Gaussian pulse, using a Pseudo-Spectral Time Domain method to numerically solve Maxwell's Equations, led to an investigation of episodic bursts of energy that were observed at various points in the near field during the decay phase of the simulations. The main focus of the talk will be on simulations in dimensions 1 and 2, with simple geometries and a single refractive index (ice at 550 nanometers). The periodic emission of pulses is easy to understand and predict on the basis of Snell's law in the 1-d case considered. In the much more interesting 2-d cases, simulations show traveling waves within the crystal that give rise to pulsed emissions of energy when they interact with each other or when they enter regions of high surface curvature. The time-dependent simulations give a more dynamical view of "photonic nanojets" reported earlier in steady-state simulations in other contexts, and of energy release in "morphology-dependent resonances." Francois Monard (UC Santa Cruz) Inverse problems in integral geometry and Boltzmann transport The Boltzmann transport (or radiative transfer) equation describes the transport of photons interacting with a medium via attenuation and scattering effects.
Such an equation serves as the model for many imaging modalities (e.g., SPECT, Optical Tomography) where one aims at reconstructing the optical parameters (absorption/scattering) or a source term out of measurements of intensities radiated outside the domain of interest. In this talk, we will review recent progress on the inversion of some of the inverse problems mentioned above. In particular, we will discuss an interesting connection between the inverse source problem (where the optical parameters are assumed to be known) and a problem from integral geometry, namely the tensor tomography problem (or how to reconstruct a tensor field from knowledge of its integrals along geodesic curves). Haizhao Yang (National University of Singapore) A Unified Framework for Oscillatory Integral Transform: When to use NUFFT or Butterfly Factorization? This talk introduces fast algorithms for the matvec $g=Kf$ for $K\in \mathbb{C}^{N\times N}$, which is the discretization of the oscillatory integral transform $g(x) = \int K(x,\xi) f(\xi)d\xi$ with a kernel function $K(x,\xi)=\alpha(x,\xi)e^{2\pi i\Phi(x,\xi)}$, where $\alpha(x,\xi)$ is a smooth amplitude function and $\Phi(x,\xi)$ is a piecewise smooth phase function with $O(1)$ discontinuous points in $x$ and $\xi$. A unified framework is proposed to compute $Kf$ with $O(N\log N)$ time and memory complexity via the non-uniform fast Fourier transform (NUFFT) or the butterfly factorization (BF), together with an $O(N)$ fast algorithm to determine whether NUFFT or BF is more suitable. This framework works for two cases: 1) explicit formulas for the amplitude and phase functions are known; 2) only indirect access to the amplitude and phase functions is available.
Especially in the case of indirect access, our main contributions are: 1) an $O(N\log N)$ algorithm for recovering the amplitude and phase functions is proposed based on a new low-rank matrix recovery algorithm; 2) a new stable and nearly optimal BF with amplitude and phase functions in the form of a low-rank factorization (IBF-MAT) is proposed to evaluate the matvec $Kf$. Numerical results are provided to demonstrate the effectiveness of the proposed framework. Eric Keaveny (Imperial College London) Linking the micro- and macro-scales in populations of swimming cells Swimming cells and microorganisms are as diverse in their collective dynamics as they are in their individual shapes and swimming mechanisms. They are able to propel themselves through simple viscous fluids, as well as through more complex environments where they must interact with other microscopic structures. In this talk, I will describe recent simulations that explore the connection between dynamics at the scale of the cell and that of the population in the case where the cells are sperm. In particular, I will discuss how the motion of the sperm's flagella can greatly impact the overall dynamics of their suspensions. Additionally, I will discuss how, in complex environments, the density and stiffness of the structures with which the cells interact impact the effective diffusion of the population. Molei Tao (Georgia Tech) Explicit high-order symplectic integration of nonseparable Hamiltonians: algorithms and long time performance Symplectic integrators preserve the phase-space volume and have favorable performance in long time simulations. Methods for explicit symplectic integration have been extensively studied for separable Hamiltonians (i.e., H(q,p)=K(p)+V(q)), and they lead to both accurate and efficient simulations. However, nonseparable Hamiltonians also model important problems, such as non-Newtonian mechanics and nearly integrable systems in action-angle coordinates.
Unfortunately, implicit methods had been the only available symplectic approach for general nonseparable systems. This talk will describe a recent result that constructs explicit and arbitrarily high-order symplectic integrators for arbitrary Hamiltonians. Based on a mechanical restraint that binds two copies of phase space together, these integrators have good long time performance. More precisely, based on backward error analysis, KAM theory, and some additional multiscale analysis, a pleasant error bound is established for integrable systems. This bound is then demonstrated on a conceptual example and the Schwarzschild geodesics problem. For nonintegrable systems, some numerical experiments with the nonlinear Schrodinger equation will be discussed. Boualem Khouider (UVic) Title TBA Abstract TBA
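As a concrete point of reference for the separable case mentioned in the abstract, the Störmer-Verlet (leapfrog) scheme is the prototypical explicit symplectic integrator for $H(q,p)=K(p)+V(q)$. A minimal sketch follows; the harmonic-oscillator test case is my own illustrative choice, not taken from the talk:

```python
def leapfrog(q, p, dV, dt, steps):
    """Stormer-Verlet (kick-drift-kick): explicit and symplectic for H = p^2/2 + V(q)."""
    for _ in range(steps):
        p -= 0.5 * dt * dV(q)   # half kick
        q += dt * p             # full drift
        p -= 0.5 * dt * dV(q)   # half kick
    return q, p

# Harmonic oscillator V(q) = q^2/2: a symplectic method keeps the energy
# error bounded over long times instead of drifting.
q, p = leapfrog(1.0, 0.0, dV=lambda q: q, dt=0.01, steps=100_000)
energy = 0.5 * p * p + 0.5 * q * q
assert abs(energy - 0.5) < 1e-3   # bounded energy error after 100k steps
```

The bounded long-time energy error, rather than pointwise trajectory accuracy, is the hallmark of symplecticity the abstract refers to; the nonseparable constructions in the talk aim to recover this behavior without implicit solves.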
(Sorry, I was asleep at that time but forgot to log out, hence the apparent lack of response.) Yes you can (since $k=\frac{2\pi}{\lambda}$). To convert from path difference to phase difference, multiply by $k$; see this PSE post for details: http://physics.stackexchange.com/questions/75882/what-is-the-difference-between-phase-difference-and-path-difference
Given a random vector $X \in \mathbb{R}^k$ with a known pdf $f_X$, suppose $Y, Z \in \mathbb{R}^k$ are defined by $Y = AX$ and $Z = BX$, where $A,B \in \mathbb{R}^{k\times k}$ are different, given, real-valued matrices. I know how to calculate the pdfs of $Y$ and $Z$ on their own, but how do I derive the joint pdf of $Y$ and $Z$? If it helps to be more specific, $f_X$ is a mixture of zero-mean multivariate Gaussians, each component in the mixture with a different, diagonal covariance matrix (but not of the form $\Sigma = \sigma^2 I$). Any help would be much appreciated. For some context: my goal is to check, for the $f_X$ mentioned above and a specific $A$ and $B$, whether the vectors $Y$ and $Z$ are independent. This means I need to check whether the joint distribution of $Y$ and $Z$ factorises into the product of the marginals. There are at least some cases when this is true: if, for example, $X \sim \mathcal{N}(0,\sigma^2 I)$ and $A$ and $B$ are projections onto orthogonal subspaces. But proving it is not true in my case would also be helpful. Hence my need to derive the joint distribution of $Y$ and $Z$.
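One can at least falsify independence numerically before attempting the derivation: sample $X$ from a toy mixture, form $Y = AX$ and $Z = BX$, and inspect the cross-covariance block. A nonzero block rules out independence; a zero block is only a necessary condition. The mixture parameters and the matrices $A$, $B$ below are illustrative assumptions, not the asker's actual data:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200_000, 2
# Toy mixture (illustrative assumption): two zero-mean Gaussians with
# diagonal covariances diag(1, 4) and diag(9, 0.25), equal weights.
stds = np.array([[1.0, 2.0], [3.0, 0.5]])
comp = rng.integers(0, 2, size=n)              # mixture component per sample
X = rng.standard_normal((n, k)) * stds[comp]   # draws from the mixture
A = np.eye(k)                                  # illustrative choices of A, B
B = np.array([[0.0, 1.0], [1.0, 0.0]])
Y, Z = X @ A.T, X @ B.T
# Cross-covariance block Cov(Y, Z): a clearly nonzero block already rules out
# independence (the converse does not hold in general).
C = np.cov(np.hstack([Y, Z]).T)[:k, k:]
assert abs(C[0, 1] - 5.0) < 0.3   # Cov(Y_1, Z_2) = Var(X_1) = 0.5*1 + 0.5*9 = 5
```

Here $Y = X$ and $Z$ swaps the coordinates of $X$, so the off-diagonal entries of the cross-covariance equal the coordinate variances and are far from zero: these particular $Y$ and $Z$ are dependent.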
Integral formulae for codimension-one foliated Finsler spaces Recent decades have brought increasing interest in Finsler spaces $(M,F)$, especially in the extrinsic geometry of their hypersurfaces. Randers metrics (i.e., $F=\alpha+\beta$, where $\alpha$ is the norm of a Riemannian structure and $\beta$ a 1-form of $\alpha$-norm smaller than $1$ on~$M$), which appeared in Zermelo's control problem, are of special interest. After a short survey of the above, we will discuss integral formulae, which provide obstructions to the existence of foliations (or of compact leaves of them) with given geometric properties. The first known integral formula (by G.\,Reeb) for codimension-one foliated closed manifolds tells us that the total mean curvature $H$ of the leaves is zero (thus, either $H\equiv0$ or $H(x)H(y)<0$ for some $x,y\in M$). Using a unit normal to the leaves of a codimension-one foliated $(M,F)$, we define a new Riemannian metric $g$ on $M$, which for the Randers case depends nicely on $(\alpha,\beta)$. For that $g$ we derive several geometric invariants of a foliation in terms of $F$; we then express them in terms of invariants of $\alpha$ and~$\beta$. Using our results \cite{rw2} for the Riemannian case, we present new integral formulae for codimension-one foliated $(M, F)$ and $(M, \alpha+\beta)$. Some of them generalize Reeb's formula.
Stability in representation theory of the symmetric groups In the finite-dimensional representation theory of the symmetric groups $$S_n$$ over the base field $$\mathbb{C}$$, there is an interesting phenomenon of "stabilization" as $$n \to \infty$$: some representations of $$S_n$$ appear in sequences $$(V_n)_{n \geq 0}$$, where each $$V_n$$ is a finite-dimensional representation of $$S_n$$, and the $$V_n$$ become "the same" in a certain sense for $$n \gg 0$$. One manifestation of this phenomenon is sequences $$(V_n)_{n \geq 0}$$ such that the characters of $$S_n$$ on $$V_n$$ are "polynomial in $n$". More precisely, these sequences satisfy the following condition: for $$n \gg 0$$, the trace (character) of the automorphism $$\sigma \in S_n$$ of $$V_n$$ is given by a polynomial in the variables $$x_i$$, where $$x_i(\sigma)$$ is the number of cycles of length $$i$$ in the permutation $$\sigma$$. In particular, such sequences $$(V_n)_{n \geq 0}$$ satisfy the agreeable property that $$\dim(V_n)$$ is polynomial in $$n$$. Such "polynomial sequences" are encountered in many contexts: cohomologies of configuration spaces of $$n$$ distinct ordered points on a connected oriented manifold, spaces of polynomials on rank varieties of $$n \times n$$ matrices, and more. These sequences arise from $$FI$$-modules, which have been studied extensively by Church, Ellenberg, Farb and others, yielding many interesting results on polynomiality in $$n$$ of the dimensions of these spaces. A stronger version of the stability phenomenon is described by the following two settings: - The algebraic representations of the infinite symmetric group $$S_{\infty} = \bigcup_{n} S_n,$$ where each representation of $$S_{\infty}$$ corresponds to a ``polynomial sequence'' $$(V_n)_{n \geq 0}$$. - The "polynomial" family of Deligne categories $$Rep(S_t), ~t \in \mathbb{C}$$, where the objects of the category $$Rep(S_t)$$ can be thought of as "continuations of sequences $$(V_n)_{n \geq 0}$$" to complex values of $$t=n$$.
I will describe both settings, show that they are connected, and explain some applications in the representation theory of the symmetric groups.
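The "character polynomial" phenomenon above is easy to check by brute force in the simplest case: the character of the permutation representation of $S_n$ is the number of fixed points, i.e. the polynomial $x_1$ in the cycle-count variables, uniformly in $n$. A small illustrative script of my own, not from the abstract:

```python
import itertools
import numpy as np

def perm_matrix(p):
    """Matrix of the permutation i -> p[i] acting on C^n by permuting basis vectors."""
    n = len(p)
    M = np.zeros((n, n))
    for i, j in enumerate(p):
        M[j, i] = 1.0
    return M

# Trace of the permutation matrix = number of fixed points = x_1(sigma),
# a single polynomial in the cycle-count variables, valid for every n.
for n in range(2, 6):
    for p in itertools.permutations(range(n)):
        fixed = sum(i == j for i, j in enumerate(p))
        assert np.trace(perm_matrix(p)) == fixed
```

Subtracting the trivial summand gives the standard representation, whose character $x_1 - 1$ and dimension $n - 1$ are likewise polynomial in the sense described above.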
Learning Objectives Define the terms wavelength and frequency with respect to wave-form energy. State the relationship between wavelength and frequency with respect to electromagnetic radiation. During the summer, almost everyone enjoys going to the beach. They can swim, have picnics, and work on their tans. But if you get too much sun, you can burn. A particular set of solar wavelengths is especially harmful to the skin. This portion of the solar spectrum is known as UV B, with wavelengths of \(280\)-\(320 \: \text{nm}\). Sunscreens are effective in protecting skin against both the immediate skin damage and the long-term possibility of skin cancer. Waves Waves are characterized by their repetitive motion. Imagine a toy boat riding the waves in a wave pool. As the water wave passes under the boat, it moves up and down in a regular and repeated fashion. While the wave travels horizontally, the boat only travels vertically up and down. The figure below shows two examples of waves. A wave cycle consists of one complete wave - starting at the zero point, going up to a wave crest, going back down to a wave trough, and back to the zero point again. The wavelength of a wave is the distance between any two corresponding points on adjacent waves. It is easiest to visualize the wavelength of a wave as the distance from one wave crest to the next. In an equation, wavelength is represented by the Greek letter lambda \(\left( \lambda \right)\). Depending on the type of wave, wavelength can be measured in meters, centimeters, or nanometers \(\left( 1 \: \text{m} = 10^9 \: \text{nm} \right)\). The frequency, represented by the Greek letter nu \(\left( \nu \right)\), is the number of waves that pass a certain point in a specified amount of time. Typically, frequency is measured in units of cycles per second or waves per second. One wave per second is also called a Hertz \(\left( \text{Hz} \right)\), which in SI units is a reciprocal second \(\left( \text{s}^{-1} \right)\).
Figure B above shows an important relationship between the wavelength and frequency of a wave. The top wave clearly has a shorter wavelength than the second wave. However, if you picture yourself at a stationary point watching these waves pass by, more waves of the first kind would pass by in a given amount of time. Thus the frequency of the first wave is greater than that of the second wave. Wavelength and frequency are therefore inversely related. As the wavelength of a wave increases, its frequency decreases. The equation that relates the two is: \[c = \lambda \nu\] The variable \(c\) is the speed of light. For the relationship to hold mathematically, if the speed of light is used in \(\text{m/s}\), the wavelength must be in meters and the frequency in Hertz. Example \(\PageIndex{1}\): Orange Light The color orange within the visible light spectrum has a wavelength of about \(620 \: \text{nm}\). What is the frequency of orange light? SOLUTION Steps for Problem Solving Identify the "given" information and what the problem is asking you to "find." Given: wavelength, \(620 \: \text{nm}\) Find: frequency (Hz) List other known quantities. \(1 \: \text{m} = 10^9 \: \text{nm}\) Identify steps to get the final answer. 1. Convert the wavelength to \(\text{m}\). 2. Apply the equation \(c = \lambda \nu\) and solve for frequency. Dividing both sides of the equation by \(\lambda\) yields: \(\nu = \frac{c}{\lambda}\) Cancel units and calculate. \(620 \: \text{nm} \times \left( \frac{1 \: \text{m}}{10^9 \: \text{nm}} \right) = 6.20 \times 10^{-7} \: \text{m}\) \(\nu = \frac{c}{\lambda} = \frac{3.0 \times 10^8 \: \text{m/s}}{6.20 \times 10^{-7} \: \text{m}} = 4.8 \times 10^{14} \: \text{Hz}\) Think about your result. The value for the frequency falls within the range for visible light. Exercise \(\PageIndex{1}\) What is the wavelength of light if its frequency is \(1.55 \times 10^{10} \: \text{s}^{-1}\)? Answer 0.0194 m, or 19.4 mm Summary All waves can be defined in terms of their wavelength and frequency.
\(c = \lambda \nu\) expresses the relationship between wavelength and frequency.
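The worked example and the exercise above reduce to one-line computations; a quick numerical check using the numbers from the text:

```python
c = 3.0e8                       # speed of light, m/s
wavelength = 620 / 1e9          # 620 nm converted to meters (1 m = 10^9 nm)
frequency = c / wavelength      # nu = c / lambda
print(round(frequency / 1e14, 1))   # -> 4.8, i.e. 4.8 x 10^14 Hz

# Exercise: wavelength from frequency 1.55 x 10^10 s^-1
print(round(c / 1.55e10, 4))        # -> 0.0194 (meters, i.e. 19.4 mm)
```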
We know that for a position variable $x$ and momentum $p$, the uncertainties of the two quantities are bounded by $$\Delta x \Delta p \gtrsim \hbar$$ Now, this is usually first explained with $x$ being a simple linearly measured position and $p$ being linear momentum. But it should apply to any good coordinate and its conjugate momentum. It should, for instance, apply to angle $\phi$ about the $z$ axis, and angular momentum $L_z$: $$\Delta \phi \Delta L_z \gtrsim \hbar$$ The thing is, $\Delta \phi$ can never be greater than $2\pi$. I mean, you have to have some value of $\phi$ and $\phi$ only runs from 0 to $2\pi$. Therefore $$\Delta L_z \gtrsim \hbar/\Delta \phi \geq \hbar/2\pi$$ But, uh-oh! This means it is impossible for $\Delta L_z$ to be zero, and we should never be able to have angular momentum states with definite $L_z$ values. Of course, it doesn't mean that. But I have never figured out how this is not in contradiction with the Schroedinger eqn. calculations that give us states with definite values of $L_z$. Can anyone help me out? One answer I anticipate is that $\phi$ is sort of "abstract" in that if you chose your origin at some other point you will get completely different values of $\phi$ and $L_z$, and ipso facto, usual considerations don't apply. I don't think this will work, though. Consider a "quantum bead" sliding around on a rigid circular ring and you get the exact same problem with no ambiguity in $\phi$ or $L_z$. (Well, there will be some limited ambiguity in $\phi$, but still, there won't be in $L_z$.)
Let's see. There are two observations one needs to make in order to "arrive" at F-theory. Let's go back to type IIB string theory and take the low-energy sugra 7-brane solutions. These 7-branes have a harmonic function that depends logarithmically on the transverse distance from the brane, something particular to these 7-branes and not shared by the lower-$p$ $Dp$-branes. If you examine this system you will realize that there exists an $SL(2,\mathbb{Z})$ symmetry and that many of these 7-branes put together backreact to give a $\mathbb{P}^1$ background. The other observation is that this $SL(2,\mathbb{Z})$ symmetry has a nice geometric interpretation as the modular group of $T^2 \approx S^1 \times S^1$, in whose zero-size limit we compactify M-theory to get the 9-dimensional type IIA string theory (if I remember well). These are the two observations that lead to considering F-theory in the first place. F-theory is nothing more than a "new" way to compactify type IIB string theory in which the complex scalar field $\tau$ is not constant anymore. The novelty is also that we can consider this scalar field $\tau$ as the complex structure modulus of an auxiliary torus with modular group the usual $SL(2,\mathbb{Z})$ (and this "interpretation", if I am not mistaken, is the same in, say, Seiberg-Witten theory). With the above in mind we indeed get a 12-dimensional theory, where the torus on which we compactify is actually a non-physical torus: it does not have a purely geometric interpretation. Note that the dimensional reduction is not a usual KK reduction as we do in, say, type IIB when compactifying it on $\mathbb{M}^4 \times T^6$. Additionally, note that the low energy limit is not given by a 12-dimensional sugra theory, since sugra can be realized in at most 11 dimensions. The above is morally meant to communicate the fact that the 12-dimensional interpretation is a useful means to geometrize the $SL(2,\mathbb{Z})$ duality symmetry.
Now, upon compactifying the resulting IIB theory in lower dimensions (so, after the $T^2$ F-theory compactification) we already get some remarkable results. The compactification of type IIB on the previously mentioned $\mathbb{P}^1$, arising from the backreaction of the 7-branes, preserves half the susy. What is remarkable is the fact that M-theory compactified on a K3 surface preserves the same amount of supersymmetries. Now things can get quite technical, but we already see some connection. If one goes further she will see that M-theory and F-theory are related to each other after one has dualized M-theory and type IIB strings on the $\mathbb{P}^1$, and by the (conjectured) fact that F-theory on an elliptically fibered K3 is also dual to type IIB strings on $\mathbb{P}^1$. To end up, the most useful road map I have found is the picture where F-theory on $T^2$ is dual to type IIB in 10 dimensions, which is T-dual to type IIA in 9 dimensions, which is the M-theory compactification on $T^2$. I took the above notes from a graduate course I attended whose lecturer was Inaki García-Etxebarria, who works on F-theory. Additionally, a nice resource is of course the nLab article, and also Herman Verlinde's lectures at PiTP. Maybe Weigand's notes are also useful.
Probability Seminar Spring 2019 Thursdays in 901 Van Vleck Hall at 2:25 PM, unless otherwise noted. We usually end for questions at 3:15 PM. If you would like to sign up for the email list to receive seminar announcements, please send an email to join-probsem@lists.wisc.edu January 31, Oanh Nguyen, Princeton Title: Survival and extinction of epidemics on random graphs with general degrees Abstract: We establish the necessary and sufficient criterion for the contact process on Galton-Watson trees (resp. random graphs) to exhibit the phase of extinction (resp. short survival). We prove that the survival threshold $\lambda_1$ for a Galton-Watson tree is strictly positive if and only if its offspring distribution has an exponential tail, settling a conjecture by Huang and Durrett. On the random graph with degree distribution $D$, we show that if $D$ has an exponential tail, then for small enough $\lambda$ the contact process with the all-infected initial condition survives for polynomial time with high probability, while for large enough $\lambda$ it survives for exponential time with high probability. When $D$ is subexponential, the contact process typically displays long survival for any fixed $\lambda>0$. Joint work with Shankar Bhamidi, Danny Nam, and Allan Sly. Wednesday, February 6 at 4:00pm in Van Vleck 911, Li-Cheng Tsai, Columbia University Title: When particle systems meet PDEs Abstract: Interacting particle systems are models that involve many randomly evolving agents (i.e., particles). These systems are widely used in describing real-world phenomena. In this talk we will walk through three facets of interacting particle systems, namely the law of large numbers, random fluctuations, and large deviations. Within each facet, I will explain how Partial Differential Equations (PDEs) play a role in understanding the systems.
Title: Fluctuations of the KPZ equation in $d \geq 2$ in a weak disorder regime Abstract: We will discuss some recent work on the Edwards-Wilkinson limit of the KPZ equation with a small coupling constant in $d \geq 2$. February 14, Timo Seppäläinen, UW-Madison Title: Geometry of the corner growth model Abstract: The corner growth model is a last-passage percolation model of random growth on the square lattice. It lies at the nexus of several branches of mathematics: probability, statistical physics, queueing theory, combinatorics, and integrable systems. It has been studied intensely for almost 40 years. This talk reviews properties of the geodesics, Busemann functions and competition interfaces of the corner growth model, and presents some new qualitative and quantitative results. Based on joint projects with Louis Fan (Indiana), Firas Rassoul-Agha and Chris Janjigian (Utah). February 21, Diane Holcomb, KTH Title: On the centered maximum of the Sine beta process Abstract: There has been a great deal of recent work on the asymptotics of the maxima of characteristic polynomials of random matrices. Other recent work studies the analogous result for log-correlated Gaussian fields. Here we will discuss a maximum result for the centered counting function of the Sine beta process. The Sine beta process arises as the local limit in the bulk of a beta-ensemble, and was originally described as the limit of a generalization of the Gaussian Unitary Ensemble by Valko and Virag, with an equivalent process identified as a limit of the circular beta ensembles by Killip and Stoiciu. A brief introduction to the Sine process as well as some ideas from the proof of the maximum will be covered. This talk is on joint work with Elliot Paquette. Title: Quantitative homogenization in a balanced random environment Abstract: Stochastic homogenization of discrete difference operators is closely related to the convergence of random walk in a random environment (RWRE) to its limiting process.
In this talk we discuss non-divergence form difference operators in an i.i.d. random environment and the corresponding process: a random walk in a balanced random environment on the integer lattice $\mathbb{Z}^d$. We first quantify the ergodicity of the environment viewed from the point of view of the particle. As consequences, we obtain algebraic rates of convergence for the quenched central limit theorem of the RWRE and for the homogenization of both elliptic and parabolic non-divergence form difference operators. Joint work with J. Peterson (Purdue) and H. V. Tran (UW-Madison). Wednesday, February 27 at 1:10pm, Jon Peterson, Purdue Title: Functional Limit Laws for Recurrent Excited Random Walks Abstract: Excited random walks (also called cookie random walks) are a model of self-interacting random motion where the transition probabilities depend on the local time at the current location. While self-interacting random walks are typically very difficult to study, many results for (one-dimensional) excited random walks are remarkably explicit. In particular, one can easily (by hand) calculate a parameter of the model that will determine many features of the random walk: recurrence/transience, non-zero limiting speed, limiting distributions and more. In this talk I will prove functional limit laws for one-dimensional excited random walks that are recurrent. For certain values of the parameters in the model, the random walks under diffusive scaling converge to a Brownian motion perturbed at its extremum. This was known previously for the case of excited random walks with boundedly many cookies per site, but we are able to generalize this to excited random walks with periodic cookie stacks. In this more general case, it is much less clear why perturbed Brownian motion should be the correct scaling limit. This is joint work with Elena Kosygina.
March 21, Spring Break, No seminar March 28, Shamgar Gurevitch, UW-Madison Title: Harmonic Analysis on $GL_n$ over finite fields, and Random Walks Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters. For evaluating or estimating these sums, one of the most salient quantities to understand is the character ratio: $$ \text{trace}(\rho(g))/\text{dim}(\rho), $$ for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant rank. This talk will discuss the notion of rank for $GL_n$ over finite fields, and apply the results to random walks. This is joint work with Roger Howe (Yale and Texas A&M). April 4, Philip Matchett Wood, UW-Madison Title: Outliers in the spectrum for products of independent random matrices Abstract: For a fixed positive integer m, we consider the product of m independent n by n random matrices with iid entries, in the limit as n tends to infinity. Under suitable assumptions on the entries of each matrix, it is known that the limiting empirical distribution of the eigenvalues is described by the m-th power of the circular law. Moreover, this same limiting distribution continues to hold if each iid random matrix is additively perturbed by a bounded rank deterministic error. However, the bounded rank perturbations may create one or more outlier eigenvalues. We describe the asymptotic location of the outlier eigenvalues, which extends a result of Terence Tao for the case of a single iid matrix. Our methods also allow us to consider several other types of perturbations, including multiplicative perturbations.
Joint work with Natalie Coston and Sean O'Rourke. April 11, Eviatar Procaccia, Texas A&M Title: Stabilization of Diffusion Limited Aggregation in a Wedge Abstract: We prove a discrete Beurling estimate for the harmonic measure in a wedge in $\mathbf{Z}^2$, and use it to show that Diffusion Limited Aggregation (DLA) in a wedge of angle smaller than $\pi/4$ stabilizes. This allows one to consider the infinite DLA and questions about the number of arms, growth and dimension. I will present some conjectures and open problems. April 18, Andrea Agazzi, Duke Title: Large Deviations Theory for Chemical Reaction Networks Abstract: The microscopic dynamics of well-stirred networks of chemical reactions are modeled as jump Markov processes. At large volume, one may expect in this framework a straightforward application of large deviation theory. This is not at all true, for the jump rates of this class of models are typically neither globally Lipschitz nor bounded away from zero, with both blowup and absorption as quite possible scenarios. In joint work with Amir Dembo and Jean-Pierre Eckmann, we utilize Lyapunov stability theory to bypass these challenges and to characterize a large class of network topologies that satisfy the full Wentzell-Freidlin theory of asymptotic rates of exit from domains of attraction. Under the assumption of positive recurrence, these results also allow for the estimation of transition times between metastable states of this class of processes. April 25, Kavita Ramanan, Brown April 26, Colloquium, Kavita Ramanan, Brown Title: Tales of Random Projections Abstract: The interplay between geometry and probability in high-dimensional spaces is a subject of active research. Classical theorems in probability theory such as the central limit theorem and Cramer's theorem can be viewed as providing information about certain scalar projections of high-dimensional product measures.
In this talk we will describe the behavior of random projections of more general (possibly non-product) high-dimensional measures, which are of interest in diverse fields, ranging from asymptotic convex geometry to high-dimensional statistics. Although the study of (typical) projections of high-dimensional measures dates back to Borel, only recently has a theory begun to emerge, which in particular identifies the role of certain geometric assumptions that lead to better behaved projections. A particular question of interest is to identify what properties of the high-dimensional measure are captured by its lower-dimensional projections. While fluctuations of these projections have been studied over the past decade, we describe more recent work on the tail behavior of multidimensional projections, and associated conditional limit theorems.
Definition:Convergent Series/Number Field Definition Let $S$ be one of the standard number fields $\Q, \R, \C$. Let $\displaystyle \sum_{n \mathop = 1}^\infty a_n$ be a series in $S$. Let $\sequence {s_N}$ be the sequence of partial sums of $\displaystyle \sum_{n \mathop = 1}^\infty a_n$. If $s_N \to s$ as $N \to \infty$, the series converges to the sum $s$, and one writes $\displaystyle \sum_{n \mathop = 1}^\infty a_n = s$. A series is said to be convergent if and only if it converges to some $s$. Examples The series $\displaystyle \sum_{n \mathop = 1}^\infty a_n$, where: $a_n = \dfrac {\paren {-1}^n + i \cos n \theta} {n^2}$ is convergent. The series $\displaystyle \sum_{n \mathop = 1}^\infty a_n$, where: $a_n = \dfrac 1 {n^2 - i n}$ is convergent. The series $\displaystyle \sum_{n \mathop = 1}^\infty a_n$, where: $a_n = \dfrac {e^{i n} } {n^2}$ is convergent. The series $\displaystyle \sum_{n \mathop = 1}^\infty a_n$, where: $a_n = \paren {\dfrac {2 + 3 i} {4 + i} }^n$ is convergent.
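As a quick numerical illustration of the third example, the partial sums of $a_n = \dfrac{e^{in}}{n^2}$ form a Cauchy sequence, since the tail is dominated by $\sum 1/n^2$. A minimal sketch of mine, not part of the page:

```python
import cmath

def partial_sum(N):
    """Partial sum s_N of the convergent series with a_n = e^{i n} / n^2."""
    return sum(cmath.exp(1j * n) / n**2 for n in range(1, N + 1))

s1, s2 = partial_sum(1000), partial_sum(4000)
# |s_4000 - s_1000| <= sum of 1/n^2 for n = 1001..4000, which is below 1/1000
assert abs(s2 - s1) < 1e-3
```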
Alexander Gasnikov,Pavel Dvurechensky,Eduard Gorbunov,Evgeniya Vorontsova,Daniil Selikhanovych,César A. Uribe; Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1374-1391, 2019. Abstract We consider convex optimization problems with the objective function having Lipshitz-continuous $p$-th order derivative, where $p\geq 1$. We propose a new tensor method, which closes the gap between the lower $\Omega\left(\e^{-\frac{2}{3p+1}} \right)$ and upper $O\left(\e^{-\frac{1}{p+1}} \right)$ iteration complexity bounds for this class of optimization problems. We also consider uniformly convex functions, and show how the proposed method can be accelerated under this additional assumption. Moreover, we introduce a $p$-th order condition number which naturally arises in the complexity analysis of tensor methods under this assumption. Finally, we make a numerical study of the proposed optimal method and show that in practice it is faster than the best known accelerated tensor method. We also compare the performance of tensor methods for $p=2$ and $p=3$ and show that the 3rd-order method is superior to the 2nd-order method in practice. @InProceedings{pmlr-v99-gasnikov19a,title = {Optimal Tensor Methods in Smooth Convex and Uniformly ConvexOptimization},author = {Gasnikov, Alexander and Dvurechensky, Pavel and Gorbunov, Eduard and Vorontsova, Evgeniya and Selikhanovych, Daniil and Uribe, C\'esar A.},booktitle = {Proceedings of the Thirty-Second Conference on Learning Theory},pages = {1374--1391},year = {2019},editor = {Beygelzimer, Alina and Hsu, Daniel},volume = {99},series = {Proceedings of Machine Learning Research},address = {Phoenix, USA},month = {25--28 Jun},publisher = {PMLR},pdf = {http://proceedings.mlr.press/v99/gasnikov19a/gasnikov19a.pdf},url = {http://proceedings.mlr.press/v99/gasnikov19a.html},abstract = {We consider convex optimization problems with the objective function having Lipshitz-continuous $p$-th order derivative, where $p\geq 1$. 
Discrete Uniform Distribution

Introduction

Since much of this class is about notation, this section attempts to build on your experience with dice in an effort to minimize the mental hurdles that follow from the new notation. Expanding on the process of rolling a single die, we introduce a more formal definition of a random variable. Despite the name, random variables are 1) not random and 2) not variables. They get this name nonetheless because we think of them as variables that take on random values. Using a fair die as an example of a random variable, we introduce a particular, but not the only, notion of probability. While it’s easy to get lost in the new notation and the naturalness of this interpretation of probability, don’t lose sight of the world’s most interesting processes. Random variables are not always repeatable in an operational manner, and thus it’s not always obvious how probability is meant to be understood in these contexts.

Warm Up

Dice are easy to think about, because we’ve all rolled a die before and we all think we know what we mean when we say the probability of rolling a $1$ is $1/6$. Throughout this section, don’t let this intuition go. Rather, expand upon it to the more detailed descriptions below. As a warm up, let’s introduce a few new words based on the easy-to-think-about dice example.

Experiment: An occurrence with an uncertain outcome that we can observe. For example, rolling a die.

Outcome: The result of an experiment; one particular state of the world. Sometimes called a “case.” For example: $4$.

Sample Space: The set of all possible outcomes for the experiment. For example, $\{1, 2, 3, 4, 5, 6\}$.

Event: A subset of possible outcomes that together have some property we are interested in. For example, the event “even die roll” is the set of outcomes $\{2, 4, 6\}$.
Probability: As Pierre-Simon Laplace said, the probability of an event with respect to a sample space is the number of favorable cases (outcomes from the sample space that are in the event) divided by the total number of cases in the sample space. (This assumes that all outcomes in the sample space are equally likely.) Since it is a ratio, probability will always be a number between $0$ (representing an impossible event) and $1$ (representing a certain event). For example, the probability of an even die roll is $3/6 = 1/2$.

The specific definitions above come from Peter Norvig’s A Concrete Introduction to Probability (using Python), which is a great resource if you want more information about the basics of probability.

Random Variable

A random variable is a function from the sample space to numerical values. Despite the name, the randomness is not, per se, part of the variable. The randomness is instead found in the underlying process that the random variable is meant to quantify. A die is especially easy to think about, because it maps so well to a random variable. But for the sake of clarity, let’s imagine a die labeled with the letters $A, B, C, D, E, F$ instead of the numbers $1, 2, 3, 4, 5, 6$. As far as the events go, rolling an $A$ will still happen with probability $1/6$; there’s only one $A$ and $6$ possible outcomes, hence $1/6$. This special die helps us separate the distinct pieces of random variables. With this special die, we have events (any value of interest that a die might turn up) and values produced by the random variable associated with those events. In mathematical notation, we might write $X(A) = 1$, $X(B) = 2, \ldots, X(F) = 6$. In mathematical statistics, we read $X \sim \text{Uniform}(\{A, B, C, D, E, F\})$ as: the random variable $X$ follows a discrete Uniform distribution on the set $\{A, B, C, D, E, F\}$.
If you’re content to keep numbers on your die to enable cleaner notation, we read $X \sim \text{Uniform}(1, 6)$ as: the random variable $X$ follows a discrete Uniform distribution on the set $\{1, 2, 3, 4, 5, 6\}$. Notice that the notation $\text{Uniform}(1, 6)$ implies the integer values from $1$ to $6$, inclusive. The notation $X \sim \text{Uniform}(1, 6)$ is more common. In fact, it’s common to drop the argument to the random variable, which is really a function, entirely. More often, interest lies in the probability of events. For example, we might be interested in the probability that either $A, B,$ or $C$ turns up. Let $E = \{1, 2, 3\}$ be the event that an $A, B$, or $C$ turns up in one roll. We read $P(X \in E)$ as: what is the probability that we roll one of $A, B,$ or $C$? At a certain point, the argument to $X$ just gets in the way, since the notation $P(X \in E)$ applies equally to a die labeled with letters or numbers.

Consider another random variable, also named $X$. Let $X \sim U(0, 1)$ be a discrete Uniform random variable on the numbers $0$ and $1$. Note that this could reasonably represent a fair coin, if we are willing to drop the events $\{T, H\}$ from our notation. Next, we will consider what the following mathematical statement means, in an operational sense: $P(X \in \{ 1 \}) = P(X = 1) = 1/2$.

Probability

Retired professor M. K. Smith provides a nice survey of the various notions of the probability of an event. This book will focus on an empirical version of probability that goes as follows. The probability of an event $E$ is the limiting relative frequency of the occurrences of $E$ over the number of experiments $N$,

$$P(X \in E) = \lim_{N \to \infty} \frac{1}{N} \sum_{n=1}^{N} 1(X_n \in E),$$

where $1(X \in E)$ takes on the value $1$ any time the random variable $X$ is in the event $E$ and $0$ otherwise. We interpret probability as if the process that produces $X$ were repeated (thus assuming repeatability) an infinite number of times.
In terms of a fair coin, $P(X \in \{H\}) = 1/2$ implies that we believe that flipping a fair coin an infinite number of times would witness one half of the flips producing heads.

Probability in practice

Statistics attempts to approximate probabilities defined with respect to random variables. The most common approximation strategy, relative to the theoretical definition of probability above, is to simply drop the limit. Let’s define an approximation $\hat{P}(X \in E)$ to $P(X \in E)$ above,

$$\hat{P}(X \in E) = \frac{1}{N} \sum_{n=1}^{N} 1(X_n \in E).$$

In practice, we might let $X$ represent a coin. If we are interested in the event of flipping heads, then we might flip this coin $N$ times, add up the total number of heads, and divide by the total number of experiments. Take the time to notice that this is exactly what the $\hat{P}$ notation is saying mathematically.
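The approximation above can be made concrete with a tiny simulation (a sketch; the die and the event $E = \{1, 2, 3\}$ are the ones from this section):

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

N = 100_000
# Event E = {1, 2, 3}: rolling one of A, B, or C on the lettered die.
E = {1, 2, 3}
rolls = [random.randint(1, 6) for _ in range(N)]
# P-hat: count the occurrences of E and divide by the number of experiments.
p_hat = sum(1 for x in rolls if x in E) / N
print(p_hat)  # close to the theoretical P(X in E) = 1/2
```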
I am aware there are other lines of proof of this statement, but I am interested in the argument outlined here on pages 62-63.

Corollary II.2.2.9 Let $A$ and $B$ be $C^*$-algebras and $\phi:A \rightarrow B$ an injective $*$-homomorphism. Then $\phi$ is isometric, i.e. $||\phi(x)|| = ||x||$ for all $x \in A$.

The proof goes as follows: WLOG we may assume $A,B$ are commutative (I got this), and it is obvious from II.2.2.4 (as below).

Theorem II.2.2.4 If $A$ is a commutative $C^*$-algebra, then the Gelfand transform is an isometric $*$-isomorphism from $A$ onto $C_0(\hat{A})$.

How does II.2.2.4 imply II.2.2.9?
I want to prove $$S^{\mu \nu}=\frac{i}{4}[\gamma^\mu,\gamma^\nu].$$ I started from $$[\gamma^\mu,S^{\alpha\beta}]=(J^{\alpha\beta})^\mu{}_\nu \gamma^\nu.$$ Substituting the value $(J^{\alpha\beta})^\mu{}_\nu = i(\eta^{\alpha\mu}\delta^\beta_\nu-\eta^{\beta\mu}\delta^\alpha_\nu)$, we get $$\gamma^\mu S^{\alpha\beta}-S^{\alpha\beta}\gamma^\mu=i(\eta^{\alpha\mu}\gamma^\beta-\eta^{\beta\mu}\gamma^\alpha).$$ What's the next step? Also tell me if there is any other decent method. Note: I am using the metric $\mathrm{diag}(1,-1,-1,-1)$.

We do not have to guess the structure of $S^{\mu\nu}$. You are really close; just substitute $$2\eta^{\mu\nu}= \{\gamma^\mu,\gamma^\nu\}.$$ Rearranging the gammas on the left side, you will automatically obtain a structure like your desired one by comparing both sides. $$\gamma^\mu S^{\alpha\beta}-S^{\alpha\beta}\gamma^\mu=i(\eta^{\alpha\mu}\gamma^\beta-\eta^{\beta\mu}\gamma^\alpha)$$ $$\gamma^\mu S^{\alpha\beta}-S^{\alpha\beta}\gamma^\mu=\frac{i}{2}( \{\gamma^\alpha,\gamma^\mu\}\gamma^\beta- \{\gamma^\beta,\gamma^\mu\}\gamma^\alpha)$$ There might be a difference of some constant factor; fix it yourself.

You'll probably be a little disappointed, but it seems that one had to guess the answer:

Existence of a solution to your second equation: check that your first equation does satisfy the second.

Uniqueness?: I first thought that it could not be shown, but as usual for linear equations, if you have a particular solution $S^{\alpha\beta}$ of a non-homogeneous equation, then the others are obtained by solving the associated homogeneous one, here $$[\gamma^\mu, T^{\alpha\beta}]= \gamma^\mu \cdot T^{\alpha\beta}-T^{\alpha\beta}\cdot \gamma^\mu=0.$$ This is equivalent to $$\gamma^\mu \cdot T^{\alpha\beta} = T^{\alpha\beta}\cdot \gamma^\mu\quad \Longleftrightarrow\quad T^{\alpha\beta} = (\eta^{\mu\mu})\, \gamma^{\mu}\cdot T^{\alpha\beta}\cdot \gamma^\mu \quad \text{(no sum over } \mu\text{)}.$$ I leave it to others to find the possible solutions...
For those who do not know where that comes from: the second equation is the linearized form of $$S(\Lambda)\cdot \gamma^{\mu}\cdot S^{-1}(\Lambda) = \Lambda^{\mu}{}_{\nu}\, \gamma^{\nu}.$$ One wants to find a representation $S(\Lambda)$ of the Lorentz group that satisfies such a relation, and the previous $S^{\alpha\beta}$ are the generators in this representation, $$S(\Lambda)= \mathrm{Id} - \frac{i}{2}\, \omega_{\mu\nu}\, S^{\mu\nu} + o(\omega),$$ while the $J^{\alpha\beta}$ are the generators in the defining representation. There are constructions in books on Clifford algebras where the $S(\Lambda)$ can be constructed explicitly, without exhibiting a solution from nowhere. Sketch: a Lorentz transformation can always be written as a composition of particular "symmetries" $\Lambda_0$ (of the kind $x$ mapped to $-x$ and identity on orthogonal vectors). For those, one finds a simple associated $S(\Lambda_0)$...
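For readers who want a sanity check, the commutator identity can be verified numerically once a concrete representation is chosen (a sketch; the explicit matrices below are the standard Dirac-basis gammas, an assumption not spelled out in the thread):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Z = np.zeros((2, 2), dtype=complex)

def block(a, b, c, d):
    return np.block([[a, b], [c, d]])

# Dirac-basis gamma matrices, metric diag(1, -1, -1, -1).
g = [block(I2, Z, Z, -I2)] + [block(Z, s, -s, Z) for s in (sx, sy, sz)]
eta = np.diag([1.0, -1.0, -1.0, -1.0])

def S(a, b):
    """S^{ab} = (i/4) [gamma^a, gamma^b]."""
    return 0.25j * (g[a] @ g[b] - g[b] @ g[a])

# Check [gamma^mu, S^{ab}] = i (eta^{a mu} gamma^b - eta^{b mu} gamma^a)
# for all index combinations.
ok = all(
    np.allclose(
        g[m] @ S(a, b) - S(a, b) @ g[m],
        1j * (eta[a, m] * g[b] - eta[b, m] * g[a]),
    )
    for m in range(4) for a in range(4) for b in range(4)
)
assert ok
```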
I'm trying to format an optimization problem, but I am having trouble aligning and labeling it properly in one environment. I have two equations, each written using an aligned environment. The first is

\documentclass{report}
\usepackage{amsmath}
\begin{document}
\begin{equation*}
  \begin{aligned}
    & \underset{y \in X,\ u \in Y}{\text{minimize}} && J(y,u) \\
    &\text{subject to}
  \end{aligned}
\end{equation*}
\end{document}

and the second is

\documentclass{report}
\usepackage{amsmath}
\begin{document}
\begin{equation*}
  \begin{cases}
    \begin{aligned}
      -\nabla^2 y &= u &\text{ for } x \text{ in } \Omega, \\
      y &= 0 &\text{ for } x \text{ on } \partial \Omega.
    \end{aligned}
  \end{cases}
\end{equation*}
\end{document}

I'd like to be able to either join them together into one equation formatted the same as the above code, or somehow leave them separated as two equations but label them jointly as one equation, so I can refer to the pair.
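One possible way (a sketch, not the only approach) is to nest the constraint block inside a single numbered `equation`, so both pieces share one label; the `\left\{ ... \right.` bracket replaces `cases` here:

```latex
\documentclass{report}
\usepackage{amsmath}
\begin{document}
\begin{equation}
  \begin{aligned}
    & \underset{y \in X,\ u \in Y}{\text{minimize}}
    && J(y,u) \\
    & \text{subject to}
    && \left\{
       \begin{aligned}
         -\nabla^2 y &= u && \text{for } x \text{ in } \Omega, \\
         y &= 0 && \text{for } x \text{ on } \partial\Omega.
       \end{aligned}
       \right.
  \end{aligned}
  \label{eq:ocp}
\end{equation}
\end{document}
```

Referring to `\eqref{eq:ocp}` then points at the whole problem at once.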
Mathematical background and pre work

Mathematical background

We will not assume a lot of mathematical background in this course but will use some basic notions from linear algebra, such as vector spaces (finite dimensional and almost always over the real numbers), matrices, and associated notions such as rank, eigenvalues, and eigenvectors. We will use the notion of convexity (of functions and sets) and some of its basic properties. We will also use basic notions from probability such as random variables, expectation, variance, and tail bounds, as well as properties of the normal (a.k.a. Gaussian) distribution. Though this will not be our main focus, we will assume some comfort with algorithms and notions such as order of growth (\(O(n), 2^{\Omega(n)}\), etc.) and some notions from computational complexity such as the notion of a reduction and the classes P and NP.

Probably the most important mathematical background for this course is that ever elusive notion of “mathematical maturity,” which basically means the ability to pick up on the needed notions as we go along. At any point, please do not hesitate to ask questions when you need clarifications or pointers to some references, either in class or on the Piazza forum.

Some references for some of this material (that include much more than what we need) are:

All these topics are covered to some extent in Ryan O’Donnell’s CMU class 15-859T: A Theorist’s Toolkit; see in particular Lectures 6-8 (spectral graph theory) and Lectures 13-14 (linear programming). See also the lecture notes for Jonathan Kelner’s MIT course 18.409 Topics in Theoretical Comp Sci. While not strictly necessary, you may find Luca Trevisan’s series of blog posts on expanders (from 2006, 2008, and 2011) illuminating.
We will sometimes touch upon Fourier analysis of Boolean functions, which is covered by O’Donnell’s excellent book and lecture notes.

For basic linear algebra and probability, see the lecture notes by Papadimitriou and Vazirani, and the lecture notes of Lehman, Leighton and Meyer from MIT Course 6.042 “Mathematics For Computer Science” (Chapters 1-2 and 14 to 19 are particularly relevant). The “Probabilistic Method” book by Alon and Spencer is a great resource for discrete probability. Also, the books of Mitzenmacher and Upfal and Prabhakar and Raghavan cover probability from a more algorithmic perspective.

Convexity, linear programming duality: see Boyd and Parrilo’s lecture notes, in particular Lectures 1-5. The book Convex Optimization by Boyd and Vandenberghe, which is available online, is an excellent resource for this area and includes much more than what we will use here.

Pre work (“homework 0”)

Please do the following reading and exercises before the first lecture.

Reading: Please read the lecture notes for the introduction to this course and for definitions of sum of squares over the hypercube. You don’t have to do the exercises in the lecture notes, but you may find attempting them useful. (See here for all notation used in these lecture notes.)

Exercises: You do not need to submit these exercises, or even to write them down properly, and feel free to collaborate with others while working on them. All matrices and vectors are over the reals. In all the exercises below you can use the fact that any \(n\times n\) matrix \(A\) has a singular value decomposition (SVD) \(A = \sum_{i=1}^r \sigma_i u_i \otimes v_i\) with \(\sigma_i \in \R\) and \(u_i,v_i \in \R^n\), where for every \(i,j\), \(\norm{u_i}=1\), \(\norm{v_j}=1\) (with \(\norm{v} =\sqrt{\sum v_i^2}\)), and for all \(i\neq j\), \(\iprod{u_i,u_j}=0\) and \(\iprod{v_i,v_j}=0\). (For vectors \(u,v\), their tensor product \(u\otimes v\) is the matrix \(T = uv^\top\) where \(T_{i,j} = u_iv_j\).)
Equivalently, \(A = U\Sigma V^\top\) where \(\Sigma\) is a diagonal matrix and \(U\) and \(V\) are orthogonal matrices (satisfying \(U^\top U = V^\top V = I\)). If \(A\) is symmetric then there is such a decomposition with \(u_i=v_i\) for all \(i\) (i.e., \(U=V\)). In this case the values \(\sigma_1,\ldots,\sigma_r\) are known as eigenvalues of \(A\) and the vectors \(v_1,\dots,v_r\) are known as eigenvectors. (This decomposition is unique if \(r=n\) and all the \(\sigma_i\)’s are distinct.) Moreover, the SVD of \(A\) can be found in polynomial time. (You can ignore issues of numerical accuracy in all exercises.)

For an \(n\times n\) matrix \(A\), the spectral norm of \(A\) is defined as the maximum of \(\norm{Av}\) over all vectors \(v\in\R^n\) with \(\norm{v}=1\).

* Prove that if \(A\) is symmetric (i.e., \(A=A^\top\)), then \(\norm{A} \leq \max_i \sum_j |A_{i,j}|\). (Hint: you can do this via the following stronger inequality: for any (not necessarily symmetric) matrix \(A\), \(\norm{A} \leq \sqrt{\alpha\beta}\) where \(\alpha = \max_i \sum_j |A_{i,j}|\) and \(\beta = \max_j \sum_i |A_{i,j}|\).)

* Show that if \(A\) is the adjacency matrix of a \(d\)-regular graph then \(\norm{A} = d\).

Let \(A\) be a symmetric \(n\times n\) matrix. The Frobenius norm of \(A\), denoted by \(\norm{A}_F\), is defined as \(\sqrt{\sum_{i,j} A_{i,j}^2}\).

* Prove that \(\norm{A} \leq \norm{A}_F \leq \sqrt{n}\norm{A}\). Give examples where each of those inequalities is tight.

* Let \(\Tr(A) = \sum_i A_{i,i}\). Prove that for every even \(k\), \(\norm{A} \leq \Tr(A^k)^{1/k} \leq n^{1/k}\norm{A}\).

Let \(A\) be a symmetric matrix such that \(A_{i,i}=0\) for all \(i\) and \(A_{i,j}\) is chosen to be a random value in \(\{\pm 1\}\) independently of all others.

* Prove that (for \(n\) sufficiently large) with probability at least \(0.99\), \(\norm{A} \leq n^{0.9}\).

* (Harder) Prove that with probability at least \(0.99\), \(\norm{A} \leq n^{0.51}\).
While \(\norm{A}\) can be computed in polynomial time, both \(\max_i \sum_j |A_{i,j}|\) and \(\norm{A}_F\) give even simpler-to-compute upper bounds for \(\norm{A}\). However, the examples in the previous exercise show that they are not always tight. It is often easier to compute \(\Tr(A^k)^{1/k}\) than to compute \(\norm{A}\) directly, and as \(k\) grows this yields a better and better estimate.

Let \(A\) be an \(n\times n\) symmetric matrix. Prove that the following are equivalent:

* \(A\) is positive semi-definite. That is, for every vector \(v\in \R^n\), \(v^\top A v \geq 0\) (where we think of vectors as column vectors, so \(v^\top A v = \sum_{i,j} A_{i,j}v_iv_j\)).

* All eigenvalues of \(A\) are non-negative. That is, if \(Av = \lambda v\) then \(\lambda \geq 0\).

* The quadratic polynomial \(P_A\) defined as \(P_A(x) = \sum A_{i,j} x_ix_j\) is a sum of squares. That is, there are linear functions \(L_1,\ldots,L_m\) such that \(P_A = \sum_i (L_i)^2\).

* \(A = B^\top B\) for some \(r\times n\) matrix \(B\).

* There exists a set of correlated random variables \((X_1,\ldots,X_n)\) such that for every \(i,j\), \(\E X_i X_j = A_{i,j}\), and moreover, for every \(i\), the random variable \(X_i\) is distributed like a normal variable with mean \(0\) and variance \(A_{i,i}\).
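Several of the norm inequalities above are easy to check numerically; the sketch below (an illustration, not a substitute for the proofs) verifies them for the random sign matrix from the exercise:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Random symmetric +-1 matrix with zero diagonal, as in the exercise.
A = np.triu(rng.choice([-1.0, 1.0], size=(n, n)), k=1)
A = A + A.T

spec = np.linalg.norm(A, 2)            # spectral norm ||A||
row = np.abs(A).sum(axis=1).max()      # max_i sum_j |A_ij|
frob = np.linalg.norm(A, "fro")        # Frobenius norm ||A||_F
k = 8
trace_bound = np.trace(np.linalg.matrix_power(A, k)) ** (1.0 / k)

# ||A|| <= max row sum (for symmetric A).
assert spec <= row + 1e-9
# ||A|| <= ||A||_F <= sqrt(n) ||A||.
assert spec <= frob + 1e-9 and frob <= np.sqrt(n) * spec + 1e-6
# ||A|| <= Tr(A^k)^{1/k} <= n^{1/k} ||A|| for even k.
assert spec <= trace_bound + 1e-6 <= n ** (1.0 / k) * spec + 1e-3
```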
$$1 - \frac{1}{3 \cdot 2!} + \frac{1}{5 \cdot 3!} - \frac{1}{7 \cdot 4!}+\cdots$$ I am new to Math Overflow, and I do not know how to format the math, sorry! Also, what should this converge to?

The series is given by $$\sum_{n=0}^\infty\frac{(-1)^n}{(2n+1)(n+1)!}.$$ Convergence follows from the Alternating Series Test. Is this test known to you?

For the convergence, ΘΣΦGenSan gave the tool to use. For the sum, it is (at least to me) slightly tricky. Consider $$y=\sum_{n=0}^\infty \frac{(-1)^n x^{2 n+1}}{(2 n+1) (n+1)!}$$ and differentiate: $$y'=\sum_{n=0}^\infty \frac{(-1)^n x^{2 n}}{(n+1)!}=\frac{1}{x^2}-\frac{e^{-x^2}}{x^2}.$$ Now integrate to get $$y=-\frac{1}{x}+\frac{e^{-x^2}}{x}+\sqrt{\pi }\,\text{erf}(x),$$ where the error function appears. Take $x=1$ and get $$-1+\frac{1}{e}+\sqrt{\pi }\, \text{erf}(1)\approx 0.861528.$$ If you compute the partial sums $$S_p=\sum_{n=0}^p \frac{(-1)^n x^{2 n+1}}{(2 n+1) (n+1)!},$$ you should notice that this value (six significant figures) is already obtained for $p=7$.

$$1 - \frac{1}{3 \cdot 2!} + \frac{1}{5 \cdot 3!} - \frac{1}{7 \cdot 4!}+\cdots$$ A standard theorem says this converges if the corresponding series of absolute values converges: $$1 + \frac{1}{3 \cdot 2!} + \frac{1}{5 \cdot 3!} + \frac{1}{7 \cdot 4!}+\cdots$$ The comparison test says this converges if the following converges: $$1 + \frac 1 {2!} + \frac 1 {3!} + \frac 1 {4!}+\cdots$$ And that converges by the ratio test: $$ \frac{1/(n+1)!}{1/n!} = \frac 1 {n+1} \to 0 \text{ as } n\to\infty. $$
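The closed form above is easy to check numerically with nothing but the Python standard library (a sketch verifying both the value and the claim about $p=7$):

```python
import math

def partial_sum(p):
    """S_p = sum_{n=0}^{p} (-1)^n / ((2n+1) (n+1)!) at x = 1."""
    return sum((-1) ** n / ((2 * n + 1) * math.factorial(n + 1))
               for n in range(p + 1))

closed_form = -1 + 1 / math.e + math.sqrt(math.pi) * math.erf(1)
print(closed_form)     # about 0.861528
print(partial_sum(7))  # already agrees to six significant figures
```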
Let $\mu_n$ be a sequence of positive Radon measures on $\mathbb{R}^n$ weakly converging (as duals of continuous compactly supported functions) to a measure $\mu$. Assume that $f_n(z)$ is a sequence of positive, compactly supported functions such that: they are uniformly supported in a ball, i.e., for every $n$ their support is contained in a ball $B_r(0)$, and they are uniformly bounded with $\lvert f_n(z)\rvert\leq C$. Moreover, $f_n(z)\to 0$ for any $z\in \mathbb{R}^n$. Is it true that, at least for a subsequence $n_j$, we have \begin{equation} \lim_{j\to\infty}\int f_{n_j}(z) \, d\mu_{n_j}(z)=0? \end{equation}
On the asymptotic character of a generalized rational difference equation

1. Department of Mathematics, Indian Institute of Science, Bangalore, Karnataka, 560012, India
2. Department of Mathematics, Maligram, Paschim Medinipur, 2421140, India

We investigate the global asymptotic stability of the solutions of $X_{n+1}=\frac{\beta X_{n-l} + \gamma X_{n-k}}{A + X_{n-k}}$ for $n=1,2,\ldots$, where $l$ and $k$ are positive integers such that $l \neq k$. The parameters are positive real numbers and the initial conditions are arbitrary nonnegative real numbers. We find necessary and sufficient conditions for the global asymptotic stability of the zero equilibrium. We also investigate the positive equilibrium and find the regions of parameters where the positive equilibrium is a global attractor of all positive solutions. Of particular interest for this generalized equation would be the existence of unbounded solutions and the existence of prime period-two solutions depending on the combination of delay terms ($l$, $k$) being (odd, odd), (odd, even), (even, odd) or (even, even). In this manuscript we investigate these aspects of the solutions for all such combinations of delay terms.

Mathematics Subject Classification: 39A10, 39A11.

Citation: Esha Chatterjee, Sk. Sarif Hassan. On the asymptotic character of a generalized rational difference equation. Discrete & Continuous Dynamical Systems - A, 2018, 38 (4) : 1707-1718. doi: 10.3934/dcds.2018070
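To experiment with the qualitative behavior of this recurrence, a short simulation helps (a sketch; the parameter values below are arbitrary illustrations, not taken from the paper):

```python
def iterate(beta, gamma, A, l, k, x0, steps):
    """Iterate x[n+1] = (beta*x[n-l] + gamma*x[n-k]) / (A + x[n-k]).

    x0 must supply at least max(l, k) + 1 nonnegative initial values.
    """
    x = list(x0)
    for _ in range(steps):
        x.append((beta * x[-1 - l] + gamma * x[-1 - k]) / (A + x[-1 - k]))
    return x

# Example run with beta + gamma < A (illustrative values, l=1, k=2):
# each new term is at most ((beta+gamma)/A) times the recent maximum,
# so the trajectory decays toward the zero equilibrium.
traj = iterate(beta=0.5, gamma=0.3, A=2.0, l=1, k=2,
               x0=[1.0, 1.0, 1.0], steps=200)
print(traj[-1])
```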
Theory of Thermoviscous Acoustics: Thermal and Viscous Losses When sound propagates in structures and geometries with small dimensions, the sound waves become attenuated because of thermal and viscous losses. More specifically, the losses occur in the acoustic thermal and viscous boundary layers near the walls. This known phenomenon needs to be considered to evaluate how these losses affect thermoviscous acoustics systems in order to build accurate models and match experimental measurements. An Introduction to Thermoviscous Acoustics When modeling the response of small transducers, like condenser microphones, MEMS microphones, and miniature loudspeakers, it is necessary to include thermal and viscous losses. Other applications include analyzing feedback in hearing aids and mobile devices as well as studying the damped vibrations of MEMS structures. Analyzing the transfer impedance of the standard IEC 60318-4 occluded ear canal simulator, sometimes referred to as the 711 coupler, is a good example to demonstrate the simulation of such a thermoviscous acoustic system, and is depicted in the figure below. In the graph to the right, the response is modeled with and without thermoviscous acoustic losses. It is evident that these types of losses need to be included in a simulation in order to capture the correct behavior when comparing the curves to the standard data. The pressure distribution inside an occluded ear canal simulator at 7850 Hz (left), complying with the IEC 60318-4 standard. The modeled transfer impedance of the coupler is shown in blue, including thermal and viscous losses, together with the prescribed standard curves in red and the curve resulting from a pure lossless model in green (right). The thermoviscous effect is typically most pronounced at resonances, broadening them and shifting them down in frequency.
To model these effects, it is necessary to include thermal conduction effects and viscous losses explicitly in the governing equations, solving the momentum (Navier-Stokes), mass (continuity), and energy conservation equations. This is achieved by solving the thermoviscous acoustics equations in the Thermoviscous Acoustics interfaces in the Acoustics Module. The equations are also known as the viscothermal acoustics or the linearized Navier-Stokes equations. Here, we will present the physical background for the thermoviscous acoustics equations along with the important boundary layer characteristic length scale. We will also provide a short description of the material parameters necessary for describing fluid media. Exploring the Physics Behind Thermoviscous Acoustics Acoustic waves are the propagation of small linear fluctuations in pressure on top of a background stationary (atmospheric) pressure. The governing equations for the fluctuations (the wave equation or Helmholtz's equation) are derived by perturbing, or linearizing, the fundamental governing equations of fluid mechanics: the momentum (Navier-Stokes), continuity, and energy equations. This results in the conservation equations for momentum, mass, and energy for any small (acoustic) perturbation. For many acoustics simulation applications, a series of assumptions are then made to simplify these equations. The system is assumed lossless and isentropic (adiabatic and reversible). Yet, if you retain both the viscous and heat conduction effects, you will end up with the equations for thermoviscous acoustics that solve for the acoustic perturbations in pressure, velocity, and temperature. Deriving the Governing Equations The procedure to derive the governing equations in the frequency domain starts with assuming small harmonic oscillations about the steady background properties.
The dependent variables take the form p = p_0 + p' e^{\mathrm{i}\omega t}, \mathbf{u} = \mathbf{u}_0 + \mathbf{u}' e^{\mathrm{i}\omega t}, and T = T_0 + T' e^{\mathrm{i}\omega t}, where p is the pressure, \mathbf{u} is the velocity field, T is the temperature, and \omega is the angular frequency. Primed (‘) variables are the acoustic variables, while variables with the subscript 0 represent the background mean flow quantities. In thermoviscous acoustics, the background fluid is assumed to be quiescent, so that \mathbf{u}_0=\mathbf{0}. The background pressure p_0 and background temperature T_0 need to be specified (they can be functions of space, T_0=T_0(\mathbf{x}) and p_0=p_0(\mathbf{x})). Inserting the above ansatz into the governing equations and only retaining terms that are linear in the first-order variables yields the governing equations for the propagation of acoustic waves, including viscous and thermal losses. Note: Details on this process can be found in the User’s Guide of the Acoustics Module in the “Theory Background for the Thermoviscous Acoustics Branch” section. The governing equations in the Thermoviscous Acoustics interface, in the frequency domain, are the continuity equation (omitting primes from all the acoustic variables), where \rho_0 is the background density; the momentum equation, where \mu is the dynamic viscosity, \mu_\textrm{B} is the bulk viscosity, and the term on the right-hand side represents the divergence of the stress tensor; the energy conservation equation, where C_p is the heat capacity at constant pressure, \textrm{k} is the thermal conductivity, \alpha_0 is the coefficient of thermal expansion (isobaric), and Q is a possible heat source; and finally, the linearized equation of state, which relates variations in pressure, temperature, and density, where \beta_T is the isothermal compressibility. The left-hand sides of the governing equations represent the conserved quantities: mass, momentum, and energy (actually, entropy). In the frequency domain, multiplication with i\omega corresponds to differentiation with respect to time.
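The displayed equations were lost in extraction. For reference, the linearized frequency-domain equations take the standard form sketched below; this follows the symbol definitions above and the usual sign and viscous-term conventions of the linearized Navier-Stokes formulation, so treat it as a sketch rather than a verbatim reproduction of the original figures:

```latex
% Continuity (mass conservation)
\mathrm{i}\omega \rho + \rho_0 (\nabla \cdot \mathbf{u}) = 0
% Momentum (linearized Navier-Stokes)
\mathrm{i}\omega \rho_0 \mathbf{u} = \nabla \cdot \left[ -p\mathbf{I}
  + \mu \left( \nabla\mathbf{u} + (\nabla\mathbf{u})^{\mathrm{T}} \right)
  - \left( \tfrac{2}{3}\mu - \mu_\mathrm{B} \right) (\nabla \cdot \mathbf{u})\mathbf{I} \right]
% Energy (entropy) conservation
\rho_0 C_p\, \mathrm{i}\omega T - \alpha_0 T_0\, \mathrm{i}\omega p
  = \nabla \cdot (\mathrm{k} \nabla T) + Q
% Linearized equation of state
\rho = \rho_0 (\beta_T\, p - \alpha_0\, T)
```

Note how each left-hand side is a conserved quantity multiplied by i\omega, matching the remark above that i\omega plays the role of a time derivative.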
The terms on the right-hand sides represent the processes that locally change or modify the respective conserved quantity. In two of the equations, diffusive loss terms are present, due to viscous shear and thermal conduction. Viscous losses are present when there are gradients in the velocity field, while thermal losses are present when there are gradients in the temperature. Viscous and Thermal Boundary Layers When sound waves propagate in a fluid bounded by walls, so-called viscous and thermal boundary layers are created at the solid surfaces. At the wall, the no-slip condition applies to the velocity field, \mathbf{u} = 0, and an isothermal condition for the temperature, T = 0. The isothermal condition is a very good approximation, as thermal conduction is typically orders of magnitude higher in solids than fluids. These two conditions give rise to the acoustic boundary layer, which consists of the viscous and thermal boundary layers. The flow transforms from the bulk condition of being nearly lossless and described by isentropic (adiabatic) conditions to the conditions in this layer. The problem of a time-harmonic wave propagating in the horizontal plane along a wall (this could be waves propagating in a small section of a pipe) is illustrated in the figures below. The left figure shows the velocity amplitude, the right figure shows the fluid’s temperature variations from the wall towards the bulk, while the middle figure shows the velocity magnitude as well as an animation indicating the velocity vector over a harmonic period. Velocity amplitude (left) and fluid temperature (right), from the wall to the bulk, of an acoustic wave propagating in the horizontal plane (bottom). The viscous and thermal boundary layer thicknesses are indicated by the red dotted lines closest to the wall. The upper dotted lines represent 2 \pi times the boundary layer thickness, in each case. 
The animation indicates the acoustic velocity components, while the color plot shows the velocity amplitude. The viscous and thermal boundary layers are clearly visible. The thicknesses are sometimes referred to as the viscous and thermal penetration depths. Because gradients are large in the boundary layer, losses are large here too. This means that in systems of relatively small dimensions, the losses associated with the boundary layer become important. In many engineering applications (miniature transducers, mobile devices, and more), including the losses associated with the boundary layer is essential in order to model the correct physical behavior and response. The viscous characteristic length is shown as a red dotted line in the velocity and temperature plots shown above, together with 2 \pi times the value (known as the viscous/thermal wavelength). The two characteristic lengths are related by the dimensionless Prandtl number, Pr, which gives a measure of the ratio between the viscous losses and the thermal losses in a system. For air, this number is 0.7, while it is around 7.1 for water. In air, the thermal and viscous effects are roughly equal in importance, while for water (and most other fluids), the thermal losses play only a minor role. The viscous and thermal boundary layer thicknesses exist as predefined variables for use in postprocessing in the Acoustics Module, and they are denoted by ta.d_visc and ta.d_therm. The Prandtl number is denoted by ta.Pr. The plane wave problem can be solved analytically, and expressions for the viscous (d_\textrm{visc}) and thermal (d_\textrm{therm}) boundary layer thicknesses can then be derived. The value of d_\textrm{visc} is 0.22 mm for air and 0.057 mm for water at 100 Hz, 20 °C, and 1 atm. The figures below show the viscous and thermal boundary layers over a range of frequencies.
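The analytical expressions referred to above take the standard form d_visc = sqrt(2*mu/(rho0*omega)) and d_therm = sqrt(2*k/(rho0*Cp*omega)). The quoted values at 100 Hz can be checked with a short script; the material data below are typical handbook values for air and water at 20 °C and 1 atm (my assumption, not taken from the post):

```python
import math

def d_visc(mu, rho0, f):
    """Viscous boundary layer thickness: sqrt(2*mu / (rho0*omega))."""
    return math.sqrt(2.0 * mu / (rho0 * 2.0 * math.pi * f))

def d_therm(k, rho0, Cp, f):
    """Thermal boundary layer thickness: sqrt(2*k / (rho0*Cp*omega))."""
    return math.sqrt(2.0 * k / (rho0 * Cp * 2.0 * math.pi * f))

# Typical fluid properties at 20 degC and 1 atm (assumed handbook values)
air = dict(mu=1.81e-5, rho0=1.204, k=0.0257, Cp=1005.0)
water = dict(mu=1.00e-3, rho0=998.2, k=0.598, Cp=4182.0)

f = 100.0  # Hz
dv_air = d_visc(air["mu"], air["rho0"], f)        # about 0.22 mm
dv_water = d_visc(water["mu"], water["rho0"], f)  # about 0.057 mm
Pr_air = air["mu"] * air["Cp"] / air["k"]         # about 0.7
Pr_water = water["mu"] * water["Cp"] / water["k"] # about 7
```

The two thicknesses are related by d_therm = d_visc / sqrt(Pr), which is why they are so close in air (Pr near 0.7) but differ noticeably in water.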
The value of the viscous (d_\textrm{visc}) and thermal (d_\textrm{therm}) boundary layer thicknesses as functions of frequency for air (left) and water (right). This shows the diminishing effect of viscous and thermal losses at increasing acoustic frequencies. Finally, another important effect that is captured when modeling with the Thermoviscous Acoustics interface is the transition from adiabatic to isothermal acoustics at low frequencies in small devices. This effect occurs when the thermal boundary layer stretches over the full device and is important in, for example, condenser microphones, such as the B&K 4133 condenser microphone. At isothermal conditions, the speed of sound changes to the isothermal speed of sound. Bulk Losses, Attenuation, and Narrow Region Acoustics It is important to note that viscous and thermal losses also exist in the bulk of the fluid. These are losses that typically occur when acoustic signals propagate over long distances and are attenuated; one example is sonar signals. In air, these losses are only dominant at very high frequencies and can be neglected at audio frequencies. The bulk losses are, of course, also described by the governing equations of thermoviscous acoustics, as they include all of the physics. However, modeling large domains with the thermoviscous acoustics equations is very computationally expensive. In the Acoustics Module, you should instead use the Pressure Acoustics interface and select one of the available fluid models: Viscous, Thermally conducting, or Thermally conducting and viscous. Modeling with the Thermoviscous Acoustics interface can be computationally expensive because of the details needed to capture all physical effects. In cases where the acoustic waves propagate in waveguides or ducts of constant cross section, the thermoviscous losses can be modeled using the Narrow Region Acoustics feature in the Pressure Acoustics, Frequency Domain interface.
This domain feature adds the losses associated with the acoustic boundary layers to the fluid in a homogenized way. The losses are derived analytically and are thus exact for this case. This feature is very useful to reduce model size or to quickly get a first assessment of results before moving on to a full and detailed thermoviscous model. Material Parameters Solving a full thermoviscous acoustics model involves defining several material parameters: Dynamic viscosity, \mu The dynamic viscosity \mu measures the fluid’s resistance to shearing. It is the constant that relates stress to velocity gradients. The dynamic viscosity is related to the kinematic viscosity \nu by the relation \mu = \rho_0 \: \nu. The symbol \eta is also sometimes used for the dynamic viscosity. Bulk viscosity, \mu_\textrm{B} The bulk viscosity is also known as the volume viscosity, second viscosity, or expansive viscosity. It is related to losses that appear due to the compression and expansion of the fluid. \mu_\textrm{B} appears in the stress tensor term (right-hand side of the momentum equation), which involves the compressibility (\nabla\cdot\mathbf{u}) of the bulk fluid. This factor is difficult to measure and often depends on the frequency. Heat capacity at constant pressure (specific), C_p This material parameter measures how much energy is required to change the temperature of the fluid (at constant pressure). Coefficient of thermal conduction, \textrm{k} The coefficient of proportionality between the temperature gradient and the heat flux in Fourier’s heat conduction law. Coefficient of thermal expansion (isobaric), \alpha_0 This is the volumetric thermal expansion of the fluid and expresses the ability of the fluid to expand when its temperature rises. Isothermal compressibility, \beta_T An important parameter in the equation of state of the fluid. It relates changes in pressure to changes in volume in the fluid.
The isothermal compressibility is related to the usual (isentropic) compressibility through the ratio of specific heats by \beta_T = \gamma \beta_s. Concluding Thoughts on the Theory of Thermoviscous Acoustics Now that we have discussed the theory behind thermoviscous acoustics and the associated equations, we can move on to tips and tricks for setting up a thermoviscous acoustics model using COMSOL Multiphysics and the Acoustics Module. We will discuss this, as well as many examples and applications, in the next blog post in this series. Further Reading and References The “Thermoviscous Acoustics Branch” section in the Acoustics Module User’s Guide of the COMSOL Documentation D. T. Blackstock, “Fundamentals of Physical Acoustics”, John Wiley and Sons, 2000 S. Temkin, “Elements of Acoustics”, Acoustical Society of America, 2001 B. Lautrup, “Physics of Continuous Matter”, Second Edition, CRC Press, 2011 P. M. Morse and K. U. Ingard, “Theoretical Acoustics”, Princeton University Press A. D. Pierce, “Acoustics: An Introduction to Its Physical Principles and Applications”, Acoustical Society of America, 1989 A. S. Dukhin and P. J. Goetz, “Bulk viscosity and compressibility measurements using acoustic spectroscopy”, J. Chem. Phys. 130, 124519 (2009) Editor’s note: This blog post was updated on 7/12/2016 to be consistent with version 5.2a of COMSOL Multiphysics.
Symmetry of the superconducting gap First of all, a bit of theory. Superconductivity appears due to the Cooper pairing of two electrons, making non-trivial correlations between them in space. The correlation is widely known as the gap parameter $\Delta_{\alpha\beta}\left(\mathbf{k}\right)\propto\left\langle c_{\alpha}\left(\mathbf{k}\right)c_{\beta}\left(-\mathbf{k}\right)\right\rangle $ (the proportionality is merely a convention that will not matter for us), with $\alpha$ and $\beta$ the spin indices, $\mathbf{k}$ some wave vector, and $c$ the fermionic destruction operator. $\Delta$ corresponds to the order parameter associated with the general recipe of second-order phase transitions proposed by Landau. Physically, $\Delta$ is the energy gap at the Fermi energy created by the Fermi surface instability responsible for superconductivity. Since it is a correlation function between two fermions, $\Delta$ has to verify the Pauli exclusion principle, which imposes that $\Delta_{\alpha\beta}\left(\mathbf{k}\right)=-\Delta_{\beta\alpha}\left(-\mathbf{k}\right)$. You can derive this property from the anti-commutation relation of the fermion operators and the definition of $\Delta_{\alpha\beta}\left(\mathbf{k}\right)$ if you wish. When there is no spin-orbit coupling, both the spin and the momentum are good quantum numbers (you need an infinite system for the second, but this is of no importance here), and one can separate $\Delta_{\alpha\beta}\left(\mathbf{k}\right)=\chi_{\alpha\beta}\Delta\left(\mathbf{k}\right)$, with $\chi_{\alpha \beta}$ a spinor matrix and $\Delta\left(\mathbf{k}\right)$ a function. Then, there are two possibilities: $\chi_{\alpha\beta}=-\chi_{\beta\alpha}\Leftrightarrow\Delta\left(\mathbf{k}\right)=\Delta\left(-\mathbf{k}\right)$, a situation referred to as spin-singlet pairing; and $\chi_{\alpha\beta}=\chi_{\beta\alpha}\Leftrightarrow\Delta\left(\mathbf{k}\right)=-\Delta\left(-\mathbf{k}\right)$, a situation referred to as spin-triplet pairing.
Singlet includes $s$-wave, $d$-wave, ... terms; triplet includes the famous $p$-wave superconductivity (among others, like $f$-wave, ...). Since the normal situation (say, the historical BCS one) was for singlet pairing, and because only the second Pauli matrix $\sigma_{2}$ is antisymmetric, one conventionally writes the order parameter as $$\Delta_{\alpha\beta}\left(\mathbf{k}\right)=\left[\Delta_{0}\left(\mathbf{k}\right)+\mathbf{d}\left(\mathbf{k}\right)\boldsymbol{\cdot\sigma}\right]\left(\mathbf{i}\sigma_{2}\right)_{\alpha\beta}$$ where $\Delta_{0}\left(\mathbf{k}\right)=\Delta_{0}\left(-\mathbf{k}\right)$ encodes the singlet component of $\Delta_{\alpha\beta}\left(\mathbf{k}\right)$ and $\mathbf{d}\left(\mathbf{k}\right)=-\mathbf{d}\left(-\mathbf{k}\right)$ is a vector encoding the triplet state. Now the main important point: what is the exact $\mathbf{k}$-dependency of $\Delta_{0}$ or $\mathbf{d}$? This is a highly non-trivial question, to some extent still unanswered. There is a common consensus supposing that the symmetry of the lattice plays a central role in this question. I highly encourage you to open the book by Mineev and Samokhin (1998), Introduction to Unconventional Superconductivity, Gordon and Breach Science Publishers, to get a better idea about that point. The $p_{x}+\mathbf{i}p_{y}$ superconductivity For what bothers you, the $p_{x}+\mathbf{i}p_{y}$ superconductivity is the superconducting theory based on the following "choice": $\Delta_{0}=0$, $\mathbf{d}=\left(k_{x}+\mathbf{i}k_{y},\mathbf{i}\left(k_{x}+\mathbf{i}k_{y}\right),0\right)$, such that one has $$\Delta_{\alpha\beta}\left(\mathbf{k}\right)\propto\left(\begin{array}{cc}1 & 0\\0 & 0\end{array}\right)\left(k_{x}+\mathbf{i}k_{y}\right)\equiv\left(k_{x}+\mathbf{i}k_{y}\right)\left|\uparrow\uparrow\right\rangle $$ which is essentially a phase term (when $k_{x}=k\cos\theta$ and $k_{y}=k\sin\theta$) on top of a spin-polarized electron pair.
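One can check directly that this $\mathbf{d}$-vector produces the displayed gap matrix: contracting $\mathbf{d}$ with the Pauli matrices and multiplying by $\mathbf{i}\sigma_2$ leaves only the $\uparrow\uparrow$ entry, proportional to $k_x+\mathbf{i}k_y$. A quick numerical sketch (plain Python with 2x2 complex matrices; the test momentum is arbitrary):

```python
# Helpers for 2x2 complex matrices represented as nested lists
def matmul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def madd(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def scal(c, A):
    return [[c * A[i][j] for j in range(2)] for i in range(2)]

# Pauli matrices
s1 = [[0, 1], [1, 0]]
s2 = [[0, -1j], [1j, 0]]
s3 = [[1, 0], [0, -1]]
i_s2 = scal(1j, s2)

kx, ky = 0.3, 0.7   # arbitrary test momentum
w = kx + 1j * ky    # k_x + i k_y

# d-vector of the p_x + i p_y state: d = (w, i w, 0), with Delta_0 = 0
d_sigma = madd(madd(scal(w, s1), scal(1j * w, s2)), scal(0, s3))
Delta = matmul2(d_sigma, i_s2)
# Only the up-up entry of Delta is nonzero, proportional to k_x + i k_y,
# in agreement with the matrix displayed above (here it equals -2w)
```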
This phase accumulates around a vortex, and then has non-trivial properties. Note that the notation $\left|\uparrow\uparrow\right\rangle $ refers to the spins of the electrons forming the Cooper pair. A singlet state would have something like $\left|\uparrow\downarrow\right\rangle -\left|\downarrow\uparrow\right\rangle $, and for $s$-wave $\Delta_0$ is $\mathbf{k}$-independent, whereas $\mathbf{d}=0$. Note that the $p$-wave also refers to the angular momentum $\ell=1$, as you mentioned in your question. Then, in complete analogy with the conventional composition of angular momenta (here it is for two electrons only), the magnetic moment can be $m=0,\;\pm1$. The natural spherical harmonics for these states are then $Y_{\ell,m}$, with $Y_{1,\pm1}\propto k_{x}\pm\mathbf{i}k_{y}$ and $Y_{1,0}\propto k_{z}$, so it should be rather natural to find the above-mentioned "choice" for $\mathbf{d}\left(\mathbf{k}\right)$. I nevertheless say a "choice" since this is not a real choice: the symmetry of the gap should be imposed by the material you consider, even if it is not yet satisfactorily understood. Note also that only the state $m=+1$ appears in the $p_{x}+\mathbf{i}p_{y}$ superconductivity. You might wonder about the other magnetic momentum contributions... Well, they are discarded, being less favourable (having a lower transition temperature, for instance) under specific conditions that you have to know / specify for a given material. Here you may argue about the Zeeman effect, for instance, which polarises the Cooper pair. [NB: I'm not sure about the validity of this last remark.] Relation between $p_{x}+\mathbf{i}p_{y}$ superconductivity and emergent unpaired Majorana modes Now, quickly, I'll try to answer your second question: why is this state important for emergent unpaired Majorana fermions in the vortex excitations?
To understand that, one has to remember that the emergent unpaired Majorana modes in superconductors are non-degenerate particle-hole-protected states at zero energy (in the middle of the gap if you prefer). Particle-hole symmetry comes along with superconductivity, so we already validate one point of our checklist. To make a non-degenerate mode, one has to fight against the Kramers degeneracy. That's the reason why we need a spin-triplet state. If you had a singlet-state Cooper pair stuck in the vortex, it would have been degenerate, and you would have been unable to separate the Majorana modes; see also Basic questions in Majorana fermions for more details about the difference between Majorana modes and unpaired Majorana modes in condensed matter. A more elaborate treatment of the topological aspects of $p$-wave superconductivity can be found in the book by Volovik, G. E. (2003), Universe in a Helium Droplet, Oxford University Press, available freely from the author's website http://ltl.tkk.fi/wiki/Grigori_Volovik. This post imported from StackExchange Physics at 2017-09-27 09:38 (UTC), posted by SE-user FraSchelle Note that Volovik mainly discusses superfluids, for which $p$-wave pairing has been observed in $^{3}$He. The $p_{x}+\mathbf{i}p_{y}$ superfluidity is also called the $A_{1}$-phase [Volovik, section 7.4.8]. There is no known $p$-wave superconductor to date. Note also that the two above-mentioned books (Mineev and Samokhin, Volovik) are not, strictly speaking, introductory materials for the topic of superconductivity. More basics can be found in the de Gennes, Tinkham, or Schrieffer books (they are all named blabla... superconductivity blabla...).
The monstrous moonshine picture is the subgraph of Conway’s big picture consisting of all lattices needed to describe the 171 moonshine groups. It consists of: – exactly 218 vertices (that is, lattices), out of which – 97 are number-lattices (that is of the form $M$ with $M$ a positive integer), and – 121 are proper number-like lattices (that is of the form $M \frac{g}{h}$ with $M$ a positive integer, $h$ a divisor of $24$ and $1 \leq g \leq h$ with $(g,h)=1$). The $97$ number lattices are closed under taking divisors, and the corresponding Hasse diagram has the following shape Here, number-lattices have the same colour if they have the same local structure in the moonshine picture (that is, have a similar neighbourhood of proper number-like lattices). There are 7 different types of local behaviour: The white numbered lattices have no proper number-like neighbours in the picture. The yellow number lattices (2,10,14,18,22,26,32,34,40,68,80,88,90,112,126,144,180,208 = 2M) have local structure \[ \xymatrix{M \ar@{-}[r] & \color{yellow}{2M} \ar@{-}[r] & M \frac{1}{2}} \] which involves all $2$-nd (square) roots of unity centered at the lattice. The green number lattices (3,15,21,39,57,93,96,120 = 3M) have local structure \[ \xymatrix{& M \ar@[red]@{-}[d] & \\ M \frac{1}{3} \ar@[red]@{-}[r] & \color{green}{3M} \ar@[red]@{-}[r] & M \frac{2}{3}} \] which involve all $3$-rd roots of unity centered at the lattice. The blue number lattices (4,16,20,28,36,44,52,56,72,104 = 4M) have as local structure \[ \xymatrix{M \frac{1}{2} \ar@{-}[d] & & M \frac{1}{4} \ar@{-}[d] \\ 2M \ar@{-}[r] & \color{blue}{4M} \ar@{-}[r] & 2M \frac{1}{2} \ar@{-}[d] \\ M \ar@{-}[u] & & M \frac{3}{4}} \] and involve the $2$-nd and $4$-th root of unity centered at the lattice. 
The purple number lattices (6,30,42,48,60 = 6M) have local structure \[ \xymatrix{& M \frac{1}{3} \ar@[red]@{-}[d] & 2M \frac{1}{3} & M \frac{1}{6} \ar@[red]@{-}[d] & \\ M \ar@[red]@{-}[r] & 3M \ar@{-}[r] \ar@[red]@{-}[d] & \color{purple}{6M} \ar@{-}[r] \ar@[red]@{-}[u] \ar@[red]@{-}[d] & 3M \frac{1}{2} \ar@[red]@{-}[r] \ar@[red]@{-}[d] & M \frac{5}{6} \\ & M \frac{2}{3} & 2M \frac{2}{3} & M \frac{1}{2} & } \] and involve all $2$-nd, $3$-rd and $6$-th roots of unity centered at the lattice. The unique brown number lattice 8 has local structure \[ \xymatrix{& & 1 \frac{1}{4} \ar@{-}[d] & & 1 \frac{1}{8} \ar@{-}[d] & \\ & 1 \frac{1}{2} \ar@{-}[d] & 2 \frac{1}{2} \ar@{-}[r] \ar@{-}[d] & 1 \frac{3}{4} & 2 \frac{1}{4} \ar@{-}[r] & 1 \frac{5}{8} \\ 1 \ar@{-}[r] & 2 \ar@{-}[r] & 4 \ar@{-}[r] & \color{brown}{8} \ar@{-}[r] & 4 \frac{1}{2} \ar@{-}[d] \ar@{-}[u] & \\ & & & 1 \frac{7}{8} \ar@{-}[r] & 2 \frac{3}{4} \ar@{-}[r] & 1 \frac{3}{8}} \] which involves all $2$-nd, $4$-th and $8$-th roots of unity centered at $8$. 
Finally, the local structure for the central red lattices $12,24 = 12M$ is \[ \xymatrix{ M \frac{1}{12} \ar@[red]@{-}[dr] & M \frac{5}{12} \ar@[red]@{-}[d] & M \frac{3}{4} \ar@[red]@{-}[dl] & & M \frac{1}{6} \ar@[red]@{-}[dr] & M \frac{1}{2} \ar@[red]@{-}[d] & M \frac{5}{6} \ar@[red]@{-}[dl] \\ & 3M \frac{1}{4} \ar@{-}[dr] & 2M \frac{1}{6} \ar@[red]@{-}[d] & 4M \frac{1}{3} \ar@[red]@{-}[d] & 2M \frac{1}{3} \ar@[red]@{-}[d] & 3M \frac{1}{2} \ar@{-}[dl] & \\ & 2M \frac{1}{2} \ar@[red]@{-}[r] & 6M \frac{1}{2} \ar@{-}[dl] \ar@[red]@{-}[d] \ar@{-}[r] & \color{red}{12M} \ar@[red]@{-}[d] \ar@{-}[r] & 6M \ar@[red]@{-}[d] \ar@{-}[dr] \ar@[red]@{-}[r] & 2M & \\ & 3M \frac{3}{4} \ar@[red]@{-}[dl] \ar@[red]@{-}[d] \ar@[red]@{-}[dr] & 2M \frac{5}{6} & 4M \frac{2}{3} & 2M \frac{2}{3} & 3M \ar@[red]@{-}[dl] \ar@[red]@{-}[d] \ar@[red]@{-}[dr] & \\ M \frac{1}{4} & M \frac{7}{12} & M \frac{11}{12} & & M \frac{1}{3} & M \frac{2}{3} & M} \] It involves all $2$-nd, $3$-rd, $4$-th, $6$-th and $12$-th roots of unity with center $12M$. No doubt this will be relevant in connecting moonshine with non-commutative geometry and issues of replicability as in Plazas’ paper Noncommutative Geometry of Groups like $\Gamma_0(N)$. Another of my pet follow-up projects is to determine whether or not the monster group $\mathbb{M}$ dictates the shape of the moonshine picture. That is, can one recover the 97 number lattices and their partition in 7 families starting from the set of element orders of $\mathbb{M}$, applying some set of simple rules? One of these rules will follow from the two equivalent notations for lattices, and the two different sets of roots of unities centered at a given lattice. This will imply that if a number lattice belongs to a given family, certain divisors and multiples of it must belong to related families. If this works out, it may be a first step towards a possibly new understanding of moonshine.
A neutralization reaction is when an acid and a base react to form water and a salt, and involves the combination of \(H^+\) ions and \(OH^-\) ions to generate water. The neutralization of a strong acid and strong base has a pH equal to 7. The neutralization of a strong acid and weak base will have a pH of less than 7, and conversely, the resulting pH when a strong base neutralizes a weak acid will be greater than 7. When a solution is neutralized, it means that salts are formed from equal weights of acid and base. The amount of acid needed is the amount that would give one mole of protons (\(H^+\)) and the amount of base needed is the amount that would give one mole of hydroxide ions (\(OH^-\)). Because salts are formed from neutralization reactions with equivalent weights of acids and bases, N parts of acid will always neutralize N parts of base. Strong acids: HCl, HBr, HI, \(HClO_4\), \(HNO_3\). Strong bases: LiOH, NaOH, KOH, RbOH, CsOH, \(Ca(OH)_2\), \(Sr(OH)_2\), \(Ba(OH)_2\). Strong Acid-Strong Base Neutralization Consider the reaction between \(\ce{HCl}\) and \(\ce{NaOH}\) in water: \[\underset{acid}{HCl(aq)} + \underset{base}{NaOH_{(aq)}} \leftrightharpoons \underset{salt}{NaCl_{(aq)}} + \underset{water}{H_2O_{(l)}}\] This can be written in terms of the ions (and canceled accordingly): \[\ce{H^{+}(aq)} + \cancel{\ce{Cl^{-}(aq)}} + \cancel{\ce{Na^{+}(aq)}} + \ce{OH^{-}(aq)} → \cancel{\ce{Na^{+}(aq)}} + \cancel{\ce{Cl^{-}(aq)}} + \ce{H_2O(l)}\] When the spectator ions are removed, the net ionic equation shows the \(H^+\) and \(OH^-\) ions forming water in a strong acid, strong base reaction: \(H^+_{(aq)} + OH^-_{(aq)} \leftrightharpoons H_2O_{(l)} \) When a strong acid and a strong base fully neutralize, the pH is neutral. Neutral pH means that the pH is equal to 7.00 at 25 ºC. At this point of neutralization, there are equal amounts of \(OH^-\) and \(H_3O^+\). There is no excess \(NaOH\). The solution is \(NaCl\) at the equivalence point.
When a strong acid completely neutralizes a strong base, the pH of the salt solution will always be 7. Weak Acid-Weak Base Neutralization A weak acid, weak base reaction can be shown by the net ionic equation example: \(H^+ _{(aq)} + NH_{3(aq)} \leftrightharpoons NH^+_{4 (aq)} \) The equivalence point of a neutralization reaction is when both the acid and the base in the reaction have been completely consumed and neither of them is in excess. When a strong acid neutralizes a weak base, the resulting solution's pH will be less than 7. When a strong base neutralizes a weak acid, the resulting solution's pH will be greater than 7. Summary by strength of acid and base: Strong Acid-Strong Base, pH = 7; Strong Acid-Weak Base, pH < 7; Weak Acid-Strong Base, pH > 7; Weak Acid-Weak Base, pH < 7 if \(K_a > K_b\), pH = 7 if \(K_a = K_b\), and pH > 7 if \(K_a < K_b\). Titration One of the most common and widely used ways to complete a neutralization reaction is through titration. In a titration, an acid or a base is in a flask or a beaker. We will show two examples of a titration. The first will be the titration of an acid by a base. The second will be the titration of a base by an acid. Example \(\PageIndex{1}\): Titrating a Weak Acid Suppose 13.00 mL of a weak acid, with a molarity of 0.1 M, is titrated with 0.1 M NaOH. How would we draw this titration curve? SOLUTION Step 1: First, we need to find out where our titration curve begins. To do this, we find the initial pH of the weak acid in the beaker before any NaOH is added. This is the point where our titration curve will start. To find the initial pH, we first need the concentration of \(H_3O^+\).
Set up an ICE table for the reaction \(HX + H_2O \leftrightharpoons H_3O^+ + X^-\) to find the concentration of \(H_3O^+\): Initial, 0.1 M HX; Change, −x M HX, +x M \(H_3O^+\), +x M \(X^-\); Equilibrium, (0.1 − x) M HX, x M \(H_3O^+\), x M \(X^-\). With \[K_a=7 \times 10^{-3}=\dfrac{x^2}{0.1-x}\] \[x=[H_3O^+]=0.023\;M\] Solve for pH: \[pH=-\log_{10}[H_3O^+]=-\log_{10}(0.023)=1.64\] Step 2: To accurately draw our titration curve, we need to calculate a data point between the starting point and the equivalence point. To do this, we solve for the pH when neutralization is 50% complete. Solve for the moles of \(OH^-\) added to the beaker. We can do this by first finding the volume of \(OH^-\) added to the acid at half-neutralization: 50% of 13.00 mL = 6.5 mL. Use the volume and molarity to solve for moles: (6.5 mL)(0.1 M) = 0.65 mmol \(OH^-\). Now, solve for the moles of acid to be neutralized: (13.00 mL)(0.1 M) = 1.3 mmol HX. The added 0.65 mmol of base converts 0.65 mmol of HX into \(X^-\), leaving 0.65 mmol HX and 0.65 mmol \(X^-\) at equilibrium. To calculate the pH at 50% neutralization, use the Henderson-Hasselbalch approximation: pH = pK\(_a\) + log[mmol base/mmol acid] = pK\(_a\) + log[0.65 mmol/0.65 mmol] = pK\(_a\) + log(1), so \[pH=pK_a\] Therefore, when the weak acid is 50% neutralized, pH = pK\(_a\). Step 3: Solve for the pH at the equivalence point. Since equal volumes of 0.1 M acid and 0.1 M base have now been mixed, the concentration of the conjugate base is half of the acid's original concentration: 0.1 M/2 = 0.05 M \(X^-\). Set up an ICE table for the hydrolysis reaction \(X^- + H_2O \leftrightharpoons HX + OH^-\): Initial, 0.05 M \(X^-\); Change, −x M \(X^-\), +x M HX, +x M \(OH^-\); Equilibrium, (0.05 − x) M \(X^-\), x M HX, x M \(OH^-\). Then \[K_b=\dfrac{x^2}{0.05-x}\] Since \(K_w=K_aK_b\), we can substitute \(K_w/K_a\) in place of \(K_b\) to get \[\dfrac{K_w}{K_a}=\dfrac{x^2}{0.05}\] \[x=[OH^-]=2.67 \times 10^{-7}\] \[pOH=-\log_{10}(2.67 \times 10^{-7})=6.57\] \[pH=14-6.57=7.43\] Step 4: Solve for the pH after a bit more NaOH is added past the equivalence point. This will give us an accurate idea of where the pH levels off at the endpoint.
The equivalence point is reached when 13 mL of NaOH has been added to the weak acid. Let's find the pH after 14 mL is added. Solve for the moles of \(OH^-\): \[ (14\; mL)(0.1\;M)=1.4\; mmol\; OH^-\] Solve for the moles of acid: \[(13\; mL)(0.1\;M)= 1.3\;mmol \;HX\] The added base neutralizes all of the acid, leaving an excess of 1.4 mmol − 1.3 mmol = 0.1 mmol \(OH^-\) in a total volume of 13 mL + 14 mL = 27 mL. \[[OH^-]=\frac{0.1\;mmol}{27\;mL}=3.7 \times 10^{-3}\;M\] \[pOH=-\log_{10}(3.7 \times 10^{-3})=2.4\] \[pH=14-2.4=11.6\] We have now gathered sufficient information to construct our titration curve. Example \(\PageIndex{2}\) In this case, we will say that a base solution is in an Erlenmeyer flask. To neutralize this base solution, you would add an acid solution from a buret into the flask. At the beginning of the titration, before adding any acid, it is necessary to add an indicator, so that there will be a color change to signal when the equivalence point has been reached. We can use the equivalence point to find molarity and vice versa. For example, if we know that it takes 10.5 mL of an unknown solution to neutralize 15 mL of 0.0835 M NaOH solution, we can find the molarity of the unknown solution using the following formula: \[M_1V_1 = M_2V_2\] where \(M_1\) is the molarity of the first solution, \(V_1\) is the volume in liters of the first solution, \(M_2\) is the molarity of the second solution, and \(V_2\) is the volume in liters of the second solution. When we plug in the values given in the problem, we get: \[(0.0835)(0.015) = M_2(0.0105)\] After solving for \(M_2\), we see that the molarity of the unknown solution is 0.119 M. From this problem, we see that in order to neutralize 15 mL of 0.0835 M NaOH solution, 10.5 mL of the 0.119 M unknown solution is needed. Problems 1. Will the salt formed from the following reaction have a pH greater than, less than, or equal to seven?
\(CH_3COOH_{(aq)} + NaOH_{(s)} \leftrightharpoons Na^+ + CH_3COO^- + H_2O_{(l)}\) 2. How many mL of 0.0955 M \(Ba(OH)_2\) solution are required to titrate 45.00 mL of 0.0452 M \(HNO_3\)? 3. Will the pH of the salt solution formed by the following chemical reaction be greater than, less than, or equal to seven? \(2NaOH + H_2SO_4 \leftrightharpoons 2H_2O + Na_2SO_4\) 4. We know that it takes 31.00 mL of an unknown solution to neutralize 25.00 mL of 0.135 M KOH solution. What is the molarity of the unknown solution? Solutions 1. After looking at the net ionic equation, \[CH_3CO_2H_{(aq)} + OH^- \leftrightharpoons CH_3COO^- + H_2O_{(l)}\] we see that a weak acid, \(CH_3CO_2H\), is being neutralized by a strong base, \(OH^-\). By looking at the chart above, we can see that when a strong base neutralizes a weak acid, the pH level is going to be greater than 7. 2. Each mole of \(Ba(OH)_2\) supplies two moles of \(OH^-\), so we balance equivalents rather than moles: \[M_{acid}V_{acid} = 2M_{base}V_{base}\] \[V_{base}= \dfrac{M_{acid}V_{acid}}{2M_{base}} = \dfrac{(0.0452)(45.00)}{2(0.0955)} = 10.6\; mL\] Therefore it takes 10.6 mL of \(Ba(OH)_2\) to titrate 45.00 mL of \(HNO_3\). 3. We know that NaOH is a strong base and \(H_2SO_4\) is a strong acid. Therefore, we know the pH of the salt will be equal to 7. 4. By plugging the numbers given in the problem into the equation \[M_1V_1 = M_2V_2\] we can solve for \(M_2\): \[(0.135)(0.025) = M_2(0.031)\] \[M_2 = 0.109\; M\] Therefore, the molarity of the unknown solution is 0.109 M. References Petrucci, et al. General Chemistry: Principles & Modern Applications. 9th ed. Upper Saddle River, New Jersey: Pearson/Prentice Hall, 2007. Criddle, Craig and Larry Gonick. The Cartoon Guide to Chemistry. New York: HarperCollins Publishers, 2005. Contributors Katherine Dunn (UCD), Carlynn Chappell (UCD)
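The arithmetic in the weak-acid titration example above (Example 1) can be checked with a short script. The script reproduces the three key points of the curve: the initial pH, the half-neutralization pH (which equals pK\(_a\)), and the equivalence-point pH:

```python
import math

Ka = 7e-3      # acid dissociation constant from the example
Kw = 1e-14     # ion product of water at 25 degC
C0 = 0.1       # M, concentration of both acid and titrant

# Initial pH: solve x^2/(C0 - x) = Ka exactly via the quadratic formula
x = (-Ka + math.sqrt(Ka**2 + 4 * Ka * C0)) / 2
pH_initial = -math.log10(x)        # close to the quoted 1.64

# 50% neutralization: Henderson-Hasselbalch gives pH = pKa
pH_half = -math.log10(Ka)

# Equivalence point: conjugate base diluted to C0/2, hydrolysis with Kb = Kw/Ka
Kb = Kw / Ka
OH = math.sqrt(Kb * C0 / 2)
pH_eq = 14 + math.log10(OH)        # close to the quoted 7.43
```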
I'm using the following commands in my preamble to get the fonts I want: \usepackage{cmbright}\usepackage{amsmath}\usepackage{amssymb}\usepackage{pxfonts} I recently found that, in math mode, when I use the command \log or \exp (as opposed to \text{log} or \text{exp}), the logarithmic and exponential functions get resolved in the math font I want to use (pxfonts). However, I often use other functions such as "logit" and "expit", for example: \begin{equation}\text{logit} \Bigg \{ P(Y = 1| X = x) \Bigg \} = \beta_0 + s(x), \end{equation} In this case, "logit" resolves in the font used in the main text (cmbright), which stands out as somewhat of an eyesore. Is there a way I can create functions \logit and \expit that, in math mode, will return "logit" and "expit" using the desired math font (pxfonts) and not the main text font (cmbright)?
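One standard way to get this behaviour (a suggested sketch using the usual amsmath idiom, not something stated in the question) is \DeclareMathOperator, which typesets the operator name with the math fonts, just like \log and \exp:

```latex
\documentclass{article}
\usepackage{cmbright}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{pxfonts}

% Define logit/expit as math operators, typeset like \log and \exp:
\DeclareMathOperator{\logit}{logit}
\DeclareMathOperator{\expit}{expit}

\begin{document}
\begin{equation}
  \logit \Bigg\{ P(Y = 1 \mid X = x) \Bigg\} = \beta_0 + s(x)
\end{equation}
\end{document}
```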
The Fibonacci sequence reappears a bit later in Dan Brown’s book ‘The Da Vinci Code’ where it is used to login to the bank account of Jacques Sauniere at the fictitious Parisian branch of the Depository Bank of Zurich. Last time we saw that the Hankel matrix of the Fibonacci series $F=(1,1,2,3,5,\dots)$ is invertible over $\mathbb{Z}$ \[ H(F) = \begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix} \in SL_2(\mathbb{Z}) \] and we can use the rule for the co-multiplication $\Delta$ on $\Re(\mathbb{Q})$, the algebra of rational linear recursive sequences, to determine $\Delta(F)$. For a general integral linear recursive sequence the corresponding Hankel matrix is invertible over $\mathbb{Q}$, but rarely over $\mathbb{Z}$. So we need another approach to compute the co-multiplication on $\Re(\mathbb{Z})$. Any integral sequence $a = (a_0,a_1,a_2,\dots)$ can be seen as defining a $\mathbb{Z}$-linear map $\lambda_a$ from the integral polynomial ring $\mathbb{Z}[x]$ to $\mathbb{Z}$ itself via the rule $\lambda_a(x^n) = a_n$. If $a \in \Re(\mathbb{Z})$, then there is a monic polynomial with integral coefficients of a certain degree $n$ \[ f(x) = x^n + b_1 x^{n-1} + b_2 x^{n-2} + \dots + b_{n-1} x + b_n \] such that for every integer $m$ we have that \[ a_{m+n} + b_1 a_{m+n-1} + b_2 a_{m+n-2} + \dots + b_{n-1} a_{m+1} + b_n a_m = 0 \] Alternatively, we can look at $a$ as defining a $\mathbb{Z}$-linear map $\lambda_a$ from the quotient ring $\mathbb{Z}[x]/(f(x))$ to $\mathbb{Z}$. The multiplicative structure on $\mathbb{Z}[x]/(f(x))$ dualizes to a co-multiplication $\Delta_f$ on the set of all such linear maps $(\mathbb{Z}[x]/(f(x)))^{\ast}$ and we can compute $\Delta_f(a)$.
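As a sanity check, the recurrence and the Hankel determinant can be verified numerically (a small illustration of my own, not code from the post):

```python
# Check that the Fibonacci series F = (1, 1, 2, 3, 5, ...) is linear
# recursive for f(x) = x^2 - x - 1, i.e. a_{m+2} - a_{m+1} - a_m = 0
# (coefficients b_1 = b_2 = -1), and that H(F) lies in SL_2(Z).
def fibonacci(n):
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq

F = fibonacci(20)

# Recurrence check for all available terms:
assert all(F[m + 2] - F[m + 1] - F[m] == 0 for m in range(18))

# Hankel matrix H(F) = [[a_0, a_1], [a_1, a_2]] and its determinant:
H = [[F[0], F[1]], [F[1], F[2]]]
det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
assert det == 1   # invertible over Z, hence in SL_2(Z)
```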
We see that the set of all integral linear recursive sequences can be identified with the direct limit \[ \Re(\mathbb{Z}) = \underset{\underset{f|g}{\rightarrow}}{lim}~(\frac{\mathbb{Z}[x]}{(f(x))})^{\ast} \] (where the directed system is ordered via division of monic integral polynomials) and so is equipped with a co-multiplication $\Delta = \underset{\rightarrow}{lim}~\Delta_f$. Btw. the ring structure on $\Re(\mathbb{Z}) \subset (\mathbb{Z}[x])^{\ast}$ comes from restricting to $\Re(\mathbb{Z})$ the dual structures of the co-ring structure on $\mathbb{Z}[x]$ given by \[ \Delta(x) = x \otimes x \quad \text{and} \quad \epsilon(x) = 1 \] From this description it is clear that you need to know a hell of a lot of number theory to describe this co-multiplication explicitly. As most of us prefer to work with rings rather than co-rings it is a good idea to begin to study this co-multiplication $\Delta$ by looking at the dual ring structure of \[ \Re(\mathbb{Z})^{\ast} = \underset{\underset{ f | g}{\leftarrow}}{lim}~\frac{\mathbb{Z}[x]}{(f(x))} \] This is the completion of $\mathbb{Z}[x]$ at the multiplicative set of all monic integral polynomials. This is a horrible ring and very little is known about it. Some general remarks were proved by Kazuo Habiro in his paper Cyclotomic completions of polynomial rings. In fact, Habiro got interested in a certain subring of $\Re(\mathbb{Z})^{\ast}$ which we now know as the Habiro ring and which seems to be a red herring in all stuff about the field with one element, $\mathbb{F}_1$ (more on this another time). Habiro’s ring is \[ \widehat{\mathbb{Z}[q]} = \underset{\underset{n|m}{\leftarrow}}{lim}~\frac{\mathbb{Z}[q]}{(q^n-1)} \] and its elements are all formal power series of the form \[ a_0 + a_1 (q-1) + a_2 (q^2-1)(q-1) + \dots + a_n (q^n-1)(q^{n-1}-1) \dots (q-1) + \dots \] with all coefficients $a_n \in \mathbb{Z}$. Here’s a funny property of such series.
If you evaluate them at $q \in \mathbb{C}$ these series are likely to diverge almost everywhere, but they do converge in all roots of unity! Some people say that these functions are ‘leaking out of the roots of unity’. If the ring $\Re(\mathbb{Z})^{\ast}$ is controlled by the absolute Galois group $Gal(\overline{\mathbb{Q}}/\mathbb{Q})$, then Habiro’s ring is controlled by the abelianization $Gal(\overline{\mathbb{Q}}/\mathbb{Q})^{ab} \simeq \hat{\mathbb{Z}}^{\ast}$.
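To see this ‘leaking’ concretely, here is a small numerical illustration (my own, not from the post): at a primitive $k$-th root of unity $q$, every term with $n \geq k$ contains the factor $(q^k-1)=0$, so the series terminates.

```python
import cmath

# Partial sums of a_0 + a_1 (q-1) + a_2 (q^2-1)(q-1) + ... at a point q.
def habiro_partial_sums(coeffs, q):
    total, prod, sums = 0, 1, []
    for n, a in enumerate(coeffs):
        # prod = (q^n - 1)(q^{n-1} - 1) ... (q - 1), empty product for n = 0
        total += a * prod
        sums.append(total)
        prod *= (q ** (n + 1) - 1)
    return sums

q = cmath.exp(2j * cmath.pi / 5)          # primitive 5th root of unity
sums = habiro_partial_sums([1] * 12, q)   # take a_n = 1 for all n
# Terms with n >= 5 vanish, so the partial sums stabilize from n = 5 on:
assert all(abs(s - sums[4]) < 1e-9 for s in sums[5:])
```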
I am following along Chapter 2 of Takagi's Vacuum Noise and Stress Induced by Uniform Acceleration. For a free real scalar field $\phi$ the stress-energy tensor is:$$T_{\mu\nu} = ( \partial_{\mu} \phi ) ( \partial_{\nu} \phi ) - g_{\mu\nu} \tfrac{1}{2} g^{\alpha\beta} ( \partial_{\alpha} \phi ) ( \partial_{\beta} \phi ) - \tfrac{1}{2} g_{\mu\nu} m^2 \phi^2 $$For $K$ a timelike Killing vector of the spacetime, define:$$H_{K} = - \int_{\Sigma} d^3\Sigma_{\nu}\ K^{\mu} T_{\mu}^{\ \nu}$$where $\Sigma$ is a spacelike hypersurface and $d^3\Sigma_{\nu}$ the 3-volume 1-form over this surface. Then $H_K$ is a conserved charge and is independent of the choice of $\Sigma$ used to integrate it. Takagi says that $K^{\mu} T_{\mu}^{\ \nu}$ is a conserved vector. So I have two questions: 1. Does $K^{\mu} T_{\mu}^{\ \nu}$ being a 'conserved vector' mean that it obeys $\partial_{\nu} K^{\mu} T_{\mu}^{\ \nu}= 0$? If this is true, how do I see this? 2. What does it mean that $H_K$ is a conserved charge? Does it mean $\mathcal{L}_{K} H_{K} = 0$ (where $\mathcal{L}_{K}$ is the Lie derivative)? Normally you'd have $K = \frac{\partial}{\partial x^0}$ for ordinary Minkowski time and so I'd understand $H_{\partial_0}$ being conserved as the statement $\frac{\partial}{\partial x^0} H_{\partial_0} = 0$ EDIT: I've also read the following statement in DeWitt's A Global Approach to Quantum Field Theory: In a general stationary background $H_{K}$ is the only conserved charge that there is for this system. Why is this true? I know that in a general stationary spacetime there exists one global timelike Killing vector, but independent of this isn't it still true that $T_{\mu\nu}$ is a conserved current? To me it seems that there should still be four corresponding conserved charges, independent of whether the spacetime is stationary or not.
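For question 1, the standard computation (my own reconstruction, not Takagi's text; ‘conserved’ is properly the covariant statement $\nabla_\nu (K^\mu T_\mu{}^{\ \nu}) = 0$, which reduces to the partial-derivative form for the densitized current) combines the symmetry of $T_{\mu\nu}$, the Killing equation, and conservation of the stress tensor:

```latex
\nabla_\nu \left( K^\mu T_\mu{}^{\ \nu} \right)
  = \left( \nabla_\nu K_\mu \right) T^{\mu\nu}
    + K_\mu \, \nabla_\nu T^{\mu\nu}
  = \tfrac{1}{2} \left( \nabla_\nu K_\mu + \nabla_\mu K_\nu \right) T^{\mu\nu}
    + 0
  = 0
```

where the first term is symmetrized using $T^{\mu\nu} = T^{\nu\mu}$ and then vanishes by the Killing equation $\nabla_{(\mu} K_{\nu)} = 0$.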
The wavefunctions that describe electrons in atoms and molecules are called orbitals. An orbital is a wavefunction for a single electron. When we say an electron is in orbital n, we mean that it is described by a particular wavefunction Ψn and has energy En. All the properties of this electron can be calculated from Ψn as described in Chapter 3. We will now use the particle-in-a-box model to explain the absorption spectra of the cyanine dyes. When an atom or molecule absorbs a photon, the atom or molecule goes from one energy level, designated by quantum number \(n_i\), to a higher energy level, designated by \(n_f\). We can also say that the molecule goes from one electronic state to another. This change is called a transition. Sometimes it is said that an electron goes from one orbital to another in a transition, but this statement is not general. It is valid for a particle-in-a-box, but not for real atoms and molecules, which are more complicated than the simple particle-in-a-box model. The energy of the photon absorbed (Ephoton = hν) matches the difference in the energy between the two states involved in the transition (ΔEstates). In general, the observed frequency or wavelength for a transition is calculated from the change in energy using the following equalities, \[\Delta E_{states} = E_f - E_i = E_{photon} = h \nu = hc \bar {\nu} \tag {4-18}\] Then, for the specific case of the particle-in-a-box, \[ E_{photon} = \Delta E_{states} = E_f - E_i = \frac {(n_f^2 - n_i^2) h^2}{8mL^2} \tag {4-19}\] where nf is the quantum number associated with the final state and ni is the quantum number for the initial state. A negative value for Ephoton means the photon is emitted as a result of the transition in states; a positive value means the photon is absorbed. Generally the transition energy, \(E_{photon}\) or \(ΔE_{states}\), is taken to correspond to the peak in the absorption spectrum.
When high accuracy is needed for the electronic transition energy, the spectral line shape must be analyzed to account for rotational and vibrational motion as well as effects due to the solvent or environment. Contributions of rotational and vibrational motion to an absorption spectrum will be discussed in later chapters. In a cyanine dye molecule that has 3 carbon atoms in the chain, there are six π-electrons. When light is absorbed, one of these electrons increases its energy by an amount hν and jumps to a higher energy level. In order to use Equation (4-18), we need to know which energy levels are involved. We assign the electrons to the lowest energy levels to create the ground-state lowest-energy electron configuration. We could put all six electrons in the n = 1 level, or we could put one electron in each of n = 1 through n = 6, or we could pair the electrons in n = 1 through n = 3, etc. The Pauli Exclusion Principle says that each spatial wavefunction can describe, at most, two electrons, or in other words, that each energy level can have only two electrons assigned to it. Spatial refers to our 3-dimensional space, and a spatial wavefunction depends upon the spatial coordinates x, y, or z. We will discuss the Pauli Exclusion Principle more fully later, but you probably have encountered it in other courses. Rather than appeal to the Pauli Exclusion Principle to assign the electrons to the energy levels, let’s try an empirical approach and discover the Pauli Exclusion Principle as a result. Assign the electrons to the energy levels in different ways until you find an assignment that agrees with experiment. When there is an even number of electrons, the lowest-energy transition is the energy difference between the highest occupied level (HOMO) and the lowest unoccupied level (LUMO). HOMO designates the highest-energy occupied molecular orbital, and LUMO designates the lowest-energy unoccupied molecular orbital. 
The term orbital refers to the wavefunction or energy level for one electron. All other transitions have a higher energy. For the case with all the electrons in the first energy level, the lowest-energy transition energy would be \(hν = E_2 – E_1\). With one electron in each of the first six levels, \(hν = E_7 – E_6\), and with the electrons paired, \(hν = E_4 – E_3\). Example 4.17 Draw energy level diagrams indicating the HOMO, the LUMO, the electrons and the lowest energy transition for each of the three cases mentioned in the preceding paragraph. Example 4.18 For the three ways of assigning the 6 electrons to the energy levels in Example 4.17, calculate the peak absorption wavelength λ for a cyanine dye molecule with 3 carbon atoms in the chain using a value for L of 0.849 nm, which is obtained by estimating bond lengths. Which wavelength agrees most closely with the experimental value of 309 nm for this molecule? It turns out that the assignment that gives a reasonable wavelength for the absorption of a cyanine dye with 6 pi electrons is \(hν = E_4 – E_3\) as you concluded from Example 4.18. In this way we have “discovered” the Pauli Exclusion Principle: electrons should be paired in the same energy level whenever possible. We accept it for now because it agrees with the experimental observations of the cyanine dye spectra. In molecules with an odd number of electrons, it is possible to have transitions between the doubly occupied molecular orbitals and the singly occupied molecular orbital as well as from the singly occupied orbital to an unoccupied orbital. Contributors Adapted from "Quantum States of Atoms and Molecules" by David M. Hanson, Erica Harvey, Robert Sweeney, Theresa Julia Zielinski
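The comparison in Example 4.18 above can be carried out numerically (a sketch of my own, not the text's code; the constants are standard values):

```python
# lambda = h*c / dE with particle-in-a-box levels E_n = n^2 h^2 / (8 m L^2).
h = 6.62607015e-34    # Planck constant, J s
c = 2.99792458e8      # speed of light, m/s
m = 9.1093837015e-31  # electron mass, kg
L = 0.849e-9          # box length from the text, m

def wavelength_nm(n_i, n_f):
    dE = (n_f ** 2 - n_i ** 2) * h ** 2 / (8 * m * L ** 2)
    return h * c / dE * 1e9

# The three candidate assignments of the six electrons:
candidates = {(1, 2): wavelength_nm(1, 2),   # all six electrons in n = 1
              (6, 7): wavelength_nm(6, 7),   # one electron per level
              (3, 4): wavelength_nm(3, 4)}   # electrons paired two per level

# The paired assignment (HOMO n = 3, LUMO n = 4) lands closest to 309 nm.
best = min(candidates, key=lambda k: abs(candidates[k] - 309.0))
assert best == (3, 4)
```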
(Originally published on October 19th, 2009) Before using Squarespace, I built my own front page. As I considered the best way to display series of pictures there, I came up with an interesting way to compress a lot of information onto fairly limited screen real estate. The idea was to have a kind of a slide show composed of small icons that turn larger as you hover over them; clicking on any icon would bring up the full-size image. That way I could fit a lot of small (32×32 pixels) icons of images on the screen, yet offer the users the ability to browse larger versions (67×67 pixels) easily just by moving the mouse around. The idea, of course, was inspired by what OS X does with the Dock (an effect which, sadly, I have disabled on my computer, but due to different use scenarios). Here is the effect in action (roll your mouse over the images): The design process I went through is an interesting example of discovery (or serendipity, rather) and how taking an analytical approach doesn’t always yield the best results. The desired effect will be very familiar to you if you’ve used OS X and the Dock. I want to display a series of small thumbnails of images in a row. If you hover over them, the image that your mouse is closest to gets larger, pushing out the other images if necessary. I wanted the effect to be smooth (so as you move your mouse over the row, images get bigger as they approach the mouse pointer, and then get smaller) and resemble something like this: An analytical solution was easy to get to, but very quickly spiraled out of control, and here is how. Let’s consider two configurations: When the mouse cursor is exactly in the center of an icon, by symmetry that icon should have the maximum magnification: When the mouse cursor is exactly in between two icons, also by symmetry both icons should be of equal size: Depending on β, the magnification will drop off quickly (if β is close to α) or slowly (if it’s close to 1).
Since we want the magnification of the icon to be a smooth curve (as the mouse pointer moves across the icons), we simply need to define a continuous function given the three points it goes through: (0, 1) (because at x=0 — i.e. when the mouse cursor is exactly over the icon’s center — we want the magnification to be maximum), (α/2, β) (because when we’re in between two icons — i.e. a distance α/2 away from the center of one — we want the magnification to be β) and (Z, α) (the distance at which all magnification ceases). An exponential curve is the simplest one that we can try: We will then be able to use this curve to determine how much to magnify each icon by. The icons will be sized such that their size given the distance between their center and the mouse pointer can be read off of that magnification curve: First let’s figure out the full form of the magnification curve. The curve must go through the two endpoints we identified, and be exponentially decaying, so it is of the form \[y = 1 - \left(\frac{x}{Z}\right)^P\cdot(1-α)\] (We can verify that at x=0, y=1 and at x=Z, y=α.) We need to compute P based on the third point: \[β = 1 - \left(\frac{α}{2Z}\right)^P\cdot(1-α) \Rightarrow P = \log_{α/2Z}\left(\frac{1-β}{1-α}\right)\] The first icon is simple: determine the distance between the mouse pointer and the center of the icon and use the curve above to read off the magnification (it will be something between β and 1). The subsequent icons are a little more tricky, because in order to figure out the magnification you have to know how far its center is from the mouse pointer, but the position of the center is a function of magnification! At this point the easiest thing to do is to solve this numerically, by simply iterating over all possible positions of the center and determining the closest one (since we’re operating in a discrete space with the smallest effective resolution of 1 pixel).
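A quick numerical check of the curve (my own sketch; the values of α, β and Z are made up purely for illustration):

```python
import math

alpha = 0.5    # magnification floor (assumed value)
beta = 0.8     # magnification halfway between two icons (assumed value)
Z = 200.0      # distance at which all magnification ceases, px (assumed)

# Solve for P so the curve passes through (alpha/2, beta):
P = math.log((1 - beta) / (1 - alpha), alpha / (2 * Z))

def magnification(x):
    # y = 1 - (x/Z)^P * (1 - alpha)
    return 1 - (x / Z) ** P * (1 - alpha)

# The curve hits all three defining points:
assert abs(magnification(0) - 1) < 1e-9
assert abs(magnification(alpha / 2) - beta) < 1e-9
assert abs(magnification(Z) - alpha) < 1e-9
```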
While each step seems fairly straightforward, the end result is a pretty big hairball. Being lazy, I realized that there must be a better solution to this problem. And then I realized that so long as the illusion of smoothness is preserved, some simplifying assumptions can be made. First of all, the exponential curve I used initially was too complicated and looked too discontinuous at large magnifications (because of a sharp spike near 0); there must have been something else that’s straightforward to compute. The parameters seemed complicated, too — α and Z could be replaced with just one — a measure of how quickly the magnification should decay — without much loss of the effect. The Normal curve came to mind — with just one parameter (σ) it was much easier to experimentally determine a value that had a pleasing effect (plus, σ is by definition very close to our notion of “how quickly this should decay”). I also got rid of the self-referential problem (determining magnification requires knowing origin, but origin influences magnification) by looking at not the actual distance (how far is the icon from the mouse pointer after all icons have been magnified), but original distance (how far is the icon from the pointer before magnification). The resulting algorithm is much more elegant — and produces a more visually pleasing effect: For each icon in the original (i.e. before any magnification takes place) series, determine how far its center is from the mouse pointer (I experimented with just using the x-coordinate, but the nice thing about this algorithm is that any smooth function works, and the actual distance produced a nicer effect than just the horizontal distance) Use the Normal curve to determine its magnification. We want the result to be 1 if the distance is 0 (i.e. the icon is directly under the mouse pointer) and α if the distance is infinite (since the Normal curve dies off quickly, the size would go down to α pretty quickly as well), i.e.
\[N = e^{-\frac{d^2}{2σ^2}}\] \[M = N+α(1-N)\] Place each icon with its magnified size on screen; keep track of how much space each icon took so that subsequent icons can be displayed after it and not on top of it. Technically this is enough for magnification. However, this doesn’t produce a smooth effect: since the icons are always pushed out to the right, the “tail” of icons keeps traveling back and forth. We want the entire series to move smoothly, slowly to the left as the mouse moves to the right (go here and watch the icons at the end of the series travel to the left as you move your mouse pointer left to right, across the icons). This is simple to correct, though: keep track of how much space all the icons take (by adding up each size as you go), and then offset all the icons by a fraction of that total space, depending on where the mouse pointer is: suppose the icons originally take d pixels, expanded they all take D pixels, and the mouse pointer is at position x (between 0, at the beginning of the series, and d); then we want to offset all icons by \[x\cdot\frac{D}{d}\]
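Put together, the core of the algorithm can be sketched as follows (my own code, not the post's; σ is an assumed value, while the 32 px and 67 px sizes are taken from earlier in the post):

```python
import math

SMALL, LARGE = 32.0, 67.0   # base and fully magnified icon sizes, px
ALPHA = SMALL / LARGE       # size ratio reached at infinite distance
SIGMA = 40.0                # Gaussian decay rate, px (assumed)

def magnification(d):
    # N = exp(-d^2 / (2 sigma^2));  M = N + alpha * (1 - N)
    n = math.exp(-d * d / (2 * SIGMA * SIGMA))
    return n + ALPHA * (1 - n)

def layout(num_icons, mouse_x):
    # Distances are measured in the *original* (unmagnified) layout,
    # which sidesteps the self-referential position/magnification problem.
    placed, cursor = [], 0.0
    for i in range(num_icons):
        center = (i + 0.5) * SMALL
        size = LARGE * magnification(abs(center - mouse_x))
        placed.append((cursor, size))   # (left edge, width) of each icon
        cursor += size                  # next icon starts where this ends
    return placed

icons = layout(10, mouse_x=5 * SMALL)
# The icon nearest the pointer is nearly full size; distant icons shrink
# back towards the original 32 px.
assert max(s for _, s in icons) > 60.0
assert min(s for _, s in icons) < 35.0
```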
There are different teleparallel gravities, as you may have noticed in the literature. The one which is equivalent is called the Teleparallel Equivalent of General Relativity (TEGR), and it is a particular action choice that makes it equivalent. If you decompose the variables, the metric $g_{\mu\nu}$ and the affine connection $\Gamma_{\mu\nu}^\alpha$ on the manifold into the tetrad, $e_\mu^a$, which is the potential for translation symmetry, and the spin connection, $\omega_{\mu} {}^a {}_b$, which is the potential for linear transformations, then you obtain the following equivalence between the Ricci scalar with respect to the Levi-Civita connection and the torsion tensor:$$\det(e) \hat{R} = \det (e) \left( \frac14 T^{abc} T_{abc} + \frac12 T^{abc} T_{bac} - T^a T_a \right) + 2 \partial_\mu \left[ \det(e) T^\mu \right]$$where $\det(e)$ is the determinant of the tetrad, $T^{abc}$ is the torsion tensor, $T_a$ is the trace of the torsion tensor, and $\hat{R}$ is the Ricci scalar with respect to the Levi-Civita connection, $\hat\omega_{\mu}^{ab}$, not the affine one, which is zero for teleparallel gravity (cf. Weitzenböck connection). Therefore, if your action is as follows:$$\mathcal{S}_{TEGR} = \int d^4 x \det (e) \left( \frac14 T^{abc} T_{abc} + \frac12 T^{abc} T_{bac} - T^a T_a \right) + \mathcal{S}_{matter}$$it will be exactly equivalent to General Relativity up to a total derivative. Instead, if you choose even slightly different coefficients for the torsion-squared terms, it will be both phenomenologically and dynamically different. The advantage of teleparallel gravity is that you can build a gauge theory for gravity in curvature-flat spacetime, since the spin connection vanishes identically. Nevertheless, the geometry does not have trivial geodesics; curved world lines would still exist because of the torsion of the geometry.
Let’s try to identify the $\Psi(n) = n \prod_{p|n}(1+\frac{1}{p})$ points of $\mathbb{P}^1(\mathbb{Z}/n \mathbb{Z})$ with the lattices $L_{M,\frac{g}{h}}$ at hyperdistance $n$ from the standard lattice $L_1$ in Conway’s big picture. Here are all $24=\Psi(12)$ lattices at hyperdistance $12$ from $L_1$ (the boundary lattices): You can also see the $4 = \Psi(3)$ lattices at hyperdistance $3$ (those connected to $1$ with a red arrow) as well as the intermediate $12 = \Psi(6)$ lattices at hyperdistance $6$. The vertices of Conway’s Big Picture are the projective classes of integral sublattices of the standard lattice $\mathbb{Z}^2=\mathbb{Z} e_1 \oplus \mathbb{Z} e_2$. Let’s say our sublattice is generated by the integral vectors $v=(v_1,v_2)$ and $w=(w_1,w_2)$. How do we determine its class $L_{M,\frac{g}{h}}$, where $M \in \mathbb{Q}_+$ is a strictly positive rational number and $0 \leq \frac{g}{h} < 1$? Here’s an example: the sublattice (the thick dots) is spanned by the vectors $v=(2,1)$ and $w=(1,4)$. Well, we try to find a basechange matrix in $SL_2(\mathbb{Z})$ such that the new 2nd base vector is of the form $(0,z)$. To do this take coprime $(c,d) \in \mathbb{Z}^2$ such that $cv_1+dw_1=0$ and complete with $(a,b)$ satisfying $ad-bc=1$ via Bezout to a matrix in $SL_2(\mathbb{Z})$ such that \[ \begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} v_1 & v_2 \\ w_1 & w_2 \end{bmatrix} = \begin{bmatrix} x & y \\ 0 & z \end{bmatrix} \] then the sublattice is of class $L_{\frac{x}{z},\frac{y}{z}~mod~1}$. In the example, we have \[ \begin{bmatrix} 0 & 1 \\ -1 & 2 \end{bmatrix} \begin{bmatrix} 2 & 1 \\ 1 & 4 \end{bmatrix} = \begin{bmatrix} 1 & 4 \\ 0 & 7 \end{bmatrix} \] so this sublattice is of class $L_{\frac{1}{7},\frac{4}{7}}$. Starting from a class $L_{M,\frac{g}{h}}$ it is easy to work out its hyperdistance from $L_1$: let $d$ be the smallest natural number making the corresponding matrix integral \[ d \cdot
\begin{bmatrix} M & \frac{g}{h} \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} u & v \\ 0 & w \end{bmatrix} \in M_2(\mathbb{Z}) \] then $L_{M,\frac{g}{h}}$ is at hyperdistance $u \cdot w$ from $L_1$. Now that we know how to find the lattice class of any sublattice of $\mathbb{Z}^2$, let us assign a class to any point $[c:d]$ of $\mathbb{P}^1(\mathbb{Z}/n\mathbb{Z})$. As $gcd(c,d)=1$, by Bezout we can find an integral matrix with determinant $1$ \[ S_{[c:d]} = \begin{bmatrix} a & b \\ c & d \end{bmatrix} \] But then the matrix \[ \begin{bmatrix} a.n & b.n \\ c & d \end{bmatrix} \] has determinant $n$. Working backwards we see that the class $L_{[c:d]}$ of the sublattice of $\mathbb{Z}^2$ spanned by the vectors $(a.n,b.n)$ and $(c,d)$ is of hyperdistance $n$ from $L_1$. This is how the correspondence between points of $\mathbb{P}^1(\mathbb{Z}/n\mathbb{Z})$ and classes in Conway’s big picture at hyperdistance $n$ from $L_1$ works. Let’s do an example. Take the point $[7:3] \in \mathbb{P}^1(\mathbb{Z}/12\mathbb{Z})$ (see last time), then \[ \begin{bmatrix} -2 & -1 \\ 7 & 3 \end{bmatrix} \in SL_2(\mathbb{Z}) \] so we have to determine the class of the sublattice spanned by $(-24,-12)$ and $(7,3)$. As before we have to compute \[ \begin{bmatrix} -2 & -7 \\ 7 & 24 \end{bmatrix} \begin{bmatrix} -24 & -12 \\ 7 & 3 \end{bmatrix} = \begin{bmatrix} -1 & 3 \\ 0 & -12 \end{bmatrix} \] giving us that the class $L_{[7:3]} = L_{\frac{1}{12},\frac{3}{4}}$ (remember that the second term must be taken $mod~1$). If you do this for all points in $\mathbb{P}^1(\mathbb{Z}/12\mathbb{Z})$ (and $\mathbb{P}^1(\mathbb{Z}/6\mathbb{Z})$ and $\mathbb{P}^1(\mathbb{Z}/3 \mathbb{Z})$) you get this version of the picture we started with. You’ll spot that the preimages of a canonical coordinate of $\mathbb{P}^1(\mathbb{Z}/m\mathbb{Z})$ for $m | n$ are the very same coordinate together with ‘new’ canonical coordinates in $\mathbb{P}^1(\mathbb{Z}/n\mathbb{Z})$.
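The recipe above is easy to mechanize; here is a sketch (my own code, following the post's base-change recipe) that reproduces both worked examples:

```python
from fractions import Fraction

def egcd(p, q):
    # extended gcd: returns (g, a, b) with a*p + b*q = g = gcd(p, q)
    if q == 0:
        return (p, 1, 0)
    g, a, b = egcd(q, p % q)
    return (g, b, a - (p // q) * b)

def lattice_class(v1, v2, w1, w2):
    # Change basis over SL_2(Z) to the form [[x, y], [0, z]], then
    # M = x/z and g/h = y/z mod 1.  (Assumes gcd(v1, w1) comes out
    # positive, which holds for the examples here.)
    g, a, b = egcd(v1, w1)                 # a*v1 + b*w1 = g
    x, y = g, a * v2 + b * w2              # first row of the new basis
    z = (-w1 // g) * v2 + (v1 // g) * w2   # second row is (0, z)
    if z < 0:                              # (0, -z) spans the same lattice
        z = -z
    if x < 0:                              # likewise for the first row
        x, y = -x, -y
    return Fraction(x, z), Fraction(y, z) % 1

# The two worked examples from the post:
assert lattice_class(2, 1, 1, 4) == (Fraction(1, 7), Fraction(4, 7))
assert lattice_class(-24, -12, 7, 3) == (Fraction(1, 12), Fraction(3, 4))
```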
To see that this correspondence is one-to-one and that the index of the congruence subgroup \[ \Gamma_0(n) = \{ \begin{bmatrix} p & q \\ r & s \end{bmatrix}~|~n|r~\text{and}~ps-qr=1 \} \] in the full modular group $\Gamma = PSL_2(\mathbb{Z})$ is equal to $\Psi(n)$ it is useful to consider the action of $PGL_2(\mathbb{Q})^+$ on the right on the classes of lattices. The stabilizer of $L_1$ is the full modular group $\Gamma$ and the stabilizer of any class is a suitable conjugate of $\Gamma$. For example, for the class $L_n$ (that is, of the sublattice spanned by $(n,0)$ and $(0,1)$, which is of hyperdistance $n$ from $L_1$) this stabilizer is \[ Stab(L_n) = \{ \begin{bmatrix} a & \frac{b}{n} \\ c.n & d \end{bmatrix}~|~ad-bc = 1 \} \] and a very useful observation is that \[ Stab(L_1) \cap Stab(L_n) = \Gamma_0(n) \] This is the way Conway likes us to think about the congruence subgroup $\Gamma_0(n)$: it is the joint stabilizer of the classes $L_1$ and $L_n$ (as well as all classes in the ‘thread’ $L_m$ with $m | n$). On the other hand, $\Gamma$ acts by rotations on the big picture: it only fixes $L_1$ and maps a class to another one of the same hyperdistance from $L_1$. The index of $\Gamma_0(n)$ in $\Gamma$ is then the number of classes at hyperdistance $n$. To see that this number is $\Psi(n)$, first check that the classes at hyperdistance $p^k$, for $p$ a prime number and for all $k$, form the $(p+1)$-valent tree with root $L_1$, so there are exactly $p^{k-1}(p+1)$ classes at hyperdistance $p^k$. To get from this that the number of hyperdistance $n$ classes is indeed $\Psi(n) = \prod_{p|n}p^{v_p(n)-1}(p+1)$ we have to use the prime factorisation of the hyperdistance (see this post). The fundamental domain for the action of $\Gamma_0(12)$ by Moebius transfos on the upper half plane must then consist of $48=2 \Psi(12)$ black or white hyperbolic triangles. Next time we’ll see how to deduce the ‘monstrous’ Grothendieck dessin d’enfant for $\Gamma_0(12)$ from it
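The counts used throughout can be checked in a few lines (my own illustration):

```python
# Psi(n) = n * prod_{p | n} (1 + 1/p) = prod_{p | n} p^{v_p(n)-1} (p + 1),
# the number of classes at hyperdistance n (= the index of Gamma_0(n)).
def psi(n):
    result, p, m = n, 2, n
    while p * p <= m:
        if m % p == 0:
            result = result // p * (p + 1)   # replace one factor p by (p+1)
            while m % p == 0:
                m //= p
        p += 1
    if m > 1:                                # leftover prime factor
        result = result // m * (m + 1)
    return result

# The counts quoted in the post:
assert [psi(3), psi(6), psi(12)] == [4, 12, 24]
# Fundamental domain of Gamma_0(12): 2 * Psi(12) hyperbolic triangles
assert 2 * psi(12) == 48
```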
Electrophoresis

Background

Electrophoresis is a biochemical technique that separates compounds by applying an electrical current. The current runs from a power source and travels from the anode end of the electrophoretic apparatus (the positively charged electrode) to the cathode end (the negatively charged electrode) (Figure 1). The resulting field drives movement according to the charge of the samples. For example, the sugar-phosphate backbone of DNA carries a negative charge, and so the macromolecule is repelled by the negatively charged cathode and attracted towards the positively charged anode. The velocity (and distance) that a compound travels through its medium depends on the compound's electrophoretic mobility, as represented by:

[math] \mu = \frac{Q}{6 \pi r \eta} [/math]

μ = electrophoretic mobility
Q = net charge
r = ionic radius of the solute
η = viscosity of the medium

Motivations

Electrophoresis is a versatile technique that can separate different molecules based on selective characteristics, such as net charge, molecular weight, isoelectric point (i.e. the pH at which a molecule has a neutral charge), and shape. Various electrophoretic systems and techniques have been developed to specialize in the separation of compounds based on these varying characteristics. Multiple electrophoretic assays can also be used on a single sample (see Multidimensional Analysis), providing great specificity for sample analysis. Biomarking samples is often implemented to assist in visualizing different analytes. Electrophoresis is a valuable tool in microfluidics due to the diversity of separations and methods it can perform.

Challenges

Since electrophoresis typically deals with living matter, a prevalent limitation of the technique is contamination. Contamination of either the gel or chemicals (e.g.
buffer) used in an electrophoretic experiment would result in contamination of an organic sample, and thus compromise results [1]. In order to overcome this limitation, it is imperative that electrophoresis be carried out in sterile laboratory settings to produce optimal results. Compared to other microfluidic separation techniques, electrophoresis is a more time-consuming separation process, and so the working sample can degrade after extensive periods of time. Another challenge presented by electrophoresis is regulating the heat generated by the electrical current. Excess heat has the potential to damage the medium (e.g. melting the gel) or cause unwanted denaturation of a sample [2]. Non-uniform distribution of heat can distort band shapes and cause curved, "smiling" bands (Figure 2).

Electrophoretic Techniques

The development of electrophoretic techniques has given rise to specializations of electrophoresis: gel electrophoresis, capillary electrophoresis (more on capillary gel electrophoresis), and free flow electrophoresis (FFE).

Gel Electrophoresis

Compounds can be run through a gel medium (e.g. polyacrylamide, agarose) in electrophoresis for analysis. Polyacrylamide gel electrophoresis (PAGE) is a common separation technique used in the analysis of nucleic acids and proteins within a gel. Polyacrylamide gels are used in this separation technique because they are easy to fabricate, porous, and unreactive with proteins (i.e. the gel will not react with samples). Gels can be run horizontally (agarose) or vertically (polyacrylamide). Sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) is a method used to denature proteins and sort polypeptides according to molecular weight [3]. The loading buffer used in SDS-PAGE consists of a reducing agent (e.g. β-mercaptoethanol) and a strong detergent (SDS itself). The reducing agent facilitates breaking of covalent bonds (e.g. disulfide bridges) and eliminates tertiary structure in proteins.
As a detergent, SDS breaks noncovalent bonds and establishes a uniform net negative charge across the proteins, effectively eliminating discrepancies in charge between different polypeptides. Establishing a net negative charge sorts polypeptides through the gel solely based on their size (Figure 3). Smaller polypeptides will travel further down the gel because they experience less friction and are more easily able to navigate through the porous acrylamide gel, whereas larger polypeptides experience greater friction when travelling through the gel [4]. SDS-PAGE is especially useful in analysis of polypeptides and isolation of proteins based on molecular weight (Figure 4). Native PAGE is another gel electrophoresis procedure that can sort samples through a gel. However, SDS is absent in native gels, meaning that proteins are not denatured, and instead analyzed in their original state, sorting proteins based on both net charge and molecular weight. Native gels are useful for analysis of amino acid composition in proteins and isolation of enzymes. Capillary Electrophoresis Capillary electrophoresis (CE) is a separation technique in which a sample is run through a thin capillary (Figure 5). Samples are separated on the basis of electrophoretic mobility. Heat transfer within capillaries promotes increased analysis speeds and produces more effective separations [5]. Unlike gel electrophoresis, capillaries are reusable for future analyses. CE is useful for analyzing DNA fragments and protein separations. Free flow Electrophoresis In free flow electrophoresis (FFE), sample is inserted into a single well as a pressure-driven stream. An electrical current runs perpendicular to the stream, causing the formation of multiple streams separated by electrophoretic mobility (Figure 6). Miniaturization of FFE has improved efficiency by enabling analysis of samples in small volumes [6]. 
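Going back to the mobility formula in the Background section, here is a toy calculation (my own, with assumed values chosen only for scale):

```python
import math

Q = 1.602e-19   # net charge: one elementary charge, C (assumed)
r = 0.5e-9      # ionic radius, m (assumed)
eta = 1.0e-3    # viscosity of water near room temperature, Pa*s

# mu = Q / (6 * pi * r * eta), in m^2 / (V s)
mu = Q / (6 * math.pi * r * eta)
assert 1e-8 < mu < 1e-7   # order of magnitude for a small ion

# Drift velocity in a typical field of 10 V/cm (1000 V/m):
v = mu * 1000.0
```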
Innovations Miniaturization Miniaturization of electrophoretic techniques has been developed for efficiency: maximizing productivity while minimizing area. These innovations enable experiments to run simultaneously, as well as reduce the time and cost required to run them [7]. A feature of electrophoresis miniaturization is the incorporation of separation channels, which work to increase channel efficiency. Polymers such as polydimethylsiloxane (PDMS) and poly(methyl methacrylate) (PMMA) are used in modern electrophoretic systems as they can be easily manufactured and are more robust than glass. PMMA is notably used in gel electrophoresis because it does not react with polyacrylamide gels. Different material considerations also apply when making the gel. For example, agarose gel is used for larger scale analysis of macromolecules, specifically larger DNA fragments [8]. Polyacrylamide has a higher resolution than agarose for analysis of smaller samples, such as single strands of DNA or proteins. The pores in polyacrylamide gels are smaller and allow for greater discrimination and separation of samples, making the technique versatile at the microscale. Multidimensional Analysis Running multiple assays on a single sample can analyze between two and four different characteristics and improve test efficiency [9]. Multidimensional analysis makes use of different sample properties, for example by combining isoelectric focusing and SDS-PAGE [9],[10]. Isoelectric focusing of the sample is run horizontally and separates proteins based on their isoelectric points (the approximate pH at which a sample has a net neutral charge), establishing a ladder of samples along a pH gradient. Afterwards, SDS-PAGE utilizes the same gel and runs it vertically, further separating and distinguishing proteins of similar isoelectric points by size. Multidimensional analysis exploits the different characteristics of samples (charge, size, etc.)
in order to comprehensively separate and distinguish samples (Figure 7). Applications Nucleic Acid and Protein Analysis Electrophoresis sees common use in analyzing DNA, RNA, and proteins [11]. Electrophoresis is especially useful in genetic sequencing, where tissue samples from different organisms are compared to look for any corresponding protein or gene similarities. Sorting samples based on molecular weight assists in separation of nucleic acid fragments and protein polymers for research. Medical Research Electrophoresis is a critical tool in pharmacy; vaccine development and purification rely on electrophoresis for examination of a vaccine's contents [12]. Antibiotics are analyzed through electrophoresis to check for residue and impurity presence [13],[14]. Detection of analytes in drugs helps to maximize potency and individualize treatments for patients using drug therapy. Environmental Monitoring Soil chemistry reflects the overall health of an ecosystem. Performing electrophoresis on soil can detect harmful chemicals (e.g. allelochemicals) that can compromise the environment [15]. Electrophoresis of microbial population DNA can also reflect conditions in the environment. Regular monitoring of environmental health helps discern shifts and patterns in an ecosystem. References [1] C. C. Chéry, L. Moens, R. Cornelis and F. Vanhaecke, Pure Appl. Chem., 2006, 78, 91–103. DOI: http://dx.doi.org/10.1351/pac200678010091 [2] R. Westermeier, A Guide to Theory and Practice, 1993, pp 210-212. DOI: http://dx.doi.org/10.1038/npg.els.0005335 [3] K. Weber and M. Osborn, J. Biol. Chem., 1969, 244, 4406-4412. DOI: http://dx.doi.org/10.1016/0003-2697(78)90281-6 [4] J. I. Won, R. J. Meagher and A. E. Barron, Electrophoresis, 2005, 26, 2138–2148. DOI: http://dx.doi.org/10.1002/elps.200410042 [5] J. Jorgenson and K. Lukacs, Science, 1983, 222, 266–272. DOI: http://dx.doi.org/10.1126/science.6623076 [6] R. T. Turgeon and M. T. Bowser, Anal. Bioanal. 
Chem., 2009, 394, 187–198. DOI: http://dx.doi.org/10.1007/s00216-009-2656-5 [7] A. J. Pfeiffer, T. Mukherjee and S. Hauan, Ind. Eng. Chem. Res., 2004, 43, 3539–3553. DOI: http://dx.doi.org/10.1021/ie034071t [8] I. Day and S. Humphries, Anal. Biochem., 1994, 222, 389–395. DOI: http://dx.doi.org/10.1006/abio.1994.1507 [9] S. Tia and A. E. Herr, Lab. Chip, 2009, 9, 2524–2536. DOI: http://dx.doi.org/10.1039/b900683b [10] Y. Li, J. S. Buch, F. Rosenberger, D. L. DeVoe and C. S. Lee, Anal. Chem., 2004, 76, 742–748. DOI: http://dx.doi.org/10.1021/ac034765b [11] E. Southern, J. Mol. Biol., 1975, 98, 503-517. DOI: http://dx.doi.org/10.1016/S0022-2836(75)80083-0 [12] W. Hurni and W. Miller, J. Chromatogr., 1991, 559, 337–343. DOI: http://dx.doi.org/10.1006/abio.1994.1507 [13] C. L. Flurer, Electrophoresis, 1997, 18, 2427–2437. DOI: http://dx.doi.org/10.1002/elps.1150181233 [14] M. Hernandez, F. Borrull and M. Calull, Trac-Trends Anal. Chem., 2003, 22, 416–427. DOI: http://dx.doi.org/10.1016/S0165-9936(03)00702-7 [15] G. Chen, Y. H. Lin and J. Wang, Curr. Anal. Chem., 2006, 2, 43–50. DOI: http://dx.doi.org/10.2174/157341106775197439
It is well known that quantum Yang-Mills theory has a periodic vacuum structure. Consider electroweak theory. For a single generation of fermions, the theory is CP invariant. I would like to know if the periodic vacua of the theory are also CP invariant. One expects the trivial vacuum with topological charge $n=0$ to be CP invariant, where the topological charge, defined as \begin{equation}n= \int d^4x \, \mathcal{P}(x),\end{equation} is the integral of the Pontryagin density $\mathcal{P}(x)$ over the spacetime manifold. However, since $\mathcal{P}(x) \sim tr\left( F \tilde{F} \right)$ is odd under CP, one generally expects gauge configurations that have non-zero topological charge to be odd under CP (?), and this includes the non-trivial vacua. On the other hand, electroweak theory differs from QCD in that there is no explicit theta-angle term in the action, so it seems to me that, as opposed to QCD, there should be no way for us to physically distinguish between the vacua, and therefore they should all have the same properties under C, P and T (note, however, that $\textit{changing}$ vacua via sphaleron and instanton processes does have physical significance, but that is not the main concern of the question). What is the resolution of this apparent paradox? Will the pure gauge configurations that can be connected to the trivial vacuum via large gauge transformations (and hence have non-zero topological charge) be even under CP or odd? This post imported from StackExchange Physics at 2019-07-25 09:06 (UTC), posted by SE-user Optimus Prime
Abstract Let $T$ be a smooth homogeneous Calderón-Zygmund singular integral operator in $\mathbb{R}^n$. In this paper we study the problem of controlling the maximal singular integral $T^{\star}f$ by the singular integral $Tf$. The most basic form of control one may consider is the estimate of the $L^2(\mathbb{R}^n)$ norm of $T^{\star}f$ by a constant times the $L^2(\mathbb{R}^n)$ norm of $Tf$. We show that if $T$ is an even higher order Riesz transform, then one has the stronger pointwise inequality $T^{\star}f(x) \leq C \, M(Tf)(x)$, where $C$ is a constant and $M$ is the Hardy-Littlewood maximal operator. We prove that the $L^2$ estimate of $T^{\star}$ by $T$ is equivalent, for even smooth homogeneous Calderón-Zygmund operators, to the pointwise inequality between $T^{\star}$ and $M(T)$. Our main result characterizes the $L^2$ and pointwise inequalities in terms of an algebraic condition expressed in terms of the kernel $\frac{\Omega(x)}{|x|^n}$ of $T$, where $\Omega$ is an even homogeneous function of degree $0$, of class $C^\infty(S^{n-1})$ and with zero integral on the unit sphere $S^{n-1}$. Let $\Omega= \sum P_j$ be the expansion of $\Omega$ in spherical harmonics $P_j$ of degree $j$. Let $A$ stand for the algebra generated by the identity and the smooth homogeneous Calderón-Zygmund operators. Then our characterizing condition states that $T$ is of the form $R\circ U$, where $U$ is an invertible operator in $A$ and $R$ is a higher order Riesz transform associated with a homogeneous harmonic polynomial $P$ which divides each $P_j$ in the ring of polynomials in $n$ variables with real coefficients.
Earth Eclipse of Server Sky Arrays If the Earth were perfectly round, and the poles were not inclined, arrays in the 12,789 km radius, 17,280 second period equatorial orbit would spend 2868 seconds per orbit shaded by the 6371 km radius Earth ( = 17280 \times asin( 6371 / 12789 ) / 180^\circ ~ ). In fact, the Earth has an equatorial radius of 6378.1 km, a polar radius of 6356.8 km, and an axial tilt of \phi = 23.439281° . The sun has an angular size of 0.53 degrees, and the Earth's atmosphere refracts light, meaning that the light dims gradually over approximately 30 seconds when entering eclipse. For the rest of this analysis, we will ignore these gradual effects, treat the sun as a point source at infinity, and calculate the hard cutoff time as a function of time of year. The variable \beta represents the time of year in the northern hemisphere, from 0° in spring, 90° in summer, 180° in fall, to 270° in winter. Oblate Earth The equatorial plane is tilted towards the sun by the angle \theta_{eq} defined by: \sin( \theta_{eq} ) = \sin( \beta ) \sin( \phi ) ~ ~ ~ see Precession The earth can be approximated as an elliptical disk, the projection of an oblate spheroid with an equatorial radius R_E = 6,378,137 meters and a polar radius R_P = 6,356,752 meters. The edge of this elliptical disk follows the equation: y = \sqrt{ ( R_E^2 - x^2 ) \left( ( 1 - ( R_P / R_E )^2 ) \sin( \theta_{eq} )^2 + ( R_P / R_E )^2 \right) } ~ ~ ~ see TiltingOblate The m288 orbit is a circle in the equatorial plane with a radius of R_{m288} . 
This circle projects into the X,Z plane as y = \sin( \theta_{eq} ) \sqrt{ R_{m288}^2 - x^2 } Two of the four points where these y values are equal are the points where the orbit enters or leaves the eclipse, so: y_e = \sin( \theta_{eq} ) \sqrt{ R_{m288}^2 - x_e^2 } = \sqrt{ ( R_E^2 - x_e^2 ) \left( ( 1 - ( R_P / R_E )^2 ) \sin( \theta_{eq} )^2 + ( R_P / R_E )^2 \right) } Let's solve for x_e : \sin( \theta_{eq} )^2 ( R_{m288}^2 - x_e^2 ) = ( R_E^2 - x_e^2 ) \left( ( 1 - ( R_P / R_E )^2 ) \sin( \theta_{eq} )^2 + ( R_P / R_E )^2 \right) x_e^2 \left( \left( ( 1 - ( R_P / R_E )^2 ) \sin( \theta_{eq} )^2 + ( R_P / R_E )^2 \right) - \sin( \theta_{eq} )^2 \right) ~ = ~ R_E^2 \left( ( 1 - ( R_P / R_E )^2 ) \sin( \theta_{eq} )^2 + ( R_P / R_E )^2 \right) - \sin( \theta_{eq} )^2 R_{m288}^2 x_e^2 \left( ( R_P / R_E )^2 ( 1 -\sin( \theta_{eq} )^2 ) \right) ~ = ~ R_P^2 - ( R_{m288}^2 + R_P^2 - R_E^2 ) \sin( \theta_{eq} )^2 \large x_e ~ = ~ \sqrt{ { R_P^2 - ( R_{m288}^2 + R_P^2 - R_E^2 ) \sin( \theta_{eq} )^2 } \over { ( R_P / R_E )^2 ( 1 -\sin( \theta_{eq} )^2 ) } } Which, approximating R_P \approx R_E in the \sin( \theta_{eq} )^2 coefficient, simplifies to the following:

R_E = 6,378,137 m (equatorial radius)
R_P = 6,356,752 m (polar radius)
R_{m288} = 12,788,866 m (m288 orbit radius)
T_{orbit} = 14,400.0 sec (sun-relative period)
\phi = 23.439281° (axial tilt)
\beta = 0° to 360° (time of year from spring equinox)

\Large x_e ~ = ~ R_E \sqrt{ { 1 - \left( ( R_{m288}^2 + R_P^2 ) / R_E^2 - 1 \right) \sin( \theta_{eq} )^2 } \over { 1 - \sin( \theta_{eq} )^2 } } ~ \approx ~ 6,378,137 m \sqrt{ { 1 - 0.635085 \sin( \beta )^2 } \over { 1 - 0.158227 \sin( \beta )^2 } } ~ ~ ~ \sin( \theta_{eq} ) = \sin( \beta ) \sin( \phi ) The eclipse fraction F_E (fraction of the total orbit) is: \large F_E = \arcsin( x_e / R_{m288} ) / \pi ~ ~ ~ ~ ~ ~ ~ ~ assumes arcsin() in radians; if in degrees, divide by 180° While this is going on, the earth is going around the sun, and the sun is making an apparent motion in the earth's sky. 
This stretches the time in eclipse by this small factor: \large stretch = 1.0 / ( 1.0 - ( T_{orbit}/Year ) ) That is also the correction factor from the sidereal to the synodic orbit. The eclipse time T_E is thus: \large T_E = T_{synodic} \arcsin( x_e / R_{m288} ) / \pi For m288, T_{synodic} = 14400 sec. This is noon-to-noon time, not the repeat time for a particular spot on the earth. x_e varies from 4,199,446 to 6,378,137 meters, so the fraction-of-orbit F_E varies from 0.1065 to 0.1662, with an average of 0.1388, and eclipse time varies from 1534 to 2393 seconds with an average of 1999 seconds. The eclipse will be longest in spring and fall, shortest in summer and winter. The shortest eclipse periods correspond to surface peak power demands (heating and cooling), which is fortuitous. The cost of terrestrial computing will be highest during summer in the temperate regions, when extra air conditioning is needed in data centers. Here's a plot of eclipse fraction and eclipse time versus time of year. The x axis is in degrees of the year: one degree = 1.015 days, 30 degrees equals a month, 360 degrees equals a year. The plot starts on the spring equinox, around March 23. The eclipse fraction drops significantly near the summer and winter solstices. At a radius of 16,043 km, the eclipse fraction drops to zero at the solstices. Above that altitude, the numerator of the square root drops below zero around the solstices and the equation for x_e becomes imaginary. That means that the orbit passes above or below the top of the earth ellipse, and the satellites are always in sunlight around the solstices. Here are the eclipse times for GEO and for the Moon. Note that a lunar-distance eclipse happens only when the mean anomaly (position around the orbit) is right; most times, this occurs at the wrong time of year. It is possible to tune the period and inclination of an orbit beyond the moon (to an integer divisor of a year) so eclipses never happen.
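The formulas above are easy to check numerically. The sketch below (function and variable names are mine) implements the exact expression for x_e, before the R_P ≈ R_E simplification, together with the eclipse fraction and eclipse time. It reproduces the 0.1662 fraction and roughly 2393 second eclipse at the equinoxes; at the solstices it lands within about 1% of the quoted 1534 second minimum, the small difference coming from the simplified prefactor used for the quoted numbers.

```python
import math

R_E = 6_378_137.0        # Earth equatorial radius, m
R_P = 6_356_752.0        # Earth polar radius, m
R_M288 = 12_788_866.0    # m288 orbit radius, m
T_SYNODIC = 14_400.0     # sun-relative (noon-to-noon) orbit period, s
PHI = math.radians(23.439281)    # Earth axial tilt

def eclipse(beta_deg):
    """Return (x_e, F_E, T_E) for a time of year beta, in degrees past the
    spring equinox: eclipse-entry coordinate (m), eclipse fraction of the
    orbit, and eclipse time (s)."""
    s2 = (math.sin(math.radians(beta_deg)) * math.sin(PHI)) ** 2  # sin^2(theta_eq)
    numerator = R_P**2 - (R_M288**2 + R_P**2 - R_E**2) * s2
    if numerator <= 0.0:     # orbit clears the Earth's shadow: no eclipse
        return 0.0, 0.0, 0.0
    x_e = math.sqrt(numerator / ((R_P / R_E)**2 * (1.0 - s2)))
    f_e = math.asin(x_e / R_M288) / math.pi
    return x_e, f_e, T_SYNODIC * f_e

# Longest eclipse at the equinoxes (beta = 0), shortest at the solstices.
print(eclipse(0.0))
print(eclipse(90.0))
```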
I was trying to understand the proof of the Witt dimension formula for free Lie algebras. I was basically following this proof. (I'm not posting the complete proof, just the piece where I'm currently stuck.) How do we prove the Witt dimension formula? Write $\ell_n = \dim L_n$. In each homogeneous subspace $L_n$ of the free Lie algebra choose an ordered basis $\{ f_{n1}, \dotsc, f_{n \ell_n} \}$. Put these finite bases together into an infinite ordered basis of $L$: $$ f_{11}, \dotsc, f_{1 \ell_1}, f_{21}, \dotsc, f_{2 \ell_2}, f_{31}, \dotsc, f_{3 \ell_3}, \dotsc $$ For simplicity write this last basis as $$ g_1, g_2, g_3, \dotsc, g_i, \dotsc $$ The PBW theorem (after Poincaré, Birkhoff, and Witt) states that a basis for $A$ consists of all products of the form \begin{equation} \tag{1} g_{i_1} g_{i_2} \dotsm g_{i_k} \qquad (i_1 \leq i_2 \leq \dotsb \leq i_k). \end{equation} Now we need to do some combinatorics. There are $\ell_d$ Lie polynomials $g_i$ of degree $d$. Each $g_i$ contributes degree $d$ to the total degree of the associative word in $(1)$. But each $g_i$ may occur any number $k$ of times (consecutively) in $(1)$. The generating function for the contribution of these $k$ elements of degree $d$ to the total degree of $(1)$ is $$ 1 + x^d + x^{2d} + \dotsb + x^{kd} + \dotsb = \frac{1}{1-x^d} \cdotp $$ But I do not understand the meaning of the last 4 lines. Why isn't the total contribution to the degree of $(1)$ just $dk$? Where does that generating function come from? What is $x$ in that formula? Unfortunately, this seems to be the clearest proof of Witt's formula I've found.
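For what it's worth, the identity this combinatorial argument is building toward — one factor $1/(1-x^d)$ per basis element of degree $d$, with the product over all of them equal to the generating function $1/(1-kx)$ of the free associative algebra on $k$ generators — can be checked numerically against the Witt formula $\ell_n = \frac{1}{n}\sum_{d \mid n} \mu(d)\, k^{n/d}$. A sketch (function names are mine):

```python
from math import comb

def mobius(n):
    """Mobius function via trial division."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:       # squared prime factor => mu = 0
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

def witt(k, n):
    """Witt formula: dimension of the degree-n homogeneous piece of the
    free Lie algebra on k generators."""
    return sum(mobius(d) * k ** (n // d)
               for d in range(1, n + 1) if n % d == 0) // n

k, N = 2, 8
# Multiply out prod_d (1 - x^d)^(-l_d), truncated mod x^N.
series = [1] + [0] * (N - 1)
for d in range(1, N):
    l = witt(k, d)
    # (1 - x^d)^(-l) = sum_j C(l + j - 1, j) x^(d*j)
    factor = [0] * N
    for j in range((N - 1) // d + 1):
        factor[d * j] = comb(l + j - 1, j)
    series = [sum(series[i] * factor[m - i] for i in range(m + 1))
              for m in range(N)]

print(series)                                  # coefficients of 1/(1 - k x)
assert series == [k ** m for m in range(N)]
```

With $k = 2$ the dimensions are $\ell_1, \ell_2, \ell_3, \dotsc = 2, 1, 2, 3, 6, 9, \dotsc$, and the product of the geometric series reproduces the coefficients $1, 2, 4, 8, \dotsc$ of $1/(1-2x)$, which is exactly what the PBW count demands.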
Multiplying Complex Numbers Multiplying complex numbers is much like multiplying binomials. The major difference is that we work with the real and imaginary parts separately. Example 4: Multiplying a Complex Number by a Real Number Let’s begin by multiplying a complex number by a real number. We distribute the real number just as we would with a binomial. How To: Given a complex number and a real number, multiply to find the product. Use the distributive property. Simplify. Example 5: Multiplying a Complex Number by a Real Number Find the product [latex]4\left(2+5i\right)[/latex]. Solution Distribute the 4: [latex]4\left(2+5i\right)=8+20i[/latex]. Try It 4 Find the product [latex]-4\left(2+6i\right)[/latex]. Multiplying Complex Numbers Together Now, let’s multiply two complex numbers. We can use either the distributive property or the FOIL method. Recall that FOIL is an acronym for multiplying First, Outer, Inner, and Last terms together. Using either the distributive property or the FOIL method, we get [latex]\left(a+bi\right)\left(c+di\right)=ac+adi+bci+bd{i}^{2}[/latex]. Because [latex]{i}^{2}=-1[/latex], we have [latex]\left(a+bi\right)\left(c+di\right)=ac+adi+bci-bd[/latex]. To simplify, we combine the real parts, and we combine the imaginary parts: [latex]\left(a+bi\right)\left(c+di\right)=\left(ac-bd\right)+\left(ad+bc\right)i[/latex]. How To: Given two complex numbers, multiply to find the product. Use the distributive property or the FOIL method. Simplify. Example 6: Multiplying a Complex Number by a Complex Number Multiply [latex]\left(4+3i\right)\left(2 - 5i\right)[/latex]. Solution Use [latex]\left(a+bi\right)\left(c+di\right)=\left(ac-bd\right)+\left(ad+bc\right)i[/latex]: [latex]\left(4+3i\right)\left(2 - 5i\right)=\left(4\cdot 2-3\cdot \left(-5\right)\right)+\left(4\cdot \left(-5\right)+3\cdot 2\right)i=23-14i[/latex]. Try It 5 Multiply [latex]\left(3 - 4i\right)\left(2+3i\right)[/latex]. https://youtu.be/O9xQaIi0NX0 Dividing Complex Numbers Division of two complex numbers is more complicated than addition, subtraction, and multiplication because we cannot divide by an imaginary number, meaning that any fraction must have a real-number denominator. We need to find a term by which we can multiply the numerator and the denominator that will eliminate the imaginary portion of the denominator so that we end up with a real number as the denominator. 
This term is called the complex conjugate of the denominator, which is found by changing the sign of the imaginary part of the complex number. In other words, the complex conjugate of [latex]a+bi[/latex] is [latex]a-bi[/latex]. Note that complex conjugates have a reciprocal relationship: The complex conjugate of [latex]a+bi[/latex] is [latex]a-bi[/latex], and the complex conjugate of [latex]a-bi[/latex] is [latex]a+bi[/latex]. Further, when a quadratic equation with real coefficients has complex solutions, the solutions are always complex conjugates of one another. Suppose we want to divide [latex]c+di[/latex] by [latex]a+bi[/latex], where neither a nor b equals zero. We first write the division as a fraction, then find the complex conjugate of the denominator, and multiply. Multiply the numerator and denominator by the complex conjugate of the denominator. Apply the distributive property. Simplify, remembering that [latex]{i}^{2}=-1[/latex]. A General Note: The Complex Conjugate The complex conjugate of a complex number [latex]a+bi[/latex] is [latex]a-bi[/latex]. It is found by changing the sign of the imaginary part of the complex number. The real part of the number is left unchanged. When a complex number is multiplied by its complex conjugate, the result is a real number. When a complex number is added to its complex conjugate, the result is a real number. Example 7: Finding Complex Conjugates Find the complex conjugate of each number. [latex]2+i\sqrt{5}[/latex] [latex]-\frac{1}{2}i[/latex] Solution The number is already in the form [latex]a+bi[/latex]. The complex conjugate is [latex]a-bi[/latex], or [latex]2-i\sqrt{5}[/latex]. We can rewrite this number in the form [latex]a+bi[/latex] as [latex]0-\frac{1}{2}i[/latex]. The complex conjugate is [latex]a-bi[/latex], or [latex]0+\frac{1}{2}i[/latex]. This can be written simply as [latex]\frac{1}{2}i[/latex]. How To: Given two complex numbers, divide one by the other. Write the division problem as a fraction. 
Determine the complex conjugate of the denominator. Multiply the numerator and denominator of the fraction by the complex conjugate of the denominator. Simplify. Example 8: Dividing Complex Numbers Divide [latex]\left(2+5i\right)[/latex] by [latex]\left(4-i\right)[/latex]. Solution We begin by writing the problem as a fraction. Then we multiply the numerator and denominator by the complex conjugate of the denominator. To multiply two complex numbers, we expand the product as we would with polynomials (the process commonly called FOIL). Note that this expresses the quotient in standard form. Example 9: Substituting a Complex Number into a Polynomial Function Let [latex]f\left(x\right)={x}^{2}-5x+2[/latex]. Evaluate [latex]f\left(3+i\right)[/latex]. Solution Substitute [latex]x=3+i[/latex] into the function [latex]f\left(x\right)={x}^{2}-5x+2[/latex] and simplify. Try It 6 Let [latex]f\left(x\right)=2{x}^{2}-3x[/latex]. Evaluate [latex]f\left(8-i\right)[/latex]. Example 10: Substituting an Imaginary Number in a Rational Function Let [latex]f\left(x\right)=\frac{2+x}{x+3}[/latex]. Evaluate [latex]f\left(10i\right)[/latex]. Solution Substitute [latex]x=10i[/latex] and simplify. Try It 7 Let [latex]f\left(x\right)=\frac{x+1}{x - 4}[/latex]. Evaluate [latex]f\left(-i\right)[/latex]. Simplifying Powers of i The powers of i are cyclic. Let’s look at what happens when we raise i to increasing powers. We can see that when we get to the fifth power of i, it is equal to the first power. As we continue to multiply i by itself for increasing powers, we will see a cycle of 4. Let’s examine the next 4 powers of i. Example 11: Simplifying Powers of i Evaluate [latex]{i}^{35}[/latex]. Solution Since [latex]{i}^{4}=1[/latex], we can simplify the problem by factoring out as many factors of [latex]{i}^{4}[/latex] as possible. To do so, first determine how many times 4 goes into 35: [latex]35=4\cdot 8+3[/latex]. Q & A Can we write [latex]{i}^{35}[/latex] in other helpful ways? 
As we saw in Example 11, we reduced [latex]{i}^{35}[/latex] to [latex]{i}^{3}[/latex] by dividing the exponent by 4 and using the remainder to find the simplified form. But perhaps another factorization of [latex]{i}^{35}[/latex] may be more useful. The table below shows some other possible factorizations.

Factorization of [latex]{i}^{35}[/latex] | Reduced form | Simplified form
[latex]{i}^{34}\cdot i[/latex] | [latex]{\left({i}^{2}\right)}^{17}\cdot i[/latex] | [latex]{\left(-1\right)}^{17}\cdot i[/latex]
[latex]{i}^{33}\cdot {i}^{2}[/latex] | [latex]{i}^{33}\cdot \left(-1\right)[/latex] | [latex]-{i}^{33}[/latex]
[latex]{i}^{31}\cdot {i}^{4}[/latex] | [latex]{i}^{31}\cdot 1[/latex] | [latex]{i}^{31}[/latex]
[latex]{i}^{19}\cdot {i}^{16}[/latex] | [latex]{i}^{19}\cdot {\left({i}^{4}\right)}^{4}[/latex] | [latex]{i}^{19}[/latex]

Each of these will eventually result in the answer we obtained above but may require several more steps than our earlier method.
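Because [latex]{i}^{4}=1[/latex], only the remainder of the exponent mod 4 matters, which makes the reduction a one-liner. A small sketch (the helper name is mine):

```python
# The powers of i cycle with period 4: i, -1, -i, 1, i, ...
CYCLE = [1 + 0j, 1j, -1 + 0j, -1j]

def i_power(n):
    """Return i**n (for n >= 0) by reducing the exponent mod 4, since i**4 == 1."""
    return CYCLE[n % 4]

# Example 11: 35 = 4*8 + 3, so i**35 = i**3 = -i.
print(i_power(35))
```

Multiplying 1 by `1j` thirty-five times in a row gives the same value, confirming the shortcut.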
Any conic may be determined by three characteristics: a single focus, a fixed line called the directrix, and the ratio of the distances of each to a point on the graph. Consider the parabola [latex]x=2+{y}^{2}[/latex] shown in Figure 2. In The Parabola, we learned how a parabola is defined by the focus (a fixed point) and the directrix (a fixed line). In this section, we will learn how to define any conic in the polar coordinate system in terms of a fixed point, the focus, placed at the pole, and a line, the directrix, which is perpendicular to the polar axis. If [latex]F[/latex] is a fixed point, the focus, and [latex]D[/latex] is a fixed line, the directrix, then we can let [latex]e[/latex] be a fixed positive number, called the eccentricity, which we can define as the ratio of the distance from a point on the graph to the focus to the distance from that point to the directrix. Then the set of all points [latex]P[/latex] such that [latex]e=\frac{PF}{PD}[/latex] is a conic. In other words, we can define a conic as the set of all points [latex]P[/latex] with the property that the ratio of the distance from [latex]P[/latex] to [latex]F[/latex] to the distance from [latex]P[/latex] to [latex]D[/latex] is equal to the constant [latex]e[/latex]. For a conic with eccentricity [latex]e[/latex]:

- if [latex]0\le e<1[/latex], the conic is an ellipse
- if [latex]e=1[/latex], the conic is a parabola
- if [latex]e>1[/latex], the conic is a hyperbola

With this definition, we may now define a conic in terms of the directrix, [latex]x=\pm p[/latex], the eccentricity [latex]e[/latex], and the angle [latex]\theta [/latex]. Thus, each conic may be written as a polar equation, an equation written in terms of [latex]r[/latex] and [latex]\theta [/latex].
A General Note: The Polar Equation for a Conic For a conic with a focus at the origin, if the directrix is [latex]x=\pm p[/latex], where [latex]p[/latex] is a positive real number, and the eccentricity is a positive real number [latex]e[/latex], the conic has a polar equation For a conic with a focus at the origin, if the directrix is [latex]y=\pm p[/latex], where [latex]p[/latex] is a positive real number, and the eccentricity is a positive real number [latex]e[/latex], the conic has a polar equation How To: Given the polar equation for a conic, identify the type of conic, the directrix, and the eccentricity. Multiply the numerator and denominator by the reciprocal of the constant in the denominator to rewrite the equation in standard form. Identify the eccentricity [latex]e[/latex] as the coefficient of the trigonometric function in the denominator. Compare [latex]e[/latex] with 1 to determine the shape of the conic. Determine the directrix as [latex]x=p[/latex] if cosine is in the denominator and [latex]y=p[/latex] if sine is in the denominator. Set [latex]ep[/latex] equal to the numerator in standard form to solve for [latex]x[/latex] or [latex]y[/latex]. Example 1: Identifying a Conic Given the Polar Form For each of the following equations, identify the conic with focus at the origin, the directrix, and the eccentricity. [latex]r=\frac{6}{3+2\text{ }\sin \text{ }\theta }[/latex] [latex]r=\frac{12}{4+5\text{ }\cos \text{ }\theta }[/latex] [latex]r=\frac{7}{2 - 2\text{ }\sin \text{ }\theta }[/latex] Solution For each of the three conics, we will rewrite the equation in standard form. Standard form has a 1 as the constant in the denominator. Therefore, in all three parts, the first step will be to multiply the numerator and denominator by the reciprocal of the constant of the original equation, [latex]\frac{1}{c}[/latex], where [latex]c[/latex] is that constant. 
Multiply the numerator and denominator by [latex]\frac{1}{3}[/latex].[latex]r=\frac{6}{3+2\sin \text{ }\theta }\cdot \frac{\left(\frac{1}{3}\right)}{\left(\frac{1}{3}\right)}=\frac{6\left(\frac{1}{3}\right)}{3\left(\frac{1}{3}\right)+2\left(\frac{1}{3}\right)\sin \text{ }\theta }=\frac{2}{1+\frac{2}{3}\text{ }\sin \text{ }\theta }[/latex] Because [latex]\sin \text{ }\theta [/latex] is in the denominator, the directrix is [latex]y=p[/latex]. Comparing to standard form, note that [latex]e=\frac{2}{3}[/latex]. Therefore, from the numerator,[latex]\begin{array}{l}\text{ }2=ep\hfill \\ \text{ }2=\frac{2}{3}p\hfill \\ \left(\frac{3}{2}\right)2=\left(\frac{3}{2}\right)\frac{2}{3}p\hfill \\ \text{ }3=p\hfill \end{array}[/latex] Since [latex]e<1[/latex], the conic is an ellipse. The eccentricity is [latex]e=\frac{2}{3}[/latex] and the directrix is [latex]y=3[/latex]. Multiply the numerator and denominator by [latex]\frac{1}{4}[/latex].[latex]\begin{array}{l}\begin{array}{l}\hfill \\ \hfill \\ r=\frac{12}{4+5\text{ }\cos \text{ }\theta }\cdot \frac{\left(\frac{1}{4}\right)}{\left(\frac{1}{4}\right)}\hfill \end{array}\hfill \\ r=\frac{12\left(\frac{1}{4}\right)}{4\left(\frac{1}{4}\right)+5\left(\frac{1}{4}\right)\cos \text{ }\theta }\hfill \\ r=\frac{3}{1+\frac{5}{4}\text{ }\cos \text{ }\theta }\hfill \end{array}[/latex] Because [latex]\text{ cos}\theta [/latex] is in the denominator, the directrix is [latex]x=p[/latex]. Comparing to standard form, [latex]e=\frac{5}{4}[/latex]. Therefore, from the numerator,[latex]\begin{array}{l}\text{ }3=ep\hfill \\ \text{ }3=\frac{5}{4}p\hfill \\ \left(\frac{4}{5}\right)3=\left(\frac{4}{5}\right)\frac{5}{4}p\hfill \\ \text{ }\frac{12}{5}=p\hfill \end{array}[/latex] Since [latex]e>1[/latex], the conic is a hyperbola. The eccentricity is [latex]e=\frac{5}{4}[/latex] and the directrix is [latex]x=\frac{12}{5}=2.4[/latex]. 
Multiply the numerator and denominator by [latex]\frac{1}{2}[/latex].[latex]\begin{array}{l}\hfill \\ \hfill \\ \begin{array}{l}r=\frac{7}{2 - 2\text{ }\sin \text{ }\theta }\cdot \frac{\left(\frac{1}{2}\right)}{\left(\frac{1}{2}\right)}\hfill \\ r=\frac{7\left(\frac{1}{2}\right)}{2\left(\frac{1}{2}\right)-2\left(\frac{1}{2}\right)\text{ }\sin \text{ }\theta }\hfill \\ r=\frac{\frac{7}{2}}{1-\sin \text{ }\theta }\hfill \end{array}\hfill \end{array}[/latex] Because sine is in the denominator, the directrix is [latex]y=-p[/latex]. Comparing to standard form, [latex]e=1[/latex]. Therefore, from the numerator,[latex]\begin{array}{l}\frac{7}{2}=ep\\ \frac{7}{2}=\left(1\right)p\\ \frac{7}{2}=p\end{array}[/latex] Because [latex]e=1[/latex], the conic is a parabola. The eccentricity is [latex]e=1[/latex] and the directrix is [latex]y=-\frac{7}{2}=-3.5[/latex]. Try It 1 Identify the conic with focus at the origin, the directrix, and the eccentricity for [latex]r=\frac{2}{3-\cos \text{ }\theta }[/latex].
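The recipe used in all three parts of Example 1 is mechanical, so it can be sketched as a small function (the function and variable names are mine). It takes the numerator, the constant, and the trig coefficient from [latex]r=\frac{\text{num}}{\text{const}+\text{coef}\cdot \text{trig }\theta }[/latex]:

```python
def classify_conic(num, const, coef):
    """Identify r = num / (const + coef*trig(theta)) with focus at the pole.

    Dividing through by const gives the standard form, so the eccentricity
    is e = |coef|/const and the standard-form numerator e*p gives p = num/|coef|.
    Returns (e, p, kind)."""
    e = abs(coef) / const
    p = num / abs(coef)            # same as (num/const) / e
    if e < 1:
        kind = "ellipse"
    elif e == 1:
        kind = "parabola"
    else:
        kind = "hyperbola"
    return e, p, kind

print(classify_conic(6, 3, 2))     # Example 1(a): ellipse with e = 2/3, p = 3
print(classify_conic(12, 4, 5))    # Example 1(b): hyperbola with e = 5/4, p = 12/5
print(classify_conic(7, 2, -2))    # Example 1(c): parabola with e = 1, p = 7/2
```

Whether the directrix is [latex]x=\pm p[/latex] or [latex]y=\pm p[/latex] still has to be read off from which trig function appears and its sign, as in the worked solutions above.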
I started by showing that $1\leq a_{n} \leq n$ (by induction) and then $\frac{1}{n}\leq \frac{a_{n}}{n} \leq 1$ which doesn't really get me anywhere. On a different path I showed that $a_{n} \to \infty$ but can't see how that helps me. Use $a_{n+1} = \frac{1}{a_n} + \frac{1}{a_{n-1}} + \cdots + 1$ and $a_n \longrightarrow \infty$. I completely forgot about the Stolz-Cesàro theorem, from which we get: $$\lim_{n\to \infty} \frac{a_n}{n}=\lim_{n\to\infty} \frac{a_{n+1}-a_{n}}{(n+1)-n}=\lim_{n\to \infty}\frac{\frac{1}{a_{n}}}{1}=\lim_{n\to \infty}\frac{1}{a_{n}}=0. $$ The same technique works for $\displaystyle \frac{a_{n}^2}{n}.$ Now that the homework is solved, here is some more investigation into this interesting sequence. I think we can show that $$\displaystyle a_{n}^2 \sim 2n + \dfrac{\log n}{2} - C$$ for some constant $\displaystyle C \gt 0$ By $\displaystyle x_n \sim y_n$ I mean $\displaystyle \lim_{n \to \infty} (x_n - y_n) = 0$ Consider $b_n = a_{n}^2 - 2n$ Then we have that $\displaystyle b_{n+1} = b_n + \dfrac{1}{b_n + 2n}$ Notice that $b_3 \gt 0$ and thus for sufficiently large $\displaystyle n$, $\displaystyle b_n \gt 0$. It is also easy to show that $\displaystyle b_n \lt 2n$.
In fact, we can easily show that $b_n \lt \log n$ Now we have that, for sufficiently large $\displaystyle m,n$ $\displaystyle b_{m+1} - b_n = \sum_{k=n}^{m} \dfrac{1}{b_k + 2k}$ Since $\displaystyle 0 \lt b_k \lt \log k$ we have that $\displaystyle \sum_{k=n}^{m} \dfrac{1}{2k} \gt b_{m+1} - b_n \gt \sum_{k=n}^{m} \dfrac{1}{2k}(1- \dfrac{b_k}{2k})$ (Here we used $\displaystyle \dfrac{1}{1+x} \gt \ \ 1-x, 1 \gt x \gt 0$) Now Since $b_k \lt \log k$, we have that $\displaystyle \sum_{k=n}^{m} \dfrac{1}{2k} \gt b_{m+1} - b_n \gt \sum_{k=n}^{m} \dfrac{1}{2k} - \sum_{k=n}^{m} \dfrac{\log k}{4k^2}$ Using the fact that $\displaystyle H_m - H_n = \log(\dfrac{m+1}{n}) + O(\dfrac{1}{n} - \dfrac{1}{m})$, where $\displaystyle H_n = \sum_{k=1}^{n} \dfrac{1}{k}$ is the $\displaystyle n^{th}$ harmonic number. We see that, if $c_n = b_n - \dfrac{\log n}{2}$, then $\displaystyle O(\dfrac{1}{n} -\dfrac{1}{m}) \gt c_{m+1} - c_n \gt O(\dfrac{1}{n} -\dfrac{1}{m}) -\sum_{k=n}^{m} \dfrac{\log k}{4k^2}$ Now $\displaystyle \sum_{k=1}^{\infty} \dfrac{\log k}{k^2}$ is convergent and so by the Cauchy convergence criteria, we have that $\displaystyle c_n$ is convergent. Thus the sequence $\displaystyle a_{n}^2 - 2n - \dfrac{\log n}{2}$ converges and hence, for some $\displaystyle C$ we have that $$\displaystyle a_{n}^2 \sim 2n + \dfrac{\log n}{2} - C$$ or in other words $$\displaystyle a_{n} \sim \sqrt{2n + \dfrac{\log n}{2} - C}$$ A quick (possibly incorrect) computer simulation seems to show a very slow convergence to $\displaystyle C = 1.47812676429749\dots$ The previous answer: Hint: Consider $(a_n)^2$ and try to apply similar reasoning as you did for $a_n$. 
As $a_{n+1}^2 = a_n^2 + 2 + \frac{1}{a_n^2}$ and $a_n \geq 1$, we know that $a_{n+1}^2 \leq a_n^2 + 3$. If $a_n^2 \leq 3n$, then $a_{n+1}^2 \leq 3(n+1)$, and since $a_1^2 = 1 \leq 3$, by induction $a_n^2 \leq 3n$ for all $n$. Thus, $\frac{a_n^2}{n^2} \leq \frac{3}{n}$ for all $n$, and hence $\lim_{n \rightarrow \infty} \frac{a_n^2}{n^2} = 0$, from which it follows that $\lim_{n \rightarrow \infty} \frac{a_n}{n} = 0$. Let's rewrite: $$\frac{a_{n+1}-a_n}{n+1-n}=\frac{1}{a_n}$$ Now, as $n$ goes to infinity, let us denote $a_n = f$. Then, approximately, $$\frac{d f}{dn}=\frac{1}{f}$$ After integrating: $$\frac{f^2}{2}=n+c \implies \frac{f^2}{n^2}=\frac{2}{n}+\frac{2c}{n^2}$$ Hint: prove by induction that $a_n \leq 2\sqrt{n}$.
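The limits claimed in the answers ($a_n/n \to 0$, $a_n^2/n \to 2$) and the finer asymptotic are easy to probe numerically. A minimal sketch, assuming the recurrence $a_{n+1} = a_n + 1/a_n$ with $a_1 = 1$ used throughout this thread (the helper name is mine):

```python
import math

def a_term(n):
    """Iterate a_{k+1} = a_k + 1/a_k from a_1 = 1 up to a_n."""
    a = 1.0
    for _ in range(n - 1):
        a += 1.0 / a
    return a

n = 100_000
a = a_term(n)
print(a / n)                              # tends to 0
print(a * a / n)                          # tends to 2
print(a * a - 2 * n - math.log(n) / 2)    # appears to settle near a constant
```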
I have the following question from Function Theory of One Complex Variable - Greene/Krantz: Give an example of a series of complex coefficients $ a_n$ such that $\lim_{N \to + \infty} \sum_{n= -N}^{N} a_n$ exists but $\sum_{-\infty}^{+\infty} a_n$ does not converge. The answer key I have says that $a_n = n$ answers the question. I understand that $\lim_{N \to + \infty} \sum_{n= -N}^{N} n = \lim_{N\to+\infty}[-N + (-N+1) +...+ -1 +0+1+...(N-1)+N] = 0$, as can be seen from each term cancelling. However, on the question of why $\sum_{-\infty}^{+\infty} n$ does not converge I'm a little stumped. Thinking about it intuitively, wouldn't you expect the same sort of cancellation of terms? If anyone can provide a formal proof of why this series doesn't converge, I'd be very grateful. Thanks in advance!
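For concreteness (a sketch of my own, not part of the original question): the symmetric partial sums of $a_n = n$ vanish by exact cancellation, but convergence of $\sum_{-\infty}^{+\infty} a_n$ requires each one-sided tail to converge separately, and here both diverge. Sliding one endpoint of the window makes the failure visible:

```python
def window_sum(lo, hi):
    """Partial sum of a_n = n over the window lo <= n <= hi."""
    return sum(range(lo, hi + 1))

# Symmetric windows cancel term by term:
print([window_sum(-N, N) for N in (1, 10, 100)])    # [0, 0, 0]

# But asymmetric windows grow without bound:
print([window_sum(-5, M) for M in (10, 100, 1000)])
```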
It is well-known that a polynomial $q \in \mathbb Z[t]$ vanishes modulo $p$ only if it lies in the ideal $J_p$ generated by $p$ and $t^p-t$. This means that either the degree is large (at least $p$) or the coefficients are large (divisible by $p$). Is there anything useful like this that one can say if a polynomial vanishes modulo $n$? For example, let $n=p_1 \cdots p_k$, where $p_1<\dots<p_k$ are different primes. (For me, this could be the list of all primes less than some number $x$ for example.) It is clear that if $q(t)$ vanishes modulo $n$, then $$q(t) \in J_{p_1} \cap \cdots \cap J_{p_k} = J_{p_1} \cdots J_{p_k}.$$ Examples are $q(t)=t\prod_{i=1}^k (t^{p_i-1}-1)$ or $q(t)=p_1 \cdots p_l$ or anything in the ideal generated by polynomials divisible for each $1 \leq i \leq k$ by either $p_i$ or $t^{p_i}-t$. Question:Is it true that again either the degree must necessarily be large or some coefficient (or let's say that sum of absolute values of coefficients) must be large? Here, large could mean for example comparable with $\sum_{i} p_i$ or $n$. It is easy to see that any polynomial $q(t) \in \mathbb Z[t]$ that vanishes modulo $n$ is such that $q(t)/n$ maps $\mathbb Z$ to $\mathbb Z$, and hence $$q(t) = \sum_{i} n a_i \binom{t}{i},$$ for some $a_i \in \mathbb Z$ - but I do not see how this helps. I also tried to apply Chebotarev density theorem (which together with the error analysis of Lagarias-Odlyzko gives a way to produce small primes modulo which a polynomial has to have a root), but the estimates are too coarse and do not seem to make efficient use of the assumptions on the polynomial. EDIT: Motivated by a discussion with David Speyer below, let me formulate a more precise question: Question:Let $f \in \mathbb Z[t]$ be a monic polynomial of degree $d$ that vanishes modulo all primes $\leq P$. Is it true that $d \log \|f\|_{\infty}$ cannot be much smaller than $P^2$? 
Here, $\|f\|_{\infty}$ denotes the maximum of the absolute value of the coefficients of $f$.
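As a quick sanity check on the example $q(t)=t\prod_{i=1}^k (t^{p_i-1}-1)$ (with small primes of my own choosing): modulo each $p_i$, the product contains the factors $t$ and $t^{p_i-1}-1$, whose product $t^{p_i}-t$ vanishes identically by Fermat's little theorem, so $q$ vanishes modulo $n$.

```python
from math import prod

primes = [2, 3, 5]
n = prod(primes)    # n = 30

def q(t):
    """q(t) = t * prod over p of (t**(p-1) - 1); degree 1 + sum(p-1) = 8 here."""
    return t * prod(t ** (p - 1) - 1 for p in primes)

# q vanishes modulo n at every integer:
print(all(q(t) % n == 0 for t in range(-100, 100)))   # True
```

Note the trade-off the question is about: this $q$ has tiny coefficients but degree $1+\sum_i(p_i-1)$.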
Reynolds Number - Blayne Sarazin Reynolds Number The Reynolds number is a dimensionless quantity in fluid mechanics that is used to help predict flow patterns in different fluid flow situations. The Reynolds number serves as a guide to the laminar-turbulent transition in a particular flow situation, 1 and for the scaling of similar but different-sized flow situations. The Reynolds number is often used to predict the velocity at which a certain fluid flow turns turbulent, and it can also be used to determine what flow regime the fluid in question is currently in. Calculation of the Reynolds number depends heavily on what type of fluid is being utilized, as well as through what type of channel (e.g. pipe, duct, open channel) this fluid is travelling. Figure 1 is a good example of a fluid experiencing all three types of flow: laminar at the bottom, transitional near the middle of the stream (if only very briefly), and turbulent flow towards the top. The concept was first introduced in 1851 by George Gabriel Stokes; 2 however, it was named by Arnold Sommerfeld in 1908 3 after Osborne Reynolds, who popularized its use in 1883. 4 Definition The Reynolds number is defined as the ratio of inertial forces to viscous forces in a flowing fluid. It is used in many fluid flow correlations and to describe the boundaries of fluid flow regimes (laminar, transitional and turbulent). 1 Viscous force is what tends to keep the layers moving smoothly. When these forces are sufficiently high, they damp out disturbances in the flow and we see what we call laminar flow. However, as velocity increases, inertial forces increase and particles are pushed out of the smoother path. This causes disturbances within the flow, and will eventually lead to what we call turbulent flow. 11 Determining whether a flow is laminar or turbulent is quite simple.
The Reynolds number tells us a great deal about the behavior of a given flow scenario. It is determined via the following equation:

[math] \mathrm{Re} = \frac{\rho v L}{\mu} [/math]

where

- [math]\rho[/math] is the density of the fluid (SI units: kg/m^3)
- v is the velocity of the fluid (SI units: m/s)
- L is the characteristic length, which varies depending on what the fluid is flowing through (SI units: m)
- [math] \mu [/math] is the dynamic viscosity of the fluid (SI units: N·s/m^2)

You will sometimes see the Reynolds number in a simplified version such as this:

[math] \mathrm{Re} = \frac{v L}{\nu} [/math]

where [math]\nu[/math] is simply the kinematic viscosity of the acting fluid (SI units: m^2/s), with [math] \nu = \mu/\rho [/math]. The Reynolds number can be used to determine if flow is laminar, transient or turbulent. 5 The flow is:

- laminar when Re < 2300
- transient when 2300 < Re < 4000
- turbulent when Re > 4000

Flow Types The Reynolds number can tell us the behavior of the flow we are analyzing. Each flow phase corresponds with a specific range of Reynolds numbers. The flow types can be broken down into the following three branches: Laminar Laminar flow corresponds with low velocities and Reynolds numbers less than 2300. 5 In this type of flow, the fluid flows in parallel layers, with no disruption between the layers. 7 At low enough velocities, the fluid will tend to flow without lateral mixing, while adjacent layers simply slide past one another. This can be especially important in microfluidics when you do not want lateral mixing. These phenomena are displayed well by the streamlines depicted in the laminar flow case of Figure 2. Note that the flow is very clean and without disturbance. Furthermore, laminar flow is entirely reversible: the fluid can be returned almost exactly to where it began when the flow is reversed.
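The definition and the regime boundaries above translate directly into a small helper (the function names are mine; the numeric example assumes water with a tabulated dynamic viscosity of about 1.002e-3 N·s/m^2):

```python
def reynolds(rho, v, length, mu):
    """Re = rho * v * L / mu, with all quantities in SI units."""
    return rho * v * length / mu

def regime(re):
    """Classify a flow using the boundaries quoted above."""
    if re < 2300:
        return "laminar"
    if re <= 4000:
        return "transient"
    return "turbulent"

# Water (rho = 1000 kg/m^3, mu = 1.002e-3 N*s/m^2) creeping at 5 cm/s
# through a 10 mm pipe:
re = reynolds(1000, 0.05, 0.010, 1.002e-3)
print(re, regime(re))   # about 499 -> laminar
```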
Transitional (transient) Transitional or transient flow is the phase of flow that occurs between laminar and turbulent flow, and corresponds to Reynolds numbers between 2300 and 4000 5. In this type of flow, there is a mixture of laminar and turbulent flow present. As the Reynolds number increases from 2300 to ~4000, an increasing number of disturbances appear within the flow. Turbulent Turbulent flow is the most common form of flow in nature, and corresponds to Reynolds numbers above 4000 5. Turbulent flow is chaotic and unpredictable, and is often seen with fluids at high velocities. The flow undergoes irregular fluctuations, or mixing, and continuously changes magnitude and direction 6. As can be seen from Figure 2, the sphere in the upper portion of the figure has steady streamlines in front of it, but severe eddy (vortex) formation behind it. Since turbulent flow is much harder to measure than laminar flow, experimental tools such as a hot wire probe 10 must be used in order to obtain good results. A hot wire probe is a device with a very fast response time, whose probe can respond to temperature changes within 1 millisecond, which makes it a good candidate for measuring flows (such as air) that experience constant, rapid change as we see in turbulent flow. Flow Scenarios Calculation of the Reynolds number for a given fluid depends on several things such as speed, density, and viscosity. Two of these are properties of the fluid itself and are usually easy to look up. Speed is often given, or is one of the things you are trying to solve for if initially given a Reynolds number. However, the characteristic length changes based upon what the fluid is flowing through.
The following sections will describe how the calculation of the characteristic length changes, and will show the equations to use in each of these situations. Pipe Flow Let's say you have water flowing through a pipe with a diameter of 25 millimeters at a speed of 5 meters per second. If we take a look at the equation for the Reynolds number defined above, we simply need to replace the characteristic length term with D, which denotes the diameter of the pipe. The equation then looks as such: [math] \mathrm{Re} = \frac{\rho v D}{\mu} [/math] where D is the diameter of the pipe (SI units: m) For this specific scenario, with the dynamic viscosity of water being about 1.002×10^-3 N·s/m^2, the equation with the aforementioned values plugged in becomes: [math] \mathrm{Re} = \frac{(1000 kg/m^3)(5 m/s)(0.025 m)}{1.002\times 10^{-3} Ns/m^2} \approx 1.25\times 10^{5} [/math] This is turbulent flow based on the previously defined boundaries. To obtain laminar flow, the velocity or the diameter of the pipe would have to be reduced by well over an order of magnitude. Duct You can imagine a duct as either a square or rectangular structure through which the fluid is flowing. For this, the calculation of the Reynolds number changes slightly, because we use something called the hydraulic diameter as the characteristic length. 5 The hydraulic diameter, [math]D_h[/math], is defined by the following equation 5: [math] \mathrm{D_h} = \frac{4A}{U} [/math] where, [math] D_h [/math] is the hydraulic diameter of the duct. A is the cross-sectional area of the duct. U is the wetted perimeter of the duct. While the U term may be somewhat confusing because it is not simply the perimeter, the wetted perimeter just means the part of the cross-sectional perimeter that is in contact with the acting fluid.
For a rectangular duct with side lengths a and b, this U term becomes [math] 2*(a+b) [/math] So the [math] D_h [/math] term turns into [math] \mathrm{D_h} = \frac{4*(a*b)}{2*(a+b)} [/math] which simplifies into the following equation we use to calculate the hydraulic diameter: [math] \mathrm{D_h} = \frac{2*(a*b)}{a+b} [/math] Therefore, our original equation for the Reynolds number takes the following form when we are dealing with a rectangular duct: [math] \mathrm{Re} = \frac{\rho v D_h}{\mu} [/math] Open Channel An open channel or duct behaves similarly to the previously mentioned closed duct. The only thing that changes in this case is the wetted perimeter. The following problem explores the small difference between open and closed ducts: The perfect example of an open duct that transports a fluid is an aqueduct, which was used to transport water for various reasons. Imagine we have an aqueduct with the dimensions shown in Figure 3 and we want to calculate the Reynolds number for water flowing at 25 meters per second. The hydraulic diameter for this case is simply: [math] \mathrm{D_h} = \frac{4*(2*1)}{2+1+1} = 2 [/math] Note that the wetted perimeter for an open channel differs slightly because only three walls are used when calculating the hydraulic diameter. This is the only difference between open and closed ducts. [math] \mathrm{Re} = \frac{(1000 kg/m^3)(25 m/s)(2 m)}{1.002\times 10^{-3} Ns/m^2} \approx 4.99\times 10^{7} [/math] Therefore, the flow within this duct is fully developed turbulent flow. Reynolds Number in Microscopic Flow As far as the Reynolds number in microfluidics is concerned, flow is almost always laminar: almost without exception, microfluidic devices operate at extremely small Reynolds numbers. The reason the Reynolds numbers are kept so low is a combination of low velocities and small channel sizes. Typical velocities range from 1 micrometer per second up to 1 centimeter per second, while channel radii range from 1-100 micrometers.
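The closed duct and the open channel differ only in the wetted perimeter, which a short helper makes explicit (the names are mine; a is the channel width and b the depth):

```python
def hydraulic_diameter(a, b, open_top=False):
    """D_h = 4A/U for a rectangular cross-section of width a and depth b.

    A closed duct wets all four walls (U = 2(a+b)); an open channel
    excludes the free surface, so U = a + 2b."""
    area = a * b
    wetted = a + 2 * b if open_top else 2 * (a + b)
    return 4 * area / wetted

print(hydraulic_diameter(2, 1, open_top=True))   # the aqueduct above: 2.0
print(hydraulic_diameter(2, 1))                  # same cross-section, closed: 8/6
```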
12 Constrained by these dimensions, the Reynolds number lands safely within the laminar regime: the viscous forces dominate the inertial forces, resulting in smooth, laminar flow. In Physiology (Hemodynamics) Hemodynamics is the term used specifically for blood flow within the human body. 14 When considering how the Reynolds number works in the body, we must first address a few important things about flow within the body. When dealing with the circulatory system, the following key differences must be addressed: 14

- Blood is a non-Newtonian fluid, meaning that blood viscosity is not constant.
- Flow in the body is pulsatile: the flow moves forward in pulses, and between the pulses the flow actually reverses direction for a very short period of time.
- Blood vessels are elastic "pipes" whose shapes and diameters constantly change.
- Calculating the Reynolds number can only be done locally, and is not representative of the flow everywhere in the body at that time.

For this application, one would treat arteries like pipes. Further measurements may be necessary to obtain the flow velocity. With that being said, blood flow in the body is generally laminar, with the exception usually occurring in the ascending aorta, where the flow can be disrupted and turn turbulent. 15 The location of the ascending aorta can be seen in Figure 5. This turbulent flow can also occur in large arteries at branch points as well as across stenotic heart valves. Ideally, the critical Reynolds number is high enough that turbulent flow is not common in the circulatory system. Turbulent blood flow within the circulatory system is never a good thing, with turbulent flow being linked to heart murmurs 15 and aneurysm formation at arterial branch points. 16 References 1. neutrium.net/fluid_flow/reynolds-number/ 2. Stokes, George (1851). "On the Effect of the Internal Friction of Fluids on the Motion of Pendulums". Transactions of the Cambridge Philosophical Society. 9: 8–106.
3. Sommerfeld, Arnold (1908). "Ein Beitrag zur hydrodynamischen Erklärung der turbulenten Flüssigkeitsbewegungen (A Contribution to the Hydrodynamic Explanation of Turbulent Fluid Motions)". International Congress of Mathematicians. 3: 116–124 4. Reynolds, Osborne (1883). "An experimental investigation of the circumstances which determine whether the motion of water shall be direct or sinuous, and of the law of resistance in parallel channels". Philosophical Transactions of the Royal Society. 174 (0): 935–982 5. www.engineeringtoolbox.com/reynolds-number-d_237.html 6. abyss.uoregon.edu/~js/glossary/turbulent_flow.html 7. Batchelor, G. (2000). Introduction to Fluid Mechanics 8. www.nuclear-power.net/nuclear-engineering/fluid-dynamics/laminar-flow-viscous/ 9. en.wikipedia.org/wiki/Reynolds_number 10. web.mst.edu/~cottrell/ME240/Resources/Fluid_Flow/Fluid_flow.pdf 11. http://www.uobabylon.edu.iq/eprints/paper_2_2117_1369.pdf 12. Squires, Todd (2004). "Microfluidics: Fluid Physics at the Nanoliter Scale". Review of Modern Physics. 77: 977. 13. Cheng, D. (2007). "Laminar Flow in Microfluidic Channels". Expo 2007. Department of Electrical and Computer Engineering, University of Wisconsin-Madison. 14. http://www.sci.utah.edu/~macleod/bioen/be6000/notes/L09-hemo.pdf 15. http://www.cvphysiology.com/Hemodynamics/H007 16. Foutrakis GN, Yonas H, Sclabassi RJ. "Saccular aneurysm formation in curved and bifurcating arteries". AJNR Am J Neuroradiol. 1999 Aug;20(7):1309-17.
How would you go about explaining i.i.d. (independent and identically distributed) to non-technical people? It means "independent and identically distributed". A good example is a succession of throws of a fair coin: the coin has no memory, so all the throws are "independent". And every throw is 50:50 (heads:tails), so the coin is and stays fair - the distribution from which every throw is drawn, so to speak, is and stays the same: "identically distributed". A good starting point would be the Wikipedia page. Nontechnical explanation: Independence is a very general notion. Two events are said to be independent if the occurrence of one does not give you any information as to whether the other event occurred or not. In particular, the probability that we ascribe to the second event is not affected by the knowledge that the first event has occurred. Example of independent events, possibly identically distributed Consider tossing two different coins one after the other. Assuming that your thumb did not get unduly tired when it flipped the first coin, it is reasonable to assume that knowing that the first coin toss resulted in Heads in no way influences what you think the probability of Heads on the second toss is. The two events $$\{\text{first coin toss resulted in Heads}\}~~\text{and}~~\{\text{second coin toss resulted in Heads}\}$$ are said to be independent events. If we know, or obstinately insist, that the two coins have different probabilities of resulting in Heads, then the events are not identically distributed. If we know or assume that the two coins have the same probability $p$ of coming up Heads, then the above events are also identically distributed, meaning that they both have the same probability $p$ of occurring. But notice that unless $p = \frac 12$, the probability of Heads does not equal the probability of Tails.
As noted in one of the Comments, "identical distribution" is not the same as "equally probable." Example of identically distributed nonindependent events Consider an urn with two balls in it, one black and one white. We reach into it and draw out the two balls one after the other, choosing the first one at random (and this of course determines the color of the next ball). Thus, the two equally likely outcomes of the experiment are (White, Black) and (Black, White), and we see that the first ball is equally likely to be Black or White, and the second ball is also equally likely to be Black or White. In other words, the events $$\{\text{first ball drawn is Black}\}~~\text{and}~~\{\text{second ball drawn is Black}\}$$ certainly are identically distributed, but they are definitely not independent events. Indeed, if we know that the first event has occurred, we know for sure that the second cannot occur. Thus, while our initial evaluation of the probability of the second event is $\frac 12$, once we know that the first event has occurred, we had best revise our assessment of the probability that the second ball drawn is black from $\frac 12$ to $0$. A random variable is a variable which carries the probabilities of all possible outcomes in a scenario. For example, let's create a random variable which represents the number of heads in 100 coin tosses. The random variable will contain the probability of getting 1 heads, 2 heads, 3 heads, ... all the way to 100 heads. Let's call this random variable X. If you have two random variables, then they are IID (independent and identically distributed) if: (1) they are independent - as explained above, independence means the occurrence of one event does not provide any information about the other event; for example, if I get 100 heads after 100 flips, the probabilities of getting heads or tails in the next flip are the same; and (2) each random variable shares the same distribution. For example, let's take the random variable from above, X.
Let's say X represents Obama about to flip a coin 100 times. Now let's say Y represents a Priest about to flip a coin 100 times. If Obama and the Priest flip coins with the same probability of landing on heads, then X and Y are considered identically distributed. If we sample repeatedly from either the Priest or Obama, then the samples are considered identically distributed. Side note: Independence also means you can multiply probabilities. Let's say the probability of heads is p; then the probability of getting two heads in a row is p*p, or p^2. That two dependent variables can have the same distribution can be shown with this example: Assume two successive experiments, each involving 100 tosses of a biased coin, where the total number of Heads is modeled as a random variable X1 for the first experiment and X2 for the second experiment. X1 and X2 are binomial random variables with parameters 100 and p, where p is the bias of the coin. As such, they are identically distributed. However, they are not independent, since the value of the former is quite informative about the value of the latter. That is, if the result of the first experiment is 100 Heads, this tells us a lot about the bias of the coin and therefore gives us a lot of new information regarding the distribution of X2. Still, X2 and X1 are identically distributed, since they are derived from the same coin. What is also true is that if two random variables are dependent then the posterior of X2 given X1 will never be the same as the prior of X2, and vice versa; when X1 and X2 are independent, their posteriors are equal to their priors. Therefore, when two variables are dependent, the observation of one of them results in revised estimates regarding the distribution of the second. Still, both may be from the same distribution; it is just that we learn more about the nature of this distribution in the process.
So returning to the coin toss experiments: initially, in the absence of any information, we might assume that X1 and X2 follow a binomial distribution with parameters 100 and 0.5. But after observing 100 Heads in a row we would certainly revise our estimate of the $p$ parameter to make it quite close to 1. An aggregation of several random draws from the same distribution. An example being pulling a marble out of a bag 10,000 times and counting the times you pull the red marble out. If a random variable $X$ comes from a population having (say) a normal distribution, that is, its pdf (probability density function) is that of a normal distribution, with a population average $\mu=3$ and population variance $\sigma^2=4$ (the numbers are hypothetical and are just for your understanding and to simplify comparisons), we can describe it as follows: $X \sim N(3 , 4)$. Now if we have another random variable $Y$ which is also normally distributed with $Y \sim N(3, 4)$, then $X$ and $Y$ are identically distributed. Nevertheless, being identically distributed does not necessarily imply independence.
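The urn example above can be checked with a short simulation (a sketch; the helper function and sample size are my own choices):

```python
import random

def urn_draws(trials, seed=0):
    """Simulate drawing both balls, in random order, from an urn
    holding one black ("B") and one white ("W") ball."""
    rng = random.Random(seed)
    first_black = second_black = both_black = 0
    for _ in range(trials):
        balls = ["B", "W"]
        rng.shuffle(balls)  # pick the draw order uniformly at random
        first_black += balls[0] == "B"
        second_black += balls[1] == "B"
        both_black += balls[0] == "B" and balls[1] == "B"
    return first_black / trials, second_black / trials, both_black / trials

p1, p2, p12 = urn_draws(100_000)
# p1 and p2 both hover near 1/2: the two draws are identically distributed.
# p12 is exactly 0, not 1/4: the draws are not independent.
print(p1, p2, p12)
```

If the draws were independent we would see p12 ≈ p1 · p2 = 1/4; the fact that it is 0 is precisely the dependence described above.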
In a paper by Joos and Zeh, Z Phys B 59 (1985) 223, they say:This 'coming into being of classical properties' appears related to what Heisenberg may have meant by his famous remark [7]: 'Die "Bahn" entsteht erst dadurch, dass wir sie beobachten.' [roughly: "The 'path' only comes into existence because we observe it."] Google Translate says this means something ... @EmilioPisanty Tough call. It's technical language, so you wouldn't expect every German speaker to be able to provide a correct interpretation—it calls for someone who knows how German is used in talking about quantum mechanics. Litmus are a London-based space rock band formed in 2000 by Martin (bass guitar/vocals), Simon (guitar/vocals) and Ben (drums), joined the following year by Andy Thompson (keyboards, 2001–2007) and Anton (synths). Matt Thompson joined on synth (2002–2004), while Marek replaced Ben in 2003. Oli Mayne (keyboards) joined in 2008, then left in 2010, along with Anton. As of November 2012 the line-up is Martin Litmus (bass/vocals), Simon Fiddler (guitar/vocals), Marek Bublik (drums) and James Hodkinson (keyboards/effects). They are influenced by mid-1970s Hawkwind and Black Sabbath, amongst others. They... @JohnRennie Well, they repeatedly stressed their model is "trust work time" where there are no fixed hours you have to be there, but unless the rest of my team are night owls like I am I will have to adapt ;) I think u can get a rough estimate: COVFEFE is 7 characters, and the probability of a given 7-character string being exactly that is $(1/26)^7\approx 1.2\times 10^{-10}$, so I guess you would have to type roughly ten billion characters to start getting a good chance that COVFEFE appears. @ooolb Consider the hyperbolic space $H^n$ with the standard metric.
Compute $$\inf\left\{\left(\int u^{2n/(n-2)}\right)^{-(n-2)/n}\left(4\frac{n-1}{n-2}\int|\nabla u|^2+\int Ru^2\right): u\in C^\infty_c\setminus\{0\}, u\ge0\right\}$$ @BalarkaSen sorry if you were in our discord you would know @ooolb It's unlikely to be $-\infty$ since $H^n$ has bounded geometry so Sobolev embedding works as expected. Construct a metric that blows up near infinity (incomplete is probably necessary) so that the inf is in fact $-\infty$. @Sid Eating glamorous and expensive food on a regular basis and not as a necessity would mean you're embracing consumer fetish and capitalism, yes. That doesn't inherently prevent you from being a communist, but it does have an ironic implication. @Sid Eh. I think there's plenty of room between "I think capitalism is a detrimental regime and think we could be better" and "I hate capitalism and will never go near anything associated with it", yet the former is still conceivably communist. Then we can end up with people arguing in favor of "Communism" who distance themselves from, say, the USSR and red China, and people arguing in favor of "Capitalism" who distance themselves from, say, the US and the European Union. since I come from a rock n' roll background, the first thing is that I prefer a tonal continuity. I don't like beats as much as I like a riff or something atmospheric (that's mostly why I don't like a lot of rap) I think I liked Madvillainy because it had nonstandard rhyming styles and Madlib's composition Why is the graviton spin 2, beyond hand-waving? My sense is, you do the gravitational waves thing of reducing $R_{\mu\nu} = 0$ to $g^{\mu \nu} g_{\rho \sigma,\mu \nu} = 0$ for a weak gravitational field in harmonic coordinates, with solution $g_{\mu \nu} = \varepsilon_{\mu \nu} e^{ikx} + \varepsilon_{\mu \nu}^* e^{-ikx}$, then magic?
Displacement and Acceleration

Arrays of satellites cannot maintain a constant displacement above or below the orbital plane without a constant (and significant) acceleration keeping them there. The ΔV needed over the lifetime of a satellite is beyond the range of high Isp engines. Further, it is unnecessary; by using constellations that evolve and rotate in three dimensions along the orbit, the same results (high gain main lobe, suppressed sidelobes) can be achieved. It is important to work with orbital mechanics, not fight nature and physics. Imagine an attempt to permanently "hover" on the "north" side of a circular equatorial orbit. All Kepler earth orbits are mapped onto planes which intersect the center of the earth, and our hovering object is actually on a different, slightly inclined orbit. If the object is free to follow the orbital mechanics without an external force, it will follow a slightly different inclination circular orbit that intersects the original orbital plane. If the displacement of the unconstrained orbit differs from the equatorial orbit by $d = D \sin( \omega t )$, then the acceleration is $a = - \omega^2 D \sin( \omega t )$. Since $\omega^2 = \mu / R^3$ for a circular orbit without the $J_2$ perturbation, the peak acceleration is $\mu D / R^3$. For $\mu$ = 398600.44 km³/s², the acceleration at maximum displacement is given for various altitudes in the table below, as well as the accumulated $\Delta V$ over a 10 year mission, per meter of displacement $D$:

  Altitude (km)  | Radius (km) | Acceleration (μm/s²) | 10 year ΔV (m/s)
     200         |    6578     |       1.400          |      441.9
     600         |    6978     |       1.173          |      370.2
    1000         |    7378     |       0.992          |      313.2
    2000         |    8378     |       0.678          |      213.9
    6411 (m288)  |   12789     |       0.191          |       60.1
   35786 (GEO)   |   42164     |       0.0053         |        1.67

These numbers are multiplied by the displacement; a 13 meter displacement at 600 km altitude requires 4812 m/s $\Delta V$ over 10 years.
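The table above can be reproduced with a few lines of code (a sketch; the ΔV column matches an assumption of continuous thrust at the peak acceleration, and the function name is mine):

```python
import math

MU = 3.9860044e14  # Earth's gravitational parameter, m^3/s^2

def hover_cost(radius_m, displacement_m=1.0, years=10.0):
    """Peak acceleration (m/s^2) needed to hold a displacement D off the
    plane of a circular orbit of radius R, and the delta-V accumulated by
    thrusting continuously at that level for the given mission time.
    (A sketch; acceleration is mu*D/R^3, per the text above.)"""
    accel = MU * displacement_m / radius_m ** 3
    dv = accel * years * 365.25 * 86400  # seconds in the mission
    return accel, dv

# 600 km altitude (R = 6978 km), 1 m displacement:
a, dv = hover_cost(6978e3)
print(a * 1e6, dv)   # ~1.173 um/s^2, ~370 m/s over 10 years

# A 13 m displacement at 600 km over 10 years:
print(13 * dv)       # ~4812 m/s
```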
Even with a high efficiency VASIMR engine, the vehicle could be entirely fuel tanks, solar cells, and radiators, and have a hard time making that much $\Delta V$ over a decade. And you can't do a north-south hover with electrodynamic tethers or Lorentz forces; both work across magnetic field lines, not along them.

Alternatives to Continuous Displacements

Though it is more complicated to think about, it is a lot easier to deploy three dimensional arrays that naturally rotate and evolve as they orbit. Right angles do not stay right angles in these arrays, but relative positions can be very precise and predictable, with only infinitesimal corrections for $J_2$ effects and somewhat larger corrections for light pressure. Indeed, by modulating the light pressure with electrochromic panels, you can keep an array predictably positioned within fractions of a micron. The array evolutions will complicate the analysis, but an orbiting array turns at about a picoradian per microsecond, so the array has plenty of time to retarget communication beams and perform position tweaks. The main principle: work with nature and exploit its behavior; do not attempt to fight nature. While there are many ways for arrays to rotate, it is easiest to pick a uniform mapping on a torus. We need to slightly change eccentricity if individual objects are spaced apart as they rotate through the main orbital plane, and an eccentric orbit compared to a circular orbit will skew forwards and back in the orbital direction: With this in mind, we can map slightly inclined, slightly elliptic orbits onto a torus around a central circular orbit. In general, we can also map an eccentric "torus" around a central elliptic orbit, necessary to accommodate light pressure. The objects in this orbit will skew forwards and backwards as the array rotates, once per orbit: Think of the colored dots as representing a cartesian grid, rotated and skewed towards apogee.
We can map more than a "cubic" array on this grid; for example, we can arrange our objects as a geodesic sphere mapped onto this apogee-skewed grid: gsr03.c source, gsr03 linux binary (not sure about dependencies). Click here for higher resolution. The size of the orbiting array in the upper left (relative to earth and the orbit) is extremely exaggerated. Flattening the array in the radial direction reduces apogee skew, but reduces sunlight at the 6 o'clock positions in orbit. Someday, draw the view from the sun, estimating illumination. There may be something wrong with the radio energy plot; I would expect it to skew to the upper left for an array skewed to the upper right. A "real" array will have 8,000 or more thinsats, perhaps as many as millions. These semi-symmetric, elliptical arrays generate amplitude patterns that vaguely resemble a $\sin(R)/R$ radial dropoff. However, a more complete (and numerically challenging) analysis will be needed to look for distant grating lobes. Not too distant, fortunately: because thinsats are themselves covered with an array of slot emitters, they will beamform within about 3 degrees, so we need "merely" to look at a 1000x1000 kilometer patch for grating lobes. Also, keep in mind that wide bandwidth pulses will be smeared radially. These are all uniform emitters making an Airy-disk-like pattern; we can probably change the weighting of edges to centers and generate a Gaussian taper. Indeed, we can dynamically change weightings in order to place nulls over nearby receivers that we do not want to interfere with. This animation is 9% of the minimum sized "real" array. gsr02.c source (array is stretched in the x, orbital, direction), gsr02 linux binary (not sure about dependencies). Click here for higher resolution. This animation takes 4 hours to compute on one core of a 3GHz Pentium using double precision math. 99% of the work is a small, tight loop with a sin() and a cos() in it.
It would go a LOT faster on the numeric array processor of an nVidia video card. Anybody want to learn CUDA?
1 Laplace transform

The Laplace transform is an essential tool in linear dynamic system modeling and control system engineering. A function $F(s)$ of the complex variable $s=\sigma+j\omega$ is called the Laplace transform of the original function $f(t)$ and is defined in the following way: $$F(s)=\mathcal{L}\{f(t)\}=\int_{0}^{\infty}f(t)\,e^{-st}\,dt.\qquad(1)$$ The original function $f(t)$ can be recovered from the Laplace transform by applying the inverse Laplace transform, defined as $$f(t)=\mathcal{L}^{-1}\{F(s)\}=\frac{1}{2\pi j}\int_{c-j\infty}^{c+j\infty}F(s)\,e^{st}\,ds,\qquad(2)$$ where $c$ is greater than the real part of all the poles of the function $F(s)$ [1]. Assuming zero initial conditions, the Laplace transform of a generalized fractional-order operator is given by $$\mathcal{L}\{\mathcal{D}^{\pm\alpha}f(t)\}=s^{\pm\alpha}F(s).\qquad(3)$$ It should be noted that if initial conditions are not zero, different definitions apply for the Riemann-Liouville, Caputo and Grünwald-Letnikov fractional-order operators.

2 Fractional-order models

A fractional-order continuous-time dynamic system can be expressed by a fractional differential equation of the following form: $$a_{n}\mathcal{D}^{\alpha_{n}}y(t)+\cdots+a_{1}\mathcal{D}^{\alpha_{1}}y(t)+a_{0}\mathcal{D}^{\alpha_{0}}y(t)=b_{m}\mathcal{D}^{\beta_{m}}u(t)+\cdots+b_{1}\mathcal{D}^{\beta_{1}}u(t)+b_{0}\mathcal{D}^{\beta_{0}}u(t).\qquad(4)$$ Applying the Laplace transform to (4) with zero initial conditions, the input-output representation of the fractional-order system can be obtained in the form of a transfer function: $$G(s)=\frac{b_{m}s^{\beta_{m}}+\cdots+b_{1}s^{\beta_{1}}+b_{0}s^{\beta_{0}}}{a_{n}s^{\alpha_{n}}+\cdots+a_{1}s^{\alpha_{1}}+a_{0}s^{\alpha_{0}}}.\qquad(5)$$ In the case of a system with commensurate order $\gamma$, where all derivative orders are integer multiples of $\gamma$, and taking $\lambda = s^{\gamma}$, the continuous-time transfer function can be represented as a pseudo-rational function $H(\lambda)$: $$H(\lambda)=\frac{\sum_{k=0}^{m}b_{k}\lambda^{k}}{\sum_{k=0}^{n}a_{k}\lambda^{k}}.\qquad(6)$$ Based on the concept of the pseudo-rational function, a state-space representation can be established in the form: $$\mathcal{D}^{\gamma}x(t)=Ax(t)+Bu(t),\qquad y(t)=Cx(t)+Du(t).\qquad(7)$$

3 Basic fractional-order system analysis

The fractional transfer function $G(s)=Z(s)/P(s)$ is stable if and only if the following condition (Matignon's stability theorem [2]) is satisfied in the $\sigma$-plane: $$|\arg(\sigma_{k})|>q\,\frac{\pi}{2},\qquad\forall\,\sigma_{k}\in\mathbb{C}:\ P(\sigma_{k})=0,$$ where $\sigma:=s^{q}$. When $\sigma=0$ is a single root of $P(\sigma)$, the system cannot be stable. For $q=1$, this is the classical theorem of pole location in the complex plane: no pole is in the closed right half-plane of the first Riemann sheet.
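Matignon's condition $|\arg(\sigma)| > q\pi/2$ is easy to check numerically once the roots of the pseudo-polynomial $P(\sigma)$ are known (a sketch; the helper and the example polynomials are my own):

```python
import cmath
import math

def matignon_stable(sigma_roots, q):
    """Matignon's theorem: a commensurate-order system of order q (0 < q <= 1)
    is stable iff every root sigma of the pseudo-polynomial P(sigma),
    with sigma = s**q, satisfies |arg(sigma)| > q*pi/2 (and sigma != 0)."""
    for sigma in sigma_roots:
        if sigma == 0 or abs(cmath.phase(sigma)) <= q * math.pi / 2:
            return False
    return True

# lambda^2 + lambda + 1 has roots exp(+/- i*2*pi/3):
roots = [cmath.exp(2j * math.pi / 3), cmath.exp(-2j * math.pi / 3)]
print(matignon_stable(roots, 0.5))   # True: 2*pi/3 > pi/4
print(matignon_stable(roots, 1.0))   # True: left half-plane, 2*pi/3 > pi/2

# lambda^2 - lambda + 1 has roots exp(+/- i*pi/3): unstable for q = 1,
# yet stable for q = 0.5, since pi/3 > pi/4 but pi/3 < pi/2.
roots2 = [cmath.exp(1j * math.pi / 3), cmath.exp(-1j * math.pi / 3)]
print(matignon_stable(roots2, 1.0))  # False
print(matignon_stable(roots2, 0.5))  # True
```

The last example illustrates the point of the theorem: a fractional order $q<1$ enlarges the stability region beyond the left half-plane.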
In general, for a commensurate-order fractional-order system in the form $$\mathcal{D}^{q}w(t)=f(w(t)),\qquad(8)$$ where $0<q<1$ and $w\in\mathbb{R}^{n}$, the equilibrium points are calculated by solving $$f(w)=0.$$ The equilibrium points are asymptotically stable if all the eigenvalues $\lambda_{k}$ of the Jacobian matrix $J=\partial f/\partial w$, evaluated at the equilibrium, satisfy the condition $$|\arg(\lambda_{k})|>q\,\frac{\pi}{2}.$$ Alternatively, the stability condition can also be evaluated from the state-space representation of the system (8): $$|\arg(\mathrm{eig}(A))|>q\,\frac{\pi}{2},$$ where $0<q<1$ and $\mathrm{eig}(A)$ represents the eigenvalues of the state-space matrix $A$. Stability regions of a fractional-order system are shown in Figure 1. Time-domain analysis of fractional-order models can be conducted by using the definitions presented in the introduction. Specifically, a numerical approach is available through the Grünwald-Letnikov definition. In the case of the frequency domain, the direct substitution $s=j\omega$ can be applied and the corresponding characteristics obtained directly. Practically all frequency-domain analysis methods are applicable.

4 Integer-order approximation of fractional operators

Using fractional-order operator approximations can be beneficial due to the abundance of tools available for regular transfer function analysis and modeling. We suggest using a very flexible approximation technique, proposed in [3], called the Oustaloup recursive filter method. It is summarized below. To approximate a fractional-order operator $s^\gamma$ for $0<\gamma<1$ over a frequency band $[\omega_b,\omega_h]$, one can use the following set of formulae: $$s^{\gamma}\approx K\prod_{k=-N}^{N}\frac{s+\omega_{k}'}{s+\omega_{k}},$$ where $$\omega_{k}'=\omega_b\left(\frac{\omega_h}{\omega_b}\right)^{\frac{k+N+\frac{1}{2}(1-\gamma)}{2N+1}},\qquad \omega_{k}=\omega_b\left(\frac{\omega_h}{\omega_b}\right)^{\frac{k+N+\frac{1}{2}(1+\gamma)}{2N+1}},\qquad K=\omega_h^{\gamma}.$$ For fractional orders $\alpha$ such that $\alpha\geq 1$ it holds that $$s^{\alpha}=s^{n}s^{\gamma},$$ where $n=\alpha-\gamma$ denotes the integer part of $\alpha$ and $s^\gamma$ is obtained through the Oustaloup approximation.

5 Discretization

Discretization is essential in digital implementation of controllers. Several discretization methods have been developed for continuous fractional-order models.
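As a numerical illustration of the Oustaloup recursive filter described in the previous section (a sketch; the band $[\omega_b,\omega_h]$, the order $N$, and the function names are my own choices):

```python
import math

def oustaloup(gamma, wb, wh, N):
    """Zeros, poles, and gain of the Oustaloup recursive approximation of
    s**gamma (0 < gamma < 1) over the band [wb, wh], with 2N+1 zero/pole pairs."""
    ratio = wh / wb
    zeros = [wb * ratio ** ((k + N + 0.5 * (1 - gamma)) / (2 * N + 1))
             for k in range(-N, N + 1)]
    poles = [wb * ratio ** ((k + N + 0.5 * (1 + gamma)) / (2 * N + 1))
             for k in range(-N, N + 1)]
    K = wh ** gamma
    return zeros, poles, K

def freq_mag(zeros, poles, K, w):
    """|H(j*w)| for the zero-pole-gain filter."""
    num = math.prod(abs(complex(0, w) + z) for z in zeros)
    den = math.prod(abs(complex(0, w) + p) for p in poles)
    return K * num / den

# Approximate s**0.5 over [1e-2, 1e2] rad/s; inside the band the magnitude
# should track the ideal w**0.5 closely (small ripple between pole/zero pairs).
z, p, K = oustaloup(0.5, 1e-2, 1e2, 4)
for w in (0.1, 1.0, 10.0):
    print(w, freq_mag(z, p, K, w), math.sqrt(w))
```

Below $\omega_b$ and above $\omega_h$ the approximation flattens out to the constants $\omega_b^\gamma$ and $\omega_h^\gamma$, which is why the band must be chosen to cover the frequency range of interest.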
These include FIR (finite impulse response) and IIR (infinite impulse response) filter realizations. The latter is preferred to the former, because the IIR implementation will be of lower order. The following method for obtaining a discrete-time approximation of fractional models can be proposed. First, approximate the continuous-time fractional model by a rational-order transfer function $G_c(s)$ using the Oustaloup recursive filter method. Then, use a discrete transformation with a sample period $T$ to obtain a discrete approximation $G_d(z)$ of the fractional model. Difficulties may still arise if the filter is expected to precisely reflect the desired frequency-domain specifications, due to the high order of the resulting discrete model. In the case of IIR filters, the direct form implementation may lead to computational instability. It is essential to improve computational stability by using a proper realization method.

References

[1] C. A. Monje, Y. Chen, B. Vinagre, D. Xue, and V. Feliu, Fractional-order Systems and Controls: Fundamentals and Applications, ser. Advances in Industrial Control. Springer Verlag, 2010.
[2] D. Matignon, "Generalized Fractional Differential and Difference Equations: Stability Properties and Modeling Issues," in Proc. of Math. Theory of Networks and Systems Symposium, 1998, pp. 503–506.
[3] A. Oustaloup, P. Melchior, P. Lanusse, O. Cois, and F. Dancla, "The CRONE toolbox for Matlab," in Proc. IEEE Int. Symp. Computer-Aided Control System Design CACSD 2000, 2000, pp. 190–195.
I am giving a talk on String theory to a math undergraduate audience. I am looking for a nice and surprising mathematical computation, maybe just a surprising series expansion, which is motivated by string theory and which can be motivated and explained relatively easily. Examples of what I have in mind are the results in Dijkgraaf's "Mirror symmetry and the elliptic curve", or the "genus expansion" of the MacMahon function (aka DT/GW for affine three-space), but I am not sure I can fit either into the time I have. Any thoughts? Two counting problems -- from my own very biased and personal viewpoint -- that can perhaps be motivated: Counting triangles on the torus = theta function relation for elliptic curve. (I tried to squeeze this into a public lecture one time.) Counting symmetric polynomials of degree k in 24 variables = partition function of chiral bosonic string => counting curves on K3 by heterotic duality: 24, 24 + 24*25/2 = 324, etc. But these can't beat calculating an actual partition function (as in Richard Eager's answer), unless you're trying to emphasize mathiness. Ginsparg's Applied Conformal Field Theory (hep-th/9108028, section 7.6) has a nice proof of the Jacobi triple-product formula and Euler's pentagonal number theorem. The equalities can be interpreted as the equivalence between the partition function of a free chiral boson and the partition function of two chiral fermions on a torus. This is an example of bosonization and plays an important role in string theory. The proof can be explained without any reference to physics, but then the crucial difference in statistics (Boson/Fermion) employed in the proof becomes obscured. I've given a talk about the equivalence between 1+1 TQFTs and Frobenius algebras to an undergraduate audience with great success. It has great pictures and a clear, beautiful idea.
The "computation" can then be the beautiful formula for the number of degree $d$ covers of a genus $g$ Riemann surface as a sum over irreducible representations of the symmetric group $S_d$ $$Z(g)=\sum _{R} \left(\frac{d!}{\dim(R)}\right)^{2g-2}$$ That last computation requires that your audience knows some representation theory of finite groups, but that might be true for the Oxford undergrads. I agree that computing partition functions has many pretty applications. My favorite is the use of Jacobi's abstruse identity between theta functions, $\theta_3^4-\theta_4^4=\theta_2^4$, to show the equality between the number of bosons and fermions in open superstring theory as required by supersymmetry. This is explained in sec 4.3 of "Superstring Theory" by Green, Schwarz and Witten. Another short calculation which quickly gets to the heart of the connection between string theory and gravity is the demonstration that bosonic string theory contains a massless spin two excitation. One way to do this requires regularizing a divergent zero point energy via $\sum_{n=1}^\infty n \rightarrow \sum_{n=1}^\infty n^{-s}$ and then analytically continuing to $s=-1$ to obtain $\zeta(-1)=-1/12$. See sec 2.3 of GSW. Maybe derive the Polyakov formula? Like KConrad says, whether it can be understood on an undergraduate level depends a lot on your presentation and the level of the undergraduates. But the basic idea behind the formula, if I remember correctly, can all be explained using a little bit of Riemannian geometry/representation theory plus a bit of complex analysis. (I recently saw a talk where it was perfectly understandable and impressive for masters-level students.) (You can also segue into explaining why the universe is 26 dimensional.) Derive the Casimir Energy in Bosonic String Theory. 
You start with the $\hat L_0$ operator and get rid of the non-vacuum part $\displaystyle\frac{\alpha_0^2}{2}+\sum_{n=1}^\infty\alpha_{-n}\cdot\alpha_n$, then you use Ramanujan summation to do $\zeta$-function renormalisation, from which you find out that the vacuum energy, denoted by $\varepsilon_0$, is $$\varepsilon_0=-\frac{d-2}{24}$$ However, the most interesting part comes when you go about deriving the critical dimension of Bosonic String Theory, after which the expression surprisingly simplifies to $-1$. For a more detailed derivation of the above stuff, see these lecture notes (Section 4, Equations 4.5-4.10).
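To spell out the arithmetic behind that $-1$ (my own summary of the standard light-cone computation; conventions may differ from the lecture notes by factors of $\alpha'$): the regularized sum over the $d-2$ transverse oscillators gives

$$\varepsilon_0=\frac{d-2}{2}\sum_{n=1}^{\infty}n\;\longrightarrow\;\frac{d-2}{2}\,\zeta(-1)=\frac{d-2}{2}\left(-\frac{1}{12}\right)=-\frac{d-2}{24},$$

so the open-string mass spectrum reads

$$\alpha' M^2 = N+\varepsilon_0 = N-\frac{d-2}{24}.$$

The level-$N=1$ states form a $(d-2)$-component transverse vector, which is consistent with Lorentz invariance only if it is massless, forcing $\frac{d-2}{24}=1$, i.e. $d=26$ and hence $\varepsilon_0=-1$.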
June 30th, 2011, Verimag (CTL), Grenoble download flyer Registration Registration is free of charge. We only need to know how many participants will attend in order to estimate the number of people during the lunch and coffee breaks. There will also be an informal dinner on the evening of June 30th at the Bombay Indian restaurant in downtown Grenoble. Please let us know if you intend to come, as reservations will be made in advance (you are kindly requested to pay for the restaurant yourself; expect it to be in the range of 20 to 30 euros). To register please send email to iosif imag.fr with subject line: VVS register (for simple registration without dinner) VVS register bombay (for registration including dinner reservation) Programme 9h00 Welcome and coffee 9h30 Ahmed Bouajjani (LIAFA) Verifying concurrent programs running over TSO We show that the state reachability problem under TSO is decidable, but highly complex (non primitive recursive) even for finite-state programs (whereas it is PSPACE-complete for the SC model). This is due to the fact that, roughly speaking, store buffers have the power of lossy FIFO channels. Then, we consider this problem under the assumption of a bounded number of context switches. We provide a translation that takes as input a concurrent program P and produces a program P' such that running P' under SC yields the same set of reachable states as running P under TSO with at most K context switches for each thread, for any fixed K. Basically, we show that it is possible to use 2K additional copies of the shared variables of P as local variables to simulate the store buffers. This translation allows us to transfer existing results and analysis/verification techniques from SC to TSO for the same class of programs. This talk is based on works with: Mohamed-Faouzi Atig, Sebastian Burckhardt, Madan Musuvathi, and Gennaro Parlato.
10h30 Coffee break 11h00 Ruzica Piskac (EPFL) Software Synthesis using Automated Reasoning Software synthesis is a technique for automatically generating code from a given specification. The goal of software synthesis is to make software development easier, while increasing both the productivity of the programmer and the correctness of the produced code. In this talk, I will present an approach to synthesis that relies on the use of automated reasoning and decision procedures. I will describe how to generalize decision procedures into predictable and complete synthesis procedures, using linear integer arithmetic as an example. Linear integer arithmetic is interesting in and of itself due to the fact that reasoning about collections, such as sets and multisets, can be reduced to reasoning about linear integer arithmetic. The reduction uses a semilinear set characterization of solutions to integer linear arithmetic formulas and a generalization of a recent result on sparse solutions of integer linear programming problems. I will explain how this decision procedure can be applied in both software synthesis and program verification. 11h30 Barbara Jobstmann (VERIMAG) Quantitative Verification and Synthesis Quantitative constraints have been successfully used to state and analyze non-functional properties such as energy consumption, performance, or reliability. Functional properties are typically viewed in a purely qualitative sense: desired properties are written in temporal languages, and the outcome of verification is a simple Yes or No answer stating that a system satisfies or does not satisfy the desired property. We believe that this black and white view is insufficient both for verification and for synthesis. Instead, we propose that specifications should have a quantitative aspect. Our recent research shows that quantitative techniques give new insights into qualitative specifications.
For instance, average-reward properties allow us to express properties like default behavior or preference relations between implementations that all satisfy the functional property. These additional properties are particularly useful in a synthesis setting, where we aim to automatically construct a system that satisfies the specification, because they allow us to guide the synthesis process, making the outcome of synthesis more predictable. In this talk I will give an overview of (1) how classical specifications can be augmented with quantitative constraints, (2) the different quantitative constraints that arise in this way, and (3) how to verify and synthesize systems that satisfy the initial specification and optimize such quantitative constraints. This is joint work with Roderick Bloem, Krishnendu Chatterjee, Karin Greimel, Thomas Henzinger, Arjun Radhakrishna, and Rohit Singh. 12h00 Nicolas Halbwachs (VERIMAG) Static Analysis of Programs with Arrays This talk presents some joint work with Mathias Peron and Valentin Perrelle on applying abstract interpretation to the discovery of properties about array contents. In Peron's thesis and our PLDI'08 paper, we start from an idea from Gopan, Reps and Sagiv [POPL'05], which consists in partitioning arrays into symbolic intervals (e.g., $[1,i - 1], [i,i], [i + 1,n]$), and in associating with each such interval $I$ and each array $A$ an abstract variable $A_I$; the new idea is to consider relational abstract properties $\Psi(A_I, B_I, ...)$ about these abstract variables, and to interpret such a property pointwise on the interval $I$: $\forall \ell \in I, \Psi(A[\ell], B[\ell],...)$. The resulting method is able, for instance, to discover that the result of an insertion sort procedure is a sorted array. A second part of the talk will summarize our VMCAI'10 paper with V. Perrelle, which concerns properties of array contents up to a permutation.
For instance, to prove a sorting procedure, one has to show that the result is sorted, but also that it is a permutation of the initial array. In order to analyze this kind of property, we define an abstract interpretation working on multisets of values, able to discover invariant "linear" equations about such multisets. 12h30 Cezara Dragoi (LIAFA) On Inter-Procedural Analysis of Programs with Lists and Data We address the problem of automatic synthesis of assertions on sequential programs with singly-linked lists containing data over infinite domains such as integers or reals. Our approach is based on an accurate abstract inter-procedural analysis. We define compositional techniques for computing procedure summaries concerning various aspects such as shapes, sizes, and data. Relations between program configurations are represented by graphs where vertices represent list segments without sharing. The data in these list segments are characterized by constraints in new complex abstract domains. We define an abstract domain whose elements correspond to an expressive class of first-order universally quantified formulas, and an abstract domain of multisets. Our analysis computes the effect of each procedure in a local manner, by considering only the parts of the heap reachable from its actual parameters. In order to avoid losses of information, we introduce a mechanism based on unfolding/folding operations allowing to strengthen the analysis in the domain of first-order formulas by the analysis in the multiset domain. The same mechanism is used for strengthening the sound (but incomplete) entailment operator of the domain of first-order formulas. We have implemented our techniques in a prototype tool and we have shown that our approach is powerful enough for the automatic (1) generation of non-trivial procedure summaries and (2) pre/post-condition reasoning.
13h00 On-site lunch break 14h00 VERIMAG seminar: Nathalie Bertrand (IRISA) Determinizing timed automata Timed automata are frequently used to model real-time systems. Essentially timed automata are an extension of finite automata with guards and resets of continuous variables (called clocks) evolving at the same pace. They are extensively used in the context of validation of real-time systems. One of the reasons for this popularity is that, despite the fact that they represent infinite state systems, their reachability is decidable, thanks to the construction of the region graph abstraction. As for other models, determinization is a key issue for several validation problems, such as monitoring, implementability and testing. However, not all timed automata can be determinized, and determinizability itself is undecidable. After introducing timed automata, we will review existing approaches to get round their unfeasible determinization. Then we will expose a novel game-based algorithm which, given a timed automaton, produces either a deterministic equivalent or a deterministic over-approximation, and which subsumes all other known contributions. 15h00 Pierre Corbineau (VERIMAG) On Positivstellensatz Witnesses in Degenerate Cases One can reduce the problem of proving that a polynomial is nonnegative, or more generally of proving that a system of polynomial inequalities has no solutions, to finding polynomials that are sums of squares of polynomials and satisfy some linear equality (Positivstellensatz). This produces a witness for the desired property, from which it is reasonably easy to obtain a formal proof of the property suitable for a proof assistant such as Coq. The problem of finding a witness reduces to a feasibility problem in semidefinite programming, for which there exist numerical solvers. 
Unfortunately, this problem is in general not strictly feasible, meaning the solution can be a convex set with empty interior, in which case the numerical optimization method fails. Previously published methods thus assumed strict feasibility; we propose a workaround for this difficulty. We implemented our method and illustrate its use with examples, including extractions of proofs to Coq. Joint work with David Monniaux. 15h30 Michael Emmi (LIAFA) On Sequentializing Concurrent Programs We propose a general framework for compositional under-approximate concurrent program analyses by reduction to sequential program analyses—so-called sequentializations. We notice that the existing sequentializations—based on bounding the number of execution contexts, execution rounds, or delays from a deterministic task-schedule—rely on three key features for scalable concurrent program analyses: (i) reduction to the sequential program model, (ii) compositional reasoning to avoid expensive task-product constructions, and (iii) parameterized exploration bounds. To understand how those sequentializations can be unified and generalized, we define a general framework which preserves their key features, and in which those sequentializations are particular instances. We also identify a most general instance which considers vastly more executions, by composing the rounds of different tasks in any order, restricted only by the unavoidable program and task-creation causality orders. In fact, we show this general instance is fundamentally more powerful by identifying an infinite family of state-reachability problems (to states g1, g2, . . .) which can be answered precisely with a fixed exploration bound, whereas the existing sequentializations require an increasing bound k to reach each gk. Our framework applies to a general class of shared-memory concurrent programs, with dynamic task-creation and arbitrary preemption.
16h00 Jules Villard (LSV/Queen Mary), "Tracking Heaps that Hop with Heap-Hop"

Heap-Hop is a program prover for concurrent heap-manipulating programs that use message-passing synchronization. Programs are annotated with pre- and post-conditions and loop invariants, written in a fragment of separation logic. Communications are governed by a form of session types called contracts. Logic and contracts collaborate inside Heap-Hop to prove memory safety, race freedom, absence of memory leaks and communication safety. This is joint work with Étienne Lozes (LSV, ENS Cachan, CNRS) and Cristiano Calcagno (Monoidics Ltd and Imperial College, London).

16h30 Coffee break

17h00 VERIDYC business meeting
Ampère never wrote down what is confusingly called "Ampère's circuital law," not even the form without the displacement-current term, as Ampère never dealt with the field concept.* Maxwell derived $$\nabla \times \mathbf{B} = \mu_0\mathbf{J}\qquad(1)$$ in his 1855 paper On Faraday's Lines of Force, based on analogies to hydrodynamics, and later corrected it to $$\nabla \times \mathbf{B} = \mu_0\left(\mathbf{J} + \varepsilon_0 \dfrac{\partial \mathbf{E}} {\partial t} \right)\qquad(2)$$ Ampère's force law is completely different from any of Maxwell's equations. It gives the force that current elements $I_1 d\vec {\ell }_1$ and $I_2 d\vec {\ell }_2$ exert on one another as: $$d^2\vec{F_{21}^A} = - \frac{\mu _0 }{4\pi }I_1 I_2 \frac{\hat {r}_{12} }{r_{12}^2 }\left[2(d\vec {\ell }_1 \cdot d\vec {\ell }_2) - 3({\hat {r}_{12} \cdot d\vec {\ell }_1 })({\hat {r}_{12} \cdot d\vec {\ell }_2 })\right] = - d^2\vec{F_{12}^A}.$$ Thus, it is appropriate that Equation (2) is one of Maxwell's equations. Gauss and Faraday utilized the field concept, so Equation (2) is the most "Maxwellian" of the four Maxwell's equations. So, why are Equations (1) & (2) above named after Ampère? Who first named them after Ampère? *cf. Assis, André Koch Torres; Chaib, J. P. M. C.; Ampère, André-Marie (2015). Ampère's Electrodynamics: Analysis of the Meaning and Evolution of Ampère's Force between Current Elements, together with a Complete Translation of his Masterpiece: Theory of Electrodynamic Phenomena, Uniquely Deduced from Experience (PDF). Montreal: Apeiron. ISBN 978-1-987980-03-5. ch. 15, pp. 221ff.
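Note that Ampère's force law, as written above, is manifestly antisymmetric: swapping the two elements flips $\hat r_{12}$ while the bracket is unchanged, so the mutual forces obey Newton's third law element by element. A quick numerical sketch of this in plain Python (the function name and the convention that $\hat r_{12}$ points from element 2 toward element 1 are my own choices; sign conventions vary between authors):

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (T*m/A)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def ampere_d2F(I1, dl1, r1, I2, dl2, r2):
    """Force on current element 1 due to element 2, per Ampère's force law.

    Here r_hat is taken to point from element 2 toward element 1;
    conventions differ in the literature.
    """
    r = [a - b for a, b in zip(r1, r2)]
    d = math.sqrt(dot(r, r))
    rhat = [c / d for c in r]
    bracket = 2.0 * dot(dl1, dl2) - 3.0 * dot(rhat, dl1) * dot(rhat, dl2)
    k = -(MU0 / (4.0 * math.pi)) * I1 * I2 * bracket / d**2
    return [k * c for c in rhat]

# Two collinear elements along x, one meter apart: the mutual forces are
# equal and opposite, since swapping the elements flips rhat but leaves
# the bracket unchanged.
F21 = ampere_d2F(1.0, [1e-3, 0.0, 0.0], [1.0, 0.0, 0.0],
                 2.0, [1e-3, 0.0, 0.0], [0.0, 0.0, 0.0])
F12 = ampere_d2F(2.0, [1e-3, 0.0, 0.0], [0.0, 0.0, 0.0],
                 1.0, [0.0, 0.0, 0.0][:0] + [1e-3, 0.0, 0.0], [1.0, 0.0, 0.0])
```

For these collinear elements the bracket is $2\,dl_1 dl_2 - 3\,dl_1 dl_2 = -dl_1 dl_2$, giving a repulsive force of magnitude $10^{-7} I_1 I_2\, dl_1 dl_2 / r^2$.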
Hello, can anyone help with this question? Show that the Maclaurin series of the function $$\ln(1+\sin x)$$ up to the term in $x^4$ is $$x-x^2/2 + x^3/6 - x^4/12 + \ldots$$ So I know the expansion $\ln(1+x)= x - x^2/2 + x^3/3 -\dots$ and that of $\sin x= x - x^3/3!+x^5/5!-\dots$, hence I tried substituting the first two terms of $\sin x$ into the expansion of $\ln(1+x)$, i.e. expanding $\ln(1+x-x^3/6)$ up to the $x^4$ term of the expansion of $\ln(1+x)$. But I got stuck with the algebra, so I would value any help. What you tried is an interesting attempt at the given task, and it does work: composing power series is legitimate here because $\sin x$ has no constant term, so finishing the algebra gives the correct coefficients. It is, however, not the standard way of finding a Maclaurin series. Recall that a Maclaurin series expansion is a Taylor series expansion centered at $0$. By Taylor's Theorem we know that the series expansion is given by $$f(x)=\sum_{n=0}^\infty\frac{f^{(n)}(0)}{n!}x^n\tag1$$ Since you are only asked to find the expansion up to the $x^4$-term, we only need to compute the first four derivatives and evaluate them at $0$.
Thus, we obtain \begin{align*} &f(x)=\ln(1+\sin x),&&f(0)=\ln(1+0)=0\\ &f^{(1)}(x)=\frac{\cos x}{1+\sin x},&&f^{(1)}(0)=\frac1{1+0}=1\\ &f^{(2)}(x)=-\frac1{1+\sin x},&&f^{(2)}(0)=-\frac1{1+0}=-1\\ &f^{(3)}(x)=\frac{\cos x}{(1+\sin x)^2},&&f^{(3)}(0)=\frac1{(1+0)^2}=1\\ &f^{(4)}(x)=-\frac{1+\sin x+\cos^2x}{(1+\sin x)^3},&&f^{(4)}(0)=-\frac{1+0+1}{(1+0)^3}=-2 \end{align*} Plugging these values into $(1)$ we obtain \begin{align*} \ln(1+\sin x)&=f(0)+f^{(1)}(0)x+\frac{f^{(2)}(0)}{2}x^2+\frac{f^{(3)}(0)}{6}x^3+\frac{f^{(4)}(0)}{24}x^4+\cdots\\ &=0+1\cdot x-\frac12x^2+\frac16x^3-\frac2{24}x^4+\cdots\\ &=x-\frac{x^2}2+\frac{x^3}6-\frac{x^4}{12}+\cdots \end{align*} $$\therefore~\ln(1+\sin x)~=~x-\frac{x^2}2+\frac{x^3}6-\frac{x^4}{12}+\cdots$$ In a similar way you can obtain the Maclaurin series expansions for $\sin x$ or $\ln(1+x)$. Simply substituting one series into another is, after all, not the expected way to do this; rather, one computes the derivatives at $0$.
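Incidentally, the substitution approach from the question can be carried out mechanically: compose the truncated series of $\ln(1+u)$ with $u=\sin x$ and the same four coefficients fall out. A small sketch in Python with exact rational arithmetic (helper names are my own):

```python
from fractions import Fraction as F

N = 5  # keep coefficients of x^0 .. x^4 only

def pmul(a, b):
    # multiply two polynomial coefficient lists, truncating at degree N-1
    out = [F(0)] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < N:
                    out[i + j] += ai * bj
    return out

# Maclaurin coefficients of sin x up to x^4: x - x^3/6
sin_x = [F(0), F(1), F(0), F(-1, 6), F(0)]

# compose ln(1+u) = u - u^2/2 + u^3/3 - u^4/4 + ... with u = sin x
u2 = pmul(sin_x, sin_x)
u3 = pmul(u2, sin_x)
u4 = pmul(u3, sin_x)
series = [sin_x[k] - u2[k] / 2 + u3[k] / 3 - u4[k] / 4 for k in range(N)]
# series is [0, 1, -1/2, 1/6, -1/12], matching x - x^2/2 + x^3/6 - x^4/12
```

Truncation is safe because $\sin x$ has no constant term, so $u^5$ and higher powers contribute nothing below $x^5$.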
User:DOUG/Sandbox Transclusion Tests

Testing Math!

$$A + B \div C = Z$$

$$\sqrt{1-e^2}$$

One of the great wonders of our time can include the formula $$\phi_n(\kappa) = \frac{1}{4\pi^2\kappa^2} \int_0^\infty \frac{\sin(\kappa R)}{\kappa R} \frac{\partial}{\partial R} \left[R^2\frac{\partial D_n(R)}{\partial R}\right]\,dR,$$ giving us something both balanced and beautiful.

And now, for something really different, let's get HTML equations sorted out so we can actually be responsive to screen sizing!

$$\mathrm{Modifiers} = \mathrm{MIU} + \mathrm{LOG_{Bonus}} + \mathrm{INT_{Bonus}} + \Big\lfloor \frac{\mathrm{AUR_{Bonus}} + \mathrm{WIS_{Bonus}}}{2} \Big\rfloor + \mathrm{Potion_{Bonus}} - \mathrm{ItemEnchant_{Bonus}} - \mathrm{Encumbrance_{Penalty}} - \mathrm{Armor_{Penalty}}$$

A whole page.
Toggle all content

Example Header Text

Example sub text

Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.

Just text from an existing article that has appropriate markups.
Beginning your journey in Elanthia

If you are reading this guide, the odds are you have not yet visited the lands of Elanthia. After picking your race, profession, attributes, and starting skills, the first time you enter the lands a sprite will be waiting to greet you and help you get the best possible start to the game. For best results, read the section below on your journey to 5th level before you do everything the sprite asks. You will begin your journey in one of three towns: Wehnimer's Landing, Icemule Trace, or Ta'Vaalor. No matter where you first step into the lands, the helpful sprite will ask if you would like his assistance. Please take the time to visit with the sprite. The introduction to the lands is brief, and the information you learn will help you orient yourself - and gain knowledge and experience. There might even be a useful item or two and a small award of the realm's coinage: silvers. In your pack or backpack you will find a travel token. You can use this token to safely travel to one of the other two towns. If you started in Wehnimer's Landing but really wish to start in Ta'Vaalor, you can use the DIRection verb (just type DIR in game) to find the Travel Office and use the token to obtain a free guide to take you safely where you wish to go. If you choose to leave your starting town, make sure to visit the local debt collector to pay off your starting debts; you can't use the Travel Office until you have. Whichever town you begin your journey from, you can find some interesting information in the appropriate new player's guides: A beginner's guide to Ta'Vaalor, A beginner's guide to Icemule Trace, or A beginner's guide to Wehnimer's Landing.

Just format / collapsible content from existing article that has appropriate markups.

Just the FAQ

Just the links:
Physics > Data Analysis, Statistics and Probability

Title: Detector resolution correction for width of intermediate states in three particle decays

(Submitted on 5 Aug 2015)

Abstract: We propose a method that allows to take into account detector resolution in the partial wave analysis event-by-event fit as a special case. Implementation of the method is discussed and the applicability of the method is studied for the $J/\psi \to K^{*\pm}K^{\mp} \to K^+K^-\pi^0$ and $J/\psi \to K_2(1430)^\pm K^{\mp} \to K^+K^-\pi^0$ decays.

Submission history: From: Igor Denisenko [v1] Wed, 5 Aug 2015 21:12:53 GMT (73kb,D)
Beginning calculus problems with limits of a function are a common teaching tool for students who are just starting their journey in calculus. Limit problems will help you understand how limits work and what is going on when you encounter a function. Repetition is key here to get your brain used to thinking about what a function or a graph means. I include several examples below for you to practice and grasp the concepts.

Beginning Calculus Problems With Limits

Limits of a function have been around in concept for a very long time. Mathematicians first started using the limit concept somewhere around 400 years ago, but our modern form of a limit has only been used since the early 1800s. Limits are the foundation for learning derivatives and are used throughout modern calculus.

What Is A Limit?

If you have an x-y graph like this: let's say you are starting at (0, 0), meaning your X value is 0 and your Y value is 0. If you then move to some value to your right, like X = 4, what does your Y value equal? That depends on the curve you traveled along, because the limit is the Y value the graph approaches! So if you drew a line that reached X = 4 with Y = 7, then the limit of that function as X approaches 4 is 7. Does that make more sense now? Wherever you end up on the X-axis, the graph approaches some Y value, and that is the limit. So now that you hopefully understand what a limit is, let's look at some problems.

Problem 1 – For the graph of f(x) below, find the limit

What is: \[ \tag{1}\lim_{x \to 3} f(x) \] When you look at this graph, think about what you see: the X-axis and the Y-axis are marked off. Now, what is the question asking? It is asking for the limit of f(x) as X approaches 3, that is, the value the Y-coordinate of the graph approaches as X gets at or very near 3.
When you look at the graph again you will see that Y = 0, so the limit is 0. \[ \tag{2}\bbox[red,2pt]{\lim_{x \to 3} f(x) = 0}\]

Problem 2 – For the graph of f(x) below, find the limit

What is \( \tag{1}\lim_{x \to 9} f(x) \) : This is the same graph as above, and looking at it we can see our X and Y values clearly marked. Our question asks for the limit of f(x) as X approaches 9. Now we look at X = 9 on the graph and see what Y values are there. Do you see the issue yet? Yes, there are two values for Y at X = 9. These come from one-sided limits, which we will get into another time, but the thing to take away here is that those two Y values are not equal. This means the limit in our problem does not exist. \[ \tag{2}\bbox[red,2pt]{\lim_{x \to 9} f(x) }\] Does Not Exist!

Problem 3 – Find The Following limit

What is \( \tag{1}\lim_{x \to -2} ( -x^2 + 9x -1 ) \) This is a limit problem that does not have a graph to look at but instead uses an equation: a simple polynomial. What can you do here? In math it is a good idea to try the simplest idea first, unless you have some reason not to. Let's try substitution, since that is barely algebra and it looks like it could work and give us a value. We substitute X = -2 into our equation, because that is the point X is approaching. \( \tag{2} ( -(-2)^2 + 9(-2) -1 ) \) You should notice that everywhere there was an X in the equation, I put in a (-2). \(\tag{3}(-4 -18 -1)\) Add the values together and remember to keep the signs right. \(\tag{4}\bbox[red,2pt] {-23}\)

Problem 4 – Find The Following limit

Find \(\tag{1}\lim_{y \to -11}(16-y)^{4/3}\) Here we have another substitution problem where our variable is Y. It is a little different because our equation has an exponent, so let's take it step by step. \(\tag{2}\lim_{y \to -11}(16 - (-11))^{4/3} \) Now we have substituted -11 for our Y variable. Then we simplify this a bit.
\(\tag{3}\lim_{y \to -11}(16 + 11)^{4/3} \) \(\tag{4}\lim_{y \to -11}(27)^{4/3} \) When dealing with an unusual exponent like this, you can remember this little trick: raise your base number to the 4th power and then take the 3rd root (also called the cube root) of the result. This is the same as taking 27 to the \(4/3\) power. Our little problem now becomes much easier, and the solution is \(\tag{5}\bbox[red,2pt]{\lim_{y \to -11}(16 - y)^{4/3} = 81}\)

Problem 5 – Find The Following limit

Find \(\tag{1}\lim_{h \to 0}(\frac{6}{\sqrt{6h + 4} +4})\) This is a nice problem because it combines a few different styles and techniques. It is still quick to finish, but it is the start of harder problems that look like it. We have a limit, a fraction, and a square root to deal with, so this will be fun while it lasts. For this and any other problem involving a fraction, we can substitute as long as the bottom part of the fraction does not equal 0. We should be safe here by the looks of it, so let's try that. \(\tag{2}\lim_{h \to 0}(\frac{6}{\sqrt{6(0) + 4} + 4}) \) Simplifying makes it easier to see what is happening. \(\tag{3}\lim_{h \to 0}(\frac{6}{\sqrt{4} + 4}) \) We still are not doing anything with the top, but we are almost there. Let's finish with the bottom now. \(\tag{4}\lim_{h \to 0}(\frac{6}{2 + 4}) \) \(\tag{5}\lim_{h \to 0}(\frac{6}{6}) \) Well, this is shaping up nicely. \(\tag{6}\bbox[red,2pt]{\lim_{h \to 0}(\frac{6}{\sqrt{6h + 4} +4}) = 1} \)

Problem 6 – Find The Following limit

Find \(\tag{1}\lim_{h \to 0}(\frac{\sqrt{19h + 1 } -1 }{h}) \) This is a different-looking problem: the [h] is on the bottom now. Since this is a fraction and we already know you can never divide by 0, we must do something entirely different to solve it. Your main clue is the square root on the top. Remember that a square root multiplied by itself gives the value inside the root, and the root disappears.
However, what you do to the top will also have to be done on the bottom. This is called the conjugate method, and it is used to solve problems like this. \(\tag{2}\lim_{h \to 0}(\frac{\sqrt{19h + 1} -1 }{h}) \cdot (\frac{\sqrt{19h + 1} + 1}{\sqrt{19h + 1} +1}) \) Now we are just multiplying the left side by the right side. Watch your signs carefully here. It is always good to keep everything well spaced so that you can see what you're doing and avoid making mistakes. \(\tag{3}\lim_{h \to 0} (\frac{19h + 1 -1}{h(\sqrt{19h+1}+1)})\) Now that we have multiplied both sides together, we can start simplifying the top and bottom of our equation. \(\tag{4}\lim_{h \to 0}(\frac{19h}{h(\sqrt{19h+1}+1)})\) Since we have an [h] on top and bottom being multiplied by another expression, we can cancel those [h] out. \(\tag{5}\lim_{h \to 0} (\frac{19}{\sqrt{19h+1}+1})\) Substitute 0 for h on the bottom part of the equation. \(\tag{6}\lim_{h \to 0}(\frac{19}{\sqrt{0+1}+1})\) We are basically done, so let's just make it look nice and final. The answer is: \(\tag{7}\bbox[red,2pt]{\lim_{h \to 0}(\frac{\sqrt{19h + 1} -1 }{h})=\frac{19}{2}}\)

Problem 7 – Find The Following limit

Find \(\tag{1}\lim_{x \to 3}(\frac{x-3}{x^2-9}) \) Here is another type of limit problem, but it just uses algebra to solve. You should recognize instantly that the top and bottom parts of the fraction are related to each other, so break the problem up into parts and simplify from there. \(\tag{2}\lim_{x \to 3}(\frac{x-3}{(x+3)(x-3)})\) Since you have an x-3 on both the top and the bottom, you can just cancel them out. That leaves: \(\tag{3}\lim_{x \to 3}(\frac{1}{x+3})\) Now you just apply the limit and you will have your answer. \(\tag{4}\lim_{x \to 3}(\frac{1}{3+3})\) This gives us a nice fraction as the answer.
\(\tag{5}\bbox[red,2pt]{\lim_{x \to 3}(\frac{x-3}{x^2-9})=\frac{1}{6}}\)

Problem 8 – Find The Following limit

Find \(\tag{1}\lim_{x \to 4} (\frac{x^2-2x-8}{x-4})\) Here is another fraction problem, and it looks different from the one before. Actually, they should all look slightly different, because there can be many variations; this one involves a polynomial. These problems are arranged this way to teach you the steps in troubleshooting a calculus problem and to help you recognize common forms of problems and what to do with them. If you are new to calculus, just keep practicing until you instantly see what to do. Now here we go! If you look at this problem, it might seem weird, but it is just a factoring problem. You will need to factor the top, and what beginners might not recognize is that the problem is already half factored for you. At this level, problems are not going to be given to you that do not factor easily. So, with that in mind, remember how the previous problems were handled? We factored and then cancelled a common factor from top and bottom. We will do the same here: the bottom part of the fraction is one of the factors of the top part. \(\tag{2}\lim_{x \to 4}(\frac{(x+2) (x-4)}{x-4})\) See how that works? We now have x-4 on both the top and bottom, so we can just cancel them out. \(\tag{3}\lim_{x \to 4}(x+2)\) Now we just apply the limit. \(\tag{4}\lim_{x \to 4}(4+2)\) We now have a solution. \(\tag{5}\bbox[red,2pt]{\lim_{x \to 4}(\frac{x^2-2x-8}{x-4})=6}\)

Problem 9 – Find The Following limit

Find the limit of \(\tag{1}\lim_{u \to 1}(\frac{u^3-1}{u^4-1})\) There is a lot of factoring to do in this problem, and you should be familiar with how to factor differences of cubes and fourth powers. It is not hard at all, and furthermore it is good practice for harder problems later on.
First, factor top and bottom and then see what we can do after that. \(\tag{2}\lim_{u \to 1}(\frac{(u^2+ u +1)(u-1)}{(u^2+1)(u+1)(u-1)})\) There we have both the top and the bottom parts of the fraction factored. We can cancel the common factor (u-1) from top and bottom and really make this simpler. \(\tag{3}\lim_{u \to 1}(\frac{u^2 + u + 1}{(u^2 + 1)(u + 1) })\) We are almost there now; we just need to substitute to apply the limit. \(\tag{4}\lim_{u \to 1}(\frac{1^2 + 1 +1 }{(1^2 +1)(1 + 1)})\) Now just do the simple arithmetic and we have our answer. \(\tag{5}\bbox[red,2pt]{\lim_{u \to 1}(\frac{u^3-1}{u^4-1}) = \frac{3}{4} }\)

Problem 10 – Find The Following limit

Find \(\tag{1}\lim_{x \to -6} (\frac{5-\sqrt {x^2 - 11}}{x+6})\) Here is another nice problem, but with a twist: now the square root is on the top. We can still treat this as a conjugate problem, so let's multiply the top and bottom by the conjugate of the top. \(\tag{2}\lim_{x \to -6} (\frac{5-\sqrt{x^2 - 11}}{x + 6}) \cdot (\frac{5 + \sqrt{x^2 - 11}} {5 + \sqrt{x^2 - 11}}) \) That is an ugly setup, so let us try to make it look a little nicer. Just multiply top and bottom and remember to keep the signs straight. \(\tag{3}\lim_{x \to -6} (\frac{25 - (x^2 - 11)}{(x + 6)(5 + \sqrt{x^2 - 11})})\) While this still looks messy and difficult to work with, it is getting better. Keep simplifying step by step so you do not miss any signs or do something silly. \(\tag{4}\lim_{x \to -6} (\frac{ -x^2 + 36}{(x + 6)(5 + \sqrt{x^2 - 11})})\) Rearrange the top so that it is clearer what you are supposed to do. \(\tag{5}\lim_{x \to -6} (\frac{36 - x^2}{(x + 6)(5 + \sqrt{x^2 - 11})})\) Now factor what you can and the problem starts to make sense. \(\tag{6}\lim_{x \to -6} (\frac{(6 + x)(6 - x)}{(x + 6)(5 + \sqrt{x^2 - 11})})\) Now cancel the common factor and let's see what is left.
\(\tag{7}\lim_{x \to -6}(\frac{6-x}{5 + \sqrt{x^2 - 11}})\) Substitute and apply the limit now. We get a nice answer. \(\tag{8}\bbox[red,2pt]{\lim_{x \to -6} (\frac{5-\sqrt {x^2 - 11}}{x+6}) = \frac{6}{5}}\)

Problem 11 – Find The Following limit

Find the limit of \(\tag{1}\lim_{x \to 0}(\csc(x))\) Now things are getting interesting, because we have limits of trig functions. Hopefully you remember your pre-calculus! These are simple, though, as you just rewrite the harder forms into something you do recognize. \(\tag{2}\lim_{x \to 0}(\frac{1}{\sin(x)} )\) We know that \(\sin (0) = 0 \), and if you graph it or do a table of values you will see that the values blow up as x gets close to 0. In fact they blow up in different directions: \(\csc x \to +\infty\) as \(x \to 0^+\) and \(\csc x \to -\infty\) as \(x \to 0^-\). That is our clue: since the two one-sided limits disagree, \(\tag{3}\bbox[red,2pt]{\lim_{x \to 0} \csc(x) \text{ does not exist (the function is unbounded near } 0\text{)}} \)

Problem 12 – Find The Following limit

Find the limit of \(\tag{1}\lim_{h \to 0} (\frac{f(x + h) - f(x)}{h} )\) when \(f(x) = x^2\) and \(x = 7\). \(\tag{2}\lim_{h \to 0} (\frac {(x + h)^2 -x^2}{h})\) We just substituted in our function. \(\tag{3}\lim_{h \to 0} (\frac{(7 + h)^2 - 7^2}{h})\) We plugged in the value of x, which takes some attention to detail to keep straight. We now have: \(\tag{4}\lim_{h \to 0} (\frac{49 + 14h + h^2 - 49}{h} ) \) Simplify the top and factor, and watch expressions disappear again. \(\tag{5}\lim_{h \to 0}(\frac{h(14 + h)}{h})\) Cancel the \(h\) so that we no longer have a fraction to deal with. \(\tag{6}\lim_{h \to 0}(14 + h)\) Now you just evaluate the expression and you get the answer. \(\tag{7}\lim_{h \to 0}(14 + 0)\) So our answer is: \(\tag{8}\bbox[red,2pt]{\lim_{h \to 0} (\frac{f(x + h) - f(x)}{h} ) = 14} \)

Conclusion

That concludes our lesson about beginning calculus problems with limits. I hope this was all clear and well explained, especially with all of the LaTeX markup I tried to write. These limits are important because they will help your understanding of derivatives and furthermore make you better at spotting derivative problems.
When we see a problem to solve, there is not always a guide to tell you what kind of problem you are looking at, so it is important to grasp these fundamentals well.
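One habit worth building: sanity-check an algebraic answer numerically by evaluating the function at points just left and right of the target. A small sketch in Python (the helper name is my own), applied to Problems 3, 6, and 10 above:

```python
import math

def numeric_limit(f, a, eps=1e-7):
    # crude two-sided estimate of lim_{x->a} f(x) by sampling near a
    return 0.5 * (f(a - eps) + f(a + eps))

# Problem 3: polynomial, limit by direct substitution (answer -23)
p3 = numeric_limit(lambda x: -x**2 + 9*x - 1, -2.0)

# Problem 6: (sqrt(19h+1) - 1)/h as h -> 0; 0/0 form, conjugate method gives 19/2
p6 = numeric_limit(lambda h: (math.sqrt(19*h + 1) - 1) / h, 0.0)

# Problem 10: (5 - sqrt(x^2 - 11))/(x + 6) as x -> -6 (answer 6/5)
p10 = numeric_limit(lambda x: (5 - math.sqrt(x*x - 11)) / (x + 6), -6.0)
```

This cannot prove a limit exists (Problem 2 and Problem 11 show why a two-sided check can mislead), but it catches arithmetic slips quickly.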
This is a heuristic explanation of Witten's statement, without going into the subtleties of axiomatic quantum field theory, such as vacuum polarization or renormalization. A particle is characterized by a definite momentum plus possible other quantum numbers. Thus, one-particle states are by definition states with a definite eigenvalue of the momentum operator; they can carry further quantum numbers. These states should exist even in an interacting field theory, describing a single particle away from any interaction. In a local quantum field theory, these states are associated with local field operators: $$| p, \sigma \rangle = \int e^{ipx} \psi_{\sigma}^{\dagger}(x) |0\rangle d^4x$$ where $\psi$ is the field corresponding to the particle and $\sigma$ denotes the set of other quantum numbers additional to the momentum. A symmetry generator $Q$, being the integral of a charge density according to Noether's theorem, $$Q = \int j_0(x') d^3x'$$ should generate a local field when it acts on a local field: $[Q, \psi_1(x)] = \psi_2(x)$. (In the case of internal symmetries $\psi_2$ depends linearly on the components of $\psi_1(x)$; in the case of space-time symmetries it depends on the derivatives of the components of $\psi_1(x)$.) Thus in general: $$[Q, \psi_{\sigma}(x)] = \sum_{\sigma'} C_{\sigma\sigma'}(i\nabla)\psi_{\sigma'}(x)$$ where the dependence of the coefficients $C_{\sigma\sigma'}$ on the momentum operator $\nabla$ is due to the possibility that $Q$ contains a space-time symmetry. Thus for an operator $Q$ satisfying $Q|0\rangle = 0$, we have $$ Q | p, \sigma \rangle = \int e^{ipx} Q \psi_{\sigma}^{\dagger}(x) |0\rangle d^4x = \int e^{ipx} [Q , \psi_{\sigma}^{\dagger}(x)] |0\rangle d^4x = \int e^{ipx} \sum_{\sigma'} C_{\sigma\sigma'}(i\nabla)\psi_{\sigma'}^{\dagger}(x) |0\rangle d^4x = \sum_{\sigma'} C_{\sigma\sigma'}(p) \int e^{ipx} \psi_{\sigma'}^{\dagger}(x) |0\rangle d^4x = \sum_{\sigma'} C_{\sigma\sigma'}(p) | p, \sigma' \rangle $$ Thus the action of the operator
$Q$ is a representation on the one-particle states. The fact that $Q$ commutes with the Hamiltonian is responsible for the energy degeneracy of its action, i.e., the states $| p, \sigma \rangle$ and $Q| p, \sigma \rangle$ have the same energy. This post imported from StackExchange Physics at 2015-06-16 14:50 (UTC), posted by SE-user David Bar Moshe
Physics > Applied Physics

Title: Improving the Time Stability of Superconducting Planar Resonators

(Submitted on 30 Apr 2019)

Abstract: Quantum computers are close to become a practical technology. Solid-state implementations based, for example, on superconducting devices strongly rely on the quality of the constituent materials. In this work, we fabricate and characterize superconducting planar resonators in the microwave range, made from aluminum films on silicon substrates. We study two samples, one of which is unprocessed and the other cleaned with a hydrofluoric acid bath and by heating at $880^{\circ}$C in high vacuum. We verify the efficacy of the cleaning treatment by means of scanning transmission electron microscope imaging of samples' cross sections. From 3 h-long resonator measurements at $\approx 10$ mK and with $\approx 10$ photonic excitations, we estimate the frequency flicker noise level using the Allan deviation and find an approximately tenfold noise reduction between the two samples; the cleaned sample shows a flicker noise power coefficient for the fractional frequency of $\approx 0.23 \times 10^{-15}$. Our preliminary results follow the generalized tunneling model for two-level state defects in amorphous dielectric materials and show that suitable cleaning treatments can help the operation of superconducting quantum computers.

Submission history: From: Matteo Mariantoni [v1] Tue, 30 Apr 2019 23:13:15 GMT (1255kb)
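The flicker-noise estimate in the abstract is based on the Allan deviation, the standard two-sample statistic for clock and resonator frequency stability. This is not the authors' analysis code, just a minimal non-overlapping Allan deviation sketch in Python:

```python
def allan_deviation(y, m):
    """Non-overlapping Allan deviation of fractional-frequency samples y
    at averaging factor m (averaging time tau = m * tau0)."""
    n = len(y) // m                       # number of averaged bins
    means = [sum(y[i*m:(i+1)*m]) / m for i in range(n)]
    diffs = [(means[k+1] - means[k]) ** 2 for k in range(n - 1)]
    # sigma_y(tau) = sqrt( (1/2) < (ybar_{k+1} - ybar_k)^2 > )
    return (0.5 * sum(diffs) / len(diffs)) ** 0.5

# Example: an alternating +1/-1 frequency record has Allan deviation sqrt(2) at m = 1
sigma = allan_deviation([1.0, -1.0] * 50, 1)
```

Sweeping m and plotting sigma against tau on log-log axes is how noise types are identified; flicker frequency noise shows up as a flat region of the curve.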
1 Introduction

The concept of the differentiation operator $\mathscr{D}=\dif/\dif x$ is a well-known fundamental tool of modern calculus. For a suitable function $f$ the $n$-th derivative is well defined as $\mathscr{D}^n f(x)=\dif^n f(x)/\dif x^n$, where $n$ is a positive integer. However, what would happen if we extended this concept to a situation where $n$ is arbitrary, e.g. fractional? This is the very question L’Hôpital addressed to Leibniz in a letter in 1695. Since then, the concept of fractional calculus has drawn the attention of many famous mathematicians, including Euler, Laplace, Fourier, Liouville, Riemann, Abel and Laurent. But it was not until 1884 that the theory of generalized operators reached a level of development suitable as a point of departure for the modern mathematician [1]. Even so, fractional-order calculus was not particularly popular until recent years, when the benefits stemming from its concepts became evident in various scientific fields, including system modeling and automatic control. The rise of interest in fractional differentiation is also related to the accessibility of more efficient and powerful computational tools. The introduction of computer algebra systems, such as MATLAB and Mathematica, led to new possibilities for evaluating the theoretical aspects of fractional calculus in specific applications. Recent findings support the notion that fractional-order calculus should be employed where more accurate modeling and robust control are concerned. Specifically, fractional-order calculus has found its way into complex mathematical and physical problems [2, 3]. In general, fractional-order calculus may be useful when modeling any system which has memory and/or hereditary properties [4]. In the field of automatic control, fractional calculus is used to obtain more accurate models, develop new control strategies and enhance the characteristics of control systems.
2 Definitions of the fractional operator

Fractional calculus is a generalization of integration and differentiation to the non-integer-order operator $_a\mathscr{D}_t^\alpha$, where $a$ and $t$ denote the limits of the operation and $\alpha$ denotes the fractional order, such that

\begin{equation} _a\mathscr{D}_t^{\alpha}=\begin{cases} \dfrac{\dif^{\alpha}}{\dif t^{\alpha}}, & \alpha>0,\\ 1, & \alpha=0,\\ \displaystyle\int_a^t\left(\dif\tau\right)^{-\alpha}, & \alpha<0, \end{cases}\tag{1}\end{equation}

where generally it is assumed that $\alpha\in\mathbb{R}$, but it may also be a complex number [5]. One of the reasons why fractional calculus is not yet found in elementary texts is a certain degree of controversy found in the theory [1]. This is why there exist several definitions for the fractional-order differintegral operator. Several popular definitions follow.

Definition 1. (Riemann-Liouville definition)

\begin{equation} _a\mathscr{D}_t^{\alpha}f(t)=\frac{1}{\Gamma\left(m-\alpha\right)}\frac{\dif^{m}}{\dif t^{m}}\int_a^t\frac{f(\tau)}{\left(t-\tau\right)^{\alpha-m+1}}\,\dif\tau,\tag{2}\end{equation}

where $m-1\lt\alpha\lt m$, $m\in\mathbb{N}$, $\alpha\in\mathbb{R}^{+}$ and $\Gamma\left(\cdot\right)$ is Euler’s gamma function.

Definition 2. (Caputo definition)

\begin{equation} _a\mathscr{D}_t^{\alpha}f(t)=\frac{1}{\Gamma\left(m-\alpha\right)}\int_a^t\frac{f^{(m)}(\tau)}{\left(t-\tau\right)^{\alpha-m+1}}\,\dif\tau,\tag{3}\end{equation}

where $m-1\lt\alpha\lt m$, $m\in\mathbb{N}$.

Definition 3. (Grünwald-Letnikov definition)

\begin{equation} _a\mathscr{D}_t^{\alpha}f(t)=\lim_{h\to0}h^{-\alpha}\sum_{j=0}^{\left[(t-a)/h\right]}\left(-1\right)^{j}\binom{\alpha}{j}f(t-jh),\tag{4}\end{equation}

where $\left[\cdot\right]$ means the integer part.

3 Fractional operator properties

If $f(t)$ is an analytic function, then the fractional-order derivative $_{0}\mathscr{D}_{t}^{\alpha}f(t)$ is also analytic with respect to $t$. If $\alpha=n$ and $n\in\mathbb{Z}^{+}$, then the operator $_{0}\mathscr{D}_{t}^{\alpha}$ can be understood as the usual operator $\dif\,^{n}/\dif\, t^{n}$. The operator of order $\alpha=0$ is the identity operator: $_{0}\mathscr{D}_{t}^{0}f(t)=f(t)$. Fractional-order differentiation is linear; if $a,\, b$ are constants, then \begin{equation} _{0}\mathscr{D}_{t}^{\alpha}\left[af(t)+bg(t)\right]=a\,_{0}\mathscr{D}_{t}^{\alpha}f(t)+b\,{}_{0}\mathscr{D}_{t}^{\alpha}g(t).
\tag{5}\end{equation}
For fractional-order operators with $\Re(\alpha)>0,\,\Re(\beta)>0$, and under reasonable constraints on the function $f(t)$, the additive law of exponents holds:
\begin{equation} _{0}\mathscr{D}_{t}^{\alpha}\left[_{0}\mathscr{D}_{t}^{\beta}f(t)\right]={}_{0}\mathscr{D}_{t}^{\beta}\left[_{0}\mathscr{D}_{t}^{\alpha}f(t)\right]={}_{0}\mathscr{D}_{t}^{\alpha+\beta}f(t). \tag{6}\end{equation}
The fractional-order derivative commutes with the integer-order derivative,
\begin{equation} \frac{\dif\,^{n}}{\dif\, t^{n}}\,\left(_{a}\mathscr{D}_{t}^{\alpha}f(t)\right)={}_{a}\mathscr{D}_{t}^{\alpha}\left(\frac{\dif\,^{n}f(t)}{\dif\, t^{n}}\right)={}_{a}\mathscr{D}_{t}^{\alpha+n}f(t), \tag{7}\end{equation}
provided that at $t=a$ we have $f^{(k)}(a)=0$ for $k=0,\,1,\,2,\,\dots,\, n-1$.

4 Computation examples

Example 1. Let us compute the Riemann–Liouville fractional derivative of order $\alpha=\frac{1}{2}$ of the elementary function $f(t)=t^2$, taking $a=0$:
$$_{0}\mathscr{D}_{t}^{1/2}\,t^{2}=\frac{\Gamma(3)}{\Gamma\left(3-\frac{1}{2}\right)}\,t^{3/2}=\frac{2}{\Gamma\left(\frac{5}{2}\right)}\,t^{3/2}=\frac{8}{3\sqrt{\pi}}\,t^{3/2}.$$
Let us show that Caputo’s definition yields the same result in this case: since $f(0)=f'(0)=0$, the Riemann–Liouville and Caputo derivatives coincide, and Definition 2 evaluates to the same expression $\frac{8}{3\sqrt{\pi}}\,t^{3/2}$.

Example 2. Compute the fractional derivative of order $\alpha=\frac{1}{3}$ of the function $f_{1}(t)=\mathrm{e}^{5t}$ and the fractional derivative of order $\alpha=\frac{1}{2}$ of the function $f_{2}(t)=\sin(3t)$. In this case, to obtain the derivative we use the Riemann–Liouville definition taking $a=-\infty$:
$$_{-\infty}\mathscr{D}_{t}^{1/3}\,\mathrm{e}^{5t}=5^{1/3}\,\mathrm{e}^{5t}.$$
We can compute the fractional derivative of the function $f_2(t)$ in the same way:
$$_{-\infty}\mathscr{D}_{t}^{1/2}\,\sin(3t)=\sqrt{3}\,\sin\!\left(3t+\frac{\pi}{4}\right).$$

5 Thoughts on the meaning of the fractional operator

The reader might wonder why the title of this section begins with “thoughts”. Could there be no proper explanation of the physical and geometrical meaning of fractional differentiation? Unfortunately, no clear, intuitive interpretation exists so far, although several papers shed some light on the matter, e.g. [8, 9].
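As an aside on computation: the Grünwald–Letnikov sum of Definition 3 lends itself directly to a numerical check of Example 1. A minimal Python sketch (my own illustration, not part of the paper; the function name and step size are arbitrary choices) evaluates the truncated sum with the binomial coefficients built up recursively:

```python
import math

def gl_fractional_derivative(f, t, alpha, h=1e-3):
    """Grunwald-Letnikov approximation of the fractional derivative with
    lower limit a = 0: h**(-alpha) * sum_j (-1)**j * C(alpha, j) * f(t - j*h)."""
    n = int(t / h)
    total, coeff = 0.0, 1.0          # coeff tracks (-1)**j * binom(alpha, j)
    for j in range(n + 1):
        total += coeff * f(t - j * h)
        coeff *= (j - alpha) / (j + 1)   # recursion for the next coefficient
    return total / h**alpha

# Example 1: half-derivative of f(t) = t^2 at t = 1.
approx = gl_fractional_derivative(lambda t: t * t, 1.0, 0.5)
exact = 2 / math.gamma(2.5)          # Gamma(3)/Gamma(5/2) * t^(3/2) at t = 1
print(approx, exact)
```

The truncation error of this first-order scheme shrinks linearly with the step size $h$.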
Additionally, one may look for interpretations of fractional operators in other directions as well:

- fractal theory;
- correspondence to integer-order derivatives, which may be considered a particular case of fractional derivatives.

It is also worth noting an apparent reason for the difficulty in understanding the fractional derivative. Integer-order derivatives were developed with clear applications in mind, and so were the primary object of development, whereas the applications of fractional derivatives, although considered by, e.g., Leibniz and L’Hôpital, were not clear at the time. Since the field of applications of fractional calculus is rapidly growing, it is perhaps safe to say that a clear geometric and physical interpretation of the fractional-order derivative will eventually arise.

References

[1] K. Miller and B. Ross, An Introduction to the Fractional Calculus and Fractional Differential Equations. Wiley, 1993.
[2] R. Hilfer, Applications of Fractional Calculus in Physics. World Scientific, 2000.
[3] A. A. Kilbas, H. M. Srivastava, and J. J. Trujillo, Theory and Applications of Fractional Differential Equations, Volume 204 (North-Holland Mathematics Studies). New York, NY, USA: Elsevier Science Inc., 2006.
[4] I. Podlubny, Fractional Differential Equations, ser. Mathematics in Science and Engineering. Academic Press, 1999.
[5] Y. Q. Chen, I. Petráš, and D. Xue, “Fractional order control – A tutorial,” in Proc. ACC ’09. American Control Conference, 2009, pp. 1397–1411.
[6] D. Xue, Y. Chen, and D. P. Atherton, Linear Feedback Control: Analysis and Design with MATLAB (Advances in Design and Control), 1st ed. Philadelphia, PA, USA: Society for Industrial and Applied Mathematics, 2008.
[7] C. A. Monje, Y. Chen, B. Vinagre, D. Xue, and V. Feliu, Fractional-order Systems and Controls: Fundamentals and Applications, ser. Advances in Industrial Control. Springer Verlag, 2010.
[8] I. Podlubny, “Geometric and physical interpretation of fractional integration and fractional differentiation,” Fractional Calculus & Applied Analysis, vol. 5, no. 4, pp. 367–386, 2002.
[9] J. A. T. Machado, “A probabilistic interpretation of the fractional order differentiation,” Fractional Calculus & Applied Analysis, vol. 6, no. 1, pp. 73–80, 2003.
November 10th, 2016, 05:19 PM # 1 Member Joined: Aug 2016 From: South Korea Posts: 55 Thanks: 0 Help!! How do I find a table of values for an undetermined limit? The given problem is 3x^2 + 2x/x, x > 0, and the only answer I've got is 0/0. How do I make a table of values for this, and how can I determine whether the limit exists or not? November 10th, 2016, 05:45 PM # 2 Senior Member Joined: Sep 2013 From: Earth Posts: 827 Thanks: 36 Is this your question given? $\displaystyle \lim_{x\rightarrow 0} 3x^2+\frac{2x}{x}$ If this is the question given, then the answer is 2. November 11th, 2016, 03:28 AM # 3 Math Team Joined: Jan 2015 From: Alabama Posts: 3,264 Thanks: 902 By "table of values", I think you mean just a table of values of your function for x close to 0. If you mean what jiasyuen suggested (which is what you wrote), then
x= 1, y= 3+ 2/1= 5
x= 1/2, y= (3/4)+ 1/(1/2)= 2.75
x= 1/4, y= (3/16)+ (1/2)/(1/4)= 2.1875
etc. If you mean, as I suspect, $\frac{3x^2+ 2x}{x}$, then
x= 1, y= (3+ 2)/1= 5
x= 1/2, y= (3/4+ 1)/(1/2)= 2(7/4)= 7/2= 3.5
x= 1/4, y= (3/16+ 1/2)/(1/4)= 4(11/16)= 11/4= 2.75
x= 1/8, y= (3/64+ 1/4)/(1/8)= 8(19/64)= 19/8= 2.375
etc. You don't have to choose powers of two; that was my choice. Just use a sequence of numbers getting closer and closer to 0. But you really should not have to look at numbers like that; for one thing, there exist functions that get "closer and closer" to what looks like a limit but then, when you are really close, suddenly change. Instead use the fact that $\frac{3x^2+ 2x}{x}= \frac{x(3x+ 2)}{x}$ and, as long as x is not 0, those 'x's cancel: $\frac{3x^2+ 2x}{x}= 3x+ 2$. The limit, as x goes to 0, is 3(0)+ 2= 2.
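For completeness, the table in the second reply can be generated in a few lines of Python (a sketch of my own, not from the thread):

```python
# f(x) = (3x^2 + 2x)/x, the second interpretation in the reply above.
def f(x):
    return (3 * x**2 + 2 * x) / x

# Evaluate along x = 1, 1/2, 1/4, ... approaching 0 from the right.
for k in range(5):
    x = 1 / 2**k
    print(f"x = {x:<8} f(x) = {f(x)}")
# The values 5.0, 3.5, 2.75, 2.375, 2.1875 head toward the limit 2,
# consistent with the cancellation f(x) = 3x + 2 for x != 0.
```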
Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV with ALICE (Elsevier, 2017-11) Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ...
Let me throw some water on your goal and any nice proof. For my article on this, see: Harris, D.E. (2017) The Distribution of Returns. Journal of Mathematical Finance, 7, 769-804. Let us use even weaker assumptions than your assumption that $S_t$ is stationary for all $t$; in fact, more Markowitz-style assumptions. Our first assumption is that there are very many buyers and very many sellers. Normally this is used to motivate the absence of liquidity costs, but we are going to repurpose it, as it has other consequences that no one noticed. Stocks are sold in a double auction, so there is no winner's curse, and as a consequence the rational behavior is to bid your expectation. With many buyers and many sellers bidding their expectations, the limit book will converge to normality as the number of bids becomes large enough. We could also simply assume that stock prices are drawn from a normal distribution; the weakness of that assumption is that it does not cover things such as auctions at Christie's, which are subject to the winner's curse. See the paper for the solution to that problem. So let $R_t=\frac{S_{t+1}}{S_t}$. We will call this the reward for investing; subtracting one makes it the return on investing. We will ignore the $-1$ as it changes nothing and is just a little extra work. Now the question is: what is the distribution of $R_t$? $S_t$ and $S_{t+1}$ are actual data, while $R_t$ is not data but a statistic; that is to say, a function of the data. As is well known from Curtiss, J.H. (1941) On the Distribution of the Quotient of Two Chance Variables.
Annals of Mathematical Statistics, 12, 409-421, the density of the ratio $Z=\frac{Y}{X}$ of continuous random variables is $$p(z)=\int_{-\infty}^\infty|x|f(x,zx)\,\mathrm{d}x.$$ For normally distributed variables in equilibrium, the solution is very well known and goes back in various forms to Fermat and Cardano: $$\frac{1}{\pi}\frac{\sigma}{\sigma^2+(z-\mu)^2}.$$ This assumes returns are allowed to be infinitely negative. If you restrict the domain, then the constant of integration changes from $$\pi^{-1}$$ to $$\left[\frac{\pi}{2}+\tan^{-1}\left(\frac{\mu}{\sigma}\right)\right]^{-1}.$$ For our purposes the constant of integration does not matter, though dropping it would create a serious estimation error in the real world. The above distribution is famous for a variety of reasons. When Laplace first sent his proof of what we now call the "central limit theorem" to his former student Poisson, Poisson returned the proof with an exception to when the rule holds: it fails when the distribution is as above. From that observation, when this distribution is present you can no longer use things such as t-tests, F-tests and so forth, subject to the qualification that once the sample size exceeds 100, the t-test will work if you hold it to one degree of freedom. You can find a discussion of this in: Fama, E. F. and Roll, R. (1968). Some properties of symmetric stable distributions. Journal of the American Statistical Association, 63(323), 817–836. However, the Fama and Roll discussion does not apply to the case of liability limited to $-100\%$; I am building a separate discussion for that in another paper. The next appearance of this distribution is in a battle between Augustin Cauchy and Irénée-Jules Bienaymé. Cauchy had just published a method of regression in a journal article; Bienaymé produced an article showing that ordinary least squares was the "best" way to do regression.
Cauchy took this as a personal attack and went to work to determine when OLS will ALWAYS fail with probability 1. Whenever the above distribution is present, OLS produces purely spurious results. The reason is that this distribution, which has acquired the moniker "the Cauchy distribution," has no mean and so cannot have a variance. While the Cauchy principal value is $\mu$, higher moments do not exist, even about the principal value; the second raw moment is infinite or does not exist, depending on how you define the integral. As to estimating the scale parameter of returns, you cannot use a non-Bayesian method; there does not exist an unbiased admissible Frequentist estimator for real data. I have estimated the scale parameter for all disaggregated equity securities in another paper, but for one security what you should do is solve: $$\Pr(\sigma|\mathbf{R})=\int_{-\infty}^\infty\frac{\prod_{i=1}^n{\left[\frac{\pi}{2}+\tan^{-1}\left(\frac{\mu}{\sigma}\right)\right]^{-1}}\frac{\sigma}{\sigma^2+(R_i-\mu)^2}\Pr(\mu,\sigma)}{\int_0^\infty\int_{-\infty}^\infty{\prod_{i=1}^n{\left[\frac{\pi}{2}+\tan^{-1}\left(\frac{\mu}{\sigma}\right)\right]^{-1}}\frac{\sigma}{\sigma^2+(R_i-\mu)^2}\Pr(\mu,\sigma)}\,\mathrm{d}\mu\,\mathrm{d}\sigma}\,\mathrm{d}\mu.$$ The posterior density of $\sigma$ is well behaved. If you need a point estimate, you can minimize a cost function over the density. You should even be able to use quadratic loss because, for a sufficiently flat prior, the posterior density should converge to the ratio distribution of two standard-deviation distributions. I haven't taken the time to prove that, however; it is possible that it is not true, but it should be, as $\sigma$ is the ratio of the standard deviation of $S_{t+1}$ to the standard deviation of $S_t$.
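To make the estimation concrete, the posterior for $\sigma$ can be approximated on a grid. The following is a simplified Python sketch of my own, not the paper's method: it uses the plain, untruncated Cauchy likelihood, holds $\mu$ fixed rather than integrating it out, and puts a flat prior on $\sigma$, so the posterior is just the normalised likelihood.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, true_sigma = 0.0, 2.0
r = mu + true_sigma * rng.standard_cauchy(500)   # simulated "rewards"

# Grid over the scale parameter; flat prior on sigma.
sigma = np.linspace(0.5, 5.0, 451)

# Log-likelihood of the (untruncated) Cauchy density at each grid point.
loglik = np.array([
    np.sum(np.log(s / (np.pi * (s**2 + (r - mu)**2)))) for s in sigma
])
post = np.exp(loglik - loglik.max())
post /= post.sum() * (sigma[1] - sigma[0])       # normalise on the grid

sigma_hat = sigma[np.argmax(post)]               # posterior mode
print(sigma_hat)
```

Despite the data having no mean, the posterior over the scale parameter is well concentrated, which is the point of the Bayesian route.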
You will want to restrict your prior probabilities $\Pr(\mu,\sigma)$ to proper priors, as I have not found generalized Bayes rules in the literature for the truncated case, and there is no reason to believe that the posterior is well behaved under the joint distribution of $(\mu,\sigma)$ with a uniform or other improper prior. From this, it should be sufficient to argue that mean-variance finance cannot exist. Consequently, any $\beta$-style model in raw data is invalid. In log-transformed data it is suspect, as the likelihood function is the hyperbolic secant distribution, which admits nothing resembling a covariance matrix. Since nothing can covary, what are you measuring? This is not to say the series cannot co-move: when looking at multiple firms, the returns cannot be independent, yet asymptotically none of them can covary. This is part of what makes this distribution famous: the variables are not independent, but they do not covary as the sample size goes to infinity. Finally, risk-neutral behavior cannot exist at the margin. I know I am making your day. There are two arguments for this. The first isn't a true argument, but it should warrant a pause. If you assume risk aversion, then from de Finetti's coherence principle and the assumption of a willingness to accept all finite bets at stated prices, Kolmogorov's axioms fall out as theorems. If you do not assume risk aversion, this does not happen; you then have to add the assumptions that $$\Pr(A)\ge{0},$$ $$\Pr(\Omega)=1,$$ and, for any countable sequence of disjoint sets, $$\Pr(\cup_{i=1}^\infty{A_i})=\sum_{i=1}^\infty\Pr(A_i).$$ One should pause for a moment when nature provides a solution that both minimizes the assumptions and matches reality sufficiently often. The second argument is from rationality. If the marginal actor were risk-loving, then they would pay a premium to take a risk. This is the same as saying that $K_{t+1}=RK_t+\epsilon_{t+1}$ with $R<1$ for all $t$.
Given sufficient time, the capital stock of the planet would go to zero and all humans would die. This does not mean that risk-loving actors do not exist, nor that they are never the marginal actor; it implies that they can be the marginal actor only a minority of the time. The assumption of risk neutrality was only ever a mathematical convenience created by using the normal distribution. The probability of risk neutrality must be zero by this second argument. It goes like this: risk neutrality exists at exactly one point, and a single point over a continuum of possible points has measure zero and hence probability zero. Even if it were true, it could never be measured, and risk-loving behavior is ruled out as above. Hence, being required to use Bayesian statistics functionally excludes risk-neutral behavior as a possibility. For an extended discussion of the Cauchy distribution see: Why the Cauchy Distribution Has No Mean
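The central claim above, that a ratio of normals yields this heavy-tailed distribution, is easy to check by simulation. A sketch of my own (not from the answer) for the zero-mean case, where the ratio is exactly standard Cauchy:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# The ratio of two independent centered normals is standard Cauchy.
z = rng.normal(size=n) / rng.normal(size=n)

# Standard Cauchy quantiles are tan(pi * (p - 1/2)):
# median 0, quartiles at -1 and +1.
q25, q50, q75 = np.quantile(z, [0.25, 0.5, 0.75])
print(q25, q50, q75)

# The sample mean, by contrast, never settles down, because there is
# no population mean for it to converge to.
running_mean = np.cumsum(z) / np.arange(1, n + 1)
```

Quantile-based summaries recover the distribution's location and scale even though moment-based summaries fail, which is exactly the estimation problem the answer describes.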
Gamma Distribution

Introduction

In this chapter we’ll introduce the Exponential distribution, a one-parameter special case of the Gamma distribution, and, of course, the Gamma distribution itself. The Gamma distribution is used to model random durations of time until a next event. What each event is depends only on the context of the process being modeled; a general example might be the time until the end of the life of someone or something. The Gamma distribution is also used to model random volumes, e.g. rainfall.

Estimating Parameters

Pedagogically, the Exponential and Gamma distributions will give us insight into the difference between likelihood estimates of population parameters and estimates of the mean of a random variable.

Exponential Distribution

Let $X \sim \text{Exponential}(\beta)$. Then $X$ has probability density function

$$f(x) = \beta \mathrm{e}^{-\beta x}$$

for $x \geq 0$ and $\beta > 0$. The parameter $\beta$ measures the rate at which events occur. From this, it’s easy enough to verify that the mean of an exponential random variable is $\mathbb{E}(X) = 1 / \beta$, as derived from

$$\mathbb{E}(X) = \int_0^\infty x \, \beta \mathrm{e}^{-\beta x} \, \mathrm{d}x = \frac{1}{\beta}.$$

Consider a random sample that measures days between rain events at the Winnipeg International Airport (Canada), from the R library DAAG (Maindonald, Braun, & Braun, 2015). These data measure the time between rain events and are thus necessarily positive, as the density plot below shows. The maximum likelihood estimate of the rate parameter is $\hat{\beta} = N / \sum_{n=1}^N X_n$. The exponential density function with the estimated rate parameter $\hat{\beta}$ is drawn over the density plot.
import numpy as np
import pandas as pd
import bplot as bp
from scipy.stats import expon as exponential, gamma

bp.LaTeX()
bp.dpi(300)

df = pd.read_csv("https://vincentarelbundock.github.io/Rdatasets/csv/DAAG/droughts.csv")
beta = 1 / df['length'].mean()  # MLE of the rate parameter

# exponential PDF evaluated at the estimate of beta
x = np.linspace(df['length'].min(), df['length'].max(), 101)
fx = exponential.pdf(x, scale=1 / beta)
bp.curve(x, fx, color='tab:orange')

# estimate of the density itself
bp.density(df['length'])
bp.labels(x='length', y='density', size=18)

Gamma Distribution

Let $X \sim \text{Gamma}(\alpha, \beta)$. Then $X$ has probability density function

$$f(x) = \frac{\beta^\alpha}{\Gamma(\alpha)} x^{\alpha - 1} \mathrm{e}^{-\beta x}$$

for $x \geq 0, \alpha > 0$, and $\beta > 0$. Notice that when $\alpha = 1$ the exponential density function is recovered, since the gamma function evaluated at $1$ equals $1$: $\Gamma(1) = 1$.

The Gamma distribution has one parameter more than the Exponential distribution. In general, more parameters in a model enable better adaptation to the data. Don’t read too much into this, though: better adaptation to a dataset does not guarantee better predictions. We’ll consider this point more closely later on in the course.

The second parameter $\alpha$ is called the shape parameter. Because the shape parameter appears inside the gamma function, there is no closed form for the maximum likelihood estimator of $\alpha$. Instead of maximizing the likelihood function by hand, we’ll use a computer to approximate the parameters that maximize the likelihood function of the Gamma density. The numerically maximized likelihood for the gamma density applied to the same dataset gives estimates $\hat{\alpha} = 0.472$ and $\hat{\beta} = 0.24$. Below, the gamma density function with $(\hat{\alpha}, \hat{\beta})$ overlays the density plot for these data. We see that the Gamma distribution fits these data better than the Exponential distribution.
This happens because the extra shape parameter gives the Gamma distribution more flexibility than the Exponential distribution.

gx = gamma.pdf(x, a=0.472, scale=1 / 0.24)  # MLE estimates from above
bp.curve(x, gx, color='tab:orange')

# estimate of the density itself
bp.density(df['length'])
bp.labels(x='length', y='density', size=18)
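The numerical maximization referred to above can be reproduced with scipy's built-in fitter. A sketch on simulated data, so that it runs without the droughts CSV (the shape and rate values here are illustrative, not the dataset's):

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(2)

# Simulated stand-in for the rain-gap data: Gamma with shape 0.5 and
# rate 0.25 (scipy parameterises by scale = 1/rate).
data = gamma.rvs(a=0.5, scale=1 / 0.25, size=5000, random_state=rng)

# floc=0 pins the location parameter at zero, so only the shape and
# scale are estimated, matching the two-parameter density in the text.
a_hat, loc, scale_hat = gamma.fit(data, floc=0)
beta_hat = 1 / scale_hat
print(a_hat, beta_hat)
```

With a few thousand observations the numerically maximized likelihood recovers the generating parameters closely, which is the same procedure that produced $\hat{\alpha} = 0.472$ and $\hat{\beta} = 0.24$ for the rainfall data.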