"Abstract: We prove, once and for all, that people who don't use superspace are really out of it. This includes QCDers, who always either wave their hands or gamble with lettuce (Monte Zuma calculations). Besides, all nonsupersymmetric theories have divergences which lead to problems with things like renormalons, instantons, anomalons, and other phenomenons. Also, they can't hide from gravity forever."
Can a gravitational field possess momentum? A gravitational wave can certainly possess momentum just like a light wave has momentum, but we generally think of a gravitational field as a static object, like an electrostatic field.
"You have to understand that compared to other professions such as programming or engineering, ethical standards in academia are in the gutter. I have worked with many different kinds of people in my life, in the U.S. and in Japan. I have only encountered one group more corrupt than academic scientists: the mafia members who ran Las Vegas hotels where I used to install computer equipment. "
So I've got a small bottle that I filled with salt. I put it on the scale and its mass is 83 g. I've also got a jug holding 500 g of water. When I put the bottle in the jug, it sank to the bottom. I have to figure out how much salt to take out of the bottle so that the weight of the bottle equals the buoyancy force.
For the buoyancy do I: density of water * volume of water displaced * gravity acceleration?
so: mass of bottle * gravity = volume of water displaced * density of water * gravity?
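Yes, that balance condition is right, and $g$ cancels from both sides. A minimal numeric sketch of it, assuming a hypothetical bottle volume of 70 cm³ (the bottle's volume is not given in the problem, so this value is purely illustrative):

```python
# Neutral buoyancy: weight of bottle = buoyant force
# m_bottle * g = rho_water * V_displaced * g, so m_bottle = rho_water * V_displaced.
RHO_WATER = 1.0   # g/cm^3
V_BOTTLE = 70.0   # cm^3 -- hypothetical value, not given in the problem
m_current = 83.0  # g, bottle + salt as weighed

# Fully submerged, the bottle displaces its own volume of water.
m_neutral = RHO_WATER * V_BOTTLE          # mass at which the forces balance
salt_to_remove = m_current - m_neutral    # grams of salt to take out

print(m_neutral, salt_to_remove)  # 70.0 13.0
```

Note that $g$ never appears in the computation: the condition $m g = \rho V g$ reduces to $m = \rho V$.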
@EmilioPisanty The measurement operators that I suggested in the comments of the post are fine, but I would additionally like to control the width of the Poisson distribution (much as we can do for the normal distribution using the variance). Do you know whether this can be achieved while still maintaining the completeness condition $$\int A^{\dagger}_{C}A_C\,dC = 1?$$
While this request is pending, there exist several client-side workarounds that can be used to enable LaTeX rendering in chat, including: ChatJax, a set of bookmarklets by robjohn to enable dynamic MathJax support in chat, commonly used in the Mathematics chat room; an altern...
You're always welcome to ask. One of the reasons I hang around in the chat room is because I'm happy to answer this sort of question. Obviously I'm sometimes busy doing other stuff, but if I have the spare time I'm always happy to answer.
Though as it happens I have to go now - lunch time! :-)
@JohnRennie It's possible to do it using the energy method. We just need to carefully write down the potential function, which is $U(r)=\frac{1}{2}\frac{mg}{R}r^2$ with zero point at the center of the Earth.
Anonymous
Also, I don't particularly like this SHM problem because it causes a lot of misconceptions. The motion is SHM only under particular conditions :P
I see with concern that the close queue has not shrunk considerably in the last week and is still at 73 items. This may be an effect of increased traffic without increased reviewing, or something else; I'm not sure.
Not sure about that, but the converse is certainly false :P
Derrida has received a lot of criticism from experts in the fields he tried to comment on
I personally do not know much about postmodernist philosophy, so I shall not comment on it myself
I do have strong affirmative opinions on textual interpretation made disjoint from authorial intent, however, which is a central part of Deconstruction theory. But I think that idea dates back to Heidegger.
I can see why a man of that generation would lean towards that idea. I do too.
|
The short rate in the Ho-Lee model is given by :
$$dr_t=\left( \frac{df(0,t)}{dt} +\sigma^2t\right)dt + \sigma dW_t$$
I'm trying to find the bond dynamics given by :
$$dP(t,T)/P(t,T)=r_tdt-\sigma(T-t)dW_t$$
I started from :
$$P(t,T)=E_t[e^{-\int_t^T r_sds}]$$
and I applied Itô to the function $P(t,T)=\phi(t,r)$:
$$d\phi(t,r) = \frac{\partial \phi(t,r)}{\partial t}dt+\frac{\partial \phi(t,r)}{\partial r} dr_t+ \frac{1}{2} \frac{\partial^2\phi(t,r)}{\partial r^2}(dr_t)^2$$
I computed the derivatives :
$$\frac{\partial \phi(t,r)}{\partial t}=r_tP(t,T)$$
$$\frac{\partial \phi(t,r)}{\partial r} = -(T-t)P(t,T)$$
$$\frac{1}{2} \frac{\partial^2\phi(t,r)}{\partial r^2} = \frac{1}{2}(T-t)^2P(t,T)$$
Assembling everything I get :
$$dP(t,T)/P(t,T) = r_tdt-(T-t)\sigma dW_t +\left[ \frac{1}{2}(T-t)^2\sigma^2-(T-t)\left( \frac{df(0,t)}{dt}+\sigma^2t \right) \right] dt $$
I don't know how to get rid of the last $dt$ term. Any help? Or did I get the derivatives wrong? I checked them several times but I don't see where the problem comes from. Thank you
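Whatever the resolution of the $dt$ term, the short-rate dynamics themselves can be sanity-checked by Monte Carlo. A sketch assuming a flat initial forward curve $f(0,t)=f_0$ (so $df(0,t)/dt=0$), under which the Ho-Lee SDE integrates exactly to $r_t = f_0 + \sigma^2 t^2/2 + \sigma W_t$, and the time-0 bond price should reproduce the curve: $P(0,T)=e^{-f_0 T}$ (the parameter values below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
f0, sigma, T = 0.03, 0.01, 1.0   # illustrative flat forward rate and vol
n_steps, n_paths = 100, 50_000
dt = T / n_steps
t = np.linspace(0.0, T, n_steps + 1)

# Exact solution of dr = sigma^2 t dt + sigma dW with flat curve f(0,t) = f0
dW = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
W = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)], axis=1)
r = f0 + 0.5 * sigma**2 * t**2 + sigma * W

# P(0,T) = E[exp(-integral_0^T r_s ds)], trapezoidal rule for the integral
integral = (r[:, :-1] + r[:, 1:]).sum(axis=1) * dt / 2
mc_price = np.exp(-integral).mean()
print(mc_price, np.exp(-f0 * T))  # the two should agree closely
```

The agreement reflects exactly the no-arbitrage fitting of the Ho-Lee drift to the initial curve: the $\sigma^2 t$ drift correction and the convexity of the exponential cancel.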
|
Answer
a. Natural: $\sqrt{100}$ b. Whole: $0,\sqrt{100}$ c. Integer: $-9,0,\sqrt{100}$ d. Rational: $-9,-\displaystyle \frac{4}{5},0.25,9.2,\sqrt{100}$ e. Irrational: $\sqrt{3}$ f. Real: all of them
Work Step by Step
The set of natural numbers is $\{1, 2, 3, 4, 5, \ldots\}$. The set of whole numbers is $\{0, 1, 2, 3, 4, 5, \ldots\}$. The set of integers is $\{\ldots, -4, -3, -2, -1, 0, 1, 2, 3, 4, \ldots\}$. Rational numbers are those that can be written as $\displaystyle \frac{a}{b}$ with $a, b$ integers and $b\neq 0$. Irrational numbers are the real numbers that are not rational (cannot be written as $\displaystyle \frac{a}{b}$ with $a, b$ integers, $b\neq 0$). Real numbers: all rational and irrational numbers. --- $-9$ is a negative integer; it can be written as $\displaystyle \frac{-9}{1}$, so it is also rational. $-\displaystyle \frac{4}{5}$ is a rational number. $0$ is a whole number, an integer, and rational ($\dfrac 01$ is not allowed, but $\dfrac 01$ should read $\dfrac{0}{1}$). $0.25=\displaystyle \frac{25}{100}$ is a rational number. $\sqrt{3}$ is irrational. $9.2=\displaystyle \frac{92}{10}$ is rational. $\sqrt{100}=10$ is natural, whole, an integer, and rational.
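The classifications above can be double-checked mechanically. A small sketch using Python's exact `Fraction` type for the specific numbers in this exercise:

```python
import math
from fractions import Fraction

# sqrt(100) is a perfect square root, hence the natural number 10
assert math.isqrt(100) ** 2 == 100 and math.isqrt(100) == 10

# 3 is not a perfect square, so sqrt(3) is not an integer
# (it is in fact irrational, though that needs a separate proof)
assert math.isqrt(3) ** 2 != 3

# Terminating decimals are rational: they can be written exactly as a/b
assert Fraction("0.25") == Fraction(1, 4)
assert Fraction("9.2") == Fraction(46, 5)

# Every integer n is rational via n/1
assert Fraction(-9, 1) == -9
```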
|
Possible Duplicate: Why does 1/x diverge?
I'm a math tutor. This is a high school level problem. I'm unable to solve this.
What is the value of:
$\lim\limits_{n \to \infty}\sum\limits_{k=1}^n \frac{1}{k}$
From the figure, you can see that the area under the blue-curve is bounded below by the area under the red-curve from $1$ to $\infty$.
The blue-curve takes the value $\frac1{k}$ over an interval $[k,k+1)$
The red-curve is given by $f(x) = \frac1{x}$ where $x \in [1,\infty)$
The green-curve takes the value $\frac1{k+1}$ over an interval $[k,k+1)$
The area under the blue-curve represents the sum $\displaystyle \sum_{k=1}^{n} \frac1{k}$, the area under the red-curve is given by the integral $\displaystyle \int_{1}^{n+1} \frac{dx}{x}$, and the area under the green-curve represents the sum $\displaystyle \sum_{k=1}^{n} \frac1{k+1}$
Hence, we get $\displaystyle \sum_{k=1}^{n} \frac1{k} > \displaystyle \int_{1}^{n+1} \frac{dx}{x} = \log(n+1)$
$\log(n+1)$ diverges as $n \rightarrow \infty$ and hence $$\lim_{n \rightarrow \infty} \displaystyle \sum_{k=1}^{n} \frac1{k} = + \infty$$
By a similar argument, by comparing the areas under the red curve and the green curve, we get $$\displaystyle \sum_{k=1}^{n} \frac1{k+1} < \displaystyle \int_{1}^{n+1} \frac{dx}{x} = \log(n+1)$$ and hence we can bound $\displaystyle \sum_{k=1}^{n} \frac1{k}$ from above by $1 + \log(n+1)$
Hence, $\forall n$, we have $$\log(n+1) < \displaystyle \sum_{k=1}^{n} \frac1{k} < 1 + \log(n+1)$$
Hence, we get $0 < \displaystyle \sum_{k=1}^{n} \frac1{k} - \log(n+1) < 1$, $\forall n$
Hence, if $a_n = \displaystyle \sum_{k=1}^{n} \frac1{k} - \log(n+1)$ we have that $a_n$ is a monotonically increasing sequence and is bounded.
Hence, $\displaystyle \lim_{n \rightarrow \infty} a_n$ exists. This limit is denoted by $\gamma$ and is called the Euler-Mascheroni constant.
It is not hard to show that $\gamma \in (0.5,0.6)$ by looking at the difference in the areas of these graphs and summing up the areas of the approximate triangles.
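The bounds derived above are easy to confirm numerically. A quick sketch checking that $a_n = \sum_{k=1}^n \frac1k - \log(n+1)$ is increasing, stays in $(0,1)$, and settles near the Euler-Mascheroni constant $\gamma \approx 0.5772$:

```python
import math

H = 0.0      # running partial sum of the harmonic series
prev = -1.0  # previous value of a_n
a_n = 0.0
for n in range(1, 100_001):
    H += 1.0 / n
    a_n = H - math.log(n + 1)
    assert 0.0 < a_n < 1.0   # the bound log(n+1) < H_n < 1 + log(n+1)
    assert a_n > prev        # a_n is monotonically increasing
    prev = a_n

print(a_n)  # close to gamma ~ 0.5772156649
```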
You'd think it would converge but it doesn't.
An easy way to see this is to group the terms from $\frac1{2^k+1}$ through $\frac1{2^{k+1}}$. Every term in such a group is at least $\frac1{2^{k+1}}$, so replace each of them with that smallest value:
$$1 + \frac12 + \frac14 + \frac14 + \frac18 + \frac18 + \frac18 + \frac18 + \frac1{16} + \frac1{16} + \cdots$$
Since each group of replaced terms sums to $\frac12$, each time you go from $n = 2^m$ to $n=2^{m+1}$ you add at least $\frac12$, so the partial sums grow without bound and the series cannot converge.
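The grouping argument gives the explicit lower bound $H_{2^m} \geq 1 + m/2$, which can be verified directly for small $m$:

```python
# Partial sums of the harmonic series at n = 2^m are bounded below by 1 + m/2,
# because each block 1/(2^k + 1) + ... + 1/2^(k+1) contributes at least 1/2.
for m in range(0, 16):
    n = 2 ** m
    H = sum(1.0 / k for k in range(1, n + 1))
    assert H >= 1 + m / 2 - 1e-12
```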
It diverges to infinity like $\log n$; see the Wikipedia article on the harmonic series.
|
Please illustrate that a bond with maturity $N$ years whose coupon equals its yield has a conversion factor of 1.
I do this by writing out $$\frac1{100} \left( \sum_{t=1}^N \left[ \frac{100 (0.06)}{1.06^t} \right]+\frac{100}{1.06^N} \right)$$ but I do not get 1.
I use the formula:
$$\sum_{k=m}^n a^k = \begin{cases}\frac{a^{n+1} - a^m}{a-1}, \quad &a \neq 1\\n-m+1, \quad &a=1\end{cases}$$
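Applying that geometric-series formula with $a = 1/1.06$ does give 1: the coupon leg sums to $100(1 - 1.06^{-N})$, which exactly cancels the principal term. A direct numerical check (taking $N=10$ as an illustrative maturity):

```python
# A 6%-coupon bond discounted at a 6% yield prices at par (100),
# so the conversion factor (price / 100) is exactly 1.
N = 10  # illustrative maturity in years
price = sum(100 * 0.06 / 1.06**t for t in range(1, N + 1)) + 100 / 1.06**N
conversion_factor = price / 100
print(conversion_factor)  # 1.0 up to floating-point rounding
```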
|
Regularity under sharp anisotropic general growth conditions
1. Dipartimento di Matematica "U. Dini", Università di Firenze, Viale Morgagni 67/A, 50134 Firenze, Italy
... sharp assumptions on the exponents $p_{i}$ in terms of $\overline{p}^{*}$, the Sobolev conjugate exponent of $\overline{p}$; i.e., $\overline{p}^{*} = \frac{n\overline{p}}{n-\overline{p}}$, where $\frac{1}{\overline{p}} = \frac{1}{n} \sum_{i=1}^{n}\frac{1}{p_{i}}$. As a consequence, by means of regularity results due to Lieberman [21], we obtain the local Lipschitz continuity of minimizers under sharp assumptions on the exponents of anisotropic growth.
Keywords: anisotropic growth conditions, minimizers, $p$-$q$ growth conditions, gradient estimates, $L^{\infty}$-regularity.
Mathematics Subject Classification: Primary: 49N60; Secondary: 35J70.
Citation: Giovanni Cupini, Paolo Marcellini, Elvira Mascolo. Regularity under sharp anisotropic general growth conditions. Discrete & Continuous Dynamical Systems - B, 2009, 11 (1): 67-86. doi: 10.3934/dcdsb.2009.11.67
|
A vertical asymptote is a place where the function becomes infinite, typically because the formula for the function has a denominator that becomes zero. For example, the reciprocal function $f(x)=1/x$ has a vertical asymptote at $x=0$, and the function $\tan x$ has a vertical asymptote at $x=\pi/2$ (and also at $x=-\pi/2$, $x=3\pi/2$, etc.). Whenever the formula for a function contains a denominator it is worth looking for a vertical asymptote by checking to see if the denominator can ever be zero, and then checking the limit at such points. Note that there is not always a vertical asymptote where the denominator is zero: $f(x)=(\sin x)/x$ has a zero denominator at $x=0$, but since $\ds \lim_{x\to 0}(\sin x)/x=1$ there is no asymptote there.
A horizontal asymptote is a horizontal line to which $f(x)$ gets closer and closer as $x$ approaches $\infty$ (or as $x$ approaches $-\infty$). For example, the reciprocal function has the $x$-axis for a horizontal asymptote. Horizontal asymptotes can be identified by computing the limits $\ds \lim_{x \to \infty}f(x)$ and $\ds \lim_{x \to -\infty}f(x)$. Since $\ds \lim_{x \to \infty}1/x=\lim_{x \to -\infty}1/x=0$, the line $y=0$ (that is, the $x$-axis) is a horizontal asymptote in both directions.
Some functions have asymptotes that are neither horizontal nor vertical, but some other line. Such asymptotes are somewhat more difficult to identify and we will ignore them.
If the domain of the function does not extend out to infinity, we should also ask what happens as $x$ approaches the boundary of the domain. For example, the function $\ds y=f(x)=1/\sqrt{r^2-x^2}$ has domain $-r< x< r$, and $y$ becomes infinite as $x$ approaches either $r$ or $-r$. In this case we might also identify this behavior because when $x=\pm r$ the denominator of the function is zero.
If there are any points where the derivative fails to exist (a cusp or corner), then we should take special note of what the function does at such a point.
Finally, it is worthwhile to notice any symmetry. A function $f(x)$ that has the same value for $-x$ as for $x$, i.e., $f(-x)=f(x)$, is called an "even function.'' Its graph is symmetric with respect to the $y$-axis. Some examples of even functions are: $\ds x^n$ when $n$ is an even number, $\cos x$, and $\ds \sin^2x$. On the other hand, a function that satisfies the property $f(-x)=-f(x)$ is called an "odd function.'' Its graph is symmetric with respect to the origin. Some examples of odd functions are: $x^n$ when $n$ is an odd number, $\sin x$, and $\tan x$. Of course, most functions are neither even nor odd, and do not have any particular symmetry.
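These limit-based definitions are easy to check with a computer algebra system. A sketch using SymPy to confirm the asymptotes of $1/x$ and the symmetry examples above:

```python
import sympy as sp

x = sp.symbols('x')

# Horizontal asymptote of 1/x: the limit at +/- infinity is 0, so y = 0
assert sp.limit(1 / x, x, sp.oo) == 0
assert sp.limit(1 / x, x, -sp.oo) == 0

# Vertical asymptote at x = 0: the one-sided limit is infinite
assert sp.limit(1 / x, x, 0, '+') == sp.oo

# But a zero denominator need not give an asymptote: sin(x)/x -> 1
assert sp.limit(sp.sin(x) / x, x, 0) == 1

# Even and odd symmetry: f(-x) = f(x) or f(-x) = -f(x)
assert sp.simplify(sp.cos(-x) - sp.cos(x)) == 0   # cos is even
assert sp.simplify(sp.sin(-x) + sp.sin(x)) == 0   # sin is odd
```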
Exercises 5.5
Sketch the curves. Identify clearly any interesting features, including local maximum and minimum points, inflection points, asymptotes, and intercepts. You can use this Sage worksheet to check your answers. Note that you may need to adjust the interval over which the function is graphed to capture all the details.
Ex 5.5.1$\ds y=x^5-5x^4+5x^3$
Ex 5.5.2$\ds y=x^3-3x^2-9x+5$
Ex 5.5.3$\ds y=(x-1)^2(x+3)^{2/3}$
Ex 5.5.4$\ds x^2+x^2y^2=a^2y^2$, $a>0$.
Ex 5.5.5$\ds y = 4x+\sqrt{1-x}$
Ex 5.5.6$\ds y = (x+1)/\sqrt{5x^2 + 35}$
Ex 5.5.7$\ds y= x^5 - x$
Ex 5.5.8$\ds y = 6x + \sin 3x$
Ex 5.5.9$\ds y = x+ 1/x$
Ex 5.5.10$\ds y = x^2+ 1/x$
Ex 5.5.11$\ds y = (x+5)^{1/4}$
Ex 5.5.12$\ds y = \tan^2 x$
Ex 5.5.13$\ds y =\cos^2 x - \sin^2 x$
Ex 5.5.14$\ds y = \sin^3 x$
Ex 5.5.15$\ds y=x(x^2+1)$
Ex 5.5.16$\ds y=x^3+6x^2 + 9x$
Ex 5.5.17$\ds y=x/(x^2-9)$
Ex 5.5.18$\ds y=x^2/(x^2+9)$
Ex 5.5.19$\ds y=2\sqrt{x} - x$
Ex 5.5.20$\ds y=3\sin(x) - \sin^3(x)$, for $x\in[0,2\pi]$
Ex 5.5.21$\ds y=(x-1)/(x^2)$
For each of the following five functions, identify any vertical and horizontal asymptotes, and identify intervals on which the function is concave up and increasing; concave up and decreasing; concave down and increasing; concave down and decreasing.
Ex 5.5.22$f(\theta)=\sec(\theta)$
Ex 5.5.23$\ds f(x) = 1/(1+x^2)$
Ex 5.5.24$\ds f(x) = (x-3)/(2x-2)$
Ex 5.5.25$\ds f(x) = 1/(1-x^2)$
Ex 5.5.26$\ds f(x) = 1+1/(x^2)$
Ex 5.5.27Let $\ds f(x) = 1/(x^2-a^2)$, where $a\geq0$. Find any vertical and horizontal asymptotes and the intervals upon which the given function is concave up and increasing; concave up and decreasing; concave down and increasing; concave down and decreasing. Discuss how the value of $a$ affects these features.
|
We still have not answered one of our first questions about the steepness of a surface: starting at a point on a surface given by $f(x,y)$, and walking in a particular direction, how steep is the surface? We are now ready to answer the question.
We already know roughly what has to be done: as shown in figure 16.3.1, we extend a line in the $x$-$y$ plane to a vertical plane, and we then compute the slope of the curve that is the cross-section of the surface in that plane. The major stumbling block is that what appears in this plane to be the horizontal axis, namely the line in the $x$-$y$ plane, is not an actual axis—we know nothing about the "units'' along the axis. Our goal is to make this line into a $t$ axis; then we need formulas to write $x$ and $y$ in terms of this new variable $t$; then we can write $z$ in terms of $t$ since we know $z$ in terms of $x$ and $y$; and finally we can simply take the derivative.
So we need to somehow "mark off'' units on the line, and we need a convenient way to refer to the line in calculations. It turns out that we can accomplish both by using the vector form of a line. Suppose that ${\bf u}$ is a unit vector $\langle u_1,u_2\rangle$ in the direction of interest. A vector equation for the line through $(x_0,y_0)$ in this direction is ${\bf v}(t)=\langle u_1t+x_0,u_2t+y_0\rangle$. The height of the surface above the point $(u_1t+x_0,u_2t+y_0)$ is $g(t)=f(u_1t+x_0,u_2t+y_0)$. Because $\bf u$ is a unit vector, the value of $t$ is precisely the distance along the line from $(x_0,y_0)$ to $(u_1t+x_0,u_2t+y_0)$; this means that the line is effectively a $t$ axis, with origin at the point $(x_0,y_0)$, so the slope we seek is $$\eqalign{g'(0)&=\langle f_x(x_0,y_0),f_y(x_0,y_0)\rangle\cdot\langle u_1,u_2\rangle\cr&=\langle f_x,f_y\rangle\cdot{\bf u}\cr&=\nabla f\cdot {\bf u}\cr}$$ Here we have used the chain rule and the derivatives ${d\over dt}(u_1t+x_0)=u_1$ and ${d\over dt}(u_2t+y_0)=u_2$. The vector $\langle f_x,f_y\rangle$ is very useful, so it has its own symbol, $\nabla f$, pronounced "del f''; it is also called the
gradient of $f$.
Example 16.5.1 Find the slope of $z=x^2+y^2$ at $(1,2)$ in the direction of the vector $\langle 3,4\rangle$.
We first compute the gradient at $(1,2)$: $\nabla f=\langle 2x,2y\rangle$, which is $\langle 2,4\rangle$ at $(1,2)$. A unit vector in the desired direction is $\langle 3/5,4/5\rangle$, and the desired slope is then $\langle 2,4\rangle\cdot\langle 3/5,4/5\rangle=6/5+16/5=22/5$.
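The value $22/5$ can be confirmed with a finite-difference approximation of $g'(0)$ along the line through $(1,2)$ in the direction $\langle 3/5,4/5\rangle$ (a numerical sketch; the step size `h` is an arbitrary small number):

```python
# Slope of z = x^2 + y^2 at (1, 2) in the unit direction (3/5, 4/5):
# g(t) = f(1 + 3t/5, 2 + 4t/5), and the slope is g'(0).
f = lambda x, y: x**2 + y**2
u = (3/5, 4/5)
g = lambda t: f(1 + u[0]*t, 2 + u[1]*t)

h = 1e-6
slope = (g(h) - g(-h)) / (2*h)   # central difference for g'(0)
print(slope)  # approximately 22/5 = 4.4
```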
Example 16.5.2 Find a tangent vector to $z=x^2+y^2$ at $(1,2)$ in the direction of the vector $\langle 3,4\rangle$ and show that it is parallel to the tangent plane at that point.
Since $\langle 3/5,4/5\rangle$ is a unit vector in the desired direction, we can easily expand it to a tangent vector simply by adding the third coordinate computed in the previous example: $\langle 3/5,4/5,22/5\rangle$. To see that this vector is parallel to the tangent plane, we can compute its dot product with a normal to the plane. We know that a normal to the tangent plane is $$\langle f_x(1,2),f_y(1,2),-1\rangle = \langle 2,4,-1\rangle,$$ and the dot product is $\langle 2,4,-1\rangle\cdot\langle 3/5,4/5,22/5\rangle=6/5+16/5-22/5=0$, so the two vectors are perpendicular. (Note that the vector normal to the surface, namely $\langle f_x,f_y,-1\rangle$, is simply the gradient with a $-1$ tacked on as the third component.)
The slope of a surface given by $z=f(x,y)$ in the direction of a (two-dimensional) vector $\bf u$ is called the
directional derivative of $f$, written $D_{\bf u}f$. The directional derivative immediately provides us with some additional information. We know that $$D_{\bf u}f=\nabla f\cdot {\bf u}=|\nabla f||{\bf u}|\cos\theta=|\nabla f|\cos\theta$$ if $\bf u$ is a unit vector; $\theta$ is the angle between $\nabla f$ and $\bf u$. This tells us immediately that the largest value of $D_{\bf u}f$ occurs when $\cos\theta=1$, namely, when $\theta=0$, so $\nabla f$ is parallel to $\bf u$. In other words, the gradient $\nabla f$ points in the direction of steepest ascent of the surface, and $|\nabla f|$ is the slope in that direction. Likewise, the smallest value of $D_{\bf u}f$ occurs when $\cos\theta=-1$, namely, when $\theta=\pi$, so $\nabla f$ is anti-parallel to $\bf u$. In other words, $-\nabla f$ points in the direction of steepest descent of the surface, and $-|\nabla f|$ is the slope in that direction.
Example 16.5.3 Investigate the direction of steepest ascent and descent for $z=x^2+y^2$.
The gradient is $\langle 2x,2y\rangle=2\langle x,y\rangle$; this is a vector parallel to the vector $\langle x,y\rangle$, so the direction of steepest ascent is directly away from the origin, starting at the point $(x,y)$. The direction of steepest descent is thus directly toward the origin from $(x,y)$. Note that at $(0,0)$ the gradient vector is $\langle 0,0\rangle$, which has no direction, and it is clear from the plot of this surface that there is a minimum point at the origin, and tangent vectors in all directions are parallel to the $x$-$y$ plane.
If $\nabla f$ is perpendicular to $\bf u$, $D_{\bf u}f=|\nabla f|\cos(\pi/2)=0$, since $\cos(\pi/2)=0$. This means that in either of the two directions perpendicular to $\nabla f$, the slope of the surface is 0; this implies that a vector in either of these directions is tangent to the level curve at that point. Starting with $\nabla f=\langle f_x,f_y\rangle$, it is easy to find a vector perpendicular to it: either $\langle f_y,-f_x\rangle$ or $\langle -f_y,f_x\rangle$ will work.
If $f(x,y,z)$ is a function of three variables, all the calculations proceed in essentially the same way. The rate at which $f$ changes in a particular direction is $\nabla f\cdot{\bf u}$, where now $\nabla f=\langle f_x,f_y,f_z\rangle$ and ${\bf u}=\langle u_1,u_2,u_3\rangle$ is a unit vector. Again $\nabla f$ points in the direction of maximum rate of increase, $-\nabla f$ points in the direction of maximum rate of decrease, and any vector perpendicular to $\nabla f$ is tangent to the level surface $f(x,y,z)=k$ at the point in question. Of course there are no longer just two such vectors; the vectors perpendicular to $\nabla f$ describe the tangent plane to the level surface, or in other words $\nabla f$ is a normal to the tangent plane.
Example 16.5.4 Suppose the temperature at a point in space is given by $T(x,y,z)=T_0/(1+x^2+y^2+z^2)$; at the origin the temperature in Kelvin is $T_0>0$, and it decreases in every direction from there. It might be, for example, that there is a source of heat at the origin, and as we get farther from the source, the temperature decreases. The gradient is $$\eqalign{ \nabla T&=\langle {-2T_0x\over (1+x^2+y^2+z^2)^2}, {-2T_0y\over (1+x^2+y^2+z^2)^2},{-2T_0z\over (1+x^2+y^2+z^2)^2}\rangle\cr &={-2T_0\over (1+x^2+y^2+z^2)^2}\langle x,y,z\rangle.\cr }$$ The gradient points directly at the origin from the point $(x,y,z)$—by moving directly toward the heat source, we increase the temperature as quickly as possible.
Example 16.5.5 Find the points on the surface defined by $x^2+2y^2+3z^2=1$ where the tangent plane is parallel to the plane defined by $3x-y+3z=1$.
Two planes are parallel if their normals are parallel or anti-parallel, so we want to find the points on the surface with normal parallel or anti-parallel to $\langle 3,-1,3\rangle$. Let $f=x^2+2y^2+3z^2$; the gradient of $f$ is normal to the level surface at every point, so we are looking for a gradient parallel or anti-parallel to $\langle 3,-1,3\rangle$. The gradient is $\langle 2x,4y,6z\rangle$; if it is parallel or anti-parallel to $\langle 3,-1,3\rangle$, then $$\langle 2x,4y,6z\rangle=k\langle 3,-1,3\rangle$$ for some $k$. This means we need a solution to the equations $$2x=3k\qquad 4y=-k\qquad 6z=3k$$ but this is three equations in four unknowns—we need another equation. What we haven't used so far is that the points we seek are on the surface $x^2+2y^2+3z^2=1$; this is the fourth equation. If we solve the first three equations for $x$, $y$, and $z$ and substitute into the fourth equation we get $$\eqalign{ 1&=\left({3k\over2}\right)^2+2\left({-k\over4}\right)^2+3\left({3k\over6}\right)^2\cr &=\left({9\over4}+{2\over16}+{3\over4}\right)k^2\cr &={25\over8}k^2\cr }$$ so $\ds k=\pm{2\sqrt2\over 5}$. The desired points are $\ds\left({3\sqrt2\over5},-{\sqrt2\over10},{\sqrt2\over 5}\right)$ and $\ds\left(-{3\sqrt2\over5},{\sqrt2\over10},-{\sqrt2\over 5}\right)$. The ellipsoid and the three planes are shown in figure 16.5.1.
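A quick numerical check of Example 16.5.5, verifying both that the two points lie on the ellipsoid and that the gradient there is a scalar multiple of $\langle 3,-1,3\rangle$:

```python
import math

n = (3, -1, 3)  # normal of the given plane
for sign in (1, -1):
    x = sign * 3 * math.sqrt(2) / 5
    y = -sign * math.sqrt(2) / 10
    z = sign * math.sqrt(2) / 5
    # the point lies on the level surface x^2 + 2y^2 + 3z^2 = 1
    assert abs(x**2 + 2*y**2 + 3*z**2 - 1) < 1e-12
    # the gradient <2x, 4y, 6z> is a scalar multiple k of <3, -1, 3>
    grad = (2*x, 4*y, 6*z)
    k = grad[0] / n[0]
    assert all(abs(g - k*c) < 1e-12 for g, c in zip(grad, n))
```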
Exercises 16.5
Ex 16.5.1Find $D_{\bf u} f$ for $\ds f=x^2+xy+y^2$ in the direction of ${\bf v}=\langle 2,1\rangle$ at the point $(1,1)$.(answer)
Ex 16.5.2Find $D_{\bf u} f$ for $\ds f=\sin(xy)$ in the direction of ${\bf v}=\langle -1,1\rangle$ at the point $(3,1)$.(answer)
Ex 16.5.3Find $D_{\bf u} f$ for $\ds f=e^x\cos(y)$ in the direction 30 degrees from the positive $x$ axis at the point $(1,\pi/4)$.(answer)
Ex 16.5.4The temperature of a thin plate in the $x$-$y$ plane is $\ds T=x^2+y^2$. How fast does temperature change at the point $(1,5)$ moving in a direction 30 degrees from the positive $x$ axis?(answer)
Ex 16.5.5Suppose the density of a thin plate at $(x,y)$ is $\ds 1/\sqrt{x^2+y^2+1}$. Find the rate of change of the density at $(2,1)$ in a direction $\pi/3$ radians from the positive $x$ axis.(answer)
Ex 16.5.6Suppose the electric potential at $(x,y)$ is $\ds\ln\sqrt{x^2+y^2}$. Find the rate of change of the potential at $(3,4)$ toward the origin and also in a direction at a right angle to the direction toward the origin.(answer)
Ex 16.5.7A plane perpendicular to the $x$-$y$ plane contains the point $(2,1,8)$ on the paraboloid $z=x^2+4y^2$. The cross-section of the paraboloid created by this plane has slope 0 at this point. Find an equation of the plane.(answer)
Ex 16.5.8A plane perpendicular to the $x$-$y$ plane contains the point $(3,2,2)$ on the paraboloid $36z=4x^2+9y^2$. The cross-section of the paraboloid created by this plane has slope 0 at this point. Find an equation of the plane.(answer)
Ex 16.5.9Suppose the temperature at $(x,y,z)$ is given by $\ds T=xy+\sin(yz)$. In what direction should you go from the point $(1,1,1)$ to decrease the temperature as quickly as possible? What is the rate of change of temperature in this direction?(answer)
Ex 16.5.10Suppose the temperature at $(x,y,z)$ is given by $\ds T=xyz$. In what direction can you go from the point $(1,1,1)$ to maintain the same temperature?(answer)
Ex 16.5.11Find an equation for the plane tangent to $\ds x^2-3y^2+z^2=7$ at $(1,1,3)$.(answer)
Ex 16.5.12Find an equation for the plane tangent to $\ds xyz=6$ at $(1,2,3)$.(answer)
Ex 16.5.13Find a vector function for the line normal to $\ds x^2+2y^2+4z^2=26 $ at $(2,-3,-1)$.(answer)
Ex 16.5.14Find a vector function for the line normal to $\ds x^2+y^2+9z^2=56$ at $(4,2,-2)$.(answer)
Ex 16.5.15Find a vector function for the line normal to $\ds x^2+5y^2-z^2=0$ at $(4,2,6)$.(answer)
Ex 16.5.16Find the directions in which the directional derivative of$f(x,y)=x^2+\sin(xy)$ at the point $(1,0)$ has the value 1.(answer)
Ex 16.5.17Show that the curve ${\bf r}(t) = \langle\ln(t),t\ln(t),t\rangle$ is tangent to the surface $xz^2-yz+\cos(xy) = 1$ at the point $(0,0,1)$.
Ex 16.5.18A bug is crawling on the surface of a hot plate, the temperature of which at the point $x$ units to the right of the lower left corner and $y$ units up from the lower left corner is given by $T(x,y)=100-x^2-3y^3$.
a. If the bug is at the point $(2,1)$, in what direction should it move to cool off the fastest? How fast will the temperature drop in this direction?
b. If the bug is at the point $(1,3)$, in what direction should it move in order to maintain its temperature?
(answer)
Ex 16.5.19The elevation on a portion of a hill is given by $f(x,y) = 100 - 4x^2 - 2y$. From the location above $(2,1)$, in which direction will water run?(answer)
Ex 16.5.20Suppose that $g(x,y)=y-x^2$. Find the gradient at the point $(-1, 3)$. Sketch the level curve to the graph of $g$ when $g(x,y)=2$, and plot both the tangent line and the gradient vector at the point $(-1,3)$. (Make your sketch large.) What do you notice, geometrically?(answer)
Ex 16.5.21The gradient $\nabla f$ is a vector-valued function of two variables. Prove the following gradient rules. Assume $f(x,y)$ and $g(x,y)$ are differentiable functions.
a. $\nabla(fg)=f\nabla(g)+g\nabla(f)$
b. $\nabla(f/g)=(g\nabla f - f \nabla g)/g^2$
c. $\nabla((f(x,y))^n)=nf(x,y)^{n-1}\nabla f$
|
To familiarize myself with concepts from my system modeling class, I have posed myself the following problem, which I have been struggling with for a good while now. I am an absolute physics beginner, so it is likely I fundamentally misunderstood some things about pressure etc.

The Problem
There are two water tanks with cross-sectional areas $A_1$ and $A_2$, and water level heights $h_1(t)$ and $h_2(t)$ respectively. They are connected by a duct of cross-sectional area $A_d$ and length $l$. Density of water is $\rho$.
Given initial water heights $h_1(0)$ and $h_2(0)$, how can the velocity of the water $v_d$ flowing through the duct be modeled?

My Attempt
I assume that atmospheric pressure is zero and the water velocity at the top of the tanks can be neglected as the duct should be much smaller than the tanks and thus water flows there much faster. By Bernoulli's principle, I think that the pressure at the surface of tank 1 and the pressure right before the duct ($p_1$) should be related as follows:
$\underbrace{\frac{1}{2} 0^2 + \rho g h_1(t) + 0}_\text{top of tank 1} = \underbrace{\frac{1}{2}v_d^2 + 0 + p_1}_{\text{point } p_1}$
Leading to $p_1 = \rho g h_1(t) - \frac{1}{2}v_d^2$. Similarly, for the pressure right at the entrance of the duct at tank 2: $p_2 = \rho g h_2(t) - \frac{1}{2}v_d^2$
Now that I have these two pressures, I tried to model the force acting on the mass of water inside the duct (which is $\rho A_d l$) as follows:
$F = A_d p_1 - A_d p_2 \\ \rho A_d l \cdot \frac{d}{dt} v = A_d ( p_1 - p_2 ) \\ \frac{d}{dt} v = \frac{p_1 - p_2}{\rho l} = \frac{1}{\rho l} \cdot \left( \rho g h_1(t) - \frac{1}{2} v_d^2 - \rho g h_2(t) + \frac{1}{2}v_d^2 \right) = \frac{1}{l} g \left( h_1(t) - h_2(t) \right)$
I modeled the height of the first water tank based on the volume of water in it ($V_1(t)$) as follows:
$\frac{d}{dt}V_1(t) = - v_d(t) \cdot A_d \\ h_1(t) = \frac{V_1(t)}{A_1} \\ \frac{d}{dt}V_2(t) = v_d(t) \cdot A_d \\ h_2(t) = \frac{V_2(t)}{A_2}$
When I run this model in Simulink, the height of the two water tanks keeps fluctuating like a pendulum instead of reaching the expected equilibrium where the water heights are the same.
Questions

Is my assumption correct that this system should arrive at an equilibrium after a while such that $h_1(t) = h_2(t)$? How would I correctly model the velocity inside the duct neglecting friction? How would I correctly model the velocity inside the duct with friction? What is wrong with my modeling attempt? Are the calculated pressures $p_1$ and $p_2$ wrong? Can my attempt be corrected, or is it fundamentally wrong?
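For what it's worth, the frictionless model derived above really does oscillate forever: with no dissipation term it is an undamped oscillator, so the Simulink behavior is consistent with the equations. A sketch integrating the same ODE system with SciPy (the parameter values are arbitrary illustrations, not from the question):

```python
import numpy as np
from scipy.integrate import solve_ivp

A1, A2, Ad, l, g = 1.0, 1.0, 0.01, 1.0, 9.81  # arbitrary illustrative values

def rhs(t, state):
    # State: volumes V1, V2 and duct velocity v, exactly as in the question.
    V1, V2, v = state
    h1, h2 = V1 / A1, V2 / A2
    return [-v * Ad, v * Ad, g * (h1 - h2) / l]

sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 0.5, 0.0], rtol=1e-8, atol=1e-10)
h1 = sol.y[0] / A1

# With no friction, h1 oscillates around the equilibrium (here 0.75 m) forever.
print(h1.min(), h1.max())
```

Adding a dissipation term to $dv/dt$ (for instance a hypothetical linear drag $-c\,v$) makes the heights settle to the common equilibrium level instead of oscillating.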
|
The point is the following:
Delta, $\Delta$, is defined as $\frac{\partial C}{\partial S}$, where $C$ is the value of the call option, and $S$ is the price of the underlying asset.
So, given that the value of a call option for a non-dividend-paying underlying stock in terms of the Black–Scholes parameters is
$$C = N(d_{1})S - N(d_{2})Ke^{-rT},$$
$$\Delta = \frac{\partial C}{\partial S} = N(d_{1}).$$
Basically, Delta is just the first partial derivative of $C$ with respect to $S$.
How to derive $\Delta$
$N(x)$ is the cumulative probability that a variable with a standardized normal distribution will be less than $x$; $N'(x)$ is the probability density function for a standardized normal distribution:
$$N'(x) = \frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}.$$
Then, defining $\tau = T - t$, we have$$ d_{1} = \frac{\ln(\frac{S}{K}) + (r + \frac{\sigma^2}{2})\tau}{\sigma\sqrt{\tau}}$$
and
$$ d_{2} = \frac{\ln(\frac{S}{K}) + (r - \frac{\sigma^2}{2})\tau}{\sigma\sqrt{\tau}}$$
It follows that
$$ N'(d_{1}) = N'(d_{2} + \sigma\sqrt{\tau}) = \frac{1}{\sqrt{2\pi}}e^{-\frac{(d_{2} + \sigma\sqrt{\tau})^2}{2}} = N'(d_{2})e^{-d_{2}\sigma\sqrt{\tau} - \frac{\sigma^2\tau}{2}} = N'(d_{2})\frac{Ke^{-r\tau}}{S}$$
Thus,
$$N'(d_{1})S = N'(d_{2})Ke^{-r\tau}.$$
Then
$$ \frac{\partial d_{1}}{\partial S} = \frac{\partial d_{2}}{\partial S} = \frac{1}{S\sigma\sqrt{\tau}}$$
Since there is an $S$ in $N(d_{1})$ and $N(d_{2})$, we use the chain-rule:
$$ \frac{\partial C}{\partial S} = N(d_{1}) + \frac{\partial d_{1}}{\partial S} N'(d_{1})S - \frac{\partial d_{2}}{\partial S} N'(d_{2})Ke^{-r\tau} = N(d_{1}) + \frac{\partial d_{1}}{\partial S} N'(d_{1})S - \frac{\partial d_{2}}{\partial S} N'(d_{1})S = N(d_{1}) + \frac{1}{S\sigma\sqrt{\tau}} N'(d_{1})S - \frac{1}{S\sigma\sqrt{\tau}} N'(d_{1})S = N(d_{1}).$$
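The identity $\Delta = N(d_1)$ can also be checked numerically: a central finite difference of the call price in $S$ should reproduce $N(d_1)$. A short Python sketch (the parameter values are arbitrary illustrations):

```python
# Finite-difference check that dC/dS = N(d1) for the Black-Scholes call.
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def call_price(S, K, r, sigma, tau):
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    return norm_cdf(d1) * S - norm_cdf(d2) * K * exp(-r * tau)

S, K, r, sigma, tau = 100.0, 95.0, 0.05, 0.2, 0.5   # illustrative values
h = 1e-4
delta_fd = (call_price(S + h, K, r, sigma, tau)
            - call_price(S - h, K, r, sigma, tau)) / (2 * h)
d1 = (log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * sqrt(tau))
print(delta_fd, norm_cdf(d1))   # the two values agree closely
```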
|
Define
$$y(x,s) = \int_0^{\infty} dt \, Y(x,t) \, e^{-s t}$$
Then, integrating by parts:
$$\int_0^{\infty} dt \, Y_t(x,t) \, e^{-s t} = -Y(x,0) + s y(x,s)$$
$$\int_0^{\infty} dt \, Y_{tt}(x,t) \, e^{-s t} = -Y_t(x,0) + s Y(x,0) + s^2 y(x,s)$$
Then using the initial conditions $Y(x,0)=Y_t(x,0)=0$, the PDE becomes the following ODE:
$$y''-2 s y' + s^2 y = 0$$
with boundary conditions $y(0,s)=0$ and $y(1,s)=f(s)$; here the prime denotes differentiation with respect to $x$, and
$$f(s) = \int_0^{\infty} dt \, F(t) \, e^{-s t} $$
The general solution of the ODE is (I will not derive here)
$$y(x,s) = (A + B x) \, e^{s x}$$
Using the boundary conditions, we may find $A$ and $B$ and therefore the LT of the solution to the PDE:
$$y(x,s) = x \, f(s) \, e^{-s (1-x)} $$
We may find the inverse LT by convolution, as we know the individual LT's. The ILT of $f(s)$ is obviously $F(t)$ by definition, and the ILT of $e^{-s (1-x)}$ is $\delta(t-(1-x))$. Therefore, the ILT, and the solution to the equation, is
$$Y(x,t) = x \int_0^t dt' \, F(t') \delta(t-t'-(1-x)) = x F(t-(1-x)) \theta(t-(1-x))$$
where $\theta$ is the Heaviside step function; it is needed because the $\delta$ function contributes nothing to the integral when $t < 1-x$ (since $t' > 0$ over the integration range).
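The original PDE is not restated above, but the transformed ODE $y'' - 2sy' + s^2 y = 0$ together with the standard Laplace-transform rules corresponds to $Y_{xx} - 2Y_{xt} + Y_{tt} = 0$. Under that assumption, the closed-form answer can be sanity-checked numerically by finite differences, taking $F(t)=\sin t$ as an illustrative boundary function and evaluating away from the wavefront kink:

```python
# Finite-difference check that Y(x,t) = x F(t-(1-x)) theta(t-(1-x))
# satisfies Y_xx - 2 Y_xt + Y_tt = 0 (the PDE implied by the ODE above).
from math import sin

def Y(x, t):
    u = t - (1.0 - x)
    return x * sin(u) if u > 0 else 0.0    # theta(t-(1-x)) factor

x0, t0, h = 0.5, 2.0, 1e-3                  # smooth evaluation point, FD step
Yxx = (Y(x0 + h, t0) - 2 * Y(x0, t0) + Y(x0 - h, t0)) / h**2
Ytt = (Y(x0, t0 + h) - 2 * Y(x0, t0) + Y(x0, t0 - h)) / h**2
Yxt = (Y(x0 + h, t0 + h) - Y(x0 + h, t0 - h)
       - Y(x0 - h, t0 + h) + Y(x0 - h, t0 - h)) / (4 * h**2)
residual = Yxx - 2 * Yxt + Ytt
print(f"PDE residual: {residual:.1e}")      # tiny (finite-difference noise)
```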
|
One of the challenges of being a math teacher is getting all the fractions and square roots into your documents. The challenge is especially large on the web. Most websites do not have a way for teachers to enter math symbols, and we have to make do with calculator expressions.
Years ago a colleague, Gayle Taylor, introduced me to www.quia.com, which is good for any subject, but allows me to insert fractions and other math symbols through the use of LaTeX code.
Quia has incredibly easy to use templates for a variety of games, quizzes and surveys. Anywhere I want to insert math symbols I type
<latex>$$</latex>
This allows me to signal that I want to insert math. I then need to learn a few basic tags to render the math symbols.
Notice that the slash in the closing LaTeX tag is the opposite of the backslash used for fractions and other LaTeX code. Also notice that you use a lot of curly braces { } in your code.
If you want your math symbols to render a little larger I use <latex>$\huge$</latex> So to create the quadratic formula I type: <latex>$\huge x = \frac{-b\pm \sqrt{b^2-4ac}}{2a}$</latex>
Another hint: if you are doing exponents you use the caret symbol (^) just like you do in Excel; however, if you want double digits in your exponents you have to use the curly braces. <latex>$x^{10}$</latex> Otherwise the zero in my x to the tenth will not be displayed in the exponent, but rather full size next to it.
One last thing, recently I found this website: http://www.codecogs.com/latex/eqneditor.php that will generate my code for me. I still have to write the <latex>$$</latex> part, but I can copy and paste the rest of it. For large equations I am trying to write, this is a life saver!
|
"Abstract: We prove, once and for all, that people who don't use superspace are really out of it. This includes QCDers, who always either wave their hands or gamble with lettuce (Monte Zuma calculations). Besides, all nonsupersymmetric theories have divergences which lead to problems with things like renormalons, instantons, anomalons, and other phenomenons. Also, they can't hide from gravity forever."
Can a gravitational field possess momentum? A gravitational wave can certainly possess momentum just like a light wave has momentum, but we generally think of a gravitational field as a static object, like an electrostatic field.
"You have to understand that compared to other professions such as programming or engineering, ethical standards in academia are in the gutter. I have worked with many different kinds of people in my life, in the U.S. and in Japan. I have only encountered one group more corrupt than academic scientists: the mafia members who ran Las Vegas hotels where I used to install computer equipment. "
So I've got a small bottle that I filled up with salt. I put it on the scale and its mass is 83 g. I've also got a jug that holds 500 g of water. I put the bottle in the jug and it sank to the bottom. I have to figure out how much salt to take out of the bottle so that the weight force of the bottle equals the buoyancy force.
For the buoyancy do I: density of water * volume of water displaced * gravity acceleration?
so: mass of bottle * gravity = volume of water displaced * density of water * gravity?
@EmilioPisanty The measurement operators that I suggested in the comments of the post are fine, but I additionally would like to control the width of the Poisson distribution (much like we can do for the normal distribution using the variance). Do you know whether this can be achieved while still maintaining the completeness condition $$\int A^{\dagger}_{C}A_CdC = 1$$?
As a workaround while this request is pending, there exist several client-side workarounds that can be used to enable LaTeX rendering in chat, including:ChatJax, a set of bookmarklets by robjohn to enable dynamic MathJax support in chat. Commonly used in the Mathematics chat room.An altern...
You're always welcome to ask. One of the reasons I hang around in the chat room is because I'm happy to answer this sort of question. Obviously I'm sometimes busy doing other stuff, but if I have the spare time I'm always happy to answer.
Though as it happens I have to go now - lunch time! :-)
@JohnRennie It's possible to do it using the energy method. We just need to carefully write down the potential function, which is $U(r)=\frac{1}{2}\frac{mg}{R}r^2$ with zero point at the center of the earth.
Anonymous
Also I don't particularly like this SHM problem because it causes a lot of misconceptions. The motion is SHM only under particular conditions :P
I see with concern the close queue has not shrunk considerably in the last week and is still at 73 items. This may be an effect of increased traffic but not increased reviewing or something else, I'm not sure
Not sure about that, but the converse is certainly false :P
Derrida has received a lot of criticism from the experts on the fields he tried to comment on
I personally do not know much about postmodernist philosophy, so I shall not comment on it myself
I do have strong affirmative opinions on textual interpretation made disjoint from authorial intent, however, which is a central part of Deconstruction theory. But I think that dates back to Heidegger.
I can see why a man of that generation would lean towards that idea. I do too.
|
I like the Connectedness argument, which follows straight from the axioms of a topology. A topological space $\left(\mathbf{X},\,\mathcal{T}\right)$ is connected iff $\mathbf{X}$ and $\emptyset$ are the only subsets of $\mathbf{X}$ which are both open and closed at once. $\mathbf{A} \subset \mathbf{X}$ is both open and closed iff its complement $\mathbf{X} \sim \mathbf{A}$ is also both open and closed, so $\mathbf{X} = \mathbf{A} \bigcup \left(\mathbf{X} \sim \mathbf{A}\right)$ is a union of two disjoint nonempty open sets unless either $\mathbf{A} = \emptyset$ or $\mathbf{A} = \mathbf{X}$.
It is the main idea in "the" (I don't know of any others) proof that a connected topological group $\left(\mathfrak{G},\,\bullet\right)$ is generated by any neighbourhood $\mathbf{N}$ of the group's identity $e$,
i.e. $\mathfrak{G} = \bigcup\limits_{k=1}^\infty \mathbf{N}^k$. Intuitively: you can't have a valid "neighbourhood" in the connected topological space without its containing "enough inverses" of its members to generate the whole group in this way.
For completeness, the proof runs: we consider the set $\mathbf{Y} = \bigcup\limits_{k=1}^\infty \mathbf{N}^k$. For any $\gamma \in \mathbf{Y}$ the map $f_{\gamma} : \mathfrak{G} \to \mathfrak{G};\; f_{\gamma}(x) = \gamma^{-1} x$ is continuous, thus $f_{\gamma}^{-1}\left(\mathbf{N}\right) = \gamma \, \mathbf{N}$ contains an open neighbourhood $\mathbf{O}_{\gamma} \subseteq \gamma\,\mathbf{N}$ of $\gamma$, thus $\mathbf{Z} = \bigcup\limits_{\gamma \in \mathbf{Y}} \mathbf{O}_{\gamma}$ is open. Certainly $\mathbf{Y} \subseteq \mathbf{Z}$, but, since $\mathbf{Y}$ is the collection of all products of a finite number of members of $\mathbf{N}$, we have $\mathbf{Z} \subseteq \mathbf{Y}$, thus $\mathbf{Z} = \mathbf{Y}$ is open. If we repeat the above reasoning for members of the set $\mathbf{X} \sim \mathbf{Y}$, we find that the complement of $\mathbf{Y}$ is also open, thus $\mathbf{Y}$, being both open and closed, must be the whole (connected) space $\mathfrak{G}$.
The above is one of my favourite proofs of all time, up there in my favourite thoughts with Beethoven's ninth and the Bangles' "Walk Like an Egyptian" (or anything by Captain Sensible), and it all hinges on the connectedness argument. It is extremely simple (not trivial, so it itself doesn't count for the Wiki, sadly) and its result unexpected and interesting: you can't define a neighbourhood without including enough inverses. This is an example of "homogeneity" at work: throwing the group axioms into another set of axioms makes a strong brew and tends to be the mathematical analogue of turfing a kilogram chunk of native sodium into a bucket of water: the group operation tends to clone structure throughout the whole space, thus not many axiom systems can withstand this assault by the cloning process and stay consistent. When all the bubbling, fizzing, toiling and trouble is over, only very special systems can be left, thus all kinds of unforeseen results are forced by homogeneity, and the above is a very excitingly typical one.
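The theorem can be made concrete in the simplest connected group, the circle group $U(1)$: for the arc $\mathbf{N} = \{e^{ia} : |a| < \varepsilon\}$ around the identity, $\mathbf{N}^k$ is the arc $|a| < k\varepsilon$, which is all of $U(1)$ as soon as $k\varepsilon > \pi$. A small Python sketch (the $\varepsilon$ values are arbitrary illustrations):

```python
# In U(1), the k-fold product set of the arc |a| < eps around the identity
# is the arc |a| < k*eps; it covers the whole circle once k*eps > pi.
import math

def generates_in(eps):
    """Smallest k with N^k = U(1), where N is the open arc |a| < eps."""
    return math.floor(math.pi / eps) + 1

for eps in (1.0, 0.1, 0.01):
    k = generates_in(eps)
    # k*eps just exceeds pi, while (k-1)*eps does not
    assert k * eps > math.pi and (k - 1) * eps <= math.pi
    print(f"arc of half-width {eps}: N^{k} = U(1)")
```

However small the neighbourhood, finitely many products always suffice, exactly as the proof above asserts.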
|
I was following the textbook by David MacKay:
Information Theory, Inference, and Learning Algorithms.
I have a question on the asymptotic equipartition principle:
For an ensemble of $N$ i.i.d. random variables $X^N=(X_1,X_2,\ldots,X_N)$, with $N$ sufficiently large, the outcome $x=(x_1,x_2,\ldots,x_N)$ is almost certain to belong to a subset of $A_X^N$ having only $2^{NH(X)}$ members, with each member having probability "close to" $2^{-NH(X)}$.
And then in the textbook, it also says that
the typical set does not necessarily contain the most probable elements.
On the other hand, the "smallest sufficient set" $S_{\delta}$ is defined to be:
the smallest subset of $A_X$ satisfying $P(x\in S_{\delta})\ge 1-\delta$, for $0\leq{\delta}\leq1$. In other words, $S_{\delta}$ is constructed by taking the most probable element in $A_X$, then the second most probable, and so on, until the total probability is $\ge 1-\delta$.
My question is,
as $N$ increases, does $S_{\delta}$ approach the typical set, such that the two sets end up essentially equivalent? If the size of the typical set is identical to the size of $S_{\delta}$, then why do we even bother with $S_{\delta}$? Why can't we just take the typical set as our compression scheme instead?
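The book's point that "typical" is not "most probable" can be seen numerically for a Bernoulli source: the single most probable outcome has probability far above $2^{-NH}$, so it lies outside the typical set, even though $S_\delta$ would admit it first. A sketch in Python (the values of $p$ and $N$ are illustrative):

```python
# For a Bernoulli(p) source, compare the log-probability of a typical
# string (-N H) with that of the single most probable string (all zeros).
import math

p, N = 0.1, 100
H = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))   # entropy per symbol
log2_p_typical = -N * H                                # log2-prob of a typical string
log2_p_allzeros = N * math.log2(1 - p)                 # log2-prob of 000...0

print(round(-log2_p_typical), round(-log2_p_allzeros))  # prints: 47 15
# A typical string has probability ~2^-47, the all-zeros string ~2^-15:
# the most probable outcome is wildly atypical.
```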
|
I have found out the answer.
When you do a measurement (one measurement) you have many uncertainty sources. But if you want the combined uncertainty, you don't simply add them like $1 + 1$, because that would give an uncertainty of $2$, which need not be the case. The true value of the measurement may lie in the middle of the uncertainty range, but it may equally lie towards the left or the right of the range.
As we don't know where exactly the true value lies, we treat the situation as a rectangular (uniform) distribution. We have $a=x-\Delta x$ and $b=x+\Delta x$, where $\Delta x$ is the uncertainty.
$$E(X)= \int_a^b xp(x)\mathrm{d}x= \int_a^b \frac{x}{b - a}\mathrm{d}x= \left.\frac{1}{2}\frac{x^2}{b - a}\right|_a^b = \frac{1}{2}\frac{b^2 - a^2}{b - a} = \frac{b + a}{2}$$
$$\begin{align}\sigma^2&= \int_a^b \bigl(x - E(x)\bigr)^2 p(x)\mathrm{d}x \\&= \int_a^b \biggl(x - \frac{b + a}{2}\biggr)^2\frac{1}{b - a}\mathrm{d}x \\&= \left.\frac{1}{3(b - a)}\biggl(x - \frac{b + a}{2}\biggr)^3\right|_a^b \\&= \frac{1}{3(b - a)}\Biggl[\biggl(\frac{b - a}{2}\biggr)^3 - \biggl(\frac{a - b}{2}\biggr)^3\Biggr] \\&= \frac{1}{3}\biggl(\frac{b - a}{2}\biggr)^2\end{align}$$
So we see that the standard deviation of this distribution is exactly our uncertainty divided by $\sqrt3$. Without that our uncertainty would be bigger than it needs to be. And so the final uncertainty of the single measurement is then: $u(x)=\Delta x/\sqrt3$
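A quick Monte Carlo check of the $\Delta x/\sqrt{3}$ result (the half-width value below is an arbitrary illustration):

```python
# Sample a rectangular distribution of half-width dx and compare its
# empirical standard deviation with dx / sqrt(3).
import random, math

random.seed(0)
dx = 0.5                                    # illustrative half-width
samples = [random.uniform(-dx, dx) for _ in range(200_000)]
mean = sum(samples) / len(samples)
std = math.sqrt(sum((s - mean) ** 2 for s in samples) / len(samples))
print(std, dx / math.sqrt(3))               # both close to 0.2887
```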
|
Introduction
In this chapter we will present and use a technique developed by the researchers Kay and Kajiya in 1986. Other acceleration structures since that time have proven to be better than their technique, but their solution can help to lay down the principles upon which most acceleration structures are built. It is a very good starting point and, like the Utah Teapot, will give us an opportunity to introduce some new and interesting techniques that appear in many other computer graphics algorithms (such as the octree structure, for instance). We highly recommend that you read their paper (check the references section at the end of this lesson).
Extent
In the previous chapter, we intuitively showed that simple techniques such as ray tracing against bounding volumes can be used to accelerate ray tracing. As Kay and Kajiya point out in their paper, these techniques are only valid if ray tracing these bounding volumes, or extents as they call them, is much faster than ray tracing the objects themselves. They also point out that if the extent fits an object loosely, then many of the rays intersecting this bounding volume are likely to miss the object inside. On the other hand, if the extent describes the object precisely, then all rays intersecting the extent will also intersect the object. Obviously, the only bounding volume that fits this criterion exactly is the object itself. A good choice for a bounding volume is therefore a shape that provides a good tradeoff between tightness (how closely an extent fits the object) and speed (a complex bounding volume is more expensive to ray trace than a simple shape). This idea is illustrated in figure 1, where the box surrounding the teapot is faster to ray trace than the tighter bounding volume surrounding the teapot in the image below; however, more rays intersecting this box will miss the teapot geometry than in the case of an extent fitting the model more closely. Shapes such as spheres and boxes give pretty good results in most cases, but exploiting the idea of finding a good compromise between simplicity and speed, Kay and Kajiya propose to refine these simple shapes a step further.
A bounding box can be seen as planes intersecting each other. To make this demonstration easier, let's just consider the two-dimensional case. Let's imagine that we need to compute the bounding box of the teapot. Technically this can be seen as the intersection of two planes parallel to the y-axis with two planes parallel to the x-axis (figure 2). We now need to find a way of computing these planes. How do we do that? We have already presented the plane equation in lesson 7, to compute the intersection of a ray with a box, as well as in lesson 9, to compute the intersection of a ray with a triangle. Let's recall that the plane equation (in three dimensions this time) can be defined as:$$Ax + By + Cz - d = 0$$
where the terms A, B, C define the normal of the plane (a vector perpendicular to the plane) and \(d\) is the distance from the world origin to this plane along this vector. The terms x, y, z define the 3D cartesian coordinates of a point lying on the plane. This equation says that, for any point lying on the plane, the sum of the point's coordinates multiplied by the coordinates of the plane's normal, minus \(d\), equals zero. This equation can also be used to project the vertices of the teapot onto a plane and find a value for \(d\). For a given point \(P_{(x, y, z)}\) and a given plane with normal \(N_{(x, y, z)}\) we can solve for \(d\): \(d = P_x N_x + P_y N_y + P_z N_z\).
We can re-write this equation as a more traditional point-matrix multiplication of the form (equation 1):$$d = [P_x P_y P_z]\left[ \begin{array}{c} N_x \\ N_y \\ N_z \end{array} \right] $$
If we project a vertex \(P_{(x, y, z)}\) on the plane parallel to the y-axis with normal \(N_{(1, 0, 0)}\), \(d\) gives the distance along the x-axis from the origin to the plane parallel to the y-axis in which lies \(P_{(x, y, z)}\). If we repeat this process for all the vertices of the model, we can show that the point with the minimum \(d\) value and the point with the maximum \(d\) value, correspond to the object x-coordinate minimum and maximum extent respectively. These two values of \(d\), \(d^{near}\) and \(d^{far}\) describe two planes that bound the object (as showed in figure 3). We can implement this process with the following pseudocode:
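The pseudocode announced above did not survive extraction; as a stand-in, here is a small Python sketch of the projection step (the lesson's own implementation is C++, and the function and variable names here are illustrative assumptions):

```python
def slab_extent(vertices, normal):
    """Project every vertex onto the plane-set normal (d = P . N) and
    return the extreme signed distances (d_near, d_far) of the slab."""
    d = [sum(p * n for p, n in zip(v, normal)) for v in vertices]
    return min(d), max(d)

# unit cube: along the x-axis normal the slab is [0, 1]
cube = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
print(slab_extent(cube, (1, 0, 0)))        # (0, 1)

# along the diagonal normal (s, s, s), s = sqrt(3)/3, it is [0, sqrt(3)]
s = 3 ** 0.5 / 3
print(slab_extent(cube, (s, s, s)))        # (0.0, ~1.732)
```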
In their paper, Kay and Kajiya call the region in space between the two planes a slab, and the normal vector defining the orientation of a slab is termed a plane-set normal. And as they observe: "plane-set normals yield different bounding slabs for an object. The intersection of a set of bounding slabs yields a bounding volume. In order to create a closed bounding volume in 3-space, at least three bounding slabs must be involved, and they must be chosen so that the defining normal vectors span 3-space."
The simplest example of this principle is an axis-aligned bounding box (AABB) which is defined by three slabs respectively parallel to the xz-, yz- and xy-plane (figure 4). However, Kay and Kajiya propose to use not just three but seven slabs to get tighter bounding volumes. The plane-set normals of these slabs are chosen in advance and are independent of the objects to be bounded. To better visualise how this works let's go back again to the two-dimensional case. Let's imagine that the plane-set normals used to bound an object are:$$ \left( \begin{array}{c} 1\\0 \end{array}\right),\; \left(\begin{array}{c} 0\\1 \end{array}\right),\; \left(\begin{array}{c} \dfrac{\sqrt{2}}{2}\\\dfrac{\sqrt{2}}{2} \end{array}\right),\; \text{ and } \left(\begin{array}{c} \dfrac{\sqrt{2}}{2}\\-\dfrac{\sqrt{2}}{2} \end{array} \right) $$
Figure 5 shows an object bounded by these pre-selected plane-set normals (this is a reproduction of figure 3 in Kay and Kajiya's paper).
As you can see, the resulting bounding box fits the object better than a simple bounding box. For the three-dimensional case, they use seven plane-set normals:$$ \begin{array}{l} {\left(\begin{array}{c} 1\\0\\0 \end{array}\right),\; \left(\begin{array}{c} 0\\1\\0 \end{array}\right),\; \left(\begin{array}{c}0\\0\\1 \end{array}\right), } { \left(\begin{array}{c} \dfrac{\sqrt{3}}{3}\\ \dfrac{\sqrt{3}}{3}\\ \dfrac{\sqrt{3}}{3} \end{array}\right),\; \left(\begin{array}{c} -\dfrac{\sqrt{3}}{3}\\ \dfrac{\sqrt{3}}{3}\\ \dfrac{\sqrt{3}}{3} \end{array}\right),\; \left(\begin{array}{c} -\dfrac{\sqrt{3}}{3}\\ -\dfrac{\sqrt{3}}{3}\\ \dfrac{\sqrt{3}}{3} \end{array}\right), \text{ and } \left(\begin{array}{c} \dfrac{\sqrt{3}}{3}\\-\dfrac{\sqrt{3}}{3}\\ \dfrac{\sqrt{3}}{3} \end{array}\right) } \end{array} $$
The first three plane-set normals define an axis-aligned bounding box, and the last four plane-set normals define an eight-sided parallelepiped. To build a bounding volume of an object, we simply find the minimum and maximum value of \(d\) for each slab by projecting the vertices of the model on the seven plane-set normals.
Ray-Volume Intersection
Next, we need to write some code to ray trace the volumes. The principle is very similar to that of the ray-box intersection. A slab is defined by two planes parallel to each other, and if the ray is not parallel to these planes, it will intersect them both, yielding two values, \(t_{near}\) and \(t_{far}\). To compute the intersection distance \(t\) we simply substitute the ray equation \(P = O + Rt\) into the plane equation \(Ax + By + Cz - d = N_i \cdot P_{x,y,z} - d = 0\), yielding (equation 2):$$ \left\{ \begin{array}{l} N_i \cdot (O + Rt) - d= 0\\ t = { \dfrac{ d - N_i \cdot O }{N_i \cdot R} } \end{array} \right. $$
where \(N_i\) is one of the seven plane-set normals, \(O\) and \(R\) are respectively the origin and direction of the ray and \(d\) the distance from the world origin to the plane with normal \(N_i\) in which lies the intersection point \(P_{x,y,z}\). The two terms \(N_i \cdot O\) and \(N_i \cdot R\) can be re-used to compute the intersection distance \(t\) between the ray and the two planes. Substituting the pre-computed values of \(d\) (\(d_{near}\) and \(d_{far}\)) for the tested slab yields a value \(t\) for each plane.
Like with the ray tracing of boxes, care must be taken when the denominator \(N_i \cdot R\) is close to zero. Furthermore, when the denominator is lower than zero we also need to swap \(d_{near}\) and \(d_{far}\) (see figure 6). As for the ray-box intersection (see lesson 7 for more information on the algorithm), this test is performed for each slab enclosing the object. Of all the computed \(t_{near}\) values we will keep the largest one, and of all the computed \(t_{far}\) values, we will keep the smallest one. An intersection with the volume occurs if the final \(t_{far}\) value is greater than the \(t_{near}\) value. If the ray intersects the volume, the \(t\) values indicate the position of these intersections along the ray. The resulting interval (defined by \(t_{near}\) and \(t_{far}\)) is useful as an estimate of the position of the object along the ray. Let's now try to put what we learned into practice.
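Putting equation 2 and the near/far bookkeeping together, here is an illustrative Python sketch of the ray/slab-volume test (this is not the lesson's C++ code; all names are assumptions):

```python
def intersect_slabs(orig, dir_, slabs, eps=1e-12):
    """slabs: list of (normal, d_near, d_far). Returns (hit, t_near, t_far)."""
    t_near, t_far = float("-inf"), float("inf")
    for n, dn, df in slabs:
        NdotO = sum(o * c for o, c in zip(orig, n))
        NdotR = sum(r * c for r, c in zip(dir_, n))
        if abs(NdotR) < eps:                # ray parallel to this slab
            if NdotO < dn or NdotO > df:
                return False, 0.0, 0.0      # origin outside the slab: miss
            continue
        tn = (dn - NdotO) / NdotR           # equation 2 for each plane
        tf = (df - NdotO) / NdotR
        if NdotR < 0:                       # negative denominator: swap
            tn, tf = tf, tn
        t_near, t_far = max(t_near, tn), min(t_far, tf)
        if t_near > t_far:                  # interval empty: miss
            return False, 0.0, 0.0
    return True, t_near, t_far

# axis-aligned unit-cube slabs; a ray along +x pierces the cube
cube = [((1, 0, 0), 0, 1), ((0, 1, 0), 0, 1), ((0, 0, 1), 0, 1)]
hit, tn, tf = intersect_slabs((-1, 0.5, 0.5), (1, 0, 0), cube)
print(hit, tn, tf)   # True 1.0 2.0
```

The returned \((t_{near}, t_{far})\) interval is exactly the estimate of the object's position along the ray described above.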
Source Code
The following C++ code implements the method described in this chapter. A BVH class is derived from the base AccelerationStructure class. We create a structure called Extents in this class which holds the values of \(d_{near} \) and \(d_{far}\) for all seven pre-defined plane-set normals (lines 7-11).
In the constructor of the class we allocate an array of extents to store the bounding volume data for all the objects in the scene (line 3). Then we loop over all the objects and call the method computeBounds from the Object class to compute the values dNear and dFar for each slab enclosing the object (lines 4-8). In the following code snippet we only show this function for the PolygonMesh class. We loop over all the vertices of the mesh and project them on the current plane (lines 27-31). This concludes the work done in the class constructor.
Once the render function is called, rather than intersecting each object of the scene, we call the intersection method from the BVH class. First the ray is tested against all the bounding volumes of all the objects from the scene. To do so, we call the intersect method from the Extent structure with the volume data of the current tested object (line 24). This method simply computes the intersection of the current ray with each of the seven slabs enclosing the object and tracks the greatest of the computed dNear values and the smallest of the computed dFar values. If the final dFar is greater than the final dNear, then an intersection with the bounding volume occurs and the function returns true. In the following version of the code, if the ray intersects the volume we set the member variable N from the IsectData structure with the normal of the intersected plane. The result of the dot product of this vector N with the ray direction is used to set the color of the current pixel. The resulting image can be seen in figure 7. This helps to visualise the bounding volumes being intersected and surrounding the objects.
But in the following and final version of the intersect method, if the bounding volume is intersected, we then test if there is an intersection between the ray and the object (or objects if it is a grid of triangles for example) enclosed by the volume. When the test is successful, we update tClosest if the intersection distance is the smallest we have found so far and keep a pointer to the intersected object.
Finally, if we compile and run the program using our new acceleration structure, we get the following statistics:
This technique is 1.34 times faster than the method from the previous chapter. It might not seem like much, but a few saved seconds on a simple scene can turn into hours on a complex one.
What's Next?
Even though we have improved on the results from chapter 2, this technique still suffers from the fact that the rendering time is proportional to the number of objects in the scene. To improve the performance of this method a step further, Kay and Kajiya suggest using a hierarchy of volumes. Quoting the authors of the paper again:
We will implement this idea in the next chapter.
|
Consider each of the following encryption schemes and state whether the scheme is perfectly secret or not. Justify your answer by giving a detailed proof if your answer is Yes, and a counterexample if your answer is No.
Consider an encryption scheme whose plaintext space is $\mathcal{M}=\{m\in\{0,1\}^\ell \mathrel{|} \text{the last bit of $m$ is $0$}\}$ and key generation algorithm chooses a uniform key from the key space $\mathcal{K}=\{0,1\}^{\ell-1}$. Suppose $\mathit{Enc}_k(m)=m \oplus (k\parallel 0)$ and $\mathit{Dec}_k(c)=c\oplus (k\parallel 0)$.
$\newcommand{\given}{\mathrel{|}}$The definition of perfectly secret states:
An encryption scheme $(\mathit{Gen}, \mathit{Enc}, \mathit{Dec})$ with message space $\mathcal{M}$ is perfectly secret if for every probability distribution over $\mathcal{M}$, every message $m\in \mathcal{M}$, and every ciphertext $c\in \mathcal{C}$ for which $\Pr[C=c]>0$: $$\Pr[M=m\given C=c]=\Pr[M=m].$$
We first compute $\Pr[C=c\given M=m']$ for arbitrary $c\in \mathcal{C}$ and $m'\in \mathcal{M}$. \begin{equation*} \begin{aligned} \Pr[C=c\given M=m'] & =\Pr[\mathit{Enc}_K(m')=c]=\Pr[m' \oplus (K\parallel 0)=c] \\ & =\Pr[(K\parallel 0) = c\oplus m']=2^{1-\ell}\quad (1) \end{aligned} \end{equation*} where the final equality holds because the key $K$ is a uniform $\ell-1$-bit string. Fix any distribution over $\mathcal{M}$. For any $c\in \mathcal{C}$, we have \begin{equation*} \begin{aligned} \Pr[C=c] & = \sum_{m'\in\mathcal{M}} \Pr[C=c\given M=m'] \cdot \Pr[M=m'] \\ & = 2^{1-\ell} \cdot \sum_{m'\in \mathcal{M}} \Pr[M=m']=2^{1-\ell}\cdot 1=2^{1-\ell}\quad (2) \end{aligned} \end{equation*} where the sum is over $m'\in \mathcal{M}$ with $\Pr[M=m']\neq 0$. Bayes' Theorem gives: \begin{equation*} \begin{aligned} \Pr[M=m\given C=c] & = \dfrac{\Pr[C=c\given M=m]\cdot \Pr[M=m]}{\Pr[C=c]} \\ & = \dfrac{2^{1-\ell} \cdot \Pr[M=m]}{2^{1-\ell}} = \Pr[M=m] \end{aligned} \end{equation*} Hence we conclude that this encryption scheme is perfectly secret.
MY QUESTION: I tried to follow the setup for the proof that the one-time pad is perfectly secret. However, I don't really understand the logic behind the proof (assuming what I did was correct). Can someone clear up why this technique works?
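One way to gain confidence in the key step (1) is to check it exhaustively for a tiny parameter. The following Python sketch verifies, for $\ell = 3$, that every ciphertext is reachable from a given message under exactly one key, so $\Pr[C=c \given M=m] = 2^{1-\ell}$:

```python
# Exhaustive check of Pr[C=c | M=m] = 2^(1-l) for the scheme above, l = 3.
l = 3
msgs = [m for m in range(2 ** l) if m % 2 == 0]      # last bit of m is 0
keys = list(range(2 ** (l - 1)))                     # k in {0,1}^(l-1)

for m in msgs:
    counts = {}
    for k in keys:
        c = m ^ (k << 1)                             # (k || 0) is k shifted left
        counts[c] = counts.get(c, 0) + 1
    # each reachable ciphertext arises from exactly one key
    assert all(v == 1 for v in counts.values()) and len(counts) == len(keys)
print("Pr[C=c | M=m] =", 1 / len(keys))              # 0.25 = 2^(1-3)
```

This is exactly the uniformity that makes the Bayes-theorem step go through: the ciphertext distribution is the same regardless of the message.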
|
Is the following proof valid?
Let $X$ and $Y$ be jointly continuous random variables such that the joint density function is given by $$ f(x,y) = \begin{cases} ye^{-(x+y)} & \text{ for } x>0, y>0 \\ 0 & \text{ otherwise } \end{cases} $$
Then $X$ and $Y$ are dependent.
Proof
Let $X$ and $Y$ be continuous random variables. Let the probability density function of $X$ be given by $f_X(x) = e^{-x}$. Let the probability density function of $Y$ be given by $f_Y(y) = ye^{-y}$. $$ f_X(x)f_Y(y) = (e^{-x})(ye^{-y})= ye^{-(x+y)} $$ Jointly continuous random variables are independent if and only if $ f(x,y) = f_X(x) f_Y(y) \hspace{8 px} \forall \hspace{7 px} x, y \in \mathbb{R}$. Thus, to prove that $X$ and $Y$ are dependent, it suffices to show that there exists some pair of real numbers $x, y$ such that $f(x, y) \neq f_X(x) f_Y(y)$. Suppose $x = -1$ and $y = -1$. Then $f(x, y) = 0$. We also have that $ f_X(-1)f_Y(-1) = -e^2 \neq 0 $. So $X$ and $Y$ are dependent.
|
I'm reading the following set of notes on Taylor series and big O-notation, written by a professor at Columbia: http://www.math.columbia.edu/~nironi/taylor2.pdf. He repeatedly refers to what he calls "limit comparison", by which he means the theorem that for $a_n, b_n$ sequences of positive real numbers such that $b_n \to C>0$, we have that $\sum a_n <\infty$ iff $\sum a_n b_n < \infty$. On page 13, he is manipulating a series using O notation, and he ends up with $$\sum\limits_{n=1}^{\infty}(-1)^n\left(\frac{1}{n}\right)\frac{\frac{7}{12}+O(1/n^2)}{-\frac{1}{6}+O(1/n^2)},$$at which point he says "At this point we might be tempted to use limit comparison and conclude that the series is convergent; but limit comparison cannot be applied to an alternating series. Instead of using limit comparison we try to separate the part of the series that converges but not absolutely from the part that converges absolutely."
Now, in reference to the theorem he calls "limit comparison", can't we strengthen this theorem to say that if $a_n$ and $b_n$ are sequences of reals, and $b_n \to C \neq 0$, then $\sum a_n <\infty \iff \sum a_n b_n < \infty$? If not, what is a relevant counter-example? And if so, can't we use this to conclude that the above series converges?
My second question is that I do not understand the algebraic manipulation he does directly after the sentence quoted above. I suppose he is trying to "separate the part of the series that converges but not absolutely from the part that converges absolutely", but I don't know what this means, or what he is doing. This is all on page 13.
Thanks for your help.
|
ISSN: 1078-0947
eISSN: 1553-5231
Discrete & Continuous Dynamical Systems - A
November 2002, Volume 8, Issue 4
Abstract:
This paper discusses two numerical schemes that can be used to approximate inertial manifolds whose existence is given by one of the standard methods of proof. The methods considered are fully numerical, in that they take into account the need to interpolate the approximations of the manifold between a set of discrete gridpoints. As all the discretisations are refined the approximations are shown to converge to the true manifold.
Abstract:
The spectrum of dimensions for Poincaré recurrences of Markov maps is obtained by constructing a sequence of approximating maps whose spectra are known to be solution of non-homogeneous Bowen equations. We prove that the spectrum of the Markov map also satisfies such an equation.
Abstract:
We study the asymptotic behavior of solutions of the damped linear system $u_{t t}(t)+Au(t)+Bu_t(t)=0$, $t\geq 0$, in the context of Hilbert spaces. We present abstract theorems on the decay rate; moreover, an adequate example is presented to illustrate these results.
Abstract:
Using the technique of Adimurthy, F. Pacella and S.L. Yadava [1], we extend a uniqueness result for a class of non-autonomous semilinear equations in M.K. Kwong and Y. Li [8]. We also observe that, combining the results of [1] with bifurcation theory, one can obtain a detailed picture of the global solution curve for a class of concave-convex nonlinearities.
Abstract:
We describe a global version of the KS regularization of the $n$-center problem on a closed 3-dimensional manifold. The regularized configuration manifold turns out to be a 4- or 5-dimensional closed manifold, depending on whether $n$ is even or odd. As an application, we show that the $n$-center problem in $S^3$ has positive topological entropy for $n\ge 5$ and energy greater than the maximum of the potential energy. The proof is based on the results of Gromov and Paternain on the topological entropy of geodesic flows. This paper is a continuation of [6], where global regularization of the $n$-center problem in $\mathbf R^3$ was studied.
Abstract:
In this paper we prove some regularity and uniqueness results for a class of nonlinear parabolic problems whose prototype is
$\partial_t u - \Delta_N u=\mu$ in $\mathcal D'(Q) $
$u=0$ on $]0,T[\times\partial \Omega$
$u(0)=u_0$ in $ \Omega,$
where $Q$ is the cylinder $Q=(0,T)\times\Omega$, $T>0$, $\Omega\subset \mathbb R^N$, $N\ge 2$, is an open bounded set having $C^2$ boundary, $\mu\in L^1(0,T;M(\Omega))$ and $u_0$ belongs to $M(\Omega)$, the space of Radon measures on $\Omega$, or to $L^1(\Omega)$. The results are obtained in the framework of the so-called grand Sobolev spaces, and represent an extension of earlier results on standard Sobolev spaces.
Abstract:
We study the existence of $2\pi$-periodic solutions for forced nonlinear oscillators at resonance, the nonlinearity being a bounded perturbation of a function deriving from an isochronous potential, i.e. a potential leading to free oscillations that all have the same period. The family of isochronous oscillators considered here includes oscillators with jumping nonlinearities, as well as oscillators with a repulsive singularity, to which a particular attention is paid. The existence results contain, as particular cases, conditions of Landesman-Lazer type. Even in the case of perturbed linear oscillators, they improve earlier results. Multiplicity and non-existence results are also given.
Abstract:
We exhibit an open set of symplectic Anosov diffeomorphisms on which there are discrete "jumps" in the regularity of the unstable subbundle. It is either highly irregular almost everywhere ($C^\epsilon$ only on a negligible set) or better than $C^1$. In the latter case the Hölder exponent of the derivative is either about $\epsilon/2$ or almost 1.
Abstract:
We discuss estimates of the Hausdorff and fractal dimension of a global attractor for the semilinear wave equation
$u_{t t} +\delta u_t -\phi (x)\Delta u + \lambda f(u) = \eta (x), x \in \mathbb R^N, t \geq 0,$
with the initial conditions $ u(x,0) = u_0 (x)$ and $u_t(x,0) = u_1 (x),$ where $N \geq 3$, $\delta >0$ and $(\phi (x))^{-1}:=g(x)$ lies in $L^{N/2}(\mathbb R^N)\cap L^\infty (\mathbb R^N)$. The energy space $\mathcal X_0=\mathcal D^{1,2}(\mathbb R^N) \times L_g^2(\mathbb R^N)$ is introduced, to overcome the difficulties related with the non-compactness of operators, which arise in unbounded domains. The estimates on the Hausdorff dimension are in terms of given parameters, due to an asymptotic estimate for the eigenvalues $\mu$ of the eigenvalue problem $-\phi(x)\Delta u=\mu u, x \in \mathbb R^N$.
Abstract:
We study Cauchy problems associated to partial differential equations with infinite delay where the history function is modified by an evolution family. Using sophisticated tools from semigroup theory such as evolution semigroups, extrapolation spaces, or the critical spectrum, we prove well-posedness and characterize the asymptotic behavior of the solution semigroup by an operator-valued characteristic equation.
Abstract:
We study relaxation for optimal design problems in conductivity in the two-dimensional situation. To this end, we reformulate the optimal design problem in an equivalent way as a genuine vector variational problem, and then analyze relaxation of this new variational problem. Our main achievement is to explicitly compute the quasiconvexification of the involved density in this problem for some interesting cases. We think the method given here could be generalized to compute quasiconvex envelopes in other situations. We restrict attention to the two-dimensional case.
Abstract:
In this paper we construct a triangular map $F$ on $I^2$ with the following property. For each $[a,b]\subseteq I=[0,1]$, $a\leq b$, there exists $(p,q)\in I^2\setminus I_0$ such that $\omega_F(p,q)=\{0\}\times [a,b]\subset I_0$, where $I_0=\{0\}\times I$. Moreover, for each $(p,q)\in I^{2}$, the set $\omega_F(p,q)$ is exactly $\{0\}\times J$ where $J\subset I$ is a compact interval, degenerate or not. So, we describe completely the family $\mathcal W(F)=\{\omega_F(p,q):(p,q)\in I^2\}$ and establish that $\mathcal W(F)$ is the set of all compact intervals, degenerate or not, of $I_0$.
Abstract:
Consider the polynomial perturbations of Hamiltonian vector field
$X_\epsilon=(H_y+\epsilon f(x,y,\epsilon))\frac{\partial}{\partial x}+ (-H_x+\epsilon g(x,y,\epsilon))\frac{\partial}{\partial y},$
where the Hamiltonian $H(x,y)=\frac{1}{2}y^2+U(x)$ has one center and one cuspidal loop, $\deg U(x)=4$. In the present paper we find an upper bound for the number of zeros of the $k$th order Melnikov function $M_k(h)$ for arbitrary polynomials $f(x,y,\epsilon)$ and $g(x,y,\epsilon)$.
Abstract:
In the present paper, using the Leray-Schauder degree theory, we prove the existence of nontrivial solutions for the p-Laplacian with a crossing nonlinearity.
Abstract:
We establish the optimal rate of decay for the global solutions of some nonlinear partial differential equations with dissipation. We apply the well known Fourier splitting technique invented by Maria Schonbek in [1] -- [5] to achieve our goal.
Abstract:
We consider the generalized Liénard system
$\frac{dx}{dt} = \frac{1}{a(x)}[h(y)-F(x)],$
$\frac{dy}{dt}= -a(x)g(x),\qquad\qquad\qquad\qquad\qquad$ (0.1)
where $a$ is a positive and continuous function on $R=(-\infty, \infty)$, and $F$, $g$ and $h$ are continuous functions on $R$. Under the assumption that the origin is a unique equilibrium, we obtain necessary and sufficient conditions for the origin of system (0.1) to be globally asymptotically stable by using a nonlinear integral inequality. Our results substantially extend and improve several known results in the literature.
|
D-meson nuclear modification factor and elliptic flow measurements in Pb–Pb collisions at $\sqrt {s_{NN}}$ = 5.02TeV with ALICE at the LHC
(Elsevier, 2017-11)
ALICE measured the nuclear modification factor ($R_{AA}$) and elliptic flow ($v_{2}$) of D mesons ($D^{0}$, $D^{+}$, $D^{*+}$ and $D_{s}^{+}$) in semi-central Pb–Pb collisions at $\sqrt{s_{NN}} =5.02$ TeV. The increased ...
ALICE measurement of the $J/\psi$ nuclear modification factor at mid-rapidity in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV
(Elsevier, 2017-11)
ALICE at the LHC provides unique capabilities to study charmonium production at low transverse momenta ($p_{\rm T}$). At central rapidity ($|y|<0.8$), ALICE can reconstruct $J/\psi$ via their decay into two electrons down to zero ...
Net-baryon fluctuations measured with ALICE at the CERN LHC
(Elsevier, 2017-11)
First experimental results are presented on event-by-event net-proton fluctuation measurements in Pb–Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV, recorded by the ALICE detector at the CERN LHC. The ALICE detector is well ...
|
Cardinal numbers
Cardinality is a measure of the size of a set. Two sets have the same cardinality---they are said to be equinumerous---when there is a one-to-one correspondence between their elements. The cardinality assignment problem is the problem of assigning to each equinumerosity class a cardinal number to represent it. In ZFC, this problem can be solved via the well-ordering principle, which asserts that every set can be well-ordered and therefore admits a bijection with a unique smallest ordinal, an initial ordinal. By this means, in ZFC we are able to assign to every set $X$ a canonical representative of its equinumerosity class, the smallest ordinal bijective with $X$.
We therefore adopt the definition that $\kappa$ is a
cardinal if it is an initial ordinal, an ordinal that is not equinumerous with any smaller ordinal. Finite and infinite cardinals
The set $\omega$ of natural numbers is the smallest inductive set, that is, the smallest set for which $0\in\omega$ and whenever $n\in\omega$ then also $n+1\in\omega$, where $n+1=n\cup\{n\}$ is the successor ordinal of $n$. A set is
finite if it is equinumerous with a natural number, and otherwise it is infinite. In ZFC, the finite sets are the same as the Dedekind finite sets, but in ZF these concepts may differ. In ZFC, $\aleph$ is the unique order-isomorphism between the ordinals and the infinite cardinal numbers with respect to membership. Countable and uncountable cardinals
A set is
countable when it is equinumerous with a subset of $\omega$. This includes all finite sets, including the empty set, and the infinite countable sets are said to be countably infinite. An uncountable set is a set that is not countable. The existence of uncountable sets is a consequence of Cantor's observation that the set of reals is uncountable. Successor cardinals and limit cardinals
Hartogs established that for every set $X$, there is a smallest ordinal that does not inject into $X$, and this ordinal is now known as the
Hartogs number of $X$. When $\kappa$ is a cardinal, the successor cardinal of $\kappa$, denoted $\kappa^+$, is the Hartogs number of $\kappa$, the smallest ordinal of strictly larger cardinality than $\kappa$. The existence of successor cardinals can be proved in ZF without the axiom of choice. Iteratively taking the successor cardinal leads to the aleph hierarchy.
Although ZF proves the existence of a successor cardinal for every cardinal, ZF also proves that there exist cardinals which are not the successor of any cardinal. These cardinals are known as
limit cardinals. Cardinals which are not limit cardinals are known as successor cardinals. The limit cardinals are precisely those which are limit points in the order topology on the cardinals (hence the name). That is, $\kappa$ is a limit cardinal when for any cardinal $\lambda<\kappa$, there is some cardinal $\nu$ with $\lambda<\nu<\kappa$.
The limit cardinals are closely related to the singular cardinals: there does not exist a weakly inaccessible cardinal if and only if the singular cardinals are precisely the uncountable limit cardinals. Thus, if the existence of weakly inaccessible cardinals is inconsistent (which most set theorists consider unlikely, although possible), then ZFC actually proves that an uncountable cardinal is singular if and only if it is a limit cardinal.
Regular and singular cardinals
A cardinal $\kappa$ is
regular when $\kappa$ is not the union of fewer than $\kappa$ many sets, each of size less than $\kappa$. Otherwise, when $\kappa$ is the union of fewer than $\kappa$ many sets of size less than $\kappa$, then $\kappa$ is said to be singular.
The axiom of choice implies that every successor cardinal $\kappa^+$ is regular, but it is known to be consistent with ZF that successor cardinals may be singular.
The
cofinality of an infinite cardinal $\kappa$, denoted $\text{cof}(\kappa)$, is the smallest size of a family of sets, each smaller than $\kappa$, whose union is all of $\kappa$. Thus, $\kappa$ is regular if and only if $\text{cof}(\kappa)=\kappa$, and singular if and only if $\text{cof}(\kappa)\lt\kappa$. For example, $\aleph_\omega=\bigcup_{n<\omega}\aleph_n$ is the union of countably many smaller cardinals, so $\text{cof}(\aleph_\omega)=\aleph_0$ and $\aleph_\omega$ is singular. Cardinals in ZF See general cardinal for an account of the cardinality concept arising without the axiom of choice.
When the axiom of choice is not available, the concept of cardinality is somewhat more subtle, and there is in general no fully satisfactory solution of the cardinal assignment problem. Rather, in ZF one works directly with the equinumerosity relation.
In ZF, the axiom of choice is equivalent to the assertion that the cardinals are linearly ordered. This is because for every set $X$, there is a smallest ordinal $\alpha$ that does not inject into $X$, the Hartogs number of $X$; if the cardinalities of $X$ and $\alpha$ were comparable, then $X$ would have to inject into $\alpha$, and then $X$ would be well-orderable.
Dedekind finite sets
The
Dedekind finite sets are those not equinumerous with any proper subset. Although in ZFC this is an equivalent characterization of the finite sets, in ZF the two concepts of finite differ: every finite set is Dedekind finite, but it is consistent with ZF that there are infinite Dedekind finite sets. An amorphous set is an infinite set, all of whose subsets are either finite or co-finite.
|
Is there any way to show that the following inequality holds for the given function with constraints?
$\frac{(a x + y)^{y+1}}{a x (a x + y + 1)^y}\geq 1$ for $0.5 \leq a \leq 1$, $x >0$, $y \geq 0$. It can be easily checked that the inequality is saturated in the limit $x \rightarrow \infty$.
Numerically, I've checked this for different ranges of $x$, $y$ and $a$. It always holds. I'm not sure how I can prove this analytically!
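For what it's worth, here is the kind of quick numerical spot-check I mean (the grid values are arbitrary choices; this is evidence, not a proof):

```python
def lhs(a: float, x: float, y: float) -> float:
    # left-hand side of the conjectured inequality:
    # (a x + y)^(y+1) / (a x (a x + y + 1)^y)
    return (a*x + y)**(y + 1) / (a*x * (a*x + y + 1)**y)

# coarse grid over 0.5 <= a <= 1, x > 0, y >= 0 (an arbitrary choice)
vals = [lhs(a, x, y)
        for a in (0.5, 0.75, 1.0)
        for x in (0.01, 0.1, 1.0, 10.0, 1e3, 1e6)
        for y in (0.0, 0.5, 1.0, 5.0, 20.0)]
print(min(vals))  # stays >= 1 (up to float rounding) on this grid
```

Note that $y=0$ gives equality exactly, and large $x$ gives values approaching 1 from above, consistent with the saturation in the limit.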
Thanks!
|
As mentioned above, a pseudorandom distribution “looks uniform” to all polynomial-time computations. We already know of a distribution that “looks uniform” — namely, the uniform distribution itself! A more interesting case is when \(\lambda\) uniform bits are used to
deterministically (i.e., without further use of randomness) produce \(\lambda + \ell\) pseudorandom bits, for \(\ell > 0\). The process that “extends” \(\lambda\) bits into \(\lambda + \ell\) bits is called a pseudorandom generator. More formally:
Definition \(\PageIndex{1}\): PRG Security
Let \(G : \{ 0,1 \} ^ { \lambda } \rightarrow \{ 0,1 \} ^ { \lambda + \ell }\) be a deterministic function with \(\ell > 0\). We say that \(G\) is a pseudorandom generator (PRG) if \(\mathscr { L } _ { \text { prg-real } } ^ { G } ≋ \mathscr { L } _ { \text { prg-rand } } ^ { G }\), where:
\[\begin{array}{l|l}
\mathscr{L}^G_{\text{prg-real}} & \mathscr{L}^G_{\text{prg-rand}}\\
\hline
\underline{\text{QUERY():}} & \underline{\text{QUERY():}}\\
\quad s \leftarrow \{0,1\}^{\lambda} & \quad z \leftarrow \{0,1\}^{\lambda+\ell}\\
\quad \text{return } G(s) & \quad \text{return } z
\end{array}\nonumber\]
The value \(\ell\) is called the stretch of the PRG. The input to the PRG is typically called a seed. Discussion: Is 0010110110 a random string? Is it pseudorandom? What about 0000000001? Do these questions make any sense?
Randomness and pseudorandomness are not properties of individual strings; they are properties of the process used to generate the string. We will try to be precise about how we talk about these things (and you should too). When we have a value \(z = G(s)\) where \(G\) is a PRG and \(s\) is chosen uniformly, we can say that \(z\) was “chosen pseudorandomly,” but not that \(z\) “is pseudorandom.” On the other hand, a distribution can be described as “pseudorandom.” The same goes for describing a value as “chosen uniformly” and describing a distribution as “uniform.”
Pseudorandomness can happen only in the computational setting, where we restrict focus to polynomial-time adversaries. The exercises ask you to prove that for all functions \(G\) (with positive stretch), \(\mathscr{L}^G_{\text{prg-real}}\not\equiv \mathscr{L}^G_{\text{prg-rand}}\) (note the use of \(\equiv\) rather than \(≋\)). That is, the output distribution of the PRG can never actually be uniform in a mathematical sense. Because it has positive stretch, the best it can hope to be is pseudorandom. It’s sometimes convenient to think in terms of statistical tests. When given access to some data claiming to be uniformly generated, your first instinct is probably to perform a set of basic statistical tests: Are there roughly an equal number of 0s as 1s? Does the substring 01010 occur with roughly the frequency I would expect? If I interpret the string as a series of points in the unit square \([0,1)^2\), is it true that roughly \(\pi/4\) of them are within Euclidean distance 1 of the origin? 1
The definition of pseudorandomness is kind of a “master” definition that encompasses all of these statistical tests and more. After all, what is a statistical test, but a polynomial-time procedure that obtains samples from a distribution and outputs a yes-or-no decision? Pseudorandomness implies that
every statistical test will “accept” when given pseudorandomly generated inputs with essentially the same probability as when given uniformly sampled inputs. Consider the case of a length-doubling PRG (so \(\ell = \lambda\)); the PRG has input length \(\lambda\) and output length \(2\lambda\). The PRG has only \(2^{\lambda}\) possible inputs, and so there are at most \(2^{\lambda}\) possible outputs. Among all of \(\{0,1\}^{2\lambda}\), this is a minuscule fraction indeed. Almost all strings of length \(2\lambda\) are impossible outputs of \(G\). So how can outputs of \(G\) possibly “look uniform?”
The answer is that it’s not clear how to take advantage of this observation. While it is true that most strings (in a relative sense, as a fraction of \(2^{2\lambda}\)) are impossible outputs of \(G\), it is also true that \(2^{\lambda}\) of them are possible, which is certainly a lot from the perspective of a program that runs in polynomial time in \(\lambda\). Recall that the problem at hand is designing a distinguisher to behave as differently as possible in the presence of pseudorandom and uniform distributions. It is not enough to behave differently on just a few strings here and there — an individual string can contribute at most \(1/2^{\lambda}\) to the final distinguishing advantage. A successful distinguisher must be able to recognize a huge number of outputs of \(G\) so it can behave differently on them, but there are exponentially many \(G\)-outputs, and they may not follow any easy-to-describe pattern.
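As a toy sketch of the last statistical test mentioned above (the \(\pi/4\) test), one might encode consecutive 32-bit chunks of a bit string as points in \([0,1)^2\). The function name and the 16-bits-per-coordinate encoding here are illustrative choices, not from the text:

```python
import math
import random

def pi_quarter_fraction(bits: str) -> float:
    """Interpret the bit string as consecutive 32-bit chunks, each encoding a
    point (x, y) in [0,1)^2 via two 16-bit coordinates, and return the
    fraction of points with x^2 + y^2 < 1 (ideally close to pi/4)."""
    chunks = [bits[i:i + 32] for i in range(0, len(bits) - 31, 32)]
    inside = sum(1 for c in chunks
                 if (int(c[:16], 2) / 2**16) ** 2
                    + (int(c[16:], 2) / 2**16) ** 2 < 1)
    return inside / len(chunks)

random.seed(0)
uniform_bits = "".join(random.choice("01") for _ in range(32 * 4000))
print(abs(pi_quarter_fraction(uniform_bits) - math.pi / 4))  # small deviation
```

A pseudorandomly generated string should pass this test with essentially the same statistics as a uniformly sampled one.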
Below is an example that explores these ideas in more detail.
Example \(\PageIndex{1}\):
Let \(G\) be a length-doubling PRG as above, let \(t\) be an arbitrary string \(t\in \{0,1\}^{2\lambda}\), and consider the following distinguisher \(\mathscr{A}_t\) that has the value \(t\) hard-coded:
\[\underline{\mathscr{A}_t:}\\z\space\leftarrow\text{QUERY()}\\\text{return}\space z\stackrel{?}{=}t\nonumber\]
What is the distinguishing advantage of \(\mathscr{A}_t\)?
We can see easily what happens when \(\mathscr{A}_t\) is linked to \(\mathscr{L}^G_{\text{prg-rand}}\). We get:
\[\Pr[\mathscr{A}_t \diamond \mathscr{L}^G_{\text{prg-rand}}\Rightarrow 1]=1/2^{2\lambda}\nonumber\]
What happens when linked to \(\mathscr{L}^G_{\text{prg-real}}\) depends on whether \(t\) is a possible output of \(G\), which is a simple property of \(G\). We always allow distinguishers to depend arbitrarily on \(G\). In particular, it is fine for a distinguisher to “know” whether its hard-coded value t is a possible output of G. What the distinguisher “doesn’t know” is which library it is linked to, and the value of s that was chosen in the case that it is linked to \(\mathscr{L}^G_{\text{prg-real}}\).
Suppose for simplicity that \(G\) is injective (the exercises explore what happens when \(G\) is not). If t is a possible output of \(G\), then there is exactly one choice of \(s\) such that \(G(s) = t\), so we have:
\[\Pr[\mathscr{A}_t \diamond \mathscr{L}^G_{\text{prg-real}}\Rightarrow 1]=1/2^{\lambda}\nonumber\]
Hence, the distinguishing advantage of \(\mathscr{A}_t\) is \(|1/2^{2\lambda} − 1/2^{\lambda}| \le 1/2^{\lambda}\). If \(t\) is not a possible output of \(G\), then we have:
\[\Pr[\mathscr{A}_t \diamond \mathscr{L}^G_{\text{prg-real}}\Rightarrow 1]=0\nonumber\]
Hence, the distinguishing advantage of \(\mathscr{A}_t\) is \(|1/2^{2\lambda} − 0| = 1/2^{2\lambda}\).
In either case, \(\mathscr{A}_t\) has negligible distinguishing advantage. This merely shows that \(\mathscr{A}_t\) (for any hard-coded \(t\)) is not a particularly helpful distinguisher for any PRG. Of course, any candidate PRG might be insecure because of other distinguishers, but this example should serve to illustrate that PRG security is at least compatible with the fact that some strings are impossible outputs of the PRG.
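The advantage calculation in this example can also be simulated empirically. The sketch below uses a toy stand-in for \(G\) (the multiplier constant and parameter sizes are invented for illustration and provide nothing like real security):

```python
import random

LAMBDA = 8  # toy parameter; far too small for any real security

def G(s: int) -> int:
    # A toy length-doubling function standing in for a PRG.
    # (Illustrative only -- this is NOT a secure PRG.)
    return (s * 0x9E3779B1) % (1 << (2 * LAMBDA))

def prg_real() -> int:
    # L_prg-real: sample a seed s, return G(s)
    return G(random.randrange(1 << LAMBDA))

def prg_rand() -> int:
    # L_prg-rand: return a uniform (2*lambda)-bit value
    return random.randrange(1 << (2 * LAMBDA))

def attack_prob(t: int, library, trials: int = 200_000) -> float:
    # Empirical probability that A_t outputs 1, where A_t outputs 1
    # iff the queried sample equals the hard-coded value t.
    return sum(library() == t for _ in range(trials)) / trials

random.seed(0)
t = G(5)  # choose t to be a possible output of G
p_real = attack_prob(t, prg_real)
p_rand = attack_prob(t, prg_rand)
print(p_real, p_rand)  # near 1/2^lambda = 1/256 vs 1/2^(2 lambda) = 1/65536
```

The gap between the two estimates matches the \(1/2^{\lambda}\) bound derived above, and both are negligible as the parameter grows.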
Related Concept: Random Number Generation
The definition of a PRG includes a
uniformly sampled seed. In practice, this PRG seed has to come from somewhere. Generally a source of “randomness” is provided by the hardware or operating system, and the process that generates these random bits is (confusingly) called a random number generator (RNG).
In this course we won’t cover low-level random number generation, but merely point out what makes it different than the PRGs that we study:
The job of a PRG is to take a small amount of “ideal” (in other words, uniform) randomness and extend it. By contrast, an RNG usually takes many inputs over time and maintains an internal state. These inputs are often from physical/hardware sources. While these inputs are “noisy” in some sense, it is hard to imagine that they would be statistically uniform. So the job of the RNG is to “refine” (sometimes many) sources of noisy data into uniform outputs.
Perspective on this Chapter
PRGs are a fundamental cryptographic building block that can be used to construct more interesting things. But you are unlikely to ever find yourself designing your own PRG or building something from a PRG that couldn’t be built from some higher-level primitive instead. For that reason, we will not discuss specific PRGs or how they are designed in practice. Rather, the purpose of this chapter is to build your understanding of the concepts (like pseudorandomness, indistinguishability, proof techniques) that will be necessary in the rest of the class.
1 For a list of statistical tests of randomness that are actually used in practice, see http://csrc.nist.gov/publications/nistpubs/800-22-rev1a/SP800-22rev1a.pdf.
|
Most of the permutation and combination problems we have seen count choices made without repetition, as when we asked how many rolls of three dice are there in which each die has a different value. The exception was the simplest problem, asking for the total number of outcomes when two or three dice are rolled, a simple application of the multiplication principle. Typical permutation and combination problems can be interpreted in terms of drawing balls from a box, and implicitly or explicitly the rule is that a ball drawn from the box stays out of the box. If instead each ball is returned to the box after recording the draw, we get a problem essentially identical to the general dice problem. For example, if there are six balls, numbered 1–6, and we draw three balls with replacement, the number of possible outcomes is $6^3$. Another version of the problem does not replace the ball after each draw, but allows multiple "identical'' balls to be in the box. For example, if a box contains 18 balls numbered 1–6, three with each number, then the number of possible outcomes when three balls are drawn and not returned to the box is again $6^3$. If four balls are drawn, however, the problem becomes different.
Another, perhaps more mathematical, way to phrase such problems is to introduce the idea of a
multiset. A multiset is like a set, except that elements may appear more than once. If $\{a,b\}$ and $\{b,c\}$ are ordinary sets, we say that the union $\{a,b\}\cup\{b,c\}$ is $\{a,b,c\}$, not $\{a,b,b,c\}$. If we interpret these as multisets, however, we do write $\{a,b,b,c\}$ and consider this to be different than $\{a,b,c\}$. To distinguish multisets from sets, and to shorten the expression in most cases, we use a repetition number with each element. For example, we will write $\{a,b,b,c\}$ as $\{1\cdot a,2\cdot b,1\cdot c\}$. By writing $\{1\cdot a,1\cdot b,1\cdot c\}$ we emphasize that this is a multiset, even though no element appears more than once. We also allow elements to be included an infinite number of times, indicated with $\infty$ for the repetition number, like $\{\infty\cdot a, 5\cdot b, 3\cdot c\}$.
Generally speaking, problems in which repetition numbers are infinite are easier than those involving finite repetition numbers. Given a multiset $A=\{\infty\cdot a_1,\infty\cdot a_2,\ldots,\infty\cdot a_n\}$, how many permutations of the elements of length $k$ are there? That is, how many sequences $x_1,x_2,\ldots,x_k$ can be formed? This is easy: the answer is $n^k$.
Now consider combinations of a multiset, that is, submultisets: Given a multiset, how many submultisets of a given size does it have? We say that a multiset $A$ is a submultiset of $B$ if the repetition number of every element of $A$ is less than or equal to its repetition number in $B$. For example, $\{20\cdot a, 5\cdot b, 1\cdot c\}$ is a submultiset of $\{\infty\cdot a, 5\cdot b, 3\cdot c\}$. A multiset is finite if it contains only a finite number of distinct elements, and the repetition numbers are all finite. Suppose again that $A=\{\infty\cdot a_1,\infty\cdot a_2,\ldots,\infty\cdot a_n\}$; how many finite submultisets does it have of size $k$? This at first seems quite difficult, but put in the proper form it turns out to be a familiar problem. Imagine that we have $k+n-1$ "blank spaces'', like this:
Now we place $n-1$ markers in some of these spots:
This uniquely identifies a submultiset: fill all blanks up to the first $\land$ with $a_1$, up to the second with $a_2$, and so on:
So this pattern corresponds to the multiset $\{1\cdot a_2,3\cdot a_3,\ldots, 1\cdot a_n\}$. Filling in the markers $\land$ in all possible ways produces all possible submultisets of size $k$, so there are $k+n-1\choose n-1$ such submultisets. Note that this is the same as $k+n-1\choose k$; the hard part in practice is remembering that the $-1$ goes with the $n$, not the $k$.
$$\bullet\quad\bullet\quad\bullet$$
Summarizing the high points so far: The number of permutations of $n$ things taken $k$ at a time without replacement is $\ds P(n,k)=n!/(n-k)!$; the number of permutations of $n$ things taken $k$ at a time with replacement is $\ds n^k$. The number of combinations of $n$ things taken $k$ at a time without replacement is ${n\choose k}$; the number of combinations of $n$ things taken $k$ at a time with replacement is ${k+n-1 \choose k}$.
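These four counting formulas can be cross-checked against brute-force enumeration using Python's standard library (a sanity-check sketch, not part of the text):

```python
from itertools import combinations, combinations_with_replacement, permutations, product
from math import comb, perm

n, k = 6, 3  # e.g. six ball labels, three draws
# permutations without replacement: P(n,k) = n!/(n-k)!
assert perm(n, k) == len(list(permutations(range(n), k)))
# permutations with replacement: n^k
assert n ** k == len(list(product(range(n), repeat=k)))
# combinations without replacement: C(n,k)
assert comb(n, k) == len(list(combinations(range(n), k)))
# combinations with replacement (submultisets of size k): C(k+n-1, k)
assert comb(k + n - 1, k) == len(list(combinations_with_replacement(range(n), k)))
print("all four formulas agree with brute-force enumeration")
```

Here `combinations_with_replacement` enumerates exactly the size-$k$ submultisets of an $n$-element set, matching the marker argument above.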
$$\bullet\quad\bullet\quad\bullet$$
If $A=\{m_1\cdot a_1, m_2\cdot a_2,\ldots,m_n\cdot a_n\}$, similar questions can be quite hard. Here is an easier special case: How many permutations of the multiset $A$ are there? That is, how many sequences consist of $m_1$ copies of $a_1$, $m_2$ copies of $a_2$, and so on? This problem succumbs to overcounting: suppose to begin with that we can distinguish among the different copies of each $a_i$; they might be colored differently, for example: a red $a_1$, a blue $a_1$, and so on. Then we have an ordinary set with $M=\sum_{i=1}^n m_i$ elements and $M!$ permutations. Now if we ignore the colors, so that all copies of $a_i$ look the same, we find that we have overcounted the desired permutations. Permutations with, say, the $a_1$ items in the same positions all look the same once we ignore the colors of the $a_1$s. How many of the original permutations have this property? $m_1!$ permutations will appear identical once we ignore the colors of the $a_1$ items, since there are $m_1!$ permutations of the colored $a_1$s in a given $m_1$ positions. So after throwing out duplicates, the number of remaining permutations is $M!/m_1!$ (assuming the other $a_i$ are still distinguishable). Then the same argument applies to the $a_2$s: there are $m_2!$ copies of each permutation once we ignore the colors of the $a_2$s, so there are $\ds {M!\over m_1!\,m_2!}$ distinct permutations. Continuing in this way, we see that the number of distinct permutations once all colors are ignored is $${M!\over m_1!\,m_2!\cdots m_n!}.$$ This is frequently written $${M\choose m_1\;\;m_2\;\ldots\; m_n},$$ called a
multinomial coefficient. Here the second row has $n$ separate entries, not a single product entry. Note that if $n=2$ this is $$\eqalignno{{M\choose m_1\;\;m_2}&={M!\over m_1!\,m_2!}={M!\over m_1!\,(M-m_1)!}={M\choose m_1}.&(1.5.1)}$$ This is easy to see combinatorially: given $\{m_1\cdot a_1, m_2\cdot a_2\}$ we can form a permutation by choosing the $m_1$ places that will be occupied by $a_1$, filling in the remaining $m_2$ places with $a_2$. The number of permutations is the number of ways to choose the $m_1$ locations, which is $M\choose m_1$.
Example 1.5.1 How many solutions does $\ds x_1+x_2+x_3+x_4=20$ have in non-negative integers? That is, how many 4-tuples $(m_1,m_2,m_3,m_4)$ of non-negative integers are solutions to the equation? We have actually solved this problem: How many submultisets of size 20 are there of the multiset $\{\infty\cdot a_1,\infty\cdot a_2,\infty\cdot a_3,\infty\cdot a_4\}$? A submultiset of size 20 is of the form $\{m_1\cdot a_1,m_2\cdot a_2,m_3\cdot a_3,m_4\cdot a_4\}$ where $\sum m_i=20$, and these are in 1–1 correspondence with the set of 4-tuples $(m_1,m_2,m_3,m_4)$ of non-negative integers such that $\sum m_i=20$. Thus, the number of solutions is $20+4-1\choose 20$. This reasoning applies in general: the number of solutions to $$\sum_{i=1}^n x_i = k$$ is $${k+n-1\choose k}.$$
This immediately suggests some generalizations: instead of the total number of solutions, we might want the number of solutions with the variables $x_i$ in certain ranges, that is, we might require that $m_i\le x_i\le M_i$ for some lower and upper bounds $m_i$ and $M_i$.
Finite upper bounds can be difficult to deal with; if we require that $0\le x_i\le M_i$, this is the same as counting the submultisets of $\{M_1\cdot a_1,M_2\cdot a_2,\ldots,M_n\cdot a_n\}$. Lower bounds are easier to deal with.
Example 1.5.2 Find the number of solutions to $\ds x_1+x_2+x_3+x_4=20$ with $x_1\ge 0$, $x_2\ge 1$, $x_3\ge 2$, $x_4\ge -1$.
We can transform this to the initial problem in which all lower bounds are 0. The solutions we seek to count are the solutions of this altered equation: $$ x_1+(x_2-1)+(x_3-2)+(x_4+1)=18.$$ If we set $y_1=x_1$, $y_2=x_2-1$, $y_3=x_3-2$, and $y_4=x_4+1$, then $(x_1,x_2,x_3,x_4)$ is a solution to this equation if and only if $(y_1,y_2,y_3,y_4)$ is a solution to $$ y_1+y_2+y_3+y_4=18,$$ and moreover the bounds on the $x_i$ are satisfied if and only if $y_i\ge 0$. Since the number of solutions to the last equation is $18+4-1\choose 18$, this is also the number of solutions to the original equation.
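This count can also be confirmed by direct enumeration (an illustrative check, not part of the text):

```python
from math import comb

# count solutions of x1 + x2 + x3 + x4 = 20 with
# x1 >= 0, x2 >= 1, x3 >= 2, x4 >= -1
count = sum(1
            for x1 in range(0, 21)
            for x2 in range(1, 22)
            for x3 in range(2, 23)
            if 20 - x1 - x2 - x3 >= -1)  # x4 is then determined
print(count == comb(18 + 4 - 1, 18))  # True: both equal 1330
```

The loop bounds are safe overestimates; any triple with too large a sum simply fails the `x4 >= -1` test.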
Exercises 1.5
Ex 1.5.1 Suppose a box contains 18 balls numbered 1–6, three balls with each number. When 4 balls are drawn without replacement, how many outcomes are possible? Do this in two ways: assuming that the order in which the balls are drawn matters, and then assuming that order does not matter.
Ex 1.5.2 How many permutations are there of the letters in Mississippi?
Ex 1.5.3 How many permutations are there of the multiset $\ds\{1\cdot a_1,1\cdot a_2,\ldots,1\cdot a_n\}$?
Ex 1.5.4 Let $M=\sum_{i=1}^n m_i$. If $k_i< 0$ for some $i$, let's say $${M\choose k_1\;k_2\;\ldots\; k_n}=0.$$ Prove that $${M\choose m_1\;m_2\;\ldots\; m_n}=\sum_{i=1}^n {M-1\choose m_1\;m_2\;\ldots\;(m_i-1)\;\ldots\; m_n}.$$ Note that when $n=2$ this becomes $${M\choose m_1\;m_2}={M-1\choose (m_1-1)\;m_2}+{M-1\choose m_1\;(m_2-1)}.$$ As noted above in equation 1.5.1, when $n=2$ we are really seeing ordinary binomial coefficients, and this can be rewritten as $${M\choose m_1}={M-1\choose m_1-1}+{M-1\choose m_1},$$ which of course we already know.
Ex 1.5.5 The Binomial Theorem (1.3.1) can be written $$(x+y)^n=\sum_{i+j=n} {n\choose i\;j}x^i\,y^j,$$ where the sum is over all non-negative integers $i$ and $j$ that sum to $n$. Prove that for $m\ge 2$, $$(x_1+x_2+\cdots+x_m)^n=\sum {n\choose i_1\;i_2\;\ldots\;i_m}x_1^{i_1}\,x_2^{i_2}\ldots x_m^{i_m},$$ where the sum is over all $i_1,\ldots,i_m$ such that $i_1+\cdots+i_m=n$.
Ex 1.5.6 Find the number of integer solutions to $$x_1+x_2+x_3+x_4+x_5=50,\quad x_1\ge -3,\ x_2\ge 0,\ x_3\ge 4,\ x_4\ge 2,\ x_5\ge 12.$$
Ex 1.5.7 You and your spouse each take two gummy vitamins every day. You share a single bottle of 60 vitamins, 30 of one flavor and 30 of another. You each prefer a different flavor, but it seems childish to fish out two of each type (but not to take gummy vitamins). So you just take the first four that fall out and then divide them up according to your preferences. For example, if there are two of each flavor, you and your spouse get the vitamins you prefer, but if three of your preferred flavor come out, you get two of the ones you like and your spouse gets one of each. Of course, you start a new bottle every 15 days. On average, over a 15 day period, how many of the vitamins you take are the flavor you prefer? (From fivethirtyeight.com.)
|
, and to attain this field in specific regions of the brain, the electric current should pass through different head layers: skin, fat, skull, meninges, and cortex (part of the brain). In order to model the brain, different layers should be considered, including gray and white matter. The meninges, three layers of protective connective tissue, cover the outer surface of the central nervous system (brain and spinal cord) and comprise (from the innermost to the outermost layer) the pia mater, the arachnoid, and the dura mater. The meninges also
Kathrin Badstübner, Marco Stubbe, Thomas Kröger, Eilhard Mix and Jan Gimsa
Education and Research (BMBF, FKZ 01EZ0911). The custom-made stimulator system was developed in cooperation with the Steinbeis company (STZ1050, Rostock, Germany) and Dr. R. Arndt (Rückmann & Arndt, Berlin, Germany). References: 1. Krack P, Hariz MI, Baunez C, Guridi J, Obeso JA. Deep brain stimulation: from neurology to psychiatry? Trends Neurosci. 2010;33:474-84. https://doi.org/10.1016/j.tins.2010.07.002
Lisa Röthlingshöfer, Mark Ulbrich, Sebastian Hahne and Steffen Leonhardt
magnetic field strength (h) are assigned to the edges. Hence, a system of equations, called the Maxwell Grid Equations, has to be solved for the whole calculation domain, where each cell is described by: $$C\overrightarrow{e}=-\frac{\partial \overrightarrow{b}}{\partial t}\qquad \tilde{C}\overrightarrow{h}=\frac{\partial \overrightarrow{d}}{\partial t}+\overrightarrow{j}\tag{1}$$ $$\tilde{S}\overrightarrow{d}=q\qquad S\overrightarrow{b}=0\tag{2}$$
electrodes are near-to constant because of the high resistance to current of the stratum corneum in the considered frequency range [3]. This allows us to rewrite the boundary conditions, Eqs. 5-7, between the probe and the uppermost skin layer $n$, the stratum corneum, as (we drop the subindex 'eff' for notational convenience in the analysis) $$-\sigma_{n}\frac{\partial\Phi(r,\mathcal{H}_{n})}{\partial z}=\sum_{j=1}^{m}\frac{I_{j}}{A_{j}}\left[U(R_{2j-1}-r)-U(R_{2j-2}-r)\right],$$
|
So not quite sure what you're missing, but here's how you go about doing this sort of thing.
So first, I am assuming this is 1D due to your description. Second, I'm assuming you know the relationship between points in the physical domain, $x$, and the computational domain, $\xi$, something along the lines of $x=x(\xi)$. Given you have this relationship, you can do the following for some function $\phi(\cdot)$:
$$\begin{align}\frac{\partial \phi}{\partial x} &= \frac{\partial \phi}{\partial \xi} \frac{\partial \xi}{\partial x}\\\frac{\partial \phi}{\partial x} &= \frac{\partial \phi}{\partial \xi} \left(\frac{\partial x}{\partial \xi}\right)^{-1}\\\end{align}$$
With this representation, since you know $x(\xi)$, you can produce the second term exactly in the multiplication. So now you just need to approximate the first term, $\frac{\partial \phi}{\partial \xi}$. This term can be approximated using normal finite difference schemes. Using a central difference, for example, you could get the following:
$$\frac{\partial \phi}{\partial x} = \left(\frac{\phi_{i+1}-\phi_{i-1}}{2\Delta \xi}\right) \left(\frac{\partial x}{\partial \xi}\right)^{-1}$$
In this case, $\phi_{i}$ is associated with both $x_i$ and $\xi_i$, where $\phi_i = \phi(x_i)$, and where, in addition, $\phi_i = \phi(x(\xi_i))$. This should help you understand how to compute the quantity you're after.
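To make the chain-rule recipe concrete, here is a small Python sketch; the mapping $x=\xi^2$ and the function $\phi=\sin x$ are made-up choices for illustration:

```python
import numpy as np

# Stretched 1D grid: physical coordinate x = xi**2, so dx/dxi = 2*xi.
xi = np.linspace(1.0, 2.0, 101)   # uniform computational coordinate
dxi = xi[1] - xi[0]
x = xi**2
phi = np.sin(x)

# Central difference in the uniform coordinate xi, then multiply by
# (dx/dxi)**(-1), which is known analytically here.
dphi_dxi = (phi[2:] - phi[:-2]) / (2 * dxi)
dphi_dx = dphi_dxi / (2 * xi[1:-1])

exact = np.cos(x[1:-1])
print(np.max(np.abs(dphi_dx - exact)))  # small (second order in dxi)
```

Refining the grid should shrink the error by roughly a factor of four per halving of $\Delta\xi$, confirming the second-order accuracy of the central difference.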
|
Note that, by definition, the projections $P_n$ converge strongly to $I$.
Let $r\in\mathbb N$ (to be determined later), and define$$Q_n=P_{n+r}-P_n. $$The projections $Q_n$ are finite-rank, and pairwise orthogonal. Let $$S=\sum_n Q_nTQ_n,\ \ \ \ K=T-S.$$Let us check first that $SP_n=P_nS$ for all $n$. It is obvious that $SQ_n=Q_nS$ for all $n$. We have $$SP_n-P_nS=-(SQ_n-Q_nS)+SP_{n+r}-P_{n+r}S=SP_{n+r}-P_{n+r}S.$$Repeat the argument, to get $$SP_n-P_nS=SP_{n+kr}-P_{n+kr}S,\ \ \ k\in\mathbb N. $$As $P_n\nearrow I$, we get $SP_n-P_nS=0$.
As for $K$, we have $$K=T-\sum_n Q_nTQ_n=\sum_n TQ_n-Q_nTQ_n=\sum_n (TQ_n-Q_nT)Q_n.$$Also,$$\|TQ_n-Q_nT\|\leq\|TP_n-P_nT\|+\|TP_{n+r}-P_{n+r}T\|\leq\frac1{2^{n+1}}+\frac1{2^{n+r+1}},$$so$$\tag1\left\|\sum_{n=m+1}^\infty (TQ_n-Q_nT)Q_n\right\|\leq\sum_{n=m+1}^\infty \|TQ_n-Q_nT\|\leq\sum_{n=m+1}^\infty \frac1{2^{n+1}}+\frac1{2^{n+r+1}}\xrightarrow[m\to\infty]{}0.$$The estimate $(1)$ shows that $K$ is of the form $\sum_{n=1}^m(TQ_n-Q_nT)Q_n$, which is finite-rank, plus an arbitrarily small operator; that is, $K$ is a limit of finite-rank operators, and thus compact.
Finally, using $(1)$ with $m=0$, we get $$\|K\|\leq\sum_{n=1}^\infty \frac1{2^{n+1}}+\frac1{2^{n+r+1}}=\frac12+\frac1{2^{r+1}}.$$So any $r\geq3$ will give us $\|K\|<1$.
As a final note, a very small tweak of the argument allows one to get $\|K\|<\varepsilon$ for any fixed $\varepsilon>0$.
|
I agree with everything that's been said about Euler's formula not being a practical way of testing for primality, but it occurs to me there might be special numbers for which it could be useful. Indulge me in a laborious "proof" that $n=82$ is not a prime.
Euler's formula in this case says
$$\begin{align}\sigma(82) = &\sigma(81) + \sigma(80) - \sigma(77) - \sigma(75) + \sigma(70) + \sigma(67) - \sigma(60) - \sigma(56)+\cr&\sigma(47) + \sigma(42) - \sigma(31) -\sigma(25)+\sigma(12)+\sigma(5)\cr\end{align}$$
As it happens, "most" of the numbers inside the $\sigma$s on the right hand side factor into "small" primes, where by "small" I mean up to $11$. In particular, we have$$\begin{align}\sigma(81)&=\sigma(3^4)=121\cr\sigma(80)&=\sigma(2^4)\sigma(5)=186\cr\sigma(77)&=\sigma(7)\sigma(11)=96\cr\sigma(75)&=\sigma(3)\sigma(5^2)=124\cr\sigma(70)&=\sigma(2)\sigma(5)\sigma(7)=144\cr\sigma(60)&=\sigma(2^2)\sigma(3)\sigma(5)=168\cr\sigma(56)&=\sigma(2^3)\sigma(7)=120\cr\sigma(42)&=\sigma(2)\sigma(3)\sigma(7)=96\cr\sigma(25)&=\sigma(5^2)=31\cr\sigma(12)&=\sigma(2^2)\sigma(3)=28\cr\sigma(5)&=6\cr\end{align}$$
When you add and subtract all this stuff up, you have
$$\sigma(82)=42+\sigma(67)+\sigma(47)-\sigma(31)$$
Now we're not allowing ourselves to know that $67$, $47$, and $31$ are primes, but we do know that $\sigma(n)\ge n+1$ for all $n$. Therefore we have
$$\sigma(82)\ge 42+ 68 + 48 - 32 -\text{stuff} = 126-\text{stuff},$$
where "$\text{stuff}$" is the sum of the divisors (if any) of $31$ other than $1$ and $31$. These divisors must come in pairs $d,31/d$, with $d$ odd and less than $\sqrt{31}$. Thus
$$\text{stuff} \le 3+{31\over3}+5+{31\over5} = 24.5333\ldots,$$
and hence
$$\sigma(82) \gt 126-25=101 \gt 83,$$
from which we can conclude that $82$ is not prime, since a prime $p$ has $\sigma(p)=p+1$, which would force $\sigma(82)=83$.
The obvious drawback to such a "proof" is that it requires an awful lot of computation: There are always $O(\sqrt n)$ terms to deal with, so its only advantage over trial-and-error division is that you can hope to avoid doing trial divisions by "large" primes. It also requires a considerable amount of luck -- in this case we were left subtracting the $\sigma$ of only one number that was "too big" to factor, and even it was fairly small. But still, there might be some rare values of $n$ for which one can deduce something nontrivial from Euler's formula without ever dividing by anything other than "small" primes.
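For what it's worth, the bookkeeping above is easy to check numerically; this Python snippet (illustrative only) verifies the reduced identity and the conclusion:

```python
def sigma(n):
    # Sum of divisors by naive trial division (fine for small n).
    return sum(d for d in range(1, n + 1) if n % d == 0)

# The identity obtained after substituting the "small-prime" sigma values:
lhs = sigma(82)
rhs = 42 + sigma(67) + sigma(47) - sigma(31)
assert lhs == rhs

# sigma(82) exceeds 82 + 1 = 83, so 82 cannot be prime.
assert lhs > 82 + 1
print(lhs)  # 126
```

Indeed $\sigma(82)=1+2+41+82=126$, matching the lower bound derived without factoring $82$.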
|
In section 4.5.1 of Nielsen and Chuang, two-level unitary matrices are defined as unitary matrices which act non-trivially only on two or fewer vector components. I'm not sure that I understand this definition. For instance, is $$\begin{pmatrix} \frac{1}{\sqrt{2}} & 0 & \frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}} & 0 & \frac{1}{\sqrt{2}} \\ 0 & 1 & 0\end{pmatrix}\quad$$ a two level unitary matrix? It only acts non-trivially on the first and third vector components, but it doesn't leave this linear space invariant, unlike the examples given in Nielsen and Chuang.
In addition to the error pointed out by DanielSank in his comment, note that your matrix is not even unitary.
Indeed, in particular for a unitary matrix any two distinct columns (resp. rows) form orthogonal vectors, and you can quickly check that it is not the case with the first and third columns (resp. first and second rows) of your matrix.
But, the following matrix works, i.e. is a two-level unitary matrix: $$\begin{pmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 0\\ -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 0 \\ 0 & 0 & 1\end{pmatrix}\quad$$
Unitary matrices must satisfy $U^\dagger = U^{-1}$. Your example is not unitary because it isn't invertible (its first two rows are identical), so $U^{-1}$ doesn't exist. In that entire section, he is only talking about unitary (and therefore invertible) matrices.
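A quick numerical check of both claims, using NumPy (illustrative):

```python
import numpy as np

r = 1 / np.sqrt(2)
# The matrix from the question (first and third columns are not orthogonal).
U_bad = np.array([[r, 0, r], [r, 0, r], [0, 1, 0]])
# The proposed two-level unitary acting on the first two components.
U_good = np.array([[r, r, 0], [-r, r, 0], [0, 0, 1]])

# A matrix U is unitary iff U^dagger U = I.
assert not np.allclose(U_bad.conj().T @ U_bad, np.eye(3))
assert np.allclose(U_good.conj().T @ U_good, np.eye(3))
```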
|
Reinhardt
The existence of Reinhardt cardinals has been refuted in $\text{ZFC}_2$ and $\text{GBC}$ by Kunen (the Kunen inconsistency); the term is used in the $\text{ZF}_2$ context, although some mathematicians suspect that they are inconsistent even there.
Definitions
A weakly Reinhardt cardinal (1) is the critical point $\kappa$ of a nontrivial elementary embedding $j:V_{\lambda+1}\to V_{\lambda+1}$ such that $V_\kappa\prec V$ (written $\mathrm{WR}(\kappa)$; the existence of such a $\kappa$ is Woodin's Weak Reinhardt Axiom, $\mathrm{WRA}$).[1]:p.58
A weakly Reinhardt cardinal (2) is the critical point $\kappa$ of a nontrivial elementary embedding $j:V_{\lambda+2}\to V_{\lambda+2}$ such that $V_\kappa\prec V_\lambda\prec V_\gamma$ (for some $\gamma > \lambda > \kappa$).[2]:(definition 20.6, p. 455)
A Reinhardt cardinal is the critical point of a nontrivial elementary embedding $j:V\to V$ of the set-theoretic universe to itself.[3]
A super Reinhardt cardinal $\kappa$ is a cardinal which is the critical point of elementary embeddings $j:V\to V$ with $j(\kappa)$ as large as desired.[3]
For a proper class $A$, a cardinal $\kappa$ is called $A$-super Reinhardt if for all ordinals $\lambda$ there is a non-trivial elementary embedding $j : V \rightarrow V$ such that $\mathrm{crit}(j) = \kappa$, $j(\kappa)\gt\lambda$ and $j^+(A)=A$ (where $j^+(A) := \cup_{\alpha\in\mathrm{Ord}} j(A \cap V_\alpha)$).[3]
A totally Reinhardt cardinal is a cardinal $\kappa$ such that for each $A \in V_{\kappa+1}$, $(V_\kappa, V_{\kappa+1})\vDash \mathrm{ZF}_2 + \text{“There is an $A$-super Reinhardt cardinal”}$.[3]
Totally Reinhardt cardinals are the ultimate conclusion of the Vopěnka hierarchy. A cardinal $\kappa$ is Vopěnka if and only if, for every $A\subseteq V_\kappa$, there is some $\alpha\lt\kappa$ which is $\eta$-extendible for $A$ for every $\eta\lt\kappa$, in the sense that the witnessing embeddings fix $A\cap V_\zeta$. In its original conception, Reinhardt cardinals were thought of as ultimate extendible cardinals, because if $j: V\rightarrow V$ is elementary, then so is $j\restriction V_{\kappa+\eta}: V_{\kappa+\eta}\rightarrow V_{j(\kappa+\eta)}$. It is as if one embedding works for all $\eta$.
Relations
$\mathrm{WRA}$ (1) implies that there are arbitrarily large $I1$ and super $n$-huge cardinals. The Kunen inconsistency does not apply to it. It is not known to imply $I0$.[1]
$\mathrm{WRA}$ (1) does not need $j$ in the language. It does, however, require another extension of the language of $\mathrm{ZFC}$: otherwise there would be no weakly Reinhardt cardinals in $V$, because there are no weakly Reinhardt cardinals in $V_\kappa$ (if $\kappa$ is the least weakly Reinhardt cardinal) — an obvious contradiction.[1]
$\mathrm{WR}(\kappa)$ (1) implies that $\kappa$ is a measurable limit of supercompact cardinals and therefore is strongly compact. It is not known whether $\kappa$ must be supercompact itself. Requiring it to be extendible makes the theory stronger.[1]
Weakly Reinhardt cardinal(2) is inconsistent with $\mathrm{ZFC}$. $\mathrm{ZF} + \text{“There is a weakly Reinhardt cardinal(2)”}\rightarrow\mathrm{Con}(\mathrm{ZFC} + \text{“There is a proper class of $\omega$-huge cardinals”})$ (At least here $\omega$-huge=$I1$) (Woodin, 2009). You can get this by seeing that $V_\gamma\vDash\forall\alpha\lt\lambda(\exists\kappa'\gt\alpha(I1(\kappa')\land\kappa'\lt\lambda))$.
If $\kappa$ is super Reinhardt, then there exists $\gamma\lt\kappa$ such that $(V_\gamma , V_{\gamma+1})\vDash \mathrm{ZF}_2 + \text{“There is a Reinhardt cardinal”}$.[3]
If $\delta_0$ is the least Berkeley cardinal, then there is $\gamma\lt\delta_0$ such that $(V_\gamma , V_{\gamma+1})\vDash\mathrm{ZF}_2+\text{“There is a Reinhardt cardinal witnessed by $j$ and an $\omega$-huge above $\kappa_\omega(j)$”}$. (Here $\omega$-huge means $I3$.)[3] Each club Berkeley cardinal is totally Reinhardt.[3]
References
1. Corazza, Paul. The Axiom of Infinity and transformations $j: V \to V$. Bulletin of Symbolic Logic 16(1):37-84, 2010.
2. Baaz, M., Papadimitriou, C. H., Putnam, H. W., Scott, D. S., and Harper, C. L. (eds.). Kurt Gödel and the Foundations of Mathematics: Horizons of Truth. Cambridge University Press, 2011.
3. Bagaria, Joan. Large Cardinals beyond Choice. 2017.
|
NOTE: This article does not represent the whole truth. Actually OpenMP is better, not all relevant things have been exploited. There will be an updated article as soon as possible!
Many problems can be reduced, or reformulated as such that the solution is equivalent to solving a linear system of equations (LSE) in the form of:
$Ax = b$ where $A \in \mathbb{R}^{N\times N}$ and $x, b \in \mathbb{R}^{N}$.
There exists a wide variety of algorithms to solve such systems. This post lays out a possible parallel implementation of the Gauss-Seidel method using HPX and OpenMP.
As an example, we use this method to solve a stationary, second-order partial differential equation (PDE):
$-\triangle u + k^2 u = f$
with Dirichlet boundary conditions on a uniform two-dimensional grid. After discretizing this PDE using the central difference method, we obtain the following matrix for our solver:
$A = \begin{bmatrix} a & b & \cdots & c & & \\ b & a & b & \cdots & c \\ & \ddots & \ddots & \ddots & & \ddots\\ c & \cdots & b & a & b & \cdots & c \\ & \ddots & & \ddots & \ddots & \ddots & & \ddots & \\ & & c & \cdots & b & a & b& \cdots & c \\ & & & \ddots & & \ddots & \ddots & \ddots & \\ & & & & c & \cdots & b & a & b \\ & & & & & c & \cdots & b & a \end{bmatrix}$ and $x=\begin{bmatrix} u(0,0) \\ u(0,1) \\ \vdots \\ u(0,N_y) \\ \vdots \\ u(x,y) \\ \vdots \\ u(N_x, N_y) \end{bmatrix} $
where $a = \frac{2}{h_x^2} + \frac{2}{h_y^2} + k^2, b = \frac{1}{h_x^2}$ and $c = \frac{1}{h_y^2}$, $h_x$ and $h_y$ are the stepsizes for the space discretization.
Now, our algorithm can be formulated as follows:
for(unsigned iter = 0; iter < max_iterations; ++iter) {
    for(unsigned x = 1; x < N_x-1; ++x) {
        for(unsigned y = 1; y < N_y-1; ++y) {
            u(x, y) = ((u(x-1, y) + u(x+1, y)) * b
                       + (u(x, y-1) + u(x, y+1)) * c
                       + f(x, y)) / a;
        }
    }
}
In order to parallelize this algorithm, we first need to look at the dependencies between the different iterations:
To update a grid point: the update from the previous iteration has to be finished first; the update of the top and left grid points in the current iteration needs to be finished; and the update of the bottom and right grid points in the previous iteration needs to be finished.
An illustration of these dependencies can be seen here:
Parallelization with OpenMP
In order to parallelize this algorithm with OpenMP, the red-black scheme of updating the grid points has been developed:
That way, it is easy to rewrite the algorithm introduced above to be used with OpenMP:
for(unsigned iter = 0; iter < max_iterations; ++iter) {
    #pragma omp parallel for
    for(unsigned x = 1; x < N_x-1; ++x) {
        for(unsigned y = (x % 2) + 1; y < N_y-1; y += 2) {
            u(x, y) = ((u(x-1, y) + u(x+1, y)) * b
                       + (u(x, y-1) + u(x, y+1)) * c
                       + f(x, y)) / a;
        }
    }
    #pragma omp parallel for
    for(unsigned x = 1; x < N_x-1; ++x) {
        for(unsigned y = ((x+1) % 2) + 1; y < N_y-1; y += 2) {
            u(x, y) = ((u(x-1, y) + u(x+1, y)) * b
                       + (u(x, y-1) + u(x, y+1)) * c
                       + f(x, y)) / a;
        }
    }
}
Due to the implicit global barriers between the loops, all dependencies are met in every iteration, and the algorithm can be run in parallel.
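For illustration only, here is a serial Python transcription of the same red-black sweep (the grid size, $k$, and right-hand side are made-up values, and this is of course not the parallel C++ code):

```python
import numpy as np

# Illustrative serial red-black Gauss-Seidel sweep; grid size, k, and f
# are arbitrary toy values.
Nx = Ny = 9
hx = hy = 1.0 / (Nx - 1)
k = 1.0
a = 2 / hx**2 + 2 / hy**2 + k**2
b = 1 / hx**2
c = 1 / hy**2

u = np.zeros((Nx, Ny))   # zero Dirichlet boundary values
f = np.ones((Nx, Ny))

for it in range(200):
    for color in (0, 1):             # red points first, then black points
        for x in range(1, Nx - 1):
            for y in range((x + color) % 2 + 1, Ny - 1, 2):
                u[x, y] = ((u[x - 1, y] + u[x + 1, y]) * b
                           + (u[x, y - 1] + u[x, y + 1]) * c
                           + f[x, y]) / a

# At convergence, u satisfies a*u = b*(x-neighbors) + c*(y-neighbors) + f.
res = (a * u[1:-1, 1:-1]
       - b * (u[:-2, 1:-1] + u[2:, 1:-1])
       - c * (u[1:-1, :-2] + u[1:-1, 2:])
       - f[1:-1, 1:-1])
print(np.max(np.abs(res)))
```

On this small grid the residual drops to essentially machine precision after a couple of hundred sweeps, which is a handy way to sanity-check the update formula before parallelizing it.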
Parallelization with HPX
Due to the HPX programming model, handling the dependencies is easy! In contrast to OpenMP, we don't need to develop a specific scheme for traversing the grid. HPX provides us with futures and promises, which can be used to track dependencies. In other words, while traversing the grid we spawn actions to update a certain element, each of which explicitly waits until all of its dependencies are met. This way, we are able to remove all barriers in the code, including the ones between iterations. However, spawning an action comes with a certain overhead, which means that we have to limit the number of tasks we spawn. In the current implementation, that is done by dividing the grid into smaller blocks. The effect on runtime can be seen here:
The next post will explain the techniques used to implement the algorithm in detail.
Strong Scaling of the two implementations
This graph shows how important removing barriers is for parallel applications. By removing the barriers, strong scaling of the HPX implementation can be observed up to 40 cores, while the OpenMP version reaches its maximum scaling rather quickly.
|
I have heard various talks at my institution from experimentalists (who all happened to be working on superconducting qubits) that the textbook idea of true "Projective" measurement is not what happens in real-life experiments. Each time I asked them to elaborate, and they say that "weak" measurements are what happen in reality.
I assume that by "projective" measurements they mean a measurement on a quantum state like the following:
$$P\vert\psi\rangle=P(a\vert\uparrow\rangle+ b\vert\downarrow\rangle)=\vert\uparrow\rangle \,\mathrm{or}\, \vert\downarrow\rangle$$
In other words, a measurement which fully collapses the qubit.
However, if I take the experimentalists' statement that real measurements are more like strong "weak" measurements, then I run into Busch's theorem, which says roughly that you only gain as much information as the strength with which you measure. In other words, I can't get around doing a full projective measurement; I need one to obtain the full state information.
So, I have two main questions:
Why is it thought that projective measurements cannot be performed experimentally? What happens instead?
What is the appropriate framework to think about experimental measurement in quantum computing systems that is actually realistic? Both a qualitative and quantitative picture would be appreciated.
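For reference, one standard minimal model of a weak measurement is a pair of Kraus operators whose strength is set by a tunable parameter; the following Python sketch is an assumption-laden toy model (not tied to any particular superconducting-qubit readout) that exhibits such a pair and checks the completeness condition $\sum_k M_k^\dagger M_k = I$:

```python
import numpy as np

# Toy weak-measurement model: strength 0 <= eps <= 1; eps = 1 recovers a
# projective measurement in the |0>, |1> basis, eps -> 0 barely disturbs.
eps = 0.3
sz = np.diag([1.0, -1.0])
I2 = np.eye(2)

# Both operators are diagonal, so the entrywise sqrt is the matrix sqrt.
M_plus = np.sqrt((I2 + eps * sz) / 2)
M_minus = np.sqrt((I2 - eps * sz) / 2)

# Completeness: sum_k M_k^dagger M_k = I.
assert np.allclose(M_plus.T @ M_plus + M_minus.T @ M_minus, I2)

# A "+" outcome only weakly biases an equal superposition toward |0>.
psi = np.array([1.0, 1.0]) / np.sqrt(2)
post = M_plus @ psi
post /= np.linalg.norm(post)
print(post)
```

In this picture the projective measurement is the $\epsilon=1$ limit; for $\epsilon<1$ each outcome updates the state only partially, which matches the qualitative description the experimentalists give.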
|
$$\textbf{I have been given the following:}$$
This is an example of a second-order ODE, specifically that of a damped harmonic oscillator $$ {\text{d}v\over\text{d}t}=-\omega^2x-kv, $$ where $v=\text{d}x/\text{d}t$ and $\omega$ and $k$ are constants.
We can make this dimensionless by putting $x=x_0X$, $v=(x_0/t_0)V$ and $k=K/t_0$ where $\omega t_0=1$. The equation then becomes $$ {\text{d}V\over\text{d}T}=-X-KV. $$
To solve numerically, we need to convert this to a pair of 1st-order equations by using a vector of variables $Z=(X,V)$: $$ \begin{align} {\text{d}X\over\text{d}T} &= V;\\ {\text{d}V\over\text{d}T} &= -X-KV. \end{align} $$
We'll look for solutions with $K=0,0.5,1$, initial conditions $(X_0,V_0)=(1.0,0.3)$, and we'll plot the solution to $T=4\pi$.
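The pair of first-order equations above can be integrated with any standard ODE scheme; here is an illustrative Python sketch using a hand-rolled classical RK4 (the step count is an arbitrary choice):

```python
import numpy as np

def rhs(Z, K):
    # First-order system for Z = (X, V): X' = V, V' = -X - K*V.
    X, V = Z
    return np.array([V, -X - K * V])

def rk4(K, Z0, T_end, n_steps):
    # Classical 4th-order Runge-Kutta integration.
    dT = T_end / n_steps
    Z = np.array(Z0, dtype=float)
    for _ in range(n_steps):
        k1 = rhs(Z, K)
        k2 = rhs(Z + 0.5 * dT * k1, K)
        k3 = rhs(Z + 0.5 * dT * k2, K)
        k4 = rhs(Z + dT * k3, K)
        Z = Z + dT * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return Z

# Undamped case (K = 0) returns to the initial state after one period 2*pi.
X, V = rk4(0.0, (1.0, 0.3), 2 * np.pi, 1000)
print(X, V)  # approximately (1.0, 0.3)
```

The $K=0$ periodicity check is a useful sanity test before plotting the damped cases $K=0.5$ and $K=1$ out to $T=4\pi$.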
$$\textbf{My problem}$$
I understand how to solve the differential equation numerically, once it is in the form of two 1st-order equations. My problem is with making the differential equation dimensionless:
Why do we want to make it dimensionless, is there not a way to convert into first order DEs without doing so? I don't entirely understand what the X,V,K terms are. I am assuming that we can say $x$ is some function, X multiplied by the initial condition $x_0$, as in this case the initial condition would be the only part with dimensionality. However I am not convinced with my interpretation, if anyone could provide some insight for this, that'd be great. Finally, upon making it dimensionless, how is it converted into the two First order DEs, it is mainly where $\frac{dX}{dT}$ comes from that is confusing me.
Any help is greatly appreciated, Thanks
|
(similar to Mariano's post)
Q1: no. There are topological manifolds that don't admit triangulations, let alone smooth structures. All smooth manifolds admit triangulations, this is a theorem of Whitehead's. The lowest-dimensional examples of topological manifolds that don't admit triangulations are in dimension 4, the obstruction is called the Kirby-Siebenmann smoothing obstruction.
Q2: $C^1$ manifolds all admit compatible $C^\infty$ and analytic ($C^\omega$) structures. This is a theorem of Hassler Whitney's, in his early papers on manifold theory, where he proves they embed in euclidean space. The basic idea is that your manifold is locally cut out of euclidean space by $C^1$-functions so you apply a smoothing operator to the function and then argue that the level-set does not change (up to $C^1$-diffeomorphism), provided your smoothing approximation is small enough in the $C^1$-sense. I'm not sure who gets the original credit but you can go much further -- compact boundaryless smooth manifolds are all realizable as components of real affine algebraic varieties, planar linkages in particular. There's a Millson and Kapovich paper on the topic available if you do a Google search. It seems people give a lot of credit to Bill Thurston.
edit: Some time ago Riccardo Benedetti sent me some comments to append to my answer. They appear below, with some minor MO-formatting on my part.
In the famous paper "Real algebraic manifolds" (Annals of Math 56, 3, 1952), John Nash just proved that:
"Every compact closed smooth manifold M embedded in some $R^N$, with $N$ big enough (as usually $N=2Dim(M)+1$ suffices), can be smoothly approximated by a union of components, say $M_a$, of the non-singular locus of a real algebraic subset $X$ of $R^N$."
In the same paper he stated also some conjectures/questions, in particular whether one can get $M_a = X$ (so that $X$ is a non-singular real algebraic model of $M$), or whether one can even get such an algebraic model which is rational.
A.H. Wallace (1957) solved positively the first question under the assumption that $M$ is a boundary. Finally a complete positive answer was given by A. Tognoli (1973) by using, among other things, a so called "Wallace trick" and the fact (due to Milnor) that the smooth/un-oriented bordism group is generated by real algebraic projective non singular varieties.
Starting from this Nash-Tognoli theorem, mostly in the 80's-90's of the last century, a huge activity has been developed about the existence of real algebraic models for several instances of smooth or polyhedral structures, with major contributions by S. Akbulut and H. King (in particular they proved that if M is embedded in $R^n$, an algebraic model can be realized in $R^{n+1}$; to my knowledge it is open if we can stay in the given $R^n$).
If I am not wrong, the realization of real algebraic varieties via planar linkages (with related credit to Thurston) does not provide an alternative proof of Nash-Tognoli theorem.
The "Nash rationality conjecture" is more intriguing and has been basically "solved" in dimension less or equal to 3. This is mentioned for instance in some answers to the questions:
What's the difference between a real manifold and a smooth variety?
"You might also be interested in some of the articles by Kollár on the Nash conjecture contrasting real varieties and real manifolds, such as "What are the simplest varieties?", Bulletin, vol. 38. I like the pair of theorems 54, 51, subtitled respectively: "the Nash conjecture is true in dim 3" and "The Nash conjecture is false in dim 3".
What is known about the MMP over non-algebraically closed fields
"Another issue is the rational connectivity and its relation to Mori fiber spaces ...... To illustrate the difficulties, here is a conjecture of Nash (yes, that Nash):
Let $Z$ be a smooth real algebraic variety. Then $Z$ can be realized as the real points of a rational complex algebraic variety. This, actually, turns out to be false. Kollár calls it the shortest lived conjecture as it was stated in 1954 and disproved by Comessatti around 1914 (I don't remember the exact year). However, even if the statement is false, just the fact that it was made and no one realized for 50 years that it was false should show that these questions are by no means easy. (Comessatti's paper was in Italian and I have no idea how Kollár found it.)
Kollár showed more systematically the possible topology types of manifolds that can satisfy this statement. In particular, Kollár shows that any closed connected 3-manifold occurs as a possibly non-projective real variety birationally equivalent to $P^3$. In other words, the way Nash's conjecture fails is on the verge of the difference between projective and proper, again showing that these questions are not easy." (Sandor Kovacs)
As I have been even more concerned with, let me add a few comments. Comessatti's result was certainly "well known" at least to some Italian people and also to the real algebraic Russian guys of Arnol'd's and mostly Rokhlin's school. Moreover (before Kollar's work) it had been rediscovered (via modern tools) for instance by R. Silhol. This allows the following 2D solution of the Nash conjecture:
" (1) Every non orientable compact closed surface admits a rational projective non singular model which can be explicitly given by (algebraically) blowing-up $\mathbb RP^2$ at some points;
(2) $S^2$ and $T^2$ are the only orientable surfaces that admit non singular projective rational models (Comessatti);
(3) Every surface $S$ admits a projective rational model possibly having one singular point (to get it, first smoothly blow up $S$ at one point $p$, getting a non orientable $S'$ containing a smooth exceptional curve $C$ over $p$. Take a projective non singular rational model $S'_r$ of $S'$. Finally one can prove that $C$ can be approximated in $S'_r$ by a non singular algebraic curve $C_a$ and that we can perform a "singular" blow-down of $C_a$, producing the required singular rational model of $S$.)"
Inspired by this 2D discussion, in the paper (with A. Marin) Dechirures de varietes de dimension trois et la conjecture de Nash de rationalite' en dimension trois Comment. Math. Helv. 67 (4) (1992), 514-545 (a PDF file is available in http://www.dm.unipi.it/~benedett/research.html)
we got a formally similar 3D solution. More precisely we provide at first a complete classification of compact closed smooth 3-manifolds $M$ up to "flip/flop" along smooth centres (see below - these are the "Dechirures" - perhaps we were wrong to write the paper in French ... ).
Summarizing (and roughly) the results are:
There is a complete invariant $I(M)$ for this equivalence relation. Depending only on $I(M)$, we explicitly produce a real projective non singular rational 3-fold $Z$ such that $I(M)=I(Z)$.
There is a smooth link $L$ in $M$ and a non singular real algebraic link $L_a$ in $Z$ such that by smoothly (algebraically) blowing up $M (Z)$ along $L (L_a)$ we get the same manifold $Z'$ and furthermore the disjoint real algebraic exceptional tori over $L_a$ coincide with the exceptional tori over $L$ (thinking all within the smooth category, basically by definition this means that $Z$ and $M$ are equivalent up to flip/flop along smooth centres).
Clearly $Z'$ is projective rational non singular and algebraically dominates $Z$. It also smoothly dominates $M$. Finally we can convert the smooth blow-down onto $M$ into a singular algebraic blow-down, producing a projective rational model $M_r$ of $M$, possibly singular at a non singular real algebraic copy of the link $L$ in $M_r$.
The invariant $I(M)$ is easy when $M$ is orientable; this is just the dimension of $H_2(M;\mathbb Z/2)$. In this case $Z$ is obtained by algebraically blowing up $S^3$ at some points. $I(M)$ is much more complicated if $M$ is non orientable and involves, among other things, certain quadratic forms on $\mathrm{Ker}(i_*:H_1(S;\mathbb Z/2) \to H_1(M;\mathbb Z/2))$, $S$ being any characteristic surface of $M$.
A combination of our work with Kollar's one roughly gives:
(3D "a la" Comessatti) In the projective framework, in general our singular rational models cannot be improved (the singularity cannot be avoided, and in a sense we provided models with very mild singularities); in other words, those blow-downs $Z'\to M_r$ were intrinsically singular.
(Non projective non singular rational models) Starting from our real projective rational non singular 3-fold $Z'$, as above, Kollar proves that one can realize a non singular blow-down $Z'\to M'_r$, provided that we leave the projective framework.
Finally it is intriguing to note another important occurrence of the opposition projective singular vs non singular (related to the existence of intrinsically singular blow-downs). Going back to the original Nash-Tognoli kind of problems, for a while it was conjectured and very much desired (for several reasons, also related to the general question of characterizing, say, the compact polyhedra which admit possibly singular real algebraic models) that every $M$ as above admits a "totally algebraic model" $M_a$, i.e. such that $H_*(M_a;\mathbb Z/2)$ is generated by the $\mathbb Z/2$-fundamental classes of algebraic sub-varieties of $M_a$. On the contrary, we constructed counterexamples in:
(with M. Dedo') Counterexamples to representing homology classes by real algebraic subvarieties up to homeomorphism, Compositio Math. 53 (2) (1984), 143-151 (idem)
This contrasts with a result by Akbulut-King that $M$ admits singular totally algebraic models.
|
adjclust package
This document has two parts:
the first part aims at clarifying relations between dissimilarity and similarity methods for hierarchical agglomerative clustering (HAC) and at explaining implementation choices in adjclust;
the second part describes the different types of dendrograms that are implemented in plot.chac.
In this document, we suppose given \(n\) objects, \(\{1, \ldots, n\}\) that have to be clustered using adjacency-constrained HAC (CHAC), that is, in such a way that only adjacent objects/clusters can be merged.
adjclust
The basic implementation of adjclust takes, as an input, a kernel \(k\) which is supposed to be symmetric and positive (in the kernel sense). If your data are in this format, then the constrained clustering can be performed with
fit <- adjClust(k, type = "similarity")
or with
fit <- adjClust(k, type = "similarity", h = h)
if, in addition, the kernel \(k\) is supposed to have only null entries outside of a band of size h around the diagonal.
The implementation is the one described in [1].
In this section, the available data set is a matrix \(s\) that can either have only positive entries (in this case it is called a similarity) or both positive and non-positive entries. If, in addition, the matrix \(s\) is normalized, i.e., \(s(i,i) + s(j,j) - 2s(i,j) \geq 0\) for all \(i,j=1,\ldots,n\), then the algorithm implemented in adjclust can be applied directly, similarly as for a standard kernel (Section 1). This section explains why this is the case.
The interpretation is similar to the kernel case, under the assumption that small or strongly negative similarity values correspond to objects that are less expected to be clustered together than objects with large similarity values. The application of the method is justified by the fact that, for a given matrix \(s\) as described above, we can find a \(\lambda > 0\) such that the matrix \(k_\lambda\) defined by \[ \forall\, i,j=1,\ldots,n,\qquad k_\lambda(i,j) = s(i,j) + \lambda \mathbb{1}_{\{i=j\}}\] is a kernel (
i.e., the matrix \(k_\lambda = s + \lambda I\) is positive definite); indeed, this is the case for any \(\lambda\) larger than the opposite of the smallest negative eigenvalue of \(s\). [3] shows that the HAC obtained from the distance induced by the kernel \(k_\lambda\) in its feature space and the HAC obtained from the ad hoc dissimilarity defined by \[ \forall\, i,j=1,\ldots,n,\qquad d(i,j) = \sqrt{s(i,i) + s(j,j) - 2s(i,j)}\] are identical, except that all the merging levels are shifted by \(\lambda\).
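As a quick numerical illustration of this diagonal-shift argument (using a small made-up similarity matrix, not data or code from the package), one can check that adding slightly more than the magnitude of the most negative eigenvalue to the diagonal yields a positive definite matrix:

```python
import numpy as np

# Hypothetical 4x4 symmetric similarity matrix with some negative entries.
s = np.array([[1.0, 0.8, -0.3, 0.1],
              [0.8, 1.0, 0.5, -0.2],
              [-0.3, 0.5, 1.0, 0.6],
              [0.1, -0.2, 0.6, 1.0]])

eigvals = np.linalg.eigvalsh(s)
# Shift the diagonal by a bit more than the opposite of the smallest
# (possibly negative) eigenvalue: k = s + lambda * I is then positive definite.
lam = max(0.0, -eigvals.min()) + 1e-6
k = s + lam * np.eye(len(s))
assert np.linalg.eigvalsh(k).min() > 0
```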
In conclusion, to address this case, the command lines that have to be used are the ones described in section 1.
Suppose now that the data set is described by a matrix \(s\) as in the previous section except that this similarity matrix is not normalized, meaning that, there is at least one pair \((i,j)\), such that \[ 2s(i,j) > s(i,i) + s(j,j). \]
The package then performs the following pre-transformation: a matrix \(s^{*}\) is defined as \[ \forall\,i,j=1,\ldots,n,\qquad s^{*}(i,j) = s(i,j) + \lambda \mathbb{1}_{\{i=j\}}\] for a \(\lambda\) large enough to ensure that \(s^{*}\) becomes normalized. In the package, \(\lambda\) is chosen as \[ \lambda := \epsilon + \max_{i,j} \left(2s(i,j) - s(i,i) - s(j,j)\right)_+\] for a small \(\epsilon > 0\). This transformation is justified by the property described in Section 2.1 (non-positive but normalized similarities). The underlying idea is that shifting the diagonal entries of a similarity matrix does not change the HAC result, so they can be shifted until they induce a proper
ad hoc dissimilarity matrix. The transformation affects only the heights, ensuring that they are all positive, and the two command lines described in the first section of this note remain valid.
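The choice of \(\lambda\) above can be sketched as follows (a toy Python illustration on an invented 2×2 similarity; the helper name is ours, not part of adjclust):

```python
import numpy as np

def normalize_similarity(s, eps=1e-8):
    """Shift the diagonal of a symmetric similarity matrix so that
    s*(i,i) + s*(j,j) - 2 s*(i,j) >= 0 for all i, j, mirroring the
    pre-transformation described above."""
    # gap(i,j) = 2 s(i,j) - s(i,i) - s(j,j); positive entries violate
    # the normalization condition.
    gap = 2 * s - np.add.outer(np.diag(s), np.diag(s))
    np.fill_diagonal(gap, 0.0)          # i == j contributes nothing
    lam = eps + max(gap.max(), 0.0)
    return s + lam * np.eye(len(s)), lam

# Toy non-normalized similarity: 2 s(0,1) > s(0,0) + s(1,1).
s = np.array([[1.0, 2.0], [2.0, 1.0]])
s_star, lam = normalize_similarity(s)

# The induced squared ad hoc dissimilarity is now non-negative.
d2 = np.add.outer(np.diag(s_star), np.diag(s_star)) - 2 * s_star
assert (d2 >= 0).all()
```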
The original implementation of (unconstrained) HAC in
stats::hclust takes as input a dissimilarity matrix. However, the implementation of
adjclust is based on a kernel/similarity approach. We describe in this section how the dissimilarity case is handled.
Suppose given a dissimilarity \(d\) which satisfies:
\(d\) has non negative entries: \(d(i,j) \geq 0\) for all \(i,j=1,\ldots,n\);
\(d\) is symmetric: \(d(i,j) = d(j,i)\) for all \(i,j=1,\ldots,n\);
\(d\) has a null diagonal: \(d(i,i) = 0\) for all \(i=1,\ldots,n\).
Any sequence of positive numbers \((a_i)_{i=1,\ldots,n}\) would provide a similarity \(s\) for which \(d\) is the
ad-hoc dissimilarity by setting: \[ \left\{ \begin{array}{l} s(i,i) = a_i\\ s(i,j) = \frac{1}{2} (a_i + a_j - d^2(i,j)) \end{array} \right. .\] By definition, such an \(s\) is normalized and any choice for \((a_i)_{i=1,\ldots,n}\) yields the same clustering (since they all correspond to the same ad-hoc dissimilarity). The arbitrary choice \(a_i = 1\) for all \(i=1,\ldots,n\) has thus been made.
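A small sanity check of this construction, with a toy dissimilarity matrix and the arbitrary choice \(a_i = 1\) (a Python sketch, not package code):

```python
import numpy as np

# Toy dissimilarity on 3 points (symmetric, zero diagonal).
d = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 1.5],
              [2.0, 1.5, 0.0]])

# Similarity with the arbitrary choice a_i = 1 for all i:
# s(i,j) = (a_i + a_j - d^2(i,j)) / 2, s(i,i) = a_i.
s = 0.5 * (2.0 - d ** 2)
np.fill_diagonal(s, 1.0)

# The ad hoc dissimilarity induced by s recovers d exactly.
d_back = np.sqrt(np.add.outer(np.diag(s), np.diag(s)) - 2 * s)
assert np.allclose(d_back, d)
```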
The basic and the sparse implementations are both available with, respectively,
fit <- adjClust(d, type = "dissimilarity")
and
fit <- adjClust(d, type = "dissimilarity", h = h)
In this section, we suppose given a Euclidean distance \(d\) between objects (even though the results described in this section are not specific to this case, they are described more easily within this framework). Ward's criterion, which is implemented in
adjclust, aims at minimizing the Error Sum of Squares (ESS), which is equal to: \[ \mbox{ESS}(\mathcal{C}) = \sum_{C \in \mathcal{C}} \sum_{i \in C} d^2(i, g_C)\] where \(\mathcal{C}\) is the clustering and \(g_C = \frac{1}{\mu_C} \sum_{i \in C} x_i\) is the center of gravity of the cluster \(C\) with \(\mu_C\) elements, \(x_i\) denoting the point associated with object \(i\) [5]. In the sequel, we will denote:
the within-cluster dispersion, which, for a given cluster \(C\), is equal to \[ I(C) = \sum_{i \in C} d^2(i, g_C).\] We can prove that \(I(C) = \frac{1}{2\mu_C} \sum_{i,j \in C} d^2(i,j)\) (see [4] for instance); and the average within-cluster dispersion, which is equal to \(\frac{I(C)}{\mu_C}\) and corresponds to the cluster variance.
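The identity \(I(C) = \frac{1}{2\mu_C} \sum_{i,j \in C} d^2(i,j)\) is easy to verify numerically on random points (a Python sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
C = rng.normal(size=(5, 3))          # a cluster of 5 points in R^3
g = C.mean(axis=0)                   # center of gravity
mu = len(C)

# Within-cluster dispersion, computed both ways.
I_direct = ((C - g) ** 2).sum()
pair_d2 = ((C[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)
I_pairs = pair_d2.sum() / (2 * mu)
assert np.isclose(I_direct, I_pairs)
```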
Usually, the results of standard HAC are displayed in the form of a dendrogram in which the height of each merge corresponds to the linkage criterion \[ \delta(A,B) = I(A \cup B) - I(A) - I(B)\] of that merge. This criterion corresponds to the increase in total dispersion (ESS) caused by merging the two clusters \(A\) and \(B\). However, for constrained HAC, there is no guarantee that this criterion is non decreasing (see [2] for instance), and thus the dendrogram built using this method can contain reversals in its branches. This is the default option in
plot.chac (corresponding to
mode = "standard"). To provide dendrograms that are easier to interpret, alternative options have been implemented in the package: the first one is a simple correction of the standard method, and the three others are suggested by [3].
In the sequel, we denote by \((m_t)_{t=1,\ldots,n-1}\) the series of linkage criterion values obtained during the clustering.
mode = "corrected"
This option simply corrects the heights by adding the minimal value that makes them non decreasing. More precisely, if at a given step \(t \in \{2,\ldots,n-1\}\) of the clustering we have \(m_t < m_{t-1}\), then we define the corrected heights as: \[ \tilde{m}_{t'} = \left\{ \begin{array}{ll} m_{t'} & \textrm{if } t' < t\\ m_{t'} + (m_{t-1} - m_t) & \textrm{otherwise} \end{array} \right.. \] This correction is performed iteratively for all decreasing merges, ensuring a visually increasing dendrogram.
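A minimal sketch of this correction (our own helper, not the package's implementation):

```python
def correct_heights(m):
    """Iteratively shift heights upward so that the sequence becomes
    non-decreasing, following the formula described above."""
    out = list(m)
    for t in range(1, len(out)):
        if out[t] < out[t - 1]:
            shift = out[t - 1] - out[t]
            # add the minimal shift to this height and all later ones
            for u in range(t, len(out)):
                out[u] += shift
    return out

heights = [1.0, 3.0, 2.0, 5.0, 4.0]   # two reversals
corrected = correct_heights(heights)
assert all(a <= b for a, b in zip(corrected, corrected[1:]))
```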
mode = "total-disp"
This option represents the dendrogram using the total dispersion (that is, the objective function) at every level of the clustering. It can easily be proved that the total dispersion is equal to ESS\(_t = \sum_{t' \leq t} m_{t'}\) and that this quantity is always non decreasing. This is the quantity recommended by [2] for displaying the dendrogram.
mode = "within-disp"
This option represents a cluster-specific criterion by using the within-cluster dispersion of the two clusters being merged at every step of the algorithm. It can be proved that this quantity is also non decreasing, but it is very dependent on the cluster size, leading to flattened dendrograms in most cases.
mode = "average-disp"
This last option addresses the problem of the dependency on cluster sizes posed by the previous method (
"within-disp") by using the average within-cluster dispersion of the two clusters being merged at every step of the algorithm. This criterion is also cluster-specific but does not guarantee the absence of reversals in heights.
As documented in [4], the call to
hclust(..., method = "ward.D") implicitly supposes that
... is a
squared distance matrix. As explained above, we did not make such an assumption, so
hclust(d^2, method = "ward.D") and
adjClust(d, type = "dissimilarity") give identical results when the ordering of the (unconstrained) clustering is compatible with the natural ordering of objects used as a constraint. In addition, since
hclust(..., method = "ward.D2") takes \(\sqrt{m_t}\) as its linkage,
hclust(d, method = "ward.D2") and
adjClust(d, type = "dissimilarity") give identical results for the merges, and the slot
height of the first is the square root of the slot
height of the second, when the ordering of the (unconstrained) clustering is compatible with the natural ordering of objects used as a constraint.
Finally, the package
rioja uses ESS\(_t\) to display the heights of the dendrogram (because, as documented above, this quantity is non decreasing, in the Euclidean case, even for constrained clusterings). Hence,
chclust(d, method = "coniss") (from the rioja package) and
adjClust(d, type = "dissimilarity") give identical results for the merges, and the slot
height of the first is the cumulative sum of the slot
height of the second.
[1] Dehman A. (2015). Spatial clustering of linkage disequilibrium blocks for genome-wide association studies. Phd Thesis, Université Paris Saclay.
[2] Grimm, E.C. (1987) CONISS: a fortran 77 program for stratigraphically constrained cluster analysis by the method of incremental sum of squares.
Computers & Geosciences, 13(1), 13-35.
[3] Miyamoto S., Abe R., Endo Y., Takeshita J. (2015) Ward method of hierarchical clustering for non-Euclidean similarity measures. In:
Proceedings of the VIIth International Conference of Soft Computing and Pattern Recognition (SoCPaR 2015).
[4] Murtagh, F. and Legendre, P. (2014) Ward’s hierarchical agglomerative clustering method: which algorithms implement Ward’s criterion?
Journal of Classification, 31, 274-295.
[5] Ward, J.H. (1963) Hierarchical grouping to optimize an objective function.
Journal of the American Statistical Association, 58(301), 236-244.
|
Suppose I have a damped harmonic oscillator which is at rest, sitting comfortably with no initial amplitude, obeying the equation
$$\ddot{x} + \frac{1}{Q}\dot{x} + x = 0$$
where x is the vertical amplitude and Q is the quality factor. At $t = 0$, $x = 0$.
Now, suppose I model my system to include some sort of small perturbation, such as thermal noise. We can model this as random, Gaussian-distributed vibrations, for example white noise. The equation becomes:
$$\ddot{x} + \frac{1}{Q}\dot{x} + x = N(t)$$
where $N(t)$ is a random forcing term whose values are drawn from a Gaussian distribution.
Will this noise perturb the oscillator and give it a small amount of amplitude, and can we expect to see a plot like the one below for such a case?
This is a simulation I ran, and I am wondering whether these small random perturbations will set off the oscillator and cause the typical, yet haphazard, sinusoidal behaviour.
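For what it's worth, a minimal simulation of this stochastically forced oscillator can be sketched as follows (parameter values and the integration scheme are our own arbitrary choices, not from the question):

```python
import numpy as np

rng = np.random.default_rng(42)   # seeded for reproducibility
Q = 10.0                          # quality factor (arbitrary choice)
dt = 0.01                         # time step
n_steps = 50_000

# Semi-implicit Euler integration of  x'' + x'/Q + x = N(t),
# starting at rest.  The white-noise forcing scales as 1/sqrt(dt) so
# that its integrated effect per step is N(0, dt), as in Euler-Maruyama.
x, v = 0.0, 0.0
amp = np.empty(n_steps)
for i in range(n_steps):
    noise = rng.normal() / np.sqrt(dt)
    v += (noise - v / Q - x) * dt
    x += v * dt
    amp[i] = x
```

In runs of this sketch the oscillator is indeed excited to a small, haphazardly modulated oscillation near its resonance frequency, consistent with the kind of plot described.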
|
Is there a "simple" mathematical proof that is fully understandable by a 1st year university student that impressed you because it is beautiful?
closed as primarily opinion-based by Daniel W. Farlow, Najib Idrissi, user91500, LutzL, Jonas Meyer Apr 7 '15 at 3:40
Many good questions generate some degree of opinion based on expert experience, but answers to this question will tend to be almost entirely based on opinions, rather than facts, references, or specific expertise. If this question can be reworded to fit the rules in the help center, please edit the question.
Here's a cute and lovely theorem.
There exist two irrational numbers $x,y$ such that $x^y$ is rational.
Proof. If $x=y=\sqrt2$ is an example, then we are done; otherwise $\sqrt2^{\sqrt2}$ is irrational, in which case taking $x=\sqrt2^{\sqrt2}$ and $y=\sqrt2$ gives us: $$\left(\sqrt2^{\sqrt2}\right)^{\sqrt2}=\sqrt2^{\sqrt2\sqrt2}=\sqrt2^2=2.\qquad\square$$
(Nowadays, using the Gelfond–Schneider theorem we know that $\sqrt2^{\sqrt2}$ is irrational, and in fact transcendental. But the above proof, of course, doesn't care for that.)
How about the proof that
$$1^3+2^3+\cdots+n^3=\left(1+2+\cdots+n\right)^2$$
I remember being impressed by this identity and the proof can be given in a picture:
Edit: Substituted $\frac{n(n+1)}{2}=1+2+\cdots+n$ in response to comments.
Cantor's Diagonalization Argument, the proof that there are infinite sets that cannot be put in one-to-one correspondence with the set of natural numbers, is frequently cited as a beautifully simple but powerful proof. Essentially, given a list of infinite sequences, a sequence formed by altering each diagonal entry cannot appear anywhere in the list.
I would personally argue that the proof that $\sqrt 2$ is irrational is simple enough for a university student (probably simple enough for a high school student) and very pretty in its use of proof by contradiction!
Prove that if $n$ and $m$ can each be written as a sum of two perfect squares, so can their product $nm$.
Proof: Let $n = a^2+b^2$ and $m=c^2+d^2$ ($a, b, c, d \in\mathbb Z$). Then, there exists some $x,y\in\mathbb Z$ such that
$$x+iy = (a+ib)(c+id)$$
Taking the magnitudes of both sides and squaring gives
$$x^2+y^2 = (a^2+b^2)(c^2+d^2) = nm$$
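The complex-number computation generalizes to any such pair; a quick check (with illustrative values only):

```python
def two_square_product(a, b, c, d):
    """Given n = a^2 + b^2 and m = c^2 + d^2, return (x, y) with
    x^2 + y^2 = n * m, via (a+ib)(c+id) = (ac - bd) + i(ad + bc)."""
    return a * c - b * d, a * d + b * c

a, b, c, d = 1, 2, 3, 4          # n = 5, m = 25
x, y = two_square_product(a, b, c, d)
assert x * x + y * y == (a * a + b * b) * (c * c + d * d)
```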
I would go for the proof by contradiction of an infinite number of primes, which is fairly simple:
Assume that there is a finite number of primes. Let $G$ be the set of all primes $P_1,P_2,\ldots,P_n$. Compute $K = P_1 \times P_2 \times \cdots \times P_n + 1$. If $K$ is prime, then it is obviously not in $G$. Otherwise, none of its prime factors are in $G$. Conclusion: $G$ is not the set of all primes.
I think I learned that both in high-school and at 1st year, so it might be a little too simple...
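The argument even turns into a small program: given any finite list of primes, it produces a prime outside the list (a sketch; trial division is used only for illustration):

```python
def new_prime_outside(primes):
    """Given a finite list of primes, return a prime not in the list,
    following the argument above: factor K = prod(primes) + 1."""
    K = 1
    for p in primes:
        K *= p
    K += 1
    # Trial division: the smallest factor > 1 of K is necessarily prime,
    # and it cannot be in `primes` since K leaves remainder 1 modulo each.
    f = 2
    while f * f <= K:
        if K % f == 0:
            return f
        f += 1
    return K

primes = [2, 3, 5, 7, 11, 13]
assert new_prime_outside(primes) not in primes
```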
By the concavity of the $\sin$ function on the interval $\left[0,\frac{\pi}2\right]$ we deduce these inequalities: $$\frac{2}\pi x\le \sin x\le x,\quad \forall x\in\left[0,\frac\pi2\right].$$
The first player in Hex has a winning strategy.
There are no draws in Hex, so one player must have a winning strategy. If player two has a winning strategy, player one can steal that strategy by placing the first stone in the center (additional pieces on the board never hurt your position) then using player two's strategy.
You cannot have two dice (with numbers $1$ to $6$) biased so that when you throw both, the sum is uniformly distributed in $\{2,3,\dots,12\}$.
For easier notation, we use the equivalent formulation "You cannot have two dice (with numbers $0$ to $5$) biased such that when you throw both, the sum is uniformly distributed in $\{0,1,\dots,10\}$."
Proof: Assume that such dice exist. Let $p_i$ be the probability that the first die gives an $i$ and $q_i$ be the probability that the second die gives an $i$. Let $p(x)=\sum_{i=0}^5 p_i x^i$ and $q(x)=\sum_{i=0}^5 q_i x^i$.
Let $r(x)=p(x)q(x) = \sum_{i=0}^{10} r_i x^i$. We find that $r_i = \sum_{j+k=i}p_jq_k$. But hey, this is also the probability that the sum of the two dice is $i$. Therefore, $$ r(x)=\frac{1}{11}(1+x+\dots+x^{10}). $$ Now $r(1)=1\neq0$, and for $x\neq1$, $$ r(x)=\frac{(x^{11}-1)}{11(x-1)}, $$ which clearly is nonzero when $x\neq 1$. Therefore $r$ does not have any real zeros.
But because $p$ and $q$ are degree-$5$ polynomials with real coefficients, each must have a real zero (odd degree). Therefore, $r(x)=p(x)q(x)$ has a real zero. A contradiction.
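One can also see the two halves of the argument numerically (the biased die below is an invented example for illustration):

```python
import numpy as np

# r(x) = (1 + x + ... + x^10)/11: its roots are the ten non-trivial
# 11th roots of unity, none of which is real.
roots_r = np.roots(np.ones(11))
assert np.abs(roots_r.imag).min() > 0.1

# By contrast, any degree-5 polynomial with real coefficients (such as
# a die's probability generating polynomial) has at least one real root.
p = [0.2, 0.1, 0.3, 0.1, 0.2, 0.1]   # hypothetical biased die
roots_p = np.roots(p)
assert np.abs(roots_p.imag).min() < 1e-6
```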
Given a square consisting of $2n \times 2n$ tiles, it is possible to cover this square with pieces that each cover $2$ adjacent tiles (like domino bricks). Now imagine you remove two tiles, from two opposite corners of the original square. Prove that it is now no longer possible to cover the remaining area with domino bricks.
Proof:
Imagine that the square is a checkerboard. Each domino brick will cover two tiles of different colors. When you remove tiles from two opposite corners, you will remove two tiles with the
same color. Thus it is no longer possible to cover the remaining area.
(Well, it may be
too "simple." But you did not state that it had to be a university student of mathematics. This one might even work for liberal arts majors...)
One little-known gem at the intersection of geometry and number theory is Aubry's reflective generation of primitive Pythagorean triples, i.e. coprime naturals $\,(x,y,z)\,$ with $\,x^2 + y^2 = z^2.\,$ Dividing by $\,z^2$ yields $\,(x/z)^2+(y/z)^2 = 1,\,$ so each triple corresponds to a rational point $(x/z,\,y/z)$ on the unit circle. Aubry showed that we can generate all such triples by a very simple geometrical process. Start with the trivial point $(0,-1)$. Draw a line to the point $\,P = (1,1).\,$ It intersects the circle in the
rational point $\,A = (4/5,3/5),\,$ yielding the triple $\,(3,4,5).\,$ Next reflect the point $\,A\,$ into the other quadrants by taking all possible signs of each component, i.e. $\,(\pm4/5,\pm3/5),\,$ yielding the inscribed rectangle below. As before, the line through $\,A_B = (-4/5,-3/5)\,$ and $P$ intersects the circle in $\,B = (12/13, 5/13),\,$ yielding the triple $\,(12,5,13).\,$ Similarly the points $\,A_C,\, A_D\,$ yield the triples $\,(20,21,29)\,$ and $\,(8,15,17).\,$ We can iterate this process with the new points $\,B,C,D,\,$ doing the same as we did for $\,A,\,$ obtaining further triples. Iterating this process generates the primitive triples as a ternary tree.
Descent in the tree is given by the formula
$$\begin{eqnarray} (x,y,z)\,\mapsto &&(x,y,z)-2(x\!+\!y\!-\!z)\,(1,1,1)\\ = &&(-x-2y+2z,\,-2x-y+2z,\,-2x-2y+3z)\end{eqnarray}$$
e.g. $\ (12,5,13)\mapsto (12,5,13)-8(1,1,1) = (4,-3,5),\ $ yielding $\,(4/5,3/5)\,$ when reflected into the first quadrant.
Ascent in the tree by inverting this map, combined with trivial sign-changing reflections:
$\quad\quad (-3,+4,5) \mapsto (-3,+4,5) - 2 \; (-3+4-5) \; (1,1,1) = ( 5,12,13)$
$\quad\quad (-3,-4,5) \mapsto (-3,-4,5) - 2 \; (-3-4-5) \; (1,1,1) = (21,20,29)$
$\quad\quad (+3,-4,5) \mapsto (+3,-4,5) - 2 \; (+3-4-5) \; (1,1,1) = (15,8,17)$
See my MathOverflow post for further discussion, including generalizations and references.
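The descent/ascent map is easy to play with in code (a direct sketch of the formula above):

```python
def step(t):
    """Aubry's map T(x,y,z) = (x,y,z) - 2(x+y-z)(1,1,1); applied to a
    Pythagorean triple it yields another, possibly with signed entries."""
    x, y, z = t
    s = 2 * (x + y - z)
    return (x - s, y - s, z - s)

# Descent: (12, 5, 13) -> (4, -3, 5), i.e. (3, 4, 5) up to signs and order.
assert step((12, 5, 13)) == (4, -3, 5)

# Ascent: compose the sign reflections with the same map.
assert step((-3, 4, 5)) == (5, 12, 13)
assert step((-3, -4, 5)) == (21, 20, 29)
assert step((3, -4, 5)) == (15, 8, 17)

# The map preserves x^2 + y^2 = z^2.
for x, y, z in [(5, 12, 13), (21, 20, 29), (15, 8, 17)]:
    assert x * x + y * y == z * z
```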
I like the proof that there are infinitely many Pythagorean triples.
Theorem: There are infinitely many integers $ x, y, z$ such that $$ x^2+y^2=z^2 $$ Proof: $$ (2ab)^2 + ( a^2-b^2)^2= ( a^2+b^2)^2 $$
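The identity hands us a fresh triple for every choice of parameters; a quick mechanical check (taking $a = b+1$ so that both legs are nonzero):

```python
# Verify the parametric identity (2ab)^2 + (a^2-b^2)^2 = (a^2+b^2)^2
# for a range of parameter values.
for b in range(1, 50):
    a = b + 1
    x, y, z = 2 * a * b, a * a - b * b, a * a + b * b
    assert x * x + y * y == z * z
```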
One cannot cover a disk of diameter 100 with 99 strips of length 100 and width 1.
Proof: project the disk and the strips onto a hemisphere resting on top of the disk. By Archimedes' hat-box theorem, the area of the projection of a strip depends only on the strip's width, so the projection of each strip covers at most 1/100th of the area of the hemisphere, and 99 strips cover at most 99/100 of it.
If you have any set of 51 integers between $1$ and $100$, the set must contain some pair of integers where one number in the pair is a multiple of the other.
Proof: Suppose you have a set of $51$ integers between $1$ and $100$. If an integer is between $1$ and $100$, its largest odd divisor is one of the odd numbers between $1$ and $99$. There are only $50$ odd numbers between $1$ and $99$, so your $51$ integers can’t all have different largest odd divisors — there are only $50$ possibilities. So two of your integers (possibly more) have the same largest odd divisor. Call that odd number $d$. You can factor those two integers into prime factors, and each will factor as (some $2$’s)$\cdot d$. This is because if $d$ is the largest divisor of a number, the rest of its factorization can’t include any more odd numbers. Of your two numbers with largest odd factor $d$, the one with more $2$’s in its factorization is a multiple of the other one. (In fact, the multiple is a power of $2$.)
In general, let $S$ be the set of integers from $1$ up to some even number $2n$. If a subset of $S$ contains more than half the elements in $S$, the set must contain a pair of numbers where one is a multiple of the other. The proof is the same, but it’s easier to follow if you see it for a specific $n$ first.
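The pigeonhole argument translates directly into a small search procedure (a sketch):

```python
def find_dividing_pair(nums):
    """Group integers by largest odd divisor; any two numbers in the
    same group differ by a power of 2, so one divides the other."""
    by_odd = {}
    for v in nums:
        d = v
        while d % 2 == 0:        # strip factors of 2 to get the odd part
            d //= 2
        if d in by_odd:
            small, big = sorted((by_odd[d], v))
            return small, big
        by_odd[d] = v
    return None

pair = find_dividing_pair(range(50, 101))   # 51 integers in 1..100
assert pair is not None and pair[1] % pair[0] == 0
```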
The proof that an isosceles triangle ABC (with AC and AB having equal length) has equal angles ABC and BCA is quite nice:
Triangles ABC and ACB are (mirrored) congruent (since AB = AC, BC = CB, and CA = BA), so the corresponding angles ABC and (mirrored) ACB are equal.
This congruency argument is nicer than that of cutting the triangle up into two right-angled triangles.
Parity of sine and cosine functions using Euler's forumla:
$e^{-i\theta} = cos\ (-\theta) + i\ sin\ (-\theta)$
$e^{-i\theta} = \frac 1 {e^{i\theta}} = \frac 1 {cos\ \theta \ + \ i\ sin\ \theta} = \frac {cos\ \theta\ -\ i\ sin\ \theta} {cos^2\ \theta\ +\ sin^2\ \theta} = cos\ \theta\ -\ i\ sin\ \theta$
$cos\ (-\theta) +\ i\ sin\ (-\theta) = cos\ \theta\ +i\ (-sin\ \theta)$
Thus
$cos\ (-\theta) = cos\ \theta$
$sin\ (-\theta) = -\ sin\ \theta$
$\blacksquare$
The proof is actually just the first two lines.
I believe Gauss was tasked with finding the sum of all the integers from $1$ to $100$ in his very early schooling years. He tackled it quicker than his peers or his teacher could: $$\sum_{n=1}^{100}n=1+2+3+4 +\dots+100$$ $$=100+99+98+\dots+1$$ $$\therefore 2 \sum_{n=1}^{100}n=(100+1)+(99+2)+\dots+(1+100)$$ $$=\underbrace{101+101+101+\dots+101}_{100\text{ times}}$$ $$=101\cdot 100$$ $$\therefore \sum_{n=1}^{100}n=\frac{101\cdot 100}{2}$$ $$=5050.$$ Hence he showed that $$\sum_{k=1}^{n} k=\frac{n(n+1)}{2}.$$
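Gauss's pairing trick checks out instantly by brute force, of course:

```python
# Compare the closed form n(n+1)/2 against the direct sum for a few n.
for n in (1, 2, 100, 1000):
    assert sum(range(1, n + 1)) == n * (n + 1) // 2
```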
If $H$ is a subgroup of $(\mathbb{R},+)$ and $H\cap [-1,1]$ is finite and contains a positive element, then $H$ is cyclic.
Fermat's little theorem from noting that modulo a prime p we have for $a\neq 0$:
$$1\times2\times3\times\cdots\times (p-1) = (1\times a)\times(2\times a)\times(3\times a)\times\cdots\times \left((p-1)\times a\right)$$
Proposition (No universal set): There does not exist a set which contains all sets (including itself).
Proof: Suppose to the contrary that such a set exists. Let $X$ be the universal set; then one can construct, by the axiom schema of specification, the set
$$C=\{A\in X: A \notin A\}$$
of all sets in the universe which do not contain themselves. As $X$ is universal, clearly $C\in X$. But then $C\in C \iff C\notin C$, a contradiction.
Edit: This assumes that one is working in ZF (as almost everywhere :P).
(This proof really impressed me the first time I saw it, and it is also very simple.)
Most proofs concerning the Cantor Set are simple but amazing.
The total length (Lebesgue measure) of the set is zero.
It is uncountable.
Every number in the set can be represented in ternary using just the digits 0 and 2; no number whose ternary expansions all require a 1 appears in the set.
The Cantor set contains as many points as the interval from which it is taken, yet itself contains no interval of nonzero length. The irrational numbers have the same property, but the Cantor set has the additional property of being closed, so it is not even dense in any interval, unlike the irrational numbers which are dense in every interval.
The Menger sponge which is a 3d extension of the Cantor set
simultaneously exhibits an infinite surface area and encloses zero volume.
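The measure-zero claim follows from summing the removed middle thirds, which one can do exactly with rationals (a sketch):

```python
from fractions import Fraction

# At step n the construction removes 2**(n-1) middle thirds, each of
# length 3**(-n).  The removed lengths sum to 1, so the Cantor set
# itself has measure zero.
removed = sum(Fraction(2 ** (n - 1), 3 ** n) for n in range(1, 200))
assert float(1 - removed) < 1e-15     # partial sums converge to 1
```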
The derivation of the derivative from first principles is amazing, easy, useful, and simply outstanding in all respects. I put it here:
Suppose we have a quantity $y$ whose value depends upon a single variable $x$, and is expressed by an equation defining $y$ as some specific function of $x$. This is represented as:
$y=f(x)$
This relationship can be visualized by drawing a graph of function $y = f (x)$ regarding $y$ and $x$ as Cartesian coordinates, as shown in Figure(a).
Consider the point $P$ on the curve $y = f (x)$ whose coordinates are $(x, y)$ and another point $Q$ where coordinates are $(x + Δx, y + Δy)$.
The slope of the line joining $P$ and $Q$ is given by:
$\tan θ = \frac{Δy}{Δx} = \frac{(y + Δy ) − y}{Δx}$
Suppose now that the point $Q$ moves along the curve towards $P$.
In this process, $Δy$ and $Δx$ decrease and approach zero; though their ratio $\frac{Δy}{Δx}$ will not necessarily vanish.
What happens to the line $PQ$ as $Δy→0$, $Δx→0$? You can see that this line becomes a tangent to the curve at point $P$ as shown in Figure(b). This means that $\tan θ$ approaches the slope of the tangent at $P$, denoted by $m$:
$m=\lim_{Δx→0} \frac{Δy}{Δx} = \lim_{Δx→0} \frac{(y+Δy)-y}{Δx}$
The limit of the ratio $Δy/Δx$ as $Δx$ approaches zero is called the derivative of $y$ with respect to $x$ and is written as $dy/dx$.
It represents the slope of the tangent line to the curve $y=f(x)$ at the point $(x, y)$.
Since $y = f (x)$ and $y + Δy = f (x + Δx)$, we can write the definition of the derivative as:
$\frac{dy}{dx}=\frac{d{f(x)}}{dx} = \lim_{Δx→0} \left[\frac{f(x+Δx)-f(x)}{Δx}\right]$,
which is the required formula.
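The limiting process is easy to watch numerically: for $f(x) = x^2$ the difference quotient $\frac{f(x+Δx)-f(x)}{Δx} = 2x + Δx$ tends to the slope $2x$ as $Δx$ shrinks (a small sketch):

```python
# Difference quotients of f(x) = x^2 at x = 3 for shrinking step sizes;
# they approach the derivative f'(3) = 6.
f = lambda x: x ** 2
x0 = 3.0
quotients = [(f(x0 + h) - f(x0)) / h for h in (1.0, 0.1, 0.001, 1e-6)]
assert abs(quotients[-1] - 6.0) < 1e-4
```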
This proof that $n^{1/n} \to 1$ as integral $n \to \infty$:
By Bernoulli's inequality (which is $(1+x)^n \ge 1+nx$), $(1+n^{-1/2})^n \ge 1+n^{1/2} > n^{1/2} $. Raising both sides to the $2/n$ power, $n^{1/n} <(1+n^{-1/2})^2 = 1+2n^{-1/2}+n^{-1} < 1+3n^{-1/2} $.
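The derived bound $n^{1/n} < 1 + 3n^{-1/2}$ is easy to confirm numerically over a large range (a quick check, not a substitute for the proof):

```python
# Check n**(1/n) < 1 + 3/sqrt(n) for n = 1, ..., 9999.
for n in range(1, 10_000):
    assert n ** (1.0 / n) < 1 + 3 / n ** 0.5
```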
Can a Chess Knight starting at any corner then move to touch every space on the board exactly once, ending in the opposite corner?
The solution turns out to be childishly simple. Every time the Knight moves (up two, over one), it will hop from a black space to a white space, or vice versa. Assuming the Knight starts on a black corner of the board, it will need to touch 63 other squares, 32 white and 31 black. To touch all of the squares, it would need to end on a white square, but the opposite corner is also black, making it impossible.
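The same parity bookkeeping, in code (a sketch, coloring squares by coordinate parity):

```python
# Each knight move changes the square's color, so after 63 moves from a
# black corner the knight must stand on white; the opposite corner is black.
corner, opposite = (0, 0), (7, 7)
color = lambda sq: (sq[0] + sq[1]) % 2

assert color(corner) == color(opposite)   # opposite corners match
moves_needed = 63                          # visit the 63 other squares
end_color = (color(corner) + moves_needed) % 2
assert end_color != color(opposite)        # so the tour is impossible
```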
The eigenvalues of a skew-Hermitian matrix are purely imaginary.
The eigenvalue equation is $A\vec x = \lambda\vec x$, and forming the inner product with $\vec x$ gives $$\lambda \|\vec x\|^2 = \lambda\left<\vec x, \vec x\right> = \left<\lambda \vec x,\vec x\right> = \left<A\vec x,\vec x\right> = \left<\vec x, A^{T*}\vec x\right> = \left<\vec x, -A\vec x\right> = -\lambda^* \|\vec x\|^2$$ and since $\|\vec x\|^2 > 0$, we can divide both sides by it, leaving $\lambda = -\lambda^*$, i.e. $\lambda$ is purely imaginary. The second-to-last step uses the definition of skew-Hermitian. Using the definition of Hermitian or unitary matrices instead yields the corresponding statements about their eigenvalues.
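A random numerical instance makes the statement concrete (sketch; the matrix is generated, not special):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = B - B.conj().T                      # A is skew-Hermitian: A^H = -A

eigvals = np.linalg.eigvals(A)
assert np.allclose(eigvals.real, 0.0)   # purely imaginary spectrum
```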
I like the proof that not every real number can be written in the form $a e + b \pi$ for some integers $a$ and $b$. I know it's almost trivial in one way; but in another way it is kind of deep.
|
Lipschitz stability for the finite dimensional fractional Calderón problem with finite Cauchy data
1.
Max-Planck Institute for Mathematics in the Sciences, Inselstraße 22, 04103 Leipzig, Germany
2.
Dipartimento di Matematica e Geoscienze Università degli Studi di Trieste, via Valerio 12/1, 34127 Trieste, Italy
In this note we discuss the conditional stability issue for the finite dimensional Calderón problem for the fractional Schrödinger equation with a finite number of measurements. More precisely, we assume that the unknown potential $ q \in L^{\infty}(\Omega) $ in the equation $ ((- \Delta)^s+ q)u = 0 \mbox{ in } \Omega\subset \mathbb{R}^n $ satisfies the a priori assumption that it is contained in a finite dimensional subspace of $ L^{\infty}(\Omega) $. Under this condition we prove Lipschitz stability estimates for the fractional Calderón problem by means of finitely many Cauchy data depending on $ q $. We allow for the possibility of zero being a Dirichlet eigenvalue of the associated fractional Schrödinger equation. Our result relies on the strong Runge approximation property of the fractional Schrödinger equation.
Mathematics Subject Classification: Primary: 35R30, 35A35, 35S15. Citation: Angkana Rüland, Eva Sincich. Lipschitz stability for the finite dimensional fractional Calderón problem with finite Cauchy data. Inverse Problems & Imaging, 2019, 13 (5) : 1023-1044. doi: 10.3934/ipi.2019046
|
Forgot password? New user? Sign up
Existing user? Log in
If $$C = \int_0^{3\pi} \left[ \frac{1 - \cos^2{\frac{x}{12}}}{1 - \sin^2{\frac{x}{12}}} + 1 \right] \mathrm{d}x$$ and $2 + \frac{C}{144}$ is of the form $a/b$, where $a$ and $b$ are coprime positive integers, enter $a+b$ as your answer.
|
Fix a finite set $X$ and two natural numbers $d$ and $n$.
For a partition $\lambda$ and a number $d$ denote by $s_\lambda^d(x_1,\dots,x_d)$ the Schur polynomial in $d$-many variables $x_1,\dots,x_d$. Denote by $\mathcal P_n(X)$ the set of partition-valued functions on $X$ of total size $n$, i.e. the set of functions $\lambda\colon X\to\{\text{partitions}\}$ such that $\|\lambda\|:=\sum_{x\in X}|\lambda(x)|=n$.
Let $a\colon X\to \mathbb{N}\backslash\{0\}$ be a function on $X$. EDIT: I am mostly interested in the case where $a$ is the constant function with value $1$.
I am trying to evaluate the expression $$F_X(d,a):=\sum_{\lambda\in \mathcal P_n(X)}\left(\prod_{x\in X}s^d_{\lambda(x)}(a(x),\dots,a(x))\right)^2. $$
EDIT:
I now have good reason to believe that if we take for $a$ the constant function with value $1$, then $$F_X(d,a)=\left(\binom{d^2|X|}{n}\right).$$ Here $\left(\binom{m}{n}\right)$ denotes the multiset number.
More generally I am hoping that if $G$ is a finite group, $X$ is the set of irreducible characters (over $\mathbb C$) of $G$ and $a$ maps each character to its dimension then we have $$F_X(d,a)=\left(\binom{d^2|G^{\mathrm{ab}}|}{n}\right)$$ Here $G^\mathrm{ab}$ is the abelianisation of $G$. This reduces to case 1 if $G$ is taken to be an abelian group.
EDIT2: my conjecture 2 above is totally wrong (as can be seen by hand in small examples like $G=S_3$), and I have no good substitute hypothesis. I am now mostly interested in a proof of point 1 (where $a$ is the constant function $1$).
The special case where $X=\{\star\}$ is a one-element set can be done using the identity $$\prod_{i,j}(1-x_iy_j)^{-1}=\sum_n\sum_{|\lambda|=n} s_\lambda(x_1,\dots)s_\lambda(y_1,\dots)$$ by putting $x_i=y_j=t$ for $i,j=1,\dots,d$ (and zero otherwise) and picking out the coefficient of $t^{2n}$ obtaining $$c^d_nt^{2n}=\sum_{|\lambda|=n}s^d_\lambda(t,\dots,t)^2,$$ where $c^d_n$ is the coefficient of $t^n$ in $(1-t)^{-d^2}$ and can be calculated to be the multiset number $\left(\binom{d^2}{n}\right)$. All in all we obtain $F_{\{\star\}}(d,a)=\left(\binom{d^2}{n}\right)\cdot a(\star)^{2n}$
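For the constant function $a\equiv 1$, conjecture 1 can be checked numerically: the principal specialization $s_\lambda(1,\dots,1)$ in $d$ variables is given by the hook content formula $s_\lambda(1^d)=\prod_{(i,j)\in\lambda}\frac{d+j-i}{h(i,j)}$, and $F_X$ is a $|X|$-fold convolution of the one-point sums. A small Python sketch (all function names are mine):

```python
from math import comb, prod

def partitions(n, max_part=None):
    """Yield the partitions of n as weakly decreasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def schur_at_ones(lam, d):
    """s_lambda(1, ..., 1) in d variables, via the hook content formula."""
    if len(lam) > d:
        return 0
    if not lam:
        return 1
    conj = [sum(1 for p in lam if p > j) for j in range(lam[0])]
    num = den = 1
    for i, row in enumerate(lam):
        for j in range(row):
            num *= d + j - i                      # content factor d + j - i
            den *= (row - j) + (conj[j] - i) - 1  # hook length of cell (i, j)
    return num // den

def F(d, n, k):
    """F_X(d, a) for |X| = k and a identically 1: a k-fold convolution of the
    one-point sums sum_{|lam| = m} s_lam(1^d)^2 over m_1 + ... + m_k = n."""
    per_point = [sum(schur_at_ones(lam, d) ** 2 for lam in partitions(m))
                 for m in range(n + 1)]
    def comps(n, k):  # compositions of n into k nonnegative parts
        if k == 1:
            yield (n,)
            return
        for first in range(n + 1):
            for rest in comps(n - first, k - 1):
                yield (first,) + rest
    return sum(prod(per_point[m] for m in sizes) for sizes in comps(n, k))

# Conjectured identity: F(d, n, k) == multiset number ((d^2 * k, n))
print(F(2, 3, 2), comb(2 * 2 * 2 + 3 - 1, 3))  # 120 120
```

For example, `F(2, 3, 2)` returns $120=\binom{10}{3}=\left(\binom{8}{3}\right)$, in agreement with the conjecture.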
|
The basis of the first derivative test is that if the derivative changes from positive to negative at a point at which the derivative is zero then there is a local maximum at the point, and similarly for a local minimum. If $f'$ changes from positive to negative it is decreasing; this means that the derivative of $f'$, $f''$, might be negative, and if in fact $f''$ is negative then $f'$ is definitely decreasing, so there is a local maximum at the point in question. Note well that $f'$ might change from positive to negative while $f''$ is zero, in which case $f''$ gives us no information about the critical value. Similarly, if $f'$ changes from negative to positive there is a local minimum at the point, and $f'$ is increasing. If $f''>0$ at the point, this tells us that $f'$ is increasing, and so there is a local minimum.
Example 5.3.1 Consider again $f(x)=\sin x + \cos x$, with $f'(x)=\cos x-\sin x$ and $ f''(x)=-\sin x -\cos x$. Since $\ds f''(\pi/4)=-\sqrt{2}/2-\sqrt2/2=-\sqrt2< 0$, we know there is a local maximum at $\pi/4$. Since $\ds f''(5\pi/4)=\sqrt{2}/2+\sqrt2/2=\sqrt2>0$, there is a local minimum at $5\pi/4$.
When it works, the second derivative test is often the easiest way to identify local maximum and minimum points. Sometimes the test fails, and sometimes the second derivative is quite difficult to evaluate; in such cases we must fall back on one of the previous tests.
Example 5.3.2 Let $\ds f(x)=x^4$. The derivatives are $\ds f'(x)=4x^3$ and $\ds f''(x)=12x^2$. Zero is the only critical value, but $f''(0)=0$, so the second derivative test tells us nothing. However, $f(x)$ is positive everywhere except at zero, so clearly $f(x)$ has a local minimum at zero. On the other hand, $\ds f(x)=-x^4$ also has zero as its only critical value, and the second derivative is again zero, but $\ds -x^4$ has a local maximum at zero.
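The test is also easy to mimic numerically: estimate $f''$ at a critical point with a central difference and check its sign. A rough Python sketch (the helper name and tolerance are my own choices, not part of the text):

```python
import math

def second_derivative_test(f, c, h=1e-5, tol=1e-6):
    """Classify the critical point c of f using a central-difference
    estimate of f''(c): negative -> local max, positive -> local min."""
    fpp = (f(c + h) - 2 * f(c) + f(c - h)) / h ** 2
    if fpp < -tol:
        return "local max"
    if fpp > tol:
        return "local min"
    return "test fails"

f = lambda x: math.sin(x) + math.cos(x)
print(second_derivative_test(f, math.pi / 4))         # local max
print(second_derivative_test(f, 5 * math.pi / 4))     # local min
print(second_derivative_test(lambda x: x ** 4, 0.0))  # test fails
```

Applied to Examples 5.3.1 and 5.3.2, this reports the local maximum at $\pi/4$, the local minimum at $5\pi/4$, and the failure of the test for $x^4$ at zero.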
Exercises 5.3
Find all local maximum and minimum points by the second derivative test, when possible.
Ex 5.3.1$\ds y=x^2-x$ (answer)
Ex 5.3.2$\ds y=2+3x-x^3$ (answer)
Ex 5.3.3$\ds y=x^3-9x^2+24x$ (answer)
Ex 5.3.4$\ds y=x^4-2x^2+3$ (answer)
Ex 5.3.5$\ds y=3x^4-4x^3$ (answer)
Ex 5.3.6$\ds y=(x^2-1)/x$ (answer)
Ex 5.3.7$\ds y=3x^2-(1/x^2)$ (answer)
Ex 5.3.8$y=\cos(2x)-x$ (answer)
Ex 5.3.9$\ds y = 4x+\sqrt{1-x}$ (answer)
Ex 5.3.10$\ds y = (x+1)/\sqrt{5x^2 + 35}$ (answer)
Ex 5.3.11$\ds y= x^5 - x$ (answer)
Ex 5.3.12$\ds y = 6x +\sin 3x$ (answer)
Ex 5.3.13$\ds y = x+ 1/x$ (answer)
Ex 5.3.14$\ds y = x^2+ 1/x$ (answer)
Ex 5.3.15$\ds y = (x+5)^{1/4}$ (answer)
Ex 5.3.16$\ds y = \tan^2 x$ (answer)
Ex 5.3.17$\ds y =\cos^2 x - \sin^2 x$ (answer)
Ex 5.3.18$\ds y = \sin^3 x$ (answer)
|
When we first considered what the derivative of a vector function might mean, there was really not much difficulty in understanding either how such a thing might be computed or what it might measure. In the case of functions of two variables, things are a bit harder to understand. If we think of a function of two variables in terms of its graph, a surface, there is a more-or-less obvious derivative-like question we might ask, namely, how "steep'' is the surface. But it's not clear that this has a simple answer, nor how we might proceed. We will start with what seem to be very small steps toward the goal; surprisingly, it turns out that these simple ideas hold the keys to a more general understanding.
Imagine a particular point on a surface; what might we be able to say about how steep it is? We can limit the question to make it more familiar: how steep is the surface in a particular direction? What does this even mean? Here's one way to think of it: Suppose we're interested in the point $(a,b,c)$. Pick a straight line in the $x$-$y$ plane through the point $(a,b,0)$, then extend the line vertically into a plane. Look at the intersection of the plane with the surface. If we pay attention to just the plane, we see the chosen straight line where the $x$-axis would normally be, and the intersection with the surface shows up as a curve in the plane. Figure 16.3.1 shows the parabolic surface from figure 16.1.2, exposing its cross-section above the line $x+y=1$.
In principle, this is a problem we know how to solve: find the slope of a curve in a plane. Let's start by looking at some particularly easy lines: those parallel to the $x$ or $y$ axis. Suppose we are interested in the cross-section of $f(x,y)$ above the line $y=b$. If we substitute $b$ for $y$ in $f(x,y)$, we get a function in one variable, describing the height of the cross-section as a function of $x$. Because $y=b$ is parallel to the $x$-axis, if we view it from a vantage point on the negative $y$-axis, we will see what appears to be simply an ordinary curve in the $x$-$z$ plane.
Consider again the parabolic surface $f(x,y)=x^2+y^2$. The cross-section above the line $y=2$ consists of all points $(x,2,x^2+4)$. Looking at this cross-section from somewhere on the negative $y$ axis, we see what appears to be just the curve $f(x)=x^2+4$. At any point on the cross-section, $(a,2,a^2+4)$, the steepness of the surface in the direction of the line $y=2$ is simply the slope of the curve $f(x)=x^2+4$, namely $2x$. Figure 16.3.2 shows the same parabolic surface as before, but now cut by the plane $y=2$. The left graph shows the cut-off surface, the right shows just the cross-section, looking up from the negative $y$-axis toward the origin.
If, say, we're interested in the point $(-1,2,5)$ on the surface, then the slope in the direction of the line $y=2$ is $2x=2(-1)=-2$. This means that starting at $(-1,2,5)$ and moving on the surface, above the line $y=2$, in the direction of increasing $x$ values, the surface goes down; of course moving in the opposite direction, toward decreasing $x$ values, the surface will rise.
If we're interested in some other line $y=k$, there is really no change in the computation. The equation of the cross-section above $y=k$ is $x^2+k^2$ with derivative $2x$. We can save ourselves the effort, small as it is, of substituting $k$ for $y$: all we are in effect doing is temporarily assuming that $y$ is some constant. With this assumption, the derivative ${d\over dx}(x^2+y^2)=2x$. To emphasize that we are only temporarily assuming $y$ is constant, we use a slightly different notation: ${\partial\over \partial x}(x^2+y^2)=2x$; the "$\partial$'' reminds us that there are more variables than $x$, but that only $x$ is being treated as a variable. We read the equation as "the partial derivative of $(x^2+y^2)$ with respect to $x$ is $2x$.'' A convenient alternate notation for the partial derivative of $f(x,y)$ with respect to $x$ is $f_x(x,y)$.
Example 16.3.1 The partial derivative with respect to $x$ of $x^3+3xy$ is $3x^2+3y$. Note that the partial derivative includes the variable $y$, unlike the example $x^2+y^2$. It is somewhat unusual for the partial derivative to depend on a single variable; this example is more typical.
Of course, we can do the same sort of calculation for lines parallel to the $y$-axis. We temporarily hold $x$ constant, which gives us the equation of the cross-section above a line $x=k$. We can then compute the derivative with respect to $y$; this will measure the steepness of the curve in the $y$ direction.
Example 16.3.2 The partial derivative with respect to $y$ of $f(x,y)=\sin(xy)+3xy$ is $$f_y(x,y)={\partial\over\partial y}\sin(xy)+3xy=\cos(xy){\partial\over\partial y}(xy)+ 3x=x\cos(xy)+3x. $$
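The computation in Example 16.3.2 can be spot-checked numerically: hold the other variable fixed and apply an ordinary difference quotient. A small Python sketch (the helper names and the sample point are mine):

```python
import math

def partial_x(f, x, y, h=1e-6):
    """Central difference in x with y held constant."""
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def partial_y(f, x, y, h=1e-6):
    """Central difference in y with x held constant."""
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

f = lambda x, y: math.sin(x * y) + 3 * x * y
fy_exact = lambda x, y: x * math.cos(x * y) + 3 * x  # result of Example 16.3.2

x0, y0 = 1.2, 0.7
print(abs(partial_y(f, x0, y0) - fy_exact(x0, y0)) < 1e-8)  # True
```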
So far, using no new techniques, we have succeeded in measuring the slope of a surface in two quite special directions. For functions of one variable, the derivative is closely linked to the notion of tangent line. For surfaces, the analogous idea is the tangent plane—a plane that just touches a surface at a point, and has the same "steepness'' as the surface in all directions. Even though we haven't yet figured out how to compute the slope in all directions, we have enough information to find tangent planes. Suppose we want the plane tangent to a surface at a particular point $(a,b,c)$. If we compute the two partial derivatives of the function for that point, we get enough information to determine two lines tangent to the surface, both through $(a,b,c)$ and both tangent to the surface in their respective directions. These two lines determine a plane, that is, there is exactly one plane containing the two lines: the tangent plane. Figure 16.3.3 shows (part of) two tangent lines at a point, and the tangent plane containing them.
How can we discover an equation for this tangent plane? We know a point on the plane, $(a,b,c)$; we need a vector normal to the plane. If we can find two vectors, one parallel to each of the tangent lines we know how to find, then the cross product of these vectors will give the desired normal vector.
How can we find vectors parallel to the tangent lines? Consider first the line tangent to the surface above the line $y=b$. A vector $\langle u,v,w\rangle$ parallel to this tangent line must have $y$ component $v=0$, and we may as well take the $x$ component to be $u=1$. The ratio of the $z$ component to the $x$ component is the slope of the tangent line, precisely what we know how to compute. The slope of the tangent line is $f_x(a,b)$, so $$ f_x(a,b)={w\over u} ={w\over1} = w.$$ In other words, a vector parallel to this tangent line is $\langle 1,0,f_x(a,b)\rangle$, as shown in figure 16.3.4. If we repeat the reasoning for the tangent line above $x=a$, we get the vector $\langle 0,1,f_y(a,b)\rangle$.
Now to find the desired normal vector we compute the cross product, $\langle 0,1,f_y\rangle\times\langle 1,0,f_x\rangle= \langle f_x,f_y,-1\rangle$. From our earlier discussion of planes, we can write down the equation we seek: $f_x(a,b)x+f_y(a,b)y-z=k$, and $k$ as usual can be computed by substituting a known point: $f_x(a,b)(a)+f_y(a,b)(b)-c=k$. There are various more-or-less nice ways to write the result: $$\displaylines{ f_x(a,b)x+f_y(a,b)y-z=f_x(a,b)a+f_y(a,b)b-c\cr f_x(a,b)x+f_y(a,b)y-f_x(a,b)a-f_y(a,b)b+c=z\cr f_x(a,b)(x-a)+f_y(a,b)(y-b)+c=z\cr f_x(a,b)(x-a)+f_y(a,b)(y-b)+f(a,b)=z\cr }$$
Example 16.3.3 Find the plane tangent to $x^2+y^2+z^2=4$ at $(1,1,\sqrt2)$. This point is on the upper hemisphere, so we use $\ds f(x,y)=\sqrt{4-x^2-y^2}$. Then $\ds f_x(x,y)=-x(4-x^2-y^2)^{-1/2}$ and $\ds f_y(x,y)=-y(4-x^2-y^2)^{-1/2}$, so $f_x(1,1)=f_y(1,1)=-1/\sqrt2$ and the equation of the plane is $$z=-{1\over\sqrt2}(x-1)-{1\over\sqrt2}(y-1)+\sqrt2.$$ The hemisphere and this tangent plane are pictured in figure 16.3.3.
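Example 16.3.3 can be verified numerically as well: estimate $f_x(a,b)$ and $f_y(a,b)$ by difference quotients and assemble the plane $z=f_x(a,b)(x-a)+f_y(a,b)(y-b)+f(a,b)$. A sketch (the function names are mine):

```python
import math

def tangent_plane(f, a, b, h=1e-6):
    """Return (fx, fy, c) so that the tangent plane at (a, b, f(a,b)) is
    z = fx*(x - a) + fy*(y - b) + c."""
    fx = (f(a + h, b) - f(a - h, b)) / (2 * h)  # central difference in x
    fy = (f(a, b + h) - f(a, b - h)) / (2 * h)  # central difference in y
    return fx, fy, f(a, b)

# Upper hemisphere of x^2 + y^2 + z^2 = 4 at the point (1, 1, sqrt(2)):
f = lambda x, y: math.sqrt(4 - x * x - y * y)
fx, fy, c = tangent_plane(f, 1.0, 1.0)
print(fx, fy, c)  # both slopes are approximately -1/sqrt(2), c = sqrt(2)
```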
So it appears that to find a tangent plane, we need only find two quite simple ordinary derivatives, namely $f_x$ and $f_y$. This is true if the tangent plane exists. It is, unfortunately, not always the case that if $f_x$ and $f_y$ exist there is a tangent plane. Consider the function $xy^2/(x^2+y^4)$ pictured in figure 16.2.1. This function has value 0 when $x=0$ or $y=0$, and we can "plug the hole'' by agreeing that $f(0,0)=0$. Now it's clear that $f_x(0,0)=f_y(0,0)=0$, because in the $x$ and $y$ directions the surface is simply a horizontal line. But it's also clear from the picture that this surface does not have anything that deserves to be called a "tangent plane'' at the origin, certainly not the $x$-$y$ plane containing these two tangent lines.
When does a surface have a tangent plane at a particular point? What we really want from a tangent plane, as from a tangent line, is that the plane be a "good'' approximation of the surface near the point. Here is how we can make this precise:
Definition 16.3.4 Let $\Delta x=x-x_0$, $\Delta y=y-y_0$, and $\Delta z=z-z_0$ where $z_0=f(x_0,y_0)$. The function $z=f(x,y)$ is differentiable at $(x_0,y_0)$ if $$\Delta z=f_x(x_0,y_0)\Delta x+f_y(x_0,y_0)\Delta y+\epsilon_1\Delta x + \epsilon_2\Delta y,$$ and both $\epsilon_1$ and $\epsilon_2$ approach 0 as $(x,y)$ approaches $(x_0,y_0)$.
This definition takes a bit of absorbing. Let's rewrite the central equation a bit: $$\eqalignno{ z&=f_x(x_0,y_0)(x-x_0)+f_y(x_0,y_0)(y-y_0)+f(x_0,y_0)+ \epsilon_1\Delta x + \epsilon_2\Delta y.& (16.3.1)\cr }$$ The first three terms on the right are the equation of the tangent plane, that is, $$f_x(x_0,y_0)(x-x_0)+f_y(x_0,y_0)(y-y_0)+f(x_0,y_0)$$ is the $z$-value of the point on the plane above $(x,y)$. Equation 16.3.1 says that the $z$-value of a point on the surface is equal to the $z$-value of a point on the plane plus a "little bit,'' namely $\epsilon_1\Delta x + \epsilon_2\Delta y$. As $(x,y)$ approaches $(x_0,y_0)$, both $\Delta x$ and $\Delta y$ approach 0, so this little bit $\epsilon_1\Delta x + \epsilon_2\Delta y$ also approaches 0, and the $z$-values on the surface and the plane get close to each other. But that by itself is not very interesting: since the surface and the plane both contain the point $(x_0,y_0,z_0)$, the $z$ values will approach $z_0$ and hence get close to each other whether the tangent plane is "tangent'' to the surface or not. The extra condition in the definition says that as $(x,y)$ approaches $(x_0,y_0)$, the $\epsilon$ values approach 0—this means that $\epsilon_1\Delta x + \epsilon_2\Delta y$ approaches 0 much, much faster, because $\epsilon_1\Delta x$ is much smaller than either $\epsilon_1$ or $\Delta x$. It is this extra condition that makes the plane a tangent plane.
We can see that the extra condition on $\epsilon_1$ and $\epsilon_2$ is just what is needed if we look at partial derivatives. Suppose we temporarily fix $y=y_0$, so $\Delta y=0$. Then the equation from the definition becomes $$\Delta z=f_x(x_0,y_0)\Delta x+\epsilon_1\Delta x$$ or $${\Delta z\over\Delta x}=f_x(x_0,y_0)+\epsilon_1.$$ Now taking the limit of the two sides as $\Delta x$ approaches 0, the left side turns into the partial derivative of $z$ with respect to $x$ at $(x_0,y_0)$, or in other words $f_x(x_0,y_0)$, and the right side does the same, because as $(x,y)$ approaches $(x_0,y_0)$, $\epsilon_1$ approaches 0. Essentially the same calculation works for $f_y$.
Almost all of the functions we will encounter are differentiable at points we will be interested in, and often at all points. This is usually because the functions satisfy the hypotheses of the following theorem.
Theorem 16.3.5 If $f(x,y)$ and its partial derivatives are continuous at a point $(x_0,y_0)$, then $f$ is differentiable there.
Exercises 16.3
Ex 16.3.1 Find $f_x$ and $f_y$ where $\ds f(x,y)=\cos(x^2y)+y^3$. (answer)
Ex 16.3.2 Find $f_x$ and $f_y$ where $\ds f(x,y)={xy\over x^2+y}$. (answer)
Ex 16.3.3 Find $f_x$ and $f_y$ where $\ds f(x,y)=e^{x^2+y^2}$. (answer)
Ex 16.3.4 Find $f_x$ and $f_y$ where $\ds f(x,y)=xy\ln(xy)$. (answer)
Ex 16.3.5 Find $f_x$ and $f_y$ where $\ds f(x,y)=\sqrt{1-x^2-y^2}$. (answer)
Ex 16.3.6 Find $f_x$ and $f_y$ where $\ds f(x,y)=x\tan(y)$. (answer)
Ex 16.3.7 Find $f_x$ and $f_y$ where $\ds f(x,y)={1\over xy}$. (answer)
Ex 16.3.8 Find an equation for the plane tangent to $\ds 2x^2+3y^2-z^2=4$ at $(1,1,-1)$. (answer)
Ex 16.3.9 Find an equation for the plane tangent to $\ds f(x,y)=\sin(xy)$ at $(\pi,1/2,1)$. (answer)
Ex 16.3.10 Find an equation for the plane tangent to $\ds f(x,y)=x^2+y^3$ at $(3,1,10)$. (answer)
Ex 16.3.11 Find an equation for the plane tangent to $\ds f(x,y)=x\ln(xy)$ at $(2,1/2,0)$. (answer)
Ex 16.3.12 Find an equation for the line normal to $\ds x^2+4y^2=2z$ at $(2,1,4)$. (answer)
Ex 16.3.13 Explain in your own words why, when taking a partial derivative of a function of multiple variables, we can treat the variables not being differentiated as constants.
Ex 16.3.14 Consider a differentiable function, $f(x,y)$. Give physical interpretations of the meanings of $f_x(a,b)$ and $f_y(a,b)$ as they relate to the graph of $f$.
Ex 16.3.15 In much the same way that we used the tangent line to approximate the value of a function from single variable calculus, we can use the tangent plane to approximate a function from multivariable calculus. Consider the tangent plane found in Exercise 11. Use this plane to approximate $f(1.98, 0.4)$.
Ex 16.3.16 Suppose that one of your colleagues has calculated the partial derivatives of a given function, and reported to you that $f_x(x,y)=2x+3y$ and that $f_y(x,y)=4x+6y$. Do you believe them? Why or why not? If not, what answer might you have accepted for $f_y$?
Ex 16.3.17 Suppose $f(t)$ and $g(t)$ are single variable differentiable functions. Find $\partial z/\partial x$ and $\partial z/\partial y$ for each of the following two variable functions.
a. $z=f(x)g(y)$
b. $z=f(xy)$
c. $z=f(x/y)$
|
How to Model Fluid Friction in Joints with COMSOL Multiphysics®
Various machinery, such as engines, pumps, and turbines, employ components that transmit the load between the solid parts that are in relative motion. Common examples are piston rings, cams, gear teeth, and (of course) bearings. Often, these components are lubricated by maintaining an oil film between the two solid parts to minimize the friction and wear. In this blog post, we look at methods for modeling the fluid friction in lubricated joints.
Modes of Lubrication
Depending on the loads between the two contact surfaces and their geometry, the following different regimes of lubrication can be observed:
- Fluid-film lubrication: The load is fully supported by the fluid film, such that the contacting surfaces are fully separated by the fluid film.
- Elastohydrodynamic lubrication: Observed between nonconforming surfaces or under high-load conditions, where the bodies undergo significant elastic strain at the contact. A fluid film is still maintained between the deforming surfaces due to the pumping action of the relative motion between the surfaces.
- Boundary lubrication: The bodies come into closer contact at their asperities, and hydrodynamic effects are negligible.
- Mixed lubrication: A regime between full-film elastohydrodynamic lubrication and boundary lubrication, where the lubricant film alone is not enough to separate the bodies completely, but hydrodynamic effects are considerable.
In this blog post, we will focus on the full-film lubrication regimes, because joints form conforming surfaces and the pressure is not high enough to cause significant deformation.
Calculating the Viscous Drag Force Between Plates Separated by a Lubricant
Consider two flat plates separated by a lubricant film of thickness $h$, as shown in the figure below. The bottom plate is kept fixed and the top plate moves with a horizontal velocity, $U$.

Shear flow between two flat plates.

For a Couette flow without a pressure gradient, the velocity profile of the lubricant in the thickness direction $z$ can be written as:

$$u(z) = U\frac{z}{h}$$

Therefore, the viscous shear stress in the lubricant is:

$$\tau = \mu\frac{\partial u}{\partial z} = \frac{\mu U}{h}$$

which is independent of the thickness coordinate. Here, $\mu$ is the dynamic viscosity of the lubricant.

Thus, the viscous shear drag on the top plate is given by:

$$F_f = \tau A = \frac{\mu U A}{h}$$

where $A$ is the area of the plate.
In the case of joints, the lubricant flow also has a pressure gradient due to the varying thickness of the film. In such a case, the velocity profile changes to:

$$u(z) = U\frac{z}{h} + \frac{1}{2\mu}\frac{\partial p}{\partial x}\left(z^2 - zh\right)$$

where $x$ is the coordinate along the direction of the flow.

In this case, the viscous shear stress at the top plate is given by:

$$\tau\big|_{z=h} = \frac{\mu U}{h} + \frac{h}{2}\frac{\partial p}{\partial x}$$

and the viscous shear drag on the same is:

$$F_f = \int_A \left(\frac{\mu U}{h} + \frac{h}{2}\frac{\partial p}{\partial x}\right)dA$$
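As a quick numerical illustration of the drag on the moving plate (the function name and all parameter values below are illustrative, not from the text):

```python
def couette_drag(mu, U, h, A, dpdx=0.0):
    """Viscous drag on the moving plate: F = (mu*U/h + (h/2)*dp/dx) * A.

    mu:   dynamic viscosity [Pa*s]
    U:    plate velocity [m/s]
    h:    film thickness [m]
    A:    plate area [m^2]
    dpdx: pressure gradient along the flow direction [Pa/m]
    """
    tau_top = mu * U / h + 0.5 * h * dpdx  # shear stress at the moving plate
    return tau_top * A

# Engine-oil-like film: mu = 0.1 Pa*s, U = 1 m/s, h = 10 um, A = 1e-3 m^2
print(couette_drag(0.1, 1.0, 10e-6, 1e-3))             # 10 N, no pressure gradient
print(couette_drag(0.1, 1.0, 10e-6, 1e-3, dpdx=2e6))   # slightly larger with dp/dx
```

Note how strongly the drag scales with the inverse film thickness: halving $h$ doubles the Couette contribution.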
Determining the Viscous Force in Lubricated Joints
To understand the viscous force in joints, let us take the example of a hinge joint. The hinge joint is a joint that allows the relative rotation about the joint axis between the two components forming the joint. In general, as opposed to the above example, both of the components forming the hinge joint can be in motion; for instance, a joint between the connecting rod and crank pin of a reciprocating engine. A general scenario is shown in the figure below, where the components are rotating about their center at speeds $\Omega_1$ and $\Omega_2$, respectively. If the inner radius of component 1 is $R_1$ and the outer radius of component 2 is $R_2$, then the surfaces are moving at speeds $\Omega_1 R_1$ and $\Omega_2 R_2$, respectively.

Lubricated hinge joint.
At any location, the flow is locally similar to the one described for the flat plate. Therefore, the velocity profile at any circumferential location of the film in the tangential direction is given by:

$$u_t(z) = \left(\Omega_1R_1 - \Omega_2R_2\right)\frac{z}{h} + \Omega_2R_2 + \frac{1}{2\mu}\frac{\partial p}{\partial x}\left(z^2 - zh\right)$$

where the subscript $t$ represents the tangential component and $z$ is measured from component 2.

The viscous shear stress in the lubricant then is:

$$\tau = \mu\frac{\Omega_1R_1 - \Omega_2R_2}{h} + \frac{1}{2}\frac{\partial p}{\partial x}\left(2z - h\right)$$

Assuming that $\Omega_1 R_1 > \Omega_2 R_2$, the shear stress at surface 1 is:

$$\tau_1 = \mu\frac{\Omega_1R_1 - \Omega_2R_2}{h} + \frac{h}{2}\frac{\partial p}{\partial x}$$

and at surface 2 is:

$$\tau_2 = \mu\frac{\Omega_1R_1 - \Omega_2R_2}{h} - \frac{h}{2}\frac{\partial p}{\partial x}$$

Then, the total drag force on both surfaces is obtained by integrating the shear stress on each surface as:

$$\begin{aligned}
F_{f1} &= \int_0^L\int_0^{2\pi}\tau_1\,dx\,d(R_1\phi)=\mu R_1(\Omega_1R_1-\Omega_2R_2)\int_0^L\int_0^{2\pi}\frac{dx}{h}d\phi+\int_0^L\int_0^{2\pi}\frac{h}{2}\frac{\partial p}{\partial \phi}dx d\phi\\
F_{f2} &= \int_0^L\int_0^{2\pi}\tau_2\,dx\,d(R_2\phi)=\mu R_2(\Omega_1R_1-\Omega_2R_2)\int_0^L\int_0^{2\pi}\frac{dx}{h}d\phi-\int_0^L\int_0^{2\pi}\frac{h}{2}\frac{\partial p}{\partial \phi}dx d\phi
\end{aligned}$$
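As a sanity check on the drag integrals, consider the special case of a stationary component 2 ($\Omega_2=0$), a concentric film of uniform thickness $c$, and no pressure gradient; the drag then reduces to Petroff's formula, $F_f = 2\pi\mu R^2 L\,\Omega/c$. A short sketch (the function name and parameter values are illustrative only):

```python
import math

def petroff_drag(mu, R, L, Omega, c):
    """Viscous drag on the rotating journal for a concentric film of uniform
    thickness c, no pressure gradient: integrate tau = mu*Omega*R/c over
    the surface area 2*pi*R*L."""
    tau = mu * Omega * R / c          # uniform shear stress in the film
    return tau * (2 * math.pi * R * L)

# mu = 0.05 Pa*s, R = 0.02 m, L = 0.03 m, Omega = 300 rad/s, c = 50 um
print(petroff_drag(0.05, 0.02, 0.03, 300.0, 50e-6))  # about 22.6 N
```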
Modeling Lubricated Joints in COMSOL Multiphysics®
Joints are available in the Multibody Dynamics Module, an add-on to the Structural Mechanics Module and the COMSOL Multiphysics® software. These joints can be either rigid or flexible. Rigid joints, as the name suggests, do not allow any relative motion between the components other than the joint degrees of freedom (DOFs). In flexible joints, you can specify the stiffness between the components for the relative motions other than the joint DOFs. This stiffness can be due to the flexibility of the components themselves, the presence of a fluid film between the area forming the joint, or a combination of both. It is the effect of the fluid film that we wish to address here.
A lubricant in the joint supports the joint forces through the film pressure, thus avoiding structure-to-structure contact and reducing the friction between the joint components. Although not as large as the contact friction force, the shearing of the lubricant due to the relative motion in the joints offers a resistance to the relative motion of both of the components forming the joint. This resistance is what we call fluid friction in joints. Therefore, a fluid film in the joint applies two types of forces on the joint components:

- A force to support the load on the joint
- A viscous shear force to resist the relative motion of the components
Therefore, the simplest way of accounting for the support forces of the fluid film is by using dynamic characteristics (stiffness and damping coefficient) of the fluid film as the joint stiffness and viscous damping in an elastic joint. Often, these characteristics are known through some experiments. For simple cases, analytical expressions as a function of the eccentricity are also available for the dynamic characteristics of the bearing. COMSOL Multiphysics also provides a way to compute the dynamic characteristics of the lubricant film in the joint via the
Hydrodynamic Bearing interface. This interface is available in the Rotordynamics Module (also an add-on to COMSOL Multiphysics and the Structural Mechanics Module), which is used for simulating the flow in fluid-film bearings. In addition, to account for the viscous resistance, a joint force (or moment) should be applied to the relative motion in the joint. The method to calculate the viscous resistance is explained in the previous section.
Calculating the dynamic coefficients and viscous resistance for each joint and using them in a multibody simulation can be a rather tedious process. There is an easier way of modeling joint lubrication in the COMSOL® software. A multiphysics coupling feature called Solid Bearing Coupling is provided to combine the hydrodynamic bearing simulations directly with the multibody and structural mechanics ones. The coupling feature transfers the motion of the structure to the Hydrodynamic Bearing interface to compute the change in the film thickness, which affects the pressure distribution in the film. The pressure and shear forces in the film are then transferred back to the structure as an external force, making it a bidirectional coupling.

An Example of Fluid Friction in Joints
To understand the modeling process explained above, let’s take a look at an example. Consider a piston and cylinder of a reciprocating engine. The walls of the piston and cylinder are separated by a thin lubricant film, as shown in the figure below.
Lubrication of a piston reciprocating in a cylinder.
The piston is connected to the connecting rod, which is connected to the crankpin on the other end. The schematic of the whole system is shown in the figure below:
Schematic of a slider crank system.
The radius of the crank is $r_c$, and it rotates at an angular speed $\Omega$. The length of the connecting rod is $l_c$. From the geometric considerations, the vertical position of the piston from the crank center is given by

$$z_p = \sqrt{l_c^2 - r_c^2\sin^2\theta} - r_c\cos\theta$$

where $\theta = \Omega t$.

In the above expression, $\theta$ is measured from the crank position when the piston is at bottom dead center (BDC).

The initial position of the piston (at BDC) from the crank center is

$$z_{p,0} = l_c - r_c$$

Therefore, the vertical displacement of the piston is given by:

$$u_p = z_p - z_{p,0} = \sqrt{l_c^2 - r_c^2\sin^2\theta} - r_c\cos\theta - \left(l_c - r_c\right)$$

Note that at $t = 0$, the displacement and velocity of the piston are 0.
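These kinematic relations can be sketched in a few lines of Python (the function names and parameter values are illustrative only):

```python
import math

def piston_position(theta, r_c, l_c):
    """Vertical piston position from the crank center; theta measured from BDC."""
    return math.sqrt(l_c ** 2 - (r_c * math.sin(theta)) ** 2) - r_c * math.cos(theta)

def piston_displacement(theta, r_c, l_c):
    """Displacement from the initial (BDC) position l_c - r_c."""
    return piston_position(theta, r_c, l_c) - (l_c - r_c)

r_c, l_c, Omega = 0.05, 0.15, 100.0  # m, m, rad/s -- illustrative values

# Displacement is 0 at BDC (theta = 0) and 2*r_c at TDC (theta = pi):
print(piston_displacement(0.0, r_c, l_c))      # ~0.0
print(piston_displacement(math.pi, r_c, l_c))  # ~0.1, i.e. 2*r_c

# The velocity at t = 0 (central difference in time) is also ~0:
dt = 1e-6
v0 = (piston_displacement(Omega * dt, r_c, l_c)
      - piston_displacement(-Omega * dt, r_c, l_c)) / (2 * dt)
print(abs(v0) < 1e-6)  # True
```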
Let us further assume that a vertical force due to gas pressure is acting on the piston, varying with the crank angle such that the force is zero when the piston is at bottom dead center and maximum when it is at top dead center. In a real scenario, this force will have a more complex dependence on $\theta$. The connecting rod supports this load through its reaction on the piston. Interestingly, this reaction, $F_c$, is always along the length of the connecting rod, as shown in the figure below.

Free body diagram of a piston.

Thus, the vertical component of the reaction, $F_a$, supports the gas pressure on the piston. However, there is an additional horizontal component, $F_r$, acting on the piston, which pushes the piston against the cylinder walls. This force, whose magnitude follows from the geometry of the connecting rod as

$$F_r = F_a\,\frac{r_c\sin\theta}{\sqrt{l_c^2 - r_c^2\sin^2\theta}},$$

also needs to be applied on the piston as it moves in the cylinder.
To simulate such a problem, we start by creating a geometry of the piston. We can use the Linear Elastic Material model in the Solid Mechanics interface to model the flexibility of the piston. The motion of the piston is prescribed using the Rigid Connector feature, also available in the Solid Mechanics interface. Motion of the piston in the vertical direction (z direction) is prescribed as obtained from the geometric considerations above. Also, the piston cannot rotate about its axis (z-axis). Lateral translation and tilting of the piston are set free; these will be obtained from the force balance on the piston. These conditions can be specified using a setting, shown in the image below, in the Rigid Connector feature.

Screenshot showing the prescribed motion and displacement of the piston.
The horizontal load on the piston coming from the connecting rod is modeled using the
Applied Force subfeature on the Rigid Connector feature.

Screenshot showing the Applied Force feature and its settings.
Computation of the fluid friction on the piston requires the evaluation of resistive force to the motion of the piston from the lubricant film. Although we could model the cylinder and piston together (with the lubricant between them), it is not necessary to model the cylinder because it remains stationary. The stationary state of the cylinder can be directly specified in the
Hydrodynamic Bearing interface.
Due to lateral motion of the piston, the gap between the piston and cylinder changes, which will effectively change the lubricant’s thickness. Thus, the pressure distribution in the lubricant film also depends on the relative motion of the contacting boundaries. Under high loads, deformation in the piston can also change the film thickness. If the ratio of the pressure in the film to the effective stiffness of the contacting boundaries is small, the deformation during the contact can be neglected. Otherwise, deformation of the contacting boundaries needs to be considered because it plays a significant role in determining the effective friction between the contacting boundaries. Simulation of this class of problem is classified as
elastohydrodynamic (EHD) simulation. In this case, since the pressure levels are expected to be low, film thickness will largely be affected by the lateral motion of the piston rather than its deformation.
To model the lubricant film, we make a thin-film approximation of the Navier-Stokes and continuity equations to get the Reynolds equation, which is solved on the surface instead of the fluid domain. The
Hydrodynamic Bearing interface solves the Reynolds equation and can be used for getting the pressure distribution in the film on the piston surface. Parameters required for the modeling of the lubricant film are the initial film thickness, viscosity and density of the lubricant, and motion of the contacting boundaries. All of this information is specified in the Hydrodynamic Journal Bearing feature of the Hydrodynamic Bearing interface.
Note that the Hydrodynamic Journal Bearing feature, in general, can model a journal bearing where the journal undergoes an axial rotation in addition to its translational motion. This feature can also be used in cases where there is no axial rotation of the journal, like the piston in the present model. The motion of the contacting boundaries can be easily passed to the Hydrodynamic Journal Bearing feature using the built-in Solid-Bearing Coupling multiphysics feature. As mentioned in the previous section, this feature automatically applies the forces due to the pressure and shear of the lubricant in the film on the structural boundary.

Visualizing the Simulation Results
The animations below show the pressure distribution in the film and the stress in the piston as the part performs the reciprocating motion. To highlight the lateral motion of the piston, the reciprocating motion of the piston is suppressed.
The von Mises stress in the piston (left) and the pressure in the film (right).
You can see the flipping of the stress after the half cycle of the piston. Note that during the upward stroke, as the connecting rod pushes the piston on the right wall, the pressure increases on the right side of the piston and decreases on the left side. During the downward stroke, the direction is reversed.
The time variation of force on the piston in the vertical direction due to the shearing of the lubricant film is given in the figure below.
Piston velocity and viscous force.
Note that the viscous force on the piston is always opposite of its velocity. The viscous force coefficient (damping coefficient) is given by the ratio of the viscous force to the piston velocity. In the present case, it is approximately 8 N*s/m.
Reciprocating Engine Example
The Reciprocating Engine with Hydrodynamic Bearings model, available in the Application Library with the Rotordynamics Module, demonstrates the steps of combining the structural or multibody simulation with the hydrodynamic bearings. In this example, a single-cylinder reciprocating engine supported through hydrodynamic journal bearings on the foundation is considered. The dynamics of the various engine components are analyzed when gas pressure is applied on the piston. During the engine’s operation, the load on the piston is transferred to the foundation through the connecting rod, crank, and bearings.
The pressure in the bearings and the stress in the foundations are analyzed to understand the bearing performance and the stress variation on the foundation during one operating cycle. The stress in the crankshaft is also analyzed. The animation below shows the stress variation in the crankshaft and foundation as well as the resulting bearing pressure distribution during its operation. Bearing pressure is highest during the downward stroke of the piston under the gas pressure.
Reciprocating engine: stresses in the crankshaft and foundation as well as pressure in the bearing.
The viscous torque on the bearing is plotted for various cycles of operation. As the piston reaches the top dead center, the cylinder pressure is highest. At this instant, the pressure in the bearing is also very high and the eccentricity of the journal is at maximum. This creates high friction on the journal, which appears as sharp negative peaks in the viscous torque plot. Note that the viscous torque is approximately 10% of the loading torque (approximately 16 Nm), which is quite significant and cannot be ignored when computing the losses in the engine.
The viscous torque in the bearing (left) as well as the driving and loading torques on the crankshaft (right).

Next Steps
Learn more about the specialized features for modeling the characteristics of thin-film lubricant flow available in the Rotordynamics Module by clicking the button below.
Further Reading
Read more about modeling bearings on the COMSOL Blog.
|
Let $g \colon [0,\infty) \to \mathbb{R}$ be a monotone function.
Suppose $g$ only attains positive values and is (not necessarily strictly) decreasing.
Does the sequence of series $$ s_k := \sum_{n=1}^\infty 2^{-k} g(n 2^{-k}) $$ converge to $$ \int_0^\infty g(t) \, dt $$ as $k \to \infty$?
Since $g$ is positive, each series either converges absolutely or "converges" to $\infty$. Since we always have $s_k \leq \int_0^\infty g(t) dt$, let's assume that the series converges for all $k \in \mathbb{N}$. Note that the series approximates the improper integral like a Riemann sum with interval length $2^{-k}$.
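A quick numerical sanity check of the question (one concrete choice, not a proof): for $g(t) = e^{-t}$, where the improper integral equals 1, the sums do creep up toward the integral from below.

```python
import math

# Numerical sanity check (one example, not a proof): for the positive,
# decreasing function g(t) = exp(-t), the right-endpoint sums
#     s_k = sum_{n>=1} 2^(-k) * g(n * 2^(-k))
# should increase with k toward the improper integral, which equals 1 here.

def s(k, g=lambda t: math.exp(-t)):
    h = 2.0 ** -k
    total, n = 0.0, 1
    while True:
        term = h * g(n * h)
        total += term
        if term < 1e-18:      # tail of exp(-t) is negligible past this point
            return total
        n += 1

vals = [s(k) for k in range(1, 11)]   # s_1 ... s_10, each below 1
```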
|
The identity$$E(\mathbf{r},t)=A(\mathbf{r},t)e^{i(\langle\omega\rangle t-\phi(\mathbf{r},t))}\tag{1}$$needs neither derivation nor justification; instead, it acts as an Ansatz for the electric field and as a definition for the pair of functions$$A(\mathbf{r},t)e^{-i\phi(\mathbf{r},t)}:=E(\mathbf{r},t)e^{-i\langle\omega\rangle t}.\tag{1'}$$Now, one of the weirder quirks of mathematics (and the mathematics of physics) is that you're free to define
whatever you want, no matter how weird it might seem a priori. The only requirement is that you then go on to use those definitions to do something useful.
In this specific case, $E(\mathbf r,t)$ is a complex-valued function of both position and time, and since complex numbers have an amplitude and a phase, proposing an Ansatz of the form $E(\mathbf{r},t)=C(\mathbf{r},t)e^{-i\varphi(\mathbf{r},t)}$ would carry pretty much zero new information.
Your Ansatz in $(1)$, however, is different, because you're saying something nontrivial about the structure of the phase, namely that it's of the form $\varphi(\mathbf{r},t) = \phi(\mathbf{r},t)-⟨\omega⟩t$, where the variation in $\phi$ is much smaller than the central frequency. Here's the first core point:
this is not guaranteed, i.e. there's plenty of imaginable waveforms for which there is no frequency $⟨\omega⟩$ such that that holds. (For examples, try superpositions of quasi-monochromatic waves at different central frequencies, or short pulses with a broad bandwidth and a strong chirp.) Similarly, there's no guarantee that you're going to be able to bound the time variation of the amplitude with reference to the central frequency. (Again, for examples, look to ultrashort pulses.)
Now, none of this is a problem, because we're not here to build a formalism that will handle every imaginable waveform. Instead, building on the auxiliary definitions in $(1)$, the bit that really does the work is the condition that$$\left[\frac{1}{A} \frac{\partial A}{ \partial t},\frac{\partial \phi}{ \partial t} \right] \ll \langle\omega\rangle, \tag 3$$and it is this that acts as the definition of quasi-monochromatic waves. Again, all you've done thus far is define things (in this case, the term quasi-monochromatic), so again, you don't actually need to justify anything*. Instead, if the author does their job correctly, the justification will come from showing that quasi-monochromatic waves, defined in this fashion, have useful properties (which they do).
*(extended) footnote:
OK, so maybe I lied a little at that point. You don't really need to justify the stuff you define, but if you're re-using terms that have previous connotations (or which partially overlap with such terms) then you do need to show that you're not radically changing those terms. For the case of quasi-monochromatic waves, you do need to show that your definition agrees with the intuitive understanding of the term.
There are two components to this, and they're both mathematical.
One is a link between the relevant time derivatives, $\frac{1}{A} \frac{\partial A}{ \partial t}$ and $\frac{\partial \phi}{ \partial t}$, and the width of the power spectrum of the function $A(\mathbf{r},t)e^{-i\phi(\mathbf{r},t)}$. The other is the fact that multiplying a function by $e^{-i\langle\omega\rangle t}$ in the time domain is equivalent to shifting its frequency-domain representation by $⟨\omega⟩$, which is a corollary of the convolution theorem.
Both can be shown, and they are fairly reasonable theorems, but I don't think the technicalities are that important here. Ultimately, those mathematical facts allow you to link your definition $(3)$ with the physical fact that the bandwidth of your waveform is much smaller than its central frequency, which is about as close to the intuitive concept of quasi-monochromatic as you can reasonably get.
So, in a way, this last bit is the justification for the definition.
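A small numerical illustration of the second fact (values here are arbitrary, chosen only so the carrier lands exactly on a DFT bin): multiplying by $e^{-i\omega_0 t}$ in the time domain shifts the spectrum down by $\omega_0$, turning a quasi-monochromatic field into a slowly varying envelope centered at zero frequency.

```python
import cmath, math

# Arbitrary demo signal: a slow Gaussian envelope A(t) riding on a carrier
# with exactly 40 cycles over the sampling window, so the carrier sits on
# DFT bin 40.  Demodulating with exp(-i*w0*t) moves the peak to bin 0.

N = 256
w0 = 2 * math.pi * 40                      # carrier: 40 cycles over the window
ts = [n / N for n in range(N)]
envelope = [math.exp(-((t - 0.5) / 0.2) ** 2) for t in ts]        # slow A(t)
field = [a * cmath.exp(1j * w0 * t) for a, t in zip(envelope, ts)]
baseband = [f * cmath.exp(-1j * w0 * t) for f, t in zip(field, ts)]

def peak_bin(x):
    """Index of the largest-magnitude coefficient of a naive O(N^2) DFT."""
    n_pts = len(x)
    mags = [abs(sum(xn * cmath.exp(-2j * math.pi * k * n / n_pts)
                    for n, xn in enumerate(x))) for k in range(n_pts)]
    return max(range(n_pts), key=mags.__getitem__)

# The field's spectrum peaks at bin 40; the demodulated signal peaks at bin 0.
```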
|
Fitting a logistic regression (
LR) model (with Age, Sex and Pclass as predictors) to the survival outcome in the Titanic data yields a summary such as this one:
##
## Call:
## glm(formula = Survived ~ Age + Sex + Pclass, family = binomial(link = logit),
## data = NoMissingAge)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -2.7303 -0.6780 -0.3953 0.6485 2.4657
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) 3.777013 0.401123 9.416 < 2e-16 ***
## Age -0.036985 0.007656 -4.831 1.36e-06 ***
## Sexmale -2.522781 0.207391 -12.164 < 2e-16 ***
## Pclass2 -1.309799 0.278066 -4.710 2.47e-06 ***
## Pclass3 -2.580625 0.281442 -9.169 < 2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 964.52 on 713 degrees of freedom
## Residual deviance: 647.28 on 709 degrees of freedom
## AIC: 657.28
##
## Number of Fisher Scoring iterations: 5
## 1 2 3 4 5 7
## -0.4716467 0.4224490 1.0794775 0.4005642 -0.3747404 -0.8821915
For “normal regression” we know that the value of \(\beta_j\) simply gives us \(\Delta y\) if \(x_j\) is increased by one unit.
In order to fully understand the exact meaning of the coefficients for a LR model we need to first warm up to the definition of a
link function and the concept of probability odds.
Using linear regression as a starting point \[
y_i = \beta_0 + \beta_1 x_{1,i} + \beta_2 x_{2,i} + \ldots +\beta_k x_{k,i} + \epsilon_i \] we modify the right hand side such that (i) the model is still basically a linear combination of the \(x_j\)s but (ii) the output is (like a probability) bounded between 0 and 1. This is achieved by “wrapping” a sigmoid function \(s(z) = 1/(1+\exp(-z))\) around the weighted sum of the \(x_j\)s: \[ y_i = s(\beta_0 + \beta_1 x_{1,i} + \beta_2 x_{2,i} + \ldots +\beta_k x_{k,i} + \epsilon_i) \] The sigmoid function, depicted below to the left, transforms the real axis to the interval \((0,1)\), so its output can be interpreted as a probability.
The inverse of the sigmoid is the
logit (depicted above to the right), which is defined as \(log(p/(1-p))\). For the case where p is a probability we call the ratio \(p/(1-p)\) the probability odds. Thus, the logit is the log of the odds and logistic regression models these log-odds as a linear combination of the values of x.
Finally, we can interpret the coefficients directly: the odds of a positive outcome are multiplied by a factor of \(\exp(\beta_j)\) for every unit change in \(x_j\). (In that light, logistic regression is reminiscent of linear regression with a logarithmically transformed dependent variable, which also leads to multiplicative rather than additive effects.)
As an example, the coefficient for Pclass 3 is -2.5806, which means that the odds of survival compared to the reference level Pclass 1 are multiplied by a factor of \(\exp(-2.5806) = 0.0757\), with all other input variables unchanged. How does the relative change in odds translate into probabilities? It is relatively easy to memorize the back-and-forth relationship between odds and probability \(p\): \[
odds = \frac{p}{1-p} \Leftrightarrow p = \frac{odds}{1 + odds} \] So the intercept 3.777 (= log-odds!) translates into odds of \(\exp(3.777) = 43.685\), which yields a base probability of survival (for female Pclass 1 of age 0) of \(43.685/(1 + 43.685) = 0.978\).

Why make life so complicated?
I am often asked by students why we have to go through the constant trouble of (i) exponentiating the coefficients, (ii) multiplying the odds and finally (iii) compute the resulting probabilities? Can we not simply transform the coefficients from the summary table such that we can read off their effects directly – just like in linear regression?
The trouble is that there is no simple way to translate the coefficients into an additive or even a multiplicative adjustment of the probabilities. (One simple way to see this: no such fixed adjustment could keep the output bounded between 0 and 1.) The following graph shows the effect of various coefficient values on a base/reference probability.
We immediately see that there is no straightforward additive or multiplicative effect that could be quantified.
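The arithmetic of the worked example above can be checked in a few lines (coefficients copied from the summary table, so small rounding differences are expected):

```python
import math

# Checking the intercept and Pclass 3 arithmetic from the text.
# Coefficients are copied from the printed glm() summary.

def odds_to_prob(odds):
    return odds / (1 + odds)

base_odds = math.exp(3.777013)         # intercept: female, Pclass 1, age 0
base_prob = odds_to_prob(base_odds)    # ~0.978

factor_p3 = math.exp(-2.580625)        # ~0.0757, odds multiplier for Pclass 3
prob_p3 = odds_to_prob(base_odds * factor_p3)   # survival prob drops to ~0.77
```

Note how a 92% reduction in the odds translates into a much smaller drop in probability (from roughly 0.98 to roughly 0.77): exactly the non-linearity the graph illustrates.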
|
I know we can find the electric field using $$E=\frac{kQ}{R^2}$$ by taking a small element $dq$ and integrating the value of $dE$ over the circumference, which gives $$E=\frac{kxQ}{\sqrt {(a^2+x^2)^3}}$$ where $a$ is the radius of the ring and $x$ is the distance of the point $p$ on the axis of the ring. Can we find the same using Gauss' law?
Gauss' law uses symmetry for its functioning.
Even if you consider a cylindrical (or spherical) Gaussian surface that encloses the ring, the electric field lines will be very different for two adjacent $dA$ area elements, and you will have to consider each and every one of these $dA$ area elements of the Gaussian surface to get the electric field. That's certainly not how this integration is supposed to work; it would make the problem harder to solve.
This integration is like adding $1+2+3+4+5+6=21$, i.e., every term is different and you have to evaluate each of them individually. That is practically impossible here.
But if the electric field lines and their dot product with the corresponding $dA$ were the same for every field line, then the integration would become simpler. Such is the case when the electric field due to an infinite sheet is to be calculated.
Integration of this type is like adding $2+2+2+2+2+2=6\times2$. This is simpler because it exploits symmetry.
(The image shows electric field due to 30 charges arranged in a ring at a given observation point. The position of the observation point can be varied to see how the electric field of the individual charges adds up to give the total field.)
In short, the electric field lines due to the ring lack symmetry. This would only make the problem harder to solve using Gauss' law.
$\oint \vec{E}\cdot d\vec{A}=\frac{q_{enc}}{\epsilon_{0}} $
Recall that when the electric field due to an infinite sheet is to be found, a cylindrical Gaussian surface is considered that encloses a circular part of the sheet. The electric field lines for this sheet are perpendicular to it. This is a perfectly ideal condition for Gauss' law to work: the field lines emerge perpendicularly from the two circular faces of the cylinder.
But in case of a ring, such conditions are absent.
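The idea behind the figure can be reproduced numerically (the charge, radius, and distance below are made-up illustration values, not taken from the original image): summing the fields of 30 point charges on a ring at an on-axis point reproduces $E=\frac{kxQ}{(a^2+x^2)^{3/2}}$, while the transverse components cancel by symmetry.

```python
import math

# Sum the fields of N = 30 point charges arranged in a ring (in the y-z
# plane) at an observation point on the x axis, and compare with the
# analytic on-axis result E = k*x*Q/(a^2+x^2)^(3/2).
# Charge, radius, and distance are arbitrary illustration values.

k = 8.9875517923e9    # Coulomb constant, N m^2 C^-2
Q = 1e-9              # total ring charge, C
a = 0.05              # ring radius, m
x = 0.08              # on-axis observation distance, m
N = 30

Ex = Ey = 0.0
for i in range(N):
    theta = 2 * math.pi * i / N
    ry, rz = a * math.cos(theta), a * math.sin(theta)   # charge position (y, z)
    r2 = x * x + ry * ry + rz * rz
    Ex += k * (Q / N) * x / r2 ** 1.5        # axial component
    Ey += k * (Q / N) * (-ry) / r2 ** 1.5    # transverse component (cancels)

E_analytic = k * x * Q / (a * a + x * x) ** 1.5
```

On the axis every charge sits at the same distance, so the discrete sum matches the analytic formula to machine precision; off the axis that symmetry is lost, which is exactly why Gauss' law offers no shortcut there.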
|
Homework Statement: There is an infinite charged plate in the yz plane with surface charge density ##\sigma = 8*10^8 C/m^2## and a negatively charged particle at coordinate (4,0,0). Find the magnitude of the E-field at coordinate (4,4,0).
Homework Equations: E = E1 + E2
So I figured to get e-field at point (4,4,0), I need to find the resultant e-field from the negatively charged particle and the plate
##E_{resultant}=E_{particle}+E_{plate}##
##E_{particle}=\frac{kq}{d^2}=\frac{(9\times10^9)(-2\times10^{-6})}{4^2}=-1125\,N/C##
Now for the plate is where I'm confused.
If this was a wire, it would have been okay for me since I only need to deal with one dimension.
Since what they requested was a plate in the yz plane, does this mean that my slice has ##dq=\sigma * x * dy##, where ##dy## is the 'slice' I take and ##x## is the width of the plate? Is that accurate?
If it is true, then to find the e-field created by that slice at the point,
##dE=\frac{kdq}{R^2}##
##dE=\frac{k\sigma *x*dy}{a^2+y^2}##
I know that the vertical components of the resultant e-field will cancel out because there are the same number of segments above and below the point.
So need to find ##dE_{x}##, which = ##dEcos\theta##, where ##\theta## is shown:
So ##dE_{x} = dEcos\theta = (\frac{k\sigma *x*dy}{a^2+y^2}) (\frac{a}{\sqrt{y^2+a^2}})##,
Now the problem is I can't integrate this to find my resultant e-field because I don't know the value of x. If this was a wire in a plane, it would have been solvable for me, but now I'm kind of stuck.
Any clues/help? Thanks :)
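One way to see where the unknown width drops out (sigma below is an arbitrary illustration value, not the one from the problem): slice the infinite sheet into infinite lines parallel to z instead of finite segments. Each line at height y carries ##\lambda = \sigma\,dy## and contributes the known line-charge field ##2k\lambda/r##, of which only the perpendicular fraction ##a/r## survives. The sum tends to ##\sigma/(2\epsilon_0)##, independent of the distance from the sheet.

```python
import math

# Strip decomposition of an infinite charged sheet (illustrative values).
# Each strip of width dy at height y is an infinite line of charge with
# lambda = sigma*dy; its field is 2*k*lambda/r, and only the component
# perpendicular to the sheet, a/r of it, survives the symmetry.

k = 8.9875517923e9        # Coulomb constant, N m^2 C^-2
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
sigma = 1e-6              # C/m^2, arbitrary illustration value

def E_sheet(a, half_width=500.0, n=200_000):
    """Midpoint-rule sum over strips from y = -half_width to +half_width."""
    dy = 2 * half_width / n
    E = 0.0
    for i in range(n):
        y = -half_width + (i + 0.5) * dy
        E += 2 * k * sigma * dy * a / (a * a + y * y)   # (2k*lambda/r)*(a/r)
    return E

E_exact = sigma / (2 * eps0)                 # the known infinite-sheet result
E_near, E_far = E_sheet(0.5), E_sheet(1.0)   # (nearly) the same value
```

The numeric sums at two different distances agree with each other and with ##\sigma/(2\epsilon_0)##, which is the familiar statement that the field of an infinite sheet does not depend on distance.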
|
It is also shown that on the nilmanifold $\Gamma\backslash (H^3\times H^3)$ the balanced condition is not stable under small deformations.
For $\gamma\in\mathbb{R}$ let $C(\gamma)$ be the set of all $f\in{\mathcal S}^\prime$ for which $\sum_{n=0}^{\infty}\,|(f,h_n)|\,(n+1)^{\gamma}<\infty$, where $(h_n)_{n=0,1,\dots}$ is the orthonormal base of Hermite functions.
We show that $C(\gamma)\subset S_0$ if and only if $\gamma\geq\frac{1}{4}$ and that $S_0\subset C(\gamma)$ if and only if $\gamma\leq{-}\frac{1}{4}$.
Using these results we give an explicit solution of the problem of optimal reconstruction of functions from Sobolev's classes $W^{\gamma}_{p}(M^{d})$ in $L_{q}(M^{d}), 1 \leq q \leq p \leq \infty$.
This result generalizes the characterization of Fourier series of distributions with a distributional point value given in [5] by $\lim_{x\rightarrow\infty}\sum_{-x\leq n\leq ax}a_{n}e^{inx_{0}}=\gamma\ (\mathrm{C},k)\,$.
This paper studies the deposition of clay rocks and the relationship between organic-matter richness and radioactive minerals, and clarifies the possibility of evaluating organic-matter richness from gamma-ray spectral log data. A practical example of application in the Zhungeer Basin is provided.
By analyzing the links between the deposition of clay rocks, the enrichment of organic matter, and radioactive minerals, this paper demonstrates the feasibility of evaluating the organic-matter abundance of clay rocks from natural gamma-ray spectral logging data, and describes a trial application with data from the Junggar Basin.
The losses of sodium petroleum sulfonates in laboratory experiments reach their equilibrium value quickly and go through a maximum as the concentration of the aqueous sulfonate solution is increased; thus the phenomena cannot be described satisfactorily by a Langmuir-type equation. In this paper a two-parameter equation in the form of a gamma function is constructed as a result of linear regression analysis (l.r.a.) on computer of a series of published experimental data. It is obtained as (?)=(?)~(vb)exp b(1-(?)~v), in which (?)=S/S_m and (?)=C/C_c, where S denotes the sulfonate loss at solution concentration C, S_m the maximum loss observed generally at apparent micellar concentration C_c, and v and b two adjustable parameters determined by trial and error in the l.r.a. process.
This paper proposes a mathematical expression that describes the loss characteristics of surfactants under isothermal conditions.
Owing to the nonconductivity of oil and gas, there is usually little difference between their displays in resistivity logs. Therefore the conventional log-analysis method developed for resistivity logging, which uses water saturation, can only distinguish hydrocarbon zones from non-hydrocarbon zones, but cannot tell oil zones from gas zones. Through analysis of the response characteristics of domestic logs to oil and gas zones in the Triassic of the Xiazijie oilfield, this paper suggests that, after hydrocarbon zones are found with resistivity and sonic logs, the combination of neutron gamma and sonic logs can further distinguish oil zones from gas zones. Examples shown in the paper give good results.
Because neither oil nor gas conducts electricity, oil zones and gas zones usually show little difference on resistivity log curves, so the conventional interpretation method developed from resistivity logging, which relies on formation water saturation, can only separate hydrocarbon zones from non-hydrocarbon zones and cannot identify whether a zone holds oil or gas. By analyzing the response characteristics of domestically logged data to the Triassic oil and gas zones of the Xiazijie oilfield, this paper argues that, after hydrocarbon zones are identified with resistivity and sonic logs, the combination of neutron gamma and sonic logs can further distinguish oil zones from gas zones. The examples given show satisfactory results.
|
I am reading an intro book about cryptography, and the author tries to explain why using pseudorandom number generators for encryption is vulnerable.
Given PRNG equation;
\begin{align} S_0 &= \text{seed}\\ S_{i+1} &\equiv A\cdot S_i + B \mod m, i = 0,1,\ldots \end{align}
where we choose $m$ to be 100 bits long and $S_i,A,B \in \{0,1,\ldots,m−1\}.$ Since this is a stream cipher, we can encrypt
$$y_i \equiv x_i + s_i \mod 2$$
Further in the text:
But Oscar can easily launch an attack. Assume he knows the first 300 bits of plaintext (this is only 300/8=37.5 byte), e.g., file header information, or he guesses part of the plaintext. Since he certainly knows the ciphertext, he can now compute the first 300 bits of key stream as:
(Equation 1) $s_i \equiv y_i + x_i \mod 2, \; i=1,2,\ldots,300$
There are several things about the paragraph above that I don't understand.
Firstly, by what mechanism could Oscar gain the first 300 bits of plaintext? It makes little sense for Alice (the person who tries to securely communicate with Bob) to send the encrypted text and the plaintext together. Is there a situation where this happens? How exactly could Oscar predict a word and its location in the plaintext?
Secondly, I don't understand how Equation 1 was derived.
I appreciate any help.
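To see concretely why this kind of generator is dangerous as a key stream, here is a toy version of the attack (tiny parameters instead of the book's 100-bit modulus): once Oscar has recovered a few consecutive states via known plaintext, two linear congruences determine $A$ and $B$, and the whole stream becomes predictable.

```python
# Toy demonstration of the LCG attack (small parameters, not the 100-bit m
# from the book).  From S2 = A*S1 + B (mod m) and S3 = A*S2 + B (mod m),
# subtracting gives A = (S3 - S2) * (S2 - S1)^(-1) (mod m), then B follows.

m = 2_147_483_647                               # a prime modulus, so inverses exist
A_secret, B_secret, seed = 48_271, 12_345, 777  # unknown to the attacker

def lcg(s):
    while True:
        s = (A_secret * s + B_secret) % m
        yield s

gen = lcg(seed)
S1, S2, S3, S4 = next(gen), next(gen), next(gen), next(gen)

# Attacker's side: S1, S2, S3 recovered via known plaintext (Equation 1).
A_found = ((S3 - S2) * pow((S2 - S1) % m, -1, m)) % m
B_found = (S2 - A_found * S1) % m
S4_pred = (A_found * S3 + B_found) % m          # the whole stream is now known
```

The modular inverse exists here because m is prime and the consecutive states differ; with the book's parameters the same two-equation solve works on 100-bit numbers just as fast.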
|
One can make use of Simplify with Assumptions.
I. Compute the sum
s = Sum[HarmonicNumber[n, 5]/n^8, {n, 1, Infinity}]
(* -(1/63) π^6 Zeta[7] - 13/15 π^4 Zeta[9] - 55 π^2 Zeta[11] + 644 Zeta[13] *)
II. Make a table of Zeta-functions with even arguments
t = Flatten[Table[{ζ[2n] == Zeta[2n]}, {n, 0, 6}]]
(* {ζ[0]==-(1/2), ζ[2]==π^2/6, ζ[4]==π^4/90, ζ[6]==π^6/945, ζ[8]==π^8/9450, ζ[...
It seems the problem came from defining the function in terms of k while the mathematical formulation is defined in terms of k+1. When the two were combined, it introduced the problem of including the input term in the output, which caused an infinite loop. For example
d[2] -> With[{k = 2}, Table[d[k + 1 - i], {i, k + 1}]]
{d[2], d[1], d[0]}
would ...
I'd proceed as follows:
delta[0, alpha_, lambda_] = 1;
delta[k_, alpha_, lambda_] :=
  delta[k, alpha, lambda] =
    alpha/k Sum[Sum[(1 - lambda[[1]]/lambda[[j]])^i, {j, 1, Length[lambda]}] delta[k - i, alpha, lambda], {i, 1, k}]
To test it with your values:
list = {1.2, 2.4, 3.3};
a = 2.3;
delta[1, a, list]
delta[2, a, list]
delta[3, a, list]
2.61364...
I think you have to be mindful of the radius of convergence of the series. From the Mejer reference you cited, if $f(z)=x+x^p$, then define$$h(z)=\sum_{k=0}^\text{kMax} \frac{(-1)^k}{pk-k+1}\binom{pk}{k}z^k$$However, convergence of the series is restricted to the region of convergence $$ R=\frac{(p-1)^{p-1}}{p^p}$$ so that if $0\leq y\leq R^{\frac{1}{p-1}...
The Null appears because two-argument If[] produces Null if the condition in the first argument is not satisfied. Αλέξανδρος shows one possibility, but you can fix your original code by recalling that $1$ is the identity element for multiplication; thus, you can implement Lagrangian interpolation like so:
With[{x1 = {1, 2, 3, 4}, y1 = {3, 4, 2, 5}}, ...
What you want is already built-in as SymmetricPolynomial[]:
SymmetricPolynomial[3, Array[f, 4]]
f[1] f[2] f[3] + f[1] f[2] f[4] + f[1] f[3] f[4] + f[2] f[3] f[4]
SymmetricPolynomial[4, Array[f, 4]]
f[1] f[2] f[3] f[4]
but otherwise, Bill's suggestion can be vastly simplified using Sum[] and Product[]'s ability to take a list of indices:
Sum[Product[...
|
The equilibrium constant is known as \(K_{eq}\). A common example of \(K_{eq}\) is with the reaction:
\[aA + bB \rightleftharpoons cC + dD\]
\[K_{eq} = \dfrac{[C]^c[D]^d}{[A]^a[B]^b}\]
where:

At equilibrium, [A], [B], [C], and [D] are either the molar concentrations or partial pressures.
Products are in the numerator; reactants are in the denominator.
The exponents are the coefficients (a, b, c, d) in the balanced equation.
Solids and pure liquids are omitted. This is because the activities of pure liquids and solids are equal to one, so the numerical value of the equilibrium constant is the same with and without the values for pure solids and liquids.
\(K_{eq}\) does not have units. This is because when calculating the activity of a specific reactant or product, the units cancel; since \(K_{eq}\) is built from unitless activities, it is itself unitless.
Various \(K_{eq}\)
All the equilibrium constants tell the relative amounts of products and reactants at equilibrium. For any reversible reaction, there can be constructed an equilibrium constant to describe the equilibrium conditions for that reaction. Since there are many different types of reversible reactions, there are many different types of equilibrium constants:
\(K_{c}\): constant for molar concentrations
\(K_{p}\): constant for partial pressures
\(K_{sp}\): solubility product
\(K_{a}\): acid dissociation constant for weak acids
\(K_{b}\): base dissociation constant for weak bases
\(K_{w}\): describes the ionization of water (\(K_{w} = 1 \times 10^{-14}\))

Calculating \(K_p\)
Referring to equation:
\[aA + bB \rightleftharpoons cC + dD\]
\[K_p = \dfrac{(P_C)^c(P_D)^d}{(P_A)^a(P_B)^b}\]
Partial Pressures: In a mixture of gases, it is the pressure an individual gas exerts. The partial pressure is independent of other gases that may be present in a mixture. According to the ideal gas law, partial pressure is inversely proportional to volume. It is also directly proportional to moles and temperature.
Example \(\PageIndex{1}\)
At equilibrium in the following reaction at room temperature, the partial pressures of the gases are found to be \(P_{N_2}\) = 0.094 atm, \(P_{H_2}\) = 0.039 atm, and \(P_{NH_3}\) = 0.003 atm.
\[\ce{N_2 (g) + 3 H_2 (g) \rightleftharpoons 2 NH_3 (g)} \nonumber \]
What is the \(K_p\) for the reaction?
SOLUTION
First, write \(K_{eq}\) (equilibrium constant expression) in terms of activities.
\[K = \dfrac{(a_{NH_3})^2}{(a_{N_2})(a_{H_2})^3} \nonumber\]
Then, replace the activities with the partial pressures in the equilibrium constant expression.
\[K_p = \dfrac{(P_{NH_3})^2}{(P_{N_2})(P_{H_2})^3} \nonumber\]
Finally, substitute the given partial pressures into the equation.
\[K_p = \dfrac{(0.003)^2}{(0.094)(0.039)^3} = 1.61 \nonumber\]
Example \(\PageIndex{2}\)
At equilibrium in the following reaction at 303 K, the total pressure is 0.016 atm while the partial pressure of \(P_{H_2}\) is found to be 0.013 atm.
\[\ce{3 Fe_2O_3 (s) + H_2 (g) \rightleftharpoons 2 Fe_3O_4 (s) + H_2O (g)} \nonumber\]
What is the \(K_p\) for the reaction?
SOLUTION
First, calculate the partial pressure for \(\ce{H2O}\) by subtracting the partial pressure of \(\ce{H2}\) from the total pressure.
\[ \begin{align*} P_{H_2O} &= {P_{total}-P_{H_2}} \\[4pt] &= (0.016-0.013) \; atm \\[4pt] &= 0.003 \; atm \end{align*}\]
Then, write K (equilibrium constant expression) in terms of activities. Remember that solids and pure liquids are ignored.
\[K = \dfrac{(a_{H_2O})}{(a_{H_2})}\nonumber\]
Then, replace the activities with the partial pressures in the equilibrium constant expression.
\[K_p = \dfrac{(P_{H_2O})}{(P_{H_2})}\nonumber\]
Finally, substitute the given partial pressures into the equation.
\[K_p = \dfrac{(0.003)}{(0.013)} = 0.23 \nonumber\]
Example \(\PageIndex{3}\)
A flask initially contained hydrogen sulfide at a pressure of 5.00 atm at 313 K. When the reaction reached equilibrium, the partial pressure of sulfur vapor was found to be 0.15 atm.
\[\ce{2 H_2S (g) \rightleftharpoons 2 H_2 (g) + S_2 (g) } \nonumber\]
What is the \(K_p\) for the reaction?
SOLUTION
For this kind of problem, ICE Tables are used.
                      \(\ce{2H2S (g)}\)   \( \rightleftharpoons \)   \(\ce{2H2(g)}\)   +   \(\ce{S2(g)}\)
Initial Amounts       5.00 atm                                       0 atm                 0 atm
Change in Amounts     -0.3 atm                                       +0.3 atm              +0.15 atm
Equilibrium Amounts   4.7 atm                                        0.3 atm               0.15 atm
Now, set up the equilibrium constant expression, \(K_p\).
\[K_p = \dfrac{(P_{H_2})^2(P_{S_2})}{(P_{H_2S})^2} \nonumber\]
Finally, substitute the calculated partial pressures into the equation.
\[ \begin{align*} K_p &= \dfrac{(0.3)^2(0.15)}{(4.7)^2} \\[4pt] &= 6.11 \times 10^{-4} \end{align*} \]
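The three worked examples can be re-checked with a few lines of arithmetic (same numbers as in the text):

```python
# Re-checking the worked Kp examples numerically.

# Example 1: N2 + 3 H2 <=> 2 NH3
Kp1 = 0.003 ** 2 / (0.094 * 0.039 ** 3)     # ~1.61

# Example 2: 3 Fe2O3 + H2 <=> 2 Fe3O4 + H2O (solids omitted)
P_H2O = 0.016 - 0.013                       # from the total pressure
Kp2 = P_H2O / 0.013                         # ~0.23

# Example 3: 2 H2S <=> 2 H2 + S2, using the ICE-table equilibrium amounts
P_H2S, P_H2, P_S2 = 5.00 - 0.3, 0.3, 0.15
Kp3 = (P_H2 ** 2 * P_S2) / (P_H2S ** 2)     # ~6.11e-4
```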
Contributors: Bianca Yau
|
Q4.1
Write the Schrödinger equation for a particle in a two dimensional box with infinite potential barriers and adjacent sides of unequal length (a rectangle). Solve the equation by separating variables with a product function X(x)Y(y) to obtain the wavefunctions X(x) and Y(y) and energy eigenvalues. How many different sets of quantum numbers are needed for this case? Sketch an energy level diagram to illustrate the energy level structure. What happens to the energy levels when the box is a square? When two or more states have the same energy, the states and the energy level are said to be degenerate. What is the zero point energy for an electron in a square box of length 0.05 nm?
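For the last part of Q4.1, a numerical sketch of the zero-point energy (CODATA constants; treat it only as a check on your own derivation — the energy formula assumed here is the standard square-box result):

```python
# Zero-point energy of an electron in a square 2-D box of side L = 0.05 nm,
# assuming the standard result E(nx, ny) = (h^2 / (8 m L^2)) * (nx^2 + ny^2).
# E(1,2) = E(2,1) illustrates the degeneracy that appears for the square box.

h = 6.62607015e-34        # Planck constant, J s
me = 9.1093837015e-31     # electron mass, kg
L = 0.05e-9               # box side, m

E_unit = h ** 2 / (8 * me * L ** 2)
E_zero = 2 * E_unit                          # nx = ny = 1
E_zero_eV = E_zero / 1.602176634e-19         # roughly 3e2 eV
```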
Q4.2
A materials scientist is trying to fabricate a novel electronic device by constructing a two dimensional array of small squares of silver atoms. She thinks she has managed to produce an array with each square consisting of a monolayer of 25 atoms. You are an optical spectroscopist and want to test this conclusion. Use the particle-in-a-box model to predict the wavelength of the lowest energy electronic transition for these quantum dots. Which electrons do you want to describe by the particle-in-a-box model, or do you think you can apply this model to all the electrons in silver and get a reasonable prediction? In which spectral region does this transition lie? What instrumentation would you need to observe this transition?
Q4.3
Model the pi electrons of benzene by adapting the electron in a box model. Consider benzene to be a ring of radius r and circumference 2πr. You can find r by using the bond length of benzene (0.139 nm) and some trigonometry. Show how the electron on a ring is analogous to the electron in a linear box. Derive this analogy by thinking, not by copying from some book. What is the boundary condition for the case of the particle on a ring? Find mathematical expressions for the energy and the wavefunctions. Draw an energy level diagram. What is the physical reason that the energy levels are degenerate for this situation? Predict the wavelength of the lowest energy electronic transition for benzene. Compare your prediction with the experimental value (256 nm). What insight do you gain from this comparison?
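A sketch of the numbers for the end of Q4.3 (derive the formulas yourself first; this assumes the standard particle-on-a-ring result \(E_m = m^2\hbar^2/2m_er^2\) with \(m = 0, \pm 1, \pm 2, \ldots\), and uses the fact that a regular hexagon's circumradius equals its side length, so r = 0.139 nm):

```python
# Lowest electronic transition of benzene in the particle-on-a-ring model.
# Six pi electrons fill m = 0 and m = ±1, so the lowest transition is
# m = 1 -> m = 2.  Constants are CODATA values.

hbar = 1.054571817e-34    # J s
me = 9.1093837015e-31     # kg
hc = 1.98644586e-25       # h*c, J m
r = 0.139e-9              # ring radius = hexagon side length, m

E1 = hbar ** 2 / (2 * me * r ** 2)
dE = (2 ** 2 - 1 ** 2) * E1               # m = 1 -> 2
wavelength_nm = hc / dE * 1e9             # ~210 nm vs. 256 nm measured
```

The predicted wavelength lands in the right spectral region but short of the measured 256 nm, which is itself part of the insight the question asks for.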
Q4.4
Explain how and why the following two sets of selection rules for the particle-in-a-box are related to each other: (1) If Δn is even, the transition is forbidden; if Δn is odd, the transition is allowed. (2) If the transition is g to g or u to u, it is forbidden; if the transition is g to u or u to g, it is allowed.
Q4.5
The factor \(fi/(f^2-i^2)^2\) in Equation (4-32) determines the relative intensity of transitions in the particle-in-a-box model. Make plots of [\(fi/(f^2-i^2)^2\)] vs f for several values of i with f starting at i+1 and increasing. What conclusions can you make about particle-in-a-box spectra from your plots?
Q4.6
Starting with the mathematical definition of uncertainty as the standard or root mean square deviation σ from the average, show by evaluating the appropriate expectation value integrals that
\[ \sigma _x = \frac {L}{2\pi n } \left ( \frac {\pi ^2 n^2}{3} -2 \right )^{1/2} \quad \text {and} \quad \sigma _p = \frac {n \pi \hbar }{L} \]
for a particle in a one-dimensional box of length L as given in the chapter. Then show that the product \(\sigma _x \sigma _p \ge \frac {\hbar}{2}\).
Q4.7
Use the symbolic processor in Mathcad to help you carry out the steps leading from Equation (4-27) to Equation (4-31). See Activity 4.3 for an introduction to the symbolic processor.
Q4.8
An electron is confined to a one-dimensional space with infinite potential barriers at x = 0 and x = L and a constant potential energy between 0 and L. The electron is described by the wavefunction \(ψ(x) = N (Lx - x^2)\)
In responding to the following questions (a through g), do not leave your answers in the form of integrals, i.e. do the integrals.
\[ \text {Note} : \int x^n dx = \frac {1}{n + 1} x ^{n + 1} + C \quad \text {for } n \ne -1 \]
a. Explain why this wavefunction must be normalized, and find an expression for N that normalizes the wavefunction.
b. Define what is meant by the expectation value, and find the expectation value for the position of the electron and the momentum of the electron.
c. Find the expectation value for the energy of the electron.
d. Is your energy expectation value consistent with your momentum expectation value? Explain.
e. What is the energy of the n = 1 state for the one-dimensional particle-in-a-box model? How does the energy obtained in (c) compare with this value? Explain why these two energies must have such a relationship to each other.
f. Does the wavefunction, \(ψ(x) = N (Lx - x^2)\), for this electron represent a stationary state of the electron?
g. What is the probability that the electron will be located at x = L/3 in an interval of length L/100? Explain why you expect this probability to be time dependent or time independent.
Q4.9
How does choosing the potential energy inside the box to be –100 eV rather than 0 modify the description of the particle-in-a-box?
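One way to convince yourself of the answer (the eigenfunctions are unchanged and every energy level is simply shifted by the constant) is a quick finite-difference experiment. This is my own sketch in arbitrary units with \(\hbar = m = L = 1\), not part of the text:

```python
import numpy as np

N = 500
L = 1.0
dx = L / (N + 1)

# kinetic operator -1/2 d^2/dx^2 on an interior grid with hard walls at 0 and L
diag = np.full(N, 1.0 / dx**2)
off = np.full(N - 1, -0.5 / dx**2)
T = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

V0 = -100.0  # constant potential inside the box (arbitrary units)
E_zero = np.linalg.eigvalsh(T)                     # V = 0 inside the box
E_shift = np.linalg.eigvalsh(T + V0 * np.eye(N))   # V = V0 inside the box

# every level moves by exactly V0; level spacings, and hence spectra, are unchanged
assert np.allclose(E_shift - E_zero, V0)
print("lowest levels, V = 0: ", np.round(E_zero[:3], 3))
print("lowest levels, V = V0:", np.round(E_shift[:3], 3))
```

Since adding a constant V0 adds V0 times the identity to the Hamiltonian, the eigenvectors are untouched and only the energy zero moves; no observable (transition energies, probability densities) changes.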
Contributors Adapted from "Quantum States of Atoms and Molecules" by David M. Hanson, Erica Harvey, Robert Sweeney, Theresa Julia Zielinski
|
I'd like to be able to auto-number equations and label them so i can refer to them in the text. Math Jax provides a built-in way to do this by setting the autoNumber option to AMS. I realize that by adding this option to the universal template, existing code will be affected in such a way that some equations will get unduly auto-numbered. Is there any way to allow individual users to specify this option for their posts, either once and for all, or on a per-post level? If not, what is the recommended way to number equations and refer to them?
You can use something like this $$ \sum_{k=1}^\infty\frac1{k^2}=\frac{\pi^2}{6}\tag{1}\label{mylabel} $$ and refer to it like $\eqref{mylabel}$
This can also be used in a normal link
Caveat: I believe that these labels need to be unique within a question.
|
If x(t) is even, then $x(t) = a_0 + \sum_{n=1}^{\infty}a_n*\cos(2\pi nt/T)$
However, based on this formula: $x(t) = a_0 + \sum_{n=1}^{\infty}\left[a_n\cos(2\pi nt/T) + b_n\sin(2\pi nt/T)\right]$ where $a_n = \frac{2}{T} \int_0^T x(t)\cos(2\pi nt/T)\,dt$
x(t) is an even function and cos is an odd function. An even * odd = odd function. The periodic integral of an odd function is 0. Hence, it should only be the sin term remaining. However, the Fourier series for these even and odd functions are reversed. It should be that the Fourier series of an odd function is the Fourier cosine series. I don't understand.
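A direct numerical check may help locate the error (this is my own sanity script, not part of the original question): take an x(t) that is even about t = 0 and compute its Fourier coefficients; the sine coefficients all vanish, so an even function has a pure cosine series.

```python
import numpy as np

T = 1.0
t = np.linspace(0.0, T, 100001)
dt = t[1] - t[0]
x = np.cos(2 * np.pi * t / T) + 0.5 * np.cos(4 * np.pi * t / T)  # even about t = 0

def integrate(f):
    # trapezoid rule over one full period
    return dt * (f.sum() - 0.5 * (f[0] + f[-1]))

for n in (1, 2, 3):
    a_n = (2 / T) * integrate(x * np.cos(2 * np.pi * n * t / T))
    b_n = (2 / T) * integrate(x * np.sin(2 * np.pi * n * t / T))
    print(n, round(a_n, 6), round(b_n, 6))
```

The b_n come out to zero, consistent with cos being an even function rather than odd: even times even is even, while the sine projections of an even function integrate to zero over a period.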
|
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
|
If the roots of $8x^2-10x+3=0$ are $\alpha$ and $\beta^2$ where $\beta^2>\frac{1}{2}$, then find the equation whose roots are $(\alpha+i\beta)^{100}$ and $(\alpha-i\beta)^{100}$.
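A quick numerical check of the intended answer (not part of the problem statement): the roots of \(8x^2-10x+3=0\) are \(\tfrac34\) and \(\tfrac12\), so \(\beta^2=\tfrac34\) and \(\alpha=\tfrac12\), giving \(\alpha\pm i\beta=e^{\pm i\pi/3}\); the sum and product of the 100th powers then pin down the required quadratic.

```python
import math

alpha, beta2 = 0.5, 0.75               # roots of 8x^2 - 10x + 3 = 0, with beta^2 > 1/2
z = complex(alpha, math.sqrt(beta2))   # alpha + i*beta, a point on the unit circle
w1, w2 = z**100, z.conjugate()**100

s, p = w1 + w2, w1 * w2                # sum and product of the required roots
print(round(s.real), round(p.real))
# sum = -1 and product = 1, so the required equation is x^2 + x + 1 = 0
```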
|
Theorem
The following definitions of the concept of Compact Space in the context of Topology are equivalent:
A topological space $T = \left({S, \tau}\right)$ is compact if and only if every open cover for $S$ has a finite subcover.
A topological space $T = \left({S, \tau}\right)$ is compact if and only if $\tau$ has a sub-basis $\mathcal B$ such that: from every cover of $S$ by elements of $\mathcal B$, a finite subcover of $S$ can be selected.
Proof
Let every open cover of $S$ have a finite subcover.
Let $\mathcal B$ be a sub-basis of $\tau$.
By definition of a compact space, from every cover of $S$ by elements of $\mathcal B$, a finite subcover can be selected.
$\Box$
Let the space $T$ have a sub-basis $\mathcal B$ such that every cover of $S$ by elements of $\mathcal B$ has a finite subcover.
Aiming for a contradiction, suppose $T$ is not such that every open cover of $S$ has a finite subcover.
Use Zorn's Lemma to find an open cover $\mathcal C$ which has no finite subcover that is maximal among such open covers.
So if:
$V$ is an open set
and:
$V \notin \mathcal C$
then $\mathcal C \cup \left\{ {V}\right\}$ has a finite subcover, necessarily of the form:
$\mathcal C_0 \cup \left\{ {V}\right\}$
for some finite subset $\mathcal C_0$ of $\mathcal C$.
Consider $\mathcal C \cap \mathcal B$, that is, the sub-basic subset of $\mathcal C$.
Suppose $\mathcal C \cap \mathcal B$ covers $S$.
Then, by hypothesis, $\mathcal C \cap \mathcal B$ would have a finite subcover.
But $\mathcal C$ does not have a finite subcover.
So $\mathcal C \cap \mathcal B$ does not cover $S$.
Let $x \in S$ that is not covered by $\mathcal C \cap \mathcal B$.
We have that $\mathcal C$ covers $S$, so:
$\exists U \in \mathcal C: x \in U$
We have that $\mathcal B$ is a sub-basis.
So for some $B_1, \ldots, B_n \in \mathcal B$, we have that:
$x \in B_1 \cap \cdots \cap B_n \subseteq U$
Since $x$ is not covered by $\mathcal C \cap \mathcal B$, each $B_i \notin \mathcal C$.
As noted above, this means that for each $i$, $B_i$ along with a finite subset $\mathcal C_i$ of $\mathcal C$, covers $S$.
But then $U$ and all the $\mathcal C_i$ cover $S$.
Hence $\mathcal C$ has a finite subcover.
This contradicts our supposition that we can construct $\mathcal C$ so as to have no finite subcover.
It follows that we cannot construct an open cover $\mathcal C$ of $S$ which has no finite subcover.
$\blacksquare$
Axiom of Choice
This theorem depends on the Axiom of Choice, by way of Zorn's Lemma.
Because of some of its bewilderingly paradoxical implications, the Axiom of Choice is considered in some mathematical circles to be controversial.
Most mathematicians are convinced of its truth and insist that it should nowadays be generally accepted.
However, others consider its implications so counter-intuitive and nonsensical that they adopt the philosophical position that it cannot be true.
Notes
Although this proof makes use of Zorn's Lemma, the proof does not need the full strength of the Axiom of Choice.
Instead, it relies on the intermediate Ultrafilter Lemma.
Also known as
Alexander's Compactness Theorem is also known as:
Alexander's Sub-Basis Theorem
Alexander's Sub-Base Theorem
which can also be seen in the form the Alexander Sub-Basis Theorem, and so on.
Sub-Base and Sub-Basis can also be seen here rendered as Subbase and Subbasis.
Source of Name
This entry was named for James Waddell Alexander II.
Sources
|
Answer
The solution set is $\varnothing$
Work Step by Step
$$\tan^2x+3=0$$ over the interval $[0,2\pi)$.
1) Consider the equation: $$\tan^2x+3=0$$ $$\tan^2x=-3$$
We know that $A^2\ge0$ for all $A\in \mathbb{R}$. As a result, $\tan^2x\ge0$ for all $x\in [0,2\pi)$.
Therefore, as $-3\lt0$, there are no values of $x\in[0,2\pi)$ for which $\tan^2x=-3$.
In other words, the solution set of this equation is $\varnothing$ (the empty set).
|
Relative Complement inverts Subsets Theorem
Let $S$ be a set.
Let $A \subseteq S, B \subseteq S$ be subsets of $S$.
Then: $A \subseteq B \iff \relcomp S B \subseteq \relcomp S A$
where $\complement_S$ denotes the complement relative to $S$.
Proof
$A \subseteq B$
$\leadstoandfrom \quad A \cap B = A$ (Intersection with Subset is Subset)
$\leadstoandfrom \quad \relcomp S {A \cap B} = \relcomp S A$ (Relative Complement of Relative Complement)
$\leadstoandfrom \quad \relcomp S A \cup \relcomp S B = \relcomp S A$ (De Morgan's Laws: Complement of Intersection)
$\leadstoandfrom \quad \relcomp S B \subseteq \relcomp S A$ (Union with Superset is Superset)
$\blacksquare$
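The equivalence is also easy to spot-check by brute force over a small ground set. This sanity script is mine and is of course not part of the proof:

```python
from itertools import chain, combinations

S = frozenset(range(4))

def subsets(s):
    # all subsets of s, as frozensets
    xs = list(s)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))]

for A in subsets(S):
    for B in subsets(S):
        # A subset of B  iff  complement of B is a subset of complement of A
        assert (A <= B) == ((S - B) <= (S - A))
print("checked", len(subsets(S)) ** 2, "pairs")
```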
Also known as
This result can be referred to by saying that the subset operation is inclusion-inverting.
Sources
1965: Seth Warner: Modern Algebra ... (previous) ... (next): Exercise $3.3 \ \text{(e)}$
1975: T.S. Blyth: Set Theory and Abstract Algebra ... (previous) ... (next): $\S 1$. Sets; inclusion; intersection; union; complementation; number systems: $\text{(k)}$
1993: Keith Devlin: The Joy of Sets: Fundamentals of Contemporary Set Theory (2nd ed.) ... (previous) ... (next): $\S 1$: Naive Set Theory: $\S 1.2$: Operations on Sets: Exercise $1.2.2 \ \text{(ii)}$
|
Discontinuous solutions for Hamilton-Jacobi equations: Uniqueness and regularity
1. Department of Mathematics, Northwestern University, 2033 Sheridan Road, Evanston, IL 60208-2730
2. Department of Mathematics, University of Wisconsin-Madison, Madison, WI 53706
$(*) \qquad \varphi(x)\ge\varphi_{\star \star}(x) \equiv \liminf_{y\rightarrow x,\, y\in\mathbb R^d\backslash\Gamma}\varphi(y).$
The regularity of discontinuous solutions to Hamilton-Jacobi equations with locally strictly convex Hamiltonians is proved: The discontinuous solutions with almost everywhere continuous initial data satisfying (*) become Lipschitz continuous after finite time. The $L^1$-accessibility of initial data and a comparison principle for discontinuous solutions are shown. The equivalence of semicontinuous viscosity solutions, bi-lateral solutions, $L$-solutions, minimax solutions, and $L^\infty$-solutions is also clarified.
Keywords: viscosity solutions, accessibility of initial data, minimax solutions, stability, $L^\infty$ solutions, Hamilton-Jacobi equations, discontinuous solutions, regularity after finite time, semicontinuous solutions. Mathematics Subject Classification: 35F25, 49L99, 35D10, 35B35, 35D0. Citation: Gui-Qiang Chen, Bo Su. Discontinuous solutions for Hamilton-Jacobi equations: Uniqueness and regularity. Discrete & Continuous Dynamical Systems - A, 2003, 9 (1) : 167-192. doi: 10.3934/dcds.2003.9.167
|
I am confused by a question from a past paper on Lagrangian and Hamiltonian mechanics. A Lagrangian (in plane polar coordinates) for a spaceship (mass $m$) under the influence of a central force directed towards the centre of the Earth is $$L=\frac{1}{2} m \left( \dot{r}^2 +r^2 \dot{\phi}^2\right)+\frac{k}{r}$$
Where the $k$ is a constant of the central field, $l$ is the magnitude of angular momentum which is a conserved quantity, and the total energy of this system is given by:
$$ m r^2 \dot\phi =l \quad \quad E=\frac{m\dot{r}^2}{2}+\frac{l^2}{2mr^2}-\frac{k}{r}$$
I could find the effective potential $$V^{\text{eff}}(r)=\frac{l^2}{2mr^2}-\frac{k}{r} $$ The radius $R$ and period $T$ when the spaceship moves on a circular orbit with a constant magnitude of angular momentum $l$ could be written in terms of $l$, $k$ and $m$:
$$ R=\frac{l^2}{mk} \quad \quad T=\frac{2 \pi l^3}{ mk^2} $$
When the spaceship moves on a circular orbit with radius $R = 7000$ km and velocity $V = 2\pi R/T = 8 $km/s. The astronaut on the spaceship jumps directly towards the Earth with velocity $v = 8 $m/s. I'm asked to calculate the minimum distance of the astronaut from the centre of the Earth, with a hint that $v \ll V $ so I could expand the effective potential near its minimum.
My basic idea is below:
Before the astronaut jumps, the energy of the spaceship $E_0$ is equal to the effective potential $V^{\text{eff}}(r_0)$: $$E_0=V^{\text{eff}}(r_0)=\frac{l^2}{2mr_0^2}-\frac{k}{r_0}$$
After the astronaut jumps, the energy of the astronaut is $$E_{\text{jump}}= \frac{m_{as}\dot{r}^2}{2}+ \frac{l^2}{2m_{as}r^2}-\frac{k}{r}$$
where $m_{as}$ is the mass of the astronaut. Because $v \ll V$, the radius change is small, and therefore the change in potential is small, so $$E_{\text{jump}}=E_0+\frac{m_{as}\dot{r}^2}{2}$$
As the curve of the effective potential shows, I would like to use an approximation for $E_{\text{jump}}$: $$E_{\text{jump}}=E_0 + \frac12 \frac{\partial^2 V^{\text{eff}}(r_0)}{\partial r^2}(r-R)^2$$
Therefore, $$E_{\text{jump}}- E_0= \frac{m_{as}\dot{r}^2}{2} =\frac12 \frac{\partial^2 V^{\text{eff}}(r_0)}{\partial r^2}(r-R)^2 $$
But here I ran into a problem: $m_{as}$ is not given, so I am not able to find a numerical solution from the above equation. Is there anything wrong with my idea?
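One observation that may resolve this (my own sketch, not an official solution): the astronaut's angular momentum and field constant both scale linearly with \(m_{as}\), so \(m_{as}\) cancels from the harmonic estimate. Per unit mass, \(\partial^2 V^{\text{eff}}/\partial r^2\) at \(r = R\) works out to \((V/R)^2 = \omega^2\), the square of the orbital angular frequency, so the maximum radial excursion is \(\Delta r = v/\omega = vR/V\), independent of the astronaut's mass.

```python
R = 7000e3   # orbital radius, m
V = 8000.0   # orbital speed, m/s
v = 8.0      # jump speed, m/s (radially, toward the Earth)

omega = V / R          # angular frequency of small radial oscillations about r = R
delta_r = v / omega    # maximum excursion, = v * R / V; astronaut mass has cancelled
r_min = R - delta_r

print(f"max excursion: {delta_r / 1e3:.1f} km, minimum distance: {r_min / 1e3:.1f} km")
```

On these numbers the astronaut dips only about 7 km below the 7000 km orbit.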
|
Set is Subset of Itself
Jump to navigation Jump to search
Theorem $\forall S: S \subseteq S$ Proof
$\forall x: \ (x \in S \implies x \in S)$ (Law of Identity: a statement implies itself)
$\leadsto \quad S \subseteq S$ (Definition of Subset)
$\blacksquare$
Sources
1960: Paul R. Halmos: Naive Set Theory ... (previous) ... (next): $\S 1$: The Axiom of Extension
1964: W.E. Deskins: Abstract Algebra ... (previous) ... (next): $\S 1.1$
1964: William K. Smith: Limits and Continuity ... (previous) ... (next): $\S 2.1$: Sets
1965: J.A. Green: Sets and Groups ... (previous) ... (next): $\S 1.2$. Subsets
1965: Seth Warner: Modern Algebra ... (previous) ... (next): Chapter $1$: Algebraic Structures: $\S 1$: The Language of Set Theory
1966: Richard A. Dean: Elements of Abstract Algebra ... (previous) ... (next): $\S 0.2$
1967: George McCarty: Topology: An Introduction with Application to Topological Groups ... (previous) ... (next): $\text{I}$: Exercise $\text{B i}$
1971: Robert H. Kasriel: Undergraduate Topology ... (previous) ... (next): $\S 1.3$: Subsets
1975: T.S. Blyth: Set Theory and Abstract Algebra ... (previous) ... (next): $\S 1$. Sets; inclusion; intersection; union; complementation; number systems: Theorem $1.1 \ \text{(a)}$
1975: Bert Mendelson: Introduction to Topology (3rd ed.) ... (previous) ... (next): Chapter $1$: Theory of Sets: $\S 2$: Sets and Subsets
1978: Thomas A. Whitelaw: An Introduction to Abstract Algebra ... (previous) ... (next): $\S 6.1$: Subsets
1982: P.M. Cohn: Algebra Volume 1 (2nd ed.) ... (previous) ... (next): Chapter $1$: Sets and mappings: $\S 1.2$: Sets
1983: George F. Simmons: Introduction to Topology and Modern Analysis ... (previous) ... (next): $\S 1$: Sets and Set Inclusion
2014: Christopher Clapham and James Nicholson: The Concise Oxford Dictionary of Mathematics (5th ed.) ... (previous) ... (next): Entry: subset (i)
|
V. Gitman, J. D. Hamkins, and A. Karagila, “Kelley-Morse set theory does not prove the class Fodor theorem.” (manuscript under review)
@ARTICLE{GitmanHamkinsKaragila:KM-set-theory-does-not-prove-the-class-Fodor-theorem,
  author = {Victoria Gitman and Joel David Hamkins and Asaf Karagila},
  title = {Kelley-Morse set theory does not prove the class {F}odor theorem},
  journal = {},
  year = {},
  volume = {},
  number = {},
  pages = {},
  month = {},
  note = {manuscript under review},
  abstract = {},
  keywords = {under-review},
  eprint = {1904.04190},
  archivePrefix = {arXiv},
  primaryClass = {math.LO},
  source = {},
  doi = {},
  url = {http://wp.me/p5M0LV-1RD},
}
V. Gitman, J. D. Hamkins, and A. Karagila, “Kelley-Morse set theory does not prove the class Fodor theorem.” (manuscript under review) Abstract. We show that Kelley-Morse (KM) set theory does not prove the class Fodor principle, the assertion that every regressive class function $F:S\to\newcommand\Ord{\text{Ord}}\Ord$ defined on a stationary class $S$ is constant on a stationary subclass. Indeed, it is relatively consistent with KM for any infinite $\lambda$ with $\omega\leq\lambda\leq\Ord$ that there is a class function $F:\Ord\to\lambda$ that is not constant on any stationary class. Strikingly, it is consistent with KM that there is a class $A\subseteq\omega\times\Ord$, such that each section $A_n=\{\alpha\mid (n,\alpha)\in A\}$ contains a class club, but $\bigcap_n A_n$ is empty. Consequently, it is relatively consistent with KM that the class club filter is not $\sigma$-closed.
The class Fodor principle is the assertion that every regressive class function $F:S\to\Ord$ defined on a stationary class $S$ is constant on a stationary subclass of $S$. This statement can be expressed in the usual second-order language of set theory, and the principle can therefore be sensibly considered in the context of any of the various second-order set-theoretic systems, such as Gödel-Bernays (GBC) set theory or Kelley-Morse (KM) set theory. Just as with the classical Fodor's lemma in first-order set theory, the class Fodor principle is equivalent, over a weak base theory, to the assertion that the class club filter is normal. We shall investigate the strength of the class Fodor principle and try to find its place within the natural hierarchy of second-order set theories. We shall also define and study weaker versions of the class Fodor principle.
If one tries to prove the class Fodor principle by adapting one of the classical proofs of the first-order Fodor’s lemma, then one inevitably finds oneself needing to appeal to a certain second-order class-choice principle, which goes beyond the axiom of choice and the global choice principle, but which is not available in Kelley-Morse set theory. For example, in one standard proof, we would want for a given $\Ord$-indexed sequence of non-stationary classes to be able to choose for each member of it a class club that it misses. This would be an instance of class-choice, since we seek to choose classes here, rather than sets. The class choice principle $\text{CC}(\Pi^0_1)$, it turns out, is sufficient for us to make these choices, for this principle states that if every ordinal $\alpha$ admits a class $A$ witnessing a $\Pi^0_1$-assertion $\varphi(\alpha,A)$, allowing class parameters, then there is a single class $B\subseteq \Ord\times V$, whose slices $B_\alpha$ witness $\varphi(\alpha,B_\alpha)$; and the property of being a class club avoiding a given class is $\Pi^0_1$ expressible.
Thus, the class Fodor principle, and consequently also the normality of the class club filter, is provable in the relatively weak second-order set theory $\text{GBC}+\text{CC}(\Pi^0_1)$. This theory is known to be weaker in consistency strength than the theory $\text{GBC}+\Pi^1_1$-comprehension, which is itself strictly weaker in consistency strength than KM.
But meanwhile, although the class choice principle is weak in consistency strength, it is not actually provable in KM; indeed, even the weak fragment $\text{CC}(\Pi^0_1)$ is not provable in KM. Those results were proved several years ago by the first two authors, but they can now be seen as consequences of the main result of this article (see Corollary 15). In light of that result, however, one should perhaps not have expected to be able to prove the class Fodor principle in KM.
Indeed, it follows similarly from arguments of the third author in his dissertation that if $\kappa$ is an inaccessible cardinal, then there is a forcing extension $V[G]$ with a symmetric submodel $M$ such that $V_\kappa^M=V_\kappa$, which implies that $\mathcal M=(V_\kappa,\in, V^M_{\kappa+1})$ is a model of Kelley-Morse, and in $\mathcal M$, the class Fodor principle fails in a very strong sense.
In this article, adapting the ideas of Karagila to the second-order set-theoretic context and using similar methods as in Gitman and Hamkins’s previous work on KM, we shall prove that every model of KM has an extension in which the class Fodor principle fails in that strong sense: there can be a class function $F:\Ord\to\omega$, which is not constant on any stationary class. In particular, in these models, the class club filter is not $\sigma$-closed: there is a class $B\subseteq\omega\times\Ord$, each of whose vertical slices $B_n$ contains a class club, but $\bigcap B_n$ is empty.
Main Theorem. Kelley-Morse set theory KM, if consistent, does not prove the class Fodor principle. Indeed, if there is a model of KM, then there is a model of KM with a class function $F:\Ord\to \omega$, which is not constant on any stationary class; in this model, therefore, the class club filter is not $\sigma$-closed.
We shall also investigate various weak versions of the class Fodor principle.
Definition. For a cardinal $\kappa$, the class $\kappa$-Fodor principle asserts that every class function $F:S\to\kappa$ defined on a stationary class $S\subseteq\Ord$ is constant on a stationary subclass of $S$. The class ${<}\Ord$-Fodor principle is the assertion that the class $\kappa$-Fodor principle holds for every cardinal $\kappa$. The bounded class Fodor principle asserts that every regressive class function $F:S\to\Ord$ on a stationary class $S\subseteq\Ord$ is bounded on a stationary subclass of $S$. The very weak class Fodor principle asserts that every regressive class function $F:S\to\Ord$ on a stationary class $S\subseteq\Ord$ is constant on an unbounded subclass of $S$.
We shall separate these principles as follows.
Theorem. Suppose KM is consistent.
1. There is a model of KM in which the class Fodor principle fails, but the class ${<}\Ord$-Fodor principle holds.
2. There is a model of KM in which the class $\omega$-Fodor principle fails, but the bounded class Fodor principle holds.
3. There is a model of KM in which the class $\omega$-Fodor principle holds, but the bounded class Fodor principle fails.
4. $\text{GB}^-$ proves the very weak class Fodor principle.
Finally, we show that the class Fodor principle can neither be created nor destroyed by set forcing.
Theorem. The class Fodor principle is invariant by set forcing over models of $\text{GBC}^-$. That is, it holds in an extension if and only if it holds in the ground model.
Let us conclude this brief introduction by mentioning the following easy negative instance of the class Fodor principle for certain GBC models. This argument seems to be a part of set-theoretic folklore. Namely, consider an $\omega$-standard model of GBC set theory $M$ having no $V_\kappa^M$ that is a model of ZFC. A minimal transitive model of ZFC, for example, has this property. Inside $M$, let $F(\kappa)$ be the least $n$ such that $V_\kappa^M$ fails to satisfy $\Sigma_n$-collection. This is a definable class function $F:\Ord^M\to\omega$ in $M$, but it cannot be constant on any stationary class in $M$, because by the reflection theorem there is a class club of cardinals $\kappa$ such that $V_\kappa^M$ satisfies $\Sigma_n$-collection.
Read more by going to the full article:
V. Gitman, J. D. Hamkins, and A. Karagila, “Kelley-Morse set theory does not prove the class Fodor theorem.” (manuscript under review)
|
J. D. Hamkins and J. Reitz, “The set-theoretic universe $V$ is not necessarily a class-forcing extension of HOD,” ArXiv e-prints, 2017. (manuscript under review)
@ARTICLE{HamkinsReitz:The-set-theoretic-universe-is-not-necessarily-a-forcing-extension-of-HOD,
  author = {Joel David Hamkins and Jonas Reitz},
  title = {The set-theoretic universe {$V$} is not necessarily a class-forcing extension of {HOD}},
  journal = {ArXiv e-prints},
  year = {2017},
  volume = {},
  number = {},
  pages = {},
  month = {September},
  note = {manuscript under review},
  abstract = {},
  keywords = {under-review},
  source = {},
  doi = {},
  eprint = {1709.06062},
  archivePrefix = {arXiv},
  primaryClass = {math.LO},
  url = {http://jdh.hamkins.org/the-universe-need-not-be-a-class-forcing-extension-of-hod},
}
Abstract. In light of the celebrated theorem of Vopěnka, proving in ZFC that every set is generic over $\newcommand\HOD{\text{HOD}}\HOD$, it is natural to inquire whether the set-theoretic universe $V$ must be a class-forcing extension of $\HOD$ by some possibly proper-class forcing notion in $\HOD$. We show, negatively, that if ZFC is consistent, then there is a model of ZFC that is not a class-forcing extension of its $\HOD$ for any class forcing notion definable in $\HOD$ and with definable forcing relations there (allowing parameters). Meanwhile, S. Friedman (2012) showed, positively, that if one augments $\HOD$ with a certain ZFC-amenable class $A$, definable in $V$, then the set-theoretic universe $V$ is a class-forcing extension of the expanded structure $\langle\HOD,\in,A\rangle$. Our result shows that this augmentation process can be necessary. The same example shows that $V$ is not necessarily a class-forcing extension of the mantle, and the method provides a counterexample to the intermediate model property, namely, a class-forcing extension $V\subseteq V[G]$ by a certain definable tame forcing and a transitive intermediate inner model $V\subseteq W\subseteq V[G]$ with $W\models\text{ZFC}$, such that $W$ is not a class-forcing extension of $V$ by any class forcing notion with definable forcing relations in $V$. This improves upon a previous example of Friedman (1999) by omitting the need for $0^\sharp$.
In 1972, Vopěnka proved the following celebrated result.
Theorem. (Vopěnka) If $V=L[A]$ where $A$ is a set of ordinals, then $V$ is a forcing extension of the inner model $\HOD$.
The result is now standard, appearing in Jech (Set Theory 2003, p. 249) and elsewhere, and the usual proof establishes a stronger result, stated in ZFC simply as the assertion: every set is generic over $\HOD$. In other words, for every set $a$ there is a forcing notion $\mathbb{B}\in\HOD$ and a $\HOD$-generic filter $G\subseteq\mathbb{B}$ for which $a\in\HOD[G]\subseteq V$. The full set-theoretic universe $V$ is therefore the union of all these various set-forcing generic extensions $\HOD[G]$.
It is natural to wonder whether these various forcing extensions $\HOD[G]$ can be unified or amalgamated to realize $V$ as a single class-forcing extension of $\HOD$ by a possibly proper class forcing notion in $\HOD$. We expect that it must be a very high proportion of set theorists and set-theory graduate students, who upon first learning of Vopěnka’s theorem, immediately ask this question.
Main Question. Must the set-theoretic universe $V$ be a class-forcing extension of $\HOD$?
We intend the question to be asking more specifically whether the universe $V$ arises as a bona-fide class-forcing extension of $\HOD$, in the sense that there is a class forcing notion $\mathbb{P}$, possibly a proper class, which is definable in $\HOD$ and which has definable forcing relation $p\Vdash\varphi(\tau)$ there for any desired first-order formula $\varphi$, such that $V$ arises as a forcing extension $V=\HOD[G]$ for some $\HOD$-generic filter $G\subseteq\mathbb{P}$, not necessarily definable.
In this article, we shall answer the question negatively, by providing a model of ZFC that cannot be realized as such a class-forcing extension of its $\HOD$.
Main Theorem. If ZFC is consistent, then there is a model of ZFC which is not a forcing extension of its $\HOD$ by any class forcing notion definable in that $\HOD$ and having a definable forcing relation there.
Throughout this article, when we say that a class is definable, we mean that it is definable in the first-order language of set theory allowing set parameters.
The main theorem should be placed in contrast to the following result of Sy Friedman.
Theorem. (Friedman 2012) There is a definable class $A$, which is strongly amenable to $\HOD$, such that the set-theoretic universe $V$ is a generic extension of $\langle \HOD,\in,A\rangle$.
This is a positive answer to the main question, if one is willing to augment $\HOD$ with a class $A$ that may not be definable in $\HOD$. Our main theorem shows that in general, this kind of augmentation process is necessary.
It is natural to ask a variant of the main question in the context of set-theoretic geology.
Question. Must the set-theoretic universe $V$ be a class-forcing extension of its mantle?
The mantle is the intersection of all set-forcing grounds, and so the universe is close in a sense to the mantle, perhaps one might hope that it is close enough to be realized as a class-forcing extension of it. Nevertheless, the answer is negative.
Theorem. If ZFC is consistent, then there is a model of ZFC that does not arise as a class-forcing extension of its mantle $M$ by any class forcing notion with definable forcing relations in $M$.
We also use our results to provide some counterexamples to the intermediate-model property for forcing. In the case of set forcing, it is well known that every transitive model $W$ of ZFC set theory that is intermediate, $V\subseteq W\subseteq V[G]$, between a ground model $V$ and a forcing extension $V[G]$, arises itself as a forcing extension $W=V[G_0]$.
In the case of class forcing, however, this can fail.
Theorem. If ZFC is consistent, then there are models of ZFC set theory $V\subseteq W\subseteq V[G]$, where $V[G]$ is a class-forcing extension of $V$ and $W$ is a transitive inner model of $V[G]$, but $W$ is not a forcing extension of $V$ by any class forcing notion with definable forcing relations in $V$.
Theorem. If ZFC + Ord is Mahlo is consistent, then one can form such a counterexample to the class-forcing intermediate model property $V\subseteq W\subseteq V[G]$, where $G\subset\mathbb{B}$ is $V$-generic for an Ord-c.c. tame definable complete class Boolean algebra $\mathbb{B}$, but nevertheless $W$ does not arise by class forcing over $V$ by any definable forcing notion with a definable forcing relation.
For more complete details, please go to the paper (click through to the arxiv for a pdf).
J. D. Hamkins and J. Reitz, “The set-theoretic universe $V$ is not necessarily a class-forcing extension of HOD,” ArXiv e-prints, 2017. (manuscript under review)
@ARTICLE{HamkinsReitz:The-set-theoretic-universe-is-not-necessarily-a-forcing-extension-of-HOD,
AUTHOR = {Joel David Hamkins and Jonas Reitz},
TITLE = {The set-theoretic universe {$V$} is not necessarily a class-forcing extension of {HOD}},
JOURNAL = {ArXiv e-prints},
YEAR = {2017},
MONTH = {September},
NOTE = {manuscript under review},
keywords = {under-review},
eprint = {1709.06062},
archivePrefix = {arXiv},
primaryClass = {math.LO},
URL = {http://jdh.hamkins.org/the-universe-need-not-be-a-class-forcing-extension-of-hod},
}
|
Recap of Lecture 18
Last lecture addressed the angular momentum of an electron revolving around the nucleus. This is described by the \(l\) quantum number, and an electron in any non-spherical orbital (i.e., anything other than an s orbital) will have angular momentum (you should know the formula). The \(m_l\) quantum number designates the orientation of that angular momentum with respect to the z-axis (and there is a formula for that too), and the degeneracy can be partially broken by magnetic fields. We discussed that we cannot always make a one-to-one correspondence between quantum numbers and orbitals. We discussed basic spectroscopy of hydrogen-like systems and specifically selection rules (for \(n\), \(l\), and \(m_l\)). We introduced the helium system, discussed why we cannot solve it exactly and need approximations, and discussed the first approximation, which is the worst possible one.
The "ignorance is bliss" Approximation (redux)
If we ignore the \(\dfrac {e^2}{4\pi\epsilon_0 r_{12}}\) electron-electron repulsion term, then the electron 1 coordinates are separable from the electron 2 coordinates, so that
\[\Psi_{total} = \psi_{el_{1}}\psi_{el_{2}}\]
or in braket notation
\[ | \Psi_{total} \rangle = | \psi_{el_1} \rangle | \psi_{el_2} \rangle\]
With some operator algebra, something important arises - the one electron energies are additive:
\[\hat{H} \Psi_{total} = (\hat{H}_{el_1} + \hat{H}_{el_2}) \psi_{n\ {el_1}} \psi_{n\ {el_2}} = (E_{n_1} + E_{n_2}) \psi_{n\ {el_1}} \psi_{n\ {el_2}} \]
or in bra-ket notation
\[\hat{H} | \Psi_{total} \rangle = \hat{H} | \psi_{el_1} \rangle | \psi_{el_2} \rangle = (E_{n_1} + E_{n_2}) | \psi_{el_1} \rangle | \psi_{el_2} \rangle \]
The energy for a ground state Helium atom (both electrons in lowest state) is then
\[E_{He_{1s}} = \underset{\text{energy of single electron in helium}}{E_{n_1}} + \underset{\text{energy of single electron in helium}}{E_{n_2}} = -R\left(\dfrac{Z^2}{1}\right) -R \left(\dfrac{Z^2}{1}\right) = -8R\]
where \(R\) is the Rydberg constant (\(13.6\; eV\)), which is also the magnitude of the lowest energy of the hydrogen atom, and \(Z=2\) for the helium nucleus. Experimentally, we find that the total energy for a ground state helium atom is \(E_{He_{1s}} = -5.8066\,R\). The big difference in results is due to the approximation we used to get to the value of \(-8\;R\). The "ignorance is bliss" approximation greatly overestimates how strongly the helium atom is bound (the predicted energy is far too low); this is a poor approximation, and we need to address electron-electron repulsion properly (or at least better).
The general form of the "Ignorance is Bliss" approximation, which is almost as wrong as you can be for multi-electron atoms:
\[\underbrace{E_{z} = nZ^2E_{z=1}}_{\text{bad approximation}}\]
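A quick numeric check of this bad approximation against the experimental value quoted above can be done in a few lines (a sketch; energies are in units of the Rydberg constant \(R\), and the variable names are ours):

```python
# Quick numeric check of the "ignorance is bliss" estimate for ground-state
# helium against the experimental value quoted in the text.
# Energies are in units of the Rydberg constant R.
Z, n_electrons = 2, 2
E_approx = -n_electrons * Z**2           # E_z = n * Z^2 * E_(z=1), with E_(z=1) = -1 R
E_exp = -5.8066                          # experimental He ground-state energy (text)
error = (E_approx - E_exp) / abs(E_exp)
print(E_approx)                          # -8
print(round(100 * error, 1))             # -37.8 (percent over-binding)
```

The roughly 38% error shows just how poor neglecting electron-electron repulsion is.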
Electron-electron repulsion results in "Shielding"
One way to take electron-electron repulsion into account is to
modify the form of the wavefunction. A logical modification is to change the nuclear charge, \(Z\), in the wavefunctions to an effective nuclear charge, from +2 to a smaller value, \(Z_{eff}=\zeta\). The rationale for making this modification is that one electron partially shields the nuclear charge from the other electron.
Figure: Electron-electron shielding leads to a reduced effective nuclear charge. The attractive force of the nucleus on electron 2, \(V(r_2)\), is partially countered by the repulsive force between electron 1 and electron 2, \(V(r_{12})\).
Electron-electron shielding leads to a reduced effective nuclear charge. \(Z_{eff}\) is used instead of \(Z\) in radial part of wavefunctions.
A region of negative charge density between one of the electrons and the +2 nucleus makes the potential energy between them more positive (decreases the attraction between them). We can effect this change mathematically by using \(Z_{eff} < 2\) in the wavefunction expression. If the shielding were complete, then \(Z_{eff}\) would equal 1. If there is no shielding, then \(Z_{eff} = 2\). So a way to take into account the electron-electron interaction is by saying it produces a shielding effect. The shielding is not zero, and it isn't complete, so the effective nuclear charge is between one and two.
The lower the energy level, the higher the probability of finding the electron close to the nucleus. Also, the lower the angular momentum quantum number, the closer the electron is to the nucleus.
The presence of \(Z_{eff}\) in the radial portions of the wavefunctions also means that the electron probability distributions associated with hydrogenic atomic orbitals in multi-electron systems are different from the exact atomic orbitals for hydrogen. The Figure above compares the radial distribution functions for an electron in a 1s orbital of hydrogen (the ground state), a 2s orbital in hydrogen (an excited configuration of hydrogen) and a 1s orbital in helium that is described by the best variational value of \(Z_{eff}\). Our use of hydrogen-like orbitals in quantum mechanical calculations for multi-electron atoms helps us to interpret our results for multi-electron atoms in terms of the properties of a system we can solve exactly.
Shielding is like a Nickelback concert
More electrons create the shielding or screening effect.
Shielding or screening means the electrons closer to the nucleus block the outer valence electrons from getting close to the nucleus.
Imagine being in a crowded auditorium in a Nickelback concert. The enthusiastic fans are going to surround the auditorium, trying to get as close to the celebrity on the stage as possible. They are going to prevent people in the back from seeing the celebrity or even the stage. This is
shielding. The stage is the nucleus and the celebrity is the protons. The fans are the electrons. Electrons closest to the nucleus will try to be as close to the nucleus as possible. The outer/valence electrons that are farther away from the nucleus will be shielded by the inner electrons. Therefore, the inner electrons prevent the outer electrons from forming a strong attraction to the nucleus. The degree to which electrons are screened by inner electrons increases in the order ns < np < nd < nf, where n is the energy level. The inner electrons will be attracted to the nucleus much more than the outer electrons. Thus, the attractive forces of the valence electrons to the nucleus are reduced due to the shielding effects. That is why it is easier to remove valence electrons than the inner electrons. Shielding also reduces the effective nuclear charge felt by an electron.
Penetration
Penetration is the ability of an electron to get close to the nucleus. Thus, the closer the electron is to the nucleus, the higher the penetration. Electrons with higher penetration will shield outer electrons from the nucleus more effectively. Electrons with higher penetration are closer to the nucleus than electrons with lower penetration. Electrons with lower penetration are shielded from the nucleus more.
Slater's Rules Crudely Approximate Shielding
Slater's rules are an old and not-unreasonable way to estimate the effective nuclear charge \(Z_{eff}\) from the real number of protons in the nucleus and the effective shielding of the electrons in each orbital "shell" (e.g., to compare the effective nuclear charge and shielding of 3d and 4s electrons in transition metals). The proper approach requires solving the Schrödinger equation (e.g., via the variational method), but this is a good rule of thumb that illustrates the gist.
Step 1: Write the electron configuration of the atom in the following form:
(1s) (2s, 2p) (3s, 3p) (3d) (4s, 4p) (4d) (4f) (5s, 5p) . . .
Step 2: Identify the electron of interest, and ignore all electrons in higher groups (to the right in the list from Step 1). These do not shield electrons in lower groups.
Step 3: Slater's rules are now broken into two cases: the shielding experienced by an s- or p- electron, and the shielding experienced by a d- or f- electron.
Graphical depiction of Slater's rules with shielding constants indicated.
Group | Other electrons in the same group | Electrons in group(s) with principal quantum number n and azimuthal quantum number < l | Electrons in group(s) with principal quantum number n-1 | Electrons in all group(s) with principal quantum number < n-1
[1s] | 0.30 | - | - | -
[ns, np] | 0.35 | - | 0.85 | 1.00
[nd] or [nf] | 0.35 | 1.00 | 1.00 | 1.00
Sum together the contributions as described in the appropriate rule above to obtain an estimate of the shielding constant, \(S\). \(S\) is found by totaling the screening by all electrons except the one in question.
\[ S = \sum n_i S_i \]
We can quantitatively represent this difference between \(Z\) and \(Z_{eff}\) as follows:
\[ S=Z-Z_{eff} \]
Rearranging this formula to solve for \(Z_{eff}\) we obtain:
\[ Z_{eff}=Z-S \]
For a more basic overview of shielding and orbital penetration, check this module out.
Example \(\PageIndex{1}\)
What is the shielding constant experienced by a 2p electron in the nitrogen atom?
Strategy: Determine the electron configuration of nitrogen (Z=7), then write it in the appropriate form. Use Slater's rules to calculate the shielding constant for the electron.
Solution:
A. N: \(1s^2\, 2s^2\, 2p^3\), written in Slater form as N: \((1s^2)(2s^2, 2p^3)\)
B. \(S[2p] = 1.00(0) + 0.85(2) + 0.35(4) = 3.10\)
Now, what is its effective charge?
Effective Charge:
\[ Z_{eff}=Z-S \]
\[ Z_{eff} = 7 - 3.10 = 3.90\]
Now, what is the effective energy of this electron if it followed the hydrogen atom energy expression (single electron system)?
Effective Energy:
\(E_{2p}=-R\dfrac{Z^2}{n^2}\).
Let's substitute \(Z_{eff}\) for \(Z\) then
\[E_{2p}=-R \dfrac{Z_{Eff}^2}{n^2}\]
\[E_{2p} = - R \dfrac{(3.90)^2}{2^2} \approx -3.8R\]
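The worked example above can be sketched in a few lines of Python. This is a minimal sketch for electrons in an (ns, np) group only, using the shielding constants from the table; the function name and argument grouping are ours, not a standard API:

```python
# Slater's-rules shielding for the nitrogen 2p example worked in the text.
def slater_sp_shielding(same_group, n_minus_1, lower):
    """Shielding constant S for an s- or p-electron (table constants)."""
    return 0.35 * same_group + 0.85 * n_minus_1 + 1.00 * lower

Z = 7                                     # nitrogen
# N: (1s^2)(2s^2, 2p^3): a 2p electron sees 4 others in its own group,
# 2 electrons with n - 1, and none with principal quantum number < n - 1
S = slater_sp_shielding(same_group=4, n_minus_1=2, lower=0)
Z_eff = Z - S
E_2p = -Z_eff**2 / 2**2                   # hydrogen-like energy in units of R
print(round(S, 2), round(Z_eff, 2), round(E_2p, 2))   # 3.1 3.9 -3.8
```

Note that the energy estimate uses \(Z_{eff}\), not \(S\), in the hydrogen-like formula.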
Slater's rules provide an approximate guide to why certain orbitals fill before others: electrons in different orbitals experience different effective nuclear charges due to different shielding, and hence have different energy levels.
(left) The energy levels for a hydrogen atom (the degeneracy per level is \(n^2\)). (right) The energy levels for a multi-electron atom (the \(\ell\) degeneracy is broken, but not the \(m_l\) degeneracy).
The Variational Method Approximation
Start with the lowest energy state of a system \(\psi_0\). For the hydrogen atom, this state is \(\psi_{100}\), for the Harmonic Oscillator this state is \(v=0\), and for the Rigid Rotor this state is \(J=0\). Variational theorem states that if you substitute any function \(\phi\) for the true wavefunction \(\psi_0\), then calculate
\[ \color{red} E_{\phi} = \dfrac {\int \phi^* \hat{H} \phi d\tau}{\int \phi^* \phi d\tau} \ge E_o \label{1a}\]
where \(E_0\) is the true energy of the system. Equation \(\ref{1a}\) in braket notation is
\[ \color{red} E_{\phi} = \dfrac { \langle \phi | \hat{H} | \phi \rangle} { \langle \phi | \phi \rangle} \ge E_o \label{1b}\]
Note that \(E_{\phi} = E_0\) when \(\psi_0 = \phi\). This means that we can use any \(\phi\) and we will find \(E_{\phi}\), which is an upper bound for \(E_0\). The closer \(\phi\) is to \(\psi_0\), the lower the value of \(E_{\phi}\). The minimization of \(E_{\phi}\) thus leads to a better estimate of \(E_0\) and \(\psi_0\). Set \(\phi(\alpha,\beta,\gamma)\) with adjustable parameters \(\alpha, \beta, \gamma\), and solve for the parameters by differentiating \(E_{\phi}\) with respect to each parameter and setting the derivatives equal to 0. In formulas,
\[\dfrac {dE_{\phi}}{d\alpha} = 0,\]
\[\dfrac {dE_{\phi}}{d\beta}=0\]
\[\dfrac {dE_{\phi}}{d\gamma}=0\label{2}\]
Equation \(\ref{1b}\) is called the variational theorem and states that, for a time-independent Hamiltonian operator, any trial wavefunction will have a variational energy expectation value that is greater than or equal to the true ground state energy corresponding to the given Hamiltonian. Because of this, the variational energy is an upper bound to the true ground state energy of a given molecule.
The optimum parameters in a trial wavefunction are found by searching for minima in the potential landscape spanned by those parameters
The general approach of this method consists in choosing a "trial wavefunction" depending on one or more parameters, and finding the values of these parameters for which the expectation value of the energy is the lowest possible (Figure \(\PageIndex{2}\)).
As is clear from Equation \(\ref{1b}\), the variational method approximation requires that a trial wavefunction with one or more adjustable parameters be chosen.
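As a concrete instance of this recipe applied to helium, here is a sketch using the standard textbook result for a trial wavefunction with a single effective-charge parameter \(\zeta\). The energy formula \(E(\zeta) = \zeta^2 - 2Z\zeta + \tfrac{5}{8}\zeta\) (in hartrees, where 1 hartree = 2\(R\)) is quoted, not derived here:

```python
# Variational estimate of the helium ground-state energy with both electrons
# in a 1s orbital of adjustable effective charge zeta.
# Assumed (standard textbook) energy expectation value, in hartrees:
#     E(zeta) = zeta**2 - 2*Z*zeta + (5/8)*zeta
Z = 2                                    # helium

def E(zeta):
    return zeta**2 - 2 * Z * zeta + 0.625 * zeta

# dE/dzeta = 2*zeta - 2*Z + 5/8 = 0 gives the optimal parameter directly
zeta_opt = Z - 5 / 16
print(zeta_opt)                          # 1.6875
print(E(zeta_opt))                       # -2.84765625 hartree
print(E(zeta_opt) / 0.5)                 # -5.6953125, in units of R
```

Consistent with the variational theorem, the result \(-5.695\,R\) lies above the experimental \(-5.8066\,R\) quoted earlier, and \(\zeta \approx 1.69 < 2\) reflects the shielding discussed above.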
|
Next week I will start teaching Calculus for the first time. I am preparing my notes, and, as a pure mathematician, I cannot come up with a good real-world example of the following.
Are there good examples of \begin{equation} \lim_{x \to c} f(x) \neq f(c), \end{equation} or of cases when $c$ is not in the domain of $f(x)$?
The only thing that came to my mind is the study of physics phenomena at temperature $T=0 \,\mathrm{K}$, but I am not very satisfied with it.
Any ideas are more than welcome!
Warning
The more approachable the examples are (e.g., to a freshman college student), the more grateful I will be! In particular, I would like the examples to come from the natural or social sciences. Indeed, in a first class in Calculus the importance of indicator functions, etc., is not clear.
Edit
As B. Goddard pointed out, a very subtle point in calculus is the one of removable singularities. If possible, I would love to have some example of this phenomenon. Indeed, most of the examples from physics are of functions with poles or indeterminacy in the domain.
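The removable-singularity phenomenon mentioned in the edit can be illustrated numerically with plain Python. This is a standard example (\(f(x) = \sin(x)/x\)), not one taken from the post:

```python
# A removable singularity: f(x) = sin(x)/x is undefined at x = 0,
# yet f(x) -> 1 as x -> 0.
import math

def f(x):
    return math.sin(x) / x              # raises ZeroDivisionError at x = 0

for x in [0.1, 0.01, 0.001]:
    print(x, f(x))                      # values approach 1

try:
    f(0.0)
except ZeroDivisionError:
    print("x = 0 is not in the domain of f")
```

The limit exists and equals 1 even though 0 is not in the domain, exactly the situation \(\lim_{x \to c} f(x) \neq f(c)\) asks about.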
|
Here is an answer to the extra question regarding regular rings:
A finitely generated module is flat if and only if it is locally free, so for finitely generated modules your question translates to:
Q1 When is a torsion-free module/sheaf locally free?
and
Q2 What is a simple example of a torsion-free (say coherent) sheaf on a smooth algebraic variety that is not locally free?
So, first of all, the locus where a torsion-free coherent sheaf on a smooth algebraic variety fails to be locally free is at least of codimension $2$, in particular we have the following partial answer to
Q1:
A 1.1 Any torsion-free coherent sheaf on a smooth algebraic curve is locally free.
This is not true in higher dimensions and you can find a counter example on
any smooth algebraic variety of dimension at least $2$:
A 2 Let $X$ be a smooth algebraic variety of dimension at least $2$ (e.g., $\mathbb A^2$) and let $\mathcal F=\mathfrak m_x\subset \mathcal O_X$ be the (maximal) ideal of a closed point $x\in X$. Then $\mathcal F$ is not flat. (This should qualify as the simplest example possible with the requirement of $X$ being smooth, given A 1.1 above.)
Proof of A 2 This ideal is isomorphic to $\mathcal O_X$ on the open set $X\setminus\{x\}$, but it cannot be generated by fewer elements than the dimension of the local ring, $\dim \mathcal O_{X,x}\geq 2$, so it cannot be locally free.
On a surface you have to do a little better to get local freeness: the locus where a reflexive sheaf is not locally free is at least of codimension $3$, so
A 1.2 Any reflexive sheaf on a smooth algebraic surface is locally free.
Being reflexive is equivalent to torsion-free and $S_2$. See more on $S_2$ in this answer.
|
Comparing Two Interfaces for High-Frequency Modeling
It is always important to choose the correct tool for the job, and choosing the correct interface for high-frequency electromagnetic simulations is no different. In this blog post, we take a simple example of a plane wave incident upon a dielectric slab in air and solve it in two different ways to highlight the practical differences and relative advantages of the
Electromagnetic Waves, Frequency Domain interface and the Electromagnetic Waves, Beam Envelopes interface.
Meshing Free Space in Two Electromagnetic Interfaces
Both of these interfaces solve the frequency-domain form of Maxwell’s equations, but they do it in slightly different ways. The
Electromagnetic Waves, Frequency Domain interface, which is available in both the RF and Wave Optics modules, solves directly for the complex electric field everywhere in the simulation. The Electromagnetic Waves, Beam Envelopes interface, which is available solely in the Wave Optics Module, will solve for the complex envelope of the electric field for a given wave vector. For the remainder of this post, we will refer to the Electromagnetic Waves, Frequency Domain interface as a Full-Wave simulation and the Electromagnetic Waves, Beam Envelopes interface as a Beam-Envelopes simulation.
To see why the distinction between Full-Wave and Beam-Envelopes is important, we will begin by discussing the trivial example of a plane wave propagating in free space, as shown in the image below. We will then apply the lessons learned to the dielectric slab.
A graphical representation of a plane wave propagating in free space, where the red, green, and blue lines represent the electric field, magnetic field, and Poynting vector, respectively.
To properly resolve the harmonic nature of the solution in a
Full-Wave simulation, we need to mesh finer than the oscillations in the field. This is discussed further in these previous blog posts on tools for solving wave electromagnetics problems and modeling their materials. To simulate a plane wave propagating in free space, the number of mesh elements will then scale with the size of the free space domain in which we are interested. But what about the Beam-Envelopes simulation?
The Beam-Envelopes method is particularly well-suited for models where we have good prior knowledge of the wave vector, \mathbf{k}. Practically speaking, this means that we are solving for the fields using the ansatz \mathbf{E}\left(\mathbf{r}\right) = \mathbf{E_1}\left(\mathbf{r}\right)e^{-j\mathbf{k_1}\cdot\mathbf{r}}. Notice that the only unknown in the ansatz is the envelope function \mathbf{E_1}\left(\mathbf{r}\right). This is the quantity that needs to be meshed to obtain a full solution, hence the mention of beam envelopes in the name of the interface. In the case of a plane wave in free space, the form of the ansatz matches exactly with the analytical solution. We know that the envelope function will be a constant, as shown by the green line in the figure below, so how many mesh elements do we need to resolve the solution? Just one.
The electric field and phase of a plane wave propagating in free space. In the field plot (left), the blue and green lines show the real part and absolute value of E(r), which are real(\mathbf{E_1}\left(\mathbf{r}\right)e^{-j\mathbf{k_1}\cdot\mathbf{r}}) = E_1\cos(kr) and abs(\mathbf{E_1}\left(\mathbf{r}\right)e^{-j\mathbf{k_1}\cdot\mathbf{r}}) = E_1, respectively. The phase plot (right) shows the argument of E(r). In both plots, the x-axis is normalized to a wavelength, so this represents one full oscillation of the wave.
In practice,
Beam-Envelopes simulations are more flexible than the \mathbf{E}\left(\mathbf{r}\right) = \mathbf{E_1}\left(\mathbf{r}\right)e^{-j\mathbf{k_1}\cdot\mathbf{r}} ansatz we just used. This is for two reasons. First, instead of specifying a wave vector, we can specify a user-defined phase function, \phi\left(\mathbf{r}\right) = \mathbf{k}\cdot\mathbf{r}. Second, there is also a bidirectional option that allows for a second propagating wave and a full ansatz of \mathbf{E}\left(\mathbf{r}\right) = \mathbf{E_1}\left(\mathbf{r}\right)e^{-j\phi_1\left(\mathbf{r}\right)} + \mathbf{E_2}\left(\mathbf{r}\right)e^{-j\phi_2\left(\mathbf{r}\right)}. This is the functionality that we will take advantage of in modeling the dielectric slab (also called a Fabry-Pérot etalon).
The points discussed here will come up again in the dielectric slab example, and so we highlight them again for clarity. The size of mesh elements in a Full-Wave simulation is proportional to the wavelength because we are solving directly for the full field, while the mesh element size in a Beam-Envelopes simulation can be independent of the wavelength because we are solving for the envelope function of a given phase/wave vector. You can greatly reduce the number of mesh elements for large structures if a Beam-Envelopes simulation can be performed instead of a Full-Wave simulation, but this is only possible if you have prior knowledge of the wave vector (or phase function) everywhere in the simulation. Since the degrees of freedom, memory used, and simulation time all depend on the number of mesh elements, this can have a large influence on the computational requirements of your simulation.
Meshing a Dielectric Slab in COMSOL Multiphysics
Using the 2D geometry shown below, we can clearly see the different waves that need to be accounted for in a simulation of a dielectric slab illuminated by a plane wave. On the left of the slab, we have to account for the incoming wave traveling to the right, as well as the reflected wave traveling to the left. Because of internal reflections inside the slab itself, we have to account for both left- and right-traveling waves in the slab, and finally, the transmitted waves on the right. We also choose a specific example so that we can use concrete numbers.
Let’s make the dielectric slab an undoped silicon (Si) wafer that is 525 µm thick. We will simulate the response to terahertz (THz) radiation (i.e., submillimeter waves), which encompasses wavelengths of approximately 1 mm to 100 µm and is increasingly used for classifying semiconductor properties. The refractive index of undoped Si in this range is a constant n = 3.42. We choose the domain length to be 15 mm in the direction of propagation.
The simulation geometry. Red arrows indicate incident and reflected waves. The left and right regions are air with n = 1, and the Si slab in the center has a refractive index n = 3.42. The x_i on the bottom denote the spatial locations of the planes. The slab is centered in the simulation domain, such that x_1 = (15 mm – 525 µm)/2. Note that this image is not to scale.
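This lossless slab also has a closed-form transmittance (the standard Airy formula for an etalon at normal incidence), which makes a handy independent sanity check on either interface's output. This sketch is not part of the original post's workflow, and the resonance order m = 4 is an assumption chosen only so the resonance falls in the THz band:

```python
# Closed-form transmittance of the lossless Si etalon described above.
import math

n, d = 3.42, 525e-6                     # refractive index and thickness (text)
R = ((n - 1) / (n + 1))**2              # single-surface reflectance
F = 4 * R / (1 - R)**2                  # coefficient of finesse

def transmittance(lam):
    delta = 4 * math.pi * n * d / lam   # round-trip phase inside the slab
    return 1 / (1 + F * math.sin(delta / 2)**2)

lam_res = 2 * n * d / 4                 # m = 4 resonance, about 898 um
print(transmittance(lam_res))           # 1.0 (lossless etalon transmits fully)
print(transmittance(1e-3))              # partial transmission at 1 mm
```

Comparing such numbers with the simulated transmission is a quick regression test on the model setup.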
For a 2D
Full-Wave simulation, we set a maximum element size of \lambda/8n to ensure the solution is well resolved. The simulation is invariant in the y direction and so we choose our simulation height to be \lambda/(8\times3.42). Because we have constrained the wave to travel along the x-axis, we choose a mapped mesh to generate rectangular elements. The mesh will then be one mesh element thick in the y direction, with a mesh element size in the x direction of \lambda/8n, where n depends on whether it is air or Si. Again, note that this is a wavelength-dependent mesh.
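The wavelength scaling of this mesh can be made concrete with a rough element count. This is a back-of-the-envelope sketch only; the DOF reported by the solver also depend on the element order and the number of field components solved for:

```python
# Back-of-the-envelope element count for the 1D mapped Full-Wave mesh
# described above (maximum element size lambda/(8n) in each domain).
import math

def fullwave_elements(lam, L_total=15e-3, L_si=525e-6, n_si=3.42):
    L_air = L_total - L_si                            # combined air path
    n_air_elems = math.ceil(L_air / (lam / 8))        # air, n = 1
    n_si_elems = math.ceil(L_si / (lam / (8 * n_si))) # silicon, n = 3.42
    return n_air_elems + n_si_elems

print(fullwave_elements(1e-3))      # ~131 elements at 1 mm
print(fullwave_elements(250e-6))    # ~522 elements at 250 um, about 4x more
```

The roughly fourfold growth from 1 mm to 250 µm mirrors the DOF growth reported in the table later in the post.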
Before setting up the mesh for a
Beam-Envelopes simulation, we first need to specify our user-defined phase function. The Gaussian Beam Incident at the Brewster Angle example in the Application Gallery demonstrates how to define a user-defined phase function for each domain through the use of variables, and we will use the same technique here. Referring to x_0, x_1, and x_2 in the geometry figure above, we define the phase function for a plane wave traveling left to right in the three domains as
\phi\left(x\right) = k_0\left(x-x_0\right)
\phi\left(x\right) = k_0\left(x_1-x_0\right) + n k_0\left(x-x_1\right)
\phi\left(x\right) = k_0\left(x_1-x_0\right) + n k_0\left(x_2-x_1\right) + k_0\left(x-x_2\right)
where n = 3.42 and the first line corresponds to \phi in the leftmost domain, the second line is \phi in the Si slab, and the bottom line is \phi in the rightmost domain. We then use this variable for the phase of the first wave, and its negative for the phase of the second wave. Because we have completely captured the full phase variation of the solution in the ansatz, this allows a mapped mesh of only three elements for the entire model — one for each domain. Let's examine what the mesh looks like in the Si slab for these two interfaces at two different wavelengths, corresponding to 1 mm and 250 µm.
The mesh in the Si (dielectric) slab. From left to right, we have the Full-Wave mesh at 1 mm, the Full-Wave mesh at 250 µm, and the Beam-Envelopes mesh at any wavelength. Note that the Full-Wave mesh density clearly increases with decreasing wavelength, while the Beam-Envelopes mesh is a single rectangular element at any wavelength.
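A sketch of such a piecewise phase variable in Python, mirroring the three-domain description above. We take the left edge of the domain at x = 0 and use the geometry values quoted in the text; the variable and function names are ours:

```python
# Piecewise phase variable phi(x) for the bidirectional Beam-Envelopes ansatz.
import math

n = 3.42                                # Si refractive index (text)
x1 = (15e-3 - 525e-6) / 2               # left face of the slab (text)
x2 = x1 + 525e-6                        # right face of the slab

def phi(x, lam):
    k0 = 2 * math.pi / lam              # free-space wave number
    if x <= x1:
        return k0 * x                   # leftmost air domain
    if x <= x2:
        return k0 * x1 + n * k0 * (x - x1)          # Si slab
    return k0 * x1 + n * k0 * (x2 - x1) + k0 * (x - x2)  # rightmost air

# The phase is continuous across both interfaces, as required
lam = 1e-3
print(phi(x1, lam), phi(x2, lam))
```

The key property is continuity of the accumulated phase at each interface; the slab segment simply advances phase n times faster than the air segments.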
Yes, that is the correct mesh for the Si slab in the
Beam-Envelopes simulation. Because the ansatz matches the solution exactly, we only need three total elements for the entire simulation: one for the Si slab and one each for the two air domains on either side of it. This is independent of wavelength. On the other hand, the mesh for the Full-Wave simulation is approximately four times more dense at \lambda = 250 µm than at \lambda = 1 mm. Let’s look at this in concrete numbers for the degrees of freedom (DOF) solved for in these simulations.
Wavelength Simulated | Full-Wave Simulation DOF Used | Beam-Envelopes Simulation DOF Used
1 mm | 4,134 | 74
250 µm | 16,444 | 74
The number of degrees of freedom (DOF) used at two different wavelengths for the Full-Wave and Beam-Envelopes simulations.
Again, it is important to point out that this does not mean that one interface is better or worse than another. They are different techniques and choosing the appropriate option is an important simulation decision. However, it is fair to say that a Full-Wave simulation is more general, since we did not need to supply it with a wave vector or phase function. It can solve a wider class of problems than Beam-Envelopes simulations, but Beam-Envelopes simulations can greatly reduce the DOF when the wave vector is known. As we have seen in a previous blog post, memory usage in a simulation strongly depends on the number of DOF. Do not blindly use a Beam-Envelopes simulation everywhere though! Let's take a look at another example where we intentionally make a bad choice for the wave vector and see what happens.
Making Smart Choices for the Wave Vector
In the hypothetical free space example above, we chose a unidirectional wave vector. Here, we will do the same for the Si slab. It is important to emphasize that choosing a single wave vector where we know that the solution will be a superposition of left- and right-traveling waves is an exceptionally bad choice, and we do this here solely for demonstration purposes. Instead of using the bidirectional formulation with a user-defined phase function, let's naively choose a single "guess" wave vector of \mathbf{k_G} = n\mathbf{k_0} = \mathbf{k} and see what the damage is. Using our ansatz, inside of the dielectric slab we have
\mathbf{E_G}\left(\mathbf{r}\right)e^{-j\mathbf{k_G}\cdot\mathbf{r}} = \mathbf{E_1}e^{-j\mathbf{k}\cdot\mathbf{r}} + \mathbf{E_2}e^{+j\mathbf{k}\cdot\mathbf{r}}
where the left-hand side is the solution we are computing and the right-hand side is exact. Now, we manipulate the equation slightly to examine the spatial variation in the solution.
\mathbf{E_G}\left(\mathbf{r}\right) = \mathbf{E_1}e^{-j\left(\mathbf{k}-\mathbf{k_G}\right)\cdot\mathbf{r}} + \mathbf{E_2}e^{+j\left(\mathbf{k}+\mathbf{k_G}\right)\cdot\mathbf{r}}
We intentionally chose the case where \mathbf{k_G} = \mathbf{k}, which means we can simplify to
\mathbf{E_G}\left(\mathbf{r}\right) = \mathbf{E_1} + \mathbf{E_2}e^{+j\left(\mathbf{k}+\mathbf{k_G}\right)\cdot\mathbf{r}}
Since \mathbf{E_1} and \mathbf{E_2} are constants determined by the Fresnel relations at the boundaries of the dielectric slab, this means that the only spatial variation in the computed solution will come from e^{+j\left(\mathbf{k}+\mathbf{k_G}\right)\cdot\mathbf{r}}. The minimum mesh requirement in the slab is then determined by the "effective" wavelength of this oscillating term,
\lambda_{eff} = \dfrac{2\pi}{\left|\mathbf{k}+\mathbf{k_G}\right|} = \dfrac{2\pi}{2\left|\mathbf{k}\right|} = \dfrac{\lambda}{2}
which is half of the original wavelength. Not only have we made the Beam-Envelopes mesh wavelength dependent, but the required mesh in the dielectric slab for this choice of wave vector needs to be twice as dense as the mesh for a Full-Wave simulation. We have actually made the situation worse with the poor choice of a single wave vector for a simulation with multiple reflections. We could, of course, simply double the mesh density and obtain the correct solution, but that would defeat the purpose of choosing the Beam-Envelopes simulation in the first place. Make smart choices!
Simulation Results
Another practical question is how do the results of a
Full-Wave and Beam-Envelopes simulation compare? They are both solving Maxwell’s equations on the same geometry with the same material properties, and so the various results (transmission, reflection, field values) agree as you would expect. There are slight differences though.
If you want to evaluate the electric field of the right-propagating wave in the dielectric slab, you can do that in the
Beam-Envelopes simulation. This is, of course, because we solved for both right- and left-propagating waves and obtained the total field by summing these two contributions. This could be extracted from the Full-Wave simulation in this case as well, but it would require additional user-defined postprocessing and may not be possible in all cases. It may seem counterintuitive that we actually have more information readily available from a Beam-Envelopes simulation, even though it is computationally less expensive. We must remember, however, that this is simply the result of solving the model using the ansatz we specified initially.
Concluding Thoughts on Interfaces for High-Frequency Modeling
We have examined the simple case of a dielectric slab in free space using both the
Electromagnetic Waves, Frequency Domain and Electromagnetic Waves, Beam Envelopes interfaces. In comparing Full-Wave and Beam-Envelopes simulations, we showed that a Beam-Envelopes simulation can handle much larger simulations, but only in cases where we have good knowledge of the wave vector (or phase function) everywhere in the simulation. This knowledge is not required for a Full-Wave simulation, but the simulation must then be meshed on the order of a wavelength, as opposed to meshing the change in the envelope function in a Beam-Envelopes simulation. It is also worth mentioning that most Beam-Envelopes meshes will need more than the three elements shown here. This was only possible here because we chose a textbook example with an analytical solution to use as a teaching model. For more realistic simulations, you can refer to the Mach-Zehnder Modulator or Self-Focusing Gaussian Beam examples in the Application Gallery.
Note that the
Electromagnetic Waves, Frequency Domain interface is available in both the RF and Wave Optics modules, although with slightly different features. The Full-Wave simulation discussed in this post could be performed in either module, although the Beam-Envelopes simulation requires the Wave Optics Module. For a full list of differences between the RF and Wave Optics modules, you can refer to this specification chart for COMSOL Multiphysics products.
|
I'm a bit stumped on this one.
Show that $\lim_{x\to0} \frac{e^x -1}{\sin(x)} = 1$ using power series.
The instructions are not to use L'Hôpital's Rule. I cannot find a way to do this without L'Hôpital, even after simplifying using the series expansions.
$$\lim_{x\to0} \frac{e^x -1}{\sin(x)} =\lim_{x\to0} \frac{-1+1+x+\frac{x^2}{2!}+\cdots}{x-\frac{x^3}{3!}+\cdots}=\lim_{x\to0}\frac{x\left(1+\frac{x}{2!}+\cdots\right)}{x\left(1-\frac{x^2}{3!}+\cdots\right)} = 1$$
A simpler solution in my opinion using fundamental limits and the definition of the derivative. Note that $$ \lim_{x\to 0}\frac{e^x-1}{\sin x}= \lim_{x\to 0}\left( \frac{e^x-1}{x}\times \frac{x}{\sin x} \right)=\lim_{x\to 0}\frac{e^x-1}{x}\times\lim_{x\to 0}\frac{x}{\sin x} =\frac{d}{dx}(e^x)\bigg|_{x=0}\times 1=1 $$
With high-school limits:
$$\frac{\mathrm e^x-1}{\sin x}=\underbrace{\frac{\mathrm e^x-1}{x}}_{\substack{\downarrow\\\tfrac{\mathrm{d(e}^x)}{\mathrm dx}\Bigm\vert_{x=0}=1}} \underbrace{\frac x{\sin x}}_{\substack{\downarrow\\1}}$$
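The series manipulation above is easy to double-check symbolically; a short sketch with sympy (an illustration, not part of the original argument):

```python
import sympy as sp

x = sp.symbols('x')

# Both series start with x, so the ratio of leading terms is 1/1 = 1.
num = sp.series(sp.exp(x) - 1, x, 0, 4).removeO()  # x + x**2/2 + x**3/6
den = sp.series(sp.sin(x), x, 0, 4).removeO()      # x - x**3/6

limit_value = sp.limit((sp.exp(x) - 1) / sp.sin(x), x, 0)
assert limit_value == 1
assert sp.expand(num - (x + x**2/2 + x**3/6)) == 0
assert sp.expand(den - (x - x**3/6)) == 0
```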
|
J. D. Hamkins, “Every countable model of set theory embeds into its own constructible universe,” J. Math. Logic, vol. 13, iss. 2, p. 1350006, 27, 2013.
@article {Hamkins2013:EveryCountableModelOfSetTheoryEmbedsIntoItsOwnL,
AUTHOR = {Hamkins, Joel David},
TITLE = {Every countable model of set theory embeds into its own
constructible universe},
JOURNAL = {J. Math. Logic},
FJOURNAL = {J.~Math.~Logic},
VOLUME = {13},
YEAR = {2013},
NUMBER = {2},
PAGES = {1350006, 27},
ISSN = {0219-0613},
MRCLASS = {03C62 (03E99 05C20 05C60 05C63)},
MRNUMBER = {3125902},
MRREVIEWER = {Robert S. Lubarsky},
DOI = {10.1142/S0219061313500062},
eprint = {1207.0963},
archivePrefix = {arXiv},
primaryClass = {math.LO},
URL = {http://wp.me/p5M0LV-jn},
}
In this article, I prove that every countable model of set theory $\langle M,{\in^M}\rangle$, including every well-founded model, is isomorphic to a submodel of its own constructible universe $\langle L^M,{\in^M}\rangle$. Another way to say this is that there is an embedding
$$j:\langle M,{\in^M}\rangle\to \langle L^M,{\in^M}\rangle$$ that is elementary for quantifier-free assertions in the language of set theory.
Main Theorem 1. Every countable model of set theory $\langle M,{\in^M}\rangle$ is isomorphic to a submodel of its own constructible universe $\langle L^M,{\in^M}\rangle$.
The proof uses universal digraph combinatorics, including an acyclic version of the countable random digraph, which I call the countable random $\mathbb{Q}$-graded digraph, and higher analogues arising as uncountable Fraisse limits, leading eventually to what I call the hypnagogic digraph, a set-homogeneous, class-universal, surreal-numbers-graded acyclic class digraph, which is closely connected with the surreal numbers. The proof shows that $\langle L^M,{\in^M}\rangle$ contains a submodel that is a universal acyclic digraph of rank $\text{Ord}^M$, and so in fact this model is universal for all countable acyclic binary relations of this rank. When $M$ is ill-founded, this includes all acyclic binary relations. The method of proof also establishes the following, thereby answering a question posed by Ewan Delanoy.
Main Theorem 2. The countable models of set theory are linearly pre-ordered by embeddability: for any two countable models of set theory $\langle M,{\in^M}\rangle$ and $\langle N,{\in^N}\rangle$, either $M$ is isomorphic to a submodel of $N$ or conversely. Indeed, the countable models of set theory are pre-well-ordered by embeddability in order type exactly $\omega_1+1$.
The proof shows that the embeddability relation on the models of set theory conforms with their ordinal heights, in that any two models with the same ordinals are bi-embeddable; any shorter model embeds into any taller model; and the ill-founded models are all bi-embeddable and universal.
The proof method arises most easily in finite set theory, showing that the nonstandard hereditarily finite sets $\text{HF}^M$ coded in any nonstandard model $M$ of PA or even of $I\Delta_0$ are similarly universal for all acyclic binary relations. This strengthens a classical theorem of Ressayre, while simplifying the proof, replacing a partial saturation and resplendency argument with a soft appeal to graph universality.
Main Theorem 3. If $M$ is any nonstandard model of PA, then every countable model of set theory is isomorphic to a submodel of the hereditarily finite sets $\langle \text{HF}^M,{\in^M}\rangle$ of $M$. Indeed, $\langle\text{HF}^M,{\in^M}\rangle$ is universal for all countable acyclic binary relations.
In particular, every countable model of ZFC and even of ZFC plus large cardinals arises as a submodel of $\langle\text{HF}^M,{\in^M}\rangle$. Thus, inside any nonstandard model of finite set theory, we may cast out some of the finite sets and thereby arrive at a copy of any desired model of infinite set theory, having infinite sets, uncountable sets or even large cardinals of whatever type we like.
The proof, in brief: for every countable acyclic digraph, consider the partial order induced by the edge relation, and extend this order to a total order, which may be embedded in the rational order $\mathbb{Q}$. Thus, every countable acyclic digraph admits a $\mathbb{Q}$-grading, an assignment of rational numbers to nodes such that all edges point upwards. Next, one can build a countable homogeneous, universal, existentially closed $\mathbb{Q}$-graded digraph, simply by starting with nothing, and then adding finitely many nodes at each stage, so as to realize the finite pattern property. The result is a computable presentation of what I call the countable random $\mathbb{Q}$-graded digraph $\Gamma$. If $M$ is any nonstandard model of finite set theory, then we may run this computable construction inside $M$ for a nonstandard number of steps. The standard part of this nonstandard finite graph includes a copy of $\Gamma$. Furthermore, since $M$ thinks it is finite and acyclic, it can perform a modified Mostowski collapse to realize the graph in the hereditarily finite sets of $M$. By looking at the sets corresponding to the nodes in the copy of $\Gamma$, we find a submodel of $M$ that is isomorphic to $\Gamma$, which is universal for all countable acyclic binary relations. So every countable model of ZFC is isomorphic to a submodel of $M$.
The article closes with a number of questions, which I record here (and which I have also asked on mathoverflow: Can there be an embedding $j:V\to L$ from the set-theoretic universe $V$ to the constructible universe $L$, when $V\neq L$?) Although the main theorem shows that every countable model of set theory embeds into its own constructible universe $$j:M\to L^M,$$ this embedding $j$ is constructed completely externally to $M$ and there is little reason to expect that $j$ could be a class in $M$ or otherwise amenable to $M$. To what extent can we prove or refute the possibility that $j$ is a class in $M$? This amounts to considering the matter internally as a question about $V$. Surely it would seem strange to have a class embedding $j:V\to L$ when $V\neq L$, even if it is elementary only for quantifier-free assertions, since such an embedding is totally unlike the sorts of embeddings that one usually encounters in set theory. Nevertheless, I am at a loss to refute the hypothesis, and the possibility that there might be such an embedding is intriguing, if not tantalizing, for one imagines all kinds of constructions that pull structure from $L$ back into $V$.
Question 1. Can there be an embedding $j:V\to L$ when $V\neq L$?
By embedding, I mean an isomorphism from $\langle V,{\in}\rangle$ to its range in $\langle L,{\in}\rangle$, which is the same as a quantifier-free-elementary map $j:V\to L$. The question is most naturally formalized in Gödel-Bernays set theory, asking whether there can be a GB-class $j$ forming such an embedding. If one wants $j:V\to L$ to be a definable class, then this of course implies $V=\text{HOD}$, since the definable $L$-order can be pulled back to $V$, via $x\leq y\iff j(x)\leq_L j(y)$. More generally, if $j$ is merely a class in Gödel-Bernays set theory, then the existence of an embedding $j:V\to L$ implies global choice, since from the class $j$ we can pull back the $L$-order. For these reasons, we cannot expect every model of ZFC or of GB to have such embeddings. Can they be added generically? Do they have some large cardinal strength? Are they outright refutable?
If they are not outright refutable, then it would seem natural that these questions might involve large cardinals; perhaps $0^\sharp$ is relevant. But I am unsure which way the answers will go. The existence of large cardinals provides extra strength, but may at the same time make it harder to have the embedding, since it pushes $V$ further away from $L$. For example, it is conceivable that the existence of $0^\sharp$ will enable one to construct the embedding, using the Silver indiscernibles to find a universal submodel of $L$; but it is also conceivable that the non-existence of $0^\sharp$, because of covering and the corresponding essential closeness of $V$ to $L$, may make it easier for such a $j$ to exist. Or perhaps it is simply refutable in any case. The first-order analogue of the question is:
Question 2. Does every set $A$ admit an embedding $j:\langle A,{\in}\rangle \to \langle L,{\in}\rangle$? If not, which sets do admit such embeddings?
The main theorem shows that every countable set $A$ embeds into $L$. What about uncountable sets? Let us make the question extremely concrete:
Question 3. Does $\langle V_{\omega+1},{\in}\rangle$ embed into $\langle L,{\in}\rangle$? How about $\langle P(\omega),{\in}\rangle$ or $\langle\text{HC},{\in}\rangle$?
It is also natural to inquire about the nature of $j:M\to L^M$ even when it is not a class in $M$. For example, can one find such an embedding for which $j(\alpha)$ is an ordinal whenever $\alpha$ is an ordinal? The embedding arising in the proof of the main theorem definitely does not have this feature.
Question 4. Does every countable model $\langle M,{\in^M}\rangle$ of set theory admit an embedding $j:M\to L^M$ that takes ordinals to ordinals?
Probably one can arrange this simply by being a bit more careful with the modified Mostowski procedure in the proof of the main theorem. And if this is correct, then numerous further questions immediately come to mind, concerning the extent to which we can ensure more attractive features for the embeddings $j$ that arise in the main theorems. This will be particularly interesting in the case of well-founded models, as well as in the case of $j:V\to L$, as in question 1, if that should be possible.
Question 5. Can there be a nontrivial embedding $j:V\to L$ that takes ordinals to ordinals?
Finally, I inquire about the extent to which the main theorems of the article can be extended from the countable models of set theory to the $\omega_1$-like models:
Question 6. Does every $\omega_1$-like model of set theory $\langle M,{\in^M}\rangle$ admit an embedding $j:M\to L^M$ into its own constructible universe? Are the $\omega_1$-like models of set theory linearly pre-ordered by embeddability?
|
Slingshot argument From The Art and Popular Culture Encyclopedia
This type of argument was dubbed the "slingshot" by philosophers Jon Barwise and John Perry (1981) due to its disarming simplicity. It is usually said that versions of the slingshot argument have been given by Gottlob Frege, Alonzo Church, W. V. Quine, and Donald Davidson. However, it has been disputed by Lorenz Krüger (1995) that there is much unity in this tradition. Moreover, Krüger rejects Davidson's claim that the argument can refute the correspondence theory of truth. Stephen Neale (1995) claims, controversially, that the most compelling version was suggested by Kurt Gödel (1944).
These arguments are sometimes modified to support the alternative, and evidently stronger, conclusion that there is only one fact, or one true proposition, state of affairs, truth condition, truthmaker, and so on.

The argument
One version of the argument (Perry 1996) proceeds as follows.
Assumptions:

1. Substitution. If two terms designate the same thing, then substituting one for the other in a sentence does not change the designation of that sentence.
2. Redistribution. Rearranging the parts of a sentence does not change the designation of that sentence, provided the truth conditions of the sentence do not change.
3. Every sentence is equivalent to a sentence of the form F(a). In other words, every sentence has the same designation as some sentence that attributes a property to something. (For example, "All men are mortal" is equivalent to "The number 1 has the property of being such that all men are mortal".)
4. For any two objects there is a relation that holds uniquely between them. (For example, if the objects in question are denoted by "a" and "b", the relation in question might be R(x, y), which is stipulated to hold just in case x = a and y = b.)
Let S and T be arbitrary true sentences, designating Des(S) and Des(T), respectively. (No assumptions are made about what kinds of things Des(S) and Des(T) are.) It is now shown by a series of designation-preserving transformations that Des(S) = Des(T). Here, "<math>\iota x</math>" can be read as "the x such that".
1. <math>S</math>
2. <math>\phi (a)</math> (assumption 3)
3. <math>a = \iota x (\phi (x) \land x=a)</math> (redistribution)
4. <math>a = \iota x (\pi (x,b) \land x=a)</math> (substitution, assumption 4)
5. <math>\pi (a,b)</math> (redistribution)
6. <math>b = \iota x (\pi (a,x) \land x=b )</math> (redistribution)
7. <math>b = \iota x (\psi (x) \land x=b )</math> (substitution, assumption 3)
8. <math>\psi (b)</math> (redistribution)
9. <math>T</math> (assumption 3)
Note that (1)-(9) is not a derivation of T from S. Rather, it is a series of (allegedly) designation-preserving transformation steps.

Responses to the argument
As Gödel (1944) observed, the slingshot argument does not go through if Bertrand Russell's famous account of definite descriptions is assumed. Russell claimed that the proper logical interpretation of a sentence of the form "The F is G" is: Exactly one thing is F, and that thing is also G.
Or, in the language of first-order logic:
<math>\exists x (\forall y (F(y) \leftrightarrow y = x) \land G(x))</math>
When the sentences above containing <math>\iota</math>-expressions are expanded out to their proper form, the steps involving substitution are seen to be illegitimate. Consider, for example, the move from (3) to (4). On Russell's account, (3) and (4) are shorthand for:
3'. <math>\exists x (\forall y ((\phi(y) \land y=a) \leftrightarrow y = x) \land a = x)</math> 4'. <math>\exists x (\forall y ((\pi (y,b) \land y=a) \leftrightarrow y = x) \land a = x)</math>
Clearly the substitution principle and assumption 4 do not license the move from (3') to (4'). Thus, one way to look at the slingshot is as simply another argument in favor of Russell's theory of definite descriptions.
If one is not willing to accept Russell's theory, then it seems wise to challenge either substitution or redistribution, which otherwise seem to be the weakest points in the argument. Perry (1996), for example, rejects both of these principles, proposing to replace them with certain weaker, qualified versions that do not allow the slingshot argument to go through. Gaetano Licata (2011) rejected the slingshot argument, showing that the concept of identity (=) employed in Davidson and Gödel's demonstration is very problematic, because Gödel (following Russell) uses G. W. Leibniz's principle of the identity of indiscernibles, which suffers from the criticism proposed by Ludwig Wittgenstein: to state that x=y when all properties of x are also properties of y is false, because y and x are different signs, while to state that x=x when all properties of x are also properties of x is nonsense. Licata's thesis is that the sign = (usually employed between numbers) needs a logical foundation before being employed between objects and properties.
Unless indicated otherwise, the text in this article is either based on Wikipedia article "Slingshot argument" or another language Wikipedia page thereof used under the terms of the GNU Free Documentation License; or on original research by Jahsonic and friends. See Art and Popular Culture's copyright notice.
|
This is a little question inspired from Hartshorne's Geometry, which I've been juggling around for a while.
Suppose that $\Pi$ is the Cartesian plane $F^2$ for some field $F$, with the set of ordered pairs of elements of $F$ being the points and lines those subsets defined by linear equations. Let $\Pi'$ be the associated projective plane. Here $\Pi'$ is just $\Pi$ together with the points at infinity, which are the pencils of sets of parallel lines. A line in $\Pi'$ will be the subset consisting of a line of $\Pi$ plus its unique point at infinity.
Also, let $V=F^3$ be a three-dimensional vector space over $F$. Let $\Sigma$ be the set of $1$-dimensional subspaces of $V$, and call them points. If $W\subseteq V$ is a $2$-dimensional subspace of $V$, then the set of all 'points' contained in $W$ will be called a 'line,' and $\Sigma$ forms a projective plane.
I would like to see how $\Pi'$ and $\Sigma$ are isomorphic. I figure I should construct a bijection from the points of $\Pi'$ to the points of $\Sigma$, however, I don't see a natural mapping. At first I figured if $(a,b)\in\Pi'$, then maybe $(a,b)\mapsto\text{span}(a,b,0)$, but this isn't injective or surjective. Not only that, but I figure if $A$ and $B$ are points in $\Pi'$ on a line, then the images of all points in that line in $\Pi'$ should be in the same $2$-dimensional subspace in $\Sigma$.
I don't quite see why they should be isomorphic. Suppose two lines $l$ and $m$ intersect in $\Pi'$; shouldn't their intersection point map to the intersection of the lines in $\Sigma$? But what if this intersection doesn't even pass through the origin, and hence isn't even a $1$-dimensional subspace? Thank you for any insight on how to solve this.
Edit: For my last question above, I just realized that all the $2$-dimensional subspaces pass through the origin (whoops!), and thus the intersection will contain the origin as well, so that's not an issue.
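For what it's worth, here is a numeric sketch of the map usually used in this situation (it is not stated in the question, so treat it as a hedged suggestion): send the affine point $(a,b)$ to $\operatorname{span}(a,b,1)$. Collinear points of $\Pi$ then lift to vectors lying in a common $2$-dimensional subspace of $V$, which is exactly a zero-determinant condition; the check below takes $F=\mathbb{Q}$ for concreteness.

```python
from fractions import Fraction

def lift(a, b):
    """Candidate embedding Pi -> Sigma: (a, b) |-> span(a, b, 1)."""
    return (Fraction(a), Fraction(b), Fraction(1))

def det3(u, v, w):
    """Determinant of the 3x3 matrix with rows u, v, w."""
    return (u[0]*(v[1]*w[2] - v[2]*w[1])
          - u[1]*(v[0]*w[2] - v[2]*w[0])
          + u[2]*(v[0]*w[1] - v[1]*w[0]))

# Three points on the affine line y = 2x + 3 ...
p, q, r = lift(0, 3), lift(1, 5), lift(2, 7)
# ... lift to vectors spanning (at most) a 2-dimensional subspace of F^3,
# i.e. they lie on a common 'line' of Sigma.
assert det3(p, q, r) == 0

# Non-collinear points lift to linearly independent vectors.
s = lift(0, 0)
assert det3(p, q, s) != 0
```

Points at infinity would then correspond to the remaining $1$-dimensional subspaces, those of the form $\operatorname{span}(a,b,0)$.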
|
A local maximum point on a function is a point $(x,y)$ on the graph of the function whose $y$ coordinate is larger than all other $y$ coordinates on the graph at points "close to'' $(x,y)$. More precisely, $(x,f(x))$ is a local maximum if there is an interval $(a,b)$ with $a< x< b$ and $f(x)\ge f(z)$ for every $z$ in $(a,b)$. Similarly, $(x,y)$ is a local minimum point if it has locally the smallest $y$ coordinate. Again being more precise: $(x,f(x))$ is a local minimum if there is an interval $(a,b)$ with $a< x< b$ and $f(x)\le f(z)$ for every $z$ in $(a,b)$. A local extremum is either a local minimum or a local maximum.
Local maximum and minimum points are quite distinctive on the graph of a function, and are therefore useful in understanding the shape of the graph. In many applied problems we want to find the largest or smallest value that a function achieves (for example, we might want to find the minimum cost at which some task can be performed) and so identifying maximum and minimum points will be useful for applied problems as well. Some examples of local maximum and minimum points are shown in figure 5.1.1.
If $(x,f(x))$ is a point where $f(x)$ reaches a local maximum or minimum, and if the derivative of $f$ exists at $x$, then the graph has a tangent line and the tangent line must be horizontal. This is important enough to state as a theorem, though we will not prove it.
Theorem 5.1.1 (Fermat's Theorem) If $f(x)$ has a local extremum at $x=a$ and $f$ is differentiable at $a$, then $f'(a)=0$.
Thus, the only points at which a function can have a local maximum or minimum are points at which the derivative is zero, as in the left hand graph in figure 5.1.1, or the derivative is undefined, as in the right hand graph. Any value of $x$ for which $f'(x)$ is zero or undefined is called a critical value for $f$. When looking for local maximum and minimum points, you are likely to make two sorts of mistakes: You may forget that a maximum or minimum can occur where the derivative does not exist, and so forget to check whether the derivative exists everywhere. You might also assume that any place that the derivative is zero is a local maximum or minimum point, but this is not true. A portion of the graph of $\ds f(x)=x^3$ is shown in figure 5.1.2. The derivative of $f$ is $f'(x)=3x^2$, and $f'(0)=0$, but there is neither a maximum nor minimum at $(0,0)$.
Since the derivative is zero or undefined at both local maximum and local minimum points, we need a way to determine which, if either, actually occurs. The most elementary approach, but one that is often tedious or difficult, is to test directly whether the $y$ coordinates "near'' the potential maximum or minimum are above or below the $y$ coordinate at the point of interest. Of course, there are too many points "near'' the point to test, but a little thought shows we need only test two provided we know that $f$ is continuous (recall that this means that the graph of $f$ has no jumps or gaps).
Suppose, for example, that we have identified three points at which $f'$ is zero or nonexistent: $\ds (x_1,y_1)$, $\ds (x_2,y_2)$, $\ds (x_3,y_3)$, and $\ds x_1< x_2< x_3$ (see figure 5.1.3). Suppose that we compute the value of $f(a)$ for $\ds x_1< a< x_2$, and that $\ds f(a)< f(x_2)$. What can we say about the graph between $a$ and $\ds x_2$? Could there be a point $\ds (b,f(b))$, $\ds a< b< x_2$ with $\ds f(b)>f(x_2)$? No: if there were, the graph would go up from $(a,f(a))$ to $(b,f(b))$ then down to $\ds (x_2,f(x_2))$ and somewhere in between would have a local maximum point. (This is not obvious; it is a result of the Extreme Value Theorem, theorem 6.1.2.) But at that local maximum point the derivative of $f$ would be zero or nonexistent, yet we already know that the derivative is zero or nonexistent only at $\ds x_1$, $\ds x_2$, and $\ds x_3$. The upshot is that one computation tells us that $\ds (x_2,f(x_2))$ has the largest $y$ coordinate of any point on the graph near $\ds x_2$ and to the left of $\ds x_2$. We can perform the same test on the right. If we find that on both sides of $\ds x_2$ the values are smaller, then there must be a local maximum at $\ds (x_2,f(x_2))$; if we find that on both sides of $\ds x_2$ the values are larger, then there must be a local minimum at $\ds (x_2,f(x_2))$; if we find one of each, then there is neither a local maximum nor minimum at $\ds x_2$.
It is not always easy to compute the value of a function at a particular point. The task is made easier by the availability of calculators and computers, but they have their own drawbacks—they do not always allow us to distinguish between values that are very close together. Nevertheless, because this method is conceptually simple and sometimes easy to perform, you should always consider it.
Example 5.1.2 Find all local maximum and minimum points for the function $\ds f(x)=x^3-x$. The derivative is $\ds f'(x)=3x^2-1$. This is defined everywhere and is zero at $\ds x=\pm \sqrt{3}/3$. Looking first at $\ds x=\sqrt{3}/3$, we see that $\ds f(\sqrt{3}/3)=-2\sqrt{3}/9$. Now we test two points on either side of $\ds x=\sqrt{3}/3$, making sure that neither is farther away than the nearest critical value; since $\ds \sqrt{3}< 3$, $\ds \sqrt{3}/3< 1$ and we can use $x=0$ and $x=1$. Since $\ds f(0)=0>-2\sqrt{3}/9$ and $\ds f(1)=0>-2\sqrt{3}/9$, there must be a local minimum at $\ds x=\sqrt{3}/3$. For $\ds x=-\sqrt{3}/3$, we see that $\ds f(-\sqrt{3}/3)=2\sqrt{3}/9$. This time we can use $x=0$ and $x=-1$, and we find that $\ds f(-1)=f(0)=0< 2\sqrt{3}/9$, so there must be a local maximum at $\ds x=-\sqrt{3}/3$.
Of course this example is made very simple by our choice of points to test, namely $x=-1$, $0$, $1$. We could have used other values, say $-5/4$, $1/3$, and $3/4$, but this would have made the calculations considerably more tedious.
Example 5.1.3 Find all local maximum and minimum points for $f(x)=\sin x+\cos x$. The derivative is $f'(x)=\cos x-\sin x$. This is always defined and is zero whenever $\cos x=\sin x$. Recalling that $\cos x$ and $\sin x$ are the $x$ and $y$ coordinates of points on a unit circle, we see that $\cos x=\sin x$ when $x$ is $\pi/4$, $\pi/4\pm\pi$, $\pi/4\pm2\pi$, $\pi/4\pm3\pi$, etc. Since both sine and cosine have a period of $2\pi$, we need only determine the status of $x=\pi/4$ and $x=5\pi/4$. We can use $0$ and $\pi/2$ to test the critical value $x= \pi/4$. We find that $\ds f(\pi/4)=\sqrt{2}$, $\ds f(0)=1< \sqrt{2}$ and $\ds f(\pi/2)=1$, so there is a local maximum when $x=\pi/4$ and also when $x=\pi/4\pm2\pi$, $\pi/4\pm4\pi$, etc. We can summarize this more neatly by saying that there are local maxima at $\pi/4\pm 2k\pi$ for every integer $k$.
We use $\pi$ and $2\pi$ to test the critical value $x=5\pi/4$. The relevant values are $\ds f(5\pi/4)=-\sqrt2$, $\ds f(\pi)=-1>-\sqrt2$, $\ds f(2\pi)=1>-\sqrt2$, so there is a local minimum at $x=5\pi/4$, $5\pi/4\pm2\pi$, $5\pi/4\pm4\pi$, etc. More succinctly, there are local minima at $5\pi/4\pm 2k\pi$ for every integer $k$.
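The side-point test described in this section is mechanical enough to automate. A sketch of Example 5.1.2 using sympy (for illustration only; the test points are the same ones chosen in the text):

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = x**3 - x
fp = sp.diff(f, x)  # 3*x**2 - 1, defined everywhere

# Critical values: the solutions of f'(x) = 0.
crit = sorted(sp.solve(fp, x), key=float)  # [-sqrt(3)/3, sqrt(3)/3]

def classify(c, left, right):
    """Compare f at test points on either side of the critical value c."""
    fc = float(f.subs(x, c))
    lo, hi = float(f.subs(x, left)), float(f.subs(x, right))
    if lo > fc and hi > fc:
        return 'local min'
    if lo < fc and hi < fc:
        return 'local max'
    return 'neither'

# Test points as in the text: -1, 0, 1.
assert classify(crit[0], -1, 0) == 'local max'  # at x = -sqrt(3)/3
assert classify(crit[1], 0, 1) == 'local min'   # at x =  sqrt(3)/3
```

The same helper applied to $f(x)=x^3$ at the critical value $0$ would return `'neither'`, matching the discussion of figure 5.1.2.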
Exercises 5.1
In problems 1–12, find all local maximum and minimum points $(x,y)$ by the method of this section.
Ex 5.1.1 $\ds y=x^2-x$
Ex 5.1.2 $\ds y=2+3x-x^3$
Ex 5.1.3 $\ds y=x^3-9x^2+24x$
Ex 5.1.4 $\ds y=x^4-2x^2+3$
Ex 5.1.5 $\ds y=3x^4-4x^3$
Ex 5.1.6 $\ds y=(x^2-1)/x$
Ex 5.1.7 $\ds y=3x^2-(1/x^2)$
Ex 5.1.8 $y=\cos(2x)-x$
Ex 5.1.9 $\ds f(x) =\cases{ x-1 & $x < 2$ \cr x^2 & $x\geq 2$\cr}$
Ex 5.1.10 $\ds f(x) =\cases{x-3 & $x < 3$ \cr x^3 & $3\leq x \leq 5$\cr 1/x &$x>5$\cr}$
Ex 5.1.11 $\ds f(x) = x^2 - 98x + 4$
Ex 5.1.12 $\ds f(x) =\cases{ -2 & $x = 0$ \cr 1/x^2 &$x \neq 0$\cr}$
Ex 5.1.13 For any real number $x$ there is a unique integer $n$ such that $n \leq x < n +1$, and the greatest integer function is defined as $\ds\lfloor x\rfloor = n$. Where are the critical values of the greatest integer function? Which are local maxima and which are local minima?
Ex 5.1.14 Explain why the function $f(x) =1/x$ has no local maxima or minima.
Ex 5.1.15 How many critical points can a quadratic polynomial function have?
Ex 5.1.16 Show that a cubic polynomial can have at most two critical points. Give examples to show that a cubic polynomial can have zero, one, or two critical points.
Ex 5.1.17 Explore the family of functions $\ds f(x) = x^3 + cx +1$ where $c$ is a constant. How many and what types of local extremes are there? Your answer should depend on the value of $c$; that is, different values of $c$ will give different answers.
Ex 5.1.18 We generalize the preceding two questions. Let $n$ be a positive integer and let $f$ be a polynomial of degree $n$. How many critical points can $f$ have? (Hint: Recall the Fundamental Theorem of Algebra, which says that a polynomial of degree $n$ has at most $n$ roots.)
|
I understand an MDP (Markov Decision Process) model is a tuple of $\{S, A, P, R \}$ where:
- $S$ is a discrete set of states
- $A$ is a discrete set of actions
- $P$ is the transition matrix, i.e. $P(s' \mid s, a) \rightarrow [0,1]$
- $R$ is the reward function, i.e. $R(s, a, s') \rightarrow \mathbb{R}$
For a non-trivial MDP, say $1000$ states and $10$ actions, the transition matrix has theoretically $S \times A \times S = 10,000,000$ entries (though many entries will be $0$).
I understand that one way of generating the $P$ matrix is to estimate it via Monte Carlo sampling, by simulating the environment. However, with non-trivial state space and simulation costs, this could be prohibitively expensive.
In practice, when a non-trivial MDP is being formulated, what are the different ways an accurate $P$ matrix can be produced?
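To make the Monte Carlo option above concrete, here is a minimal sketch (the toy environment and all names are invented for illustration) that estimates $P$ as normalized visit counts over sampled transitions, storing only the nonzero entries since the matrix is typically sparse:

```python
import random
from collections import defaultdict

def estimate_P(step, n_states, n_actions, samples_per_pair=1000, seed=0):
    """Estimate P(s' | s, a) by simulating step(s, a, rng) -> s' repeatedly.

    Returns a dict keyed by (s, a), mapping to a dict {s': probability}.
    """
    rng = random.Random(seed)
    P = {}
    for s in range(n_states):
        for a in range(n_actions):
            counts = defaultdict(int)
            for _ in range(samples_per_pair):
                counts[step(s, a, rng)] += 1
            P[(s, a)] = {s2: c / samples_per_pair for s2, c in counts.items()}
    return P

# Toy environment: action 0 stays put, action 1 jumps to a uniform state.
def toy_step(s, a, rng):
    return s if a == 0 else rng.randrange(3)

P = estimate_P(toy_step, n_states=3, n_actions=2, samples_per_pair=2000)
assert P[(0, 0)] == {0: 1.0}                      # deterministic row
assert abs(sum(P[(0, 1)].values()) - 1.0) < 1e-9  # each row sums to 1
```

The cost issue mentioned above is visible here: the loop runs `n_states * n_actions * samples_per_pair` environment steps, which is exactly what becomes prohibitive for large state spaces or expensive simulators.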
|
Although the conversion of one element to another is the basis of natural radioactive decay, it is also possible to convert one element to another artificially; this process is called transmutation. Between 1921 and 1924, Patrick Blackett conducted experiments in which he converted a stable isotope of nitrogen to a stable isotope of oxygen: by bombarding \(\ce{^{14}N}\) with \(\alpha\) particles he created \(\ce{^{17}O}\). Transmutation may also be accomplished by bombardment with neutrons.
\[\ce{^{14}_7N + ^4_2He \rightarrow ^{17}_8O + ^1_1H}\]
Historically, part of Alchemy was the study of methods of creating gold from base metals, such as lead. Where the Alchemists failed in this quest, we can now succeed. Thus, bombardment of platinum-198 with a neutron creates an unstable isotope of platinum that undergoes beta decay to gold-199. Unfortunately, while we may succeed in making gold, the platinum we make it from is actually worth more than the gold, making this particular transmutation economically non-viable!
\[\ce{^{198}_{78}Pt + ^1_0n \rightarrow ^{199}_{78}Pt \rightarrow ^{199}_{79}Au + ^0_{-1}\beta}\]
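As a quick sanity check, both equations above conserve mass number and atomic number; a minimal sketch (the $(A, Z)$ pairs are transcribed directly from the equations, with the beta particle written as $(0, -1)$):

```python
def balanced(reactants, products):
    """Check conservation of mass number A and atomic number Z
    across a nuclear equation, given lists of (A, Z) pairs."""
    sum_A = lambda side: sum(A for A, Z in side)
    sum_Z = lambda side: sum(Z for A, Z in side)
    return sum_A(reactants) == sum_A(products) and \
           sum_Z(reactants) == sum_Z(products)

# 14N + 4He -> 17O + 1H
assert balanced([(14, 7), (4, 2)], [(17, 8), (1, 1)])

# 198Pt + n -> 199Pt, then 199Pt -> 199Au + beta
assert balanced([(198, 78), (1, 0)], [(199, 78)])
assert balanced([(199, 78)], [(199, 79), (0, -1)])
```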
Contributors Hans Lohninger (Epina eBook Team) Andrew R. Barron Steven B. Krivit
|
Description
We give an explicit construction of a pseudorandom generator for read-once formulas whose inputs can be read in arbitrary order. For formulas in $n$ inputs and arbitrary gates of fan-in at most $d = O(n/\log n)$, the pseudorandom generator uses $(1 - \Omega(1))n$ bits of randomness and produces an output that looks $2^{-\Omega(n)}$-pseudorandom to all such formulas. Our analysis is based on the following lemma. Let $pr = Mz + e$, where $M$ is the parity-check matrix of a sufficiently good binary error-correcting code of constant rate, $z$ is a random string, $e$ is a small-bias distribution, and all operations are modulo 2. Then for every pair of functions $f, g\colon \{0,1\}^{n/2} \to \{0,1\}$ and every equipartition $(I,J)$ of $[n]$, the distribution $pr$ is pseudorandom for the pair $(f(x|_I), g(x|_J))$, where $x|_I$ and $x|_J$ denote the restriction of $x$ to the coordinates in $I$ and $J$, respectively.
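The distribution $pr = Mz + e \pmod 2$ in the lemma is straightforward to sample; here is a toy sketch with numpy. Everything specific here is a stand-in: a random binary matrix replaces a genuine parity-check matrix of a good code, and independent sparse bit flips replace a true small-bias distribution, so this illustrates only the shape of the construction, not its pseudorandomness guarantees.

```python
import numpy as np

def sample_pr(M, rng, flip_prob=0.05):
    """Sample pr = M z + e (mod 2): z uniform over {0,1}^k,
    e a sparse noise vector standing in for a small-bias distribution."""
    n, k = M.shape
    z = rng.integers(0, 2, size=k)
    e = (rng.random(n) < flip_prob).astype(int)
    return (M @ z + e) % 2

rng = np.random.default_rng(0)
n, k = 16, 8                          # output length n, seed length k < n
M = rng.integers(0, 2, size=(n, k))   # stand-in for a parity-check matrix

pr = sample_pr(M, rng)
assert pr.shape == (n,)
assert set(np.unique(pr)) <= {0, 1}
```

Note the randomness accounting matches the abstract: the seed is the $k$ bits of $z$ plus the few bits needed for $e$, which is $(1-\Omega(1))n$ when the code has constant rate.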
|
I want to determine whether the following statement is true:
Let $R$ be any ring, and $V$ an $R$-module that is a union $V=\bigcup_{n=1}^\infty V_n$ of submodules $V_1 \subseteq V_2 \subseteq \dots$. If each $V_i$ is projective, then so is $V$.
My attempt at a proof is to use Zorn's lemma as follows: let $g : M \twoheadrightarrow N$ be a surjective $R$-module homomorphism. Let $f: V \to N$ be a module homomorphism. Consider the set $S$ of pairs $(V_i,f_i)$, where $f_i : V_i \to M$ is such that $g\circ f_i = f\big|_{V_i}$. Define a partial order on $S$ by $(V_i,f_i)\leq (V_j,f_j)$ if and only if $V_i \subseteq V_j$ and $f_j\big|_{V_i} = f_i$. The union of any chain gives an upper bound, so by Zorn's lemma, there is a maximal element $(V_m,f_m)$ of $S$. The problem here is that I don't think $V_m = V$ necessarily, since we may not be able to extend this particular $f_m$ to a larger $V_i$.
A possible counterexample to the statement is as follows: Let $p_1,p_2,p_3,\dots$ be the distinct prime numbers of $\mathbb{Z}^{>0}$. Set $V_i = \left(\prod_{k=1}^i p_k^i\right)^{-1}\mathbb{Z}$. Then each $V_i$ is a free $\mathbb{Z}$-module, hence projective. We have $\mathbb{Q} = \bigcup_{i=1}^\infty V_i$, and $V_1 \subseteq V_2 \subseteq \dots$, but $\mathbb{Q}$ is not a projective $\mathbb{Z}$-module.
Does this counterexample work? Is there an easier counterexample?
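The claim $\mathbb{Q}=\bigcup_i V_i$ amounts to saying that every positive integer $n$ divides $\prod_{k=1}^i p_k^i$ once $i$ is at least both the index of the largest prime factor of $n$ and its largest exponent. A quick computational check of this (using sympy's `prime` for the $k$-th prime):

```python
from math import prod
from sympy import prime

def V_denominator(i):
    """The denominator defining V_i = (p_1^i * ... * p_i^i)^(-1) Z."""
    return prod(prime(k)**i for k in range(1, i + 1))

def smallest_i_containing(n):
    """Smallest i with 1/n in V_i, i.e. with n | p_1^i * ... * p_i^i."""
    i = 1
    while V_denominator(i) % n != 0:
        i += 1
    return i

# 1/360 = 1/(2^3 * 3^2 * 5): needs i >= 3, since 5 = p_3 and the
# largest exponent is 3.
assert smallest_i_containing(360) == 3
assert V_denominator(3) % 360 == 0

# Every denominator is eventually covered, so the union of the V_i is Q.
assert all(smallest_i_containing(n) <= n for n in range(2, 50))
```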
|
This is simply proportional to the 3-tachyon correlation function. They warn you above the equation that the momentum conservation factor such as $$(2\pi)^D\delta^{(D)}(k_1-k_2-k_3)$$ is omitted everywhere and I am not even sure whether their normalization of the states includes the power of $2\pi$ factor. Pick Polchinski's textbook if you want every formula to be much more robust and reliable about similar details. At any rate, this delta function is what you get from all the factors of the form $\exp(ik_j\cdot x)$ for $j=1,2,3$.
You got the first power of $g$, which is a good thing and part of the result.
One of the two last excessive things you got in your calculation is the exponential of $\sum \alpha_{-n}z^n/n$. But this exponential may simply be replaced by $1$ because all the creation operators $\alpha_{-n}$ for positive $n$ annihilate the bra vector $\langle 0;k_1|$ on the left side – this is the Hermitian conjugate of the claim that the annihilation operators $\alpha_{n}$ for positive $n$ annihilate the ket vectors on the right. So from the Taylor expansion for the exponential, only the leading term $1$ survives.
That's great and the only excess factor you're left with is $z^{k_1\cdot k_2-1}$. But this is also equal to $1$ because the exponent vanishes when all the physical conditions are satisfied. Note that $$k_1\cdot k_2 = \frac 12[(k_1+k_2)^2 - k_1^2-k_2^2] =\frac 12( k_3^2-k_1^2-k_2^2) $$where, in the second $=$ step, I used the momentum conservation because everything is multiplied by the delta-function for the sum of momenta, anyway. However, the calculation is also meaningful for on-shell $k_1,k_2,k_3$ only. But the squared mass of the open-string tachyon is$$-k^2=m_T^2=-\alpha'$$where the minus sign in front of $k^2$ arises because they use the mostly-plus convention for the metric. So in the units $\alpha'=2$ they selectively use for open strings (one has $\alpha'=1/2$ for closed strings, to make things more confusing),$$k_1\cdot k_2 = \frac 12(k_3^2-k_1^2 - k_2^2) =-\frac 12 (-2+1)m_T^2 = \frac 12\alpha' =1$$so the exponent above $z$ is actually $k_1\cdot k_2-1=0$ and all the factors except for $g$ are equal to one. Note that the string amplitudes are on-shell (scattering amplitudes) and the calculations only simplify when the on-shell conditions are imposed. In fact, the general string amplitudes don't have any natural or canonical extension for off-shell momenta (although, of course, when you write down an effective low-energy field theory for string theory, that theory gives you formulae for the off-shell amplitudes, too)!
If you want a more pedagogical treatment that doesn't use these "inconsistent" conventions for $\alpha'=1/2$ or $2$, doesn't omit the momentum-conservation delta-functions, and is more explicit about the moments when the on-shell conditions are used and how, try e.g. Polchinski's book or one of the newer competitors. On the other hand, if you start to calculate as many amplitudes as Green and Schwarz did in the early 1980s, it may be pretty helpful to use all the "seemingly sloppy" simplifications in the notation that still capture all the physical essence.
|
Suppose n is a positive integer such that 6n has exactly 9 positive divisors.
How many prime numbers are divisors of 6n?
\(6=2^1\cdot 3^1\\ \text{let }n = \prod \limits_{k=1}^\infty p_k^{n_k} \\ \text{where }p_k \text{ is the }k\text{th prime}\\ 6n = 2^{n_1+1}\cdot 3^{n_2+1}\cdot \prod \limits_{k=3}^\infty p_k^{n_k} \)
\(6n \text{ thus has }(n_1+2)(n_2+2)\prod \limits_{k=3}^\infty (n_k+1) = 9 \text{ divisors}\)
\(\text{Since }9 = 3\cdot 3 \text{ and both } n_1+2 \text{ and } n_2+2 \text{ are at least }2,\\ \text{we must have }n_1=1,~n_2=1,~n_k=0 \text{ for } k\geq 3, \text{ i.e. }\\ 6n=36 = 2^2 3^2\\ \text{There are a total of 2 prime divisors}\).
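As a quick brute-force sanity check of this counting argument (the helper functions here are ours, purely for illustration):

```python
def num_divisors(m):
    # Count positive divisors by trial division up to sqrt(m).
    count, i = 0, 1
    while i * i <= m:
        if m % i == 0:
            count += 2 if i != m // i else 1
        i += 1
    return count

def prime_divisors(m):
    # List the distinct prime divisors of m.
    ps, p = [], 2
    while p * p <= m:
        if m % p == 0:
            ps.append(p)
            while m % p == 0:
                m //= p
        p += 1
    if m > 1:
        ps.append(m)
    return ps

# Search for every n up to 10^4 such that 6n has exactly 9 divisors.
hits = [n for n in range(1, 10001) if num_divisors(6 * n) == 9]
print(hits, [prime_divisors(6 * n) for n in hits])  # [6] [[2, 3]]: only 6n = 36 works
```

The search confirms the uniqueness: $6n$ must be $36$, with exactly the two prime divisors $2$ and $3$.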
|
What does it mean when the price elasticity of demand %Qd/%P is greater than one? Typically I hear that it means the demand is elastic since if, say, the price decreases by 1% the demand for the good increases by more than 1%. But what happens if %P is +1% and %Qd still increases by more? Sure, this is elastic, but does it give us any more information? I can't see why %Qd/%P > 1 makes sense in this case.
Apart from the purely descriptive aspect, "elastic demand" (more accurately, a region of the demand schedule where the demand elasticity with respect to price is greater than unity in absolute terms) is linked to basic monopoly theory, since the monopolist maximizes profits at a point of the demand schedule where "demand is elastic".
Define the demand point-elasticity with respect to price as
$$\eta = \frac {\partial Q }{ \partial P}\cdot \frac {P}{Q} \Rightarrow \frac {\partial Q }{ \partial P} = \eta \cdot \frac {Q}{P} \tag{1}$$
Note that, algebraically, the elasticity is a negative number, with the sign indicating the direction of influence, since $\partial Q / \partial P <0$.
The profit function of a monopolist is
$$\pi = P\cdot Q(P) - C(Q(P)) \tag{2}$$
The first-order condition for a maximum with respect to price is
$$\frac {\partial \pi}{\partial P} = 0 \Rightarrow Q + P\frac {\partial Q }{ \partial P} - MC\cdot \frac {\partial Q }{ \partial P} = 0 \tag{3}$$
Inserting $(1)$ into $(3)$ we have
$$Q + P\cdot \eta \cdot \frac {Q}{P} - MC\cdot \eta \cdot \frac {Q}{P} = 0$$
$$\Rightarrow 1 + \eta - \eta \cdot \frac {MC}{P} =0$$
$$\Rightarrow - \eta \cdot \frac {MC}{P} = -\eta -1 $$
$$\Rightarrow |\eta| \cdot \frac {MC}{P} = |\eta| -1$$
$$\Rightarrow P^* = \frac {|\eta|}{|\eta|-1} MC \tag{4}$$
$(4)$ is essentially an implicit relation since $\eta$ is a function of price also,
but it provides a specific insight: Since we naturally expect that price will be positive, we see that we must have $|\eta| >1$: the price will necessarily be set at a level where "demand is elastic", i.e. at a point on the demand schedule where the point price elasticity of demand is higher than unity, in absolute terms.
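To make the markup rule $(4)$ concrete, here is a small numeric sketch (treating $|\eta|$ as a given constant; the function name is ours):

```python
def monopoly_price(mc, abs_eta):
    # Markup rule (4): P* = |eta| / (|eta| - 1) * MC, meaningful only for |eta| > 1.
    if abs_eta <= 1.0:
        raise ValueError("no positive optimum: demand must be elastic, |eta| > 1")
    return abs_eta / (abs_eta - 1.0) * mc

print(monopoly_price(10.0, 2.0))  # 20.0: a 100% markup over marginal cost when |eta| = 2
print(monopoly_price(10.0, 5.0))  # 12.5: the markup shrinks as demand becomes more elastic
```

Note how the formula itself refuses to produce a positive price for $|\eta| \le 1$, which is exactly the insight above.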
Yes, if elasticity is greater than one we say that demand is elastic. This means that the percentage change in quantity demanded is greater (in magnitude) than the percentage change in price. More generally, if the elasticity is $e$, then the percentage change in quantity demanded is $e$ times the percentage change in price. For example, if $e=0.5$, the percentage change in quantity demanded is half the percentage change in price.
If it's more than one, it means that the percentage variation of the quantity sold is greater than the percentage variation of the price. In other words, a variation in price causes a larger variation in the quantity demanded. So if the price goes down, consumers will increase their purchases more than proportionally: the demand for that good is elastic with respect to price. If the elasticity is exactly 1, demand is unit elastic. If it's less than one, demand is inelastic.
|
It is common in physics literature to identify invariant geometric objects with their components in some reference frame.
For example, if $T$ is a type (2,0) tensor field, then it is common to refer to its components $T^{\mu\nu}$ as a tensor field.
In essence, the connection coefficients are the components of a connection. So it is also customary to refer to $\Gamma^\rho_{\mu\nu}$ or $\omega^{\ a}_{\mu\ b}$ as a connection.
Do note that the exact definitions of the objects themselves are subject to convention. There are authors who refer to things other than the covariant derivative $\nabla$ as a connection. In fact, there are general connections that do not admit a representation as a covariant derivative. In general, a connection is always a rule that defines parallel transport of locally defined geometric objects along curves, but the exact way of defining it differs from author to author.
Also, the Levi-Civita connection and the "spin connection" are essentially the same. The Christoffel symbols $\Gamma^\rho_{\mu\nu}$ are the components of $\nabla$ when taken with respect to a holonomic (coordinate-) frame, while the "spin connection" coefficients $\omega^{\ a}_{\mu\ b}$ are the components of $\nabla$ when taken with respect to an orthonormal frame.
The only reason why there is any significant difference between the two is that spinor fields can only be represented in orthonormal frames, so to define covariant derivatives of spinors, you need the "spin connection". But this is all just juggling with local component representations, the underlying invariant object is the same.
|
Gamma is the second partial derivative of the option price with respect to the price of the underlying. Said another way, it is the rate of change of delta. If you write down the Taylor expansion of the option price, you'll see the gamma term:$$...\frac{1}{2}\frac{\partial^2C}{\partial S^2}(\Delta S)^2...$$Notice that the $\Delta S$ (change in stock price) ...
Find the topic of model-independent properties of option prices very interesting as well. Here are some results that I am aware of and the respective references in the literature. Some are already contained in your initial list as well.Plain Vanilla Prices are Convex in the StrikeTheorem 4 in Merton (1973).Delta is Bounded by the Slopes of the Payoff ...
You are absolutely right to point out that most proactive participants in options markets prefer to be long gamma, and it is typically reactive market makers who take the opposite side of their trades. While the typical options trader (I find it difficult to call anyone trading options an "investor") does not hedge his position, market makers will attempt ...
This is an interesting and not so easy question. Here's my 2 cents:First, you should distinguish between mathematical models for the dynamics of an underlying asset (Black-Scholes, Merton, Heston etc.) and numerical methods designed to calculate financial instruments' prices under given modelling assumptions (lattices, Fourier inversion techniques etc.). ...
This is in fact a tricky matter.As you say one way is to calculate delta by an analytic formula, i.e. calculate the first derivative of the option pricing formula you are using with respect to the underlying's spot price.The second way is to do it numerically, i.e. change the spot price by a small value $dS$, calculate the value of the option and then ...
You need to compute your greeks as finite differences, but the full procedure may be pretty tricky. I will use vega $\aleph$ as the example here. Let's begin by designating your Monte Carlo estimator as a function $V(\sigma,s,M)$ where $\sigma$ is the volatility as usual, $s$ is the seed to your random number generator, and $M$ is the sample count.To ...
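The seed-reuse idea can be sketched as follows (a deliberately crude toy estimator, not the author's code; all parameter values are illustrative):

```python
import random
from math import exp, sqrt

def mc_call_price(sigma, seed, n_samples, S0=100.0, K=100.0, r=0.0, T=1.0):
    # Crude Monte Carlo price of a European call under GBM; the seed plays
    # the role of s in the estimator V(sigma, s, M) described above.
    rng = random.Random(seed)
    payoff_sum = 0.0
    for _ in range(n_samples):
        z = rng.gauss(0.0, 1.0)
        ST = S0 * exp((r - 0.5 * sigma ** 2) * T + sigma * sqrt(T) * z)
        payoff_sum += max(ST - K, 0.0)
    return exp(-r * T) * payoff_sum / n_samples

# Central difference with the SAME seed: the simulation noise largely cancels
# between the two valuations, which is the whole point of fixing s.
h = 1e-3
vega_fd = (mc_call_price(0.20 + h, 42, 50000)
           - mc_call_price(0.20 - h, 42, 50000)) / (2 * h)
```

With an independent seed for each bumped valuation, the same finite difference would be dominated by Monte Carlo noise at this sample count.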
Under the Black-Scholes model,\begin{align*}Gamma &= \frac{N'(d_1)}{S \sigma \sqrt{T-t}}\\Vega &= SN'(d_1) \sqrt{T-t}.\end{align*}Then, it is easy to see that\begin{align*}Vega = S^2 \sigma (T-t) Gamma.\end{align*}
Short gamma is being of the view that realized volatility would be less than the implied volatility for the period in which an option is valid. So if you think realized volatility in the future would be consistently lesser than implied volatility at present, then you'd be short gamma.The premium one would receive by selling an option (call or put) is a ...
VIX is calculated from a basket of SPX options, and VIX futures expire into following expiration, e.g. September VIX futures that will expire next Wednesday will use SPX October options chain to calculate settlement value. If $B$ is the value of the basket then VIX value at expiration is $\sqrt{ B }$. Then VIX futures price is the expectation of the basket $...
Think of moving volatility in the other direction.As volatility approaches zero, any call strike strictly smaller than the ATM strike, $K<K_{ATM}$, will have zero probability of ending in the money, and the corresponding option value will be zero. An infinitesimally small change in stock price will not move $K$ past $K_{ATM}$, so the option value ...
Automatic Differentiation (aka AD) is a family of methods that are used to evaluate the derivative of a coded function. These methods are far more accurate than finite differences, since they are theoretically exact in the absence of floating point roundoff error.AD is, however, subtly different than symbolic differentiation. The key difference here is ...
For any process with independent increments, by the very fact of statistical independence the variance of $x_{t3}-x_{t1}$ is going to be the sum of the variances of $x_{t2}-x_{t1}$ and $x_{t3}-x_{t2}$ for $t1\leq t2 \leq t3$. Many processes have independent increments, including ABM, GBM, Poisson, etc. Then if you add a homogeneity assumption (the ...
No, you should not expect such a relationship to hold in general. The reason is that American options have an "exercise barrier" which European options don't, and this results in different prices and greeks.In the case of put options (with interest rate $r>0$) as the spot price falls, at some point it becomes optimal to exercise early and take the cash. ...
First, my notation. $K$ is the strike price, $S$ is the stock price, $r$ is the continuously compounded risk-free rate, $T$ is time at expiration, $t$ is time at issue, $\sigma$ is volatility, $\delta$ is continuously compounded dividend rate.The Black-Scholes formula for a European call is$C = Se^{-\delta (T-t)} N(d_1) - Ke^{-r(T-t)} N(d_2)$$d_1 = \...
[Mathematically]Risk-neutral pricing means that\begin{align}C_0(K,T) &= \mathbb {E}_0\left[\frac{1}{B_T} (S_T - K)^+\right] \\&= \mathbb {E}_0\left[\left(\frac {S_T}{B_T} - \frac {K}{B_T}\right)^+\right]\end{align}Now simply notice that the dynamics of $$\tilde{S}_t := \frac {S_t}{B_t},\ \forall t \geq 0$$ is independent of $r$ (see the ...
Short Answer: Futures don't have Greeks. Long Answer: Assuming a not strictly mathematical (i.e. false) point of view. Well, having Greeks on VIX futures is not relevant; the VIX value is itself a "Greek" (and futures don't have Greeks). Sensitivity to: Price of the Underlying: insensitive (ν = 0); Volatility of the Underlying: Delta Δ = 1 (to Volatility of ...
Joseph de la Vega wrote Confusion of Confusions in 1688, probably the World's first descriptive text on stock market processes and volatility.I'm not sure that this is why Vega is thus named, but I like to think it's in his honour.
As far as PDEs (deterministic) are concerned we have the notion of a "strong solution" (directly solving the differential operator in the strong formulation of the problem) and the "weak solution" that deals with a weak formulation of the problem.For the strong formulation, finite differences are the way to go since they are the natural discretization of ...
Simply put, no. Vega depends on a variety of factors (including the level/price of the underlying asset). However, vomma/volga/vega convexity (whatever you want to call dVega/dIV) is always positive. So as IV increases, the vega of an option increases - I think this might have been what you were getting at.It's important to understand that IV is an input ...
Put-call parity says that a call and put (worth $C$ and $P$ respectively) with the same strike $K$ have the following relationship with the spot rate $S$, risk-free rate $r$, and time to maturity $T$ --$$C - P = S - e^{-rT} K$$Taking the first derivative with respect to $S$,$$\frac{\partial C}{\partial S} - \frac{\partial P}{\partial S} = 1$$which ...
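This delta relation can be confirmed numerically with the standard Black-Scholes deltas (a sketch, with `norm_cdf` built from the standard library's `math.erf`):

```python
from math import log, sqrt, erf

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_deltas(S, K, r, sigma, T):
    # Black-Scholes deltas: N(d1) for the call, N(d1) - 1 for the put.
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    return norm_cdf(d1), norm_cdf(d1) - 1.0

call_delta, put_delta = bs_deltas(S=100.0, K=95.0, r=0.02, sigma=0.25, T=0.5)
print(call_delta - put_delta)  # 1.0, exactly as differentiating parity requires
```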
FDMs represent PDEs over a simple grid shape; the different implementations are just different recurrence relations to approximate the solutions to the PDE between boundary values (e.g., for options pricing, $T=[t_\mathrm{now},t_\mathrm{maturity}]$ and $S=[\mathrm{deep\_itm},\mathrm{deep\_otm}])$.FEM is a general name for a lot of different ...
For non-interest rate derivatives with not-so-long maturities worrying about rho is uncommon. Think about it: interest-rates do not change that often relative to options expiring next week, next month or at most next year. LEAPS are obviously another turf. You could think about gamma, but the intimate relation of gamma and vega (at least in BS model) makes ...
most models in financial maths are linear so prices and Greeks just add. This is in particular true of Black--Scholes so Yes.However, once one starts taking into account value adjustments non-linearities appear and it is a lot more complicated.
if you have a portfolio of calls and puts with the same maturity then your portfolio is gamma neutral if and only if it is vega neutral.The reasons is that the BS gamma divided by the BS vega is a function of $S$ and $T$ that does not vary with $K.$ So if you construct a linear combination that has zero gamma then the vega is zero too, and vice versa.
The risk exposures/sensitivities of long and short positions always have different signs. This has to hold since derivatives are zero sum games.Vega is always positive for a long position in a European plain vanilla option (or any convex payoff in general). This is true even when the option is already in-the-money. As volatility increases, the probability ...
This question has been asked many times and some clarifications appear needed.As pointed out in an answer to this question, the portfolio\begin{align*}\Delta_t^1 S_t + \Delta^2_t C,\end{align*}where $\Delta_t^1 = -\frac{\partial C}{\partial S}$ and $\Delta_t^2 =1$, is, generally, neither self-financing nor locally risk-free.To derive the Black-...
Something went wrong in the third equality of the equation where you compute $\partial C_0 / \partial K$. Starting from the second equality, you can use that\begin{equation}S_0 \mathcal{N}' \left( d_1 \right) = K e^{-r T} \mathcal{N}' \left( d_2 \right),\end{equation}see e.g. Equation (1.29) in Wystup (2006). Alternatively, you could use the ...
The most general answer is to shift your input to approximate the first derivative. Given that you need Monte Carlo to price this, it may get expensive. But that's the way it goes when you have no analytical solutions, as there ain't no free lunch ...
If your "European vanilla options" are restricted to piece-wise linear pay-offs, then the following may help:Remark: I assume you are looking for a rule of thumb to get the profiles without the use of a computer.All piece-wise linear pay-offs can be decomposed into a sum of digital options and call options with different notional (possibly negative) and ...
|
Let $V$ be a vector space of finite dimension and let $T:V\rightarrow V$ be a linear operator such that every hyperplane of $V$ is $T$-stable. Prove that $T=\lambda\,\mathit{Id}_{V}$ for some $\lambda$.
Note: $\mathit{Id}_{V}$ is the identity operator.
Proof:
Pick a vector $v \in V \setminus \{ 0 \}$. You can find $v_2, \dots, v_n \in V$ such that $\{v,v_2, \dots,v_n \}$ is a basis of $V$. Now consider the $(n-1)$-dimensional subspaces $W_i = \operatorname{span}( v, v_2, \dots, v_{i-1}, v_{i+1}, \dots, v_n )$, for $i=2,\dots,n$. It is clear that the subspace $\operatorname{span}(v)= W_2 \cap \dots \cap W_n$ is $T$-invariant. So there exists $\lambda_v \in F$ such that $Tv = \lambda_v v$. (Pay attention: $\lambda_v$ depends on $v$.)
Now we must prove the scalars $\{ \lambda_v \}_{v \in V \setminus \{ 0 \}}$ are all the same. Choose two vectors $v, v' \neq 0$; if $v$ and $v'$ are linearly dependent then it is clear that $\lambda_v = \lambda_{v'}$. If $v$ and $v'$ are linearly independent, then $\lambda_{v+v'} (v + v') = Tv + Tv' = \lambda_v v + \lambda_{v'} v'$, so $\lambda_v = \lambda_{v+v'} = \lambda_{v'}$.
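To see why the hypothesis is really needed, here is a tiny numerical illustration of the contrapositive (the example matrix is ours): a non-scalar operator always has some hyperplane that is not stable.

```python
# In R^2, hyperplanes are lines through the origin. For the non-scalar shear
# T = [[1, 1], [0, 1]], the theorem predicts some line fails to be T-stable;
# the y-axis span{(0, 1)} is one such line.
def mat_vec(T, v):
    return [T[0][0] * v[0] + T[0][1] * v[1],
            T[1][0] * v[0] + T[1][1] * v[1]]

T = [[1, 1], [0, 1]]
print(mat_vec(T, [0, 1]))  # [1, 1], which is not a multiple of (0, 1)
print(mat_vec(T, [1, 0]))  # [1, 0]: the x-axis, by contrast, is stable
```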
My questions:
1) Why does $\operatorname{span}(v)= W_2 \cap \dots \cap W_n$?
2) Why is $\operatorname{span}(v)$ $T$-invariant?
3) I can find $v_2, \dots, v_n \in V$ to create a basis by the basis extension theorem, can't I?
|
Let $L: C^\infty(\mathbb{R}) \to C^\infty(\mathbb{R})$ be a linear operator which satisfies:
$L(1) = 0$
$L(x) = 1$
$L(f \cdot g) = f \cdot L(g) + g \cdot L(f)$
Is $L$ necessarily the derivative? Maybe if I throw in some kind of continuity assumption on $L$? If it helps you can throw the "chain rule" into the list of properties.
I can see that $L$ must send any polynomial function to its derivative. I want to say "just approximate any function by polynomials, and pass to a limit", but I see two complications: First, $\mathbb{R}$ is not compact, so such an approximation scheme is not likely to fly. Maybe convolution with smooth cutoff functions could help me here. Even if I could rig up something, I am concerned that if polynomials $p_n$ converge to $f$, I may not have $p_n'$ converging to $f'$. My Analysis skills are really not too hot so I would like some help.
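The first observation, that the axioms force $L(x^n)=nx^{n-1}$, can be checked mechanically. Here is a sketch, representing a polynomial by its list of coefficients (a representation of our own choosing):

```python
def L_pow(n):
    """Coefficients of L(x^n), forced by L(1) = 0, L(x) = 1 and the Leibniz
    rule via the recursion L(x^n) = x * L(x^(n-1)) + x^(n-1) * L(x)."""
    if n == 0:
        return []                 # L(1) = 0
    if n == 1:
        return [1]                # L(x) = 1
    coeffs = [0] + L_pow(n - 1)   # x * L(x^(n-1)): shift coefficients up by one
    while len(coeffs) < n:
        coeffs.append(0)
    coeffs[n - 1] += 1            # add x^(n-1) * L(x) = x^(n-1)
    return coeffs

print(L_pow(5))  # [0, 0, 0, 0, 5], i.e. 5*x^4, the derivative of x^5
```

Induction on $n$ turns this computation into the proof that $L$ agrees with $d/dx$ on all polynomials, by linearity.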
I am interested in this question because it is a slight variant of a characterization given here:
I am not sure whether or not those properties characterize the derivative, and they are closely related to mine.
If these properties do not characterize the derivative operator, I would like to see another operator which satisfies these properties. Can you really write one down or do you need the axiom of choice? I feel that any counterexample would have to be very weird.
|
Problem evaluating a very tiny integral
Hi everyone,
I'm pretty new to Sage. I was forced to change from MATLAB to Sage, because I was told Sage handles very tiny numbers better, as it can work with sqrt(2) symbolically as sqrt(2) and not as a rational number approximating it.
Approximations are very important for my problem. I need to evaluate this integral $$\sum_{c=1}^{d}\int_{\min(256c-0.5,\frac{y(y+1)}{2})}^{\min(256c+0.5,\frac{y(y+1)}{2})}\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}\, dx$$
Suppose d = 1, then this is simply the integral
$$ \int_{255.5}^{256.5}\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}\, dx$$
To evaluate this integral I wrote the following code
T = RealDistribution('gaussian', 1)
print T.cum_distribution_function(256.5) - T.cum_distribution_function(255.5)
because the integral above is the same as the difference of the cumulative distribution function of a standard Gaussian random variable between the boundaries of the integral. However, and you can check for yourselves if you don't believe it, the result I get with Sage is 0.
I guess that this is due to some approximation, which sage does. Indeed the exact value of the integral is pretty close (and with pretty I mean a lot) to 0. My problem is that I need to be able to have the exact value, because the sum of integrals I'm working with and my whole work behind this integral requires me to be very careful with those little tiny numbers.
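For what it's worth, the reason a double-precision evaluation returns 0 can be seen by working in log space (plain Python rather than Sage; the approximation $Q(a)\approx\varphi(a)/a$ is the leading term of the standard asymptotic expansion of the Gaussian tail):

```python
import math

def log10_gaussian_tail(a):
    # log10 of Q(a), the integral of the standard normal density from a to
    # infinity, using the leading asymptotic Q(a) ~ phi(a) / a for large a.
    log_phi = -0.5 * a * a - 0.5 * math.log(2.0 * math.pi)
    return (log_phi - math.log(a)) / math.log(10.0)

# The integral from 255.5 to 256.5 equals Q(255.5) - Q(256.5); the second
# term is smaller by a factor of roughly exp(-256), so Q(255.5) dominates.
print(log10_gaussian_tail(255.5))  # about -14178
```

The value is around $10^{-14178}$, vastly below the smallest positive double ($\sim 10^{-308}$), so any fixed-precision route underflows to 0; one has to work symbolically, in log space, or with arbitrary-precision arithmetic.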
I tried to use the function
integrate
to deal with this problem, but something funny and apparently inexplicable happened when I was trying to use it.
To be precise I defined this code:
def VarianceOfvy(y):
    temp1 = 0
    temp2 = 0
    for r in range(0, y+1):
        for x in range(0, r+1):
            temp1 = temp1 + (255/256)^x * 1/256 * (r-x)^2
    for r in range(0, y+1):
        for x in range(0, r+1):
            temp2 = temp2 + ((255/256)^x * 1/256 * (r-x))^2
    sigma = temp1 - temp2
    return sqrt(sigma)

def Integerd(y):
    b = y*(y+1)/2
    d = 1
    c = 0
    while min((c+1)*256-0.5, b) == (c+1)*256-0.5:
        d = d+1
        c = c+1
    return d

def Probabilityvynequiv(y):
    var('c')
    b = (y*(y+1))/2
    sigma = 2
    mu = 1
    d = Integerd(y)
    factor = 1/(integrate(1/sqrt(2*pi)*e^(-(x/sigma)^2/2), x, -oo, (b-mu)/sigma) - integrate(1/sqrt(2*pi)*e^(-(x)^2/2), x, -oo, (-mu)/sigma))
    p = sum(factor*1/sigma*integrate(1/sqrt(2*pi)*e^((-x^2)/(2)), x, c*256+0.5, min((c+1)*256-0.5, b)), c, 0, d)
    return p
And if I let it run, the result I get is
1/2*(erf(255.75*sqrt(2)) - erf(128.25*sqrt(2)) + erf(127.75*sqrt(2)) - erf(0.25*sqrt(2)))/(3*erf(1/4*sqrt(2)) + 1)
which I assume is correct, and at least it tells me that Sage is able to read my code and output a result. If I call the function VarianceOfvy(2), the result I get is 3/65536*sqrt(11133895), which is also correct. Now, if I'm changing the command
sigma = 2
with
sigma = VarianceOfvy(2)
and try to let the whole program run again, Sage is not able anymore to output a result.
I'm really lost and I don't know what to do. Could someone advise me and give me some hints on how to evaluate those tiny integrals in such a way that I don't lose any precision?
|
Fatima has done some previous analyses and has found that the stock price over any period of time can be modelled reasonably accurately with the following equation:\[ \operatorname {price}(k) = p \cdot (\sin (a \cdot k+b) + \cos (c \cdot k+d) + 2) \]
where $p$, $a$, $b$, $c$ and $d$ are constants. Fatima would like you to write a program to determine the largest price decline over a given sequence of prices. Figure 1 illustrates the price function for Sample Input 1. You have to consider the prices only for integer values of $k$.
The input consists of a single line containing $6$ integers $p$ ($1 \le p \le 1\, 000$), $a$, $b$, $c$, $d$ ($0 \le a, b, c, d \le 1\, 000$) and $n$ ($1 \le n \le 10^6$). The first $5$ integers are described above. The sequence of stock prices to consider is $\operatorname {price(1)}, \operatorname {price(2)}, \ldots , \operatorname {price}(n)$.
Display the maximum decline in the stock prices. If there is no decline, display the number $0$. Your output should have an absolute or relative error of at most $10^{-6}$.
Sample Input 1: 42 1 23 4 8 10
Sample Output 1: 104.855110477
Sample Input 2: 100 7 615 998 801 3
Sample Output 2: 0.00
Sample Input 3: 100 432 406 867 60 1000
Sample Output 3: 399.303813
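A one-pass sketch of a solution (assuming "largest decline" means the biggest drop from an earlier price to a later one, which matches Sample 1):

```python
import math

def max_decline(p, a, b, c, d, n):
    # Track the running maximum price seen so far; the answer is the largest
    # value of (running max - current price) over k = 1..n.
    best = 0.0
    run_max = -math.inf
    for k in range(1, n + 1):
        price = p * (math.sin(a * k + b) + math.cos(c * k + d) + 2)
        run_max = max(run_max, price)
        best = max(best, run_max - price)
    return best

print(max_decline(42, 1, 23, 4, 8, 10))  # 104.855110477 (Sample 1)
```

This runs in $O(n)$ time and $O(1)$ memory, comfortably within limits for $n \le 10^6$.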
|
Consider a dielectric slab waveguide (lossless, isotropic) illuminated transversally from the vacuum (with coherent, monochromatic light).
We define the
base bandwidth of a waveguide (or optical fiber), $AB$, to be the inverse of the time retardation, $\Delta t$, over 1 km of the waveguide between the energy of a guided mode (transmitted following the zig-zag model) at the critical angle $\theta_{c}$ and the energy transmitted without total internal reflections. Let $n_{f}$ and $n_{s}$ be the refractive indices of the film and the substrate respectively. Prove that if $\Delta n=n_{f}-n_{s}\ll 1$, then:
$$ {\rm{AB}} = {\left( {\Delta t} \right)^{ - 1}} \simeq \frac{{2c{n_s}}}{{{{\left( {{\rm{AN}}} \right)}^2}}} = \frac{c}{{{n_f} - {n_s}}} $$
where $AN$ is the numerical aperture of the guide.
This problem has been on my mind for 2 days now and it seems I can't find a method to calculate that time difference... Any ideas?
My thinking so far:
We have to look at the Ray Optics picture of Dielectric Waveguide Theory (see e.g. Tamir et al:
Integrated Optics (chapter 2)). A guided mode is propagated through the waveguide following a series of total internal reflections at an angle $\theta_{c}$ with the normal to the film-substrate or film-cover surfaces, and therefore its energy is "trapped" in the film. A mode which is not a guided mode will travel through the waveguide suffering reflections and refractions and therefore radiating some energy to the cover and substrate.
The rays travel through the film at the same speed $c/n_{f}$ but follow different paths, so it will take different times for them to advance 1 km along the waveguide. We could then try to find the components of these velocities in the direction of propagation (the $z$ axis), $v_{i}$, and use the simple relation $t_{i}=d/v_{i}$. The trouble with this is that the problem doesn't specify the angle at which the non-guided mode is incident on the surfaces of the film. Am I missing something here?
On the other hand, the numerical aperture of the waveguide is:
$$ n\sin \left( {{\theta _{\max }}} \right) = {n_f}\sin \left( {90 - {\theta _{\rm{c}}}} \right) = {n_f}\cos \left( {{\theta _{\rm{c}}}} \right) $$
where n=1, and $\theta_{\max}$ is the angle of incidence of the illumination beam with the normal to the $x-y$ plane so that doing some work we find:
$$ {\rm{AN}} = \sin \left( {{\theta _{\max }}} \right) = \sqrt {{n_f}^2 - {n_s}^2} $$
UPDATE: Some calculations:
The effective refractive indexes of the 3 rays (ray 1: guided mode, ray 2: radiation mode, ray 3: no reflection at all) are:
$$ {N_1} = \frac{{{\beta _1}}}{{{k_1}}} = \frac{c}{{{V_1}}} = {n_f}\sin \left( {{\theta _{\rm{c}}}} \right) $$ $$ {N_2} = \frac{{{\beta _2}}}{{{k_2}}} = \frac{c}{{{V_2}}} = {n_f}\sin \left( \theta \right) $$ $$ {N_3} = {n_f} = \frac{c}{{{v_3}}} $$
So that the retardations would be: ($d=1km$)
Between rays 1 and 2: $$ \Delta {t_{1 - 2}} = {t_2} - {t_1} = d\left( {\frac{1}{{{V_2}}} - \frac{1}{{{V_1}}}} \right) = \left[ {...} \right] = \left( {d\frac{{{n_f}}}{c}} \right)\left( {\sin {\theta _c} - \sin \theta } \right) $$
Between rays 1 and 3:
$$ \Delta {t_{1 - 3}} = {t_3} - {t_1} = d\left( {\frac{1}{{{v_3}}} - \frac{1}{{{V_1}}}} \right) = \left[ {...} \right] = \left( {d\frac{{{n_f}}}{c}} \right)\left( {\sin {\theta _c} - 1} \right) $$
And the respective base bandwidths:
$$ {\rm{A}}{{\rm{B}}_{1 - 2}} = \frac{{\frac{c}{{d{n_f}}}}}{{{\rm{AN}} - \sin \theta }} $$
$$ {\rm{A}}{{\rm{B}}_{1 - 3}} = \frac{{\frac{c}{{d{n_f}}}}}{{{\rm{AN}} - 1}} $$
Are any of these results equal to the equation given at the beginning for $AB$? How would one use the approximation $n_{f}-n_{s}\ll 1$?
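As a numerical sanity check, one can compare the candidate expressions for a weakly guiding pair of indices. Note this sketch uses the ray transit-time picture, in which the axial speed of a ray at angle $\theta$ from the interface normal is $(c/n_f)\sin\theta$ (this differs from the effective-index expressions above, which describe phase rather than transit time; all index values below are illustrative):

```python
from math import asin, sin

c = 3.0e5        # km/s, so times come out per km of guide
n_s = 1.50
n_f = 1.51       # weakly guiding: delta_n = 0.01 << 1
d = 1.0          # km

theta_c = asin(n_s / n_f)            # critical angle, measured from the normal
v_guided = (c / n_f) * sin(theta_c)  # axial speed of the critical-angle ray
v_axial = c / n_f                    # speed of the ray with no reflections

dt = d / v_guided - d / v_axial      # retardation over 1 km
AB = 1.0 / dt

AN2 = n_f ** 2 - n_s ** 2            # squared numerical aperture
print(AB, 2 * c * n_s / AN2, c / (n_f - n_s))  # all three agree to ~1%
```

The agreement hinges precisely on $n_f - n_s \ll 1$: writing $AN^2=(n_f+n_s)(n_f-n_s)\approx 2n_s\,\Delta n$ collapses $2cn_s/AN^2$ into $c/\Delta n$.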
|
As we have defined it in Section 1.3, a function is a very general object. At this point, it is useful to introduce a collection of adjectives to describe certain kinds of functions; these adjectives name useful properties that functions may have. Consider the graphs of the functions in Figure 2.5.1. It would clearly be useful to have words to help us describe the distinct features of each of them. We will point out and define a few adjectives (there are many more) for the functions pictured here. For the sake of the discussion, we will assume that the graphs do not exhibit any unusual behavior off-stage (i.e., outside the view of the graphs).
(a) (b) (c) (d)
Functions. Each graph in Figure 2.5.1 certainly represents a function, since each passes the vertical line test. In other words, as you sweep a vertical line across the graph of each function, the line never intersects the graph more than once. If it did, then the graph would not represent a function.
Bounded. The graph in (c) appears to approach zero as $x$ goes to both positive and negative infinity. It also never exceeds the value $1$ or drops below the value $0$. Because the graph never increases or decreases without bound, we say that the function represented by the graph in (c) is a bounded function.
Definition 2.5.1 (Bounded) A function $f$ is bounded if there is a number $M$ such that $|f(x)| < M$ for every $x$ in the domain of $f$.
For the function in (c), one such choice for $M$ would be $10$. However, the smallest (optimal) choice would be $M=1$. In either case, simply finding an $M$ is enough to establish boundedness. No such $M$ exists for the hyperbola in (d) and hence we can say that it is unbounded.
Continuity. The graphs shown in (b) and (c) both represent continuous functions. Geometrically, this is because there are no jumps in the graphs. That is, if you pick a point on the graph and approach it from the left and right, the values of the function approach the value of the function at that point. For example, we can see that this is not true for function values near $x=-1$ on the graph in (a), which is not continuous at that location.
Definition 2.5.2 (Continuous at a Point) A function $f$ is continuous at a point $a$ if $\ds \lim_{x\to a} f(x) = f(a)$.
Definition 2.5.3 (Continuous) A function $f$ is continuous if it is continuous at every point in its domain.
Strangely, we can also say that (d) is continuous even though there is a vertical asymptote. A careful reading of the definition of continuous reveals the phrase "at every point in its domain.'' Because the location of the asymptote, $x=0$, is not in the domain of the function, and because the rest of the function is well-behaved, we can say that (d) is continuous.
Differentiability. Now that we have introduced the derivative of a function at a point, we can begin to use the adjective differentiable. We can see that the tangent line is well-defined at every point on the graph in (c). Therefore, we can say that (c) is a differentiable function.
Definition 2.5.4 (Differentiable at a Point) A function $f$ is differentiable at point $a$ if $f'(a)$ exists.
Definition 2.5.5 (Differentiable) A function $f$ is differentiable if it is differentiable at every point (excluding endpoints and isolated points in the domain of $f$) in the domain of $f$.
Take note that, for technical reasons not discussed here, both of these definitions exclude endpoints and isolated points in the domain from consideration.
We now have a collection of adjectives to describe the very rich and complex set of objects known as functions.
We close with a useful theorem about continuous functions:
Theorem 2.5.6 (Intermediate Value Theorem) If $f$ is continuous on the interval $[a,b]$ and $d$ is between $f(a)$ and $f(b)$, then there is a number $c$ in $[a,b]$ such that $f(c)=d$.
This is most frequently used when $d=0$.
Example 2.5.7 Explain why the function $\ds f=x^3 + 3x^2+x-2$ has a root between 0 and 1.
By theorem 2.3.6, $f$ is continuous. Since $f(0)=-2$ and $f(1)=3$, and $0$ is between $-2$ and $3$, there is a $c\in[0,1]$ such that $f(c)=0$.
This example also points the way to a simple method for approximating roots.
Example 2.5.8 Approximate the root of the previous example to one decimal place.
If we compute $f(0.1)$, $f(0.2)$, and so on, we find that $f(0.6)< 0$ and $f(0.7)>0$, so by the Intermediate Value Theorem, $f$ has a root between $0.6$ and $0.7$. Repeating the process with $f(0.61)$, $f(0.62)$, and so on, we find that $f(0.61)< 0$ and $f(0.62)>0$, so $f$ has a root between $0.61$ and $0.62$, and the root is $0.6$ rounded to one decimal place.
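The scanning procedure in these two examples can be sketched as a small routine (the names are ours, not from the text):

```python
def refine_root(f, lo, hi, places):
    # Scan successively finer decimal grids, keeping the subinterval on which
    # f changes sign, exactly as in Examples 2.5.7 and 2.5.8.
    for p in range(1, places + 1):
        step = 10.0 ** (-p)
        n = round((hi - lo) / step)
        for i in range(n):
            a = lo + i * step
            b = a + step
            if f(a) * f(b) <= 0:       # sign change: a root lies in [a, b]
                lo, hi = a, b
                break
    return lo, hi

f = lambda x: x**3 + 3*x**2 + x - 2
print(refine_root(f, 0.0, 1.0, 2))  # about (0.61, 0.62): the root is 0.6 to one decimal
```

Each pass relies on the Intermediate Value Theorem: a sign change of a continuous function on $[a,b]$ guarantees a root inside.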
Exercises 2.5
Ex 2.5.1 Along the lines of Figure 2.5.1, for each part below sketch the graph of a function that is:
a. bounded, but not continuous.
b. differentiable and unbounded.
c. continuous at $x=0$, not continuous at $x=1$, and bounded.
d. differentiable everywhere except at $x=-1$, continuous, and unbounded.
Ex 2.5.2 Is $f(x)=\sin(x)$ a bounded function? If so, find the smallest $M$.
Ex 2.5.3 Is $\ds s(t) = 1/(1+t^2)$ a bounded function? If so, find the smallest $M$.
Ex 2.5.4 Is $v(u) = 2\ln|u|$ a bounded function? If so, find the smallest $M$.
Ex 2.5.5 Consider the function $$h(x) = \cases{2x - 3, & if $x< 1$\cr 0, & if $x\geq 1$.}$$ Show that it is continuous at the point $x=0$. Is $h$ a continuous function?
Ex 2.5.6 Approximate a root of $\ds f=x^3-4x^2+2x+2$ to one decimal place. (answer)
Ex 2.5.7 Approximate a root of $\ds f=x^4+x^3-5x+1$ to one decimal place.
|
As we begin to compile a list of convergent and divergent series, new ones can sometimes be analyzed by comparing them to ones that we already understand.
Example 13.5.1 Does $\ds\sum_{n=2}^\infty {1\over n^2\ln n}$ converge?
The obvious first approach, based on what we know, is the integral test. Unfortunately, we can't compute the required antiderivative. But looking at the series, it would appear that it must converge, because the terms we are adding are smaller than the terms of a $p$-series, that is, $${1\over n^2\ln n}< {1\over n^2},$$ when $n\ge3$. Since adding up the terms $\ds 1/n^2$ doesn't get "too big'', the new series "should'' also converge. Let's make this more precise.
The series $\ds\sum_{n=2}^\infty {1\over n^2\ln n}$ converges if and only if $\ds\sum_{n=3}^\infty {1\over n^2\ln n}$ converges—all we've done is dropped the initial term. We know that $\ds\sum_{n=3}^\infty {1\over n^2}$ converges. Looking at two typical partial sums: $$ s_n={1\over 3^2\ln 3}+{1\over 4^2\ln 4}+{1\over 5^2\ln 5}+\cdots+ {1\over n^2\ln n} < {1\over 3^2}+{1\over 4^2}+ {1\over 5^2}+\cdots+{1\over n^2}=t_n. $$ Since the $p$-series converges, say to $L$, and since the terms are positive, $\ds t_n< L$. Since the terms of the new series are positive, the $\ds s_n$ form an increasing sequence and $\ds s_n< t_n< L$ for all $n$. Hence the sequence $\ds \{s_n\}$ is bounded and so converges.
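The inequality chain $s_n < t_n < L$ in this argument can be checked numerically (an illustration added here, with $L = \sum_{n\ge3} 1/n^2 = \pi^2/6 - 1 - 1/4$ from the known value of the $p$-series):

```python
import math

# Partial sums from n = 3: s_n for 1/(n^2 ln n), t_n for 1/n^2.
# Both should stay below L = pi^2/6 - 1 - 1/4, the full tail sum of 1/n^2.
L = math.pi**2 / 6 - 1 - 0.25
s = t = 0.0
for n in range(3, 10001):
    s += 1 / (n**2 * math.log(n))
    t += 1 / n**2

print(s < t < L)  # True: s_n < t_n < L, as the comparison argument claims
```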
Sometimes, even when the integral test applies, comparison to a known series is easier, so it's generally a good idea to think about doing a comparison before doing the integral test.
Example 13.5.2 Does $\ds\sum_{n=1}^\infty {|\sin n|\over n^2}$ converge?
We can't apply the integral test here, because the terms of this series are not decreasing. Just as in the previous example, however, $$ {|\sin n|\over n^2}\le {1\over n^2},$$ because $|\sin n|\le 1$. Once again the partial sums are non-decreasing and bounded above by $\ds \sum 1/n^2=L$, so the new series converges.
Like the integral test, the comparison test can be used to show both convergence and divergence. In the case of the integral test, a single calculation will confirm whichever is the case. To use the comparison test we must first have a good idea as to convergence or divergence and pick the sequence for comparison accordingly.
Example 13.5.3 Does $\ds\sum_{n=2}^\infty {1\over\sqrt{n^2-3}}$ converge?
We observe that the $-3$ should have little effect compared to the $\ds n^2$ inside the square root, and therefore guess that the terms are enough like $\ds 1/\sqrt{n^2}=1/n$ that the series should diverge. We attempt to show this by comparison to the harmonic series. We note that $${1\over\sqrt{n^2-3}} > {1\over\sqrt{n^2}} = {1\over n},$$ so that $$ s_n={1\over\sqrt{2^2-3}}+{1\over\sqrt{3^2-3}}+\cdots+ {1\over\sqrt{n^2-3}} > {1\over 2} + {1\over3}+\cdots+{1\over n}=t_n, $$ where $\ds t_n$ is 1 less than the corresponding partial sum of the harmonic series (because we start at $n=2$ instead of $n=1$). Since $\ds\lim_{n\to\infty}t_n=\infty$, $\ds\lim_{n\to\infty}s_n=\infty$ as well.
So the general approach is this: If you believe that a new series is convergent, attempt to find a convergent series whose terms are larger than the terms of the new series; if you believe that a new series is divergent, attempt to find a divergent series whose terms are smaller than the terms of the new series.
Example 13.5.4 Does $\ds\sum_{n=1}^\infty {1\over\sqrt{n^2+3}}$ converge?
Just as in the last example, we guess that this is very much like the harmonic series and so diverges. Unfortunately, $${1\over\sqrt{n^2+3}} < {1\over n},$$ so we can't compare the series directly to the harmonic series. A little thought leads us to $${1\over\sqrt{n^2+3}} > {1\over\sqrt{n^2+3n^2}} = {1\over2n},$$ so if $\sum 1/(2n)$ diverges then the given series diverges. But since $\sum 1/(2n)=(1/2)\sum 1/n$, theorem 13.2.2 implies that it does indeed diverge.
For reference we summarize the comparison test in a theorem.
Theorem 13.5.5 Suppose that $\ds a_n$ and $\ds b_n$ are non-negative for all $n$ and that $\ds a_n\le b_n$ when $n\ge N$, for some $N$.
If $\ds\sum_{n=0}^\infty b_n$ converges, so does $\ds\sum_{n=0}^\infty a_n$.
If $\ds\sum_{n=0}^\infty a_n$ diverges, so does $\ds\sum_{n=0}^\infty b_n$.
Exercises 13.5
Determine whether the series converge or diverge.
Ex 13.5.1$\ds\sum_{n=1}^\infty {1\over 2n^2+3n+5} $(answer)
Ex 13.5.2$\ds\sum_{n=2}^\infty {1\over 2n^2+3n-5} $(answer)
Ex 13.5.3$\ds\sum_{n=1}^\infty {1\over 2n^2-3n-5} $(answer)
Ex 13.5.4$\ds\sum_{n=1}^\infty {3n+4\over 2n^2+3n+5} $(answer)
Ex 13.5.5$\ds\sum_{n=1}^\infty {3n^2+4\over 2n^2+3n+5} $(answer)
Ex 13.5.6$\ds\sum_{n=1}^\infty {\ln n\over n}$(answer)
Ex 13.5.7$\ds\sum_{n=1}^\infty {\ln n\over n^3}$(answer)
Ex 13.5.8$\ds\sum_{n=2}^\infty {1\over \ln n}$(answer)
Ex 13.5.9$\ds\sum_{n=1}^\infty {3^n\over 2^n+5^n}$(answer)
Ex 13.5.10$\ds\sum_{n=1}^\infty {3^n\over 2^n+3^n}$(answer)
|
I am working on a physics research project for school and I have run into some trouble working with Mathematica. I am a fairly inexperienced Mathematica user, so any help would be very much appreciated.
I need to find the roots of the transcendental equation $$\zeta_n \tan(\zeta_n) - \sqrt{R^2-\zeta^2_n}=0$$ and then collect them into a list, $\{\zeta_n\}$, which I can sum over.
FindRoot works but only finds one root at a time. For example $R^2=18$ gives
FindRoot[Sqrt[18. - zeta^2] - zeta Tan[zeta] == 0, {zeta, 3}]
{zeta -> 3.66808}
From a plot of the function I know there should be two roots for this particular value of $R$; however, FindRoot only gives one.
I have had no luck with NSolve or Solve either.
NSolve[xi[zeta, Erg4]- zeta*Tan[zeta] == 0, zeta]
NSolve[Sqrt[18. - zeta^2] - zeta Tan[zeta] == 0, zeta]
Also, once I have succeeded in finding all the roots, how does one put them into an indexed set $\{\zeta_n\}$?
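Since the question is about Mathematica, the following is only a cross-check of the expected answer in Python (an assumption of this note, not part of the original setup): scan the domain on a fine grid, bracket sign changes, skip brackets that contain a pole of $\tan$ (where $\cos$ changes sign), and refine each bracket by bisection. For $R^2=18$ this yields exactly two roots, the larger agreeing with FindRoot's 3.66808.

```python
import math

R2 = 18.0
# Guard the square root against tiny negative values at the endpoint.
f = lambda z: math.sqrt(max(R2 - z * z, 0.0)) - z * math.tan(z)

def bisect(lo, hi, tol=1e-10):
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (lo, mid) if f(lo) * f(mid) <= 0 else (mid, hi)
    return (lo + hi) / 2

roots = []
xs = [i * math.sqrt(R2) / 2000 for i in range(2001)]
for a, b in zip(xs, xs[1:]):
    if math.cos(a) * math.cos(b) < 0:   # bracket contains a pole of tan
        continue
    if f(a) * f(b) < 0:                 # genuine sign change -> refine it
        roots.append(bisect(a, b))

print(roots)  # two roots; the larger one matches FindRoot's 3.66808
```

In Mathematica itself, the analogous strategy is to FindRoot from many starting points (or use Reduce over a bounded interval) and collect the results into a list.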
|
There are many problems that a data scientist encounters when “fighting” financial data for the first time: nothing is normally distributed, most problems are tough (low signal to noise ratio) and non-stationary high-dimensional time series are ubiquitous.
In Quantdare we have spoken many times about one of the main sources of non-stationarity in financial time series: volatility. It is very well-known that volatility (standard deviation) is not constant in financial markets and volatility clustering is one of the clearest characteristics of financial returns.
In this blog, you can find some references to the matter, for example in our posts
Learning with kernels, Financial Time Series Generation or How do stock market prices work? (and probably more that I’m missing).
First things first, the covariance matrix for two assets can be written as:
\[\Sigma = \begin{bmatrix} \sigma_{1}^{2} & \rho \sigma_{1} \sigma_{2}\\ \rho \sigma_{1} \sigma_{2} & \sigma_{2}^{2}\end{bmatrix}\]
where \(\sigma_{i}\) is the volatility (standard deviation) of asset i.
When we speak about time-varying volatility, we are actually speaking about \(\sigma_{i}\) changing its behaviour with time. However, the off-diagonal terms in the covariance matrix can vary with time too, (and they do). These off-diagonal terms are the covariances and, if standardised with the volatilities of the variables, they become the well-known Pearson correlation coefficient \(\rho\) (if you don’t know what I’m talking about, you can visit this really neat explanation on what’s Pearson correlation and how it is used in finance).
Why is this important?
As you probably know, covariance is important because most (or all) of the portfolio optimisation problems include the following quadratic form:
\[\omega^{T}\,\Sigma\,\omega \]
where \(\Sigma\) is the variance-covariance matrix.
This quadratic form is the level of risk (variance) of our portfolio and, consequently, this expression is vital for risk management, diversification, computing the efficient frontier and much more…
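The quadratic form $\omega^T \Sigma\, \omega$ is easy to compute directly. Here is a sketch for the two-asset case, with purely illustrative numbers (the volatilities, correlation and weights below are assumptions for the example, not data from the post):

```python
import math

# Portfolio variance w^T Sigma w for two assets, building Sigma from
# volatilities and the Pearson correlation as in the 2x2 matrix above.
def portfolio_variance(w, sigmas, rho):
    s1, s2 = sigmas
    cov = [[s1 * s1, rho * s1 * s2],
           [rho * s1 * s2, s2 * s2]]
    return sum(w[i] * cov[i][j] * w[j] for i in range(2) for j in range(2))

# Illustrative inputs: sigma1 = 10%, sigma2 = 20%, rho = 0.5, equal weights.
var = portfolio_variance([0.5, 0.5], [0.10, 0.20], 0.5)
print(var, math.sqrt(var))  # portfolio variance and volatility
```

If $\rho$ drifts upwards over time, the same weights produce a larger variance, which is exactly the risk-underestimation problem discussed next.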
It is easy to see that, if the matrix \(\Sigma\) changes with time, then our optimisation objective will change too and our optimal solution or risk assessment can become totally wrong!
In addition, there’s strong academic evidence in favor of
the existence of contagion, i.e. increasing correlations between assets during financial crises. This is especially true for equity markets, but it's a stylised fact of financial returns in general. If we don't take this effect into account, we would be underestimating the level of risk of our portfolio, and that's really scary.
Let’s look at a very simple example.
A clarifying example
Gold is broadly known as the anti-dollar, because it has a negative correlation with the US Dollar.
Using EURUSD exchange rate as an inverse proxy for the value of the dollar, we can compute the strength of this relationship. As EURUSD is the Dollar denominated price of one Euro, we will be expecting a positive correlation between gold prices and EURUSD.
However, as correlations are dynamic and vary with time, how can we know what the true value of the instantaneous correlation between EURUSD and Gold is, as a function of time?
One first solution would be to compute the rolling Pearson correlation coefficient between the returns of both time series. But an important doubt quickly comes to our mind: what window should we use?
As you can see, this simple estimate of correlation can vary a lot depending on the sample window we’re using so we might need a more powerful tool to solve this problem.
The experiment
In order to find out what the best way is to uncover the true dynamic correlation between two assets, we're going to create two artificial correlation time series. To generate those, we've just filtered the rolling sample correlations of the previous plot. One of these correlations varies quickly and the other one is much smoother.
To generate random samples from a multivariate Gaussian distribution we need more information than just correlations. We have to artificially create full covariance matrices that vary with time. From each of these dynamic covariances we are going to sample (jointly) a pair of artificial returns.
In order to do so, in each time step, we use the Cholesky decomposition \(L_{t}\) of the instantaneous covariance matrix \(\Sigma_{t}\) to draw two correlated samples from a multivariate gaussian distribution:
\[ \Sigma_{t} = L_{t}^{T}L_{t} \quad x_{t} = L_{t}^{T}z_{t} \]
where \(x_{t}\) is a vector of generated random returns and \(z_{t}\) is a vector of independent samples from a standard normal.
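A single time step of this sampling scheme can be sketched as follows (an illustration with made-up volatilities and correlation; note I use the more common lower-triangular convention $\Sigma = LL^T$, $x = Lz$, which is equivalent to the $L^T$ form above up to transposition):

```python
import math, random

# Cholesky factor of a 2x2 covariance built from vols s1, s2 and
# correlation rho: Sigma = L L^T with L lower-triangular.
def chol2(s1, s2, rho):
    c = rho * s1 * s2
    l11 = s1
    l21 = c / l11
    l22 = math.sqrt(s2 * s2 - l21 * l21)
    return l11, l21, l22

random.seed(0)
s1, s2, rho = 0.01, 0.02, 0.6          # illustrative daily vols and correlation
l11, l21, l22 = chol2(s1, s2, rho)

# Draw many correlated return pairs x = L z with z standard normal.
xs, ys = [], []
for _ in range(50000):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    xs.append(l11 * z1)
    ys.append(l21 * z1 + l22 * z2)

# The empirical correlation of the generated pairs should recover rho.
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / len(xs)
sx = math.sqrt(sum((a - mx) ** 2 for a in xs) / len(xs))
sy = math.sqrt(sum((b - my) ** 2 for b in ys) / len(ys))
r = cov / (sx * sy)
print(r)  # close to 0.6
```

In the actual experiment, $L_t$ is recomputed at every time step from the time-varying $\Sigma_t$, so the correlation of the generated pairs drifts along with the target series.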
Repeating this procedure for several timesteps \(t\), we get:
It’s easy to notice that for the fast-varying covariance, the volatility of the returns is less stable and so will be the correlation coefficient between the two series.
Once we have generated these random pairs of series, we’re going to recover the dynamic correlation coefficient for each pair, using different standard techniques. Since we have created these pairs ourselves, we know the true dynamic correlation of the pairs and thus, we are now able to set a proper competition between the different algorithms. May the best win!
The tools of the quant
To tackle this problem, there are several tools that any good quant can use:
Dynamic conditional correlation (DCC) model: this model is a form of multivariate GARCH that assumes an ARMA process for the conditional correlation matrix and univariate GARCH(1, 1) processes for the volatility of the individual assets.
BEKK model: this is another multivariate GARCH variant but, in this case, the full covariance matrix follows an ARMA process.
RiskMetrics 2006: this methodology by JP Morgan is broadly used in the industry. The classical version proposes estimating the covariance matrix using an exponentially weighted scheme of samples with a smoothing parameter of 0.94. The 2006 version of the methodology blends the estimates made with a range of different smoothing parameters in order to produce a more accurate final estimate.
Kalman filter: this is actually a state space model but can be used to compute instantaneous regression betas, as explained in this very nice post. In order to be able to use it, the beta of the regression has to be equivalent to the correlation coefficient. By definition, this is the case when the samples are scaled to unit variance. In our example, as volatility also changes with time, we have standardised the returns with the one-year rolling volatility.
Once we have introduced some algorithms to try, let’s make them compete:
Fast-varying correlation
Slow-varying correlation
In the plots above, the gray area represents the square error of the estimates.
Surprisingly enough, all the methods are pretty good at estimating the original dynamic correlation of the returns, and, at least, they are all able to estimate roughly if correlation changes quickly or slowly.
In order to compare the inference accuracy of the different algorithms we can use the Mean Square Error of the estimated correlation relative to the true one:
Fast Slow DCC 0.0194 0.0068 BEKK 0.0204 0.0079 Riskmetrics 0.0229 0.0162 Kalman 0.0142 0.0038
Even though the Kalman filter is a very general technique, it manages to beat all the models that were specifically designed for finance. Bravo!
Getting Bayesian
And, finally, just because anything is better if you do it Bayes' way, we can estimate correlation as the beta of a Bayesian rolling regression. It is extremely easy to implement using probabilistic programming in Python thanks to the
pymc3 package. In the docs, you can find a very nice example of bayesian rolling regression.
Using this Bayesian approach, it is straightforward to come up with some uncertainty bounds around our correlation estimate.
Even though having uncertainty estimates is much better than not having them, the plots suggest that the estimated percentile 10 and percentile 90 bounds are a bit too conservative:
To sum up, Kalman filter and other dynamic regression methods (including Bayesian ones) really seem better in estimating time-varying correlation than the (more complex) multivariate financial models specifically designed for this task. Deal with it, JP Morgan!
Unfortunately, our example is too simple for portfolio applications. If you are not interested in just the correlation between two series or if you need to estimate the full covariance matrix between many assets, the dynamic regression approach is not an elegant option anymore, and the problem can be much more complex than this simple example. For those cases, this paper proposes a very elegant alternative to the traditional multivariate GARCH models when estimating a full covariance matrix.
I hope you find this post useful and, from now on, please get your correlations right!
|
The conceptually simplest way to produce a W state is somewhat analogous to classical reservoir sampling, in that it involves a series of local operations that ultimately create a uniform effect.
Basically, you look at each qubit in turn and consider "how much amplitude do I have left in the all-0s state, and how much do I want to transfer into the just-this-qubit-is-ON state?". It turns out that the family of rotations you need is what I'll call the "odds gates" which have the following matrix:
$$M(p:q) = \sqrt{\frac{1}{p+q}} \begin{bmatrix} \sqrt{p} & \sqrt{q} \\ -\sqrt{q} & \sqrt{p} \end{bmatrix}$$
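The amplitude bookkeeping behind these gates can be sketched classically (an illustration of the idea, not the circuit itself): at step $i$, with $N-i$ qubits still unhandled, transfer a $1/(N-i)$ share of the remaining squared amplitude into "this qubit is ON" and carry the rest forward. Every qubit ends up with amplitude $1/\sqrt{N}$.

```python
import math

# Classical sketch of the "transfer out of what was left behind" strategy:
# each step peels off a 1/(N - i) share of the remaining squared amplitude,
# which is exactly what the odds gate M(1 : N - i - 1) does on the |0...0> part.
def w_state_amplitudes(N):
    amps, remaining = [], 1.0
    for i in range(N):
        share = 1.0 / (N - i)                 # odds 1 : (N - i - 1)
        amps.append(remaining * math.sqrt(share))
        remaining *= math.sqrt(1.0 - share)
    return amps

amps = w_state_amplitudes(5)
print(amps)  # five equal amplitudes, each 1/sqrt(5)
```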
Using these gates, you can get a W state with a sequence of increasingly-controlled operations:
This circuit is somewhat inefficient. It has cost $O(N^2 + N \lg(1/\epsilon))$ where $N$ is the number of qubits and $\epsilon$ is the desired absolute precision (since, in an error corrected context, the odds gates are not native and must be approximated).
We can improve the efficiency by switching from a "transfer out of what was left behind" strategy to a "transfer out of what is traveling along" strategy. This adds a fixup sweep at the end, but only requires single controls on each operation. This reduces the cost to $O(N \lg(1/\epsilon))$:
It is still possible to do better, but it starts to get complicated. Basically, you can use a single partial Grover step to get $N$ amplitudes equal to $\sqrt{1/N}$, but they will be encoded into a binary register (we want a one-hot register with a single bit set). Fixing this requires a binary-to-unary conversion circuit. The tools needed to do this are covered in "Encoding Electronic Spectra in Quantum Circuits with Linear T Complexity". Here are the relevant figures.
The partial Grover step:
How to perform an indexed operation (well... sort of; the closest figure had an accumulator, which is not quite right for this case):
Using this more complicated approach reduces the cost from $O(N \lg(1/\epsilon))$ to $O(N + \lg(1/\epsilon))$.
|
I'm studying
Stochastic Processes by Richard F. Bass. Within this book I encountered the definition of a Markov process, which is given as follows:
We are given a separable metric space $S$ endowed with its Borel $\sigma$-field and a measurable space $(\Omega, \mathcal{F})$ together with a filtration $\{\mathcal{F}_t\}$. Then
Definition 19.1 A Markov process $(X_t , \Bbb{P}^x)$ is a stochastic process $$X : [0, \infty) \times \Omega \to S$$ and a family of probability measures $\{\Bbb{P}^x : x \in S\}$ on $(\Omega, \mathcal{F})$ satisfying the following:
(1) For each $t$, $X_t$ is $\mathcal{F}_t$ measurable.
(2) For each $t$ and each Borel subset $A$ of $S$, the map $x \mapsto \Bbb{P}^x (X_t \in A)$ is Borel measurable.
(3) For each $s, t \geq 0$, each Borel subset $A$ of $S$, and each $x \in S$, we have $$ \Bbb{P}^x(X_{s+t} \in A \mid \mathcal{F}_s) = \Bbb{P}^{X_s} (X_t \in A), \quad \Bbb{P}^x\text{-a.s.}$$
Okay, this definition was fine. Then in the next chapter, the author begins with the setting that $(X_t , \Bbb{P}^x)$ is a Markov process with respect to $\mathcal{F}_t^{00} = \sigma(X_s : s \leq t)$ such that its sample path is càdlàg with probability 1 under $\Bbb{P}^x$ for all $x \in S$. With $$ \mathcal{F}_t^{0} = \sigma \left(\mathcal{F}_t^{00} \cup \{ A \subset S : A \text{ is } \Bbb{P}^x\text{-null for all } x \in S\} \right) \quad \text{and} \quad \mathcal{F}_t = \mathcal{F}_{t+}^{0} = \bigcap_{\epsilon > 0} \mathcal{F}_{t+\epsilon}^{0},$$
he proved the following property:
Theorem 20.6 Let $(X_t , \Bbb{P}^x)$ be a Markov process and suppose for all bounded Borel measurable function $f$, $$ \Bbb{E}^x[ f(X_{s+t}) \mid \mathcal{F}_s] = \Bbb{E}^{X_s} [ f(X_t)], \quad \Bbb{P}^x − \text{a.s.} $$ holds. Suppose $Y$ is bounded and measurable with respect to $\mathcal{F}_{\infty} = \bigvee_{s \geq 0} \mathcal{F}_s $. Then $$ \Bbb{E}^x[Y \circ \theta_s \mid \mathcal{F}_s] = \Bbb{E}^{X_s} Y, \quad \Bbb{P}^x − \text{a.s.}$$
Until now, still there was no problem. Then it claims the
Blumenthal 0-1 law, which is stated as follows:
Proposition 20.8 Let $(X_t , \Bbb{P}^x)$ be a Markov process with respect to $\{\mathcal{F}_t\}$. If $A \in \mathcal{F}_0$, then for each $x$, $\Bbb{P}^x(A)$ is equal to $0$ or $1$.
Proof. Suppose $A \in \mathcal{F}_0$. Under $\Bbb{P}^x$, $X_0 = x$, a.s., and then $$\Bbb{P}^x(A) = \Bbb{E}^{X_0}\mathbf{1}_A = \Bbb{E}^x[\mathbf{1}_A \circ \theta_0 | \mathcal{F}_0] = \mathbf{1}_A \circ \theta_0 = \mathbf{1}_A \in \{0, 1\}, \quad \Bbb{P}^x - \text{a.s.} $$ since $\mathbf{1}_A \circ \theta_0$ is $\mathcal{F}_0$ measurable. Our result follows because $\Bbb{P}^x(A)$ is a real number and not random.
I was puzzled by the claim of the proof, that $X_0 = x$ a.s. under $\Bbb{P}^x$. Clearly there is no such assumption in Definition 19.1 above. So I tried to prove this using the definition. I succeeded in circumventing this seemingly unjustified claim when there is some $x_0 \in S$ such that $\Bbb{P}^{x}(X_0 = x_0) > 0$, but my approach turned out to be inadequate for a general proof.
So here is my question: Is there an argument which either avoids or proves the claim $\Bbb{P}^x (X_0 = x) = 1$? Or is there a counter-example, so that we should just embrace this as a part of the definition and insert it into the incomplete Definition 19.1?
|
I was observing here and conjectured $(1)$:
$$\lim_{n\to\infty}{S_{n-1}S_{n+2}\over S_nS_{n+1}}=e^2\tag1$$
Given Harlan Brothers' formula $(2)$:
$$\lim_{n\to\infty}{S_{n-1}S_{n+1}\over S_n^2}=e\tag2$$
Trying to prove $(1)$:
$(1)\div(2)$
$$\lim_{n\to\infty}{S_{n-1}S_{n+2}\over S_nS_{n+1}}\times{S_n^2\over S_{n-1}S_{n+1}}=e\tag3$$
Simplified to
$$\lim_{n\to\infty}{S_{n}S_{n+2}\over S_{n+1}^2}=e\tag4$$
$(4)$ is of the same form as $(2)$; hence can we say $(1)$ is correct?
2nd conjecture
Another conjecture from observing $(1)\div(2)^2$, simplified to
$$\lim_{n\to\infty}{S_n^3S_{n+2}\over S_{n-1}S_{n+1}^3}=1\tag5$$
I noticed that it takes the binomial coefficients; you can see a pattern emerge:
$$\lim_{n\to\infty}{S_n^4S_{n+2}^4\over S_{n-1}S_{n+1}^6S_{n+3}}=1\tag6$$
$$\lim_{n\to\infty}{S_n^5S_{n+2}^{10}S_{n+4}\over S_{n-1}S_{n+1}^{10}S_{n+3}^5}=1\tag7$$
and so on ...
We can write as
$$\lim_{n\to \infty}\prod_{k=0}^{m}S_{n+k-1}^{(-1)^{k+1}{m\choose k}}=1\tag8$$ $m\ge3$
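Conjectures $(2)$ and $(8)$ can be checked numerically. Assuming, as in Brothers' Pascal's-triangle result, that $S_n$ denotes the product of the entries in row $n$ of Pascal's triangle (an assumption of this check), working with $\log S_n$ avoids the huge products:

```python
import math

# log S_n for S_n = product of binomial coefficients C(n, k), k = 0..n.
def logS(n):
    return sum(math.log(math.comb(n, k)) for k in range(n + 1))

n = 50
# (2): S_{n-1} S_{n+1} / S_n^2 should approach e.
ratio2 = math.exp(logS(n - 1) + logS(n + 1) - 2 * logS(n))
# (8) with m = 3, i.e. (5): S_n^3 S_{n+2} / (S_{n-1} S_{n+1}^3) should approach 1.
ratio8 = math.exp(3 * logS(n) + logS(n - 1 + 3) - logS(n - 1) - 3 * logS(n + 1))
print(ratio2, ratio8)  # near e and near 1
```

Of course, as noted below, agreement at one value of $n$ is evidence rather than proof.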
Numerically we have verified this for a certain range of $n$, but that does not necessarily indicate that it holds for larger values of $n$.
How can we prove $(8)?$
|
Given a positive integer $n$ which is not a perfect square, it is well-known that Pell's equation $a^2 - nb^2 = 1$ is always solvable in non-zero integers $a$ and $b$.
Question:Let $n$ be a positive integer which is not a perfect square. Is there always a polynomial $D \in \mathbb{Z}[x]$ of degree $2$, an integer $k$ and nonzero polynomials $P, Q \in \mathbb{Z}[x]$ such that $D(k) = n$ and $P^2 - DQ^2 = 1$, where $a = P(k)$, $b = Q(k)$ is the fundamental solution of the equation $a^2 - nb^2 = 1$?
If yes, is there an upper bound on the degree of the polynomials $P$ and $Q$ -- and if so, is it even true that the degree of $P$ is always $\leq 6$?
Example: Consider $n := 13$. Putting $D_1 := 4x^2+4x+5$ and $D_2 := 25x^2-14x+2$, we have $D_1(1) = D_2(1) = 13$. Now the fundamental solutions of the equations $P_1^2 - D_1Q_1^2 = 1$ and $P_2^2 - D_2Q_2^2 = 1$ are given by
$P_1 := 32x^6+96x^5+168x^4+176x^3+120x^2+48x+9$,
$Q_1 := 16x^5+40x^4+56x^3+44x^2+20x+4$
and
$P_2 := 1250x^2-700x+99$,
$Q_2 := 250x-70$,
respectively. Therefore $n = 13$ belongs to at least $2$ different series whose solutions have ${\rm deg}(P) = 6$ and ${\rm deg}(P) = 2$, respectively.
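The $n = 13$ example is easy to verify mechanically: evaluating the polynomials at several integers confirms the Pell identity $P^2 - DQ^2 = 1$, and $k = 1$ recovers the fundamental solution $(649, 180)$ of $a^2 - 13b^2 = 1$.

```python
# Verify the stated n = 13 example at several integer arguments.
D1 = lambda x: 4*x**2 + 4*x + 5
P1 = lambda x: 32*x**6 + 96*x**5 + 168*x**4 + 176*x**3 + 120*x**2 + 48*x + 9
Q1 = lambda x: 16*x**5 + 40*x**4 + 56*x**3 + 44*x**2 + 20*x + 4

D2 = lambda x: 25*x**2 - 14*x + 2
P2 = lambda x: 1250*x**2 - 700*x + 99
Q2 = lambda x: 250*x - 70

for x in range(-5, 6):
    assert P1(x)**2 - D1(x) * Q1(x)**2 == 1
    assert P2(x)**2 - D2(x) * Q2(x)**2 == 1

print(D1(1), P1(1), Q1(1))  # 13 649 180
```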
Examples for all non-square $n \leq 150$ can be found here.
Added on Feb 3, 2015: All that remains to be done in order to turn Leonardo's answers into a complete answer to the question is to find out which values the index of the group of units of $\mathbb{Z}[\sqrt{n}]$ in the group of units of the ring of integers of the quadratic field $\mathbb{Q}(\sqrt{n})$ can take. This part is presumably not even really MO level, but it's just not my field -- maybe someone knows the answer?
Added on Feb 14, 2015: As nobody has completed the answer so far, it seems this may be less easy than I thought on a first glance.
Added on Feb 17, 2015: Leonardo Zapponi has now given a complete answer to the question in this note.
|
Problem
Take this (easy) problem as an example:
An astronomer is interested in measuring the distance, in light-years, from his observatory to a distant star. Although the astronomer has a measuring technique, he knows that, because of changing atmospheric conditions and normal error, each time a measurement is made it will not yield the exact distance, but merely an estimate. As a result, the astronomer plans to make a series of measurements and then use the average value of these measurements as his estimated value of the actual distance. If the astronomer believes that the values of the measurements are independent and identically distributed random variables having a common mean d (the actual distance) and a common variance of 4 (light-years), how many measurements need he make to be reasonably sure that his estimated distance is accurate to within 0.5 light-years?
Approaches
At first sight, what seemed most reasonable to me was to use Chebyshev's inequality, as follows:
$X_{i}$ = distance in l.y. observed in experiment $i$
$E(X_{i})=\mu =d$
$Var(X_{i})=\sigma^{2} = 4$
Sample Mean:
$E(\bar{X}_{n})=\mu =d$
$Var(\bar{X}_{n})=\frac{\sigma^{2}}{n} = \frac{4}{n}$
So by Chebyshev's inequality we have:
$P(\left | \bar{X}_{n}-d \right | <0.5)>1-\frac{4}{n \cdot 0.5^{2}} \\ = 1- \frac{1}{n}\cdot \frac{4}{0.5^{2}} \\ = 1- \frac{16}{n}$
So if we interpret the phrase "reasonably sure" as meaning probability $0.95$, then $1- \frac{16}{n} = 0.95$ when $n= 320$.
So I would answer with: $n=320$ is enough.
But, using the Central Limit Theorem, we have that:
$P(\left | \bar{X}_{n}-d \right |\leq 0.5) \\ = P\left \{ \sqrt{n}\frac{\left | \bar{X}_{n}-d \right |}{2} \leq \frac{\sqrt{n}}{4}\right \}\\ =P\left ( \left | Z \right | \leq \frac{\sqrt{n}}{4} \right ) \\ \approx \Phi \left ( \frac{\sqrt{n}}{4} \right )-\Phi \left (- \frac{\sqrt{n}}{4} \right ) \\ =2\Phi \left ( \frac{\sqrt{n}}{4} \right )-1$
where the symbol $\Phi(z)$ denotes the cumulative distribution function of a standard normal variable. Hence, $n$ should be, approximately, the value such that:
$2\Phi \left ( \frac{\sqrt{n}}{4} \right )=0.975$
In other terms, $\frac{\sqrt{n}}{4}$ is approximately equal to the $97.5$% quantile of the standard normal distribution. Using the normal table, we find that $\frac{\sqrt{n}}{4} = 1.96$ and thus $n = (1.96 \cdot 4)^{2} = 61.466$, so $n$ must be equal to $62$, which is very different from the result I ended up with, which is $320$.
Questions
What's the reasoning behind this? This was an exercise I had to solve, but I used the Chebyshev inequality approach instead of the Central Limit Theorem (used by my prof), and the results are very different. Is it correct? Am I missing something important?
Any clarification is appreciated.
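As a sanity check on the arithmetic in both approaches (added here for illustration), the two prescribed sample sizes can be computed directly from the formulas above:

```python
import math

# P(|Xbar - d| < eps) >= 1 - alpha, with variance sigma2 = 4, eps = 0.5.
sigma2, eps, alpha = 4.0, 0.5, 0.05

# Chebyshev: need sigma2 / (n * eps^2) <= alpha.
n_cheb = math.ceil(sigma2 / (eps**2 * alpha))

# CLT: need sqrt(n) * eps / sigma >= z_{0.975} ~= 1.96.
n_clt = math.ceil((1.96 * math.sqrt(sigma2) / eps) ** 2)

print(n_cheb, n_clt)  # 320 62
```

Both numbers match the hand computations; the gap reflects that Chebyshev's bound holds for every distribution with that variance, while the CLT approximation exploits approximate normality of the sample mean.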
|
Contents MA 453 Fall 2008 Professor Walther News
Here are some basic pointers:
- In order to do any editing, you must be logged in with your Purdue career account.
- If you look under MediaWiki FAQ, you get lots of instructions on how to work with Rhea. Some important things are under item 4 in that manual.
- If you want to do things like $ \sum_{i=1}^\infty 1/i^2 = \frac{\pi^2}{6} $ here then you should look a) at the "view source" button on this page and b) get acquainted with Latex [1], a text-formatting program designed to write math stuff.
Here is some more math, to show you math-symbol commands: $ \forall x\in{\mathbb R}, x^2\ge 0 $, $ \exists n\in{\mathbb N}, n^2\le 0 $ where $ {\mathbb N}=\{0,1,2,\ldots\} $
If you need to find out a latex command, google for the thing you want to make, latex, and "command". (E.g., google for "integral latex command".)
If you want to make a new page, all you need to do is to invent one. For example, let's say I want to make a page for further instructions on how to deal with Rhea. I just type "double-left-square-bracket page with more instructions double-right-square-bracket", where of course I use the actual brackets. The effect is: I get a link (initially red) to a page that is the empty set. Once I click it, the link page with more instructions_MA453Fall2008walther turns blue and I am transferred to a newborn page of name as indicated.
Note: it may take a few minutes for the new page to start existing. If you click the red link and nothing happens, wait a bit and try again.
Ideas what to put on Rhea
Course notes, HW discussion, solutions to problems you encountered while using Rhea (how do you upload, make links, post movies, ...)
For week 1, click this link here and on that new page create a page as outlined above. Then move to that page and state your favorite theorem. Why is it your favorite theorem? Do other people have the same favorite theorem? Crosslink! Use the math environment if appropriate.
For week 3: post and discuss the notion/theorem that you have found hardest to understand so far. Alternatively, find somebody else's post and reply to it by explaining how you understand things.
Rhea Questions Course notes Discussion topics Homework Discussion
Homework 1, September 4
Homework 2, September 11 Homework 3, September 18 Homework 4, September 25 Homework 5, October 2 Homework 6, October 9 Homework 7, October 23 Homework 8, October 30 Homework 9, November 6 Homework 10, November 13 Homework 11, November 20 Homework 12, December 4 Homework 13, December 11 Math News Interesting Articles Latex comments
More Latex!_MA453Fall2008walther - Latex commands from NASA!
|