| title | question_body | answer_body | tags | accepted |
| string | string | string | string | int64 |
|---|---|---|---|---|
Non-cyclic subgroup of order 4 in non-dihedral group
|
A group $G$ has sixteen elements: $$\{e, r, r^2, \dots , r^7, s, rs, r^2s, \dots , r^7s\},$$ where $r$ and $s$ satisfy the relations $r^8 = e, s^2 = e, sr = r^3s$ . (Note that $G$ is not a dihedral group.) a) Make a list of the cyclic subgroups of $G$ . Make sure to state the elements of each cyclic subgroup, and do not list any subgroup more than once. b) The group $G$ has two subgroups of order $4$ which are not cyclic. State the elements of each of these subgroups. I have managed to answer part a), and for part b) I have found the Klein four-group $\{e, r^4, s, r^4s\}$ , but I can't find the second subgroup of order $4$ . Could someone please help with this? Thanks for your answers!
|
Consider $$\{e, r^2s, r^4, r^6s\}.$$
|
|group-theory|cyclic-groups|combinatorial-group-theory|
| 0
|
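The subgroup proposed in this answer can be checked mechanically. The sketch below is my own verification, not part of the original answer: it encodes each element $r^a s^b$ as a pair `(a, b)` and derives the multiplication rule from $sr = r^3s$.

```python
# Sketch (my own verification, not from the answer): model
# G = <r, s | r^8 = e, s^2 = e, sr = r^3 s> by encoding r^a s^b as (a, b),
# and check that {e, r^2 s, r^4, r^6 s} is a non-cyclic subgroup of order 4.

def mul(p, q):
    # (r^a s^b)(r^c s^d): pushing s past r^c uses s r^c = r^{3c} s.
    a, b = p
    c, d = q
    return ((a + (3 ** b) * c) % 8, (b + d) % 2)

e = (0, 0)
H = {e, (2, 1), (4, 0), (6, 1)}        # {e, r^2 s, r^4, r^6 s}

# A finite subset containing e and closed under the operation is a subgroup.
assert all(mul(x, y) in H for x in H for y in H)

# Every non-identity element squares to e, so H is Klein four, hence not cyclic.
assert all(mul(x, x) == e for x in H)
```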
Transform $y=\sin(x)$ with a matrix
|
I want to transform the graph $y=\sin{x}$ by the matrix $\begin{bmatrix}3 & 4 \\-2 & 8\end{bmatrix}$ , which, judging roughly, would give a clockwise rotation and a stretch. My solution is to first parametrize the graph to get $x=t$ and $y=\sin{t}$ . Then, I computed $$\begin{bmatrix}3 & 4 \\-2 & 8\end{bmatrix}\begin{bmatrix} t \\ \sin{t} \end{bmatrix}$$ to get the transformed parametric equations of $3t+4\sin{t}=x$ and $-2t+8\sin{t}=y$ . My logic was that, by parameterizing $y=\sin{x}$ , I am basically representing each point in the original function as a vector, and then transforming each of these individual vectors. When graphed, it looks like this (figure omitted): which seems more or less in agreement with my prediction. However, I'm also wondering if we can transform a general set of coordinates $(x,y)$ and plug the transformed version back into the original equation as such: $$\begin{bmatrix} 3 & 4\\-2 & 8\end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix}=\begin{bmatrix}3x+4y\\-2x+8y\end{bmatrix}$$ Hen
|
Using $x'$ and $y'$ for the transformed points, from your method 1, $$\begin{align*} \begin{bmatrix}3 & 4 \\-2 & 8\end{bmatrix} \begin{bmatrix} t \\ \sin{t} \end{bmatrix} &= \begin{bmatrix} x' \\ y' \end{bmatrix}\\ \begin{bmatrix} t \\ \sin{t} \end{bmatrix} &= \begin{bmatrix}3 & 4 \\-2 & 8\end{bmatrix}^{-1} \begin{bmatrix} x' \\ y' \end{bmatrix}\\ &= \frac1{32}\begin{bmatrix}8 & -4 \\2 & 3\end{bmatrix} \begin{bmatrix} x' \\ y' \end{bmatrix}\\ &= \frac{1}{32} \begin{bmatrix} 8x'-4y' \\ 2x'+3y' \end{bmatrix} \end{align*}$$ Eliminate $t$ by substituting $t = \frac{8x'-4y'}{32}$ into $\sin t = \frac{2x'+3y'}{32}$ , to obtain: $$\frac{2x'+3y'}{32} = \sin \left(\frac{8x'-4y'}{32}\right)$$ Fundamentally, the equation $y=\sin x$ relates the points before transformation $(x,y)=(t, \sin t)$ . Writing $y' \overset ?= \sin x'$ directly with the transformed points doesn't work. As for the meaning of the second diagram, the curve $-2x+8y=\sin{(3x+4y)}$ relates coordinates using the new basis $\mathb
|
|linear-algebra|matrices|trigonometry|linear-transformations|
| 0
|
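The implicit equation derived in this answer can be sanity-checked numerically. This sketch (assuming nothing beyond the matrix and curve given in the question) pushes sample points of $y=\sin x$ through the matrix and tests the relation $(2x'+3y')/32 = \sin\bigl((8x'-4y')/32\bigr)$.

```python
import math

# Numeric sketch: points on y = sin x, pushed through M = [[3, 4], [-2, 8]],
# should satisfy the implicit equation obtained via M^{-1}:
#   (2x' + 3y')/32 = sin((8x' - 4y')/32)
M = [[3, 4], [-2, 8]]

for k in range(-20, 21):
    t = k / 5
    x1 = M[0][0] * t + M[0][1] * math.sin(t)   # x' = 3t + 4 sin t
    y1 = M[1][0] * t + M[1][1] * math.sin(t)   # y' = -2t + 8 sin t
    lhs = (2 * x1 + 3 * y1) / 32               # equals sin t identically
    rhs = math.sin((8 * x1 - 4 * y1) / 32)     # argument equals t identically
    assert abs(lhs - rhs) < 1e-12
```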
Confusion about function composition
|
I've gone through many math courses thinking that a function $f(x)=x^2$ could mean $f(anything) = anything^2$ , however I realized that that isn't always true. For example, suppose we have a function $f(x) = kx$ where $f$ represents the opposing force. We can't just substitute $t$ for time into that equation, as that isn't physically true. It could be the case that $x=2t$ . In that case, $f(t) = 2kt$ . But suppose that $t$ is actually a function of another variable $p$ . Let $t(p)= p^2$ . If I were to naively try to do function composition to get $f(t(p))$ , I would get $f(t(p)) = kp^2$ , ignoring the relationship between $x$ and $t$ . The correct answer would be $f(t(p)) = 2kp^2$ . What is really going on when I do the above function composition? Is there something more complex going on? Update: Even after reading @peek-a-boo's post, I am still confused about a couple of points. I guess my original question might not have been as clear as I wanted it to be. Say we have a spring that f
|
"however I realized that that isn't always true." Wrong! Whoever/whatever made you change your mind about your previous sentence, ignore them/that (for now). If $f:\Bbb{R}\to\Bbb{R}$ is the function defined for each $x\in\Bbb{R}$ as $f(x)=x^2$ , then $f(@)=@^2$ , and $f(t)=t^2$ and $f(\ddot{\smile})=\ddot{\smile}^2$ . Anyone telling you otherwise is just trying to skip a whole bunch of steps because they’re impatient/lazy. If $t$ is a real number and you consider the real number $x=2t$ , then $f(x)=f(2t)=(2t)^2=4t^2$ . I have said this before (e.g. here and here) and I’ll say it again: math does not care what your favourite letter is. Now let us define a new function $F:\Bbb{R}\to\Bbb{R}$ as $F(x)=kx$ (I’m explicitly using a different letter than $f$ because the $f$ above is different from this $F$ ). If you now fix a number $t$ and decide to consider the number $x=2t$ , then $F(x)=F(2t)=k(2t)=2kt$ . Now, let us introduce three new functions $X:\Bbb{R}\to\Bbb{R}$ , defined as $X(t)=2t$ .
|
|functions|
| 0
|
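The distinction drawn in this answer can be made concrete in code. In this sketch the names `F`, `X`, and `t_of_p` are illustrative stand-ins for the functions discussed ($F(x)=kx$, $x=2t$, $t(p)=p^2$); they are not from the original post.

```python
# Sketch distinguishing the two compositions in the question.
k_ = 3.0

def F(x):          # F(x) = k x  (the "opposing force" as a function of x)
    return k_ * x

def X(t):          # the physical relation x = 2t
    return 2 * t

def t_of_p(p):     # t as a function of p: t(p) = p^2
    return p ** 2

p = 1.5
# Naive composition F(t(p)) ignores the x = 2t relation:
assert F(t_of_p(p)) == k_ * p ** 2
# The physically meaningful quantity composes through X: F(X(t(p))) = 2 k p^2.
assert F(X(t_of_p(p))) == 2 * k_ * p ** 2
```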
Law of total variance on conditional expectation
|
Let's take 3 random variables $X,Y,Z$ on the same probability space (or associated with the same experiment), then the law of total variance states that: $$ V[X] = V[E[X|Z]] + E[V[X|Z]] $$ Then what if I want to apply the law of total variance on the following conditional variance: $V[X|Y]$ ? The goal is to decompose this conditional variance further. Would this result in something like $$ V[X|Y] = V[E[X|Z|Y]] + E[V[X|Z|Y]] $$ or maybe $$ V[X|Y] = V[E[X|Z,Y]|Y] + E[V[X|Z,Y]|Y] $$ Thanks a lot for any help or suggestions.
|
$V\big[E[X\mid Z\mid Y]\big] + E\big[V[X\mid Z\mid Y]\big]$ does not make much sense notationally. $V\big[E[X\mid Z]\mid Y\big] + E\big[V[X\mid Z]\mid Y\big]$ makes sense notationally but does not always give $V[X\mid Y]$ . $V[X\mid Y] = V\big[E[X\mid Z,Y]\mid Y\big] + E\big[V[X\mid Z,Y]\mid Y\big]$ both makes sense and is what you want. To illustrate the distinction consider the example where $Y$ and $Z$ are i.i.d. Bernoulli $(\frac12)$ , i.e. each independently $0$ or $1$ with equal probability and mean $\frac12$ , and $X=Y+Z$ : You can find $V[X\mid Y]=V[Z]=\frac14$ since $Y$ and $Z$ are independent here. $E[X \mid Z] =\frac12+Z$ and so $V\big[E[X\mid Z]\mid Y\big]=V\big[\frac12\mid Y\big]+V\big[Z\mid Y\big]=0+\frac14$ , while $V[X \mid Z] =V[Y]=\frac14$ and so $E\big[V[X\mid Z]\mid Y\big]=\frac14$ , giving $V\big[E[X\mid Z]\mid Y\big] + E\big[V[X\mid Z]\mid Y\big] = \frac12$ , which is not $V[X\mid Y]$ . $E[X\mid Z,Y]=Y+Z$ and so $V\big[E[X\mid Z,Y]\mid Y\big]=V\big[Y\mid Y\big]+V
|
|probability|probability-theory|expected-value|conditional-expectation|variance|
| 1
|
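The Bernoulli counterexample in this answer can be verified exactly with rational arithmetic. This sketch (my own check of the answer's numbers, not part of the thread) computes both decompositions over the four equally likely outcomes.

```python
from fractions import Fraction as Fr

# Exact check of the example: Y, Z i.i.d. uniform on {0, 1}, X = Y + Z;
# all four outcomes (y, z) have probability 1/4.

def var(values):
    # Population variance of equally likely values, in exact rationals.
    m = sum(values, Fr(0)) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

# V[X | Y=y]: fix y, let z range over {0, 1}.
V_X_given_Y = {y: var([Fr(y + z) for z in (0, 1)]) for y in (0, 1)}
assert all(v == Fr(1, 4) for v in V_X_given_Y.values())

# E[X | Z=z] and V[X | Z=z]: fix z, average/vary over y.
E_X_given_Z = {z: sum(Fr(y + z) for y in (0, 1)) / 2 for z in (0, 1)}
V_X_given_Z = {z: var([Fr(y + z) for y in (0, 1)]) for z in (0, 1)}

# Since Y and Z are independent, conditioning E[X|Z] and V[X|Z] on Y is just
# the plain variance/mean over Z; this gives 1/4 + 1/4 = 1/2, not V[X|Y]:
wrong = var(list(E_X_given_Z.values())) + sum(V_X_given_Z.values()) / 2
assert wrong == Fr(1, 2)

# E[X | Z, Y] = Y + Z and V[X | Z, Y] = 0, so the correct decomposition is
# V[Y + Z | Y] + 0 = V[Z] = 1/4, matching V[X|Y]:
right = var([Fr(z) for z in (0, 1)]) + 0
assert right == Fr(1, 4)
```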
Why do we say this is a reparametrization? ("Analysis on Manifolds" by James R. Munkres.)
|
I am reading "Analysis on Manifolds" by James R. Munkres. Definition. Let $k\leq n$ . Let $A$ be open in $\mathbb{R}^k$ , and let $\alpha:A\to\mathbb{R}^n$ be a map of class $C^r (r\geq 1)$ . The set $Y=\alpha(A)$ , together with the map $\alpha$ , constitute what is called a parametrized-manifold, of dimension $k$ . We denote this parametrized-manifold by $Y_\alpha$ ; and we define the ( $k$ -dimensional) volume of $Y_\alpha$ by the equation $$v(Y_\alpha)=\int_A V(D\alpha),$$ provided the integral exists. Definition. Let $A$ be open in $\mathbb{R}^k$ ; let $\alpha:A\to\mathbb{R}^n$ be of class $C^r$ ; let $Y=\alpha(A)$ . Let $f$ be a real-valued continuous function defined at each point of $Y$ . We define the integral of $f$ over $Y_\alpha$ , with respect to volume, by the equation $$\int_{Y_\alpha} f\mathrm{d}V=\int_A (f\circ\alpha)V(D\alpha),$$ provided this integral exists. Here we are reverting to "calculus notation" in using the meaningless symbol $\mathrm{d}V$ to denote the "inte
|
Both $\alpha: A \to \mathbb{R}^n$ and $\beta: B \to \mathbb{R}^n$ are parametrizations of the same manifold in $\mathbb{R}^n$ . Suppose you start off with a manifold in $\mathbb{R}^n$ , equipped with a particular parametrization (say $\alpha$ ), and define its volume via $\alpha$ . You want to say that volume is a concept attached to the manifold, but not any particular parametrization. In which case, if you have another parametrization of the same manifold (say $\beta$ ) - you are parametrizing the manifold again, i.e. you are reparametrizing - then you want to show that the volume is the same no matter which parametrization you are using. This is the content of Theorem 22.1.
|
|multivariable-calculus|manifolds|parametrization|
| 0
|
Why do we say this is a reparametrization? ("Analysis on Manifolds" by James R. Munkres.)
|
I am reading "Analysis on Manifolds" by James R. Munkres. Definition. Let $k\leq n$ . Let $A$ be open in $\mathbb{R}^k$ , and let $\alpha:A\to\mathbb{R}^n$ be a map of class $C^r (r\geq 1)$ . The set $Y=\alpha(A)$ , together with the map $\alpha$ , constitute what is called a parametrized-manifold, of dimension $k$ . We denote this parametrized-manifold by $Y_\alpha$ ; and we define the ( $k$ -dimensional) volume of $Y_\alpha$ by the equation $$v(Y_\alpha)=\int_A V(D\alpha),$$ provided the integral exists. Definition. Let $A$ be open in $\mathbb{R}^k$ ; let $\alpha:A\to\mathbb{R}^n$ be of class $C^r$ ; let $Y=\alpha(A)$ . Let $f$ be a real-valued continuous function defined at each point of $Y$ . We define the integral of $f$ over $Y_\alpha$ , with respect to volume, by the equation $$\int_{Y_\alpha} f\mathrm{d}V=\int_A (f\circ\alpha)V(D\alpha),$$ provided this integral exists. Here we are reverting to "calculus notation" in using the meaningless symbol $\mathrm{d}V$ to denote the "inte
|
Perhaps the best way to answer this is with an example. Everything here is smooth, i.e. $C^\infty$ . The map $\alpha: (-1, 1) \to \mathbb{R}^2$ given by $\alpha(t) = (t, \sqrt{1 - t^2})$ gives a parametrization of the (open, upper, unit) semicircle $x^2 + y^2 = 1,\, y>0$ . Now, the map $g: (0, \pi) \to (-1, 1)$ defined by $g(\theta) = \cos \theta$ provides a diffeomorphism $(0, \pi) \to (-1, 1)$ . This composition $\beta = \alpha \circ g: (0, \pi) \to \mathbb{R}^2$ is a reparametrization of $\alpha$ , as it describes the same curve ( $1$ -manifold), i.e. the same set of points in $\mathbb{R}^2$ , but the particular way that each value of the parameter describes the points has changed. For instance, the point $(x, y) = \bigl(\tfrac12, \smash{\tfrac{\sqrt3}{2}}\bigr)$ is the image of $t = \tfrac12$ under $\alpha$ , but it's the image of $\theta = \frac\pi3$ under $\beta$ . The point being made in the text is that whenever you change variables in an integral over a parametrized manifold,
|
|multivariable-calculus|manifolds|parametrization|
| 1
|
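The invariance illustrated by this answer's semicircle example can also be observed numerically: the arc length ($1$-dimensional volume) of the same sub-arc comes out equal under both parametrizations. The sketch below is my own check; it uses the sub-arc $t\in[-0.9, 0.9]$ to stay away from the endpoint singularity of $|\alpha'|$.

```python
import math

# alpha(t) = (t, sqrt(1 - t^2)) on (-1, 1); beta = alpha o g with g(th) = cos th,
# i.e. beta(th) = (cos th, sin th) on (0, pi). Compare arc lengths of one sub-arc.

def length(speed, a, b, n=200_000):
    # Midpoint-rule integral of the speed |derivative| over [a, b].
    h = (b - a) / n
    return sum(speed(a + (i + 0.5) * h) for i in range(n)) * h

# |alpha'(t)| = sqrt(1 + t^2/(1 - t^2)) = 1/sqrt(1 - t^2).
len_alpha = length(lambda t: 1 / math.sqrt(1 - t * t), -0.9, 0.9)

# |beta'(th)| = |(-sin th, cos th)| = 1; the sub-arc t in [-0.9, 0.9] maps to
# th in [acos(0.9), acos(-0.9)] because g(th) = cos th is decreasing.
len_beta = length(lambda t: 1.0, math.acos(0.9), math.acos(-0.9))

assert abs(len_alpha - len_beta) < 1e-6
assert abs(len_alpha - 2 * math.asin(0.9)) < 1e-6   # exact value 2 arcsin(0.9)
```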
Non-cyclic subgroup of order 4 in non-dihedral group
|
A group $G$ has sixteen elements: $$\{e, r, r^2, \dots , r^7, s, rs, r^2s, \dots , r^7s\},$$ where $r$ and $s$ satisfy the relations $r^8 = e, s^2 = e, sr = r^3s$ . (Note that $G$ is not a dihedral group.) a) Make a list of the cyclic subgroups of $G$ . Make sure to state the elements of each cyclic subgroup, and do not list any subgroup more than once. b) The group $G$ has two subgroups of order $4$ which are not cyclic. State the elements of each of these subgroups. I have managed to answer part a), and for part b) I have found the Klein four-group $\{e, r^4, s, r^4s\}$ , but I can't find the second subgroup of order $4$ . Could someone please help with this? Thanks for your answers!
|
Let $G=\{e,r,r^2,...,r^7,s,rs,...,r^7s\}$ with $sr=r^3s$ , so that $sr^n=r^3sr^{n-1}=...=r^{3n}s$ . From this we can compute the order of every element: $|r| = |r^3| = |r^5| = |r^7| = 8$ , $|r^2| = |r^6| = |rs| = |r^3s| = |r^5s| = |r^7s| = 4$ , $|r^4| = |s| = |r^2s| = |r^4s| = |r^6s| = 2$ , and $|e| = 1$ . Every element of a non-cyclic subgroup of order $4$ must have order at most $2$ , so it must come from $\{e, r^4, s, r^2s, r^4s, r^6s\}$ . A short calculation then shows that the other non-cyclic subgroup of order $4$ is $\{e, r^2s, r^4, r^6s\}$ .
|
|group-theory|cyclic-groups|combinatorial-group-theory|
| 1
|
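The order table in this answer is easy to verify by machine. A sketch of my own (not from the original answer), encoding $r^a s^b$ as the pair `(a, b)`:

```python
# Order computation for G = <r, s | r^8 = e, s^2 = e, sr = r^3 s>,
# using the rule s r^c = r^{3c} s to multiply pairs (a, b) ~ r^a s^b.

def mul(p, q):
    a, b = p
    c, d = q
    return ((a + (3 ** b) * c) % 8, (b + d) % 2)

def order(g):
    x, n = g, 1
    while x != (0, 0):
        x, n = mul(x, g), n + 1
    return n

orders = {(a, b): order((a, b)) for a in range(8) for b in range(2)}

# Matches the table in the answer: r, r^3, r^5, r^7 have order 8; r^2, r^6 and
# the odd-power r^k s have order 4; r^4 and the even-power r^k s have order 2.
assert [orders[(a, 0)] for a in range(8)] == [1, 8, 4, 8, 2, 8, 4, 8]
assert [orders[(a, 1)] for a in range(8)] == [2, 4, 2, 4, 2, 4, 2, 4]

# Elements of order <= 2 (the candidates listed in the answer):
inv = sorted(g for g, n in orders.items() if n <= 2)
assert inv == [(0, 0), (0, 1), (2, 1), (4, 0), (4, 1), (6, 1)]

# Both non-cyclic order-4 subgroups are closed under multiplication:
for H in [{(0, 0), (4, 0), (0, 1), (4, 1)}, {(0, 0), (2, 1), (4, 0), (6, 1)}]:
    assert all(mul(x, y) in H for x in H for y in H)
```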
The convergence in sense of distribution associated with Lebesgue integrable function but not differential.
|
Let $f \in L^1(\mathbb{R}^n)$ and let $T_g$ denote the distribution associated with a function $g$ . Show that $T_{D_j^hf} \rightarrow \partial_{x_j} T_f$ as $h \rightarrow 0$ in the sense of distributions, where $$D_j^hf(x) := \dfrac{f(x + e_jh) − f(x)}{h}, \quad h>0.$$ I try to prove that, for any $\phi \in C^{\infty}_c(\mathbb{R}^n)$ , $$\lim_{h \rightarrow 0} \int_{\mathbb{R}^n}\phi(x)\cdot\dfrac{f(x + e_jh) − f(x)}{h}\,dx=-\int_{\mathbb{R}^n}\partial_{x_j}\phi(x)\cdot f(x)\,dx.$$ I am planning to bound the integral $$\int_{\mathbb{R}^n}\left\vert \phi(x)\cdot\dfrac{f(x + e_jh) − f(x)}{h} + \partial_{x_j}\phi(x)\cdot f(x)\right\vert dx$$ for all $|h|$ small. But I don't know how to deal with $\dfrac{f(x + e_jh) − f(x)}{h}$ in this case. Could you please give me some ideas? Btw, can you recommend some good books about Distribution Theory?
|
As usual in the theory of distributions we want to "throw" everything onto the test function. After a change of variables (shifting the first term) we have \begin{align*} \int_{\mathbb{R}^d} \phi(x)D_j^hf(x) dx&=\int_{\mathbb{R}^d} \frac{\phi(x)}{h} f(x+he_j)dx -\int_{\mathbb{R}^d} \frac{\phi(x)}{h}f(x)dx\\ &=\int_{\mathbb{R}^d} \frac{\phi(x-he_j)-\phi(x)}{h} f(x)dx. \end{align*} Now you can use dominated convergence to conclude (as the factor with $\phi$ has compact support and is bounded by the mean value theorem).
|
|limits|analysis|distribution-theory|weak-convergence|
| 1
|
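The convergence this answer establishes can be observed numerically in one dimension. The sketch below uses $f(x)=e^{-|x|}$ (integrable, with a kink at $0$) and a standard smooth bump for $\phi$; both choices are mine for illustration, not from the thread.

```python
import math

# Numeric sketch: the pairing <D^h f, phi> = int phi(x) (f(x+h) - f(x))/h dx
# should tend to -<f, phi'> = -int phi'(x) f(x) dx as h -> 0.

def f(x):
    return math.exp(-abs(x))

def phi(x):
    # Standard smooth bump supported in (-1, 1).
    return math.exp(-1 / (1 - x * x)) if abs(x) < 1 else 0.0

def dphi(x):
    # phi'(x) = phi(x) * (-2x) / (1 - x^2)^2 on (-1, 1).
    return phi(x) * (-2 * x) / (1 - x * x) ** 2 if abs(x) < 1 else 0.0

def integrate(g, a=-1.0, b=1.0, n=100_000):
    step = (b - a) / n
    return sum(g(a + (i + 0.5) * step) for i in range(n)) * step

target = -integrate(lambda x: dphi(x) * f(x))

gaps = []
for h in (0.1, 0.01):
    pairing = integrate(lambda x, h=h: phi(x) * (f(x + h) - f(x)) / h)
    gaps.append(abs(pairing - target))

# The error shrinks with h (roughly linearly, from the Taylor expansion of phi).
assert gaps[1] < gaps[0]
assert gaps[1] < 1e-2
```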
Let $S$ be the set of points whose coordinates $x$ and $y$ are integers that satisfy $0\le x\le3,$ $0\le y\le4$. Two distinct points .....
|
Let $S$ be the set of points whose coordinates $x,$ $y$ are integers that satisfy $0\le x\le3,$ $0\le y\le4$ . Two distinct points are randomly chosen from $S.$ The probability that the midpoint of the segment they determine also belongs to $S$ is? My approach: I tried finding the probability of the points not belonging to $S$ as that makes the problem easier. Let the two points be $(a,b)$ and $(c,d)$ Since $a,b,c,d$ belong to $\mathbb{Z}$ , their midpoint won't belong to $S$ if $\{a,c\}$ and $\{b,d\}$ are both: one odd one even. (Like $a$ is odd, $c$ is even or vice versa, same for $(b,d)$ .) Now, for selecting $a,c$ : $_2C_1$ (for selecting which one is odd and which is even) $\ast\, _2C_1 \ast\, _2C_1$ ( since for $x$ there are $2$ even numbers: $0,2$ and $2$ odd numbers: $1,3$ ) Similarly for selecting $b,d$ : $_2C_1 \ast\, _3C_1 \ast\, _2C_1$ (since for $y$ there are $3$ even numbers and $2$ odd numbers) Total favourable cases = $_2C_1 \ast\, _2C_1 \ast\, _2C_1 \ast\, _2C_1 \ast\,
|
Just count, for each possible midpoint, the number of pairs with that midpoint, which is not too hard to do manually: $$\begin{matrix} 0 & 1 & 2 & 1 & 0 \\ 1 & 4 & 7 & 4 & 1 \\ 1 & 4 & 7 & 4 & 1 \\ 0 & 1 & 2 & 1 & 0 \end{matrix}$$ So there are $42$ admissible pairs out of $\binom{20}2$ for a probability of $\frac{21}{95}$ .
|
|probability|probability-theory|bayes-theorem|
| 0
|
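The hand count in this answer is easy to confirm by brute force:

```python
from itertools import combinations
from fractions import Fraction

# 20 lattice points with 0 <= x <= 3, 0 <= y <= 4; count unordered pairs whose
# midpoint has integer coordinates (hence lies back in S).
S = [(x, y) for x in range(4) for y in range(5)]
good = sum(
    1
    for (a, b), (c, d) in combinations(S, 2)
    if (a + c) % 2 == 0 and (b + d) % 2 == 0
)
total = len(S) * (len(S) - 1) // 2     # C(20, 2) = 190

assert (good, total) == (42, 190)
assert Fraction(good, total) == Fraction(21, 95)
```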
Proof $\lim_{n\to\infty} \cos^{2n}\left(\frac{x}{\sqrt{n}}\right) = e^{-x^2}$
|
I have it in the notes of an old project that $\cos^{2n}\left(\frac{x}{\sqrt{n}}\right) \to e^{-x^2}$ pointwise as $n\to \infty$ . The proof I gave at the time sounds bogus, but it's quite close empirically for $n\ge 10$ . The bogus argument is basically that $$\cos^{2}\left(\frac{x}{\sqrt{n}}\right)\approx \left(1-\frac{x^2}{n}\right)$$ and $$ \lim_{n\to\infty} \left(1-\frac{x^2}{n}\right)^n = \exp(-x^2) $$ by the definition of $\exp$ . But unless I'm forgetting something nice about limits, I see no reason why the quality of the approximation improves if I take both sides to a power. A promising argument that got nasty is to show that $\lim_{n\to\infty} \cos^{2n}\left(\frac{x}{\sqrt{n}}\right)$ has the same Taylor series expansion as \begin{align} e^{-x^2}&=\sum_{k=0}^\infty \frac{(-x^2)^k}{k!}\\ &=\sum_{\text{$k$ even}}\frac{(-1)^{k/2}k!}{(k/2)!}\frac{x^k}{k!}.\end{align} Well, consider $$ \left. \frac{d^k}{dx^k} \cos^{2n}\left(\frac{x}{\sqrt{n}}\right)\right|_{x=0}.$$ Each different
|
Your first idea is basically correct. Fix $x$ . We know $\cos(x/\sqrt{n})=1-\frac{x^2}{2n}+\mathcal{O}(1/n^2)$ , that $\log(1+y)=y+\mathcal{O}(y^2)$ as $y\to0$ , and therefore that $\log\cos(x/\sqrt{n})=-\frac{x^2}{2n}+\mathcal{O}(1/n^2)$ as $n\to\infty$ (we can chain it because the error $\cos(x/\sqrt{n})-1=:y\to0$ as $n\to\infty$ ). Now it follows $\log\cos^{2n}(x/\sqrt{n})=-x^2+\mathcal{O}(1/n)$ tends to $-x^2$ as $n\to\infty$ , which is what you wanted. The Big- $\mathcal{O}$ notation is rigorous. Replace all instances of it with explicit bounds if you so wish.
|
|limits|taylor-expansion|normal-distribution|
| 0
|
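The limit can also be checked numerically. This sketch (my own sanity check, not part of the answer) tests a few values of $x$ and confirms the error shrinks as $n$ grows:

```python
import math

# Numeric check of the pointwise limit cos^{2n}(x/sqrt(n)) -> e^{-x^2}.
def approx(x, n):
    return math.cos(x / math.sqrt(n)) ** (2 * n)

for x in (0.5, 1.0, 2.0):
    errs = [abs(approx(x, n) - math.exp(-x * x)) for n in (10, 100, 1000, 10000)]
    # The error decreases roughly like 1/n, consistent with
    # log cos^{2n}(x/sqrt(n)) = -x^2 + O(1/n).
    assert errs == sorted(errs, reverse=True)
    assert errs[-1] < 1e-3
```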
Why is acceleration $\frac{1}{2}at^2$ halved when finding final height (distance)?
|
The final distance of an object dropped from a certain height is: $$S_f=S_0-\frac{1}{2}at^2,$$ where $S_f$ is the final distance, $S_0$ is the initial height from which the object was dropped, $a$ is the acceleration due to gravity (gravitational acceleration), and $t$ is the time traveled by the object. Why is $a$ halved? It goes from $9.8$ to $4.9$. Why is the time $t$ squared? These are basic equations; however, I couldn't find explanations as to the whys , only methodology telling me to "plug in". Thank you.
|
In basic scenarios, we multiply an object's velocity $(v)$ by the time it traveled $(t)$ to obtain its distance traveled $(d)$ . $d=vt$ However, when the velocity is changing, we can't use the formula as it is. We first need to find a single velocity value that is representative of the object's velocity over the whole time interval. In other words, we need the object's average velocity , which we can then multiply by time. $d=v_{average}\cdot t$ Generally, we would first have to integrate the velocity function, then divide that value by the time traveled in order to obtain average velocity. In this case however, gravity causes the velocity to change at a constant rate $(a)$ , so the velocity is modeled by a linear function . This makes it very convenient for us to figure out the average velocity. All we need to do is add the initial velocity and final velocity, then divide by 2. $v_{average}=\frac{v_{initial}+v_{final}}{2}$ We also know that we obtain $v_{final}$ by taking $v_{initial}
|
|calculus|limits|physics|mathematical-physics|
| 0
|
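The average-velocity argument in this answer reduces to a short calculation. In the sketch below, the values $a = 9.8$ and $t = 3$ are arbitrary choices for illustration:

```python
# For constant acceleration from rest: v_initial = 0, v_final = a*t, so the
# average velocity is a*t/2 and the distance fallen is v_avg * t = (1/2) a t^2.
# That factor of 1/2 is where the "halved" acceleration comes from.
a = 9.8          # m/s^2
t = 3.0          # s

v_initial = 0.0
v_final = a * t
v_avg = (v_initial + v_final) / 2

distance = v_avg * t
assert abs(distance - 0.5 * a * t ** 2) < 1e-12

# Cross-check the average against a fine numeric mean of v(t') = a*t':
n = 100_000
num_avg = sum(a * ((i + 0.5) / n * t) for i in range(n)) / n
assert abs(num_avg - v_avg) < 1e-6
```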
Proof $\lim_{n\to\infty} \cos^{2n}\left(\frac{x}{\sqrt{n}}\right) = e^{-x^2}$
|
I have it in the notes of an old project that $\cos^{2n}\left(\frac{x}{\sqrt{n}}\right) \to e^{-x^2}$ pointwise as $n\to \infty$ . The proof I gave at the time sounds bogus, but it's quite close empirically for $n\ge 10$ . The bogus argument is basically that $$\cos^{2}\left(\frac{x}{\sqrt{n}}\right)\approx \left(1-\frac{x^2}{n}\right)$$ and $$ \lim_{n\to\infty} \left(1-\frac{x^2}{n}\right)^n = \exp(-x^2) $$ by the definition of $\exp$ . But unless I'm forgetting something nice about limits, I see no reason why the quality of the approximation improves if I take both sides to a power. A promising argument that got nasty is to show that $\lim_{n\to\infty} \cos^{2n}\left(\frac{x}{\sqrt{n}}\right)$ has the same Taylor series expansion as \begin{align} e^{-x^2}&=\sum_{k=0}^\infty \frac{(-x^2)^k}{k!}\\ &=\sum_{\text{$k$ even}}\frac{(-1)^{k/2}k!}{(k/2)!}\frac{x^k}{k!}.\end{align} Well, consider $$ \left. \frac{d^k}{dx^k} \cos^{2n}\left(\frac{x}{\sqrt{n}}\right)\right|_{x=0}.$$ Each different
|
Another approach, and a standard trick for dealing with limits of the indeterminate form " $1^{\infty}$ ", is to take a log to convert it to " $\infty \cdot 0$ ", then use another trick (series or L'Hôpital's rule) to evaluate that. Write $L_n(x) = \ln\left(\cos^{2n} \left(\frac{x}{\sqrt n}\right)\right)$ . Then \begin{align*} \lim_{n\to \infty} L_n(x) & = \lim_{n\to\infty} 2n \ln \cos\left(\frac{x}{\sqrt n}\right) \\&= \lim_{n\to\infty} \frac{2\ln \cos\left(\frac{x}{\sqrt n}\right)}{n^{-1}} \\&= \lim_{n\to\infty} \frac{2\cdot \frac{-\sin(x/\sqrt n)}{\cos(x/\sqrt{n})} \cdot (-1/2)n^{-3/2}x}{(-1)n^{-2}} \\&= \lim_{n\to\infty} -\tan(x/\sqrt{n}) \cdot n^{1/2}x \\&= \lim_{t \to 0^+} \frac{-\tan(tx)}{t}\cdot x \\&= \lim_{t \to 0^+} \frac{-\sec^2(tx) \cdot x}{1}\cdot x \\&= -x^2. \end{align*} Above, I'm using the substitution $t=1/\sqrt{n}$ , along with L'Hôpital's rule a couple of times. Now, $$ \lim_{n\to\infty} \cos^{2n} \left(\frac{x}{\sqrt n}\right) = \lim_{n\to\infty} e^{L_n(x)} = e^{\
|
|limits|taylor-expansion|normal-distribution|
| 1
|
How to find the original matrix given its basis for column & null space?
|
I was given this problem: Find a matrix $A$ such that $C(A)$ is spanned by $(1, 2, -1, 3), (2, 1, 1, 1), (3, 1, -1 , 1)$ and $N(A)$ is spanned by $(-2, -1, 1, 0, 0), (-3, 2, 0, -2, 1)$ . I have no idea where to start with this. I know that the column space basis vectors must be in the original, $A$ , but I have no clue how I would derive the remaining two dependent columns for the null space. I also know that $A$ should be $4 \times 5$ with rank $3$ , but what trips me up is where to place the independent columns in this matrix (a.k.a. the $C(A)$ basis vectors). Thanks
|
I like the solution provided by @Sammy Black. However, I would like to add a side note. The step of dropping $s$ and $t$ and replacing them by $x_3$ and $x_5$ is a nice one, but in general, it may not be the case that you can find such variables to do something similar. Here's a slightly more general approach to the first step: Since you want the solutions of the equation $AX=0$ to be spanned by $(-2, -1, 1, 0, 0)$ and $(-3, 2, 0, -2, 1)$ , each row of $A$ must be orthogonal to both vectors. This condition is satisfied for every vector $Y$ such that $$ \begin{pmatrix} -2 & -1 & 1 & 0 & 0\\ -3 & 2 & 0 & -2 & 1 \end{pmatrix} Y = \mathbf{0}. $$ You can then use row reduction to find a basis for the solutions of the previous system, for example, $\left\lbrace (1, 0, 2, 0, 3), (0, 1, 1, 0, -2), (0, 0, 0, 1, 2) \right\rbrace$ , which are the same row vectors as the ones found by @Sammy Black. This last step can be easily generalized, since it's just row reduction of the matri
|
|linear-algebra|matrices|
| 0
|
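Assembling the matrix explicitly and checking it is straightforward. In the sketch below, $C$ has the prescribed column-space basis as its columns and $B$ the row basis quoted in the answer; the factorization $A = CB$ is my own assembly of the answer's ingredients, not a construction stated verbatim in the thread.

```python
C = [                       # columns: (1,2,-1,3), (2,1,1,1), (3,1,-1,1)
    [1, 2, 3],
    [2, 1, 1],
    [-1, 1, -1],
    [3, 1, 1],
]
B = [                       # rows orthogonal to both prescribed null vectors
    [1, 0, 2, 0, 3],
    [0, 1, 1, 0, -2],
    [0, 0, 0, 1, 2],
]

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

def matvec(P, v):
    return [sum(P[i][k] * v[k] for k in range(len(v))) for i in range(len(P))]

A = matmul(C, B)            # 4x5, rank 3

# Both prescribed null-space vectors are killed by A:
assert matvec(A, [-2, -1, 1, 0, 0]) == [0, 0, 0, 0]
assert matvec(A, [-3, 2, 0, -2, 1]) == [0, 0, 0, 0]

# The prescribed column-space basis vectors appear among A's columns,
# because B maps e_1, e_2, e_4 to the standard basis of R^3:
cols = list(zip(*A))
assert list(cols[0]) == [1, 2, -1, 3]
assert list(cols[1]) == [2, 1, 1, 1]
assert list(cols[3]) == [3, 1, -1, 1]
```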
Isn't rolling with advantage unordered with replacement?
|
Conventional wisdom for rolling with advantage (roll two dice, take the higher) says the probability of getting a 20 is $$39 / 400 = 0.0975,$$ where $400 = 20^2$ is the total number of ordered rolls and 39 is the total number of ways one can get a 20 from 2 dice. As far as I understand, the above means that order matters. But when one rolls with advantage, they don't care which die is a 20, just that a 20 occurs. So shouldn't the calculation be: total combinations = $\binom{n-1+k}k$ = $\binom{21}2$ = 210, and the number of ways to roll a 20 given order doesn't matter is 20. Therefore the probability of a 20 occurring is $20/210 = 0.0952$ . Edit: In this problem I'm considering two 20 sided dice being rolled simultaneously.
|
Regardless of whether you roll the dice together or separately, rolling a d20 with advantage (aka rolling two d20s) is two independent events - what you roll on one die does not affect what you roll on the other. Each of those events has 20 outcomes, each of which has a probability of $\frac{1}{20}$ . As the two events are independent, we determine the probability of any single outcome as the product of the probability of both events. That simply doesn't change regardless of whether the order of those events is disregarded - you still have two events whose probability needs to be multiplied together. With that said, you're not entirely wrong here - rolling with advantage can be modelled as a binomial distribution, and binomial distributions famously don't care about order, but you've made two errors with your reasoning. Firstly, we don't use binomial coefficients to determine the total number of outcomes - we use them to determine the specific number of outcomes for a given number of
|
|probability|combinatorics|conditional-probability|dice|
| 1
|
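Both counts in this thread can be checked by enumeration. The sketch below also shows why the unordered count of 210 "combinations" gives the wrong probability: the unordered outcomes are not equally likely.

```python
from fractions import Fraction
from itertools import product

# Enumerate ordered rolls of two d20 (the 400 equally likely outcomes) and
# count those whose maximum is 20.
rolls = list(product(range(1, 21), repeat=2))
hits = sum(1 for a, b in rolls if max(a, b) == 20)
assert (hits, len(rolls)) == (39, 400)
assert Fraction(hits, len(rolls)) == Fraction(39, 400)

# There are indeed 20 unordered multisets containing a 20, but they are not
# equally likely: {20, k} with k != 20 has probability 2/400, while {20, 20}
# has probability 1/400. Weighting correctly recovers 39/400, not 20/210.
p = 19 * Fraction(2, 400) + Fraction(1, 400)
assert p == Fraction(39, 400)
```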
Present value involving deferred annuities - is this a typo?
|
Catfish Hunter’s 1974 baseball contract with the Oakland Athletics called for half of his $100,000 salary to be paid to a life insurance company of his choice for the purchase of a deferred annuity. More precisely, there were to be semi-monthly contributions in Hunter’s name to the Jefferson Insurance Company with the first payment on April 16 and the final payment on September 30. We suppose that the first eleven of these were to be for $4,166.67 and the final payment was to be for four cents less. (12 × $4,166.67 = $50,000.04.) A) Using an annual effective interest rate of 6% (a rate that figures in a six-year personal loan of $120,000 that Oakland’s owner Charles Finley had made to Hunter in 1969 and then promptly recalled), find the value of the specified payments to the insurance company at the scheduled time of the last payment. B) Suppose that the contracted payments had been made to the insurance company from April 16, 1974 through September 30, 1974, and that they accumulated
|
$\require{enclose}$ At the time of the last payment of premium, the equation of value is $$AV = 4166.67 \left((1+j)^{11} + (1+j)^{10} + \cdots + 1\right) - 0.04 = 4166.67 s_{\enclose{actuarial}{12} j} - 0.04,$$ where $j$ is the effective semimonthly rate of interest; i.e., $$(1 + j)^{24} = 1+i = 1.06.$$ Thus $$j = (1.06)^{1/24} - 1 \approx 0.00243082$$ and the accumulated value of payments is $$4166.67 \frac{(1.00243082)^{12} - 1}{0.00243082} - 0.04 \approx 50673.92.$$ For the second part, the annuity continues to accrue interest at effective annual rate $i = 0.06$ , so the accumulated value at the time of the first level annual payment is $$AV(1+j)^6 (1+i)^5 \approx 68808.2187690,$$ as the annuity has had six semimonthly periods from 30 September to 31 December 1974 and five years to 31 Dec 1979 to accrue. Then the first disbursement of $K$ occurs immediately, so the equation of value on 01 January 1980 is $$68808.22 = K (1 + v + v^2 + \cdots + v^{19}) = K \ddot a_{\enclose{actuarial}
|
|actuarial-science|
| 0
|
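Part A can be reproduced numerically. The sketch below follows this answer's conventions (24 semimonthly periods per year, effective annual rate 6%):

```python
# Semimonthly effective rate j from the annual effective rate i.
i = 0.06
j = (1 + i) ** (1 / 24) - 1                  # (1+j)^24 = 1.06

# Accumulated value at the last payment: s-angle-12 = (1+j)^11 + ... + 1.
s12 = ((1 + j) ** 12 - 1) / j
AV = 4166.67 * s12 - 0.04

assert abs(j - 0.00243082) < 1e-8
assert abs(AV - 50673.92) < 0.05

# Part B: accrue 6 semimonthly periods to 31 Dec 1974, then 5 years to the
# first level payment on 01 Jan 1980.
AV_1980 = AV * (1 + j) ** 6 * (1 + i) ** 5
assert abs(AV_1980 - 68808.2187690) < 0.01
```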
How to determine an ODE based on a general solution
|
I have the general solution $y(x) = Ee^x + De^{-x} + Fxe^x$ , where $E,D,F$ are arbitrary real constants. I need to find a 3rd order, linear, homogeneous ODE from this. I know there are a few things I can do, for example, finding the first, second and third derivatives. $ y'(x) = Ee^x-De^{-x}+F\left(e^x+e^xx\right)$ $ y''(x) = e^xE+e^{-x}D+F\left(2e^x+e^xx\right)$ $ y'''(x) = e^{x}E-De^{-x}+F\left(3e^x+e^xx\right)$ Writing the general form of the ODE, I get: $ay'''(x) + by''(x) + cy'(x) + dy(x)=0$ So I can substitute $y(x)$ and the rest of the derivatives into the ODE. But I'm not sure how to proceed after that. I can group the coefficients of $Ee^x, De^{-x}$ and so on, but then how can I find the constants $a,b,c,d$ ? Can I set the coefficient = 0 for all of them?
|
Let $\hat{D}=d/dx$ . Suppose you have $\hat{D}^2(\hat{D}-2)y=y'''-2y''=0$ . Then as a trial we let $y=e^{kx}$ . Swap it in to get $(k^3-2k^2)e^{kx}=0\implies k^2(k-2)=0$ . $k=0$ is a double root, i.e. has multiplicity 2. $y=c_1+c_2x+c_3e^{2x}$ . You translate the differential equation into an auxiliary equation, then use those roots to find your solutions. If you have any repeated roots, supposing your root is $k$ , then your solutions are $e^{kx}$ times a polynomial in $x$ with degree one less than the multiplicity of $k$ . Use the same reasoning backwards. $y=Ee^x+De^{-x}+Fxe^x=e^x(E+Fx) +De^{-x}$ This tells us that the roots of the auxiliary equation are $1$ and $-1$ with 1 having multiplicity 2. This says the auxiliary equation is $(x-1)^2(x+1)$ . Now we essentially swap $\hat{D}$ for $x$ to get $(\hat{D}-1)^2(\hat{D}+1)y=0$ . Now you can check. $(\hat{D}-1)^2(\hat{D}+1)y=(\hat{D}-1)(\hat{D}^2-1)y=(\hat{D}^3-\hat{D}^2-\hat{D}+1)y=0.$ $y=Ee^x+De^{-x}+Fxe^x$ $y'=(E+F+Fx)e^x-De^{-x}$
|
|ordinary-differential-equations|
| 0
|
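The resulting equation $y'''-y''-y'+y=0$ can be checked against the general solution directly. The closed-form derivatives in this sketch are computed by hand from the question's $y(x)$ and tested at randomly chosen constants:

```python
import math
import random

# Check that y = E e^x + D e^{-x} + F x e^x satisfies y''' - y'' - y' + y = 0.
random.seed(0)
for _ in range(5):
    E, D, F = (random.uniform(-5, 5) for _ in range(3))
    for x in (-2.0, -0.5, 0.0, 1.0, 3.0):
        ex, emx = math.exp(x), math.exp(-x)
        y0 = E * ex + D * emx + F * x * ex            # y
        y1 = E * ex - D * emx + F * (x + 1) * ex      # y'
        y2 = E * ex + D * emx + F * (x + 2) * ex      # y''
        y3 = E * ex - D * emx + F * (x + 3) * ex      # y'''
        assert abs(y3 - y2 - y1 + y0) < 1e-9
```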
How to determine an ODE based on a general solution
|
I have the general solution $y(x) = Ee^x + De^{-x} + Fxe^x$ , where $E,D,F$ are arbitrary real constants. I need to find a 3rd order, linear, homogeneous ODE from this. I know there are a few things I can do, for example, finding the first, second and third derivatives. $ y'(x) = Ee^x-De^{-x}+F\left(e^x+e^xx\right)$ $ y''(x) = e^xE+e^{-x}D+F\left(2e^x+e^xx\right)$ $ y'''(x) = e^{x}E-De^{-x}+F\left(3e^x+e^xx\right)$ Writing the general form of the ODE, I get: $ay'''(x) + by''(x) + cy'(x) + dy(x)=0$ So I can substitute $y(x)$ and the rest of the derivatives into the ODE. But I'm not sure how to proceed after that. I can group the coefficients of $Ee^x, De^{-x}$ and so on, but then how can I find the constants $a,b,c,d$ ? Can I set the coefficient = 0 for all of them?
|
Differentiate the solution twice: $$y = D e^{-x} + E e^x + Fxe^x \\ \implies y'' = D e^{-x} + E e^x + F (x+2)e^x = y + 2Fe^x$$ Let $z=y''-y$ , then differentiate again to recover the third-order ODE: $$z = 2 F e^x \implies z' = 2 F e^x = z \\ \implies z'-z = y'''-y''-y'+y=0$$
|
|ordinary-differential-equations|
| 0
|
Symmetry of Green's Functions
|
In the book Introduction to Partial Differential Equations by Folland, he states the claim Let $ \Omega$ be a bounded domain in $\mathbb{R}^n$ with smooth boundary $ S$ . The Green's function $G$ for $\Omega$ exists and for each $x \in \Omega , G(x, \cdot) \in C^\infty(\overline{\Omega} \backslash \{x\})$ . Then he claims $\forall x,y \in \Omega : G(x,y) = G(y,x)$ . He provides the following formal argument: \begin{align*} G(x,y) - G(y,x) &= \int_{\Omega} G(x,z) \delta(y - z) - G(y,z) \delta(x - z) \mathrm{\,d} z \\ &= \int_{S} G(x,z) \partial_{\nu _z} G(y,z) - G(y,z) \partial_{\nu _z} G(x,z) \mathrm{\,d} \sigma(z) = 0 \end{align*} as $ \forall x \in \Omega : \Delta G(x,\cdot) = \delta (x - \cdot)$ and $ G(x, S) = 0$ . A strategy he suggests for formalising this is to excise small balls around $x$ and $y$ from $\Omega$ and then let their radii shrink to zero. Approach As $ G(x , \cdot)\in C^\infty(\overline{\Omega} \backslash \{x\})$ so $\forall x,y \in \Omega : $ \begin{align*} G(x, y) - G(y
|
To do this more rigorously, you should avoid all mention of the $\delta$ function, and start with the excised domain $\Omega\backslash B_\epsilon(x, y)$ . Here $\Delta G(x, z)=0$ and $\Delta G(y, z)=0$ , so we have \begin{align} 0 &= -\lim_{\epsilon\to 0}\int_{\Omega\backslash B_\epsilon(x, y)} G(x, z)\Delta G(y, z) - G(y, z)\Delta G(x, z)\, dz\\ &= -\int_{\partial \Omega} 0 + \lim_{\epsilon\to 0}\int_{\partial B_\epsilon(x)\cup \partial B_\epsilon(y)} G(x, z)\partial_\nu G(y, z) - G(y, z)\partial_\nu G(x, z)\, d\sigma\\ &= \lim_{\epsilon\to 0}\int_{\partial B_\epsilon(y)} G(x, z)\partial_\nu N(z-y) \,d\sigma - \lim_{\epsilon\to 0}\int_{\partial B_\epsilon(x)} G(y, z)\partial_\nu N(z-x) \,d\sigma \\ &= G(x, y) - G(y, x). \end{align} Here we are using that $G(x, z)-N(x, z)$ is smooth (see the definition of Green's function), and $N(x, z)=N(x-z)=N(z-x)$ is the Newton potential. In particular, $$ \int_{\partial B_\epsilon(x)} \partial_\nu N(z-x)\,d\sigma = 1. $$ Since $x$ is the only sing
|
|partial-differential-equations|harmonic-functions|greens-function|
| 1
|
Implicit details regarding the proof of $\lim_{n \to \infty}\frac{\sqrt[n]{e^1}+\sqrt[n]{e^2}+\cdots\sqrt[n]{e^n}}{n}$
|
Problem 9(i) of Spivak's Calculus (Chpt 22) asks for the following limit: $\displaystyle \lim_{n \to \infty}\frac{\sqrt[n]{e^1}+\sqrt[n]{e^2}+\cdots\sqrt[n]{e^n}}{n}$ The terse solution manual's argument reads as follows: $\displaystyle \int_0^1 e^x dx = e-1$ Note: My book has not yet covered infinite series I understand what this argument aims to illustrate, but it seems to me there are several implicit theorems being used here that I do not believe the book has previously addressed (or, at least, not that I am aware of/cannot properly identify). This question is specifically about properly identifying what those theorems are. Firstly, I see that $\frac{\sqrt[n]{e^1}+\sqrt[n]{e^2}+\cdots\sqrt[n]{e^n}}{n}$ is an upper sum representation of an $n$ -subinterval uniform partition $P_n$ on the closed interval $[0,1]$ . As such, given that $e^x$ is integrable on this interval, the $\inf$ of the set of all $U(e^x,P)$ exists (where this notation refers to the upper sum of $e^x$ on all partiti
|
If $f(x)=e^x$ , then $\frac{\sqrt[n]{e^1}+\cdots+\sqrt[n]{e^n}}{n}=\frac{1}{n}\sum_{k=1}^nf\left(\frac{k}{n}\right)=\sum_{k=1}^nf\left(\frac{k}{n}\right)\cdot \frac{1}{n}$ . In other words, it is a Riemann sum on $[0,1]$ for the integrable function $f$ . So, the limit equals $\int_0^1f=e-1$ . Edit: For completeness and context: by this stage, Spivak does introduce the equivalence (in an appendix) of Riemann and Darboux integrability of functions $f:[a,b]\to\Bbb{R}$ . To set the notation/terminology, the mesh of a partition $P=\{t_0,\dots, t_k\}$ of $[a,b]$ is defined to be $\max\limits_{i\in\{1,\dots, k\}}|t_i-t_{i-1}|$ . a tagged partition of $[a,b]$ is a pair $(P,\tau)$ where $P=\{t_0,\dots, t_k\}$ is a partition of $[a,b]$ and $\tau$ is a tagging relative to $P$ , meaning a collection of points $\{\xi_1,\dots, \xi_k\}$ such that for each $i\in\{1,\dots, k\}$ , we have $\xi_i\in [t_{i-1},t_i]$ (i.e. a partition, and a choice of one point per subinterval). The Riemann sum of $f$ relative to the tagged partition $(P,\tau)$ is then $\sum_{i=1}^k f(\xi_i)(t_i-t_{i-1})$ .
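The Riemann-sum identification can be checked numerically; a small sketch (the choice of $n$ values is arbitrary):

```python
import math

# Riemann sum (1/n) * sum_{k=1}^n e^{k/n} for increasing n,
# compared against the limiting value of the integral, e - 1.
def riemann_sum(n):
    return sum(math.exp(k / n) for k in range(1, n + 1)) / n

target = math.e - 1
errors = [abs(riemann_sum(n) - target) for n in (10, 100, 1000)]
# The error shrinks roughly like 1/n for this right-endpoint sum.
print(errors)
```

The decreasing error is exactly the convergence of the Riemann sums to $\int_0^1 e^x\,dx$.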
|
|calculus|integration|sequences-and-series|limits|proof-explanation|
| 1
|
Understanding the proof of the uncomputability of the "productivity function"
|
I'm not following this proof of what Boolos, Burgess, Jeffrey call the productivity function . The proof states that the value from the productivity function is not computable. They call it $s$ , and it computes a score for a $k$ -state Turing machine, $s(k)$ . They construct a new machine with one more state ( $k + 1$ states), that computes another value $t$ . But then they say the machine to compute $t$ only has $k$ states. This is confusing because it was just defined as having one more than $k$ states. It seems like the machine to compute $t$ should have $k+1$ states (by definition), which isn't a contradiction, and also doesn't prove anything. What am I not understanding? Consider a $k$ -state Turing machine, that is, a machine with $k$ states (not counting the halted state). Start it with input $k$ , that is, start it in its initial state on the leftmost of a block of $k$ strokes on an otherwise blank tape. If the machine never halts, or halts in nonstandard position, give it a score of zero.
|
"They call it s, and it computes a score for a k-state Turing machine, s(k)." Not a k-state TM, rather the highest score achieved by any k-state TM. "They construct a new machine with one more state (k+1) states" No, we don't know a priori that the machine C that computed s has k states. All we know is that G, the new machine, has one more state than C. In any case, we can find any number of machines to simulate any computable function, so this is not too important a detail. The important part is that if s is computable, then t is computable (computable functions are closed under addition, so to speak). Now, given that t is computable, it is in particular computable by some TM, call it G. G must have some number of states, call that number of states (not including halting) k. Here is the contradiction: since G calculates t, consider G with k strokes written on its tape. Then G halts in position t(k) - this is what it means for G to compute t. But then t(k) = s(k) + 1, and s(k) is the h
|
|proof-explanation|computability|turing-machines|
| 0
|
(Bayesian probability) Show that $P(H|E) = \frac{h (c +a \overline c)}{hc+a\overline c}$
|
The question, briefly How does the calculation $P(H|E) = \frac{h (c +a \overline c)}{hc+a\overline c}$ work? Some background I'm trying to work through the proof of the following theorem in Heinzelmann and Hartmann (2022): Theorem 1. $P(H | E, \neg O ) > P(H|E) > P(H)$ . I don't quite follow the proof. I quote Heinzelmann and Hartmann's (2022) proof below, up to the step I find puzzling. Specifically, my problem is with the calculation $P(H|E) = \frac{h (c +a \overline c)}{hc+a\overline c}$ . We consider the Bayesian network in Fig. 1 and use the machinery of Bayesian networks (..). With this, it is easy to see that $P(H) = op + \overline o q =: h$ , where $\overline x := 1 - x$ . Note that the above expression for $P(H)$ and condition (3) imply that $p<h$ . Next we use the product rule and calculate $P(H|E) = \frac{h (c +a \overline c)}{hc+a\overline c}$ . Here is the Directed Acyclical Graph (DAG) relevant for the calculation. And here's some notation and probabilities they use in the proof.
|
$\overline h= 1-h$ by definition, so: $$ah\overline c+a\overline h\overline c ~{= a(h+\overline h)\overline c\\=a\overline c}$$
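The identity is pure algebra, so a numeric spot-check over random probabilities confirms it; this sketch is only an illustration of the step above:

```python
import random

# Spot-check the identity  a*h*(1-c) + a*(1-h)*(1-c) == a*(1-c),
# i.e.  a*h*c̄ + a*h̄*c̄ = a*(h + h̄)*c̄ = a*c̄  with  h̄ = 1 - h.
random.seed(0)
for _ in range(1000):
    a, h, c = (random.random() for _ in range(3))
    lhs = a * h * (1 - c) + a * (1 - h) * (1 - c)
    rhs = a * (1 - c)
    assert abs(lhs - rhs) < 1e-12
```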
|
|probability|conditional-probability|bayesian|bayes-theorem|bayesian-network|
| 0
|
Implicit details regarding the proof of $\lim_{n \to \infty}\frac{\sqrt[n]{e^1}+\sqrt[n]{e^2}+\cdots\sqrt[n]{e^n}}{n}$
|
Problem 9(i) of Spivak's Calculus (Chpt 22) asks for the following limit: $\displaystyle \lim_{n \to \infty}\frac{\sqrt[n]{e^1}+\sqrt[n]{e^2}+\cdots\sqrt[n]{e^n}}{n}$ The terse solution manual's argument reads as follows: $\displaystyle \int_0^1 e^x dx = e-1$ Note: My book has not yet covered infinite series I understand what this argument aims to illustrate, but it seems to me there are several implicit theorems being used here that I do not believe the book has previously addressed (or, at least, not that I am aware of/cannot properly identify). This question is specifically about properly identifying what those theorems are. Firstly, I see that $\frac{\sqrt[n]{e^1}+\sqrt[n]{e^2}+\cdots\sqrt[n]{e^n}}{n}$ is an upper sum representation of an $n$ -subinterval uniform partition $P_n$ on the closed interval $[0,1]$ . As such, given that $e^x$ is integrable on this interval, the $\inf$ of the set of all $U(e^x,P)$ exists (where this notation refers to the upper sum of $e^x$ over all partitions $P$ of $[0,1]$ ).
|
A relevant property of $e^x$ is that it is an increasing function in the interval $[0,1]$ . So if you study both the lower as well as the upper sums related to the definite integral $I=\int_0^1e^x\,dx$ (and the partition into $n$ equal length subintervals), you will find that for all $n>1$ we have $$ \frac{e^{0/n}+e^{1/n}+\cdots+e^{(n-1)/n}}n\le I\le \frac{e^{1/n}+e^{2/n}+\cdots+e^{(n-1)/n}+e^{n/n}}n. $$ Then look at the difference between the lower and upper bounds as $n\to\infty$ . Whenever you have sequences of both lower and upper sums with their difference tending to zero, you can deduce that both sequences must converge to the value of the definite integral (and that the function is integrable). I am unable to check how much Spivak gets into the finer points of the relation between the Riemann sums as opposed to the upper/lower sums. Judging from the question the latter is used for definitions, so I tailored my answer not to use any result stating that under certain circumstances Riemann sums converge to the value of the definite integral.
|
|calculus|integration|sequences-and-series|limits|proof-explanation|
| 0
|
Using a combinatorial group theoretical perspective on semidirect products, show $\langle r,s\mid r^8, s^2, srs=r^3\rangle$ has two Klein four subgroups
|
Note: This is an alternative-proof question, since I know how to prove the result but I'm asking for a particular kind of proof. Why? For the fun of it! Motivation: I've been trying to give a reason behind the statement in this question from a particular perspective. The Question: Why, from a combinatorial group theoretical perspective on semidirect products, does the group $G$ given by $$P=\langle r,s\mid r^8, s^2, srs=r^3\rangle$$ have two subgroups isomorphic to the Klein group? Note: I separated $G$ from $P$ in my notation on purpose, just in case. Thoughts: By a standard result in combinatorial group theory, the group can be written as $$\begin{align} G&\cong\langle r\mid r^8\rangle\rtimes_\varphi\langle s\mid s^2\rangle\\ &\cong \Bbb Z_8\rtimes_\varphi \Bbb Z_2, \end{align}$$ where $\varphi:\Bbb Z_2\to \operatorname{Aut}(\Bbb Z_8)$ is given by $\varphi(s)(r)=r^3$ . Are the two simply conjugate to each other? According to GroupNames , there is at least one other way to write $G$ as a semidirect product.
|
It can also be written as $C_4\rtimes (C_2)^2.$ That makes it clear that there are at least two Klein four subgroups, because the $(C_2)^2$ is not normal. So they should be conjugate.
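The two Klein four subgroups can also be found by brute force. The sketch below encodes an element $r^i s^a$ as the pair $(i, a)$; the multiplication rule follows from $sr = r^3s$, which gives $s^a r^j = r^{3^a j} s^a$ (the encoding is this sketch's own convention, not from the question):

```python
from itertools import combinations

# G = <r, s | r^8, s^2, s r = r^3 s>, order 16, element (i, a) = r^i s^a.
# From s^a r^j = r^{3^a j} s^a the product rule is:
def mul(x, y):
    (i, a), (j, b) = x, y
    return ((i + pow(3, a) * j) % 8, (a + b) % 2)

e = (0, 0)
elements = [(i, a) for i in range(8) for a in range(2)]
involutions = [g for g in elements if g != e and mul(g, g) == e]

# A Klein four subgroup is {e, g1, g2, g1*g2} with g1, g2 involutions,
# provided the four elements are distinct and the set is closed.
klein = []
for g1, g2 in combinations(involutions, 2):
    H = {e, g1, g2, mul(g1, g2)}
    if len(H) == 4 and all(mul(u, v) in H for u in H for v in H):
        if H not in klein:
            klein.append(H)

print(klein)  # {e, r^4, s, r^4 s} and {e, r^4, r^2 s, r^6 s}
```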
|
|group-theory|alternative-proof|group-presentation|semidirect-product|combinatorial-group-theory|
| 0
|
Probability that the coefficients of a quadratic equation with real roots form a triangle
|
Question : What is the probability that the coefficients of a quadratic equation form the sides of a triangle given that it has real roots? Assume that the coefficients are uniformly distributed and positive. Note that it is sufficient to assume that the coefficients are uniformly distributed in $(0,1)$ since we can always divide by the largest coefficient to scale all the coefficients to $(0,1)$ . Experimental data : A simulation with $10^{10}$ trials gives the probability as $0.182185$ . This could be a coincidence, but this value agrees with $\displaystyle \frac{\log \pi}{2\pi}$ to five decimal places. Julia code :

using Random

step = target = 10^7
count = count_q = qt = 0
while 1 > 0
    count += 1
    random_numbers = rand(3)
    a = random_numbers[1]
    b = random_numbers[2]
    c = random_numbers[3]
    if b^2 >= 4*a*c
        count_q += 1
        if (a + b >= c) && (b + c >= a) && (c + a >= b)
            qt += 1
        end
    end
    if count_q >= target
        println(count, " ", count_q/step, " ", qt, " ", qt/count_q)
        target += step
    end
end
|
Disclaimer: This isn't very elegant and just brute-forces through the solution. You're trying to find $$P(a+b\ge c,a+c\ge b,b+c\ge a|a,b,c\sim U(0,1),b^2-4ac\ge0)$$ which is equal to $$\frac{P(a+b\ge c,a+c\ge b,b+c\ge a,b^2-4ac\ge0|a,b,c\sim U(0,1))}{P(b^2-4ac\ge0 | a,b,c\sim U(0,1))}$$ Since $a,b,c$ are chosen uniformly, we can directly set up an integral for the denominator as $$\iiint_{R}[b^2-4ac\ge0]dA$$ where $[b^2-4ac\ge0]$ , using the Iverson bracket , is $1$ if the condition is satisfied and $0$ otherwise, and $R=[0,1]^3$ . Simplifying and setting up the bounds a bit more clearly, this is equal to $$\int_0^1\int_0^1\int_{\min(\sqrt{4ac},1)}^11dbdcda=\int_0^1\int_0^1(1-\min(\sqrt{4ac},1))dcda$$ With the substitution $(p,r)\mapsto (p/r,r)=(a,c)$ . This is equal to $$\int_{0}^{1}\int_{p}^{1}\frac{1}{r}\left(1-\min\left(\sqrt{4p},1\right)\right)drdp = \int_{0}^{1}-\left(1-\min\left(\sqrt{4p},1\right)\right)\ln\left(p\right)dp$$ which simplifies to $$\int_{0}^{\frac{1}{4}}-\left(1-2\sqrt{p}\right)\ln\left(p\right)dp$$ since $\min(\sqrt{4p},1)=2\sqrt{p}$ for $p\le\frac14$ and the integrand vanishes for $p>\frac14$ .
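The conditional probability set up at the start of this answer can be reproduced quickly in Python (a vectorized analogue of the question's Julia simulation; sample size and seed are arbitrary choices):

```python
import numpy as np

# Monte Carlo estimate of P(triangle | real roots) for uniform a, b, c,
# which should land near the 0.182185 reported in the question.
rng = np.random.default_rng(0)
a, b, c = rng.random((3, 2_000_000))

real_roots = b**2 >= 4 * a * c
triangle = (a + b >= c) & (b + c >= a) & (c + a >= b)

p = (real_roots & triangle).sum() / real_roots.sum()
print(p)  # close to 0.182
```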
|
|probability|integration|geometry|algebra-precalculus|geometric-probability|
| 1
|
Linear Independence, Span, REF
|
We wish to show that the list $(1,2,-4),(7,-5,6)$ is linearly independent in $\mathbb{F}^3$ but is not a basis of $\mathbb{F}^3$ because it does not span $\mathbb{F}^3$ . Then for some $v\in\mathbb{F}^3, \hspace{0.5em}v = (a,b,c), \hspace{0.5em} \forall a,b,c\in\mathbb{F}$ . We know that if we write the vectors as column vectors in an augmented matrix and use Gaussian Elimination until REF then the coefficient matrix part is $3\times2$ , so if we obtain $m$ leading ones the vectors span $\mathbb{F^3}$ and if we obtain $n$ leading ones then the vectors are linearly independent. $$ \begin{pmatrix} 1 & 7 & a\\ 2 & -5 & b\\ -4 & 6 & c \end{pmatrix} \longrightarrow \begin{pmatrix} 1 & 7 & a\\ 0 & 1 & \frac{b-2a}{-19}\\ 0 & 1 & \frac{4a+c}{36} \end{pmatrix} $$ So then we have 3 leading ones since the first non-zero element in each row is 1, thus as stated above the vectors span the space and are not linearly independent. This is clearly nonsense! What went wrong? If we now consider another example
|
I can't say I follow some of the grittier details of your proof, but I think you're trying to use the following: Suppose $v_1, \ldots, v_n \in \Bbb{F}^m$ . Let $A = \left(\begin{array}{c|c} v_1 & v_2 & \cdots & v_n \end{array}\right)$ , and suppose $B$ is any row-echelon form of $A$ . Then $\{v_i : B \text{ has a leading $1$ in column } i \}$ is a basis for $\operatorname{span} \{v_1, \ldots, v_n\}$ , and hence are linearly independent. In your first example, the matrix $$\begin{pmatrix} 1 & 7 & a\\ 0 & 1 & \frac{b-2a}{-19}\\ 0 & \color{red}1 & \frac{4a+c}{36} \end{pmatrix}$$ is not in row-echelon form, as the entry in red lies below a leading $1$ , but is not $0$ . To use the theorem, you'd have to subtract row $2$ from row $3$ , to get: $$\begin{pmatrix} 1 & 7 & a\\ 0 & 1 & \frac{b-2a}{-19}\\ 0 & 0 & \frac{4a+c}{36} + \frac{b-2a}{19} \end{pmatrix}.$$ I'm not going to bother simplifying this expression in the bottom right, nor will I check that your reductions so far are valid. But, assuming they are, the matrix is now in row-echelon form and the theorem can be applied.
|
|linear-algebra|abstract-algebra|
| 1
|
Question regarding Transitivity of a Relation
|
Suppose we define a relation $R$ on the set of natural numbers $\mathbb N$ which says: $$(x,y)\in R\iff x^2-4xy+3y^2=0$$ and we would like to find which of the following properties $R$ satisfies. My book gives four options: (a) reflexive and transitive (b) reflexive and symmetric (c) symmetric and transitive (d) an equivalence relation According to me, none of the options is correct. For example: $(9,3),(3,1)\in R$ but $(9,1)\notin R$ hence $(a,b),(b,c)\in R \not \implies (a,c)\in R$ , proving $R$ to be NOT transitive. Am I correct? Thank you.
|
For reflexivity: $x^2-4x\cdot x+3x^2=0$ for every $x$ , so $R$ is reflexive. To verify whether it has symmetry, take arbitrary $x,y$ s.t. $(x,y) \in R$ , then we have $$x^2−4xy+3y^2=0$$ Can we derive $y^2 - 4xy + 3x^2 = 0$ from it? No, in general it is not true. Note that $x^2-4xy+3y^2=(x-y)(x-3y)$ , so $(x,y)\in R$ iff $x=y$ or $x=3y$ ; for example $(3,1)\in R$ but $(1,3)\notin R$ since $1-12+27=16\neq 0$ . So it is not symmetric. To verify whether it has transitivity, take $x,y,z$ s.t. $(x,y) \in R$ and $(y,z) \in R$ , then we have $$x^2−4xy+3y^2=0, \quad y^2−4yz+3z^2=0$$ Can we derive $x^2 - 4xz + 3z^2 = 0$ ? In general, no; your own counterexample $(9,3),(3,1)\in R$ but $(9,1)\notin R$ shows this. So we don't have transitivity. Therefore, none of the options in your book is true.
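These claims about the relation can be verified mechanically on a finite range of naturals; a small sketch (the range bound is arbitrary):

```python
# R = {(x, y) : x^2 - 4xy + 3y^2 = 0} on 1..N.  Since the quadratic
# factors as (x - y)(x - 3y), membership means x = y or x = 3y.
N = 30
R = {(x, y) for x in range(1, N + 1) for y in range(1, N + 1)
     if x * x - 4 * x * y + 3 * y * y == 0}

assert all((x, x) in R for x in range(1, N + 1))        # reflexive
assert (3, 1) in R and (1, 3) not in R                  # not symmetric
assert (9, 3) in R and (3, 1) in R and (9, 1) not in R  # not transitive
```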
|
|functions|set-theory|relations|equivalence-relations|products|
| 0
|
$\int \frac{\sqrt{a^2-x^2}}{x^2}dx$ using trigonometric substitition
|
I'm aware this question is more easily done using integration by parts or Euler substitution, but this was under a section in my book where we were asked to use trigonometric substitutions. Substituting $$x=a\sin{u}$$ I eventually get to $$-\cot{u}-u$$ But I'm not sure how to express this in terms of $x$ .
|
$\int \frac{\sqrt{a^2-x^2}}{x^2}dx$ . Let $x=a \sin u$ , so $dx = a\cos u\, du$ and $\sqrt{a^2-x^2}=a\cos u$ . Then $$\int \frac{a \cos u}{a^2\sin^2 u}\, a \cos u\, du=\int \cot^2 u \ du=\int (\csc^2 u -1) \ du= -\cot u-u+C.$$ To back-substitute, use $\sin u = x/a$ and $\cos u = \sqrt{1-x^2/a^2}$ , so $\cot u = \frac{\sqrt{a^2-x^2}}{x}$ , giving $$\frac{-\sqrt{a^2-x^2}}{x}-\arcsin (x/a)+ C.$$ Since these are trig functions, there are other possible, equivalent solutions.
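A quick numeric check that the result is a valid antiderivative, using a central difference (the choice of $a$ and the test point are arbitrary):

```python
import math

# Verify F'(x) = sqrt(a^2 - x^2) / x^2 for
# F(x) = -sqrt(a^2 - x^2)/x - arcsin(x/a), at a sample point.
a = 2.0

def F(x):
    return -math.sqrt(a * a - x * x) / x - math.asin(x / a)

def integrand(x):
    return math.sqrt(a * a - x * x) / (x * x)

x, h = 1.3, 1e-6
numeric_derivative = (F(x + h) - F(x - h)) / (2 * h)
assert abs(numeric_derivative - integrand(x)) < 1e-6
```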
|
|integration|indefinite-integrals|trigonometric-integrals|
| 1
|
Are these two function spaces identical?
|
Let the function space $A$ denote all functions $f : [0, 1) \to [0, 1)$ such that, for some set $Z$ of Lebesgue measure zero, the derivative $f'$ exists on $[0, 1) \setminus Z$ and $|f'| = 1$ there. Let the function space $B$ denote all functions $f : [0, 1) \to [0, 1)$ such that for all $a < b$ in $[0, 1)$ , the portion $G_f(a,b) = \{(x,f(x)) \in \mathbb R^2 \mid a \le x < b\}$ of the graph of $f$ that lies above $[a, b)$ has Hausdorff $1$ -measure $H_1(G_f(a,b))$ equal to $\sqrt 2(b - a)$ . (Note that we do not assume functions to be continuous.) Does $A = B$ ? Edit: The following is false, as was shown to me by Christian Remling by an easy application of the Cantor function: "It is clear to me that $A$ is a subset of $B$ ." (Wrong!) So the real question is whether the reverse inclusion holds.
|
Let $F$ be a fat Cantor set, and let $$ f(x) = x + \mathbf{1}_F(x). $$ Then for any $[a, b) \subseteq [0, 1)$ , \begin{align*} G_f([a,b)) &= \{ (x, x) : x \in [a, b) \setminus F\} \cup \{(x, x+1) : x \in [a, b) \cap F \} \end{align*} and hence we get \begin{align*} H_1(G_f([a,b))) &= \sqrt{2}\operatorname{Leb}([a, b) \setminus F) + \sqrt{2}\operatorname{Leb}([a, b) \cap F) \\ &= \sqrt{2}\operatorname{Leb}([a, b)) \\ &= \sqrt{2}(b - a). \end{align*} This shows that $f$ is a member of $B$ . On the other hand, $f$ is not differentiable at each point in $F$ . Therefore, $f$ does not lie in $A$ .
|
|real-analysis|measure-theory|geometric-measure-theory|
| 0
|
Dimensionality of the set of linear maps between inner product spaces
|
Suppose we have two inner product spaces $V$ and $W$ with dimensions $n$ and $m$ , respectively. For some set $\{ \mathbf{v}_i\}_{i=1}^k \subset V$ and some nonzero $\mathbf{w} \in W$ , where $1 \leq k \leq n$ , we let the subspace $U \subset L(V,W)$ be the set of linear maps $T$ between these inner product spaces such that $\langle T \mathbf{v}_i, \mathbf{w} \rangle = 0$ for all $i$ . Now, if that set $\{ \mathbf{v}_i\}_{i=1}^k$ happens to be linearly independent, then what can we say about the dimension of $U$ ? Is there a range of values that the dimension of $U$ can take, or is there a unique value? Moreover, I wonder what does it mean, intuitively, for $U$ to have a dimension? I'm not quite sure where to begin with any of these questions, so any prods in the right direction or heuristic explanations would be helpful.
|
Firstly, on your question about dimension of $U$ : since $L(V,W)$ is a vector space, we can do usual vector space things (finding a basis, finding its dimension, etc.). This of course goes for its subspaces, including $U$ . Now, note that $\langle Tv_i, w \rangle =0$ is equivalent to $T(\text{Span}(\{v_i\}))\subseteq \{w\}^{\perp}$ . We are searching for the subspace $U\subset L(V,W)$ of linear maps that send the subspace $\text{Span}(\{v_i\})\subset V$ , having dimension $k$ , to the subspace $\{w\}^{\perp}\subset W$ , having dimension $m-1$ . Note that such a map $T$ need not have rank equal to $m-1$ ; in fact, $T=0$ works fine. The final steps are not hard, but I'll hide them in case you'd like to do them yourself. The problem now is just combinatorial. Let $V_1 =\text{Span}(\{v_i\})\subset V$ and $V_2 = V_1^{\perp}$ . There are $k(m-1)$ dimensions of linear maps sending $V_1$ into $\{w\}^{\perp}$ (since they have dimension $k$ and $m-1$ , respectively), and $(n-k)m$ dimensions of linear maps sending $V_2$ into $W$ with no restriction, for a total of $k(m-1)+(n-k)m=mn-k$ .
|
|linear-transformations|inner-products|
| 1
|
Need to compute the conditional expectation
|
Note: I have edited the question to add more context to it. Please provide me with feedback. One unbiased die is thrown 10 times. For $1 \leq i \leq 6$ , let $X_i$ denote the number of times $i$ appears. Find the conditional expectation of $X_1$ given that $X_6 = 5$ . My approach to solve this problem is as follows: The formula for computing the expectation is: $E(X)= \sum_{i=1}^n x_{i}P(X=x_i)$ where $X$ is the random variable and $x_i$ 's are the values that the random variable takes. $P(X=x_i)$ is the probability of the random variable taking the value $x_i$ . Now, coming back to the question, it is clear that the random variable $X_1$ can take any value from 0 to 5, with 0 and 5 included. This is because we are already given that the value of the random variable $X_6$ is 5, and the total number of throws is 10. Now, the next task is to compute the probability distribution of $X_1$ given that $X_6$ = 5. I first try to compute the joint probability distribution for $X_1$ and $X_6$ .
|
Let's break this down into an easier problem to see how to approach this. Imagine we have a d3 and roll it 2 times. What is the expected number of times each face appears? Well it's just simply $\frac{2}{3}$ . Now, conditioned on there being exactly 1 of those 2 rolls that are 3s, we know that the roll unaccounted for is either a 1 or a 2 and therefore is a coin toss as to which one it is. Tying this to your question, if we know that $X_6 = 5$ then we expect the remaining $5$ rolls to be evenly split among the values $1$ through $5$ , so we have that $E(X_1 | X_6=5) = \frac{1}{5} \times 5 = 1$ .
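The value $E(X_1 \mid X_6 = 5) = 1$ can be checked with a quick conditional Monte Carlo simulation (sample size and seed are arbitrary choices of this sketch):

```python
import random

# Simulate 10 rolls of a fair die; among runs with exactly five 6s,
# average the count of 1s.  The answer predicts a mean of 1.
random.seed(1)
total, kept = 0, 0
for _ in range(200_000):
    rolls = [random.randint(1, 6) for _ in range(10)]
    if rolls.count(6) == 5:
        kept += 1
        total += rolls.count(1)

print(total / kept)  # close to 1
```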
|
|conditional-probability|conditional-expectation|
| 1
|
Convergence of exponential sequence
|
I am looking at a sequence where $a_1>0$ and $$ a_{n+1}=a_n^{a_n}.$$ I want to figure out for what values of $a_1$ does this sequence converge. My guess is that it converges for $a_1\leq 1$ and diverges for $a_1>1$ . I was able to show convergence for $a_1\leq 1$ since if $a_1=1$ , then it is trivial, otherwise, we can make a change of variables $b_n=\ln(a_n)$ and show that $b_n$ converges, which implies $a_n$ converges. Then we have $$b_{n+1}=b_ne^{b_n}$$ Now if $b_1<0$ , then $b_{n}<0$ for all $n$ since $b_{k+1}=b_ke^{b_k}<0$ as $e^{b_k}>0$ and $b_k<0$ inductively. However, we also have $b_{k+1}>b_k$ as $$b_{k+1}=b_ke^{b_k}>b_k$$ as we assume $b_k<0$ which implies $e^{b_k}\in(0, 1)$ , and since $b_k<0$ , multiplying by a factor in $(0,1)$ gives a larger (less negative) number, so $b_k$ converges due to the monotonic sequence theorem. Now I am stuck on showing divergence. I know there are divergent values (for example $a_1=2$ ), but I have no clue about the general behavior for $a_1>1$ , like values such as $a_1=1.0001$ . I was wondering if I could get a hint on how to proceed.
|
Using the same transformation $b_n = \ln a_n$ , $b_{n+1} = b_ne^{b_n}$ . If $a_1 > 1$ then $b_1 > 0$ . Since $e^{b} > 1 + b$ for $b>0$ , the sequence is increasing: $b_{n+1}=b_ne^{b_n}>b_n(1+b_n)>b_n$ . In particular $b_n\ge b_1$ for all $n$ , so $$b_{n+k} > (1+b_1)^k b_n \ge (1+kb_1) b_n$$ by Bernoulli's inequality. This growth is faster than any linear progression: for any $\epsilon > 0$ , choosing $k > \frac {\epsilon}{b_1^2}$ gives $$b_{n+k} - b_n > kb_1 b_n \ge kb_1^2 > \epsilon,$$ so $\{b_n\}$ is not Cauchy. The sequence $\{b_n\}$ diverges, hence $\{a_n\}$ diverges.
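The two regimes can be seen by iterating the transformed recurrence directly (iteration counts and the blow-up threshold are arbitrary choices of this sketch):

```python
import math

# Iterate b_{n+1} = b_n * exp(b_n) with b_n = ln(a_n):
#  b_1 > 0  -> b_n blows up (a_1 > 1 diverges),
#  b_1 < 0  -> b_n creeps up toward 0 (a_1 < 1 gives a_n -> 1).
def iterate(b, steps):
    for _ in range(steps):
        if b > 50:               # stop before exp() overflows
            return b
        b = b * math.exp(b)
    return b

b_div = iterate(math.log(1.0001), 100_000)   # a_1 = 1.0001
b_conv = iterate(math.log(0.5), 100_000)     # a_1 = 0.5

assert b_div > 50          # diverges, even for a_1 barely above 1
assert -1e-3 < b_conv < 0  # stays negative, converging up toward 0
```

Note how slowly the divergent case escapes: with $b_1 \approx 10^{-4}$ it takes on the order of $10^4$ steps before the blow-up, matching the "faster than linear, but starting tiny" growth in the argument above.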
|
|real-analysis|sequences-and-series|recurrence-relations|
| 0
|
Dimensionality of the set of linear maps between inner product spaces
|
Suppose we have two inner product spaces $V$ and $W$ with dimensions $n$ and $m$ , respectively. For some set $\{ \mathbf{v}_i\}_{i=1}^k \subset V$ and some nonzero $\mathbf{w} \in W$ , where $1 \leq k \leq n$ , we let the subspace $U \subset L(V,W)$ be the set of linear maps $T$ between these inner product spaces such that $\langle T \mathbf{v}_i, \mathbf{w} \rangle = 0$ for all $i$ . Now, if that set $\{ \mathbf{v}_i\}_{i=1}^k$ happens to be linearly independent, then what can we say about the dimension of $U$ ? Is there a range of values that the dimension of $U$ can take, or is there a unique value? Moreover, I wonder what does it mean, intuitively, for $U$ to have a dimension? I'm not quite sure where to begin with any of these questions, so any prods in the right direction or heuristic explanations would be helpful.
|
Hint1. If $B=(v_1,\dots,v_n)$ is a basis of $V$ , prove that the map $$U\to(w^\perp)^k\times W^{n-k},\;T\mapsto(T(v_1),\dots,T(v_n))$$ is an isomorphism. Hint2. Justify that $(w^\perp)^k\times W^{n-k}$ is isomorphic to $(\Bbb R^{m-1})^k\times(\Bbb R^m)^{n-k}=\Bbb R^d$ , where $d=k(m-1)+(n-k)m=mn-k$ . Edit. Alternatively and more directly, if moreover $C=(w_1,\dots,w_m)$ is a basis of $W$ such that $w_2,\dots,w_m\perp w$ , check that the canonical isomorphism $$L(V,W)\xrightarrow{\sim} M_{m,n}(\Bbb R),\;T\mapsto[T]^B_C$$ maps $U$ onto the subspace of matrices whose $k$ first entries on the first line are $0$ (the dimension of this subspace is obviously $mn-k$ ).
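The count $d = mn - k$ can be confirmed numerically: in coordinates, $\langle T v_i, w\rangle = w^\top T v_i = \operatorname{kron}(w, v_i)\cdot \operatorname{vec}(T)$, so $U$ is the null space of a $k \times mn$ constraint matrix of rank $k$. A sketch with arbitrarily chosen sizes:

```python
import numpy as np

# dim U = mn - rank(K), where K stacks one constraint row kron(w, v_i)
# per vector v_i; for independent v_i and w != 0 the rank is k.
rng = np.random.default_rng(0)
n, m, k = 5, 4, 3
V = rng.standard_normal((k, n))   # k generically independent v_i
w = rng.standard_normal(m)

K = np.stack([np.kron(w, v) for v in V])   # shape (k, m*n)
dim_U = m * n - np.linalg.matrix_rank(K)
assert dim_U == m * n - k   # 20 - 3 = 17
```

The row-major flattening convention here matches `np.kron(w, v)[p*n + q] == w[p] * v[q]`, pairing with `T.flatten()`.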
|
|linear-transformations|inner-products|
| 0
|
Zero divisors in an algebra with two generators
|
Let $k$ be a field, and $R = k\langle x,y \mid x^2 = 0\rangle$ . The elements $x$ and $y$ are not supposed to commute with each other. Is the only case where nonzero elements $a, b \in R$ satisfy $ab=0$ when $a\in Rx$ and $b\in xR$ ?
|
The question is answered in Example 9.3 in the paper Victor Camillo and Pace P. Nielsen, McCoy rings and zero-divisors, J. Pure Appl. Algebra 212 (2008), no. 3, 599–615 .
|
|algebras|
| 1
|
Dot product vs Matrix multiplication, is the later a special case of the first?
|
I seem to have thoroughly confused myself today... Long story short, the question is simple. Is matrix multiplication just a special case of the dot product of two sets of vectors when the sets of vectors have the same cardinality and all vectors in both sets have the same length? I assume the answer is yes from reviewing the computation of matrix multiplication and the dot product .
|
Matrix multiplication is the multiplication of tensors (arrays) of 2 or more dimensions; either one of the 2 operands can be a 1D array (tensor), but not both. The rule you must follow to do matrix multiplication is that the number of columns of tensor (array) A must match the number of rows of tensor (array) B. For 2D arrays, symbolically:

[[a, b, c],     [[g, h, i, j],     [[ag+bk+co, ah+bl+cp, ai+bm+cq, aj+bn+cr],
 [d, e, f]]  x   [k, l, m, n],  =   [dg+ek+fo, dh+el+fp, di+em+fq, dj+en+fr]]
                 [o, p, q, r]]

so a 2-row, 3-column array times a 3-row, 4-column array gives a 2x4 array. With numbers:

[[2, 7, 4],     [[5, 0, 8, 6],     [[35, 58, 59, 69],
 [6, 3, 5]]  x   [3, 6, 1, 7],  =   [44, 38, 96, 67]]
                 [1, 4, 9, 2]]

since, entry by entry:

[[2x5+7x3+4x1, 2x0+7x6+4x4, 2x8+7x1+4x9, 2x6+7x7+4x2],
 [6x5+3x3+5x1, 6x0+3x6+5x4, 6x8+3x1+5x9, 6x6+3x7+5x2]]

In PyTorch with @ , matmul() or mm() :

import torch

tensor1 = torch.tensor([[2, 7, 4], [6, 3, 5]])
tensor2 = torch.tensor([[5, 0, 8, 6], [3, 6, 1, 7], [1, 4, 9, 2]])

tensor1 @ tensor2
# tensor([[35, 58, 59, 69],
#         [44, 38, 96, 67]])

torch.matmul(tensor1, tensor2)
# tensor([[35, 58, 59, 69],
#         [44, 38, 96, 67]])
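To connect this directly to the question: each entry of the product is the dot product of a row of A with a column of B, which can be checked entry by entry (a NumPy sketch using the same matrices as above):

```python
import numpy as np

# C[i, j] equals the dot product of row i of A with column j of B,
# which is the sense in which matmul is "made of" dot products.
A = np.array([[2, 7, 4], [6, 3, 5]])
B = np.array([[5, 0, 8, 6], [3, 6, 1, 7], [1, 4, 9, 2]])

C = A @ B
for i in range(A.shape[0]):
    for j in range(B.shape[1]):
        assert C[i, j] == np.dot(A[i, :], B[:, j])

print(C)  # [[35 58 59 69] [44 38 96 67]]
```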
|
|linear-algebra|matrices|inner-products|
| 0
|
Understanding Lowenheim-Skolem theorem and its consequences
|
Based on my limited understanding, the precise formulation of Lowenheim-Skolem (LS) theorem is stated as follows: Let $\Gamma$ be a set of sentences from a countable language $\mathcal L$ . Let $\mathcal M$ be an infinite model for $\Gamma$ , i.e. $|\text{dom}(\mathcal M)|\ge\aleph_0$ . Then, (Upward part) For every cardinal number $\kappa\ge|\text{dom}(\mathcal M)|$ , there exists an elementary extension $\mathcal N$ of $\mathcal M$ such that $|\text{dom}(\mathcal N)|=\kappa$ . (Downward part) For every infinite cardinal number $\kappa<|\text{dom}(\mathcal M)|$ , there exists an elementary substructure $\mathcal N$ of $\mathcal M$ such that $|\text{dom}(\mathcal N)|=\kappa$ . According to Sets, Logic, Computation , one of the consequences of this theorem is that FOL cannot express that the size of a structure is uncountable. I don't understand what it means. What does it mean to "express" exactly? I interpreted this statement as: for any infinite cardinal $\kappa>\aleph_0$ (not sure if this result holds for $\kappa=\aleph_0$ ), there is no set of sentences all of whose models have cardinality $\kappa$ .
|
I think I figured it out now. The LS theorem tells us that for any set of sentences $\Gamma$ in countable $\mathcal L$ , FOL cannot "control" the cardinalities of its infinite models, i.e. it is inevitable that there are other infinite models for $\Gamma$ with cardinality $\ge\aleph_0$ . So, it has two main consequences: There is no consistent set of sentences $\Gamma$ in $\mathcal L$ such that all of its models are uncountably infinite. Otherwise, the downward LS theorem implies $\Gamma$ has a countably infinite model, which is a contradiction. There is no consistent set of sentences $\Gamma$ in $\mathcal L$ such that all of its models are countably infinite. Otherwise, the upward LS theorem implies $\Gamma$ has an uncountably infinite model, which is a contradiction.
|
|logic|model-theory|
| 0
|
How to find the value of $\Gamma (0^+)$?
|
This question was part of my complex analysis assignment and I am not able to solve it. Find the value of $\Gamma (0^+)$ . (It could be $-\infty $ or $+\infty$ .) I used the formula of gamma function which is $\Gamma (z) = \int_{0}^{\infty} t^{z-1}e^{-t} dt $ and I got by putting $z =0^+ $ , $\Gamma (0^+)=\int_{0}^{\infty}(1/x ) e^{-x} dx$ and if I integrate it by parts I get it equal $-\infty-\int_{0}^{\infty}(1/x^2) e^{-x}dx$ . If I again use integration by parts on $\int_{0}^{\infty}(1/x^2) e^{-x}dx$ and do it successively the power of $x$ will become more negative. So, I am not able to solve the integral. Can you please tell me how to do it?
|
Another point of view from which to see the singularities: $$\Gamma(z)=\frac{\Gamma(z+n+1)}{z(z+1)\cdots(z+n)}$$ The denominator is zero at $z=0, -1, -2, \ldots$ , i.e. at all non-positive integers, so the gamma function is left undefined at those points to avoid division by zero. Moreover, as $z \to 0^+$ the denominator tends to $0^+$ while the numerator tends to $\Gamma(n+1)=n!>0$ , so $\Gamma(0^+)=+\infty$ .
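The blow-up $\Gamma(z)\sim 1/z$ as $z\to 0^+$ is easy to see numerically with the standard library (sample points are arbitrary):

```python
import math

# Gamma(z) grows roughly like 1/z as z -> 0+, so Gamma(0+) = +infinity.
for z in (0.1, 0.01, 0.001):
    print(z, math.gamma(z))   # values of order 1/z

assert math.gamma(0.001) > math.gamma(0.01) > math.gamma(0.1)
assert math.gamma(0.001) > 900   # close to 1/0.001 = 1000
```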
|
|real-analysis|integration|gamma-function|
| 0
|
Convexity radius function on manifold is continuous
|
Let $(M, g)$ be a Riemannian manifold. For $p \in M$ , the convexity radius at $p$ , denoted by $\mathrm{conv}(p)$ , is defined as $$\mathrm{conv}(p) := \sup \{r > 0 \ \mid \ B_g(p, r) \text{ is a geodesically convex geodesic ball} \} \in (0, \infty]. $$ It is known that $\mathrm{conv}(p) > 0$ for all $p \in M$ . I would like to show that the map $$\mathrm{conv} : M \to (0, \infty], \quad p \mapsto \mathrm{conv}(p) $$ is continuous. I tried showing that, around a point $p \in M$ where $\mathrm{conv}(p)<\infty$ , the $\mathrm{conv}$ map is $1$ -Lipschitz. For $q \in M$ , due to symmetry, it suffices to show that $$\mathrm{conv}(q) \geq \mathrm{conv}(p) - d_g(p,q). $$ If $d_g(p, q) \geq \mathrm{conv}(p)$ , we clearly have $$\mathrm{conv}(q) \geq 0 \geq \mathrm{conv}(p) - d_g(p,q). $$ If, however, $d_g(p, q)<\mathrm{conv}(p)$ , I am not sure how to prove that $$\mathrm{conv}(q) \geq \mathrm{conv}(p) - d_g(p,q), $$ because I am not sure that the geodesic ball of radius $\mathrm{conv}(p) - d_g(p,q)$ around $q$ is geodesically convex.
|
The key part of the proof is the fact that if $\gamma$ is a minimizing geodesic segment joining two points of a geodesically convex geodesic ball $B_r(p)$ , then $d_g(p,\gamma(t))$ attains a maximum at one of its endpoints. This is from Problem 6-5 in Introduction to Riemannian Manifolds by John M. Lee or Lemma 4.1 from Riemannian Geometry by Do Carmo. To prove $$\text{conv}(p)\geq \text{conv}(q)-d_g(p,q)$$ when the RHS is positive, let $\text{conv}(q) = R$ . For any $r<R$ , the ball $B_r(q)$ is geodesically convex and $B_{r-d_g(q,p)}(p) \subseteq B_r(q)$ . For any two points $x_1,x_2\in B_{r-d_g(q,p)}(p) \subseteq B_r(q)$ there is a minimizing geodesic segment $\gamma$ in $B_r(q)$ joining these two points. Now $$ d_g(p,\gamma(t)) \leq d_g(q,p) + d_g(q,\gamma(t)) $$ Since $d_g(q,\gamma(t))$ attains its maximum at $x_1$ or $x_2$ , so does $d_g(p,\gamma(t))$ , i.e., the curve $\gamma(t)\in B_{r-d_g(q,p)}(p)$ . Therefore $$ \text{conv}(p) \geq r-d_g(p,q) $$ Since this is true for all $r<R$ , the desired result follows.
|
|riemannian-geometry|geodesic|
| 0
|
Proof $\lim_{n\to\infty} \cos^{2n}\left(\frac{x}{\sqrt{n}}\right) = e^{-x^2}$
|
I have it in the notes of an old project that $\cos^{2n}\left(\frac{x}{\sqrt{n}}\right) \to e^{-x^2}$ pointwise as $n\to \infty$ . The proof I gave at the time sounds bogus, but it's quite close empirically for $n\ge 10$ . The bogus argument is basically that $$\cos^{2}\left(\frac{x}{\sqrt{n}}\right)\approx \left(1-\frac{x^2}{n}\right)$$ and $$ \lim_{n\to\infty} \left(1-\frac{x^2}{n}\right)^n = \exp(-x^2) $$ by the definition of $\exp$ . But unless I'm forgetting something nice about limits, I see no reason why the quality of the approximation improves if I take both sides to a power. A promising argument that got nasty is to show that $\lim_{n\to\infty} \cos^{2n}\left(\frac{x}{\sqrt{n}}\right)$ has the same Taylor series expansion as \begin{align} e^{-x^2}&=\sum_{k=0}^\infty \frac{(-x^2)^k}{k!}\\ &=\sum_{\text{$k$ even}}\frac{(-1)^{k/2}k!}{(k/2)!}\frac{x^k}{k!}.\end{align} Well, consider $$ \left. \frac{d^k}{dx^k} \cos^{2n}\left(\frac{x}{\sqrt{n}}\right)\right|_{x=0}.$$ Each differentiation, via the product and chain rules, produces a rapidly growing number of terms, and the bookkeeping got out of hand.
|
Let $L= \lim_{n\to\infty} \cos^{2n}\left(\frac{x}{\sqrt{n}}\right)$ , so $\log L=\lim_{n\to\infty} \log\left(\cos^{2n}\left(\frac{x}{\sqrt{n}}\right)\right)$ . Using the Taylor expansion of $\cos x$ , we get $\cos^{2n}(\frac{x}{\sqrt{n}})=(1-\frac{x^2}{2!n}+\frac{x^4}{4!n^2}-\cdots)^{2n}$ . Thus we get $\log L=\lim_{n\to\infty} (2n)\log(1-\frac{x^2}{2!n}+\frac{x^4}{4!n^2}-\cdots)$ . Since $n\to\infty$ , we can neglect the terms starting from $\frac{x^4}{4!n^2}$ to get the following: $\log L=\lim_{n\to\infty} (2n)\log(1-\frac{x^2}{2!n})$ . Using the series expansion of $\log(1-u)$ , $\log L=\lim_{n\to\infty} (2n)\left(-\frac{x^2}{2!n}-\frac{x^4}{2(2!n)^2}-\cdots\right)= -x^2$ after neglecting the higher-order terms. So $\log L=-x^2$ , hence $L=e^{-x^2}$ , which is the required result.
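The claimed limit can be checked numerically at a sample point (the value $x=1.5$ and the $n$ values are arbitrary):

```python
import math

# Check cos(x/sqrt(n))^(2n) -> exp(-x^2) at x = 1.5 as n grows.
x = 1.5
approx = [math.cos(x / math.sqrt(n)) ** (2 * n) for n in (10, 1000, 100_000)]
target = math.exp(-x * x)
errors = [abs(a - target) for a in approx]
print(errors)  # shrinking toward 0
```

Consistent with the question's remark, the agreement is already decent at $n=10$ and improves steadily.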
|
|limits|taylor-expansion|normal-distribution|
| 0
|
Symmetry of Green's Functions
|
In the book Introduction to Partial Differential Equations by Folland, he states the claim: Let $ \Omega$ be a bounded domain in $\mathbb{R}^n$ with smooth boundary $ S$ . The Green's function $G$ for $\Omega$ exists and for each $x \in \Omega$ , $G(x, \cdot) \in C^\infty(\overline{\Omega} \backslash \{x\})$ . Then he claims $\forall x,y \in \Omega : G(x,y) = G(y,x)$ . He provides the following formal argument: \begin{align*} G(x,y) - G(y,x) &= \int_{\Omega} G(x,z) \delta(y - z) - G(y,z) \delta(x - z) \mathrm{\,d} z \\ &= \int_{S} G(x,z) \partial_{\nu _z} G(y,z) - G(y,z) \partial_{\nu _z} G(x,z) \mathrm{\,d} \sigma(z) = 0 \end{align*} as $ \forall x \in \Omega : \Delta G(x,\cdot) = \delta (x - \cdot)$ and $ G(x, S) = 0$ . A strategy for formalising that he suggests is to excise small balls around $x,y$ from $\Omega$ and then let their radii shrink to zero. Approach As $ G(x , \cdot)\in C^\infty(\overline{\Omega} \backslash \{x\})$ , for all $x,y \in \Omega$ : \begin{align*} G(x, y) - G(y,x) &= \lim_{\epsilon \to 0} \int_{\partial B_\epsilon(x)\cup \partial B_\epsilon(y)} G(x, z)\partial_{\nu} G(y, z) - G(y, z)\partial_{\nu} G(x, z) \mathrm{\,d}\sigma. \end{align*}
|
With discussion with @Three aggies, here is the complete solution: Fix $x \neq y \in \Omega$ and let $ G^x := G(x, \cdot)$ . Since $ \Delta G^x = \delta _x$ so $G^x$ is harmonic away from $x.$ When $ n > 2 $ , we have $ \partial_{\nu _z} G^x$ on $\partial B_\varepsilon(x)$ is given by $$ \nu _x (z) \cdot \nabla _z G^x = \varepsilon ^{-1} (z - x) \cdot \nabla_z \frac{ \lvert z -x \rvert^{2 - n}}{\omega _n (2 - n)} = (\varepsilon \omega _n ) ^{-1} \lvert z - x \rvert^2 \lvert z - x \rvert^{ -n} = \varepsilon ^{1 - n} \omega _n ^{-1} = \lvert \partial B_\varepsilon \rvert^{-1} $$ Moreover, for $ \partial_{\nu _z}G^y$ on $\partial B_\varepsilon(x)$ , we have $$ \lvert \nu _x(z) \cdot \nabla_z G^y \rvert = (\varepsilon \omega _n)^{-1} \lvert y - z \rvert^{-n} \lvert (z -y) \cdot ( z- x) \rvert \leq (\varepsilon \omega _n)^{-1} \lvert y - z \rvert^{1 -n} \lvert z- x \rvert. $$ As such, via Green's identities applicable due to smoothness of $G^x$ (respectively $G^y$ ), away from $x$ and $ G^x
|
|partial-differential-equations|harmonic-functions|greens-function|
| 0
|
Doubt related to Symmetric matrix
|
Let $A \in \mathbb{R}^{n \times n}$ be a real symmetric matrix. Which of the following are true? (a) If $ A^k=I_n$ for some positive integer $k$ then $A^2=I$ (b) If $ A^k=0_n$ for some positive integer $k$ then $A^2=0_n$ My approach: (b) Given $A^k=0_n \implies $ the minimal polynomial (say $m(x)$) must divide the annihilating polynomial, that is $m(x) \mid \lambda^k$. Also, since $A$ is real symmetric it is diagonalizable, and by definition it is a nilpotent matrix, so $A$ must be zero, which implies $A^2=0_n.$ But for part (a) I don't have any idea how to prove or disprove it. Please check part (b) and suggest something for part (a).
|
You attacked (b) with diagonalisation, which is often the right approach when you have a symmetric matrix. That works with (a) as well. Let's assume that diagonalisation has already been done, and we're working with a diagonal matrix. If $A^k = I$ , then each of the diagonal entries must fulfill $x^k = 1$ . And among the reals, there are only two numbers that do that: $\pm 1$ . Therefore all diagonal entries are either $1$ or $-1$ , and therefore $A^2$ must be the identity matrix.
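A quick numerical illustration (not part of the proof): build a symmetric $A$ with eigenvalues $\pm 1$ via a random orthogonal matrix, so $A^4 = I$, and check that $A^2 = I$.

```python
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))  # random orthogonal matrix
D = np.diag([1, -1, 1, -1, -1])                   # the only real k-th roots of 1 are +/-1
A = Q @ D @ Q.T                                   # symmetric by construction

I = np.eye(5)
assert np.allclose(A, A.T)                            # A is symmetric
assert np.allclose(np.linalg.matrix_power(A, 4), I)   # A^k = I with k = 4
assert np.allclose(A @ A, I)                          # hence A^2 = I
print("A^4 = I and A^2 = I, as expected")
```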
|
|linear-algebra|symmetric-matrices|
| 1
|
how to epsilon-delta
|
I kinda don't know how to solve these; to be more precise, I can't find a proper $\delta$ for the following limits using the epsilon-delta definition: $\displaystyle\lim_{x\to 4}\sqrt{x}=2$ $\displaystyle\lim_{t\to 7}\frac{8}{t-3}=2$ For the first limit, I found in the solutions manual that $\delta = \frac{1}{2}$ . In the first limit we have, by the epsilon-delta definition: $|\sqrt{x} - 2| < \epsilon$ . Multiply by the function's conjugate to get an expression with the same form as $|x - 4|$ : $\left| (\sqrt{x} - 2) \cdot \left( \frac{\sqrt{x} + 2}{\sqrt{x} + 2} \right) \right|$ The numerator is a difference of squares, and the denominator is expanded according to distributivity: $\left| \frac{x - 4}{\sqrt{x} + 2} \right|$ Transform into a product so that the factor $(x - 4)$ is highlighted: $\left| (x - 4) \cdot \frac{1}{\sqrt{x} + 2} \right|$
|
Let's walk through the first one. We claim: $\lim_\limits{x\to 4} \sqrt x = 2$ Then we must show that the following is true: For any $\epsilon > 0$ there exists $\delta > 0$ such that when $|x-4| < \delta$ then $|\sqrt x - 2| < \epsilon$ . As you note, $|\sqrt x - 2| = |(\sqrt x - 2)\frac {\sqrt x + 2}{\sqrt x + 2}| = | \frac {x-4}{\sqrt x + 2}|$ And, from the line above, we only care about the case when $|x-4| < \delta$ : $|\sqrt x - 2| = |\frac {x-4}{\sqrt x + 2}| < |\frac {\delta}{\sqrt x + 2}|$ And since $\sqrt {x} \ge 0$ , $|\frac {\delta}{\sqrt x + 2}| \le \frac {\delta}{2}$ Putting it all together: If $\delta \le 2\epsilon$ , then $|\sqrt {x} - 2| < \frac{\delta}{2} \le \epsilon$ And we have shown what we need to show. For any $\epsilon$ we have found that there exists a corresponding $\delta$ that does what we need it to do.
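The bound can be sanity-checked numerically. Here $\delta = \min(2\varepsilon, 4)$ (the extra cap on $\delta$ keeps $x > 0$ so $\sqrt{x}$ is defined); this is a check, not a substitute for the proof.

```python
import math

def works(eps, trials=10_000):
    delta = min(2 * eps, 4.0)
    for k in range(1, trials + 1):
        # sample x with |x - 4| < delta
        x = 4 + delta * (2 * k / trials - 1) * 0.999
        if not abs(math.sqrt(x) - 2) < eps:
            return False
    return True

for eps in (1.0, 0.1, 1e-3, 1e-6):
    assert works(eps)
print("delta = min(2*eps, 4) works for all sampled eps")
```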
|
|real-analysis|calculus|limits|problem-solving|epsilon-delta|
| 0
|
How to linearize time-variant ordinary differential equation (ODE), in particular, if there is a convolution term?
|
Letting $(\ * \ )$ denote convolution, I have an ODE of the form \begin{align} \dot{r} &= -7.4r -1.6f - 8.8 (f(t)*e^{-t}) - 10.4(f(t)*(te^{-t})) \\ \dot{f} &= 0.25r \end{align} with initial conditions $r(0) = f(0) = 0.$ I would like to linearize about $[r=0, f=0]^t$ . I realized that in general I can linearize when I have a time invariant ODE; e.g., letting $x(t) = [r,f]^t$ I can linearize $\dot{x} = g(x)$ . I realized I do not know how to linearize a general equation of the form $\dot{x} = g(x,t)$ (time varying). How do we linearize time-variant ODEs? Aside from the general question, there is a more specific question--how do we linearize the convolution terms? It seems I need to compute $\frac{\partial }{\partial f}$ of the operator $f\mapsto (f(t)*e^{-t})$ . This implies we are considering the derivative of an operator defined on a function space---what are the function spaces and norms/topologies in play here?
|
The system of ODEs is linear, so you can solve it using the Laplace transform. \begin{align} s\hat{r} &= -7.4\hat r -1.6\hat f - 8.8 \hat f\frac{1}{s+1}- 10.4\hat f\frac{1}{(s+1)^2} + r_0 \\ s\hat {f} &= 0.25\hat r+ f_0 \end{align} thus obtaining $$ \cases{ \hat r = \frac{r_0 s (s+1)^2-f_0 ((1.6 s+12) s+20.8)}{s^4+9.4 s^3+16.2 s^2+10.4 s+5.2}\\ \hat f=\frac{(s+1)^2 \left(f_0 (s+7.4)+0.25 r_0\right)}{s^4+9.4 s^3+16.2 s^2+10.4 s+5.2} } $$ NOTE If $r_0 = f_0 = 0$ then the solution is the trivial solution: $\hat f = \hat r = 0$
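The algebra can be reproduced with SymPy (a sketch; exact rationals are used in place of the decimal coefficients):

```python
import sympy as sp

s, r0, f0 = sp.symbols('s r0 f0')
rh, fh = sp.symbols('rhat fhat')

# Laplace-transformed system; the convolutions become products:
# L[e^{-t}] = 1/(s+1),  L[t e^{-t}] = 1/(s+1)^2
eqs = [
    sp.Eq(s * rh, sp.Rational(-37, 5) * rh - sp.Rational(8, 5) * fh
          - sp.Rational(44, 5) * fh / (s + 1)
          - sp.Rational(52, 5) * fh / (s + 1) ** 2 + r0),
    sp.Eq(s * fh, sp.Rational(1, 4) * rh + f0),
]
sol = sp.solve(eqs, [rh, fh])

# with zero initial conditions the homogeneous linear system yields the trivial solution
assert sp.simplify(sol[rh].subs({r0: 0, f0: 0})) == 0
assert sp.simplify(sol[fh].subs({r0: 0, f0: 0})) == 0
print(sp.simplify(sol[fh]))
```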
|
|ordinary-differential-equations|dynamical-systems|distribution-theory|convolution|linearization|
| 0
|
Autocorrelation of the OU process
|
In a 1966 paper by the physicist Kubo ( https://iopscience.iop.org/article/10.1088/0034-4885/29/1/306 ), I found the following problem: The author considers an OU process of the form $$m\frac{dv}{dt}=-m\gamma v +R(t)$$ Where the second term is a gaussian stochastic process. He then argues that the probability distribution of velocities of this object should be given by \begin{align} \frac{\partial P}{\partial t}(t,v)=\nabla_v\cdot\left[D_v\nabla_v +\gamma v\right]P && (1.1) \end{align} Then he argues that these quantities should be related by the expression $$ D_v=\frac{1}{m^2}\int_0^\infty \langle R(0)R(s)\rangle ds$$ However, if I were to write this SDE using standard Ito calculus notation, I would write $$dv=-\gamma v dt +\sqrt{2D_v}dW_t$$ In order to reach the same PDE as in equation (1.1). However, if I do that, that would mean that $R(s)=m\sqrt{2D_v}dW_t$ , which would mean $$\frac{1}{m^2}\int_0^\infty \langle R(0)R(s)\rangle ds=2D_v\int_0^\infty \langle dW_0 dW_s\rangle ds=2D_v\
|
Must be because the heat equation $$ \partial_tp=\partial_x^2p $$ is satisfied by the heat kernel $$ p_t(x,y)=\frac{1}{\sqrt{\color{red}{4}\,\pi\,t}}e^{-\frac{(x-y)^2}{\color{red}{4}\,t}} $$ whilst standard Brownian motion has transition probability $$ p_t(x,y)=\frac{1}{\sqrt{\color{red}{2}\,\pi\,t}}e^{-\frac{(x-y)^2}{\color{red}{2}\,t}} $$ that satisfies $$ \partial_tp=\color{red}{\frac12}\partial_x^2p\,. $$
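The factor-of-$\tfrac12$ bookkeeping can be verified symbolically (a sketch with SymPy):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
x, y = sp.symbols('x y', real=True)

# heat kernel: solves p_t = p_xx
p_heat = sp.exp(-(x - y)**2 / (4 * t)) / sp.sqrt(4 * sp.pi * t)
assert sp.simplify(sp.diff(p_heat, t) - sp.diff(p_heat, x, 2)) == 0

# Brownian transition density: solves p_t = (1/2) p_xx
p_bm = sp.exp(-(x - y)**2 / (2 * t)) / sp.sqrt(2 * sp.pi * t)
assert sp.simplify(sp.diff(p_bm, t) - sp.Rational(1, 2) * sp.diff(p_bm, x, 2)) == 0
print("both PDEs check out")
```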
|
|stochastic-calculus|stochastic-differential-equations|
| 0
|
Does independence of generators of σ -algebras imply the independence of σ -algebras? Without assuming that the generators are ∩-stable?
|
If generators of $\sigma$-algebra independent, then $\sigma$-algebras are independent provides a proof that Let $(\Omega, \mathcal{A}, P)$ be a probability space and $\mathcal{E}_i\subset \mathcal{A},\ \forall i\in I$ . If $(\mathcal{E}_i \cup \{\emptyset\})$ is $\cap$ -stable, then $(\mathcal{E}_i)_{i\in I}\text{ independent} \Leftrightarrow\left (\sigma(\mathcal{E}_i)\right )_{i\in I}\text{ independent}$ holds. But what happens if the generators $\mathcal{E}_i$ are not stable under intersections? (In other words: What happens if we do not assume that $\mathcal{E}_i$ are $\pi$ -systems?) Can you provide a counter-example in the case that $\mathcal{E}_i$ are not $\cap$ -stable.
|
Let $(\Omega, \mathcal{A}, P)$ be a Laplace model with $\Omega=\{0,1\}^3$ (e.g., 3 coin tosses), and $\mathcal{E}_1=\left\{\{\omega\in\Omega : \omega_1=\omega_2\}, \{\omega\in\Omega : \omega_2=\omega_3\}\right\}$ , and $\mathcal{E}_2=\left\{\{\omega\in\Omega : \omega_1=\omega_3\}\right\}$ . (Here $I=\{1,2\}$ .) Then $(\mathcal{E}_i)_{i\in \{1,2\}}$ are independent, but $\left (\sigma(\mathcal{E}_i)\right )_{i\in \{1,2\}}$ are not independent.
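The counterexample is small enough to check by brute force (a sketch; uniform probability on the 8 outcomes):

```python
from itertools import product
from fractions import Fraction

omega = list(product((0, 1), repeat=3))      # 8 equally likely outcomes

def P(event):
    return Fraction(sum(1 for w in omega if event(w)), len(omega))

A1 = lambda w: w[0] == w[1]   # generator of E_1
A2 = lambda w: w[1] == w[2]   # generator of E_1
B  = lambda w: w[0] == w[2]   # generator of E_2

# the generators are independent: every E in E_1 is independent of every F in E_2
for E in (A1, A2):
    assert P(lambda w: E(w) and B(w)) == P(E) * P(B)

# but sigma(E_1) contains A1 ∩ A2 = {all tosses equal}, which is NOT independent of B
inter = lambda w: A1(w) and A2(w)
assert P(lambda w: inter(w) and B(w)) != P(inter) * P(B)
print(P(lambda w: inter(w) and B(w)), "vs", P(inter) * P(B))  # 1/4 vs 1/8
```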
|
|probability-theory|measure-theory|independence|
| 0
|
How to force character of ultrafilter be equal to $2^k$?
|
Let $k$ be an infinite cardinal. We already know there are exactly $2^{2^k}$ distinct non-principal ultrafilters on $k$ . See The set of ultrafilters on an infinite set is uncountable . The proof uses an independent family of cardinality $2^k$ . "Can we force something upon that independent family such that the ultrafilters generated have character $2^k$ ?" But my question for the post is : Are there exactly $2^{2^k}$ distinct non-principal ultrafilters with character equal to $2^k$ ? Definition: The character of an ultrafilter $\mathcal{U}$ is the least cardinality of a filter base of $\mathcal{U}$ .
|
Let $I$ be an index set of cardinality $|I|=2^\kappa$ . By Hausdorff's theorem proved in the linked post, there is a collection $\{A_i\}_{i\in I}\subseteq\mathcal{P}(\kappa)$ such that $A_i\ne A_j$ for $i\ne j$ and the $A_i$ generate a free Boolean sub-algebra of $\mathcal{P}(\kappa)$ . Notation: If $S\subseteq\kappa$ , $S^\complement:=\kappa\setminus S$ . Let $\mathfrak T=\{T\subseteq I\colon |T|=\aleph_0\}$ . For $T\in\mathfrak T$ define $\mathcal S_T:=\left(\bigcup_{i\in T}A_i^\complement\right)\in\mathcal{P}(\kappa)$ . Claim: The collection $\mathfrak L:=\{A_i\colon i\in I\}\cup\{\mathcal S_T\colon T\in\mathfrak T\}$ has the finite intersection property. Proof: Let $F$ be a finite subset of $I$ . Let $J$ be a finite subset of $\mathfrak T$ . For each $T\in J$ , since $|T|=\aleph_0$ and $F$ is finite, there is an index $i_T\in T\setminus F$ . By definition of $\mathcal S_T$ , $A_{i_T}^\complement\subseteq \mathcal{S}_T$ . Therefore $$ \left(\bigcap_{f\in F} A_f\right)\cap\left(\bigc
|
|set-theory|cardinals|filters|
| 0
|
Is there any method for gradient descent that achieves acceleration while moving always in the opposite direction of the gradient?
|
I'm studying gradient descent methods, in particular Nesterov's methods and others that achieve a better complexity (in terms of access to the gradient oracle) than regular gradient descent. In particular, for a smooth objective, accelerated gradient descent uses $O(1/\sqrt{\epsilon})$ calls to the oracle as opposed to $O(1/\epsilon)$ of regular gradient descent. I've been reading other methods that accelerate but all of them change the direction of descent. I was wondering if there is some method that always moves following the opposite direction of the gradient and that also only needs $O(1/\sqrt{\epsilon})$ to the gradient oracle.
|
While the answer to this question is, I believe, still unknown, the following papers give an algorithm that improves over the $O(1/\epsilon)$ rate of constant-step-size gradient descent while always moving in the opposite direction of the gradient: https://arxiv.org/abs/2309.07879 https://arxiv.org/abs/2309.16530
|
|optimization|convex-optimization|numerical-optimization|gradient-descent|
| 0
|
Topology induced by the norm $||x||_T \overset{\underset{\mathrm{def}}{}}{=} ||T(x)||$ is the usual topology on $\mathbb{R}^n$
|
I have been working on a few exercises that were given to us as practice for a topology class that I am currently taking. I managed to solve all exercises after a few days of thinking, however, there is one exercise that I have not been able to figure out at all: Let $T: \mathbb{R}^n \rightarrow \mathbb{R}^{n+k}$ be an injective linear transformation. Define a norm $|| \cdot ||_T$ on $\mathbb{R}^n$ in the following manner: $$\forall x \in \mathbb{R}^n , \, ||x||_T \overset{\underset{\mathrm{def}}{}}{=} ||T(x)||$$ where $||\cdot||$ is the euclidean norm on $\mathbb{R}^n$ . Show that $||\cdot||_T$ is a norm and that the topology induced by $||\cdot||_T$ is the usual topology on $\mathbb{R}^n$ I have been able to show that $||\cdot||_T$ is a norm without any issue, however I have not been able to figure out how to even start proving the second part. I have tried showing that both topologies are finer than the other with the intent of reaching the conclusion that they are equivalent by sho
|
It is not hard to prove that all norms on $\mathbb R^n$ are equivalent (see Alex's answer), but I do not think that proving this general theorem was the intention of the exercise. We can use the following well-known Lemma. Each linear map $f : \mathbb R^n \to \mathbb R^m$ is continuous with respect to the standard Euclidean topologies. Now consider $T$ as in your question. Let $e_1, \ldots, e_n$ be the standard basis of $\mathbb R^n$ . The $T(e_i)$ are linearly independent, hence $\mathbb R^{n+k}$ has a basis of the form $b_1 = T(e_1), \ldots, b_n = T(e_n), b_{n+1},\ldots, b_{n+k}$ . There exists a unique linear map $f : \mathbb R^{n+k} \to \mathbb R^n$ such that $f(b_i) = e_i$ for $i \le n$ and $f(b_i) = 0$ for $i > n$ . It is continuous with respect to the standard Euclidean topologies. Let $\mathbb R^n_T$ denote $\mathbb R^n$ with the topology induced by $\lVert - \rVert_T$ . By definition of $\lVert - \rVert_T$ we see that $T : \mathbb R^n_T \to \mathbb R^{n+k}$ is continuous. $T^{
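A numerical illustration of why $\lVert\cdot\rVert_T$ induces the same topology: for an injective $T$ the singular values give explicit constants $c\lVert x\rVert \le \lVert Tx\rVert \le C\lVert x\rVert$ with $c = \sigma_{\min} > 0$. This is a sketch (random example matrix), not part of the proof above.

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((5, 3))             # injective R^3 -> R^5 (full rank almost surely)
svals = np.linalg.svd(T, compute_uv=False)
c, C = svals[-1], svals[0]                  # sigma_min, sigma_max
assert c > 0                                # injectivity <=> sigma_min > 0

for _ in range(1000):
    x = rng.standard_normal(3)
    nx, nTx = np.linalg.norm(x), np.linalg.norm(T @ x)
    # sandwich bounds (tiny slack for floating point)
    assert c * nx <= nTx * (1 + 1e-12) and nTx <= C * nx * (1 + 1e-12)
print("c*||x|| <= ||Tx|| <= C*||x||, so the two norms are equivalent")
```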
|
|general-topology|
| 0
|
Convergence in distribution absolute value of random variables.
|
How can one prove, from the definition of convergence in distribution, the following fact: Let $X_{1}, X_{2}, \ldots$ and $X$ be real random variables with respective distribution functions $F_{X_1}, F_{X_2}, \ldots$ and $F_{X}$ . If $X_{n}$ converges in distribution to the random variable $X$ , then $|X_{n}|$ converges in distribution to $|X|$ . We say that a sequence of random variables $X_{1}, X_{2}, \ldots$ converges in distribution to $X$ if and only if $ \displaystyle \lim_{n \rightarrow \infty} F_{X_n}(x)=F_{X}(x)$ for every $x \in \mathbb{R}$ at which $F_X$ is continuous. I will be very grateful for the tips.
|
Going by the comment, \begin{align*} \lim_{n \rightarrow \infty} F_{\vert X_n \vert } (x) &= \lim_{n \rightarrow \infty} P(\vert X_n \vert \leq x) \\ &= \lim_{n \rightarrow \infty} P(-x \leq X_n \leq x) \\ &= \lim_{n \rightarrow \infty} \left\{ P(X_n \leq x) - P(X_n < -x) \right\} \\ &= F_X(x) - F_X(-x) = F_{\vert X \vert}(x), \end{align*} where the fourth equality follows by the algebraic properties of limits, valid whenever $x \ge 0$ and both $x$ and $-x$ are continuity points of $F_X$ .
|
|probability-theory|convergence-divergence|random-variables|
| 0
|
Are these test paper questions as unclear and ambiguous as I think they are?
|
The Local Education Authority here in Wales are running a free voluntary course online for maths dunces like me to help us to help our 7-14 year olds with maths. There's a free optional paper leading to a meaningless qualification, so I thought "why not", and starting to take a look at it... Not going to post the actual paper, but here's a link to the course: https://www.agored.cymru/Units-and-Qualifications/Qualification/126848 Numbers (Level:Two) HD42CY038 I've posted the questions further down, but here's where I'm getting stuck: In question 1b, we do not know the intersection of the number of people who did not want both vegetables AND sauce, so how can we calculate the total number that DID want both? (Note that in any case, it's not asking for the fraction of the total guests, just those that opted for chicken, and of that subset, just those that wanted sauce AND veg, not those that wanted sauce and those that wanted veg.). I asked the online tutor, who hadn't noticed this before
|
Question 1b. You are right, you don't have enough information. Perhaps the examiners expect you to make an assumption like no one would skip both veg and sauce. You could provide a range depending on the possible intersection. Questions 1, 2, and 3 not leading to consistent numbers... I would say that that is by design, and to answer each question independently of the details from the earlier questions. Question 4. everyone will be charged for food regardless of whether they eat it. While not explicit, the venue fee is independent of the number of guests.
|
|soft-question|question-verification|
| 0
|
Are these test paper questions as unclear and ambiguous as I think they are?
|
The Local Education Authority here in Wales are running a free voluntary course online for maths dunces like me to help us to help our 7-14 year olds with maths. There's a free optional paper leading to a meaningless qualification, so I thought "why not", and starting to take a look at it... Not going to post the actual paper, but here's a link to the course: https://www.agored.cymru/Units-and-Qualifications/Qualification/126848 Numbers (Level:Two) HD42CY038 I've posted the questions further down, but here's where I'm getting stuck: In question 1b, we do not know the intersection of the number of people who did not want both vegetables AND sauce, so how can we calculate the total number that DID want both? (Note that in any case, it's not asking for the fraction of the total guests, just those that opted for chicken, and of that subset, just those that wanted sauce AND veg, not those that wanted sauce and those that wanted veg.). I asked the online tutor, who hadn't noticed this before
|
This is how I think about this: In question 1b, we do not know the intersection of the number of people who did not want both vegetables AND sauce, so how can we calculate the total number that DID want both? I asked the online tutor, who hadn't noticed this before, and he said "Paper can't be changed, write 'unsolveable' and you'll get the mark". You are right about this. You do not have the necessary information to solve this. The answer depends on the intersection of the number of people who did not want both vegetables AND sauce In question 2C, we are told that there are 200 guests. In question 1b we were told that 1/4 of the guests chose the salmon meal, but in question 3, we are told that 40 guests had salmon. Meaning that 10 guests didn't have salmon. In question 3, they do not say that 40 guests will have salmon. They just provided the cost for 40 meals of salmons. This doesn't have to connect with the number in question 2C. In question 4, we are not told whether the cost for t
|
|soft-question|question-verification|
| 0
|
How to split a multi-variable ODE into multiple equations?
|
While thinking quite randomly, a friend of mine came up with the following ODE $$\dot x+\dot y=ax+by$$ Now we want to come up with an equivalent system of ODEs in explicit form, as those can always be found from what we know, but we haven't figured out how to come up with this, eg in the following format with appropriate mappings from the old to the new variables. \begin{align}\dot w&=f(w,z,t)\\\dot z&=g(w,z,t)\end{align} So my question now is: Given the first ODE, can we transform it into an explicit system of ODEs, and if so how? Please don't just throw the transformed version at me, because I want to understand the general / specific approach needed here. We have thought about a few possible variations, but found no viable transformation. The only one that would technically work would lead to only two equations and three variables being needed, eg you could replace $\dot x=k,\dot y=ax+by-k$ but that would require an extra variable that we would like to avoid.
|
If the ODE is considered as an underdetermined system of equations $$ (1,1)\left(\begin{array}{c}x'\\y'\end{array}\right)=(a,b)\left(\begin{array}{c}x\\y\end{array}\right) $$ then the derivatives vector with the least Euclidean norm will verify $$ \left(\begin{array}{c}x'\\y'\end{array}\right) = (1,1)^\dagger(a,b)\left(\begin{array}{c}x\\y\end{array}\right) $$ where $(1,1)^\dagger=\frac{1}{2}\left(\begin{array}{c}1\\1\end{array}\right)$ is the pseudo-inverse of $(1,1)$ , so $$ \begin{align*} x'&=\frac{1}{2}(ax+by),\\ y'&=\frac{1}{2}(ax+by).\\ \end{align*} $$ So $x$ will be equal to $y$ if $x(0)=y(0)$ .
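The pseudo-inverse step can be checked numerically (a sketch with NumPy; $a, b$ are sample values):

```python
import numpy as np

A = np.array([[1.0, 1.0]])                  # the row vector (1, 1)
A_dag = np.linalg.pinv(A)
assert np.allclose(A_dag, [[0.5], [0.5]])   # (1,1)^dagger = (1/2)(1,1)^T

a, b = 3.0, -2.0
M = A_dag @ np.array([[a, b]])              # coefficient matrix of the derived system
# x' = y' = (a x + b y)/2, i.e. both rows of M are (a/2, b/2)
assert np.allclose(M, [[a / 2, b / 2], [a / 2, b / 2]])
print(M)
```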
|
|ordinary-differential-equations|systems-of-equations|
| 0
|
Alternative method of evaluating $\int_0^{\frac{\pi}{2}} \sin ^{2 n} x \ln (\tan x) d x $?
|
LATEST EDITION Glad to share with you that we had found below as an answer, in general, that $$\boxed{ \int_0^{\frac\pi2} {\sin^n x} \ln{(\tan x)} \,dx =\frac{\sqrt{\pi}}{4 \Gamma\left(\frac{n}{2}+1\right)} \Gamma\left(\frac{n+1}{2}\right) \left[\psi\left(\frac{n+1}{2}\right)-\psi\left(\frac{1}{2}\right)\right]}$$ where $n\in \mathbb N$ . After reading the post with the result $$\int_0^{\frac\pi2} {\sin^2{x} \ln{(\tan x)} \,dx}=\frac{\pi}{4}$$ I was attracted by its decency and wanted to generalise it as $$ I_n=\int_0^{\frac{\pi}{2}} \sin ^{2 n} x \ln (\tan x) d x \stackrel{x\mapsto\frac \pi 2- x}{=}- \int_0^{\frac{\pi}{2}} \cos ^{2 n} x \ln (\tan x) d x =-J_n, $$ For convenience, I started with the second integral, $$ J_n=\int_0^{\frac{\pi}{2}} \cos ^{2 n} x \ln (\tan x) d x, $$ Letting $u=\tan x$ transforms the integral into $$ J_n=\int_0^{\infty} \frac{\ln u}{\left(1+u^2\right)^{n+1}} d u $$ Differentiating the following famous result w.r.t. $a$ by $n$ times $$ K(a)=\int_0^{\infty}
|
$$J_n=\int_0^{\pi/2}(\cos x)^{2n}\ln(\tan x)\ dx$$ which can be converted to, $$J_n=\frac{1}{4}\left[\frac{d}{ds}\int_0^1t^{s+1/2-1}(1-t)^{n-s+1/2-1}\ dt\right]_{s=0}$$ Which after the application of Beta Function turns out to be, $$J_n=\frac{1}{4\Gamma(n+1)}\left[\frac{d}{ds}\Gamma(n-s+1/2)\Gamma(s+1/2)\right]_{s=0}$$ Using the Digamma Function $\psi(z)\Gamma(z)=\Gamma'(z)$ we can simplify it as follows, (I would prefer to write it in terms of Harmonic Number $H_n$ ) $$J_n=\frac{\pi}{4}\frac{1}{2^{2n}}\binom{2n}{n}(H_n-2H_{2n})$$ where we have used $H_{n-1/2}=2H_{2n}-H_n-2\ln 2$ The second question asks about the result, $$\left(\frac{d}{ds}\right)^n\frac{\ln(1+s)}{\sqrt{1+s}}\bigg|_{s=0}=\frac{(-1)^n(2n)!}{2^{2n}n!}(H_n-2H_{2n})$$ which can be rewritten as, $$\frac{\ln(1-x)}{\sqrt{1-x}}=\sum_{n=0}^{\infty}\frac{1}{2^{2n}}\binom{2n}{n}(H_n-2H_{2n})x^n$$ and is known, but I can't find the post with the derivation. (there should be one somewhere around the site, if anyone could link it
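The closed form can be spot-checked numerically with mpmath (the integrand's logarithmic singularity at $0$ is integrable, and tanh-sinh quadrature handles it):

```python
from mpmath import mp, quad, cos, tan, log, pi, binomial, mpf

mp.dps = 30

def H(n):
    # harmonic number H_n
    return sum(mpf(1) / k for k in range(1, n + 1))

for n in (1, 2, 3):
    numeric = quad(lambda x: cos(x) ** (2 * n) * log(tan(x)), [0, pi / 4, pi / 2])
    closed = pi / 4 * binomial(2 * n, n) / 4 ** n * (H(n) - 2 * H(2 * n))
    assert abs(numeric - closed) < mpf(10) ** -10, (n, numeric, closed)
print("J_n matches (pi/4) C(2n,n)/4^n (H_n - 2 H_{2n}) for n = 1, 2, 3")
```

For $n=1$ the closed form gives $J_1 = -\pi/4$, consistent with the $\int_0^{\pi/2}\sin^2 x\,\ln(\tan x)\,dx = \pi/4$ result quoted in the question.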
|
|calculus|integration|definite-integrals|trigonometric-integrals|digamma-function|
| 0
|
A separation property
|
Let $(X,\tau)$ be a topological space. Does anyone know the name of the following somewhat unusual property of the topological space $(X,\tau)$ ? (It makes me think of the $T_0$ separation axiom.) For each $x\in X$ and for each finite subset $F$ of $X$ such that $x\not\in F$ , there exists a neighborhood $U_{x,F}$ of $x$ such that $U_{x,F}\cap F=\emptyset$ .
|
$T_0$ is too weak. Consider $\mathbb{R}$ with the topology generated by the basis consisting of intervals $(a, \infty)$ , where $a \in \mathbb{R}$ . Take $x = 0$ ; any neighborhood of $x$ must contain the interval $(0, \infty)$ . Therefore, if $F = \{1\}$ for instance, there is no neighborhood of $x$ disjoint from $F$ . As mentioned in the comments, the condition you describe is equivalent to $T_1$ , and you can easily show this. A $T_1$ space $X$ is such that for each $x \ne y \in X$ , there is a neighborhood $U$ of $x$ not containing $y$ , and a neighborhood $V$ of $y$ not containing $x$ . Assume $X$ is $T_1$ . Let $x \in X$ be arbitrary, and $F$ be a finite set with $x \notin F$ . For each $y \in F$ , select neighborhood $U_y$ of $x$ that misses $y$ . Since the collection $\{U_y\}_{y \in F}$ of neighborhoods of $x$ is finite, the intersection of its elements is a neighborhood of $x$ . And, that intersection misses each point of $F$ , since each point $y$ of $F$ is excluded from $U_y
|
|general-topology|
| 1
|
What does $sin^{-1}x+cos^{-1}y+cos^{-1}xy=π/2$ represent?
|
The question asks me to ascertain what $sin^{-1}x+cos^{-1}y+cos^{-1}xy=π/2$ represents. I tried adding them and then trying to simplify the expression, but the expression just keeps getting more and more jumbled and doesn't lead to anything.
|
Not getting anything special but this. From $\sin^{-1}x + \cos^{-1}y + \cos^{-1}xy = \pi/2$ we get $\cos^{-1}y + \cos^{-1}xy = \pi/2 - \sin^{-1}x = \cos^{-1}x$ , i.e. $\cos^{-1}xy = \cos^{-1}x - \cos^{-1}y$ . Using the formula $\cos^{-1}x - \cos^{-1}y = \cos^{-1}\left(xy + \sqrt{(1-x^2)(1-y^2)}\right)$ we get $\cos^{-1}xy = \cos^{-1}\left(xy + \sqrt{(1-x^2)(1-y^2)}\right)$ . Taking $\cos$ of both sides and solving gives $\sqrt{(1-x^2)(1-y^2)}=0$ , i.e. $x = \pm 1 \space \text{or} \space y= \pm 1$ . These candidate lines form the four sides of a square of side length $2$ , but taking $\cos$ can introduce extraneous solutions: substituting back, $y=1$ works for every $x\in[-1,1]$ and $x=-1$ works for every $y\in[-1,1]$ , while $x=1$ forces $y=1$ and $y=-1$ forces $x=-1$ . So the equation actually represents the two segments $y=1$ and $x=-1$ (two adjacent sides of that square).
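A quick numerical check, using the principal values from Python's `math` module, of which candidate lines actually satisfy the equation:

```python
import math, random

def lhs(x, y):
    return math.asin(x) + math.acos(y) + math.acos(x * y)

random.seed(0)
target = math.pi / 2

# y = 1 and x = -1 satisfy the equation for every admissible value of the other variable
for _ in range(100):
    t = random.uniform(-1, 1)
    assert abs(lhs(t, 1.0) - target) < 1e-12          # the segment y = 1
    assert abs(lhs(-1.0, t) - target) < 1e-12         # the segment x = -1

# but generic points on x = 1 and y = -1 do NOT
assert abs(lhs(1.0, 0.5) - target) > 0.1
assert abs(lhs(0.5, -1.0) - target) > 0.1
print("solution set: the segments y = 1 and x = -1")
```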
|
|trigonometry|
| 1
|
On an exercise on double integral
|
Let $D=\{(x,y) \in \mathbb{R}^2 \mid \, \frac{x^2}{4}+\frac{y^2}{2} \le 1\} \setminus R$ , where $R$ is the rectangle with vertices $(\pm \sqrt{2},\pm 1)$ . I need to compute the integral $$ \iint_D x^2 dxdy $$ Our domain consists of 4 pieces. Actually, by symmetry it is sufficient to work with the northernmost and the easternmost pieces, and multiply the sum of the two integrals so obtained by $2$ . Let $D_1$ be the northernmost piece of our domain. Here, $x \in [-\sqrt{2},\sqrt{2}]$ , while $y$ varies starting from $1$ and following the ellipse. I think I need to use polar coordinates, but I have no clue on how to use them in such a case.
|
Let $S = \{(x,y) \in \mathbb{R}^2 \mid \frac{x^2}{4}+\frac{y^2}{2} \leq 1 \}$ . $ \begin{align} \iint \limits_{D} x^2 dxdy &= \iint \limits_{S} x^2 dxdy - \iint \limits_{R} x^2dxdy \\ \iint \limits_{S}x^2 dxdy &= \int _{-2}^{2}dx \int_{-\sqrt{2-\frac{x^2}{2}}} ^{\sqrt{2-\frac{x^2}{2}}} x^2 dy \\ &= \int _{-2}^{2}x^2 dx \int_{-\sqrt{2-\frac{x^2}{2}}} ^{\sqrt{2-\frac{x^2}{2}}} dy \\ &= 2 \int _{-2}^{2} x^2 \sqrt{2-\frac{x^2}{2}} dx \\ &= 2\sqrt{2} \int _{- \pi} ^0 (2 \cos \theta)^2 \sqrt{1-\cos ^2 \theta} \ d(2 \cos \theta) , (x =2 \cos \theta) \\ &= 16\sqrt{2} \int _{- \pi} ^0 \cos^2 \theta \sin^2 \theta d \theta \\ &= 4\sqrt{2} \int _{- \pi} ^0 \sin^2 2 \theta d \theta \\ &= 2 \sqrt 2 \int _{- \pi} ^0 1-\cos4\theta d \theta \\ &= 2\sqrt {2} \pi \\ \iint \limits_{R} x^2dxdy &= \int _{-\sqrt 2} ^{\sqrt 2} dx \int _{-1} ^{1} x^2 dy \\ &= 2\int _{- \sqrt 2} ^{\sqrt 2} x^2 dx \\ &=\begin{array}{r|l} \frac{2}{3}x^3 & _{- \sqrt 2} ^{\sqrt 2} \end{array} \\ &= \frac{8}{3} \sqrt{2} \end{align}
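The two pieces can be checked numerically with mpmath; subtracting then gives the final value $2\sqrt2\,\pi - \tfrac{8\sqrt2}{3}$:

```python
from mpmath import mp, quad, sqrt, pi, mpf

mp.dps = 25

# integral of x^2 over the ellipse x^2/4 + y^2/2 <= 1 (inner y-integral done by hand)
S = quad(lambda x: 2 * x**2 * sqrt(2 - x**2 / 2), [-2, 2])
assert abs(S - 2 * sqrt(2) * pi) < mpf(10) ** -12

# integral of x^2 over the rectangle [-sqrt(2), sqrt(2)] x [-1, 1]
R = quad(lambda x: 2 * x**2, [-sqrt(2), sqrt(2)])
assert abs(R - 8 * sqrt(2) / 3) < mpf(10) ** -12

D = S - R   # = 2*sqrt(2)*pi - 8*sqrt(2)/3
print(D)
```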
|
|real-analysis|integration|
| 1
|
How does hyperplane work?
|
The definition of a plane in $R^3$ is intuitive : $Ax + By + Cz = D$ , where $(A, B, C)$ is a normal vector to the plane and $D$ is some bias. I can visualize it dividing the space into two halves (because I'm a 3-dimensional creature and it's quite intuitive for me) , but how does the generalization of this equation to higher dimensions work? How do hyperplanes divide space into two halves? ANY explanation is appreciated.
|
Hyper Plane : The most intuitive & simple way is to consider it like this : $Ax + By + Cz - D = 0$ Let $M_2(x,y) = Ax + By - D $ Let $M_3(x,y,z) = Ax + By + Cz - D $ More generally , let $M_n(x,y,\cdots,z) = Ax + By + \cdots + Cz - D $ When we plug in the co-ordinates of a point in $R^2$ , $R^3$ , $R^n$ , then we will get the $M$ value, which will be either Positive or Zero or Negative — there is no other possibility! When $M$ is Zero , then it is "on" the hyper-plane. When $M$ is Positive , then it is "on one side" of the hyper-plane. When $M$ is Negative , then it is "on the other side" of the hyper-plane. Thus we have 2 "halves" when we check the sign of $M$ . When we take $D=0$ , then we can see that the zero vector will give zero $M$ automatically , hence it means those hyper-planes are passing through the origin. Hyper Surface : The same intuition applies for non-linear cases too , where we will get hyper surfaces. When we have $x^2+y^2-2xy+100=0$ , we can make it $M(x,y)=x^2+y^2-2xy+100
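The sign test generalizes verbatim to any dimension; here is a small sketch in NumPy (the weight vector and bias are arbitrary examples):

```python
import numpy as np

w = np.array([1.0, -2.0, 3.0, 0.5])   # the normal vector (A, B, C, ...) in R^4
D = 1.0

def side(x):
    """Return -1, 0, or +1 depending on which side of the hyperplane x lies."""
    return np.sign(np.dot(w, x) - D)

on_plane = np.array([1.0, 0.0, 0.0, 0.0])      # w . x = 1 = D
assert side(on_plane) == 0
assert side(np.array([10.0, 0, 0, 0])) == 1    # positive half-space
assert side(np.array([-10.0, 0, 0, 0])) == -1  # negative half-space
print("every point falls into one of the two halves, or onto the plane itself")
```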
|
|linear-algebra|intuition|machine-learning|
| 0
|
How does hyperplane work?
|
The definition of a plane in $R^3$ is intuitive : $Ax + By + Cz = D$ , where $(A, B, C)$ is a normal vector to the plane and $D$ is some bias. I can visualize it dividing the space into two halves (because I'm a 3-dimensional creature and it's quite intuitive for me) , but how does the generalization of this equation to higher dimensions work? How do hyperplanes divide space into two halves? ANY explanation is appreciated.
|
Perhaps the simplest case will help. In two dimensions a hyperplane is a line. The easiest line to think about in the usual coordinate system is the $x$ axis. That clearly divides the plane into the two regions above and below: $y> 0$ and $y < 0$ . In three dimensions the $xy$ plane divides space into the regions $z>0$ and $z<0$ . You can see the algebraic analog for any number of coordinates, even though it's hard to "see" above and below geometrically. Then remember that all the hyperplanes behave the same way, since that behavior does not depend on how you set up a coordinate system.
|
|linear-algebra|intuition|machine-learning|
| 0
|
push forward of differential form/ integration over fiber
|
It is elementary that differential forms can be pulled back via a smooth map between manifolds. However, I was reading a paper and came across a construction about push forward of a differential form via a submersion which I didn't fully understand. The paper pointed to Differential Forms in Algebraic Topology by Bott and Tu for reference. However, since I have little background in algebraic topology, I would like to know if anyone can show me a more detailed explanation, or point me to some references. Below is the construction as described in the paper: If $f: X\rightarrow Y$ is a submersion from an oriented manifold of dimension $n$ to an oriented manifold of dimension $m \leq n$ . Then the fibers are manifolds of dimension $r=n-m$ . So far this is OK, and it continues: Integration over the fibers gives a map $f_*: D^p(X)\rightarrow D^{p-r}(Y)$ defined as follows. Any $p$ -form $\phi$ on $X$ with compact support can be written $\phi = \psi \wedge f^*\omega$ , where $\psi$ is an $r$
|
For the record: I am now using the following reformulation of the definition of fiber-wise integration, where a unique decomposition appears. I will formulate it for fiber-bundles: Let $\pi: E\to B$ be a fiber bundle with fiber $F$ , and let $\omega\in\Omega^m(E)$ . Let $VE:=\ker(d\pi)\subset TE$ be the vertical subbundle . Suppose that $HE\subset TE$ is a horizontal subbundle ; that is, $TE=VE\oplus HE$ . Given $b\in B$ , let $U\subset B$ be a coordinate neighborhood of $b$ . We denote the coordinate functions by $y^i:U\to \mathbb{R}$ , $i\in\{1,\dotsc,n\}$ . Pullbacks along $\pi$ of the coordinate forms $d y^I = d y^{i_1} \dotsb d y^{i_k}$ for ordered multi-indices $I=\{i_1<\dotsb<i_k\}$ form a basis of $\Lambda^k_p(HE^*)$ for each $p\in\pi^{-1}(U)$ . Since $\Lambda^m_p(T^*E) = \oplus_{i+j=m}\Lambda^i_p(VE^*)\wedge\Lambda^j_p(HE^*)$ , there exists a unique $(\omega_I)_p\in\Lambda^{m-k}_p(VE^*)$ such that $$ \omega_p = \sum_I (\omega_I)_p \wedge \pi^*(d y^I)_p.\tag{1} $$ Since $\omega_I$ are coefficients
|
|algebraic-topology|differential-forms|smooth-manifolds|
| 0
|
How do I verify that a set is an affine subspace
|
I am having trouble with this exercise in one of my text books Let $\mathbf{x, y}\in\mathbb{R}^4$ , and let $L$ be a set such that $L = \{\alpha \mathbf{x} +\beta\mathbf{y}: \alpha, \beta\in \mathbb{R}, \alpha+\beta=1\}$ Show that $L$ is an affine subspace of $\mathbb{R}^4$ I am quite stuck on this exercise. I know that if some vectors let's call them $l_1$ , and $l_2$ are in $L$ then so is any affine combination. I don't really know how to get further than that. Would you do something like $l_1 = \alpha_1\mathbf{x}_1+\beta_1\mathbf{y}_1$ and $l_2 =\alpha_2\mathbf{x}_2+\beta_2\mathbf{y}_2$ and put this into an affine combination. How would you then show that this affine combination is in $L$ ? Any help would be great.
|
First, an affine subspace is of the form $A+V$ , where $A$ is a point and $V$ a linear subspace. Here, with $A:=x$ and $V:=\mathbb R (y-x)$ , it is not difficult to verify that $L=A+V$ , using $\alpha + \beta = 1$ : let $\beta \in \mathbb R$ and $\alpha=1-\beta$ . Then $\alpha x +\beta y=(1-\beta)x+\beta y=x+\beta (y-x)\in A+V$ , and conversely every element $x+\beta(y-x)$ of $A+V$ is of this form, so it lies in $L$ .
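A quick numerical illustration that $L = x + \mathbb{R}(y - x)$, with arbitrary sample vectors in $\mathbb{R}^4$:

```python
import numpy as np

rng = np.random.default_rng(2)
x, y = rng.standard_normal(4), rng.standard_normal(4)

for beta in (-3.0, 0.0, 0.5, 1.0, 7.0):
    alpha = 1.0 - beta
    p = alpha * x + beta * y                    # a point of L (alpha + beta = 1)
    assert np.allclose(p, x + beta * (y - x))   # = x + beta*(y - x), a point of x + R(y - x)
print("every alpha*x + beta*y with alpha + beta = 1 lies on the line x + R(y - x)")
```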
|
|linear-algebra|vector-spaces|
| 0
|
How to integrate $\int \frac{3x^{4}+5x^{3}+7x^{2}+2x+3}{(x-6)^{5}}dx$?
|
Q) How to Integrate $\int \frac{3x^{4}+5x^{3}+7x^{2}+2x+3}{(x-6)^{5}}dx$ ? First of all let me tell what I think about this question. In my Coaching Institute, the chapter 'Integration' is over. This question came in my mind while I was solving the questions of 'Integration By Partial Fraction Decomposition' . Let me give two examples: Example 1) Let's integrate $\int\frac{x-5}{(x-7)^{2}}dx$ Now let me tell the solution of $\int\frac{x-5}{(x-7)^{2}}dx$ Let $I=\int\frac{x-5}{(x-7)^{2}}dx$ $\implies \frac{(x-5)}{(x-7)^{2}}=\frac{A}{(x-7)}+\frac{B}{(x-7)^{2}}$ $\implies (x-5)=Ax+(B-7A)$ Upon solving we get : $A=1, B=2$ $\implies I=\int\frac{1}{(x-7)}dx+\int\frac{2}{(x-7)^{2}}dx$ Finally, after this step, it is easy to solve. Now let me give the $2^{nd}$ example: Evaluate $ I_1=\int\frac{3x^{2}+2x+4}{(x-7)^{3}}dx$ Similarly we can integrate this expression by using Partial Fraction Decomposition. $\implies \frac{3x^{2}+2x+4}{(x-7)^{3}}=\frac{A}{(x-7)}+\frac{B}{(x-7)^{2}}+\frac{C}{(x-7)^{3}
|
Hint: in this particular case there is a shortcut. Substitute $y=x-6$ . Then $$\int \frac{3x^{4}+5x^{3}+7x^{2}+2x+3}{(x-6)^{5}}dx= \int \frac{3(y^4+24y^3+216y^2+864y+1296)+5(y^3+18y^2+108y+216)+7(y^2+12y+36)+2(y+6)+3}{y^{5}}dy$$ Divide each term by $y^5$ for easy integration.
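The substitution can be verified mechanically. The sketch below expands the shifted numerator with plain integer arithmetic (a general Taylor-shift helper, no libraries beyond the standard `math.comb`):

```python
from math import comb

def shift_poly(coeffs, h):
    """Given p(x) = sum coeffs[k] * x^k, return the coefficients of p(y + h)."""
    n = len(coeffs)
    out = [0] * n
    for k, c in enumerate(coeffs):
        for j in range(k + 1):          # (y + h)^k = sum_j C(k, j) * h^(k-j) * y^j
            out[j] += c * comb(k, j) * h ** (k - j)
    return out

# numerator 3x^4 + 5x^3 + 7x^2 + 2x + 3, written lowest degree first
num = [3, 2, 7, 5, 3]
shifted = shift_poly(num, 6)   # coefficients of the numerator in the variable y = x - 6
# -> [5235, 3218, 745, 77, 3], i.e. 3y^4 + 77y^3 + 745y^2 + 3218y + 5235
```

After dividing by $y^5$, each term integrates immediately as a power of $y$.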
|
|calculus|integration|indefinite-integrals|
| 1
|
inequality $|x + 3\eta^2| \geq |x| + |\eta|^2$
|
I'm learning the basics of Fourier analysis. Reading a book, I've found this inequality: "If $x\geq -3$ and $|\eta | >2$ we have $|x + 3\eta^2| \geq |x| + |\eta|^2$ ." I've tried to prove this inequality by applying the triangle inequality, but I failed. Can you help me, please?
|
Under your hypothesis, the inequality you want to prove is equivalent to $$x+3\eta^2\ge|x|+\eta^2$$ i.e. $$2\eta^2\ge|x|-x,$$ which is obvious if $x\ge0$ , and is a consequence of $x+\eta^2>0$ if $x\le0$ .
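This is not a proof, but the inequality on the stated region is easy to sanity-check by random sampling (the sampling bounds are arbitrary choices):

```python
import random

# Spot-check |x + 3*eta^2| >= |x| + eta^2 on the region x >= -3, |eta| > 2.
random.seed(0)
ok = True
for _ in range(10000):
    x = random.uniform(-3, 100)
    eta = random.choice([-1, 1]) * random.uniform(2.0001, 50)
    if abs(x + 3 * eta ** 2) < abs(x) + eta ** 2 - 1e-9:
        ok = False
```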
|
|analysis|inequality|fourier-analysis|triangle-inequality|oscillatory-integral|
| 0
|
Reference question for a generalization of vertex coloring
|
I am wondering whether there is a standard convention for the following generalization of the vertex coloring. An $n$ -diameter coloring is a vertex coloring such that vertices between which there exists a path of length $\leq n$ have distinct colors. So the usual coloring would be $1$ -diameter coloring.
|
It's usually called a distance colouring. See the survey by F. Kramer and H. Kramer, https://www.sciencedirect.com/science/article/pii/S0012365X0700386X .
|
|graph-theory|coloring|
| 1
|
Integral of $\int_{b-a}^{a} \frac{1}{x} \sqrt{a^2-(b-x)^2}dx$
|
I am solving the integral $$\int_{b-a}^{a} \frac{1}{x} \sqrt{a^2-(b-x)^2}dx$$ which is related to the magnetic flux of toroid with circular cross section. I got this integration but have no idea how to compute it...
|
First apply King's rule (the reflection $x \mapsto (b-a)+a-x = b-x$ , which leaves the integral unchanged) to get: $\int_{b-a}^{a} \frac{1}{b-x} \sqrt{a^2-x^2}\, dx$ . Let $x = a\sin u$ and $dx = a\cos u\,du$ , so $\sqrt{a^2-x^2}=a\cos u$ . You get $\int \frac{a^2\cos^2 u}{b-a\sin u}\,du$ (with the limits transformed accordingly). I think you can proceed from here.
|
|calculus|integration|
| 0
|
Group extension analogous to the symmetric group.
|
Recently, My Professor taught us about group extension. It is the following: A group $G$ is an extension of $Q$ by $N$ if we have the following short exact sequence: $1 \rightarrow N \rightarrow G \rightarrow Q \rightarrow 1$ . After learning this definition I observed that for $n \geq 3$ , the symmetric group on $n$ letters $S_{n}$ is an extension of $\mathbb{Z_{2}}$ by the alternating group on $n$ letters $A_{n}$ , that is, we have an exact sequence: $1 \rightarrow A_{n} \rightarrow S_{n} \rightarrow \mathbb{Z_{2}} \rightarrow 1$ . From here, I came up with the following question that I am completely stuck at. If $G$ is any nontrivial group, does there exist a group $H$ that is an extension of a group $Q$ by $G$ such that $N \unlhd H$ implies $N \subseteq G$ ? (Here, in this question, I have assumed that $H \neq G$ .) Please help me.
|
This is not always the case, for instance $G=S_n$ for $n\geq 7$ does not admit such an extension, and generally no complete group has this property: Suppose for a contradiction that $G$ is a complete group that also arises as a unique maximal normal subgroup of some group $H$ . As $H$ acts on $G$ by conjugation, we have a homomorphism $\phi:H \to \mathrm{Aut}(G)$ . Since every automorphism of $G$ is inner, this map cannot be injective. Hence, its kernel $C_H(G)$ , the centralizer of $G$ in $H$ is nontrivial. By assumption, it must therefore be contained in $G$ already and thus coincide with its center $Z(G)$ . But this is a contradiction since $G$ is also centerless. Both of these assumptions ( $Z(G)=1$ and $\mathrm{Aut}(G)=\mathrm{Inn}(G)$ ) are evidently essential for this argument to work: The alternating groups $A_n$ are centerless but sit inside $S_n$ , while $G=C_2$ has no outer automorphisms and sits inside $C_4$ as unique maximal subgroup. I do not know for which groups there i
|
|abstract-algebra|group-theory|normal-subgroups|group-extensions|
| 1
|
In calculus, if $\frac{dy}{dx}$ is not in fact a fraction, is the equation below for geometric brownian motion technically incorrect?
|
Consider the following stochastic differential equation for geometric Brownian motion: $$ {\displaystyle dS_{t}=\mu S_{t}\,dt+\sigma S_{t}\,dW_{t}} $$ I was reading on Wikipedia about geometric Brownian motion ( https://en.wikipedia.org/wiki/Geometric_Brownian_motion ) and this question popped into my mind. To be clear, I understand the interpretation of the above equation, but is it technically correct? Clearly, if they weren't talking about derivatives (that is not in the limit as $dt$ goes to $0$ ), then I'd see no problem; but it looks like they simply multiplied both sides of the "proper equation" by $dt$ . By the "proper equation" I mean this: $$ {\displaystyle \frac{dS_{t}}{dt}=\mu S_{t}+\sigma S_{t}\,\frac{dW_{t}}{dt}} $$ Am I missing something?
|
As suggested in the comments, the paths of $W_t$ (and also $S_t$ ) are almost surely nowhere differentiable, so the notation $\frac{dW_t}{d t}$ is meaningless in this context. The SDE as written is simply notational sugar for: $$S_t = S_0 + \int_0^t \mu S_r dr + \int_0^t \sigma S_r dW_r$$
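The integral form is also what one discretizes in practice. Below is a minimal Euler–Maruyama sketch (the parameter values $\mu=0.05$, $\sigma=0.2$, and the path/step counts are arbitrary choices); as a sanity check, the sample mean of $S_1$ should be close to $S_0 e^{\mu}$:

```python
import math
import random

# Euler-Maruyama discretization of  S_t = S_0 + int_0^t mu*S dr + int_0^t sigma*S dW.
random.seed(42)
mu, sigma, s0 = 0.05, 0.2, 1.0
T, steps, paths = 1.0, 100, 10000
dt = T / steps
total = 0.0
for _ in range(paths):
    s = s0
    for _ in range(steps):
        # dW is approximated by a N(0, dt) increment
        s += mu * s * dt + sigma * s * math.sqrt(dt) * random.gauss(0.0, 1.0)
    total += s
mean_s1 = total / paths   # Monte Carlo estimate of E[S_1]
```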
|
|calculus|probability-theory|stochastic-calculus|brownian-motion|stochastic-differential-equations|
| 1
|
Evaluating $\sum _{k=0}^{\infty }\:\frac{x^k}{k!\cdot \left(2+k\right)}$
|
My work so far was noticing that it is similar to the MacLaurin expansion of $e^x$ but I am stuck
|
Let $f(x)=\sum\limits_{k=0}^{\infty}\frac{x^k}{k!\cdot (k+2)}$ , then $g(x)=x^2 \cdot f(x) = \sum\limits_{k=0}^{\infty}\frac{x^{k+2}}{k!\cdot (k+2)}$ $$g'(x)=\sum\limits_{k=0}^{\infty}\frac{x^{k+1}}{k!}=xe^x$$ Now it remains to integrate this equation, solve for $f$ , and check the domains on which these operations (termwise differentiation, division by $x^2$ ) are valid. I hope you can handle this, but if not, write where you encounter difficulties.
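Carrying the hint through (this completion is mine, not the answer's): $g'(x)=xe^x$ gives $g(x)=(x-1)e^x+C$, and $g(0)=0$ forces $C=1$, so for $x\neq 0$ one expects $f(x)=\dfrac{(x-1)e^x+1}{x^2}$. A quick numerical check of this candidate closed form:

```python
import math

def f_series(x, terms=60):
    # partial sum of  sum_{k>=0} x^k / (k! * (k + 2))
    total, fact = 0.0, 1.0          # fact holds k!
    for k in range(terms):
        total += x ** k / (fact * (k + 2))
        fact *= k + 1
    return total

def f_closed(x):
    return ((x - 1.0) * math.exp(x) + 1.0) / x ** 2

max_err = max(abs(f_series(x) - f_closed(x)) for x in (-3.0, -1.0, 0.5, 1.0, 2.0, 4.0))
```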
|
|sequences-and-series|taylor-expansion|
| 1
|
Practical fast primality test
|
I want to test the primality of some numbers. To do this I have tried PARI/GP and from the command line: isprime(17979859685352166649444960043583743435092156814420614714121915607179025671009227830482723258842041715425952898332925254446576490157062407486032589055872280702183643214632727483725783812977527538736774722549223552417771522858159684194785865804626071815326334013904933241586962748732368114629800029243589590149615238585275829325800947079414864439109261765420730775569684361272728950583105975836070277267187364691756683072539090213230533714301881183047858987112752034347717370007248725738868806374205197244444632844433887276302836646665280523675417763136615852629837330249936814786810730988314985125146895757608156265122365541724141695817557264590920266910259702107578093282656174106982780146997920758825755743495056876300139463519504606695397721560034371516930040137713138029593508911752519848571360505289382912101561098109186411919271631390671631954827297706598339662327182105852982435452
|
Mathematica returns the answer in milliseconds: Timing[PrimeQ[179...761]] {0.055672, True}
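For context, PARI/GP's `isprime` attempts a full primality proof, whereas `PrimeQ` reportedly runs a fast probabilistic/BPSW-style compositeness test, which is why it answers in milliseconds. A minimal Miller–Rabin sketch in Python (the witness count `rounds=20` is an arbitrary choice; the result is "probably prime", not a proof):

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin probabilistic primality test (random witnesses)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # write n - 1 = d * 2^s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False   # a is a witness: n is composite
    return True
```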
|
|elementary-number-theory|math-software|primality-test|
| 0
|
how to prove the following equation $\sum_{i=1}^{i=n}\prod_{j=1,j\neq{i}}^{j=n}{\frac{1}{x_{i}-x_{j}}}=0$
|
$\sum_{i=1}^{i=n}\prod_{j=1,j\neq{i}}^{j=n}{\frac{1}{x_{i}-x_{j}}}=0$ The product on the denominator is what I derive from the derivative of an n degree polynomial with different solutions $x_{1}$ to $x_{n}$ . And the sum seems to be zero. I can prove the equation when n is 2 or 3 but I have no idea when n is larger. When n is 3, I know that $\frac{1}{(x_{1}-x_{2})(x_{1}-x_{3})}$ can break down to $\frac{1}{(x_{1}-x_{2})(x_{2}-x_{3})}-\frac{1}{(x_{1}-x_{3})(x_{2}-x_{3})}$ , consequently proving the equation. I will be very grateful if someone can help me solve this problem or give me some hint.
|
Assume that the $x_i$ are pairwise different (otherwise the LHS is meaningless). Then, by Lagrange's interpolation formula, $$1=\sum_{i=1}^n\prod_{j\ne i}\frac{x-x_j}{x_i-x_j}.$$ Taking the $x^{n-1}$ coefficient of both sides recovers the desired identity.
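The identity can be checked exactly for random distinct points using rational arithmetic (the sample points below are arbitrary):

```python
from fractions import Fraction
import random

def lagrange_sum(xs):
    """sum_i prod_{j != i} 1/(x_i - x_j), computed exactly with Fractions."""
    total = Fraction(0)
    for i, xi in enumerate(xs):
        term = Fraction(1)
        for j, xj in enumerate(xs):
            if j != i:
                term /= (xi - xj)
        total += term
    return total

random.seed(1)
xs = [Fraction(v) for v in random.sample(range(-50, 50), 7)]  # pairwise distinct
value = lagrange_sum(xs)   # should be exactly 0 for n >= 2
```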
|
|calculus|lagrange-interpolation|
| 1
|
Is $\prod_{i\in I}x_i$ equal to $1-\prod_{i\in I}(1-x_i)$?
|
Are these 2 equations equivalent? If so how can I prove it, if not how can I disprove it? $CI(t)=\prod_{t_i $CI(t)=1-\prod_{t_i
|
Just try it: $$ab \stackrel{?}{=} 1 - (1-a) \cdot (1-b)$$ Well: $1 - (1-a) \cdot (1-b) = 1 - (1 - a - b + ab) = a + b - ab \ne ab$ in general (equality holds only when $a+b=2ab$ ). So, no: the two expressions are not equal.
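A concrete numerical counterexample (the values are arbitrary):

```python
# With a = b = 1/2:  prod x_i = 1/4, while 1 - prod(1 - x_i) = 3/4.
a = b = 0.5
lhs = a * b
rhs = 1 - (1 - a) * (1 - b)
```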
|
|products|
| 0
|
How many ways are there to distribute $19$ distinct objects into $3$ distinct boxes such that no two boxes contain same number of element
|
How many ways are there to distribute $19$ distinct objects into $3$ distinct boxes such that no two boxes contain same number of element,i.e, each boxes have distinct number of elements. No empty boxes allowed. My work: I chose using principle of inclusion exclusion principle and generating functions , so All - at least two boxes have same number of element All : Stirling number of second kind $3! \times S(19,3)=3! \times193448101$ or $\bigg[\frac{x^{19}}{19!}\bigg](e^x-1)^3$ by generating functions. Now, lets calculate two boxes have same number of element. Firstly, select which ones they are by $\binom{3}{2}$ .Because of these two boxes have same number of elements, they can take $(1,1),(2,2),(3,3),..,$ . So, if we select the elements for these two boxes at the same time, and after the selection we disperse selected elements between them equally. So, our E.G.F is equal to $$\binom{2}{1}\frac{x^2}{2!}+\binom{4}{2}\frac{x^4}{4!}+\binom{6}{3}\frac{x^6}{6!}+...+ =\sum_{n \geq1}\frac{x^{
|
You are free to place 18 balls anywhere you like, and almost free to place the last ball, except when it makes two box counts match or leaves a box empty. So the answer is between $3^{18}$ and $3^{19}$ . The exact answer is $$\sum_{\substack{a + b + c = 19 \\ a,b,c\text{ distinct} \\ a,b,c \text{ positive}}} {19 \choose a} {19 - a \choose b} $$ which is $773{,}698{,}050 \approx 3^{18} \times 1.997$ . You could try lifting the distinctness condition out of the sums to reduce this to a finite set of binomial sums, but I am not sure it is worth the effort.
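The sum is a one-line brute-force computation:

```python
from math import comb

# Count distributions of 19 distinct objects into 3 distinct boxes with
# pairwise-distinct, positive box sizes (a, b, c), boxes ordered.
total = 0
for a in range(1, 18):
    for b in range(1, 19 - a):
        c = 19 - a - b
        if len({a, b, c}) == 3:     # c >= 1 is guaranteed by the ranges
            total += comb(19, a) * comb(19 - a, b)
```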
|
|combinatorics|discrete-mathematics|solution-verification|generating-functions|
| 0
|
Are zeros of a cubic polynomial lipschitz wrt the constant term?
|
Consider the cubic equation $$ ax^3 + bx^2 + cx + d = 0\,.$$ My question is: do the zeros of this equation change on the same order as the change in its coefficients? I'm interested in $d$ specifically, so in this case, suppose $|d -d'| < \epsilon$ . Then, are the zeros of $$ax^3 + bx^2 + cx + d' = 0$$ within $\epsilon$ of the zeros of the first equation? Or $O(\epsilon)$ ? I know that the effect of $d$ is to raise or lower the graph of the cubic equation. I also know that to solve for the zeros of a cubic equation, one typically factors it into a product of linear and quadratic terms.
|
Consider $a=0$ , $b=1$ , $c=0$ , $d=0$ . You should be able to show that the Lipschitz condition (with respect to $d$ as you asked for) fails here since $ x = \sqrt{-d} $ is (as you must check) not Lipschitz there. However this statement can be salvaged: you can use the inverse function theorem to show that the "root-finding" function is not only Lipschitz but smooth near single-roots. (Note, my example had to single out a double-root.) On the other hand there's no reason to expect a global Lipschitz constant. One can only expect locally Lipschitz; the constant will have to differ as one considers different parts of the domain of coefficients.
|
|algebra-precalculus|polynomials|roots|
| 0
|
When does the (topological) cone $CA$ of an (open) subspace $A \subset X$ embed into the cone $CX$?
|
Let $X$ be a topological space. Let's say $X$ is Hausdorff, compact. Let $A \subset X$ be a subspace of $X$ , let's say $A$ is open (though I'm not sure it's relevant). The topological cone $CX$ of $X$ is defined as the quotient of $(X \times [0,1])/ (x,1)\sim(x',1)$ . We define $CA$ similarly. In fact, the cone operator is a functor, and so the inclusion $f : A \to X$ induces a continuous map $Cf : CA \to CX$ in the obvious way. It is clear that in our setting the map $Cf$ is injective, and thus if $A$ were compact then $Cf$ would be a topological embedding. My question relates to when $A$ is not compact, and more precisely is an open set. When is $Cf$ a topological embedding? Is it always a topological embedding? Any suggestions on this would be appreciated. Thank you
|
It is not, a typical counter-example is $X=[0,1]$ and $U=[0,1)$ . Namely, consider the quotient maps $\pi_U\colon U\times I\rightarrow CU$ and $\pi_X\colon X\times I\rightarrow CX$ . The subspace $\{(u,t)\in U\times I\vert t>u\}$ is open in $U\times I$ and contains $U\times\{1\}$ , hence its image $V\subseteq CU$ under $\pi_U$ is an open set containing the cone point. If the map were an embedding, there would be an open subset $W\subseteq CX$ s.t. $W\cap CU=V$ (here, I interpret $CU$ as a subset of $CX$ ). Then $\pi_X^{-1}(W)$ is an open subset of $I\times I$ containing $I\times\{1\}$ , hence containing a strip of the form $I\times(1-\varepsilon,1]$ for some $\varepsilon>0$ by the tube lemma. This implies that $\pi_X^{-1}(W)\cap(U\times I)=\pi_U^{-1}(V)$ contains the strip $U\times(1-\varepsilon,1]$ , a contradiction. [The intuition is the tube lemma. A neighborhood of the cone point in $CX$ contains all points with $t$ -coordinate sufficiently close to $1$ and this property is inherit
|
|general-topology|continuity|
| 1
|
Lower bound on the number of complete bipartite graphs to partition the edge set of $G$
|
Let $bp(G)$ be the number of complete bipartite graphs needed to partition the edge set of G. This means, $ \forall e \in E(G) $ , e is in exactly 1 complete bipartite graph. Now, how to prove bp(G) $ \geq \log_{2}(\chi(G)) $ ? Some things I thought of is that we can consider $ V_{i} $ the independence sets of G that form the $ \chi(H) $ colouring of $G$ . Now. $ \forall i, j, i \neq j $ , $ V_{i} $ and $ V_{j} $ share at least 1 edge. If we view $ V_{i} $ as a vertex each, we get a complete graph. Also, consider any 1 of the bipartite graph $(A \cup B, E)$ , the set of colors of the vertices in A is disjoint with the set of colors of the vertices in $B$ . But still I cannot get any progress.
|
We use induction on $\chi(G)$ . I'll let you handle the base cases yourself. Let $\chi(G) = k\geq 3$ and let \begin{equation*} C_i:=\{v \in V(G) \mid \text{ the colour used for $v$ is $i$ }\}, \end{equation*} and $\{C_1, C_2, \dots, C_{k} \}$ be the disjoint collection of colour classes of $G$ . Let $I = \{1, \dots, k\}$ be an indexing set for colours used in the graph. Let $\mathcal{H} = \{H_1, H_2, \dots, H_{n} \}$ be the minimum family of complete bipartite graphs covering $E(G)$ . By minimality of $\mathcal{H}$ , each $H_i \in \mathcal{H}$ must contain at least one edge. Choose $H_1 \in \mathcal{H}$ . Now, $V(H_1) \not\subseteq C_i$ for any $i$ , since any subset of $C_i$ is independent, and $H_1$ has an edge. Let $H_1 = (A, B)$ be the bipartition of $H_1$ . We can further say that the colours of vertices in $A$ and $B$ must be disjoint, since $H_1$ is a complete bipartite graph. More precisely, there exist nonempty $S, T \subseteq I$ such that $S \cup T = I, S \cap T = \var
|
|graph-theory|coloring|bipartite-graphs|
| 1
|
Is $\sigma(X+Y) \subseteq \sigma (X,Y)$?
|
Let $(\Omega,M,P)$ be a probability space, and let $ X_{1} $ and $ X_2 $ be two random variables. Is it true that $\sigma(X_1+X_2) \subseteq \sigma (X_1,X_2)$ ? How can we prove this? Someone told me that this is a conclusion of the Doob-Dynkin theorem. Doob-Dynkin's Theorem : Let $X=(X_1, X_2,...,X_n)$ be an $R^n$ -valued random variable, and $Y$ be an $R$ -valued random variable; then $Y$ is measurable w.r.t $\sigma(X)$ iff there exists a Borel measurable function $f$ on $R^n$ such that $Y=f(X)$ . In this way, we can define $f(x_1,x_2)=x_1+x_2$ , then $Y=f(X)$ , hence $Y$ is measurable w.r.t $\sigma(X)$ by Doob-Dynkin's theorem. But the question here is: why $\sigma(X)=\sigma(X_1,X_2)$ ?
|
We can check that $$\sigma(X)=\text{the smallest $\sigma$-algebra $\mathscr{A}$ such that $X$ is $\mathscr{A}$-measurable}.$$ Indeed, if $X$ is measurable with respect to a $\sigma$ -algebra $\mathscr{A}$ then $\mathscr{A}$ must contain all sets of the form $X^{-1}(A)$ , where $A$ is a Borel set. Since those sets form a $\sigma$ -algebra, the assertion follows. In the same spirit, we can also show that $$\sigma(X,Y)=\text{the smallest $\sigma$-algebra $\mathscr{B}$ such that $X$ and $Y$ are $\mathscr{B}$-measurable}. $$ In particular, $X+Y$ is measurable with respect to the $\sigma$ -algebra $\sigma(X,Y)$ (by Doob-Dynkin's theorem); hence $\sigma(X+Y)\subset\sigma(X,Y)$ . (Edited) As for your question, set $Z=(X,Y)$ . Then $Z$ is measurable if and only if $X$ and $Y$ are measurable. The forward implication follows from $$X=\pi_1\circ Z\qquad\text{and}\qquad Y=\pi_2\circ Z, $$ where $\pi_1$ and $\pi_2$ are coordinate functionals; $\pi_1(x,y)=x$ and $\pi_2(x,y)=y$ . Since these functiona
|
|probability-theory|measure-theory|random-variables|
| 0
|
Are zeros of a cubic polynomial lipschitz wrt the constant term?
|
Consider the cubic equation $$ ax^3 + bx^2 + cx + d = 0\,.$$ My question is: do the zeros of this equation change on the same order as the change in its coefficients? I'm interested in $d$ specifically, so in this case, suppose $|d -d'| < \epsilon$ . Then, are the zeros of $$ax^3 + bx^2 + cx + d' = 0$$ within $\epsilon$ of the zeros of the first equation? Or $O(\epsilon)$ ? I know that the effect of $d$ is to raise or lower the graph of the cubic equation. I also know that to solve for the zeros of a cubic equation, one typically factors it into a product of linear and quadratic terms.
|
The basic idea is that the roots are analytic functions of the coefficients when they are distinct. The discriminant of $a x^3 + b x^2 + c x + d$ is $-27 a^{2} d^{2}+18 a b c d -4 a \,c^{3}-4 b^{3} d +b^{2} c^{2}$ , and this is $0$ when there is a multiple root. For example, take $a = 1$ , $b = 0$ , $c = -3$ . At $d=2$ there is a simple root of $-2$ and a double root of $1$ . Here is a plot of the real roots as a function of $d$ . You will notice that as $d \to 2-$ two real roots collide, with the slope of the curve approaching $\pm \infty$ (so these roots are not Lipschitz functions of $d$ there). For $d > 2$ those two roots become complex.
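The discriminant computation for the answer's example ($a=1$, $b=0$, $c=-3$) is easy to verify with integer arithmetic:

```python
def cubic_discriminant(a, b, c, d):
    # discriminant of a*x^3 + b*x^2 + c*x + d, as given in the answer
    return (-27 * a**2 * d**2 + 18 * a * b * c * d
            - 4 * a * c**3 - 4 * b**3 * d + b**2 * c**2)

def p(x, d):
    return x**3 - 3 * x + d

# At d = 2:  x^3 - 3x + 2 = (x - 1)^2 (x + 2), a double root, so the
# discriminant vanishes; at d = 0 the three roots 0, +/-sqrt(3) are distinct.
disc_at_2 = cubic_discriminant(1, 0, -3, 2)
disc_at_0 = cubic_discriminant(1, 0, -3, 0)
roots_ok = (p(-2, 2) == 0 and p(1, 2) == 0)
```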
|
|algebra-precalculus|polynomials|roots|
| 0
|
How many ways are there to distribute $19$ distinct objects into $3$ distinct boxes such that no two boxes contain same number of element
|
How many ways are there to distribute $19$ distinct objects into $3$ distinct boxes such that no two boxes contain same number of element,i.e, each boxes have distinct number of elements. No empty boxes allowed. My work: I chose using principle of inclusion exclusion principle and generating functions , so All - at least two boxes have same number of element All : Stirling number of second kind $3! \times S(19,3)=3! \times193448101$ or $\bigg[\frac{x^{19}}{19!}\bigg](e^x-1)^3$ by generating functions. Now, lets calculate two boxes have same number of element. Firstly, select which ones they are by $\binom{3}{2}$ .Because of these two boxes have same number of elements, they can take $(1,1),(2,2),(3,3),..,$ . So, if we select the elements for these two boxes at the same time, and after the selection we disperse selected elements between them equally. So, our E.G.F is equal to $$\binom{2}{1}\frac{x^2}{2!}+\binom{4}{2}\frac{x^4}{4!}+\binom{6}{3}\frac{x^6}{6!}+...+ =\sum_{n \geq1}\frac{x^{
|
Obviously, all three boxes can't have the same number of items. So, PIE is very easy, and we get $\color{green}{3^{19} - 3\cdot2^{19}+ 3}$ without any restrictions. Now, suppose 2 boxes have the same no. of elements. There are $\color{red}3$ ways to select the pairs. Once you have the pair, suppose both have $k$ objects, $1\le k \le 9$ . Now, this can be done in $$\binom{19}{2k}\binom{2k}{k}$$ ways. Other objects must go into the remaining box (and it can't be empty, due to parity of 19). Summing, $$\sum_{k=1}^{9} \binom{19}{2k}\binom{2k}{k}$$ I don't see any useful trick to calculate this, other than brute forcing it. You should get $\color{red}{128996852}$ , hence the final answer is $$\boxed{\color{green}{3^{19} - 3\cdot2^{19}+ 3} - \color{red}{3\cdot 128996852}}$$ This is $773698050$ .
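The arithmetic in this answer is quick to verify:

```python
from math import comb

# Inclusion-exclusion count from the answer.
onto = 3**19 - 3 * 2**19 + 3                                     # onto distributions
pair_equal = sum(comb(19, 2 * k) * comb(2 * k, k) for k in range(1, 10))
answer = onto - 3 * pair_equal
```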
|
|combinatorics|discrete-mathematics|solution-verification|generating-functions|
| 0
|
Example of a finite projective plane which is not a translation plane
|
I've been studying finite projective geometry for several weeks and I came across the fact that the most studied planes are the translation planes. Is there any known example (of minimum order if possible) of a finite projective plane which is not a translation plane ?
|
All projective planes of order $\leq 8$ are Desarguesian (and hence translation planes). Up to isomorphism, there are four projective planes of order $9$ , among them the Desarguesian plane and a single further translation plane. Hence the smallest order of a projective plane which is not a translation plane is $9$ , and up to isomorphism, there are two such planes.
|
|combinatorics|discrete-mathematics|examples-counterexamples|projective-geometry|finite-geometry|
| 0
|
projective transformation that maps lines through a point to parallel lines
|
A construction is as follows: Given a fixed line $L$ and two fixed points $S,S'$ . For any point $P$ on the plane, let the line $PS'$ intersect $L$ at $Z$ . Draw the line through $S'$ parallel to $SP$ , intersecting the line $SZ$ at $P'$ . The map $P\mapsto P'$ maps lines through $S$ to parallel lines through $S'$ : the line $S'P'$ is parallel to $SP$ . By composing with a translation, we can assume $S=S'$ . then I try to describe the matrix of projective transformations on $\mathbb RP^2$ that fix the point $[0,0,1]$ , and fixes any line through $[0,0,1]$ . I verified that any projective transformation represented by $$\pmatrix{1&0&0\\0&1&0\\A&B&C}$$ where $A,B,C\in\Bbb R,C\ne0$ satisfies the condition. Is that all of them? To verify it satisfies the condition: Compute the adjugate matrix is $\left( \begin{array}{ccc} C & 0 & 0 \\ 0 & C & 0 \\ -A & -B & 1 \\ \end{array} \right)$ . Any line through $[0,0,1]$ is of the form $[u,v,0]$ . $$(u,v,0)\left( \begin{array}{ccc} C & 0 & 0 \\ 0 &
|
Since the projective transformation $T$ fixes $[0,0,1]$ , it is of the form $$\left( \begin{array}{ccc} a_{11} & a_{12} & 0 \\ a_{21} & a_{22} & 0 \\ a_{31} & a_{32} & C \\ \end{array} \right)$$ The adjugate matrix is $$\left( \begin{array}{ccc} a_{22} C &-a_{12}C & 0 \\ -a_{21}C& a_{11} C & 0 \\ a_{21} a_{32}-a_{22} a_{31} & a_{12} a_{31}-a_{11} a_{32} & a_{11} a_{22}-a_{12} a_{21} \\ \end{array} \right)$$ $T$ fixes the line $[1,0,0]$ , then $\mathrm{adj}(T)$ has a left-eigenvector $(1,0,0)$ $$(1,0,0)\left( \begin{array}{ccc} a_{22} C & a_{12} (-C) & 0 \\ a_{21} (-C) & a_{11} C & 0 \\ a_{21} a_{32}-a_{22} a_{31} & a_{12} a_{31}-a_{11} a_{32} & a_{11} a_{22}-a_{12} a_{21} \\ \end{array} \right)=\left(a_{22} C,-a_{12} C,0\right)$$ so $a_{12}=0$ . Similarly, $T$ fixes the line $[0,1,0]$ , then $a_{21}=0$ . Now the matrix of $T$ becomes $$\left( \begin{array}{ccc} a_{11} & 0 & 0 \\ 0 & a_{22} & 0 \\ a_{31} & a_{32} & C \\ \end{array} \right)$$ The adjugate matrix is $$\left( \begin{array}
|
|linear-algebra|eigenvalues-eigenvectors|projective-geometry|geometric-transformation|
| 1
|
A discussion on different versions of the Lebesgue Differentiation Theorem.
|
The simplest formulation of the Lebesgue Differentiation Theorem (LDT) (that I am aware of) is the following: LDT (Simplest formulation). Given a function $f \in L^1_{\text{loc}}(\mathbb R^n)$ , we have that $$ \lim_{r \to 0} \frac{1}{|B(x,r)|} \int_{B(x,r)} f(y) \, dy = f(x), $$ for almost every $x \in \mathbb R^n$ . In this post , it is presented a version of the above LDT for open sets $\Omega \subset \mathbb R^n$ , which is stated as follows: LDT (Simplest formulation for open sets). Let $\Omega \subset \mathbb R^n$ be an open set. Given $f \in L^1_{\text{loc}}(\Omega)$ , we have that $$ \tag{1} \lim_{r \to 0} \frac{1}{|B(x,r)|} \int_{B(x,r)} f(y) \, dy = f(x), $$ for almost every $x \in \Omega$ . $\color{red}{\textbf{QUESTION.}}$ There is one concern I have about the last formulation I've presented. More precisely, I am wondering about the "validity" of the integral $$ \int_{B(x,r)} f(y) \, dy. $$ To explain my point I start by recalling that $f$ is only defined on $\Omega$ . Ther
|
For every $x\in \Omega$ , there is $r(x)>0$ such that $B(x,r(x))\subset\Omega$ . Hence the quantity $\int_{B(x,r)}f$ is well defined once $r\le r(x)$ , and hence for every $x\in \Omega$ , the $\limsup_{r\to 0}\frac{1}{|B(x,r)|}\int_{B(x,r)}f$ and $\liminf_{r\to0}\frac{1}{|B(x,r)|}\int_{B(x,r)}f$ are well defined. Then the LDT for the open set $\Omega$ is the statement that for almost every $x\in\Omega$ , these two numbers agree. Added: For every $x\in\Omega$ , we define $\limsup_{r\to 0}\frac{1}{|B(x,r)|}\int_{B(x,r)}f$ in the usual way by $$ \lim_{r\to 0}\sup_{\rho\le r}\frac{1}{|B(x,\rho)|}\int_{B(x,\rho)}f. $$ The key point is that this quantity only depends on the values of $r$ arbitrarily close to zero, so in particular, the limsup at $x$ is the same as $$ \inf_{0<r\le r(x)}\sup_{\rho\le r}\frac{1}{|B(x,\rho)|}\int_{B(x,\rho)}f. $$ Note that since $f$ is only defined on $\Omega$ , if $\rho>r(x)$ , then we interpret $$ \int_{B(x,\rho)}f = \int_{B(x,\rho)\cap \Omega}f. $$
|
|real-analysis|functional-analysis|limits|functions|lebesgue-integral|
| 1
|
Is it possible to find when two cars will meet given this graph?
|
The problem is as follows: The figure from below shows the speed against time of two cars, one blue and the other orange. It is known that both depart from the same spot. Find the instant on seconds when one catches the other. The given alternatives are: $\begin{array}{ll} 1.&14\,s\\ 2.&16\,s\\ 3.&20\,s\\ 4.&25\,s\\ 5.&28\,s\\ \end{array}$ For this problem I attempted to do the "trick" using the areas behind the curves but I couldn't find the answer. So far I could only state the equations as this: (I'm using $v_{r}=\textrm{orange car}$ and $v_{b}=\textrm{blue car}$ $v_{r}=8$ $v_{b}=t-5$ Since $v=\dfrac{dx}{dt}$ Then: $\dfrac{dx}{dt}=8$ $x(t)=8t+c$ $x(0)=0\,, c=0$ $x(t)_{r}=8t$ $\dfrac{dx}{dt}=t-5$ $x(t)=\frac{t^2}{2}-5t+c$ $x(0)=0\, c=0$ $x(t)_{b}= \frac{t^2}{2}-5t$ So by equating both I could obtain the time isn't it?. $8t=\frac{t^2}{2}-5t$ $0=\frac{t^2}{2}-13t$ $0=t(t-26)$ So time would be $26\,s$ But it doesn't seem to be in any of the alternatives given. Could it be that I'm not g
|
The orange car started moving $5$ seconds before the blue car started. By the time the blue car started moving, the orange car had already covered a $40$ meter distance. Therefore, we can say that: Orange car distance $(d_o)=8x+40$ Blue car distance $(d_b)=\frac{x^2}{2}$ Now we make both the equations equal to each other, and solve for $x$ (time). $8x+40=\frac{x^2}{2} \\ 16x+80=x^2 \\ 0=x^2-16x-80 \\ 0=(x-20)(x+4) \\ x=20,-4$ Since we can't have $-4$ seconds, we have to go with $20$ seconds. However, this is not the final answer. The time $20$ seconds just tells us how long it took after the blue car started moving for both distances to become equal. In reality, the orange car had already started moving $5$ seconds earlier, so we have to count that time as well. Therefore, the final answer is $25$ seconds. If you wanted to take $t=0$ as the starting point for both cars (instead of $t=5$ ), then you would have to go with the advice of N.S. and write the blue car's velocity as a piecewis
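The final arithmetic, as a sketch (with $x$ measuring time since the blue car started, as in the answer):

```python
import math

# distances from the instant the blue car starts moving:
#   orange: d_o(x) = 8x + 40,   blue: d_b(x) = x^2 / 2
# Setting them equal gives x^2 - 16x - 80 = 0.
disc = 16**2 + 4 * 80
x_meet = (16 + math.sqrt(disc)) / 2   # positive root of the quadratic
t_meet = x_meet + 5                   # add back the orange car's 5 s head start
```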
|
|calculus|algebra-precalculus|physics|kinematics|
| 0
|
Alternative method of evaluating $\int_0^{\frac{\pi}{2}} \sin ^{2 n} x \ln (\tan x) d x $?
|
LATEST EDITION Glad to share with you that we had found below as an answer, in general, that $$\boxed{ \int_0^{\frac\pi2} {\sin^n x} \ln{(\tan x)} \,dx =\frac{\sqrt{\pi}}{4 \Gamma\left(\frac{n}{2}+1\right)} \Gamma\left(\frac{n+1}{2}\right) \left[\psi\left(\frac{n+1}{2}\right)-\psi\left(\frac{1}{2}\right)\right]}$$ where $n\in \mathbb N$ . After reading the post with the result $$\int_0^{\frac\pi2} {\sin^2{x} \ln{(\tan x)} \,dx}=\frac{\pi}{4}$$ I was attracted by its decency and wanted to generalise it as $$ I_n=\int_0^{\frac{\pi}{2}} \sin ^{2 n} x \ln (\tan x) d x \stackrel{x\mapsto\frac \pi 2- x}{=}- \int_0^{\frac{\pi}{2}} \cos ^{2 n} x \ln (\tan x) d x =-J_n, $$ For convenience, I started with the second integral, $$ J_n=\int_0^{\frac{\pi}{2}} \cos ^{2 n} x \ln (\tan x) d x, $$ Letting $u=\tan x$ transforms the integral into $$ J_n=\int_0^{\infty} \frac{\ln u}{\left(1+u^2\right)^{n+1}} d u $$ Differentiating the following famous result w.r.t. $a$ by $n$ times $$ K(a)=\int_0^{\infty}
|
Inspired by Miracle Invoker, I had just found the exact value of $$\boxed{A_n= \int_0^{\frac\pi2} {\sin^n x} \ln{(\tan x)} \,dx =\frac{\sqrt{\pi}}{4 \Gamma\left(\frac{n}{2}+1\right)} \Gamma\left(\frac{n+1}{2}\right) \left[\psi\left(\frac{n+1}{2}\right)-\psi\left(\frac{1}{2}\right)\right]}$$ and $$\boxed{ B_n=\int_0^{\frac\pi2} {\cos^n x} \ln{(\tan x)} \,dx =-\frac{\sqrt{\pi}}{4 \Gamma\left(\frac{n}{2}+1\right)} \Gamma\left(\frac{n+1}{2}\right) \left[\psi\left(\frac{n+1}{2}\right)-\psi\left(\frac{1}{2}\right)\right]}$$ where $n\in \mathbb N.$ Noticing that $$ A_n=\left.\frac{\partial}{\partial a}\left(A_n(a)\right)\right|_{a=0} $$ where \begin{aligned}A_n (a) & =\int_0^{\frac{\pi}{2}} \sin ^{n} x \tan ^a x d x \\ & =\int_0^{\frac{\pi}{2}} \sin ^{n+a} x \cos ^{-a} x d x \\ & =\frac{1}{2} B\left(\frac{n+a+1}{2}, \frac{1-a}{2}\right) \\ & =\frac{1}{2 \Gamma(\frac n2+1) } \Gamma\left(\frac{n+a+1}{2}\right) \Gamma\left(\frac{1-a}{2}\right) \end{aligned} Using logarithmic differentiation, we
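For small $n$ the boxed formula reduces to values that avoid the digamma function: $\psi(1)-\psi(1/2)=2\ln 2$ gives $A_1=\ln 2$, and $\psi(3/2)-\psi(1/2)=2$ gives $A_2=\pi/4$ (matching the result that started the question). A crude midpoint-rule check (the panel count is an arbitrary choice; midpoints avoid the endpoint singularities of $\ln\tan x$):

```python
import math

def A(n, panels=200000):
    """Midpoint-rule estimate of  int_0^{pi/2} sin(x)^n * ln(tan(x)) dx."""
    h = (math.pi / 2) / panels
    total = 0.0
    for i in range(panels):
        x = (i + 0.5) * h              # midpoint: never hits 0 or pi/2
        total += math.sin(x) ** n * math.log(math.tan(x))
    return total * h

a1 = A(1)   # expected approx ln 2
a2 = A(2)   # expected approx pi / 4
```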
|
|calculus|integration|definite-integrals|trigonometric-integrals|digamma-function|
| 1
|
Proving a multivariate normal distribution gets the maximum entropy when mean and covariance are given
|
I'm working on a homework question. The first part was: Given an unbounded one dimensional continuous random variable: $X\in\left(-\infty,\infty\right)$ , that satisfies: $\left\langle X\right\rangle =\mu,\;\left\langle \left(X-\mu\right)^{2}\right\rangle =\sigma^{2}$ Show that the distribution that maximizes entropy is Gaussian $X\sim N\left(\mu,\sigma^{2}\right)$ . I've solved this using Lagrange multipliers method. The next part is proving the same holds in the case of multivariate distributions. Generalize the previous part to a $k$ dimensional variable $X$ with given expectation value $\vec{\mu}$ and covariance matrix $\Sigma$ . I started the same way when I define the proper functional I wish to optimize: $$ F\left[f_{X}\left(\overline{x}\right)\right]=H\left(X\right)+\lambda\left(1-\intop_{\mathbb{R}^{k}}f_{X}\left(\overline{x}\right)d\overline{x}\right)+\sum_{i\in\left[k\right]}\varGamma_{i}\left(\mu_{i}-\intop_{\mathbb{R}^{k}}\overline{x}_{i}f_{X}\left(\overline{x}\right)d\ove
|
Let's compute the constrained functionals in terms of the dual variables, which might clarify things. The idea is almost completely illustrated via the computations for $$ A(\lambda, \Gamma, \Lambda) = \exp(\lambda-1) \int \exp( -\Gamma^\top x - (x-\mu)^\top \Lambda (x-\mu) )\mathrm{d}x. $$ Of course, this only makes sense if $\Lambda$ is positive definite, since otherwise the exponential blows up over the integration domain. If $\Lambda$ is positive definite, then there exists a positive definite matrix $\Lambda^{1/2}$ such that $(\Lambda^{1/2})^2 = \Lambda$ . Substitute $x = \Lambda^{-1/2}y + \mu$ to get $$ A \cdot \exp(1-\lambda) =\exp(-\Gamma^\top \mu) \int (\det \Lambda^{-1/2}) \exp( -\Gamma^\top \Lambda^{-1/2} y - y^\top y)\mathrm{d}y.$$ Let me write $v = -\Lambda^{-1/2} \Gamma/2$ . Then we have \begin{align} A \cdot \exp(1-\lambda) \exp(\Gamma^\top \mu) \sqrt{\det \Lambda} &= \int \exp( 2v^\top y - y^\top y)\mathrm{d}y\\ &= \int \exp( - (y-v)^\top (y-v) + \|v\|^2)\mathrm{d}y\\ &
|
|real-analysis|probability|statistics|optimization|information-theory|
| 0
|
Is $\langle x,y \rangle=\overline{\langle y,x \rangle}$?
|
Can anyone help me? Let $(X,\|\cdot\|)$ be a normed space that satisfies the parallelogram law. Define $\Phi : X\times X \rightarrow \mathbb{C}$ by $\Phi (x,y)=\dfrac{1}{4}[ \|x+y\|^2-\|x-y\|^2+i\|x+iy\|^2-i\|x-iy\|^2]$ . Prove that $\Phi(x,y)=\overline{\Phi(y,x)}$ . So far I have computed $\overline{\Phi(x,y)}=\dfrac{1}{4}[ \|x+y\|^2-\|x-y\|^2-i\|x+iy\|^2+i\|x-iy\|^2]$ .
|
A Hermitian inner product must have the conjugate symmetry $$ \langle x,y\rangle=\overline{\langle y,x\rangle}\,. $$ For $\Phi(x,y)=\langle x,y\rangle$ this is not hard to show by noticing that $$ \|x+iy\|^2=\|i(-ix+y)\|^2=|i|^2\|(-ix+y)\|^2=\|y-ix\|^2\,. $$ Likewise, $$ \|x-iy\|^2=\|-y-ix\|^2=\|y+ix\|^2\,. $$ Then \begin{align} \Phi (x,y)&=\dfrac{1}{4}[ \|x+y\|^2-\|x-y\|^2+i\|x+iy\|^2-i\|x-iy\|^2]\\ &=\dfrac{1}{4}[ \|x+y\|^2-\|x-y\|^2+i\|y-ix\|^2-i\|y+ix\|^2] \end{align} from which $\Phi(x,y)=\overline{\Phi(y,x)}$ follows.
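To see the computation concretely, here is a quick numerical check (my addition), with the Euclidean norm on $\mathbb C^2$ standing in for a norm satisfying the parallelogram law:

```python
# Numerical check of the conjugate symmetry Φ(x, y) = conj(Φ(y, x)) for the
# polarization formula, using the Euclidean norm on C^2 (which satisfies the
# parallelogram law).  Vectors are plain tuples of Python complex numbers.

def norm_sq(v):
    return sum(abs(c) ** 2 for c in v)

def Phi(x, y):
    comb = lambda s: norm_sq(tuple(a + s * b for a, b in zip(x, y)))
    return 0.25 * (comb(1) - comb(-1) + 1j * comb(1j) - 1j * comb(-1j))

x = (1 + 2j, -3j)
y = (0.5 - 1j, 2 + 1j)
print(Phi(x, y), Phi(y, x).conjugate())  # the two printed values agree
```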
|
|functional-analysis|inner-products|
| 0
|
In how many we ways can choose at least three courses (one from each major)?
|
Note: this question was in our final exam and I didn't know how to approach it. The question: in the university there are $6$ courses in Math, $6$ courses in CS, and $3$ in Physics (each course is different). In how many ways can we choose $6$ courses from the database such that at least one course from each major is chosen? First, I tried brute force to see if there is a pattern, but it cannot be done, because if I'm doing brute force, I divide by $3$ , but that makes us not handle the physics courses at all, because there is only one option and they're always in the database. But that's what I did in the exam, so let's look at it again: if we divide by $3$ , we need to choose only $3$ courses out of a $5$ -course database, and it is just a regular question with the solution $1 \cdot {3 \choose 1} \cdot {3 \choose 1}$ . I really didn't know how to approach it. The official solution applies PIE, but I don't understand how to apply PIE, because if we divide into groups, but when I'm trying to do so,
|
The suggested inclusion-exclusion approach: There are $6+6+3=15$ courses available. $\binom{15}{6}$ ways to choose six courses if we don't care about amounts. Of these, $\binom{9}{6}$ had no math courses and are "bad" for that reason. Similarly, $\binom{9}{6}$ had no CS courses, and $\binom{12}{6}$ had no physics courses. We subtract these from our count. But then, we subtracted too much... specifically those which were void in multiple subjects, which happens specifically when we took only math courses, or took only CS courses ( noting it is impossible to take only physics courses, there aren't enough ) Our total then is $\binom{15}{6}-\binom{9}{6}-\binom{9}{6}-\binom{12}{6}+\binom{6}{6}+\binom{6}{6}$ To be clear, the approach where you first select one from each subject and then fill out the rest after the fact is wrong. That would have inadvertently given importance to whether something was picked in the first step versus in the remainder. That would be the count for how many class
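A brute-force check of this count (my addition; the labels `M`, `C`, `P` are just tags for the three majors):

```python
from math import comb
from itertools import combinations

# Enumerate all 6-subsets of the 15 courses and keep those that hit every
# major; compare with the inclusion-exclusion expression from the answer.
majors = ["M"] * 6 + ["C"] * 6 + ["P"] * 3  # 6 math, 6 CS, 3 physics
brute = sum(1 for pick in combinations(range(15), 6)
            if {majors[i] for i in pick} == {"M", "C", "P"})

pie = comb(15, 6) - 2 * comb(9, 6) - comb(12, 6) + 2 * comb(6, 6)
print(brute, pie)  # both equal 3915
```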
|
|combinatorics|discrete-mathematics|
| 1
|
Solving an ODE using the Fourier transform
|
Solve $u''+u=\delta(x)$ . Using the Fourier transform we obtain $\mathscr{F}\{u''\}= -\omega^2\hat{u}$ , $\mathscr{F}\{u\}= \hat{u}$ , $\mathscr{F}\{\delta\}=\frac{1}{\sqrt{2\pi}}$ . We then have $$-\omega^2\hat{u}+\hat{u}=\frac{1}{\sqrt{2\pi}}.$$ But how do we go on from here?
|
In the absence of boundary (initial) conditions, the solution of the linear differential equation $$u^{\prime \prime}(x)+u(x)=\delta(x)\tag{1} \label{1}$$ is not unique, but given by the general solution $u_h(x)$ of the homogeneous equation $$u_h^{\prime \prime} (x)+u_h(x)=0 \tag{2}$$ plus a special solution $G(x)$ of the inhomogeneous equation $$G^{\prime \prime}(x) +G(x)=\delta(x), \tag{3}$$ where $G(x)$ is a Green function of the differential operator $d^2\!/dx^2+1$ . The analogous situation is encountered after the Fourier transformation, $$\hat{u}(\omega):=\mathcal{F} \{u\}(\omega)=\int\limits_{-\infty}^\infty\! \!\frac{dx}{\sqrt{2\pi}} \, e^{i \omega x} \,u(x) \tag{4} \label{4}$$ mapping \eqref{1} into the equivalent algebraic equation $$(-\omega^2+1)\,\hat{u}(\omega) = \frac{1}{\sqrt{2\pi}}.\tag{5} \label{5}$$ The general solution $\hat{u}_h(\omega)$ of the homogeneous equation $$(-\omega^2+1)\,\hat{u}_h(\omega) =0 \tag{6} $$ is given by $$\hat{u}_h(\omega) = \sqrt{2 \pi} \left[
|
|ordinary-differential-equations|fourier-transform|
| 1
|
Coming up with a counter example - calculus
|
I have to come up with a counterexample for the following statement: Let $f: [0,\infty)\longrightarrow \mathbb R$ be continuous and bounded. Prove that it attains either a minimum or a maximum (or both). Everything I tried seems to always be lacking one of the conditions; for example, $\sin(1/x)$ is not continuous at $0$ , $x\sin(x)$ is unbounded, and $\sin(x) + 1/x$ does have a minimum. Any help would be appreciated.
|
You can take, for instance, $f(x)=\sin(x)\arctan(x)$ . Its supremum and its infimum are $\pm\pi/2$ , but none of them is reached. Besides, it is continuous and bounded (for each $x\in[0,\infty)$ , $|f(x)|<\pi/2$ ).
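A quick numerical spot-check of this example (my addition):

```python
import math

# f(x) = sin(x)·arctan(x) stays strictly below π/2 in absolute value, yet
# gets arbitrarily close to π/2 near x = π/2 + 2πk for large k, so the
# supremum is never attained.
f = lambda x: math.sin(x) * math.atan(x)

xs = [0.1 * k for k in range(1, 10_000)]
vals = [f(x) for x in xs]
assert all(abs(v) < math.pi / 2 for v in vals)  # strictly below the bound
print(max(vals), math.pi / 2)  # the max gets close to π/2 but stays below
```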
|
|real-analysis|calculus|examples-counterexamples|
| 0
|
Expected value exists because it has an integrable lower bound?
|
I'm trying to understand an argument my prof made: Given $h:\mathbb R \rightarrow \mathbb R$ convex, we look at $E[h(X)]$ . The expected value exists because $h(X)$ is lower-bounded by [some] $l(X)$ that is integrable. I'm not certain why this lower bound implies that the expected value exists. I know, of course, Lebesgue's theorem about dominated convergence, but that refers to an upper bound. So I tried to search for a different reason behind it and I came up with this: We have $E(h(X)^-) \leq E(l(X)^-)<\infty$ since $l(X) \leq h(X)$ . Our lecture notes then imply that $E(h(X))$ exists already. But couldn't we use that argument in general and conclude that whenever $X$ is lower-bounded by some integrable random variable $Y$ , then $E(X)$ exists? Is that correct?
|
Assuming $X$ to be integrable: By convexity, $h$ is bounded below by an affine $l$ ; and $l(X)$ is integrable because $X$ is. In particular, $E[l(X)^-]<\infty$ . (Where $b^-:=\max(-b,0)$ .) Therefore, because $h(x)^-\le l(x)^-$ for all $x$ , $E[h(X)^-]\le E[l(X)^-]<\infty$ . Consequently, $E[h(X)]$ is well defined as $E[h(X)^+]-E[h(X)^-]$ , although $E[h(X)]$ might take the value $+\infty$ .
|
|probability-theory|stochastic-processes|stochastic-analysis|
| 0
|
What is the Second mean value theorem for integrals?
|
I was attempting to solve this limit $$\lim_{n \to \infty}\int_{0}^ \infty \frac{nx \arctan(x)}{(1+x)(n^2+x^2)}dx $$ After some time I gave up and saw the solution. The solution involves the Second mean value theorem for integrals which I never heard of. The solution used the fact that if $f:[0,1]\to \mathbb{R}$ is a continuous function then $\lim_{n \to \infty}\int_0 ^1 \frac{nf(x)}{x^2n^2+1}dx = \frac{f(0) \pi}{2}$ which is proved using the Second mean value theorem for integrals I want to ask specifically for books that have this theorem and its proof. Because I want to see what theorem this book have that I don't know.
|
Note $$ I_n=\int_{0}^ \infty \frac{nx \arctan(x)}{(1+x)(n^2+x^2)}dx\overset{x\to nx}=\int_{0}^\infty \frac{nx \arctan(nx)}{(1+nx)(1+x^2)}dx. $$ Using $$ \frac{nx \arctan(nx)}{(1+nx)(1+x^2)}\le\frac{\pi}{2(1+x^2)}$$ and $$ \int_0^\infty \frac{\pi}{2(1+x^2)}dx<\infty, $$ one has, by DCT, $$ \lim_{n\to\infty}I_n=\int_{0}^\infty\lim_{n\to\infty}\frac{nx \arctan(nx)}{(1+nx)(1+x^2)}dx=\int_0^\infty\frac{\pi}{2(1+x^2)}dx=\frac{\pi^2}4. $$
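A small script (my addition) spot-checking the dominating bound on a grid and confirming the limit value $\pi^2/4$:

```python
import math

# The bound n·x·arctan(nx)/((1+nx)(1+x²)) ≤ π/(2(1+x²)) follows from
# arctan(nx) ≤ π/2 and nx/(1+nx) ≤ 1; check it pointwise on a grid.
for n in (1, 5, 50):
    for k in range(1, 2000):
        x = 0.01 * k
        lhs = n * x * math.atan(n * x) / ((1 + n * x) * (1 + x * x))
        assert lhs <= math.pi / (2 * (1 + x * x)) + 1e-15

# ∫_0^∞ π/(2(1+x²)) dx = (π/2)·[arctan(x)]_0^∞ = (π/2)·(π/2) = π²/4.
limit_value = (math.pi / 2) * (math.pi / 2)
print(limit_value, math.pi ** 2 / 4)
```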
|
|real-analysis|integration|definite-integrals|book-recommendation|mean-value-theorem|
| 0
|
There is at most one $K$-linear mapping $f:V\rightarrow V'$ such that $f(a_i)=a_i'$ for all $i$.
|
This statement is being proven in one of my textbooks, however I think it's not true: Let $V$ be a $K$ -Vectorspace with spanning set $\{a_1, a_2, \cdots a_n\}$ , and $\{a_1', a_2', \cdots a_n'\}$ be a spanning set of another $K$ -Vectorspace $V'$ . Then there is at most one $K$ -linear mapping $f:V\rightarrow V'$ such that $f(a_i)=a_i'$ for all $i$ . Translated proof from Textbook: Since $a$ has a representation of the form $a=\sum_{i=1}^{n} \alpha_i a_i$ , because of the linearity of $f$ we have: \begin{equation} f(a) = f\bigg(\sum_{i=1}^{n} \alpha_i a_i \bigg) = \sum_{i=1}^{n} \alpha_i f(a_i) \end{equation} which means that $f$ is being uniquely represented by the values of $f(a_i)$ , $ i=1, \cdots n$ . I don't understand why it follows that " $f$ is being uniquely represented" since $\{a_1, a_2, \cdots a_n\}$ is just a spanning set and not a basis, therefore the $\alpha_i$ don't have to be unique. I also think I have a counterexample: Let $a_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}
|
Then any $f:\mathbb{R}^2\rightarrow \mathbb{R}^2$ with the matrix representation But that's not the claim in the textbook. The claim is about "any $f:V\to V'$ ". In your example $V$ is a proper ( $1$ -dimensional) subspace of $\mathbb{R}^2$ . And so is $V'$ . These are not whole $\mathbb{R}^2$ . You are correct that there are many different $f:\mathbb{R}^2\rightarrow \mathbb{R}^2$ such that $f(a_1)=a_1'$ . Or in other words $f:V\to V'$ can be extended to many different linear maps $\mathbb{R}^2\rightarrow \mathbb{R}^2$ . But again: that's not the claim. Note that when you restrict this definition to $V$ , then regardless of the choice of the matrix you will get the same function. Consider a simpler example: $$f:k^2\to k^2$$ $$f(x,y)=(x,\lambda y)$$ for a fixed scalar $\lambda$ . Now if you restrict it to $\{(x,0)\ |\ x\in k\}$ subspace, then the restriction is still given by $f(x,y)=(x,\lambda y)$ . However $y=0$ implies $f(x,y)=(x,0)$ as well. On the subspace. So regardless of the cho
|
|linear-algebra|matrices|vector-spaces|linear-transformations|
| 1
|
Is $\text{BRANCH}(n)$ finite for $n > 2$?
|
Is $\text{BRANCH}(n)$ finite for $n > 2$ ? Define $\text{BRANCH}(n)$ as the maximum length of a string that is composed of at most $n$ unique characters AND meets the following condition: Define a substring as all letters from positions $i$ to $2i$ (inclusive) (where $i= 1, 2, 3...$ ). Then, a certain substring must not be allowed to have previous substrings embedded in it, or else the branch stops. $\text{BRANCH}(1) = 3$ , since the longest string is aaa . The first substring from positions $1$ to $2$ is aa . The theoretical substring from positions $2$ to $4$ would be aaa . However, since aa (substring from $1$ to $2$ ) is embedded inside aaa (substring from $2$ to $4$ ), we cannot make our string aaaa and stop at aaa . $\text{BRANCH}(2) = 11$ , since the longest string is abbbaaaaaaa . At positions $1$ to $2$ , we have substring ab . At positions $2$ to $4$ , we have substring bbb . ab isn't embedded in this, we move on. At positions $3$ to $6$ , we have substring bbaa . None of ab
|
For a given $n\geq 2$ , let $S=(s_1,s_2,\ldots)$ be an infinite sequence of elements in $\{3,\ldots,n+2\}$ . For $i\geq 1$ , let $S_i:=S[i:2i]$ be the subsequence $S_i=(s_i,s_{i+1},\ldots,s_{2i})$ . We build a rooted tree $T_i$ with $i+1$ internal vertices as follows. Start with $T_i$ being a single vertex, the root of $T_i$ . For $k$ in $0,\ldots,i$ : let $v$ be a vertex of $T_i$ at distance $k$ from the root, and add $s_{k+i}$ vertices all adjacent to $v$ . This process stops with a rooted tree with bounded degree $\Delta(T_i)\leq n+2$ and with no vertices of degree $2$ . Ignoring the leaves, it has degree sequence exactly $S_i$ (ordered by distance to the root). By the Kruskal tree theorem , there exist $i<j$ such that $T_i$ is homeomorphic to a subtree of $T_j$ . Given that $T_j$ contains no vertex of degree $2$ , $T_i$ is isomorphic to a subtree of $T_j$ , which implies that $S_i$ is contained in $S_j$ . QED
|
|combinatorics|graph-theory|trees|big-numbers|
| 0
|
Partition of a Matrix
|
In Linear Algebra, we have been taught that the partition of a matrix $A$ consists of matrices, or blocks. In other words, its elements are matrices. This same partitioned matrix, however, is said to be equal to the original matrix. But their elements are different, as one contains scalars and the other matrices. Please help me understand.
|
In linear algebra, when we talk about the partition or block structure of a matrix A, we're essentially organizing its elements into submatrices or blocks. The key point is that while the individual elements within these blocks might be matrices instead of scalars, the equality between the original matrix A and its partitioned form still holds. To clarify, the equality is established based on the correspondence of corresponding blocks in the partitioned matrix and the original matrix. Each block in the partitioned matrix is, in fact, a matrix, and the equality is a way of expressing that the entire structure of the partitioned matrix, block by block, is equivalent to the original matrix. In summary, even though the elements within the blocks are matrices instead of scalars, the equality is upheld by ensuring that the organization and structure of the blocks align with the original matrix A. This concept is often used in various applications, such as solving linear systems or representi
|
|linear-algebra|matrices|block-matrices|
| 0
|
Operator norm of powers of bounded normal operators and self adjoint operators on Hilbert space
|
I saw this problem in Sheldon Axler's Measure, Integration and Real Analysis, Ex. $10B$ , problems $17$ and $18$ . Let's just restrict to the self-adjoint case. What I am trying to show is that $||T^{n}||=||T||^{n}$ for all natural numbers. However, I am stuck with this problem. Firstly, I can directly show that $||T^{2^{n}}||=||T||^{2^{n}}$ by a simple induction. But I don't know how to do this for odd numbers. If $T$ were compact, then I could say that $\pm ||T||$ is an eigenvalue and has the largest modulus in the spectrum. Then by the polynomial spectral mapping theorem, I could say that $||T||^{n}$ has the largest modulus in the spectrum of $T^{n}$ . Hence $||T^{n}||=||T||^{n}$ . I could even apply the same logic to normal operators, I guess. But how do I solve it in general?
|
If you know about the Gelfand representation of abelian $C^*$ -algebras, the assertion is obvious because we can regard the $C^*$ -algebra generated by $1$ and $T$ as $C(X)$ for some compact Hausdorff space $X$ . For a more elementary approach, let $T$ be a self-adjoint operator. Let us recall the spectral radius formula $$r(T)=\lim_{n\to\infty}\lVert T^n\rVert^{1/n}, $$ where $r(T)$ is the spectral radius of $T$ . Then we have $$r(T)=\lim_{n\to\infty}\lVert T^{2^n}\rVert^{2^{-n}}=\lVert T\rVert, $$ and now the spectral mapping theorem gives $$ \lVert T^n\rVert=r(T^n)=r(T)^n=\lVert T\rVert^n, $$ as desired. If now $T$ is a normal operator, one can apply the preceding argument to $T^*T$ to obtain $$\lVert T^n\rVert^2=\lVert (T^*T)^n\rVert=\lVert T^*T\rVert^n=\lVert T\rVert^{2n}. $$
|
|functional-analysis|operator-theory|hilbert-spaces|self-adjoint-operators|
| 0
|
There is at most one $K$-linear mapping $f:V\rightarrow V'$ such that $f(a_i)=a_i'$ for all $i$.
|
This statement is being proven in one of my textbooks, however I think it's not true: Let $V$ be a $K$ -Vectorspace with spanning set $\{a_1, a_2, \cdots a_n\}$ , and $\{a_1', a_2', \cdots a_n'\}$ be a spanning set of another $K$ -Vectorspace $V'$ . Then there is at most one $K$ -linear mapping $f:V\rightarrow V'$ such that $f(a_i)=a_i'$ for all $i$ . Translated proof from Textbook: Since $a$ has a representation of the form $a=\sum_{i=1}^{n} \alpha_i a_i$ , because of the linearity of $f$ we have: \begin{equation} f(a) = f\bigg(\sum_{i=1}^{n} \alpha_i a_i \bigg) = \sum_{i=1}^{n} \alpha_i f(a_i) \end{equation} which means that $f$ is being uniquely represented by the values of $f(a_i)$ , $ i=1, \cdots n$ . I don't understand why it follows that " $f$ is being uniquely represented" since $\{a_1, a_2, \cdots a_n\}$ is just a spanning set and not a basis, therefore the $\alpha_i$ don't have to be unique. I also think I have a counterexample: Let $a_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}
|
The basic idea is this: If you know what a linear transformation does on a spanning set, then you know what it does on any vector. Why? Because if $a_1,\ldots,a_n$ span $V$ , and you know that $f\colon V\to V'$ is a linear transformation, and you know what $f(a_1),\ldots,f(a_n)$ are, then there is only one thing that $f(v)$ can be for any $v\in V$ . The reason is that you can express $v$ as a linear combination of $a_1,\ldots,a_n$ , and therefore $f(v)$ must be the corresponding linear transformation of $f(a_1),\ldots,f(a_n)$ . For instance, suppose you know that $f\colon\mathbb{R}^2\to\mathbb{R}^3$ has $f(1,1) = (1,2,3)$ , $f(1,2) = (2,5,2)$ , and $f(0,1) = (1,3,-1)$ , and that $f$ is definitely linear. Given any vector in $\mathbb{R}^2$ , say $(5,-2)$ , you know you can write it as a linear combination of the vectors $(1,1)$ , $(1,2)$ , and $(0,1)$ (in potentially many ways). For instance, $$(5,-2) = 3(1,1) + 2(1,2) - 9(0,1).$$ That means that $$\begin{align*} f(5,-2) &= f\Bigl( 3(1,
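The example can be checked mechanically (my addition; `rep2` is a second representation of $(5,-2)$ not spelled out in the answer, namely $5(1,1)-7(0,1)$):

```python
# Two different representations of (5, -2) over the spanning set must yield
# the same image, because the given images are themselves consistent with a
# linear map f.
images = {(1, 1): (1, 2, 3), (1, 2): (2, 5, 2), (0, 1): (1, 3, -1)}

def apply_combo(coeffs):
    # coeffs: {spanning vector: scalar}; returns Σ c · f(a) componentwise
    out = (0, 0, 0)
    for a, c in coeffs.items():
        out = tuple(o + c * w for o, w in zip(out, images[a]))
    return out

rep1 = {(1, 1): 3, (1, 2): 2, (0, 1): -9}   # 3(1,1)+2(1,2)-9(0,1) = (5,-2)
rep2 = {(1, 1): 5, (0, 1): -7}              # 5(1,1)-7(0,1)        = (5,-2)
print(apply_combo(rep1), apply_combo(rep2))  # both give (-2, -11, 22)
```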
|
|linear-algebra|matrices|vector-spaces|linear-transformations|
| 0
|
What is the maximum number of non-zero entries of a matrix $A$ with non-negative entries that fulfills $A^2 = 0$
|
A question I found and that I could not answer so far. Assuming that $A$ is a $n \times n$ matrix with non-negative entries, that fulfills the equation $A^2= 0$ , where $0$ is the zero matrix. What is the maximal number of positive entries for $A$ ? So far I have tried going through the specific entries, namely to solve $0 = \sum_{k=1}^n a_{i,k} \cdot a_{k,j}$ for all $i,j$ , but I was hoping for a simpler way. An example for $n=5$ has been $ \begin{bmatrix} 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 \\ \end{bmatrix} $ which fulfills the equation, but I do not know if there are better possibilities
|
Consider the incidence graph of the matrix obtained by replacing every non-zero entry by 1. A path of length 2 in this graph corresponds to the product of two non-zero coefficients, say $a_{i,j}a_{j,k}$ . In your example, the graph would have 5 vertices and edges $(2,1), (3,1), (4,1), (2,5), (3,5), (4,5)$ . Thus the problem can be reduced to the following graph-theoretic question: Question . What is the maximum number of edges of a directed graph containing no paths of length 2? This question was discussed in [1, Theorem 4.3] or [2, Theorem 1] and the answer is $k^2$ if $n = 2k$ and $k^2 + k$ if $n = 2k + 1$ . For your case, $n = 5$ , $k = 2$ and $k^2 + k = 6$ , in agreement with your bound. [1] Bermond, J.-C.; Sotteau, D.; Germa, A.; Heydemann, M.-C. Chemins et circuits dans les graphes orientés . (French) Ann. Discrete Math. 8 (1980), 293--309 [2] Sotteau, D.; Wojda, A. P., Digraphs without directed path of length two or three. Discrete Math. 58 (1986), no. 1, 105--108.
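A direct check of the optimum for $n=5$ (my addition), using the matrix from the question, which realizes $k^2+k=6$ positive entries:

```python
# The question's 5×5 example: 6 positive entries, and A² is the zero matrix,
# matching the bound k² + k with k = 2.
n = 5
A = [[0, 0, 0, 0, 0],
     [1, 0, 0, 0, 1],
     [1, 0, 0, 0, 1],
     [1, 0, 0, 0, 1],
     [0, 0, 0, 0, 0]]

A2 = [[sum(A[i][k] * A[k][j] for k in range(n)) for j in range(n)]
      for i in range(n)]
nonzero = sum(1 for row in A for a in row if a > 0)
print(nonzero, all(v == 0 for row in A2 for v in row))  # 6 True
```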
|
|linear-algebra|matrices|graph-theory|
| 1
|
How to generalize a sequence of terms/patterns in this case
|
$y_{0}=0$ $y_{1}=\frac{-1}{2}x^2$ $y_{2}=\frac{-1}{2}x^2 - \frac{1}{10}x^5$ $y_{3}=\frac{-1}{2}x^2 - \frac{1}{10}x^5- \frac{1}{80}x^8$ $y_{4}=\frac{-1}{2}x^2 - \frac{1}{10}x^5- \frac{1}{80}x^8- \frac{1}{880}x^{11}$ Now I am trying to figure out a general pattern. I can see that the powers follow an arithmetic sequence 3 is added each time. The denominator is the current power multiplied by its previous power. so for example $10 = 5 \times 2$ and $80 = 8 \times 5 \times 2$ . My problem is that I am not sure how to generalise this. I am thinking $(\frac{1}{\prod^{n}_{k = 0} 3n-1}\times x^{3n-1}$ ). But not sure if this is correct? How can I prove by induction in this case? I have not done anything with the $\prod$ symbol before. EDIT: Thank you @mathlove for the detailed and thoughtful answer. For the proving part, I forgot to write up the whole problem. But here it is: This problem about applying the methods of successive approximations ${φ0(x)=0, φ1(x), φ2(x), . . . , φn(x)}$ to the in
|
I am thinking $(\frac{1}{\prod^{n}_{k = 0} 3n-1}\times x^{3n-1}$ ). You are almost there. We can guess that for $n\ge 1$ , $$y_n=-\sum_{\color{red}k=1}^{n}x^{3\color{red}k-1}\prod_{m=1}^{\color{red}k}\frac{1}{3m-1}\tag1$$ For $n=5$ , for example, $(1)$ gives $$\begin{align}y_5&=-\sum_{k=1}^{5}x^{3k-1}\prod_{m=1}^{k}\frac{1}{3m-1} \\\\&=-\bigg(x^{2}\prod_{m=1}^{1}\frac{1}{3m-1}+x^{5}\prod_{m=1}^{2}\frac{1}{3m-1}+x^{8}\prod_{m=1}^{3}\frac{1}{3m-1}+x^{11}\prod_{m=1}^{4}\frac{1}{3m-1}+x^{14}\prod_{m=1}^{5}\frac{1}{3m-1}\bigg) \\\\&=-\frac{x^2}{2}-\frac{x^5}{2\times 5}-\frac{x^8}{2\times 5\times 8}-\frac{x^{11}}{2\times 5\times 8\times 11}-\frac{x^{14}}{2\times 5\times 8\times 11\times 14} \\\\&=-\frac{x^2}{2}-\frac{x^5}{10}-\frac{x^8}{80}-\frac{x^{11}}{880}-\frac{x^{14}}{12320}\end{align}$$ However, note that we cannot prove $(1)$ since from the given conditions, $y_n\ (n\ge 5)$ are not determined. There are infinitely many examples which satisfy the given conditions. For example, for $n\g
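The coefficients of formula $(1)$ can be recomputed exactly (my addition) and compared with the denominators listed above: $2, 10, 80, 880, 12320$.

```python
from fractions import Fraction

# For k = 1..5, the coefficient of x^{3k-1} (up to the overall minus sign)
# is Π_{m=1}^{k} 1/(3m-1); accumulate the product in exact arithmetic.
coeffs = []
prod = Fraction(1)
for m in range(1, 6):
    prod *= Fraction(1, 3 * m - 1)
    coeffs.append(prod)

print([c.denominator for c in coeffs])  # [2, 10, 80, 880, 12320]
```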
|
|sequences-and-series|algebra-precalculus|induction|
| 1
|
If $\Re(f)=\Re(g)$ on the boundary of $D$ then the equality is also true on $D$
|
Let $f,g$ be holomorphic on a bounded domain $D$ and continuous on $\overline{D}$ . Assume that $\Re(f)=\Re(g)\forall z\in\partial D$ . Prove that $\Re (f-g)=0$ . My attempt Consider $h=f-g$ . Clearly $h$ is holomorphic on $D$ , continuous on $\overline{D}$ . If $h$ is constant, then we finish. If $h$ is non-constant. According to Maximum modulus principle, for all $z\in\overline{D}$ , we have $$|h(z)|\leq|h(z_0)|\quad (\text{ for some }z_0\in\partial D).$$ which also means that $$\sqrt{\Re^2(f(z)-g(z))+\Im^2(f(z)-g(z))}\leq\sqrt{\Im^2(f(z_0)-g(z_0))}$$ Hence we obtain the following inequality $$\Re^2(f(z)-g(z))+\Im^2(f(z)-g(z))\leq\Im^2(f(z_0)-g(z_0))^2\quad\forall z\in\overline{D}.$$ I'm stuck at here. Could someone help me to continue or another way to deal with the problem? Thanks in advance!
|
Since $h = f-g$ is holomorphic the real-part $\Re(h)$ is harmonic. So, it attains its maximum and minimum on the boundary. But $\Re(h) = 0$ on $\partial D$ , so $\Re(f-g) = 0$ on the whole $D$ .
|
|complex-analysis|
| 0
|
Relation between zero singular values and eigenvalues
|
Given a non-normal diagonalizable square matrix $A\in \mathbb{R}^{N\times N}$ , we know its Eigenvalues $\lambda_i$ and its singular values $\sigma_i, i=1,\dots, N$ . Say, I approximate $A$ by $\sigma_i\rightarrow 0\ \forall i\ge k$ for some cutoff index $k$ , how can I prove that $\lambda_i=0\ \forall i\ge k$ ? I did some numerical tests and it very much seems to be the case, but I struggle proving it. Also, can we say something about the error in the remaining $\lambda_i$ when performing this approximation? Is there some kind of upper limit to the error we make?
|
Zero is a singular value of $A$ iff there exists $u\neq 0\in\mathbb{R}^N$ such that $ A^TAu = 0$ . However, $$ A^TAu=0\implies u^TA^TAu = \|Au\|_2^2=0\implies Au=0, $$ so $u$ is an eigenvector of $A$ corresponding to $\lambda=0$ . This trick can be used in the other direction to show that $\lambda=0 \iff \sigma=0$ . As for the sensitivity of $\lambda_i$ to perturbing $A$ , I suggest you check out section 7.2 of Matrix Computations by Golub and Van Loan. They show that $\mathcal{O}(\epsilon)$ perturbations in $A$ can induce $\mathcal{O}(\epsilon^{1/p})$ errors in $\lambda$ , where $p$ is the multiplicity of $\lambda$ , with more detailed estimates in the case where $p=1$ . Golub, Gene; Van Loan, Charles F. , Matrix computations. , Baltimore, MD: The Johns Hopkins Univ. Press. xxvii, 694 p. (1996). ZBL0865.65009 .
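A tiny concrete illustration of the trick (my addition), with the non-normal nilpotent matrix $A=\begin{pmatrix}0&1\\0&0\end{pmatrix}$:

```python
# For A = [[0,1],[0,0]], the vector u = (1, 0) satisfies AᵀA u = 0, and
# indeed A u = 0 as well, so 0 is both a singular value and an eigenvalue.
A = [[0, 1], [0, 0]]
At = [[A[j][i] for j in range(2)] for i in range(2)]  # transpose

def matvec(M, v):
    return tuple(sum(M[i][j] * v[j] for j in range(2)) for i in range(2))

u = (1, 0)
print(matvec(At, matvec(A, u)), matvec(A, u))  # (0, 0) (0, 0)
```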
|
|eigenvalues-eigenvectors|svd|
| 1
|
Tower law and iterated expectation
|
Let's take 3 random variables $X,Y,Z$ on the same probability space (or associated with the same experiment), then applying the tower law in an iterative fashion we get: $$ E[X] = E_Y[E_{X|Y}[X|Y]] = E_Y\left[E_{Z|Y }\left[E_{X|Y,Z}\left[X|Y,Z\right]|Y\right]\right] $$ where I specified the pdfs on which we integrate below the expectation $(E)$ symbol. Precisely, we have that $$ E_{X|Y}[X|Y] = E_{Z|Y}\left[E_{X|Y,Z}\left[X|Y,Z\right]|Y\right] $$ However, if I am not mistaken, the above implies that the following is not true: $$ E[E[X|Y]|Z] = E[E[X|Y,Z]] $$ I have some difficulty in understanding why, any help or suggestion would be much appreciated.
|
No, that is not true in general, because the left-hand side is a random variable and the right-hand side is a number. Suppose $X=Y=Z$ then, $$E(E(X|Y)|Z) = E(E(X|X)|X) = E(X|X) = X$$ but, $$E(E(X|Y,Z)) = E(E(X|X)) = E(X)$$ which are not equal so long as $X$ is not degenerate.
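A discrete illustration of this point (my addition), with $X=Y=Z$ taking values $0$ and $1$ with probability $1/2$ each:

```python
from fractions import Fraction

# With X = Y = Z, E(E(X|Y)|Z) = E(X|X) = X is the identity random variable
# (a value per outcome), while E(E(X|Y,Z)) = E(X) = 1/2 is a single number.
p = {0: Fraction(1, 2), 1: Fraction(1, 2)}

lhs = {x: x for x in p}                 # E(E(X|X)|X) = X, listed by outcome
rhs = sum(x * w for x, w in p.items())  # E(E(X|X)) = E(X)
print(lhs, rhs)  # {0: 0, 1: 1} versus 1/2 — a random variable vs a number
```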
|
|probability|probability-theory|expected-value|conditional-expectation|
| 1
|
Stationary distribution of a birth-death process
|
Consider a birth-death process with constant parameters $\lambda_n = 4$ and $\mu_n = 5$ with $n = 0,1,2,\dots$ . Find the stationary probability (as a function of n) My attempt so far is using the balance equations $\lambda_n\pi_n = \mu_{n+1}\pi_{n+1}$ we have the equations $4\pi_0 = 5\pi_1, ..., 4\pi_n = 5\pi_{n+1}$ would yield $\pi_1 = \frac{4}{5}\pi_0, \pi_2 = \frac{4}{5}\pi_1 = (\frac{4}{5})^2\pi_0, \dots,\pi_n = (\frac{4}{5})^n\pi_0$ and solving for $\pi_0 = (1+\frac{4}{5} + (\frac{4}{5})^2 + \dots + (\frac{4}{5})^n)^{-1} = (\frac{1}{1-\frac{4}{5}})^{-1} = 5$ . Then putting it all together my answer of $\pi_n = (\frac{4}{5})^n\cdot5$ was incorrect. I hope someone will be able to lead me in the correct direction
|
The detailed balance equations $$ \pi_{n-1}\lambda_{n-1} = \pi_n\mu_n,\ n=1,2,\ldots $$ yield the recurrence $$\pi_n = \prod_{i=1}^n \frac{\lambda_{i-1}}{\mu_i}\pi_0,$$ and from $\sum_{n=0}^\infty \pi_n=1$ it follows that \begin{align} \pi_0 &= \left(\sum_{n=0}^\infty\prod_{i=1}^n \frac{\lambda_{i-1}}{\mu_i}\right)^{-1}\\ \pi_n &= \prod_{i=1}^n \frac{\lambda_{i-1}}{\mu_i}\left(\sum_{n=0}^\infty\prod_{i=1}^n \frac{\lambda_{i-1}}{\mu_i}\right)^{-1}. \end{align} Substituting $\lambda_n=4$ and $\mu_n=5$ for all $n$ , we have $$ \pi_0 =\left(\sum_{n=0}^\infty\prod_{i=1}^n \frac45\right)^{-1} = \left(\sum_{n=0}^\infty\left(\frac45\right)^n\right)^{-1}=\frac15 $$ and $$ \pi_n = \left(\prod_{i=1}^n \frac45\right)\cdot\frac15 = \frac15\left(\frac45\right)^n. $$ Your reasoning was correct up to the last step, but you neglected to apply the exponent $-1$ when computing $\pi_0$ : the sum equals $5$ , so $\pi_0=\frac15$ .
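A quick script (my addition) confirming that the closed form satisfies detailed balance and normalizes to $1$:

```python
# Verify π_n = (1/5)(4/5)^n: detailed balance 4·π_n = 5·π_{n+1} for each n,
# and Σ π_n = 1 (checked up to a negligible geometric tail).
pi = lambda n: (1 / 5) * (4 / 5) ** n

for n in range(50):
    assert abs(4 * pi(n) - 5 * pi(n + 1)) < 1e-12

total = sum(pi(n) for n in range(500))
print(total)  # ≈ 1 (the tail beyond n = 500 is astronomically small)
```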
|
|birth-death-process|
| 1
|
Seeking clarity on the concept of equivalent martingale measures
|
Question Consider a one-step trinomial tree, where there are two traded assets, a bond with risk-free rate, $r$ , a stock with initial price, $S_0$ , and terminal price $$S_T = \begin{cases} S_0u,& \text{with probability} \ p_u \\ S_0m,& \text{with probability} \ p_m \\ S_0d,& \text{with probability} \ p_d \end{cases}$$ where $p_u, p_m, p_d >0.$ Suppose that $T = 1, r = 0.05, S_0 = 1, u =1.5, m = 1, d =\frac{1}{u}.$ By considering the set of EMMs or otherwise, show that the market is incomplete. Suppose a European call on the stock with strike price $0.9$ and maturity time T is an asset traded in the market. Explain whether or not including this option as a traded asset in the market has made the market complete. My attempt From my understanding, a market is complete if there exists a unique EMM. Also, if there exists infinitely many EMMs, then the market is incomplete. Thus, I think it is sufficient to prove this by establishing the existence of two EMMs, $\mathbb{P}_1$ and $\mathbb{P
|
As you stated, a market is complete if there exists a unique EMM ( Second Fundamental Theorem of Asset Pricing ). To show that the standard trinomial model is incomplete we can proceed as follows. The martingale condition requires that $$ S_{n-1} = \frac{1}{1+r}\mathbb{E}^{\mathbb{Q}}[S_n|\mathcal{F}_{n-1}] $$ We then couple this with the requirement that the probabilities sum to $1$ . In our case: $$ \begin{cases} 1+r = uq_u + mq_m+dq_d \\ q_u+q_m+q_d = 1 \end{cases} $$ As you can see, this system does not admit a unique solution, hence the EMM is not unique and the model is therefore incomplete. (Try and solve it.) On the other hand, if we include an option (or in general a second asset), we need to include the martingality condition for that asset as well. This adds an equation to the previous system, which makes the solution unique and hence the model complete.
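For the given numbers ($r=0.05$, $u=1.5$, $m=1$, $d=2/3$) the system can be parametrized explicitly; the sketch below (my addition) exhibits a one-parameter family of EMMs, confirming non-uniqueness:

```python
from fractions import Fraction

# Martingale system: 1.5·q_u + q_m + (2/3)·q_d = 1.05, q_u + q_m + q_d = 1.
# Two equations, three unknowns: set q_d = t; eliminating q_m between the
# two equations gives q_u = 1/10 + (2/3)t.
def emm(t):
    q_d = t
    q_u = Fraction(1, 10) + Fraction(2, 3) * t
    q_m = 1 - q_u - q_d
    return q_u, q_m, q_d

for t in (Fraction(1, 10), Fraction(2, 10), Fraction(3, 10)):
    q_u, q_m, q_d = emm(t)
    # each choice of t yields a genuine EMM (martingale condition, sum 1,
    # strictly positive probabilities)
    assert Fraction(3, 2) * q_u + q_m + Fraction(2, 3) * q_d == Fraction(21, 20)
    assert q_u + q_m + q_d == 1 and min(q_u, q_m, q_d) > 0
    print(q_u, q_m, q_d)
```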
|
|measure-theory|martingales|finance|
| 1
|