| title (string) | question_body (string) | answer_body (string) | tags (string) | accepted (int64) |
|---|---|---|---|---|
Independence of conditional probabilities
|
Let $X$ and $Y$ be two random variables, each with range $R\subset\mathbb{N}$ of size $|R|=n\in\mathbb{N}$. Consider the sample space $\Omega$ "generated" by the two variables, i.e. an atomic algebra with atoms $X=k\wedge Y=k'$ for all $k,k'\in R$, closed under (finite) conjunction and negation. Let $p$ be a probability function defined on $\Omega$, and moreover positive everywhere on $\Omega$ (so that all of the conditional probabilities below are well-defined). Question: I'm wondering how many of the $2n^2$ conditional probabilities $p(X=k|Y=k')$ and $p(Y=k|X=k')$ are at least "partially" independent. That is, how many choices of values (in $]0,1[$) can I make before the rest of the above-mentioned conditional probabilities are fully determined? So far I've only found some obvious upper bounds. For example, for all $k\in R$, $\sum_{k'\in R} p(X=k'|Y=k)=1$ and $\sum_{k'\in R} p(Y=k'|X=k)=1$, which yields $2n$ independent constraints. Hence at most $2n(n-1)$ of the above-mentioned conditional probabilities can be chosen independently.
|
$$p(X=k|Y=k') = \frac{p(X=k \text{ and } Y=k')}{\sum_{j} p(X=j \text{ and } Y=k')}$$ Suppose you want to calculate every $p(X=k|Y=k')$ and similarly every $p(Y=k'|X=k)$. You would need $p(X=k \text{ and } Y=k')$ for all $k$ and $k'$, which amounts to $n^2 - 1$ free parameters (the $n^2$ joint probabilities sum to $1$). If you are given a number of the $p(X=k|Y=k')$ or $p(Y=k'|X=k)$ individually, some of them each provide a new piece of "information" while others can be redundant, as you have pointed out. So this may be a direction you can look into.
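The bookkeeping above can be illustrated with a small numeric sketch (my own; the variable names are hypothetical). A positive joint table with $n^2-1$ free entries determines every conditional, and the $2n$ normalization constraints from the question then hold automatically:

```python
import random

# Sketch: a positive joint distribution p(X=k, Y=k') on an n-by-n grid
# determines all 2n^2 conditionals p(X=k|Y=k') and p(Y=k'|X=k).
n = 3
random.seed(0)
joint = [[random.random() + 0.1 for _ in range(n)] for _ in range(n)]
total = sum(sum(row) for row in joint)
joint = [[v / total for v in row] for row in joint]  # n^2 - 1 free entries after normalizing

col = [sum(joint[k][kp] for k in range(n)) for kp in range(n)]  # p(Y=kp)
row = [sum(joint[k][kp] for kp in range(n)) for k in range(n)]  # p(X=k)
p_x_given_y = [[joint[k][kp] / col[kp] for kp in range(n)] for k in range(n)]
p_y_given_x = [[joint[k][kp] / row[k] for kp in range(n)] for k in range(n)]

# The 2n constraints mentioned in the question hold automatically:
for kp in range(n):
    assert abs(sum(p_x_given_y[k][kp] for k in range(n)) - 1) < 1e-12
for k in range(n):
    assert abs(sum(p_y_given_x[k][kp] for kp in range(n)) - 1) < 1e-12
```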
|
|probability|conditional-probability|
| 0
|
Solving the system $A\cos x +B\cos y=C$, $A\sin x + B\sin y=D$
|
It is not a school-related problem, and pretty much the title. I am looking for a way to solve the system of trigonometric equations: $$ A\cos(x) + B\cos(y) = C \tag{1} $$ $$ A\sin(x) + B\sin(y) = D \tag{2} $$ I have already solved the system for the specific case of $A=B$ and am looking to generalize this to $A \neq B$. For reference, my solution to the specific case $A=B$ is: $$x = \cos^{-1}\left(\frac{C}{A} - \cos\left(\cot^{-1}\left(\frac{C}{D}\right)-\cos^{-1}\left(\frac{D}{2A\sin\left(\cot^{-1}\left(\frac{C}{D}\right)\right)}\right)\right)\right) \tag{3}$$ $$y = \cot^{-1}\left(\frac{C}{D}\right) - \cos^{-1}\left(\frac{D}{2A\sin\left(\cot^{-1}\left(\frac{C}{D}\right)\right)}\right) \tag{4}$$ I have been using computer software such as WolframAlpha to solve it, and it does solve the system, but I've been unable to come up with a solution by hand after a couple of months of working on it pretty regularly. Can it be solved by hand?
|
Move either the terms with $A$ or $B$ to the other side and square: $$B^2\cos^2y=A^2\cos^2x-2AC\cos x+C^2\\B^2\sin^2y=A^2\sin^2x-2AD\sin x+D^2$$ Now add them together: $$B^2=A^2+C^2+D^2-2AC\cos x-2AD\sin x$$ Now write $C=R\cos\phi$, $D=R\sin\phi$, so $R^2=C^2+D^2$. $$B^2-A^2-C^2-D^2=-2A\sqrt{C^2+D^2}(\cos x\cos\phi+\sin x\sin\phi)\\\cos(x-\phi)=\frac{-B^2+A^2+C^2+D^2}{2A\sqrt{C^2+D^2}}$$ Taking the inverse cosine gives $x$. Likewise, we can get $y$. Can you take it from here?
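The elimination above can be sanity-checked numerically (my own sketch; the specific values of $A,B,x,y$ are arbitrary choices): pick a known solution, form $C,D$, and confirm the derived identity for $\cos(x-\phi)$.

```python
import math

# Pick A, B and a known (x, y); form C and D from equations (1)-(2).
A, B = 2.0, 3.0
x, y = 0.7, 1.9
C = A * math.cos(x) + B * math.cos(y)
D = A * math.sin(x) + B * math.sin(y)

phi = math.atan2(D, C)       # C = R cos(phi), D = R sin(phi)
R = math.hypot(C, D)         # R^2 = C^2 + D^2
rhs = (A**2 + C**2 + D**2 - B**2) / (2 * A * R)

# The derived identity cos(x - phi) = (A^2 + C^2 + D^2 - B^2) / (2 A R):
assert abs(math.cos(x - phi) - rhs) < 1e-12
```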
|
|algebra-precalculus|trigonometry|systems-of-equations|recreational-mathematics|
| 1
|
How many natural numbers $a\le100$ are there such that $a=[\frac a2]+[\frac a3]+[\frac a5]$, where [.] represents the greatest integer function?
|
A natural number $a$ is selected from the first $100$ natural numbers. The probability that $a=[\frac a2]+[\frac a3]+[\frac a5]$, where $[.]$ represents the greatest integer function, is $\frac mn$, where $m,n$ are coprime; then $(m+n)$ is equal to what? My Attempt: Let $a=30n+\gamma$, where $0\le\gamma\lt30$. Putting this in the given equation, I get $n=\gamma-[\frac{\gamma}{2}]-[\frac{\gamma}{3}]-[\frac{\gamma}{5}]$. $\gamma=0$ doesn't satisfy the equation, but $\gamma=1, 2, \dots, 29$ do. So the probability is $\frac{29}{100}$. Is this correct?
|
Let $a=6q+r$ . Then we need $r =\left[\dfrac{r}{2} \right]+\left[\dfrac{r}{3} \right]+\left[\dfrac{q+r}{5} \right]$ When $r=0$ , $q$ can be $1,2,3,4$ For $r=1,2,3,4,5$ it is easy to see that $q$ has $5$ solutions each (none exceeding $100$ ) Thus there are $29$ solutions
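Both counts can be confirmed by brute force (my own check, not part of either argument): floor division in Python is exactly the greatest integer function for positive integers.

```python
# Count the a in {1, ..., 100} with a = [a/2] + [a/3] + [a/5].
solutions = [a for a in range(1, 101) if a == a // 2 + a // 3 + a // 5]
assert len(solutions) == 29   # agrees with both the question and the answer
```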
|
|probability|combinatorics|discrete-mathematics|solution-verification|contest-math|
| 0
|
Question in validity of proof that $9|n^3 + (n+1)^3 + (n+2)^3$
|
I wanted to prove that 9 divides the sum of the cubes of 3 consecutive integers. I represented this as: For $n \in \mathbb{Z}$, $9\mid n^3 + (n+1)^3 + (n+2)^3$. By the definition of divisibility, if $9\mid n^3 + (n+1)^3 + (n+2)^3$ then there would exist an integer, say $m$, such that $n^3 + (n+1)^3 + (n+2)^3 = 9m$. $$n^3 + (n+1)^3 + (n+2)^3 = 3n^3 + 9n^2 + 15n + 9$$ $$= (9n^2+9) + (3n^3 +15n)$$ $$= 9(n^2 + 1) + 3(n^3 + 5n)$$ At this point I noticed that if I could prove $3\mid n^3 + 5n$, i.e. $n^3 + 5n = 3q$ for some $q \in \mathbb{Z}$, I'd have essentially completed the proof. I proved this by induction; I will omit the trivial base case $n=1$. I assumed that for an arbitrary integer $k$, $3\mid k^3 + 5k$, i.e. $k^3 + 5k = 3p$ for some $p \in \mathbb{Z}$. I sought to prove that $3\mid (k+1)^3 + 5(k+1)$. $$(k+1)^3 +5(k+1) = k^3 + 3k^2 + 8k + 6$$ $$= (k^3 + 5k) + (3k^2 + 3k + 6)$$ $$= 3p + 3(k^2 + k + 2)$$ $$= 3(p + k^2 + k + 2)$$ $$= 3q$$ where, by closure of the integers under addition and multiplication, $q = p + k^2 + k + 2$ is an integer.
|
It should be fine that you have split out part of it to be proven by induction, while the other part is true algebraically without induction. Another way to see what Will and John mentioned is that among three consecutive integers, you will always have one of each remainder (upon division by 3): 0, 1, and 2. Say you are given consecutive integers $a$, $b+1$, and $c+2$, where $3\mid a,b,c$ and $a$, $b$, or $c$ may be equal to one another. You can derive, after cubing and expanding the binomials, that 9 divides the sum, algebraically.
|
|solution-verification|proof-writing|induction|divisibility|
| 0
|
Question in validity of proof that $9|n^3 + (n+1)^3 + (n+2)^3$
|
I wanted to prove that 9 divides the sum of the cubes of 3 consecutive integers. I represented this as: For $n \in \mathbb{Z}$, $9\mid n^3 + (n+1)^3 + (n+2)^3$. By the definition of divisibility, if $9\mid n^3 + (n+1)^3 + (n+2)^3$ then there would exist an integer, say $m$, such that $n^3 + (n+1)^3 + (n+2)^3 = 9m$. $$n^3 + (n+1)^3 + (n+2)^3 = 3n^3 + 9n^2 + 15n + 9$$ $$= (9n^2+9) + (3n^3 +15n)$$ $$= 9(n^2 + 1) + 3(n^3 + 5n)$$ At this point I noticed that if I could prove $3\mid n^3 + 5n$, i.e. $n^3 + 5n = 3q$ for some $q \in \mathbb{Z}$, I'd have essentially completed the proof. I proved this by induction; I will omit the trivial base case $n=1$. I assumed that for an arbitrary integer $k$, $3\mid k^3 + 5k$, i.e. $k^3 + 5k = 3p$ for some $p \in \mathbb{Z}$. I sought to prove that $3\mid (k+1)^3 + 5(k+1)$. $$(k+1)^3 +5(k+1) = k^3 + 3k^2 + 8k + 6$$ $$= (k^3 + 5k) + (3k^2 + 3k + 6)$$ $$= 3p + 3(k^2 + k + 2)$$ $$= 3(p + k^2 + k + 2)$$ $$= 3q$$ where, by closure of the integers under addition and multiplication, $q = p + k^2 + k + 2$ is an integer.
|
In general, induction is a great tool, but when considering divisibility problems, looking at prime factors (i.e. 3 instead of 9) and using casework on the remainder (which could be simplified with modular arithmetic) often works better. Here's a quick way without induction or modular arithmetic. Letting $n$ be the middle number, we have that: $$(n-1)^3+ n^3+ (n+1)^3 = 3n(n^2+2)$$ Thus, it suffices to show that $3$ divides $n(n^2+2)$. Now we can do casework on the remainder of $n$ when dividing by 3. If $n=3a$, then $3$ divides $n=3a$. If $n=3a+1$, then 3 divides $n^2+2=9a^2+6a+3$. If $n=3a+2$, then 3 divides $n^2+2=9a^2+12a+6$. Thus, 3 divides either $n$ or $n^2+2$, so $3$ divides $n(n^2+2)$, and hence 9 divides $3n(n^2+2)$.
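A quick empirical check of both the middle-number identity and the divisibility claim (my own verification over a finite range):

```python
# Check (n-1)^3 + n^3 + (n+1)^3 = 3n(n^2 + 2), and that 9 divides it.
for n in range(-50, 51):
    s = (n - 1) ** 3 + n ** 3 + (n + 1) ** 3
    assert s == 3 * n * (n * n + 2)   # the identity is in fact an exact equality
    assert s % 9 == 0                 # hence 9 divides the sum of the three cubes
```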
|
|solution-verification|proof-writing|induction|divisibility|
| 0
|
Is there a nice form to $\sum_{k=1}^{n}\left\lfloor\sqrt{ r^2-k^2 }\right\rfloor$?
|
$$\sum_{k=1}^{n}\left\lfloor\sqrt{ r^2-k^2 }\right\rfloor$$ where $r$ is a constant (not necessarily an integer). Note that $r\ge n$ and both $r, n$ are positive. I apologize if I'm not able to contribute a lot to this problem, as the floor function is really weird to me. I have never seen this inside a summation and don't know what to make of this problem. If anything, can one at least place bounds on this summation? I know that the floor function is defined to satisfy the following property: if $x:=\left\lfloor t \right\rfloor$, then $$x \le t < x+1, \quad\text{equivalently}\quad t-1 < x \le t.$$ How should I proceed? Are there any properties that I should know about when dealing with floor functions in summations? UPDATE: Thank you for the answer. Are there bounds that I can place around the summation that get more accurate as $n$ gets larger?
|
Consider the $xy$-plane, which contains positive lattice points $(j,k)$ with $j,k\in\Bbb N$. For any fixed $k$, the expression $\lfloor\sqrt{r^2-k^2}\rfloor$ counts the number of positive integers $j\le\sqrt{r^2-k^2}$, which is the same as the number with $j^2+k^2\le r^2$. In other words, $\lfloor\sqrt{r^2-k^2}\rfloor$ is the number of positive lattice points at height $k$ inside the circle of radius $r$ centered at the origin. The entire sum is therefore the number of positive lattice points of height at most $n$ inside that circle. There isn't a nice closed formula for that quantity, even when $n\ge r$ so that the height restriction is unnecessary. But it is a standard result that the number of lattice points in a "nice" region is approximately its area. So the number of lattice points will be approximately the area of the region $\{(x,y)\in\Bbb R^2\colon x\ge0,\, 0\le y\le n,\, x^2+y^2\le r^2\}$, which is (fun calculus problem) $$ \begin{cases} \displaystyle \frac{1}{2} \left(n \sqrt{r^2-n^2} + r^2\arcsin\frac nr\right), & n\le r,\\[2mm] \displaystyle \frac{\pi r^2}{4}, & n\ge r. \end{cases} $$
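The lattice-point interpretation can be cross-checked directly (my own sketch): the floor-sum should exactly equal a brute-force count of positive lattice points $(j,k)$ with $j^2+k^2\le r^2$ and $1\le k\le n$.

```python
import math

def floor_sum(r, n):
    # The sum from the question: sum of floor(sqrt(r^2 - k^2)) for k = 1..n.
    return sum(math.floor(math.sqrt(r * r - k * k)) for k in range(1, n + 1))

def lattice_count(r, n):
    # Direct count of positive lattice points inside the circle, height <= n.
    return sum(1 for k in range(1, n + 1)
                 for j in range(1, int(r) + 1) if j * j + k * k <= r * r)

for r, n in [(10.0, 10), (7.5, 5), (20.0, 12)]:
    assert floor_sum(r, n) == lattice_count(r, n)
```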
|
|sequences-and-series|summation|ceiling-and-floor-functions|
| 0
|
Why we have to prove both $Q$ and $R$ in $P\implies (Q\lor R)$
|
I'm studying proofs, trying to use logic before starting with the proof. A direct proof can be written as $P\implies Q$: by forcing $P$ to be true, we have to force $Q$ to be true so the statement stays true. But in a case of the form $P\implies (Q \lor R)$, why do we have to prove both propositions ($Q$, $R$) individually? Truth tables give 3 cases that are true when $P$ is true: the rows $(P,Q,R)=(T,T,T)$, $(T,T,F)$, $(T,F,T)$ all make $P\implies (Q\lor R)$ true. Suppose $x,y\in\mathbb{Z}$. If $xy$ is even, then $x$ is even or $y$ is even. My reasoning via truth tables also applies in this case: we just need to show that $x$ is even to prove $xy$ is also even. No matter which value $y$ has, it'll always be true that $xy$ is even if $x$ is even. What's the flaw in my reasoning, and why is it necessary to prove it by cases, $x$ being even and $y$ being even?
|
In general, proving that $P\implies Q$ is sufficient to prove that $P\implies Q\lor R$. Sufficient, but not necessary, and in fact, oftentimes proving $P\implies Q$ may be impossible! Take, for example, the statement "If I have fewer than $2$ legs, then I either have one leg or I have no legs". Clearly, this is a true statement in which $P$ is "I have fewer than $2$ legs", $Q$ is "I have one leg" and $R$ is "I have no legs". So, in this case, the statement $P\implies Q\lor R$ is true. The statement $P\implies Q$ is false (since it equals "If I have fewer than two legs, I must have one leg", a statement contradicted by the existence of people with no legs). The statement $P\implies R$ is false (since it equals "If I have fewer than two legs, I must have no legs", a statement contradicted by the existence of people with one leg). In your case, if $P$ is "$xy$ is even", $Q$ is "$x$ is even" and $R$ is "$y$ is even", then proving $P\implies Q$ will be impossible, because the statement is simply false: $xy$ can be even while $x$ is odd (e.g. $x=3$, $y=2$).
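The point can be made concrete by enumerating small cases (my own sketch): $P\implies(Q\lor R)$ holds for every pair, while $P\implies Q$ and $P\implies R$ each fail on explicit witnesses.

```python
from itertools import product

pairs = list(product(range(1, 20), repeat=2))

# P => (Q or R): whenever xy is even, x is even or y is even.
for x, y in pairs:
    if (x * y) % 2 == 0:
        assert x % 2 == 0 or y % 2 == 0

# P => Q fails: there are pairs with xy even but x odd (e.g. x=3, y=2).
assert any((x * y) % 2 == 0 and x % 2 == 1 for x, y in pairs)
# P => R fails symmetrically: xy even but y odd.
assert any((x * y) % 2 == 0 and y % 2 == 1 for x, y in pairs)
```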
|
|logic|proof-explanation|
| 1
|
Solving the system $A\cos x +B\cos y=C$, $A\sin x + B\sin y=D$
|
It is not a school-related problem, and pretty much the title. I am looking for a way to solve the system of trigonometric equations: $$ A\cos(x) + B\cos(y) = C \tag{1} $$ $$ A\sin(x) + B\sin(y) = D \tag{2} $$ I have already solved the system for the specific case of $A=B$ and am looking to generalize this to $A \neq B$ . For reference, my solution to the specific case $A=B$ is: $$x = \cos^{-1}(\frac{C}{A} - \cos(\cot^{-1}(\frac{C}{D})-\cos^{-1}(\frac{D}{2A\sin(\cot(\frac{C}{D})})) \tag{3}$$ $$y = \cot(\frac{C}{D}) - \cos^{-1}(\frac{D}{2A\sin(\cot(\frac{C}{D})})) \tag{4}$$ I have been using computer software such as WolframAlpha to solve it, and they do solve it, but I've been unable to come up with a solution by hand after a couple months of working on it pretty regularly. Can it be solved by hand?
|
Final edit: general solution and constraints. This answer started off as a one-liner for a special case (see rollbacks if you like), but given how long you've worked on the problem (and now me too!) here is a rewrite with a general geometric solution, which also shows where @Andrei's solution comes from. I'm going to treat $A,B,C,D$ as positive, though you could reframe things to accommodate negative values. I also presume $C \leq A+B$, otherwise the first equation cannot be true, and for geometric simplicity that $A,B \leq C$, so that we are not dealing with negative distances at any point. If we fix $A,B$ with $C$ some permissible value, $A,B \leq C \leq A+B$, then we have zero, one, or two solutions as the diagrams below indicate. Case $A+B < \sqrt{C^2+D^2}$: no solution. Case $A+B = \sqrt{C^2+D^2}$: one solution, $x=y= \arctan{\frac{D}{C}}$. Case $A+B > \sqrt{C^2+D^2}$: two, one, or zero solutions per the diagrams below. In this regime, the solutions form a kite in which the diagonal $R= \sqrt{C^2+D^2}$, at an a
|
|algebra-precalculus|trigonometry|systems-of-equations|recreational-mathematics|
| 0
|
Defining the Y combinator in terms of S, K and I
|
We know that the Y-combinator is defined as: $$\text{Y}:=\lambda f.(\lambda x.f(xx))(\lambda x.f(xx))$$ Wikipedia says : $$\text{Y}:=\text{S(K(SII))(S(S(KS)K)(K(SII)))}$$ Now the question is: What logical steps can we take to convert the first definition to the second? While it is easy to show the equivalence between the two definitions, finding how the first definition can motivate and lead to the second definition is, in my opinion, a tricky task. I have added my proof as an answer, but all other ideas and suggestions are welcome.
|
Let's define $$\text{E}=\lambda\text{x. f (x x)}$$ which leads to: $$ \begin{align*} \text{E x}&=\text{f (x x)}\\ &=\text{f (I x (I x))}\\ &=\text{f (S I I x)}\\ &=\text{(K f x) (S I I x)}\\ &=\text{S (K f) (S I I) x}\\ &=\text{(K S f) (K f) (S I I) x}\\ &=\text{S (K S) K f (S I I) x}\\ &=\text{S (K S) K f (K (S I I) f) x}\\ &=\text{S (S (K S) K)(K (S I I)) f x}\\ \therefore \text{ E}&=\text{S (S (K S) K)(K (S I I)) f}\\ &=\text{T f [Let]} \end{align*} $$ Now $\text{Y}=\lambda\text{f. E E}$ , so: $$\begin{align*} \text{Y f}&=\text{E E}\\ &=\text{T f (T f)}\\ &=\text{S T T f}\\ \therefore\text{ Y}&=\text{S T T}\\ &=\text{S S I T}\\ &=\text{S S I (S (S (K S) K)(K (S I I)))}\\ &=\text{S (K (S I I)) (S (S (K S) K)(K (S I I)))} \end{align*}$$ Note : See this for why $\text{SSI}$ and $\text{S(K(SII))}$ are equivalent.
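The finite identities used in the derivation can be spot-checked with curried Python functions standing in for $\text{S}$, $\text{K}$, $\text{I}$ (my own sketch; the full $\text{Y}$ itself would diverge under Python's strict evaluation, so only the intermediate steps are tested, with a tuple-building dummy in place of a genuinely self-applicable term):

```python
# The three combinators as curried functions.
S = lambda f: lambda g: lambda x: f(x)(g(x))
K = lambda a: lambda b: a
I = lambda a: a

# S I I x = x x  (the self-application combinator)
x = lambda a: ("app", a)          # dummy value that records application
assert S(I)(I)(x) == x(x)

# S (K f) (S I I) x = f (x x)  -- the step abstracting E x = f (x x)
f = lambda t: ("f", t)
assert S(K(f))(S(I)(I))(x) == f(x(x))

# T = S (S (K S) K) (K (S I I)) satisfies T f x = f (x x), as derived above.
T = S(S(K(S))(K))(K(S(I)(I)))
assert T(f)(x) == f(x(x))
```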
|
|logic|lambda-calculus|combinatory-logic|
| 0
|
Why are $\sin x$ and $\cos x$ continuous on $\mathbb R$
|
I have seen many calculus books to use the limits $\lim_{x\to0}\sin x=0$ and $\lim_{x\to0}\cos x =1$ without their proofs. Are these limits obvious? I think no. These two limits are essentially telling us that the functions $\sin x$ and $\cos x$ are continuous at $x=0$ . But how do we know this? In fact, it is a standard result that $\sin x$ and $\cos x$ are continuous (and differentiable too) on $\mathbb R$ . Again, I have no clue about its proof. Thanks in advance for the help!
|
One possible way to overcome such difficulty consists in defining the trigonometric functions as power series. More precisely, one has that: \begin{align*} & \sin(x) := x - \frac{x^{3}}{3!} + \frac{x^{5}}{5!} - \frac{x^{7}}{7!} + \ldots\\\\ & \cos(x) := 1 - \frac{x^{2}}{2!} + \frac{x^{4}}{4!} - \frac{x^{6}}{6!} + \ldots \end{align*} Hence the proposed limits make sense (as well as continuity and differentiability).
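Truncating these series gives fast-converging approximations; a quick numeric comparison against the library functions (my own check, with an arbitrary truncation depth):

```python
import math

def sin_series(x, terms=12):
    # Partial sum of x - x^3/3! + x^5/5! - ...
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

def cos_series(x, terms=12):
    # Partial sum of 1 - x^2/2! + x^4/4! - ...
    return sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k)
               for k in range(terms))

for x in (0.0, 0.5, 1.0, 2.0, math.pi):
    assert abs(sin_series(x) - math.sin(x)) < 1e-10
    assert abs(cos_series(x) - math.cos(x)) < 1e-10
```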
|
|calculus|limits|
| 0
|
What does adding a little o mean in differentiability?
|
A function $f: {\mathbb{R}^m} \to {\mathbb{R}^n}$ is said to be differentiable at $x$ if there exists a linear function $L(x)$ such that $f(x+h)-f(x)=L(x)h + o(h)$ as $h\to0$. I'm a bit confused about the use of little $o$ here. Which one of the following interpretations is correct? (1) There exists a function $\alpha(t) = o(t)$ as $t\to 0$, such that for any $h \in \mathbb{R}^m$, $f(x+h)-f(x)=L(x)h + \alpha(h)$. In this interpretation $o(h)$ is an $o(t)$ function evaluated at $h$, and the "addition" here is actual vector addition. (2) The function $g(h) = f(x+h)-f(x)$ is $L(x)h + o(h)$. In this interpretation, we are actually performing function addition, and $o(h)$ represents an $o(h)$ function rather than some $o(t)$ function evaluated at $h$. Based on the answer to the above question, I'm confused about the use of little-o notation in the proof of differentiation of function composition. Here $g$ is differentiable at $f(x)$ and $g'(f(x))$ denotes the differential. $$g(f(x + h)) - g(f
|
Everything depends on $x$ , so maybe $o_x(h)$ would be more accurate. Anyway, it is vector addition, not number addition: in $\mathbf R^n$ , $f(x+h) - f(x) - L(x)h = o(h)$ as $h \to 0$ in $\mathbf R^m$ , i.e., $|\!|f(x+h) - f(x) - L(x)h|\!|/|h| \to 0$ as $h \to 0$ . Or $f(x+h) = f(x) + L(x)h + \alpha_x(h)$ , where $\alpha_x$ is a function on a small neighborhood of $0$ in $\mathbf R^m$ with values in $\mathbf R^n$ such that $|\!|\alpha_x(h)|\!|/|h| \to 0$ as $h \to 0$ .
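The defining ratio can be observed numerically (my own illustration; the sample map $f(x,y)=(xy,\sin x)$ and base point are arbitrary choices): $\|f(a+h)-f(a)-L\,h\|/\|h\|$ shrinks as $h\to 0$, which is exactly the little-o condition.

```python
import math

def f(x, y):
    return (x * y, math.sin(x))

# Base point a = (1, 2); Jacobian of f there is [[y, x], [cos(x), 0]].
ax, ay = 1.0, 2.0
L = [[ay, ax], [math.cos(ax), 0.0]]

def ratio(hx, hy):
    # ||f(a+h) - f(a) - L h|| / ||h||, the quantity that must tend to 0.
    fx, fy = f(ax + hx, ay + hy)
    gx, gy = f(ax, ay)
    rx = fx - gx - (L[0][0] * hx + L[0][1] * hy)
    ry = fy - gy - (L[1][0] * hx + L[1][1] * hy)
    return math.hypot(rx, ry) / math.hypot(hx, hy)

r1, r2, r3 = ratio(1e-1, 1e-1), ratio(1e-2, 1e-2), ratio(1e-3, 1e-3)
assert r1 > r2 > r3    # the ratio decreases toward 0 with ||h||
assert r3 < 1e-2
```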
|
|real-analysis|calculus|analysis|multivariable-calculus|asymptotics|
| 0
|
Failure of translation-invariance of an integral?
|
Let $u$ be a smooth compactly supported function on $\mathbb{C}$ . The $\overline{\partial}$ -Poincaré lemma is the formula $$\frac{{\partial}}{{\partial} \overline{z}}\frac{1}{\pi} \int_{\mathbb{C}} \frac{u(w)}{w - z} dx \wedge dy = u(z)$$ where $dx \wedge dy$ is the standard Lebesgue measure on $\mathbb{C}$ ( $w = x + iy$ ). In proving this, it is not immediately justified to pass the derivative through the integral, since for any fixed $w$ the integrand is an unbounded function of $z$ , so, e.g., the Dominated Convergence Theorem fails. But one can make the change of variables $w \to w + z$ to remedy this, so the above equals $$\frac{1}{\pi} \int_{\mathbb{C}} \frac{{\partial}}{{\partial} \overline{z}}\frac{u(w + z)}{w } dx \wedge dy.$$ From here one can make a standard limit argument using Stokes' theorem to finish the proof of the formula. My confusion is that the integral $\frac{1}{\pi} \int_{\mathbb{C}} \frac{{\partial}}{{\partial}\overline{ z}}\frac{u(w + z)}{w } dx \wedge dy$ a
|
The issue turned out to be notational, rather than anything subtle with limits or integration. The function $\frac{\partial}{\partial \overline{z}} \frac{u(w)}{w - z}$ is not the function you get by composing $\frac{\partial}{\partial \overline{z}} \frac{u(w + z)}{w}$ with $w \mapsto w - z$ ; this notation confuses dummy variables with actual variables. The correct function we get from change of variables is more accurately notated as $$\frac{\partial}{\partial \overline{z}}\Big|_{(w, z) = (w_0 - z_0, z_0)} \frac{u(w + z)}{w},$$ as a function of $w_0, z_0$ , treating $z, w$ as dummy variables.
|
|real-analysis|complex-analysis|measure-theory|
| 1
|
How many homomorphisms from $\mathbb Z_4$ to $S_4$?
|
Let $\varphi: \mathbb{Z_4}\to S_4$ be a homomorphism. From theory I have: Since $0$ is the identity element of $\mathbb{Z_4}$, $\varphi(0)=\operatorname{id}_{S_4}$. Also $\varphi(-1)=\varphi(3)=(\varphi(1))^{-1}$ and $\varphi(-2)=\varphi(2)=(\varphi(2))^{-1}$. These don't seem enough to find all homomorphisms. What am I missing?
|
More generally, let's attempt to enumerate the homomorphisms $f:\mathbb Z_n\to S_m$. For a non-trivial homomorphism to exist, it is necessary that a non-trivial subgroup of $\mathbb Z_n$ embeds inside $S_m$. $1$ is a generator of $\mathbb Z_n$, so knowledge of $f(1)$ fully determines a homomorphism $f$. $1$ is an element of order $n$, so it must be sent to a permutation $\pi\in S_m$ whose order is a divisor of $n$; otherwise, the fact that a homomorphism maps identity to identity, i.e. $f(0)=\text{id}$, will be contradicted. Let $\rho (k)$ be the number of elements of order $k$ in $S_m$; then the number of required homomorphisms is: $$\sum\limits_{\text{$k$ is a divisor of $n$}} \rho(k)$$ In your particular case, where $n=4$, we are lucky that $n$ has so few divisors, namely $1$, $2$ and $4$. Now count the number of elements of these orders in $S_4$: $\rho(1)= 1$, $\begin{align}\rho(2) &= |\text{cl}((12))| + |\text{cl}((12)(34))|\\&= 6 + 3 \\&= 9,\end{align}$ and $\rho(4)=\frac{4!}{4}=6$ (the $4$-cycles). Hence there are $1+9+6=16$ homomorphisms.
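The count can be verified exhaustively (my own check, using Python 3.9+ for `math.lcm`): the order of a permutation is the lcm of its cycle lengths, and a homomorphism $\mathbb Z_4\to S_4$ corresponds to an element whose order divides $4$.

```python
from itertools import permutations
from math import lcm

def order(p):
    # Order of a permutation given in one-line notation: lcm of cycle lengths.
    seen, lengths = set(), []
    for i in range(len(p)):
        if i not in seen:
            length, j = 0, i
            while j not in seen:
                seen.add(j)
                j = p[j]
                length += 1
            lengths.append(length)
    return lcm(*lengths)

count = sum(1 for p in permutations(range(4)) if 4 % order(p) == 0)
assert count == 16   # 1 identity + 9 of order 2 + 6 of order 4
```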
|
|abstract-algebra|group-theory|symmetric-groups|group-homomorphism|
| 0
|
What are generic ideals and dense sets, intuitively?
|
Someone commented under one of my previous posts that, intuitively, a generic set isn't supposed to have any "conspicuous properties". I wonder what the precise meaning of that comment was and whether we could provide some kind of proof of this. For reference, I'm working with these definitions: A dense set $D$ is any subset of the forcing notion $P$ such that every $p \in P$ has an extension in $D$ . A generic ideal $G \subseteq P$ is an ideal that intersects every dense subset of $P$ that is also an element of the model $\mathbf{M}$ . I have been thinking about this myself, but haven't gotten anywhere concrete. Here's what I was working with: consider a specific forcing notion, say the set $P \in \mathbf{M}$ of all finite partial functions from $\mathbb{N}$ to $\mathbb{N}$ . Suppose $D \subseteq P$ . I will call some $(a_1, a_2, \dots, a_n) \in \mathbb{N}^n$ "determined in $D$ " if the sentence $$(\exists b_1 \exists b_2 \dots \exists b_n) (\forall f \in D) ((f(a_1) \neq b_1) \lor (f
|
The statement about $G$ not having any "conspicuous properties" means that any first-order property of $M[G]$ depends only on the facts that (1) $G$ is $M$ -generic on $P,$ and (2) some particular condition $p$ belongs to $G.$ In detail: Let $\varphi$ be a first-order sentence in the language of set theory. Then there exists some $p\in G$ with the following property: For every $H\subseteq P$ such that $H$ is $M$ -generic on $P$ and $p\in H,$ we have that $$M[H]\models\varphi\;\text{ iff }\;M[G]\models\varphi.$$ You can even allow $\varphi$ to contain constants from $M$ or names representing members of the generic extension. (Any names must be interpreted as appropriate in $M[G]$ or $M[H].)$ Moreover, if the partial ordering $P$ is homogeneous and if we just allow $\varphi$ to contain constants from $M$ (but no mentions of other names), then all generic $G$ agree on whether $\varphi$ is true in $M[G].$ (In other words, we don't need $p.)$
|
|set-theory|model-theory|forcing|
| 1
|
Algebraic points on circles/solutions to $x^2+y^2=a$ over $\overline{\mathbb Q}$: is there a point with odd degree field extension over $\mathbb Q$?
|
It is well known how to parameterize rational solutions to $x^2+y^2=a$ for $a\in \mathbb N^+$: first we know which $a$ admit integer solutions; our characterization is strong enough to tell us that these are exactly the $a$ admitting rational solutions; then if we have one rational solution we can get all of them by drawing rational-slope lines through that one point. I am interested in knowing if people understand well the algebraic solutions to $x^2+y^2=a$, i.e. the solutions $x,y\in \overline{\mathbb Q}$ to $x^2+y^2=a$. In particular, I am interested in knowing if there exists an algebraic solution $(x_0,y_0)\in \overline{\mathbb Q} \times \overline{\mathbb Q}$ to $x^2+y^2=3$ such that the degree of the field extension $[\mathbb Q(x_0,y_0): \mathbb Q]$ is odd. (Indeed it has no solutions over $\mathbb Q$.) EDIT: a "famous" (in the sense of a large number of upvotes/views) question on MSE in fact shows that $[\mathbb Q(x_0+iy_0):\mathbb Q]$ must be even. If $z_0:= x_0+iy_0$ was a
|
Theorem . Let $d \in \mathbf Q$ not be a square and $a$ be nonzero in $\mathbf Q$ . If the equation $x^2 - dy^2 = a$ has a solution in an odd-degree extension of $\mathbf Q$ then it has a solution in $\mathbf Q$ . Therefore when $x^2 - dy^2 = a$ has no solution $(x,y)$ in $\mathbf Q$ , it has no solution in any odd-degree extension of $\mathbf Q$ . Proof . This will be an application of transitivity of the norm map. Suppose there's a number field $K$ with odd degree $n$ over $\mathbf Q$ such that $x^2 - dy^2 = a$ for some $x, y \in K$ . Since $d$ is not a square in $\mathbf Q$ and $[K:\mathbf Q]$ is odd, $\sqrt{d} \not \in K$ , so $[K(\sqrt{d}):K] = 2$ . Then $$ a = x^2 - dy^2 = {\rm N}_{K(\sqrt{d})/K}(x + y\sqrt{d}). $$ Applying the norm map from $K$ down to $\mathbf Q$ to that equation: $$ a^n = {\rm N}_{K/\mathbf Q}({\rm N}_{K(\sqrt{d})/K}(x+y\sqrt{d})) = {\rm N}_{K(\sqrt{d})/\mathbf Q}(x+y\sqrt{d}). $$ We used transitivity of the norm map at the end of that calculation. Now going f
|
|number-theory|field-theory|algebraic-number-theory|extension-field|
| 1
|
Non-cyclic numbers that are not the order of any capable group
|
A group $G$ is said to be capable if there is some group $H$ for which $H/Z(H)$ is isomorphic to $G$ . It is known that the only capable cyclic group is the trivial group. So, if $n$ is a cyclic number (i.e., the cyclic group is the only group of order $n$ ) other than $1$ , then there is no group whose center has index $n$ . But is there a non-cyclic number $n$ for which there is no group whose center has index $n$ (or equivalently, there is no capable group of order $n$ )? Checking small values of $n$ : $n=4$ : The Klein four-group is capable (arising as the central quotient of the quaternion group). $n=6$ : The symmetric group $S_3$ is capable (arising as its own central quotient, as it is centerless). $n=8$ : A finite abelian group is known to be capable if and only if it is isomorphic to a finite direct sum $\mathbb{Z}_{n_1} \oplus \mathbb{Z}_{n_2} \oplus ... \oplus \mathbb{Z}_{n_k}$ where $n_1 \mid n_2 \mid ... \mid n_{k-1} = n_k$ (i.e., the last two invariant factors coincide).
|
A number $n$ with prime factorization $$n=p_1^{a_1}\cdots p_r^{a_r}$$ where the $p_i$ are distinct primes and $a_i\geq 1$ for all $i$, is a nilpotent number if and only if for all $1\leq i,j\leq r$, and $1\leq k\leq a_i$, $p_i^k\not\equiv 1\pmod{p_j}$. A theorem of Pazderski from 1959 shows that every group of order $n$ is nilpotent if and only if $n$ is a nilpotent number. It is a theorem of Dickson from 1905 that every group of order $n$ is abelian if and only if $n$ is a cube-free nilpotent number. It is a theorem of Szele from 1947 that every group of order $n$ is cyclic if and only if it is a squarefree nilpotent number. (See this post) It is a corollary to a theorem of Baer that a finite abelian group is capable if and only if it is not cyclic and its two largest invariant factors are equal. That is, if $A$ is a finite abelian group written in the form $$A= C_{m_1}\oplus\cdots\oplus C_{m_r},\qquad m_1\mid m_2\mid\cdots\mid m_r,$$ with $C_k$ the cyclic group of order $k$, then $A$ is capable if and only if $r\geq 2$ and $m_{r-1}=m_r$.
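The nilpotent-number criterion is easy to implement directly (my own sketch; the helper names are mine), and the three classical theorems above give known test values:

```python
def prime_factorization(n):
    # Return {prime: exponent} by trial division.
    factors, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def is_nilpotent_number(n):
    # n is nilpotent iff no p_i^k (k <= a_i) is 1 mod another prime factor p_j.
    f = prime_factorization(n)
    return not any(pow(p, k, q) == 1
                   for p, a in f.items()
                   for q in f if q != p
                   for k in range(1, a + 1))

# 15 is a squarefree nilpotent number (every group of order 15 is cyclic);
# 8 is a prime power, hence nilpotent; 6 and 12 are not nilpotent numbers
# (S_3 and A_4 are non-nilpotent groups of those orders).
assert is_nilpotent_number(15) and is_nilpotent_number(8)
assert not is_nilpotent_number(6) and not is_nilpotent_number(12)
```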
|
|group-theory|
| 0
|
Extension of a differentiable function $f$ to an open superset
|
This is a question from the book Munkres, Calculus on Manifolds, p.144 (Exercise 3-b): If $f :S\to \mathbb R$ and $f$ is differentiable of class $C^r$ at each point $x_0$ of $S$, then $f$ may be extended to a $C^r$ function $h: A\to \mathbb R$ that is defined on an open set $A$ of $\mathbb R^n$ containing $S$. My attempt: with $f:S \to \mathbb R$ of class $C^r$, for each $x \in S$ I can choose an open neighborhood $U_x$ of $x$ such that $\cup U_x = A$ is open (an arbitrary union of open sets is open). In the previous part (p.144, Exercise 3-a) it is shown that if $f$ is $C^r$ then there always exists $g:U_x \to \mathbb R$, where $x \in U_x \subset \mathbb R^n$, such that $f$ is equal to $g$ on $U_x \cap S$, and $$h(x) =\left \{ \begin{matrix} \phi(x)g(x)& \mbox{if }x\mbox{ $\in U_x$ } \\ 0 & \mbox{if }x\mbox{ $\notin \operatorname{supp} \phi$}\end{matrix}\right. $$ is $C^r$, with $\phi:\mathbb R^n \to \mathbb R$ of class $C^r$ whose support is in $U_x$. Choosing $h:A \to \mathbb R$ with
|
The author solves this exercise on p.199 (proof of Lemma 23.1). Lemma 23.1. Let $S$ be a subset of $\mathbb{R}^k$; let $f:S\to\mathbb{R}^n$. If for each $x\in S$, there is a neighborhood $U_x$ of $x$ and a function $g_x:U_x\to\mathbb{R}^n$ of class $C^r$ that agrees with $f$ on $U_x\cap S$, then $f$ is of class $C^r$ on $S$. Proof. The lemma was given as an exercise in $\S$ 16; we provide a proof here. Cover $S$ by the neighborhoods $U_x$; let $A$ be the union of these neighborhoods; let $\{\phi_i\}$ be a partition of unity on $A$ of class $C^r$ dominated by the collection $\{U_x\}$. For each $i$, choose one of the neighborhoods $U_x$ containing the support of $\phi_i$, and let $g_i$ denote the $C^r$ function $g_x:U_x\to\mathbb{R}^n$. The $C^r$ function $\phi_ig_i:U_x\to\mathbb{R}^n$ vanishes outside a closed subset of $U_x$; we extend it to a $C^r$ function $h_i$ on all of $A$ by letting it vanish outside $U_x$. Then we define $$g(x)=\sum_{i=1}^\infty h_i(x)$$ for each $x\in A$.
|
|calculus|analysis|functions|derivatives|
| 0
|
Why Doesn't the St Petersburg Paradox Happen All the Time?
|
I am learning about the St Petersburg Paradox https://en.wikipedia.org/wiki/St._Petersburg_paradox - here is my attempt to summarize it: A fair coin is tossed at each stage. The initial stake begins at 2 dollars and is doubled every time tails appears. The first time heads appears, the game ends and the player wins whatever is the current stake. As we can see, this game has an expected reward of infinitely many dollars: $$E(X) = \sum_{i=1}^{\infty} x_i \cdot p_i$$ $$E = \sum_{n=1}^{\infty} \frac{1}{2^n} \cdot 2^n = \frac{1}{2} \cdot 2 + \frac{1}{4} \cdot 4 + \frac{1}{8} \cdot 8 + \frac{1}{16} \cdot 16 + ... = 1 + 1 + 1 + 1 + ... = \sum_{n=1}^{\infty} 1 = \infty$$ The paradox is that even though the game has an infinite expected reward, in real-life simulations the game usually ends with a finite reward. Although seemingly counterintuitive, this does seem logical. Even more, we can write computer simulations to see that a large number of games will have finite rewards. My question is about applying this reasoning to Brownian motion.
|
As others have noted, your main confusion seems to be that an infinite sum can have either a finite answer or an infinite answer depending on the specifics. Finite example: If you make $n$ dollars for $n$ heads, you get $\sum_{n=1} n/2^n=1/2+2/4+3/8+4/16\dots=2$ Infinite example: If you make $2^n$ for $n$ heads, you get $\sum_{n=1} 2^n/2^n=1+1+1+\dots\rightarrow \infty$ . Both of them have some possibility of making arbitrarily large amounts of money, but the first example has the payouts increase in size slowly enough that it doesn’t matter. Standard approaches to the St. Petersburg paradox are that super large $10^{30}$ payouts are infeasible and so meaningless and should be ignored and that after a certain point, you have enough money to satisfy your needs, so more money doesn’t actually matter. For your Brownian motion examples, it’s well known that if you start at $0$ and stop when you hit $+a$ or $-b$ , then it’ll take $ab$ steps on average. This makes sense because half the time
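The contrast between the two expectation sums can be computed directly (my own sketch): the "slow payout" series converges to $2$, while the St Petersburg partial sums grow linearly with the number of stages.

```python
# Finite example: sum of n / 2^n converges to 2.
finite = sum(n / 2 ** n for n in range(1, 200))
assert abs(finite - 2.0) < 1e-12

# St Petersburg example: each stage contributes exactly 1 to the expectation,
# so the partial sum after N stages is N, which diverges as N grows.
petersburg_partial = [sum(2 ** n / 2 ** n for n in range(1, N + 1))
                      for N in (10, 100, 1000)]
assert petersburg_partial == [10, 100, 1000]
```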
|
|probability|brownian-motion|paradoxes|
| 0
|
Unique Solution to the Heat Equation
|
Consider the IVP consisting of the homogeneous heat equation \begin{equation} v_t(x,t)=\alpha v_{xx}(x,t), \quad (x,t)\in(-\infty,\infty)\times(0,\infty), \tag{1} \end{equation} subject to the initial condition \begin{equation} v(x,0)=f(x), \quad x\in (-\infty,\infty). \tag{2} \end{equation} Here $\alpha>0$ and $f\in C^\infty$. It can be shown using Fourier transform methods that a solution to the IVP is given by $$ v(x,t)=\int_{-\infty}^\infty \frac{f(s)}{\sqrt{4\pi\alpha t}}\exp\left(-\frac{(x-s)^2}{4\alpha t}\right) ds. $$ I am trying to determine if this is a unique solution. I am familiar with the contradiction method where we assume that there exist two distinct solutions, $v_1(x,t)$ and $v_2(x,t)$, to the IVP. Hence, $$v(x,t)=v_1(x,t)-v_2(x,t) \tag{3}$$ is a solution to $(1)$, as this equation is linear and homogeneous. By looking at $(2)$, we find that $$ v(x,0)=v_1(x,0)-v_2(x,0)=f(x)-f(x)=0. $$ Hence, this does not satisfy $(2)$ and so $(3)$ is not a solution to the IVP. What is the correct way to establish uniqueness?
|
No, that's not a correct argument. (And what you're trying to prove is not true.) In order to show uniqueness, you would like to conclude that $v(x,t) = v_1(x,t) - v_2(x,t)$ is identically zero. As you noticed, $v$ solves the heat equation with the initial condition $v(x,0)=0$ , so the question now is whether that IVP (not your original IVP (1)+(2)) has a unique solution, namely the trivial solution which is identically zero. The fact that $v$ doesn't satisfy (2) is completely irrelevant. However, the IVP for $v$ actually has nontrivial solutions unless you impose some extra assumptions! See, for example, this Math Overflow question: "Wild" solutions of the heat equation: how to graph them? . So without such assumptions, the solution to the IVP (1)+(2) is not unique.
|
|partial-differential-equations|heat-equation|
| 1
|
Existing irreducible elements infinitely in an infinite commutative domain with unit.
|
I want to know if there exist infinitely many irreducible elements in an infinite commutative domain with unit. In other words, is the following statement true? Let $R$ be an infinite commutative domain with unit. Then, $\# \{ a \in R \mid a$ is an irreducible element of $R \} = \infty$ This question is not an assignment, just a question of my own.
|
Not always true: it isn't true when $R$ is an infinite field, as in that case there are no irreducible elements (a more subtle example with no irreducible elements is in a comment to this answer). So probably you meant to require that $R$ is not a field. You probably also meant to ask whether, when $R$ is not a field, there have to be infinitely many irreducibles that are not unit multiples of each other . That too has counterexamples: there are infinite domains with only one irreducible element up to unit multiples. Such rings can be found among the subrings of $\mathbf Q$ : for each prime $p$ , let $R_p = \{a/b : a, b \in \mathbf Z, p \nmid b\}$ . These are the fractions that can be reduced mod $p$ . Its usual name is the localization of $\mathbf Z$ at $p$ , and in this ring $p$ is irreducible and every prime other than $p$ is a unit. The rings $R_p$ are examples of discrete valuation rings (DVRs), which all have just one irreducible element up to unit multiples. They are also exampl
|
|commutative-algebra|
| 0
|
Verifying Property of Constructed Brownian Motion using Haar Basis
|
This is an alternative construction of BM (alternative to the one using Kolmogorov Extension), called the Lévy construction. Let us define the Haar basis as $H_{k,l}$ and $Z_{k,l}\sim N(0,1)$ as IID random variables. Then define $G_k(t)=\sum_{l=0}^{2^{n-1}}Z_{k,l}\int_0^t H_{k,l}(s)ds$ , and let our construction of Brownian motion be: $$W_t=\sum_{k=1}^\infty G_k(t)$$ Recall that the Haar basis is $H_{k,l}(t)=2^{k/2}$ or $-2^{k/2}$ on alternating $2^{-k-1}$ -length sub-intervals on the interval $[0,1]$ . What I don't understand is how this respects the property of BM, that is, $(B_t - B_s)$ is independent, mean 0, normally distributed of variance $(t-s)$ . If we take, for example, $t=1/2$ and $s=0$ , the above construction would include both the $G_2$ path of $N(0,1)/\sqrt{2}=N(0,1/2)$ as well as half of the $G_1$ path of $N(0,1)$ (the remaining paths vanishing to 0). Any clarification would be appreciated (and lmk if any is needed from my end).
|
This is shown in The Haar functions and the Brownian motion and Construction of Brownian Motion using Haar wavelets . For $t>s$ we have $$W_t-W_s=\sum_{k=1}^\infty G_k(t)-G_k(s)=\sum_{k=1}^\infty \sum_{l=0}^{2^{n-1}}Z_{k,l}\int_{s}^{t} H_{k,l}(r)dr.$$ Let $\Phi_{k,l}(s,t):=\int_{s}^{t} H_{k,l}(r)dr$ . We study the correlation for $t>s, v>u$ $$\begin{align*} \mathbb{E}((W(t)-W(s))(W(v)-W(u))) &\stackrel{1}{=} \lim_{N \to \infty} \mathbb{E}((W_N(t)-W_N(s))(W_N(v)-W_N(u))) \\ &\stackrel{2}{=} \lim_{N \to \infty} \mathbb{E} \left( \sum_{m,n=1}^N\sum_{z,l} \Phi_{m,z}(s,t)\Phi_{n,l}(u,v) Z_{m,z}Z_{n,l} \right) \\ &\stackrel{3}{=} \sum_{n=1}^{\infty}\sum_{l} \Phi_{n,l}(s,t)\Phi_{n,l}(u,v) \\ &\stackrel{4}{=} \int_0^1 1_{[s,t)}(x) 1_{[u,v)}(x) \, dx. \end{align*}$$ For step 3 we used that the $Z_{k,l}$ are iid, and for step 4 we used the orthonormality of the Haar basis. The right-hand side equals $0$ if $[s,t) \cap [u,v) = \emptyset$ ; this shows that $W(t)-W(s)$ and $W(v)-W(u)$ are uncorrelated; hence, independent, as
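Step 4 can be seen concretely in a small numerical sketch (my own illustration, not from the linked posts): on a dyadic grid, $\sum_{k,l}\Phi_{k,l}(s,t)\Phi_{k,l}(u,v)$ equals the length of $[s,t)\cap[u,v)$ once the Haar levels resolve the endpoints. The constant function is included as the level-zero basis element.

```python
# Check sum over the Haar basis of Phi(s,t)*Phi(u,v) = |[s,t) ∩ [u,v)|
# (Parseval), with indicators represented exactly on a dyadic grid.
K = 4
N = 2 ** K            # grid cells on [0, 1]
dx = 1.0 / N

def haar(k, l):
    """Grid samples of the Haar function with amplitude 2^(k/2) on its support."""
    vals = [0.0] * N
    width = N // 2 ** k                 # cells in the support
    amp = 2 ** (k / 2)
    for i in range(l * width, l * width + width // 2):
        vals[i] = amp
    for i in range(l * width + width // 2, (l + 1) * width):
        vals[i] = -amp
    return vals

def indicator(s, t):
    return [1.0 if s <= (i + 0.5) * dx < t else 0.0 for i in range(N)]

def inner(f, g):
    return sum(a * b for a, b in zip(f, g)) * dx

s, t, u, v = 0.0, 0.5, 0.25, 0.75       # dyadic endpoints, overlap length 0.25
basis = [[1.0] * N] + [haar(k, l) for k in range(K) for l in range(2 ** k)]
total = sum(inner(h, indicator(s, t)) * inner(h, indicator(u, v)) for h in basis)
assert abs(total - 0.25) < 1e-9
```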
|
|probability|measure-theory|stochastic-processes|brownian-motion|
| 0
|
Finding any arbitrary integer point on a line with rational slope and intercept
|
Consider the equation of a simple line: $$ f(x)=mx+b $$ with the additional constraint that $m$ and $b$ are guaranteed to be rational numbers: $$ f(x) = \frac{m_n}{m_d}x + \frac{b_n}{b_d} $$ $$ m_n, m_d, b_n, b_d \in \mathbb{Z}; m_d, b_d \neq 0 $$ This question affirms the intuitive fact that for any $x \in \mathbb{Z}$ such that $f(x) \in \mathbb{Z}$ , $f(x+km_d) \in \mathbb{Z}$ holds true for any $k \in \mathbb{Z}$ . So it is quite easy to determine the location of the 'next' integer coordinate pair on a line, but only if you are given an integer coordinate pair that $f(x)$ passes through to start with. If you are not given a starting point, it seems to be very difficult to find one without doing manual iteration. I am looking for a function that, given the rational slope and y-intercept of an arbitrary line, returns any integer $x$ such that $f(x)$ is also an integer. There is no constraint on where $(x, f(x))$ lies on $f(x)$ . Both coordinates just have to be integers and $f(x)$ mus
|
The expression $\frac qr x + \frac st$ is an integer if and only if $qtx + rs$ is a multiple of $rt$ . In other words, finding integer values of $\frac qr x + \frac st$ is the same problem as solving the linear congruence $qtx + rs \equiv 0 \pmod{rt}$ in the variable $x$ . Solving linear congruences (including deciding when they have no solutions) is algorithmically easy, and many descriptions of the algorithm can be found via a web search. In brief: Divide all of $qt$ , $rs$ , and $rt$ through by their greatest common divisor, yielding $ax+b\equiv 0\pmod c$ . If $\gcd(a,c)>1$ , then there are no solutions. Otherwise, the solutions are given by the congruence $x\equiv -a^{-1}b\pmod c$ .
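The algorithm above can be sketched in a few lines (the function name and interface are my own; it assumes $q\neq 0$ and positive denominators):

```python
from math import gcd

def integer_point(q, r, s, t):
    """Smallest non-negative integer x with (q/r)*x + (s/t) an integer,
    i.e. a solution of q*t*x + r*s ≡ 0 (mod r*t), or None if none exists."""
    a, b, c = q * t, r * s, r * t
    g = gcd(gcd(a, b), c)
    a, b, c = a // g, b // g, c // g   # divide through by the common divisor
    if gcd(a, c) != 1:
        return None                    # no solutions
    return (-b * pow(a, -1, c)) % c    # x ≡ -a⁻¹b (mod c); needs Python 3.8+

# f(x) = (3/5)x + 2/5: x = 1 gives f(1) = 1
assert integer_point(3, 5, 2, 5) == 1
# f(x) = (3/4)x + 5/6 is never an integer
assert integer_point(3, 4, 5, 6) is None
```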
|
|number-theory|integers|
| 0
|
Proof for Particular Fair Shuffle Algorithm
|
I ran multiple simulations of the following function, and it seems to shuffle fairly, given that all permutations occurred roughly equally often, but I don't understand why it works. It's just inserting at random positions within the current shuffled deck, isn't it? I know about Fisher-Yates shuffling, for reference.

def shuffle_deck(deck):
    shuffled_deck = []
    for card in deck:
        r = random.randint(0, len(shuffled_deck))
        shuffled_deck.insert(r, card)
    return shuffled_deck

I was expecting the resulting permutations to have uneven probabilities.
|
This is (almost) the same as the Fisher-Yates shuffle, but with less optimization. The Fisher-Yates shuffle can be thought of as performing the following (high-level, conceptual) steps: Make a new, empty list that will eventually be the output. Repeatedly select a random element from the input list. Remove the random element from the input list. Add the random element to the output list (at the beginning or end). However, Durstenfeld made a simple optimization: Instead of making a separate output list, he moves the elements to the beginning or end of the input list, and treats that portion of the input list as the output list. Then, the other optimization is to swap things instead of moving them (so that we don't have to shift everything over all the time). As a side effect, this causes whichever element was at the front or back of the input list to be moved into the position where the random element used to be. But that makes no difference to the overall shuffling algorithm (exercise
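One can also check the fairness of the posted function exhaustively rather than by sampling: enumerate every equally likely sequence of insertion positions and confirm that each permutation arises exactly once (a small sketch of mine):

```python
from itertools import product
from collections import Counter

def shuffle_with_choices(deck, choices):
    """The posted shuffle, but with the random insertion points given explicitly."""
    out = []
    for card, r in zip(deck, choices):
        out.insert(r, card)
    return tuple(out)

deck = [1, 2, 3, 4]
# the i-th card is inserted into a list of length i-1, so it has i choices
all_choices = product(*[range(i + 1) for i in range(len(deck))])
counts = Counter(shuffle_with_choices(deck, rs) for rs in all_choices)
# 1·2·3·4 = 24 equally likely choice sequences, one per permutation
assert len(counts) == 24 and set(counts.values()) == {1}
```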
|
|probability|combinatorics|algorithms|python|
| 0
|
Solving a complex cubic equation
|
I am trying to solve the following equation: $$ z^3 + z +1=0 $$ Attempt: I tried to factor out this equation to get a polynomial term, but none of the roots of the equation is trivial.
|
Apply the cubic formula. https://math.stackexchange.com/a/4868747/928654 We get $$z_1=\sqrt[3]{-\frac12+\sqrt{\frac14+\frac{1}{27}}}+\sqrt[3]{-\frac12-\sqrt{\frac14+\frac{1}{27}}}=\sqrt[3]{-\frac12+\frac{\sqrt{31}}{6\sqrt3}}+\sqrt[3]{-\frac12-\frac{\sqrt{31}}{6\sqrt3}}$$ This is about $-0.6823$ if $i^2=-1$ $$z_2=(\frac{-1+i\sqrt3}{2})\sqrt[3]{-\frac12+\sqrt{\frac14+\frac{1}{27}}}+(\frac{-1-i\sqrt3}{2})\sqrt[3]{-\frac12-\sqrt{\frac14+\frac{1}{27}}}=(\frac{-1+i\sqrt3}{2})\sqrt[3]{-\frac12+\frac{\sqrt{31}}{6\sqrt3}}+(\frac{-1-i\sqrt3}{2})\sqrt[3]{-\frac12-\frac{\sqrt{31}}{6\sqrt3}}=-\frac12\sqrt[3]{-\frac12+\frac{\sqrt{31}}{6\sqrt3}}+\frac{i\sqrt3}{2}\sqrt[3]{-\frac12+\frac{\sqrt{31}}{6\sqrt3}}-\frac12\sqrt[3]{-\frac12-\frac{\sqrt{31}}{6\sqrt3}}-\frac{i\sqrt3}{2}\sqrt[3]{-\frac12-\frac{\sqrt{31}}{6\sqrt3}}=-\frac12(\sqrt[3]{-\frac12+\frac{\sqrt{31}}{6\sqrt3}}+\sqrt[3]{-\frac12-\frac{\sqrt{31}}{6\sqrt3}})+\frac{i\sqrt3}{2}(\sqrt[3]{-\frac12+\frac{\sqrt{31}}{6\sqrt3}}-\sqrt[3]{-\frac12-\fra
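A quick floating-point sanity check of $z_1$ (my own; note the second radicand is negative, so the real cube root must be taken):

```python
# Cardano: z1 = cbrt(-1/2 + sqrt(1/4 + 1/27)) + cbrt(-1/2 - sqrt(1/4 + 1/27))
disc = 1 / 4 + 1 / 27
u = (-0.5 + disc ** 0.5) ** (1 / 3)          # positive radicand
v = -((0.5 + disc ** 0.5) ** (1 / 3))        # real cube root of the negative radicand
z1 = u + v
assert abs(z1 ** 3 + z1 + 1) < 1e-9          # z1 really is a root
assert abs(z1 + 0.6823) < 1e-3               # matches the quoted -0.6823
```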
|
|complex-numbers|cubics|
| 0
|
Conditional distribution of interarrival time of second arrival in Poisson process
|
Given a Poisson process with parameter $\lambda$ , what is the probability $P(T_2 \leq 2 | N(3) = 1)$ , where $T_2$ is the interarrival time between the first and second arrivals? In words, given that there is one arrival in the first 3 time units, what is the probability that the second arrival occurs within 2 time units of the first arrival? My attempt: We want to calculate $P(T_2 \leq 2 | N(3) = 1) = \frac{P(T_2 \leq 2, N(3) = 1)}{P(N(3)=1)}$ . I know that $P(N(3)=1) = e^{-3\lambda}*3\lambda$ . The first arrival occurs in $[0,3)$ , and it must be that $T_1 > 1$ for the events { $T_2 \leq 2, N(3) = 1$ } to hold. Since $T_1$ is uniformly distributed on $[0,3)$ , $P(T_1 > 1) = \frac{2}{3}$ . If $T_1 = t_1$ for $1 \leq t_1 \leq 3$ , then $P(T_2 \leq 2, N(3) = 1) = P(N(t_1 - 1) > 0)$ . Not quite sure where to proceed from here, so would appreciate any hints!
|
The joint event $(T_2 \le 2) \cap (N(3) = 1)$ indeed requires that $T_1 > 1$ . It is natural to condition this event on the first arrival time, so given $T_1 = t$ for some $1 < t < 3$ , we need $3-t \le T_2 \le 2$ . Hence $$\begin{align} \Pr[(T_2 \le 2) \cap (N(3) = 1)] &= \int_{t=1}^3 \Pr[3-t \le T_2 \le 2 \mid T_1 = t]f_{T_1}(t) \, dt \\ &= \int_{t=1}^3 (e^{-(3-t)\lambda} - e^{-2\lambda}) \lambda e^{-\lambda t} \, dt \\ &= (2\lambda - 1)e^{-3\lambda} + e^{-5\lambda}. \end{align}$$ So the desired conditional probability is $$\Pr[T_2 \le 2 \mid N(3) = 1] = \frac{(2\lambda - 1)e^{-3\lambda} + e^{-5\lambda}}{3 \lambda e^{-3\lambda}} = \frac{2}{3} - \frac{1 - e^{-2\lambda}}{3\lambda}.$$
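A numerical check of the integral against the closed form (the rate $\lambda = 1.3$ is an arbitrary choice of mine):

```python
import math

lam = 1.3
n = 200_000                    # midpoint-rule subdivisions of t in [1, 3]
integral = 0.0
for i in range(n):
    t = 1 + 2 * (i + 0.5) / n
    integral += ((math.exp(-(3 - t) * lam) - math.exp(-2 * lam))
                 * lam * math.exp(-lam * t) * (2 / n))

closed = (2 * lam - 1) * math.exp(-3 * lam) + math.exp(-5 * lam)
assert abs(integral - closed) < 1e-8

cond = closed / (3 * lam * math.exp(-3 * lam))
assert abs(cond - (2 / 3 - (1 - math.exp(-2 * lam)) / (3 * lam))) < 1e-12
```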
|
|conditional-probability|poisson-process|
| 1
|
An algebraic space over etale topology is also a fppf-sheaf?
|
This is the Theorem 5.5.2 in M.Olsson's Algebraic Spaces and Stacks . (And Theorem A-4 in Champs algébriques by Gérard Laumon, Laurent Moret-Bailly) Here is the definition of algebraic spaces we used: Definition . Let $S$ be a scheme. An algebraic space over $S$ is a functor $$X:(Sch/S)^{op}\to(Sets)$$ such that (i) $X$ is a sheaf with respect to the big etale topology; (ii) $\Delta_{X/S}:X\to X\times_SX$ is representable; (iii) There exists an $S$ -scheme $U$ with surjective etale morphism $U\to X$ . RMK . In (iii) the 'surjective' means 'surjective representable morphism' instead of sheaf-theoretic (or presheaf-theoretic) surjective, I think. Theorem 5.5.2 . Let $S$ be a scheme, and let $X/S$ be an algebraic space over $S$ with quasi-compact diagonal. Then $X$ is a sheaf with respect to the fppf-topology. RMK . I modified some details in Olsson's proof after seeing the original proof in Champs algébriques . Proof and My Questions . Let $\overline{X}$ be the sheaf of fppf-topology associate
|
This is probably way too late, but I'm reading the same passage right now and I'll try to record what I figured out. I'll start with (b). $X$ is an étale sheaf by definition, and so to show schemewise surjectivity, it suffices to show that lifts from $\overline X$ to $X$ exist étale locally, since those lifts necessarily agree on pairwise fibered product by injectivity. For (c), we are not using fppf descent to get $s'$ . $s'$ exists fppf locally because we are considering the sheafification map. What descent theory is doing is that once we know that $X_0\times_{\overline X,s} V$ is a scheme quasi-affine over $V$ whenever $s:V\to \overline X$ factors through $X$ , we have an (effective) descent datum for quasi-affine schemes over a fppf covering $V\to U$ , and all $U$ admit such a map. This also gives an answer to (a), where the quasi-affineness guarantees the effectiveness of the descent datum.
|
|algebraic-geometry|
| 0
|
If $g\circ f=f$ in a category, then is $g$ necessarily the identity morphism?
|
I know momomorphisms are left-cancellative and epimorphisms are right-cancellative. But, let $C$ be any object in an arbitrary category $\mathbf{C}$ , and $f,g:C\to C$ be any morphisms st $gf=f$ . Then, do we necessarily have $g=\text{id}$ ? I am asking for some kind of cancellative law wrt composition. If it is not true, is there a handy counter-example? Also are there some special type of categories where this statement is indeed true? If $\mathbf C$ were an abelian category, then we have $0=gf-f=(g-\text{id})f$ . Now the question reduces to can two non-zero morphisms compose to give a zero morphism (ie morphisms in $\mathbf C$ that factorize through a zero object, or equivalently constant and co-constant, see definitions here )?
|
That's not even true in $\mathsf{Set}$ : take $C=\{1,2\}$ , $f=g$ mapping everything to $1$ . Now for an abelian category take $\mathsf{AbGroup}$ , $C=C_2\times C_2$ , $f=g$ sending $(x,y)$ to $(x,0)$ . You can see two nonzero endo-maps composing to zero in any abelian category by taking any nonzero object $D$ , letting $C=D\oplus D$ , and letting $f=\mathrm{id}\oplus \mathbf{z}$ and $g=\mathbf{z}\oplus \mathrm{id}$ , where $\mathbf{z}$ is the zero map from $D$ to itself. For that matter, the zero map composed with itself equals the zero map, and any non-identity projection in vector spaces or modules composed with itself equals itself.
|
|category-theory|abelian-categories|
| 1
|
L'hopital's rule doesn't work
|
To compare growth rates of $\sqrt{\log{(n)}}$ and $n^a$ , $a = 10^{-4}$ , I computed the limit $\lim_{n\to\infty} \frac{\sqrt{\log{(n)}}}{n^a}$ . Using L'Hôpital's rule, this limit came out to be $0$ . However, it seems obvious that $\sqrt{\log{(n)}}$ grows at a faster rate than $n^a$ , is my application of LH rule incorrect?
|
L'Hôpital's rule is right, the limit is zero. The problem is your intuition. Indeed $$\log(n)\ll n^{a}\quad \forall\ a > 0$$ i.e. the logarithm grows slower than any positive power of $n$ , no matter how small the exponent.
|
|limits|
| 0
|
L'hopital's rule doesn't work
|
To compare growth rates of $\sqrt{\log{(n)}}$ and $n^a$ , $a = 10^{-4}$ , I computed the limit $\lim_{n\to\infty} \frac{\sqrt{\log{(n)}}}{n^a}$ . Using L'Hôpital's rule, this limit came out to be $0$ . However, it seems obvious that $\sqrt{\log{(n)}}$ grows at a faster rate than $n^a$ , is my application of LH rule incorrect?
|
Why do you say it is "obvious" that $\sqrt{\log n}$ grows faster than $n^{0.0001}$ ? If I choose $n = e^{10^6}$ , what are the resulting values of $\sqrt{\log n}$ and $n^{0.0001}$ ? Which is larger?
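Working through that example numerically (using logarithms throughout, since $n$ itself is far too large to form):

```python
import math

log_n = 10 ** 6                     # i.e. n = e^(10^6)
sqrt_log_n = math.sqrt(log_n)       # sqrt(log n) = 1000
power = math.exp(1e-4 * log_n)      # n^0.0001 = e^100 ≈ 2.7e43
assert sqrt_log_n == 1000.0
assert power > 1e40 * sqrt_log_n    # n^0.0001 is astronomically larger
```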
|
|limits|
| 0
|
How Do I Show this Differential is Surjective?
|
Question Write $(2n\times 2n)$ -matrices in block form $A=\begin{bmatrix}a&b\\c&d\end{bmatrix} \in Mat_{\mathbb{R}}(2n)$ where each of $a,b,c,d$ is an $n\times n$ block. Define $J=\begin{bmatrix}0&1\\-1&0\end{bmatrix}$ where 1 is a shorthand for an $n\times n$ identity matrix. Let $Sp(2n)= \{ A\in Mat_\mathbb{R}(2n)| AJA^T=J \}$ , so that $Sp(2n)=F^{-1}J$ , where $F: Mat_{\mathbb{R}}(2n)\rightarrow Skew_{\mathbb{R}}(2n)$ is the map $F(A)=AJA^T$ . Here, the codomain is the set of real $2n\times 2n$ matrices satisfying $C=-C^T$ . (I have already shown that the differential $D_AF: Mat_{\mathbb{R}}(2n)\rightarrow Skew_{\mathbb{R}}(2n)$ is $H\mapsto AJH^T+HJA^T$ ) Show that for $A\in F^{-1}(J)$ , the differential is surjective, and conclude that $F^{-1}(J)$ is a submanifold. What is its dimension? Note I can’t seem to show the differential is surjective. I know once that is settled, the case of sub-manifold will be easy to handle. I think the dimension is easy to
|
By your calculation of the differential $$ D_A F: Mat_{\mathbb R}(2n)\to Skew_{\mathbb R}(2n);\ H\mapsto AJH^T + HJA^T, $$ we would like to show that for $A\in F^{-1}J=Sp(2n)$ and $C\in Skew_{\mathbb R}(2n)$ with $C=-C^T$ , there exists an $H$ such that $D_A F(H)=C$ . Note that $A\in F^{-1}J$ is invertible, since $$ AJA^T=J\ \Longrightarrow\ A(JA^TJ^{-1})=I\ \Longrightarrow\ A^{-1}\text{ exists}. $$ Then $JA^T=A^{-1}J$ . The key is that both $D_AF(H)$ and $C$ are skew-symmetric. Indeed, $$(HJA^T)^T=AJ^TH^T=-AJH^T.$$ We can let $$ HJA^T=\frac{1}{2}C, $$ then $AJH^T=-(HJA^T)^T=-\frac{1}{2}C^T=\frac{1}{2}C.$ So the equation is solved. Therefore we have a solution to $D_AF(H)=C$ as $$ H=\frac{1}{2}C(JA^T)^{-1}=\frac{1}{2}C(A^{-1}J)^{-1}=\frac{1}{2}CJ^{-1}A = -\frac{1}{2}CJA, $$ since $J^{-1}=-J$ . This solution can be directly checked, using $AJA^T=J$ .
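The formula can also be checked numerically in a small case; here $n=1$, where $Sp(2)=SL(2,\mathbb R)$, and the particular matrices below are my own arbitrary choices:

```python
# Check that H = -(1/2) C J A solves A J H^T + H J A^T = C for 2x2 matrices.
def mul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def tr(P):  # transpose
    return [[P[j][i] for j in range(2)] for i in range(2)]

def add(P, Q):
    return [[P[i][j] + Q[i][j] for j in range(2)] for i in range(2)]

def scale(c, P):
    return [[c * x for x in row] for row in P]

J = [[0, 1], [-1, 0]]
A = [[2, 1], [1, 1]]        # det A = 1, so A J A^T = J: A is symplectic
C = [[0, 5], [-5, 0]]       # an arbitrary skew-symmetric target
H = scale(-0.5, mul(mul(C, J), A))          # H = -(1/2) C J A
assert add(mul(mul(A, J), tr(H)), mul(mul(H, J), tr(A))) == C
```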
|
|differential-geometry|manifolds|smooth-manifolds|
| 1
|
If $g\circ f=f$ in a category, then is $g$ necessarily the identity morphism?
|
I know momomorphisms are left-cancellative and epimorphisms are right-cancellative. But, let $C$ be any object in an arbitrary category $\mathbf{C}$ , and $f,g:C\to C$ be any morphisms st $gf=f$ . Then, do we necessarily have $g=\text{id}$ ? I am asking for some kind of cancellative law wrt composition. If it is not true, is there a handy counter-example? Also are there some special type of categories where this statement is indeed true? If $\mathbf C$ were an abelian category, then we have $0=gf-f=(g-\text{id})f$ . Now the question reduces to can two non-zero morphisms compose to give a zero morphism (ie morphisms in $\mathbf C$ that factorize through a zero object, or equivalently constant and co-constant, see definitions here )?
|
A particularly simple class of counterexamples is given by taking $f = g$ . Then you are asking for a morphism $f$ such that $f \circ f = f$ but $f \neq 1$ , i.e. any nontrivial idempotent. It is very easy to find examples in most categories of interest. However, this condition will hold in any category for which every morphism is an epimorphism. These are known as right-cancellative categories. These do arise in some situations, for instance in categorical model theory .
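A concrete instance of such a nontrivial idempotent in $\mathsf{Set}$, rendered as a two-line check (my own illustration):

```python
# A nontrivial idempotent on {1, 2}: f ∘ f = f but f is not the identity
f = {1: 1, 2: 1}
assert all(f[f[x]] == f[x] for x in f)      # f ∘ f = f
assert any(f[x] != x for x in f)            # f ≠ id
```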
|
|category-theory|abelian-categories|
| 0
|
Change of coordinates in vector fields on Heisenberg group
|
In the book Geometric Analysis on the Heisenberg Group and Its Generalizations proposition 1.3 says: Under the change of coordinates $$ y_1 = x_1 , \, y_2 = x_2, \, \tau = 4t -2x_1x_2,$$ the vector fields $$ X = \partial_{x_1}, \, Y = \partial_{x_2} +x_1 \partial_t, \, T = \partial_t $$ are transformed into $$ X = \partial_{y_1} - 2y_2\partial_{\tau}, \, Y = \partial_{y_2} +2y_1 \partial_{\tau} , \, T = 4\partial_{\tau}. $$ The author says that the proof follows from the following relationships $$ \partial_t = 4\partial_{\tau} , \, \partial_{x_2} = \partial_{y_2} -2y_1 \partial_{\tau} , \, \partial_{x_1} = \partial_{y_1} - 2y_2\partial_{\tau} . $$ I don't understand how this change of coordinates is done, nor how these relations in the demonstration were obtained, can anyone help me understand?
|
You must use the chain rule, because the two sets of coordinates are functions of each other. For instance, the derivative with respect to $t = t(y_1,y_2,\tau)$ is given by $$ \frac{\partial}{\partial t} = \frac{\partial y_1}{\partial t}\frac{\partial}{\partial y_1} + \frac{\partial y_2}{\partial t}\frac{\partial}{\partial y_2} + \frac{\partial \tau}{\partial t}\frac{\partial}{\partial \tau} = 0\cdot\frac{\partial}{\partial y_1} + 0\cdot\frac{\partial}{\partial y_2} + 4\cdot\frac{\partial}{\partial \tau}, $$ hence $\partial_t = 4\partial_\tau$ . In order to speed up those computations, you can carry them out simultaneously with the help of the following relation : $$ \nabla_{(x_1,x_2,t)} = \frac{\partial(y_1,y_2,\tau)}{\partial(x_1,x_2,t)} \cdot \nabla_{(y_1,y_2,\tau)}, $$ which is nothing else than multidimensional chain rule, where $\nabla_{(x_1,x_2,t)} = (\partial_{x_1},\partial_{x_2},\partial_t)^T$ and $\nabla_{(y_1,y_2,\tau)} = (\partial_{y_1},\partial_{y_2},\partial_\tau)^T$ are
|
|proof-explanation|vector-fields|heisenberg-group|
| 1
|
Help proving a chain rule from total derivative chain rule
|
Consider $f:\mathbb{R}^d\to\mathbb{R}$ and $g:\mathbb{R}\to\mathbb{R}^d$ . It is known that $$ \tag{*} (f\circ g)'(x) = \sum_{i=1}^d \partial_i f(g(x)) \cdot g'(x)^i $$ I would like to prove this from the chain rule for the total derivatives: $$ \tag{**} D_x(f\circ g) = D_{g(x)}f \circ D_xg $$ I'm not sure how to rigorously proceed here. Intuitively I know the two total differentials in the total derivative chain rule can be represented as matrices and their composition will correspond to the multiplication in the desired partial derivative chain rule. But I'm not sure how to get there. How does the function composition get converted into a summation + multiplication? Another related issue. The expression $(*)$ is a real number when evaluated at $x$ . The expression $(**)$ is a linear map from $\mathbb{R}\to\mathbb{R}$ . It's not to difficult to understand that the linear maps $\mathbb{R}\to\mathbb{R}$ (the dual space of $\mathbb{R}$ ) are isomorphic to $\mathbb{R}$ itself. But still,
|
Hint: Matrix multiplication is indeed the reason, once you choose the right basis to represent the total derivatives. Partial derivatives are a special case of directional derivatives, and directional derivatives are related to the total derivative. Choose the canonical basis $(e_i)_{i = 1}^d$ for $\mathbb{R}^d$ . Let $M_y$ and $N_x$ be the matrices of $D_yf$ and $D_x g$ with respect to this basis. Note that the partial derivative with respect to the $i$ -th variable coincides with the directional derivative along $e_i$ . Moreover, one of the possible characterizations of the total derivative of $f$ at $y$ is a linear map $D_y$ such that $f(y + v) = f(y) + D_y(v) + R(v)$ where $R(v)$ is a residue term that is small when $v$ is small enough: $\lim_{v \to 0} \frac{R(v)}{\lVert{v}\rVert} = 0$ . So, putting everything together: $$\partial_i f(y_1, \ldots, y_d) = \lim_{h \to 0} \frac{f(y_1, \ldots, y_i + h, \ldots, y_d) - f(y_1, \ldots, y_d)}{h} \\ = \lim_{h \to 0} \frac{f(y + he_i) - f(y)}
|
|linear-algebra|multivariable-calculus|partial-derivative|
| 0
|
Is it possible to extend the domain and range of a function that maps from R to R to other sets?
|
I am currently working on a project where I would like to define infintesimals that can be used in conjunction with the real numbers (similar to the hyperreals). Right now, I am working on an algebraic extension of $\mathbb{R}$ to allow for my infintesimals. I have a definition that works for all linear combinations of integer powers in my infinitesimal $\varepsilon$ (basically just a variable at this point). So for example, expressions like $\varepsilon^5$ , $7\varepsilon^3$ and $73+\varepsilon$ are defined within my framework. The issue right now is that I can't quite seem to find a good way of defining rational exponents. I have an idea where I would like to use the exponential function: $\displaystyle \exp(x):=\sum_{n=0}^{\infty}\frac{x^{n}}{n!}$ $\implies a^x=\exp(x\ln a)$ So I would be able to make a statement such as: $\displaystyle \sqrt\varepsilon=\exp(\frac{1}{2}\ln\varepsilon)$ The crux here is that $\exp:\mathbb{R}\to\mathbb{R}$ , and I am not sure if you can extend this to
|
The possibility of extending the domain of functions to a larger domain in such a way that the relevant properties of the functions are preserved is called the transfer principle . Of course, the transfer principle does not apply to just any extension of $\mathbb R$ . It does apply to the field of hyperreals $\mathbb R^{\ast}$ which properly extends $\mathbb R$ and is also an ordered field. In particular, the exponential function $e^x$ extends, so that now it is defined for any hyperreal input $x$ (including infinitesimal and infinite numbers). If you stick to analytic functions, it may be sufficient to use the smaller Levi-Civita field .
|
|abstract-algebra|nonstandard-analysis|infinitesimals|
| 1
|
Can a CL-term have multiple fixed points?
|
Given a CL term $E$ , can there exist multiple non-equivalent fixed points for $E$ ? I think: any fixed point of $E$ can be expressed as $Y(E)$ , this expression cannot reduce to multiple non-equivalent forms, due to confluence. So, I think that any CL term can have only 1 fixed point. Is my logic correct? All ideas regarding this problem are welcome. Edit: Realized just now, the identity combinator is a CL term with multiple non-equivalent fixed points. But then, if we consider the term $Y(I)$ , this must have multiple non-equivalent forms, corresponding to the multiple fixed points of $I$ . But how is that justifiable under confluence? Edit 2: $Y(I)$ is nothing but the omega combinator. I think I have got it now. Even when a CL term has multiple fixed points, applying that CL term to Y gives us only one possible fixed point. Is that correct?
|
This may be useful to construct (partial) answers to your question: In general, there are infinitely many non-β-equivalent fixed-point combinators in λ-calculus. Consider $$ \delta := λy.λx.x(yx) =_β \mathsf{SI}, $$ then the terms $Y_n$ defined by $$ Y_0 := Y \qquad Y_{n+1} := Y_n \delta $$ are fixed-point combinators, and $Y_i \neq_β Y_j$ . This is called the Böhm-van der Mey sequence . What I know about it comes from Barendregt and Manzonetto ( 2022 ), who cite Klop ( 2007 ).
|
|logic|lambda-calculus|combinatory-logic|
| 0
|
How Can I construct an Embedding for $S^n\times \mathbb{R} \rightarrow \mathbb{R}^{n+1}$
|
I will like to construct an embedding for $S^n\times \mathbb{R} \rightarrow \mathbb{R}^{n+1}$ . I have constructed embeddings for similar question (e.g. $S^1\times S^1 \rightarrow S^3$ ), but I can’t seem to construct one for this question. Your help would be greatly appreciated.
|
The key is to observe that $$h : S^n \times (0,\infty) \to \mathbb R^{n+1} \setminus \{0\}, h(x, t) = t x$$ is a diffeomorphism. Its inverse is $$g : \mathbb R^{n+1} \setminus \{0\} \to S^n \times (0,\infty), g(y) = (\frac{y}{\lVert y \rVert},\lVert y \rVert) .$$ Then $$\psi : S^n \times \mathbb R \to \mathbb R^{n+1}, \psi(x,t) = e^t x$$ is an embedding: it is the composition of $h$ with the diffeomorphism $S^n \times \mathbb R \to S^n \times (0,\infty), (x,t) \mapsto (x, e^t)$ . Note that all maps occurring here are smooth.
|
|differential-geometry|manifolds|smooth-manifolds|
| 0
|
Expected value and standard deviation of greater from two normal distributions
|
Let's assume I have two normal distributions, X and Y with known expected values and standard deviations. How to calculate the distribution of greater values from random pairs of values from these distributions? If it's not clear, step by step looks like that: Based on E(X) and D(X) I sample one result, got 10. Based on E(Y) and D(Y) I sample one result, got 15. 15 is greater than 10, so we put 15 in our result set Z. If we'd receive 10 from X and 8 from Y, we put 10 in result set Z. What are the parameters E(Z) and D(Z) (and is it still a normal distribution)?
|
If $X=a+Z_1$ and $Y=b+Z_2$ where $Z_1$ and $Z_2$ are $N(0,1)$ and independent, then $$M=\max(X,Y)=\frac{1}{2}(X+Y+|X-Y|)$$ is certainly not Gaussian since the density of $M$ is $\Phi(m-b)\phi(m-a)+\Phi(m-a)\phi(m-b)$ where $\phi$ is the density of $N(0,1)$ and $\Phi(y)=\int_{-\infty}^y\phi(x)dx.$ The expectation of $M$ is given by the following formula, since the density of $Z=\frac{1}{\sqrt{2}}(Z_1-Z_2)$ is $\phi$ : $$E(M)=\frac{a+b}{2}+\frac{1}{2}\int_{-\infty}^{\infty}|a-b+\sqrt{2}z|\phi(z)dz.$$ This can be computed numerically if $a\neq b.$ I have not computed $E(M^2)$ but if $a$ and $b$ are large enough an approximation of $M^2$ is $$\max(X^2,Y^2)=\frac{1}{2}(X^2+Y^2+|X^2-Y^2|).$$ But I am afraid that both $E(M^2)$ and $E(\max(X^2,Y^2))$ end up with a double integral.
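For what it's worth, $E(M)$ also has a well-known closed form for unit variances (often attributed to Clark); the sketch below checks it against the integral above numerically, with arbitrary example means of my choosing:

```python
import math

a, b = 1.0, 2.0                      # arbitrary means; unit variances

def phi(z):
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def Phi(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# E(M) = (a+b)/2 + (1/2) * integral of |a-b+sqrt(2) z| phi(z) dz,
# approximated by the midpoint rule on [-12, 12]
n = 200_000
integral = 0.0
for i in range(n):
    z = -12 + 24 * (i + 0.5) / n
    integral += abs(a - b + math.sqrt(2) * z) * phi(z) * (24 / n)
EM = (a + b) / 2 + 0.5 * integral

# standard closed form for the max of two independent unit-variance normals
theta = math.sqrt(2)
closed = (a * Phi((a - b) / theta) + b * Phi((b - a) / theta)
          + theta * phi((a - b) / theta))
assert abs(EM - closed) < 1e-6
```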
|
|statistics|normal-distribution|
| 1
|
On average how many keystrokes will it take to randomly hit the keypad of four characters (ABCD) to write DAC for the first time?
|
I am trying to calculate, on average, how many keystrokes it will take to randomly hit a keypad of four characters (ABCD) until DAC is written for the first time. For example, ABCDAAADAC takes 10 keystrokes up to and including the first appearance of DAC.
|
If the problem is indeed as you wrote it here (as a commenter pointed out, there is an "H" in the $10$ letter example you gave that doesn't make sense given the stated problem), the answer is indeed $64$ steps. This can be solved using Markov Chain modelling (as maybe you did). Using this you find out that from "DA" it still takes an expected additional $48$ steps to reach "DAC", from just "D" it takes an expected $60$ steps and from the starting position $64$ steps. See the solution of this problem on stack exchange for how this works if you are unfamiliar with that. So if the problem is copied correctly, and there are no other assumptions (the monkey may "prefer" certain keys or have a higher probability of pressing the same key twice in a row), then I'm at a loss what your professor has in mind.
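The three-state first-step analysis can be solved exactly in a few lines (the state encoding and solver are my own sketch):

```python
from fractions import Fraction

# States: length of the matched prefix of "DAC" (0, 1 = "D", 2 = "DA").
# With q = 1/4 per key, first-step analysis gives:
#   e0 = 1 + q*e1 + 3q*e0          (D advances; the other 3 keys stay at 0)
#   e1 = 1 + q*e2 + q*e1 + 2q*e0   (A advances, D stays at "D", 2 keys reset)
#   e2 = 1 + q*e1 + 2q*e0          (C finishes, D restarts at "D", 2 keys reset)

def solve(M, rhs):
    """Exact Gauss-Jordan elimination over the rationals."""
    n = len(M)
    A = [row[:] + [val] for row, val in zip(M, rhs)]
    for i in range(n):
        p = next(r for r in range(i, n) if A[r][i] != 0)
        A[i], A[p] = A[p], A[i]
        A[i] = [x / A[i][i] for x in A[i]]
        for r in range(n):
            if r != i:
                A[r] = [x - A[r][i] * y for x, y in zip(A[r], A[i])]
    return [row[n] for row in A]

q = Fraction(1, 4)
M = [[1 - 3 * q, -q, 0],
     [-2 * q, 1 - q, -q],
     [-2 * q, -q, 1]]
e0, e1, e2 = solve(M, [1, 1, 1])
assert (e0, e1, e2) == (64, 60, 48)
```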
|
|probability|
| 1
|
Prove that: $\gcd[a,b,c]=\frac{abc.\operatorname{lcm}(a,b,c)}{\operatorname{lcm}(a,b)\operatorname{lcm}(a,c)\operatorname{lcm}(b,c)}$
|
I may be wrong, but I was thinking of: $\operatorname{lcm}(a, b)$ as $\min(a, b)$ and $\gcd (a, b)$ as $\max(a, b)$ and $a I know I'm wrong, but I think I can do it
|
Not the most elegant solution but pretty straightforward: let $a=p_1^{k_1}\cdot ...\cdot p_s^{k_s}$ , $b=p_1^{l_1}\cdot ...\cdot p_s^{l_s}$ and $c=p_1^{m_1}\cdot ...\cdot p_s^{m_s}$ , where the exponents $k_i,l_i,m_i$ are non-negative integers. Now let's compare the exponent of each $p_i$ on both sides. Say $k_i \leq l_i \leq m_i$ (the other orderings are symmetric); then on the left side we get $p_i^{k_i}$ (because $\min(k_i,\min(l_i,m_i))=\min(k_i,l_i)=k_i$ ). On the right side we get $p_i^{k_i+l_i+m_i+\max(k_i,\max(l_i,m_i))-\max(k_i,l_i)-\max(l_i,m_i)-\max(k_i,m_i)}=p_i^{k_i+l_i+m_i+m_i-l_i-m_i-m_i}=p_i^{k_i}$ . Thus the equality holds.
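The identity is also easy to spot-check by brute force (cross-multiplied so everything stays in integers; the range bound is arbitrary):

```python
from math import gcd

def lcm(*xs):
    out = 1
    for x in xs:
        out = out * x // gcd(out, x)
    return out

for a in range(1, 25):
    for b in range(1, 25):
        for c in range(1, 25):
            # gcd(a,b,c) * lcm(a,b) * lcm(a,c) * lcm(b,c) = a*b*c * lcm(a,b,c)
            lhs = gcd(gcd(a, b), c) * lcm(a, b) * lcm(a, c) * lcm(b, c)
            assert lhs == a * b * c * lcm(a, b, c)
```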
|
|number-theory|divisibility|gcd-and-lcm|
| 0
|
Show continuity or uniform continuity of $\phi: (C([0,1];\Bbb R ),||\cdot||_\infty )\to (\Bbb R, |\cdot | )$
|
$\phi: (C([0,1];\Bbb R ),||\cdot||_\infty )\to (\Bbb R, |\cdot | ); \: \: \: \: \: \: \phi(u):=\int_0^1 u^2(t) dt $ Is this function continuous or even uniformly continuous? (I know that the function $g: M\to \Bbb R , g(x) := x^2$ is continuous but not uniformly) Also, is there a non-empty subset of $ C([0,1];\Bbb R )$ so that $\phi$ is Lipschitz-continuous on that set? Thanks in advance!
|
Let $T:C[0,1]\to L^2(0,1)$ , $S:L^2(0,1)\to \mathbb{R}$ and $R:\mathbb{R}\to\mathbb{R}$ be given by $$Tf=f, \ Sg=\|g\|_2, \ R(t)=t^2$$ Then $\varphi =R\circ S\circ T.$ The linear map $ T$ is continuous as $\|f\|_2\le \|f\|_\infty.$ Also $S$ is continuous since the norm is continuous on any normed space. Thus $\varphi$ is continuous as a composition of three continuous mappings. Clearly the mapping $\varphi$ is not Lipschitz as the property fails on constant functions, as $R(t)$ is not Lipschitz.
|
|analysis|continuity|uniform-continuity|lipschitz-functions|
| 0
|
If $f(x) =\sin^{1200}(x) \ln(1+x)^{500}\arctan^{300}(x)$ how to find $f^{(2000)}(0)$ without Taylor series?
|
I saw this challenging problem: If $f(x) =\sin^{1200}(x) \ln(1+x)^{500}\arctan^{300}(x)$ how to find $f^{(2000)}(0)$ . Applying Leibniz rule seems to be the only way to solve this, but it quickly became a nightmare finding a pattern: $$f'(x)=1200 \sin^{1199}(x) \ln(1+x)^{500}\arctan^{300}(x)\cos(x)+\sin^{1200}(x) 500\frac{\ln(1+x)^{499}}{1+x}\arctan^{300}(x)+300\sin^{1200}(x) \ln(1+x)^{500}\frac{\arctan^{299}(x)}{1+x^2}$$ This function becomes very messy and ugly in the first derivative so there is no way Leibniz rule is applied here. There must be some trick since $1200+500+300=2000$ . My friend (who gave me this problem) gave a very strange proof to this problem with the answer $2000!$ but I didn't understand his proof. There is an easy way to solve this problem with Taylor's series like user170231's answer, But I want to ask is other way to solve it without Taylor series ?
|
Let us observe that $f$ is infinitely differentiable at $0$ and $f(x) /x^{2000}\to 1$ as $x\to 0$ . In what follows we replace $2000$ by a symbol $p$ to reduce typing effort. The above shows that $f(x) /x^k\to 0$ for all $k=1,2,\dots, p-1$ . We can also notice by definition of derivative that $$f'(0)=\lim_{x\to 0}\frac{f(x)-f(0)}{x}=0$$ Next we have via l'Hospital's Rule $$\lim_{x\to 0}\frac{f(x)}{x^2}=\lim_{x\to 0}\frac{f'(x)}{2x}=\frac{1}{2}f''(0)$$ so that $f''(0)=0$ . Let's show one more step for $k=3$ : via two applications of l'Hospital's Rule $$\lim_{x\to 0}\frac{f(x)}{x^3}=\lim_{x\to 0}\frac{f'(x)}{3x^2}=\lim_{x\to 0}\frac{f''(x)}{6x}=\frac{1}{6}f'''(0)$$ so that $f'''(0)=0$ . We can continue in this manner till $k=p-1$ and get all the derivatives $f^{(k)} (0)=0$ . Finally for $k=p$ we apply l'Hospital's Rule $(p-1)$ times to get $$\lim_{x\to 0}\frac{f(x)}{x^p}=\lim_{x\to 0}\frac{f'(x)}{px^{p-1}}=\dots =\lim_{x\to 0}\frac{f^{(p-1)}(x)}{p!x}=\frac{f^{(p)}(0)}{p!}$$ and since the leftmost limit equals $1$ , we conclude $f^{(p)}(0)=p!$ , i.e. $f^{(2000)}(0)=2000!$ .
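This argument can be sanity-checked on a scaled-down analogue. The sketch below (my own, using only truncated power series with exact rational coefficients) takes the miniature $g(x)=\sin^2(x)\ln(1+x)\arctan(x)$ , where the same exponent count $2+1+1=4$ predicts $g''''(0)=4!=24$ :

```python
from fractions import Fraction as F
import math

def mul(a, b, order):
    """Multiply two truncated power series given as coefficient lists."""
    c = [F(0)] * (order + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j <= order:
                c[i + j] += ai * bj
    return c

order = 4
sin_series   = [F(0), F(1), F(0), F(-1, 6), F(0)]         # sin x = x - x^3/6 + ...
log1p_series = [F(0), F(1), F(-1, 2), F(1, 3), F(-1, 4)]  # ln(1+x)
atan_series  = [F(0), F(1), F(0), F(-1, 3), F(0)]         # arctan x

# g(x) = sin^2(x) * ln(1+x) * arctan(x): lowest-order term is x^4, coefficient 1
g = mul(mul(mul(sin_series, sin_series, order), log1p_series, order), atan_series, order)
assert g[:4] == [F(0)] * 4 and g[4] == F(1)

# hence g''''(0) = 4! * 1 = 24, the small analogue of f^(2000)(0) = 2000!
fourth_derivative_at_0 = math.factorial(4) * g[4]
```

The same bookkeeping with order 2000 would be slow but exact; the point is that only the lowest-order coefficient of the product matters.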
|
|real-analysis|calculus|derivatives|
| 0
|
How to determine the stationary distribution of a Markov Chain with N states?
|
I have a fairly simple discrete Markov Chain with a transition probability matrix that looks like this: \begin{bmatrix}0&{n_1}&0&0&0&...\\ {n_2}&0&{n_3}&0&0&...\\ 0&{n_4}&0&{n_5}&0&...\\ ...&...&...&...&...&...\\ ...&0&0&0&n_k&0\end{bmatrix} Basically, just a long line of states, all of which can go into a neighboring one (First one into second, second one into the first and the third and so on). I know all of transitional probabilities - for simplicity's sake, here I have replaced them with just ${n_i}$ . The problem here, is that I have N states - their amount may vary. I need to find a stationary distribution for this chain, and while making a system of N equations is trivial, I'm not sure how exactly I can solve it. Is there, perhaps, an easier way to find a stationary distribution of such a chain? Or I should just bruteforce it with an equation system?
|
Hint Assuming your transition matrix is row-stochastic, you must have $\ n_1=n_k=1\ $ and $\ n_{2i+1}=1-n_{2i}\ .$ If $\ \pi\ $ is the stationary distribution, then the first and last equations of $$ \pi^T\begin{bmatrix}0&{n_1}&0&0&0&...\\ {n_2}&0&{n_3}&0&0&...\\ 0&{n_4}&0&{n_5}&0&...\\ ...&...&...&...&...&...\\ ...&0&0&0&n_k&0\end{bmatrix}=\pi^T $$ give you $\ \pi_2=\pi_ 1\ $ and $\ \pi_{k-1}=\pi_k\ .$ The second of the equations gives \begin{align} n_2\pi_1+n_3\pi_3&=\pi_2\\ &=n_2\pi_2+(1-n_2)\pi_3\ , \end{align} from which you get $\ \pi_3=\pi_2\ .$ Does that suggest a guess for the remaining entries of $\ \pi\ ?$
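For this tridiagonal (birth–death) structure there is also a general shortcut: the chain is reversible, so the stationary distribution can be built from the detailed-balance recursion $\pi_{i+1} = \pi_i \, P_{i,i+1}/P_{i+1,i}$ instead of solving the full $N\times N$ system. A sketch (my own; the 4-state transition probabilities below are arbitrary choices with rows summing to $1$ ):

```python
from fractions import Fraction

def stationary_birth_death(P):
    """Stationary distribution of a tridiagonal (birth-death) chain via
    detailed balance: pi[i+1] = pi[i] * P[i][i+1] / P[i+1][i]."""
    k = len(P)
    pi = [Fraction(1)]
    for i in range(k - 1):
        pi.append(pi[i] * P[i][i + 1] / P[i + 1][i])
    total = sum(pi)
    return [p / total for p in pi]

F = Fraction
# hypothetical 4-state chain: end states reflect, inner states go left/right
P = [[F(0),    F(1),    F(0),    F(0)],
     [F(1, 3), F(0),    F(2, 3), F(0)],
     [F(0),    F(1, 4), F(0),    F(3, 4)],
     [F(0),    F(0),    F(1),    F(0)]]
pi = stationary_birth_death(P)

# exact check that pi is stationary: (pi^T P)_j = pi_j for every j
assert all(sum(pi[i] * P[i][j] for i in range(4)) == pi[j] for j in range(4))
```

Using `Fraction` keeps the check exact; with floats you would compare up to a tolerance.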
|
|probability|probability-theory|markov-chains|
| 0
|
An ordinal $\nu$ is a natural iff there is no injection $f$ of $\nu$ into $X$ in $\mathscr P(\nu)\setminus\{\nu\}$.
|
Let us prove the following theorem. Theorem. An ordinal $\nu$ is a natural number if and only if there is no injection of $\nu$ into any $X$ in $\mathscr P(\nu)\setminus\{\nu\}$ . Proof. Assume there exists no injection $f$ of $\nu$ into any $X$ in $\mathscr P(\nu)\setminus\{\nu\}$ . If $\nu$ were not a natural number, then there would exist $A$ in $\mathscr P(\nu)$ with no maximum element; let $\alpha$ be the corresponding initial ordinal, which exists independently of AC since $A$ is a set of ordinals and hence well ordered. Now if $\alpha$ were in $\nu$ then by transitivity the inclusion $$ \alpha\subseteq\nu $$ would hold, and thus by the above hypothesis it must be an equality: we conclude in this way that the inequality $$ \nu\le\alpha\tag{1}\label{1} $$ holds. So by ineq. \eqref{1} we observe that the inequality $$ |\nu|\le|\alpha|=\alpha $$ holds, but by the inclusion $$ A\subseteq\nu $$ also the inequality $$ \alpha=|A|\le|\nu| $$ holds, so that we argue the equa
|
For natural numbers, you can get an essentially identical proof to the one by induction by saying 'let $n$ be the least natural number such that $n$ has an injection onto a proper subset of itself. Then use the same argument as in the linked proof to show that $n-1$ has one'. You can get a different proof if using a different equivalent definition of finiteness. For example, the complementary version of Tarski-finiteness. A set $X$ is finite if and only if every non-empty $Y\subseteq\mathcal{P}(X)$ has a $\subseteq$ -minimal element. Assume $X$ is Tarski-finite and $f\colon X\to X$ is injective onto a proper subset of $X$ . Let $$ Y=\{A\subseteq X\colon f(A)\subsetneq A\} $$ Then $Y$ is nonempty because $X\in Y$ . Let $A_0$ be a $\subseteq$ -minimal element of $Y$ . Then $f(A_0)\subsetneq A_0$ . Let $t\in A_0\setminus f(A_0)$ . Let $B=A_0\setminus \{t\}$ . Since $t\not\in f(A_0)$ then $f(A_0)\subseteq B$ . Therefore $f(B)\subseteq B$ . Let $u=f(t)$ . Then $u\in B$ , and since $f$ is in
|
|solution-verification|set-theory|examples-counterexamples|ordinals|natural-numbers|
| 1
|
high order derivative of product
|
Let $f\in\mathcal{C}^\infty(\mathbb{R})$ , what is the form of $$ \frac{d^n}{dx^n}\left(\frac{f(x)}{x}\right) $$ for any $n\in\mathbb{N}$ ? I need to pull out $\frac{d^n}{dx^n}f(x)$ if possible. Thank you very much.
|
Apparently, it holds that $$ \left(\frac{d}{dx}\right)^n\left(\frac{f(x)}{x}\right)=\sum_{j=0}^n\frac{n!}{j!}(-1)^{n-j}x^{j-n-1}\left(\frac{d}{dx}\right)^{j}f(x), $$ which can be proved by induction on $n$ .
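The formula is easy to spot-check numerically. The sketch below (my own) takes $f=\exp$ (so every $f^{(j)}=\exp$ ) and compares the claimed sum for $n=3$ at $x=1$ against a central finite-difference approximation of $\frac{d^3}{dx^3}\bigl(\frac{e^x}{x}\bigr)$ :

```python
import math

def g(x):
    """The quotient whose derivatives we test: f(x)/x with f = exp."""
    return math.exp(x) / x

def formula(n, x):
    """Claimed closed form: sum_{j=0}^n n!/j! (-1)^(n-j) x^(j-n-1) f^(j)(x), f = exp."""
    return sum(math.factorial(n) / math.factorial(j) * (-1) ** (n - j)
               * x ** (j - n - 1) * math.exp(x) for j in range(n + 1))

# standard central stencil for the third derivative, O(h^2) accurate
h, x0 = 1e-3, 1.0
fd3 = (g(x0 + 2 * h) - 2 * g(x0 + h) + 2 * g(x0 - h) - g(x0 - 2 * h)) / (2 * h ** 3)
assert abs(fd3 - formula(3, x0)) < 1e-3
```

For $n=1$ at $x=1$ the formula gives $e/x - e/x^2 = 0$ , matching the hand computation $(e^x/x)' = e^x/x - e^x/x^2$ .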
|
|real-analysis|derivatives|binomial-coefficients|arithmetic|binomial-theorem|
| 0
|
Is it possible to extend the domain and range of a function that maps from R to R to other sets?
|
I am currently working on a project where I would like to define infinitesimals that can be used in conjunction with the real numbers (similar to the hyperreals). Right now, I am working on an algebraic extension of $\mathbb{R}$ to allow for my infinitesimals. I have a definition that works for all linear combinations of integer powers in my infinitesimal $\varepsilon$ (basically just a variable at this point). So for example, expressions like $\varepsilon^5$ , $7\varepsilon^3$ and $73+\varepsilon$ are defined within my framework. The issue right now is that I can't quite seem to find a good way of defining rational exponents. I have an idea where I would like to use the exponential function: $\displaystyle \exp(x):=\sum_{n=0}^{\infty}\frac{x^{n}}{n!}$ $\implies a^x=\exp(x\ln a)$ So I would be able to make a statement such as: $\displaystyle \sqrt\varepsilon=\exp(\frac{1}{2}\ln\varepsilon)$ The crux here is that $\exp:\mathbb{R}\to\mathbb{R}$ , and I am not sure if you can extend this to
|
As noted by Mikhail Katz, the transfer principle of nonstandard analysis provides a way of transferring (certain) properties from the standard to the nonstandard (which includes the infinitesimals) objects and back. As a partial answer to your question: the transfer principle can be non-constructive in the following ways. In other words, having this principle means that things will be `non-computable' in a well-defined way. 1) Classically, the transfer principle for a certain formula class corresponds to comprehension for said formula class. 2) Classically, the transfer principle for sentences, i.e. formulas without parameters, does not yield extra comprehension (or anything). 3) Using intuitionistic logic, the transfer principle for fairly small formula classes implies the law of excluded middle. What should one do in light of 1)-3), i.e. how does one develop the infinitesimal calculus in a constructive/computable manner? One should start with the nonstandard definitions of continuity, differ
|
|abstract-algebra|nonstandard-analysis|infinitesimals|
| 0
|
Strange behaviour of $x^2+5x+7$ under iteration
|
If any of the following exposition is unclear, please write a comment. In essence, I am looking at the graph $G$ that is generated by the polynomial $q(x) = x^2+ax+b$ ( $a,b \in \mathbb{Z}$ ) via the edge set $$\{(n, q(n) \text{ mod } p) : n =0,\ldots,p-1\}$$ for some prime $p \in \mathbb{P}$ . After some thought, it should be clear that the statements $$``\text{Iterating } q \text{ for any input will always eventually lead to divisibility by some prime at least once.} ``$$ and $$``G \text{ has a path from any node to 0.}``$$ are equivalent. Furthermore, "at least once" can be replaced by "periodically infinitely many times" and "every node has a path to $0$ " can be characterized by " $G$ is weakly connected and has exactly one loop containing $0$ ". With that out of the way, let's get to the question: The natural question now is to know for which polynomials $q$ we have this nice property of "eventual divisibility" by some prime no matter what number we input, i.e. finding connected
|
Oscar Lanzi has already given quite a nice overview over a general method to attack a certain set of polynomials trying to disprove eventual divisibility. Let's explore this idea a bit further and give a nice summary of the method: We start with defining $q(x) := x^2+ax+b$ and examine solutions to $q(x) = x$ and $q(x) = 0$ to set up the method. For $p \neq 2$ , we can derive the quadratic formula over $\mathbb{F}_p$ in exactly the same way as over $\mathbb{R}$ (using modular inverses) and obtain $$x^2 + cx + d \text{ is solvable in $\mathbb{F}_p$} \iff \Delta = 4^{-1}c^2 - d \text{ is a quadratic residue in $\mathbb{F}_p$}$$ For $p = 2$ , the usual method breaks since we would have to divide by $2 \equiv 0 \mod 2$ . This can be accounted for by checking the parities of the coefficients seperately. To now arrive at a contradiction, we need to have: $q(x) = x$ solvable and $q(x) = 0$ not solvable for any $p \in \mathbb{P}$ Why? Because this guarantees that there is fixed point of $p$ tha
|
|number-theory|polynomials|graph-theory|prime-numbers|recreational-mathematics|
| 0
|
The integration of dx/dt respect to t is equal to x + c. What is the variable c?
|
Why is $\int \frac{dx}{dt} dt = x + c$ ? What is the variable $c$ ? I think the constant $c$ was omitted, since $\frac{d(x + c)}{dt} = \frac{dx}{dt}$ . Is my assumption correct?
|
When you are solving an indefinite integral $\int fdx$ for any real function $f:\mathbb R\rightarrow \mathbb R$ , you are essentially solving the differential equation $g'(x)=f$ for $g$ . Say you have found a solution $g$ ; then $g+c$ for any real number $c$ also solves the differential equation, as $(g+c)'=g'=f$ . It follows that $c$ can be any real number.
|
|calculus|
| 0
|
Determine the rectangular parallelepiped of maximum surface area which can be inscribed in a sphere.
|
The equation of a sphere of radius $r$ centered at the origin $(0,0,0)$ is given by $x^2 + y^2 + z^2 = r^2$ . The surface area of a rectangular parallelepiped with center at the origin is given by $$S = 8(xy+yz+zx)= 8\left[xy +(x+y)\sqrt{r^2 - x^2 - y^2}\right].$$ For the maximum surface area, we have the necessary conditions $\frac{\partial S}{\partial x}=0$ and $\frac{\partial S}{\partial y}=0$ . Therefore, differentiating $S$ with respect to $x$ and $y$ , we have $$\frac{\partial S}{\partial x} = 8\left[y+\sqrt{r^2 - x^2 - y^2}- \frac{x(x+y)}{\sqrt{r^2 - x^2 - y^2}}\right]$$ and $$\frac{\partial S}{\partial y} = 8\left[x+\sqrt{r^2 - x^2 - y^2}- \frac{y(x+y)}{\sqrt{r^2 - x^2 - y^2}}\right].$$ To find critical points, we equate the derivatives to zero. As $\sqrt{r^2 - x^2 - y^2} \ne 0$ in a sphere, we get $$y\sqrt{r^2 - x^2 - y^2}+r^2 - x^2 - y^2- x(x+y) = 0$$ and $$x\sqrt{r^2 - x^2 - y^2}+r^2 - x^2 - y^2- y(x+y) = 0.$$ Subtracting, we get $$(y-x)\left[\sqrt{r^2 - x^2 - y^2}+(x+y)\right] = 0,$$ and since the bracket is positive for $x,y>0$ , this forces $y=x$ .
|
The constraint is $$x^2+y^2+z^2=r^2.$$ The surface area of a rectangular parallelepiped with center at the origin, axes perpendicular to the $x$ , $y$ , and $z$ axes, and one vertex at the point $(x,y,z)$ ( $x,y,z\geqslant 0$ ) is $$S=8(xy+yz+zx).$$ I'm going to look at the objective function $$T=xy+yz+zx.$$ We treat $x,y$ as the independent variables, and $z$ is a function of them, as determined by the constraint. Note that $$z^2=r^2-x^2-y^2,$$ so $$2z\frac{\partial z}{\partial x} = -2x,$$ and $$\frac{\partial z}{\partial x} = -\frac{x}{z}.$$ By symmetry, $$\frac{\partial z}{\partial y}=-\frac{y}{z}.$$ In principle, we should be worried about when $z=0$ , but this has surface area zero and will be a minimum, not a maximum, surface area. Then $$\frac{\partial T}{\partial x} = y+y\frac{\partial z}{\partial x}+x \frac{\partial z}{\partial x}+z$$ and $$\frac{\partial T}{\partial y} = x+x \frac{\partial z}{\partial y}+y \frac{\partial z}{\partial y}+z.$$ Setting both partial derivatives to zero and substituting $\frac{\partial z}{\partial x}=-\frac{x}{z}$ and $\frac{\partial z}{\partial y}=-\frac{y}{z}$ leads, as in the question, first to $x=y$ and then to $x=y=z=\frac{r}{\sqrt{3}}$ : the maximizer is the inscribed cube, with $S=8r^2$ .
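As a numerical cross-check: the maximum of $T=xy+yz+zx$ on the sphere is attained at the cube $x=y=z=r/\sqrt3$ , giving $T=r^2$ and $S=8r^2$ . A brute-force grid search sketch (my own code; resolution and tolerances are arbitrary choices):

```python
import math

# grid search on the unit sphere (r = 1): maximize T = xy + yz + zx
r, n = 1.0, 400
best, best_xy = -1.0, (0.0, 0.0)
for i in range(1, n):
    for j in range(1, n):
        x, y = i * r / n, j * r / n
        s = r * r - x * x - y * y
        if s <= 0.0:
            continue                      # outside the feasible region
        z = math.sqrt(s)
        T = x * y + y * z + z * x
        if T > best:
            best, best_xy = T, (x, y)

assert abs(best - r * r) < 1e-3                       # T_max = r^2
assert abs(best_xy[0] - best_xy[1]) < 1e-12           # optimum lies on x = y
assert abs(best_xy[0] - r / math.sqrt(3)) < r / n     # near x = r/sqrt(3)
```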
|
|calculus|maxima-minima|
| 1
|
Continuity of infimum argument
|
Suppose $f: [0,1]^2 \to \mathbb R$ is continuous. Let $$ \underline{y}(x)\equiv \inf \{y\in [0,1]: f(x,y)\geq 0\}. $$ There is $y_0\in(0,1)$ such that $$f(0,y_0)= 0,$$ $$f(0,y) < 0 \quad \forall y\in[0,y_0).$$ Moreover, $$ f(x,y_0) > 0 \quad \forall x\in (0,1]. $$ I hope to find out: is it true that $\underline{y}(x)$ is continuous at $x=0$ ? If not, what extra conditions may imply this? Thank you. I am tempted to answer yes, and I imagine a proof may work along this line: By the definition of $\underline{y}(x)$ , as long as $\underline{y}(x)\in(0,1)$ , $$ f(x,\underline{y}(x))=0. $$ Then I want to use some version of the Implicit Function Theorem to argue the continuity of $\underline{y}(x)$ . Two difficulties I encounter are: How to argue $\underline{y}(x)\in (0,1)$ for $x$ in a neighbourhood of $0$ ? To use the Implicit Function Theorem (without differentiability) to argue the continuity of $\underline{y}(x)$ at $x=0$ , it seems I need $f(x,\cdot)$ to be a one-to-one mapping for $(x,y)$ in a neighbourhood of $(0,y_0)$ , wh
|
The key idea is to use the compactness of the domain $[0,1]^2$ (which is necessary for the claim to work). From what you stated, we have $\underline{y}(0)=y_0$ and $\underline{y}(x) \le y_0$ for $x \in (0,1]$ . What we will show is that, given some $\epsilon > 0$ , there is a $\delta > 0$ such that $$f(x,y) < 0 \quad \forall (x,y) \in [0,\delta]\times[0,y_0-\epsilon].$$ This means $\forall x \in [0,\delta]: \underline{y}(x) \ge y_0 - \epsilon$ , and with the above $\underline{y}(x) \le y_0$ for $x \in (0,1]$ that means $\underline{y}(x)$ is continuous at $x=0$ . So let's fix some $\epsilon > 0$ now. We know that $f(0,y) < 0$ for $y \in [0,y_0-\epsilon]$ . Since $y$ varies over a compact interval and $f(0,y)$ is continuous, it takes a maximum value $M < 0$ there. So we actually have $f(0,y) \le M < 0$ . Since $f(x,y)$ is continuous, for each $y \in [0,y_0-\epsilon]$ we can find a small $\delta_y > 0$ such that in the rectangular neighborhood $[0, \delta_y] \times [y-\delta_y, y +\delta_y]$ of $(0,y)$ we still have $f(x,y) < 0$ . That's a $\delta_y$ for each $y \in [0,y_0-\epsilon]$ , so using compactness we can pick finitely many $y_1,\dots,y_m$ whose intervals $(y_i-\delta_{y_i}, y_i+\delta_{y_i})$ cover $[0,y_0-\epsilon]$ ; taking $\delta = \min_i \delta_{y_i}$ then gives $f(x,y) < 0$ on all of $[0,\delta]\times[0,y_0-\epsilon]$ , as required.
|
|real-analysis|
| 1
|
Limit of $f:\mathbb{R}\to\mathbb{R}$ such that $f(x) = \frac1{q_n^5}$, if $x=\frac{p_n}{q_n}$ for $x \to \sqrt{2}$
|
Let $(p_n)$ and $(q_n)$ be two sequences of integers such that $q_n > 0$ and such that $(p_n, q_n) = 1$ for all indices $n$ . Define $f(x) := \begin{cases}\frac1{q_n^5} \quad x=\frac{p_n}{q_n} \\[6pt] 0 \quad x \in \mathbb{R}-\mathbb{Q}\end{cases}$ Prove that $\lim_{x \to \sqrt{2}} \frac{f(x)}{(x-\sqrt2)^2}$ exists. Can you compute it? If we know the limit exists, then we might be able to apply Heine's criterion for limits of functions using limits of sequences to find the limit by using a rational sequence $(x_n) \to \sqrt2$ and using the definition of the function. However, I don't know how to approach proving the existence of the limit. Also, we know that if a sequence $(\frac{p_n}{q_n})$ , where $(p_n, q_n) = 1, p_n,q_n \in \mathbb{Z}, q_n > 0$ , converges to an irrational number, then $\lim_{n \to \infty} q_n = \infty$ . Can you help me with this?
|
First, observe that $f$ as stated is not always well-defined: we don't know its values on rational numbers which are not in the sequence $\left(\frac{p_{n}}{q_{n}}\right)$ . I'm going to make the assumption that $\left(\frac{p_{n}}{q_{n}}\right)$ runs uniquely over all rational numbers (which we can do by picking an enumeration of them), so that $f$ is well-defined. Moreover, we can always choose $q_{n} > 0$ (by changing the sign of $p_{n}$ if necessary). We will assume rational numbers are always written in reduced form. Let $g(x) = \frac{f(x)}{(x-\sqrt{2})^{2}}$ . We want to show that $\lim_{x \to \sqrt{2}}g(x)$ exists, and to find its value. Observe that for any irrational number $\alpha \neq \sqrt{2}$ , $g(\alpha) = 0$ . So, if the limit exists, it should be $0$ . Now, recall Roth's theorem in Diophantine approximation, which states: for any irrational algebraic number $x$ and any real $\lambda> 0$ , the inequality $$\left|x - \frac{p}{q} \right| < \frac{1}{q^{2+\lambda}}$$ has only finitely many coprime integer solutions $p, q$ with $q > 0$ .
|
|real-analysis|sequences-and-series|limits|
| 1
|
Is taylor series also an orthogonal projection of a infinitely differentiable function on some subspace?
|
I'm wondering if there exists some inner product $\langle \cdot,\cdot \rangle$ defined on all real infinitely differentiable functions such that $$1, x, x^2, x^3, \ldots$$ are orthonormal w.r.t this inner product? If there does exist such an inner product, denote $$U_j=\text{span}(1,x,\ldots,x^j).$$ Then, is it true that, for an arbitrary infinitely differentiable function $f$ , $P_{U_j}(f)$ is $f$ 's $j$ th order taylor expansion at $x=0$ ?
|
If instead we considered complex differentiable functions, we could define an inner product by integrating over the unit circle: $$\langle f,g\rangle=\int_0^1 f(e^{2\pi it}) \overline{g(e^{2\pi it})}dt.$$ Then the desired identity $$\langle f,x^n\rangle=\frac{f^{(n)}(0)}{n!}$$ follows from Cauchy's differentiation formula . In particular, the monomials are orthonormal. You can express the key idea here as a commutative diagram: Taking the Taylor series and then restricting to the unit circle is equivalent to restricting to the unit circle and then taking the Fourier series.
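The identity is easy to check numerically by discretizing the circle integral. The sketch below (my own) uses $f(z)=e^z$ , whose Taylor coefficients are $f^{(n)}(0)/n! = 1/n!$ ; a plain Riemann sum converges extremely fast here because the integrand is analytic and periodic:

```python
import cmath
import math

N = 2048  # sample points on the unit circle

def inner_with_monomial(f, n):
    """Approximate <f, z^n> = (1/2pi) ∫ f(e^{it}) conj(e^{int}) dt by a Riemann sum."""
    total = 0j
    for k in range(N):
        z = cmath.exp(2j * math.pi * k / N)
        total += f(z) * (z ** n).conjugate()
    return total / N

# f(z) = e^z: the projection onto z^n recovers the Taylor coefficient 1/n!
coeffs = [inner_with_monomial(cmath.exp, n) for n in range(6)]
for n, c in enumerate(coeffs):
    assert abs(c - 1 / math.factorial(n)) < 1e-9
```

In Fourier language this is exactly reading off the $n$ -th Fourier coefficient of $t \mapsto f(e^{2\pi i t})$ , matching the commutative diagram described above.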
|
|linear-algebra|functional-analysis|analysis|taylor-expansion|orthonormal|
| 0
|
Is there a commutative binary operation which is closed, has a neutral, unique negatives, satisfies cancellation but not associative?
|
What I want is: a closed operation; a neutral element; unique negatives; cancellation laws; commutativity; but no associativity. Can I get all this?
|
Here's an answer: $(4 \times 3)\times 2 \neq 4 \times(3 \times 2)$
|
|group-theory|
| 0
|
Number of possible relations with following restrictions | Discrete Mathematics
|
I am new to math.stackexchange; I am really not sure how I am supposed to ask this, but I need a logical explanation and a logical way to approach questions like these. I tried doing it myself. Thank you. The question is as follows: $$\text{Let } A = \{a, b, c, d, f, g, h, k, u, x, z,y\}. \\$$ $(1)$ How many relations are there on $A$ that are reflexive, symmetric, and contain all the elements $(a,z), (u,u)$ and $(z,a)$ ? (2) How many relations are there on $A$ that are antisymmetric and contain all the elements $(c,c), (x,a)$ and $(b,c)$ , but do not contain the element $(c,b)$ ? (3) How many relations are there on $A$ that are reflexive, antisymmetric, and contain all the elements $(x,a),(a,a),(b,d)$ , and $(a,z)$ , but do not contain any of the elements $(a,x),(a,b),(d,z),(z,d)$ or $(c,d)$ ? My Solution: Let $A$ = {a,b,c,d,e,f,g,h,k,w,x,y,z}. (1) I can use the formulas outlined for reflexive and symmetric relations; the formula is $$2^{\frac{n^2-n}{2}}$$ since the last condition says, t
|
Well, I think it is better not representing the relation as $n^2$ directed pairs that either exist or not, but as a graph of $\frac{n\left(n-1\right)}{2}$ general edges between 2 elements that can exist in 4 states - $a{\not-}b$ , $a{\Rightarrow}b$ , $a{\Leftarrow}b$ , $a{\iff}b$ and $n$ self-edges that either are, $a{\iff}a$ or are not, $a{\not-}a$ . With $n$ being 12 makes it 66 normal and 12 self-edges This way you can easily encompass requirements like reflexive or symmetric by simply excluding some of the states reflexive relation has simply all $n$ self-edges in the $a{\iff}a$ state symmetric relation can have the normal edges only in 2 states, $a{\not-}b$ or $a{\iff}b$ antisymmetric relation can have the normal edges in 3 states, $a{\not-}b$ , $a{\Rightarrow}b$ , $a{\Leftarrow}b$ So in your $(1)$ it is reflexive and symmetric thus the normal edges can exist in 2 states only and the self-edges don't contribute, so you get ${\Large2}^{\left(\frac{n\left(n-1\right)}{2} - 1\right)}$
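The edge-state viewpoint is easy to validate by brute force on a tiny ground set. This sketch (my own) enumerates all $2^{9}$ relations on a 3-element set and confirms the reflexive-and-symmetric count $2^{n(n-1)/2}$ :

```python
from itertools import product

n = 3
pairs = [(i, j) for i in range(n) for j in range(n)]

count_refl_sym = 0
for bits in product([0, 1], repeat=len(pairs)):
    rel = {p for p, b in zip(pairs, bits) if b}
    reflexive = all((i, i) in rel for i in range(n))
    symmetric = all((j, i) in rel for (i, j) in rel)
    if reflexive and symmetric:
        count_refl_sym += 1

# one free binary choice per unordered off-diagonal pair: 2^(n(n-1)/2) = 8 for n = 3
assert count_refl_sym == 2 ** (n * (n - 1) // 2)
```

The same loop with an antisymmetry test would confirm the $3^{n(n-1)/2}$ count for the three allowed states per edge.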
|
|discrete-mathematics|relations|
| 0
|
Finding a Lebesgue integrable function for every $1 \leqslant q < \infty$ that satisfies aditional requirement.
|
Consider the usual Lebesgue spaces. Amid one of my studies, I started wondering if it is possible to find an example that satisfies the following problem: Problem. Consider arbitrary elements $1 \leqslant p \leqslant q < \infty$ and $0 \leqslant \lambda < n$ , where $n$ represents the dimension of the space we're working in (in this case, assume it is $\mathbb R^n$ ). My goal is to find a function $f$ defined on $\mathbb R^n$ such that $f \in L^q(\mathbb R^n)$ and additionally $f$ has to satisfy the following property: $$ \sup_{x \in \mathbb R^n, \, r > 0} r^{-\lambda} \int_{B(x,r)} |f(y)|^p \, dy = \infty. $$ Does anyone have an idea of a function that possibly satisfies these requirements? Thanks for any help in advance.
|
We use spherical coordinates, so every point in $\mathbb{R}^n\setminus\{0\}$ can be uniquely represented as a pair $(r,\theta)$ , where $r>0$ and $\theta\in S^{n-1}$ , the unit sphere. If $\mu$ is the (non-normalized) Haar measure on the unit sphere, integrating in spherical coordinates is $$\int_0^\infty \int_{S^{n-1}} f(r,\theta)d\mu(\theta)r^{n-1}dr.$$ This is because we're integrating over different spheres of radius $r$ . For a fixed $r>0$ , the integral $\int_{S^{n-1}}f(r,\theta)d\mu(\theta)$ is the integral of the function over the fixed sphere of radius $r$ centered at $0$ . Scaling the sphere by a factor of $r$ scales the surface areas (which is what $\mu$ is capturing) by a factor of $r^{n-1}$ , which is where the $n-1$ comes from. If we have a nice, spherically symmetric function $f$ (in other words, if it's a function only of $r$ ), then for any $0 < s < \infty$ , we get $$\int_{\mathbb{R}^n} |f(x)|^s dx = \int_0^\infty \int_{S^{n-1}} |f(r)|^s d\mu(\theta)\,r^{n-1}dr = \mu(S^{n-1})\int_0^\infty |f(r)|^s r^{n-1}\,dr.$$
|
|real-analysis|functional-analysis|lebesgue-integral|lebesgue-measure|examples-counterexamples|
| 1
|
Prove that $T-\lambda I$ has minimal polynomial $q(z)=p(z+\lambda)$
|
I am learning Linear Algebra Done Right. Here is a question from 5B. Suppose $V$ is finite-dimensional, $T\in L(V)$ , and $p$ is the minimal polynomial of T. Suppose $\lambda\in F$ . Show that the minimal polynomial of $T-\lambda I$ is defined by $q(z)=p(z+\lambda)$ . Up to this point, I can only write like the following: By the definition of minimal polynomial, $p(T)=0$ , then $q(T-\lambda I)=p(T)=0$ , but this only shows that $q$ is a multiple of the minimal polynomial of $T-\lambda I$ . How can I show this is the minimal polynomial?
|
Hint: Do the same trick for $T_0 = T - \lambda I$ and $\lambda_0 = -\lambda$ and compare degrees. More concretely, if $p_0$ is the minimal polynomial of $T_0$ , then $q_0(z) := p_0(z - \lambda)$ satisfies $q_0(T) = p_0(T - \lambda I) = 0$ . What is the relation between $q_0$ and $p$ , and what do the resulting degree comparisons tell you about the minimality of $q$ ?
|
|linear-algebra|eigenvalues-eigenvectors|minimal-polynomials|
| 0
|
Classify the conic $x^2+xy+3y^2+5x$ and determine its cartesian equation
|
Classify the conic $C:\;x^2+xy+3y^2+5x=0$ , and determine its cartesian equation. I would be happy if you solve it close to my way: $$A= \begin{pmatrix}1&\frac{1}{2}&\frac{5}{2}\\ \frac{1}{2}&3&0\\ \frac{5}{2}&0&0\end{pmatrix}$$ $\det(A) = -\frac{75}{4} = \alpha_{11}\,\alpha_{22}\,\alpha_{33}$ ( $\neq 0$ , so irreducible), $A_{33}= \frac{75}{4} = \alpha_{11}\,\alpha_{22}$ ( $>0$ , so an ellipse), $I = \alpha_{11}+\alpha_{22}$ , $\alpha_{11}X^2+\alpha_{22}Y^2+\alpha_{33}=0$ . That's all I know; I could not find the cartesian equation.
|
The rotation angle for $Ax^2+Bxy+Cy^2+Dx+Ey+F=0$ is $\theta$ where $\tan{2\theta}=\frac{B}{A-C}$ which make $\cos{\theta}=\frac1{\sqrt{(2-\sqrt5)^2+1}},\sin{\theta}=-\frac1{\sqrt{(2+\sqrt5)^2+1}},$ for $x^2+xy+3y^2+5x=0.$ Let $M=\begin{pmatrix}1&\frac12\\\frac12&3\end{pmatrix},$ $P^tMP=\begin{pmatrix}(2-\frac{\sqrt5}2)&0\\0&(2+\frac{\sqrt5}2)\end{pmatrix},$ where $P=\begin{pmatrix}\cos{\theta}&-\sin{\theta}\\\sin{\theta}&\cos{\theta} \end{pmatrix}.$ The original equation therefore can be written $$\frac{75}{11}(\frac{(\cos{(\theta)}(x+\frac{30}{11})+\sin{(\theta)}(y-\frac5{11}))^2}{a^2}+\frac{(-\sin{(\theta)}(x+\frac{30}{11})+\cos{(\theta)}(y-\frac5{11}))^2}{b^2}-1)=0.$$ where $a^2=\frac{75}{11(2-\frac{\sqrt5}2)}\approx (2.78)^2$ and $b^2=\frac{75}{11(2+\frac{\sqrt5}2)}\approx (1.48)^2,$ from which you should be able to extract what you need. Even the area enclosed $\pi ab=\frac{150\pi}{11\sqrt{11}}.$ Or the squared eccentricity $(1-\frac{b^2}{a^2})=\frac2{11}(4\sqrt5-5),$ making the e
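Independently of the rotation, the classification can be confirmed numerically from the quadratic-part matrix $M=\begin{pmatrix}1&\frac12\\\frac12&3\end{pmatrix}$ : its determinant $\frac{11}{4}$ is positive and its eigenvalues $2\mp\frac{\sqrt5}{2}$ are both positive, so the conic is an ellipse. A small sketch (my own, using the closed form for eigenvalues of a symmetric $2\times2$ matrix):

```python
import math

# quadratic part of x^2 + x y + 3 y^2 + 5 x = 0
a11, a12, a22 = 1.0, 0.5, 3.0

det2 = a11 * a22 - a12 ** 2               # = 11/4 > 0  -> ellipse (given irreducibility)

# eigenvalues of [[a11, a12], [a12, a22]]: mean ± radius
mean = (a11 + a22) / 2
rad = math.sqrt(((a11 - a22) / 2) ** 2 + a12 ** 2)
lam1, lam2 = mean - rad, mean + rad       # 2 -+ sqrt(5)/2

assert det2 > 0 and lam1 > 0 and lam2 > 0
assert abs(lam1 - (2 - math.sqrt(5) / 2)) < 1e-12
assert abs(lam2 - (2 + math.sqrt(5) / 2)) < 1e-12
```

These are exactly the diagonal entries of $P^tMP$ above, so the semi-axes follow from $a^2, b^2$ as stated.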
|
|linear-algebra|matrices|conic-sections|diagonalization|
| 0
|
How to prove that any even number, divided by 2 a number of times, would eventually turn into an odd number or 1?
|
How do I prove that any even number, divided by 2 a number of times, would eventually go down to an odd number or 1?
|
Let $n$ be an even number. By the Fundamental Theorem of Arithmetic, we have that $$n = 2^{m}\cdot p_{1}^{m_{1}} \cdots p_{k}^{m_{k}},$$ for some prime numbers $p_{i}$ , with $p_{i} \not = p_{j}$ for all $i \not = j$ , and $2 \not = p_{j}$ for all $j$ , and $m \geq 1$ (since $n$ is even), $m_{j} \geq 0$ . Therefore, $\frac{n}{2^{m}}$ is an odd number. In other words, if we divide $n$ by $2$ a total of $m$ times, it turns into an odd number.
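Computationally, the proof corresponds to stripping the factor $2^m$ off $n$ ; a minimal sketch (helper name mine):

```python
def odd_part(n):
    """Divide n (>= 1) by 2 until the result is odd; returns (odd part, halvings m)."""
    m = 0
    while n % 2 == 0:
        n //= 2
        m += 1
    return n, m

# 48 = 2^4 * 3: four halvings reach the odd number 3
assert odd_part(48) == (3, 4)

# termination: each halving strictly decreases n while keeping it >= 1
assert all(odd_part(k)[0] % 2 == 1 for k in range(1, 1000))
```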
|
|elementary-number-theory|
| 1
|
Prove a string can be rearranged such that no character repeats
|
I'm solving a coding problem, and I wanted to mathematically prove that the answer exists. Essentially, we are given a string of lowercase letters, and we want to know if it can be rearranged such that no character repeats. Example: aab can be written aba , but aaab can't be rearranged in such a way. Let $N$ be the length of the string, and let there be $n$ unique characters $\{c_1,\cdots, c_n\}$ , and $f_m$ is the frequency of $c_m$ in the string. The solution exists if $\max\{f_m\}\leq \lceil N/2\rceil$ . This part I'll take for granted -- I'm curious about a subproblem. Suppose I repeat each distinct character $k\leq \min\{f_m\}$ times and form a valid string. For instance, if the input string is aabbccdddd , and $k=1$ , we would form a string abcd repeating each unique character once, and the remaining substring to rearrange would be abcddd . Is it still guaranteed that the remaining $N-km$ characters ( abcdd ) can be rearranged into a string that has no repeating characters? (For
|
Is it still guaranteed that the remaining $N-km$ characters ( abcdd ) can be arranged into a string that has no repeating characters? No - if some character is a large enough proportion of the whole, what's left can have more of that character than half its length. For example, applying this algorithm to abcddd , we start with abcd but then dd is left over. A summary of another algorithm which will work: Order the groups of letters from the most frequent to least frequent. Split this string in half, and interleave them in the odd and even positions in the final string. For example, abcdddd $\to$ dddd , abc $\to$ dadbdcd .
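The summarized algorithm is only a few lines to implement. Here is a sketch (function name and bookkeeping are mine), including the feasibility test $\max f_m \le \lceil N/2\rceil$ :

```python
from collections import Counter

def rearrange(s):
    """Permute s so no two equal characters are adjacent, or return None.

    Most frequent letters first, split in half, interleave the halves
    into the even and odd positions of the result.
    """
    n = len(s)
    counts = Counter(s)
    if max(counts.values()) > (n + 1) // 2:   # max f_m <= ceil(N/2) is necessary
        return None
    ordered = "".join(c * k for c, k in counts.most_common())
    half = (n + 1) // 2
    result = [""] * n
    result[0::2] = ordered[:half]             # first half  -> even positions
    result[1::2] = ordered[half:]             # second half -> odd positions
    return "".join(result)

out = rearrange("abcdddd")
assert sorted(out) == sorted("abcdddd")
assert all(out[i] != out[i + 1] for i in range(len(out) - 1))
assert rearrange("aaab") is None              # max frequency 3 > ceil(4/2)
```

Because identical letters sit contiguously in `ordered`, two equal neighbors in the output would force a run longer than $\lceil N/2\rceil$ , which the feasibility test rules out.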
|
|combinatorics|combinatorics-on-words|
| 1
|
Logic behind contrapositive proofs that involves De Morgan's Laws
|
Suppose $a,b\in\mathbb{Z}$ . If both $ab$ and $a+b$ are even, then both $a$ and $b$ are even Proof by contrapositive. Propositions: $P$ : $ab$ is even $Q$ : $a+b$ is even $R$ : $a$ is even $S$ : $b$ is even Then logically we have $(P\land Q)\implies (R\land S)$ . We have to negate $R\land S$ , so $\neg(R\land S)$ , by De Morgan's Laws we have $\neg R \lor \neg S$ And we have to get not $P\land Q$ , which is $\neg P \lor \neg Q$ Then we have $(\neg R \lor \neg S)\implies(\neg P \lor \neg Q)$ Which places in a truth table would be the right ones to evaluate this? My answer is that the first 3 rows are the ones we have to evaluate. My reasoning is: First we have to force $(\neg R \lor \neg S)$ to be true Second we have to prove all different combination that makes $(\neg R \lor \neg S)\implies(\neg P \lor \neg Q)$ true (marked in blue brackets) Finally, since we have 3 equal combinations we decide to choose only the first 3 rows. Is my reasoning correct? I leave the image again without an
|
There are in fact several issues with your approach. Here are few observations: The statement you want to prove is one of arithmetic , and should rather be formalized in first-order logic (you need predicates, aka propositional functions, indeed already to define ${\operatorname {even}}$ ), since the structure of $a$ and $b$ , as well as the specific definitions of $+$ , $\times$ and ${\operatorname {even}}$ matter. It is in fact a statement that is not purely logical , you need arithmetic facts (previous theorems) to prove it, e.g. that the product of two integers is even iff at least one of the operands is even. So, propositional logic does not help there, but, suppose it did: in classical propositional logic , an approach different from inferential proof (where equivalences such as DeMorgan's, usually axiomatized, are used in inferential steps) is the so called "method of truth tables", where we simply compute the truth table for the statement to prove (exactly as you have done in y
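On the purely propositional point, the equivalence of an implication and its contrapositive is exactly a truth-table computation over all $2^4$ assignments, which a few lines can confirm (my own sketch):

```python
from itertools import product

def contrapositive_equivalent():
    """Check (P and Q) -> (R and S) against (not R or not S) -> (not P or not Q)
    on all 16 truth assignments."""
    for P, Q, R, S in product([False, True], repeat=4):
        implication = (not (P and Q)) or (R and S)
        contrapositive = (not ((not R) or (not S))) or ((not P) or (not Q))
        if implication != contrapositive:
            return False
    return True

assert contrapositive_equivalent()
```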
|
|logic|proof-writing|proof-explanation|
| 1
|
Sum of compact modules over compact ring is compact?
|
Hopefully a quick one! Let $R$ be a compact noetherian ring and $S$ be an $R$ -module. If $T,T^\prime$ are compact submodules of $S$ , does it then follow that the module $T+T^\prime$ is compact too (where $T+T^\prime$ is the module generated by $T \cup T^\prime$ )? I know that this is false for general modules, but I wonder if it works in this scenario. Attempt: Suppose $T+T^\prime \subseteq \bigcup_{i \in I} U_i$ is an open cover and $R \subseteq \bigcup_{j=1}^n V_j$ is a finite (sub)cover (as $R$ is compact). As $T,T^\prime \subseteq \bigcup_{i \in I} U_i$ we can extract finite subcovers for each: $T=\bigcup_{i_1 \in I_1} U_{i_1}$ and $T^\prime = \bigcup_{i_2 \in I_2} U_{i_2}$ for $I_1, I_2 \subseteq I$ finite sets. Note that then $T \cup T^\prime \subseteq \bigcup_{i_3 \in I_3} U_{i_3}$ where $I_3 = I_1 \cup I_2$ is also a finite set. Then $T+T^\prime = R \cdot (T \cup T^\prime) \subseteq \bigcup_{j=1}^n V_j \cdot \left( \bigcup_{i_3 \in I_3} U_{i_3} \right)$ which is a finite union of a p
|
and $R \subseteq \bigcup_{j=1}^n V_j$ is a finite (sub)cover (as $R$ is compact). Subcover of what? $U_i$ are subsets of $S$ , not $R$ . They do not cover $R$ at all, to pick a "subcover". Then $T+T^\prime = R \cdot (T \cup T^\prime)$ No, those subsets are hardly ever equal. The subset on the left consists of all elements of the form $t+t'$ for $t\in T$ and $t'\in T'$ , while the subset on the right consists of all elements of the form $rx$ for $r\in R$ and $x\in T\cup T'$ . In particular $R \cdot (T \cup T^\prime)$ is literally equal to $T\cup T'$ . A proper solution comes from the observation that $T+T'$ is a continuous image of $T\times T'$ which is compact (as a product of compact spaces). The surjective continuous mapping we are looking for is given by $(x,y)\mapsto x+y$ . And this is regardless of what $R$ is (compact or not, noetherian or not), the only thing we need to know is that $T$ and $T'$ are compact.
|
|abstract-algebra|general-topology|ring-theory|modules|compactness|
| 1
|
Prove that () is odd if and only if n is perfect square or twice a perfect square.
|
What I know is to let $n > 1$ and $$n = p_1^{k_1}p_2^{k_2}\cdots p_r^{k_r}$$
|
Assuming that $\sigma (n)$ denotes the sum of all divisors of $n$ , we know $$\sigma(n) = (1+p_1+\cdots+p_1^{k_1})(1+p_2+\cdots+p_2^{k_2}) \ldots (1+p_r+\cdots+p_r^{k_r})$$ This is odd if and only if all factors are odd. Suppose $p_1 = 2$ (whether $n$ is even or not doesn't matter, we can just take $k_1=0$ when needed), so the rest of the prime factors are odd, and thus we need $k_2,\ldots, k_r$ to be even; otherwise we would be adding an even number of odd numbers in some factor, making it even. Note that, if $k_1$ is $0$ or even, $n$ is a perfect square, and if $k_1$ is odd, $n$ is twice a perfect square.
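The characterization is easy to verify by brute force for small $n$ ; the sketch below (naive divisor sum, helper names mine) checks every $n < 500$ :

```python
import math

def sigma(n):
    """Sum of all positive divisors of n (naive O(n) loop)."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

def is_square(m):
    r = math.isqrt(m)
    return r * r == m

# sigma(n) is odd exactly when n is a square or twice a square
for n in range(1, 500):
    predicted = is_square(n) or (n % 2 == 0 and is_square(n // 2))
    assert (sigma(n) % 2 == 1) == predicted
```

For instance $\sigma(9)=13$ and $\sigma(18)=39$ are odd, while $\sigma(12)=28$ is even.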
|
|elementary-number-theory|
| 0
|
Matricial differential equation and eigenvalues
|
I have the matricial differential equation: $$\frac{\text{d}}{\text{d}t}\vec{x}(t)=A \cdot \vec{x}(t)$$ where: $$\vec{x}(t) \in \mathbb{R}^3, \quad A=\begin{pmatrix} a & 0 & b \newline 0 & c & 0 \newline d & 0 & a \end{pmatrix}, \quad a,b,c,d \in \mathbb{R}$$ I need to find $a,b,c,d \in \mathbb{R}$ such that: $$\lim_{t \to +\infty} \vec{x}(t)=\vec{0}$$ for every initial condition $\vec{x}(0) \in \mathbb{R}^3$ . The very short solution provided by the text is that the eigenvalues of the matrix $A$ must have negative real part. I can't understand why. What I have done so far: I know that the solution can be written in the form: $$\vec{x}(t)=e^{tA} \cdot \vec{x}(0)$$ and I can easily find the eigenvalues of $A$ : $$c, \quad a-\sqrt{bd}, \quad a+\sqrt{bd}$$ If the real part must be negative then we must have: $c<0$ , $a-\sqrt{bd}<0$ and $a+\sqrt{bd}<0$ .
|
If the matrix $A$ can be diagonalized then it can be written as $$ A = U\Lambda U^{-1} $$ where $U$ is the matrix formed with the eigenvectors of $A$ and $\Lambda$ is the diagonal matrix formed with its eigenvalues. $$ \Lambda = \pmatrix{\lambda_1 & 0 & \cdots & 0\\ 0 & \lambda_2 & \cdots & 0 \\ 0 & 0 & \cdots & \lambda_n } $$ What's interesting about this representation is \begin{eqnarray} A &=& U\Lambda U^{-1} \\ A^2 &=& (U\Lambda U^{-1})(U\Lambda U^{-1}) = U\Lambda^2 U^{-1}\\ &\vdots& \\ A^k &=& U\Lambda^k U^{-1} \end{eqnarray} so that $$ e^A = \sum_k \frac{A^k}{k!} = \sum_k\frac{U\Lambda^k U^{-1}}{k!} = U\left(\sum_k \frac{\Lambda^k}{k!}\right)U^{-1} = Ue^\Lambda U^{-1} $$ where $$ e^\Lambda = \pmatrix{e^{\lambda_1} & 0 & \cdots & 0\\ 0 & e^{\lambda_2} & \cdots & 0 \\ 0 & 0 & \cdots & e^{\lambda_n} } $$ Now let's get back to your problem. As you correctly stated $$ x(t) = e^{At}x(0) = U\pmatrix{e^{\lambda_1 t} & 0 & \cdots & 0\\ 0 & e^{\lambda_2t} & \cdots & 0 \\ 0 & 0 & \cdots & e
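The decay criterion can be illustrated numerically. The values $a=-1$, $b=1$, $c=-2$, $d=-1$ below are an assumed example, chosen so that the eigenvalues are $-2$ and $-1\pm i$, all with negative real part:

```python
# Numerical illustration with the assumed example a=-1, b=1, c=-2, d=-1:
# every eigenvalue of A has negative real part, so e^{tA} x(0) -> 0.
import numpy as np

a, b, c, d = -1.0, 1.0, -2.0, -1.0
A = np.array([[a, 0.0, b],
              [0.0, c, 0.0],
              [d, 0.0, a]])

lam, U = np.linalg.eig(A)  # A = U diag(lam) U^{-1} (A is diagonalizable here)
assert all(ev.real < 0 for ev in lam)

def expm_tA(t):
    """e^{tA} via the eigendecomposition, as in the answer above."""
    return (U @ np.diag(np.exp(t * lam)) @ np.linalg.inv(U)).real

x0 = np.array([1.0, -2.0, 3.0])
print(np.linalg.norm(expm_tA(0.0) @ x0))   # = |x0|, since e^{0A} is the identity
print(np.linalg.norm(expm_tA(50.0) @ x0))  # ≈ 0
```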
|
|linear-algebra|ordinary-differential-equations|eigenvalues-eigenvectors|
| 1
|
Periodic solutions to a first order linear ODE with incommensurable periods
|
I'm studying for an admission test for the PhD in Mathematical Analysis. I'm stuck trying to solve this exercise: Consider the ODE $x'(t)+a(t)x(t)=b(t)$ where $a(t)$ and $b(t)$ are continuous functions from $\mathbb{R}$ to itself. Prove that this equation cannot have 3 periodic non-constant solutions with pairwise incommensurable periods. I recall that if $k\in\mathbb{R}$ and $q\in\mathbb{R}-\{0\}$ , then they are called incommensurable when $\frac{k}{q}\notin\mathbb{Q}$ . I solved the first part of the exercise that asked to prove that if a continuous function on $\mathbb{R}$ is periodic and has two periods that are incommensurable then it's a constant function. But I can't see how the first part of the exercise can help with the second part. I also tried to do some computations with the solutions (subtracting and adding them etc.) and to write the general integral of the ODE and see if it could help. In both cases all I did seemed useless.
|
Let $x_1,x_2,x_3:\mathbb{R} \to\mathbb{R}$ be non-constant solutions of $x'+a(t)x=b(t)$ with pairwise incommensurable periods $p_1,p_2,p_3>0$ . Set $y_1:=x_1-x_2$ and $y_2:=x_2-x_3$ . Note that $y_1$ cannot be the zero function since otherwise $x_1=x_2$ would have periods $p_1$ and $p_2$ and hence be constant. The same way $y_2 \not=0$ . Now, $y_1$ and $y_2$ are solutions of the homogeneous equation $x'+a(t)x=0$ , hence linearly dependent (the solution space of a first-order homogeneous linear equation is one-dimensional): $\alpha_1 y_1 + \alpha_2y_2= 0$ for some $\alpha_1,\alpha_2 \in \mathbb{R}$ with $\alpha_1\not=0\not=\alpha_2$ . So $$ 0=\alpha_1(x_1(t)-x_2(t)) + \alpha_2(x_2(t)-x_3(t)) = \alpha_1x_1(t) + (\alpha_2-\alpha_1)x_2(t)-\alpha_2 x_3(t) \quad (t \in \mathbb{R}). $$ Thus $$ 0= \alpha_1x_1(t) + (\alpha_2-\alpha_1)x_2(t+p_1)-\alpha_2 x_3(t+p_1) \quad (t \in \mathbb{R}), $$ which yields $$ (\alpha_2-\alpha_1)x_2(t+p_1)-\alpha_2 x_3(t+p_1)= -\alpha_1x_1(t)=(\alpha_2-\alpha_1)x_2(t)-\alpha_2 x_3(t) \quad (t \in \mathbb{R}), $$ that is $$ \frac{\alpha_2-\alpha_1}{\al
|
|real-analysis|ordinary-differential-equations|analysis|periodic-functions|
| 1
|
Any reference that explains generated equivalence relation
|
I'm looking for a reference that characterizes when an equivalence relation can be generated from a relation and gives a clear explanation of it.
|
Did's answer gives a direct explicit characterization, but you can also break it up into three steps that correspond to the defining properties of an equivalence relation: Relate every element to itself, to make sure the relation is reflexive . Whenever $x$ is related to $y$ , relate $y$ to $x$ , to make sure the relation is symmetric . Directly relate elements that are connected by a chain of relations, to make sure the relation is transitive . In a bit more detail, letting $X$ be a set and $R$ a relation on $X$ : Take the reflexive closure : for every $x \in X$ , add $(x, x)$ . We obtain a relation $R_1$ . Take the symmetric closure of that: for every $(x, y) \in R_1$ , add $(y, x)$ to the relation. We obtain a relation $R_2$ . Take the transitive closure of that: for every finite path of related elements $x_1 \mathrel{R_2} x_2 \mathrel{R_2} x_3 \cdots \mathrel{R_2} x_n$ (meaning $(x_1, x_2), (x_2, x_3), \ldots \in R_2$ ), add $(x_1, x_n)$ to the relation. We obtain a relation $R_3$
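The three closure steps above can be sketched directly in Python (a naive fixed-point iteration, fine for small finite sets):

```python
# Smallest equivalence relation on X containing R, built in the three
# closure steps: reflexive, then symmetric, then transitive.
def equivalence_closure(X, R):
    # Step 1: reflexive closure
    R1 = set(R) | {(x, x) for x in X}
    # Step 2: symmetric closure
    R2 = R1 | {(y, x) for (x, y) in R1}
    # Step 3: transitive closure (iterate composition until stable)
    R3 = set(R2)
    changed = True
    while changed:
        changed = False
        new = {(x, w) for (x, y) in R3 for (z, w) in R3 if y == z}
        if not new <= R3:
            R3 |= new
            changed = True
    return R3

X = {1, 2, 3, 4}
R = {(1, 2), (2, 3)}
E = equivalence_closure(X, R)
print(sorted(E))  # classes {1, 2, 3} and {4}: 9 pairs plus (4, 4)
```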
|
|reference-request|elementary-set-theory|
| 0
|
boundary of cone operation
|
In Bredon's book, at the beginning of the chapter on subdivision (homology theory), there is the following construction: for $v \in \Delta_q$ define $ v: L_p(\Delta_q) \to L_{p+1}(\Delta_q)$ (where $L$ is generated by the affine simplices) by $[v_0,...,v_p] \mapsto [v,v_0,...,v_p]$ . Then the claim is that for $p>0: \delta v[v_0,...,v_p] = [v_0,...,v_p] - v(\delta [v_0,...,v_p]) $ . This equation I do understand just from the definition of the boundary operator. But why does this not hold for $p=0$ ? Moreover, the book claims that for $p=0$ we have $\delta vc= c-\varepsilon(c)[v]$ , where $\varepsilon$ is the augmentation, i.e. it sends a $0$-chain to the sum of its coefficients. I absolutely don't understand why the last equation should be true, what $\varepsilon(c)[v]$ should mean, and why the first one is not true for $p=0$ . Moreover this augmentation appears a few times in this book but I never really understood why Bredon is using it. So if someone can explain to me what is the meaning of t
|
For each $p$ -chain $c = \sum_\sigma n_\sigma \sigma$ we have $vc = \sum_\sigma n_\sigma v \sigma$ . For $p = 0$ we have $\sigma = [v_0]$ and $v\sigma = [v, v_0]$ . Hence $$\partial v\sigma = \partial [v, v_0] = [v_0] - [v] = \sigma - [v].$$ Here $[v]$ is the $0$ -simplex with vertex $v$ . Given a $0$ -chain $c = \sum_\sigma n_\sigma \sigma$ we get $$\partial vc = \partial \left(\sum_\sigma n_\sigma v \sigma\right) = \sum_\sigma n_\sigma\partial v\sigma = \sum_\sigma n_\sigma(\sigma - [v]) = \sum_\sigma n_\sigma \sigma - \sum_\sigma n_\sigma[v] \\= c - \left(\sum_\sigma n_\sigma \right) [v] = c -\epsilon(c)[v] .$$
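The identity can also be checked mechanically on a small $0$-chain; the representation of chains as coefficient dictionaries below is an implementation choice, not Bredon's notation:

```python
# Check of  boundary(v c) = c - eps(c) [v]  for a 0-chain c,
# with simplices as tuples of vertex labels and chains as dicts.
from collections import defaultdict

def add(chain, simplex, coef):
    chain[simplex] += coef
    if chain[simplex] == 0:
        del chain[simplex]  # keep the representation canonical

def boundary(chain):
    out = defaultdict(int)
    for s, n in chain.items():
        for i in range(len(s)):
            add(out, s[:i] + s[i + 1:], (-1) ** i * n)
    return out

def cone(v, chain):
    out = defaultdict(int)
    for s, n in chain.items():
        add(out, (v,) + s, n)
    return out

def eps(chain):
    """Augmentation: sum of the coefficients of a 0-chain."""
    return sum(chain.values())

# c = 2[a] - 5[b]
c = defaultdict(int, {("a",): 2, ("b",): -5})
lhs = boundary(cone("v", c))
rhs = defaultdict(int, c)
add(rhs, ("v",), -eps(c))  # rhs = c - eps(c) [v]
print(dict(lhs))
print(dict(rhs))
assert lhs == rhs
```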
|
|algebraic-topology|homology-cohomology|homological-algebra|
| 1
|
Calculating custom coordinates
|
I have a set of four coordinates with which I am creating a rectangular path in SVG, as follows. This rectangular path has width 10 and height 150.

| point# | x | y |
|---|---|---|
| 0 | 650 | 300 |
| 1 | 650 | 150 |
| 2 | 640 | 150 |
| 3 | 640 | 300 |

I need to apply a rotation (r) to this path about the coordinate (650, 300). Let's suppose r = 85deg; below is what happens to the path after rotation. After the transformation, I need to ensure that the opposite arm of the rectangle still stays on the green line. How can I calculate the new coordinate where it would extend to the base green line?
|
OK, here is a representation using GeoGebra. The rectangle $ABCD$ is your original rectangle (I have used upward-increasing coordinates in GeoGebra), and $A'B'C'D'$ is the same rectangle after applying an 85° rotation about the point $(650,300)$. If I have correctly understood your question, you want to know the coordinates of the point labelled $I$ on the graphic. We know $\angle(A'I, A'D') = r$ . We know $\angle(D'A', D'I)$ is a right angle. And as $A'D'$ is obtained from $AD$ by a rotation, both segments have the same length: $10$ units. We can immediately conclude $A'D' = A'I \cos(r)$ , or $A'I = \dfrac{A'D'}{\cos(r)}$ . Numeric application: $A'I = \dfrac{10}{\cos(85°)}\approx 114.74$ . So $I$ is the point with coordinates $(535.26, 300)$
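The numeric application at the end can be reproduced in a few lines (degrees converted to radians):

```python
# A'I = A'D' / cos(r), with A'D' = 10 and r = 85 degrees;
# I then lies on the base line at x = 650 - A'I.
import math

r = math.radians(85)
AI = 10 / math.cos(r)
print(round(AI, 2))          # 114.74
print(round(650 - AI, 2))    # 535.26 -> I = (535.26, 300)
```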
|
|geometry|trigonometry|
| 0
|
Classify the conic $x^2+xy+3y^2+5x$ and determine its cartesian equation
|
Classify the conic $C:\;x^2+xy+3y^2+5x=0$ , and determine its cartesian equation. I would be happy if you solve it close to my way. $$A=\begin{pmatrix}1&\frac{1}{2}&\frac{5}{2}\\ \frac{1}{2}&3&0\\ \frac{5}{2}&0&0\end{pmatrix}$$ $\det(A)= -\frac {75}{4} = \alpha_{11}\,\alpha_{22}\,\alpha_{33}$ ($\neq0$, irreducible); $A_{33}= \frac {11}{4} = \alpha_{11}\,\alpha_{22}$ ($>0$, ellipse); $I = \alpha_{11}+\alpha_{22}$ ; $\alpha_{11}X^2+\alpha_{22}Y^2+\alpha_{33}=0$ . That's all I know; I could not find the cartesian equation.
|
The conic matrix $$A=\begin{pmatrix}1 & \frac{1}{2} & \frac{5}{2}\\ \frac{1}{2} & 3 & 0\\ \frac{5}{2} & 0 & 0 \end{pmatrix} \tag{1}$$ can be transformed into the following diagonal form $$(T\;R)^{\intercal}\;A\;(T\;R)=\begin{pmatrix}2-\frac{\sqrt{5}}{2}\\ & 2+\frac{\sqrt{5}}{2}\\ & & \text{-}\frac{75}{11} \end{pmatrix} \tag{2}$$ using the translation matrix $$T=\begin{pmatrix}1 & & \text{-}\frac{30}{11}\\ & 1 & \frac{5}{11}\\ & & 1 \end{pmatrix} \tag{3}$$ and rotation matrix $$R=\begin{pmatrix}\cos\left(\tfrac{1}{2}{\rm atan}\left(\tfrac{1}{2}\right)\right) & \text{-}\sin\left(\tfrac{1}{2}{\rm atan}\left(\tfrac{1}{2}\right)\right)\\ \sin\left(\tfrac{1}{2}{\rm atan}\left(\tfrac{1}{2}\right)\right) & \cos\left(\tfrac{1}{2}{\rm atan}\left(\tfrac{1}{2}\right)\right)\\ & & 1 \end{pmatrix} \tag{4}$$ From (2) the canonical form $\left(\tfrac{u}{a}\right)^{2}+\left(\tfrac{v}{b}\right)^{2}=1$ is $$\begin{aligned}a=\sqrt{\tfrac{600}{121}+\tfrac{150\sqrt{5}}{121}} & & \text{semi-major axis}\\ b=\
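Assuming SymPy is available, the center encoded in the translation $T$, the eigenvalues $2\pm\tfrac{\sqrt5}{2}$, and the constant $-\tfrac{75}{11}$ can all be sanity-checked:

```python
# SymPy sanity check of the diagonalization data for x^2 + xy + 3y^2 + 5x.
import sympy as sp

x, y = sp.symbols("x y")
f = x**2 + x*y + 3*y**2 + 5*x

# Center of the conic: the gradient of f vanishes there.
center = sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y])
print(center)  # {x: -30/11, y: 5/11}, matching the translation T

# Eigenvalues of the quadratic part give the first two diagonal entries.
Q = sp.Matrix([[1, sp.Rational(1, 2)], [sp.Rational(1, 2), 3]])
print(sorted(Q.eigenvals()))

# Evaluating f at the center gives the constant of the translated conic.
print(sp.simplify(f.subs(center)))  # -75/11
```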
|
|linear-algebra|matrices|conic-sections|diagonalization|
| 0
|
Help proving a chain rule from total derivative chain rule
|
Consider $f:\mathbb{R}^d\to\mathbb{R}$ and $g:\mathbb{R}\to\mathbb{R}^d$ . It is known that $$ \tag{*} (f\circ g)'(x) = \sum_{i=1}^d \partial_i f(g(x)) \cdot g'(x)^i $$ I would like to prove this from the chain rule for the total derivatives: $$ \tag{**} D_x(f\circ g) = D_{g(x)}f \circ D_xg $$ I'm not sure how to rigorously proceed here. Intuitively I know the two total differentials in the total derivative chain rule can be represented as matrices and their composition will correspond to the multiplication in the desired partial derivative chain rule. But I'm not sure how to get there. How does the function composition get converted into a summation + multiplication? Another related issue. The expression $(*)$ is a real number when evaluated at $x$ . The expression $(**)$ is a linear map from $\mathbb{R}\to\mathbb{R}$ . It's not too difficult to understand that the linear maps $\mathbb{R}\to\mathbb{R}$ (the dual space of $\mathbb{R}$ ) are isomorphic to $\mathbb{R}$ itself. But still,
|
$ \newcommand\R{\mathbb R} $ Two things: You have to recognize that the "total derivative" is best understood as a "differential". Meaning, if $F : \R \to \R$ then $F'$ is not the total derivative. Instead, $$ D_xF(h) = F'(x)h,\quad h\in\R. $$ The same remark can be made about $g : \R \to \R^d$ . Your desired chain rule is the inner product of the gradient of $f$ with the derivative of $g$ : $$ (f\circ g)'(x) = \nabla f(g(x))\cdot g'(x). $$ The gradient can be defined by $$ \nabla f(x)\cdot h = D_xf(h),\quad h\in\R^d. $$ Putting these points together, we can see that (with $h \in \R$ ): $$ (f\circ g)'(x) = \nabla f(g(x))\cdot g'(x) $$ $$ \iff (f\circ g)'(x)h = \nabla f(g(x))\cdot g'(x)h $$ $$ \iff D_x[f\circ g](h) = D_{g(x)}f(g'(x)h) $$ $$ \iff D_x[f\circ g](h) = D_{g(x)}f(D_xg(h)). $$
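A concrete check of the identity $(f\circ g)'(x)=\nabla f(g(x))\cdot g'(x)$, with $f(u,v)=u^2v$ and $g(x)=(\sin x, x^3)$ as assumed example functions:

```python
# SymPy check: differentiating the composition directly agrees with
# the gradient of f at g(x) dotted with g'(x).
import sympy as sp

x, u, v = sp.symbols("x u v")
f = u**2 * v
g = sp.Matrix([sp.sin(x), x**3])

# Left-hand side: differentiate the composition directly.
lhs = sp.diff(f.subs({u: g[0], v: g[1]}), x)

# Right-hand side: gradient of f evaluated at g(x), dotted with g'(x).
grad_f = sp.Matrix([sp.diff(f, u), sp.diff(f, v)]).subs({u: g[0], v: g[1]})
rhs = grad_f.dot(sp.diff(g, x))

assert sp.simplify(lhs - rhs) == 0
print("chain rule verified for this example")
```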
|
|linear-algebra|multivariable-calculus|partial-derivative|
| 1
|
Why do we need square integrability in showing $E(Y|X)$ minimizes expected quadratic loss?
|
I've read that $E(Y|X)=\underset{f(x)\in \mathcal{F}}{\arg\min} E(Y-f(X))^2$ , where $\mathcal{F}$ is the set of all square integrable functions in $x$ . The proof of this result is simple and does not seem to use square integrability of the function $f(x)$ . I'm wondering why we restrict $f(x)$ to be square integrable? If this restriction is relaxed for $\mathcal{F}$ , does $E(Y|X)$ fail to be the argmin?
|
The correct restriction to make is to ensure square-integrability of $f(X)$ , not necessarily $f$ ; i.e. $E(f(X)^2) < \infty$ . Eventually in the proof of this you want to show that $$E[(Y- E(Y|X))(E(Y|X) - f(X)) ] = 0$$ for all reasonably defined $f(X)$ . Before you prove that expectation is zero, you should prove it exists, something which the Cauchy–Schwarz inequality affords you. That requires the square-integrability of each of the factors, hence of $f(X)$ .
|
|probability-theory|conditional-expectation|
| 1
|
"Subset" List Coloring for Graphs
|
I am interested in the following problem for research: we are given a graph $G$ and two integers $N, d$ with $N \ge d$ . Say that $G$ is " $(N,d)$ -subset-list colorable" if each vertex $v$ can be assigned a subset $s(v)$ of $\{1, ..., N\}$ such that, for each edge $uv$ , the symmetric difference of $s(u)$ and $s(v)$ has size at least $d$ . When $d=1$ , of course we can apply any known "standard list coloring" algorithm by letting the lists that can be chosen be every subset of $\{1, ..., N\}$ , i.e., each vertex "color" is a subset of $\{1,...,N\}$ . Has this problem been studied before? Simple Google Scholar searches have not turned up anything. If not, is there a nontrivial exact algorithm? General bounds? I'm interested, for fixed $d$ , in finding the smallest $N$ for which $G$ is $(N,d)$ -subset-list colorable.
|
Like many coloring variants, this problem can be phrased as a graph homomorphism problem. Let $H_{N,d}$ be the graph whose vertices are subsets of $\{1,\dots,N\}$ , with an edge between two subsets if and only if the symmetric difference between them has size at least $d$ . Then a graph $G$ is $(N,d)$ -subset-list colorable if and only if there is a graph homomorphism $G \to H_{N,d}$ . Determining if $G$ has a homomorphism to a fixed graph $H$ is also called the " $H$ -coloring problem", so here, we are faced with the $H_{N,d}$ -coloring problem. By the way, the term "subset-list colorable" is misleading; list-colorability is a different, and unrelated problem. I think " $(N,d)$ -subset colorable" would be equally distinguishing, and less confusing. What does the graph homomorphism terminology gain us? Well, we can say the following: By the Hell–Nešetřil theorem, if a fixed graph $H$ is not bipartite, the $H$ -coloring problem is NP-complete. In particular, this problem is NP-complete
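For very small parameters, $H_{N,d}$ can be built explicitly and tested for bipartiteness (a brute-force sketch; the vertex set grows as $2^N$, so this does not scale):

```python
# Build H_{N,d} (vertices: subsets of {1..N} as frozensets; edges: symmetric
# difference of size >= d) and test bipartiteness by 2-coloring.
from itertools import combinations

def build_H(N, d):
    verts = [frozenset(c)
             for r in range(N + 1)
             for c in combinations(range(1, N + 1), r)]
    edges = {(a, b) for a, b in combinations(verts, 2) if len(a ^ b) >= d}
    return verts, edges

def is_bipartite(verts, edges):
    adj = {v: set() for v in verts}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    color = {}
    for s in verts:
        if s in color:
            continue
        color[s] = 0
        stack = [s]
        while stack:
            v = stack.pop()
            for w in adj[v]:
                if w not in color:
                    color[w] = 1 - color[v]
                    stack.append(w)
                elif color[w] == color[v]:
                    return False
    return True

verts, edges = build_H(2, 1)
print(len(verts), len(edges))              # 4 subsets, 6 edges: this is K4
print(is_bipartite(verts, edges))          # False -> H-coloring is NP-complete
print(is_bipartite(*build_H(2, 2)))        # True: just a perfect matching
```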
|
|graph-theory|reference-request|coloring|
| 1
|
What is the compactification of $y=e^x$ in $\mathbb P_{\mathbb C}^2$?
|
Denote the curve $V(y-e^x)\subset \mathbb A_{\mathbb C}^2$ by X, and consider its closure $\bar X$ in $\mathbb P_{\mathbb C}^2$ . By GAGA, we know $\bar X$ is algebraic. I think this is amazing. I want to ask what is the algebraic equation of $\bar X$ ?
|
I claim the closure in the Euclidean topology is $$\overline X:=\{(x:e^x:1):x\in\mathbb C\}\cup\{(x:y:0):(x:y)\in\mathbb P^1\}.$$ Indeed, for any $x\in\mathbb C$ , all the points $(x+2\pi in:e^x:1)$ lie in $X$ , so the closure contains $(1:0:0)$ , their limit as $n\to\infty$ . Moreover, the limit of $(x:e^x:1)$ as $x\in\mathbb R$ goes to infinity is $(0:1:0)$ , so $(0:1:0)\in\overline X$ . Finally, we hope to find points $x_n\in\mathbb C$ such that $(x_n:e^{x_n}:1)$ converges to $(y:1:0)$ , i.e., $x_ne^{-x_n}$ converges to $y$ . But such a sequence $x_n$ exists: since $xe^{-x}$ is an entire function with an essential singularity at infinity, Picard's great theorem tells us that $xe^{-x}=y$ has an infinite set of solutions, where necessarily $|x_n|\to\infty$ . Thus $\overline X$ is just isomorphic to $\mathbb P^1\sqcup\mathbb P^1/(\infty\sim \infty)$
|
|algebraic-geometry|complex-geometry|
| 1
|
How to show perpendicularity in a square
|
$ABCD$ is a square so $AB=BC=CD=DA=20$ and $AE=BF=15$ . Since $DAE \sphericalangle =90^\circ$ we can use the Pythagorean theorem so $AD^2+AE^2=DE^2$ and we get that $DE=25$ . We know that $DAE\sphericalangle=ABF\sphericalangle=90^\circ$ , $AD=AB=20$ and $AE=BF=15$ so from (SAS) we get that triangles $DAE\equiv ABF$ . How to show that $AF \perp DE$ , so that I can calculate $AM=\frac{AD\cdot AE}{DE}=12$
|
Rotate the entire figure $90^\circ$ clockwise about the center of the square. $D$ will land on where $A$ used to be, and $E$ will land on where $F$ used to be. Thus, $DE$ will necessarily land on where $AF$ used to be. Alternately, you can show that the two triangles $\triangle ADM$ and $\triangle FAB$ are similar by showing equality of the two non-right angles.
|
|geometry|
| 1
|
Old Identity of Cauchy's
|
So I'm reading an old paper of Cauchy's from 1847, and at one point he merely states without proof the following identity: let $n$ be an odd prime number, and let $\rho$ be an $n^{\text{th}}$ root of unity. Then $$\prod_{k=1}^{n-1} (1-\rho^{k}) = n .$$ Due to the fact that Cauchy states it without proof and without reference to where a proof can be found, I can only assume that it mustn't be too difficult. Nevertheless, I am unable to prove it on my own, and if it is a standard identity (and it certainly looks familiar), I have been unable to find it by googling it. Any and all help would be appreciated.
|
Actually, this is a consequence of the fundamental theorem of algebra. Indeed, the $n^\mathrm{th}$ roots of unity, namely $z_k = \rho^k$ , $k = 0,1,2,\ldots,n-1$ , where $\rho = e^{2\pi i/n}$ , are defined as the roots of the polynomial $z^n - 1$ , which can be factorized as $$ z^n - 1 = \prod_{k=0}^{n-1} (z-z_k) = (z-1) \prod_{k=1}^{n-1} (z-\rho^k) $$ thanks to the said theorem, hence $$ \prod_{k=1}^{n-1} (z-\rho^k) = \frac{z^n-1}{z-1} $$ and finally $$ \prod_{k=1}^{n-1} (1-\rho^k) = n $$ by evaluating the expression at $z = 1$ with the help of L'Hospital's rule.
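A quick numerical confirmation; note the identity in fact holds for every $n \ge 2$, primality of $n$ is not needed:

```python
# Check prod_{k=1}^{n-1} (1 - rho^k) = n for rho = exp(2*pi*i/n).
import cmath, math

for n in (2, 3, 5, 7, 10):
    rho = cmath.exp(2j * math.pi / n)
    prod = 1.0 + 0j
    for k in range(1, n):
        prod *= 1 - rho ** k
    assert abs(prod - n) < 1e-9, (n, prod)
print("product of (1 - rho^k) equals n for all tested n")
```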
|
|algebra-precalculus|complex-numbers|math-history|
| 1
|
Ito integral independence of time increments
|
Under what conditions on $f$ are the Itô integrals $\int_{0}^t f(s)\, dW_s $ and $\int_{t}^x f(s)\, dW_s $ independent? I was trying to solve the exercise where I need to prove that $\int_{0}^t f(s)\, dW_s$ is a Gaussian random variable if we look at this as a stochastic process by varying $t$ . If we have independence of the time increments I can construct any collection of $\int_{0}^{t_k} f(s)\, dW_s$ as a linear transformation of a vector with components $\int^{t_{k+1}}_{t_k} f(s)\, dW_s$ , and a linear transformation of a normally distributed vector is normally distributed. It is clear to me that if $f$ is a deterministic elementary function then we have independence of the time increments for the Itô integral. I also know that I can write my Itô integral of the deterministic function as an ${L}^2$ limit of Itô integrals of elementary functions. My proof would be complete if I knew that independence of random variables carries over via $L^2$ limits. This feels wrong, but I am no
|
Denote $X = \int_0^t f(s) dW_s$ , $Y = \int_t^x f(s) dW_s$ , and $X_n, Y_n$ a sequence of approximating simple stochastic integrals for $X$ and $Y$ respectively. Note that $(X_n, Y_n)$ is jointly Gaussian and uncorrelated for each $n$ , hence $X_n$ and $Y_n$ are independent for each $n$ . The preservation of independence in the limit is due simply to convergence in distribution of the vector $(X_n, Y_n)$ . Notice that since $(X_n, Y_n)$ converges to $(X,Y)$ in probability, it does so in distribution, so $$F_{(X,Y)}(x,y) = \lim_n F_{(X,Y)}^{(n)}(x,y) = \lim_n F_X^{(n)}(x)F_Y^{(n)}(y) = F_X(x) F_Y(y)$$ where $F$ denotes a corresponding CDF.
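A Monte-Carlo illustration (not a proof), with the assumed choice $f(s)=s$: discretized integrals over the disjoint intervals $[0,1]$ and $[1,2]$ are built from disjoint (hence independent) Brownian increments, and show the variances predicted by the Itô isometry and near-zero covariance:

```python
# Simulate X ≈ int_0^1 s dW_s and Y ≈ int_1^2 s dW_s on many paths.
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps = 20_000, 100
dt = 1.0 / n_steps
s1 = np.arange(n_steps) * dt        # left endpoints on [0, 1]
s2 = 1.0 + np.arange(n_steps) * dt  # left endpoints on [1, 2]

dW1 = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
dW2 = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))

X = (s1 * dW1).sum(axis=1)
Y = (s2 * dW2).sum(axis=1)

print(np.var(X))           # ≈ int_0^1 s^2 ds = 1/3  (Itô isometry)
print(np.var(Y))           # ≈ int_1^2 s^2 ds = 7/3
print(np.cov(X, Y)[0, 1])  # ≈ 0: the two integrals are uncorrelated
```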
|
|probability-theory|stochastic-integrals|
| 1
|
How to show perpendicularity in a square
|
$ABCD$ is a square so $AB=BC=CD=DA=20$ and $AE=BF=15$ . Since $DAE \sphericalangle =90^\circ$ we can use the Pythagorean theorem so $AD^2+AE^2=DE^2$ and we get that $DE=25$ . We know that $DAE\sphericalangle=ABF\sphericalangle=90^\circ$ , $AD=AB=20$ and $AE=BF=15$ so from (SAS) we get that triangles $DAE\equiv ABF$ . How to show that $AF \perp DE$ , so that I can calculate $AM=\frac{AD\cdot AE}{DE}=12$
|
Since a transformation approach has already been given, here's a vector approach: $$\begin{align} \vec{DE}\cdot \vec{AF} &= (\vec{DA}+\vec{AE}) \cdot (\vec{AB}+\vec{BF}) \\ &= \vec{DA} \cdot \vec{BF} + \vec{AE}\cdot \vec{AB} \\ &= -|DA|\cdot |FB| + |AE|\cdot |AB| \\ &= 0 \tag*{$\blacksquare$} \end{align}$$
|
|geometry|
| 0
|
Problem in a triangle
|
$ABC$ is an isosceles triangle so $AB=BC$ , $D$ is the midpoint of $BC$ so $BD=DC$ and $BED$ and $DFC$ are isosceles triangles such that $BE=ED=DF=FC$ . We know that $BED$ and $DFC$ are isosceles triangles but $BD=DC$ then $BED$ and $DFC$ are equilateral triangles where $BE=ED=DF=FC=BD=DC$ . Since $ABC$ is an isosceles triangle let $ABC \sphericalangle=ACB \sphericalangle=a$ $ABE \sphericalangle = ABC \sphericalangle + DBE \sphericalangle =60^o +a $ $ACF \sphericalangle= ACB \sphericalangle +DCF \sphericalangle= 60^o +a$ We can easily show that triangles $ABE\equiv ACF$ because $AB=BC$ , $ABE \sphericalangle=ACF \sphericalangle$ and $AB=AC$ (SAS) we get that triangles $ABE\equiv ACF$ . We also know that $AE=AF$ how to show that $AM=AN$ ?
|
Note that the given conditions do not imply the two smaller triangles are equilateral. However, they are congruent by SSS criteria, so that $\angle DBE = \angle DCF$ and you already know $\angle ABC = \angle ACB$ . Thus, $\angle ABE = \angle ACF$ , combined with $AB = AC$ and $BE=CF$ gives $\Delta ABE \cong \Delta ACF$ , and the desired result follows.
|
|geometry|triangles|
| 1
|
Interpretation of closure in inverse limit
|
Can one interpret the closure of a set inside an inverse limit as the closure of its individual components? I have not been able to find a source confirming or denying this claim. I have only been able to show that there exist closed sets in the inverse limit that have an open component, making me believe that it is not true (since then we may have equality $S= \overline{S}$ where $S$ has an open component and $\overline{S}$ does not. Though perhaps these are identified in the limit). To be more explicit, suppose $X = \varprojlim X_i$ is an inverse limit of topological spaces, and $S= \varprojlim S_i \subseteq X$ . Does it hold that $\overline{S} = \varprojlim \overline{S_i}$ ? Thanks in advance as ever, M
|
It is not completely clear what you want to know. For notation let $f^j_i: X_j \to X_i$ be the bonding maps of the inverse system and $\pi_i : \varprojlim X_i \to X_i$ the projections. Variant 1. Let $S \subset \varprojlim X_i$ and $S_i = \pi_i(S)$ . Then $f^j_i(S_j) \subset S_i$ and by continuity we get $f^j_i(\overline S_j) \subset \overline{f^j_i( S_j)} \subset \overline S_i$ . One can show that $$\overline S = \varprojlim \overline S_i .$$ As commented by Stephan, a proof can be found in Bourbaki, General Topology, Chapter 1, §4, Corollary to Proposition 9 Also see Proposition 2.5.6 in Engelking, R. (1989). General topology. Sigma series in pure mathematics, 6. Variant 2. Let $S_i \subset X_i$ be subspaces such that $f^j_i(S_j) \subset S_i$ . By continuity we get $f^j_i(\overline S_j) \subset \overline{f^j_i( S_j)} \subset \overline S_i$ . We have $$\varprojlim \overline{S_i} = \{(x_i )\in \prod \overline{S_i} \mid f^j_i(x_j) = x_i \text{ for all } j \ge i \} = \varprojlim X_i \cap
|
|abstract-algebra|general-topology|commutative-algebra|category-theory|limits-colimits|
| 1
|
How to prove the characteristic function of normal distribution?
|
How to prove that the characteristic function of the normal distribution $N(a, σ^2)$ has the form: $$φ (t) = e^{ita - \frac{1}2t^2σ^2}$$ I think that we need to use this formula, but I don't know what to do next: image
|
You can use the relation: $$\varphi(t)=\int_{-\infty}^{+\infty} p(x)e^{ixt}\text{d}x$$ where $p$ is the normal probability density function: $$p(x)=\frac{1}{\sqrt{2\pi\sigma^2}}\exp\left[-\frac{(x-\mu)^2}{2\sigma^2}\right]$$ So you have: $$\varphi(t)=\frac{1}{\sqrt{2\pi\sigma^2}}\int_{\mathbb{R}} e^{-\frac{(x-\mu)^2}{2\sigma^2}}e^{ixt}\text{d}x=\frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{\mu^2}{2\sigma^2}}\int_{\mathbb{R}} e^{-\frac{x^2}{2\sigma^2}+\left(\frac{\mu}{\sigma^2}+it\right)x}\text{d}x= \\ \\ =\frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{\mu^2}{2\sigma^2}}\sqrt{2\pi\sigma^2}e^{\left(\frac{\mu}{\sigma^2}+it\right)^2\frac{\sigma^2}{2}}=e^{-\frac{t^2\sigma^2}{2}+i\mu t}$$ I have used the gaussian integral: $$\int_{\mathbb{R}} e^{-Ax^2+Bx}\text{d}x=\sqrt{\frac{\pi}{A}}e^{B^2/4A}, \quad A>0, \quad B \in \mathbb{C}$$
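The closed form can also be verified against the defining integral by direct quadrature; $\mu = 1.5$ and $\sigma = 0.7$ below are assumed example values:

```python
# Numerical check of phi(t) = exp(i*mu*t - t^2*sigma^2/2) via a Riemann
# sum of the defining integral over [mu - 10 sigma, mu + 10 sigma].
import numpy as np

mu, sigma = 1.5, 0.7
x = np.linspace(mu - 10 * sigma, mu + 10 * sigma, 200_001)
dx = x[1] - x[0]
p = np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)

for t in (0.0, 0.5, 1.0, 2.0):
    phi_num = (p * np.exp(1j * x * t)).sum() * dx
    phi_formula = np.exp(1j * mu * t - t ** 2 * sigma ** 2 / 2)
    assert abs(phi_num - phi_formula) < 1e-6
print("characteristic function matches the closed form")
```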
|
|probability|probability-distributions|normal-distribution|characteristic-functions|
| 0
|
How do I differentiate a partial derivative
|
I have $f(x, y)$ where $y$ is a function of $x$ . So $f(x,y)=f(x, y(x))$ . How do I differentiate the partial derivative $$\frac{d}{dx}\left(\frac{\partial f}{\partial y}\right)=?$$ $$\frac{d}{dy}\left(\frac{\partial f}{\partial x}\right)=?$$
|
When you find yourself getting confused in a situation like this, it is time to get explicit and pedantic. Write $f$ as $f(X,Y)$ , where $ X(x) = x$ $ Y(x) = y(x)$ . Now you can consider the function $g(x) = f(X(x), Y(x))$ , if you want to. But you should write anything you're curious about in terms of these variables, to make sure that what you're asking about is meaningful. Notice that if you do that, the only meaningful lowest-level derivatives are $$\frac{\partial f}{\partial X}, \frac{\partial f}{\partial Y}, \frac{dY}{dx} \textrm{ [or } y'(x)\textrm{], and }\frac{dX}{dx} \textrm{ [or } 1\textrm{]}$$ along with their higher-order relatives. I'm still thinking about it, but I'm starting to suspect that the two things you've written aren't actually meaningful - $\frac{d}{dy}\left(\frac{\partial f}{\partial x}\right)$ looks particularly suspicious. You should provide more detail on where they came from. Added after some comments: Okay, it sounds like you're curious about the function
|
|calculus|derivatives|partial-derivative|
| 0
|
The image of a periodic nonconstant maximal integral curve is an immersed submanifold diffeomorphic to $S^1$
|
This is problem 9-1 from John Lee's Introduction to Smooth Manifolds. Suppose $M$ is a smooth manifold, $X$ is a smooth vector field on $M$ and $\gamma$ is a maximal integral curve of $X$ . Show that the image of $\gamma$ is an immersed submanifold of $M$ , diffeomorphic to $S^1$ if $\gamma$ is periodic and nonconstant. If $\gamma$ is periodic and nonconstant, we have the period $T$ , and we can consider the smooth covering map $\pi:\mathbb{R} \to S^1$ defined by $\pi(t)=e^{2\pi it/T}$ . Since $\gamma$ is constant on the fibers of $\pi$ it descends to a smooth map $\tilde{\gamma}:S^1 \to M$ with $\tilde{\gamma}\circ \pi=\gamma$ . Since $\gamma(t)=\gamma(t')$ iff $t-t'=kT$ for some $k\in \mathbb{Z}$ , $\tilde{\gamma}$ is injective. Hence, $\tilde{\gamma}$ is an injective smooth immersion and its image is an immersed submanifold of $M$ diffeomorphic to $S^1$ using Proposition 5.18 of the book (Images of Immersions as Submanifolds) which is Suppose $M$ is a smooth manifold with or without
|
Quote from Lee: An immersed submanifold $S$ of $M$ is a subset $S \subset M$ endowed with a topology (not necessarily the subspace topology) with respect to which it is a topological manifold (without boundary), and a smooth structure with respect to which the inclusion map $S \hookrightarrow M$ is a smooth immersion. In other words, an immersed submanifold $S$ of $M$ is the image of an injective immersion $i : S' \to M$ . Here is Problem 9-1 : Suppose $M$ is a smooth manifold, $X \in \mathfrak X(M)$ , and $\gamma$ is a maximal integral curve of $X$ . (a) We say $\gamma$ is periodic if there is a number $T > 0$ such that $\gamma(t + T) = \gamma(t)$ for all $t \in \mathbb R$ . Show that exactly one of the following holds: $\quad$ (1) $\gamma$ is constant. $\quad$ (2) $\gamma$ is injective. $\quad$ (3) $\gamma$ is periodic and nonconstant. (b) Show that if $\gamma$ is periodic and nonconstant, then there exists a unique positive number $T$ (called the period of $\gamma$ ) such that $\gam
|
|differential-geometry|smooth-manifolds|submanifold|
| 1
|
Find the solution to the given derivative of the product: $\big(\cos{x}\,\delta(x)\big)^{(k)}$
|
Find the solution to the given derivative of the product $$\big(\cos{x}\,\delta(x)\big)^{(k)}$$ We need to use the differentiation formula for generalized functions: $$\big(D^\alpha f, \phi\big)=(-1)^{|\alpha|}\big(f,D^\alpha\phi\big)$$ with $a(x)=\cos{x}$ , $f=\delta(x)$ , and $\alpha=k$: $$\big(D^k(\cos{x}\,\delta(x)),\phi\big)=(-1)^{k}\int\cos{x}\, D^{(k)}\phi(x)\,\delta(x)\,\text{d}x=\big(D^{(k)}\delta(x),\phi\big)$$ Although the answer is right, I can't make sense of how $\cos{x}$ gives $1$ in order to obtain the right answer. Any ideas on how to get that last integral right? Thanks
|
Find the solution to the given derivative of the product $$\big(\cos{x}\delta(x)\big)^{(k)}$$ $$\big(D^\alpha f,\phi\big)=(-1)^{|\alpha|}(f,D^{\alpha}\phi),$$ with $f(x)=\cos{x}$ , $a(x)=\delta(x)$ and $\phi(x)$ , and $\alpha=k $ $m=1$ Then, by the rule $\big(f(x)a(x),\phi(x)\big)=\big(f(x),a(x)\phi(x)\big)$ $$\big(\cos{x}\delta(x)\big)^{\prime}=\big(\cos{x}'\delta(x),\phi(x)+\cos{x}\delta'(x),\phi(x)\big)$$ $$\big(\cos{x}\delta(x)\big)^{\prime}=\big(\delta'(x),\cos{x}\phi(x)\big)+\big(\delta(x),\cos{x}'\phi(x)\big)$$ Let us split, and consider the first part on the RHS above: $$\big(\delta'(x),\cos{x}\phi(x)\big)\implies$$ $$- (\delta (x), [\cos x \phi(x)]')= - \big(\delta (x), \cos x \phi'(x)) + (\delta (x), \sin x\, \phi(x)\big)= $$ $$-(\delta (x), \cos x \phi'(x))= - \phi'(0)= \big(\delta'(x), \phi(x)\big)=\delta'(x)$$ and consider the second part on the RHS above $$\big(\delta(x),\cos{x}'\phi(x)\big)=\big(\delta'(x),\cos{x}\phi(x)\big)=\delta'(x)$$ We sum up, and obtain: $$\big(\cos{x}\delta
|
|distribution-theory|dirac-delta|
| 0
|
Surface integrals in spherical coordinates
|
If I am given a surface in spherical coordinates $(r,\theta,\varphi)$ , such that it is parametrised as: $$ \begin{align} r&=r(\theta,\varphi)\\ \theta&=\theta\\ \varphi&=\varphi \end{align} $$ What is the area $S$ of such a surface? Or more specifically, can you show how to get the result: $$ S=\int_{0}^{2\pi}\int_{0}^{\pi}\sqrt{r^2+\left(\frac{\partial r}{\partial \theta}\right)^2 + \frac{1}{\sin^2\theta}\left(\frac{\partial r}{\partial \varphi}\right)^2}\;r\sin\theta\;{\rm d}\theta\,{\rm d}\varphi $$ Some definitions that I am using: $k$ -surface : Let $k,N\in\mathbb{N}$ , $k<N$ . $M\subset \mathbb{R}^N$ is called a $k$ -surface , if there exists a non-empty open set $E\subset \mathbb{R}^k$ and a map $\varphi:\mathbb{R}^k\to \mathbb{R}^N$ , such that: (i) $\varphi(E)=M$ , (ii) $\varphi\in C^1(E;\mathbb{R}^N)$ , and (iii) the rank of the Jacobi matrix of $\varphi$ is equal to $k$ everywhere on $E$ . The surface is called simple if $\varphi$ is also injective on $E$ and $\varphi^{-1}$ is continuo
|
In the case where the surface can be expressed in both spherical and Cartesian forms the result can be obtained by transforming the Cartesian integral into spherical coordinates. The theorem below gives a proof of this in terms of surface integrals - to obtain the result for surface areas choose an integrand function $f = 1$ . The conventions used below are the same as described in this answer to the current question and in this answer to How to prove that the area of a shape is independent of the choice of axes? . The term 'HC' means that the boundary of a set in $\mathbb{R}^p$ has 'zero content' [ERA, Definition 24.12, p325]. Theorem Let $S$ be a surface in $\mathbb{R}^3 \setminus \{ \mbox{$z$-axis} \}$ given in spherical coordinates by $S = \{ (r(\theta, \varphi), \theta, \varphi) : (\theta, \varphi) \in G \}$ where $G \subseteq (0, \pi) \times [0, 2\pi)$ is open in $\mathbb{R}^2$ and $r : G \rightarrow \mathbb{R}$ is a positive-valued $C^1$ function. Let $f : S_D \rightarrow \mathb
|
|calculus|integration|surfaces|surface-integrals|
| 0
|
Ito integral independence of time increments
|
For an Ito integral, under what conditions on $f$ are $\int_{0}^t f(s)\, dW_s$ and $\int_{t}^x f(s)\, dW_s$ independent? I was trying to solve an exercise where I need to prove that $\int_{0}^t f(s)\, dW_s$ , viewed as a stochastic process by varying $t$ , is Gaussian. If we have independence of the time increments, I can construct any collection of $\int_{0}^{t_k} f(s)\, dW_s$ as a linear transformation of a vector with components $\int^{t_{k+1}}_{t_k} f(s)\, dW_s$ , and a linear transformation of a normally distributed vector is normally distributed. It is clear to me that if $f$ is a deterministic elementary function, then the time increments of the Ito integral are independent. I also know that I can write the Ito integral of a deterministic function as an $L^2$ limit of Ito integrals of elementary functions. My proof would be complete if I knew that independence of random variables carries over via $L^2$ limits. This feels wrong, but I am no
|
Alternatively to @JoseAvilez's answer (+1): you may have seen that the stochastic integral wrt $f \in L^2[0,\infty)$ is adapted and that for any $\xi\in \mathbb{R}$ and $s<t$ we have $$E\bigg[\exp\bigg(i\xi\int_s^tf(u)dW_u\bigg)\bigg|\mathscr{F}_s\bigg]=\exp\bigg(-\frac{\xi^2}{2}\int_s^tf(u)^2du\bigg)$$ or equivalently, that $\int_s^tf(u)dW_u|\mathscr{F}_s\sim \mathcal{N}(0,\int_s^tf(u)^2du)$ (*). Then if you define for $0<t<T$ $$\varphi(\xi_1,\xi_2):=E\bigg[\exp\bigg(i\xi_1\int_0^tf(u)dW_u+i\xi_2\int_t^Tf(u)dW_u\bigg)\bigg]$$ for $\xi_1,\xi_2\in \mathbb{R}$ then an application of the tower rule yields: $$\begin{aligned} \varphi(\xi_1,\xi_2)&=E\bigg[\exp\bigg(i\xi_1\int_0^tf(u)dW_u\bigg)E\bigg[\exp\bigg(i\xi_2\int_t^Tf(u)dW_u\bigg)\bigg|\mathscr{F}_t\bigg]\bigg]\\ &=\varphi(\xi_1,0)\varphi(0,\xi_2) \end{aligned}$$ But this proves independence by Kac's theorem . (*) Indeed, given simple stochastic integrals $\int_s^t f_n dW$ approximating $\int_s^t f dW$ in $L^2(P)$ (we can choose $f_n$ simple
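The independence can also be observed in a Monte Carlo sketch, approximating both integrals by Itô–Riemann sums over disjoint Brownian increments. The integrand $f(s)=\cos s$ and all parameter names here are arbitrary choices of mine:

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, T, t_split = 20_000, 200, 2.0, 1.0
dt = T / n_steps
s = np.arange(n_steps) * dt                 # left endpoints of the partition
f = np.cos(s)                               # an arbitrary deterministic integrand
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
terms = f * dW                              # f(s_i) * (W_{s_{i+1}} - W_{s_i})
split = int(t_split / dt)
I1 = terms[:, :split].sum(axis=1)           # approximates int_0^t f dW
I2 = terms[:, split:].sum(axis=1)           # approximates int_t^T f dW
corr = np.corrcoef(I1, I2)[0, 1]            # should be near 0
```

Since the two sums use disjoint increments, their sample correlation is zero up to sampling noise of order $1/\sqrt{n_\text{paths}}$.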
|
|probability-theory|stochastic-integrals|
| 0
|
Probability game with throwing coin.
|
Alice and Bob are playing a game. They throw a fair coin repeatedly and write down the results of the throws. Each player has a victory criterion; the one whose criterion appears first wins. Alice wins at the moment the results of the last three throws equal 001. Bob wins when the results of the last three throws equal 010. What is the probability that Alice will win? Is there an easy way to solve this task? I only see a direct way: counting the bit strings with "001" at the end and no earlier substring "010". :(
|
Writing $0$ as H and $1$ as T, Alice's pattern 001 is HHT and Bob's pattern 010 is HTH. Both sequences contain the subsequence HT, so imagine writing down an infinite number of toss outcomes and scanning for HT subsequences. In order for HTH to occur first, an H must not precede HT (else HHT occurs) and an H must succeed it (to create HTH). Hence, the probability that HTH wins is $\frac12 \cdot \frac12 = \frac14$ given there is an HT. In order for HHT to occur first, an H must precede HT, which happens with probability $\frac12$ given there is an HT. Note that these are the only two outcomes that can occur and thus define our sample space: $\frac12 + \frac14 = \frac34$ . Finally, the probability that Alice wins is $\frac12 ÷ \frac34 = \frac23$ and the probability that Bob wins is $\frac14 ÷ \frac34 = \frac13$ . In other words, HHT is twice as likely as HTH to occur first.
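A quick simulation (helper names are mine) supports the $\frac23$ / $\frac13$ split:

```python
import random

def play(coin):
    """Flip a fair coin (0/1) until 001 (Alice) or 010 (Bob) shows up."""
    last = ""
    while True:
        last = (last + str(coin.randint(0, 1)))[-3:]
        if last == "001":
            return "Alice"
        if last == "010":
            return "Bob"

coin = random.Random(42)
n = 100_000
p_alice = sum(play(coin) == "Alice" for _ in range(n)) / n
```

With $10^5$ games the estimate sits within a fraction of a percent of $2/3$.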
|
|probability|combinatorics|
| 0
|
Find the prime which divides $f(n)+f(n+100)$ terms of Fibonacci sequence.
|
Let $f(n)$ denote the $n$ th number in the Fibonacci sequence. Find a two-digit prime $p$ such that $p$ divides $f(n) + f(n+100)$ for all $n$ . To be honest I have no idea how to solve this problem, so I don't really have anything to show for my attempts. I did try to play around with the recursion a bit, but it ended with just a jumbled-up nothing burger. Anyone have any ideas?
|
Let us consider Fibonacci sequence modulo $41$ . One can look up the Pisano period of $41$ , it is equal to $40$ , but actually (the $0$ -th and then) every $20$ -th Fibonacci number is divisible by $41$ . Here are Fibonacci numbers modulo $41$ : $$0, 1,1,2,3,5,8,13,21,34,14,7,21,28,8,36,3,39,1,40,0$$ The next two numbers are $$40, 40.$$ This can be interpreted as $$-1,-1.$$ So the numbers repeat, but with opposite sign. This means that $F_n$ and $F_{n+20+40k}$ are opposite modulo $41$ for any non-negative integer $k$ . Or, $F_n + F_{n+20+40k}$ is divisible by $41$ . Putting $k=2$ we get that $F_n + F_{n+100}$ is divisible by $41$ for any $n$ . Note. How did we guess to consider $41$ ? From the list of Fibonacci numbers and their prime factorizations . I’d like to see the solution without using such data. Edit Here I will prove why $41$ is the only two digit prime with such property. I will use some of the facts about Pisano periods, as well as some other facts about Fibonacci numbers:
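Both claims are easy to verify by machine: working modulo $41$, one can check the sign flip $F_{n+20} \equiv -F_n$ and the divisibility of $F_n + F_{n+100}$ (a quick sketch; helper names are mine):

```python
def fib_mod(count, m):
    """First `count` Fibonacci numbers modulo m, with F_0 = 0, F_1 = 1."""
    out, a, b = [], 0, 1
    for _ in range(count):
        out.append(a)
        a, b = b, (a + b) % m
    return out

F = fib_mod(600, 41)
sign_flip = all((F[n] + F[n + 20]) % 41 == 0 for n in range(200))   # F_{n+20} = -F_n
divisible = all((F[n] + F[n + 100]) % 41 == 0 for n in range(500))
```

Since $100 = 20 + 40\cdot 2$, the second check is exactly the $k=2$ case described above.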
|
|elementary-number-theory|contest-math|recurrence-relations|fibonacci-numbers|
| 1
|
Triple Cross Product Identity for imaginary quaternions $\mathbb{H}_0$
|
Considering $x,y,z \in \mathbb{H}_0$ , $x,y,z=\alpha$ i $+ \beta $ j $+ \gamma $ k , prove the Triple Cross Product Identity: \begin{equation} (x \times y) \times z = y(x \bullet z) - x(y \bullet z) \end{equation} with the cross product defined as usual, and $(\alpha$ i $+ \beta $ j $+ \gamma $ k $)\bullet(\alpha'$ i $+ \beta' $ j $+ \gamma' $ k $)=\alpha\alpha'+\beta\beta'+\gamma\gamma'$ . The proof becomes straightforward if we define wlog $x=\alpha$ i , $y=\alpha$ i $+ \beta $ j , $z=\alpha$ i $+ \beta $ j $+ \gamma $ k , and I need to convert it to the desired form with Gram-Schmidt, but don't know exactly why or how. Any type of help is appreciated.
|
Using $$ x=x_1\,\mathbf{i}+x_2\,\mathbf{j}+x_3\,\mathbf{k}\,,\quad y=y_1\,\mathbf{i}+y_2\,\mathbf{j}+y_3\,\mathbf{k}\,, $$ it is straightforward to show that pure quaternions multiply as $$ xy=-x\bullet y+(x\times y)\,. $$ Since $x\bullet y$ is symmetric in $x,y$ and $x\times y$ is antisymmetric, \begin{align} x\times y&=\frac{xy-yx}{2}\,,\quad x\bullet y=-\frac{xy+yx}{2}\,. \end{align} Then, clearly $$ (x\times y)\times z=\frac{xyz-yxz-zxy+zyx}{4}\,. $$ The RHS term $(x\bullet z)y-(y\bullet z)x$ is equal to this because it is \begin{align}\require{cancel}&-\frac{xz+zx}{2}y+\frac{yz+zy}{2}x\\[2mm]&=-\frac{xz+zx}{4}y+\frac{yz+zy}{4}x-y\frac{xz+zx}{4}+x\frac{yz+zy}{4}\\[2mm]&=\frac{-\cancel{xzy}-zxy+\bcancel{yzx}+zyx-yxz+\bcancel{yzx}+xyz+\cancel{xzy}}{4}\,.\end{align}
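The identity can also be confirmed numerically for the vector parts (plain-tuple helpers of mine, not from the answer):

```python
def cross(x, y):
    return (x[1]*y[2] - x[2]*y[1],
            x[2]*y[0] - x[0]*y[2],
            x[0]*y[1] - x[1]*y[0])

def dot(x, y):
    return sum(p * q for p, q in zip(x, y))

def identity_holds(x, y, z, tol=1e-12):
    """Check (x cross y) cross z == (x.z) y - (y.z) x componentwise."""
    lhs = cross(cross(x, y), z)
    rhs = tuple(dot(x, z) * yi - dot(y, z) * xi for xi, yi in zip(x, y))
    return all(abs(p - q) < tol for p, q in zip(lhs, rhs))

ok = identity_holds((1.0, 2.0, 3.0), (-4.0, 0.5, 2.0), (0.25, -1.0, 5.0))
```

Trying a handful of non-orthogonal triples gives more confidence than the basis vectors alone, since both sides vanish on many special configurations.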
|
|ring-theory|quaternions|division-ring|
| 1
|
$a^2-2b^2=-73696$
|
How to find all pairs $(a,b)$ of natural numbers that satisfy $a^2-2b^2=-73696$ and $\gcd(a,b)=4$ . The only thing I was able to do is to change the equation to $a_1^2-2b_1^2=-4606$ with $\gcd(a_1,b_1)=1$ . This gives that $2|a_1$ , so the equation becomes $2a_2^2-b_1^2=-2303$ , $2a_2=a_1$ . I can't find a useful bound or factorization to finish. Can someone help me, please?
|
There are infinitely many solutions and they come with a linear degree two recursion. I was finding solutions of $x^2 - 2 y^2 = 2303$ . The first observation is a generator for the (oriented) automorphism group of the quadratic form $$ x^2 - 2 y^2 = (3x+4y)^2 - 2(2x+3y)^2 $$ As your target $2303$ is composite we need six families of solutions as $$ x_{n+6} = 3x_n + 4 y_n $$ $$ y_{n+6} = 2x_n + 3 y_n $$ These have parallel recurrences $$x_{n+12} = 6 x_{n+6} - x_n $$ $$y_{n+12} = 6 y_{n+6} - y_n $$ I think I will include the inequality part, this is how the business gets proved. The solutions that my program, below, calls "seeds" are those $x,y >0 $ such that either $3x-4y \leq 0$ or $-2x+3y \leq 0.$ The $3x-4y$ condition causes no restrictions, but the second, $y \leq \frac{2x}{3},$ gives firm upper bounds on both $x,y.$ Finding where the line intersects the hyperbola, we learn $x < 144$ , with the $y$ bound two thirds of that. And there are exactly six such fundamental solutions. let me type in the lists as t
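For reference, a brute-force search over the original equation (the upper bound $3000$ on $b$ is my arbitrary choice) confirms that solutions with $\gcd(a,b)=4$ exist; the smallest is $(a,b)=(152,220)$:

```python
from math import gcd, isqrt

# 2 b^2 >= 73696 forces b >= 192; the upper bound 3000 is arbitrary.
solutions = []
for b in range(192, 3000):
    a2 = 2 * b * b - 73696
    a = isqrt(a2)
    if a > 0 and a * a == a2 and gcd(a, b) == 4:
        solutions.append((a, b))
```

Note the search also meets $(56, 196)$, which solves the equation but has $\gcd = 28$ and is therefore filtered out.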
|
|elementary-number-theory|diophantine-equations|natural-numbers|
| 0
|
Why is this differential injective? Lee Smooth Manifolds Proposition 5.3
|
I'm trying to understand the following line in the proof of Proposition 5.3 in John M. Lee's 'Introduction to Smooth Manifolds': "Because the projection $\pi_M : M \times N \rightarrow M$ satisfies $\pi_M \circ \gamma_f(x) = x$ for each $x \in U$ (where $U$ is an open subset of $M$ ), the composition $d(\pi_M)_{(x, f(x))} \circ d(\gamma_f)_x$ is the identity map on $T_x M$ for each $x \in U$ . Thus $d(\gamma_f)_x$ is injective." This is a silly question (apologies in advance), but why does the composition being the identity imply that $d(\gamma_f)_x$ is injective?
|
This is true in general: If $f\colon X \to Y$ and $g\colon Y \to X$ are two functions satisfying $$ g\circ f = \operatorname{id}_X, $$ then $f$ is injective. If $x,y \in X$ are such that $f(x) = f(y)$ , then of course it is also true that $g(f(x)) = g(f(y))$ and by the above identity, $x = y$ which shows injectivity. In your case, $f = \operatorname{d}\gamma_x$ and $g = \operatorname{d}(\pi_M)_{(x,f(x))}$ .
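As a side note, this general fact is short enough to check formally; here is a Lean 4 sketch (core tactics only, the theorem name is mine):

```lean
-- If g ∘ f = id then f is injective.
theorem injective_of_left_inverse {X Y : Type} {f : X → Y} {g : Y → X}
    (h : ∀ x, g (f x) = x) : ∀ a b, f a = f b → a = b := by
  intro a b hab
  have hg := congrArg g hab   -- g (f a) = g (f b)
  rwa [h a, h b] at hg        -- rewrites both sides, leaving a = b
```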
|
|proof-explanation|differential-topology|
| 1
|
What do the symbols for these binary operations on a set mean?
|
If S = {0,1,2,3,4} and (a,b) is an arbitrary ordered pair such that a ∈ S and b ∈ S , which of the matchings in Exercises 12-15 are binary operations on S ? Construct operation tables. (a,b) ----> a * b = 2a - b (a,b) ----> a ∇ b is the maximum of a and b (a,b) ----> a Δ b = either a or b (a,b) ----> a ∬ b = 2 Background: I'm on the introductory Review section of my dad's old "Algebra and Trigonometry" textbook from 1971 (authors: Kane, Oesterle, Bechtel, Finco) and everything described is great, but these symbols looked like they popped up out of nowhere. Can anyone explain what they are and how they are being used? Are they related to a branch of mathematics that is more advanced than algebra?
|
When the book writes " $a \nabla b$ is the maximum of $a$ and $b$ ", that is a more concise way of writing "Define $\nabla$ as follows: for every $a,b$ , $a \nabla b$ is the maximum of $a$ and $b$ ". Then, the book is asking you to determine whether the map $(a,b) \mapsto a \nabla b$ is a binary operation on $S$ . In other words, $\nabla$ is not an existing symbol that already has an accepted meaning. Rather, the book is defining a new symbol (just for the purposes of the exercise), and then asking you a question about it. I suggest you review the definition of "binary operation on $S$ " carefully. The book is asking you to understand the abstract definition by applying it to some concrete examples. See if you can think of an example of a set $S$ and an operator that is not a binary operation on $S$ .
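To make the closure check concrete, here is a small script for the maps in the exercise (the helper name `is_binary_operation` is mine; $a \,\Delta\, b$ = "either $a$ or $b$" is omitted because it does not even define a function):

```python
S = {0, 1, 2, 3, 4}

def is_binary_operation(op, S):
    """A map on S x S is a binary operation on S iff every output lies in S."""
    return all(op(a, b) in S for a in S for b in S)

star = lambda a, b: 2 * a - b     # (a,b) -> a * b = 2a - b
nabla = lambda a, b: max(a, b)    # (a,b) -> maximum of a and b
const = lambda a, b: 2            # (a,b) -> 2

results = {
    "2a-b": is_binary_operation(star, S),   # False: 2*0 - 1 = -1 is not in S
    "max": is_binary_operation(nabla, S),   # True: the max of two elements of S is in S
    "2": is_binary_operation(const, S),     # True: 2 is in S
}
```

This mirrors the operation-table exercise: a table entry outside $S$ is exactly a failure of closure.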
|
|algebra-precalculus|binary-operations|
| 1
|
Using Surreal Numbers to measure function growth rate - Tetration?
|
In "The Book of Numbers" by John H. Conway, pg. 299, he discusses the application of surreal numbers to quantifying the growth rate of functions. He gives the following correspondences: $$\begin{aligned} \frac{1}{ω}&&\cdots&&&\ln x \\ n&&\cdots&&&x^n&&(n\text{ real}) \\ ω&&\cdots&&&e^x \\ ω + n &&\cdots&&&x^n e^x \\ nω &&\cdots&&&e^{nx} \\ ω^n &&\cdots&&&e^{x^n} \\ ω^ω &&\cdots&&&e^{e^x} \\ ω^{ω^ω} &&\cdots&&&e^{e^{e^x}} \\ ω^{ω^{ω^ω}} &&\cdots&&&e^{e^{e^{e^x}}} \\ \end{aligned}$$ Now here is my question: The next number after an arbitrary number of $ω$ 's should be $\epsilon_0$ . But what should the corresponding function be? Obviously an infinitely nested exponential would diverge. I'm thinking it might be tetration . i.e. $$\begin{aligned} \epsilon_0 &&\cdots&&& {^x}e \\ \end{aligned}$$ But this is a hunch. I'm not sure if this is provable. Thoughts?
|
Conway's correspondences are presented in a very informal way, so it's unclear what one would want to prove there. There is a way to make it formal, but it excludes numbers such as $\varepsilon_0$ . Still, there is a general but conjectural correspondence between surreal numbers and growth rates. In this picture, surreal numbers do not all correspond to growth rates of real-valued functions (of which there are too few, compared to the multiplicity of surreal numbers). Instead they are growth rates of functions defined on surreal numbers themselves. In that correspondence, the number $\varepsilon_0$ corresponds to a function $E$ satisfying $E(x+1) = \exp(E(x))$ for all large $x$ . So it is a form of tetration. If you want to learn more about this correspondence, I suggest you read the survey article On numbers, germs and transseries of Aschenbrenner, vdDries and vdHoeven, or the introduction of my thesis "Hyperseries and surreal numbers". You can find both of them online.
|
|asymptotics|cardinals|surreal-numbers|
| 1
|
Simple block factorization?
|
In the two substructures case for the finite element tearing and interconnecting method (FETI) from "Domain Decomposition: Parallel Multilevel Methods for Elliptic Partial Differential Equations" (pp 104, Smith 1996), the finite element stiffness matrix $A$ can be represented by representing the interior indices $I$ and boundary indices $B$ as shown below: $$ A = \begin{bmatrix} A_{II}^{(1)} & 0 & A_{IB}^{(1)} \\ 0 & A_{II}^{(2)} & A_{IB}^{(2)} \\ A_{BI}^{(1)} & A_{BI}^{(2)} & A_{BB}^{(1)} + A_{BB}^{(2)} \end{bmatrix}. \tag{1} $$ But "An Introduction to Domain Decomposition Methods: Algorithms, Theory, and Parallel Implementation" (pp 132, Dolean 2015) states "a simple computation shows that we have a block factorization" as below: $$ \begin{bmatrix} I & 0 & 0 \\ 0 & I & 0 \\ A_{BI}^{(1)}A_{II}^{(1)^{-1}} & A_{BI}^{(2)}A_{II}^{(2)^{-1}} & I \end{bmatrix} \begin{bmatrix} A_{II}^{(1)} & 0 & 0 \\ 0 & A_{II}^{(2)} & 0 \\ 0 & 0 & S^{(1)} + S^{(2)} \end{bmatrix} \begin{bmatrix} I & 0 & A_{II
|
You could obtain a factorization like this by using "block-row" operations. Begin with $$ A = \begin{bmatrix} A_{II}^{(1)} & 0 & A_{IB}^{(1)} \\ 0 & A_{II}^{(2)} & A_{IB}^{(2)} \\ A_{BI}^{(1)} & A_{BI}^{(2)} & A_{BB}^{(1)} + A_{BB}^{(2)} \end{bmatrix}. $$ If we multiply the first row by $A_{BI}^{(1)}[A_{II}^{(1)}]^{-1}$ (from the left), then the resulting first entry will match the $3,1$ block entry. Thus, the block-row operation $R_3 = R_3 - A_{BI}^{(1)}[A_{II}^{(1)}]^{-1}R_1$ will "zero out" the $3,1$ entry. Writing this out in matrix form, we have $$ \overbrace{\begin{bmatrix} I & 0 & 0\\ 0 & I & 0\\ -A_{BI}^{(1)}[A_{II}^{(1)}]^{-1} & 0 & I \end{bmatrix}}^{E_1} \begin{bmatrix} A_{II}^{(1)} & 0 & A_{IB}^{(1)} \\ 0 & A_{II}^{(2)} & A_{IB}^{(2)} \\ A_{BI}^{(1)} & A_{BI}^{(2)} & A_{BB}^{(1)} + A_{BB}^{(2)} \end{bmatrix} = \\ \begin{bmatrix} A_{II}^{(1)} & 0 & A_{IB}^{(1)} \\ 0 & A_{II}^{(2)} & A_{IB}^{(2)} \\ 0 & A_{BI}^{(2)} & A_{BB}^{(1)} + A_{BB}^{(2)} - A_{BI}^{(1)}[A_{II}^{(1)}]^{-
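The resulting block factorization can be verified numerically on random blocks (a sketch; the block size and the symmetric choice $A_{BI}^{(i)} = (A_{IB}^{(i)})^T$ are my arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
k = 3   # size of every block (arbitrary)

# Random blocks; adding 5*I keeps the interior blocks invertible.
A1 = rng.normal(size=(k, k)) + 5 * np.eye(k)    # A_II^(1)
A2 = rng.normal(size=(k, k)) + 5 * np.eye(k)    # A_II^(2)
B1 = rng.normal(size=(k, k))                    # A_IB^(1)
B2 = rng.normal(size=(k, k))                    # A_IB^(2)
C1, C2 = B1.T, B2.T                             # A_BI^(i) = (A_IB^(i))^T
D = rng.normal(size=(k, k))
D = D + D.T + 5 * np.eye(k)                     # A_BB^(1) + A_BB^(2)

I, Z = np.eye(k), np.zeros((k, k))
A = np.block([[A1, Z, B1], [Z, A2, B2], [C1, C2, D]])

X1 = C1 @ np.linalg.inv(A1)                     # A_BI^(1) [A_II^(1)]^-1
X2 = C2 @ np.linalg.inv(A2)
S = D - X1 @ B1 - X2 @ B2                       # Schur complement S^(1) + S^(2)

L = np.block([[I, Z, Z], [Z, I, Z], [X1, X2, I]])
Dblk = np.block([[A1, Z, Z], [Z, A2, Z], [Z, Z, S]])
U = np.block([[I, Z, np.linalg.inv(A1) @ B1],
              [Z, I, np.linalg.inv(A2) @ B2],
              [Z, Z, I]])
factorization_ok = np.allclose(L @ Dblk @ U, A)
```

Multiplying out $L \cdot D_{\text{blk}} \cdot U$ by blocks reproduces exactly the row operations described above, run in reverse.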
|
|linear-algebra|finite-element-method|
| 1
|
Is my solution to find $f(x)$ exhaustive?
|
To find all $f:\mathbb{N}\rightarrow \mathbb{N}$ such that \begin{gather} \frac{f(x+y)+f( x)}{2x + f( y)} = \frac{2y + f( x)}{f( x + y)+ f( y)} \notag\\ x = 1,\ y = 1 \notag\\ f( 2) + f( 1) = 2 + f( 1) \notag\\[3mm] f( 2) = 2\tag{1}\\[3mm] x= 1,\ y= 2 \notag\\ \frac{f( 3) + f( 1)}{4} = \frac{f( 1) + 4}{f( 3) + 2} \notag\\[3mm] f( 3) = \frac{2f( 1) + 16}{f( 1) + 2 + f( 3)}\tag{2}\\[3mm] x= 2,\ y = 2 \notag\\[3mm] f( 4) = 4\tag{3}\\[3mm] x = 1,\ y = 3 \notag\\[3mm] f( 3) = f( 1) + 2\tag{4} \end{gather} using (4) in (2), $f( 1)=1$ and from (4) $f(3)=3$ so if we take $f(x)=x$ , we see it satisfies the given problem constraint. Therefore, $f(x)=x$ is one such function. Plugging $f( x) = x$ in the given constraint, $$ \frac{x + y + x}{2x + y} =\frac{2y + x}{x + y + y} = 1 $$ Therefore, $f( x) = x$ is a solution. In the original question, they have said find "all" $f(x)$ . However, I have only found one such $f(x)$ . Are there any $f(x)$ other than this one as an acceptable solution? PS: Due
|
We wish to find all $f: \mathbb{N} \to \mathbb{N}$ with the property that for all $x,y \in \mathbb{N}$ , $$\frac{f(x+y)+f(x)}{2x+f(y)} = \frac{2y+f(x)}{f(x+y)+f(y)}. \tag{1}$$ You have already verified that $f(x)=x$ satisfies $(1)$ . We now wish to determine if there are any additional solutions. We can readily show that such a $f$ must fix all positive even numbers by noting that $(1)$ with $y=x$ implies $f(2x)=2x$ for all $x \in \mathbb{N}$ . In order to conclude that $f(x)=x$ is the unique solution to $(1)$ , we only need show that $f$ must additionally fix all positive odd numbers. For this we can take $y=1$ in $(1)$ to find that for all $x \in \mathbb{N}$ , we require that $$\frac{f(x+1)+f(x)}{2x+f(1)} = \frac{2+f(x)}{f(x+1)+f(1)}. \tag{2}$$ Applying $(2)$ specifically to arbitrary positive even $x$ and recalling that $f(1)=1$ as you have correctly shown, $$\frac{f(x+1)+x}{2x+1} = \frac{2+x}{f(x+1)+1} \tag{3}.$$ This can be rearranged to show that $f(x+1)=x+1$ , thereby showing th
|
|solution-verification|functional-equations|
| 1
|
Matrix construction of Dorroh extension
|
Let $R$ be a commutative unital ring and $S$ an associative rng (nonunital) that is an $R$ -module (= $(R,R)$ -bimodule with $\forall s \in S, r \in R$ : $rs = sr$ ). The Dorroh extension is a well-known construction that "adjoins unity to $S$ using $R$ ", i.e. constructs a unital ring $T$ with $S \lhd T$ and $R \subset T$ . The additive group of $T$ is $R \oplus S$ , and the multiplication is defined as $(r,s)(r',s') = (rr',rs'+sr'+ss')$ . This looks like a "bilinear form". Question : is there a "matrix construction" of $T$ , like that of trivial extensions : $\{\begin{pmatrix} r & s \\ & r\end{pmatrix}: r \in R, s \in S\}$ , and triangular rings : $\{\begin{pmatrix} r & m \\ & s\end{pmatrix}: r \in R, m \in M, s \in S\}$ ? I.e. is the ring $T$ isomorphic to $\{(a_{ij})\}$ , where $a_{ij} \in R$ whenever $(i,j) \in M$ and $a_{ij} \in S$ if $(i,j)$ is in the complement of $M$ in $\{1,\ldots,n\}^2$ , with some (perhaps linear) relations pre-imposed on $a_{ij}$ 's?
|
With $\phi:(a,n)\mapsto \begin{bmatrix}a&0\\n&a+n\end{bmatrix}$ you have an injective additive map that also satisfies $$ \phi((a,n))\phi((b,m))=\begin{bmatrix}a&0\\n&a+n\end{bmatrix}\begin{bmatrix}b&0\\m&b+m\end{bmatrix}=\\\begin{bmatrix}ab&0\\nb+am+nm&ab+am+nb+nm\end{bmatrix}=\\\begin{bmatrix}ab&0\\am+nb+nm&ab+(am+nb+nm)\end{bmatrix}=\phi((a,n)(b,m)) $$ It's perhaps not as tidy as the other two, but it's not far off.
|
|abstract-algebra|matrices|ring-theory|rngs|
| 1
|
Find all integer solutions of $n = 1 + a + b + c$, with $a,b,c$ distinct divisors of $n$, and prove there are no more solutions.
|
This is a question derived from the 17 camels problem . I have been asked to find every integer $n$ satisfying $$ n = 1+a+b+c, $$ where $a, b, c$ are distinct divisors of $n$ greater than one. By brute-forcing I got the answer $n\in\{12,18,20,24,42\}$ : \begin{align} 12 &= 1+2+3+6, \\ 18 &= 1+2+6+9, \\ 20 &= 1+4+5+10, \\ 24 &= 1+3+8+12, \\ 42 &= 1+6+14+21. \end{align} Now I have to prove these are the only solutions. How can I aproach this? Thanks!
|
Without loss of generality let $1 < c < b < a < n$ . Dividing by $n$ gives $$1=\frac1{a'}+\frac1{b'}+\frac1{c'}+\frac1n$$ where $a'=\frac na$ is a natural number, similarly for $b'$ and $c'$ , and $1 < a' < b' < c'$ . In particular a triple $(a'+k,b'+l,c'+m)$ cannot lead to a solution for any nonnegative integers $k,l,m$ if $1-\frac1{a'}-\frac1{b'}-\frac1{c'}\ge\frac1{c'}$ as this would lead to $c\le1$ . $(4,5,6)$ and $(3,4,6)$ fail this test and $(3,4,5)$ does not lead to a solution, so any solution must have $a'=2$ . $(2,6,7)$ fails the test, so $b'\in\{3,4,5\}$ ; $(2,3,12),(2,4,8),(2,5,7)$ also fail the test, so ultimately we have a finite number of cases to check: $$(2,3,c'),\ 4\le c'\le 11$$ $$(2,4,5),(2,4,6),(2,4,7),(2,5,6)$$ $(2,3,4)$ to $(2,3,6)$ would lead to negative or undefined $n$ , so they are rejected. $(2,3,7)$ gives $n=42$ ( $1=\frac12+\frac13+\frac17+\frac1{42}$ ). $(2,3,8)$ gives $n=24$ , $(2,3,9)$ gives $n=18$ , $(2,4,5)$ gives $n=20$ , $(2,4,6)$ gives $n=12$ and the other cases do not lead to solutions.
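The case analysis agrees with a direct computer search (the bound $1000$ and the helper name are my arbitrary choices):

```python
from itertools import combinations

def solutions(limit):
    """All n <= limit with n = 1 + a + b + c for distinct divisors a, b, c > 1 of n."""
    found = set()
    for n in range(2, limit + 1):
        divs = [d for d in range(2, n) if n % d == 0]
        if any(sum(t) == n - 1 for t in combinations(divs, 3)):
            found.add(n)
    return found

sols = solutions(1000)
```

The search returns exactly the five values from the question, matching the finite case check above.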
|
|elementary-number-theory|
| 1
|
$a^2-2b^2=-73696$
|
How to find all pairs $(a,b)$ of natural numbers that satisfy $a^2-2b^2=-73696$ and $\gcd(a,b)=4$ . The only thing I was able to do is to change the equation to $a_1^2-2b_1^2=-4606$ with $\gcd(a_1,b_1)=1$ . This gives that $2|a_1$ , so the equation becomes $2a_2^2-b_1^2=-2303$ , $2a_2=a_1$ . I can't find a useful bound or factorization to finish. Can someone help me, please?
|
Use the famous identity $\begin{array}\\ (u^2-nv^2)(x^2-ny^2) &=u^2x^2-nu^2y^2-nv^2x^2+n^2v^2y^2\\ &=u^2x^2+n^2v^2y^2-n(u^2y^2+v^2x^2)\\ &=u^2x^2+2nuvxy+n^2v^2y^2-n(u^2y^2+2uvxy+v^2x^2)\\ &=(ux+nvy)^2-n(uy+vx)^2\\ \end{array} $ Therefore, if there is a solution to $x^2-ny^2=1$ , then, if $u^2-nv^2=d$ , there are an infinite number of solutions to $u_m^2-nv_m^2=d$ starting with $u_1=u, v_1=v$ and $u_{m+1}=u_mx+nv_my, v_{m+1}=u_my+v_mx $ .
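Here $n=2$, $d=-73696$, and $(x,y)=(3,2)$ solves $x^2-2y^2=1$. Starting from the seed $(u,v)=(152,220)$ (found by a quick search of mine, with $\gcd(152,220)=4$; the map $(u,v)\mapsto(3u+4v,\,2u+3v)$ lies in $\mathrm{SL}_2(\mathbb{Z})$, so it preserves the gcd), the recursion produces infinitely many valid pairs:

```python
from math import gcd

u, v = 152, 220                    # seed: 152^2 - 2*220^2 = -73696, gcd(152, 220) = 4
orbit = []
for _ in range(5):
    orbit.append((u, v))
    u, v = 3 * u + 4 * v, 2 * u + 3 * v   # u_{m+1} = ux + nvy, v_{m+1} = uy + vx
```

The first step of the orbit is $(1336, 964)$, and every pair keeps both the target value and the gcd.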
|
|elementary-number-theory|diophantine-equations|natural-numbers|
| 0
|
Calculating $ \int_{-\infty}^{\infty}\frac{1}{(x^2+a^2)^2}dx $
|
I'm asked to prove that $ \int_{-\infty}^{\infty}\frac{1}{(x^2+a^2)^2}dx = \frac{\pi}{2a^3}$ Using partial fraction decomposition on $ \frac{1}{(z^2+a^2)^2} $ , and defining a contour $ \gamma_R $ to be the line segment between −R and R, together with a semicircle around $0$ of radius R. Using Cauchy's integral formula and the winding number of the appropriate points, I solved it. However, it was a lot of algebra. I wanted to ask, are there different ways to prove the statement?
|
Let $$ I(a)=\int_{-\infty}^{\infty}\frac{1}{x^2+a^2}dx = \frac{\pi}{a}$$ and then $$ I'(a)=\int_{-\infty}^{\infty}\frac{-2a}{(x^2+a^2)^2}dx = -\frac{\pi}{a^2}$$ Hence $$ \int_{-\infty}^{\infty}\frac{1}{(x^2+a^2)^2}dx = \frac{\pi}{2a^3}.$$
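A numeric spot check of the result for, say, $a=2$ (midpoint rule on a large finite window; the neglected tail beyond $\pm R$ is of order $R^{-3}$):

```python
import math

def integral(a, half_width=200.0, n=400_000):
    """Midpoint rule for the integral of 1/(x^2 + a^2)^2 over [-R, R]."""
    h = 2.0 * half_width / n
    total = 0.0
    for i in range(n):
        x = -half_width + (i + 0.5) * h
        total += h / (x * x + a * a) ** 2
    return total

approx = integral(2.0)
exact = math.pi / (2.0 * 2.0 ** 3)   # pi / (2 a^3) with a = 2
```

Both truncation and discretization errors here are far below the tolerance being tested.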
|
|complex-analysis|
| 1
|
How to transform this expression to a numerically stable form?
|
I have this function $$f(x, t)=\frac{\left(1+x\right)^{1-t}-1}{1-t}$$ where $x \ge 0$ and $t \ge 0$ . I want to use it in a neural network, and thus need it to be differentiable. While it has a discontinuity at $t = 1$ , the limit $\lim_{t \rightarrow 1}{\frac{\left(1+x\right)^{1-t}-1}{1-t}}=\log(1+x)$ is well defined, so I patch up $f$ like this: $$ f(x, t) = \left\{ \begin{aligned} &\frac{\left(1+x\right)^{1-t}-1}{1-t}&, t \ne 1 \\ &\log(1+x)&, t = 1 \end{aligned} \right. $$ For infinite-precision math we are done, but for floats this is still numerically unstable around $t = 1$ . What can be done about that? Can the expression be transformed somehow to avoid it?
|
When computing $1-t$ for $t \to 1$ in finite-precision floating-point arithmetic, the result is exact per Sterbenz lemma . This leaves us with problems of subtractive cancellation. These can be addressed by making use of $\mathrm{expm1}(x) := \exp(x)-1$ and $\mathrm{log1p}(x) := \log (1+x)$ . Most programming environments offer standard math functions with exactly those names, that is, expm1() and log1p() . Based on these, one computes: $$ f(x, t) = \left\{ \begin{aligned} &\frac{\mathrm{expm1}((1-t)\mathrm{log1p}(x))}{1-t}&, t \ne 1 \\ &\mathrm{log1p}(x)&, t = 1 \end{aligned} \right. $$
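The branch above translates directly into code (Python's standard `math` module provides `expm1` and `log1p`):

```python
import math

def f(x, t):
    """Stable evaluation of ((1 + x)**(1 - t) - 1) / (1 - t)."""
    s = 1.0 - t                     # exact for t near 1 (Sterbenz lemma)
    L = math.log1p(x)               # log(1 + x) without cancellation
    if s == 0.0:                    # removable singularity at t = 1
        return L
    return math.expm1(s * L) / s    # expm1 avoids cancellation for small s*L

# The naive formula loses digits as t -> 1; the stable one does not.
naive = lambda x, t: ((1.0 + x) ** (1.0 - t) - 1.0) / (1.0 - t)
```

For $t$ within about $10^{-12}$ of $1$, `naive` returns only a few correct digits (or divides zero by zero), while `f` stays within rounding error of $\log(1+x)$.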
|
|continuity|numerical-methods|numerical-optimization|floating-point|
| 1
|
How to prove $a^{b}+b^{a} \geq 1$ for all $a, b \geq 0$?
|
It is clear if $a \geq 1$ or $b \geq 1$ , but how can I show it when $0 < a, b < 1$ ? I want hints.
|
If $a \ge 1$ or $b \ge 1$ the claim is immediate, so assume $0 < a < 1$ and $0 < b < 1$ . By Bernoulli's inequality, for $0 < r < 1$ and $x > -1$ we have $(1+x)^r \le 1 + rx$ . Applying this with $r = 1-b$ and $x = a-1$ gives $$a^{1-b} = \big(1 + (a-1)\big)^{1-b} \le 1 + (1-b)(a-1) = a + b - ab$$ and hence $$a^b = \frac{a}{a^{1-b}} \ge \frac{a}{a+b-ab}.$$ By symmetry, $b^a \ge \dfrac{b}{a+b-ab}$ . Adding the two bounds, $$a^b + b^a \ge \frac{a+b}{a+b-ab} > 1,$$ where the last inequality holds because $0 < a+b-ab < a+b$ (note $a+b-ab = 1-(1-a)(1-b) > 0$ and $ab > 0$ ).
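A quick grid sanity check of the strict inequality on the open square (step size $1/200$, chosen arbitrarily):

```python
# Evaluate a**b + b**a on an interior grid of (0,1) x (0,1).
step = 200
minimum = min(
    (i / step) ** (j / step) + (j / step) ** (i / step)
    for i in range(1, step) for j in range(1, step)
)
```

The smallest grid value occurs near the corners where one variable is close to $0$ and the other close to $1$, and it still exceeds $1$.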
|
|inequality|
| 1
|
Where to find proof for the remainder formula of the interpolation in two variables
|
Professor showed this result in the lecture without giving any proof (after proving the existence of the interpolating polynomial in two variables). I've been trying to prove it myself or find a book where is proved but I failed. This is the theorem: Let $$ x_0 < x_1 < \cdots < x_n \in [a,b], \qquad y_0 < y_1 < \cdots < y_m \in [c,d], $$ $$ M = \{ (x_i, y_j) : 0 \leq i \leq n, 0 \leq j \leq m \}, \quad f \in \mathcal{C}^{m + n + 2}([a,b] \times [c,d]), $$ $$ p \in \Pi_{n, m} : p(x_i, y_j) = f(x_i, y_j) \quad \forall 0 \leq i \leq n, 0 \leq j \leq m. $$ Then, for all $(x, y) \in (x_0, x_n) \times (y_0, y_m)$ exist $\xi, \xi' \in (x_0, x_n), \eta, \eta' \in (y_0, y_m)$ such that $$ f(x, y) - p(x, y) = \frac{1}{(n + 1)!} \frac{\partial^{n + 1} f(\xi, y)}{\partial x^{n + 1}} \prod_{i = 0}^n (x - x_i) $$ $$ + \frac{1}{(m + 1)!} \frac{\partial^{m + 1} f(x, \eta)}{\partial y^{m + 1}} \prod_{j = 0}^m (y - y_j) $$ $$ - \frac{1}{(n + 1)! (m + 1)!} \frac{\partial^{n + m + 2} f(\xi', \eta')}{\partial x^{n + 1} \partial y^{m + 1}} \prod_{i = 0}^n (x - x_i) \prod_{j = 0}^m (y
|
Too long for comment and also don’t have enough reputation I had hoped that you would have started the calculations, especially as $f(\xi,y)$ occurs on the right side of $$\dfrac{\partial^{n+1} f(\xi,y)}{\partial^{n+1} x}$$ which makes the term $$ \dfrac{\partial^{n+1} f(\xi,y)}{\partial^{n+1} x}\prod_{i=0}^n(x-x_i) $$ confusing and I have to guess whether this means $$ f(\xi,y)\cdot\dfrac{\partial^{n+1} }{\partial^{n+1} x}\prod_{i=0}^n(x-x_i) $$ Anyway, your formula is a direct consequence of the multivariate Taylor expansion, here the bivariate version in two variables. Of course, I could type what my textbook says and use very likely a different notation than your textbook uses, so let me instead directly ask you what you know about Taylor's theorem for functions $f\, : \,\mathbb{R}^2\longrightarrow \mathbb{R}\;$ ?
|
|numerical-methods|interpolation|approximation-theory|lagrange-interpolation|multivariate-polynomial|
| 0
|
Understanding the derivation of the equation for envelopes.
|
Given a family of curves, an envelope is defined as a curve that is tangent to every curve in the family of curves at some point on it. To derive the equation for it, the first step is to parameterize the envelope with parameter $t$ , which also indexes which curve in the family. But what if there is a curve tangent to the envelope at two or more points? Which points should $x(t),y(t)$ be then? And my second question is whether the equations $F(x,y,p)=0$ and $F_p(x,y,p)=0$ are just a necessary condition for a point to be on the envelope, or actually both necessary and sufficient. Edit: Now I know that there are actually three definitions: synthetic: the union of characteristic points (aka "limit points" between two nearby curves) impredicative: a curve that is tangent to every curve in the family of curves at every point on it analytic: the union of the solution set of $F(x,y;p)=0$ and $F_p(x,y;p)=0$ of every $p$ And they're somehow equivalent. (source: https://www.emis.de
|
There is in projective geometry a notion of dual, that extends to this situation for the special case of a family of lines. The dual projective plane of lines in the usual projective plane, is just that, a projective plane. The dual curve is the envelope seen as a curve in the dual projective plane. The dual curve of a curve with a bitangent has a corresponding node. Traversing a nodal curve you come once to the node for one value of $t$ and back to it at a different value of $t$ . If a line of your envelope touches a curve twice, it's the same story. To continue with a family of lines, let a specific one be $xt^2+yt+1=0,$ then the second equation is $2tx+y=0$ which on eliminating $t$ gives $y^2-4x=0.$ For the more general family of lines $p(t)x+q(t)y+1=0, p'(t) x+ q'(t) y=0,$ the usual formula for the dual emerges from solving the linear system: $(x,y)=(\frac{-q'}{pq'-p'q},\frac{p'}{pq'-p'q}).$ To your second question, parametrized curves are irreducible and the elimination ma
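A small check of the concrete family $F(x,y;t)=xt^2+yt+1$ above: solving $F=0$ together with $F_t=2xt+y=0$ gives $x=1/t^2$, $y=-2/t$, which should satisfy $y^2-4x=0$ for every $t\neq 0$ (the helper name is mine):

```python
def envelope_point(t):
    """Solve F = x t^2 + y t + 1 = 0 and F_t = 2 x t + y = 0 for (x, y)."""
    return 1.0 / t**2, -2.0 / t

# The residual of the envelope equation y^2 - 4x = 0 at several parameters.
residuals = [abs(y * y - 4 * x)
             for x, y in (envelope_point(t) for t in (0.5, 1.0, 2.0, -3.0, 7.5))]
```

Each residual vanishes up to floating-point rounding, matching the elimination carried out symbolically in the answer.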
|
|analytic-geometry|envelope|
| 1
|