Finding the general term of a sequence satisfying given conditions (with trigonometric functions)
I want to ask how one would find a formula for $a_n$ (closed-form or recursive) if we know: $$ \alpha_n = \arctan\left(\frac{a_{n+1}}{a_n} \right) $$ $$ \sin{\alpha_n} = \sin{\alpha_{n-1}} \cdot \frac{ a_n }{a_{n+1}} $$ This also holds for the first elements; the initial condition is $$ \alpha_1 = \arctan{ \left( \frac{a_2}{a_1} \right)}$$ It does not matter what exact values $a_1 $ and $a_2$ hold as long as they satisfy this condition. When I try to plug these conditions into each other, I get $ a_{n+1} = a_{n+1} $, which is obviously true but also not helpful at all.
I doubt there are explicit closed forms for $a_k$ and $\alpha_k$, but you can find recursive formulas. The two relations combine as follows: $$ \frac{a_{n+1}}{a_n} = \frac{\sin\alpha_{n-1}}{\sin\alpha_n} = \tan\alpha_n, $$ hence $$ \sin\alpha_{n-1}\cos\alpha_n = \sin^2\alpha_n = 1 - \cos^2\alpha_n, $$ which is a quadratic equation for $\cos\alpha_n$, solved by $$ \alpha_n^\pm = \arccos\left(\frac{-\sin\alpha_{n-1} \pm \sqrt{4 + \sin^2\alpha_{n-1}}}{2}\right) $$ (only the root with the $+$ sign lies in $[-1,1]$ when $\sin\alpha_{n-1}>0$). Once you have computed the values of $\alpha_n$ recursively up to a given step, you can calculate $a_n$ with the following recursion: $$ a_{n+1} = a_n\tan\alpha_n = a_1 \prod_{k=1}^n \tan\alpha_k $$
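A numeric sanity check of the recursion (a sketch; the starting values $a_1, a_2$ are arbitrary positive choices, with $\alpha_1$ then fixed by the entry condition):

```python
import math

# Arbitrary starting values (any a1, a2 > 0 work; alpha_1 is then fixed
# by the entry condition alpha_1 = arctan(a2 / a1)).
a = [1.0, 2.0]                      # a_1, a_2
alpha = [math.atan(a[1] / a[0])]    # alpha_1

# Iterate: cos(alpha_n) solves c^2 + sin(alpha_{n-1}) c - 1 = 0;
# only the '+' root lies in [-1, 1], so we take that branch.
for _ in range(20):
    s = math.sin(alpha[-1])
    c = (-s + math.sqrt(4 + s * s)) / 2
    alpha.append(math.acos(c))
    a.append(a[-1] * math.tan(alpha[-1]))

# Check the original relation sin(alpha_n) = sin(alpha_{n-1}) * a_n / a_{n+1}.
for n in range(1, len(alpha)):
    lhs = math.sin(alpha[n])
    rhs = math.sin(alpha[n - 1]) * a[n] / a[n + 1]
    assert abs(lhs - rhs) < 1e-9
```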
|sequences-and-series|functions|
0
Does $\lim_{x \searrow 0} \frac{\int_{x^4}^{2x^4} f(y) \ dy}{x^4}$ converge?
I have problems with this exercise: Let $f: [a, b] \to \mathbb{R}$ be continuous. Does $$ \lim_{x \searrow 0} \frac{\int_{x^4}^{2x^4} f(y) \ dy}{x^4} $$ exist? If so, determine the limit's value. I actually have no idea how to approach this; any help is appreciated. I think I am supposed to use the special case of the mean value theorem that states $\exists \xi \in [a, b]: f(\xi)(b-a) = \int_a^b f(y) \ dy$ . But since $a$ and $b$ are arbitrary real numbers, I cannot use the theorem right away. Correct? What can I say about the integration limits $x^4$ and $2x^4$ ? Or is this even important? Because otherwise I could just say $$ \lim_{x \searrow 0} \frac{\int_{x^4}^{2x^4} f(y) \ dy}{x^4} = \lim_{x \searrow 0} f(\xi)(2x^4-x^4)\frac{1}{x^4} = f(\xi) $$ for some $\xi \in [a, b]$ ?
Hint. Your notation is inconsistent ( $x\to 0^+$ , yet $f:[a,b]\to \mathbb R$ ...). Let us suppose that $0\in [a,b)$ (I leave it to you to fix this detail). Let $F(x)=\int_a^xf(t)\,\mathrm d t$ . Then $$\lim_{x\to 0^+}\frac{1}{x^4}\int_{x^4}^{2x^4}f(t)\,\mathrm d t=\lim_{x\to 0^+}\frac{F(2x^4)-F(x^4)}{x^4}=\lim_{u\to 0^+}\frac{F(2u)-F(u)}{u}=...$$ I leave the conclusion to you.
|real-analysis|integration|limits|analysis|
0
$\lim_{x\to 1} \sqrt{x^2 + 8} = 3$ -- Proof Via Epsilon-Delta Definition of a Limit
This problem was originally asked here: $\lim_{x\to 1} \sqrt{x^2 + 8} = 3$ prove this using epsilon delta . It got closed and is no longer accepting answers. Prove using epsilon-delta that $$\lim_{x\to 1} \sqrt{x^2 + 8} = 3$$ Definitions: $\lim_{x\to c}f(x)=L\iff\forall\epsilon>0,\exists\delta>0$ s.t. $0<|x-c|<\delta \implies |f(x)-L|<\epsilon$, as defined by Gregory Hartman et al. of the Virginia Military Institute in LibreTexts Mathematics $_1$ Context For Why This Question Is Important: Epsilon-delta proofs of limits form the foundations of rigorous calculus as we know it. The rigor and importance of these fundamental limits should be demonstrated to students. By giving me guidance on how to correctly evaluate this limit, users provide me (and future readers of this post) with a token of knowledge. Additional Context For the Problem: I have searched for the answer, and found this video on YouTube $_2$ . https://www.youtube.com/watch?v=yC8Y50H6kw8 . However, the person does not provide an example where the inequality is not easily factorable.
Given $\epsilon>0$ , instead of providing some $\delta$ out of the blue and checking that it "works", as textbooks too often do, we shall look for the "best" $\delta$ (i.e. the largest one which "works"). We will thus experience concretely the fact that the smaller we take $\epsilon$ , the smaller $\delta$ has to be chosen. Let us take advantage of the monotonicity of $x\mapsto\sqrt{x^2+8}$ for $x\ge0$ . Assuming $|h|\le\delta\le1$ , in order to get $|\sqrt{(1+h)^2+8}-3|\le\epsilon$ , we just need $$3-\epsilon\le\sqrt{(1-\delta)^2+8}\quad\text{and}\quad\sqrt{(1+\delta)^2+8}\le3+\epsilon,$$ i.e., assuming moreover $3-\epsilon\ge0$ : $$(3-\epsilon)^2-8\le(1-\delta)^2\quad\text{and}\quad(1+\delta)^2\le(3+\epsilon)^2-8,$$ i.e., if we even have $3-\epsilon\ge\sqrt8$ : $$\delta\le\delta_0(\epsilon):=1-\sqrt{(3-\epsilon)^2-8}\quad\text{and}\quad\delta\le\delta_1(\epsilon):=\sqrt{(3+\epsilon)^2-8}-1.$$ Note that $\delta_0(\epsilon)$ and $\delta_1(\epsilon)$ are $>0$ , and tend to $0$ as $\epsilon\to0$.
|calculus|limits|epsilon-delta|
1
Mutually recursive definitions of terms to a nonrecursive definition with the Y combinator.
Can't solve this task: Let there be a mutually recursive definition of the terms ${foo}$ and ${bar}$ . In general, it can be written as $$ {foo} = P {foo} {bar} $$ $$ {bar} = Q {foo} {bar} $$ Here $P$ and $Q$ are some terms that contain neither ${foo}$ nor ${bar}$ . Using the $Y$ -combinator, find nonrecursive definitions for ${foo}$ and ${bar}$ . Try to find the most "compact" solution, with the smallest number of $Y$ -combinators possible My attempts I tried to explicitly write $foobar = PfoobarQfoobar$ , but it didn't work. I tried to start an auxiliary $bar'=\lambda f.Qf(bar'f)$ , and through it get a nonrecursive form with $Y$ combinator. I tried to find the dependence by explicitly describing $foo=Pfoobar=PPfoobarbar=...=P...Pfoobar...bar$ .
Disclaimer: I have been reading about lambda calculus and combinatory logic only for some months. But I think that I have an idea for this question. If you find any problems, please comment. $$\begin{align*} \text{foo} &= \text{P foo bar}\\ &=\text{P foo (Q foo bar)}\\ &=\text{P foo (Q foo (Q foo (...)))}\\ &=\text{P foo (Y (Q foo))}\\ &=\text{P foo (S (K Y) Q foo)}\\ &=\text{S P (S (K Y) Q) foo}\\ &=\text{Y (S P (S (K Y) Q))} \end{align*}$$ Similarly we can extract a definition for $\text{bar}$
|lambda-calculus|
1
Mutually recursive definitions of terms to a nonrecursive definition with the Y combinator.
Can't solve this task: Let there be a mutually recursive definition of the terms ${foo}$ and ${bar}$ . In general, it can be written as $$ {foo} = P {foo} {bar} $$ $$ {bar} = Q {foo} {bar} $$ Here $P$ and $Q$ are some terms that contain neither ${foo}$ nor ${bar}$ . Using the $Y$ -combinator, find nonrecursive definitions for ${foo}$ and ${bar}$ . Try to find the most "compact" solution, with the smallest number of $Y$ -combinators possible My attempts I tried to explicitly write $foobar = PfoobarQfoobar$ , but it didn't work. I tried to start an auxiliary $bar'=\lambda f.Qf(bar'f)$ , and through it get a nonrecursive form with $Y$ combinator. I tried to find the dependence by explicitly describing $foo=Pfoobar=PPfoobarbar=...=P...Pfoobar...bar$ .
Mutual recursion can always be merged. If $f,g$ are mutually recursive, then define a third function $h = \lambda x. x f g$ . Now you can express $f = h (\lambda p q. p)$ (Exercise: express $g$ ), and write a recursive equation using $h$ alone. Another way to solve this is to solve for $f$ assuming you already know $g$ . This gives you an expression $f = \dots g\dots$ . Now substitute this into the equation for $g$ and you get a recursive equation for $g$ alone, which you can solve using Y.
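A sketch of the merging trick in Python: since Python is strict, a Z (eager Y) combinator is used, and even/odd play the roles of foo/bar; `FST`/`SND` are the selectors $(\lambda p q.\, p)$ and $(\lambda p q.\, q)$:

```python
# Church-style selectors playing the role of (lambda p q. p) and (lambda p q. q).
FST = lambda p: lambda q: p
SND = lambda p: lambda q: q

# Strict fixed-point (Z) combinator -- the eager-language variant of Y.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Merge the mutually recursive pair (even, odd) into one function h that
# takes a selector: h FST behaves like 'even', h SND like 'odd'.
h = Z(lambda self: lambda sel:
      sel(lambda n: True if n == 0 else self(SND)(n - 1))    # 'even'
         (lambda n: False if n == 0 else self(FST)(n - 1)))  # 'odd'

is_even, is_odd = h(FST), h(SND)
assert is_even(10) and not is_even(7) and is_odd(7)
```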
|lambda-calculus|
0
List all topologies of $X = \{1,2,3\}$ up to homeomorphism.
Problem 2-2 in Lee's Introduction to Topological Manifolds reads: Let $X = \{1, 2, 3\}$ . Give a list of topologies on $X$ such that every topology on $X$ is homeomorphic to exactly one on your list. Below is my attempted solution. Is there a more efficient way to solve a problem like this one? My method felt a bit tedious. The topologies can be partitioned into classes according to how many 1- and 2-element sets they contain. Let us write $(m, n) \in \{0, \dotsc, 3\}^2$ to represent each such class, with $m$ being the number of 1-element sets (singletons) and $n$ the number of 2-element sets in the topology, and go through each of them. Note that homeomorphisms preserve class membership. (0,0) This class contains only the trivial topology $\{X, \varnothing\}$ . (1,0) If we add a single singleton to the trivial topology we obtain a topology homeomorphic to $\{X, \varnothing, \{1\}\}$ . (2,0) This class is empty, since if we have two singletons we must also have the 2-element set given
I suspect the repeated string of symbols $\varnothing\, \{1\}$ should be $\varnothing, \{1\}$ . Otherwise it looks fine to me. For a discussion of the difficulties of this problem in the general case, see this math.stackexchange post . In the comments of that post you'll find a link to the OEIS sequence giving the number of distinct topologies on an $n$ -element set up to homeomorphism; for the case $n=3$ that list agrees with your count of 9 topologies.
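A brute-force check of the count (a sketch, not part of the original answer): enumerate all families of subsets of a 3-point set, keep those closed under union and intersection, then quotient by relabelings of the points:

```python
from itertools import chain, combinations, permutations

X = frozenset({1, 2, 3})
# All subsets of X other than the empty set and X itself.
proper = [frozenset(s) for r in (1, 2) for s in combinations(sorted(X), r)]

def powerset(iterable):
    s = list(iterable)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def is_topology(T):
    return all(a | b in T and a & b in T for a in T for b in T)

# Every topology contains {} and X; try every choice of the other open sets.
topologies = [T for extra in powerset(proper)
              if is_topology(T := frozenset(extra) | {frozenset(), X})]
assert len(topologies) == 29  # total number of topologies on a 3-point set

# Quotient by relabelings of the points (homeomorphism classes).
def relabel(T, perm):
    m = dict(zip(sorted(X), perm))
    return frozenset(frozenset(m[x] for x in A) for A in T)

classes = {frozenset(relabel(T, p) for p in permutations(sorted(X)))
           for T in topologies}
assert len(classes) == 9  # matches the count of 9 in the question
```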
|general-topology|problem-solving|
1
Number of ways to form a sequence of 10 letters from four 'a's, four 'b's, four 'c's, and four 'd's if each letter must appear at least twice
Question: How many ways are there to form a sequence of 10 letters from four 'a's, four 'b's, four 'c's, and four 'd's if each letter must appear at least twice? My Approach: I started out with an exponential generating function, where $x_a$ is the number of 'a's, $x_b$ the number of 'b's, $x_c$ the number of 'c's, $x_d$ the number of 'd's, and $x_a + x_b + x_c + x_d = 10$. For a single letter, $A(x) = \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!}$, and the full generating function is $(A(x))^4$. My Problem: I am unable to reduce this to a standard formula involving $e^x$ from which I can easily read off the coefficient of $ x^{10} $ . Can I do it directly using permutations with limited repetition?
The exponential generating function for the number of acceptable sequences of length $r$ is $$f(x) = \left( \frac{1}{2!}x^2 + \frac{1}{3!}x^3 + \frac{1}{4!}x^4 \right)^4$$ Expand $f(x)$ by polynomial multiplication, with result $$f(x) = \frac{x^8}{16}+\frac{x^9}{12}+\frac{x^{10}}{16}+\frac{13 x^{11}}{432}+\frac{107 x^{12}}{10368}+\frac{13 x^{13}}{5184}+\frac{x^{14}}{2304}+\frac{x^{15}}{20736}+\frac{ x^{16}}{331776}$$ So the number of acceptable strings of length $10$ is $$10! \cdot \frac{1}{16} = 226800$$
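The coefficient extraction can be checked mechanically (a sketch that enumerates the exponent choices rather than expanding the polynomial symbolically):

```python
from fractions import Fraction
from itertools import product
from math import factorial, prod

# Coefficient of x^10 in (x^2/2! + x^3/3! + x^4/4!)^4: pick one exponent
# from {2, 3, 4} per letter, keep the choices summing to 10.
coeff = Fraction(0)
for parts in product((2, 3, 4), repeat=4):
    if sum(parts) == 10:
        coeff += Fraction(1, prod(factorial(k) for k in parts))

count = factorial(10) * coeff
assert count == 226800
```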
|combinatorics|
0
Does $\lim_{x \searrow 0} \frac{\int_{x^4}^{2x^4} f(y) \ dy}{x^4}$ converge?
I have problems with this exercise: Let $f: [a, b] \to \mathbb{R}$ be continuous. Does $$ \lim_{x \searrow 0} \frac{\int_{x^4}^{2x^4} f(y) \ dy}{x^4} $$ exist? If so, determine the limit's value. I actually have no idea how to approach this; any help is appreciated. I think I am supposed to use the special case of the mean value theorem that states $\exists \xi \in [a, b]: f(\xi)(b-a) = \int_a^b f(y) \ dy$ . But since $a$ and $b$ are arbitrary real numbers, I cannot use the theorem right away. Correct? What can I say about the integration limits $x^4$ and $2x^4$ ? Or is this even important? Because otherwise I could just say $$ \lim_{x \searrow 0} \frac{\int_{x^4}^{2x^4} f(y) \ dy}{x^4} = \lim_{x \searrow 0} f(\xi)(2x^4-x^4)\frac{1}{x^4} = f(\xi) $$ for some $\xi \in [a, b]$ ?
Using what you were doing, let $x_{n}$ be a sequence that converges to $0$ from above. For every interval $I_{n} = [x_{n}^{4}, 2x_{n}^{4}]$ , there is a $\xi_{n} \in I_{n}$ such that the conclusion of the mean value theorem holds. Note that $x_{n}^{4} \leq \xi_{n} \leq 2x_{n}^{4}$ , so $\xi_{n} \to 0$ by the squeeze theorem. Let $$g(x) = \frac{\int_{x^{4}}^{2x^{4}}f(y) \ dy}{x^{4}}$$ Then, $$g(x_{n}) = \frac{\int_{x_{n}^{4}}^{2x_{n}^{4}}f(y) \ dy}{x_{n}^{4}} = \frac{f(\xi_{n})(2x_{n}^{4} - x_{n}^{4})}{x_{n}^{4}} = f(\xi_{n})$$ Then, $g(x_{n}) \to f(0)$ since $f(\xi_{n}) \to f(0)$ by the continuity of $f$ . It follows that $\lim_{x\to 0^{+}}g(x) = f(0)$ .
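A numeric illustration with the concrete choice $f(y)=\cos y$ (an assumption for this sketch), for which the limit should be $f(0)=1$:

```python
import math

# Approximate g(x) = (1/x^4) * integral of cos(y) over [x^4, 2x^4]
# with the midpoint rule; the limit as x -> 0+ should be cos(0) = 1.
def g(x, steps=1000):
    a, b = x ** 4, 2 * x ** 4
    h = (b - a) / steps
    integral = sum(math.cos(a + (i + 0.5) * h) for i in range(steps)) * h
    return integral / x ** 4

for x in (0.5, 0.1, 0.01):
    assert abs(g(x) - 1.0) < 1e-2   # approaches f(0) = 1
```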
|real-analysis|integration|limits|analysis|
1
Clarifying the Existence and Uniqueness of Solutions to an Initial Value Problem Given Local Lipschitz Continuity
For an open interval $I$ and an open region $ \Omega \subset \mathbb{R}^{n}$ let $F: I \times \Omega \rightarrow \mathbb{R}^{n},( t, x) \mapsto F(t, x)$ be continuous and locally Lipschitz continuous with respect to $x$ . For each $\left(t_{0}, x_{0}\right) \in I \times \Omega $ there is then [...] one solution $\varphi: I \rightarrow \mathbb {R}^{n} $ of the initial value problem $\varphi^{\prime}(t)=F(t, \varphi(t)) $ with $\varphi\left(t_{0}\right)=x_{0} $ . Here [...] should be replaced with one of: exactly, at least, at most. I was sure it had to be "exactly" because of Lipschitz continuity, but the solution says that's false because: "Only uniqueness follows from local Lipschitz continuity, but not global existence." Now I think that it's "at least" one solution, because of local Lipschitz continuity and the Peano theorem. An example would help a lot!
This depends on the interval $I$ . If $F$ is as in the assumptions, solutions exist and are unique on small open intervals about $t_0$ , but they may not exist on the entire interval. Therefore "at most" is correct. Example: Consider $x'= x^2, x(0) = x_0 = 1$ . The unique solution is $x(t) = (1-t)^{-1}$ , which exists on the interval $(-\infty,1)$ . But on the interval $(-2,2)$ this solution does not exist.
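A quick numeric check of the example (a sketch): the claimed solution satisfies the ODE wherever it is defined, and blows up as $t \to 1^-$:

```python
# Verify numerically that x(t) = 1/(1-t) solves x' = x^2 with x(0) = 1,
# and that it blows up as t approaches 1 from the left.
x = lambda t: 1.0 / (1.0 - t)

assert x(0.0) == 1.0
for t in (-1.5, 0.0, 0.5, 0.9):
    h = 1e-6
    derivative = (x(t + h) - x(t - h)) / (2 * h)   # central difference
    assert abs(derivative - x(t) ** 2) < 1e-3      # x' = x^2 holds
assert x(1.0 - 1e-9) > 1e8   # no continuous extension past t = 1
```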
|real-analysis|calculus|ordinary-differential-equations|lipschitz-functions|
1
How to find the inverse of a number in the ring of residues modulo $n$
I don't understand how to find the inverse element to $7$ in the residue ring modulo $449$ . According to Euclid's algorithm, I get $-64$ , although the answer should be the number $385$ . But at the same time $-64 + 449 = 385$ . But how do I get $385$ initially?
Start with the Euclidean algorithm, which here takes a single step: $449=7 \times 64+1$ $449 - 7 \times 64 = 1$ $ - 7 \times 64 = -449 + 1$ $ 7 \times (-64) \equiv 1 \mod{449}$ So the congruence class of $-64$ is your answer. Typically you would choose a representative from $0,\dots,448$ for this class: $-64 + 449 = 385$ . To double check: $385 \times 7 = 2695$ , and $2695=6 \times 449 + 1$ , as desired.
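A minimal sketch of the computation: the extended Euclidean algorithm produces the Bézout coefficient $-64$, and reducing it mod $449$ gives $385$ directly:

```python
def extended_gcd(a, b):
    """Return (g, s, t) with g = gcd(a, b) and s*a + t*b == g."""
    if b == 0:
        return a, 1, 0
    g, s, t = extended_gcd(b, a % b)
    return g, t, s - (a // b) * t

g, s, _ = extended_gcd(7, 449)
assert g == 1 and s == -64      # 7 * (-64) + 449 * 1 == 1
inverse = s % 449               # Python's % already returns a value in 0..448
assert inverse == 385
assert (7 * inverse) % 449 == 1

# Python 3.8+ can also compute this directly:
assert pow(7, -1, 449) == 385
```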
|modular-arithmetic|
0
How to argue that two outcomes are equally likely?
When it comes to the concept of "equally likely," students are not told what makes two outcomes equally likely, but are instead shown what it means via examples. "When flipping a coin, heads and tails are equally likely" and "When rolling a die, the faces 1-6 are equally likely" are such common examples. However, this leads me to believe that "two outcomes are equally likely" is merely an intuitive fact and cannot be justified by any logical argument. Is it possible to justify––even better, prove ––that two outcomes are equally likely? For instance, how do you prove $P(heads)=P(tails)$ ? The only potential justification that comes to my mind is similar to the law of large numbers: that the relative frequency of the two outcomes approaches the same number when the experiment is repeated an infinite number of times. But I see this as more of an empirical justification than an actual proof, because it is not actually possible to repeat an experiment "an infinite number of times."
For instance, how do you prove $P(\text{heads})=P(\text{tails})$ ? I don't think we can ever prove that mathematically, without resorting to circular reasoning at some level. But do we need a proof? In problems involving probability, "fairness" is assumed unless there's a reason not to. And for a fair coin (die or something similar), outcomes are equiprobable, by definition. The next question might be: is it reasonable to assume that for a fair coin heads and tails are equally likely outcomes? I, like probably everyone else, would say yes. We can imagine a perfect coin in which the two faces are perfectly symmetrical (I mean, there will have to be something to make one face recognizable as different from the other, but let's pretend that's negligible). And if the faces are truly symmetrical, then there's no logical reason why one outcome should win over the other. In fact, symmetry of faces must imply symmetry of outcome, making the two outcomes equally likely. Also, is such an assumption necessary? While
|probability|proof-writing|
0
Doubt in proof of Prop 2.2.2 in Kesavan's Functional Analysis
Given a normed linear space $V$ (with norm $\lVert \rVert_{V}$ ) and a closed subspace $W$ , Kesavan defines a norm $\lVert \rVert_{V/W}$ on the quotient space $V/W$ as $\lVert x+W \rVert_{V/W} = \inf_{w \in W} \lVert x +w \rVert_{V}$ . Then he proceeds to show that this norm is well defined. Prop. 2.2.2 (pg. 36) claims that if $(V, \lVert \rVert_{V})$ is Banach, then $(V/W, \lVert \rVert_{V/W})$ is also Banach. The proof goes as follows: Take a Cauchy sequence $\{x_i + W\}_{i=1}^{\infty}$ in $V/W$ . Then extract a subsequence $\{x_{n_k} + W\}_{k=1}^{\infty}$ such that $\lVert x_{n_k} + W - (x_{n_{k+1}} + W) \rVert_{V/W} < \frac{1}{2^k}$ . (I'm able to understand up to this step.) Then the author chooses a sequence $\{y_{k}\}_{k=1}^{\infty}$ such that $y_k \in x_{n_k} +W$ and $\lVert y_k - y_{k+1} \rVert_{V} < \frac{1}{2^k}$ . Then he observes that $\{y_{k}\}_{k=1}^{\infty}$ is Cauchy and so converges to some $y$ , since $V$ is Banach. Then a straightforward argument shows that $\{x_{n_k} + W\}_{k=1}^{\infty}$ converges to
Here's a way to select the sequence: From $\inf_{w\in W}\|x_{n_1}-x_{n_2}+w\|_V < \frac{1}{2}$ we get that there exists a $w_2 \in W$ such that $\|x_{n_1}-(x_{n_2}-w_2)\|_V < \frac{1}{2}$ . Set $y_1 = x_{n_1}$ and $y_2 = x_{n_2}-w_2$ . Since $w_2 \in W$ we have $$\inf_{w\in W}\|(x_{n_2}-w_2) - x_{n_3} + w\|_V=\inf_{w\in W}\|x_{n_2}-x_{n_3}+w\|_V < \frac{1}{4},$$ and hence there exists some $w_3 \in W$ such that $\|(x_{n_2}-w_2) - (x_{n_3} - w_3)\|_V < \frac{1}{4}$ . Set $y_3 = x_{n_3}-w_3$ . Continuing this way we get the required sequence $\{y_k\}_{k\in\mathbb{N}}$ .
|functional-analysis|proof-explanation|
1
Help with sum of series
How can I find the sum of the series $\sum_{n=0}^{\infty}\frac{2n^2+3}{2\cdot 3^n}$ ? I proved that it is convergent, but I am stuck. I tried replacing $\frac13$ with $x$ and studying it as a power series, but it didn't help. I also reduced the problem to finding the sum of $\sum n^2 x^n$ , but I couldn't conclude anything. My attempt: $$\sum_{n=0}^{\infty}\frac{(2n^2+3)x^n}{2}= \sum_{n=0}^{\infty} n^2 x^n + \sum_{n=0}^{\infty} \frac{3 x^n}{2}$$ The last sum is $3/2$ times the sum of the geometric series (convergent iff $|x|<1$ ), i.e. $\frac32\cdot\frac{1}{1-x}$ . But I don't know how to work with the first sum.
As a hint: separating the sum is the key. $$\sum_{n=0}^{\infty}\frac{2n^2+3}{2\cdot 3^n} \\=\frac 12\sum_{n=0}^{\infty}\frac{2n^2+3}{3^n} \\=\frac 12\sum_{n=0}^{\infty}\frac{2n^2}{3^n}+\frac 12\sum_{n=0}^{\infty}\frac{3}{3^n} $$ The second part is a geometric progression, but for the first you can start with $f(x)=\sum_{n=0}^{\infty}x^n$ , then find $f'(x)$ , then multiply by $x$ , then... Can you take over now?
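Carrying the hint to its end, a quick numeric check (a sketch, and it does give away the final value):

```python
# Following the hint: sum n^2 x^n = x(1+x)/(1-x)^3 (apply x*d/dx twice to
# the geometric series 1/(1-x)), evaluated at x = 1/3.
x = 1.0 / 3.0
sum_n2 = x * (1 + x) / (1 - x) ** 3      # sum of n^2 / 3^n  = 3/2
sum_geo = 1 / (1 - x)                    # sum of 1 / 3^n    = 3/2
closed_form = 0.5 * (2 * sum_n2 + 3 * sum_geo)

# Compare against a partial sum of the original series.
partial = sum((2 * n * n + 3) / (2 * 3 ** n) for n in range(60))
assert abs(partial - closed_form) < 1e-12
assert abs(closed_form - 15 / 4) < 1e-12   # the series sums to 15/4
```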
|real-analysis|calculus|sequences-and-series|analysis|
1
Derivation of an epitrochoid
I was working on an assignment and had an idea to have a water jet trace an epitrochoid in a fountain. While the overall idea was approved by my teachers, I was told that I need to show a derivation of the parametric equations of the epitrochoid. The particular Desmos model of what I am modelling and need the equations derived for is here: https://www.desmos.com/calculator/zb8ccuup1q Can anyone help with the derivation of the parametric equations? Thanks
I'll detail the approach in J.G.'s comment. Let's work in the complex plane in order to deal with both coordinates at the same time, as suggested by J.G. From now on, I'll refer to the linked animation in the same comment, with the same initial situation. Let $R$ and $r$ be the radii of the static big circle (centered at the origin) and the rotating little one respectively. Let $z = x + iy$ be a point of the epitrochoid and $\theta$ the angle between the $x$ -axis and the center of the rotating circle, so that $\theta = 0$ initially. Finally, let's denote the distance between the point $z$ and the center of the little circle by $d$ . In consequence, the position of the center of the little circle is given by $(R+r)e^{i\theta}$ . From there, the point $z$ lies at a distance $d$ from this center, hence a translation by the same quantity, but it has to be combined with a rotation of angle $\varphi$ (which is represented by a factor $e^{i\varphi}$ in the complex plane, because the red dot and the
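For reference, this derivation leads to the standard parametric form $z(\theta)=(R+r)e^{i\theta}-d\,e^{i(R+r)\theta/r}$, i.e. $x=(R+r)\cos\theta-d\cos\frac{(R+r)\theta}{r}$ and similarly for $y$ (sign convention: the traced point starts at the contact point). A small sanity check (the values of $R$, $r$, $d$ below are arbitrary illustrations):

```python
import cmath

# Epitrochoid in complex form: z(theta) = (R+r) e^{i theta} - d e^{i (R+r) theta / r}.
R, r, d = 3.0, 1.0, 0.5     # sample radii and pen distance

def z(theta):
    return (R + r) * cmath.exp(1j * theta) - d * cmath.exp(1j * (R + r) * theta / r)

# The traced point always stays at distance d from the rolling circle's center.
for theta in (0.0, 0.7, 2.0, 5.0):
    center = (R + r) * cmath.exp(1j * theta)
    assert abs(abs(z(theta) - center) - d) < 1e-12

# With d = r the curve degenerates to an epicycloid touching the fixed circle.
d = r
assert abs(abs(z(0.0)) - R) < 1e-12
```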
|mathematical-modeling|derivation-of-formulae|derivations|
0
Lemma 10.32 of Lee's Introduction to Smooth Manifolds
I am reading over the proof of Lemma 10.32 (Local Frame Criterion for Subbundles) in Lee's Introduction to Smooth Manifolds . The lemma says Let $\pi: E \rightarrow M$ be a smooth vector bundle and suppose that for each $p\in M$ we are given an $m$-dimensional linear subspace $D_p \subseteq E_p$ . Then $D = \cup_{p \in M} D_p \subseteq E$ is a smooth subbundle of $E$ iff each point of $M$ has a neighborhood $U$ on which there exist smooth local sections $\sigma_1, \cdots, \sigma_m: U \rightarrow E$ with the property that $\sigma_1(q), \cdots, \sigma_m(q)$ form a basis for $D_q$ at each $q \in U$ . Overall I understand the proof of this lemma, besides the part where we need to show that $D$ is an embedded submanifold with or without boundary of $E$ . Professor Lee's proof says that it suffices to show that each $p \in M$ has a neighborhood $U$ such that $D \cap \pi^{-1}(U)$ is an embedded submanifold (possibly with boundary) of $\pi^{-1}(U) \subseteq E$ . It is not very obvious to me why it is
This question has been around for a while, but I'm leaving this answer in case anyone else is wondering about this as well. I believe the statement in the text is misleading, or at least unclear. We actually need to use the vector bundle chart lemma in the book. We are given $$\Phi_\alpha : \pi^{-1}(U_\alpha)\to U_\alpha \times \mathbb{R}^k : \Phi_\alpha(s^1 \sigma_1(q)+\cdots +s^k \sigma_k(q))=(q,(s^1,\dots, s^k)).$$ Restricting to $D$ (where the last $k-m$ coordinates vanish), we get the candidate smooth local trivialization for $D$ given by $$\Psi_\alpha: D \cap \pi^{-1}(U_\alpha)=\pi_D^{-1}(U_\alpha) \to U_\alpha \times \mathbb{R}^m: \Psi_\alpha(s^1 \sigma_1(q)+\cdots +s^m \sigma_m(q))=(q,(s^1,\dots, s^m)).$$ From $\Phi_\alpha$ we do get, as Lee writes, $D\cap \pi^{-1}(U_\alpha)$ as an embedded submanifold of $\pi^{-1}(U_\alpha)$ , and from this we get that the map $\Psi_\alpha$ above is a diffeomorphism. But we do not yet have a smooth structure on $D$ , or even a topology. So we need to find it using t
|differential-geometry|differential-topology|smooth-manifolds|vector-bundles|
0
Learning effectively
Today I got my midterm test result in discrete math (it can only raise the grade). I got 63, which is above the average, but I still don't feel comfortable with that. I have 7 courses this semester, and time management is pretty hard, because everything is time-consuming. I understand that I need to change things and how I learn, at least in discrete math. I notice I spend a lot of time reviewing what we do in class and then going to the homework they gave us (but we don't really get a review of the homework). I would be glad for advice from experienced people on how to figure out what I need to improve or change. Also, is reviewing what I did in class in detail the right thing, or is it better to try to solve more problems or maybe look at books? What helped you improve your grades and succeed, and how can one make learning easier in hard courses with little time? Thank you
The question is somewhat personal, as there is no guaranteed method for raising your grades; however, I can suggest a couple of things that helped with mine. Print materials instead of viewing them online. This might be something that only I struggled with, but before my first midterms I used to study by viewing everything online (i.e., homework, midterm samples). I noticed that I concentrate better on actual printed materials. Start the homework as soon as you get it, and try not to rely too much on problem-solving sessions or office hours. I don't mean don't attend them at all; I mean attend them only after you have done your best solving the problems yourself and only need a little guidance here and there. Ideally, you should be able to solve 80-100% of the homework yourself, so you can ensure that you have truly grasped the new material. Start preparing for exams (using samples or just redoing all the homework) far earlier than other students usually do (typically, they start one we
|discrete-mathematics|self-learning|learning|
0
If a topological space X is not first countable, is it second countable?
I know that a topological space is first countable if each point has a countable neighborhood basis. Consider the following topological space: $X=\mathbb R$ with $\mathcal T=\{A\subseteq \mathbb R:0 \not\in A\} \cup \{A\subseteq \mathbb R:0\in A\text{ and } \mathbb R \setminus A\text{ is finite}\}$ . I know it is not first countable. Is this because for a local base at $x=0$ , $\mathbb R\setminus A$ is finite, so each such $A$ is infinite, indeed uncountable? If a topological space such as the one above is not first countable, does it have to be second countable, or is it not second countable for the same reason it is not first countable?
This is actually quite a reasonable question if you are learning about first/second-countability after learning about first/second category . As Moishe Kohan mentioned in the comments, while the definition of second category is "not first category", this is not true for first/second countability. Instead, "first countable" means every point has a local countable basis, while "second countable" means the entire space has a countable basis. This is why I encourage authors to use "meager" and "nonmeager" rather than "first category" and "second category". Likewise, I prefer the terminology "countable local weight" and "countable weight" in place of "first countable" and "second countable", but that is far less conventional as of today.
|general-topology|second-countable|first-countable|
0
Is there some notion of "Uniformly Discontinuous" function?
Suppose there is a function $f : I \rightarrow \mathbb{R}$ (I is some non-empty interval of real numbers) which is discontinuous everywhere. This means that for any $x \in I, \exists \epsilon_{x}> 0$ such that for any $\delta >0, \exists x_{\delta} \in I$ such that $|x-x_{\delta}| < \delta$ , but $|f(x) - f(x_{\delta})| \geq \epsilon_x$ . I was wondering whether there would always exist a uniform such $\epsilon$ , i.e., an $\epsilon > 0$ such that for any $x \in I$ , and for any $\delta >0$ , $\exists x_{\delta} \in I$ such that $|x-x_{\delta}| < \delta$ , but $|f(x) - f(x_{\delta})| \geq \epsilon$ . For example, if $f$ was the Dirichlet function (1 on rationals, 0 on irrationals), then $\epsilon = \frac{1}{2}$ would work. This gave me the feel of "Uniform Continuity", as in that case, for every $\epsilon > 0$ , we have a uniform $\delta_{\epsilon} > 0$ such that for any $x ,y\in I$ with $|x-y| < \delta_{\epsilon}$ , $|f(x)-f(y)| < \epsilon$ , whereas in the proposed sense of "Uniform Discontinuity", we would have a uniform $\epsilon>0$
Consider the function $f(x)$ on $[0,1]$ such that $f(0)=1$ , $f(r)=r$ for rational $r$ and otherwise $f(x)=0$ (for irrational $x$ ). This is discontinuous everywhere but its oscillation at a positive $x$ becomes smaller and smaller as $x$ tends to zero, while at $x=0$ the oscillation is $1$ . Therefore there is no uniform lower bound.
|real-analysis|continuity|
1
Why a subset $W$ of the cartesian product $X\times Y$ cannot equal $X$
Let $W\subseteq X\times Y$ where $\forall x\in X , \exists ! y\in Y , (x,y) \in W $ . How can one prove that if $W\neq \emptyset$ then always $W \neq X$ ? I tried to prove that $x = (x,y)$ cannot hold, but then I realized that my assumption that $W = X$ implies $x = (x,y)$ may be wrong; perhaps I should first prove that if $z\neq x$ then $x \neq (z,y)$ . I am stuck and don't know how to do it, or maybe I have gone about it wrong from the beginning. How can I do it? I assume that we use ZF (or ZFC if necessary). We can assume that pairs are encoded by the Kuratowski formula $(x,y)=\{\{x\}, \{x,y\}\}$
Kuratowski's definition for ordered pairs is widely accepted: $(a,b) = \{\{a\}, \{a,b\}\}$ . It is also accepted that for sets $X, Y$ the cartesian product is the set of their ordered pairs. Finally, for ordered pairs $(x_1,x_2) = (a,b)$ if and only if $x_1 = a$ and $x_2 = b$ . Assume $W$ is not empty and $W= X$ ; then there exists $(a,b) \in W$ such that $(a,b) \in X$ and $b \in Y$ , and that set contains the singleton of the ordered pair, $\{(a,b)\} = \{(a,b)\} \times \{b\}$ . From this we get $$((a,b), b) = (\{\{a\}, \{a,b\}\}, b).$$ We would then have, from the equality of ordered pairs and singletons, that $\{b\}=\{b\}$ and $\{a\}=\{\{a\}, \{a,b\}\}$ . The cardinalities of these sets must match, therefore we need $a=b$ (as then $\{a,b\} = \{a\}$ , and sets do not contain multiple copies of an element). But $(a,b)$ was arbitrary, so we have a contradiction, which completes the proof. Also, in naive set theory we can consider $a,b$ to be atoms, which do not hav
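The Kuratowski encoding can be modelled directly with frozensets (a sketch; Python's hashable sets stand in for ZF sets):

```python
# Model Kuratowski pairs with frozensets: (a, b) = {{a}, {a, b}}.
def kpair(a, b):
    return frozenset({frozenset({a}), frozenset({a, b})})

# The characteristic property: (x1, x2) = (a, b) iff x1 = a and x2 = b.
assert kpair(1, 2) == kpair(1, 2)
assert kpair(1, 2) != kpair(2, 1)                  # order matters
assert kpair(1, 1) == frozenset({frozenset({1})})  # degenerate pair collapses
```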
|set-theory|
0
Sufficient condition for linear map between $ C^* $ algebras to preserve the identity
Let $ f: A \to B $ be a linear map between two $ C^* $ algebras. What is a sufficient condition to guarantee that the linear map $ f $ takes the identity to the identity $ f(1_A)=1_B $ ? For example, taking $ A=B=\mathbb{C}^{2 \times 2} $ the algebra of $ 2 \times 2 $ matrices. Is it sufficient for $ f $ to be positive and trace preserving? Being positive alone is not sufficient since the map $ M \mapsto 2M $ is positive but not unital. Being trace-preserving alone is not sufficient since the map $ \begin{bmatrix} a & b \\ c & d \end{bmatrix} \mapsto \begin{bmatrix} a+d & b \\ c & 0 \end{bmatrix} $ is trace preserving but not unital. I'm guessing I don't need to assume complete positivity since the transpose map $ M \mapsto M^T $ is not completely positive but it is positive and trace preserving and it happens to be unital. https://en.wikipedia.org/wiki/Completely_positive_map
No, a positive trace-preserving map is not necessarily unital. For example, the map $$ M_n(\mathbb C)\to M_n(\mathbb C),\,A\mapsto \operatorname{tr}(A)\sigma $$ is positive (in fact completely positive) and trace-preserving if $\sigma$ is positive and $\operatorname{tr}(\sigma)=1$ . But it is only unital if $\sigma=\frac{1}{n}I$ . The closest result I can think of is that a linear map $\Phi\colon M_m(\mathbb C)\to M_n(\mathbb C)$ is unital if and only if the Hilbert-Schmidt adjoint $\Phi^\dagger$ defined by $$ \operatorname{tr}(\Phi^\dagger(A)^\ast B)=\operatorname{tr}(A^\ast\Phi(B)),\qquad A\in M_n(\mathbb C),\,B\in M_m(\mathbb C), $$ is trace-preserving. However, this is a pretty direct consequence of the definition. Abstractly, this criterion just says that a bounded linear map $\Phi\colon A\to B$ between $C^\ast$ -algebras is unital if and only if $\Phi^\ast(\psi)(1)=\psi(1)$ for all $\psi\in B^\ast$ , which is a simple consequence of Hahn-Banach.
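A concrete instance of the counterexample with $n=2$ and $\sigma = \operatorname{diag}(1,0)$, sketched with plain tuples so nothing beyond the standard library is needed:

```python
# 2x2 matrices as nested tuples; Phi(A) = tr(A) * sigma with sigma = diag(1, 0),
# a positive trace-one matrix, so Phi is positive and trace-preserving.
def tr(A):
    return A[0][0] + A[1][1]

def phi(A, sigma=((1, 0), (0, 0))):
    t = tr(A)
    return tuple(tuple(t * x for x in row) for row in sigma)

I = ((1, 0), (0, 1))
A = ((2, 1), (1, 3))

assert tr(phi(A)) == tr(A)          # trace-preserving
assert phi(I) == ((2, 0), (0, 0))   # but Phi(I) = 2*sigma != I: not unital
```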
|functional-analysis|operator-theory|operator-algebras|c-star-algebras|
1
Smallest axis-aligned regular hexagon for a set of hexagonal lattice points
Given a set of points on a hexagonal lattice, is there an efficient way to compute a center point $c$ and a minimal integer radius $r$ such that all points are within Manhattan distance $r$ of $c$ ? The problem for squares on a square lattice is easy: compute the bounding rectangle and lengthen the shorter sides until it is a square. But I haven't been able to find a reference for how to solve the equivalent problem on a hexagonal lattice. For context, we are trying to rank solutions in the game Opus Magnum according to this minimum bounding hexagon. Here's an example of a solution optimized for its bounding hexagon: https://imgur.com/J8BeLNu .
This answer gives a way to compute $r$ . Using the axial coordinate system, write points as $(u, v)$ . Define $(u_1, v_1) \perp (u_2, v_2) = u_1u_2 - v_1v_2$ . For a set of points $U$ , define $(u, v) \perp U = \max\{ (u, v) \perp p : p \in U \}$ . Then we can write six directional distances: $$ \begin{split} d_1 & = (0, -1) \perp U \\ d_2 & = (1, 0) \perp U \\ d_3 & = (-1, 1) \perp U \\ d_4 & = (0, 1) \perp U \\ d_5 & = (-1, 0) \perp U \\ d_6 & = (1, -1) \perp U \\ \end{split} $$ The value of $r$ is the maximum of these five quantities, rounded up: $$ r = \left\lceil \max \left\{ \frac{d_1 + d_4}{2}, \frac{d_2 + d_5}{2}, \frac{d_3 + d_6}{2}, \frac{d_1 + d_2 + d_3}{3}, \frac{d_4 + d_5 + d_6}{3} \right\} \right\rceil. $$ The first three quantities handle the case where the hexagon's size is determined by points on two opposite sides, while the last two handle the case where the hexagon's size is determined by points on three different sides.
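The formula above translates directly into code. A sketch in Python (the axial-coordinate point set is made-up illustrative data; the directions and the $\perp$ product are exactly as defined in the answer):

```python
from math import ceil

# The six directions and the product (u1, v1) # (u2, v2) = u1*u2 - v1*v2
# from the answer, applied to a set of axial-coordinate points (u, v).
DIRS = [(0, -1), (1, 0), (-1, 1), (0, 1), (-1, 0), (1, -1)]

def bounding_hex_radius(points):
    # directional distances d1..d6, each a maximum over the point set
    d1, d2, d3, d4, d5, d6 = (max(du * u - dv * v for (u, v) in points)
                              for (du, dv) in DIRS)
    # r is the rounded-up maximum of the five combined quantities
    return ceil(max((d1 + d4) / 2, (d2 + d5) / 2, (d3 + d6) / 2,
                    (d1 + d2 + d3) / 3, (d4 + d5 + d6) / 3))

# Made-up illustrative point set in axial coordinates:
pts = [(0, 0), (2, -1), (-1, 2), (1, 1)]
print(bounding_hex_radius(pts))
```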
|geometry|
0
The Fourier transform of conjugate Poisson Kernel.
From the very beginning we have known that $P_t(x)=\frac{t}{\pi (t^2+x^2)}$ is the Poisson kernel for the upper half plane, and that \begin{equation}\hat{P_t}(\xi)=e^{-t|\xi|}\label{e}.\end{equation} The conjugate Poisson kernel is defined by $Q_t(x)=\frac{x}{\pi(x^2+t^2)}$ . I want to prove that $\hat{Q_t}(\xi)=(-i)\operatorname{sgn}(\xi) e^{-t\lvert\xi\rvert}$ using the formula $\hat{P_t}(\xi)=e^{-t|\xi|}$ . I could not make any progress, and the existing Stack Exchange answers did not help me get there. Can anyone give me a hint?
METHODOLOGY $1$ : COMPLEX ANALYSIS Let $Q_t(x)=\frac{x}{\pi(x^2+t^2)}$ . Note that the Fourier Transform of $Q_t(x)$ (with the $e^{i\xi x}$ convention) is given by $$\begin{align} \hat Q_t(\xi)&=\int_{-\infty}^\infty \frac{x}{\pi(x^2+t^2)}e^{i\xi x}\,dx\\\\ &=\begin{cases} 2\pi i \,\text{Res}\left(\frac{z\,e^{i\xi z}}{\pi(z^2+t^2)}, z=it\right)=i\,e^{-t\xi}, &\xi>0\\\\ -2\pi i \,\text{Res}\left(\frac{z\,e^{i\xi z}}{\pi(z^2+t^2)}, z=-it\right)=-i\,e^{t\xi}, &\xi<0 \end{cases}\\\\ &=i\,\text{sgn}(\xi)\,e^{-t|\xi|} \end{align}$$ where we used contour integration and the Residue Theorem to arrive at the coveted result (with the $e^{-i\xi x}$ convention one gets $-i\,\text{sgn}(\xi)\,e^{-t|\xi|}$ , as stated in the question). METHODOLOGY $2$ : REAL ANALYSIS Alternatively, we can evaluate the Fourier Transform using real analysis. Proceeding, we see that $$\begin{align} \hat Q_t(\xi)&=\int_{-\infty}^\infty \frac{x}{\pi(x^2+t^2)}e^{i\xi x}\,dx\\\\ &=\int_{-\infty}^\infty \frac{(x^2+t^2)-t^2}{\pi x(x^2+t^2)}e^{i\xi x}\,dx\\\\ &=i\,\text{sgn}(\xi)-\frac{t^2}{\pi}\int_{-\infty}^\infty \frac{1}{x(x^2+t^2)}e^{i\xi x}\,dx \end{align}$$ Differentiating twice, we find that for $\xi\neq 0$ $$\hat Q_t''(\xi)=\frac{t^2}{\pi}\int_{-\infty}^\infty \frac{x}{x^2+t^2}e^{i\xi x}\,dx=t^2\hat Q_t(\xi),$$ so on each half-line $\hat Q_t$ satisfies $y''=t^2y$ . The solution that stays bounded and matches the jump of $i\,\text{sgn}(\xi)$ at $\xi=0$ is again $\hat Q_t(\xi)=i\,\text{sgn}(\xi)\,e^{-t|\xi|}$ .
|fourier-transform|harmonic-analysis|
1
Intuitive definition of continuity
Often, people are introduced to the notion of continuity by the idea that a function is continuous if and only if its graph can be drawn without lifting the pencil from the paper. Now, after joining an undergraduate course and learning continuity formally using $\epsilon$ - $\delta$ definition, I do not see any direct correlation between the formal definition and the intuitive idea about not having to lift the pencil to draw the graph. How can we mathematically formalize the idea of "drawing the graph without lifting the pencil", and is it actually equivalent to the formal definition of continuity?
When first learning analysis, I always thought it was easier and slightly less abstract to consider the negation of a function being continuous (it being discontinuous) at a point. A function $f: \mathbb{R} \rightarrow \mathbb{R}$ is discontinuous at $x_0 \in \mathbb{R}$ if there exists some $\varepsilon > 0$ such that for every $\delta > 0$ we choose, there exists at least one $x \in \mathbb{R}$ such that $$|x - x_0| < \delta \quad\text{but}\quad |f(x) - f(x_0)| \geq \varepsilon.$$ So, this gives us something to "work with" when considering the analogy of typical discontinuities (i.e., pencil test) which are taught in primary schooling. For instance, if we consider a jump discontinuity where we have to "pick up our pencil" to draw a continuous line, it is clear that there is some distance $\varepsilon > 0$ in the " $y$ -direction" such that for any possible distance in the " $x$ -direction," there will always be a "jump" between $f(x)$ and $f(x_0)$ . Same idea goes for other types of discontinuities.
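This negated definition can be tested on a concrete jump discontinuity. A sketch in Python (the step function and the witness $\varepsilon = 1/2$ are illustrative choices, not from the answer):

```python
def f(x):
    """Unit step function: a jump discontinuity at x0 = 0."""
    return 1.0 if x >= 0 else 0.0

x0, eps = 0.0, 0.5    # eps = 1/2 witnesses the discontinuity

# For every delta we try, there is an x with |x - x0| < delta
# whose image stays at least eps away from f(x0): the jump never shrinks.
for delta in [1.0, 0.1, 1e-3, 1e-9]:
    x = x0 - delta / 2           # approach the jump from the left
    assert abs(x - x0) < delta
    assert abs(f(x) - f(x0)) >= eps
```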
|continuity|
0
Why is it that: "If polynomials have integer coefficients, the roots of those polynomials will be a divisor (factor) of the constant term"?
So, in my textbook, I came across a theorem/axiom (?) that states: Given a polynomial $P(x)$ with integer coefficients, its roots, if they exist, are divisors of the independent/constant term of the polynomial I've wrapped my head around this and I can't figure out why. There's a explanation in my book, but I can't quite understand it. It goes like this: Given a polynomial $P(x)$ with whole numbers as coefficients $P(x)=a_{n}x^n+a_{n-1}x^{n-1}+\dots+a_{1}x+a_{0}$ and $a$ is an integer root of $P(x)$ , then $a$ is a divisor of $a_{0}$ (the independent term). Demonstration: $$P(a)=a_{n}a^n+a_{n-1}a^{n-1}\dots+a_{1}a+a_{0}=0$$ From here, it results that: $$a_{0}=-a_{n}a^n-a_{n-1}a^{n-1}-\dots-a_{1}a$$ Therefore, $a_{0}$ is a multiple of $a$ , i.e. $a$ is a divisor of $a_{0}$ . Can someone explain to me why this is true? And maybe give a real example with a polynomial. Thanks.
Let $\mathit F(x) = a_nx^n + a_{n-1}x^{n-1} + \dots + a_1x + a_0$ , where $\mathit a_n, a_{n-1}, \dots, a_0$ are all arbitrary integer coefficients. $\it Note:$ $\;$ The subscripts $\mathit n, n-1$ , etc. don't mean that there's any relation between the coefficient and the power of each term, but rather just a convenient way of naming coefficients of polynomials of large degree instead of using all the letters of the alphabet or something along those lines. Suppose r is an integral root of $\mathit F(x)$ , $$\mathit \implies a_nr^n + a_{n-1}r^{n-1} + \dots + a_1r + a_0 = 0$$ $$\mathit \iff -a_nr^n - a_{n-1}r^{n-1} - \dots - a_1r = a_0$$ $$\mathit \iff r(-a_nr^{n-1} - a_{n-1}r^{n-2} - \dots - a_1) = a_0$$ $$\mathit \iff r\,|\, a_0$$ I'll give an example of a quadratic to clear up any confusion. $$\mathit P(x) = x^2 + 8x + 7$$ $$\mathit P(x) = (x + 7)(x + 1)$$ $\mathit \implies x = -7$ and $\mathit x = -1$ are roots, which are factors of the constant term, i.e. 7. Hopefully that helped clear things up.
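The theorem also gives a practical root-finding recipe: only divisors of the constant term need to be tested. A sketch in Python (the helper name `integer_roots` is chosen here for illustration, not from the answer):

```python
def integer_roots(coeffs):
    """All integer roots of a_n x^n + ... + a_0 (integer coefficients,
    a_0 != 0), testing only the divisors of the constant term a_0."""
    a0 = coeffs[-1]
    divisors = [d for d in range(1, abs(a0) + 1) if a0 % d == 0]
    candidates = [s * d for d in divisors for s in (1, -1)]

    def value(x):                 # Horner evaluation of the polynomial
        y = 0
        for c in coeffs:
            y = y * x + c
        return y

    return sorted(r for r in candidates if value(r) == 0)

# The example from the answer: P(x) = x^2 + 8x + 7 = (x + 7)(x + 1)
print(integer_roots([1, 8, 7]))   # [-7, -1]
```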
|functions|polynomials|proof-explanation|roots|
0
Binary choice with mixture of normal errors estimation
My problem is simplified as follows: I'm estimating a binary choice probit model where the error term follows a mixture of two normal distributions. This is: $y_i = \mathbf{1} \left( \beta_0 + \beta_1 x_i + \epsilon_i > 0 \right)$ Where $\epsilon_i \sim 0.3 \mathcal{N}(-1,1) + 0.7 \mathcal{N}(1.2,0.2)$ . I followed naively what I would do for other, simpler models. That is, let the negative log-likelihood function be: $$ \text{NLL} = -\sum_{i=1}^{n} \left[ y_i \log(P(y_i=1|x_i; \Theta)) + (1-y_i) \log(1-P(y_i=1|x_i; \Theta)) \right] $$ Where $ P(y_i=1|x_i; \Theta) = w \Phi\left(\frac{\beta_0 + \beta_1 x_i - \mu_1}{1}\right) + (1-w) \Phi\left(\frac{\beta_0 + \beta_1 x_i - \mu_2}{\sigma_2}\right)$ . Then, the parameters to estimate are: $\Theta = \{\beta_0, \beta_1, \mu_1, \mu_2, w, \sigma_1^2,\sigma_2^2\}$ . For the implementation, I simulated the data generating process. Then, to estimate the model I placed the following restrictions (in hope of identification): 1) $\sigma_1=1$ , 2) the weights must sum to one.
Your derivation and MLE code look good to me (I tried it out and the code works apart from that you need to also specify $\beta_0$ and $\beta_1$ ). It is indeed an identification issue. Under this model specification with $\beta_0=0$ and $\beta_1=1$ , you can try to plot the histogram of the latent variable $Y_i = \beta_0+\beta_1 x_i+\epsilon_i$ . You will notice that, although $\epsilon_i$ takes a bimodal shape due to the mixture design, the latent variable $Y_i$ has a unimodal shape, as $Y_i$ is a convolution of $\mathcal{N}(0,1)$ and $\epsilon_i$ , which smoothes out the bimodal features. But your binary choice variable $y_i$ is generated by the mixture $Y_i$ from which the bimodal feature (that identifies $\mu_1$ and $\mu_2$ ) is not evident. To estimate $\mu_1$ and $\mu_2$ , $x_i$ can be viewed as a Gaussian noise, and in this case, the signal-to-noise ratio is simply way too small for the model to have good identification. You can fix this issue by picking $x_i\sim\mathcal{N}(0,\sigma_x^2)$ with a much smaller variance $\sigma_x^2$ , so that the bimodal shape of $\epsilon_i$ survives in $Y_i$ .
|statistics|probability-distributions|matlab|maximum-likelihood|expectation-maximization|
0
Prove that $\partial x = \partial \cdot x = n$ (Geometric algebra)
I’m trying to understand equation (2-1.34) on page 51 of Hestenes and Sobczyk’s “Clifford Algebra to Geometric Calculus”. $\partial x = \partial \cdot x = n \tag{1.34}$ According to the book, this follows from $\partial \wedge x = 0, \tag{1.33}$ $\partial_x = P(\partial_x) = \sum_k a^ka_k\cdot \partial_x, \tag {1.5}$ $a\cdot\partial_x x = P(a) = \partial_x(x\cdot a), \tag{1.18}$ because $\partial \cdot x = \sum_k a^k\cdot(a_k\cdot\partial x) = \sum_k a^k \cdot a_k =n. \tag{*}$ Here $x$ is a vector from an $n$ -dimensional vector space $x\in \mathcal A_n$ with orthonormal basis $a_1,…, a_n$ . As $\partial x = \partial \wedge x + \partial \cdot x$ , the first equation of $(1.34)$ follows from $(1.33)$ . My problem is understanding the first equation on the left of $(\ast)$ .
This is only a partial answer. For the second equality notice that $a_k \cdot \partial x$ is by (1.2) the directional derivative of the identity function in the $a_k$ direction, so the operator is $a_k \cdot \partial$ and the function is $x$ . Thus in coordinates, since $F(x)=x$ is the identity, $F(x+\tau a_k)=x+\tau\, a_k$ , and you get $$a_k \cdot \partial x=\lim_{\tau \to 0}\frac{(x+\tau\,a_k)-x}{\tau}=a_k.$$
|geometric-algebras|
0
Checking equivalence of combinatorial terms.
Due to some context, I have reason to believe that S(K(SII)) and SSI are actually equivalent CL terms. This is my attempt at a proof (assuming a and b to be arbitrary CL terms): $$\text{S(K(SII))ab = SII(ab) = ab(ab)}\\ \text{SSIab = Saab = ab(ab)}$$ Since both simplify to the same form, can we say that S(K(SII)) and SSI are equivalent terms? [This problem arose when I was looking to describe the Y-combinator using only S, K and I. The wikipedia definition starts with S(K(SII)), while mine starts with SSI]
Your proof seems to work, but I'm not sure. However, you can use lambda calculus to show a beta equivalence. That is guaranteed to work: $$I:=\lambda x.x, \quad K:=\lambda x. \lambda y. x, \quad S := \lambda x. \lambda y. \lambda z. xz(yz)$$ The first term reduces to: $$ SII = \lambda z. (\lambda x.x) z ((\lambda x.x)z) \rightarrow_\beta \lambda z.zz \\ S(K(SII)) = \lambda y . \lambda z. ( \lambda y . \lambda z . zz) z (yz) \rightarrow_\beta \lambda y . \lambda z. yz(yz)$$ And the second to (Keep in mind to rename conflicting variables!): $$ SS = \lambda y . \lambda z. ( \lambda x. \lambda y . \lambda z. xz(yz))z(yz) \rightarrow_\beta \lambda y . \lambda z. (\lambda z'. z z' (yz z'))\\ SSI = \lambda z . \lambda z' . z z' (z z') =_\alpha \lambda y . \lambda z. yz(yz)$$ Therefore $SSI$ and $S(K(SII))$ are $\beta$ -equivalent. Hope this helps :)
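The reduction can also be checked mechanically. A sketch in Python, encoding $S$, $K$, $I$ as closures and $a$, $b$ as formal atoms that record their application tree (the `Atom` class is an illustrative device, not standard notation):

```python
# S, K, I as Python closures; a and b are formal atoms that record their
# application tree as a string, so the two terms can be compared syntactically.

class Atom:
    def __init__(self, s):
        self.s = s
    def __call__(self, other):
        return Atom(f"({self.s} {other.s})")

I = lambda x: x
K = lambda x: lambda y: x
S = lambda x: lambda y: lambda z: x(z)(y(z))

a, b = Atom("a"), Atom("b")
SII = S(I)(I)

left = S(K(SII))(a)(b)    # S(K(SII)) a b
right = S(S)(I)(a)(b)     # S S I a b
print(left.s, right.s)    # both print ((a b) (a b))
```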
|solution-verification|lambda-calculus|combinatory-logic|
1
A weird probability question
This is the problem in question: You have two identical bowls: the first one contains 3 white balls and 4 black balls, and the second one contains 4 white balls and 5 black balls. If you choose randomly a ball from the two bowls, what is the probability it is white ? Let's define our events as such: A1 = choosing a ball from the first bowl A2 = choosing a ball from the second bowl B = choosing a white ball One approach would be using the theorem of total probability: $$\text{We know that }P(B|A_1) = \frac37\text{ and }P(B|A_2) = \frac49\text{, and that:}$$ $$P(A_1) = P(A_2) = \frac12\text{, because the bowls are identical}$$ $$P(B) = P(B |A_1)\times P(A_1) + P(B|A_2)\times P(A_2) = \frac37 \times \frac12 + \frac49 \times \frac12 = 55/126$$ The second approach would be simplifying the problem: Because the two bowls are identical, we could just say we don't even choose between two bowls, but just between the set of all balls. Then we could calculate the probability directly: $$P(B) = \frac{3+4}{7+9} = \frac{7}{16},$$ which does not agree with the first approach.
Your confusion stems from the fact that the problem is not properly stated as a probability computation. For that, you need to have a probability space $\Omega$ , and a probability measure on that space. The space must be such that every point in $\Omega$ completely determines the final outcome of the experiment, so that the latter can be viewed as realised by a single drawing of a point from $~\Omega$ , and in the case where $\Omega$ is at most countable (as it can be here, in fact there are only finitely many outcomes) the probability measure can be given by simply assigning a probability (from the interval $[0,1]$ ) to each point of $~\Omega$ so that the sum of the probabilities is $~1$ . In simple problems $\Omega$ can be taken to be the set of all possible final outcomes, and if $\Omega$ is specified and finite, there is a probability measure that is easy to describe which is the uniform measure, giving the same probability $\frac1n$ to each point of $\Omega$ where $n$ is their number.
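Exact rational arithmetic makes the disagreement between the two probability models in the question concrete. A sketch in Python:

```python
from fractions import Fraction as F

# Model 1: pick one of the two bowls uniformly, then a ball from that bowl.
p_two_stage = F(1, 2) * F(3, 7) + F(1, 2) * F(4, 9)

# Model 2: pool all 16 balls and pick one uniformly.
p_pooled = F(3 + 4, 7 + 9)

print(p_two_stage, p_pooled)   # 55/126 7/16 -- genuinely different answers
```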
|probability|combinatorics|solution-verification|
0
Formula for angle in triangle when two sides and the included angle are known
Given a triangle $ABC$ with edges $a$ , $b$ , and $c$ and corresponding angles $\alpha$ , $\beta$ and $\gamma$ . Assume, the values of $a$ , $b$ and $\gamma$ are given. Does there exist a direct theorem (like the law of cosines, the law of sines or the law of tangents) which describes the relationship between these parameters and $\alpha$ ? Obviously, one can use the law of cosines to calculate $c$ and then apply the law of sines to solve the problem. But does there exist a specific formula for the relationship which can be applied without a step in between? Maybe, someone could give me some advice. I have already read the entire article on Wikipedia about trigonometry. Thanks in advance!
When you know two sides and the angle between them, the so-called "side-angle-side" case, older textbooks would recommend using the law of tangents to find the remaining angles, if you don't want to bother finding the remaining side. I think that's what you mean by "directly" finding the other angles. This avoids having to use the law of cosines altogether: particularly useful before electronic calculators when you'd rather avoid calculating a square root, but even today the law of tangents can have better numerical performance (if $a$ and $b$ are similar and $\gamma$ is small then applying the law of cosines can produce errors by catastrophic cancellation ). Suppose $a>b$ and so $\alpha>\beta$ . How much of the length of $BC = a$ would you have to redistribute to $AC =b$ to make them the legs of an isosceles triangle with apex at $C$ ? Cut off the half-difference $\frac{1}{2}(a-b)$ from $a$ and give it to $b$ , then the lengths would be equalised to their half-sum (i.e. mean) value $\frac{1}{2}(a+b)$ .
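The one-step and two-step routes can be compared numerically. A sketch in Python (the side-angle-side data $a=3$, $b=2$, $\gamma=1$ is an illustrative choice):

```python
from math import acos, atan, cos, isclose, pi, sqrt, tan

a, b, gamma = 3.0, 2.0, 1.0    # side-angle-side data with a > b (illustrative)

# Law of tangents: tan((alpha - beta)/2) = ((a - b)/(a + b)) * cot(gamma/2),
# combined with alpha + beta = pi - gamma, yields alpha with no intermediate c.
half_diff = atan((a - b) / (a + b) / tan(gamma / 2))
half_sum = (pi - gamma) / 2
alpha_tangents = half_sum + half_diff

# Two-step route: law of cosines for c, then law of cosines again for alpha.
c = sqrt(a * a + b * b - 2 * a * b * cos(gamma))
alpha_two_step = acos((b * b + c * c - a * a) / (2 * b * c))

assert isclose(alpha_tangents, alpha_two_step, rel_tol=1e-12)
```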
|geometry|triangles|angle|
0
$\lim_{x\to 1} \sqrt{x^2 + 8} = 3$ -- Proof Via Epsilon-Delta Definition of a Limit
This problem was originally asked here: $\lim_{x\to 1} \sqrt{x^2 + 8} = 3$ prove this using epsilon delta . It got closed and is no longer accepting answers. Prove this using epsilon-delta $$\lim_{x\to 1} \sqrt{x^2 + 8} = 3$$ Definitions: $\lim_{x\to c}f(x)=L⟺∀ϵ>0,∃δ>0$ s.t. $0<|x-c|<\delta \implies |f(x)-L|<\epsilon$ , as defined by Gregory Hartman et al. of the Virginia Military Institute in LibreTexts Mathematics $_1$ Context For Why This Question Is Important: Epsilon-Delta proofs of limits form the foundations of rigorous calculus as we know it. The rigor and importance of these fundamental limits should be demonstrated to students. By giving me guidance on how to correctly evaluate this limit, users provide me (and future readers of this post) with a token of knowledge. Additional Context For the Problem: I have searched for the answer, and found this video on YouTube $_2$ . https://www.youtube.com/watch?v=yC8Y50H6kw8 . However, the person does not provide an example where the inequality is not easily factorable. Given $\
Prove $\lim_{x\to 1} \sqrt{x^2+8}=3$ . We need to find a delta given an epsilon, so we first try to find $\delta$ such that $0<|x-1|<\delta \implies |(x^2+8)-9|<\epsilon$ . From $1-\delta<x<1+\delta$ we get $1+\delta^2-2\delta<x^2<1+\delta^2+2\delta$ , hence $9+\delta^2-2\delta<x^2+8<9+\delta^2+2\delta$ . So $|(x^2+8)-9|\le 2\delta+\delta^2\le 3\delta$ whenever $\delta\le 1$ . So if $\delta=\min(1,\epsilon/3)$ , we have proven $\lim_{x\to 1} (x^2+8)=9$ . Now $|(x^2+8)-9|=|x^2-1|$ and $|a^2-b^2|=|a+b|\,|a-b|$ . If $a=1$ and $b=1+\delta$ , then max $|a+b|=3$ . This suggests $\delta=\epsilon$ will give us the full limit for our original expression. Let's see. Since $\sqrt{x^2+8}+3\ge 6$ , $$|\sqrt{x^2+8}-3|=\frac{|(x^2+8)-9|}{\sqrt{x^2+8}+3}\le\frac{|x^2-1|}{6}\le\frac{3|x-1|}{6}$$ for $|x-1|\le 1$ , so $|x-1|<\frac{3\epsilon}{2+\epsilon}$ is more than enough. Where for the last inequality, I've replaced all higher powers of $\epsilon$ with $\epsilon$ itself and added up a geometric series. This tells us the corresponding square root expression holds if $\delta=\min(\epsilon/3,1)$ , since $\min(\epsilon/3,1)\le\frac{3\epsilon}{2+\epsilon}$ .
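The choice $\delta=\min(1,\epsilon/3)$ can be sanity-checked numerically by sampling the punctured neighborhood. A sketch in Python (the sampling scheme is an illustrative device, not a proof):

```python
from math import sqrt

def delta_works(eps, delta, samples=5000):
    """Check |sqrt(x^2 + 8) - 3| < eps on sampled x with 0 < |x - 1| < delta."""
    for i in range(1, samples + 1):
        for sign in (-1.0, 1.0):
            x = 1.0 + sign * delta * i / (samples + 1)
            if not abs(sqrt(x * x + 8.0) - 3.0) < eps:
                return False
    return True

for eps in (1.0, 0.5, 1e-2, 1e-6):
    assert delta_works(eps, min(1.0, eps / 3.0))   # delta = min(1, eps/3) works

assert not delta_works(1e-3, 1.0)   # while a careless large delta fails
```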
|calculus|limits|epsilon-delta|
0
How to argue that two outcomes are equally likely?
When it comes to the concept of "equally likely," students are not told what makes two outcomes equally likely, but are instead shown what it means via examples. "When flipping a coin, heads and tails are equally likely" and "When rolling a die, the faces 1-6 are equally likely" are such common examples. However, this leads me to believe that "two outcomes are equally likely" is merely an intuitive fact and cannot be justified by any logical argument. Is it possible to justify––even better, prove ––that two outcomes are equally likely? For instance, how do you prove $P(heads)=P(tails)$ ? The only potential justification that comes to my mind is similar to the law of large numbers: that the relative frequency of the two outcomes approaches the same number when the experiment is repeated an infinite number of times. But I see this as more of an empirical justification than an actual proof, because it is not actually possible to repeat an experiment "an infinite number of times."
In full generality, we can never argue with 100% certainty that any physical experiment is correctly modeled by any probabilistic model - whether it has uniform probabilities or not. We can conclude that a model is pretty close to the truth by doing repeated experiments, but we are also guided by wanting a useful model. Here are some reasons to use uniform models in probability: Simplicity. Sometimes the calculations would just become terrible if we did not assume equally-likely outcomes, so if that assumption is not too wrong, we can use it to simplify our life. Symmetry. If there are $n$ possible outcomes, and no reason to favor any outcome over the others, we may start by assuming that each one has probability $\frac1n$ , because it's a very simple model and we don't have a reason to want a more complicated one yet. How long we stick with this depends on how much we know about the experiment, which brings us to... A starting point for inference. More sophisticated techniques try to refine this uniform starting point, updating the assigned probabilities as experimental evidence accumulates.
|probability|proof-writing|
0
Upper bound on tail probability $P(X \geq 100)$ with given generating function at a point, $\mathbb E (10^X)=10$
Find the best possible upper bound on $P(X \geq 100)$ if $g_X(10)=10$ ( $g_X$ is a generating function), where $X$ is a discrete random variable. I started by writing the equation to find possible generating function $g_X(10)=(1-p)\cdot10^0+p\cdot10^{100} = 10$ . Its solution is $p = \frac{9}{10^{100}-1}$ , hence $P(X\geq 100) = \frac{9}{10^{100}-1}$ . Is it possible to give a better estimate? It seems to me that it is not, because as we increase the exponent at $p\cdot10^{100}$ , (e.g. $p\cdot10^{101}$ ) , then $p$ decreases, and as we increase the exponent at ( $1-p)\cdot10^0$ (e.g. ( $1-p)\cdot10^1$ ), then $p$ is zero or negative. Although I'm not sure, which is why I'm asking.
From the OP, I guess you assumed that $X$ is non-negative , which implies $$10^X-1 \ge 0.$$ Now, from the Markov's inequality we have $$\mathbb P (X \geq 100)=\mathbb P (10^X-1 \geq 10^{100}-1) \le \frac{\mathbb E (10^X-1)}{10^{100}-1}=\frac{g_X(10)-1}{10^{100}-1}=\color{blue}{\frac{9}{10^{100}-1}}$$ where recall $g_X(t):=\mathbb E (t^X)$ for $t>0$ . This gives the bound you are seeking, which is tight for a random variable $X_0$ with $$\mathbb P (10^{X_0}-1=10^{100}-1)=\frac{9}{10^{100}-1},\mathbb P (10^{X_0}-1=0)=1-\frac{9}{10^{100}-1}. $$ If $X$ can take negative values , then $$10^X > 0.$$ In this case, again from the Markov's inequality we have $$\mathbb P (X \geq 100)=\mathbb P (10^X \geq 10^{100}) \le \frac{\mathbb E (10^X)}{10^{100}}=\frac{g_X(10)}{10^{100}}=\color{blue}{\frac{10}{10^{100}}}.$$ This bound is not tight, but can be approached arbitrarily by the sequence $X_n$ defined as ( $n \in \mathbb N$ ) $$\mathbb P (10^{X_n}-10^{-n}=10^{100}-10^{-n})=\frac{10-10^{-n}}{10^{100}-10^{-n}},\qquad \mathbb P (10^{X_n}-10^{-n}=0)=1-\frac{10-10^{-n}}{10^{100}-10^{-n}}. $$
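The tightness of the first bound can be verified with exact arithmetic. A sketch in Python, checking that the two-point distribution $X_0$ from the answer satisfies the constraint $\mathbb E(10^{X_0})=10$ while attaining the bound:

```python
from fractions import Fraction as F

# The extremal two-point distribution from the non-negative case:
# X0 = 100 with probability p, X0 = 0 otherwise.
p = F(9, 10 ** 100 - 1)

expected = p * 10 ** 100 + (1 - p) * 10 ** 0    # E[10^X0] = g_X0(10)
assert expected == 10                           # the constraint holds exactly
assert p < F(10, 10 ** 100)                     # and beats the second bound
```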
|probability|
1
Prove that $\partial x = \partial \cdot x = n$ (Geometric algebra)
I’m trying to understand equation (2-1.34) on page 51 of Hestenes and Sobczyk’s “Clifford Algebra to Geometric Calculus”. $\partial x = \partial \cdot x = n \tag{1.34}$ According to the book, this follows from $\partial \wedge x = 0, \tag{1.33}$ $\partial_x = P(\partial_x) = \sum_k a^ka_k\cdot \partial_x, \tag {1.5}$ $a\cdot\partial_x x = P(a) = \partial_x(x\cdot a), \tag{1.18}$ because $\partial \cdot x = \sum_k a^k\cdot(a_k\cdot\partial x) = \sum_k a^k \cdot a_k =n. \tag{*}$ Here $x$ is a vector from an $n$ -dimensional vector space $x\in \mathcal A_n$ with orthonormal basis $a_1,…, a_n$ . As $\partial x = \partial \wedge x + \partial \cdot x$ , the first equation of $(1.34)$ follows from $(1.33)$ . My problem is understanding the first equation on the left of $(\ast)$ .
The equality $$\partial\cdot x = \sum_ka^k(a_k\cdot\partial x)$$ follows directly from $\partial = \sum_ka^k(a_k\cdot\partial)$ and the fact that $a_k\cdot\partial$ is scalar-like.
|geometric-algebras|
1
Orbit of Lagrange resolvent under complex conjugation
The orbit of Lagrange resolvent $R = \sum{a_i w^i}$ under the action of $S_n$ plays a key role in Lagrange's method of solving polynomial equations. I've been playing with the case n=4 and noticed the following: Orbit of simple $R$ has size 24. No surprise here because the $w^i$ are linearly independent. Orbit of $R^4$ has size 6. Again no surprise because $R^4 = (w^i R)^4$ for each $i = 0, 1, 2, 3$ . Taking complex conjugation, the orbit of $(R \bar{R})^4$ has size 3 because the complex conjugation automorphism has order 2 in the Galois group. The surprise came when I noticed that also the orbit of just $R \bar{R}$ has size 3, and similar observations can be made for other n, e.g. 5. I would expect that according to arguments above the orbit would have size $n!/2$ . There must apparently be a hidden structure in the resolvent that I don't see.
Well, it's not exactly correct to say that the $w^i$ 's are linearly independent over the base field, e.g. trivially $\sum{w^i} = 0$ . Nevertheless, this doesn't affect the orbit of $R$ . Anyway, there is a similar action of the roots of unity as in the second point, namely that $w^iR\cdot w^{n-i}\bar{R} = R\bar{R}$ for each $i = 1, \ldots, n$ . That explains why the orbit size of $R\bar{R}$ is $n!/2n$ .
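The orbit sizes are easy to confirm numerically for $n=4$. A sketch in Python (the values $a=(1,2,4,8)$ are an arbitrary "generic" choice; an arithmetic progression such as $1,2,3,4$ would produce accidental coincidences in the orbit of $R$ itself):

```python
from itertools import permutations

n = 4
w = 1j                                   # primitive 4th root of unity
a = [1, 2, 4, 8]                         # "generic" real values (illustrative)

def orbit_size(f):
    """Number of distinct values of f(R) as S_4 permutes the a_i in
    R = sum a_{sigma(i)} w^i (values compared after rounding)."""
    vals = set()
    for perm in permutations(range(n)):
        R = sum(a[perm[i]] * w ** i for i in range(n))
        z = complex(f(R))
        vals.add((round(z.real, 6), round(z.imag, 6)))
    return len(vals)

sizes = (orbit_size(lambda R: R),
         orbit_size(lambda R: R ** 4),
         orbit_size(lambda R: R * R.conjugate()))
print(sizes)   # (24, 6, 3): orbits of R, R^4 and R*conj(R)
```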
|abstract-algebra|number-theory|galois-theory|
1
Probability of infinite coin toss where tossing stops only if the number of heads is twice the number of tails
The coin is tossed until the number of heads is exactly equal to twice the number of tails. The coin lands on heads with probability $p$ . What is the probability that a coin will be tossed forever? A similar problem (A coin lands heads with probability $p$ . The coin is tossed until the first heads will come up. What is the probability that an even number of tosses will be made?) was solved by reducing the result to an infinite sum, which was the sum of a geometric progression: $$(1-p)*p + (1-p)^3*p +\ldots = (1-p)*p*(1+(1-p)^2+(1-p)^4 + \ldots) = (1-p)*p/(1-(1-p)^2)=(1-p)/(2-p)$$ I assume, if I’m not mistaken, that there should be a similar approach here, but I’m a little confused about how exactly this should be decided: I first thought to subtract from 1 the probability of events when the coin toss stops, that is, when the number of heads becomes equal to twice the number of tails (when you get 1 tail and 2 heads, 2 tails and 4 heads, and so on), but the problem is that there are sequences which would then be counted more than once.
As pointed out in the comment, your formula involves double counting. Sketch: Let $S_n$ be the number of sequences of length $3n$ having $n$ tails and $2n$ heads. Let $T_n$ be the number of sequences with $n$ tails and $2n$ heads, such that there is no proper prefix of length $3m$ , $1\le m<n$ (starting from the first element), having $m$ tails and $2m$ heads (that is, the sequence is the first one having the desired property). Then $$S_n = T_n +T_{n-1} {3 \choose 1} +T_{n-2} {6 \choose 2}+ \cdots + T_1 {3(n-1) \choose n-1} ={3n \choose n}\tag1$$ And the probability of ending is $$T_1 a + T_2 a^2 + T_3 a^3 +\cdots \tag 2$$ where $a = p^2 q = p^2(1-p)$ . It remains to compute $T_n$ explicitly from $(1)$ , plug it in $(2)$ and see if you can express it in some simple analytic form. PS: Plugging the first terms, I found https://oeis.org/A024485 : $$T_n={3n \choose n} \frac{2}{3n−1} \tag3$$ I guess we might prove it by induction. To tackle $(2)$ , perhaps this is useful.
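The closed form $(3)$ can be checked against brute-force enumeration of sequences. A sketch in Python (tails are encoded by their position set):

```python
from itertools import combinations
from math import comb

def T(n):
    """Brute-force count of length-3n head/tail sequences with n tails and
    2n heads whose first 'heads = twice tails' moment is at the very end."""
    count = 0
    for tail_positions in combinations(range(3 * n), n):
        tails = set(tail_positions)
        # reject sequences with a proper prefix of length 3m holding m tails
        if all(len(tails & set(range(3 * m))) != m for m in range(1, n)):
            count += 1
    return count

for n in range(1, 5):
    assert T(n) == 2 * comb(3 * n, n) // (3 * n - 1)   # the closed form (3)
```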
|probability|
0
Intrinsic proof that sprays induce involutions
Let $M$ be a smooth manifold. Let $V$ be the canonical vector field on $T M$ (also called the Liouville vector field), which if $(x, y)$ are local coordinates on $T M$ is defined by $V = y^i \frac{\partial }{\partial y^i}$ . Also denote by $J : TTM \to TTM$ the canonical endomorphism of $TTM$ (also called an almost tangent structure), which in local coordinates is given by $J(a^i \frac{\partial }{\partial x^i} + b^i \frac{\partial }{\partial y^i}) = a^i \frac{\partial }{\partial y^i}$ . These definitions immediately imply some basic facts: $\mathcal{L}_V J = -J$ , $J^2 = 0$ (in fact $\operatorname{im} J = \operatorname{ker} J$ ), and $[JX, JY] = J[JX, Y] + J[X, JY]$ . Now let $S$ be a spray on $M$ , which means that $S$ is a vector field on $TM$ such that (1) $J S = V$ and (2) $[V, S] = S$ . In this situation, the Lie derivative $\mathcal{L}_S J$ is an involution on $TTM$ , which is intimately connected to the affine connection $S$ induces on $M$ . (Namely $S$ defines an Ehresmann connection on $TM$ .)
Not sure this is an answer, but it is too long for a comment. One of the reasons why the proof might not be shorter is that the intrinsic definition of the canonical endomorphism first projects and then lifts: $J(v)=(\pi_*(v))^V$ , with $(\cdot)^V$ the vertical lift and $\pi$ the tangent bundle projection. While easy in coordinates, working with the above may not be straightforward. If you are familiar with vertical and complete lifts, there is an alternative approach. You can then show that $(\mathcal{L}_SJ)(X^V)=X^V$ and $(\mathcal{L}_SJ)^2(X^C)=X^C$ . The first one is easy, the second one involves more work. From here, since $\{X^V,X^C\}$ span $TTM$ you are done. You can find this computation, or similar ones, in a few papers. One of the first ones I am aware of is the following: Crampin, M. - Tangent bundle geometry for Lagrangian dynamics. J. Phys. A 16, 3755-3772 (1983).
|differential-geometry|riemannian-geometry|connections|lie-derivative|tangent-bundle|
0
Radical of a bilinear form
Given a bilinear form $b$ on a vector space, one defines the radical , which is the set of vectors which are orthogonal (with respect to $b$ ) to every other vector (so essentially the orthogonal complement of the whole space). My question is: Why is it called radical? Is there an example that motivates this name?
This goes back, at least, to Witt(1936)
|linear-algebra|soft-question|bilinear-form|
0
A linear operator is bounded iff the image of every sequence convergent to $0$ is bounded
Let $E,F$ be normed spaces and $T:E \rightarrow F$ a linear operator. Then, $T$ is bounded if and only if the image of every sequence on $E$ which converges to $0$ is bounded on $F$ . In order to prove the left-to-right direction, I suppose that there exists $\{x_n\} \subset E$ such that $x_n \rightarrow 0$ and $\{T x_n \} \subset F$ is not bounded. Then, $\forall k \in \Bbb{N} \, \exists m_k \in \Bbb{N} : \lVert T x_{m_k} \rVert > k$ . Since $x_n \rightarrow 0$ , $\forall \varepsilon >0 \, \exists n_\varepsilon \in \Bbb{N} \, \forall n \geq n_\varepsilon : \lVert x_n \rVert < \varepsilon$ . Thus, for every $k \in \Bbb{N}$ such that $m_k \geq n_1$ : $k \lVert x_{m_k} \rVert < k < \lVert T x_{m_k} \rVert$ , so $T$ is not bounded. I am getting stuck proving the other direction. I think I should take a sequence convergent to $0$ and prove that its image also converges to 0. Then, since the continuity of a linear operator at a point is equivalent to its boundedness, it would be finished. This seems to be pretty easy but I do not see how to do it.
You kind of overcomplicated the first direction. It is much easier: if $x_n\to 0$ then by continuity, $T(x_n)\to 0$ as well, and a convergent sequence must be bounded. For the converse, assume $T$ is not bounded. Then for each $n\in\mathbb{N}$ there is some $x_n\in E$ with $||x_n||=1$ and $||T(x_n)||>n$ . But now note that the sequence $y_n=\frac{x_n}{\sqrt{n}}$ tends to $0$ , and $T(y_n)$ is not bounded, which contradicts the assumption.
|functional-analysis|
1
Discontinuous process with same marginals as Brownian Motion
I understand that there are many discontinuous modifications of Brownian Motion, one of which seen in my course was: $$X_t = \begin{cases} B_t & t\neq u \\ 0 & t=u \\ \end{cases} $$ where $(B(t):t\in[0,\infty))$ is BM and $u$ is $\text{Unif}([0,1])$ . However, I don't understand the claim that $(X(t))$ has the same finite dimensional marginals as $(B(t))$ . Recall that same finite-dimensional marginals means that with probability 1, for any finite sets $\{t_j\}_1^k\subseteq T$ , the collection of marginals of both stochastic processes equate in law with each other. Any help is appreciated. Edit: I'm adding "with probability 1" to the beginning the definition of same finite-dimensional marginals for congruence with the answer below. Also, in the same light as the answer below, it's clear that if the BM was $(B(t): t\in [0,n] \cap \mathbb{N})$ for some $n$ , then equality in f.d. marginals would no longer hold.
The reason is that for any finite set on which you're taking marginals, the uniform random variable will almost surely never be on one of those points - this is true for a simpler example too: For $t \in [0,1]$ , take $X_t := 0$ , a uniform $[0,1]$ random variable $U$ , and $Y_t = \chi_{\{U\}}(t)$ to be 1 at the value chosen by $U$ , 0 otherwise. Then for any finite set $\{t_i\}$ , we have $$ \mathbb P(Y_{t_i} = 0 \textrm{ for all }i) = 1 $$ so that $\mathbb P(X_{t_i} = Y_{t_i} \textrm{ for all }i) = 1$ , but that $\mathbb P(X_t = Y_t \textrm{ for all }t) = 0$ since $X_U = 0 \not = Y_U = 1$ (there's always exactly one $t$ for which the two processes differ). Your example just has more complicated (but still identical) definitions of processes away from the uniformly selected point.
|probability-theory|measure-theory|stochastic-processes|brownian-motion|
0
Haar Measure of Product of Compact Sets
This is from Bachman's Harmonic Analysis book, exercise 8.3 Let $\mu$ be a Haar measure in $G$ and let $E_1$ and $E_2$ be two compact subsets of $G$ such that $\mu(E_1)=\mu(E_2)=0$ . Does this imply that $\mu(E_1E_2)=0$ ? My attempt: $E_1E_2$ is compact. By outer regularity, consider an open neighborhood $U_2$ of $E_2$ with $\mu(U_2)<\epsilon$ . $\{xU_2\}_{x\in E_1}$ is an open cover of $E_1E_2$ with, by left invariance, $\mu(xU_2)=\mu(U_2)<\epsilon$ . Let the finite subcover be $\{x_iU_2\}_{i=1}^n$ . So $\mu(E_1E_2)\le n\epsilon$ . But this doesn't work because $n$ is dependent on $\epsilon$ . Also, I didn't use the compactness of $E_1$ . I don't even know whether the hypothesis implies that $\mu(E_1E_2)=0$ . Any help is appreciated. Thanks.
An example in $\mathbb R$ : the Cantor set $C$ has measure zero and $C+C=[0,2]$ . A proof is in this article .
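The digit-wise mechanism behind $C+C=[0,2]$ can be demonstrated at finite precision: every $2m/3^k$ splits as a sum of two level-$k$ Cantor endpoints via the digit map $0\mapsto(0,0)$, $1\mapsto(0,2)$, $2\mapsto(2,2)$, with no carries. A sketch in Python (the resolution $k=8$ is an arbitrary choice):

```python
def split_cantor(m, k):
    """Write 2*m as a + b where a/3^k and b/3^k are level-k Cantor set
    endpoints (ternary digits 0 or 2 only); returns integer numerators."""
    a = b = 0
    for i in range(k):
        d = (m // 3 ** i) % 3                  # i-th ternary digit of m
        lo, hi = {0: (0, 0), 1: (0, 2), 2: (2, 2)}[d]
        a += lo * 3 ** i
        b += hi * 3 ** i
    return a, b

k = 8
for m in range(3 ** k):                        # every target 2*m/3^k in [0, 2)
    a, b = split_cantor(m, k)
    assert a + b == 2 * m                      # the digit map needs no carries
    assert all((a // 3 ** i) % 3 != 1 for i in range(k))   # a is 1-free
```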
|harmonic-analysis|topological-groups|locally-compact-groups|haar-measure|
1
Problem 7-5 Algebraic Curves William Fulton
I have been trying problem 5 of chapter 7 of Fulton's book; I attach the statement in the following link: Problem 7-5 Fulton QUESTION: Let $P$ be an ordinary multiple point on $C$, $f^{-1}(P) = \{P_1,...,P_r\}$, $L_i = Y -\alpha_iX$ the tangent line corresponding to $P_i = (0, \alpha_i)$. Let $G$ be a plane curve with image $g$ in $\Gamma (C) \subset \Gamma (C')$. a) Show that $ord^{C'}_{P_i}(g)\geq m_p(G)$ , with equality if $L_i$ is not tangent to $G$ at $P$. b) If $s \leq r$ and $ord^{C'}_{P_i}(g)\geq s $ for each $i = 1,...,r $, show that $ m_p(G) \geq s $ (Hint: How many tangents would $G$ have otherwise?) MY STEPS: I'm not very sure, but I'm trying to analyze the order of intersection between the tangent lines $L_i$ and the plane curve $C'$. I also know that $ord^{C'}_{P_i}(g)\geq 1$, and that $m_p(G)$ is the intersection multiplicity between $C$ and $G$ at the multiple point $P$; given that an ordinary multiple point has at least two distinct tangent lines, I'm not sure but it seems to
Here's my solution to this problem: If $G$ does not pass through $P$ then $m_p(G)=0$ and we are done. Thus, assume $m_p(G)=m$ , where $m>0$ . Write $G=G_m+...+G_n,$ where each $G_i$ is a form of degree $i$ . Further, we can write $G_m=\prod_{i=1}^k(\gamma_i X - \beta_iY)^{j_i}$ where $\sum_{i=1}^k j_i=m$ . I will only consider the case where the $\beta_i$ 's are nonzero; however, the problem does go through with slight modifications in the opposite scenario. Thus, with the latter assumption, we can further say $G_m=\prod_{i=1}^k(\gamma_i X - Y)^{j_i}$ . Then, $g \in \Gamma(C')$ can be written as $ g(x,xz)= x^m(\prod_{i=1}^k(z-\gamma_i)^{j_i}+x(...))$ In particular, since the order function turns multiplication into sums, we have $$ord_{P_i}^{C'}(g)=m + ord_{P_i}^{C'}(\prod_{i=1}^k(z-\gamma_i)^{j_i}+x(...)) \geq m = m_p(G) $$ In this last line, equality holds iff $\prod_{i=1}^k(z-\gamma_i)^{j_i}+x(...)$ is a unit in the local ring at $P_i$ , which happens iff its value at $P_i$ is nonzero.
|abstract-algebra|algebraic-geometry|algebraic-curves|
0
Generating function problem
Using generating functions, find the number of ways to divide $n$ coins into $5$ boxes that meet the condition that the number of coins in boxes $1$ and $2$ is an even number that does not exceed $10$ , and the remaining boxes contain $3$ to $5$ coins each. I have found that the generating function of this problem is $(1+x^2+x^4+x^6+x^8+x^{10})^2(x^3+x^4+x^5)^3$ . I have tried to convert it into $\frac{(x^{12}-1)^2x^9(x^3-1)^3}{(x^2-1)^2(x-1)^3}$ , however I can't make any further progress.
All that is needed is to multiply out your product and read off the answer: $x^{35} + 3 x^{34} + 8 x^{33} + 13 x^{32} + 21 x^{31} + 26 x^{30} + 35 x^{29} + 39 x^{28} + 49 x^{27} + 52 x^{26} + 63 x^{25} + 65 x^{24} + 75 x^{23} + 72 x^{22} + 75 x^{21} + 65 x^{20} + 63 x^{19} + 52 x^{18} + 49 x^{17} + 39 x^{16} + 35 x^{15} + 26 x^{14} + 21 x^{13} + 13 x^{12} + 8 x^{11} + 3 x^{10} + x^{9}$ The number of ways for a given $n$ is the coefficient of $x^n$. That says there is exactly $1$ way to place $35$ coins ($10$ in each of the first two boxes and $5$ in each of the other three), exactly $1$ way to place $9$ coins ($0$ in each of the first two boxes and $3$ in each of the other three), and zero ways to place more than $35$ or fewer than $9$. As a check, the coefficients sum to $6\cdot 6\cdot 3^3 = 972$, the total number of fillings. Your rewriting as a rational function is algebraically fine, but the expanded form is what answers the question directly.
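An expansion like this is easy to get wrong by hand; a naive polynomial convolution in Python (function and variable names are mine) recomputes the coefficients directly from the problem's constraints:

```python
# Multiply out (1 + x^2 + ... + x^10)^2 * (x^3 + x^4 + x^5)^3 by naive
# convolution; list index = degree, entry = coefficient.
def polymul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

box12 = [1 if k % 2 == 0 else 0 for k in range(11)]  # even count, at most 10
box345 = [0, 0, 0, 1, 1, 1]                          # 3 to 5 coins
gf = [1]
for factor in [box12, box12, box345, box345, box345]:
    gf = polymul(gf, factor)
```

Reading off `gf[n]` gives the count for $n$ coins; the coefficients are symmetric about degree $22$ and sum to $6\cdot6\cdot3^3=972$.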
|combinatorics|discrete-mathematics|generating-functions|
0
How do I find an algebraic expression for the function $F(ξ, \bar{ξ})$ from this paper?
I am working on understanding the paper "On $C^2$ -smooth Surfaces of Constant Width" by Brendan Guilfoyle and Wilhelm Klingenberg . As part of their definition of equations for a 3D surface, they use a function $F(ξ, \bar{ξ})$ where $ξ$ is "the local complex coordinate on the unit 2-sphere in $\mathbb{R}^3$ obtained by stereographic projection from the south pole" and $\bar{ξ}$ is the complex conjugate of $ξ$ . However, they don't explicitly state an algebraic definition for $F$ in any of the places they use it. All they do is state some conditions $F$ must satisfy, and I don't see how to derive an explicit expression from those alone. How do I work out an expression for $F(ξ, \bar{ξ})$ ? @Abezhiko says that if you have a circle parameterized as $x(t) = r \cos(t), y(t) = r \sin(t)$ , you can reparameterize it as $(x, y = F(x))$ with $F(x) = ±\sqrt{r^2 - x^2}$ . That makes sense, and it appears to be analogous to what the authors of this paper are doing: they explain a method of finding an orien
$F$ seems to be a parametrization of the curve under consideration, such that the first coordinate is treated as a free/independent variable. Concretely, given the equation describing a curve, namely $\Phi(\xi,\eta) = 0$ , you can express the second coordinate as a function of the first one, i.e. $\eta = F(\xi,\bar{\xi})$ . For instance, the circle $x^2 + y^2 = r^2$ is usually parametrized as $(r\cos t, r\sin t)$ , but you may reparametrize it as $(x, y = F(x))$ , with $F(x) = \sqrt{r^2-x^2}$ (for the upper half-circle here).
|functions|differential-geometry|complex-geometry|
0
Is this derivation of the inverse Laplace transform solid?
I tried to recreate the proof of the inverse Laplace transform formula based on the blurry memory from an electrical engineering book I've read and I would like a real mathematician to look at it and tell me if I've omitted some little detail that electrical engineers are not told about. It goes as follows: Let $$ f(t) = \left\{\begin{split}&g(t),~t \in [0, +\infty),\\&0, t \in (-\infty, 0).\end{split}\right. $$ The Laplace transform of $f(t)$ can be written as $$ \mathcal{L}\{f(t)\} = \int_{0}^{+\infty}{f(t)e^{-st}}dt = \int_{-\infty}^{+\infty}{f(t)e^{-st}}dt = \int_{-\infty}^{+\infty}{f(t)e^{-\sigma t}e^{-i\omega t}}dt = \mathcal{F}\{f(t)e^{-\sigma t}\}. $$ Thus, $$ f(t)e^{-\sigma t} = \mathcal{F}^{-1}\{\mathcal{L}\{f(t)\}\} = \frac{1}{2\pi}\int_{-\infty}^{+\infty}{\mathcal{L}\{f(t)\}e^{i\omega t}}d\omega, $$ and $$ f(t) = \frac{1}{2\pi}\int_{-\infty}^{+\infty}{\mathcal{L}\{f(t)\}e^{(\sigma + i\omega)t}}d\omega = \frac{1}{2\pi i}\int_{\sigma - i\infty}^{\sigma + i\infty}{\mathcal{L}\{f(t)\}e^{st}}\,ds. $$
The development looks good. And yes, since $f(t)\equiv 0$ for $t < 0$ , we can assert that $$\int_0^\infty f(t)e^{-st}\,dt=\int_{-\infty}^\infty f(t)e^{-st}\,dt$$ where we assume that $f(t)e^{-\sigma t}H(t)$ is in $L^1$ . That is to say, $\int_{0}^\infty |f(t)|e^{-\sigma t}\,dt < \infty$ . Then, let $F$ represent the Laplace Transform of $f$ as given by $$F(s)=\mathscr{L}\{f\}(s)=\int_{-\infty}^\infty f(t)e^{-st}\,dt$$ Denoting the real and imaginary parts of $s$ by $\sigma$ and $\omega$ , respectively, we see that $$\begin{align} F(s)&=F(\sigma+i\omega)\\\\ &=G_\sigma (\omega)\\\\ &=\int_{-\infty}^\infty f(t) e^{-\sigma t}e^{-i\omega t}\,dt\\\\ &=\mathscr{F}\{g_\sigma\}(\omega) \end{align}$$ where $G_\sigma(\omega)=F(\sigma+i\omega)$ , and $g_\sigma(t)=f(t)e^{-\sigma t}$ . Next, we assume that $G_\sigma\in L^1$ and $g_\sigma$ is continuous at the point $t$ . Then, applying the Fourier Inversion Theorem, we find that $$\begin{align} f(t)e^{-\sigma t}&=g_\sigma(t)\\\\ &=\mathscr{F}^{-1}\{G_\sigma\}(t) \end{align}$$
|solution-verification|fourier-transform|laplace-transform|inverse-laplace|
1
Are these definitions of the total derivative equivalent?
The total derivative of a function $F:\mathbb{R}^n \to \mathbb{R}^m$ is defined in this way: If a linear map $L:\mathbb{R}^n\to\mathbb{R}^m$ exists such that \begin{equation} \tag{*} \lim_{\boldsymbol{h}\to\boldsymbol{0}} \frac{||F(\boldsymbol{x}+\boldsymbol{h})-F(\boldsymbol{x}) - L\boldsymbol{h}||}{||\boldsymbol{h}||} = 0 \end{equation} then we say $L$ is the total derivative of $F$ at $\boldsymbol{x}$ and we denote it by $D_x F$ . This is analogous to a possible definition of the derivative in the single-variable case for a function $f:\mathbb{R}\to\mathbb{R}$ : If a number $l\in\mathbb{R}$ exists such that $$ \lim_{h\to 0} \frac{f(x+h)-f(x)-lh}{h} = 0 $$ then we say $l$ is the derivative of $f$ at $x$ and we denote it by $f'(x)$ . However, a more standard direct definition of the derivative in the single variable case is usually used: $$ f'(x) = \lim_{h\to0} \frac{f(x+h)-f(x)}{h} $$ where the definition holds if the limit on the right exists. Is a similar direct definition possible in the multivariable case? For instance: if, for every vector $\boldsymbol{v}\in\mathbb{R}^n$ , the limit \begin{equation} \tag{**} D_x F \: \boldsymbol{v} = \lim_{h\to 0} \frac{F(\boldsymbol{x}+h\boldsymbol{v})-F(\boldsymbol{x})}{h} \end{equation} exists, is that equivalent to the definition $(*)$ ?
No, because $(**)$ is the definition of the directional derivative, not the derivative. There are examples of functions that have directional derivatives in all directions, i.e. $D_x F \: \pmb{v}$ exists for all directions $\pmb {v}$ , but the function is not differentiable. A classical example is $f(x,y)=\frac{x^3}{x^2+y^2}$ considered at the origin $(0,0)$ . In polar coordinates the function is ${f}(r,\theta)=r \cos^3(\theta)$ . Thus $\frac{\partial f}{\partial r}= \cos^3(\theta)$ , so the directional derivative at $(0,0)$ exists in all directions, but the function is non-differentiable since there is no linear map $L$ that satisfies $(*)$ . You can see it by noting that $(*)$ would imply that all directional derivatives are linear combinations of the partial derivatives $(\frac{\partial f }{\partial x},\frac{\partial f }{\partial y})$ . Essentially, being differentiable at a point means you can place a tangent plane at that point. You can have directional derivatives in all directions but no tangent plane.
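The failure of linearity in this example can be checked numerically; a small sketch (the helper `dir_deriv` and the step size are my choices): the directional derivative of $x^3/(x^2+y^2)$ at the origin is $v_1^3/(v_1^2+v_2^2)$, which is not additive in $v$, so no single linear map $L$ can produce all of them.

```python
# Directional derivatives of f(x, y) = x^3/(x^2 + y^2) at the origin:
# D_v f(0,0) = lim_{h->0} f(h v)/h = v1^3/(v1^2 + v2^2).
def dir_deriv(v1, v2, h=1e-6):
    f = lambda x, y: x**3 / (x**2 + y**2) if (x, y) != (0.0, 0.0) else 0.0
    return f(h * v1, h * v2) / h

d_e1 = dir_deriv(1, 0)    # partial derivative in x: 1
d_e2 = dir_deriv(0, 1)    # partial derivative in y: 0
d_diag = dir_deriv(1, 1)  # along (1,1): 1/2, not d_e1 + d_e2
```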
|multivariable-calculus|definition|
1
Are these definitions of the total derivative equivalent?
The total derivative of a function $F:\mathbb{R}^n \to \mathbb{R}^m$ is defined in this way: If a linear map $L:\mathbb{R}^n\to\mathbb{R}^m$ exists such that \begin{equation} \tag{*} \lim_{\boldsymbol{h}\to\boldsymbol{0}} \frac{||F(\boldsymbol{x}+\boldsymbol{h})-F(\boldsymbol{x}) - L\boldsymbol{h}||}{||\boldsymbol{h}||} = 0 \end{equation} then we say $L$ is the total derivative of $F$ at $\boldsymbol{x}$ and we denote it by $D_x F$ . This is analogous to a possible definition of the derivative in the single-variable case for a function $f:\mathbb{R}\to\mathbb{R}$ : If a number $l\in\mathbb{R}$ exists such that $$ \lim_{h\to 0} \frac{f(x+h)-f(x)-lh}{h} = 0 $$ then we say $l$ is the derivative of $f$ at $x$ and we denote it by $f'(x)$ . However, a more standard direct definition of the derivative in the single variable case is usually used: $$ f'(x) = \lim_{h\to0} \frac{f(x+h)-f(x)}{h} $$ where the definition holds if the limit on the right exists. Is a similar direct definition possible in the multivariable case? For instance: if, for every vector $\boldsymbol{v}\in\mathbb{R}^n$ , the limit \begin{equation} \tag{**} D_x F \: \boldsymbol{v} = \lim_{h\to 0} \frac{F(\boldsymbol{x}+h\boldsymbol{v})-F(\boldsymbol{x})}{h} \end{equation} exists, is that equivalent to the definition $(*)$ ?
No, these are not equivalent. The reason is that $(*)$ allows for more than just straight paths along which $\bf h$ can approach $0$ , while $(**)$ allows for only straight paths, since $h\bf v$ is always on the same straight line. For instance, consider the following function: $$ f:\mathbb R^2\to\mathbb R, \\ f(x,y)=\begin{cases} 0&\textrm{if }y=x^2\\ x&\textrm{else} \end{cases} $$ If we graph this function, it's basically a constantly sloping plane, except on the parabola $y=x^2$ , where it is $0$ . According to definition $(*)$ , this is not differentiable at $(0,0)$ , because the only possible differential is $(1,0)$ (by calculating partial derivatives), but $\lim_{h\to0}\frac{f(h,h^2)-f(0,0)-(1,0)\cdot(h,h^2)}{h}=-1\neq 0$ . However, according to definition $(**)$ , this is differentiable, since the corresponding limits exist. Your definition $(**)$ hints at the directional derivative: the limit you're calculating is exactly the directional derivative of $f$ along the direction $v$ , and its existence for every $v$ is a strictly weaker requirement than $(*)$ .
|multivariable-calculus|definition|
0
number of homomorphisms from $S_n$ to $\mathbb Z_n$
How can I find all homomorphisms $f:S_n \rightarrow \mathbb Z_n$ for $n > 4$, where $S_n$ is the permutation group on $n$ elements and $\mathbb Z_n$ is the group of integers mod $n$? My guess would be to use that the only normal subgroups of $S_n$ for $n>4$ are $1,A_n,S_n$ , together with the property of group homomorphisms that $\ker(f)$ is a normal subgroup.
When $n \geq 2$ , the group $S_n$ is generated by transpositions, which all have order $2$ and are all conjugate to each other. So if we have a homomorphism $f : S_n \to A$ where $A$ is an abelian group written additively, then $f$ has the same value on all transpositions, which implies $f$ is completely determined by the single value $f((12))$ . When a permutation $\sigma \in S_n$ is a product of $r$ transpositions, we have (i) $2f((12)) = 0$ in $A$ and (ii) $f(\sigma) = rf((12))$ . Conversely, for each $a \in A$ such that $2a = 0$ , the mapping $f : S_n \to A$ where $f(\sigma) = ra$ when $\sigma$ is a product of $r$ transpositions, is a homomorphism. The number $r$ is only well-defined modulo $2$ , which is okay since $ra$ only depends on $r \bmod 2$ . Thus the number of homomorphisms from $S_n$ to an abelian group $A$ equals the number of $a \in A$ such that $2a = 0$ . When $A$ is a finite cyclic group, like $\mathbf Z/(n)$ , there are no elements of order $2$ in $A$ when $|A|$ is odd and exactly one when $|A|$ is even, so there is $1$ homomorphism when $n$ is odd and there are $2$ when $n$ is even.
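For tiny cases the count can be confirmed by brute force; a Python sketch (function names mine, feasible only for very small groups) enumerates every map $S_3 \to \mathbb Z_m$ and keeps the homomorphisms:

```python
# Brute-force count of group homomorphisms S_3 -> Z_m, compared with the
# predicted count: the number of a in Z_m with 2a = 0.
from itertools import product, permutations

def hom_count(m):
    S3 = list(permutations(range(3)))
    compose = lambda s, t: tuple(s[t[i]] for i in range(3))  # s after t
    count = 0
    for values in product(range(m), repeat=len(S3)):
        f = dict(zip(S3, values))
        if all((f[compose(s, t)] - f[s] - f[t]) % m == 0
               for s in S3 for t in S3):
            count += 1
    return count

def predicted(m):
    return sum(1 for a in range(m) if (2 * a) % m == 0)  # 2 if m even, 1 if odd
```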
|abstract-algebra|group-theory|group-homomorphism|
0
Are the real and imaginary parts of orthogonal complex vectors linearly independent of each other?
Let $v_1,\dots,v_n$ be some $n$ orthogonal complex vectors, i.e. $v_k = a_k + ib_k$ for some real vectors $a_k,b_k$ . Is then the set $\{a_1,b_1,\dots,a_n,b_n\}$ linearly independent, i.e. are the real and imaginary parts of the $v_k$ s linearly independent of each other? We know that for $j \neq k$ $$0 = \langle v_j, v_k\rangle = \langle a_j + ib_j,\, a_k + ib_k\rangle = \langle a_j, a_k\rangle - i\langle a_j, b_k\rangle + i\langle b_j, a_k\rangle + \langle b_j, b_k\rangle$$ which doesn't necessarily imply that e.g. $\langle a_j, a_k\rangle = 0$ . But then again, the only examples that come to my mind are e.g. Fourier bases in $L^2$ , where this claim is true.
A counterexample: $$v_1=(1,0)+i(1,1)$$ $$v_2=(1,0)+i(1,-2)$$
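A quick check of this counterexample in Python (a sketch, using the inner product that conjugates the second argument): the two vectors are orthogonal in $\mathbb{C}^2$, yet $a_1 = a_2 = (1,0)$, so $\{a_1, b_1, a_2, b_2\}$ is linearly dependent.

```python
# v1 = (1,0) + i(1,1) and v2 = (1,0) + i(1,-2) as complex 2-vectors.
v1 = [1 + 1j, 1j]
v2 = [1 + 1j, -2j]

# Hermitian inner product <v1, v2> with the second argument conjugated
inner = sum(x * y.conjugate() for x, y in zip(v1, v2))

a1 = [z.real for z in v1]  # real part of v1
a2 = [z.real for z in v2]  # real part of v2 -- identical to a1
```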
|linear-algebra|functional-analysis|
1
Arc length of the cardioid
Compute the length of the segment of the cardioid $(r, θ) = (1+ \cos(t), t) $ such that $ t \in [0, 2π].$ How do I find the arc length of the cardioid? I did $\mathbf{r}'=\langle -\sin(t),1\rangle$ so $|\mathbf{r}'|=\sqrt{\sin^2(t)+1}$ , which I can't integrate in closed form. I think I may need to turn the polar coordinates into Cartesian, and then do this process, but I don't know how to do that.
I think I may need to turn the polar coordinates into cartesian, and then do this process, but I don't know how to do that. While manageable, working in Cartesian coordinates requires a lot more effort and is not advisable. The equation's Cartesian form is a fairly simple quartic polynomial equation, $$r = 1 + \cos\theta \implies (x^2-x+y^2)^2 = x^2 + y^2$$ Use the quadratic formula to reveal the $4$ possible solutions for $y$ as a function of $x$ , $$y(x) = \pm \sqrt{\frac{1+2x-2x^2 \pm \sqrt{1+4x}}2}$$ and together their plots form the cardioid, color-coded in the figure below. Because of the curve's symmetry, we need only $2$ of these solutions, e.g. the red and green solutions, to compute the arc length. Implicitly differentiating the Cartesian equation and solving for $\dfrac{dy}{dx}$ gives $$\frac{dy}{dx} = \frac{-2x^3+3x^2+(1-2x)y^2}{y(2x^2-2x-1+2y^2)}$$ Now plug in the corresponding $y$ 's, then each $\dfrac{dy}{dx}$ into the arc length integral, $\displaystyle\int_a^b\sqrt{1+\left(\frac{dy}{dx}\right)^2}\,dx$ .
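For comparison, the polar arc-length formula $L=\int_0^{2\pi}\sqrt{r^2+(dr/d\theta)^2}\,d\theta$ avoids the Cartesian detour entirely: with $r = 1+\cos\theta$ the integrand simplifies to $\sqrt{2+2\cos\theta}=2|\cos(\theta/2)|$, giving $L = 8$ exactly. A numeric sketch (trapezoid rule; the step count is my choice):

```python
# Arc length of the cardioid r = 1 + cos(t), t in [0, 2*pi], via the
# polar formula L = ∫ sqrt(r^2 + (dr/dt)^2) dt, trapezoid rule.
import math

def arc_length(n=200000):
    a, b = 0.0, 2 * math.pi
    h = (b - a) / n
    def g(t):
        r, dr = 1 + math.cos(t), -math.sin(t)
        return math.sqrt(r * r + dr * dr)
    s = 0.5 * (g(a) + g(b)) + sum(g(a + k * h) for k in range(1, n))
    return s * h

L = arc_length()
```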
|multivariable-calculus|polar-coordinates|plane-curves|arc-length|
0
SOA #315 Transformation of Random variable Solution review
I am trying to teach myself the math to pass the P exam and I was wondering if someone could tell me the error in my thinking for this problem. Problem : The random variable $Y_1 = e^{X_1}$ characterizes an insurer's annual property losses, where $X_1$ is normally distributed with mean 16 and standard deviation 1.5. Similarly, the random variable $Y_2 = e^{X_2}$ characterizes the insurer's annual liability losses, where $X_2$ is normally distributed with mean 15 and standard deviation 2. The insurer's annual property losses are independent of its annual liability losses. Calculate the probability that, in a given year, the minimum of the insurer's property losses and liability losses exceeds $e^{16}$ . The book I have gives the solution $$P(\min(Y_1,Y_2) > e^{16}) = P(Y_1 > e^{16}) \cdot P(Y_2 > e^{16}) = 0.5 \cdot 0.3085.$$ I was wondering why there is no need to calculate the probability that $X_1 > X_2$ and then do a linear combination (idk if the linear combinat
Adding an answer because it is too long for a comment. What you are trying to do is compute the probability using the Law of Total Probability as follows: \begin{align*} \Pr(\min\{Y_1, Y_2\} > a) & = \Pr(\{\min\{Y_1, Y_2\} > a \} \cap \{Y_1 > Y_2\}) + \Pr(\{\min\{Y_1, Y_2\} > a\} \cap \{Y_1 < Y_2\}) \\ & = \Pr(\{Y_2 > a\} \cap \{Y_1 > Y_2\}) + \Pr(\{Y_1 > a\} \cap \{Y_1 < Y_2\}) \\ & = \Pr(\{Y_2 > a\} | \{Y_1 > Y_2\})\Pr(Y_1 > Y_2) + \Pr(\{Y_1 > a\} | \{Y_1 < Y_2\})\Pr(Y_2 > Y_1). \end{align*} However, you are plugging in $\Pr(\{Y_2 > a\} | \{Y_1 > Y_2\}) = \Pr(Y_2 > a)$ , which is incorrect. If you try plugging in the correct value of the conditional probability above, you will end up obtaining the same expression as suggested in the solution.
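A numeric sanity check of the book's factorization (a sketch; `norm_sf` is my helper built on `math.erfc`): since $\{Y_1 > e^{16}\} = \{X_1 > 16\}$ and likewise for $Y_2$, the answer is the product of two normal tail probabilities.

```python
# P(min(Y1, Y2) > e^16) = P(X1 > 16) * P(X2 > 16) by independence and
# monotonicity of exp, with X1 ~ N(16, 1.5^2) and X2 ~ N(15, 2^2).
import math

def norm_sf(x, mu, sigma):
    """P(N(mu, sigma^2) > x) via the complementary error function."""
    return 0.5 * math.erfc((x - mu) / (sigma * math.sqrt(2)))

p1 = norm_sf(16, 16, 1.5)  # = 0.5
p2 = norm_sf(16, 15, 2)    # = P(Z > 0.5), about 0.3085
answer = p1 * p2
```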
|probability|probability-theory|
1
Proof for Particular Fair Shuffle Algorithm
I ran multiple simulations of the following function, and it seems to shuffle fairly, given that all permutations came out roughly equally often, but I don't understand why it works. It's just inserting at random positions within the current shuffled deck, isn't it? I know about Fisher-Yates shuffling, for reference.

def shuffle_deck(deck):
    shuffled_deck = []
    for card in deck:
        r = random.randint(0, len(shuffled_deck))
        shuffled_deck.insert(r, card)
    return shuffled_deck

I was expecting the permutation probabilities to be uneven.
One way of showing this is to consider the set of possible paths through the code. Note that at step $i$ , a random number between $0$ and $i$ is chosen to determine where to insert the next card. This gives us $1\cdot2\cdot3\cdots n=n!$ different paths through the code, and each path clearly corresponds to a different permutation. Contrariwise, it's possible to run the algorithm 'in reverse': for each $i$ from $n$ downward, look at the position of $i$ within the subpermutation of the shuffled permutation consisting of only the elements $1..i$ ; this recovers the insertion choice made at step $i$ , so the $n!$ equally likely paths are in bijection with the $n!$ permutations, and the shuffle is uniform.
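The path-counting argument can be checked exhaustively for a small deck (a sketch; names are mine): enumerating all choice sequences deterministically shows there are $n!$ paths and each yields a distinct permutation.

```python
# Replay the insert-shuffle for every possible sequence of random choices:
# at step i there are i+1 insert positions, so n! paths in total.
from itertools import product

def shuffle_path(deck, choices):
    out = []
    for card, r in zip(deck, choices):
        out.insert(r, card)
    return tuple(out)

n = 4
deck = list(range(n))
paths = product(*[range(i + 1) for i in range(n)])
results = [shuffle_path(deck, c) for c in paths]
```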
|probability|combinatorics|algorithms|python|
0
Proof for Particular Fair Shuffle Algorithm
I ran multiple simulations of the following function, and it seems to shuffle fairly, given that all permutations came out roughly equally often, but I don't understand why it works. It's just inserting at random positions within the current shuffled deck, isn't it? I know about Fisher-Yates shuffling, for reference.

def shuffle_deck(deck):
    shuffled_deck = []
    for card in deck:
        r = random.randint(0, len(shuffled_deck))
        shuffled_deck.insert(r, card)
    return shuffled_deck

I was expecting the permutation probabilities to be uneven.
... I don't understand why it works. It's just inserting at random positions within the current shuffled deck isn't it? Imagine this. You have a deck of cards kept in front of you. You spread out one of your hands so you can create a deck on it. You reach out, pick the card at the top and place it on your hand. You reach out to the deck, and once again, pick the card on top. This time, before placing the card on your hand, you think of a number (randomly) between $0$ and $1$ (the number of cards you have already placed), both inclusive. Whatever number you think of, you place the new card below that many card(s) from the top. For example, if your number was $0$ , the new card would go on top. If the number was $1$ , it would be placed after the first card from the top. You repeat this process for all the remaining cards in the original deck. In the end, you will have a well shuffled deck in your hand, no? This is essentially what your algorithm is doing.
|probability|combinatorics|algorithms|python|
0
Professor Lee's Introduction to Smooth Manifolds Second Edition Lemma 10.35
I'm stuck trying to verify the proof given in the text. There are parts of the hypothesis and proof that have nothing to do with where I'm stuck, so in the interests of brevity, I'll give only the part I'm having trouble with. We are given $M$ , an immersed submanifold with or without boundary in $\mathbb{R}^n$ , and $D$ , a smooth rank- $k$ subbundle of $T\mathbb{R}^n|_M$ . The proof begins as follows: Let $p\in M$ be arbitrary, and let $(X_1,\dots,X_k)$ be a smooth local frame for $D$ over some neighborhood $V$ of $p$ in $M$ . Because immersed submanifolds are locally embedded, by shrinking $V$ if necessary, we may assume that it is a single slice in some coordinate ball or half-ball $U\subseteq\mathbb{R}^n$ . Since $V$ is closed in $U$ , Proposition 8.11(c) shows that we can complete $(X_1,\dots,X_k)$ to a smooth local frame $(\tilde{X}_1,\dots,\tilde{X}_n)$ for $T\mathbb{R}^n$ over $U$ , ... My problem lies wholly within the above three sentences. The first sentence is fine. The second is where my trouble begins.
I'm not sure if I understand your question correctly. I think you are worried that the extensions of the vector fields, which are initially sections of $D$ , might not be vector fields on $\mathbb{R}^n$ ? In any case, I don't really see a problem here: these sections are still vector fields on $\mathbb{R}^n$ . We choose a smooth local frame $(X_1,\dots, X_k)$ for $D$ over $V$ basically from the Local Frame Criterion for Subbundles. This means that each is a smooth local section $V \to T\mathbb{R}^n|_M \approx M \times \mathbb{R}^n$ . But we are actually mapping to $T\mathbb{R}^n|_V \approx V \times \mathbb{R}^n$ . (In case you worry about smoothness: $V$ is open in $M$ , so the restriction of the codomain is fine, as $T\mathbb{R}^n|_V$ is just the inverse image of the open set $V$ under the projection map $T\mathbb{R}^n|_M \to M$ .) But $V$ is embedded in $\mathbb{R}^n$ , so we can view this as a smooth map into $T\mathbb{R}^n = \mathbb{R}^n \times \mathbb{R}^n$ . Now $V$ is closed in $U$ , so use the extension lemma for vector fields.
|differential-geometry|smooth-manifolds|vector-fields|submanifold|
0
Expectation of thinned Poisson point process
Problem statement: Assume $N=\sum_{i=-\infty}^{\infty} \delta_{\Gamma_i}$ is a homogeneous Poisson point process (hPPP) on the real line with positive rate $\lambda$ . Let $N$ be independent of $(U_n)_{n\in\mathbb{Z}}$ which are i.i.d. uniform on the interval $\langle 0,1\rangle$ . Now consider the process $N'=\sum_{\Gamma_i >0} \delta_{\Gamma_i} \boldsymbol{\mathbb{1}}_{U_i which is also a Poisson process, but on $\langle 0,\infty\rangle$ . Calculate $\mathbb{E} N' (\langle 0,a\rangle)$ for $a>0$ . My attempt: I will denote with $\mathcal{L}$ the Lebesgue measure, with $f_U$ the pdf of the uniform random variable on $\langle 0,1 \rangle$ and with $f_Y$ the pdf of $\Gamma_i$ . Since $N$ is a hPPP with rate $\lambda$ , then its intensity measure is $\mu = \lambda \mathcal{L}$ . Since $N'$ is a thinning of $N$ , then it is a PPP with intensity measure $\mu' = \lambda \mathcal{L}\mathbb{P}(C)$ , where $C=\{ U_i . It follows that $$ \begin{align*} \mathbb{E} N' (\langle 0,a\rangle) &= \mu'
We note the following Lemma. Let $N$ and $(U_n)_{n\in\mathbb{Z}}$ be as in OP. Then the process $M$ on $\mathcal{S} = \mathbb{R}\times[0,1]$ defined by $$ M = \sum_{i\in\mathbb{Z}} \delta_{(\Gamma_i, U_i)} $$ is a homogeneous Poisson point process with intensity measure $\lambda \operatorname{Leb}_{\mathcal{S}}$ . This is essentially a restatement of Poisson thinning, hence we skip the proof. Now note that \begin{align*} N'([0, a]) &= \sum_{i\in\mathbb{Z}} \mathbf{1}_{\{ \Gamma_i \in (0, a], \, U_i So by taking expectation, we get \begin{align*} \mathbb{E}[N'([0, a])] &= \int_{\mathcal{S}} \mathbf{1}_{\{ x \in (0, a], \, u Regarding OP's approach, I can see that the equality doesn't hold: $$\mu' (\langle 0,a\rangle) = \lambda a \mathbb{P} (U_i I guess OP tried to use $\Gamma_i$ to denote a typical Poisson point here, but this is already problematic. For a random point to qualify as "a typical point" of a homogeneous process on $\mathbb{R}$ , it would be a "uniformly chosen point in $\m
|probability-theory|random-variables|expected-value|poisson-process|
1
Question about quantifiers in the proof of the cut elimination theorem
Lately I have been reading about the cut elimination theorem, I think I get the idea however I have been struggling with some technical details concerning quantifiers. Consider the following rule: Now in the proof of the cut elimination theorem we are using mixes instead of cuts for technical reasons, so if it appears in a mix (but is inactive) in the following way: In the above at the "mix" step the idea is that we remove some occurrences of a formula $A$ from $\Gamma_1$ and $\Delta_2$ to get $\Gamma_1^*$ and $\Delta_2^*$ . We can find $y$ a variable that appears nowhere and reformulate this step to find: Now we can use induction to see that we can eliminate the cut, this seems good. However, in the conclusion we end up with $\forall y B[x:=y]$ instead of $\forall x B$ this may not seem to be a big deal because we can go from one formula to the other by doing a substitution... But formally this does not seem to be fine because if I were to go from $\Gamma_1^*, \Gamma_2 \vdash \forall
The short answer is: the formula $\forall x B$ is identical to the formula $\forall y B[x:= y]$ . Indeed, in a proof-theoretical context, for technical reasons, to avoid dealing with annoying syntactical details, first-order formulas are considered as identified up to renaming of bound variables, where a variable is bound if it is under the scope of a universal or existential quantifier. For instance, if $P$ is a unary predicate symbol, $\forall x P(x) = \forall y P(y)$ . Said differently, in proof theory for first-order logic, bound variables are dummy in that they appear in a formula only as placeholders and can be renamed without modifying the meaning of the formula (see also here and here ). (This is analogous to what happens in mathematical analysis, where the integrals $\int_a^b f(x) dx$ and $\int_a^b f(y) dy$ are the same thing.) As a consequence, the sequents $\Gamma_1^*, \Gamma_2 \vdash \forall x B, \Delta_1, \Delta_2^*$ and $\Gamma_1^*, \Gamma_2 \vdash \forall y B[x := y], \Delta_1, \Delta_2^*$ are the same sequent, so there is nothing left to fix.
|logic|quantifiers|proof-theory|formal-proofs|sequent-calculus|
1
Computing $\int_{e^\pi}^\infty\frac{\sin(\ln(x))dx}{x}$
I need help spotting the error in my calculation here. Wolfram Alpha tells me my integral should work out to $-1$ , rather than diverging. I've worked this out as follows, where unless I'm really missing something $\lim\limits_{t\to\infty}\cos(\ln(t))$ should diverge and not equal $0$ : \begin{equation} \int_{e^\pi}^\infty\frac{\sin(\ln(x))\,dx}{x} = \lim_{t\to\infty}\Big[-\cos(\ln(x))\Big]_{e^\pi}^{t} = -1 - \lim\limits_{t\to\infty}\cos(\ln(t)) \end{equation}
As @Sine of the Time suggests, Wolfram is incorrect because the integral just oscillates; this can be observed as follows. Take $M > \pi$ ; then $$ \int_{e^\pi}^{e^M} \frac{\sin{(\log{x})}}{x}dx \overset{\log{(x)} = t}{=} \int_{\pi}^{M} \sin{(t)} dt = -\cos{(t)}\big|_{\pi}^{M} = -2\cos^2{\left(M/2\right)}. $$ So how do we make sense of Wolfram's result? I am guessing the integral $\int_{\pi}^{\infty} \sin{(t)} dt$ can be "assigned" the value $-1$ in a regularized sense, as $$ \int_{\pi}^{\infty} \sin{(x)}e^{-\delta x} dx = -\frac{e^{-\pi\delta}}{\delta^2+1},$$ and letting $\delta \to 0^+$ , the right side converges to $-1$ .
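Both claims can be checked numerically (a sketch; the truncation point and step count are my choices): the damped integral matches $-e^{-\pi\delta}/(\delta^2+1)$, while the undamped partial integrals just trace $-2\cos^2(M/2)$.

```python
import math

def damped(delta, upper=400.0, n=400000):
    """Trapezoid approximation of ∫_pi^upper sin(x) e^(-delta x) dx."""
    a = math.pi
    h = (upper - a) / n
    f = lambda x: math.sin(x) * math.exp(-delta * x)
    s = 0.5 * (f(a) + f(upper)) + sum(f(a + k * h) for k in range(1, n))
    return s * h

# undamped partial integral up to M: -cos(M) - 1, i.e. -2 cos^2(M/2)
M = 10.0
partial = -math.cos(M) - 1.0
```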
|calculus|improper-integrals|
0
Homology of the torus from the axioms
I'm supposed to show that for the torus $T^d = (S^1)^d$ we have the formula: $$ H_n(T^d) = \bigoplus_{i = 0}^d \bigoplus_{j = 1}^\binom{d}{i} H_{n-i}(\text{point}) = \begin{cases} R^\binom{d}{n} &0\leq n \leq d \\ 0 & \text{else} \end{cases} $$ where $H$ is any ordinary homology theory. We have already proved the following facts, and the claim is probably a consequence of them: if $X$ is locally compact then $H_n(S^d \times X) = H_n (X) \oplus H_{n - d}(X)$ , and we know the homology of the sphere, i.e. $H_n(S^d) = \begin{cases} R & d \neq 0,\ n = 0, d \\ 0 & d \neq 0,\ n \neq 0, d \\ R \oplus R & d = 0,\ n = 0 \\ 0 & d = 0,\ n \neq 0 \end{cases} $ How can I prove the homology of the torus from these facts?
This is a straightforward proof by induction. For the base case $d = 1$ we directly see that $H_0(S^1) \cong H_1(S^1) \cong R = R^\binom{1}{0} = R^\binom{1}{1}$ . Adopting the convention that $\binom{a}{b} = 0$ if $b < 0$ or $b > a$ and assuming now that we've proven the identity up to some fixed $d - 1 \geq 1$ , we have for all $n$ that \begin{align} H_n(T^d) &= H_n(S^1 \times T^{d - 1}) \\ &\cong H_n(T^{d - 1}) \oplus H_{n - 1}(T^{d - 1}) \\ &\cong R^\binom{d - 1}{n} \oplus R^\binom{d - 1}{n - 1} \\ &\cong R^\binom{d}{n} \end{align} where we use the binomial identity $\binom{d - 1}{n} + \binom{d - 1}{n - 1} = \binom{d}{n}$ for the last step.
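The induction can be run mechanically; a small sketch (names mine) checks that the rank recursion for $H_n(T^d)$ reproduces the binomial coefficients via Pascal's rule:

```python
# Ranks b(d, n) of H_n(T^d) over R: base case b(1, 0) = b(1, 1) = 1,
# recursion b(d, n) = b(d-1, n) + b(d-1, n-1), which is Pascal's rule.
from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def b(d, n):
    if d == 1:
        return 1 if n in (0, 1) else 0
    if n < 0:
        return 0
    return b(d - 1, n) + b(d - 1, n - 1)
```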
|algebraic-topology|homology-cohomology|homotopy-theory|
1
Order type of N and Q
Studying linear orderings, I learned two theorems. Suppose two linearly ordered sets A and B satisfy the following: (1) countably infinite, (2) dense, i.e. if x < y then there exists z with x < z < y, (3) no first and last elements. Then, A and B are order isomorphic. Suppose two linearly ordered sets A and B satisfy the following: (1) least upper bound property, (2) separable, i.e. there exists a countable dense subset, (3) no first and last elements. Then, A and B are order isomorphic. I thought the conditions of the first theorem determine the order type of Q, and the conditions of the second theorem determine the order type of R. What I wonder is which conditions determine the order types of N and Z. I guess the conditions that determine the order type of N are the following: (1) countably infinite, (2) well ordered, (3) every element, except the first, has an immediate predecessor. And, I guess the conditions that determine the order type of Z are the following: (1) countably infinite, (2) every element has an immediate predecessor and successor, (3) least upper bound property.
For both $\Bbb N$ and $\Bbb Z$ , they are quickly seen to meet the indicated conditions. Conversely: Suppose $A$ is an infinite well-ordered set where every element $x$ except the least has an immediate predecessor, denoted by $p(x)$ . Define $f: \Bbb N \to A$ inductively by $f(0) = \min A$ (or $f(1)$ , depending on your favorite definition of $\Bbb N$ ), and for all $n > 0, f(n) = \min\{m \in A: m > f(n-1)\}$ . This defines $f$ as an order-preserving injection. Suppose $f$ is not surjective. Let $x = \min A \setminus f(\Bbb N)$ . $x \ne \min A$ , since $f(0) = \min A$ , so $x$ has a predecessor. But $p(x) \notin A \setminus f(\Bbb N)$ since it is smaller than $x$ . Thus $p(x) = f(n)$ for some $n$ . But then $f(n+1) = \min\{m \in A: m > p(x)\}$ . So $f(n+1) > p(x)$ , but because $x > p(x)$ , $f(n+1) \le x$ . If $f(n+1) < x$ , then $p(x) < f(n+1) < x$ contradicts that $p(x)$ is the immediate predecessor of $x$ , while $f(n+1) = x$ contradicts that $x \in A\setminus f(\Bbb N)$ . This cannot be. Thus $f$ must be an order isomorphism.
|integers|natural-numbers|well-orders|
1
On the convergence/divergence of this hypergeometric series
Let $\sum x_n$ be a series of positive terms such that $$ \frac{x_{n+1}}{x_n} = \frac{\alpha n + \beta}{\alpha n + \gamma} $$ I want to study the convergence of this series. My attempt: because of the above relationship, it is clear that the ratio test is inconclusive, but we can apply Raabe's criterion and conclude that the series converges for $\gamma > \alpha+\beta$ and diverges for $\gamma < \alpha + \beta$ . How about the case $\gamma = \alpha + \beta$ ?
Unless I've missed something here, in the case $\gamma = \alpha + \beta$ the ratio telescopes to $x_n = \frac{x_0\beta}{(n+1)\alpha + \beta}$ , and the series then diverges by comparison with the harmonic series: for $n \ge \gamma$ we have $\frac{\beta}{n\alpha + \gamma} \ge \frac{\beta}{n\alpha + n} = \frac{\beta}{\alpha+1}\cdot\frac{1}{n}$ .
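A quick numeric check of the telescoping (a sketch; I start the recursion at $n=0$ with $x_0=1$, so in this indexing the closed form reads $x_n = \beta/(\alpha n + \beta)$):

```python
# Boundary case gamma = alpha + beta: the recursion
# x_{n+1} = x_n * (alpha*n + beta)/(alpha*n + gamma) telescopes, and the
# terms decay like 1/n, so the partial sums grow without bound.
alpha, beta = 1.0, 2.0
gamma = alpha + beta
x = [1.0]
for n in range(200):
    x.append(x[-1] * (alpha * n + beta) / (alpha * n + gamma))
closed = [beta / (alpha * n + beta) for n in range(201)]
```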
|real-analysis|calculus|sequences-and-series|analysis|
1
Is the weak topology in separable Sobolev spaces induced by a metric?
I know that the spaces $W^{k,p}(\Omega)$ are separable and reflexive ( $\Omega \subset \mathbb{R}^n$ open and bounded) for $p \in (1, \infty)$ . I also learned that in a separable Banach space $X$ there exists a metric on the closed unit ball $ B \subset X^* $ which induces the weak* topology on $B$ . Now my thinking goes as follows: Since $W^{k,p}(\Omega)$ is separable, there exists a metric on the closed unit ball inducing the weak* topology. Can this metric be extended to all of $W^{k,p}(\Omega)$ ? Or at least to $W_0^{k,p}(\Omega)$ ? Since $W^{k,p}(\Omega)$ is also reflexive, the weak and weak* topologies coincide. Therefore, there exists a metric on the closed unit ball inducing the weak topology too. Does this metric extend? I guess the two induced metrics are equivalent? (My actual goal is to show that $W_0^{k,p}(\Omega)$ is first-countable.)
This does not have much to do with Sobolev spaces. Let thus $X$ be a separable Banach space with a countable dense set $S=\{x_n:n\in\mathbb N\}$ . The topology on $X^*$ of pointwise convergence on $S$ is given by the increasing sequence of semi-norms $q_n(x^*)=\max\{|x^*(x_k)|:1\le k\le n\}$ . This topology is Hausdorff (by the density of $S$ ) and metrizable, e.g., by the metric $$d(x^*,y^*)=\sup\{q_n(x^*-y^*) \wedge 1/n:n\in\mathbb N\}$$ where $a\wedge b=\min\{a,b\}$ . By Alaoglu, the unit ball $B_{X^*}$ is $\sigma(X^*,X)$ -compact and since a compact space does not have stricly coarser Hausdorff topologies $d$ induces the weak $^*$ -topology on $B_{X^*}$ . However, for infinite dimensional Banach spaces, the weak $^*$ topology on the full dual $X^*$ is never metrizable: Otherwise, it would then be (sequentially) complete (because $\sigma(X^*,X)$ -Cauchy sequences are pointwise bounded and hence uniformly bounded and the compactness of balls $rB_{X^*}$ then yields converging subseque
|functional-analysis|metric-spaces|sobolev-spaces|weak-topology|
1
Sum of series involving factorials and number e
Consider $\sum \frac{n^2}{n!}$ . How can I find the sum? My attempt: it is equal to $\sum \frac{n}{(n-1)!}$ , and we know that $\sum \frac{1}{n!}=e$ .
It is known that $e^x=\sum_{n=0}^\infty \frac{x^n}{n!}$ . If you differentiate wrt $x$ on both sides you'd get $$e^x=\sum_{n=0}^\infty \frac{nx^{n-1}}{n!}\iff xe^x=\sum_{n=0}^\infty \frac{nx^n}{n!}$$ If we differentiate once again, we get this $$(x+1)e^x=\sum_{n=0}^\infty \frac{n^2x^{n-1}}{n!}$$ Finally, evaluating the equation at $x=1$ , you get that the series you asked for is $$\sum_{n=0}^\infty \frac{n^2}{n!}=\sum_{n=1}^\infty \frac{n^2}{n!}=2e,$$ where we changed the starting index to $1$ because the $n=0$ term vanishes due to the $n^2$ in the summand.
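A quick numerical confirmation of the value $2e$ (my own check, not part of the answer):

```python
import math

# Partial sums of sum n^2/n! converge extremely fast; 50 terms suffice.
s = sum(n * n / math.factorial(n) for n in range(1, 50))
```

The partial sum agrees with $2e \approx 5.43656$ to machine precision.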
|real-analysis|calculus|sequences-and-series|analysis|
1
Different ways to prove $\sum_{k=1}^\infty \frac{1}{k^2}=\frac{\pi^2}{6}$ (the Basel problem)
As I have heard people did not trust Euler when he first discovered the formula (solution of the Basel problem ) $$\zeta(2)=\sum_{k=1}^\infty \frac{1}{k^2}=\frac{\pi^2}{6}$$ However, Euler was Euler and he gave other proofs. I believe many of you know some nice proofs of this, can you please share it with us?
The extraction of the desired values will also consider the simple fact that $\displaystyle \sum_{n=1}^{\infty}\frac{1}{n^2}= \sum_{n=1}^{\infty}\frac{1}{(2n)^2}+ \sum_{n=1}^{\infty}\frac{1}{(2n-1)^2}=\frac{1}{4} \sum_{n=1}^{\infty}\frac{1}{n^2}+ \sum_{n=1}^{\infty}\frac{1}{(2n-1)^2}$ that leads to $\displaystyle \sum_{n=1}^{\infty}\frac{1}{n^2}=\frac{4}{3}\sum_{n=1}^{\infty}\frac{1}{(2n-1)^2}$ . Exploiting the power series $\displaystyle \operatorname{arctanh}(x)=\sum_{n=1}^{\infty}\frac{x^{2n-1}}{2n-1}$ , we write $$4\sum_{n=1}^{\infty} \frac{1}{(2n-1)^2}=4\int_0^1 \frac{\operatorname{arctanh}(x)}{x}\textrm{d}x =-\int_0^1\left( \int_0^1\frac{\partial}{\partial x} \left(\frac{\log((1+y)^2-4 x y)}{y}\right)\textrm{d}x\right) \textrm{d}y $$ $$ =4\int_0^1\left( \int_0^1 \frac{1}{(y+1-2x)^2+(2 \sqrt{x (1-x)})^2}\textrm{d}y\right) \textrm{d}x $$ $$=4\int_0^1\frac{1}{2 \sqrt{x (1-x)}}\arctan\left(\frac{y+1-2x}{2 \sqrt{x (1-x)}}\right)\biggr|_{y=0}^{y=1}\textrm{d}x$$ \begin{equation*} =2\int
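Whatever proof one prefers, the two identities this answer starts from are easy to check numerically (my own sketch, not part of the answer):

```python
import math

N = 200_000
full = sum(1 / n ** 2 for n in range(1, N + 1))        # sum 1/n^2
odd = sum(1 / n ** 2 for n in range(1, N + 1, 2))      # sum over odd n only
```

The full sum approaches $\pi^2/6$ and equals $\tfrac43$ of the odd-denominator sum, up to truncation error of order $1/N$ .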
|sequences-and-series|fourier-analysis|big-list|transcendental-numbers|faq|
0
Least common multiple and greatest common divisor for monomial with rational coefficients
I have read in my textbook that if I have, for example, two monomials $A$ and $B$ , one or both with rational coefficients, $$A=\frac 34 x^2y^3, \qquad B=-2xyz$$ then the $\text{lcm}$ or the $\text{gcd}$ always has $1$ as its numerical coefficient. Is it a convention or is there something more complex that I do not know?
$lcm$ or $gcd$ doesn't really come into play for the rational coefficient part, at least not in the precalc algebra that I know of.
|algebra-precalculus|gcd-and-lcm|
0
What is the definition for the dot product of a co-vector and a dyadic product of vectors?
The definition of the dot product of a second order tensor with a vector is: $$ (\mathbf{a} \otimes \mathbf{b}) \cdotp \mathbf{x} = (\mathbf{b} \cdotp \mathbf{x})\mathbf{a} $$ The definition of the dot product of a third order tensor with a vector is: $$ (\mathbf{a} \otimes \mathbf{b} \otimes \mathbf{c}) \cdotp \mathbf{x} = (\mathbf{c} \cdotp \mathbf{x}) (\mathbf{a} \otimes \mathbf{b}) $$ Is it the case that: $$ \mathbf{x}^T \cdotp (\mathbf{a} \otimes \mathbf{b}) =(\mathbf{x}^T \cdotp \mathbf{a}^T)\mathbf{b}^T$$ I believe the transpose on $\mathbf{a}$ is needed because the dot product must be either between two vectors or two co-vectors. I also believe the transpose on $\mathbf{b}$ is needed to make the shape of the resulting vector come out right. Like in matrix multiplications where a row vector times a matrix gives a row vector. But I'm not sure whether $\mathbf{x}^T \cdotp (\mathbf{a} \otimes \mathbf{b})$ even makes sense conceptually or if only makes sense to write $\mathbf{x}^T \
This is a long comment rather than an answer. Take your first definition, $$ (\pmb{a} \otimes \pmb{b}) \cdot \pmb{x}= (\pmb{b} \cdot \pmb{x}) \pmb{a} $$ This only makes sense if, somehow, $$ (\pmb{a} \otimes \pmb{b}) \cdot \pmb{x} = (a^i \pmb{e}_i \otimes b_j \pmb{e^j}) x^k \pmb{x}_k = a^i \pmb{e}_i (b_jx^j) $$ i.e., as if by magic, $\pmb{b}$ is a covector (in Euclidean space you could call this $\pmb{b}^T$ but then you would simply write $\pmb{a}\pmb{b}^T\pmb{x}$ as @Kurt G. suggested). Then if we write $(\pmb{a} \otimes \pmb{b}) \cdot \pmb{x}^T$ using the same definitions, $$ (a^i \pmb{e}_i \otimes b_j \pmb{e^j}) x_k \pmb{x}^k = (a^ix_i) \otimes b_j \pmb{e^j} $$ or $(\pmb{x}^T\pmb{a}) \pmb{b}^T$ . Your second definition is even more difficult to decipher. You write, $$ (\pmb{a} \otimes \pmb{b} \otimes \pmb{c}) \cdot \pmb{x}= (\pmb{c} \cdot \pmb{x}) \pmb{a} \otimes \pmb{b} $$ so $\pmb{c}$ is also a covector and we have, $$ (\pmb{a} \otimes \pmb{b} \otimes \pmb{c}) \cdot \pmb{x} = (a^i
|tensor-products|matrix-calculus|tensors|
0
differential chain rule
I'm trying to follow some calculus lecture notes and I can reproduce an argument for the sum and product rule. Can I make an argument like this for the chain rule in a similar style, using the definition of $df$ ? $$ df := f(x+dx) - f(x) $$ Sum rule $$ f(x) := g(x) + h(x) $$ \begin{align} df &= f(x+dx) - f(x) \\ &= g(x+dx)+h(x+dx)-(g(x)+h(x)) \\ &= dg + g(x) + dh + h(x) - g(x) - h(x) \\ &= dg + dh \end{align} where step (3) uses $dg=g(x+dx)-g(x)$ and likewise for $dh$ . Product rule $$f(x) := g(x)h(x)$$ \begin{align} df &= f(x+dx) - f(x) \\ &= g(x+dx)h(x+dx) - g(x)h(x) \\ &= (dg + g(x))(dh+h(x)) - g(x)h(x) \\ &= dgdh + dgh + gdh + gh - gh \\ &= dgdh + dgh + gdh \\ &= dgh + gdh \end{align} using the same rewrite as above. Chain rule $$f(x) := g(h(x))$$ \begin{align} df &= f(x+dx)-f(x) \\ &= g(h(x+dx)) - g(h(x)) \\ &= g(dh + h(x)) - g(h(x)) \\ &= \quad ? \end{align}
You could do something like this: (Assuming that the derivative of $f$ , $f^\prime$ is defined as $\frac{f(x+dx)-f(x)}{dx}$ .) $$ df=f(x+dx)-f(x) \\ =g(h(x+dx))-g(h(x)) \\ =g(h(x)+dh)-g(h(x)) \\ =\frac{g(h(x)+dh)-g(h(x))}{dh}dh \\ =g^\prime(h(x))dh. $$
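A finite-difference sanity check of the resulting rule $df = g'(h(x))\,dh$ , with the hypothetical choices $g=\sin$ and $h(x)=x^2$ (my example, not from the notes):

```python
import math

def f(x):                 # f = g o h with g = sin and h(x) = x^2
    return math.sin(x * x)

x, dx = 0.7, 1e-6
df = f(x + dx) - f(x)                       # the definition of df
predicted = math.cos(x * x) * (2 * x * dx)  # g'(h(x)) * dh, with dh ~ h'(x) dx
```

The two quantities differ only by terms of order $dx^2$ , exactly as the informal manipulation suggests.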
|calculus|chain-rule|
1
f(x) vs f×x: Functions vs Variables
Might be a stupid question, but I was thinking about this today and wanted to clear something up. I always thought of f(x) as having different algebraic properties than f×x, and it came as a surprise to me when I noticed that you can modify the equation algebraically, treating $f(x)$ as $f×x$ , without necessarily butchering the original function. Most rudimentarily: $f(x) = x$ $\frac {f(x)}x = \frac xx$ $f(1) = 1$ I was able to get the answer by simply dividing x out of the function $f(x)$ as if it were being multiplied by another variable. Another example: $f(x) = x+2$ $f(x) × \frac2x=\frac{2x}x+\frac4x$ $f(2) = 2+\frac4x$ This one's a little different, as you'd need to plug the value within f that has replaced x into the equation, but $f(x) = 2+\frac4x$ (the equation we ended up with) $2+\frac4{(2)} = 4 = f(2)$ $f(x) = x+2$ $(2)+2=4 = f(2)$ Yes, I realize that if I were to plug anything into x that was not the constant in the parentheses, the new derived function would not work. Reg
It sounds like you're interested in functions $f$ with the property that for all numbers $x$ and $y$ , we have $f(y)=f(x)\frac yx$ . This is equivalent to $\frac{f(x)}x=\frac{f(y)}y$ , i.e. $\frac{f(x)}x$ is a constant $c$ that doesn't depend on $x$ . So the only functions with this property are of the form $f(x)=cx$ . This makes sense in hindsight, since you're trying to treat the " $f$ " as a multiplied constant. In your second example, you "cheated" by plugging 2 in again for the remaining occurrence of $x$ . Functions for which this works satisfy $f(y)=f(x)\frac yx[x\to y]$ , where the $[x\to y]$ indicates that we replace occurrences of $x$ with $y$ . But this clearly true for all functions because $f(x)\frac yx[x\to y]=f(y)\frac yy=f(y)$ . In other words, you're multiplying $f(x)$ by $\frac yx$ and then plugging in $y$ for $x$ so that the $f(x)$ becomes $f(y)$ and the $\frac yx$ becomes $1$ .
|algebra-precalculus|functions|notation|
1
prove that if $A^k =0$ then there is $v \in \mathbb{F}^{n}$ such that the group $\{v,vA,vA^2,\dots,vA^{k-1}\}$ is linearly independent in induction
I have a question which I didn't manage to solve. Assume that there is a nilpotent matrix $A \in \mathbb{F}^{n\times n}$ , which means that there is $k \in \mathbb{N}$ such that $A^k = 0$ . I need to prove that $A^n = 0$ . They told us that we need to do it by proving the following: If $A^k = 0$ (with $k$ minimal) then $A^{k-1} \ne 0$ , so there is $v \in \mathbb{F}^n$ such that $A^{k-1}v \ne 0$ . Prove that the set $ \{v,Av,A^2v,\dots,A^{k-1}v\}$ is linearly independent. From what I understood it will help to show that the case where $k > n$ is not possible (because then we would have more than $n$ linearly independent vectors in a vector space of dimension $n$ ) and then the only cases that actually can happen are when $n = k$ : $A^n = A^k = 0$ or $n > k$ : $A^n = A^k \cdot A^{n-k} = 0 \cdot A^{n-k} = 0$ . They told me that for $k > n$ we need to do it by using inductive reasoning and the above statement, but I'm not sure how to do it.
Hint: To prove the linear independence of $\{v, \dots, A^{k-1}v\}$ , all you need to do is show that $\alpha_0 v + \cdots + \alpha_{k-1}A^{k-1}v = 0$ implies that $\alpha_0 = \cdots = \alpha_{k-1} = 0$ . Multiplying the equation $\alpha_0 v + \cdots + \alpha_{k-1}A^{k-1}v = 0$ by $A^{k-1}$ will imply that $\alpha_0A^{k-1}v = 0$ . From here, you can continue inductively to get linear independence.
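A concrete check of the hint (my own illustration; the $4\times 4$ shift matrix is a standard nilpotent example with $k=4$ ):

```python
# A = 4x4 "shift" matrix: A e1 = e2, ..., A e4 = 0, so A^4 = 0 and k = 4.
n = 4
A = [[1 if i == j + 1 else 0 for j in range(n)] for i in range(n)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

v = [1, 0, 0, 0]                 # here A^{k-1} v = e4 != 0
vecs = [v]
for _ in range(n - 1):
    vecs.append(matvec(A, vecs[-1]))
assert matvec(A, vecs[-1]) == [0] * n      # A^k v = 0

# row-reduce to find the rank of {v, Av, A^2 v, A^3 v}
M = [[float(x) for x in row] for row in vecs]
rank = 0
for col in range(n):
    pivot = next((r for r in range(rank, n) if abs(M[r][col]) > 1e-12), None)
    if pivot is None:
        continue
    M[rank], M[pivot] = M[pivot], M[rank]
    for r in range(rank + 1, n):
        factor = M[r][col] / M[rank][col]
        M[r] = [a - factor * b for a, b in zip(M[r], M[rank])]
    rank += 1
```

The four vectors have full rank, matching the linear-independence claim in the hint.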
|matrices|induction|nilpotence|linear-independence|
1
Examples of integer sequences that have a distribution approx $1/\log(n)$, like the primes do?
It is well known that the primes are distributed such that they occur with an approximate "likelihood" of $1/\log(n)$ around the integer $n$ - or more precisely, the number of primes up to $n$ is $$\pi(n) \sim \int_2^n \frac{dt}{\log(t)}.$$ Question: Are there other sequences that have a distribution such that the likelihood of $n$ being a member of that sequence is approx $1/\log(n)$ ? Further Question: What properties or constraints would such sequences need to adhere to?
One such sequence is $a_n = \lfloor n\log n\rfloor$ . (In a similar way, one can construct sequences with any desired density.) Another is $\{n\in\Bbb N\colon n$ is divisible by $\lfloor \log n\rfloor\}$ (which also has obvious generalizations).
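A rough empirical check on the first example (my own sanity check, not part of the answer): counting members of $a_n=\lfloor n\log n\rfloor$ up to $N$ and comparing with the prime-counting benchmark $N/\log N$ .

```python
import math

# Count members of a_n = floor(n log n) up to N and compare with N / log N.
# The ratio tends to 1 only very slowly, so expect it to be noticeably above 1.
N = 1_000_000
count, n = 0, 2
while math.floor(n * math.log(n)) <= N:
    count += 1
    n += 1
ratio = count / (N / math.log(N))
```

At $N=10^6$ the ratio is about $1.2$ , reflecting the same slow convergence that makes $\pi(N)$ exceed $N/\log N$ at reachable heights.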
|number-theory|distribution-of-primes|integer-sequences|
0
Find the circle center point that is tangential on 2 other circles that has at least one intersection?
Given circle A and circle B (we have their center points, radii, and one relevant point where they intersect), we want to draw a circle C with a given radius such that it is tangential to the other two circles (there are up to 4 possible locations). How do we get the possible center points?
Hint: What is the distance between the centers? To find (say) $O_3$ (which is internally tangent to both $A$ and $B$ ), what is $|KO_3|$ and $|O_3M|$ ? With that info, determine the possible locations of $O_3$ . Note that there are 2 possible locations. (Why?)
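The hint can be turned into a computation: the center of $C$ lies at distance $r_A \pm r_C$ from $A$ 's center and $r_B \pm r_C$ from $B$ 's, so it is an intersection point of two auxiliary circles. A sketch (the function name and sample values are mine):

```python
import math

def tangent_center(cA, rA, cB, rB, rC, sA=1, sB=1):
    """One center for a circle of radius rC tangent to circles (cA,rA), (cB,rB).
    s=+1: external tangency (center distance r + rC); s=-1: internal tangency
    (center distance r - rC, assuming rC < r).  Returns None if impossible."""
    dA, dB = rA + sA * rC, rB + sB * rC
    (ax, ay), (bx, by) = cA, cB
    d = math.hypot(bx - ax, by - ay)
    if d == 0 or d > dA + dB or d < abs(dA - dB):
        return None
    t = (d * d + dA * dA - dB * dB) / (2 * d)   # offset along the center line
    h = math.sqrt(max(dA * dA - t * t, 0.0))    # perpendicular offset
    ux, uy = (bx - ax) / d, (by - ay) / d
    return (ax + t * ux - h * uy, ay + t * uy + h * ux)

c = tangent_center((0, 0), 3, (5, 0), 2, 2)   # externally tangent to both
```

Flipping the sign of `h` gives the mirror solution, and the four sign choices $(sA, sB)$ produce the up-to-4 locations mentioned in the question.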
|trigonometry|circles|
1
Prove that the set of real symmetric matrices is closed
The scalar product defined for the matrices $A$ and $B$ in $R^{n\times n}$ is $$ \langle A, B \rangle = trace(A^{T} B) $$ . I want to show that the set of all symmetric matrices ( $D$ ) is closed in $R^{n\times n}$ . My attempt: I want to show that $\overline{D} \subset D $ . Let $A \in \overline{D}$ . Then there exists a sequence $A_n \in D$ such that $||A-A_{n}|| \to 0$ . $$||A-A_n||^2 = trace((A-A_n)^{T}(A-A_n)) $$ Expanding this and using linearity of trace operator, I get, $$||A-A_n||^2= trace(A^{T} A)-trace(A^{T}A_n)-trace(A_n^T A)+trace(A_n^T A_n) $$ So, I get that $$ \lim_{n \to \infty}(||A||^2-2\langle A, A_n \rangle + ||A_n||^2)=0$$ I am not able to show that $A$ is symmetric, from here. Can someone help?
Decomposing into symmetric and antisymmetric parts we have $A= \frac12 (A+A^T)+\frac12 (A-A^T)$ . We also know that $A_n = \frac12 (A_n + A_n^T)$ since it was assumed symmetric. Now if $A \neq A^T$ then $A-A^T \neq 0$ so $\|A -A^T\|\neq 0$ . However $$\|A-A_n\|=\left\|\frac12(A-A_n+A^T-A_n^T) +\frac12 (A-A^T)\right\|$$ and letting $n \to \infty$ gives $$\lim_{n\to\infty}\|A-A_n\|=\frac12\|A-A^T\| \neq 0,$$ which contradicts that $A_n \to A$ . This implies $A-A^T=0$ and so $A$ is symmetric.
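A numerical illustration of the key inequality behind this argument (my own check): since symmetric and antisymmetric parts are orthogonal in the trace inner product, and the antisymmetric part of $A-S$ is $\frac12(A-A^T)$ for any symmetric $S$ , we get $\|A-S\|\ge\frac12\|A-A^T\|$ .

```python
import math, random

def frob(M):      # the norm induced by <A,B> = trace(A^T B)
    return math.sqrt(sum(x * x for row in M for x in row))

random.seed(0)
n = 4
A = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n)]
B = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n)]
S = [[(B[i][j] + B[j][i]) / 2 for j in range(n)] for i in range(n)]  # symmetric

# Lower bound: distance from any symmetric S to A is at least half the
# norm of the antisymmetric part of A.
lhs = frob([[A[i][j] - S[i][j] for j in range(n)] for i in range(n)])
rhs = 0.5 * frob([[A[i][j] - A[j][i] for j in range(n)] for i in range(n)])
```

With $A$ nonsymmetric the right-hand side is a positive constant, so no sequence of symmetric matrices can converge to $A$ .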
|general-topology|matrices|
1
Why is a=43 an exception?
There is a sequence. The first integer is positive, the second integer is negative. An alternating part is a part that switches between positive and negative (0 is not included) (a) I have a complete answer to this part (b) What second term should be chosen to get the longest possible switching between positive and negative For b, I wrote an entire proof as to why, assuming $a$ is the positive first integer, $b$ is the integer closest to $-0.618a$ . However, I was testing cases today, and I found that $a=43$ is an exception to this. $-0.618$ multiplied by $a$ would be about $-26.57$ , which is about $-27$ . However, the number to make the sequence the longest is actually $-26$ . Could anyone give me a hint as to why this happened?
You have posed an interesting question here, but have not shown how you got to where you are. If you look at a previous link of mine , you'll find a general solution to the problem $f_n=f_{n-1}+f_{n-2}$ with arbitrary initial conditions, say $f_0$ and $f_1$ , or $a$ and $b$ in your notation. It is easy to show that your solution can be expressed as $$ f_n=\bigg(b-\frac{a}{2}\bigg)F_n+\frac{a}{2}L_n $$ where $F_n$ and $L_n$ are the Fibonacci and Lucas numbers, respectively. Here you can see that the Fibonacci terms are always negative while the Lucas terms are all positive (for $a>0$ and $b<0$ ). This can also be expressed as $$ f_n=bF_n+aF_{n-1} $$ Now, we can express the Fibonacci number with the Binet formula as follows $$ F_n=\frac{\varphi^n-\psi^n}{\varphi-\psi} $$ where $\varphi$ is the golden ratio and $\psi=1-\varphi$ . Thus $$ f_n=\frac{b(\varphi^n-\psi^n)+a(\varphi^{n-1}-\psi^{n-1})}{\varphi-\psi} $$ Almost there. For large $n$ we have $\varphi^n\gg|\psi^n|$ and the above becomes $$ f_
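The $a=43$ exception mentioned in the question can be reproduced directly (my own check, with a small helper I named `alternating_length`):

```python
def alternating_length(a, b, limit=60):
    """Length of the initial strictly sign-alternating run of
    f0 = a, f1 = b, f_n = f_{n-1} + f_{n-2} (zeros end the run)."""
    seq = [a, b]
    while len(seq) < limit:
        seq.append(seq[-1] + seq[-2])
    length = 1
    for prev, cur in zip(seq, seq[1:]):
        if cur == 0 or (prev > 0) == (cur > 0):
            break
        length += 1
    return length
```

For $a=43$ : rounding $-0.618\cdot 43\approx-26.57$ gives $b=-27$ with run $43,-27,16,-11,5,-6$ (length $6$ ), while $b=-26$ gives $43,-26,17,-9,8,-1,7$ (length $7$ ).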
|golden-ratio|alternating-expression|
1
Necessary & Sufficient Conditions for a set to be connected
This question is based on Proposition 1.4.13 in Basic Complex Analysis, by Marsden. Definition 1.4.12: A set $C \subset \mathbb{C}$ is not connected if there exists open sets $U$ and $V$ such that, a) $\, C \subset U \cup V$ , b) $\, C \cap U \not = \emptyset$ and $\, C \cap V \not = \emptyset$ , c) $(C \cap U) \cap (C \cap V) = \emptyset$ . If a set fails to be not-connected, then it's called connected . Proposition 1.4.13: A set $C$ is connected iff the only subsets of $C$ that are both open and closed relative to $C$ are the empty set and $C$ itself. I'm trying to prove this using proof by contradiction for both directions, $\Rightarrow (PBC):$ If $\exists C^*$ such that $C^* \subset C$ (strict) & $C^*$ is both open & closed relative to $C$ , then $C \subset C^* \cup V^*$ where $V^* = \mathbb{C} \backslash C^*$ is open and properties b) & c) hold, a contradiction. $\Leftarrow (PBC):$ If $\exists U,V$ which satisfy the conditions above. Then, $U \cap C$ is open relative to $C$ by $b)
Suppose $C$ is not connected. Then there exist $U, V$ , both open in $\mathbb{C}$ , such that: (1) neither $C \cap U$ nor $C \cap V$ is empty, (2) $C \subseteq U \cup V$ , and (3) $(C \cap U) \cap (C \cap V) = \emptyset$ . We claim that the set $C \cap U$ is both open and closed, relative to $C$ . It is trivial that $C \cap U$ is open relative to $C$ , since $U$ is open in $\mathbb{C}$ . To see that $C \cap U$ is closed relative to $C$ , it is enough to show that $C - (C \cap U)$ is open relative to $C$ . First, we have that $C - (C \cap U) = C \cap (\mathbb{C} - U)$ . We now show that $C \cap (\mathbb{C} - U) = C \cap V$ , completing the proof. Take $z \in C \cap (\mathbb{C} - U)$ . Clearly, $z \in C$ . From (2) above, this implies $z \in U$ or $z \in V$ . But $z \notin U$ , so it must be that $z \in V$ . This shows $z \in C \cap V$ . Now take $w \in C \cap V$ ; in other words, $w \in C$ and $w \in V$ . Suppose that further $w \in U$ . Then $(C \cap U) \cap (C \cap V) = C \cap U \cap
|general-topology|
0
Understanding and Visualizing Complex Roots According to the Fundamental Theorem of Algebra
I'm trying to deepen my understanding of the Fundamental Theorem of Algebra, which asserts that every non-constant single-variable polynomial with complex coefficients has as many roots as its degree, counting multiplicities. Specifically, for a polynomial equation of degree n , there should be exactly n roots in the complex number system. For example, consider the function $y=x^3+5x^2+x+3$ . When plotting this function on a 2D y-x plane, it appears there is only one root, which contradicts the theorem's assertion that there should be three roots for a cubic polynomial. Question 1: How do we find the other two imaginary roots for this equation? I understand that complex roots come in conjugate pairs when the polynomial has real coefficients, but I'm unsure how to mathematically derive these roots. Question 2: Is there a way to visualize this function and its roots in a 3D environment, allowing us to see the imaginary roots? I'm interested in understanding how these complex roots are co
While FTOA tells us that every polynomial has a certain number of roots, it doesn't give us a method for finding them. In general, while the roots of any linear or quadratic polynomial can be found through basic algebra, finding the roots of a cubic or quartic is quite difficult and for quintics and higher there is no guarantee that the roots can be exactly calculated. Beyond that, many of the methods used for working with real roots can be extended to complex numbers, for example: Vieta's formulas still apply. A version of the rational root theorem can be written in terms of Gaussian rationals . If you can find some of the roots exactly, you can perform polynomial long division to reduce the degree of the polynomial. You can use a method like Newton-Raphson to find approximate values of the roots. As for visualisation, it can be difficult to efficiently display four dimensions of information (two each for the real and imaginary parts of the inputs and outputs). As linked in the commen
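To make Question 1 concrete for the cubic in the question: the three roots of $x^3+5x^2+x+3$ can be approximated numerically, e.g. with the Durand-Kerner simultaneous iteration (my sketch; any root finder would do), and then checked against Vieta's formulas.

```python
def poly(coeffs, z):
    """Evaluate a polynomial (highest-degree coefficient first) at z."""
    result = 0j
    for c in coeffs:
        result = result * z + c
    return result

def durand_kerner(coeffs, iterations=200):
    """All complex roots of a monic polynomial, by simultaneous iteration."""
    n = len(coeffs) - 1
    roots = [(0.4 + 0.9j) ** k for k in range(n)]   # standard starting guesses
    for _ in range(iterations):
        new = []
        for i, z in enumerate(roots):
            denom = 1 + 0j
            for j, w in enumerate(roots):
                if j != i:
                    denom *= z - w
            new.append(z - poly(coeffs, z) / denom)
        roots = new
    return roots

roots = durand_kerner([1, 5, 1, 3])   # x^3 + 5x^2 + x + 3
```

One root is real (near $-4.93$ ) and the other two form the conjugate pair that the 2D real plot cannot show; the sum of the roots is $-5$ and their product is $-3$ , as Vieta's formulas predict.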
|polynomials|complex-numbers|roots|visualization|
1
Constructing a function from a function of its inverse
Let $f$ be a continuous strictly-increasing function that maps $\mathbb{R}_+$ to $\mathbb{R}_+$ . Define a function $g$ on $\mathbb{R}_+$ as follows: $$g(x) := f^{-1}(f(x)+1).$$ For example, if $f(x) = x^2$ , then $g(x) = \sqrt{x^2+1}$ . In words, $g(x)$ describes to what value you should increase $x$ , so that $f(x)$ increases by $1$ . The function $g$ clearly satisfies two properties: $g(x)>x$ for all $x$ ; $g(x)$ is strictly increasing. QUESTION: Given a function $g$ that satisfies these two properties, does there always exist a corresponding function $f$ ? If not, what other conditions are required?
If $g$ is continuous, strictly increasing with $g(x)>x$ , then there exist infinitely many strictly increasing $f$ with a well-defined inverse $f^{-1}$ that satisfy the definition of $g$ . To see this, we may substitute $x=f^{-1}(y-1)$ into the definition of $g$ : $$g\circ f^{-1}(y-1) =f^{-1}(y),\quad y\geq f(0)+1\geq 1. $$ Therefore, for any given $g$ , if we can find some strictly increasing function $h:\mathbb{R}_+\mapsto \mathbb{R}_+$ such that $g\circ h(x-1) =h(x)$ for all $x\geq 1$ , we can simply set $f=h^{-1}$ which is the required function. Let us show that such $h$ is not unique. Pick an arbitrary strictly increasing function $h_0:[c_0,c_0+1]\mapsto [0,g(0)]$ which is always possible for any $c_0\geq0$ and $g$ since $g(0) > 0$ , so there are infinitely many possible choices. The required function $h:[c_0,\infty)\to\mathbb{R}_+$ can be constructed by the following piecewise design: $$h(x) =g^n\circ h_0(x-n),\quad x\in[c_n,c_{n+1}], \quad c_n=c_0+n,\quad n\in\mathbb{N},$$ where
|functions|inverse-function|
1
Log-tangent integral using exponential generating function $\int_0^{\pi/2}x\log^n(\tan x)\,dx$
$$I_n=\int_0^{\pi/2}x\log^n(\tan x)\ dx,\quad n\in\mathbb{Z}$$ I want to evaluate this log-tangent integral using exponential generating functions . My attempt is below. Define, $$G(t)=\sum_{n=0}^\infty\frac{I_n}{n!}t^n=\int_0^{\pi/2}x\sum_{n=0}^\infty\frac{(t\log(\tan x))^n}{n!}\ dx=\int_0^{\pi/2}x\tan^{t}(x)\ dx$$ rewriting $x$ as $\arcsin(\sin(x))$ and expanding by its Maclaurin series, $$\int_0^{\pi/2}x\tan^{t}(x)\ dx=\int_0^{\pi/2}\arcsin(\sin(x))\frac{\sin^t(x)}{\cos^{t}(x)}\ dx=\int_0^{\pi/2}\sum_{n=0}^\infty\binom{2n}{n}\frac{\sin^{2n+1}(x)}{4^n(2n+1)}\frac{\sin^t(x)}{\cos^{t}(x)}\ dx\\ =\sum_{n=0}^\infty\binom{2n}{n}\frac{4^{-n}}{(2n+1)}\int_0^{\pi/2}\sin^{2n+t+1}(x)\cos^{-t}(x)\ dx=\sum_{n=0}^\infty\binom{2n}{n}\frac{4^{-n}}{2(2n+1)}B\left(\frac{2n+t+2}{2},\frac{1-t}{2}\right)$$ where $B(m,n)$ is the Beta function, $$B(m,n)=\frac{\Gamma(m)\Gamma(n)}{\Gamma(m+n)}$$ hence, $$G(t)=\sum_{n=0}^\infty\binom{2n}{n}\frac{4^{-n}}{2(2n+1)}\frac{\Gamma\left(n+t/2+1\right)\Gamma(1/2-t/2)
For the even case \begin{align} I_{2m}=& \int_0^{\pi/2}x\ln^{2m}(\tan x)\ \overset{t=\tan x}{dx}\\ =& \int_0^{\infty}\frac{\tan^{-1}t\ln^{2m}t}{1+t^2}\ \overset{t\to 1/t}{dt} =\int_0^{\infty}\frac{(\frac\pi2-\tan^{-1}t)\ln^{2m}t}{1+t^2}\ dt\\ =&\ \frac\pi4 \int_0^{\infty}\frac{\ln^{2m}t}{1+t^2}\ dt =\frac{(-1)^m E_{2m}}{2}\left(\frac\pi2\right)^{2(m+1)} \end{align} where $E_{2m}=(-1,5,-61,1385,\cdots)$ are the Euler numbers. In particular $$I_2= \frac{\pi^4}{32},\> I_4 = \frac{5\pi^6}{128}, \> I_6 =\frac{61\pi^8}{512},\cdots$$
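As a sanity check on the even case (mine, not the answer's): a crude midpoint rule already reproduces $I_2=\pi^4/32\approx 3.044$ .

```python
import math

# Midpoint rule for I_2 = integral of x*ln(tan x)^2 over (0, pi/2);
# the log-singularities at both endpoints are integrable.
N = 400_000
h = (math.pi / 2) / N
total = 0.0
for k in range(N):
    x = (k + 0.5) * h
    total += x * math.log(math.tan(x)) ** 2
approx = total * h
```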
|integration|definite-integrals|summation|special-functions|generating-functions|
0
Basketball scores - solving 2 variable recurrence relation
My question is a follow up to the famous problem 'Basketball Scores' from the green book. The question can be found here : A person shoots basketball 100 times. First time he scores a point and second time he misses it. For the following shots, the probability of him scoring a point is equal to number of points scored before this shot divided by number of shots taken before this shot, for ex: if he is into his 21st shot and he has scored 13 points in the first 20 shots then the probability of him scoring in 21st shot is 13/20. What is the probability of scoring 66 points in the 100 shots (including the first two) The general solution is for any $1 \le x \le n-1$ : $$P(x, n) = \frac{1}{n - 1}$$ where for the specific problem ( $n=100$ and $x=66$ ) is $\frac{1}{99}$ . I solved the problem using the factorial approach. I also understand how the following recurrence is formed and the answer is verified using induction in this answer , in which we need to know the answer . My question is, h
Here I explained how the two-dimensional recurrence can be solved directly without knowing the answer. Let us first fix $x=1$ . Then, we have the following recurrence: $$P(1,n) = P(1,n-1)\left(\frac{n-2}{n-1}\right).$$ The solution of this is given by $$P(1,n) = \frac{c}{n-1},$$ where from $P(1,2)=1$ , we have $c=1$ . Then, for $x=2$ , we have the following recurrence: $$P(2,n) = \frac{1}{(n-2)(n-1)} + P(2,n-1)\left(\frac{n-3}{n-1}\right)$$ The solution of this is given by $$P(2,n) = \frac{2c+n}{(n-1)(n-2)},$$ where from $P(2,3)=P(1,2) \times \frac{1}{2} +0=\frac{1}{2}$ , we have $c=-1$ . Next, for $x=3$ , we have the following recurrence: $$P(3,n) = \frac{2}{(n-2)(n-1)} + P(3,n-1)\left(\frac{n-4}{n-1}\right)$$ The solution of this is given by $$P(3,n) = \frac{6c+n^2-5n+6}{(n-1)(n-2)(n-3)},$$ where from $P(3,4)=P(2,3) \times \frac{2}{3} +0=\frac{1}{3}$ , we have $c=0$ . This can be continued in the same way. As shown above, our problem reduces to solve the following one-dimensional rec
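The closed form $P(x,n)=\frac{1}{n-1}$ can also be confirmed by running the recurrence exactly in rational arithmetic (my own check, not part of the answer):

```python
from fractions import Fraction

# P maps points scored -> probability; after 2 shots, P(1,2) = 1.
P = {1: Fraction(1)}
for n in range(2, 100):                 # advance from n shots to n+1 shots
    Q = {}
    for x, p in P.items():
        Q[x + 1] = Q.get(x + 1, Fraction(0)) + p * Fraction(x, n)      # score
        Q[x] = Q.get(x, Fraction(0)) + p * Fraction(n - x, n)          # miss
    P = Q
```

After 100 shots the distribution is exactly uniform: every score $1\le x\le 99$ , including $x=66$ , has probability $\frac{1}{99}$ .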
|probability|combinatorics|probability-theory|recurrence-relations|
1
How special is $z_1^2 + z_2^2 + z_3^2 = z_1 z_2 + z_2 z_3 + z_3 z_1$?
The well-known identity for complex points forming an equilateral triangle reads $$z_1^2 + z_2^2 + z_3^2 = z_1 z_2 + z_2 z_3 + z_3 z_1$$ I have a doubt concerning the uniqueness of this identity Is $z_1^2 + z_2^2 + z_3^2 - z_1 z_2 - z_2 z_3 - z_3 z_1$ the only quadratic form in 3 variables that when replacing with the complex points of an equilateral triangle is equal to 0? Another related question, in case that's not unique, Is $z_1^2 + z_2^2 + z_3^2 - z_1 z_2 - z_2 z_3 - z_3 z_1$ the smallest quadratic form, in the sense of the smallest absolute determinant of its associated matrix? A quadratic form in three variables has 6 parameters, and I've recognized 3 equations: 1) the quadratic form is equal to $0$ , 2) the translated quadratic form when $z_i \mapsto z_i+\zeta$ for each $i$ is equal to 0 (translation invariance), and 3) the rotated quadratic form when $z_i \mapsto z_ie^{i\pi/3}$ is equal to $0$ (rotational invariance). How can I narrow down the other parameters?
I went into full calculation, it's not very elegant: The vertices of an arbitrary regular triangle, given the parameters $\tilde{z}$ , $R$ , and $\theta$ , are as follows: $$z_{1} = \tilde{z} + R e^{i\theta}$$ $$z_{2} = \tilde{z} + R e^{i\left(\theta + \frac{2\pi}{3}\right)}$$ $$z_{3} = \tilde{z} + R e^{i\left(\theta + \frac{4\pi}{3}\right)}$$ Replacing these points into $$az_{1}^2+bz_{2}^2+cz_{3}^2+dz_{1}z_{2}+ez_{1}z_{3}+fz_{2}z_{3}$$ gives $$e^{2 i \theta} R^2 \left(a - (-1)^{\frac{1}{3}} b + (-1)^{\frac{2}{3}} c + (-1)^{\frac{2}{3}} d - (-1)^{\frac{1}{3}} e + f\right) + \frac{1}{2} e^{i \theta} R \tilde{z} \left(4a + 4(-1)^{\frac{2}{3}} b - 2c - 2i \sqrt{3} c + d + i \sqrt{3} d + e - i \sqrt{3} e - 2f\right) + \left(a + b + c + d + e + f\right) \tilde{z}^2$$ Therefore, we have three equations to satisfy (considering any value of $\tilde{z}$ , $R$ , or $\theta$ ) $$ a - (-1)^{\frac{1}{3}} b + (-1)^{\frac{2}{3}} c + (-1)^{\frac{2}{3}} d - (-1)^{\frac{1}{3}} e + f=0$$ $$4a + 4(-1)^{\fr
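The parametrization used above also gives a direct numerical verification of the original identity over random equilateral triangles (my own check):

```python
import cmath, math, random

random.seed(1)
worst = 0.0
for _ in range(100):
    center = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    R = random.uniform(0.1, 5.0)
    theta = random.uniform(0.0, 2 * math.pi)
    z1, z2, z3 = (center + R * cmath.exp(1j * (theta + 2 * math.pi * k / 3))
                  for k in range(3))
    val = z1**2 + z2**2 + z3**2 - z1*z2 - z2*z3 - z3*z1
    worst = max(worst, abs(val))
```

The expression vanishes to rounding error for every random equilateral triangle tested.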
|geometry|complex-numbers|triangles|quadratic-forms|
0
Question about a part in Polya's Theorem
I think I have already wasted too much time thinking about this, so bear with me. I have a question about a specific step in Polya's Theorem I can't seem to find a simple solution to. For reference I'm attaching the source. The bit I don't quite grasp is the part at the end: "Thus $\int_C \cdots \neq 0$ , since there is exactly one non-zero term." I see that the highest order pole at $\infty$ of $h(z)^\kappa z^{r'-\kappa r}$ is $z^{r'}$ and thus the integral is non-zero by definition. Every other term of the form $z^m$ for $m < r'$ evaluates to zero, because of the definition. Similarly for the terms with poles at $p_k$ . The problem I have is that the definition of $r'$ and $s_k'$ only takes care of the "pure integrals", while there also appear mixed terms. Such a term, for example, would be $$\int_C f(z) h(z)^N z^{r'-r} \left(\frac{q_k}{z-p_k}\right)^{s_k} \, {\rm d}z = 0 \, ,$$ but I think that is non-obvious. Surely, each factor $z^{r'-r}$ and $\left(\frac{q_k}{z-p_k}\right)^{s_k}$ indiv
I guess I have found a way; correct me if I'm wrong. We do know $$\oint_C f(z) h(z)^N z^m = 0 \qquad \forall m < r' $$ $$\oint_C f(z) h(z)^N \left(\frac{q_k}{z-p_k}\right)^m = 0 \qquad \forall m < s_k' $$ As for the case $$\oint_C f(z) h(z)^N h(z)^\kappa z^{r'-\kappa r} \neq 0$$ with $r'-\kappa r\leq r$ , expanding $h(z)^\kappa z^{r'-\kappa r}$ gives rise to terms of the form $$z^m \left(\frac{q_{1}}{z-p_{1}}\right)^{m_1} \left(\frac{q_{2}}{z-p_{2}}\right)^{m_2} \cdots \left(\frac{q_{l}}{z-p_{l}}\right)^{m_l} \tag{1}$$ with $0 \leq m_k \leq \kappa s_{k}$ ( $1\leq k\leq l$ ) and $r'-\kappa r \leq m \leq r'$ with $m=r'$ only if $m_k=0$ for all $k$ . Therefore and wlog we can assume $m_{1} \geq m_{2} \geq ... \geq m_{l} \geq 0$ with $m < r'$ . With the expansion about $z=p_1$ $$z^m \left(\frac{q_{2}}{z-p_{2}}\right)^{m_2} \cdots \left(\frac{q_{l}}{z-p_{l}}\right)^{m_l} = \sum_{n=0}^\infty c_{1,n} \left(\frac{z-p_{1}}{q_{1}}\right)^{n} \tag{2} \, ,$$ we can re-write (1) as $$z^m \left(\frac{q_{1}}{z-p_{1}}\right)^
|complex-analysis|analysis|
0
Technical issue with empty metric space
I was reading through the appendix of Lee's Introduction to Topological Manifolds and came across the following exercise in the section on metric spaces: Exercise B.11 Let $M$ be a metric space and $A \subseteq M$ be any subset. Prove that the following are equivalent: A is bounded. A is contained in some closed ball. A is contained in some open ball. Now, this is straightforward if $M$ is nonempty, even in the case $A = \varnothing$ , but what happens if $M = A = \varnothing$ ? We still have that $A$ is bounded (vacuously), but can we still say that $A$ is contained in a closed/open ball? The only subset of $M$ is the empty set, which is not a ball (at least from how I read the definition of an open/closed ball). It seems to me that the result must be false in this degenerate case. Am I correct in this assertion?
You are correct. The empty set $\emptyset$ does not have any balls. Not under standard definitions anyway. An open ball of radius $r>0$ centered around $x\in M$ is defined as $$B(x,r)=\{y\in M\ |\ d(x,y)<r\}$$ Then an open ball (without specifying $x$ and $r$ ) is typically defined as $B(x,r)$ for some $x\in M$ and $r>0$ . So it forces existence of $x\in M$ (and $r>0$ which is not relevant here). In particular the collection of all open balls is given by $$\mathcal{B}(X)=\{B(x,r)\ |\ x\in X, r>0\}$$ What I said so far is that $\mathcal{B}(\emptyset)=\emptyset$ , while $\mathcal{B}(X)\neq\emptyset$ when $X\neq\emptyset$ . Also note that typically balls are nonempty because $r>0$ and so $x\in B(x,r)$ . If you allow $r=0$ or even $r<0$ (for closed balls) then it might be empty. This kind of definition is problematic though, is $B(x,-1)=\emptyset$ really an open ball around $x$ ? But still the empty set won't contain balls, even if we allow them to be empty. In other words it may happen that $\empt
|metric-spaces|
1
If $x,y,z>0$ and $x+y+z=1$,then find the maximum value of $(1-x)(2-y)(3-z)$.
If $x,y,z>0$ and $x+y+z=1$ , then find the maximum value of $(1-x)(2-y)(3-z)$ . My Attempt: We have $0<x,y,z<1$ , so the A.M-G.M inequality cannot be used since we can never have $1-x=2-y=3-z$ . Is there any other way out
As pointed out in the comments, equality holds at a boundary point. With that in mind, we use a smoothing argument, and show that $$f(x,y,z) \leq f(0, y, x+z) \leq f(0, 0, x+y+z) = f(0, 0, 1) = 4.$$ Each of the inequalities should be obvious. Holding $x+z$ constant, $(1-x) \times (3-z)$ is maximized when the terms are as close to $\frac{4-x-z}{2}$ as possible. We can't set $1-x = 3-z$ as that requires $ 1 \geq z-x = 3-1 = 2$ . (Notice that the AM-GM attempts fail at this step, because they don't consider the boundary condition.) This is where we run into the boundary condition of $ x \geq 0, z \leq 1$ . The expression is maximal when $x^* = 0, z^* = x+z$ . Likewise, holding $y+z$ constant, then $(2-y) \times (3-z)$ is maximized when the terms are as close together as possible. As before, the expression is maximal when $y^* = 0, z^* = y+z$ . Notes Aig's solution is similar. Expressed along these lines, they are saying that when we hold $y$ constant (which is equivalent to holding $x+z$
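A brute-force check of the smoothing argument's conclusion (mine, not the answer's): a grid search over the closed simplex finds the supremum $4$ at the boundary corner $(0,0,1)$ .

```python
# Grid search over the closed simplex x + y + z = 1, x, y, z >= 0.
# The value 4 is attained only at the boundary point (0, 0, 1), which is
# why equality-based AM-GM fails in the open region x, y, z > 0.
steps = 200
best = 0.0
for i in range(steps + 1):
    x = i / steps
    for j in range(steps + 1 - i):
        y = j / steps
        z = 1 - x - y
        best = max(best, (1 - x) * (2 - y) * (3 - z))
```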
|calculus|algebra-precalculus|inequality|maxima-minima|
0
Task Assignment Problem using MILP (tasks >> agents)
I have a general assignment problem that assigns a set of payload tasks $T$ to a set of workers $A$ , where $|T| \gg |A|$ . Each task $T_i \in T$ consists of a tuple $(s_i, g_i)$ , which represents the start and goal position of the task. I would like a MILP formulation, similar to that which is used for the Hungarian Algorithm, for this problem that I can use something like Gurobi to solve. Here are some assumptions I am making: Agents start at the position of their first assigned task. Thus, there is no assigned cost for an agent starting their first task. All distances from any position can be efficiently calculated. Agents need not be assigned the same number of tasks, and all agents need not be used to solve the set of tasks. A task can only be completed by a single agent, once. Tasks are not repeated or split amongst agents. My biggest problem is figuring out how to incorporate the cost of traveling from one task to another. For example, if agent $A_i$ is assigned tasks $T_j$ and $T_k$ in sequence, then the cost should include the travel from the goal position $g_j$ of $T_j$ to the start position $s_k$ of $T_k$.
I recommend a network-based formulation in a directed network defined as follows. Each task is a node, and there is a directed arc from task $T_j$ to task $T_k$ if it is possible for a single agent to perform $T_j$ and then $T_k$ . There is also a source node, with supply $n$ , and an arc from the source to each task. Similarly, there is a sink node, with demand $n$ , and an arc from each task node to the sink. Finally, introduce an arc from source to sink. You have a nonnegative integer flow variable for each arc and two linear constraints for each task node: the incoming flow equals 1, and the outgoing flow equals 1.
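For very small instances, the cost structure that the network formulation encodes (agents pay goal-of-previous to start-of-next travel between consecutive tasks, and nothing before their first task) can be checked by brute force. The toy instance below is made up for illustration; it is not the MILP itself, just the objective the MILP would optimize:

```python
from itertools import permutations, product
from math import dist

# Hypothetical toy instance: 3 tasks given as (start, goal) pairs, 2 agents.
tasks = [((0, 0), (1, 0)), ((1, 0), (2, 0)), ((10, 0), (11, 0))]
n_agents = 2

def route_cost(route):
    # Agents start at their first task's start position (no initial cost);
    # between consecutive tasks we pay goal-of-previous -> start-of-next.
    return sum(dist(tasks[a][1], tasks[b][0]) for a, b in zip(route, route[1:]))

# Brute force: assign each task to an agent, then order each agent's tasks.
best = float("inf")
for assign in product(range(n_agents), repeat=len(tasks)):
    groups = [[t for t, g in enumerate(assign) if g == a] for a in range(n_agents)]
    total = sum(min(route_cost(r) for r in permutations(grp)) for grp in groups)
    best = min(best, total)

print(best)  # 0.0: one agent chains T0 -> T1 (zero travel), the other takes T2
```

In the network formulation, the arc from task $T_j$ to task $T_k$ would carry exactly this `dist(tasks[j][1], tasks[k][0])` cost, and the source/sink arcs carry cost zero.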
|linear-programming|mixed-integer-programming|
1
Examples of integer sequences that have a distribution approx $1/\log(n)$, like the primes do?
It is well known that the primes are distributed such that they occur with an approximate "likelihood" of $1/\log(n)$ around the integer $n$ - or more precisely, the number of primes up to $n$ is $$\pi(n) \sim \int_2^n \frac{dt}{\log(t)}.$$ Question: Are there other sequences that have a distribution such that the likelihood of $n$ being a member of that sequence is approx $1/\log(n)$ ? Further Question: What properties or constraints would such sequences need to adhere to?
There are a lot of such sequences. For example: $p_n + 1$, $p_n + 7$, $p_n + (-1)^n$, where $p_n$ denotes the $n$-th prime.
|number-theory|distribution-of-primes|integer-sequences|
0
Justifying the existence of an improper integral
Function with 2 variables — can someone please help me with this?
Upon making the change $t \to t\sqrt{x}$ the integral becomes: \begin{equation} F(x) = \frac{\pi}{4} \frac{\ln(x)}{\sqrt{x}}+\frac{1}{\sqrt{x}}\int_0^\infty\frac{\ln(t)}{1+t^2}dt \end{equation} The last integral is found to be $0$ by the change $t \to \frac{1}{t}$ . So $F(x)$ exists and is defined for $x>0$ as: \begin{equation} F(x) = \frac{\pi}{4} \frac{\ln(x)}{\sqrt{x}} \end{equation}
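The substitution used in the answer suggests that the integral in the question (which was given as an image) is $F(x)=\int_0^\infty \frac{\ln t}{x+t^2}\,dt$; under that assumption, the closed form can be checked numerically:

```python
from math import exp, log, pi, sqrt

def F_numeric(x, lo=-30.0, hi=30.0, steps=60_000):
    # Substitute t = e^s, so F(x) = ∫ s·e^s / (x + e^{2s}) ds over the real
    # line; the integrand decays exponentially in both directions, so a plain
    # trapezoid rule on a large finite window is accurate.
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        s = lo + i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * s * exp(s) / (x + exp(2 * s))
    return total * h

x = 4.0
print(F_numeric(x), pi / 4 * log(x) / sqrt(x))  # the two values closely agree
```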
|real-analysis|integration|analysis|improper-integrals|
1
A question about prime numbers, totient function $ \phi(n) $ and sum of divisors function $ \sigma(n) $
I noticed something interesting with the totient function $ \phi(n) $ and the sum of divisors function $ \sigma(n) $ when $n > 1$ . It seems that: $ \sigma(4n^2-1) \equiv 0 \pmod{\phi(2n^2)}$ only if $ 2n - 1 $ is a prime number. For example: $ \sigma(4 \cdot 6^2-1) \equiv 0 \pmod{\phi(2 \cdot 6^2)} $ and $ 2 \cdot 6 - 1 = 11$ and $11$ is a prime number. $ \sigma(4 \cdot 7^2-1) \equiv 0 \pmod{\phi(2 \cdot 7^2)} $ and $ 2 \cdot 7 - 1 = 13$ and $13$ is a prime number. I found this sequence of primes: $``3,5,11,13,19,29,31,53,67,83,103,113,131,139,193,233,251,271,313,383,389 ..."$ I've checked until $n = 1000000$ and I didn't find any counterexamples. This sequence is not in the OEIS. I would like to know why some primes are here and some primes are not. Is this only a coincidence or not? And if not, is there a way to prove it?
This question is certainly difficult to solve. Let $x=2n$ . As @Lucid has stated in the comments, it is easy to show that $x\mid\phi\left(\frac{x^2}{2}\right)$ , so by transitivity $\phi\left(\frac{x^2}{2}\right) \mid \sigma(x-1)\sigma(x+1)$ . As $\sigma(x-1)=x$ if $x-1$ is prime, to validate the OP conjecture it would suffice to show that $x \nmid \sigma(x-1)\sigma(x+1)$ unless $\sigma(x-1)=x$ . But this task seems quite difficult.
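The observation is easy to reproduce computationally. The sketch below recomputes the OP's sequence for $2 \le n \le 300$ with naive trial-division arithmetic and checks the "divisibility implies $2n-1$ prime" direction on that range (the OP already verified it up to $10^6$):

```python
def factorize(m):
    # trial-division factorization: {prime: exponent}
    f, d = {}, 2
    while d * d <= m:
        while m % d == 0:
            f[d] = f.get(d, 0) + 1
            m //= d
        d += 1
    if m > 1:
        f[m] = f.get(m, 0) + 1
    return f

def sigma(m):  # sum of divisors
    out = 1
    for p, e in factorize(m).items():
        out *= (p ** (e + 1) - 1) // (p - 1)
    return out

def phi(m):  # Euler's totient
    out = m
    for p in factorize(m):
        out = out // p * (p - 1)
    return out

def is_prime(m):
    return m > 1 and all(m % d for d in range(2, int(m ** 0.5) + 1))

# Whenever phi(2n^2) divides sigma(4n^2 - 1), check that 2n - 1 is prime.
hits = [n for n in range(2, 301) if sigma(4 * n * n - 1) % phi(2 * n * n) == 0]
assert all(is_prime(2 * n - 1) for n in hits)
print([2 * n - 1 for n in hits][:8])  # → [3, 5, 11, 13, 19, 29, 31, 53]
```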
|number-theory|prime-numbers|totient-function|divisor-sum|
0
How to combine two probabilities for the same event? Context: error correction codes / decoding
I'm learning the maths behind error correction codes. For this purpose I made this question for myself: Assume there are two random bits $x_0$ , $x_1$ , which are both i.i.d. and have a 50% chance of being 0 or 1 (the information bits) and an additional check bit $x_2 = x_0 \mathbin{\mathsf{XOR}} x_1$ . You now transfer all 3 bits through an additive white gaussian noise channel (AWGNC), one by one. The AWGNC adds noise to each bit independently and on the receiver side, you can only restore each bit with some probability, depending on what you received. You do this for each bit individually and independently of the other bits and conclude that the 3 probabilities of the bits to be 1 are $p = (0.2, 0.9, 0.7)$ , i.e. $P(x_0 = 1 \;|\; \text{given the noisy version of $x_0$ that you received}) = 0.2$ $P(x_1 = 1 \;|\; \text{given the noisy version of $x_1$ that you received}) = 0.9$ $P(x_2 = 1 \;|\; \text{given the noisy version of $x_2$ that you received}) = 0.7$ Obviously, the best guess should somehow combine these per-bit probabilities with the parity constraint $x_2 = x_0 \mathbin{\mathsf{XOR}} x_1$. How do I combine these probabilities?
Enumerate all 4 possibilities for the correct values of $x$ . Apply Bayes rule. Let $x=(x_0,x_1,x_2)$ denote the true values. Let $\tilde{x}=(\tilde{x}_0,\tilde{x}_1,\tilde{x}_2)$ denote the observed values. There are four possibilities for $x$ , i.e., 000, 011, 101, 110. You know the prior on $x$ , i.e., you can compute the probabilities $\Pr[x=abc]$ for each possible $abc$ : specifically, for each $abc \in \{000,011,101,110\}$ , you have $\Pr[x=abc] = 1/4$ . You also have a model for the channel (based on the parameters of the AWGNC), i.e., you are given $$q_i = p(\tilde{x}_i=d | x_i=a)$$ where $p(\cdot | x_i=a)$ is the pdf of $\tilde{x}_i$ , conditioned on $x_i=a$ . Finally, your goal is to compute $$\Pr[x = abc | \tilde{x} = def],$$ where $def$ are the observed values of $\tilde{x}$ and $abc$ are the hypothesized/inferred values of $x$ . This conditional probability can be computed with Bayes rule, i.e., $$\Pr[x = abc \mid \tilde{x} = def] = \frac{p(\tilde{x}=def \mid x=abc)\,\Pr[x=abc]}{\sum_{a'b'c'} p(\tilde{x}=def \mid x=a'b'c')\,\Pr[x=a'b'c']}.$$
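This enumeration is small enough to carry out directly for the numbers in the question. One detail worth noting: since each bit has a uniform prior, the per-bit posteriors $p_i$ are proportional to the channel likelihoods, so the codeword posterior factorizes (up to normalization) into per-bit factors:

```python
from math import prod

# Per-bit posteriors P(x_i = 1 | received) from the question: p = (0.2, 0.9, 0.7).
p = (0.2, 0.9, 0.7)
codewords = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]  # x2 = x0 XOR x1

# Weight of a codeword = product of p_i if bit i is 1, else (1 - p_i);
# normalizing over the four valid codewords implements Bayes rule.
weights = {cw: prod(p[i] if cw[i] else 1 - p[i] for i in range(3))
           for cw in codewords}
Z = sum(weights.values())
post = {cw: w / Z for cw, w in weights.items()}

best = max(post, key=post.get)
print(best, round(post[best], 3))  # (0, 1, 1) 0.846

# Refined per-bit probability, e.g. P(x0 = 1 | all three observations):
p0 = post[(1, 0, 1)] + post[(1, 1, 0)]
print(round(p0, 3))  # 0.114
```

So incorporating the parity check sharpens the individual estimate $P(x_0=1)=0.2$ down to about $0.114$.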
|statistics|conditional-probability|statistical-inference|
0
Proof of Triangle Inequality for $d(g; x, y) = \left(|x-y|^4 + g\,| x \times y |^2\right)^{\frac{1}{4}}$
I am seeking assistance in proving that a function, denoted as $d(g; x, y)$ , defined on $\mathbb{R}^2 \times \mathbb{R}^2$ and parameterized by the non-negative real number $g$ , may satisfy the triangle inequality. The function is defined as follows: \begin{align} d(g; x, y) &:= \left(|x-y|^4 + g\,| x \times y |^2\right)^{\frac{1}{4}} \\ &= \left( \left((x_1 - y_1)^2 + (x_2 - y_2)^2\right)^2 + g\,(x_1\,y_2 - x_2\,y_1)^2 \right)^{\frac{1}{4}} \end{align} where $x=(x_1,x_2),\,y=(y_1, y_2)$ . It is noteworthy that when $g=0$ , $d(g;x,y)$ coincides with the Euclidean distance. My ultimate goal is to prove that $d(g; x, y)$ is a distance function on $\mathbb{R}^2$ for a certain range of $g$ . While it is trivial that $d(g; x, y)$ is non-degenerate and symmetric with respect to $x$ and $y$ , the proof of the triangle inequality is not straightforward. Through numerical calculation, I observed that the triangle inequality seemed to hold in the range $0\leq g \leq 6$ . In other words, the conjecture is that $d(g;x,y)$ satisfies the triangle inequality precisely when $0\leq g \leq 6$.
Here is half of the answer: I am going to exhibit a family of counter-examples in the cases $g>6$ where the triangle inequality is violated: $$d(g;x,y)+d(g;y,z) < d(g;x,z) \tag{1}$$ A first remark: the function $d$ is rotation-invariant and homogeneous: $$\begin{cases}d(g;R(u),R(v))&=&d(g;u,v)\\d(g;k.u,k.v)&=&k.d(g;u,v)\end{cases} \tag{2}$$ The first property above allows us to assume WLOG that the central point $y$ in (1) can be taken on the real axis (see (4) below). I have begun with a vast simulation making evident that the violation of the triangle inequality occurs for any $g > 6$ . For example, in the case $g=6.1$ , among $10^5$ random triangles in the unit disk, only 3 of them violate the triangle inequality, all with the peculiar shape of (almost) flat isosceles triangles. I will use the complex number representation, which is slightly more convenient. One can write, by identifying vectors and their affixes (complex numbers): $$d(g;a,b)^4=\left(|a-b|^2\right)^2+g.\left(\Im(\overline{a}b)\right)^2\tag{3}$$
|geometry|inequality|
0
Stable coordinates in Indoor localization
I am implementing an indoor localization demo for a project. In it, I am using ultra-wideband technology to calculate the distances from three different anchors to the moving object; then I use the least-squares method with trilateration to calculate the coordinates of the moving object. However, due to some environmental factors (perhaps objects), the distance values change a little even when the object is not moving, so the coordinates also change a little. Can you suggest a mathematical model (or algorithm) to minimize this coordinate instability and get a smoother coordinate output? Thanks in advance!
I suggest you use maximum likelihood inference, which is a very general and powerful technique, as long as you have a statistical model for the behavior of the object and for the observations. Let me start by illustrating the technique, if you know the object is not moving. Specifically, if $x$ represents the true position of the object, and $y=(y_1,\dots,y_k)$ represent observations (e.g., $y_i$ contains the three distances in the $i$ th measurement), define the likelihood $p(x|y)$ as the probability that its true position is $x$ , given that you've observed $y$ . Then, use optimization methods to compute $$\max_{\hat{x}} p(\hat{x}|y).$$ If you have a "noise model", i.e., a mathematical model for $p(y_i|x)$ (for example, maybe $y_i$ is a multivariate Gaussian with mean given by the distances between $x$ and the three anchors and some known standard deviation), then you can use Bayes rule and an independence assumption to express $p(x|y)$ as a function of the $p(y_i|x)$ 's and the prior on $x$.
|linear-algebra|statistics|algorithms|mathematical-modeling|
0
Pizza-topping problem with 8 slices, 10 toppings and constraints
We have to prepare a pizza with 8 slices, and have 10 toppings to put on the pizza. We can put only one topping on each slice but can use the same topping on zero or more slices. In how many unique ways can we prepare the slices so that the same topping is not used on adjacent slices? I have seen this question with 4 slices and 5 toppings, but that case seems to be visualizable. With 8 slices, I am confused about the part where we select 7 or fewer toppings with identical objects in a circular permutation. Please help on how the problem should be approached and solved.
Since there are two conflicting answers and neither had been upvoted for a while, let’s do this using Burnside’s lemma . As determined at In how many ways can we colour $n$ baskets with $r$ colours? , there are $(-1)^n(r-1)+(r-1)^n$ ways to select toppings from a choice of $r$ toppings for $n$ slices in a circle. Here $r=10$ and $n=8$ , so that’s $(-1)^8(10-1)+(10-1)^8=43046730$ , in agreement with Ross Millikan’s spreadsheet. To find the number of rotationally inequivalent arrangements: The identity leaves all $43046730$ arrangements invariant; the rotation of order $2$ leaves $(-1)^4(10-1)+(10-1)^4=6570$ arrangements invariant; the $2$ rotations of order $4$ leave $(-1)^2(10-1)+(10-1)^2=90$ arrangements invariant, and the $4$ rotations of order $8$ leave $(-1)^1(10-1)+(10-1)^1=0$ arrangements invariant (because an arrangement left invariant by such a rotation would have to have all toppings the same and thus identical adjacent toppings). Thus by Burnside’s lemma there are $$ \frac18(43046730+6570+2\cdot 90+4\cdot 0)=5381685 $$ rotationally inequivalent arrangements.
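Both counts are easy to verify by machine: the cycle-coloring formula can be brute-forced for a small case, and the Burnside average can be taken over all eight rotations (a rotation by $k$ positions fixes $(-1)^g(r-1)+(r-1)^g$ arrangements, with $g=\gcd(8,k)$, which is $0$ when $g=1$, matching the order-8 rotations above):

```python
from itertools import product
from math import gcd

def f(n, r):
    # proper colorings of an n-cycle with r colors (no two adjacent equal)
    return (-1) ** n * (r - 1) + (r - 1) ** n

# Sanity check against brute force for the small case n=4 slices, r=5 toppings.
brute = sum(all(c[i] != c[(i + 1) % 4] for i in range(4))
            for c in product(range(5), repeat=4))
assert brute == f(4, 5) == 260

# Burnside over the rotation group of the 8-slice pizza with 10 toppings.
n, r = 8, 10
total = f(n, r)
inequivalent = sum(f(gcd(n, k), r) for k in range(n)) // n
print(total, inequivalent)  # 43046730 5381685
```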
|combinatorics|
0
How to prove unique solvability of an SDE?
I have a stochastic differential equation of the type: $$ dX(t) = \mu(t) X(t)dt + \sigma(t) X(t) dW(t) \tag{1} $$ However, my $\mu(t)$ is a complicated function of $t$ as well as $W(t)$ , somewhat like: $$ \mu(t) = t \cdot a(t) + b(t) + \sigma W(t) \tag{2} $$ I feel like $W(t)dt$ that I get by substituting $(2)$ into $(1)$ doesn't look "right". And I'm not sure how to get the $X(t)$ from that. Edit What I mean, is that my equation $(1)$ becomes: $$ dX(t) = (t \cdot a(t) + b(t))X(t)dt + \sigma W(t)X(t)dt + \sigma(t) X(t) dW(t) \tag{3} $$ My questions are: Assuming that I obtain a $X(t)$ , does that single-handedly prove that $(1)$ is uniquely solvable? I am not sure what conditions will demonstrate that something is uniquely solvable. Is the $ W(t)dt $ term that I get in equation $(3)$ sensible?
The equation $$ dX(t) = \left[t \cdot a(t) + b(t)+ \sigma W(t)\right] X(t)dt + \sigma(t) X(t) dW(t)=\mu(t)X(t)dt + \sigma(t) X(t) dW(t) $$ is a linear equation and it has a general solution given here: Solution to General Linear SDE. To be clear, all you need from these various functions is continuity.
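Equation (3) can also be simulated directly with the Euler–Maruyama scheme, which makes the $W(t)\,dt$ drift term concrete: at each step the current value of $W$ simply enters the drift coefficient. The coefficient choices below ($a$, $b$, $\sigma$, and the diffusion) are placeholders, not taken from the post:

```python
import random
from math import exp, sqrt

# Euler–Maruyama sketch for dX = (t·a + b + σ·W(t))·X dt + σ_t·X dW.
def simulate(T=1.0, steps=10_000, seed=1, noisy=True):
    random.seed(seed)
    a, b, sig, sig_t = 0.1, 0.05, 0.2, 0.3  # placeholder constants
    dt = T / steps
    X, W = 1.0, 0.0
    for i in range(steps):
        t = i * dt
        dW = random.gauss(0.0, sqrt(dt)) if noisy else 0.0
        drift = (t * a + b + sig * W) * X   # W(t) enters the drift directly
        X += drift * dt + sig_t * X * dW
        W += dW
    return X

# Sanity check: with the noise switched off, dX = (t·a + b)·X dt has the
# exact solution X(T) = exp(a·T²/2 + b·T).
print(simulate(noisy=False), exp(0.1 * 0.5 + 0.05))
```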
|stochastic-calculus|brownian-motion|stochastic-differential-equations|
0
Memoryless property with any random wait time
We see here a proof of the random-time memoryless property $P(X>T+s|X>T)=P(X>s)$ where $X\sim Exp(\lambda)$ and $T\ge 0$ is a continuous random variable independent of $X$ . The proof, however, relies upon the existence of a pdf $f_T$ for $T$ (and hence the continuity of $T$ ); this places restrictions on the random variable $T$ that I'd like to avoid. Can we prove this result independent of such a pdf for $T$ ? I imagine conditioning on $X$ instead should be possible in deriving a proof, but I can't seem to find anything on this.
You do not require $T$ to be continuously distributed . It is only required to be almost surely non-negative and independent of $X$ . For instance: Let $T$ be a discrete non-negative integer-valued random variable ( $T\in\Bbb N$ ). $$\begin{align}\mathsf P(X\gt T+s\mid X\gt T) &=\dfrac{\mathsf P(X\gt T+s)}{\mathsf P(X\gt T)}\\[1ex]&=\dfrac{\displaystyle\sum_{t\in\Bbb N}\mathsf P(X\gt t+s)\,\mathsf P(T=t)}{\displaystyle\sum_{t\in\Bbb N}\mathsf P(X\gt t)\,\mathsf P(T=t)}\\[1ex]&~~\vdots\\[1ex]&=\mathsf P(X\gt s)\end{align}$$ More generally: $$\begin{align}\mathsf P(X\gt T+s\mid X\gt T) &=\dfrac{\mathsf P(X\gt T+s)}{\mathsf P(X\gt T)}\\[1ex]&=\dfrac{\mathsf E(\mathsf E(\mathbf 1_{X\gt T+s}\mid T))}{\mathsf E(\mathsf E(\mathbf 1_{X\gt T}\mid T))}\\[1ex]&=\dfrac{\mathsf E(\mathrm e^{-\lambda(T+s)})}{\mathsf E(\mathrm e^{-\lambda T})}\\[1ex] &= \mathrm e^{-\lambda s}\\[1ex]&=\mathsf P(X\gt s)\end{align}$$
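The discrete case is easy to confirm by simulation; the particular choice of $T$ uniform on $\{0,1,2\}$ and $\lambda=1$, $s=0.5$ below is arbitrary:

```python
import random
from math import exp

random.seed(0)
lam, s = 1.0, 0.5
N = 200_000
cond_hits = survive_T = 0
for _ in range(N):
    X = random.expovariate(lam)
    T = random.choice([0, 1, 2])   # discrete wait time, independent of X
    if X > T:
        survive_T += 1
        if X > T + s:
            cond_hits += 1

estimate = cond_hits / survive_T       # estimates P(X > T + s | X > T)
print(estimate, exp(-lam * s))         # should be close to P(X > s) = e^{-λs}
```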
|probability|probability-distributions|conditional-probability|exponential-distribution|
0
Necessary & Sufficient Conditions for a set to be connected
This question is based on Proposition 1.4.13 in Basic Complex Analysis, by Marsden. Definition 1.4.12: A set $C \subset \mathbb{C}$ is not connected if there exists open sets $U$ and $V$ such that, a) $\, C \subset U \cup V$ , b) $\, C \cap U \not = \emptyset$ and $\, C \cap V \not = \emptyset$ , c) $(C \cap U) \cap (C \cap V) = \emptyset$ . If a set fails to be not-connected, then it's called connected . Proposition 1.4.13: A set $C$ is connected iff the only subsets of $C$ that are both open and closed relative to $C$ are the empty set and $C$ itself. I'm trying to prove this using proof by contradiction for both directions, $\Rightarrow (PBC):$ If $\exists C^*$ such that $C^* \subset C$ (strict) & $C^*$ is both open & closed relative to $C$ , then $C \subset C^* \cup V^*$ where $V^* = \mathbb{C} \backslash C^*$ is open and properties b) & c) hold, a contradiction. $\Leftarrow (PBC):$ If $\exists U,V$ which satisfy the conditions above. Then, $U \cap C$ is open relative to $C$ by $b)
We claim $C$ is not connected if and only if there exist relatively open non-empty $U', V' \subset C$ such that $U' \cup V' = C$ and $U' \cap V' = \emptyset$ . Given $U, V$ as in the definition, simply take $U' = U \cap C$ and $V' = V \cap C$ . Conversely, given $U', V'$ as above, there exist open $U, V \subset \mathbb C$ such that $U' = U \cap C$ and $V' = V \cap C$ . These sets have the properties required in the definition. The Proposition is equivalent to the following statement: A set $C$ is not connected iff there exists a subset $W \subset C$ with $W \ne \emptyset, C$ which is both open and closed relative to $C$ . Given such $W$ , the sets $U' = W, V' = C \setminus W$ have the properties in the above claim. Conversely, given $U', V'$ with these properties, take $W = U'$ .
|general-topology|
0
The simplest curve which is never straight and has a rational arc length.
This tweet claims to give an explanation for why one should expect the perimeter of a circle with a rational radius to be irrational. It doesn't strike me as that convincing (although feel free to disagree). A simple refutation would be to exhibit a smooth curve which is never straight, and yet has a rational arc length. Furthermore it needs to clearly not be smuggling any irrationality into its parameterization, so it should at least start and end at rational coordinates. (This second requirement is definitely a bit soft). So, what's the simplest curve one can come up with which fits the criteria?
Here's a simple family of curves such that the arc length over any interval $[a, b]$ with rational endpoints (and, say, $0 < a < b$) is rational. Fix $n \in \{2, 3, 4, \ldots\}$ and $c \in \Bbb Q \setminus \{0\}$ and define $$f(x) = c x^{n + 1} + \frac{1}{4 (n^2 - 1) c x^{n - 1}} .$$ In particular, the endpoints $(a, f(a))$ and $(b, f(b))$ are rational. The arc length element for the graph of $f$ is $$ds = \sqrt{1 + f'(x)^2} \,dx = \left((n + 1) c x^n + \frac{1}{4 (n + 1) c x^n}\right) dx,$$ and so the arc length of $f$ over $[a, b]$ is $$\int ds = \left. c x^{n + 1} - \frac{1}{4 (n^2 - 1) c x^{n - 1}}\right\vert_a^b = c (b^{n + 1} - a^{n + 1}) - \frac{1}{4(n^2 - 1)c}\left(\frac{1}{b^{n - 1}} - \frac{1}{a^{n - 1}}\right),$$ which in particular is rational.
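A quick numerical check of this construction for one member of the family ($n=2$, $c=1$, $[a,b]=[1,2]$), where the closed form gives the rational value $7 + \tfrac{1}{24}$:

```python
from math import sqrt

n, c = 2, 1.0
a, b = 1.0, 2.0

f_prime = lambda x: (n + 1) * c * x**n - 1 / (4 * (n + 1) * c * x**n)

def simpson(g, a, b, m=2000):
    # Simpson's rule with m (even) subintervals
    h = (b - a) / m
    s = g(a) + g(b)
    for i in range(1, m):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

numeric = simpson(lambda x: sqrt(1 + f_prime(x) ** 2), a, b)
closed = (c * (b**(n + 1) - a**(n + 1))
          - (1 / (4 * (n * n - 1) * c)) * (1 / b**(n - 1) - 1 / a**(n - 1)))
print(numeric, closed)  # both ≈ 7.0416667 = 7 + 1/24, a rational arc length
```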
|soft-question|rational-numbers|arc-length|
0
If $x,y,z>0$ and $x+y+z=1$,then find the maximum value of $(1-x)(2-y)(3-z)$.
If $x,y,z>0$ and $x+y+z=1$, then find the maximum value of $(1-x)(2-y)(3-z)$ . My Attempt: We have $0<x,y,z<1$, so the A.M-G.M inequality cannot be used, since we can never have $1-x=2-y=3-z$ . Is there any other way out?
Remarks : Once we motivate the point $(0, 0, 1)$ , we can write a terse solution. Often, the motivation of a solution is complicated while the solution itself is terse. We have \begin{align*} (1-x)(2-y)(3-z) &= (2 -2x - y + xy)(3 - z)\\ &\le (2 - 2x - y + x)(3 - z)\\ &= (2 - x - y)(3 - z)\\ &= (1 + z)(3 - z)\\ &= -(z - 1)^2 + 4\\ &\le 4. \end{align*}
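A grid search confirms the picture: on the open region $x,y,z>0$ the product stays strictly below $4$, and approaches $4$ near the boundary point $(0,0,1)$:

```python
# Grid search over the open simplex x + y + z = 1 with x, y, z > 0.
best = 0.0
m = 2000
for i in range(1, m):
    for j in range(1, m - i):
        x, y = i / m, j / m
        z = 1 - x - y          # z >= 1/m > 0 inside this loop
        best = max(best, (1 - x) * (2 - y) * (3 - z))
print(best)  # just below 4, attained near (x, y) = (1/m, 1/m)
```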
|calculus|algebra-precalculus|inequality|maxima-minima|
0
Why does $f(t)\le g(t)$ imply $\int_a^b f(t)\; dt\le \int_a^b g(t)\; dt$?
I've come across a proof of the classic limit definition of $e = \lim_\limits{n \to \infty}(1 + \frac{1}{n})^n$ that starts with letting $t$ be any (real, I'm assuming) number in the interval $[1, 1 + \frac{1}{n}]$ . The proof goes: $$\frac{1}{1 + \frac{1}{n}} \leq \frac{1}{t} \leq 1\Rightarrow \int_1^{1 + \frac{1}{n}}\frac{1}{1 + \frac{1}{n}}dt \leq \int_1^{1 + \frac{1}{n}}\frac{1}{t}dt\leq \int_1^{1 + \frac{1}{n}}1dt.$$ I fail to see how the first inequality implies the second.
Assume that $$\fbox{ \( g(x)\ge f(x)\) }$$ Then, $$g(x)-f(x)\ge0.$$ Let's define $h(x):=g(x)-f(x)$. Note that because of the above inequality, $h(x)\ge0$. Using the definition of the Riemann integral as a limit of Riemann sums, $$\int_a^bh(x)dx = \lim_{n\to \infty}\frac{b-a}{n}\sum_{k=0}^{n-1}h\left(a+\frac{(b-a)k}{n} \right)$$ We're cutting up the interval from $a$ to $b$ (so it has length $b-a$) into $n$ rectangles, where the number of rectangles goes to infinity. The factor $\frac{b-a}{n}$ represents the width of each rectangle, and the summand represents the height of $h$ at the $k$-th sample point. Note that $\frac{b-a}{n}\sum_{k=0}^{n-1}h\left(a+\frac{(b-a)k}{n} \right)\ge0$ holds for every $n$, because $h(x)$ is always non-negative. Therefore, taking the limit as $n$ goes to infinity, $$\lim_{n\to \infty}\frac{b-a}{n}\sum_{k=0}^{n-1}h\left(a+\frac{(b-a)k}{n} \right)\ge 0,$$ i.e. $\int_a^b h(x)\,dx \ge 0$, and hence $\int_a^b g(x)\,dx \ge \int_a^b f(x)\,dx$.
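Applied to the question's integrals, this monotonicity gives the concrete squeeze used in the proof about $e$, which is easy to check numerically:

```python
from math import log

# Monotonicity of the integral applied to 1/(1+1/n) <= 1/t <= 1 on
# [1, 1 + 1/n] gives:  (1/(1+1/n)) * (1/n)  <=  ln(1 + 1/n)  <=  1/n.
for n in (1, 10, 100, 10_000):
    lower = (1 / (1 + 1 / n)) * (1 / n)
    mid = log(1 + 1 / n)
    upper = 1 / n
    assert lower <= mid <= upper
    print(n, lower, mid, upper)
```

Multiplying the chain by $n$ and exponentiating is exactly how the bound $(1+\tfrac1n)^n \to e$ falls out.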
|real-analysis|limits|eulers-number-e|
1
where are the poles in the product of zeta functions, and where are their residues?
I am trying to find the poles and residues of an expression of the form $$\left(\sum_i \zeta(m_i) \right)^2$$ where $m_i\in \mathcal{M}$, some set of integers including 1. Now I know that $\zeta(s)$ has only a single simple pole, at $s=1$, so I would imagine that the square I have above has both simple poles arising from $$\zeta(1)\sum_{m_i \neq1} \zeta(m_i) $$ with residue $\sum_{m_i \neq1} \zeta(m_i)$, as well as a second order pole from $\zeta(1)\zeta(1)$ . On the other hand $$\left(\sum_i \zeta(m_i) \right)^2 = \left(\sum_i\sum_n n^{-m_i} \right)^2 = \sum_{i,j}\sum_n n^{-m_i-m_j} = \sum_{i,j}\zeta(m_i+m_j) = \sum_i p^2_{\mathcal{M}} (m_i)\zeta(m_i)$$ where $p^2_{\mathcal{M}} (m_i)$ is the number of solutions to $m+n = m_i$ with $m,n \in \mathcal{M}$ . In any case this only has a simple pole at $m_i=1$ with residue $p^2_{\mathcal{M}} (1)$ . I don't see why these two should be equivalent (in fact, barring some mathemagic, I can't see how they are), but I also don't see which of these is correct.
The error in the calculation is in the second equality. You correctly turned the index $i$ inside the square into two indices $i,j$ when expanding the square out, but you forgot to do the same with the single index $n$ —it should become two indices $k,n$ after squaring out. (Note that if this calculation were correct, it would show in particular the incorrect identity $\zeta(s)^2=\zeta(2s)$ .)
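The incorrect identity $\zeta(s)^2=\zeta(2s)$ that the flawed manipulation would imply is easy to refute numerically with partial sums of the Dirichlet series:

```python
from math import pi

def zeta(s, terms=1_000_000):
    # partial sum of the Dirichlet series (adequate for s >= 2)
    return sum(n ** -s for n in range(1, terms + 1))

# If squaring the series could merge the two summation indices into one,
# we would have zeta(2)^2 == zeta(4). Numerically:
z2, z4 = zeta(2), zeta(4)
print(z2 ** 2, z4)  # ~2.7058 vs ~1.0823 — clearly different
```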
|complex-analysis|riemann-zeta|
1
Statistics Question about the Probability of Absolute Values
I was tutoring a student today and was given a question that stated the following: Find $P(|z| > -0.29)$ . Seems simple enough. But this doesn't really make sense in concept. Firstly, absolute values always yield a non-negative result, so it wouldn't really make sense for this question. Would the answer simply be $1$ ? And secondly, for standard questions about $P(|z|>a)$ , is the formula simply $P(z>a) + P(z<-a)$ for values of $a>0$ ? For this one, it wouldn't make sense to just say $P(z>-0.29)+P(z<0.29)$ because you can't just flip the sign on absolute values. It just doesn't make sense for an absolute value equation to have a negative on the other side of the inequality. I tried finding a YouTube video or anything just searching "probability of absolute value" and found nothing. It's surprising that such a simple concept, found in the first statistics course you take, doesn't have anything to click on with that prompt, so that I can show my student a simpler example that makes more concrete sense.
The question "Find $P(|z| > -0.29)$ " has the obviously correct answer of $1$ . It is likely either that it was a test of obviousness, or that it was asked in error. As to your other question related to $P(|z|>a) = P(z>a)+P(z , that expression is only correct when the events $z>a$ and $z are mutually exclusive, as they are in this case when $a\ge 0$ but not necessarily when $a . More generally $P(A \cup B)=P(A) +P(B)-P(A \cap B)$ using inclusion-exclusion. So here you would have for general $a$ : $$P(|z|>a)=P((z>a) \cup (z a)+P(z and thus for $a \ge 0$ you have $P(a giving $$P(|z|>a) = P(z>a)+P(z while for $a \lt 0$ you have $P(a $=1 - P(z \le a) - P(z \ge -a) $ $= 1 - (1- P(z>a))+(1-P(z a)+P(z giving $$P(|z|>a) = 1.$$
|probability|statistics|absolute-value|
0
Why is there no conditional inference rule in Sequent Calculus of these forms?
I'm wondering why sequent calculus doesn't have rules like these (at least in the ones I've come across): $$ \Gamma \vdash A \rightarrow B, \Pi \qquad \Delta \vdash A, \Sigma \over \Gamma, \Delta \vdash B, \Pi, \Sigma $$ And $$ \Gamma, A \rightarrow B \vdash \Pi \qquad \Delta, A \vdash \Sigma \over \Gamma, \Delta, A \rightarrow B, A \vdash \Pi \land B, \Sigma $$ Perhaps they are unnecessary because we can derive the same thing using the existing rules? But I don't see how, as we don't have any rules that involve a conditional on the left or right, above the inference line. I feel like I'm fundamentally misunderstanding sequent calculus, as I still think in terms of natural deduction.
If a sentence $P$ holds in first-order classical logic, then you can derive $\vdash P$ using the rules present in the usual Gentzen sequent calculus LK. No other rules are necessary. You don't need to add any extra rules; in particular you don't need to add rules like the ones you suggest. That should answer your question. Why does this hold? Well, the proper explanation depends on which sources you're using, and how these define first-order classical logic in the first place. Some works and authors define first-order classical logic as precisely the things you can prove using the rules of the sequent calculus LK, in which case the statement is trivial. Other authors define first-order classical logic using derivability in another formal proof system, such as a Hilbert system or a system of Natural Deduction: in that case, there will be a proof showing that the derivable sentences in LK and the system used to define first-order logic coincide. Yet other works define first-order logic using its model-theoretic semantics, in which case the statement amounts to the soundness and completeness theorems for LK.
|logic|sequent-calculus|
1
unbiased estimator of sample mean
The question: Given a random sample $X_1,...,X_n$ show that $\frac{1}{n}\sum_{i=1}^n X_i$ is an unbiased estimator for $E(X_1)$ . My confusion: Given a statistical model $(\Omega,\Sigma,p_{\theta})$ , where $p_{\theta}$ is a parameterized collection of probability measures, We define $E_{\theta}(X)=\int_{\Omega}X(\omega)p_{\theta}(d\omega)$ , where $X:\Omega\to\mathbb{R}$ . If $\frac{1}{n}\sum_{i=1}^nX_i$ is unbiased, we need $E_{\theta}(\frac{1}{n}\sum_{i=1}^nX_i)=\theta$ , when $\theta=E(X_1)$ . But what is $E(X_1)$ ? With respect to what measure are we integrating, if we're considering all these different measures on the space? For any $\theta$ , $E_{\theta}(\frac{1}{n}\sum_{i=1}^nX_i)=E_{\theta}(X_1)$ , because each $X_i$ is identically distributed. My guess is, you would say something like, "for a fixed probability measure $p_{\varphi}$ on $(\Omega,\Sigma)$ , $\frac{1}{n}\sum_{i=1}^nX_i$ is an unbiased estimator for $E_{\varphi}(X_1)$ ." Is this right?
Let $(\mathcal X,\mathcal F, \mathcal P)$ be a statistical model, i.e., $(\mathcal X,\mathcal F, P)$ is a probability space for every $P\in\mathcal P$ . A statistic $T:\mathcal X\rightarrow \mathbb R$ is called $\mathcal P$ -unbiased for $\vartheta$ iff $$\int_{\mathcal X}T(x)\,\mathrm dP(x) = \vartheta(P)$$ for each $P\in\mathcal P$ ; see Definition 1.1 in Chapter 2 of Theory of Point Estimation by Lehmann and Casella. Note that $\vartheta$ is a statistical functional, i.e. a function mapping $\mathcal P$ into $\mathbb R$ . For the parametric version, it is common to replace $\mathcal P$ by a parametric family indexed by $\Theta$ and write $\theta$ instead of $P_\theta$ (on the right hand side) for notational convenience. Example: Consider the parametric statistical model $\left(\mathbb R^n, \mathcal B(\mathbb R^n), \{P_\theta : \theta\in\mathbb R\}\right)$ , where $P_\theta:=\bigotimes_{i=1}^n\mathcal N(\theta, 1)$ denotes the product measure associated with $n$ iid $\mathcal N(\theta,1)$ random variables.
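For the Gaussian example, unbiasedness of the sample mean can be illustrated by Monte Carlo: averaging many independently computed sample means should recover $\theta$ (the values of $\theta$, $n$, and the replication count below are arbitrary):

```python
import random
from statistics import fmean

# Each replication draws a sample of size n from N(theta, 1) and records the
# sample mean; unbiasedness says the average of these means is close to theta.
random.seed(42)
theta, n, reps = 2.5, 10, 20_000
means = [fmean([random.gauss(theta, 1) for _ in range(n)]) for _ in range(reps)]
print(fmean(means))  # ≈ 2.5, standard error ~ 1/sqrt(n*reps) ≈ 0.002
```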
|statistics|definition|
1
Find the circle center point that is tangential on 2 other circles that has at least one intersection?
Given circle A and circle B (we have their center points, radii and one relevant point where they intersect), we want to draw a circle C with a given radius in such a way that it is tangential to the other two circles (there are up to 4 possible locations). How do we get the possible center points?
Hint: two circles being tangent at a point $P$ means that $P$ is on the line connecting the two centers. What does that mean? For example, $M, O_3, N$ are on the same line. Given the radii $R$ and $r$ of the two initial circles, the radius $x$ for the small circle, and the distance $d$ between the original centers, the triangle formed by the three centers has sides $d, R\pm x, r\pm x$ . That's a fully determined problem.
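Concretely, the center of C lies on a circle of radius $R\pm x$ around the first center and a circle of radius $r\pm x$ around the second, so each sign choice reduces to a circle-circle intersection (that is where the up-to-four solutions come from). A sketch of the externally tangent case, with made-up input values:

```python
from math import dist, sqrt

def tangent_circle_centers(c1, R, c2, r, x):
    # Externally tangent to both: the center is at distance R+x from c1 and
    # r+x from c2, i.e. on the intersection of two auxiliary circles.
    r1, r2 = R + x, r + x
    d = dist(c1, c2)
    a = (r1**2 - r2**2 + d**2) / (2 * d)   # standard circle-circle intersection
    h2 = r1**2 - a**2
    if h2 < 0:
        return []                           # no tangent circle of this radius
    h = sqrt(h2)
    ux, uy = (c2[0] - c1[0]) / d, (c2[1] - c1[1]) / d
    mx, my = c1[0] + a * ux, c1[1] + a * uy
    return [(mx - h * uy, my + h * ux), (mx + h * uy, my - h * ux)]

c1, R, c2, r, x = (0.0, 0.0), 3.0, (5.0, 0.0), 2.0, 2.0
centers = tangent_circle_centers(c1, R, c2, r, x)
for c in centers:
    print(c, dist(c, c1), dist(c, c2))  # distances are R+x = 5 and r+x = 4
```

The internally tangent variants use $R-x$ and/or $r-x$ in place of $R+x$, $r+x$.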
|trigonometry|circles|
0
Show space of test functions is complete
This is a statement from Rudin's Functional Analysis, section 1.46: Given an open set $\Omega \subset \mathbb{R}^n$ , choose compact sets $K_i$ such that $\Omega = \bigcup K_i$ and $K_i \subset K_{i+1}^{\mathrm{o}}$ . Then define seminorms for $\phi \in C^\infty(\Omega)$ : $$ p_N(\phi) = \max\{ |D^\alpha \phi(x)| : x \in K_N, |\alpha| \le N \} $$ The seminorms define a locally convex topology on $C^\infty(\Omega)$ . My question is, how do we show that this topology is complete? Rudin says after the definition that given a Cauchy sequence $\{\phi_i\}$ in this topology, we have $p_N(\phi_i - \phi_j) < \epsilon$ for $i, j$ sufficiently large, which means that $|D^\alpha\phi_i - D^\alpha\phi_j| < \epsilon$ on $K_N$ . Then he says that this implies that $D^\alpha\phi_i \rightarrow g_\alpha$ for some function $g_\alpha$ , and then we must have $\phi_i \rightarrow g_0$ and $g_\alpha = D^\alpha g_0$ . I'm not sure how the statements for the functions $g_\alpha$ follow here. How would we show that $g_0 \in C^\infty(\Omega)$?
Let $\{\phi_j\}$ be a Cauchy sequence in $\mathcal{D}(\Omega)$ . That is, $\exists K \subseteq \Omega$ compact such that $\text{supp}(\phi_j) \subseteq K \ \forall j \in \mathbb{N}$, and given $\epsilon > 0$ and $n \in \mathbb{N}_0$ , there exists $N = N(\epsilon, n) > 0$ such that $\forall j, k > N$ we have $$ \|\phi_j - \phi_k\|_{C^n(\Omega)} = \sum_{|\alpha| \leq n} \|D^{\alpha}(\phi_j - \phi_k)\|_{L^{\infty}(\Omega)} = \sum_{|\alpha| \leq n} \sup_{x \in \Omega} |D^{\alpha} (\phi_j(x) - \phi_k(x))| < \epsilon. $$ Then the sequence $\{\phi_j\}$ is Cauchy in $C^n_B = \{\phi \in C^n(\Omega) : \|\phi\|_{C^n(\Omega)} < \infty\}$. Since $C^n_B$ is a Banach space, $\{\phi_j\}$ converges in $C^n_B(\Omega)$ , say, to $\phi \in C^n_B(\Omega)$ . Since this is true for each $n \in \mathbb{N}$ , $$ \lim_{j \to \infty} \|\phi - \phi_j\|_{C^n(\Omega)} = 0 \quad \forall n \in \mathbb{N}, $$ i.e., $\phi_j \to \phi$ in $\mathcal{D}(\Omega)$ . Hence, $\mathcal{D}(\Omega)$ is complete.
|functional-analysis|distribution-theory|
1