| title (string) | question_body (string) | answer_body (string) | tags (string) | accepted (int64) |
|---|---|---|---|---|
The number of positive divisors of a number that are not present in another number.
|
How many positive divisors are there of 30^2024 which are not divisors of 20^2021? I have tried many ways to find a pattern for this problem, but I can't. I know that 30 has 8 divisors and 20 has 6 divisors, and the number of divisors of 30 not present in 20 is 4, but I just can't proceed from there.
|
Use the prime factorization of the given numbers, i.e. the fact that any natural number $x \in \mathbb{N}$ , $x > 1$ , has a unique factorization $x=p_1^{r_1}\cdot\dots\cdot p_k^{r_k}$ with $p_i$ distinct prime numbers and $r_i \in \mathbb{N}^*$ . The divisors of $x$ are then just all numbers of the form $p_1^{s_1} \cdot \dots \cdot p_k^{s_k}$ with $s_i \in \{0,1,\dots,r_i\}$ . Therefore, the divisors of $x^n$ are all numbers of the form $p_1^{s_1} \cdot \dots \cdot p_k^{s_k}$ with $s_i \in \{0,1,\dots,nr_i\}$ . If you now apply this observation to the specific numbers given (and their prime factorization which you already found), it should be easy to see which of the divisors of $30^{2024}$ are also divisors of $20^{2021}$ (and then count them).
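To make the counting concrete, here is a small Python sketch (function names are my own) that brute-forces the count for small exponents and checks it against the closed form this method yields: $30^a$ has $(a+1)^3$ divisors, and the ones that also divide $20^b$ are exactly the divisors of $\gcd(30^a,20^b)=2^{\min(a,2b)}5^{\min(a,b)}$.

```python
def divisors(n):
    # all positive divisors of n (brute force; fine for small n)
    return [d for d in range(1, n + 1) if n % d == 0]

def brute_count(a, b):
    # divisors of 30^a that do NOT divide 20^b
    x, y = 30 ** a, 20 ** b
    return sum(1 for d in divisors(x) if y % d != 0)

def formula_count(a, b):
    # 30^a = 2^a 3^a 5^a has (a+1)^3 divisors; those also dividing
    # 20^b = 2^(2b) 5^b are the divisors of 2^min(a,2b) * 5^min(a,b)
    return (a + 1) ** 3 - (min(a, 2 * b) + 1) * (min(a, b) + 1)
```

In particular, `formula_count(2024, 2021)` evaluates to $2025^3 - 2025\cdot2022$.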
|
|number-theory|prime-numbers|divisor-sum|
| 0
|
Gaussian fit for dice rolls has peculiar constants, looking for their origin
|
For a programming project I want to describe huge quantities of dice rolls using a gaussian. Hence I set out to find a relation between: $size$ = the number of sides a single die has, $amount$ = the number of same sided dice rolled at once, and the gaussian that describes the probability of rolling a specific total sum $x$ with all of these dice. Dice in this case mean some abstract object that on 'rolling' give a random integer number between 1 and $size$ (inclusive), just like normal dice would. Let the gaussian be: $$ P(x) = A * e^{-\frac{(x-\mu)^2}{2\sigma^2}} $$ Where usually $A = \frac{1}{\sigma \sqrt{2 \pi}}$ is used as normalisation constant while $\mu = amount * \frac{size + 1}{2}$ is straightforward. Now, using several parameters for $size$ and $amount$ I fitted simulated distributions to the gaussian using $A$ and $\sigma$ as fitting parameters. I found the following holds true in very good approximation when $size$ and $amount$ are both bigger than like 4: $$\sigma = a * si
|
You should take a look at the central limit theorem . The variance of a discrete uniform distribution (what you call a die is a special case of a discrete uniform distribution on an interval starting at $1$ ) is $\frac{n^2-1}{12}$ , where $n$ is the length of the interval. By the central limit theorem, if you add a large number $m$ of such independent uniformly generated numbers, the resulting distribution is approximated by a normal distribution with variance $m\cdot\frac{n^2-1}{12}$ . For large $n$ , the $-1$ can be neglected, so the standard deviation is asymptotically $\frac{n\sqrt m}{\sqrt{12}}$ . So your ‘peculiar constant’ is $a=\frac1{\sqrt{12}}\approx0.2887$ . As to $b$ , you seem to have inadvertently shifted it by some powers of $10$ ; as you rightly point out, it should be determined by $A=\frac1{\sigma\sqrt{2\pi}}$ and thus $b=\frac1{a\sqrt{2\pi}}=\sqrt{\frac6\pi}\approx1.382$ .
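As a sanity check, here is a quick simulation sketch (the names `size`/`amount` follow the question) comparing the empirical standard deviation of dice sums with the exact value $\sqrt{m(n^2-1)/12}$ and the asymptotic $n\sqrt{m}/\sqrt{12}$:

```python
import random
from math import sqrt

def empirical_sd(size, amount, trials=20000, seed=0):
    # standard deviation of the sum of `amount` fair dice with `size` sides
    rng = random.Random(seed)
    sums = [sum(rng.randint(1, size) for _ in range(amount)) for _ in range(trials)]
    mean = sum(sums) / trials
    return sqrt(sum((s - mean) ** 2 for s in sums) / trials)

def exact_sd(size, amount):
    # variance of one die is (size^2 - 1)/12; variances add for independent dice
    return sqrt(amount * (size ** 2 - 1) / 12)

def asymptotic_sd(size, amount):
    # drop the -1: sigma ~ size * sqrt(amount) / sqrt(12), i.e. a = 1/sqrt(12)
    return size * sqrt(amount) / sqrt(12)
```

For `size` and `amount` around 10 or more, all three values agree to well under a percent, which is exactly the fitted behaviour described in the question.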
|
|statistics|probability-distributions|
| 1
|
If the roots of $z^4+az^3+bz^2+z$ are distinct and concyclic in the complex plane, does $ab\in\mathbb R$ imply $1<ab<9$?
|
HMMT February 2022, Team Round, Problem 6 (proposed by Akash Das ) is: Let $\operatorname{\it P\!}{\left(x\right)}=x^4+ax^3+bx^2+x$ be a polynomial with four distinct roots that lie on a circle in the complex plane. Prove that $ab\neq9$ . Its solution may be found here . Recently, certain netizens claim ed that there exists a stronger result when $ab$ is real: Let $P(z)=z^4+az^3+bz^2+z$ be a polynomial with four distinct roots that are concyclic in the complex plane; then $ab\in\mathbb R\implies1 . Nevertheless, I could find neither any proof nor any counterexample of it. Does the above proposition hold? (And what if $ab\in{\mathbb C\setminus\mathbb R}$ ?) Edit. It appears that when $ab\in{\mathbb C\setminus\mathbb R}$ , $ab$ can take any values in the complex plane other than the real axis. Here is a simulation:
|
As said in the comments, the problem is equivalent to showing that if $ a b \in \mathbb{R}$ and the polynomial \begin{equation}P \left(z\right) = {z}^{3}+b {z}^{2}+a z+1\end{equation} has three colinear distinct roots on a line that does not contain zero, then $1 . We prove here that this result is true in the general case, when $ a$ and $ b$ are complex numbers. Let us define \begin{equation}a b = {m}^{3} , \qquad b = m c , \quad a = \frac{{m}^{2}}{c} , \quad m \in {\mathbb{R}}^{\ast } , \quad c \in {\mathbb{C}}^{\ast }\end{equation} hence \begin{equation}P \left(z\right) = {z}^{3}+m c {z}^{2}+\frac{{m}^{2}}{c} z+1\end{equation} Let $ \frac{c}{w} \in {\mathbb{C}}^{\ast }$ a direction vector of the line $ L$ containing the roots. This line contains the average of the three roots, which is $-\frac{m c}{3}$ , hence if $ z$ is a root, \begin{equation}z = m c \left(\frac{t}{w}-\frac{1}{3}\right) , \qquad t \in \mathbb{R}\end{equation} Substituting into $ P$ gives \begin{equation}P \left(z\
|
|complex-analysis|inequality|polynomials|contest-math|plane-geometry|
| 1
|
Proof that $\frac{\tanh(x)-1}{e^{-2x}} \to -2$ as $x \rightarrow \infty$.
|
I have to give a formal proof that the limit of $h(x) = \frac{\tanh(x)-1}{e^{-2x}}$ is $-2$ as $x \rightarrow \infty$ . For all $\epsilon>0$ there must be an $N>0$ such that $|h(x)-(-2)|<\epsilon$ for all $x>N$ . $$ \left| \frac{\tanh{x}-1}{e^{-2x}}-(-2) \right| = \left| \frac{2}{1+e^{2x}} \right|$$ Which means that $$\left|\frac{2}{1+e^{2x}}\right| \le \frac{2}{2x+2}=\frac{1}{x+1}$$ Therefore, given $\epsilon>0$ we can always choose $N = \frac{1}{\epsilon}-1$ , so that $$ | h(x) + 2 | \le \frac{1}{x+1} < \epsilon $$ Is this correct?
|
I don't think it's quite right, but it's close enough that I can comment on it all. The question statement is clear now, and so is your definition of the limit. Your algebra is right, so we just need to prove that $\frac{2}{1+e^{2x}} \to 0$ as $x \to \infty$ . You said $$\frac{2}{1+e^{2x}} \le \frac{2}{2x+2}$$ it would be good to say explicitly that this is due to $e^{2x} \ge 1+2x$ (and that this is true for $x>0$ ). You then say "we can choose $N=\frac1\epsilon-1$ ... which further means I can always choose $N$ to be $1/\epsilon$ " which is a bit confusing. You can't pick $N$ to have two different values. You also say $K$ , where I guess you mean $N$ , and say $\frac{1}{1+\frac{1}{\epsilon}} = \epsilon$ which is clearly wrong. It would be clearer to say: Given $\epsilon>0$ , choose $N = \frac1\epsilon - 1$ . Then for all $x>N$ $$|h(x)+2| \le \frac{1}{1+x} < \frac{1}{1+N} = \epsilon$$
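A quick numerical spot-check of the algebra (a sketch, not part of the formal proof): $h(x)+2$ should equal $\frac{2}{1+e^{2x}}$ and be bounded by $\frac{1}{x+1}$ for $x>0$.

```python
import math

def h(x):
    return (math.tanh(x) - 1) / math.exp(-2 * x)

# h(x) + 2 simplifies to 2/(1 + e^{2x}), which is at most 1/(x+1) for x > 0
for x in [0.5, 1.0, 2.0, 5.0]:
    assert abs((h(x) + 2) - 2 / (1 + math.exp(2 * x))) < 1e-9
    assert h(x) + 2 <= 1 / (x + 1)
```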
|
|real-analysis|solution-verification|
| 1
|
Prove that $ \|x\|_2 = \sup\left \{ x \cdot y \; s.t. \; \|y\|_2 = 1 \right \}$ and that $ x \cdot y = \|x\| $ and $ \|y\|=1$ iff $x = \|x\| y$
|
In $\mathbb{R}^d$ Question: a) Prove that $ ||x||_2 = Sup\left \{ x \cdot y \; s.t. \; ||y||_2 = 1 \right \}$ b) Prove that $ x \cdot y = ||x|| $ and $ ||y||=1$ iff $x = ||x|| y$ Answer: a) First, by definition $x \cdot y = ||x|| ||y|| \cos( \widehat{ x ; y} )$ . Now if $||y||=1 \Rightarrow x \cdot y = ||x|| \cos( \widehat{ x ; y} )$ . Then $Sup\left \{ x \cdot y \; s.t. \; ||y||_2 = 1 \right \} = Sup\left \{ ||x|| \cos( \widehat{ x ; y} ) \; s.t. \; ||y||_2 = 1 \right \}$ and indeed, this is equal to $||x||$ when $\cos( \widehat{ x ; y} ) = 1$ . In brief we have proved that $||x||_2 = Sup\left \{ x \cdot y \; s.t. \; ||y||_2 = 1 \right \}$ b) $(\Rightarrow)$ On one hand we have: $x \cdot y = ||x|| ||y|| \cos(\widehat{ x ; y})= ||x|| \Rightarrow \cos(\widehat{ x ; y}) = 1 \Rightarrow \widehat{ x ; y} = 0$ In other words the vectors $x$ and $y$ are collinear $\Rightarrow y =\alpha x $ . On the other hand we have that $||y||=1 \Rightarrow || x || | \alpha | =1 \Rightarrow |\alpha| = \frac{1}{|
|
Using the https://math.stackexchange.com/users/136544/daw comment. -Using the C.S. inequality, $\forall x,y \in \mathbb{R}^d $ we have $x \cdot y = \sum_{i=1} ^d x_i y_i \leq \left| \sum_{i=1} ^d x_i y_i \right| \leq \| x \|_2 \cdot \| y \|_2 $ a) If $\| y \|_2=1 \Rightarrow x \cdot y \leq \| x \|_2$ and so naturally we have that $Sup \left \{ x \cdot y : \| y \|_2=1 \right \} = \| x \|_2$ , where the supremum is attained for $y = \frac{x}{ \| x \|}$ (for $x \neq 0$ ; the case $x = 0$ is immediate). b) $x \cdot y = \| x \|$ and $\| y \| = 1 \Rightarrow x= \| x \| y$ If $x \cdot y = \| x \|$ it means that $y= \lambda x $ in the C.S. inequality written above. Moreover, as $|| y || = 1 \Rightarrow || \lambda x || = 1 \Rightarrow |\lambda| = \frac{1}{||x||}$ ( $\Leftarrow $ ) Trivial Q.E.D.
|
|real-analysis|linear-algebra|functional-analysis|hilbert-spaces|
| 0
|
Intersection of uncountably many sigma algebras
|
The intersection of sigma algebras is a sigma algebra $$ \bigcap_{\alpha\in \mathcal{A}} \mathcal{X}_\alpha $$ where $\mathcal{A}$ is an index set. However, in every definition I looked at (Billingsley, Cinlar, Evans, Bogachev, Wikipedia, Proof-Wiki) no one says if $\mathcal{A}$ is allowed to be uncountable , i.e. can it be any set?
|
Yes, $\mathcal A$ can be an arbitrary index set. In particular, it is allowed to be uncountable. (This can be seen from inspecting the proof. Countability is never used.) For a reference where it is explicitly mentioned that the index set can be arbitrary, see Theorem 1.15 of Achim Klenke's Probability Theory: A Comprehensive Course (Third Edition).
|
|measure-theory|set-theory|
| 1
|
Group theory orbit stabilizer
|
Could you please help me with this statement? First of all, is it true? If an action of a finite group $G$ on a finite set $X$ , with the number of elements in $X$ strictly greater than 1, has a unique orbit, then $G$ contains an element with no fixed points. I've tried to apply the orbit-counting theorem (Burnside's lemma), which says that the number of orbits of a group acting on $X$ is equal to $\frac{1}{|G|}$ times the sum of the number of fixed points of every element of the group $G$ . Well, if there exists only one orbit, then any two elements of $X$ are related, because orbits are equivalence classes (conjugacy classes as well?). So $x$ and $y$ lie in the same orbit if and only if there exists an element $g$ in $G$ such that $g(x)=y$ ... Now I'm stuck. Thank you.
|
Suppose, for contradiction, that every element $g\in G$ fixes some point, namely $\left|\operatorname{Fix}(g)\right|\ge1$ for every $g\in G$ . Therefore: \begin{alignat}{1} \sum_{g\in G}\left|\operatorname{Fix}(g)\right| &= \left|\operatorname{Fix}(e)\right|+\sum_{g\in G\setminus\{e\}}\left|\operatorname{Fix}(g)\right| \\ &\ge |X|+|G|-1 \\ \tag1 \end{alignat} and hence: \begin{alignat}{1} 1&=\#\text{ of orbits} \\ &=\frac{1}{|G|}\sum_{g\in G}\left|\operatorname{Fix}(g)\right| \\ &\stackrel{(1)}{\ge} 1+\frac{|X|-1}{|G|} \\ &\stackrel{|X|>1}{>}1 \end{alignat} Contradiction. So, there is some $\tilde g\in G$ such that $\left|\operatorname{Fix}(\tilde g)\right|=0$ , i.e. $\tilde g$ has no fixed point.
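The counting argument can be watched in action on a small example, say $S_3$ acting on $\{0,1,2\}$ (a sketch using `itertools.permutations`): the action is transitive, and the two 3-cycles fix no point.

```python
from itertools import permutations

X = range(3)
G = list(permutations(X))  # S_3, each g acting by x -> g[x]

def fixed(g):
    # number of fixed points of the permutation g
    return sum(1 for x in X if g[x] == x)

# Burnside count: number of orbits = average number of fixed points
num_orbits = sum(fixed(g) for g in G) // len(G)          # 1: transitive
fixed_point_free = [g for g in G if fixed(g) == 0]       # the two 3-cycles
```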
|
|group-theory|finite-groups|group-actions|
| 0
|
What is wrong with this proof by contradiction?
|
My professor tells me that a proof by contradiction structure below is poor: Proof. To prove: A implies B. So, assume A (in order to prove that B follows). Assume not B (in order to arrive at a contradiction). Succeed at showing that not A follows from the assumption of not B, thus arriving at the contradiction A and not A. Conclude that the initial assumption of not B must be false. So, conclude that B must be true, and so it follows from A, as desired. I don't see what is wrong with it exactly. I don't see any obvious logical errors. Perhaps it is just a presentation issue?
|
Here's the standard proof that the reals are uncountable: Suppose that the reals are countable. Let $a_m$ be the $m$ th real number in the interval $[0,1)$ , and let $a_{m,n}$ be the $n$ th digit after the decimal place. Consider the number where the $n$ th digit after the decimal place is $9-a_{n,n}$ , call this number $b$ , and let $b_n$ be the $n$ th digit after the decimal place. Because $b_n=9-a_{n,n}$ , $b_n\neq a_{n,n}$ , so $b\neq a_m$ for all $m$ . This means that we have found a real number $b\in[0,1)$ that is not in $\{a_m\}$ , so that $\{a_m\}$ does not contain all the real numbers in $[0,1)$ . Contradiction. Now, B is clear enough: "the reals are uncountable". So "not B" is "the reals are countable". The problem is, what is A? And even if we eventually decide that A is something like "everything we know about the reals so far", then what is "not A"? We've definitely not shown that "everything we know about the reals so far is false". All we've really shown is that if we as
|
|proof-writing|
| 0
|
CW Complex Decomposition of $S^{n}$
|
I'm currently delving into the foundational concepts of algebraic topology and came across the definition of CW complexes in Hatcher's "Algebraic Topology" (Chapter 0, p. 5): (1) Start with a discrete set $X^0$ , whose points are regarded as $0$ -cells. (2) Inductively, form the $n$ -skeleton $X^n$ from $X^{n-1}$ by attaching $n$ -cells $e_{\alpha}^n$ via maps $\phi_{\alpha}:S^{n-1} \to X^{n-1}$ . This means that $X^n$ is the quotient space of the disjoint union $X^{n-1} \sqcup \coprod_{\alpha} D_{\alpha}^n$ of $X^{n-1}$ with a collection of $n$ -disks $D_{\alpha}^n$ under the identifications $x \sim \phi_{\alpha}(x)$ for $x \in \partial (D_{\alpha}^n)$ (...) I found Hatcher's exposition a bit challenging to grasp. Fortunately, I stumbled upon an alternative definition via a commutative diagram on this post : $$\require{AMScd} \begin{CD} \coprod_\alpha S^{n-1}_\alpha @>{\coprod_\alpha i_\alpha}>> \coprod_\alpha D^n_\alpha \\ @VV (\phi_\alpha) V @VV V\\ X^{n-1} @>>> X^n \end{CD}$$ which appeared mo
|
This diagram is easy to verify on $S^1$ and $S^2$ . No. You did not define what $\alpha$ is and what the attaching maps (vertical arrows) are. It is impossible to deduce anything from that diagram; it is incomplete in the particular case. There are infinitely many CW structures on a sphere, two of them are "standard". The first one is recursive. Take two copies of $D^{n}$ and attach both of them to $S^{n-1}$ by gluing over the identity $S^{n-1}\to S^{n-1}$ . The resulting space is homeomorphic to $S^n$ . And by induction (down to $S^0=\{-1,1\}$ ) we get a CW decomposition of $S^n$ having $2(n+1)$ cells, two in each dimension from $0$ to $n$ . But there's a simpler CW structure on $S^n$ . Just take $X^0=\{*\}$ to be a point, ignore all dimensions between $1$ and $n-1$ , and add a single $n$ -cell. There's only one possible choice of gluing $S^{n-1}\to X^0$ , which corresponds to collapsing the boundary of $D^n$ to a point. And this results in the $n$ -sphere as well, but this time it has $2$ cells (regardless of $n$ ).
|
|algebraic-topology|cw-complexes|
| 1
|
Prove $1 - \frac{1}{2} x^2 \leq \cos x$ using Maclaurin series
|
I want to prove the following inequality using Maclaurin series (for all $\mathbb{R}$ ). $$1 - \frac{1}{2} x^2 \leq \cos x$$ I have tried: $$\cos x = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{\sin c}{5!} x^5 \geq 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^5}{5!} = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} \left( 1 - \frac{x}{5} \right)$$ $c \in (0, x)$ But the last term is not greater than $0$ for $x > 5$ . Do I have to take into account the whole Maclaurin series?
|
Here is another possible approach to compare with. The proposed inequality is equivalent to \begin{align*} 1 - \frac{x^{2}}{2} \leq \cos(x) & \Longleftrightarrow 1 - \cos(x) \leq \frac{x^{2}}{2}\\\\ & \Longleftrightarrow 2\sin^{2}\left(\frac{x}{2}\right) \leq \frac{x^{2}}{2}\\\\ & \Longleftrightarrow \sin^{2}\left(\frac{x}{2}\right) \leq \frac{x^{2}}{4}\\\\ & \Longleftrightarrow \left|\sin\left(\frac{x}{2}\right)\right| \leq \frac{|x|}{2} \end{align*} which is indeed true for every $x\in\mathbb{R}$ , since $|\sin t| \leq |t|$ for all real $t$ . Hopefully this helps!
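A numerical spot-check of both the original inequality and the equivalent form $|\sin(x/2)|\le|x|/2$ (just a sketch, obviously not a proof):

```python
import math

xs = [k / 10 for k in range(-100, 101)]  # sample points in [-10, 10]
ok_original = all(1 - x * x / 2 <= math.cos(x) for x in xs)
ok_equivalent = all(abs(math.sin(x / 2)) <= abs(x) / 2 for x in xs)
```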
|
|real-analysis|taylor-expansion|
| 0
|
If $f$ is injective (resp. surjective), is $af$ injective (resp. surjective)?
|
Let $f:\mathbb R\to\mathbb R$ and $af:\mathbb R\to \mathbb R$ , with $a\in\mathbb R\setminus\{0\}$ , be the scalar multiple of $f$ . I was wondering if $f$ is injective (respectively surjective), is $af$ also injective (respectively surjective)?
|
Assume $f:\mathbb{R} \to \mathbb{R}$ is an injective function. Then $f(x) = f(y) \iff x = y$ . Let $a \in \mathbb{R}-\{0\}$ . Let's see that $af$ is also injective. If $af(x) = af(y)$ , then $\frac{1}{a}af(x) = \frac{1}{a}af(y)$ . Thus, $f(x) = f(y)$ . Since $f$ is an injective function, it follows that $x = y$ . Therefore, $af$ is an injective function. Now, assume that $f$ is a surjective function. Let $y \in \mathbb{R}$ . Since $f$ is a surjective function it follows that there exists $x\in \mathbb{R}$ such that $f(x) = \frac{y}{a}$ . Then, we have that $af(x) = y$ . Thus, $af$ is a surjective function.
|
|functions|
| 0
|
High School Combinatorics - I do not agree with the provided explanation
|
The proposed problem was the following (translated into English by me): "A school has $16$ students interested in participating in a team competition. First, they must split themselves into $8$ pairs. Then, each of those pairs must choose (among themselves) someone to be their leader. Finally, two teams of $8$ students are formed by choosing $4$ pairs to compose the first team, which already locks the opposing team. Considering two teams to be equal if, and only if, they are composed of the same pairs (with pairs being equal if they are composed of the same students, with the same leader), in how many distinct ways can these $16$ students divide themselves into two teams of eight?" Solution Provided (translated by me): First, compute the total amount of distinct pairs that can exist. This is clearly given by $C_{16,2} = \frac{16!}{14!2!} = 15\cdot8$ . Since each pair has two choices of leader, we also multiply by two, to obtain $16\cdot15 = 240$ Now, multiply the amount of distinct pai
|
Arrange the students in a line and select them two at a time to make the pairs, where the earlier student is the leader. This gives $16!$ ways to make the student line. We can re-order the pairs, like $A,B,C,D \equiv C,D,A,B$ , hence we have to reduce by $4!$ for the first team and by $4!$ for the second team. We can also reorder the two whole teams, hence we have to reduce by $2!$ . We then get the total $16!/[4!\,4!\,2!] = 18162144000$ , which matches what you got. That is the correct answer. The textbook answer is wrong: too small and far off-target. The given argument is wrong too, due to the issues listed by you, and more!
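The arithmetic in the final step can be confirmed in two lines (a sketch):

```python
from math import factorial

# 16! line arrangements, reduced by 4! pair reorderings within each team
# and by 2! for swapping the two teams
total = factorial(16) // (factorial(4) * factorial(4) * factorial(2))
```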
|
|combinatorics|solution-verification|
| 1
|
Suppose that Ax = b and Cx = b have the same general solution for every b. why is it that then A = C?
|
If you have any two matrices $A$ and $C$ such that $A\vec{x} = \vec{b}$ and $C\vec{x} = \vec{b}$ have the same general solution for all $\vec{b}$ , then why is it that $A = C$ ? I can understand this when $\vec{x}$ is always unique, but I don't know how to prove it in general. Does this imply that $A$ and $C$ are the same linear transformation? And does this automatically mean that the matrices are equal? I can understand how the equality of two matrices can be defined as the matrices inducing the same linear transformation, but I don't understand this computationally. Is there a way to show that for every distinct linear transformation there is only one matrix that corresponds to that linear transformation?
|
Let $A \in \mathbb{R}^{m \times n}$ . Let $e_i$ denote the standard basis column vectors in $\mathbb{R}^{n \times 1}$ , for each $1 \le i \le n$ . Let $b_i := Ae_i$ , for each $1 \le i \le n$ . By our hypothesis, $e_i$ is a solution of $Cx = b_i$ , for each $i$ . Hence each $e_i$ satisfies $(A - C)x = 0$ , which is deduced by subtracting the two matrix equations. Hence the dimension of $\mathcal{N}(A - C)$ [the null space] is $n$ , and so by the rank-nullity theorem we have $\operatorname{rank}(A - C) = 0 \implies A - C = 0 \implies A = C$ . Hope this helps you.
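The key step, that a matrix is determined by its action on the basis vectors $e_i$ (its columns), can be illustrated with a tiny hypothetical example (plain nested lists, just a sketch):

```python
# Hypothetical 2x3 matrix; multiplying by e_i recovers the i-th column,
# so two matrices that agree on every e_i are equal.
A = [[1, 2, 3], [4, 5, 6]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def basis(i, n):
    return [1 if j == i else 0 for j in range(n)]

cols = [matvec(A, basis(i, 3)) for i in range(3)]             # columns of A
rebuilt = [[cols[j][i] for j in range(3)] for i in range(2)]  # reassemble A
```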
|
|matrices|matrix-equations|
| 0
|
High School Combinatorics - I do not agree with the provided explanation
|
The proposed problem was the following (translated into English by me): "A school has $16$ students interested in participating in a team competition. First, they must split themselves into $8$ pairs. Then, each of those pairs must choose (among themselves) someone to be their leader. Finally, two teams of $8$ students are formed by choosing $4$ pairs to compose the first team, which already locks the opposing team. Considering two teams to be equal if, and only if, they are composed of the same pairs (with pairs being equal if they are composed of the same students, with the same leader), in how many distinct ways can these $16$ students divide themselves into two teams of eight?" Solution Provided (translated by me): First, compute the total amount of distinct pairs that can exist. This is clearly given by $C_{16,2} = \frac{16!}{14!2!} = 15\cdot8$ . Since each pair has two choices of leader, we also multiply by two, to obtain $16\cdot15 = 240$ Now, multiply the amount of distinct pai
|
Another alternative approach. There are $$A = \binom{16}{8} \times \frac{1}{2}$$ ways of dividing the $~16~$ people into two groups of $~8,$ since selecting persons 1 through 8 is the same as selecting persons 9 through 16. An alternative (equivalent) computation for $~A~$ is $$A = \binom{15}{7},$$ since there are $~\displaystyle \binom{15}{7}~$ ways of selecting the $~7~$ other people who will be on the same team as person-1. Then, there are $$B = \binom{8}{2} \times \binom{6}{2} \times \binom{4}{2} \times \binom{2}{2} \times \frac{1}{4!} = \frac{8!}{2^4 \times 4!}$$ ways of dividing one of the teams of $~8~$ people into $~4~$ pairs. The $~4!~$ factor in the denominator reflects that it is irrelevant in what order the pairs are constructed. So, there are $~B^2~$ ways of dividing both groups of $~8~$ into $~4~$ pairs each. Then, there are $$C = 2^8$$ ways of choosing the pair leaders. So, the final computation is $$A \times B^2 \times C\\ = \frac{(16)!}{8! \,8!} \times \frac{1}{2} \tim
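The truncated final computation can be checked numerically (a sketch); it agrees with $16!/(4!\,4!\,2!) = 18162144000$:

```python
from math import comb, factorial

A = comb(16, 8) // 2                         # two unlabeled teams of 8
assert A == comb(15, 7)                      # same count via person 1's teammates
B = factorial(8) // (2 ** 4 * factorial(4))  # split one team of 8 into 4 pairs
C = 2 ** 8                                   # pick a leader in each of the 8 pairs
answer = A * B * B * C
```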
|
|combinatorics|solution-verification|
| 0
|
Examples proving the independence of field axioms
|
I want to find examples of structures in which exactly one of the field axioms fails while all the remaining axioms hold. For example, a structure where "+" is not associative but every other axiom is satisfied; or $(\mathbb{Z},+,\cdot)$ , which satisfies every field axiom except the existence of multiplicative inverses.
|
The set $\{\text{true}, \text{false}\}$ with addition $\vee$ (the logical "or") and multiplication $\wedge$ (the logical "and") is an example. Only the condition that every element must have an additive inverse fails: the additive identity is $\text{false}$ , and $\text{true}$ has no additive inverse, since $\text{true} \vee x$ is never $\text{false}$ .
|
|abstract-algebra|field-theory|
| 0
|
What is "white noise" and how is it related to the Brownian motion?
|
In the Chapter 1.2 of Stochastic Partial Differential Equations: An Introduction by Wei Liu and Michael Röckner, the authors introduce stochastic partial differential equations by considering equations of the form $$\frac{{\rm d}X_t}{{\rm d}t}=F\left(t,X_t,\dot B_t\right)$$ where $\left(\dot B_t\right)_{t\ge 0}$ is a "white noise in time" (whatever that means) with values in a separable Hilbert space $U$. $\left(\dot B_t\right)_{t\ge 0}$ is said to be the "generalized time-derivative of a $U$-valued Brownian motion $(B_t)_{t\ge 0}$. Question: What exactly do the authors mean? What is a "white noise in time" and why (and in which sense) is it the "generalized time-derivative" of a Brownian motion? You can skip the following, if you know the answer to these questions. I will present what I've found out so far: I've searched the terms "white noise" and "distributional derivative of Brownian motion" on the internet and found few and inconsistent definitions. Definition 1 : In the book An I
|
I recently went down this rabbit hole as well, reading stochastic calculus, Wiener process, etc, trying to understand why Gaussian white noise is a delta function. Here's an (maybe less mathematically complex - only requiring calculus - but just as physically intuitive) answer: White noise is a stationary signal $\zeta(t)$ such that its power spectral density (PSD) is constant in Fourier (frequency) space . Let $\zeta_T(t)=\zeta(t)w_T(t)$ be the original signal within a bounded interval $[-T/2,T/2]$ , constrained by a windowing kernel $w_T(t)$ (step function, Hanning, etc). The power spectral density is defined as $$ S[\zeta_T(t)]=S(\omega)\equiv\lim_{T\to\infty}\frac{\zeta_T(\omega)\zeta_T^*(\omega)}{T}, $$ so that the averaged power is $P=\int d\omega S(\omega)$ . Writing out Fourier transform $\zeta_T(\omega)$ , $$ \begin{align} S(\omega) &=\lim_{T\to\infty}\frac1T\int_{-\infty}^\infty\int_{-\infty}^\infty dtdt' e^{-i\omega(t-t')}\zeta_T(t)\zeta_T^*(t')\\ &= \int_{-\infty}^{\infty}d
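The defining property, a flat PSD, or equivalently a delta-function autocorrelation, is easy to see in a simulation of discrete Gaussian white noise (a sketch; `random.gauss` plays the role of $\zeta$):

```python
import random

rng = random.Random(1)
N = 20000
z = [rng.gauss(0, 1) for _ in range(N)]  # discrete-time Gaussian white noise

def autocorr(lag):
    # sample autocorrelation <z(t) z(t + lag)>
    return sum(z[i] * z[i + lag] for i in range(N - lag)) / (N - lag)

# autocorr(0) ~ variance = 1 while autocorr(lag != 0) ~ 0: a delta function,
# whose Fourier transform (the PSD) is constant in omega
```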
|
|functional-analysis|probability-theory|stochastic-processes|brownian-motion|stochastic-analysis|
| 0
|
Prove that is a tautology without using truth table
|
I can't find a proper way to prove that $$(p\rightarrow q) \rightarrow ((r ∨ p) \rightarrow (r ∨ q)) $$ is a tautology, considering it's almost exclusively made up of implications. Can somebody help me? I just want to know how to simplify it. I tried this .
|
One approach is to rewrite in conjunctive normal form by replacing $A \implies B$ with $\lnot A \lor B$ , pushing negation inward, and distributing $\lor$ over $\land$ : $(p \implies q) \implies ((r \lor p) \implies (r \lor q))$ $\lnot (\lnot p \lor q) \lor (\lnot (r \lor p) \lor (r \lor q))$ $(p \land \lnot q) \lor ((\lnot r \land \lnot p) \lor (r \lor q))$ $(p \land \lnot q) \lor (\lnot r \land \lnot p) \lor r \lor q$ $(p \lor \lnot r \lor r \lor q) \land (p \lor \lnot p \lor r \lor q) \land (\lnot q \lor \lnot r \lor r \lor q) \land (\lnot q \lor \lnot p \lor r \lor q)$ Now notice that each clause contains a pair $A \lor \lnot A$ , yielding $T \land T \land T \land T$ , which reduces to $T$ .
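As a mechanical check of the CNF manipulation above, a brute-force evaluation over all $2^3$ assignments confirms the formula is a tautology (a sketch):

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# evaluate (p -> q) -> ((r or p) -> (r or q)) on every assignment
is_tautology = all(
    implies(implies(p, q), implies(r or p, r or q))
    for p, q, r in product([False, True], repeat=3)
)
```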
|
|logic|first-order-logic|
| 0
|
When is a nef line bundle big
|
Suppose $M^n$ is a smooth projective variety. A line bundle $L$ on $M$ is nef (numerically effective) if on any complete curve $C$ in $M$ , $L$ has nonnegative degree, i.e. $$ L\cdot C=\int_{C}R_h\geq 0, $$ where $R_h$ is the curvature form of any Hermitian metric $h$ on $L$ . A line bundle $L$ on $M$ is big if there exist constants $m_0$ and $c>0$ such that $$ h^0(M,L^m)=\operatorname{dim} H^0(M,L^m)\geq cm^n $$ for all $m\geq m_0$ . I read a result which states, by the Riemann-Roch theorem, that a nef line bundle is big if and only if $\int_Mc_1(L)^n>0$ . The reference given by the author is Hartshorne's Algebraic Geometry , which is written in the language of algebraic geometry. However, I'm not familiar with it. Can anyone explain the result and the Riemann-Roch theorem? Thanks in advance.
|
The right reference for this material is probably Lazarsfeld's book Positivity in Algebraic Geometry , Volume $1$ , and this is a combination of results. (I'd argue it does not follow immediately from Riemann-Roch.) This book is also written in the language of algebraic geometry, but it at least contains this result. See the end of section 1.4 for example. For a vector bundle $E$ we can define the Euler characteristic $\chi(M, E) = \sum_i (-1)^i h^i(M, E)$ . Riemann-Roch theorems seek to compute this using the intersection theory of $M$ and $E$ . For our purpose, we only need the following version of the asymptotic Riemann-Roch theorem. Theorem : Let $L$ be a line bundle on (a smooth projective variety) $M$ . Then, $$\chi(M, L^m) = Cm^n + O(m^{n - 1})$$ is a polynomial in $m$ of degree $\leq n$ , where $C = \int_M c_1(L)^n$ . When $L$ is nef we also have the following fact which controls the asymptotic growth of the higher cohomology. Theorem : Let $L$ be a nef line bundle on $M$ . The
|
|differential-geometry|algebraic-geometry|complex-geometry|line-bundles|
| 1
|
Probability of 9 people going into 3 rooms with 4,3,2 people respectively if the room can and cannot be differentiated
|
What is the probability of 9 people going into 3 rooms with 4, 3, 2 people respectively if the rooms can and cannot be differentiated? I would like to verify my thinking on this question. For the case where the rooms can be differentiated, I take the sample space to be $3^{9}$ , which is the number of ways the 9 people can go inside the 3 rooms. If we would like to arrange such that the first room has 4 people, the second room 3 people, and the third room 2 people respectively, then there are $9\choose{3}$$6\choose{4}$$2\choose{2}$ ways to do this. Dividing this by $3^{9}$ gives us $0.064$ . The second case, where we don't know which room has 4, 3, 2 respectively, is where I am unsure of my answer. I think we just need to multiply $9\choose{3}$$6\choose{4}$$2\choose{2}$ by $3!$ , is this correct? This gives us $0.384$ .
|
Chancing on this question, I see that there are conflicting answers, conflicting views in comments, and that no answer has been accepted! So I am posting in the hope that the question gets closed and doesn't confuse users who chance on it. Assuming that each room has a capacity of at least $4$ and is equally likely to be occupied, the denominator for computing the probability is $3^9$ , over which there is little dispute; the problem is about the numerator. The multinomial coefficient $\dbinom{n}{n_1,n_2,n_3,...} \equiv\dfrac{n!}{n_1!n_2!n_3!...}, n_i\geq 0, \Sigma n_i=n$ distributes distinct objects to distinct boxes, $n_i$ to box $i$ , not to any box, so although, say, $\dfrac{9!}{4!3!2!} = \dfrac{9!}{3!2!4!}$ , they are different distributions for distinct objects in distinct boxes. Since people have identities (as is normal!), with distinct boxes the numerator has to be multiplied by $3!$ to count all possible distributions, and so the probability will correspondingly increase. A clear
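The two numerators under dispute are small enough to enumerate exhaustively over all $3^9 = 19683$ assignments (a quick sketch):

```python
from itertools import product
from math import comb

total = 3 ** 9
# occupancies (4, 3, 2) in a FIXED room order, and in ANY room order
fixed_order = sum(1 for a in product(range(3), repeat=9)
                  if [a.count(r) for r in range(3)] == [4, 3, 2])
any_order = sum(1 for a in product(range(3), repeat=9)
                if sorted(a.count(r) for r in range(3)) == [2, 3, 4])
```

The fixed-order count matches the multinomial $\binom{9}{4}\binom{5}{3}\binom{2}{2}$, and the any-order count is exactly $3!$ times larger, as argued above.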
|
|probability|combinatorics|statistics|
| 0
|
Concrete examples of internal categories (other than small categories)
|
I understand the definition of an internal category , but so far I haven't found examples of internal categories (other than categories internal to $\mathrm{Set}$ , which are just the small categories) that one could encounter "naturally" in mathematics. I have only found other abstract concepts related to this one. What are some concrete examples that can motivate the study of internal categories? In particular, why is it useful to think of operations such as "assigning the domain to a morphism" as being a morphism in a certain category? The only example I've found here is in this answer , namely that where we consider a topological space $X$ as the set of objects of a category, and $X \times X$ with the product topology as the set of morphisms, with the "obvious" operations. This is interesting but seems a bit pointless at first sight; for instance, what is the use of thinking of composition as a continuous operation in this example?
|
To every group object $G$ (internal group) in a category with finite limits, we can associate an internal category: let $1$ be the object of objects and $G$ be the object of morphisms. The source and target maps are the unique morphisms $G \to 1$ , the identity-assigning map is the unit $1 \to G$ , and composition $G \times_1 G \cong G \times G \to G$ is multiplication in $G$ .
|
|category-theory|big-list|
| 0
|
Area of Shaded Region
|
I am using the Pearson Edexcel A-Level maths books for my A-Level studies and stumbled across a question I do not understand (Exercise 3G Question 8d). The last part of the question confused me since it asked me to calculate the area of the shaded region. I checked the solution bank after attempting the question and found that it was using the areas of two triangles and subtracting one from the other, despite the smaller triangle not being a right-angled triangle and no angles being provided, hence my confusion. Below is the question in full, along with a diagram of what I have mentioned above and the coordinates of each vertex. Question 8.) a.) On a coordinate grid, shade the region which satisfies the following inequalities $y \lt x + 4$ , $y + 5x + 3 \ge 0$ , $y \ge 1$ and $x \lt 2$ b.) Work out the coordinates of the vertices of the shaded region c.) Which of the vertices lie within the region identified by the inequalities? d.) Work out the area of the shaded region. Diagram D
|
Let $E = (-5,-1)$ . The solution seemingly claims something like $A = A_{\Delta EDB} - A_{\Delta ECA}$ . $EDB$ is a right triangle and for $ECA$ they use $A_{ECA} = \frac{|EC| \cdot h}{2}$ where $h$ is the height over $EC$
|
|linear-algebra|geometry|inequality|area|
| 0
|
Perpendicular diagonals of quadrilateral $ABCD$ meet at $O$. Find the angle between lines $AB$ and $DC$, given lengths of $OA$, $OB$, $OC$, $OD$.
|
A few days ago I was at the math Olympiad and I failed to solve only one rather simple task, it feels like a 9th grade, but I either misunderstand something, or there is something quite interesting hidden there The problem sounds like this: In quadrilateral ABCD the diagonals AC and BD are perpendicular and intersect at the point O. It is known that OB = OC = 3, OA = 6, OD = 9. Find the angle between lines AB and DC by expressing vectors AB and DC through vectors b = OB and c = OC. I tried to solve the problem using the relationship between the sides, if vector OC is equal to 3 and vector c, and AO is equal to 6, then vector AO will be equal to 2 vectors c. This method I came to find the angle through the cosine of the formula for the scalar product of vectors, but what was my surprise when nothing normal, such an answer option simply does not exist! by the way, here are answers: 150° 120° 45° 90° 30° As a result I tried to solve the problem 10 times but nothing worked, well, can anyon
|
Let's suppose vertices ABCD appear anticlockwise around the quadrilateral. Take O as origin and put A, B on the positive x, y axes. So A is (6,0), B is (0,3), C is (-3, 0), and D is (0,-9). Then the vector AB is (-6,3) and the vector DC is (-3,9). Their dot product is then 18+27=45; their lengths are $3\sqrt5$ and $3\sqrt{10}$ ; so the cosine of the angle between them is $\frac{45}{3\cdot3\cdot5\cdot\sqrt2}=\frac{1}{\sqrt2}$ . So the angle is $45^\circ$ . That isn't quite the way the question wants it expressed. In the question's own terms, $OA=-2c$ and $OD=-3b$ , so $AB=OB-OA=b+2c$ and $DC=OC-OD=c+3b$ . So then $AB\cdot DC=(b+2c)\cdot(c+3b)=3b^2+2c^2$ since $b\cdot c=0$ , etc. (Personally, I think working in coordinates is a bit easier in this case.) (If I have to guess what's gone wrong for you, then my best guess is a sign error somewhere. But without seeing your working, it's hard to be sure.)
|
|geometry|vectors|
| 0
|
What is the number of integers divisible by either 2, 3, or 5 from the integers 1 to $n+1$?
|
I am interested in integer sequence A254828 ( https://oeis.org/A254828 ), but from the link it seems to have a recursive formula $a(n) = a(n-1) + a(n-30) - a(n-31)$ . As joriki said, they are the number of positive integers up to $n+1$ divisible by 2, 3 or 5. We can check them by the following code: `//A254828 #include <stdio.h> int countDivisibleBy235(int n) { int count = 0; for (int i = 1; i <= n; i++) if (i % 2 == 0 || i % 3 == 0 || i % 5 == 0) count++; return count; }` A specific example is when n+1 equals 300. See What is the number of integers divisible by either 2, 3, or 5 from the integers 1 to 300? However, is there a concise closed-form expression? Maybe we can give it as $\lfloor \frac{n+1}{2} \rfloor+ \lfloor \frac{n+1}{3} \rfloor+ \lfloor\frac{n+1}{5}\rfloor -\lfloor\frac{n+1}{6}\rfloor- \lfloor\frac{n+1}{10}\rfloor-\lfloor \frac{n+1}{15}\rfloor+ \lfloor\frac{n+1}{30} \rfloor$ .
|
This sequence makes little sense by itself. It's the first of a series of sequences up to A254834, where the $n$ -th term of the sequence A254827+ $k$ is the number of $k$ -tuples of positive integers up to $n+1$ with every partial sum divisible by $2$ , $3$ or $5$ . For $k=1$ , this is simply the number of positive integers up to $n+1$ divisible by $2$ , $3$ or $5$ . Since there are $\phi(30)=8$ residues modulo $30$ that are coprime to $30=2\cdot3\cdot5$ , this is essentially $\frac{22}{30}n=\frac{11}{15}n$ with some local deviations. The recurrence relation just expresses that the period is $30$ . As for a “concise closed-form expression”, it would be nice to be able to express this as $a(n)=\left\lfloor\frac{11}{15}n+x\right\rfloor$ , but that's not possible due to the bunching of the coprime residues in pairs. Using the fact that the coprime residues are all of the form $6k+1$ or $6k+5$ and only two of the residues of that form aren't coprime ( $5$ and $25$ ), the sequence can be
|
|combinatorics|closed-form|oeis|
| 1
|
Area of Shaded Region
|
I am using the Pearson Edexcel A-Level maths books for my A-Level studies and stumbled across a question I do not understand (Exercise 3G Question 8d). The last part of the question confused me since it asked me to calculate the area of the shaded region. I checked the solution bank after attempting the question and found that it was using the area of two triangles and taking away the distance between them, despite the smaller triangle not being a right-angled triangle and no angles being provided, hence my confusion. Below is the question in full, a diagram of what I have mentioned above along with the coordinates of each vertices. Question 8.) a.) On a coordinate grid, shade the region which satisfies the following inequalities $y \lt x + 4$ , $y + 5x + 3 \ge 0$ , $y \ge 1$ and $x \lt 2$ b.) Work out the coordinates of the vertices of the shaded region c.) Which of the vertices lie within the region identified by the inequalities? d.) Work out the area of the shaded region. Diagram D
|
The body of your question says $y>1$ but the attached figure shows $y>-1.$ I will assume that $y>-1$ is correct. As for finding the area, I suggest breaking this quadrilateral in two at the line $y = \frac {17}{6}$ . Now you have a triangle and a trapezium, and you have formulas for both of these. Triangle: $A = \frac 12 bh = \frac 12 (\frac {19}{6})(\frac {19}{6})$ Trapezium: $A = \frac 12 (b_1 + b_2)h = \frac 12 (\frac {19}{6} + \frac {12}{5})(\frac {23}{6})$ Alternatively, you could use the shoelace formula. Put the points in an array in counterclockwise order. $\begin {array}\\ 2&6\\ -\frac 76& \frac {17}6\\ -\frac 25&-1\\ 2&-1\\ 2&6\end {array}$ Repeating the first row at the end. Now multiply pairs diagonally: add the products taken in the Northwest/Southeast direction and subtract the ones taken Northeast/Southwest. Divide the result by 2. $\frac {\frac {17}{3} + 7 + \frac {7}{6}+\frac {34}{30} + \frac {2}{5} + 2 + 12 +2}{2}$ As far as the solution bank solution goes
|
|linear-algebra|geometry|inequality|area|
| 1
|
Right G-Space (Husemoller, Fiber Bundles)
|
Husemoller's definition of a right G-space confuses me at a few key points: For a topological group G, a right G-space is a space $X$ together with a map $X \times G \xrightarrow{\quad }X$ . The image of $(x, s) \in X \times G$ under this map is $xs$ . We assume the following axioms. (1) For each $x \in X, s,t \in G$ , the relation $x(st) = (x s)t$ holds (2) (Not represented in my diagrams) For each $x \in X$ the relation $x e_G = x$ holds, where $e_G$ is the identity of $G$ . I'm not interested or concerned with (2), so we'll ignore it and focus the concern on (1), which confuses me and on which I want clarification. I want to more explicitly label what group operations are happening at each point. Denote the group law for $G$ as $\left( G\times G \xrightarrow{\quad \ddagger \quad } G \right)$ and our other binary operation as $\left( X \times G \xrightarrow{\quad * \quad}X \right)$ . (I) Is this more explicit restatement of (1) correct? For each $x, (x * s) \in X $ and $s, t \in G$ , it
|
(I) Yes, both (distinct) binary operations are denoted by concatenation. In case one wants to be explicit about the operations, conventional notation would be something like denoting the multiplication $G\times G\rightarrow G$ by an infix " $\cdot$ " and the operation $X\times G\rightarrow X$ by an infix " $.$ ". The correct statement of axiom (1) then becomes that $x.(s\cdot t)=(x.s).t$ for $x\in X$ and $s,t\in G$ . (II) The space $X\times G$ is equipped with the product topology. This is general topological convention: If the product of two topological spaces is introduced, it is always assumed to be equipped with the product topology unless explicitly stated otherwise. The operation is given by a map $X\times G\rightarrow X$ , i.e. continuous with respect to this topology.
|
|abstract-algebra|differential-geometry|algebraic-topology|topological-groups|fiber-bundles|
| 1
|
Examples proving the independence of field axioms
|
I want to find examples of algebraic structures in which exactly one of the field axioms fails each time, while the remaining axioms hold. For example, a structure where "+" is not associative but the other axioms are satisfied; or $(\mathbb{Z},+,\cdot)$ , which satisfies everything except the existence of multiplicative inverses.
|
I think that you are looking for examples of rings that are not fields. A classical one is the ring of integers $(\mathbb{Z},+,\cdot)$ . Try to understand why this is not a field and come back if you have any doubts. Another classical example is the ring of quaternions $(\mathbb{H},+,\cdot)$ . A quaternion is a number of the form $$a_0 + a_1 i + a_2 j + a_3 k$$ where $a_0, a_1, a_2, a_3$ are real numbers, and $i, j, k$ are distinct imaginary units. Given two quaternions $p := a_0 + a_1 i + a_2 j + a_3 k$ and $q := b_0 + b_1 i + b_2 j + b_3 k$ , we define their sum as $$ p+q := (a_0 + b_0) + (a_1 + b_1) i + (a_2 + b_2) j + (a_3 + b_3) k.$$ The product of two quaternions is given by the unique associative and distributive binary operation satisfying $$i^2 = j^2 = k^2 = ijk = -1.$$ Note that multiplication is not commutative. There is also another interesting example: the octonions . It’s a good exercise to check that these are not fields.
|
|abstract-algebra|field-theory|
| 0
|
Area of Shaded Region
|
I am using the Pearson Edexcel A-Level maths books for my A-Level studies and stumbled across a question I do not understand (Exercise 3G Question 8d). The last part of the question confused me since it asked me to calculate the area of the shaded region. I checked the solution bank after attempting the question and found that it was using the area of two triangles and taking away the distance between them, despite the smaller triangle not being a right-angled triangle and no angles being provided, hence my confusion. Below is the question in full, a diagram of what I have mentioned above along with the coordinates of each vertices. Question 8.) a.) On a coordinate grid, shade the region which satisfies the following inequalities $y \lt x + 4$ , $y + 5x + 3 \ge 0$ , $y \ge 1$ and $x \lt 2$ b.) Work out the coordinates of the vertices of the shaded region c.) Which of the vertices lie within the region identified by the inequalities? d.) Work out the area of the shaded region. Diagram D
|
The solution uses the trigonometric area formula $\frac{1}{2}ab\sin\alpha$ . You can use the points of intersection with the x and y axes to get the sine value, and then apply this formula to get the area of the unshaded triangle. To get the lengths of the two edges $|AC|$ and $|CE|$ (referring to the intersection of $y=-1$ and $y=x+4$ as E), you can calculate the intersection points and use the Pythagorean theorem, then use the said formula to get the area and subtract it from the larger triangle.
|
|linear-algebra|geometry|inequality|area|
| 0
|
Two questions about quadratic equations to be solved without using Cauchy-Schwarz inequality
|
$Q1:$ Let $a_1,a_2,…,a_n$ be non-zero real numbers and $b_1,b_2,…,b_n$ be real numbers. Find the discriminant of the quadratic equation $$(a_1x-b_1)^2 + (a_2x-b_2)^2 +…+(a_nx-b_n)^2 = 0$$ What can you say about the discriminant? $S1: \text{Discriminant},\Delta= 4\left[\left(\sum a_ib_i \right)^2 - \left(\sum a_i^2 \right) \left(\sum b_i^2 \right)\right]$ $Q2:$ Let $a,b,c$ be real numbers. Consider the equation $$(x-a)(x-b)+(x-b)(x-c)+(x-c)(x-a) = 0$$ Prove that the roots of the equation are always real. Further, show that the roots are equal $\text{iff} \; a=b=c$ $S2: \Delta= 4\left((a^2+b^2+c^2)-(ab+bc+ca)\right)$ One can easily say that for $S1, \Delta \le 0$ and for $S2,\Delta \ge 0$ using the Cauchy-Schwarz Inequality . But the book (which I am following to study Algebra) hasn't covered the inequalities yet, so we can't use them; it's the next topic. How can these be done without knowing the inequality?
|
For the last question, just draw the graph of the function $$f(x)=\frac{1}{x-a}+\frac{1}{x-b}+\frac{1}{x-c}.$$
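One algebraic route for $Q2$ that avoids both the graph and Cauchy-Schwarz is the sum-of-squares rewriting of the discriminant:

```latex
\Delta = 4\left(a^2+b^2+c^2-ab-bc-ca\right)
       = 2\left[(a-b)^2+(b-c)^2+(c-a)^2\right] \ge 0,
```

with equality precisely when $a=b=c$ , which also settles the equality case.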
|
|inequality|polynomials|
| 0
|
Does a system of linear equations with solutions modulo all primes have an integer solution?
|
Suppose we are given a finite system of linear equations $a_{i1} x_1 + \cdots + a_{in} x_n = b_i$ in the variables $x_i$ , where the constants $a_{ij}$ and $b_i$ are all integers, and moreover suppose that the system of equations has a solution when viewed in $\mathbb Z/ p$ for every prime $p$ and also has a solution over $\mathbb Q$ . Does it follow that the system of equations has a solution over $\mathbb Z$ ? If we knew that the system had a solution over $\mathbb Z/ n$ for every integer $n$ , then using the Chinese Remainder Theorem one can deduce that there is a solution over $\mathbb Z$ . However, in this setting I don't see how to lift solutions modulo primes to a solution over the integers. I tried searching for a counterexample, but also found nothing.
|
How about $$ 4x + 4y = 2 $$ For any odd prime $p$ , we know $2,4$ are invertible $\pmod p$ . That leads to $x+y \equiv \frac{1}{2} \pmod p$ . I could have just written $4x = 2$ . This is still solvable $\pmod 2$ but impossible $\pmod 4$ .
|
|linear-algebra|modular-arithmetic|
| 0
|
Proof of local isometry, Riemannian manifolds. Is my proof correct?
|
There is one exercise I proved involving that a smooth map $\varphi$ from $M$ to $\widetilde{M}$ is a local isometry if and only if (pullback) $(\varphi^* \tilde{g})=g.$ So I am wondering if the proof is correct? Below is both the exercise and my proof: Exercise: Prove that if $(M,g)$ and $(\widetilde{M}, \tilde{g})$ are Riemannian manifolds with $\dim M = \dim \widetilde{M}$ , then a smooth map $\varphi : M \to \widetilde{M}$ is a local isometry if and only if $\varphi^* \tilde{g} = g$ . My proof: Recall that an isometry from $(M,g)$ to $(\widetilde{M}, \tilde{g})$ is given by a diffeomorphism $\varphi : M \to \widetilde{M}$ with the pullback $\varphi^* \tilde{g} = g$ , i.e. \begin{equation*} \forall v, w \in T_pM, \forall p \in M : \, g_p(v,w) = \tilde{g}_{\varphi(p)}(d\varphi_p(v), d\varphi_p(w)). \end{equation*} Now, let $(M,g)$ and $(\widetilde{M}, \tilde{g})$ be Riemannian manifolds with the same dimension, and define a smooth map as $$ C^\infty(M)\ni \varphi : M \to \widetilde{M
|
The first statement is a tautology, given a local isometry $\phi$ , we must have that $\phi^*\tilde{g}=g$ . Now let $\phi$ be a smooth map, we need to show that given $\phi^*\tilde{g}=g$ , $\phi$ is a local diffeomorphism. By the inverse function theorem, it suffices to check that $D_p\phi$ is an isomorphism for all $p\in M$ , and since dimensions are equal it suffices to check that $D_p\phi$ is injective. We proceed by contradiction, suppose that $D_p\phi$ is not injective, and let $v\in \ker D_p\phi$ . Then for all $w\in T_pM$ : $$g_p(v,w)=(\phi^*\tilde{g})_p(v,w)=g_{\phi(p)}(D_p\phi v,D_p\phi w)=0$$ It follows that $g_p$ is degenerate, a contradiction. So $D_p\phi$ is injective at all $p\in M$ , and $\phi$ is a local diffeomorphism.
|
|differential-geometry|riemannian-geometry|
| 1
|
Showing if $A[y/x]\models\Delta$ then $\exists x A(x)\models\Delta$
|
In order to better understand the tricky $\exists L$ rule and its eigenvariable requirement from sequent calculus: $$\Gamma, A[y/x] \vdash \Delta \over \Gamma, \exists xA \vdash \Delta$$ I decided to try proving the below proposition to build my intuition for this rule. If $A[y/x]\models\Delta$ , provided that $y$ does not appear in $A(x)$ and $\Delta$ , then $\exists x A(x)\models\Delta$ My attempt: Suppose $A[y/x]\models\Delta$ . Let $M$ be an arbitrary structure with $M\models\exists x A(x)$ . Let $v$ be any variable assignment. Then, $M,v\models\exists x A(x)$ . Then, there is some $m\in\text{Domain}(M)$ so that $M,v[m/x]\models A(x)$ . Let $M'$ be a copy of $M$ with the difference that $y^{M'}=m$ . Since $y$ does not appear in $A(x)$ , $M',v[m/x]\models A(x)$ by extensionality. It follows that $M',v\models A[y/x]$ . Since $v$ is arbitrary, $M'\models A[y/x]$ . Since $A[y/x]\models\Delta$ , we have $M'\models\Delta$ . Since $y$ does not appear in $\Delta$ , $M\models\Delta$ by exte
|
I don't think so. Rather than trying to disentangle the given attempt (which is a wise exercise in itself), I shall outline a proof in a stripped down way to be compared to. I shall not annotate the elementary definitions and theorems one by one and not use the generalisations of them to focus on the mainline. They are fairly intuitive and can be found in textbooks covering introductory chapters on model theory; particularly, I recommend Kees Doets' Basic Model Theory (CSLI Publications, 1996). This not widely known book aims at and neatly achieves a clear and concise presentation of the fundamental concepts; one might find a freely available pdf of it circulating on the Web. In the sequent (in order to stress what is involved is a transformation of sequences of formulas, the symbol ‘ $\Rightarrow$ ’ may be preferable) $$\Gamma, A(t/x)\Rightarrow\Delta$$ any model of the antecedent, for that reason of $A(t/x)$ , must be a model of at least one sentence in $\Delta$ also; otherwise, it w
|
|logic|sequent-calculus|
| 0
|
Treating derivatives as quotients
|
If $x$ and $y$ are (non-constant) functions of $z$ , is it in general true that $\frac{dy}{dx} = \frac{\frac{dy}{dz}}{\frac{dx}{dz}}$ ? If it's not true, can you please provide a counterexample, and if it's true, how would we prove that? Thanks!
|
Assume $x,y,z$ are differentiable functions and $$ \frac{dx}{dz}(z(p)) \ne 0 $$ in a neighbourhood of $z(p)$ (otherwise the formula does not make sense, as you would be dividing by $0$ ). Then $$ \frac{dy}{dx}(p)=\frac{dy}{dz}(z(p))\frac{dz}{dx}(p) $$ by the chain rule. If $\frac{dx}{dz}$ has constant sign in a neighbourhood $U$ of $z(p)$ , the function $x(z)$ is strictly monotone in $U$ . So it is invertible in $U$ and the formula holds. But if the derivative is continuous and changes sign in every neighbourhood of $z(p)$ , then the derivative at $z(p)$ is zero, by Bolzano's theorem. And this is against the hypothesis. More generally, by Darboux's theorem, if the derivative changes sign in an interval containing $z(p)$ then it has a zero in that interval. As such, if $$ \lim_{h \to z(p)} \frac{dx}{dz}(h) $$ exists, the derivative needs to have constant sign (and so the function is invertible) in a neighbourhood of $z(p)$
|
|derivatives|
| 1
|
Sum of conditional probabilities equals 1?
|
Given that the probabilities of all possible events should sum to 1, how does one denote this for a conditional probability? Is it $P(A|E_1) + P(A|E_2) + ... = 1$, where $E_i$ is a specific event to be conditioned on? Or is the answer something else entirely?
|
One extra proof to complement the rest (the same but with a slightly different notation): $$\sum_{i=0}^n P(E_{i}|A) = \sum_{i=0}^n\frac{P(A|E_{i})P(E_{i})}{P(A)} = \frac{1}{P(A)}\sum_{i=0}^n P(A|E_{i})P(E_{i}) = \frac{P(A)}{P(A)} = 1 $$ The third step holds by the law of total probability, which requires the events $E_0,\dots,E_n$ to partition the sample space.
|
|probability|probability-theory|conditional-probability|
| 0
|
difficulty for deriving the series representation of $f(x)=\frac{1+x^3}{(1+x)^3}$
|
I was given the generating function $f(x)=\frac{1+x^3}{(1+x)^3}$ and I was asked to find $a_9$ . I attempted to break it down into two parts: $$f(x)=\frac{1}{(1+x)^3}+\frac{x^3}{(1+x)^3}$$ For the first part, I utilized a known formula: $$\frac{1}{(1+ax)^k}=\sum_{n=0}^{\infty}(-1)^n\binom{n+k-1}{n}a^nx^n.$$ However, I encountered difficulty in handling the second term.
|
Hint $$\frac{(1+x^3)}{(1+x)^3}=\frac{(1+x)(x^2-x+1)}{(1+x)^3}=\frac{x^2-x+1}{(1+x)^2}=\frac{-3}{(1+x)}+\frac{3}{(1+x)^2}+1$$ So $f(x)=1+\sum_{n=1}^{\infty}3n(-1)^n x^n$
|
|generating-functions|
| 0
|
Confusion about Limits (Rationals)
|
$ f(x)= \begin{cases} x^2&\text{if $x$ is irrational}\,\\ 2x+1&\text{if $x$ is rational}\\ \end{cases} $ I want to calculate the limit of $f(x)$ as $x$ tends to $0$ . Is it enough to just say that when $x$ tends to $0$ and $x$ is irrational, the limit of $x^2$ is clearly $0$ , but when $x$ is rational we get $1$ , and $1$ isn't equal to $0$ so the limit doesn't exist? Or is there a way to do this more precisely using the sandwich theorem?
|
Your argument is correct. If you want a formal proof you can use the sequential characterization of limits. Define the sequences $x_n=\frac{1}{2^n}$ (rational) and $y_n=\frac{\pi}{2^n}$ (irrational). If $f$ had a limit $L$ at zero, we would need both $$ \lim_{n \to \infty} f(x_n)= \lim_{n \to \infty} (2x_n+1)=1 $$ and $$ \lim_{n \to \infty} f(y_n)=\lim_{n \to \infty} y_n^2=0 $$ to equal $L$ , which is impossible. That is basically what you have said in the OP.
|
|real-analysis|limits|analysis|limits-without-lhopital|
| 1
|
A geometry problem involving tangents of circumference
|
I found this problem on Geogebra but I can't prove it. Please help me. Thanks. Let $ABC$ be an acute triangle with circumcircle $(O)$ . The tangents to $(O)$ at $B$ and $C$ intersect at $D$ . The tangent to $(O)$ at $A$ intersects the line $BC$ at $M$ . $AD$ intersects $(O)$ again at $E$ . Now prove that $ME$ is the tangent of $(O)$ at $E$ .
|
Hints towards a solution. If you're stuck, explain what you've tried. The setup is quite complicated. How can we simplify it? Point $E$ is quite troublesome, introduced only at the end, and not as relevant to the other points. Is there a way to remove it? Can we find a condition not involving $E$ , that would imply that $ME$ is tangential? If $OM$ is perpendicular to $AD$ , then the tangents at $A$ and $E$ will meet at $M$ . How can we simplify this further? We see that either $A$ or $M$ is somewhat arbitrary (picking one forces the other). Since $A$ is sightly less involved in the setup (merely being a point on the circle), how can we "remove" $A$ from the above condition? One approach to prove perpendicularity is to show that $MA^2 - AO^2 = MD^2 - DO^2$ This approach allows us to "remove" $A$ via the substitutions $MA^2 = MB \times MC$ and $AO^2 = R^2$ . Now, we want to show that $MB \times MC - OB^2 = MD^2 - DO^2$ . This becomes a condition only about $B, C$ and their common tangent
|
|geometry|euclidean-geometry|plane-geometry|
| 0
|
difficulty for deriving the series representation of $f(x)=\frac{1+x^3}{(1+x)^3}$
|
I was given the generating function $f(x)=\frac{1+x^3}{(1+x)^3}$ and I was asked to find $a_9$ . I attempted to break it down into two parts: $$f(x)=\frac{1}{(1+x)^3}+\frac{x^3}{(1+x)^3}$$ For the first part, I utilized a known formula: $$\frac{1}{(1+ax)^k}=\sum_{n=0}^{\infty}(-1)^n\binom{n+k-1}{n}a^nx^n.$$ However, I encountered difficulty in handling the second term.
|
HINT Alternatively, you can proceed as follows: \begin{align*} \frac{1 + x^{3}}{(1 + x)^{3}} & = \frac{(1 + x)(1 - x + x^{2})}{(1 + x)^{3}} = \frac{1 - x + x^{2}}{(1 + x)^{2}} = -\left(1 - x + x^{2}\right)\frac{\mathrm{d}}{\mathrm{d}x}\left(\frac{1}{1 + x}\right) \end{align*} where the derivative's argument corresponds to a geometric series that converges when $|x| < 1$ . Can you take it from here?
|
|generating-functions|
| 1
|
difficulty for deriving the series representation of $f(x)=\frac{1+x^3}{(1+x)^3}$
|
I was given the generating function $f(x)=\frac{1+x^3}{(1+x)^3}$ and I was asked to find $a_9$ . I attempted to break it down into two parts: $$f(x)=\frac{1}{(1+x)^3}+\frac{x^3}{(1+x)^3}$$ For the first part, I utilized a known formula: $$\frac{1}{(1+ax)^k}=\sum_{n=0}^{\infty}(-1)^n\binom{n+k-1}{n}a^nx^n.$$ However, I encountered difficulty in handling the second term.
|
A more direct approach: $$\begin{align}(1+x^3)\cdot \frac 1{(1+x)^3} &= (1+x^3)\sum_{r=0}^\infty (r+2)(r+1)(-x)^r/2 \\ &= \sum_{r=0}^\infty (r+2)(r+1)(-x)^r/2 - \sum_{r=3}^\infty (r-2)(r-1)(-x)^r/2 \\ &= \sum_{r=0}^\infty (r+2)(r+1)(-x)^r/2 - \left(\sum_{r=0}^\infty (r-2)(r-1)(-x)^r/2-1\right) \\ &= \sum_{r=0}^\infty 3r(-x)^r + 1 \end{align}$$ The same as the other answer.
|
|generating-functions|
| 0
|
Construct a linear programming problem for which both the primal and the dual problem has no feasible solution
|
Construct (that is, find its coefficients) a linear programming problem with at most two variables and two restrictions, for which both the primal and the dual problem has no feasible solution. For a linear programming problem to have no feasible solution it needs to be either unbounded or just not have a feasible region at all I think. Therefore, I know how I should construct a problem if it would only have to hold for the primal problem. However, could anyone tell me how I should find one for which both the primal and dual problem have no feasible solution? Thank you in advance.
|
This is old and I think vadim123 already provided a sensible answer, but here's another solution: the primal problem is $$\min_{x}\; -x \; \text{s.t.} \; 0\cdot x = 1$$ and the dual problem is $$\max_{y} \; y \; \text{s.t.} \; 0\cdot y = -1$$ So you basically set $-c^T = b^T = 1 $ and $A = 0$ .
|
|linear-programming|
| 0
|
Can $\int_0^\infty \frac{\sin x}{x}dx$ be evaluated using Fourier series?
|
I'm aware of the many answers to evaluating the integral of $\frac{\sin x}{x}$ over $[0,\infty)$ given here . No answer seems to mention the approach below, and neither have I found anything elsewhere, and hence I'm doubting if it's possible. I stumbled upon this in connection with a problem I was working on. Consider $$u(x)=\begin{cases} \frac{\sin x}{x}, & 0 [and $2\pi$ periodically extended]. Show its Fourier coefficients are $$c_n=\frac{1}{2\pi}\int_{(n-1)\pi}^{(n+1)\pi}\frac{\sin x}{x}dx.$$ Use this to evaluate $\int_0^\infty \frac{\sin x} x dx$ . The formula for the Fourier coefficients are derived here . Here is my attempt at evaluating $\int_0^\infty \frac{\sin x} x dx$ , which we assume converges. Moreover, since $u(x)$ is at least piecewise $C^1$ , we also assume the Fourier series converges pointwise to the function. Now, $\sum_{n\in\mathbb Z} c_n$ is just the function evaluated at $0$ , so we have $$1=\sum_{n\in\mathbb Z}c_n=\sum_{n\in\mathbb Z} c_{2n}+\sum_{n\in\mathbb Z}c
|
The series $\sum_{k\in\mathbb Z} |c_k|$ does converge. We have \begin{align}\sum_{k=1}^\infty|c_{2k-1}|&=\frac1{2\pi}\int_0^\infty\frac{\sin(x)}x\,dx , \\ \sum_{k=1}^\infty|c_{2k}|&=-\frac1{2\pi}\int_\pi^\infty\frac{\sin(x)}x\,dx.\end{align} Adding these two convergent series, we get $$\sum_{k=1}^\infty|c_k|=\frac1{2\pi}\int_0^\pi\frac{\sin(x)}x\,dx.$$ Since $c_k=c_{-k}$ and $$c_0=\frac1{2\pi}\int_{-\pi}^\pi\frac{\sin(x)}xdx,$$ we obtain finally that $$\sum_{k\in\mathbb Z} |c_k|=\frac2\pi\int_0^\pi\frac{\sin(x)}{x}\,dx.$$
|
|sequences-and-series|solution-verification|fourier-analysis|fourier-series|
| 0
|
Limit Interchange for double sequence
|
I made the following claim and gave it a proof during my winter break. Given four hypotheses: $X=\mathbb{N}$ and $Y$ is a metric space; $\{f_n\}$ uniformly converges to $f$ ; $\lim_{k\to \infty}f(k)$ exists; $\lim_{n\to \infty}\lim_{k\to \infty}f_n(k)$ exists. We have \begin{align*} \lim_{k\to \infty}f(k)=\lim_{n\to \infty}\lim_{k\to \infty}f_n(k) \end{align*} Proof: We wish to prove that \begin{align*} \forall \epsilon>0,\ d(\lim_{k\to \infty}f(k),\lim_{n\to \infty}\lim_{k\to \infty}f_n(k))<\epsilon. \end{align*} Fix $\epsilon$ . Because $\lim_{k\to \infty}f_n(k) \to \lim_{n\to \infty}\lim_{k\to \infty}f_n(k)$ as $n\to \infty$ , we know there exists $N$ such that \begin{align*} \forall n>N,\ d(\lim_{k\to \infty}f_n(k),\lim_{n\to \infty}\lim_{k\to \infty}f_n(k))<\frac{\epsilon}{4}. \end{align*} Also, because $\{f_n\}$ uniformly converge to $f$ , we know there exists $N'$ such that \begin{align*} \forall n>N', \forall k \in X,\ d(f_n(k),f(k))<\frac{\epsilon}{4}. \end{align*} Let $m$ be a natural number greater than $\max \{N,N'\}$ . Now, because $f(k)\to \lim_{k\to \infty}f(k)$ as $k\
|
Your proof seems correct to me, and here is a proof of your conjecture that we can drop the 3rd hypothesis: Assuming $\lim_{k\to\infty}f_n(k)= \ell_n$ , $\lim_{n\to\infty}\ell_n= \ell$ , and $f_n\to f$ uniformly, we shall prove that $\lim_{k\to\infty}f(k)=\ell$ . Let $\epsilon>0$ . For $m$ large enough, we have both $d(\ell_m,\ell)<\epsilon$ and $\forall k,\ d(f_m(k),f(k))<\epsilon$ . Fix such an $m$ . There exists now $K$ such that $\forall k\ge K,\ d(f_m(k),\ell_m)<\epsilon$ and therefore, by the triangle inequality, $d(f(k),\ell)<3\epsilon$ .
|
|limits|analysis|metric-spaces|uniform-convergence|double-sequence|
| 1
|
Solve $\int_0^\infty\frac{\ln(2e^x-1)}{e^x-1}dx$
|
In one of the final problems of the MIT integration bee for this year, $$I=\int_0^\infty\frac{\ln(2e^x-1)}{e^x-1}dx$$ was one of the given problems. My try was to let $u=e^x-1$ to get $$I=\int_0^\infty\frac{\ln(2u+1)}{u(u+1)}du=\int_0^\infty\frac{\ln(x+1)}{x(x+2)}dx$$ I don't know whether I would be right but I had a feeling this was a dead end. Turning the original integral into a geometric series doesn't seem promising either. How should I solve this? Note: These problems are solved in 5 minutes so please come up with a solution that can be done in such a time limit.
|
Put $$I\left ( a \right ) =\int_{0}^{\infty } \frac{\log\left ( ax+1 \right ) }{x\left ( x+1 \right ) }\mathrm{d}x, $$ then $$I'\left ( a \right ) =\int_{0}^{\infty } \frac{\mathrm{d}x }{\left ( ax+1 \right )\left ( x+1 \right ) }=\frac{\log a}{a-1}.$$ Since $I(0)=0$ , integrating from $0$ to $2$ and substituting $a=1+x$ gives $$\begin{align*} I\left ( 2 \right )&= \int^{1}_{-1}\frac{\log\left(x+1\right)}{x}\mathrm{d}x\\ & = -\int_{-1}^{1} \frac{1}{x} \sum _{n = 1}^{\infty}\frac{\left ( -x \right )^n }{n}\mathrm{d}x\\ &= \sum _{n = 0}^{\infty}\int_{-1}^{1}\frac{x^{2n}}{2n+1} \mathrm{d}x\\ &=\sum _{n = 0}^{\infty}\frac{2}{\left ( 2n+1 \right )^2 } \\ &=\frac{\pi^2}{4}. \end{align*} $$
|
|calculus|integration|contest-math|
| 0
|
Right inverse of evaluation map from polynomial vector space
|
Say $E_1 : \mathbb{P}_3 \rightarrow \mathbb{R}$ by $f(x) \mapsto f(1)$ . Is it sufficient to say there exists right inverse $S_R : \mathbb{R} \rightarrow \mathbb{P}_3 $ from the following? We see $\mathbb{P}_3$ has basis $B = \{1,x,x^2,x^3\}$ . So we can say $f(x) = a+bx+cx^2+dx^3$ where $f(1)=a+b\cdot1+c\cdot1^2+d\cdot1^3$ . But it is clear $a+b\cdot1+c\cdot1^2+d\cdot1^3 = a+b+c+d$ . This gives $E_1 : a+bx+cx^2+dx^3 \mapsto a+b+c+d$ We then need to find some $g(x)$ where $S_R : a+b+c+d \mapsto g(x)\in\mathbb{P}_3$ such that $g(1)=a+b+c+d$ . I assume this $g(x)$ need not be the $f(x)$ above. (Can I assume this?) Under this assumption, say $k=\frac{a+b+c+d}{4}$ . Then $g(x)=k+kx+kx^2+kx^3$ gives $g(1)=k+k+k+k=4(k)=4(\frac{a+b+c+d}{4})=a+b+c+d$ . And, so $g(1)=a+b+c+d$ as desired. We can see $E_1\circ S_R : \mathbb{R} \rightarrow \mathbb{P}_3 \rightarrow \mathbb{R}$ . This is given by $f(1)=g(1)=x_0\in \mathbb{R}$ for $E_1\circ S_R : x_0 \mapsto g(x) \mapsto x_0$ where $f(x)$ is arbitrar
|
Why not try this? $$S_R:k\mapsto kx^3, \quad \mathbb{R}\rightarrow\mathbb{P}_3$$ Then: $$E(S_R(k))=E(kx^3)=k\cdot 1^3=k$$ Therefore, $E(S_R(k))=k$ for all $k\in \mathbb{R}$ , so that's a right inverse.
|
|linear-algebra|matrices|solution-verification|linear-transformations|
| 1
|
Probability of Rectangle in Unit Circle - Explain Solution Please
|
I have a bit of an odd request regarding this problem: Let $C$ be the unit circle. Point $a$ is chosen randomly on the boundary of $C$ and another point $b$ is chosen randomly from the interior of $C$ (points are chosen independently and uniformly over their domains). Let $R$ be the rectangle with the sides parallel to the $x$ - and $y$ - axes with diagonal $ab$ . What is the probability that no point of $R$ lies outside of $C$ . Already answered here - This makes sense... However, when trying to understand a slightly more rigorous solution set out by the course, I couldn't quite wrap my head around the proof: Let $a = (cos( \theta ),sin(θ))$ and $b = (b_x, b_y)$ . We will show that no point of $R$ lies outside $C$ if and only if: $|b_y| ≤ |sin(θ)|$ and $|b_x| ≤ | cos(θ)|$ (1 for reference) The other two vertices of $R$ are $(cos(θ), b_y)$ and $(b_x,sin(θ))$ . If $|b_x| ≤ | cos(θ)|$ and $|b_y| ≤ |sin(\theta)|$ , then each vertex $(x, y)$ of $R$ satisfies $x^2 + y^2 ≤ cos^2(θ) + sin^2(θ) =
|
Starting with your equations 1 you can square both sides leaving you with $b^2 \le \sin^2(\theta)$ , and $a^2 \le \cos^2(\theta)$ . Recall that $\cos^2(\theta) + \sin^2(\theta) = 1$ So we can add $\cos^2(\theta)$ to the first equation, giving us $\cos^2(\theta) + b^2 \le \cos^2(\theta) + \sin^2(\theta)$ , which from the identity above, simplifies to $\cos^2(\theta) + b^2 \le 1$ . The same can be done for the other equation by adding $\sin^2(\theta)$ , which will leave you with $a^2 + \sin^2(\theta) \le 1$ .
|
|probability|trigonometry|circles|
| 0
|
Prove the following matrix equation: $A'B = B'A$
|
Let's say I have these 2 matrices: $$A = \begin{bmatrix} c \\ d \end{bmatrix} $$ and $$B = \begin{bmatrix} e \\ f \end{bmatrix}$$ $A'B = ce + df $ and $B'A = ec + fd$ As shown above, $A'B = B'A$ . But is there a matrix product property that could've told me this without having to manually check? I haven't been able to find any matrix properties that prove that $A'B = B'A$
|
A real number can be considered a $1\times 1$ real matrix, which is trivially symmetric. Therefore if $A^TB = C\in\mathbb{R}$ , then $A^TB = C = C^T = B^TA$ .
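A quick numeric illustration of the same fact (a sketch, using plain Python lists as the column vectors):

```python
# For column vectors a, b in R^n, the products a^T b and b^T a are the same
# 1x1 "matrix" (a scalar), which is trivially symmetric.
a = [1.5, -2.0, 3.0]
b = [0.5, 4.0, -1.0]
aTb = sum(x * y for x, y in zip(a, b))  # A'B
bTa = sum(x * y for x, y in zip(b, a))  # B'A
assert aTb == bTa
print(aTb)  # → -10.25
```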
|
|linear-algebra|matrices|transpose|
| 0
|
Show that $z$ is a root of unity
|
I want to show that if $z\in\mathbb{C}$ such that $|z|=1$ and $1+z^{k_1}+z^{k_2}=0$ for integers $k_1 < k_2$ , then $z$ is a root of unity. Here is my approach: Suppose $z=\cos(\theta)+i\sin(\theta)$ . Then we have $$1+\cos(k_1\theta)+\cos(k_2\theta)+i(\sin(k_1\theta)+\sin(k_2\theta))=0.$$ This would require $$\sin(k_1\theta)+\sin(k_2\theta)=2\sin\left(\frac{(k_1+k_2)\theta}{2}\right)\cos\left(\frac{(k_2-k_1)\theta}{2}\right)=0.$$ This would mean either $$\frac{(k_1+k_2)\theta}{2}=n\pi$$ or $$\frac{(k_2-k_1)\theta}{2}=\frac{(2n+1)\pi}{2}$$ so either $$\theta=\frac{2n\pi}{k_1+k_2}, \frac{(2n+1)\pi}{k_2-k_1}$$ Now note that $$1+\cos(k_1\theta)+\cos(k_2\theta)=1+2\cos\left(\frac{(k_1+k_2)\theta}{2}\right)\cos\left(\frac{(k_2-k_1)\theta}{2}\right)$$ But now I do not know how to continue, because I would need the real part to be $0$ . Any hints on how to continue? My guess is that it would be a $k_1+k_2$ -root of unity. Also, is there an algebraic way/trick of doing this without resorting to using
|
The key point is that $z^{k_1}$ and $z^{k_2}$ are complex conjugate. Indeed, since $|z|=1$ , let $z=e^{i\theta}$ then $$1+e^{ik_1\theta}+e^{ik_2\theta}=0$$ requires that $e^{ik_1\theta}+e^{ik_2\theta}$ is real that is $e^{ik_2\theta}=e^{-ik_1\theta}$ with $$e^{ik_1\theta}+e^{-ik_1\theta}=2\cos (k_1 \theta)=2\Re(z^{k_1})=-1$$ therefore $z^{k_1}=-\frac12 +i\frac{\sqrt 3}2=e^{i\frac{2\pi}3}$ $z^{k_2}=-\frac12 -i\frac{\sqrt 3}2=e^{i\frac{4\pi}3}$ Following your way, we have that the condition $$1+\cos(k_1\theta)+\cos(k_2\theta)+i(\sin(k_1\theta)+\sin(k_2\theta))=0$$ requires $$\sin(k_1\theta)+\sin(k_2\theta) =0 \iff k_2\theta =-k_1\theta +2n\pi \; \lor \; k_2\theta =\pi +k_1\theta +2n\pi$$ then for $k_2\theta =-k_1\theta +2n\pi$ we obtain $$1+\cos(k_1\theta)+\cos(k_2\theta) =0 \iff 1+2\cos(k_1\theta) =0$$ that is $k_1\theta =\frac{2\pi}3+2n\pi \implies k_2\theta=\frac{4\pi}3+2n\pi$ or $k_1\theta =\frac{4\pi}3+2n\pi \implies k_2\theta=\frac{2\pi}3+2n\pi$ which are equivalent, while for $k_2
|
|complex-analysis|complex-numbers|
| 1
|
Is my proof of this fact about pi correct?
|
I recently have thought of a proof but I can’t tell if it is correct or not. The proof is of $n \pi$ being irrational if n is an integer and non zero. The proof is below: We assume that $n \pi$ is not irrational, thus meaning that $n \pi \in \mathbb Q$ . Since $\sin(x)$ is an irrational number if x $\in \mathbb Q$ we can logically assume that $\sin(n \pi)$ is then irrational. But as we all know, $\sin(n \pi)$ = 0. QED I have a lot of doubts about the proof being correct, especially because the proof seems too simple and I am usually terrible with proofs. Can someone please tell me if the proof is correct or incorrect? And if it is incorrect (which it probably is), can someone tell me where I went wrong?
|
Your proof isn't wrong but it depends on the following prior result: If $x$ is a nonzero rational number then $\sin x$ is irrational. (You stated it without the "nonzero" condition, which makes it false, but with that correction it's true, and even more is true: if $x$ is a nonzero rational number then $\sin x$ is transcendental .) That's true, but it's just as difficult to prove as the irrationality of $\pi$ . In fact, I think the only known proofs are more difficult than the easiest proofs that $\pi$ is irrational. (There's an easier theorem -- "Niven's theorem" -- which is about when $r$ and $\sin r\pi$ are both rational. But what you need is something different.)
|
|solution-verification|irrational-numbers|pi|
| 1
|
Is the set of continuous functions a Borel measurable subset of $L^2$?
|
Let $C([0,1])$ be the set of continuous real-valued functions on $[0,1]$ and $L^2([0,1])$ the Hilbert space of (equivalence classes) of square-integrable realvalued functions on $[0,1]$ . Then $C([0,1])$ can be identified with a subset of $L^2([0,1])$ . I am wondering if $C([0,1])$ is in the Borel sigma-algebra of $L^2([0,1])$ . (It is well-known that $C([0,1])$ is not an element of the product sigma-algebra of the product space $\mathbb R^{[0,1]}$ , because the latter only sees a countable set of values of a function).
|
Both $C[0,1]$ and $L^{p}[0,1]$ are Polish. The embedding map $$f: C[0,1] \hookrightarrow L^{p}[0,1]$$ is continuous (therefore Borel measurable) and injective. So, by the Lusin–Souslin theorem, $f(C[0,1])$ is a Borel subset of $L^{p}[0,1]$ .
|
|functional-analysis|measure-theory|stochastic-processes|stochastic-calculus|
| 0
|
Value restrictions on off-diagonal elements of positive definite matrices
|
Given a positive definite matrix $A$ there have to be some restrictions on the off-diagonal elements $A_{ij}$ (i.e. where $i \ne j$). Since there exists a Cholesky decomposition (i.e. $A=U^TU$) for a positive definite matrix, I know that in a 2x2 matrix $U_{22} = \sqrt{A_{22} - \frac{A_{12}^{2}}{A_{11}}}$ , from which I can derive that $A_{12} \lt \sqrt{A_{11}A_{22}}$ , as otherwise the value for $U_{22}$ would be complex and $A=U^TU$ would no longer hold. When I increase the dimensions of my matrix $A$ to 3x3 or 4x4, the Cholesky decomposition gets longer but is still manageable; anything larger is something I can no longer handle. My question is whether there are other methods to impose restrictions on the off-diagonal elements of a positive definite matrix that are easier to handle than the Cholesky decomposition.
|
The equivalence $x^TAx = \sum_{ij} A_{ij} x_i x_j$ gives a direct way to bound the off-diagonal elements relative to the diagonal for a real symmetric positive definite matrix. In general, $|A_{ij}| < \frac{A_{ii}+A_{jj}}{2}$ . To see this, set $x$ equal to a vector with a value 1 at positions $i$ and $j$ , and zero elsewhere. We then have $A_{ii} + 2A_{ij} + A_{jj} > 0$ . This implies the negative side of the bound, $-A_{ij} < \frac{A_{ii}+A_{jj}}{2}$ . Using a vector $x$ with 1 at position $i$ and $-1$ at position $j$ , we get the positive side of the bound. This gives us the bound on the absolute value of any off-diagonal element. You can also obtain a bound based on the geometric mean of $A_{ii}$ and $A_{jj}$ (and an extension of the bound you have above) using 1 at position $i$ and $\lambda$ at position $j$ using the method at https://math.stackexchange.com/a/3018652/1175270 . This gives $|A_{ij}| < \sqrt{A_{ii}A_{jj}}$ .
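Both bounds can be spot-checked empirically on random positive definite matrices built as $MM^T$ (a sketch, assuming nothing beyond the statements above):

```python
import math
import random

# Spot-check |A_ij| < (A_ii + A_jj)/2 and |A_ij| < sqrt(A_ii * A_jj)
# on random symmetric positive definite matrices A = M M^T.
random.seed(1)

def random_pd(n):
    M = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n)]
    return [[sum(M[i][k] * M[j][k] for k in range(n)) for j in range(n)]
            for i in range(n)]

n = 5
for _ in range(50):
    A = random_pd(n)
    for i in range(n):
        for j in range(n):
            if i != j:
                assert abs(A[i][j]) < (A[i][i] + A[j][j]) / 2       # arithmetic-mean bound
                assert abs(A[i][j]) < math.sqrt(A[i][i] * A[j][j])  # geometric-mean bound
print("both bounds hold on all sampled matrices")
```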
|
|linear-algebra|matrix-decomposition|
| 0
|
Show that $z$ is a root of unity
|
I want to show that if $z\in\mathbb{C}$ such that $|z|=1$ and $1+z^{k_1}+z^{k_2}=0$ for integers $k_1 < k_2$ , then $z$ is a root of unity. Here is my approach: Suppose $z=\cos(\theta)+i\sin(\theta)$ . Then we have $$1+\cos(k_1\theta)+\cos(k_2\theta)+i(\sin(k_1\theta)+\sin(k_2\theta))=0.$$ This would require $$\sin(k_1\theta)+\sin(k_2\theta)=2\sin\left(\frac{(k_1+k_2)\theta}{2}\right)\cos\left(\frac{(k_2-k_1)\theta}{2}\right)=0.$$ This would mean either $$\frac{(k_1+k_2)\theta}{2}=n\pi$$ or $$\frac{(k_2-k_1)\theta}{2}=\frac{(2n+1)\pi}{2}$$ so either $$\theta=\frac{2n\pi}{k_1+k_2}, \frac{(2n+1)\pi}{k_2-k_1}$$ Now note that $$1+\cos(k_1\theta)+\cos(k_2\theta)=1+2\cos\left(\frac{(k_1+k_2)\theta}{2}\right)\cos\left(\frac{(k_2-k_1)\theta}{2}\right)$$ But now I do not know how to continue, because I would need the real part to be $0$ . Any hints on how to continue? My guess is that it would be a $k_1+k_2$ -root of unity. Also, is there an algebraic way/trick of doing this without resorting to using
|
If $|a|=|b|=1$ ( so $\bar a = 1/a$ , $\bar b = 1/b$ ), and $1+a+b=0$ , then $1+a= -b$ . Take conjugates and get $1+1/a=-1/b$ . Multiply and get $(1+a)(1+1/a)=1$ , so $1+1+a+1/a=1$ , $a^2+a+1=0$ , and $b=\frac{1}{a}$ .
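Numerically, with $a=e^{2\pi i/3}$ the derived relations are easy to confirm (a sketch):

```python
import cmath

# With a = e^{2*pi*i/3} (a primitive cube root of unity) and b = 1/a:
a = cmath.exp(2j * cmath.pi / 3)
b = 1 / a
assert abs(abs(a) - 1) < 1e-12 and abs(abs(b) - 1) < 1e-12  # both on the unit circle
assert abs(1 + a + b) < 1e-12          # 1 + a + b = 0
assert abs(a * a + a + 1) < 1e-12      # a^2 + a + 1 = 0
assert abs(a ** 3 - 1) < 1e-12         # a is a cube root of unity
print("the derived relations all hold")
```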
|
|complex-analysis|complex-numbers|
| 0
|
Finding the perimeter of an equilateral triangle given height
|
Given an equilateral triangle with vertices $A,B,C$ , find the perimeter of the triangle if the height of this triangle is $8\sqrt{6}$ . I do not have a visual image for this problem, because I cannot see. However, this shape is not complex, and I think it is really not needed here. When I first attempted this problem, I imagined an altitude $\overline{DE}$ being drawn from one of the vertices to the base, forming a right angle. I also think that the height is equal to the altitude. This creates two right triangles, and I now can apply a trigonometric ratio to solve for the hypotenuse. I think $\sin$ will be a good choice, since I want to get the hypotenuse: $$\sin {A} = \frac{8\sqrt{6}}{h}$$ Since this is an equilateral triangle, all of its angles are $60^\circ$ . Therefore, I can safely assume that $A$ is also $60^\circ$ . By chance, I also know that $\frac{\sqrt{3}}{2}$ is equal to $\sin{60^\circ}$ . Substituting the values into the equation, I get $$\frac{\sqrt{3}}{2} = \frac{8\sqrt{6}}{h}$$
|
The height of the equilateral triangle splits it into two 30 $^\circ$ -60 $^\circ$ -90 $^\circ$ triangles. In these special right triangles, the hypotenuse is twice the length of the short leg, and the long leg is the length of the short leg times $\sqrt{3}$ . This means $|\overline{AM}| = \frac{8\sqrt{6}}{\sqrt{3}}$ and $|\overline{AC}|$ is twice this length, or $\frac{16\sqrt{6}}{\sqrt{3}}$ . The perimeter of the equilateral triangle is 3 times this length, which is $\frac{48\sqrt{6}}{\sqrt{3}} \approx 67.88$ units. This is also $48\sqrt{2}$ , so you are correct.
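The arithmetic can be double-checked (a sketch):

```python
import math

# Height 8*sqrt(6) is the long leg of each 30-60-90 half-triangle.
height = 8 * math.sqrt(6)
short_leg = height / math.sqrt(3)  # long leg = short leg * sqrt(3)
side = 2 * short_leg               # hypotenuse = side of the equilateral triangle
perimeter = 3 * side
assert math.isclose(perimeter, 48 * math.sqrt(2))
print(round(perimeter, 2))  # → 67.88
```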
|
|algebra-precalculus|trigonometry|
| 0
|
maximize the area of a trapezoid inscribed in an hyperbola
|
I am trying to maximize the size of a trapezoid inscribed in an hyperbola as a function of parameter $t$ . For simplicity, assume hyperbola parameters are > 0 and I'm only interested in the y $\geq$ 0 sheet. The upper line of the trapezoid is fixed - defined by the point $(x_{1}, y_{1})$ . For a parameterized hyperbola x(t) = $b * sinh(t)$ , y(t) = $a * cosh(t)$ . The width of the upper line, $w_{1}$ , is a constant: $2 * x_{1}$ . The width of the lower line, $w(t) = 2 * x(t)$ or $2 * b * sinh(t)$ . The height, $h(t) = y_{1} - y(t)$ or $y_{1} - (a*cosh(t))$ . Trapezoid area $A$ , is $1/2 * (w_{1} + w(t)) * h(t)$ . So it should be 'simple' to substitute in identities, take the derivative w.r.t. $t$ , set to $0$ , and solve for $t$ . But no matter what I do, the derivative I derive is unsolvable or evaluates only to $t=0$ . E.g. if I break the area into $1/2 * (w_{1} * h(t) + 1/2 * (w(t) * h(t)$ , I end up with a sum including a $sinh(t)$ term, a $cosh(t)$ term, and $sinh(t)*cosh(t)$ ter
|
We need to maximize $$\mathcal A=\frac{w_1+w(t)}{2}h(t)=(x+x_1)(y_1-y)=ab(\sinh(t)+\sinh(t_1))(\cosh(t_1)-\cosh(t)),$$ where in the last step we used the fact that since $x_1$ and $y_1$ lie on the hyperbola, then $(x_1,y_1)=(b\sinh(t_1),a\cosh(t_1))$ . We can now differentiate wrt $t$ and impose it to be $0$ : $$ab[\cosh(t)(\cosh(t_1)-\cosh(t))-\sinh(t)(\sinh(t)+\sinh(t_1))]\overset{!}{=}0$$ $$\sinh^2(t)+\cosh^2(t)-[\cosh(t)\cosh(t_1)-\sinh(t)\sinh(t_1)]=0$$ $$\cosh(2t)=\cosh(t-t_1)$$ Therefore we have two solutions: $$\left\{ \begin{aligned} &2t=t-t_1\implies t=-t_1\\ &2t=t_1-t\implies t=\frac{t_1}{3} \end{aligned} \right.$$ It can then be checked, using the $2^{\text{nd}}$ derivative test or the sign of the first derivative on either side of each critical point, which solution yields the maximum area: $$\mathcal A'(-t_1^-)<0,\quad \mathcal A'(-t_1^+)>0\implies -t_1\text{ is a minimum}$$ $$\mathcal A'((t_1/3)^-)>0,\quad \mathcal A'((t_1/3)^+)<0\implies \frac{t_1}{3}\text{ is a maximum}$$ Thus, the maximum area of a trapezoid bounded by the hyperbola with fixed upper side through $(x_1,y_1)$ is attained at $t=\frac{t_1}{3}$ : $$\mathcal A_{\max}=ab\left(\sinh\frac{t_1}{3}+\sinh(t_1)\right)\left(\cosh(t_1)-\cosh\frac{t_1}{3}\right).$$
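A quick grid search confirms the critical-point analysis for a sample $t_1$ (a sketch, dropping the constant $ab$):

```python
import math

# Area as a function of t (constant factor ab dropped), for a sample t1 > 0.
t1 = 2.0

def area(t):
    return (math.sinh(t) + math.sinh(t1)) * (math.cosh(t1) - math.cosh(t))

# scan a fine grid over (-t1, t1) and locate the maximizer
ts = [-t1 + k * (2 * t1) / 20000 for k in range(20001)]
best = max(ts, key=area)
assert abs(best - t1 / 3) < 1e-3  # maximum occurs at t = t1/3
print("grid maximizer agrees with t = t1/3")
```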
|
|derivatives|analytic-geometry|
| 1
|
Example of a non-hausdorff space
|
For instance, the simplest example of a non-Hausdorff topological space is the pair $(X, \tau_X)$ where $\tau_X = \{ X, \varnothing \} $. But this is boring. Can someone help me find more interesting examples?
|
Let $X$ be a set with at least two points and $x_0\in X$ . The special topology $$\mathcal T=\{U\subset X|x_0\in U\}\cup\{\emptyset\}$$ is not Hausdorff. Similarly, $$\mathcal T'=\{U\subset X|x_0\not\in U\}\cup\{X\}$$ is not Hausdorff.
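Both claims about $\mathcal T$ can be brute-force checked on a tiny example (a sketch with $X=\{0,1,2\}$ and $x_0=0$):

```python
from itertools import combinations

# T = {U ⊆ X : x0 ∈ U} ∪ {∅} on X = {0, 1, 2} with x0 = 0.
X = frozenset({0, 1, 2})
x0 = 0
subsets = [frozenset(c) for r in range(4) for c in combinations(sorted(X), r)]
T = {U for U in subsets if x0 in U} | {frozenset()}

# topology axioms (finite case: closure under pairwise union/intersection suffices)
assert frozenset() in T and X in T
for U in T:
    for V in T:
        assert (U & V) in T and (U | V) in T

# not Hausdorff: every nonempty open set contains x0, so the points 1 and 2
# can never be separated by disjoint open sets
separated = any(1 in U and 2 in V and not (U & V) for U in T for V in T)
assert not separated
print("T is a non-Hausdorff topology on X")
```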
|
|general-topology|
| 0
|
Among morphisms of morphisms, what makes commutative squares special?
|
Given two (1-)categories $\mathcal{C}, \mathcal{D}$ , and given the 0-category (class) of funtors $\mathcal{C} \to \mathcal{D}$ , denoted $Func(\mathcal{C} \to \mathcal{D})$ , let's say we want to make some choice of (1-)category denoted $[[\mathcal{C}, \mathcal{D}]]$ such that $Ob([[\mathcal{C}, \mathcal{D}]]) = Func(\mathcal{C} \to \mathcal{D})$ . Usually we choose $[[\mathcal{C}, \mathcal{D}]]$ such that $Mor([[\mathcal{C}, \mathcal{D}]])$ are natural transformations. But imagine we're from a universe where no one has discovered the concept of natural transformation, and so we start only with the definitions of category and functor. To reflect the structure of $Ob([[\mathcal{C}, \mathcal{D}]]) = Func(\mathcal{C} \to \mathcal{D})$ , whose elements act on both objects and morphisms of $\mathcal{C}$ , we can ask that for every $\eta \in Mor([[\mathcal{C}, \mathcal{D}]])$ , with $\eta: F \to G$ for some functors $F, G \in Func(\mathcal{C} \to \mathcal{D})$ , one has that $\eta$ sends ev
|
Guess 1: We need the morphisms of $Arr(\mathcal{D})$ / $[[\mathbb{2}, \mathcal{D}]]$ to in some way be "expressible" in terms of the objects, so that we can define "whiskerings". (Compositions of "1-morphisms" with "2-morphisms" to create new "2-morphisms".) Commutative squares obviously allow this. But are they a necessary choice for this purpose? Also this is somewhat unsatisfying because apparently whiskerings can be used to axiomatize strict 2-categories , which were defined to describe the properties satisfied by natural transformations. So the justification in terms of whiskerings seems to roughly reduce to "we should choose natural transformations because we want to use them to define whiskerings because we want to use them to define natural transformations", i.e. it seems circular. On the other hand, if there is really some other reason to believe (ideally defined solely in terms of functors and 1-categories) why the conventional whiskering properties in the definition of stric
|
|category-theory|definition|functors|higher-category-theory|natural-transformations|
| 0
|
Statistical framework for using MSE for linear classification
|
We learned in class to use a linear model to predict a real target value y. We made the assumption that $$ y = w^Tx + w_0 + \epsilon $$ where $x$ is the input vector, $w$ is the vector of the linear model parameters and $\epsilon$ is a Gaussian distributed error. The professor showed us that since $y|x$ is a Gaussian random variable, maximizing the likelihood (for $w$ ) is actually minimizing MSE, namely minimizing $$ \frac{1}{N} \sum_{i=1}^N (y_i - (w^Tx_i + w_0 ))^2 $$ for $w$ . This is all good. But now he said that minimizing the MSE expression above is also valid for a binary classification problem where $ y \in \{-1,1\} $ . What is the statistical justification for this? It is clear that $ y = w^Tx + w_0 + \epsilon $ does not hold any more and $y|x$ is not a Gaussian random variable.
|
The validity comes from the close relationship between linear regression and linear discriminant analysis (LDA) for two classes. This result is detailed in ESL Section 4.3 and Exercise 4.2, whose solution is available online. To sum up, when there are only two classes and when the numbers of observations from both classes coincide, the least squares classification of $y\in\{-1,1\}$ obtained by solving the MSE criterion produces the same decision as the LDA.
|
|machine-learning|maximum-likelihood|
| 1
|
Why is $\wedge ^2 E[p] \cong \mu_p$?
|
As the title says, why is $$\wedge ^2 E[p] \cong \mu_p,$$ where $E[p]$ refers to the $p$ -torsion points of an Elliptic curve over a number field $K$ , $\wedge$ refers to the exterior product and $\mu_p$ refers to the group of $p^{th}$ roots of unity. I know that for the formal multiplicative group $\mathbb{G}_m$ over $K$ , $\operatorname{Tor}\mathbb{G}_m(K)$ contains all the roots of unity in $K$ . I know you can also define formal group laws on elliptic curves. Was wondering if that is the direction to go?
|
Let $k$ be of characteristic coprime to $p$ and $E/k$ . The first thing to note is that $E[p]$ is an $\mathbb{F}_p$ -vector space of rank $2$ . The exterior square of $E[p]$ is therefore an $\mathbb{F}_p$ -vector space of rank $1$ (spanned by the wedge of the independent basis vectors). Indeed, this is the case for any $k$ -vector space of dimension $2$ . The non-trivial thing here is that the Galois action is really making $\bigwedge^2 E[p] \cong \mu_p$ as group schemes (or if you prefer as $\mathbb{F}_p$ -vector spaces with a $\operatorname{Gal}(\bar k/k)$ action). Recall that elements of $\bigwedge^2 V$ correspond to alternating bilinear forms on $V$ (where $V$ is a $K$ -vector space). It comes equipped with a natural Galois action. The Weil pairing is an alternating, antisymmetric, bi-linear, non-degenerate form $e_p : E[p] \times E[p] \to \mu_p$ and it is Galois equivariant. You can see this because the Weil pairing is a non-trivial alternating Galois equivariant bilinear form, and i
|
|algebraic-geometry|algebraic-number-theory|elliptic-curves|formal-power-series|torsion-groups|
| 1
|
Sum of binomial coefficients weighted by 1/t
|
I would like to evaluate the following sum in terms of $n$ . The sum is essentially a weighted sum of a binomial distribution with N = 2n and p = $\frac{1}{n}$ . I can't figure out how to do it, despite looking through a lot of posts on here and a few papers. Could anyone help? $$\sum_{t=1}^{2n} {2n\choose{t}} \left(\frac{1}{n}\right)^t \left(\frac{n-1}{n}\right)^{2n-t} \left(\frac{1}{t}\right)$$
|
Well this is the key insight for your solution $$ (1+x)^{2n}=\sum_{r=0}^{2n}{2n\choose r}x^r\\ {(1+x)^{2n}-1\over x}=\sum_{r=1}^{2n}{2n\choose r}x^{r-1} $$ Now we can integrate on both sides from 0 to $1\over n-1$ $$ I=\sum^{2n}_{r=1}{2n\choose r}{1\over r(n-1)^r} $$ Where $$I=\int_0^{1\over n-1}{(1+x)^{2n}-1\over x}dx$$ Now multiply $\left({n-1\over n}\right)^{2n}$ on both sides to get $$ \sum_{r=1}^{2n}{2n \choose r}{1\over r}\left({1\over n}\right)^r\left({n-1\over n}\right)^{2n-r}=\left({n-1\over n}\right)^{2n}I $$ which is exactly your sum. Closed form of the integral if anyone cares $$ I={2n\over n-1}\,{}_3F_2\!\left(1,1,1-2n;2,2;-{1\over n-1}\right) $$ Where ${}_3F_2$ is the generalised hypergeometric function
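The rearrangement (original sum $= \left(\frac{n-1}{n}\right)^{2n} I$, with $I$ the series value of the integral) can be verified numerically for small $n$ (a sketch):

```python
import math

# Check: sum_t C(2n,t) (1/n)^t ((n-1)/n)^(2n-t) / t  ==  ((n-1)/n)^(2n) * I,
# where I = sum_r C(2n,r) / (r * (n-1)^r) is the value of the integral.
for n in [3, 5, 10]:
    target = sum(
        math.comb(2 * n, t) * (1 / n) ** t * ((n - 1) / n) ** (2 * n - t) / t
        for t in range(1, 2 * n + 1)
    )
    I = sum(math.comb(2 * n, r) / (r * (n - 1) ** r) for r in range(1, 2 * n + 1))
    assert math.isclose(target, ((n - 1) / n) ** (2 * n) * I)
print("rearrangement verified for sampled n")
```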
|
|statistics|summation|binomial-coefficients|binomial-distribution|
| 0
|
Having two sets of bearing and elevation measurements, find the height of the balloon
|
This is problem 6.3.8 from Trigonometry: A Complete Introduction , by Hugh Neill (ISBN 1473678498). It's in the surveying chapter, which is a practical continuation of the sine and cosine rules of the chapter before that. So I'm assuming you don't need much more than those two rules. It's a simple problem, but I'm a beginner in maths, so I'm struggling. I've written a lot of Latex for such a small problem, but it's the first time I'm writing in it, and I find it fascinating. It looks so cool! So, if I appear too pedantic, it's because of my enthusiasm. The problem Two observers 5 kilometers apart measure the bearing of the base of the balloon and the angle of elevation of the balloon at the same instant. One finds that the bearing is 041°, and the elevation is 24°. The other observer finds that the bearing is 032°, and the elevation is 26.62°. Calculate the height of the balloon. The book also gives this answer: 2.23 km I cannot get to that answer. Any help would be appreciated. My att
|
Hint: Finding $h$ is equivalent to finding $AC$ , or $BC$ . The sides $AC$ and $BC$ are proportional to $\cot 24^{\circ}$ , $\cot 26.62^{\circ}$ . Also we know $AB$ . That in itself is not enough to solve the triangle $ABC$ . But, we have an extra piece of information: the angle at $C$ . Now this should be a standard problem. I advise you to focus on $\Delta ABC$ , and solve it using letter variables, then switch to numerics at the end, like you would do with a physics problem.
|
|trigonometry|
| 0
|
Value of $E_{A,y}[\cos(A^T y,A^{-1} y)]$ for Gaussian $A,x,y=Ax$
|
For $d\times d$ matrix $A$ , $x\in \mathbb{R}^d$ with IID standard normal entries and $y=Ax$ , I'm interested in the the following quantity where $\cos(u,v)$ refers to cosine similarity : $$E_{A,x}[\cos(A^T y,A^{-1} y)]$$ It appears to be $\frac{1}{\sqrt{2}}$ , in simulations, can this be proven? Motivation: This justifies the use of transpose+line search to approximately solve linear equations randn[dims__] := RandomVariate[NormalDistribution[], {dims}]; mat = randn[1000, 1000]; {d2, d1} = Dimensions[mat]; pinv[mat_] := Inverse[mat]; bs = 10000; vecsIn = randn[bs, d1]; vecsOut = (mat . vecsIn\[Transpose])\[Transpose]; meanAlign[vec1_, vec2_] := Median@MapThread[(#1 . #2/(Norm[#1] Norm[#2])) &, {vec1, vec2}]; Print["average cosine similarity: ", meanAlign[(mat\[Transpose] . vecsOut\[Transpose])\[Transpose], vecsIn]]
|
Below is a proof outline. Several places switch the order of expectation, $E(f(x))=f(E(x))$ , which was confirmed to hold in numeric simulations, but I'm curious how to show it more rigorously: Given $x\sim \mathcal{N}(0, 1)$ , define $y=Ax$ for invertible $d\times d$ matrix $A$ and consider 2 solutions of this system: Exact solution $x=A^{-1}y$ Approximate solution $\hat{x}=A^T y$ We seek to show that the angle between the two solutions $\cos \angle$ converges in probability to $\frac{1}{\sqrt{2}}$ where $$\cos \angle=\frac{\langle x, \hat{x}\rangle}{\|x\|\|\hat{x}\|}$$ and $A$ is a large matrix with IID Gaussian entries. Proof parts: Define "effective rank" of a matrix $A$ as $R(A)=\|A\|_F^4/\|AA^T\|_F^2$ and show convergence in probability w.r.t. random $x$ \begin{equation} \cos \angle \xrightarrow{P} \sqrt{\frac{R(A)}{d}} \tag{0} \label{0} \end{equation} Show that $R(A)\xrightarrow{P} d/2$ for large Gaussian matrix $A$ Combine these two to get convergence in probability with respect to both ran
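Claim (0) combined with $R(A)\approx d/2$ can be spot-checked with a small pure-Python Monte Carlo (a sketch; sizes kept modest so it runs quickly, and the tolerance is loose to absorb finite-$d$ fluctuations):

```python
import math
import random

# Median cosine similarity between x = A^{-1} y (exact, since y = A x)
# and the transpose proxy xhat = A^T y, compared against 1/sqrt(2).
random.seed(0)
d = 200
A = [[random.gauss(0, 1) for _ in range(d)] for _ in range(d)]

def matvec(M, v):
    return [sum(M[i][k] * v[k] for k in range(d)) for i in range(d)]

def matvec_t(M, v):  # M^T v
    return [sum(M[k][i] * v[k] for k in range(d)) for i in range(d)]

cosines = []
for _ in range(51):
    x = [random.gauss(0, 1) for _ in range(d)]
    y = matvec(A, x)
    xhat = matvec_t(A, y)
    dot = sum(p * q for p, q in zip(x, xhat))
    norm = math.sqrt(sum(p * p for p in x)) * math.sqrt(sum(q * q for q in xhat))
    cosines.append(dot / norm)
median = sorted(cosines)[len(cosines) // 2]
assert abs(median - 1 / math.sqrt(2)) < 0.08
print("median cosine similarity is within 0.08 of 1/sqrt(2)")
```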
|
|probability|probability-theory|random-variables|random-matrices|
| 0
|
Is the Euclidean group $E(3)$ on $\mathbb{R}^3$ amenable?
|
I am quite confused about this since I thought the Banach-Tarski theorem implies it is not but I also heard from multiple people that it is amenable since it is the semidirect product of an abelian and a compact group. It would be amazing if someone could explain the relationship between amenable groups and the paradox to me! I would also be interested in the amenability of $E(d)$ for general $d$ .
|
It is better to think about the group $SO(3)$ . If you equip it with the natural matrix topology, it is compact, hence amenable: The invariant probability measure is the natural one, given by a left-invariant volume form on this Lie group. (There is a lot to digest here if you do not know what these words mean.) But as a discrete group, $SO(3)$ is not amenable since it contains a free group on two generators and the latter is nonamenable. The existence of nonabelian free subgroups is the heart of the proof of the Banach-Tarski paradox. Again, it requires quite a bit of work on your part to understand what I am talking about if you are not familiar with these notions.
|
|group-theory|amenability|
| 1
|
Product of Laplace transforms with disjoint Region Of Convergence (ROC)
|
Let the Laplace transform of $x(t)$ be $$X(s) = \int_{-\infty}^{+\infty}x(t)\exp(-st)dt, \ \ \ \ \ s\in\mathbb{C}$$ This integral converges when $s \in \text{ROC}_1$ . Also suppose that the Laplace transform of $h(t)$ be $H(s)$ which converges when $s\in\text{ROC}_2$ . I'm interested in the case $\text{ROC}_1 \cap \text{ROC}_2 = \emptyset$ . I think in this case the convolution of $x(t)$ and $h(t)$ is divergent, i.e. $$\forall t\in \mathbb{R}: \ \ y(t)=\int_{-\infty}^{+\infty}x(\tau)h(t-\tau)d\tau \ \ \text{diverges}.$$ Or at least it diverges almost surely. I want to find a rigorous proof for this hypothesis. So maybe we should consider a special class of functions such as $L^1(\mathbb{R})$ or $L^2(\mathbb{R})$ ? Example: Take $x(t) = \exp(-\alpha t)u(t)$ and $h(t) = \exp(-\alpha t)u(-t)$ where $\alpha \gt 0$ and $u(t)$ denotes the step function. Then we have $$X(s) = \frac{1}{s+\alpha} \ \ \ \text{ROC}_1: \ \Re\{s\}\gt -\alpha, \\ H(s) = \frac{1}{s+\alpha} \ \ \ \text{ROC}_2: \ \Re\{
|
Willie Wong's great answer shows that it's possible to construct uncountably many counterexamples. Let $S = \{ (-2)^k : k \in \mathbb{N} \}$ , $0\lt \epsilon \lt \frac14$ and define the function $$\chi(t) = \begin{cases} 1 & \exists s\in S, \quad |t-s|<\epsilon \\ 0 & \text{otherwise} \end{cases}$$ Also let $x(t) = h(t) = \chi(t)$ . It can be shown that in this case $\text{ROC}_1 = \text{ROC}_2 = \text{ROC}_1 \cap \text{ROC}_2 = \emptyset$ and the function $\tau \mapsto g(\tau, t-\tau)$ defined by $g(\tau , t-\tau) = \chi(\tau)\chi(t-\tau)$ has compact support for each $t\in \mathbb{R}$ . This shows that for each $t\in \mathbb{R}$ the convolution converges. If we want to have examples for the case $\text{ROC}_1 \not = \emptyset$ and $\text{ROC}_2 \not = \emptyset$ , we can consider $x(t) = \exp(-\eta|t|)\chi(t)$ and $h(t) = \exp(-\eta|t| + 5\eta t)\chi(t)$ . In this case we have $\text{ROC}_1: \Re(s') \in (-\eta , \eta)$ and $\text{ROC}_2: \Re(s') \in (4\eta , 6\eta)$ , so still $\text{ROC}_1 \cap \text{ROC}_2 = \emptyset$ holds. Sinc
|
|calculus|complex-analysis|convergence-divergence|improper-integrals|laplace-transform|
| 1
|
Solving $\int_1^3\frac{1+\frac{1+...}{x+...}}{x+\frac{1+...}{x+...}}dx$
|
In the Regular season of the MIT Integration bee, the following integral was given $$I=\int_1^3\frac{1+\frac{1+...}{x+...}}{x+\frac{1+...}{x+...}}dx$$ If the integrand is called $f(x)$ then I am guessing that $f(x)=\frac{1+f(x)}{x+f(x)}$ implying $f(x)=\frac{1-x+\sqrt{(x-1)^2+4}}2$ . So we now have (substituting $u=x-1$ ) $$I=\frac12\int_0^2\sqrt{u^2+4}\,du-1=\sqrt2-1+\log(\sqrt2+1)$$ Is there a way to arrive at this, particularly without even finding $f(x)$ ?
|
I am not sure if integrating without even finding $\mathcal {f} \left( x \right)$ is possible. Integrating $\mathcal {f} \left( x \right)$ is actually standard (*): Just integrate by parts and see $$ \int \sqrt {{x}^{2} + 1} \text {d} x = x \sqrt {{x}^{2} + 1} - \int x \cdot \frac {\text {d}}{\text {d} x} \sqrt {{x}^{2} + 1} \cdot \text {d} x, $$ where $$ \frac {\text {d}}{\text {d} x} \sqrt {{x}^{2} + 1} = \frac {1}{2 \sqrt {{x}^{2} + 1}} \cdot 2 x = \frac {x}{\sqrt {{x}^{2} + 1}}, $$ and $$ \int \frac {{x}^{2}}{\sqrt {{x}^{2} + 1}} \text {d} x = \int \frac {\left( {x}^{2} + 1 \right) - 1}{\sqrt {{x}^{2} + 1}} \text {d} x = \int \sqrt {{x}^{2} + 1} \text {d} x - \int \frac {1}{\sqrt {{x}^{2} + 1}} \text {d} x. $$ So $$ \int \sqrt {{x}^{2} + 1} \text {d} x = x \sqrt {{x}^{2} + 1} - \int \sqrt {{x}^{2} + 1} \text {d} x + \int \frac {1}{\sqrt {{x}^{2} + 1}} \text {d} x; $$ that is, $$ \int \sqrt {{x}^{2} + 1} \text {d} x = \frac {1}{2} x \sqrt {{x}^{2} + 1} + \frac {1}{2} \int \frac {1}{
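The antiderivative pieces above combine to the stated value; a crude trapezoid-rule check of the original definite integral (a sketch):

```python
import math

# f is the continued-fraction integrand solved in the question:
# f(x) = (1 - x + sqrt((x-1)^2 + 4)) / 2, integrated over [1, 3].
def f(x):
    return (1 - x + math.sqrt((x - 1) ** 2 + 4)) / 2

N = 20000
h = 2 / N
total = (f(1) + f(3)) / 2 + sum(f(1 + k * h) for k in range(1, N))
numeric = total * h
closed = math.sqrt(2) - 1 + math.log(1 + math.sqrt(2))
assert abs(numeric - closed) < 1e-6
print("numeric integral matches sqrt(2) - 1 + log(1 + sqrt(2))")
```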
|
|calculus|integration|contest-math|
| 0
|
General method for solving problems like $17 \mid 2x + 3y \Longleftrightarrow 17 \mid 9x + 5y$
|
Note: This is not a duplicate of Understanding a proof that $17\mid (2x+3y)$ iff $17\mid(9x +5y)$ or Understanding a proof that $2x + 3y$ is divisible by $17$ iff $9x + 5y$ is divisible by $17$ . I was reading through Naoki Sato's notes on number theory. I am somewhat unsatisfied with the given solution to this problem: Example. Let $x$ and $y$ be integers. Prove that $2x + 3y$ is divisible by $17$ iff $9x + 5y$ is divisible by $17$ . Solution. $17 \mid 2x + 3y \Rightarrow 17 \mid 13(2x + 3y)$ , or $17 \mid 26x + 39y \Rightarrow 17 \mid 9x + 5y$ , and conversely, $17 \mid 9x + 5y \Rightarrow 17 \mid 4(9x + 5y)$ , or $17 \mid 36x + 20y \Rightarrow 17 \mid 2x + 3y$ . I do understand the solution but it seems like an unmotivated approach. How would one get the numbers $13$ and $4$ except for clever guessing? Is there a general method for solving such problems? That is, some theorem that trivializes such problems. I don't know much about modular arithmetic but I've noticed that $13$ and $4
|
Let $(c,d)$ be an ordered pair in $((\mathbb{Z}/17\mathbb{Z})^*)^2$ , and let $(x,y)$ be any pair of integers s.t. $cx+dy$ is divisible by $17$ . Then let $(c',d')$ be another pair in $((\mathbb{Z}/17\mathbb{Z})^*)^2$ . Then $c'x+d'y$ is also divisible by $17$ iff either: There is an element $a \in (\mathbb{Z}/17 \mathbb{Z})^*$ such that $(ac,ad)=(c',d')$ , or Both $x$ and $y$ are each divisible by $17$ . This is a straightforward exercise to check. Note that $17$ can be replaced by any prime $q$ . Furthermore, this leads to a fast algorithm to check, as for any prime $q$ , the equation $c=ac'$ can be solved for $a$ in time polylog $(q)$ , and then it is straightforward to check whether the equation $d=ad'$ holds mod $q$ .
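For the concrete pairs in the question, the scalar $a$ is easy to exhibit (a sketch):

```python
# 13 * (2, 3) ≡ (9, 5) (mod 17) and 4 * (9, 5) ≡ (2, 3) (mod 17):
assert (13 * 2 % 17, 13 * 3 % 17) == (9, 5)
assert (4 * 9 % 17, 4 * 5 % 17) == (2, 3)
# note 13 and 4 are inverses mod 17:
assert 13 * 4 % 17 == 1

# consequently the two divisibility conditions agree on any integers x, y:
for x in range(-20, 21):
    for y in range(-20, 21):
        assert ((2 * x + 3 * y) % 17 == 0) == ((9 * x + 5 * y) % 17 == 0)
print("17 | 2x+3y iff 17 | 9x+5y on the sampled grid")
```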
|
|elementary-number-theory|modular-arithmetic|divisibility|
| 0
|
Proof by contradiction, that a set of all binary sequences, where "1" cant be twice in a row, is uncountable
|
I emphasize that I want to prove it by contradiction using Cantor's diagonal method. $$A = \{ (a_i) \mid \text{$a_i$ all the binary sequences where $1$ doesn't appear twice in a row}\}.$$ So I'm assuming $A$ is a countable set, and trying to find a contradiction by finding a sequence in $A$ that is not listed. My first attempt was to take the diagonal and change every $0$ to $1$ and vice versa. a1 = 01001... a2 = 10100... a3 = 01000... a4 = 00001... a5 = 10101... For this example my attempt fails because I will get 11110 , which is definitely not part of $A$ (because $1$ appears twice in a row). The next attempt was to change $0$ to $1$ only if there is not a $1$ in the previous digit. For this attempt I will get 10100 , which is definitely in $A$ , but it was already listed as $a_2$ , so it isn't a contradiction.
|
Every $1$ is always followed by a $0$ , so you can replace any $10$ with a $1$ , and you're now able to form any arbitrary binary string. This works the other way too (add a $0$ after every $1$ ), giving you a bijection between $A$ and all binary strings. Thus $A$ is uncountable. It's not a diagonalization proof, but it's simpler IMO
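The contraction/expansion maps can be sketched on finite prefixes (a sketch; infinite sequences work the same way positionwise):

```python
# Every 1 in an A-string is followed by 0, so contracting "10" -> "1" yields an
# arbitrary binary string; expanding "1" -> "10" inverts it (finite prefixes shown).
def contract(s):
    return s.replace("10", "1")

def expand(s):
    return s.replace("1", "10")

for s in ["", "0", "1", "1101", "10011", "111"]:
    t = expand(s)
    assert "11" not in t        # image avoids consecutive 1s
    assert contract(t) == s     # round trip recovers s
print("expand/contract round-trips on the samples")
```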
|
|set-theory|cantor-set|
| 0
|
$24\mid n(n^{2}-1)(3n+2)$ for all $n$ natural problems in the statement.
|
"Prove that for every $ n $ natural, $24\mid n(n^2-1)(3n+2)$" Resolution: $$24\mid n(n^2-1)(3n+2)$$ holds if $$3\cdot8\mid n(n^2-1)(3n+2),$$ since $\gcd(3,8)=1$ . Because $$n(n^2-1)(3n+2)=(n-1)n(n+1)(3n+2)$$ contains three consecutive integers, we get $3\mid n(n^{2}-1)(3n+2)$ . But how to show $$8\mid n(n^{2}-1)(3n+2)?$$ I would never have succeeded without the help of everyone who posts tips, ideas, etc. Thank you.
|
Newton's interpolation formula gives $$ n(n^2-1)(3n+2) = 48 \binom{n}{2} + 120 \binom{n}{3} + 72 \binom{n}{4} $$ which is clearly a multiple of $24$ .
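The interpolation identity can be spot-checked directly (a sketch):

```python
import math

# Check the identity n(n^2-1)(3n+2) = 48*C(n,2) + 120*C(n,3) + 72*C(n,4),
# which makes divisibility by 24 visible term by term.
for n in range(0, 50):
    lhs = n * (n * n - 1) * (3 * n + 2)
    rhs = 48 * math.comb(n, 2) + 120 * math.comb(n, 3) + 72 * math.comb(n, 4)
    assert lhs == rhs
    assert lhs % 24 == 0
print("identity and divisibility verified for n = 0..49")
```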
|
|elementary-number-theory|divisibility|
| 0
|
Partitions of $\{1,\dots,n\}$ with no consecutive integers in each block is counted by $B(n-1)$?
|
I'm trying to understand why $B(n-1)$ also counts the number of partitions of $[n]$ where no two consecutive integers appear in the same block. Now the Bell number $B(n-1)$ counts the number of partitions of the $(n-1)$-set $[n-1]$. Suppose I take any partition $\pi$ of $[n-1]$. Now taking $i,i+1,\dots,j$ to be a maximal sequence of two or more consecutive integers in a block, I can remove alternating integers $j-1$, $j-3$, $j-5$,... and put them in a block with $n$. Doing so for all sequences of consecutive integers in blocks of $\pi$ will then give a partition of $[n]$ with no two consecutive numbers. I think this gives a needed bijection of the two things, but if I'm given a partition of $[n]$ with no two consecutive integers in a block, how can I reconstruct the partition of $[n-1]$ to see that it is indeed a bijection? Thanks!
|
Alternatively, there is an evident bijection between partitions of $\left\{0,\dots,\text{n-1}\right\}$ and length- $\text{n}$ strings on the set of characters $\left\{0,1,\dots\right\}$ with the property that the first character is $0$ and every character thereafter is at most one more than the largest character preceding it. E.g., $$\left\{0,1,4\right\}\sqcup\left\{2,5\right\}\sqcup\left\{3\right\}\ \mapsto\ \underline{0}\ \underline{0}\ \underline{1}\ \underline{2}\ \underline{0}\ \underline{1}\text{.}$$ (I.e., the $\text{k}^{\text{th}}$ character of the string, indexing from $0$ , is the index of $\text{k}$ 's part when the parts of the partition are ordered by their lowest members.) There is likewise an evident bijection between partitions of $\left\{0,\dots,\text{n-1}\right\}$ in which no two consecutive elements occur in the same part and length- $\text{n}$ strings on the set of characters $\left\{*\right\}\sqcup\left\{0,1,\dots\right\}$ with the property that the first two chara
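For small $n$ the claimed count $B(n-1)$ can be verified by brute force (a sketch; the helper names are mine):

```python
def partitions(n):
    """All set partitions of {1,...,n}, encoded as restricted growth
    strings a, where a[i] is the block index of element i+1."""
    def rec(prefix, mx):
        if len(prefix) == n:
            yield prefix
            return
        for b in range(mx + 2):
            yield from rec(prefix + [b], max(mx, b))
    yield from rec([0], 0)

def count_no_consecutive(n):
    # partitions in which no block contains both i and i+1
    return sum(all(a[i] != a[i - 1] for i in range(1, n))
               for a in partitions(n))

bell = [1, 1, 2, 5, 15, 52]          # B(0), ..., B(5)
for n in range(2, 7):
    assert count_no_consecutive(n) == bell[n - 1]
print("counts match B(n-1) for n = 2..6")
```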
|
|combinatorics|set-partition|
| 0
|
Probability of dealing a specific blackjack sequence of hands
|
I'm trying to calculate the probability that a certain scenario will be dealt in Blackjack (known as "21" to some), after a very random shuffle of a pack of 52 cards. Let me describe the "rules" that should be kept in mind for this question: There are 6 players and one dealer. The dealer deals each of the six players one card, and then themselves one card. A second card is then dealt to each player and the dealer. If the dealer has blackjack at this point (which is composed of a face card or a 10, paired with an ace), the cards are collected, and another hand is dealt the same way from the remainder of the 52 card pack. The cards showing for the players in this situation do not matter (there is no insurance, etc... Just the very basics as described above) The specific question is "What is the probability that after a single completely random shuffle, the cards dealt will end with the dealer seeing blackjack three times in a row (as described above)?" Now I'll describe how I worked out
|
I agree with your analysis and your answer. An alternative way of computing the probability is: Let $~E_1~$ denote the event that there are $~3~$ Aces and $~3~$ 10-King cards in the first 6 cards. Let $~E_2~$ denote the event that (assuming that event $~E_1~$ has occurred), there is exactly one Ace in each of cards 1-2, 3-4, 5-6. Then, the desired probability is $~p(E_1) \times p(E_2).~$ $$p(E_1) = \frac{\binom{4}{3} \times \binom{16}{3}}{\binom{52}{6}}.$$ To compute $~E_2~$ note that there are $~\displaystyle \binom{6}{3} = 20~$ ways of selecting the $~3~$ cards out of $~6~$ that will be the Aces. Further, of these $~20~$ ways, exactly $~2^3 = 8~$ of them are satisfactory (i.e. 1-3-5, 1-3-6, 1-4-5, 1-4-6, ...). Therefore, $$p(E_2) = \frac{8}{\binom{6}{3}} = \frac{8 \times 3! \times 3!}{6!}.$$ Then $$p(E_1) \times p(E_2) = \frac{4! \times 16!}{3! \times 1! \times 13! \times 3!} \times \frac{46! \times 6!}{52!} \times \frac{8 \times 3! \times 3!}{6!}$$ $$= \frac{4 \times 3 \times 2 \tim
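The pieces of this computation can be checked with exact arithmetic; in particular the $8$ of $\binom{6}{3}=20$ satisfactory selections are enumerated rather than listed by hand (a sketch, variable names mine):

```python
from fractions import Fraction
from itertools import combinations
from math import comb

# p(E1): the dealer's six cards hold exactly 3 aces and 3 ten-valued cards
p_e1 = Fraction(comb(4, 3) * comb(16, 3), comb(52, 6))

# p(E2): exactly one ace in each of the card pairs (1,2), (3,4), (5,6);
# enumerate which 3 of the 6 positions carry the aces
good = sum(1 for pos in combinations(range(6), 3)
           if all(len(set(pos) & {2*k, 2*k + 1}) == 1 for k in range(3)))
assert good == 8 and comb(6, 3) == 20
p_e2 = Fraction(good, comb(6, 3))

p = p_e1 * p_e2
print(p, "~", float(p))   # roughly 4.4e-05
```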
|
|probability|card-games|
| 1
|
How do you find the equation for a best fit line that passes a specific point?
|
All I have been given are a set of x and y coordinates, and another point that the best-fit line should pass through (not the origin). I know how to use linear regression to find the slope and intercept of the regular best-fit line, but I don't know how to force the line through a specific point. Any help would be appreciated.
|
Let $(x_0, y_0)$ be the point that the regression line, $\hat{y} = \beta x + \beta_0$ , must pass through. Since we require the estimated slope and intercept to satisfy $y_0 = \beta x_0 + \beta_0$ , we can rewrite $\beta_0 = y_0 - \beta x_0$ , which now allows us to consider the problem of estimating the regression as having only one parameter: either $\beta_0$ or $\beta$ (you can take your pick and express the other parameter in terms of the one you selected). Now, I would encourage you to rewrite the regression in terms of only one of the parameters ( $\beta_0$ or $\beta$ ) along with $x_0$ and $y_0$ . Hopefully it shouldn't be too difficult to do the rest from there.
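Carrying the suggested substitution through gives the closed form $\beta = \sum_i (x_i-x_0)(y_i-y_0)\big/\sum_i (x_i-x_0)^2$, which the answer leaves as an exercise; a small sketch under that assumption:

```python
def fit_through_point(xs, ys, x0, y0):
    """Least-squares slope for a line forced through (x0, y0):
    minimizing sum((y - y0 - b*(x - x0))^2) over b gives the formula below."""
    num = sum((x - x0) * (y - y0) for x, y in zip(xs, ys))
    den = sum((x - x0) ** 2 for x in xs)
    b = num / den
    return b, y0 - b * x0          # slope, intercept

# data exactly on y = 2x + 1, forced through (1, 3), which lies on that line
b, b0 = fit_through_point([0, 1, 2, 3], [1, 3, 5, 7], 1, 3)
assert abs(b - 2) < 1e-12 and abs(b0 - 1) < 1e-12
```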
|
|statistics|regression|
| 0
|
What shape does the intersection of two spheres yield? How do you find the center and radius of this circle?
|
Suppose I have two spheres of arbitrary size $S_1$ and $S_2$ and I intersect them so they overlap to some extent. What kind of shape would their intersection create? In other words, what does $S_1 \cap S_2$ look like? EDIT: Someone has suggested this answer which does seem relevant. https://math.stackexchange.com/a/1551800/197705 When two spheres $S_1$ and $S_2$ intersect, their intersection forms a circle, assuming they aren't tangent. The answerer explains how to obtain the center and radius for this circle of intersection of the two spheres. However, to do this, they seem to rely on the distance from the center of the circle to each of the sphere centers, namely $d_1, d_2$ . How do we obtain $d_1$ and $d_2$ ? This step I don't get and so the answer seems circular to me.
|
It's a circle, as shown by this image:
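To address the question's follow-up about $d_1$: if $d$ is the distance between the sphere centers, the standard formulas give $d_1 = (d^2+r_1^2-r_2^2)/(2d)$ and circle radius $\sqrt{r_1^2-d_1^2}$. The code below is my own sketch of that computation, not from the answer:

```python
from math import sqrt, cos, sin, pi

def intersection_circle(c1, r1, c2, r2):
    """Center and radius of the circle where two overlapping spheres meet."""
    d = sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))
    d1 = (d*d + r1*r1 - r2*r2) / (2*d)        # distance from c1 along the axis
    rho = sqrt(r1*r1 - d1*d1)                 # circle radius
    center = tuple(a + d1 * (b - a) / d for a, b in zip(c1, c2))
    return center, rho

center, rho = intersection_circle((0, 0, 0), 5, (6, 0, 0), 5)
assert abs(center[0] - 3.0) < 1e-12 and abs(rho - 4.0) < 1e-12

# sanity check: points of the circle lie on both spheres
for t in (0, 1, 2, 3):
    p = (center[0], rho * cos(t * pi / 2), rho * sin(t * pi / 2))
    assert abs(sum(x * x for x in p) - 25) < 1e-9                    # sphere 1
    assert abs((p[0] - 6) ** 2 + p[1] ** 2 + p[2] ** 2 - 25) < 1e-9  # sphere 2
```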
|
|spheres|
| 0
|
Confusion on Folland's redefinition of L^1 space.
|
The following are excerpts from Folland: Proposition 2.12: 2.12 Proposition. Let $(X, \mathcal{M}, \mu)$ be a measure space and let $(X, \bar{\mathcal{M}}, \bar{\mu})$ be its completion. If $f$ is an $\mathcal{M}$ -measurable function on $X$ , there is an $\bar{\mathcal{M}}$ -measurable function $g$ such that $f = g$ $\bar{\mu}$ -almost everywhere. Proposition 2.23: If $f, g \in L^1$ , then $\int_E f = \int_E g$ for all $E \in \mathcal{M}$ iff $\int |f - g| = 0$ iff $f = g$ a.e. This proposition shows that for the purposes of integration, it makes no difference if we alter functions on null sets. Indeed, one can integrate functions $f$ that are only defined on a measurable set $E$ whose complement is null simply by defining $f$ to be zero (or anything else) on $E^c$ . In this fashion, we can treat $\bar{\mathbb{R}}$ -valued functions that are finite a.e. as real-valued functions for the purposes of integration. With this in mind, we shall find it more convenient to redefine $L^1(\mu)$
|
I'm not sure if Folland did assume (implicitly or explicitly) that the measure space is complete, but based on the presence of Proposition $2.12$ , I'm guessing he did (although he may not have said so explicitly). Other things you're quoting here seem to confirm that. However, let's cover why, regardless of what Folland assumes or explicitly says, all of what he's doing here is "allowed" and "makes no difference." Let $N$ denote the set of all $A\subset X$ such that there exists $E\in \mathcal{M}$ with $A\subset E$ and $\mu(E)=0$ . We refer to the members of $N$ as null sets (so "null" doesn't imply measurable according to our usage). Let $D$ denote the set of $\overline{\mathbb{R}}$ -valued functions defined on a subset of $X$ . For $f\in D$ , let $\text{dom}(f)$ denote the set of $x\in X$ such that $f(x)$ is defined and let $\text{fin}(f)$ denote the set of points $x\in \text{dom}(f)$ such that $f(x)\in \mathbb{R}$ . Let $M_0$ denote the set of functions $f\in D$ which are defined o
|
|real-analysis|measure-theory|lebesgue-integral|
| 1
|
A sequence which is homotopy equivalent to a fibration is a fibration
|
Suppose we have a fibration $F\to E \to B$ and a sequence of maps $F'\to E' \to B'$ and suppose we have a map between these two sequences, which commutes up to homotopy and which is pointwise a homotopy equivalence (i.e each map $\bullet\to\bullet', \bullet\in\{F,E,B\}$ in the homotopy commutative map is a homotopy equivalence). Is there any chance that then $F'\to E'\to B'$ is a homotopy fibration? We will obviously have the homotopy long exact sequence as if it were a homotopy fibration, but will it have the homotopy lifting property? thanks in advance for your help and advice.
|
For an explicit counter-example, you can consider something like $$\require{AMScd}\begin{CD} \{1\} @>{\subseteq}>> I @>{=}>> I\\ @V{=}VV @V{i}VV @V{=}VV\\ \{\ast\} @>{\subseteq}>> CX @>{p}>> I \end{CD}. $$ Here, $CX=X\times I/X\times\{1\}$ is the cone over some space $X$ , the map $i$ is given by $t\mapsto [x_0,t]$ for some basepoint $x_0\in X$ and the map $p$ is given by $[x,t]\mapsto t$ . The squares strictly commute, the vertical maps are homotopy equivalences (the outer vertical maps are even identities) and the top row is clearly a fibration sequence, yet the bottom row is not necessarily so. Indeed, a fibration over a path-connected base has homotopy-equivalent fibers, yet the fiber of $p$ over $1$ is the cone point $\{\ast\}$ whereas the fiber of $p$ over any $t<1$ is a homeomorphic copy of $X$ , so it suffices to choose a non-contractible $X$ to get a counter-example.
|
|algebraic-topology|homotopy-theory|fibration|
| 1
|
How we define the Lie symmetry of a stochastic differential equation?
|
In the literature on symmetry groups, Sophus Lie defined the symmetry of a pde or an ode by a vector field defined on the tangent space of the submanifold (defined by solutions of the pde or ode), and searched for when this pde or ode is invariant under the action of the one-parameter group of transformations generated by the vector field of the submanifold. In a stochastic differential equation we have a random term, so how can we define a Lie symmetry of this kind of equation (if possible, of course)?
|
In "On Lie-point symmetries for Ito stochastic differential equations" , they explain some of the difficulties of doing that and some approaches eg. by Kozlov in "Symmetry of systems of stochastic differential equations with diffusion matrices of full rank". For example, Itô equations don't satisfy chain rule and so we lose the invariance in coordinates, so one might prefer using the Stratonovich formulation that satisfies chain rule. Kozlov studied $dx = f(x, t) dt + \sigma(x, t) dw$ that admit a simple Lie-point symmetry with generator $X = \xi(x, t) \partial_{X}$ , in order to reduce them to linear equations $dx = \hat{f}( t) dt + \hat{\sigma}( t) dw$ .
|
|lie-groups|lie-algebras|stochastic-differential-equations|symmetry|invariant-theory|
| 0
|
An upper bound involving the second derivative of the error function
|
I am trying to bound a function of the form \begin{align} f(x,y) &= \operatorname{erf}(x+y) - 2\operatorname{erf}(x) + \operatorname{erf}(x-y), \end{align} for small values of $y$ and all (sufficiently large) $x$ . Obviously this form is rather reminiscent of the symmetric way of writing the second derivative \begin{align} g''(x) = \lim_{y\to 0} \frac{g(x+y) - 2g(x) + g(x-y)}{y^2}, \end{align} so we are motivated to seek a bound that looks like \begin{align} f(x,y) \leq y^2 \frac{d^2}{dx^2} \operatorname{erf}(x) = -\frac{4}{\sqrt{\pi}} xy^2 \exp(-x^2). \end{align} It appears (just numerically plotting it) that this bound is indeed true, as long as $x \geq y$ and $x\geq\sqrt{2}$ , which is relevant as the point where the Gaussian $\exp(-x^2)$ switches from being concave to convex. Unfortunately I have not been able to prove such a bound. I think that such a basic thing should be known somewhere, so hopefully someone here has seen it before!
|
Some thoughts. Remarks : We can prove that if $x \ge \frac32$ and $x \ge y > 0$ , the desired inequality is true. I think that $x \ge \frac32$ can be relaxed to $x \ge \sqrt{2}$ . Fact 1 : Let $x \ge \frac32$ and $x \ge y > 0$ . Then $$ -\frac{4}{\sqrt{\pi}} xy^2 \exp(-x^2) \ge \operatorname{erf}(x+y) - 2\operatorname{erf}(x) + \operatorname{erf}(x-y).$$ Proof of Fact 1. Let $$F(y) := -\frac{4}{\sqrt{\pi}} xy^2 \exp(-x^2) - \operatorname{erf}(x+y) + 2\operatorname{erf}(x) - \operatorname{erf}(x-y).$$ We have, for all $y \in (0, x]$ , $$F'(y) = \frac{4xy}{\sqrt{\pi}}\exp(-x^2 - y^2) \Big(\frac{\exp(2xy) - \exp(-2xy)}{2xy} - 2\exp(y^2)\Big) \ge 0. \tag{1}$$ (The proof is given at the end.) Also, we have $F(0) = 0$ . Thus, $F(y) \ge 0$ for all $y \in (0, x]$ with $x \ge 3/2$ . We are done. Proof of (1). It suffices to prove that $$\frac{\exp(2xy) - \exp(-2xy)}{2xy} - 2\exp(y^2) \ge 0. \tag{A1}$$ Let $g(u) := \frac{\mathrm{e}^u - \mathrm{e}^{-u}}{u}$ . We have $$g'(u) = \frac{\mathrm{e}^{-
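Fact 1 is easy to spot-check numerically over a grid with $x \ge 3/2$ and $0 < y \le x$ (not a proof, just corroboration; grid choices are mine):

```python
from math import erf, exp, pi, sqrt

def f(x, y):
    return erf(x + y) - 2 * erf(x) + erf(x - y)

def bound(x, y):
    # y^2 * (d^2/dx^2) erf(x) = -(4/sqrt(pi)) * x * y^2 * exp(-x^2)
    return -4 / sqrt(pi) * x * y * y * exp(-x * x)

for i in range(15, 41):          # x from 1.5 to 4.0 in steps of 0.1
    x = i / 10
    for j in range(1, 11):       # y from 0.1*x up to x
        y = j * x / 10
        assert f(x, y) <= bound(x, y) + 1e-15
print("inequality holds on the grid")
```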
|
|calculus|inequality|error-function|
| 0
|
Equivalence between IVP and integral equation
|
I was trying to prove the following theorem: If $f(x, y)$ is continuous on some region $R \subseteq \mathbb R^2$ then any solution of IVP $$y'(x)=f(x,y(x)), y(x_0)=y_0 \tag{1}$$ is also a solution of integral equation $$y(x)=y_0+\int_{x_0}^x f(t,y(t)) \, dx \tag{2}$$ Converse also holds, meaning that, any continuous solution of $(2)$ is also a solution of $(1)$ . Proof: Let $\varphi$ be the solution of $(1)$ on some interval $\mathcal I$ which contains point $x_0$ . Than we have $$\varphi'(t)=f(t,\varphi(t)),\forall t\in\mathcal I\tag{3}$$ and $$\varphi(x_0)=y_0\tag{5}$$ We would like for the right hand side of $(3)$ to be Riemman integrable, and it would be, provided that $f(t,\varphi(t))$ is continuous. Then we could simply integrate relation $(3)$ and obtain $$\int_{x_0}^x \varphi'(t) \,dt=\int_{x_0}^x f(t,\varphi(t)) \,dt \tag{6}$$ which would yield, after evaluating left integral $$\varphi(x)=y_0+\int_{x_0}^x f(t,\varphi(t)) \,dt,\forall x\in\mathcal I \tag{7}$$ This would mean th
|
You're referencing the Cauchy-Lipschitz or Picard-Lindelof theorem for the ODE, which usually assumes that $ f \in \mathcal{C}([0,T] \times \mathbb{R}) $ in addition to being Lipschitz continuous in the second argument. Here I assume $x_0=0$ for convenience and $T>0$ . Then you use your proof to equate the integral solution and the ODE and use the Banach fixed point theorem to guarantee a solution for $0 < t \le T$ . There is a local version of this theorem which says that given any $r>0$ , if $ f \in \mathcal{C} \left( [0,T] \times \overline{B(y_0;r)} \right)$ and $f$ is Lipschitz continuous in the second argument, then the ODE admits a unique solution for $0 < t \le \tau$ for some $\tau \in (0,T]$ . Here $\overline{B(y_0;r)}$ is the closed ball centered at $y_0$ with radius $r$ . This works because the function $t \rightarrow f(t,\varphi(t)) $ is clearly continuous in a neighborhood of $x_0$ since $f$ is continuous in a neighborhood of $(x_0,y_0)$ .
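The fixed-point iteration behind Banach's argument can be illustrated on a toy problem, $y'=y$, $y(0)=1$ (my own sketch, not tied to the specific theorem statement; grid and iteration count are my choices):

```python
from math import exp

# Grid on [0, 1]; integrals computed with the trapezoid rule.
h, n = 0.001, 1000
y = [1.0] * (n + 1)                      # initial guess: phi_0 = 1

for _ in range(25):                      # iterate Phi(y)(x) = 1 + int_0^x y(t) dt
    acc, new = 0.0, [1.0]
    for i in range(1, n + 1):
        acc += h * (y[i - 1] + y[i]) / 2
        new.append(1.0 + acc)
    y = new

# the Picard iterates converge to the true solution e^x on [0, 1]
assert abs(y[-1] - exp(1)) < 1e-4
```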
|
|calculus|ordinary-differential-equations|continuity|initial-value-problems|
| 1
|
What qualifies as a polynomial?
|
I have a very simple question regarding the definition of polynomials (with real coefficients). What I've seen so far in terms of definitions: A polynomial $p(x)$ is a function that can be written in the form $p(x)=\sum^n_{k=0} a_kx^k$ where $a_k$ are real numbers. A polynomial is an expression involving only addition, subtraction, multiplication and integer exponents. Now $f(x) = (2x-1)(3x + 4)$ is a polynomial so the representation $\sum_{k=0}^n a_kx^k$ is not essential. Of course one can rewrite $f$ in the desired form. If I have, say, the function $$ p(x)=\frac{x^3-x^2+x-1}{x^2+1} $$ then this is not a polynomial as it stands. However one can easily represent it as a polynomial by factoring out $x^2 + 1$ . Another example that I wonder about is the function $$ q(x)=\begin{cases} x^2+1 & \text{if } x>0 \\ x + 1 & \text{if } x \le 0\end{cases} $$ The latter involves only the allowed operations, furthermore pointwise it is of the form $\sum_{k=0}^n a_kx^k$ . So my question is whether the e
|
We would need to define a polynomial. A polynomial in terms of $x_1, x_2,...x_n$ is a particular type of mathematical expression with the following restriction: The $x_i$ s are only added , subtracted , multiplied , or raised to positive integer powers. If an $x_i$ is subject to any other operation, it is not a polynomial in terms of that particular $x_i$ $x$ is a polynomial in terms of $x$ , a monomial to be precise. $x$ -1 is also a polynomial (binomial) in terms of $x$ $x^2-f$ is also a polynomial in terms of $x$ $3x^3-2x^2+x-7$ is a polynomial in terms of $x$ . Here are some expressions that are not polynomials in terms of $x$ $2^x$ , because this expression takes the antilog of $x$ to the base 2 $x^{-\pi}$ , because $-\pi$ is not a positive integer. $\frac1x$ , because it equals $x^{-1}$ , and $-1$ is not a positive integer. $\log(x)$ , because $x$ is either the base or the subject of the common logarithm function. Of course, polynomials can be in more than one variable. $x_1{x_2}
|
|functions|polynomials|definition|real-numbers|
| 0
|
Prove this equation with the given determinant
|
Let $$ \begin{vmatrix} a & \sqrt{5} & \sqrt{7} \\ \sqrt{3} & b & \sqrt{7} \\ \sqrt{3} & \sqrt{5} & c \\ \end{vmatrix} =0 $$ with $a\neq\sqrt{3},\, b\neq\sqrt{5},\, c\neq\sqrt{7}.$ Show that $$\frac{a}{a-\sqrt{3}}+\frac{b}{b-\sqrt{5}}+\frac{c}{c-\sqrt{7}}=2$$
|
I believe that in the proof statement there should be +2 on the RHS. Under that assumption: $$ \begin{vmatrix} a & \sqrt{5} & \sqrt{7} \\ \sqrt{3} & b & \sqrt{7} \\ \sqrt{3} & \sqrt{5} & c \\ \end{vmatrix} =0 $$ Dividing the first, second and third columns throughout by $\sqrt3, \sqrt5, \sqrt7$ respectively: $$ \begin{vmatrix} a’ & 1 & 1\\ \ 1 & b’ & 1 \\ \ 1 & 1 & c’ \\ \end{vmatrix} =0 $$ [Assuming $a’=\frac{a}{\sqrt3}$ and similarly for others.] As shown here , that implies: $$\frac{1}{1-a’}+ \frac{1}{1-b’}+ \frac{1}{1-c’}=1$$ Substituting the original values leads to the proof of the given statement.
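The identity can be corroborated numerically: fix $a,b$, solve the determinant equation for $c$ (the determinant is linear in $c$), and evaluate the sum (a throwaway check, variable names mine):

```python
from math import sqrt

s3, s5, s7 = sqrt(3), sqrt(5), sqrt(7)

def det(a, b, c):
    # cofactor expansion of the 3x3 determinant along the first row
    return a*(b*c - s5*s7) - s5*(s3*c - s7*s3) + s7*(s3*s5 - b*s3)

# pick a, b freely, then solve det = 0 for c by linear interpolation
a, b = 2.0, 3.0
c = 1.0 - det(a, b, 1.0) / (det(a, b, 2.0) - det(a, b, 1.0))
assert abs(det(a, b, c)) < 1e-9

total = a/(a - s3) + b/(b - s5) + c/(c - s7)
assert abs(total - 2) < 1e-9
```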
|
|matrices|determinant|
| 1
|
Solve $\int_0^\infty\frac{\ln(2e^x-1)}{e^x-1}dx$
|
In one of the final problems of the MIT integration bee for this year, $$I=\int_0^\infty\frac{\ln(2e^x-1)}{e^x-1}dx$$ was one of the given problems. My try was to let $u=e^x-1$ to get $$I=\int_0^\infty\frac{\ln(2u+1)}{u(u+1)}du=\int_0^\infty\frac{\ln(x+1)}{x(x+2)}dx$$ I don't know whether I would be right but I had a feeling this was a dead end. Turning the original integral into a geometric series doesn't seem promising either. How should I solve this? Note: These problems are solved in 5 minutes so please come up with a solution that can be done in such a time limit.
|
You can proceed with your last attempted integral as follows, though note that you had omitted a factor of $2$ . Substitutions: $$\int_0^\infty \frac{\log(x+1)}{x(x+2)} \, dx \stackrel{x\to x-1}= \int_1^\infty \frac{\log x}{(x-1)(x+1)} \stackrel{x\to\tfrac1x}= \int_0^1 \frac{\log x}{x^2-1} \, dx$$ Series: $$- \sum_{n\ge0} \int_0^1 x^{2n} \log x \, dx = \sum_{n\ge0} \frac1{(2n+1)^2}$$ The last sum is easy if you're familiar with the Basel problem.
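Numerically the original integral indeed equals $2 \cdot \pi^2/8 = \pi^2/4$ (quick quadrature check; the truncation point and step size are my choices):

```python
from math import exp, log, pi

def f(x):
    # integrand of the original problem; the limit as x -> 0 is 2
    return 2.0 if x == 0 else log(2 * exp(x) - 1) / (exp(x) - 1)

# composite Simpson's rule on [0, 40]; the tail beyond 40 is O(41 * e^{-40})
n, a, b = 40000, 0.0, 40.0
h = (b - a) / n
total = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
integral = total * h / 3

assert abs(integral - pi ** 2 / 4) < 1e-6
```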
|
|calculus|integration|contest-math|
| 0
|
Sum of over primes $p$ of $x^{-p}$
|
I was playing around with the following series $$S(x) = \frac{1}{x^2}+\frac{1}{x^3}+\frac{1}{x^5}+\frac{1}{x^7}+\frac{1}{x^{11}}+...=\sum_{p\in primes}\frac{1}{x^p}$$ for $x\in\mathbb{R}$ and $|x|>1$ . And it seems, that for all $x\ge2$ the value of this sum is pretty close numerically to the following expression: $$S(x) \approx \big(\sqrt{2}-1\big)\exp\big[-2(\psi(x) + \gamma-1)\big]$$ where $\psi(x)$ is digamma function and $\gamma$ is Euler–Mascheroni constant. Is this just a coincidence or is there some explanation for this observation? And why does it work only for $x\ge 2$ ? EDIT: I'd like to show how I got my approximation. Because even though Nilotpal Sinha's answer shows that sharper estimate of $S(x)$ doesn't look like $e^{-2\psi(x)}$ , I think there's something more than just coincidence of two terms in the Laurent series as $x\to\infty$ (maybe just another coincedence?..) because numerical agreement is much better. So, if we calculate logarithm of the ratio of $\frac{S(n)}{
|
Your observation is a coincidence and is a consequence of the fact that the Laurent series expansion of $e^{-2\psi(x)}$ about the point $x=\infty$ is $$ e^{-2\psi(x)} = \frac{1}{x^2} + \frac{1}{x^3} + O\bigg(\frac{1}{x^{4}}\bigg) \tag 1 $$ and its first two terms have exponents $2$ and $3$ , which happen to be primes. If you replace $\psi(x)$ with any other function whose first two exponents are $2$ and $3$ you will get the same observation, so there is nothing special that $\psi(x)$ is doing here. A sharper estimate of $S(x)$ will help explain this coincidence further as shown below. Using the basic properties of primes we can get remarkably sharper estimates for $S(x)$ . Every prime $\ge 5$ is of the form $6k \pm 1$ . Sum up the geometric sequences $x^{-6k-1} + x^{-6k+1}$ for $k = 1,2,\ldots$ and add $x^{-2} + x^{-3}$ , and taking advantage of the fact that the density of primes among the first few numbers of this form is high we get $$ S(x) = \frac{x^7 + x^6 + x^4 + x^2 -x - 1}{x^3(x^6 - 1)} +
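Both claims, the Laurent expansion $(1)$ and the resulting numerical closeness to $S(x)$, can be checked directly; the digamma approximation below is my own shortcut via a central difference of `lgamma`, and the tolerances are my choices:

```python
from math import exp, lgamma

def psi(x, h=1e-5):
    # central-difference digamma via log-gamma (accurate to about h^2 here)
    return (lgamma(x + h) - lgamma(x - h)) / (2 * h)

primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]

def S(x):
    # the tail beyond p = 37 is negligible for x >= 2
    return sum(x ** (-p) for p in primes)

x = 10.0
lead = 1 / x**2 + 1 / x**3                 # first two Laurent terms
approx = exp(-2 * psi(x))

assert abs(approx - lead) < 2 / x**4       # e^{-2 psi} = 1/x^2 + 1/x^3 + O(1/x^4)
assert abs(S(x) - approx) / S(x) < 0.02    # hence the observed closeness
```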
|
|prime-numbers|power-series|euler-mascheroni-constant|digamma-function|
| 1
|
Is the metric topology determined by its convergent sequences?
|
I am aware that a first countable space (and thus any metric space) is completely determined by its convergent sequences and their limits, i.e., if $\tau_1$ and $\tau_2$ are two first countable topologies on a set $X$ such that $x_i\to c$ in $\tau_1$ iff $x_i\to c$ in $\tau_2$ , then $\tau_1 = \tau_2$ . However, this raises the following question: If two metrics on a space have the same convergent sequences, will they have the same limits as well (and thus the same topology)?
|
I believe this is true -- we can recover what the sequences converge to. Say $(a_n)$ is a sequence in $X$ that we know converges, but we don't know what it converges to . There will be a unique $x \in X$ so that the new sequence $(a_1, x, a_2, x, a_3, x, a_4, x, \ldots)$ is convergent (which we can detect), and this $x$ is necessarily the limit of the sequence $(a_n)$ . Of course, this process is not "computable" in any sense. I have no idea how one would find such an $x$ in practice (though maybe it's doable in case $X$ is compact...) but, in the abstract, this shows that the convergent sequences remember limits. I hope this helps ^_^
|
|sequences-and-series|general-topology|metric-spaces|
| 1
|
Probability of winning a circular ball game
|
Context: You and 4 other people are sitting in a circle. You are given a ball to start the game. Every second of this game, the person with the ball has three choices they can make. They can either pass the ball to the left, pass the ball to the right, or keep the ball (all with equal probability). This game goes on till someone keeps the ball. What is the probability that you are the person to end the game and keep the ball? I am refreshing my knowledge of probability, and I am stuck on this question. For reference, it is the 'pass the Quantguide. Working: Consider the following diagram for clarity. Let's define the random variable $W(S_i)$ to be the event that player $S_i$ keeps the ball and ends the game. Well, by symmetry we have that $\mathbb{P}(W(S_1)) = \mathbb{P}(W(S_2))$ , and similarly $\mathbb{P}(W(S_3)) = \mathbb{P}(W(S_4))$ . For simplicity, let us denote $W(S_1)$ and $W(S_2)$ by $N$ (neighbour), and $W(S_3), W(S_4)$ by $NN$ (next neighbour). So, we wish to find $\mathbb{P}(
|
You keep the ball after $k+1$ decisions iff the ball makes a closed walk of length $k$ among all the players (passing left or right, no keeping) and you finally keep it. Each such sequence of events happens with probability $(1/3)^{k+1}$ . It follows that the probability of you ultimately keeping the ball is $\frac13$ times the value at $\frac13$ for the generating function for the number of closed walks in a pentagon of given length $$1,0,2,0,6,2,20,14,70\dots$$ This is OEIS A054877 and its generating function is $$\frac15\left(\frac1{1-2x}+\frac{2(2+x)}{1+x-x^2}\right)$$ Thus the answer is $$\frac1{15}\left(\frac1{1-2/3}+\frac{2(2+1/3)}{1+1/3-1/9}\right)=\frac5{11}$$ More generally, if the passing probabilities to the left and right are both equal to $x$ , the probability of you keeping the ball simplifies to $$\frac{x^2+x-1}{x^2-x-1}$$
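The value $5/11$ can be cross-checked by diagonalizing: closed-walk counts in the pentagon decompose over the eigenvalues $2\cos(2\pi j/5)$ of the cycle graph, giving $p = \frac15\sum_j 1/(3-2\cos(2\pi j/5))$ (a sketch of that computation, not from the answer):

```python
from math import cos, pi

# p = (1/3) * sum_k (1/3)^k * w_k, with w_k = (1/5) * sum_j (2cos(2*pi*j/5))^k,
# which sums the geometric series per eigenvalue:
p = sum(1 / (3 - 2 * cos(2 * pi * j / 5)) for j in range(5)) / 5
assert abs(p - 5 / 11) < 1e-12

# also check the stated general formula (x^2+x-1)/(x^2-x-1) at x = 1/3
x = 1 / 3
assert abs((x * x + x - 1) / (x * x - x - 1) - 5 / 11) < 1e-12
```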
|
|probability|
| 0
|
How $|S(x_n, \xi_n) - S(x'_n, \xi'_n)|$ being arbitrary small implies existence of Stieltjes-Riemann Integral?
|
I am studying Multiplicative number theory I: Classical theory by Hugh L. Montgomery, Robert C. Vaughan ( E-book ). In Appendix A, Theorem A.1. states that $I = \int f dg$ exists if $f$ is continuous and $g$ is of bounded variation. Let $a=x_0 \le x_1 \le \dots \le x_N=b$ be a partition of $[a,b]$ , and $S(x_n, \xi_n) = \sum_1^N f(\xi_n) (g(x_n)-g(x_{n-1}))$ . The proof claims that for the existence of $I$ it is sufficient to prove that for every $\epsilon$ there is a $\delta$ such that $|I - S(x_n, \xi_n)| < \epsilon$ whenever the mesh of the partition is less than $\delta$ . It is clear up to this point. Then it is claimed that it is sufficient to prove that $|S(x_n, \xi_n) - S(x'_n, \xi'_n)| < \epsilon$ for two arbitrary partitions $x_n$ and $x'_n$ . I could follow all steps of the proof and I can see that $|S(x_n, \xi_n) - S(x'_n, \xi'_n)| < \epsilon$ is true. But why does $|S(x_n, \xi_n) - S(x'_n, \xi'_n)| < \epsilon$ being true imply $|I - S(x_n, \xi_n)| < \epsilon$ ??
|
Let me consider fixed $g$ , so the "Cauchy like" property for inequality $(A.2)$ can be formulated as follows: $\forall \varepsilon>0, \exists\delta >0,$ such that for all partitions $x'_n,x''_n$ with mesh $<\delta$ , holds $$|S(x'_n, \xi_n)-S(x''_n, \xi'_n)| < \varepsilon,$$ where $\xi_n, \xi'_n$ are any points from the corresponding intervals. Now, let's consider $\varepsilon_n=\frac{1}{n}, n\in \mathbb{N}$ and for any $n$ choose $\delta_n$ , for which $(A.2)$ holds. We may take $\delta_n$ monotone decreasing. Also, for any $n$ we can choose a partition $x_n$ with mesh $\lambda_{x_n} < \delta_n$ and take $\xi_n$ from this partition's intervals. Let's denote the Riemann–Stieltjes sum for the obtained values by $T_n$ . For exactness we can write $\lambda_{x_n}$ as $\lambda_{T_n}$ . For any given $\varepsilon>0$ we can find $N$ for which $\varepsilon_N < \varepsilon$ . For any $k,m>N$ we have $\lambda_{T_m} < \delta_N$ and $\lambda_{T_k} < \delta_N$ , so from $(A.2)$ holds $$|T_k - T_m| < \varepsilon_N < \varepsilon.$$ This gives that $T_n$ converges, and we can show that its limit $T
|
|real-analysis|calculus|complex-analysis|proof-explanation|stieltjes-integral|
| 1
|
Global solution of $y'=y^4-x^8$
|
I encountered a problem while working on a mathematical analysis exercise. The problem is as follows: we need to determine the initial value condition $y(x_0)=y_0$ for the ordinary differential equation (ODE) $y'=y^4-x^8$ in order to find a global solution, i.e., a solution defined for all $x$ in the set of real numbers $\mathbb{R}$ . Can someone provide me with assistance?
|
This ODE doesn't appear to have a closed-form general solution: neither Maple nor Wolfram Alpha find one. But it looks like the solution for $y(0) = 0$ (which is an odd function) is defined for all $x \in \mathbb R$ . In fact, it seems to be $-x^2 + 1/(2 x^5) + O(1/x^{12})$ as $x \to +\infty$ and $x^2 + 1/(2 x^5) + O(1/x^{12})$ as $x \to -\infty$ .
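A quick RK4 integration corroborates both the existence of the $y(0)=0$ solution and the asymptote $-x^2 + 1/(2x^5)$ (step size and tolerance are my choices, and this is numerical evidence, not a proof):

```python
def f(x, y):
    return y**4 - x**8

# classical RK4 from x = 0 to x = 2 with y(0) = 0
x, y, h = 0.0, 0.0, 0.001
for _ in range(2000):
    k1 = f(x, y)
    k2 = f(x + h/2, y + h*k1/2)
    k3 = f(x + h/2, y + h*k2/2)
    k4 = f(x + h, y + h*k3)
    y += h * (k1 + 2*k2 + 2*k3 + k4) / 6
    x += h

# compare with the claimed asymptote -x^2 + 1/(2x^5) at x = 2
assert abs(y - (-4 + 1/64)) < 0.1
```

The dynamics are strongly attracting toward this branch for $x>0$ (since $\partial_y(y^4) = 4y^3 < 0$ along it), which is why the forward integration is stable.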
|
|ordinary-differential-equations|
| 0
|
What does "Motions along translation axes induce the opposite orientations on the region they bound" mean?
|
I am reading a paper and they mention the following: "Two isometries of the hyperbolic plane are said to be co-parallel if they have disjoint axes and the motions along these axes induce the opposite orientations on the region they bound" I am very confused by what is meant by that (visually w.r.t the fixed points) in the upper-half plane. Consider two hyperbolic matrices $A,B$ and assume they have disjoint axes. Denote $a^+,a^-$ ( $b^+,b^-$ ) as attracting/repelling fixed points of $A$ (of $B$ ) respectively. In my head, what the above means is that either $b^+>b^-$ and $a^+>a^-$ or $b^+<b^-$ and $a^+<a^-$ . Visually, the translation axes "point" in the same direction when pointing towards the attracting point. However, doing some computations using the results in the paper, my understanding is wrong. Could anyone explain what is meant by the above? For example: $$A=\left( \begin{array}{cc} -1.01179 & -2.92967 \\ 2.38967 & 5.93101 \\ \end{array} \right),B=\left( \begin{array}{cc} -0.244863 &
|
Suppose that you have two oriented arcs on the boundary of a domain $D$ in the plane. Then the two arcs define the same orientation on $D$ if and only if, as you travel along these arcs in the direction of their orientation, the domain is to the same side from you. (I can add a formal definition if you wish.) Now, consider two examples. In the first example the two arcs define the same orientation on $D$ : As you travel along both arcs, the domain is to the left from you. In the second example, the arcs define opposite orientations on $D$ : As you travel along the red arc, the domain is to the left from you, while as you travel along the blue arc, the domain is to the right from you. Note that in both cases, the inequalities $a_-<a_+$ and $b_-<b_+$ are satisfied. Hence, your conjecture about orientation in terms of stated inequalities between $a_\pm, b_\pm$ is (in general) false. This probably explains why you are getting answers different from the ones in the paper you are reading. More precisely, your
|
|matrices|hyperbolic-geometry|
| 1
|
$x \mapsto x^TAx$ determines A for symmetric A
|
For some unknown $A \in \mathbb{R}^{n\times n}$ , if $A$ is symmetric, I think the function $f: \mathbb{R}^n \to \mathbb{R}$ , $x \mapsto x^{\top}Ax$ determines what $A$ is. I think this is true, and I should've known how to prove this, but right now I'm not very sure on how to proceed. One always has $x^{\top}Ax = \operatorname{tr}(x^{\top}Ax) =\langle x x^{\top},A \rangle$ . This is very useful in specifying the Hessian when the Hessian is symmetric, especially for functions where the domain is a matrix. Does anyone mind sharing a hint?
|
I saw you were only asking for a hint so I deleted my other answer. My hint is consider $x \mapsto x^TAx$ applied to simple linear combinations of standard basis vectors.
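Spelling the hint out (a spoiler of sorts): evaluating $f$ on $e_i$, $e_j$ and $e_i+e_j$ recovers $A$ by polarization, $A_{ij} = \big(f(e_i+e_j)-f(e_i)-f(e_j)\big)/2$. A sketch with hypothetical helper names:

```python
def recover_symmetric(f, n):
    """Recover a symmetric matrix A from x -> x^T A x via polarization."""
    def e(i, j=None):
        v = [0.0] * n
        v[i] += 1.0
        if j is not None:
            v[j] += 1.0            # e(i, i) is 2*e_i, which the formula handles
        return v
    return [[(f(e(i, j)) - f(e(i)) - f(e(j))) / 2 for j in range(n)]
            for i in range(n)]

A = [[1.0, 2.0, 0.5],
     [2.0, -3.0, 4.0],
     [0.5, 4.0, 7.0]]

def quad(x):
    return sum(x[i] * A[i][j] * x[j] for i in range(3) for j in range(3))

B = recover_symmetric(quad, 3)
assert all(abs(A[i][j] - B[i][j]) < 1e-12 for i in range(3) for j in range(3))
```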
|
|linear-algebra|matrices|
| 1
|
General method for solving problems like $17 \mid 2x + 3y \Longleftrightarrow 17 \mid 9x + 5y$
|
Note: This is not a duplicate of Understanding a proof that $17\mid (2x+3y)$ iff $17\mid(9x +5y)$ or Understanding a proof that $2x + 3y$ is divisible by $17$ iff $9x + 5y$ is divisible by $17$ . I was reading through Naoki Sato's notes on number theory. I am somewhat unsatisfied with the given solution to this problem: Example. Let $x$ and $y$ be integers. Prove that $2x + 3y$ is divisible by $17$ iff $9x + 5y$ is divisible by $17$ . Solution. $17 \mid 2x + 3y \Rightarrow 17 \mid 13(2x + 3y)$ , or $17 \mid 26x + 39y \Rightarrow 17 \mid 9x + 5y$ , and conversely, $17 \mid 9x + 5y \Rightarrow 17 \mid 4(9x + 5y)$ , or $17 \mid 36x + 20y \Rightarrow 17 \mid 2x + 3y$ . I do understand the solution but it seems like an unmotivated approach. How would one get the numbers $13$ and $4$ except for clever guessing? Is there a general method for solving such problems? That is, some theorem that trivializes such problems. I don't know much about modular arithmetic but I've noticed that $13$ and $4
|
Remember that if $17|2x+3y$ , this is the same as saying $2x+3y\equiv 0\mod(17)$ , and therefore $2x\equiv -3y\mod(17)$ . Solving for $x$ requires the multiplicative inverse of $2\mod(17)$ , which happens to be $9$ . Therefore $x\equiv -(3\cdot9)y= -27y\equiv -10y\equiv 7y\mod(17)$ Now we do the same for the other divisibility statement: $17|9x+5y\Longrightarrow 9x\equiv -5y\mod(17)$ . We already know that $2$ and $9$ are multiplicative inverses $\mod(17)$ , so this is equivalent to $x\equiv-10y\equiv 7y\mod(17)$ . Hang on, that's the same as what we got before! This means that assuming one of $2x+3y$ or $9x+5y$ to be divisible by $17$ leads to the other being divisible by it also, thus proving the claim. Addendum: I got carried away with my method and forgot to answer the question you actually asked. $4$ and $13$ come from solving either $2m\equiv 9\mod(17)$ or $3n\equiv 5\mod(17)$ . Solving the first requires the inverse of $2\mod(17)$ , which we already know to be $9$ ; therefore $2m
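Both the inverse bookkeeping and the full equivalence can be verified exhaustively over all residues mod $17$:

```python
# inverse bookkeeping: 9 is the inverse of 2, and the multipliers 13 and 4
# turn one linear form into the other
assert (2 * 9) % 17 == 1
assert (13 * 2) % 17 == 9 and (13 * 3) % 17 == 5     # 13*(2x+3y) = 9x+5y mod 17
assert (4 * 9) % 17 == 2 and (4 * 5) % 17 == 3       # 4*(9x+5y)  = 2x+3y mod 17

# the claimed equivalence itself
for x in range(17):
    for y in range(17):
        assert ((2*x + 3*y) % 17 == 0) == ((9*x + 5*y) % 17 == 0)
print("equivalence holds for all residues mod 17")
```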
|
|elementary-number-theory|modular-arithmetic|divisibility|
| 1
|
Probability of winning a circular ball game
|
Context: You and 4 other people are sitting in a circle. You are given a ball to start the game. Every second of this game, the person with the ball has three choices they can make. They can either pass the ball to the left, pass the ball to the right, or keep the ball (all with equal probability). This game goes on till someone keeps the ball. What is the probability that you are the person to end the game and keep the ball? I am refreshing my knowledge of probability, and I am stuck on this question. For reference, it is the 'pass the Quantguide. Working: Consider the following diagram for clarity. Let's define the random variable $W(S_i)$ to be the event that player $S_i$ keeps the ball and ends the game. Well, by symmetry we have that $\mathbb{P}(W(S_1)) = \mathbb{P}(W(S_2))$ , and similarly $\mathbb{P}(W(S_3)) = \mathbb{P}(W(S_4))$ . For simplicity, let us denote $W(S_1)$ and $W(S_2)$ by $N$ (neighbour), and $W(S_3), W(S_4)$ by $NN$ (next neighbour). So, we wish to find $\mathbb{P}(
|
Let $p_0$ be your probability of winning when you have the ball. Let $p_1$ be your probability of winning when one of your immediate neighbors has the ball. Let $p_2$ be your probability of winning when one of the remaining two players has the ball. Then \begin{align} p_0 &= \frac 13 + \frac 23 p_1 \\ p_1 &= \frac 13 p_0 + \frac 13 p_2 \\ p_2 &= \frac 13 p_1+\frac 13 p_2 \end{align} The third equation tells us $\frac 23 p_2= \frac 13 p_1$ so $p_1=2p_2$ . Plugging this into the second equation, we find that $\frac 53 p_2=\frac 13 p_0$ , so $p_0=5p_2$ . Finally, we use the first equation to see: $$5p_2=p_0=\frac 13+\frac 43 p_2 \Rightarrow \frac{11}{3} p_2 = \frac 13 \Rightarrow p_2= \frac {1}{11} \Rightarrow p_0=5p_2 = \frac {5}{11}.$$
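The three equations above can be checked numerically; a minimal sketch (variable names are mine) solving them as a linear system:

```python
import numpy as np

# p0 - (2/3) p1              = 1/3
# -(1/3) p0 + p1 - (1/3) p2  = 0
#          -(1/3) p1 + (2/3) p2 = 0
A = np.array([[1.0, -2 / 3, 0.0],
              [-1 / 3, 1.0, -1 / 3],
              [0.0, -1 / 3, 2 / 3]])
b = np.array([1 / 3, 0.0, 0.0])
p0, p1, p2 = np.linalg.solve(A, b)
print(p0, p1, p2)  # 5/11, 2/11, 1/11
```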
|
|probability|
| 1
|
Finding the minimum value of the function $27a^6+\frac{1}{a^2-b^2-c^2+2bc}$
|
When I put this function in the GeoGebra calculator suite, I am getting the minimum value 4, but when I tried to solve, the minimum value I'm getting is different. My attempt: $27a^6+\frac{1}{a^2-b^2-c^2+2bc}$ can be written as $27a^6+\frac{1}{a^2-(b-c)^2}$ , i.e. $27a^6+\frac{1}{(a+b-c)(a-b+c)}$ . Observing, we know that the minimum occurs at $b=c$ , so the function gets reduced to $27a^6+\frac{1}{a^2}$ . Using the AM-GM inequality, we get $\frac{27a^8+1}{2a^2}\ge\sqrt{27a^4}$ , or $27a^8+1\ge 6\sqrt{3}\,a^4$ , or $27a^8-6\sqrt{3}\,a^4+1\ge 0$ , or $\left(3\sqrt{3}\,a^4-1\right)^2\ge 0$ , or $a^4\ge\frac{1}{3\sqrt{3}}$ , ∴ $a^2\ge\frac{1}{\sqrt{3\sqrt{3}}}$ . Now, substituting the value of $a^2$ into the original equation, we get $27a^6+\frac{1}{a^2}=\left(3a^2\right)^3+\frac{1}{a^2}\ge\left(\frac{3}{\sqrt{3\sqrt{3}}}\right)^3+\sqrt{3\sqrt{3}}\ge 2\cdot 3^{3/4}\approx 4.55901411\ldots$ ∴ $27a^6+\frac{1}{a^2}\ge 4.6$ . This implies that the minimum value of the function is 4
|
AM-GM generally works best when you get a constant on the other side $$ {1\over 4}(27a^6+{1\over 3a^2}+{1\over 3a^2}+{1\over 3a^2})\ge\sqrt[4]{1} $$ So $$27a^6+{1\over a^2}\ge4$$
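A quick numeric sanity check (a sketch; grid and step size are arbitrary choices of mine) that $27a^6+1/a^2$ really bottoms out at $4$, attained where the four AM-GM terms are equal, i.e. $27a^6=\frac{1}{3a^2}$, giving $a=3^{-1/2}$:

```python
# f(a) = 27 a^6 + 1/a^2 for a > 0; AM-GM equality at 27 a^6 = 1/(3 a^2).
f = lambda a: 27 * a**6 + 1 / a**2
a_star = 3 ** -0.5
grid_min = min(f(0.05 + 0.001 * k) for k in range(2000))
print(f(a_star), grid_min)  # both ≈ 4.0
```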
|
|functions|maxima-minima|
| 0
|
verifying Ramanujan constant
|
The famous Ramanujan constant $ e^{\pi \sqrt{163}} $ is a near-integer; see the link here. I tried to calculate this number with Matlab and failed. Matlab cannot even deliver the first 9 digits after the decimal point, apparently because of the double-float precision: 2.625374126407683e+17. It seems that we need a customized algorithm. Could anyone give a hint?
|
Throwing this into Wolfram Alpha, my go-to for stuff like this, I get 2.62537412640768743999999999999250072597198185688879353856337... × 10^17 with a continued fraction of [262537412640768743; 1, 1333462407511, 1, 8, 1, 1, 5, 1, 4, 1, 7, 1, 1, 1, 9, 1, ...].
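One way to get past double precision, assuming the third-party mpmath library is available, is arbitrary-precision arithmetic:

```python
from mpmath import mp, exp, pi, sqrt

mp.dps = 40                      # 40 significant decimal digits
x = exp(pi * sqrt(163))
print(x)                         # 262537412640768743.99999999999925007...
```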
|
|computational-mathematics|machine-precision|
| 0
|
How to see that $T^*T$ is self-adjoint?
|
Definition: An operator $T$ on a Hilbert space $H$ is said to be nonnegative definite if it is self-adjoint and $\langle Tx,x \rangle \geq 0$ for all $x \in H$ . Now suppose $T$ is any element of $\mathcal B(H)$ ; then $T^*T$ is nonnegative definite because it is self-adjoint and $\langle T^*Tx,x \rangle = \|Tx\|^2$ . Question: How to see that $T^*T$ is self-adjoint? And how to see that $\langle T^*Tx,x \rangle = \|Tx\|^2$ ?
|
As others have mentioned above this can be done with minimal effort. Indeed $$(T^*T)^* = T^* (T^*)^* = T^* T,$$ which shows that $T^*T$ is self adjoint. Moreover, $$\langle T^*Tx, x\rangle = \langle Tx, Tx \rangle = \|Tx\|^2.$$
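A finite-dimensional illustration (a sketch; matrices on $\mathbb{C}^4$ stand in for operators, and `.conj().T` plays the role of $T^*$):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = T.conj().T @ T                       # T* T
x = rng.normal(size=4) + 1j * rng.normal(size=4)

print(np.allclose(A, A.conj().T))        # True: (T*T)* = T*T
print(np.isclose(np.vdot(x, A @ x).real,
                 np.linalg.norm(T @ x) ** 2))  # True: <T*Tx, x> = ||Tx||^2
```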
|
|functional-analysis|operator-theory|
| 1
|
Is the metric topology determined by its convergent sequences?
|
I am aware that a first countable space (and thus any metric space) is completely determined by its convergent sequences and their limits, i.e., if $\tau_1$ and $\tau_2$ are two first countable topologies on a set $X$ such that $x_i\to c$ in $\tau_1$ iff $x_i\to c$ in $\tau_2$ , then $\tau_1 = \tau_2$ . However, this raises the following question: if two metrics on a space have the same convergent sequences, will they have the same limits as well (and thus the same topology)?
|
This is just to expand a bit on HallaSurvivor's answer. It turns out that limits of convergent sequences can be encoded in sequences themselves: In any $T_1$ space, $a_i\to x$ iff the sequence $a_1, x, a_2, x, a_3, x, a_4, \ldots$ converges. Hence the topology of a first countable $T_1$ space is completely determined by its convergent sequences.
|
|sequences-and-series|general-topology|metric-spaces|
| 0
|
Covariant differentiation of the curvature operator
|
I am confused by what the Leibniz rule means here . In my understanding, Leibniz rule is only for connection on tensor field bundle in the following form $$\begin{align} \nabla_X\left(T(\omega^1,\dots,\omega^r,X_1,\dots, X_s)\right)&=(\nabla_XT)(\omega^1,\dots, \omega^r,X_1,\dots, X_s)\\ &+\sum_{i=1}^rT\left(\omega^1,\dots, \nabla_X(\omega^i),\dots, \omega^r,X_1,\dots, X_s\right)\\ &+ \sum_{j=1}^sT\left(\omega^1,\dots, \omega^r,X_1,\dots,\nabla_X(X_j),\dots, X_s\right). \end{align}$$ But for the one in the link, it does not treat the curvature operator as the (1,3) type tensor. I wonder how that formula is obtained. Edit: I understand that the curvature operator can be thought as a linear mapping over smooth functions from $\Gamma(TM)\times\Gamma(TM)\times\Gamma(TM)$ to $\Gamma(TM)$ . But the formula above for Leibniz rule only works for tensors; how can one obtain the one in the link for linear mapping from $\Gamma(TM)\times\Gamma(TM)\times\Gamma(TM)$ to $\Gamma(TM)$ .
|
A $(1,3)$ tensor can be thought of as a multi-linear function that takes three vector fields and returns a vector field. So one can think of the curvature tensor as $R(X,Y,Z)\rightarrow W$ or as $\tilde{R}(X,Y,Z,\omega) = \omega(R(X,Y,Z))$ . Another POV is that with $R$ we are only applying three of the four arguments. The second part follows from the fact that the formula you listed above is also valid for the case when we think of the curvature tensor as $R(X,Y,Z)\rightarrow W$ . In other words, $$ \nabla_U\left(R(X, Y, Z)\right)=(\nabla_UR)(X, Y, Z) + R(\nabla_U X, Y, Z) + R(X, \nabla_U Y, Z) + R( X, Y, \nabla_U Z) $$ Edit to address your comment. We can derive the formula above from the formula you originally included. First, for a covariant tensor $\omega$ , $$ \begin{align*} \nabla_U(\omega(X)) &= \nabla_U(\omega)(X) + \omega(\nabla_UX)\\ \omega(\nabla_UX) &= \nabla_U(\omega(X)) - \nabla_U(\omega)(X) \\ \end{align*} $$ For a (1,3) tensor, $T$ , using the identification we mentioned above
|
|differential-geometry|riemannian-geometry|
| 0
|
Interpretation of change in direction cosines of a variable line: Pythagorean theorem for small angles?
|
Consider a variable rotating line passing through a fixed point. The angle between two successive/adjacent positions of the line is a small angle $\delta \theta$ . If the change in the direction cosines of the line in these two positions is $\delta l$ , $\delta m$ , $\delta n$ respectively, prove that: $$(\delta \theta)^2=(\delta l)^2+(\delta m)^2+(\delta n)^2$$ This sort of resembles the formula for the length of a vector expressed in terms of the projections on 3 orthogonal axes. Does this above expression somehow result from the fact that we can treat angular vector variables in the same manner as all other kinds of vectors (by all other kinds of vectors, I mean displacement, velocity, acceleration, etc, and by angular vector variables, I mean angular displacement, angular velocity ( $\omega$ ), etc)? How can I interpret this expression? I am curious because I found this to be a surprisingly concise and elegant formula for describing small angles of a variable line in 3D. EDIT: I fo
|
Without loss of generality, we take the fixed point to be the origin. Let $L$ and $L'$ be two close lines in ${\mathbb R}^3$ passing through the origin with angle $\delta\theta$ between them. Let $P=L\cap S^2$ and $P'=L'\cap S^2$ be the intersection points of the two lines with the unit sphere $S^2$ . (I think these lines are oriented, so there is one intersection point as you travel from the origin in the positive direction.) Write $P=(x, y, z)$ and $P'=(x',y',z')$ . By definition, the direction cosines of $L$ are $l=x,m=y,n=z$ since $x=\vec{OP}\cdot e_1=\cos \angle(L,x-\text{axis})=l$ , where $e_1=(1, 0, 0)$ is the standard basis in the $x$ -direction. Similarly for the other two. So we indeed have $ P=(l, m, n),\ P'=(l',m',n'), $ and $$ \vec{PP'}=(l'-l,m'-m,n'-n)=(\delta l,\delta m, \delta n). $$ Note that $|\vec{PP'}|^2=\delta l^2 + \delta m^2 + \delta n^2$ by the Euclidean distance formula. Note that $\delta\theta$ is the arc-length of the great circle arc $\overset{\large\frown}{
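The identity is easy to probe numerically; a minimal sketch (the particular direction and the perpendicular vector are arbitrary choices of mine):

```python
import math

u = (1 / math.sqrt(3),) * 3                      # a unit direction (l, m, n)
p = (1 / math.sqrt(2), -1 / math.sqrt(2), 0.0)   # unit vector orthogonal to u
dtheta = 1e-4                                    # small rotation angle

# Rotate u by dtheta in the plane spanned by u and p.
v = tuple(math.cos(dtheta) * a + math.sin(dtheta) * b for a, b in zip(u, p))
diff = math.sqrt(sum((b - a) ** 2 for a, b in zip(u, v)))  # sqrt(dl^2 + dm^2 + dn^2)
print(abs(diff - dtheta))  # tiny: (delta theta)^2 ≈ dl^2 + dm^2 + dn^2
```

The residual is of order $(\delta\theta)^3/24$, exactly the chord-versus-arc error the small-angle argument neglects.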
|
|trigonometry|vectors|taylor-expansion|analytic-geometry|3d|
| 1
|
Converge/Divergence of Telescoping Series
|
$\require{cancel}$ I'm trying to find whether the series $\sum_{n=1}^{\infty} \left(\frac{1}{\ln(n+2)} - \frac{1}{\ln(n+1)}\right)$ diverges or converges. It is a telescoping series. My approach was to write down the few terms and find the patterns of which items are being removed in each term. Here's my attempt, \begin{align*} \sum_{n=1}^{\infty} \left(\frac{1}{\ln(n+2)} - \frac{1}{\ln(n+1)}\right) = \left(\frac{1}{\ln(3)} - \frac{1}{\ln(2)}\right) + \left(\frac{1}{\ln(4)} - \frac{1}{\ln(3)}\right) + \left(\frac{1}{\ln(5)} - \frac{1}{\ln(4)}\right) + ... + \left(\frac{1}{\ln(n+2)} - \frac{1}{\ln(n+1)}\right) \end{align*} As we can see, some of the terms are canceled. \begin{align*} \sum_{n=1}^{\infty} \left(\frac{1}{\ln(n+2)} - \frac{1}{\ln(n+1)}\right) = \left(\cancel{\frac{1}{\ln(3)}} - \frac{1}{\ln(2)}\right) \quad + \left(\cancel{\frac{1}{\ln(4)}} - \cancel{\frac{1}{\ln(3)}}\right) \quad + \left(\cancel{\frac{1}{\ln(5)}} - \cancel{\frac{1}{\ln(4)}}\right) \quad + \cdots \quad + \l
|
Yours is correct. The book has a typo. However, your attempt is not correctly written. What you (and your book) calculated is not $\sum_{n=1}^{\infty} \left(\frac{1}{\ln(n+2)} - \frac{1}{\ln(n+1)}\right)$ but $\sum_{k=1}^n\left(\frac{1}{\ln(k+2)} - \frac{1}{\ln(k+1)}\right)$ . Note that likewise, a telescoping series $\sum_{n=1}^\infty(a_{n+1}-a_n)$ converges if and only if the sequence $(a_n)$ has a finite limit $\ell$ , and the sum of the series is then $\ell-a_1$ , because $$\sum_{k=1}^n(a_{k+1}-a_k)= a_{n+1}-a_1.$$
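The partial-sum identity is easy to confirm numerically (a sketch; the cutoff `N` is arbitrary):

```python
import math

def partial(N):
    """Partial sum of the telescoping series up to n = N."""
    return sum(1 / math.log(n + 2) - 1 / math.log(n + 1) for n in range(1, N + 1))

N = 10_000
print(partial(N), 1 / math.log(N + 2) - 1 / math.log(2))  # equal by telescoping
print(-1 / math.log(2))  # the limit of the partial sums as N -> infinity
```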
|
|sequences-and-series|telescopic-series|
| 1
|
Solve $\int_0^\infty\frac{\ln(2e^x-1)}{e^x-1}dx$
|
In one of the final problems of the MIT integration bee for this year, $$I=\int_0^\infty\frac{\ln(2e^x-1)}{e^x-1}dx$$ was one of the given problems. My try was to let $u=e^x-1$ to get $$I=\int_0^\infty\frac{\ln(2u+1)}{u(u+1)}du=\int_0^\infty\frac{\ln(x+1)}{x(x+2)}dx$$ I don't know whether I would be right but I had a feeling this was a dead end. Turning the original integral into a geometric series doesn't seem promising either. How should I solve this? Note: These problems are solved in 5 minutes so please come up with a solution that can be done in such a time limit.
|
We are now evaluating the following integral: $$ \int_{0}^{\infty} \frac {\ln \left( 2 {e}^{x} - 1 \right)}{{e}^{x} - 1} \text {d} x = \, ? $$ My first observation is $$ 2 {e}^{x} - 1 = {e}^{2 x} - {\left( {e}^{x} - 1 \right)}^{2}. $$ I am not sure if this is relevant. Anyway, it looks like ${e}^{x} \mapsto x$ is a good substitution. So $x \mapsto \ln \left( x \right)$ , and $\text {d} x \mapsto \text {d} x / x$ . As a consequence, $$ \int_{0}^{\infty} \frac {\ln \left( 2 {e}^{x} - 1 \right)}{{e}^{x} - 1} \text {d} x = \int_{1}^{\infty} \frac {\ln \left( 2 x - 1 \right)}{x \left( x - 1 \right)} \text {d} x. $$ Next, we perform the substitution: $x \mapsto 1 / x$ , so $\text {d} x \mapsto - \text {d} x / {x}^{2}$ . Accordingly, $$ \int_{1}^{\infty} \frac {\ln \left( 2 x - 1 \right)}{x \left( x - 1 \right)} \text {d} x = \int_{0}^{1} \frac {\ln \left( 2 - x \right)}{1 - x} \text {d} x - \int_{0}^{1} \frac {\ln \left( x \right)}{1 - x} \text {d} x. $$ Let $x \mapsto 1 - x$ , so $$ \int_{0
|
|calculus|integration|contest-math|
| 0
|
Proving $e^{-\mu}\left(\left(\frac{e}{1+\delta}\right)^{(1+\delta)\mu}+\left(\frac{e}{1-\delta}\right)^{(1-\delta)\mu}\right) \le 2e^{-C\mu\delta^2}$
|
I'm trying to show that $$e^{-\mu}\left(\left(\frac{e}{1+\delta}\right)^{(1+\delta)\mu} + \left(\frac{e}{1-\delta}\right)^{(1-\delta)\mu}\right) \le 2e^{-C\mu\delta^2},$$ for an absolute constant $C > 0$ , where $\delta \in (0,1]$ and $\mu > 0$ . Using the idea in this answer , I wrote $$\log(1+\delta) = 0 + \delta \frac{1}{1+0} - \frac{\delta^2}{(1+\delta')^2},$$ for some $\delta' \in (0, \delta).$ This gives $(1+\delta')^2 \le 4$ , i.e., $- \frac{\delta^2}{(1+\delta')^2} \le -\frac{\delta^2}{4}$ . Plugging back in, we have the upper bound $$\log(1+\delta) = \delta - \frac{\delta^2}{(1+\delta')^2} \le \delta - \frac{\delta^2}{4}.$$ This doesn't immediately seem helpful, since we are looking for a bound of the form $$\log(1+\delta) \ge \frac{\delta + C\delta^2}{1+ \delta},$$ i.e., a lower bound. Could I get a hint? Thanks!
|
I will show that any sufficiently small $C>0$ will do. Start with $\dfrac{1-x^n}{1-x} =\sum_{k=0}^{n-1} x^k $ , so $\dfrac1{1-x} =\sum_{k=0}^{n-1} x^k+\dfrac{x^n}{1-x} $ or, putting $-x$ for $x$ , $\dfrac1{1+x} =\sum_{k=0}^{n-1} (-1)^kx^k+\dfrac{(-1)^nx^n}{1+x} $ . Integrating from $0$ to $x$ (assuming $0 < x$ , and replacing $x$ by $t$ in the integrand), $\begin{array}\\ \ln(1+x) &=\int_0^x \dfrac{dt}{1+t}\\ &=\sum_{k=0}^{n-1} (-1)^k\int_0^x t^kdt+(-1)^n\int_0^x \dfrac{t^n dt}{1+t}\\ &=\sum_{k=0}^{n-1} (-1)^k\dfrac{x^{k+1}}{k+1}+(-1)^n e_n(x)\\ \end{array} $ where $e_n(x) =\int_0^x \dfrac{t^n dt}{1+t} \gt 0$ and $e_n(x) \gt e_{n+1}(x) $ . We also have $\begin{array}\\ e_n(x) &=\int_0^x \dfrac{t^n dt}{1+t}\\ &\gt \dfrac1{1+x}\int_0^x t^n dt\\ &=\dfrac{x^{n+1}}{(n+1)(1+x)}\\ \end{array} $ Therefore, for $n=2$ , $\ln(1+x) \lt x-\dfrac{x^2}{2}+\dfrac{x^3}{3} $ and $\ln(1+x) \gt x-\dfrac{x^2}{2}+\dfrac{x^3}{3(1+x)} $ . From this last, we get the desired inequality if $$x-\dfrac{x^2}{2}+\dfrac{x^3}{3(1+x)} \ge \dfrac{x+
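The two $n=2$ bounds can be spot-checked on a grid (a sketch; `math.log1p` computes $\ln(1+x)$ accurately for small $x$):

```python
import math

lower = lambda x: x - x**2 / 2 + x**3 / (3 * (1 + x))
upper = lambda x: x - x**2 / 2 + x**3 / 3

xs = [k / 1000 for k in range(1, 1001)]          # x in (0, 1]
print(all(lower(x) < math.log1p(x) < upper(x) for x in xs))  # True
```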
|
|analysis|proof-explanation|
| 0
|
What shape does the intersection of two spheres yield? How do you find the center and radius of this circle?
|
Suppose I have two spheres of arbitrary size $S_1$ and $S_2$ and I intersect them so they overlap to some extent. What kind of shape would their intersection create? In other words, what does $S_1 \cap S_2$ look like? EDIT: Someone has suggested this answer which does seem relevant. https://math.stackexchange.com/a/1551800/197705 When two spheres $S_1$ and $S_2$ intersect, their intersection forms a circle, assuming they aren't tangent. The answerer explains how to obtain the center and radius for this circle intersection of the two spheres. However, to do this, they seem to rely on the distance from the center of the circle to each of the sphere centers, namely $d_1, d_2$ . How do we obtain $d_1$ and $d_2$ ? This step I don't get, and so the answer seems circular to me.
|
Note that the calculation of $r$ , the radius of the intersection circle, depends only on $d$ (and the sphere radii), not on $d_1, d_2$ . You can then use Pythagoras to find $d_1=\sqrt{r_1^2-r^2}$ and $d_2=\sqrt{r_2^2-r^2}$ .
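Putting the whole computation together (a sketch; the formula for $d_1$ follows from subtracting the two Pythagorean relations $r_1^2=r^2+d_1^2$ and $r_2^2=r^2+d_2^2$ with $d_1+d_2=d$, assuming the spheres genuinely overlap):

```python
import math

def sphere_intersection(c1, r1, c2, r2):
    """Center and radius of the circle where two overlapping spheres meet."""
    d = math.dist(c1, c2)
    d1 = (d * d + r1 * r1 - r2 * r2) / (2 * d)  # distance from c1 to the circle's plane
    r = math.sqrt(r1 * r1 - d1 * d1)            # circle radius
    center = tuple(a + d1 * (b - a) / d for a, b in zip(c1, c2))
    return center, r

# Two unit spheres with centers one unit apart:
print(sphere_intersection((0, 0, 0), 1.0, (1, 0, 0), 1.0))
# center (0.5, 0, 0), radius sqrt(3)/2 ≈ 0.866
```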
|
|spheres|
| 0
|
Three cafes,R S,and M take one step to go to one of these cafes to another. How many unique ways are there to start at R and end at M in seven steps?
|
There is an equilateral triangle with one cafe at each vertex. They are called R, S, and M. I tried to solve this problem but got stuck. Here is my work: After knowing that R and M are the endpoints, we can alternate between the 3 cafes for 5 steps or 6 times. I have two choices for each cafe. Hence, the answer is 2^6 or 64 . After some testing and writing down some possibilities, I saw that this theory was wrong. I then tried to use casework to solve this problem. Here are my cases: Case 1: Visit Cafe M once For this, it is easy to see that there is 1 way to visit Cafe M once, making it the last cafe to visit on our list. Case 2: Visit Cafe M twice For this case, we can do 7 choose 2 (using the formula for distributing n items into r groups) to find the number of ways to visit Cafe M twice, one being at the end. For this, we get 21. Case 3: Visit Cafe M thrice For this, I did 6 choose 2 (using the formula), and found it to be 15. Case 4: Visit Cafe M four times For this, I found 10 ways
|
Let $S(n)$ be the number of ways to start at $R$ , take $n$ steps, and end at $S$ . Define $R(n), M(n)$ similarly. At each step you can go anywhere except where you are so the recurrences are $$S(n)=R(n-1)+M(n-1)\\R(n)=S(n-1)+M(n-1)\\M(n)=R(n-1)+S(n-1)$$ By symmetry $S(n)=M(n)$ and the starting condition is $R(0)=1,S(0)=0,M(0)=0$ . A quick spreadsheet (copy the equations down) will give the answer. All the values are $2^n/3$ rounded one way or the other so the total is $2^n$ as you have two choices at each step.
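The spreadsheet reduces to a three-line loop (a sketch of the recurrences above, updated simultaneously):

```python
# R(n), S(n), M(n) = number of n-step walks starting at R that end at each cafe.
R, S, M = 1, 0, 0                  # R(0) = 1: we start at R
for _ in range(7):
    R, S, M = S + M, R + M, R + S  # one step of the recurrences
print(R, S, M, R + S + M)          # 42 43 43 128: M(7) = 43, total 2^7
```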
|
|combinatorics|
| 0
|
Covariant differentiation of the curvature operator
|
I am confused by what the Leibniz rule means here . In my understanding, Leibniz rule is only for connection on tensor field bundle in the following form $$\begin{align} \nabla_X\left(T(\omega^1,\dots,\omega^r,X_1,\dots, X_s)\right)&=(\nabla_XT)(\omega^1,\dots, \omega^r,X_1,\dots, X_s)\\ &+\sum_{i=1}^rT\left(\omega^1,\dots, \nabla_X(\omega^i),\dots, \omega^r,X_1,\dots, X_s\right)\\ &+ \sum_{j=1}^sT\left(\omega^1,\dots, \omega^r,X_1,\dots,\nabla_X(X_j),\dots, X_s\right). \end{align}$$ But for the one in the link, it does not treat the curvature operator as the (1,3) type tensor. I wonder how that formula is obtained. Edit: I understand that the curvature operator can be thought as a linear mapping over smooth functions from $\Gamma(TM)\times\Gamma(TM)\times\Gamma(TM)$ to $\Gamma(TM)$ . But the formula above for Leibniz rule only works for tensors; how can one obtain the one in the link for linear mapping from $\Gamma(TM)\times\Gamma(TM)\times\Gamma(TM)$ to $\Gamma(TM)$ .
|
Well, you’re going to need a definition of how $\nabla$ acts on such guys. And the definition is pretty much to enforce the desired Leibniz rule: \begin{align} (\nabla_{\xi} R)(X,Y)[Z]&:=\nabla_{\xi}\left(R(X,Y)[Z]\right)-R(\nabla_{\xi}X,Y)[Z]-R(X,\nabla_{\xi}Y)[Z]-R(X,Y)[\nabla_{\xi}Z]. \end{align} For clarity, let me denote the curvature operator as done here, $R(X,Y)[Z]$ , and let me call the associated $(1,3)$ tensor field $r$ , so that for all covector fields $\omega$ and vector fields $X,Y,Z$ , we have \begin{align} r(\omega,Z,X,Y):=\omega(R(X,Y)[Z]). \end{align} You can then check that with the above definition, $\nabla_{\xi}R$ ‘corresponds’ to $\nabla_{\xi}r$ . Notice that the way I presented things, you first need a definition for how $\nabla$ acts on $R$ ; only then can you ask how it is related to the (‘originally defined’) action of $\nabla$ on $r$ (and notice that the same notation $\nabla$ is being used for multiple different (though related) purposes). Alternatively, you
|
|differential-geometry|riemannian-geometry|
| 1
|
4 elements are named x,y,z,w now how to name 8 elements
|
I have a vector of four elements. The elements are named x, y, z, w : Vec4(x, y, z, w) Now I have another vector of eight elements: Vec8(x, y, z, w, ?, ?, ?, ?) Now how do you name the additional elements? What's the best practice for naming?
|
I use this, $x=(x_1, \cdots , x_n)$ . Much easier to work with than $(x,y,z,w)$ .
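In code the same idea becomes indexed components instead of letter names; a minimal sketch (the class and field names are my own invention):

```python
class Vec:
    """n-dimensional vector with indexed components x[0], ..., x[n-1]."""
    def __init__(self, *components):
        self.x = list(components)

# Scales to any dimension without inventing names past x, y, z, w:
v = Vec(1, 2, 3, 4, 5, 6, 7, 8)
print(len(v.x), v.x[4])  # 8 5
```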
|
|linear-algebra|matrices|vectors|terminology|
| 1
|
One-point compactification and embedding in a cube
|
I am currently reading Folland's Real Analysis. On page 145 of his book, he claims the following: Claim : If $X$ is a noncompact, locally compact Hausdorff space, then the closure of the image of the embedding $e:X\hookrightarrow I^\mathcal F$ associated to $\mathcal F=C_c(X)\cap C(X,I)$ is the one-point compactification of $X$ . A few remarks: The "associated embedding" refers to the map $X\to I^\mathcal F$ obtained by mapping $X\ni x\mapsto \hat x\in I^{\mathcal F}$ , where $\hat x$ is given by $\hat x(f)=f(x)$ for $f\in \mathcal F$ . $C_c(X)$ is the set of all continuous, compactly supported, complex valued functions on $X$ . $C(X,I)$ is the set of all continuous functions from $X$ to $I=[0,1]$ . I understand that this is indeed an embedding, because Urysohn's lemma guarantees that $\mathcal F$ separates points and closed sets and we know Theorem 1 below. I also understand that $\overline{e(X)}\setminus X$ contains at least one point because $\overline {e(X)}$ is a compact space con
|
I can offer a "direct" proof that $\overline{e(X)} \backslash e(X)$ consists of just the point all of whose coordinates are zero, though I would not say the proof is short. Now I need to prove some lemmas: Lemma 1: If $w \neq x$ are points in $X$ , then $e(w)$ and $e(x)$ differ in at least two coordinates. Proof : By Urysohn's lemma for LCH spaces (and since a single point is compact), there is a function $f\in \mathcal{F}$ such that $f(w)=1$ and $f(x)=0$ . Similarly, there is a function $g\in \mathcal{F}$ such that $g(x)=1$ and $g(w)=0$ . Then $e(w)$ and $e(x)$ differ in the coordinates associated with $f$ and $g$ . $\square$ Now let $p\in I^{\mathcal{F}}\backslash e(X)$ be such that it has a non-zero coordinate. We show that $p$ cannot be in $\overline{e(X)}$ . For the purpose of notation, suppose $g\in \mathcal{F}$ is a coordinate such that $\pi_g (p) \neq 0$ . Lemma 2: There can only be at most one $x\in X$ such that $\pi_f (e(x)) = \pi_f(p)$ for all $f \in \mathcal{F}\backslash\{g\}$
|
|general-topology|compactification|
| 0
|