Prove that there is no sequence $a_n$ such that the set $[3,4)\cup [5,6]$ is the set of its partial limits
I assumed, for the sake of contradiction, that there is such a sequence and divided the problem into two cases. The first is when $a_n$ converges or diverges to $\infty$ or $-\infty$, and then it is clear why this fails; but when $a_n$ diverges without tending to $\pm\infty$ I didn't manage to get anywhere.
Suppose that there are subsequences converging to every point of $[3,4)$ . Then we can construct a subsequence converging to $4$ . We recursively construct a strictly increasing function $f(n)$ such that $4-\frac{1}{2^n}\leqslant a_{f(n)}\leqslant 4+\frac{1}{2^n}$ for all $n\geqslant 1$ , and then $a_{f(n)}\to 4$ . Let $f(0)=0$ . Now suppose that $f(0), f(1), \dots, f(n)$ are constructed. As $4-\frac{1}{2^{n+2}}\in [3,4)$ we have a subsequence $a_{g(m)}$ converging to $4-\frac{1}{2^{n+2}}$ , so that after some point $m_0$ all the $a_{g(m)}$ lie in $(4-\frac{1}{2^{n+1}}, 4+\frac{1}{2^{n+1}})$ . Now define $f(n+1)$ to be $g(m)$ , where $m$ is the least integer such that $m>m_0$ and $g(m)>f(n)$ . Then we have ensured that $f(n+1)>f(n)$ and that $4-\frac{1}{2^{n+1}}\leqslant a_{f(n+1)}\leqslant 4+\frac{1}{2^{n+1}}$ as required.
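The recursive selection in the answer can be simulated numerically. Below is a Python sketch; the list `vals` (enumerating the numbers $3+i/j$, which are dense in $[3,4)$) is my own stand-in for a sequence having every point of $[3,4)$ as a partial limit, and the loop mirrors the construction of $f(n)$:

```python
# Enumerate a sequence that is dense in [3, 4): every point of [3, 4) is a
# partial limit of vals, but 4 itself never occurs as a term.
vals = [3 + i / j for j in range(1, 200) for i in range(j)]

# Recursively pick f(1) < f(2) < ... with |vals[f(n)] - 4| <= 2**-n,
# mirroring the recursive construction in the answer.
f = []
prev = -1
for n in range(1, 8):
    tol = 2 ** -n
    idx = next(m for m in range(prev + 1, len(vals)) if abs(vals[m] - 4) <= tol)
    f.append(idx)
    prev = idx

print([round(vals[m], 5) for m in f])  # the selected subsequence creeps up to 4
```

The key point, as in the proof, is that each search for the next index always succeeds because terms arbitrarily close to $4$ keep recurring arbitrarily late in the sequence.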
|calculus|
0
Examples of statements that could be proven using strong induction without checking a special case?
After two days of contemplating, I believe that I've finally realized why strong induction doesn't require a base case as regular induction does. However, there's something I'd like to ask. To make the discussion easier, let me restate the Principle of Strong Induction below (in my analysis that follows, $k$ and $n$ will be used exactly as they are in this principle): Suppose that $P(n)$ is a statement about $n \in S$ ($S \subseteq \mathbb{N}$). If for every $n \in S$, $$\text{$P(k)$ holds for all $k \in S$ with $k < n$} \implies \text{$P(n)$ holds}, \qquad (*)$$ then $P(n)$ holds for all $n \in S$. As I revisited some past examples, I realized that despite the theoretical non-necessity of a base case in the proof, we always seem to need to check a few special cases in practice. The proof doesn't require a base case if $(*)$ is true. However, to prove that $P(n)$ holds, we always seem to need to use $P(k)$ in our argument, but $P(k)$ being true cannot be obtained for some $n$ (with $k < n$) or some choice of $k$.
You need to show: if $P(k)$ holds for all $k < n$, then $P(n)$ holds. For $n = 0$ there are no natural numbers $k < 0$, so the hypothesis is vacuously true, and you have to show: if True, then $P(0)$. In other words, you have to show $P(0)$ without the help that you would get from induction for larger $n$.
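A programming analogue of this point, using my own choice of statement $P(n)$ ("every natural number is a sum of distinct powers of two"): the inductive step recurses on a strictly smaller case, and $P(0)$ is the one case that must be handled with no induction hypothesis at all.

```python
# Strong-induction flavour: P(n) = "n is a sum of distinct powers of 2".
# The inductive step uses P(n - 2**k) for the largest 2**k <= n, i.e. some
# smaller case; P(0) is the single case proved with no induction hypothesis
# ("if True then P(0)"), exactly as described above.
def distinct_powers_of_two(n):
    if n == 0:
        return []                  # base case: the empty sum
    k = n.bit_length() - 1         # largest k with 2**k <= n
    return [1 << k] + distinct_powers_of_two(n - (1 << k))

for n in range(200):
    parts = distinct_powers_of_two(n)
    assert sum(parts) == n and len(set(parts)) == len(parts)
```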
|induction|
0
Prove $(0\le a\lt b)\Rightarrow (0\le\sqrt{a}\lt\sqrt{b})$. Why is the proof of $0\le\sqrt{a}$ not needed?
$(0\le a\lt b)\implies(0\le\sqrt{a}\lt\sqrt{b})$. In the following proof by contradiction, why is the proof of $0\le\sqrt{a}$ not needed? Proof: if $\sqrt{a}\ge\sqrt{b}$, then $a = (\sqrt{a})^2 \ge (\sqrt{b})^2 = b$, a contradiction.
$0 \le \sqrt{a}$ is true for all $a$ , by definition of $\sqrt{\cdot}$ . (This depends on your definition of $\sqrt{\cdot}$ , but I know it as a function $\mathbb{R}^{\geq 0} \to \mathbb{R}^{\geq 0}$ ; since you're dealing with inequalities anyway, that means we can't be in $\mathbb{C}$ .)
|proof-writing|proof-explanation|
1
Naive linear algebra in vector bundles
I would like to ask a fairly general question: how much of naïve linear algebra remains true for vector bundles? For example: If $F$ and $F'$ are two subbundles of $E$ such that the fiber $E_x$ is the internal direct sum of $F_x$ and $F'_x$ at each point $x$, is it true that $E$ is isomorphic to the direct sum of $F$ and $F'$? Is the quotient of $E\oplus F$ by $E$ isomorphic to $F$? Is the direct sum $E\oplus F$ the product/coproduct in the category of vector bundles over a fixed manifold? What if the base is not fixed? My intuition says that the answers to these questions should be yes, but intuition from linear algebra can fail for vector bundles. If they are true, could you provide a reference for the proofs or at least a general idea of the proofs?
Basically, quite a lot! And the reason is quite simple: as vector bundles are locally trivial, you can use a local frame to repeat most of the proofs you do in linear algebra using a basis. In particular the facts you guessed are true, and the proofs are not different from the ones you would do for vector spaces. In Milnor-Stasheff, "Characteristic Classes", you can find the proofs or some hints to check them. In the category of vector bundles over a fixed space, the Whitney sum is a biproduct, and kernel/cokernel exist for all morphisms of constant rank. This makes it an additive category. If you let the base space vary, things get slightly worse: if $E \to M$ and $F \to N$ are vector bundles, their Whitney sum is obtained by summing their pullbacks along the projections $$M \leftarrow M \times N \to N.$$ Therefore, their sum $E \oplus F$ is still a vector bundle, but now over $M \times N$. This still behaves as a product in this category, but it is no longer a coproduct, as there is no canonical inclusion of the summands.
|linear-algebra|algebraic-topology|differential-topology|vector-bundles|
0
Finding the expectation of the smallest value when $k$ numbers are taken from $1$ to $n$
Let $X$ be the smallest value obtained when $k$ numbers are randomly chosen from the set $\{1,\ldots,n\}$. Find $E[X]$ by interpreting $X$ as a negative hypergeometric random variable. This is Self-Test Exercise 7.7 of Sheldon Ross's A First Course in Probability. The way I approached this is to first consider (for example) $P(X = 1)$. Using the suggestion to take $X$ as a negative hypergeometric random variable, we have $$P(X = 1) = \frac{\binom{1}{1}\binom{n - 1}{k - 1}}{\binom{n}{k}}$$ My idea was that we have one element $1$ and the rest $k - 1$ elements from ... the remaining $n - 1$ elements. Similarly, for $P(X = 2)$, we have one way of getting the minimum element $2$, and $\binom{n - 2}{k - 1}$ ways of getting the other elements (well, except $1$). Similarly, we can deduce that $$P(X = i) = \frac{\binom{n - i}{k - 1}}{\binom{n}{k}}$$ As a result, I got $$E(X) = \sum_{i = 1}^n i \times \frac{\binom{n - i}{k - 1}}{\binom{n}{k}}$$ The answer in the book is $\frac{n + 1}{k + 1}$, which is just so much more elegant.
This admittedly doesn't answer the question about your method, but here is a completely different approach. Fix $k$ and write $E_n$ for the expectation for $n$ . We'll prove $E_n=\frac{n+1}{k+1}$ by induction on $n$ ; clearly it holds for $n=k$ . If $1$ was one of the integers chosen, which happens with probability $k/n$ , the minimum is $1$ . If $1$ was not one of the integers chosen, which happens with probability $1-k/n$ , then subtracting $1$ from each integer gives a uniformly random choice of $k$ integers from $1,\ldots,n-1$ . The expected value of the minimum of these is $E_{n-1}$ , and the minimum of the original set was $1$ more than the minimum of these. Therefore we get $$E_n=\frac{k}{n}+\frac{n-k}{n}(E_{n-1}+1)=1+\frac{n-k}{n}E_{n-1}.$$ Since $E_{n-1}=\frac{n}{k+1}$ by the induction hypothesis, this gives $E_n=\frac{n+1}{k+1}$ .
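The closed form can be checked directly against the question's formula for $P(X=i)$ with a short Python script (exact rational arithmetic; nothing is assumed beyond that formula):

```python
from math import comb
from fractions import Fraction

def expected_min(n, k):
    # E[X] from the question's formula P(X = i) = C(n-i, k-1) / C(n, k);
    # the minimum can be at most n - k + 1, so the sum stops there.
    return sum(Fraction(i * comb(n - i, k - 1), comb(n, k))
               for i in range(1, n - k + 2))

# agrees with the book's closed form (n+1)/(k+1) for every n, k
for n in range(1, 12):
    for k in range(1, n + 1):
        assert expected_min(n, k) == Fraction(n + 1, k + 1)
```

So the question's expression for $E(X)$ is correct; it just takes a combinatorial identity (or the inductive argument above) to collapse it to $\frac{n+1}{k+1}$.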
|probability|expected-value|
0
Is the Dot Product a form of dimensionality reduction?
I've been reviewing some concepts in linear algebra, specifically watching the Essence of Linear Algebra series by 3blue1brown on YouTube. As I was watching the Dot Product video I realized that the dot product is a form of dimensionality reduction/compression (specifically the portion of the video where he shows the projection of 2D points onto a 1D line). I'm curious whether this line of thinking makes sense and, more specifically, what it means in higher dimensions. For example, the dot product of two 100-dimensional vectors produces one value, which seems to imply a lot of data is encoded in this one number. Edit: Added link to video (time stamp 4:11)
Your view is correct to a good extent. When two vectors point in similar directions, their cosine similarity (the normalized dot product) approaches its maximum of $1$; in that case one of them is essentially enough to explain the variation, so keeping both is redundant. One can then pick, among all the feature vectors (each element being that feature's value for a given instance), the one that lies closest to the largest number of others, and let it represent its cluster of near-parallel neighbours. This is only the rudimentary idea, but the same idea underlies most classical feature-reduction approaches, each with its own refinements.
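A minimal Python sketch of the projection view (the direction vector `u` here is an arbitrary choice of mine): the dot product with a unit vector keeps exactly the one-dimensional coordinate along that direction and discards everything orthogonal to it.

```python
import math
import random

random.seed(0)
dim = 100
u = [1 / math.sqrt(dim)] * dim                  # a fixed unit "direction"
v = [random.gauss(0, 1) for _ in range(dim)]    # a 100-dimensional data vector

coord = sum(vi * ui for vi, ui in zip(v, u))    # the dot product: 100 numbers -> 1

# residual is the information the dot product throws away; it is orthogonal to u
residual = [vi - coord * ui for vi, ui in zip(v, u)]
assert abs(sum(ri * ui for ri, ui in zip(residual, u))) < 1e-9
```

So a single dot product does not "encode" the whole vector; it encodes one coordinate of it, and the other 99 dimensions' worth of information lives in the discarded residual.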
|linear-algebra|inner-products|
0
Understanding a measurability statement from Section 6.3 in Lehmann and Romano
This paragraph is at the end of Section 6.3 in the book Testing Statistical Hypotheses by Lehmann and Romano: In most applications, $M(x)$ is a measurable function taking on values in a Euclidean space and it is convenient to take $\mathcal B$ as the class of Borel sets. If $\phi(x) = \psi[M(x)]$ is then an arbitrary measurable function depending only on $M(x)$, it is not clear that $\psi(m)$ is necessarily $\mathcal B$-measurable. This measurability can be concluded if $\mathcal X$ is also Euclidean with $\mathcal A$ the class of Borel sets, and if the range of $M$ is a Borel set. We shall prove it here only under the additional assumption (which in applications is usually obvious, and which will not be verified explicitly in each case) that there exists a vector-valued Borel-measurable function $Y(x)$ such that $[M(x), Y(x)]$ maps $\mathcal X$ onto a Borel subset of the product space $\mathcal M\times \mathcal Y$, that this mapping is $1:1$, and that the inverse mapping is also Borel-measurable.
Let $h$ be the function given by $h(x)=(M(x),Y(x))$. By assumption, the range of $h$ is a Borel set, and $h$ has a measurable inverse $h^{-1}$ defined on that range; this is where we need that $h$ is one-to-one. For all $x$, we must have $h^{-1}(h(x))=x$. So we can define $\phi'$ as $\phi\circ h^{-1}$. Then $\phi(x)=\phi(h^{-1}(h(x)))=\phi'(h(x))=\phi'(M(x),Y(x))$. Also, $\phi'$ is measurable as a composition of measurable functions. That $\phi$ depends only on $M(x)$ means that $M(x)=M(x')$ implies $\phi(x)=\phi(x')$. Hence $\phi'(M(x),Y(x))=\phi(x)=\phi(x')=\phi'(M(x'),Y(x'))$ whenever $M(x)=M(x')$, and that means that $\phi'$ depends only on $m=M(x)$.
|measure-theory|statistics|proof-explanation|statistical-inference|measurable-functions|
1
Right and left stabilizers
Suppose that $A$ is a finite subset of a group $G$ . Let $L$ ( $R$ ) denote the set of all those group elements that fix $A$ from the left (right): $$L:=\{g\in G\colon Ag=A\},\quad R:=\{g\in G\colon gA=A\}.$$ It is easily seen that $L$ and $R$ are subgroups of $G$ , and also that $L=R$ in the commutative case. Is it true that $|L|=|R|$ in the general case?
No, it's not true in general. Note that if $g\in L(A)$ , then we can partition $A$ into sets of the form $\{a,ga,g^2a,\ldots\}$ . If we have just one such set, then $L(A)=\langle g\rangle$ , and $R(A)=\langle a^{-1}ga\rangle$ . To find a counterexample, we therefore need at least two such subsets. Here's a simple situation, where we take $G=S_4$ , $g=(12)$ , and $A=\{g^r(123)\}\cup\{g^r(34)\}$ , so $$ A = \{(123),(23),(34),(12)(34)\}. $$ Now $g\in L(A)$ by construction, but we can quickly check that $R(A)$ is trivial.
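This counterexample is small enough to verify by brute force. The following Python sketch enumerates $S_4$ (permutations written $0$-based in one-line notation, composed as functions; the translation of the answer's cycles is mine) and computes both stabilizers of $A$, following the answer's usage where $L(A)$ is the left-multiplication stabilizer:

```python
from itertools import permutations

# Permutations of {0,1,2,3} as tuples p acting by x -> p[x];
# compose(p, q) is "p after q", i.e. the product pq acting on the left.
def compose(p, q):
    return tuple(p[q[x]] for x in range(4))

# A from the answer, translated to 0-based one-line notation:
a1 = (1, 2, 0, 3)   # (123): 1->2, 2->3, 3->1
a2 = (0, 2, 1, 3)   # (23)
a3 = (0, 1, 3, 2)   # (34)
a4 = (1, 0, 3, 2)   # (12)(34)
A = {a1, a2, a3, a4}

G = list(permutations(range(4)))
left  = [g for g in G if {compose(g, a) for a in A} == A]   # g A = A
right = [g for g in G if {compose(a, g) for a in A} == A]   # A g = A

print(len(left), len(right))   # the two stabilizers have different sizes
```

As claimed, left multiplication by $(12)$ stabilizes $A$ while the right stabilizer is trivial, so $|L|\neq|R|$.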
|abstract-algebra|group-theory|
1
Algorithm to compute Gamma function
The question is simple: I would like to implement the Gamma function in my calculator, written in C; however, I have not been able to find an easy way to programmatically compute an approximation to arbitrary precision. Is there a good algorithm to compute approximations of the Gamma function? Thanks!
You can use the Lanczos approximation to implement it. A C# implementation looks like:

using System;

class Program
{
    static void Main(string[] args)
    {
        // Test cases for the SpecialFunctionGamma method
        double[] testValues = { 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 5.0, 6.0, 7.0 };
        foreach (double value in testValues)
        {
            Console.WriteLine($"Gamma({value}) = {SpecialFunctionGamma(value)}");
        }
    }

    // Lanczos approximation (g = 7) for the Gamma function
    private static double SpecialFunctionGamma(double z)
    {
        double[] p = { 676.5203681218851, -1259.1392167224028, 771.32342877765313,
                       -176.61502916214059, 12.507343278686905, -0.13857109526572012,
                       9.9843695780195716e-6, 1.5056327351493116e-7 };
        if (z < 0.5)
        {
            // Reflection formula: Gamma(z) Gamma(1 - z) = pi / sin(pi z)
            return Math.PI / (Math.Sin(Math.PI * z) * SpecialFunctionGamma(1 - z));
        }
        z -= 1;
        double x = 0.99999999999980993;
        for (int i = 0; i < p.Length; i++)
        {
            x += p[i] / (z + i + 1);
        }
        double t = z + p.Length - 0.5;   // z + g + 0.5 with g = 7
        return Math.Sqrt(2 * Math.PI) * Math.Pow(t, z + 0.5) * Math.Exp(-t) * x;
    }
}
|algorithms|numerical-methods|special-functions|gamma-function|
0
Is a solution to a minimax problem a saddle point?
Let $X\subset \mathbb{R}^n, Z\subset \mathbb{R}^m$ and $\phi:X\times Z\to \mathbb{R}$. We define a saddle point of $\phi$ as follows: Definition. $(x^*, z^*)\in X\times Z$ is said to be a saddle point if for any $x\in X, z\in Z$, we have $\phi(x^*, z)\leq \phi(x^*, z^*) \leq \phi(x,z^*)$. According to Convex Optimization by Bertsekas, the following theorem holds for a saddle point. Theorem. $(x^*,z^*)$ is a saddle point if and only if: (1) The minimax equality holds, i.e., $\sup_{z}\inf_{x}\phi(x,z)=\inf_{x}\sup_{z}\phi(x,z)$. (2) $x^*$ is a solution to $\textrm{minimize } \sup_{z}\phi(x,z)$ subject to $x\in X$. (3) $z^*$ is a solution to $\textrm{maximize } \inf_{x}\phi(x,z)$ subject to $z\in Z$. I was wondering if we can put (2) and (3) together as a minimax problem; that is, we want to prove the following. Conjecture. $(x^*,z^*)$ is a saddle point if and only if: (1) The minimax equality holds, i.e., $\sup_{z}\inf_{x}\phi(x,z)=\inf_{x}\sup_{z}\phi(x,z)$. (2) $(x^*,z^*)$ is a solution of the minimax problem, i.e., $\phi(x^*,z^*)$ attains the common value in (1).
If $\phi(x,z)=x^2-z^2$ on $X=Z=\mathbb R$, then this is false: the common minimax value is $0$, so any point in $\{ (a,\pm a) : a\in \mathbb R\}$ (where $\phi(a,\pm a)=0$) satisfies your condition, while only $(0,0)$ is a saddle point.
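A quick numerical check of this counterexample (a grid search; the grid itself is my own choice):

```python
# phi(x, z) = x^2 - z^2; minimax value is 0, attained all along z = +/- x
phi = lambda x, z: x * x - z * z
grid = [i * 0.05 - 2 for i in range(81)]   # points of [-2, 2]

def is_saddle(xs, zs):
    # saddle condition phi(xs, z) <= phi(xs, zs) <= phi(x, zs), tested on the grid
    v = phi(xs, zs)
    return (all(phi(xs, z) <= v + 1e-12 for z in grid)
            and all(v <= phi(x, zs) + 1e-12 for x in grid))

assert is_saddle(0.0, 0.0)
assert not is_saddle(1.0, 1.0)    # phi(1, 1) = 0 attains the minimax value, yet
assert not is_saddle(1.0, -1.0)   # neither point is a saddle point
```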
|optimization|
1
Convergence of summation of complex exponentials with alternating exponent
Related to my previous question, consider $$f(s)=\sum_{k=1}^{\infty} \exp(-s(-2)^k)$$ where $s\in\mathbb{C}$ is a complex number. According to Willie Wong's comment, $f(s)$ diverges when $\Re\{s\} \not = 0$, and my numerical approximation using Mathematica supports this claim (honestly, I couldn't prove it for myself). My main interest is the case $\Re\{s\} = 0$, and the partial sums suggest that $f(s)$ converges when $s$ satisfies the conditions $\Im\{s\}\lt 1, \Re\{s\} = 0, s \not = 0$. In the other case, i.e. $\Im\{s\}\ge 1, \Re\{s\} = 0$, Mathematica returns "Overflow occurred in computation". So the main question is: for which $s$ does $f(s)$ converge?
Among American college students, the following is sometimes called the "$n$-th term test": a necessary condition for a series $\sum a_n$ to converge is that $\lim a_n = 0$. A fortiori, if $\sum a_n$ converges then $\lim |a_n| = 0$ (here we allow $a_n$ to take complex values). Apply this to our series. Given $s\in \mathbb{C}$: If $\Re(s) > 0$, then as $k$ increases among the odd numbers, $|\exp(-s(-2)^k)|$ diverges to $+\infty$. If $\Re(s) < 0$, then as $k$ increases among the even numbers, $|\exp(-s(-2)^k)|$ diverges to $+\infty$. If $\Re(s) = 0$, then $|\exp(-s(-2)^k)| = 1$ for every $k$, as the argument of $\exp$ is purely imaginary. In every case the terms do not tend to $0$, hence $\sum \exp(-s(-2)^k)$ cannot converge for any $s\in \mathbb{C}$.
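A small numerical illustration of these cases (the particular values of $s$ below are arbitrary choices of mine):

```python
import cmath

def term(s, k):
    return cmath.exp(-s * (-2) ** k)

# Purely imaginary s: every term has modulus exactly 1, so terms never tend to 0.
s = 0.5j
assert all(abs(abs(term(s, k)) - 1.0) < 1e-9 for k in range(1, 20))

# Re(s) > 0: along odd k the modulus blows up.
s = 0.1 + 0.3j
assert abs(term(s, 9)) > 1e20   # |exp(0.1 * 2**9)| = e**51.2, astronomically large
```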
|real-analysis|calculus|complex-analysis|convergence-divergence|summation|
1
Construct ellipse with given foci such that it is tangent to a given circle
Given two points $F_1$ and $F_2$ and a circle centered at $r_0$ with radius $s$, I'd like to construct the ellipse with foci $F_1$ and $F_2$ that is tangent to the given circle. That is the question. My attempt: Let $r = [x, y]^T$ be the position vector of a point in the plane. To simplify the analysis, I'll introduce a new coordinate reference with its origin at the center of the ellipse. This is known, because the center of the ellipse is just the midpoint of the two foci. So let $ C = \dfrac{1}{2} (F_1 + F_2) $ and define the unit vector $ u_1 = \dfrac{ F_2 - F_1}{\| F_2 - F_1 \| } $, and let $u_2$ be a unit vector that is perpendicular to $u_1$. Now define the rotation matrix $ R = [u_1, u_2] $. By letting the $x'$ axis point along $u_1$ and the $y'$ axis point along $u_2$, then if $p' = (x',y')$ is the local coordinate of a (world) point $p$ with respect to this new frame, the two vectors are related by $ p = C + R p' $, so that $ p' = R^T (p - C) $. Using these new coordinates ...
Using the method developed in question statement, I've worked an example where $F_1 = (2,1), F_2 = (7, 5), r_0 = (5, 10), s = 3 $ . There were four solutions in total, 2 ellipses, and 2 hyperbolas. They are shown below.
|geometry|conic-sections|
0
What are $0$ and $1$, the elements of $\mathbb Z/2\mathbb Z$? How do they relate to $\mathbb Z$?
I often see the field $\mathbf{Z}/\mathbf{2Z}=\{0,1\}$. Without other indication we might see the elements of the field $\mathbf{Z}/\mathbf{2Z}$ as a subset of $\mathbf{Z}$. Operations such as $2+1=3=1$ are also given as examples: Source. I understand this field is the one defining the sum modulo 2 (and the Boolean addition, provided we associate False/True to $0$/$1$), and in $\mathbf{Z}$, $3$ modulo $2 = 1$, but I have some difficulty fully understanding these statements: $2+1=3=1$ seems incorrect, since the addition associated with the field $\mathbf{Z}/\mathbf{2Z}$ (the modulo/Boolean addition) is used on the elements $2$ and $1$, but $2$ is not in $\mathbf{Z}/\mathbf{2Z}$. $3=1$ seems to be used (approximately) to indicate there is a correspondence between $3 \in \mathbf{Z}$ and $1 \in \mathbf{Z}/\mathbf{2Z}$. On the other hand, my understanding is that the elements of the field $\mathbf{Z}/\mathbf{2Z}$, denoted $0$ and $1$, are actually classes of elements of the group $\mathbf{Z}$: $2+1$ should rather be written $[2]+[1]=[1]$.
$\mathbf{Z}/\mathbf{2Z}$ is a field whose elements are the two equivalence classes of integers modulo $2$. We take $0,1$ as representatives of the two classes given by the partition which splits the integers between even and odd: $0$ is in the class of all even integers, $1$ is in the class of all odd integers. Then you can check that addition and multiplication are well-defined (which they are), and that they satisfy all the field axioms. As you said (edit: credit to Mark S below), "2 is not in that set, but the equivalence class modulo 2, $[2]=[0]$, is in that set", which is what $\mathbf{Z}/\mathbf{2Z}$ represents: equivalence classes under mod $2$.
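A toy Python model may make the "elements are classes" point concrete; representing each class $[n]$ by its canonical representative $n \bmod 2$ is my own implementation choice:

```python
# A tiny model of Z/2Z where each element stands for an equivalence class:
# the class of n is stored via its canonical representative n % 2.
class Mod2:
    def __init__(self, n):
        self.rep = n % 2                    # canonical representative of [n]
    def __add__(self, other):
        return Mod2(self.rep + other.rep)   # well-defined on classes
    def __eq__(self, other):
        return self.rep == other.rep
    def __repr__(self):
        return f"[{self.rep}]"

assert Mod2(2) == Mod2(0)             # [2] = [0]: 2 and 0 name the same class
assert Mod2(2) + Mod2(1) == Mod2(1)   # [2] + [1] = [3] = [1]
```

So the shorthand $2+1=3=1$ is really class arithmetic $[2]+[1]=[3]=[1]$, with integers standing in for their classes.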
|abstract-algebra|terminology|definition|finite-fields|
0
Durrett Exercise 4.2.10 (i)
[Durrett Exercise 4.2.10 (i)] Let $(X_n,\mathcal{F}_n)$ be a supermartingale. Let $N_0=-1$ and for $j\geq 1$, let \begin{equation} \begin{aligned}N_{2j-1}&=\inf\{m>N_{2j-2}:X_m\leq a\},\\N_{2j}&=\inf\{m>N_{2j-1}:X_m\geq b\}.\end{aligned} \end{equation} Let $Y_n=1$ for $0\leq n<N_1$ and for $j\geq 1$ \begin{equation} Y_n=\begin{cases}(b/a)^{j-1}(X_n/a)&\text{for } N_{2j-1}\leq n<N_{2j},\\ (b/a)^{j}&\text{for } N_{2j}\leq n<N_{2j+1}.\end{cases} \end{equation} Use the switching principle and induction to show $Z_n^j=Y_{n\wedge N_j}$ is a supermartingale. [Switching Principle] If $X_n^1,X_n^2$ are supermartingales and $N$ is a stopping time with $X^1_N\geq X^2_N$, then \begin{equation} \begin{aligned}&X_n^11_{(N>n)}+X_n^21_{(N\leq n)}\ \text{is a supermartingale, and}\\&X_n^11_{(N\geq n)}+X_n^21_{(N<n)}\ \text{is a supermartingale.}\end{aligned} \end{equation} This post How to prove Dubin's inequality? gave a proof of this exercise. Following its route, \begin{aligned} Z^1_n&=1_{(N_1>n)}+\left(\frac{X_{N_1}}{a}\right)1_{(N_1\leq n)}\\ Z^2_n&=Z^1_n1_{(N_1>n)}+\left(\frac{X_n}{a}\right)1_{(N_1\leq n)}1_{(N_2>n)}+\left(\frac{b}{a}\right)1_{(N_2\leq n)} \end{aligned}
Note $X_N 1_{(N \leq n)} = X_{N\wedge n} 1_{(N \leq n)}$ and if $X_n$ is a supermartingale and $N$ is a stopping time, then $X_{N\wedge n}$ is a supermartingale.
|probability|probability-theory|probability-distributions|conditional-expectation|martingales|
1
Show that, for any $n\in \mathbb N^*$, we have $k_{n+1} - k_n \in \{0,1\}.$
The question: For any nonzero natural number $n$, we define the set $M_n = \{a \in \mathbb N \mid [\sqrt{n^2+2na}] = n + a - 1\}$, where $[x]$ represents the integer part of the real number $x$. We denote by $k_n$ the cardinality of the set $M_n$. a) Show that, for any $n \in \mathbb N^*$, we have $k_{n+1} - k_n \in \{0,1\}$. b) Prove that there exist infinitely many $n \in \mathbb N^*$ for which $k_n=k_{n+1}$. c) Show that there exist infinitely many $n \in \mathbb N^*$ for which $k_n+1=k_{n+1}$. The idea: $M_n =\{ a \in \mathbb N \mid [\sqrt{n^2+2na}] = n + a - 1\}$. From $n+a-1 \leq \sqrt{n^2+2na}$, squaring both sides gives $a^2-2a+1\leq 2n$, i.e. $(a-1)^2\leq 2n$. $M_{n+1} =\{ b \in \mathbb N \mid [\sqrt{(n+1)^2+2(n+1)b}] = n + b\}$. From $n+b \leq \sqrt{(n+1)^2+2(n+1)b}$, squaring both sides gives $b^2\leq 2n+1+2b$, i.e. $(b-1)^2 \leq 2n+2=2(n+1)$. We have to show that $b$ has the same number of solutions as $a$, or one more. I don't know what to do next. I hope one of you can help me! Thank you!
You've done, IMO, most of the work, with a few technical details missing and a relatively straightforward argument missing at the end to get at subproblem a) and then b) and c). As has been alluded to in the comments, the intermediate goal is to show that $$M_n = \{a \in \mathbb N\,|\, a > 0, (a-1)^2 \le 2n\}. \label{eq1} \tag{1}$$ You arrived at $(a-1)^2 \le 2n$ yourself, by starting with $n+a-1 \le \sqrt{n^2+2an}$ and then squaring. Note that squaring is not usually an equivalent transformation. It's only an implication (as you noted), so when you arrived at $(a-1)^2 \le 2n$ it only means that $M_n \subseteq \{a \in \mathbb N\,|\, (a-1)^2 \le 2n\}$, because you don't know if you got additional spurious solutions. The good news is that in this case the transformation is equivalent, because you can prove from the initial assumptions ($a\ge 0, n \ge 1$) that both $n+a-1 \ge 0$ and $n^2+2an \ge 0$. But we still only have $M_n \subseteq \{a \in \mathbb N\,|\, (a-1)^2 \le 2n\}$.
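Statement a) and the description $(1)$ of $M_n$ can be sanity-checked by brute force in Python (the search bound on $a$ is my own, chosen generously; `math.isqrt` gives the exact integer part of the square root):

```python
from math import isqrt

def k(n):
    # |M_n| by brute force: count a with floor(sqrt(n^2 + 2na)) = n + a - 1
    return sum(1 for a in range(1, 4 * n + 4)
               if isqrt(n * n + 2 * n * a) == n + a - 1)

ks = [k(n) for n in range(1, 80)]

# part a): consecutive cardinalities differ by 0 or 1
assert all(ks[i + 1] - ks[i] in (0, 1) for i in range(len(ks) - 1))

# and k_n matches the count coming from (a-1)^2 <= 2n, namely floor(sqrt(2n)) + 1
assert all(ks[n - 1] == isqrt(2 * n) + 1 for n in range(1, 80))
```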
|combinatorics|
1
Where's the mistake in this proof of Euler's reflection formula?
$$\frac{\sin(\pi x)}{\pi}=x\prod_{r=1}^\infty\left(1-\frac{x^2}{r^2}\right)$$ $$\Gamma(x)=\frac{1}{x}e^{-\gamma x}\prod_{r=1}^\infty\left(\frac{r}{x+r}\right)e^{\frac{x}{r}}$$ $$\Gamma(x) \Gamma(1-x)=\frac{1}{x(1-x)}e^{-\gamma}\prod_{r=1}^\infty\left(\frac{r^2}{(x+r)(1-x+r)}\right)e^{\frac{1}{r}}$$ Assume $\Gamma(x) \Gamma(1-x)=\frac{\pi}{\sin(\pi x)}$, then, for $x \notin \mathbb{Z}$: $$\frac{1}{x}\prod_{r=1}^\infty\left(\frac{r^2}{r^2-x^2}\right)=\frac{1}{x(1-x)}e^{-\gamma}\prod_{r=1}^\infty\left(\frac{r^2}{(x+r)(1-x+r)}\right)e^{\frac{1}{r}}$$ $$\prod_{r=1}^\infty\left(\frac{1}{r-x}\right)=\frac{1}{1-x}e^{-\gamma}\prod_{r=1}^\infty\left(\frac{1}{1-x+r}\right)e^{\frac{1}{r}}$$ $$\prod_{r=1}^\infty\left(\frac{1}{r-x}\right)=\frac{1}{1-x}e^{-\gamma}\prod_{r=1}^\infty e^{\frac{1}{r}}\prod_{r=2}^\infty\left(\frac{1}{r-x}\right)$$ $$1=e^{-\gamma}\prod_{r=1}^\infty e^{\frac{1}{r}}$$ Thus $-\gamma+\sum_{r=1}^\infty \frac{1}{r}=0$, which is ridiculous. Where did I go wrong?
Instead of using the Weierstrass definition of $\Gamma(1-z)$, rewrite $$\Gamma(1-z)=-z\Gamma(-z).$$ Rewriting it like that will cancel the $-z$ in the denominator, and once you multiply by $\Gamma(z)$ everything will cancel, leaving you with Euler's product for $\sin(\pi z)$ in the denominator and a $\pi$ in the numerator: $$\Gamma(z)\Gamma(1-z)=\Gamma(z)\Gamma(1+(-z))\\=-z\Gamma(z)\Gamma(-z)\\=\left(\frac{e^{-\gamma z}}{z}\prod_{k=1}^{\infty}(1+\frac zk)^{-1}e^{\frac zk}\right)\left(-z\cdot\frac {e^{\gamma z}}{-z}\prod_{k=1}^{\infty}(1-\frac zk)^{-1}e^{\frac {-z}{k}}\right)=\frac{1}{z}\prod_{k=1}^{\infty}\left((1+\frac zk)(1- \frac zk)\right)^{-1}=\frac 1z\prod_{k=1}^{\infty}\left(1-\frac {z^{2}}{k^{2}}\right)^{-1}\\=\frac{\pi}{\pi z}\prod_{k=1}^{\infty}\left(1-\frac {(\pi z)^{2}}{\pi^{2} k^{2}}\right)^{-1}=\frac {\pi}{\sin(\pi z)}$$
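As a sanity check (not a proof), the reflection formula can be tested numerically with the standard library:

```python
import math

# Numerical check of Gamma(z) * Gamma(1 - z) = pi / sin(pi * z) for 0 < z < 1
for z in [0.1, 0.25, 0.5, 0.73, 0.9]:
    lhs = math.gamma(z) * math.gamma(1 - z)
    rhs = math.pi / math.sin(math.pi * z)
    assert abs(lhs - rhs) < 1e-9 * abs(rhs)
```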
|gamma-function|
0
Relationship between the circular $i^n$ property and the fact that $e^{i\theta}$ traces the unit circle
That $e^{i\theta}$ traces a circle as $\theta$ ranges over $\mathbb{R}$ has been well discussed elsewhere. However, I was always curious about its relation to the following property of $i$: $i^0 = 1$, $i^1 = i$, $i^2 = -1$, $i^3 = -i$, $i^4 = 1$. These properties seem like they should be related, but I can't map them to each other.
Since $i=e^{i\pi/2}$ , you see that taking powers of $i$ is equivalent to making successive rotations of $\pi/2$ around the unit circle. This gives exactly the periodic sequence of $4$ elements you describe.
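One can check this numerically in Python:

```python
import cmath
import math

# i**n equals e^{i n pi/2}: each extra factor of i is a quarter turn.
for n in range(8):
    assert abs(1j ** n - cmath.exp(1j * n * math.pi / 2)) < 1e-12

# one full cycle of quarter turns: 1, i, -1, -i
assert [1j ** n for n in range(4)] == [1, 1j, -1, -1j]
```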
|complex-numbers|
1
Does the exponential of a function converge? What can we do with it?
I’m in middle school (6th grade) and I have a question related to the exponential function (the teacher couldn’t help me). We define the exponential as follows: $$ \exp(t) = \sum_{n = 0}^{\infty} \frac{t^n}{n!}$$ For whole number inputs of $t$, this corresponds to raising $e$ to that power. I think $\exp(t)$ converges for every possible input of $t$, but I haven’t found a proof yet. If anyone can provide a proof or disproof then that would be helpful. I would also like to know whether, for $t$ in some set $U$, $\exp(t)$ is also in $U$. However, these are secondary questions and you can ignore them (but do answer if you want). We also define $\exp(t)$ for complex numbers, and matrices. Today I thought about the exponential of a function: $\exp(f(t))$. For example, let’s take $\exp(t^2)$. This would result in something like: $\exp(t^2) = \sum_{n = 0}^{\infty} \frac{(t^2)^n}{n!} = 1 + t^2 + \frac{t^4}{2!} + \frac{t^6}{3!} + \cdots$ Which we could write as a regular polynomial, except that it never ends.
The important thing to understand is the convergence of the series for $\exp(t)$. We'd like to find out if the sum $$\sum_{n=0}^\infty \frac{t^n}{n!}$$ is defined for all real $t$ (I'm just going to focus on reals here). Dealing with infinite sums can be tricky. If you're unsure about whether a particular series converges, and start manipulating it as if it does, you can get weird (and sometimes wrong) answers. Finite sums, on the other hand, are much safer to deal with. So one approach for questions about convergence is to consider the sequence of "partial sums": in this case, define $$S_N(t)=\sum_{n=0}^N \frac{t^n}{n!}$$ This is defined for all real $t$ and for all non-negative whole numbers $N$ - that is, given a $t$ value and an $N$ value we could (in theory) work out this sum. (The motivation for doing this is that it's often a lot easier to work with sequences than series.) Let's start with positive $t$. The sequence $S_N$ is certainly increasing (it's formed by adding up positive terms).
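The partial sums $S_N(t)$ can also be computed directly; the code below (my own sketch, which builds each term from the previous one to avoid computing large factorials) shows them settling down at $\exp(t)$:

```python
import math

def S(N, t):
    # partial sum of the exponential series, sum_{n=0}^{N} t**n / n!
    total, term = 0.0, 1.0
    for n in range(N + 1):
        total += term
        term *= t / (n + 1)   # turns t**n/n! into t**(n+1)/(n+1)!
    return total

# The partial sums converge to exp(t), even for negative t
for t in [-5.0, -1.0, 0.5, 3.0, 10.0]:
    assert abs(S(80, t) - math.exp(t)) < 1e-9 * math.exp(abs(t))
```

The reason this works for every $t$ is visible in the update line: once $n+1 > |t|$, each new term is smaller than the last by a factor below $1$, so the tail of the series shrinks at least geometrically.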
|real-analysis|functions|convergence-divergence|exponential-function|
0
Interval arithmetic and functions of intervals
I'm playing with interval arithmetic and found a Wikipedia article: https://en.wikipedia.org/wiki/Interval_arithmetic Consider the positive interval $[a,b]$ ($a\gt 0$). Then what is $\sin([a,b])$? I tried to do it with the Taylor series: $$\sin(x)=x-\frac{x^3}{3!}+\frac{x^5}{5!}+\ldots$$ Since $[a,b]$ is positive, then $[a,b]^n=[a^n,b^n]$. Thus, $$\sin([a,b])=[a,b]-\frac{[a,b]^3}{3!}+\frac{[a,b]^5}{5!}+\ldots=\left[a-\frac{b^3}{3!}+\frac{a^5}{5!}+\ldots,b-\frac{a^3}{3!}+\frac{b^5}{5!}+\ldots\right]$$ This can be rewritten $$\sin([a,b])=\left[\sum_{n=0}^{\infty} \frac{a^{4n+1}}{(4n+1)!}-\frac{b^{4n+3}}{(4n+3)!},\sum_{n=0}^{\infty} \frac{b^{4n+1}}{(4n+1)!}-\frac{a^{4n+3}}{(4n+3)!}\right]=\frac{1}{2}\left[\sin(a)+\sin(b)+\sinh(a)-\sinh(b),\sin(a)+\sin(b)-\sinh(a)+\sinh(b)\right]$$ Is this used somewhere, is it useful, or is it just junk? Thank you.
I wrote the following Python code to apply cos and sin to an interval. It uses the following properties: if $a$ and $b$ are on the same increasing branch, then $\cos([a, b]) = [\cos a, \cos b]$; if $a$ and $b$ are on the same decreasing branch, then $\cos([a, b]) = [\cos b, \cos a]$; if $[a, b]$ contains $[n \pi, (n+1) \pi]$ then $\cos([a, b]) = [-1, +1]$; if $2 k \pi \leq a \leq (2 k + 1)\pi \leq b \leq (2k+2) \pi$ then $\cos([a, b]) = [-1, \max(\cos a, \cos b)]$; if $(2k-1)\pi \leq a \leq 2k \pi \leq b \leq (2k+1) \pi$ then $\cos([a,b]) = [\min(\cos a, \cos b), +1]$.

from math import pi, cos

twopi = 2 * pi
halfpi = pi / 2

def intervalcos(a, b):
    old_a = a
    a = old_a % twopi              # principal value of a in [0, 2*pi)
    b = b + (a - old_a)            # translate b accordingly with a
    if a <= pi:
        if b <= pi:                # same decreasing branch: \
            return [cos(b), cos(a)]
        elif b <= twopi:           # minimum at pi inside: \/
            return [-1, max(cos(a), cos(b))]
        else:                      # contains both a minimum and a maximum
            return [-1, 1]
    else:
        if b <= twopi:             # same increasing branch: /
            return [cos(a), cos(b)]
        elif b <= 3 * pi:          # unimodal with maximum: /\
            return [min(cos(a), cos(b)), 1]
        else:                      # contains both a maximum and a minimum
            return [-1, 1]

def intervalsin(a, b):
    # sin(x) = cos(pi/2 - x), and x in [a, b] iff pi/2 - x in [pi/2 - b, pi/2 - a]
    return intervalcos(halfpi - b, halfpi - a)
|algebra-precalculus|recreational-mathematics|
0
How to prove $SUS=\frac{(S|S)}{2}U$ when $S$ and $U$ are skew-hermitian matrices satisfying the following conditions?
We denote by $(A|B):=-{\rm tr}(AB)$ the inner product on skew-hermitian matrices (that this is an inner product is easy to prove from the definition). I'm stuck on the following problem from my reading on the Lie algebras of matrix groups; it is a matrix-calculation problem. Let $A\in{\rm SU}(2)$, so $A$ can be written as $$A=\begin{pmatrix}u&v\\-\overline{v}&\overline{u}\end{pmatrix}$$ for $u,v\in\mathbb{C}$ with $|u|^2+|v|^2=1$. So we express $A$ in the form $$A=\cos{\theta}\,I+S$$ where $S$ is skew-hermitian and ${\rm Re}\,u=\cos{\theta}$ with $\theta\in[0,\pi]$. Then we can calculate that $S^2=-\sin^2\theta\, I$ and $(S|S)=2\sin^2\theta$. We know that $\mathfrak{su}(2)$ is the set of all traceless skew-hermitian matrices $\subset M_2(\mathbb{C})$. So the question is: if $U \in \mathfrak{su}(2)$ with $(S|U)=0$, how to prove the result $SUS=\frac{(S|S)}{2}U$? The author in the book gives the hint that we can use the properties of the vector (cross) product, but it's hard for me to understand what he means. So I want to try a direct way to calculate it.
Lemma: if $U_1,U_2\in\mathfrak{su}(2)$ with $U_1=x_1\hat{H}+y_1\hat{E}+z_1\hat{F}$ and $U_2=x_2\hat{H}+y_2\hat{E}+z_2\hat{F}$ (where $\hat{H},\hat{E},\hat{F}$ form an orthonormal basis), then $$U_1U_2=-\frac{(U_1|U_2)}{2}I+\frac{1}{2}[U_1,U_2].$$ Since $(S|U)=0$, the lemma gives $SU=\frac{1}{2}[S,U]$, so $$SUS=\frac{1}{2}[S,U]S=\frac{1}{2}SUS-\frac{1}{2}US^2\ \Rightarrow\ SUS=-US^2=\sin^2\theta\, U=\frac{(S|S)}{2}U,$$ using $S^2=-\sin^2\theta\, I$ and $(S|S)=2\sin^2\theta$.
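The identity can be sanity-checked numerically. In the sketch below I pick a concrete orthogonal pair in $\mathfrak{su}(2)$ built from Pauli matrices (this particular choice of $S$ and $U$ is mine):

```python
import math

s = math.sin(0.7)                  # sin(theta) for an arbitrary theta
S = [[1j * s, 0], [0, -1j * s]]    # i*sin(theta)*sigma_z: S^2 = -sin^2(theta) I
U = [[0, 1j], [1j, 0]]             # i*sigma_x: a traceless skew-hermitian matrix

def mul(A, B):                     # 2x2 complex matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inner(A, B):                   # (A|B) = -tr(AB)
    P = mul(A, B)
    return -(P[0][0] + P[1][1])

assert abs(inner(S, U)) < 1e-12    # the orthogonality hypothesis (S|U) = 0

SUS = mul(mul(S, U), S)
c = inner(S, S) / 2                # the claimed scalar (S|S)/2 = sin^2(theta)
assert all(abs(SUS[i][j] - c * U[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```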
|linear-algebra|lie-algebras|matrix-calculus|hermitian-matrices|
1
Is the $\sigma$-algebra generated by $X$ a subset of the $\sigma$-algebra generated by $X^2$?
The question is about $\sigma$-algebras and asks to prove or give a counterexample to the statement that for $X$ a random variable, $\sigma(X) \subseteq \sigma(X^2)$. I think the statement is false. Suppose $X(\omega) = \omega$ (e.g. $N(0, 1)$). Then $(-\infty, a) \in \sigma(X)$, and $(-\infty, b)$ is a Borel subset, but $(-\infty, a) \not\in \sigma(X^2)$. This is because $(X^2)^{-1}((-\infty, b)) = (-\sqrt{b}, \sqrt{b})$, but you can't use unions and complements involving the sets $(-\sqrt{b}, \sqrt{b})$ to get $(-\infty, a)$. Is this right?
The opposite is true: more generally, $\sigma(f(X)) \subset \sigma(X)$ for any measurable function $f$ . To get a simple counter-example of your inclusion, take a non-degenerate random variable $X$ distributed on $\{-1,1\}$ . Then $X^2$ is degenerate: it is constant, equal to $1$ . And then $\sigma(X^2)$ is the trivial $\sigma$ -algebra, which of course does not contain the non-trivial $\sigma$ -algebra $\sigma(X)$ .
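On a finite sample space, $\sigma$-algebras correspond to partitions, which makes the counterexample easy to visualize in code (the four-point sample space is my own choice):

```python
# On a finite Omega, sigma(X) is generated by the partition of Omega into the
# level sets of X; sigma(f(X)) can only merge those blocks, never split them.
omega = [-2, -1, 1, 2]
X  = {w: w for w in omega}        # X(w) = w
X2 = {w: w * w for w in omega}    # X^2

def blocks(f):
    return {frozenset(w for w in omega if f[w] == v) for v in set(f.values())}

bX, bX2 = blocks(X), blocks(X2)

# every block of sigma(X) sits inside a block of sigma(X^2): X^2 is coarser ...
assert all(any(b <= b2 for b2 in bX2) for b in bX)
# ... but not conversely: {-1} is X-measurable yet not X^2-measurable
assert frozenset([-1]) in bX and frozenset([-1]) not in bX2
```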
|probability-theory|measure-theory|solution-verification|random-variables|
1
How do you get joint probability of two events conditional on the same event?
I want to get the joint probability of two independent events, both conditional on another event. What I know: Probability of A is 0.2375. Probability of B given A is .8. Probability of C given A is .5. Now, I know that B and C is a joint probability (B * C). I'm also fairly certain that the "un-conditional" probabilities of B and C separately (let's call them Bu and Cu) are (A * B) and (A * C). My question is, is the joint probability of B and C calculated as (A * Bu * Cu), or is it ((A * Bu) * (A * Cu))?
Are you told that events $B$ and $C$ are "independent", or are they "conditionally independent given $A$"? If $B$ and $C$ are independent, then, by definition: $\mathsf P(B, C)=\mathsf P(B)\,\mathsf P(C)$. However you do not appear to have enough information to find this, since you cannot use the Law of Total Probability without the conditional probability of $B$ given the complement of $A$, nor without that for $C$: $$\mathsf P(B)=\mathsf P(B\mid A)\,\mathsf P(A)+\mathsf P(B\mid A^\complement)\,(1-\mathsf P(A))\\\mathsf P(C)=\mathsf P(C\mid A)\,\mathsf P(A)+\mathsf P(C\mid A^\complement)\,(1-\mathsf P(A))$$ On the other hand, if $B$ and $C$ are conditionally independent given $A$, then you can at least find their joint conditional probability given $A$. This is $\mathsf P(B, C\mid A)=\mathsf P(B\mid A)\,\mathsf P(C\mid A)$. However you still cannot find their joint probability unconditionally, for the same reason: $$\mathsf P(B,C)=\mathsf P(B,C\mid A)\,\mathsf P(A)+\mathsf P(B,C\mid A^\complement)\,(1-\mathsf P(A))$$
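To make the arithmetic concrete, the sketch below plugs in the question's numbers plus made-up values for the two missing conditionals $\mathsf P(B\mid A^\complement)$ and $\mathsf P(C\mid A^\complement)$ (those two are pure placeholders, not from the question):

```python
P_A = 0.2375
P_B_given_A, P_C_given_A = 0.8, 0.5
P_B_given_notA, P_C_given_notA = 0.1, 0.2   # assumed for illustration only

# Joint conditional probability, assuming conditional independence given A:
P_BC_given_A = P_B_given_A * P_C_given_A            # 0.8 * 0.5 = 0.4

# Law of total probability for the marginals (needs the complement terms):
P_B = P_B_given_A * P_A + P_B_given_notA * (1 - P_A)
P_C = P_C_given_A * P_A + P_C_given_notA * (1 - P_A)

print(P_BC_given_A, round(P_B, 5), round(P_C, 5))
```

Note that neither of the question's two candidate formulas appears: the unconditional marginals are not simply $\mathsf P(A)\mathsf P(B\mid A)$, because that ignores the ways $B$ can occur when $A$ does not.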
|probability|conditional-probability|
0
The derivative of an unknown function is additive?
Assume that we have a function: $$ y=f\left(x,z(x)\right) $$ We take the derivative of this function with respect to $x$ , and we get: $$ \frac{\partial y}{\partial x}=\frac{\partial f}{\partial x}+\frac{\partial f}{\partial z}\times\frac{\partial z}{\partial x} $$ My question is this: while $f\left(\cdot\right)$ is an unknown function, the derivative on the RHS with respect to $x$ enters additively (i.e. the derivative with respect to $x$ is added to that with respect to $z$ ). Why is this additive, and not, say, multiplicative? Will this hold for any function $f$ ? Is this a theorem/rule? Thanks!
This is the chain rule. Because everything depends on $x$ , $y$ is a smooth function $\mathbb R\rightarrow \mathbb R$ , so the Jacobian of $y$ is a real number. We can write $y$ as the composition of two functions $F:\mathbb R\rightarrow \mathbb R^2$ , given by $x\mapsto (x,z(x))$ , and $G:\mathbb R^2\rightarrow \mathbb R$ given by $(x,z)\mapsto f(x,z)$ . Then, the Jacobian of $y$ is given by the chain rule: $$D(G\circ F)(x)=DG(F(x))\cdot DF(x)$$ So, we see that $DG$ is just the gradient $[\partial f/\partial x,\partial f/\partial z]$ , while: $$DF=\begin{pmatrix} 1\\ \partial z/\partial x \end{pmatrix}$$ Multiplying the two matrices gives: \begin{align} \frac{dy}{dx}=\frac{\partial f}{\partial x}+\frac{\partial f}{\partial z}\cdot\frac{\partial z}{\partial x} \end{align} The additivity is thus not an assumption about $f$ : it is what the product of the $1\times 2$ gradient with the $2\times 1$ Jacobian of $F$ produces, so it holds for any differentiable $f$ .
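To make this concrete, here is a quick numerical sanity check of the chain-rule formula; the particular choices $f(x,z)=xz^2$ and $z(x)=\sin x$ are arbitrary sample functions, not from the question.

```python
import math

# Sample (arbitrary) choices: f(x, z) = x * z^2 and z(x) = sin(x).
def f(x, z):
    return x * z ** 2

def z(x):
    return math.sin(x)

def y(x):
    return f(x, z(x))

x0, h = 0.7, 1e-6

# Total derivative dy/dx via a central finite difference...
dydx_numeric = (y(x0 + h) - y(x0 - h)) / (2 * h)

# ...versus the chain rule dy/dx = df/dx + (df/dz) * (dz/dx).
df_dx = z(x0) ** 2          # partial f / partial x = z^2
df_dz = 2 * x0 * z(x0)      # partial f / partial z = 2xz
dz_dx = math.cos(x0)        # z'(x) = cos(x)
dydx_chain = df_dx + df_dz * dz_dx

assert abs(dydx_numeric - dydx_chain) < 1e-6
```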
|calculus|
1
Understanding a step in a functional equation
I was trying to solve a functional equation and I while I was pretty close to the answer there is a remaining case that has stumped me for a while now. Here is the problem Determine all functions $f: \mathbb{N}^* \to \mathbb{N}^*$ satisfying $$ \frac{f(x)+y}{x+f(y)} + \frac{f(x)y}{xf(y)} = \frac{2(x+y)}{f(x+y)}, \qquad \forall x,y \in \mathbb{N}^*. $$ It's easy to see that the function $f(x)=x$ satisfies the condition, but then remains the issue of proving that it is the only function that satisfies the condition. I was able to prove that $f{(2x)}=2x$ so naturally I then considered the odd numbers, however I ended up with two very nasty expressions, if we let $x=1$ and $y=2n$ we end up with this $$ \frac{2(1+2n)^2}{a(2+2n)}=f(2n+1)$$ Where $a=f(1)$ Or if we let $x=2n$ and $y=1$ Then we end up with: $$ \frac{a(2n+1)(2n+a)}{a(n+1)+n}=f(2n+1)$$ (In fact the intended solution uses the second substitution but I couldn't understand the rest of the explanation). Since we have a fraction the d
The observation, that the right hand side of the functional equation is invariant under exchanging $x$ with $y$ , leads to the "flipped" equation $$\frac{f(y)+x}{y+f(x)} + \frac{f(y)x}{yf(x)} = \frac{2(x+y)}{f(x+y)}.$$ Now one can realize that each summand on the left side here is exactly the multiplicative inverse of the same summand in the original equation. That means, if we set $$s_1=\frac{f(y)+x}{y+f(x)},\; s_2=\frac{f(y)x}{yf(x)},$$ we now know that $s_1+s_2 = \frac1{s_1}+\frac1{s_2}$ . Multiplication with $s_1s_2>0$ leads to $s_1s_2(s_1+s_2)=s_2+s_1=s_1+s_2$ and since $s_1+s_2>0$ we get $s_1s_2=1$ , which leads to $$ \frac{f(y)+x}{y+f(x)} = \frac{f(x)y}{xf(y)}.$$ Using $x=2n$ and the already known $f(2n)=2n$ , we get $$ \frac{f(y)+2n}{y+2n} = \frac{2ny}{2nf(y)} = \frac{y}{f(y)}.$$ Now, for a given $y$ , in the above equation the left hand side increases when $f(y)$ increases, but the right hand side decreases when $f(y)$ increases. That means there can be at most one possible value of $f(y)$ satisfying the equation; since $f(y)=y$ visibly satisfies it, we conclude $f(y)=y$ for all $y$ .
|functions|solution-verification|functional-equations|integers|natural-numbers|
0
Matrix $A \in \mathbb{R}^{17, 17}$ satisfies condition $A^2 = 0$. Prove that matrix $A + A^T$ is uninvertible (singular) matrix.
I know that a matrix is non-invertible (singular) when its determinant equals zero. So I used this: we know that det $(A \cdot A) = 0$ and then det $A = 0$ . Then we know that det $(A) =$ det $(A^T) = 0$ . But I doubt whether from everything I described above it follows that det $(A + A^T) = 0$ .
Let $17=2n+1$ , so $n=8$ . If $A^2=0$ , then $\operatorname{im}(A)\subset \ker(A)$ . As the dimension of the space is $2n+1$ , rank-nullity gives $\operatorname{rank}(A)+\dim(\ker A)=2n+1$ with $\operatorname{rank}(A)\le\dim(\ker A)$ , hence $\dim(\ker A) \geq n+1$ . But $\dim \ker(A)= \dim \ker(A^T)$ , so both $\ker A$ and $\ker (A^T)$ have dimension $>n$ , and therefore the intersection $\ker(A) \cap \ker (A^T)$ has dimension at least $2(n+1)-(2n+1)=1$ . Any nonzero vector in this intersection is killed by $A+A^T$ , so $A+A^T$ has a non-trivial kernel and is singular.
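A quick numerical illustration (my own construction, not from the post): build a $17\times 17$ matrix with $A^2=0$ by sending the first eight coordinates into the last eight, and check that $A+A^T$ is singular.

```python
import numpy as np

rng = np.random.default_rng(0)

# A 17x17 matrix with A^2 = 0: the only nonzero block maps the first
# eight coordinates into the last eight, so applying A twice gives 0.
A = np.zeros((17, 17))
A[9:, :8] = rng.standard_normal((8, 8))
assert np.allclose(A @ A, 0)

# A + A^T must then be singular (here row/column 8 is identically zero).
det = np.linalg.det(A + A.T)
assert abs(det) < 1e-6
```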
|linear-algebra|matrices|determinant|
1
$p$-integrability of a certain Bessel function
I read the following statement in a book: $$ \frac{J_{\frac{n-2}{2}}(2\pi r)}{r^{\frac{n-2}{2}}}\text{ is in }L^p\text{ if and only if }p>\frac{2n}{n-1}\text{ ;} $$ where $n\geq3$ , $r$ is non-negative real number. Let's call the above fraction the function $K$ . I have proved that if $p>\frac{2n}{n-1}$ then $K\in L^p$ . But I am stuck on the reverse side. Particularly, if $p\leq\frac{2n}{n-1}$ then how one can rigorously show that $K\notin L^p$ ? I know that one must use the following asymptotic expansion of the Bessel function : $$ J_{\nu}(t)=O(t^{-\frac{1}{2}})\text{ as }t\to\infty\text{ ; where }\nu,t\geq0. $$ But the above asymptotic expansion just gives an upper bound on our $K$ . If one would have an appropriate lower bound on $K$ then, perhaps, one could do something to prove that $K\notin L^p$ . Thanks in advance.
The statement that you wrote is not exactly incorrect, although one should state it more precisely as follows: Let $d$ be the dimension of the real Euclidean space $\mathbb{R}^d$ with $d\geq3$ . Then $$\text{The function }K(x)=\frac{J_{\frac{d-2}{2}}(2\pi|x|)}{|x|^{\frac{d-2}{2}}}\text{ (where }x\in\mathbb{R}^d\text{) is in }L^p(\mathbb{R}^d)\text{ if and only if }p>\frac{2d}{d-1}. $$ Proof : First of all, it is not so hard to see that the function $K$ on $\mathbb{R}^d$ is smooth and bounded. Since $J_{\nu}(r)=O(r^{-\frac{1}{2}})$ as $r\to\infty$ (where $\nu,r\geq0$ ), it is easy to see that $K\in L^p(\mathbb{R}^d)$ for $p>\frac{2d}{d-1}$ . Now we recall the following $\textit{oscillatory}$ behaviour of the Bessel function : $$ J_{\nu}(r)\sim\sqrt{\frac{2}{\pi r}}\cos\left(r-\frac{\pi\nu}{2}-\frac{\pi}{4}\right)\text{ as }r\to\infty\text{ ;} $$ where $r,\nu\geq0$ . So, in our case, we get $$ K(x)=\frac{J_{\frac{d-2}{2}}(2\pi|x|)}{|x|^{\frac{d-2}{2}}}\sim\frac{1}{\pi|x|^{\frac{d-1}{2}}}\cos\left(2\pi|x|-\frac{\pi(d-2)}{4}-\frac{\pi}{4}\right)\text{ as }|x|\to\infty. $$ Integrating $|K|^p$ in polar coordinates, the factor $|\cos|^p$ has a positive average over each period, so $\int_{\mathbb{R}^d}|K|^p\,dx$ behaves at infinity like $\int^{\infty} r^{\,d-1-\frac{p(d-1)}{2}}\,dr$ , which diverges exactly when $d-1-\frac{p(d-1)}{2}\geq -1$ , i.e. when $p\leq\frac{2d}{d-1}$ . Hence $K\notin L^p(\mathbb{R}^d)$ for $p\leq\frac{2d}{d-1}$ , completing the proof.
|real-analysis|lebesgue-integral|lp-spaces|bessel-functions|
1
When do two functions have the same subdifferentials?
For two functions $f$ and $g$ , if $\nabla f(x) = \nabla g(x)$ for all $x$ , then $f = g + c$ for some constant $c$ . Does the same hold if the gradient is replaced by the (convex) subdifferential, i.e. $\partial f(x) = \partial g(x)$ for all $x$ ? And, as a stronger result, can we characterize pairs $(f, g)$ for which $\partial f(x) \cap \partial g(x) \neq \emptyset$ for all $x$ ?
You need some extra assumptions on $f$ and $g$ . If $f,g \colon X \to \bar{\mathbb R}$ are convex and lower semicontinuous and if $X$ is a Banach space, then $\partial f = \partial g$ implies that $f$ and $g$ differ by a constant. A proof can be found in the 1970 paper "On the maximal monotonicity of subdifferential mappings" by Rockafellar, see https://doi.org/10.2140/pjm.1970.33.209 .
|convex-analysis|convex-optimization|subgradient|non-smooth-analysis|non-smooth-optimization|
0
Can you square root both sides of an implicit equation to get the derivative?
Suppose you have $x^2=(4x^2y^3 + 1)^2$ , can you square root both sides of the equation to make it simpler?
Taking square roots gives $|x| = |4x^2y^3+1|$ , so there are two cases: $x= 4x^2y^3+ 1$ OR $-x= 4x^2y^3+ 1$ . You can differentiate each branch implicitly, but you must keep track of which branch the point you care about lies on.
|calculus|derivatives|chain-rule|implicit-differentiation|
1
When studying a book (like Rudin ) where the problems are not intended to be fully solvable by a student, what criteria show you're ready to advance?
When self studying a text where it is not expected to be able to solve all (or most) of the problems, what are the appropriate criteria to use for advancement? A word about the problems. There are a great number of them. It would be an extraordinary student indeed who could solve them all.... Many are introduced not so much to be solved as to be tackled. The value of a problem is not so much in coming up with the answer as in the ideas and attempted ideas it forces on the would-be solver. --Herstein Advanced books, like Rudin and Herstein, are intentionally written with problems "not so much to be solved as to be tackled." The value in this is self-evident. But it raises a question: How does a self-learner know when they should continue tackling more problems in a section (or book), and when they should say "well, I can't solve every problem here, and I haven't even attempted many of them; but I've learned quite a bit, and my time is now best spent learning the next thing." The inheren
To quote different Mathematics StackExchange users: One suggestion for finding a list of some (not all) problems in a given book one should attempt: search professors' websites for course information on courses they've taught previously. Ideally, you'll find a course that (a) uses the book you're interested in or otherwise already own, and (b) has a list of homework problems from that book, and maybe even additional suggested exercises. If a book is good, you should read it more than once. If you are unable to solve some problem at this moment, move ahead. After you finish a book, read something similar and look at the same subject from a different angle. Go back and forth between multiple books for inspiration. When you feel you are ready, you can try the problems again... My belief is if you're reading more advanced texts then it's up to you to have your own taste and goals. Then these study choices are made based on internal reflection against what you actually want out of life, balanced against the time you have.
|real-analysis|abstract-algebra|soft-question|self-learning|education|
0
Number of real solution of $2^{\sin(x)}-2e^{-\sin(x)}=2$ is
Number of real solution of $\displaystyle 2^{\sin(x)}-2e^{-\sin(x)}=2$ is What I try :: Put $\displaystyle 2^{\sin(x)}=t$ , Then $\displaystyle t-\frac{2}{t}=2\Longrightarrow t^2-2t-2=0$ $\displaystyle t^2-2t+1=3\Longrightarrow (t-1)=\pm\sqrt{3}$ Then we have $\displaystyle t=1\pm \sqrt{3}\Longrightarrow \sin(x)=\log_2(1\pm \sqrt{3})$ So here $\log_{2}(1+\sqrt{3})>1$ No solution exists Can someone please explain any short way to solve it.
$2^y-\dfrac{2}{e^y}=2,$ where $y=\sin(x)$ , so $-1\le y \le 1$ . Then $2^y \le 2$ , and the second term $\dfrac{2}{e^y}$ is positive, so the left hand side is always strictly less than $2$ . No solution.
|transcendental-equations|
0
Number of real solution of $2^{\sin(x)}-2e^{-\sin(x)}=2$ is
Number of real solution of $\displaystyle 2^{\sin(x)}-2e^{-\sin(x)}=2$ is What I try :: Put $\displaystyle 2^{\sin(x)}=t$ , Then $\displaystyle t-\frac{2}{t}=2\Longrightarrow t^2-2t-2=0$ $\displaystyle t^2-2t+1=3\Longrightarrow (t-1)=\pm\sqrt{3}$ Then we have $\displaystyle t=1\pm \sqrt{3}\Longrightarrow \sin(x)=\log_2(1\pm \sqrt{3})$ So here $\log_{2}(1+\sqrt{3})>1$ No solution exists Can someone please explain any short way to solve it.
Substituting $t=2^{\sin(x)}$ is a good place to start. If you do that, observe the following: First, because $\sin(x)$ has range $[-1,1]$ , we can restrict our attention to $t\in[\frac12,2]$ . Secondly, $$ e^{-\sin(x)} =\left( e^{\ln 2 \sin(x)}\right)^{\frac1{-\ln 2}}= t^{-1/\ln 2} $$ Thus any solution to the original equation will correspond to a solution for $t\in[\frac12,2]$ to the equation $$ t -2t^{-1/\ln 2} = 2 $$ This has no solution, which we can show by observing that $t -2t^{-1/\ln 2}$ is increasing on $(0,\infty)$ . (This may be shown by taking the derivative, or observing that the $t$ term is clearly increasing and the $-2t^{-1/\ln 2}$ is a composition of two decreasing functions and therefore is increasing.) Thus, we have for all $x\in \mathbb{R}$ , $$ 2^{\sin x}-2e^{-\sin x} = t -2t^{-1/\ln 2} \le 2-2\cdot 2^{-1/\ln 2} = 2-\frac{2}{e} < 2, $$ so the original equation has no real solutions.
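A quick numerical check of the final bound (my own addition): scanning one full period confirms the maximum of the left-hand side is $2-2/e\approx 1.26$ , well below $2$ .

```python
import math

def lhs(x):
    return 2 ** math.sin(x) - 2 * math.exp(-math.sin(x))

# The expression depends only on sin(x), so scanning one period suffices.
m = max(lhs(2 * math.pi * i / 100_000) for i in range(100_000))

# Maximum occurs where sin(x) = 1, with value 2 - 2/e < 2.
assert m < 2
assert abs(m - (2 - 2 / math.e)) < 1e-3
```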
|transcendental-equations|
0
Proof of expected length in random division of $[0,1]$ interval
I have trouble fully understanding the proof in a formal manner for the expected length of the $k^{th}$ largest interval when we randomly divide the $[0,1]$ interval using $n$ points. The $k^{th}$ largest interval's expected length is equal to $$\frac{\frac{1}{k} + \frac{1}{k+1} + \dots + \frac{1}{n+1}}{n+1}$$ Proof : Without loss of generality, assume the $[0,1]$ segment is broken into segments of length $s_1 \geq s_2 \geq \dots \geq s_n \geq s_{n+1}$ , in that order. We are given that $ s_1 + \dots + s_{n+1} = 1$ , and want to find the expected value of each $s_k$ . Set $ s_i = x_i + \dots + x_{n+1} $ for each $ i = 1, \dots, n+1 $ . Then, we have $ x_1 + 2x_2 + \dots + (n+1)x_{n+1} = 1 $ , and want to find the expected value of $ s_k = x_k + \dots + x_{n+1} $ . If we set $y_i = ix_i $ , then we have $ y_1 + \dots + y_{n+1} = 1 $ , so by symmetry $ E[y_i] = \frac{1}{n+1} $ for all $ i $ . Thus, $ E[x_i] = \frac{1}{i(n+1)} $ for each $ i $ , and now by linearity of expectation $ E[s_k] = \sum_{i=k}^{n+1} \frac{1}{i(n+1)} = \frac{\frac{1}{k} + \frac{1}{k+1} + \dots + \frac{1}{n+1}}{n+1} $ .
Here I will try and expand a justification for the symmetry of the $y$ variables: $y_1,y_2,\dots y_{n+1}$ can be considered as independent variables, except for the fact that they sum up to $1$ . Unlike the $s$ variables, there is no ordering relation between them. Apart from that, they can be given any set of $n+1$ values in the $[0,1]$ interval. And any such assignment corresponds uniquely and linearly to a valid assignment of the $s_1,s_2,\dots s_{n+1}$ variables. The linearity of the transformation between $s$ and $y$ shows "probability volume" is preserved in the change of variable. Given this correspondence, and uniform distribution of the initial variables, we can assume the assigned values of $y$ can be permuted, leading to equiprobable events in the $s$ variables. Therefore $\forall r\neq p, E(y_r) = E(y_p)$ . Due to their sum being $1$ , we get $E(y_r) = \frac{1}{n+1}$ . To get more intuition on this, consider the case $n= 1$ : a single uniform point $u$ splits $[0,1]$ into pieces of lengths $\max(u,1-u)$ and $\min(u,1-u)$ , with expected values $E[s_1]=\frac34$ and $E[s_2]=\frac14$ , exactly as the formula predicts for $k=1$ and $k=2$ .
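The formula is also easy to test by simulation; a quick Monte Carlo sketch (my own addition, with arbitrary sample parameters $n=4$ , $k=2$ , where $s_k$ is the $k$-th largest length as in the proof's ordering):

```python
import random

random.seed(1)

def simulate(n, trials, k):
    """Monte Carlo estimate of E[s_k], the k-th largest of the n+1
    interval lengths produced by n uniform points in [0, 1]."""
    total = 0.0
    for _ in range(trials):
        pts = sorted(random.random() for _ in range(n))
        lengths = [b - a for a, b in zip([0.0] + pts, pts + [1.0])]
        lengths.sort(reverse=True)   # s_1 >= s_2 >= ... >= s_{n+1}
        total += lengths[k - 1]
    return total / trials

def formula(n, k):
    # (1/k + 1/(k+1) + ... + 1/(n+1)) / (n+1)
    return sum(1.0 / i for i in range(k, n + 2)) / (n + 1)

n, k = 4, 2
est = simulate(n, 200_000, k)
assert abs(est - formula(n, k)) < 5e-3
```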
|probability|combinatorics|proof-explanation|expected-value|
0
Number of squares in a planar Graph
I need help with the following Graph Theory problem: Let $G$ be a connected and planar graph. Each face in the planar drawing of the graph (including the outer face) is either a square or a hexagon. Also, there are exactly $3$ faces adjacent to each node (equivalently, the degree of each node is $3$ ). Determine the number of squares of $G$ . I created an equation with two variables: $4x + 6y = 2E$ where $x$ represents the number of squares and $y$ is the number of hexagons. Also, the number of faces is $x + y$ . I plugged those values into Euler's formula but I ended up with an equation consisting of two variables which has infinitely many solutions.
Let there be $x$ squares, $y$ hexagons, $v$ vertices and $e$ edges in the graph. Then the following equalities hold: $$v-e+x+y=2,$$ $$3v=2e,$$ $$4x+6y=2e.$$ Putting $v$ and $e$ from the two last equalities into the first yields: $$\frac{4x+6y}{3}-(2x+3y)+x+y=2,$$ i.e. $\frac{x}{3}=2$ . From here we find $$x=6.$$
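The elimination can also be checked symbolically; a small sketch of the same system (my own addition, assuming sympy is available):

```python
from sympy import Eq, solve, symbols

x, y, v, e = symbols('x y v e', positive=True)

eqs = [
    Eq(v - e + x + y, 2),      # Euler's formula: V - E + F = 2
    Eq(3 * v, 2 * e),          # every vertex has degree 3
    Eq(4 * x + 6 * y, 2 * e),  # faces are squares (x) and hexagons (y)
]
sol = solve(eqs, [x, v, e], dict=True)[0]
assert sol[x] == 6  # the number of squares is 6, regardless of y
```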
|graph-theory|planar-graphs|
0
Calculate the volume of the tetrahedron...
The question Let $ABXY$ be a tetrahedron such that the triangles $ABX$ and $ABY$ are isosceles and right-angled at $A$ with legs of $2$ cm. Calculate the volume of the tetrahedron, knowing that there is a point $P$ such that $PA = PB=PX = PY = \sqrt{5}$ cm. The drawing my idea $AB=AX=AY=2$ and using the Pythagorean theorem we get $BX=BY=2\sqrt{2}$ We can use Chasles' formula to write the volume $$\frac{a*b* dis(a,b)*\sin(a,b)}{6}$$ . The volume of the tetrahedron is also equal to $\frac{2*A(AXY)}{3}$ I don't know where the point $P$ should be put. I hope one of you can help me solve this problem. Thank you!
Let the midpoint of $A$ and $B$ be $M$ , and let $H$ be the point where the line through $P$ perpendicular to the plane $AXY$ meets that plane. Since $PA=PB=PX=PY$ , the point $P$ is the center of the sphere circumscribed about $A,B,X,Y$ . Since $PB=PA$ , we have $\angle PMB=90^\circ$ , so $PM = \sqrt{BP^2-BM^2}=\sqrt{5-1}=2$ . Since $\angle BAX = \angle BAY= 90^\circ$ , the edge $AB$ is perpendicular to the plane $AXY$ , hence $AB\parallel PH$ ; also $PM\parallel AH$ (both are perpendicular to $AB$ in the plane through $A,B,P,H$ ), so $AH = PM = 2$ . Since $H$ lies in the plane $AXY$ and $P$ is the circumcenter, $H$ is the center of the circumcircle of $\triangle AXY$ . Therefore $AH = XH = YH = 2$ . Since $AX=AY=2$ , the triangles $AXH$ and $AYH$ are equilateral. Therefore $\angle XAH = \angle YAH = 60^\circ$ and $\angle XAY = 120^\circ$ . Therefore $XY = 2\sqrt3$ and $[\triangle AXY] =\sqrt3$ . Therefore the volume of the tetrahedron is $[\triangle AXY]\cdot AB\cdot\frac13 =\frac{2\sqrt3}{3}$ . I think this is much simpler than using $\sin$ like the others.
|geometry|volume|
1
Calculate the volume of the tetrahedron...
The question Let $ABXY$ be a tetrahedron such that the triangles $ABX$ and $ABY$ are isosceles and right-angled at $A$ with legs of $2$ cm. Calculate the volume of the tetrahedron, knowing that there is a point $P$ such that $PA = PB=PX = PY = \sqrt{5}$ cm. The drawing my idea $AB=AX=AY=2$ and using the Pythagorean theorem we get $BX=BY=2\sqrt{2}$ We can use Chasles' formula to write the volume $$\frac{a*b* dis(a,b)*\sin(a,b)}{6}$$ . The volume of the tetrahedron is also equal to $\frac{2*A(AXY)}{3}$ I don't know where the point $P$ should be put. I hope one of you can help me solve this problem. Thank you!
From symmetry, point $P$ lies on the intersection of the perpendicular bisecting plane of $AB$ , and the perpendicular bisecting plane of $XY$ as well as the perpendicular bisecting plane of $AX$ . Now let the base of the tetrahedron $\triangle AXY$ be in the $xy$ plane, with $ A = (0,0,0), X = (2, 0, 0) , Y = (2 \cos(2\theta) , 2 \sin(2 \theta), 0) $ and finally $B = (0, 0, 2) $ The perpendicular bisecting plane of $AB$ has the equation $z = 1$ , and the perpendicular bisecting plane of $XY$ has the equation $ - \sin(\theta) x + \cos \theta y = 0 $ . Finally, the bisecting plane of $AX$ has the equation $x = 1$ . Therefore, point $P$ is given by $P = (x, y, z) = (1, y , 1 ) $ with $ 0 = - \sin \theta + \cos \theta \ y $ , i.e. $y = \tan \theta$ Since $P$ is $\sqrt{5}$ away from $A$ which is at the origin, then $ 2 + y^2 = 5 $ Therefore, $ y = \sqrt{3} $ And $\theta = \dfrac{\pi}{3} $ Therefore, the volume of the tetrahedron is $ V = \dfrac{1}{3} \cdot \dfrac{1}{2} (2)^2 \sin(2 \theta) \cdot 2 = \dfrac{4}{3}\sin\left(\dfrac{2\pi}{3}\right) = \dfrac{2\sqrt{3}}{3} $ .
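The coordinates above make the answer easy to verify numerically; a short numpy check (my own addition), using the values $\theta=\pi/3$ and $P=(1,\sqrt3,1)$ found in the solution:

```python
import numpy as np

s3 = np.sqrt(3.0)
A = np.array([0.0, 0.0, 0.0])
X = np.array([2.0, 0.0, 0.0])
Y = np.array([-1.0, s3, 0.0])   # 2*(cos 120deg, sin 120deg, 0), i.e. theta = pi/3
B = np.array([0.0, 0.0, 2.0])
P = np.array([1.0, s3, 1.0])    # the point found above

# P is equidistant (distance sqrt(5)) from all four vertices.
for V in (A, B, X, Y):
    assert abs(np.linalg.norm(P - V) - np.sqrt(5.0)) < 1e-12

# Volume = |det[X-A, Y-A, B-A]| / 6.
vol = abs(np.linalg.det(np.column_stack([X - A, Y - A, B - A]))) / 6
assert abs(vol - 2 * s3 / 3) < 1e-12
```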
|geometry|volume|
0
Antiunitary operator on a Hilbert space
A bijective linear (antilinear) operator $A$ on a Hilbert space $\mathcal{H}$ is called unitary (antiunitary) if $\langle A\psi |A\phi \rangle =\langle \psi |\phi \rangle$ (resp. $\langle A\psi |A\phi \rangle =\langle \phi |\psi \rangle$ ) for all $\phi ,\psi \in \mathcal{H}$ . By the relation $\langle \psi |A^{\dagger}\phi \rangle =\langle A\psi |\phi \rangle$ , for a unitary operator $A$ we can conclude that $\langle \psi |A^{\dagger}A\phi \rangle =\langle A\psi |A\phi \rangle =\langle \psi |\phi \rangle $ hence $A^{\dagger}A=I$ . But how can I get that for an antiunitary operator $U$ , we have $U^* U=I$ ?
If $\ U\ $ is antiunitary, then \begin{align} \langle\phi|\psi\rangle&=\langle U\psi|U\phi\rangle\\ &=\overline{\langle\psi|U^*U\phi\rangle}\\ &=\langle U^*U\phi|\psi\rangle \end{align} for all $\ \phi,\psi\ .$
|operator-theory|hilbert-spaces|adjoint-operators|
1
Showing that a subset of a function space is closed
In a calculus of variations problem, I am given $V = \mathcal C([0,1], \mathbb R)$ to be a normed function space. I want to show that the set $$A = \{x\in V \quad \vert \quad x(0) = x(1) = 0, \quad \max_{s\in [0,1]} |x(s)| \leq 1\}$$ is closed (and also be able to show it for different sets). Unfortunately my background in real analysis is quite weak. I have read about how to show this, particularly showing that $A$ is closed iff it contains all its limit points. But I still do not really understand the technical procedure. Can someone guide me through it?
Since I cannot comment, I will write this as an answer. One of the ways to prove a set is closed is to show that the set contains its limit points. Assuming (because you did not mention it) that the norm is $\|x\|=\max_{s \in [0,1]} |x(s)|$ : Let $\{x_n\}$ be a sequence in $A$ that converges (w.r.t. $\|.\|$ ) to $x \in V$ . We want to show that $x \in A$ . Since $x_n \in A$ for every $n \in \mathbb N$ , we have $x_n(0)=x_n(1)=0$ and $\|x_n\|=\max_{s\in[0,1]} | x_n(s)| \leq 1$ . Fix $\epsilon >0$ . Then there is $N>0$ such that for $ n >N$ $$|x_n(0) -x(0)| \leq \max_{s \in [0,1]}|x_n(s) -x(s)| = \|x_n - x\| \leq \epsilon, $$ i.e. $\lim_{n \to \infty} |x_n(0) -x(0)|=0$ , which implies $\lim_{n \to \infty} x_n(0)=x(0)=0$ . Similarly, we get $\lim_{n \to \infty} x_n(1)=x(1)=0$ . To show that $x$ is in the unit ball, we argue as follows: for $n >N$ we get $$\|x\| \leq \|x-x_n\| + \|x_n\| \leq \epsilon + 1;$$ since $\epsilon$ was chosen arbitrarily small, we get $\|x\| \leq 1$ . Therefore, $x \in A$ , thus $A$ is closed.
|real-analysis|calculus-of-variations|
0
Let $(A_k)_{k \in \mathbb{N}}$ be a sequence of events. If $P(A_k)$ does not converge to $0$ then $\exists$ an outcome belonging to infinitely many of the $A_k$
Question: Let $(A_k)_{k \in \mathbb{N}}$ be a sequence of events. Prove that if $P(A_k)$ does not converge to $0$ then there exists an outcome belonging to infinitely many of the $A_k$ . My answer: 1- Let us write $A'_1=A_1 \cup A_2 \cup \dots$ , $A'_2=A_2 \cup A_3 \cup \dots$ , ..., $A'_n=A_n \cup A_{n+1} \cup \dots$ Now we can note that $A'_1 \cap A'_2 \cap \dots \cap A'_n=A'_n$ . Moreover $A_n \subseteq A'_n= A_n \cup A_{n+1} \cup \dots = \bigcup_{k \geq n}A_k$ 2- Hence $\lim_{n \to \infty } A'_1 \cap A'_2 \cap \dots \cap A'_n = \lim_{n \to \infty } A'_n$ . So: $\mathbb{P}(\lim_{n \to \infty} \bigcap_{k=1}^{n}A'_k)= \mathbb{P}(\lim_{n \to \infty} A'_n) \Rightarrow \lim_{n \to \infty} \mathbb{P}( \bigcap_{k=1}^{n}A'_k)=\lim_{n \to \infty} \mathbb{P}( A'_n)$ (the limit goes out by the continuity of the probability measure). 3- By (1-) we have $A_n \subseteq A'_n \Rightarrow \mathbb{P}(A_n) \leq \mathbb{P}(A'_n)$ ; now we apply the limit and we get $ \lim_{n \to \infty} \mathbb{P}( \bigcap_{k=1}^{n}A'_k) \geq \lim_{n \to \infty} \mathbb{P}( A_n)>0$ , so we have that $\lim_{n \to \infty} \mathbb{P}(A'_n)>0$ .
Indeed, as pointed out by https://math.stackexchange.com/users/8508/robert-israel , there was a typo. I've now edited my first message and I think that my answer is correct. Thank you.
|probability|probability-theory|measure-theory|solution-verification|
1
How to show two functions can not be solutions of a same ODE of the form $y' = F(t,y)$
Question 1: Consider here the two functions $f: \mathbb R\rightarrow \mathbb R$ and $g: \mathbb R \rightarrow \mathbb R$ defined by $f(t)=t^2$ and $g(t)=t^2 -t +1$ . Show that $f, g$ are not solutions of a same ODE on $\mathbb R$ of the form $y'=F(t,y)$ , with $F: \mathbb R^2 \rightarrow \mathbb R$ a function. I know this must be true, but I don't really know how to write a formal proof. My attempt right now: Assume that $f(t)$ and $g(t)$ are both solutions of the same ODE $y' = F(t,y)$ . Then $2t = F(t, t^2)$ , $2t- 1 = F(t, t^2 -t+1)$ I am kind of lost after this, I also created a plot for this question. Can anyone give me some hint?
At $t_0 = 1$ we have $ f(t_0) = 1 = g(t_0)$ but $f'(t_0) = 2 \ne 1 = g'(t_0)$ . If $f$ and $g$ were solutions of the same ODE $y'=F(t, y)$ then $$ f'(t_0) = F(t_0, f(t_0)) = F(t_0, g(t_0)) = g'(t_0), $$ a contradiction, so this is not possible. Remark: This is a very simple case of the general fact that solutions of "initial value problems" are locally unique (under some conditions on the function $F$ ). If two functions $f$ and $g$ are both solutions of the same ODE with the same initial value $y(t_0) = y_0$ then these functions are identical near $t_0$ , that is the Picard–Lindelöf theorem .
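The contradiction is easy to see numerically as well (a trivial check, my own addition):

```python
# f(t) = t^2 and g(t) = t^2 - t + 1 agree at t0 = 1 but have different slopes,
# so no single F(t, y) can produce both as solutions of y' = F(t, y).
def f(t): return t * t
def g(t): return t * t - t + 1
def fp(t): return 2 * t       # f'
def gp(t): return 2 * t - 1   # g'

t0 = 1.0
assert f(t0) == g(t0) == 1.0            # same point (t0, y0) = (1, 1)
assert fp(t0) == 2.0 and gp(t0) == 1.0  # but f'(t0) != g'(t0)
```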
|ordinary-differential-equations|
0
Counterexample of $f^* : \operatorname{Spec}B \to \operatorname{Spec}A$ injective implies every prime ideal of $B$ extended
This is exercise 3.20, (ii) of Atiyah & Macdonald. $f^*$ is the induced map on $\operatorname{Spec}$ of $f: A \to B$ , a ring homomorphism. I have seen a counterexample on MathSE, stating that $k[t^2,t^3] \subset k[t]$ is one. But I cannot seem to understand why this is a counterexample. In this case $f$ must be the inclusion $k[t^2, t^3] \to k[t]$ . How is the induced map $f^*$ injective in the first place? Also, how could I know about the prime ideals of $k[t]$ when I don't even know what $k$ is? I have made this a separate question rather than a comment, since the original post is quite old. I would appreciate both hints or full descriptions. EDIT I have just realized that if $k$ is assumed to be algebraically closed, then the question becomes rather trivial. I am not deleting this post for future reference.
We will check that $k[t^2, t^3] \subseteq k[t]$ induces an injection on spectra for any ring $k$ . (On the other hand, clearly the prime ideal $(t)$ is not extended from $k[t^2, t^3]$ .) Set $I = (t^2, t^3)$ . Note that $I k[t] \subseteq k[t^2, t^3]$ and that $Ik[t] \cap k[t^2, t^3] = I$ . Thus $k \cong k[t^2,t^3]/I \subseteq k[t]/Ik[t]$ . This extension is just the result of adjoining a nilpotent, and we get a retract $k[t]/Ik[t] \rightarrow k$ by killing $t$ . Since modding by nilpotents does not affect the spectrum, deduce that $k \subseteq k[t]/Ik[t]$ induces a homeomorphism on spectra. Now suppose that $P_1, P_2$ are two primes of $k[t]$ contracting to the same prime $Q$ of $k[t^2,t^3]$ . Assuming for the sake of contradiction that $P_1 \not= P_2$ , without loss of generality we can pick $f \in P_1 \setminus P_2$ . Then $If \subseteq P_1 \cap k[t^2,t^3] = Q \subseteq P_2$ . By primeness of $P_2$ , deduce $I \subseteq P_2$ , hence $t \in P_2$ ; then $t^2 \in Q \subseteq P_1$ , so $t \in P_1$ as well. Both primes then contain $Ik[t]$ , so they correspond to primes of $k[t]/Ik[t]$ , and under the homeomorphism on spectra above they have the same image (namely the prime of $k$ determined by $Q$ ), forcing $P_1 = P_2$ , a contradiction.
|commutative-algebra|
1
Ways to determine (or estimate) the weight function that makes trig functions orthogonal on a given interval?
From the general definition of orthogonality for functions, the set of trig functions $\phi_k=\cos(kx)$ is orthogonal on $[a,b]$ if \begin{equation} \int_a^b w(x) \cos(jx)\cos(kx) dx =\alpha_j \delta_{jk}, \end{equation} where $w(x)$ is the weight function and $\delta_{jk}$ is the Kronecker delta. On the interval $a=-\pi, b=\pi$ , $\cos(kx)$ is orthogonal with just $w(x) = 1$ . This value of $w(x)$ also works for intervals that are just integer multiples of $[-\pi, \pi]$ . Are there ways to determine or estimate $w(x)$ for an arbitrary given interval? Say, I want to make them orthogonal on the interval $[0,5]$ , what would I do to find $w(x)$ that makes this work? I'm aware that there are methods of doing so for certain classes of polynomials, but I haven't found anything that discusses trying to do so for trig functions. As far as I am aware, there is no reason why, given $a$ and $b$ , there shouldn't be a weight function that works to make the trig functions orthogonal on that interval. Am I missing something?
So I believe that I have what works as a weight function $w(x)$ to make the set of trig functions orthogonal on a given interval $[a,b]$ . I realized that if you linearly transform $x$ so that the interval $[a,b]$ is transformed back into $[-\pi, \pi]$ , $a$ and $b$ can be whatever and the orthogonality still holds. The linear transform that accomplishes this is \begin{equation} x'=\frac{1}{a-b}(-2 \pi x+ \pi(a+b)), \end{equation} where I am using $x'$ to denote the new transformed $x$ . It follows then that the $w(x)$ that transforms $\cos(jx)\cos(kx)$ into the desired form with the transformed $x$ 's is \begin{equation} w(x)=\frac{\cos\left(j\frac{-2 \pi x+ \pi(a+b)}{a-b}\right)\cos\left(k\frac{-2 \pi x+ \pi(a+b)}{a-b}\right)}{\cos(jx)\cos(kx)}. \end{equation} So the solution to the original question can be summed up as: the appropriate weight function that makes the set of trig functions orthogonal on an arbitrary interval is the one that performs a linear transform of the given interval back onto $[-\pi,\pi]$ .
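The underlying change of variables is easy to check numerically; the following sketch (my own addition, with the sample interval $[0,5]$ from the question and arbitrary frequencies $j=2$ , $k=3$ ) verifies that $\cos(jx')$ and $\cos(kx')$ are orthogonal on $[a,b]$ when $x'$ is the linear map above.

```python
import numpy as np

def xprime(x, a, b):
    # Linear map sending [a, b] onto [-pi, pi], as in the answer.
    return (-2 * np.pi * x + np.pi * (a + b)) / (a - b)

a, b = 0.0, 5.0                     # sample interval from the question
x = np.linspace(a, b, 200001)
h = x[1] - x[0]

def inner(j, k):
    # Trapezoid-rule approximation of the integral over [a, b].
    y = np.cos(j * xprime(x, a, b)) * np.cos(k * xprime(x, a, b))
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

assert abs(inner(2, 3)) < 1e-6                 # distinct frequencies: orthogonal
assert abs(inner(2, 2) - (b - a) / 2) < 1e-6   # same frequency: norm (b - a)/2
```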
|orthogonality|
1
Independence inequality over interval: $\mathbb{P}[a<X<b]\leq\mathbb{P}[a<X]\mathbb{P}[X<b]$
I am trying to show the following: $$\mathbb{P}[a<X<b]\leq\mathbb{P}[a<X]\,\mathbb{P}[X<b]$$ where $X$ is any random variable and $a,b$ are constants. I feel I am missing the obvious here, but does the below proof hold? $$\mathbb{P}[a<X<b]=\mathbb{P}[a<X\mid X<b]\,\mathbb{P}[X<b]\leq\mathbb{P}[a<X]\,\mathbb{P}[X<b]$$ I used the inequality $\mathbb{P}[a<X\mid X<b]\leq\mathbb{P}[a<X]$ , which I am fairly sure holds but am struggling to show. Is my reasoning correct?
Adapting Maximilian Janisch's comment, a simpler way without using conditional probability (assuming $a<b$ , as otherwise the left side is $0\le$ the product of two probabilities, so the claim is clearly true) may be $$\begin{array}{rcl}\mathbb P(a<X<b) &=& \mathbb P(a<X)+\mathbb P(X<b)-\mathbb P(\{a<X\}\cup\{X<b\})\\ &=& \mathbb P(a<X)+\mathbb P(X<b)-1\\ &\le& \mathbb P(a<X)\,\mathbb P(X<b),\end{array}$$ where the last step holds because $(1-\mathbb P(a<X))(1-\mathbb P(X<b))\ge 0$ . Your statement $\mathbb P(a<X\mid X<b)\le\mathbb P(a<X)$ is correct so long as $\mathbb P(X<b)> 0$ , because (still assuming $a<b$ ) $$\mathbb{P}(a<X\mid X<b)=\frac{\mathbb P(a<X<b)}{\mathbb P(X<b)}\le\frac{\mathbb P(a<X)\,\mathbb P(X<b)}{\mathbb P(X<b)}=\mathbb P(a<X),$$ since $\mathbb P(X<b)>0$ .
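A Monte Carlo spot-check of the inequality (my own addition; the distributions and cutoffs are arbitrary test values):

```python
import random

random.seed(0)

def check(sample, a, b, n=200_000):
    """Empirical P(a < X < b) versus P(a < X) * P(X < b)."""
    xs = [sample() for _ in range(n)]
    p_ab = sum(a < v < b for v in xs) / n
    p_a = sum(a < v for v in xs) / n
    p_b = sum(v < b for v in xs) / n
    return p_ab, p_a * p_b

for sampler, a, b in [
    (random.random, 0.3, 0.8),                # uniform on [0, 1]
    (lambda: random.gauss(0, 1), -0.5, 1.0),  # standard normal
]:
    lhs, rhs = check(sampler, a, b)
    assert lhs <= rhs + 1e-3  # small slack for Monte Carlo noise
```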
|probability|probability-theory|solution-verification|
1
Epsilon delta proofs on discontinuous functions
Does the epsilon delta definition of a limit always hold? I was thinking about the function $$f(x)=\begin{cases} 1 & x= 0 \\ 0 & x\neq 0 \end{cases} $$ and it seems that $$\lim_{x \to 0} f(x) = 0,$$ however unless I'm mistaken this would not converge to $0$ (or even at all) via the epsilon delta definition, as given $$\epsilon \leq 1, \nexists \space \delta>0 \space s.t. \space|x|<\delta \implies |f(x)|<\epsilon,$$ as we have $f(0)=1$ . There's probably just something I'm missing but it's bothering me.
If $\lim\limits_{x \rightarrow 0}f(x) = 0$ , then given $\epsilon>0$ , there must be $\delta>0$ such that $0<|x|<\delta$ implies $|f(x)|<\epsilon$ . Now, for $x \neq 0$ , $f(x)=0$ , so for any $\delta>0$ , $0<|x|<\delta$ implies $|f(x)|=0<\epsilon$ . The key point is that the condition $0<|x|$ excludes the point $x=0$ itself, so the value $f(0)=1$ is irrelevant to the limit.
|real-analysis|epsilon-delta|
1
Algebraic approach to spherical harmonics
I am interested in an algebraic approach to the following theorem: Theorem. Consider the sphere $S^{n-1} \subseteq \Bbb{R}^n$ and for each $k=0,1,2,\ldots$ , the space $H^k$ consisting of homogeneous harmonic polynomials of degree $k$ , restricted to $S^{n-1}$ . Then $\bigoplus_{k=0}^\infty H^k$ (the set of all linear combinations of elements of $H^k$ ) is dense in $L^2(S^{n-1})$ . I am familiar with an analytic derivation of this fact (as described at the bottom). However, it relies on some heavy functional machinery (diagonalization in Hilbert spaces, Liouville's theorem and singularity removal for harmonic functions), so I am looking for a more elementary, algebraic argument instead - just for my own satisfaction. If one accepts the fact that polynomials are dense in $L^2(S^{n-1})$ (which is much easier to prove than the machinery mentioned earlier), the rest is purely algebraic (comparing all polynomials with harmonic polynomials). However, I found it hard to complete this strategy.
Proposition. If $u(x)$ and $|x|^2 u(x)$ are both harmonic (say on the whole space $\mathbf{R}^n$ for simplicity), then $u$ is identically zero. Analytic proof. The difference $(1-|x|^2) \, u(x)$ is harmonic, and zero on the boundary of the unit ball; hence it's zero inside the ball by the maximum principle. So $u(x)$ is zero inside the ball as well, since the nonzero factor $1-|x|^2$ can be cancelled. And harmonic functions are real-analytic, so a harmonic function which is zero on an open set is zero everywhere. Algebraic proof, for polynomials. A polynomial is harmonic iff all its homogeneous parts are, so assume that $u$ is a homogeneous polynomial of degree $m$ . Then Euler's theorem for homogeneous functions says that $x \cdot \nabla u(x) = m \, u(x)$ , and the rule for the Laplacian of a product gives $$ \begin{aligned} 0 & = \Delta ( \, |x|^2 u(x) \,) \\ & = (\Delta |x|^2) \, u(x) + 2 \nabla |x|^2 \cdot \nabla u(x) + |x|^2 \Delta u(x) \\ & = 2n \, u(x) + 4x \cdot \nabla u(x) + |x|^2 \Delta u(x) \\ & = (2n + 4m) \, u(x). \end{aligned} $$ Since $2n + 4m > 0$ , this forces $u \equiv 0$ .
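The product-rule computation can be checked symbolically; a small sympy sketch (my own addition) verifying the identity $\Delta(|x|^2 u)=(2n+4m)u$ for the sample harmonic polynomial $u=x^2-y^2$ , with $n=2$ variables and degree $m=2$ :

```python
from sympy import symbols, diff, expand

x, y = symbols('x y')
u = x**2 - y**2          # harmonic, homogeneous of degree m = 2 in n = 2 variables
r2 = x**2 + y**2

def lap(f):
    # Laplacian in two variables
    return diff(f, x, 2) + diff(f, y, 2)

assert lap(u) == 0                                 # u is harmonic
# Euler's theorem + product rule predict lap(|x|^2 u) = (2n + 4m) u = 12 u.
assert expand(lap(expand(r2 * u)) - 12 * u) == 0
```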
|polynomials|harmonic-functions|spherical-harmonics|
1
Showing $E_1 \Delta N_1=E_2 \Delta N_2 \implies E_1 \Delta E_2=N_1 \Delta N_2$
Showing $E_1 \Delta N_1=E_2 \Delta N_2 \implies E_1 \Delta E_2=N_1 \Delta N_2$ . I am not able to show this even after expanding; can someone help?
Suppose $x\in E_{1}$ and $x \notin E_{2}$ . If $x \notin N_{1}$ then $x \in E_{1} \Delta N_{1}$ , so $x \in E_{2} \Delta N_{2}$ and $x \in N_{2}$ . We have $x \in N_{1} \Delta N_{2}$ in this case. If $x \in N_{1}$ then $x$ is not an element of $E_{1} \Delta N_{1}$ and neither is it of $E_{2} \Delta N_{2}$ , so $x \notin N_{2}$ . We also have $x \in N_{1} \Delta N_{2}$ in this case. All other cases and inclusions are entirely symmetric.
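Viewing $\Delta$ as XOR on indicator functions also gives a one-line argument: since $\Delta$ is associative and self-inverse, $E_1\Delta E_2=(E_1\Delta N_1)\Delta(N_1\Delta E_2)=(E_2\Delta N_2)\Delta(N_1\Delta E_2)=N_1\Delta N_2$ . A brute-force verification over a small universe (my own addition):

```python
from itertools import combinations

U = [0, 1, 2]  # small universe; we check all 8^4 quadruples of subsets
subsets = [set(c) for r in range(len(U) + 1) for c in combinations(U, r)]

checked = 0
for E1 in subsets:
    for N1 in subsets:
        for E2 in subsets:
            for N2 in subsets:
                if E1 ^ N1 == E2 ^ N2:          # hypothesis (^ is symmetric difference)
                    assert E1 ^ E2 == N1 ^ N2   # conclusion
                    checked += 1

# Given (E1, N1, E2), the hypothesis forces N2 = E2 ^ (E1 ^ N1),
# so exactly 8^3 quadruples satisfy it.
assert checked == 8 ** 3
```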
|measure-theory|
0
Why does duality of objects $A$, $A^\ast$ in a symmetric monoidal category imply an adjunction $(-) \otimes A \dashv (-) \otimes A^\ast$?
Let $\mathcal{C}$ be a symmetric monoidal category and let $A$ and $A^*$ be dual in the sense of Definition 2.1 in nLab. Dold & Puppe (1984) show (Thm 1.3) that the map $$ \text{Hom}(X, Y \otimes A^{\ast}) \xrightarrow{(-) \otimes A} \text{Hom}(X \otimes A, Y \otimes {A}^{\ast} \otimes A) \xrightarrow{{\textrm{id}}_{Y} \otimes \text{ev}_A \circ (-)} \text{Hom}(X \otimes A, Y) $$ is an isomorphism for all objects $X$ , $Y$ . Let us refer to this map by $\varphi$ . I am trying to understand how it follows from this observation that $$ (-) \otimes A \dashv (-) \otimes A^{\ast} $$ as pointed out in Remark 2.15 on nLab. If the isomorphism $\varphi$ is natural, we automatically get the desired adjunction by definition. How can we know that $\varphi$ is indeed natural?
We will check that $\varphi_{X,Y}$ is natural in $X$ and $Y$ separately, by checking that the same holds for $-\otimes A$ and $(\mathrm{id}_Y\otimes \mathrm{ev}_A)\circ-$ . However, since $-\otimes A$ is a functor, naturality in $X$ and $Y$ of the induced map $\mathrm{Hom}(X,Y\otimes A^*)\to\mathrm{Hom}(X\otimes A,Y\otimes A^*\otimes A)$ is just functoriality of $-\otimes A$ , more specifically the statement that a functor preserves composition of morphisms. The assignment $(\mathrm{id}_Y\otimes \mathrm{ev}_A)\circ-\colon\mathrm{Hom}(X\otimes A,Y\otimes A^*\otimes A)\to\mathrm{Hom}(X\otimes A,Y)$ is a postcomposition operation. Naturality in $X$ (which is tested by looking at precompositions) is hence a consequence of the statement that postcomposition and precomposition commute, in the sense that, given three composable morphisms $f$ , $g$ and $h$ , we have $(hg)f=h(gf)$ (this is of course also simply called associativity of composition). We hence only have to still check that the assignment is natural in $Y$. Naturality in $Y$ is tested by postcomposing with a morphism $g\colon Y\to Y'$, and the required identity $(\mathrm{id}_{Y'}\otimes\mathrm{ev}_A)\circ(g\otimes\mathrm{id}_{A^*\otimes A}) = g\otimes \mathrm{ev}_A = (g\otimes\mathrm{id})\circ(\mathrm{id}_{Y}\otimes\mathrm{ev}_A)$ (up to the unitor identifying $Y\otimes I$ with $Y$) is an instance of the interchange law, i.e. of the functoriality of $\otimes$.
|category-theory|adjoint-functors|monoidal-categories|higher-category-theory|hom-functor|
1
How one can negate this statement?
I have a logical statement of the form: For every number $a\geq 0$ , there exist two prime numbers $p$ and $p+2$ such that $$a+4 < p < 2^{a+4}$$ and $$a+4 < p+2 < 2^{a+4}.$$ Then how can one negate this statement? My solution: There exists a number $a\geq 0$ such that for every natural number $d$ with $$a+4 < d < 2^{a+4},$$ at least one of $d,d+2$ is composite.
There exists a number $a\ge 0$ such that for every pair of twin primes $p$ and $p+2$ , $$a+4\ge p \tag 1$$ or $$p \ge 2^{a+4} \tag 2$$ or $$a+4\ge p+2 \tag 3$$ or $$p+2 \ge 2^{a+4} \tag 4$$ (One may omit inequalities $(2)$ and $(3)$ if they wish to, since $(2)$ implies $(4)$ and $(3)$ implies $(1)$.)
|logic|
0
Find the sum of series: $\frac{1}{\sqrt{1}+\sqrt{2}}+\frac{1}{\sqrt{3}+\sqrt{4}}+...+\frac{1}{\sqrt{97}+\sqrt{98}}+\frac{1}{\sqrt{99}+\sqrt{100}}$
Find the sum of series: $$\frac{1}{\sqrt{1}+\sqrt{2}}+\frac{1}{\sqrt{3}+\sqrt{4}}+\frac{1}{\sqrt{5}+\sqrt{6}}+...+\frac{1}{\sqrt{97}+\sqrt{98}}+\frac{1}{\sqrt{99}+\sqrt{100}}$$ My Attempt: I tried to go by the telescopic method but nothing appears to be cancelling. Something similar was given in a book by Titu Andreescu; I will try to reproduce it. Let $S=\frac{1}{\sqrt{1}+\sqrt{2}}+\frac{1}{\sqrt{3}+\sqrt{4}}+\frac{1}{\sqrt{5}+\sqrt{6}}+...+\frac{1}{\sqrt{97}+\sqrt{98}}+\frac{1}{\sqrt{99}+\sqrt{100}}$ Further let $T=\frac{1}{\sqrt{2}+\sqrt{3}}+\frac{1}{\sqrt{4}+\sqrt{5}}+...+\frac{1}{\sqrt{96}+\sqrt{97}}+\frac{1}{\sqrt{98}+\sqrt{99}}$ Clearly $S+T=\sqrt{100}-1=9$ Also $S-T=2S+1-\sqrt{100}=2S-9$ and $S>T$ (comparing the two series term by term) $\Rightarrow 2S>S+T$ $\Rightarrow S>4.5$ This is the best I could come up with.
I don't think it's possible to find a closed form, but you can find the whole (integer) part. Hint: Try to find $S - T$ . $$S - T = \dfrac{1}{\sqrt 2 + 1} - \left(\dfrac{1}{\sqrt 2 + \sqrt 3} - \dfrac{1}{\sqrt 3 + \sqrt 4}\right) - \dots - \left(\dfrac{1}{\sqrt{98} + \sqrt{99}} - \dfrac{1}{\sqrt{99} + \sqrt{100}}\right)$$ Since each bracket is greater than $0$ , $$0 < S - T < \dfrac{1}{\sqrt 2 + 1},$$ and $S + T = 9$ . So if $S \geq 5$ , then $T > S - \dfrac{1}{\sqrt 2 + 1} > 4$ , which means $S + T > 9$ , a contradiction. Hence $S < 5$ ; combined with $S > 4.5$ from the question, $\lfloor S \rfloor = 4$ and $\{ S \} = S - 4 \in \left(\tfrac12,\ \tfrac12 + \tfrac{1}{2(\sqrt 2 + 1)}\right)$ .
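A direct numeric evaluation (our own check, not from the original answer) corroborates these bounds:

```python
import math

# S and T as defined in the question; S + T telescopes to √100 − 1 = 9.
S = sum(1 / (math.sqrt(2*k - 1) + math.sqrt(2*k)) for k in range(1, 51))
T = sum(1 / (math.sqrt(2*k) + math.sqrt(2*k + 1)) for k in range(1, 50))

assert abs((S + T) - 9) < 1e-12   # telescoping identity
assert 4.5 < S < 5                # hence floor(S) = 4
print(f"S ≈ {S:.6f}")
```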
|sequences-and-series|algebra-precalculus|summation|telescopic-series|
0
Integrating $\int_{0}^{1} \left(\frac{\arctan(x) - x}{x^2}\right)^2 \,dx$
How can I integrate $$\int_{0}^{1} \left(\frac{\arctan(x) - x}{x^2}\right)^2 \,dx\ ?$$ Attempt: $$\int_{0}^{1} \left(\frac{\arctan(x) - x}{x^2}\right)^2 \,dx = \int_{0}^{1} \frac{1}{x^4} \cdot (\arctan(x) - x)^2 \,dx$$ Integrating by parts, $$I = -\frac{1}{3} \left(\frac{\pi}{4} - 1\right)^2 + \frac{2}{3} \int_{0}^{1} \frac{x - \arctan(x)}{x(x^2 + 1)} \,dx$$ $$= -\frac{1}{3} \left(\frac{\pi}{4} - 1\right)^2 + \frac{2}{3} \int_{0}^{1} \frac{1}{x^2 + 1} \,dx - \frac{2}{3} \int_{0}^{1} \frac{\arctan(x)}{x(x^2 + 1)} \,dx$$
In fact, $$\begin{eqnarray} I&=&\int_{0}^{1} \frac{\arctan(x)}{x(x^2 + 1)}dx\\ &=&\int_{0}^{1}\arctan(x)\;d(\ln x-\frac12\ln(1+x^2))\\ &=&\arctan(x)(\ln x-\frac12\ln(1+x^2))\bigg|_0^1-\int_{0}^{1}\frac{\ln x-\frac12\ln(1+x^2)}{1+x^2}\;dx\\ &=&-\frac\pi8\ln2-\int_0^1\frac{\ln x}{1+x^2}\;dx+\frac12\int_0^1\frac{\ln(1+x^2)}{1+x^2}\;dx\\ &=&\frac12G + \frac\pi8\ln2. \end{eqnarray}$$ Here $$\int_0^1\frac{\ln x}{1+x^2}\;dx=-G,\int_0^1\frac{\ln(1+x^2)}{1+x^2}\;dx=-G+\frac\pi2\ln2 $$ are used from https://en.wikipedia.org/wiki/Catalan%27s_constant and Find integral $\int_0^1 \frac{\ln(1+x^2)}{1+x^2} \ dx$ (most likely substitution) .
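A numeric cross-check of the closed form $I = \frac{G}{2} + \frac{\pi}{8}\ln 2$ (our own sketch with a hand-rolled Simpson rule; the value of Catalan's constant is hard-coded):

```python
import math

G = 0.9159655941772190          # Catalan's constant

def f(x):
    if x == 0.0:
        return 1.0              # limit of arctan(x)/(x(1+x^2)) as x → 0
    return math.atan(x) / (x * (1 + x*x))

# composite Simpson's rule on [0, 1]
n = 2000
h = 1.0 / n
I = f(0.0) + f(1.0) + sum((4 if k % 2 else 2) * f(k*h) for k in range(1, n))
I *= h / 3

closed_form = G/2 + math.pi/8 * math.log(2)
assert abs(I - closed_form) < 1e-9
print(f"I ≈ {I:.10f}, G/2 + (π/8)ln2 ≈ {closed_form:.10f}")
```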
|calculus|integration|definite-integrals|improper-integrals|closed-form|
0
Distributional representation of a function defined at a single point
During some calculation regarding a physics problem, I came across a function, which we will denote as $u(x)$ , which is $1/2$ if $x=y$ , where $y$ is a positive real number, and $0$ elsewhere. Since I then have to integrate this function $u$ , I need a suitable representation in order to do the actual calculation. To give a little bit of context, the function $u$ derives from the time-average of the product of two trigonometric functions at different frequencies. In particular, $$ \langle \cos (x t - \phi) \, \cos (y t - \phi') \rangle := \lim_{T\rightarrow+\infty} \frac{1}{T} \int_{0}^{T} dt \, \cos (x t - \phi) \, \cos (y t - \phi') $$ The time-average is $\ne 0$ only when the frequencies are the same, and so here's where $u$ comes from. Now, I know that, in the "discrete" case, i.e. when $x$ and $y$ are integers, the time average is proportional to the Kronecker delta $\delta_{x,y}$ . However, in this case $x$ and $y$ are continuous variables. My "physical" guess is that the Kronecker delta should be replaced by something like a Dirac delta $\delta(x-y)$ , but I am not sure how to make this precise.
If $u$ is really defined by $u(y) = 1/2$ and $\forall x\neq y,\, u(x) = 0$ , then $$\int u(x)dx = 0.$$ No "representation" is going to change the fact that the integral of $u$ is zero. In particular, this function $u$ is not the Dirac delta. If the integral of $u$ is not zero, then $u$ is not the function defined above.
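For intuition, the time average from the question can be approximated numerically (a rough trapezoid-rule sketch of our own; the frequencies and phases below are arbitrary choices): for equal frequencies it approaches $\frac12\cos(\phi-\phi')$, while for distinct frequencies it decays to $0$, consistent with $u$ vanishing off a single point and having integral zero.

```python
import math

def time_avg(x, y, phi, phip, T=2000.0, n=200000):
    # trapezoid rule for (1/T) ∫_0^T cos(xt − φ) cos(yt − φ') dt
    h = T / n
    s = 0.5 * (math.cos(-phi) * math.cos(-phip)
               + math.cos(x*T - phi) * math.cos(y*T - phip))
    s += sum(math.cos(x*k*h - phi) * math.cos(y*k*h - phip) for k in range(1, n))
    return s * h / T

# equal frequencies: average → (1/2)·cos(φ − φ')
assert abs(time_avg(3.0, 3.0, 0.4, 0.1) - 0.5 * math.cos(0.3)) < 1e-2
# distinct frequencies: average → 0
assert abs(time_avg(3.0, 3.5, 0.4, 0.1)) < 1e-2
```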
|physics|distribution-theory|
1
Must an operator $\mathcal{O}$ satisfying $\mathcal{O} (f + c) = \mathcal{O} (f)$ for any constant function $c$ be a differential operator?
Consider an operator (not necessarily linear) $\mathcal{O}: C^{\infty} (\mathbb{R}) \to C^\infty(\mathbb{R})$ which satisfies the following property for any function $f \in C^\infty(\mathbb{R})$ and any constant function $c \in C^\infty(\mathbb{R})$ \begin{equation} \mathcal{O} \left(f + c \right) = \mathcal{O} \left(f \right) \end{equation} Must the operator $\mathcal{O}$ be a differential operator? If not, what additional conditions should $\mathcal{O}$ have for it to have to be a differential operator? For example, would $\mathcal{O}$ being a linear operator suffice?
No; there are many many linear operators which satisfy this condition. For example, let $\mathcal{O}$ be the operator which sends a smooth function $f$ to the constant function with value $f(1)-f(0)$ . See here for a characterization of the derivative, which then leads to a characterization of differential operators.
|functional-analysis|operator-theory|differential-operators|
0
Why this configuration is not possible for the Tower of Hanoi?
I am studying the Tower of Hanoi, and as I read through some posts on here about the proof of optimality for the optimal solution, I found a comment that said there cannot be deadlocks in the TH game. I took it to mean that the game is always solvable and set out to try proving this statement. However, I ended up with a legal-looking configuration that seems unsolvable. Consider a game with $3$ pegs and $7$ disks, whose sizes are represented by $1, 2, 3, 4, 5, 6, 7$ . $$\begin{array}[b]{c} 2 \\ 3 \\ \text{Peg 1} \\ \end{array} \quad \begin{array}[b]{c} 1 \\ 7 \\ \text{Peg 2} \\ \end{array} \quad \begin{array}[b]{c} 4 \\ 5 \\ 6 \\ \text{Peg 3} \\ \end{array} $$ I believe this configuration is not possible. However, I can't seem to find a way to prove it. Please advise! I'm not sure if I should start a separate thread for this, but I'd greatly appreciate it if someone could provide some insights into how one may approach proving the solvability of the TH game, just in case I couldn't prove it myself.
Disks $1$ through $6$ can be sorted and stacked onto peg $3$ with the following sequence of moves: $$1 \to 3 \\ 2 \to 2 \\ 1 \to 2 \\ 3 \to 3 \\ 1 \to 1 \\ 2 \to 3 \\ 1 \to 3, $$ where the notation $a \to b$ means moving disk $a$ to peg $b$ . Since it is possible to sort the first $6$ disks onto a single peg, it is possible to move disk $7$ to a different peg--in this case, peg $1$ if it was not already on $1$ . If disk $7$ was already on peg $1$ , then the standard sequence of moves of the stack $1$ - $6$ on peg $3$ will move it onto disk $7$ , solving the puzzle.
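The move sequence can be replayed mechanically to confirm that every move is legal and that the final position has disks $1$ through $6$ sorted on peg $3$ (a small simulation of our own; pegs are lists with the top disk last):

```python
# Replay of the move sequence; pegs are lists with the top disk last.
pegs = {1: [3, 2], 2: [7, 1], 3: [6, 5, 4]}   # the configuration in question

def move(disk, dest):
    # find the peg whose top disk is `disk`, then move it, checking legality
    src = next(p for p, s in pegs.items() if s and s[-1] == disk)
    assert not pegs[dest] or pegs[dest][-1] > disk, "illegal move"
    pegs[dest].append(pegs[src].pop())

for disk, dest in [(1, 3), (2, 2), (1, 2), (3, 3), (1, 1), (2, 3), (1, 3)]:
    move(disk, dest)

assert pegs[3] == [6, 5, 4, 3, 2, 1] and pegs[2] == [7] and pegs[1] == []
print("disks 1-6 stacked on peg 3; disk 7 alone on peg 2")
```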
|discrete-mathematics|computer-science|
1
$Log(z^2)=2Log(z)$ not always satisfied
We have that the principal value of the logarithm in complex numbers is given by: $$Ln(z)=\log_e(r)+i\theta$$ with $-\pi < \theta \leq \pi$ . However, it can be seen that the relationship $Log(z^2)=2Log(z)$ is not always satisfied when considering the principal branch. For example, I realized that for $z=1+i$ the relationship is valid, but for $z_1=-1+i$ it is not. After a long time I realized that this happens because $Arg(z_1^2)$ is $\dfrac{-\pi}{2}$ while $Arg(z_1)$ is $\dfrac{3\pi}{4}$ , that is, $Arg(z_1^2)$ was pushed back into the principal range by the restriction $-\pi < \theta \leq \pi$ . Now, I'm not sure how to find every value of $z$ that satisfies this equality. I believe that $z$ is in the first or fourth quadrants.
As you note, the only problem is with the angle. And the discontinuity in the number we assign to the angle happens along the negative $x$ -axis. I suggest you start with a $z$ that makes a zero angle with the positive $x$ -axis, and track what happens to the angle made by both $z$ and $z^2$ as you continuously increase the angle it makes. You should be able to see when things break down. Then you can also do the same thing as the angle gets increasingly negative to figure out the whole range.
|complex-analysis|logarithms|
0
Maximum interval of definition of solution of ODE
I'm studying first order ODEs, and I have the following theorem: Let $$ \left\{ \begin{aligned} y' &= f(y, t) \\ y(t_0) &= y_0 \end{aligned} \right. \tag{1} $$ with $(t_0, y_0)\in D\subset\mathbb{R}^2$ and $D$ an open set, be a Cauchy problem for a first order ODE, for which existence and uniqueness of the solution is granted in a neighborhood $I$ of $t_0$ . Then the solution is defined in a "maximal interval" $(t_{\mathrm{min}}, t_{\mathrm{max}})$ . Outside this interval, the solution isn't defined or is outside $D$ . Now, my problem with this theorem is that I don't understand the purpose of finding this interval where a solution is defined, and I'll articulate it in two points: What does the theorem mean by "definition" of a solution $\phi(t)$ ? Does it really mean "where the analytical expression of $\phi(t)$ is defined" or does it mean "where the function $\phi(t)$ found is also a solution"? Because in the second case, it could be that the expression for $\phi(t)$ is defined on a subset of $\mathbb{R}$ larger than the maximal interval, even though it solves the ODE only on $(t_{\mathrm{min}}, t_{\mathrm{max}})$ .
It's not a question of an "analytical expression": there's no guarantee that a closed-form expression for the solution exists at all. You simply want to find the largest interval in which this Cauchy problem has a solution. Typically, as $t$ approaches an endpoint of such an interval, $(t, y(t))$ might approach a point where the function $f(y,t)$ is undefined or discontinuous, or $y(t)$ might go off to $\pm \infty$ .
|ordinary-differential-equations|analysis|
1
Sequence of 1-Lipschitz functions pointwise converge
I'm asked to prove the following: Let $(X, d_x)$ and $(Y, d_y)$ be two metric spaces. Let $D \subset X$ be dense. Show the following: If $f_1, f_2, \ldots$ is a sequence of $1$ -Lipschitz functions so that there is another $1$ -Lipschitz function, $g$, where $\lim f_i(d) = g(d)$ for all $d \in D$ then $f_1, f_2, ...$ converge pointwise to $g$ . This is my first time encountering Lipschitz functions, but I do know of the Arzela-Ascoli Theorem, which seems relevant. My version of the theorem is as follows: For a sequence $f_1$ , $f_2$ , ... of continuous functions $X$ to $Y$, if: $X$ is $\sigma$ -compact $Y$ is complete The sequence of functions is point-wise equicontinuous The closure of the sequence of functions is totally bounded Then there exist $n_1,n_2, ...$ such that $f_{n_1}, f_{n_2}, ...$ converge pointwise to a function $g$. From Wikipedia, I know that 1-Lipschitz functions are uniformly continuous, so the sequence of functions as a whole is equicontinuous, so condition (3) of my theorem is satisfied. However, I am not sure how to verify the remaining conditions, and in any case the theorem would only give a convergent subsequence, not convergence of the whole sequence to $g$.
To establish pointwise convergence, we need to show that for every $x \in X$ and for any $\epsilon > 0$ , there exists an $N \in \mathbb{N}$ such that for all $i \geq N$ , $d_Y(f_i(x), g(x)) < \epsilon$ . Since $D$ is dense in $X$ , for any $x \in X$ there exists a sequence $\{d_n\}_{n \in \mathbb{N}} \subset D$ converging to $x$ in the sense that $\lim_{n \to \infty} d_X(d_n, x) = 0$ . Given that $g$ is $1$ -Lipschitz, for any $d_n \in D$ and $x \in X$ , we have $$ d_Y(g(d_n), g(x)) \leq d_X(d_n, x). $$ As $n \to \infty$ , this implies $g(d_n)$ converges to $g(x)$ , i.e., $\lim_{n \to \infty} g(d_n) = g(x)$ . For each $f_i$ , also being $1$ -Lipschitz, we analogously have $$ d_Y(f_i(d_n), f_i(x)) \leq d_X(d_n, x). $$ Thus, as $n \to \infty$ , $f_i(d_n)$ converges to $f_i(x)$ for each $i$ . Now fix $x \in X$ and $\epsilon > 0$ . First choose $n$ large enough that $d_X(d_n, x) < \epsilon/3$ . Since $\lim_{i \to \infty} f_i(d_n) = g(d_n)$ , there exists $N \in \mathbb{N}$ such that for all $i \geq N$ , $d_Y(f_i(d_n), g(d_n)) < \epsilon/3$ . Then, for all $i \geq N$ , $$ d_Y(f_i(x), g(x)) \leq d_Y(f_i(x), f_i(d_n)) + d_Y(f_i(d_n), g(d_n)) + d_Y(g(d_n), g(x)) \leq 2\,d_X(d_n, x) + \epsilon/3 < \epsilon, $$ which is exactly the required pointwise convergence.
|real-analysis|lipschitz-functions|equicontinuity|arzela-ascoli|
1
Total Sitting Arrangements
There are 5 people sitting on 5 chairs arranged in a straight line, all facing north. Everyone gets up and can do only one of the following things at a time. (i) Sit back again in their original chair. (ii) Turn their chair $180^\circ$ to face south and then sit on it. (iii) Sit on the chair next to their original chair. How many sitting arrangements are possible? I have a doubt in this question. I think the answer is 35, but I still have doubts. So I request my teachers to clear my doubts. It is a multiple choice question where the options are a) 35 b) 70 c) 56 d) 89.
Assuming each person can perform at most one action in total: Claim: If some person $x$ were to change seats to a seat to their side, the person $y$ whose seat they took must then change seats to where $x$ came from. Suppose otherwise. If $y$ stayed still, you have two people in the same seat. That presumably can't happen. So then $y$ must be changing seats. Supposing without loss of generality that both $x$ and $y$ moved to the right, then since no person can move further than one seat away you will have more seats than people on their left, which again cannot happen. So, any and all seat exchanges must be between pairs of adjacent people. In the event there was only one pair exchanging seats, choose which pair it was: (1,2), (2,3), (3,4) or (4,5). Then for all remaining people, choose if they did or did not rotate their chair. This gives $4\times 2^3 = 32$. In the event two seat exchanges took place, pick which adjacent pairs exchanged seats, it being ((1,2),(3,4)), ((1,2),(4,5)), or ((2,3),(4,5)). Then, for the one remaining person, choose whether or not they rotated their chair: $3\times 2 = 6$. Finally, if nobody exchanged seats, each of the five people independently chooses whether to rotate: $2^5 = 32$. In total: $32 + 6 + 32 = 70$, option (b).
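The count $32 + 6 + 32 = 70$ can be confirmed by brute force over all action assignments (our own check); an assignment is valid exactly when the resulting seats form a permutation of the five chairs, and movers are assumed to keep facing north, as in the problem statement:

```python
from itertools import product

# Actions per person: 0 = sit back facing north, 1 = rotate to face south,
# 2 = move one chair left, 3 = move one chair right (movers face north).
count = 0
for actions in product(range(4), repeat=5):
    targets, ok = [], True
    for i, a in enumerate(actions):
        t = i + {0: 0, 1: 0, 2: -1, 3: 1}[a]
        if not 0 <= t < 5:
            ok = False
            break
        targets.append(t)
    if ok and sorted(targets) == [0, 1, 2, 3, 4]:
        count += 1
print(count)  # 70
```

Each valid action tuple determines a distinct arrangement (who sits where, facing which way), so the count is exactly the number of arrangements.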
|combinatorics|permutations|
1
Construct a manufactured solution of Poisson's equation with Chebyshev/Fourier expansions
I am solving a nonlinear Poisson's equation numerically using a mixed Chebyshev/Fourier spectral method, with $x$ periodic and $y$ nonperiodic. I am trying to test my current numerical method/solver by using the method of manufactured solutions (MMS). Additionally, I am applying a coordinate mapping, rescaling my Chebyshev differentiation matrices to an arbitrary domain $[a,b]$ by performing the change of variables $$ y_L = (1/2)\left[ (b-a)y+(b+a) \right] $$ Thus, my Chebyshev derivatives can be rewritten as $$ D_L = D \left[ \frac{2}{b-a} \right] $$ The two above expressions were taken from the Weideman and Reddy paper found in this post: Are there any alterations for the Chebyshev Differentiation Matrices on an arbitrary domain [a,b]? I) As an initial step I solved the linear Poisson's equation with the rescaled Chebyshev derivatives: $$ \nabla^2 u = S $$ where the BCs are periodic in $x$ and zero Dirichlet in $y$, i.e. $u(x,a)=0$ & $u(x,b)=0$ .
I ended up getting the correct form of $n(x,y)$ that satisfies the BCs in my main post: \begin{equation} n(x,y)=\left(\frac{2}{b-a}\right)\left[y-\left(\frac{a+b}{2}\right) \right]^2\sin{3Ax}+10 \end{equation}
|partial-differential-equations|fourier-series|poissons-equation|chebyshev-polynomials|
1
Limit as $x\to \infty$ of a function given in integral form
Consider $F(x) = \int_0^{x^2} \frac{t^2+1}{t^4+2t^2+4}dt$ . I want to study $$ \lim_{x\to\infty} F(x), \quad \lim_{x\to\infty} \frac{F(x)}{x} $$ Ideas: First, we can observe that $f(t) = \frac{t^2+1}{t^4+2t^2+4}$ is a continuous function over $\mathbb{R}$ , so $G(x) = \int_0^{x} \frac{t^2+1}{t^4+2t^2+4}dt $ is well defined, continuous, differentiable, and $G'=f$ . It is clear that $F = G \circ g$ , with $g(x) = x^2$ , so $F$ is also differentiable and we can easily compute its derivative using the Chain Rule. If I find that $\lim_{x\to\infty} F(x) = \infty$ , then I could apply L'Hôpital's Rule and study the limit of $F'(x)$ , which is $0$ . Can you help me with $\lim_{x\to\infty} F(x)$ ? I'm stuck with this part.
As pointed out in the comments, we're just computing $$I := \int_0^\infty \frac{(t^2 + 1) \,dt}{t^4 + 2 t^2 + 4} .$$ Since the integrand is even, $$2 I = \int_{-\infty}^\infty \frac{(t^2 + 1) \,dt}{t^4 + 2 t^2 + 4} ,$$ and the latter integral can be evaluated using a standard residue calculus computation. Let $\Gamma_R$ be the boundary of the half-disk $\{(x, y) \mid x^2 + y^2 \leq R^2, y \geq 0\}$ , oriented anticlockwise. A standard estimate shows that the integral of $f(z) = \frac{z^2 + 1}{z^4 + 2 z^2 + 4}$ over the semicircular part of $\Gamma_R$ is $O(R^{-1})$ . So, for large enough $R$ , $$2I = \oint_{\Gamma_R} f(z) \,dz = 2 \pi i \sum_{z^*} \operatorname{Res}\left(f(z); z^*\right),$$ where $z^*$ varies over the poles of $f(z)$ in the upper half-plane, namely, $$z_\pm = \sqrt{2} \left(\pm \frac12 + \frac{\sqrt3}2 i\right) .$$ Computing the residues directly gives $$\operatorname{Res}\left(f(z); z_\pm\right) = \lim_{z \to z_\pm} (z - z_\pm) f(z) = \frac{1}{4 \sqrt 2} \left(\pm\frac12 - \frac{\sqrt 3}{2} i\right),$$ so the residues sum to $-\frac{\sqrt 3}{4 \sqrt 2} i$ and $$2I = 2 \pi i \left(-\frac{\sqrt 3}{4 \sqrt 2} i\right) = \frac{\pi \sqrt 3}{2 \sqrt 2}, \qquad \text{i.e.} \qquad \lim_{x\to\infty} F(x) = I = \frac{\pi \sqrt 6}{8}.$$ Since this limit is finite, $\lim_{x\to\infty} F(x)/x = 0$ immediately, no L'Hôpital needed.
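Carrying the residue sum to the end gives $I = \frac{\pi\sqrt 6}{8} \approx 0.9619$ (our own evaluation of the limit); a quick numeric check, mapping $[0,\infty)$ to $[0,1)$ by $t = u/(1-u)$ and applying Simpson's rule:

```python
import math

def f(t):  # the integrand
    return (t*t + 1) / (t**4 + 2*t*t + 4)

# map [0, ∞) to [0, 1) via t = u/(1−u), dt = du/(1−u)^2
def g(u):
    if u == 1.0:
        return 1.0  # limit: f(t) ~ 1/t^2 cancels the Jacobian
    t = u / (1 - u)
    return f(t) / (1 - u)**2

# composite Simpson's rule on [0, 1]
n = 20000
h = 1.0 / n
I = g(0.0) + g(1.0) + sum((4 if k % 2 else 2) * g(k*h) for k in range(1, n))
I *= h / 3

assert abs(I - math.pi * math.sqrt(6) / 8) < 1e-8
print(f"I ≈ {I:.10f}, π√6/8 ≈ {math.pi*math.sqrt(6)/8:.10f}")
```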
|real-analysis|calculus|analysis|definite-integrals|
1
Prove that it exists $i \neq j$ s.t. $P(A_i \cap A_j) \geq \frac{nc^2-c}{n-1}$ with $P(A_i) \geq c $
Question: Let $A_1,A_2,...,A_n \subset \Omega $ be a sequence of events such that $P(A_i) \geq c $ for all $i$ . Prove that there exist $i \neq j$ s.t. $P(A_i \cap A_j) \geq \frac{nc^2-c}{n-1}$ . I have tried different ways, including the following one, but I did not succeed to conclude. 1- First note that: $\int (\sum_{1 \leq i \leq n} I_{A_i})^2dP= \sum_{i \neq j} \int I_{A_i}I_{A_j}dP + \sum_{i = j} \int I_{A_i}^2dP= \sum_{i \neq j} P(A_i \cap A_j) + \sum_{i = j}P(A_i)$ 2- Now we know that $\sum_{i = j}P(A_i)$ is the sum of $n$ elements, thus $\sum_{i = j}P(A_i) \geq nc$ 3- We know too that $\sum_{i \neq j} P(A_i \cap A_j)$ is the sum of $n(n-1)$ elements. Moreover, suppose for contradiction that $P(A_i \cap A_j) < \frac{nc^2-c}{n-1}$ for all $i \neq j$ . But after that I don't succeed to continue. Can someone help me please? Thanks for your help.
HINT: It is enough to show the following fact: if $\sum P(A_i) = s$ , then $\sum_{i<j} P(A_i \cap A_j) \ge \binom{s}{2} = \frac{s(s-1)}{2}$ , as follows. Consider the polytope formed by all possible values $\left(\sum P(A_i), \sum_{i<j} P(A_i \cap A_j)\right)$ . It is the image under a linear map of a $(2^n-1)$-dimensional simplex consisting of all the possible $(P(A_1^{\epsilon_1} \cap A_2^{\epsilon_2} \cap \ldots \cap A_n^{\epsilon_n}))$ , where $\epsilon_k = \pm 1$ , $A_k^1 = A_k$ , $A_k^{-1} = A_k^c$ ( the complement). The vertices of the simplex ( $2^n$ of them) correspond to the $A_i$ being $\emptyset$ or $\Omega$ ( the total set). The vertices map to $(k, \binom{k}{2})$ , where $k \in \{0,1, \ldots,n\}$ (meaning $k$ of the $A_i$ are $\Omega$ , the other $\emptyset$ ). We conclude: the set of possible values of $\left(\sum P(A_i), \sum_{i<j} P(A_i \cap A_j)\right)$ is the convex hull of $(k, \binom{k}{2})$ , with $0 \le k \le n$ . Notice that these points form a convex polygon with $n+1$ vertices, inscribed in the parabola $(x, \frac{x(x-1)}{2})_{x\ge 0}$ ; since the chords of a convex curve lie above it, every point of the hull satisfies $\sum_{i<j} P(A_i \cap A_j) \ge \binom{s}{2}$ . Now the inequality follows: take $s = nc$ ; since there are $\binom{n}{2}$ pairs, some pair satisfies $$P(A_i \cap A_j) \ \ge\ \binom{nc}{2} \Big/ \binom{n}{2} \ =\ \frac{nc(nc-1)}{n(n-1)} \ =\ \frac{nc^2-c}{n-1}.$$
|probability|probability-theory|measure-theory|inequality|
0
The probability distribution for the sum of a large number of point scatterers
I have a number of independent and identically distributed random variables $X_{1},X_{2},...,X_{n}$ , uniformly distributed on the interval $[0,2\pi)$ . I am interested in understanding the distribution of $\left|\sum_{j=1}^{n}e^{iX_{j}}\right|$ , particularly as $n\to\infty$ . Each term represents the phase due to a point scatterer, so I'm interested in understanding how they combine when there are a large number of them. Clearly, if $n=1$ , the distribution is a Dirac delta centred at unity. If $n=2$ , I'm able to use the fact that $\left|e^{iX_{1}}+e^{iX_{2}}\right|=2\left|\cos\left(\frac{X_{1}-X_{2}}{2}\right)\right|$ and the CDF method to show that the distribution is $\frac{2}{\pi\sqrt{4-x^{2}}}$ for $0 < x < 2$ . However, beyond this I'm struggling. I'm not sure if there's an inductive way to get the behaviour as $n\to\infty$ or whether there's a way of getting there without calculating the distribution for each $n$ along the way. Thanks in advance for any help.
$$\begin{align} L&:= \left|\sum_{j=1}^n e^{iX_j}\right| \\ &= \left|\sum_{j=1}^n \cos(X_j) + i \sum_{j=1}^n \sin(X_j)\right|\\ &=\sqrt{\left(\sum_{j=1}^n \cos(X_j)\right)^2+\left(\sum_{j=1}^n \sin(X_j)\right)^2}\\ &=n\cdot \sqrt{\left(\underbrace{\frac{1}{n}\sum_{j=1}^n \cos(X_j)}_{P_n}\right)^2+\left(\underbrace{\frac{1}{n}\sum_{j=1}^n \sin(X_j)}_{Q_n}\right)^2}\\ \end{align}$$ According to the CLT, we have $$\sqrt{n}\left(\pmatrix{P_n\\Q_n}-\mu \right)\xrightarrow[n\to+\infty]{\mathcal{D}}\mathcal{N}_2\left(\pmatrix{0\\0},\Sigma\right)$$ where $\mu \in \mathbb{R}^{2}$ is the mean of $(\cos(X),\sin(X))'$ and $\Sigma\in \mathbb{R}^{2\times 2}$ is the covariance matrix of $(\cos(X),\sin(X))'$ . It's easy to calculate that $$\mu = \pmatrix{0\\0} \hspace{1cm}\text{and }\hspace{1cm} \Sigma = \pmatrix{1/2 & 0 \\ 0 &1/2}$$ then we deduce $$\color{red}{ \sqrt{n}\pmatrix{P_n\\Q_n} \xrightarrow[n\to+\infty]{\mathcal{D}}\mathcal{N}_2\left(\pmatrix{0\\0},\pmatrix{1/2 & 0 \\ 0 &1/2}\right)}$$ By the continuous mapping theorem, $\frac{L}{\sqrt n} = \left\| \sqrt{n}\pmatrix{P_n\\Q_n} \right\|$ therefore converges in distribution to the norm of a $\mathcal{N}_2\left(0,\frac12 I_2\right)$ vector, i.e. to the Rayleigh distribution with density $f(x) = 2x\,e^{-x^2}$ for $x>0$ . In particular, the modulus grows like $\sqrt n$ , not like $n$ .
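A Monte Carlo sketch (our own, with arbitrary sample sizes) is consistent with a limiting Rayleigh law with density $2x e^{-x^2}$, whose mean is $\sqrt{\pi}/2 \approx 0.886$:

```python
import math, random

# Monte Carlo: sample L/√n many times and compare its mean with the
# Rayleigh mean √π/2 ≈ 0.886 (sample sizes below are arbitrary choices).
random.seed(0)
n, trials = 400, 2000
vals = []
for _ in range(trials):
    c = s = 0.0
    for _ in range(n):
        x = random.uniform(0.0, 2 * math.pi)
        c += math.cos(x)
        s += math.sin(x)
    vals.append(math.hypot(c, s) / math.sqrt(n))

mean = sum(vals) / trials
assert abs(mean - math.sqrt(math.pi) / 2) < 0.05
print(f"E[L/√n] ≈ {mean:.3f}, Rayleigh mean √π/2 ≈ {math.sqrt(math.pi)/2:.3f}")
```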
|probability|probability-distributions|
1
$Log(z^2)=2Log(z)$ not always satisfied
We have that the principal value of the logarithm in complex numbers is given by: $$Ln(z)=\log_e(r)+i\theta$$ with $-\pi < \theta \leq \pi$ . However, it can be seen that the relationship $Log(z^2)=2Log(z)$ is not always satisfied when considering the principal branch. For example, I realized that for $z=1+i$ the relationship is valid, but for $z_1=-1+i$ it is not. After a long time I realized that this happens because $Arg(z_1^2)$ is $\dfrac{-\pi}{2}$ while $Arg(z_1)$ is $\dfrac{3\pi}{4}$ , that is, $Arg(z_1^2)$ was pushed back into the principal range by the restriction $-\pi < \theta \leq \pi$ . Now, I'm not sure how to find every value of $z$ that satisfies this equality. I believe that $z$ is in the first or fourth quadrants.
Let $\DeclareMathOperator{\Arg}{Arg}\DeclareMathOperator{\Log}{Log} \Arg\colon \mathbb C\setminus\{0\}\to (-\pi,\pi]$ be the principal value of the multi-valued argument function and let $\Log(z) = \ln|z| + i \Arg(z)$ be the principal branch of the logarithm. Then, we have \begin{align}\Log(z^2) = 2\Log(z) &\iff \ln|z^2| + i \Arg(z^2) = 2\ln|z| + 2i\Arg(z)\\ &\iff \Arg(z^2) = 2\Arg(z).\end{align} Now, if $\Arg(z) = \theta$ , then $z = re^{i\theta}$ and $z^2 = r^2e^{2i\theta} = r^2e^{i(2\theta + 2n\pi)}$ , $n\in\mathbb Z$ , so we conclude that $\Arg(z^2) = 2\theta + 2k\pi$ , where $k$ is the unique integer such that $2\theta + 2k\pi\in (-\pi,\pi]$ . From here we can conclude that $$\Arg(z^2) = 2\Arg(z) \iff 2\theta \in (-\pi,\pi],$$ i.e. the identity holds exactly when $\Arg(z) \in \left(-\frac\pi2, \frac\pi2\right]$ : the open right half-plane together with the positive imaginary axis.
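The criterion $2\theta \in (-\pi,\pi]$, i.e. $\mathrm{Arg}(z) \in (-\pi/2, \pi/2]$, can be spot-checked with Python's `cmath`, whose `log` uses exactly this principal branch:

```python
import cmath

# cmath.log uses the principal branch, with Arg in (−π, π].
def identity_holds(z):
    return abs(cmath.log(z * z) - 2 * cmath.log(z)) < 1e-12

assert identity_holds(1 + 1j)        # Arg = π/4, in (−π/2, π/2]
assert identity_holds(2 - 1j)        # Arg ≈ −0.46, in (−π/2, π/2]
assert identity_holds(1j)            # Arg = π/2: boundary case, included
assert not identity_holds(-1 + 1j)   # Arg = 3π/4: fails, as in the question
assert not identity_holds(-1 - 1j)   # Arg = −3π/4: fails
```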
|complex-analysis|logarithms|
1
A questions on complete endomorphisms of $2^{\mathbb{N}}$
Let $A$ and $B$ be complete Boolean algebras and $f:A\to B$ a homomorphism. Recall that $f$ is complete if it preserves arbitrary (including possibly infinite) joins. Does there exist a non-complete endomorphism of $2^{\mathbb{N}}$ that preserves arbitrary disjoint unions?
If a homomorphism $f:2^{\mathbb{N}}\to 2^{\mathbb{N}}$ preserves disjoint unions, then it is determined by what it does on singletons by the formula $f(A)=\bigcup_{n\in A}f(\{n\})$ , since each set is the disjoint union of the singletons it contains. But if $f$ is given by this formula, then it clearly preserves arbitrary unions as well, since $f(\bigcup_i A_i)=\bigcup_{n\in \bigcup_i A_i} f(\{n\})=\bigcup_i \bigcup_{n\in A_i}f(\{n\})=\bigcup_i f(A_i)$ .
|boolean-algebra|
1
Two quadratic functions with min F(x) > min G(x). Does this imply that F(x) > G(x) for all x?
Let us suppose that $F(x)=ax^{2}+bx+c$ and $G(x)=Ax^2+Bx+C$ where $a, b, c, A, B, $ and $C$ are real numbers with $a > 0$ and $A > 0$ . Let us suppose that the minimum value of the quadratic function defined by $F(x)$ is greater than or equal to the minimum value of the quadratic function defined by $G(x)$ . Does it necessarily hold that $G(x) \leq F(x)$ for every real number $x$ ? Thanks in advance for sharing your thoughts on this question with me.
For any quadratic, we can use a change of variables to set $F(x)=x^2$ and $G(x)=m(x-b_1)(x-b_2)$ with $m > 0$ . Since $G(x)$ has a vertex $\leq$ that of $F(x)$ at $(0,0)$ it has two real roots $b_1,b_2$ . It will be convenient not to deal with separate $b_i$ terms so let's set $b_2=kb_1$ . In this way, if $k=1$ the roots are identical, if $k=-1$ the roots are symmetric and the vertex of $G(x)$ is directly below that of $F(x)$ . We examine if the parabolas intersect by setting $x^2=m(x-b)(x-kb)$ , leading to the quadratic: $$x^2(m-1)-xmb(k+1)+mkb^2=0$$ Case 1: $m=1$ The $x^2$ terms cancel, and $x=\frac{bk}{k+1}$ gives the intersection of the parabolas. Only if $k=-1$ does no $x$ satisfy, which is when the parabola of $G(x)$ is shifted directly down from $F(x)$ by a constant. This would be $G(x)=x^2-c$ . Intuitively, if we have identical parabolas $(m=1)$ , the only way to prevent intersection is to shift one vertically, in this case $G(x)$ down. Case 2: $m\neq 1$ $$x=\frac{mb(k+1) \pm b\sqrt{m^2(k+1)^2-4mk(m-1)}}{2(m-1)},$$ so the parabolas intersect whenever the discriminant $m^2(k+1)^2-4mk(m-1)$ is nonnegative. In particular, the answer to the original question is no: intersections, and hence failures of $G(x) \leq F(x)$ , are common, as Case 1 with $k=1$ already shows.
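Not in the original answer, but a concrete counterexample settles the question in the negative: identical parabolas (Case 1 with $k=1$) shifted horizontally, intersecting at $x = bk/(k+1)$.

```python
# F and G have equal minimum value 0, yet neither dominates the other.
F = lambda x: x**2              # min F = 0, attained at x = 0
G = lambda x: (x - 10)**2       # min G = 0, attained at x = 10

assert G(1) > F(1)              # G(1) = 81 > 1  = F(1)
assert G(9) < F(9)              # G(9) = 1  < 81 = F(9): the parabolas cross
```

Here $m=1$, $b=10$, $k=1$, so the single intersection is at $x = 10\cdot 1/2 = 5$, and $G \leq F$ fails on one side of it.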
|geometry|algebra-precalculus|differential-geometry|analytic-geometry|conic-sections|
0
Why is this inequality true? $(a+b)^2\leq 2(a^2+b^2)$
Why is this inequality true? $a,b$ are real numbers. $$ (a+b)^2=a^2+2ab+b^2\leq 2(a^2+b^2) $$ I know $(a+b)^2=a^2+2ab+b^2 \geq 0$, but then?
By the rearrangement inequality, $$(a+b)^2=a^2+ab+ba+b^2\le a^2+aa+bb+b^2=2(a^2+b^2)$$
|calculus|inequality|
0
What is the intuition behind conditional expectation in a measure-theoretic treatment of probability?
What is the intuition behind conditional expectation in a measure-theoretic sense, as opposed to a non-measure-theoretic treatment? You may assume I know: what a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ refers to probability without measure theory really well (i.e., discrete and continuous random variables) what the measure-theoretic definition of a random variable is that a Lebesgue integral has something to do with linear combinations of step functions, and intuitively, it involves partitioning the $y$-axis (as opposed to the Riemann integral, which partitions the $x$-axis). I haven't had time to learn measure-theoretic probability lately due to graduate school creeping in as well as other commitments, and conditional expectation is often covered as one of the last topics in every measure-theoretic probability text I've seen. I have seen notations such as $\mathbb{E}[X \mid \mathcal{F}]$, where I assume $\mathcal{F}$ is some sort of $\sigma$-algebra - but of course, this is exactly the notation I am hoping to build intuition for.
As Oscar suggested, the most common intuition for $\mathbb{E}[X | \mathcal{F}]$ is that it is the best guess of $X$ given the information in $\mathcal{F}$ . However, I find that the alternative intuition that it is the orthogonal projection of $X$ onto a subspace makes it clearer why it is defined the way it is. First, just for clarity's sake, let me set up the orthogonal projection. Let's say you have an inner product space $V$ and $v \in V$ . Then for a subspace $W \subseteq V$ , we can define the orthogonal projection of $v$ onto $W$ as the unique $p_{W}(v) \in W$ such that $$ \left\langle v, w \right\rangle = \left\langle p_{W}(v), w \right\rangle $$ for all $w \in W$ . This is based on the idea that a vector can be determined entirely by its inner product with other vectors. That is, the projection $p_{W}(v)$ is unique because if there were some other candidate $z \in W$ we would have that $\left\langle p_{W}(v) - z, w \right\rangle = 0$ for all $w \in W$ , and taking $w = p_{W}(v) - z$ forces $z = p_{W}(v)$ . Before we move back to random variables, let's consider a space of functions, specifically the square-integrable functions $L^2(\Omega, \mathcal{F}, \mathbb{P})$ , with inner product $\langle X, Y \rangle = \mathbb{E}[XY]$ ; for square-integrable $X$ , the conditional expectation $\mathbb{E}[X \mid \mathcal{G}]$ is then exactly the orthogonal projection of $X$ onto the subspace of $\mathcal{G}$-measurable square-integrable random variables.
|probability-theory|measure-theory|intuition|conditional-expectation|
0
Existence and uniqueness of the minimal polynomyal using the primary decomposition theorem
Let $f: V \rightarrow V$ be a linear transformation. Using the primary decomposition theorem, prove that there exists $v \in V$ such that $m_{f, v}=m_{f}$ . Next, prove that for every polynomial $p$ such that $p \mid m_{f}$ there exists a vector $w$ such that $m_{f, w}=p$ . I would love to write an attempt but I feel pretty lost, sorry. I'm not very familiar with the primary decomposition theorem and I don't really know where to start.
Ok, I made an attempt, but it might be wrong: Existence of $v \in V$ with $m_{f,v}=m_{f}$ : For any $v \in V, m_{f, v}$ divides $m_f$ , since if $m_f(f)=0$ , then $m_f(f)(v)=0 \ \forall \ v \in V$ , implying that $m_{f, v}$ , which annihilates $v$ , must divide $m_f$ . Write $m_f = p_1^{e_1}\cdots p_k^{e_k}$ with the $p_i$ distinct irreducible factors. The primary decomposition theorem states that $V$ can be decomposed into a direct sum of invariant subspaces $V=\bigoplus_{i=1}^k V_i$ , where $V_i = \ker p_i^{e_i}(f)$ , and the minimal polynomial of $f$ restricted to $V_i$ is exactly $p_i^{e_i}$ . In each $V_i$ choose a vector $v_i$ with $m_{f,v_i} = p_i^{e_i}$ (some vector must achieve the top power, since the minimal polynomial of $f|_{V_i}$ is the lcm of the $m_{f,u}$ for $u \in V_i$ , and all of these are powers of $p_i$ ), and set $v=\sum_i v_i$ . Applying $m_{f,v}(f)$ to $v$ and projecting onto each $V_i$ (the projections commute with $f$ ) gives $m_{f,v}(f)(v_i)=0$ , so $p_i^{e_i} \mid m_{f,v}$ for every $i$ ; since these prime powers are pairwise coprime, $m_f \mid m_{f,v}$ , and with the divisibility noted above we conclude $m_{f, v}=m_f$ . Existence of $w$ for a divisor $p \mid m_f$ : write $m_f = p\,q$ and set $w = q(f)(v)$ . Then $p(f)(w) = m_f(f)(v) = 0$ , so $m_{f,w} \mid p$ ; conversely, if $r(f)(w)=0$ then $(rq)(f)(v)=0$ , so $m_f = pq$ divides $rq$ , hence $p \mid r$ , and therefore $m_{f,w}=p$ .
|linear-algebra|
0
Why is this inequality true? $(a+b)^2\leq 2(a^2+b^2)$
Why is this inequality true? $a,b$ are real numbers. $$ (a+b)^2=a^2+2ab+b^2\leq 2(a^2+b^2) $$ I know $(a+b)^2=a^2+2ab+b^2 \geq 0$, but then?
Rewritten: $(a+b) ^2 \le (a+b)^2+(a-b)^2 =2(a^2+b^2)$ Used : $(a-b)^2 \ge 0$
|calculus|inequality|
0
Boundedness condition in Theorem 5.3.2 of Evans
Thanks! I was going through this proof, and it mostly made sense to me. However, I don't understand where in the proof, boundedness of the open set $U$ is required. The construction of a smooth partition of unity does not require a bounded open cover, and the locally finite cover constructed by Evans can also be applied to an unbounded open set $U$ . Can someone explain where the proof fails?
The original proof by Meyers and Serrin does not need boundedness. https://www.semanticscholar.org/paper/H-%3D-W.-Meyers-Serrin/e7a5f41b547afe5b8f38860436446a123c495738?utm_source=direct_link
|partial-differential-equations|sobolev-spaces|
0
Why is this inequality true? $(a+b)^2\leq 2(a^2+b^2)$
Why is this inequality true? $a,b$ are real numbers. $$ (a+b)^2=a^2+2ab+b^2\leq 2(a^2+b^2) $$ I know $(a+b)^2=a^2+2ab+b^2 \geq 0$, but then?
(This is similar to some of the other answers, but I haven't seen this exact form.) $\begin{array}\\ 2(a^2+b^2)-(a+b)^2 &=2(a^2+b^2)-(a^2+2ab+b^2)\\ &=a^2-2ab+b^2\\ &=(a-b)^2\\ &\ge 0\\ \end{array} $
|calculus|inequality|
0
Find the parameter of the elliptic integral of the first kind
Suppose I have: $$\frac{p}{[K(p)-K(\frac{-\pi}{2},p)]^2} = x$$ $K(p)$ being the complete elliptic function of the first kind and $K(\theta,p)$ the incomplete elliptic function of the first kind. How could I determine $p$ as a function of $x$ ? I'm using scipy; I'm more of a programmer than a classical mathematician. I've been looking into the Jacobian elliptic functions, but as I understand it, they are used to determine the angle of the elliptic function knowing the parameter. EDIT 1 As Claude Leibovici noticed, $K(-\frac{\pi}{2},p) = -K(p)$ . So now I'm looking for $p$ , knowing $y$ , in: $$y = \frac{p}{K(p)^2}$$
Using Python, you can first plot $y$ as a function of $p$ :

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.special import ellipk

    p = np.linspace(0, 1, 100)
    K = ellipk(p)
    y = p / K**2
    plt.plot(p, y)

Say you want the values of $p$ giving $y=0.12$ . This graphic allows you to get an idea of the values of $p$ , say $0.4$ and $0.9$ . Then you can use fsolve :

    from scipy.optimize import fsolve

    ytarget = 0.12
    f = lambda x: x / ellipk(x)**2 - ytarget
    fsolve(f, [0.4, 0.9])  # output: array([0.37077373, 0.93649696])
|elliptic-equations|elliptic-functions|
0
Counterexample about injectivity of preimage of a function
Let $f:X\to Y$ be a function between two sets. I am wondering if there is a case where $f^{-1}: \mathscr P(Y) \to \mathscr P(X)$ is not injective? I feel like the fact that $f$ can only give one image to an element of $X$ makes it compulsory for $f^{-1}$ to be injective. Many thanks
Let $X=\{a\}$ , $Y=\{1,2\}$ , and $f(a)=1$ . Then $f^{-1}(\{1\}) = \{a\}=f^{-1}(\{1,2\})$ , but $\{1\}\neq\{1,2\}$ . More generally, $f^{-1}(B_1)=f^{-1}(B_2)$ if and only if $B_1\cap\mathrm{Im}(f) = B_2\cap\mathrm{Im}(f)$ . Thus: Theorem. Let $f\colon X\to Y$ be a function. The inverse image function $f^{-1}$ is one-to-one if and only if $f$ is surjective. Proof. Suppose that $f^{-1}$ is one-to-one, and let $b\in Y$ . Then $f^{-1}(\{b\})\neq f^{-1}(\varnothing) = \varnothing$ , so $b\in\mathrm{Im}(f)$ . Thus, $f$ is surjective. Conversely, assume that $f$ is surjective, and let $B_1\neq B_2$ be subsets of $Y$ . If $b\in B_1\setminus B_2$ , then by the surjectivity of $f$ there exists $a\in X$ such that $f(a)=b$ . Then $a\in f^{-1}(B_1)$ , but $a\notin f^{-1}(B_2)$ , so $f^{-1}(B_1)\neq f^{-1}(B_2)$ . A symmetric argument holds if $b\in B_2\setminus B_1$ . $\Box$
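The theorem can be brute-force checked on small finite sets (an editor's sketch; encoding functions as dicts is just one convenient choice):

```python
from itertools import product

def preimage(f, B):
    # f encoded as a dict X -> Y; returns f^{-1}(B) as a frozenset
    return frozenset(x for x, y in f.items() if y in B)

def powerset(S):
    S = sorted(S)
    return [frozenset(c for c, keep in zip(S, bits) if keep)
            for bits in product([0, 1], repeat=len(S))]

# f^{-1} is injective iff all 2^|Y| preimages are pairwise distinct.
for X, Y in [({0}, {1, 2}), ({0, 1, 2}, {0, 1})]:
    for values in product(sorted(Y), repeat=len(X)):      # every f: X -> Y
        f = dict(zip(sorted(X), values))
        surjective = set(f.values()) == Y
        inj = len({preimage(f, B) for B in powerset(Y)}) == 2 ** len(Y)
        assert surjective == inj
```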
|functions|
1
Tensor product action on vectors: $(f\otimes g) (a\otimes b)=f(a)\otimes g(b)$
Following this related Example of tensor (0,2) acting on two vectors (Nick's answer). I want to ask why in the definition we consider the map $(f\otimes g)\colon V\otimes W \to X\otimes Y$ and not $(f\otimes g)\colon V\times W \to X\otimes Y$ as $(f\otimes g)(a,b)=f(a)\otimes g(b)$ ? Why do we need $V\otimes W$ here? A lot of times I see the notation $(f\otimes g)(a,b)$ as well. Is this some kind of convention we make if we work over fields just because we always have $F\otimes F\cong F$ ? Maybe there is a lot to unpack here, but a reference to how we define the tensor product action on vectors and covectors would also be nice.
You can consider the map $f * g\colon V \times W \to X \otimes Y$ , $(a, b) \mapsto f(a) \otimes g(b)$ (choosing different notation to distinguish the maps, but this is not standard) without issue, but this map is bilinear and not linear. Thus, by the universal property of the tensor product, it induces a linear map $f \otimes g\colon V \otimes W \to X \otimes Y$ , $a \otimes b \mapsto f(a) \otimes g(b)$ . It's often nicer to have this rather than the former since most other maps you're considering are usually linear and you have all these tools for understanding linear maps at your disposal, but $f * g$ and $f \otimes g$ are closely related and "almost the same" (take this with a grain of salt), and therefore one sometimes doesn't distinguish too rigorously.
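In coordinates, the induced map $f\otimes g$ is realized by the Kronecker product of matrices, which makes the defining identity $(f\otimes g)(a\otimes b)=f(a)\otimes g(b)$ checkable numerically (an editor's sketch using NumPy; the matrix shapes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
F = rng.standard_normal((3, 2))   # a linear map f: V -> X in coordinates
G = rng.standard_normal((4, 5))   # a linear map g: W -> Y in coordinates
a = rng.standard_normal(2)        # a vector in V
b = rng.standard_normal(5)        # a vector in W

# (f ⊗ g)(a ⊗ b) = f(a) ⊗ g(b): the mixed-product property of np.kron.
assert np.allclose(np.kron(F, G) @ np.kron(a, b), np.kron(F @ a, G @ b))
```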
|tensor-products|tensors|
0
How to write Tricomi's confluent hypergeometric function in terms of Meijer-G function
I am calculating a closed form expectation and I encountered the Tricomi's confluent hypergeometric function (aka confluent hypergeometric function of the second kind) given by the integral $U\left( a,b,z \right) = \frac{1}{\Gamma\left(a\right)} \int_{0}^{\infty} e^{-zt}t^{a-1}\left(1+t\right)^{b-a-1}\,dt$ . I need to write this in terms of the Meijer-G function $G_{p,q}^{m,n}\left(z\left|\begin{smallmatrix}\mathbf{a}_n, \mathbf{a}_{p-n}\\ \mathbf{b}_m, \mathbf{b}_{q-m}\end{smallmatrix}\right.\right)$ . I used "MeijerGReduce[HypergeometricU[a, b, z], z]" in Wolfram Alpha and got the solution $U(a,b,z) = \frac{G_{1,2}^{2,1}\left(z\left|\begin{smallmatrix}[1-a], [] \\ [0,1-b],[]\end{smallmatrix}\right.\right)}{\Gamma\left(a\right)\Gamma\left(1+a-b\right)} = \frac{\frac{1}{2\pi i}\oint \Gamma\left(-s\right) \Gamma\left(a+s\right) \Gamma\left(1+a-b+s\right)z^{-s}ds }{\Gamma\left(a\right)\Gamma\left(1+a-b\right)}$ However in the book Handbook of Mathematical Functions: With Formulas, Graphs, and Mat
There is no issue at all, except that you made a mistake in expanding the Meijer-G function in your first assertion. Both Abramowitz, Wolfram and Python are correct (and NIST, and Prudnikov et al....) The wrong bit in your question is: $$[^*]\; G_{1,2}^{2,1}\left(z\left|\begin{smallmatrix}[1-a], [] \\ [0,1-b],[]\end{smallmatrix}\right.\right)= \frac{1}{2\pi i}\oint \Gamma\left(-s\right) \Gamma\left(a+s\right) \Gamma\left(1+a-b+s\right)z^{-s}ds $$ By the very definition of the Meijer-G function (NIST handbook 2010 p.415 16.17.1): $$G_{1,2}^{2,1}\left(z\left|\begin{smallmatrix}1-a \\ 0,1-b\end{smallmatrix}\right.\right)=\frac {1}{2\pi i} \int_L \Gamma(-s)\Gamma(a+s)\Gamma(1-b-s) z^s ds$$ So now by using the obvious change of variables $s = -u-a$ : \begin{align} G_{1,2}^{2,1}\left(z\left|\begin{smallmatrix}1-a \\ 0,1-b\end{smallmatrix}\right.\right)&=\frac {1}{2\pi i} \int_L \Gamma(u+a)\Gamma(-u)\Gamma(1+a-b+u) z^{-u-a} du \\ &= \frac {1}{2\pi i}z^{-a} \int_L \Gamma(-u)\Gamma(a+u)\Gamma(1+a-b+u) z^{-u} du. \end{align} That is, the correct expansion is $z^{-a}$ times the contour integral on the right-hand side of $[^*]$ ; the expansion in $[^*]$ is missing this factor of $z^{-a}$ .
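The reduction quoted in the question, $U(a,b,z) = G_{1,2}^{2,1}(z\,|\,1-a;\,0,1-b)/(\Gamma(a)\Gamma(1+a-b))$, can be cross-checked numerically (an editor's sketch using mpmath, whose `hyperu` and `meijerg` implement the two sides; the parameter values are arbitrary):

```python
from mpmath import mp, hyperu, meijerg, gamma

mp.dps = 30
a, b, z = mp.mpf('1.5'), mp.mpf('0.75'), mp.mpf('2.3')

# U(a,b,z) = G^{2,1}_{1,2}(z | 1-a ; 0, 1-b) / (Gamma(a) Gamma(1+a-b)),
# the MeijerGReduce output quoted in the question.
lhs = hyperu(a, b, z)
rhs = meijerg([[1 - a], []], [[0, 1 - b], []], z) / (gamma(a) * gamma(1 + a - b))
assert abs(lhs - rhs) < mp.mpf('1e-20')
```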
|special-functions|hypergeometric-function|
1
How does $(2k+1)^2$ become $4k^2 + 4k + 1$?
I'm currently following a course where the professor did the following simple calculation: $$(2k+1)^2 = 4k^2 + 4k + 1$$ Now, I just do not understand any world where this can be the case. Could someone please explain to me why $(2k+1)\times(2k+1)$ can ever result in only one $1$ , and reduced to $2k$ ? This is so extremely nonsensical to me.
Slightly more complicated, but more generalizable. Binomial Theorem. $(x+y)^n=\sum_{k=0}^n \binom{n}{k}x^ky^{n-k}$ Where $\binom{n}{k}=\frac{n!}{k!(n-k)!}$ So if $n=2$ then $(x+y)^n=\binom{2}{0}x^0y^2+\binom{2}{1}x^1y^1+\binom{2}{2}x^2y^0=x^2+2xy+y^2$ Then let $x=2k$ and $y=1$ . $(2k+1)^2=(2k)^2+2(2k)(1)+1^2=4k^2+4k+1$
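A direct integer check of the expansion (an editor's addition):

```python
# (2k+1)(2k+1) = 2k*2k + 2k*1 + 1*2k + 1*1: the two cross terms give 4k,
# and 1*1 gives the single +1.
for k in range(-100, 101):
    assert (2 * k + 1) ** 2 == 4 * k ** 2 + 4 * k + 1
```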
|algebra-precalculus|
0
How can I prove that $\sqrt{3}-\sqrt{2}-0.32<0$
How can I prove that $$\sqrt{3}-\sqrt{2}-0.32<0$$ This is what I did : We have : $$\sqrt3\simeq1.732 \; ; \; \sqrt2\simeq1.414$$ $$\sqrt3<1.733 \; ; \; \sqrt2>1.413$$ Then we get : $$\sqrt3+(-\sqrt2)<1.733-1.413=0.32$$ $$\sqrt3-\sqrt2<0.32$$ $$\sqrt3-\sqrt2-0.32<0$$ What do you think of this solution, and can you please give me another one without using an approximate value?
We have $$\sqrt{3}-\sqrt{2}-0.32<0 \iff \sqrt3-\sqrt2<\frac{8}{25} \iff \frac{1}{\sqrt3+\sqrt2}<\frac{8}{25} \iff \sqrt3+\sqrt2>\frac{25}{8}$$ which is true, indeed by AM-GM $\frac{\sqrt{3}+\sqrt{2}}2 \ge \sqrt[4]6$ and $$\sqrt[4]6\ge \frac{25}{16} \iff 6\cdot 16^4\ge 25^4 \iff 393216 \ge 390625$$
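The whole chain reduces to exact rational and integer comparisons, which can be checked mechanically (an editor's sketch; the equivalences in the comments follow from $\sqrt3-\sqrt2 = 1/(\sqrt3+\sqrt2)$):

```python
from fractions import Fraction

# Integer form of the final comparison in the AM-GM chain:
assert 6 * 16 ** 4 >= 25 ** 4            # 393216 >= 390625

# Since sqrt(3) - sqrt(2) = 1/(sqrt(3) + sqrt(2)), the claim is equivalent
# to sqrt(3) + sqrt(2) > 25/8, i.e. (squaring) 5 + 2*sqrt(6) > (25/8)^2,
# i.e. 24 > ((25/8)^2 - 5)^2, all checkable in exact rational arithmetic.
t = Fraction(25, 8) ** 2 - 5
assert 0 < t and t ** 2 < 24
```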
|calculus|algebra-precalculus|number-comparison|
0
Surjectivity in inverse limit
Suppose we have two inverse systems $(A_i)$ and $(B_i)$ of abelian groups and we have a homomorphism from one system to other, say $\lambda_i:A_i\rightarrow B_i$ for all $i$ (so that respective commutativity in diagram holds). Even if each $\lambda_i$ is surjective, the map induced between inverse limits $\lambda: \lim A_i \rightarrow \lim B_i$ need not be surjective.(*) Are there some sufficient condition on systems so the $\lambda$ will be surjective? [(*) Example: $A_i=\mathbb{Z}$ with identity maps $A_{i+1}\rightarrow A_i$ , $B_i=\mathbb{Z}/p^i$ with natural map $B_{i+1}\rightarrow B_i$ and the morphism from $A_i$ to $B_i$ given by $\lambda_i:\mathbb{Z}\rightarrow \mathbb{Z}/p^i$ , the natural map].
Your friend here is the Mittag-Leffler Condition. For surjections $\lambda_i:A_i\to B_i$ between directed inverse systems, the kernels $K_i$ again form a directed system. If the $K_i$ satisfy the Mittag-Leffler Condition, then taking the limit will be exact, and so $\lambda:A\to B$ will be surjective.
|abstract-algebra|homological-algebra|exact-sequence|
0
Using Euler-Lagrange equations show that the following EoM can be written as $D_\mu D^\mu\phi+m^2\phi + \lambda\left(\phi^\ast\phi\right)\phi=0$
This post is a follow-up to this previous question . Consider the following Lagrangian: $$\mathcal{L}=\frac14\left(\partial_\mu A_\nu-\partial_\nu A_\mu\right)\left(\partial^\mu A^\nu-\partial^\nu A^\mu\right)+\partial_\mu\phi^\ast\partial^\mu\phi$$ $$+ieA^\mu\left(\phi\partial_\mu\phi^\ast - \phi^\ast\partial_\mu\phi\right) +e^2A_\mu A^\mu\phi^\ast\phi-m^2\phi^\ast\phi-\frac12\lambda\left(\phi^\ast\phi\right)^2\tag{1}$$ The equation of motion obtained from the Lagrangian, $(1)$ , for the scalar field $\phi$ is $$\frac{\partial \mathcal{L}}{\partial \phi^\ast}-\partial_\mu\frac{\partial\mathcal{L}}{\partial\partial_\mu\phi^\ast}=-ieA^\mu\partial_\mu\phi+e^2A_\mu A^\mu\phi-m^2\phi-\lambda\left(\phi^\ast\phi\right)\phi$$ $$-\partial_\mu\left(\partial^\mu\phi+ieA^\mu\phi\right)=0\tag{2}$$ which can be written compactly as $$D_\mu D^\mu\phi+m^2\phi + \lambda\left(\phi^\ast\phi\right)\phi=0\tag{3}$$ I don't understand how in the last line of the quote above the author managed to write the equation of motion $(2)$ in the compact form $(3)$ .
You wrote $D_\mu \phi D^\mu \phi$ instead of $D_\mu D^\mu \phi$ . You also put $\phi$ in the definition of $D_\mu$ , but it is something that acts on $\phi$ ; $D_\mu$ does not have $\phi$ in its expression yet. \begin{eqnarray*} D_\mu D^\mu \phi &=& D_\mu (\partial^\mu \phi + ie A^\mu \phi)\\ &=& (\partial_\mu + ieA_\mu) (\partial^\mu \phi + ie A^\mu \phi)\\ &=& \partial_\mu (\partial^\mu \phi + ie A^\mu \phi)\\ &+& ieA_\mu (\partial^\mu \phi + ie A^\mu \phi)\\ &=& \partial_\mu \partial^\mu \phi + \partial_\mu (ie A^\mu \phi)\\ &+& ieA_\mu \partial^\mu \phi - e^2 A_\mu A^\mu \phi\\ &=& \partial_\mu \partial^\mu \phi + ie \partial_\mu A^\mu \phi + ieA^\mu \partial_\mu \phi\\ &+& ieA_\mu \partial^\mu \phi - e^2 A_\mu A^\mu \phi\\ &=& \partial_\mu \partial^\mu \phi + ie\partial_\mu A^\mu \phi + 2 ieA_\mu \partial^\mu \phi - e^2 A_\mu A^\mu \phi \end{eqnarray*} I used parentheses when $\partial_\mu$ was acting on a product of two things that were spacetime dependent.
|solution-verification|proof-explanation|mathematical-physics|tensors|quantum-field-theory|
1
The equation $ pq = px + qy $ has more than one complete integral.
Show that the equation $ pq = px + qy $ has more than one complete integral. Using Charpit's method I find $az = \frac{1}{2}(y + ax)^{2} + b$ , where $a$ and $b$ are integration constants. How could I find another? Is there any theorem on the existence of more than one complete integral? Please help.
It's possible to obtain other solutions with two arbitrary constants with the ansatz $z=xf(y)+a$ ( $a$ is one of the constants); then $p=z_x=f(y)$ , $q=z_y=xf'(y)$ , and the PDE $pq=px+qy$ becomes $$ f(y)xf'(y)=f(y)x+xf'(y)y. \tag{1} $$ Dividing both sides of $(1)$ by $x$ and rearranging terms, we obtain the ODE $$ f'(y)=\frac{f(y)}{f(y)-y}. \tag{2} $$ Rewriting $(2)$ as $$ \frac{dy}{df}=\frac{f-y}{f}=1-\frac{y}{f}, \tag{3} $$ we obtain a linear ODE for $y(f)$ , whose general solution is $$ y=\frac{f}{2}-\frac{b}{f}. \tag{4} $$ Solving $(4)$ for $f$ , we find two solutions, $$ f_{\pm}(y)=y\pm\sqrt{y^2+b}. \tag{5} $$ This gives us two solutions to the PDE containing two arbitrary constants: $$ z_{\pm}(x,y)=x\left(y\pm \sqrt{y^2+b}\right)+a. \tag{6} $$ Needless to say, the ansatz $z=f(x)y+a$ yields another two solutions, $$ \tilde{z}_{\pm}(x,y)=\left(x\pm \sqrt{x^2+b}\right)y+a. \tag{7} $$
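One can verify symbolically that the solution $z_+$ from $(6)$ satisfies the PDE (an editor's sketch using SymPy; declaring the symbols positive just keeps the square root single-valued):

```python
import sympy as sp

x, y, a, b = sp.symbols('x y a b', positive=True)
z = x * (y + sp.sqrt(y ** 2 + b)) + a    # the solution z_+ from (6)

p = sp.diff(z, x)                         # p = z_x
q = sp.diff(z, y)                         # q = z_y
# Verify that p*q = p*x + q*y holds identically:
assert sp.simplify(p * q - p * x - q * y) == 0
```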
|analysis|partial-differential-equations|
0
Is $\exists x [(P(x) \vee Q(x))\rightarrow R(x)]$ logically equivalent to $\exists x [(P(x) \rightarrow R(x)) \vee (Q(x)\rightarrow R(x))]$?
Is $\exists x [(P(x) \vee Q(x))\rightarrow R(x)]$ logically equivalent to $\exists x [(P(x) \rightarrow R(x)) \vee (Q(x)\rightarrow R(x))]$ ? What about if I replace $\exists$ with $\forall$ ?
No. In fact, $(P(x) ∨ Q(x)) ⇒ R(x)$ is equivalent to $(P(x) ⇒ R(x)) \color{red}{∧} (Q(x) ⇒ R(x))$ . Proof: Suppose $(P(x) ∨ Q(x)) ⇒ R(x)$ . If $P(x)$ , then $P(x) ∨ Q(x)$ , and thus $R(x)$ , which proves $P(x) ⇒ R(x)$ . Likewise, $Q(x) ⇒ R(x)$ . So, $(P(x) ⇒ R(x)) ∧ (Q(x) ⇒ R(x))$ . In the other direction, suppose $(P(x) ⇒ R(x)) ∧ (Q(x) ⇒ R(x))$ . If $P(x) ∨ Q(x)$ , then in the case $P(x)$ we get $R(x)$ by $P(x) ⇒ R(x)$ , and in the case $Q(x)$ , we get $R(x)$ by $Q(x) ⇒ R(x)$ . Thus, $(P(x) ∨ Q(x)) ⇒ R(x)$ . Now you should be able to easily find counterexamples to the equivalence under an $∃$ or $∀$ quantifier.
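A brute-force truth table confirms both the equivalence with $\wedge$ and the failure with $\vee$ (an editor's sketch):

```python
from itertools import product

def implies(p, q):
    return (not p) or q

for P, Q, R in product([False, True], repeat=3):
    lhs = implies(P or Q, R)
    assert lhs == (implies(P, R) and implies(Q, R))   # equivalence with AND

# With OR instead, P = True, Q = False, R = False is a counterexample:
P, Q, R = True, False, False
assert implies(P or Q, R) != (implies(P, R) or implies(Q, R))
```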
|first-order-logic|quantifiers|
1
Theorem 7, Section 4.5 of Hungerford’s Algebra
If $R$ is a ring with identity and $A_R$ , ${}_{R}B$ are unitary $R$ -modules, then there are $R$ -module isomorphisms $$A\otimes_R R\cong A \text{ and } R\otimes_R B\cong B$$ Sketch of proof: Since $R$ is an $R\text{-}R$ bimodule, $R \otimes_R B$ is a left $R$ -module by Theorem 5.5. The assignment $(r,b) \mapsto rb$ defines a middle linear map $R\times B \to B$ . By Theorem 5.2 there is a group homomorphism $\alpha : R\otimes_R B \to B$ such that $\alpha (r\otimes b)=rb$ . Verify that $\alpha$ is in fact a homomorphism of left $R$ -modules. Then verify that the map $\beta :B\to R\otimes_R B$ given by $b\mapsto 1_R\otimes b$ is an $R$ -module homomorphism such that $\alpha \beta =1_B$ and $\beta \alpha = 1_{R\otimes_R B}$ . Hence $\alpha : R \otimes_R B\cong B$ . The isomorphism $A\otimes_R R\cong A$ is constructed similarly. Can we prove this theorem without defining the map $\alpha :R\otimes_R B\to B$ ? To be specific, we will show $\beta:B\to R\otimes_R B$ is a left $R$ -module homomorphism.
As you asked, I will convert my comments into an answer. I guess you can prove the theorem by showing directly that $\beta$ is bijective, but what's the point of that? I mean, you have defined the tensor product using an extremely powerful universal property; why don't you want to use it? If you think about the construction of the tensor product, it should be clear why it is difficult to work with elements of it. The construction I know of the tensor product (but there are others) defines $A\otimes_R B$ as a quotient of an enormous free $\mathbb{Z}$ -module by some submodule which reflects the property you want the tensor product to have, i.e. transforming bilinear maps into linear ones. So what you care about when working with the tensor product is usually only its universal property. What I'm trying to say is that the tensor product is defined by its universal property, so most of the constructions you do with it just rely on this universal property. Thus there really is no point in working with elements.
|abstract-algebra|proof-writing|modules|tensor-products|module-isomorphism|
1
Equivalent statements of a finitely generated module being locally free
Let $M$ be a finitely generated $R$ -module. Prove that the following conditions on $M$ are equivalent: (a) $M$ is locally free over $R$ (i.e. $M_m$ is free over $R_{m}$ for all maximal ideals $m\subset R$ ). (b) For every maximal ideal $m\subset R$ , there is an $f \not \in m$ such that $M_f$ is free over $R_f$ . (c) There exist $f_1,\dots, f_n \in R$ such that $(f_1, \dots, f_n) =1 $ and $M_{f_i}$ is free over $R_{f_i}$ for all $i$ . I was able to prove $(a) \Rightarrow (b)$ . For $(b) \Rightarrow (c)$ , this is what I have: Let $S = \big\{f \in R:M_f\text{ is free over }R_f\big\}$ . Let $(S)$ be the ideal generated by the elements of $S$ . Suppose that $(S) \neq (1)$ . It follows that $(S)$ is contained in some maximal ideal $m$ of $R$ . By our assumption, there exists $f \not \in m$ such that $M_f$ is free over $R_f$ . By the definition of $S$ , this implies that $f \in S$ . Thus, $f \in (S)\subset m$ , a contradiction. We therefore conclude that $(S) = (1)$ . How does $M$ being finitely generated come into play here?
There should be a mistake somewhere in your argument, because condition (a) is saying that $M$ is flat, while condition (c) implies that $M$ is projective, and it is well-known that there are finitely generated modules that are flat but not projective. Example. Let $A=\prod_{i=1}^\infty \mathbb{F}_2$ . Then, $M=A/I$ where $I:=\{(\epsilon_i)_{i}\in A \mid \epsilon_i=0\text{ for almost all $i$}\}$ is flat but not projective over $A$ .
|commutative-algebra|modules|maximal-and-prime-ideals|localization|free-modules|
0
Proof that wave operator isn't hypoelliptic in 1+3 coordinates
I want to prove that the wave operator $P=\partial_t - \Delta_x$ isn't hypoelliptic in $\Bbb R \times\Bbb R^n$ . My attempt is to use the fact that an operator with constant coefficients is hypoelliptic iff (given $E$ a fundamental solution of $P$ ) $\text{sing supp}(E)=\{0\}$ . If we call the family of distributions $E_t(\varphi)=\left\{ \begin{array}{ll} T_t(\varphi)=\frac{t}{4\pi} \int_{S^{n-1}}\varphi(tw)d\sigma(w) & \mbox{if } t \geq 0 \\ 0 & \mbox{if } t < 0 \end{array}\right.$ (for $\varphi$ a test function in $\Bbb R^3$ , where $d\sigma(w)$ is the induced measure in $\Bbb R^3$ by the unit sphere and $w$ the angular variable), then a fundamental solution for $P$ is $E=\int\langle E_t,\psi(t,\cdot)\rangle\,dt$ with $\psi$ a test function in $\Bbb R^{1+3}$ . Now I'm not sure how to prove that the singular support of $E$ is not $\{0\}$ . My attempt is: If $t>0$ , $T_t$ is a compact support distribution with support in $\{x \in\Bbb R^3 : |x|=|t|\}\Rightarrow\operatorname{supp}E=\{(t,x) : t \geq 0 ,|x|=t\}$ . Intuitivel
I think you omitted a $t$ subscript; the wave operator should be $P=\partial_{tt}- \Delta_x$ , or you omitted a $2$ superscript: it should be $P=\partial_t^2- \Delta_x$ . If $P$ is hypoelliptic, then $\mathrm{singsupp}(u) \subseteq \mathrm{singsupp}(Pu)$ for all $u \in \mathcal{D}'(\mathbb{R}\times\mathbb{R}^n)$ . Let $H$ denote the Heaviside step function; the wave operator $P$ is not hypoelliptic since $u(x, t)=H\left(x-x_0-\left(t-t_0\right)\right)$ is a solution of $P{u}=0$ , so $$\mathrm{singsupp}(Pu)=\emptyset$$ but $$\mathrm{singsupp}({u})=\left\{({x}, {t}): x-x_0={t}-{t}_0\right\}$$ is not contained in $\emptyset$ . I found the above in https://www.mat.univie.ac.at/~stein/lehre/SoSem09/distrvo.pdf
|real-analysis|functional-analysis|partial-differential-equations|distribution-theory|wave-equation|
0
Conditional bakery problem
This question is from QuantGuide (Bakery Boxes): A bakery manager uniformly at random selects an integer k between 1 and 4, inclusive. He then chooses from k distinct dessert types at his shop to create a gift basket consisting of k items. Find the expected number of ways that the manager can create the gift basket. My Approach: Let us have the opportunity to select k from 1 to n-1 and the expected value be denoted by $E_{n-1}$ . Where $E_1 = 1$ . With probability $\frac{n-1}{n}$ we select a number less than n and in that case the expected value is $E_{n-1}$ . With probability $\frac{1}{n}$ we select n. In that case, n distinct desserts are used to make a basket of n items. The total number of ways is $n^n$ . Hence \begin{equation} E_n = \frac{n-1}{n}E_{n-1}+\frac{1}{n}n^n = \frac{1}{n}(1+2^2+3^3+ \cdots + n^n) \end{equation} Using this I'm getting 72
Your calculations involve quite a lot of double counting. For example, consider the case when $k = 2$ . Say, the varieties are $d_1$ and $d_2$ . There are only $3$ ways to make the basket, i.e., $d_1d_1$ , $d_2d_2$ , $d_1d_2$ . But you are also counting $d_2d_1$ as a separate basket. Method 1: Finding the number of baskets case by case: $1.$ $k = 1$ : There is only $1$ basket possible $2.$ $k = 2$ : There are $3$ baskets possible (as explained above) $3.$ $k = 3$ : Three subcases follow here:- $\quad$ $3.1$ AAA: $3$ baskets ( $d_1d_1d_1, d_2d_2d_2, d_3d_3d_3$ ) $\quad$ $3.2$ AAB: $6$ baskets ( ${3 \choose 1} \cdot {2 \choose 1}$ ) $\quad$ $3.3$ ABC: $1$ basket ( $d_1d_2d_3$ ) $4.$ $k = 4$ : $5$ subcases follow here: $\quad$ AAAA ( $4$ ), AAAB ( ${4 \choose 1} \cdot {3 \choose 1} = 12$ ), AABC ( ${4 \choose 1} \cdot {3 \choose 2} = 12$ ), AABB ( ${4 \choose 2} = 6$ ), ABCD ( $1$ ). $\quad$ So total baskets in this case = $4 + 12 + 12 + 6 + 1 = 35$ Finally, since each value of $k$ is uniformly likely, the expected number of ways is $\frac{1+3+10+35}{4}=\frac{49}{4}$ .
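The case-by-case tally agrees with counting multisets directly (an editor's sketch; `combinations_with_replacement` enumerates size-$k$ multisets from $k$ types):

```python
from fractions import Fraction
from itertools import combinations_with_replacement

# Multisets of size k drawn from k distinct dessert types:
counts = [sum(1 for _ in combinations_with_replacement(range(k), k))
          for k in range(1, 5)]
assert counts == [1, 3, 10, 35]          # matches the case-by-case tally

# k is uniform on {1, 2, 3, 4}, so the expected number of baskets is:
assert Fraction(sum(counts), 4) == Fraction(49, 4)
```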
|probability-theory|expected-value|conditional-expectation|
1
Finding a bound for existential quantification
Let $\Sigma$ be an arbitrary alphabet, $\mathcal{P}$ denote the set of all prime numbers, and $\omega := \mathbb{N} \cup \{0\}$ Take the following set. $$ L = \left\{ (x, \alpha, \beta) \in \omega \times \Sigma^{*2} : \left( \exists t \in \mathcal{P} \right) ~ \alpha^{(x-2)(|\alpha| - 1)} = \beta^t \right\} $$ I was requested to prove that the set is computable in the following way: $a.$ Prove that the predicate being quantified is primitive recursive, $b.$ Prove that the set being quantified over (the primes) is primitive recursive, $c.$ Find a bound for the quantification. Points $a$ and $b$ are simple, but I am having trouble with $c.$ By "a bound for the quantification" I mean some function $f(x, \alpha, \beta)$ s.t. $t \leq f(x, \alpha, \beta)$ whenever $\alpha^{(x-2)(|\alpha|-1)} = \beta^t$ holds for $t \in \mathcal{P}$ . The general case is tricky so I thought of dividing it into the following cases. Given fixed elements $(x, \alpha, \beta) \in \omega \times \Sigma^{*2}$ and $t \in \mathcal{P}$ .
Hint: I think you are aware of the fact that for $\gamma \in \Sigma^*$ and $m \in \omega$ , $|\gamma^m| = m |\gamma|$ . So if $\alpha^{(x-2)(|\alpha|-1)}=\beta^t$ , ${(x-2)(|\alpha|-1)}|\alpha|=t|\beta|$ . Now note that $\left\lceil\frac{|\alpha|}{|\beta|}\right\rceil$ is a primitive recursive function of $\alpha$ and $\beta$ (when $|\beta| \neq 0$ ). The edge case when $|\beta| = 0$ is simple and can be handled separately.
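To illustrate the hint (an editor's sketch; the `t_bound` function and the example strings are hypothetical choices, and the $|\beta|=0$ edge case is left out as in the hint):

```python
def t_bound(x: int, alpha: str, beta: str) -> int:
    # If alpha^((x-2)(|alpha|-1)) = beta^t, then comparing lengths gives
    # (x-2)(|alpha|-1)|alpha| = t|beta|, so t is at most the ceiling of
    # the quotient. (The |beta| = 0 edge case is handled separately.)
    assert len(beta) > 0
    num = (x - 2) * (len(alpha) - 1) * len(alpha)
    return -(-num // len(beta))                      # integer ceiling

# Hypothetical example: alpha = "ab", x = 4, so alpha^2 = "abab" = ("ab")^2
# and the witness t = 2 meets the bound.
alpha, beta, x, t = "ab", "ab", 4, 2
assert alpha * ((x - 2) * (len(alpha) - 1)) == beta * t
assert t <= t_bound(x, alpha, beta)
```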
|discrete-mathematics|logic|computer-science|computability|formal-languages|
1
Integration of laplace operator
Given two functions $f(x,y)$ and $g(x,y)$ defined on $D =]0,a[ \times ]0,b[ \subset \mathbb{R}^2$ , and given that $$\int _D f(x,y).g(x,y) dxdy =0$$ I have to show the following : $$\int _D \Delta f(x,y).g(x,y) dxdy =0$$ I tried to use Green's formula : $$\int _D \Delta f(x,y).g(x,y) dxdy = \int_{\partial D}\frac{\partial f }{\partial n}g d \sigma-\int _D \nabla f(x,y).\nabla g(x,y) dxdy$$ In my problem the derivatives of $f$ and $g$ on the boundary are zero, so I assumed that $\int_{\partial D}\frac{\partial f }{\partial n}g d \sigma = 0$ which leaves $$\int _D \Delta f(x,y).g(x,y) dxdy = -\int _D \nabla f(x,y).\nabla g(x,y) dxdy$$ I'm not sure how to continue or whether my solution is correct. I tried using integration by parts : $$-\int _D \nabla f(x,y).\nabla g(x,y) dxdy = -\biggl[ \nabla f(x,y).g(x,y)\biggr]_D+\int _D \nabla f(x,y).\nabla g(x,y) dxdy$$
This assertion is not correct. Here is a counterexample: Take $a = b = 1$ and $g(x,y) = 1, \, f(x,y) = x^2 - 1/3$ . Then $\Delta f(x,y) = 2$ and thus $$ \int_D f(x,y) g(x,y) dx\, dy = 0 \ne 2 = \int_D \Delta f(x,y) g(x,y) dx\, dy \, . $$
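The counterexample can be verified symbolically (an editor's sketch using SymPy):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x ** 2 - sp.Rational(1, 3)           # the counterexample, with g = 1

# On D = (0,1) x (0,1) the product f*g integrates to 0 ...
assert sp.integrate(f, (x, 0, 1), (y, 0, 1)) == 0
# ... but Laplacian(f) = 2, whose integral over D is 2, not 0.
lap_f = sp.diff(f, x, 2) + sp.diff(f, y, 2)
assert lap_f == 2
assert sp.integrate(lap_f, (x, 0, 1), (y, 0, 1)) == 2
```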
|calculus|integration|laplacian|
1
Does the second derivative tell me how the variation of the slope of the tangent line to the graph varies?
The first derivative tells me how the slope of the tangent line to the graph varies from point to point, and the second derivative tells me how that variation itself varies? For example, taking $f(x) = x^2$ , $f'(x)= 2x$ (so the slope of the tangent line at each point is equal to $2x$ ; for example at the point $x = 1$ it is equal to $2$ ). The second derivative tells me how this variation varies, $f''(x) = 2$ . So what is $f''$ telling me? I don't understand why it's a constant if the slope changes from point to point.
You are correct that the 1st derivative tells you the slope of the parent function at any particular point. The 2nd derivative is similar, except it shows the slope of the 1st derivative at any point. Let's take the example you used: If $f(x)=x^2$ , then... $f'(x)=2x$ This is because the slope of $x^2$ at any point is modeled by the function $y=2x$ . Now, if $f'(x)=2x$ , then... $f''(x)=2$ This is because the slope of $2x$ at any point is modeled by the function $y=2$ . Conclusion: The equation $f''(x)=2$ indicates that $f'(x)$ increases at a constant rate of 2, which means $f(x)$ is accelerating in the positive direction (since its slope is increasing).
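A finite-difference sketch makes this concrete (an editor's addition; the step size $h$ and sample points are arbitrary):

```python
# Central finite differences: the slope of f(x) = x^2 varies as 2x, while
# the slope of that slope is the constant 2.
h = 1e-4
f = lambda x: x ** 2

def slope(func, x):
    return (func(x + h) - func(x - h)) / (2 * h)

for x0 in [-2.0, -1.0, 0.0, 1.0, 3.0]:
    assert abs(slope(f, x0) - 2 * x0) < 1e-6                   # f'(x) = 2x
    assert abs(slope(lambda t: slope(f, t), x0) - 2) < 1e-3    # f''(x) = 2
```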
|calculus|derivatives|
0
Maximum after m tries is the same as maximum after m+1 tries, without replacement
This is a question in a quant interview. Given a bag of $n$ marbles each of distinct weights, perform the following process. Pick two marbles at random, keep the heavier one and return the lighter one. Then, repeat the following process. Pick one marble at random from the rest, and compare it to the marble at hand. Keep the heavier one and return the lighter one to the bag. What is the probability that the marble you have in your hand after $m$ turns will remain in your hand after $m+1$ turns (one more turn)? My thinking: It seems conditioning on the $m$ th marble is the way to go, since it is not true that all marbles have equal probability of being that maximum weight marble after m turns. Let $W_{k}$ be the marble in your hand after $k$ marbles have been drawn. So we have $$ \mathbb{P} \left[W_{m} = W_{m+1}\right] = \sum_{i=2}^{n}\mathbb{P}[W_{m+1} = W_{m} \mid W_m = i] \mathbb{P}[W_m=i]$$ These individual probabilities are easy to calculate. The conditional probability is just $\frac{i-1}{n-1}$ .
Consider the sequence of $m+2$ extracted marbles (due to the nature of the first turn where you extract $2$ marbles, this only counts as $m+1$ turns). The maximum value is (uniquely) attained at the $k$ th marble in the sequence. The $m+1$ th kept marble ( $m$ th turn) will be different from the $m+2$ th kept marble ( $m+1$ th turn) if and only if the maximum is attained at the last ( $m+2$ th) marble of the sequence, i.e. $k = m+2$ . The probability of maintaining the same marble is approximately $1-\frac{1}{m+2}$ . Edit: the above formula is only a heuristic approximation which holds as $n \rightarrow \infty$ . However, we can use the outlined method to compute the exact formula. Let us count the valid sequences of $m+2$ marbles where the maximum is equal to $k$ and occurs at the last marble. We get: $$\frac{k-1}{n}\left( \frac{k-2}{n-1} \right)^m \frac{1}{n-1}$$ This is by counting the valid possibilities for the first marble, middle $m$ marbles, and last marble. By summing over possible values of $k$ , we obtain the exact probability.
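The setup can be checked exactly with a small dynamic program over weight ranks (an editor's sketch; `stay_probability` is a hypothetical helper that follows the process described in the question):

```python
from fractions import Fraction

def stay_probability(n: int, m: int) -> Fraction:
    # dist[i] = P[marble in hand has weight-rank i] after the first turn,
    # which draws two marbles and keeps the heavier (rank = max of the pair).
    dist = {i: Fraction(i - 1, n * (n - 1) // 2) for i in range(2, n + 1)}
    # Each subsequent turn draws one of the other n-1 marbles uniformly
    # and keeps the heavier; advance the distribution to "after m turns".
    for _ in range(m - 1):
        new = {i: Fraction(0) for i in range(2, n + 1)}
        for i, p in dist.items():
            new[i] += p * Fraction(i - 1, n - 1)      # drew a lighter marble
            for j in range(i + 1, n + 1):             # drew rank j > i
                new[j] += p * Fraction(1, n - 1)
        dist = new
    # The hand survives one more turn iff the drawn marble is lighter.
    return sum(p * Fraction(i - 1, n - 1) for i, p in dist.items())

# n = 3, m = 1: P = 1/3 * 1/2 + 2/3 * 1 = 5/6, matching a hand count.
assert stay_probability(3, 1) == Fraction(5, 6)
# The heuristic 1 - 1/(m+2) is only an n -> infinity approximation:
assert abs(float(stay_probability(200, 1)) - 2 / 3) < 0.01
```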
|probability|combinatorics|conditional-probability|
1
Find the domain of a function for different values of $g$
Find the domain of: $$f(x) = \frac{1}{(g+1)x^2 + 2(g-1)x + g-3}$$ for the various values of $g\in \mathbb{R}$ I am trying to solve this. First of all, I take the $g$ value equal to $-1$ so the domain is $\mathbb{R}$ . The next thing that I need to do is to take the g value not equal to $-1$ . The discriminant must be greater than zero. After many calculations, the result I have is this: $$-3g^2+6g+13>0$$ but I think that is not correct. This is an exercise from a Greek book and it gives me the solution, but not step by step : Domain of $f = R - \left[{\dfrac{3-g}{g+1}, -1 }\right]$ Any ideas? Thanks!
The domain of the function is the set of $x$ values where the function is defined on. Looking at the expression, this is when the denominator is not zero. To find when this occurs, let's first do the complement: find when the denominator is zero. This means solving the following equation for $x$ , $$(g+1)x^2 + 2(g-1)x + g-3 = 0$$ This is a quadratic equation, so you can use the standard formula, and fortunately the discriminant becomes a neat perfect square. After some simplification, you should get two solutions for $x$ : $$\dfrac{3-g}{g+1},\; -1$$ These are the only two $x$ values for which the denominator of $f(x)$ becomes zero. Hence, the domain of the function is the real line excluding these two points. As an aside, $\left[{\dfrac{3-g}{g+1}, -1 }\right]$ conventionally means the interval from $\dfrac{3-g}{g+1}$ to $-1$ . Whereas $\left\lbrace \dfrac{3-g}{g+1}, -1 \right\rbrace$ conventionally means the set of the two points $\dfrac{3-g}{g+1}$ and $-1$ , which is what is needed for the answer here.
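Both claimed roots, and the perfect-square discriminant, can be verified symbolically (an editor's sketch using SymPy):

```python
import sympy as sp

x, g = sp.symbols('x g')
expr = (g + 1) * x ** 2 + 2 * (g - 1) * x + (g - 3)

# The discriminant is the perfect square 16, independent of g:
disc = (2 * (g - 1)) ** 2 - 4 * (g + 1) * (g - 3)
assert sp.expand(disc) == 16

# Both claimed roots make the denominator vanish identically in g:
assert sp.simplify(expr.subs(x, -1)) == 0
assert sp.simplify(expr.subs(x, (3 - g) / (g + 1))) == 0
```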
|algebra-precalculus|functions|problem-solving|
0
Prove that if every smooth function on a subset of a manifold can be extended to a smooth function on the whole manifold, then the subset is closed.
Consider a nonempty subset $A$ of a smooth manifold $M$ and suppose that every smooth function on $A$ can be extended to a smooth function on $M$ . We want to show that $A$ is closed. How might we proceed with a proof? Since $M$ is a manifold, $A$ is Hausdorff. Thus $A$ is closed if it is compact. To construct a finite cover of $A$ by open sets, let $f:A\to\mathbb{R}^k$ be a smooth function on $A$ so $f$ extends to a smooth function $F:M\to\mathbb{R}^k$ on $M$ . Consider an arbitrary cover $C=\{U_\alpha:\alpha\in X\}$ of $A$ . We must show that $C$ has a finite subcover. I am not sure now how to proceed. I would like to think there is a way to use the fact that for all open sets $B\subseteq M$ such that $A\cap B$ is nonempty, we can let $F|_B$ be the restriction of $F$ to $B$ and get a finite cover of $A$ by pulling back the image of the intersection. (This isn't sufficient, it's just a sense that we should look at images in $\mathbb{R}^k$ .) What are some ideas that I can use to get started?
The statement in the title of this question is wrong. Consider $A=M=\mathbb{R}$ . Then, clearly any smooth function on $A$ can be extended to one on $M$ . But $A$ is not compact. Of course, $A$ is still closed. Hence, this example does not disprove the statement from your post. It does, however, show that your approach (showing closedness by showing compactness) cannot succeed. With regards to the statement in the body of the question: I would suggest trying to show closedness of $A$ more directly. Or, equivalently, openness of $M\setminus A$ : i.e. take any point $x \in M\setminus A$ and try to show that there exists an open neighborhood of $x$ which does not intersect $A$ . To start you might just use an open neighborhood you get from $M$ being a manifold. This neighborhood might, of course, intersect $A$ . But we also haven't used the assumption that we can extend smooth functions on $A$ yet. So we have to find some smooth function such that its extendability says something interesting.
|general-topology|manifolds|differential-topology|smooth-manifolds|smooth-functions|
1
inequality for supremum
Consider two bounded sequences $\{|A_i|\}_{i = 1}^{\infty}$ and $\{|B_i|\}_{i = 1}^{\infty}$ , for some $\epsilon > 0$ , suppose $\sup_{i \in \mathbb{N}}||A_i| - |B_i|| < \epsilon$ , where $\mathbb{N}$ is the set of natural numbers; do we have $|\sup_{i \in \mathbb{N}}|A_i| - \sup_{i \in \mathbb{N}}|B_i|| < \epsilon$ ? I can prove the above claim when using max not sup, but I have no idea how to prove rigorously when using sup. Here are my thoughts for the case max. Let $\alpha: = \max_{i \in S}|A_i| = |A_{i_j}|$ and $\beta := \max_{i \in S}|B_i| = |B_{i_k}|$ where $S$ is a finite set, and suppose now we have $|\alpha - \beta| \geq \epsilon$ , i.e., $||A_{i_j}| - |B_{i_k}|| \geq \epsilon$ , and without loss of generality assume $|A_{i_j}| \geq |B_{i_k}|$ , then we have $|A_{i_j}| - |B_{i_j}| \geq |A_{i_j}| - |B_{i_k}| \geq \epsilon$ , contradicting the fact that $\max_{i \in S}||A_i| - |B_i|| < \epsilon$ .
The $\max$ here is not necessarily well-defined, so work with $\sup$ directly. Let $x(i) = |A_i|$ and $y(i) = |B_i|$ ; then we have two bounded sequences of real numbers, so $\sup_{i \in \mathbb N} x(i) < \infty$ and $\sup_{i \in \mathbb N} y(i) < \infty$ . From the triangle inequality, $|x| \leq |x-y| + |y|$ gives $|x|-|y| \leq |x-y|$ , and likewise $|y|-|x| \leq |x-y|$ , so $||x|-|y|| \leq |x-y|$ pointwise. Now, for every $i$ , $$x(i) \leq y(i) + |x(i)-y(i)| \leq \sup_{j}y(j) + \sup_{j}|x(j)-y(j)|.$$ Taking the supremum over $i$ gives $\sup_i x(i) - \sup_i y(i) \leq \sup_i |x(i)-y(i)|$ , and by symmetry $$\Big|\sup_{i}x(i) - \sup_{i}y(i)\Big| \leq \sup_{i}|x(i)-y(i)| < \epsilon,$$ which is exactly what was asked.
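A numerical illustration on a finite sample, where the suprema are attained as maxima (an editor's sketch; the sample size and $\epsilon=0.1$ are arbitrary choices):

```python
import random

random.seed(0)
eps = 0.1
A = [random.uniform(-5, 5) for _ in range(1000)]
B = [a + random.uniform(-eps, eps) for a in A]   # forces ||A_i| - |B_i|| < eps

sup_abs_diff = max(abs(abs(a) - abs(b)) for a, b in zip(A, B))
assert sup_abs_diff < eps
# The suprema of |A_i| and |B_i| then differ by at most sup ||A_i| - |B_i||:
assert abs(max(abs(a) for a in A) - max(abs(b) for b in B)) <= sup_abs_diff
```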
|real-analysis|supremum-and-infimum|
0
I've found the minimal AND-OR expression for a function, but I can't find the minimal OR-AND function
I am given a function $f(W,X,Y,Z)$ that only outputs $1$ if exactly three inputs are $1$ . The AND-OR function is easy enough to find: $$ W'X\,YZ + WX'YZ + WX\,Y'Z + WX\,YZ' $$ I think that function is minimal as I can't find a way to simplify it I'm not supposed to be using K-maps yet, so I'm struggling with finding the OR-AND form. I've tried negating the AND-OR function as the complement of SoP is supposed to be PoS, which is the OR-AND function I think, but that's not the correct answer and I'm not sure why. I've also tried writing out the entire formula in terms of zeroes instead of $1$ s, i.e. $$ (W'+X'+Y'+Z')(W'+X'+Y'+Z)\cdots(W+X+Y+Z) $$ and I know I've done that correctly, but it gets too complicated for me to simplify quickly. I'm not sure where to go. EDIT: I've been staring at the problem for a while so took a break and came back. I think I've got it worked out at this point though. By first using uniting theorems ((X+Y)(X+Y')=X), I get it to a form where I can apply the se
In my mind, the basic simplification for both AND-OR and OR-AND forms is $$xy + xy' = x, \qquad (x+y)(x+y') = x,$$ which can be interpreted as: if there are two input cases which differ in only one input but have the same output value, then they can be combined into a single case excluding that differing input. Generalizing to multiple differing inputs: if there is a set of input cases having the same output value which runs through all possible combinations of the differing inputs, then the set can be combined into a single case of the constant inputs. Alternatively, if you fix some inputs and the output is known regardless of the values of the other inputs, then you can write a single input case. Based on the above, I don't see any further simplification of your AND-OR expression. For OR-AND, let's think of the complementary definition of $f(W,X,Y,Z)$ : it only outputs $0$ if zero, two, three, or four inputs are $0$ . We can infer that if any two of the inputs are zero, the output is guaranteed to be $0$ .
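The answer's reasoning suggests a candidate OR-AND form: one sum term per pair of inputs (forcing the output to $0$ whenever two inputs are zero) plus the term $(W'+X'+Y'+Z')$ to kill the all-ones case. This Python sketch brute-forces all 16 rows; the PoS expression is my completion of the truncated answer, not the original author's:

```python
from itertools import product

def f(w, x, y, z):
    # Target function: 1 iff exactly three inputs are 1.
    return int(w + x + y + z == 3)

def and_or(w, x, y, z):
    # Minimal SoP from the question: W'XYZ + WX'YZ + WXY'Z + WXYZ'
    return int((not w and x and y and z) or (w and not x and y and z)
               or (w and x and not y and z) or (w and x and y and not z))

def or_and(w, x, y, z):
    # Candidate minimal PoS: any two inputs 0 forces output 0,
    # and the all-ones row must also give 0.
    pairs = all(a or b for a, b in [(w, x), (w, y), (w, z), (x, y), (x, z), (y, z)])
    return int(pairs and (not w or not x or not y or not z))

for bits in product([0, 1], repeat=4):
    assert f(*bits) == and_or(*bits) == or_and(*bits)
```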
|boolean-algebra|conjunctive-normal-form|disjunctive-normal-form|boolean|
0
How can you show that the trace class norm $\|A\|_1:=\mathrm{Tr}(|A|)$ satisfies the triangle inequality?
The exact wording of my question is a bit oxymoronic, since a norm by definition satisfies the triangle inequality, so some context is required. Let $H$ be a separable Hilbert space over the field $\mathbb{K}$ . I am aware that a bounded linear operator $A\in\mathcal{B}(H)$ is said to be of trace class if $\mathrm{Tr}(|A|)<\infty$ , for $|A|$ the square root of $A^*A$ and $$\mathrm{Tr}(A):= \sum_{k=1}^\infty\left\langle Ae_k,e_k\right\rangle $$ for a Hilbert basis $\{e_k\}$ of $H$ . Then the $p$ th Schatten norm for $p\in [1,\infty)$ is defined as $$\|A\|_p := \left(\mathrm{Tr}(|A|^p)\right)^{1/p}$$ and as a special case, the trace class norm is given by $$\|A\|_1 := \mathrm{Tr}(|A|).$$ None of the references I have seen anywhere actually prove that what we call the Schatten norm is actually a norm, and the same goes for the trace norm. The main difficulty I have with proving that the Schatten norm is actually a norm is already present in the case of the trace norm, and hence this question focuses on that. What I struggle most with showing that fo
I do not know a "nice" proof of this! Here is an outline of how it is done in the first volume of Reed and Simon (Section VI.6). If you don't already know that $\operatorname{Tr}$ does not depend on the basis used to compute it, that is a relatively straightforward exercise, and something I use below. Recall that if $T = U|T|$ is the polar decomposition of the operator $T \in \mathcal{B}(H)$ , then $U^* T = |T|$ (e.g. because the operator $U^* U$ is the projection onto the orthocomplement of the kernel of $|T|$ , and the orthocomplement of the kernel of a positive operator is the closure of its range). With that in mind, suppose that $A+B = U|A+B|$ , $A = V|A|$ , and $B = W|B|$ are the polar decompositions of $A+B$ , $A$ , and $B$ , respectively, that $A$ and $B$ are trace class, and that $(e_n)$ is an orthonormal basis. Then (using the observation just made with $T = A + B$ ) we have $$ \sum_n \langle |A+B|e_n,e_n\rangle = \sum_n \langle U^*(A+B) e_n, e_n\rangle = \sum_n (\langle U
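The inequality itself is easy to probe numerically. The Python sketch below (restricted to $2\times 2$ real matrices, with the singular values computed by hand from the eigenvalues of $M^TM$ ) checks $\|A+B\|_1\le\|A\|_1+\|B\|_1$ on random examples; an illustration, of course, not a proof:

```python
import math
import random

def nuclear_norm_2x2(m):
    # Singular values of a 2x2 real matrix are the square roots of the
    # eigenvalues of M^T M; their sum is the trace (nuclear) norm ||M||_1.
    (a, b), (c, d) = m
    t = a*a + b*b + c*c + d*d        # trace of M^T M
    det = (a*d - b*c) ** 2           # det of M^T M = det(M)^2
    disc = math.sqrt(max(t*t - 4*det, 0.0))
    s1 = math.sqrt((t + disc) / 2)
    s2 = math.sqrt(max((t - disc) / 2, 0.0))
    return s1 + s2

def add(m, n):
    return [[m[i][j] + n[i][j] for j in range(2)] for i in range(2)]

random.seed(1)
for _ in range(1000):
    A = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(2)]
    B = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(2)]
    assert nuclear_norm_2x2(add(A, B)) <= nuclear_norm_2x2(A) + nuclear_norm_2x2(B) + 1e-9
```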
|linear-algebra|functional-analysis|operator-theory|normed-spaces|hilbert-spaces|
1
Limits within Simpson's Paradox
I've been studying Simpson's paradox, and encountered this question: why must the positive value be less than 2.7%? According to the explanation, people can move between groups, which generated a total increase of 0.8% across those years. Imagine, however, one person in every group except a million people in the lowest level ("9th grade or lower"). The income at the lowest level is \$1 and the highest income is \$100. Now all of them except one switch to a Bachelor's degree, earning \$97.30, which is 2.7% less than that one person earned in the year 2000. The total median change was therefore roughly 9700% higher, even though the net change in each group was lower.
Yes, your point looks correct to me. In fact, you do not even need the numbers of people in each group to change. For example, with three people in each group both in 2000 and in 2012, and suitably chosen individual incomes (the original table has not survived here), you would get the "changes in medians" for each row as stated, but the overall medians would go from $3500$ to $4500$ , i.e. an increase of about $28.6\%$ , more than ten times the official answer for a top bound. And it could easily be higher with different examples. It looks like a bad question.
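To make this concrete, here is a Python sketch with incomes of my own choosing (the answer's original table is not reproduced above, so these numbers are illustrative only): every group's median falls from 2000 to 2012, yet the overall median rises sharply, with nobody changing groups.

```python
from statistics import median

# Hypothetical incomes: three people per group in each year, nobody moves.
year_2000 = {"A": [1, 2, 3], "B": [10, 11, 12], "C": [100, 101, 102]}
year_2012 = {"A": [1, 1.5, 98], "B": [10, 10.5, 99], "C": [50, 100, 101]}

# Every group's median falls ...
for g in year_2000:
    assert median(year_2012[g]) < median(year_2000[g])

# ... yet the overall median rises from 11 to 50.
overall_2000 = median(v for vals in year_2000.values() for v in vals)
overall_2012 = median(v for vals in year_2012.values() for v in vals)
assert overall_2000 == 11 and overall_2012 == 50
```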
|statistics|
1
Equation for Elevation & Azimuth after Rotation in Two Axes
Ideally I'd like a general equation but I've drawn a specific example of rotating a board $45^\circ$ from horizontal then $45^\circ$ on its own axis. I haven't been able to come up with an equation to describe the resulting angle of the vector normal to the board's surface from horizontal (i.e. the angle of elevation). I also need an equation to describe the resulting angle projected onto the horizontal plane, such as the red axis being $0^\circ$ and green being $90^\circ$ (i.e. angle of azimuth).
Using the attached picture as a reference, take the $+x$ axis to be the red solid ray pointing from the origin to the right of the image (and down), and the $+y$ axis to be the green solid ray pointing from the origin to the right of the image (and up). Finally, take the $+z$ axis to be the blue solid ray pointing from the origin upwards to the top of the image. This is a right-handed system. Now assume the axis of rotation as shown in the image is the line passing through the origin and making an angle of $-\phi$ with the $+x$ axis; then the unit vector along this axis of rotation is $ a = ( \cos \phi , - \sin \phi , 0) $ . And we want to rotate the unit vector $k = [0, 0, 1]^T $ about this axis, by an angle $\theta = +\dfrac{\pi}{4}$ . According to Rodrigues' rotation formula, the rotated vector $k'$ is given by $ k' = k \cos \theta + (1 - \cos \theta) a (a^T k) + (a \times k) \sin \theta $ . Therefore, noting that $a^T k = 0$ , this simplifies to $ k' = [0, 0, 1]^T \cos 45^\
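The computation above is easy to carry out numerically. The sketch below (Python, plain tuples rather than any linear algebra library) implements Rodrigues' formula for the stated axis and reads off elevation and azimuth; the specific angles $\phi=\theta=45^\circ$ are the example from the question:

```python
import math

def rotate_about(a, k, theta):
    # Rodrigues' formula: k' = k cos(t) + (1 - cos(t)) a (a.k) + (a x k) sin(t),
    # for a unit axis a and a vector k.
    dot = sum(ai * ki for ai, ki in zip(a, k))
    cross = (a[1]*k[2] - a[2]*k[1],
             a[2]*k[0] - a[0]*k[2],
             a[0]*k[1] - a[1]*k[0])
    return tuple(k[i]*math.cos(theta) + (1 - math.cos(theta))*a[i]*dot
                 + cross[i]*math.sin(theta) for i in range(3))

phi = math.radians(45)                       # tilt of the rotation axis in the xy-plane
axis = (math.cos(phi), -math.sin(phi), 0.0)  # unit vector along the board's own axis
normal = rotate_about(axis, (0.0, 0.0, 1.0), math.radians(45))

elevation = math.degrees(math.asin(normal[2]))            # angle above the horizontal
azimuth = math.degrees(math.atan2(normal[1], normal[0]))  # angle in the horizontal plane

assert abs(elevation - 45.0) < 1e-9   # asin(cos 45 deg) = 45 deg
assert abs(azimuth + 135.0) < 1e-6
```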
|linear-algebra|trigonometry|spherical-geometry|
1
Limit of the sequence $z_{n+1} = z_n^2 + c$
Hello, I am working on the following question: For $c = \frac{2-i}{8}\in\mathbb{C}$ , define $z_0 = 0$ and $z_{n+1} = z_n^2 + c$ . We want to determine if this converges. I am trying to do it this way: suppose the limit exists, call it $L$ ; then $L = L^2 + c$ , and solving this there are two possible roots $w_1 = \frac{3}{4} + \frac{1}{4}i$ and $w_2 = \frac{1}{4} - \frac{1}{4}i$ . Now I want to try to prove the limit is $w_1$ or $w_2$ . After some factoring I got: $$|z_{n+1} - w_1| = |z_n^2 + c - w_1| = |z_n-w_1||z_n+w_1| $$ Now I want to show somehow that $|z_n+w_1|$ is always bounded by some constant $0<\alpha<1$ , so we would have: $$ |z_{n+1} - w_1| \le \alpha|z_n-w_1| $$ and this would mean that $(z_n)$ converges to $w_1$ (or we can do the same thing for $w_2$ ; I am not sure which one is the limit yet). Could someone help me to find this bound $\alpha$ ? Or if this approach is not very doable, could someone give me a hint? I would really appreciate it, thanks!
Let us show the limit is $L=\frac 14(1-i)$ . We have first of all $$ |z_1-L| =\left|c-L\right|=\frac 18\ . $$ Assume inductively $|z_n-L|\le \frac 18$ . Then we have: $$ \begin{aligned} |z_{n+1}-L| &=|z_n^2+c-L| \\ &=|z_n^2-L^2| \\ &=|z_n + L|\cdot |z_n-L| \\ &=|(z_n-L) + 2L|\cdot |z_n-L| \\ &\le\Big(|z_n-L| + 2|L|\Big)\cdot |z_n-L| \\ &\le\underbrace{\left(\frac 18 + \frac{4\sqrt 2}8\right)}_{:=\alpha}\cdot |z_n-L| \ . \end{aligned} $$ Since $\alpha<1$ , we obtain inductively first that $|z_{n+1}-L|\le\frac 18$ , so the induction goes through. Rereading the same chain then gives the desired inequality $|z_{n+1}-L|\le\alpha|z_n-L|$ for all $n$ , hence $z_n\to L$ . $\square$
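A quick numerical check (Python; not part of the proof) confirms the base case $|z_1-L|=\frac18$ , the contraction constant $\alpha<1$ , and the convergence to $L=\frac14(1-i)$ :

```python
c = (2 - 1j) / 8
L = (1 - 1j) / 4          # the claimed limit (w2 in the question's notation)

# Base case of the induction: |z_1 - L| = |c - L| = 1/8.
assert abs(c - L) == 0.125

# Contraction constant alpha = 1/8 + 2|L| = (1 + 4*sqrt(2))/8 < 1.
alpha = 0.125 + 2 * abs(L)
assert alpha < 1

# Iterate z_{n+1} = z_n^2 + c from z_0 = 0 and check convergence to L.
z = 0j
for _ in range(200):
    z = z * z + c
assert abs(z - L) < 1e-12
```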
|calculus|sequences-and-series|complex-analysis|limits|
1
Prove that if $\xi$ is a bounded random variable then $\xi$ is $\phi _2$.
Let $\xi$ be a real random variable. We say that $\xi\in\phi_2$ when $\exists \lambda >0$ such that $\mathbb{E}\left(\exp\left(\frac{\xi^2}{\lambda^2}\right)\right) \le e$ . My problem: Prove that if $\xi$ is a bounded random variable then $\xi$ is $\phi _2$ . Although I tried to find a solution, it didn't work. Some hints on the internet said that we can use Hoeffding's inequality but I don't know how to do it. Could you please help me with this problem? Thank you very much.
Let $M>0$ be a real number such that $|ξ|\leq M$ (which exists, as we assume $ξ$ to be bounded) and set $λ=M$ . Then $$\mathbb{E}\left[\exp\left(\frac{ξ^2}{λ^2}\right)\right] \leq \mathbb{E}\left[\exp\left(\frac{M^2}{λ^2}\right)\right] = \mathbb{E}[e]=e.$$
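The proof is really just the pointwise bound $\exp(\xi^2/\lambda^2)\le e$ on the support. This Python sketch (with an arbitrary bound $M=3$ and a uniform distribution chosen purely for illustration) checks both the pointwise bound and a Monte Carlo estimate of the expectation:

```python
import math
import random

M = 3.0     # any bound |xi| <= M works
lam = M     # take lambda = M, as in the answer

# Pointwise: exp(x^2 / lam^2) <= e whenever |x| <= M, hence the expectation
# over ANY distribution supported in [-M, M] is at most e.
for i in range(1001):
    x = -M + 2 * M * i / 1000
    assert math.exp(x * x / lam**2) <= math.e + 1e-12

# Monte Carlo sanity check with a uniform xi on [-M, M].
random.seed(42)
samples = [random.uniform(-M, M) for _ in range(100_000)]
mc = sum(math.exp(x * x / lam**2) for x in samples) / len(samples)
assert mc <= math.e
```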
|probability|probability-distributions|random-variables|
0
Can every transcendental number be expressed as an infinite continued fraction?
Every infinite continued fraction is irrational. But can every number, in particular those that are not the root of a polynomial with rational coefficients, be expressed as a continued fraction?
Any complex number (and, I suspect, any number in a normed division algebra) can be expressed as an infinite continued fraction. Irrational numbers have a unique continued fraction expression consisting solely of integers. See here: https://sites.millersville.edu/bikenaga/number-theory/infinite-continued-fractions/infinite-continued-fractions.html As such, real transcendental numbers (being irrational) all have a unique continued fraction expression with integer partial quotients. An infinite continued fraction with only rational partial quotients must be irrational, per that same article. As such, nonzero rational numbers can only be represented by an infinite continued fraction if at least one partial quotient is irrational. An infinite continued fraction expression with real partial quotients must be real. This means that for nonreal transcendental numbers, any infinite continued fraction expression must have at least one nonreal partial quotient.
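The standard expansion algorithm discussed in the linked article is easy to sketch in Python. Floating-point precision limits how many partial quotients are trustworthy, so only the first few are asserted here:

```python
import math

def continued_fraction(x, n_terms):
    # Standard algorithm: a_k = floor(x), then recurse on 1 / (x - a_k).
    # Float precision limits the number of reliable terms (roughly 10 for pi).
    terms = []
    for _ in range(n_terms):
        a = math.floor(x)
        terms.append(a)
        frac = x - a
        if frac == 0:
            break
        x = 1.0 / frac
    return terms

# pi = [3; 7, 15, 1, 292, ...]
assert continued_fraction(math.pi, 5) == [3, 7, 15, 1, 292]
```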
|continued-fractions|transcendental-numbers|
0
A detail in the proof of the Theorem of the Uniqueness of the Laurent expansion
So the theorem is as follows. Let $f$ be holomorphic in the ring $A=\{z\in\mathbb{C}\,|\,r_1<|z-z_0|<r_2\}$ . Also assume that $$ f(z)=\sum_{n=0}^\infty a_n(z-z_0)^n+\sum_{n=1}^\infty \frac{b_n}{(z-z_0)^n}\quad\forall z\in A, $$ where both series converge independently. Then $$ a_n=\frac{1}{2\pi i}\int_{\gamma}\frac{f(w)}{(w-z_0)^{n+1}}dw\quad\text{and}\quad b_n=\frac{1}{2\pi i}\int_{\gamma}f(w)(w-z_0)^{n-1}dw, $$ where $\gamma(\theta)=z_0+r\,e^{i\,\theta}$ with $0\leq\theta\leq2\pi$ and $r_1<r<r_2$ . The proof begins by quoting Abel's Lemma to ensure that both series converge uniformly on $\gamma$ , so that $$ \frac{f(z)}{(z-z_0)^{k+1}}=\sum_{n=0}^\infty a_n(z-z_0)^{n-(k+1)}+\sum_{n=1}^\infty \frac{b_n}{(z-z_0)^{n+(k+1)}} $$ also converges uniformly on $\gamma$ ; this allows us to interchange the sum and the integral. So far so good; however, the next step is where I am having trouble following along. It states that if $k\geq0$ , integrating term by term, we get $$ \int_{\gamma}\frac{f(z)}{(z-z_0)^{k+1}}dz=2\pi\,i\,a_k$$
No, only for a net exponent of $-1$ do you get a nonzero integral. For every other integer exponent, including the negative ones, the integral is $0$ , because $(z-z_0)^m$ with $m\neq -1$ has the primitive $\frac{(z-z_0)^{m+1}}{m+1}$ on the punctured plane, so the integral over the closed curve $\gamma$ vanishes by the Fundamental Theorem of Calculus. It's not about Cauchy's Theorem.
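This is straightforward to verify numerically: integrating $(z-z_0)^m$ around a circle with a Riemann sum gives $2\pi i$ only for $m=-1$ , and (up to rounding) $0$ for every other integer power. A Python sketch:

```python
import cmath
import math

def circle_integral(power, r=1.0, n=4096):
    # Riemann sum for the integral of (z - z0)^power over |z - z0| = r,
    # parametrized as z = r e^{it}, dz = i r e^{it} dt (z0 dropped by translation).
    total = 0j
    dt = 2 * math.pi / n
    for k in range(n):
        z = r * cmath.exp(1j * k * dt)
        total += z**power * 1j * z * dt
    return total

# Only the exponent -1 survives; every other integer power integrates to 0.
assert abs(circle_integral(-1) - 2j * math.pi) < 1e-9
for p in (-3, -2, 0, 1, 2):
    assert abs(circle_integral(p)) < 1e-9
```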
|complex-analysis|power-series|
0
What does it mean that a field can be a vector space in itself?
The commutative field $K$ can be considered a vector space in itself. I define a series of properties common to fields and vector spaces and then define each of these separately and guide how I am understanding both concepts. Some properties of algebraic structures common to vector spaces and fields are: ${\color{red}{\text{- Internal binary operation:}}}$ $A×A→A$ $(a,b)→a⋆b$ ${\color{red}{\text{- External operation:}}}$ $A×B \to A$ $A×B \to C$ $B×A \to A$ $A×A \to B$ ${\color{red}{\text{- Neutral element: a number within the set in use that, operated with any other number in the set, does not alter it.}}}$ $\exists~0 \in V: \forall~v\in V: v+0=0+v=v~~{\text{for all}}~~ v∈V. ~{\text{Neutral element}}$ . ${\color{red}{\text{- Commutative property: the order of the values does not alter the result.}}}$ $∀~u,v \in V:u+v=v+u$ ${\color{red}{\text{- Associative property: the order of operation does not alter the result.}}}$ $∀~u,v,w ∈V,(u+v)+w=u+(v+w)$ ${\color{red}{\text{- Opposite element:
You seem to have put it together already, but just for concreteness: a field has defined multiplication and addition, with both being commutative and associative, having inverses and identities, and with multiplication distributing over addition. A vector space can be thought of as an abelian group equipped with multiplication by a field which distributes over the group operation. If we take the additive group of a field as our abelian group, we can use multiplication by the field as our vector space scalar multiplication. So it might be termed a "coincidence", but it's more just that fields with addition and multiplication already satisfy the requisite commutativity and distributivity axioms.
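For a concrete finite example, one can take the field $\mathbb{Z}/5\mathbb{Z}$ and check exhaustively that field multiplication serves as scalar multiplication on the additive group. This Python sketch (a finite stand-in of my own, not part of the original answer) verifies the relevant axioms:

```python
p = 5  # Z/5Z is a field since 5 is prime
elems = range(p)

# Treat field addition as vector addition and field multiplication as
# scalar multiplication, then check the vector-space axioms exhaustively.
for a in elems:          # scalar
    for b in elems:      # scalar (or second vector below)
        for v in elems:  # vector
            assert (a * (b * v)) % p == ((a * b) % p * v) % p      # compatibility
            assert (a * ((b + v) % p)) % p == (a * b + a * v) % p  # distributes over vectors
            assert (((a + b) % p) * v) % p == (a * v + b * v) % p  # distributes over scalars
for v in elems:
    assert (1 * v) % p == v                                        # scalar identity
```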
|linear-algebra|vector-spaces|
0
Question on determining the posterior pdf
Can someone tell me how the pdf of noise (w) is equivalent to the conditional pdf of observations (x) given A, assuming noise is independent of A for the equation x[n]=A+w[n] where A is the mean (and a random variable)? This is done in order to determine the posterior pdf using Bayes' rule.
Weird notations... One has the following fact. If $W$ and $A$ are independent random variables, and if $X = h(W, A)$ for some measurable function $h$ , then the conditional distribution of $X$ given $A = a$ is the distribution of the random variable $h(W, a)$ (I should talk about regular conditional distribution to be rigorous). Here $h(w, a) = w + a$ . Now it is easy to see that if $f_W$ is the pdf of $W$ , then the random variable $Y := W + a$ has pdf $f_Y$ given by $f_Y(y) = f_W(y - a)$ .
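A quick simulation illustrates the shift $f_{X\mid A=a}(x)=f_W(x-a)$ : take $W$ standard normal, fix a value $a$ (chosen arbitrarily here), and compare the samples of $X=W+a$ with the shifted density. Python sketch:

```python
import math
import random

def normal_pdf(w, mu=0.0, sigma=1.0):
    # Density of W ~ N(mu, sigma^2), written out explicitly.
    return math.exp(-((w - mu) ** 2) / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

a = 2.5  # a fixed realization of A (arbitrary choice for the illustration)

# Conditional pdf of X = A + W given A = a is f_W(x - a): a pure shift.
f_x_given_a = lambda x: normal_pdf(x - a)
assert abs(f_x_given_a(a) - normal_pdf(0.0)) < 1e-12

# Monte Carlo check: draw W, form X = W + a, and compare the sample mean
# of X with the mean implied by the shifted density (0 + a).
random.seed(7)
xs = [random.gauss(0.0, 1.0) + a for _ in range(200_000)]
sample_mean = sum(xs) / len(xs)
assert abs(sample_mean - a) < 0.02
```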
|statistics|stochastic-processes|random-variables|bayesian|parameter-estimation|
0