Columns: title (string), question_body (string), answer_body (string), tags (string), accepted (int64)
How to evaluate an integral involving incomplete gamma , exponential and power functions?
I am attempting to solve the following integral: \begin{equation} \int_0^\infty z^{\kappa^2 -2} \Gamma\left(-\frac{\kappa^2}{2}+n+1,\frac{(K+1) z^2}{A_0^2 \Omega }\right) \exp{\left(-\left(\frac{1}{2}\right)\left(\frac{w/z-\mu_R}{\sigma_R}\right)^2\right)} dz \end{equation} This integral arises when I try to derive the expression for the probability density function (pdf) of the product of three random variables using the multiplicative convolution technique. By making a few substitutions, abbreviating the constant terms by defining $\alpha=-\frac{\kappa^2}{2}+n+1$ and $\beta=\frac{K+1}{A_0^2 \Omega }$, and setting $\mu_R=0$ , I was able to express it in a much cleaner way: \begin{equation} \frac{1}{2}\left(\frac{\sqrt{2}\sigma_R}{w}\right)^{-\kappa^2+1}\int_0^\infty y^{\frac{\kappa^2}{2}} e^{-y} \Gamma\left(\alpha,\frac{\beta w^2}{2 \sigma_R^2 y} \right) dy \end{equation} It looks like the final result might be a product of gamma functions, but I am still unable to figure out the final closed form.
Mathematica yields a result for $$\int_0^{\infty } e^{-y} y^{\alpha } \Gamma \left(\beta ,\frac{\gamma }{y}\right) \, dy$$ $$\Gamma (\alpha +1) \Gamma (\beta )-\pi \csc (\pi (\alpha -\beta )) \left(\gamma ^{\alpha +1} \Gamma (\alpha +1) \, _1\tilde{F}_2(\alpha +1;\alpha +2,\alpha -\beta +2;\gamma )-\gamma ^{\beta } \Gamma (\beta ) \, _1\tilde{F}_2(\beta ;\beta +1,\beta -\alpha ;\gamma )\right)$$ $$\text{if}\quad \Re(\gamma )\geq 0\land \Re(\alpha )>-1\land (\Re(\alpha -\beta )>-3\lor \Re(\gamma )>0)$$ In Mathematica code: Gamma[1+\[Alpha]] Gamma[\[Beta]]- \[Pi] Csc[\[Pi] (\[Alpha]-\[Beta])] (\[Gamma]^(1+\[Alpha]) Gamma[1+\[Alpha]] HypergeometricPFQRegularized[{1+\[Alpha]},{2+\[Alpha],2+\[Alpha]- \[Beta]},\[Gamma]]- \[Gamma]^\[Beta] Gamma[\[Beta]] HypergeometricPFQRegularized[ {\[Beta]},{1+\[Beta],-\[Alpha]+\[Beta]},\[Gamma]]) if Re[\[Gamma]]>=0&&Re[\[Alpha]]>-1&&(Re[\[Alpha]-\[Beta]]>-3||Re[\[Gamma]]>0)
|probability-distributions|definite-integrals|improper-integrals|gamma-function|
0
Prove $XX’+ZZ’=YY’$
Let the feet of the angle bisectors of triangle $ABC$ be $X, Y$ and $Z$ . The circumcircle of triangle $XYZ$ cuts off three segments from the lines $AB, BC$ and $CA$ ; call them $XX', YY', ZZ'$ . Prove that at least one combination exists such that the sum of two of the segments' lengths is equal to the third segment's length. WLOG I assumed $YY'\geq XX'\geq ZZ'$ , so we have to prove $YY'=XX'+ZZ'$ . I tried coordinate geometry, assuming $A,B,C$ to be $(x_i,y_i)$ for $i=1,2,3$ . Then we can find the coordinates of $X,Y,Z$ and hence the equation of the circumcircle of $\Delta XYZ$ , and then the intercepts with the three sides. But it would consume a whole day if a human followed this method without computer help. Can someone please help me find a good solution for this problem?
For the sake of simplicity, let's assume $BC=a, CA=b, AB=c, XX'=x, YY'=y,$ and $ZZ'=z$ . Just by using the properties of the power of a point and angle bisectors , we will have: $$\frac{ac}{b+c}(\frac{ac}{b+c}-x)=\frac{ca}{a+b}(\frac{ca}{a+b}+z), \\ \frac{bc}{a+b}(\frac{bc}{a+b}-z)=\frac{bc}{a+c}(\frac{bc}{a+c}-y), \\ \frac{ab}{a+c}(\frac{ab}{a+c}+y)=\frac{ab}{b+c}(\frac{ab}{b+c}+x),$$ or equivalently, $$\frac{1}{b+c}(\frac{ac}{b+c}-x)=\frac{1}{a+b}(\frac{ca}{a+b}+z), \\ \frac{1}{a+b}(\frac{bc}{a+b}-z)=\frac{1}{a+c}(\frac{bc}{a+c}-y), \\ \frac{1}{a+c}(\frac{ab}{a+c}+y)=\frac{1}{b+c}(\frac{ab}{b+c}+x).$$ By solving this $3-$ variable system of equations, we will get: $$2x=\frac{a(c-b)}{b+c}+\frac{b(b+c)}{a+c}-\frac{c(b+c)}{a+b}, \\2z=\frac{c(b-a)}{a+b}+\frac{a(a+b)}{b+c}-\frac{b(a+b)}{a+c}, \\ 2y=\frac{b(c-a)}{a+c}+\frac{a(a+c)}{b+c}-\frac{c(a+c)}{a+b}.$$ Now, it's almost obvious that we have $2x+2z=2y.$ We are done. Note $1$ : To clarify the initial relations, we have used the fact that an angle bisector's foot divides the opposite side in the ratio of the adjacent sides (e.g. $BX=\frac{ac}{b+c}$ and $BZ=\frac{ca}{a+b}$ ), combined with the power of each vertex with respect to the circumcircle of $XYZ$ .
|geometry|contest-math|triangles|circles|coordinate-systems|
1
Techniques to prove rather unobvious arguments
I'm trying to prove this argument but I can't seem to find a way: $a \to b$ , $b \lor c$ , $(c \land \sim a) \to (d \land \sim a)$ , $\sim b$ , $\therefore d$ . EDIT: Attached below are the rules we are allowed to use. Rules
$1)$ $\overline{b}\to c$ (equivalent form of the premise $b \lor c$ ). $2)$ $\overline{b}$ (premise). $3)$ $c$ (modus ponens of 1, 2). $4)$ $\overline{b} \to \overline{a}$ (contrapositive of the premise $a \to b$ ). $5)$ $\overline{a}$ (modus ponens of 2, 4). $6)$ $c$ $\wedge$ $\overline{a}$ (conjunction of 3, 5). $7)$ $(c$ $\wedge$ $\overline{a})$ $\implies$ $(d$ $\wedge$ $\overline{a})$ (premise). $8)$ $d$ $\wedge$ $\overline{a}$ (modus ponens of 6, 7). $9)$ $d$ (simplification of 8).
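As a quick semantic companion to the derivation (not one of the allowed rules, just a brute-force truth-table check), one can verify that the premises entail $d$ in every model:

```python
from itertools import product

def implies(p, q):
    # material implication: p -> q  is  (not p) or q
    return (not p) or q

# If all four premises hold, d must hold too.
for a, b, c, d in product([False, True], repeat=4):
    premises = (implies(a, b)                          # a -> b
                and (b or c)                           # b v c
                and implies(c and not a, d and not a)  # (c ^ ~a) -> (d ^ ~a)
                and not b)                             # ~b
    if premises:
        assert d, "found a countermodel"
```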
|discrete-mathematics|logic|propositional-calculus|
1
Getting closest relative position of a point in a toroidal space
I am writing a boid simulation in a toroidal space. The boids wrap around from the left to the right edge and from the top to the bottom edge. To calculate which boids are visible, I need to find the minimum distance between two boids' positions. I have been doing the following. Given: $$P_1 = (x_1, y_1), P_2 = (x_2, y_2)$$ $\Delta x = \lvert x_2 - x_1\rvert,\ \Delta y = \lvert y_2 - y_1\rvert$ Apply the toroidal wrapping on $x$ : if $\Delta x > width/2$ , $\Delta x = width - \Delta x$ . Apply the toroidal wrapping on $y$ : if $\Delta y > height/2$ , $\Delta y = height - \Delta y$ . Calculate the Euclidean distance: $d=\sqrt{(\Delta x)^2+(\Delta y)^2}$ How would I also get the closest relative position of $P_2$ in Euclidean space (possibly outside of the toroidal space's $width$ and $height$ restrictions) in relation to $P_1$ ? I have tried: $$P_3 = (\Delta x + x_1, \Delta y + y_1)$$ but it doesn't work every time. For example, it worked with $P_1=(700, 520), P_2=(700, 20)$ which resulted in $P_3=($
$\newcommand{\Neg}{\phantom{-}}\DeclareMathOperator{\sgn}{sgn}$ Write $W$ for the width of the fundamental rectangle and $H$ for the height, and let $\sgn$ be the signum function: $$ \sgn(x) = \begin{cases} -1 & x < 0, \\ \Neg 0 & x = 0, \\ \Neg 1 & x > 0. \end{cases} $$ Let's say two points of the plane are avatars (of each other) if they represent the same point of the torus, i.e., their difference has the form $(mW, nH)$ for some integers $m$ and $n$ . Treating $P_{2}$ as the "origin" and assuming both $P_{1}$ and $P_{2}$ are in the same fundamental $W \times H$ rectangle, the goal is to find the avatar of $P_{1}$ nearest to $P_{2}$ . Set $m = -\sgn(x_{1} - x_{2})$ and $n = -\sgn(y_{1} - y_{2})$ . Geometrically, these are "the numbers of horizontal and vertical steps," either $1$ , $0$ , or $-1$ , to move $P_{1}$ closer to $P_{2}$ . The nearest avatar $P_{3}$ is one of at most four points: $$ P_{1},\qquad P_{1} + (mW, 0),\qquad P_{1} + (0, nH),\qquad P_{1} + (mW, nH). $$ If it matters, there is generally no continuous choice of nearest avatar.
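The recipe above can be sketched in Python; the candidate list and the `min` make the choice explicit. The $720\times540$ world size in the example is an assumption, since the question does not state it:

```python
def nearest_avatar(p1, p2, width, height):
    """Return the avatar of p1 nearest to p2; it may lie outside the
    fundamental width x height rectangle."""
    (x1, y1), (x2, y2) = p1, p2
    sgn = lambda t: (t > 0) - (t < 0)
    m = -sgn(x1 - x2)           # horizontal step: -1, 0, or 1
    n = -sgn(y1 - y2)           # vertical step:   -1, 0, or 1
    # The nearest avatar is one of at most four candidate points.
    candidates = [(x1 + i * m * width, y1 + j * n * height)
                  for i in (0, 1) for j in (0, 1)]
    return min(candidates,
               key=lambda p: (p[0] - x2) ** 2 + (p[1] - y2) ** 2)

# With the question's points and an assumed 720 x 540 world:
print(nearest_avatar((700, 20), (700, 520), 720, 540))   # (700, 560)
```

Wrapping over the top edge (distance 40) beats the direct path (distance 500), so the nearest avatar lies just outside the rectangle.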
|geometry|simulation|
1
Find a vertex of a tetragon where three vertices are given
Suppose that $V,W,U$ are three 3D points and $L,K$ are given positive values. Let $dist(A,B)$ denote the Euclidean distance between $A,B$ . Moreover, assume that $M$ is a plane that passes through $V,W,U$ . What I need is a point $P$ where: $$dist(V,P)=L \\ dist(U,P)=K \\ P\in M $$ Obviously, we must find the equations of 2 circles, $C_1(V,L), C_2(U,K)$ where $C_1\subset M, C_2\subset M$ , and then find their intersections. These possible intersections would be our points. Of course $L,K$ are given in such a way that the intersections exist.
Here is my code based on your response:

import math

def myVector(M, N):
    return [N[i] - M[i] for i in range(3)]

def myNorm(M):
    return math.sqrt(sum(M[i]**2 for i in range(3)))

def myDotProduct(M, N):
    return sum(M[i]*N[i] for i in range(3))

def findVertex2(v, w, u, l, k):
    # Write P = v + t*X1 + s*X2 with X1 = w - v and X2 = u - v.
    # dist(U, P) = k forces s = a0 + a1*t, and dist(V, P) = l then
    # gives the quadratic s2*t**2 + s1*t + s0 = 0 in t.
    X1 = myVector(v, w)
    X2 = myVector(v, u)
    d1 = myDotProduct(X1, X1)
    d2 = myDotProduct(X2, X2)
    d3 = myDotProduct(X1, X2)
    a0 = (l**2 - k**2 + d2) / (2 * d2)
    a1 = -2 * d3 / (2 * d2)
    s2 = d1 + (a1**2) * d2 + 2 * a1 * d3
    s1 = 2 * a0 * a1 * d2 + 2 * d3 * a0
    s0 = (a0**2) * d2 - l**2
    disc = s1**2 - 4 * s2 * s0
    if disc < 0:
        return None          # the circles do not intersect
    t1 = (-s1 + math.sqrt(disc)) / (2 * s2)
    t2 = (-s1 - math.sqrt(disc)) / (2 * s2)
    points = []
    for t in (t1, t2):
        s = a0 + a1 * t
        points.append([v[i] + t * X1[i] + s * X2[i] for i in range(3)])
    return tuple(points)

#result ([1.0, 1.0, 0.0], [0.0, 0.0, 0.0])

I edited it and it worked!
|linear-algebra|geometry|analytic-geometry|vector-analysis|
0
Basis of intersection of two vectorspaces is not in original vectorspaces
I'm having trouble understanding the following problem: Find a basis for $U \cap V$ with: \begin{equation*} U = \text{span}\big\{ \underbrace{\begin{pmatrix} 0 \\ 1 \\ 0 \\ 1 \end{pmatrix}}_{= \vec{u}_1},\underbrace{\begin{pmatrix} 1 \\ 0 \\ -1 \\ 0 \end{pmatrix}}_{= \vec{u}_2}\big\} \qquad\qquad V = \text{span}\big\{ \underbrace{\begin{pmatrix} -1 \\ 1 \\ 0 \\ 0 \end{pmatrix}}_{=\vec{v}_1},\underbrace{\begin{pmatrix} -2 \\ 0 \\ 1 \\ -1 \end{pmatrix}}_{= \vec{v}_2}\big\} \end{equation*} I know that I have to find solutions to this equation: \begin{equation} \vec{0} = \alpha \vec{u}_1 + \beta \vec{u}_2 - \gamma \vec{v}_1 - \epsilon \vec{v}_2 = \begin{pmatrix} \vec{u}_1 & \vec{u}_2 & -\vec{v}_1 & -\vec{v}_2 \end{pmatrix} \begin{pmatrix} \alpha \\ \beta \\ \gamma \\ \epsilon \end{pmatrix} \end{equation} \begin{align} U \cap V &= \text{ker}\begin{pmatrix} \vec{u}_1 & \vec{u}_2 & -\vec{v}_1 & -\vec{v}_2 \end{pmatrix} = \text{ker}\begin{pmatrix} 0 & 1 & 1 & 2 \\ 1 & 0 & -1 & 0 \\ 0 & -1 & 0 & -1 \\ 1 & 0 & 0 & 1 \end{pmatrix} \end{align} But the kernel I compute is spanned by $(1,1,1,-1)^T$ , which lies in neither $U$ nor $V$ . What am I doing wrong?
Note that when you solve the equation below: \begin{equation} \vec{0} = \alpha \vec{u}_1 + \beta \vec{u}_2 - \gamma \vec{v}_1 - \epsilon \vec{v}_2 = \begin{pmatrix} \vec{u}_1 & \vec{u}_2 & -\vec{v}_1 & -\vec{v}_2 \end{pmatrix} \begin{pmatrix} \alpha \\ \beta \\ \gamma \\ \epsilon \end{pmatrix} \end{equation} you find the scalars that define the vectors in the intersection of $U$ and $V$ , but not the vectors themselves. Hence, you should use the scalars you found to obtain the vectors. Choose $\alpha = 1$ , $\beta = 1$ for the combination on the $U$ side (equivalently $\gamma = 1$ , $\epsilon = -1$ on the $V$ side), and you will get: \begin{pmatrix} 1 \\ 1 \\-1 \\ 1 \end{pmatrix} which is indeed in $U\cap V$ .
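A quick sanity check in plain Python that the resulting vector really lies in both spans; note the signs: $\gamma=1$, $\epsilon=-1$ make the two combinations agree:

```python
# The vector obtained from the kernel coefficients lies in both spans:
# 1*u1 + 1*u2 on the U side, 1*v1 + (-1)*v2 on the V side.
u1, u2 = [0, 1, 0, 1], [1, 0, -1, 0]
v1, v2 = [-1, 1, 0, 0], [-2, 0, 1, -1]

from_U = [a + b for a, b in zip(u1, u2)]   # 1*u1 + 1*u2
from_V = [a - b for a, b in zip(v1, v2)]   # 1*v1 - 1*v2

assert from_U == from_V == [1, 1, -1, 1]
print("basis vector of the intersection:", from_U)
```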
|linear-algebra|vector-spaces|
1
Across all additive bases $A$ of $\mathbb{N}\setminus\{1\}$ of order $2$, what is the maximum possible value of the $n-$th term of $A?$
Across all additive bases $A\subset \mathbb{N}$ of $\mathbb{N}\setminus\{1\}$ of order $2$ , what is the maximum possible value of the $n-$ th term of $A?$ For example, across all additive bases of $\mathbb{N}\setminus\{1\},$ the maximum possible value of the $1$ st term of $A$ is $1,$ because $1$ must be the $1$ st term; otherwise $2\not\in A+A,$ so $A$ is not an additive basis of $\mathbb{N}\setminus\{1\}$ . Across all additive bases of $\mathbb{N}\setminus\{1\},$ the maximum possible value of the $2$ nd term of $A$ is $2$ , because the first term of $A$ must be $1$ and if the $2$ nd term of $A$ is not $2,$ then $3\not\in A+A.$ Across all additive bases of $\mathbb{N}\setminus\{1\},$ the maximum possible value of the $3$ rd term of $A$ is $4$ , because the first term of $A$ must be $1$ and the $2$ nd term of $A$ must be $2,$ and if the third term of $A$ is $>4$ then $5\not\in A+A.$ But the third term can be $4,$ and so this is the maximum possible value the $3$ rd term of $A$ can be.
The sequence in this problem appears to be A234941 . This is $2$ more than the corresponding term of A001212 , the solution to the postage stamp problem. Let me try to explain the connection. In this problem, the $n^{\text{th}}$ term $a_n$ is the maximum value such that we can find a set $A'$ of size $n-1$ such that $A' + A'$ contains $\{2, 3, \dots, a_n\}$ . This is because: On one hand, if $a_n$ is the $n^{\text{th}}$ term of an additive basis $A$ , then taking $A'$ to be the first $n-1$ terms of $A$ will satisfy this property. On the other hand, if $A'$ has this property, then $A' \cup \{a_n, a_n + 1, a_n + 2, \dots\}$ is an additive basis. Rather than finding the set $A'$ such that $A' + A'$ contains $\{2,3, \dots, a_n\}$ , it is equivalent to consider $B' = A' - 1$ , and ask for $B'+B'$ to contain $\{0,1,\dots,a_n-2\}$ . We must have $0 \in B'$ . So if we exclude $0$ from $B'$ , we have a set of size $n-2$ such that sums of one or two terms from $B'$ cover all of $\{1,2,\dots,a_n-2\}$ — which is exactly the postage stamp problem with $h=2$ stamps, so $a_n - 2$ is the corresponding term of A001212.
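The characterization in terms of $A'$ can be brute-forced for small $n$. This sketch (with an assumed cap on the elements searched) reproduces the values $1, 2, 4$ worked out in the question and finds the next term:

```python
from itertools import combinations

def a(n, search_limit=30):
    """Brute-force the n-th term: the largest a_n for which some set A'
    of n-1 positive integers satisfies {2,...,a_n} subset of A' + A'.
    search_limit is an assumed cap on the elements tried."""
    best = 1  # with A' empty, {2,...,1} is empty and trivially covered
    for A in combinations(range(1, search_limit), n - 1):
        sums = {x + y for x in A for y in A}
        m = 1
        while m + 1 in sums:   # longest run 2, 3, ..., m covered
            m += 1
        best = max(best, m)
    return best

print([a(n) for n in range(1, 5)])   # [1, 2, 4, 6]
```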
|number-theory|upper-lower-bounds|natural-numbers|ramsey-theory|additive-combinatorics|
0
Some questions about the proof of the third Sylow theorem
Sylow's Theorem: Let $|G| = p^n \cdot m$ with $p$ prime, $m$ coprime to $p$ , and $n \geq 1$ . The number $\alpha(p)$ of $p$ -Sylow groups of $G$ is a divisor of $m$ and of the form $\alpha(p) = 1+kp$ for some $k \geq 0$ . Proof: Let $S$ be the set of all $p$ -Sylow groups of $G$ . $G$ operates on this set through conjugation. By Sylow's 2nd theorem there is a single $G$ -orbit and $S =\{ gP g^{-1} \mid g \in G \}$ , so $\alpha(p) = [G : St_G(P)]$ is a divisor of $m$ . The group $P$ also operates on $S$ by conjugation. Let $Q \in S$ be an element of the fixed point set $S^P$ . It follows that $P \subseteq Q$ (1), and since $|P| = |Q|$ , even $P = Q$ . So $P$ is the only fixed point of this $P$ -action. This results in $\alpha(p) = |S| = 1 + kp$ for some $k \geq 0$ . (2) (1) and (2): Why? I didn't understand these two steps.
For (1): \begin{equation*} S^P\overset{\rm def}{=} \{A\in S:\ gAg^{-1}=A,\ \text{for all $g\in P$}\}. \end{equation*} Therefore, if $Q\in S^P$ , then $gQg^{-1}=Q$ for all $g\in P$ . Therefore $P\subseteq N_G(Q)$ by the definition of the normalizer. Since $Q\subseteq N_G(Q)\subseteq G$ , $|N_G(Q)|=p^nm_0$ for some $m_0\mid m$ . Therefore $P,Q$ are both Sylow- $p$ subgroups of $N_G(Q)$ . Since $Q$ is normal in $N_G(Q)$ (by the definition of the normalizer again) and $P,Q$ are conjugate in $N_G(Q)$ , by applying the second Sylow theorem to $N_G(Q)$ we have $P=Q$ . For (2): \begin{equation*} |S|=\sum_{\text{$\Lambda$ is an orbit}} |\Lambda|. \end{equation*} The above argument shows that $\{P\}$ is the only orbit with one element. Any other orbit $\Lambda$ has $\frac{|P|}{|{\rm Stab}(x)|}$ elements, where $x$ is an element of $\Lambda$ , which is a multiple of $p$ since $|P|=p^n$ and $|\Lambda|\ne 1$ . Hence $|S| = 1 + (\text{a sum of multiples of } p) = 1 + kp$ for some $k \geq 0$ .
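As a small numerical illustration of the theorem (not part of the proof), one can count the Sylow $3$-subgroups of $S_4$ by brute force, where $|S_4| = 24 = 2^3 \cdot 3$, so $p = 3$ and $m = 8$:

```python
from itertools import permutations

def compose(f, g):           # (f o g)(i) = f(g(i))
    return tuple(f[g[i]] for i in range(4))

identity = (0, 1, 2, 3)
elements = list(permutations(range(4)))

def order(f):
    k, g = 1, f
    while g != identity:
        g = compose(f, g)
        k += 1
    return k

# Every Sylow 3-subgroup of S_4 is cyclic of order 3, generated by a
# 3-cycle; a 3-cycle and its square generate the same subgroup.
subgroups = {frozenset({identity, f, compose(f, f)})
             for f in elements if order(f) == 3}
alpha = len(subgroups)
assert alpha % 3 == 1 and 8 % alpha == 0   # alpha = 4 = 1 + 1*3, divides m = 8
```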
|abstract-algebra|group-theory|proof-explanation|sylow-theory|
1
Lesser known forms of Euler's identity
https://twitter.com/martinmbauer/status/1763622278128947464?t=YObGhK4ZqjAXrwPaHxB8gw&s=19 Many know Euler's identity, but did you also know that $|e^{i\pi}| = |\pi^{ie}| = |i^{\pi e}| = 1$ ? While the claim about $\pi$ 's powers feels intriguing to me (for I don't know any better), the claim about $i$ 's powers feels outrageous. $\pi e = 8.53973422...$ $i^x$ for $x \in \mathbb{Z}$ is one of $\{ 1, i, -1, -i \}$ , and that's "if and only if" as far as I know. $8.539...$ is not an integer, so the last identity must be false. Is the quoted tweet just misinformation or am I getting something wrong?
Yes, there is some misinformation here, or at least omitted information. The first two terms, $e^{i\pi}$ and $\pi^{ie}$ , are well defined by Euler's formula. Both have a complex value with norm $1$ by rewriting them as $$ e^{i\theta} = \cos \theta + i \sin \theta $$ for a real number $\theta$ (which is $\theta=\pi$ in the first case and, as DarkLordofPhysics and Kevin Dietrich point out in comments, $\theta = e \ln \pi$ in the second case). The third term $i^{\pi e}$ does not have a conventional definition. Attempts to apply Euler's formula (or similar) result in a multivalued expression, centering on the multivalued complex logarithm $\ln i$ . The restrictions that make a well-defined function out of the complex logarithm are called branch cuts , and this topic comes up fairly often in related Questions asked here in the past, such as Complex logarithm function .
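A quick check with Python's `cmath`, which uses the principal branch of the complex power: all three principal values have modulus $1$. (For $i^{\pi e}$, every branch in fact gives modulus $1$, since every branch of $\ln i$ is purely imaginary, but the values themselves differ from branch to branch.)

```python
import cmath, math

principal_values = [
    cmath.exp(1j * math.pi),        # e^{i pi}
    math.pi ** (1j * math.e),       # principal value of pi^{ie}
    1j ** (math.pi * math.e),       # principal value of i^{pi e}
]
for z in principal_values:
    assert abs(abs(z) - 1) < 1e-12  # all lie on the unit circle
```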
|complex-numbers|transcendental-numbers|
0
Dual space of $L^p([0,1])$ for $0 < p < 1$.
I convinced myself that in the space $L^p([0,1])$ there is no non-empty, open, convex subset different from $L^p([0,1])$ itself when $0 < p < 1$ . I was told that I can conclude from this statement that the topological dual of $L^p([0,1])$ is $\{0\}$ , but I cannot see it. I suppose that the Hahn-Banach theorem would be useful here but I am not sure how. Thanks in advance.
Any nontrivial element $\lambda\in X'\setminus\{0\}$ divides the space into the two halves $\{x\in X \mid \lambda(x)>0\}$ and $\{x\in X \mid \lambda(x)<0\}$ , both of which are non-empty (because $\lambda$ is non-zero), open (because $\lambda$ is continuous), convex, and proper subsets of $X$ . By your prior reasoning, no such sets exist.
|functional-analysis|dual-spaces|
0
Use of equivalent and equal
Really simple question. Is it valid to do this: $$ 2x + 2 = 1 \iff x = \frac{1 - 2}{2} = -\frac{1}{2} $$ I mean, is anything wrong, when solving for $x$ , with not using the $\iff$ symbol any more and just using $=$ ?
What you write is correct. I think it would be even better (equally correct and easier to read) with words: $$ 2x + 2 = 1 $$ implies $$ x = \frac{1-2}{2} = -\frac{1}{2} . $$ Note the period at the end of the sentence - since it is a sentence. In this case you don't care about the reverse implication, which happens to be true here but isn't always when manipulating expressions. Replacing the $\iff$ by $=$ would make your argument just plain wrong - in a way all too commonly seen. Read literally it would say $$ 2x + 2 = 1 = \cdots = -\frac{1}{2} . $$
|logic|definition|
0
Dual space of $L^p([0,1])$ for $0 < p < 1$.
I convinced myself that in the space $L^p([0,1])$ there is no non-empty, open, convex subset different from $L^p([0,1])$ itself when $0 < p < 1$ . I was told that I can conclude from this statement that the topological dual of $L^p([0,1])$ is $\{0\}$ , but I cannot see it. I suppose that the Hahn-Banach theorem would be useful here but I am not sure how. Thanks in advance.
Denote by $\mathbb D$ the open unit disk in the complex plane. Let $\varphi:L^p[0,1]\to\mathbb C$ be linear and continuous. Then $\varphi^{-1}(\mathbb D)$ is open, because $\varphi$ is continuous; nonempty, because it contains $0$ ; convex, because $\varphi$ is linear. Then $\varphi^{-1}(\mathbb D)=L^p[0,1]$ . This means that $|\varphi(f)| < 1$ for all $f$ . As $L^p[0,1]$ is a vector space, $|\varphi(tf)| = |t|\,|\varphi(f)| < 1$ for every scalar $t$ , which forces $\varphi(f)=0$ for every $f$ , i.e. $\varphi=0$ .
|functional-analysis|dual-spaces|
0
Evaluation of $\int \limits _0 ^{2 \pi} \frac {(r \cos \phi +x) \cos n\phi} {r^2+2xr \cos \phi +x^2} d\phi$
How to compute $$\int \limits _0 ^{2 \pi} \frac {(r \cos \phi +x) \cos n\phi} {r^2+2xr \cos \phi +x^2} d\phi ?$$ The answer I am provided with is $\dfrac {(-1)^n\pi r^n} {x^{n+1}}$ for $\ x>r$ , but I have no idea whether this is actually correct and how to get this.
Utilize \begin{align} I(a) =& \int_0^{2\pi}\ln(1+2a\cos\phi+a^2) \cos n\phi\ d\phi\\ = & \int_0^{2\pi} \bigg(2\sum_{k=1}^\infty \frac{(-1)^{k+1}}{ka^k}\ \cos k \phi\bigg) \cos n\phi \ d\phi=\frac{2\pi(-1)^{n+1}}{n a^n} \end{align} with $|a|>1$ to evaluate \begin{align} &\int_0 ^{2 \pi} \frac {(r \cos \phi +x) \cos n\phi} {r^2+2xr \cos \phi +x^2} \ d\phi = \frac1{2r}\frac{dI(a)}{da}\bigg|_{a=\frac xr }= \frac {(-1)^n\pi r^n} {x^{n+1}} \end{align}
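The closed form can be sanity-checked numerically. A plain Riemann sum converges very quickly here because the integrand is smooth and $2\pi$-periodic; the parameters $r=1$, $x=2$, $n=3$ are chosen arbitrarily with $x>r$:

```python
import math

r, x, n = 1.0, 2.0, 3
M = 4000
s = 0.0
for k in range(M):
    phi = 2 * math.pi * k / M
    s += (r * math.cos(phi) + x) * math.cos(n * phi) / (
        r * r + 2 * x * r * math.cos(phi) + x * x)
numeric = s * 2 * math.pi / M                         # trapezoid = Riemann sum here
exact = (-1) ** n * math.pi * r ** n / x ** (n + 1)   # -pi/16 for these parameters
assert abs(numeric - exact) < 1e-10
```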
|integration|definite-integrals|trigonometric-integrals|
0
Why does this definition capture the notion of anticommutativity intuitively?
My textbook (Amann and Escher, Analysis 1 ) defines an operation $\circledast$ (i.e. a function $\circledast: X \times X \to X$ ) as anticommutative if it obeys: There is a right identity element $r:=r_X$ , that is, $\exists r \in X: x\circledast r=x$ for all $x \in X$ , and $x \circledast y = r \iff (x \circledast y) \circledast (y \circledast x) = r \iff x = y, \forall x,y$ . Since this definition came up in an exercise (which I've since solved), not much elaboration was given by the authors. My questions are: How does this generalize our intuitive notion of anticommutativity? Why is it defined in terms of a right identity in particular? I suppose a left identity would be suitable too?
It's a bit hard to relate directly to general anticommutativity, but it is fairly parallel to how $x\star y=x-y$ would work. Clearly, $r$ is the replacement for $0$ , i.e. $ x \star r = x-0 =x$ . And $x \star y = r$ is $x - y =0$ is $x=y$ ; and of course $(x \star y) \star (y \star x) = (x-y) - (y-x) = 2(x-y)$ , which is zero iff $x=y$ iff $(x \star y) =r$ (so no characteristic 2). Constructing this set of rules from anticommutativity is messier. You want $(x \star y) =- (y \star x)$ , but then you need a unary $-$ . What is that? It's the thing for which $x + (-x)=0$ . Well, now you need $0$ and binary $+$ . Zero is ok, it's that $r$ for which $ x \star r =x$ for all $x$ (and we get our first rule above). But the binary plus is not great; you could use $x+y = x \star (-y)$ . But that's back to needing unary minus. One issue is that, as was pointed out, these rules don't hold for general anticommutative operations: $\vec{v} \times \vec{w} =0 $ does not imply $\vec{v} = \vec{w}$ . So one should view these axioms as capturing a subtraction-like operation rather than anticommutativity in full generality.
|abstract-algebra|functions|
1
Use of equivalent and equal
Really simple question. Is valid to do this: $$ 2x + 2 = 1 \iff x = \frac{1 - 2}{2} = -\frac{1}{2} $$ I mean is anything wrong when solved for $x$ to not use $\iff$ symbol more and just use $=$ .
The equivalence sign is actually pretty smart since many students often write $$ 2x+2=1 \Longrightarrow x=-\dfrac{1}{2} $$ and ignore the fact that this is only a necessary condition. Observing that both directions, $\Longrightarrow $ AND $\Longleftarrow $ hold makes it unnecessary to check whether your finding $x=-1/2$ is actually a solution. This may be obvious in a case like this, but it isn't if you divided by a variable and haven't ruled out that it could have been zero, or if you square an equation. E.g. $x+1=0 \Longrightarrow x^2=1 \not\Longrightarrow x=1.$ The more complicated your algebraic manipulations are, the more is it necessary to check whether something found at the end of the calculations is actually a solution to the problem. By writing $\Longleftrightarrow $ you make sure that you checked the opposite direction, too.
|logic|definition|
0
I'm not quite sure I understand this one. Show that the specified real number is rational: $7^{2/3}$
This is the first problem in this discrete math assignment, and I'm a little bit confused because I thought that the square root, cube root, or $n$th root of a non-square, non-cube, etc. were not rational numbers. Is that untrue? If so, I don't even know how to start working on this one. Just as a reference, this problem is from the book Discrete Mathematics and Applications by Kevin Ferland, p. 175 #2.
Proof that the number $7^\frac{2}{3}$ is irrational: Suppose otherwise, i.e. that $7^\frac{2}{3} = \frac{a}{b}$ where $a, b$ are integers with $\gcd(a, b) = 1$ . Then we may write $$49b^3 = a^3$$ which implies $7 \mid a$ , say $a = 7k$ with $k \in \mathbb{Z}$ , so $49b^3 = 343k^3$ , i.e. $b^3 = 7k^3$ , which implies $7 \mid b$ . That is, both $a$ and $b$ are divisible by $7$ , which contradicts our supposition $\gcd(a,b) =1$ . $\square$
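As an empirical companion to the proof (no substitute for it, since the proof covers all integers at once), a brute-force search finds no solution of $a^3 = 49b^3$ in a small range:

```python
# If 7**(2/3) = a/b, then a**3 = 49 * b**3; search a modest range for
# any such pair.  None exists.
hits = [(a, b) for b in range(1, 200) for a in range(1, 500)
        if a ** 3 == 49 * b ** 3]
assert hits == []
```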
|discrete-mathematics|rationality-testing|
0
Using the Hessian, prove that affine scalar field is convex
Using the Hessian, prove that the affine scalar field $f : \Bbb R^n \to \Bbb R$ defined by $$f(x) = A^T x + b$$ where $A \in \Bbb R^n$ and $b \in \Bbb R$ , is convex. My intuition on this problem is to apply the theorem that if $f(x)$ is twice continuously differentiable, then $f(x)$ is convex if and only if the Hessian is positive semidefinite, but I don't know how to find the Hessian of this function and show that it is positive semidefinite.
Element $\nabla^2 f_{ij}$ of the Hessian is $\frac{\partial^2 f}{\partial x_i \partial x_j}$ . Because $f$ is affine, every second partial derivative is zero, so the Hessian is the zero matrix. This is both PSD and NSD, so an affine function is both convex and concave.
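One way to convince yourself numerically: a central-difference approximation of the Hessian of an affine map vanishes up to rounding. The particular $A$, $b$, and evaluation point below are arbitrary choices:

```python
# f(x) = A^T x + b in R^3, with arbitrarily chosen A and b.
A = [2.0, -1.0, 0.5]
b = 4.0
f = lambda x: sum(a * xi for a, xi in zip(A, x)) + b

h = 1e-3
def hess(i, j, x):
    # mixed central difference: d^2 f / (dx_i dx_j)
    def shift(si, sj):
        y = list(x)
        y[i] += si * h
        y[j] += sj * h
        return f(y)
    return (shift(1, 1) - shift(1, -1) - shift(-1, 1) + shift(-1, -1)) / (4 * h * h)

x0 = [0.3, -0.7, 1.2]
H = [[hess(i, j, x0) for j in range(3)] for i in range(3)]
assert all(abs(H[i][j]) < 1e-8 for i in range(3) for j in range(3))
```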
|multivariable-calculus|convex-analysis|hessian-matrix|scalar-fields|
0
Unexpected appearances of $\pi^2 /~6$.
"The number $\frac 16 \pi^2$ turns up surprisingly often and frequently in unexpected places." - Julian Havil, Gamma: Exploring Euler's Constant . It is well-known, especially in 'pop math,' that $$\zeta(2)=\frac1{1^2}+\frac1{2^2}+\frac1{3^2}+\cdots = \frac{\pi^2}{6}.$$ Euler's proof of which is nice. I would like to know where else this constant appears non-trivially. This is a bit broad, so here are the specifics of my question: We can fiddle with the zeta function at arbitrary even integer values to eek out a $\zeta(2)$ . I would consider these 'appearances' of $\frac 16 \pi^2$ to be redundant and ask that they not be mentioned unless you have some wickedly compelling reason to include it. By 'non-trivially,' I mean that I do not want converging series, integrals, etc. where it is obvious that $c\pi$ or $c\pi^2$ with $c \in \mathbb{Q}$ can simply be 'factored out' in some way such that it looks like $c\pi^2$ was included after-the-fact so that said series, integral, etc. would equal
In one of my papers in the Journal of Number Theory: https://www.sciencedirect.com/science/article/abs/pii/S0022314X1930277X A remark after Corollary 8.1 implies the result: $$ \lim_{x\rightarrow\infty}\frac1x\sum_{p\leq x} \frac{\tau(p-1)\phi(p-1)}{p-1}=\frac6{\pi^2}. $$ Here, $\tau(n)$ is the number of divisors of $n$ , and $\phi(n)$ is Euler's totient function. The sum is taken over primes at most $x$ .
|integration|sequences-and-series|riemann-zeta|big-list|pi|
0
Maximum of $\frac{x^{15}(1-x)y^{15}(1-y)}{(1-xy)^{15}}$ where $x,y\in(0,1)$
I need the maximum of $$ \frac{x^{15}(1-x)y^{15}(1-y)}{(1-xy)^{15}}$$ where $0<x<1$ , $0<y<1$ . Define $$ f(x,y)=\frac{x^{15}(1-x)y^{15}(1-y)}{(1-xy)^{15}}$$ So the function is positive for $0<x<1$ , $0<y<1$ and vanishes on the boundary $x=0$ or $x=1$ or $y=0$ or $y=1$ . Therefore it attains the maximal value inside the region. So for maxima or minima $\frac{\partial{f}}{\partial{x}}=0$ and $\frac{\partial{f}}{\partial{y}}=0$ . Using Wolfram Alpha $$\frac{\partial{f}}{\partial{x}}=-\frac{x^{14}(y-1)y^{15}(x^2y-16x+15)}{(1-xy)^{16}}=0$$ which gives $$x^2y-16x+15=0\tag{1}$$ Similarly $\frac{\partial{f}}{\partial{y}}=0$ gives $$y^2x-16y+15=0\tag{2}$$ Subtracting $(2)$ from $(1)$ , we obtain $$ x^2y-y^2x+16y-16x=0 $$ So we get $$(xy-16)(x-y)=0 $$ which gives $x=y$ or $xy=16$ . Since $0<x,y<1$ , $xy=16$ is not possible. So we have for maxima or minima $$x=y\tag{3}$$ So we are down to a function of a single variable $$ F(x)=\frac{x^{30}(1-x)^2}{(1-x^2)^{15}}$$ where $0<x<1$ . Any help will be highly appreciated. Thank you
Consider the limit of $f(x,y)$ as $(x,y)$ approaches $(1,1)$ along the path $x=y$ . Then $$\lim_{(x,y)\to(1,1)}f(x,y)=\lim_{x\to1}\frac{x^{30}(1-x)^2}{(1-x^2)^{15}}=\lim_{x\to1}\frac{x^{30}}{(1-x)^{13}(1+x)^{15}}=\infty.$$ This shows that $f(x,y)$ does not have a global maximum in the given domain. At any local extremum you would have $\frac{\partial f}{\partial x}=\frac{\partial f}{\partial y}=0$ . You have already shown that this implies $x=y$ . Then also $\frac{d}{dx}f(x,x)=0$ , where $$\frac{d}{dx}f(x,x)=-\frac{2x^{29}(1-x)^2(x^2+x-15)}{(1-x^2)^{16}},$$ but the numerator is easily verified to be nonzero for $0<x<1$ . So $f(x,y)$ does not even have any local extremum in the given domain.
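A few sample values along $x=y$ illustrate the blow-up numerically:

```python
def f(x, y):
    return x**15 * (1 - x) * y**15 * (1 - y) / (1 - x * y)**15

# Along x = y, the values explode as we approach (1, 1).
samples = [f(t, t) for t in (0.9, 0.99, 0.999)]
assert samples[0] < samples[1] < samples[2]
```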
|real-analysis|calculus|multivariable-calculus|functions|maxima-minima|
1
Hessian of the Rayleigh quotient $\frac{\langle x,Ax\rangle}{\langle x,x\rangle}$
I am struggling with the following question. Let $A$ be a positive semidefinite matrix ( $A\in\mathscr{S}_n^+(\mathbb{R})$ ) and let us consider the Rayleigh quotient defined as: $$\forall x\in\mathbb{R}^n\backslash 0,~f(x):=\frac{\langle x,Ax\rangle}{\langle x,x\rangle}. $$ How can we compute the Hessian of $f$ ? I have tried the Taylor expansion but couldn't get anything from there... Thank you very much!
$ \def\a{\alpha} \def\b{\beta} \def\l{\lambda} \def\BR#1{\Big(#1\Big)} \def\LR#1{\left(#1\right)} \def\op#1{\operatorname{#1}} \def\trace#1{\op{Tr}\LR{#1}} \def\frob#1{\left\| #1 \right\|_F} \def\qiq{\quad\implies\quad} \def\mt{\mapsto} \def\p{\partial} \def\grad#1#2{\frac{\p #1}{\p #2}} \def\hess#1#2#3{\grad{^2 #1}{#2\,\p #3}} \def\Hess#1#2{\grad{^2 #1}{#2^2}} \def\c#1{\color{red}{#1}} \def\CLR#1{\c{\LR{#1}}} $ Consider the scalar functions (aka quadratic forms) $$\eqalign{ \a &= x^TAx &\qiq d\a = \LR{2Ax}^Tdx \\ \b &= x^TBx &\qiq d\b = \LR{2Bx}^Tdx \\ }$$ whose ratio is a generalized Rayleigh quotient and whose gradient can be calculated as $$\eqalign{ \l &= \frac{\a}{\b} \\ d\l &= \frac{\b\,d\a-\a\,d\b}{\b^2} \;=\; 2\b^{-1}\LR{Ax-\l Bx}^Tdx \\ \grad{\l}{x} &= 2\b^{-1}\BR{A-\l B}x \;\doteq\; \c{g} \\ }$$ The Hessian is the gradient of $g$ $$\eqalign{ dg &= 2\b^{-1}\LR{A-\l B}\c{dx} - 2\b^{-1}\c{d\l}Bx - 2\b^{-2}\c{d\b}\LR{A-\l B}x \\ &= 2\b^{-1}\LR{A-\l B}\c{dx} - 2\b^{-1}Bx\CLR{g^Tdx} - 2\b^{-1}g\CLR{x^TB\,dx} \\ \Hess{\l}{x} &= 2\b^{-1}\BR{A-\l B - Bx\,g^T - g\,x^TB} \\ }$$ For the problem as posed, set $B=I$ and $\b = x^Tx$ , which gives $$\Hess{\l}{x} = \frac{2}{x^Tx}\BR{A-\l I - x\,g^T - g\,x^T}, \qquad g = \frac{2}{x^Tx}\LR{A-\l I}x.$$
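For the ordinary Rayleigh quotient ($B=I$), the formula $\nabla^2\lambda = \frac{2}{x^Tx}(A-\lambda I) - \frac{2}{x^Tx}(xg^T + gx^T)$ is easy to check against finite differences. This plain-Python sketch uses an arbitrarily chosen symmetric $2\times2$ matrix $A$:

```python
A = [[2.0, 1.0], [1.0, 3.0]]
x = [1.0, 2.0]

dot = lambda u, v: sum(a * b for a, b in zip(u, v))
matvec = lambda M, v: [dot(row, v) for row in M]

def lam(v):                                   # Rayleigh quotient
    return dot(v, matvec(A, v)) / dot(v, v)

beta = dot(x, x)
l = lam(x)
g = [2.0 / beta * (matvec(A, x)[i] - l * x[i]) for i in range(2)]
# Analytic Hessian: (2/beta)*(A - lam*I) - (2/beta)*(x g^T + g x^T)
H = [[2.0 / beta * (A[i][j] - l * (i == j))
      - 2.0 / beta * (x[i] * g[j] + g[i] * x[j]) for j in range(2)]
     for i in range(2)]

# Compare against mixed central differences of lam itself.
h = 1e-4
for i in range(2):
    for j in range(2):
        def shift(si, sj):
            y = list(x)
            y[i] += si * h
            y[j] += sj * h
            return lam(y)
        fd = (shift(1, 1) - shift(1, -1) - shift(-1, 1) + shift(-1, -1)) / (4 * h * h)
        assert abs(fd - H[i][j]) < 1e-5
```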
|real-analysis|linear-algebra|hessian-matrix|
1
A bilinear operator is continuous iff verifying $|| \phi (v;w)|| \leq M ||v|| ||w|| $
First, I know that this question has already been asked, for example here, but for a bilinear operator in only one variable; here I want to show it for a bilinear operator in two variables. Question: Prove that a bilinear operator $\phi$ is continuous iff it satisfies $\| \phi (v;w)\| \leq M \|v\| \|w\|$ . Answer: 1 - $\forall v \in E, w \in F$ , $\| \phi (v;w)\|_G \leq M \|v\|_E \|w\|_F \Rightarrow \phi $ is continuous. By definition $ \phi $ is continuous iff for every pair of sequences $ (v_n,w_n) \in E \times F$ with $\lim_{n \to \infty}(v_n;w_n) = (v;w) $ we have $\lim_{n \to \infty} \phi (v_n;w_n) = \phi(v;w)$ . By bilinearity, $0 \leq \| \phi(v;w) - \phi(v_n;w_n)\|_G = \| \phi(v-v_n;w) + \phi(v_n;w - w_n)\|_G \leq \| \phi(v-v_n;w)\|_G + \| \phi(v_n;w - w_n)\|_G \leq M \|v-v_n \|_E \|w \|_F + M \|w-w_n \|_F \|v_n \|_E \xrightarrow[n \to \infty]{} 0,$ where the last step uses the definition of the limits and the fact that the convergent sequence $(v_n)$ is bounded. Thus by the sandwich theorem, Q.E.D. 2 - $\phi $ is continuous $ \Rightarrow \forall v \in E, w \in F$ , $\| \phi (v;w)\|_G \leq M \|v\|_E \|w\|_F$ .
My answer is correct according to https://math.stackexchange.com/users/112915/maowao Moreover, we can note that this proof can be easily generalized.
|general-topology|solution-verification|linear-transformations|operator-theory|
1
Please help in probability
A number is picked uniformly at random from the set of five-digit natural numbers. What is the probability that at least one of the digits of the number thus picked is $0$ ? I computed the numerator as follows: the first digit has $9$ options, then we select $1$ of the remaining $4$ digits and lock it to zero, so $\binom41$ , and the remaining three digits have $10 \cdot 10 \cdot 10$ options, so the probability is $36000/90000$ . But my friend explained: the count of $5$ -digit natural numbers that don't have any $0$ digit in any position is $9^5 = 59049$ . Therefore $90000 - 59049 = 30951$ five-digit natural numbers have at least one position occupied by a $0$ . Therefore, the probability of selecting a $5$ -digit natural number from the set of all $5$ -digit natural numbers such that at least one position of the selected number is occupied by a $0$ digit $= 30951/90000 = 3439/10000$ . I can't understand where my approach is wrong. It's $9$ options for the first digit, then $1$ option of zero for any of the four digits, and then $10$ options each for the remaining three digits.
As has already been pointed out, the complement approach is the easiest. If you want to stick to your approach, and also avoid inclusion-exclusion , you can add up cases with exactly one digit zero, two digits zero, etc to get $\binom41*9^4 + \binom42*9^3 + \binom43*9^2 +\binom44*9^1 = 30951$ as the numerator, and $Pr = \Large\frac{30951}{90000}$ PS You must definitely learn inclusion-exclusion, as it is an essential tool in your armoury, but don't start with this problem, as it is not a simple one because place values of the $0's$ will also come into the picture. In fact, learning apart, my credo is that the simplest way to solve a problem is best, and the simplest here, as you know, is using the complement to get $ Pr = \Large\frac{90000-9^5}{90000}$
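Both counts are small enough to confirm by direct enumeration:

```python
# Enumerate all 90000 five-digit numbers and count those containing a 0.
with_zero = sum(1 for n in range(10000, 100000) if '0' in str(n))
assert with_zero == 90000 - 9 ** 5 == 30951
print(with_zero / 90000)   # 0.3439
```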
|probability|
0
Show that, if $L$ is a regular language, then so is $\{w : \exists n \in \Bbb{N}, w^n \in L\}$
Suppose $L$ is a regular language over an alphabet $\Sigma$ . Let $$L' = \{w : \exists n \in \Bbb{N}, w^n \in L\}.$$ Prove that $L'$ is regular too.
We plan to construct an NFA with $\varepsilon$ -moves that accepts $L'$ , to show it is regular. For all $n \in \Bbb{N}$ , let $$L_n = \{w : w^n \in L\}.$$ We will show two things: $L' = \bigcup_{n=1}^N L_n$ for some $N \in \Bbb{N}$ , Each $L_n$ is regular. As is commonly known, the union of finitely many regular languages is regular. Thus, these two claims together prove that $L'$ is regular. Let $(Q, \Sigma, \delta, q_0, F)$ be a NFA- $\varepsilon$ that accepts $L$ , in the sense of the wikipedia article: $Q$ is the set of states, $\Sigma$ remains the alphabet, $\delta : (Q \times \Sigma \cup \{\varepsilon\}) \to \mathcal{P}(Q)$ is the transition function, $q_0 \in Q$ is the initial state, and $F \subseteq Q$ is the set of final states. First claim To prove the first claim, let $N = |Q|$ , and suppose integer $n$ and string $w \in \Sigma^*$ exist such that $w^n \in L$ . Without loss of generality, suppose $n$ is the least positive integer such that $w^n \in L$ . We wish to show $n \leq N$ .
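The pigeonhole idea behind the first claim is easy to test empirically on a small DFA (a special case of an NFA-$\varepsilon$): if $w^n \in L$ for some $n \geq 1$, this already happens for some $n \leq N$. The example language, "strings over $\{a,b\}$ ending in $ab$", is an arbitrary choice:

```python
from itertools import product

# 3-state DFA for L = {strings over {a, b} ending in "ab"}.
delta = {(0, 'a'): 1, (0, 'b'): 0,
         (1, 'a'): 1, (1, 'b'): 2,
         (2, 'a'): 1, (2, 'b'): 0}
accept, N = {2}, 3

def in_L(w):
    q = 0
    for ch in w:
        q = delta[(q, ch)]
    return q in accept

for length in range(1, 5):
    for w in map(''.join, product('ab', repeat=length)):
        ns = [n for n in range(1, 4 * N) if in_L(w * n)]
        if ns:                       # w is in L'
            assert min(ns) <= N      # ...witnessed by some n <= N
```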
|regular-language|finite-state-machine|
0
Using Fake Numerals to Make Real Decimal Numbers
The setup to this question is very simple: Take the numbers $\frac{1}{1}, \frac{21}{12}, \frac{321}{123},...,\frac{987654321}{123456789}$ and plot them versus the natural numbers, as seen here: https://i.stack.imgur.com/psgiz.png . Now, seeing that this follows a linear path, and that adding more numerals seems to increase the value of the fraction by about $0.9$ , the obvious next step is to ask whether this line continues for new numeral symbols. Imagine the letter $A$ was a transdecimal character for $10$ , so that we've extended the common decimal number system to an undecimal system. Then we might postulate that $\frac{A987654321}{123456789A}$ is approximately $8.9$ , if we follow the line of best fit. But is there a way to actually find this exact value?
Apparently you're taking $$f(n) = \dfrac{\sum_{j=1}^n j\cdot 10^{j-1}}{\sum_{j=1}^n j\cdot 10^{n-j}}$$ Then $$ f(n) = \dfrac{(9n-1) \cdot 10^n + 1}{10^{n+1} - 9 n - 10} $$ In particular $$ f(10) = \frac{10987654321}{1234567900} = 8.90000000891000000891\ldots$$
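The closed form and the quoted value of $f(10)$ can be verified in exact rational arithmetic (Python with `fractions`; this check is my own addition):

```python
from fractions import Fraction

def f_sums(n):
    """f(n) as the ratio of the two digit-pattern sums, exactly."""
    num = sum(j * Fraction(10) ** (j - 1) for j in range(1, n + 1))
    den = sum(j * Fraction(10) ** (n - j) for j in range(1, n + 1))
    return num / den

def f_closed(n):
    """The closed form ((9n-1)*10^n + 1) / (10^(n+1) - 9n - 10)."""
    return Fraction((9 * n - 1) * 10 ** n + 1, 10 ** (n + 1) - 9 * n - 10)

# The closed form matches the sums, and f(10) is the "undecimal" value above.
assert all(f_sums(n) == f_closed(n) for n in range(1, 13))
assert f_closed(10) == Fraction(10987654321, 1234567900)
print(float(f_closed(10)))  # ≈ 8.9000000089
```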
|number-systems|
1
Prove that $(G, \circ)$ is a group
Given that $F_1, F_2, F_3, F_4$ are maps $F_i: \mathbb{R}^2 \rightarrow \mathbb{R}^2$ defined as $F_1(x, y) = (x, y), F_2(x, y) = (-x, y), F_3(x, y) = (x, -y), F_4(x, y) = (-x, -y)$. If $G = \{F_1, F_2, F_3, F_4\}$, demonstrate that $(G, \circ)$ is a group. I know that I have to prove the internal law (closure), the associative rule, the existence of the neutral element and the existence of the symmetric (inverse) elements for all $F_i \in G$, but I am not sure where to begin, as I have not worked with sets of functions and the problem does not state the definition of $\circ$. Is there a predefined meaning of $\circ$? Also, how do I go about solving this problem?
Since $\operatorname {Sym}(X)$ is a group for every set $X$ , and all four $F_i$ 's are bijections on $X=\mathbb R^2$ , it suffices to prove that $G$ (which is then a nonempty finite subset of a group) is closed under map composition. And in fact (let me omit all the " $\circ$ "): \begin{alignat}{1} &F_1F_k=F_k, k=1,2,3,4 \\ &F_kF_1=F_k, k=1,2,3,4 \\ &F_k^2=F_1, k=1,2,3,4 \\ &F_2F_3=F_4 \\ &F_2F_4=F_3 \\ &F_3F_4=F_2 \\ &F_3F_2=F_4 \\ &F_4F_2=F_3 \\ &F_4F_3=F_2 \\ \end{alignat} Btw, since all the nontrivial elements of $G$ have order $2$ , $G\cong C_2\times C_2$ .
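The whole multiplication table can be checked mechanically by representing each $F_i$ by its pair of signs (a small Python illustration of my own, not part of the original answer):

```python
from itertools import product

# Represent F_i by its pair of signs: F(x, y) = (s1*x, s2*y).
F = {
    1: (1, 1),    # identity
    2: (-1, 1),   # reflection in the y-axis
    3: (1, -1),   # reflection in the x-axis
    4: (-1, -1),  # rotation by pi
}

def compose(a, b):
    """(F_a o F_b) is again a sign pair: signs multiply componentwise."""
    return (F[a][0] * F[b][0], F[a][1] * F[b][1])

# Closure: every composition lands back in G, so the table below is total.
table = {(a, b): next(k for k, s in F.items() if s == compose(a, b))
         for a, b in product(F, F)}

assert table[(2, 3)] == 4 and table[(3, 4)] == 2 and table[(2, 4)] == 3
assert all(table[(k, k)] == 1 for k in F)           # F_k^2 = F_1
assert all(table[(1, k)] == table[(k, 1)] == k for k in F)  # F_1 is the identity
print("G is closed under composition, with identity F_1 and F_k^2 = F_1")
```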
|linear-algebra|group-theory|functions|discrete-mathematics|
0
Prove $XX'+ZZ'=YY'$
Let $X, Y$ and $Z$ be the feet of the angle bisectors of triangle $ABC$ . The circumcircle of triangle $XYZ$ cuts off three segments from the lines $AB, BC$ and $CA$ ; call them $XX', YY', ZZ'$ . Prove that at least one combination exists such that the sum of two of the segments' lengths equals the third segment's length. WLOG, I assumed $YY'\geq XX'\geq ZZ'$ , so we have to prove $YY'=XX'+ZZ'$ . I tried coordinate geometry, assuming $A,B,C$ to be $(x_i,y_i)$ for $i=1,2,3$ . Then we can find the coordinates of $X,Y,Z$ and hence the equation of the circumcircle of $\Delta XYZ$ , and then its intercepts with the three sides. But it would consume a whole day if a human followed this method without computer help. Can someone please help me find a good solution for this problem?
I will outline another approach that is useful to remember for Euclidean/olympiad geometry problems. A simple application of the Law of Sines gives us: $$XX' = 2r\sin\left(\frac\alpha 2+\gamma\right)=2r\sin\left(\frac\alpha 2+\beta\right)=r\left(\sin\left(\frac\alpha 2+\gamma\right)+\sin\left(\frac\alpha 2+\beta\right)\right) = $$ $$=2r\sin\frac{\pi}{2}\cos\frac{\beta-\gamma}{2} = 2r\cos\frac{\beta-\gamma}{2}.$$ From here, it suffices to prove that $\cos\frac{\beta-\gamma}{2}, \cos\frac{\gamma-\alpha}{2}, \cos\frac{\alpha-\beta}{2}$ satisfy your condition. Then you can brute force this however you want - I personally would start out by finding: $$\sin\frac{\alpha}{2} = \sqrt{\frac{1-\cos\alpha}{2}} = \sqrt{\dfrac{(a-b+c)(a+b-c)}{4bc}} = \sqrt{\frac{(p-b)(p-c)}{bc}}$$ and then you get: $$\cos\frac{\alpha}{2} = \sqrt{\dfrac{p(p-a)}{bc}}$$ and so on.
|geometry|contest-math|triangles|circles|coordinate-systems|
0
Sequential divergence criterion for functional limit, diverging function evaluated at converging sequence
According to Abbott 4.2.5 "Divergence Criterion for Functional Limits", Let $f$ be a function defined on $A$ , and $c$ be a limit point of $A$ . If there exist two sequences $(x_n)$ , $(y_n)$ with $x_n \neq c$ and $y_n \neq c$ and $\lim_{n \to \infty} x_n = \lim_{n \to \infty} y_n = c$ but $\lim_{n \to \infty} f(x_n) \neq \lim_{n \to \infty} f(y_n)$ , then $\lim_{x \to c} f(x)$ does not exist. Does this hold when $\lim_{n \to \infty} f(x_n)$ does not exist, and $\lim_{n \to \infty} f(y_n)$ does not exist? For example, say $(x_n)$ converges but $f(x_n)$ oscillates. The limits, at least, would not be equal in this case. I will leave this question here, cautious, since my previous analysis question was closed for "lack of focus", where a commenter described it as "a word salad". Stack Exchange wants enough but not too much information, it seems. There are many answers on this site where the limits do exist, but I struggle to find examples of divergence criterion where e.g. $f(x_n)$ oscillates as I stated above. If you
Yes: in fact, it is not too difficult to prove that $f(x)\rightarrow L$ as $x\rightarrow c$ if and only if for every sequence $(x_n)$ with $x_n\rightarrow c$ and $x_n\ne c$ (and the $x_n$ lying in the domain of $f$ ), we have $f(x_n)\rightarrow L$ ; this follows from the definition of convergence of sequences and the definition of the limit of a function. If $(f(x_n))$ fails to have a limit for even one such sequence, then the function definitely doesn't converge to a limit. Doing that proof may be a useful exercise to convince yourself of this fact.
|real-analysis|calculus|limits|definition|epsilon-delta|
1
If there is a bijection $F : A \to A / R$, then $R = \{(x, y) \in A^2 : x = y\}$?
Assume $R$ is an equivalence relation over $A$ and there is a bijection between $A$ and $A / R$ . Does this entail $R = \left\{ (x, y ) \in A^2 : x = y \right\} $ ? What I thought is the following. Assume $A$ is an infinite set. Then we can find a counter-example. For example, if $A = \mathbb{N}$ and $R$ the equivalence relation s.t. $A / R$ is the partition $$\left\{ \left\{ 1 \right\}, \left\{ 2, 3\right\}, \left\{ 4, 5, 6 \right\}, \left\{ 7, 8, 9, 10 \right\}, \ldots \right\} $$ Then $F(1) = \left\{ 1 \right\} , F(2) = \left\{ 2, 3 \right\}, F(3) = \left\{ 4, 5, 6 \right\}, \ldots $ is a bijection. However, if $A = \left\{ a_1, \ldots, a_n \right\} $ a finite set, and $F$ is bijective, we must have \begin{align*} F(a_1) = X_1, \ldots, F(a_{n}) = X_{n} \end{align*} with $X_i \neq X_j$ for $i, j \in [1, n]$ . This entails $|A / R| = |A|$ , which implies $A / R$ is a partition of $A$ into singleton sets. In other words, $$A / R = \left\{ \left\{ a_1 \right\}, \ldots, \left\{ a_n\right
You've got it right. We have $|A| = |A/R|$ (two sets have the same cardinality iff there's a bijection between them), and any surjective function between two finite sets of the same cardinality is a bijection. Thus the surjective function $v : A \to A/R$ that maps each element of $A$ to its equivalence class is also injective, and hence no two distinct elements lie in the same equivalence class.
|elementary-set-theory|equivalence-relations|
1
Spivak, Ch. 20, Problem 22a: If $|f|\leq M_0$, $|f''|\leq M_2$, prove $|f'(x)|\leq \frac{2}{h}M_0+\frac{h}{2}M_2, \text{ for all } h>0$
The following is a problem from Spivak's Calculus . Questions have been asked about this problem before here and here , though those problems are formulated slightly differently. The current question regards specifically the formulation below. To cut to the chase, I will show the solution manual solution and then my own solution. They apparently differ by a very small detail. I'd like to know if the detail is important for the solutions to be technically correct. (a) Suppose that $f$ is twice differentiable on $(0,\infty)$ and that $|f(x)|\leq M_0$ for all $x>0$ , while $|f''(x)|\leq M_2$ for all $x>0$ . Use an appropriate Taylor polynomial to prove that for any $x>0$ we have $$|f'(x)|\leq \frac{2}{h}M_0+\frac{h}{2}M_2, \text{ for all } h>0$$ The solution manual solution is as follows By Taylor's theorem with $n=1$ we have $$f(a+h)=f(a)+f'(a)h+\frac{f''(t)}{2!}(a+h-t)^2, t\in (a,a+h)\tag{1}$$ $$|f'(a)|=\left | \frac{f(a+h)-f(a)}{h}-\frac{f''(t)}{2}\frac{(a+h-t)^2}{h} \right |\tag{2}$$
Your summary of the solution manual's argument is incorrect. However, this is not really your fault, as the solution manual itself has an obvious typo in it that makes it pretty ambiguous what the argument was intended to be in the first place - and perhaps you attempted to rectify this using your own interpretation. Specifically, Spivak's argument involves the usage of a remainder term that does not adhere to the structure of any remainder covered in the book (as I will now show). Spivak's opening line reads as: $$f(x+h)=f(x)+f'(x)h+\frac{f''(t)}{2}(x+h-t)h \quad \text{for some $t\in(x,x+h)$}$$ This equation is incorrect, as it blends the Lagrange interpretation of the remainder with the Cauchy interpretation of the remainder. If the Cauchy interpretation was used, then the following argument should have been written: It will be easier to think of $f(x+h)$ as $F(h)$ . Next, consider the Taylor polynomial representation of $F$ with $n=1$ , $a=0$ , and the Cauchy remainder, which we wou
|calculus|integration|derivatives|taylor-expansion|
0
I need help confirming this equivalency statement
Having trouble confirming that these two values are equal. $$1 - \int_{1}^{2} \ln(y)^{\frac{1}{3}} \, dy + 1 = \int_{0}^{1} e^{x^3} \, dx$$ For context, $$\ln(y)^{\frac{1}{3}}$$ is the inverse of $$e^{x^3}$$ I saw that there was no simple solution for $f(x)$ and decided to try to convert to $f(y)$ to find other ways to measure the value under the curve. If you look at both equations in a graphing calculator you will better understand my rationale. Am I wrong in thinking these would be equal?
The LHS and RHS of your equality are not equal unfortunately. If $f(x)=e^{x^{3}}$ then $f^{-1}(x)=\ln(x)^{\frac{1}{3}}$ and you can derive, $$\int_{0}^{1}e^{x^{3}}dx=e-\int_{1}^{e}\ln\left(y\right)^{\frac{1}{3}}dy.$$ Is this what you were looking for?
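The corrected identity can be checked numerically; here is a small Python sketch (my own addition, not part of the answer) using a hand-rolled composite Simpson rule:

```python
import math

def simpson(g, a, b, n=200_000):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = g(a) + g(b)
    s += 4 * sum(g(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(g(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

# Check: integral of e^{x^3} over [0,1] equals e minus the integral of
# ln(y)^{1/3} over [1, e]  (the inverse-function area identity).
lhs = simpson(lambda x: math.exp(x ** 3), 0.0, 1.0)
rhs = math.e - simpson(lambda y: math.log(y) ** (1 / 3), 1.0, math.e)
print(lhs, rhs)  # both ≈ 1.3419
assert abs(lhs - rhs) < 1e-4
```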
|calculus|integration|inverse|
0
Obtaining a connection on a trivial bundle by giving a matrix of $1$-forms
I'm new to connections and I'm going over the page ( https://mathworld.wolfram.com/VectorBundleConnection.html ) in which they state the following For example, the trivial bundle $E=M\times \Bbb R^k$ admits a flat connection since any bundle section $s$ corresponds to a function $s:M\to \Bbb R^k$ . Then setting $\nabla s=ds$ gives the connection. Any connection on the trivial bundle is of the form $\nabla s=ds+s \otimes \alpha$ , where $\alpha$ is any one-form with values in $\text{Hom}(E,E)=E^*\otimes E$ , i.e., $\alpha$ is a matrix of one-forms. I understand from here that if $s: M \to E= M\times \Bbb R^k$ is a section of $E$ , then this is identifiable with a map $s :M \to \Bbb R^k$ . Now $ds$ is a bit ambiguous, but if I'm not mistaken this is under the identification $s=(s_1,\dots,s_k)$ with $s_i : M \to \Bbb R$ just $$ds=(ds_1,\dots,ds_k).$$ The second part of the paragraph is what confuses me a bit, they state that any connection on a trivial bundle is of the form $$\nabla s=ds
First question: the space of connections on a vector bundle is an affine space, whose translation space consists of tensors of the appropriate type. What this means, more precisely, is that: If $\nabla$ is a connection in $E$ and $A\colon \mathfrak{X}(M)\times \Gamma(E)\to \Gamma(E)$ is $C^\infty(M)$ -bilinear, then $\nabla+A$ is a connection. Here, we write $A(X,s)$ as $A_Xs$ for psychological reasons, so that $(\nabla+A)_Xs = \nabla_Xs+A_Xs$ . Check that $\nabla+A$ satisfies the Leibniz rule. If $\nabla$ and $\nabla'$ are two connections in $E$ , then $A = \nabla'-\nabla\colon \mathfrak{X}(M)\times\Gamma(E)\to \Gamma(E)$ is $C^\infty(M)$ -bilinear. Now, assume that $E = M\times \Bbb R^k$ is trivialized. Your understanding of what ${\rm d}s$ means here is correct: we identify $s$ with a mapping $M\to \Bbb R^k$ , and so ${\rm d}s_p\colon T_pM \to \Bbb R^k$ eats $v\in T_pM$ and spits out ${\rm d}s_p(v) \in \Bbb R^k$ . The same thing happens for $\nabla s$ : it takes $v\in T_pM$ and outpu
|differential-geometry|vector-bundles|connections|
1
Convergence of Fourier sine transform of $1$
While reading Paul J. Nahin's "Hot molecules, cold electrons", while solving the heat equation for a semi-infinite mass with infinite thickness, when comparing the result with the initial conditions, he writes $1=\int_{0}^{\infty}B(\lambda)\sin(\lambda x)d\lambda$ , then, recalling the Fourier sine transform $f(x)=\int_{0}^{\infty}F(\lambda)\sin(\lambda x)d\lambda$ , from which $F(\lambda)=2/\pi\int_{0}^{\infty}f(x)\sin(\lambda x) dx$ , he claims that in this case, since $f(x)=1$ , we have $B(\lambda)=\frac{2}{\pi}\int_{0}^{\infty}\sin(\lambda x)dx$ , but this integral clearly diverges. How is this result supposed to be intended?
That integral diverges indeed, and setting $B$ equal to it makes no sense to me. If one wants a quick fix, one can regularize the integral and take the limit, that is, $$\frac{\pi}{2}B(\lambda)=\lim_{\epsilon \downarrow 0} \int_0^\infty e^{-\epsilon x}\sin(\lambda x)\,dx=\lim_{\epsilon \downarrow 0}\frac{\lambda}{\epsilon^2+\lambda^2}=\frac{1}{\lambda}.$$ This formally shows that $B(\lambda)$ should be equal to $\frac{2}{\pi \lambda}$ . This argument can also be made rigorous in a distributional setting. Namely, if we extend 1 to an odd function on $\mathbb{R}$ , it becomes the signum function, and its Fourier transform will coincide with $1/i$ times the sine transform. It is well-known that the Fourier transform of the signum function, when understood as a tempered distribution, is equal to a constant times the principal value distribution associated to $\frac{1}{i\lambda}$ , which agrees with the function $\frac{1}{i\lambda}$ outside the origin. Multiplying with $i$ , then yields the
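The regularization step can be sanity-checked numerically: for fixed $\epsilon > 0$ the damped integral converges and equals $\lambda/(\epsilon^2+\lambda^2)$. A quick Python check of that closed form (my own illustration, not from the answer):

```python
import math

def damped_sine_integral(eps, lam, T=80.0, n=200_000):
    """Simpson-rule value of the integral of e^{-eps*x} sin(lam*x) over [0, T];
    for eps not too small, the tail beyond T is negligible (bounded by e^{-eps*T}/eps)."""
    h = T / n
    g = lambda x: math.exp(-eps * x) * math.sin(lam * x)
    s = g(0.0) + g(T)
    s += 4 * sum(g((2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(g(2 * i * h) for i in range(1, n // 2))
    return s * h / 3

for eps, lam in [(1.0, 2.0), (0.5, 1.0), (0.25, 3.0)]:
    closed = lam / (eps ** 2 + lam ** 2)
    assert abs(damped_sine_integral(eps, lam) - closed) < 1e-6
print("integral of e^(-eps*x) sin(lam*x) matches lam/(eps^2 + lam^2)")
```

Letting $\epsilon \downarrow 0$ in the closed form then gives the $1/\lambda$ limit used above.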
|real-analysis|calculus|fourier-analysis|heat-equation|
0
Solving PDEs as ODEs under certain conditions
(Before closing this question, yes, I've read this one , but my question is not about separation of variables). I'm currently studying second order PDEs with variable coefficients, and after reducing them to their canonical form, my professor solves them as if they were ODEs (except, of course, for the elliptical case, which I still don't know how to solve, although I reckon has something to do with Laplace's equation in 2 dimensions). I don't quite understand why we are allowed to do this. The only difference is the integration constants are now functions of the other variable. For instance, consider the equation: $$2u_{xx}+5yu_x+2y^2u=y$$ It's already reduced to its canonical parabolic form, and I've been taught to solve it by treating $y$ as a constant, which therefore leaves me with a constant coefficient ODE with the general solution being: $$u(x,y)=\frac{1}{2y}+C_1(y)e^{-xy/2}+C_2(y)e^{-2xy}$$ Again, why are we allowed to solve the PDE this way? Is there any theorem, proposition
The key is that $y$ does not appear as a partial in the expression. Thus you can imagine plugging in a fixed value of $y$ , call it $y_0$ , into the expression. Define $f(x) = u(x,y_0)$ , then the PDE becomes $$ 2f''(x) + 5y_0 f'(x) + 2y_0^2 f(x) = y_0. $$ This is the same equation that we started with, now an ODE in $x$ . This would not work if we had $y$ appearing as a partial as we would lose some terms (since the $y$ partial of $f$ would be $0$ ) resulting in an entirely different differential equation. Thus, for all intents and purposes, this is an ODE because you can treat $y$ as if it is a fixed parameter. As for the "constants" of integration: the solution will depend on $y_0$ as $y_0$ is a parameter in the equation: different values of $y_0$ will lead to different solutions (check it yourself by plugging a couple different values in). Keep in mind that $g(y)$ and $g(y+a)$ are really the same idea: just define a new function $h(y) = g(y+a)$ for fixed $a$ . This realization is useful if
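One can also verify directly, with arbitrary "constants" $C_1(y), C_2(y)$, that the stated general solution satisfies $2u_{xx}+5yu_x+2y^2u=y$. A short Python check (my own addition; the analytic $x$-derivatives are written out by hand):

```python
import math

# u(x, y) = 1/(2y) + C1(y)*exp(-x*y/2) + C2(y)*exp(-2*x*y), where the
# "constants" C1, C2 may be arbitrary functions of y.
def residual(x, y, C1, C2):
    e1 = math.exp(-x * y / 2)
    e2 = math.exp(-2 * x * y)
    u   = 1 / (2 * y) + C1(y) * e1 + C2(y) * e2
    ux  = C1(y) * (-y / 2) * e1 + C2(y) * (-2 * y) * e2
    uxx = C1(y) * (y * y / 4) * e1 + C2(y) * (4 * y * y) * e2
    return 2 * uxx + 5 * y * ux + 2 * y * y * u - y

# Any choice of C1, C2 works, e.g. C1(y) = sin(y), C2(y) = y^2 + 1.
for x in (0.0, 0.7, 2.3):
    for y in (0.5, 1.0, 3.0):
        assert abs(residual(x, y, math.sin, lambda t: t * t + 1)) < 1e-9
print("2*u_xx + 5*y*u_x + 2*y^2*u = y holds for arbitrary C1(y), C2(y)")
```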
|ordinary-differential-equations|partial-differential-equations|
1
Possible ways to write conditional probability on two events
I have to evaluate the following quantity $P\left(C|AB\right)$ Where A, B, and C are events. Since I know the following conditional rule $P\left(A|B\right)=\frac{P\left(AB\right)}{P\left(B\right)}$ I thought I could write $P\left(C|AB\right)=\frac{P\left(CAB\right)}{P\left(AB\right)}$ and $P\left(C|AB\right)=\frac{P\left(BC|A\right)}{P\left(B|A\right)}$ However, only the latter equation gives me the correct result. Is the other one incorrect?
Both equations are correct as stated by @geetha290krm as we will demonstrate. The conditioning rule holds that for events $X, Y$ having $P(Y) \neq 0$ , $$P(X \vert Y)=\frac{P(X \cap Y)}{P(Y)} \tag{1}.$$ We would like to show that $$P(C \vert A \cap B) = \frac{P(A \cap B \cap C)}{P(A \cap B)} = \frac{P(B \cap C \vert A)}{P(B \vert A)} \tag{2}.$$ Applying $(1)$ with $X = C, Y= A \cap B$ , we immediately obtain the first equivalence in $(2)$ , assuming $P(A \cap B) \neq 0$ . Notice that this implies that $P(A) \neq 0$ . We can therefore proceed by multiplying the right hand side of the first equivalence by $1$ in the form $\frac{\frac{1}{P(A)}}{\frac{1}{P(A)}}$ to find that $$P(C \vert A \cap B) = \frac{\frac{P(A \cap B \cap C)} {P(A)}}{\frac{P(A \cap B)}{P(A)}} = \frac{\frac{P((B \cap C) \cap A)} {P(A)}}{\frac{P(B \cap A)}{P(A)}} = \frac{P(B \cap C \vert A)}{P(B \vert A)},$$ recognizing conditional probability $P(X \vert Y)$ with $X= B \cap C, Y= A$ in the numerator and $X=B, Y=A$ in the
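The equivalence of the two expressions is easy to confirm on a concrete finite probability space, in exact arithmetic (a Python illustration of my own; the space and probabilities are hand-picked):

```python
from fractions import Fraction
from itertools import product

# A tiny finite probability space: outcomes are bit-triples (a, b, c)
# indicating membership in A, B, C, with hand-picked probabilities 1/36..8/36.
P = {w: Fraction(k + 1, 36) for k, w in enumerate(product((0, 1), repeat=3))}
assert sum(P.values()) == 1

def prob(pred):
    return sum(p for w, p in P.items() if pred(w))

PABC = prob(lambda w: w[0] and w[1] and w[2])
PAB  = prob(lambda w: w[0] and w[1])
PA   = prob(lambda w: w[0])

lhs = PABC / PAB                # P(C | A n B) = P(A n B n C) / P(A n B)
rhs = (PABC / PA) / (PAB / PA)  # = P(B n C | A) / P(B | A)
assert lhs == rhs
print("P(C|AB) =", lhs)  # 8/15 for these probabilities
```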
|probability-theory|conditional-probability|
1
What is the set generated by $\{1\}$ and the function $1/(a+b)$?
If I'm given a starting set and an operation, what would the generated set look like? Here we take $S_0=\{1\}$ and $f(a,b) = \dfrac{1}{a+b}$ as an example; the following Mathematica code shows the recursion: S = {1}; S = Union[S, 1/(#1 + #2) & @@@ Tuples[S, 2]] // Sort // DeleteDuplicates After the second step, I get $S_1=\{1,1/2\}$ , where $1/2 = 1/(1+1)$ , and after the third step, I get $S_2=\{1,1/2,2/3\}$ , where $2/3=1/(1+1/2)$ ; then after the fourth step, I get $S_3=\{1/2,3/5,2/3,3/4,6/7,1\}$ ... I can prove that the max and min of the final set are $1$ and $1/2$ respectively, and I guess it would contain all rational numbers between $1/2$ and $1$ . Is my guess true? And how can I prove it? Generally, how does one analyse this kind of generating problem?
Let $S$ be the smallest set containing $1$ , such that for any $a, b \in S$ , we have $f(a, b) \in S$ . I'll show by induction on $n$ that for positive integers $m, n$ with $\frac 12 \le \frac mn \le 1$ , we have $\frac mn \in S$ . Indeed, we can assume that $\frac 12 < \frac mn < 1$ , since it's clear that $\frac 12, 1 \in S$ . In particular we can assume $m < n$ and $n < 2m$ . If $n$ is even, say $n = 2k$ . Observe that $\frac mn = f(\frac km, \frac km)$ , and $\frac 12 \le \frac km \le 1$ : We have $m < n = 2k$ , so $\frac 12 < \frac km$ . We have $2k = n < 2m$ , so $\frac km < 1$ . If $n$ is odd, say $n = 2k + 1$ . Observe that $\frac mn = f(\frac km, \frac{k + 1}m)$ , and $\frac 12 \le \frac km \le \frac{k + 1}m \le 1$ : We have $m < n = 2k + 1$ , so $m \le 2k$ , so $\frac 12 \le \frac km$ . We have $2k + 1 = n < 2m$ , so $2k + 2 \le 2m$ , so $\frac {k + 1}m \le 1$ . In each case, we have written $\frac mn$ as an output of inputs between $\frac 12$ and $1$ that have strictly smaller denominator (which must belong to $S$ by inductive hypothesis). So we are done.
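The recursion itself is easy to replay in exact arithmetic (a Python analogue of the Mathematica snippet in the question; this check is my own addition). It reproduces the $S_3$ listed in the question and confirms every generated value stays in $[\frac12, 1]$:

```python
from fractions import Fraction
from itertools import product

def step(S):
    """One round of closing S under f(a, b) = 1/(a + b)."""
    return S | {1 / (a + b) for a, b in product(S, repeat=2)}

S = {Fraction(1)}
for _ in range(3):
    S = step(S)

# Matches S_3 from the question...
expected = {Fraction(*t) for t in [(1, 2), (3, 5), (2, 3), (3, 4), (6, 7), (1, 1)]}
assert S == expected

# ...and everything produced in further rounds stays in [1/2, 1]
# (if a, b are in [1/2, 1] then a + b is in [1, 2], so 1/(a+b) is in [1/2, 1]).
for _ in range(3):
    S = step(S)
assert all(Fraction(1, 2) <= s <= 1 for s in S)
print(sorted(S)[:8])
```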
|elementary-number-theory|recursion|recursive-algorithms|
0
How to show that the dual of $(\mathbb{R}^n,\|{\cdot}\|_p)$ is $(\mathbb{R}^n,\|{\cdot}\|_q)$?
I am trying to brush up on my functional analysis and learn some $L_p$ theory, since I was never formally introduced to it through courses. I wanted to know if anyone could offer me a proof, or point me to a resource containing a proof, of the fact that the dual space of $(\mathbb{R}^n,\|{\cdot}\|_p)$ is isometrically isomorphic to $(\mathbb{R}^n,\|{\cdot}\|_q)$ whenever $\frac{1}{p}+\frac{1}{q}=1$.
As I too am just a student currently learning this topic, below is a more detailed answer that I hope will help you and other students. As my answer is long and detailed, beware: there may be some mistakes; if you see one, please write it in the comments. Background To prove that the dual of $l^p_n$ identifies with $l^q_n$ , where $1 < p < \infty$ and $\frac{1}{p} + \frac{1}{q} = 1$ , we utilize the concept of dual spaces and properties of norms in $l_p$ spaces. The dual space of a normed vector space $E$ , denoted $E^* = L(E; \mathbb{R})$ , consists of all bounded linear mappings $T:E \to\mathbb{R}$ . So we want to show that $(l^p_n)^*$ is isometrically isomorphic to $l^q_n$ (this includes all the requirements of an isomorphism and adds the preservation of distances, i.e. norms). Proof 1- $ \forall \underline{y} = ( y_1; y_2; \ldots ; y_n) \in l^p_n $ we define the following linear mapping to $ \mathbb{R} $ : $T_{\underline{x}}(\underline{y})= \langle \underline{x}, \underline{y} \rangle = \sum_{1 \leq i \leq n}x_i \cdot y_i$ with $\underline{x
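The isometry can be previewed numerically: Hölder's inequality bounds $|T_x(y)|$ by $\|x\|_q\|y\|_p$, and the extremal vector $y_i = \operatorname{sign}(x_i)|x_i|^{q-1}$ attains it, so $\|T_x\| = \|x\|_q$. A quick Python sketch of this (my own addition, not part of the answer):

```python
import math
import random

def norm(v, p):
    return sum(abs(t) ** p for t in v) ** (1 / p)

random.seed(0)
p = 3.0
q = p / (p - 1)
x = [random.uniform(-2, 2) for _ in range(6)]

# Hölder: |<x, y>| <= ||x||_q * ||y||_p for every y ...
for _ in range(1000):
    y = [random.uniform(-1, 1) for _ in range(6)]
    assert abs(sum(a * b for a, b in zip(x, y))) <= norm(x, q) * norm(y, p) + 1e-12

# ... with equality for the extremal vector y_i = sign(x_i)*|x_i|^(q-1),
# so the functional T_x has operator norm exactly ||x||_q.
y = [math.copysign(abs(t) ** (q - 1), t) for t in x]
ratio = sum(a * b for a, b in zip(x, y)) / norm(y, p)
assert abs(ratio - norm(x, q)) < 1e-9
print("||T_x|| = ||x||_q =", norm(x, q))
```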
|real-analysis|linear-algebra|functional-analysis|banach-spaces|
0
Can you extend the unit speed vector field on a circle embedded in the Möbius band to the whole Möbius band?
I found in my old notes that if a vector field $X$ is defined on a simply connected subset $U \subset M$ , it can be extended to a vector field $\overline X$ on $M$ using bump functions. For a counterexample in the non-simply connected case, I had that the unit-speed vector field on a circle embedded in the Möbius band cannot be extended. However, I think that I can construct such an extension as follows. First, a drawing of the idea (in red, I depict the vector field on the embedded circle. In blue, the extension): Done explicitly, I first cover the Möbius band with charts $$ \phi: M \supset U_\phi \rightarrow (0,1) \times (0,1),$$ $$ \psi: M \supset U_\psi \rightarrow (0,1) \times (0,1)$$ having the transition function: \begin{align} \psi\circ\phi^{-1}: \left(\left(0,\frac14\right)\cup\left(\frac34,1\right)\right) \times (0,1) & \rightarrow \left(\left(0,\frac14\right)\cup\left(\frac34,1\right)\right) \times (0,1) \end{align} \begin{align} (x,y) & \mapsto \begin{cases}\left(\frac14-x
As I said in my comments, you should not trust your old notes. If you want to re-learn the subject, consider reading a textbook instead. Here is how to correct what is written in your notes. Let $M$ be a smooth manifold, $A\subset M$ a subset. A (smooth) vector field on $A$ is a map $X: A\to TM$ such that: $X(a)\in T_aM$ for all $a\in A$ . For every $a\in A$ there is a neighborhood $U_a$ of $a$ in $M$ and a (smooth) vector field $X_{U_a}$ on $U_a$ extending the restriction of $X$ to $U_a\cap A$ . Lemma. For every closed subset $A\subset M$ and every vector field $X$ on $A$ , there exists an extension of $X$ to the entire $M$ . (Simple connectivity of $A$ is irrelevant, but you have to assume that $A$ is closed.) Proof. The complement $U:= M\setminus A$ is an open subset of $M$ . Thus, we obtain an open cover ${\mathcal V}$ of $M$ consisting of the open subsets $U_a$ ( $a\in A$ ) and of $U$ . Since $M$ is paracompact , the open cover ${\mathcal V}$ of $M$ admits a locally finite refinem
|differential-geometry|
1
What is the set generated by $\{1\}$ and the function $1/(a+b)$?
If I'm given a starting set and an operation, what would the generated set look like? Here we take $S_0=\{1\}$ and $f(a,b) = \dfrac{1}{a+b}$ as an example; the following Mathematica code shows the recursion: S = {1}; S = Union[S, 1/(#1 + #2) & @@@ Tuples[S, 2]] // Sort // DeleteDuplicates After the second step, I get $S_1=\{1,1/2\}$ , where $1/2 = 1/(1+1)$ , and after the third step, I get $S_2=\{1,1/2,2/3\}$ , where $2/3=1/(1+1/2)$ ; then after the fourth step, I get $S_3=\{1/2,3/5,2/3,3/4,6/7,1\}$ ... I can prove that the max and min of the final set are $1$ and $1/2$ respectively, and I guess it would contain all rational numbers between $1/2$ and $1$ . Is my guess true? And how can I prove it? Generally, how does one analyse this kind of generating problem?
It is clear that every number in $S$ is a rational number between $\tfrac12$ and $1$ inclusive. For any rational number $r$ define the height $h(r)$ as its minimal denominator, so that $h(\tfrac ab)=b$ if $\gcd(a,b)=1$ . We prove by induction that $S$ contains every rational number in the closed interval $[\tfrac12,1]$ . The base case $h(r)=1$ is clear; the only rational number of height $1$ in $[\tfrac12,1]$ is of course $1$ , and $1\in S$ by definition. So let $n>1$ and suppose that $S$ contains every rational number $r\in[\tfrac12,1]$ with $h(r) < n$ . Let $r\in[\tfrac12,1]$ with $h(r)=n$ , so that $r=\tfrac mn$ for some positive integer $m$ with $\gcd(m,n)=1$ and $n\leq 2m\leq 2n$ . Because $n>1$ and $\gcd(m,n)=1$ it follows that $m < n$ . Then for any integer $k$ with $m\leq 2k\leq 2m$ we have $\tfrac km\in[\tfrac12,1]$ and $h(\tfrac km)\leq m < n$ , so by induction hypothesis $\frac{k}{m}\in S$ . It follows that $S$ also contains $$\frac{1}{\frac{k_1}{m}+\frac{k_2}{m}}=\frac{m}{k_1+k_2},$$ so i
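The induction step can be verified exhaustively for small denominators; the specific split $k_1 = \lceil n/2 \rceil$, $k_2 = n - k_1$ used below is my own choice of the $k_1, k_2$ the argument needs (a Python check, not part of the answer):

```python
from fractions import Fraction
from math import gcd

# For every reduced m/n in [1/2, 1] with n > 1: with k1 = ceil(n/2) and
# k2 = n - k1, both k1/m and k2/m lie in [1/2, 1], have height at most
# m < n, and f(k1/m, k2/m) = m/(k1 + k2) = m/n.
half, one = Fraction(1, 2), Fraction(1)
checked = 0
for n in range(2, 60):
    for m in range(1, n):
        if gcd(m, n) != 1 or not half <= Fraction(m, n) <= one:
            continue
        k1 = (n + 1) // 2
        k2 = n - k1
        for k in (k1, k2):
            r = Fraction(k, m)
            assert half <= r <= one
            assert r.denominator <= m < n  # strictly smaller height
        assert 1 / (Fraction(k1, m) + Fraction(k2, m)) == Fraction(m, n)
        checked += 1
print(f"induction step verified for {checked} reduced fractions m/n with n < 60")
```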
|elementary-number-theory|recursion|recursive-algorithms|
1
Convergence of Fourier sine transform of $1$
While reading Paul J. Nahin's "Hot molecules, cold electrons", while solving the heat equation for a semi-infinite mass with infinite thickness, when comparing the result with the initial conditions, he writes $1=\int_{0}^{\infty}B(\lambda)\sin(\lambda x)d\lambda$ , then, recalling the Fourier sine transform $f(x)=\int_{0}^{\infty}F(\lambda)\sin(\lambda x)d\lambda$ , from which $F(\lambda)=2/\pi\int_{0}^{\infty}f(x)\sin(\lambda x) dx$ , he claims that in this case, since $f(x)=1$ , we have $B(\lambda)=\frac{2}{\pi}\int_{0}^{\infty}\sin(\lambda x)dx$ , but this integral clearly diverges. How is this result supposed to be intended?
The Fourier $\sin$ transform $$\frac{2}{\pi} \int\limits_0^{\infty} \text{sgn}(x)\, \sin(\lambda x) \, dx=\frac{2}{\pi \lambda}\tag{1}$$ and inverse Fourier $\sin$ transform $$\int\limits_0^{\infty} \frac{2}{\pi \lambda}\, \sin(\lambda x) \, d\lambda =\text{sgn}(x)\tag{2}$$ are related to the Fourier transform $$\mathcal{F}_x[\text{sgn}(x)](\lambda)=\frac{1}{\pi} \int\limits_{-\infty}^{\infty} \text{sgn}(x)\, e^{i \lambda x} \, dx=\frac{2 i}{\pi \lambda}\tag{3}$$ and inverse Fourier transform $$\mathcal{F}_{\lambda}^{-1}\left[\frac{2 i}{\pi \lambda}\right](x)=\frac{1}{2} \int\limits_{-\infty}^{\infty} \frac{2 i}{\pi \lambda}\, e^{-i \lambda x} \, d\lambda =\text{sgn}(x)\tag{4}.$$ For the Fourier transform of $\frac{1}{x}$ , see row 311 at Fourier_transform: Distributions, one-dimensional which indicates the following in the remarks column. Note that 1/x is not a distribution. It is necessary to use the Cauchy principal value when testing against Schwartz functions. Note the Fourier tra
|real-analysis|calculus|fourier-analysis|heat-equation|
0
Specific way to prove that a cubic graph with a cut edge isn't $3$-edge-colorable.
The statement "If a simple graph $G$ is cubic and has a cut edge, then $\chi'(G) =4$ " has a couple of proofs on this site, namely here and here . However, I was interested in a specific way of proving the statement. In Bondy's book he leaves this problem as an exercise, but lays out a way in which to approach the exercise: 17.1.4 Deduce from Exercise 16.1.7 that a cubic graph with a cut edge is not $3$ -edge-colourable. And the mentioned exercise says the following: 16.1.7 Let $M$ be a perfect matching in a graph $G$ and $S$ a subset of $V$ . Show that $\lvert M \cap \partial(S)\rvert \equiv \lvert S \rvert \pmod{2}$ . I've thought about how to apply the exercise, but I haven't found the connection Bondy laid out. I have a feeling that it might have something to do with the following: By handshake if $G$ is cubic ( $3$ -regular) then $2\lvert E(G)\rvert = 3 \lvert V(G)\rvert$ , and since $2 \not\equiv 3 \pmod{2}$ you might get something. Another possible observation is that if the cut
The connection to perfect matchings is just this: in a $k$ -edge-coloring of a $k$ -regular graph (and in particular, in a $3$ -edge-coloring of a cubic graph), every color class is a perfect matching. We will take each of those color classes to be $M$ , in turn. To make use of 16.1.7, we want to choose a set $S$ with a very simple boundary $\partial(S)$ . If our graph $G$ has a cut edge $e$ , we can do this by letting $S$ be one of the connected components of $G-e$ . If we do, then $\partial(S)$ is just the singleton set $\{e\}$ : $e$ is the only edge between $S$ and $V(G) \setminus S$ . So what does 16.1.7 tell us about this scenario? It tells us that for every perfect matching $M$ , we have $$|M \cap \{e\}| \equiv |S| \pmod 2.$$ Actually, $|M \cap \{e\}|$ is either $0$ or $1$ , so we can be more precise and say: If $|S|$ is odd, then $|M\cap \{e\}|=1$ for all $M$ : every perfect matching contains $e$ . If $|S|$ is even, then $|M \cap \{e\}|=0$ for all $M$ : no perfect matching conta
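Exercise 16.1.7 itself can be sanity-checked by brute force on a small cubic graph. Here is a quick Python check (my own illustration, not from the answer) on the 3-cube graph, which is cubic and bipartite:

```python
from itertools import combinations

# The 3-cube graph: vertices 0..7 (bit strings), edges between words
# differing in exactly one bit; it is 3-regular with 12 edges.
V = range(8)
E = [(u, u ^ b) for u in V for b in (1, 2, 4) if u < (u ^ b)]

# Brute-force all perfect matchings (4 disjoint edges covering all 8 vertices).
matchings = [M for M in combinations(E, 4)
             if len({v for e in M for v in e}) == 8]

def boundary(S):
    """Edges with exactly one endpoint in S."""
    return [e for e in E if (e[0] in S) != (e[1] in S)]

# Check 16.1.7:  |M n boundary(S)| == |S|  (mod 2)  for every M and every S.
for M in matchings:
    for bits in range(256):
        S = {v for v in V if bits >> v & 1}
        assert len(set(M) & set(boundary(S))) % 2 == len(S) % 2
print(len(matchings), "perfect matchings; the parity identity holds for all S")
```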
|graph-theory|proof-explanation|modular-arithmetic|coloring|matching-theory|
1
discrete subgroups of euclidean space
I'm trying to prove this proposition: http://groupprops.subwiki.org/wiki/Every_discrete_subgroup_of_Euclidean_space_is_free_Abelian_on_a_linearly_independent_set That every discrete subgroup of Euclidean space is free Abelian with rank at most n, but I don't know how to approach it, even with the "used facts" section. Any help please?
To simplify @Mr.P's answer, to prove that $|H:L|$ is finite we can also note that $\mathbb{R}^k/L \cong \prod_{i=1}^k S^1$ . The image of $H$ in the quotient is still discrete, and it's a subgroup of the right-hand side, which is compact, so $d := |H:L| < \infty$ . Then, for any $x\in H$ , $dx + L = d(x+L) = L$ because the order of $x+L$ divides $d$ , so $dx \in L$ , which means $dH \subseteq L$ (i.e. every element of $H$ has the form $h/d$ with $h \in L$ ).
|group-theory|abelian-groups|
0
Can $({\Bbb N}, \max)$ be topologized to be a compact Hausdorff monoid?
I am aware of this question and this one , the answers to which show that the natural numbers can be equipped with a compact Hausdorff topology. But what happens if one also requires the operation $$ \max: {\Bbb N} \times {\Bbb N} \to {\Bbb N} $$ to be continuous? My intuition is that it is not possible because the profinite approach fails. More precisely, for each natural $t$ , let ${\Bbb N}_t = \{0, 1, \ldots, t \}$ . Then $({\Bbb N}_t, \max)$ is a finite (and hence compact Hausdorff for the discrete topology) monoid. Furthermore, the map $$ f_t: ({\Bbb N}, \max) \to ({\Bbb N}_t, \max) \text{ defined by } f_t(x) = \min(x, t) $$ is a monoid morphism. Thus one can embed $({\Bbb N}, \max)$ into the compact Hausdorff product monoid $\prod_{t \in {\Bbb N}}({\Bbb N}_t, \max)$ by identifying $n$ with $(0, 1, 2, \ldots, n, n, n, \ldots)$ . Unfortunately, $({\Bbb N}, \max)$ is not closed for this topology, since the sequence $u_n = (0, 1, 2, \ldots, n, n, n, \ldots)$ converges to $(0, 1, 2, \
Suppose for contradiction that we have found a topology on $\mathbb{N}$ making it a compact Hausdorff monoid wrt $\max$ . Since $\mathbb{N}$ is infinite and compact, it cannot be the case that every singleton subset of $\mathbb{N}$ is open. So, let $m$ be a natural number such that $\{m\}$ is not open. Also, we can write $\mathbb{N}$ as the union $\bigcup_{n \in \mathbb{N}} \{n\}$ of closed sets, and thus (by the Baire category theorem) there is some $n_0 \in \mathbb{N}$ such that $\{n_0\}$ is open. Then $\mathbb{N} \setminus \{n_0\}$ is again compact Hausdorff, and we can repeat the argument to obtain another point $n_1$ such that $\{n_1\}$ is open. Etc., so we conclude that there are infinitely many clopen points in $\mathbb{N}$ . So, let $N \in \mathbb{N}$ be such that $m < N$ and $\{N\}$ is open. Then $\max^{-1}(\{N\})$ is open in $\mathbb{N} \times \mathbb{N}$ . Since $(m,N) \in \max^{-1}(\{N\})$ , there exist open sets $U,V \subseteq \mathbb{N}$ such that $(m,N) \in U \times V \subset
|general-topology|compactness|monoid|
1
Solutions of Diophantine Equations with Restricted Terms
For the equation $x_1+x_2+\dots+x_n=k$ , I know that the total number of nonnegative solutions is ${k+n-1 \choose k}$ . My question is how does the number of solutions change if all $x_i$ have to be even?
If all $x_i$ are even then $x_i=2y_i$ for some nonnegative integer $y_i$ , for each $i$ . If $k$ is odd, then of course there is no solution. If $k$ is even then we have $$y_1+y_2+\cdots+y_n=\tfrac k2,$$ and the number of nonnegative solutions is $\tbinom{\tfrac k2+n-1}{n-1}$ . If all $x_i$ are odd, then $x_i=2y_i+1$ for some nonnegative integer $y_i$ , for each $i$ . If $k\not\equiv n\pmod{2}$ then of course there is no solution. If $k\equiv n\pmod{2}$ , then we have $$y_1+y_2+\cdots+y_n=\frac{k-n}{2},$$ and the number of nonnegative solutions is $\tbinom{\tfrac{k-n}{2}+n-1}{n-1}=\tbinom{\tfrac{k+n}{2}-1}{n-1}$ .
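Both counts are easy to sanity-check by brute force; the sketch below (plain Python, with a hypothetical helper `count_solutions` written just for this check) enumerates small cases directly:

```python
from itertools import product
from math import comb

def count_solutions(n, k, parity):
    """Brute-force count of solutions to x1+...+xn = k where every xi
    has the given parity (0 = all even, 1 = all odd); small n, k only."""
    return sum(1 for xs in product(range(k + 1), repeat=n)
               if sum(xs) == k and all(x % 2 == parity for x in xs))

# All xi even: substitute xi = 2*yi, so y1+...+yn = k/2 (stars and bars).
n, k = 3, 8
assert count_solutions(n, k, 0) == comb(k // 2 + n - 1, n - 1)
# All xi odd: substitute xi = 2*yi + 1, so y1+...+yn = (k-n)/2.
n, k = 3, 9
assert count_solutions(n, k, 1) == comb((k - n) // 2 + n - 1, n - 1)
```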
|combinatorics|diophantine-equations|
0
Folland chapter 3 exercise 22 Hardy-Littlewood maximal function and maximal theorem
Following is a question and its solution from Folland's real analysis chapter 3 exercise 22. Q22. If $f \in L^1\left(\mathbb{R}^n\right), f \neq 0$ , there exist $C, R>0$ such that $H f(x) \geq C|x|^{-n}$ for $|x|>R$ . Hence $m(\{x: H f(x)>\alpha\}) \geq C / \alpha$ when $\alpha$ is small, so the estimate in the maximal theorem is essentially sharp. A22. Assuming $\|f\|_1>0$ , there exists $R \in(0, \infty)$ such that $\int_{B_R(0)}|f| d m>0$ (otherwise the monotone convergence theorem implies that $\left.\int|f| d m=\lim _{N \rightarrow \infty} \int_{B_N(0)}|f| d m=0\right)$ . If $x \in \mathbb{R}^n \backslash B_R[0]$ , then $B_R(0) \subseteq B_{2|x|}(x)$ so $$ H f(x) \geq A_{2|x|}|f|(x)=\frac{1}{m\left(B_{2|x|}(x)\right)} \int_{B_{2|x|}(x)}|f| d m \geq \frac{1}{m\left(B_{2|x|}(0)\right)} \int_{B_R(0)}|f| d m=\frac{1}{|x|^n m\left(B_2(0)\right)} \int_{B_R(0)}|f| d m . $$ Note that $C:=\frac{1}{m\left(B_2(0)\right)} \int_{B_R(0)}|f| d m$ is positive and independent of $x$ . Now, if $\alpha
By definition of $x$ , $B_R(0)\subset B_{2|x|}(x)$ , so by monotonicity of integrals, \begin{align} \int_{B_R(0)}|f|\,dm\leq \int_{B_{2|x|}(x)}|f|\,dm. \end{align}
|real-analysis|measure-theory|lebesgue-integral|average|borel-measures|
1
Inverse function of a polynomial
What is the inverse function of $f(x) = x^5 + 2x^3 + x - 1?$ I have no idea how to find the inverse of a polynomial, so I would greatly appreciate it if someone could show me the steps to solving this problem. Thank you in advance!
The original question seems to have come from a homework problem which does NOT require explicitly finding the inverse of a polynomial; however, since this page is one of the first things that pop up when you search for how to find the inverse of a polynomial function, and a general method for this WAS asked for, this general question is what I will discuss. Also, I realize that, until I discuss numerical methods at the bottom, I'm just repeating what others have said with more explanation. Solving a polynomial $y = p(x) = a_{0} + a_{1}x + a_{2}x^2 + ... + a_{n}x^n$ for $x$ is equivalent to solving the equation $p(x) - y = 0$ i.e., $a_{n}x^n + a_{n-1}x^{n-1} + ... + a_{1}x + (a_0 - y) = 0$ for $x$ , i.e., to getting the zeros of the polynomial on the left as a function of $y$ . (That is, for each value of $y$ , we treat $y$ as a constant and find the zeros of the polynomial, so in the general case of polynomials on the complex numbers, this is a function from complex numbers (any value of $y$ ) to sets
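For the particular polynomial in the question, $f(x)=x^5+2x^3+x-1$ is strictly increasing ($f'(x)=5x^4+6x^2+1>0$), so a global inverse exists and can be evaluated numerically. Here is a minimal bisection sketch; the bracket $[-10, 10]$ is an ad-hoc assumption that happens to contain the answers tested:

```python
def f(x):
    return x**5 + 2*x**3 + x - 1

def f_inverse(y, lo=-10.0, hi=10.0, tol=1e-12):
    """Invert the strictly increasing f by bisection: find x with f(x) = y."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

x = f_inverse(3.0)                 # f(1) = 1 + 2 + 1 - 1 = 3, so x is near 1
assert abs(f(x) - 3.0) < 1e-9
assert abs(f_inverse(f(0.7)) - 0.7) < 1e-9
```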
|polynomials|inverse|
0
Representing Complex Vector Spaces as Real Vector Spaces
Suppose $V$ is a complex vector space with respect to basis $v_{1},...,v_{n}$ , and $T: V \rightarrow V$ is a linear transformation with matrix representation $A$ . Now, consider $V$ as a real vector space with respect to basis $v_{1}, iv_{1},...,v_{n}, iv_{n}$ . What is the matrix representation of $T$ with respect to this basis $v_{j}, iv_{j}$ ? My initial idea was trying to represent each "complex $A$ " column $Tv_{j} \rightarrow v_{1},...,v_{n}$ as two columns for the "real $A$ " matrix such that $Tv_{j} + Tiv_{j} \rightarrow v_{1}, iv_{1},...,v_{n}, iv_{n}$ . However, I don't comprehend exactly what the question means by $iv_{j}$ (is it literally $v_{j}$ multiplied by $i$ ?) or how to represent the new matrix in terms of $A$ (For this, I assume I have to employ something of the form $CAC^{-1}$ , where $C$ changes from the real to the complex basis). I would appreciate some hints on how to proceed from where I am and how to properly interpret the problem. Thanks!
Let $(v_1,\ldots,v_n)$ be a basis of $V$ in which the matrix of $T$ is $A=(a_{i,j})_{1\leqslant i,j\leqslant n}\in\mathscr{M}_n(\mathbb{C})$ so that $$ T(v_j)=\sum_{k=1}^na_{k,j}v_k=\sum_{k=1}^n{\rm Re}(a_{k,j})v_k+\sum_{k=1}^n{\rm Im}(a_{k,j})iv_k $$ and $$ T(iv_j)=iT(v_j)=-\sum_{k=1}^n{\rm Im}(a_{k,j})v_k+\sum_{k=1}^n{\rm Re}(a_{k,j})iv_k. $$ The matrix of $T$ in the basis $(v_1,iv_1,\ldots,v_n,iv_n)$ is $$ \begin{pmatrix} R_{1,1} & \cdots & R_{1,n} \\ \vdots & & \vdots \\ R_{n,1} & \cdots & R_{n,n} \end{pmatrix}\in\mathscr{M}_{2n,2n}(\mathbb{R}) $$ where $$ R_{i,j}=\begin{pmatrix} {\rm Re}(a_{i,j}) & {\rm -Im}(a_{i,j}) \\ {\rm Im}(a_{i,j}) & {\rm Re}(a_{i,j}) \end{pmatrix}. $$
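A quick numerical sketch of this block structure (plain Python; `realify` is a hypothetical helper name): identifying $z_j = x_j + iy_j$ with the real coordinates $(x_j, y_j)$ in the basis $(v_j, iv_j)$, the real $2n \times 2n$ matrix reproduces the complex action of $A$:

```python
def realify(A):
    """2n x 2n real matrix of T in the basis (v1, i v1, ..., vn, i vn),
    built from 2x2 blocks [[Re a, -Im a], [Im a, Re a]]."""
    n = len(A)
    R = [[0.0] * (2 * n) for _ in range(2 * n)]
    for i in range(n):
        for j in range(n):
            a = A[i][j]
            R[2*i][2*j],   R[2*i][2*j+1]   = a.real, -a.imag
            R[2*i+1][2*j], R[2*i+1][2*j+1] = a.imag,  a.real
    return R

# Check: the complex action of A on z agrees with the real action on the
# coordinate vector (Re z1, Im z1, ..., Re zn, Im zn).
A = [[1+2j, 3-1j], [0+1j, 2+0j]]
z = [2-1j, 1+4j]
w = [sum(A[i][j] * z[j] for j in range(2)) for i in range(2)]
coords = [c for zk in z for c in (zk.real, zk.imag)]
R = realify(A)
real_w = [sum(R[i][j] * coords[j] for j in range(4)) for i in range(4)]
assert all(abs(real_w[2*k] - w[k].real) < 1e-12 and
           abs(real_w[2*k+1] - w[k].imag) < 1e-12 for k in range(2))
```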
|linear-algebra|change-of-basis|
0
Vertical Stretch transformation Question for rational function with variable in numerator and denominator.
I'm trying to help a student with following rational equation question: Describe the transformations of $$g(x) = \frac{-4x - 2}{7x +1}$$ from the graph of $$f(x) = \frac{1}{x}.$$ The given answers are horizontal shift to the left of $\frac{1}{7}$ units, a vertical shift down of $\frac{4}{7}$ units and vertical shrink factor of $\frac{10}{49}$ units. The vertical and horizontal shifts make sense based on the new Asymptotes compared to the graph of $f(x) = \frac{1}{x}$ , but I have no idea how this shrink factor is calculated. Any help would be appreciated!
$$y=\frac {-4x-2}{7x+1} \\ x \to X-\frac 17\\y \to Y+\frac 47 \\ Y=\frac {-10}{49X}=\frac {-10}{49} \times \frac 1X$$ so the graph of $\frac 1X$ is stretched by the factor $-\frac {10}{49}$ , i.e. a reflection combined with a vertical shrink by $\frac {10}{49}$ . In general for $f(x)=\frac {ax+b}{cx+d}$ , when you use the transform $$x\to X-\frac dc\\y\to Y+\frac ac$$ it will be turned into $$Y+\frac ac=\frac {a(X-\frac dc)+b}{c(X-\frac dc)+d}\\Y+\frac ac=\frac {aX-a\frac dc+b}{cX}\\ Y+\frac ac=\frac {aX+\frac {-(ad-bc)}{c}}{cX}\\Y=\frac {aX+\frac {-(ad-bc)}{c}}{cX}-\frac ac\\Y=\frac {-(ad-bc)}{c^2X}\\Y=\frac{-\det\begin{pmatrix}a & b \\c & d \end{pmatrix}}{c^2}\times \frac 1X$$
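A quick numerical check of this decomposition for the particular $g$ in the question (a sketch; the loop simply compares $g$ with the shifted-and-stretched copy of $1/x$ at a few arbitrary sample points):

```python
# g(x) = (-4x - 2)/(7x + 1) should equal  stretch * 1/(x + d/c) + a/c,
# with stretch = -(ad - bc)/c^2 = -10/49, shift -d/c = -1/7, shift a/c = -4/7.
a, b, c, d = -4.0, -2.0, 7.0, 1.0
stretch = -(a*d - b*c) / c**2          # evaluates to -10/49
for x in [-2.0, -0.5, 0.3, 1.0, 5.0]:  # avoid the asymptote x = -1/7
    g = (a*x + b) / (c*x + d)
    transformed = stretch / (x + d/c) + a/c
    assert abs(g - transformed) < 1e-12
```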
|algebra-precalculus|transformation|rational-functions|
0
Insight about an example of a hyperbola in the projective space in my textbook
I'm reading a text ( https://people.maths.ox.ac.uk/hitchin/files/LectureNotes/Projective_geometry/Chapter_2_Quadrics.pdf ) to study Quadrics in the projective space. The following is the example (1) (on Page 26). Example Consider the hyperbola $xy = 1$ . Using the homogeneous coordinates ( $x = x_1/x_0, y = x_2/x_0$ ), we can transform it into $$ \left( \frac{1}{2}(x_1 + x_2) \right)^2 - \left( \frac{1}{2}(x_1 - x_2) \right)^2 - x_0^2 = 0 $$ and then if $x_1 + x_2 \neq 0$ , we put $$ y_1 = \frac{x_1 - x_2}{x_1 + x_2}, \quad y_2 = \frac{2x_0}{x_1 + x_2} $$ and the conic intersects the copy of $\mathbb{R}^2 \subset P^2(\mathbb{R})$ (the complement of the line $x_1 + x_2 = 0$ ) in the circle $y_1^2 + y_2^2 = 1$ . The original line at infinity $x_0 = 0$ meets this in $y_2 = 0$ . Question I understood that we transform the hyperbola ( $x_1 x_2 - x^2_0 = 0$ in $P^2(\mathbb{R})$ ) into a circle in $\mathbb{R^2}$ . But I can't quite imagine how $x_0$ and $y_2$ meet. Could someone help me understand this?
In 3-space it's a linear transformation taking one cone with singular point the origin to another, one with axis the line $y=x$ in the $xy$ -plane and the other the $z$ -axis. However, the plane $x_0=1$ in one is $y_2=2$ in the other, and $y_0=1$ in the other is $x_1+x_2=1$ in the first. Only trouble is, one cone isn't right circular and the other is (i.e. the transformation is not just a rotation; there will be some stretching and contracting going on), so we get that $x_1+x_2=1$ intersects the cone $x_0^2=x_1x_2$ in a circle of radius $\frac{\sqrt3}2,$ and not $1.$ Similarly the cone $y_1^2+y_2^2=y_0^2$ intersects in $y_2=2$ to get a hyperbola, but not quite a rectangular one, like the one we started out with.
|projective-geometry|quadratic-forms|
1
Arranging Drilled Unit Cubes into a Rectangular Prism Without Breaking the Thread
Given positive integers p , q , and r , we have $p \cdot q \cdot r$ unit cubes. Each cube has a hole drilled along one of its space diagonals. These cubes are then strung onto a very thin thread of length $p \cdot q \cdot r \cdot \sqrt{3}$ , resembling beads on a string. The task is to arrange these unit cubes into a rectangular prism with side lengths $p$ , $q$ , and $r$ without breaking the thread. The problem can be divided into two parts: a) For which values of $p$ , $q$ , and $r$ is it possible to arrange the cubes into the prism without breaking the thread? b) For which values of $p$ , $q$ , and $r$ can this arrangement be done in such a way that the beginning and end of the thread come together? I have attempted to visualize and manipulate various configurations but have yet to devise a systematic approach to solve this problem. Does anyone have ideas on how to tackle this problem, or are there any similar problems or theories that might shed light on a possible solution? Any gu
We label the coordinates of a unit cube vertex by an integer vector $(x, y, z)$ . We easily see that an invariant for the vertices visited by the thread is given by $(y-x \text{ mod } 2, z-x \text{ mod } 2)$ and that only two vertices of each unit cube share the same invariant. There are four possible invariants: (even, even), (even, odd), (odd, even) and (odd, odd). Thus, all the visited vertices will share the same invariant. Because for each cube, there will be only one diagonal such that the two endpoints satisfy this invariant, we know exactly through which diagonal the thread will pass. So the problem reduces to finding an Eulerian path/cycle in the graph induced by the diagonals with endpoints satisfying the invariant. We need to find such an Eulerian path/cycle in one of the four possible graphs – one for each possible invariant. For each vertex, the degree in the graph will be equal to the number of cubes meeting at this vertex. We see that all the vertices will induce an even degree,
|combinatorics|hamiltonian-path|
1
About the regularity of the decomposition in prime factors
Definition of $\rho$. Let's consider a function $\rho$, acting on the prime decomposition of an integer $n$: $$\begin{matrix} \rho\colon & \mathbb N_{\geqslant 1}&\to &\mathbb N_{\geqslant 1} \\& n=\displaystyle\prod_p p^{\alpha_p}&\mapsto &\displaystyle\sum_p p\cdot\alpha_p .\end{matrix} $$ For example: $$\rho(12)=\rho(2^2\times 3)=2\times 2+3=7.$$ Context and various information. We can easily prove two things about $\rho$: $\forall m,n\in \mathbb N_{\geqslant 1},\quad \rho(mn)=\rho(m)+\rho(n)$; $\rho(n)=n\iff \text{ $n$ is a prime number}.$ We can also prove that $$\rho(n)\geqslant \frac {2\log n}{\log 2}.$$ Here is an illustration of the sequence $\rho(n)$ for $n\in \{1,\ldots,100\}$. We can see that the bound $\frac {2\log n}{\log 2}$ fits well. We can also notice that $\rho(n)=n$ if, and only if, $n$ is a prime number. Here is the sequence for $n$ up to $10^3$. We can notice that things are ordered at the top and chaotic on the bottom. If we plot the sequence for $n$ up to $10^6$,
The top line is $y=x$ and consists of the primes and four because $\rho(p)=p$ if $p$ is a prime number, and $\rho(4)=4$ . The second to top line is $y=\frac{x}{2}+2$ and consists of all $2p$ where $p\in{P}$ and $p\neq2$ , and the number 8. This is because $\rho(2p)=\rho(p)+\rho(2)=p+2$ and $\rho(8)=\rho(4)+\rho(2)=4+2$ . Let's skip to the 6th to top line. What is its equation? It is actually $y=\frac{x}{6}+5$ . Why $+5$ and not $+6$ ? Well, if we take all $6p$ where $p\in{P}$ and $p\neq5$ , we get $\rho(6p)=\rho(p)+\rho(6)=p+5$ . The line also contains 24, because $\rho(24)=\rho(4)+\rho(6)=4+5$ . To summarize, the $n$th line to the top has equation $y=\frac{x}{n}+\rho(n)$ , and contains all $np$ where $p\in{P}$ and $p>2$ , and $4n$ . I hope I answered all of your questions.
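The line structure is easy to verify computationally; here is a minimal sketch (trial-division factorization in plain Python, written just for this check):

```python
def rho(n):
    """rho(n) = sum of p * alpha_p over the prime factorization of n."""
    total, p = 0, 2
    while p * p <= n:
        while n % p == 0:
            total += p
            n //= p
        p += 1
    if n > 1:           # leftover prime factor
        total += n
    return total

assert rho(12) == 7 and rho(4) == 4 and rho(24) == 9
# Points on the "m-th line" y = x/m + rho(m): all x = m*p with p prime,
# since rho is completely additive.
for m, p in [(2, 5), (6, 7), (6, 11)]:
    assert rho(m * p) == p + rho(m)
```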
|prime-numbers|arithmetic|
1
How Long does it take to Reach a Limiting Distribution?
For Markov Chains, there are well known formulae (e.g. fundamental matrix approach, first step analysis) which show us how to calculate the Expected Time to Absorption (e.g. https://en.wikipedia.org/wiki/Absorbing_Markov_chain ). That is, given a transition matrix and the current state - how many iterations (i.e. steps) are required (on average) for the chain to reach a certain state. In a Markov Chain, we can define the following: Define the Limiting Distribution as the limit $$ \lim_{n \to \infty} \pi(n) = \lim_{n \to \infty} \pi(0) P^n $$ Where: $\pi(0)$ is the initial distribution. $P$ is the transition probability matrix of the Markov chain. Define the Stationary Distribution $\pi$ as a distribution satisfying $$\pi P = \pi$$ Where: $P$ is the transition probability matrix of the Markov chain. Based on these concepts, I have the following question: Just as we have formulae to calculate the Expected Time to Absorption - given a Markov Chain with a given Transition Matrix and initial Distribution, can we d
First, any existing limiting distribution must be stationary, see for instance [ 1 ] for a more detailed discussion of this. Conversely, there may exist a stationary distribution but no limiting distribution for a Markov chain. Hence, let's consider convergence towards the stationary distribution. Second, a Markov chain has a unique stationary distribution $\pi$ if and only if it is irreducible and all of its states are positive recurrent. (Any finite-state irreducible chain is positive recurrent.) Assume that this holds, and thus $\pi$ is a stationary distribution that is unique. By definition, $\pi$ is stationary if it is unaffected by the action of the transition matrix $P \in \mathbb{R}^{n \times n}$ , that is: $$ P^{T} \pi = 1 \cdot \pi, $$ which implies that $\pi$ is an eigenvector of $P^{T}$ with corresponding eigenvalue $1$ . Since $\pi$ is unique, the eigenvalue $1$ is simple. If the chain is moreover aperiodic, all other eigenvalues have norm strictly less than $1$ : $$ \lambda_{1} = 1 > |\lambda_{2}| \geq |\lambda_{3}| \geq \dots $$
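A toy illustration of the convergence rate (a sketch under the assumptions above; for a $2\times2$ chain the second eigenvalue is available in closed form, and the distance to $\pi$ decays geometrically at rate $|\lambda_2|$):

```python
# Two-state chain P = [[1-a, a], [b, 1-b]]: eigenvalues 1 and 1 - a - b,
# stationary distribution pi = (b, a)/(a + b).
a, b = 0.1, 0.2
P = [[1 - a, a], [b, 1 - b]]
lam2 = 1 - a - b                       # second eigenvalue, here 0.7
pi = (b / (a + b), a / (a + b))

mu = [1.0, 0.0]                        # start deterministically in state 0
for n in range(1, 40):
    mu = [mu[0]*P[0][0] + mu[1]*P[1][0],
          mu[0]*P[0][1] + mu[1]*P[1][1]]
    dist = abs(mu[0] - pi[0]) + abs(mu[1] - pi[1])
    # mu0 - pi lies along the lambda2-eigenvector, so the L1 distance
    # shrinks by exactly |lambda2| per step for this start
    assert abs(dist - (2 * a / (a + b)) * lam2**n) < 1e-9
```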
|probability|
1
Calculating $\lim_{x\rightarrow 0}\left(\frac{1}{x^2}\right)^x$
Can $\displaystyle \lim_{x\rightarrow 0} \left(\frac{1}{x^2}\right)^x$ be solved as $\displaystyle \lim_{x\rightarrow 0}\frac{1^x}{x^{2x}}$ ? It is in the indeterminate form of ${\infty}^0$ so I think I should transform it to the $\ln$ form and solve it. But I am not sure if I can solve it as given above.
Notice that $$\left(\frac{1}{x^2}\right)^x=(x^{-2})^x=x^{-2x}=e^{-2x\ln x}$$ Now utilize that $\lim_{x \rightarrow 0^+} x \ln x = 0$ which results in the limit equaling $1$ .
|limits|
0
Is this proof of the principle of recursion theorem proving anything?
The following is presented as an example to motivate my question. It's my paraphrase of the principle of recursion theorem and proof. From Fundamentals of Mathematics Foundations of Mathematics: The Real Number System and Algebra Let $c$ be a number and let $F$ be a function of two arguments defined in $\mathbb{N}$ and with values in $\mathbb{N}.$ Then there exists exactly one function $f$ defined in $\mathbb{N}$ such that \begin{align*} f\left(1\right)= & c;\\ \forall_{x}f\left(x^{\prime}\right)= & F\left(x,f\left(x\right)\right) \end{align*} Proof: Express $f$ as the set $\mathcal{P}$ of ordered pairs $\left\langle x,y\right\rangle $ where $y=f\left(x\right)$ and having only the recursively defined elements given by \begin{align*} \left\langle 1,c\right\rangle \in & \mathcal{P};\\ \left\langle x,y\right\rangle \in & \mathcal{P}\implies\left\langle x^{\prime},F\left(x,y\right)\right\rangle \in\mathcal{P}. \end{align*} We need to prove that for every number $x\in\mathbb{N}$ there is ex
The original title of my post was Why is it so hard for mathematicians to say exactly one ? The proof under critique seems to be an elaborate way of saying: given a function $s$ let $a=s\left(x\right)$ and $b=s\left(y\right)$ then $x=y\implies a=b.$ Which is the definition of a function. The way I see it, what needs to be shown is that for each number $x^\prime$ there is exactly one pair $\langle{x^\prime,f\left(x^\prime\right)}\rangle.$ That is, $f\left(x^\prime\right)=F\left(x,f\left(x\right)\right)$ has exactly one value. It is axiomatic that there is exactly one value of $x$ for each $x^\prime$ value. So we are obligated to show that for each $x$ value there is exactly one value given by $F\left(x,f\left(x\right)\right).$ The notation $\underline{\exists}$ means there exists exactly one . Some authors write $\exists!,$ but as G. H. Hardy said: "Beauty is the first test: there is no permanent place in the world for ugly mathematics." Base Case of $x=1$ For the exceptional case of $x
|elementary-number-theory|elementary-set-theory|logic|proof-writing|quantifiers|
1
Finding a basis of a subspace of a cartesian product of two vector spaces
Let $$T=\lbrace \left((a,b,c),\left[\begin{matrix} x & y \\\ y & z \end{matrix}\right] \right)\in \mathbb{R}^3 \times \mathbb{R}^{2\times 2}:a+b+c=0 \wedge y=x+z\rbrace$$ $\times$ is cartesian product. Find a basis and the dimension of $T$ . What I did: Using the conditions $c=-a-b$ and $y=x+z$ we get $T=\lbrace \left((a,b,-a-b),\begin{bmatrix} x & x+z \\\ x+z & z \end{bmatrix}\right):a,b,x,z \in \mathbb{R}\rbrace$ $T=\lbrace \left(a(1,0,-1)+b(0,1,-1),x\begin{bmatrix} 1 & 1 \\\ 1 & 0 \end{bmatrix} + z\begin{bmatrix} 0 & 1 \\\ 1 & 1 \end{bmatrix}\right):a,b,x,z \in \mathbb{R}\rbrace$ $T=\lbrace (a(1,0,-1),x\begin{bmatrix} 1 & 1 \\\ 1 & 0 \end{bmatrix})+(b(0,1,-1),z\begin{bmatrix} 0 & 1 \\\ 1 & 1 \end{bmatrix}):a,b,x,z \in \mathbb{R}\rbrace$ At this point, I don't know how to take vectors in $\mathbb{R}^3 \times \mathbb{R}^{2\times 2}$ that span $T$ . (Must be at most 7 vectors because $dim(\mathbb{R}^3 \times \mathbb{R}^{2\times 2})=7$ ).
I'm pretty sure you could take the sequence of vectors: $$v_1=[(1,0,-1),\begin{pmatrix} 0 & 0 \\0 & 0 \end{pmatrix}]$$ $$v_2=[(0,1,-1),\begin{pmatrix} 0 & 0 \\0 & 0 \end{pmatrix}]$$ $$v_3=[(0,0,0),\begin{pmatrix} 1 & 1 \\1 & 0 \end{pmatrix}]$$ $$v_4=[(0,0,0),\begin{pmatrix} 0 & 1 \\1 & 1 \end{pmatrix}]$$ Treating the two parts of the cartesian product as individual parts if you will. This is linearly independent (Trivial but proof left to you) and spanning, as any element of the form above can be made by this sequence of vectors: For example , if I wanted to make $$[(a,b,-a-b),\begin{pmatrix} 0 & 0 \\0 & 0 \end{pmatrix}]$$ , I could take $av_1+bv_2 $ I leave the more general case of the spanning property for you to prove. Thus the size of the basis is 4 as this is an LI spanning sequence. Thus, the dimension is 4. As an aside: You could see this as a linear transformation from $\mathbb{R^4}$ ( you can set $c=-a-b$ and $y=x+z$ ) to a space of dimension 7. And the kernel of that transfor
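The independence claim can be confirmed mechanically by writing the four vectors in coordinates on $\mathbb{R}^7$ and computing a rank (a sketch; `rank` is a small Gaussian-elimination helper written only for this check):

```python
def rank(rows):
    """Rank of a small matrix via naive Gaussian elimination on floats."""
    rows = [list(map(float, r)) for r in rows]
    rk, col, ncols = 0, 0, len(rows[0])
    while rk < len(rows) and col < ncols:
        piv = next((i for i in range(rk, len(rows))
                    if abs(rows[i][col]) > 1e-9), None)
        if piv is None:
            col += 1
            continue
        rows[rk], rows[piv] = rows[piv], rows[rk]
        for i in range(len(rows)):
            if i != rk and abs(rows[i][col]) > 1e-9:
                f = rows[i][col] / rows[rk][col]
                rows[i] = [x - f*y for x, y in zip(rows[i], rows[rk])]
        rk += 1
        col += 1
    return rk

# Coordinates in R^7 = R^3 x R^{2x2}: (a, b, c, m11, m12, m21, m22).
v1 = (1, 0, -1, 0, 0, 0, 0)
v2 = (0, 1, -1, 0, 0, 0, 0)
v3 = (0, 0, 0, 1, 1, 1, 0)
v4 = (0, 0, 0, 0, 1, 1, 1)
assert rank([v1, v2, v3, v4]) == 4    # linearly independent, so dim T = 4
```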
|linear-algebra|vector-spaces|
1
Calculating $\lim_{x\rightarrow 0}\left(\frac{1}{x^2}\right)^x$
Can $\displaystyle \lim_{x\rightarrow 0} \left(\frac{1}{x^2}\right)^x$ be solved as $\displaystyle \lim_{x\rightarrow 0}\frac{1^x}{x^{2x}}$ ? It is in the indeterminate form of ${\infty}^0$ so I think I should transform it to the $\ln$ form and solve it. But I am not sure if I can solve it as given above.
In almost every standard textbook for 1st year calculus, there exists a technique for calculating a limit of a function, that is, take the logarithm of that function, calculate the limit, then see what is the exponential of that limit. So here, we take its logarithm, then we have $$\ln (\frac{1}{x^2})^x$$ $$= x \ln\frac{1}{x^2}$$ $$= x \ln x^{-2}$$ $$= -2x \ln x$$ And this means $$\lim_{x \rightarrow 0} \ln(f(x)) = \lim_{x \rightarrow 0}(-2x \ln x)$$ Since we know that $$\lim_{x \rightarrow 0} x \ln x = 0$$ (It is easy to show this through L'Hôpital's rule) we have $$\lim_{x \rightarrow 0} \ln(f(x)) = -2 \times 0 = 0$$ Therefore, $$\lim_{x \rightarrow 0} f(x)$$ $$= \lim_{x \rightarrow 0} e^{\ln f(x)}$$ $$= e^{\lim_{x \rightarrow 0} \ln f(x)}$$ $$= e^0$$ $$= 1$$ Hope this can help you.
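A quick numerical sanity check of the limit (a sketch; it simply compares $(1/x^2)^x$ with $e^{-2x\ln x}$ and watches the value approach $1$ from the right):

```python
import math

def f(x):
    return (1 / x**2) ** x

# (1/x^2)^x = exp(-2 x ln x) for x > 0, and -2 x ln x -> 0 as x -> 0+
for x in [1e-2, 1e-4, 1e-6]:
    assert abs(f(x) - math.exp(-2 * x * math.log(x))) < 1e-12
assert abs(f(1e-8) - 1.0) < 1e-5
```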
|limits|
0
Categorical distribution pmf
I am trying to understand the pmf $p(y|\theta_1,\dots,\theta_c)=\Pi_{k=1}^c\theta_k^{y_k}$ of the categorical distribution but I do not understand why there aren't any $1-\theta_k$ terms, like in the Bernoulli $p(x)=\theta^x(1-\theta)^{1-x}$ . Is it to do with encoding - that it is the probability of being classified as classes $y$ and it is not concerned with which classes it is not classified as?
For your expression of the PMF to make sense, you should add a few important requirements: for every $i \in \{1, \dotsc, c\}$ , $\theta_i$ is the probability of the $i$ -th outcome, so that we need $\theta_i \in [0,1]$ ; for the probabilities of every possible outcome to sum up to $1$ it must hold that $\sum_{i = 1}^c \theta_i = 1$ ; There are a few different ways to express the PMF of a categorical distribution (see the relevant Wikipedia page for some examples), but from your notation I guess that your $\mathbf{y}$ should be a vector $\mathbf{y} = (y_1, \dotsc, y_c)$ satisfying $y_i \in \{0,1\} \text{ for every } i \in \{1, \dotsc, c\}$ ; $\sum_{i = 1}^c y_i = 1$ , that is, exactly one component of the vector is equal to $1$ and all the others have value $0$ . The Bernoulli distribution is just an example of a categorical distribution with only two categories, corresponding to the two possible outcomes: $0$ and $1$ . To see this, let $\theta_1 = 1-p$ and $\theta_2 = p$ . We obtain the
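A minimal sketch in plain Python (the helper name `categorical_pmf` is made up for illustration) showing why no $(1-\theta_k)$ factors are needed, and how Bernoulli falls out as the two-category case:

```python
def categorical_pmf(y, theta):
    """PMF of a one-hot vector y under class probabilities theta:
    prod_k theta_k ** y_k picks out exactly the probability of the one
    selected class; the non-selected factors are theta_k ** 0 = 1."""
    assert all(yk in (0, 1) for yk in y) and sum(y) == 1
    assert abs(sum(theta) - 1) < 1e-12
    p = 1.0
    for yk, tk in zip(y, theta):
        p *= tk ** yk
    return p

theta = [0.5, 0.3, 0.2]
assert categorical_pmf([0, 1, 0], theta) == 0.3
# Bernoulli(p) is the two-category case with theta = (1-p, p):
p = 0.7
assert categorical_pmf([0, 1], [1 - p, p]) == p
assert categorical_pmf([1, 0], [1 - p, p]) == 1 - p
```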
|probability|probability-theory|probability-distributions|multinomial-coefficients|
0
How to prove the limit of "the exponential of a sequence"
So given a convergent sequence $\{a_n\}_{n=1}^\infty$ with limit $a$, I'd like to prove that $$\lim_{n\to\infty} \left(1+\frac{a_n}{n}\right)^n=e^a.\quad(1)$$ Knowing that $e$ is defined by $$e=\lim_{n\to\infty} \left(1+\frac{1}{n}\right)^n,$$ the relationship in $(1)$ is certainly not unintuitive, and also very useful, but how do I prove it? In the case of a constant sequence $a_n=k\;\forall n$, it's pretty straightforward as you can write $$\left(1+\frac{k}{n}\right)^n=\exp\left[\log\left(\left(1+\frac{k}{n}\right)^n\right)\right]=\exp\left[n\log\left(1+\frac{k}{n}\right)\right]=\exp\left[\frac{\log\left(1+\frac{k}{n}\right)}{1/n}\right]$$ and then taking the limit you can apply L'Hôpital's rule to differentiate the numerator and denominator separately and then get the result after a few manipulations. But since $$\frac{\mathrm{d}}{\mathrm{d}x} \log\left(1+\frac{f(x)}{x}\right)=\frac{xf'(x)-f(x)}{xf(x)+x^2}$$ I will need to know the derivative $(a_n)'$ with respect to $n$ of the sequence
A similar approach to the one by @zhw. : By the Mean Value Theorem, for sufficiently large $n$ (such that $1+\frac{a_n}{n} \neq 0$ ), there is a number $b_n$ between $1$ and $1+\frac{a_n}{n}$ , i.e., $$\left\lvert b_n -1 \right\rvert \leq \left\lvert \frac{a_n}{n} \right\rvert = \left\lvert \left(1+\frac{a_n}{n}\right) - 1 \right\rvert,$$ such that $$\ln\left(1+\frac{a_n}{n}\right) - \ln(1) = \frac{1}{b_n} \cdot \frac{a_n}{n}.$$ Then, $$n\ln\left(1+\frac{a_n}{n}\right) \rightarrow \frac{a}{1} = a$$ as $n \rightarrow \infty$ . Therefore, by the continuity of $\exp$ , $$\left(1+\frac{a_n}{n}\right)^n = e^{n\ln\left(1+\frac{a_n}{n}\right)}\rightarrow e^a$$ as $n \rightarrow \infty$ , as desired.
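Numerically the convergence is easy to observe; here is a small sketch using the particular sequence $a_n = a + 1/n$ (the $10/n$ error bound is an ad-hoc assumption, generous for this example):

```python
import math

# Check (1 + a_n/n)^n -> e^a for the hypothetical sequence a_n = a + 1/n.
a = 1.5
for n in [10**3, 10**5, 10**7]:
    a_n = a + 1 / n
    val = (1 + a_n / n) ** n
    assert abs(val - math.exp(a)) < 10 / n   # empirically the error is O(1/n)
```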
|sequences-and-series|derivatives|exponential-function|
0
Changing the integration limit
I was reading the derivation of $E = mc^{2}$ but I am stuck in a very simple minor detail. In the derivation, we have the kinetic energy (K.E.) as $\int_{0}^{s} F \, ds$ , where $F$ is constant. Then we have \begin{align*} \int_{0}^{s} F \, ds &= \int_{0}^{s} \frac{d(\gamma m v)}{dt} \, ds \\ &= \int_{0}^{mv} vd(\gamma m v) \\ &= \int_{0}^{v} vd(\frac{mv}{\sqrt{1-\frac{v^2}{c^2}}}) \end{align*} I cannot see how the integration bounds change. Do we make some kind of $u$ substitution?
As discussed in the comments, the derivation is maybe not perfectly rigorous, but we can see that a substitution is being made not only because the bounds of integration changed but because the variable of integration changed. Notice that in the first line the integral is being taken with respect to $ds$ , but in the second it is being taken with respect to $d(\gamma mv)$ , i.e. the new variable of integration is $\rho = \gamma mv$ , the (relativistic) momentum. If you go through the substitution more thoroughly (and preferably use different symbols for $v$ as a variable in the integral and as a bound of the integral) you should get something that resembles the given result.
|calculus|
0
Help with reformulating linear programming with rounding numbers
I have the following problem, abstracting away a few details from a real-world application, that I want to solve with linear programming (or any other mathematical optimization with constraints, really). I have two sets of values $y$ and $\hat{r}$ . $y$ are real-world measured values, while $\hat{r}$ are outputs from a simulation with mathematical model. $y$ are integers in range $[0, 1, \dots, 21]$ , while $\hat{r}$ are real numbers in range $[0, 21]$ that are rounded to integers $\hat{y}$ . So far I used regular rounding, e.g. $1.33 \rightarrow 1$ , $2.5 \rightarrow 3$ , $21.0 \rightarrow 21$ . This can also be written as $\text{round}(x) = \lfloor x+0.5 \rfloor$ . My $y$ values distribution is far from uniform (more like log-normal), which is reflected by the model, which often outputs values too large (for small $y$ ) or too small (for large $y$ ). My idea is to "optimize" the rounding, changing the range $[a_i, a_{i+1}]$ for which the number is rounded to $i$ . I minimize the sum
You can solve the problem via linear programming by reformulating as a shortest path problem in a layered network defined as follows. Let $R$ be the set of distinct $r$ values. Let $I=\{1,\dots,4\}$ . The main nodes are $R \times I$ , and the main arcs are from $(r,i)$ to $(r',i+1)$ , where $r < r'$ . The idea is that arc $(r,i)\to(r',i+1)$ is traversed if all observations with $r \le r_j < r'$ have $\hat{y}_j=i$ . The cost for traversing the arc is the sum of absolute deviations $$\sum_{j:\, r \le r_j < r'} |y_j - i|.$$ Add a source node $(0,0)$ , with directed arcs to all $(r,1)$ , and a sink node $(4,5)$ , with directed arcs from all $(r,4)$ . Now find a shortest path from source to sink. For each arc $(r,i)\to(r',i+1)$ that appears in the shortest path, take $a_i=r$ .
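Because every arc goes from layer $i$ to layer $i+1$, the shortest path can be computed with a simple layer-by-layer dynamic program rather than a general solver; here is a sketch on hypothetical toy data (the $r_j, y_j$ values and variable names are made up):

```python
# Toy data: model outputs r_j with true integer labels y_j in {1,...,4}.
data = [(0.4, 1), (0.9, 1), (1.2, 2), (1.7, 2), (2.1, 2),
        (2.6, 3), (3.2, 3), (3.4, 4), (3.9, 4)]
K = 4
INF = float("inf")
cuts = sorted({r for r, _ in data}) + [INF]   # candidate breakpoints

def cost(lo, hi, label):
    # total absolute deviation if every r_j in [lo, hi) is rounded to `label`
    return sum(abs(y - label) for r, y in data if lo <= r < hi)

# best[c] = cheapest labeling of all points with r < cuts[c] using labels 1..i
best = [cost(-INF, cuts[c], 1) for c in range(len(cuts))]
for i in range(2, K + 1):
    best = [min(best[c1] + cost(cuts[c1], cuts[c2], i) for c1 in range(c2 + 1))
            for c2 in range(len(cuts))]
optimum = best[-1]        # breakpoint after label K is +infinity
assert optimum == 0       # this toy data is perfectly separable by thresholds
```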
|optimization|linear-programming|programming|
1
About the number of solutions of $\varphi(n)=m$
Let's denote $N_m$ as the number of solutions of $$\varphi(n)=m$$ for all $m\geqslant 2$ , where $\varphi$ is Euler's totient function, i.e. $$N_m:=\#\{n\in \mathbb N,\ \varphi(n)=m\}.$$ We can prove easily that $N_m=0$ if $m$ is odd. We also know that $N_m<\infty$ for all $m$ (thanks to this ). I plotted the sequence $(N_m)$ , and I put red points when $$m\equiv 0\pmod{12}.$$ The question. Why is every "high" point ( i.e. corresponding to a high value of $N_m$ ) a red point?
Let $m\in\Bbb{N}$ be given, and let $n\in\Bbb{N}$ be such that $\varphi(n)=m$ . If $m\not\equiv0\pmod{12}$ then either $3\nmid m$ or $4\nmid m$ . If $4\nmid m$ then either $n=p^k$ or $n=2p^k$ for some prime $p\equiv3\pmod{4}$ , or $n=4$ . Similarly, if $3\nmid m$ then $n$ is a product of primes congruent to $2\pmod 3$ , or $3$ times such a product. Both conditions are quite restrictive, and so $N_m$ should be small.
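The phenomenon is easy to reproduce numerically (a sketch; trial-division totient, counting solutions $n \le 2000$, which is enough to capture all solutions for the small $m$ checked):

```python
def phi(n):
    """Euler's totient, by trial-division factorization."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

counts = {}
for n in range(1, 2001):
    counts[phi(n)] = counts.get(phi(n), 0) + 1

# N_m = 0 for every odd m > 1 ...
assert all(counts.get(m, 0) == 0 for m in range(3, 200, 2))
# ... and a multiple of 12 such as m = 48 towers over its even neighbours
assert counts[48] > counts.get(44, 0) and counts[48] > counts.get(46, 0)
```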
|arithmetic|arithmetic-functions|
0
Ultraweak Continuity Implies Norm Continuity
I was reading the section on Schur Multipliers in Ozawa's book "C*-algebras and Finite-Dimensional Approximations" and I am having troubles understanding the proof of Proposition D.6, namely the part about ultraweak continuity. I think I can skip most of the context and I am stuck on a particular part, so I'll write it differently. Let $H$ be a Hilbert space (in the problem we have $H=\ell^2(\Gamma)$ ) and let $T: B(H) \rightarrow B(H)$ be an ultraweakly continuous linear map on $B(H)$ , moreover, we know that the restriction $T|_{K(H)}:K(H) \rightarrow K(H) $ is well defined and not only ultraweak continuous, but also norm continuous. To my understanding of the proof in the book, this implies that $T$ is also norm continuous on the whole $B(H)$ , however I am not being able to prove this. I tried using the fact that trace class operators (predual of $B(H)$ ) are compact, but did not reach any relevant conclusion, moreover I am almost sure the Uniform Boundedness Theorem is required. If this a
By Kaplansky density theorem, for any $x \in B(H)$ , there exists a net $x_\lambda \in K(H)$ with $x_\lambda \to x$ ultraweakly and $\|x_\lambda\| \leq \|x\|$ for all $\lambda$ . Then $\|Tx_\lambda\| \leq \|T|_{K(H)}\|\|x_\lambda\| \leq \|T|_{K(H)}\|\|x\|$ . By ultraweak continuity of $T$ , we have $Tx_\lambda \to Tx$ ultraweakly, whence $\|Tx\| \leq \|T|_{K(H)}\|\|x\|$ . Since $x \in B(H)$ is arbitrary, this means $T$ is bounded and in fact $\|T\| \leq \|T|_{K(H)}\|$ .
|functional-analysis|hilbert-spaces|operator-algebras|c-star-algebras|completely-positive-maps|
0
Integral $\int_0^1\frac{\arcsin^3 x}{x^2}\text{d}x=6\pi G-\frac{\pi^3}{8}-\frac{21}{2}\zeta(3)$
Show that: $$\int_0^1\frac{\arcsin^3 x}{x^2}\text{d}x=6\pi G-\frac{\pi^3}{8}-\frac{21}{2}\zeta(3)$$ I evaluated this by some Fourier series. Is there any other method? Start with the substitution $$u=\arcsin x$$ Then we have to integrate $$\int_0^{\frac{\pi}{2}}\frac{u^3\cos u}{\sin^2 u}\text{d}u=-\frac{\pi^3}{8}+3\int_0^{\frac{\pi}{2}}u^2\csc u\,\text{d}u$$ after an integration by parts. Since $$\int\csc u\,\text{d}u=\ln (\csc u-\cot u)=\ln \left(\frac{1-\cos u}{\sin u}\right)=\ln 2+2\ln \left(\sin \frac{u}{2}\right)-\ln \sin u$$ we thus have $$\int_0^{\frac{\pi}{2}}u^2\csc u\,\text{d}u=\int_0^{\frac{\pi}{2}}u^2\,\text{d}\left(2\ln \sin\frac{u}{2}-\ln \sin u\right)$$ $$=-\frac{\pi^2}{4}\ln 2-2\int_0^{\frac{\pi}{2}}u\left(2\ln \sin \frac{u}{2}-\ln \sin u\right)\text{d}u$$ $$=-\frac{\pi^2}{4}\ln 2-4\int_0^{\frac{\pi}{2}}u\ln \sin \frac{u}{2}\text{d}u+2\int_0^{\frac{\pi}{2}}u\ln \sin u\text{d}u$$ $$=-\frac{\pi^2}{4}\ln 2+4\int_0^{\frac{\pi}{2}}u\left[\ln 2+\sum_{n=1}^{\infty}\frac{\cos nu}{n}\right]\text{d}u-\int_0^{\frac{\pi}{2}}u^2\cot u\text{d}u$$ $$=\frac
$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{{\displaystyle #1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\on}[1]{\operatorname{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\sr}[2]{\,\,\,\stackrel{{#1}}{{#2}}\,\,\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} & \color{#44f}{\int_{0}^{1}{\arcsin^{3}\pars{x} \over x^{2}}\,\dd x} \\[2mm] & \sr{\arcsin\pars{x}\ \mapsto\ x}{=} \int_{0}^{\pi/2}{x^{3} \over \sin^{2}\pars{x}}\cos\pars{x}\dd x \\[5mm] \sr{\rm IBP}{=} & -\,{\pi^{3} \over 8} + 3\color{red}{\int_{0}^{\pi/2}{x
|real-analysis|calculus|integration|fourier-analysis|trigonometric-integrals|
0
How does $y^{1-q}$ become $x^{(1-q)/q}$ in this proof of the formula for the derivative of rational powers?
I don't understand a proof step in my calculus textbook (Calculus one and several variables, Salas). It's supposed to be a proof of the derivative of rational powers, but I don't understand the 3rd line from the bottom. How did they get $y^{1-q}$ to equal $x^{(1-q)/q}\,$ ? Thanks
Recall that $y=x^{\frac{1}{q}}$ by assumption, so substitution tells us that $y^{1-q}=(x^{\frac{1}{q}})^{1-q}=x^{\frac{1-q}{q}}$ . This uses the fact that a power raised to another power is equivalent to multiplying the exponents together; the "power of a power" rule for exponents.
|calculus|
1
Obtaining angle of rotation from eigenvalues of the rotation matrix
Suppose we have a 2D rotation matrix. If we plot its eigenvalues on the complex plane, the angle of rotation is connected to the eigenvalues -- larger rotations have eigenvalues further away from 1. To extend this to higher dimensions, I want to know the following: Given a fixed generic orthogonal matrix $A$ , and a random Gaussian vector $x$ , can we get $z$ , the expected value of the cosine of the angle between $x$ and $Ax$ , from the eigenvalues of $A$ ? There appears to be a square-root-like relationship: if the rotation is small, all eigenvalues are close to $1$ , and as the rotation grows, the eigenvalues get further away from $1$ . This plot is made by taking the orthogonal component of the QR decomposition of $I+\epsilon G$ for a random Gaussian matrix $G$ and varying values of $\epsilon$ , estimating the angle of rotation by trying it on random Gaussian vectors, and plotting against the arg of the eigenvalues. Notebook
Yes, the expected value of the cosine is uniquely determined by the eigenvalues, being the average of the real parts. We can decompose $A$ as rotations $R_i$ of mutually orthogonal 2D subspaces $V_i$ (and the identity or reflection along a 1D space if the dimension is odd). Let $\lambda_i, \overline{\lambda_i}$ be the eigenvalues of $A$ corresponding to $R_i$ and let $x_i$ be the projection of $x$ onto $V_i$ . Then $x^TAx = \sum_i x_i^T R_i x_i$ . For $v_i$ uniformly distributed on $\{v\in\mathbb{R}^2: \|v\|_2 = 1\}$ , we have $\mathbb{E}[v_i^T R_i v_i] = \Re(\lambda_i)$ , so with $x$ uniformly distributed on the sphere, $$\mathbb{E}[x^TAx] = \frac{2}{n}\sum_i \Re(\lambda_i),$$ where $n$ is the dimension.
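This is easy to sanity-check numerically. The sketch below (my own choice of angles and sample size) uses the equivalent statement that $\mathbb{E}[x^TAx]=\operatorname{tr}(A)/n$ for $x$ uniform on the unit sphere, which for a block-diagonal rotation equals $\frac{2}{n}\sum_i \Re(\lambda_i)$:

```python
import math, random

random.seed(0)

def block_rotation(t1, t2):
    """4x4 block-diagonal rotation: angles t1, t2 on two orthogonal 2D planes."""
    c1, s1, c2, s2 = math.cos(t1), math.sin(t1), math.cos(t2), math.sin(t2)
    return [[c1, -s1, 0, 0],
            [s1,  c1, 0, 0],
            [0, 0, c2, -s2],
            [0, 0, s2,  c2]]

def expected_cosine_mc(A, samples=100_000):
    """Monte Carlo estimate of E[cos angle(x, Ax)] for x uniform on the sphere."""
    n = len(A)
    total = 0.0
    for _ in range(samples):
        g = [random.gauss(0, 1) for _ in range(n)]
        r = math.sqrt(sum(v * v for v in g))
        x = [v / r for v in g]
        Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        # A is orthogonal, so |Ax| = |x| = 1 and the dot product is the cosine
        total += sum(x[i] * Ax[i] for i in range(n))
    return total / samples

t1, t2 = 0.3, 1.1
A = block_rotation(t1, t2)
mc = expected_cosine_mc(A)
# (2/n) * sum of Re(lambda_i) over the two conjugate pairs, n = 4
exact = (math.cos(t1) + math.cos(t2)) / 2
```

With $10^5$ samples the Monte Carlo estimate agrees with the eigenvalue formula to within a few thousandths.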
|linear-algebra|
0
Showing that the supremum of a given function is finite.
Consider arbitrary elements $n \in \mathbb N$ and $ \lambda \in \mathbb R$ such that $0 < \lambda < n$ . Problem. My goal is to prove that the supremum $$ \sup_{r > 0} f(r) $$ is finite, where $f$ is defined as $$ f(r) = \begin{cases} r^{n-\lambda}\log_2(r+2) \quad &\text{ if } 0 < r \leq 1, \\ r^{-\lambda}\log_2(r+2) \quad &\text{ if } r > 1.\end{cases} $$ Attempt. I thought about starting by differentiating $f$ . Doing so, we obtain $$ f'(r) = \begin{cases} r^{n-\lambda-1}\left( (n-\lambda)\log_2(r+2) + \frac{r}{r\ln(2) + \ln(4)} \right) \quad &\text{ if } 0 < r < 1, \\ r^{-\lambda-1}\left( \frac{r}{r\ln(2) + \ln(4)} - \lambda \log_2(r+2) \right) \quad &\text{ if } r > 1.\end{cases} $$ After this, I attempted to determine the zeros of $f'$ but it turns out this isn't an easy task (at least for me). Therefore, I moved on to studying the sign of the derivative $f'$ . It's clear that for $0 < r < 1$ the derivative $f'$ is always positive, since every term involved is positive. As for $r > 1$ , to study the sign of $f'$ we must study the expression $$ \frac{r}{r\ln(2) + \ln(4)} - \lambda \log_2(r+2). $$ Again, just like studying the zeros of the derivative, this turned
The question is not asking you to find the supremum, only to prove that it is finite. So it is sufficient to prove that $f(r)$ is bounded. First, while not completely necessary, it's nice to note that $f(r)$ is always positive. As you've noted, for $0 < r < 1$ , $f(r)$ has positive derivative, so it's increasing. Then for $r > 1$ , we can use Lázaro's hint from the comments that $\lim_{r \rightarrow \infty} \frac{\log r}{r^\lambda} = 0$ , and with a little tweaking that gives us that $\lim_{r \rightarrow \infty} f(r) = 0$ . Then think back to the original definition of a limit at infinity: we know that for any $M \in \mathbb{R}^+$ , there is a value $R$ such that $r \geq R \implies |f(r)| \leq M$ . So pick an $M$ . We know that $\sup_{r > R} f(r) \leq M$ . We also know that $f$ is continuous on $(0, R]$ and tends to $0$ as $r \to 0^+$ , which means that it is bounded on $(0, R]$ , so set $M' = \sup_{r \in (0, R]} f(r)$ . Then $\sup_{r > 0} f(r) \leq \max(M, M') < \infty$ .
|real-analysis|calculus|solution-verification|logarithms|supremum-and-infimum|
1
Asymptotic analysis of a product of logarithms
It may have been already done, but I have found the answer nowhere... Context. We already know by Stirling's formula that $$n!\sim \sqrt{2\pi n}\left(\frac ne\right)^n.$$ We can deduce from this that $$\log(n!)\sim n\log n.$$ The question. But what about $$P_n:=\prod_{k=2}^n \log k.$$ Is it possible to conduct an asymptotic analysis of $P_n$ , and to find a simpler expression for its growth?
With the help of the Abel–Plana formula , it can be shown that \begin{align*} \prod\limits_{k = 2}^n {\log k} =\; & C\cdot (\log n)^{n + 1/2} \exp \!\left( { - \int_2^n {\frac{{\text{d}t}}{{\log t}}} } \right) \\ & \times\!\left( 1 + \frac{1}{{12n\log n}} + \frac{1}{{288n^2 \log ^2 n}} + \mathcal{O}\!\left( {\frac{1}{{n^3 \log n}}} \right) \right) \end{align*} with $C = 1.6352358009117223 \ldots$ , or explicitly $$ \log C = - \frac{3}{2}\log \log 2 - 2\int_0^{ + \infty } {\frac{1}{{\text{e}^{2\pi t} - 1}}\arctan \left( {\frac{{2\arctan (t/2)}}{{\log (t^2 + 4)}}} \right)\text{d}t} . $$
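The leading terms of this expansion can be checked numerically. The sketch below (cutoffs and quadrature step are arbitrary choices) computes $\log P_n$ as a sum to avoid overflow, approximates $\int_2^n \mathrm{d}t/\log t$ by Simpson's rule, and solves the stated asymptotic for $C$:

```python
import math

def log_product(n):
    """log of P_n = prod_{k=2}^n log k, computed as a sum to avoid overflow."""
    return sum(math.log(math.log(k)) for k in range(2, n + 1))

def li_part(n, steps=20000):
    """Simpson approximation of the integral of dt/log t from 2 to n."""
    h = (n - 2) / steps
    f = lambda t: 1.0 / math.log(t)
    s = f(2) + f(n)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * f(2 + i * h)
    return s * h / 3

def c_estimate(n):
    """Solve the asymptotic formula for C, including the 1/(12 n log n) term."""
    ln_asym = (n + 0.5) * math.log(math.log(n)) - li_part(n)
    corr = 1 + 1 / (12 * n * math.log(n))
    return math.exp(log_product(n) - ln_asym) / corr

c500, c1000 = c_estimate(500), c_estimate(1000)
```

The estimates stabilize quickly near the stated value $C \approx 1.6352358$.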
|real-analysis|logarithms|asymptotics|
0
Is there a simple proof for $\frac{2n}{3}$ is not an integer when $\frac{n}{3}$ is not an integer?
Obviously if $\frac{n}{3}$ is an integer then $\dfrac{2n}{3}$ is an integer. But what if $\frac{n}{3}$ is not an integer? Can it be proven that $\dfrac{2n}{3}$ is not an integer (or, more specifically, an even multiple of 3) if $\frac{n}{3}$ is not an integer? I feel that intuitively this must be true, but I am not a mathematician and my idle algebraic musings aren't helping me. Is there a quick, simple proof one way or the other? [Sorry if this is mistagged.]
Suppose $2n/3$ is an integer; it is either odd or even. It cannot be odd: otherwise $3\cdot (2n/3)=2n$ would also be odd, but $2n$ is clearly even. So $2n/3$ is even, say $2n/3=2k$ , and then $n/3=k$ is an integer. Contrapositively, if $n/3$ is not an integer, then neither is $2n/3$ .
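The equivalence is also easy to confirm by brute force (a throwaway check, not part of the proof):

```python
# Since gcd(2, 3) = 1, the divisibility 3 | 2n holds exactly when 3 | n.
for n in range(1, 10_000):
    assert ((2 * n) % 3 == 0) == (n % 3 == 0)
```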
|group-theory|elementary-number-theory|
0
get dihedral angles of octahedron given all triangles
An octahedron (not necessarily regular) consists of 8 triangles. You can see it as two pyramids glued together (from now on I only consider this case). Call the triangles in the upper pyramid $T_1, T_2, T_3, T_4 $ (clockwise) and in the lower pyramid $T_5, …, T_8$ (clockwise). The seams are $(T_i, T_{i+1}) (i=1,2,3,5,6,7), (T_4, T_1), (T_8, T_5)$ and $(T_i, T_{i+4}) (i=1,2,3,4)$ . Assume you know all information about the triangles. Call the angles and lengths for triangle $T_i$ (starting from the top (for $i = 1,...,4$ ), resp. bottom (for $i = 5,6,7,8$ ), and going clockwise): $(a_i, b_i, c_i)$ for the lengths and $(\alpha_i, \beta_i, \gamma_i)$ for the angles. You know that they form an octahedron; the question now is: How can I find all dihedral angles? I guess one can find equations which these dihedral angles need to fulfill, so that by solving this system of equations I can find the dihedral angles. Does someone have ideas for getting these equations?
Let the vertices of the pyramid be $A,B,C,D,V$ , where $ABCD$ is the base and $V$ is the apex. Using coordinate geometry, and letting the given sides of the base be $a = AB, b = BC, c = CD, d = DA $ , we can assign $A,B,C,D$ as follows $ A = (0,0,0) , B= (a, 0, 0), C= (x_1, y_1, 0), D = (x_2, y_2, 0) $ with $y_1, y_2 \gt 0 $ Further, we have $V = (x_3, y_3, z_3)$ with $z_3 \gt 0$ Let $e = VA, f = VB, g = VC , h = VD $ , then it is straightforward to write the following seven equations in the seven unknowns $x_1,y_1, x_2, y_2, x_3, y_3, z_3$ : $(x_1 - a)^2 + y_1^2 = b^2 \tag{1}$ $(x_1 - x_2)^2 + (y_1 - y_2)^2 = c^2 \tag{2} $ $x_2^2 + y_2^2 = d^2 \tag{3}$ $x_3^2 + y_3^2 + z_3^2 = e^2 \tag{4}$ $(x_3 - a)^2 + y_3^2 + z_3^2 = f^2 \tag{5}$ $(x_3 - x_1)^2 + (y_3 - y_1)^2 + z_3^2 = g^2 \tag{6} $ $(x_3 - x_2)^2 + (y_3 - y_2)^2 + z_3^2 = h^2 \tag{7}$ Subtracting equations $(5),(6), and (7)$ from equation $(4)$ , we get $2 a x_3 - a^2 = e^2 - f^2 \tag{8}$ $2 x_1 x_3 + 2 y_1 y_3 - x_1^2 - y_1^2 =
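Once the vertex coordinates are found from such a system, each dihedral angle follows from the outward normals of the two faces sharing an edge. A sketch for the regular octahedron (the coordinates are my own illustrative choice, not the unknowns of the system above); the interior dihedral angle should come out as $\arccos(-1/3)\approx 109.47^\circ$:

```python
import math

def sub(u, v): return tuple(a - b for a, b in zip(u, v))
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])
def dot(u, v): return sum(a * b for a, b in zip(u, v))

# Regular octahedron: square base ABCD plus apexes V (top) and W (bottom).
A, B, C, D = (1, 0, 0), (0, 1, 0), (-1, 0, 0), (0, -1, 0)
V, W = (0, 0, 1), (0, 0, -1)

# Outward normals of the two upper faces VAB and VBC, which share the edge VB.
n1 = cross(sub(A, V), sub(B, V))   # = (1, 1, 1), points away from the center
n2 = cross(sub(B, V), sub(C, V))   # = (-1, 1, 1), also outward

# Interior dihedral angle = pi minus the angle between outward normals.
cosang = dot(n1, n2) / math.sqrt(dot(n1, n1) * dot(n2, n2))
dihedral = math.pi - math.acos(cosang)
# cos(dihedral) = -1/3, i.e. about 109.47 degrees
```

For an irregular octahedron the same normal-based computation applies to each seam, once the coordinates solving equations $(1)$–$(7)$ are known.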
|geometry|trigonometry|3d|
0
representation of an element in a cyclic group in a normal form
Suppose $A$ is a finite cyclic group of order $n$ , and $a$ is an element in $A$ of order $d$ . My question is: Question 1: Can we find a generator $x$ of $A$ such that the element $a$ can be written in the form $x^{\frac{n}{d}}$ ? My attempt to solve this question is as follows: Suppose $x$ is a generator of $A$ . Since $A$ has a unique subgroup of order $d$ , namely $\langle x^{\frac{n}{d}}\rangle$ , we have $\langle a\rangle = \langle x^{\frac{n}{d}}\rangle$ . Hence we can write $a$ as: \begin{equation} x^{\frac{mn}{d}}, \end{equation} where $m\in \mathbb{Z}, (m,d)=1$ . Since the generators of $A$ take the form $x^l$ , where $(l,n)=1$ , our question can be restated as: Question 1': Suppose $m\in \mathbb{Z}, (m,d)=1$ . Can we find an integer $l$ , such that $(l,n)=1$ and $x^{\frac{mn}{d}}=x^{\frac{ln}{d}}$ ? And this is obviously equivalent to: Question 1'': Suppose $m\in \mathbb{Z}, (m,d)=1$ . Can we find $k\in \mathbb{Z}$ , such that $m+kd$ and $n$ are relatively prime? I fail to prove it. Also I cannot find a counterexample.
Proof of Question 1''. Suppose $n=dn'$ . Decompose $n'=d'n''$ such that every prime factor of $d'$ divides $d$ and no prime factor of $n''$ divides $d$ . Since $(dd',n'')=1$ and $(m,dd')=1$ , we only need to find $k\in \mathbb{Z}$ such that $(m+kdd',n'')=1$ ; then $m+(kd')d$ is relatively prime to $n$ , since no prime divisor of $dd'$ can divide $m+kdd'$ . This becomes trivial because the congruence classes represented by: \begin{equation} m,m+dd',m+2dd',\cdots,m+(n''-1)dd' \end{equation} exhaust $\mathbb{Z}/n''\mathbb{Z}$ . (Again we use the fact $(dd',n'')=1$ .)
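The claim in Question 1'' is easy to test exhaustively for small parameters (a brute-force confirmation, not a proof; the search bounds are arbitrary):

```python
from math import gcd

def has_good_k(m, d, n):
    """Is there a k with gcd(m + k*d, n) = 1? Searching k = 0..n-1 suffices,
    since m + k*d only depends on k modulo n."""
    return any(gcd(m + k * d, n) == 1 for k in range(n))

for n in range(1, 60):
    for d in range(1, n + 1):
        if n % d:            # d must divide n (the order of an element divides |A|)
            continue
        for m in range(1, d + 1):
            if gcd(m, d) == 1:
                assert has_good_k(m, d, n)
```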
|group-theory|elementary-number-theory|cyclic-groups|
0
Showing $\frac16 \tan^6x +\frac18\tan^8x = \frac14 \cos^{-4}x -\frac13 \cos^{-6}x + \frac18\cos^{-8}x+c$ for some real $c$, without calculus
While teaching my calculus class I happened upon a strange identity. Consider the antiderivative $$\int \tan^5 x \sec^4 x dx.$$ You can do that by writing the integrand as $\tan^5 x (1+\tan^2 x) \sec^2x$ , then substituting $u=\tan x$ . This gives one answer. But another way to do it is to rewrite the integrand as $\sin^5 x \cos^{-9} x = \sin x (1-\cos^2 x)^2 \cos^{-9}x$ , then apply a substitution $u=\cos x$ . If you follow through with both of these methods, it seems to give the strange identity $$\frac16 \tan^6x +\frac18\tan^8x = \frac14 \cos^{-4}x -\frac13 \cos^{-6}x + \frac18\cos^{-8}x+c,$$ for some $c\in \Bbb R$ . I didn't believe it at first, but a student checked it on desmos and they seem to agree. Questions: Is there a simple way to see that without calculus? And is there some general way to generate these kinds of seemingly nontrivial trig identities, where some function of tangent equals some other function of sine/cosine? I thought maybe it could be related to some orthogo
This is just binomial expansion applied to the identity $$\sec^2 x = 1 + \tan^2 x.$$ We have $$\begin{align} \sec^4 x &= 1 + 2 \tan^2 x + \tan^4 x \\ \sec^6 x &= 1 + 3 \tan^2 x + 3 \tan^4 x + \tan^6 x \\ \sec^8 x &= 1 + 4 \tan^2 x + 6 \tan^4 x + 4 \tan^6 x + \tan^8 x \end{align}$$ hence $$\begin{align} \frac{1}{4} \sec^4 x - \frac{1}{3} \sec^6 x + \frac{1}{8} \sec^8 x &= A + B \tan^2 x + C \tan^4 x + D \tan^6 x + E \tan^8 x \end{align}$$ where $$\begin{align} A &= \frac{1}{4} - \frac{1}{3} + \frac{1}{8} = \frac{1}{24} \\ B &= \frac{2}{4} - \frac{3}{3} + \frac{4}{8} = 0 \\ C &= \frac{1}{4} - \frac{3}{3} + \frac{6}{8} = 0 \\ D &= -\frac{1}{3} + \frac{4}{8} = \frac{1}{6} \\ E &= \frac{1}{8}, \end{align}$$ and the result immediately follows with the constant term equal to $A = \frac{1}{24}$ .
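One can also just confirm numerically that the difference of the two antiderivatives is the constant $1/24$ (a quick floating-point sanity check at a few arbitrary sample points):

```python
import math

def lhs(x):
    t = math.tan(x)
    return t ** 6 / 6 + t ** 8 / 8

def rhs(x):
    s = 1 / math.cos(x)
    return s ** 4 / 4 - s ** 6 / 3 + s ** 8 / 8

# The difference should equal the constant 1/24 for every x in (-pi/2, pi/2).
for x in (0.1, 0.5, 1.0, 1.3):
    assert abs(rhs(x) - lhs(x) - 1 / 24) < 1e-8
```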
|calculus|integration|trigonometry|
1
An inequality with a "Cauchy-Schwarz" flavour
Let $x_1\leq x_2\leq ... \leq x_n$ and $y_1 be positive integers such that $x_1\geq 2$ , $x_i and $y_i + 1 . Do we have that $$x_ny_n (\sum_{i=1}^n x_i)^2 \leq (\sum_{i=1}^n x_iy_i)^2$$ The proposed inequality has a certain "Cauchy-Schwarz" flavour, but from the lower bound side of the sum of products of two variables. Given the conditions, the right side has the potential to grow more rapidly due to both the squaring of the sum and the strict increase of $y_i$ ​ elements. However, I feel that proving this inequality requires a sophisticated mathematical argument or method that I am not able to develop. The conditions have been set after finding counterexamples if they are softened. However, if you think that they can be softened in any way, it would be great. Nevertheless, here my Questions: (I) I have not been able to find counterexamples to the inequality with the conditions set. Do you find any? If there is, how would you strengthen the conditions for the inequality to hold? (II) D
I think I can prove that for every choice of real numbers $x_1\leq x_2\leq \dots \leq x_n$ and $y_1\leq y_2\leq \dots \leq y_n$ we have that $$\left(\sum_{i=1}^n x_i\right) \cdot \left(\sum_{i=1}^n y_i\right) \leq n \cdot \left(\sum_{i=1}^n x_iy_i\right)$$ It can be done with a direct application of the rearrangement inequality , which states that $x_1y_{\sigma(1)}+ \dots +x_ny_{\sigma(n)}\leq \sum_{i=1}^n x_iy_i$ for every permutation $\sigma$ of the numbers $1,2,...,n$ . If we sum over the $n$ cyclic shifts $\sigma$ , in which each $x_i$ is paired with every $y_j$ exactly once, the total is precisely $\left(\sum_{i=1}^n x_i\right) \cdot \left(\sum_{i=1}^n y_i\right)$ , and by the rearrangement inequality each of those $n$ permutation sums is at most $\sum_{i=1}^n x_iy_i$ , so we get the desired inequality. An inequality no doubt more beautiful and general than the one stated originally. Regarding the original inequality, we have that, for every choice of real numbers $x_1\leq x_2\leq \dots \leq x_n$ and $
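The sorted-sequence inequality above (Chebyshev's sum inequality) is easy to stress-test on random data (a sanity check with arbitrary sample parameters):

```python
import random

random.seed(1)

for _ in range(1000):
    n = random.randint(1, 12)
    xs = sorted(random.uniform(-10, 10) for _ in range(n))
    ys = sorted(random.uniform(-10, 10) for _ in range(n))
    lhs = sum(xs) * sum(ys)
    rhs = n * sum(x * y for x, y in zip(xs, ys))
    assert lhs <= rhs + 1e-9   # Chebyshev's sum inequality
```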
|inequality|integers|upper-lower-bounds|cauchy-schwarz-inequality|
1
What is the probability of at least one coincidence after a permutation?
Imagine we have N distinct and ordered elements. (1, 2, 3, 4, 5, 6, 7) # Example with 7 elements. And we permute them randomly, for example.... (3, 2, 5, 4, 1, 7, 6) # We get one coincidence for the number 4. What is the probability that at least one element matches its original position? If the question were about the probability that all numbers match, it would be easy, 1/N! But this probability is not so easy to calculate, because the value at each position depends on the others. For example, I guess the probability of no coincidences cannot be calculated as ((N-1)/N)·((N-2)/(N-1))·((N-3)/(N-2))··· = 1/N
Just use the inclusion-exclusion principle . The number of permutations where a match occurs is $$\sum_{k=1}^n(-1)^{k-1}\binom{n}{k}\cdot (n-k)!$$ Here, $k$ is the number of matching positions, $\binom{n}{k}$ is the number of ways to choose these $k$ positions, $(n-k)!$ is the number of ways to arrange the other $n-k$ elements. The probability is $$\frac{\sum_{k=1}^n (-1)^{k-1}\binom{n}{k}\cdot (n-k)!}{n!}$$ Or you can use derangements , which already encapsulate this inclusion-exclusion calculation. The answer is $$1-\frac{\text{D}_n}{n!},$$ where $\text{D}_n$ is the number of derangements of $n$ elements; as $n\to\infty$ this probability tends to $1-\frac{1}{e}$ .
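The inclusion-exclusion formula can be verified by exhausting all permutations for small $N$ (a brute-force check):

```python
import math
from itertools import permutations

def prob_match_brute(n):
    """Fraction of permutations of {0,...,n-1} with at least one fixed point."""
    perms = list(permutations(range(n)))
    hits = sum(any(p[i] == i for i in range(n)) for p in perms)
    return hits / len(perms)

def prob_match_formula(n):
    """Inclusion-exclusion formula from the answer."""
    s = sum((-1) ** (k - 1) * math.comb(n, k) * math.factorial(n - k)
            for k in range(1, n + 1))
    return s / math.factorial(n)

for n in range(1, 8):
    assert abs(prob_match_brute(n) - prob_match_formula(n)) < 1e-12
# Already at n = 7 the value is very close to the limit 1 - 1/e
```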
|combinatorics|statistics|permutations|
1
A martingale $(X_t)_{t\geqslant 0}$ which is bounded in $L^{1}$ but not uniformly integrable.
Are there any examples of a martingale $X(t)$ which is bounded in $L^{1}$ but not uniformly integrable? I've just learned that $$M_t=\mathrm{e}^{aW_t-a^2t/2}$$ with $a\neq 0$ and $W$ a Brownian Motion is a martingale but not uniformly integrable. Are there any other examples which are more concrete and more constructive? Thanks for your help!
There are many examples:

1. Show that $(W_t)_{t \geq 0}$ and $(W_t^2-t)_{t \geq 0}$ are not uniformly integrable, from Martingale not uniformly integrable .

2. Let $\xi_j$ be i.i.d. with $\mathbb{P}(\xi_j=0)=\mathbb{P}(\xi_j=2)=1/2$ for all $j \in \mathbb{N}$ , and define $M_n=\prod_{j=1}^n \xi_j$ for $n\geq 0$ . Then $(M_n)$ is a non-negative martingale with $\mathbb{E}(M_n)=1$ for all $n$ . But $M_n\to 0$ almost surely, so $(M_n)$ cannot be uniformly integrable. (Proof of a martingale not being uniformly integrable )

3. From Uniformly Integrable Martingale : let $(Y_{n})_{n\in\mathbb{N}}$ be a sequence of positive, independent random variables whose expectation is $1$ for all $n$ , satisfying $$\prod_{k=1}^{\infty}\mathbb{E}(\sqrt{Y_{k}})=0.$$ Then $\xi_{n}=\prod_{k=1}^{n}Y_{k}$ is an $\mathcal{F_{n}}$ -martingale that is not uniformly integrable.
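The coin-product example is easy to simulate (sample size, horizon, and seed are arbitrary choices); the sample mean stays near $1$ while almost every path has already hit $0$:

```python
import random

random.seed(0)
TRIALS, N = 200_000, 10

final_values = []
for _ in range(TRIALS):
    m = 1.0
    for _ in range(N):
        m *= random.choice((0.0, 2.0))   # xi_j is 0 or 2 with probability 1/2
        if m == 0.0:
            break                        # the product stays 0 forever
    final_values.append(m)

mean = sum(final_values) / TRIALS              # E[M_N] = 1 for every N
zero_frac = final_values.count(0.0) / TRIALS   # P(M_N = 0) = 1 - 2^(-N)
```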
|probability-theory|stochastic-calculus|martingales|
0
$I=r(I)$ iff $I$ is an intersection of prime ideals
I am trying to solve this problem. Let $I$ be a nontrivial ideal of a ring $A$ , and let $r(I)$ be its radical. Prove that $I=r(I)$ iff $I$ is an intersection of prime ideals. I got the right to left implication: If $I$ is an intersection of prime ideals, let $x\in I$ . Since $x^1\in I$ , we know $x\in r(I)$ , and thus, $I\subseteq r(I)$ . On the other hand, let $x\notin I$ such that $x^n\in I$ . Then, since $I$ is an intersection of prime ideals (let them be $P_1, ..., P_k$ ), $x^n\in P_j$ , for all $j$ , and since all $P_j$ are prime, either $x\in P_j$ , or $x^{n-1}\in P_j$ , for all $j$ . Repeating this reasoning, we end up with $x\in P_j$ , for all $j$ , and thus $x\in I$ , which is a contradiction. Therefore, if $x\in r(I)$ , then $x\in I$ , and $I=r(I)$ . On the left to right, I'm struggling a little. I'm trying to prove that $I$ is a prime ideal, and therefore I'll get $I$ as an intersection of a single prime ideal, but maybe that's not the way to go. Also, is my reasoning co
Slight correction first: $r(I)$ being an intersection of prime ideals does not mean that $r(I)=P_1\cap\dots\cap P_k$ for prime ideals $P_1,\dots,P_k$ , but rather $r(I)=\bigcap_{j\in J}P_j$ for prime ideals $P_j$ and some index set $J$ . However, your argument for the backwards direction remains intact. As already pointed out in the comments, the notions of radical and prime ideals are distinct. Moreover, the standard approach to this statement is more naturally formulated as the following fact. Fact. For every ideal $I$ , $r(I)=\bigcap\{P\text{ prime}\,|\,I\subseteq P\}$ . Clearly, $r(I)$ is contained in the given intersection. Indeed, if $x\in r(I)$ , then $x^n\in I$ for some $n>0$ . Hence, inductively, $x\in P$ for any prime ideal $P$ containing $I$ . This is essentially the argument you gave. The converse is a bit more tricky. I will follow the proof idea of Atiyah--MacDonald, as mentioned in the comments. We show the contrapositive, that is, if $x^n\notin I$ for any $n>0$ , then ther
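The fact is very concrete in $\mathbb{Z}$: for $I=(12)$ the primes containing $I$ are $(2)$ and $(3)$, so $r((12))=(2)\cap(3)=(6)$. A small computational illustration (the range and power bound are arbitrary; for $6\mid x$ already $x^2\in(12)$):

```python
def in_radical(x, m, max_power=8):
    """Is x in r((m)) in the ring Z, i.e. does some power of x land in (m)?"""
    return any(pow(x, k) % m == 0 for k in range(1, max_power + 1))

# Elements of r((12)) within a window should be exactly the multiples of 6.
radical_members = {x for x in range(120) if in_radical(x, 12)}
multiples_of_6 = set(range(0, 120, 6))
```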
|ring-theory|commutative-algebra|ideals|maximal-and-prime-ideals|
1
Rigidity of finitely generated projective modules
I am currently reading a lecture notes on module theory and more specifically on Morita Theory. Here is a portion of the lecture notes that I do not understand. Finitely generated projective modules are `rigid' in the following sense. Let $A$ be a ring and $P$ be a finitely generated projective left $A$ -module. Then its $A$ -dual $P^{\vee} := \textrm{Hom}_A(P,A)$ is a finitely generated projective right $A$ -module (by additivity) and for every left $A$ -module $M$ the natural homomorphism $P^{\vee} \otimes_A M \to \textrm{Hom}_A(P,M)$ given by $f \otimes m \mapsto (p \mapsto f(p)m)$ is an isomorphism. Also, we have a natural transformation $\overline{\omega}_P : P \to (P^{\vee})^{\vee}$ given by the usual formula $x \mapsto \textrm{ev}_x$ where $\textrm{ev}_x : P^{\vee} \to A$ with $f \mapsto f(x)$ is the evaluation map. As this is an isomorphism for $P = A$ and as both sides are additive, it follows that $\overline{\omega}_P$ is an isomorphism for any finitely generated projective $A$ -module $P$ . I
Let us start with the assertion that $P$ is finitely generated and projective if and only if $P$ is a direct summand of a finitely generated free module. Of course, if $P$ is a summand of any free module, it is projective. Conversely, assume that $P$ is finitely generated and projective. Then we find a surjection $F\to P$ for some finite free module $F$ . Completing this surjection to a short exact sequence $$ 0 \to K\to F\to P\to 0 $$ gives the assertion by the splitting lemma (equivalently, a module may be defined to be projective if and only if every sequence of the above form splits). Now note that any additive functor preserves split-exact sequences. Indeed, a short exact sequence is split-exact if and only if either the projection on the right admits a section or the embedding on the left admits a retraction. Both conditions are preserved by functoriality, hence so is the splitting. If now $$ 0 \to K\to F\to P\to 0 $$ is split-exact with $F$ free of finite rank, then $$ 0 \to P^\vee\to F
|modules|projective-module|morita-equivalence|
0
Volume of Same Region Yields Different Values
This question comes from a previous question on this site. There is the integral $$\int_{-1}^{1}\int_{0}^{\sqrt{1 - x^2}}\int_{0}^{\frac{y}{2}}f(x, y, z) \, dz \, dy \, dx.$$ Our job is to reorder the order of integration so that the differential is $dy \, dx \, dz$ . I thought of $$\int_{0}^{1/2}\int_{-1}^{1}\int_{2z}^{\sqrt{1 - x^2}} f(x, y, z) \, dy \, dx \, dz$$ . To check this, I thought to let $f(x, y, z) = 1$ . If the two integrals are equivalent, the volume computed by both of them should be the same (because we're integrating over the same region.) However, placing this into Wolfram Alpha yields different results. There, $$\int_{-1}^{1}\int_{0}^{\sqrt{1 - x^2}}\int_{0}^{y/2}f(x, y, z) \, dz \, dy \, dx = \frac{1}{2},$$ while $$\int_{0}^{1/2}\int_{-1}^{1}\int_{2z}^{\sqrt{1 - x^2}} f(x, y, z) \, dy \, dx \, dz = \frac{1}{4}(\pi - 2).$$ So clearly, there is something wrong with my new integral. However, going into Desmos, I plotted the region of integration of both integrals, and I
The first integral (integration order is $z$ , $y$ , then $x$ ) has value $1/3$ , not $1/2$ , and this is the correct volume. The second integral is problematic, because the range of $x$ must depend on $z$ . It should be $$\int_{z=0}^{1/2} \int_{x = -\sqrt{1-(2z)^2}}^{\sqrt{1-(2z)^2}} \int_{y=2z}^{\sqrt{1-x^2}} f(x,y,z) \, dy \, dx \, dz.$$ We can see this because when $z = 1/2$ , $x = 1$ is not allowed; in fact, the interval for $x$ would comprise the single point $x = 0$ . The reason why you get a different answer without this restriction is because the innermost integral $$\int_{y=2z}^{\sqrt{1-x^2}} f(x,y,z) \, dy$$ does not evaluate to $0$ when $2z > \sqrt{1-x^2}$ . It is still evaluated and ends up being negative (if $f = 1$ ), but the intent was that it should be $0$ .
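All three values can be confirmed with midpoint sums after performing the innermost integration analytically (the grid size is an arbitrary choice):

```python
import math

STEPS = 600

def midpoints(a, b):
    """Midpoints and cell width for a uniform partition of [a, b]."""
    h = (b - a) / STEPS
    return [(a + (i + 0.5) * h, h) for i in range(STEPS)]

# V1: innermost z-integral gives y/2; integrate over the half-disc (order z, y, x).
V1 = sum(
    (y / 2) * hy * hx
    for x, hx in midpoints(-1, 1)
    for y, hy in midpoints(0, math.sqrt(1 - x * x))
)

# V2: the flawed reordering, with x ranging over all of [-1, 1].
V2 = sum(
    (math.sqrt(1 - x * x) - 2 * z) * hx * hz
    for z, hz in midpoints(0, 0.5)
    for x, hx in midpoints(-1, 1)
)

# V3: the corrected reordering, restricting |x| <= sqrt(1 - (2z)^2).
V3 = sum(
    (math.sqrt(1 - x * x) - 2 * z) * hx * hz
    for z, hz in midpoints(0, 0.5)
    for s in [math.sqrt(1 - 4 * z * z)]
    for x, hx in midpoints(-s, s)
)
```

Numerically, V1 and V3 both land near $1/3$, while the flawed V2 lands near $(\pi-2)/4$.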
|multivariable-calculus|definite-integrals|volume|iterated-integrals|
1
On which natural numbers do we have injectivity of Euler's totient function?
I was wondering about the Euler's function and I would like to know which are all the natural numbers $m$ such that: $$\phi (m) \neq \phi (n) \forall n \in \mathbb N\setminus \{m\}$$ After putting some thought into it: $m$ cannot be odd, because $\phi(m) = \phi(2m)$ . $m$ cannot be twice an odd number for the previous reason. So no prime numbers. $m$ must be a multiple of $4$ , but which ones? We know that $$ m = p_1^{\alpha_1} \cdot ... \cdot p_n^{\alpha_n} \implies \phi(m) = p_1^{\alpha_1-1} \cdot ... \cdot p_n^{\alpha_n-1}(p_1-1)\cdot...\cdot(p_n-1) $$ EDIT: I think I got something, $m$ must be a composite number, let's say it is $(4p) \cdot q$ with $gcd(4p,q)=1$ , so we have that $$\phi (m) = \phi(4p) \cdot \phi (q)$$ but we know for a fact that there is some $q_2 \in \mathbb N$ such that $\phi (q_2) = \phi (q)$ if we could know for sure that $\gcd(q_2,4p)=1$ , then we would have that $\phi$ is never injective for any natural number, as $\phi (4p \cdot q)$ would be the same as $\ph
We don't expect there to be any such $m$ : this is Carmichael's totient function conjecture, a famous unsolved problem.
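Carmichael's conjecture has been verified far beyond any range we can casually search, and for small $m$ a totient partner shows up quickly. A sketch (the sieve limit and search range are arbitrary; since $n/\phi(n)$ is small, all preimages of $\phi(m)$ for $m\leq 2000$ fall inside the sieve):

```python
from collections import Counter

LIMIT = 50_000

# Sieve of Euler's totient: phi[n] = n * prod over primes p | n of (1 - 1/p).
phi = list(range(LIMIT))
for p in range(2, LIMIT):
    if phi[p] == p:                      # p is prime, still untouched
        for multiple in range(p, LIMIT, p):
            phi[multiple] -= phi[multiple] // p

counts = Counter(phi[n] for n in range(1, LIMIT))

# Every m <= 2000 shares its totient value with some other n in the sieve range.
assert all(counts[phi[m]] >= 2 for m in range(1, 2001))
```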
|elementary-number-theory|functions|
1
Given that $\frac{dy}{dx}$ at $x = 0$ is $-\frac{1}{2}$. Is this the same as $\lim_{x\to 0} \frac{f(x) - f(a)}{x - a}=-\frac{1}{2}. $
Given that $\frac{dy}{dx}$ at $x = 0$ is $-\frac{1}{2}$ . Is this the same as $$ \lim_{x\to 0} \frac{f(x) - f(a)}{x - a} =-\frac {1}{2}. $$
No. You need to go back to your textbook and check the definition of the derivative of $f(x)$ at $x = a$ : $$f'(a) = \lim_{x \rightarrow a} \frac{f(x) - f(a)}{x-a}$$ So, if you need $f'(0)$ , then you need to compute $$\lim_{x \rightarrow 0} \frac{f(x)-f(0)}{x-0}$$ rather than $$\lim_{x \rightarrow 0} \frac{f(x)-f(a)}{x-a}$$ for some arbitrary $a \neq 0$ .
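Numerically the two limits are generally different. A sketch with an assumed test function, $f(x)=e^{-x/2}$, which satisfies $f'(0)=-\tfrac12$:

```python
import math

f = lambda x: math.exp(-x / 2)   # hypothetical f with f'(0) = -1/2

h = 1e-6
correct = (f(h) - f(0)) / (h - 0)   # difference quotient anchored at a = 0
wrong = (f(h) - f(1)) / (h - 1)     # anchored at an unrelated point a = 1

# correct -> f'(0) = -1/2, while wrong -> (f(0) - f(1)) / (0 - 1) = e^(-1/2) - 1,
# which is just the slope of a chord, not the derivative at 0.
```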
|calculus|limits|derivatives|
0
Help showing this integral is positive.
I would like some help on showing that the following integral is positive: Fix $1\leq p and let $(a,b) \in (0.5,1)^2$ $$ \int_0^1 \int_0^1 a(p+2)|(a,b)|^p \Big(\sqrt{(x-a)^2+(y-b)^2}\Big)^{p} - p|(a,b)|^{p+2}(a-x)\Big(\sqrt{(x-a)^2+(y-b)^2}\Big)^{p-2}dydx $$ Context: I am trying to show that the quotient $$\frac{|(a,b)|^{p+2}}{\int_0^1\int_0^1 |(x,y) - (a,b)|^pdydx}$$ achieves its maximum at $(a,b) = (1,1)$ . So, I took its derivative with respect to $a$ , which resulted in the integral above (numerator only), and now I'm trying to show it is positive, i.e. the quotient is increasing as $a$ goes to 1. Something analogous for $b$ will allow me to conclude. I know that the maximum happens at $(1,1)$ for $(a,b) \in [0.5,1]^2$ by using a graphing calculator. See here
You can't prove that the derivative w.r.t. $a$ is positive, because it has zeroes and regions where it is negative. It's still possible that your quotient has its maximum in the desired corner $(a,b)=(1,1)$ . For an extremum somewhere inside the unit square we would need the derivatives w.r.t. $a$ and $b$ simultaneously zero, and I don't immediately see such points... But since the question was about the $d/da$ integral, we'll show here that it can become negative. This happens for $p$ larger than approximately $4.49$ . We first split it into two parts: $$ I = I_1+I_2, \quad \mbox{with:} $$ $$ I_1 = a\ (p+2)\ |(a,b)|^p \ \int_0^1 \int_0^1\Big(\sqrt{(x-a)^2+(y-b)^2}\Big)^{p} dy\ dx $$ $$ I_2 = -p\ |(a,b)|^{p+2} \int_0^1 \int_0^1(a-x)\Big(\sqrt{(x-a)^2+(y-b)^2}\Big)^{p-2}dy\ dx $$ For $I_2$ we can write the integrand as a derivative w.r.t. $x$ (because it obviously comes from a derivative w.r.t. $a$ ) which makes the $x$ integration trivial: $$ I_2 = -p\ |(a,b)|^{p+2}\ \int_0^1 \int_0^1\fr
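The trivial-$x$-integration step rests on $\partial_x\big(((x-a)^2+c^2)^{p/2}\big) = p\,(x-a)\,((x-a)^2+c^2)^{(p-2)/2}$, with $c=y-b$ held fixed. A quick quadrature check of the resulting closed form (sample values are arbitrary):

```python
def inner_integral(a, c, p, steps=20000):
    """Midpoint approximation of int_0^1 (a - x) ((x-a)^2 + c^2)^((p-2)/2) dx."""
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        total += (a - x) * ((x - a) ** 2 + c * c) ** ((p - 2) / 2) * h
    return total

def closed_form(a, c, p):
    """Antiderivative evaluated at the endpoints, divided by p."""
    return ((a * a + c * c) ** (p / 2) - ((1 - a) ** 2 + c * c) ** (p / 2)) / p

for a, c, p in [(0.7, 0.4, 3.0), (0.55, 0.9, 5.5)]:
    assert abs(inner_integral(a, c, p) - closed_form(a, c, p)) < 1e-6
```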
|multivariable-calculus|maxima-minima|
1
Inequality for certain stochastic process.
Let $b: \mathbb{R} \rightarrow \mathbb{R}$ be a Lipschitz-continuous function and let $X_t$ be a real valued stochastic process satisfying the stochastic differential equation $dX_t= b(X_t) dt+ dB_t$, $X_0=x$. Prove that for any $M> 0$, $t> 0$ and $x \in \mathbb{R}$ we have that $P(X_t \geq M)>0$ but in the case that $b(x)= \alpha$ for some $\alpha
As mentioned in Prop 3.10 in Karatzas–Shreve, for the martingale $$Z_{t}:=\exp\left(-\int_{0}^t b(X_{s})dW_s-\frac{1}{2}\int_{0}^t b^{2}(X_{s})ds \right)$$ and the measure $\frac{dQ}{dP}=Z_{T}$ , the process $$X_{t}=X_{0}+\int_{0}^t b(X_{s})ds+W_{t}$$ is a Brownian motion $\tilde{W}_{t}$ under $Q$ . So $$E\left(1_{X_{t}\geq M}Z_{T}\right)=Q(\tilde{W}_{t}\geq M)>0,$$ which implies $P[X_{t}\geq M]>0$ . For the particular case of constant negative drift, we have the exact solution $$X_{t}=x-|\alpha| t+B_{t},$$ and so here we use that $\lim_{t\to +\infty}\frac{B_{t}}{t}=0$ to in fact get $$\lim_{t\to +\infty}\frac{X_{t}}{t}=-|\alpha|.$$
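For the constant-drift case the law of $X_t$ is explicit, $X_t\sim\mathcal N(x-|\alpha|t,\,t)$, so both claims can be checked by simulation (parameters, sample sizes, and seed are arbitrary choices):

```python
import math, random

random.seed(0)
x0, alpha = 0.0, 1.0

# P(X_t >= M) > 0 even with strongly negative drift (here t = 1, M = 2).
t, M = 1.0, 2.0
hits = sum(x0 - alpha * t + random.gauss(0, math.sqrt(t)) >= M
           for _ in range(200_000))

# X_t / t -> -alpha as t grows (law of large numbers for B_t / t).
T = 10_000.0
ratios = [(x0 - alpha * T + random.gauss(0, math.sqrt(T))) / T
          for _ in range(1_000)]
mean_ratio = sum(ratios) / len(ratios)
```

With these parameters the rare event $X_1 \geq 2$ still occurs a few hundred times in $2\cdot10^5$ samples, while $X_T/T$ concentrates tightly around $-\alpha$.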
|stochastic-processes|
0
Is every member of A216594 the order of a capable group?
A capable group is a group that is isomorphic to $G/Z(G)$ for some group $G$ . So, is every member of the OEIS sequence A216594 (consisting of those $n$ for which there is a non-nilpotent group, but no centerless group, of order $n$ ) the order of a capable group? Every even number $2n$ (for $n>1$ ) is the order of a capable group. Indeed, the dihedral group of order $2n$ is isomorphic to the central quotient of the dihedral group of order $4n$ as pointed out at Non-cyclic numbers that are not the order of any capable group . So, only odd numbers need to be considered. The first odd member of A216594 is $63$ . Since $63$ is not a powerful number (it is divisible by the prime $7$ but not by $7^2=49$ ), there is no capable abelian group of order $63$ by the known characterization of finite capable abelian groups (isomorphic to a direct sum of cyclic groups, with the order of each one dividing the order of the next one and the last two being isomorphic, or equivalently, having the same or
To generalize Derek Holt's observation, I claim that any group of the form $G=H\times C_q$ with $q\gt 1$ and $\gcd(|H|,q)=1$ cannot be capable. I believe this follows from a theorem of Beyl, Felgner and Schmidt ("On groups occurring as center factor groups", J. Algebra, 61 (1979)) which gives sufficient conditions under which the capability of a direct product of finitely many groups would imply the capability of the direct factors, but below I give a direct argument. Note that $q$ need not be squarefree, so this is a more general result. Indeed, let $K$ be a group and $N\leq Z(K)$ be a central subgroup such that $K/N\cong G= H\times C_q$ . It is known that if a finite group is capable then it is the central quotient of a finite group, so we may take $K$ to be finite. We will show that $N\neq Z(K)$ . Let $M$ be the preimage of $C_q$ in $K$ ; then $M/N$ is cyclic, with $N$ central, so $M$ is abelian. Since the image is normal in $G$ , then $M$ is normal in $K$ . Moreover, if $y\in K$ ha
|group-theory|finite-groups|
0
Show that each composite function $f_i \circ f_j$ is one of the given functions
I'm just going through the problems that I got wrong on my discrete math exam, and I was not sure how to do this one. How would I go about making this chart? The chart has $f_1, \dots, f_5$ going across the top and down the side. I already got the question wrong, just want to know how to do it, in case it comes up again. Let $A = \{1,2,3\}$ and define $f_1,f_2,f_3,f_4,f_5 \colon A \to A$ as follows: \begin{align} f_1 &= \{(1,1),(2,3),(3,2)\} \\ f_2 &= \{(1,3),(2,2),(3,1)\} \\ f_3 &= \{(1,2),(2,1),(3,3)\} \\ f_4 &= \{(1,2),(2,3),(3,1)\} \\ f_5 &= \{(1,3),(2,1),(3,2)\} \end{align} Show that each composite function $f_i\circ f_j$ is one of the given functions, or the identity, by completing a table. Find the inverse of those six functions that have inverses.
In cycle notation: \begin{alignat}{1} &f_1=(23)\\ &f_2=(13)\\ &f_3=(12)\\ &f_4=(123)\\ &f_5=(132)\\ \end{alignat} By "completing a table etc." they mean to explicitly prove that: $$\forall i,j\in \{1,\dots,5\}, \exists k\in \{1,\dots,5\} \text{ such that } f_if_j=f_k, \text{ or otherwise } f_if_j=()$$ In fact (composition is right to left): \begin{alignat}{1} &f_1^2=()\\ &f_1f_2=(123)=f_4\\ &f_1f_3=(132)=f_5\\ &f_1f_4=(13)=f_2\\ &f_1f_5=(12)=f_3\\ \\ &f_2f_1=(132)=f_5\\ &f_2^2=()\\ &f_2f_3=(123)=f_4\\ &f_2f_4=(12)=f_3\\ &f_2f_5=(23)=f_1\\ \\ &f_3f_1=(123)=f_4\\ &f_3f_2=(132)=f_5\\ &f_3^2=()\\ &f_3f_4=(23)=f_1\\ &f_3f_5=(13)=f_2\\ \\ &f_4f_1=(12)=f_3\\ &f_4f_2=(23)=f_1\\ &f_4f_3=(13)=f_2\\ &f_4^2=(132)=f_5\\ &f_4f_5=()\\ \\ &f_5f_1=(13)=f_2\\ &f_5f_2=(12)=f_3\\ &f_5f_3=(23)=f_1\\ &f_5f_4=()\\ &f_5^2=(123)=f_4\\ \end{alignat} Therefore, every $f_i$ has an inverse; in particular, $f_{1,2,3}$ are self-inverse ("involutions"), whereas $f_4$ and $f_5$ are inverses of each other.
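The whole table can also be generated mechanically by representing each function as a dictionary and composing right to left, $(f\circ g)(x)=f(g(x))$ (a quick way to double-check every entry):

```python
fs = {
    "f1": {1: 1, 2: 3, 3: 2},
    "f2": {1: 3, 2: 2, 3: 1},
    "f3": {1: 2, 2: 1, 3: 3},
    "f4": {1: 2, 2: 3, 3: 1},
    "f5": {1: 3, 2: 1, 3: 2},
    "id": {1: 1, 2: 2, 3: 3},
}

def compose(f, g):
    """f after g: x -> f(g(x))."""
    return {x: f[g[x]] for x in (1, 2, 3)}

def name_of(h):
    """Identify a function among f1..f5 and the identity."""
    return next(k for k, v in fs.items() if v == h)

# Building the full 5x5 table; name_of failing would mean the set is not closed.
table = {(i, j): name_of(compose(fs[i], fs[j]))
         for i in list(fs)[:5] for j in list(fs)[:5]}
```

For instance `table[("f1", "f2")]` is `"f4"` and `table[("f4", "f5")]` is `"id"`, matching $f_1\circ f_2=f_4$ and $f_4\circ f_5=()$.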
|functions|discrete-mathematics|
0
Growth rate of orders of perfect groups
Is anything known about the function $P(n)$ where, $$P(n)=|\{ m\leq n :\text{There is a perfect group of order } m\}|.$$ Like asymptotics or a good upper bound?
I'm not sure what is known about this function, but you might be interested to know that it has (basically) been computed for small values of $n$ . GAP comes with a library of finite perfect groups, including the function SizesPerfectGroups which returns a list of all integers between $1$ and $2 \cdot 10^6$ which occur as the orders of perfect groups. That list is [ 1, 60, 120, 168, 336, 360, 504, 660, 720, 960, 1080, 1092, 1320, 1344, 1920, 2160, 2184, 2448, 2520, 2688, 3000, 3420, 3600, 3840, 4080, 4860, 4896, 5040, 5376, 5616, 5760, 6048, 6072, 6840, 7200, 7500, 7560, 7680, 7800, 7920, 9720, 9828, 10080, 10752, 11520, 12144, 12180, 14400, 14520, 14580, 14880, 15000, 15120, 15360, 15600, 16464, 17280, 19656, 20160, 21504, 21600, 23040, 24360, 25308, 25920, 28224, 29120, 29160, 29760, 30240, 30720, 32256, 32736, 34440, 34560, 37500, 39600, 39732, 40320, 43008, 43200, 43320, 43740, 46080, 48000, 50616, 51840, 51888, 56448, 57600, 57624, 58240, 58320, 58800, 60480, 61440, 62400, 64512,
|group-theory|finite-groups|asymptotics|
0
Definition of $L^2(S^{n-1})$
I'm puzzled with how we're supposed to define $L^2(S^{n-1})$, where $S^{n-1}$ is the unit sphere in $\mathbb{R}^n$. How do we even define the inner product there? The only way that comes to mind currently is $(f,g) = \int_{S^{n-1}}fg\,d\mathcal{H}^{n-1}$. But this is not well defined, since we need $f, g$ to be functions from $\mathbb{R}^n$ to $\mathbb{R}$ in order to use the Hausdorff measure. Any help or pointer to the right books would be appreciated.
There are many ways to define the desired measure on $S^{n-1}$ . One way to do it is to note that $SO(n)$ acts transitively on $S^{n-1}$ . Fix a point $x_0 \in S^{n-1}$ , and let $H$ be the stabilizer of $x_0$ . Note that $H \cong SO(n-1)$ , so it is a compact subgroup of $SO(n)$ . By the orbit-stabilizer theorem, we get a homeomorphism $SO(n)/H \to S^{n-1}$ . $SO(n)$ is a compact Hausdorff topological group, so it admits a canonical Haar probability measure. We push this measure forward to the quotient to obtain a probability measure on $S^{n-1}$ . The fact that this measure is nonzero and invariant under the action of $SO(n)$ tells you that it is the one you want (perhaps up to scaling). Another way is to let $\nu$ be the vector field $\nu(x) = x$ on $\mathbb{R}^n$ (where we identify the tangent spaces of $\mathbb{R}^n$ with $\mathbb{R}^n$ in the standard way). Then let $\omega$ be the standard volume form on $\mathbb{R}^n$ , i.e. $\omega = dx_1 \wedge \dots \wedge dx_n$ . Let $\iota
|functional-analysis|measure-theory|geometric-measure-theory|hausdorff-measure|
0
Why is strongness of a cardinal a different notion than Reinhardtness?
A cardinal $\kappa$ is Reinhardt if there is a nontrivial elementary embedding $j:V\to V$ such that $\kappa$ is the critical point of $j$ . Kunen proved (using choice) that this large cardinal axiom is inconsistent. For ordinal $\lambda$ , a cardinal $\kappa$ is called $\lambda$ -strong if there is a nontrivial elementary embedding $j:V\to M$ for some transitive class $M$ where $V_\lambda\subseteq M$ , and such that $\kappa$ is the critical point of $j$ . A cardinal $\kappa$ is called strong if it is $\lambda$ -strong for all ordinals $\lambda$ . As far as I know, strong cardinals continue to be studied with the axiom of choice. But it appears that all strong cardinals are Reinhardt. Assume $\kappa$ is a strong cardinal. Then there is a nontrivial elementary embedding from $V$ to a transitive proper class $M$ such that for all $\lambda$ , $V_\lambda\subseteq M$ (and the critical point of $j$ is $\kappa$ ). This means that the class $M$ is necessarily $V$ . In this case, it looks like a
The problem is that there is a quantifier exchange fallacy in deducing that if $\kappa$ is strong, there is a nontrivial elementary embedding from $V$ to a single transitive class $M$ such that $V_\lambda\subseteq M$ for all ordinals $\lambda$ . The positions of the "exists $M$ " and "for all $\lambda$ " quantifications are switched: Strongness of $\kappa$ states that "for all $\lambda$ , there is a transitive class $M$ such that there is an elementary embedding $j:V\to M$ where $V_\lambda\subseteq M$ and $\mathrm{crit}(j)=\kappa$ ". Reinhardtness of $\kappa$ is equivalent to "there is a transitive class $M$ such that for all $\lambda$ , there is an elementary embedding $j:V\to M$ where $V_\lambda\subseteq M$ and $\mathrm{crit}(j)=\kappa$ ". In particular, strongness of $\kappa$ gets you a transitive class $M_\lambda$ and a nontrivial elementary embedding $j_\lambda:V\to M_\lambda$ for each ordinal $\lambda$ , where $V_\lambda\subseteq M_\lambda$ and $\mathrm{crit}(j_\lambda)=\kappa$ .
|set-theory|large-cardinals|
1
Why it is not possible for both primal and dual LP to be unbounded?
I already read this post and its answers and I am still not satisfied. I want to know how to use weak duality to explain why it is not possible for both the primal and the dual LP to be unbounded. Here is one way I can explain it: suppose both the primal and dual LP are unbounded. Weak duality implies the dual LP is infeasible. So the dual LP is both unbounded and infeasible, which is impossible (right?), so we have a contradiction. Are there any examples of LP problems that are both unbounded and infeasible?
By definition, an unbounded linear programming problem is one that has feasible solutions with arbitrarily large (positive or negative, depending on whether it's a "maximize" or "minimize" problem) values of the objective function. In particular, these are problems that have feasible solutions, so they are not infeasible.
|optimization|linear-programming|
1
Decomposition of the tangent bundle of a tensor product of vector bundles
If $E \to B$ is a smooth vector bundle, then the tangent bundle $T E$ is a vector bundle over both $E$ and $T B$ . (If you like, the latter structure is the derivative of the vector bundle structure on $E$ .) If $F \to B$ is another smooth vector bundle and $\pi : T B \to B$ is the obvious projection, I have seen it written (e.g. here ) that there is a canonical isomorphism $T(E \otimes F) \cong \pi^* E \otimes T F$ as vector bundles over $T B$ . Supposing that we can do this, we can in particular take $F = B \times \mathbb{R}$ (the trivial rank 1 bundle over $B$ ) and obtain a vector bundle isomorphism $$ \pi^* E \otimes_{TB} T(B \times \mathbb{R}) \to T E. $$ But since $T \mathbb{R} \cong \mathbb{R}^2$ canonically, there is a chain of canonical isomorphisms of vector bundles over $T B$ $$ \pi^* E \otimes_{TB} T(B \times \mathbb{R}) \cong \pi^* (E \otimes T \mathbb{R}) \cong \pi^* (E \oplus E). $$ (To avoid doubt, $E \otimes T \mathbb{R}$ just means the fiberwise tensor product of the
I have explained the situation when there are connections on both $E$ and $F$ on MathOverflow over here . As I explain there, there is a surjective map $$ \tau : TE \otimes_{TB} TF \to T(E \otimes F) $$ which has a kernel (since the fiber dimension of the source is twice that of the target). As I also explain there, a connection on $E$ induces a direct sum decomposition of $TB$ -bundles $TE \cong ZE \oplus_{TB} HE$ (with $ZE$ and $HE$ of the same dimension). In our case, this means that $\tau$ splits as a sum of maps $$ \tau_Z : ZE \otimes_{TB} TF \to T(E \otimes F)\\ \tau_H : HE \otimes_{TB} TF \to T(E \otimes F). $$ The images $\tau_Z(ZE \otimes_{TB} TF)$ and $\tau_H(HE \otimes_{TB} ZF)$ are both equal to $Z(E \otimes F)$, so the image of $\tau_Z$ is contained in the image of $\tau_H$ . Since $\tau$ is surjective, by counting dimensions the only way this is possible is if $\tau_H$ is an isomorphism. Of course, a connection on $E$ gives a corresponding horizontal lift isomorphism $\op
|differential-geometry|tensor-products|vector-bundles|tangent-bundle|
1
Different Approaches for Proving Kantorovich Inequality
Here is a statement of the famous Kantorovich inequality. Theorem (Kantorovich). Let $A$ be an $n\times n$ symmetric positive definite matrix, and assume that its eigenvalues satisfy $0 < \lambda_1 \le \cdots \le \lambda_n$ . Then, the following inequality holds for all $\mathbf{x}\in\mathbb{R}^n$ \begin{equation} \frac{(\mathbf{x}^{\top}A\mathbf{x})(\mathbf{x}^{\top}A^{-1}\mathbf{x})}{(\mathbf{x}^{\top}\mathbf{x})^2} \leq \frac{1}{4}\frac{(\lambda_1+\lambda_n)^2}{\lambda_1\lambda_n} = \frac{1}{4}\Bigg(\sqrt{\frac{\lambda_1}{\lambda_n}}+\sqrt{\frac{\lambda_n}{\lambda_1}}\Bigg)^2. \end{equation} There are a variety of proofs of this inequality. My aim in asking this question is threefold: first, to gather a list of all nice proofs of this inequality; second, to see whether a proof with constrained optimization techniques is possible; and third, to learn how Kantorovich thought about the problem. Here are the main questions. Questions. What are the different approaches (excluding those mentioned below) for proving the Kantorovich inequality?
Let me present an alternative algebraic approach to the inequality. Theorem ( Kantorovich ). Consider a set of positive numbers satisfying $\sum_{i=1}^n a_i = 1$ and $\lambda_n \geq \cdots \geq \lambda_1 > 0$ . Then, $$\sum_{i=1}^n \lambda_i a_i \cdot \sum_{i=1}^n \frac{a_i}{\lambda_i} \leq \frac{(\lambda_1 + \lambda_n)^2}{4 \lambda_1 \lambda_n}.$$ Let's dive into the proof. Starting with the assumption $\lambda_n \geq \cdots \geq \lambda_1 > 0$ , we have: \begin{align} \lambda_i \geq \lambda_1 > 0 & \implies \sqrt{\lambda_i} \cdot \sqrt{\lambda_i} \geq \sqrt{\lambda_1} \cdot \sqrt{\lambda_1} \implies \sqrt{\frac{\lambda_i}{\lambda_1}} - \sqrt{\frac{\lambda_1}{\lambda_i}} \geq 0. \end{align} By the same reasoning, we can establish $\sqrt{\frac{\lambda_n}{\lambda_i}} - \sqrt{\frac{\lambda_i}{\lambda_n}} \geq 0$ for every $i = 1, \dots, n$ . Hence, we can conclude \begin{align} 0 &\leq \sum_{i=1}^n \left(\sqrt{\frac{\lambda_i}{\lambda_1}} - \sqrt{\frac{\lambda_1}{\lambda_i}}\right) \left
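As a quick numerical complement, the scalar form of the theorem can be spot-checked on random data. This is a hypothetical sketch, not part of the proof; the sampling ranges are chosen only for illustration:

```python
import random

# Spot-check: for positive weights a_i summing to 1 and positive lambda_i,
#   (sum a_i * l_i) * (sum a_i / l_i) <= (l_min + l_max)^2 / (4 * l_min * l_max).
random.seed(0)
for _ in range(1000):
    n = random.randint(2, 8)
    w = [random.random() + 1e-9 for _ in range(n)]
    a = [x / sum(w) for x in w]                      # normalized weights
    lam = [random.uniform(0.1, 10.0) for _ in range(n)]
    lhs = sum(ai * li for ai, li in zip(a, lam)) * sum(ai / li for ai, li in zip(a, lam))
    lo, hi = min(lam), max(lam)
    rhs = (lo + hi) ** 2 / (4 * lo * hi)
    assert lhs <= rhs + 1e-9
print("bound held in all trials")
```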
|linear-algebra|multivariable-calculus|inequality|optimization|constraints|
0
Question on Tangent
Suppose that $C$ is any circle concentric with the ellipse $E: \frac{x^2}{16}+ \frac{y^2}{4} =1$. Let $A$ be a point on $E$ and $B$ a point on $C$ such that the line $AB$ is tangent to both $E$ and $C$. What is the maximum length of $AB$?
Let's solve this using the slope form of tangents. The equation of a tangent to the ellipse is $$y=mx\pm {\sqrt{ 16m^2+4}}.$$ This line is also tangent to the circle of radius $r$, so the perpendicular distance from the origin to the tangent equals $r$: $$r=\frac{\sqrt {16m^2+4}}{\sqrt{1+m^2}},\qquad r^2=\frac{16m^2+4}{1+m^2}.$$ The point of tangency with the ellipse is given by: $$(x', y')=\left(\frac{\mp16m}{\sqrt{16m^2+4}},\frac{\pm4}{\sqrt{16m^2+4}}\right).$$ Now the length of the tangent segment between the circle and the ellipse is given by $$L=\sqrt{x'^2+y'^2-r^2}.$$ Substituting the values of $x'$, $y'$ and $r$ gives an expression in the slope $m$ only. For the maximum length, set $\frac{dL}{dm}= 0$: $$\frac{d}{dm}\sqrt{\frac {256m^2}{16m^2+4}+\frac{16}{16m^2+4}-\frac{16m^2+4}{1+m^2}}=0.$$ Solving this condition, we get $$m=\pm \frac { 1}{\sqrt2},$$ and substituting back into the length expression gives the maximum $AB = 2$.
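The optimisation can be cross-checked numerically. The sketch below (hypothetical code, not part of the original solution) sweeps the slope $m$ over a grid and maximises $L(m)$ using the formulas from the answer:

```python
import math

def L(m):
    c2 = 16 * m * m + 4            # c^2, where the tangent is y = m*x + c
    r2 = c2 / (1 + m * m)          # squared radius of the concentric circle
    p2 = (256 * m * m + 16) / c2   # x'^2 + y'^2 at the point of tangency
    return math.sqrt(max(p2 - r2, 0.0))

# Sweep positive slopes; by symmetry negative slopes give the same lengths.
best = max((L(i / 10000), i / 10000) for i in range(1, 100000))
print(best)  # maximum length close to 2, attained near m = 1/sqrt(2) = 0.7071...
```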
|circles|conic-sections|tangent-line|
0
How to show the following inequality: $|x^p - y^p| \leq |x - y| \cdot p \cdot (|x| + |x - y|)^{p - 1}$ ($1 \leq p < \infty$)
I would like to show the following inequality for all $1 \leq p < \infty$ and $x, y \geq 0$ : $|x^p - y^p| \leq |x - y| \cdot p \cdot (|x| + |x - y|)^{p - 1}$ For $p = n \in \mathbb{N}$ , this is easy to show, since we then have: $|x^n - y^n| = \left| (x - y) \cdot \sum_{k = 0}^{n - 1} x^k y^{(n - 1) - k} \right| \leq |x - y| \cdot \sum_{k = 0}^{n - 1} |x|^k |y|^{(n - 1) - k} \leq |x - y| \cdot \sum_{k = 0}^{n - 1} (|x| + |x - y|)^k (|x| + |x - y|)^{(n - 1) - k} = |x - y| \cdot \sum_{k = 0}^{n - 1} (|x| + |x - y|)^{n - 1} = |x - y| \cdot n \cdot (|x| + |x - y|)^{n - 1}$ However, I don't know how to show this inequality for general $1 \leq p < \infty$.
By the Mean Value Theorem, $|x^{p}-y^{p}|=|x-y|\,(pt^{p-1})$ for some $t$ between $x$ and $y$ . Suppose $x \le y$ . Then $x \le t\le y$ and $t^{p-1}\le y^{p-1} \le (|x|+|x-y|)^{p-1}$ , since $y = x + (y-x) \le |x|+|x-y|$ . The case $y \le x$ is simpler.
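A hypothetical numeric spot-check of the claimed inequality (sampling ranges chosen only for illustration):

```python
import random

# Check |x^p - y^p| <= |x - y| * p * (|x| + |x - y|)**(p - 1)
# on random x, y >= 0 and exponents 1 <= p <= 6.
random.seed(0)
for _ in range(10000):
    p = random.uniform(1.0, 6.0)
    x = random.uniform(0.0, 10.0)
    y = random.uniform(0.0, 10.0)
    lhs = abs(x**p - y**p)
    rhs = abs(x - y) * p * (abs(x) + abs(x - y)) ** (p - 1)
    assert lhs <= rhs * (1 + 1e-9)
print("no counterexample found")
```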
|functional-analysis|inequality|
1
How is completeness proved?
How is, for example, Gödel's completeness theorem proved, if it requires a formula to be true in every model, given that we may not be able to exhaust all possible models?
We only need to show that a theory $T$ is consistent iff it has a model: if so, then given a formula $\phi$ that is undecidable in $T$ , the theories $T\cup\{\phi\}$ and $T\cup\{\neg\phi\}$ are both consistent, hence both have models. Therefore we don't have to exhaust all models, but only (find a way to) build a single model from $T$ . Henkin's approach can be found in most modern logic books.
|first-order-logic|model-theory|
0
How can a compact set in $\mathbb{R}^n$ be closed (Heine-Borel), when by definition a compact set has to be finitely covered by open sets?
A compact set $A$ is defined as follows: given any cover of $A$ by open sets, there exists a finite subcover with elements from the original cover. And so $A$ is a finite union of open sets, and so $A$ should be open (by the definition of a topological space, a union of open sets is open). By Heine-Borel, in $\mathbb{R}^n$ a compact set is closed (and bounded). The question is therefore: is $A$ a closed or an open set?
The key word here is “covered”. A set $K$ is compact if, when $K$ is contained in a union of open sets, then $K$ is contained in the union of finitely many of those sets. It is not being claimed that $K$ is the union of those sets.
|general-topology|compactness|covering-spaces|
0
Estimate clan size
The people in a country are partitioned into clans. In order to estimate the average size of a clan, a survey is conducted in which $1000$ randomly selected people are asked to state the size of the clan to which they belong. How does one compute an estimate of the average clan size from the collected data? Source: puzzledquant.com My approach: I am thinking of using $E[X] = E[E[X|N]]$ where $X$ is the size and $N$ is the clan I am currently in, but I am unsure how to proceed from here. Help.
Answer $$\frac{n}{\sum_{j=1}^n \frac{1}{k_j}},$$ where $n$ is the sample size and $k_j$ is the survey answer obtained at the $j$-th observation. How to figure this out: the true value of the average clan size is $$\frac{\sum_{i=1}^M k_i}{M}=\frac{N}{M},$$ where $M$ is the unknown number of clans, $N$ is the unknown total number of people in the country, and $k_i$ is the unknown size of the distinct clan $i$. If we just summed the survey answers over the whole population, we would get $$\sum_{j=1}^N k_j=\sum_{i=1}^M k_i\cdot k_i = \sum_{i=1}^M k_i^2,$$ as every clan of size $k_i$ is reported precisely $k_i$ times in the case of the whole population. Instead of summing with the identity $f(k_i) = k_i$, we check whether summing with $f(k_i) = 1/k_i$ provides something more like $\frac{N}{M}$: $$\sum_{j=1}^N f(k_j)=\sum_{j=1}^N \frac{1}{k_j}=\sum_{i=1}^M k_i\cdot \frac{1}{k_i}=\sum_{i=1}^M 1 =M.$$ This is how we can estimate the denominator of the desired fraction: $\frac{1}{n}\sum_{j=1}^n \frac{1}{k_j}$ estimates $\frac{M}{N}$, so its reciprocal estimates $\frac{N}{M}$. The numerator is estimated using the sample size as usual.
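A hypothetical simulation (population size and clan-size range invented for illustration) shows the harmonic-mean estimator at work:

```python
import random

random.seed(1)
clan_sizes = [random.randint(1, 50) for _ in range(500)]   # 500 clans
true_avg = sum(clan_sizes) / len(clan_sizes)

# Each person reports the size of his or her own clan, so a clan of size k
# appears k times in the population list (size-biased sampling).
population = [k for k in clan_sizes for _ in range(k)]

n = 1000
answers = [random.choice(population) for _ in range(n)]
estimate = n / sum(1 / k for k in answers)
print(true_avg, estimate)  # the estimate should land near the true average
```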
|expected-value|puzzle|
0
Solution or property to a functional equation
I have a continuous and nondecreasing function $F:\mathbb{R}\to\mathbb{R}$ satisfying the following conditions: $F(x) = 0$ for any $x\in (-\infty, 1]$ , $\lim\limits_{x\to +\infty} F(x) = 1$ , for any $x\in \mathbb{R}$ we have $$F(x) \;=\; 0.8 F(2x - 1) + 0.2 F(x/2 - 1).$$ Is there any solution to this "functional equation" system? Is there any further property of $F$ , like a lower bound or its growth rate? This problem actually comes from probability theory. Consider a sequence of i.i.d. Bernoulli variables $Y_1, Y_2, \dots$ with $$\mathbb{P}(Y_i = 1) = 1 - \mathbb{P}(Y_i = 0) = 0.8.$$ Then, using $Y_1, Y_2, \dots$ , we define a random series $S$ as follows: $$S = \sum_{k=1}^\infty 2^{k - \sum_{i=1}^k Y_i} (1/2)^{\sum_{i=1}^k Y_i}.$$ I know that $S$ converges almost surely. Now I want to find the cumulative distribution function (CDF) of $S$ . It turns out that the CDF satisfies the properties I gave above. Any comment helps. I drew a picture of the empirical distribution as follows:
This answer is partial. Namely, we show that $F(7)\le \frac 1{1.12}=0.8928\dots$ and that for each natural $n$ we have $$F(1+6\cdot 2^{-n})=0.8^nF(7)>0$$ and $$1-F(1+6\cdot 2^n)\le (1-F(7))\,4^{-n}.$$ Substituting $x=\frac{y+1}2$ in Condition 3, we obtain the equality $$F(y)=1.25 F\left(\frac{y+1}2\right)-0.25F\left(\frac{y-3}4\right)$$ for each real $y$ . Put $\bar y=\sup\{y\in\mathbb R:F(y)=0\}$ . Then $\bar y\ge 1$ . Put $y_0=4\bar y+3$ . Then $y_0\ge 7$ . If $y\le y_0$ then $\frac{y-3}4\le \bar y$ , so $F\left(\frac{y-3}4\right)=0$ and $F(y)=1.25F\left(\frac{y+1}2\right)$ . For each natural $n$ put $y_n=\frac{y_{n-1}+1}2$ . Then $(y_n)_{n\in\omega}$ is a decreasing sequence contained in the segment $[1,y_0]$ and $F(y_n)=0.8F(y_{n-1})$ for each natural $n$ . Let $y_\infty$ be the limit of the sequence. Then $y_\infty=\frac{y_\infty+1}2$ , so $y_\infty=1$ . Suppose for a contradiction that $\bar y>1$ . Then there exists a natural $N$ such that $y_N<\bar y$ . So $F(y_N)=0$ . Since $F(y_n)=0.8F(y_{n-1}
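For the probabilistic formulation in the question, a Monte Carlo sketch (hypothetical code; sample count and truncation level chosen only for illustration) can be used to probe the empirical distribution of $S$:

```python
import random

# S = sum_{k>=1} 2^(k - 2*S_k), with S_k = Y_1 + ... + Y_k, Y_i ~ Bernoulli(0.8).
# The series is truncated at K terms; with high probability the tail is
# geometrically small, since each Y_i = 1 halves the next term's scale.
random.seed(0)

def sample_S(K=200):
    s, partial = 0, 0.0
    for k in range(1, K + 1):
        s += random.random() < 0.8     # Y_k
        partial += 2.0 ** (k - 2 * s)
    return partial

samples = [sample_S() for _ in range(20000)]
frac_below_7 = sum(x <= 7 for x in samples) / len(samples)
print(min(samples), frac_below_7)  # the minimum exceeds 1, consistent with F = 0 on (-inf, 1]
```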
|functional-equations|
0
Is $(\mathbb{Q},<)$ elementary equivalent to $(\mathbb{R},<)$?
I'm looking at those models over the language $\{<\}$ . For the case of $L=\{+\}$ there's already an answer here .
Yes. The theory of dense linear orders without endpoints is complete, and both $(\mathbb{Q},<)$ and $(\mathbb{R},<)$ are models of this theory.
|model-theory|
0
Calculate an integral: $I = \int \int_{V} y^{-2} d \lambda_2(x,y)$ where $V = \{ x^2 + y^2 > 1, |x| < \frac{1}{2}, y > 0 \}$
Calculate an integral: $$I = \int \int_{V} y^{-2} d \lambda_2(x,y)$$ where $V = \{ x^2 + y^2 > 1,\ |x| < \tfrac{1}{2},\ y > 0 \}$. Using help provided by @geetha290krm I was able to write the solution. From the drawing I see that it would be appropriate to integrate $\frac{1}{y^2}$ over $y \in (\sqrt{1 - x^2}, \infty)$ and then over $x \in (0, \frac{1}{2})$ . The function that we will integrate is continuous on the area of interest, so we can use Fubini's theorem. Therefore we have that: $$\int_{0}^{\frac{1}{2}} \int_{\sqrt{1-x^2}}^{\infty} \frac{1}{y^{2}}\, dy \ dx = \int_{0}^{\frac{1}{2}} \left[ - \frac{1}{y} \right]^{\infty}_{\sqrt{1-x^2}} dx = \int_{0}^{\frac{1}{2}} \frac{1}{\sqrt{1-x^2}}\, dx = \left[ \arcsin(x) \right]^{\frac{1}{2}}_0 = \arcsin \left( \frac{1}{2} \right) = \frac{\pi}{6}$$ Is that correct?
The integral over $A$ is $\int_0^{1/2} \int_{\sqrt {1-x^{2}}}^{\infty} \frac 1{y^{2}}\,dy\,dx=\int_0^{1/2}\frac 1 {\sqrt {1-x^{2}}}\,dx$ . Use the fact that $\arcsin x$ is an anti-derivative of $\frac 1 {\sqrt {1-x^{2}}}$ . Remark: The limits of integration are constants only for rectangular regions.
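A numeric sanity check of the final value (a hypothetical sketch, not from the original answer): do the inner $y$-integral in closed form per $x$, approximate the outer integral with a midpoint rule, and compare with $\pi/6$:

```python
import math

N = 200000
h = 0.5 / N
total = 0.0
for i in range(N):
    x = (i + 0.5) * h                    # midpoint of the i-th subinterval
    total += h / math.sqrt(1 - x * x)    # inner integral equals 1/sqrt(1-x^2)
print(total, math.pi / 6)  # both approximately 0.5235987
```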
|real-analysis|integration|fubini-tonelli-theorems|
1
Necessary assumptions for direct method of calculus of variations
In "Calculus of Variations" by F. Rindler I learned about the following result (direct method of calculus of variations): If $X$ is a topological space, $f: X \to \mathbb{R}$ lower semicontinuous and coercive, then $f$ attains a minimum in $X$ . (1) Here, $f$ is said to be coercive if the set of all $u \in X$ satisfying $ f(u)\leq \Lambda $ is relatively sequentially precompact $\forall \Lambda \in \mathbb{R}$ (i.e. every sequence in this set admits a convergent subsequence). The proof is analogous to that of Weierstraß' theorem. Later in the book an analogous theorem is formulated: If $X$ is a reflexive Banach space, $f: X \to \mathbb{R}$ weakly lower semicontinuous and weakly coercive, then $f$ attains a minimum in $X$ . (2) (Weakly coercive is defined as above with weakly convergent subsequences.) Now I don't see why we require $X$ to be a reflexive Banach space. I know this is useful for later applications, but do we really need this for the existence of a minimum? In the first
You are right: when it is formulated in that way, it suffices for $X$ to be a normed space. The reason one may want to assume that $X$ is a reflexive Banach space is that weak coercivity then follows from the property \begin{equation}\lim_{||x||\to\infty} f(x)=\infty,\end{equation} which is much easier to verify in practice. Namely, the set $$\{u\in X \, |\, f(u)\leq \Lambda\}$$ is then bounded, and hence relatively weakly sequentially compact, since bounded sets in a reflexive Banach space are weakly sequentially precompact (Banach-Alaoglu combined with the Eberlein-Šmulian theorem).
|real-analysis|calculus|functional-analysis|calculus-of-variations|reflexive-space|
1
Growth rate of orders of perfect groups
Is anything known about the function $P(n)$ where, $$P(n)=|\{ m\leq n :\text{There is a perfect group of order } m\}|.$$ Like asymptotics or a good upper bound?
This doesn't answer the question that you asked, but if we let ${\rm Perf}(\le n)$ be the number of isomorphism classes of perfect groups of order up to $n$ , then we have the bounds $$n^{l(n)^2/108-cl(n)} \le {\rm Perf}(\le n) \le n^{l(n)^2/48 + l(n)},$$ where $c$ is a constant and $l(n)= \log_2(n)$ . This is proved in this paper : "Enumerating Perfect Groups" by D. F. Holt, J. London Math. Soc. 39 (1989), 67-78. Since there are very large numbers of perfect groups of certain orders, such as $2^n\cdot 60$ , this doesn't help much with estimating your function $P(n)$ . Note that the crude estimate $n^{O((\log n)^2)}$ is the same as for the growth rate of all finite groups. But the constant in the exponent is $2/27$ for all groups, and something between $1/108$ and $1/48$ for perfect groups.
|group-theory|finite-groups|asymptotics|
0
find value of a expression $\frac{(y_1-3)(y_2-3)}{x_1x_2}$ under some conditions
If $x_1,y_1,x_2,y_2$ satisfy the equations below: $$x_1^2+y_1^2=9\quad (1)$$ $$x_2^2+y_2^2=9\quad (2)$$ $$(x_1+x_2)^2+(y_1+y_2-2)^2=4 \quad (3)$$ then find the value of the expression: $$\frac{(y_1-3)(y_2-3)}{x_1x_2}$$ The above equations have $4$ variables, so if we treat one variable as a constant, for example $x_1$ , we could solve for the other variables as functions of $x_1$ : $y_1=f(x_1)$ , $x_2=g(x_1)$ , $y_2=h(x_1)$ , even with the help of Mathematica. But I can't carry this out, because the expressions are very large and involve square roots. Because $$\frac{(y_1-3)(y_2-3)}{x_1x_2}=\frac{y_1y_2-3(y_1+y_2)+9}{x_1x_2} \quad (4)$$ maybe finding the relationship between $x_1x_2$ , $y_1y_2$ and $y_1+y_2$ is the right way. If we add the above equations (1), (2), (3), we get $$9+x_1x_2+y_1y_2=2(y_1+y_2),$$ which simplifies expression (4) a little.
(1)+(2): $x_1^2+x_2^2+y_1^2+y_2^2=18$ Expanding (3) and substituting this, we get: $$18+2x_1x_2+2y_1y_2-4(y_1+y_2)+4=4,$$ that is, $$x_1x_2+y_1y_2=2(y_1+y_2)-9.$$ Therefore $$(y_1-3)(y_2-3)=y_1y_2-3(y_1+y_2)+9=\left[2(y_1+y_2)-9-x_1x_2\right]-3(y_1+y_2)+9=-x_1x_2-(y_1+y_2).$$ Dividing both sides by $x_1x_2$ we get: $$A=\frac{(y_1-3)(y_2-3)}{x_1x_2}=-1-\frac{y_1+y_2}{x_1x_2}.$$ So $A$ depends on the particular solution $(x_1,y_1,x_2,y_2)$ and cannot have a fixed value. For example, $(x_1,y_1)=(3,0)$ , $(x_2,y_2)=(-3,0)$ satisfy (1)-(3) and give $A=-1$ , while $(x_1,y_1)=(3,0)$ , $(x_2,y_2)=\left(-\frac{15}{13},\frac{36}{13}\right)$ give $A=-\frac{1}{5}$ .
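A numeric check (hypothetical code, not part of the original answer) confirms that the expression takes different values for different admissible solutions, so it is not constant:

```python
import math

def pairs(x1, y1):
    """Given (x1, y1) on the circle x^2 + y^2 = 9 with x1 != 0, solve
    x2^2 + y2^2 = 9 and (x1+x2)^2 + (y1+y2-2)^2 = 4 for (x2, y2)."""
    # Expanding the third constraint and using both circle equations gives
    # the line x1*x2 + (y1 - 2)*y2 = 2*y1 - 9.
    a, b, c = x1, y1 - 2, 2 * y1 - 9
    # Substitute x2 = (c - b*y2)/a into the circle equation -> quadratic in y2.
    A, B, C = a * a + b * b, -2 * b * c, c * c - 9 * a * a
    disc = math.sqrt(B * B - 4 * A * C)
    out = []
    for s in (1, -1):
        y2 = (-B + s * disc) / (2 * A)
        out.append(((c - b * y2) / a, y2))
    return out

def expr(x1, y1, x2, y2):
    return (y1 - 3) * (y2 - 3) / (x1 * x2)

vals = [expr(3, 0, x2, y2) for x2, y2 in pairs(3, 0)]
print(vals)  # two admissible points B give two different values
```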
|algebra-precalculus|analytic-geometry|
0
Finding P(X+Y>1) with the joint pdf
I know this seems to be a very basic question, but I have been struggling with this. Two random variables have joint density $f(x, y) = K(3x + 4y)$ for $0 < x < 4$ , $0 < y < 3$ (and $0$ otherwise), and I want to find $P(X+Y>1)$ , which I approached as $1-P(X+Y\le 1)$ . Your help is greatly appreciated.
If two random variables have joint density $f(x, y) = K(3x + 4y)\, \mathbf 1_{\{0 < x < 4,\; 0 < y < 3\}}$ , then $K = \frac{1}{144}$ , so that the density integrates to $1$ , and you want to find $P(X+Y>1)$ . You wish to integrate over $0 < y < 1$ and $0 < x < 1-y$ : if the sum of two positive reals is less than $1$ , then neither may be greater than $1$ , so the region $\{x+y \le 1\}$ lies inside the support. $$\mathsf P(X+Y>1)=1-\frac{1}{144}\int_0^1\int_0^{1-y} (3x+4y)\,\mathrm dx\,\mathrm d y$$
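A small exact computation (hypothetical code; it assumes the support $0<x<4$, $0<y<3$ that makes $K=1/144$) carries out the double integral:

```python
from fractions import Fraction as F

# Inner integral of (3x + 4y) over 0 < x < 1-y is 3(1-y)^2/2 + 4y(1-y),
# i.e. the polynomial 3/2 + y - (5/2)y^2; integrate it exactly over (0, 1).
inner = [F(3, 2), F(1), F(-5, 2)]                         # coefficients of 1, y, y^2
integral = sum(c / (k + 1) for k, c in enumerate(inner))  # equals 7/6
prob = 1 - integral / 144
print(prob)  # 857/864, about 0.9919
```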
|probability|probability-distributions|
1