| title (string) | question_body (string) | answer_body (string) | tags (string) | accepted (int64) |
|---|---|---|---|---|
If $p(x) = a_3 + a_2x + a_1x^2 + a_0x^3$ is irreducible in $\mathbb{Q}[x]$, then $q(x) = a_0 + a_1x + a_2x^2 + a_3x^3$ is also irreducible?
|
I have this doubt because what I want to prove is that, with the given hypothesis, $\frac{\mathbb{Q}[x]}{\langle q(x) \rangle}$ is an integral domain. I have the same problem but for polynomials of degree 4; it occurs to me to give an isomorphism $T: \mathbb{Q}[x] \longrightarrow \mathbb{Q}[x]$ in general, but the proof becomes tedious when it comes to proving that the function $T$ is a ring homomorphism. My idea is to suppose that $q(x)$ is reducible and, by means of the isomorphism, show that $T(q(x)) = p(x)$, so that $p(x)$ is also reducible, reaching a contradiction. But I would like to know if there is an alternative way that does not involve constructing the isomorphism.
|
It's easier to prove the contrapositive: If $q(x)=a_0+a_1x+a_2x^2+a_3x^3$ is reducible, then $p(x)=a_3+a_2x+a_1x^2+a_0x^3$ is reducible. I'm actually just going to prove a generalized version of this, not just for degree $3$, so we are proving that, for any integer $d > 0$, if $$ q(x)=\sum_{i=0}^d a_ix^i $$ is reducible, then $$ p(x)=\sum_{i=0}^d a_{d-i}x^i $$ is also reducible. Unfortunately, I do have to add an extra assumption, which is $a_0\neq 0$. Otherwise, you could have a situation where $q(x)=x^3+x^2$, which is reducible, and $p(x)=x+1$, which is irreducible, which clearly contradicts what we are trying to prove. To make the proof simpler, I will split it into two cases: The first case is $a_d=0$, which is very simple because $a_d$ is the constant term of $p(x)$ and thus if $a_d=0$, $p(x)$ is clearly reducible. The second case is $a_d\neq 0$, which is what follows. In this case, I can safely assume that the degree of $q$ is $d$, because the leading coefficient $a_d$ is nonzero. Write $q(x)=g(x)h(x)$ with $\deg g,\deg h\geq 1$. Since $p(x)=x^d\,q(1/x)$, we get $$p(x)=\left(x^{\deg g}g(1/x)\right)\left(x^{\deg h}h(1/x)\right),$$ and because $a_0\neq 0$ both $g$ and $h$ have nonzero constant terms, so the two reversed factors have degrees $\deg g$ and $\deg h$ respectively; hence $p(x)$ is reducible.
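The reversal argument is easy to spot-check with sympy. This is a sketch of my own; the example polynomials $x^3-2$ and $x^3+x^2$ are my choices, not from the answer:

```python
from sympy import Poly, QQ, symbols

x = symbols('x')

def is_irreducible(p):
    # irreducible over Q iff the rational factorization has a single simple factor
    _, factors = Poly(p, x, domain=QQ).factor_list()
    return len(factors) == 1 and factors[0][1] == 1

def reverse(p):
    # coefficient reversal: p(x) -> x**deg(p) * p(1/x)
    coeffs = Poly(p, x).all_coeffs()          # highest degree first
    rev = coeffs[::-1]
    return sum(c * x**(len(rev) - 1 - i) for i, c in enumerate(rev))

assert is_irreducible(x**3 - 2)               # Eisenstein at 2
assert is_irreducible(reverse(x**3 - 2))      # reversal: -2*x**3 + 1

# the a_0 != 0 hypothesis is needed, exactly as in the counterexample above
assert not is_irreducible(x**3 + x**2)
assert is_irreducible(reverse(x**3 + x**2))   # reversal is x + 1
```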
|
|abstract-algebra|polynomials|ring-theory|commutative-algebra|irreducible-polynomials|
| 1
|
Stuck on computing the derivative of $f(x)=6x^2-\dfrac{6}{x}$ using the limit definition of the derivative.
|
First I set up the difference quotient as follows: $$\dfrac{6(x+h)^2-\frac{6}{x+h}-(6x^2-\frac{6}{x})}{h}$$ Then I expand the binomial in the first term, and distribute $6$ and $-1$ : $$\dfrac{6x^2+12xh+6h^2-\frac{6}{x+h}-6x^2+\frac{6}{x}}{h}$$ Finally I cancel like terms: $$\dfrac{12xh+6h^2-\frac{6}{x+h}+\frac{6}{x}}{h}$$ At this point I multiply the numerator by $\frac{1}{h}$ and I'm left with: $$12x+6h-\frac{6}{h(x+h)}+\frac{6}{xh}$$ By now I grow weary because the solution is $$f'(x)=12x+\frac{6}{x^2}$$ The path I took looks unpromising. I'm not sure where I went wrong. Taking any help I can get. Thank you.
|
You're so close! You just need to find a common denominator for the fractions. $$12x+6h-\frac{6}{h(x+h)}+\frac{6}{xh}$$ $$= 12x+6h+\frac{-6x + 6(x + h)}{hx(x+h)}$$ $$= 12x+6h+\frac{6h}{hx(x+h)}$$ $$= 12x+6h+\frac{6}{x(x+h)}$$ Now, take the limit as $h \to 0$ .
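The whole computation can be verified symbolically. A small sympy sketch of my own:

```python
import sympy as sp

x, h = sp.symbols('x h', positive=True)
f = 6 * x**2 - 6 / x

# the difference quotient from the question, then the limit h -> 0
dq = (f.subs(x, x + h) - f) / h
derivative = sp.limit(sp.simplify(dq), h, 0)

assert derivative.equals(12 * x + 6 / x**2)
```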
|
|algebra-precalculus|
| 1
|
How can commutativity of operators be assumed in this example?
|
Below is a screenshot from Axler's Linear Algebra Done Right. Note that $V$ is a vector space over $F$, which is either $\mathbb{R}$ or $\mathbb{C}$. I haven't reached the chapter on commuting operators yet, and thus far this text has been very transparent about results used but not covered in the book, e.g. stating that some analysis is needed for a more illuminating proof of the Fundamental Theorem of Algebra, and that De Moivre's Theorem would be a "given". So the assertion about commutativity makes me feel like I'm missing something really obvious, even though the last time commutativity was even mentioned in this text, it was to explain that linear maps are not necessarily commutative. I appreciate any help.
|
It is true that, in general, for any two linear maps $S, T$ , it is not necessarily the case that $S$ and $T$ commute, i.e. it is possible to have $ST$ be a different linear map than $TS$ . However, in this proof, they are saying that two specific linear maps, $T-\lambda_j I$ and $T-\lambda_k I$ , happen to commute. We can see these linear maps commute using the distributive property and commutativity of scalar addition and multiplication: $$ \begin{align*} (T-\lambda_j I)(T-\lambda_k I) &=T(T-\lambda_k I)-\lambda_j(T-\lambda_k I) \\ &=T^2-\lambda_k T-\lambda_j T+\lambda_j\lambda_k I \\ &=T^2-(\lambda_j+\lambda_k)T+\lambda_j\lambda_k I \end{align*} $$ $$ \begin{align*} (T-\lambda_k I)(T-\lambda_j I) &=T(T-\lambda_j I)-\lambda_k(T-\lambda_j I) \\ &=T^2-\lambda_j T-\lambda_k T+\lambda_k\lambda_j I \\ &=T^2-(\lambda_j+\lambda_k)T+\lambda_j\lambda_k I \end{align*} $$ As you can see, we have $(T-\lambda_j I)(T-\lambda_k I)=T^2-(\lambda_j+\lambda_k)T+\lambda_j\lambda_k I=(T-\lambda_k I)(T-\lambda_j I)$, so these two specific operators commute.
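The commutation identity is easy to spot-check numerically. A numpy sketch of my own (the matrix and scalars are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 4))   # an arbitrary linear map on R^4
I = np.eye(4)
lj, lk = 2.0, -3.5                # arbitrary scalars lambda_j, lambda_k

left = (T - lj * I) @ (T - lk * I)
right = (T - lk * I) @ (T - lj * I)
expanded = T @ T - (lj + lk) * T + lj * lk * I

assert np.allclose(left, right)       # the two products agree
assert np.allclose(left, expanded)    # and both equal the expanded form
```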
|
|linear-algebra|proof-explanation|
| 1
|
Are there solutions of $a^n+n+b^n=c^n$ for $n>2$?
|
This question has been extensively edited to meet site requirements. As is well-known, the Diophantine equation $a^n+b^n=c^n$ has many solutions when $n=2$ (Pythagorean triples) but none when $n>2$ (the Fermat-Wiles Theorem). If one includes in the equation an extra term $\pm k$, yielding the equation: $$a^n \pm {k} +b^n=c^n\qquad(1)$$ then obviously solutions of (1) can easily be found. What is perhaps surprising however is that there exist solutions in small values of $a,b,c,k$, for example: $$5^2-1+5^2=7^2$$ $$6^3+1+8^3=9^3$$ $$5^3+2+6^3=7^3$$ $$9^3-1+10^3=12^3$$ $$13^5-12+16^5=17^5$$ A special case of (1) is when we add a requirement that $\pm k = n$, so that the equation becomes: $$a^n+n+b^n=c^n\qquad(2)$$ It is easy to find solutions when $n=2$, for example: $$3^2+2+5^2=6^2$$ $$5^2+2+13^2=14^2$$ In fact, given any Pythagorean triple of the form: $$(2m+1)^2 + (2m^2+2m)^2=(2m^2+2m+1)^2$$ we have: $$(2m+1)^2+2+(2m^2+2m+1)^2=(2m^2+2m+2)^2$$ Question: Are there any solutions of (2) when $n>2$?
|
According to Fermat's little theorem ( https://en.wikipedia.org/wiki/Fermat%27s_little_theorem ), if $2n+1$ is prime then $x^{2n}\equiv 0,1 \pmod{2n+1}$, and therefore $x^n\equiv 0,\pm1 \pmod{2n+1}$. Consequently $a^n+b^n-c^n$ can only be congruent to $0,\pm 1,\pm 2,\pm 3 \pmod{2n+1}$, while $n$ itself is congruent to none of these values once $n>3$. Hence $a^n+n+b^n=c^n$ has no positive integer solution when $2n+1$ is a prime number and $n>4$. For instance, $a^n+n+b^n=c^n$ for $n=5,6,8,9$ has no positive integer solution.
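The residue argument can be checked exhaustively for the listed exponents. A small script of my own:

```python
def is_prime(m):
    return m > 1 and all(m % d for d in range(2, int(m**0.5) + 1))

for n in (5, 6, 8, 9):
    m = 2 * n + 1
    assert is_prime(m)
    # Fermat: x^(2n) = 0 or 1 (mod m), hence x^n = 0 or +-1 (mod m)
    residues = {pow(v, n, m) for v in range(m)}
    assert residues <= {0, 1, m - 1}
    # a^n + b^n - c^n mod m therefore lies in {-3,...,3}, which misses n
    reachable = {(r1 + r2 - r3) % m
                 for r1 in residues for r2 in residues for r3 in residues}
    assert n % m not in reachable
```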
|
|diophantine-equations|
| 0
|
Prove $T' = 0$ if and only if $T=0$
|
Theorem. Given that $W$ is finite-dimensional and $T \in \mathcal{L}(V, W)$, then $T'=0$ if and only if $T=0$. I'm having trouble proving $T=0$ if $T' = 0$; in fact, I think I see a counterexample. From the hypothesis $T'=0$, equivalently, for any linear functional $\psi \in W'$ we have $T'(\psi)= \psi \circ T=0$. But then $T$ can be nonzero when $\psi$ is the zero map, i.e. the map sending all of $W$ to $0$; then $T$ does not have to be $0$. Could someone please explain what I am missing here? Thank you!
|
To prove it, take any $x\in V$. If $T'=0$ then you have $T'(\phi)=\phi\circ T=0$ for all $\phi\in W'$ (for every functional, not just one particular $\phi$). Then you have $\phi(Tx)=0$ for all $\phi\in W'$. So, fix a basis $e_{1},...,e_{m}$ of $W$ and the corresponding dual basis $E_{1},...,E_{m}$ of $W'$. If $Tx=\sum_{k=1}^{m}c_{k}e_{k}$, then as $E_{k}(Tx)=0$, you have $c_{k}=0$ for each $k$. This proves that $Tx=0$. Then, as $x$ was arbitrary in $V$, you have that $Tx=0$ for all $x$ and hence $T=0$. PS: This is also true in infinite-dimensional normed spaces, but there it requires something called the Hahn-Banach Theorem to prove rigorously.
|
|linear-algebra|dual-maps|
| 0
|
WWfYMP: Bypassing reversible moves
|
This question concerns the theory developed in Winning Ways for Your Mathematical Plays. The relevant Volume 1 can be found online here. I'm unclear about the intuition behind the authors' "Bypassing Reversible Moves" result, the discussion of which begins around page 60. For convenience, this is the result: The authors demonstrate that the resulting altered game is indeed equivalent to $G$ . I follow their proof, but feel like my understanding of the result is faulty. In particular, consider what happens if Right's option $D$ in fact has two Left options $A_1,A_2\in D^L$ that are good for Left, with one much better than the other - say $A_1\gg A_2\geq G$ . Then in applying the result to $A_2$ , it seems to me that we are in effect assuming that Left will respond to $D$ with $A_2$ . But what right have we to disregard $A_1$ like this? Furthermore, it seems to me that after Right's move to $D$ and Left's response to $D^L$ , the result assumes that Right will definitely move again in the
|
I've thought about it some more. Starting with my third concern: the condition $D^L\geq G$ is necessary in order to provide a disincentive for Right to ever move to $D$ . In more detail: by agreeing to play the altered game instead, Left is in effect locking himself into responding to $D$ with some particular $A_2\in D^L$ satisfying $A_2\geq G$ . This might seem like a disadvantage to his cause at first (hence my first concern), but it really isn't - it's like saying "If you decide to ruin your position, then I'll commit to playing this particular good continuation." It is evident this isn't much of a concession. Naturally, Right has no reason to disagree with the alteration either - after all, the only option he has lost is moving to $D$ , but moving there would've allowed a response to $D^L$ anyway. Finally, my second concern was simply due to inattention on my end. All that's happened is Left has committed to responding to $D$ in a certain way. Right has made no commitments.
|
|combinatorial-game-theory|
| 1
|
Let $F$ be a field of characteristic $2$. Find the maximal separable subextension in $F(X)/F(X^4 + X^2)$.
|
Let $F$ be a field of characteristic $2$. Find the maximal separable subextension in $F(X)/F(X^4 + X^2)$. I am not sure what to do here. I know that if $f(X) = aX^3 + bX^2 + cX + d \in \mathbb{F}_2[X]$, then $f'(X) = aX^2 + c$, so that if $f(X)$ were separable, then at least one of $a$ or $c$ would have to be nonzero. However, I'm not sure what to do with polynomials in $X$ of higher degree, much less how to solve the case for this $f$.
|
The entire extension has degree $4$, since $X$ is a root of $Y^4 + Y^2 - (X^4 + X^2)$. Furthermore, we can factor $Y^4 + Y^2 - (X^4 + X^2)$ as $(Y^2 + Y - (X^2+ X))^2$ (we are in characteristic $2$), and so the extension is inseparable. Therefore we claim that $F(X^2)/F(X^4 + X^2)$ is the maximal separable subextension. Observe that if this extension is separable and nontrivial, then it is maximal, since the entire extension is inseparable. Indeed it is separable and nontrivial: $X^2$ is a root of $Y^2 - Y - (X^4 + X^2)$, and it has degree $2$ since this polynomial is irreducible (you can show this by thinking about quadratic extensions in characteristic $2$). This completes the proof.
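The key factorization can be verified by expanding over the integers and checking that the discrepancy vanishes mod 2. A sympy sketch of my own:

```python
from sympy import Poly, expand, symbols

X, Y = symbols('X Y')

lhs = expand((Y**2 + Y + (X**2 + X))**2)
rhs = Y**4 + Y**2 + (X**4 + X**2)

# over Z the difference consists purely of cross terms with even
# coefficients, i.e. lhs = rhs in characteristic 2
diff = Poly(lhs - rhs, X, Y)
assert all(coeff % 2 == 0 for coeff in diff.coeffs())
```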
|
|field-theory|extension-field|separable-extension|
| 1
|
Integration of hypergeometric function on complex plane
|
I have come across an integral that involves a hypergeometric function, which can be expressed as follows: $$I = \int_0^1 x^{1/2}(1-x)^{\epsilon-1} {_{2}F_1}\Bigl(\frac{1}{2}+\epsilon,1+\epsilon;\frac{3}{2};x\Bigr) dx.$$ Here, $\epsilon$ is a small complex quantity where $|\epsilon|\ll1$. I found an integral formula in "Table of Integrals, Series, and Products," ET II 399(4), as follows: $$\int_0^1 x^{\gamma-1}(1-x)^{\rho-1}F(\alpha,\beta;\gamma;x)dx=\frac{\Gamma(\gamma)\Gamma(\rho)\Gamma(\gamma+\rho-\alpha-\beta)}{\Gamma(\gamma+\rho-\alpha)\Gamma(\gamma+\rho-\beta)},$$ for $\operatorname{Re}\gamma>0$, $\operatorname{Re}\rho>0$, $\operatorname{Re}(\gamma+\rho-\alpha-\beta)>0$. As one can see, for my case, $\alpha=1/2+\epsilon, \beta=1+\epsilon, \gamma=3/2, \rho=\epsilon$, and if we assume $\operatorname{Re}(\epsilon) > 0$ (this is not necessarily true), the third condition cannot be satisfied, as $\operatorname{Re}(\gamma+\rho-\alpha-\beta)= \operatorname{Re}(-\epsilon)<0$. I have a question regarding the integral $I$: does the given case imply that $I$ diverges?
|
If you use Euler's integral representation of the hypergeometric series, you get a double integral over the unit square, which Mathematica evaluates, for suitable real $\epsilon$, to $$I=\frac{\sqrt{\pi } (20 \epsilon +33) \Gamma \left(\frac{3}{2}-\epsilon \right) \Gamma (\epsilon ) \Gamma (\epsilon +2)}{105 \Gamma \left(\frac{1}{2}-\epsilon \right) \Gamma (\epsilon +1) \Gamma \left(\epsilon +\frac{5}{2}\right)}.$$ A numerical test confirms:

```mathematica
With[{\[Epsilon] = 1/3},
 NIntegrate[
  Gamma[3/2]/(Gamma[1 + \[Epsilon]] Gamma[1/2 - \[Epsilon]])*
   ((1 - t)^(1/2 - \[Epsilon])*t^(1 + \[Epsilon])*
    (1 - x)^(-1 + \[Epsilon])*Sqrt[x] (1 + t x)),
  {t, 0, 1}, {x, 0, 1}]]
(* 0.231148 *)

(Sqrt[\[Pi]] (33 + 20 \[Epsilon]) Gamma[3/2 - \[Epsilon]] Gamma[\[Epsilon]]
   Gamma[2 + \[Epsilon]])/(105 Gamma[1/2 - \[Epsilon]] Gamma[1 + \[Epsilon]]
   Gamma[5/2 + \[Epsilon]]) /. {\[Epsilon] -> 1/3} // N
(* 0.231148 *)
```
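The closed form can also be evaluated outside Mathematica. An mpmath sketch of my own, reproducing the value $0.231148$ quoted in the answer at $\epsilon=1/3$:

```python
import mpmath as mp

# closed form from the answer, evaluated at eps = 1/3
eps = mp.mpf(1) / 3
I_closed = (mp.sqrt(mp.pi) * (20 * eps + 33)
            * mp.gamma(mp.mpf(3) / 2 - eps) * mp.gamma(eps) * mp.gamma(eps + 2)
            ) / (105 * mp.gamma(mp.mpf(1) / 2 - eps) * mp.gamma(eps + 1)
                 * mp.gamma(eps + mp.mpf(5) / 2))

# the answer reports 0.231148 for both the double integral and this expression
assert abs(I_closed - mp.mpf('0.231148')) < 1e-4
```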
|
|complex-analysis|definite-integrals|special-functions|contour-integration|hypergeometric-function|
| 0
|
Prove $F(x) = \int_{-\infty}^x f(t) dt$ is uniformly continuous.
|
Following this question: $f$ is integrable, prove $F(x) = \int_{-\infty}^x f(t) dt$ is uniformly continuous. Our question is: let $f\in L_1(\mathbb{R})$. Prove $F(x) = \int_{-\infty}^x f(t) dt$ is uniformly continuous. I have the following Lemma: For every $\epsilon>0$, there exists $\delta>0$ such that if $\mu(A)=\mu([\min\{x, y\}, \max\{x,y\}]) < \delta$, then $$ \int_{A}|f| < \epsilon. $$ Can we use this Lemma to prove our result? Since for every $\epsilon>0$, there exists $\delta>0$ so that $$ |F(x)-F(y)|\le \int_{A}|f| < \epsilon $$ whenever $\mu(A) < \delta$ and $A=[\min\{x, y\}, \max\{x,y\}]$.
|
Yes. Here, $\mu$ is Lebesgue measure, so $\mu([\min \{x,y\}, \max \{x,y\}])=|y-x|$ and the Lemma can be applied directly.
|
|real-analysis|lebesgue-integral|
| 1
|
An observation from IMO 2010 P6 using sets of vectors
|
Here is an observation which I got while working on 2010 IMO P6 but couldn't prove on my own. For any positive integer $n$, consider the sequence of sets of $n$ -element vectors starting with $$\begin{aligned} b_1&=\{[1,0,0,\ldots,0,0]\}, \\ b_2&=\{[0,1,0,\ldots,0,0]\}, \\ &\vdots \\ b_n&=\{[0,0,0,\ldots,0,1]\}\end{aligned}$$ and such that for $i>n$, $$b_i = \bigcup_{j=1}^{i-1} (b_j+b_{i-j}),$$ where the sum of sets of sets is taken pairwise and elementwise. For example, $$\begin{aligned}\{[0,1],\,[0,2]\}+\{[0,3],\,[0,5]\}&=\{[0,1]+[0,3],\,[0,1]+[0,5],\,[0,2]+[0,3],\,[0,2]+[0,5]\}\\ &=\{[0,4],\,[0,6],\,[0,5],\,[0,7]\}.\end{aligned}$$ For any $e \in b_i$, the dot product of $e$ with $[1, 2, \ldots, n]$ is $i$. This is quite obvious to prove using induction and the distributivity of the dot product. What I am trying to show is this: Given any $n$, for sufficiently large $i$, the set $b_i$ consists of all vectors $e$ with integral coefficients whose dot product with $[1, 2, \ldots, n]$ is $i$.
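The recurrence is easy to experiment with. A small script of my own for $n=2$, confirming the dot-product invariant for the first few sets:

```python
# build the sets b_i for n = 2 and verify that every e in b_i
# satisfies e . [1, 2] = i
n = 2
b = {1: {(1, 0)}, 2: {(0, 1)}}
for i in range(n + 1, 9):                      # the recurrence applies for i > n
    b[i] = {tuple(u + v for u, v in zip(e1, e2))
            for j in range(1, i)
            for e1 in b[j] for e2 in b[i - j]}

for i, vectors in b.items():
    assert all(sum((k + 1) * e[k] for k in range(n)) == i for e in vectors)
```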
|
[I am the author of the problem as well]. So after a lot of work I established a weaker version of the claim; hopefully someone can work further on it, and the answer is surprisingly short and simple. Call an $n$-dimensional vector $v$ "non-pairing" if: Case 1 ($n$ even): there exists $i$ such that, if $a$ is the $i$th entry of $v$ and $b$ is the $(n+1-i)$th entry of $v$, then $0 \notin \{a,b\}$. Case 2 ($n$ odd): there exists $i$ such that, if $a$ is the $i$th entry of $v$ and $b$ is the $(n+1-i)$th entry of $v$, then $0 \notin \{a,b\}$, and the $\frac{n+1}{2}$th entry isn't $0$ or $1$. Here is what I managed to prove: for all $i$, every vector that has integral coefficients, is "non-pairing", and has dot product $i$ with $[1,2,\ldots,n]$ is in $b_i$. Firstly I define the infinite set $S=\bigcup_{j=n+1}^{\infty} b_j$. Then I establish a claim. Claim 1: if $e \in S$ then $e+{}$(the single element of $b_i$) $\in S$, for all $1 \leq i \leq n$. This is not so hard to observe: say $e \in b_j$; then $b_{j+i}$ contains $b_j+b_i$, and hence contains the vector $e+{}$(the single element of $b_i$).
|
|algebra-precalculus|contest-math|
| 0
|
If $x^2-16\sqrt x =12$ what is the value of $x-2\sqrt x$?
|
I saw this problem: If $x^2-16\sqrt x =12$, what is the value of $x-2\sqrt x$? To make notation simpler, let $x=t^2$, so $t^4-16t=12$, and let $l= t^2-2t$. I tried to use the fact that $\frac{12} l =\frac{t^4-16t}{t^2-2t}= t^2+2t +4 + \frac{8}{t-2}= (t-2)^2 + 6t+ \frac{t}{8l}= \frac{l^2}{t^2}+6\frac{l}{t-2}+ \frac{t}{8l}$. I tried to find some similarities between the $l$ cubic polynomial and the original quartic polynomial, but I didn't find anything useful; needless to say, solving the quartic polynomial is not an option, as the general solution is too complicated. Btw, the answer is $2$, and since this is a "clean" and "nice" answer for a quartic polynomial, there must be some trick.
|
Question Summary (to Make Easier to Reference in the Answer) $$ \text{If }x^2-16\sqrt{x}=12 \text{ what is the value of }f=x-2\sqrt{x} \tag{Eq. 1}$$ Checking for Consistency by Plugging in $f=x-2\sqrt{x}=2$ Check: $ x - 2\sqrt{x}-2=0$ implies $(\sqrt{x}-1)^2=2+1=3$, so $\sqrt{x}=1+\sqrt{3}$. (Only the positive solution holds since $\sqrt{x} \ge 0$.) So then $x=(1+\sqrt{3})^2=1+2\sqrt{3}+3=4+2\sqrt{3}$. Then $x^2=16+12+16\sqrt{3}=28+16\sqrt{3}$ and $x^2 - 16\sqrt{x}-12=(28+16\sqrt{3})-16\cdot(1+\sqrt{3})-12=(28-16-12)+(16-16)\sqrt{3}=0$. Thus indeed it must be so that $x-2\sqrt{x}=2$. Solving for $f=x-2\sqrt{x}$, starting with $-2\sqrt{x}$ and then simplifying further. To start, one can solve for $-2\sqrt{x}$ as follows (subtracting $x^2$ from both sides of Equation 1, and then dividing both sides by $8$): $$ \frac{(x^2-16\sqrt{x})-x^2}{8} =\frac{12-x^2}{8} \implies -2\sqrt{x}=\frac{12-x^2}{8} \tag{Eq. 2} $$
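The consistency check above can be confirmed exactly with sympy. A sketch of my own:

```python
import sympy as sp

s = 1 + sp.sqrt(3)               # the value of sqrt(x) found in the check
x = sp.expand(s**2)              # x = 4 + 2*sqrt(3)

assert sp.expand(x**2 - 16 * s - 12) == 0   # satisfies Eq. 1
assert sp.expand(x - 2 * s - 2) == 0        # so f = x - 2*sqrt(x) = 2
```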
|
|algebra-precalculus|
| 0
|
What is the relationship between the Laplace equation and the Wave equation?
|
What is the relationship between the Laplace equation: $$ (\partial^2_x + \partial^2_y)\phi = 0 $$ and the Wave equation: $$ (\partial^2_x - \partial^2_y)\phi = 0 $$ ? What is the relationship between the Laplace equation: $$ (\partial^2_{x_0} + \partial^2_{x_1} + \partial^2_{x_2} + \partial^2_{x_3})\phi = 0 $$ and the Wave equation: $$ (\partial^2_{x_0} - \partial^2_{x_1} - \partial^2_{x_2} - \partial^2_{x_3})\phi = 0 $$ ?
|
Comment: All formulas of electrodynamics and Special Relativity follow from the 4-dimensional Wave equation, which is the 4-dimensional Laplace equation in reciprocal (hyperbolic) space. A conformal mapping of the unit vectors $ x_0:= x_0;\ x_1:=ix_1;\ x_2:= jx_2;\ x_3:=-kx_3 $ carries the 4-dimensional Wave equation from the reciprocal space of Special Relativity over into the Euclidean space of the 4-dimensional Laplace equation. The latter can be solved in general by factoring the operator in Quaternion space, using d'Alembert's method of characteristics. The general solution is a combination of forward- and backward-rotating functions, which may interfere with each other and also give rise to self-interference (stationary solutions). Mapping back by $ x_0:= x_0;\ x_1:=-ix_1;\ x_2:= -jx_2;\ x_3:= -kx_3 $ then gives the results in the hyperbolic space of Special Relativity.
|
|partial-differential-equations|complex-numbers|mathematical-physics|quaternions|
| 0
|
Revisiting the distribution of mth order statistic using symmetry
|
Not asking for the distribution of $m$ th order statistic, for it has been well documented and covered and in fact well-known. What I am rather confused about, perhaps basic but failing to see, is the approach using the argument of symmetry in the book, An Introduction to Order Statistics : Let $f_{m:n}$ be the pdf of the $m$ th order statistic. It is given by $$f_{m:n}(x) =n! f(x) \int\cdots\int \prod_{k=1}^{m-1} f(x_k) \prod_{k=m+1}^n f(x_k) ~\mathrm dx_1\cdots\mathrm dx_{m-1}\mathrm dx_{m+1}\cdots\mathrm dx_n, $$ over the domain $-\infty<x_1<\cdots<x_{m-1}<x<x_{m+1}<\cdots<x_n<\infty$. The authors argued that the symmetry of $\prod_{k=1}^{m-1} f(x_k) $ with respect to $x_1, \ldots, x_{m-1}$ as well as the symmetry of $\prod_{k=m+1}^n f(x_k)$ with respect to $x_{m+1}, \ldots, x_n$ help us to evaluate the integral as follows: $$\int\cdots\int \prod_{k=1}^{m-1} f(x_k) \prod_{k=m+1}^n f(x_k) ~\mathrm dx_1\cdots\mathrm dx_{m-1}\mathrm dx_{m+1}\cdots\mathrm dx_n=\frac{1}{(m-1)!} \prod_{k=1}^{m-1} \int_{-\infty}^xf(x_k)~\mathrm dx_k\cdot\frac{1}{(n-m)!} \prod_{k=m+1}^{n} \int_{x}^{\infty}f(x_k)~\mathrm dx_k=\frac{[F(x)]^{m-1}}{(m-1)!}\cdot\frac{[1-F(x)]^{n-m}}{(n-m)!}.$$
|
The unrestricted integrals over $(-\infty,x)$ and $(x,\infty)$ allow all $(m-1)!$ possible orders of the first $m-1$ variables and all $(n-m)!$ possible orders of the last $n-m$ variables. Since the variables are exchangeable, the restriction to any given order yields the same contribution as the restriction originally imposed. Thus, all $(m-1)!$ contributions to the first integral are equal, and all $(n-m)!$ contributions to the second integral are equal, so in each case we can use the entire integral and divide by the number of equal contributions.
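The double-counting argument produces the classical density $f_{m:n}(x)=\frac{n!}{(m-1)!\,(n-m)!}F(x)^{m-1}(1-F(x))^{n-m}f(x)$. A Monte Carlo sketch of my own for uniform samples, checking the equivalent CDF statement $P(X_{(m)}\le x_0)=\sum_{j=m}^{n}\binom{n}{j}x_0^j(1-x_0)^{n-j}$:

```python
import math
import random

random.seed(0)
n, m, x0, trials = 5, 2, 0.3, 200_000

# empirical CDF of the m-th order statistic of n uniforms at x0
hits = sum(sorted(random.random() for _ in range(n))[m - 1] <= x0
           for _ in range(trials))

# binomial tail implied by the order-statistic density
exact = sum(math.comb(n, j) * x0**j * (1 - x0)**(n - j)
            for j in range(m, n + 1))

assert abs(hits / trials - exact) < 0.01
```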
|
|order-statistics|
| 1
|
An ellipse with focal points further and further...
|
Let's start with a definition of an ellipse: an ellipse is the set of points whose sum of distances from two focal points equals a constant value. Now, we have two focal points and a given constant that create an ellipse. Then we move those focal points further and further apart. In my understanding the ellipse will become more and more "flat". I have two questions: How does the eccentricity of the ellipse change? If we reach the point where the focal points are separated by a distance equal to the constant, does our ellipse "degenerate" into a line? Or maybe a point?
|
Eccentricity is a measure of how much an ellipse has been "squished" from a circle. A circle has not been squished at all, so its eccentricity is 0. An ellipse that has been completely squished degenerates into a line segment, which corresponds to an eccentricity of 1. You can play around with it here: https://www.mathopenref.com/ellipseeccentricity.html The answer to your second question is yes (it degenerates into a line segment, not a point).
|
|conic-sections|
| 1
|
Find the solution of $z=pq$ by Charpit's method
|
I am solving the question $$z = pq$$ I have formed this auxiliary equation: $$\frac{dx}{q} = \frac{dy}{p} = \frac{dz}{2pq} = \frac{dp}{p} = \frac{dq}{q}$$ I have used $${dp\over p}={dq\over q}$$ and got $$p=aq.$$ I put this in $$z = pq,$$ then $$q = {\sqrt\frac za}$$ and $$p = \sqrt{az}.$$ After putting these in $$dz = pdx + qdy$$ I got the solution $$2\sqrt a\sqrt z = ax + y + b, $$ but the answer key is saying $$2\sqrt z= ax + \frac ya + b .$$
|
$$ f(x,y,z,p,q)=pq-z $$ using Charpit's method: $$ \frac{\mathrm{d}p}{\frac{ \partial f }{ \partial x } +p\frac{ \partial f }{ \partial z } }=\frac{\mathrm{d}q}{\frac{ \partial f }{ \partial y } +q\frac{ \partial f }{ \partial z } }=\frac{\mathrm{d}z}{-p\frac{ \partial f }{ \partial p } -q\frac{ \partial f }{ \partial q } }=\frac{\mathrm{d}x}{-\frac{ \partial f }{ \partial p } }=\frac{\mathrm{d}y}{-\frac{ \partial f }{ \partial q } } $$ And, $$ \frac{ \partial f }{ \partial x } =0;\frac{ \partial f }{ \partial y } =0;\frac{ \partial f }{ \partial z } =-1;\frac{ \partial f }{ \partial p } =q;\frac{ \partial f }{ \partial q } =p $$ So, $$ \frac{\mathrm{d}p}{p}=\frac{\mathrm{d}q}{q}=\frac{\mathrm{d}z}{2pq}=\frac{\mathrm{d}x}{q}=\frac{\mathrm{d}y}{p} $$ Integrating the first two terms, as you say: $$ p=C_{1}q $$ Put it into the equation: $$ z=C_{1}q^{2} $$ $$ q=\sqrt{ \frac{z}{C_{1}} };\quad p=\sqrt{ C_{1}z } $$ $$ \mathrm{d}z=p\mathrm{d}x+q\mathrm{d}y $$ So, $$ \frac{\mathrm{d}z}{\sqrt{ z }}=\sqrt{C_{1}}\,\mathrm{d}x+\frac{\mathrm{d}y}{\sqrt{C_{1}}}. $$ Integrating gives $$ 2\sqrt{z}=\sqrt{C_{1}}\,x+\frac{y}{\sqrt{C_{1}}}+C_{2}, $$ which is the answer key's $2\sqrt z = ax + \frac ya + b$ with $a=\sqrt{C_1}$ and $b=C_2$. Your form $2\sqrt a\sqrt z = ax + y + b$ is this same relation multiplied through by $\sqrt a$; since $a$ and $b$ are arbitrary constants, the two general solutions agree.
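The answer key's solution can be verified directly against the PDE $z=pq$. A sympy sketch of my own:

```python
import sympy as sp

x, y, a, b = sp.symbols('x y a b', positive=True)

# answer key's solution: 2*sqrt(z) = a*x + y/a + b
z = ((a * x + y / a + b) / 2) ** 2
p = sp.diff(z, x)   # p = dz/dx
q = sp.diff(z, y)   # q = dz/dy

assert sp.simplify(p * q - z) == 0   # it satisfies z = p*q
```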
|
|partial-differential-equations|
| 1
|
solution using symmetry to probability question involving 3 jurors
|
Consider the following problem: Alice has decided to participate in a jury with three members, with the verdict decided by majority. To express her disinterest in the case, she decides to vote by flipping a fair coin. The other two members make the correct decision with probability $p \in (0, 1)$ . How does this arrangement compare to a judge who makes the correct decision with probability $p$ ? The probability the correct decision is made in the former scenario can be calculated as $\frac{p^2}{2} + \frac{p^2}{2} + 2\left(\frac{p(1-p)}{2}\right) = p$ , so the probabilities are the same. Because they are the same, I am wondering if there is a more elegant solution that argues by symmetry?
|
Imagine the judge on the jury. The probability that the judge’s correct decision is messed up by the other two and the probability that the judge’s incorrect decision is corrected by the other two are the same by symmetry, since they both require the two serious people to disagree and Alice to make one of her two arbitrary decisions.
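The symmetry can be confirmed by exact enumeration of the eight voting outcomes. A sketch of my own (the jurors' accuracy value is an arbitrary choice):

```python
from fractions import Fraction
from itertools import product

def jury_correct(p):
    """Probability the 3-member jury (Alice flips a fair coin) is correct."""
    total = Fraction(0)
    for alice, j1, j2 in product((0, 1), repeat=3):   # 1 = votes correctly
        weight = (Fraction(1, 2)
                  * (p if j1 else 1 - p)
                  * (p if j2 else 1 - p))
        if alice + j1 + j2 >= 2:                      # majority is correct
            total += weight
    return total

p = Fraction(7, 10)               # arbitrary accuracy for the serious jurors
assert jury_correct(p) == p       # exactly the lone judge's accuracy
```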
|
|probability|recreational-mathematics|problem-solving|puzzle|
| 0
|
How to find cosh(arcsinh(f(x)))?
|
With the regular trig functions, if I ever end up with something like $\operatorname{trig}_1(\operatorname{arctrig}_2(f(x)))$, where $\text{trig}_1$ and $\text{trig}_2$ are two arbitrary trigonometric functions, I can draw a right triangle to find a formula for this that doesn't involve any trigonometric functions. How do I find a similar result for hyperbolic functions? For instance, when working a problem recently, I ended up with $\cosh(\operatorname{arcsinh}(3x))$. WolframAlpha told me that it was $\sqrt{1+9x^2}$, but how do I figure that out? What picture can I draw? I'm not sure of the geometry here. I'm pretty sure that hyperbolic functions are related to hyperbolas the way that trig functions are related to circles, but I don't figure out the trig(arctrig) expressions by looking at circles -- I draw a triangle. Is there something similar I can do with hyperbolic functions?
|
The Gudermannian function $\theta = gd(x) = arcsin(tanh\ x)$ transforms between the hyperbolic trigonometric functions and the circular ones. For all $x \in \mathbb{R}$, it gives us a unique $\theta \in (-\pi/2, \pi/2)$ such that: $sinh\ x = tan\ \theta$ $cosh\ x = sec\ \theta$ $tanh\ x = sin\ \theta$ $cosech\ x = cot\ \theta$ * $sech\ x = cos\ \theta$ $coth\ x = cosec\ \theta$ * *When $x = 0$, $\theta = 0$, meaning $cosech\ x$, $coth\ x$, $cosec\ \theta$ and $cot\ \theta$ are all undefined. But if we allow "undefined = undefined" these still hold. As you probably know, there are three Pythagorean identities relating the circular trigonometric functions: $sin^2\ \theta + cos^2\ \theta = 1$ $tan^2\ \theta + 1 = sec^2\ \theta$ $1 + cot^2\ \theta = cosec^2\ \theta$ Translating these via the Gudermann relationships, we get three Pythagorean identities which can be used to label the sides of a right-angled triangle: $tanh^2\ x + sech^2\ x = 1$ $sinh^2\ x + 1 = cosh^2\ x$ $1 + cosech^2\ x = coth^2\ x$.
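Both the target identity and the Gudermannian correspondences can be spot-checked numerically. A sketch of my own:

```python
import math

# cosh > 0 and cosh^2 - sinh^2 = 1, so cosh(asinh(3x)) = sqrt(1 + 9x^2)
for x in (-2.0, -0.5, 0.0, 1.0, 3.7):
    assert math.isclose(math.cosh(math.asinh(3 * x)),
                        math.sqrt(1 + 9 * x * x))

# the Gudermannian link used above: sinh x = tan(gd x), cosh x = sec(gd x)
for x in (-1.0, 0.3, 2.0):
    gd = math.asin(math.tanh(x))
    assert math.isclose(math.sinh(x), math.tan(gd))
    assert math.isclose(math.cosh(x), 1 / math.cos(gd))
```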
|
|geometry|trigonometry|hyperbolic-functions|
| 0
|
Exercise about Venn diagrams and Cardinality
|
In a survey of 480 students at a university, the following information was collected: 200 students liked the subjects that involved mathematics, 260 students liked the subjects that involved programming, 170 students disliked subjects involving physics, and 250 of them liked subjects involving two topics combined. Can you find out how many students liked the subjects that involved all three topics combined? First, I try to find the cardinality of the following sets: students who liked math and physics; students who liked math and programming; students who liked physics and programming. But I am not sure how I can find these, because I'm a little confused by all the variables. Any hint or help would be much appreciated.
|
There are $480-170=310$ students who like physics. Now denote the quantity of those who like maths and physics but not programming by $a$ , those who like physics and programming but not maths by $b$ , those who like programming and maths but not physics by $c$ , those who like all three by $x$ . Then we have the following equations: $$\begin{cases} a+b+x=310\\ b+c+x=260\\ c+a+x=200 \end{cases}$$ Add these equations and get: $$2(a+b+c)+3x=770.$$ Since $a+b+c=250$ , we have $$x=90.$$
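The system above can be handed to a solver as a cross-check. A sympy sketch of my own:

```python
from sympy import symbols, solve

# a: maths & physics only, b: physics & programming only,
# c: programming & maths only, x: all three
a, b, c, x = symbols('a b c x')
sol = solve([a + b + x - 310,     # 480 - 170 = 310 like physics
             b + c + x - 260,
             c + a + x - 200,
             a + b + c - 250],
            [a, b, c, x], dict=True)[0]

assert sol[x] == 90
```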
|
|combinatorics|elementary-set-theory|
| 1
|
non-oscillatory behavior of modified Bessel equation
|
How can one prove that the modified Bessel equation $$x^2y''+xy'-(x^2+n^2)y=0$$ has no oscillatory solution (preferably using a comparison theorem)? For $x>0$, one zero of any solution of $$x^2y''+xy'-n^2y=0$$ lies between any two consecutive zeros of solutions of the former equation. But the latter equation has solutions of the form $\displaystyle y=Ax^n+\frac{B}{x^n}$, which have at most one positive zero $\bigg($ say, for example, $\displaystyle y=x-\frac1x\bigg)$. So we can expect at least two positive zeros of the given equation, which contradicts the non-oscillatory behavior. What is possibly wrong? Any help is appreciated.
|
The Euler-Cauchy terms determine the behavior for $x\approx 0$ . The long-term behavior is determined by the terms with the largest coefficients for $x\to\infty$ , thus here $$ y''(x)-y(x)=0, $$ which has exponential solutions (with real exponential factors). Combinations of $e^{\pm x}$ do not oscillate.
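The non-oscillation is visible numerically: the standard solution basis of the modified Bessel equation, $I_n(x)$ and $K_n(x)$, is strictly positive on $(0,\infty)$. A quick mpmath spot check of my own:

```python
import mpmath as mp

# I_n and K_n never vanish for x > 0, so neither basis solution oscillates
for nu in (0, 1, 2, 5):
    for x in (0.25, 1.0, 4.0, 20.0):
        assert mp.besseli(nu, x) > 0
        assert mp.besselk(nu, x) > 0
```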
|
|ordinary-differential-equations|
| 0
|
Almost sure convergence in terms of sets
|
Let $(\Omega,\mathcal{F},p)$ be a probability space, and let $X_n$ be a sequence of random variables. Suppose that for a random variable $X$ we have that for all events $F\in\mathcal{F}$ , $$ \lim_{n\to\infty}\int_F X_n \, dp = \int_F X \, dp . $$ Can we conclude that $X_n\to X$ almost surely?
|
No: take a sequence $\left(X_n\right)_{n\geqslant 1}$ which converges to $0$ in $\mathbb L^1$ but not almost surely (for example, $\left(X_n\right)_{n\geqslant 1}$ independent with $\mathbb P(X_n=1)=1/n$ and $\mathbb P(X_n=0)=1-1/n$) to get a counter-example.
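A simulation sketch of this counterexample (the horizon and number of runs are my own choices). $\mathbb E|X_n|=1/n\to0$ gives $L^1$ convergence, but $\sum 1/n=\infty$, so by the second Borel-Cantelli lemma ones occur arbitrarily late:

```python
import random

random.seed(1)
N, runs = 20_000, 500

# independent X_n with P(X_n = 1) = 1/n: count how often a "1" still
# appears in the second half of the horizon (this happens w.p. ~ 1/2 here)
late_hits = sum(
    any(random.random() < 1 / n for n in range(N // 2, N + 1))
    for _ in range(runs)
)

assert 0 < late_hits < runs
```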
|
|probability-theory|convergence-divergence|random-variables|
| 0
|
Almost sure convergence in terms of sets
|
Let $(\Omega,\mathcal{F},p)$ be a probability space, and let $X_n$ be a sequence of random variables. Suppose that for a random variable $X$ we have that for all events $F\in\mathcal{F}$ , $$ \lim_{n\to\infty}\int_F X_n \, dp = \int_F X \, dp . $$ Can we conclude that $X_n\to X$ almost surely?
|
No, this doesn't hold. In fact, it holds iff $(\Omega,\mathcal{F},p)$ is "essentially" discrete. For example, let $\Omega=[0,1)$ , let $\mathcal{F}$ be the Borel $\sigma$ -algebra, and let $p$ be Lebesgue measure. Let $$G_{n,k}=\Bigl[\frac{k-1}{n},\frac{k}{n}\Bigr)$$ for $n=1,2,\ldots$ and $1\leqslant k\leqslant n$ . Let $F_1,F_2,\ldots$ be the enumeration of the collection $G_{n,k}$ given by $$F_1,F_2,F_3,F_4,F_5,F_6,\ldots = G_{1,1},G_{2,1},G_{2,2},G_{3,1},G_{3,2},G_{3,3},\ldots.$$ In other words, with $s_0=0$ and $$s_n=\sum_{i=1}^n i$$ for $n=1,2,\ldots$ , each $m\in\mathbb{N}$ admits a unique decomposition of the form $m=s_{n-1}+k$ where $n\in\mathbb{N}$ and $1\leqslant k\leqslant n$ . Then $F_m=G_{n,k}$ . In particular, $p(F_m)\leqslant 1/n$ whenever $m>s_{n-1}$ . Let $X_n$ be the indicator of $F_n$ and let $X=0$ . Then $$\lim_n \int_F X_ndp = \lim_n p(F_n\cap F)=0.$$ However, for each $x\in [0,1)$ and $n=1,2,\ldots$ , there exists a unique $k_n(x)\in [1,n]$ such that $x\in G_{n,k_n(x)}$. Hence $X_{s_{n-1}+k_n(x)}(x)=1$ for every $n$, so the sequence $\bigl(X_m(x)\bigr)_m$ takes the value $1$ infinitely often and does not converge to $0$ for any $x$. In particular, $X_n\to X$ almost surely fails (indeed, it fails everywhere).
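This "typewriter" construction is easy to play with directly. A small sketch of my own (the number of levels and the test point are arbitrary choices):

```python
from fractions import Fraction

def typewriter(levels):
    # F_m enumerates G_{n,k} = [(k-1)/n, k/n) level by level
    for n in range(1, levels + 1):
        for k in range(1, n + 1):
            yield Fraction(k - 1, n), Fraction(k, n)

intervals = list(typewriter(50))

# the measures p(F_m) shrink to 0 ...
assert intervals[-1][1] - intervals[-1][0] == Fraction(1, 50)

# ... yet every x in [0,1) is covered once per level, so X_m(x) = 1 i.o.
x = Fraction(2, 7)
hits = sum(lo <= x < hi for lo, hi in intervals)
assert hits == 50
```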
|
|probability-theory|convergence-divergence|random-variables|
| 1
|
Isometry between cone and cylinder
|
In a certain exercise I have been asked to find an isometry between a portion of the cylinder $S = \{\,x^2+y^2 = 2,\ 0 < z < \dots\,\}$ and the complete cone $S_* = \{\,x^2+y^2 = 2z^2,\ 0 < z < \dots\,\}$. I need to find an explicit formula. However, I really have no idea how to start. I would really appreciate it if somebody could give me a hand. I might be able to find an isometry between the cone (or the cylinder) and the plane $z = 0$, for example, but not between the two surfaces above.
|
What do you think of the map $$ F\colon \text{cone}\to \text{cylinder}, \qquad (x,y,z)\mapsto (x/z, y/z, z)\,? $$
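As a first check on the hint, one can verify numerically that the suggested map sends cone points onto the cylinder of radius $\sqrt{2}$ and preserves heights. A numpy sketch of my own; whether the map also preserves lengths is the remaining metric computation:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2 * np.pi, 100)
z = rng.uniform(0.1, 1.0, 100)            # stay away from the apex z = 0

# points on the cone x^2 + y^2 = 2 z^2
x = np.sqrt(2) * z * np.cos(theta)
y = np.sqrt(2) * z * np.sin(theta)

# the suggested map F(x, y, z) = (x/z, y/z, z)
X, Y, Z = x / z, y / z, z
assert np.allclose(X**2 + Y**2, 2.0)      # image lies on the cylinder
assert np.allclose(Z, z)                  # heights are preserved
```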
|
|differential-geometry|isometry|
| 0
|
An ellipse with focal points further and further...
|
Let's start with a definition of an ellipse: an ellipse is the set of points whose sum of distances from two focal points equals a constant value. Now, we have two focal points and a given constant that create an ellipse. Then we move those focal points further and further apart. In my understanding the ellipse will become more and more "flat". I have two questions: How does the eccentricity of the ellipse change? If we reach the point where the focal points are separated by a distance equal to the constant, does our ellipse "degenerate" into a line? Or maybe a point?
|
I remember that for a conic with a focus at the origin, if the directrix is $x=\pm p$ where $p$ is a positive real number, and the eccentricity is a positive real number $e$, the conic has the polar equation $$r=\frac{ep}{1\pm e \cos \theta}$$ and when $e=1$ we do not have a straight line but a parabola.
|
|conic-sections|
| 0
|
If $p(x) = a_3 + a_2x + a_1x^2 + a_0x^3$ is irreducible in $\mathbb{Q}[x]$, then $q(x) = a_0 + a_1x + a_2x^2 + a_3x^3$ is also irreducible?
|
I have this doubt because what I want to prove is that, under the given hypothesis, $\frac{\mathbb{Q}[x]}{\langle q(x) \rangle}$ is an integral domain. I have the same problem for polynomials of degree 4. It occurs to me to define an isomorphism $T: \mathbb{Q}[x] \longrightarrow \mathbb{Q}[x]$ in general, but the proof becomes tedious when it comes to proving that the function $T$ is a ring homomorphism. My idea is to suppose that $q(x)$ is reducible and, by means of the isomorphism, show that $T(q(x)) = p(x)$, so that $p(x)$ is also reducible, reaching a contradiction. But I would like to know if there is an alternative way that does not involve constructing the isomorphism.
|
$p(x) = a_3 + a_2x + a_1x^2 + a_0x^3\Rightarrow p\left(\dfrac 1x\right)=\dfrac{a_3x^3 + a_2x^2 + a_1x + a_0}{x^3}=\dfrac{q(x)}{x^3}$ If $q(x)$ were reducible then for some $\beta\ne0$ (because $a_0\ne0$ ) we would have $q(\beta)=0$ with the degree of $\beta$ equal to $1$ or $2$ over $\Bbb Q$ . But in this case $\dfrac {1}{\beta}$ is also of degree $1$ or $2$ over $\Bbb Q$ and $p\left(\dfrac {1}{\beta}\right)=0$ . This is a contradiction because each root of $p(x)$ must be of degree $3$ since $p(x)$ is irreducible in $\mathbb{Q}[x]$ .
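The reciprocal-root argument can be spot-checked with SymPy (a sketch; the sample polynomial $x^3-2$ is my choice, not from the question): reversing the coefficients of an irreducible cubic with nonzero constant term again gives an irreducible cubic.

```python
from sympy import Poly, symbols

x = symbols('x')

def reversed_poly(p):
    # q has the coefficients of p in reverse order
    return Poly(list(reversed(p.all_coeffs())), x, domain='QQ')

p = Poly([1, 0, 0, -2], x, domain='QQ')   # x^3 - 2, irreducible by Eisenstein at 2
q = reversed_poly(p)                      # -2x^3 + 1
assert p.is_irreducible
assert q.is_irreducible
```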
|
|abstract-algebra|polynomials|ring-theory|commutative-algebra|irreducible-polynomials|
| 0
|
Matrix norm that keeps order relation when applied to a vector
|
Let $M_{n\times n}$ be the vector space of real square matrices of size $n\times n$ . Does there exist a matrix norm $\left\lVert\cdot\right\rVert_M$ such that if $\left\lVert A\right\rVert_M \leq \left\lVert B\right\rVert_M$ , then for all $v\in\mathbb{R}^n$ , $\left\lVert Av\right\rVert_2\leq \left\lVert Bv\right\rVert_2$ , where $\left\lVert \cdot\right\rVert_2$ is the usual $l^2$ norm over $\mathbb{R}^n$ ? This does not work for the induced norm. For example let $A,B$ be $$ A = \begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix} $$ $$ B = \begin{pmatrix} 1 & 0 \\ 0 & 4 \end{pmatrix} $$ Then $\left\lVert A\right\rVert_2 = 2 < 4 = \left\lVert B\right\rVert_2$, but consider $v = \begin{pmatrix} 1 \\ 0\end{pmatrix}$ . Then $Av = \begin{pmatrix} 2 \\ 0\end{pmatrix}$ and $Bv = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$ , so this clearly doesn't work.
|
When $n>1$ , no. Let $A=e_1e_1^T,\,B=ce_2e_2^T$ and $v=e_1$ . When $c$ is sufficiently large, we have $\|A\|_M<\|B\|_M$ but $\|Av\|_2=1>0=\|Bv\|_2$ . The answer remains negative even if you require $A$ and $B$ to be invertible. Let $A=\frac{1}{c}I+e_1e_1^T$ and $B=ce_2e_2^T+\frac{1}{c}I$ . When $c\to\infty$ , we have $\|A\|_M\to\|e_1e_1^T\|_M$ and $\|B\|_M\ge c\|e_2e_2^T\|_M-\frac{1}{c}\|I\|_M\to\infty$ . Therefore $\|A\|_M<\|B\|_M$ when $c$ is sufficiently large. Yet, $\|Ae_1\|_2=\frac{1}{c}+1>\frac{1}{c}=\|Be_1\|_2$ .
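The first counterexample can be checked numerically (a sketch, using the Frobenius norm as one concrete choice of $\|\cdot\|_M$; any matrix norm shows the same failure):

```python
import numpy as np

c = 100.0
e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])
A = np.outer(e1, e1)          # A = e1 e1^T
B = c * np.outer(e2, e2)      # B = c e2 e2^T
# Frobenius norm as a concrete ||.||_M: ||A||_M = 1 < 100 = ||B||_M ...
assert np.linalg.norm(A) < np.linalg.norm(B)
# ... yet ||A e1||_2 = 1 > 0 = ||B e1||_2
v = e1
assert np.linalg.norm(A @ v) > np.linalg.norm(B @ v)
```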
|
|linear-algebra|matrices|normed-spaces|
| 0
|
Finding the characteristic polynomial of a square matrix and proving that the matrix is diagonalizable
|
Let $A$ be a square matrix of order $n$ such that $|A + I| = |A − 3I| = 0$ and also $\operatorname{rank}(A)= 2$ . I need to find the characteristic polynomial of $A$ and to prove that $A$ is diagonalizable. I thought of using the characteristic polynomial equation $|\lambda I - A| = 0$ , and, to prove that $A$ is diagonalizable, using $P^{-1}AP$ and proving that $A$ is similar to a diagonal matrix. The rank information gives me that there are only $2$ linearly independent rows, so if I use Gauss–Jordan elimination only $2$ nonzero rows will remain.
|
Let $n$ be the size of your square matrix, and $a,b$ be the respective dimensions of the eigenspaces corresponding to the eigenvalues $-1$ and $3$ . Then, $$a,b\ge1\quad\text{and}\quad a+b\le\operatorname{rank}(A)=2$$ hence $$a=b=1.$$ Since moreover $\dim\ker A=n-2,$ $A$ is similar to $\operatorname{diag}(-1,3,0,0,\dots,0)$ and its characteristic polynomial is $(X+1)(X-3)X^{n-2}$ .
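A quick SymPy check of the conclusion for a sample size ($n=5$ is an arbitrary choice of mine):

```python
from sympy import Matrix, symbols, expand

lam = symbols('lambda')
n = 5
A = Matrix.diag(-1, 3, *([0] * (n - 2)))   # the diagonal form found above
# characteristic polynomial should be (X + 1)(X - 3) X^(n-2)
assert A.charpoly(lam).as_expr() == expand((lam + 1) * (lam - 3) * lam**(n - 2))
assert A.rank() == 2
```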
|
|linear-algebra|matrices|diagonalization|characteristic-polynomial|
| 0
|
What is the maximum and minimum value of the following function?
|
I came across a question in the book Calculus for the Practical Man. The question asks for the maximum or minimum value of the following function: $$f(x) = \frac{1}{4}\cos^2x - \sin 2x$$ I first tried to find the critical value by differentiating this function to get: $$f'(x) = \frac{-\sin2x}{4}-2\cos2x$$ However, I am unable to solve it further to get the value of $x$ for the maximum or minimum. Can someone please help me out? Thanks for helping.
|
(from one of my comments) “Of course, the values of the tangent function can be “pushed through” with some work to obtain max/min values of the function (evaluate sine and cosine functions at the inverse-tangent values), $\ldots$ ” What follows is probably not what the author intended, but I'm posting it on the off-chance that someone wants to see some details for what I suggested. My guess is that this problem was NOT intended to be a calculus problem, which in my opinion is a bit disingenuous to include in a calculus text without some kind of warning or prior expectation that non-calculus methods should sometimes be used. First, solve $f'(x) = 0.$ Since $f'(x) = -\frac{1}{2}\cos x \sin x - 2\cos 2x = -\frac{1}{4}\sin 2x - 2\cos 2x,$ we get $-\frac{1}{4}\sin 2x = 2 \cos 2x,$ or $\tan 2x = -8.$ The reason for converting $\cos x \sin x$ to $\frac{1}{2}\sin 2x$ is that it allows us to get a quotient of sine and cosine having the same argument/input, which then allows us to put the equati
|
|calculus|maxima-minima|
| 0
|
Notion of isomorphism of lattices / intrinsic definition of lattices
|
I'm trying to construct the "correct" (read: a good) notion of moduli space of $n$ -dimensional lattices. Here, an $n$ -dimensional lattice is the $\mathbb{Z}$ -span of an $\mathbb{R}$ -basis of $\mathbb{R}^n$ . For this, one has to find the "correct" (read: a good) notion of isomorphism between two lattices. The first immediate thought is that the orthogonal transformations $O_n(\mathbb{R})$ should definitely be (or rather "induce") isomorphisms between lattices. So for example, when columns of a matrix are considered to be bases of lattices, then $\pmatrix{1 & 0 \\ 0 &1}$ and $\pmatrix{1/\sqrt{2} & 1/\sqrt{2} \\ -1/\sqrt{2} & 1/\sqrt{2}}$ should be isomorphic, because they are just rotated versions of each other. That's because I really only care about intrinsic properties of lattices such as density, shortest vector, etc., and all of this is preserved under orthogonal transformations. A second thing one might want to mod out is the scaling action of $\mathbb{R}^\times$ , so that $\l
|
By definition, a lattice $L$ is an additive discrete subgroup of $\mathbb{R}^n$ . An additive subgroup $L$ of $\mathbb{R}^n$ is discrete iff $L$ is the $\mathbb{Z}$-span of some $t$ vectors in $\mathbb{R}^n$ that are linearly independent over $\mathbb{R}$ . Let's assume that $L$ is full-rank, i.e., $t=n$ . Thus, when we say "given" $L$ , we mainly mean given its basis, which can be represented by an invertible real matrix in $\mathbb{R}^{n\times n}$ . To "compare" two lattices with respect to some invariant, e.g., shortest vector, covering radius, it is natural to scale both lattices to the same volume, say volume 1. Hence, $\mathcal{L}_n$ , the space of all possible $n$-dimensional lattice bases (of volume 1), is identified with $SL_n(\mathbb{R})$ . As any change of basis generates the same lattice, that is, multiplying your basis matrix by a matrix in $SL_n(\mathbb{Z})$ (right multiplication if you write your basis as column vectors), then $\mathcal{L}_n$ is in fact $$SL_n(\mathbb{R})/SL_n(\mathbb{Z}).$$ Now
|
|integer-lattices|
| 0
|
If $x^2-16\sqrt x =12$ what is the value of $x-2\sqrt x$?
|
I saw this problem:If $x^2-16\sqrt x =12$ what is the value of $x-2\sqrt x$ ? To make notations simpler Let $x=t^2$ and $t^4-16t=12$ and $l= t^2-2t$ . I tried to use the fact the $\frac{12} l =\frac{t^4-16t}{t^2-2t}= t^2+2t +4 + \frac{8}{t-2}= (t-2)^2 + 6t+ \frac{t}{8l}= \frac{l^2}{t^2}+6\frac{l}{t-2}+ \frac{t}{8l}$ I tried to find some similarities between the $l$ cubic polynomial and the original quartic polynomial but I didn't find anything useful, needless to say that solving the quartic polynomial is not an option as the general solution is too complicated. Btw the answer is $2$ and since this is a "clean" and "nice" answer for a quartic polynomial there must be some trick
|
$$\text{Let }y=\sqrt{x}$$ $$y^4-16y-12=0$$ $$(y^2-2y-2)(y^2+2y+6)=0$$ $$y=1\pm\sqrt{3}\text{ or }y=-1\pm\sqrt{5}i\\$$ $$\sqrt{x}=1\pm\sqrt{3}\text{ or }\sqrt{x}=-1\pm\sqrt{5}i$$ $$x=4\pm2\sqrt{3}\text{ or }x=-4\mp2\sqrt{5}i\\$$ $$x-2\sqrt{x}=2$$ $$\text{or}$$ $$x-2\sqrt{x}=-2\mp4\sqrt{5}i$$
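The factorization and the final value can be verified with SymPy (a quick sketch):

```python
from sympy import symbols, expand, sqrt, simplify

y = symbols('y')
# the quartic factors as claimed
assert expand((y**2 - 2*y - 2) * (y**2 + 2*y + 6)) == y**4 - 16*y - 12
# on the first factor, y^2 - 2y = 2, i.e. x - 2*sqrt(x) = 2 for y = sqrt(x)
r = 1 + sqrt(3)
assert simplify(r**2 - 2*r) == 2
```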
|
|algebra-precalculus|
| 0
|
Application of the residue theorem: how to use angular sectors
|
I am studying the residue theorem applications to physics and in particular the angular sector method. I understand the concept of finding an angle such that the infinite half-segment along it acts as the real axis, but I found an integral that I do not know how to solve with this method: $$\int_0^{\infty}{\frac{1-x}{1+x^3}}\,dx$$ I tried the following: $$ \oint{\frac{1-z}{1+z^3} dz}=\int_{0}^{\infty}{\frac{1-x}{1+x^3}dx} + \int_{\infty}^0{\frac{1-z}{1+z^3}dz}=2\pi i \Sigma_jRes(f, z_j)$$ And taking $z=re^{2\pi i/3}$ in the last integral, I get $$ \oint{\frac{1-z}{1+z^3} dz}=\int_{0}^{\infty}{\frac{1-x}{1+x^3}dx} - \int^{\infty}_0{\frac{1-re^{2 \pi i /3}}{1+r^3}dr}=2\pi i \Sigma_jRes(f, z_j)$$ What I wanted to get is something of the form $(1-a)I=2\pi i \Sigma_j Res(f,z_j)$ , but I don't see any way of factorizing. What should I do? By the way, the integral should vanish, maybe it helps...
|
You can do it in two parts: $$ \int_0^\infty \frac{1-x}{1+x^3}dx = \int_0^\infty \frac{1}{1+x^3}dx - \int_0^\infty \frac{x}{1+x^3}dx $$ This will make it easier to "have the factor come out". Use the contour: The last segment is $[R\omega, 0]$ , where $\omega = \frac{-1+\sqrt{3}i}{2}$ . Notice that $\omega^3 = 1$ so that will allow us to relate the integrals along the last segment with the integrals we want. For $f_1(z)=\frac{1}{1+z^3}$ we get $$ \int_{[R\omega,0]} f_1(z)dz = -\int_{[0,R\omega]} f_1(z)dz = -\int_0^R \frac{1}{1+(\omega t)^3}\omega dt = -\omega \int_0^R \frac{1}{1+t^3} dt $$ Over the circle contour, as $R\to \infty$ , the integral will vanish (basically because the denominator's degree is at least $2$ bigger than the numerator's). The contour encloses one pole $z_1 = \frac{1+\sqrt{3}i}{2}$ of $f_1$ , so by the residue theorem, we get $$ (1-\omega)\int_0^\infty \frac{1}{1+x^3}dx = 2\pi i \,Res(f_1, z_1) = \frac{2\pi i}{3z_1^2} $$ The integral for $f_2(z)=\frac{z}{1+z^3}$ goes s
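The residue bookkeeping in the last displayed equation can be sanity-checked numerically with `cmath` (a sketch; here $z_1=e^{i\pi/3}$ and $\omega=e^{2\pi i/3}$ as above, and $2\pi/(3\sqrt3)$ is the known value of $\int_0^\infty dx/(1+x^3)$):

```python
import cmath
import math

omega = cmath.exp(2j * math.pi / 3)      # cube root of unity used for the sector
z1 = cmath.exp(1j * math.pi / 3)         # the enclosed pole of f1 = 1/(1+z^3)
# (1 - omega) * I1 = 2*pi*i * Res(f1, z1) = 2*pi*i / (3 z1^2)
I1 = 2j * math.pi / (3 * z1**2) / (1 - omega)
assert abs(I1 - 2 * math.pi / (3 * math.sqrt(3))) < 1e-12
assert abs(I1.imag) < 1e-12              # the value is real, as it must be
```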
|
|complex-analysis|residue-calculus|
| 1
|
Does the definition of $R^3$ require an orthonormal basis?
|
Does the definition of $\mathbb{R}^3$ require an orthonormal basis? If I have a vector space $V$ with a basis $\{v_1,v_2,v_3\}$ of three linearly independent vectors which are not orthonormal, can I still call it $\mathbb{R}^3$? It seems people generally use $\mathbb{R}^3$ with an orthonormal basis in mind. What's the definition of $\mathbb{R}^3$ (or, more generally, $\mathbb{R}^n$)? Does it require an orthonormal basis?
|
The problem is the same in $\mathbb R^2$ . Let us consider $\cos$ and $\sin$ . $$\cos'=-\sin\land \cos''=-\cos$$ $$\sin'=\cos \land \sin''=-\sin$$ Both functions are solutions of $$y''+y=0$$ and we know that $$S:=\{f:f''+f=0\}=\{\alpha \cos+\beta \sin\mid (\alpha,\beta)\in \mathbb R^2\}$$ So, in a first approach (in mathematics, we say that there is an isomorphism of vector spaces), each solution is written as $$(\alpha, \beta)\in \mathbb R^2$$ We could write $$S=\mathbb R^2$$ No need to have an orthonormal basis to consider $\mathbb R^2$ as $S$.
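SymPy's ODE solver confirms the two-parameter family (a small sketch):

```python
from sympy import Function, dsolve, Eq, symbols, sin, cos

x = symbols('x')
y = Function('y')
sol = dsolve(Eq(y(x).diff(x, 2) + y(x), 0), y(x))
# general solution: C1*sin(x) + C2*cos(x) -- a 2-dimensional real vector space,
# identified with the pairs (alpha, beta) in R^2
assert sol.rhs.has(sin(x)) and sol.rhs.has(cos(x))
```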
|
|vector-spaces|
| 0
|
Does the definition of $R^3$ require an orthonormal basis?
|
Does the definition of $\mathbb{R}^3$ require an orthonormal basis? If I have a vector space $V$ with a basis $\{v_1,v_2,v_3\}$ of three linearly independent vectors which are not orthonormal, can I still call it $\mathbb{R}^3$? It seems people generally use $\mathbb{R}^3$ with an orthonormal basis in mind. What's the definition of $\mathbb{R}^3$ (or, more generally, $\mathbb{R}^n$)? Does it require an orthonormal basis?
|
It depends on the context. In the context of abstract algebra, one may say that anything that's isomorphic to $\mathbb{R}^n$ "is" $\mathbb{R}^n$ , because the fact that an isomorphism exists implies that all algebraic properties are preserved. However, if you have specific scalar products on your vector spaces, then it might not be smart to say that they are the same unless the isomorphism $\Phi$ between them satisfies something like $(\Phi(a),\Phi(b))=(a,b)$ . The question of whether two objects are "the same" is sometimes a little bit more nuanced and really depends on the general question you're trying to answer.
|
|vector-spaces|
| 0
|
About two functions whose Lebesgue integrals over all sets of a $\sigma$-algebra are equal
|
Let $X$ be an infinite set and let $\mathcal{F}$ be an infinite $\sigma$-algebra on $X$ . Let $\mu$ be a measure on $(X,\mathcal{F})$ . Let $f$ and $g$ be two $\mathcal{F}$-measurable functions. Is it necessarily true that if $$ \int_A f d\mu = \int_A g d\mu, \quad \forall A \in \mathcal{F},$$ then $f = g$ ( $\mu$-a.e.)? I feel that the answer to this is positive, but I can't prove the statement above nor give a counter-example. Please give me a hint. Thank you.
|
It is enough to prove the following proposition: If $h:(X,\mathcal F, \mu)\to \Bbb R^+_0$ is an $\mathcal F$ -measurable function, then $$h\equiv 0 ~\mu -a.e.\iff \forall A\in \mathcal F:\int_A hd\mu =0.$$ We prove as follows; If $h\equiv 0 ~\mu -a.e.$ and $A\in \mathcal F$ , then $\int_A hd\mu =\int_A 0d\mu=0$ . Conversely suppose $\forall A\in \mathcal F:\int_A hd\mu =0$ . For $n\in \Bbb N$ we consider the sets $\{h>\frac 1n\}$ and we get $0=\int_{\{h>\frac 1n\}}hd\mu \geq \int_{\{h>\frac 1n\}}h1_{\{h>\frac 1n\}}d\mu \geq \int_{\{h>\frac 1n\}}\frac 1n d\mu =\frac 1n \mu (\{h>\frac 1n\})\implies \mu (\{h>\frac 1n\})=0.$ From $\sigma$ -subadditivity of $\mu$ we have $\{h>0\}=\bigcup_n \{h>\frac 1n\}\implies \mu (\{h>0\})=\mu (\bigcup_n\{h>\frac 1n\})\leq \sum_n \mu (\{h>\frac 1n\})=0$ , which means that $h\equiv 0~\mu-$ a.e.
|
|measure-theory|
| 0
|
What is the maximum and minimum value of the following function?
|
I came across a question in the book Calculus for the Practical Man. The question asks for the maximum or minimum value of the following function: $$f(x) = \frac{1}{4}\cos^2x - \sin 2x$$ I first tried to find the critical value by differentiating this function to get: $$f'(x) = \frac{-\sin2x}{4}-2\cos2x$$ However, I am unable to solve it further to get the value of $x$ for the maximum or minimum. Can someone please help me out? Thanks for helping.
|
Without using the result about $a\cos(x)+b\sin(x)$ , here is a solution. Continue... So, $f^\prime(x)=0$ if $\tan(2x)=-8$ . Then, $\cos(2x)=\pm1/\sqrt{65}$ . Take $a$ and $b$ such that $$\cos(2a)=+\frac{1}{\sqrt{65}}$$ $$\cos(2b)=-\frac{1}{\sqrt{65}}$$ Then, $$f^{\prime\prime}(x)=-\frac12\cos(2x)+4\sin(2x)=-\frac12\cos(2x)\left(1-8\tan(2x)\right)$$ so $f^{\prime\prime}(a)<0$ and $f^{\prime\prime}(b)>0$ . Hence, $a$ is a point of local maximum and $b$ is a point of local minimum. Now, note $$f(x)=\frac14\cos^2(x)-\sin(2x)=\frac14\left(\frac{1+\cos(2x)}{2}\right)-\cos(2x)\tan(2x)$$ Thus, $$f(a)=\frac18\left(1+\frac{1}{\sqrt{65}}\right)+\frac{8}{\sqrt{65}}=\frac{1+\sqrt{65}}{8}$$ $$f(b)=\frac18\left(1-\frac{1}{\sqrt{65}}\right)-\frac{8}{\sqrt{65}}=\frac{1-\sqrt{65}}{8}$$ You can verify these are the global maximum and minimum. Hope this helps. :)
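A brute-force numerical check of the claimed extrema (a sketch; the grid size is an arbitrary choice of mine):

```python
import math

f = lambda t: 0.25 * math.cos(t) ** 2 - math.sin(2 * t)
N = 200_000
# f has period 2*pi, so sample one full period
vals = [f(2 * math.pi * k / N) for k in range(N)]
assert abs(max(vals) - (1 + math.sqrt(65)) / 8) < 1e-6
assert abs(min(vals) - (1 - math.sqrt(65)) / 8) < 1e-6
```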
|
|calculus|maxima-minima|
| 0
|
Evaluation of $\displaystyle \lim_{u\rightarrow \infty}\frac{\int^{\pi u}_{1}\frac{\sin^2(5x)}{x}dx}{\ln(u^2+u^{-2})}$
|
Evaluation of $\displaystyle \lim_{u\rightarrow \infty}\frac{\int^{\pi u}_{1}\frac{\sin^2(5x)}{x}dx}{\ln(u^2+u^{-2})}$ What I try: Using the Newton–Leibniz formula, $\displaystyle \lim_{u\rightarrow \infty}\frac{\bigg(\int^{\pi u}_{1}\frac{\sin^2(5x)}{x}dx\bigg)'}{\bigg(\ln(u^2+u^{-2})\bigg)'}$ $\displaystyle \lim_{u\rightarrow \infty}\frac{\frac{\sin^2(5\pi u)}{\pi u}\cdot\pi -0\cdot 0 }{\frac{1}{u^2+u^{-2}}\cdot (2u-2u^{-3})}$ $\displaystyle \lim_{u\rightarrow \infty}\frac{\sin^2(5\pi u)}{5\pi u}\cdot \frac{(u^2+u^{-2})}{(2u-2u^{-3})}=0$ But the answer is $\displaystyle \frac{1}{4}$ . Please have a look at this problem. Thanks
|
The last limit is not zero, it just doesn't exist; note that $$ \begin{align*} \lim_{u\to \infty }\frac{\sin ^2(5\pi u)}{5\pi u}\cdot \frac{u^2+u^{-2}}{2u-2u^{-3}}&=\frac1{10\pi}\lim_{u\to \infty }\sin ^2(5\pi u)\lim_{u\to \infty }\frac{u^2+u^{-2}}{u^2-u^{-2}}\\&=\frac1{10\pi}\lim_{u\to \infty }\sin ^2(5\pi u) \end{align*} $$ And as the sine function is periodic, it is clear that the last limit doesn't exist. However, if we assume that $u\in \mathbb{N}$ and the original limit represents the limit of a sequence then, applying the Stolz–Cesàro theorem to the original limit expression, it can be shown that the limit of this sequence is indeed $1/4$ . Suppose that the given limit is a functional limit; then, noticing that $\ln(u^2+u^{-2})\sim_{\infty }2\ln u$ and that $\sin ^2(5x)=\frac{1-\cos (10x)}{2}$ , for any chosen $a>0$ we find that $$ \lim_{u\to \infty }\frac1{\ln (u^2+u^{-2})}\int_{a}^{\pi u}\frac{\sin ^2(5x)}{x}\,d x=\lim_{u\to \infty }\frac1{4\ln u}\int_{10a}^{10\pi u}\frac{1-\cos
|
|definite-integrals|
| 1
|
$\frac{\log(x)}{1-x}$ is increasing
|
How can I show that the function $h(x):= \frac{\log(x)}{1-x}$ is increasing on $\mathbb{R}_{>0}$ ? The derivative of $h$ is $h'(x)= \frac{x\log(x)-x+1}{x(1-x)^2}$ but I don't see why $h'(x)$ is always non-negative.
|
It is enough to show that $x\log(x)-x+1\geq0$ for all $x>0$ . Define $g(x)=x\log(x)-x$ . Then, $$g^\prime(x)=\log(x)$$ So, for $x\geq1$ , we have $g^\prime(x)\geq0$ and so $g$ is increasing on $[1,\infty)$ . Hence, for $x\geq 1$ , we have $g(x)\geq g(1)=-1$ . On the other hand, for $0<x<1$ , note that $g^\prime(x)\leq 0$ so $g$ is decreasing. Hence, $g(x)\geq g(1)=-1$ for all $0<x<1$ . To conclude, we have for all $x>0$ that $g(x)\geq -1$ , i.e., $x\log(x)-x+1\geq0$ . Note . The function $h(x)$ is not defined at $x=1$ right away, but you can continuously extend it by $h(1)=-1$ (and likewise $h^\prime(1)=\frac12$ ). The answer above assumes that. Hope this helps. :)
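The key inequality $x\log(x)-x+1\ge 0$ is easy to spot-check numerically (a sketch over a log-spaced grid of my choosing):

```python
import math

g = lambda x: x * math.log(x) - x + 1
xs = [10 ** (k / 100) for k in range(-300, 301)]   # x from 1e-3 to 1e3
assert all(g(x) >= -1e-12 for x in xs)
assert abs(g(1.0)) < 1e-12                         # equality exactly at x = 1
```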
|
|calculus|
| 1
|
Munkres lemma 68.5
|
I'm reading Munkres Topology and I'm stuck in lemma 68.5 as you can see he uses the theorem 68.4 in order to imply that there is a isomorphism between $G$ and $G'$ , but in order for this theorem to be applied we must have that $\{i_{\alpha}(G_{\alpha})\}$ and $\{i'_{\alpha}(G_{\alpha})\}$ generate $G$ and $G'$ which for $G'$ is just fine because it is stated that is a free product of the groups $\{i'_{\alpha}(G_{\alpha})\}$ but how do we know that $G$ can be generated by $\{i_{\alpha}(G_{\alpha})\}$ ? We say that the groups $G_\alpha$ generate $G$ if every element $x$ of $G$ can be written as a finite sum of elements of the groups $G_\alpha$ . Lemma 68.5. Let $\{G_{\alpha}\}_{\alpha \in J}$ be a family of groups; let $G$ be a group; let $i_{\alpha} : G_{\alpha} \rightarrow G$ be a family of homomorphisms. If the extension condition of Lemma 68.3 holds, then each $i_{\alpha}$ is a monomorphism and $G$ is the free product of the groups $i_{\alpha}(G_{\alpha})$ . Proof. We first show tha
|
The Lemma is correct, but Theorem 68.4 (as stated by Munkres) is not sufficient to prove it. Let us analyze Lemma 68.3. The "universal property" of the free product is $(*)$ Given a group $H$ and a family of homomorphisms $h_\alpha : G_\alpha \to H$ , there exists a homomorphism $h : G \to H$ such that $h \circ i_\alpha = h_\alpha$ for each $\alpha$ . Furthermore, $h$ is unique. Unfortunately it is a bit unclear what the universal property is. Actually it is not only $(*)$ , but is $(*)$ plus uniqueness. In other words, it is $(\#)$ Given a group $H$ and a family of homomorphisms $h_\alpha : G_\alpha \to H$ , there exists a unique homomorphism $h : G \to H$ such that $h \circ i_\alpha = h_\alpha$ for each $\alpha$ . This is a special case of the universal property of the coproduct (aka categorical sum ) in category theory. I shall not go into details here; you can read any book about category theory. Note that $(\#)$ is a property applicable to any system $(G,i_\alpha)$ consisting of a
|
|group-theory|algebraic-topology|finitely-generated|free-product|
| 0
|
For distinct non-zero complex $z_1$, $z_2$, $z_3$ satisfying $|z-1|=1$ and $z_2^2=z_1z_3$, which of the following are true?
|
$z_1, z_2, z_3$ are three non-zero distinct points satisfying $|z-1|=1 \space \& \space z_2^2=z_1 z_3$ then $\qquad$ (A) $\displaystyle\frac{z_3-z_2}{z_2+z_3-2}$ is purely imaginary $\qquad$ (B) $\displaystyle\operatorname{Arg}\left(\frac{z_2-1}{z_1-1}\right)=2\operatorname{Arg}\left(\frac{z_3}{z_2}\right)$ $\qquad$ (C) $\displaystyle\operatorname{Arg}\left(\frac{z_2-1}{z_1-1}\right)=2 \operatorname{Arg}\left(\frac{z_3}{z_1}\right)$ $\qquad$ (D) $\left|\frac{1}{z_2}-\frac{1}{z_3}\right|+\left|\frac{1}{z_1}-\frac{1}{z_2}\right|=\left|\frac{1}{z_1}-\frac{1}{z_3}\right|$ My approach is as follows: $z$ represents the points on the circle $(x-1)^2+y^2=1$ . Hence the parametric points are $x=1+\cos\theta \,$ & $\,y=\sin\theta$ . I am not able to proceed, as the problem becomes complicated, and I cannot solve it.
|
Let $z_1 = e^{i\theta_1}+1 = 2e^{i\theta_1/2}\cos\frac{\theta_1}{2}$ , so $\arg(z_1) = \frac{\theta_1}{2}$ . Similarly $z_2 = e^{i\theta_2}+1$ and $z_3 = e^{i\theta_3}+1$ , so $\arg(z_i)= \frac{\theta_i}{2}, i =1,2,3$ . Then $ \frac{z_3-z_2}{z_3+z_2-2}$ becomes $ \frac{e^{i(\theta_3 -\theta_2 )}-1}{ e^{i(\theta_3 - \theta_2)}+1} = i\tan\left(\frac{\theta_3 - \theta_2}{2}\right)$ , which is purely imaginary as $\theta_3 \ne \theta_2$ . Now using the 2nd relation: $2\arg(z_2) = \arg(z_1)+ \arg(z_3)$ , so $\theta_2 = \frac{\theta_1 + \theta_3}{2}$ . This relation is satisfied by the equation in option (B) but not (C). Now apply the triangle inequality to option (D). We find $|\frac{1}{z_2} - \frac{1}{z_3}|+|\frac{1}{z_1} - \frac{1}{z_2}| \ge |\frac{1}{z_1} - \frac{1}{z_3}|$ , and equality is achieved only when $\frac{1}{z_2}-\frac{1}{z_3} = \frac{1}{z_1}- \frac{1}{z_2}$ , which on simplifying requires $z_1 , z_2 , z_3$ to be in AP. But they are already in GP. This is possible only when they are equal, which is excluded by the question. So
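Option (A) can be verified numerically for arbitrary distinct points on the circle (a sketch; the sample angles are my choice, and (A) holds for any two distinct points on $|z-1|=1$):

```python
import cmath
import math

t2, t3 = 1.9, 3.1                       # arbitrary distinct angles
z2 = 1 + cmath.exp(1j * t2)             # points on |z - 1| = 1
z3 = 1 + cmath.exp(1j * t3)
w = (z3 - z2) / (z2 + z3 - 2)
assert abs(w.real) < 1e-12              # purely imaginary, as claimed
# and w = i*tan((t3 - t2)/2), matching the derivation above
assert abs(w.imag - math.tan((t3 - t2) / 2)) < 1e-12
```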
|
|complex-numbers|
| 0
|
Why invariant geodesics of translation can't have at least two intersections?
|
I can't understand the red line well. Seemingly, if $\tilde \gamma_1\cap \tilde \gamma_2$ has at least two points, it will contradict simple connectivity. I want to know why this is. PS (2024-3-26): As per the hint of Moishe Kohan, the right way is to use the Hadamard theorem. Hadamard Theorem: Let $M$ be a complete Riemannian manifold, simply connected, with sectional curvature $K\le 0$ . Then $M$ is diffeomorphic to $\mathbb R^n$ , $n=\dim M$ ; more precisely, $\exp_p:T_pM\rightarrow M$ is a diffeomorphism. If there are at least two points, say two of them are $p,q$ , then there are two different vectors $u,v\in T_pM$ such that $\exp_pu=\exp_pv =q$ . This contradicts $\exp_p$ being a diffeomorphism. The picture below is from page 260 of do Carmo's Riemannian Geometry .
|
Suppose the intersection contains just one point, $P$ . Where must that point go after the translation, i.e., what is $f(P)$ ? It must lie on both geodesics, because we're assuming both are (setwise) invariant under $f$ . But the only point on both geodesics is $P$ . So we get $f(P) = P$ , i.e., $P$ is a fixed-point of $f$ . But $f$ has no fixed points (by the opening clause of that sentence), so that's a contradiction.
|
|riemannian-geometry|
| 1
|
Is the homology long exact sequence natural?
|
For instance, the Tor (or any other derived functor of an additive functor) long exact sequence associated to a short exact sequence of modules. The naturality is stated in many books as "for any commutative diagram of short exact sequences the induced diagram of long exact sequences is commutative". I see clearly that "for any commutative diagram of short exact sequences there exists an induced diagram of long exact sequences which is commutative". But I do not understand why the above stronger formulation is also valid. For instance, I can compute the connecting homomorphism of the long exact sequence by constructing a short exact sequence of complexes (using e.g. the horseshoe lemma). But if I multiply all the maps in the three complexes by $-1$, we can again compute the connecting homomorphisms, and the result is that the new connecting homomorphism is $-1$ times the older one. So the connecting homomorphism is not unique. Where am I wrong?
|
$\require{AMScd}$ Firstly, naturality of the long exact sequence and uniqueness of the long exact sequence are two different things. That the long exact sequence is natural is a fairly simple consequence of the construction and the naturality of the long exact sequence in homology for an exact sequence of nonnegative complexes $0\to X_\bullet\to Y_\bullet\to Z_\bullet\to0$ , which itself is a reasonably straightforward exercise to verify. Your question has two conflicting subquestions but if you want me to elaborate on that more I can. The connecting homomorphism is not unique. It is not even unique if you stipulate that it is natural. But, who cares? There is a canonical choice of construction of the connecting homomorphism from the Snake Lemma, which makes homology a $\delta$ -functor. Appropriately stated, any other choice of natural connecting homomorphism so that $H_\bullet$ would be a $\delta$ -functor would give you something naturally isomorphic with the canonical one. So there
|
|commutative-algebra|homology-cohomology|homological-algebra|
| 1
|
Trigonometry: Find $\sin \theta$ when $\tan \theta$ is known.
|
If $\tan \theta = \sqrt{63}$ and $\cos \theta$ is negative, find $\sin \theta$. Since $\tan \theta$ is positive and $\cos \theta$ is negative, $\theta$ lies in the $3$rd quadrant. So $\sin \theta$ is negative, but I don't know how to find $\sin \theta$. Please guide me... Thank you.
|
$\text{Because, }\tan{\theta}=\tan{(\pi+\theta)}$ $$\tan{(\pi+\theta)}=\sqrt{63}$$ $\text{Because, }\sin{(\arctan{x})}=\frac{x\sqrt{1+x^2}}{1+x^2}$ $$\sin{(\pi+\theta)}=\frac{3\sqrt{7}}{8}$$ $$\sin{\theta}=-\frac{3\sqrt{7}}{8}$$ $\text{Note that, }\cos{(\arcsin{(-\frac{3\sqrt{7}}{8})})}\text{ could be smaller than 0. }$
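A quick numeric check (a sketch): take a third-quadrant angle with $\tan\theta=\sqrt{63}$ directly and evaluate.

```python
import math

theta = math.pi + math.atan(math.sqrt(63))   # third-quadrant angle
assert math.cos(theta) < 0                   # cos is negative, as required
assert math.tan(theta) > 0
# sin(theta) = -sqrt(63)/8 = -3*sqrt(7)/8
assert abs(math.sin(theta) - (-3 * math.sqrt(7) / 8)) < 1e-12
```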
|
|trigonometry|
| 0
|
Complex version of Lax-Milgram Theorem
|
I'm trying to prove Lax-Milgram Theorem in the complex case, i.e. Let $X$ be complex Hilbert space and let $f\in X'$, its topological dual. If $a(\cdot,\cdot):X\times X\to \mathbb{C}$ is sesquilinear bounded coercive form, then there exists a unique $u\in X$ such that \begin{equation} a(u,v)=\langle f , v \rangle\qquad\forall\ v\in X. \end{equation} I'm trying to proceed as in the real case. The only thing that differs is the statement of Riesz Representation Theorem. First of all, I rewrite the variational problem. By Riesz Representation Theorem there exists a unique $w\in X$ such that \begin{equation} \langle f, v \rangle =(v,w)_X\qquad\forall\ v\in X. \end{equation} We need to define a bounded linear bijection $A:X\to X$ such that \begin{equation} (v,w)_X=(v,Au)_X\qquad\forall\ v\in X, \end{equation} for a proper $u\in X$. Any help?
|
This is Theorem 6 on page 57 of: Peter D. Lax, Functional Analysis , Wiley, ISBN 0-471-55604-1.
|
|complex-analysis|functional-analysis|partial-differential-equations|operator-theory|
| 0
|
Finding solutions of modulus functions
|
I decided to do some practice with functions, and was posed the following question: So, I sketched the two graphs. For convenience I'll display a photo of them from Desmos. The blue line is | $3x - 2$ | and the red is | $x-5$ |. Now, the two graphs intersect where the two functions produce the same output for the same input. However, the modulus sign seemed to trip me up when I wrote | $3x - 2$ | = | $x-5$ | and began doing algebra. I couldn't merely solve it as if the functions had no modulus sign, since that is a different function. So, I figured by inspection that only the left side of the red function makes the intersections, so the following conditions are the only valid ones. $$-x+5 = 3x-2$$ $$-x+5 = -3x+2$$ And with that I can find the inputs necessary. However, is there a general way to solve this type of problem? Say, if I didn't have the graph to make the inference that I did? How could I solve this problem if I couldn't graph the tw
|
Here are the general steps, without making a graph: Step 01: For solving equations of the form $|ax+b|=|cx+d|$ , open the mod; the equation then looks like $ax+b=cx+d$ . Solve for the $x$ coordinate; after solving you get $x=\frac{d-b}{a-c}$ . Step 02: for the $y$ coordinate, put $y = |ax_1 + b|$ where $x_1 = \frac{d-b}{a-c}$ . Step 03: Repeat step 1 and step 2 on the equation $|ax+b|=|cx+d|$ after replacing $b$ by $-b$ and $a$ by $-a$ ; the expression becomes $|-ax-b|=|cx+d|$ , and opening the mod it looks like $-ax-b=cx+d$ . Follow step 1 and apply step 2 to get the $y$ coordinate. Then you will be able to get your 2 solutions. How I came up with these steps: just by observing 3 things. The lines always intersect in the 1st and 2nd quadrants, so if they intersect in the 3rd and 4th quadrants without the mod, the value of $x$ will still be the same, so we remove the mod without changing anything as mentioned in step 01, changing the slope from +ve to -ve accordingly to get the other intersection of these lines, as with the mod we get
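The two cases can be wrapped into a small helper (a sketch; the function name is mine) and checked against the question's example $|3x-2|=|x-5|$:

```python
def solve_abs_eq(a, b, c, d):
    # |ax + b| = |cx + d|  <=>  ax + b = +-(cx + d); collect the real solutions
    sols = []
    if a != c:
        sols.append((d - b) / (a - c))      # case ax + b = cx + d
    if a != -c:
        sols.append((-d - b) / (a + c))     # case ax + b = -(cx + d)
    return sols

roots = solve_abs_eq(3, -2, 1, -5)          # |3x - 2| = |x - 5|
# every returned x really satisfies the original modulus equation
assert all(abs(abs(3 * x - 2) - abs(x - 5)) < 1e-12 for x in roots)
```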
|
|functions|absolute-value|
| 0
|
Conditions for quaternions to be valid
|
I have begun reading about quaternions after a long time. However, I should answer this soon to the other members of my work group. My understanding is that quaternions are composed of four values $w$, $x$, $y$ and $z$, and that they can be used to represent rotations. I have received data that supposedly represent quaternions. What are the conditions that have to be met for these data to be valid? I have been told one condition is that the quaternions have to have unit length, so I am checking this condition on the data. I was told a second one is that the $w$ part (the scalar part) has to be non-negative to represent a proper rotation. However, watching videos about quaternions makes me doubt this. Are there other conditions for data to be valid as quaternions?
|
Every unit quaternion represents a rotation, i.e., if you have $(w, x, y, z)$ and $$ w^2 + x^2 + y^2 + z^2 = 1 $$ then it represents a rotation. Unfortunately, the rotation represented by $(w, x, y, z)$ and the one represented by $(-w, -x, -y, -z)$ are one and the same, so folks often say "Only use non-negative values of $w$ " as a way to make things unique. Unfortunately, that doesn't really work, because $(0, 1, 0, 0)$ and $(0, -1, 0, 0)$ still represent the same rotation. But you can't just say "OK, make $w, x, y,$ and $z$ all be nonnegative", because that ends up ruling out some rotations that you want, like $(0, s, -s, 0)$ , where $s = \sqrt{2}/2$ . So when you want to check whether your data is valid, the sum-of-squares is the only sure thing; non-negative $w$ is something you might expect to see ... a quick look at the data in a spreadsheet would tell you a lot, as would an email to the person who sent you the data ... and beyond that, there's probably no other fixed pattern in
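In code, the two checks discussed above look like this (a sketch; the function names are mine):

```python
import math

def is_unit_quaternion(w, x, y, z, tol=1e-9):
    # the one hard validity requirement: unit length
    return abs(w * w + x * x + y * y + z * z - 1.0) < tol

def same_rotation(q1, q2, tol=1e-9):
    # q and -q give the same rotation (double cover of SO(3)),
    # so compare unit quaternions up to sign via the absolute dot product
    dot = sum(a * b for a, b in zip(q1, q2))
    return abs(abs(dot) - 1.0) < tol

s = math.sqrt(2) / 2
assert is_unit_quaternion(0.0, s, -s, 0.0)          # the example from the text
assert same_rotation((0.0, 1.0, 0.0, 0.0), (0.0, -1.0, 0.0, 0.0))
```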
|
|quaternions|
| 0
|
Proof related to factorization of a quadratic equation
|
Can we prove or disprove this given statement: "If a, b, and c are integers with a≠0, and the roots of the equation $$ax^2 +bx+c=0$$ are rational, then the equation can be factored as $$(ax+m)(x+n)$$ where m is an integer and n is a rational number, and this factorization is unique."
|
The factorization is not unique so the statement can be disproven. For example take $2x^2+6x+4$ . We can factor this polynomial as either $(2x+2)(x+2)$ or $(2x+4)(x+1)$ , showing that the factorization is not unique.
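A quick check that both factorizations expand to the same quadratic (the coefficient-tuple encoding is my own):

```python
def expand(p, q):
    # multiply (p0*x + p1) * (q0*x + q1) and return (x^2, x, 1) coefficients
    return (p[0] * q[0], p[0] * q[1] + p[1] * q[0], p[1] * q[1])

# (2x+2)(x+2) and (2x+4)(x+1) both give 2x^2 + 6x + 4
print(expand((2, 2), (1, 2)), expand((2, 4), (1, 1)))
```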
|
|quadratics|
| 0
|
$\frac{\log(x)}{1-x}$ is increasing
|
How can I show that the function $h(x):= \frac{\log(x)}{1-x}$ is increasing on $\mathbb{R}_{>0}$ ? The derivative of $h$ is $h'(x)= \frac{x\log(x)-x+1}{x(1-x)^2}$ but I don't see why $h'(x)$ is always non-negative.
|
Using the well known inequality $\log(x) \ge 1-1/x$ we get that $$ x \log(x) - x + 1 \ge x \left( 1-\frac 1x \right) - x + 1 = 0 $$ for all $x > 0$ . This proves that $h'(x) \ge 0$ , i.e. that $h$ is increasing. More precisely, $h'(x) > 0$ except for $x=1$ , so that $h$ is strictly increasing. Another solution: $$ -h(x) = \frac{\log(x)}{x-1} = \frac{1}{x-1} \int_1^x \frac{dt}{t} = \int_0^1 \frac{ds}{1+s(x-1)} $$ with the substitution $t = 1 + s(x-1)$ . The last integral is decreasing in $x$ because the integrand is decreasing in $x$ for every $s$ .
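A numerical sanity check of both claims (the grid and tolerances are arbitrary choices of mine): the numerator $x\log(x)-x+1$ stays nonnegative, and $h$ increases along a grid that skips the removable point $x=1$.

```python
import math

def h(x):
    return math.log(x) / (1 - x)

xs = [0.05 + 0.1 * k for k in range(60)]   # 0.05, 0.15, ..., avoids x = 1 exactly
assert all(x * math.log(x) - x + 1 >= 0 for x in xs)
vals = [h(x) for x in xs]
print(all(a < b for a, b in zip(vals, vals[1:])))  # True: h is increasing
```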
|
|calculus|
| 0
|
Integral of a product divided by a sum of Gaussian PDFs
|
Let $f(x)$ and $g(x)$ be the PDFs of two Gaussian distributions both with zero mean, and variances $\sigma_1$ and $\sigma_2$ respectively. I'm trying to compute $$ \int_{-\infty}^{\infty} \frac{f(x) g(x)}{f(x)+g(x)}dx $$ but I am unable to make any progress. I can also use the fact that the product of two Gaussian densities is an (unnormalized) Gaussian in the same variable to simplify the numerator but this doesn't seem to help. I would also settle for upper/lower bounds. For example you can bound point-wise $f(x)+g(x)$ with another scaled Gaussian. We can also assume $\sigma_2 = c \sigma_1$ for some constant $c$ if that's helpful.
|
Assume $\sigma_1 \le \sigma_2$ . Writing $$I=\int_{-\infty}^{\infty}\frac{f(x)}{1+\frac{f(x)}{g(x)}}dx=\sum_{n=0}^{\infty}(-1)^n\int_{-\infty}^{\infty}\frac{f^{n+1}(x)}{g^n(x)}dx$$ you land on the series $$\sum_{n=0}^{\infty}(-1)^n \frac{a^nb}{\sqrt{n+c}}$$ where $a,b,c$ are easily computable, but not the sum of the series.
|
|definite-integrals|normal-distribution|gaussian-integral|
| 0
|
Is $U^{-1}(U(x)+U(y))$ a convex function in general?
|
Let $U(x)$ be a positive, strictly increasing, strictly convex $C^2$ function in $x$ , is it generally true that $U^{-1}(U(x)+U(y))$ is a convex function in $x,y$ ? For $U(x)=e^x$ , it is well known that the log-sum-exponential function is convex. The statement is also true for power functions $U(x)=x^{1+\beta}$ . What about the general case?
|
Here is a counterexample. Let $U(x) := \mathrm{e}^{\sqrt{x}}$ . We have $$U'(x) = \frac{1}{2\sqrt{x}}\mathrm{e}^{\sqrt{x}}, \quad U''(x) = \frac{1}{4x^{5/2}}\mathrm{e}^{\sqrt{x}}(x^{3/2} - x).$$ Thus, $U(x)$ is positive, strictly increasing, strictly convex on $x > 1$ . We have $U^{-1}(x) = \ln^2 x$ . We have $$f(x, y) = U^{-1}(U(x) + U(y)) = \ln^2 \left(\mathrm{e}^{\sqrt{x}} + \mathrm{e}^{\sqrt{y}}\right).$$ We have $$\frac{\partial^2 f(x, 3/2)}{\partial x^2}\Big\vert_{x=3/2} < 0.$$ Note that $U(x) := \mathrm{e}^{\sqrt{x}}$ satisfies $U'''(x) U'(x) - 2[U''(x)]^2 > 0$ on $(1, 3)$ . See below. Some thoughts. Assume that $U'''(x)$ exists. If $U'''(x) U'(x) - 2[U''(x)]^2 > 0$ , then $U^{-1}(U(x) + U(y))$ is not convex. For $U(x) = \mathrm{e}^x, x^2$ etc, we have $U'''(x) U'(x) - 2[U''(x)]^2 \le 0$ . Reasoning. Let $z := U^{-1}(U(x) + U(y))$ . We have $U(z) = U(x) + U(y)$ . Taking the derivative with respect to $x$ on both sides, we have $$U'(z) \cdot \frac{\partial z}{\partial x} = U'(x)$$ which res
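The negative second derivative can be checked numerically with a central difference at the point $(3/2, 3/2)$ (the step size is my own choice; this is a sanity check, not a proof):

```python
import math

def f(x, y):
    # f(x, y) = ln^2(e^sqrt(x) + e^sqrt(y))
    return math.log(math.exp(math.sqrt(x)) + math.exp(math.sqrt(y))) ** 2

h = 1e-4
d2 = (f(1.5 + h, 1.5) - 2 * f(1.5, 1.5) + f(1.5 - h, 1.5)) / h ** 2
print(d2)  # negative, so f is not convex near (3/2, 3/2)
```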
|
|functions|inequality|convex-analysis|convex-optimization|
| 0
|
Prove that the sum of a convex function and a concave function is convex.
|
I was trying to prove, for the sake of curiosity, whether the sum of a convex and a concave function is convex, so I tried to do the following: Let f: R -> R and g: R -> R, where f is convex and g is concave. Prove that (f + g) : R -> R is convex. Proof: Since f is convex, by definition it holds that: $f(tx + (1-t)y) \le tf(x) + (1-t)f(y), \forall x,y \in \Re, t \in (0,1)$ And g is concave, so: $g(tx + (1-t)y) > tg(x) + (1-t)g(y), \forall x,y \in \Re, t \in (0,1)$ Now, I can write the inequalities like: $f(tx + (1-t)y) - tf(x) - (1-t)f(y) \le 0$ $-g(tx + (1-t)y) + tg(x) + (1-t)g(y) < 0$ It follows that: $f(tx + (1-t)y) - tf(x) - (1-t)f(y) \le -g(tx + (1-t)y) + tg(x) + (1-t)g(y)$ Doing some calculation, I came up with: $f(tx + (1-t)y) + g(tx + (1-t)y) \le t(f(x) + g(x)) + (1-t)(f(y) + g(y))$ I don't know if I have made some reasoning or logical mistakes; in fact it looks fine to me, but I don't trust myself so much to say that it's correct, so please let me know. Thanks.
|
It's obviously not true. If it were true, then you could choose $f_1(x) = -g(x)$ and $g_1(x) = -f(x)$ ; then $f_1(x)$ is convex and $g_1(x)$ is concave, and $f_1(x)+g_1(x) = -(f(x)+g(x))$ is concave, hence a contradiction.
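A minimal numerical illustration (the functions are my own choice): $f(x)=x^2$ is convex, $g(x)=-2x^2$ is concave, and their sum $-x^2$ violates the convexity inequality at a midpoint.

```python
f = lambda x: x * x        # convex
g = lambda x: -2 * x * x   # concave
s = lambda x: f(x) + g(x)  # = -x^2, concave, not convex

x, y, t = -1.0, 1.0, 0.5
lhs = s(t * x + (1 - t) * y)      # s(0) = 0
rhs = t * s(x) + (1 - t) * s(y)   # -1
print(lhs <= rhs)  # False: convexity fails for the sum
```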
|
|real-analysis|functions|solution-verification|convex-analysis|
| 0
|
Are linearly ordered topological spaces well-based?
|
A linearly-ordered topological space or LOTS is one whose topology admits a basis generated by open intervals of a total ordering of its points. A well-based space is one which admits a local basis at each point that is totally ordered by set inclusion. It appeared to me that the former implied the latter, and I attempted to prove it via the following: for any point $p$ in a LOTS take (possibly transfinite) sequences $a:\Gamma\rightarrow[-\infty,p)$ and $b:\Gamma\rightarrow(p,\infty]$ for some ordinal $\Gamma$ which are monotonic and surjective with $a_0=-\infty$ and $b_0=\infty$ . Then for any open interval $(c,d)$ containing $p$ there is an ordinal $\beta$ such that $c\le a_\beta$ and $b_\beta\le d$ , so $\{(a_\alpha,b_\alpha)\mid\alpha\in\Gamma\}$ is a local basis of $p$ . However, I got a response that a LOTS can fail to be well-based if there exists a point with different cofinalities on the left and the right, giving the example of $\omega_1+1+\omega^*$ with the order topology (where $+$ denotes order concatenation)
|
There is no monotonic surjection $s:\kappa\to\lambda$ when $cf(\kappa)>\lambda$ . To see this, we construct a cofinal map $f:\lambda\to\kappa$ by $f(\alpha)=\min\{\beta \mid s(\beta)=\alpha\}$ . We then see that $f$ is cofinal: given $\beta\in\kappa$ , consider $s(\beta)+1$ . There is some minimal $\gamma$ such that $s(\gamma)=s(\beta)+1$ , so $f(s(\gamma))=\gamma$ . Since $s$ is monotonic and $s(\gamma)>s(\beta)$ , we have $\gamma>\beta$ .
|
|general-topology|order-theory|
| 0
|
Find the last three digits of $\large\phi({3}^{2})^{\phi({4}^{2}){}^{\phi({5}^{2})^{.^{.^{.\phi(2019{}^{2})}}}}{}^{}}$
|
Given any positive integer $n,$ let $\phi(n)$ be the number of positive integers less than or equal to $n$ that are relatively prime to $n.$ Find the last three digits of $\large\phi({3}^{2})^{\phi({4}^{2}){}^{\phi({5}^{2})^{.^{.^{.\phi(2019{}^{2})}}}}{}^{}}$ My attempt: I calculated the Euler totient values separately by the formula $\phi(n)=n\big(1-\frac{1}{p_1}\big)\big(1-\frac{1}{p_2}\big)\cdots\big(1-\frac{1}{p_k}\big),$ and then I worked with them $\pmod{1000}.$ But it looks very tedious! Maybe there is some smarter way to tackle this problem. Any guidance would be highly appreciated. Thank you!
|
As announced in comments, let us begin with finding an eventual period of the powers $\bmod{1000}$ of $\phi(3^2)=6$ , hence a period of the powers $\bmod{125}$ of $6$ . The smallest solution, $25$ , is given by the binomial formula: $6^{25}=(1+5)^{25}\equiv1\bmod{125}$ . (Note that we never use that $\varphi(125)=100$ .) A period of the powers $\bmod{25}$ of $\phi(4^2)=8$ being $a:=\phi(25)=\phi(5^2)$ , which is precisely the next nested exponent, we stop, set $$m:=\large\phi(6^2)^{\phi(7^2){}^{\phi(8^2)^{.^{.^{.\phi(2019{}^{2})}}}}{}^{}},$$ and calculate backwards: $$\phi(5^2)^m=ab$$ (with $b:=a^{m-1}$ ). $\bmod{25},\;\phi(4^2)^{ab}=8^{ab}\equiv1^b=1$ hence $$\phi(4^2)^{ab}=25c+1.$$ $\phi(3^2)^{25c+1}=6^{25c+1}\equiv\begin{cases}6\bmod{125}\\0\bmod8,\end{cases}$ hence $$\phi(3^2)^{25c+1}\equiv256\bmod{1000}.$$
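The key congruences can be machine-checked (a sketch; `phi` is a naive trial-division totient of my own): $\phi(3^2)=6$, $6^{25}\equiv1\pmod{125}$, and every exponent of the form $25c+1$ with $c\ge1$ gives a power of $6$ ending in $256$.

```python
def phi(n):
    # naive Euler totient by trial division
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

assert (phi(3**2), phi(4**2), phi(5**2)) == (6, 8, 20)
assert pow(6, 25, 125) == 1
# exponents 26, 51, 76, ... all give the same last three digits
print({pow(6, 25 * c + 1, 1000) for c in range(1, 6)})  # {256}
```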
|
|elementary-number-theory|totient-function|
| 0
|
Series of products of Bessel functions of the first kind
|
I am a physicist and encountered the following series in my research: \begin{equation} \sum_{n=-\infty}^\infty n^p J_n^2(z), \end{equation} where $J_n(z)$ is a Bessel function of the first kind and $p$ a nonnegative integer. I would like to know if there exists a general expression of this series in terms of a Taylor series in $z$ ? From the symmetry properties of the Bessel function I already know that the sum vanishes for odd $p$ and that it has to be an even function of $z$ . Hence, we can restrict ourselves to the sum \begin{equation} \sum_{n=-\infty}^\infty n^{2m} J_n^2(z), \end{equation} where $m$ is a nonnegative integer. By using the generating function of Bessel functions, I have written a program in Mathematica that essentially finds the series expansion for a given $m$ . By looking at a few cases, I have established the following hypothesis: \begin{equation} \sum_{n=-\infty}^\infty n^{2m} J_n^2(z) = \sum_{l=0}^m c_{m,l} \frac{z^{2l}}{(2l)!}. \end{equation} So essentially I w
|
A possible approach is to use the generating function $$ \sum_{n\in\mathbb{Z}}J_n(z)J_{n+m}(z)t^{2n+m}=I_m\big(z(t-1/t)\big)\qquad(z,t\in\mathbb{C}) $$ which can be deduced from known generating functions $$ \sum_{n\in\mathbb{Z}}J_n(z)t^n=e^{\frac z2\left(t-\frac 1t\right)}, \quad \sum_{n\in\mathbb{Z}}I_n(z)t^n=e^{\frac z2\left(t+\frac 1t\right)}. $$ In detail, Cauchy's integral formula yields $$J_n(z)=\frac1{2\pi i}\oint e^{\frac z2\left(w-\frac1w\right)}\frac{dw}{w^{n+1}}\implies J_{n}(z)t^n=\frac1{2\pi i}\oint e^{\frac z2\left(tw-\frac1{tw}\right)}\frac{dw}{w^{n+1}},$$ so that \begin{align}\sum_{n\in\mathbb{Z}}J_n(z)J_{n+m}(z)t^{2n+m}&=\frac1{2\pi i}\sum_{n\in\mathbb{Z}}J_n(z)t^n\oint e^{\frac z2\left(tw-\frac1{tw}\right)}\frac{dw}{w^{n+m+1}}\\&=\frac1{2\pi i}\oint e^{\frac z2\left(tw-\frac1{tw}\right)}e^{\frac z2\left(\frac tw-\frac wt\right)}\frac{dw}{w^{m+1}},\end{align} which equals $I_m\big(z(t-1/t)\big)$ , again by Cauchy. Assume $m\geqslant 0$ (the other case is treated simil
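By Parseval applied to $\sum_n J_n(z)e^{in\theta}=e^{iz\sin\theta}$, the first two cases of the question's hypothesis are $\sum_n J_n^2=1$ (so $m=0$) and $\sum_n n^2J_n^2=z^2/2$ (so $c_{1,1}=1$). A numerical sketch (my own quadrature-based $J_n$; the sample count and truncation limit are arbitrary):

```python
import math

def bessel_j(n, z, M=512):
    # J_n(z) = (1/2pi) * integral_0^{2pi} cos(n t - z sin t) dt for integer n;
    # the equispaced Riemann sum is spectrally accurate for periodic integrands
    return sum(math.cos(n * (2 * math.pi * k / M) - z * math.sin(2 * math.pi * k / M))
               for k in range(M)) / M

def weighted_sum(z, m, N=40):
    # truncation of sum_{n in Z} n^{2m} J_n(z)^2
    return sum(n ** (2 * m) * bessel_j(n, z) ** 2 for n in range(-N, N + 1))

z = 1.3
print(weighted_sum(z, 0), weighted_sum(z, 1))  # ~1 and ~z^2/2 = 0.845
```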
|
|taylor-expansion|power-series|bessel-functions|
| 0
|
Hyperbolic trigonometry NOT in Poincare disk
|
There are a lot of hyperbolic trigonometric identities derived in the Poincare disk model that resemble similar identities from Euclidean geometry. For example, the analogue of the Pythagorean theorem is $\cosh(c) = \cosh(a)\cosh(b)$ , the analogue of the sine law is $\frac{\sin(A)}{\sinh(a)} = \frac{\sin(B)}{\sinh(b)}=\frac{\sin(C)}{\sinh(c)}$ and so on. After reading the proofs of those identities I wondered whether they are specific to the Poincare disk model, or whether they can be arrived at in other models, e.g. the Beltrami-Klein model. I suppose in other models those identities look different, but I'm not sure. If so, how does the Pythagorean theorem look in the Beltrami-Klein model?
|
The identities of hyperbolic trigonometry are true intrinsically . They are equations relating distances and angles of triangles (and other shapes), and those distances and angles themselves are intrinsic to hyperbolic geometry, so the identities that they satisfy are true independent of which model of the hyperbolic plane that you wish to work in. Perhaps you are misled by the fact that some models of the hyperbolic plane are conformal, others are not. A conformal model means a model embedded within the Euclidean plane, such that hyperbolic and Euclidean angles are equal. The Poincare disc model is conformal, as is the upper half plane model. The Beltrami-Klein model is not conformal. But that doesn't matter because those models are all isometric to each other, meaning that between any two of those models there is a bijection that preserves the distances between points, and so everything derived from those distances (e.g. angles) is also preserved, hence all identities about distances
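A model-independent check of $\cosh(c)=\cosh(a)\cosh(b)$, computed here in the hyperboloid model (the coordinates and leg lengths are my own choices): two geodesic legs from $(0,0,1)$ along orthogonal directions meet at a right angle, and the Lorentzian inner product of the endpoints gives $\cosh$ of the hypotenuse.

```python
import math

a, b = 0.7, 1.1                        # leg lengths, arbitrary
A = (math.sinh(a), 0.0, math.cosh(a))  # both points lie on z^2 - x^2 - y^2 = 1
B = (0.0, math.sinh(b), math.cosh(b))
# hyperbolic distance d satisfies cosh d = <A, B> in the Lorentzian product
cosh_c = A[2] * B[2] - A[0] * B[0] - A[1] * B[1]
print(cosh_c - math.cosh(a) * math.cosh(b))  # 0: the hyperbolic Pythagorean identity
```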
|
|trigonometry|hyperbolic-geometry|
| 1
|
Prove that $a^2-f^2+abd-acd+ace-abe-bcf+cdf+bef-def\leq 0.$
|
Let $0\leq a\leq b\leq c\leq d\leq e\leq f$ . Prove that $$a^2-f^2+abd-acd+ace-abe-bcf+cdf+bef-def\leq 0.$$ I have tried to split it into pairs and prove that each such pair is negative. However, I have been unsuccessful with this approach as of now. This of course is equivalent to the following inequality, which might be easier to see than the previous one: $$\frac{a^2-f^2+abd-acd+ace-abe-bcf+cdf+bef-def}{a-f}\geq 0.$$
|
I would try to split the whole thing in pairs: You know that $a \le f$ , so $a^2 \le f^2$ , so $a^2-f^2 \le 0$ . Also you know that $b \le c$ , hence $ad \cdot b \le ad \cdot c$ , meaning that $abd \le acd$ or $abd - acd \le 0$ . Try to continue like this, creating all such pairs.
|
|algebra-precalculus|inequality|contest-math|
| 0
|
How to compensate for overcount?
|
Task: Find the number of ways to distribute 10 customers (distinguishable) to 7 salesmen (dist.) with each having at least 1 customer. Attempt : We first pick 7 customers (unordered: $C(10,7)$ ) and give 1 to each salesman (order matters thus $P(7,7)$ ways to do this) for a total of $C(10,7) \cdot P(7,7) = P(10,7) $ ways. We now have 7 options for the remaining 3 customers, thus we get $P(10,7) \cdot 7^3$ total. However, whenever a salesman has more than 1 customer, we have overcounted. Question: How do I compensate for this?
|
Short answer: $S(10,7)\cdot 7!$ , where $S(n,k)$ is the Stirling number of the second kind . Long answer: use the inclusion-exclusion principle . There are $7^{10}$ ways to distribute $10$ customers between $7$ salesmen. We want to subtract the number of distributions where a salesman didn't get a customer. There are $\binom71$ ways to choose an empty salesman, and $6^{10}$ ways to distribute the customers between the other salesmen. Now we have to add back the distributions where two salesmen got no customers. There are $\binom72$ of them, and $5^{10}$ ways to distribute the customers. And so on. In total, we get $$\sum_{k=0}^7 (-1)^k\cdot(7-k)^{10}\cdot\binom7k.$$ The value is equal to $29635200.$
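Both formulas can be checked directly (the recurrence-based `stirling2` helper is my own):

```python
from math import comb, factorial

def stirling2(n, k):
    # S(n, k) via the recurrence S(n, k) = k*S(n-1, k) + S(n-1, k-1)
    S = [[0] * (k + 1) for _ in range(n + 1)]
    S[0][0] = 1
    for i in range(1, n + 1):
        for j in range(1, k + 1):
            S[i][j] = j * S[i - 1][j] + S[i - 1][j - 1]
    return S[n][k]

incl_excl = sum((-1) ** k * comb(7, k) * (7 - k) ** 10 for k in range(8))
print(incl_excl, stirling2(10, 7) * factorial(7))  # both 29635200
```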
|
|combinatorics|permutations|combinations|
| 0
|
Prove that $a^2-f^2+abd-acd+ace-abe-bcf+cdf+bef-def\leq 0.$
|
Let $0\leq a\leq b\leq c\leq d\leq e\leq f$ . Prove that $$a^2-f^2+abd-acd+ace-abe-bcf+cdf+bef-def\leq 0.$$ I have tried to split it into pairs and prove that each such pair is negative. However, I have been unsuccessful with this approach as of now. This of course is equivalent to the following inequality, which might be easier to see than the previous one: $$\frac{a^2-f^2+abd-acd+ace-abe-bcf+cdf+bef-def}{a-f}\geq 0.$$
|
Write the inequality in the form $(a+f)(a-f)+a(c-b)(e-d)-f(d-b)(e-c) \leq 0$ . It is then clear that $(a+f)(a-f) \leq 0$ , and since $a \le f$ , $c-b \le d-b$ and $e-d \le e-c$ (all of these factors being nonnegative), $\begin{align*} a(c-b)(e-d)-f(d-b)(e-c) &\leq f(c-b)(e-d)-f(d-b)(e-c) \\ &\leq f(d-b)(e-c)-f(d-b)(e-c)=0. \end{align*}$
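A randomized sanity check of both the regrouping identity and the inequality itself (the sampling scheme is my own):

```python
import random

def E(a, b, c, d, e, f):
    # the original expression
    return (a*a - f*f + a*b*d - a*c*d + a*c*e - a*b*e
            - b*c*f + c*d*f + b*e*f - d*e*f)

def grouped(a, b, c, d, e, f):
    # the regrouped form used in the answer
    return (a + f) * (a - f) + a * (c - b) * (e - d) - f * (d - b) * (e - c)

random.seed(0)
for _ in range(10000):
    a, b, c, d, e, f = sorted(random.uniform(0, 10) for _ in range(6))
    assert abs(E(a, b, c, d, e, f) - grouped(a, b, c, d, e, f)) < 1e-9
    assert E(a, b, c, d, e, f) <= 1e-9
print(E(1, 2, 3, 4, 5, 6))  # -58, consistent with the claim
```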
|
|algebra-precalculus|inequality|contest-math|
| 1
|
Scope of Von Neumann's Minimax Theorem
|
Does Von Neumann's Minimax Theorem concluding that it doesn't matter which player moves first (when moving means submitting a probability distribution whose realizations take place after both moves are submitted) apply to zero-sum games only if they consist of a single move from each player, or does it also apply to zero-sum games which have several alternating moves (the type of game ordinarily solved by backward induction)? If the latter is correct, then why does the theorem not imply that neither white nor black has an edge in Chess? If the former , then why does the theorem appear when players use multiplicative weights to play zero-sum games over a sequence of periods in online learning models ?
|
It's both. Well, it applies directly to 2-player zero-sum games in normal form (i.e. a matrix game, where each player simultaneously picks a single move). However, an extensive-form game (EFG, a formalism for sequential games) can be converted into an equivalent normal-form game (see Kuhn's theorem), by creating a matrix game whose actions are the pure strategies of the EFG. Thus, the minimax theorem applies to sequential games as well, but in the space of pure strategies -- this means that in chess, it doesn't matter who chooses their strategy first (obviously, since if each player can choose their strategy, they should just choose to play the optimal strategy). Details: A pure behavioral strategy for a game like chess is a function from a board state to an action. So when each player chooses their strategy before starting a game of chess, it means that each player commits to the move that they would play, for every possible state in the game of chess.
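A tiny numerical illustration of the minimax theorem on matching pennies (the grid resolution is my own choice; the optima happen to be attained at grid points here):

```python
# Matching pennies: expected payoff (2p - 1)(2q - 1) when the row player
# mixes with probability p and the column player with probability q.
grid = [i / 100 for i in range(101)]

def payoff(p, q):
    return (2 * p - 1) * (2 * q - 1)

maximin = max(min(payoff(p, q) for q in grid) for p in grid)  # row moves first
minimax = min(max(payoff(p, q) for p in grid) for q in grid)  # column moves first
print(maximin, minimax)  # both 0: the order of commitment doesn't matter
```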
|
|optimization|game-theory|
| 1
|
How is the sheafy De Rham cohomology functorial?
|
$\newcommand{\T}{\mathscr{T}}\newcommand{\C}{\mathscr{C}^\infty}$ I've been enjoying Iversen's book on sheaf cohomology. He briefly mentions De Rham cohomology and a sheafy perspective on it but he doesn't mention if the canonical isomorphism is functorial; everyone says that it is, but with the sheaf perspective I'm not so sure. It's not been clear to me or people I've talked to how to even define sheafy De Rham cohomology as a functor. To explain the problem I need to explain his definition first. Let "smooth manifold" mean the usual but emphasise second-countable, Hausdorff and being without boundary (the latter for convenience and the first two because we need our manifolds to be $\sigma$ -compact). All sheaves and sheaf morphisms considered will be of $\Bbb R$ -sheaves. For a smooth manifold $X$ , let $\C_X$ denote the sheaf of smooth $\Bbb R$ -valued functions. Let $\T_X$ denote the tangential sheaf; this is the sheaf with sections $\T_X(U)$ the vector space of all sheaf morphism
|
$\newcommand{\T}{\mathscr{T}}\newcommand{\C}{\mathscr{C}^\infty}$ Thanks to a helpful onlooker, there is a very satisfactory solution making the sheafy definition functorial and agreeing with ordinary cohomology; it probably descends to something equivalent to the classical definition, and I can write it up now. I'm sure the universal derivation perspective offered in comments by Aphelli also works out but I was less sure of the details. In my question, I lamented the lack of a "derivative". Well, turns out there is one. We don't need to work with tangent vectors and pointwise stuff, we can just locally use Jacobians. Fix smooth manifolds $X,Y$ and some $F:X\to Y$ . Let $F^{-1}$ denote the sheaf theoretic preimage. Observe that precomposition by $F$ yields a map $\C_Y\to F_\ast\C_X$ of $\Bbb R$ -sheaves on $Y$ , even of $\Bbb R$ -algebras, and by adjunction there is a map $F^{-1}\C_Y\to\C_X$ of $\Bbb R$ -algebra-sheaves on $X$ , making $\C_X$ a $F^{-1}\C_Y$ -module. Thus there is a pul
|
|differential-geometry|sheaf-theory|sheaf-cohomology|de-rham-cohomology|
| 1
|
How to prove the two answers to an integral are equivalent
|
I'm trying to do the integral: $$\int{\frac{1}{\sqrt{e^{-2x}-1}}}dx$$ So I try two ways to do it; the first method is to multiply the numerator and denominator by $e^x$ . $$\int{\frac{1}{\sqrt{e^{-2x}-1}}}dx$$ $$=\int{\frac{e^x}{\sqrt{1-e^{2x}}}}dx$$ $$=\int{\frac{\sin{\theta}}{\sqrt{1-\sin^2{\theta}}}\times\frac{\cos{\theta}}{\sin{\theta}}}d\theta$$ $$=\theta+C$$ $$=\arcsin{(e^x)}+C$$ The second method is to do a substitution directly. $$\int{\frac{1}{\sqrt{e^{-2x}-1}}}dx$$ $$=-\int{\frac{1}{u}\times\frac{u}{u^2+1}}du$$ $$=-\arctan{u}+C$$ $$=-\arctan{\sqrt{e^{-2x}-1}}+C$$ After inputting them into Desmos, I found out that the distance between the graphs is the constant $\frac{\pi}{2}$ . So I want to ask: can you prove that $$\arcsin{(e^x)}-(-\arctan{\sqrt{e^{-2x}-1}})$$ is a constant?
|
You can compute the derivatives: $$\arcsin'y = \frac{1}{\sqrt{1-y^2}} ,$$ $$\arctan'y = \frac{1}{1+y^2} .$$ This shows that the derivative of $\arcsin{(e^x)}-(-\arctan{\sqrt{e^{-2x}-1}})$ is $$\frac{1}{\sqrt{1-e^{2x}}}e^x + \frac{1}{1+e^{-2x} -1}\frac 1 2 \frac{1}{\sqrt{e^{-2x}-1}}(-2e^{-2x}) = \frac{e^x}{\sqrt{1-e^{2x}}} - \frac{1}{\sqrt{e^{-2x}-1}} \\= \frac{e^x}{\sqrt{1-e^{2x}}} - \frac{e^x}{\sqrt{1 -e^{2x}}} = 0 .$$ Hence $\arcsin{(e^x)}-(-\arctan{\sqrt{e^{-2x}-1}})= C$ . With $x = 0$ we get $$C = \arcsin 1 + \arctan 0 = \frac{\pi}{2} + 0 = \frac{\pi}{2}.$$
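The constancy can also be spot-checked numerically on the domain $x<0$ (the sample points are my own):

```python
import math

for x in (-3.0, -1.0, -0.25, -0.01):
    total = math.asin(math.exp(x)) + math.atan(math.sqrt(math.exp(-2 * x) - 1))
    print(x, total - math.pi / 2)  # ~0 at every sample point
```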
|
|calculus|integration|trigonometry|indefinite-integrals|inverse-trigonometric-functions|
| 0
|
Rate of convergence of mollified exponential on $[0, \infty)$
|
Consider the function $$ f(t)=e^{-at} $$ with $a>0$ , $t \in [0, \infty)$ , and let $\rho$ be a smooth function on $[0, \infty)$ such that $$ \int_0^\infty \rho(t)\,dt=1 $$ and let $\rho_\delta(t):= \frac{1}{\delta} \rho\left(\frac{t}{\delta}\right)$ so that $$ \lim_{\delta \to 0} f * \rho_\delta(t)= f(t) $$ I'm trying to obtain the rate of convergence of $(f *\rho_\delta, \phi)$ , but if I explicitly write the convolution I obtain (after a change of variable) $$ \int_0^T \phi(t) f * \rho_\delta(t)dt= \int_0^T \phi(t) \int_{-\infty}^{t/\delta} e^{-a(t-\delta \tau)} \rho(\tau) d\tau dt \lesssim \int_0^T \phi(t) \frac{e^{-at}}{a\delta} dt $$ where I have used that $\rho_\delta$ is supported in $[0, \infty)$ . As far as I can see this does not converge to $f$ . What is my mistake? Is there any way to obtain the rate of convergence of the convolution in terms of $\delta$ ?
|
We have: $$ \begin{align} \int_0^T \phi(t) f * \rho_\delta(t)dt &= \int_0^T \phi(t) \int_{-\infty}^{t} e^{-a\tau} \rho_\delta(t-\tau) d\tau \,dt \\ &= \int_0^T \phi(t) \int_{-\infty}^{t} e^{-a\tau} \frac{\rho(\frac{t-\tau}{\delta})}{\delta} d\tau \,dt \end{align} $$ Let $u(\tau)=\frac{t-\tau}{\delta}$ ; then $\tau = t - \delta u$ , $u(t) = 0$ , $u(-\infty)=+\infty$ and $d\tau=-\delta du$ , so $$ \begin{align} \int_0^T \phi(t) f * \rho_\delta(t)dt &= \int_0^T \phi(t) \int_{0}^{+\infty} e^{-a(t-\delta u)} \rho(u)du\,dt\\ &= \int_0^T \phi(t) e^{-at}\int_{0}^{+\infty} e^{a\delta u} \rho(u)du\,dt \end{align} $$ As $\delta \rightarrow 0$ , the integral tends to $$ \int_0^T \phi(t) e^{-at}\int_{0}^{+\infty} \rho(u)du\,dt = \int_0^T \phi(t) e^{-at}dt$$
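For a concrete rate, take the smooth unit-mass (though not compactly supported) choice $\rho(u)=e^{-u}$, an assumption of mine. Then $\int_0^\infty e^{a\delta u}\rho(u)\,du=1/(1-a\delta)$ for $a\delta<1$, so the pointwise error is $e^{-at}\,a\delta/(1-a\delta)=O(\delta)$:

```python
import math

a, t = 0.8, 1.3
for delta in (0.1, 0.01, 0.001):
    conv = math.exp(-a * t) / (1 - a * delta)  # (f * rho_delta)(t) in closed form
    err = conv - math.exp(-a * t)
    print(delta, err / delta)  # ratio tends to a*e^{-a t}: first order in delta
```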
|
|functional-analysis|convergence-divergence|distribution-theory|convolution|weak-convergence|
| 1
|
Behavior of an Infinite Series
|
I've been studying infinite series recently and believe I came across a counterintuitive (at least to me) result in the past from a textbook that I can't seem to find now. Is it possible to show $$\sum_{n=1}^\infty \frac{\csc(n)}{n}$$ converges? I actually think the result I saw was something along the lines of $\displaystyle\sum_{n=1}^\infty\frac{\csc(\sqrt{3}\, n)}{n}$ being a convergent series, though I don't remember the exact form and my internet searches haven't turned up anything that has been particularly helpful with this problem. Plugging the series into WolframAlpha as is shows that it diverges, but computing various values for finite series shows that the value stabilizes around $-77.5$ ( here is the sum from $n=1$ to $n=3000$ ) -- I computed various values up to $n=5000$ I did find related series (such as the Flint Hills Series and the Cookson Hills Series ) and this potentially-related MSE question .
|
The series under consideration is the following result due to Hardy: $$\sum_{n=1}^\infty\frac{1}{n^3\sin(n\pi\sqrt{2})}=-\frac{13\pi^3}{360\sqrt{2}}$$ This result can also be found in Titchmarsh's The Theory of Functions textbook.
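The closed form can be checked numerically; the partial sums converge because the small denominators $\sin(n\pi\sqrt2)$ occur only when $n$ is near a continued-fraction denominator of $\sqrt2$ (the truncation point is my own choice):

```python
import math

target = -13 * math.pi ** 3 / (360 * math.sqrt(2))
partial = sum(1 / (n ** 3 * math.sin(n * math.pi * math.sqrt(2)))
              for n in range(1, 100001))
print(partial, target)  # both close to -0.7917
```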
|
|sequences-and-series|reference-request|trigonometric-series|
| 0
|
What are the ingredients of the Sunada's theorem in this example?
|
I recall what this theorem says: let $M$ be a Riemannian manifold upon which a finite group $G$ acts by isometries, and let $H$ and $K$ be subgroups of $G$ that act freely. Suppose that $H$ and $K$ are almost conjugate, i.e., there is a bijection $f : H \to K$ carrying every element $h$ of $H$ to an element $f(h)$ of $K$ that is conjugate in $G$ to $h$ . Then the quotient manifolds $M_1 = M/H$ and $M_2 = M/K$ are isospectral. I found an example which yields isospectral manifolds: http://www.geom.uiuc.edu/docs/research/drums/planar/node2.html#SECTION00020000000000000000 It seems that $M$ is given in Fig 2 and that the quotient manifolds are in Fig 3. What are $G$ and its subgroups $H$ and $K$ ? Thanks. Xsnl wrote that the Buser propellers are not of Sunada type, and that is true. But if you read https://arxiv.org/pdf/2008.12498.pdf you can see that Berard gave a proof of Sunada's theorem which is still true without the free action condition. It uses orbifolds, but we still have $G$ and the subgroups $H$ and $K$ . What are they here?
|
Buser, Conway and Doyle give in their article https://arxiv.org/abs/1005.1839 the ingredients that we are looking for. There is a set called the Fano projective plane which contains 7 points and 7 "lines". We can put them (with their numbers) on a triangle with its medians. "0" is at its barycenter. Choose one side and label its middle with "1", then complete the labeling of the 5 other points to get the sequence 1 5 2 3 4 6 around the triangle. G is the group of the collineations of the points: https://en.wikipedia.org/wiki/Collineation If, with a given k, you map each point with number n to the point with number n+k modulo 7, you have a collineation in G. Conway gives 3 such collineations a = (0 1)(2 5) b = (0 2)(4 3) c = (0 4)(1 6) They generate a first subgroup of G. He then uses the duality between points and lines to get a second subgroup generated by a' = (0 4)(2 3) b' = (0 1)(4 6) c' = (0 2)(1 5) I recall that the Sunada theorem is still true without the free action condition.
|
|group-isomorphism|
| 0
|
Prove range of values for d for polynomial of degree 3
|
The following question is from a worksheet on polynomials. The polynomial $x^3 + cx + d=0$ has a solution when $x=d$ . Prove that $d$ is the only solution provided $-2 < d < 2$ , $d ≠ 0$ . My initial approach to this was through long division by the factor $(x-d)$ , solving for the remainder, which would equal zero, and finding the range of values for $d$ . This only provided $d(d^2+(c+1)) = 0$ . Given $d ≠ 0$ , I assumed the square would be the only solution, so using the discriminant, $D = 0$ , I solved for $c$ to arrive at $c = -1$ . This obviously gave $d^3 = 0$ which is not a possible solution. This also uses $d ≠ 0$ without any reason given in my working, and is only an assumption. Could anyone attempt to work through this proof, to arrive at the inequalities and explain why $d ≠ 0$ ?
|
Comment: you maybe have to specify the nature of the coefficients, because if not, your property is not true. Counterexample: if $f(x)=x^3-\dfrac{13}{4}x+\dfrac32$ then $f\left(\dfrac32\right)=f\left(\dfrac12\right)=f(-2)=0$ , with $-2\lt d=\dfrac32\lt2$ and $d\ne0$ .
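The three roots of this counterexample can be verified directly (exact in floating point, since all values involved are dyadic rationals):

```python
# f(x) = x^3 - (13/4) x + 3/2 vanishes at x = 3/2, 1/2, -2
f = lambda x: x ** 3 - 13 / 4 * x + 3 / 2
print(f(3 / 2), f(1 / 2), f(-2))  # 0.0 0.0 0.0
```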
|
|polynomials|proof-explanation|
| 0
|
Prove that the sum of a convex function and a concave function is convex.
|
I was trying to prove, for the sake of curiosity, whether the sum of a convex and a concave function is convex, so I tried to do the following: Let f: R -> R and g: R -> R, where f is convex and g is concave. Prove that (f + g) : R -> R is convex. Proof: Since f is convex, by definition it holds that: $f(tx + (1-t)y) \le tf(x) + (1-t)f(y), \forall x,y \in \Re, t \in (0,1)$ And g is concave, so: $g(tx + (1-t)y) > tg(x) + (1-t)g(y), \forall x,y \in \Re, t \in (0,1)$ Now, I can write the inequalities like: $f(tx + (1-t)y) - tf(x) - (1-t)f(y) \le 0$ $-g(tx + (1-t)y) + tg(x) + (1-t)g(y) < 0$ It follows that: $f(tx + (1-t)y) - tf(x) - (1-t)f(y) \le -g(tx + (1-t)y) + tg(x) + (1-t)g(y)$ Doing some calculation, I came up with: $f(tx + (1-t)y) + g(tx + (1-t)y) \le t(f(x) + g(x)) + (1-t)(f(y) + g(y))$ I don't know if I have made some reasoning or logical mistakes; in fact it looks fine to me, but I don't trust myself so much to say that it's correct, so please let me know. Thanks.
|
Your statement after "It follows that:" is false. You are essentially claiming that if $A \le 0$ and $B \le 0$ , then $A \le B$ , which is clearly false.
|
|real-analysis|functions|solution-verification|convex-analysis|
| 1
|
Abstract Vector Space: Distance and Angle
|
I saw the following example of an abstract vector space: $V$ is the vector space made up of vectors of the form $(a, b)$ with $a+b=1$ . The way we add and scale vectors in $V$ is: $$(a_1, b_1) + (a_2, b_2)=(a_1+a_2-1, b_1 + b_2)$$ $$c(a, b)=(ac-c+1,bc)$$ It's easy to check this is indeed a vector space with $(1,0)$ acting as the zero vector. And I know we can use the inner product to define the concept of distance and angle by saying the norm of a vector is $||\vec{v}||=\sqrt{\vec{v} \cdot \vec{v}}$ , the distance between two vectors is $||\vec{v}-\vec{w}||$ and the angle between two vectors is $\theta = \arccos\left(\frac{\vec{v} \cdot \vec{w}}{||\vec{v}||||\vec{w}||}\right)$ . Are these calculations only for $\mathbb{R}^n$ or are they fine for abstract spaces as well? Because I was trying a few random calculations and came up with: $(2,-1)$ and $(10,-9)$ are in $V$ with the distance between them being $$||(2, -1)-(10,-9)||$$ $$=||(2,-1)+-(10,-9)||$$ $$=||(2,-1)+(-8,9)||$$ $$=||(-7, 8)||$$
|
You can't use the traditional definition of the length of an ordered pair in this context because that length does not interact properly with the vector space operations. In particular, $$ ||2v|| \ne 2 ||v||. $$ Whenever you have an inner product that does play nicely with vector arithmetic in an abstract vector space you can use it to define angles in that vector space. You can even have different inner products on the same space. In $\mathbb{R}^2$ the inner product $$ \langle (a,b), (c,d) \rangle = ac + 2bd $$ leads to different lengths and angles.
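The failure of homogeneity is easy to exhibit with the question's operations (the helper names are mine): with $v=(2,-1)$, the scalar product $2v=(3,-2)$ has Euclidean length $\sqrt{13}$, not $2\sqrt5$.

```python
import math

def add(u, v):            # addition in V: (a1 + a2 - 1, b1 + b2)
    return (u[0] + v[0] - 1, u[1] + v[1])

def scale(c, v):          # scaling in V: (ac - c + 1, bc)
    return (v[0] * c - c + 1, v[1] * c)

def euclid(v):            # the naive Euclidean norm of the coordinate pair
    return math.hypot(v[0], v[1])

v = (2, -1)
assert add(v, (1, 0)) == v                     # (1, 0) is the zero vector
assert add(v, scale(-1, (10, -9))) == (-7, 8)  # the subtraction in the question
print(euclid(scale(2, v)), 2 * euclid(v))      # sqrt(13) vs 2*sqrt(5): not equal
```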
|
|linear-algebra|vector-spaces|
| 1
|
The differences between Lagrange and Leibniz's derivative notations
|
One problem I have found when learning calculus is that there are many different ways to denote the derivative. If $y=f(x)=x^2$ , then we could write \begin{align} f'(x)&=2x \\ y'&=2x \\ \frac{df}{dx}(x)&=2x \\ \frac{df(x)}{dx}&=2x \\ \frac{d}{dx}f(x)&=2x \\ \frac{dy}{dx}&=2x \end{align} And this is just Lagrange and Leibniz's notations alone. What I find troubling is that they all seem to be suggesting subtly different things about what the derivative actually is . Is it a function, a limit of a quotient, or both? In the interests of keeping my post brief, I'll focus my attention on $f'(x)=2x$ and $\frac{dy}{dx}=2x$ , as these seem to be the most common notations. $$ f'(x)=2x $$ It does make sense to think of the derivative as the gradient function: $$ f'\colon x\mapsto\lim_{\Delta x \to 0}\frac{f(x+\Delta x)-f(x)}{\Delta x} $$ In this case the limit expression is equal to $2x$ , and so we can write $$ f' \colon x \mapsto 2x $$ However, this notation seems a little counter-intuitive w
|
I would also like to point out one more interpretation, which is more intuitive and which was quite helpful for me in fixing the meaning behind everything said about the problem. You can think about \begin{align}\frac{df}{dg}\end{align} as the slope of a tangent line to the parametric curve defined by points $(g(t),f(t))$ at a given point. If $f$ and $g$ are differentiable functions on $(a,b)$ , and if the parametric equations $y=f(t)$ , $x=g(t)$ determine $y$ as a differentiable function $h$ of $x$ on an arc of the curve defined by points $(g(t),f(t))$ ( $t\in(a,b)$ ) that extends beyond a point $P=(g(t_0),f(t_0))$ , and if $g'(t_0)\neq0$ , then \begin{align}\frac{dy}{dx}\biggr|_{P}=\frac{f'(t_0)}{g'(t_0)}\end{align} And the expression $dy/dx$ here can be thought of as $df/dg$ . Now, I think, the meaning behind the ridiculous phrase "differentiate $x^2$ with respect to $x/2$ " becomes much clearer.
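The "differentiate $x^2$ with respect to $x/2$" phrase can be made concrete with the parametric formula $dy/dx=f'(t_0)/g'(t_0)$ (the numeric differentiation scheme and step size are my own):

```python
def deriv(fn, t, h=1e-6):
    # symmetric difference quotient
    return (fn(t + h) - fn(t - h)) / (2 * h)

f = lambda t: t * t   # y = t^2
g = lambda t: t / 2   # x = t / 2
t0 = 1.5
slope = deriv(f, t0) / deriv(g, t0)
print(slope)  # d(t^2)/d(t/2) = 4t, which is 6 at t = 1.5
```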
|
|derivatives|notation|terminology|
| 0
|
Why would these 2 predicate logic formulas not be equivalent?
|
Are these two logical formulas logically equivalent: ∀x ∈ U(P(x) ∨ Q(x)) and ¬(∃x ∈ U(¬P(x)) ∧ ∃x ∈ U(¬Q(x))), where we let U be a domain? For the 2nd statement, I simplified the expression: ¬(∃x ∈ U(¬P(x)) ∧ ∃x ∈ U(¬Q(x))) = ¬∃x ∈ U(¬P(x)) v ¬∃x ∈ U(¬Q(x)) = ∀x ∈ U(P(x)) v ∀x ∈ U(Q(x)) = ∀x ∈ U(P(x) v Q(x)) Therefore I assumed the 2nd statement would be equivalent to the 1st statement, but the answers to this question say it is not. Where did I make a mistake in my simplification procedure, or is there another reason why we can't consider these 2 statements equivalent? Thank you for your support in advance.
|
Note: your notation is a bit unusual; I'm not sure where you got it, but I'll adapt to it. ∀x ∈ U(P(x)) v ∀x ∈ U(Q(x) ) = ∀x ∈ U(P(x) v Q(x) ) The problem lies here: this is not true. A simple example: let x range over the natural numbers, let P(x) be "x is odd" and let Q(x) be "x is even". The first predicate is "Either all natural numbers are odd, or all natural numbers are even". The second predicate is "Every natural number is either odd or even". Clearly the first one is false and the second one is true in this case. The correct answer is $$\forall x \in U(P(x)) \lor \forall x \in U(Q(x))\\ =\forall x,y\in U((P(x) \Leftrightarrow P(y))\lor (Q(x) \Leftrightarrow Q(y)))$$
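The counterexample can be brute-forced over a finite domain (the domain and predicates are my own choices):

```python
U = [1, 2, 3, 4]
P = lambda x: x % 2 == 1   # "x is odd"
Q = lambda x: x % 2 == 0   # "x is even"

every_x_P_or_Q = all(P(x) or Q(x) for x in U)                  # forall x (P(x) v Q(x))
all_P_or_all_Q = all(P(x) for x in U) or all(Q(x) for x in U)  # forall x P(x) v forall x Q(x)
print(every_x_P_or_Q, all_P_or_all_Q)  # True False: the two forms are not equivalent
```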
|
|discrete-mathematics|first-order-logic|
| 1
|
About equivariant vector valued forms on principal bundle
|
Let $\pi: E \to M$ be a $G$-equivariant vector bundle and let us adopt the notation $$C^{\infty}(M,E)^{G} = \{\tilde{s} \in C^{\infty}(M,E) | \ \forall g \in G \ \tilde{s} \cdot g = \tilde{s}\}$$ where the condition is the usual equivariance of the section. For example, give the trivial vector bundle $P \times V$ over a principal bundle $P$ the equivariant structure $$(p,v) \cdot g := (p \cdot g, \rho(g^{-1})v)$$ via some linear representation $\rho: G \to GL(V)$ . Then, since the bundle is trivial, one may write an arbitrary smooth section as $\tilde{s}(p) = (p, s(p))$ for some smooth function $s: P \to V$ . Then one may easily compute that $$C^{\infty}(P,P \times V)^{G} \cong \{ s: P \to V \ \mathrm{smooth}| \ \forall g \in G \ R^{*}_{g} s = \rho(g^{-1})s \} $$ The trouble comes when trying to then make sense of the following isomorphism $$\Omega^{r}(P,P \times V)^{G} \cong \{ \phi \in \Omega^{r}(P,V) | \ \forall g \in G \ R^{*}_{g}\phi = \rho(g^{-1})\phi \}$$ Namely, I'm not sure even what t
|
The action of $G$ on $P$ induces a right action on $T^*P$ by cotangent lift, given $p\in P$ , $g\cdot \alpha=R_{g^{-1}}^*\alpha\in T^*_{g^{-1}p}P$ . This induces a natural right action on $\bigwedge^kT^*P$ by extension. We then get a right action on $\bigwedge^k T^*P\otimes(P\times V)\cong V\otimes \bigwedge^k T^*P$ by tensoring the representation of $G$ with our right action. $g(v\otimes \omega)=(\rho(g^{-1})v)\otimes g\cdot \omega$ .
|
|differential-geometry|principal-bundles|equivariant-maps|
| 1
|
Are linearly ordered topological spaces well-based?
|
A linearly-ordered topological space or LOTS is one whose topology admits a basis generated by open intervals of a total ordering of its points. A well-based space is one which admits a local basis at each point that is totally ordered by set inclusion. It appeared to me that the former implied the latter, and I attempted to prove it via the following: for any point $p$ in a LOTS take (possibly transfinite) sequences $a:\Gamma\rightarrow[-\infty,p)$ and $b:\Gamma\rightarrow(p,\infty]$ for some ordinal $\Gamma$ which are monotonic and surjective with $a_0=-\infty$ and $b_0=\infty$ . Then for any open interval $(c,d)$ containing $p$ there is an ordinal $\beta$ such that $c\le a_\beta<b_\beta\le d$ , so $\{(a_\alpha,b_\alpha)\mid\alpha\in\Gamma\}$ is a local basis of $p$ . However, I got a response that a LOTS can fail to be well-based if there exists a point with different cofinalities on the left and the right, giving the example of $\omega_1+1+\omega^*$ with the order topology (where $+$ denotes order co
|
Your argument is wrong because such an ordinal $\Gamma$ and such monotone surjections need not exist. For example, there is no (not necessarily strictly) increasing surjection $f:\omega_1\to \omega$ , for otherwise by picking $\alpha_n\in f^{-1}(n)$ for each $n$ , we'd obtain a countable cofinal subset of $\omega_1$ . But the cofinality of $\omega_1$ is uncountable. Let $X = \omega_1+\{n' : n\in\mathbb{N}_0\}$ and let $x = 0'$ be the point greater than all points of $\omega_1$ , where I decided to use $\{n' : n\in\mathbb{N}\}$ instead of $1+\omega^*$ for the clarity of the argument. Here $n' < k'$ iff $n > k$ . Suppose that $U_t$ , $t\in T$ is a linearly ordered base at $x$ where $T$ is a totally ordered set. We can write $T = \bigcup_{n=1}^\infty T_n$ where $T_n = \{t\in T : \min\{k : k'\in U_t\} = n\}$ . Since we know that $k'\notin U_t$ for $t\in T_n$ and $k < n$ , we see that for $s\in T_k$ we must have $U_s\supsetneq U_t$ and so $t < s$ since the map $a\mapsto U_a$ is increasing. So the $T_n$ decompose $T$ into disj
|
|general-topology|order-theory|
| 0
|
Osculating circle of ellipse
|
The osculating circle of the ellipse at point $A$ intersects the ellipse at $A,D$ . I want to prove the tangent line $AB$ and the line $AD$ form equal angles with the axis of the ellipse. My attempt: The osculating circle of the ellipse $$x^2/a^2+y^2/b^2=1\tag1$$ at $A(a\cos(t),b\sin(t))$ is $$\left(x-\frac{\left(a^2-b^2\right) \cos ^3(t)}{a}\right)^2+\left(y+\frac{\left(a^2-b^2\right) \sin ^3(t)}{b}\right)^2=\frac{\left(a^2 \sin ^2(t)+b^2 \cos ^2(t)\right)^3}{a^2 b^2}\tag2$$ Solving the equations $(1),(2)$ to get the intersections $A$ and $D$ . Then I find the coordinates of $D$ are $(a \cos (3 t),-b \sin (3 t))$ . Then I calculate the slope of $AD$ $$\frac{b \sin (t)-(-b \sin (3 t))}{a \cos (t)-a \cos (3 t)}=\frac{b \cot (t)}{a}$$ and the slope of $AB$ is $$\frac{\frac{d}{dt}b\sin(t)}{\frac{d}{dt}a\cos(t)}=-\frac{b \cot (t)}{a}$$ so $AB,AD$ form equal angles with the $y$ axis (or $x$ axis). Solving equations $(1),(2)$ took very long computation, but in the end I find that the solution of $D$ is quite sim
|
Theorem 1 in this article is Let $A, B, C, D$ be four distinct points on $C$ . Then $A, B, C, D$ are concyclic if and only if $[AC, BD] \cong\text{Sim}(C)$ , that is, the axes of $C$ are parallel to the bisectors of the angles formed by the lines $AC$ and $BD$ . Let the points $B,C$ approach the point $A$ , then the circle approaches the osculating circle in this problem, so $AC$ approaches the tangent line at $A$ and the line $BD$ approaches $AD$ , so the limit of these two lines still form equal angle with y axis (or x axis). QED.
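The claimed coordinates of $D$ and the slope identity can also be spot-checked numerically (a sketch with hypothetical values $a=3$, $b=2$, $t=0.7$):

```python
import math

# Check that D = (a cos 3t, -b sin 3t) lies on both the ellipse (1) and
# the osculating circle (2), and that the slopes of AD and AB are opposite.
a, b, t = 3.0, 2.0, 0.7
cx = (a*a - b*b) * math.cos(t)**3 / a           # osculating circle center, x
cy = -(a*a - b*b) * math.sin(t)**3 / b          # osculating circle center, y
r2 = (a*a*math.sin(t)**2 + b*b*math.cos(t)**2)**3 / (a*a*b*b)  # radius^2

Dx, Dy = a*math.cos(3*t), -b*math.sin(3*t)
on_ellipse = abs(Dx*Dx/(a*a) + Dy*Dy/(b*b) - 1)       # residual of eq. (1)
on_circle = abs((Dx - cx)**2 + (Dy - cy)**2 - r2)     # residual of eq. (2)

Ax, Ay = a*math.cos(t), b*math.sin(t)
slope_AD = (Ay - Dy) / (Ax - Dx)                 # should be (b/a) cot t
slope_AB = -b*math.cos(t) / (a*math.sin(t))      # dy/dx on the ellipse at A
```

Both residuals vanish to machine precision, and `slope_AD + slope_AB` is zero, matching the equal-angle conclusion.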
|
|conic-sections|osculating-circle|
| 1
|
Reverse L'Hospital rule under certain conditions
|
Let $p$ be a positive real number, and $f:\mathbb{R}^+\rightarrow\mathbb{R}$ a differentiable function. Suppose that $f'(x)$ is monotonically increasing. Show that $\lim\limits_{x\rightarrow+\infty}\frac{f(x)}{x^p}=l$ if and only if $\lim\limits_{x\rightarrow+\infty}\frac{f'(x)}{px^{p-1}}=l$ . The direction " $\Leftarrow$ " follows from L'Hospital's rule. By the way, many textbooks only prove L'Hospital's rule for the $\frac{0}{0}$ case, but just say that the proof for the $\frac{\infty}{\infty}$ case is similar. I do not think that the proof for the $\frac{\infty}{\infty}$ case is as easy as that for the $\frac{0}{0}$ case. Coming back to our question: how to prove " $\Rightarrow$ "?
|
The assertion is true and it is an immediate consequence of the so called Monotone Density Theorem (see e.g. here , Theorem 1.7.2, p. 39). We say that $\ell(x)$ is a slowly varying function , if $\ell$ is positive, measurable, and such that $$\lim_{x\to \infty}\frac{\ell(\lambda x)}{\ell (x)} = 1$$ for $\lambda > 0$ . Then we can state the following. Monotone Density Theorem . Let $U(x) = \int_0^x u(y) dy$ . If $U(x) \sim c x^{\rho} \ell (x)$ ( $x \to \infty$ ), where $c \in \mathbb R$ and $\ell (x)$ is a slowly varying function, and if $u(x)$ is ultimately monotone, then $$u(x) \sim c\rho x^{\rho-1} \ell(x)\ \ \ (x\to \infty)$$ The proof of the above theorem can be specialized in your case as follows. We can first write $$f(bx)-f(ax)=\int_{ax}^{bx} f'(t) dt,$$ for $0 < a < b$. Monotonicity of $f'$ yields $$\frac{x(b-a)f'(ax)}{x^p}\leq \frac{f(bx)-f(ax)}{x^p}\leq \frac{x(b-a)f'(bx)}{x^p}. $$ The middle term is $$\frac{f(bx)}{(bx)^p}b^p-\frac{f(ax)}{(ax)^p} a^p\to l\,(b^p-a^p),$$ for $x\to \infty$
|
|real-analysis|calculus|limits|analysis|derivatives|
| 0
|
Ring with $1+1+\cdots+1$ invertible.
|
Given a ring $A$ for which $a=1+1+\cdots+1$ ( $n$ times) is invertible, I'm supposed to find all elements $x$ for which $(x-1)^3=0$ and $x^n=1$ . I've yet to encounter such an exercise, so I don't have a clear image of what I'm supposed to be "looking for". What I've noticed, and I'm not quite sure of, is that $x^n=1$ means $x$ is invertible, and I'm now trying to figure out how that would be of any help, since it usually is.
|
With the help of everyone in the comments I managed to end up with this: Instead of working with the formulas we were given, let's write $x-1$ as $y$ . Now we have $y^3=0$ and $(y+1)^n=1$ . Expand the binomial; using $y^3=0$ you get $${n \choose n-2}y^2+{n \choose n-1}y+1=1. \tag{$\ast$}$$ Subtract $1$ and multiply by $y$ ; using $y^3=0$ again, this leaves $ny^2=0$ , and since $n\cdot 1=a=1+1+\cdots+1$ is invertible, $y^2=0$ . Heading back to $(\ast)$ you now have $ny=ay=0$ , hence $y=0$ . So the only possible element is $x-1=0 \Rightarrow x=1$ . Nice!
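A brute-force check in a concrete ring supports the conclusion (a sketch; $\mathbb{Z}/8\mathbb{Z}$ with $n=3$ is a hypothetical choice, picked because $3$ is invertible mod $8$ and the ring has nonzero nilpotents, so the hypotheses are not vacuous):

```python
# Brute-force search in A = Z/8Z with n = 3: list all x satisfying
# (x-1)^3 = 0 and x^n = 1 in the ring.
m, n = 8, 3
solutions = [x for x in range(m)
             if (x - 1)**3 % m == 0 and x**n % m == 1]
print(solutions)  # → [1]
```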
|
|ring-theory|
| 0
|
Is there a simple geometric proof of why two simple harmonic oscillators draw a circle?
|
We all know that a circle can be drawn with the trigonometric functions $x=\cos(t), y=\sin(t)$ . If we define the sine and cosine functions in terms of triangles (like we do in high school), then this is quite obvious. But then later on in our education, we learn that the solution to a simple harmonic oscillator is the sine function. A weight on an undamped spring goes back and forth following a sine wave over time, and that this is the intuition behind a lot of wave motion (like sound waves). However, it's not generally taught why the sine wave solution to simple harmonic motion is the same function as the sine wave as defined by triangles or circles. Or in other words, when you take two harmonic oscillators and plot their outputs as $x=\cos(t), y=\sin(t)$ , why should they make a perfect circle? More specifically: it's intuitive enough that they must form a shape that makes a full loop of some kind. But why does it happen to be a perfect circle , as opposed to an alternative shape li
|
Nice question! You might reverse this, asking why an object with uniform circular motion looks like a harmonic oscillator when you project it to one dimension (as Greg Martin alludes to in the comments.) One way to see this is to imagine a ball tethered to a post moving in a circle. The tension force from the tether acts as centripetal force on the ball pointing toward the post. So if the ball is at point $(x,y)$ and the post is at the origin, the force acts in direction $(-x, -y)$ , and so the acceleration is proportional to $(-x, -y)$ ; that is, $\frac{\partial^2}{\partial t^2}(x,y) = (-cx, -cy)$ . If we look at just the $x$ -component, we get $\frac{\partial^2}{\partial t^2}x = -cx$ , the equation for a harmonic oscillator.
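The projection argument can be illustrated numerically: integrate the two uncoupled oscillator equations $x''=-x$, $y''=-y$ with a symplectic (leapfrog) scheme and check that the traced point stays on the unit circle (a sketch; the step size and duration are arbitrary choices of mine):

```python
import math

# Two independent unit-frequency harmonic oscillators, started a quarter
# period apart: x = 1, x' = 0 and y = 0, y' = 1. If the text's claim is
# right, (x, y) should trace the unit circle.
dt, steps = 1e-3, 10_000
x, vx = 1.0, 0.0
y, vy = 0.0, 1.0
max_drift = 0.0
for _ in range(steps):
    # leapfrog (kick-drift-kick): symplectic, so amplitudes are conserved
    # to high accuracy and the radius barely drifts
    vx += -x * dt / 2;  vy += -y * dt / 2
    x += vx * dt;       y += vy * dt
    vx += -x * dt / 2;  vy += -y * dt / 2
    max_drift = max(max_drift, abs(math.hypot(x, y) - 1.0))
```

After ten simulated time units, `max_drift` stays far below any visible deviation from a circle.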
|
|calculus|geometry|trigonometry|taylor-expansion|intuition|
| 1
|
Prove that a critical graph $G$ satisfies $\chi(G) \leq \delta + 1$
|
Let $G = (V, E)$ be a critical graph; i.e. a graph such that for any proper subgraph $H \subsetneq G$ we have $\chi(H) < \chi(G)$ . I was requested to prove that $\chi(G) \leq \delta + 1$ , where $\delta$ is the degree of the vertex (or vertices) of least degree in $G$ . I presume that I need to relate a subgraph $H$ that involves the vertex (or vertices) of $G$ whose degree is $\delta$ and somehow prove that the chromatic number of this subgraph is $\delta$ . Then the property follows from the fact that $G$ is critical. However, I was unable to construct such a subgraph. Am I taking the wrong approach? Any hints/suggestions are appreciated.
|
Let there be a vertex of degree less than $\chi(G)-1$ . Remove it and its edges. Since $G$ is critical, the rest can be coloured with $\chi(G)-1$ colours. Now put the removed vertex back. Its degree is less than the number of used colours, so we can colour it with one of the already used colours. Contradiction with the definition of $\chi(G)$ .
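The bound can be sanity-checked on a small critical graph, e.g. the 5-cycle, where it holds with equality (a brute-force sketch; the example graph is my choice, not from the question):

```python
from itertools import product

# C5 (the 5-cycle) is critical with chi = 3 and delta = 2.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]

def chromatic_number(n, edges):
    # smallest k admitting a proper k-colouring (brute force over colourings)
    for k in range(1, n + 1):
        for col in product(range(k), repeat=n):
            if all(col[u] != col[v] for u, v in edges):
                return k

chi = chromatic_number(5, edges)
delta = min(sum(1 for e in edges if v in e) for v in range(5))
print(chi, delta)  # → 3 2
```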
|
|discrete-mathematics|graph-theory|coloring|
| 1
|
Find the last three digits of $\large\phi({3}^{2})^{\phi({4}^{2}){}^{\phi({5}^{2})^{.^{.^{.\phi(2019{}^{2})}}}}{}^{}}$
|
Given any positive integer $n,$ let $\phi(n)$ be the number of positive integers less than or equal to $n$ that are relatively prime to $n.$ Find the last three digits of $\large\phi({3}^{2})^{\phi({4}^{2}){}^{\phi({5}^{2})^{.^{.^{.\phi(2019{}^{2})}}}}{}^{}}$ My attempts: I calculated Euler's totient functions separately by the formula $\phi(n)=n\big(1-\frac{1}{p_1}\big)\big(1-\frac{1}{p_2}\big)\cdots\big(1-\frac{1}{p_k}\big),$ and then I worked them with $\pmod{1000}.$ But it looks very tedious! May be there some smart way to tackle this problem. Any guidance would be highly appreciated.Thank you!
|
$\,\ 6\equiv_5 1\overset{(\ \ )^{\large 5}}\Rightarrow 6^5\equiv_{25} 1\overset{(\ \ )^{\large 5}}\Rightarrow 6^{\large \color{darkorange}{25}}\equiv_{125}1\,$ by $\mu$ LTE , so by the mod distributive law we have $\!\begin{align} \bmod 1000\!:&\ \ 6^{\:\!\Large\color{#c00}{8^{\LARGE 20i}}}\!\!\! \equiv 8 \!\left[\dfrac{6^{\large \color{darkorange}{76}}}{\color{#0a0}8}\!{\small \bmod} 125\right]\!\equiv 8[\color{#0a0}{32}],\ \, {\rm by} \ \ {\small {\frac{6^{\large \color{darkorange}1}}{\color{#0a0}8\ \ }}\equiv \frac{3}4\equiv\frac{128}4}\equiv 32\!\!\!\pmod{\!125}\\[.2em] {\rm by} \bmod\!\!\underbrace{100}_{^{\Large\phi(\color{125}{125})}}\!\!\!:&\ \ \ \color{#c00}{8^{\large 20i}}\! \equiv 4\!\!\underbrace{\left[\color{#0af}{\frac{8^{20i}}4}\!{\small \bmod} 25\right]}_{{\large \color{#0af}{8^{\large 20}\equiv_{25}1}}\ \text{by }\,\phi(25)=20\!\!}\!\!\! \equiv\color{darkorange}{76},\ \ {\rm by} \ \ {\small \color{#0af}{\frac{1}4}\equiv \frac{-24}4\equiv}\, {-6}\equiv 19\!\!\!\pmod{\!2
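The modular facts this argument leans on can be spot-checked directly (a minimal sketch):

```python
# Spot-check of the congruences used above.
checks = {
    "6^5  = 1 (mod 25)":  pow(6, 5, 25) == 1,
    "6^25 = 1 (mod 125)": pow(6, 25, 125) == 1,   # the mu-LTE lift
    "8^20 = 1 (mod 25)":  pow(8, 20, 25) == 1,    # since phi(25) = 20
}
print(all(checks.values()))  # → True
```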
|
|elementary-number-theory|totient-function|
| 0
|
Number of zeros at the end of $5^5 \times 10^{10} \times 15^{15} \times \dots \times 120^{120} \times 125^{125} $?
|
I have to find the number of zeros at the end of the number $$5^5 \times 10^{10} \times 15^{15} \times \dots \times 120^{120} \times 125^{125} $$ So, first I know that the zeros are formed by multiplying 2 with 5, and since 2 appears fewer times than 5, we should count the number of factors of 2. I can count the number of factors of 2 manually by selecting each even term in the product, calculating the power of 2 in it, and adding them all up. But what if the product were so large that this would take too long? So I searched and found that it is solved as follows: Highest power of 2 in the given product = (Number of multiples of 2) + (Number of multiples of 4) + (Number of multiples of 8) + (Number of multiples of 16) = (10+20+30+...+120) + (20+40+60+...+120) + (40+80+120) + 80 = 1520 How did they calculate it like that?
|
To know how many trailing zeros there are we must figure out the highest power of $10$ that divides the number (that power is how many zeros there are). To find the highest power of $10$ that divides the number we must find out the highest power of $2$ that divides the number and the highest power of $5$ and compare them. (Take the lower of the two powers: each power of one combines with a power of the other to make a power of $10$ , and you'll have as many powers of $10$ as the lower of those two powers.) This part of the argument is about figuring out the power of $2$ that divides the number. The powers of two come from each even component: $10^{10}, 20^{20}, 30^{30}$ etc. The even components are $10^{10}, 20^{20}, 30^{30}, \dots, 110^{110}, 120^{120}$ . As these can be written as $(2\times 5)^{10} = 2^{10}\times 5^{10}, (2\times 10)^{20}= 2^{20}\times 10^{20}, (2\times 15)^{30}=2^{30}\times 15^{30}$ , we get the factors $2^{10}, 2^{20}, 2^{30}, \dots$ , up to $2^{120}$ from $120^{120}=2^{120}\times 60^{120}$ Thus we have ext
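The count of $1520$ can be confirmed directly by summing prime valuations (a sketch; `valuation` is a helper name of my own):

```python
# The product is prod_{k in {5,10,...,125}} k^k, so the exponent of a
# prime p in it is sum over those k of k * v_p(k).
def valuation(n, p):
    # exponent of the prime p in n
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

exp2 = sum(k * valuation(k, 2) for k in range(5, 126, 5))
exp5 = sum(k * valuation(k, 5) for k in range(5, 126, 5))
zeros = min(exp2, exp5)
print(exp2, exp5, zeros)  # → 1520 2125 1520
```

The exponent of $2$ is indeed the bottleneck, so the number ends in $1520$ zeros.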
|
|elementary-number-theory|
| 0
|
Matrix elements of irreducible inequivalent unitary representations of group G form orthogonal basis of $\mathbb{C}(G)$
|
I'm having a hard time understanding this (translated literally from lecture script): "let $\rho_1,...,\rho_k $ be a list of irreducible inequivalent unitary representations of a finite group G. $\rho_{\alpha,ij}(g), \alpha = 1,...,k$ denote the Matrix entries of $\rho_{\alpha}(g)$ w.r.t. an orthonormal basis. The functions $\rho_{\alpha,ij}$ then form an orthogonal basis of the space of functions from G to the complex numbers $\mathbb{C}(G)$ " I'm under the impression that "matrix elements" are just the matrix representation of the transform $\rho(g)$ acting on a vector space V. How, then, are they supposed to be used to build some function that maps G into the complex numbers?
|
Each $\rho_\alpha(g)$ is an endomorphism, which in some (orthonormal) basis defines a matrix with coefficients in $\mathbb{C}$ , and those coefficients are called $\rho_{\alpha,ij}(g)$ . Then $\rho_{\alpha,ij}(g)\in \mathbb{C}$ , so the function $g\mapsto \rho_{\alpha,ij}(g)$ is a function from $G$ to $\mathbb{C}$ .
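A concrete illustration for $G=\mathbb{Z}/3\mathbb{Z}$ (my choice of example): its irreducible unitary representations are one-dimensional, so each "matrix entry" is literally one function $G\to\mathbb{C}$, and the three of them are pairwise orthogonal in $\mathbb{C}(G)$:

```python
import cmath

# The three characters rho_k(g) = exp(2*pi*i*k*g/3) of Z/3Z, tabulated
# as functions on G = {0, 1, 2}.
G = range(3)
rho = [[cmath.exp(2j * cmath.pi * k * g / 3) for g in G] for k in range(3)]

def inner(f, h):
    # the standard inner product on C(G): <f, h> = sum_g f(g) * conj(h(g))
    return sum(a * b.conjugate() for a, b in zip(f, h))

gram = [[inner(rho[k], rho[j]) for j in range(3)] for k in range(3)]
```

The Gram matrix comes out as $3\cdot I$ (up to floating-point noise): three orthogonal functions spanning the 3-dimensional space $\mathbb{C}(G)$.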
|
|abstract-algebra|representation-theory|
| 1
|
How to lead a shot at a target undergoing uniform circular motion
|
I'm playing a programming game in which script-controlled tanks drive around and try to shoot each other until only one side remains. Because of the way the steering works, the tanks often follow circular trajectories, so leading a shot in those circumstances is very useful. I have developed an equation that seems to find a correct shot angle, but I can only solve it numerically. I was hoping that someone here could help me come up with a closed-form solution, or propose a different way of approaching the problem that might be better all around. Here's how my solution works: Concept: If I fire at time 0, then the bullet can only hit if at some time $t$ , the enemy and the bullet are equidistant from me. If I can find the value for $t$ , then I can determine where the enemy will be at that time and from there I can compute the angle I should use for aiming. In order to compute this, here's what I did: $\omega_0$ is the enemy's initial angle from the $x$ axis $\omega$ is the enemy's rate
|
TL;DR: I don't think you can avoid numerical methods if you want an accurate solution, but you can get the answer by solving a simpler equation. Let's consider just the distance from the point $\vec p$ from which you fired to the point $\vec e(t)$ where the tank is at time $t$ . Then in order for you to hit the tank at time $t$ , $t$ needs to be a solution of the equation $$ \lVert \vec e(t) - \vec p \rVert = Qt. \tag1 \label{eq:hit} $$ (You also need to aim in the right direction, but let's leave that problem for later.) Let $C$ be the center of the circle traveled by the tank, with position vector $\vec c$ . Let $E(t)$ and $P$ be the points described by the position vectors $\vec e(t)$ and $\vec p$ . Suppose that at the instant you fire, the points $E(t)$ , $C$ , and $P$ form a triangle with angle $\theta_0$ at vertex $E_0$ and that the angle is increasing at the rate $\omega$ as $E(t)$ moves around its circular path, so the angle at a time $t$ shortly after you fire is $\theta(t) =
|
|calculus|trigonometry|roots|projectile-motion|
| 1
|
Calculate the integral of $1/\sin z$ over the unit circle
|
$\int_C \frac{1}{\sin z}\,dz$ where $C := \{z \in \mathbb{C} : |z| = 1 \}$ I know $\frac{1}{\sin z}$ has a pole at $z = 0$ . I can apply the Cauchy Residue Theorem, so I need to know the Laurent series expansion of $\frac{1}{\sin z}$ , which I am unable to figure out. Is there any other neat way to approach this? Thanks in advance.
|
Since you just want the residue, you can write $\sin z=zg(z)$ for some entire $g$ , where $g(0)\ne0$ . Then by residue theorem, your integral is $$\dfrac{2\pi\textbf{i}}{g(0)}=2\pi\textbf{i}\displaystyle\lim_{z\to0}\frac{1}{g(z)}=2\pi\textbf{i}\displaystyle\lim_{z\to0}\dfrac{z}{\sin z}$$ So you can get the residue without computing the Laurent series of $1/\sin z$ .
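The value $2\pi i$ can also be double-checked by evaluating the contour integral numerically (a sketch using a simple Riemann sum over the parametrization $z=e^{i\theta}$, which converges very fast for a periodic analytic integrand):

```python
import cmath

# Numerically integrate 1/sin(z) over |z| = 1 via z = e^{i*theta},
# dz = i*e^{i*theta} d(theta), and compare with 2*pi*i.
N = 4000
total = 0
for k in range(N):
    theta = 2 * cmath.pi * (k + 0.5) / N
    z = cmath.exp(1j * theta)
    total += (1j * z / cmath.sin(z)) * (2 * cmath.pi / N)

err = abs(total - 2j * cmath.pi)
print(err < 1e-9)  # → True
```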
|
|complex-analysis|
| 0
|
Calculate the integral of $1/\sin z$ over the unit circle
|
$\int_C \frac{1}{\sin z}\,dz$ where $C := \{z \in \mathbb{C} : |z| = 1 \}$ I know $\frac{1}{\sin z}$ has a pole at $z = 0$ . I can apply the Cauchy Residue Theorem, so I need to know the Laurent series expansion of $\frac{1}{\sin z}$ , which I am unable to figure out. Is there any other neat way to approach this? Thanks in advance.
|
Hint: $\dfrac{1}{\sin z} - \dfrac{1}{z}\;$ is analytic inside $C$ , so $\int_C \left(\dfrac{1}{\sin z} - \dfrac{1}{z}\right)dz=0$ . To see that $\dfrac{1}{\sin z} - \dfrac{1}{z}=\dfrac{z-\sin z}{z\sin z}\;$ is analytic at $0$ , note that the leading term in the numerator is $\frac16z^3$ and the leading term in the denominator is $z^2$ .
|
|complex-analysis|
| 0
|
joint distribution and conditional expectation
|
I am confused about conditional expectations. Let $(\Omega, \mathcal{F},P)$ be a probability space. Next, let $X$ and $Y$ be random variables on this space. Next, let $E[X|\sigma(Y)]$ be conditional expectation, which is $\sigma(Y)$ -measurable random variable, and, therefore, also $\mathcal{F}$ -measurable. The question: can we define joint distribution of $E[X|\sigma(Y)]$ , $Y$ and $X$ ? Ok, for example, $X$ and $Y$ are jointly Normal. What would be a distribution of $(E[X|\sigma(Y)], X, Y)$ ?
|
Suppose $X$ and $Y$ are jointly normal; then $$Z = E(X | \sigma (Y)) = \mu_X + \frac{\sigma_X}{\sigma_Y}\rho (Y - \mu_Y)$$ which is readily seen to be normally distributed as well, and the vector $(Z,X,Y)$ is jointly Gaussian. It's a simple exercise from here to compute its mean and covariance.
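A seeded Monte Carlo run illustrates this (a sketch with hypothetical parameters $\mu_X=1$, $\mu_Y=-2$, $\sigma_X=2$, $\sigma_Y=3$, $\rho=0.6$):

```python
import random

# Sample Y, form Z = E[X | sigma(Y)] by the formula above, and check that
# Z has mean mu_X and variance rho^2 * sigma_X^2 (as a linear function of
# a Gaussian, Z is itself Gaussian).
random.seed(0)
mu_x, mu_y, s_x, s_y, rho = 1.0, -2.0, 2.0, 3.0, 0.6
n = 200_000
zs = []
for _ in range(n):
    y = mu_y + s_y * random.gauss(0, 1)
    # the matching X would be mu_x + s_x*(rho*g1 + sqrt(1-rho^2)*g2),
    # but only Y is needed to compute the conditional expectation
    z = mu_x + (s_x / s_y) * rho * (y - mu_y)
    zs.append(z)

mean_z = sum(zs) / n
var_z = sum((z - mean_z) ** 2 for z in zs) / n
```

The sample mean lands near $\mu_X=1$ and the sample variance near $\rho^2\sigma_X^2 = 1.44$, consistent with the formula.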
|
|probability|probability-distributions|conditional-probability|conditional-expectation|
| 1
|
Probability distribution and expectated number of factors shared by two dice with 12 sides?
|
I've already seen You roll two dice. What is the probability of the event that they have no common factor greater than unity? but I'm looking into a similar but distinct problem, one I don't even know how to approach besides simply enumerating all possible results. Ideally I'd like to 'learn how to fish' instead of just being given fish. As a concrete example, if the two 12-sided dice come up 4 and 12, they share $\{1,2,4\}$ as factors - so they would 'score' 3. The dice 7 and 7 would share $\{1,7\}$ (scoring 2) and two lots of 12 would share $\{1,2,3,4,6,12\}$ (scoring 6). Is it easy to work out a probability distribution for this problem? Or is it not mathematically possible and I need to manually work out all 144 combinations?
|
With regards to the expected number of factors, we can calculate this easily without bothering to look at the probability distribution. Letting $X$ be the random variable counting the total number of factors shared, and letting $X_i$ be the indicator random variable corresponding to whether $i$ is a factor of both the first and second die, we can see that $X=X_1+X_2+\dots+X_{12}$ and thanks to the linearity of expectation that $E[X]=E[X_1+X_2+\dots+X_{12}]=E[X_1]+E[X_2]+\dots+E[X_{12}]$ , this despite any possible dependence between these. Now, $E[X_i]=\Pr(X_i=1)$ is simply going to be equal to the probability that $i$ is a factor of each die result, and will be equal to $\left(\frac{k}{12}\right)^2 = \frac{k^2}{12^2}$ where $k$ is the number of integer multiples of $i$ in the range $\{1,2,\dots,12\}$ We get then a calculation of: $$E[X] = \dfrac{12^2+6^2+4^2+3^2+2^2+2^2+1^2+1^2+1^2+1^2+1^2+1^2}{12^2}$$ $$ = \dfrac{219}{144}= 1.5208\overline{3}$$ Actually finding the probability distribution for the number of factors
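The figure $219/144$ can be confirmed by brute force over all $144$ outcomes (a short sketch; the number of shared factors of a pair $(i,j)$ is the number of divisors of $\gcd(i,j)$):

```python
from math import gcd

# Sum the 'score' (shared factor count) over every outcome of two d12s.
def num_divisors(n):
    return sum(1 for d in range(1, n + 1) if n % d == 0)

total = sum(num_divisors(gcd(i, j))
            for i in range(1, 13) for j in range(1, 13))
expectation = total / 144
print(total)  # → 219
```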
|
|probability|dice|
| 0
|
Upper bounding the variance of a sum
|
I have random variables $X_1, \ldots, X_N$ and two functions: a bounded function $|f|\leq M$ and another function $0 \le g \le 1$ . I would like to upper bound the variance $$ \mathbb{V}\left[\sum_{i=1}^N f(X_i)g(X_i) \right] $$ by a bound that does not depend on $f$ (happy for it to depend on $g$ ). Is it possible? Attempt I tried by using the usual variance formula, but I am unsure if I can do this type of bounds $$ \begin{align} \mathbb{V}\left[\sum_{i=1}^N f(X_i)g(X_i) \right] &= \mathbb{E}\left[\left(\sum_{i=1}^N f(X_i)g(X_i)\right)^2\right] - \mathbb{E}\left[\sum_{i=1}^N f(X_i)g(X_i)\right]^2 \\ &\leq \mathbb{E}\left[\left(\sum_{i=1}^N M g(X_i)\right)^2\right]- \mathbb{E}\left[\sum_{i=1}^N f(X_i)g(X_i)\right]^2 \\ &= M^2 \mathbb{E}\left[\left(\sum_{i=1}^N g(X_i)\right)^2\right] - \mathbb{E}\left[\sum_{i=1}^N f(X_i)g(X_i)\right]^2 \\ &\leq M^2 \mathbb{E}\left[\left(\sum_{i=1}^N g(X_i)\right)^2\right] \end{align} $$ but I was wondering if I can do anything better. One idea I had wa
|
From the assumption that $(X_i)_{i=1,..,N}$ are i.i.d, we deduce that $(f(X_i)g(X_i))_{i=1,..,N}$ are also i.i.d. As a consequence $$\begin{align} L:=\mathbb{V}\left(\sum_{i=1}^N f(X_i)g(X_i) \right) &= N\cdot \mathbb{V}\left( f(X_1)g(X_1) \right) \\ &= N\cdot \mathbb{E}\left(f^2(X_1)g^2(X_1) \right)-N\cdot \mathbb{E}^2\left(f(X_1)g(X_1) \right)\\ \end{align}$$ Without any further information about $f,g$ and $X$ , the best upper bound is $$L\stackrel{(1)}{\le}N\cdot \mathbb{E}\left(f^2(X_1)g^2(X_1) \right)\stackrel{(2)}{\le}\color{red}{NM^2\cdot \mathbb{E}\left(g^2(X_1) \right)}$$ The equality in $(1)$ occurs if for example $f$ is an odd function, $g$ is an even function and $X$ follows a symmetric distribution. The equality in $(2)$ occurs if and only if $|f(x)| = M$ for all $x$ .
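The bound can be checked empirically (a sketch with hypothetical choices, assuming i.i.d. samples: $X\sim U(0,1)$, $f(x)=M\sin(10x)$ so that $|f|\le M$, and $g(x)=x$ so that $0\le g\le 1$):

```python
import math, random

# Estimate Var(sum_i f(X_i) g(X_i)) across independent trials and compare
# against the bound N * M^2 * E[g(X)^2].
random.seed(1)
M, N, trials = 3.0, 5, 100_000
f = lambda x: M * math.sin(10 * x)
g = lambda x: x

sums, g2 = [], []
for _ in range(trials):
    xs = [random.random() for _ in range(N)]
    sums.append(sum(f(x) * g(x) for x in xs))
    g2.extend(g(x) ** 2 for x in xs)

mean_s = sum(sums) / trials
var_s = sum((s - mean_s) ** 2 for s in sums) / trials   # estimate of L
bound = N * M * M * sum(g2) / len(g2)                   # N M^2 E[g^2] estimate
```

Here `var_s` comes out well below `bound`, as the inequality predicts (the gap reflects that neither equality condition holds for these choices).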
|
|probability|integration|probability-theory|expected-value|upper-lower-bounds|
| 1
|
Inversion of a matrix equation
|
Is there a general way to invert (solve for $u$ ) this? $$\sum_{ij}R_{ijk}a_iu_j = -x_k$$ With $a,u,x \in \mathbb{R}^N$ . $R_{ijk}$ is symmetric in the last two indices. So really I'm trying to invert this: $$A= \begin{bmatrix} a'R_1 \\ a'R_2 \\ ...\\ a'R_N \end{bmatrix}$$ Could there be a nice formula for this inverse if all the $R_i$ are invertible? If you multiply $A$ by a matrix that has as columns $R_i^{-1}a$ you get that it's equal to $a'aI_n+ E$ where $E$ is traceless (it has $0$ in every element of the diagonal).
|
$ \def\p{\partial} \def\grad#1#2{\frac{\p #1}{\p #2}} \def\R{{\cal R}} \def\qif{\quad\iff\quad} \def\Si{S^{-1}} \def\qiq{\quad\implies\quad} \def\Sp{S^{\bf+}} \def\Sh{{\widehat S}} \def\LR#1{\left(#1\right)} \def\gradLR#1#2{\LR{\grad{#1}{#2}}} \def\c#1{\color{red}{#1}} $ Contracting the vector $a$ with the first component of the third-order tensor $\R$ yields $$ S=a\cdot\R \qif S_{jk} = \sum_i\,a_i\,\R_{ijk} $$ The fact that $\R$ is symmetric in its last two $(jk)$ indices means that $S$ is a symmetric matrix. This leaves a symmetric linear system to be solved for $u$ $$x=-S\cdot u \qiq u=-\Si x$$ Introducing the indexed matrices $\left(R_k = \R_{ijk}\right)\,$ seems like an unnecessary complication. ${\sf NB\!:\: If}\;S^{-1}$ does not exist, then there is no solution and you must settle for a least-squares approximation via the pseudoinverse $S^{\bf+}$ and an arbitrary vector $b$ $$u=-\Sp x \;+\; \LR{I-\Sp S}b$$ Update Although not stated in the question, your comments indicate that y
|
|linear-algebra|matrix-equations|tensors|multilinear-algebra|index-notation|
| 0
|
Random walk where Increments have exponential distribution. Probability of never reaching a negative value after $n$ steps.
|
Consider the random walk $S_n = \sum_{i=1}^n (X_i-1)$ where $X_i$ are i.i.d. with exponential distribution and mean $1$ , i.e. $P(X_i \leq x) = 1-e^{-x}$ . I am trying to figure out the probability $p_n$ , that the random walk has never reached a negative value after $n$ steps. I have obtained the exact values for the first small $n$ : $p_1 = e^{-1}$ $p_2 = 2e^{-2}$ $p_3 = \frac{9}{2}e^{-3}$ $p_4 = \frac{64}{6}e^{-4}$ I have noticed the pattern, that apparently $p_n = \frac{n^{n-1}}{(n-1)!}e^{-n}$ . I have also done some numeric simulations that seem to confirm that this is in fact the answer. However, I have been unable to prove this. I have tried using the law of total probability, but there I then need to determine the probability that a random walk like above never reaches a negative value if it starts at a specific (positive) value, which I don't know how to compute. If the exact value is too difficult to prove, I would also be content with an upper bound that also converges to 0
|
This is not a full answer. I found a different perspective to view the problem in, which turns the question from a complicated integral to a complicated discrete summation. Consider a Poisson process, with unit density, on the positive real line. This means that for any $0 \le a < b$ , the number of arrivals in $[a,b]$ has the distribution $\text{Poisson}(b-a)$ . If you number the arrivals $0 < A_1 < A_2 < \cdots$ , with the convention $A_0=0$ , then it is well known that the inter-arrival times $A_k-A_{k-1}$ all have an exponential distribution with rate $1$ . So, from now on, I will identify $$X_k=A_{k}-A_{k-1}.$$ Now, how do your events relate to the Poisson process? You are requiring $(X_1-1)\ge 0$ , which implies that $A_1=X_1\ge 1$ . That is, the first arrival happens after time $1$ , so there are no arrivals in the interval $(0,1)$ . Next, you require $(X_1-1)+(X_2-1)\ge 0$ , which implies $A_2=X_1+X_2\ge 2$ . That is, the second arrival happens after time $2$ , there is at most one arrival in the interval $(0
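While this does not settle the proof, the conjectured formula can be tested by seeded simulation (a sketch for $n=3$, where $p_3=\frac92 e^{-3}\approx 0.224$):

```python
import math, random

# Monte Carlo check of p_n = n^(n-1)/(n-1)! * e^(-n) for n = 3.
random.seed(42)
n, trials = 3, 200_000
hits = 0
for _ in range(trials):
    s = 0.0
    ok = True
    for _ in range(n):
        s += random.expovariate(1.0) - 1.0   # one step X_i - 1
        if s < 0:
            ok = False
            break
    hits += ok

estimate = hits / trials
exact = n ** (n - 1) / math.factorial(n - 1) * math.exp(-n)
```

With $2\cdot 10^5$ trials the estimate agrees with the conjectured value to within a few standard errors.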
|
|probability|random-walk|exponential-distribution|
| 0
|
Multidimensional Mean Value Theorem with arbitrary norm
|
In the question Multivariate Mean Value Theorem Reference was written the following statement for $x,y\in \mathbb{R}^{n}$ \begin{equation} ||f(x) - f(y)||_q \leq \sup_{z\in[x,y]}||f'(z)||_{(q,p)}||x-y||_p, \end{equation} where $z∈[x,y]$ denotes a vector $z$ contained in the set of points between $x,y\in \mathbb{R}^{n}$ , and $||f′(z)||_{(q,p)}$ is the $L(p,q)$ norm of the derivative matrix of $f:\mathbb{R}^{n}→\mathbb{R}^{m}$ evaluated at $z$ . I have not found a proof of this statement anywhere, including the link provided in the question above. But I've found the following proof of the classical Mean Value Theorem ( $p=q=2$ ) here Theorem 5.4. My question is: can this proof be used for arbitrary norms? For the cases $p=q=1$ or $p=q=\infty$ it is obvious that the same proof can be used, since one can easily obtain the following estimates for integrable vector-valued functions $$ \|\int_0^1f(t)dt\|_1\le\int_0^1\|f(t)\|_1\,dt, $$ $$ \|\int_0^1f(t)dt\|_\infty\le\int_0^1\|f(t)\|_\infty\,dt. $$
|
Following @LázaroAlbuquerque's suggestion, I've posted one more proof of the statement above here, for a differentiable function (a more general case than the previous one). Let $F:\mathbb{R}^{n}→\mathbb{R}^{m}$ be differentiable on a domain containing the two points $x,y\in \mathbb{R}^{n}$ with a segment connecting them. Let $p,q\in [1,\infty].$ By restricting $F$ to $[x,y]$ we can consider the parameterization $$f(t) = F(x+t(y-x)),$$ where $t\in[0,1].$ So, $f:[0,1]→\mathbb{R}^{m}$ is differentiable on $[0,1]$ with derivative $$f'(t)=\begin{pmatrix}\nabla F_1(x+t(y-x))^T(y-x)\\...\\\nabla F_m(x+t(y-x))^T(y-x)\end{pmatrix}=\nabla F(x+t(y-x))(y-x),$$ where $\nabla F(z)$ is the Jacobian matrix of $F.$ Then, by applying the second statement/proof from this (which can be proven in the same manner for $l_q$ -norms with $q\in[1;\infty]$ ) $$||F(x) - F(y)||_q = ||f(0) - f(1)||_q \leq \sup_{t\in[0,1]}||\nabla F(x+t(y-x))(y-x)||_q=\otimes$$ applying the estimate for the operator norm $$\otimes \leq \sup_{t\in[0,
|
|integration|derivatives|vector-analysis|matrix-norms|mean-value-theorem|
| 1
|
Closed form for $\int_0^1\log\log\left(\frac{1}{x}+\sqrt{\frac{1}{x^2}-1}\right)\mathrm dx$
|
Please help me to find a closed form for the following integral: $$\int_0^1\log\left(\log\left(\frac{1}{x}+\sqrt{\frac{1}{x^2}-1}\right)\right)\,{\mathrm d}x.$$ I was told it could be calculated in a closed form.
|
To calculate $$\int_0^1\log\left(\log\left(\frac{1}{x}+\sqrt{\frac{1}{x^2}-1}\right)\right)\,{\mathrm d}x.$$ let $x=e^{-y}$ , then $$\int\limits_{0}^{+\infty }{e^{-x}\ln \ln \left( e^{x}+\sqrt{e^{2x}-1} \right)dx}=\int\limits_{0}^{+\infty }{e^{-x}\ln \left( \cosh ^{-1}e^{x} \right)dx}=\int\limits_{1}^{+\infty }{\frac{\ln \left( \cosh ^{-1}x \right)}{x^{2}}dx}$$ $$=\int\limits_{0}^{+\infty }{\frac{\sinh x\ln x}{\cosh ^{2}x}dx}$$ Consider $$F\left( s \right)=\int\limits_{0}^{+\infty }{\frac{x^{s}\sinh x}{\cosh ^{2}x}dx}$$ and note that $$F'\left( 0 \right)=\int\limits_{0}^{+\infty }{\frac{\sinh x\ln x}{\cosh ^{2}x}dx}=\int\limits_{0}^{+\infty }{e^{-x}\ln \ln \left( e^{x}+\sqrt{e^{2x}-1} \right)dx}$$ then $$F\left( s \right)=\int\limits_{0}^{+\infty }{\frac{x^{s}\sinh x}{\cosh ^{2}x}dx}=s\int\limits_{0}^{+\infty }{\frac{x^{s-1}}{\cosh x}dx}=2s\int\limits_{0}^{+\infty }{\frac{e^{-x}x^{s-1}}{1+e^{-2x}}dx}$$ $$=2s\int\limits_{0}^{+\infty }{e^{-x}x^{s-1}\sum\limits_{n=0}^{+\infty }{\left( -e^{
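The substitution chain can be validated numerically: the original integral and $\int_0^{+\infty}\frac{\sinh x\ln x}{\cosh^2 x}\,dx$ should agree (a midpoint-rule sketch; the grid sizes and the tail cutoff at $40$ are arbitrary choices of mine):

```python
import math

# Composite midpoint rule; never evaluates the endpoints, which keeps the
# integrable log singularities out of trouble.
def midpoint(f, a, b, n):
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

# Original integral: note log(1/x + sqrt(1/x^2 - 1)) = acosh(1/x).
I1 = midpoint(lambda x: math.log(math.acosh(1.0 / x)), 0.0, 1.0, 400_000)
# Transformed integral: sinh(x) * ln(x) / cosh(x)^2 on (0, 40); the tail
# beyond 40 is below e^{-40} and can be ignored.
I2 = midpoint(lambda x: math.sinh(x) * math.log(x) / math.cosh(x) ** 2,
              0.0, 40.0, 400_000)
```

The two values agree to several decimal places, supporting the substitutions before any closed form is pursued.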
|
|calculus|integration|logarithms|definite-integrals|closed-form|
| 0
|
Continuity of Probability for Events with Density Greater Than a Threshold
|
I am exploring the conditions (if any exist) under which the function $g(b) = \mathbb{P}(\{\mathbf{x}\in\mathbb{R}^m: p(\mathbf{x}) \geq b\})$ is continuous, where $p$ is a PDF and $\mathbb{P}$ is the probability measure (from the same probability space). I think the conditions should be continuity of $p$ and $p$ having non-zero derivative a.s., but I am struggling to show this; possibly I am missing something major here. I know that $g$ must be monotone but that's where I am stuck.
|
Since $\lim_{y \to b+} \mathbb P(p(X) \ge y) = \mathbb P(p(X) > b)$ and $\lim_{y \to b-} \mathbb P(p(X) \ge y) = \mathbb P(p(X) \ge b)$ , the only issue is whether there can be $b$ such that $\mathbb P(p(X) = b) > 0$ . I claim this can't happen if $p$ is differentiable with $\nabla p \ne 0$ a.e. That is: Suppose $p$ is a function on $\mathbb R^d$ such that for almost every $x$ , $p$ is differentiable at $x$ with $\nabla p(x) \ne 0$ . Then for every $b$ , the ( $d$ -dimensional Lebesgue) measure $m \left(\{x : p(x) = b\}\right) = 0$ . Proof: Let $$B = \{x: p(x) = b,\ p\ \text{differentiable at}\ x,\ \nabla p \ne 0\}$$ If my claim is false, then $B$ has nonzero Lebesgue measure. By the Lebesgue density theorem, there is $x \in B$ such that for all sufficiently small $r > 0$ , more than half (in terms of Lebesgue measure) of the ball of radius $r$ centred at $x$ is in $B$ . Let $v = \nabla p(x)$ , which by assumption is nonzero. Since $p$ is differentiable at $x$ , for any $\varepsilon >
|
|probability|measure-theory|continuity|
| 1
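A quick Monte Carlo illustration of the claim (not part of the original answer): for the standard normal density, $\nabla p$ vanishes only at $x=0$, a null set, so $g(b)=\mathbb P(p(X)\geq b)$ should have no jumps. A sketch in Python with NumPy; the threshold $b=0.2$ and sample size are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Standard normal density: its gradient vanishes only at x = 0 (a null set),
# so by the answer's claim g(b) = P(p(X) >= b) should be continuous.
def p(x):
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

vals = p(rng.standard_normal(200_000))

def g(b):
    return float(np.mean(vals >= b))

# No jump at b = 0.2: thresholds just below and above give nearby values.
print(g(0.2 - 1e-3), g(0.2), g(0.2 + 1e-3))
```

The three printed probabilities differ only by roughly the Monte Carlo noise, consistent with $\mathbb P(p(X)=b)=0$.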
|
Let $f:V'\rightarrow V$ be a linear function and $A$ be an affine subspace in V, then $f^{-1}(A)$ is an affine subspace in $V'$
|
The fiber of a set is defined as: $f^{-1}(A) = \{x\in V' : f(x) \in A \}$ and the empty set is neither considered as a linear subspace nor an affine subspace. Let $A=a+U$ with $a \in V$ and $U$ being a linear subspace of $V$ . I have already proven that $f^{-1}(U)$ is a linear subspace of $V'$ , which I'm sure will come in handy. I think the idea is that $a$ is being transformed to a new $a'$ whereas $U$ will be transformed to a new subspace $f^{-1}(U)$ , but I can't fill in the gaps. $ f^{-1}(A)=f^{-1}(a+U) \,\longrightarrow\, a'+f^{-1}(U) $ Help would be really appreciated!
|
If $f^{-1}(A)$ is non-empty, choose some $a'$ in it. Then, $f(a')\in a+U$ , or equivalently: $$a+U=f(a')+U.$$ Therefore, $$x\in f^{-1}(A)\iff f(x)\in f(a')+U\iff f(x-a')\in U\iff x-a'\in f^{-1}(U),$$ so that $$f^{-1}(A)=a'+f^{-1}(U).$$
|
|linear-algebra|vector-spaces|linear-transformations|inverse-function|
| 1
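A concrete numerical check of the identity $f^{-1}(A)=a'+f^{-1}(U)$ from the answer (not part of the original post). The matrix $M$, the line $U=\operatorname{span}(u)$, and the point $a$ below are arbitrary illustrative choices; a sketch in NumPy:

```python
import numpy as np

rng = np.random.default_rng(1)

# f : R^3 -> R^2 given by a surjective matrix M, so f^{-1}(A) is non-empty.
M = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])
u = np.array([1.0, 1.0])      # U = span(u), a line in R^2
a = np.array([3.0, -1.0])     # A = a + U

# a' in f^{-1}(A): here we simply solve M a' = a (M is onto, so this works).
a_prime, *_ = np.linalg.lstsq(M, a, rcond=None)

# f^{-1}(U) is the null space of P M, where P projects off span(u).
P = np.eye(2) - np.outer(u, u) / (u @ u)
null_basis = np.linalg.svd(P @ M)[2][-2:]   # P M has rank 1, so null dim = 2

# Every point a' + w with w in f^{-1}(U) maps into A = a + U, as proved above.
for _ in range(100):
    w = null_basis.T @ rng.standard_normal(2)
    x = a_prime + w
    assert np.allclose(P @ (M @ x - a), 0.0, atol=1e-8)
print("f^{-1}(A) = a' + f^{-1}(U) verified on samples")
```

Membership in $a+U$ is tested by projecting $Mx-a$ off the direction $u$; the projection vanishing is exactly the condition $Mx\in a+U$.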
|
A condition to guarantee invertibility of a matrix $A=BCD$ for every invertible matrix $C$
|
I have an $m$ -by- $m$ matrix $A = BCD$ where $B$ is an $m$ -by- $n$ matrix, $C$ is an $n$ -by- $n$ matrix, and $D$ is an $n$ -by- $m$ matrix. My question is: what are the requirements on matrices $B$ and $D$ so that matrix $A$ is always invertible for every invertible matrix $C$ ? Note that $m < n$ and all matrices are real. If $m = n$ , I can look at the determinant, as $\det(A) = \det(B) \det(C) \det(D)$ . To make a matrix invertible, the determinant cannot be zero. Therefore, to make matrix $A$ always invertible for every invertible matrix $C$ , we can choose matrices $B$ and $D$ such that $\det(B) \neq 0$ and $\det(D) \neq 0$ . However, for the case $m < n$ , I don't know where to look.
|
There is no such condition. When $m < n$ , there always exists an invertible matrix $C$ such that $BCD$ is singular. Pick any nonzero vector $u$ . If $Du=0$ , then $BCD$ is singular for every $C$ because $BCDu=BC0=0$ . Suppose $Du\ne0$ . Since $m < n$ , the matrix $B$ has deficient column rank. Hence $Bv=0$ for some nonzero vector $v$ . As both $Du$ and $v$ are nonzero, there exists an invertible matrix $C$ such that $v=CDu$ . Now $BCD$ is singular because $BCDu=Bv=0$ .
|
|linear-algebra|matrices|inverse|
| 1
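The answer's construction can be carried out numerically (a sketch, not part of the original post). The invertible $C$ with $C(Du)=v$ is built here by mapping a basis whose first vector is $Du$ to a basis whose first vector is $v$; random completions are invertible almost surely:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 5                          # m < n

B = rng.standard_normal((m, n))      # column-rank deficient: some B v = 0
D = rng.standard_normal((n, m))

v = np.linalg.svd(B)[2][-1]          # nonzero v with B v ≈ 0
u = rng.standard_normal(m)           # generic u, so D u != 0
w = D @ u

# Invertible C with C(Du) = v: map a basis starting at Du to one starting at v.
Pm = np.column_stack([w, rng.standard_normal((n, n - 1))])
Q = np.column_stack([v, rng.standard_normal((n, n - 1))])
C = Q @ np.linalg.inv(Pm)
assert np.linalg.matrix_rank(C) == n     # C is invertible

A = B @ C @ D
print(np.linalg.norm(A @ u))         # ≈ 0: A kills u, so A is singular
```

Since $Au = BCDu = Bv \approx 0$ with $u\neq 0$, the product $A$ is singular even though $C$ is invertible, matching the proof.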
|
Help on understanding the particular integral
|
I need help with trying to understand the solution to $\operatorname{y}'' + \operatorname{y} = 5x{\rm e}^{2x}$ : Clearly the auxiliary equation is just $m^2 + 1 = 0$ , which is solved by $m= \pm {\rm i}$ , thereby giving us $A\cos\left(x\right) + B\sin\left(x\right)$ for our complementary function. The issue comes with the particular integral. So far, I have been taught to use the "most general form of the $RHS$ " and to "multiply by $x$ if the expected particular integral contains any term from the complementary function". Clearly the particular integral and the complementary function do not share any terms, so I would say the most general form of the $RHS$ is simply $\mu x{\rm e}^{2x}$ ; however, it turns out that I should have used $\mu x{\rm e}^{2x} + \lambda {\rm e}^{2x}$ . Could someone explain this please? Thanks!
|
$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{{\displaystyle #1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\on}[1]{\operatorname{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\sr}[2]{\,\,\,\stackrel{{#1}}{{#2}}\,\,\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} & \color{#44f}{\on{y}''\pars{x} + \on{y}\pars{x} = 5x\expo{2x}}:\ {\LARGE ?}.\ \mbox{Lets}\ \on{z}\pars{x} \equiv \on{y}'\pars{x} + \on{y}\pars{x}\ic \\[5mm] & \mbox{such that}\ \left\{\begin{array}{rcl} \ds{\on{y}''\pars{x} + \on{y}\pars{x}} & \ds{=} & \ds{\
|
|ordinary-differential-equations|
| 0
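A symbolic check of why the extra $\lambda e^{2x}$ term is needed (a sketch added alongside the answer, not part of the original post), assuming SymPy is available:

```python
import sympy as sp

x, mu = sp.symbols('x mu')

# Trial with only mu*x*e^{2x}: the residual keeps a stray e^{2x} term,
# so no single value of mu can satisfy the equation.
y1 = mu * x * sp.exp(2 * x)
r1 = sp.expand(sp.diff(y1, x, 2) + y1 - 5 * x * sp.exp(2 * x))
print(r1)  # residual 5*mu*x*e^{2x} + 4*mu*e^{2x} - 5*x*e^{2x} (up to term order)

# The fuller trial mu*x*e^{2x} + lambda*e^{2x} works with mu = 1, lambda = -4/5:
yp = (x - sp.Rational(4, 5)) * sp.exp(2 * x)
assert sp.simplify(sp.diff(yp, x, 2) + yp - 5 * x * sp.exp(2 * x)) == 0
```

The point is that differentiating $x e^{2x}$ produces an $e^{2x}$ term, so the trial family must be closed under differentiation: it needs both $x e^{2x}$ and $e^{2x}$.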
|
Are there objects that can't be elements of sets, or be sets themselves?
|
I heard of urelements, which don't contain elements; I've also heard of proper classes, which can't be elements of sets. Has anyone ever discussed objects that can't be elements of a set or be sets themselves?
|
In mathematics we do not consider questions like "are there objects that are XYZ?", which is a rather philosophical question. In mathematics we consider questions like "are there objects that are XYZ under ABC axioms?". Everything in maths is relative to some axiomatic theory. In particular the answer to this question depends on the theory you consider. The Zermelo-Fraenkel set theory (ZF) which is widely regarded as the standard set theory does not consider anything other than sets and relations between them. And so within this theory urelements and proper classes are not defined, are meaningless. Even when you consider "collection of all sets", in ZF we can only deduce that it is not a set. Such thing doesn't exist within the theory. That's all. But there are other theories that do consider such objects. For example ZF can be extended to Von Neumann–Bernays–Gödel set theory which does deal with proper classes (which is a "collection" that is too big to fit in any set) formally. A the
|
|set-theory|
| 0
|
Equivalence between the category $\text{Ho}(\mathcal{M})$ and $\text{Ho}(\mathcal{M}_{\mathcal{F}})$
|
Let $(\mathcal{M}, \mathcal{W}, \mathcal{C},\mathcal{F})$ be a model category and $\text{Ho}(\mathcal{M})$ its homotopy category. Now we consider $ \mathcal{M}_{\mathcal{F}}$ the full subcategory of fibrant objects. Now, I want to show that we have an equivalence between $\text{Ho}(\mathcal{M})$ and $\text{Ho}(\mathcal{M}_{\mathcal{F}})$ . For that I first construct a functor: $$\mathcal{M}_{\mathcal{F}} \xrightarrow{\subset} \mathcal{M}\xrightarrow{\lambda}\text{Ho}(\mathcal{M})$$ Where $\lambda$ is the localization functor. Then since by definition, this functor sends weak equivalences to isomorphisms, we get a functor $\text{Ho}(\mathcal{M}_{\mathcal{F}})\to \text{Ho}(\mathcal{M})$ . Now I want to construct a functor $\text{Ho}(\mathcal{M})\to \text{Ho}(\mathcal{M}_{\mathcal{F}})$ : For that I start to construct the following functor: $$R: \mathcal{M}\rightarrow \mathcal{M}_{\mathcal{F}} $$ We define $R$ on objects: we take $X\in \text{Ob}(\mathcal{M})$ , and consider the unique morphis
|
I don't think there is a substantial difference in complexity between showing fullness and faithfulness and showing an inverse functor exists. Note that a morphism $f:X\rightarrow Y$ gives rise to a commutative square of the form $$\begin{array}{ccc} X & \overset{f}{\longrightarrow} Y \longrightarrow & RY\\ \downarrow^\text{triv}_\text{cofib} &&\downarrow_\text{fib}\\ RX &-\!-\!-\!-\!\longrightarrow&\ast \end{array}$$ so you get a lift $RX \rightarrow RY$ . If you show that this lift is unique up to homotopy, you can use that either to define the inverse functor on morphisms, or deduce from it that the inclusion is faithful (it is obviously essentially surjective and full).
|
|category-theory|homotopy-theory|model-categories|
| 1
|
Pullback metric on sphere
|
I am learning differential geometry, and wanted to see a calculation for the round (induced) metric on the sphere $S^n$ . To do this, I wanted to consider the immersion $\iota:S^n \rightarrow \mathbb{R}^{n+1}$ , and consider the pullback formula for an immersion $\phi: M \rightarrow N$ and a Riemannian metric $g$ on $N$ given at each $p \in M$ by $$\phi^{*}g(v, w) = \langle \mathrm{d}\phi_p(v), \mathrm{d}\phi_p(w) \rangle$$ which in this case becomes $$\iota^{*}g(v, w) = \langle \mathrm{d}\iota_p(v), \mathrm{d}\iota_p(w) \rangle$$ and where the metric tensor is given on $\mathbb{R}^{n+1}$ by $g_{ij} = \delta_{ij}$ . I would appreciate seeing this calculation in full, since I don't know how to compute with pullbacks, and cannot find a reference. Thank you!
|
Consider the immersion $\varphi:S^n\hookrightarrow\mathbb{R}^{n+1}$ where we have; $$\varphi(\theta_1,\theta_2,\cdots\theta_{n})= \begin{bmatrix} x_1 \\ x_2 \\ \vdots\\ x_{n+1} \end{bmatrix}$$ Where our coordinate components, $x_i$ , are defined as; $$x_1=\cos\theta_1$$ $$x_2=\cos\theta_2\sin\theta_1$$ $$\vdots$$ $$x_k=\cos\theta_k\prod_{m=1}^{k-1}\sin\theta_m$$ $$\vdots$$ $$x_{n+1}=\prod_{m=1}^{n}\sin\theta_m$$ As you have noted, the pull back of our metric will be; $$\varphi^*g(v,w)=g(\varphi_*(v),\varphi_*(w))$$ Which will give us; $$\hat{g}_{ij}=g_{ab}\frac{\partial x_a}{\partial\theta_i}\frac{\partial x_b}{\partial\theta_j}$$ The metric, $g_{ab}$ , on $\mathbb{R}^{n+1}$ is simply equal to $\delta_{ab}$ . Therefore; $$\hat{g}_{ij}=\frac{\partial x_a}{\partial\theta_i}\frac{\partial x_a}{\partial\theta_j}$$ The diagonal elements of our tensor should be fairly easy for you to calculate on your own. You will find that; $$\hat{g}_{11}=\bigg(\frac{\partial x_a}{\partial \theta_1}\bigg)^
|
|differential-geometry|manifolds|riemannian-geometry|pullback|
| 1
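The calculation in the answer can be carried out symbolically for the $n=2$ case (a sketch added alongside the answer, not part of the original post), assuming SymPy is available:

```python
import sympy as sp

th1, th2 = sp.symbols('theta1 theta2')

# n = 2 case of the answer's parametrisation of S^n in R^{n+1}:
# x1 = cos(th1), x2 = sin(th1)*cos(th2), x3 = sin(th1)*sin(th2).
X = sp.Matrix([sp.cos(th1),
               sp.sin(th1) * sp.cos(th2),
               sp.sin(th1) * sp.sin(th2)])

J = X.jacobian([th1, th2])                # the differential d(iota)
g = (J.T * J).applyfunc(sp.simplify)      # pullback of the Euclidean metric
print(g)  # expect diag(1, sin(theta1)**2): the round metric on S^2
```

The matrix product $J^{T}J$ is exactly the coordinate formula $\hat g_{ij}=\frac{\partial x_a}{\partial\theta_i}\frac{\partial x_a}{\partial\theta_j}$ from the answer.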
|
Closed form for $\int_0^1\log\log\left(\frac{1}{x}+\sqrt{\frac{1}{x^2}-1}\right)\mathrm dx$
|
Please help me to find a closed form for the following integral: $$\int_0^1\log\left(\log\left(\frac{1}{x}+\sqrt{\frac{1}{x^2}-1}\right)\right)\,{\mathrm d}x.$$ I was told it could be calculated in a closed form.
|
To calculate $$\int_0^1\log\left(\log\left(\frac{1}{x}+\sqrt{\frac{1}{x^2}-1}\right)\right)\,{\mathrm d}x,$$ let $x=e^{-y}$ (and relabel $y$ as $x$); then $$\int\limits_0^\infty e^{-x}\log\left(\log\left(e^{x}+\sqrt{e^{2x}-1}\right)\right)dx \mathop{=}\limits^{\log\left(e^{x}+\sqrt{e^{2x}-1}\right)=y} 2\int\limits_0^\infty \frac{e^{y}\left(e^{2y}-1\right)}{\left(1+e^{2y}\right)^{2}}\log\left(y\right)dy$$ $$=2\int\limits_0^\infty \frac{e^{-y}\left(1-e^{-2y}\right)}{\left(1+e^{-2y}\right)^{2}}\log\left(y\right)dy=2\int\limits_0^\infty e^{-y}\left(1-e^{-2y}\right)\left(\sum\limits_{n=1}^\infty \left(-1\right)^{n-1}\cdot n\cdot e^{-2y\left(n-1\right)}\right)\log\left(y\right)dy$$ $$=2\sum\limits_{n=1}^\infty \left(-1\right)^{n-1}\cdot n\cdot\int\limits_0^\infty \left(e^{-y\left(2n-1}
|
|calculus|integration|logarithms|definite-integrals|closed-form|
| 0
|
Determining condition of coplanarity
|
Determine the value of $\lambda$ such that the vectors $$5\vec{a}+6\vec{b}+7\vec{c},\quad 7\vec{a}+\lambda\vec{b}+9\vec{c},\quad 3\vec{a}+20\vec{b}+5\vec{c}$$ are coplanar, given that $\vec{a},\vec{b},\vec{c}$ are non-coplanar. For the vectors to be coplanar, there must exist scalars $x_1,x_2,x_3$ , not all $0$ , such that their linear combination is $0$ . Then after rearranging, the condition we get is $$(5x_1+7x_2+3x_3)\vec{a}+(6x_1+\lambda x_2+20x_3)\vec{b}+(7x_1+9x_2+5x_3)\vec{c}=0$$ Since $\vec{a},\vec{b},\vec{c}$ are non-coplanar, each of the coefficients must be $0$ . But here we are getting $3$ equations for $4$ variables (the fourth one being $\lambda$ ). So how can $\lambda$ be determined? Can I assign any one of $x_1,x_2,x_3$ any value I like? For example, can I make $x_1$ or $x_3$ equal to $0$ or $1$ or any other number? I am really confused.
|
There's another approach which is imposing $$\mathbf u\cdot(\mathbf v\wedge\mathbf w)=0,$$ where $$\left\{\begin{align} &\mathbf u=5\mathbf a+6\mathbf b+7\mathbf c\\ &\mathbf v=7\mathbf a+\lambda\mathbf b+9\mathbf c\\ &\mathbf w=3\mathbf a+20\mathbf b+5\mathbf c \end{align}\right.$$ Let us now substitute and proceed: $$\begin{align} (5\mathbf a+6\mathbf b+7\mathbf c)\cdot((7\mathbf a+\lambda\mathbf b+9\mathbf c)\wedge(3\mathbf a+20\mathbf b+5\mathbf c)) &\overset{(1)}{=}(5\mathbf a+6\mathbf b+7\mathbf c)\cdot((140-3\lambda)\mathbf a\wedge\mathbf b+8\mathbf a\wedge\mathbf c+(5\lambda-180)\mathbf b\wedge\mathbf c)\\ &\overset{(2)}{=}7(140-3\lambda)\mathbf c\cdot(\mathbf a\wedge\mathbf b)+48\mathbf b\cdot(\mathbf a\wedge\mathbf c)+5(5\lambda-180)\mathbf a\cdot(\mathbf b\wedge\mathbf c)\\ &\overset{(3)}{=}[7(140-3\lambda)-48+5(5\lambda-180)]\underbrace{\mathbf a\cdot(\mathbf b\wedge\mathbf c)}_{\neq 0\, (4)}\overset{!}{=}0\\ &\iff 980-21\lambda-48+25\lambda-900=0\\ &\implies 32+4\lambda=0\
|
|linear-algebra|geometry|vector-analysis|plane-geometry|
| 1
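The scalar-triple-product condition in the answer is equivalent to a vanishing determinant of the coefficient matrix, which is quick to check symbolically (a sketch, not part of the original answer), assuming SymPy is available:

```python
import sympy as sp

lam = sp.symbols('lambda')

# a, b, c are linearly independent, so the three vectors are coplanar
# exactly when the 3x3 matrix of their coefficients is singular.
M = sp.Matrix([[5, 6, 7],
               [7, lam, 9],
               [3, 20, 5]])
print(sp.solve(sp.det(M), lam))  # [-8]
```

Expanding the determinant reproduces the answer's equation $32+4\lambda=0$, hence $\lambda=-8$.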
|
How many dimensions does it take to model language for AI?
|
I was watching the video Using AI to Decode Animal Communication with Aza Raskin and he talks about converting semantic relationships between words into geometric relationships. I don't think he says it in that video (but I feel like he strongly implies it), but in a Guardian article , they explicitly state that they are using multi-dimensional geometry. This led me to wonder "how many dimensions". I tried looking this up on my own, but my knowledge of math is limited to 10th grade, and I have zero knowledge of computer programming, so when I read papers that appear to be related, I am utterly lost. Also, I can't find the original paper by Raskin's team. I was hoping I could skim through them for something concrete like "it took x numbers of dimensions to model the English/Spanish/German/etc. language", but no such luck. I am not even sure if thinking about the dimensions in terms of real numbers makes sense. So, assuming the answer is as simple as a single number, how many dimensions
|
Perhaps a simplified model would make this more intuitively clear. Suppose you have an unknown real valued function $y = f(x)$ that you want to approximate using a set of data given by $(x_i, y_i)$ pairs. If the function is linear, then you only need two pairs and then solve two linear equations to get the coefficients of the linear function. If the function is given by a polynomial, then the pairs needed to solve for the coefficients of the polynomial must be at least the number of coefficients of the polynomial. The dimension of the space of possible polynomial functions in this case is equal to the number of coefficients needed to be specified. However, if the data is approximate to begin with and the function is known to be approximately linear, then you can use linear regression to get the best linear function to approximate the data. This reduces the dimension of the space of approximating functions while using a worse approximation. This tradeoff between complexity (in this case
|
|geometry|analytic-geometry|
| 0
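The polynomial analogy in the answer can be made concrete (a sketch, not part of the original post): a degree-$d$ polynomial fit lives in a $(d+1)$-dimensional space of functions, and adding dimensions buys a closer fit at the cost of complexity. The data below are synthetic, assuming NumPy is available:

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of an (unknown) roughly-linear function.
x = np.linspace(0, 1, 50)
y = 2 * x + 1 + 0.05 * rng.standard_normal(50)

# A degree-d fit has d+1 coefficients, i.e. lives in a (d+1)-dim space.
resids = {}
for d in (1, 3, 9):
    coeffs = np.polyfit(x, y, d)
    resids[d + 1] = np.sum((np.polyval(coeffs, x) - y) ** 2)
    print(d + 1, "dimensions, residual", resids[d + 1])
```

The residual can only decrease as the dimension grows, which is the complexity-versus-fit tradeoff the answer describes; word-embedding dimensions play the analogous role for language models.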
|
Find all Sylow 3-subgroups of $S_3\times S_3$
|
Find all Sylow $3$-subgroups of $S_3\times S_3$ . This is what I have found so far: since $o(S_3\times S_3)=36=2^2\cdot 3^2$ , Sylow $3$-subgroups have order $9$ . If $n_3$ is the number of Sylow $3$-subgroups, then $n_3\mid 4$ and $3\mid(n_3 - 1)$ . Hence $n_3$ is $1$ or $4$ . Now how can I find at least one subgroup of order $9$ ?
|
The elements of order $3$ in $S_3\times S_3$ are: \begin{alignat}{1} &((123),()) \\ &((132),()) \\ &((),(123)) \\ &((),(132)) \\ &((123),(123)) \\ &((123),(132)) \\ &((132),(123)) \\ &((132),(132)) \\ \end{alignat} Jointly with $()$ , they form a closed subset of $S_3\times S_3$ , which is then a subgroup of order $9$ , say $P_9$ , clearly isomorphic to $C_3^2$ . As there isn't any other element of order $3$ , $P_9$ is unique.
|
|group-theory|finite-groups|sylow-theory|
| 0
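The count in the answer can be verified computationally (a sketch, not part of the original answer), modelling $S_3\times S_3$ as permutations of $\{0,1,2\}$ and $\{3,4,5\}$ moved independently, assuming SymPy's combinatorics module:

```python
from sympy.combinatorics import Permutation, PermutationGroup

# S3 x S3 realised as permutations acting on {0,1,2} and {3,4,5} separately.
G = PermutationGroup([Permutation(0, 1, 2, size=6),
                      Permutation(0, 1, size=6),
                      Permutation(3, 4, 5, size=6),
                      Permutation(3, 4, size=6)])
assert G.order() == 36

# The elements of order 3, together with the identity, form a subgroup
# of order 9 -- the unique Sylow 3-subgroup described in the answer.
order3 = [g for g in G.elements if g.order() == 3]
P = PermutationGroup(order3)
print(len(order3), P.order())  # 8 9
```

Eight elements of order $3$ plus the identity give the subgroup $P_9\cong C_3\times C_3$, and since there are no other order-$3$ elements, $n_3=1$.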
|
Does the opposite direction of alpha congruence in applications hold?
|
The forward direction is: if $s \equiv_\alpha s' \land t \equiv_\alpha t'$ , then $st \equiv_\alpha s't'$ . I'm wondering if this holds: if $st \equiv_\alpha s' t'$ then $s \equiv_\alpha s' \land t \equiv_\alpha t'$ . I was trying to come up with a counterargument, where if for any term $s t$ , there exists a way to rewrite it as: $s'' t''$ , where $s'' \not \equiv s' \lor t'' \not\equiv t'$ , then this property clearly fails. But I haven't found any way to do this, as expanding the terms seem to keep the terms encapsulated inside $s$ or $t$ . Maybe I'm missing something. Thanks!
|
In an inductive definition of α-equivalence, there is a rule $$\dfrac{s =_α s' \quad t =_α t'}{st =_α s't'}$$ and this is the only way to deduce that two applications are α-equivalent. Hence what you try to (dis)prove is true by definition. Alternatively, you could have these two rules: $$\dfrac{s =_α s'}{st =_α s't} \qquad \dfrac{t =_α t'}{st =_α st'}$$ then similarly the statement holds.
|
|lambda-calculus|
| 0
|
Where did I go wrong? Integration by parts
|
So I was looking at the integral $\int{f(x)\cdot\frac{d}{dx}(\frac{df(x)}{dx})dx}$ and I got the following using the DI method (for those who don't know it, search "DI bprp" on YouTube): choosing $f(x)$ for D and $\frac{d}{dx}(\frac{df(x)}{dx})$ for I, we get on the first row $\frac{df(x)}{dx}$ for both D and I, and continuing another row (and switching the sign from $+$ to $-$) we get $\frac{d}{dx}(\frac{df(x)}{dx})$ for D and $f(x)$ for I. By the rules of the DI method we would get $\int{f(x)\cdot\frac{d}{dx}(\frac{df(x)}{dx})dx} = f(x)\cdot\frac{df(x)}{dx} -\int{f(x)\cdot\frac{d}{dx}(\frac{df(x)}{dx})dx}$. By adding $\int{f(x)\cdot\frac{d}{dx}(\frac{df(x)}{dx})dx}$ to both sides we get $2\int{f(x)\cdot\frac{d}{dx}(\frac{df(x)}{dx})dx} = f(x)\cdot\frac{df(x)}{dx}$, and so $\int{f(x)\cdot\frac{d}{dx}(\frac{df(x)}{dx})dx} = \frac{f(x)\cdot\frac{df(x)}{dx}}{2}$. But this is clearly wrong: even just taking $f(x)=x^2$, so that $f'(x)=2x$ and $f''(x)=2$, we can see that the "formula" I got is wrong
|
With the DI method, you will get $$\begin{array}{ccc} +&f(x)&f^{\prime\prime}(x)\\ -&f^\prime(x)&f^{\prime}(x)\\ +&f^{\prime\prime}(x)&f(x) \end{array}$$ And so $$\int f(x)f^{\prime\prime}(x)\,\mathrm dx=f(x)f^\prime(x)-f^\prime(x)f(x)+\int f(x)f^{\prime\prime}(x)\,\mathrm dx,$$ which is true, but a useless equation. You have made sign errors in your attempt. If you carefully go through your second solution as well, you will notice that you took the wrong sign in the last equation, and so instead of the integral cancelling, it instead added up to give something like $2\int(\cdots)=0$ . Hope this helps. :)
|
|integration|
| 1
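A symbolic check of both the bogus formula and the correct by-parts identity, using the questioner's test function $f(x)=x^2$ (a sketch added alongside the answer, not part of the original post), assuming SymPy is available:

```python
import sympy as sp

x = sp.symbols('x')
f = x**2  # the test function from the question

# The (wrong) formula derived in the question: ∫ f f'' dx = f f' / 2
claimed = f * sp.diff(f, x) / 2                  # x**3
# What the integral actually is for f = x^2: ∫ x^2 * 2 dx
actual = sp.integrate(f * sp.diff(f, x, 2), x)   # 2*x**3/3
print(claimed, actual)

# The correct by-parts identity keeps the minus sign: ∫ f f'' = f f' - ∫ (f')^2
lhs = actual
rhs = f * sp.diff(f, x) - sp.integrate(sp.diff(f, x) ** 2, x)
assert sp.simplify(lhs - rhs) == 0
```

The claimed formula gives $x^3$ while the true antiderivative is $2x^3/3$, and the final assertion confirms the sign-corrected identity.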
|