| title (string) | question_body (string) | answer_body (string) | tags (string) | accepted (int64) |
|---|---|---|---|---|
Is there a definite integral that yields $e^\pi$ or $e^{-\pi}$ in a non-trivial way?
|
The title says it all. No trivial answers like $\int_0^\pi e^tdt$ please. The idea is rather, if there are integrals like $$\int\limits_0^\infty \frac{t^{2n}}{\cosh t}dt=(-1)^{n}\left(\frac{\pi}2\right)^{2n+1}E_{2n}$$and $$\int\limits_0^\infty \frac{t^{2n-1}e^{-t}}{\cosh t}dt=(-1)^{n-1}\frac{2^{2n-1}-1}{n}\left(\frac{\pi}2\right)^{2n}B_{2n}$$ (here, $E_{2n}$ and $B_{2n}$ are Euler and Bernoulli numbers), there should also be integrals of similar type that yield $e^\pi$ or $e^{-\pi}$. Certainly not by means of the given ones. Any ideas?
|
What about these? $$\int_{0}^{1} \left(\frac{5}{2} \left((x - \sqrt{x^2 - 1})^{2i} + x^4\right) - 1\right) \, dx = e^{\pi}\tag{1}$$ $$\int_{0}^{1} \left( -\frac{5}{2\left( x - \sqrt{x^2 - 1}\right)^{2i}} - \frac{5x^4}{2} + 1 \right) \, dx = e^{-\pi}\tag{2}$$
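A quick numeric sanity check of identity (1), evaluated with the principal complex power (for $0<x<1$, $(x-\sqrt{x^2-1})^{2i}=e^{2\arccos x}$). The midpoint rule and the step count are arbitrary choices of mine:

```python
import cmath
import math

def integrand(x):
    # principal branch: (x - sqrt(x^2-1))^(2i); real-valued for 0 < x < 1
    z = (x - cmath.sqrt(x * x - 1)) ** 2j
    return 2.5 * (z.real + x ** 4) - 1.0

N = 100_000  # midpoint rule on [0, 1]
approx = sum(integrand((k + 0.5) / N) for k in range(N)) / N
print(approx, math.exp(math.pi))
```

The two printed values agree to several decimal places, consistent with the claimed identity.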
|
|integration|exponentiation|
| 0
|
Linear Algebra Doubt regarding matrix of transformation wrt basis B and C
|
Let $T:V\to W$ be a linear transformation, where $V$ and $W$ are vector spaces over a field $F$. What is the matrix of $T$ with respect to the bases $B$ (of $V$) and $C$ (of $W$)? Is it $C^{-1}T(B)$? This is the same as writing $[T(v_1),\dots,T(v_n)]$ where $B=(v_1,v_2,\dots,v_n)$ and each $T(v_i)$ is written in coordinates with respect to the basis $C$. (I have written my answer below.)
|
Let $T(v)=w$, with $v=BX$ and $w=CY$. Let $AX=Y$ (such an $A$ exists: take $A=[T(v_1),\dots,T(v_n)]$ where $B=(v_1,\dots,v_n)$ and each $T(v_i)$ is written in coordinates with respect to the basis $C$). Then $A$ is the matrix of the transformation: $T(BX)=CY$, and $T(BX) = T(B)X = CY = CAX$ for all $X$. Therefore $T(B) = CA$, which implies $A = C^{-1}T(B)$.
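A concrete $2\times2$ check of $A=C^{-1}T(B)$; the matrices $M$, $B$, $C$ below are invented examples (columns of $B$ and $C$ are the basis vectors, $M$ is $T$ in the standard bases):

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def inv2(M):
    # inverse of a 2x2 matrix via the adjugate
    a, b = M[0]
    c, d = M[1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

M = [[1, 2], [3, 4]]   # T in the standard bases of V and W
B = [[1, 1], [0, 1]]   # basis of V, as columns
C = [[2, 0], [1, 1]]   # basis of W, as columns
A = matmul(inv2(C), matmul(M, B))   # matrix of T w.r.t. B and C
# sanity check: C*A must reproduce T applied to the columns of B
print(A, matmul(C, A), matmul(M, B))
```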
|
|vector-spaces|linear-transformations|
| 0
|
$\sigma$-algebra generated by a finite set $S$
|
What might the Borel $\sigma$-algebra generated by a finite set $S$ look like?
|
I think we have to answer without the assumption of the discrete topology, i.e. without identifying our topology with the power set of our space. Consider, for example, the space $Ω=\{a,b\}$ and the topology $\mathcal T=\{\varnothing, Ω, \{a\}\} \neq \mathcal P (Ω)$. We then construct $\mathcal B (Ω)$ (recalling that $\mathcal B (Ω)$ is the minimal $σ$-algebra containing $\mathcal T$) as follows: We must have $\mathcal T \subseteq \mathcal B(Ω)$ and $\varnothing \in \mathcal B(Ω)$, so we get $\varnothing, Ω, \{a\} \in \mathcal B(Ω)$. We also need $\{a\}^c=\{b\}\in \mathcal B(Ω)$ (closure under complements). Finally, closure under countable unions is satisfied by the above, thus $\mathcal B(Ω)=\mathcal P (Ω)$. Of course this result is not universal for finite spaces: if $Ω=\{a,b,c\}$ and the topology is as above, then obviously $\mathcal B(Ω)\neq\mathcal P (Ω)$. Consequently, the Borel sets depend on the topology.
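On a finite space the closure under complements and (finite) unions can be brute-forced; the fixpoint loop below is my own sketch, the two examples are the answer's:

```python
def sigma_algebra(omega, sets):
    """Smallest family containing `sets` closed under complement and union."""
    omega = frozenset(omega)
    family = {frozenset(s) for s in sets} | {frozenset(), omega}
    changed = True
    while changed:
        changed = False
        for s in list(family):
            comp = omega - s                  # close under complements
            if comp not in family:
                family.add(comp); changed = True
        for s in list(family):
            for t in list(family):
                u = s | t                     # close under unions
                if u not in family:
                    family.add(u); changed = True
    return family

b2 = sigma_algebra({'a', 'b'}, [{'a'}])        # gives the full power set
b3 = sigma_algebra({'a', 'b', 'c'}, [{'a'}])   # gives only 4 of the 8 subsets
print(len(b2), len(b3))
```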
|
|measure-theory|borel-sets|borel-measures|
| 0
|
How to calculate this sum $\sum_{n=1}^{\infty} \frac{H_n \cdot H_{n+1}}{(n+1)(n+2)}$
|
How to calculate this sum $$\sum_{n=1}^{\infty} \frac{H_n \cdot H_{n+1}}{(n+1)(n+2)}$$ Attempt The series telescopes. We have $$=\frac{H_n \cdot H_{n+1}}{(n+1)(n+2)} = \frac{H_n \cdot H_{n+1}}{n+1} - \frac{H_n \cdot H_{n+1}}{n+2}$$ $$=\frac{H_n(H_n + \frac{1}{n+1})}{n+1} - \frac{(H_{n+1} - \frac{1}{n+1}) \cdot H_{n+1}}{n+2}$$ $$=\frac{H_n^2}{n + 1} - \frac{H_{n+1}^2}{n + 2} + \frac{H_n}{(n + 1)^2}+ \frac{H_{n+1}}{(n + 1)(n + 2)}$$ $$=\frac{H_n^2}{n + 1} - \frac{H_{n+1}^2}{n + 2} + \frac{H_{n+1} - \frac{1}{n+1}}{(n+1)^2} + - \frac{H_{n+1}}{n+1} - \frac{H_{n+1}}{n+2}$$ $$=\frac{H_n^2}{n + 1} - \frac{H_{n+1}^2}{n + 2} + \frac{H_{n+1}}{(n+1)^2} - \frac{1}{(n+1)^3} + \frac{H_{n + 1}}{n + 1} - \frac{H_{n + 2}}{n + 2} + \frac{1}{{(n + 2)^2}}$$ It follows that $$\sum_{n=1}^{\infty} \frac{H_n \cdot H_{n+1}}{(n+1)(n+2)} = \sum_{n=1}^{\infty} \left(\frac{H_{n}^2}{n + 1} - \frac{H_{n+1}^2}{(n+2)^2}\right) + \sum_{n=1}^{\infty} \frac{H_{n+1}}{(n+1)^2} - \sum_{n=1}^{\infty} \frac{1}{(n+1)^3}$$
|
We use the following summation by parts formula (twice): $$\sum_{n=1}^\infty a_{n+1}(b_{n+1}-b_n) = -a_1 b_1 - \sum_{n=1}^\infty (a_{n+1}-a_n)b_n \tag1\label1$$ First take $a_n=H_{n-1}H_n$ and $b_n=-1/(n+1)$ in \eqref{1} to obtain \begin{align} \sum_{n=1}^\infty \frac{H_n H_{n+1}}{(n+1)(n+2)} &= \sum_{n=1}^\infty H_n H_{n+1}\left(\frac{1}{n+1}-\frac{1}{n+2}\right) \\ &= \frac{H_0 H_1}{2} - \sum_{n=1}^\infty (H_n H_{n+1}-H_{n-1}H_n)\frac{-1}{n+1} \\ &= 0 + \sum_{n=1}^\infty \frac{H_n (H_{n+1}-H_{n-1})}{n+1} \\ &= \sum_{n=1}^\infty \frac{H_n}{n+1}\left(\frac{1}{n+1}+\frac{1}{n}\right) \\ &= \sum_{n=1}^\infty \frac{H_n}{(n+1)^2} + \sum_{n=1}^\infty \frac{H_n}{n(n+1)} \\ &= \zeta(3) + \sum_{n=1}^\infty H_n\left(\frac{1}{n}-\frac{1}{n+1}\right). \end{align} Now take $a_n=H_{n-1}$ and $b_n=-1/n$ in \eqref{1} to obtain $$\zeta(3) + H_0 + \sum_{n=1}^\infty (H_n-H_{n-1})\frac{1}{n} =\zeta(3) + 0 + \sum_{n=1}^\infty \frac{1}{n^2} = \zeta(3) + \frac{\pi^2}{6}.$$
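A partial-sum check that the series equals $\zeta(3)+\pi^2/6\approx2.846991$; the cutoff $N$ and tolerance are ad hoc (the tail is of order $(\ln N)^2/N$):

```python
import math

N = 200_000
H = 0.0       # harmonic number H_n, updated incrementally
total = 0.0
for n in range(1, N + 1):
    H += 1.0 / n
    H_next = H + 1.0 / (n + 1)          # H_{n+1}
    total += H * H_next / ((n + 1) * (n + 2))

target = 1.2020569031595943 + math.pi ** 2 / 6   # zeta(3) + pi^2/6
print(total, target)
```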
|
|sequences-and-series|summation|closed-form|harmonic-numbers|euler-sums|
| 1
|
Will negative bases with irrational exponents get a real or imaginary number?
|
Here are a few examples: $$(-1)^{\sqrt{2}},(-2)^{\pi},(-3)^{e}$$ From what I've learned, negative bases must have denominators of the exponent odd. Normally if we do $(-2)^{0.258}$ it would be the same as $(-2)^{50/129}$. That means that since $129$ is odd, it will give a real number. But for irrational numbers I'm not sure if we can determine if it's going to be odd on the denominator or not because you can't convert it to a fraction.
|
Again, let us use Euler's identity. If $e$ is the base of the natural logarithm and $i^2=-1$, then: $$e^{i\pi}+1=0$$ We can conclude that for all integers $z$: $$\ln(-1)=i\pi(2z+1)$$ and thus: $$\ln(-r)=\ln(r)+i\pi(2z+1)$$ Now, let $n$ be an irrational number and $H$ be the arc measure of a half-circle ($180$ degrees, $\pi$ radians, $200$ grads, $2$ quadrants, etc.). Applying a generalized version of Euler's formula: $$(-r)^n=\left(e^{\ln(-r)+i2\pi z}\right)^n=\left(e^{\ln(r)+i\pi+i2\pi z}\right)^n=\left(e^{\ln(r)+i\pi(2z+1)}\right)^n=e^{n\ln(r)+i\pi n(2z+1)}=r^n\cos(Hn(2z+1))+ir^n\sin(Hn(2z+1))$$ Now, the zeroes of the sine function are integer multiples of a half-circle. $n$ being irrational and $2z+1$ being rational and always nonzero ($2z+1=0$ has no integer solution), their product is irrational, so the sine is nonzero for all integers $z$ and the imaginary part cannot vanish. (Fun fact: at least one of $r$, $n$, or $(-r)^n$ must be transcendental, per the Gelfond–Schneider theorem.)
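A quick check on the principal branch ($z=0$): $(-2)^\pi=e^{\pi\operatorname{Log}(-2)}$ has modulus $2^\pi$ and a nonvanishing imaginary part, as the argument above predicts:

```python
import cmath
import math

# principal value of (-2)^pi, via exp(pi * Log(-2)) with Log(-2) = ln 2 + i*pi
w = cmath.exp(math.pi * cmath.log(-2))
print(w)
```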
|
|exponentiation|irrational-numbers|
| 0
|
Prove the inequality for x,y real numbers
|
I'm trying to prove this inequality using the triangle inequality, but unfortunately haven't had much luck. I feel like I can rewrite the left hand side into something usable, but I don't know what to. The inequality is for x,y real numbers: $$\frac{|x+y|}{1+|x+y|}\leq\frac{|x|}{1+|x|} + \frac{|y|}{1+|y|}$$ Any help or solution would be appreciated.
|
Consider $ \frac{|x+y|}{1+|x+y|} = 1-\frac{1}{1+|x+y|} \leq 1-\frac{1}{1+|x|+|y|}$ by the triangle inequality: enlarging the denominator only increases the expression. That equals $ \frac{|x|+|y|}{1+|x|+|y|} = \frac{|x|}{1+|x|+|y|} + \frac{|y|}{1+|x|+|y|} \leq \frac{|x|}{1+|x|} +\frac{|y|}{1+|y|}$, which proves the required inequality.
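A random spot-check of the inequality (illustrative only, not a proof; the sample range and count are arbitrary):

```python
import random

def h(t):
    # t/(1+t) is increasing on [0, oo), which drives the proof
    return t / (1 + t)

random.seed(0)
ok = all(h(abs(x + y)) <= h(abs(x)) + h(abs(y)) + 1e-12
         for x, y in ((random.uniform(-10, 10), random.uniform(-10, 10))
                      for _ in range(10_000)))
print(ok)
```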
|
|real-analysis|inequality|triangle-inequality|
| 0
|
Midsegment of trapezoid
|
$ABCD$ is a right trapezoid with $AB \parallel CD$, $\sphericalangle A =\sphericalangle D =90^\circ$, $AC\perp CB$, and $AD=\sqrt{15}$ cm. If $3AB=8CD$, then what is the midsegment equal to? I know that the midsegment is $MN=\frac{AB+CD}{2}$; how do I get the values of $AB$ and $DC$?
|
Let $DC=3t$ and $AB=8t$, so that $3AB=8CD$ holds, with $AD=\sqrt{15}$. Use the Pythagorean theorem twice. First, $$AC^2=AD^2+DC^2=15+9t^2.$$ Since $AC\perp CB$, triangle $ACB$ is right-angled at $C$, so $$CB^2=AB^2-AC^2=64t^2-(15+9t^2)=55t^2-15.$$ On the other hand, dropping the perpendicular from $C$ onto $AB$ gives $$CB^2=AD^2+(AB-DC)^2=15+25t^2.$$ Equating the two expressions: $55t^2-15=15+25t^2$, so $t^2=1$ and $t=1$. Hence $DC=3$ cm, $AB=8$ cm, and the midsegment is $$MN=\frac{AB+DC}{2}=\frac{11}{2}=5.5 \text{ cm}.$$
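An independent coordinate sanity check (the placement of the vertices below is my own choice): taking $DC=3$, $AB=8$, $AD=\sqrt{15}$ satisfies $3AB=8CD$ and makes $AC\perp CB$, giving midsegment $5.5$:

```python
import math

A = (0.0, 0.0)
B = (8.0, 0.0)              # AB = 8
D = (0.0, math.sqrt(15.0))  # AD = sqrt(15); right angles at A and D
C = (3.0, math.sqrt(15.0))  # DC = 3, CD parallel to AB

AC = (C[0] - A[0], C[1] - A[1])
CB = (B[0] - C[0], B[1] - C[1])
dot = AC[0] * CB[0] + AC[1] * CB[1]   # 0 iff AC is perpendicular to CB
midsegment = (8.0 + 3.0) / 2
print(dot, midsegment)
```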
|
|geometry|
| 0
|
Deriving asymptotic for the roots of Digamma Function
|
So Wikipedia gave these asymptotics for the Digamma function: $$x_n=-n+\frac12+O\left(\frac1{(\ln n)^2}\right)$$ $$x_n\approx-n+\frac1\pi\arctan\left(\frac{\pi}{\ln n}\right)$$ $$x_n\approx-n+\frac1\pi\arctan\left(\frac{\pi}{\ln n+\frac1{8n}}\right)$$ I'm interested in deriving the second one. All Wikipedia says is to use the reflection formula $0=\psi(1-x_n)=\psi(x_n)+\pi\cot(\pi x_n)$ and then substitute the asymptotic expansion of the digamma function. Firstly, because $x_n$ is the $n$ th largest root of the Digamma function, $\psi(1-x_n)$ doesn't (and shouldn't) need to be equal to zero. So I think the correct expression to analyze is $0=\psi(1-x_n)-\pi\cot(\pi x_n)$ . Substituting the first term in the asymptotic of digamma we get $0\approx\ln(1-x_n)-\pi\cot(\pi x_n)$ . I did solve for $x_n$ from the cotangent term and got that $$x_n\approx\frac1{\pi}\arctan\left(\frac\pi{\ln(1-x_n)}\right)$$ But I don't know what to do next.
|
After looking at a similar post, it seems to be a much better idea if we write $\psi(1+x_n)=\psi(-x_n)-\pi\cot(\pi x_n)$ and use the recurrence relation of $\psi$ to get $\frac1{x_n}=\psi(-x_n)-\pi\cot(\pi x_n)\sim\log(-x_n)-\pi\cot(\pi x_n)$ . Because $\frac1{x_n}\sim0$ we just get $\log(-x_n)-\pi\cot(\pi x_n)\sim0$ . Let $x_n=-n-f(n)$ where $f(n)$ is a bounded function, then we get that $\log(n+f(n))\sim\log(n)$ . Plugging this definition into the last asymptotic, we get that $f(n)\sim-\frac1\pi\arctan\left(\frac\pi{\ln n}\right)$ and so $x_n=-n-f(n)\sim-n+\frac1\pi\arctan\left(\frac\pi{\ln n}\right)$ as desired.
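A numeric cross-check of the asymptotic at $n=10$. The pure-Python digamma below (shift by the recurrence, then an asymptotic series, with the reflection formula for negative arguments) is a standard numerical approach, not something from the post:

```python
import math

def digamma(x):
    """psi(x) via recurrence + asymptotic series; reflection for x < 0."""
    if x < 0:
        # reflection: psi(x) = psi(1 - x) - pi*cot(pi*x)
        return digamma(1.0 - x) - math.pi / math.tan(math.pi * x)
    s = 0.0
    while x < 6.0:            # recurrence psi(x) = psi(x + 1) - 1/x
        s -= 1.0 / x
        x += 1.0
    return (s + math.log(x) - 1.0 / (2.0 * x)
            - 1.0 / (12.0 * x ** 2) + 1.0 / (120.0 * x ** 4)
            - 1.0 / (252.0 * x ** 6))

n = 10
lo, hi = -n + 1e-9, -n + 1.0 - 1e-9   # psi runs from -inf to +inf here
for _ in range(100):                   # bisection for the root x_n
    mid = (lo + hi) / 2.0
    if digamma(mid) < 0.0:
        lo = mid
    else:
        hi = mid
root = (lo + hi) / 2.0
approx = -n + math.atan(math.pi / math.log(n)) / math.pi
print(root, approx)
```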
|
|asymptotics|roots|digamma-function|
| 0
|
Some questions about proof that $n \nmid 2^n - 1$ for any natural number $n > 1$
|
I have a few questions about the proof that $n \nmid 2^n - 1$ for any natural number $n > 1$ I found the solution, but I have some questions and wanted to make sure I understood everything correctly. Thank you in advance. Proof By Contradiction : Suppose, to the contrary, that an integer $n > 1$ exists such that $n \mid 2^n - 1$ . Since $2^n - 1$ is odd, $n$ must be odd too. Let's take $p$ to be the smallest prime dividing $n$ (Q1 : Why do we take a smallest prime number?) . Then, $p \mid 2^n - 1$ (rule: if $c \mid b$ and $b \mid a$ , then $c \mid a$ ) and $p \mid 2^{(p-1)} - 1$ (Q2 : Why the exponent here is $p-1$ ?) . Hence, $p \mid 2^d - 1$ , where $d := \gcd(n,p-1)$ (Because $\gcd(a^k - 1, a^l - 1) = a^{\gcd(k,l)} - 1$ . Since $p \mid 2^n - 1$ and $p \mid 2^{(p-1)} - 1$ , then $p \mid \gcd(2^n - 1, 2^{(p-1)} - 1) = 2^{\gcd(n, p-1)} - 1$ ). However, as $p$ is the smallest prime divisor of $n$ , we have $gcd(n,p-1) = 1$ . (Q3 : If I understood correctly, this is always true?) Hence,
|
This solution is overkill, but I can answer your queries. (The proof is by contradiction.) First, by Fermat's little theorem, $$a^{p-1} \equiv 1 \pmod{p} \quad \text{whenever } \gcd(a,p) = 1.$$ Since $n$ is odd, the smallest prime divisor $p$ cannot be $2$, so $a = 2$ gives $2^{p-1} \equiv 1 \pmod{p}$; this is where the exponent $p-1$ comes from (Q2). (The next part can be done by the method in the solution; I think you don't have any query with that.) Now, if you know about orders: if $\operatorname{ord}_p(2) = d$, then $d$ is the smallest positive exponent with $2^d \equiv 1 \pmod{p}$. Since $2^n \equiv 1$ and $2^{p-1} \equiv 1 \pmod{p}$, it follows that $d\mid n$ and $d\mid p-1$, hence $d \mid \gcd(n,p-1)$. But as $p$ is the smallest prime divisor of $n$, all prime divisors of $n$ are $\geq p$, while every divisor of $p-1$ greater than $1$ has a prime factor $< p$; therefore $$\gcd(n,p-1) = 1,$$ which answers Q3 (and Q1: this is exactly why the smallest prime is chosen). This forces the order to be $1$: $$2^1 \equiv 1 \pmod{p} \implies 1 \equiv 0 \pmod{p},$$ a contradiction.
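An exhaustive check for small $n$ that $n\nmid 2^n-1$ when $n>1$; `pow` with a modulus keeps the numbers small ($n\mid 2^n-1$ is equivalent to $2^n\equiv1\pmod n$):

```python
# collect any counterexamples n with 2 <= n < 10000
bad = [n for n in range(2, 10_000) if pow(2, n, n) == 1]
print(bad)
```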
|
|elementary-number-theory|divisibility|
| 0
|
One variable inequality problem $\frac{x\sqrt{|x^2 - 4|}}{x^2 - 4} - 1 > 0$
|
Hi, I'm having a problem with the inequality $\frac{x\sqrt{|x^2 - 4|}}{x^2 - 4} - 1 > 0$. I rearranged it a bit: $\frac{x\sqrt{|x^2 - 4|}-x^2+4}{x^2 - 4} > 0$ holds if $(x-2) \cdot(x+2)\cdot (x\sqrt{|x^2 - 4|}-x^2+4)>0$. The most difficult part for me is $(x\sqrt{|x^2 - 4|}-x^2+4) = 0$; I am unable to solve this equation, and I think this is crucial. I also tried other approaches, such as moving $1$ to the other side and then multiplying both sides by $(x-2)^2 \cdot (x+2)^2$, but I failed, probably because at some point I illegally squared both sides. The given answer for the inequality is $x\in(-2,-\sqrt{2})\cup(2, \infty)$. I also tried programs such as Wolfram Alpha, but they only give the result, and I would like to know how I could solve it. Thank you in advance for your help.
|
As a hint: divide into two parts, $x \in (-2,2)$ and $x \not\in(-2,2)$. For the first one $$x \in(-2,2) \to x^2-4<0 \\ \frac{x\sqrt{4-x^2}}{x^2-4}>1\\ \frac{x\sqrt{4-x^2}}{-(4-x^2)}>1\\ \frac{x\sqrt{4-x^2}}{-\sqrt{(4-x^2)^2}}>1\\ \frac{x}{-\sqrt{4-x^2}}>1$$ and for the second $$x \not \in(-2,2) \to x^2-4>0 \\\frac{x\sqrt{x^2-4}}{x^2 - 4} >1\\ \frac{x\sqrt{x^2-4}}{\sqrt{(x^2 - 4)^2}} >1\\\frac{x}{\sqrt{x^2 - 4}} >1$$
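A sampling check of the case analysis: the inequality should hold exactly on $(-2,-\sqrt2)\cup(2,\infty)$. The grid and bounds are arbitrary choices:

```python
import math

def holds(x):
    d = x * x - 4
    if d == 0:
        return False                  # undefined at x = +/-2
    return x * math.sqrt(abs(d)) / d - 1 > 0

ok = True
for k in range(-5000, 5001):
    x = k / 1000.0                    # grid over [-5, 5]
    if abs(abs(x) - 2) < 1e-9:
        continue                      # skip the poles
    expected = (-2 < x < -math.sqrt(2)) or (x > 2)
    if holds(x) != expected:
        ok = False
print(ok)
```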
|
|linear-algebra|analysis|inequality|
| 0
|
One variable inequality problem $\frac{x\sqrt{|x^2 - 4|}}{x^2 - 4} - 1 > 0$
|
Hi, I'm having a problem with the inequality $\frac{x\sqrt{|x^2 - 4|}}{x^2 - 4} - 1 > 0$. I rearranged it a bit: $\frac{x\sqrt{|x^2 - 4|}-x^2+4}{x^2 - 4} > 0$ holds if $(x-2) \cdot(x+2)\cdot (x\sqrt{|x^2 - 4|}-x^2+4)>0$. The most difficult part for me is $(x\sqrt{|x^2 - 4|}-x^2+4) = 0$; I am unable to solve this equation, and I think this is crucial. I also tried other approaches, such as moving $1$ to the other side and then multiplying both sides by $(x-2)^2 \cdot (x+2)^2$, but I failed, probably because at some point I illegally squared both sides. The given answer for the inequality is $x\in(-2,-\sqrt{2})\cup(2, \infty)$. I also tried programs such as Wolfram Alpha, but they only give the result, and I would like to know how I could solve it. Thank you in advance for your help.
|
One way to do questions like this is to consider "critical values". Imagine solving the corresponding equality rather than the inequality, and then examine the behaviour in between. So solve $x\sqrt{|x^2-4|}= x^2-4$, hence $x^2|x^2-4|=|x^2-4|^2$, i.e. $|x^2-4|\left(x^2-|x^2-4|\right)=0$. The solutions of this equation are $2, -2, \sqrt{2}, -\sqrt{2}$. Now we should substitute these critical points back to make sure they make sense (squaring may introduce extra solutions): $2$ and $-2$ make the original expression undefined (asymptotes); $\sqrt{2}$ does not satisfy the original equation (it gives $2\neq-2$), while $-\sqrt{2}$ does (both sides equal $-2$). At $2$ and $-2$ we have asymptotes: the asymptote at $2$ goes towards $-\infty$ from the left and $+\infty$ from the right, and the same holds at $-2$ in fact! And the curve crosses the axis at $-\sqrt{2}$. Putting this information together, we can say that $(-2,-\sqrt{2})\cup(2,\infty)$ are the intervals required.
|
|linear-algebra|analysis|inequality|
| 0
|
Is covariant derivative of the connection one-form defined?
|
This is in regards to the definition of curvature two-form $\Omega$ defined in Nakahara (Sec. 10.3.2, Def. 10.5, Pg. 386) as the covariant derivative of the connection one-form $\omega$ $$\Omega \equiv D\omega$$ This is actually in stark contrast to what we have been taught in physics that the covariant derivative of the connection is not defined. Can anyone please explain which understanding is the correct one?
|
It makes sense to define curvature for principal connections as the covariant exterior derivative of the connection one-form. The connection one-form $\omega$ is a one-form on the total space of a principal bundle with values in a Lie algebra $\mathfrak g$ . You can take the exterior derivative $d\omega$ to obtain a $\mathfrak g$ -valued two-form on the total space. On the other hand, the connection one-form gives rise to a complementary subbundle to the vertical subbundle in the tangent bundle of your principal bundle. This is called the horizontal subbundle, and it defines a horizontal projection. Writing this as $X\mapsto H(X)$, the covariant exterior derivative of $\omega$ is defined as $(X,Y)\mapsto d\omega(H(X),H(Y))$ . The result is exactly the curvature of the principal connection, viewed as a two-form on the total space of the principal bundle (which by construction is horizontal and equivariant).
|
|differential-geometry|connections|
| 0
|
Basis of $Hom(V,W)$
|
Question: Let $V,W$ be vector spaces over $F$ . Let $v_{1},...,v_{n}$ be a basis of $V$ and $w_{1},...,w_{n}$ be a basis of $W$ . Find a related basis of $Hom(V,W)$ and give its dimension. I'm not really sure what a basis of $Hom(V,W)$ would be, as it is a set of linear maps and not of vectors. My initial idea was to write some $v \in V$ that maps into an arbitrary vector in $W$ : Let $T \in Hom(V,W)$ such that some $w_{j} \in rangeT$ . Then, $\exists v_{i} \in V$ such that $T(v_{i}) = w_{j}$ $w_{j} = T(v_{i1}) + ... + T(v_{in})$ I don't know how to grasp the concept of $Hom(V,W)$ from this. I know, however, that $Hom(V,W)$ can be described as the set of matrices of $n\times n$ dimension with entries in $F$ . Could I use matrices as "vectors" for the basis of $Hom(V,W)$ . If yes, how is this possible? Thanks!
|
In modern algebra, a vector space (over a field $F$ ) is any set $H$ that has been equipped with an addition operation $+: H \times H \to H$ and a scalar multiplication operation $\cdot: F \times H \to H$ , that together satisfy the usual axioms. From this point of view, $\hom(V, W)$ is considered a vector space, provided that suitable addition and scalar multiplication operations have been defined for it. And indeed, it is standard that for linear maps $f, g: V \to W$ , we define $f+g$ to be the linear map that takes $v \in V$ to $f(v) + g(v) \in W$ [one should verify that $f + g$ is indeed linear], and for $r \in F$ , we define $r \cdot f$ to be the linear map that takes $v \in V$ to $rf(v) \in W$ . In this way, we have defined addition and scalar multiplication operations for $\hom(V, W)$ , and once we are satisfied that these satisfy the vector space axioms, then $\hom(V, W)$ may be considered a vector space. So now that you know what it means to say $\hom(V, W)$ is a vector space,
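A sketch of where this leads: the "matrix units" $E_{ij}$ (sending the $j$-th basis vector of $V$ to the $i$-th basis vector of $W$) form a basis of $\hom(V,W)$ of size $nm$. The dimensions $n=2$, $m=3$ and the sample matrix $T$ are invented for illustration:

```python
n, m = 2, 3   # dim V = n, dim W = m

def unit(i, j):
    # m x n matrix with a single 1 in row i, column j  (the map E_ij)
    return [[1 if (r, c) == (i, j) else 0 for c in range(n)] for r in range(m)]

basis = [unit(i, j) for i in range(m) for j in range(n)]

# any m x n matrix is the linear combination sum_{i,j} T[i][j] * E_ij
T = [[5, -1], [0, 2], [7, 3]]
recombined = [[sum(T[i][j] * unit(i, j)[r][c]
                   for i in range(m) for j in range(n))
               for c in range(n)] for r in range(m)]
print(len(basis), recombined == T)
```

The uniqueness of the coefficients is exactly linear independence, so $\dim\hom(V,W)=nm$.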
|
|linear-algebra|vector-spaces|linear-transformations|
| 1
|
Can $i$ be conceptualized as a free variable?
|
Stuart Hollingdale's book Makers of Mathematics states the following: In 1833 Hamilton read a paper to the Royal Irish Academy in which he pointed out that the plus sign in $a + ib$ was a misnomer, as $a$ and $ib$ cannot be added arithmetically. Following Gauss, he proposed that a complex number should be regarded as an ordered pair of real numbers $(a, b)$ which obey certain operational rules (etc.) With a computer science hat on, as I understand this, the problem is that multiples of $i$ are not the same type as multiples of $1$ , so it simply not possible to perform the addition operation. The expression as a whole doesn't type-check, because addition (implicitly, of reals) has type $\mathbb{R} * \mathbb{R} \rightarrow \mathbb{R}$ . Hamilton recognizes the need to keep the real and imaginary components apart, because they are incompatible. I have two vague doubts about this objection. The first is just that we could imagine an addition operator that has type $\mathbb{C} * \mathbb{C}
|
It’s natural to think of complex numbers as an extension of the real numbers, not just as pairs of them. The notation $a+bi$ makes dealing with complex numbers much easier than using $(a,b)$: e.g., multiplying $(a,b)$ and $(c,d)$ is nonobvious, but $(a+bi)(c+di)$ clearly just requires the distributive law. This kind of redefining of old notation to match our intuitions in new contexts is very common throughout math. In CS, this kind of simplification is often known as syntactic sugar. If you want to be rigorous about it, you could define $\mathbb C$ as $\mathbb R[X]/(1+X^2)$, meaning that you consider $\mathbb C$ as consisting of combinations of $i$ and real numbers under the assumption that $i^2=-1$, though it requires some work to show that division is well-defined, i.e., that this is a field. This is a particularly nice approach since it’s clear that the real numbers are a subfield.
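Hamilton's ordered-pair definition can be written out directly; the multiplication rule $(a,b)(c,d)=(ac-bd,\,ad+bc)$ is exactly what the distributive law plus $i^2=-1$ produces:

```python
def pair_mul(p, q):
    # complex multiplication on ordered pairs of reals
    a, b = p
    c, d = q
    return (a * c - b * d, a * d + b * c)

# (1+2i)(3+4i) = -5+10i, matching Python's built-in complex type
print(pair_mul((1, 2), (3, 4)))
```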
|
|complex-numbers|math-history|
| 0
|
An iterative method for minimizing squared errors between nonlinear functions
|
Suppose that $f$ and $g$ are two vector-valued functions, where $f$ has a possibly highly complex nonlinear form and $g$ is much simpler (e.g., can even be linear). Consider the nonlinear least squares problem $$ \min_x \quad \|f(x) - g(x)\|^2. \qquad (P)$$ Let's say that directly minimizing the target function with some conventional numerical methods (e.g., Gauss-Newton method) is very computationally demanding while minimizing $\|f(x_0) - g(x)\|^2$ for a given $x_0$ is quite straightforward. A hypothetical solution in this case is described as follows: Initialize $x_0$ Solve $\hat x = {\arg \min}_x \|f(x_0) - g(x)\|^2$ Let $x_0 \gets \hat x$ Repeat 2. and 3. Is the $\hat x$ sequence given by such an iterative method guaranteed to converge (can be just a local minimum)? Can it be guaranteed to converge under some further specifications? More specifically, let's say that $f$ is a highly nonparametric function such as a neural network that outputs a $p$ -dimensional vector, while $g(x)
|
The following is in the spirit of your proposed solution method. $ \def\o{{\tt1}} \def\l{\lambda} $ Introduce a homotopy parameter $\l$ such that $\l=0$ represents an easy problem and $\l=\o$ recovers the original (hard) problem. Then solve a sequence of sub-problems $$\eqalign{ {\large x}_\l &= \arg\min_x \big\|\,f\big((\o-\l)x_0+\l x\big) - g(x)\,\big\|^2 \\ }$$ starting at $\l=0$ and slowly increasing $\l\to\o,\,$ using the previously calculated solution as the initial guess for the next sub-problem. There are many different ways to introduce the homotopy parameter (Newton homotopy, Affine homotopy, pseudo-arclength) and many ideas on how to increment the $\l$ parameter have been proposed.
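For intuition on when the question's plain loop (without homotopy) can converge: with $g(x)=x$, the inner problem $\arg\min_x\|f(x_0)-g(x)\|^2$ is solved exactly by $x=f(x_0)$, so the scheme is the fixed-point iteration $x\leftarrow f(x)$, which converges when $f$ is a contraction near the fixed point. The choice $f=\cos$ is an invented toy example (the iterates approach the Dottie number, about $0.739085$):

```python
import math

x = 0.0
for _ in range(200):
    x = math.cos(x)   # each inner subproblem solved in closed form
print(x)
```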
|
|optimization|numerical-methods|nonlinear-optimization|fixed-point-theorems|least-squares|
| 0
|
Proof of sum of somewhat binomial distribution to infinity
|
I have this sum which looks somewhat like a sum of a binomial distribution but not quite, and I'm not quite sure how I'd prove this equation: $$\sum_{n = 0}^{\infty}\left(\frac{n}{mn + 1} {{mn + 1}\choose n} p^n (1-p)^{mn + 1 - n}\right) = \frac{p}{1 - mp}$$ In this equation we also know that $p = 0.0913$ and that $mp < 1$. Any tips or references would be appreciated; I don't really know where I would start. Without the fraction in front it would be a binomial distribution sum, but I'm not sure how the fraction affects the sum.
|
Here's a start: \begin{align} \sum_{n=0}^\infty \frac{n}{mn + 1} \binom{mn + 1}{n} p^n (1-p)^{mn + 1 - n} &=\sum_{n=1}^\infty \frac{n}{mn + 1} \binom{mn + 1}{n} p^n (1-p)^{mn + 1 - n} \\ &= \sum_{n=1}^\infty \frac{n}{mn + 1} \frac{mn + 1}{n}\binom{mn + 1-1}{n-1} p^n (1-p)^{mn + 1 - n} \\ &= \sum_{n=1}^\infty \binom{mn}{n-1} p^n (1-p)^{mn + 1 - n} \\ &= \sum_{n=0}^\infty \binom{m(n+1)}{(n+1)-1} p^{n+1} (1-p)^{m(n+1)+1 - (n+1)} \\ &= \sum_{n=0}^\infty \binom{m(n+1)}{n} p^{n+1} (1-p)^{m(n+1)-n} \\ &= p \sum_{n=0}^\infty \binom{m(n+1)}{n} p^n (1-p)^{m(n+1)-n} \end{align}
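A numeric check of the claimed closed form $p/(1-mp)$ for the question's $p=0.0913$; the value $m=2$ is an arbitrary choice of mine (the terms decay geometrically here, so a few hundred terms suffice):

```python
import math

m, p = 2, 0.0913
total = sum(n / (m * n + 1) * math.comb(m * n + 1, n)
            * p ** n * (1 - p) ** (m * n + 1 - n)
            for n in range(0, 300))
print(total, p / (1 - m * p))
```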
|
|summation|proof-explanation|binomial-coefficients|
| 1
|
Prove $(2+\sqrt{13})^n + (2-\sqrt{13})^n$ is natural number without using binomial theorem
|
Question is as stated in the title. I attempted to prove this by induction on $n$ . Base case is just $2$ , so it obviously holds. Assuming it holds for $n=k$ , then the case for $n=k+1$ can be converted into something like $2((2+\sqrt{13})^k + (2-\sqrt{13})^k) + \sqrt{13}(2+\sqrt{13})^k - \sqrt{13}(2-\sqrt{13})^k$ . The first part is a natural number by our inductive hypothesis, but I'm not sure what to do with the second part: $\sqrt{13}(2+\sqrt{13})^k - \sqrt{13}(2-\sqrt{13})^k$ . I can factor out another natural number from this part, and convert it into: $\sqrt{13}((2+\sqrt{13})^k + (2-\sqrt{13})^k) - 2\sqrt{13}(2-\sqrt{13})^k$ , but I'm not sure how I can proceed from there. I'm not sure what tags to put, but I encountered this problem in a combinatorics course, so I added a tag for that. Also since it's about natural numbers, I think maybe there can be some insights from number theory, although it's not a typical number theory problem.
|
One good thing to know (and to verify) is that conjugation, i.e., the function that takes an element $x = m + n\sqrt{13}$ to its conjugate $\overline{x} = m - n\sqrt{13}$ , preserves multiplication: $\overline{xy} = \overline{x} \cdot \overline{y}$ . Once you have this, then you can prove by induction that the conjugate of $(2 + \sqrt{13})^n$ is $(2 - \sqrt{13})^n$ . Finally, what can you say about an element plus its conjugate?
|
|combinatorics|elementary-number-theory|induction|recurrence-relations|
| 0
|
Prove $(2+\sqrt{13})^n + (2-\sqrt{13})^n$ is natural number without using binomial theorem
|
Question is as stated in the title. I attempted to prove this by induction on $n$ . Base case is just $2$ , so it obviously holds. Assuming it holds for $n=k$ , then the case for $n=k+1$ can be converted into something like $2((2+\sqrt{13})^k + (2-\sqrt{13})^k) + \sqrt{13}(2+\sqrt{13})^k - \sqrt{13}(2-\sqrt{13})^k$ . The first part is a natural number by our inductive hypothesis, but I'm not sure what to do with the second part: $\sqrt{13}(2+\sqrt{13})^k - \sqrt{13}(2-\sqrt{13})^k$ . I can factor out another natural number from this part, and convert it into: $\sqrt{13}((2+\sqrt{13})^k + (2-\sqrt{13})^k) - 2\sqrt{13}(2-\sqrt{13})^k$ , but I'm not sure how I can proceed from there. I'm not sure what tags to put, but I encountered this problem in a combinatorics course, so I added a tag for that. Also since it's about natural numbers, I think maybe there can be some insights from number theory, although it's not a typical number theory problem.
|
HINT: Let us write for each natural number $k$ : $$c_k = (2+\sqrt{13})^k,$$ and $$d_k = (2-\sqrt{13})^k, $$ and $$e_k = c_k +d_k.$$ Then $c_{n+2}$ satisfies $$c_{n+2} = (4+4\sqrt{13}+4+9)c_n$$ $$=((8+4\sqrt{13}) + 9)c_n$$ $$= 4c_{n+1}+9c_n.$$ Likewise, $d_{n+2}$ satisfies $$d_{n+2} = (4-4\sqrt{13}+4+9)d_n$$ $$=4d_{n+1}+9d_n.$$ Thus $e_{n+2} =c_{n+2}+d_{n+2}$ satisfies $$e_{n+2} = 4e_{n+1}+9e_n.$$ So if $e_n$ is integral and $e_{n+1}$ is integral, then so is $e_{n+2}$ . But what were $e_1$ and $e_2$ again...
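The recurrence $e_{n+2}=4e_{n+1}+9e_n$ with $e_0=2$, $e_1=4$ can be cross-checked against a direct floating-point evaluation of $(2+\sqrt{13})^n+(2-\sqrt{13})^n$:

```python
import math

s = math.sqrt(13.0)
e = [2, 4]                           # e_0 = 2, e_1 = 4
for _ in range(2, 20):
    e.append(4 * e[-1] + 9 * e[-2])  # e_{n+2} = 4 e_{n+1} + 9 e_n

direct = [(2 + s) ** n + (2 - s) ** n for n in range(20)]
print(e[:6])
```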
|
|combinatorics|elementary-number-theory|induction|recurrence-relations|
| 1
|
Upper and lower bounds for $x/\ln x$
|
It is well known that $$ (x-1)-\frac{3}{2}\left(x-1\right)^{2}<\frac{\ln x}{x} $$ for every $x>1$. These inequalities are good around $x=1$. I found that $$ \frac{2x}{x^2+1}<\frac{\ln x}{x} $$ for $x>8$. Are there other rational functions $f,g$ such that we get a better inequality $f(x)<\frac{x}{\ln x}<g(x)$ for large values of $x$?
|
Let $\phi(x)=x/\log x$ . Note the following: $\phi(x)\to\infty$ as $x\to\infty$ . $\phi'(x)=\frac{\log x-1}{{\log(x)}^2}\to 0$ as $x\to\infty$ . For a function $f$ to be a "good" approximation of $\phi$ on all of $\mathbb R_+$ , it must also obey these two criteria. Now let $f=p/q$ be a rational approximation of $\phi$ . Letting $a=\operatorname{deg}p$ and $b=\operatorname{deg}q$ , there are three options: A) $a>b$ . In this case, criterion 1 is satisfied, but not criterion 2. $f$ will grow asymptotically at a linear (or faster) rate whereas $\phi$ grows at a less than linear rate. The function $|f-\phi|$ will grow arbitrarily large for large inputs. B) $a=b$ . In this case criterion 2 is satisfied, but not criterion 1. $f$ will reach a finite limit $c$ , and once again the function $|f-\phi|$ will grow arbitrarily large for large inputs. C) $a<b$ . Again in this case criterion 2 is satisfied, but not criterion 1. This is essentially identical to case B, in the special case of the limit $c$ being $0$.
|
|calculus|inequality|logarithms|approximation|upper-lower-bounds|
| 1
|
Prove that $\left|\sum_{i=1}^{n} \lambda_{i}a_{i} \right|<1$
|
Consider $|a_{i}|<1$ and $\lambda_{i} \geq 0$, for $i=1,\dots,n$, and $\sum_{i=1}^{n} \lambda_{i}=1$. I want to show that $$\left|\sum_{i=1}^{n} \lambda_{i}a_{i} \right|<1.$$ So, note that \begin{align*} \left|\sum_{i=1}^{n} \lambda_{i}a_{i} \right| &\leq \sum_{i=1}^{n} |\lambda_{i}a_{i}| \\ &= \sum_{i=1}^{n} |\lambda_{i}||a_{i}| \\ &< \sum_{i=1}^{n} \lambda_{i} = 1. \end{align*} Can this be valid?
|
For the strict inequality $$ \sum_{i=1}^{n} \lambda_{i}|a_{i}| < 1 $$ one has to argue carefully, because $\lambda_{i}|a_{i}| < \lambda_{i}$ holds only if $\lambda_i > 0$ . One possible argument is that $\lambda_{i}|a_{i}| \le \lambda_{i}$ always holds, and the strict inequality holds for at least one index $i$ (since not all $\lambda_i$ can be zero). Another option is to define $$ A = \max(|a_1|, |a_2|, \ldots, |a_n|) \, . $$ Then $A<1$ and $$ \left|\sum_{i=1}^{n} \lambda_i a_i \right| \le \sum_{i=1}^{n} \lambda_i|a_i| \le \sum_{i=1}^{n} \lambda_i A = A < 1 \, . $$
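A random check that a convex combination of numbers from $(-1,1)$ stays in $(-1,1)$; the sampling parameters are arbitrary and this is illustrative only:

```python
import random

random.seed(1)
ok = True
for _ in range(1000):
    n = random.randint(1, 8)
    lam = [random.random() for _ in range(n)]
    s = sum(lam)
    lam = [l / s for l in lam]                # normalize: weights sum to 1
    a = [random.uniform(-0.999, 0.999) for _ in range(n)]
    if abs(sum(l * x for l, x in zip(lam, a))) >= 1:
        ok = False
print(ok)
```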
|
|real-analysis|
| 1
|
How to derive this representation of the Lambert W function?
|
Lately I read this on some site about a closed-form representation of the Lambert W function (all branch cuts): $$\ln\bigg(\frac{W_k(z)-1}{ \ln(z)-1+2k\pi{i} }\bigg)=\frac{i}{2\pi}\int_0^\infty{\ln\bigg({\frac{t-\ln{t}+\ln{z}+(2k+1)\pi{i}}{t-\ln{t}+\ln{z}+(2k-1)\pi{i}}}\bigg)\frac{\mathrm{d}t}{t+1}}$$ where $\ln$ denotes the principal branch of the natural logarithm. The site doesn't contain any references. Closed-form here refers to being representable as a finite composition of integrals and ''already-known and well-defined'' functions; simply using an iterative inverse notation is not accepted. I also read Closed-form representations of the Lambert W function by Alexander Kheyfits, in which the author introduced a very similar representation solved by contour integration. And btw is there some technique to derive a closed-form (an integral is desired) representation for almost any transcendental equation? I read some articles but they gave too many restrictions on the t
|
And btw is there some technique to derive a closed-form (integral is in desire) representation for almost any transcendental equations? I read some articles but they gave too many restrictions on the transcendental equation. (Riemann's method) $\DeclareMathOperator{\W}{W}$ Here is another representation that did not seem to appear online. A possible advantage is that it gives a direct integral representation for $\W_k(z)$ instead of an exponent of an integral, like the question’s, or a ratio of integrals, like Kheyfits’s. One takes the series of the $\W_k(z)$ function, which can be derived from Lagrange reversion and the generating function of the Stirling numbers $S_n^{(m)}$ of the first kind: $$\W_k(z)=w-r-\sum_{n=1}^\infty\sum_{m=1}^n\frac{(-w)^{-n}r^m}{m!} S_n^{(n-m+1)};\quad w=\ln(z)+2\pi i k,\ r=\ln(w)$$ To convert it to an integral, use the contour integral representation of $S_n^{(m)}$, likely derived via an inverse Z transform from its generating function: $$S_n^{(m)}=\frac{n!}{2\pi i m!}\oint_{|z|=1}\ln
|
|integration|contour-integration|closed-form|lambert-w|transcendental-equations|
| 0
|
Taylor series for $\log(1+x)$ and its convergence
|
I know the expression for the Taylor series of $\log(1+x)$ . However, I don't understand how to prove the convergence of the series for $|x|<1$ and its divergence for $|x|>1$ . Can someone explain the reason for this?
|
For $0 \le r<1$: $$\frac{1}{1-r} = \sum_{k=0}^\infty r^k$$ Integrate both sides: $$-\ln(1-r) = \sum_{k=1}^\infty \frac{r^{k}}{k}$$ For $-1<r<1$: $$\frac{1}{1+r} = \sum_{k=0}^\infty (-1)^kr^k$$ Integrate both sides: $$\ln(1+r) = \sum_{k=1}^\infty \frac{(-1)^{k-1}}{k}r^{k}$$
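A quick check of the alternating series at a sample point inside the interval of convergence, $r=0.5$ (the cutoff of 200 terms is arbitrary; the tail there is astronomically small):

```python
import math

r = 0.5
partial = sum((-1) ** (k - 1) * r ** k / k for k in range(1, 200))
print(partial, math.log(1 + r))
```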
|
|sequences-and-series|convergence-divergence|
| 0
|
Find if the series converges $\sum_{n\geq0}\frac{\sqrt{n^3}+n^2+n^3+n^4}{4^{\frac{n}{2}}}\cdot x^n$
|
$$\sum_{n\geq0}\frac{\sqrt{n^3}+n^2+n^3+n^4}{4^{\frac{n}{2}}}\cdot x^n$$ Could somebody please describe the steps? I managed to use the ratio test and obtained: $\frac{1}{2}\cdot x \cdot \frac{\sqrt{(n+1)^3}+(n+1)^2+(n+1)^3+(n+1)^4}{\sqrt{n^3}+n^2+n^3+n^4}$ , but I am not sure what to do next.
|
You are almost done; indeed $$\frac{\sqrt{(n+1)^3}+(n+1)^2+(n+1)^3+(n+1)^4}{\sqrt{n^3}+n^2+n^3+n^4} \to1$$ and then $$\frac{1}{2}\cdot |x| \cdot \frac{\sqrt{(n+1)^3}+(n+1)^2+(n+1)^3+(n+1)^4}{\sqrt{n^3}+n^2+n^3+n^4} \to \frac 12 |x|,$$ which requires $\frac 12 |x| < 1$ , i.e. $-2 < x < 2$ . The cases $x=\pm 2$ are clearly divergent.
|
|sequences-and-series|convergence-divergence|
| 1
|
Curve cover direction
|
I have interesting problem: Let's imagine a wire bent into a curve. We have a set of non-stretchable plastic covers of various diameters. The larger the diameter of the cover, the deeper it can be placed on the wire. The cover has some direction. With the casing diameter approaching zero, the direction is that of the derivative at the ends of the curve. If the cover is wide enough so that the entire curve fits in it, there is only one cover instead of two (starting and ending) and its direction can be calculated using the "rotating calipers" algorithm (in practice, a poly-line can be used instead of a smooth curve). How to calculate the direction depending on the curve (implemented as a polyline) and the width of the cover? Problem goal: image reconstruction where we have many polylines and gaps between parts of polylines, we need to find the most appropriate line B for line A and connect them by filling the gap - if tail cover of A intersect with head cover of B, optionally, we also l
|
Solution: First of all, it's worth recalling why I actually came up with this problem. It is intended to be used to test whether linestrings can be attached to the first group; the first one encountered sorted by length, with a minimum length of e.g. 50 pixels. These polylines actually have a small number of segments: 1, 2, maybe 3. They are created by dividing the polylines in such a way that between the first and last segment there must be less than 90 degrees (or maybe some smaller specified value), and they all have to turn in one direction: if 1 is 15 degrees relative to 0, then 2 relative to 1 can be 20 but not -20 degrees. Let's start with 3 points - two segments. std::vector<cv::Point2d> points = {{100, 100}, {300, 200}, {450, 50}}; Check whether they do not form a degenerate triangle: double deg = degen(points); if (deg < ...) where double degen(const vector<cv::Point2d> &tr) { double a = cv::norm(tr[1] - tr[0]); double b = cv::norm(tr[2] - tr[1]); double c = cv::norm(tr[0] - tr[2]); return min(min(a+b-c, b+c-a), c+a-b); }
|
|geometry|differential-geometry|
| 0
|
Probability of winning a game that ends after one player gets 3 wins in a row
|
Question: In a game with two people, there is a draw half of the time. The other half of the time, player A wins with 2/3 and player B wins with 1/3. The game ends once a player wins three games in a row. What is the probability A wins? Approach: I am thinking that this is somewhat similar to asking what is the probability of getting HHH before TTT. However, you need to compute the probability of each player winning in a round and not having a draw. Let $P(A)$ be the probability A wins a game, and $P(B)$ the probability $B$ wins: $$P(A) = \frac{1}{2}\frac{2}{3} = \frac{2}{6} \quad P(B) = \frac{1}{2}\frac{1}{3} = \frac{1}{6}$$ Let $E$ be the event that we get $AAA$ before $BBB$ . However, I am not sure if we can use the below approach because I think we may violate the law of total probability. I am not sure how I can include the draws as a third outcome. Do I add $P(\text{draw})P(E)$ to each of these? because if it is a draw, we restart... $$P(E) = P(A)P(E|A_1) + P(B)P(E|B_1)$$ $$P(E|A_1) = P(A)P(E|A_1A_2) + P(B)P(
|
Here’s another way to do this, using generating functions. A match that $A$ wins has the form $$ \left(B_0^2\left(A_1^2B_1^2\right)_0^\infty A_0^2D\right)_0^\infty B_0^2\left(A_1^2B_1^2\right)_0^\infty A_3^3\;, $$ where $A$ , $B$ and $D$ denote a win for $A$ , a win for $B$ and a draw, respectively, and $X_a^b$ means anything from $a$ to $b$ repetitions of $X$ . That is, we optionally start with up to two wins by $B$ ; then we have any number of repetitions of first $A$ then $B$ winning one or two games; then $A$ optionally wins up to two games; then a draw; this entire sequence can be repeated any number of times (possibly none) and then the same once more except now ending in three wins by $A$ . With $x$ , $y$ , $z$ denoting the probabilities for $A$ winning, $B$ winning and a draw, respectively, and with $\sum_{k=0}^\infty q^k=(1-q)^{-1}$ , this corresponds to the generating function $$ \left(1-\left(1+y+y^2\right)\left(1-\left(x+x^2\right)\left(y+y^2\right)\right)^{-1}\left(1+x+x^2
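As a cross-check of a different flavour (not the generating-function argument above): one can solve the streak Markov chain exactly over the rationals. The state names and the `win_prob` helper below are my own sketch; under the stated probabilities it gives $P(A)=86/99\approx 0.869$.

```python
from fractions import Fraction as F

def win_prob(pa, pb, pd):
    """Exact P(A gets three wins in a row before B), a draw resetting both
    streaks.  Transient states: 'S' (no streak), 'a1', 'a2', 'b1', 'b2'."""
    states = ['S', 'a1', 'a2', 'b1', 'b2']
    idx = {s: i for i, s in enumerate(states)}
    n = len(states)
    # Linear system (I - T) x = r: x[s] = P(A wins | state s), r[s] = one-step
    # probability of A winning outright from s.
    A = [[F(0)] * n for _ in range(n)]
    r = [F(0)] * n
    for s in states:
        i = idx[s]
        A[i][i] += 1
        A[i][idx['S']] -= pd                        # a draw sends us back to S
        if s == 'a2':                               # a third A-win ends the game
            r[i] += pa
        else:
            A[i][idx['a2' if s == 'a1' else 'a1']] -= pa
        if s != 'b2':                               # from b2 a B-win absorbs with value 0
            A[i][idx['b2' if s == 'b1' else 'b1']] -= pb
    # Gauss-Jordan elimination over the rationals (exact).
    for c in range(n):
        p = next(k for k in range(c, n) if A[k][c] != 0)
        A[c], A[p] = A[p], A[c]
        r[c], r[p] = r[p], r[c]
        inv = 1 / A[c][c]
        A[c] = [v * inv for v in A[c]]
        r[c] *= inv
        for k in range(n):
            if k != c and A[k][c] != 0:
                f = A[k][c]
                A[k] = [u - f * v for u, v in zip(A[k], A[c])]
                r[k] -= f * r[c]
    return r[idx['S']]

pA = win_prob(F(1, 3), F(1, 6), F(1, 2))  # A wins a game w.p. 1/2 * 2/3 = 1/3, etc.
pB = win_prob(F(1, 6), F(1, 3), F(1, 2))  # swap the win probabilities to get P(B)
print(pA, pB)  # 86/99 and 13/99 -- they sum to 1, since the game ends a.s.
```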
|
|probability|conditional-probability|
| 0
|
Proving a set is closed and bounded in a specific situation
|
Good evening, I’ve read in a research article that: Given a vector $\gamma\in\mathbb{R}^n$ , let $l$ be a continuous, differentiable function from $\Omega\subset\mathbb{R}^n$ to $\mathbb{R}$ . If, when $\gamma$ approaches the boundary of $\Omega$ , $l(\gamma)\to -\infty$ , then, for a constant $c\in\mathbb{R}$ , the set $\{\gamma\in\Omega: l(\gamma) \geq c\}$ is bounded and closed. I really struggle to understand how we arrive at this implication. If you need more context to help me, let me know; thanks in advance.
|
Because $l$ is continuous, the preimage $l^{-1}((-\infty,c))$ will be an open set; since the set in question is the complement of this set, it is closed. Because the limit approaching the boundary of $\Omega$ is $-\infty$ , we see that $l^{-1}((-\infty,c))$ is an open neighborhood of the boundary of $\Omega$ . We can use this to determine boundedness, but let's first take a small detour to discuss what they mean by the boundary of $\Omega$ and the underlying topology of the space. In order to define limits as $\gamma$ approaches infinity, we adjoin $\infty$ as a "point at infinity" in $\mathbb{R}^n$ . If you look at the standard way to define a limit as $\gamma$ approaches infinity, you can see that this is the same as equipping the regular topology on $\mathbb{R}^n$ with basic open neighborhoods around the point $\infty$ at infinity of the form $\{\gamma\in\mathbb{R}^n : |\gamma|>M\}$ for some $M\in\mathbb{R}$ . With this setup, it is possible for $\infty$ to be a boundary point of $\Omega$ .
|
|general-topology|
| 0
|
Question about the distribution of primes in Davenport's Multiplicative Number Theory
|
The following comes from Page 57 of Davenport's Multiplicative Number Theory. My question is: why is (6) being convergent necessary in deducing the result from (5)?
|
Have you studied the proof of Dirichlet's theorem enough to appreciate how a sum over the primes subject to a congruence condition $a \bmod q$ , where $\gcd(a,q) = 1$ , can be broken up into a linear combination over all Dirichlet characters mod $q$ of sums over all primes, where the term associated to a prime $p$ is weighted by a factor $\chi(p)$ ? This is what Davenport means by "a linear combination of characters in the usual way". Here is how this idea arises in the proof of Dirichlet's theorem. When $s > 1$ , $$ \sum_{p\equiv a \bmod q} \frac{1}{p^s} = \sum_p \frac{1}{\varphi(q)}\left(\sum_{\chi \bmod q} \overline{\chi}(a)\chi(p)\right)\frac{1}{p^s} = \frac{1}{\varphi(q)}\sum_{\chi \bmod q}\overline{\chi}(a)\sum_p \frac{\chi(p)}{p^s} $$ and you then separately look at the inner sum on the right where $\chi$ is trivial and the inner sums where $\chi$ is nontrivial: the inner sums at nontrivial $\chi$ are bounded as $s \to 1^+$ , so $$ \sum_{p\equiv a \bmod q} \frac{1}{p^s} = \frac{
|
|number-theory|prime-numbers|analytic-number-theory|
| 1
|
Prove $(k+1)\binom{n}{k+1}+k\binom{n}{k}=n\binom{n}{k}$ for integers $0\le k\le n$
|
Prove $(k+1)\binom{n}{k+1}+k\binom{n}{k}=n\binom{n}{k}$ for integers $0\le k\le n$ I need help, I've been trying to factor all day and can't figure it out.
|
Before I start my answer, I believe I should state that you should type that instead of using an image. Proof: $(k+1)\binom{n}{k+1} + k\binom{n}{k} = \frac{(k+1)\,n!}{(k+1)!\,(n-k-1)!}+\frac{k\,n!}{k!\,(n-k)!}$ $= \frac{n!}{k!\,(n-k-1)!}+\frac{k\,n!}{k!\,(n-k)!}=\frac{(n-k)\,n!}{k!\,(n-k)!}+\frac{k\,n!}{k!\,(n-k)!}$ $=\frac{n\cdot n!}{k!\,(n-k)!}$ $=n\binom{n}{k}$ as required.
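A quick sanity check of the identity with Python's `math.comb` (my own sketch; `comb(n, n+1)` conveniently returns $0$, so the edge case $k=n$ needs no special handling):

```python
from math import comb

# Spot-check (k+1)*C(n,k+1) + k*C(n,k) == n*C(n,k) for all small n, k.
for n in range(0, 12):
    for k in range(0, n + 1):
        lhs = (k + 1) * comb(n, k + 1) + k * comb(n, k)
        assert lhs == n * comb(n, k), (n, k)
print("identity holds for all 0 <= k <= n <= 11")
```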
|
|real-analysis|analysis|binomial-theorem|
| 0
|
Prove $(k+1)\binom{n}{k+1}+k\binom{n}{k}=n\binom{n}{k}$ for integers $0\le k\le n$
|
Prove $(k+1)\binom{n}{k+1}+k\binom{n}{k}=n\binom{n}{k}$ for integers $0\le k\le n$ I need help, I've been trying to factor all day and can't figure it out.
|
$${n \choose k+1} = \frac{n!}{(k+1)!(n-k-1)!} =\frac{n!}{\frac{k+1}{n-k}k!(n-k)!} = \frac{n-k}{k+1}{n \choose k}$$
|
|real-analysis|analysis|binomial-theorem|
| 0
|
Generate random points on a section of a spherical surface
|
If I have a sphere with radius 1, and a section of the sphere defined by having a polar angle bounded by some constant θ and an azimuthal angle bounded by some constant φ, how do I generate random points on this surface such that the points are uniformly distributed?
|
Pick $\varphi$ uniformly randomly from the admissible interval, and pick $z=\cos\theta$ uniformly randomly from the admissible interval. This works because the sphere has equal areas in slices of equal height. (This is only the case in $3$ dimensions.)
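A minimal sketch in Python (function name my own; `theta_max` and `phi_max` are the bounding polar and azimuthal angles):

```python
import math
import random

def sample_on_sphere_section(theta_max, phi_max):
    """Uniform point on the unit-sphere patch 0 <= theta <= theta_max,
    0 <= phi <= phi_max.  phi is uniform; z = cos(theta) is uniform on
    [cos(theta_max), 1], because equal-height slices have equal area."""
    phi = random.uniform(0.0, phi_max)
    z = random.uniform(math.cos(theta_max), 1.0)
    s = math.sqrt(1.0 - z * z)
    return (s * math.cos(phi), s * math.sin(phi), z)

pts = [sample_on_sphere_section(math.pi / 3, math.pi / 2) for _ in range(10000)]
```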
|
|probability|geometry|multivariable-calculus|probability-distributions|
| 0
|
On the fractional parts of the roots of the Alternating Harmonic Numbers
|
We define $$\bar{H}_x=\ln2+\cos(\pi x)\left(\psi(x)-\psi\left(\frac x2\right)-\frac1x-\ln2\right)$$ As the $x$ th Alternating Harmonic Number (test out a few values to see why). Let $x_n$ be the $n$ th largest zero of the Alternating Harmonic Numbers, show that $\lim_{n\rightarrow\infty}\{x_n\}=\eta$ converges (where $\{x\}$ denotes the fractional part of $x$ ) or equivalently $\lim_{n\rightarrow\infty}(x_n-x_{n+1})=1$ . This definition of the alternating harmonic number originates from my terribly formatted paper where I generalize sigma notation for non-integer upper and lower bounds. Graphically speaking, the conjecture (I called it so in my paper because I couldn't solve it) seems to be true and $\eta\approx-0.431$ . Some interesting approximations are $\frac{7\pi}{51}$ and this seemingly random integral $$\int_0^1\frac{\bar{H}_x\bar{H}_{-x}\sin(\pi x)}{\pi x}dx\approx0.43141591$$ Not only does it hold for three digits, but there is a decimal representation of pi mixed in, up to 5
|
I finally solved the conjecture. Notice that as $x$ approaches negative infinity, the reflection formula becomes $\ln2+\pi\cot(\pi x)$ and so $\ln2+\pi\cot(\pi x_n)\sim0$ . Solving for $x_n$ we get that $x_n\sim-n-\frac1{\pi}\arctan\left(\frac{\pi}{\ln2}\right)$ implying that $\eta=-\frac1{\pi}\arctan\left(\frac{\pi}{\ln2}\right)$ , which agrees with numerical evidence ( $\eta\approx-0.430876945137$ ) and has nothing to do with the integral in the OP.
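For what it's worth, the closed form is trivial to evaluate numerically and does match the quoted digits:

```python
import math

# The closed form from the reflection-formula asymptotics above.
eta = -math.atan(math.pi / math.log(2)) / math.pi
print(eta)  # ~ -0.4308769...
```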
|
|functions|roots|conjectures|alternating-expression|
| 0
|
Why is $\lim_{h \to 0} \frac{0}{h} = 0$?
|
Why is $\displaystyle{\lim_{h \to 0} \frac{0}{h} = 0}$ ? We can re-write the LHS as $$\left(\lim_{h \to 0} 0\right) \cdot \left( \lim_{h \to 0} \frac{1}{h} \right)$$ $$0 \cdot \left( \lim_{h \to 0} \frac{1}{h} \right)$$ the term in the brackets is undefined, since $1 / h$ does not have a limit at $0$ . I am going through Spivak derivatives chapter right now, so would appreciate argumentation in a basic context. The above was supposed to be a derivation of a derivative of a constant function $f(x) = c$ .
|
Take the definition of the limit: you actually can't split the limit like that, as the second limit clearly diverges! For $h\ne0$ , $\frac{0}{h}=0$ . So we need: $\forall \varepsilon >0, \exists \delta>0$ s.t. $0<|h|<\delta \implies \left|\frac{0}{h}-0\right| < \varepsilon$ . But clearly, for any positive $\varepsilon$ , that inequality holds for every $h\ne0$ , so I could just take $\delta=1$ , and that proves by the definition of the limit that $\lim_{h\to 0} \frac{0}{h} = 0$
|
|real-analysis|calculus|derivatives|
| 1
|
Why is $\lim_{h \to 0} \frac{0}{h} = 0$?
|
Why is $\displaystyle{\lim_{h \to 0} \frac{0}{h} = 0}$ ? We can re-write the LHS as $$\left(\lim_{h \to 0} 0\right) \cdot \left( \lim_{h \to 0} \frac{1}{h} \right)$$ $$0 \cdot \left( \lim_{h \to 0} \frac{1}{h} \right)$$ the term in the brackets is undefined, since $1 / h$ does not have a limit at $0$ . I am going through Spivak derivatives chapter right now, so would appreciate argumentation in a basic context. The above was supposed to be a derivation of a derivative of a constant function $f(x) = c$ .
|
The limit of a product is the product of the limits if they both exist . You cannot apply this limit rule/law to the limit of $0\cdot \frac{1}{h}$ as $h\to 0$ , because the second factor's limit does not exist. Remember that the value of a limit as $h\to 0$ cares only about what happens near $0$ , and not what happens at $0$ . This gives the following limit law that a lot of textbooks use but do not state explicitly: Suppose that $f$ and $g$ are functions defined on an open interval $I$ containing $a$ , except perhaps at $a$ , and that for every $x$ in $I$ , except perhaps $x=a$ , we have $f(x)=g(x)$ . Then $$\lim_{x\to a}f(x)=\lim_{x\to a}g(x)$$ in that either neither limit exists, or else they both exist and are equal. We can apply this rule to the function $f(h)=\frac{0}{h}$ defined everywhere but at $h=0$ , and the constant function $g(h)=0$ , to conclude the limit equals $0$ .
|
|real-analysis|calculus|derivatives|
| 0
|
How to calculate integer exponent of complex number with integer parts to not get real values
|
I'm trying to implement the exponentiation of numbers in my Scheme interpreter . I'm wondering how one Scheme implementation (Kawa) implemented this equation: $(10+10i)^{10}$ Kawa give this output: $+320000000000i$ My implementation (same as few others) return: $9.79717439317883e-5+3.200000000000001e11i$ I was checking wolfram Alpha and it gives the same output. How to calculate this value to not get floating pointer number? Is there a special case that need to be handled? How to calculate the value to get such even number? This is the equation I'm using for calculating of exponents taken from Wikipedia (de Moivre's formula): $ z^{n}=(r(\cos \varphi + i\sin \varphi ))^n = r^n (\cos n\varphi + i \sin n \varphi) $ The problem is that $\varphi$ and $ r $ are real numbers the same as a result of $cos$ and $sin$ . There has to be a different equation to evaluate the complex number to the power of an integer. I was searching the Kawa source code but was not able to find anything. So hoping s
|
I recently encountered the same issue in my exponentiation, and solved it through recursion. If you have (a+bi)^c, where c is some integer greater than 1, and you don't want to run into float precision errors, simply restate it as (a+bi)(a+bi)^(c-1), and do it recursively until the exponent is 1. Your multiplication code should not have issues with precision.
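The idea can be sketched with exact integer pairs; the `gauss_mul`/`gauss_pow` names are my own, and I use exponentiation by squaring rather than plain recursion, but both avoid floats entirely and so reproduce Kawa's exact result:

```python
def gauss_mul(p, q):
    """Multiply Gaussian integers (a+bi)(c+di) exactly with Python ints."""
    a, b = p
    c, d = q
    return (a * c - b * d, a * d + b * c)

def gauss_pow(z, n):
    """z^n for a Gaussian integer z = (a, b) and n >= 0, by squaring."""
    result = (1, 0)
    base = z
    while n:
        if n & 1:
            result = gauss_mul(result, base)
        base = gauss_mul(base, base)
        n >>= 1
    return result

print(gauss_pow((10, 10), 10))  # (0, 320000000000): exactly +320000000000i
```

This works because $(10+10i)^2 = 200i$ and $(200i)^5 = 200^5\, i$, with every intermediate value a Gaussian integer.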
|
|complex-numbers|
| 1
|
Question about the formula for switching bases of numbers
|
To express the base $10$ number $5213$ in base $7$ , all I have to do is simply divide by $7$ as follows: $5213 \div7=744$ with remainder $5$ $744\div 7=106$ with remainder $2$ $106\div7=15$ with remainder $1$ $15 \div7=2$ with remainder $1$ Hence $5213$ in base $7$ is $21125$ . Now the formula for this operation is as follows: $N=a_nr^n+a_{n-1}r^{n-1}+...+a_2r^2+a_1r+a_0$ where $N$ be the given number, and $r$ the radix of the proposed base. To find the values $a_0,a_1,a_2...a_n$ , divide $N$ by $r$ , then the remainder is $a_0$ , and the quotient is: $a_nr^{n-1}+a_{n-1}r^{n-2}+...+a_2r+a_1$ Then continue the operation until all required digits have been acquired. My question is if you divide $N$ by $r$ , then the formula becomes thus: $\dfrac{N}{r}=a_nr^{n-1}+a_{n-1}r^{n-2}+...+a_2r+a_1+\dfrac{a_0}{r}$ and obviously the remainder is not simply $a_0$ but rather $\dfrac{a_0}{r}$ . But in the above calculation with $5213$ the remainder is $a_0$ , so I'm confused about what's going on here.
|
We have $$N = a_n r^n + \ldots + a_1 r^1 + a_0 = A_1 r + a_0$$ Which gives us that the remainder when dividing $N$ by $r$ is $a_0$ . Then, we remove that remainder from $N$ and then divide by $r$ (knowing that what we get will still be an integer): $$\begin{eqnarray} \frac{N - a_0}{r} & = & \frac{A_1 r}{r} \\ & = & A_1 \\ & = & a_n r^{n-1} + \ldots + a_1 & = & A_2 r + a_1 \end{eqnarray}$$ For example, when you convert $5213$ into base $7$ what's happening is the following: $$\begin{eqnarray} 5213 & = & \square \times 7 + 5 \\ (5213 - 5) \div 7 & = & 744 \\ & = & \square \times 7 + 2 \\ (744 - 2) \div 7 & = & 106 \\ & = & \square \times 7 + 1 \end{eqnarray}$$ and so on.
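The repeated-division procedure can be sketched in a few lines of Python (helper name my own; `divmod` returns the quotient and the integer remainder $a_0$ in one step):

```python
def to_base(n, r):
    """Digits of n in base r, most significant first, via repeated division:
    each step's remainder is the next digit a_0, a_1, ..."""
    digits = []
    while n:
        n, rem = divmod(n, r)
        digits.append(rem)
    return digits[::-1] or [0]

print(to_base(5213, 7))  # [2, 1, 1, 2, 5], i.e. 21125 in base 7
```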
|
|number-systems|
| 1
|
Uncertainty of the couple position-kinetic energy
|
Let's suppose we have a quantum system that consists of a free particle on the real line. The Hilbert space associated to this quantum system is $L^2(\mathbb{R})$ . Let $x$ and $p$ be the position and momentum operators. The Robertson uncertainty relation states that if the system is in a state $\psi=\psi(x)$ then $$\sigma_\psi(x)\sigma_\psi(p)\geq \frac12 |\langle \psi,[x,p]\psi\rangle|$$ It's pretty easy to see that $\langle \psi,[x,p]\psi\rangle=0$ iff $\psi=0$ , but $\psi$ can't be $0$ , so there is no state in which the uncertainty of $(x,p)$ can be reduced to $0$ . What if we try with the couple $(x,p^2)$ . This is trickier because: $$\langle \psi,[x,p^2]\psi\rangle=0 \iff \langle \psi,\psi'\rangle=0$$ I understand that $\psi'$ can't be $0$ , but why does this imply that $\langle \psi,\psi'\rangle$ can't be $0$ for any $\psi$ ? I think that $\psi(x)=e^{-x^2}$ does the job! Right?
|
I'm not sure how you are misconstruing the uncertainty principle, because I didn't understand your argument. I am a little baffled by why you failed to directly compute the uncertainty of your Gaussian state, (basically the ground state of the oscillator), bypassing generic Robertson inequality jazz . Nondimensionalize ℏ=1, m =1. For the state $\psi= N e^{-x^2}$ where $ N^2=\sqrt{2/\pi}$ , you have, directly, the variances $$ \sigma_\psi ^2 (x)= N^2\int\!\!dx~ x^2 e^{-2x^2}= 1/4, \\ \sigma_\psi ^2 (p^2/2)= N^2\int\!\!dx~ e^{-2x^2}(4x^2-12x^2 +3) -\left ( N^2\int\!\!dx~ e^{-2x^2}(2x^2-1) \right )^2\\ =3/4-1/4=1/2, $$ where I hope you have recognized the Hermite polynomials controlling the problem here, so that $$ \sigma_\psi (x) \sigma_\psi (p^2/2)= \sqrt{2}/4. $$ It is not evident what "trick" you are invoking, but, for that state, the precise uncertainty is nonvanishing.
|
|mathematical-physics|quantum-mechanics|
| 1
|
Can this Non-Linear Residual Function, $R(x)=((x+1)^3-1)^{1/3}-(x+1)$, be expressed exactly using an Infinite Series or Polynomial Function in $x$?
|
Can this Non-Linear Residual Function, $R(x)=((x+1)^3-1)^{1/3}-(x+1)$ , be expressed exactly using an infinite series (using integer powers in $x$ ) or a finite polynomial function in $x$ ? For example I have found one approximate series / polynomial function that is a reasonably accurate approximation above $x=2$ $$R(x) \approx -\sum_{n=1}^\infty \frac{(-1)^{n-1}}{3 (x+1)^{2n}}=-\frac{1}{3(x^2+2x+2)}$$ Can you please give some advice in how I might proceed with this problem? Is the method to proceed using the calculus of finite differences to improve the approximation I have obtained or is there some direct transformation I can apply? Update Added 13/02/2024 Just playing around tonight I've been able to produce a better approximation, i.e. $$R(x) \approx -\frac{1}{3 (x+1)^2}-\frac{1}{9 (x+1)^5}-\frac{1}{17 (x+1)^8}-\frac{1}{18 (x+1)^{11}}$$ Which is semi-regular but the trial and error iteration technique seems to run out of steam in terms of improving accuracy with increasing number of terms.
|
$R(x)$ is not differentiable at $x=0$ , so there's no analytic function that will match it exactly. However, $R(x)$ does have a Puiseux series at $x=0$ that will converge for all $x\in\Bbb R$ .
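A short sketch of why fractional powers are forced (assuming I've expanded correctly): factoring under the cube root at $x=0$, $$(x+1)^3-1 = x^3+3x^2+3x = x\,(x^2+3x+3),$$ so $$R(x)=x^{1/3}\,(x^2+3x+3)^{1/3}-(x+1)=3^{1/3}x^{1/3}-1-x+\tfrac{3^{1/3}}{3}x^{4/3}+O\!\left(x^{10/3}\right).$$ The leading term $3^{1/3}x^{1/3}$ is exactly the branch-point behaviour at $x=0$ that rules out an ordinary power series in integer powers of $x$.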
|
|sequences-and-series|polynomials|approximation|
| 0
|
A question about the routine of usage of quantifiers and other logic symbols in Algebra courses
|
I am nowing taking some 4th year algebra courses, for example Field Theory and Ring & Modules, and the instructors of both courses $\textbf{forbid}$ students from using quantifiers and other logic symbols like $$\forall, \exists, \lor (\text{or}), \land (\text{and})$$ I once asked the instructor of Rings & Modules why he didn't allow students to use them, and he answered me that, "Have you ever seen any modern algebra textbook which uses quantifiers and other logic symbols in any theorem or exercise?" I checked the most famous textbooks, like Dummit & Foote, and they really didn't use these symbols. But for courses of analysis, like real analysis, functional analysis, measure theory, the instructors of these courses never forbid students from using these symbols. Therefore, my question is, does there exist an academic routine or requirement in the area of Algebra s.t. everyone should not use logic symbols when necessary? Hope someone who are expert in this can help explain a little. Th
|
It sounds more like a dictum from an essay titled "How to Write Mathematics Well", the type of thing Halmos or Krantz might have written at some point, than anything to do with algebra per se. The overall point of the dictum would be that jargon and symbolism should generally be avoided if they do not aid in comprehension. Because then you have to spend a minute being distracted by and parsing the logical symbolism, while instead you could have spent that time considering what the problem is about. (It could also be partly a pedagogical thing, e.g., some students may come to feel that they are being more formal and precise if they use the logical symbols, and get attached to them, and the instructors are possibly fighting delusions that students may have about how necessary they are in really understanding the mathematics. Gauss did very well before the time of Frege, after all.) About all I can come up with, why you might see more of a concern with the quantifiers in analysis courses,
|
|abstract-algebra|logic|field-theory|
| 1
|
A question about the routine of usage of quantifiers and other logic symbols in Algebra courses
|
I am nowing taking some 4th year algebra courses, for example Field Theory and Ring & Modules, and the instructors of both courses $\textbf{forbid}$ students from using quantifiers and other logic symbols like $$\forall, \exists, \lor (\text{or}), \land (\text{and})$$ I once asked the instructor of Rings & Modules why he didn't allow students to use them, and he answered me that, "Have you ever seen any modern algebra textbook which uses quantifiers and other logic symbols in any theorem or exercise?" I checked the most famous textbooks, like Dummit & Foote, and they really didn't use these symbols. But for courses of analysis, like real analysis, functional analysis, measure theory, the instructors of these courses never forbid students from using these symbols. Therefore, my question is, does there exist an academic routine or requirement in the area of Algebra s.t. everyone should not use logic symbols when necessary? Hope someone who are expert in this can help explain a little. Th
|
I can't speak to your professors intent but, generally it is best practice when introduced to a subject to either write completely algebraic sentences or complete prose. This is because it is often not entirely obvious when to use which one, and so switching between them can create awkward sentences that are repetitive or hard to read. In analysis courses, this comes much more naturally to students, since you tend to deal with things you on some level already understand and have dealt with for a while. For example, a student is almost never going to write: $\forall \epsilon > 0$ , there exists $N(\epsilon)$ such that for any $n > N(\epsilon)$ where $n$ is a natural number, that $|a_n - a| This is a harder to read sentence at the end, and its because I mixed prose and algebraic writing in a way that is purposefully hard to read. Also, you should note here that analysis without quantifiers tends to be a lot more writing (in my opinion). If you had to write "For every $\epsilon > 0$ there
|
|abstract-algebra|logic|field-theory|
| 0
|
If $\Phi_{5}(n) \equiv 0 \pmod{p},$ then $p = 5$ or $p \equiv 1 \pmod{5}$.
|
Let $p$ be prime, $n \in \mathbb{Z}$ . Show that if $\Phi_{5}(n) = n^4 + n^3 + n^2 + n + 1 \equiv 0 \pmod{p}$ , then $p = 5$ or $p \equiv 1 \pmod{5}$ . If $p = 5$ , then we are done. If $p \neq 5$ , then I believe I may follow an argument here by Thomas Andrews, whose answer I do not fully understand. We notice that $n \not\equiv 1 \pmod{p}$ (because, otherwise, $n^4 + n^3 + n^2 + n + 1 \equiv 5 \not \equiv 0 \pmod{p}$ ). How does one then conclude that $n^5 \equiv 1 \pmod{p}$ ? Also, how does one know that $5$ is the order of $n$ -- and not some power between $1$ and $5$ ? I can piece together the remainder of the argument: In noticing that $n \not\equiv 0 \pmod{p}$ , one can use Fermat's Little Theorem to conclude $n^{p-1} \equiv 1 \pmod{p}$ . In establishing that $n$ has order $5$ , we may conclude that $5|p-1$ , hence that $p \equiv 1 \pmod{5}$ , as desired.
|
Note that $(n-1)(n^4+n^3+n^2+n+1) = n^5-1$ . If the product is $0$ modulo $p$ (which it will be if $n^4+n^3+n^2+n+1\equiv 0\pmod{p}$ , regardless of whether $n-1\equiv 0\pmod{p}$ or not), then $n^5-1\equiv 0\pmod{p}$ , so $n^5\equiv 1\pmod{p}$ . In general, if $a^k\equiv 1\pmod{p}$ , then the multiplicative order of $a$ modulo $p$ divides $k$ . You can verify this using the division algorithm, and remembering that the multiplicative order is the least positive integer $m$ such that $a^m\equiv 1\pmod{p}$ . So if $n^5\equiv 1\pmod{p}$ , then the multiplicative order of $n$ modulo $p$ divides $5$ . Hence it is either $1$ or $5$ , and cannot be anything in between. But if you are also assuming that $n\not\equiv 1\pmod{p}$ , then the order is not $1$ , and hence must be equal to $5$ .
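A quick empirical check of the statement (my own helper names; trial division is fine at this size): every prime factor of $\Phi_5(n)$ for small $n$ is indeed $5$ or $\equiv 1 \pmod 5$.

```python
def prime_factors(m):
    """Trial-division prime factorisation (fine for small m)."""
    fs, d = [], 2
    while d * d <= m:
        while m % d == 0:
            fs.append(d)
            m //= d
        d += 1
    if m > 1:
        fs.append(m)
    return fs

# Every prime dividing Phi_5(n) = n^4+n^3+n^2+n+1 is 5 or congruent to 1 mod 5.
for n in range(2, 40):
    phi5 = n**4 + n**3 + n**2 + n + 1
    for p in prime_factors(phi5):
        assert p == 5 or p % 5 == 1, (n, p)
print("checked n = 2..39")
```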
|
|number-theory|elementary-number-theory|modular-arithmetic|
| 1
|
How would I go about finding the sum of a non-infinite summation?
|
I am given: $\sum_{k=1}^n 2^{n(1+k)}$ and I am honestly at a loss on how to proceed. I'm thinking to use a geometric series formula, but the index starts at k=1, and there is a "n" in my summation. How would I shift and proceed? Thank you
|
Since $2^{n(1+k)}=2^n2^{nk}$ and $n$ is not the indexing variable, we may rewrite the series as $2^n\sum_{k=1}^{n}2^{nk}$ . Next, let $2^n=t$ ; then the series becomes $2^n\sum_{k=1}^nt^k=t+t^2+t^3+...+t^n$ . From here it is rather straightfoward. Factor out a $t$ from the series, so that you have $t(1+t+t^2+...+t^{n-1})$ . By the sum of a geometric series formula, the larger factor is equal to $\frac{1-t^n}{1-t}$ . Re-multiply this with the factor of $t$ to get $\frac{t-t^{n+1}}{1-t}$ , and then substitute $t$ with $2^n$ to get $\frac{2^n-(2^n)^{n+1}}{1-2^n}=\frac{2^n-2^{n^2+n}}{1-2^n}$ . Finally, re-introduce the $2^n$ that we factored out at the beginning. This leaves you with $\frac{2^{2n}-2^{n^2+2n}}{1-2^n}$ . And that's it!
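A quick Python check (my own helper names) that the closed form matches the direct sum; note the division by $1-2^n$ is exact over the integers, since $1-2^{n^2}$ is divisible by $1-2^n$.

```python
def direct(n):
    # The sum as given: sum_{k=1}^{n} 2^(n(1+k)).
    return sum(2 ** (n * (1 + k)) for k in range(1, n + 1))

def closed_form(n):
    # (2^(2n) - 2^(n^2+2n)) / (1 - 2^n); exact integer division.
    return (2 ** (2 * n) - 2 ** (n * n + 2 * n)) // (1 - 2 ** n)

for n in range(1, 10):
    assert direct(n) == closed_form(n)
print(direct(3))  # 2^6 + 2^9 + 2^12 = 4672
```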
|
|sequences-and-series|discrete-mathematics|geometric-series|
| 0
|
Explaining the argument formula
|
I am beginning my study of Complex Analysis, and I stumbled on the following definition on how to compute the argument: $\varphi = \arg(z) = \begin{cases} \arctan(\frac{y}{x}) & \mbox{if } x > 0 \\ \arctan(\frac{y}{x}) + \pi & \mbox{if } x < 0 \mbox{ and } y \ge 0 \\ \arctan(\frac{y}{x}) - \pi & \mbox{if } x < 0 \mbox{ and } y < 0 \\ \frac{\pi}{2} & \mbox{if } x = 0 \mbox{ and } y > 0\\ -\frac{\pi}{2} & \mbox{if } x = 0 \mbox{ and } y < 0 \end{cases}$ I am striving to understand why in the second and third quadrant we have $\arctan(\frac{y}{x}) + \pi$ and $\arctan(\frac{y}{x}) - \pi$ respectively. I have been searching for the reason why $\pi$ is summed and subtracted; however, this is presented as a formula and I have not found any intuition that would explain those operations. Question: What is the reasoning behind summing and subtracting $\pi$ ? Thanks in advance!
|
$\pi$ is the number of radians in a half-circle, and the argument function, by definition, gives the number of radians between $(a,b)$ and $(\sqrt{a^2+b^2}, 0)$ . The reason is this. $\tan(x)$ , regardless of the unit of angle measurement, has the range of all real numbers, with its domain excluding the angles between a right angle ( $\frac{\pi}{2}$ radians) and three times a right angle ( $\frac{3\pi}{2}$ radians). The inverse function will only yield a number of radians either from $0$ to $\pi/2$ or from $3\pi/2$ to $2\pi$ . In other words, the inverse of the tan function only yields half the circle. If the point is on the other half (easily checked by the signs of the real numbers in the coordinate), you would have to add or subtract $\pi$ to get the correct number of radians.
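The piecewise recipe can be written directly and compared against `math.atan2`, which encodes exactly this quadrant correction (the function name below is my own):

```python
import math

def arg(x, y):
    """Piecewise arg from the text: push arctan's half-plane answer into the
    correct quadrant by adding or subtracting pi."""
    if x > 0:
        return math.atan(y / x)
    if x < 0 and y >= 0:
        return math.atan(y / x) + math.pi
    if x < 0 and y < 0:
        return math.atan(y / x) - math.pi
    if x == 0 and y > 0:
        return math.pi / 2
    if x == 0 and y < 0:
        return -math.pi / 2
    raise ValueError("arg(0) is undefined")

for x, y in [(1, 1), (-1, 1), (-1, -1), (1, -1), (0, 2), (0, -2), (-3, 0)]:
    assert math.isclose(arg(x, y), math.atan2(y, x))
```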
|
|complex-analysis|complex-numbers|
| 0
|
$O(n)\cong SO(n)\rtimes O(1)$
|
I want to prove that $O(n)\cong SO(n)\rtimes O(1)$ as Lie groups. I have the following result: If $G,N,H$ are Lie groups, then $G\cong N\rtimes H$ iff there are Lie group homomorphisms $\phi:G\to H$ and $\psi:H\to G$ such that $\phi\circ\psi=\mathrm{Id}_H$ and $\ker\phi\cong N$ . Since $O(1)\cong C_2$ , we can take $\phi=\det$ , which is a Lie group homomorphism with kernel $SO(n)$ . But I am unsure how to find a suitable $\psi$ . I thought about something like $\psi(x)=xI$ , but then $\phi\circ\psi(x)=x^n$ , so is not the identity map if $n$ is even. Can anyone suggest a $\psi$ ?
|
Note that $O_1=\{\pm 1\}$ and consider the map $\phi:O_n \to O_1$ given by the determinant and $\psi:O_1 \to O_n$ by $\pm 1 \mapsto \operatorname{diag}(\pm 1, 1, \dots, 1)$ . (The naive choice $\pm 1 \mapsto \pm \mathrm{Id}$ fails for even $n$ , since $\det(-\mathrm{Id})=(-1)^n$ .) Then we have $\phi \circ \psi = \mathrm{id}_{O_1}$ and $\ker \phi = SO(n)$ . Also these are Lie group homomorphisms (smooth) because $O_n, O_1$ are embedded subgroups, so the restrictions become smooth as well. Therefore, we obtain the result.
|
|abstract-algebra|lie-groups|semidirect-product|
| 0
|
Understanding Friedman’s H-statistic
|
In "Interpretable Machine Learning: A Guide For Making Black Box Models Explainable", I found the following for Friedman's H-statistic: $$PD_{jk}(x_j, x_k) = PD_j (x_j) + PD_k (x_k),$$ where $PD_{jk}(x_j, x_k)$ is the two way partial dependence function of both features and $PD_j (x_j)$ and $PD_k (x_k)$ the partial dependence functions of the single features. Later, the H-statistic is calculated as follows: $$H_{jk}^2 = \frac{\sum_i [PD_{jk}(x_j^{(i)}, x_k^{(i)}) - PD_j(x_j^{(i)}) - PD_k(x_k^{(i)})]^2}{\sum_i PD_{jk}(x_j^{(i)}, x_k^{(i)})}$$ Wouldn't this equation be always zero, when combined with the upper equation? Looking at the numerator, my thought process is the following: $$PD_{jk}(x_j^{(i)}, x_k^{(i)}) - PD_j(x_j^{(i)}) - PD_k(x_k^{(i)}) = PD_j (x_j^{(i)}) + PD_k (x_k^{(i)}) - PD_j(x_j^{(i)}) - PD_k(x_k^{(i)}) = 0.$$ The chapter can be found here: https://christophm.github.io/interpretable-ml-book/interaction.html
|
This only holds if the features don't interact, as is stated in your reference: "If two features do not interact, we can decompose the partial dependence function as follows (assuming the partial dependence functions are centered at zero): $$PD_{jk}(x_j,x_k)=PD_j(x_j)+PD_k(x_k)." $$
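A toy numeric sketch of that point (the `h_squared_numerator` helper and the tiny grid "dataset" are my own): the numerator of $H^2$ vanishes for an additive model and is positive once the features interact.

```python
import itertools
import statistics

def h_squared_numerator(f, xs):
    """Sum over the data of (PD_jk - PD_j - PD_k)^2 with all PDs centered,
    for a deterministic model f(x1, x2) on a small dataset xs."""
    pd1 = {a: statistics.fmean(f(a, b2) for _, b2 in xs) for a, _ in xs}
    pd2 = {b: statistics.fmean(f(a2, b) for a2, _ in xs) for _, b in xs}
    pdjk = {(a, b): f(a, b) for a, b in xs}  # 2-way PD of a deterministic f
    for d in (pd1, pd2, pdjk):               # center every PD at zero
        m = statistics.fmean(d.values())
        for k in d:
            d[k] -= m
    return sum((pdjk[(a, b)] - pd1[a] - pd2[b]) ** 2 for a, b in xs)

grid = list(itertools.product([0.0, 1.0, 2.0], repeat=2))
print(h_squared_numerator(lambda a, b: a + b, grid))  # 0: additive, no interaction
print(h_squared_numerator(lambda a, b: a * b, grid))  # 4.0: interaction present
```

In the multiplicative case the residual is $(a-1)(b-1)$ at each grid point, which is why the numerator is positive.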
|
|statistics|machine-learning|
| 1
|
How to perform this sum
|
I encountered this sum $$S(N,j)= \frac{2 \sqrt{2}h(-1)^j}{N+1}\cdot\sum _{n=1}^{\frac{N}{2}} \frac{\sin ^2\left(\frac{\pi j n}{N+1}\right)}{\sqrt{2 h^2+\cos \left(\frac{2 \pi n}{N+1}\right)+1}},$$ where $h\in \mathbb{R}^+, \{n,j\}\in \mathbb{Z}^+$ and $N\to \infty.$ It has the curious property that for a fixed $h$ (say $5/4$ ) as $N$ increases, the value of the sum seems to be the same for all values of $j$ other than the first few ( $j=1,2,3,\ldots$ ) and the last few $j=N,N-1,N-2,\ldots$ , otherwise there is a single value $a$ , which alternates in sign i.e. $$S(N,j)=(\ldots, a,-a,a,-a,\ldots).$$ Is it possible to obtain an expression for $S(N,j)$ by summing over $n$ , either for finite $N$ or if that's not possible is it possible to obtain $$\lim_{N\to \infty} S(N,j)$$ or at least the value of $|a|$ ?
|
The following computes that $\lim_{j \to \infty} \lim_{N \to \infty} S(N,j,h) = \int_0^{1/2} \frac{h}{\sqrt{h^2 + \cos^2(\pi x)}} \mathrm{d}x.$ This at least tells us the limiting value, $a$ , seen in the post, but in my opinion it is not satisfactory as an explanation of the observations because a) it doesn't explain the behaviour for $j$ linearly large in $N$ , and b) it doesn't explain the fairly fast convergence of the sum to the integral. Ideally one would explain why the sum already matches the limit to $10^{-5}$ at, e.g., $j = 5, N = 100,$ and the same at $j = 995, N = 1000$ . Perhaps a Fourier theoretic lens is useful? Through the usual Riemann sum development, it holds that as $N \to \infty,$ $$ \sum_{n = 1}^{N/2} \frac{1}{N+1} \frac{\sin^2 ( \pi j \cdot n/(N+1)) }{ \sqrt{2 h^2 +1 + \cos(2\pi n/(N+1))} } \to \int_0^{\frac 12} \frac{\sin^2(\pi j x)}{\sqrt{2h^2 +1 + \cos(2\pi x)}} \mathrm{d}x.$$ Note that $\cos(2\pi x) = 2\cos^2(\pi x) - 1.$ Further, the alternating sign in your sum comes entirely from the prefactor $(-1)^j$.
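A quick numeric check of the claimed limit (a sketch with my own helper names; the sign factor $(-1)^j$ is dropped, so only $|a|$ is compared):

```python
import math

def S_abs(N, j, h):
    # |S(N, j)| from the question, without the sign factor (-1)^j
    return (2 * math.sqrt(2) * h / (N + 1)) * sum(
        math.sin(math.pi * j * n / (N + 1)) ** 2
        / math.sqrt(2 * h * h + math.cos(2 * math.pi * n / (N + 1)) + 1)
        for n in range(1, N // 2 + 1))

def limit_integral(h, steps=100000):
    # midpoint rule for the claimed limit: int_0^{1/2} h / sqrt(h^2 + cos^2(pi x)) dx
    dx = 0.5 / steps
    return sum(h / math.sqrt(h * h + math.cos(math.pi * (i + 0.5) * dx) ** 2)
               for i in range(steps)) * dx

h = 1.25
gap = abs(S_abs(1000, 5, h) - limit_integral(h))
print(gap)  # already very small at j = 5, N = 1000
```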
|
|real-analysis|summation|riemann-sum|
| 0
|
What kind of product operator appears in the chain rule involving a derivative wrt a matrix?
|
Let $$y = Ax$$ $$z = y^Ty$$ . Per the chain rule, $$\frac{\partial{z}}{\partial{A}} = \frac{\partial{y}}{\partial{A}} \frac{\partial{z}}{\partial{y}} $$ Since, $$\frac{\partial{z}}{\partial{A}} = \frac{\partial}{\partial{A}}(x^TA^TAx) = 2yx^T$$ $$\frac{\partial{y}}{\partial{A}} = \frac{\partial}{\partial{A}}(Ax) = x^T \otimes \Bbb{I}$$ $$\frac{\partial{z}}{\partial{y}} = \frac{\partial}{\partial{y}}(y^Ty) = 2y$$ It means that, $$2yx^T = (x^T \otimes \Bbb{I}) \text { mystery product operator } 2y$$ What is the mystery product operator ?
|
Using index notation it's easier to see what's going on $$\eqalign{ \def\p{\partial} \def\grad#1#2{\frac{\p #1}{\p #2}} \def\LR#1{\left(#1\right)} \def\gradLR#1#2{\LR{\grad{#1}{#2}}} y_i &= A_{ip}\,x_p \\ \grad{y_i}{A_{jk}} &= \grad{A_{ip}}{A_{jk}}\;x_p \\ &= \delta_{ij}\delta_{pk}\,x_p \\ &= \delta_{ij}\,x_k }$$ where $x_k$ is the $k^{th}$ component of the $x$ vector and $\delta_{ij}$ is the $(i,j)$ component of the identity matrix. The resulting quantity is a third-order tensor . So you don't need the Kronecker product $(\otimes),\,$ but rather the dyadic (aka tensor) product $(\star)$ $$ \frac{\partial y}{\partial A} = I\star x $$ Although some authors (especially in certain engineering disciplines) use the $(\otimes)$ symbol to denote the tensor product. The chain rule uses an ordinary dot product $(\cdot)$ $$\eqalign{ \grad{z}{A} &= \gradLR{z}{y}\cdot \gradLR{y}{A} \\ &= \LR{2y}\cdot \LR{I\star x} \\ &= \LR{2y\cdot I}\star x \\ &= {2y\star x} \\ &= {2yx^T} \\ }$$
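A finite-difference sanity check of the final identity $\frac{\partial z}{\partial A} = 2yx^T$ (a sketch; the matrices and names are my own arbitrary choices):

```python
# Check numerically that d z / d A_{jk} = (2 y x^T)_{jk} for z = (Ax)^T (Ax).
def z_of(A, x):
    y = [sum(A[i][p] * x[p] for p in range(len(x))) for i in range(len(A))]
    return sum(v * v for v in y)

A = [[1.0, 2.0], [3.0, -1.0]]
x = [0.5, -2.0]
y = [sum(A[i][p] * x[p] for p in range(2)) for i in range(2)]

eps = 1e-6
max_err = 0.0
for j in range(2):
    for k in range(2):
        Ap = [row[:] for row in A]
        Ap[j][k] += eps
        numeric = (z_of(Ap, x) - z_of(A, x)) / eps   # forward difference
        analytic = 2 * y[j] * x[k]                   # (2 y x^T)_{jk}
        max_err = max(max_err, abs(numeric - analytic))
print(max_err)
```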
|
|matrices|matrix-calculus|chain-rule|
| 0
|
Fundamental complex number formula kept secret?
|
I recently started self-studying of complex analysis. I checked several popular online courses, and all of them, of course, start with the basic definitions: real and imaginary parts of a complex number, complex conjugation and the module of a complex number, addition and multiplication of complex numbers. This set of definitions is usually followed by a set of properties that includes the connection with vectors . The connection with vectors is trivial for all the terms, except for the multiplication . I was surprised I was not able to find it in those courses. Furthermore, for some time I could not find it anywhere, including available AIs. But the formula is known (e.g. Tristan Needham "Visual Complex Analysis", 2023): $$\tag{1}zw = (\bar{z}w) + i[\bar{z}w]$$ where $\bar{z}$ is the complex conjugate of $z$ , $(xy) = Re(x)Re(y) + Im(x)Im(y)$ is the vector dot product, $[xy] = \begin{vmatrix} Re(x) & Im(x) \\\ Re(y)&Im(y) \end{vmatrix}$ is the vector cross product. We can also use the
|
Probably one of the reasons the formula is not considered in many courses is because it belongs to a different "world": to the Geometric Algebra. Let us consider a 4-dimensional space $\{1, e_1, e_2, e_1e_2\}$ with the following multiplication rules: $e_1{\cdot}e_1 = e_2{\cdot}e_2 = 1$ $e_1{\cdot}e_2 = e_1e_2$ $e_2{\cdot}e_1 = -e_1e_2$ From the rules above: $e_1{\cdot}e_1e_2 = e_1{\cdot}e_1{\cdot}e_2= 1{\cdot}e_2 = e_2$ $e_2{\cdot}e_1e_2 = -(e_1{\cdot}e_2){\cdot}e_2 = -e_1(e_2{\cdot}e_2) = -e_1$ $e_1e_2{\cdot}e_1e_2 = e_1(e_2{\cdot}e_1)e_2 = -e_1(e_1{\cdot}e_2)e_2 = -(e_1{\cdot}e_1)(e_2{\cdot}e_2) = -1$ Thus, the imaginary unit $i$ of complex numbers corresponds to the element $e_1e_2$ of the 4-space. The product of 4-vectors in this space is defined as the multiplication of polynomials: $(a_1,b_1,c_1,d_1)(a_2,b_2,c_2,d_2) = (a_1{\cdot}1{+}b_1{\cdot}e_1{+}c_1{\cdot}e_2{+}d_1{\cdot}e_1e_2)(a_2{\cdot}1{+}b_2{\cdot}e_1{+}c_2{\cdot}e_2{+}d_2{\cdot}e_1e_2)$ Applying the rules above (in colu
|
|complex-analysis|complex-numbers|geometric-algebras|
| 1
|
Does $\mathcal{O}(1)$ on a projective bundle depends on the presentation of vector bundle?
|
Consider a projective bundle $\pi:\mathbb P(E)\to X$ that comes from a vector bundle $E$ on a variety $X$ . My question is about what the line bundle " $\mathcal{O}(1)$ " on $\mathbb P(E)$ is. To my understanding, this line bundle restricts to $\mathcal{O}(1)$ on each fiber and satisfies some universal properties, and should be unique. However, Hartshorne (Ch. 2, Lemma 7.9) says that after twisting by a line bundle $\mathcal{L}$ , there is a canonical isomorphism $\mathbb P(E)\to \mathbb P(E\otimes L)$ , and this isomorphism pulls back $\mathcal{O}_{\mathbb P(E\otimes L)}(1)$ to $\mathcal{O}_{\mathbb P(E)}(1)\otimes \pi^*L^{-1}$ . This seems to imply that $\mathcal{O}_{\mathbb P(E)}(1)$ is not unique. Let's test this on the Hirzebruch surface. Let's consider $X=\mathbb P^1$ and $E=\mathcal{O}(-1)\oplus \mathcal{O}$ , then the divisor $S$ that defines $\mathcal{O}_{\mathbb P(E)}(1)$ is the section at infinity, induced from the constant section on $E$ . Moreover $S\cdot S=-1$ . (cf. Hartshorne
|
There's no contradiction here - $\mathcal{O}(1)$ depends on $\mathcal{E}$ . In fact, if you're working out of Hartshorne, this is called out several times: lemma II.7.9 shows that if $\mathcal{E}$ is a vector bundle and $\mathcal{L}$ is a line bundle on a (sufficiently nice) scheme $X$ , then there is a natural isomorphism $\varphi: P' = \Bbb P(\mathcal{L}\otimes\mathcal{E}) \stackrel{\cong}{\to} \Bbb P(\mathcal{E}) = P$ and $\mathcal{O}_{P'}(1) \cong \varphi^*\mathcal{O}_P(1) \otimes (\pi')^*\mathcal{L}$ for $\pi':P'\to X$ ; exercise II.7.14 shows that $\mathcal{O}(1)$ need not always be very ample relative to $\pi$ ; notation V.2.8.1 says that $\mathcal{E}$ for a ruled surface isn't uniquely determined (even when $\mathcal{E}$ is normalized), etc.
|
|algebraic-geometry|vector-bundles|
| 1
|
What's $\int_{-\frac{\pi}{2}}^ {\frac{\pi}{2}}\text{erf}\left(\frac{\sqrt 2 R\cos\theta}{\sigma}\right)\text d\theta$?
|
The context of $$\int_{-\frac{\pi}{2}}^ {\frac{\pi}{2}}\text{erf}\left(\frac{\sqrt 2 R\cos\theta}{\sigma}\right)\text d\theta$$ is it came up whilst integrating the Rayleigh distribution function over an off-centered circle of radius $R$ : $$\int_{-\frac{\pi}{2}}^ {\frac{\pi}{2}}\int_0^{2R\cos\theta}\frac{r}{\sigma^2}\exp\left(-\frac{r^2}{2\sigma^2}\right)r\text dr\text d\theta.$$ Since I’ve never worked with integrals of special functions such as the error function, I’d appreciate any help.
|
You can reduce this integral to an infinite sum but I'm not sure how enlightening the final result is... $$\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}erf\bigg(\frac{\sqrt{2}R\cos\theta}{\sigma}\bigg)d\theta=\frac{2}{\sqrt{\pi}}\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}\int _{0}^{\frac{\sqrt{2}R\cos\theta}{\sigma}}e^{-t^2}dtd\theta$$ $$=\frac{2}{\sqrt{\pi}}\sum_{n=0}^{\infty}\frac{(-1)^n}{n!}\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}\int _{0}^{\frac{\sqrt{2}R\cos\theta}{\sigma}}t^{2n}dtd\theta$$ $$=\frac{2}{\sqrt{\pi}}\sum_{n=0}^{\infty}\frac{(-1)^n\big(\sqrt{2}R\big)^{2n+1}}{\sigma^{2n+1}(2n+1)\cdot n!}\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}(\cos(\theta))^{2n+1} d\theta$$ $$=\frac{2}{\sqrt{\pi}}\sum_{n=0}^{\infty}\frac{(-1)^n\big(\sqrt{2}R\big)^{2n+1}}{\sigma^{2n+1}(2n+1)\cdot n!}2\int_{0}^{\frac{\pi}{2}}(\cos(\theta))^{2n+1} d\theta$$ $$=\frac{2}{\sqrt{\pi}}\sum_{n=0}^{\infty}\frac{(-1)^n\big(\sqrt{2}R\big)^{2n+1}}{\sigma^{2n+1}(2n+1)\cdot n!}B\bigg(\frac{1}{2},(n+1) \bigg)$$ $$=2\sum_{n=0}^{\infty}\frac{(-1)^n\big(\sqrt{2}R\big)^{2n+1}}{\sigma^{2n+1}(2n+1)\,\Gamma\big(n+\frac{3}{2}\big)},$$ using $B\big(\frac{1}{2},n+1\big)=\frac{\Gamma(\frac{1}{2})\Gamma(n+1)}{\Gamma(n+\frac{3}{2})}=\frac{\sqrt{\pi}\,n!}{\Gamma(n+\frac{3}{2})}$.
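As a sanity check, the partial sums of this series can be compared against direct quadrature of the original integral (a numeric sketch of mine; the values of $R$ and $\sigma$ are arbitrary choices):

```python
import math

R, sigma = 0.7, 1.3

# Midpoint quadrature of the original integral over (-pi/2, pi/2).
steps = 20000
d = math.pi / steps
quad = sum(math.erf(math.sqrt(2) * R * math.cos(-math.pi / 2 + (i + 0.5) * d) / sigma)
           for i in range(steps)) * d

# Partial sum of the series, with B(1/2, n+1) = Gamma(1/2) Gamma(n+1) / Gamma(n+3/2).
series = (2 / math.sqrt(math.pi)) * sum(
    (-1) ** n * (math.sqrt(2) * R) ** (2 * n + 1)
    / (sigma ** (2 * n + 1) * (2 * n + 1) * math.factorial(n))
    * math.gamma(0.5) * math.gamma(n + 1) / math.gamma(n + 1.5)
    for n in range(30))
print(quad, series)  # the two values agree closely
```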
|
|integration|definite-integrals|error-function|
| 0
|
Proof or counterexample for the convergence of projected gradient descent with summable stepsizes
|
Suppose we want to solve the following optimization problem: $$ \min_{x\in\mathcal{X}\subset\mathbb{R}^n} f(x) $$ where $\mathcal{X}$ is closed and convex and $f$ can be nonconvex but still smooth. Projected gradient descent for solving this problem writes: $$ x_{t+1}\leftarrow \mathrm{proj}_{\mathcal{X}}(x_{t} - \eta_t \nabla f(x_t)) $$ where $\eta_t>0$ is the stepsize. A commonly used convergence criterion (stopping criterion) when $f$ is nonconvex is $$ G_t:= \frac{1}{\eta_t}\{x_t - \mathrm{proj}_{\mathcal{X}}(x_{t} - \eta_t \nabla f(x_t))\} $$ which reduces to $\nabla f(x_t)$ if $\mathcal{X}=\mathbb{R}^n$ . Usually we take $\eta_t$ to be a constant or a non-summable sequence, i.e. $\sum_{t=1}^{\infty}\eta_t=\infty$ (see for example Bertsekas, Nonlinear Programming, sec. 3.3). My question is: what if I take $\eta_t$ to be a summable sequence, say $\eta_t=1/t^2$ ? Can we prove or disprove the claim that if $G_t\rightarrow 0$ in this case, the algorithm still converges to a stationa
|
To answer my question directly: By Lemma 2.3.1 in Bertsekas, Nonlinear Programming (1st ed.), $G_t$ is a non-increasing function w.r.t. $\eta_t$ . Thus if we can show that $G_t\rightarrow 0$ with $\eta_t=1/t^2$ , then we are indeed still converging to a first-order stationary point. Shiv's answer is also correct in answering the other aspect of this question: taking $\eta_t=1/t^2$ will generally not lead to convergence, since our stepsizes are summable.
|
|optimization|reference-request|nonlinear-optimization|convex-geometry|non-convex-optimization|
| 0
|
Is this Improper Integral Correct?
|
I solved this improper integral and I know that the result is correct, but I wanted to understand if the theorems I used and the steps I took are actually correct $$ \displaystyle\int\limits_{-1}^1\frac{\sqrt{|x|}}{|x|-1}\,dx $$ I broke the integral at $x=0$ and since it is an even function I studied the interval $[0,1)$. Subsequently I noticed that the function is negative on the interval and I decided to use the asymptotic comparison criterion. I rewrote the function as $$ \frac{1}{\sqrt{x}(1-\frac{1}{x})}\ $$ and I chose $$ g(x) = 1-\frac{1}{x}$$ I solved $\lim_{x\to 1} f(x)/g(x) = 1$ and according to the asymptotic comparison criterion $g(x)$ behaves like $f(x)$ in this interval. So I started studying $g(x)$. I noticed that $g(x)$ is also negative in this interval and I decided to use the asymptotic comparison here too. As the function to compare I chose $h(x) = -1/x$ and since $\lim_{x\to 0} g(x)/h(x) = 1$, $h(x)$ behaves like $g(x)$ in this interval. So since $-1/x$ diverges negatively, $g(x)$ and consequently f(
|
By Comparison Test As OP said, the integrand is even and hence $$ \int_{-1}^1 \frac{\sqrt{|x|}}{|x|-1} d x=2 \int_0^1 \frac{\sqrt{x}}{x-1} d x $$ For any $x\in (\tfrac14,1)$ we have $\sqrt{x}>\tfrac12$ and $x-1<0$, so $$ \frac{\sqrt{x}}{x-1} < \frac{1}{2(x-1)} < 0, $$ and $$ \int_{1/4}^1 \frac{1}{2(x-1)} d x=\left[\tfrac12\ln |x-1|\right]_{1/4}^1=-\infty $$ Hence $ \displaystyle \int_{-1}^1 \frac{\sqrt{|x|}}{|x|-1} d x$ is divergent (to $-\infty$).
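The divergence can also be seen numerically: the truncated integrals $2\int_0^{1-\delta}\frac{\sqrt{x}}{x-1}\,dx$ decrease without bound as $\delta \to 0$ (a quadrature sketch of mine):

```python
import math

def truncated_integral(delta, steps=400000):
    # midpoint quadrature of 2*sqrt(x)/(x-1) over (0, 1-delta)
    b = 1.0 - delta
    d = b / steps
    return 2 * sum(math.sqrt((i + 0.5) * d) / ((i + 0.5) * d - 1)
                   for i in range(steps)) * d

vals = [truncated_integral(10.0 ** -m) for m in (2, 4, 6)]
print(vals)  # roughly like 2*ln(delta): decreasing without bound
```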
|
|calculus|improper-integrals|
| 0
|
Closed form for the similar integrals $\int_0^\infty1-\text{erf}(x)dx$ and $\int_0^\infty x(1-\text{erf}(x))dx$
|
How do I find a closed form of $$\int_0^\infty1-\text{erf}(x)dx\tag{1}$$ ? A follow up is $$\int_0^\infty x(1-\text{erf}(x))dx\tag{2}$$ Where $$\text{erf}(x)=\frac2{\sqrt{\pi}}\int_0^xe^{-t^2}dt$$ The second integral seems to be $\frac14$ according to desmos. I also found that the first integral is $\frac1{\sqrt{\pi}}$ . I have solved these integrals (I will post them in an answer) but I wonder if there are other solutions.
|
Another approach is to employ Feynman's Trick coupled with the Dominated Convergence Theorem and Leibniz' Integral Rule. Here let: \begin{equation} I = \int_0^\infty \left(1 - \operatorname{erf}\left(x\right) \right)\:dx \end{equation} From here, we introduce all three elements previously mentioned and define the following Integral function: \begin{equation} J\left(t\right) = \int_0^\infty \left(1 - \operatorname{erf}\left(tx\right) \right)\:dx \end{equation} Where $t\geq 0$ . We observe that $I = J\left(1\right)$ and furthermore that: \begin{equation} J\left(+\infty\right) = \lim_{t \rightarrow +\infty}J\left(t\right) = \lim_{t \rightarrow +\infty}\int_0^\infty \left(1 - \operatorname{erf}\left(tx\right) \right)\:dx = \int_0^\infty \left(1 - \operatorname{erf}\left(+\infty\right) \right)\:dx = \int_0^\infty \left(1 - 1\right)\:dx = 0 \end{equation} Here we employ Leibniz's Integral Rule and differentiate under the curve w.r.t ' $t$ ': \begin{align} J'\left(t\right) &= \frac{d}{dt} \int_0^\infty \left(1 - \operatorname{erf}\left(tx\right) \right)\:dx = \int_0^\infty \frac{\partial}{\partial t}\left(1 - \operatorname{erf}\left(tx\right) \right)\:dx \\ &= -\frac{2}{\sqrt{\pi}}\int_0^\infty x\,e^{-t^2x^2}\:dx = -\frac{2}{\sqrt{\pi}}\cdot\frac{1}{2t^2} = -\frac{1}{\sqrt{\pi}\,t^2} \end{align} Integrating from $1$ to $\infty$ gives \begin{equation} J\left(+\infty\right) - J\left(1\right) = \int_1^\infty J'\left(t\right)\:dt = -\frac{1}{\sqrt{\pi}}\left[-\frac{1}{t}\right]_1^\infty = -\frac{1}{\sqrt{\pi}}, \end{equation} and since $J\left(+\infty\right) = 0$ we conclude that \begin{equation} I = J\left(1\right) = \frac{1}{\sqrt{\pi}} \end{equation}
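Both closed forms are easy to confirm numerically (a midpoint-rule sketch of mine; the upper limit $8$ is an arbitrary cutoff, justified since $1-\operatorname{erf}(x)$ decays like $e^{-x^2}$):

```python
import math

def quad(f, a, b, steps=200000):
    # simple midpoint rule
    d = (b - a) / steps
    return sum(f(a + (i + 0.5) * d) for i in range(steps)) * d

# Truncate at x = 8: the tail of 1 - erf(x) is far below any tolerance used here.
I1 = quad(lambda x: 1 - math.erf(x), 0.0, 8.0)
I2 = quad(lambda x: x * (1 - math.erf(x)), 0.0, 8.0)
print(I1, 1 / math.sqrt(math.pi))  # both about 0.5641895...
print(I2)                          # about 0.25
```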
|
|calculus|integration|error-function|
| 0
|
Is this even solvable? Area of 4th triangle given 3 areas
|
I was sent an interesting problem. The diagram has $4$ triangles and their side lengths are given. The areas of $3$ triangles are given, and the task is to solve for the area of the last triangle. I first tried Heron's formula but that led to a system of equations that did not seem solvable. I was thinking you are supposed to make a square of side $6$ , which is then divided into those $4$ triangles. That would lead to an easy solution. But I didn't see any method to justify why the angles lined up--it could be the triangles don't form a square. I was thinking law of sines or cosines but I couldn't see a path to prove the angles lined up. I searched the web and forums without luck. (Numerade claims to have an answer but it is paywalled). Is the question solvable as stated? I've seen my fair share of well-intentioned but improperly stated questions. (I run the YouTube channel MindYourDecisions so if this turns into a video I would credit answers appropriately.)
|
The question is not solvable. "L ... perpendicular ... also parallel" in the user's solution is not necessarily true. Call the vertices $A, B, C, D$ ; then we can have $DA=AB=BC = 6$ and $DAB, ABC$ not right angles, in which case $L$ is parallel to $BC$ by construction but not necessarily to $DA.$ The 4th triangle can have any area between $14$ and $14.734.$ Notice $x+y=6$ doesn't mean the 2 segments have to line up. I have started with the 3 triangles and not drawn in $CD.$ https://imgur.com/a/UpfQsfn The possible areas can be calculated by brute force.
|
|geometry|
| 0
|
Clarification on independence and conditionality of coin tosses and the resulting PMF
|
I'm confused about independence of events in the following question from Blitzstein and Hwang, chapter 3 (Random Variables): Let X be the number of Heads in 10 fair coin tosses. Find: (a) the conditional PMF of X, given that the first two flips landed Heads, and (b) the conditional PMF of X, given that at least two flips land Heads. There are already three answers: Prob. 24, Random Variables and their distribution - Blitzstein and Hwang and Finding the Conditional PMF of Random Variables and Probability Distribution of No. of Heads (H) when a Fair Coin is flipped 10 times (a Blitzstein and Hwang problem) , but reading them hasn't helped me understand. If I focused purely on (a), I do not see how the first two flips being heads can influence the remaining 8 flips. The way I understand it, given that the first two flips result in heads should not provide any information about further flips. That means that $$P(X=k) = \binom{8}k\frac{1}{2}^k\frac{1}{2}^{8-k}+2$$ Ie. The remaining 8 flips
|
Your essential reasoning for (a) is correct but your notation and subsequent expression should be modified, since $X$ is the unconditional distribution of the number of heads and is therefore binomial with parameters $n = 10$ and $p = 0.5$ . So it is incorrect to say $$\Pr[X = k] = \binom{8}{k} (0.5)^k (1 - 0.5)^{8-k} + 2.$$ First of all, the support of $X$ is on the set $\{0, 1, 2, \ldots, 10\}$ , so if you plug in $k = 10$ , the RHS doesn't make any sense: you'd get $$\Pr[X = 10] = \binom{8}{10} (0.5)^{10} (1 - 0.5)^{-2} + 2.$$ Also, you can't just arbitrarily add $2$ to the probability mass function. Probabilities must always range between $0$ and $1$ . The conditional random variable for the total number of heads given that the first two are heads, requires us to introduce a second random variable, say $Z$ , which counts the number of heads in the first two flips. $Z$ is therefore binomial with parameters $n = 2$ and $p = 0.5$ , and we can then describe the desired conditional rand
|
|probability|random-variables|conditional-probability|
| 1
|
Is there an opposite of a dirac delta function, a function that is infinitely wide and infinitesimally high?
|
Is there an opposite of a Dirac delta function, a function that is infinitely wide and infinitesimally high? The Dirac delta function is defined as: $$ \delta(x) = \begin{cases} 0, & \text{if } x \neq 0\\ \infty, & \text{if } x = 0 \end{cases} $$ where $$ \int_{-\infty}^{\infty} \delta(x) \, dx = 1 \\ \int_{-\infty}^{\infty} f(x) \delta(x - a) \, dx = f(a) $$ Is there a function: $$ \epsilon(x) = \lim_{\epsilon \to 0} \epsilon $$ where $$ \int_{-\infty}^{\infty} \epsilon(x) \, dx = 1 $$ The Dirac delta function $\delta(x)$ is infinitely high but infinitesimally wide; this new function $\epsilon(x)$ is infinitesimally high but infinitely wide. I'm guessing there's no such function/distribution given how limits work, and it probably would not be useful if it did exist, but is there any credence to this idea?
|
As others have said, the delta function is not a function but a distribution. I won't complain about your definition, but point out it really is the limit of a tall peaky function, where you have to be integrating over another function and you take the limit outside the integral. I would suggest a better definition is $\delta(x)=\lim_{\epsilon \to 0} k(\epsilon,x)$ with the integral of $\delta(x)$ over any interval including $0$ to be $1$ and $k(\epsilon,x)$ being some tall peaky function, probably always positive and not doing strange things as $\epsilon$ changes. In that spirit the obvious answer is to define a set of functions $g(\epsilon,x)$ as $$g(\epsilon,x)=\begin {cases} \epsilon /2 & |x| \le 1/\epsilon \\ 0& \text{otherwise} \end {cases}$$ It has the property that $\lim_{\epsilon \to 0} g(\epsilon,x)=0$ and the integral of $g$ over the real line is constant at $1$ . If you take $$\lim_{\epsilon \to 0} \int_{-\infty }^\infty f(x)g(\epsilon,x) dx$$ (note I have taken the limit out of
|
|real-analysis|limits|functions|dirac-delta|
| 1
|
Clarification on independence and conditionality of coin tosses and the resulting PMF
|
I'm confused about independence of events in the following question from Blitzstein and Hwang, chapter 3 (Random Variables): Let X be the number of Heads in 10 fair coin tosses. Find: (a) the conditional PMF of X, given that the first two flips landed Heads, and (b) the conditional PMF of X, given that at least two flips land Heads. There are already three answers: Prob. 24, Random Variables and their distribution - Blitzstein and Hwang and Finding the Conditional PMF of Random Variables and Probability Distribution of No. of Heads (H) when a Fair Coin is flipped 10 times (a Blitzstein and Hwang problem) , but reading them hasn't helped me understand. If I focused purely on (a), I do not see how the first two flips being heads can influence the remaining 8 flips. The way I understand it, given that the first two flips result in heads should not provide any information about further flips. That means that $$P(X=k) = \binom{8}k\frac{1}{2}^k\frac{1}{2}^{8-k}+2$$ Ie. The remaining 8 flips
|
Probability mass cannot be greater than $1$ , so adding $2$ is clearly incorrect. What you want is the probability that the count of heads among ten tosses equals $k$ given that the first two tosses are heads (call that event $\rm B$ ). The event occurring ensures that the first two tosses are certainly heads, and the remaining $8$ tosses are independent fair Bernoulli trials. Thus the conditional distribution of $X-2$ will be binomial: $$X-2\mid\mathrm B\sim\mathcal{Bin}(8,1/2)$$ $$\mathsf P(X=k\mid\mathrm B) = \dbinom{8}{k-2}\dfrac{1}{2^{8}}~\big[k\in[[2...10]]\big]$$
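A brute-force enumeration over all $2^{10}$ toss sequences confirms the shifted $\mathcal{Bin}(8,1/2)$ form (a sketch of mine):

```python
from itertools import product
from math import comb

# Enumerate all 2^10 toss sequences (1 = heads) and condition on the event B
# that the first two tosses are heads.
counts = {}
total = 0
for seq in product((0, 1), repeat=10):
    if seq[0] == 1 and seq[1] == 1:
        total += 1
        counts[sum(seq)] = counts.get(sum(seq), 0) + 1

# The conditional PMF matches Binomial(8, 1/2) shifted by 2.
max_gap = max(abs(counts.get(k, 0) / total - comb(8, k - 2) / 2 ** 8)
              for k in range(2, 11))
print(total, max_gap)  # 256 sequences in B; gap is 0
```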
|
|probability|random-variables|conditional-probability|
| 0
|
Angles can be congruent but not equal?
|
How could it be that ‘two angles cannot be equal, but two angles can be congruent’? My understanding was that if a shape is congruent then it would be equal in size and shape. If two angles are congruent, how could they not be equal?
|
I’ll just say it: in typical usage, angle refers to the magnitude of the angle, and angles of the same magnitude are certainly equal. Congruence typically means a figure or shape is identical - each corresponding part is the same. If two line segments meet at an angle and the segments and angles are equal then you have congruence. As if two congruent triangles had a side subtracted. You would typically not bother saying rays meeting at identical angles are congruent, though it would be true. Unless your book has some explicit statement (and explanation) to the contrary, the above would be a standard Euclidean view.
|
|geometry|
| 0
|
Simplifying the expression $(1 + \cos \alpha + i\sin \alpha)^k + (1 + \cos \alpha - i\sin \alpha)^k$
|
I came across the following question the other day and was wondering if my solution is the correct solution. Here is the question. Prove that $$ (1 + \cos \alpha + i\sin \alpha)^k + (1 + \cos \alpha - i\sin \alpha)^k = 2^{k+1} \cos \left( \frac{k\alpha}{2} \right) \cos^k \left( \frac{\alpha}{2} \right) $$ My solution is: $$ (1 + \cos \alpha + i\sin \alpha)^k + (1 + \cos \alpha - i\sin \alpha)^k = 2^{k+1} \cos \left( \frac{k\alpha}{2} \right) \left| \cos^k \frac{\alpha}{2} \right| $$ The absolute value is obtained if, say, $z = (1 + \cos \alpha) + i\sin \alpha $ , then after some trig substitution one gets: $$ |z| = \sqrt{ 4 \cos^2 \frac{\alpha}{2} } $$ $$ |z| = 2\left| \cos \frac{\alpha}{2} \right| $$ Finding Arg $z = \dfrac{\alpha}{2}$ , one gets: $$ z = 2 \left|\cos \frac{\alpha}{2} \right| \left( \cos \frac{\alpha}{2} + i \sin \frac{\alpha}{2} \right) $$ $$ \overline{z} =2 \left|\cos \frac{\alpha}{2} \right| \left( \cos \frac{\alpha}{2} - i \sin \frac{\alpha}{2} \right) $$ Using d
|
Consider the expression $1 + \cos \alpha + i \sin \alpha$ . We can represent this expression in polar form as $r (\cos \theta + i \sin \theta)$ , where $r$ is the modulus and $\theta$ is the argument of the complex number. To find $r$ , we calculate the magnitude of the complex number: \begin{align*} r &= \sqrt{(1 + \cos \alpha)^2 + \sin^2 \alpha} \\ &= \sqrt{2 + 2 \cos \alpha} \\ &= \sqrt{2(1 + \cos \alpha)} \\ &= \sqrt{2(2 \cos^2 \left(\frac{\alpha}{2}\right))} \\ &= 2 \cos \left(\frac{\alpha}{2}\right). \end{align*} The argument $\theta$ is determined by the tangent of the angle, which is given by: \begin{align*} \tan \theta &= \frac{\sin \alpha}{1 + \cos \alpha} \\ &= \frac{2 \sin \left(\frac{\alpha}{2}\right) \cos \left(\frac{\alpha}{2}\right)}{2 \cos^2 \left(\frac{\alpha}{2}\right)} \\ &= \tan \left(\frac{\alpha}{2}\right), \end{align*} implying that $\theta = \frac{\alpha}{2}$ . Similarly, for the expression $1 + \cos \alpha - i \sin \alpha$ , we find it can also be represented as $2 \cos\left(\frac{\alpha}{2}\right)\left(\cos \frac{\alpha}{2} - i \sin \frac{\alpha}{2}\right)$ . By de Moivre's theorem, \begin{align*} (1 + \cos \alpha + i\sin \alpha)^k + (1 + \cos \alpha - i\sin \alpha)^k &= 2^k \cos^k\left(\frac{\alpha}{2}\right)\left(\cos \frac{k\alpha}{2} + i \sin \frac{k\alpha}{2} + \cos \frac{k\alpha}{2} - i \sin \frac{k\alpha}{2}\right) \\ &= 2^{k+1} \cos^k\left(\frac{\alpha}{2}\right)\cos\left(\frac{k\alpha}{2}\right). \end{align*}
|
|complex-numbers|
| 0
|
Is there an opposite of a dirac delta function, a function that is infinitely wide and infinitesimally high?
|
Is there an opposite of a Dirac delta function, a function that is infinitely wide and infinitesimally high? The Dirac delta function is defined as: $$ \delta(x) = \begin{cases} 0, & \text{if } x \neq 0\\ \infty, & \text{if } x = 0 \end{cases} $$ where $$ \int_{-\infty}^{\infty} \delta(x) \, dx = 1 \\ \int_{-\infty}^{\infty} f(x) \delta(x - a) \, dx = f(a) $$ Is there a function: $$ \epsilon(x) = \lim_{\epsilon \to 0} \epsilon $$ where $$ \int_{-\infty}^{\infty} \epsilon(x) \, dx = 1 $$ The Dirac delta function $\delta(x)$ is infinitely high but infinitesimally wide; this new function $\epsilon(x)$ is infinitesimally high but infinitely wide. I'm guessing there's no such function/distribution given how limits work, and it probably would not be useful if it did exist, but is there any credence to this idea?
|
I suppose you could consider the family of functions $$\epsilon(x;s)=\frac{1}{s}\exp\left(-\pi~\frac{x^2}{s^2}\right)$$ We remark the following properties: $$\int_\Bbb R \epsilon(x;s)\mathrm dx=1 \\ \forall s\in\mathbb R_+$$ and $$\lim_{s\to\infty}\epsilon(x;s)=0 \\ \forall x\in\mathbb R$$ However, unlike the limit definitions of the Dirac delta distribution, the pointwise limit of this family is a well-defined function - that is, the zero function.
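A quick numeric check of the two properties (using the normalization $\frac{1}{s}$, which is what makes the integral equal $1$ for every $s$):

```python
import math

def eps_fn(x, s):
    # the family above, normalized by 1/s so it integrates to 1
    return (1.0 / s) * math.exp(-math.pi * x * x / (s * s))

def total_mass(s, half_width=60.0, steps=200000):
    # midpoint quadrature over (-half_width, half_width); the Gaussian tail
    # beyond that is negligible for the s values used here
    d = 2 * half_width / steps
    return sum(eps_fn(-half_width + (i + 0.5) * d, s) for i in range(steps)) * d

masses = [total_mass(s) for s in (1.0, 5.0, 10.0)]
peak_at_large_s = eps_fn(0.0, 1e6)   # pointwise values vanish as s grows
print(masses, peak_at_large_s)
```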
|
|real-analysis|limits|functions|dirac-delta|
| 0
|
Simplifying the expression $(1 + \cos \alpha + i\sin \alpha)^k + (1 + \cos \alpha - i\sin \alpha)^k$
|
I came across the following question the other day and was wondering if my solution is the correct solution. Here is the question. Prove that $$ (1 + \cos \alpha + i\sin \alpha)^k + (1 + \cos \alpha - i\sin \alpha)^k = 2^{k+1} \cos \left( \frac{k\alpha}{2} \right) \cos^k \left( \frac{\alpha}{2} \right) $$ My solution is: $$ (1 + \cos \alpha + i\sin \alpha)^k + (1 + \cos \alpha - i\sin \alpha)^k = 2^{k+1} \cos \left( \frac{k\alpha}{2} \right) \left| \cos^k \frac{\alpha}{2} \right| $$ The absolute value is obtained if, say, $z = (1 + \cos \alpha) + i\sin \alpha $ , then after some trig substitution one gets: $$ |z| = \sqrt{ 4 \cos^2 \frac{\alpha}{2} } $$ $$ |z| = 2\left| \cos \frac{\alpha}{2} \right| $$ Finding Arg $z = \dfrac{\alpha}{2}$ , one gets: $$ z = 2 \left|\cos \frac{\alpha}{2} \right| \left( \cos \frac{\alpha}{2} + i \sin \frac{\alpha}{2} \right) $$ $$ \overline{z} =2 \left|\cos \frac{\alpha}{2} \right| \left( \cos \frac{\alpha}{2} - i \sin \frac{\alpha}{2} \right) $$ Using d
|
There's a quick way to test if there's a problem. We can just substitute some values for $k$ and $\alpha$ , particularly ones where the two final answers will differ, and see which produces the correct answer. In particular, if we let $k = 1$ , and for the sake of nice trig values, but with $\cos(\alpha/2) < 0$ , take $\alpha = 7\pi/3$ , then $$(1 + \cos \alpha + i\sin \alpha)^k + (1 + \cos \alpha - i\sin \alpha)^k = 1 + \frac{1}{2} + i \frac{\sqrt{3}}{2} + 1 + \frac{1}{2} - i\frac{\sqrt{3}}{2} = 3,$$ and $$2^{k+1} \cos \left( \frac{k\alpha}{2} \right) \cos^k \left( \frac{\alpha}{2} \right) = 4\left(-\frac{\sqrt{3}}{2}\right)\left(-\frac{\sqrt{3}}{2}\right) = 3.$$ The original formula correctly evaluates this case, at least, while yours will differ by a sign. The error here appears to be when you calculate the argument to be $\frac{\alpha}{2}$ . You don't really include your reasoning here, so it's difficult to pick specifically where you went wrong. However, you can observe that your next two e
|
|complex-numbers|
| 0
|
Permutation with repetition, more elements than slots to choose from
|
Using 3 A's, 5 B's, 7 C's, how many 4-letter words can you arrange them into? Note that here 3+5+7>4. We can't use the formula $\frac{n!}{p!q! \cdots}$ because $\frac{4!}{3!5!7!}$ would be a fraction. What is the general formula for this type of problem?
|
Since you asked for a general way to count labeled enumeration problems: use exponential generating functions. If a word has $a$ As, $b$ Bs, and $c$ Cs, then the number of permutations that can be formed is $$\frac{4!}{a!b!c!}$$ Then we can use counting combinations similar to ordinary generating functions: for $a \le 3$ , find the coefficient of $x^4/4!$ in $$\underbrace{\left(\frac{x^0}{0!} + \frac {x^1} {1!} + \frac{x^2}{2!} + \frac{x^3}{3!} \right)}_{\text{select } a} \underbrace{\left(\frac{x^0}{0!} + \frac {x^1} {1!} + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \cdots \right)^2 }_{\text{select } b,c} = \left(1 + x + x^2/2 + x^3/6 \right) e^{2x}$$ (Since $b$ and $c$ are effectively unlimited, we can use the EGF of $e^x$ ) P.S. I think for extremely large polynomials, FFT polynomial multiplication can be used, but obviously that isn't needed here.
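The coefficient extraction can be cross-checked by brute force (a sketch of mine; both counts come out to $80$, i.e. $3^4$ minus the single word AAAA that over-uses A):

```python
from itertools import product
from math import factorial

# Direct count of 4-letter words over {A, B, C} respecting the supply limits.
brute = sum(1 for w in product("ABC", repeat=4)
            if w.count("A") <= 3 and w.count("B") <= 5 and w.count("C") <= 7)

# The same count from the multinomial sum 4!/(a! b! c!) over a+b+c = 4, a <= 3
# (the limits on B and C never bind for a 4-letter word).
formula = sum(factorial(4) // (factorial(a) * factorial(b) * factorial(c))
              for a in range(4) for b in range(5) for c in range(5)
              if a + b + c == 4)
print(brute, formula)  # 80 80
```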
|
|combinatorics|permutations|
| 0
|
A nice property of a triangle with sides 1, 2, 2
|
About a year ago, while using GeoGebra, I discovered a beautiful property of a triangle with sides 1, 2, 2. It is that many of the important centers of this triangle are located at equal distances in a row, so much so that I called this triangle the “number line triangle.” You can consider the following two images: $G$ is the intersection of the altitudes (orthocenter) of $∆XYZ$ ; $C$ is the intersection of the angle bisectors (incenter) of $∆X^{'}Y^{'}Z^{'}$ ; $F$ is the incenter of $∆XYZ$ ; $B$ is the center of the circle $XMPDQN$ ; $E$ is the intersection of the medians (centroid) of $∆XYZ$ ; $A$ is the orthocenter of $∆X^{'}Y^{'}Z^{'}$ ; $D$ is the intersection of the perpendicular bisectors (circumcenter) of $∆XYZ$ ; and $XA=AB=BC=CD=DE=EF=FG$ . My question consists of two parts: Has this property been discovered before? And how do we prove it?
|
This is a brief proof; I may add details later. Let's consider standard notation. We just need to prove that $GI=HI$ , because $OH=3OG$ . Applying the angular conjugation property and the internal bisector theorem, we get this. The rest of the centers are easy to show similarly, because the other triangle is an enlargement of the same triangle.
|
|geometry|euclidean-geometry|
| 0
|
Find all strictly increasing sequences $\{a_k\}_{k=1}^\infty$ of positive integers such that $\frac{a_{nk}}{a_na_k} \in (1,2)$ for any $n,k>1$
|
Find all strictly increasing sequences $\{a_k\}_{k=1}^\infty$ of positive integers such that $\frac{a_{nk}}{a_na_k} \in (1,2)$ for any $n,k>1$ . Motivation First, I was wondering about the strictly increasing sequences of real numbers such that $\frac{a_{nk}}{a_na_k}=1$ for any integer $n,k\ge 1.$ I believe all such sequences look like $a_k=k^\alpha$ for some $\alpha > 0$ (due to Erdős’ result). Then I started changing the conditions. What if I try to find all the strictly increasing sequences such that $\frac{a_{nk}}{a_na_k}=2$ for any integer $n,k>1$ . We can’t deduce $a_1=1$ in this case. Since $a_1$ never participates in $\frac{a_{nk}}{a_na_k}=2$ anyway, it can be any number smaller than $a_2$ . Overall, it seems to me that virtually the same reasoning as for the previous question could be made for this case, so $a_k=k^\alpha$ for some positive $\alpha$ . But then from $\frac{a_{nk}}{a_na_k}=2$ we get that $1=2$ . So such sequences don’t exist. Now, what if we try the condition $\f
|
Let $a_n := n^2-1$ . Then $\dfrac{a_{xy}}{a_xa_y}\in(1, 2)$ for $x, y\ge2$ . In fact Proposition. $a_{xy} > a_xa_y$ Proof. $$\begin{aligned} (xy)^2-1 &> (x^2-1)(y^2-1)&\iff\\ x^2y^2-1 &>x^2y^2-x^2-y^2+1&\iff\\ x^2+y^2&>2 \end{aligned}$$ which holds true because $x, y>1.~\square$ Proposition. $a_{xy} < 2a_xa_y$ Proof. $$\begin{aligned} (xy)^2-1 &< 2(x^2-1)(y^2-1)&\iff\\ x^2y^2-1 &< 2x^2y^2-2x^2-2y^2+2&\iff\\ 2x^2+2y^2 &< x^2y^2+3 \end{aligned}$$ which holds true because $x, y\ge 2$ gives $x^2y^2\ge 4x^2$ and $x^2y^2\ge 4y^2$ , hence $x^2y^2\ge 2x^2+2y^2.~\square$
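A quick spot-check of both bounds over a grid of values (a sketch of mine):

```python
# Verify 1 < a_{xy} / (a_x a_y) < 2 for a_n = n^2 - 1 over a grid of x, y >= 2.
def a(n):
    return n * n - 1

ratios = [a(x * y) / (a(x) * a(y)) for x in range(2, 60) for y in range(2, 60)]
lo, hi = min(ratios), max(ratios)
print(lo, hi)  # the maximum 15/9 is attained at x = y = 2
```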
|
|sequences-and-series|
| 0
|
If $x+y=2$ is a tangent to the ellipse with foci $(2, 3)$ and $(3, 5)$, what is the square of the reciprocal of its eccentricity?
|
If $x+y=2$ is a tangent to the ellipse with foci $(2, 3)$ and $(3, 5)$ , what is the square of the reciprocal of its eccentricity? This could be done using the property that the product of the perpendiculars drawn from the foci to a tangent is equal to $b^2$ . But I couldn't figure out where the error in my approach is.
|
The equation of the tangent used is wrong. Your formula only works when the centre of the ellipse is at the origin and the axes of the ellipse are parallel to the x and y axes. In this case we use a modified version of the same, as pointed out by Jan-Magnus Økland in the comments. The centre of this ellipse is $(2.5, 4)$ and the axes of this ellipse are $-2x+y+1=0$ and $x+2y-\frac{21}2=0$ Therefore, in this case we use $$\frac{-2x+y+1}{\sqrt5}=m\frac{x+2y-\frac{21}2}{\sqrt5}\pm\sqrt{a^2m^2+b^2}$$ Note: I thank @Jan-Magnus Økland for pointing out a major blunder in my answer.
|
|conic-sections|
| 1
|
Solving $x^{55} \equiv 33 \pmod{257}$
|
I attempting to solve $x^{55} \equiv 33 \pmod{257}$ . In using Fermat's Little Theorem and the observation that $1 = 55 \cdot (-121) + 256\cdot 26,$ I get \begin{align*} x^{55} \equiv 33 \pmod{257} &\Rightarrow x^{55 \cdot (-121)} \equiv 33^{-121} \pmod{257} \\ & \Rightarrow x (x^{256})^{-26} \equiv 33^{-121} \pmod{257} \\ &\Rightarrow x \equiv 33^{-121} \pmod{257} . \end{align*} However, I must still compute $33^{-1} \pmod{257}$ by solving \begin{align} 33x \equiv 1 \pmod{257}, \tag{1} \end{align} with which I am having difficulty without a calculator. In using WolframAlpha, I have obtained $x \equiv 148 \pmod{257}$ and so must compute \begin{align} 148^{121} \pmod{257}, \tag{2} \end{align} with which I am again having difficulty evaluating. Are there any tips to compute these last two steps $(1)$ and $(2)$ ?
|
I will write a solution that is doable by hand, though a tad computational. We can use the Extended Euclidean Algorithm to compute modular inverses. We have \begin{align*} 257 &= 33 \times 7 + 26 && & 1 &= 5 \times 1 - 2 \times 2\\ 33 &= 26 \times 1 + 7 && && = 5 \times 3 - 7 \times 2\\ 26 &= 7 \times 3 + 5 && \implies && = 26 \times 3 - 7 \times 11\\ 7 &= 5 \times 1 + 2 && && = 26 \times 14 - 33 \times 11\\ 5 &= 2 \times 2 + 1 && && = 257 \times 14 - 33 \times 109 \end{align*} Hence, $33 \times 109 \equiv -1 \pmod{257}$ , so $33^{-1} \equiv 148 \pmod{257}$ . Now, we wish to evaluate $148^{121} \pmod{257}$ . Using repeated squares, we have \begin{align*} 148^2 \equiv (-109)^2 \equiv 11881 &\equiv 59 \pmod{257}\\ 148^3 \equiv -6431 &\equiv -6 \pmod{257}\\ 148^6 &\equiv 36 \pmod{257}\\ 148^{12} \equiv 1296 &\equiv 11 \pmod{257}\\ 148^{15} &\equiv -66 \pmod{257}\\ 148^{30} \equiv 4356 &\equiv -13 \pmod{257}\\ 148^{60} &\equiv 169 \pmod{257}\\ 148^{120} \equiv (-88)^2 \equiv 7744 &\equiv 34 \pmod{257}\\ 148^{121} \equiv 34 \times 148 \equiv 5032 &\equiv 149 \pmod{257} \end{align*} Hence, $x \equiv 149 \pmod{257}$ .
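For readers who want to verify the hand computation mechanically, Python 3.8+ can check each step (`pow(b, -1, m)` computes a modular inverse):

```python
# Sanity check of the hand computation with Python's built-in modular
# arithmetic (pow with a negative exponent computes a modular inverse).
p = 257
inv33 = pow(33, -1, p)      # the extended-Euclid step done by hand above
assert inv33 == 148

x = pow(148, 121, p)        # the repeated-squaring computation
assert x == 149

# x really does solve the original congruence x^55 = 33 (mod 257)
assert pow(x, 55, p) == 33
```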
|
|elementary-number-theory|modular-arithmetic|
| 1
|
Showing that $f\in C^{n}$ if and only if $\frac{\partial^n f}{\partial z^k\partial\bar{z}^{n-k}}$ is continuous
|
Suppose that $D$ is an area, $n$ is a natural number. Show that: $f\in C^{n}$ if and only if $\frac{\partial^n f}{\partial z^k\partial\bar{z}^{n-k}}$ is continuous on $D$ . It's easy to prove necessity, but I can't prove sufficiency. My idea is so direct, and I want to find that the expression of $\frac{\partial^n f}{\partial z^k\partial\bar{z}^{n-k}}$ , then I can find an expression which is indirectly related to $\frac{\partial^n f}{\partial x^n}$ and $\frac{\partial^n f}{\partial y^n}$ . So I try to use partial differential operators, we can prove that $$\frac{\partial^n }{\partial z^k\partial\bar{z}^{n-k}}=\frac{1}{2^n}\left(\frac{\partial}{\partial x}-i\frac{\partial}{\partial y}\right)^k\left(\frac{\partial}{\partial x}+i\frac{\partial}{\partial y}\right)^{n-k}$$ $\frac{\partial}{\partial x}$ and $\frac{\partial}{\partial y}$ are not interchangeable. But I found that the equation above seems to have no practical use, so my idea may be wrong.
|
I've thought of it, just use induction and notice a formula like this: $$ \frac{\partial^n f}{\partial \bar{z}^n}=\frac{\partial^{n-1} }{\partial \bar{z}^{n-1}}(\frac{\partial f}{\partial \bar{z}}); \frac{\partial^n f}{\partial z\partial\bar{z}^{n-1}}=\frac{\partial^{n-1} }{\partial z \partial\bar{z}^{n-2}}(\frac{\partial f}{\partial \bar{z}}); \frac{\partial^n f}{\partial z^2\partial\bar{z}^{n-2}}=\frac{\partial^{n-1} }{\partial z^2\partial\bar{z}^{n-3}}(\frac{\partial f}{\partial \bar{z}}); \cdots; \frac{\partial^n f}{\partial z^n}=\frac{\partial^{n-1} }{\partial z^{n-1}}(\frac{\partial f}{\partial z}) $$
|
|complex-analysis|analytic-continuation|
| 0
|
Seeking suggestions for a book with hard problems about surface and volume integrals
|
I am interested about the hard problem of surface and volume integral, so can anyone suggest me a book based on the problem on surface and volume integral (containing a lot of hard problem) for practicing and different types problem solving? The book should be theoretical as well as practical, any kind of help is acceptable.
|
G. M. Fichtenholz, "Differential and Integral Calculus", vol. 3, covers advanced integration methods for volumes and surface areas. That's just for a start.
|
|multivariable-calculus|reference-request|problem-solving|book-recommendation|surface-integrals|
| 0
|
A Radical Integral $\int\frac{x^2-3x+1}{\sqrt{1-x^2}}dx$
|
Evaluate: $$\int\frac{x^2-3x+1}{\sqrt{1-x^2}}dx$$ Please tell me where I committed the mistake as the answer I got does not match with the actual answer of $$=\frac{3sin^{-1}(x)}{2}+3\sqrt{1-x^2}-\frac{(\sqrt{1-x^2})x}{2}+c$$ Let $$t=\sqrt{1-x^2}$$ $$dx=\frac{-\sqrt{1-x^2}}{x}dt$$ $$x^2=1-t^2$$ $$x=\sqrt{1-t^2}$$ $$dx=\frac{-t}{\sqrt{1-t^2}}dt$$ Using all these values, I substituted them into the integral: $$\int\frac{[(1-t^2)-3(\sqrt{1-t^2})+1]}{t}\frac{-t.dt}{\sqrt{1-t^2}}$$ $$-\int(\frac{1-t^2}{\sqrt{1-t^2}}-3\frac{\sqrt{1-t^2}}{\sqrt{1-t^2}}+\frac{1}{\sqrt{1-t^2}})dt$$ This simplified to give: $$=\frac{-3sin^{-1}(\sqrt{1-x^2})}{2}+3\sqrt{1-x^2}-\frac{(\sqrt{1-x^2})x}{2}+c$$
|
Your integration is perfectly correct. To match the given answer, you have to manipulate a little more. Let's talk about only the $\frac{3\sin^{-1}\sqrt{1-x^2}}{2}$ term, more specifically the $\sin^{-1}\sqrt{1-x^2}$ part. Let $x=\cos \theta$ with $\theta\in[0,\pi/2]$ , so that $\sqrt{1-x^2}=\sin\theta$ . Now we end up with $$\sin^{-1}(\sin \theta)=\theta$$ $$\Rightarrow \cos^{-1}(x)$$ $$\Rightarrow\frac{\pi}{2}-\sin^{-1}(x)$$ Put this back into your solution; the constant $\frac{\pi}{2}$ gets absorbed into $c$ and you get the required answer.
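As an independent check, one can differentiate the book's antiderivative numerically and compare it with the integrand (a quick sketch; the sample points inside $(-1,1)$ are arbitrary):

```python
# Spot check: the derivative of the proposed antiderivative
#   F(x) = (3/2) asin(x) + 3 sqrt(1-x^2) - x sqrt(1-x^2)/2
# matches the integrand (x^2 - 3x + 1)/sqrt(1-x^2) on (-1, 1).
import math

def F(x):
    s = math.sqrt(1 - x * x)
    return 1.5 * math.asin(x) + 3 * s - 0.5 * x * s

def integrand(x):
    return (x * x - 3 * x + 1) / math.sqrt(1 - x * x)

h = 1e-6
for x in [-0.9, -0.5, 0.0, 0.3, 0.8]:
    deriv = (F(x + h) - F(x - h)) / (2 * h)   # central difference
    assert abs(deriv - integrand(x)) < 1e-5, (x, deriv)
```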
|
|integration|
| 1
|
About a step in the proof of "A linear transformation between two normed spaces is uniformly continuous if it is bounded."
|
Theorem: A linear transformation between two normed spaces is uniformly continuous if and only if it is bounded. Proof: Let $X_1$ and $X_2$ be normed linear spaces with norms $\| \cdot \|_i,i=1,2$ and let $T$ be a linear transformation between the two spaces. If $T$ is uniformly continuous, then it is continuous at 0 from which it follows that there is a universal $\delta >0$ such that $\|Tx\|_2 \leq 1$ whenever $\|x\|_1 \leq \delta$ . Thus, for example, with any $x \neq 0$ , we will have $$\|Tx\|_2= \left\|T\frac{\delta x}{\|x\|_1}\right\|_2 \cdot\frac{\|x\|_1}{\delta} \leq \frac{\|x\|_1}{\delta}$$ For the converse, if $x_n \to x$ , the fact that $\|T(x-x_n)\|_2 \leq C \|x-x_n\|_1$ means that $T x_n \to Tx$ . As C is independent of $x$ , the continuity is uniform. Question: for the converse direction, we assume $T$ is bounded. Thus, by definition, we have $\|T(x-x_n)\|_2 \leq C\|x-x_n\|_1$ for some finite constant $C$ . But how to see that "it means $T x_n \to Tx$ "?
|
To expand on the answer in the comments, note that if $\|\cdot\|$ is a norm on a vector space $X$ , the condition that $\lim_{n\to\infty}y_{n} = y$ means by definition that $\lim_{n\to\infty}\|y_{n} - y\| = 0$ . Returning to the question, note that for each $n\in\mathbb{N}$ , by the linearity and boundedness of $T$ , $$0 \leq \|T(x_{n}) - T(x)\|_{2} = \|T(x_{n} - x)\|_{2} \leq C\|x_{n} - x\|_{1}.$$ As $\lim_{n\to\infty}0 = 0$ and as $\lim_{n\to\infty}C\|x_{n} - x\|_{1} = 0$ , it follows from the squeeze theorem that $\lim_{n\to\infty}\|T(x_{n}) - T(x)\|_{2} = 0$ . By definition, $\lim_{n\to\infty}T(x_{n}) = T(x)$ . However, it is unclear how uniform continuity is being shown from this proof. Only sequential continuity has been shown, and it is not clear from what was stated in the proof as to how uniform continuity is obtained from here. However, you can use that the estimate $\|T(x - x_{n})\|_{2} \leq C\|x - x_{n}\|_{1}$ for $n\in\mathbb{N}$ implies Lipschitz continuity: for any $u, v \in X_{1}$ we have $\|T(u) - T(v)\|_{2} \leq C\|u - v\|_{1}$ , and a Lipschitz map is uniformly continuous (given $\varepsilon > 0$ , take $\delta = \varepsilon/C$ ).
|
|functional-analysis|continuity|normed-spaces|uniform-continuity|
| 1
|
Angles can be congruent but not equal?
|
How could ‘two angles cannot be equal, but two angles can be congruent? ‘ My understanding was if a shape is congruent then it would be equal in size and shape. If two angles are congruent, how could they not be equal?
|
If two angles are congruent, their measures are equal. It's like saying two twins are identical and their heights are the same. We don't say the twins are the same or equal, we just use a different word in that context.
|
|geometry|
| 0
|
Can a shape be both similar and congruent?
|
I know that congruent shapes have the same size and the rotations don't matter. I just want to know if congruent shapes can also be similar at the same time. I've seen that similar shapes have one shape that is a different size in comparison to another shape with proportional sides and the same angles. So is it safe to say that congruent shapes can also be similar?
|
All congruent shapes are similar (the ratio is 1:1 for corresponding sides), but not all similar shapes are congruent (since the corresponding sides of similar shapes are proportional, the ratios are the same but may not (and probably won't) simplify to 1:1).
|
|geometry|
| 0
|
Making sense of eigenvariable restriction in $\exists L$ rule in sequent calculus
|
I still cannot understand why the $\exists L$ rule from sequent calculus is sound: $$\Gamma, A[y/x] \vdash \Delta \over \Gamma, \exists xA \vdash \Delta$$ Intuitively I can explain this rule as "When $\Delta$ can be derived from assumptions $\Gamma\cup\{A[y/x]\}$ , due to the existence of some object $y$ that makes $A$ hold, it follows that $\Delta$ can be derived from assumptions $\Gamma\cup\{\exists xA\}$ ." What prevents me from fully understanding this rule is the eigenvariable restriction on $y$ . Why this restriction is necessary for soundness? I know there is one counterexample when the eigenvariable restriction is lightened: I could derive $\exists x A\vdash A[y/x]$ from $A[y/x]\vdash A[y/x]$ which is not always true. However, this counterexample does not explain to me why it is possible to derive $\exists xA \vdash \Delta$ from $A[y/x] \vdash \Delta$ when $y$ is not present in $\Delta$ . P.S. I already read this related question but I don't understand this part Now, since $y$
|
The "intuition" is: under what condition can we derive $A(t)$ from $\exists x A$ ? If we use a term (a "name") $t$ that is already used in the proof, we may get into trouble, because the formulas that already use $t$ "impose" a meaning on the term that may not be consistent with the one expressed by $\exists x A$ . Thus, the meaning of the $(\exists \text L)$ rule is: from a proof of $Δ$ with premises $Γ,A[y/x]$ we can derive a proof of $Δ$ with premises $Γ,∃xA$ , provided that the term $y$ is "the right one". How can we formalize the proviso above? By imposing that the "name" $y$ is nowhere used in the upper sequent except in the formula $A(y)$ . Consider a very simple counterexample: $(1=0) \to (1=0)$ is a correct initial sequent (or axiom). Applying the rule without the proviso we would get $\exists x(x=0) \to (1=0)$ , which is clearly wrong: here the eigenvariable (the term $1$ ) also occurs on the right-hand side. Another approach is using the intuitive semantical reading of sequent calculus: if the upper sequent is valid, then the lower sequent is valid as well, because $y$ does not occur in $\Gamma$ or $\Delta$ , so any witness for $\exists x A$ can serve as the value of $y$ .
|
|logic|sequent-calculus|
| 0
|
show two surface area formulas are equivalent
|
According to Wikipedia, for a uniform n-gonal antiprism with edge length $a$ , SA = $\frac{n}{2} \left( \cot\frac{\pi}{n} + \sqrt{3} \right) a^2$ . This formula seems unnecessarily complex to me, as I deduced the surface area should just be the area of each base plus the areas of the 2n equilateral triangles, which means the formula should be SA = 2B + 2n $\Bigl(\!\frac{a^2\sqrt{3}}{4}\Bigr)$ . Can anyone show these formulas are equivalent? (Pictures are always appreciated.) I'm also wondering whether there is a specific reason to use the first formula rather than the second.
|
Obviously, you have figured out the areas of the triangles: that's your $2n(a^2\frac{\sqrt{3}}4)$ . However, the area of the base is not always immediately clear. Imagine if I have an octagon for example; we need a formula for calculating the base area! We do that by splitting the base into $n$ equal isosceles triangles. At the centre, the angle will be $\frac{2\pi}{n}$ , and the outer side will be $a$ . Using basic trigonometry, we find the length of the inner sides of each triangle to be $\frac{a}{2\sin(\pi/n)}$ . Using the formula for the area of a triangle, each triangle has an area of $\frac{a^2\cot(\frac{\pi}{n})}{4}$ . Now there are $n$ of these triangles on each base, and two bases, so multiply that by $2n$ to get the required $\frac{n}{2}\cot(\frac{\pi}{n})a^2$ term!
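A quick numerical check that the Wikipedia formula and the piece-by-piece formula agree (a sketch; the edge length 1.7 is arbitrary):

```python
# Check that the two surface-area formulas agree numerically:
#   SA1 = (n/2) (cot(pi/n) + sqrt(3)) a^2
#   SA2 = 2 * (n a^2 cot(pi/n) / 4) + 2n * (a^2 sqrt(3) / 4)
import math

def sa_wikipedia(n, a):
    return 0.5 * n * (1 / math.tan(math.pi / n) + math.sqrt(3)) * a * a

def sa_pieces(n, a):
    base = n * a * a / (4 * math.tan(math.pi / n))   # area of one n-gon base
    triangles = 2 * n * (a * a * math.sqrt(3) / 4)   # 2n equilateral triangles
    return 2 * base + triangles

for n in range(3, 20):
    assert math.isclose(sa_wikipedia(n, 1.7), sa_pieces(n, 1.7))
```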
|
|geometry|derivation-of-formulae|
| 1
|
Proving convergence in prob of $X_n = X = Y_n$ using Markov's inequality.
|
Below question is from the book 'Probability course.com'. The book provides a solution using Chebyshev's inequality. Before reading that solution, I used Markov's inequality. Is my solution correct? Question: Let $X$ be a random variable, and $X_n = X + Y_n$ , where $E[Y_n] = \frac{1}{n}, V[Y_n] = \frac{\sigma^2}{n}$ , where $\sigma > 0$ is a constant. Show that $X_n$ converges in probability to $X$ . Answer: By defn of convergence in probability, I must show that $\lim_{n \rightarrow \infty} P(|X_n - X| \ge \epsilon) = 0, \forall \epsilon > 0$ . $P(|X_n - X| \ge \epsilon) = P(|X + Y_n - X| \ge \epsilon) = P(|Y_n | \ge \epsilon) = P(Y_n \le -\epsilon) + P(Y_n \ge \epsilon) $ . My strategy was to use Markov's inequality on each of the summands of the last equality. $ P(Y_n \le -\epsilon) = P(- Y_n \ge \epsilon) \le \frac{E[-Y_n]}{\epsilon} = \frac{-E[Y_n]}{\epsilon} = \frac{-1/n}{\epsilon} = \frac{-1}{n\epsilon} \rightarrow 0 \text{ as } n \rightarrow \infty.$ $ P(Y_n \ge \epsilon) \le
|
To use Markov's inequality, you need a nonnegative random variable. As far as we know, $Y_n$ can take both positive and negative values. Even if it were a.s. positive (or a.s. negative), you could apply Markov's inequality to only one of the two tails, so the two inequalities $ P(Y_n \le -\epsilon) = P(- Y_n \ge \epsilon) \le \frac{E[-Y_n]}{\epsilon} = \frac{-E[Y_n]}{\epsilon} = \frac{-1/n}{\epsilon} = \frac{-1}{n\epsilon} \rightarrow 0 \text{ as } n \rightarrow \infty.$ $ P(Y_n \ge \epsilon) \le \frac{E[Y_n]}{\epsilon} = \frac{1/n}{\epsilon} = \frac{1}{n\epsilon} \rightarrow 0 \text{ as } n \rightarrow \infty.$ cannot be simultaneously valid in general. (Note also that the first line produces a negative upper bound for a probability, which already signals the misuse.)
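A concrete instance of the failure, computed exactly rather than simulated: take $Y$ uniform on $[-1,1]$ , so $E[-Y]=0$ ; Markov's inequality applied blindly to this sign-changing variable would bound the left tail by $0$ , yet the tail has positive probability.

```python
# Markov's inequality can fail for a random variable taking negative values.
# Y uniform on [-1, 1]:  E[-Y] = 0, so the "bound"
#   P(Y <= -1/2) <= E[-Y] / (1/2) = 0
# would be violated, since in fact P(Y <= -1/2) = 1/4.
markov_bound = 0.0       # E[-Y] / epsilon with E[-Y] = 0, epsilon = 1/2
true_probability = 0.25  # length of [-1, -1/2] over length of [-1, 1]
assert true_probability > markov_bound
```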
|
|probability|probability-theory|statistics|convergence-divergence|probability-limit-theorems|
| 0
|
Problem with triangle in a circle
|
We are given that $ABC$ is equilateral so $AB=BC=CA$ , and that the length of the circle is $24\pi$ . What is the area of the triangle? Since $24\pi=2\pi r$ we get that $r=12$ and $OB=OC=OA=r$ ; $OAB$ , $OAC$ , $OCB$ is isosceles and because $ABC$ is equilateral the area of the triangle is $\frac{a^2\sqrt{3}}{4}$ , how to find $AB=BC=CA=a$ ?
|
Draw in the midpoint of $\overline{AB}$ and call it Point M. $\triangle$ AOM is a $30^\circ-60^\circ-90^\circ$ right triangle, which means that since the circumradius is 12, $|\overline{AO}|$ = 12, $|\overline{OM}|$ = 6, and $|\overline{AM}| = 6\sqrt{3}$ which means $|\overline{AB}| = 12\sqrt{3}$ . The formula for the area of an equilateral triangle is A = $\frac{s^2\sqrt{3}}{4}$ , so the area of $\triangle$ ABC is $\frac{(12\sqrt{3})^2\sqrt{3}}{4} = \frac{432\sqrt{3}}{4} \approx 187$ un $^2$ .
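A short numerical check of the final numbers (assuming the circumference $24\pi$ , so $r = 12$ ):

```python
# Circumradius 12 gives side 12*sqrt(3) and area 108*sqrt(3) ~ 187.
import math

r = 12
side = r * math.sqrt(3)                 # from the 30-60-90 triangle AOM
area = side ** 2 * math.sqrt(3) / 4     # equilateral-triangle area formula
assert math.isclose(area, 108 * math.sqrt(3))
assert round(area) == 187
```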
|
|geometry|triangles|circles|
| 1
|
What's $\int_{-\frac{\pi}{2}}^ {\frac{\pi}{2}}\text{erf}\left(\frac{\sqrt 2 R\cos\theta}{\sigma}\right)\text d\theta$?
|
The context of $$\int_{-\frac{\pi}{2}}^ {\frac{\pi}{2}}\text{erf}\left(\frac{\sqrt 2 R\cos\theta}{\sigma}\right)\text d\theta$$ is it came up whilst integrating the Rayleigh distribution function over an off-centered circle of radius $R$ : $$\int_{-\frac{\pi}{2}}^ {\frac{\pi}{2}}\int_0^{2R\cos\theta}\frac{r}{\sigma^2}\exp\left(-\frac{r^2}{2\sigma^2}\right)r\text dr\text d\theta.$$ Since I’ve never worked with integrals of special functions such as the error function, I’d appreciate any help.
|
Let $k=\frac{\sqrt{2} R}{\sigma }$ and use the series expansion of the error function $$\text{erf}(k \cos (\theta ))=\frac{2}{\sqrt{\pi }}\sum_{n=0}^\infty (-1)^n\, \frac{k^{2 n+1}}{(2 n+1)\,n!}\,\cos ^{2 n+1}(\theta )$$ $$\int_{-\frac{\pi}{2}}^ {+\frac{\pi}{2}}\cos ^{2 n+1}(\theta )\,d\theta=\sqrt{\pi }\,\frac{ \Gamma (n+1)}{\Gamma \left(n+\frac{3}{2}\right)}$$ $$I=\int_{-\frac{\pi }{2}}^{\frac{+\pi }{2}} \text{erf}(k \cos (\theta )) \, d\theta= \sum_{n=0}^\infty (-1)^n\, \frac{k^{2 n+1}}{\left(n+\frac{1}{2}\right) \Gamma \left(n+\frac{3}{2}\right)}$$ which, as already given in @Roland F's answer, is $$I=\frac{4 k}{\sqrt{\pi }}\, _2F_2\left(\frac{1}{2},1;\frac{3}{2},\frac{3}{2};-k^2\right)$$ If $a_n$ is the summand $$\frac {a_{n+1}}{a_{n}}=\frac{2 (2 n+1)}{(2 n+3)^2} k^2=\frac {k^2}n \left(1-\frac{5}{2 n}+O\left(\frac{1}{n^2}\right) \right)$$ If you write $$I=\sum_{n=0}^p (-1)^n\, \frac{k^{2 n+1}}{\left(n+\frac{1}{2}\right) \Gamma \left(n+\frac{3}{2}\right)}+\sum_{n=p+1} ^\infty (-1)^
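The series can be sanity-checked against direct numerical quadrature of $\text{erf}(k\cos\theta)$ for moderate $k$ (a sketch using a midpoint rule; the tested values of $k$ are arbitrary):

```python
# Cross-check the series I = sum (-1)^n k^(2n+1) / ((n+1/2) Gamma(n+3/2))
# against midpoint quadrature of erf(k cos(theta)) over (-pi/2, pi/2).
import math

def series(k, terms=60):
    return sum((-1) ** n * k ** (2 * n + 1) / ((n + 0.5) * math.gamma(n + 1.5))
               for n in range(terms))

def quadrature(k, steps=200000):
    h = math.pi / steps
    return h * sum(math.erf(k * math.cos(-math.pi / 2 + (i + 0.5) * h))
                   for i in range(steps))

for k in [0.5, 1.0, 2.0]:
    assert abs(series(k) - quadrature(k)) < 1e-6, k
```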
|
|integration|definite-integrals|error-function|
| 0
|
How many possible pairs in a random 5-card poker hand?
|
This question is from Introduction to Probability, question 32b. I've seen the solution and I understand it clearly. $$\frac{\binom{13}{2}\times \binom{4}{2}\times \binom{4}{2} \times 44}{\binom{52}{5}}$$ The $\binom{13}{2}$ is from the different pairs of values I can have. The two $\binom{4}{2}$ is for the two same values from the possible four cards. The 44 is the fifth card I can have which is different from the previous four. I came up with my own answer (which is obviously wrong) but I can't understand why it doesn't equal to the above. My own answer is $$\frac{\frac{52}{2!} \times \frac{(52-4)}{2!} \times 44} {3!}$$ $\frac{52}{2!}$ is for the first possible card I can choose. Therefore, the second one must be the same. The 2! is because order doesn't matter. $\frac{52-4}{2!}$ is for the third possible card I can choose. Therefore, the fourth one must be the same. 44 is the rest of the card that I can choose. 3! is for the order of the two pairs and the last card doesn't matter. I
|
It appears you are not even accounting for all five cards. If I understand correctly, here's what you are doing. Step 1. The first card can be any of the cards, so $52$ ways. Take the $3$ other cards with the same rank / denomination out of consideration. (Why? Don't you need another card of that rank to make a pair?) Step 2. Pick one of the $48$ cards remaining - $48$ ways. Again exclude the $3$ other cards with the same rank. (Again, the same question as above.) Step 3. Pick any of the $44$ cards remaining. Hopefully you see the problem. There could be more problems, but I'll wait until you clarify if this is actually what you were thinking, and where you end up after making the necessary corrections. After the OP's edit. You say - ...the second one must be the same. The 2! is because order doesn't matter. No. The second card cannot be the same card. You are not repeating. It has to be of the same rank. So you have $3$ options. Meaning, you must multiply by $3$ - both for the first pair and the second. The division at the end should be by $2!$ for the two interchangeable pairs, not by $3!$ .
|
|probability|combinatorics|combinations|card-games|
| 0
|
Do matrices really rotate and stretch vectors or is that definition incorrect?
|
I come from the applied math and statistics world, but I was talking to my friend who comes from the pure math and number theory world—in particular Galois representation theory. I mentioned something about the confusing definition of "matrices" in textbooks. Many textbooks talk about a matrix as the solution to a linear system of equations, or other abstract descriptions. The definition that I have always found useful, was the sense that matrices rotate and scale vectors through linear transformations. Now, coming from the pure math side, my friend said that this definition was not accurate. I am trying to paraphrase some of his comments, but he said that in higher dimensions, matrices can stretch and rotate vectors only locally . He also said that it depends on what vectors the matrix acts upon. His response threw me for a loop and I was trying to understand how to resolve his statements. First, is my understanding of matrices incorrect? It is fine if this idea of rotating and stretc
|
In the “linear transformation” point of view, matrices represent linear transformations, by denoting where the unit vector in the $x$ direction goes in the first column and where the unit vector in the $y$ direction goes in the second. Of course for $3$ dimensions there are more basis vectors but the same applies. Then, we could form the matrix like this: $\begin{bmatrix} a_{x_1} & a_{y_1} \\ a_{x_2} & a_{y_2} \\ \end{bmatrix}$ Then, we could apply it to a vector using matrix-vector multiplication: $\begin{bmatrix} a_{x_1} & a_{y_1} \\ a_{x_2} & a_{y_2} \\ \end{bmatrix} \begin{bmatrix} x \\ y \\ \end{bmatrix}$ Which would result in $\begin{bmatrix} xa_{x_1} + ya_{y_1} \\ xa_{x_2} + ya_{y_2} \\ \end{bmatrix}$ This is the vector you get when you stretch, rotate, and scale the original by the matrix. It would look something like this (for an arbitrary transformation): Grey: gridlines of the original plane Blue: gridlines of the transformed plane Here, we could form a matrix. Red: unit vector in the $x$ direction.
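The "columns are images of the basis vectors" description can be checked in a few lines of plain Python (a toy $2\times 2$ example with arbitrary entries):

```python
# The columns of a matrix are exactly the images of the basis vectors:
# multiplying A by e1 = (1, 0) returns A's first column, and so on.
def apply(A, v):
    return (A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1])

A = [[3, -1],
     [2,  5]]   # column 1 = image of e1, column 2 = image of e2

assert apply(A, (1, 0)) == (3, 2)
assert apply(A, (0, 1)) == (-1, 5)
# linearity: a general vector maps to the same combination of the columns
assert apply(A, (4, 7)) == (3 * 4 + (-1) * 7, 2 * 4 + 5 * 7)
```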
|
|linear-algebra|abstract-algebra|matrices|representation-theory|
| 0
|
Prove that $\Vert A \Vert \le \dfrac{1}{1-2\varepsilon} \sup_{x \in \mathcal{N},\ y\in \mathcal{M}} \langle Ax,y\rangle$
|
Problem. Let $A = (a_{ij})$ , $1\le i \le m$ , $1\le j \le n$ and $\varepsilon \in (0,1/2) $ . Let $\mathcal{N}$ be an $\varepsilon$ -net of $S^{n-1}$ and $\mathcal{M}$ be an $\varepsilon$ -net of $S^{m-1}$ ( $S^{m-1}$ and $S^{n-1}$ are unit spheres). Prove that \begin{align*} \sup_{x \in \mathcal{N},\ y \in \mathcal{M}} \langle Ax,y\rangle \le \Vert A \Vert \le \dfrac{1}{1-2\varepsilon} \sup_{x \in \mathcal{N}, y \in \mathcal{M}} \langle Ax,y \rangle. \end{align*} My attempt: Let $x \in \mathcal{N}$ and $y \in \mathcal{M}$ . Hence, there exist $x' \in S^{n-1}$ , $y' \in S^{m-1}$ such that \begin{align*} & \Vert x - x' \Vert_2 \le \varepsilon,\\ & \Vert y - y' \Vert_2 \le \varepsilon. \end{align*} Hence, we have \begin{align*} \vert \langle Ax,y \rangle - \langle Ax',y'\rangle \vert & = \vert \langle Ax,y\rangle - \langle Ax',y\rangle + \langle Ax',y\rangle - \langle Ax',y'\rangle\\ & = \vert \langle A(x-x'),y\rangle + \langle Ax',y-y'\rangle \vert\\ & = \vert \langle A(x-x'),y\rangle
|
Here is a sketch of one approach. First, show that given $\alpha \in (1,\infty )$ , there are $x_{0}\in S^{n-1}$ and $y_{0}\in S^{m-1}$ such that $\|A\| \leq \alpha \langle A(x_{0}), y_{0}\rangle $ . Second, given $x'\in \mathcal{N}$ and $y'\in\mathcal{M}$ such that $\|x' - x_{0}\| \leq \varepsilon$ and $\|y' - y_{0}\| \leq \varepsilon$ , modify the work from the inequality in your answer to show that $\langle A(x_{0}), y_{0}\rangle \leq \langle A(x'), y'\rangle + 2\varepsilon \|A\|$ . After that, use the previously obtained estimates to show that $$(1-2\varepsilon\alpha )\|A\| \leq \sup_{x\in\mathcal{N}, y\in\mathcal{M}}\langle A(x), y\rangle .$$ The result follows from taking the supremum over all $\alpha \in (1,\infty )$ .
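For intuition, here is a small 2-D numerical illustration (a sketch with an arbitrary matrix; the "net" is a uniform angle grid fine enough to be an $\varepsilon$ -net of the circle). The sup over a coarse net and the sup over a very fine net sandwich each other exactly as the inequality predicts.

```python
# 2-D illustration: sup over an eps-net sandwiches the operator norm.
import math

A = [[2.0, 1.0], [0.5, -1.0]]   # arbitrary test matrix

def pairing(x, y):              # <A x, y>
    ax0 = A[0][0] * x[0] + A[0][1] * x[1]
    ax1 = A[1][0] * x[0] + A[1][1] * x[1]
    return ax0 * y[0] + ax1 * y[1]

def net(eps):
    # n points with angular step h = 2*pi/n; then every point of the circle
    # is within chord distance 2*sin(h/4) <= eps of some grid point.
    n = int(math.pi / math.asin(eps / 2)) + 1
    return [(math.cos(2 * math.pi * i / n), math.sin(2 * math.pi * i / n))
            for i in range(n)]

def sup_over(eps):
    pts = net(eps)
    return max(pairing(x, y) for x in pts for y in pts)

eps = 0.25
coarse = sup_over(eps)    # sup over the eps-net
fine = sup_over(0.01)     # close to the true operator norm
assert coarse <= fine / (1 - 2 * 0.01) + 1e-12
assert fine <= coarse / (1 - 2 * eps) + 1e-12
```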
|
|functional-analysis|inequality|linear-transformations|normed-spaces|inner-products|
| 1
|
Find the number of positive real values of $a$, such that $a^2 x^2 + ax - 1 + 7a^2$ has $2$ distinct integer roots
|
Problem Find the number of positive real values of $a$ for which the given equation has 2 distinct integer roots. $$ a^2 x^2 + ax - 1 + 7a^2 $$ Till now, I have the following inferences. Assuming $\alpha, \beta$ to be the roots, we have $$\alpha + \beta = \frac{-1}{a}$$ $$\alpha\beta = -\frac{1}{a^2} + 7$$ But since $\alpha, \beta$ are integers, this implies that $|a| \leq 1$ . Also by the quadratic formula, we have $\alpha = \frac{-1 \pm \sqrt{28a^2 - 3}}{2a}$ From here we can infer that since the requirement is just integer roots, if $a$ satisfies the condition, then $-a$ also satisfies the condition. Thus, we may find the total number of solutions and divide by two to get the number of positive integers. However, whether this is helpful or not, I can not figure. One trivial solution emerges, i.e. $a = \pm 1$ . I cannot make progress beyond this.
|
I guess the question as transcribed is inconsistent with the expected answer of 3; it has been taken from the 16th All-Russian Mathematical Olympiad 1990, round 4, grade 9, P5. The original question is: Find all positive values of $a$ for which both roots of the equation $a^2x^2 + ax + 1 - 7a^2 = 0$ are integers. So I will answer according to the original question. Rearrange the equation as a quadratic in $a$ (treat $x$ as the constant instead of $a$ ): $a^2(x^2 - 7) + ax + 1 = 0$ . Since both roots are integers, $a$ is rational (their sum is $-1/a$ ), so the discriminant of this quadratic in $a$ must be a perfect square $k^2$ : $$x^2 - 4(x^2 - 7) = 28 - 3x^2 = k^2.$$ Clearly only $x = \pm 1, \pm 2, \pm 3$ can satisfy this condition. Now check, by placing each possible value of $x$ into the quadratic formula $a = \frac{-x \pm k}{2(x^2-7)}$ , which positive values of $a$ make both roots integers. They turn out to be $a = 1, \frac{1}{2}, \frac{1}{3}$ , giving the count of $3$ .
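The finite search described above can be run mechanically. The sketch below uses exact rational arithmetic via `Fraction` to enumerate the integer roots $x$ allowed by the discriminant condition $28 - 3x^2 = k^2$ , recover the candidate values of $a$ , and keep the positive ones for which both roots are integers:

```python
# Enumerate candidate integer roots x with 28 - 3x^2 a perfect square,
# recover a from the quadratic a^2 (x^2 - 7) + a x + 1 = 0, and keep the
# positive values of a for which BOTH roots in x are integers.
import math
from fractions import Fraction

def integer_roots(a):
    """Integer solutions t of a^2 t^2 + a t + 1 - 7 a^2 = 0 (exact)."""
    return [t for t in range(-10, 11)
            if a * a * t * t + a * t + 1 - 7 * a * a == 0]

positive_a = set()
for x in range(-3, 4):                  # 28 - 3x^2 >= 0 forces |x| <= 3
    d = 28 - 3 * x * x
    k = math.isqrt(d)
    if k * k != d:
        continue                        # discriminant not a perfect square
    for sign in (1, -1):
        a = Fraction(-x + sign * k, 2 * (x * x - 7))
        if a > 0 and len(integer_roots(a)) == 2:
            positive_a.add(a)

assert positive_a == {Fraction(1), Fraction(1, 2), Fraction(1, 3)}
print(len(positive_a))  # 3
```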
|
|algebra-precalculus|quadratics|
| 0
|
Generate random points on a section of a spherical surface
|
If I have a sphere with radius 1, and a section of the sphere defined by having a polar angle bounded by some constant θ and an azimuthal angle bounded by some constant φ, how do I generate random points on this surface such that the points are uniformly distributed?
|
Consider the sphere patch defined as the part of the sphere whose azimuthal angle $\theta$ and polar angle $\phi$ satisfy $0 \leqslant \theta_1 \leqslant \theta \leqslant \theta_2 \leqslant 2\pi$ and $0 \leqslant \phi_1 \leqslant \phi \leqslant \phi_2 \leqslant \pi$ for given $\theta_1$ , $\theta_2$ , $\phi_1$ , $\phi_2$ . Here is how to uniformly sample on this sphere patch. Take two independent random variables $U_1 \sim \mathcal{U}(0,1)$ and $U_2 \sim \mathcal{U}(0,1)$ . Define $\Theta = \theta_1(1-U_1) + \theta_2 U_1$ , define $A = \cos(\phi_1)(1-U_2) + \cos(\phi_2)U_2$ , define $B = \sin\bigl(\arccos(A)\bigr)$ . Then, denoting by $r$ the radius of the sphere, the random point $$r\bigl(\cos(\Theta)B, \sin(\Theta)B, A\bigr)$$ has the uniform distribution on the sphere patch. I don't know why this is true. This is how this sampler is implemented in my R package uniformly , and I copied this method from somewhere when I wrote this package.
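As for why it works: the spherical area element is $r^2\,d\theta\,d(\cos\phi)$ , so drawing $\theta$ uniformly on $[\theta_1,\theta_2]$ and $\cos\phi$ uniformly on $[\cos\phi_2,\cos\phi_1]$ , independently, is uniform on the patch. Here is a direct Python translation of the sampler, plus a check that the points land on the sphere inside the stated angle bounds (patch parameters chosen arbitrarily):

```python
# Uniform sampling on a sphere patch: theta uniform, cos(phi) uniform.
import math
import random

def sample_patch(r, theta1, theta2, phi1, phi2):
    u1, u2 = random.random(), random.random()
    theta = theta1 * (1 - u1) + theta2 * u1
    a = math.cos(phi1) * (1 - u2) + math.cos(phi2) * u2   # cos(phi), uniform
    b = math.sin(math.acos(a))                            # sin(phi) >= 0
    return (r * math.cos(theta) * b, r * math.sin(theta) * b, r * a)

random.seed(0)
r, th1, th2, ph1, ph2 = 2.0, 0.3, 1.2, 0.4, 1.0
for _ in range(1000):
    x, y, z = sample_patch(r, th1, th2, ph1, ph2)
    assert math.isclose(x * x + y * y + z * z, r * r, rel_tol=1e-9)
    assert ph1 - 1e-9 <= math.acos(z / r) <= ph2 + 1e-9
    assert th1 - 1e-9 <= math.atan2(y, x) <= th2 + 1e-9
```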
|
|probability|geometry|multivariable-calculus|probability-distributions|
| 1
|
Symmetric property of symmetric matrices
|
Are symmetric matrices symmetric about a particular axis geometrically ? Or is the property defined only for the matrix representation of the linear transformation? If you consider a unit square, after transformation due to a symmetric matrix, is the resulting shape symmetric along any axis?
|
Let $s$ be a symmetry in $\mathbb R^2$ . It is clear that $s^2=id_{\mathbb R^2}$ . Let $A:= \begin{bmatrix} 1 & 3 \\ 3 & 2 \end{bmatrix}$ $$A^2=\begin{bmatrix} 1 & 3 \\ 3 & 2 \end{bmatrix}^2=\begin{bmatrix} 10 & 9\\ 9& 13 \end{bmatrix}\neq\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$$ so the symmetric matrix $A$ does not represent a geometric symmetry. The interest of symmetric matrices is related to the theorem: a bilinear form is symmetric if and only if its matrix is symmetric. For example, $f:\mathbb R^2\times \mathbb R^2\to \mathbb R, ((x,y),(x',y'))\mapsto xx'+3xy'+3x'y+2yy'$ is symmetric, ie $$\forall X=(x,y),Y=(x',y'), f(X,Y)=f(Y,X)$$ The same goes for $g(X,Y)=xx'-yy'+xy'+x'y$ . Its matrix is $B:= \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}$ We have $$g(X,Y)=X^TBY=\left\langle X|BY\right\rangle=\left\langle BX|Y\right\rangle$$ where $\left\langle .|.\right\rangle $ is the dot product. If we now look at the self-adjoint endomorphism represented by $B$ , here is for example its effect on a figure: $C=(9,1)\mapsto C'=(10,8)$ ... I will not dwell further on this.
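Both matrix computations in this answer can be replayed in a few lines of plain Python (no libraries); note also that $B^2 = 2I$ , so $B/\sqrt2$ actually is a symmetry (a reflection):

```python
# Verify the computations: A^2 != I for A = [[1,3],[3,2]],
# while B = [[1,1],[1,-1]] satisfies B^2 = 2I, and B maps (9,1) to (10,8).
def matmul(m, n):
    return [[sum(m[i][k] * n[k][j] for k in range(len(n)))
             for j in range(len(n[0]))]
            for i in range(len(m))]

A = [[1, 3], [3, 2]]
B = [[1, 1], [1, -1]]

assert matmul(A, A) == [[10, 9], [9, 13]]   # as computed in the answer
assert matmul(A, A) != [[1, 0], [0, 1]]     # so A is not a symmetry
assert matmul(B, B) == [[2, 0], [0, 2]]     # B^2 = 2 * identity
assert matmul(B, [[9], [1]]) == [[10], [8]] # C = (9,1) maps to C' = (10,8)
```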
|
|linear-algebra|matrices|linear-transformations|symmetric-matrices|
| 0
|
How many possible pairs in a random 5-card poker hand?
|
This question is from Introduction to Probability, question 32b. I've seen the solution and I understand it clearly. $$\frac{\binom{13}{2}\times \binom{4}{2}\times \binom{4}{2} \times 44}{\binom{52}{5}}$$ The $\binom{13}{2}$ is from the different pairs of values I can have. The two $\binom{4}{2}$ is for the two same values from the possible four cards. The 44 is the fifth card I can have which is different from the previous four. I came up with my own answer (which is obviously wrong) but I can't understand why it doesn't equal to the above. My own answer is $$\frac{\frac{52}{2!} \times \frac{(52-4)}{2!} \times 44} {3!}$$ $\frac{52}{2!}$ is for the first possible card I can choose. Therefore, the second one must be the same. The 2! is because order doesn't matter. $\frac{52-4}{2!}$ is for the third possible card I can choose. Therefore, the fourth one must be the same. 44 is the rest of the card that I can choose. 3! is for the order of the two pairs and the last card doesn't matter. I
|
Following your route and avoiding mistakes I arrive at: $$\frac{\frac{52\times3}{2!}\times\frac{48\times3}{2!}\times44}{2!}=123552$$ which matches the book's numerator $\binom{13}{2}\binom{4}{2}^2\times 44$ . After choosing the first card of a pair there are $3$ ways to choose its "mate". You overlook that and speak of "the same". There are $2$ pairs, and to avoid double counting we must divide not by $3!$ but by $2!$ . The single card that does not belong to any pair plays no part in this double counting.
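As a check, both counting routes give the standard two-pair count (using `math.comb` , Python 3.8+):

```python
# Both counting routes give the same number of two-pair hands, 123552,
# out of C(52, 5) = 2598960 possible hands.
from math import comb

book = comb(13, 2) * comb(4, 2) ** 2 * 44
fixed = (52 * 3 // 2) * (48 * 3 // 2) * 44 // 2

assert book == fixed == 123552
assert comb(52, 5) == 2598960
```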
|
|probability|combinatorics|combinations|card-games|
| 0
|
Problem with triangle in a circle
|
We are given that $ABC$ is equilateral so $AB=BC=CA$ , and that the length of the circle is $24\pi$ . What is the area of the triangle? Since $24\pi=2\pi r$ we get that $r=12$ and $OB=OC=OA=r$ ; $OAB$ , $OAC$ , $OCB$ is isosceles and because $ABC$ is equilateral the area of the triangle is $\frac{a^2\sqrt{3}}{4}$ , how to find $AB=BC=CA=a$ ?
|
Let $M$ be the midpoint of $AB$ . Then, by Viviani's theorem , $|CM|=18$ . $\triangle CMB$ is right-angled, and so apply Pythagoras's theorem to find $$18^2 + \left(\frac{a}{2}\right)^2 = a^2$$ $$a^2=432$$ $$a=12\sqrt{3}$$
|
|geometry|triangles|circles|
| 0
|
C*-algebra generated by two normal operators
|
Let $N_1$ and $N_2$ be two distinct commuting normal elements. The $C^*$ -algebra generated by $N_1$ and $N_2$ is a commutative $C^*$ -algebra. So, it will be of the form $C(X)$ for some compact Hausdorff space $X$ . Is $X$ of the form $\sigma(N_1) \times \sigma(N_2)$ ? Thanks in advance for the help. I have mentioned two distinct normal elements after Polp's comment.
|
No, this is not the case in general, even if $N_1$ and $N_2$ are distinct and differ from the identity. For example, let $N_1$ be a normal operator on a finite dimensional space with two distinct eigenvalues and let $N_2$ denote the spectral projection associated with one of them. Note that $N_2 \in C^*(N_1)$ and so $$C^*(N_1, N_2) = C^*(N_1) \cong C(\sigma(N_1)).$$ However, $\sigma(N_2) = \{0,1 \}$ . Thus, the dimension of $C(\sigma(N_1) \times \sigma(N_2))$ is double that of $C(\sigma(N_1))$ .
|
|operator-algebras|
| 0
|
How to do integration $\int_0^1 \frac{1-2x}{x^2-x+1} \ln (1-x) dx$
|
As title . I tried to work it out $$\int_0^1 \frac{1-2x}{x^2-x+1} \ln (1-x) dx$$ But could not find a solution. Wolfram Alpha gives a solution using $Li(c)$ , but I’m not familiar with the special function, couldn’t understand it.
|
If you cannot use polylogarithms yet, use a series expansion around $x=1$ $$\frac{1-2 x}{x^2-x+1}= -\sum_{n=0}^\infty \left(\sqrt{3} \sin \left(\frac{2 \pi n}{3}\right)+\cos \left(\frac{2 \pi n}{3}\right)\right)\,(x-1)^n$$ Using one integration by parts $$\int (x-1)^n \log(1-x)\,dx=\frac{(x-1)^{n+1} ((n+1) \log (1-x)-1)}{(n+1)^2}$$ $$\int_0^1 (x-1)^n \log(1-x)\,dx=-\frac{(-1)^n}{(n+1)^2}$$ so your integral is $$\int_0^1 \frac{1-2x}{x^2-x+1} \log (1-x)\, dx=\sum_{n=0}^\infty (-1)^n \, \frac{\sqrt{3} \sin \left(\frac{2 \pi n}{3}\right)+\cos \left(\frac{2 \pi n}{3}\right) } {(n+1)^2}$$ Using the first $100$ terms, you get $0.548211$ while the exact solution is $0.548311$ .
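The partial sums can be checked directly (a sketch; 100 terms as in the text, compared against the closed form $\pi^2/18$ ):

```python
# Partial sums of the series versus the closed form pi^2 / 18.
import math

def term(n):
    ang = 2 * math.pi * n / 3
    return ((-1) ** n * (math.sqrt(3) * math.sin(ang) + math.cos(ang))
            / (n + 1) ** 2)

s100 = sum(term(n) for n in range(100))
exact = math.pi ** 2 / 18
assert abs(s100 - exact) < 1e-3   # ~0.5482 vs ~0.548311
```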
|
|integration|logarithms|
| 0
|
How to do integration $\int_0^1 \frac{1-2x}{x^2-x+1} \ln (1-x) dx$
|
As title . I tried to work it out $$\int_0^1 \frac{1-2x}{x^2-x+1} \ln (1-x) dx$$ But could not find a solution. Wolfram Alpha gives a solution using $Li(c)$ , but I’m not familiar with the special function, couldn’t understand it.
|
\begin{align} &\int_0^1 \frac{1-2x}{x^2-x+1} \ln (1-x)\overset{ibp}{ dx}\\ =& -\int_0^1\frac{\ln(1-x+x^2)}{1-x}\overset{x\to 1-x}{dx} =-\int_0^1\frac{\ln(1-x+x^2)}{x}{dx}\\ =& \int_0^1\frac{\ln(1+x)}{x}{dx}-\int_0^1\frac{\ln(1+x^3)}{x}\overset{x^3\to x}{dx}\\ =& \ \frac23\int_0^1\frac{\ln(1+x)}{x}{dx}=\frac{\pi^2}{18} \end{align} where $\int_0^1\frac{\ln(1+x)}{x}{dx}= \frac{\pi^2}{12}$ .
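The closed form $\frac{\pi^2}{18}\approx 0.548311$ is easy to sanity-check numerically. A minimal midpoint-rule sketch in Python (the step count is an arbitrary choice of mine; the midpoint rule conveniently never evaluates the integrable $\log$ singularity at $x=1$):

```python
import math

def integrand(x):
    return (1 - 2 * x) / (x * x - x + 1) * math.log(1 - x)

# midpoint rule on (0, 1): samples at (k + 1/2) * h avoid x = 1 exactly
N = 200_000
h = 1.0 / N
approx = h * sum(integrand((k + 0.5) * h) for k in range(N))

exact = math.pi ** 2 / 18
```

With this step count the midpoint estimate agrees with $\pi^2/18$ to a few decimal places, consistent with the series answer's $0.548311$.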
|
|integration|logarithms|
| 1
|
Symmetric property of symmetric matrices
|
Are symmetric matrices symmetric about a particular axis geometrically ? Or is the property defined only for the matrix representation of the linear transformation? If you consider a unit square, after transformation due to a symmetric matrix, is the resulting shape symmetric along any axis?
|
I'm pretty sure the word "symmetric" in "symmetric matrices" refers to what pancini talks about: the symmetry of the numbers in the matrix along the diagonal. However, (real) symmetric matrices have nice geometric properties, which you could arguably connect to a kind of geometric symmetry. Symmetric matrices are orthogonally diagonalisable, meaning that they admit an orthonormal basis of eigenvectors, and all their eigenvalues are real. Depending on where you are in your linear algebra studies, this may contain some concepts with which you may be unfamiliar, but let me try to describe what it means geometrically without going too deeply into the concepts. Imagine the unit ball in $\Bbb{R}^n$ , and we multiply each vector in the unit ball by a given matrix. Perhaps the ball will be stretched/compressed/flipped along an axis. Perhaps the matrix will rotate the ball. Perhaps there will be a skewing operation. Perhaps all of the above will happen. Either way, the resulting shape must
|
|linear-algebra|matrices|linear-transformations|symmetric-matrices|
| 0
|
Do matrices really rotate and stretch vectors or is that definition incorrect?
|
I come from the applied math and statistics world, but I was talking to my friend who comes from the pure math and number theory world—in particular Galois representation theory. I mentioned something about the confusing definition of "matrices" in textbooks. Many textbooks talk about a matrix as the solution to a linear system of equations, or other abstract descriptions. The definition that I have always found useful, was the sense that matrices rotate and scale vectors through linear transformations. Now, coming from the pure math side, my friend said that this definition was not accurate. I am trying to paraphrase some of his comments, but he said that in higher dimensions, matrices can stretch and rotate vectors only locally . He also said that it depends on what vectors the matrix acts upon. His response threw me for a loop and I was trying to understand how to resolve his statements. First, is my understanding of matrices incorrect? It is fine if this idea of rotating and stretc
|
First of all: "stretching and rotating" is not a definition, is intuition . The intuition on stretching and rotation you have is ok, although it depends a little on what you mean. All functions rotate and stretch vectors, that is not really the interesting thing; the point and the special feature of linear transformations is that they rotate and stretch vectors in a sort of "uniform" and "rigid" way. Like all intuition, one needs to question it and take it with a grain of salt. Let's consider the intuition "matrices stretch and rotate vectors in a rigid/uniform way". For example you might ask: does it mean that you can always write a matrix as a composition (i.e., product) of a pure rotation(/reflection) matrix and a pure stretching matrix? The answer (for square matrices) is indeed yes, and this is precisely the Polar decomposition of matrices . For rectangular matrices there are similar decompositions as well, like the singular value decomposition . The fact that these decompositions
|
|linear-algebra|abstract-algebra|matrices|representation-theory|
| 0
|
Shortest distance between vertex of a circular cone and a quarter of its conical helix
|
I was given the question below: A circular cone with vertex C has a point A on the circumference of its base and point B on the segment AC, as shown in the diagram below. The shortest possible rope is wrapped once around the cone such that it starts at point A and ends at point B. Assume that the base diameter and the slant height of this cone are 6 cm and 12 cm, respectively. If AB = 3 cm and D is the point on the rope that is closest to C, what is the length, in cm, of CD? Instinctively, by just looking at the diagram, I believe that B is the point on the rope that is closest to C. Therefore D is the same as B, and CD = 12 - 3 = 9. However, this looks too simple to be true, and it doesn't make much sense to have two points overlap. Furthermore, I found this question from this website , where the 15th question of EMIC is exactly the question I was tasked with, and the solution claims that it is 7.2. The value of 7.2 makes no sense to me, so I tried to attempt the question again, b
|
The trick is to unwrap the side of the cone. Since the slant height is $AC=12$ , you get a circular sector with the same radius. To find the angle: the arc length equals the base circumference, $2\pi \cdot\frac 62=6\pi$ . The angle of the circular sector is then $\frac{6\pi}{12}=\frac{\pi}2$ . So point $A$ is at a distance $12$ from $C$ , and point $B$ is at a distance $9$ from $C$ . See the figure below: It should be obvious that the shortest path for the rope is the straight line $AB$ . You notice that the closest point to $C$ is the foot of the perpendicular from $C$ to $AB$ . Since $\triangle ABC$ is a right-angled triangle at $C$ , we can write its area in two ways: $$\frac12 AB\cdot CD=\frac12 CB\cdot CA$$ Can you take it from here?
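For completeness, finishing the computation: $AB=\sqrt{9^2+12^2}=15$, so $CD=\dfrac{CB\cdot CA}{AB}=\dfrac{9\cdot 12}{15}=7.2$, matching the value cited in the question. A one-liner confirms the arithmetic:

```python
import math

CA, CB = 12.0, 9.0           # legs of the right triangle (the angle at C is 90°)
AB = math.hypot(CA, CB)      # hypotenuse = the unrolled rope
CD = CA * CB / AB            # altitude from C onto AB, from the two-areas identity
```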
|
|geometry|3d|parametric|
| 1
|
Equivalent definitions of a continuous function.
|
I want to show the following two definitions of a continuous function are equivalent and was wondering if my proof is valid. In particular, I felt that (2) -> (1) was much simpler than (1) -> (2). First, let $X$ and $Y$ be top. spaces with the function $f \colon X \to Y$ . Definition 1. If $U$ is open in $Y$ then $f^{-1}(U)$ is open in $X$ . Definition 2. Let $A \subset X$ and $p$ be a limit point of $A$ . Then $f(p) \in \text{cl}({f(A)})$ (1) -> (2) If $p \in A$ then $f(p) \in f(A)$ so assume $p \notin A$ . If $f(p) \in f(A)$ we are done so additionally assume $f(p) \notin f(A)$ . Take $U$ to the be an open neighborhood of $f(p)$ . Then by (1) $f^{-1}(U), $ is the open neighborhood of p. Since $p$ is a limit point of $A$ , there is a point $q \in f^{-1}(U) \cap A$ . Then $f(q) \in U \cap f(A)$ . We assumed $f(p) \notin f(A)$ so it follows $f(p) \ne f(q)$ . Hence, $(U - \{f(p)\}) \cap f(A) \ne \varnothing$ . Thus, $f(p)$ is a limit point of $f(A)$ and $f(p) \in \text{cl}({f(A)})$ . (2)
|
The proof seems correct. The reason (2) $\implies$ (1) looks simpler than the converse is that you use the "easy to show by taking complements" reduction to closed sets in that direction. Using the same reduction, i.e. assuming $f^{-1}(\text{closed})$ is closed, and supposing $p$ is a limit point of $A$ , $f^{-1}(\operatorname{cl}(f(A)))$ is a closed set containing $A$ and therefore contains $p$ , which is to say $f(p) \in \operatorname{cl}(f(A))$ . Note in general that even if two statements are equivalent, the difficulty of going from one to another need not be symmetric. For an artificial example, if $P \implies Q$ is a "hard to prove" statement, then $P$ is equivalent to " $P \text{ and } Q$ ", but one of the directions is the same as $P \implies Q$ and the other direction is trivial.
|
|general-topology|solution-verification|
| 1
|
Lognormal linear regression
|
I would like to fit a linear regression model with a lognormal distribution that has a linear expectation in the $X$ , $Y$ space. So, I need a model with the following properties: $$ Y|X \sim Lognormal $$ $$ E[Y|X] = X*β^T $$ Can you offer any suggestions? Thank you in advance.
|
If you're looking for software to fit this model, you can use R with the logNormReg package . Example: library(logNormReg) # covariate x If you're looking for mathematical details, look at the references given in the above link.
|
|statistical-inference|linear-regression|
| 1
|
If $Y_n \rightarrow Y$ in distribution then $P(Y \geq M) \geq \liminf_{n\rightarrow \infty} P(Y_n \geq M)$ for $M \in \mathbb{R}$
|
Let $Y_n$ be a sequence of real random variables converging in distribution to a random variable $Y$ and let $M \in \mathbb{R}$ be a fixed real number. I would like to prove that $P(Y \geq M) \geq \liminf_{n\rightarrow \infty} P(Y_n \geq M)$ . This seems like it should be very easy to prove since if $x_n \rightarrow x$ then $x \geq \liminf_{n\rightarrow \infty} x_n$ for any convergent sequence of real numbers. But since we only have convergence in distribution and the limit is outside the measure $P$ I am having some trouble formalizing everything. How can one prove this?
|
Actually you have something stronger, in that $\limsup_{n\to\infty}P(Y_{n}\geq M)\leq P(Y\geq M)$ . In fact, you have for all closed sets $K$ that $\limsup_{n\to\infty}P(Y_{n}\in K)\leq P(Y\in K)$ , and this is both necessary and sufficient for convergence in distribution. Here's an easy way to prove this by extending your idea that it would have been easier if we had almost sure convergence . Use Skorokhod's Representation Theorem (also Theorem $3.2.8$ in Durrett) to find a sequence $X_{n}$ and a random variable $X$ such that $X_{n}\xrightarrow{a.s.}X$ , $X_{n}=Y_{n}$ in distribution, and $X=Y$ in distribution. Now, notice that $\liminf_n\mathbf{1}_{\{X_{n}<M\}}\geq \mathbf{1}_{\{X<M\}}$ almost surely: if for an $\omega\in\Omega$ (or more precisely, an $\omega$ from the almost sure set on which convergence holds pointwise) we have $X(\omega)<M$ , then $X_{n}(\omega)<M$ for all large enough $n$ by basic properties of real sequences, and hence $\liminf_n\mathbf{1}_{\{X_{n}<M\}}(\omega)=1$ . Now by the above and Fatou's lemma, $\liminf_n P(X_n<M)\geq P(X<M)$ , which rearranges to $\limsup_n P(Y_n\geq M)\leq P(Y\geq M)$ .
|
|real-analysis|probability|probability-theory|measure-theory|convergence-divergence|
| 0
|
Derive the Derivative of a Lagrange Multiplier Equation
|
Sorry for a tedious question, but I am really eager to know how to derive these equations. I'm reading a paper and got stuck differentiating a Lagrange multiplier equation. The given Lagrangian is $$G(X) = Q(X) + \sum_{j=1}^m \pi_{j}\left( -\sum_{i} a_{ij}x_{i} + b_{j}\right) \qquad (22)$$ Here, $\pi_{j}$ is the Lagrange multiplier and $Q(X)$ is the quadratic approximation of $F(X)$ : $$Q(X) = F(X)\Big\rvert_{X=Y} + \sum_{i} \left.\frac{\partial F}{\partial x_{i}}\right\rvert_{X=Y} \Delta_i + \frac 12 \sum_i \sum_k \left.\frac{\partial^2 F}{\partial x_i \partial x_k}\right\rvert_{X=Y} \Delta_i \Delta_k \qquad (21)$$ with $\Delta_i = x_i - y_i$ . Here $x_i$ is the improved value (mole number here) and $y_i$ is the initial value. $$F(Y) = \sum_{i=1}^n y_i \left[c_i + \ln {\frac {y_i}{\bar y}} \right] \qquad (20)$$ $$f_i (Y) = y_i \left[ c_i + \ln \frac{y_i}{\bar y} \right] \qquad (19)$$ And $c_i = \frac {g_i^0 (T)}{RT} + \ln P$ is treated as a constant.
|
Thank you @whpowell96 for giving me the courage to tackle this problem. 1. First derivative of $F(X)$ $$F(X) = \sum_{i=1}^{n} x_i \left\{ c_i + \ln \frac{x_i}{\bar x} \right\} = \sum_{i=1}^{n} c_i x_i + \sum_{i=1}^{n} x_i \ln x_i - \sum_{i=1}^{n} x_i \ln \bar x $$ $$\frac{\partial F(X)}{\partial x_i} = \frac{\partial}{\partial x_i} \left\{ \sum_{i=1}^{n} c_i x_i \right\} + \frac{\partial}{\partial x_i} \left\{ \sum_{i=1}^{n} x_i \ln x_i \right\} - \frac{\partial}{\partial x_i} \left\{ \sum_{i=1}^{n} x_i \ln \bar x \right\}$$ There are three terms on the right-hand side of the equation. The 1st term: $$\frac{\partial}{\partial x_i} \left\{ \sum_{i=1}^{n} c_i x_i \right\} = \frac{\partial}{\partial x_i} \left\{ c_i x_i \right\} = c_i$$ because any term $c_j x_j$ with $j \neq i$ differentiates to zero. The 2nd term: $$\frac{\partial}{\partial x_i} \left\{ \sum_{i=1}^{n} x_i \ln x_i \right\} = \frac{\partial}{\partial x_i} \left\{ x_i \ln x_i \right\} = \frac{\partial x_i}{\partial x_i} \ln x_i + x_i \frac{\partial \ln x_i}{\partial x_i} = \ln x_i + 1$$
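The gradient this derivation is heading toward, $\frac{\partial F}{\partial x_i} = c_i + \ln\frac{x_i}{\bar x}$ with $\bar x = \sum_i x_i$ (the $+1$ from the second term cancels against the third term), can be checked against central finite differences. A small sketch with arbitrary test values:

```python
import math

def F(x, c):
    """F(X) = sum_i x_i * (c_i + ln(x_i / xbar)), with xbar = sum_i x_i."""
    xbar = sum(x)
    return sum(xi * (ci + math.log(xi / xbar)) for xi, ci in zip(x, c))

def grad_analytic(x, c):
    # dF/dx_i = c_i + ln(x_i / xbar); the +1 from d(x_i ln x_i)/dx_i
    # cancels against the derivative of the third term
    xbar = sum(x)
    return [ci + math.log(xi / xbar) for xi, ci in zip(x, c)]

def grad_fd(x, c, h=1e-6):
    # central finite differences, one coordinate at a time
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((F(xp, c) - F(xm, c)) / (2 * h))
    return g

x = [0.5, 1.2, 2.0]    # arbitrary positive "mole numbers"
c = [0.1, -0.3, 0.7]   # arbitrary constants c_i
```

The two gradients agree to several decimal places, which supports the hand computation above.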
|
|partial-differential-equations|
| 0
|
Proving the Euler totient function (product formula)
|
Let $x\geq 1$ and $\phi(x)$ be the Euler totient function. I want to show that $$\phi(x) = x\prod_{p| x}\bigg(1 - {1\over p}\bigg)$$ First, I looked at the case $x$ is prime. If $x$ is prime, then the only such $p$ that divides $x$ is $1$ , so $x-1$ numbers are relatively prime to $x$ . So $\phi(x) = x-1$ in this case. I then tried to apply induction on $x$ . The first case $\phi(1) = 1$ holds. So, suppose that $\phi(x) = x\prod_{p| x}\big(1 - {1\over p}\big)$ holds. Then show that $\phi(x+1) = (x+1)\prod_{p| {x+1}}\big(1 - {1\over p}\big)$ If $x+1$ is prime, then $\phi(x+1) = x$ . Suppose that $x+1$ is not prime. Then the number of integers coprime to $x$ is given by the product formula, and since $\gcd(x, x+1) = 1$ for all $x\geq 1$ , then $x$ is counted as well. So $$\phi(x+1) = x\prod_{p| x}\bigg(1 - {1\over p}\bigg) + 1$$ Which implies $\phi(x+1) = \phi(x) + 1 = \phi(x) + \phi(1)$ . So it seems I want to show that in the case $x+1$ is not prime, $$\phi(x+1) = x\prod_{p| x}\bigg(1
|
By definition, Euler's totient function counts the positive integers up to a given integer $n\in \mathbb{N}$ that are relatively prime to $n$ . Consequently, if $A = \{1,\cdots, n \}$ , then $\phi (n) = | \{ a\in A : \gcd(a,n)=1 \}|$ . Which elements of the set $A$ share a factor with $n$ ? If $ n = \prod_{i=1}^r p_i^{\alpha_i}$ is the prime factorization of $n$ (that is, $p_1,\dots,p_r$ are distinct prime numbers), then $p_i \mid n$ . Therefore, $\phi(n)$ does not count these primes, nor any element up to $n$ divisible by one of them. Thus, we need to count the elements up to $n$ that are not divisible by any of the numbers $p_1,\dots,p_r$ (i.e. $\phi(n)$ ). Thus, if $P_i = \{ p_i, 2p_i, 3p_i, \cdots, \frac{n}{p_i}p_i \}$ (obviously, $|P_i| = \frac{n}{p_i}$ ), then $$\phi(n) = \left| A \setminus \left( \bigcup_{i=1}^r P_i \right) \right|.$$ Let's find this cardinality by induction on $i$ . Let us denote by $A_1 = A\setminus P_1$ the set of elements of $A$ that are not divisible by $p_1$ . Then $$|A_1| = n - \frac{n}{p_1} = n\left(1-\frac{1}{p_1}\right).$$
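The product formula that falls out of this inclusion–exclusion argument is easy to check by brute force; a small stdlib-only sketch:

```python
from math import gcd

def phi_bruteforce(n):
    # count 1 <= a <= n with gcd(a, n) = 1, straight from the definition
    return sum(1 for a in range(1, n + 1) if gcd(a, n) == 1)

def phi_product(n):
    # n * prod_{p | n} (1 - 1/p), kept in exact integer arithmetic
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            result -= result // p        # multiply result by (1 - 1/p)
            while m % p == 0:
                m //= p
        p += 1
    if m > 1:                            # leftover prime factor of n
        result -= result // m
    return result
```

For example, $\phi(360)=360\cdot\frac12\cdot\frac23\cdot\frac45=96$, and both functions agree on that.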
|
|combinatorics|totient-function|
| 0
|
Show that $\max$ function on $\mathbb R^n$ is convex
|
I am reading the book Convex Optimization , and I don't understand why a $\max$ function is convex. The function is defined as: $$f(x) = \max(x_1, x_2, \dots, x_n)$$ The book offers the proof shown below: for $0 \leq \theta \leq 1$ $$\begin{aligned} f(\theta x + (1 - \theta)y) &= \max_i \left( \theta x_i + (1 - \theta)y_i \right)\\ & \leq \theta \max_i x_i + (1 - \theta)\max_i y_i\\ &= \theta f(x) + (1 - \theta)f(y) \end{aligned}$$ However, I don't understand why the following inequality holds. $$\max_i (\theta x_i + (1 - \theta)y_i) \leq \theta \max_i x_i + (1 - \theta)\max_i y_i$$
|
$ \forall x_i, y_j \in \mathbb{R} , \forall \space i,j=1,2, $ $$ \begin{align} \max(x_1, x_2) + \max(y_1, y_2) &= \max(x_1 + y_1, x_1 + y_2, x_2 + y_1, x_2 + y_2) \\ &≥ \max(x_i + y_j). \end{align} $$ $ \forall x_i, y_j \in \mathbb{R} , \forall \space i,j=1,...,n, $ $$ \begin{align} \max(x_i) + \max(y_j) &= \max( \underbrace{x_1 + y_1, x_1 + y_2,...,x_n + y_n}_{\text{n×n terms}} ) \\ &≥ \max(x_i + y_j). \end{align} $$ Since $ \max(\theta x) = \theta \max(x) $ , $$ \begin{align} \theta \max(x_i) + (1-\theta) \max(y_j) &= \max(\theta x_i) + \max((1-\theta)y_j) \\ &= \max( \underbrace{\theta x_1 + (1-\theta)y_1, \theta x_1 + (1-\theta)y_2,..., \theta x_n + (1-\theta)y_n}_{\text{n×n terms}} ) \\ &≥ \max(\theta x_i + (1-\theta)y_j). \end{align} $$ That is $$ \max \left( \theta x_i + \left( 1 - \theta \right) y_j \right) ≤ \theta \max(x_i) + (1-\theta) \max(y_j). $$ So $\max$ function is convex.
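The key inequality can also be spot-checked numerically; a quick randomized sketch (seeded for reproducibility, dimensions and values chosen arbitrarily):

```python
import random

random.seed(0)

def check_once():
    n = random.randint(1, 6)
    x = [random.uniform(-10, 10) for _ in range(n)]
    y = [random.uniform(-10, 10) for _ in range(n)]
    t = random.random()  # theta in [0, 1]
    lhs = max(t * xi + (1 - t) * yi for xi, yi in zip(x, y))
    rhs = t * max(x) + (1 - t) * max(y)
    return lhs <= rhs + 1e-12  # tolerance for floating-point rounding

results = [check_once() for _ in range(1000)]
```

Every random trial satisfies $\max_i(\theta x_i + (1-\theta)y_i) \le \theta\max_i x_i + (1-\theta)\max_i y_i$, as the proof guarantees.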
|
|convex-analysis|
| 0
|
PDF Random Variable Statement
|
Is it true that if the probability density function of a continuous random variable is an even function, then the continuous random variable is symmetric?
|
Yes, this is true. Consider a probability density function $p(x)$ that is even. By definition, the probability density function is symmetric if there exists $x_0$ such that $p(x_0+\delta)=p(x_0-\delta)$ for all real numbers $\delta$ . Take $x_0=0$ . Then it follows immediately that if $p$ is even, then $p(\delta)=p(-\delta)$ for all real $\delta$ . So the probability density function is symmetric (about $0$ ).
|
|probability|probability-theory|statistics|probability-distributions|statistical-inference|
| 1
|
If a sequence $ \{ a_n \} $ is contained in a finite union of sets, then there is a subsequence of $ \{ a_n \} $ contained in only one of the sets.
|
Edit : It has been pointed out that the title is stated ambiguously, please refer to the post by @Prem as to why. The title should actually be: If a sequence $ \{ a_n \} $ is contained in a finite union of sets, then there is a subsequence of $ \{ a_n \} $ contained entirely in one of the sets. In other words, if $ \{a_{n_i} \}$ is the desired subsequence, then there is some $ 1 \leq k \leq m $ such that $ a_{n_i} \in E_k $ for all $ i $ where $ \{n_i\} $ is the index selection sequence. I'm working on a proof in which I would like to use the following result: Suppose that a sequence $ \{ a_n \} $ is contained in a finite union of sets $ E_1 \cup \dots \cup E_m $ , then there exists a subsequence $ \{ a_{n_i} \} $ of $ \{ a_n \} $ contained entirely in one of the $ E_k $ where $ 1 \leq k \leq m $ . Here $ \{n_i\} $ is the index selection sequence. However, I have trouble finding a proof of this result in any of my textbooks. I've looked around the internet as well but haven't found any
|
For a rigorous proof one should consider the sets of indices $I_1, I_2, \ldots, I_m$ defined by $$ I_k = \{ n \in \Bbb N \mid a_n \in E_k \} \, . $$ Then $I_1 \cup I_2 \cup \cdots \cup I_m = \Bbb N$ , so at least one of these sets contains infinitely many elements, say $I_{K}$ . Then there exists a strictly increasing bijection $f: \Bbb N \to I_K$ (enumerate $I_K$ in increasing order), and $\{ a_{f(n)} \}$ is a subsequence of $\{ a_n\}$ with all elements contained in $E_K$ .
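The proof is constructive, and the construction is easy to mirror in code. A small sketch on a finite prefix, with three cover sets $E_k$ chosen purely for illustration:

```python
# an illustrative sequence with values covered by E_0 ∪ E_1 ∪ E_2
a = [(3 * n + n % 5) % 9 for n in range(60)]
E = [{0, 1, 2}, {3, 4, 5}, {6, 7, 8}]

# index sets I_k = { n : a_n in E_k }, exactly as in the proof
I = [[n for n in range(len(a)) if a[n] in E[k]] for k in range(len(E))]

# some I_K must be infinite; on this finite prefix, take the largest one
K = max(range(len(E)), key=lambda k: len(I[k]))
sub_indices = I[K]                     # already strictly increasing
subsequence = [a[n] for n in sub_indices]
```

The selected indices are strictly increasing and every selected term lies in the single set $E_K$, which is the content of the claim.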
|
|real-analysis|sequences-and-series|general-topology|solution-verification|
| 0
|
How to compute the different ideal of the cyclotomic field extension $\mathbb{Q}(\zeta_p)/\mathbb{Q}$?
|
Let $p$ be a prime number, $K=\mathbb{Q}(\zeta_p)$ be the cyclotomic field extension of $\mathbb{Q}$ by adding a $p$ -th root of unity. There is a notation called different ideal , which is defined to measure the (possible) lack of duality in the ring of integers. I want to know what is the different ideal of the cyclotomic field extension $K/\mathbb{Q}$ ?
|
It's well-known that $p$ is totally ramified in $K$ and that it is the only ramified prime in this extension. The unique prime ideal above $p$ is $(1-\zeta_p)$ , so we only need to figure out the exponent of $(1-\zeta_p)$ . The norm of the different is the (absolute value of the) discriminant. The discriminant in this case is also well-known: we have $|\Delta_{K/\Bbb Q}|=p^{p-2}$ . Now if $\mathfrak{d}_{K/\Bbb Q}=(1-\zeta_p)^m$ , then using $N((1-\zeta_p))=p$ we get $p^{p-2}=|\Delta_{K/\Bbb Q}|=N(\mathfrak{d}_{K/\Bbb Q})=p^m$ So the exponent $m$ is $p-2$ and we have $\mathfrak{d}_{K/\Bbb Q}=(1-\zeta_p)^{p-2}$ . Another approach: If the ring of integers of a number field is $\Bbb Z[\alpha]$ with $\alpha$ having minimal polynomial $f$ , then the different is generated by $f'(\alpha)$ . Here $\alpha=\zeta_p$ , $f=\Phi_p$ . We have $(X-1)\Phi_p(X)=X^p-1$ . Differentiating yields $$(X-1)\Phi'_p(X)+\Phi_p(X)=pX^{p-1}$$ Plugging in $\zeta_p$ gives $$(\zeta_p-1)\Phi'_p(\zeta_p)=p\zeta_p^{p-1}$$ The factor $\zeta_p^{p-1}$ is a unit, so $\mathfrak{d}_{K/\Bbb Q}=(\Phi'_p(\zeta_p))=\left(\frac{p}{\zeta_p-1}\right)=(1-\zeta_p)^{p-2}$ , using $(p)=(1-\zeta_p)^{p-1}$ .
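As a numerical sanity check on the second approach (my own addition, for the small case $p=5$): the norm of $\Phi_p'(\zeta_p)$, computed as the product over all complex conjugates, should have absolute value $|\Delta_{K/\Bbb Q}| = p^{p-2} = 125$.

```python
import cmath

p = 5

def dPhi(z):
    # derivative of Phi_p(X) = 1 + X + X^2 + ... + X^(p-1)
    return sum(k * z ** (k - 1) for k in range(1, p))

# the primitive p-th roots of unity are the conjugates of zeta_p
roots = [cmath.exp(2j * cmath.pi * k / p) for k in range(1, p)]

norm = 1
for z in roots:
    norm *= dPhi(z)   # the field norm is the product over all conjugates
```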
|
|algebraic-number-theory|galois-extensions|cyclotomic-fields|
| 1
|
Precise axiomatic definition for the equality "=" as a binary relation
|
Question: What is a simple yet precise definition for "=" as a binary relation? My try: I find two definitions for "equality relation" which seem to be contradictory. The first one I learnt in school is that equality is what is called an equivalence relation , that is, it satisfies three axioms: Reflexivity, Symmetry, Transitivity. The second definition contains an axiom of extensionality. The third definition I heard contains this additional axiom: $x=y$ implies $P(x)=P(y)$ Thanks to the comments, I also learnt a definition using first-order logic: These equality axioms are: Reflexivity. Substitution for functions. Substitution for formulas. This is close to what I am looking for. However, this definition looks odd, as the second axiom is a special case of the third, and axiomatic systems usually don't use redundant axioms.
|
Re your addendum to the question . Yes, the two basic axioms for equality are reflexivity : $x=x$ , and substitution for formulas : $s = t \to (\varphi [s/z] \to \varphi [t/z])$ . Note that, using Universal instantiation , we have that reflexivity holds for arbitrary terms: $t=t$ . Now, if $f$ is a function symbol (assuming for simplicity that it is a unary function symbol), from reflexivity we have $f(x)=f(x)$ . Thus, since $f(x)=f(x)$ is $(z=f(x))[f(x)/z]$ and $f(y)=f(x)$ is $(z=f(x))[f(y)/z]$ , we can use substitution to get: $x=y \to f(x)=f(y)$ . The substitution property is crucial for equality : if a binary relation has reflexivity, symmetry and transitivity but lacks the substitution property, it is merely an equivalence relation .
|
|logic|set-theory|first-order-logic|relations|axioms|
| 1
|