| qid | question | author | author_id | answer |
|---|---|---|---|---|
1,285,213 | <p>Let $f\in P_2(\mathbb R)$, the space of second-order polynomials with real coefficients, and let the linear operator $T$ be defined as $T[f(x)] = f(0)+f(1)(x+x^2)$.</p>
<p>Is $T$ diagonalizable? If so, find a basis $\beta$ of $P_2(\mathbb R)$ in which $[T]_\beta$ is a diagonal matrix.</p>
| math.n00b | 135,233 | <p>You can write the operator in matrix form by considering the action of this operator on $1$, $x$ and $x^2$, which is the standard basis for second-order polynomials. Then all you need to do is to deal with the eigenvectors of that matrix.</p>
|
1,907,743 | <p>I'm having trouble with a step in a paper which I believe boils down to the following inequality:
$$
\left\| \sum_{k\in\mathbb{Z}} f(\cdot+k) \right\|_{L^2(0,1)}
\leq c \|f\|_{L^2(\mathbb{R})}.
$$
I haven't come up with many ideas. Hitting the left-hand side with Minkowski, for example, produces something which can exceed $\|f\|_{L^2(\mathbb{R})}$.</p>
<p>I also put a bit of effort this afternoon into falsifying the above inequality (it may be that I'm misunderstanding the omitted steps in the original paper). Begin with a power series $g(x)=\sum a_kx^k$ which has a local $L^2$ singularity. It's then not too hard to use this representation to construct $f$ satisfying
$$
\sum_{k\in\mathbb{Z}} f(x+k) = g(x).
$$
However, the few times I attempted this did not result in an $L^2(\mathbb{R})$ function.</p>
<p>Any help one way or the other is appreciated.</p>
| Calvin Khor | 80,734 | <p>Inspiration from the Shannon sampling theorem: if you assume $\mathcal F_{\Bbb R} f$ is Schwartz with $\text{supp}\ f$ contained in $(-1/2,1/2)$ then $$ LHS^2 = \sum_k |\mathcal F_{\Bbb R}f(k)|^2 = \sum_k \int_{\Bbb R} f(y) e^{-2\pi i ky} \ \text dy = \sum_k ∫_{-1/2}^{1/2} f(y) e^{-2\pi i ky} \ \text dy = \sum_k |\mathcal F_{\Bbb T}\tilde f(k)|^2 = ‖\tilde f‖^2_{L^2(\mathbb T)}= ‖f‖^2_{L^2(-1/2,1/2)} = ‖f‖_{L^2(\Bbb R)}^2 $$
The first equality is Poisson summation, $\sum_k f(x+k) = \sum_k \mathcal F_\Bbb R f(k) e^{-2\pi i k x}$. Also, $\tilde f ∈ C^∞ (\Bbb T)$ is the 1-periodic extension of $f∈ C_c^∞(-1/2,1/2) $.</p>
<p>Unfortunately the assumption on the support of $f$ means that there is only one summand in LHS. I'll just leave this here anyway.</p>
|
1,701,935 | <p>I've been experimenting with recursive sequences lately and I've come up with this problem:</p>
<blockquote>
<p>Let $a_n= \cos(a_{n-1})$ with $a_0 \in \Bbb{R}$ and $L=[a_1,a_2,...,a_n,...].$
<br><br>Does there exist an $a_0$ such that $L$ is dense in $[-1,1]?$ </p>
</blockquote>
<p><br><br> I know of $3$ ways of examining whether a set is dense:
<br><br>$i)$ The definition, that is, whether its closure is the set in which it is dense; in our case this means: $\bar L=[-1,1]$.
<br><br>$ii)$ $(\forall x \in [-1,1])(\forall \epsilon>0)(\exists b \in L):|x-b|<\epsilon$
<br><br>$iii)$ $(\forall x \in [-1,1])(\exists (b_n) \subseteq L):b_n\rightarrow x$ </p>
<p>So far I haven't been able to use these to answer the question. I tried plugging in different values of $a_0$ and seeing where that leads, but I have not found any promising "pattern" for $a_n$. Any ideas on how to approach this?</p>
| Singh | 83,768 | <p>By continuity of the cosine function: $a_{n-1}\in[-1,1]$ for every $n\ge 2$, and $\cos x\ge\cos 1>0$ on $[-1,1]$, so $a_n=\cos(a_{n-1})>0$ for all $n\ge 2$. Hence $L$ contains at most one point of the open set $[-1,0)$ (namely $a_1$), and so $L$ cannot be dense in $[-1,1]$. </p>
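In fact more is true: a quick numerical experiment (mine, not part of the answer) shows the iterates converge to the unique fixed point of cosine (the Dottie number, about $0.739085$) regardless of $a_0$, so $L$ has a single limit point and is very far from dense.

```python
import math

# Iterate a_n = cos(a_{n-1}) from several starting points; every orbit
# converges to the Dottie number, the unique fixed point of cos.
def orbit(a0, n):
    a = a0
    for _ in range(n):
        a = math.cos(a)
    return a

limits = [orbit(a0, 200) for a0 in (-10.0, 0.0, 1.5, 100.0)]
print(limits)  # four values, all close to 0.7390851332...
```

Convergence follows because $|\cos'| = |\sin| \le \sin 1 < 1$ on $[-1,1]$, so the map is a contraction after the first step.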
|
16,725 | <p>I was directed just now to a post with the following abbreviated time-line:</p>
<ul>
<li>Question was posted <strong>21</strong> hours ago</li>
<li>Question was closed as "unclear what you are asking" <strong>19</strong> hours ago</li>
<li>Question was deleted by the votes of three 10K users <strong>4</strong> hours ago. </li>
</ul>
<p>Yes, I get it: the original question is very vague and it was not at all clear what the OP is asking. But there were some users commenting and trying to help the OP formulate it into a mathematical question. The whole purpose of having posts put "On-Hold" versus "Closed" is that we are supposed to give new users a chance to edit their questions to a form that fits the community norm. This rapid-fire deletion runs entirely contrary to that. </p>
<p><strong>Request</strong>: Can we please be a little bit more generous about deleting bad posts? </p>
<p>It is one thing to delete a low-quality orphan which the OP abandoned, but I feel that over-zealous deletions of recent posts (which are not obviously spam or offensive) is unfair to the new users and creates an unwelcoming atmosphere. </p>
<p>Worse, the OP is now essentially deprived of a chance of learning from his mistakes: if he cannot see the comments he cannot know why his earlier question was closed and deleted<sup>1</sup>! And sure enough my attention was brought to this question because the OP simply posted his question again<sup>2</sup>, identically to the original version that was closed and deleted. So the net effect is that the deletion of the original post is <em>counterproductive</em> to our goal of having clear, well-formed questions on this site. </p>
<hr>
<p><sup>1</sup> A user can see his <a href="https://meta.stackexchange.com/questions/185491/what-is-the-deleted-recent-questions-page-in-the-user-profile">own recently deleted question</a>, though it may not be immediately obvious to new users how to do that. A user can also see his own deleted question at any time provided that the user saved the URL. </p>
<p><sup>2</sup> I should also add that footnote 1 notwithstanding, in the case that caught my attention the original poster used an unregistered account for the first post. This made it additionally difficult to see the comments on the deleted post. </p>
| stackErr | 59,787 | <p>Suggestion: Can we have a time limit (say 48-72 hours) and an edit limit (say 5 edits) on a question that is on hold, such that if either limit is crossed then the question can be deleted? If neither is crossed then the question cannot be deleted unless marked as spam/offensive/trolling/jokes (Thursday's suggestion).</p>
<p>I think this will allow new users to edit and improve their questions.</p>
|
1,905,898 | <p>The graph of a quadratic function is drawn on the interval $-1\leq x\leq 5$.</p>
<p><a href="https://i.stack.imgur.com/0eOzE.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0eOzE.jpg" alt="enter image description here"></a></p>
<p>i. If the quadratic function is $y=(x-2)^2-k$, find the value of $k$.</p>
<p>ii. For this value of $k$, find the roots of the equation $(x-2)^2-k=0$.</p>
<p>Any Ideas on how to begin?</p>
| Steve Suh | 362,486 | <p>If $\gamma<\alpha$, $\gamma$ is not an upper bound for L (since $\alpha$ was the <strong>least</strong> upper bound). It is written above in italics that every element in B must be an upper bound for L. Thus $\gamma$ is not in B.</p>
<p>We know that $\alpha \le x$ for every $x \in B$ because B is a set of upper bounds (from the italics again) of L, and $\alpha$ is the <strong>least</strong> upper bound for L.</p>
<p>Also, $\alpha$ being an upper bound for L literally means that every element in L is less than or equal to $\alpha$. Thus if some $\beta$ is greater than $\alpha$, $\beta$ is not an element of L.</p>
|
1,905,898 | <p>The graph of a quadratic function is drawn on the interval $-1\leq x\leq 5$.</p>
<p><a href="https://i.stack.imgur.com/0eOzE.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0eOzE.jpg" alt="enter image description here"></a></p>
<p>i. If the quadratic function is $y=(x-2)^2-k$, find the value of $k$.</p>
<p>ii. For this value of $k$, find the roots of the equation $(x-2)^2-k=0$.</p>
<p>Any Ideas on how to begin?</p>
| fleablood | 280,126 | <p>L is the set of lower bounds of B.</p>
<p>Let $l\in L;b\in B $. As $l $ is a lower bound of $B $, $l\le b $ and therefore every element of $B $ is an upper bound of $L $.</p>
<p>So if $\gamma < \sup L $, $\gamma$ is not an upper bound of $L $ and thus not an element of $B $.</p>
<p>As for the second part: $\alpha = \sup L $, so $\alpha $ is an upper bound of $L $. So $\alpha \ge l\ \forall l\in L$. So if $\beta > \alpha \ge l\ \forall l\in L $, then $\beta $ can't be in $L $ because it is larger than every member of $L $.</p>
|
384,700 | <p>This question addresses a hierarchy of linear recurrences
which arise from an attempt to generalize the Nekrasov-Okounkov
formula to the Young-Fibonacci setting.
A related posting</p>
<p><a href="https://mathoverflow.net/questions/384591/extensions-of-the-nekrasov-okounkov-formula">extensions of the Nekrasov-Okounkov formula</a></p>
<p>asks how one might try to extend the Nekrasov-Okounkov formula
by replacing the Plancherel measure on the Young lattice <span class="math-container">$\Bbb{Y}$</span>
with another ergodic, central measure.
In this discussion, I want to instead replace the Young lattice <span class="math-container">$\Bbb{Y}$</span>
by the Young-Fibonacci lattice <span class="math-container">$\Bbb{YF}$</span> which comes equipped with its own <em>Plancherel measure</em> in virtue of being a <span class="math-container">$1$</span>-differential poset.
Allow me to briefly review some basics of the Young-Fibonacci lattice
before I state the putative <span class="math-container">$\Bbb{YF}$</span>-version of the Nekrasov-Okounkov partition function.</p>
<p><strong>Young-Fibonacci Preliminaries:</strong>
Recall that a <em>fibonacci word</em> <span class="math-container">$u$</span> is a word formed
out of the alphabet <span class="math-container">$\{1,2\}$</span>. As a set <span class="math-container">$\Bbb{YF}$</span> is the
collection of all (finite) fibonacci words and <span class="math-container">$\Bbb{YF}_n$</span>
will denote the set of fibonacci words <span class="math-container">$u \in \Bbb{YF}$</span> of <em>length</em>
<span class="math-container">$|u|=n$</span> where
<span class="math-container">$|u|:= a_1 + \cdots + a_k$</span> and where <span class="math-container">$u=a_k \cdots a_1$</span> is the
parsing of <span class="math-container">$u$</span> into its digits <span class="math-container">$a_1, \dots, a_k \in \{1,2 \}$</span>.
The adjective Fibonacci reflects the fact that the cardinality of <span class="math-container">$\Bbb{YF}_n$</span>
is the <span class="math-container">$n$</span>-th Fibonacci number. I will skip defining the poset structure
on <span class="math-container">$\Bbb{YF}$</span> and instead I point the readers to the Wikipedia page
<a href="https://en.wikipedia.org/wiki/Young%E2%80%93Fibonacci_lattice" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Young–Fibonacci_lattice</a>. Suffice it to
say that when endowed with an appropriate partial order <span class="math-container">$\unlhd$</span> the set <span class="math-container">$\Bbb{YF}$</span> becomes a ranked, modular (but not distributive), <span class="math-container">$1$</span>-differential lattice. R. Stanley's <span class="math-container">$1$</span>-differential property (see <a href="https://en.wikipedia.org/wiki/Differential_poset" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Differential_poset</a>) is key here because it implies that the function
<span class="math-container">$\mu_\mathrm{P}: \Bbb{YF} \longrightarrow \Bbb{R}_{>0}$</span>
defined by</p>
<p><span class="math-container">\begin{equation}
\begin{array}{ll}
\mu_\mathrm{P}(u)
&\displaystyle := \ { \dim^2(u) \over {|u|!}} \quad \text{where} \\
\dim(u)
&\displaystyle := \
\# \left\{
\begin{array}{l}
\text{all saturated chains $(u_0 \lhd \cdots \lhd u_n)$ in $\Bbb{YF}$} \\
\text{starting with $u_0 = \emptyset$ and ending at $u_n =u$}
\end{array}
\right\}
\end{array}
\end{equation}</span></p>
<p>restricts to a positive probability distribution <span class="math-container">$\mu^{(n)}_\mathrm{P}$</span>
on <span class="math-container">$\Bbb{YF}_n$</span> for each <span class="math-container">$n \geq 0$</span>. In fact <span class="math-container">$\mu_\mathrm{P}$</span> satisfies a
stronger property known as <em>coherence</em>: The ratios</p>
<p><span class="math-container">\begin{equation}
\tilde{\mu}_\mathrm{P}(u \lhd v) \ := \
{\mu_\mathrm{P}(v) \over {\mu_\mathrm{P}(u)}}
\end{equation}</span></p>
<p>restrict to a probability distribution <span class="math-container">$\tilde{\mu}_{\mathrm{P},u}$</span> on the set of
<em>covering relations</em> <span class="math-container">$u \lhd v$</span>
(i.e. edges in the Hasse diagram of <span class="math-container">$\Bbb{YF}$</span>)
for any fixed <span class="math-container">$u \in \Bbb{YF}_n$</span>.
We refer to
<span class="math-container">$\mu^{(n)}_\mathrm{P}$</span> as the <em>Plancherel
measure</em> for <span class="math-container">$\Bbb{YF}_n$</span>. If <span class="math-container">$S:\Bbb{YF} \longrightarrow \Bbb{R}_{\geq 0}$</span>
is some statistic let <span class="math-container">$\langle S \rangle_n$</span> denote its expectation
value with respect to the Plancherel measure, i.e.</p>
<p><span class="math-container">\begin{equation}
\langle S \rangle_n \ := \ \sum_{|u|=n} \, {\dim^2(u) \over {n!}} \, S(u)
\end{equation}</span></p>
<p>We may visualize a fibonacci word <span class="math-container">$u \in \Bbb{YF}$</span>
using a profile of <em>boxes</em>
akin to the way one depicts a partition by its Young diagram.
The following example with <span class="math-container">$u = 12112211$</span>
should illustrate the concept of a Young-Fibonacci diagram clearly. For emphasis
each digit of the fibonacci word <span class="math-container">$u$</span> is written directly underneath the corresponding column of boxes:</p>
<p><span class="math-container">\begin{equation}
\begin{array}{cccccccc}
& \Box & & & \Box & \Box & & \\
\Box & \Box & \Box & \Box & \Box & \Box & \Box & \Box \\
1 & 2 & 1 & 1 & 2 & 2 & 1 & 1
\end{array}
\end{equation}</span></p>
<p>A Fibonacci word <span class="math-container">$u$</span> will be synonymous with its Young-Fibonacci diagram
and <span class="math-container">$\Box \in u$</span> will indicate membership of a box.
The <em>hook length</em> <span class="math-container">$\mathrm{h}(\Box)$</span> of a box <span class="math-container">$\Box \in u$</span>
is defined to be <span class="math-container">$1$</span> whenever it is in the top row; otherwise <span class="math-container">$\mathrm{h}(\Box)$</span>
equals <span class="math-container">$1$</span> plus the total number of boxes directly
above it and to its right. For example the hook lengths of the boxes of
<span class="math-container">$u = 12112211$</span> are indicated in the tableau below:</p>
<p><span class="math-container">\begin{equation}
\begin{array}{cccccccc}
& \boxed{1 \ \ } & & & \boxed{1 \ \ } & \boxed{1 \ \ } & & \\
\boxed{11} & \boxed{10} & \boxed{8 \ \ } & \boxed{7 \ \ }
& \boxed{6 \ \ } & \boxed{4 \ \ } & \boxed{2 \ \ } & \boxed{1 \ \ }
\end{array}
\end{equation}</span></p>
<p>These graphical conventions allow us to reformulate
the value of <span class="math-container">$\mu_\mathrm{P}(u)$</span> in terms of (the squares of)
the hook-lengths of <span class="math-container">$u \in \Bbb{YF}$</span>, i.e.</p>
<p><span class="math-container">\begin{equation}
\mu_\mathrm{P}(u) \ = \ {|u|! \over
{\displaystyle \prod_{\Box \, \in \, u} \, \mathrm{h}^2(\Box)} }
\end{equation}</span></p>
<p>This is a non-trivial observation made by R. Stanley in the course
of his work examining differential posets.</p>
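This hook-product formula can be cross-checked by brute force. The following script (my own, not from the post) reads the hook lengths off a fibonacci word, where the bottom box of a column gets $1$ plus the number of boxes strictly to its right, plus one more if its own column has height two, and verifies that the weights $|u|!/\prod_\Box \mathrm{h}^2(\Box)$ sum to $1$ on each rank, as a probability distribution must.

```python
from fractions import Fraction
from math import factorial

def words(n):
    # All fibonacci words of weight n, digits listed left to right.
    if n == 0:
        return [()]
    out = [(1,) + w for w in words(n - 1)]
    if n >= 2:
        out += [(2,) + w for w in words(n - 2)]
    return out

def hooks(word):
    hs = []
    for j, d in enumerate(word):
        # bottom box: 1 + boxes to its right + its own top box (if any)
        hs.append(1 + sum(word[j + 1:]) + (1 if d == 2 else 0))
        if d == 2:
            hs.append(1)  # top-row boxes always have hook length 1
    return hs

def mu(word):
    n = sum(word)
    denom = 1
    for h in hooks(word):
        denom *= h * h
    return Fraction(factorial(n), denom)

for n in range(1, 9):
    assert sum(mu(w) for w in words(n)) == 1
print("Plancherel weights sum to 1 for n = 1..8")
```

As a spot check, the word $12112211$ from the example above yields exactly the hook lengths $11,10,8,7,6,4,2,1$ on the bottom row and $1$ on each top box.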
<p><strong>The <span class="math-container">$\Bbb{YF}$</span>-version of the Nekrasov-Okounkov partition function:</strong>
For a fibonacci words <span class="math-container">$u \in \Bbb{YF}$</span>
define a <span class="math-container">$t$</span>-statistic
<span class="math-container">$H_t(u) := \prod_{\Box \, \in \, u} \, \big(\mathrm{h}^2(\Box) - t \big)$</span> and the <em><span class="math-container">$\Bbb{YF}$</span>-Nekrasov-Okounkov</em> partition function as</p>
<p><span class="math-container">\begin{equation}
\begin{array}{ll}
F(z;t)
&\displaystyle = \ \sum_{n \geq 0} {z^n \over {n!}}
\, \langle H_t \rangle_n \\
&\displaystyle = \ \sum_{n \geq 0} {z^n \over {n!}} \,
\sum_{|u|=n} \, {\dim^2(u) \over {n!}} \, H_t(u)
\end{array}
\end{equation}</span></p>
<p>Given a fibonacci word <span class="math-container">$u$</span> let <span class="math-container">$E_k(u)$</span> be the elementary symmetric polynomial in the square hook lengths
<span class="math-container">$\mathrm{h}^2(\Box)$</span> for <span class="math-container">$\Box \in u$</span> with the conventions
that <span class="math-container">$E_k(u) = 0$</span> whenever <span class="math-container">$k > |u|$</span> and that <span class="math-container">$E_0(u) = 1$</span>
for all <span class="math-container">$u \in \Bbb{YF}$</span>. Following a hint from Stanley's
notes "Partition Statistics with Respect to Plancherel Measure"
(<a href="http://www-math.mit.edu/%7Erstan/transparencies/plancherel.ps" rel="nofollow noreferrer">http://www-math.mit.edu/~rstan/transparencies/plancherel.ps</a>)
we will try to compute <span class="math-container">$F(z;t)$</span> by working out a recursion for the expectation values <span class="math-container">$\langle E_k \rangle_n$</span>. It will be convenient to make a change of variable <span class="math-container">$z \mapsto -z$</span> and consider <span class="math-container">$F^\vee(z;t)
:= F(-z;t)$</span> instead; the effect of this sign-change is to
replace the statistic <span class="math-container">$H_t(u)$</span> by <span class="math-container">$H^\vee_t(u) := \prod_{\Box \, \in \, u} \, \big(t -\mathrm{h}^2(\Box) \big)$</span> in the definition of the partition
function. After expanding into elementary symmetric polynomials <span class="math-container">$E_k$</span> we
get</p>
<p><span class="math-container">\begin{equation}
\begin{array}{ll}
\displaystyle H^\vee_t(u)
&\displaystyle = \ \sum_{k=0}^{|u|} \, (-1)^k \, E_{k}(u) \, t^{|u|-k} \\ \\
&\text{--- and so ---} \\ \\
\displaystyle F^\vee(z;t)
&\displaystyle = \
\sum_{n \geq 0} \, {z^n \over {n!}} \,
\langle H^\vee_t \rangle_n \\
&\displaystyle = \
\sum_{n \geq 0} \, {z^n \over {n!}} \,
\sum_{k=0}^n \, (-1)^k
\, \langle E_k \rangle_n \, t^{n-k} \\
&\displaystyle = \
\sum_{k \geq 0} \, (-t)^{-k} \,
\underbrace{\sum_{n \geq 0} \, (zt)^n \, {\langle E_k \rangle_n \over {n!}}}_{\text{$= \, F^\vee_k(zt)$ see below}}
\end{array}
\end{equation}</span></p>
<p><strong>Evaluating expectation values:</strong>
Fibonacci words <span class="math-container">$u \in \Bbb{YF}_n$</span> with <span class="math-container">$n \geq 2$</span> can be separated into two
disjoint groups: Those of the form <span class="math-container">$u=1v$</span> for <span class="math-container">$v \in \Bbb{YF}_{n-1}$</span>
and those of the form <span class="math-container">$u=2v$</span> for <span class="math-container">$v \in \Bbb{YF}_{n-2}$</span>. Depending on
whether the prefix of <span class="math-container">$u$</span> is <span class="math-container">$1$</span> or <span class="math-container">$2$</span> we can write down a recursive
formula for the value of <span class="math-container">$E_k(u) := E_k \big( \mathrm{h}^2(\Box) \big)_{\Box \, \in \, u}$</span> by analyzing the hook length(s) of the box(es) in the left-most
column, specifically:</p>
<p><span class="math-container">\begin{equation}
\begin{array}{lll}
E_k(1v)
&= E_k(v) + n^2E_{k-1}(v)
&\text{if} \ |v| = n-1 \\
E_k(2v)
&= E_k(v) + (n^2+1)E_{k-1}(v) + n^2E_{k-2}(v)
&\text{if} \ |v| = n-2
\end{array}
\end{equation}</span></p>
<p>Using the observation that <span class="math-container">$\dim(1v) = \dim(v)$</span> and
<span class="math-container">$\dim(2v) = (|v| + 1) \dim(v)$</span> we may conclude</p>
<p><span class="math-container">\begin{equation}
\langle E_k \rangle_n
= \left\{
\begin{array}{l}
\displaystyle {1 \over n} \langle E_k \rangle_{n-1}
\ + \ {n-1 \over n} \langle E_k \rangle_{n-2} \\ \\
\displaystyle + \ n \langle E_{k-1} \rangle_{n-1} \ + \
{(n-1)(n^2+1) \over n} \langle E_{k-1} \rangle_{n-2} \\ \\
\displaystyle + \ n(n-1) \langle E_{k-2} \rangle_{n-2}
\end{array}
\right.
\end{equation}</span></p>
<p>If we set <span class="math-container">$\sigma_k(n) := {1 \over {n!}} \, \langle E_k \rangle_n$</span> then
the above recursion can be rewritten as:</p>
<p><span class="math-container">\begin{equation}
(\dagger) \ \
\left\{
\begin{array}{l}
\displaystyle
n^2\sigma_k(n) \ = \
\underbrace{\sigma_k(n-1) \ + \ \sigma_k(n-2)}_{\text{homogeneous part}}
\ + \ \gamma_{<k}(n) \quad \text{where} \\ \\
\displaystyle
\gamma_{<k}(n) \ = \
\underbrace{n^2\sigma_{k-1}(n-1)
\ + \ (n^2 +1)\sigma_{k-1}(n-2)
\ + \ n^2\sigma_{k-2}(n-2)}_{\text{inductive heap of inhomogeneous junk}}
\end{array} \right.
\end{equation}</span></p>
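As a sanity check on $(\dagger)$, here is a script of my own comparing $\sigma_1(n)$ computed from the recursion (with $\sigma_0(m)=1/m!$, $\sigma_{-1}\equiv 0$, and initial values $\sigma_1(0)=0$, $\sigma_1(1)=1$) against the brute-force Plancherel average of $E_1(u)=\sum_{\Box\in u}\mathrm{h}^2(\Box)$:

```python
from fractions import Fraction
from math import factorial

def words(n):
    # All fibonacci words of weight n, digits listed left to right.
    if n == 0:
        return [()]
    out = [(1,) + w for w in words(n - 1)]
    if n >= 2:
        out += [(2,) + w for w in words(n - 2)]
    return out

def hooks(word):
    hs = []
    for j, d in enumerate(word):
        hs.append(1 + sum(word[j + 1:]) + (1 if d == 2 else 0))
        if d == 2:
            hs.append(1)
    return hs

def sigma1_brute(n):
    # sigma_1(n) = (1/n!) * sum_u mu(u) * E_1(u)
    total = Fraction(0)
    for w in words(n):
        hs = hooks(w)
        denom = 1
        for h in hs:
            denom *= h * h
        total += Fraction(factorial(n), denom) * sum(h * h for h in hs)
    return total / factorial(n)

def sigma1_rec(N):
    s0 = lambda m: Fraction(1, factorial(m))   # sigma_0(m) = 1/m!
    s1 = [Fraction(0), Fraction(1)]
    for n in range(2, N + 1):
        rhs = s1[n - 1] + s1[n - 2] + n * n * s0(n - 1) + (n * n + 1) * s0(n - 2)
        s1.append(rhs / (n * n))               # (dagger) with k = 1
    return s1

rec = sigma1_rec(7)
for n in range(8):
    assert rec[n] == sigma1_brute(n)
print("(dagger) recursion for k = 1 matches brute force up to n = 7")
```

For example $\sigma_1(2) = 5/2$ both ways: the two words of weight $2$ each have hooks $\{2,1\}$, weight $1/2$, and $E_1 = 5$.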
<p>all of which can be converted, using the usual yoga of generating functions, into the following second order inhomogeneous ODE for <span class="math-container">$F^\vee_k(x) := \sum_{n \geq 0} \sigma_k(n) x^n$</span>.</p>
<p><span class="math-container">\begin{equation}
\begin{array}{c}
\displaystyle x^2 \, {d^2 \over {dx^2}} F^\vee_k(x) \ + \
x {d \over {dx}} F^\vee_k(x) \ - \
\big(x^2 + x \big) F^\vee_k(x) \\
\displaystyle \ = \ \\
\displaystyle
G_{<k}(x) \ + \ \big( \sigma_k(1) - \sigma_k(0) \big)x
\end{array}
\end{equation}</span></p>
<p>which, after setting <span class="math-container">$F^\vee_k(x) := e^x J^\vee_k(x)$</span>, can be rewritten as</p>
<p><span class="math-container">\begin{equation}
(\dagger \dagger) \ \ \left\{
\begin{array}{c}
\displaystyle x {d^2 \over {dx^2}} J^\vee_k(x)
\ + \ \big(2x + 1 \big) {d \over {dx}} J^\vee_k(x) \\
= \\
\displaystyle {e^{-x} \over x} \Big[ G_{<k}(x) \ + \ \big( \sigma_k(1) - \sigma_k(0) \big)x \Big]
\end{array} \right.
\end{equation}</span></p>
<p>where the generating function
<span class="math-container">$G_{<k}(x) = \sum_{n \geq 2} \, \gamma_{<k}(n) x^n$</span> will have been evaluated earlier by induction on <span class="math-container">$k \geq 0$</span>.
The associated homogeneous ODE of <span class="math-container">$(\dagger \dagger)$</span>
has two nice independent solutions <span class="math-container">$Y_1(x) = 1$</span> and <span class="math-container">$Y_2(x)= \int x^{-1} e^{-2x} dx$</span> whose Wronskian is <span class="math-container">$W=x^{-1} e^{-2x}$</span>. One starts the inductive engine beginning with <span class="math-container">$F^\vee_0(x) = e^x$</span> or, equivalently with <span class="math-container">$J^\vee_0(x) = 1$</span>. For <span class="math-container">$k=1$</span> clearly <span class="math-container">$\sigma_1(0)=0$</span> and <span class="math-container">$\sigma_1(1)=1$</span> while</p>
<p><span class="math-container">\begin{equation}
\begin{array}{ll}
\displaystyle G_{<1}(x)
&\displaystyle = \ \sum_{n \geq 2} \, {n^3 + n -1 \over {(n-1)!}} \, x^n \\
&\displaystyle = \ \big(x + 8x^2 + 6x^3 + x^4 \big) \, e^x \ - \ x
\end{array}
\end{equation}</span></p>
<p>so the ODE for <span class="math-container">$J^\vee_1(x)$</span> becomes</p>
<p><span class="math-container">\begin{equation}
\begin{array}{c}
\displaystyle x {d^2 \over {dx^2}} J^\vee_1(x) \ + \
\big(2x+1\big) {d \over {dx}} J^\vee_1(x) \\
\displaystyle = \\
\underbrace{1 + 8x + 6x^2 + x^3}_{e^{-x} \big( G_{<1}(x) \ + \ x \big)/x}
\end{array}
\end{equation}</span></p>
<p>By variation of parameters, a particular inhomogeneous solution is</p>
<p><span class="math-container">\begin{equation}
\begin{array}{rl}
\displaystyle Y_\mathrm{particular}(x)
&\displaystyle = \ v_1(x) \cdot Y_1(x) \ + \ v_2(x) \cdot
Y_2(x) \\
\displaystyle v_1(x)
&\displaystyle = \ -\int {e^x \over x} \, Y_2(x) \,
\Big(x + G_{<1}(x) \Big) \, dx \\
\displaystyle v_2(x)
&\displaystyle = \ \ \ \ \ \int {e^x \over x} \, Y_1(x) \,
\Big(x + G_{<1}(x) \Big) \, dx
\end{array}
\end{equation}</span></p>
<p>After solving <span class="math-container">$J^\vee_1(x)$</span> (and thus for <span class="math-container">$F^\vee_1(x)$</span>) we repeat the process for <span class="math-container">$k > 1$</span>. At each stage we solve by variation of parameters, using the two homogeneous solutions <span class="math-container">$Y_1(x)$</span> and <span class="math-container">$Y_2(x)$</span>, the <span class="math-container">$(\dagger \dagger)$</span>-ODE
whose inhomogeneous term is itself computed
from the data obtained in the previous layer of computation.</p>
<blockquote>
<p><strong>Question.</strong>
Does anyone know how to solve either the <span class="math-container">$(\dagger)$</span>-hierarchy of
linear recurrences, the <span class="math-container">$(\dagger \!\dagger)$</span>-hierarchy of 2nd order inhomogeneous ODEs, or equivalently the <span class="math-container">$(\dagger \! \dagger \! \dagger)$</span>-ODE explained in the second answer/response below? By solve I mean to express the solution in terms of elementary functions, continued fractions, or else by some nice class of special functions (e.g. hypergeometric).</p>
</blockquote>
<p>thanks, ines.</p>
| David E Speyer | 297 | <p>Let <span class="math-container">$G$</span> be any finite group. Then the group algebra <span class="math-container">$\mathbb{C}[G]$</span> is, as an algebra, isomorphic to <span class="math-container">$\bigoplus_V \mathrm{End}(V)$</span>, where the direct sum is over irreducible representations of <span class="math-container">$V$</span> of <span class="math-container">$G$</span>. So the group of units of <span class="math-container">$\mathbb{C}[G]$</span> is isomorphic to <span class="math-container">$\prod_V \mathrm{GL}(V)$</span>. Only finitely many elements of this product come from the original group <span class="math-container">$G$</span>. Any element of <span class="math-container">$\prod_V \mathrm{GL}(V)$</span> not coming from <span class="math-container">$G$</span> is invertible but not group like.</p>
<p>To be concrete, let <span class="math-container">$G$</span> be the cyclic group <span class="math-container">$C_2$</span>, with nontrivial element <span class="math-container">$g$</span>. Then <span class="math-container">$a+bg$</span> is invertible as long as <span class="math-container">$(a+b)(a-b) \neq 0$</span>, but <span class="math-container">$a+bg$</span> is group like only for <span class="math-container">$(a,b) = (1,0)$</span> and <span class="math-container">$(a,b) = (0,1)$</span>.</p>
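To make the $C_2$ example fully concrete, here is a small check of my own: multiplication in $\mathbb{C}[C_2]$ is $(a+bg)(c+dg)=(ac+bd)+(ad+bc)g$, the inverse (when it exists) is $(a-bg)/(a^2-b^2)$, and invertibility is exactly $(a+b)(a-b)\neq 0$.

```python
# Elements of C[C_2] as pairs (a, b) meaning a + b*g, with g^2 = 1.
def mult(p, q):
    (a, b), (c, d) = p, q
    return (a * c + b * d, a * d + b * c)

def is_invertible(p):
    a, b = p
    return (a + b) * (a - b) != 0

# (3, 1) is invertible but not group-like.
a, b = 3, 1
det = (a + b) * (a - b)        # = 8
inv = (a / det, -b / det)      # (a - b*g) / (a^2 - b^2)
assert mult((a, b), inv) == (1.0, 0.0)

# (1, 1) is a zero divisor: (1 + g)(1 - g) = 0, hence not invertible.
assert mult((1, 1), (1, -1)) == (0, 0)
assert not is_invertible((1, 1))
print("(3 + g) is a unit; (1 + g) is a zero divisor")
```

The pairs $(a,b)$ with $(a+b)(a-b)\neq 0$ form the two-parameter unit group $\mathrm{GL}_1\times\mathrm{GL}_1$ from the decomposition above, of which only two points are group-like.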
|
3,742,189 | <p>in a game I play there's a chance to get a good item with 1/1000.
After 3200 runs I only got 1.</p>
<p>So how can I calculate how likely that is and I remember there are graphs which have 1 sigma and 2 sigma as vertical lines and you can tell what you can expect with 90% and 95% sureness.</p>
<p>Sorry if that's asked before, but I don't remember the name of such a graph!</p>
<p>Thanks in advance.</p>
| JMP | 210,189 | <p>You need to align the <strong>positive</strong> <span class="math-container">$x$</span>'s, so the second equation becomes <span class="math-container">$x\gt-1$</span>, and now you can see that your addition involves the two different inequality symbols.</p>
<p>Solve for the first# equation: <span class="math-container">$x\lt3$</span>, and now we can join the two: <span class="math-container">$-1\lt x\lt 3$</span>.</p>
|
3,514,547 | <p>The problem is as follows:</p>
<p>The figure from below shows vectors <span class="math-container">$\vec{A}$</span> and <span class="math-container">$\vec{B}$</span>. It is known that <span class="math-container">$A=B=3$</span>. Find <span class="math-container">$\vec{E}=(\vec{A}+\vec{B})\times(\vec{A}-\vec{B})$</span></p>
<p><a href="https://i.stack.imgur.com/kob4R.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kob4R.png" alt="Sketch of the problem"></a></p>
<p>The alternatives are:</p>
<p><span class="math-container">$\begin{array}{ll}
1.&-18\hat{k}\\
2.&-9\hat{k}\\
3.&-\sqrt{3}\hat{k}\\
4.&3\sqrt{3}\hat{k}\\
5.&9\hat{k}\\
\end{array}$</span></p>
<p>What I've attempted here was to try to decompose each vectors</p>
<p><span class="math-container">$\vec{A}=\left \langle 3\cos 53^{\circ}, 3 \sin 53^{\circ} \right \rangle$</span></p>
<p><span class="math-container">$\vec{B}=\vec{A}=\left \langle 3\cos (53^{\circ}+30^{\circ}), 3 \sin (53^{\circ}+30^{\circ}) \right \rangle$</span></p>
<p>But by attempting to use these relationships do seem to extend the algebra too much. Does it exist another way? some simplification?. Or could it be that am I overlooking something?</p>
<p>Can someone help me with this?.</p>
| José Carlos Santos | 446,262 | <p>The answer is <span class="math-container">$-9\vec k$</span>. In fact, the angle between <span class="math-container">$\vec A$</span> and <span class="math-container">$\vec B$</span> is <span class="math-container">$30^\circ$</span>. On the other hand<span class="math-container">\begin{align}\left(\vec A+\vec B\right)\times\left(\vec A-\vec B\right)&=\overbrace{\vec A\times\vec A}^{\phantom{0}=0}+\overbrace{\vec B\times\vec A}^{\phantom{-\vec A\times\vec B}=-\vec A\times\vec B}-\vec A\times\vec B-\overbrace{\vec B\times\vec B}^{\phantom{0}=0}\\&=-2\vec A\times\vec B.\end{align}</span>The length of <span class="math-container">$\vec A\times\vec B$</span> is <span class="math-container">$3\times3\times\sin(30^\circ)=\frac92$</span>, and therefore the answer is <span class="math-container">$-9\vec k$</span>; in order to see why it is this and not <span class="math-container">$9\vec k$</span>, use the <a href="https://en.wikipedia.org/wiki/Right-hand_rule" rel="nofollow noreferrer">right-hand rule</a>.</p>
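A numeric double-check of my own (the angles $53^\circ$ and $83^\circ$ are assumptions read off the sketch; only the $30^\circ$ difference matters for the result):

```python
import math

deg = math.pi / 180
# |A| = |B| = 3, with B at 30 degrees counterclockwise from A.
A = (3 * math.cos(53 * deg), 3 * math.sin(53 * deg))
B = (3 * math.cos(83 * deg), 3 * math.sin(83 * deg))

def cross_z(u, v):
    # z-component of the 3D cross product of two planar vectors
    return u[0] * v[1] - u[1] * v[0]

S = (A[0] + B[0], A[1] + B[1])   # A + B
D = (A[0] - B[0], A[1] - B[1])   # A - B
Ez = cross_z(S, D)               # = -2 (A x B)_z = -2 * 9 * sin(30 deg) = -9
print(round(Ez, 6))              # -9.0
```

The sign comes out negative because $\vec B$ lies counterclockwise from $\vec A$, so $\vec A\times\vec B$ points along $+\hat k$ and $-2\,\vec A\times\vec B$ along $-\hat k$.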
|
1,006,562 | <p>So I am trying to figure out the limit</p>
<p>$$\lim_{x\to 0} \tan x \csc (2x)$$</p>
<p>I am not sure how to approach this and would appreciate any help solving it. </p>
| JukesOnYou | 148,363 | <p>$$
\lim_{x\to 0} \tan x \csc (2x)= \lim_{x\to 0} \dfrac{\sin x}{2\cos^2x\sin x} = \lim_{x\to 0} \dfrac{1}{2\cos^2 x} = \dfrac{1}{2}
$$</p>
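A numerical sanity check (mine, not part of the answer): since $\tan x\csc(2x)=\frac{1}{2\cos^2 x}$ away from $0$, the values should approach $\frac12$ as $x\to 0$.

```python
import math

def f(x):
    # tan(x) * csc(2x) = tan(x) / sin(2x)
    return math.tan(x) / math.sin(2 * x)

for x in (0.1, 0.01, 0.001):
    print(x, f(x))   # values approach 0.5, matching 1/(2*cos(0)^2)
```

The error behaves like $x^2/2$, consistent with the Taylor expansions of $\tan$ and $\sin$.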
|
2,428,243 | <p>How can I evaluate this product?</p>
<p>$$\prod_{i=1}^{\infty} {(n^{-i})}^{n^{-i}}$$</p>
<p>Unfortunately, I have no idea.</p>
| Donald Splutterwit | 404,247 | <p>\begin{eqnarray*}
P=\prod_{i=1}^{\infty} (n^{-i})^{n^{-i}} = \prod_{i=1}^{\infty} n^{-in^{-i}} = n^{-\sum_{i=1}^{\infty}in^{-i}}
\end{eqnarray*}
Now use $\sum_{i=1}^{\infty}ix^{i}= \frac{x}{(1-x)^2}$ with $x=\frac{1}{n}$, so that $\sum_{i=1}^{\infty}in^{-i}= \frac{n}{(n-1)^2}$:
\begin{eqnarray*}
P=n^{-\frac{n}{(n-1)^2}}
\end{eqnarray*}</p>
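A quick numerical check of the closed form $P=n^{-n/(n-1)^2}$ (my own, not part of the answer):

```python
def partial_product(n, terms=60):
    # prod_{i=1}^{terms} (n^{-i})^(n^{-i}); the tail is negligible for n >= 2
    p = 1.0
    for i in range(1, terms + 1):
        p *= (n ** -i) ** (n ** -i)
    return p

for n in (2, 3, 10):
    closed = n ** (-n / (n - 1) ** 2)
    assert abs(partial_product(n) - closed) < 1e-12
    print(n, partial_product(n), closed)
```

For $n=2$ the closed form gives $2^{-2}=0.25$ exactly.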
|
2,428,243 | <p>How can I evaluate this product?</p>
<p>$$\prod_{i=1}^{\infty} {(n^{-i})}^{n^{-i}}$$</p>
<p>Unfortunately, I have no idea.</p>
| John Lou | 404,782 | <p>HINT:</p>
<p>$$\prod_{i=1}^{\infty} {(n^{-i})}^{n^{-i}} = \prod_{i=1}^{\infty} \frac{1}{n^{\frac{i}{n^i}}} = \frac{1}{n^{\frac{1}{n}}} \cdot \frac{1}{n^{\frac{2}{n^2}}}\cdot\frac{1}{n^{\frac{3}{n^3}}}...$$</p>
<p>$$\prod_{i=1}^{k} {(n^{-i})}^{n^{-i}} = \frac{1}{n^{\frac{n^{k-1} + 2 \cdot n^{k-2}...(k-1) \cdot n + k}{n^k}}}$$</p>
<p>And the exponent can be made into a summation.</p>
|
856,958 | <p>$$|x+y|=|x|+|y| \iff |xy|>0$$</p>
<p>I tried to prove the above equivalence but I can't find a way. I tried assuming the first condition is true and tried to derive the second part, but I can't seem to get through. I'm new to real analysis and it would be a great help if someone could provide a tip. Thanks :)</p>
| Community | -1 | <p>You want to prove that
$$|x+y|=|x|+|y|\iff xy>0$$
which means that $x$ and $y$ have the same sign. To prove the necessary condition we argue by contrapositive: let $x$ and $y$ have opposite signs and prove that $|x+y|\ne|x|+|y|$. Can you take it from here?</p>
|
856,958 | <p>$$|x+y|=|x|+|y| \iff |xy|>0$$</p>
<p>I tried to prove the above equivalence but I can't find a way. I tried assuming the first condition is true and tried to derive the second part, but I can't seem to get through. I'm new to real analysis and it would be a great help if someone could provide a tip. Thanks :)</p>
| Martin R | 42,969 | <p>Actually $|x+y|=|x|+|y| \iff xy \ge 0$, i.e. if $x$ and $y$ have the same sign
or one of them is zero.</p>
<p>One way to see this is from</p>
<p>$$ |x + y|^2 = (x + y)^2 = x^2 + 2xy + y^2 $$</p>
<p>and</p>
<p>$$ (|x| + |y|)^2 = |x|^2 + 2|x||y| + |y|^2 = x^2 + 2|xy| + y^2$$</p>
<p>so that $|x+y|=|x|+|y| \iff xy = |xy| \iff xy \ge 0$.</p>
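<p>A brute-force check of this corrected equivalence over a small grid of rationals (the grid size is an arbitrary choice):</p>

```python
from fractions import Fraction

# |x+y| = |x|+|y|  holds exactly when  xy >= 0
vals = [Fraction(p, 4) for p in range(-8, 9)]
checked = 0
for x in vals:
    for y in vals:
        assert (abs(x + y) == abs(x) + abs(y)) == (x * y >= 0)
        checked += 1
```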
|
422,233 | <p>I was asked to find a minimal polynomial of $$\alpha = \frac{3\sqrt{5} - 2\sqrt{7} + \sqrt{35}}{1 - \sqrt{5} + \sqrt{7}}$$ over <strong>Q</strong>.</p>
<p>I'm not able to find it without the help of WolframAlpha, which says that the minimal polynomial of $\alpha$ is $$19x^4 - 156x^3 - 280x^2 + 2312x + 3596.$$ (Truly it is - $\alpha$ is a root of the above polynomial and the above polynomial is also irreducible over <strong>Q</strong>.)</p>
<p>Can anyone help me with this?</p>
<p>Thank you!</p>
| alans | 80,264 | <p>$$\alpha-\alpha\sqrt{5}+\alpha\sqrt{7}=3\sqrt{5}-2\sqrt{7}+\sqrt{35},$$
$$\alpha-\sqrt{35}=(\alpha+3)\sqrt{5}-(\alpha+2)\sqrt{7},$$
$$(\alpha-\sqrt{35})^2=[(\alpha+3)\sqrt{5}-(\alpha+2)\sqrt{7}]^2,$$
$$\alpha^2+35-2\alpha\sqrt{35}=5(\alpha+3)^2+7(\alpha+2)^2-2\sqrt{35}(\alpha+2)(\alpha+3),$$
$$2\sqrt{35}(\alpha^2+4\alpha+6)=11\alpha^2+58\alpha+38,$$
$$[2\sqrt{35}(\alpha^2+4\alpha+6)]^2=(11\alpha^2+58\alpha+38)^2.$$ From the last equality, we get that the minimal polynomial is $19x^4-156x^3-280x^2+2312x+3596$ (once its irreducibility over $\mathbb{Q}$ is checked, as you already did).</p>
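<p>A floating-point spot check that $\alpha$ really is a root of the quartic:</p>

```python
import math

s5, s7 = math.sqrt(5), math.sqrt(7)
alpha = (3 * s5 - 2 * s7 + math.sqrt(35)) / (1 - s5 + s7)
value = 19 * alpha**4 - 156 * alpha**3 - 280 * alpha**2 + 2312 * alpha + 3596
assert abs(value) < 1e-6   # zero up to rounding error
```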
|
666,217 | <p>If $a^2+b^2 \le 2$ then show that $a+b \le2$</p>
<p>I tried to transform the first inequality to $(a+b)^2\le 2+2ab$ then $\frac{a+b}{2} \le \sqrt{1+ab}$ and I thought about applying $AM-GM$ here but without result</p>
| gt6989b | 16,192 | <p>It suffices to show that if $a^2+b^2 = 2$ then $a+b \leq 2$ (and for the maximum we may take $a,b \geq 0$). From the constraint consider
$$
f(a) = a + \sqrt{2-a^2}
$$
and we need to prove $f(a) \leq 2$ over $[0,\sqrt{2}]$.</p>
<p>$$
f'(a) = 1 - \frac{a}{\sqrt{2-a^2}} = 0 \Leftrightarrow a = 1,
$$
which gives a maximum by the first derivative test.
Since $f(0), f(1), f(\sqrt{2})$ are $\sqrt{2}, 2, \sqrt{2}$ we have $f(a) \leq 2$ as desired.</p>
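<p>A grid check of the argument above (the step size is arbitrary; <code>max(0.0, ...)</code> just guards against rounding at the right endpoint):</p>

```python
import math

f = lambda a: a + math.sqrt(max(0.0, 2 - a * a))
grid = [i * math.sqrt(2) / 10000 for i in range(10001)]
assert all(f(a) <= 2 + 1e-9 for a in grid)   # f never exceeds 2 on [0, sqrt(2)]
assert abs(f(1.0) - 2.0) < 1e-12             # the maximum is attained at a = 1
```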
|
555,446 | <p>Given this shape: <img src="https://i.stack.imgur.com/1rRsC.png" alt="diagram showing a 4000 unit wide cyan square with a 400 unit wide red square in the middle"></p>
<h1>Is it possible to divide the cyan area into 5 equal area shapes</h1>
<p>such that:</p>
<ol>
<li>Each shape is the same</li>
<li>Each shape has an edge touching the red square</li>
<li>Each shape has an edge touching the outside.</li>
<li>No diagonal lines.</li>
</ol>
<p>It's reasonably easy to satisfy #2 and #3 as long as you violate #1</p>
<p>And I don't believe it's possible to actually satisfy #1, and the very fact it's contained within a square suggests #1 is not satisfiable given the presence of #2.</p>
<p>Though it's just a bit of fun really =).</p>
<p>Context: Was simply devising a town plan for a guild oriented town for a minecraft world, and it became possible that we might want 5 guilds, and the fun of giving each guild a fair equal area region , in conjunction with all 5 guilds having a shared space in the middle.</p>
<p>Diagonals are unwanted as it makes dividing the land fairly and applying region controls onerous, as most things are rectangular, regions included.</p>
<p>So while you can roughly approximate a diagonal with sufficiently many rectangles, the less steps, the better.</p>
<h1>If it is not possible</h1>
<p>Please provide reasoning as to why not.</p>
<h1>Additionally</h1>
<p>It would be interesting to see what sort of alternatives people can come up with, perhaps there is an optimal shape that results in all the shapes being highly similar geometrically, despite not being identical.</p>
| Kent Fredric | 106,097 | <p>I don't think it is possible to have all shapes identical, because the criteria of "touching the outside" and "touching the inner square" mean there have to be at least 5 edges running from the center to the outside.</p>
<p>And because it is impossible to draw 5 edges from a square so that they all leave the square in a geometrically identical way, the rest of the image is constrained by this fact.</p>
<p>However, if you waive the "shapes are identical rule", this is the best solution I've come up with so far.</p>
<p><img src="https://i.stack.imgur.com/bLZBB.png" alt="enter image description here"></p>
<p><img src="https://i.stack.imgur.com/14aZU.png" alt="enter image description here"></p>
<p><img src="https://i.stack.imgur.com/5lQ40.png" alt="enter image description here"></p>
<p>Logic:</p>
<p>The Cyan area is basically:
$$
\begin{array}{rcl}
A_{cyan} & = & A_{outer} - A_{inner} \\
& = & 4,000 \times 4,000 - 400 \times 400 \\
& = & 15,840,000
\end{array}
$$
So each must be </p>
<p>$$
\begin{array}{rcl}
A_{section} & = & \dfrac{15,840,000}{5} \\
& = & 3,168,000
\end{array}
$$</p>
<p>It's apparent that at least one shape may need an edge from center to edge, so you quickly find a starting rectangle of </p>
<p>$$
\begin{array}{rcl}
A_{section} & = & \left(\dfrac{W_{outer}}{2} - \dfrac{W_{inner}}{2}\right) \times y \\
& = & \left(\dfrac{4000}{2} - \dfrac{400}{2}\right) \times y \\
& = & (2000 - 200) \times y \\
& = & 1800\,y \\
y & = & \dfrac{3,168,000}{1800} \\
y & = & 1760
\end{array}
$$</p>
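<p>The arithmetic above is easy to double-check:</p>

```python
outer, inner = 4000, 400
cyan = outer * outer - inner * inner
section = cyan // 5
width = outer // 2 - inner // 2   # 2000 - 200 = 1800
y = section // width
assert (cyan, section, width, y) == (15_840_000, 3_168_000, 1800, 1760)
assert width * y == section       # the starting rectangle has exactly 1/5 the area
```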
<p>From there, it was basically a case of shape-shifting the perimeter while keeping area the same, to achieve a sort of balanced distribution, which is reasonably easy to do.</p>
<p>After you've created 2 shapes with that process, duplicating it was pretty simple, and you're then left with a void which will logically be the same size. </p>
<p>For the usecase we had, having different shapes was acceptable, and this combination of shapes was useful enough. Though somebody can probably find a combination of more similar shapes. </p>
<p>( You'll also see that the arrangement loosely models a pentagon, just with far less regularity ) </p>
|
4,310,003 | <p>Suppose you have a non empty set <span class="math-container">$X$</span>, and suppose that for every function <span class="math-container">$f : X \rightarrow X$</span>, if <span class="math-container">$f$</span> is surjective, then it is also injective. Does it necessarily follow that <span class="math-container">$X$</span> is finite ?</p>
<p>Every example I've been able to think of leads me to believe this is true. Is it ? Or could anyone provide a counterexample?</p>
| Laxmi Narayan Bhandari | 931,957 | <p>We start with the substitution <span class="math-container">$e^x=t$</span>. This yields</p>
<p><span class="math-container">$$I = \int\limits_0^\infty\frac{\mathrm dt}{1+t^4} $$</span></p>
<p>Now using <span class="math-container">$t^4=y$</span>,</p>
<p><span class="math-container">$$\begin{align}I &= \frac14\int\limits_0^\infty \frac{ y^{1/4-1}}{1+y}\,\mathrm dy \\ &= \frac14\mathrm B(1/4,3/4) \\ &= \frac{ \Gamma(1/4)\Gamma(3/4)}{4\,\Gamma(1)}\\ &= \frac{\pi}{4\sin\frac{\pi}4} \\ I &= \frac{\pi}{2\sqrt2}\end{align}$$</span></p>
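<p>For anyone who wants a numerical confirmation: substituting $t \mapsto 1/t$ on $[1,\infty)$ turns the integral into $\int_0^1 \frac{1+u^2}{1+u^4}\,du$, which a composite Simpson rule (panel count is arbitrary) evaluates to $\pi/(2\sqrt 2)$:</p>

```python
import math

def simpson(g, a, b, n=2000):     # composite Simpson's rule; n must be even
    h = (b - a) / n
    return (h / 3) * (g(a) + g(b)
                      + 4 * sum(g(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
                      + 2 * sum(g(a + 2 * i * h) for i in range(1, n // 2)))

val = simpson(lambda u: (1 + u * u) / (1 + u**4), 0.0, 1.0)
assert abs(val - math.pi / (2 * math.sqrt(2))) < 1e-9
```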
|
325,186 | <p>If <span class="math-container">$p$</span> is a prime then the zeta function for an algebraic curve <span class="math-container">$V$</span> over <span class="math-container">$\mathbb{F}_p$</span> is defined to be
<span class="math-container">$$\zeta_{V,p}(s) := \exp\left(\sum_{m\geq 1} \frac{N_m}{m}(p^{-s})^m\right). $$</span>
where <span class="math-container">$N_m$</span> is the number of points over <span class="math-container">$\mathbb{F}_{p^m}$</span>.</p>
<p>I was wondering what is the motivation for this definition. The sum in the exponent is vaguely logarithmic. So maybe that explains the exponential?</p>
<p>What sort of information is the zeta function meant to encode and how does it do it? Also, how does this end up being a rational function?</p>
| Wojowu | 30,186 | <p>The definition using exponential of such an ad hoc looking series is admittedly not too illuminating. You mention that the series looks vaguely logarithmic, and that's true because of denominator <span class="math-container">$m$</span>. But then we can ask, why include <span class="math-container">$m$</span> in the denominator?</p>
<p>A "better" definition of a zeta function of a curve (more generally a variety) over <span class="math-container">$\mathbb F_p$</span> involves an Euler product. The product will be over all points <span class="math-container">$P$</span> of <span class="math-container">$V$</span> which are defined over the algebraic closure <span class="math-container">$\overline{\mathbb F_p}$</span> (this isn't exactly true, see below). Any such point has a minimal field of definition, namely the field <span class="math-container">$\mathbb F_{p^n}$</span> generated by the coordinates of this point. We shall define the <em>norm</em> of this point <span class="math-container">$P$</span> as <span class="math-container">$|P|=p^n$</span>. Then we can define
<span class="math-container">$$\zeta_{V,p}(s)=\prod_P(1-|P|^{-s})^{-1}.$$</span>
(again, this is not quite right) Why would this definition be equivalent to yours? It's easiest to see by taking the logarithms. Then for a point <span class="math-container">$P$</span>, the logarithm of the corresponding factor of the product will contribute
<span class="math-container">$$-\log(1-|P|^{-s})=\sum_{k=1}^\infty\frac{1}{k}|P|^{-ks}=\sum_{k=1}^\infty\frac{1}{k}p^{-nks}=\sum_{k=1}^\infty\frac{n}{nk}(p^{nk})^{-s}.$$</span>
In the last step I have multiplied the numerator and the denominator by <span class="math-container">$n$</span>, because the point <span class="math-container">$P$</span> contributes precisely to numbers <span class="math-container">$N_{nk}$</span>, since <span class="math-container">$P$</span> is defined over all the fields <span class="math-container">$\mathbb F_{p^{nk}}$</span>.</p>
<p>But we see a problem - this way, we have counted each point <span class="math-container">$n$</span> times because of <span class="math-container">$n$</span> in the numerator. The resolution is rather tricky - instead of taking a product over points, we take a product over <em>Galois orbits</em> of the points - if we have a point <span class="math-container">$P$</span> minimally defined over <span class="math-container">$\mathbb F_{p^n}$</span>, then there are exactly <span class="math-container">$n$</span> points (conjugates of <span class="math-container">$P$</span>) which we can reach from <span class="math-container">$P$</span> by considering the automorphisms of <span class="math-container">$\mathbb F_{p^n}$</span>. If we were to write <span class="math-container">$Q$</span> for this set of conjugates, and we define <span class="math-container">$|Q|=p^n$</span>, then repeating the calculation above we see that we always count <span class="math-container">$Q$</span> <span class="math-container">$n$</span> times - which is just right, since it consists of <span class="math-container">$n$</span> points! Thus we arrive at the following (this time correct) definition of the zeta function:
<span class="math-container">$$\zeta_{V,p}(s)=\prod_Q(1-|Q|^{-s})^{-1},$$</span>
the product this time over Galois orbits.</p>
<p>Apart from being (in my opinion) much better motivated, it has other advantages. For instance, from the product formula it is clear that the series has integer coefficients. Further, it highlights the similarity with the Riemann zeta function, which has a very similar Euler product. Both of those are generalized to the case of certain arithmetic schemes, but that might be a story for a different time.</p>
<p><strike>As for your last question, regarding rationality, this is a rather nontrivial result, even, as far as I know, for curves. If you are interested in details, then I recommend taking a look at Koblitz's book "<span class="math-container">$p$</span>-adic numbers, <span class="math-container">$p$</span>-adic analysis and zeta functions". There he proves, using moderately elementary <span class="math-container">$p$</span>-adic analysis, rationality of zeta functions of arbitrary varieties.</strike></p>
<p>As KConrad says in the comment, the proof of rationality actually <em>is</em> much simpler for curves than for general varieties. I imagine it is by far more illuminating than Dwork's proof as presented in Koblitz.</p>
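<p>A small exact computation illustrating the rationality in the simplest case $V=\mathbb P^1$ over $\mathbb F_p$ (here $N_m=p^m+1$, so the zeta function should be $1/((1-t)(1-pt))$, whose $t^k$ coefficient is $1+p+\dots+p^k$; the choice $p=5$ is arbitrary):</p>

```python
from fractions import Fraction

p, K = 5, 10
a = [Fraction(0)] + [Fraction(p**m + 1, m) for m in range(1, K + 1)]  # log Z(t)
e = [Fraction(1)] + [Fraction(0)] * K
for k in range(1, K + 1):                   # exp via E' = A'E, coefficientwise
    e[k] = sum(j * a[j] * e[k - j] for j in range(1, k + 1)) / k
assert all(e[k] == (p**(k + 1) - 1) // (p - 1) for k in range(K + 1))
```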
|
3,673,613 | <p>I have to find out if <span class="math-container">$\displaystyle\sum_{n=2}^{\infty}$$\dfrac{\cos(\frac{\pi n}{2}) }{\sqrt n \log(n) }$</span> is absolutely convergent, conditionally convergent or divergent. I think it's divergent, since the value of <span class="math-container">$\cos\left(\dfrac{\pi n}{2}\right)$</span> swings between <span class="math-container">$0$</span>, <span class="math-container">$1$</span> and <span class="math-container">$-1$</span>. And for <span class="math-container">$\left|\cos\left(\dfrac{\pi n}{2}\right)\right|$</span> it still swings between <span class="math-container">$0$</span> and <span class="math-container">$1$</span>. But how can I show it formally?</p>
| user8675309 | 735,806 | <p>I take OP's statement<br>
<em>We can easily prove this inequality</em><br>
<span class="math-container">$\dim(\ker((A-\lambda I)(A-\psi I))) \geq \dim(\ker(A-\lambda I)) + \dim(\ker(A-\psi I))$</span><br>
as a given </p>
<p>we also know that<br>
<span class="math-container">$\dim(\ker((A-\lambda I)(A-\psi I))) \leq \dim(\ker(A-\lambda I)) + \dim(\ker(A-\psi I))$</span><br>
<strong>by Sylvester's Rank Inequality.</strong> </p>
<p>This implies<br>
<span class="math-container">$\dim(\ker((A-\lambda I)(A-\psi I))) =\dim(\ker(A-\lambda I)) + \dim(\ker(A-\psi I))$</span> </p>
<p>(the standard for of Sylvester's Rank Inequality of course involves Rank. But negating, then adding 2n to each side and applying rank-nullity gives this other form) </p>
|
172,080 | <p>Here is a fun integral I am trying to evaluate:</p>
<p>$$\int_{0}^{\infty}\frac{\sin^{2n+1}(x)}{x} \ dx=\frac{\pi \binom{2n}{n}}{2^{2n+1}}.$$</p>
<p>I thought about integrating by parts $2n$ times and then using the binomial theorem for $\sin(x)$, that is, using $\dfrac{e^{ix}-e^{-ix}}{2i}$ form in the binomial series.</p>
<p>But, I am having a rough time getting it set up correctly. Then, again, there is probably a better approach. </p>
<p>$$\frac{1}{(2n)!}\int_{0}^{\infty}\frac{1}{(2i)^{2n}}\sum_{k=0}^{n}(-1)^{2n+1-k}\binom{2n}{k}\frac{d^{2n}}{dx^{2n}}(e^{i(2k-2n-1)x})\frac{dx}{x^{1-2n}}$$</p>
<p>or something like that. I doubt if that is anywhere close, but is my initial idea of using the binomial series for sin valid or is there a better way?.</p>
<p>Thanks everyone.</p>
| Random Variable | 16,033 | <p>There is a <a href="https://math.stackexchange.com/questions/776903/lobachevskys-formula-for-integrals">theorem</a> that states if <span class="math-container">$f(x)$</span> is continuous and <span class="math-container">$\pi$</span>-periodic on <span class="math-container">$\mathbb{R}$</span>, then <span class="math-container">$$ \displaystyle\int_{-\infty}^{\infty} \frac{\sin x}{x} f(x) \ dx = \int_{0}^{\pi} f(x) \ dx. $$</span></p>
<p>See Graham Hesketh's comment for a way to prove this.</p>
<p>Using this theorem, <span class="math-container">$$ \begin{align} \int_{0}^{\infty} \frac{\sin^{2n+1} (x)}{x} \ dx &= \frac{1}{2} \int_{-\infty}^{\infty} \frac{\sin^{2n+1} (x)}{x} \ dx = \frac{1}{2} \int_{-\infty}^{\infty} \frac{\sin x}{x} \sin^{2n} (x) \ dx \\ &= \frac{1}{2} \int_{0}^{\pi} \sin^{2n} (x) \ dx = \int_{0}^{\frac{\pi}{2}} \sin^{2n} (x) \ dx \\ &= \frac{\pi}{2^{2n+1}} \binom{2n}{n}. \tag{1} \end{align}$$</span></p>
<p><span class="math-container">$(1)$</span> <a href="http://en.wikipedia.org/wiki/Wallis%27_integrals" rel="nofollow noreferrer">http://en.wikipedia.org/wiki/Wallis%27_integrals</a></p>
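<p>The final Wallis value is easy to confirm numerically (Simpson's rule; the number of panels is arbitrary):</p>

```python
import math

def simpson(g, a, b, n=2000):     # composite Simpson's rule; n must be even
    h = (b - a) / n
    return (h / 3) * (g(a) + g(b)
                      + 4 * sum(g(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
                      + 2 * sum(g(a + 2 * i * h) for i in range(1, n // 2)))

for m in range(5):
    wallis = math.pi * math.comb(2 * m, m) / 2 ** (2 * m + 1)
    approx = simpson(lambda x: math.sin(x) ** (2 * m), 0.0, math.pi / 2)
    assert abs(approx - wallis) < 1e-9
```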
|
172,080 | <p>Here is a fun integral I am trying to evaluate:</p>
<p>$$\int_{0}^{\infty}\frac{\sin^{2n+1}(x)}{x} \ dx=\frac{\pi \binom{2n}{n}}{2^{2n+1}}.$$</p>
<p>I thought about integrating by parts $2n$ times and then using the binomial theorem for $\sin(x)$, that is, using $\dfrac{e^{ix}-e^{-ix}}{2i}$ form in the binomial series.</p>
<p>But, I am having a rough time getting it set up correctly. Then, again, there is probably a better approach. </p>
<p>$$\frac{1}{(2n)!}\int_{0}^{\infty}\frac{1}{(2i)^{2n}}\sum_{k=0}^{n}(-1)^{2n+1-k}\binom{2n}{k}\frac{d^{2n}}{dx^{2n}}(e^{i(2k-2n-1)x})\frac{dx}{x^{1-2n}}$$</p>
<p>or something like that. I doubt if that is anywhere close, but is my initial idea of using the binomial series for sin valid or is there a better way?.</p>
<p>Thanks everyone.</p>
| user149844 | 149,844 | <p>I am just adding the proof of the identity for those who are interested:
$$ \sin^{2n+1} x = \frac{1}{4^n}\sum_{k=0}^{n}(-1)^{n-k}\binom{2n+1}{k}\sin\left(\left(2(n-k)+1\right)x\right). $$
Using the complex representation and the Binomial Theorem, we have
$$\begin{aligned}
\sin^{2n+1}x&=\left(\frac{\mathrm{e}^{ix}-\mathrm{e}^{-ix}}{2i}\right)^{2n+1}\\
&=\frac{(-1)^n}{2^{2n+1}i}\sum_{k=0}^{2n+1}\binom{2n+1}{k}\mathrm{e}^{i(2n+1-k)x}(-1)^k\mathrm{e}^{i(-kx)}\\
&=\frac{(-1)^n}{2^{2n+1}i}\sum_{k=0}^{2n+1}(-1)^k\binom{2n+1}{k}\mathrm{e}^{i(2(n-k)+1)x}\\
&=\frac{(-1)^n}{2^{2n+1}i}\sum_{k=0}^{2n+1}(-1)^k\binom{2n+1}{k}\left[\cos\left(\left(2(n-k)+1\right)x\right) + i\sin\left(\left(2(n-k)+1\right)x\right)\right]\\
&=\frac{(-1)^n}{2^{2n+1}}\sum_{k=0}^{2n+1}(-1)^k\binom{2n+1}{k}\left[\sin\left(\left(2(n-k)+1\right)x\right) - i\cos\left(\left(2(n-k)+1\right)x\right)\right]
\end{aligned}
$$</p>
<p>Now, observe that
$$\begin{aligned}
\sum_{k=0}^{2n+1} a_{k} &= \sum_{k=0}^{n}a_{k}+\sum_{k=n+1}^{n+n+1}a_{k}\\
&=\sum_{k=0}^{n}a_{k}+\sum_{k=0}^{n}a_{n+1+k}\\
&=\sum_{k=0}^{n}a_{k}+\sum_{k=0}^{n}a_{n+1+n-k}\\
&=\sum_{k=0}^{n}\left(a_{k}+a_{2n+1-k}\right)
\end{aligned}
$$
Apply with $a_{k}=(-1)^{k}\binom{2n+1}{k}\left[\sin\left(\left(2(n-k)+1\right)x\right) - i\cos\left(\left(2(n-k)+1\right)x\right)\right]$, so
$$\begin{aligned}
a_{2n+1-k}&=-(-1)^{k}\binom{2n+1}{2n+1-k}\left[-\sin\left(\left(2(n-k)+1\right)x\right) - i\cos\left(\left(2(n-k)+1\right)x\right)\right]\\
&=(-1)^{k}\binom{2n+1}{k}\left[\sin\left(\left(2(n-k)+1\right)x\right) + i\cos\left(\left(2(n-k)+1\right)x\right)\right].
\end{aligned}
$$
Then,
$$ a_{k}+a_{2n+1-k}=2(-1)^{k}\binom{2n+1}{k}\sin\left(\left(2(n-k)+1\right)x\right). $$
Therefore,
$$\begin{aligned} \sin^{2n+1} x&=\frac{1}{4^{n}}\sum_{k=0}^{n}(-1)^{n-k}\binom{2n+1}{k}\sin\left(\left(2(n-k)+1\right)x\right)\\
&=\frac{1}{4^n}\sum_{k=0}^{n}(-1)^k\binom{2n+1}{n-k}\sin\left((2k+1)x\right)\\
&=\frac{1}{4^n}\sum_{k=0}^{n}(-1)^k\binom{2n+1}{n+k+1}\sin\left((2k+1)x\right),
\end{aligned}$$
as desired.</p>
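<p>The finished identity can be spot-checked numerically (the sample points are arbitrary):</p>

```python
import math

# sin^(2n+1)(x) == (1/4^n) * sum_{k=0}^{n} (-1)^k C(2n+1, n-k) sin((2k+1)x)
for n in range(5):
    for x in (0.3, 0.7, 1.9, 2.5):
        lhs = math.sin(x) ** (2 * n + 1)
        rhs = sum((-1) ** k * math.comb(2 * n + 1, n - k) * math.sin((2 * k + 1) * x)
                  for k in range(n + 1)) / 4 ** n
        assert abs(lhs - rhs) < 1e-9
```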
|
2,637,914 | <p>I would like to teach students about the pertinence of the Axiom of Infinity. Are there any high school-level theorems of arithmetic, algebra, or calculus, whose proof depends on the Axiom of Infinity? If there are no such examples, what would be the simplest theorem which demands the Axiom of Infinity?</p>
<p>It seems we can still generate endless numbers without the Axiom of Infinity, but this axiom lets us treat infinite sets as a whole -- is this true?</p>
| Akababa | 87,988 | <p>I think you're doing induction backwards; if you assume the statement for $n+1$, you've assumed exactly what the inductive step is supposed to prove. Assume $n$ is true, that is:
$$\sum_{i=1}^n\frac{i}{i+1}\leq \frac{n^2}{n+1}$$
and try to prove that $$\sum_{i=1}^{n+1}\frac{i}{i+1}\leq \frac{(n+1)^2}{n+2}$$</p>
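<p>The inequality itself is easy to verify exactly for many $n$ before proving it (the range is an arbitrary choice):</p>

```python
from fractions import Fraction

total = Fraction(0)
for n in range(1, 200):
    total += Fraction(n, n + 1)
    assert total <= Fraction(n * n, n + 1)   # equality only at n = 1
```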
|
4,076,324 | <p>Let <span class="math-container">$ax+b$</span> be the group of affine transformations <span class="math-container">$x\mapsto ax+b$</span> with <span class="math-container">$a>0$</span> and <span class="math-container">$b\in \mathbb{R}$</span>. How do you topologize this group? As a group it is isomorphic to the set of <span class="math-container">$2$</span>x<span class="math-container">$2$</span> matrices of the form <span class="math-container">\begin{bmatrix}a & b \\0 & 1\end{bmatrix}</span>. But unless I am greatly mistaken this set of matrices connot be an open subset of <span class="math-container">$\mathbb{R}^{4}$</span> since two entries are fixed. So how do you argue that <span class="math-container">$ax+b$</span> is a locally compact group?</p>
| Igor Rivin | 109,865 | <p><span class="math-container">$a>0, b\in \mathbb{R}$</span> gives you a pretty obvious homeomorphism with the (open) upper half-plane. Is that locally compact?</p>
|
4,076,324 | <p>Let <span class="math-container">$ax+b$</span> be the group of affine transformations <span class="math-container">$x\mapsto ax+b$</span> with <span class="math-container">$a>0$</span> and <span class="math-container">$b\in \mathbb{R}$</span>. How do you topologize this group? As a group it is isomorphic to the set of <span class="math-container">$2$</span>x<span class="math-container">$2$</span> matrices of the form <span class="math-container">\begin{bmatrix}a & b \\0 & 1\end{bmatrix}</span>. But unless I am greatly mistaken this set of matrices connot be an open subset of <span class="math-container">$\mathbb{R}^{4}$</span> since two entries are fixed. So how do you argue that <span class="math-container">$ax+b$</span> is a locally compact group?</p>
| nullUser | 17,459 | <p>The topology on this group is inherited from <span class="math-container">$\mathbb{R}^2$</span>, and when identified in <span class="math-container">$\mathbb{R}^2$</span> it is an open half-plane. Definitely locally-compact.</p>
|
4,076,324 | <p>Let <span class="math-container">$ax+b$</span> be the group of affine transformations <span class="math-container">$x\mapsto ax+b$</span> with <span class="math-container">$a>0$</span> and <span class="math-container">$b\in \mathbb{R}$</span>. How do you topologize this group? As a group it is isomorphic to the set of <span class="math-container">$2$</span>x<span class="math-container">$2$</span> matrices of the form <span class="math-container">\begin{bmatrix}a & b \\0 & 1\end{bmatrix}</span>. But unless I am greatly mistaken this set of matrices connot be an open subset of <span class="math-container">$\mathbb{R}^{4}$</span> since two entries are fixed. So how do you argue that <span class="math-container">$ax+b$</span> is a locally compact group?</p>
| José Carlos Santos | 446,262 | <p>Let <span class="math-container">$G$</span> be your group and consider the bijection<span class="math-container">$$\begin{array}{rccc}\psi\colon&(0,\infty)\times\Bbb R&\longrightarrow&G\\&(a,b)&\mapsto&\begin{bmatrix}a&b\\0&1\end{bmatrix}.\end{array}$$</span>Then, consider the distance defined on <span class="math-container">$G$</span> by <span class="math-container">$d(M,N)=\bigl\|\psi^{-1}(M)-\psi^{-1}(N)\bigr\|.$</span></p>
|
1,219,129 | <p>For any vector space $V$ over $\mathbb{C}$, let $X$ be a set whose cardinality is the dimension of $V$. Then $V \cong \bigoplus\limits_{i \in X} \mathbb{C}$ as vector spaces.</p>
<p>Is there a similar description of arbitrary Hilbert spaces? Is there something they all "look" like?</p>
| Alex Zorn | 73,104 | <p>Every Hilbert space is isomorphic to $L^2(X)$ for some measure space $X$. This is a generalization of Tomek's answer.</p>
<p>Now this is redundant, since $X$ is not determined up to isomorphism by $H$. However, this formulation sheds light onto the spectral theorem:</p>
<p>If $A$ is a self-adjoint (possibly unbounded) operator $H \rightarrow H$, then there is a measure space $X$ and an isomorphism $\phi:H \rightarrow L^2(X)$ such that $\phi A \phi^{-1}:L^2(X) \rightarrow L^2(X)$ is multiplication by a measurable real-valued function.</p>
<p>(You might need $H$ separable for this theorem)</p>
|
4,612 | <p>I would like to make a slope field. Here is the code</p>
<pre><code>slopefield =
VectorPlot[{1, .005 * p*(10 - p) }, {t, -1.5, 20}, {p, -10, 16},
Ticks -> None, AxesLabel -> {t, p}, Axes -> True,
VectorScale -> {Tiny, Automatic, None}, VectorPoints -> 15]
</code></pre>
<p>I solved the differential equations and plotted the curves manually. Three questions:</p>
<ol>
<li>Is there an easier way to do it?</li>
<li>Ticks -> None doesn't seem to work. I still get labels for the tick marks.</li>
<li>I'd like to selectively label 2 tick marks.</li>
</ol>
| Jens | 245 | <p>To plot the vector field and the streamlines (curves) together, there are two other plot functions that are specialized for this purpose: </p>
<ul>
<li><a href="http://reference.wolfram.com/mathematica/ref/StreamPlot.html" rel="nofollow noreferrer"><code>StreamPlot</code></a></li>
<li><a href="http://reference.wolfram.com/mathematica/ref/LineIntegralConvolutionPlot.html" rel="nofollow noreferrer"><code>LineIntegralConvolutionPlot</code></a></li>
</ul>
<p>The difference to <code>VectorPlot</code> is that Mathematica can automatically pick a set of curves for you. You can specify the starting points for the curves, but you don't have to:</p>
<pre><code>StreamPlot[{1, .005*p*(10 - p)}, {t, -1.5, 20}, {p, -10, 16},
AxesLabel -> {t, p}, Axes -> True,
VectorScale -> {Tiny, Automatic, None}, VectorPoints -> 15,
StreamStyle -> Red, FrameTicks -> {{5, 10}, {-10, 10}, {}, {}}]
</code></pre>
<p><img src="https://i.stack.imgur.com/4xiUN.png" alt="StremPlot"></p>
<p>Here, I've also added a <code>FrameTicks</code> specification that labels several special points on the horizontal and vertical axes. </p>
<p>The other alternative, which contains some additional visual information, is this:</p>
<pre><code>LineIntegralConvolutionPlot[{1, .005*p*(10 - p)}, {t, -1.5,
20}, {p, -10, 16}, Ticks -> None, AxesLabel -> {t, p}, Axes -> True,
VectorScale -> {Tiny, Automatic, None}, VectorPoints -> 15,
VectorStyle -> LightGray, ColorFunction -> ColorData["Rainbow"],
FrameTicks -> {{5, 10}, {-10, 10}, {}, {}}, Background -> Black,
BaseStyle -> White, FrameTicksStyle -> Yellow, ImageMargins -> 5]
</code></pre>
<p><img src="https://i.stack.imgur.com/H06OT.png" alt="convolution plot"></p>
<p>The last plot can be customized by adding the actual streamlines with the <code>StreamPoints</code> option (see the documentation), but the colored background serves the same purpose. The idea is that the background pattern is physically like the pattern you'd get (e.g.) from grass seeds in an electric field, or irons filings in a magnetic field, etc. And of course the color encodes information about the field strength.</p>
|
3,189,173 | <p>What will be the remainder when <span class="math-container">$2^{87} -1$</span> is divided by <span class="math-container">$89$</span>?</p>
<p>I tried solving it by Euler's theorem, separating the terms:</p>
<p><span class="math-container">$$ \frac {2^{87}}{89} - \frac{1}{89}$$</span></p>
<p><span class="math-container">$\phi (89) =88 $</span></p>
<p>remainder <span class="math-container">$\dfrac{{87}}{88} = 87;$</span></p>
<p>this led me to the point from where I started.</p>
| Phicar | 78,870 | <p>So <span class="math-container">$$2^{88}\equiv 1\pmod {89}$$</span> and <span class="math-container">$(2,89)=1$</span> so
<span class="math-container">$$2(2^{87}-1)=2^{88}-2\equiv 1-2=-1 \pmod {89}$$</span> so <span class="math-container">$$2^{87}-1\equiv -2^{-1}\pmod {89},$$</span>
but <span class="math-container">$2\times 45=90\equiv 1 \pmod {89} and so$</span> it should be 44.</p>
|
2,477,676 | <p>I'm supposed to prove that for any Random Variable X, </p>
<p>$E[X^4] \ge \frac 14 P(X^2\ge \frac 12)$</p>
<p>I tried substituting the definitions of expected value and of the probability into the inequality, but that gets me nowhere. </p>
<p>Any tips on where to go with this proof? Would a moment generating function lead me in the right direction? Thank you</p>
| Abhiram Natarajan | 481,835 | <p>Use <a href="https://en.wikipedia.org/wiki/Markov%27s_inequality" rel="nofollow noreferrer">Markov's inequality</a>. For a random variable $$X \ge 0, P[X \ge a] \le \frac{E[X]}{a}.$$</p>
<p>We have</p>
<p>\begin{align}
P[X^2 \ge \frac{1}{2}] &= P[X^4 \ge \frac{1}{4}] \qquad \textit{[$X^2 \ge 0$, and $(\cdot)^2$ is a non-decreasing bijective function on $\mathbb{R}_+$]}\\
&\le \frac{E[X^4]}{\frac{1}{4}}.
\end{align}</p>
<p>Re-arranging gives the desired result.</p>
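<p>A discrete spot check of the bound $P[X^2\ge \tfrac12]\le 4\,E[X^4]$ (the four-point distribution below is an arbitrary choice):</p>

```python
from fractions import Fraction

dist = {Fraction(-1): Fraction(1, 4), Fraction(0): Fraction(1, 4),
        Fraction(1, 2): Fraction(1, 4), Fraction(1): Fraction(1, 4)}
e_x4 = sum(p * x**4 for x, p in dist.items())
prob = sum(p for x, p in dist.items() if x * x >= Fraction(1, 2))
assert prob <= 4 * e_x4
```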
|
3,073,832 | <p>I need to understand the meaning of this mathematical concept: "undecided/undecidable". </p>
<p>I know what it means in the English dictionary. But, I don't know what it means mathematically.</p>
<p>If You answer this question with possible mathematical examples, it will be very helpful to understand this issue.</p>
<p>Thank you very much!</p>
| J.G. | 56,861 | <p>If a proposition <span class="math-container">$p$</span> can be stated in the language of a theory <span class="math-container">$T$</span>, we say <span class="math-container">$p$</span> is undecidable in <span class="math-container">$T$</span> if <span class="math-container">$T$</span> contains neither a proof nor a disproof of <span class="math-container">$p$</span>.</p>
|
3,464,615 | <p>A novel process of manufacturing laptop screens is under test. In recent tests, it is found that 75% of the screens are acceptable. What is the most probable number of acceptable screens in the next batch of 10 screens and what is the probability?</p>
<p>Does that mean 7 screens out of 10 will pass with a probability of 0.75?
I was skeptical about using the binomial or the geometric probability law, because the two calculations give very different results.
Any help would be appreciated!</p>
| Community | -1 | <p>Tabulating $\binom{10}{x}(3/4)^x(1/4)^{10-x}$ for $0 \le x \le 10$, we get that $8$ is the most likely number, with probability about $28.2\%$.</p>
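<p>The tabulation is a few lines of Python:</p>

```python
from math import comb

pmf = [comb(10, x) * 0.75**x * 0.25**(10 - x) for x in range(11)]
mode = max(range(11), key=lambda x: pmf[x])
assert mode == 8
assert abs(pmf[8] - 0.2816) < 5e-4   # about 28.2%
```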
|
2,115,532 | <blockquote>
<p>Let $\mu$ be a $\sigma$-finite measure on $(X,\mathcal{A})$. Then
there are finite measures $(\mu_n)_{n \in \mathbb{N}}$ on
$(X,\mathcal{A})$ such that $$\mu = \sum_{n \in \mathbb{N}}\mu_n$$</p>
</blockquote>
<p>So if $\mu$ is $\sigma$-finite, we have that $$X = \bigcup_{n \in \mathbb{N}}X_n$$ for some measurable sets $X_n$ with $\mu(X_n) < \infty$ for any $n \in \mathbb{N}$. My first idea was something like restricting the measure to the sets $X_n$, but $\mu_n$ must be defined on $\mathcal{A}$. Any hint?</p>
| operatorerror | 210,391 | <p>Try evaluating
$$
\int_{\gamma}\frac{e^z}{z}dz
$$
around the unit circle. This will be easy to tackle using the integral formula you mentioned. </p>
<p>Let's just make sure it's the right integral. Parametrize the path as
$z=e^{it}\implies dz=ie^{it}dt$ and
$$
\int_{\gamma}\frac{e^z}{z}dz=
i\int_{0}^{2\pi} e^{e^{it}}\,\text{d}t
$$</p>
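<p>(And a numerical cross-check of the contour integral, using the trapezoidal rule on the circle; by the integral formula it should equal $2\pi i\, e^0 = 2\pi i$:)</p>

```python
import cmath, math

N = 2000
total = 0j
for k in range(N):
    t = 2 * math.pi * k / N
    z = cmath.exp(1j * t)                                    # z = e^{it}
    total += cmath.exp(z) / z * 1j * z * (2 * math.pi / N)   # f(z(t)) z'(t) dt
assert abs(total - 2j * math.pi) < 1e-9
```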
|
3,042,802 | <p>For each <span class="math-container">$n ≥ 1$</span>, let <span class="math-container">$T_n = \{x ∈ l_2(N) : ||x||_1 ≤ n \}$</span>.</p>
<p>For <span class="math-container">$n ≥ 1$</span>, is <span class="math-container">$T_n$</span> an absorbing subset of <span class="math-container">$l_2(N)$</span>? Why or why not?
I would like to show that <span class="math-container">$T_n$</span> has empty interior for all <span class="math-container">$n ≥ 1$</span>, and that <span class="math-container">$T_n$</span> is closed in <span class="math-container">$l_2(N)$</span> for all <span class="math-container">$n ≥ 1$</span>. </p>
| user289143 | 289,143 | <p>Since <span class="math-container">$l_1(\mathbb{N})$</span> is a proper subset of <span class="math-container">$l_2(\mathbb{N})$</span> we can take <span class="math-container">$x \in l_2(\mathbb{N})- l_1(\mathbb{N})$</span>, i.e. <span class="math-container">$||x||_2< \infty$</span> and <span class="math-container">$||x||_1= \infty$</span>. Now suppose <span class="math-container">$T_n$</span> is absorbing, i.e. <span class="math-container">$\exists t >0$</span> such that <span class="math-container">$t^{-1}x \in T_n$</span>.
But this means that <span class="math-container">$t^{-1}||x||_1=||t^{-1}x||_1 \leq n$</span> or in other words <span class="math-container">$||x||_1 \leq t \ n < \infty$</span>. Thus <span class="math-container">$x \in l_1(\mathbb{N})$</span> and it's a contradiction.</p>
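<p>A concrete witness for $l_1(\mathbb{N}) \subsetneq l_2(\mathbb{N})$ is $x_k = 1/k$, easy to see numerically (the truncation length is arbitrary):</p>

```python
l2_partial = sum(1 / k**2 for k in range(1, 100_001))
l1_partial = sum(1 / k for k in range(1, 100_001))
assert l2_partial < 2       # converges (to pi^2/6), so x is in l_2
assert l1_partial > 10      # harmonic sums grow without bound, so x is not in l_1
```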
|
358,102 | <p>How would I go about doing this?</p>
<p>I assume it is some integral I have to solve, but I have no idea what.</p>
<p>(Note:Not a physicist so please excuse incompetence with regard standard notation.)</p>
<p>Context is I want to estimate the energy of N point particles spread over the unit sphere. This is an equation of the form $E(N)=N^2/2 - aN^{3/2}$. I know the $N^2/2$ comes from the uniform charge density on the sphere, and have been told that the $N^{3/2}$ comes from "recovering the energy of a distribution of point charges, therefore subtracting the self-energies of a set of N uniformly charged disks, which can be shown to be proportional to $N^{3/2}$''.</p>
<p>I get what it is doing, but I don't know how to show it is proportional to $N^{3/2}$. I have already managed to show the $N^2/2$ term though. Also I don't need to find a, I just need to show the power.</p>
| Arthur | 15,500 | <p>Let a vertex $v$ be in two different strongly connected components $G_1$ and $G_2$. Then there is one vertex $v_1 \in G_1\setminus G_2$ and one vertex $v_2 \in G_2\setminus G_1$ so that $v$ is strongly connected to both of them. Therefore $v_1$ and $v_2$ are also strongly connected to each other via $v$, and thus have to be part of the same strongly connected component. This is a contradiction, so the assumption must be false.</p>
<hr>
<p>Edit after the question was updated:</p>
<p>So, in the image given in the question, the right diamond-shaped side of the graph is strongly connected, and the left diamond is strongly connected, and the question is, are they part of the same strongly connected component, seeing that the middle vertex is in both of them? (Correct me in the comments if this is not what you were asking)</p>
<p>The answer is yes, the whole graph is strongly connected. Pick any two vertices. If they are in the same half of the graph, just go around the diamond. If they are in different halves, go around the diamond until you get to the middle vertex, then go around the other diamond until you get to your other vertex.</p>
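<p>The argument can also be checked computationally. The sketch below (an illustrative implementation, not from the original post) computes strongly connected components via mutual reachability and confirms that two directed cycles sharing a single vertex merge into one component, as with the two diamonds above:</p>

```python
# Strongly connected components by mutual reachability (transitive closure).
def sccs(n, edges):
    reach = [[i == j for j in range(n)] for i in range(n)]
    for u, v in edges:
        reach[u][v] = True
    for k in range(n):                      # Floyd-Warshall-style closure
        for i in range(n):
            for j in range(n):
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    comps = {}
    for v in range(n):
        key = frozenset(j for j in range(n) if reach[v][j] and reach[j][v])
        comps.setdefault(key, set()).add(v)
    return list(comps.values())

# Cycle 0 -> 1 -> 2 -> 0 and cycle 2 -> 3 -> 4 -> 2 share only vertex 2,
# yet the whole graph forms a single strongly connected component.
comps = sccs(5, [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 2)])
assert len(comps) == 1 and comps[0] == {0, 1, 2, 3, 4}
print(comps)
```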
|
54,506 | <p><a href="http://www.hardocp.com/news/2011/07/29/batman_equation/" rel="noreferrer">HardOCP</a> has an image with an equation which apparently draws the Batman logo. Is this for real?</p>
<p><img src="https://i.stack.imgur.com/VYKfg.jpg" alt="Batman logo"></p>
<p><strong>Batman Equation in text form:</strong>
\begin{align}
&\left(\left(\frac x7\right)^2\sqrt{\frac{||x|-3|}{|x|-3}}+\left(\frac y3\right)^2\sqrt{\frac{\left|y+\frac{3\sqrt{33}}7\right|}{y+\frac{3\sqrt{33}}7}}-1 \right) \\
&\qquad \qquad \left(\left|\frac x2\right|-\left(\frac{3\sqrt{33}-7}{112}\right)x^2-3+\sqrt{1-(||x|-2|-1)^2}-y \right) \\
&\qquad \qquad \left(3\sqrt{\frac{|(|x|-1)(|x|-.75)|}{(1-|x|)(|x|-.75)}}-8|x|-y\right)\left(3|x|+.75\sqrt{\frac{|(|x|-.75)(|x|-.5)|}{(.75-|x|)(|x|-.5)}}-y \right) \\
&\qquad \qquad \left(2.25\sqrt{\frac{(x-.5)(x+.5)}{(.5-x)(.5+x)}}-y \right) \\
&\qquad \qquad \left(\frac{6\sqrt{10}}7+(1.5-.5|x|)\sqrt{\frac{||x|-1|}{|x|-1}} -\frac{6\sqrt{10}}{14}\sqrt{4-(|x|-1)^2}-y\right)=0
\end{align}</p>
| Willie Wong | 1,543 | <p>Looking at the equation, it appears to contain terms of the form
$$ \sqrt{\frac{| |x| - 1 |}{|x| - 1}} $$
which evaluates to
$$\begin{cases} 1 & |x| > 1\\ i & |x| < 1\end{cases} $$</p>
<p>Since any non-zero real number $y$ cannot be equal to a purely imaginary non-zero number, the presence of that term is a way of writing a piece-wise defined function as a single expression. My guess is that if you try to plot this in $\mathbb{C}^2$ instead of $\mathbb{R}^2$ you will get all kinds of awful. </p>
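<p>The gating behaviour described above is easy to verify numerically; here is a small illustrative Python sketch using <code>cmath</code> for the complex square root:</p>

```python
import cmath

def gate(x):
    """sqrt(| |x| - 1 | / (|x| - 1)): equals 1 for |x| > 1, and the
    imaginary unit i for |x| < 1 (undefined at |x| = 1)."""
    return cmath.sqrt(abs(abs(x) - 1) / (abs(x) - 1))

assert gate(2.0) == 1     # real: the factor is "switched on"
assert gate(-3.5) == 1
assert gate(0.5) == 1j    # purely imaginary: no real y can match it
print(gate(2.0), gate(0.5))
```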
|
54,506 | <p><a href="http://www.hardocp.com/news/2011/07/29/batman_equation/" rel="noreferrer">HardOCP</a> has an image with an equation which apparently draws the Batman logo. Is this for real?</p>
<p><img src="https://i.stack.imgur.com/VYKfg.jpg" alt="Batman logo"></p>
<p><strong>Batman Equation in text form:</strong>
\begin{align}
&\left(\left(\frac x7\right)^2\sqrt{\frac{||x|-3|}{|x|-3}}+\left(\frac y3\right)^2\sqrt{\frac{\left|y+\frac{3\sqrt{33}}7\right|}{y+\frac{3\sqrt{33}}7}}-1 \right) \\
&\qquad \qquad \left(\left|\frac x2\right|-\left(\frac{3\sqrt{33}-7}{112}\right)x^2-3+\sqrt{1-(||x|-2|-1)^2}-y \right) \\
&\qquad \qquad \left(3\sqrt{\frac{|(|x|-1)(|x|-.75)|}{(1-|x|)(|x|-.75)}}-8|x|-y\right)\left(3|x|+.75\sqrt{\frac{|(|x|-.75)(|x|-.5)|}{(.75-|x|)(|x|-.5)}}-y \right) \\
&\qquad \qquad \left(2.25\sqrt{\frac{(x-.5)(x+.5)}{(.5-x)(.5+x)}}-y \right) \\
&\qquad \qquad \left(\frac{6\sqrt{10}}7+(1.5-.5|x|)\sqrt{\frac{||x|-1|}{|x|-1}} -\frac{6\sqrt{10}}{14}\sqrt{4-(|x|-1)^2}-y\right)=0
\end{align}</p>
| ShreevatsaR | 205 | <p>As Willie Wong observed, including an expression of the form $\displaystyle \frac{|\alpha|}{\alpha}$ is a way of ensuring that $\alpha > 0$. (As $\sqrt{|\alpha|/\alpha}$ is $1$ if $\alpha > 0$ and non-real if $\alpha < 0$.)</p>
<hr>
<p>The ellipse $\displaystyle \left( \frac{x}{7} \right)^{2} + \left( \frac{y}{3} \right)^{2} - 1 = 0$ looks like this:</p>
<p><img src="https://i.stack.imgur.com/PXv4W.png" alt="ellipse"></p>
<p>So the curve $\left( \frac{x}{7} \right)^{2}\sqrt{\frac{\left| \left| x \right|-3 \right|}{\left| x \right|-3}} + \left( \frac{y}{3} \right)^{2}\sqrt{\frac{\left| y+3\frac{\sqrt{33}}{7} \right|}{y+3\frac{\sqrt{33}}{7}}} - 1 = 0$ is the above ellipse, in the region where $|x|>3$ and $y > -3\sqrt{33}/7$:</p>
<p><img src="https://i.stack.imgur.com/oeCdG.png" alt="ellipse cut"></p>
<p>That's the first factor. </p>
<hr>
<p>The second factor is quite ingeniously done. The curve $\left| \frac{x}{2} \right|\; -\; \frac{\left( 3\sqrt{33}-7 \right)}{112}x^{2}\; -\; 3\; +\; \sqrt{1-\left( \left| \left| x \right|-2 \right|-1 \right)^{2}}-y=0$ looks like:</p>
<p><img src="https://i.stack.imgur.com/vAoFe.png" alt="second factor"></p>
<p>This is got by adding $y = \left| \frac{x}{2} \right| - \frac{\left( 3\sqrt{33}-7 \right)}{112}x^{2} - 3$, a parabola on the positive-x side, reflected:</p>
<p><img src="https://i.stack.imgur.com/Vfyre.png" alt="second factor first term"></p>
<p>and $y = \sqrt{1-\left( \left| \left| x \right|-2 \right|-1 \right)^{2}}$, the upper halves of the four circles $\left( \left| \left| x \right|-2 \right|-1 \right)^2 + y^2 = 1$:</p>
<p><img src="https://i.stack.imgur.com/69Pdf.png" alt="second factor second term"></p>
<hr>
<p>The third factor $9\sqrt{\frac{\left( \left| \left( 1-\left| x \right| \right)\left( \left| x \right|-.75 \right) \right| \right)}{\left( 1-\left| x \right| \right)\left( \left| x \right|-.75 \right)}}\; -\; 8\left| x \right|\; -\; y\; =\; 0$ is just the pair of lines y = 9 - 8|x|:</p>
<p><img src="https://i.stack.imgur.com/3CJGO.png" alt="Third factor without cut"></p>
<p>truncated to the region $0.75 < |x| < 1$.</p>
<hr>
<p>Similarly, the fourth factor $3\left| x \right|\; +\; .75\sqrt{\left( \frac{\left| \left( .75-\left| x \right| \right)\left( \left| x \right|-.5 \right) \right|}{\left( .75-\left| x \right| \right)\left( \left| x \right|-.5 \right)} \right)}\; -\; y\; =\; 0$ is the pair of lines $y = 3|x| + 0.75$:</p>
<p><img src="https://i.stack.imgur.com/Sh0Bp.png" alt="fourth factor without cut"></p>
<p>truncated to the region $0.5 < |x| < 0.75$.</p>
<hr>
<p>The fifth factor $2.25\sqrt{\frac{\left| \left( .5-x \right)\left( x+.5 \right) \right|}{\left( .5-x \right)\left( x+.5 \right)}}\; -\; y\; =\; 0$ is the line $y = 2.25$ truncated to $-0.5 < x < 0.5$.</p>
<hr>
<p>Finally, $\frac{6\sqrt{10}}{7}\; +\; \left( 1.5\; -\; .5\left| x \right| \right)\; -\; \frac{\left( 6\sqrt{10} \right)}{14}\sqrt{4-\left( \left| x \right|-1 \right)^{2}}\; -\; y\; =\; 0$ looks like:</p>
<p><img src="https://i.stack.imgur.com/XKs3Z.png" alt="sixth factor without cut"></p>
<p>so the sixth factor $\frac{6\sqrt{10}}{7}\; +\; \left( 1.5\; -\; .5\left| x \right| \right)\sqrt{\frac{\left| \left| x \right|-1 \right|}{\left| x \right|-1}}\; -\; \frac{\left( 6\sqrt{10} \right)}{14}\sqrt{4-\left( \left| x \right|-1 \right)^{2}}\; -\; y\; =\; 0$ looks like</p>
<p><img src="https://i.stack.imgur.com/OO3np.png" alt="sixth factor"></p>
<hr>
<p>As a product of factors is $0$ iff any one of them is $0$, multiplying these six factors puts the curves together, giving: (the software, Grapher.app, chokes a bit on the third factor, and entirely on the fourth)</p>
<p><img src="https://i.stack.imgur.com/YmA4v.png" alt="Wholly Batman"></p>
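<p>As a numerical footnote (an illustrative sketch, not part of the original construction), one can evaluate the first factor with complex square roots and confirm that it vanishes on the ellipse only where $|x| > 3$, and becomes non-real elsewhere:</p>

```python
import cmath
import math

SQ33 = math.sqrt(33)

def gate(a):
    # sqrt(|a|/a): 1 if a > 0, the imaginary unit if a < 0
    return cmath.sqrt(abs(a) / a)

def first_factor(x, y):
    return ((x / 7) ** 2 * gate(abs(x) - 3)
            + (y / 3) ** 2 * gate(y + 3 * SQ33 / 7) - 1)

# A point on the ellipse (x/7)^2 + (y/3)^2 = 1 with |x| > 3:
# the factor is (numerically) zero there.
x0 = 5.0
y0 = 3 * math.sqrt(1 - (x0 / 7) ** 2)
assert abs(first_factor(x0, y0)) < 1e-12

# For |x| < 3 the factor picks up an imaginary part,
# so no real point (x, y) there can satisfy the equation.
assert first_factor(2.0, 1.0).imag != 0
print(first_factor(x0, y0))
```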
|
54,506 | <p><a href="http://www.hardocp.com/news/2011/07/29/batman_equation/" rel="noreferrer">HardOCP</a> has an image with an equation which apparently draws the Batman logo. Is this for real?</p>
<p><img src="https://i.stack.imgur.com/VYKfg.jpg" alt="Batman logo"></p>
<p><strong>Batman Equation in text form:</strong>
\begin{align}
&\left(\left(\frac x7\right)^2\sqrt{\frac{||x|-3|}{|x|-3}}+\left(\frac y3\right)^2\sqrt{\frac{\left|y+\frac{3\sqrt{33}}7\right|}{y+\frac{3\sqrt{33}}7}}-1 \right) \\
&\qquad \qquad \left(\left|\frac x2\right|-\left(\frac{3\sqrt{33}-7}{112}\right)x^2-3+\sqrt{1-(||x|-2|-1)^2}-y \right) \\
&\qquad \qquad \left(3\sqrt{\frac{|(|x|-1)(|x|-.75)|}{(1-|x|)(|x|-.75)}}-8|x|-y\right)\left(3|x|+.75\sqrt{\frac{|(|x|-.75)(|x|-.5)|}{(.75-|x|)(|x|-.5)}}-y \right) \\
&\qquad \qquad \left(2.25\sqrt{\frac{(x-.5)(x+.5)}{(.5-x)(.5+x)}}-y \right) \\
&\qquad \qquad \left(\frac{6\sqrt{10}}7+(1.5-.5|x|)\sqrt{\frac{||x|-1|}{|x|-1}} -\frac{6\sqrt{10}}{14}\sqrt{4-(|x|-1)^2}-y\right)=0
\end{align}</p>
| J. M. ain't a mathematician | 498 | <p>Since people (not from this site, but still...) keep bugging me, and I am unable to edit my previous answer, here's <em>Mathematica</em> code for plotting this monster:</p>
<pre><code>Plot[{With[{w = 3 Sqrt[1 - (x/7)^2],
l = 6/7 Sqrt[10] + (3 + x)/2 - 3/7 Sqrt[10] Sqrt[4 - (x + 1)^2],
h = (3 (Abs[x - 1/2] + Abs[x + 1/2] + 6) -
11 (Abs[x - 3/4] + Abs[x + 3/4]))/2,
r = 6/7 Sqrt[10] + (3 - x)/2 - 3/7 Sqrt[10] Sqrt[4 - (x - 1)^2]},
w + (l - w) UnitStep[x + 3] + (h - l) UnitStep[x + 1] +
(r - h) UnitStep[x - 1] + (w - r) UnitStep[x - 3]],
1/2 (3 Sqrt[1 - (x/7)^2] + Sqrt[1 - (Abs[Abs[x] - 2] - 1)^2] + Abs[x/2] -
((3 Sqrt[33] - 7)/112) x^2 - 3) (Sign[x + 4] - Sign[x - 4]) - 3*Sqrt[1 - (x/7)^2]},
{x, -7, 7}, AspectRatio -> Automatic, Axes -> None, Frame -> True,
PlotStyle -> Black]
</code></pre>
<p><img src="https://i.stack.imgur.com/zYXKB.png" alt="Mathematica graphics"></p>
<p>This should work even for versions that do not have the <code>Piecewise[]</code> construct. Enjoy. :P</p>
|
2,461,615 | <p>I am still at college. I need to solve this problem.</p>
<p>The total amount to receive in 1 year is 17500 CAD.
And the university pays its students each 2 weeks (26 payments per year). </p>
<p>How much does a student have to receive for 4 months?
I have calculated this in 2 ways (both seem ok) but results are different. Which one is the right one and why? </p>
<pre><code>a) 17500CAD / 12 months = 1458.33CAD each month
1458.33CAD x 4 months = 5833 (total amount of money in 4 months)
If money has to be given each 2 weeks:
5833 / 8 = 729.125 CAD
b) 17500 / 26 = 673.08 each 2 weeks
673.08 x 8 = 5384.62 (total amount of money in 4 months)
</code></pre>
<p>I think the right one is a), because b) is assuming the student has been receiving money for the whole year (26 payments). But it is not the case.</p>
<p>Thank you</p>
| Ross Millikan | 1,827 | <p>There are more than $8$ two week periods in four months. If you are paid by the month, the first calculation is correct and the total should be $5833$. On average there are $17$ weeks (and a little) in four months so you would get eight full two week paychecks and one more smaller one. The checks will be smaller than the $729.125$ you calculate. Your second calculation assumes a $364$ day year. If you account for the extra week you will be very close to the first approach.</p>
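<p>The arithmetic behind both approaches, as a quick illustrative check (the figures are the ones quoted in the question):</p>

```python
# Approach (a): monthly rate, four months' worth.
monthly = 17500 / 12
four_months = monthly * 4
assert round(monthly, 2) == 1458.33
assert round(four_months) == 5833

# Approach (b): 26 biweekly payments per year (a 364-day year), eight of them.
biweekly = 17500 / 26
assert round(biweekly, 2) == 673.08
assert round(biweekly * 8, 2) == 5384.62

# Four calendar months contain a bit more than 17 weeks, i.e. more than
# eight full two-week periods -- the source of the discrepancy.
weeks_in_four_months = 365 / 12 * 4 / 7
assert weeks_in_four_months > 17
print(four_months, biweekly * 8, weeks_in_four_months)
```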
|
128,708 | <p>My lecture notes say that for every bilinear form there exists a linear operator such that $$\tau (v,w) = v.(Tw)$$ and that there must exist some other linear operator $S$ such that $$(Sv).w = v.(Tw).$$ I understand everything up to there but then it says that it's easy to see that in an orthonormal basis, the matrix of S is just the transpose of the matrix of T. I can't get my head around why it has to be an orthonormal basis. Surely, if $A$ is the matrix for $T$ and $B$ is the matrix for $S$ then,
$$(Sv).w = v.(Tw)$$
$$(B\underline{v})^T\underline{w} = \underline{v}^T A\underline{w}$$
$$\underline{v}^T B^T \underline{w} = \underline{v}^T A\underline{w}$$
So
$B^T = A$ for any basis? Where am I going wrong? </p>
| Robert Israel | 8,508 | <p>The matrices that preserve the set $P$ of probability vectors are those whose columns are members of $P$. This is obvious since if $x \in P$, $M x$ is a convex combination of the columns of $M$ with coefficients given by the entries of $x$. Each column of $M$ must be in $P$ (take $x$ to be a vector with a single $1$ and all else $0$), and $P$ is a convex set.</p>
|
128,708 | <p>My lecture notes say that for every bilinear form there exists a linear operator such that $$\tau (v,w) = v.(Tw)$$ and that there must exist some other linear operator $S$ such that $$(Sv).w = v.(Tw).$$ I understand everything up to there but then it says that it's easy to see that in an orthonormal basis, the matrix of S is just the transpose of the matrix of T. I can't get my head around why it has to be an orthonormal basis. Surely, if $A$ is the matrix for $T$ and $B$ is the matrix for $S$ then,
$$(Sv).w = v.(Tw)$$
$$(B\underline{v})^T\underline{w} = \underline{v}^T A\underline{w}$$
$$\underline{v}^T B^T \underline{w} = \underline{v}^T A\underline{w}$$
So
$B^T = A$ for any basis? Where am I going wrong? </p>
| Hugo Nava Kopp | 130,222 | <p>Since you originally asked about $L^1$ spaces I dared to add this comment. </p>
<p>If one wants to preserve <strong>the integral</strong> in (finite-dimensional and with finite measure ) $L^1$ spaces rather than <strong>the norm</strong> of $\ell^p$, the matrices $M$ that do this <strong><em>are more general than the stochastic matrices</em></strong>. </p>
<p>One can define these matrices with two components, labeled $S$ (for the stochastic component) and $G$ (for the generalized permutation matrix component) such that that $M= S * G$, where * represents the Hadamard product. </p>
<p>The $S$ matrices are effectively stochastic matrices as shown by Robert Israel. </p>
<p>The $G$ matrix is given by the unique matrix resulting of the outer product $u_{\mu} \otimes \frac{1}{u_{\mu}} := | u_{\mu} \rangle \langle \frac{1}{u_{\mu}} |$ of the unique column vector
$u_{\mu} :=\left(\begin{array}{c}\mu_1 \\ \mu_2 \\ \ldots \\ \mu_n\end{array}\right)$ and the also unique row vector $\frac{1}{u_{\mu}} :=\left(\frac{1}{\mu_1} \ \frac{1}{\mu_2} \ \ldots \ \frac{1}{\mu_n}\right)$:</p>
<p>$G:=u_{\mu} \otimes \frac{1}{u_{\mu}} = \left(\begin{array}{cccc}
1 & \frac{\mu_2}{\mu_1} & \ldots & \frac{\mu_n}{\mu_1} \\
\frac{\mu_1}{\mu_2} & 1 & \ldots & \frac{\mu_n}{\mu_2} \\
\ldots & \ldots & \ldots & \ldots \\
\frac{\mu_1}{\mu_n} & \frac{\mu_2}{\mu_n} & \ldots & 1 \end{array}\right)$</p>
<p>where $\mu_i$ <em>are the measures of the generating family of subsets</em> $\{ A_i \}$ of the underlying sigma algebra, <em>i.e.</em> $\mu_i := \mu(A_i)$ and $n = |\{ A_i \}|$. </p>
<p>To give you an example of where the stochastic component $S$ is absent, take the stochastic matrix $S$ to be simply a permutation matrix. In this case your $M$ that preserves the integral is a generalized permutation matrix whose non-zero elements are of the form $A_{i,j} =\frac{\mu_{j}}{\mu_i}$. </p>
<p>To see why the measure values $\mu_i$ are needed in the definition of $M$ recall that a $L^p$ space is defined given a measure space $(X,\Sigma,\mu)$. So if the $L^1$ space is finite dimensional then the vectors $v$ in $L^1$ are <em>simple functions</em>, whose integral is defined as the product $\langle u_{\mu}|v\rangle$. And if the measure is <em>finite</em>, then this integral is always well defined.</p>
<p>I hope I made myself clear.</p>
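<p>Here is a small numerical sketch of the pure-$G$ case (illustrative, with arbitrary made-up measure values): a generalized permutation matrix with nonzero entries $A_{i,\sigma(i)} = \mu_{\sigma(i)}/\mu_i$ preserves the integral $\sum_i \mu_i v_i$:</p>

```python
# Measures mu_i of the generating subsets (arbitrary positive demo values).
mu = [1.0, 2.0, 3.0]
sigma = [1, 2, 0]          # a permutation of {0, 1, 2}
n = len(mu)

# Generalized permutation matrix: M[i][sigma[i]] = mu[sigma[i]] / mu[i].
M = [[0.0] * n for _ in range(n)]
for i in range(n):
    M[i][sigma[i]] = mu[sigma[i]] / mu[i]

def integral(v):
    # Integral of a simple function v w.r.t. the measure: sum_i mu_i * v_i.
    return sum(m * x for m, x in zip(mu, v))

v = [4.0, -1.0, 0.5]
Mv = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
assert abs(integral(Mv) - integral(v)) < 1e-12   # integral is preserved
print(integral(v), integral(Mv))
```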
|
2,046,521 | <p>Of course, faster calculations help solve problems quickly. But does that also mean that faster calculations open more opportunities for a career in mathematics (like a researcher)? I like mathematics and can spend weeks trying to solve any problem or understanding any concept. But nowadays, there are many contests that focus on faster calculations rather than problem solving. I am very slow at calculations due to which I end up doing badly in these types of contests. Does that mean I am lagging somewhere? Can this cause hindrance in pursuing a career in mathematics? </p>
| Vidyanshu Mishra | 363,566 | <p>After reading your question I remembered an interview given by Scott Flansburg, who is widely regarded as the fastest calculating human in the world (I can't link the video, as I only found it by chance). He said in it that fast calculation is largely a consequence of strong logic and the ability to spot patterns. I thought starting the answer with this definition of calculation was a good idea; now let's come to your question.</p>
<p>Pure mathematics, often called higher mathematics, is not closely tied to calculation. It rests on higher-order thinking, strong reasoning ability, and the application of knowledge (since it deals with problem solving to a large extent). Mathematics contests held worldwide test your knowledge of higher mathematics and hence test exactly those skills mentioned above.</p>
<p>So, in my opinion, you don't need to worry about the role of fast calculation in international or higher-level mathematics contests.</p>
|
1,756,685 | <p>For natural numbers—that is, integers greater than or equal to 1—prove that: <br/>
$n^{2n+1}\ge(n+1)^{n+1}(n-1)^{n}$ <br/></p>
<p>Equivalently, show that $(1-1/n)^n$ is strictly increasing.</p>
| Rene Schipperus | 149,912 | <p>One can check that it can be rearranged to</p>
<p>$$1-\frac{1}{n+1}\leq \left(1-\frac{1}{(n+1)^2}\right)^{n+1}$$</p>
<p>And this is an application of the Bernoulli inequality $1+kx\leq (1+x)^k$ </p>
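<p>Both the rearranged inequality and the original monotonicity claim are easy to spot-check numerically (illustrative sketch):</p>

```python
# Rearranged form: 1 - 1/(n+1) <= (1 - 1/(n+1)^2)^(n+1), an instance of
# Bernoulli's inequality (1 + x)^k >= 1 + kx with x = -1/(n+1)^2, k = n+1.
for n in range(1, 200):
    assert 1 - 1 / (n + 1) <= (1 - 1 / (n + 1) ** 2) ** (n + 1)

# Equivalent statement from the question: (1 - 1/n)^n is strictly increasing.
seq = [(1 - 1 / n) ** n for n in range(1, 200)]
assert all(a < b for a, b in zip(seq, seq[1:]))
print(seq[:5])
```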
|
1,756,685 | <p>For natural numbers—that is, integers greater than or equal to 1—prove that: <br/>
$n^{2n+1}\ge(n+1)^{n+1}(n-1)^{n}$ <br/></p>
<p>Equivalently, show that $(1-1/n)^n$ is strictly increasing.</p>
| TOM | 118,685 | <p>For $n \ge 2$, the first inequality is equal to </p>
<p>$(1 + \frac{1}{n^2 - 1})^n \ge 1 + \frac{1}{n}$.</p>
<p>This is obvious by the following.</p>
<p>$(1 + \frac{1}{n^2 - 1})^n \ge 1 + \frac{n}{n^2 - 1} \ge 1 + \frac{1}{n}$.</p>
|
1,355,509 | <p>In my mathematical travels, I've stumbled upon the implicit formula $y^2+x^2+\frac{y}{x}=1$ and found that every graphing program I've plugged it in to seems to believe that there is large set of points which satisfy the equation $(y^2+x^2+\frac{y}{x})^{-1}=1$ which do not satisfy the original equation and this has me quite perplexed. I suspect that this is simply a glitch in the software and this question might therefore be better suited in the CS forum but I figured I would post it here first in the event that someone may have a mathematical explanation for this bizarre behavior. Any and all insights are welcome!</p>
| Will Jagy | 10,400 | <p>The curve is rotationally symmetric about the origin; your version does not allow $x=0$, but the curve becomes a smooth variety if $(0,0)$ is included.</p>
<p>The smooth implicit function is
$$ x^3 + x y^2 + y - x = 0 $$
with gradient
$$ \left\langle 3 x^2 + y^2 - 1, 2xy + 1 \right\rangle $$</p>
<p>For large $|y|,$ solving the quadratic formula in $y$ shows $xy \approx -1.$ Indeed, large or small, for $x \neq 0$ we have
$$ y = \frac{-1 \pm \sqrt {1 + 4 x^2 - 4 x^4}}{2x} $$ and we require
$1 + 4 x^2 - 4 x^4 \geq 0.$</p>
<p><img src="https://i.stack.imgur.com/sJTiG.jpg" alt="enter image description here"></p>
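<p>A quick numerical consistency check of the explicit branch formula (illustrative sketch):</p>

```python
import math

def f(x, y):
    # The smooth implicit function x^3 + x*y^2 + y - x.
    return x**3 + x * y**2 + y - x

def branches(x):
    # Solving the quadratic (in y) x*y^2 + y + (x^3 - x) = 0, for x != 0;
    # real solutions require 1 + 4x^2 - 4x^4 >= 0.
    disc = 1 + 4 * x**2 - 4 * x**4
    r = math.sqrt(disc)
    return ((-1 + r) / (2 * x), (-1 - r) / (2 * x))

for x in (0.5, -0.8, 1.05):
    for y in branches(x):
        assert abs(f(x, y)) < 1e-9   # both branches lie on the curve
print(branches(0.5))
```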
|
18,511 | <p>I have a notebook written in Mathematica 8 in which I imported Tiff images and everything worked fine. Since I installed Mathematica 9, I get the error:</p>
<pre><code>In[14]:= Files[[1]][[1]]
Import[Files[[1]][[1]],"TIFF"]
Out[14]= Growth_1_130124_1353/Growth_1_130124_1353_T0001.tif
During evaluation of In[14]:= Image::imgcsmis: The specified color space ColorProfileData[<>,Description->sRGB IEC61966-2.1,DeviceColorSpace->RGB,IndependentColorSpace->XYZ] and the number of channels 1 are not compatible. >>
Out[15]= Image[RawArray[Byte,<1024,1360>],Byte,ColorSpace->ColorProfileData[<>,Description->sRGB IEC61966-2.1,DeviceColorSpace->RGB,IndependentColorSpace->XYZ],Interleaving->None]
</code></pre>
<p>How can I solve it?
Cheers,
Andrea</p>
| cormullion | 61 | <p>Looks like it might be a problem introduced with the colorprofile additions. Perhaps Mathematica's getting confused with the grayscale model and RGB profile? Opens fine in Preview.</p>
<p><img src="https://i.stack.imgur.com/p5diK.png" alt="image details"> </p>
|
737,689 | <p>I have to prove that in a partially ordered set, only one of </p>
<blockquote>
<p>$$x<y,x=y,x>y$$ </p>
</blockquote>
<p>can hold. </p>
<p>My book says if both $x<y$ and $x=y$ hold, then this will imply $x<x$, which is a contradiction (contradicting irreflexivity). </p>
<p>I don't understand how this conclusion was reached. Two elements $x$ and $y$ may have multiple relations existing between them. For example, $\langle 2,4\rangle_R$, where $R$ can be $2<4$, $2|4$, $2$ is the largest even number smaller than $4$, etc. These distinct relations don't interact with each other; they have completely distinct identities. </p>
<p>$<$ and $=$ are also distinct relations between $x$ and $y$. One is reflexive while the other is irreflexive. From these two <strong>distinct</strong> relations, how could one ever conclude that irreflexivity (I still don't know of what relation) is being violated? </p>
| Laz | 139,851 | <p>Let us prove the statement not only for <span class="math-container">$p$</span> prime, but for all natural numbers <span class="math-container">$n$</span>.<br />
Indeed, <span class="math-container">$a^n-b^n=(a-b)(a^{n-1}+a^{n-2}b+\dots+ab^{n-2}+b^{n-1})$</span> for all <span class="math-container">$a,b,n\in \mathbb{N}$</span>; since <span class="math-container">$n|(a-b)$</span>, to conclude that <span class="math-container">$n^2$</span> divides <span class="math-container">$a^n-b^n$</span> we only have to prove that <span class="math-container">$a^{n-1}+a^{n-2}b+\dots+ab^{n-2}+b^{n-1}\equiv 0$</span> (mod <span class="math-container">$n$</span>).<br />
We use again the fact that <span class="math-container">$a\equiv b$</span> (mod <span class="math-container">$n$</span>), so that <span class="math-container">$\forall j=0,1,\dots,n-1$</span>, we have <span class="math-container">$a^{n-1-j}b^j\equiv a^{n-1-j}a^j=a^{n-1}$</span> (mod <span class="math-container">$n$</span>), a simple property of congruences, and so:
<span class="math-container">$a^{n-1}+a^{n-2}b+\dots+ab^{n-2}+b^{n-1}\equiv na^{n-1}=0$</span> (mod <span class="math-container">$n$</span>).</p>
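<p>A brute-force spot check of the statement for small values (illustrative sketch): whenever <span class="math-container">$n \mid (a-b)$</span>, indeed <span class="math-container">$n^2 \mid (a^n - b^n)$</span>:</p>

```python
# If a == b (mod n), then a^n == b^n (mod n^2): exhaustive check for small n.
for n in range(2, 12):
    for b in range(1, 30):
        for k in range(1, 5):
            a = b + k * n            # guarantees n | (a - b)
            assert (a**n - b**n) % n**2 == 0
print("verified for n = 2..11")
```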
|
1,136,278 | <p>Prove that $n(n-1)<3^n$ for all $n≥2$. By induction.
What I did: </p>
<p>Step 1- Base case:
Keep n=2</p>
<p>$2(2-1)<3^2$</p>
<p>$2<9$ Thus it holds.</p>
<p>Step 2- Hypothesis: </p>
<p>Assume: $k(k-1)<3^k$</p>
<p>Step 3- Induction:
We wish to prove that:</p>
<p>$(k+1)(k)$<$3^k.3^1$</p>
<p>We know that $k≥2$, so $k+1≥3$ </p>
<p>Then $3k<3^k.3^1$</p>
<p>Therefore, $k<3^k$, which is true for all value of $n≥k≥2$</p>
<p>Is that right? Or the method is wrong? Is there any other methods?</p>
| Community | -1 | <p>Other approach (not induction): by the binomial theorem,
$$(1+2)^n=1+n\cdot 2+\frac12n(n-1)\cdot 2^2+\frac1{3!}n(n-1)(n-2)\cdot 2^3+\cdots>n(n-1).$$</p>
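<p>A direct numerical check of the resulting inequality (illustrative sketch):</p>

```python
# 3^n = (1+2)^n > n(n-1), as the binomial expansion shows; check a range of n.
for n in range(0, 100):
    assert 3**n > n * (n - 1)
print("holds for n = 0..99")
```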
|
579,907 | <p>Let $G$ be a group and let $H$ be a normal subgroup.</p>
<p>Prove that if $S\subseteq G$ generates $G$,
then the set $\{sH\mid s∈S\} ⊆ G/H$ generates $G/H$.</p>
<p>I have no idea how to deal with the question above.
Can somebody please give me some help?</p>
| Ittay Weiss | 30,953 | <p>You can prove in general that if $\psi:G_1\to G_2$ is a surjective group homomorphism, then if $S\subseteq G_1$ generates $G_1$, then $\psi(S)=\{\psi(s)\mid s\in S\}$ generates $G_2$. The proof is quite straightforward, just follows the meaning of being a generating set. </p>
<p>Now, to conclude what you need to show, just remember that for any quotient construction, there is an associated natural surjection: $G\to G/H$, given by $g\mapsto gH$. </p>
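<p>A tiny computational illustration of both facts (an example chosen for illustration, not from the question: $G=\mathbb{Z}_{12}$ under addition, $S=\{5\}$, $H=\{0,4,8\}$):</p>

```python
def generated(gens, op, identity):
    """Closure of the generators under the group operation (finite groups)."""
    elems, frontier = {identity}, {identity}
    while frontier:
        new = {op(x, g) for x in frontier for g in gens} - elems
        elems |= new
        frontier = new
    return elems

# G = Z_12 under addition; S = {5} generates G since gcd(5, 12) = 1.
G = generated({5}, lambda a, b: (a + b) % 12, 0)
assert G == set(range(12))

# H = <4> = {0, 4, 8} is normal (G abelian); G/H is isomorphic to Z_4,
# with the coset of x represented by x mod 4.  The image of S generates G/H.
quotient = generated({5 % 4}, lambda a, b: (a + b) % 4, 0)
assert quotient == set(range(4))
print(sorted(G), sorted(quotient))
```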
|
4,128,050 | <p><a href="https://i.stack.imgur.com/ybNTh.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ybNTh.jpg" alt="enter image description here" /></a></p>
<p>I think we should use the corollary to solve this problem.</p>
<p>For the first question, I think we can use the property that <span class="math-container">$J$</span> is a free <span class="math-container">$\mathbb{Z}[G]$</span>-module with basis <span class="math-container">$\{s-1,t-1\}$</span>. Hence <span class="math-container">$J\otimes \mathbb{Z^{'}}= 2\mathbb{Z}[G]\otimes \mathbb{Z^{'}}$</span>.</p>
<p>But I don't know how to solve the second question.</p>
<p>Thank you for your help.</p>
| Mod.esty | 766,784 | <p>I think the hardest part of the question is how to prove <span class="math-container">$H_1(G ; \mathbb{Z}^{'}) = \mathbb{Z}$</span>.</p>
<p>We just need to prove that the kernel of <span class="math-container">$J\otimes \mathbb{Z^{'}} \longrightarrow \mathbb{Z}[G]\otimes \mathbb{Z^{'}}$</span> is <span class="math-container">$\{(s-t) \otimes n\mid n\in \mathbb{Z}\}$</span>.</p>
<p>First it is easy to check that <span class="math-container">$(s^2-1)\otimes 1=(s-1)(s+1)\otimes 1 =0 \in J \otimes \mathbb{Z^{'}}$</span>.</p>
<p>For every <span class="math-container">$g(s,t)\otimes 1\in J\otimes \mathbb{Z^{'}}$</span> which is zero in <span class="math-container">$ \mathbb{Z}[G]\otimes \mathbb{Z^{'}}$</span>, we may assume that <span class="math-container">$g(s,t)$</span> has no constant term.</p>
<p>Hence <span class="math-container">$g(s,t)=sf_1(s,t)-tf_2(s,t)=t(f_1(s,t)-f_2(s,t))+(s-t)(f_1(s,t))$</span> with <span class="math-container">$g(1,1)=g(-1,-1)=0$</span>.
We can continue this process for <span class="math-container">$(f_1(s,t)-f_2(s,t))$</span>.</p>
<p>Ultimately, <span class="math-container">$\{(s-t) \otimes n\mid n\in \mathbb{Z}\}$</span> is the kernel.</p>
|
23,485 | <p>Forgive me if this has been brought up either here or meta.se before, but I could not find it in either.</p>
<p>On a user's activity page, beneath their reputation, there is a label that says "top x% overall". Clicking this brings you to the user reputation leagues. However, there appears to be a flaw here.</p>
<p>At the top of this page, it shows a cumulative total of users (as seen in the image below).</p>
<p><a href="https://i.stack.imgur.com/A2Nex.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/A2Nex.png" alt="Total users shown at top of page"></a></p>
<p>Lower down, in the right-hand area, there is a table representing a breakdown of the number of users by reputation total, as shown below.</p>
<p><a href="https://i.stack.imgur.com/CWspR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CWspR.png" alt="Table showing number of users with each reputation total"></a></p>
<p>Adding the values in the second column, we arrive at the same number listed in the first image; that is to say, each row represents the total number of users with at least that reputation but <em>less than the prior row's total</em>.</p>
<p>This is confirmed by looking at the breakdowns; there are in fact 67 users here with >50,000 rep. However, if these data are cumulative, then the bottom-most row, representing users with 1+ reputation, should read the same as the number in the first image. It does not. Furthermore, it does not appear to be <em>exclusive</em> of the cumulative total in the 200+ rep row. Instead, it appears to incorrectly subtract the values of <em>all</em> prior rows, instead of just the cumulative 200+ row. (This detail was pointed out by @quid; thank you!)</p>
| quid | 85,306 | <p>There is a problem, yet it is a bit different from the one claimed. There is no problem with the computation of the percentile. </p>
<p>Let us look at a site with a much smaller number of users to simplify manual checks. </p>
<p>The site Retro Computing has 29 pages of users (a page holds 36 names). This gives a total number of users above 1008 and below 1044. This is in line with the 1025 users claimed on <a href="http://stackexchange.com/leagues/392/alltime/retrocomputing">http://stackexchange.com/leagues/392/alltime/retrocomputing</a>. </p>
<p>Their table is:</p>
<pre><code>Total Rep | Users
3,000+ | 1
2,000+ | 2
1,000+ | 10
500+ | 30
200+ | 68
1+ | 914
</code></pre>
<p>There is only 1 user above 3000 and one user between 2000 and 3000, and there are 10 above 1000 (not 13) and actually 68 above 200 (and not more than 100).<br>
Thus, the x+ (except for 1+) really mean above the given threshold. </p>
<p>Yet then the sum of all matches the 1025. Thus, 1+ does not really mean 1 or more but rather it seems is intended to mean between 1 and 199. </p>
<p>Yet it seems to have been computed based on a misunderstanding of the earlier entries in the table, so this value does not have any significance. </p>
<p><strong>Summary:</strong> The x+, except for 1+, gives the number of user above the threshold. The 1+ does not give a significant number. </p>
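<p>The arithmetic used above, as a quick illustrative check:</p>

```python
# 29 pages of 36 names each bound the total user count.
page_size, pages = 36, 29
assert (pages - 1) * page_size == 1008
assert pages * page_size == 1044
assert 1008 < 1025 <= 1044

# The table rows sum to the claimed total, which is how the (not meaningful)
# "1+" value of 914 was produced: 1025 minus the sum of the other rows.
rows = [1, 2, 10, 30, 68, 914]
assert sum(rows) == 1025
assert 1025 - (1 + 2 + 10 + 30 + 68) == 914
print(sum(rows))
```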
|
23,485 | <p>Forgive me if this has been brought up either here or meta.se before, but I could not find it in either.</p>
<p>On a user's activity page, beneath their reputation, there is a label that says "top x% overall". Clicking this brings you to the user reputation leagues. However, there appears to be a flaw here.</p>
<p>At the top of this page, it shows a cumulative total of users (as seen in the image below).</p>
<p><a href="https://i.stack.imgur.com/A2Nex.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/A2Nex.png" alt="Total users shown at top of page"></a></p>
<p>Lower down, in the right-hand area, there is a table representing a breakdown of the number of users by reputation total, as shown below.</p>
<p><a href="https://i.stack.imgur.com/CWspR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CWspR.png" alt="Table showing number of users with each reputation total"></a></p>
<p>Adding the values in the second column, we arrive at the same number listed in the first image; that is to say, each row represents the total number of users with at least that reputation but <em>less than the prior row's total</em>.</p>
<p>This is confirmed by looking at the breakdowns; there are in fact 67 users here with >50,000 rep. However, if these data are cumulative, then the bottom-most row, representing users with 1+ reputation, should read the same as the number in the first image. It does not. Furthermore, it does not appear to be <em>exclusive</em> of the cumulative total in the 200+ rep row. Instead, it appears to incorrectly subtract the values of <em>all</em> prior rows, instead of just the cumulative 200+ row. (This detail was pointed out by @quid; thank you!)</p>
| Sklivvz | 272 | <p>This has been fixed. The issue was in the way the "1+" group was calculated. </p>
<p>All the "n+" groups were exactly what they should have been, the number users with rep of at least "n", so basically "200+" includes the "500+" users.</p>
<p>The "1+" was instead calculated assuming the list was of non overlapping buckets, so we removed the sum of all the row counts from the total number of users, which is a known value.</p>
<p>I've set it so we just show the total number of users now in the 1+ row.</p>
<p><del>This will be deployed in the next Stack Exchange release later on.</del></p>
<p>This is now deployed.</p>
|
2,147,458 | <p>Solve the following integral:
$$
\frac{2}{\pi}\int_{-\pi}^\pi\frac{\sin\frac{9x}{2}}{\sin\frac{x}{2}}dx
$$</p>
| Felix Marin | 85,343 | <p><span class="math-container">$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</span>
<span class="math-container">\begin{align}
&\bbox[5px,#ffd]{{2 \over \pi}\int_{-\pi}^{\pi}{\sin\pars{9x/2} \over \sin\pars{x/2}}\,\dd x} \\[5mm] = &\
{2 \over \pi}\oint_{\verts{z} = 1}{\pars{z^{9/2} - z^{-9/2}}/2\ic \over \pars{z^{1/2} - z^{-1/2}}/2\ic}\,{\dd z \over \ic z}
\\[5mm] = &\
-\,{2\ic \over \pi}\oint_{\verts{z} = 1}{1 \over z^{4}}
{z^{9} - 1 \over z - 1}\,\dd z
\\[5mm] = &\
-\,{2\ic \over \pi}\braces{2\pi\ic\bracks{z^{3}}\pars{z^{9} - 1 \over z - 1}}
\\[5mm] = &\
4\,\bracks{z^{3}}\pars{1- z}^{-1} =
\bbx{\ds{4}} \\ &
\end{align}</span></p>
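<p>Not part of the contour argument above, but a quick numeric sanity check of the final value. This assumes SciPy is available; the guard at <span class="math-container">$x=0$</span> handles the removable singularity, where the integrand tends to <span class="math-container">$9$</span>:</p>

```python
import numpy as np
from scipy.integrate import quad

def integrand(x):
    # sin(9x/2)/sin(x/2); removable singularity at x = 0 with limit 9
    if abs(np.sin(x / 2)) < 1e-12:
        return 9.0
    return np.sin(9 * x / 2) / np.sin(x / 2)

val, err = quad(integrand, -np.pi, np.pi, limit=200)
result = (2 / np.pi) * val  # should be close to 4
```

<p>The inner integral is the Dirichlet-kernel integral <span class="math-container">$2\pi$</span>, so <code>result</code> comes out as <span class="math-container">$4$</span>.</p>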
|
2,936,028 | <p>The question is:</p>
<p>Prove that If the sum of the elements of each row of a square matrix is k, then the sum of the elements in each row of the inverse matrix is 1/k ?</p>
<p>In the text book the answer is:</p>
<p>Let A be <span class="math-container">${m\times m}$</span>, non-singular, with the stated property. Let B be its inverse. Then for <span class="math-container">$n\leqslant m$</span>,
<span class="math-container">$$
1 = \sum\limits_{r=1}^m \sigma_{nr} = \sum\limits_{r=1}^m\sum \limits_{s=1}^mb_{ns}a_{sr} = \sum\limits_{s=1}^m\sum \limits_{r=1}^mb_{ns}a_{sr}
= k\sum\limits_{s=1}^m b_{ns}$$</span></p>
<p>(A is singular if K = 0).</p>
<p>I have trouble to understand this proof. Is there another way to prove it?</p>
| Stefan Lafon | 582,769 | <p>Let $M$ be the matrix and $u$ be the vector with 1 for all its elements.
Then saying that the sum of all the elements in the rows of $M$ is $k$ is equivalent to saying that $$Mu= ku$$
Now multiply that equation by $M^{-1}$ to the left:
$$u=kM^{-1}u$$ or $$\frac 1k u=M^{-1}u$$
Which means that the sum of the elements of the rows of $M^{-1}$ all equal $\frac 1k$.</p>
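<p>A quick numeric illustration of this argument (the matrix below is an arbitrary invertible example with row sums <span class="math-container">$k=3$</span>; NumPy availability is assumed):</p>

```python
import numpy as np

k = 3.0
M = np.array([[1.0, 0.0, 2.0],
              [0.0, 2.0, 1.0],
              [1.0, 1.0, 1.0]])  # every row sums to 3
Minv = np.linalg.inv(M)
row_sums = Minv.sum(axis=1)      # each entry should equal 1/k
```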
|
206,305 | <p>Prove: $s_n \to s \implies \sqrt{s_n} \to \sqrt{s}$ by the definition of the limit. $s \geq 0$ and $s_n$ is a sequence of non-negative real numbers.</p>
<p>This is my preliminary computation:</p>
<p>$|\sqrt{s_n} - \sqrt{s}| < \epsilon$</p>
<p>multiply by the conjugate:</p>
<p>$|\dfrac{s_n - s}{\sqrt{s_n}+\sqrt{s}}| < \epsilon$</p>
<p>Thus we can use the fact that $|\sqrt{s_n} - \sqrt{s}| <
\dfrac{|s_n - s|}{\sqrt{s}} < \epsilon$</p>
<p>After this I am lost...</p>
| robjohn | 13,854 | <p>Since $s_n\to s$, for any $\epsilon$, we can find an $N$ so that for all $n\ge N$ we have $|s_n-s|<\epsilon\sqrt{s}$. Then
$$
|\sqrt{s_n}-\sqrt{s}|=\left|\frac{s_n-s}{\sqrt{s_n}+\sqrt{s}}\right|\le\left|\frac{s_n-s}{\sqrt{s}}\right|
$$
to get that
$$
|\sqrt{s_n}-\sqrt{s}|\le\epsilon
$$</p>
|
2,993,979 | <p>I tried to determine if <span class="math-container">$n\cdot \arctan (\frac 1n)$</span> is divergent or convergent. </p>
<p>My solution is in the two pictures. I really have no clue as how to solve it, so I tried something, but it cannot be right. At least that's what I think.</p>
<p>I am sorry in advance for my bad maths.</p>
<p>Appreciate all help :)<a href="https://i.stack.imgur.com/Uf3nt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Uf3nt.png" alt="enter image description here"></a></p>
| Claude Leibovici | 82,404 | <p>If we speak about the sequence
<span class="math-container">$$a_n=n \tan ^{-1}\left(\frac{1}{n}\right)$$</span> let <span class="math-container">$x=\frac{1}{n}$</span> and consider
<span class="math-container">$$y=\frac{\tan ^{-1}\left({x}\right)}x$$</span> and use Taylor series of <span class="math-container">$\tan ^{-1}\left({x}\right)$</span> close to <span class="math-container">$x=0$</span>. Then
<span class="math-container">$$y=\frac 1x \left(x-\frac{x^3}{3}+O\left(x^5\right) \right)=1-\frac{x^2}{3}+O\left(x^4\right)$$</span> So, back to <span class="math-container">$n$</span>,
<span class="math-container">$$a_n=1-\frac{1}{3 n^2}+O\left(\frac{1}{n^4}\right)$$</span></p>
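<p>Not in the original answer: a quick numerical check of the expansion, with illustrative values of <span class="math-container">$n$</span>:</p>

```python
import math

ns = [10, 100, 1000]
a = [n * math.atan(1.0 / n) for n in ns]          # the sequence itself
approx = [1 - 1 / (3 * n**2) for n in ns]         # two-term expansion
errors = [abs(x - y) for x, y in zip(a, approx)]  # should decay like n^-4
```

<p>The errors shrink rapidly, consistent with the <span class="math-container">$O(n^{-4})$</span> remainder, and <span class="math-container">$a_n\to 1$</span>.</p>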
|
1,617,462 | <p>Is this a line or a plane, I thought it would be a plane where z=0 always so it will be the xy plane.</p>
<p>Also: what will be the normal vector for this if it is a plane?</p>
| Sri-Amirthan Theivendran | 302,692 | <p>It is the collection of vectors that are orthogonal to $(2,-1,0)$ and hence a plane. </p>
|
1,617,462 | <p>Is this a line or a plane, I thought it would be a plane where z=0 always so it will be the xy plane.</p>
<p>Also: what will be the normal vector for this if it is a plane?</p>
| Eli Rose | 123,848 | <p>It's true that $z = 0$ in the equation, but don't think of the equation as <em>requiring</em> $z = 0$ -- instead think of it as <em>putting no conditions</em> on $z$. $z$ doesn't appear in the equation, hence it can be anything.</p>
<p>So this is not the $xy$-plane, but a different plane: the set $\{(x, y, z) \in \mathbb{R}^3 \mid 2x - y = 0\}$. How do we know it's a plane? I find it helpful to visualize the line $2x - y = 0$ in the $xy$-plane and then picture it "extending" to the sky and to the ground to cover all $z$.</p>
<p>We can always read the normal vector off a plane $ax + by + cz = 0$; it's $(a, b, c)$. So this normal vector is:</p>
<p>$$
\left(\begin{matrix}2\\ -1\\ 0\\\end{matrix}\right)
$$</p>
|
784,032 | <p>Find the remainder when $6!$ is divided by 7.</p>
<p>I know that you can answer this question by computing $6! = 720$ and then using short division, but is there a way to find the remainder without using short division?</p>
| Bill Dubuque | 242 | <p><strong>Hint</strong> $\ $ In analogy with <strong>Gauss's trick</strong> (see below), to simplify the product we pair up each number with its (multiplicative) inverse mod $7.\,$ Thus $$ 6! = 1\cdot (\overbrace{2\cdot 4}^{\equiv \,1})(\overbrace{3\cdot5}^{\equiv\, 1})\cdot 6 \equiv 1\cdot 1\cdot 1\cdot 6\equiv6\pmod{7}$$</p>
<p><strong>Remark</strong> $\ $ This method of pairing up inverses works for any prime - see <a href="http://www.proofwiki.org/wiki/Wilson%27s_Theorem" rel="nofollow noreferrer">Wilson's Theorem.</a></p>
<p>Below is <strong>Gauss's trick</strong>, imported from <a href="https://math.stackexchange.com/a/7077/242">a deleted question</a></p>
<p>$\qquad\qquad \begin{array}{rcl}\rm{\bf Hint}\quad\quad\ \ S &=&\rm 1 \ \ \ +\ \ \: 2\ \ \ \ +\ \:\cdots\ +\ n\!-\!1\ +\ n \\
\rm S &=&\rm n \ \ +\ n\!-\!1\ +\,\ \cdots\ +\,\quad 2\ \ \ +\ \ 1\\
\hline \\
\rm Adding\ \ \ \ 2\: S &=&\rm n\ (n\!+\!1)\end{array}$ </p>
<p>A <a href="http://www.americanscientist.org/issues/pub/gausss-day-of-reckoning" rel="nofollow noreferrer">famous legend</a> says Gauss used this trick to quickly compute $ 1+2+\:\cdots\:+100\ $ in grade school.</p>
<p>This trick of pairing up reflections around the average value is a special case of exploiting innate symmetry - here a reflection or involution. It's a ubiquitous powerful technique, e.g. see my post on <a href="https://math.stackexchange.com/questions/1865">Wilson's Theorem</a> and it's <a href="https://math.stackexchange.com/questions/9311/product-of-all-elements-in-an-odd-finite-abelian-group-is-1/9345#9345">group theoretic generalization</a>.</p>
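<p>The inverse-pairing idea is easy to mechanize. A small sketch for a general odd prime <span class="math-container">$p$</span> (the modular inverse via <code>pow(a, -1, p)</code> requires Python 3.8+):</p>

```python
def wilson_by_pairing(p):
    """(p-1)! mod p, computed by pairing each factor with its inverse mod p."""
    prod = 1
    used = set()
    for a in range(2, p - 1):
        if a in used:
            continue
        inv = pow(a, -1, p)          # modular inverse (Python 3.8+)
        used.update({a, inv})
        prod = (prod * a * inv) % p  # each pair contributes 1 to the product
    # only 1 and p-1 are their own inverses, so they survive unpaired
    return (prod * (p - 1)) % p

check = wilson_by_pairing(7)  # 6! mod 7 -> 6, i.e. -1 mod 7
```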
|
884,362 | <blockquote>
<p>Compute the integral
$$\int_{0}^{2\pi}\frac{x\cos(x)}{5+2\cos^2(x)}dx$$</p>
</blockquote>
<p>My Try: I substitute $$\cos(x)=u$$</p>
<p>but it did not help. Please help me to solve this. Thanks</p>
| David | 119,775 | <p>As an <em>indefinite</em> integral this would be hard, maybe impossible, but there is a clever trick for the definite integral. Let
$$I=\int_{0}^{2\pi}\frac{x\cos(x)}{5+2\cos^2(x)}dx\ .$$
Substituting $x=2\pi-t$ gives
$$I=\int_0^{2\pi}\frac{(2\pi-t)\cos(t)}{5+2\cos^2(t)}\,dt
=\int_0^{2\pi}\frac{(2\pi-x)\cos(x)}{5+2\cos^2(x)}\,dx\ .$$
Adding the two expressions for $I$ gives
$$2I=2\pi\int_0^{2\pi}\frac{\cos(x)}{5+2\cos^2(x)}\,dx\ ,$$
and the integral on the RHS can now be done by various methods.</p>
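<p>For this particular integrand the remaining integral happens to vanish (multiplying the even, <span class="math-container">$\pi$</span>-periodic factor by <span class="math-container">$\cos x$</span> leaves only odd harmonics, which integrate to zero over a full period), so <span class="math-container">$I=0$</span>. A numeric check, assuming SciPy is available:</p>

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: x * np.cos(x) / (5 + 2 * np.cos(x) ** 2)
g = lambda x: np.cos(x) / (5 + 2 * np.cos(x) ** 2)

I, _ = quad(f, 0, 2 * np.pi)    # the original integral
rhs, _ = quad(g, 0, 2 * np.pi)  # the symmetrized integrand
# 2*I = 2*pi*rhs, and both turn out to be ~0 for this integrand
```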
|
3,897,689 | <p>I have the equation:
<span class="math-container">$$y'+2y\:=1$$</span></p>
<p>and I solve it the regular way for a first-order differential equation:
<span class="math-container">$$y'\:=1-2y$$</span>
<span class="math-container">$$\frac{dy}{dx}=1-2y$$</span>
<span class="math-container">$$\int \:\frac{1}{1-2y}dy=\int \:1dx$$</span>
<span class="math-container">$$-\frac{1}{2}\int \:-\frac{2}{1-2y}dy=\int \:1dx$$</span>
and using the integral formula:
<a href="https://i.stack.imgur.com/pjKMZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pjKMZ.png" alt="enter image description here" /></a>
<span class="math-container">$$-\frac{1}{2}\ln\left(\left|1-2y\right|\right)=x+\ln\left(c\right)$$</span>
Why <a href="https://he.symbolab.com/solver/step-by-step/%5Cleft%7C1-2y%5Cright%7C%3Dx" rel="nofollow noreferrer">Symbolab</a> omits the absolute value operator and writes:
<a href="https://i.stack.imgur.com/52jay.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/52jay.png" alt="enter image description here" /></a></p>
| user577215664 | 475,762 | <p><span class="math-container">$$y'+2y\:=1$$</span>
With integrating factor method:
<span class="math-container">$$(ye^{2x})'=e^{2x}$$</span>
<span class="math-container">$$ye^{2x}=\dfrac 12 e^{2x}+K$$</span>
<span class="math-container">$$\boxed {y(x)=\dfrac 12 +Ke^{-2x}}$$</span>
Then we can rewrite this as:
<span class="math-container">$$\dfrac {2y-1}K=e^{-2x} > 0$$</span>
<span class="math-container">$$\ln \left (\dfrac {2y-1}K\right )=-2x $$</span>
It seems to me that the absolute value is needed.</p>
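<p>The boxed general solution can be confirmed symbolically; sympy availability is an assumption here:</p>

```python
import sympy as sp

x, K = sp.symbols('x K')
y = sp.Rational(1, 2) + K * sp.exp(-2 * x)
# plug y back into y' + 2y = 1; the residual should simplify to 0
residual = sp.simplify(sp.diff(y, x) + 2 * y - 1)
```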
|
3,814,195 | <p>As an applied science student, I've been taught math as a tool. And although I've been studying <strong>a lot</strong> throughout the years, I always felt like I am missing depth. Then I read geodude's answer on this <a href="https://math.stackexchange.com/questions/721364/why-dont-taylor-series-represent-the-entire-function">post</a>, that cited these beautiful quotes:</p>
<blockquote>
<p>You might want to do calculus in <span class="math-container">$\mathbb{R}$</span>, but the functions themselves naturally live in <span class="math-container">$\mathbb{C}$</span></p>
</blockquote>
<blockquote>
<p>Even in <span class="math-container">$\mathbb{R}$</span>, and in the most practical and applied problems, you can hear distant echos of the complex behavior of the functions. It's their nature, you can't change it.</p>
</blockquote>
<p>And although pieces of complex analysis are well known even to the most applied scientist (e.g Euler's identity), these quotes really helped me understand why my math knowledge is so shallow. It seems I share the same worries with other engineers: (<a href="https://math.stackexchange.com/questions/1658577/whats-the-best-way-for-an-engineer-to-learn-real-math">What's the best way for an engineer to learn "real" math?</a>) and I've found many beautiful and informative answers about diving deeper into mathematics, but none of them (as far as I could spot) addressed complex analysis. And as I think I am lost in the labyrinth of math knowledge, I ask this question:</p>
<p>How can one that has a basic knowledge of real analysis approach complex analysis? Where do I start? Are there any books you would recommend?</p>
| awkward | 76,172 | <p>Assuming you are interested in applications (given your background), my favorite book for applications of complex analysis is <em>Fundamentals of Complex Analysis: with Applications to Engineering and Science</em> by E.B. Saff and A.D. Snider. Their coverage of residue theory, in particular, is more extensive than in most of the other texts I have seen, with many examples and exercises.</p>
<p>Regardless of the text you choose (there are many excellent books), I hope you will carry out your plan to study complex analysis. I think it is one of the most beautiful areas of mathematics.</p>
|
63,633 | <p>(This question came up in a conversation with my professor last week.)</p>
<p>Let $\langle G,\cdot \rangle$ be a group. Let $x$ be an element of $G$.
<br>
Is there always an isomorphism $f : G \to G$ such that $f(x) = x^{-1}$ ?
<br>
What if $G$ is finite?</p>
| Qiaochu Yuan | 290 | <p>Here's a comment which might as well be written down. If $f$ is required to be an inner automorphism, then for $G$ finite this question can be understood using the character table of $G$:</p>
<blockquote>
<p>$x$ is conjugate to its inverse if and only if $\chi(x)$ is real for all characters $\chi$.</p>
</blockquote>
<p>Since $\chi(x^{-1}) = \overline{ \chi(x) }$, one direction is clear. In the other direction, if $\chi(x)$ is real then $\chi(x) = \chi(x^{-1})$ for all characters $\chi$, hence $c(x) = c(x^{-1})$ for all class functions $c$. One also has the following cute result: the number of conjugacy classes which are closed under inversion is equal to the number of irreducible characters all of whose values are real (equivalently, the number of self-dual irreps). Since there exist plenty of groups (even simple groups) whose character tables have complex entries, there are plenty of groups with elements not conjugate to their inverses.</p>
<p>This is one way to address the question for finite groups with no outer automorphisms. </p>
|
2,512,736 | <p>I do not understand how this result is a special case of theorem 9.1, could anyone explain this for me please?</p>
<p><a href="https://i.stack.imgur.com/hsgYr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hsgYr.png" alt="enter image description here"></a></p>
<p>This is theorem 9.1:</p>
<p><a href="https://i.stack.imgur.com/jUvFU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jUvFU.png" alt="enter image description here"></a>
<a href="https://i.stack.imgur.com/po6xr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/po6xr.png" alt="enter image description here"></a></p>
| Community | -1 | <p>$F'(X):H\in M_n\rightarrow A^TH+HA-HUX-XUH$ where $U=BR^{-1}B^T$.</p>
<p>Let $K=F'(X_i)^{-1}F(X_i)$.</p>
<p>At each step, you must solve in $K$ this linear equation:</p>
<p>$A^TK+KA-KUX_i-X_iUK=A^TX_i+X_iA-X_iUX_i+Q$, or</p>
<p>$(A^T-X_iU)K+K(A-UX_i)=A^TX_i+X_iA-X_iUX_i+Q$, which is a Sylvester equation.</p>
<p>EDIT 1. Answers to the first three comments.</p>
<p>I don't need any dot product or "fourth order tensor"!!! $F'(X)$ is a derivative and not a gradient. Stop reading the matrix cookbook; you don't understand what you are doing.</p>
<p>$F$ is a function from $M_n$ to $M_n$. Then, its derivative, in $X\in M_n$, $F'(X)$ is a LINEAR application from $M_n$ to $M_n$; we assume that it is a bijection in $X_i$ and, therefore $F'(X_i)(K)=F(X_i)$. One cannot do simpler!</p>
<p>Last point. A Sylvester equation in the unknown matrix $K$ is a linear equation of the form $AK+KB=C$.</p>
<p>cf. <a href="https://en.wikipedia.org/wiki/Sylvester_equation" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Sylvester_equation</a></p>
<p>Matlab has a solver dedicated to this type of equation.</p>
<p>EDIT 2. Answer to Daniel Mårtensson . Your below comment: "but where do I update $K$ then?" shows that you did not understand one word of my post...</p>
<p>I rewrite. One iteration consists in that follows.</p>
<p>i) input. $X_i$</p>
<p>ii) Solve in $K$ (using Matlab): $(A^T-X_iU)K+K(A-UX_i)=A^TX_i+X_iA-X_iUX_i+Q$ </p>
<p>The equation depends on $X_i$; then $K$ depends on $X_i$.</p>
<p>iii) output. $X_{i+1}=X_i-K$ (and not $X_i+K$).</p>
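<p>Matlab's Sylvester solver has a SciPy counterpart, so the whole iteration can be sketched as below. The matrices <code>A</code>, <code>B</code>, <code>Q</code>, <code>R</code> are illustrative assumptions, with <code>A</code> stable so that <span class="math-container">$X_0=0$</span> is an admissible starting point; <code>solve_continuous_are</code> serves only as an independent reference:</p>

```python
import numpy as np
from scipy.linalg import solve_sylvester, solve_continuous_are

A = np.array([[-1.0, 1.0], [0.0, -2.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.eye(1)
U = B @ np.linalg.inv(R) @ B.T

X = np.zeros((2, 2))  # X_0; valid here because A is already stable
for _ in range(50):
    # step ii): solve (A^T - X U) K + K (A - U X) = A^T X + X A - X U X + Q
    rhs = A.T @ X + X @ A - X @ U @ X + Q
    K = solve_sylvester(A.T - X @ U, A - U @ X, rhs)
    X = X - K  # step iii)

X_ref = solve_continuous_are(A, B, Q, R)  # reference Riccati solution
```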
|
94,440 | <p>In Sean Carroll's <em>Spacetime and Geometry</em>, a formula is given as
$${\nabla _\mu }{\nabla _\sigma }{K^\rho } = {R^\rho }_{\sigma \mu \nu }{K^\nu },$$</p>
<p>where $K^\mu$ is a Killing vector satisfying Killing's equation ${\nabla _\mu }{K_\nu } +{\nabla _\nu }{K_\mu }=0$ and the convention of Riemann curvature tensor is</p>
<p>$$\left[\nabla_{\mu},\nabla_{\nu}\right]V^{\rho}={R^\rho}_{\sigma\mu\nu}V^{\sigma}.$$</p>
<p>So how to prove the this formula (the connection is Levi-Civita)?</p>
| Gravity_CK | 762,708 | <p>We could try to solve it the other way: if a vector obeys the first condition, then it must be a Killing vector.</p>
<p>We assume
<span class="math-container">$$\nabla_{\mu} \nabla_{\sigma} K^{\rho} = R^{\rho}{}_{\sigma \mu \nu} K^{\nu} \quad \text{— (i)},$$</span>
where the curvature convention is
<span class="math-container">$$[\nabla_{\mu}, \nabla_{\sigma}] V^{\rho} = R^{\rho}{}_{\nu \mu \sigma} V^{\nu}.$$</span>
Subtracting <span class="math-container">$\nabla_{\sigma} \nabla_{\mu} K^{\rho}$</span> from (i),
<span class="math-container">$$[\nabla_{\mu}, \nabla_{\sigma}] K^{\rho} = R^{\rho}{}_{\sigma \mu \nu} K^{\nu} - \nabla_{\sigma} \nabla_{\mu} K^{\rho},$$</span>
so that
<span class="math-container">$$\nabla_{\sigma} \nabla_{\mu} K^{\rho}=\left(R^{\rho}{}_{\sigma \mu \nu}-R^{\rho}{}_{\nu \mu \sigma}\right) K^{\nu}.$$</span>
Lowering the index and using <span class="math-container">$\left(R_{abcd}=R_{cdab}\right)$</span>,
<span class="math-container">$$\nabla_{\sigma} \nabla_{\mu} K_{\rho}=\left(R_{\rho\sigma\mu\nu}- R_{\mu\sigma\rho\nu}\right) K^{\nu}.$$</span>
Using (i) on each term of the right-hand side,
<span class="math-container">$$\nabla_{\sigma} \nabla_{\mu} K_{\rho}=\nabla_{\mu} \nabla_{\sigma} K_{\rho}-\nabla_{\rho} \nabla_{\sigma} K_{\mu}.$$</span></p>
<p>Add <span class="math-container">$ \nabla_{\sigma} \nabla_{\rho} K_{\mu}$</span> to both sides:</p>
<p><span class="math-container">$$\nabla_{\sigma}\left(\nabla_{\mu} K_{\rho}+\nabla_{\rho} K_{\mu}\right)=\nabla_{\mu} \nabla_{\sigma} K_{\rho}+\left[\nabla_{\sigma}, \nabla_{\rho}\right] K_{\mu}$$</span></p>
<p><span class="math-container">$$=R_{\rho \sigma \mu \alpha} K^{\alpha}+ R_{\mu \alpha \sigma \rho} K^{\alpha}$$</span>
<span class="math-container">$$=R_{\rho \sigma \mu \alpha} K^{\alpha}+R_{\sigma \rho \mu \alpha} K^{\alpha}$$</span>
<span class="math-container">$$=R_{\rho \sigma \mu \alpha} K^{\alpha}-R_{\rho \sigma \mu \alpha} K^{\alpha}=0, $$</span>
<span class="math-container">$\because \left( R_{a b c d}=R_{c d a b}\right)$</span> and
<span class="math-container">$ \left( R_{a b c d}=-R_{b a c d}\right)$</span></p>
<p><span class="math-container">$$
\therefore \nabla_{\sigma}\left(\nabla_{\mu} K_{\rho}+\nabla_{\rho} K_{\mu}\right)=0.
$$</span>
<span class="math-container">$ \because x^{\sigma}$</span> is an arbitrary direction,
<span class="math-container">$$
\nabla_{\mu} K_{\rho}+\nabla_{\rho} K_{\mu}=0.$$</span></p>
<p><span class="math-container">$ \therefore K$</span> is a Killing vector.</p>
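<p>Not part of the proof above, but a concrete sanity check: on the round 2-sphere with the Killing vector <span class="math-container">$K=\partial_\phi$</span>, the identity <span class="math-container">$\nabla_\mu\nabla_\sigma K^\rho = R^\rho{}_{\sigma\mu\nu}K^\nu$</span> can be verified symbolically. This sketch assumes sympy is available and uses the same curvature convention as the question:</p>

```python
import sympy as sp

th, ph = sp.symbols('theta phi', positive=True)
x = [th, ph]
dim = 2
g = sp.Matrix([[1, 0], [0, sp.sin(th) ** 2]])  # round 2-sphere metric
ginv = g.inv()

# Christoffel symbols Gam[r][m][s] = Gamma^r_{ms}
Gam = [[[sp.simplify(sp.Rational(1, 2) * sum(
            ginv[r, d] * (sp.diff(g[d, m], x[s]) + sp.diff(g[d, s], x[m])
                          - sp.diff(g[m, s], x[d])) for d in range(dim)))
         for s in range(dim)] for m in range(dim)] for r in range(dim)]

# Riemann tensor in the question's convention:
# R^r_{s m n} = d_m Gam^r_{ns} - d_n Gam^r_{ms} + Gam^r_{ml} Gam^l_{ns} - Gam^r_{nl} Gam^l_{ms}
def riem(r, s, m, nu):
    val = sp.diff(Gam[r][nu][s], x[m]) - sp.diff(Gam[r][m][s], x[nu])
    val += sum(Gam[r][m][l] * Gam[l][nu][s] - Gam[r][nu][l] * Gam[l][m][s]
               for l in range(dim))
    return val

K = [0, 1]  # the Killing vector d/d(phi)

# first covariant derivative T^r_s = nabla_s K^r (partials of K vanish)
T = [[sum(Gam[r][s][l] * K[l] for l in range(dim)) for s in range(dim)]
     for r in range(dim)]

# check nabla_m nabla_s K^r == R^r_{s m nu} K^nu for every index combination
identity_holds = True
for r in range(dim):
    for s in range(dim):
        for m in range(dim):
            lhs = (sp.diff(T[r][s], x[m])
                   + sum(Gam[r][m][l] * T[l][s] for l in range(dim))
                   - sum(Gam[l][m][s] * T[r][l] for l in range(dim)))
            rhs = sum(riem(r, s, m, nu) * K[nu] for nu in range(dim))
            if sp.simplify(lhs - rhs) != 0:
                identity_holds = False
```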
|
2,991,825 | <p>I'm trying to find the general solution to this matrix
<span class="math-container">\begin{bmatrix}1&-2&1&3&0\\2&-4&4&6&4\\ -2&4&-1&-6&2\\1&-2&-3&3&-8\end{bmatrix}</span></p>
<p>Ax=<span class="math-container">$\begin{bmatrix}1&6&0&-7&\end{bmatrix}^T$</span></p>
<p>I think I'm supposed to get it in x=x*+z format, I'm still not sure if this is the correct way to do it.
But I ended up getting this matrix in row echelon form.
-2(r1)+(r2)</p>
<p>2(r1)+(r3)</p>
<p>-(r1)+(r4)
<span class="math-container">\begin{bmatrix}1&-2&1&3&0\\0&0&2&0&4\\ 0&0&1&0&2\\0&0&-4&0&-8\end{bmatrix}</span></p>
<p>And then </p>
<p>2(r2)+(r4)</p>
<p>-1/2(r2)+(r3)</p>
<p><span class="math-container">\begin{bmatrix}1&-2&1&3&0\\0&0&2&0&4\\ 0&0&0&0&0\\0&0&0&0&0\end{bmatrix}</span></p>
<p>then</p>
<p>1/2(r2)</p>
<p><span class="math-container">\begin{bmatrix}1&-2&1&3&0\\0&0&1&0&2\\ 0&0&0&0&0\\0&0&0&0&0\end{bmatrix}</span></p>
<p>and lastly -(r2) + (r1)</p>
<p><span class="math-container">\begin{bmatrix}1&-2&0&3&-2\\0&0&1&0&1\\ 0&0&0&0&0\\0&0&0&0&0\end{bmatrix}</span></p>
<p>After doing some algebra I ended up getting</p>
<p>x1 = 2(x2)-3(x4)+2(x5)</p>
<p>x3 = -2(x5)</p>
<p>and set x5 = x4 = x2 = 1</p>
<p>and got z = <span class="math-container">$\begin{bmatrix}1&1&-2&1&1\end{bmatrix}^T$</span></p>
<p>But when I try to solve for Ax = <span class="math-container">$\begin{bmatrix}1&6&0&-7\end{bmatrix}^T$</span></p>
<p>The last two rows are full of zeros</p>
<p>so I can't have 0=-7</p>
<p>How would i solve this?</p>
| Angina Seng | 436,618 | <p>Use <span class="math-container">$\sin x=x+O(x^3)$</span> as <span class="math-container">$x\to0$</span>. Then
<span class="math-container">$$\sin\frac\pi{2^{n+1}}=\frac{\pi}{2^{n+1}}+O(2^{-3n})$$</span>
and
<span class="math-container">$$2^n\sin\frac\pi{2^{n+1}}=\frac{2^n\pi}{2^{n+1}}+O(2^{-2n})$$</span>
etc.</p>
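<p>Not in the original answer: numerically the sequence converges to <span class="math-container">$\pi/2$</span> at the advertised rate:</p>

```python
import math

vals = [2 ** n * math.sin(math.pi / 2 ** (n + 1)) for n in range(1, 25)]
# the error behaves like O(4^{-n}), consistent with the O(2^{-2n}) term
limit_err = abs(vals[-1] - math.pi / 2)
```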
|
365,483 | <p>Let <span class="math-container">$f\colon X\to \mathbb{A}^n_{\mathbb{C}}$</span> be a morphism of <span class="math-container">$\mathbb{C}$</span>-schemes. Suppose <span class="math-container">$f$</span> is (a) separated, (b) flat, (c) locally of finite type, (d) all fibers are quasi-compact, is <span class="math-container">$X$</span> necessarily quasi-compact?</p>
| Angelo | 4,790 | <p>Let <span class="math-container">$X$</span> be the scheme obtained by gluing the generic points of all <span class="math-container">$\operatorname{Spec}\mathcal{O}_p$</span> for all closed points <span class="math-container">$p$</span> of <span class="math-container">$\mathbb{A}^1_{\mathbb C}$</span>. The obvious morphism <span class="math-container">$X \to \mathbb{A}^1_{\mathbb C}$</span> is a bijection, but <span class="math-container">$X$</span> is not quasi-compact.</p>
|
4,228,512 | <p>The question is: does the sequence of characteristic functions <span class="math-container">$f_k(x) := \chi_{[-\frac{1}{k}, \frac{1}{k}]}(x)$</span> converge in distributional sense to the Dirac delta?</p>
<p>In order to answer I followed this approach, but I fear I'm neglecting something important in my lines:</p>
<p>First of all, <span class="math-container">$f_k\in L^1_{loc}(\mathbb{R})$</span>, so we can write the action of the associated distribution as <span class="math-container">$$\langle T_k(x),\psi \rangle= \int_\mathbb{R}\chi_{[-\frac{1}{k}, \frac{1}{k}]}(x)\cdot\psi(x)dx=\int_{-\frac{1}{k}}^\frac{1}{k}\psi(x)dx$$</span> for every test function <span class="math-container">$\psi \in C^\infty_c(\mathbb{R})$</span>. Then I computed the limit as:
<span class="math-container">$$\displaystyle{\lim_{k\to \infty}\langle T_k(x),\psi\rangle =\lim_{k\to \infty} \int_{-\frac{1}{k}}^\frac{1}{k}{\psi(x)dx} =\lim_{k\to \infty} \int_{-1}^1{\frac{1}{k}\cdot\psi(\frac{y}{k})dy}}$$</span>
and applied the Lebesgue dominated convergence theorem, saying that <span class="math-container">$\frac{|\psi(\frac{y}{k})|}{|k|} \le \sup_{x\in [-1,1]}|\psi(x)| <\infty$</span> and <span class="math-container">$\displaystyle{\lim_{k\to \infty}{\frac{\psi(\frac{y}{k})}{k}} = 0 }$</span>. Then I deduced that the sequence <span class="math-container">$T_k$</span> converges to the distribution associated to the zero function.</p>
<p>Is my proof correct? Any check or observation would be really appreciated.</p>
| Kavi Rama Murthy | 142,385 | <p>You are making things too complicated. <span class="math-container">$\psi$</span> is a bounded function and if <span class="math-container">$|\psi| \leq M$</span> we get <span class="math-container">$|\int_{-1/k}^{1/k} \psi (x)dx|\leq \frac {2M} {k} \to 0$</span>.</p>
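<p>A numeric illustration with one particular test function (the Gaussian below is merely a stand-in for a compactly supported <span class="math-container">$\psi$</span>; SciPy availability is assumed):</p>

```python
import math
from scipy.integrate import quad

psi = lambda x: math.exp(-x * x)  # stand-in smooth test function

pairings = [quad(psi, -1 / k, 1 / k)[0] for k in (1, 10, 100, 1000)]
# the values shrink like 2*psi(0)/k -> 0, so T_k -> 0, not the delta
```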
|
338,535 | <p>Suppose that $f$ is a function defined on the set of natural numbers such that $$f(1)+ 2^2f(2)+ 3^2f(3)+...+n^2f(n) = n^3f(n)$$ for all positive integers $n$.
Given that $f(1)= 2013$, find the value of $f(2013)$.</p>
| masmoudihoussem | 64,548 | <p>This is an easy question.</p>
<p>Let's prove first that for every non negative integer, the following holds:</p>
<p>$$n^2 f(n)=f(1)$$</p>
<p>For $n=2$:</p>
<p>$$f(1)+2^{2} f(2)=2^{3} f(2)$$
$$2^2 f(2)=f(1)$$</p>
<p>Suppose that for every $p$ less than $n$:</p>
<p>$$p^2 f(p)=f(1)$$</p>
<p>Then by hypothesis</p>
<p>$$f(1)+2^2 f(2)+3^2 f(3)+..+(n-1)^2 f(n-1)+n^2 f(n)=n^3 f(n)$$</p>
<p>So:</p>
<p>$$(n^3-n^2) f(n)=(n-1) f(1)$$</p>
<p>Then:</p>
<p>$$n^2 f(n)=f(1)$$</p>
<p>Thus:</p>
<p>$$f(n)=f(1)/(n^2)$$</p>
<p>Applying to the $n=2013$ case:</p>
<p>$$f(2013)=2013/(2013^2)=1/2013$$</p>
|
338,535 | <p>Suppose that $f$ is a function defined on the set of natural numbers such that $$f(1)+ 2^2f(2)+ 3^2f(3)+...+n^2f(n) = n^3f(n)$$ for all positive integers $n$.
Given that $f(1)= 2013$, find the value of $f(2013)$.</p>
| lab bhattacharjee | 33,337 | <p>We have $$\sum_{1\le r\le n} r^2f(r)=n^3f(n)$$</p>
<p>Putting $n=m, \sum_{1\le r\le m} r^2f(r)=m^3f(m)$</p>
<p>Putting $n=m+1, \sum_{1\le r\le m+1} r^2f(r)=(m+1)^3f(m+1)$</p>
<p>On subtraction, $$m^3f(m)=f(m+1)\{(m+1)^3-(m+1)^2\}$$</p>
<p>$$f(m+1)=f(m)\cdot\left(\frac m{m+1}\right)^2=f(m-1)\cdot\left(\frac {m(m-1)}{(m+1)m}\right)^2=\cdots = \frac{f(1)}{(m+1)^2}\text{ for }m+1\ge 1\implies m\ge 0 $$</p>
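<p>The closed form can be checked against the original recurrence numerically (exact rational arithmetic via <code>Fraction</code> avoids floating-point error; the range of <span class="math-container">$n$</span> is illustrative only):</p>

```python
from fractions import Fraction

f1 = Fraction(2013)
f = lambda n: f1 / n**2

# check sum_{r=1}^{n} r^2 f(r) == n^3 f(n) for several n
checks = all(sum(r * r * f(r) for r in range(1, n + 1)) == n**3 * f(n)
             for n in range(1, 50))
f_2013 = f(2013)  # = 1/2013
```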
|
2,208,943 | <p>I am about to finish my first year of studying mathematics at university and have completed the basic linear algebra/calculus sequence. I have started to look at some real analysis and have really enjoyed it so far.</p>
<p>One thing I feel I am lacking in is motivation. That is, the difference in rigour between the usual introduction to calculus class and real analysis seems to be quite strong. While I appreciate rigour for aesthetic reasons, I have trouble understanding why the transition from the 18th century Euler style calculus to the rigorous "delta-epsilon" formulation of calculus was necessary. </p>
<p>Is there a book that provides some historical motivation for the rigorous developement of calculus? Perhaps something that gives several counterexamples that occur when one is only equipped with a non-rigorous (i.e. first year undergraduate) formulation of calculus. For example, were there results that were thought to be true but turned out to be false when the foundations of calculus were strengthened? I suppose if anyone knows good counterexamples themselves they could list them here as well. </p>
| polfosol | 301,977 | <p>Some other answers have already provided excellent insights. But let's look at the problem this way: <em>Where does the need for rigor originate</em>? I think the answer lies behind one word: counter-intuition.</p>
<p>When someone is developing or creating mathematics, they mostly need to have an intuition about what they are talking about. I don't know much about the history, but for example, I bet the notion of derivative was first introduced because they needed something to express the "speed" or "acceleration" in motion. I mean, first there was a natural phenomenon for which a mathematical concept was developed. This math could perfectly describe the thing they were dealing with, and the results matched with expectation/intuition. But as time passed, some new problems popped out that led to unexpected/counter-intuitive results. So they felt the need to provide some more rigorous (and consequently, more abstract) concepts. This is why the more we develop in math, the harder its intuition becomes.</p>
<p>A classic example, as mentioned in other answers, is the Weierstrass function. Before knowing calculus, we may have some sense about the notion of continuity as well as the slope, and this helps us understand calculus more thoroughly. But Weierstrass function is something unexpected and hard-to-imagine, which leads us to the fact that "sometimes mathematics may not make sense, but it's true!"</p>
<p>Another (somehow related) example is the Bertrand paradox in probability. In a same manner, we may have some intuition about the probability even before studying it. This intuition is helpful in understanding the initial concepts of probability, until we are faced with the Bertrand paradox and be like, Oh... what can we do about <em>that</em>?</p>
<p>There are some good questions on this site and mathoverflow about some counter-intuitive results in various fields of mathematics, some of which were the initial incentive to develop more rigorous math. I recommend taking a look at them as well.</p>
|
2,208,943 | <p>I am about to finish my first year of studying mathematics at university and have completed the basic linear algebra/calculus sequence. I have started to look at some real analysis and have really enjoyed it so far.</p>
<p>One thing I feel I am lacking in is motivation. That is, the difference in rigour between the usual introduction to calculus class and real analysis seems to be quite strong. While I appreciate rigour for aesthetic reasons, I have trouble understanding why the transition from the 18th century Euler style calculus to the rigorous "delta-epsilon" formulation of calculus was necessary. </p>
<p>Is there a book that provides some historical motivation for the rigorous developement of calculus? Perhaps something that gives several counterexamples that occur when one is only equipped with a non-rigorous (i.e. first year undergraduate) formulation of calculus. For example, were there results that were thought to be true but turned out to be false when the foundations of calculus were strengthened? I suppose if anyone knows good counterexamples themselves they could list them here as well. </p>
| user64742 | 289,789 | <p>The purpose of "rigor" is to prove that when you claim something in mathematics it actually is legitimately true. If you wish to ask "why" then it is a fairly simple answer:</p>
<p>When we use calculus in machinery, programming, and to solve problems in science at a much larger scale than just a handful of expert scientists* can we risk not being able to perfectly know whether or not mathematics (the fundamental tool we use to measure theoretical scientific concepts) is actually correct? Imagine if the mean value theorem were not always true but <strong>we assumed it to be true</strong>. What if we built an airplane with an auto-piloting system relying on that theorem being true (maybe it turns upward at full throttle at a certain dropping velocity which we know it must pass through to be considered 'crashing'). We know that position is continuous (obviously we do not teleport) but without proof that the derivative is continuous due to position being a "smooth" curve we don't have a basis to claim velocity is not a step function.</p>
<p>And well, without rigor there would be a risk that the plane would crash.</p>
<p>tl;dr Science relies much heavier on calculus to do riskier jobs with safety concerns and so our scrutiny must therefore rise to meet the occasion.</p>
<p>*Of course it wasn't just experts that did calculus in the late 1800's to early 1900's but one has to admit that a college education is more widespread than many decades and/or centuries ago and so more people have that knowledge. Therefore, the number of people using it rises. With that, the need for quality control rises. You wouldn't buy a broken device at the store. Mathematics isn't a product that can be bought, but it's the same way. If it's broken, people won't accept it. Therefore, we scrutinize everything in a much deeper manner than before so that we can be justified in saying "yes, this statement <em>is</em> true!" and people will agree with us. We don't want to be blamed for something failing because we simply ignored cases where an equation wasn't true.</p>
|
3,489,212 | <p>Playing around I found a series which looks to converge to the square root function.</p>
<p><span class="math-container">$$\sqrt{p^2+q}\overset{?}{=}p\left(1-\sum_{n=1}^{+\infty}\left(-\frac q{2p^2}\right)^n\right)$$</span></p>
<p>Is it correct?</p>
| Community | -1 | <p>No.</p>
<p><span class="math-container">$$p\left(1-\sum_{n=1}^{+\infty}\left(-\frac q{2p^2}\right)^n\right)=p\left(1+\frac q{2p^2}\frac1{1+\dfrac q{2p^2}}\right)=p+\frac{pq}{2p^2+q}\ne\sqrt{p^2+q}.$$</span></p>
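<p>A quick numeric comparison confirms the mismatch (the values of <span class="math-container">$p$</span>, <span class="math-container">$q$</span> are chosen arbitrarily within the radius of convergence <span class="math-container">$|q|<2p^2$</span>):</p>

```python
p, q = 2.0, 1.0
true_val = (p * p + q) ** 0.5             # sqrt(p^2 + q) = sqrt(5)
series_val = p + p * q / (2 * p * p + q)  # closed form of the proposed series
gap = abs(true_val - series_val)          # nonzero, so the identity fails
```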
|
1,393,265 | <p>How to prove that $(n!)^{1/n}$ tends to infinity as $n$ tends to infinity?
I tried to do this by expanding $n!$ as $n\times (n-1)\times (n-2)\cdots 4\times3\times2\times 1$ and taking out n common from each factor so that I can have $n$ outside the radical sign, But then the last terms would be $(4/n)\times(3/n)\times(2/n)\times (1/n)$, which would tend to zero and would present indeterminate form of $0\cdot \infty$, but how should I further solve it. I would appreciate a little help.</p>
| Zhanxiong | 192,408 | <p>Denote $(n!)^{1/n}$ by $a_n$, then
$$\log a_n = \frac{1}{n}\log n! = \frac{\log 1 + \log 2 + \cdots + \log n}{n}.$$
By the celebrated <a href="https://en.wikipedia.org/wiki/Ces%C3%A0ro_summation#Definition" rel="nofollow">Cesaro's theorem</a> (note the result also holds if the general term tends to $\infty$), since $\log n \to \infty$ as $n \to \infty$, we have
$\log a_n \to \infty$
as $n \to \infty$. Consequently, $a_n \to \infty$ as $n \to \infty$.</p>
|
1,393,265 | <p>How to prove that $(n!)^{1/n}$ tends to infinity as $n$ tends to infinity?
I tried to do this by expanding $n!$ as $n\times (n-1)\times (n-2)\cdots 4\times3\times2\times 1$ and taking out n common from each factor so that I can have $n$ outside the radical sign, But then the last terms would be $(4/n)\times(3/n)\times(2/n)\times (1/n)$, which would tend to zero and would present indeterminate form of $0\cdot \infty$, but how should I further solve it. I would appreciate a little help.</p>
| David Holden | 79,543 | <p>$$
\lim_{n \to \infty} (n!)^{\frac1n} = \lim_{n \to \infty} \exp\left({\frac1n}\sum_{k=1}^n \log k\right)
$$
for any $n \gt 1$ we have
$$
n\log n -n+1 = \int_1^n \log x dx \lt \sum_{k=1}^n \log k\ \lt \int_1^n \log (x+1) dx \\= (n+1)\log(n+1) -(n+1) -2\log 2 +2
$$
i.e.
$$
\log \frac{n}{e} +\frac1{n} \lt \frac1{n} \sum_{k=1}^n \log k \lt (1 +\frac1n)\left(\log n + \log(1 + \frac1n) \right) - 1 + \frac1n -\frac{2\log2}n \\
= \log\frac{n}{e}+\frac1n+\frac{\log n}n +O(\frac1n)
$$
taking exponentials and dividing through:
$$
e^{\frac1n} \lt \frac{(n!)^{\frac1n}}{\frac{n}{e}} \lt (Ken)^{\frac1n}
$$
for a positive constant $K$</p>
<p>thus, by squeeze, as $n$ becomes large
$$
\frac {(n!)^{\frac1n}}{ \frac{n}e} \to 1
$$</p>
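As a numerical check of the squeeze conclusion (a sketch; `math.lgamma(n + 1)` is used to get $\log n!$ without overflow), the ratio $(n!)^{1/n}/(n/e)$ does approach $1$ from above:

```python
import math

def ratio(n):
    # (n!)^(1/n) / (n/e), computed in log space to avoid overflow
    return math.exp(math.lgamma(n + 1) / n - math.log(n) + 1)

ratios = [ratio(n) for n in (10, 1000, 100000)]
print(ratios)  # decreasing toward 1
```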
|
3,453,408 | <p>I'm reading through some lecture notes and see this in the context of solving ODEs:
<span class="math-container">$$\int\frac{dy}{y}=\int\frac{dx}{x} \rightarrow \ln{|y|}=\ln{|x|}+\ln{|C|}$$</span> why is the constant of integration natural logged here?</p>
| Quanto | 686,284 | <p>Normally, you need the boundary values to solve the ODE. Assume <span class="math-container">$y(x_0)=y_0$</span>, then the solution is,</p>
<p><span class="math-container">$$\ln |y| - \ln|y_0| = \ln |x| - \ln|x_0| $$</span></p>
<p>Thus, <span class="math-container">$\ln|C|$</span> is necessary and is to be determined via,</p>
<p><span class="math-container">$$\ln |C| = \ln |y_0| - \ln|x_0|=\ln|\frac{y_0}{x_0}|$$</span></p>
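A small numerical check (a sketch with assumed boundary values $x_0=2$, $y_0=6$): the resulting solution $y=\frac{y_0}{x_0}x$ satisfies $\ln|y|-\ln|x|=\ln|C|$ at every $x$:

```python
import math

x0, y0 = 2.0, 6.0        # assumed boundary values y(x0) = y0
C = y0 / x0              # so ln|C| = ln|y0| - ln|x0|

def y(x):
    return C * x         # the solution of dy/y = dx/x through (x0, y0)

checks = [math.log(abs(y(x))) - math.log(abs(x)) for x in (0.5, 1.0, 3.0, 10.0)]
print(checks)            # every entry equals ln|C|
```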
|
45,570 | <p>I'm writing a little package in Mathematica for geology where a particular stone may be approximated as a hemisphere. However, this is a rough approximation, because a real hemisphere's height equals its radius, whereas a reservoir stone (for a hydrocarbon) often has the form of a section of a hemisphere: its height is lower than its radius. For example, I can have a hemisphere with a radius of 5 km and a height of only 3 km, and I can plot it like this:</p>
<pre><code>semisfera[x_, y_, raggio_] := Sqrt[raggio^2 - (x - raggio)^2 - (y - raggio)^2];
plotsemisfera = Plot3D[semisfera[x, y, raggioSfera], {x, 0, 2 raggioSfera}, {y, 0, 2 raggioSfera}, PlotRange -> {0, 3}, AxesLabel -> {"lunghezza km" , "larghezza km","profondità km"}, PlotLabel -> Style[Framed["Referenced Theorical Hemisphere"], 22, Black]]
</code></pre>
<p>and I get the following graphic:
<img src="https://i.stack.imgur.com/O5pSO.jpg" alt="enter image description here"></p>
<p>You'll agree with me that this is a section of a hemisphere without the top part, won't you?</p>
<p>Sometimes it may happen that the height is much smaller than the radius. In my case, my geology student worked on a stone with a radius of 5 km and a height of only 0.2 km. If I try to plot this as I've done before, I get a very ugly graphic:</p>
<p><img src="https://i.stack.imgur.com/Zo3eb.jpg" alt="enter image description here"></p>
<p>So, I'd just like to know if there is a way to plot a more precise graphic, without all that irregular part at the base of the hemisphere.</p>
<p>The centre of the "hemisphere" should be at (0, 0).</p>
<p>Maybe it could be something like this:
<a href="http://uploadpie.com/eAVvq" rel="nofollow noreferrer">http://uploadpie.com/eAVvq</a></p>
<p>but I really don't understand why for low values of the height the base of the hemisphere is so jagged!</p>
<p>How can I plot that? Thank you</p>
| m_goldberg | 3,066 | <h3>Edit</h3>
<p>I now have a better understanding of what you are looking for.</p>
<p>To get plot centered at the origin defined in terms of the radius and height, then you can use <a href="http://reference.wolfram.com/mathematica/ref/SphericalPlot3D.html" rel="nofollow noreferrer"><code>SphericalPlot3D</code></a> as Kuba suggested. It would go like this.</p>
<pre><code>theta[r_, h_] /; 0 < h < r := π/2. - ArcTan[Sqrt[r^2 - h^2], h]
With[{r = 5, h = 3, zScale = .3},
SphericalPlot3D[r, {θ, theta[r, h], π/2}, {ϕ, 0, 2 π}, BoxRatios -> {1, 1, zScale}]]
</code></pre>
<p><img src="https://i.stack.imgur.com/bFdmM.png" alt="spherical-sect-1" /></p>
<p>Note the use of a parameter to scale z-axis. It is set to <code>h/(2 r)</code> in the above plot. This gives true proportions.</p>
<p>In the extreme of <code>r = 5</code> and <code>h = .2</code>, <code>zScale</code> will need to be adjusted to give a reasonable looking plot, which is going to look very much like a cylinder.</p>
<pre><code>With[{r = 5, h = .2, zScale = .25},
SphericalPlot3D[r, {θ, theta[r, h], π/2.}, {ϕ, 0, 2 π}, BoxRatios -> {1, 1, zScale}]]
</code></pre>
<p><img src="https://i.stack.imgur.com/zscxB.png" alt="spherical-sect-2" /></p>
|
4,021,994 | <p>I was taught in high school algebra to translate word problems into algebraic expressions. So when I encountered <a href="https://artofproblemsolving.com/wiki/index.php/2016_AMC_10A_Problems/Problem_3" rel="nofollow noreferrer">this</a> problem I tried to reason out an algebra formula for it</p>
<blockquote>
<p>For every dollar Ben spent on bagels, David spent 25 cents less. Ben
paid $12.50 more than David. How much did they spend in the bagel store
together?</p>
</blockquote>
<p>To solve this I imagined a series of comparisons when Ben spends <span class="math-container">$x$</span>, David spends <span class="math-container">$.75x$</span>. Loop this relationship until <span class="math-container">$x - .75x \approx 12.50$</span>. Good. Done. <span class="math-container">$x = 50$</span>, then add David's for the answer. Coming from computers, I would have set this up in code where a loop (recursion) would increase <span class="math-container">$x$</span> until the condition <span class="math-container">$x - .75x = 12.50$</span> was met, then the "loop counter/accumulator" would be how much Ben spent, i.e., <span class="math-container">$50$</span>, etc.</p>
<p>I'm a beginner with math, but it seems like there should be a better approach, something with series and sequences or even calculus derivatives, something better than my brute-force computer algorithm. Can someone enlighten? The "answer" given at the site (see link) is its own brute-force and hardly satisfying. I'm thinking there should be something more formal -- at least for the first part that derives <span class="math-container">$50$</span>.</p>
<p><strong>Update</strong></p>
<p>I think everyone so far has missed my point. Many of you simply re-did the problem again. I'm wondering if there is a more <em>formal</em> way to do this other than just "figuring it out" (FIO). The whole FIO routine is murky. It looks like a limit problem; it looks like a system of equations, but I'm not experienced enough to know exactly. If there isn't, then let's call it a day....</p>
| Alan | 175,602 | <p>From the step <span class="math-container">$x-.75x=12.50$</span> simplify to <span class="math-container">$.25x=12.5$</span>, divide by .25 to immediately get <span class="math-container">$x=50$</span>. No looping/brute force needed.</p>
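In code, the same one-line piece of algebra replaces the loop described in the question (a sketch; the rate $0.75$ and the gap $12.50$ come from the problem statement):

```python
rate = 0.75      # David spends 75 cents for every Ben-dollar
gap = 12.50      # Ben paid $12.50 more than David

ben = gap / (1 - rate)   # solves x - 0.75 x = 12.50 in one step
david = rate * ben
total = ben + david
print(ben, david, total)
```

No iteration is needed: dividing by the coefficient of $x$ is the closed-form version of the loop.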
|
319,262 | <p>If the first 10 positive integers are placed around a circle, in any order, must there exist 3 integers in consecutive locations around the circle that have a sum greater than or equal to 17? </p>
<p>This was from a textbook called "Discrete Mathematics and Its Applications"; however, it does not provide a solution for this question. </p>
<p>May I know how to tackle this question? </p>
<p>Edit: I relook at the actual question and realize it is sum greater or equal to 17. My apologies.</p>
| joriki | 6,622 | <p>Gerry's answer shows that the average sum of the triples is $16.5$. If there's no sum above $17$, then at least five sums have to be $17$ for the average to be $16.5$. Since two successive sums can't be equal, at most five sums are $17$, and thus exactly five sums are $17$, and thus the other five sums are $16$ and they alternate. But that's impossible, because it implies that moving by three goes up or down by $1$ and moving another three goes down or up by $1$, respectively, leading to the same number again.</p>
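Note that this argument actually establishes the stronger bound: some three consecutive numbers always sum to at least $18$ (hence certainly at least $17$). A randomized brute-force check (a sketch) agrees:

```python
import random

def max_triple_sum(arr):
    # largest sum of three consecutive entries around the circle
    n = len(arr)
    return max(arr[i] + arr[(i + 1) % n] + arr[(i + 2) % n] for i in range(n))

random.seed(0)
worst = min(max_triple_sum(random.sample(range(1, 11), 10))
            for _ in range(20000))
print(worst)  # never drops below 18
```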
|
3,386,530 | <p>Let <span class="math-container">$(\Omega,\mathcal{F},\mathbb{P})$</span> be a probability space and <span class="math-container">$(\mathcal{X},d)$</span> be a complete, separable, locally compact metric space. Suppose that <span class="math-container">$X,X_1,X_2,X_3,... : \Omega\to\mathcal{X}$</span> are <span class="math-container">$\mathbb{P}$</span>-i.i.d. random variables.</p>
<p>Define: <span class="math-container">$$\forall m\in\mathbb{N}, \pi_m: \mathcal{X}\times\mathcal{X}^m\to\{1,...,m\}, (x,x_1,...,x_m)\mapsto \min\left(\operatorname{argmin}_{k\in\{1,...,m\}}\left(d\left(x,x_1\right),...,d\left(x,x_m\right)\right)\right).$$</span>
Define:
<span class="math-container">$$\forall m\in\mathbb{N}, Z_m:\Omega\to\mathcal{X}, \omega\mapsto X_{\pi_m\left((X(\omega),X_1(\omega),...,X_m(\omega)\right)}(\omega).$$</span></p>
<p>If <span class="math-container">$A$</span> is a open set of <span class="math-container">$(\mathcal{X},d)$</span>, is it true that:
<span class="math-container">$$\limsup_{m\to+\infty}\mathbb{P}_{Z_m}(A)\le\mathbb{P}_{X}(A)?$$</span></p>
<blockquote>
<p><strong>Edit 1</strong>: or maybe that there exists a constant <span class="math-container">$C>0$</span> independent of <span class="math-container">$A$</span> such that:
<span class="math-container">$$\limsup_{m\to+\infty}\mathbb{P}_{Z_m}(A)\le C\cdot \mathbb{P}_{X}(A)?$$</span></p>
</blockquote>
<p>If it is false in general, what if we add the hypothesis that <span class="math-container">$\mathcal{X}=\mathbb{R}^n$</span>, <span class="math-container">$d$</span> is the Euclidean distance and <span class="math-container">$\mathbb{P}_X$</span> is absolutely continuous w.r.t. Lebesgue measure in <span class="math-container">$\mathbb{R}^n$</span>?</p>
<blockquote>
<p><strong>Edit 2:</strong> in this last case, we have that if <span class="math-container">$B$</span> is a ball of <span class="math-container">$\mathbb{R}^n$</span>, then <span class="math-container">$\mathbb{P}_X(\partial B)=0$</span> so, since <span class="math-container">$\mathbb{P}_{Z_m}\to \mathbb{P}_{X}$</span> in distribution (as explained below by WoolierThanThou), we have that <span class="math-container">$\mathbb{P}_{Z_m}(B)\to\mathbb{P}_{X}(B), m\to \infty$</span>. Now, since by Besicovitch covering theorem there exist an universal constant <span class="math-container">$C_n\in\mathbb{N}$</span> such that every open subset <span class="math-container">$A$</span> is the union of at most <span class="math-container">$C_n$</span> open sets <span class="math-container">$A_1,...,A_{C_n}$</span> each of them is a disjoint countable union of open balls, say <span class="math-container">$A_i = \cup_{j\in I_i} B_{i,j}$</span>, we have that:
<span class="math-container">$$\mathbb{P}_{Z_m}(A)\le \sum_{i=1}^{C_n}\mathbb{P}_{Z_m}(A_i)= \sum_{i=1}^{C_n}\sum_{j\in I_i}\mathbb{P}_{Z_m}(B_{i,j}) = (*)$$</span>
Now, if only we could exchange the limit and the series, we obtain that
<span class="math-container">$$(*)\to \sum_{i=1}^{C_n}\sum_{j\in I_i}\mathbb{P}_{X}(B_{i,j})= \sum_{i=1}^{C_n}\mathbb{P}_{X}(A_i)\le \sum_{i=1}^{C_n}\mathbb{P}_{X}(A) = C_n \mathbb{P}_{X}(A) $$</span>
Can anyone see a reason why we could exchange the limit and the series?</p>
</blockquote>
| Kavi Rama Murthy | 142,385 | <p><span class="math-container">$\sqrt {(z_1-x_1)^{2}+(z_2-x_2)^{2}} =\frac 1 2 \min\{r,s\} <s$</span> so <span class="math-container">$(z_1,z_2) \in E$</span>. There is no mistake in the manual. </p>
<p><span class="math-container">$s$</span> is chosen in a particular way and that condition is not met in your example. </p>
|
98,798 | <p>I used this command to draw a sphere:</p>
<pre><code>Graphics3D[{Specularity[White, 50], ColorData["Atoms", "Ag"],
Sphere[{0, 0, 0}, .7]}, Lighting -> "Neutral", Boxed -> False]
</code></pre>
<p><a href="https://i.stack.imgur.com/TL15F.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TL15F.png" alt="enter image description here"></a></p>
<p>How can I change the pattern of the surface, for example by adding mesh lines, rather than leaving it plain as in this simple case? In fact, if we assume this shape is the Earth, the desired result is to plot the Earth with its meridian lines and orbit lines (as mesh lines) on the surface, with flexible spacing (angles) between them.</p>
| Bob Hanlon | 9,362 | <p><a href="http://reference.wolfram.com/language/ref/RegionPlot3D.html" rel="nofollow noreferrer"><code>RegionPlot3D</code></a> has a <a href="http://reference.wolfram.com/language/ref/Mesh.html" rel="nofollow noreferrer"><code>Mesh</code></a> option.</p>
<pre><code>RegionQ[Sphere[{0, 0, 0}, .7]]
(* True *)
RegionPlot3D[Sphere[{0, 0, 0}, .7],
Lighting -> "Neutral",
Boxed -> False,
PlotStyle -> {
Specularity[White, 100],
ColorData["Atoms", "Ag"]},
Mesh -> True]
</code></pre>
<p><a href="https://i.stack.imgur.com/kuuPV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kuuPV.png" alt="enter image description here"></a></p>
|
98,798 | <p>I used this command to draw a sphere:</p>
<pre><code>Graphics3D[{Specularity[White, 50], ColorData["Atoms", "Ag"],
Sphere[{0, 0, 0}, .7]}, Lighting -> "Neutral", Boxed -> False]
</code></pre>
<p><a href="https://i.stack.imgur.com/TL15F.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TL15F.png" alt="enter image description here"></a></p>
<p>How can I change the pattern of the surface, for example by adding mesh lines, rather than leaving it plain as in this simple case? In fact, if we assume this shape is the Earth, the desired result is to plot the Earth with its meridian lines and orbit lines (as mesh lines) on the surface, with flexible spacing (angles) between them.</p>
| Zviovich | 1,096 | <pre><code>latitude[r_, a_] :=
Line[Table[{r Cos[a] Sin[b], r Sin[a] Sin[b], r Cos[b]}, {b, 0,
2 Pi, .1}]]
longitude[r_, b_] :=
Line[Table[{r Cos[a] Sin[b], r Sin[a] Sin[b], r Cos[b]}, {a, 0,
2 Pi, .1}]]
orbit[r_, a_, incline_] := Rotate[latitude[r, a], incline, {1, 0, 0}]
Graphics3D[{Specularity[White, 0.5], ColorData["Atoms", "Ag"],
Sphere[], Brown, latitude[1, #] & /@ Range[0, 2 Pi, Pi/8],
longitude[1, #] & /@ Range[-Pi, Pi, Pi/8], Red,
orbit[2, Pi/6, Pi/3]}, Lighting -> "Neutral", Boxed -> False]
</code></pre>
<p><a href="https://i.stack.imgur.com/kHdkJ.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kHdkJ.gif" alt="enter image description here"></a></p>
|
395,791 | <p>I am searching for examples of manifolds which are not symmetric spaces but where Jacobi fields can be computed in closed form. For now, I am aware of</p>
<ul>
<li>Gaussian distribution with the Wasserstein metric: <a href="https://arxiv.org/pdf/2012.07106.pdf" rel="noreferrer">https://arxiv.org/pdf/2012.07106.pdf</a></li>
<li>Kendall shape space: <a href="https://arxiv.org/pdf/1906.11950.pdf" rel="noreferrer">https://arxiv.org/pdf/1906.11950.pdf</a></li>
</ul>
<p>Are there many others? Thank you for your help.</p>
| Robert Bryant | 13,972 | <p>A particularly simple non-homogeneous example in which one can explicitly integrate the Jacobi equations is the complete metric on <span class="math-container">$\mathbb{R}^2$</span> given by
<span class="math-container">$$
g = (x^2{+}y^2{+}2)\bigl(\mathrm{d}x^2+\mathrm{d}y^2\bigr).
$$</span>
It has Gauss curvature <span class="math-container">$K = -4/(x^2{+}y^2{+}2)^3<0$</span>, and, visibly, a rotational symmetry about the origin <span class="math-container">$(x,y)=(0,0)$</span>.</p>
<p>It is not hard to show that, up to a rotation, each geodesic can be parametrized in the form
<span class="math-container">$$
(x,y) = \bigl(r\,\cosh t,\ \sqrt{r^2+2}\,\sinh t\,\bigr)
$$</span>
where the constant <span class="math-container">$r\ge0$</span> determines the closest approach of the geodesic to the origin. The element of arc length along this geodesic is then found to be <span class="math-container">$\mathrm{d}s$</span>, where
<span class="math-container">$$
s = t + (r^2{+}1)\,\cosh t\,\sinh t.
$$</span></p>
<p>Now, the Jacobi fields split into the tangential Jacobi fields, which are spanned by
<span class="math-container">$$
J_1 = \frac{\partial}{\partial s}
=\frac{1}{(1+(r^2{+}1)\cosh 2t)}\,\frac{\partial}{\partial t}
\quad\text{and}\quad
J_2 = s\,\frac{\partial}{\partial s},
$$</span>
and the normal Jacobi fields <span class="math-container">$J_3= f_1\,N$</span> and <span class="math-container">$J_4 = f_2\,N$</span>, where <span class="math-container">$N$</span> is the unit normal vector field to the curve and <span class="math-container">$f_1$</span> and <span class="math-container">$f_2$</span> are a basis for the solutions to the (linear) normal Jacobi equation
<span class="math-container">$$
\frac{d}{ds}\left(\frac{df}{ds}\right) + K\,f = 0.
$$</span>
Using the above formulae, one finds that these can be taken to be
<span class="math-container">$$
f_1(t) = r^2+1+\cosh 2t\quad\text{and}\quad
f_2(t) = \sinh 2t\,.
$$</span></p>
<p>Finally, note that these formulae generalize immediately to the case of the cohomogeneity-1 metric on <span class="math-container">$\mathbb{R}^n$</span> with the formula
<span class="math-container">$$
g = \bigl(|x|^2+2\bigr)\,(\mathrm{d}x\cdot\mathrm{d}x),
$$</span>
since every geodesic in this space lies in a 2-plane through the origin <span class="math-container">$x=0$</span>. In fact, the more general family of complete, conformally flat <em>Liouville metrics</em>
<span class="math-container">$$
g = \bigl(a_0 + a_1\,{x_1}^2 + \cdots + a_n\,{x_n}^2\bigr)(\mathrm{d}{x_1}^2 + \cdots + \mathrm{d}{x_n}^2),
$$</span>
where <span class="math-container">$a_0>0$</span> and <span class="math-container">$a_i\ge 0$</span> for <span class="math-container">$1\le i\le n$</span>, has the property that the Jacobi equations on the geodesics of this metric can be explicitly integrated.</p>
<p><strong>Remark:</strong> For more information about the metric <span class="math-container">$g$</span>, in particular the explicit formula for its distance function, see <a href="https://mathoverflow.net/questions/37651/riemannian-surfaces-with-an-explicit-distance-function/360046#360046">this answer of mine</a></p>
<p><strong>Additional Examples:</strong> In case the OP is interested, here is another group of examples that may be of interest. These are also conformally flat Liouville metrics, but now defined on the interior of the unit <span class="math-container">$n$</span>-ball <span class="math-container">$B = \{\,x\in\mathbb{R}^n\ |\ |x|\le 1\ \}$</span> for constants <span class="math-container">$m_i>0$</span> <span class="math-container">$(1\le i\le n)$</span> as metrics of the form
<span class="math-container">$$
g = \left(1-|x|^2\right)\left(\frac{{\mathrm{d}x_1}^2}{{m_1}^2}+\cdots + \frac{{\mathrm{d}x_n}^2}{{m_n}^2}\right)
$$</span>
It is not hard to show that every <span class="math-container">$g$</span>-geodesic is parametrized in the form
<span class="math-container">$$
x_i(t) = \lambda_i\,\cos( m_i\,t + q_i)
$$</span>
where the constants <span class="math-container">$\lambda_i$</span> and <span class="math-container">$q_i$</span> satisfy <span class="math-container">${\lambda_1}^2+\cdots+{\lambda_n}^2 = 1$</span> and <span class="math-container">$q_1+\cdots+q_n=0$</span> and where arclength <span class="math-container">$s$</span> along the geodesic satisfies
<span class="math-container">$$
\mathrm{d}(vs+c) = \bigl(1-|x(t)|^2\bigr)\,\mathrm{d}t
$$</span>
for some constants <span class="math-container">$v$</span> and <span class="math-container">$c$</span>. Note that the constants <span class="math-container">$(\lambda,q,v,c)$</span> subject to the two constraints vary in a manifold of dimension <span class="math-container">$2n$</span>, which is the dimension of the space of (parametrized) <span class="math-container">$g$</span>-geodesics in <span class="math-container">$B$</span>. Now, the Jacobi fields along a given geodesic can be computed explicitly as the partials of the explicit formulae with respect to the constrained parameters <span class="math-container">$(\lambda,q,v,c)$</span> using the Chain Rule and eliminating <span class="math-container">$s$</span> in favor of <span class="math-container">$t$</span>, which can be done explicitly using the given relation between <span class="math-container">$\mathrm{d}s$</span> and <span class="math-container">$\mathrm{d}t$</span>.</p>
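As a numerical spot check of the first example (a sketch; it uses the identity $x^2+y^2+2=(r^2{+}1)\cosh 2t+1=ds/dt$ along the geodesic, which one can verify from the parametrization above), both $f_1$ and $f_2$ satisfy the normal Jacobi equation:

```python
import math

r = 1.3                       # closest-approach parameter of the geodesic
w = r * r + 1

def D(t):
    # ds/dt = x^2 + y^2 + 2 along the geodesic
    return w * math.cosh(2 * t) + 1

def K(t):
    # Gauss curvature -4/(x^2 + y^2 + 2)^3
    return -4 / D(t) ** 3

def jacobi_residual(f, fprime, t, h=1e-5):
    # d/ds(df/ds) + K f, where d/ds = (1/D) d/dt; the outer derivative
    # is taken by central differencing df/ds as a function of t
    g = lambda u: fprime(u) / D(u)
    second = (g(t + h) - g(t - h)) / (2 * h) / D(t)
    return second + K(t) * f(t)

f1 = lambda t: w + math.cosh(2 * t)      # r^2 + 1 + cosh 2t
f1p = lambda t: 2 * math.sinh(2 * t)
f2 = lambda t: math.sinh(2 * t)
f2p = lambda t: 2 * math.cosh(2 * t)

residuals = [abs(jacobi_residual(f, fp, t))
             for (f, fp) in ((f1, f1p), (f2, f2p)) for t in (0.3, 0.9)]
print(residuals)  # all tiny
```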
|
2,259,840 | <blockquote>
<p>Points $P$, $Q$, and $R$ lie on the same line. Three semi-circles with the diameters $PQ$, $QR$, and $PR$ are drawn on the same side of the line segment $PR$. (That is, suppose we have an <a href="https://en.wikipedia.org/wiki/Arbelos" rel="nofollow noreferrer">arbelos</a>.) The centers of the semi-circles are $A$, $B$, and $O$, respectively. A circle with center $C$ touches all three semi-circles. Show that the radius of this circle is
$$c = \frac{ab(a+b)}{a^2+ab+b^2}$$
where $a :=|AQ|$ and $b :=|BQ|$ are the radii of the smaller two semi-circles.</p>
</blockquote>
<p>I know that since this is a trigonometry question, I have to construct a triangle somewhere. However, I am unsure as to whether I should construct the triangle between points ACQ or points ACB. </p>
| Jack D'Aurizio | 44,121 | <p>I will outline an approach that can be ultimately used to prove Descartes' (kissing circles) theorem too. The key idea is to perform a circle inversion and to keep track of some distances.</p>
<p><a href="https://i.stack.imgur.com/P6Pm1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/P6Pm1.png" alt="enter image description here"></a></p>
<p>If we perform a <a href="http://mathworld.wolfram.com/Inversion.html" rel="nofollow noreferrer">circle inversion</a> with respect to a circle centered at $Q$ through $P$ (first dotted circle) the $PQ$-circle is mapped into a line through $P$ (red line), the $QR$-circle is mapped into a parallel line (blue line) and the $PR$-circle is mapped into a circle tangent to both lines (green circle). If $PQ=2a$ and $QR=2b$ the radius of the green circle is
$$ \frac{1}{2}\left(2a+\frac{4a^2}{2b}\right) = \frac{a^2+ab}{b}. $$
A circle $\Xi$ congruent to the green circle (the second dotted circle) lies above the green circle and is tangent to the green circle, the blue line and the red line. Its inverse (the orange circle) is the solution to the arbelos. It follows that it is enough to compute where the intersections of $\Xi$ with the red line, the blue line and the green circle lie (in terms of the previously computed radius) to get three points on the orange circle by inverting with respect to $Q$, and that also gives the radius of the orange circle.</p>
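The claimed radius can be confirmed numerically (a sketch with sample values $a=1$, $b=2$). Placing $P=(0,0)$, $Q=(2a,0)$, $R=(2a+2b,0)$, the tangency conditions are $|AC|=a+c$, $|BC|=b+c$ and $|OC|=a+b-c$; with $c=\frac{ab(a+b)}{a^2+ab+b^2}$ all three hold simultaneously:

```python
import math

a, b = 1.0, 2.0
c = a * b * (a + b) / (a * a + a * b + b * b)     # claimed radius

ox = a + b   # center of the big semicircle; A = (a, 0), B = (2a + b, 0)

# intersect the circles |AC| = a + c and |BC| = b + c to locate C = (x, y)
x = (3 * a + b) / 2 + ((a + c) ** 2 - (b + c) ** 2) / (2 * (a + b))
y = math.sqrt((a + c) ** 2 - (x - a) ** 2)

# the third tangency |OC| = a + b - c then holds automatically
residual = abs(math.hypot(x - ox, y) - (a + b - c))
print(c, (x, y), residual)
```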
|
3,999,488 | <p><strong>Question:</strong> How is the differentiation of <span class="math-container">$xy=constant$</span> equal to <span class="math-container">$x\text{d}y+y\text{d}x$</span>?</p>
<p><strong>My Approach:</strong> I first tried using partial differentiation, which I know very little of. Basically, it's the differentiation of the function with respect to one variable at a time, while keeping the other constant, right?</p>
<p>So using that, shouldn't I get the answer as <span class="math-container">$x+y$</span>?</p>
<p>All help will be appreciated greatly.</p>
| Abhinav Tahlani | 739,290 | <p>Consider a function in a single variable only, say for example <span class="math-container">$f(x)=xsinx$</span>. How would you go about finding <span class="math-container">$f'(x)?$</span> Certainly, you would have the privilege of using <em>chain rule</em> to differentiate <span class="math-container">$f(x)$</span>. Try to think of using the same <em>chain rule</em> in case of <span class="math-container">$f(x,y)=xy$</span>.</p>
<pre><code>Note: xy can't be a function in a single variable, if you assume both x and y as variables.
</code></pre>
<p>Assume <span class="math-container">$y$</span> to be some function of <span class="math-container">$x$</span>. Just as you differentiated <span class="math-container">$x\sin x$</span>, try to differentiate <span class="math-container">$xy$</span> now (with respect to <span class="math-container">$x$</span>). You can work it out and see that
<span class="math-container">$$x\frac{dy}{dx}+y=0$$</span> and then you can reach your coveted equation. If you want to look up what <span class="math-container">$dy$</span> and <span class="math-container">$dx$</span> really are, then go for <strong>differentials</strong> and read them up. It's a fairly good read. As for <em>partial differentiation</em>, get to know what a differential actually is, and apply your known concepts from one-variable differentiation to understand it.</p>
<pre><code>PS: The equation which you got is a "differential equation". This can be solved via integration in order to come to your f(x,y)=xy
</code></pre>
<p>Coming to the last question, apply the previous concepts again and differentiate with respect to x to get it done! More can be learnt after you go on to deal with multi-variable calculus.</p>
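A quick finite-difference check (a sketch with an assumed constant $k=5$): any $y$ with $xy=\text{const}$, i.e. $y=k/x$, satisfies the derived equation $x\frac{dy}{dx}+y=0$:

```python
k = 5.0      # assumed constant, so y = k / x gives xy = k

def y(x):
    return k / x

def dydx(x, h=1e-6):
    # central finite difference approximation of dy/dx
    return (y(x + h) - y(x - h)) / (2 * h)

residuals = [abs(x * dydx(x) + y(x)) for x in (0.5, 1.0, 2.0, 4.0)]
print(residuals)  # all ~ 0, confirming x dy/dx + y = 0
```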
|
3,476,022 | <p>I was watching this Mathologer video (<a href="https://youtu.be/YuIIjLr6vUA?t=1652" rel="noreferrer">https://youtu.be/YuIIjLr6vUA?t=1652</a>) and he says at 27:32</p>
<blockquote>
<p>First, suppose that our initial <em>chunk</em> is part of a parabola, or if you like a cubic, or any polynomial. If I then tell you that my <em>mystery function</em> is a polynomial, there's always going to be exactly one polynomial that continues our initial <em>chunk</em>. In other words, <strong>a polynomial is completely determined by any part of it.</strong> [...] Again, just relax if all this seems a little bit too much.</p>
</blockquote>
<p>So he didn't give a proof of the theorem in bold text – I think this is very important.</p>
<p>I understand that there always exists a polynomial of degree <span class="math-container">$n$</span> that passes through a set of <span class="math-container">$n+1$</span> points (i.e. there are <strong>finitely many</strong> custom points to be passed by, the <em>chunk</em> has to be discrete, like <span class="math-container">$(1,1),(2,2),(3,3),(4,5)$</span>). But there also exists some polynomial of degree <span class="math-container">$m$</span> (<span class="math-container">$m\ne n$</span>) that passes through the same set of points.</p>
<p>But how do I prove that there exists one and only one polynomial that passes through a set of <strong>infinitely many</strong> points?</p>
| Peter LeFanu Lumsdaine | 2,439 | <p>If <span class="math-container">$p$</span> and <span class="math-container">$q$</span> are polynomials agreeing on infinitely many points, then <span class="math-container">$p-q$</span> is a polynomial that’s 0 on infinitely many points.</p>
<p>But if a polynomial <span class="math-container">$f$</span> of degree <span class="math-container">$n$</span> is <span class="math-container">$0$</span> on more than <span class="math-container">$n$</span> points, then it’s zero everywhere. (If it has zeroes <span class="math-container">$a_1, \ldots a_n$</span>, then by repeated division it’s of the form <span class="math-container">$c(x-a_1)\cdots(x-a_n)$</span>; if it’s zero at some other point as well, then we get <span class="math-container">$c=0$</span>.)</p>
<p>So a polynomial that’s 0 on infinitely many points is 0 everywhere. So, going back to the beginning, if <span class="math-container">$p$</span> and <span class="math-container">$q$</span> are polynomials agreeing on infinitely many points, then <span class="math-container">$p-q$</span> is zero everywhere, i.e. <span class="math-container">$p=q$</span>.</p>
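To complement this, here is a sketch of the finite counterpart of the claim: once the degree is bounded, finitely many samples already pin the polynomial down exactly (the particular cubic below is just an illustrative choice), via Lagrange interpolation:

```python
def lagrange_eval(points, x):
    # evaluate the unique interpolating polynomial through `points` at x
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

p = lambda x: 2 * x ** 3 - x + 4                       # a "mystery" cubic
samples = [(x, p(x)) for x in (-1.0, 0.0, 1.0, 2.0)]   # any 4 points suffice

test_points = (0.5, 3.0, -2.5)
recovered = [lagrange_eval(samples, x) for x in test_points]
expected = [p(x) for x in test_points]
print(recovered, expected)
```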
|
627,871 | <p>Let $\mathbf{A}$ be an algebra (in the sense of universal algebra) of some signature $\Sigma$. By <em>quasi-identity</em> I mean the formula of the form</p>
<p>$$(\forall x_1) (\forall x_2) \dots (\forall x_n) \left(\left[\bigwedge_{i=1}^{k}t_i(x_1, \dots, x_n)=s_i(x_1, \dots, x_n)\right]\rightarrow t(x_1, \dots, x_n)=s(x_1, \dots, x_n) \right) \;, $$</p>
<p>where $t_i(x_1, \dots, x_n), s_i(x_1, \dots, x_n), t(x_1, \dots, x_n), s(x_1, \dots, x_n)$ are terms (using the algebra operations) with all its variables among $x_1, \dots, x_n$.</p>
<p>Since the class of all algebras (of the considered signature) satisfying some quasi-identity is (allegedly) not a variety in general, and such a class is clearly closed under taking subalgebras and products, it follows that it is not closed under taking quotients in general.</p>
<p>So the question is:</p>
<p><strong>Is there some (possibly elementary) example of an algebra satisfying some quasi-identity and its quotient where the quasi-identity does not hold?</strong></p>
<p>My only idea was to show this for the cancellation law in some monoid, but I do not see any example (mainly because I do not see how the quotients look like).</p>
<p>Thanks in advance for any help.</p>
| Bartek | 23,371 | <p>Another way of seeing this about cancellation is to notice that free semigroups/monoids are cancellative, and every semigroup/monoid is a quotient of one.</p>
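To make this concrete with an illustrative example (not from the answer): the free monoid on one generator, $\{1,a,a^2,a^3,\dots\}$, is cancellative, but its quotient by the relation $a^3=a^2$ is the three-element monoid $\{1,a,a^2\}$ with truncated exponent addition, and there cancellation fails:

```python
# elements are exponents 0, 1, 2 of a; the relation a^3 = a^2 caps sums at 2
def mul(i, j):
    return min(i + j, 2)

elems = (0, 1, 2)

# the operation is associative (truncated addition), so this is a monoid
associative = all(mul(mul(i, j), k) == mul(i, mul(j, k))
                  for i in elems for j in elems for k in elems)

# cancellation fails: a * a = a * a^2 although a != a^2
counterexample = mul(1, 1) == mul(1, 2) and 1 != 2
print(associative, counterexample)
```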
|
1,688,762 | <p>$$\int \sqrt{\frac{x}{2-x}}dx$$</p>
<p>can be written as:</p>
<p>$$\int x^{\frac{1}{2}}(2-x)^{\frac{-1}{2}}dx.$$</p>
<p>there is a formula that says that if we have the integral of the following type:</p>
<p>$$\int x^m(a+bx^n)^p dx,$$ </p>
<p>then:</p>
<ul>
<li>If $p \in \mathbb{Z}$ we simply use binomial expansion, otherwise:</li>
<li>If $\frac{m+1}{n} \in \mathbb{Z}$ we use substitution $(a+bx^n)^p=t^s$
where $s$ is denominator of $p$;</li>
<li>Finally, if $\frac{m+1}{n}+p \in \mathbb{Z}$ then we use substitution
$(a+bx^{-n})^p=t^s$ where $s$ is denominator of $p$.</li>
</ul>
<p>If we look at this example:</p>
<p>$$\int x^{\frac{1}{2}}(2-x)^{\frac{-1}{2}}dx,$$</p>
<p>we can see that $m=\frac{1}{2}$, $n=1$, and $p=\frac{-1}{2}$, which means that we have to use the third substitution, since $\frac{m+1}{n}+p = \frac{3}{2}-\frac{1}{2}=1$. But when I use that substitution I get an even more complicated integral with a square root, whereas when I tried the second substitution I got this:</p>
<p>$$2-x=t^2 \Rightarrow 2-t^2=x \Rightarrow dx=-2tdt,$$ </p>
<p>so when I implement this substitution I have:</p>
<p>$$\int \sqrt{2-t^2}\frac{1}{t}(-2tdt)=-2\int \sqrt{2-t^2}dt.$$</p>
<p>This means that we should do substitution once more, this time:</p>
<p>$$t=\sqrt{2}\sin y \Rightarrow y=\arcsin\frac{t}{\sqrt{2}} \Rightarrow dt=\sqrt{2}\cos ydy.$$</p>
<p>So now we have:</p>
<p>\begin{align*}
-2\int \sqrt{2-2\sin^2y}\sqrt{2}\cos ydy={}&-4\int\cos^2ydy = -4\int \frac{1+\cos2y}{2}dy={} \\
{}={}& -2\int dy -2\int \cos2ydy = -2y -\sin2y.
\end{align*}</p>
<p>Now, we have to return to variable $x$:</p>
<p>\begin{align*}
-2\arcsin\frac{t}{\sqrt{2}} -2\sin y\cos y ={}& -2\arcsin\frac{t}{\sqrt{2}} -2\frac{t}{\sqrt{2}}\sqrt\frac{2-t^2}{2}={} \\
{}={}& -2\arcsin\frac{t}{\sqrt{2}} -\sqrt{t^2(2-t^2)}.
\end{align*}</p>
<p>Now to $x$:</p>
<p>$$-2\arcsin\sqrt{\frac{2-x}{2}} - \sqrt{2x-x^2},$$</p>
<p>which would be just fine if I hadn't checked the solution in the workbook, where the given answer is:</p>
<p>$$2\arcsin\sqrt\frac{x}{2} - \sqrt{2x-x^2},$$ </p>
<p>When I took the derivative of this, it turned out that the solution in the workbook is correct, so I made a mistake and I don't know where; I would appreciate some help. I also have a question: why does the second substitution work better in this example, despite the theorem I mentioned above, which says that I should use the third substitution?</p>
| MickG | 135,592 | <p>Let me try do derive that antiderivative. You computed:</p>
<p>$$f(x)=\underbrace{-2\arcsin\sqrt{\frac{2-x}{2}}}_{f_1(x)}\underbrace{-\sqrt{2x-x^2}}_{f_2(x)}.$$</p>
<p>The easiest term is clearly $f_2$:</p>
<p>$$f_2'(x)=-\frac{1}{2\sqrt{2x-x^2}}\frac{d}{dx}(2x-x^2)=\frac{x-1}{\sqrt{2x-x^2}}.$$</p>
<p>Now the messier term. Recall that $\frac{d}{dx}\arcsin x=\frac{1}{\sqrt{1-x^2}}$. So:</p>
<p>\begin{align*}
f_1'(x)={}&-2\frac{1}{\sqrt{1-\left(\sqrt{\frac{2-x}{2}}\right)^2}}\frac{d}{dx}\sqrt{\frac{2-x}{2}}=-2\frac{1}{\sqrt{1-\frac{2-x}{2}}}\cdot\frac{1}{\sqrt2}\frac{d}{dx}\sqrt{2-x}={} \\
{}={}&-2\sqrt{\frac2x}\cdot\frac{1}{\sqrt2}\cdot\frac{1}{2\sqrt{2-x}}\cdot(-1)=\frac{2}{\sqrt x}\frac{1}{2\sqrt{2-x}}=\frac{1}{\sqrt{2x-x^2}}.
\end{align*}</p>
<p>So:</p>
<p>$$f'(x)=f_1'(x)+f_2'(x)=\frac{x}{\sqrt{2x-x^2}}=\frac{x}{\sqrt x}\frac{1}{\sqrt{2-x}}=\frac{\sqrt x}{\sqrt{2-x}},$$</p>
<p>which is your integrand. So you were correct after all! Or at least got the correct result, but no matter how I try, I cannot find an error in your calculations.</p>
<p>As for the book's solution, take your $f$, and compose it with $g(x)=2-x$. You get the book's solution, right? Except for a sign. But then $g'(x)=-1$, so the book's solution is also correct: just a different change of variables, probably, though I cannot really guess which.</p>
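<p>As a quick numeric sanity check (not part of the original posts), one can confirm with a short script that both antiderivatives differentiate to the integrand and differ only by the constant $\pi$; the function names below are mine:</p>

```python
import math

def integrand(x):
    return math.sqrt(x / (2 - x))

def F_op(x):    # the asker's antiderivative
    return -2 * math.asin(math.sqrt((2 - x) / 2)) - math.sqrt(2 * x - x * x)

def F_book(x):  # the workbook's antiderivative
    return 2 * math.asin(math.sqrt(x / 2)) - math.sqrt(2 * x - x * x)

def deriv(f, x, h=1e-6):
    # central finite difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

x = 0.7
assert abs(deriv(F_op, x) - integrand(x)) < 1e-5
assert abs(deriv(F_book, x) - integrand(x)) < 1e-5
# the two antiderivatives differ by a constant, namely pi
assert abs((F_book(0.7) - F_op(0.7)) - math.pi) < 1e-12
assert abs((F_book(1.3) - F_op(1.3)) - math.pi) < 1e-12
```

<p>So both answers are valid antiderivatives, differing by the constant of integration $\pi$.</p>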
|
1,688,762 | <p>$$\int \sqrt{\frac{x}{2-x}}dx$$</p>
<p>can be written as:</p>
<p>$$\int x^{\frac{1}{2}}(2-x)^{\frac{-1}{2}}dx.$$</p>
<p>There is a formula that says that if we have an integral of the following type:</p>
<p>$$\int x^m(a+bx^n)^p dx,$$ </p>
<p>then:</p>
<ul>
<li>If $p \in \mathbb{Z}$ we simply use binomial expansion, otherwise:</li>
<li>If $\frac{m+1}{n} \in \mathbb{Z}$ we use the substitution $(a+bx^n)^p=t^s$,
where $s$ is the denominator of $p$;</li>
<li>Finally, if $\frac{m+1}{n}+p \in \mathbb{Z}$ then we use the substitution
$(a+bx^{-n})^p=t^s$, where $s$ is the denominator of $p$.</li>
</ul>
<p>If we look at this example:</p>
<p>$$\int x^{\frac{1}{2}}(2-x)^{\frac{-1}{2}}dx,$$</p>
<p>we can see that $m=\frac{1}{2}$, $n=1$, and $p=\frac{-1}{2}$, which means that we have to use the third substitution, since $\frac{m+1}{n}+p = \frac{3}{2}-\frac{1}{2}=1$; but when I use that substitution I get an even more complicated integral with a square root. But when I tried the second substitution, I got this:</p>
<p>$$2-x=t^2 \Rightarrow 2-t^2=x \Rightarrow dx=-2tdt,$$ </p>
<p>so when I implement this substitution I have:</p>
<p>$$\int \sqrt{2-t^2}\frac{1}{t}(-2tdt)=-2\int \sqrt{2-t^2}dt.$$</p>
<p>This means that we should do substitution once more, this time:</p>
<p>$$t=\sqrt{2}\sin y \Rightarrow y=\arcsin\frac{t}{\sqrt{2}} \Rightarrow dt=\sqrt{2}\cos ydy.$$</p>
<p>So now we have:</p>
<p>\begin{align*}
-2\int \sqrt{2-2\sin^2y}\sqrt{2}\cos ydy={}&-4\int\cos^2ydy = -4\int \frac{1+\cos2y}{2}dy={} \\
{}={}& -2\int dy -2\int \cos2ydy = -2y -\sin2y.
\end{align*}</p>
<p>Now, we have to return to variable $x$:</p>
<p>\begin{align*}
-2\arcsin\frac{t}{\sqrt{2}} -2\sin y\cos y ={}& -2\arcsin\frac{t}{\sqrt{2}} -2\frac{t}{\sqrt{2}}\sqrt\frac{2-t^2}{2}={} \\
{}={}& -2\arcsin\frac{t}{\sqrt{2}} -\sqrt{t^2(2-t^2)}.
\end{align*}</p>
<p>Now to $x$:</p>
<p>$$-2\arcsin\sqrt{\frac{2-x}{2}} - \sqrt{2x-x^2},$$</p>
<p>which would be just fine if I hadn't checked the solution to this in the workbook, where the right answer is:</p>
<p>$$2\arcsin\sqrt\frac{x}{2} - \sqrt{2x-x^2},$$ </p>
<p>and when I found the derivative of this, it turned out that the solution in the workbook is correct. So I made a mistake and I don't know where, and I would appreciate some help. I also have a question: why does the second substitution work better in this example, despite the theorem I mentioned above, which says that I should use the third substitution for this example?</p>
| Machinato | 240,067 | <p>Alternative solution - let $x=2t^2$, then</p>
<p>$$I=\int\sqrt{\frac{x}{2-x}}\mathrm{d}x=4\int\frac{t^2}{\sqrt{1-t^2}}\mathrm{d}t=4J$$</p>
<p>By parts we have</p>
<p>$$J=-t\sqrt{1-t^2}+\int\sqrt{1-t^2}\;\mathrm{d}t = -t\sqrt{1-t^2}+\int\frac{1-t^2}{\sqrt{1-t^2}}\;\mathrm{d}t\!=\!-t\sqrt{1-t^2}+\arcsin t-J $$</p>
<p>Hence</p>
<p>$$I=4J=2\cdot 2J =2\arcsin t -2t\sqrt{1-t^2} = 2\arcsin\sqrt{\frac{x}{2}}-\sqrt{2x-x^2} + C$$</p>
<p>The solutions are equivalent because of the formula:
$$\arcsin x= \frac{\pi}{2}-\arcsin{\sqrt{1-x^2}} $$</p>
<p>To see this, take $\sin$ of both sides, using the fact that $\sin (\frac{\pi}{2}-x)=\cos x$:</p>
<p>$$ x= \cos\arcsin{\sqrt{1-x^2}}=\sqrt{1-\sin^2{\arcsin{\sqrt{1-x^2}}}} =\sqrt{1-(1-x^2)} = x $$</p>
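<p>The quoted identity holds for $0 \le x \le 1$, which suffices here since $t=\sqrt{x/2}\ge 0$; a quick numeric check on a grid, added for illustration:</p>

```python
import math

# check arcsin(x) = pi/2 - arcsin(sqrt(1 - x^2)) on a grid in [0, 1]
max_err = max(
    abs(math.asin(x) - (math.pi / 2 - math.asin(math.sqrt(1 - x * x))))
    for x in [i / 100 for i in range(101)]
)
assert max_err < 1e-12
```

<p>(For $x<0$ the identity fails, so the restriction to nonnegative $x$ matters.)</p>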
|
1,688,762 | <p>$$\int \sqrt{\frac{x}{2-x}}dx$$</p>
<p>can be written as:</p>
<p>$$\int x^{\frac{1}{2}}(2-x)^{\frac{-1}{2}}dx.$$</p>
<p>There is a formula that says that if we have an integral of the following type:</p>
<p>$$\int x^m(a+bx^n)^p dx,$$ </p>
<p>then:</p>
<ul>
<li>If $p \in \mathbb{Z}$ we simply use binomial expansion, otherwise:</li>
<li>If $\frac{m+1}{n} \in \mathbb{Z}$ we use the substitution $(a+bx^n)^p=t^s$,
where $s$ is the denominator of $p$;</li>
<li>Finally, if $\frac{m+1}{n}+p \in \mathbb{Z}$ then we use the substitution
$(a+bx^{-n})^p=t^s$, where $s$ is the denominator of $p$.</li>
</ul>
<p>If we look at this example:</p>
<p>$$\int x^{\frac{1}{2}}(2-x)^{\frac{-1}{2}}dx,$$</p>
<p>we can see that $m=\frac{1}{2}$, $n=1$, and $p=\frac{-1}{2}$, which means that we have to use the third substitution, since $\frac{m+1}{n}+p = \frac{3}{2}-\frac{1}{2}=1$; but when I use that substitution I get an even more complicated integral with a square root. But when I tried the second substitution, I got this:</p>
<p>$$2-x=t^2 \Rightarrow 2-t^2=x \Rightarrow dx=-2tdt,$$ </p>
<p>so when I implement this substitution I have:</p>
<p>$$\int \sqrt{2-t^2}\frac{1}{t}(-2tdt)=-2\int \sqrt{2-t^2}dt.$$</p>
<p>This means that we should do substitution once more, this time:</p>
<p>$$t=\sqrt{2}\sin y \Rightarrow y=\arcsin\frac{t}{\sqrt{2}} \Rightarrow dt=\sqrt{2}\cos ydy.$$</p>
<p>So now we have:</p>
<p>\begin{align*}
-2\int \sqrt{2-2\sin^2y}\sqrt{2}\cos ydy={}&-4\int\cos^2ydy = -4\int \frac{1+\cos2y}{2}dy={} \\
{}={}& -2\int dy -2\int \cos2ydy = -2y -\sin2y.
\end{align*}</p>
<p>Now, we have to return to variable $x$:</p>
<p>\begin{align*}
-2\arcsin\frac{t}{\sqrt{2}} -2\sin y\cos y ={}& -2\arcsin\frac{t}{\sqrt{2}} -2\frac{t}{\sqrt{2}}\sqrt\frac{2-t^2}{2}={} \\
{}={}& -2\arcsin\frac{t}{\sqrt{2}} -\sqrt{t^2(2-t^2)}.
\end{align*}</p>
<p>Now to $x$:</p>
<p>$$-2\arcsin\sqrt{\frac{2-x}{2}} - \sqrt{2x-x^2},$$</p>
<p>which would be just fine if I hadn't checked the solution to this in the workbook, where the right answer is:</p>
<p>$$2\arcsin\sqrt\frac{x}{2} - \sqrt{2x-x^2},$$ </p>
<p>and when I found the derivative of this, it turned out that the solution in the workbook is correct. So I made a mistake and I don't know where, and I would appreciate some help. I also have a question: why does the second substitution work better in this example, despite the theorem I mentioned above, which says that I should use the third substitution for this example?</p>
| notuserealname | 568,250 | <p>Let $u=\sqrt{2-x}$ then we simply want</p>
<p>$-2\int \sqrt{2-u^2}du$ which is simple after $u=\sqrt{2}\sin{v}$</p>
|
3,978,303 | <p><strong>Background</strong></p>
<p>The following Euler product for the Riemann zeta function is well known.</p>
<p><span class="math-container">$$ \sum_n \frac{1}{n^s} = \prod_p (1-\frac{1}{p^s})^{-1} $$</span></p>
<p>Here <span class="math-container">$n$</span> ranges over all positive integers, <span class="math-container">$p$</span> over all primes, and <span class="math-container">$s>1$</span> is real.</p>
<hr />
<p><strong>Common Proof Strategy</strong></p>
<p>Many derivations / proofs, found in textbooks and papers, consider the following expression.</p>
<p><span class="math-container">$$(1 - \frac{1}{p^s})^{-1} = 1 + \frac{1}{p^s} + \frac{1}{p^{2s}} + \frac{1}{p^{3s}} + \ldots$$</span></p>
<p>The LHS is finite for any given <span class="math-container">$p$</span> and the series expansion is valid because <span class="math-container">$\frac{1}{p} < 1$</span>.</p>
<p>The following takes the product over all primes.</p>
<p><span class="math-container">$$\prod_{p_i} (1-\frac{1}{p_i^s})^{-1} = 1 + \frac{1}{p_1^s} + \frac{1}{p_1^{2s}} + \ldots + \frac{1}{p_1^sp_2^{s}} + \frac{1}{p_1^sp_3^{s}} + \ldots $$</span></p>
<p>The LHS is a product of finite and non-zero factors.</p>
<p>The RHS has terms of the form <span class="math-container">$\frac{1}{X}$</span> where <span class="math-container">$X$</span> contains all combinations of the primes, and all combinations of powers of the primes.</p>
<p>It is common to apply the Fundamental Theorem of Arithmetic to see that there is one term X for each integer <span class="math-container">$n$</span>, and therefore the RHS is the desired <span class="math-container">$\sum\frac{1}{n^s}$</span>.</p>
<hr />
<p><strong>Challenge</strong></p>
<p>A challenge (for example <a href="https://math.stackexchange.com/a/3970823/319008">here</a>) that has been raised to this very common proof logic is that there are terms <span class="math-container">$X$</span> with an infinite number of factors in the denominator, for example:</p>
<p><span class="math-container">$$ \frac{1}{(2^2\cdot3^2\cdot 5^2\cdot 7^2 \cdot\ldots)^s} $$</span></p>
<p>or another simpler example:</p>
<p><span class="math-container">$$ \frac{1}{(2\cdot 2 \cdot 2\cdot 2\cdot\ldots)^s} $$</span></p>
<hr />
<p><strong>Question</strong></p>
<p>Is the challenge valid?</p>
<p>I am not a trained mathematician, but in my opinion the proof strategy is valid because terms <span class="math-container">$X$</span> with denominators with an infinite number of prime factors are equivalent to zero. That is:</p>
<p><span class="math-container">$$ \frac{1}{(2^2\cdot3^2\cdot 5^2\cdot 7^2 \cdot\ldots)^s} = 0$$</span></p>
<p>and</p>
<p><span class="math-container">$$ \frac{1}{(2\cdot 2 \cdot 2\cdot 2\cdot\ldots)^s} = 0$$</span></p>
<p>My assertion is that the proof strategy remains valid because any finite integer <span class="math-container">$n$</span> has a single finite non-zero term <span class="math-container">$X$</span>, and those <span class="math-container">$X$</span> with infinitely long denominators can be discarded because they are zero.</p>
| Thomas Andrews | 7,933 | <p>The way to prove this rigorously is to show that:</p>
<p><span class="math-container">$$\sum_{n\leq N}\frac1{n^s}\leq \prod_{p\leq N}\left(1-1/p^s\right)^{-1}\leq\sum_{n=1}^{\infty}\frac1{n^s}$$</span> This you can get because the product in the middle is finite, so you can use the argument without worrying about infinitely many primes.</p>
<p>Then use the squeeze theorem as <span class="math-container">$N\to\infty.$</span></p>
<p>This technique works only for real <span class="math-container">$s>1,$</span> because the inequalities are not true for <span class="math-container">$s$</span> complex.</p>
<hr />
<p>At heart the proof is to let <span class="math-container">$$A_N=\{n\geq 1\mid n\text{ has no prime factors }>N\}$$</span></p>
<p>Then you can show that:</p>
<p><span class="math-container">$$\prod_{p\leq N}\left(1-1/p^s\right)^{-1} =\sum_{n\in A_N} \frac1{n^s}$$</span></p>
<p>Then you get the left-hand inequality because <span class="math-container">$1,2,\dots,N\in A_N$</span> and the right hand side because <span class="math-container">$A_N\subseteq \mathbb Z^+.$</span></p>
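<p>The squeeze above can be checked numerically; the following sketch (helper names are mine, not from the answer) verifies both inequalities for <span class="math-container">$s=2$</span> and <span class="math-container">$N=50$</span>, using <span class="math-container">$\zeta(2)=\pi^2/6$</span> for the full sum:</p>

```python
import math

def primes_up_to(n):
    # simple sieve of Eratosthenes
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for q in range(p * p, n + 1, p):
                sieve[q] = False
    return [p for p in range(2, n + 1) if sieve[p]]

s, N = 2, 50
partial_sum = sum(1 / n ** s for n in range(1, N + 1))
finite_product = 1.0
for p in primes_up_to(N):
    finite_product /= (1 - 1 / p ** s)
zeta2 = math.pi ** 2 / 6  # the infinite sum for s = 2

# the squeeze: sum_{n<=N} <= prod_{p<=N} <= sum_{n=1}^infty
assert partial_sum <= finite_product <= zeta2 + 1e-12
```

<p>The finite product sums <span class="math-container">$1/n^s$</span> over exactly the <span class="math-container">$N$</span>-smooth integers, which is why it sits strictly between the two sums.</p>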
|
4,602,683 | <p>Let <span class="math-container">$\mathbb{F}$</span> be a field, and consider <span class="math-container">$\mathbb{F}^\mathbb{F}$</span> as an algebra over <span class="math-container">$\mathbb{F}$</span> with the standard function multiplication. Let <span class="math-container">$D$</span> be a linear transformation on a subalgebra of <span class="math-container">$\mathbb{F}^\mathbb{F}$</span> closed under function composition that satisfies the chain rule. Does <span class="math-container">$D$</span> necessarily satisfy the product rule for arbitrary <span class="math-container">$\mathbb{F}$</span>? (Inspired by a comment on <a href="https://math.stackexchange.com/questions/4602208/does-the-product-rule-imply-the-chain-rule">this question.</a>) What if the subalgebra must be unital?</p>
| Eric Wofsey | 86,856 | <p>Let <span class="math-container">$\mathbb{F}=\mathbb{F}_2$</span> and consider <span class="math-container">$D:\mathbb{F}_2^{\mathbb{F}_2}\to\mathbb{F}_2^{\mathbb{F}_2}$</span> which sends the constant functions to <span class="math-container">$0$</span> and the nonconstant functions to <span class="math-container">$1$</span>. It is easy to check that this satisfies the chain rule but it does not satisfy the product rule since <span class="math-container">$D(i\cdot i)=D(i)=1\neq 2iD(i)=0$</span> where <span class="math-container">$i$</span> denotes the identity function.</p>
<p>On the other hand, if <span class="math-container">$\mathbb{F}$</span> has characteristic different from <span class="math-container">$2$</span>, then the chain rule actually does imply the product rule (assuming your subalgebra <span class="math-container">$A\subseteq\mathbb{F}^\mathbb{F}$</span> is unital and contains the identity function <span class="math-container">$i$</span>). First, note that <span class="math-container">$$D(1)=D(1\circ 0)=(D(1)\circ 0)\cdot D(0)=0$$</span> since <span class="math-container">$D(0)=0$</span>. Also, for any <span class="math-container">$f\in A$</span>, <span class="math-container">$$D(f)=D(f\circ i)=D(f)D(i).$$</span> Taking <span class="math-container">$f=i$</span> gives <span class="math-container">$D(i)=D(i)^2$</span>, so <span class="math-container">$D(i)$</span> can only take the values <span class="math-container">$0$</span> and <span class="math-container">$1$</span>. Let <span class="math-container">$S\subseteq\mathbb{F}$</span> be the set of inputs on which <span class="math-container">$D(i)$</span> is <span class="math-container">$0$</span>; then <span class="math-container">$D(f)$</span> vanishes on <span class="math-container">$S$</span> for all <span class="math-container">$f\in A$</span>. For each <span class="math-container">$a\in\mathbb{F}$</span>, let <span class="math-container">$s_a$</span> be the function <span class="math-container">$a-i$</span>. Note that <span class="math-container">$s_a\circ s_a=i$</span> so <span class="math-container">$$D(i)=(D(s_a)\circ s_a)\cdot D(s_a).$$</span> Comparing the vanishing sets of each side of this equation, we see that <span class="math-container">$S\supseteq s_a^{-1}(S)=s_a(S)$</span>. Since <span class="math-container">$a\in\mathbb{F}$</span> is arbitrary, this means <span class="math-container">$S$</span> must be either <span class="math-container">$\emptyset$</span> or all of <span class="math-container">$\mathbb{F}$</span>. 
If <span class="math-container">$S=\mathbb{F}$</span> then <span class="math-container">$D(i)=0$</span> and thus <span class="math-container">$D=0$</span> and the conclusion is trivial. So let us assume <span class="math-container">$S=\emptyset$</span>, which means <span class="math-container">$D(i)=1$</span>.</p>
<p>Now let <span class="math-container">$f=D(i^2)$</span>. For any <span class="math-container">$a,b\in\mathbb{F}$</span>, we have <span class="math-container">$$D((ai+b)^2)=(f\circ (ai+b))D(ai+b)=af\circ(ai+b)$$</span> but also <span class="math-container">$$D((ai+b)^2)=D(a^2i^2+2abi+b^2)=a^2f+2ab.$$</span> That is, for each <span class="math-container">$x\in\mathbb{F}$</span>, <span class="math-container">$$af(ax+b)=a^2f(x)+2ab,$$</span> or <span class="math-container">$$f(ax+b)=af(x)+2b$$</span> as long as <span class="math-container">$a\neq 0$</span>. Plugging in <span class="math-container">$x=0$</span> gives <span class="math-container">$$f(b)=af(0)+2b.$$</span> Since <span class="math-container">$\mathbb{F}$</span> has more than <span class="math-container">$2$</span> elements (so there are multiple different choices for <span class="math-container">$a$</span>) this implies <span class="math-container">$f(0)=0$</span> and thus <span class="math-container">$f(b)=2b$</span>. That is, <span class="math-container">$f=2i$</span>.</p>
<p>Now let <span class="math-container">$f,g,\in A$</span> be arbitrary and consider <span class="math-container">$D((f+g)^2)$</span>. On one hand, <span class="math-container">$$D((f+g)^2)=(2i\circ (f+g))\cdot D(f+g)=2(f+g)D(f+g).$$</span> On the other hand, <span class="math-container">$$D((f+g)^2)=D(f^2+2fg+g^2)=2fD(f)+2D(fg)+2gD(g).$$</span> Since <span class="math-container">$2\neq 0$</span>, comparing these two equations gives <span class="math-container">$D(fg)=fD(g)+gD(f)$</span>, as desired.</p>
<p>(I don't know what can be said if <span class="math-container">$\mathbb{F}$</span> has characteristic <span class="math-container">$2$</span> but more than <span class="math-container">$2$</span> elements.)</p>
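<p>The characteristic-2 counterexample is small enough to verify exhaustively by machine; the encoding below (my own, for illustration) represents a function <span class="math-container">$\mathbb{F}_2\to\mathbb{F}_2$</span> as the pair <span class="math-container">$(f(0),f(1))$</span>:</p>

```python
# all four functions F_2 -> F_2, encoded as (f(0), f(1))
FUNCS = [(a, b) for a in (0, 1) for b in (0, 1)]

def D(f):
    # constants -> the zero function, nonconstants -> the constant-one function
    return (0, 0) if f[0] == f[1] else (1, 1)

def compose(f, g):  # (f o g)(x) = f(g(x))
    return (f[g[0]], f[g[1]])

def mul(f, g):      # pointwise product mod 2
    return ((f[0] * g[0]) % 2, (f[1] * g[1]) % 2)

def add(f, g):      # pointwise sum mod 2
    return ((f[0] + g[0]) % 2, (f[1] + g[1]) % 2)

# chain rule D(f o g) = (D(f) o g) * D(g) holds for all 16 pairs
chain_ok = all(D(compose(f, g)) == mul(compose(D(f), g), D(g))
               for f in FUNCS for g in FUNCS)
assert chain_ok

# product rule fails at f = g = identity: D(i*i) = D(i) = 1,
# while i*D(i) + i*D(i) = 2*i*D(i) = 0 in characteristic 2
i = (0, 1)
assert D(mul(i, i)) == (1, 1)
assert add(mul(i, D(i)), mul(i, D(i))) == (0, 0)
```
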
|
<p>The following situation is ubiquitous in mathematical physics. Let $\Lambda_N$ be a finite-size lattice with linear size $N$. A typical example would be the subset of $\mathbb{Z}\times\mathbb{Z}$ given by those pairs of integers $(j,k)$ such that $j,k \in \{0,\ldots,N-1\}$. On each vertex $j$ of the lattice place a copy of the vector space $\mathbb{C}^d$. The total space will be the tensor product of all of these spaces. Then define a Hamiltonian acting on this total space as follows:
$$ H = \sum_{k \in \Lambda_N} h_k$$
for some Hermitian matrices $h_k$ which act like the identity everywhere except on the vector spaces located on site $k$ and in the neighborhood surrounding $k$. Typically, one is interested in the case where there is a translational symmetry (except at the boundary) in the definition of the $h_k$. Denote the eigenvalues of $H$ in increasing order by $\lambda_1 \le \lambda_2 \le \ldots \le \lambda_M$.</p>
<blockquote>
<p>For an arbitrary fixed family of Hamiltonians $H$, what proof techniques exist for computing an upper and a lower bound on $\Delta = \lambda_2 - \lambda_1$ as a function of $N$? In particular, we want to know if $\Delta$ decays to zero as a function of $N$, or if it is lower-bounded by some constant independent of $N$.</p>
</blockquote>
<p>The gap $\Delta$ is the energy gap between the ground state and the first excited state of an interacting quantum system. Understanding this quantity tremendously impacts our understanding of the different phases of matter, but it is extremely difficult to compute or even bound for all but the simplest cases (like when all the $h_k$ commute). This difficulty persists even when there is significant additional (physically motivated) structure in the problem, such as considering only $h_k$ which are projectors, and where there is a unique zero-energy eigenstate (all others having positive energy for any finite $N$).</p>
<p>More general formulations of this question also have applications to expansion properties of graphs, mixing times of Markov chains, and many other things. I’m happy to hear answers related to these as well, but I’m hoping to find answers that are useful for the structure of local Hamiltonians, as defined above.</p>
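<p>To illustrate the flavor of a gap closing with system size, one can look at a much simpler toy than the many-body problem: a single-particle hopping chain (the discrete Dirichlet Laplacian), whose spectrum is known in closed form. The sketch below (all names are mine; pure standard library, with a hand-rolled Jacobi eigensolver) is only an analogue of the problem posed above, not the interacting case:</p>

```python
import math

def jacobi_eigvals(A, tol=1e-12, max_rot=20000):
    """Eigenvalues of a real symmetric matrix via classical Jacobi rotations."""
    n = len(A)
    A = [row[:] for row in A]
    for _ in range(max_rot):
        # locate the largest off-diagonal entry
        p, q, off = 0, 1, 0.0
        for i in range(n):
            for j in range(i + 1, n):
                if abs(A[i][j]) > off:
                    off, p, q = abs(A[i][j]), i, j
        if off < tol:
            break
        theta = 0.5 * math.atan2(2 * A[p][q], A[q][q] - A[p][p])
        c, s = math.cos(theta), math.sin(theta)
        for k in range(n):  # A <- A J
            akp, akq = A[k][p], A[k][q]
            A[k][p], A[k][q] = c * akp - s * akq, s * akp + c * akq
        for k in range(n):  # A <- J^T A
            apk, aqk = A[p][k], A[q][k]
            A[p][k], A[q][k] = c * apk - s * aqk, s * apk + c * aqk
    return sorted(A[i][i] for i in range(n))

def gap(N):
    # open-chain hopping Hamiltonian: 2 on the diagonal, -1 on the off-diagonals
    H = [[2.0 if i == j else (-1.0 if abs(i - j) == 1 else 0.0)
          for j in range(N)] for i in range(N)]
    ev = jacobi_eigvals(H)
    return ev[1] - ev[0]

# exact spectrum is 2 - 2 cos(k*pi/(N+1)), so the gap closes like 1/N^2
def exact_gap(N):
    return 2 * (math.cos(math.pi / (N + 1)) - math.cos(2 * math.pi / (N + 1)))

assert abs(gap(8) - exact_gap(8)) < 1e-8
assert gap(16) < gap(8)  # the gap shrinks as the system grows
```

<p>For genuinely interacting spin chains the Hilbert space dimension grows as $d^N$, which is exactly why the bounding techniques asked about here are so valuable.</p>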
| jjcale | 17,261 | <p>For VBS quantum antiferromagnets in one dimension see also :</p>
<p>Ian Affleck, Tom Kennedy, Elliott H. Lieb and Hal Tasaki,
Valence bond ground states in isotropic quantum antiferromagnets.
Comm. Math. Phys., Volume 115, Number 3 (1988) </p>
<p>and</p>
<p>Stefan Knabe, Energy gaps and elementary excitations for certain VBS-quantum antiferromagnets, Journal of Statistical Physics
Volume 52, Numbers 3-4, 1988</p>
|
<p>I am wondering if this is generally true for any topology. I think there might be counterexamples, but I am having trouble generating them. </p>
| MPW | 113,214 | <p>A punctured disk and a slit disk are easy examples in the plane.</p>
|
<p>I am wondering if this is generally true for any topology. I think there might be counterexamples, but I am having trouble generating them. </p>
| Ilmari Karonen | 9,602 | <p>Since the complement of an open set is closed (and vice versa), and since the complement of the interior is the closure of the complement, we can rephrase your question equivalently as:</p>
<blockquote>
<p>Is every closed set the closure of some open set?</p>
</blockquote>
<p>This immediately suggests a counterexample: any singleton (i.e. a set containing only one point) is closed in $\mathbb R^n$ (with the usual Euclidean topology), but has no non-empty open subsets that it could be the closure of.</p>
<p>Conversely, the complement of any singleton (i.e. $\mathbb R^n \setminus \{x\}$ for any $x \in \mathbb R^n$) provides a counterexample to your original claim, being an open set that cannot be the interior of any closed set.</p>
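<p>A finite analogue of the same phenomenon can be checked mechanically: in the Sierpiński space, the open set $\{a\}$ is not the interior of any closed set. The code below is an illustrative sketch, not part of the original answer:</p>

```python
from itertools import chain

X = frozenset({'a', 'b'})
opens = [frozenset(), frozenset({'a'}), X]   # the Sierpinski topology
closeds = [X - O for O in opens]             # complements of the open sets

def interior(S):
    # the union of all open sets contained in S
    return frozenset(chain.from_iterable(O for O in opens if O <= S))

interiors_of_closed = {interior(C) for C in closeds}
# {a} is open, yet it is not the interior of any closed set
assert frozenset({'a'}) in set(opens)
assert frozenset({'a'}) not in interiors_of_closed
```
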
|
50,002 | <p>a general version: connected sums of closed manifold is orientable iff both are orientable.
I think this can be prove by using homology theory, but I don't know how.Thanks.</p>
| PseudoNeo | 7,085 | <p>You can also have a differential eye on that matter. I will use a less precise vocabulary than in the other answers.</p>
<p>A manifold is orientable if and only if, when you follow a (smooth) path, you never come back to the starting point with the orientation reversed (as happens for example in the Möbius band). That can be seen using the orientation cover, if you know what this is, or as the most trivial result in obstruction theory.</p>
<p>In a connected sum, I can see the whole manifold as the union of the two pieces along a thickening of a sphere of codimension one. On this intersection, I can fix a compatible orientation once and for all. Now, if I had a path starting in this sphere and ending there, but where orientation is reversed, I could slightly modify that path in order to ensure that it has only finitely intersection points with the sphere. So my path is the union of finitely many paths starting from the sphere, ending there and never coming back there meanwhile. As I have a coherent choice of orientation along this sphere, at least one of those paths has changed orientation, so at least one of the pieces isn't orientable.</p>
<p>I believe that this argument can be made rigorous in several ways (using the orientation cover, using the first Stiefel-Whitney class, using a differential topology definition of orientation, etc.) but I think that regardless of the formalisation you choose, it really tells you the whole story.</p>
|
942,470 | <p>I am trying to count how many functions there are from a set $A$ to a set $B$. The answer to this (and many textbook explanations) are readily available and accessible; I am <strong>not</strong> looking for the answer to that question and <strong>please do not post it</strong>. Instead I want to know what fundamental mistake(s) I am making in counting the number of these functions. My reasoning is below, which I know is wrong after checking this question: <a href="https://math.stackexchange.com/questions/639326/how-many-functions-there-is-from-3-element-set-to-2-element-set">How many functions there is from 3 element set to 2 element set?</a>.</p>
<hr>
<p>For an example case, I consider counting how many functions there are from set $A = \{0,1\}$ to set $B = \{a,b\}$. My understanding of the term <em>function</em> is that it is any possible mapping between elements of set $A$ to elements of set $B$. Thus, a possible function $F: A \times B$ is the function that maps each element of $A$ to no element of $B$, i.e. $f_0(0) = \emptyset, f_0(1) = \emptyset$. Another possible function is $f_1(0) = a, f_1(1) = \{a, b\}$. </p>
<p>I notice a pattern here: for each element of the set $A$, there are $|\mathcal P (B)|$ unique combinations of elements that it can map to. In this case, $\mathcal P(B) = \{\{a,b\}, \{a\}, \{b\}, \emptyset\}$. To count these functions, then, we can use the product rule, since the choice of what each element of $A$ maps to does not affect what another element of $A$ can map to (since we consider all functions). </p>
<p>There are $4$ choices for $0$ and $4$ choices for $1$. Therefore there are $16$ unique functions $F: A \times B$. For a sanity check, I've listed out all <strong>16</strong> possible functions.</p>
<p>$f_0(0) = \emptyset, f_0(1) = \emptyset$</p>
<p>$f_1(0) = \emptyset, f_1(1) = \{a\}$</p>
<p>$f_2(0) = \emptyset, f_2(1) = \{b\}$</p>
<p>$f_3(0) = \emptyset, f_3(1) = \{a, b\}$</p>
<p>$f_4(0) = \{a\}, f_4(1) = \emptyset$</p>
<p>$f_5(0) = \{a\}, f_5(1) = \{a\}$</p>
<p>$f_6(0) = \{a\}, f_6(1) = \{b\}$</p>
<p>$f_7(0) = \{a\}, f_7(1) = \{a, b\}$</p>
<p>$f_8(0) = \{b\}, f_8(1) = \emptyset$</p>
<p>$f_9(0) = \{b\}, f_9(1) = \{a\}$</p>
<p>$f_{10}(0) = \{b\}, f_{10}(1) = \{b\}$</p>
<p>$f_{11}(0) = \{b\}, f_{11}(1) = \{a, b\}$</p>
<p>$f_{12}(0) = \{a,b\}, f_{12}(1) = \emptyset$</p>
<p>$f_{13}(0) = \{a,b\}, f_{13}(1) = \{a\}$</p>
<p>$f_{14}(0) = \{a,b\}, f_{14}(1) = \{b\}$</p>
<p>$f_{15}(0) = \{a,b\}, f_{15}(1) = \{a, b\}$</p>
<p>The generalization: The number of functions $F: A \times B$ is $|\mathcal P(B)|^{|A|}$.</p>
<hr>
<p>Now I know my reasoning is completely wrong, but why? Am I double counting? Do I misunderstand the definition of a function? </p>
| Andrew | 154,986 | <p>Technically, what you've done in your example is defined all possible functions $f:A \to \mathcal{P}(B)$. That is, you're sending elements of $A$ to elements of $\mathcal{P}(B)$. If you want to count functions $f:A \to B$, then the outputs must be <em>elements</em> of $B$, not <em>subsets</em> of $B$.</p>
<p>Another way to say this is that a function from $A$ to $B$ is a subset of $A\times B$. The things you list are really subsets of $A \times \mathcal{P}(B)$, since you have pairs of elements of $A$ with subsets of $B$.</p>
|
1,480,331 | <blockquote>
<p>Let $A$ be an $m \times n$ matrix with $m < n$ and $\operatorname{rank}(A) = m$. Prove that there exist infinitely many matrices $B$ such that $AB = I$.</p>
</blockquote>
<p>Stumped. How do I begin to prove this?</p>
| lulu | 252,071 | <p>Think of it this way: Throwing a $2$ or a $4$ is meaningless, so ignore those cases. Without them we have only $4$ equally likely events: $\{6,Odd,Odd,Odd\}$. The probability of getting the $6$ first is then seen to be just $\frac 14$</p>
|
288,499 | <p>Simply stated, I've been trying for a long time to either find in the literature, or derive myself, a notion of path in Cech closure spaces, that specialises to paths in a topological space, and to graph-like paths in so-called "quasi-discrete closure spaces". </p>
<p>Let me recall the definitions:</p>
<p>A closure space is a pair <span class="math-container">$(X,C)$</span> where <span class="math-container">$C : \mathcal P (X) \to \mathcal P (X)$</span> is a function satisfying <span class="math-container">$C(\emptyset) = \emptyset$</span>, <span class="math-container">$A \subseteq C(A)$</span>, <span class="math-container">$C(A \cup B) = C(A) \cup C(B)$</span>.</p>
<p>A continuous function <span class="math-container">$f$</span> is a function between two spaces such that <span class="math-container">$f(C(A)) \subseteq C(f(A))$</span></p>
<p>A topological space is (via the Kuratowski definition) a closure space with the additional axiom <span class="math-container">$C(C(A)) = C(A)$</span> (idempotence of closure).</p>
<p>Any reflexive relation <span class="math-container">$R$</span> on a set <span class="math-container">$X$</span> generates a closure space by <span class="math-container">$C(A) = \{y \in X \mid \exists x \in A .\ x R y\}$</span>. That's called a "quasi-discrete closure space". </p>
<p>Topological paths are defined as continuous functions from the unit interval. </p>
<p>Let me now make two examples. </p>
<p>Example 1: <span class="math-container">$\mathbb R^2$</span>. Topological paths work fine (indeed!). </p>
<p>Example 2: the closure space on <span class="math-container">$\mathbb N$</span> generated by the successor relation. It's a nice closure space, but topological paths exist that do not "follow the edges" of non-symmetric relations, due to "directionality" of <span class="math-container">$R$</span>; topology (e.g. the unit interval) is intrinsically symmetric; relations are not. For an example of this consider the set <span class="math-container">$\{a,b\}$</span> and the relation <span class="math-container">$R = \{ (a,b) \}$</span>. This generates a quasi-discrete closure space. Consider the function <span class="math-container">$f : [0,1] \to \{a,b\}$</span> with <span class="math-container">$f(0) = b$</span> and <span class="math-container">$f((0,1]) = a$</span>. This function is continuous but not a graph-like path in <span class="math-container">$R$</span>.</p>
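<p>The quasi-discrete construction can be explored on a small finite relation; the sketch below (the function name is my own) checks the three Čech closure axioms and shows that the closure need not be idempotent, so such a space is generally not topological:</p>

```python
def quasi_discrete_closure(X, R):
    """Closure operator generated by a reflexive relation R on X."""
    def C(A):
        return frozenset(y for y in X if any((x, y) in R for x in A))
    return C

X = frozenset({'a', 'b', 'c'})
# reflexive relation with edges a -> b -> c
R = {(x, x) for x in X} | {('a', 'b'), ('b', 'c')}
C = quasi_discrete_closure(X, R)

subsets = [frozenset(s) for s in
           ([], ['a'], ['b'], ['c'], ['a', 'b'], ['a', 'c'], ['b', 'c'], ['a', 'b', 'c'])]
# the three Cech closure axioms
assert C(frozenset()) == frozenset()
assert all(A <= C(A) for A in subsets)
assert all(C(A | B) == C(A) | C(B) for A in subsets for B in subsets)
# but the closure is not idempotent, so this space is not topological
assert C(frozenset({'a'})) == frozenset({'a', 'b'})
assert C(C(frozenset({'a'}))) == frozenset({'a', 'b', 'c'})
```
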
<p>Further clarifications (due to comments)</p>
<p>I understand that [0,1]-paths in topological spaces can't be directional. That's absolutely the case and for good reasons. But then, is there a more general construction that becomes the "natural" notion of path in closure spaces, and in topological spaces, it is not directional, since this is very natural in topological spaces?</p>
<p>Let's say it more formally: perhaps there's a universal construction in the category of closure spaces, of which topological spaces are a full subcategory, that captures the notion of paths in such a way that in directional graph structures like quasi-discrete closure spaces, paths are directional, and in topological spaces, paths are in one-to-one-correspondence to classical, topological paths, a.k.a. <span class="math-container">$[0,1]$</span>-morphisms?</p>
| user2554 | 118,562 | <p>Gauss's procedure leads to Bolyai's result on the volume of orthoscheme tetrahedron, as I'll show here. However, Gauss's result is a little bit more limited than Bolyai, since Gauss refers to an orthoscheme tetrahedron of which 4 of the 12 face angles of the tetrahedron are right (each face is an hyperbolic right triangle), while Bolyai refers to a slightly more general tetrahedron whose only 3 face angles ar right.</p>
<p>In order to help visualize the relations, I added here a pic of Gauss's note.</p>
<p><a href="https://i.stack.imgur.com/Ac163.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ac163.png" alt="enter image description here" /></a></p>
<p><strong>Preliminary discussion:</strong></p>
<p>To see the connection between the Schläfli formula and the first formula in Gauss's fragment, one needs to understand that Gauss thinks of the tetrahedron 1234 in such a way that the faces 124 and 134 are perpendicular and the edges 24 and 13 meet the intersection line 14 also at right angles. Therefore, the dihedral angles at sides 12 and 14 are constant right angles and don't contribute to the sum in the Schläfli formula. In addition, Gauss defines the tetrahedron in such a way that the angles at vertex 3 are constant (so that an "observer" in hyperbolic space located at vertex 3 sees the rest of the vertices at constant lines of sight). Since the three face angles at vertex 3 correspond to the lengths of the sides of a spherical triangle, and the dihedral angles at sides 31, 32, 34 correspond to the angles of this spherical triangle, one gets that constancy of the face angles at vertex 3 implies constancy of the dihedral angles 31, 32, 34.</p>
<p>Therefore, only the dihedral angle of the side 24 changes. The dihedral angle 24 is equal to face angle 341 since two face angles at vertex 4 are right so the third face angle 341 (which is one side of a spherical triangle) is equal to the opposite angle - which is dihedral angle 24. This leads directly to the first formula in Gauss's fragment (apart from a missing factor of <span class="math-container">$\frac {1}{2}$</span>).</p>
<p><strong>Derivation of explicit volume formula from Gauss's formula</strong>:</p>
<p>For the sake of consistency, we denote the angles 431, 234, and 214 as <span class="math-container">$\alpha$</span>, <span class="math-container">$\beta$</span> and <span class="math-container">$\gamma$</span>, respectively. Now let's look at the link of vertex 3 of the tetrahedron: it is a spherical triangle whose two edge lengths are <span class="math-container">$\alpha$</span> and <span class="math-container">$\beta$</span> and one of whose angles is <span class="math-container">$\gamma$</span> (it is the dihedral angle of edge 31, and it is also equal to <span class="math-container">$\gamma$</span>). In addition, the sides <span class="math-container">$\alpha, \beta$</span> of this spherical triangle are orthogonal to each other. Therefore, by a combination of the spherical sine theorem and the spherical Pythagorean theorem, we get:</p>
<p><span class="math-container">$$\frac{\sin(\arccos(\cos\alpha\cdot \cos\beta))}{\sin\frac{\pi}{2}} = \frac{\sin\beta}{\sin\gamma},$$</span> or:</p>
<p><span class="math-container">$$\sin\gamma = \frac{\sin\beta}{\sqrt{1 - (\cos\alpha \cdot \cos\beta)^2}}\tag{1}$$</span></p>
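<p>As a quick numerical sanity check (not part of the original argument), the following Python sketch builds a right spherical triangle with legs of arc length <span class="math-container">$\alpha$</span> and <span class="math-container">$\beta$</span> explicitly on the unit sphere and verifies relation (1); the angle values are arbitrary test inputs:</p>

```python
import math

# Right spherical triangle on the unit sphere: right angle at C = (0,0,1),
# legs of arc length alpha (C->A) and beta (C->B) meeting orthogonally at C.
alpha, beta = 0.7, 0.5  # arbitrary test angles (assumed in (0, pi/2))

A = (math.sin(alpha), 0.0, math.cos(alpha))
B = (0.0, math.sin(beta), math.cos(beta))

dot = lambda u, v: sum(x * y for x, y in zip(u, v))

# Spherical Pythagoras: the hypotenuse c satisfies cos c = cos(alpha) cos(beta)
c = math.acos(dot(A, B))
assert abs(math.cos(c) - math.cos(alpha) * math.cos(beta)) < 1e-12

def tangent_at(P, Q):
    """Unit tangent vector at P of the great-circle arc from P toward Q."""
    t = [q - dot(P, Q) * p for p, q in zip(P, Q)]
    n = math.sqrt(dot(t, t))
    return [x / n for x in t]

# Angle gamma at vertex A, measured between the tangents of arcs A->C and A->B
gamma = math.acos(dot(tangent_at(A, (0.0, 0.0, 1.0)), tangent_at(A, B)))

# Relation (1): sin(gamma) = sin(beta) / sqrt(1 - (cos(alpha) cos(beta))^2)
rhs = math.sin(beta) / math.sqrt(1 - (math.cos(alpha) * math.cos(beta)) ** 2)
assert abs(math.sin(gamma) - rhs) < 1e-12
print("relation (1) verified")
```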
<p>Now, denote the length of side <span class="math-container">$24$</span> as <span class="math-container">$l_{24} = x$</span> and the angle <span class="math-container">$341$</span> as <span class="math-container">$\varphi$</span>. Since <span class="math-container">$\varphi$</span> is related to <span class="math-container">$x$</span> by the equation <span class="math-container">$c^2_1 \cot^2\varphi - c^2_2\tanh^2 x = 1$</span> (here <span class="math-container">$c_1 = \cot\alpha,\ c_2 = \cot\beta$</span>), one can write:</p>
<p><span class="math-container">$$\varphi = \operatorname{arccot}\left(\frac{\sqrt{1+c^2_2\tanh^2 x}}{c_1}\right)$$</span></p>
<p>Gauss's procedure for the calculation of the volume, which uses the relation <span class="math-container">$\partial \Delta = -\frac{1}{2}x d\varphi$</span>, leads to the following integral:</p>
<p><span class="math-container">$$\Delta = -\frac{1}{2}\int x d\varphi = -\frac{1}{2}\int x \frac{d\varphi}{dx}dx$$</span></p>
<p>so one can compute the derivative of <span class="math-container">$\varphi$</span> with respect to <span class="math-container">$x$</span> by an application of the chain rule:</p>
<p><span class="math-container">$$-\frac{d\varphi}{dx} = \frac{1}{1+\frac{1+c^2_2\tanh^2 x}{c^2_1}}\cdot\frac{c^2_2\tanh x\cdot \frac{1}{\cosh^2 x}}{c_1\sqrt{1+c^2_2\tanh^2 x}}$$</span></p>
<p>A lengthy algebraic simplification now gives:</p>
<p><span class="math-container">$$ -\frac{d\varphi}{dx} = \frac{(c^2_2/c_1)\sinh x}{\cosh^2 x\left(1+\frac{1+c^2_2\tanh^2 x}{c^2_1}\right)\cosh x\cdot c_2\sqrt{\frac{1}{c^2_2}+\tanh^2 x}} = \frac{(c_2/c_1)\sinh x}{\cosh^2 x\left(1+\frac{1+c^2_2\tanh^2 x}{c^2_1}\right)\sqrt{\frac{\cosh^2 x}{c^2_2}+\sinh^2 x}} = \frac{(c_1/c_2)\sinh x}{(c_1/c_2)^2\cosh^2 x\left(1+\frac{1+c^2_2\tanh^2 x}{c^2_1}\right)\sqrt{\frac{\cosh^2 x}{c^2_2}+\sinh^2 x}}$$</span></p>
<p>The expression under the square root is <span class="math-container">$$\sqrt{\frac{\cosh^2 x}{c^2_2}+(\cosh^2 x-1)} = \sqrt{\cosh^2 x\left(\frac{1}{c^2_2}+1\right)-1} = \sqrt{\frac{\cosh^2 x}{\cos^2\beta} - 1}$$</span></p>
<p>while the left-hand factor of the denominator is equal to:</p>
<p><span class="math-container">$$(c_1/c_2)^2\cosh^2 x\left(1+\frac{1}{c^2_1}\right)+\sinh^2 x = \cosh^2 x\left(1+\frac{c^2_1+1}{c^2_2}\right) -1 = \cosh^2 x\left(1+\frac{1}{\sin^2\alpha \cot^2\beta}\right)-1$$</span></p>
<p>Recalling that <span class="math-container">$\frac{c_1}{c_2} = \frac{\tan\beta}{\tan\alpha}$</span>, the resulting expression for the integral is:</p>
<p><span class="math-container">$$\Delta = \frac{\tan\beta}{2\tan\alpha}\int_{0}^{c}\frac{x\sinh x\, dx}{\left(\cosh^2 x\left(1 + \frac{1}{\sin^2\alpha \cot^2\beta}\right)-1\right)\sqrt{\frac{\cosh^2 x}{\cos^2\beta} - 1}}$$</span></p>
<p>Now, the left factor of the denominator, <span class="math-container">$\cosh^2 x\left(1 + \frac{1}{\sin^2\alpha\,\cot^2\beta}\right)-1$</span>, is exactly equal to <span class="math-container">$\frac{\cosh^2 x}{\cos^2\gamma}-1$</span>, because substituting <span class="math-container">$\cos\gamma = \sqrt{1 - \frac{\sin^2\beta}{1-(\cos\alpha\cdot \cos\beta)^2}}$</span> (which holds by relation (1)) into this expression recovers the previous one.</p>
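<p>The closed form for <span class="math-container">$-\frac{d\varphi}{dx}$</span> obtained above can be checked against a finite difference; the following Python sketch (not in the original, with arbitrary test angles) does so:</p>

```python
import math

# Check the simplified closed form for -dphi/dx against a central finite
# difference of phi(x) = arccot( sqrt(1 + c2^2 tanh^2 x) / c1 ),
# where c1 = cot(alpha), c2 = cot(beta).
alpha, beta = 0.7, 0.5  # arbitrary test angles
c1, c2 = 1 / math.tan(alpha), 1 / math.tan(beta)

def phi(x):
    # arccot(u) = atan(1/u) for u > 0
    return math.atan(c1 / math.sqrt(1 + (c2 * math.tanh(x)) ** 2))

def minus_dphi_dx(x):
    # Last expression of the simplification chain
    r = c1 / c2
    ch, sh, th = math.cosh(x), math.sinh(x), math.tanh(x)
    left = r * r * ch * ch * (1 + (1 + (c2 * th) ** 2) / c1 ** 2)
    root = math.sqrt(ch * ch / c2 ** 2 + sh * sh)
    return sh * r / (left * root)

x, h = 0.9, 1e-6
numeric = -(phi(x + h) - phi(x - h)) / (2 * h)
assert abs(numeric - minus_dphi_dx(x)) < 1e-7
print("closed form matches finite difference")
```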
<p><strong>Concluding remarks</strong>:</p>
<ul>
<li>As can be seen in this presentation - <a href="http://www.csu.ru/faculties/Documents/%D0%9F%D1%80%D0%B5%D0%B7%D0%B5%D0%BD%D1%82%D0%B0%D1%86%D0%B8%D1%8F_%D0%90%D0%B1%D1%80%D0%BE%D1%81%D0%B8%D0%BC%D0%BE%D0%B2.pdf" rel="nofollow noreferrer">Hyperbolic Volumes and Symmetry</a> - Bolyai's volume integral, written in my notation, reads (see Theorem 5, p. 12 of the presentation):</li>
</ul>
<p><span class="math-container">$$Vol(T) = \frac{\tan\beta}{2\tan\alpha}\int_{0}^{c}\frac{x\sinh x\, dx}{\left(\frac{\cosh^2 x}{\cos^2\gamma} - 1\right)\sqrt{\frac{\cosh^2 x}{\cos^2\beta} - 1}}$$</span></p>
<p>and in the case treated here, Bolyai's integral coincides with the result of Gauss's procedure. <strong>Important Note</strong>: the differences in notation between the Bolyai integral in the presentation and Gauss's integral are just due to the different symbols for the angles 431, 234, and 214: <span class="math-container">$\alpha,\beta,\gamma$</span> in the presentation correspond to <span class="math-container">$\gamma, \alpha ,\beta$</span> in my notation.</p>
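<p>The claimed coincidence of the two integrals reduces to a pointwise identity between their integrands; a short Python check (not in the original, with arbitrary test angles, using relation (1) to obtain <span class="math-container">$\gamma$</span>) confirms it:</p>

```python
import math

# Pointwise check that the left-hand factor in Gauss's integrand equals
# the corresponding factor in Bolyai's integrand, with gamma from relation (1).
alpha, beta = 0.7, 0.5  # arbitrary test angles
sin_g = math.sin(beta) / math.sqrt(1 - (math.cos(alpha) * math.cos(beta)) ** 2)
cos2_g = 1 - sin_g ** 2  # cos^2(gamma) via relation (1)

def gauss_factor(x):
    ch2 = math.cosh(x) ** 2
    return ch2 * (1 + 1 / (math.sin(alpha) ** 2 * (1 / math.tan(beta)) ** 2)) - 1

def bolyai_factor(x):
    return math.cosh(x) ** 2 / cos2_g - 1

for x in (0.1, 0.5, 1.3, 2.0):
    assert abs(gauss_factor(x) - bolyai_factor(x)) < 1e-10
print("integrands coincide")
```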
<p>However, for the case treated by Gauss, his formulas are absolutely correct. He should also be credited with identifying the calculation of the orthoscheme tetrahedron as the basis for volume formulas of general tetrahedra (without right angles). In one of his letters, he referred to these volume calculations as "<strong>die jungle</strong>"; I guess he was referring to the extremely complicated integrals that arise in attempts to decompose a general tetrahedron into orthoschemes (this problem was solved only very recently).</p>
<ul>
<li>Paul Stackel, the mathematician who edited Janos Bolyai's geometric works, had the following things to say about Bolyai's derivation of his integral formula:</li>
</ul>
<blockquote>
<p>It is most remarkable that the method that Gauss used for cubing the tetrahedron is exactly the same as Johann's. This is shown in a note from March 1832 from Gauss's estate, which is printed in the Works (vol. VIII, p. 228); Gauss has exactly the same special tetrahedron (only denoted 3142 instead of
<span class="math-container">$abc\delta$</span>) and exactly the same decomposition by planes perpendicular to ab (31).</p>
</blockquote>
<p>This quotation is taken from p. 113 of the book "Wolfgang und Johann Bolyai geometrische Untersuchungen" (here is a link: <a href="https://archive.org/details/wolfgangundjohan01stuoft/page/112/mode/2up" rel="nofollow noreferrer">https://archive.org/details/wolfgangundjohan01stuoft/page/112/mode/2up</a>), which was edited and translated into German by Stackel.</p>
<ul>
<li>It remains to understand how Gauss arrived at the formula <span class="math-container">$\partial \Delta = -\frac{1}{2}(24)\,d(341)$</span> (he missed the factor <span class="math-container">$\frac{1}{2}$</span> on his first attempt); the second formula from his note can be derived with relative ease. In his commentary on Gauss's note, Stackel derives it in the following way:</li>
</ul>
<blockquote>
<p>The tetrahedron <span class="math-container">$1234$</span>, whose volume is called <span class="math-container">$\Delta$</span>, may now experience an infinitely small increase in volume <span class="math-container">$1 2 4 1' 2' 4' = \partial \Delta$</span>: lengthen the edge <span class="math-container">$31$</span> by the infinitely small amount <span class="math-container">$11' = d(13)$</span>, and draw through <span class="math-container">$1'$</span>, in the plane <span class="math-container">$1'2'4'$</span>, a perpendicular to <span class="math-container">$31'$</span> that intersects the edges <span class="math-container">$32$</span> and <span class="math-container">$34$</span> in <span class="math-container">$2'$</span> and <span class="math-container">$4'$</span>, respectively. The angles at the corner <span class="math-container">$3$</span>, and thus also the quantities <span class="math-container">$\alpha$</span> and <span class="math-container">$\beta$</span>, remain unchanged.
The angle <span class="math-container">$(341)$</span> changes into the angle <span class="math-container">$$(34'1') = (3 4 1)+d(3 4 1),$$</span> as consideration of the quadrilateral <span class="math-container">$1 1' 4' 4$</span>, with the infinitely small base line <span class="math-container">$1 1' = d(1 3)$</span> and right angles at <span class="math-container">$1$</span> and <span class="math-container">$1'$</span>, shows immediately: <span class="math-container">$$d(3 4 1) = \sinh(14)\cdot d(13)$$</span> The increase in volume <span class="math-container">$1 2 4 1' 2' 4'$</span> is bounded laterally by the triangles <span class="math-container">$1 2 4$</span> and <span class="math-container">$1' 2' 4'$</span>, whose planes are both perpendicular to <span class="math-container">$1 1'$</span>, and hence (see p. 233 of this volume): <span class="math-container">$$\partial \Delta = -\frac{1}{2}d(13)\cdot(24)\sinh(14)$$</span> hence: <span class="math-container">$$\partial \Delta = -\frac{1}{2}(24)\cdot d(3 4 1) $$</span> and that is, apart from the missing factor <span class="math-container">$\frac{1}{2}$</span>, Gauss's formula.</p>
</blockquote>
<p>Since Stackel refers to Gauss's second fragment on volume determinations in non-euclidean geometry (p. 233 of the same volume), which was written in 1840 and was found next to Gauss's copy of one of Lobachevski's publications, I think understanding Gauss's second fragment may help in understanding his reasoning.</p>
|
4,368,464 | <p>How can I solve <span class="math-container">$\sum_{i=1}^{n} \frac{P_i}{1+(d_i-d_1)x/365} = 0$</span> in a spreadsheet?</p>
<p>We already know that in Excel,</p>
<p>XIRR() finds the root of the equation
<span class="math-container">$\sum_{i=1}^{n} \frac{P_i}{(1+x)^{(d_i-d_1)/365}} = 0$</span>, which is the IRR (Internal Rate of Return) of a series of compounding cash flows.</p>
<p>However, what if the interest rate is flat rather than compounded?
That is, solve <span class="math-container">$\sum_{i=1}^{n} \frac{P_i}{1+(d_i-d_1)x/365} = 0$</span>.</p>
<p>I have used approximations like</p>
<p><span class="math-container">$1/(1+\tau x)\approx 1-\tau x$</span></p>
<p>or</p>
<p><span class="math-container">$1/(1+\tau x)\approx(1+x)^{-\tau}$</span></p>
<p>where <span class="math-container">$\tau=(d_i-d_1)/365$</span></p>
<p>But I am not sure the error is negligible.</p>
<p>I hope to get a clean numerical result in Excel.</p>
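<p>For reference, the flat-rate equation can be solved exactly by a simple bisection; the Python sketch below (with made-up illustrative cash flows and dates, not from any real data) does this and also compares the result with the compound-rate (XIRR-style) root, showing that the two generally differ:</p>

```python
import math
from datetime import date

# Made-up illustrative cash flows: (amount, date)
flows = [(-1000.0, date(2023, 1, 1)),
         (300.0, date(2023, 5, 1)),
         (800.0, date(2023, 12, 1))]
d1 = flows[0][1]

def npv_flat(x):
    # sum_i P_i / (1 + (d_i - d_1) x / 365)
    return sum(p / (1 + (d - d1).days * x / 365) for p, d in flows)

def npv_compound(x):
    # The equation XIRR() solves: sum_i P_i / (1+x)^((d_i - d_1)/365)
    return sum(p / (1 + x) ** ((d - d1).days / 365) for p, d in flows)

def bisect(f, lo, hi, iters=200):
    assert f(lo) * f(hi) < 0  # the bracket must straddle the root
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

x_flat = bisect(npv_flat, -0.9, 10.0)
x_comp = bisect(npv_compound, -0.9, 10.0)
print(f"flat rate: {x_flat:.6%}, compound rate (XIRR): {x_comp:.6%}")
assert abs(npv_flat(x_flat)) < 1e-6
assert abs(x_flat - x_comp) > 1e-4  # the approximation error is real
```

In Excel itself, the same bisection can be reproduced with Goal Seek or the Solver add-in applied to a cell computing the flat-rate NPV.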
<p>Thanks!</p>
| Arthur | 15,500 | <p>You're asking whether
<span class="math-container">$$
1+2+3+\cdots+n
$$</span>
has the same value as
<span class="math-container">$$
0+1+2+3+\cdots+n
$$</span>
And the answer is that of course those are the same.</p>
|