| qid | question | author | author_id | answer |
|---|---|---|---|---|
1,621,019 | <blockquote>
<p>Let $a,b,$ and $c$ be the lengths of the sides of a triangle, prove that $$a(b^2+c^2-a^2)+b(a^2+c^2-b^2)+c(a^2+b^2-c^2) \le 3abc.$$</p>
</blockquote>
<p>I can't really factor this into something nice. Also using AM-GM or Cauchy-Schwarz doesn't look like it will help. I am thinking we need to bound the left and right side with something, but I don't know how to.</p>
| yurnero | 178,464 | <p><strong>Hint</strong>: Expand the following, divide by 2, and rearrange:
$$
(a+b-c)(a-b)^2+(b+c-a)(b-c)^2+(c+a-b)(c-a)^2\geq 0.
$$</p>
<blockquote class="spoiler">
<p>How I thought of this: I was thinking of a multinomial with degree $3$ that is symmetrical in $a,b,c$, is nonnegative, uses the triangle inequality, and goes away when $a=b=c$. The above was the first guess.</p>
</blockquote>
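<p>As a sanity check, the algebra behind the hint can be confirmed numerically: the hinted sum expands to exactly $2(3abc-\text{LHS})$, so its nonnegativity gives the inequality. A short Python sketch (the helper names <code>lhs</code> and <code>hint</code> are mine):</p>

```python
import random

def lhs(a, b, c):
    # left-hand side of the inequality to prove
    return a*(b*b + c*c - a*a) + b*(a*a + c*c - b*b) + c*(a*a + b*b - c*c)

def hint(a, b, c):
    # the hinted nonnegative sum
    return (a+b-c)*(a-b)**2 + (b+c-a)*(b-c)**2 + (c+a-b)*(c-a)**2

# polynomial identity: hint == 2*(3abc - lhs), so hint >= 0 gives lhs <= 3abc
for _ in range(1000):
    a, b, c = (random.randint(-50, 50) for _ in range(3))
    assert hint(a, b, c) == 2 * (3*a*b*c - lhs(a, b, c))
```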
|
2,310,828 | <blockquote>
<p>Let $f:\Bbb{R}^2 \to \Bbb{R}^2$ be continuously differentiable. Suppose $f'(0)$ has non-zero determinant. Let $U = \left\{x \in \Bbb{R}^2 : ||f'(x)-f'(0)||<\frac{1}{2||f'(0)||}\right\}$. Show that $f(U)$ is open.</p>
</blockquote>
<p>I have tried doing this using generic properties of a norm but that didn't work. I suspect that I'll need to specifically use what the norm is. And I don't know what that is. The wikipedia article on norms for matrices didn't help.</p>
<p>So an answer telling me what the norm is (in this case, for a $2\times2$ matrix) and the general direction to proceed in would be helpful.</p>
| José Carlos Santos | 446,262 | <p>This is not true. Take $f(x,y)=\left(\left(x-\frac14\right)^2,\left(y-\frac14\right)^2\right)$. Then $f'\left(0,0\right)=-\frac12\operatorname{Id}$ and so $\left\|f'\left(0,0\right)\right\|=\frac12$. On the other hand, $f'\left(\frac14,\frac14\right)=0$ and so $\left(\frac14,\frac14\right)\in U$. But then $f(U)$ is not open, since $(0,0)=f\left(\frac14,\frac14\right)\in f(U)$ but both coordinates of any element of $f(U)$ are greater than or equal to $0$, so no neighbourhood of $(0,0)$ is contained in $f(U)$.</p>
|
2,310,828 | <blockquote>
<p>Let $f:\Bbb{R}^2 \to \Bbb{R}^2$ be continuously differentiable. Suppose $f'(0)$ has non-zero determinant. Let $U = \left\{x \in \Bbb{R}^2 : ||f'(x)-f'(0)||<\frac{1}{2||f'(0)||}\right\}$. Show that $f(U)$ is open.</p>
</blockquote>
<p>I have tried doing this using generic properties of a norm but that didn't work. I suspect that I'll need to specifically use what the norm is. And I don't know what that is. The wikipedia article on norms for matrices didn't help.</p>
<p>So an answer telling me what the norm is (in this case, for a $2\times2$ matrix) and the general direction to proceed in would be helpful.</p>
| Severin Schraven | 331,816 | <p>Your statement is in general not true. Take</p>
<p>$$ f_c: \mathbb{R}^2 \rightarrow \mathbb{R}^2, \ f_c(x,y)=\left(\sin \left(\frac{x}{c}\right), \sin\left(\frac{y}{c}\right)\right).$$</p>
<p>One computes</p>
<p>$$ f_c'(x,y)= \frac{1}{c}\begin{pmatrix} \cos\left(\frac{x}{c}\right) & 0 \\ 0 & \cos\left(\frac{y}{c} \right) \end{pmatrix} = \frac{1}{c} f_1'\left( \frac{x}{c}, \frac{y}{c} \right).$$</p>
<p>We have</p>
<p>$$ \Vert f_c'(0,0) - f_c'(x,y) \Vert
\leq 2 \max_{(u,v)\in [0, 2\pi c]\times [0,2\pi c]} \Vert f_c'(u,v) \Vert
= \frac{2}{c} \max_{(u,v)\in [0, 2\pi]\times [0,2\pi]} \Vert f_1'(u,v) \Vert.$$</p>
<p>Thus, if we choose $c>0$ such that $ c^2> 4 \Vert f_1'(0,0) \Vert \cdot \max_{(u,v)\in [0, 2\pi]\times [0,2\pi]} \Vert f_1'(u,v) \Vert $, then</p>
<p>$$ U= \left\{ (x,y)\in \mathbb{R}^2 : \ \Vert f_c'(x,y) - f_c'(0,0) \Vert < \frac{1}{2\Vert f_c'(0,0) \Vert} \right\} = \mathbb{R}^2.$$</p>
<p>However,</p>
<p>$$ f_c(\mathbb{R}^2)=f_c([0, 2\pi c]\times [0, 2\pi c]) = [-1,1]\times [-1,1],$$</p>
<p>which is not open.</p>
|
773,880 | <p>What approach would be ideal in finding the integral $\int4^{-x}dx$?</p>
| DeepSea | 101,504 | <p>Hint: let $u = -x$; you can take it from there.</p>
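<p>Carrying the hint through: with $u=-x$, $du=-dx$, one gets $\int 4^{-x}\,dx = -\int 4^{u}\,du = -\frac{4^{-x}}{\ln 4}+C$. A quick numerical check of this antiderivative (a Python sketch; <code>F</code> is my name for it):</p>

```python
import math

def F(x):
    # candidate antiderivative obtained from the substitution u = -x
    return -(4.0 ** (-x)) / math.log(4.0)

# central-difference check that F'(x) == 4^(-x)
for x in [-1.0, 0.0, 0.5, 2.0]:
    h = 1e-6
    deriv = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(deriv - 4.0 ** (-x)) < 1e-6
```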
|
202,040 | <p>I'd like to get separate plots for the functions in a list, and I'm trying the following, which doesn't work. What is the correct way to do that?</p>
<pre><code>Table[ContourPlot3D[f, {x, -2, 2}, {y, -2, 2}, {z, -2, 2}], {f, {x + y + z + x y z == 0, x + y + z^2 + x y z^2 == 0, x + y^2 + z + x y^2 z == 0}}]
</code></pre>
| yarchik | 9,469 | <pre><code>Table[ContourPlot3D[
Evaluate[f], {x, -2, 2}, {y, -2, 2}, {z, -2,
2}], {f, {x + y + z + x y z == 0, x + y + z^2 + x y z^2 == 0,
x + y^2 + z + x y^2 z == 0}}]
</code></pre>
<p>Just add <code>Evaluate</code>, so that the iterator variable <code>f</code> is replaced by the actual equation before <code>ContourPlot3D</code> (which holds its first argument) processes it.</p>
|
4,234 | <p>I recently pasted the following code:</p>
<pre><code> my @cards = qw(BB BR RR);
my $n_trials = shift || 100;
for (1 .. $n_trials) {
my $card = $cards[ int(rand 3) ];
my @faces = split //, $card;
my $face_choice = int(rand 2);
my ($face, $other_face) = @faces[$face_choice, 1-$face_choice];
if ($face eq "R") {
# this trial is spoiled; do it over
redo;
} else {
$count{$other_face} += 1;
}
}
print "In $n_trials trials:\n";
for my $other_face (keys %count) {
print " The other face was '$other_face' in $count{$other_face} trials\n";
}
</code></pre>
<p>The Markdown engine displays the code incorrectly. For example, the <code>for</code> in the fourth line is indented by eight spaces, the same as the <code>my</code> on the third line. But it is displayed as if it were indented 12 spaces.</p>
<p>Similarly, the three following lines that begin with <code>my</code> should all be aligned the same, but the third (<code>my $face_choice</code>) is indented four spaces too far.</p>
<p>There are other errors; you can view the source of this note to see how it should be indented.</p>
<p>The code does not contain any tabs or trailing spaces.</p>
<p><a href="https://i.stack.imgur.com/FEl1M.png" rel="nofollow noreferrer">Here is a screenshot that shows the incorrect rendering</a>, in case it works properly on your browser.</p>
| Davide Cervone | 7,798 | <p>The handling of MathJax in Markdown is rather awkward, since TeX and Markdown have different interpretations for things like underscores. You need to keep Markdown from processing those characters inside mathematics, and the way it works on StackExchange is that the mathematics is removed before Markdown runs, and then reinserted after the Markdown processing is complete. That means that anything that looks like mathematics is taken out of the page during Markdown processing, and that includes anything between dollar signs.</p>
<p>This approach works most of the time, but if the dollar signs are in a code block (where they really <em>don't</em> end up delimiting mathematics), that means that the material between the dollars hasn't been processed by Markdown. In particular, there is no normalization of the leading spaces, and that is what you are seeing here. Markdown removes the four initial spaces that indicate a code block, but between dollars they are being retained, thus your spacing issue. When you use the <code><pre></code> directly, <em>all</em> the spaces are retained, as Markdown does not modify the contents of the <code><pre></code> (notice that your <code><pre></code> example has four more spaces than the plain indented example), and so the spaces within the material that was removed as mathematics and reinserted later are no longer a problem, since the code is indented by the same amount.</p>
<p>The real solution is to incorporate knowledge of the math delimiters into Markdown itself, so that Markdown knows not to perform its usual substitutions within the math. The SE folks (rightly) don't want to modify the Markdown processor directly, which is what led to the partial solution that currently exists.</p>
<p>The next best thing would be to make the preprocessor that removes the math be more aware of the surrounding Markdown so that it can recognize situations like the one you have here and <em>not</em> remove the "math" in that case (since it is not actually going to be processed as math in the end) so that it gets properly handled by Markdown. I worked on that at one point, but it is more delicate than you might think, and practically ends up requiring writing a Markdown processor in order to get it right.</p>
|
69,272 | <p>By the way, does anyone know how to prove in an elementary way (i.e. expanding) that $\prod_1^n (1+a_i r)$ tends to $e^r=\sum \frac{r^k}{k!}$ as you let $\max|a_i|\to 0$ with $0\leq a_i \leq 1$ and $\sum a_i = 1$? An easy solution goes by writing the product with the exponential function so that you get the exponential of $\sum \log(1+a_i r) = \sum \int_0^1 \frac{a_i r}{(1+s a_i r)} ds$.</p>
<p>You can then integrate by parts (i.e. Taylor expand) to obtain $\sum a_i r - \sum \int_0^1 (1-s)\frac{(a_i r)^2}{(1+s a_i r)^2}\,ds$. Now, $\sum a_i r = r$ is the main term. After you take $\max|a_i|$ to be less than $.5/|r|$, the error term is bounded in absolute value by $C \sum |a_i r|^2 \leq C\,\max|a_i|\cdot \sum |a_i|\,|r|^2 = C |r|^2 \max |a_i|$.</p>
<p>I was hoping to find an elementary proof of this convergence by expanding the product $\prod_1^n (1+a_i r)$ and gathering terms with a common power of $r$. In particular, it would be nice to prove the convergence of this limit without the exponential function, since then the limit could be considered a definition of $e^r$. The case when all of the $a_i$ are equal is done in Rudin's "Principles of Mathematical Analysis".</p>
<p>The motivation for this problem comes from compound interest, which I described in a different thread here: <a href="https://mathoverflow.net/questions/40005/generalizing-a-problem-to-make-it-easier/69224#69224">Generalizing a problem to make it easier</a> .</p>
| Geoff Robinson | 14,450 | <p>I record this answer because I think that Pietro Majer's comment can be made into a solution which
meets the proposer's criterion of potentially being able to be used to define $e^r$ ( I had been thinking along the same lines, although I am not sure there would be an advantage over the usual $\lim_{n \to \infty} (1 + \frac{r}{n})^{n}$
definition). If $r > 0$, then the GM-AM inequality gives $\prod_{i=1}^{n}( 1+a_{i}r) \leq ( 1 + \frac{r}{n})^{n}$. If $r > 0$ and each $a_{i} < \frac{1}{r}$, we obtain $\prod_{i=1}^{n}( 1-a_{i}r) \leq ( 1 - \frac{r}{n})^{n}$. Taking reciprocals in the second case and using standard
approximations gives
$\prod_{i=1}^{n}( 1+ a_{i}r + \frac{a_{i}^{2}r^{2}}{1-a_{i}r}) \geq ( 1 +\frac{r}{n})^{n}$.
Choose $\varepsilon > 0$, and suppose that $a_{i} < \frac{\varepsilon}{2}$ for each $i$, and that, furthermore, $1 - a_{i}^{2}r^{2} > \frac{1}{2}$ for each $i$. Then we obtain
$\prod_{i=1}^{n}( 1+ a_{i}r + \frac{a_{i}^{2}r^{2}}{1-a_{i}r} ) \leq
\left[ \prod_{i=1}^{n}(1 + a_{i}r) \right]\cdot (1 + \frac{r^{2}\varepsilon}{n})^{n}$, so that $\lim_{\max(a_{i}) \to 0} \prod_{i=1}^{n} (1 + a_{i}r)
\geq \lim_{n \to \infty} (1 + \frac{r}{n})^{n}$. A similar argument can be devised
for $r < 0$.</p>
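<p>For a concrete illustration of the convergence, here is a small numerical experiment (my addition, using the deterministic choice $a_i = 2i/(n(n+1))$, which sums to $1$ with $\max a_i = 2/(n+1) \to 0$):</p>

```python
import math

r = 1.7
errors = []
for n in [10, 100, 1000]:
    a = [2 * i / (n * (n + 1)) for i in range(1, n + 1)]  # sum(a) == 1
    prod = 1.0
    for x in a:
        prod *= 1 + x * r
    errors.append(abs(prod - math.exp(r)))

# the error shrinks as max|a_i| -> 0, consistent with a C * r^2 * max|a_i| bound
assert errors[0] > errors[1] > errors[2]
assert errors[2] < 4 * math.exp(r) * r * r * (2 / 1001)
```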
|
2,018,239 | <p>I have to show, using induction, that $2^{4^n}+5$ is divisible by $21$. It is supposed to be a standard exercise, but no matter what I try, I get to a point where I have to use two more inductions.</p>
<p>For example, here is one of the things I tried:</p>
<p>Assuming that $21 |2^{4^k}+5$, we have to show that $21 |2^{4^{k+1}}+5$.</p>
<p>Now, $2^{4^{k+1}}+5=2^{4\cdot 4^k}+5=2^{4^k+3\cdot 4^k}+5=2^{4^k}2^{3\cdot 4^k}+5=2^{4^k}2^{3\cdot 4^k}+5+5\cdot 2^{3\cdot 4^k}-5\cdot 2^{3\cdot 4^k}=2^{3\cdot 4^k}(2^{4^k}+5)+5(1-2^{3\cdot 4^k})$.</p>
<p>At this point, the only way out (as I see it) is to prove (using another induction) that $21|5(1-2^{3\cdot 4^k})$. But when I do that, I get another term of this sort, and another induction.</p>
<p>I also tried proving separately that $3 |2^{4^k}+5$ and $7 |2^{4^k}+5$. The former is OK, but the latter is again a double induction.</p>
<p>Is there an easier way of doing this?</p>
<p>Thank you!</p>
<p><strong>EDIT</strong></p>
<p>By an "easier way" I still mean a way using induction, but only once (or at most twice). Maybe add and subtract something different than what I did?...</p>
<p>Just to put it all in a context: a daughter of a friend got this exercise in her very first HW assignment, after a lecture about induction which included only the most basic examples. I tried helping her, but I can't think of a solution suitable for this stage of the course. That's why I thought that there should be a trick I am missing... </p>
| Denis Korzhenkov | 367,345 | <ol>
<li><p>$2^{4^1} \equiv -5 \bmod 21$</p></li>
<li><p>$2^{4^{n+1}} = 2^{4^n \cdot 4} = (2^{4^n})^4 \equiv (-5)^4 = 625 \equiv -5 \bmod 21$</p></li>
</ol>
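<p>The claim is easy to confirm by machine for small $n$; a quick Python check (my addition) using modular exponentiation:</p>

```python
# verify 2^(4^n) + 5 ≡ 0 (mod 21) for the first few n, reducing mod 21 throughout
for n in range(1, 9):
    assert (pow(2, 4 ** n, 21) + 5) % 21 == 0

# and the base step used above: 2^4 ≡ -5 (mod 21)
assert pow(2, 4, 21) == (-5) % 21
```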
|
2,018,239 | <p>I have to show, using induction, that $2^{4^n}+5$ is divisible by $21$. It is supposed to be a standard exercise, but no matter what I try, I get to a point where I have to use two more inductions.</p>
<p>For example, here is one of the things I tried:</p>
<p>Assuming that $21 |2^{4^k}+5$, we have to show that $21 |2^{4^{k+1}}+5$.</p>
<p>Now, $2^{4^{k+1}}+5=2^{4\cdot 4^k}+5=2^{4^k+3\cdot 4^k}+5=2^{4^k}2^{3\cdot 4^k}+5=2^{4^k}2^{3\cdot 4^k}+5+5\cdot 2^{3\cdot 4^k}-5\cdot 2^{3\cdot 4^k}=2^{3\cdot 4^k}(2^{4^k}+5)+5(1-2^{3\cdot 4^k})$.</p>
<p>At this point, the only way out (as I see it) is to prove (using another induction) that $21|5(1-2^{3\cdot 4^k})$. But when I do that, I get another term of this sort, and another induction.</p>
<p>I also tried proving separately that $3 |2^{4^k}+5$ and $7 |2^{4^k}+5$. The former is OK, but the latter is again a double induction.</p>
<p>Is there an easier way of doing this?</p>
<p>Thank you!</p>
<p><strong>EDIT</strong></p>
<p>By an "easier way" I still mean a way using induction, but only once (or at most twice). Maybe add and subtract something different than what I did?...</p>
<p>Just to put it all in a context: a daughter of a friend got this exercise in her very first HW assignment, after a lecture about induction which included only the most basic examples. I tried helping her, but I can't think of a solution suitable for this stage of the course. That's why I thought that there should be a trick I am missing... </p>
| Steven Alexis Gregory | 75,410 | <p>$2^{4^k}+5 \equiv (-1)^{4^k}+5 \equiv 1+5 \equiv 0 \pmod 3$</p>
<p>Proof by induction that $2^{4^k}+5 \equiv 0 \pmod 7$ for all non negative integers $k$.</p>
<p>If $k=0$, then $2^{4^k}+5 \equiv 2+5 \equiv 0 \pmod 7$</p>
<p>Suppose that $2^{4^m}+5 \equiv 0$ for some non negative integer $m$.</p>
<p>Then
$2^{4^{m+1}}+5 \equiv
2^{4\cdot 4^m}+5 \equiv
(2^4)^{4^m}+5 \equiv
16^{4^m}+5 \equiv
2^{4^m}+5 \equiv 0 \pmod 7$</p>
<p>Hence, by mathematical induction $2^{4^k}+5 \equiv 0 \pmod 7$ for all non negative integers $k$.</p>
<p>It follows that $2^{4^k}+5 \equiv 0 \pmod{21}$ for all non negative integers $k$.</p>
|
2,372,762 | <p>So I know in order to prove a function is bijective, you need to prove that it is both injective and surjective. I know that to prove it is an injection, I need to make $f(x) = f(y)$, and try to get $x=y$ from that, but I can't seem to manipulate the equations to do so. </p>
<p>Also, how would I prove that this is surjective? </p>
| Tai | 290,413 | <p>Let $p(n,k)$ denote the number of partitions of $n$ with largest part at most $k$. Of these $p(n,k)$ partitions, some contain a part equal to $k$ and some don't. Those that don't are precisely the partitions of $n$ with largest part at most $k-1$, so there are $p(n,k-1)$ of them. Those that do contain a part $k$ can be converted into a partition of $n-k$ with largest part at most $k$ by removing that part. Thus, we have the recurrence
$$
p(n,k) = p(n-k,k) + p(n,k-1).
$$
Defining the generating function $F_k(x) = \sum_{n\geq0}p(n,k)x^n$, the recurrence translates to
$$
F_k(x) = x^kF_k(x) + F_{k-1}(x)
$$
which rearranges to
$$
F_k(x) = \frac{F_{k-1}(x)}{1-x^k}.
$$</p>
<p>Can you proceed from here?</p>
<p><strong>Edit:</strong></p>
<p>Now, note that $F_1(x) = \sum_{n\geq0} x^n = \frac1{1-x}$ since there is always exactly one way to write $n$ as a partition with largest part $1$. Then, induction shows that $F_k(x) = \prod_{1\leq n\leq k}\frac1{1-x^n}$. Letting $k\to\infty$, so that parts of any size are allowed, you obtain the desired infinite product $\prod_{n\geq1}\frac1{1-x^n}$. </p>
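<p>The recurrence can also be run directly; here is a small memoized Python sketch (my addition) that checks $p(n,n)$ against the unrestricted partition numbers $1,1,2,3,5,7,11,\dots$:</p>

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def p(n, k):
    # number of partitions of n with largest part at most k
    if n == 0:
        return 1
    if n < 0 or k == 0:
        return 0
    # partitions containing a part k, plus those with largest part <= k-1
    return p(n - k, k) + p(n, k - 1)

# p(n, n) is the unrestricted partition number
assert [p(n, n) for n in range(10)] == [1, 1, 2, 3, 5, 7, 11, 15, 22, 30]
```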
|
3,063,577 | <p>Please suggest a book on applications of Diophantine equations in physics, chemistry, and biology. This book should be suitable to introduce this subject to students who are not mathematics specialists. </p>
| José Carlos Santos | 446,262 | <p>I am unaware of any such book, but you may find <a href="http://ijmaa.in/v5n2-b/217-222.pdf" rel="nofollow noreferrer">this article</a> interesting.</p>
|
723,633 | <p>My book asserts that for fixed $w$ where $w\neq 0$ that $P^2=P$ for $P(v)=\frac{\langle v,w\rangle }{||w||^2}w$</p>
<p>My book has a general corralary that $v\to P(v)$ is a bounded linear transformation and the fact that $P^2=P$ implies it is a projection. I'm not sure how they made the assertation. Any ideas?</p>
| vadim123 | 73,324 | <p>Hint: $$P^2(v)=P(P(v))=\frac{\langle P(v),w\rangle}{\|w\|^2}w$$</p>
|
<p>Now first, something that I already know:
<span class="math-container">\begin{eqnarray}
\infty/\infty &=& \text{undetermined } (\neq 1) \\
\infty-\infty &=& \text{undetermined } (\neq 0)
\end{eqnarray}</span></p>
<p>So basically one reason for this is that the <span class="math-container">$∞$</span> I assume is not the same as the <span class="math-container">$∞$</span> someone else will assume, since <span class="math-container">$∞$</span> is a very large quantity with no definite value. But what if I assign <span class="math-container">$∞$</span> to a certain variable? That way the infinity is always the same.</p>
<p>For eg:</p>
<p>What if I assign <span class="math-container">$ a=∞$</span>;</p>
<p>Now infinity is always the same if I use <span class="math-container">$a$</span> instead of directly using <span class="math-container">$∞$</span>. So my question is: are the same laws mentioned above applicable here, or can I solve it like any other equation:
<span class="math-container">\begin{eqnarray}
a/a = 1 \\
a-a = 0\\
\end{eqnarray}</span>
Or are these still undetermined?</p>
| mlchristians | 681,917 | <p>Strange things can happen when considering the infinite.</p>
<p>Consider the infinite set <span class="math-container">$\lbrace 1, 2, 3, \ldots \rbrace$</span>. If I multiply all the elements of this set by <span class="math-container">$2$</span>, I get <span class="math-container">$\lbrace 2, 4, 6, \ldots \rbrace$</span>. Have I added or taken anything away? No. Therefore, both of these sets must contain exactly the same number of elements even though it appears that every element in the second set is in the first with infinitely many taken away.</p>
<p>On top of all this, we recognize that not all "infinities" are the same. </p>
<p>There are infinitely many infinities each of different cardinalities (measures of the number of elements in the underlying set).</p>
<p>So, when we assign <span class="math-container">$a$</span> a value, it must be a finite one. </p>
|
2,464,756 | <p>When I was trying to prove a relation from solid state physics, I reached this mathematical problem. In the equation</p>
<p>$$\sum_{i=1}^Nm_ix_i=n$$</p>
<p>$m_i$ and $n$ are known integers, $N=3$, and $x_i$ are unknown integers. Also we know that the greatest common factor of $\left\{m_i\right\}$ is 1. I don't need to find the solution; I must just show/state that the answer exists.</p>
| Bernard | 202,857 | <p>It's a standard result from <em>Arithmetic</em>:</p>
<p>The ideal $\;(x_1,\dots , x_N)\subset\mathbf Z$ generated by $x_1, \dots, x_N$, i.e. the set of linear combinations of $x_1,\dots, x_N$ with integer coefficients, is the principal ideal generated by $\gcd(x_1,\dots,x_N)$.</p>
<p>Hence, if the generators are coprime, this ideal is $\mathbf Z$, generated by $1$, and any multiple of the g.c.d., i.e. any integer is attained.</p>
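<p>The argument is constructive: applying the extended Euclidean algorithm twice produces an explicit solution. A Python sketch (my addition; the helper names and the sample coefficients $6,10,15$ are made up for illustration):</p>

```python
def ext_gcd(a, b):
    # returns (g, x, y) with a*x + b*y == g == gcd(a, b), for a, b >= 0
    if b == 0:
        return (a, 1, 0)
    g, x, y = ext_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

def solve3(m1, m2, m3, n):
    # integer solution of m1*x1 + m2*x2 + m3*x3 == n when gcd(m1,m2,m3) divides n
    g12, u, v = ext_gcd(m1, m2)          # m1*u + m2*v == g12
    g, w, t = ext_gcd(g12, m3)           # g12*w + m3*t == g
    assert n % g == 0
    s = n // g
    return (u * w * s, v * w * s, t * s)

x1, x2, x3 = solve3(6, 10, 15, 7)        # gcd(6, 10, 15) == 1
assert 6*x1 + 10*x2 + 15*x3 == 7
```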
|
2,870,729 | <blockquote>
<p>Why does $|e^{ix}|^2 = 1$?</p>
</blockquote>
<p>The book said $e^{ix} = \cos x + i\sin x$, and square it, then $|e^{ix}|^2 = \cos^2x + \sin^2x = 1$.</p>
<p>But, when I calculated it, $ |e^{ix}|^2 = \left|\cos x + i\sin x\right|^2 = \cos^2x - \sin^2x + 2i\sin x\cos x$.</p>
<p>I can't make it to be equal $1.$ How can I do it?</p>
| Mason | 552,184 | <p>$|z|^2=z\bar{z}$</p>
<p>But the complex conjugate of ${e^{xi}}$ is $e^{-xi}$.</p>
<p>$|e^{xi}|^2=e^{xi}e^{-xi}=1$</p>
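<p>Both routes to the result can be checked numerically with Python's <code>cmath</code> (my addition):</p>

```python
import cmath

for x in [0.0, 0.7, 1.0, 3.14159, -2.5]:
    z = cmath.exp(1j * x)
    # route 1: |z|^2 with the correct modulus (cos^2 x + sin^2 x)
    assert abs(abs(z) ** 2 - 1.0) < 1e-12
    # route 2: z * conj(z) = e^{ix} e^{-ix} = 1
    assert abs(z * z.conjugate() - 1.0) < 1e-12
```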
|
748,815 | <blockquote>
<p>$\displaystyle\sum\limits_{k=1}^nk^2(k-1){n\choose k}^2 = n^2(n-1)
{2n-3\choose n-2}$ considering $n\ge2$</p>
</blockquote>
<p>Can somebody help with this combinatorial proof?
I'm struggling a lot.
Thanks.</p>
<p><strong>EDIT:</strong> Ok. I could figure it out, if we had $\displaystyle\sum\limits_{k=1}^nk^2{n\choose k}^2 = n^2
{2n-2\choose n-1}$.</p>
<p>The problem is, I don't understand what to do with that $(k-1)$ and how it leads to ${2n-3\choose n-2}$.</p>
<p>I know $k{n\choose k} = n{n-1\choose k-1}$ </p>
<p>Choosing a team of $k$ elements from $n$ and from that $k$ elements, pick a captain is the same as choose a captain first, and then, complete the team, choosing $k-1$ elements from $n-1$</p>
<p>But, what about $k(k-1){n\choose k}$ ?</p>
| DirkGently | 88,378 | <p>Let us write the equation as
$$\sum_{k=1}^n k (k-1){\binom{n}{k}}\cdot k{\binom{n}{n-k}}=n^2(n-1)\binom{2n-3}{n-2}.$$
Now count the number of sequences of length $2n$ on the alphabet $\{a_0,a_1,a_2,b_0,b_1\}$ with $n$ $a$'s (with any subscript) and $n$ $b$'s (with any subscript) satisfying the following additional conditions. There is exactly one $a_1$, one $a_2$ and one $b_1$. Furthermore, $a_1$ and $a_2$ appear in the first half of the sequence and $b_1$ appears in the second half.</p>
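<p>The identity itself is easy to verify by brute force for small $n$ (a quick Python check, my addition):</p>

```python
from math import comb

def lhs(n):
    # left-hand side of the identity
    return sum(k * k * (k - 1) * comb(n, k) ** 2 for k in range(1, n + 1))

for n in range(2, 40):
    assert lhs(n) == n * n * (n - 1) * comb(2 * n - 3, n - 2)
```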
|
748,815 | <blockquote>
<p>$\displaystyle\sum\limits_{k=1}^nk^2(k-1){n\choose k}^2 = n^2(n-1)
{2n-3\choose n-2}$ considering $n\ge2$</p>
</blockquote>
<p>Can somebody help with this combinatorial proof?
I'm struggling a lot.
Thanks.</p>
<p><strong>EDIT:</strong> Ok. I could figure it out, if we had $\displaystyle\sum\limits_{k=1}^nk^2{n\choose k}^2 = n^2
{2n-2\choose n-1}$.</p>
<p>The problem is, I don't understand what to do with that $(k-1)$ and how it leads to ${2n-3\choose n-2}$.</p>
<p>I know $k{n\choose k} = n{n-1\choose k-1}$ </p>
<p>Choosing a team of $k$ elements from $n$ and from that $k$ elements, pick a captain is the same as choose a captain first, and then, complete the team, choosing $k-1$ elements from $n-1$</p>
<p>But, what about $k(k-1){n\choose k}$ ?</p>
| Felix Marin | 85,343 | <p>$\newcommand{\+}{^{\dagger}}
\newcommand{\angles}[1]{\left\langle\, #1 \,\right\rangle}
\newcommand{\braces}[1]{\left\lbrace\, #1 \,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\, #1 \,\right\rbrack}
\newcommand{\ceil}[1]{\,\left\lceil\, #1 \,\right\rceil\,}
\newcommand{\dd}{{\rm d}}
\newcommand{\down}{\downarrow}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,{\rm e}^{#1}\,}
\newcommand{\fermi}{\,{\rm f}}
\newcommand{\floor}[1]{\,\left\lfloor #1 \right\rfloor\,}
\newcommand{\half}{{1 \over 2}}
\newcommand{\ic}{{\rm i}}
\newcommand{\iff}{\Longleftrightarrow}
\newcommand{\imp}{\Longrightarrow}
\newcommand{\isdiv}{\,\left.\right\vert\,}
\newcommand{\ket}[1]{\left\vert #1\right\rangle}
\newcommand{\ol}[1]{\overline{#1}}
\newcommand{\pars}[1]{\left(\, #1 \,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\pp}{{\cal P}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\vphantom{\large A}\,#2\,}\,}
\newcommand{\sech}{\,{\rm sech}}
\newcommand{\sgn}{\,{\rm sgn}}
\newcommand{\totald}[3][]{\frac{{\rm d}^{#1} #2}{{\rm d} #3^{#1}}}
\newcommand{\ul}[1]{\underline{#1}}
\newcommand{\verts}[1]{\left\vert\, #1 \,\right\vert}
\newcommand{\wt}[1]{\widetilde{#1}}$
$\ds{\sum_{k = 1}^{n}k^{2}\pars{k - 1}{n \choose k}^{2} =
n^{2}\pars{n - 1}{2n - 3 \choose n - 2}}$</p>
<blockquote>
<p>\begin{align}
&\mbox{Lets consider}\quad
\fermi\pars{x} \equiv \sum_{k = 0}^{n}{n \choose k}^{2}x^{k}
\\&\mbox{such that}\quad
\sum_{k = 1}^{n}k^{2}\pars{k - 1}{n \choose k}^{2}
=\left.\bracks{\pars{x\,\partiald{}{x}}^{3} - \pars{x\,\partiald{}{x}}^{2}}\fermi\pars{x}
\right\vert_{x\ =\ 1}\tag{1}
\end{align}</p>
</blockquote>
<p>Hereafter we'll use the identity
$$\color{#c00000}{%
{m \choose n} = \int_{\verts{z} = 1}{\pars{1 + z}^{m} \over z^{n + 1}}
\,{\dd z \over 2\pi\ic}\,,\qquad m, n \in {\mathbb N}\,,\quad m \geq n}\tag{2}
$$</p>
<blockquote>
<p>\begin{align}
\fermi\pars{x}&=\sum_{k = 0}^{n}x^{k}{n \choose k}
\int_{\verts{z} = 1}{\pars{1 + z}^{n} \over z^{k + 1}}\,{\dd z \over 2\pi\ic}
=\int_{\verts{z} = 1}{\pars{1 + z}^{n} \over z}
\sum_{k = 0}^{n}{n \choose k}\pars{x \over z}^{k}\,{\dd z \over 2\pi\ic}
\\[3mm]&=\int_{\verts{z} = 1}{\pars{1 + z}^{n} \over z}\pars{1 + {x \over z}}^{n}
\,{\dd z \over 2\pi\ic}
=\int_{\verts{z} = 1}{\pars{1 + z}^{n} \over z^{n + 1}}\,\pars{x + z}^{n}
\,{\dd z \over 2\pi\ic}\tag{3}
\end{align}</p>
</blockquote>
<p>With $\pars{1}$ and $\pars{3}$ we'll have:
\begin{align}
&\color{#00f}{\large\sum_{k = 1}^{n}k^{2}\pars{k - 1}{n \choose k}^{2}}
\\[3mm]&=\int_{\verts{z} = 1}{\pars{1 + z}^{n} \over z^{n + 1}}\,
n\pars{n - 1}\bracks{%
\pars{n - 2}\pars{1 + z}^{n - 3} + 2\pars{1 + z}^{n - 2}}\,{\dd z \over 2\pi\ic}
\\[3mm]&=n\pars{n -1}\pars{n - 2}
\int_{\verts{z} = 1}{\pars{1 + z}^{2n - 3} \over z^{n + 1}}\,{\dd z \over 2\pi\ic}
+
2n\pars{n -1}
\int_{\verts{z} = 1}{\pars{1 + z}^{2n - 2} \over z^{n + 1}}\,{\dd z \over 2\pi\ic}
\\[3mm]&=n\pars{n -1}\pars{n - 2}{2n - 3 \choose n}
+
2n\pars{n -1}{2n - 2 \choose n}
\\[3mm]&={\pars{2n - 3}! \over \pars{n - 3}!\pars{n - 3}!}
+ 2\,{\pars{2n - 2}! \over \pars{n - 2}!\pars{n - 2}!}
\\[3mm]&={\pars{n - 1}\pars{n - 2}^{2} + 2\pars{n - 1}\pars{2n - 2}
\over \pars{n - 2}!\pars{n - 1}!}\,\pars{2n - 3}!
=\bracks{\pars{n - 2}^{2} + 4n - 4}\pars{n - 1}
{2n - 3 \choose n - 2}
\\[3mm]&=\color{#00f}{\large n^{2}\pars{n - 1}{2n - 3 \choose n - 2}}
\end{align}</p>
|
933,604 | <p>Hi can anyone solve these two questions using logs and indices</p>
<p>a.
$$2^{2x}-2^{x+1}=48$$</p>
<p>b.
$$6^{2x+1}-17*{6^x}+12=0$$</p>
<p>Thanks.</p>
| Brass2010 | 173,503 | <p>Thanks for the answers. After a night of thought, I came up with the idea of using quadratic equations and logarithms to solve these.</p>
<p><strong>a.</strong> Let $a = 2^x$ (so $2^{2x}=a^2$). The equation becomes</p>
<p>$$a^2-2a-48=0$$</p>
<p>$$a=8 \text{ or } a=-6 \text{ (rejected)}$$</p>
<p>so $2^x=8$, giving $x=\log_2 8 = 3$.</p>
<p><strong>b.</strong> Let $a = 6^x$. The equation becomes</p>
<p>$$6a^2-17a+12=0$$</p>
<p>$$a=1.5 \text{ or } a=4/3$$</p>
<p>so $x=\log_6 1.5 \approx 0.226$ or $x=\log_6 (4/3) \approx 0.1606$.</p>
<p>Cheers!</p>
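<p>A quick numerical check of these solutions (my addition; for the first part I read the equation as $2^{2x}-2^{x+1}=48$, which is what the quadratic in $a=2^x$ solves):</p>

```python
import math

# part (a): a = 2^x gives a^2 - 2a - 48 = 0, so a = 8 and x = 3
assert 2 ** (2 * 3) - 2 ** (3 + 1) == 48

# part (b): a = 6^x gives 6a^2 - 17a + 12 = 0, a = 3/2 or 4/3
for a in (1.5, 4.0 / 3.0):
    x = math.log(a, 6)
    assert abs(6 ** (2 * x + 1) - 17 * 6 ** x + 12) < 1e-9
```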
|
971,617 | <p>Suppose $p,q$ are two distinct prime numbers, $q \geq 3$ and $p \not\equiv 1 \pmod q$. Then I have the following problem: Prove that there is no integer $x \in \mathbb{Z}$ such that $1+x+x^2+...+x^{q-1} \equiv 0 \pmod p$. </p>
<p>It is obvious that $x$ cannot be $0 \pmod p$, and I also found that when $p$ is even, i.e. $p=2$, that this isn't too hard. However, for the rest I only found that $x^q-1 = (1+x+...+x^{q-1})(x-1) \equiv 0 \pmod p$. Where do I go next? </p>
<p>Thanks in advance</p>
| Xoff | 36,246 | <p>You are looking for an element $x$ of the finite field with $p$ elements such that $x^q=1$ and $x\neq 1$ (if $x \equiv 1$, the sum is $q \not\equiv 0 \pmod p$, since $p$ and $q$ are distinct primes). Such an $x$ has order exactly $q$, because $q$ is prime. But the multiplicative group of any finite field is cyclic of order $p-1$, so $q$ must divide $p-1$, i.e. $p \equiv 1 \pmod q$, contradicting the hypothesis.</p>
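<p>A brute-force check over small primes agrees, and also shows that solutions do appear once $p \equiv 1 \pmod q$ (a Python sketch, my addition; <code>has_solution</code> is a made-up name):</p>

```python
def has_solution(p, q):
    # is there x with 1 + x + ... + x^(q-1) ≡ 0 (mod p)?
    return any(sum(pow(x, i, p) for i in range(q)) % p == 0 for x in range(p))

primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
for q in [3, 5, 7]:
    for p in primes:
        if p != q and p % q != 1:
            assert not has_solution(p, q)

# conversely, when p ≡ 1 (mod q) a solution exists, e.g. p = 7, q = 3, x = 2
assert has_solution(7, 3)
```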
|
189,650 | <p>let $S=\{s_1, s_2, s_3 \}$, if $s_1$ can be represented as a linear combination of $s_2$ and $s_3$, $s_2$ can be represented as a linear combination of $s_1$ and $s_3$ but $s_3$ can not be represented as a linear combination of $s_1$ or $s_2$ or $s_1$ and $s_2$, can we call $S$ a linearly dependent set? </p>
| Community | -1 | <p>Yes, the elements of $S$ are linearly dependent. To be linearly dependent means that there exist scalars $a, b, c$, not all zero such that
$$
as_1 + bs_2 + cs_3 = 0.
$$
This is true of your elements because we know
$$
s_1 = ms_2 + ns_3
$$
for some scalars $m, n$ and we can rearrange this equation to get $a = 1$, $b = -m$, $c = -n$. (The fact that we can't express $s_3$ as a linear combination of $s_1$ and $s_2$ implies that $n$ is going to have to be $0$, but this is not relevant to the definition of linear independence.)</p>
|
2,979,226 | <p>Consider you are given following </p>
<blockquote>
<p><span class="math-container">$$\biggr (x-\dfrac{2}{x^2}\biggr )^6$$</span></p>
</blockquote>
<p>I'm trying to evaluate the constant term. What I've done so far is given below</p>
<p><span class="math-container">$$\sum^{6}_{n = 0} \binom{6}{r}x^{6-r}\times (-2)^6 \times x^{-12}$$</span></p>
<p><span class="math-container">$$\sum^{6}_{n = 0} \binom{6}{r}x^{-6-r}\times (-2)^6 $$</span></p>
<p><span class="math-container">$$-6-r = 0 \implies r = -6$$</span></p>
<p>I got a negative number. Where did I go wrong?</p>
<p>Regards</p>
| Sujit Bhattacharyya | 524,692 | <p><strong>EDIT:</strong></p>
<p>Here is a simple observation:</p>
<p>Constant term will have no <span class="math-container">$x$</span>. So in order to omit <span class="math-container">$x$</span> we must have, <span class="math-container">$\displaystyle\frac{x^r}{x^{2(6-r)}}=1\implies x^r=x^{12-2r}\implies r=4$</span></p>
<p>Here <span class="math-container">$r$</span> counts the plain-<span class="math-container">$x$</span> factors; with the indexing in your expansion (general term <span class="math-container">$\binom{6}{r}x^{6-r}(-2x^{-2})^r$</span>) the same term appears at <span class="math-container">$r=2$</span>, since <span class="math-container">$^nC_r={}^nC_{n-r}$</span>. Either way it is a single term, and its value is <span class="math-container">$\binom{6}{2}(-2)^2=60$</span>. (Note that in your work the factor should have been <span class="math-container">$(-2)^r x^{-2r}$</span> rather than <span class="math-container">$(-2)^6 x^{-12}$</span>; that is where the spurious <span class="math-container">$r=-6$</span> came from.)</p>
<p>Here is a <a href="https://i.stack.imgur.com/q6OLQ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/q6OLQ.jpg" alt="gift"></a> from Mathematica.</p>
<p>Hope it works.</p>
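<p>The same bookkeeping can be done exhaustively in a few lines of Python (my addition), collecting the coefficient of each power of <span class="math-container">$x$</span>:</p>

```python
from math import comb

# general term of (x - 2/x^2)^6: C(6,r) * x^(6-r) * (-2)^r * x^(-2r), exponent 6 - 3r
coeffs = {}
for r in range(7):
    exponent = 6 - 3 * r
    coeffs[exponent] = coeffs.get(exponent, 0) + comb(6, r) * (-2) ** r

assert coeffs[0] == 60   # the constant term, from r = 2 alone
```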
|
<p>I was working on the problems in Mathematical Methods for Physics and Engineering by Riley, Hobson & Bence.
In Problem 2.34 (d) I'm supposed to find this integral: <span class="math-container">$$J=\int\frac{dx}{x(x^n+a^n)}.$$</span>
I used partial fractions and arrived at the form
<span class="math-container">$$J=\frac{1}{a^n}\left[\log x-\int \frac{dx}{x^n+a^n}\right]$$</span>
and now I'm stuck, I don't know how to integrate <span class="math-container">$1/(x^n+a^n)$</span>.</p>
| Kenta S | 404,616 | <p><span class="math-container">\begin{align*}
J&=\int\frac{dx}{x(x^n+a^n)}\\
&=\frac1{a^n}\int\left(\frac1{x}-\frac{x^{n-1}}{x^n+a^n}\right)dx\\
&=\frac1{a^n}\ln|x|-\frac1{na^n}\int\frac{nx^{n-1}}{x^n+a^n}dx\\
&=\frac1{a^n}\ln|x|-\frac1{na^n}\ln|x^n+a^n|+C,\\
\end{align*}</span>
where <span class="math-container">$C$</span> is the constant of integration.</p>
<p>Alternatively, we have
<span class="math-container">\begin{align*}
J&=\int\frac{x^{-n-1}dx}{1+a^nx^{-n}}\\
&=-\frac1n\int\frac{(x^{-n})'dx}{1+a^nx^{-n}}\\
&=-\frac1{na^n}\int\frac{(1+a^nx^{-n})'dx}{1+a^nx^{-n}}\\
&=-\frac1{na^n}\ln|1+a^nx^{-n}|+C.\\
\end{align*}</span></p>
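<p>A numerical sanity check of the first antiderivative (my addition; <code>J</code> is my name for it, tested at the sample values <span class="math-container">$n=3$</span>, <span class="math-container">$a=2$</span>):</p>

```python
import math

def J(x, n, a):
    # antiderivative (1/a^n) ln|x| - (1/(n a^n)) ln|x^n + a^n|
    return math.log(abs(x)) / a ** n - math.log(abs(x ** n + a ** n)) / (n * a ** n)

n, a = 3, 2.0
for x in [0.5, 1.0, 1.7, 3.0]:
    h = 1e-6
    deriv = (J(x + h, n, a) - J(x - h, n, a)) / (2 * h)
    assert abs(deriv - 1.0 / (x * (x ** n + a ** n))) < 1e-6
```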
|
1,180,437 | <p>I am trying to understand this proof. Rather an important part of the proof. I have already shown this is true for $n=2$ and am assuming the $a_n$ case is true.</p>
<p>$$(a_1^2+a_2^2+...+a_n^2) \le (a_1+a_2+...+a_n)^2$$
Want to show that
$$(a_1^2+a_2^2+...+a_n^2 + a_{n+1}^2) \le (a_1+a_2+...+a_n+a_{n+1})^2$$
i.e.
$$(a_1^2+a_2^2+...a_n^2) + a_{n+1}^2 \le ((a_1+a_2+...a_n)+(a_{n+1}))^2$$</p>
<p>$=$
$$(a_1^2+a_2^2+...+a_n^2 + a_{n+1}^2) \le (a_1+a_2+...+a_n)^2+2(a_1+a_2+...+a_n)(a_{n+1})+(a_{n+1}) ^2$$ and here is the part I am not understanding. For some reason the proof moves some of the terms over and I cannot identify what is being replaced or why. My guess is that the terms that moves are the ${n+1}$ terms. But, I am not certain. </p>
<p>$$a_1^2+a_n^2+a_{n+1}^2...+2(a_1+a_2+...a_n)(a_{n+1}) \le (a_1+a_2+...a_n)^2$$</p>
| Math-fun | 195,344 | <p><em>inductive step</em>: </p>
<p>the claim being correct for
$$(a_1^2+a_2^2+...+a_n^2) \le (a_1+a_2+...+a_n)^2$$
implies
$$(a_1^2+a_2^2+...+a_n^2+a_{n+1}^2) \le (a_1+a_2+...+a_n+a_{n+1})^2$$
<strong>Proof</strong>
\begin{align}
a_1^2+a_2^2+...+a_n^2+a_{n+1}^2 &=(a_1^2+a_2^2+...+a_n^2)+a_{n+1}^2\\
& \leq (a_1+a_2+...+a_n)^2+a_{n+1}^2 \mbox{ (using the assumption)}\\
&=y^2+a_{n+1}^2 \mbox{ (rewriting } y=a_1+a_2+...+a_n)\\
& \leq (y+a_{n+1})^2 \mbox{ (using: } a^2+b^2\leq (a+b)^2)\\
&=(a_1+a_2+...+a_n+a_{n+1})^2 \mbox{ (plug back for } y)
\end{align}</p>
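<p>A small numerical companion to the inductive step (a Python sketch). Note the inequality genuinely needs the <span class="math-container">$a_i$</span> to be nonnegative; with <span class="math-container">$a_1=1, a_2=-1$</span> the right side is <span class="math-container">$0$</span>.</p>

```python
import random

# sum of squares <= square of the sum, for nonnegative reals
# (the difference is the sum of the cross terms 2*a_i*a_j >= 0).
def sq_sum_ok(xs):
    return sum(x * x for x in xs) <= sum(xs) ** 2 + 1e-9

random.seed(0)
samples = [[random.uniform(0, 10) for _ in range(random.randint(1, 8))]
           for _ in range(1000)]
assert all(sq_sum_ok(xs) for xs in samples)
assert not sq_sum_ok([1.0, -1.0])  # fails once negative entries are allowed
print("inequality verified on nonnegative samples")
```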
|
192,334 | <p>I want to partition string into longest substrings that each contain only specific characters, beginning from left to right with no overlaps, always choosing the longest one possible at current position. In my example only substrings that contain only characters <code>d,f,g</code> or <code>d,e,h</code> or <code>a,b,c,g</code> are allowed.</p>
<p>Example:</p>
<p>input:</p>
<pre><code>StringCases["ABCDEFGH",Longest[("D"|"F"|"G")..|("D"|"E"|"H")..|("A"|"B"|"C"|"G")..]]
</code></pre>
<p>output:</p>
<pre><code>{"ABC","D","E","FG","H"}
</code></pre>
<p>But after <code>"ABC"</code> there is evidently substring <code>"DE"</code> that is longer than <code>"D"</code> or <code>"E"</code>.
So my expected output would be:<code>{ABC,DE,FG,H}</code></p>
<p>If I switch first and second argument of <code>Alternatives</code> this way:</p>
<pre><code>StringCases["ABCDEFGH",Longest[("D"|"E"|"H")..|("D"|"F"|"G")..|("A"|"B"|"C"|"G")..]]
</code></pre>
<p>then output is as expected:</p>
<pre><code>{"ABC","DE","FG","H"}
</code></pre>
<p>But <code>Alternatives</code> should be from definition something that is independent on arguments order. So I would expect in both inputs the same output (second one).</p>
<p><strong>So my question is how to do it that I get always longest possible substring no matter what order of arguments inside <code>Alternatives</code> is?</strong></p>
| Carl Woll | 45,431 | <p>You could do:</p>
<pre><code>StringCases[
"ABCDEFGH",
Longest[p__] /; StringMatchQ[p,("D"|"F"|"G")..|("D"|"E"|"H")..|("A"|"B"|"C"|"G")..]
]
</code></pre>
<blockquote>
<p>{"ABC", "DE", "FG", "H"}</p>
</blockquote>
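<p>For readers outside Mathematica: the intended behaviour, always take the longest run at the current position regardless of the order the alternatives are listed in, can be sketched with a short greedy scanner (Python; an illustration of the idea, not of <code>StringCases</code> itself):</p>

```python
# Greedy longest-match partition: at each position try every allowed
# character set and keep the longest run, independent of listing order.
ALLOWED = [set("DFG"), set("DEH"), set("ABCG")]

def partition(s):
    out, i = [], 0
    while i < len(s):
        best = 0
        for chars in ALLOWED:
            j = i
            while j < len(s) and s[j] in chars:
                j += 1
            best = max(best, j - i)
        if best == 0:       # no alternative matches here: skip the character
            i += 1
        else:
            out.append(s[i:i + best])
            i += best
    return out

print(partition("ABCDEFGH"))  # ['ABC', 'DE', 'FG', 'H']
```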
|
3,111,985 | <p><span class="math-container">$f_n(x)= \frac{x}{(1+x)^n}\quad f_n(0)=0$</span></p>
<p>pointwise convergence: <span class="math-container">$\sum_{n=1}^{\infty} \frac{x}{(1+x)^n}=x \sum_{n=1}^{\infty} \frac{1}{(1+x)^n}$</span> and the series is a geometric series convergent if <span class="math-container">$|x+1|>1$</span>.</p>
<p>So there is pointwise convergence on <span class="math-container">$E=(-\infty,-2)\cup[0,+\infty)$</span></p>
<p>The sum of the series is <span class="math-container">$S(x)=1$</span> for <span class="math-container">$x\ne0$</span> and <span class="math-container">$S(0)=0$</span>, so if I consider <span class="math-container">$[0,+\infty)$</span> the convergence is not uniform.
But is there uniform convergence on a subset of <span class="math-container">$[0,+\infty)$</span>?</p>
<p>If I consider <span class="math-container">$A=[b,+\infty),b>0$</span>, then for <span class="math-container">$n$</span> large enough <span class="math-container">$\sup_A|f_n(x)|=f_n(b)=\frac{b}{(1+b)^n}$</span> (since <span class="math-container">$f_n$</span> is decreasing past <span class="math-container">$x=\frac{1}{n-1}$</span>), the general term of a convergent geometric series, so by the Weierstrass M-test the series converges uniformly on <span class="math-container">$A$</span>.</p>
<p>And on <span class="math-container">$B=(-\infty,-2)$</span>: I consider <span class="math-container">$\sup_B|S(x)-S_N(x)|=\sup_B\left|\sum_{n=N+1}^{+\infty}f_n(x)\right|$</span>.
The supremum is at least the value of the function at each point, so can I take <span class="math-container">$x_n=-2-\frac{1}{n}$</span> and prove that the convergence is not uniform?</p>
| MathFail | 978,020 | <p>The series is convergent on <span class="math-container">$x\in E, ~\text{where} ~E=(-\infty, -2)\cup [0,\infty)$</span></p>
<p>Define the partial sum as <span class="math-container">$S_n(x)=\sum_{k=1}^{n} f_k(x)$</span></p>
<p><span class="math-container">$$S_n(x)=x\left(\frac{1}{1+x}+\cdots+\frac{1}{(1+x)^n}\right)=1-\frac{1}{(1+x)^n}$$</span></p>
<p>First check pointwise convergence</p>
<p>If <span class="math-container">$x=0,~~S_n(0)=0$</span>,</p>
<p>If <span class="math-container">$x\in E\setminus\{0\}$</span>, <span class="math-container">$\lim_{n\to\infty} S_n(x)=1$</span>.</p>
<p><span class="math-container">$$
S(x)=
\begin{cases}
0 & \text{if} \quad x=0 \\
1 & \text{if} \quad x\in E\setminus\{0\}
\end{cases}
$$</span></p>
<p>Since <span class="math-container">$S_n(x)$</span> is continuous for all <span class="math-container">$n$</span>, if <span class="math-container">$S_n(x)\to S(x)$</span> uniformly, then <span class="math-container">$S(x)$</span> is continuous.</p>
<p>Obviously, <span class="math-container">$S(x)$</span> is not continuous at <span class="math-container">$0$</span> (by <span class="math-container">$p\to q \Leftrightarrow\neg q\to\neg p $</span>), therefore <span class="math-container">$S_n$</span> is not uniformly convergent.</p>
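<p>A numerical illustration of both claims (Python sketch; the cutoff <span class="math-container">$b>0$</span> is an assumed parameter): pointwise, <span class="math-container">$S_n(x)\to 1$</span> away from <span class="math-container">$0$</span>, while on <span class="math-container">$[b,+\infty)$</span> the error <span class="math-container">$\sup|1-S_n|=(1+b)^{-n}$</span> tends to <span class="math-container">$0$</span>, giving uniform convergence there.</p>

```python
# S_n(x) = 1 - 1/(1+x)**n, the closed-form partial sum from above
def S(n, x):
    return 1 - 1 / (1 + x) ** n

for x in (0.5, 3.0, -4.0):           # points of E away from 0
    assert abs(S(60, x) - 1) < 1e-4  # pointwise convergence to 1

b = 0.1  # on [b, inf) the sup of |1 - S_n| is attained at x = b
def tail(n):
    return (1 + b) ** (-n)

assert tail(200) < 1e-8 and tail(300) < tail(200)
print("pointwise and uniform checks passed")
```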
|
4,573,566 | <p>So I have to find the bifurcation points of the system: <span class="math-container">$\dot{x}=(ax-x^3+x^5)(x-a+2)$</span>, where <span class="math-container">$a\in\mathbb{R}$</span> is a parameter.</p>
<p>Attempt:<br />
I know that a bifurcation point is a point where there is a change in stability or in the number of fixed points.
I have tried visualising the graph, and have come to the conclusion that there are: <br />
4 fixed points for <span class="math-container">$a\leq 0$</span>.<br />
6 fixed points for <span class="math-container">$0<a\leq 0.2$</span>. <br />
2 fixed points for <span class="math-container">$0.2<a<2$</span>. <br />
1 fixed point for <span class="math-container">$a=2$</span><br />
2 fixed points for <span class="math-container">$2<a$</span>.<br />
The change in stability happens at the same time as the number of fixed points changes.</p>
<p>From what I have learned, I'm pretty sure that one bifurcation point is <span class="math-container">$(a,x)=(2,0)$</span>, and I think that a transcritical bifurcation happens at this point.</p>
<p>I think there is another bifurcation point, when we go from 4 to 6 to 2 points. I just don't know exactly what that point is? <span class="math-container">$a=0$</span>? <span class="math-container">$a=0.2$</span>? It confuses me that the change seems to happen before and after the interval <span class="math-container">$0<a\leq 0.2$</span>. Normally the change should happen at a single point?</p>
<p>All help is appreciated!</p>
| Gregory | 197,701 | <p>Let's take this step by step:</p>
<ul>
<li>Find the fixed points as a function of our parameters (e.g. <span class="math-container">$a$</span>).</li>
<li>Investigate the changes of the locations as we vary the parameters</li>
<li>Determine the stability of the fixed points.</li>
<li>Investigate the nature of these fixed points as we vary the parameters.</li>
</ul>
<p>The fixed points:
<span class="math-container">$$ x_c = 0, a-2, \pm \sqrt{\frac{1 \pm \sqrt{1 - 4a}}{2}} $$</span></p>
<p>When <span class="math-container">$a < 0$</span>, we have four solutions. Our first (pitchfork) bifurcation occurs at <span class="math-container">$a = 0$</span>, where two new solutions emerge. We now have six solutions for <span class="math-container">$0 < a < 1/4$</span>; at <span class="math-container">$a = 1/4$</span> we reach a double (saddle-node) bifurcation, one at <span class="math-container">$x = 1/\sqrt{2}$</span> and its mirror at <span class="math-container">$x = -1/\sqrt{2}$</span>, where two pairs of solutions annihilate. Down to only two fixed points, one more (transcritical) bifurcation occurs at <span class="math-container">$a = 2$</span>.</p>
<p>Stability can be found by linearizing about the critical points in the usual way.</p>
<p><a href="https://i.stack.imgur.com/rcKjc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rcKjc.png" alt="Bifurcation diagram" /></a></p>
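<p>A quick count of the real fixed points as <span class="math-container">$a$</span> varies, using the closed-form roots above (a Python sketch), reproduces the 4 / 6 / 2 / 1 / 2 pattern with transitions at <span class="math-container">$a=0$</span>, <span class="math-container">$a=1/4$</span> and <span class="math-container">$a=2$</span>:</p>

```python
import math

# Fixed points of x' = (a x - x^3 + x^5)(x - a + 2):
# x = 0, x = a - 2, and x^2 = (1 +/- sqrt(1 - 4a))/2 when real and >= 0.
def n_fixed_points(a):
    pts = {0.0, round(a - 2.0, 9)}
    disc = 1 - 4 * a
    if disc >= 0:
        for sign in (1, -1):
            v = (1 + sign * math.sqrt(disc)) / 2
            if v >= 0:
                r = math.sqrt(v)
                pts.update({round(r, 9), round(-r, 9)})
    return len(pts)

assert n_fixed_points(-1.0) == 4   # a < 0
assert n_fixed_points(0.1) == 6    # 0 < a < 1/4
assert n_fixed_points(1.0) == 2    # 1/4 < a < 2
assert n_fixed_points(2.0) == 1    # transcritical point
assert n_fixed_points(3.0) == 2    # a > 2
print("fixed-point counts match the bifurcation diagram")
```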
|
3,362,000 | <p>From listing the first few terms of the sequence <span class="math-container">$a_{n+1} = a_n + \frac{1}{a_n}$</span> (with <span class="math-container">$a_1 > 0$</span>), I suspect that the sequence is increasing, so I wanted to use mathematical induction to verify my suspicion.</p>
<p>I have assumed that <span class="math-container">$a_k<a_{k+1}$</span>, but I don't see how I can obtain <span class="math-container">$a_{k+1}<a_{k+2}$</span>, because <span class="math-container">$\frac{1}{a_k}>\frac{1}{a_{k+1}}$</span>.</p>
| mathcounterexamples.net | 187,663 | <p>The sequence is positive. Easy proof by induction.</p>
<p>Then <span class="math-container">$a_n - a_{n-1} = 1/a_{n-1} >0$</span> proving that the sequence is increasing.</p>
|
3,953,681 | <p>I have a basic question but I have failed in solving it. I have the equation of a cylinder which is <span class="math-container">$y^2 + z^2 = r^2$</span> (centered in the x-axis). The parametric equation (dependent on <span class="math-container">$L$</span> and <span class="math-container">$s$</span>) is <span class="math-container">$(x,y,z) = (L, r\cos(s), r\sin(s))$</span>.</p>
<p>I would like to rotate it by a certain angle <span class="math-container">$\theta$</span> (anticlockwise). Thus I have the new axes from the rotation as: <span class="math-container">$x=x'*\cos\theta + z'*\sin\theta$</span>, <span class="math-container">$y=y'$</span> and <span class="math-container">$z=r*\sin\theta$</span>. However, when rewriting the equation of the cylinder as <span class="math-container">$(y')^2 + (-x'*\sin\theta + z'*\cos\theta)^2 = r^2$</span> and parametrizing, I get: <span class="math-container">$(x,y,z) = (L, r*\cos(s), z+x'*\tan\theta)$</span>, with <span class="math-container">$z=r*\sin\theta$</span>. When I plot this, I get an elliptic cylinder.
Does anyone know what I am doing wrong? I need such an equation because I will generate multiple cylinders later computationally.</p>
<p>I have followed previous posts such as <a href="https://math.stackexchange.com/questions/2733090/if-i-have-an-oblique-cylinder-can-i-trim-it-in-to-a-rectilinear-cylinder">If I have an oblique cylinder can I trim it in to a rectilinear cylinder?</a> but they actually obtain the elliptic cylinder.</p>
<p>Many thanks!</p>
| Z Ahmed | 671,540 | <p><span class="math-container">$$I=\int \frac{dx}{x\sqrt{a-bx^2}}$$</span>
Let <span class="math-container">$x=\sqrt\frac{a}{b}\sin t$</span>, then
<span class="math-container">$$I=\int \frac{\sqrt{a/b} \cos t\,dt}{\sqrt{a/b} \sin t \sqrt{a} \cos t}.$$</span>
<span class="math-container">$$\implies I=\frac{1}{\sqrt{a}} \int \csc t \,dt=\frac{1}{\sqrt{a}}\int \csc t \,\frac{\csc t+\cot t}{\csc t+ \cot t}\,dt.$$</span>
Let <span class="math-container">$\csc t+ \cot t=u$</span>
<span class="math-container">$$I=\frac{1}{\sqrt{a}} \int \frac{-du}{u}=\frac{-1}{\sqrt{a}}\ln(\csc t+\cot t)=-\frac{1}{\sqrt{a}}\ln\frac{1+\sqrt{1-c^2x^2}}{cx}+C,\quad c=\sqrt{b/a}$$</span></p>
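<p>The sign here is easy to slip on: <span class="math-container">$\frac{1+\sqrt{1-c^2x^2}}{cx}$</span> and <span class="math-container">$\frac{1-\sqrt{1-c^2x^2}}{cx}$</span> are reciprocals, so the two log forms differ only by an overall sign. A numerical differentiation check (Python sketch, with sample values <span class="math-container">$a=2$</span>, <span class="math-container">$b=3$</span>, my choice) confirms which sign recovers the integrand:</p>

```python
import math

a, b = 2.0, 3.0
c = math.sqrt(b / a)

# Candidate antiderivative: F(x) = (1/sqrt(a)) * ln((1 - s)/(c x)),
# with s = sqrt(1 - c^2 x^2); valid on 0 < x < sqrt(a/b).
def F(x):
    s = math.sqrt(1 - (c * x) ** 2)
    return math.log((1 - s) / (c * x)) / math.sqrt(a)

h = 1e-6
for x in (0.2, 0.5, 0.7):
    deriv = (F(x + h) - F(x - h)) / (2 * h)   # central difference
    assert abs(deriv - 1 / (x * math.sqrt(a - b * x ** 2))) < 1e-5
print("antiderivative sign check passed")
```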
|
2,690,433 | <p>If $V$ is a vector space that has closure properties and satisfies the axioms and $S$ is a subset of $V$, why wouldn't $S$ always have closure under addition and scalar multiplication (which are required to show $S$ is a subspace) because since $S$ is a subset of $V$, doesn't that mean $S$ would have the same properties as $V$?</p>
| Matteo Casarosa | 539,948 | <p>No, because it could be that $v, w \in S $ but $v+w \not\in S $ or $\lambda v \not\in S$.</p>
<p>Note that you have to prove <em>both</em> closure properties: one does not entail the other.</p>
<p>Here are some examples:</p>
<p>Take $V = \mathbb{R}^2 $ . Now, $ \{(x,y)\in \mathbb{R}^2 \vert x\in \mathbb{Z}, y\in \mathbb{Z} \} $ . This subset is closed under addition but not under scalar multiplication (for a number in $\mathbb{R}$, which is our field). </p>
<p>Then consider $\{(x,y) \vert x=0 \vee y=0 \} $. This set is closed under scalar multiplication but not under addition.</p>
<p>Lastly, consider $\{(x,y) \vert x^2+y^2=1 \} $. This set is not closed under any of the two operations. </p>
|
1,574,663 | <p>I'm a first time Calc I student with a professor who loves using $e^x$ and logarithims in questions. So, loosely I know L'Hopital's rule states that when you have a limit that is indeterminate, you can differentiate the function to then solve the problem. But what do you do when no matter how much you differentiate, you just keep getting an indeterminate answer? For example, a problem like</p>
<p>$\lim _{x\to \infty }\frac{\left(e^x+e^{-x}\right)}{\left(e^x-e^{-x}\right)}$</p>
<p>When you apply L'Hopital's rule you just endlessly keep getting an indeterminate answer. With just my basic understanding of calculus, how would I go about solving a problem like that?</p>
<p>Thanks</p>
| Matematleta | 138,929 | <p>Factor out $e^x$: </p>
<p>$\frac{\left(e^x+e^{-x}\right)}{\left(e^x-e^{-x}\right)}=\frac{1+e^{-2x}}{1-e^{-2x}}\rightarrow 1$ as $x\rightarrow \infty $.</p>
<p>In other situations, Taylor expansions and algebraic operations usually are sufficient. There are also many standard inequalities which can help with these problems.</p>
|
3,734,216 | <p>Say <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are operators on Hilbert spaces <span class="math-container">$H_A,H_B$</span> respectively. If the Hilbert spaces are finite dimensional, then I know the tensor <span class="math-container">$A\otimes B$</span> can be represented by the Kronecker product <span class="math-container">$[a_{ij}B]$</span>.</p>
<p>Question 1: Does the Kronecker product formula <span class="math-container">$[a_{ij}B]$</span> still work in infinite dimensions?</p>
<p>Question 2: If not, does it work when <span class="math-container">$H_A$</span> is finite dimension and <span class="math-container">$H_B$</span> is infinite dimensional (possibly an operator on a non-separable space)?</p>
| Ben Grossmann | 81,360 | <p>We can make the Kronecker-product formula work in the following way. If <span class="math-container">$\{u_j\}_{j \in \Bbb N}$</span> is a basis of <span class="math-container">$H_A$</span>, then we have
<span class="math-container">$$
H_{A} \otimes H_B \cong \bigoplus_{j \in \Bbb N} H_B,
$$</span>
with an isomorphism between the two spaces defined by <span class="math-container">$\phi : H_{A} \otimes H_B \to \bigoplus_{j \in \Bbb N} H_B$</span>,
<span class="math-container">$$
\phi(u_j \otimes v) = (0,\dots,0,v,0,0,\dots).
$$</span>
Now, if <span class="math-container">$a_{jk}$</span> are defined such that <span class="math-container">$Au_k = \sum_{j \in \Bbb N} a_{jk}u_j$</span>, then we have
<span class="math-container">$$
\phi(A \otimes B) \phi^{-1}(v_1,v_2,\dots) =
\phi(A \otimes B)\sum_{k \in \Bbb N} u_k \otimes v_k = \phi\sum_{k \in \Bbb N} \left(\sum_{j \in \Bbb N} a_{jk}u_j \otimes (Bv_k) \right)\\
= \phi\sum_{j \in \Bbb N} u_j \otimes \sum_{k \in \Bbb N}(a_{jk}B)(v_k)
\\ = \left(\sum_{k \in \Bbb N}(a_{1k}B)(v_k),\sum_{k \in \Bbb N}(a_{2k}B)(v_k),\dots \right),
$$</span>
which indeed corresponds to the operator matrix product
<span class="math-container">$$
\pmatrix{a_{11}B & a_{12} B & \cdots\\
a_{21}B & a_{22}B & \cdots\\
\vdots & \vdots & \ddots}
\pmatrix{v_1\\ v_2 \\ \vdots}.
$$</span></p>
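<p>In finite dimensions this block formula is exactly the Kronecker product, and it is easy to sanity-check numerically (a plain-Python sketch, not part of the argument above): acting with <span class="math-container">$[a_{jk}B]$</span> on the stacked vector agrees with <span class="math-container">$(A\otimes B)(u\otimes v)=(Au)\otimes(Bv)$</span>.</p>

```python
# Plain-Python Kronecker product check for small dense matrices.
def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def kron(A, B):
    n, m = len(A), len(B)
    return [[A[i // m][k // m] * B[i % m][k % m] for k in range(n * m)]
            for i in range(n * m)]

def tensor(u, v):
    return [ui * vj for ui in u for vj in v]

A = [[1, 2], [3, 4]]
B = [[0, 1], [5, -1]]
u, v = [2, -1], [1, 3]

lhs = mat_vec(kron(A, B), tensor(u, v))     # block-matrix action on u tensor v
rhs = tensor(mat_vec(A, u), mat_vec(B, v))  # (A u) tensor (B v)
assert lhs == rhs
print("block Kronecker formula agrees:", lhs)  # [0, 0, 6, 4]
```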
|
1,602,271 | <p>Can someone help me solve this differential equation with the method of undetermined coefficients?
$$ y''-2y'+y=x\sin x$$
Thanks</p>
| Cesareo | 397,348 | <p>As the DE is linear we have</p>
<p><span class="math-container">$$
y = y_h + y_p
$$</span></p>
<p>with</p>
<p><span class="math-container">$$
\cases{
y''_h-2y'_h+y_h = 0\\
y''_p-2y'_p+y_p = x\sin x
}
$$</span></p>
<p>The homogeneous equation has the solution</p>
<p><span class="math-container">$$
y_h(x) = c_1 e^x+c_2 x e^x
$$</span></p>
<p>now for the particular we propose a solution (variation of constants method) such that</p>
<p><span class="math-container">$$
y_p = c_1(x) e^x+c_2(x)x e^x
$$</span></p>
<p>from <span class="math-container">$y''_p-2y'_p+y_p = x \sin x$</span> we obtain</p>
<p><span class="math-container">$$
e^x \left(c_1''(x)+x c_2''(x)+2 c_2'(x)\right)-x \sin (x) = 0
$$</span></p>
<p>now, since there are two unknown functions <span class="math-container">$c_1(x), c_2(x)$</span> and only one equation, we are free to impose the pair of conditions</p>
<p><span class="math-container">$$
\cases{
c''_1(x)e^x = x \sin x\\
xc''_2(x)+2c'_2(x) = 0}
$$</span></p>
<p>Here, as <span class="math-container">$y_p$</span> is a particular solution, we can choose the integration constants at our convenience. Now solving for <span class="math-container">$c_1(x), c_2(x)$</span> we have finally</p>
<p><span class="math-container">$$
y = y_h + y_p
$$</span></p>
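<p>The computation above stops before stating <span class="math-container">$y_p$</span> explicitly. One candidate, found by the classical undetermined-coefficients ansatz <span class="math-container">$(Ax+B)\cos x+(Cx+D)\sin x$</span> (my addition, not taken from the derivation above), is <span class="math-container">$y_p=\frac{(x+1)\cos x-\sin x}{2}$</span>; a quick check in Python:</p>

```python
import math

# Verify y'' - 2y' + y = x sin x for y_p = ((x+1) cos x - sin x)/2,
# using its exact derivatives:
#   y_p'  = -(x+1) sin(x) / 2
#   y_p'' = -(sin(x) + (x+1) cos(x)) / 2
def residual(x):
    yp  = ((x + 1) * math.cos(x) - math.sin(x)) / 2
    yp1 = -(x + 1) * math.sin(x) / 2
    yp2 = -(math.sin(x) + (x + 1) * math.cos(x)) / 2
    return yp2 - 2 * yp1 + yp - x * math.sin(x)

assert all(abs(residual(x)) < 1e-12 for x in (-2.0, -0.3, 0.0, 1.0, 4.7))
print("particular solution verified")
```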
|
2,648,626 | <p>Is the set $(e_n)_{n>0}$ a (vector space) basis for the sequence Hilbert space $l^2$? It is a Hilbert space basis anyway.</p>
<p>I would say no, because the sequence $\left(\frac{1}{n}\right)_{n>0}$ is in $l^2$ but it can't be written as a finite linear combination of $e_i$'s.</p>
<p>Is that right?</p>
| Henno Brandsma | 4,280 | <p>Yes, that's true. Any actual vector space basis for $\ell^2$ has to have the same size as $\mathbb{R}$ and cannot be explicitly written down. </p>
|
3,041,656 | <p>I need some help in a proof:
Prove that any integer <span class="math-container">$n>6$</span> can be written as a sum of two co-prime integers <span class="math-container">$a,b>1$</span>, i.e. with <span class="math-container">$\gcd(a,b)=1$</span>.</p>
<p>I tried to go around with "Dirichlet's theorem on arithmetic progressions" but didn't have any luck arriving at an actual proof.
I mainly used the arithmetic progression of <span class="math-container">$4$</span>, <span class="math-container">$(4n,4n+1,4n+2,4n+3)$</span>, but did not get far, only to the extent of specific examples, and even then <span class="math-container">$a,b$</span> weren't always co-prime (and <span class="math-container">$n$</span> was also playing a role, so it wasn't <span class="math-container">$a+b$</span>, it was <span class="math-container">$an+b$</span>).</p>
<p>I would appreciate it a lot if someone could give a hand here.</p>
| fleablood | 280,126 | <p>Well if <span class="math-container">$n$</span> is odd you can always do <span class="math-container">$n-2$</span> and <span class="math-container">$2$</span>. Or you can do <span class="math-container">$\frac {n-1}2$</span> and <span class="math-container">$\frac {n+1}2$</span>.</p>
<p>If <span class="math-container">$n = 2k$</span> and <span class="math-container">$k$</span> is even you can do <span class="math-container">$k-1$</span> and <span class="math-container">$k+1$</span>. As <span class="math-container">$k\pm 1$</span> is odd and <span class="math-container">$\gcd(k-1, k+1) = \gcd(k-1, k+1 -(k-1)) = \gcd(k-1,2)=1$</span>.</p>
<p>If <span class="math-container">$n = 2k$</span> and <span class="math-container">$k$</span> is odd you can do <span class="math-container">$k-2$</span> and <span class="math-container">$k+2$</span> and as <span class="math-container">$k\pm 2$</span> is odd you have <span class="math-container">$\gcd(k-2,k+2)=\gcd(k-2, 4) = 1$</span>. </p>
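<p>This case split is easy to verify exhaustively on a range of <span class="math-container">$n$</span> (a Python sketch of the recipe above, also checking that both parts exceed <span class="math-container">$1$</span>):</p>

```python
import math

# The recipe above: 2 + (n-2) for odd n; for n = 2k use (k-1, k+1)
# when k is even and (k-2, k+2) when k is odd.
def split(n):
    if n % 2 == 1:
        return 2, n - 2
    k = n // 2
    return (k - 1, k + 1) if k % 2 == 0 else (k - 2, k + 2)

for n in range(7, 2001):
    a, b = split(n)
    assert a > 1 and b > 1 and a + b == n and math.gcd(a, b) == 1
print("all splits are coprime")
```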
|
2,516,942 | <p>Trying to find all solutions on $(-\infty,+\infty)$ for:
$y''+4y = 0$</p>
<p>I know that the discriminant of the characteristic equation is -16, so the roots are complex: $k=0.5 \cdot \sqrt{-16} = \pm 2i$</p>
<p>$f_1(x) = e^{(2ix)} = \cos(2x) + i\sin(2x)$</p>
<p>$f_2(x) = e^{(-2ix)} = \cos(2x) - i\sin(2x)$</p>
<p>and so the general solution therefore is </p>
<p>$y=c_1(\cos(2x) + i\sin(2x)) + c_2(\cos(2x) - i\sin(2x))$</p>
<p>but the answers say that it is </p>
<p>$y=C_1\cos(2x) + C_2\sin(2x)$</p>
<p>So I am having trouble interpreting the real parts of the complex roots. Could someone please explain how to get to the answer from here?</p>
| Nirvanacs | 70,592 | <p>It can be solved directly by induction.</p>
<p>Note that
$$
(n!)^2\, 2^{2n}=(2\cdot 4\cdot 6 \cdots 2n)^2,
$$
the middle expression, $\frac{(2n)!}{2^{2n}(n!)^2}$, can be rewritten as
$$
\prod_{k=1}^n\left(1-\frac{1}{2k}\right).
$$
By induction, for the right inequality, it suffices to show that
$$
\frac{1}{\sqrt{2n+1}}\frac{2n+1}{2n+2}<\frac{1}{\sqrt{2n+3}}.
$$
This is equivalent to
$$
(2n+1)(2n+3)<(2n+2)^2,
$$which is trivial.</p>
<p>For the left inequality, it suffices to show that
$$
\frac{1}{2\sqrt{n}}\frac{2n+1}{2n+2}>\frac{1}{2\sqrt{n+1}}.
$$
This is equivalent to
$$
(2n+1)^2>2n(2n+2),
$$which is also trivial.</p>
|
2,516,942 | <p>Trying to find all solutions on $(-\infty,+\infty)$ for:
$y''+4y = 0$</p>
<p>I know that the discriminant of the characteristic equation is -16, so the roots are complex: $k=0.5 \cdot \sqrt{-16} = \pm 2i$</p>
<p>$f_1(x) = e^{(2ix)} = \cos(2x) + i\sin(2x)$</p>
<p>$f_2(x) = e^{(-2ix)} = \cos(2x) - i\sin(2x)$</p>
<p>and so the general solution therefore is </p>
<p>$y=c_1(\cos(2x) + i\sin(2x)) + c_2(\cos(2x) - i\sin(2x))$</p>
<p>but the answers say that it is </p>
<p>$y=C_1\cos(2x) + C_2\sin(2x)$</p>
<p>So I am having trouble interpreting the real parts of the complex roots. Could someone please explain how to get to the answer from here?</p>
| Jack D'Aurizio | 44,121 | <p>$$\begin{eqnarray*}\frac{1}{4^n}\binom{2n}{n}=\prod_{k=1}^{n}\left(1-\frac{1}{2k}\right)&=&\sqrt{\frac{1}{4}\prod_{k=2}^{n}\left(1-\frac{1}{k}\right)\prod_{k=2}^{n}\left(1+\frac{1}{4k(k-1)}\right)}\\&=&\frac{1}{2\sqrt{n}}\sqrt{\prod_{k=2}^{n}\left(1-\frac{1}{(2k-1)^2}\right)^{-1}}\\(\text{Wallis product})\quad&=&\frac{1}{\sqrt{\pi n}}\sqrt{\prod_{k>n}\left(1+\frac{1}{4k(k+1)}\right)^{-1}}\end{eqnarray*}$$
is trivially $\leq\frac{1}{\sqrt{\pi n}}$ but also
$$\begin{eqnarray*}\phantom{\frac{1}{4^n}\binom{2n}{n}aaaaaaaaa}&\geq &\frac{1}{\sqrt{\pi n}}\sqrt{\prod_{k>n}\exp\left(\frac{1}{4(k+1)}-\frac{1}{4k}\right)}\\(\text{Telescopic})\quad&=&\frac{1}{\sqrt{\pi n\exp\frac{1}{4n}}}\end{eqnarray*}$$
and the double inequality can be improved up to:
$$\boxed{ \frac{1}{\sqrt{\pi\left(n+\frac{1}{3}\right)}}\leq\frac{1}{4^n}\binom{2n}{n}\leq\frac{1}{\sqrt{\pi\left(n+\frac{1}{4}\right)}}.}$$</p>
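<p>The boxed refinement is also easy to confirm numerically for small <span class="math-container">$n$</span> (Python, using exact integer binomials):</p>

```python
import math

# Check 1/sqrt(pi(n + 1/3)) <= binom(2n, n)/4^n <= 1/sqrt(pi(n + 1/4)).
def bounds_ok(n):
    mid = math.comb(2 * n, n) / 4 ** n
    lower = 1 / math.sqrt(math.pi * (n + 1 / 3))
    upper = 1 / math.sqrt(math.pi * (n + 1 / 4))
    return lower <= mid <= upper

assert all(bounds_ok(n) for n in range(1, 200))
print("refined bounds hold for n = 1..199")
```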
|
3,971,059 | <p><span class="math-container">$x_n$</span> is a sequence. The only thing I can do here is just write the definition of <span class="math-container">$\lim_{n \to \infty} x_n=a$</span>, but that doesn't seem helpful to me. I tried looking for some inequalities that would help, and the only thing I found that I thought could be a little bit helpful was the reverse triangle inequality, but I don't know how to use that here. Could you please give some advice or a hint?</p>
<p>Thanks in advance</p>
| mathcounterexamples.net | 187,663 | <p>This is an immediate consequence of the continuity of the map <span class="math-container">$f_n(x)=x^{n}$</span>.</p>
|
3,971,059 | <p><span class="math-container">$x_n$</span> is a sequence. The only thing I can do here is just write the definition of <span class="math-container">$\lim_{n \to \infty} x_n=a$</span>, but that doesn't seem helpful to me. I tried looking for some inequalities that would help, and the only thing I found that I thought could be a little bit helpful was the reverse triangle inequality, but I don't know how to use that here. Could you please give some advice or a hint?</p>
<p>Thanks in advance</p>
| mathcounterexamples.net | 187,663 | <p>Use the equality</p>
<p><span class="math-container">$$x^n-a^n =(x-a)(x^{n-1} +x^{n-2}a + \dots + a^{n-1})$$</span></p>
<p>To prove the inequality</p>
<p><span class="math-container">$$\vert x^n -a^n \vert \le n \vert x -a\vert \, \vert 2a \vert^{n-1}$$</span></p>
<p>valid for <span class="math-container">$\vert x \vert \le \vert 2a\vert$</span>.</p>
<p>Then apply it for <span class="math-container">$n=2k$</span>.</p>
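<p>A random spot-check of the inequality on its stated domain (Python sketch; the sample value <span class="math-container">$a=1.3$</span> is my choice):</p>

```python
import random

A = 1.3  # sample value of a

# |x^n - a^n| <= n |x - a| |2a|^(n-1) whenever |x| <= |2a|
def bound_ok(x, n, a=A):
    return abs(x**n - a**n) <= n * abs(x - a) * abs(2 * a) ** (n - 1) + 1e-9

random.seed(1)
for _ in range(2000):
    n = random.randint(1, 10)
    x = random.uniform(-2 * A, 2 * A)
    assert bound_ok(x, n)
print("bound holds on all samples")
```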
|
1,832,177 | <p><em>(see edits below with attempts made in the meanwhile after posting the question)</em></p>
<h1>Problem</h1>
<p>I need to modify a sigmoid function for an AI application, but cannot figure out the correct math. Given a variable <span class="math-container">$x \in [0,1]$</span>, a function <span class="math-container">$f(x)$</span> should satisfy the following requirements (pardon the math-noobiness of my expression):
<br>
a) the values of <span class="math-container">$f(x)$</span> should be constrained to <span class="math-container">$[0,1]$</span><br>
b) when <span class="math-container">$x=0$</span> then <span class="math-container">$f(x)=0$</span>, and when <span class="math-container">$x=1$</span> then <span class="math-container">$f(x)=1$</span><br>
c) <span class="math-container">$f(x)$</span> should follow a sigmoid or "S-curve" shape within these bounds, with some variable changing the "steepness" of the curve.</p>
<p>I used a different function earlier, corresponding to (0), illustrated below on the left:</p>
<p>(0) <span class="math-container">$f(x) = x^{(z+0.5)^{-b}}$</span> , where <span class="math-container">$b=2$</span> with <span class="math-container">$z \in [0,1]$</span> controlling the curve
<img src="https://i.stack.imgur.com/nvPie.png" alt="" /></p>
<p>What is a function that would satisfy these requirements, i.e. do the same thing as equation (0) but with an S-curve<span class="math-container">$?_{(I\hspace{1mm} hope\hspace{1mm} this\hspace{1mm} makes\hspace{1mm} at\hspace{1mm} least\hspace{1mm} some \hspace{1mm}mathematical\hspace{1mm} sense...)}$</span></p>
<hr />
<h2>Attempt 1</h2>
<p>I tried to accomplish something similar with a logistic function (by varying the <span class="math-container">$x_0$</span> value; cf. equation (1) and the right side plot above)</p>
<p>(1) <span class="math-container">$f(x) = \dfrac{1}{1 + e^{-b(x-(1-x_0))}}$</span> , where <span class="math-container">$b=10$</span>, <span class="math-container">$x_0 \in [0,1]$</span></p>
<p>...so that <span class="math-container">$x_0 = 0.5$</span> yields some central or average curve (like the linear growth line on the leftmost plot), and values around it raise or lower the steepness. Shortcoming: the "ends" of the curve where <span class="math-container">$x=\{0,1\}$</span> won't reach the required values of 0 and 1 respectively. I don't want to force it arbitrarily with an <code>if-else</code>, it should come naturally from the properties of an equation (and as such form nice curves).</p>
<h2>Attempt 2</h2>
<p>This sort of does the trick, now the ends smootly reach 0 and 1 for all values of <span class="math-container">$x_0$</span>:</p>
<p>(2) <span class="math-container">$f(x) = \Bigg(\bigg( \dfrac{1}{1 + e^{-b(x-(1-x_0))}}\bigg)x\Bigg)^{1-x} $</span></p>
<p>Only problem, the effect is not "symmetrical" when comparing high and low values. Observe the values for <span class="math-container">$f(x)$</span> with <span class="math-container">$x_0 = [0,1]$</span>; <span class="math-container">$b=10$</span> (left side plot below). The curve steepness varies more among lower <span class="math-container">$x_0$</span> values (yellow, red) than among higher values (pink); also, changing <span class="math-container">$x_0$</span> has noticeably more effect in lower values of <span class="math-container">$x$</span> than in its higher values.</p>
<p><a href="https://i.stack.imgur.com/OghRN.png" rel="noreferrer"><img src="https://i.stack.imgur.com/OghRN.png" alt="enter image description here" /></a></p>
<h2>Attempt 3</h2>
<p>Ok maybe if-elsing the hell out of those extreme values is not such a bad idea.</p>
<p>(3) <span class="math-container">$f(x) = \Bigg(\bigg( \dfrac{1}{1 + e^{-b(x-(1-x_0))}}\bigg)g(x)\Bigg)^{1-h(x)} $</span></p>
<p>where <span class="math-container">$g(x) = \left\{\begin{array}{ll}1 & x>0\\0 & otherwise\end{array}\right.$</span> and <span class="math-container">$h(x) = \left\{\begin{array}{ll}1 & x==0\\0 & otherwise\end{array}\right.$</span></p>
<p>The right side plot above illustrates the result: the ends are still nicely 0 and 1, and now the curves above and below <span class="math-container">$x_0=0.5$</span> are symmetrical, but there are noticeable "jumps" at <span class="math-container">$x=0.1$</span> and <span class="math-container">$x=0.9$</span> if <span class="math-container">$x_0$</span> is at either extreme. Still not good.</p>
<hr />
<p>So far my attempts are all so-so, each lacking in some respect. A better solution would thus still be very welcome.</p>
| Ron | 680,211 | <p>I was just working on finding a similar function. In case anyone else lands here in the future, here is a sigmoidal function which satisfies:</p>
<p><span class="math-container">$$f(0)=0$$</span>
<span class="math-container">$$f(1)=1$$</span></p>
<p>and has a single parameter, <span class="math-container">$k$</span>, controlling steepness:</p>
<p><span class="math-container">$$f(x) = 1-\frac{1}{1+(\frac{1}{x}-1)^{-k}} $$</span></p>
<p>Note that, by symmetry, these will also always be true: <span class="math-container">$f(1/2)=1/2$</span>, and <span class="math-container">$\int_0^1 f(x)dx = 1/2$</span>.</p>
<p>Method: Since <span class="math-container">$\frac{1}{1+e^x}$</span> is a bijection from <span class="math-container">$\mathbb{R}$</span> to <span class="math-container">$(0,1)$</span>, we can invert it to get a function mapping <span class="math-container">$(0,1) \to \mathbb{R}$</span>, which is <span class="math-container">$h(x) = \ln(\frac{1}{x}-1)$</span>. Then we can put <span class="math-container">$h$</span> into a standard (increasing) sigmoid which maps the reals back to <span class="math-container">$(0,1)$</span>. The form of the sigmoid is:</p>
<p><span class="math-container">$$ g(y) = 1-\frac{1}{1+e^{-ky}}$$</span></p>
<p><span class="math-container">$f(x)$</span> is what you get when you simplify <span class="math-container">$g(h(x))$</span>. Note that this <span class="math-container">$f$</span> is defined outside <span class="math-container">$x \in [0,1]$</span> and it does not asymptote out to 0 or 1, but it does achieve the desired behavior within that interval.</p>
<p>One additional note, in the special case where <span class="math-container">$k=1$</span>, this function reduces to <span class="math-container">$f(x)=x$</span>.</p>
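<p>A minimal implementation of this function (Python; the endpoints are handled by their limiting values, since the closed form divides by zero at <span class="math-container">$x=0$</span> and <span class="math-container">$x=1$</span>):</p>

```python
def f(x, k):
    if x <= 0.0:
        return 0.0   # limiting value f(0) = 0
    if x >= 1.0:
        return 1.0   # limiting value f(1) = 1
    return 1 - 1 / (1 + (1 / x - 1) ** -k)

assert f(0, 3) == 0 and f(1, 3) == 1
assert abs(f(0.5, 7) - 0.5) < 1e-12                  # symmetry point
assert all(abs(f(x, 1) - x) < 1e-12 for x in (0.1, 0.42, 0.9))  # k = 1 is identity
assert f(0.9, 4) > f(0.9, 1)                         # larger k, steeper S-curve
print("sigmoid satisfies the stated constraints")
```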
|
634,127 | <p>How to prove this (true or not)?</p>
<blockquote>
<p>$f(a,b) = f(a,c)$ must hold if $b = c$</p>
</blockquote>
<p><b>Note:</b> <i><b>f(a,b)</b> is a function with <b>a</b> & <b>b</b></i> parameters</p>
<p>thanks</p>
| angryavian | 43,949 | <p>Hint: by definition, a function takes in some inputs, and produces a <em>unique</em> output.</p>
|
227,096 | <p>Q:
Let $A$ be an $n\times n$ matrix defined by $A_{ij}=1$ for all $i,j$.
Find the characteristic polynomial of $A$.</p>
<p>There is probably a way to calculate the characteristic polynomial $(\det(A-tI))$ directly but I've spent a while not getting anywhere and it seems cumbersome. Something tells me there is a more intelligent and elegant way. The rank of $A$ is only 2. Is there a way to use this?</p>
| josh314 | 42,904 | <p>The Kronecker-delta factor does a "trace," meaning a summation over the diagonal components. Remember that $\delta_{ij}=0$ if $i\ne j$. So, then for any function $f(i,j)$, you'd have
\begin{equation}
\sum_{i,j} \delta_{ij} f(i,j) = \sum_{i=j} f(i,j) = \sum_i f(i,i)
\end{equation}</p>
<p>In your example, then,
\begin{equation}
\sum_{i,j} (1+\delta_{ij}) M_{ij} = \sum_{i,j} M_{ij} + \sum_i M_{ii}
\end{equation}</p>
<p>The first term is the sum of every element in the matrix. The second term is the sum of the elements on the diagonal.</p>
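<p>A tiny numeric check of the identity (Python, with an arbitrary <span class="math-container">$3\times 3$</span> example):</p>

```python
# sum_ij (1 + delta_ij) M_ij = (sum of all entries) + (trace)
M = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
n = len(M)
lhs = sum((1 + (1 if i == j else 0)) * M[i][j]
          for i in range(n) for j in range(n))
total = sum(sum(row) for row in M)      # 45
trace = sum(M[i][i] for i in range(n))  # 15
assert lhs == total + trace
print(lhs)  # 60
```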
|
227,096 | <p>Q:
Let $A$ be an $n\times n$ matrix defined by $A_{ij}=1$ for all $i,j$.
Find the characteristic polynomial of $A$.</p>
<p>There is probably a way to calculate the characteristic polynomial $(\det(A-tI))$ directly but I've spent a while not getting anywhere and it seems cumbersome. Something tells me there is a more intelligent and elegant way. The rank of $A$ is only 2. Is there a way to use this?</p>
| Ernesto Lopez Fune | 115,376 | <p>This transformation can be decomposed into the sum of two transformations (supposing of course that the indices <span class="math-container">$i$</span> and <span class="math-container">$j$</span> run over a finite ordered set; otherwise you have to check convergence piece by piece first): <span class="math-container">$M = \Sigma_{ij} (1+\delta_{ij})M_{ij}=\Sigma_{ij} M_{ij} + \Sigma_{ij}\delta_{ij}M_{ij}.$</span></p>
<p>The first term gives you the sum of all the elements of the matrix <span class="math-container">$M_{ij}$</span>, while the second term, since the Kronecker delta is zero for <span class="math-container">$i\neq j$</span>, gives you the sum of the terms <span class="math-container">$M_{ij}$</span> for which <span class="math-container">$j=i$</span>. So, in summary, this transformation gives you the sum of all the elements of the matrix <span class="math-container">$M_{ij}$</span>, but with the elements <span class="math-container">$M_{ii}$</span> added twice.</p>
<p>Notice that in your question, there are no additional assumptions on the type of matrix <span class="math-container">$M_{ij}$</span>, therefore, the description given above is general. If by any chance you say that <span class="math-container">$M_{ij}$</span> is a square matrix, then the transformation above gives you the sum of all off-diagonal terms of <span class="math-container">$M_{ij}$</span> plus twice the trace of <span class="math-container">$M_{ij}$</span>.</p>
|
203,456 | <p>Please help me prove $\log_b a\cdot\log_c b\cdot\log_a c=1$, where $a,b,c$ are positive numbers different from 1.</p>
| Karolis Juodelė | 30,701 | <p><strike>By definition</strike> $\log_a b = \frac{\log b}{\log a}$.</p>
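<p>(Spelling this out — with the change-of-base identity the factors telescope:
$$\log_b a\cdot\log_c b\cdot\log_a c=\frac{\log a}{\log b}\cdot\frac{\log b}{\log c}\cdot\frac{\log c}{\log a}=1.)$$</p>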
|
500,632 | <p>Find all such lines that are tangent to the following curves:</p>
<p>$$y=x^2$$ and $$y=-x^2+2x-2$$</p>
<p>I have been pounding my head against the wall on this. I used the derivatives and assumed that their derivatives must be equal at those tangent point but could not figure out the equations. An explanation will be appreciated.</p>
| Mark Bennet | 2,906 | <p>If $y=x^2$ we have that $\frac {dy}{dx} = 2x$</p>
<p>This gives the gradient of the tangent at the point $(x, y)=(a, a^2)$</p>
<p>If the tangent line is $y=mx + c$ we therefore have $a^2=(2a)\cdot a+c$ whence $c=-a^2$ and the general tangent line to $y=x^2$ is $$y=2ax-a^2$$</p>
<p>If $y=-x^2+2x-2$ we have $\frac {dy}{dx} = -2x+2$, so that the tangent line $y=mx +c$ at $(x, y)=(b, -b^2+2b-2)$ is $-b^2+2b-2=(-2b+2)b+c$ whence $c=b^2-2$ and the general tangent to the parabola is $$y=(-2b+2)x+b^2-2$$</p>
<p>If the two tangents are to be the same line we equate coefficients to give:$$a=1-b$$ and $$a^2=2-b^2$$</p>
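<p>Finishing the computation (a sketch): substituting $a=1-b$ into $a^2=2-b^2$ gives $2b^2-2b-1=0$, so $b=\frac{1\pm\sqrt3}{2}$. A quick numerical check that both resulting lines are tangent to the two parabolas:</p>

```python
import math

lines = []
for b in [(1 + math.sqrt(3)) / 2, (1 - math.sqrt(3)) / 2]:
    m = -2 * b + 2      # slope of the common tangent
    c = b * b - 2       # intercept
    # y = x^2 meets y = mx + c where x^2 - mx - c = 0: tangency
    # means the discriminant vanishes (a double root); likewise below.
    assert abs(m * m + 4 * c) < 1e-9
    # y = -x^2 + 2x - 2 meets the line where x^2 + (m - 2)x + (c + 2) = 0.
    assert abs((m - 2) ** 2 - 4 * (c + 2)) < 1e-9
    lines.append((m, c))

print(lines)
```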
|
14,552 | <p>What are good examples of proofs by induction that are relatively low on algebra? Examples might include simple results about graphs.</p>
<p>My aim is to help students get a sense of the logical form of an induction proof (in particular proving a statement of the form 'if $P(k)$ then $P(k+1)$'), independent of the way one might show that in a proof about series formulae specifically.</p>
| John Coleman | 6,891 | <p>A couple of simple examples come to mind:</p>
<p>1) Prove that there are $2^n$ subsets of an $n$-element set.</p>
<p>2) Prove the power rule of derivatives for non-negative integer powers using the product rule.</p>
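<p>The induction step of example 1 can even be made executable (a sketch): the subsets of $\{1,\dots,k+1\}$ are exactly the subsets of $\{1,\dots,k\}$, each taken both with and without the element $k+1$, which doubles the count.</p>

```python
def subsets(n):
    """All subsets of {1, ..., n}, built by the induction step."""
    if n == 0:
        return [set()]
    smaller = subsets(n - 1)                     # the P(k) case ...
    return smaller + [s | {n} for s in smaller]  # ... yields P(k+1)

assert all(len(subsets(n)) == 2 ** n for n in range(10))
```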
|
14,552 | <p>What are good examples of proofs by induction that are relatively low on algebra? Examples might include simple results about graphs.</p>
<p>My aim is to help students get a sense of the logical form of an induction proof (in particular proving a statement of the form 'if $P(k)$ then $P(k+1)$'), independent of the way one might show that in a proof about series formulae specifically.</p>
| Bill Dubuque | 163 | <p>Tiling problems might meet your constraints. A nice simple example is Golomb's Theorem that a chessboard of side $2^n$ with <em>any</em> square omitted can be tiled by trominoes ("L" shapes of 3 squares).</p>
<p>In fact we can modify it to give an example of how <em>strengthening the induction hypothesis</em> is often needed: simply replace "any square omitted" by "central square omitted".</p>
<p>The induction step is easy and vivid: divide the board into four smaller $2^{n-1}$ boards. By induction we can tile the board with the missing (pink) square, and we can tile the other three omitting their (purple) corner squares (in the center of the big square), leaving 3 central squares that form an "L", which we tile with one final tromino.</p>
<p><a href="https://i.stack.imgur.com/XDHYP.png" rel="noreferrer"><img src="https://i.stack.imgur.com/XDHYP.png" alt="enter image description here"></a></p>
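<p>The induction step translates directly into a recursive algorithm; a sketch in Python (the helper names are my own):</p>

```python
def tile(n, hole):
    """Tile a 2^n x 2^n board minus `hole` with L-trominoes.

    Returns a grid of tromino ids; the hole is marked -1.
    """
    size = 2 ** n
    grid = [[None] * size for _ in range(size)]
    grid[hole[0]][hole[1]] = -1
    next_id = [0]

    def solve(top, left, s, hr, hc):
        if s == 1:
            return
        t, next_id[0] = next_id[0], next_id[0] + 1
        half = s // 2
        mr, mc = top + half, left + half  # centre of this sub-board
        for qr, qc in [(top, left), (top, mc), (mr, left), (mr, mc)]:
            if qr <= hr < qr + half and qc <= hc < qc + half:
                solve(qr, qc, half, hr, hc)  # quadrant containing the hole
            else:
                # cover this quadrant's inner corner with tromino t,
                # then recurse treating that square as the new hole
                cr = mr - 1 if qr == top else mr
                cc = mc - 1 if qc == left else mc
                grid[cr][cc] = t
                solve(qr, qc, half, cr, cc)

    solve(0, 0, size, hole[0], hole[1])
    return grid

g = tile(3, (0, 0))  # 8x8 board with a corner square removed
```

The three central squares assigned id <code>t</code> in each step form exactly the "L" from the picture above.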
|
611,529 | <p>$$i^3=iii=\sqrt{-1}\sqrt{-1}\sqrt{-1}=\sqrt{(-1)(-1)(-1)}=\sqrt{-1}=i
$$</p>
<p>Please take a look at the equation above. What am I doing wrong to understand $i^3 = i$, not $-i$?</p>
| Martin Brandenburg | 1,650 | <p>$\sqrt{ab} = \sqrt{a} \sqrt{b}$ is correct in the following sense: In an arbitrary field (here it is the field of complex numbers) the root $\sqrt{a}$ is <em>an</em> element in some field extension such that $\sqrt{a}^2=a$. It is not uniquely determined, for if $b$ is a root, then also $-b$ is a root (and these only coincide when $a=0$ or the characteristic is $2$). Now the correct statement is:</p>
<ul>
<li>If $\sqrt{a}$ and $\sqrt{b}$ are roots of $a$ resp. $b$, then $\sqrt{a}\sqrt{b}$ is a root of $ab$.</li>
</ul>
<p>If we define $\sqrt{a}$ to be the <em>set</em> of all roots of $a$, then $\sqrt{ab}=\sqrt{a}\sqrt{b}$ even holds verbatim. For example, $\sqrt{-1}=\{\pm i\}$ then, and $\sqrt{(-1) (-1)} = \sqrt{-1} \sqrt{-1}$ holds since both sides equal $\{\pm 1\}$.</p>
<p>If $a \in \mathbb{R}^+$, one usually denotes by $\sqrt{a}$ the unique root of $a$ in $\mathbb{R}^+$, but this definition doesn't work properly for complex numbers or other fields.</p>
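<p>A small numerical illustration of the set-valued convention (a sketch using Python's <code>cmath</code>):</p>

```python
import cmath

def roots(a):
    """The set of both square roots of a complex number a."""
    r = cmath.sqrt(a)
    return {r, -r}

# sqrt((-1)(-1)) = sqrt(-1) * sqrt(-1) holds as an identity of sets:
lhs = roots((-1) * (-1))
rhs = {x * y for x in roots(-1) for y in roots(-1)}
assert lhs == rhs  # both are {1, -1}
```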
|
611,529 | <p>$$i^3=iii=\sqrt{-1}\sqrt{-1}\sqrt{-1}=\sqrt{(-1)(-1)(-1)}=\sqrt{-1}=i
$$</p>
<p>Please take a look at the equation above. What am I doing wrong to understand $i^3 = i$, not $-i$?</p>
| Husrat Mehmood | 116,634 | <p>Here is a simple answer to this question.</p>
<p>Since $i^2=-1$, if you break $i^3$ into $i^2\cdot i$ and then substitute $-1$ in place of $i^2$, you get $(-1)\cdot i =-i$.</p>
<p>That is the simple trick behind it.</p>
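<p>A one-line check with Python's complex numbers (<code>1j</code> denotes $i$):</p>

```python
print(1j * 1j)       # (-1+0j): i^2 = -1
print(1j * 1j * 1j)  # (-0-1j): i^3 = -i
assert 1j * 1j * 1j == -1j
```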
|
2,734,374 | <p>I don't think it is possible because that entails that only the <span class="math-container">$\mathbf 0$</span>-vector is in the eigenspace, but <span class="math-container">$\mathbf 0$</span> is not an eigenvector by definition. </p>
<p>However, my textbook says:</p>
<blockquote>
<p>For an <span class="math-container">$n\times n$</span> matrix, if there are <span class="math-container">$n$</span> distinct eigenvalues, then all eigenspaces have dimension at most <span class="math-container">$1$</span>.</p>
</blockquote>
<p>which seems to imply that eigenspaces of dimension <span class="math-container">$0$</span> are possible.</p>
| user | 505,767 | <p>By definition, to any eigenvalue there corresponds at least one eigenvector; thus for an $n$-by-$n$ matrix we have, for each eigenvalue $\lambda_i$, $1\le \dim(\text{eigenspace})\le n$.</p>
|
451,131 | <blockquote>
<p><strong>Problem</strong>: If <span class="math-container">$\int f(x) \sin{x} \cos{x}\,\mathrm dx = \frac {1}{2(b^2 - a^2)} \log f(x) +c $</span>. Find <span class="math-container">$f(x)$</span></p>
<p><strong>Solution</strong>: <span class="math-container">$\int f(x) \sin{x} \cos{x}\,\mathrm dx = \frac {1}{2(b^2 - a^2)} \log f(x) +c $</span></p>
<p>Differentiating both sides, we get</p>
<p><span class="math-container">$ f(x) \sin{x} \cos{x} = \frac {f'(x)}{2(b^2 - a^2)f(x)} $</span></p>
</blockquote>
<p>Am I doing right ?</p>
| André Nicolas | 6,312 | <p>It is a good start. For simplicity write $y$ for $f(x)$. We can rewrite the result you got as
$$\frac{y'}{y^2}=2(b^2-a^2)\sin x\cos x.$$
Integrate both sides. It may be handy to note that $2\sin x\cos x=\sin(2x)$. Or not, since it is clear that $2\sin x\cos x$ is the derivative of $\sin^2 x$. </p>
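<p>(Carrying this one step further, a sketch: integrating both sides gives $-\frac1y=(b^2-a^2)\sin^2 x+C$; the particular choice $C=-b^2$ turns the right-hand side into $-(a^2\sin^2 x+b^2\cos^2 x)$, so one admissible answer is $f(x)=\dfrac{1}{a^2\sin^2 x+b^2\cos^2 x}$.)</p>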
|
2,856,373 | <blockquote>
<p>If <span class="math-container">$z_{1},z_{2}$</span> are two complex numbers and <span class="math-container">$c>0.$</span> Then prove that</p>
<p><span class="math-container">$\displaystyle |z_{1}+z_{2}|^2\leq (1+c)|z_{1}|^2+\bigg(1+\frac{1}{c}\bigg)|z_{2}|^2$</span></p>
</blockquote>
<p>Try: put <span class="math-container">$z_{1}=x_{1}+iy_{1}$</span> and <span class="math-container">$z_{2}=x_{2}+iy_{2}.$</span> Then
from left side</p>
<p><span class="math-container">$$(x_{1}+x_{2})^2+(y_{1}+y_{2})^2=x^2_{1}+x^2_{2}+2x_{1}x_{2}+y^2_{1}+y^2_{2}+2y_{1}y_{2}$$</span></p>
<p>Could some help me how to solve it further, Thanks in Advance.</p>
| Kavi Rama Murthy | 142,385 | <p>Let $a=|z_1|,b=|z_2|$. Then $2ab=2(a\sqrt c) \frac b {\sqrt c} \leq ca^{2}+\frac {b^{2}} c$ (because $2ts \leq t^{2}+s^{2}$ for any two real numbers $t$ and $s$). Hence $|z_1+z_2|^{2} \leq (a+b)^{2}=a^{2}+b^{2}+2ab\leq (1+c)a^{2}+(1+\frac 1 c )b^{2}$.</p>
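<p>A randomized spot-check of the inequality (a hypothetical sketch, not part of the proof):</p>

```python
import random

random.seed(0)
violations = []
for _ in range(1000):
    z1 = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    z2 = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    c = random.uniform(0.01, 10)
    lhs = abs(z1 + z2) ** 2
    rhs = (1 + c) * abs(z1) ** 2 + (1 + 1 / c) * abs(z2) ** 2
    if lhs > rhs + 1e-9:
        violations.append((z1, z2, c))

print(len(violations))  # 0
```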
|
2,872,492 | <p>My work starts with a supposition of $N$, so that for $n > N$ we have $\vert b \vert ^n < \epsilon$.</p>
<p>Since $0 < \vert b \vert < 1$, the logarithm with base $\vert b \vert$ is a decreasing function, so taking it on both sides reverses the inequality.
$$\vert b \vert ^n < \epsilon $$
$$n > \text{log}_{\vert b \vert}\epsilon $$</p>
<p>Done the scrapwork, lets start the formal proof.
Let $\epsilon > 0$. Let $N = \text{log}_{\vert b \vert}\epsilon$. If $n > N$ and $0 < \vert b \vert< 1$, then
$$\vert b \vert ^n < \vert b \vert ^{\text{log}_{\vert b \vert}\epsilon} = \epsilon$$
Hence by definition, $\text{lim} \ b^n=0$ </p>
<p>Have I done everything correctly? I am using in an implicit manner that $\vert b^n \vert = \vert b \vert ^n$, is everything fine with this? (Something is really pinching me up!)</p>
| marty cohen | 13,079 | <p>You can make janmarqz's result
more explicit.</p>
<p>Since
$|b^n| = |b|^n$,
we can assume that
$0 < b < 1$.</p>
<p>Then
$b = \dfrac1{1+a}$
with $a > 0$,
so that,
from Bernoulli's inequality,
$(1+a)^n \ge 1+an
$.</p>
<p>Therefore,
since
$a = \dfrac1{b}-1$,</p>
<p>$\begin{array}\\
b^n
&=\dfrac1{(1+a)^n}\\
&\le \dfrac1{1+na}\\
&\lt \dfrac1{na}\\
&=\dfrac1{n(\frac1{b}-1)}\\
\end{array}
$</p>
<p>To make
$b^n < \epsilon$,
it is enough if
$\epsilon > \dfrac1{n(\frac1{b}-1)}$
or
$n > \dfrac1{\epsilon(\frac1{b}-1)}$.</p>
<p>This is,
of course,
far worse than using the log,
but it is
completely elementary.</p>
<p>This is also not original with me.
I first saw this in
"What is Mathematics?"
by Courant and Robbins,
which I highly recommend
and is available in paperback
for less than $20.</p>
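<p>The resulting bound is easy to spot-check numerically (a sketch):</p>

```python
# b^n < 1/(n(1/b - 1)) for 0 < b < 1, by the Bernoulli estimate above
for b in [0.1, 0.5, 0.9, 0.99]:
    for n in [1, 10, 100, 1000]:
        assert b ** n < 1 / (n * (1 / b - 1))
```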
|
1,957,084 | <p>Where $n$ is any positive integer.</p>
<p>I'm honestly completely at a loss at how to prove this.<br>
Tested by brute forcing it up to large numbers, and it keeps increasing, although very slowly.</p>
<p>This is actually part of a bigger problem containing harmonic numbers, but I've solved the rest, so that's why I don't really have much of an idea on how to approach this.<br>
I've tried integrals to find something that would be smaller than the sum, yet clearly bigger than $0.5$, but, again, I can't really prove it.</p>
| user133281 | 133,281 | <p>$$\sum_{i=2^n+1}^{2^{n+1}} \frac1i \geq \sum_{i=2^n+1}^{2^{n+1}} \frac{1}{2^{n+1}} = \frac12.$$ </p>
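<p>Numerically, the first few blocks bear this out (a quick sketch; the $n$-th block has $2^n$ terms, each at least $1/2^{n+1}$):</p>

```python
sums = [sum(1 / i for i in range(2 ** n + 1, 2 ** (n + 1) + 1))
        for n in range(8)]
print([round(s, 4) for s in sums])  # every entry is at least 0.5
assert all(s >= 0.5 for s in sums)
```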
|
113,725 | <p>Is there a closed form for $\prod_{1 \leq i < j \leq k} (j - i)$? It looks like something like a determinant of a Vandermonde matrix, but I can't seem to get it to fit.</p>
| Brett Frankel | 22,405 | <p>Indeed, the square of this quantity is the discriminant of the polynomial whose roots are the integers from 1 to $k$, so your observation that this is the determinant of a Vandermonde matrix is correct.
None of the below are closed forms, but here are two alternative formulas that may (or may not) be helpful: $$\prod_{1\leq i < j \leq k}(j-i)=\prod_{n=1}^{k-1} n!=\prod_{n=1}^{k-1}n^{k-n}$$</p>
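<p>A finite check of the two alternative formulas (a sketch):</p>

```python
from math import factorial, prod

# verify the product-of-differences identity for small k
for k in range(2, 9):
    lhs = prod(j - i for j in range(1, k + 1) for i in range(1, j))
    assert lhs == prod(factorial(n) for n in range(1, k))
    assert lhs == prod(n ** (k - n) for n in range(1, k))
```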
|
443,736 | <p>Find the point on the parabola $3x^2+4x-8$ that is closest to the point $(-2,-3)$.</p>
<p>My plan for this problem was to use the distance formula and then take the derivative to get my answer. I'm having a little trouble along the way.</p>
<p>$$ d = \sqrt{(x_1-x_2)^2+(y_1-y_2)^2}.$$</p>
| Gamma Function | 54,750 | <p>$$ d = \sqrt{(x - (-2))^2 + ((3x^2+4x-8) - (-3))^2} = \sqrt{\left(x+2\right) ^2 + \left(3x^2+4x-5 \right)^2}$$</p>
<p>Instead of minimizing $d$, lets minimize $d^2$. It should be noted that the minimum of $d$ and $d^2$ are exactly the same as $d \geq 0$. This gets rid of the square root, thereby simplifying the problem.</p>
<p>$$d^2 = (x+2)^2 + (3x^2+4x-5)^2.$$</p>
<p>Now, we can use calculus to minimize $d^2$,</p>
<p>$$(d^2)' = 2(x+2) + 2(3x^2+4x-5)(6x+4) = 36x^3 + 72x^2 - 26x - 36$$</p>
<p>This has roots at $x \approx -2.118, -.631, .749$. We know that one of these must be the absolute minimum, so evaluate $d^2$ for each $x$-value and whichever is smallest is the $x$-value of the point on the parabola nearest to the given point. You should then plug that $x$-value into the equation for the parabola to determine the $y$-value where this occurs. I got,</p>
<blockquote class="spoiler">
<p> Ans: $ (-2.118, -3.014) $</p>
</blockquote>
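<p>A coarse grid search corroborates the value in the spoiler (a sketch; not a substitute for the calculus):</p>

```python
def d2(x):
    # squared distance from (x, 3x^2 + 4x - 8) to (-2, -3)
    return (x + 2) ** 2 + (3 * x * x + 4 * x - 5) ** 2

# scan [-4, 2] in steps of 1e-4 and keep the minimizer
best = min((-4 + i * 1e-4 for i in range(60001)), key=d2)
point = (best, 3 * best ** 2 + 4 * best - 8)
print(point)  # roughly (-2.118, -3.014)
```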
|
<p>I realize the probabilities of the following two events are equal. I am curious: is there a reason, besides coincidence, that the probabilities are equal?</p>
<p>Suppose there are five balls in a bucket. 3 of the balls are labelled A, and 2 of the balls are labelled B. There is no way to distinguish between balls labelled A. There is no way to distinguish between balls labelled B.</p>
<p>Suppose I draw balls at random, without replacement. The event $\{AAABB\}$ means I pick 3 balls labelled A, then 2 balls labelled B (in that exact order). Then,</p>
<p>$$ P(\{AAABB\}) = \frac{3}{5} \times \frac{2}{4} \times \frac{1}{3} \times 1 = 0.1 $$</p>
<p>Also,</p>
<p>$$ P(\{AABAB\}) = \frac{3}{5} \times \frac{2}{4} \times \frac{2}{3} \times \frac{1}{2} \times 1 = 0.1$$</p>
<p>As you can see, the probability of the events $\{AAABB\}$ and $\{AABAB\}$ are exactly the same. I have seen the claim that any possible order of 3 A's and 2 B's is the same. Why is this true (if it indeed true)? If the claim is true, then I don't have to multiply out individually for every conceivable event.</p>
<p>Thanks.</p>
| Aryabhata | 1,102 | <p>The claim indeed seems true.</p>
<p>Here is one way to see this.</p>
<p>Consider the first probability</p>
<p>$P(\{AAABB\}) = \frac{3}{5} \times \frac{2}{4} \times \frac{1}{3} \times 1 = 0.1$</p>
<p>This can actually be written as</p>
<p>$P(\{AAABB\}) = \frac{3}{5} \times \frac{2}{4} \times \frac{1}{3} \times \frac{2}{2} \times \frac{1}{1} = 0.1$</p>
<p>Now notice the denominators: This will always be $5 \times 4 \times 3 \times 2 \times 1$.</p>
<p>Now the numerator will just be $3 \times 2 \times 1 \times 2 \times 1$, in some order.</p>
<p>The $3$ will appear when you pick the first A, one of the $2$ will appear when you pick the second A or first B etc.</p>
<p>Thus the probability is $\frac{12}{120} = 0.1$.</p>
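<p>Brute-force enumeration over the $5!$ equally likely draw orders confirms this (a sketch treating the balls as distinguishable):</p>

```python
from itertools import permutations

balls = ['A', 'A', 'A', 'B', 'B']   # positions distinguish equal labels
orders = [''.join(p) for p in permutations(balls)]  # 120 draw orders
probs = {seq: orders.count(seq) / len(orders) for seq in set(orders)}
print(sorted(probs.values()))  # ten patterns, each with probability 0.1
```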
|
172,119 | <p>For a random graph G(n,p) what is the expected number of connected components? What is the probability distribution of this value?
I'm especially interested in what happens for small values of p, before the giant component emerges.</p>
| ericf | 39,187 | <p>In case this isn't obvious, a simple insight used in many papers is that for small p, the network is (whp) simply a random tree, which allows for simple estimates of these and many other properties.</p>
<p>Added in response to comments:
oops, I should have said "tree-like" and one can use "tree counting" techniques to analyze the structure of the connected components. This is discussed in the classic paper by Newman, Strogatz and Watts -- <a href="http://arxiv.org/pdf/cond-mat/0007235v2.pdf" rel="nofollow">http://arxiv.org/pdf/cond-mat/0007235v2.pdf</a></p>
|
172,119 | <p>For a random graph G(n,p) what is the expected number of connected components? What is the probability distribution of this value?
I'm especially interested in what happens for small values of p, before the giant component emerges.</p>
| David White | 11,540 | <p>Have a look at Chapter 11 of Alon and Spencer's <em>The Probabilistic Method</em>. They focus entirely on the case $p = \Theta(1/n)$, and the smallest p they consider is of the form $c/n$, which perhaps counts as "small p" as in your question. In 11.6, they study the smallest case $c < 1$ in detail (they refer to this case as Very Subcritical). The results are always asymptotic as $n\to \infty$. They prove that the size of a component is modeled well by a Poisson Branching Process, so 11.4.2 gives you the probability distribution of sizes of components. Now it's a simple exercise to find the expected number of components: do a branching process to fill up the first component, then do one for the second, etc. until you run out of vertices. Write down how many components you ended up with.</p>
|
172,119 | <p>For a random graph G(n,p) what is the expected number of connected components? What is the probability distribution of this value?
I'm especially interested in what happens for small values of p, before the giant component emerges.</p>
| Gruddium Fretzke | 159,884 | <p>For <span class="math-container">$p= \frac c N$</span>, the mean number of components per node is <span class="math-container">$(1-s_0) (1- \frac c 2 (1-s_0))$</span>, where <span class="math-container">$s_0$</span> is the fraction of nodes in the largest component. (So, trivially, in the non-percolating phase for <span class="math-container">$c<1$</span>, where <span class="math-container">$s_0=0$</span>, one obtains <span class="math-container">$1- \frac c 2$</span>, which corresponds to the simple argument of having <span class="math-container">$N$</span> nodes and <span class="math-container">$M=\frac {N(N-1)} 2 \frac c N= \frac {c(N-1)} 2 \approx \frac {cN} 2$</span> edges, i.e. in the loop-less case, about <span class="math-container">$N-M=N(1- \frac c 2)$</span> components.)</p>
<p>The result, including the full distribution, also in the percolating phase, can be found (obtained by statistical mechanics methods and simulations) in J. Stat. Phys. 117, 387 (2004), see also <a href="https://arxiv.org/abs/cond-mat/0311535" rel="nofollow noreferrer">https://arxiv.org/abs/cond-mat/0311535</a>.</p>
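<p>A small Monte-Carlo check of the subcritical formula (a sketch with arbitrarily chosen parameters; <code>find</code> is a standard union–find helper):</p>

```python
import random

random.seed(1)
n, c = 2000, 0.5            # hypothetical subcritical parameters, c < 1
parent = list(range(n))

def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

p = c / n
for i in range(n):          # sample G(n, p) edge by edge
    for j in range(i + 1, n):
        if random.random() < p:
            parent[find(i)] = find(j)

components = len({find(x) for x in range(n)})
print(components / n)       # close to 1 - c/2 = 0.75
```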
|
1,645,130 | <p>Is there any known explicit bijection between these two sets? </p>
<p>I know it can be proved that such bijection exists using two injections and Schröder–Bernstein theorem, but I wanted to know whether some explicit bijection is known. I failed to find any except ones constructed awkwardly from the Schröder–Bernstein theorem.</p>
| Asaf Karagila | 622 | <p>You can do this by steps:</p>
<ol>
<li><p>There are reasonably nice bijections between <span class="math-container">$[0,1]$</span> and <span class="math-container">$(0,1)$</span>, and between <span class="math-container">$(0,1)$</span> and <span class="math-container">$\Bbb R$</span>. So there is a reasonably nice bijection between <span class="math-container">$[0,1]$</span> and <span class="math-container">$\Bbb R$</span>.</p>
</li>
<li><p>There is an almost bijection between <span class="math-container">$\mathcal P(\Bbb N)$</span> and <span class="math-container">$[0,1]$</span> using binary expansions: <span class="math-container">$f(A)=\sum_{n\in A}2^{-n}$</span> (for simplicity, let's take <span class="math-container">$0\notin\Bbb N$</span>, otherwise take <span class="math-container">$2^{-n-1}$</span> in the sum).</p>
<p>The injectivity fails because finite and co-finite sets might have the same output. Namely, <span class="math-container">$\frac12=0.10000\ldots=0.011111\ldots$</span>.</p>
</li>
<li><p>Fix the situation: there is a reasonable bijection between the sets <span class="math-container">$\{A\subseteq\Bbb N\mid\Bbb N\setminus A\text{ is finite}\}$</span> and <span class="math-container">$\{A\subseteq\Bbb N\mid A\text{ or }\Bbb N\setminus A\text{ is finite}\}$</span>.</p>
</li>
</ol>
<p>Now we combine all these together.</p>
<p>Given a set <span class="math-container">$A$</span>, if it is finite or co-finite, map it to a co-finite set using the reasonably looking bijection from part 3. Then map it to a real number in <span class="math-container">$[0,1]$</span> using binary expansion as in part 2. Then map this to a real number in <span class="math-container">$\Bbb R$</span> by passing through <span class="math-container">$(0,1)$</span>.</p>
<p>It ain't pretty. But it's explicit, as all the maps can be written explicitly. The trickiest is the third part, of course, but it's not hard by noting that you can encode "finite" and "co-finite" using <span class="math-container">$0$</span> being in the finite set, and then work accordingly. Namely,</p>
<p><span class="math-container">$$F(A)=\begin{cases}\{n-1\mid n\in A\setminus\{0\}\} & 0\in A\\ \{n\mid n\notin A\} & 0\notin A\end{cases}$$</span></p>
|
2,232,952 | <p>Could you please help me solve these questions?</p>
<p><a href="https://i.stack.imgur.com/iRgCE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iRgCE.png" alt="enter image description here" /></a></p>
<p>Consider the bases S={<span class="math-container">$u_1, u_2, u_3$</span>}, and T={<span class="math-container">$v_1, v_2, v_3$</span>}, with <br/></p>
<p><span class="math-container">$u_1$</span>=[-3, 0, -3], <span class="math-container">$u_2$</span>=[-3, 2, -1], <span class="math-container">$u_3$</span>=[1, 6, -1], <br/>
<span class="math-container">$v_1$</span>=[-6, -6, 0],
<span class="math-container">$v_2$</span>=[-2, -6, 4], <span class="math-container">$v_3$</span>=[-2, -3, 7] <br/>
(a) Find the transition matrix from <span class="math-container">$S$</span> to <span class="math-container">$T$</span> <br/>
(b) Using the result in (a), compute the coordinate vector <span class="math-container">$[w]_T$</span> where <span class="math-container">$w$</span>=[-5, 8, -5]</p>
| Emilio Novati | 187,568 | <p>Hint:</p>
<p>If $U$ is the transition matrix from the standard basis to the basis $S$ (i.e. the matrix that has as columns the vectors $u_i$), and $V$ is the transition matrix from the standard basis to the basis $T$, then the transition matrix from $S$ to $T$ is given by the matrix $V^{-1}U$. </p>
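<p>Applied to the vectors in the question, a sketch in Python using exact rational arithmetic (<code>solve</code> is a small Gauss–Jordan helper of my own):</p>

```python
from fractions import Fraction

U = [[-3, -3, 1], [0, 2, 6], [-3, -1, -1]]   # columns are u1, u2, u3
V = [[-6, -2, -2], [-6, -6, -3], [0, 4, 7]]  # columns are v1, v2, v3

def solve(A, b):
    """Solve the 3x3 system A x = b over the rationals."""
    M = [[Fraction(A[i][j]) for j in range(3)] + [Fraction(b[i])]
         for i in range(3)]
    for col in range(3):
        piv = next(r for r in range(col, 3) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [x / M[col][col] for x in M[col]]
        for r in range(3):
            if r != col:
                M[r] = [M[r][k] - M[r][col] * M[col][k] for k in range(4)]
    return [M[r][3] for r in range(3)]

# columns of the transition matrix from S to T are V^{-1} u_i
P_cols = [solve(V, [U[r][k] for r in range(3)]) for k in range(3)]
w_T = solve(V, [-5, 8, -5])  # the coordinate vector [w]_T
print(w_T)
```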
|
142,819 | <p>I am currently studying Serge Lang's book "Algebra", on page 25 it is proved that if $G$ is a cyclic group of order $n$, and if $d$ is a divisor of $n$, then there exists a unique subgroup $H$ of $G$ of order $d$.</p>
<p>I have trouble seeing why the proof (as explained below) settles the uniqueness part.</p>
<p>The proof (as I understand it) goes as follows: </p>
<p>First we show existence of the subgroup $H$, given any choice of a divisor $d$ of $n$. </p>
<p>So suppose $n = dm$. Obviously, one can construct a surjective homomorphism $f : \mathbb{Z} \to G$, and it is also clear that $f(m\mathbb{Z}) \subset G$ is a subgroup of $G$. The resulting isomorphism $\mathbb{Z}/m\mathbb{Z} \cong G/f(m\mathbb{Z})$ leads us to conclude that the index of $f(m\mathbb{Z})$ in $G$ is $m$ and so the order of $f(m\mathbb{Z})$ must be $d$.</p>
<p>Ok, so we have shown that a subgroup having order $d$ exists.</p>
<p>The second part is then to show uniqueness - and here is where I am lost as I don't understand why the following argument serves this end:</p>
<p>Suppose $H$ is any subgroup of order $d$. Looking at the inverse image of $f^{-1}(H)$ in $\mathbb{Z}$ we know it must be of the form $k\mathbb{Z}$ for some positive integer $k$ (since all non - trivial subgroups in $\mathbb{Z}$ can be written in this form). Now $H = f(k\mathbb{Z})$ has order $d$, and $\mathbb{Z}/k\mathbb{Z} \cong G/H$, where the group on the right hand side has order $n/d = m$. From this isomorphism we can therefore conclude that $k = m$. Here Lang ends by saying ".. and H is uniquely determined".</p>
<p>But why is this ? Does he mean uniquely determined up to isomorphism ? Because, what I think I have shown is that any subgroup of order $d$ must be isomorphic to $m\mathbb{Z}$ - yet this gives me uniqueness only up to isomorphism.. what am I missing ?</p>
<p>Thanks for your help! </p>
| D_S | 28,556 | <p>Another way is to use the fact that a cyclic group is the same thing as a quotient of $\mathbf{Z}$. If $G$ is a cyclic group, let $a$ be a generator, and define a homomorphism $\mathbf{Z} \rightarrow G$ by $n \mapsto a^n$. This is surjective, so the first isomorphism applies. Now you can classify all subgroups of $G$ like this:</p>
<p><strong>Lemma 1:</strong> Every subgroup of $\mathbf{Z}$ is of the form $n\mathbf{Z} = \{na : a \in \mathbf{Z}\}$ for a unique $n \geq 0$.</p>
<p>Proof: For different $n$, the subgroups $n\mathbf{Z}$ are clearly distinct, since for $n \geq 1$, $n$ is the smallest positive element of $n\mathbf{Z}$. Let $H$ be any subgroup of $\mathbf{Z}$. If $H = 0$, then $H = 0 \mathbf{Z}$. Otherwise, there is a nonzero, hence a positive element of $H$. Let $n$ be the smallest one. Since $H$ is a subgroup, $n\mathbf{Z} \subseteq H$. Conversely, let $x \in H$. By the division algorithm, there exist $q, r \in \mathbf{Z}$, with $0 \leq r \leq n-1$, such that </p>
<p>$$x = qn + r$$</p>
<p>Then $r = x - qn \in H$, so $r$ must be zero: otherwise, $n$ would not be the smallest positive element of $H$. This shows that $x \in n\mathbf{Z}$.</p>
<p><strong>Lemma 2</strong>: Let $n\mathbf{Z}, m \mathbf{Z}$ be subgroups of $\mathbf{Z}$. Then $n\mathbf{Z} \subseteq m \mathbf{Z}$ if and only if $m$ divides $n$.</p>
<p><strong>Lemma 3</strong>: Let $G$ be a group, $N$ a normal subgroup. If $H$ is a subgroup of $G$ containing $N$, then $H/N = \{ hN : h \in H \}$ is a subgroup of $G/N$, and $H \mapsto H/N$ gives a bijection between the set of subgroups of $G$ containing $N$, and the set of all subgroups of $G/N$.</p>
<p>Now for $G = \mathbf{Z}/n\mathbf{Z}$, Lemma 3 says that the subgroups of $G$ are of the form $H/n\mathbf{Z}$ for $H$ a subgroup of $\mathbf{Z}$ containing $n\mathbf{Z}$. But by Lemmas 1,2 the only subgroups of $\mathbf{Z}$ containing $n\mathbf{Z}$ are of the form $m \mathbf{Z}$, where $m$ divides $n$. Thus the subgroups of $G$ are in one to one correspondence with the positive divisors of $n$.</p>
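<p>A finite sanity check of the classification, for the cyclic group $\mathbf{Z}/12\mathbf{Z}$ (a sketch):</p>

```python
n = 12
# every subgroup of Z/n is cyclic, so listing <a> for all a finds them all
subgroups = {frozenset(a * k % n for k in range(n)) for a in range(n)}
divisors = [d for d in range(1, n + 1) if n % d == 0]

assert len(subgroups) == len(divisors)          # one subgroup per divisor
assert sorted(len(H) for H in subgroups) == divisors
```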
|
4,504,768 | <p>We have to prove that
<span class="math-container">$$
\lim_{(x,y)\to(0,0)} \frac{x^3+y^4}{x^2+y^2} =0
$$</span></p>
<p>Kindly check if my proof below is correct.</p>
<p><b>Proof</b></p>
<p>We need to show there exists <span class="math-container">$\delta>0$</span> for an <span class="math-container">$\varepsilon>0$</span> such that
<span class="math-container">$$
\left| \frac{x^3+y^4}{x^2+y^2} \right| < \varepsilon \implies \sqrt{x^2+y^2}< \delta
$$</span></p>
<p>Start
<span class="math-container">$$
\left| \frac{x^3}{x^2+y^2} \right| <\left| \frac{x^3+y^4}{x^2+y^2} \right| < \varepsilon
$$</span></p>
<p>Note
<span class="math-container">$$
\left| \frac{x^3}{x^2+y^2} \right|= \frac{x^2|x|}{x^2+y^2}>\frac{(x^2-y^2)|x|}{x^2+y^2}
$$</span></p>
<p>Therefore
<span class="math-container">$$
\frac{x^2-y^2}{x^2+y^2}|x|<\varepsilon \tag{1}
$$</span></p>
<p>Note
<span class="math-container">$$
\frac{x^2-y^2}{x^2+y^2}|x|<|x|=\sqrt{x^2}<\sqrt{x^2+y^2}<\delta
$$</span></p>
<p>So
<span class="math-container">$$
\frac{x^2-y^2}{x^2+y^2}|x|<\delta \tag{2}
$$</span></p>
<p>From <span class="math-container">$(1)$</span> and <span class="math-container">$(2)$</span>, we can say
<span class="math-container">$$
\delta=\varepsilon
$$</span></p>
| David Sheard | 373,149 | <p><strong>The more general statement is not true</strong>, even if you assume that <span class="math-container">$T$</span> is a set of elements of order <span class="math-container">$2$</span> like transpositions. For an explicit counter example, consider the dihedral group <span class="math-container">$D_8$</span> of order 16 which has presentation <span class="math-container">$$G=D_8=\langle s_1,s_2\mid s_1^2,\,s_2^2,\,(s_1s_2)^8\rangle.$$</span> Take as the set <span class="math-container">$T$</span> the set of all conjugates of <span class="math-container">$s_1$</span> and <span class="math-container">$s_2$</span> which all have order <span class="math-container">$2$</span>. Thinking of <span class="math-container">$D_8$</span> as the group of symmetries of a regular octagon, the elements of <span class="math-container">$T$</span> are all of the reflectional symmetries.</p>
<p>Let <span class="math-container">$U_1$</span> be the subgroup of <span class="math-container">$G$</span> generated by <span class="math-container">$\{s_1,s_2s_1s_2\}\subset T$</span> which is isomorphic to the dihedral group <span class="math-container">$D_4$</span>. Similarly, let <span class="math-container">$U_2$</span> be the subgroup of <span class="math-container">$G$</span> generated by <span class="math-container">$\{s_2,s_1s_2s_1\}\subset T$</span> which is also isomorphic to <span class="math-container">$D_4$</span>. However you can check that <span class="math-container">$U_1\cap U_2$</span> is isomorphic to the cyclic group <span class="math-container">$\mathbb{Z}_4$</span> and consists of the rotations by multiples of <span class="math-container">$\pi/2$</span>. Indeed <span class="math-container">$U_1\cap U_2\cap T=\emptyset$</span>.</p>
<p><a href="https://i.stack.imgur.com/2ZiBC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2ZiBC.png" alt="enter image description here" /></a></p>
<p>In fact this example is even closer to the case of <span class="math-container">$S_n$</span> than just the fact that <span class="math-container">$T$</span> consists of order <span class="math-container">$2$</span> elements. Both <span class="math-container">$S_n$</span> and <span class="math-container">$D_n$</span> are examples of Coxeter groups - groups generated by reflections. The set of transpositions in <span class="math-container">$S_n$</span> are the reflections in that case, and the set <span class="math-container">$T$</span> in my example above is the set of reflections for <span class="math-container">$D_n$</span>.</p>
<p><strong>Answer to the main question:</strong>
In this broader setting of Coxeter groups (or more accurately Coxeter systems) it is possible to justify the first statement in the case that <span class="math-container">$T$</span> is the set of transpositions of the form <span class="math-container">$(i\;i+1)$</span>. With this particular choice of <span class="math-container">$T$</span>, the pair <span class="math-container">$(S_n, T)$</span> is a Coxeter system. Writing <span class="math-container">$t_i=(i\;i+1)$</span> for <span class="math-container">$1\le i\le n-1$</span> then <span class="math-container">$S_n$</span> has the presentation <span class="math-container">$$\langle t_1,\dots, t_{n-1}\mid t_i^2\;\textrm{for}\; 1\le i\le n-1;\; (t_it_{i+1})^3\;\textrm{for}\; 1\le i\le n-2;\;\textrm{and} \;(t_it_j)^2\;\textrm{for}\; 1\le i\le n-1\;\textrm{such that}\;|i-j|\ge2\rangle.$$</span>
(Compare the form of this presentation to the one above for <span class="math-container">$D_n$</span>)</p>
<p>More generally a Coxeter system is a pair <span class="math-container">$(W,S)$</span> where <span class="math-container">$W$</span> is a group, <span class="math-container">$S=\{s_1,\dots,s_n\}\subset W$</span> generates <span class="math-container">$W$</span>, and <span class="math-container">$W$</span> has a presentation of the form <span class="math-container">$$\langle s_1,\dots,s_n\mid s_i^2 \;\textrm{for}\;1\le i\le n;\;(s_is_j)^{m_{ij}}\;\textrm{for}\;1\le i\neq j\le n\rangle,$$</span> for suitable chosen integers <span class="math-container">$m_{ij}$</span>.</p>
<p>It is a basic result in the theory of Coxeter groups that if <span class="math-container">$S'\subset S$</span> then <span class="math-container">$\langle S'\rangle$</span> is a Coxeter group with Coxeter system <span class="math-container">$(\langle S'\rangle,S')$</span>.</p>
<p>Applying that in the case of <span class="math-container">$(S_n,T)$</span> then there are subsets <span class="math-container">$T_1,T_2\subset T$</span> such that <span class="math-container">$U_i=\langle T_i\rangle$</span>, and then <span class="math-container">$U=U_1\cap U_2=\langle T_1\cap T_2\rangle$</span> is generated by <span class="math-container">$U\cap T$</span>.</p>
<p>For an introduction to Coxeter groups see <a href="https://core.ac.uk/download/pdf/301663425.pdf" rel="nofollow noreferrer">this set of notes</a>, and the general theorem I apply is given in Theorem 5.2 there.</p>
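<p>A small computational check of the theorem in $S_4$ with $T=\{t_1,t_2,t_3\}$ (a sketch; <code>closure</code> is a naive subgroup generator of my own):</p>

```python
def compose(p, q):
    """(p o q)(i) = p[q[i]], with permutations as tuples of 0..3."""
    return tuple(p[i] for i in q)

def closure(gens):
    """The subgroup generated by gens, by breadth-first closure."""
    elems, frontier = {(0, 1, 2, 3)}, [(0, 1, 2, 3)]
    while frontier:
        g = frontier.pop()
        for s in gens:
            h = compose(s, g)
            if h not in elems:
                elems.add(h)
                frontier.append(h)
    return elems

t1, t2, t3 = (1, 0, 2, 3), (0, 2, 1, 3), (0, 1, 3, 2)  # (12), (23), (34)
U1 = closure([t1, t2])           # <t1, t2>, isomorphic to S_3
U2 = closure([t2, t3])           # <t2, t3>, isomorphic to S_3
assert U1 & U2 == closure([t2])  # the intersection is generated by T1 ∩ T2
```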
|
637,897 | <p>I have a 2-Sphere with a finite number $k$ of points removed (at least 3), and I want to equip it with a Riemannian metric of constant negative curvature. </p>
<p>My first thought was to take a free subgroup $A_k$ of rank $k$ of the isometries of the hyperbolic plane $H$, which acts fix point free and discontinuous on $H$ (such a group exists).</p>
<p>If I take the quotient $H/A_k$, I end up with a complete manifold of constant negative curvature, with isomorphic fundamental group to the punctured Sphere, but are those two manifolds homeomorphic? Maybe this is a terribly clumsy way to try to accomplish this task.</p>
| Andrea Mori | 688 | <p>I think we are saying the same thing but in a slightly different order. Isn't it true that a sphere with at least 3 points removed is homeomorphic to an open set in a compact Riemann surface of genus $\geq2$?</p>
<p>If so, the latter can be endowed with a constant negative curvature because it is uniformized by the hyperbolic plane.</p>
|
452,653 | <p>If $f:X\rightarrow Y$ is initial in category <strong>Top</strong> then
it is easy to prove that </p>
<blockquote>
<p>(!) the topology on $X$ is the set of preimages of open sets in $Y$. </p>
</blockquote>
<p>Just construct a space $Z$ with the same underlying set as $X$, and let the set of these preimages serve as its topology. Then for $g:Z\rightarrow X$ with $x\mapsto x$ it is clear that $fg$ is continuous, so by initiality $g$ is continuous. Then we are done.
But now my question: </p>
<blockquote>
<p>what if we do not work in $\textbf{Top}$ but in category $\textbf{Haus}$?</p>
</blockquote>
<p>The constructed space $Z$ does not have to be Hausdorff (or am I overlooking something here?), and the initiality of $f$ in $\textbf{Haus}$ would only justify the conclusion that $g$ is an arrow in $\textbf{Haus}$ if $Z$ were itself an object of $\textbf{Haus}$. </p>
<blockquote>
<p>Is there a way out? Or - even stronger - is statement (!) not true in $\textbf{Haus}$?</p>
</blockquote>
| user37238 | 87,392 | <p>In fact, here your series converges absolutely since we know that
$$ \sum_{n=2}^{+\infty} \frac{1}{n^\alpha (\ln n)^{\beta}}<+\infty \Leftrightarrow \left[(\alpha>1)\text{ or }(\alpha=1\text{ and }\beta >1)\right]$$
You can prove this with the integral test, and conclude since absolute convergence implies convergence.</p>
|
3,849,311 | <blockquote>
<p>What is the remainder when dividing the polynomial
<span class="math-container">$$P(x)=x^n+x^{n-1}+\cdots+x+1$$</span> with the polynomial
<span class="math-container">$$x^3-x$$</span> if <span class="math-container">$n$</span> is a natural odd number?</p>
</blockquote>
<p>So, what I know so far is:</p>
<p><span class="math-container">$$P(x)=Q(x)D(x)+R(x)$$</span></p>
<p>In this case I'll call <span class="math-container">$Q(x) = x^3-x$</span></p>
<p><span class="math-container">$$Q(x) = 0 \iff x=\pm1$$</span></p>
<p>So from here:</p>
<p><span class="math-container">$\begin{array} {l}P(1):\qquad&1^n+1^{n-1}+\cdots+1^1+1=R(1)\\P(-1):&(-1)^n+(-1)^{n-1}+\cdots+(-1)^1+1=R(-1)\end{array}$</span></p>
<p>Here is where I assumed that <span class="math-container">$R(x)=ax^2+b$</span> since <span class="math-container">$Q(x)=x^3-x$</span></p>
<p>And from the equations (assuming <span class="math-container">$n = \{2k+1 \mid k\in\mathbb{N}\}$</span>):</p>
<p><span class="math-container">$\begin{array}{l}P(1):\\&R(1)=n\\P(-1):\\&R(-1)=0\end{array}$</span></p>
<p>And here I get to:</p>
<p><span class="math-container">$$\left\{
\begin{array}{ll}
ax^2+by &=n \\
ax^2+by &=0 \\
\end{array}
\right.$$</span></p>
<p>I would like to know where did I go wrong? Is my assumption for <span class="math-container">$R(x)$</span> incorrect, my calculation of the <span class="math-container">$P(1)$</span> and <span class="math-container">$P(-1)$</span> wrong or is there something else I didn't think about?</p>
| Bernard | 202,857 | <p><strong>Hint</strong>:</p>
<p>Experimenting with the first few values of <span class="math-container">$n$</span>: <span class="math-container">$\:n= 3,5,7,9,11$</span>, you may conjecture that the remainder for the general polynomial <span class="math-container">$P_n(x)=x^n+x^{n-1}+\dots+x^3+x^2+x+1\:$</span> is
<span class="math-container">$$R_n(x)=kx^2+(k+1)x+1,\quad\text{ where } k=\left\lfloor \frac n2 \right\rfloor.$$</span>
and try to prove it by induction on <span class="math-container">$n$</span> (odd).</p>
<p>To prove the inductive step (case <span class="math-container">$n \implies$</span> case <span class="math-container">$n+2$</span>), you can try rewriting
<span class="math-container">$$ P_{n+2}(x)= x^2 P_n(x)+ (x+1)=x^2\bigl[ Q_n(x)(x^3-x)+R_n(x)\bigr]+(x+1), $$</span>
so the remainder for <span class="math-container">$P_{n+2}$</span> will be as well the remainder for
<span class="math-container">$$x^2 R_n(x)+(x+1)=x^2(kx^2+(k+1)x +1)+(x+1)=kx^4+(k+1)x^3+x^2+x+1.$$</span>
You'll only have to prove that the coefficient of <span class="math-container">$x^2$</span> thus obtained is <span class="math-container">$\left\lfloor\dfrac{n+2}2\right\rfloor$</span> and the coefficient of <span class="math-container">$x$</span> is <span class="math-container">$1$</span> more.</p>
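<p>The conjectured remainder can be brute-force checked (plain Python, my own little long-division helper; coefficient lists run from the highest power down):</p>

```python
def poly_rem(num, den):
    # polynomial long division, returning the remainder;
    # coefficient lists run from the highest power down
    num = list(num)
    while len(num) >= len(den):
        c = num[0] / den[0]
        for i in range(len(den)):
            num[i] -= c * den[i]
        num.pop(0)
    return num

for n in range(3, 20, 2):                        # odd n
    P = [1.0] * (n + 1)                          # x^n + ... + x + 1
    R = poly_rem(P, [1.0, 0.0, -1.0, 0.0])       # divide by x^3 - x
    k = n // 2
    assert R == [k, k + 1, 1]                    # k x^2 + (k+1) x + 1
```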
|
3,849,311 | <blockquote>
<p>What is the remainder when dividing the polynomial
<span class="math-container">$$P(x)=x^n+x^{n-1}+\cdots+x+1$$</span> with the polynomial
<span class="math-container">$$x^3-x$$</span> if <span class="math-container">$n$</span> is a natural odd number?</p>
</blockquote>
<p>So, what I know so far is:</p>
<p><span class="math-container">$$P(x)=Q(x)D(x)+R(x)$$</span></p>
<p>In this case I'll call <span class="math-container">$Q(x) = x^3-x$</span></p>
<p><span class="math-container">$$Q(x) = 0 \iff x=\pm1$$</span></p>
<p>So from here:</p>
<p><span class="math-container">$\begin{array} {l}P(1):\qquad&1^n+1^{n-1}+\cdots+1^1+1=R(1)\\P(-1):&(-1)^n+(-1)^{n-1}+\cdots+(-1)^1+1=R(-1)\end{array}$</span></p>
<p>Here is where I assumed that <span class="math-container">$R(x)=ax^2+b$</span> since <span class="math-container">$Q(x)=x^3-x$</span></p>
<p>And from the equations (assuming <span class="math-container">$n = \{2k+1 \mid k\in\mathbb{N}\}$</span>):</p>
<p><span class="math-container">$\begin{array}{l}P(1):\\&R(1)=n\\P(-1):\\&R(-1)=0\end{array}$</span></p>
<p>And here I get to:</p>
<p><span class="math-container">$$\left\{
\begin{array}{ll}
ax^2+by &=n \\
ax^2+by &=0 \\
\end{array}
\right.$$</span></p>
<p>I would like to know where did I go wrong? Is my assumption for <span class="math-container">$R(x)$</span> incorrect, my calculation of the <span class="math-container">$P(1)$</span> and <span class="math-container">$P(-1)$</span> wrong or is there something else I didn't think about?</p>
| lhf | 589 | <p>Your approach is fine but you've missed the root <span class="math-container">$x=0$</span> of <span class="math-container">$x^3-x$</span>.</p>
<p>We get <span class="math-container">$R(0)=P(0)=1$</span>, <span class="math-container">$R(1)=P(1)=n+1$</span>, <span class="math-container">$R(-1)=P(-1)=0$</span>.</p>
<p>The quadratic polynomial that interpolates this data is <span class="math-container">$\frac{n-1}{2} x^2+\frac{n+1}{2}x+1$</span>.</p>
<p>The same approach works when <span class="math-container">$n$</span> is even (the interpolation data is different because <span class="math-container">$P(-1)=1$</span>) and we get <span class="math-container">$\frac{n}{2} x^2+\frac{n}{2}x+1$</span>.</p>
<p>In both cases, we get the nice recurrence: <span class="math-container">$R_{n+2}=R_n+x^2+x$</span>, with <span class="math-container">$R_0=1$</span> and <span class="math-container">$R_1=x+1$</span>.</p>
|
16,831 | <p>As the title says, I'm wondering if there is a continuous function such that $f$ is nonzero on $[0, 1]$, and for which $\int_0^1 f(x)x^n dx = 0$ for all $n \geq 1$. I am trying to solve a problem proving that if (on $C([0, 1])$) $\int_0^1 f(x)x^n dx = 0$ for all $n \geq 0$, then $f$ must be identically zero. I presume then we do require the $n=0$ case to hold too, otherwise it wouldn't be part of the statement. Is there any function which is not identically zero which satisfies $\int_0^1 f(x)x^n dx = 0$ for all $n \geq 1$?</p>
<p>The statement I am attempting to prove is homework, but this is just idle curiosity (though I will tag it as homework anyway since it is related). Thank you!</p>
| Matthew Towers | 5,316 | <p>Just for fun, here's a proof in non-standard analysis (Nelson-style IST). I write $a \approx b$ to mean that $a-b$ is infinitesimal. </p>
<p>It's enough to prove the result when $f$ is standard. The Weierstrass approximation theorem gives a polynomial $p$ such that $p(x) \approx x f(x)$ for all $x \in [0,1]$. Note $\int _0 ^1 xfp = 0$, because the integral is linear.</p>
<p>Now estimate the absolute value of $\int xfp - (xf)^2$. Factoring the integrand as $xf(x)\cdot ( p(x) - xf(x) )$, we see that this integral is bounded above by $\sup|xf| \cdot \sup|p-xf|$, where the sup is over all $x \in [0,1]$. The first factor is limited, since the sup of a standard continuous function on a standard compact interval is limited, and the second factor is infinitesimal, so the product is infinitesimal.</p>
<p>We therefore have $\int (xf)^2 \approx \int xfp = 0$. Since $\int (xf)^2$ is standard, it is equal to zero. Therefore $(xf)^2$ is identically zero, and so is $f$ (being continuous).</p>
|
16,831 | <p>As the title says, I'm wondering if there is a continuous function such that $f$ is nonzero on $[0, 1]$, and for which $\int_0^1 f(x)x^n dx = 0$ for all $n \geq 1$. I am trying to solve a problem proving that if (on $C([0, 1])$) $\int_0^1 f(x)x^n dx = 0$ for all $n \geq 0$, then $f$ must be identically zero. I presume then we do require the $n=0$ case to hold too, otherwise it wouldn't be part of the statement. Is there any function which is not identically zero which satisfies $\int_0^1 f(x)x^n dx = 0$ for all $n \geq 1$?</p>
<p>The statement I am attempting to prove is homework, but this is just idle curiosity (though I will tag it as homework anyway since it is related). Thank you!</p>
| Hans Lundmark | 1,242 | <p>As an aside, the answer is <em>yes</em> if the interval is $(0,\infty)$ instead of $(0,1)$.
For example the "Stieltjes ghost function"
$f(x) = \exp(-x^{1/4}) \sin x^{1/4}$ satisfies
$\int_0^{\infty} f(x) x^n dx = 0$
for all integers $n \ge 0$. </p>
<p>Stieltjes gave this as an example of a case where the
<a href="http://en.wikipedia.org/wiki/Moment_problem">moment problem</a>
does not have a unique solution.
It appears in Section 55 of his famous paper "Recherches sur les fractions continues" from 1894;
see p. 506 in
<a href="http://www.archive.org/details/oeuvresthomasja02stierich"><em>Œuvres Complètes</em>, Vol. II</a>.
To compute the moments, use the substitution $x=u^4$ to write
$I_n = \int_0^{\infty} f(x) x^n dx = 4 \int_0^{\infty} e^{-u} \sin(u) u^{4n+3} du$;
then integrate by parts four times (differentiating the power of $u$, and integrating the rest) to show that $I_n$ is proportional to $I_{n-1}$,
and finally check that $I_0=0$.</p>
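<p>The vanishing of the first few moments can also be sanity-checked numerically (plain Python, composite Simpson's rule on the substituted integral; the truncation point $u=60$ and the tolerances are ad hoc choices of mine):</p>

```python
import math

def simpson(f, a, b, n):
    # composite Simpson's rule with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def moment(n):
    # I_n = int_0^inf f(x) x^n dx with x = u^4 and the Stieltjes ghost f:
    # I_n = 4 * int_0^inf exp(-u) sin(u) u^(4n+3) du
    g = lambda u: math.exp(-u) * math.sin(u) * u ** (4 * n + 3)
    return 4 * simpson(g, 0.0, 60.0, 60000)

for n in range(3):
    scale = 4 * math.factorial(4 * n + 3)   # same integral without the sin factor
    assert abs(moment(n)) < 1e-8 * scale    # ~ 0 up to quadrature error
```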
|
3,628,277 | <p>I am solving the equation <span class="math-container">$e^z = 1 $</span> in <span class="math-container">$\mathbb{C}$</span>. The book says that, besides <span class="math-container">$z = 0$</span>, every <span class="math-container">$z = 2 \pi k i$</span> for <span class="math-container">$ k \in \mathbb{Z}$</span> is also a solution. It explains this by saying that <span class="math-container">$e^z$</span> is a periodic function, so that</p>
<p><span class="math-container">$1 = e^z = e^{2\pi k i}$</span></p>
<p>However, I want to know how this identity is derived, so that I can solve other cases such as <span class="math-container">$e^z = 2 $</span>.</p>
| Mostafa Ayaz | 518,023 | <p><strong>Hint</strong></p>
<p>By differentiating the equality, we should prove <span class="math-container">$$\frac{1}{(a+b\cos x)^n}={d\over dx}\frac{A\sin x}{(a+b\cos x)^{n-1}}+{B \over (a+b\cos x)^{n-1}}+{C \over (a+b\cos x)^{n-2}}, \hspace{1cm}$$</span> with <span class="math-container">$|a|\neq |b|$</span>. Also <span class="math-container">$${d\over dx}\frac{A\sin x}{(a+b\cos x)^{n-1}}{={A\cos x(a+b\cos x)^{n-1}+(n-1)Ab\sin^2 x(a+b\cos x)^{n-2}\over (a+b\cos x)^{2n-2}
}
\\=
{A\cos x(a+b\cos x)+(n-1)Ab\sin^2 x\over (a+b\cos x)^{n}}
\\={Aa\cos x+Ab\cos^2 x+(n-1)Ab\sin^2 x\over (a+b\cos x)^{n}}
}$$</span>
The task of finding proper values <span class="math-container">$A,B,C$</span> such that <span class="math-container">$$\frac{1}{(a+b\cos x)^n}-{B \over (a+b\cos x)^{n-1}}-{C \over (a+b\cos x)^{n-2}}={Aa\cos x+Ab\cos^2 x+(n-1)Ab\sin^2 x\over (a+b\cos x)^{n}}$$</span> should be easy.</p>
|
3,628,277 | <p>I am solving the equation <span class="math-container">$e^z = 1 $</span> in <span class="math-container">$\mathbb{C}$</span>. The book says that, besides <span class="math-container">$z = 0$</span>, every <span class="math-container">$z = 2 \pi k i$</span> for <span class="math-container">$ k \in \mathbb{Z}$</span> is also a solution. It explains this by saying that <span class="math-container">$e^z$</span> is a periodic function, so that</p>
<p><span class="math-container">$1 = e^z = e^{2\pi k i}$</span></p>
<p>However, I want to know how this identity is derived, so that I can solve other cases such as <span class="math-container">$e^z = 2 $</span>.</p>
| Quanto | 686,284 | <p>Note,</p>
<p><span class="math-container">$$\frac b{\sin x} \left(\frac{\sin^2x}{(a+b\cos x)^{n-1}}\right)'
=\frac{(n-1)(b^2-a^2)}{(a+b\cos x)^n}+\frac{2a(n-2)}{(a+b\cos x)^{n-1}}
-\frac{n-3}{(a+b\cos x)^{n-2}}$$</span></p>
<p>Integrate both sides and denote <span class="math-container">$I_n= \int \frac{dx}{(a+b\cos x)^n}$</span>,</p>
<p><span class="math-container">$$(n-1)(b^2-a^2)I_n + 2a(n-2)I_{n-1} -(n-3)I_{n-2} \\
= \int \frac b{\sin x} d\left(\frac{\sin^2x}{(a+b\cos x)^{n-1}}\right)
= \frac{b\sin x}{(a+b\cos x)^{n-1}}-aI_{n-1} +I_{n-2} $$</span></p>
<p>which leads to</p>
<p><span class="math-container">$$I_n = \frac{A\sin x}{(a+b\cos x)^{n-1}}+BI_{n-1}+CI_{n-2} $$</span></p>
<p>where,</p>
<p><span class="math-container">$$A = \frac b{(n-1)(b^2-a^2)},\>\>\>
B = -\frac {(2n-3)a}{(n-1)(b^2-a^2)},\>\>\>
C = \frac {n-2}{(n-1)(b^2-a^2)}$$</span></p>
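<p>The opening derivative identity can be spot-checked numerically with a central difference (sample values $a=2$, $b=1$, $n=4$, $x=0.7$ chosen arbitrarily with $|a|\neq|b|$):</p>

```python
import math

a, b, n = 2.0, 1.0, 4
x, h = 0.7, 1e-5

def F(t):
    # sin^2(t) / (a + b cos t)^(n-1), the function being differentiated
    return math.sin(t) ** 2 / (a + b * math.cos(t)) ** (n - 1)

lhs = b / math.sin(x) * (F(x + h) - F(x - h)) / (2 * h)   # central difference
u = a + b * math.cos(x)
rhs = ((n - 1) * (b * b - a * a) / u ** n
       + 2 * a * (n - 2) / u ** (n - 1)
       - (n - 3) / u ** (n - 2))
assert abs(lhs - rhs) < 1e-6
```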
|
23,674 | <p>Let $v$ be the 3-adic valuation on $\mathbb{Q}$ and consider the subring $\mathbb{Z}_{(3)}$ of $\mathbb{Q}$ defined by
$$
\mathbb{Z}_{(3)} = \{ x \in \mathbb{Q} : v(x) \geq 0 \}.
$$
That is, $\mathbb{Z}_{(3)}$ is the ring of rational numbers that are integral with respect to $v$. $\mathbb{Z}_{(3)}$ is also the localization of $\mathbb{Z}$ at the prime ideal $(3)$. I know $\mathbb{Z}_{(3)}$ is integrally closed in $\mathbb{Q}$.</p>
<p>I want to find the integral closure of $\mathbb{Z}_{(3)}$ in the field $\mathbb{Q}(\sqrt{-5})$:
$$
\overline{\mathbb{Z}_{(3)}} = \{x \in \mathbb{Q}(\sqrt{-5}) : x \text{ is a root of a monic irreducible polynomial with coefficients in } \mathbb{Z}_{(3)} \}
$$</p>
<p>How can I do this? What should I be thinking about?</p>
| Andrea Mori | 688 | <p>The integral closure of $\Bbb Z$ in $K={\Bbb Q}(\sqrt{-5})$ is the ring of integers ${\cal O}_K={\Bbb Z}[\sqrt{-5}]$, the latter equality because $-5\equiv 3\bmod 4$.</p>
<p>As you observed, ${\Bbb Z}_{(3)}$ is the localization of $\Bbb Z$ at the ideal $(3)$. Then, it follows from general properties (see Atiyah-Macdonald, Ch 5) that the integral closure $A$ of ${\Bbb Z}_{(3)}$ in $K$ is the localization at the ideal $(3)$ in ${\cal O}_K$.</p>
<p>Since $3$ splits in $K$, $A$ has two maximal ideals, namely ${\frak m}_{\pm}=(1\pm\sqrt{-5})A$.</p>
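<p>That $3$ splits can be checked by factoring $x^2+5$ modulo $3$ (since ${\cal O}_K={\Bbb Z}[\sqrt{-5}]$ here, the Kummer–Dedekind criterion applies to the minimal polynomial). A quick check, with a side note on why the factor of $2$ in the norm $N(1\pm\sqrt{-5})=6$ is harmless in the localization:</p>

```python
# x^2 + 5 = x^2 - 1 = (x - 1)(x + 1) (mod 3): two distinct roots,
# so (3) splits into two distinct primes in Z[sqrt(-5)]
roots = [x for x in range(3) if (x * x + 5) % 3 == 0]
assert roots == [1, 2]

# the generators 1 +/- sqrt(-5) have norm 6 = 2 * 3, and 2 is a unit in Z_(3)
norm = 1 ** 2 + 5 * 1 ** 2
assert norm == 6 and norm % 3 == 0 and norm // 3 == 2
```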
|
2,444,196 | <p>The center of a group $G$ is defined as $Z(G):=\{ z\in G : gz = zg, \; \forall g \in G\}$.</p>
<p>The goal is to show that if $\vert G\vert = pq$, where $p$ and $q$ are not necessarily distinct primes then either $G$ is abelian or $Z(G) = \{ e\}$.</p>
<p>I want to suppose that $Z(G) \neq \{ e\}$ and then use the fact that $G/Z(G)$ is cyclic to imply that $G$ is abelian, which is something I have already proven. But how do I show that $G/Z(G)$ is cyclic when I am not certain what exactly $Z(G)$ looks like. I only know that it has at least one non-identity element in it, which will be of order $p$ WLOG, (the case where it is of order $pq$ is trivial).</p>
<p>Any help is appreciated. Thank you.</p>
| Nicky Hekster | 9,605 | <p>Hint: assume $Z(G) \neq \{1\}$. Then look at $|G:Z(G)| \in \{1,p,q\}$</p>
|
2,444,196 | <p>The center of a group $G$ is defined as $Z(G):=\{ z\in G : gz = zg, \; \forall g \in G\}$.</p>
<p>The goal is to show that if $\vert G\vert = pq$, where $p$ and $q$ are not necessarily distinct primes then either $G$ is abelian or $Z(G) = \{ e\}$.</p>
<p>I want to suppose that $Z(G) \neq \{ e\}$ and then use the fact that $G/Z(G)$ is cyclic to imply that $G$ is abelian, which is something I have already proven. But how do I show that $G/Z(G)$ is cyclic when I am not certain what exactly $Z(G)$ looks like. I only know that it has at least one non-identity element in it, which will be of order $p$ WLOG, (the case where it is of order $pq$ is trivial).</p>
<p>Any help is appreciated. Thank you.</p>
| GAVD | 255,061 | <p>You already suppose that $Z(G)\neq 1$. Then the order of the quotient group $G/Z(G)$ is one of 1,p,q. </p>
<p>You can follow <a href="https://math.stackexchange.com/questions/106163/show-that-every-group-of-prime-order-is-cyclic">this question</a> to see that every group of prime order is cyclic. So the group $G/Z(G)$ is cyclic. </p>
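<p>As a concrete instance of the dichotomy: $S_3$ has order $6 = 2\cdot 3$ and is nonabelian, so its center must be trivial, which is easy to confirm by brute force (permutations as tuples; the helper is mine):</p>

```python
from itertools import permutations

def compose(g, h):
    # (g o h)(i) = g(h(i)) for permutations of {0, 1, 2}
    return tuple(g[h[i]] for i in range(3))

S3 = list(permutations(range(3)))   # a nonabelian group of order 6 = 2 * 3
center = [z for z in S3
          if all(compose(z, g) == compose(g, z) for g in S3)]
assert center == [(0, 1, 2)]        # only the identity is central
```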
|
239,863 | <p>I'm trying to reproduce the following graph:
<a href="https://i.stack.imgur.com/tHQX3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tHQX3.png" alt="enter image description here" /></a></p>
<p>I have the parabolas graphed, but I want to indicate the intercepts with the x-axis by a point or cross. I also want to eliminate the numbers and the little tick marks. I have this:
<a href="https://i.stack.imgur.com/wfxwr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wfxwr.png" alt="enter image description here" /></a></p>
<p>Thanks in advance. I'm new to Mathematica, so...</p>
| MassDefect | 42,264 | <p>Maybe something like this?</p>
<pre><code>eqs = {(x + 5)^2 - 130, (x - 2)^2 - 130};
intercepts = NumericalSort[x /. Solve[Or @@ Thread[eqs == 0], x]];
Plot[
eqs,
{x, -21, 16},
AxesStyle -> White,
Background -> Black,
Frame -> True,
FrameLabel -> {n, \[Epsilon]},
FrameStyle -> White,
FrameTicks -> None,
Mesh -> {{0}},
MeshFunctions -> {#2 &},
MeshStyle -> Directive[White, AbsolutePointSize[6]],
PlotStyle -> Red,
Ticks -> None,
Epilog -> {
White,
MapThread[
Text[Subscript[n, #2], {#1,
4}, {Sign[#2 - 2.5] 2, -1}] &, {intercepts,
Range[Length[intercepts]]}]
}
]
</code></pre>
<p><a href="https://i.stack.imgur.com/ZdKjL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZdKjL.png" alt="Plot of parabolas with zeroes shown." /></a></p>
<p>I've sort of tried to "automate" finding the zeroes and plotting the labels, but it will probably still require a bit of manual tweaking to get the labels right. Thanks to J. M.'s ennui for the <code>Mesh/MeshFunctions/MeshStyle</code> solution. Since I'm already calculating the intercepts, one could also just use those, but I liked the <code>Mesh</code> solution better.</p>
<p>I'm using <code>Epilog</code> to add the text to the plot. The <code>Sign[#2 - 2.5]2</code> is just there so that the first two zeroes use -2 as their x-offset and +2 for the second two zeroes. What looks good will depend a lot on what equations you're plotting, so the positioning may vary.</p>
|
2,583,454 | <p>Consider for instance the linear system:</p>
<p>$$\left(
\begin{array}{cc}
1 & 2 \\
3 & 4 \\
5 & 6 \\
\end{array}
\right).\left(
\begin{array}{c}
x \\
y \\
\end{array}
\right)=\left(
\begin{array}{c}
1 \\
2 \\
4 \\
\end{array}
\right)$$</p>
<p>This system is overdetermined and has no solution. Yet, by simply multiplying both sides by $\textbf{A}^T$:</p>
<p>$$\left(
\begin{array}{ccc}
1 & 3 & 5 \\
2 & 4 & 6 \\
\end{array}
\right).\left(
\begin{array}{cc}
1 & 2 \\
3 & 4 \\
5 & 6 \\
\end{array}
\right).\left(
\begin{array}{c}
x \\
y \\
\end{array}
\right)=\left(
\begin{array}{ccc}
1 & 3 & 5 \\
2 & 4 & 6 \\
\end{array}
\right).\left(
\begin{array}{c}
1 \\
2 \\
4 \\
\end{array}
\right)$$</p>
<p>We find that the system now has a unique solution, which is the (x,y) that minimizes the squared error.</p>
<p>Now I understand the derivation of why multiplying by the transpose helps to find the pseudoinverse which then helps to perform OLS regression, but my question is perhaps a bit more fundamental. </p>
<p>How can multiplying both sides of an equation by a matrix change a system which previously had no solutions into one that has a unique solution? This seems to go against my assumption that the solutions to $\textbf{A}x = \textbf{B}$ are the same as the solutions to $\textbf{P}\textbf{A}x = \textbf{P}\textbf{B}$.</p>
| Jean Marie | 305,862 | <p>A general principle (some would say "tautology") connecting sets and their defining properties is as follows:</p>
<p>Let $E_1 := \{x | p_1(x)\} $ (with the meaning "$x$ has property $p_1$") and $E_2 := \{x | p_2(x)\} $.</p>
<p>$$\text{If} \ \forall x, (\ p_1(x) \implies p_2(x)), \ \ \text{then} \ \ E_1 \subset E_2$$</p>
<p>Here, as $AX=B \implies A^TAX=A^TB$, the set $E_1$ of solutions of the equation $AX=B$ is contained in the set $E_2$ of solutions of $A^TAX=A^TB$.</p>
<p>Rather often (as in the case shown in your question), $E_1=\varnothing$ whereas $E_2 \neq \varnothing$. </p>
<p>But what we have shown explains that this is not contradictory.</p>
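<p>With the $3\times 2$ example from the question, the containment is strict: $E_1=\varnothing$ while $E_2$ is a single point. A quick check in exact arithmetic (solving the $2\times 2$ normal equations by Cramer's rule):</p>

```python
from fractions import Fraction as F

A = [[1, 2], [3, 4], [5, 6]]
b = [1, 2, 4]

# normal equations (A^T A) x = A^T b
AtA = [[sum(A[k][i] * A[k][j] for k in range(3)) for j in range(2)]
       for i in range(2)]
Atb = [sum(A[k][i] * b[k] for k in range(3)) for i in range(2)]

det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
x = F(AtA[1][1] * Atb[0] - AtA[0][1] * Atb[1], det)   # Cramer's rule
y = F(AtA[0][0] * Atb[1] - AtA[1][0] * Atb[0], det)

assert (x, y) == (F(2, 3), F(1, 12))                  # E_2 is this single point
assert [r[0] * x + r[1] * y for r in A] != b          # yet Ax != b: E_1 is empty
```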
|
8,107 | <p>Imagine I have a company that makes widgets, where each widget costs me A dollars to make. Each month I can allocate money toward research and development with the aim of finding a new process that will allow me to build widgets for a cost of A/B dollars. Presume that I know that for each C dollars I spend on research and development there's a D% chance of finding a breakthrough. Of course, spending money on research and development means that I have less to spend on building widgets.</p>
<p>I have a monthly budget of E dollars. This budget is not directly tied to my profit margin, but it is safe to say that my profit margins influence future budgets (i.e., if I make no widgets for three straight months because I do all research and development, it's likely that my budget will be reduced, whereas if I discover a breakthrough the first month my profits will skyrocket and I'll likely see that budget grow over time).</p>
<p>In case that is too abstract, here's the real world scenario I'm interested in solving (although I'd like a more general approach, as well):</p>
<ul>
<li>A = 15 dollars</li>
<li>B = 3</li>
<li>C = 5 dollars</li>
<li>D = 2.75%</li>
<li>E = 30 dollars</li>
</ul>
<p>That is, today widgets cost me 15 dollars to build but if I can find a breakthrough I know I can make them at 1/3 the cost (5 dollars). For each 5 dollars I spend on research and development there is a 2.75% chance I'll find the breakthrough. However, I have only 30 dollars to spend each month. If I spend it all on research and development and have no success then I have made no widgets for sale. If I spend it all on widget construction I have no chance of finding a breakthrough.</p>
<p>Is there some statistical distribution or formula that can let me plug in these variables and see some sort of breakdown that gives me an idea of whether it's a good idea to spend any money on research and development each month and, if so, how much?</p>
| Mike Spivey | 2,370 | <p>Since your question is about seeing how your different options play out, and you have a small number of them (the point of my previous question), you can use a <a href="http://en.wikipedia.org/wiki/Decision_tree" rel="nofollow">decision tree</a>. (From Wikipedia: "In decision analysis, a 'decision tree'... is used as a visual and analytical decision support tool, where the expected values (or expected utility) of competing alternatives are calculated.")</p>
<p>You will want the first level of branches of the tree to be the various decisions you have (spend 0 dollars on research, spend 5 dollars on research, etc.). The second level of branches of the tree will be whether or not the breakthrough is found. Based on the outcomes and probabilities for the second level, you can calculate an expected value for each option in the first level. Presumably you want to choose the option with the highest expected value (or expected utility, if you are comfortable with utility functions).</p>
<p>However, I think you need some more information to get a really good answer. In particular, your example doesn't quantify exactly how your budget would change over time. That needs to be included somehow. In addition, perhaps going with the highest expected monthly value isn't what you really want, as your description of the scenario implies that you might be willing to go three months without profit in order to increase your chance of a breakthrough.</p>
<p>In short: I think a decision tree is what you want (and there are lots of resources out there besides the Wikipedia page that can help you with that), but you need to quantify some of the other aspects of your problem before you can get a good answer.</p>
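<p>To make the first level of the tree concrete with the numbers from the question, here is a sketch that enumerates the options, assuming each 5-dollar tranche succeeds independently (the question doesn't specify this, so the independence is my assumption):</p>

```python
A, C, D, E = 15.0, 5.0, 0.0275, 30.0   # widget cost, tranche size, tranche odds, budget

options = []
for k in range(int(E // C) + 1):       # k = number of $5 R&D tranches
    rd = k * C
    widgets = (E - rd) / A             # widgets buildable with what's left
    p_break = 1 - (1 - D) ** k         # assumes independent tranches
    options.append((rd, widgets, p_break))

assert options[0] == (0.0, 2.0, 0.0)              # all widgets, no chance
assert abs(options[-1][2] - 0.15406) < 1e-4       # all R&D: ~15.4% chance
```

Attaching a payoff to each leaf (profit per widget, value of a breakthrough) then turns each tuple into an expected value; those payoffs aren't given in the question, so they are left out here.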
|
8,107 | <p>Imagine I have a company that makes widgets, where each widget costs me A dollars to make. Each month I can allocate money toward research and development with the aim of finding a new process that will allow me to build widgets for a cost of A/B dollars. Presume that I know that for each C dollars I spend on research and development there's a D% chance of finding a breakthrough. Of course, spending money on research and development means that I have less to spend on building widgets.</p>
<p>I have a monthly budget of E dollars. This budget is not directly tied to my profit margin, but it is safe to say that it my profit margins influence future budgets (i.e., if I make no widgets for three straight months b/c I do all research and development, it's likely that my budget will be reduced, whereas if I discover a breakthrough the first month my profits will skyrocket and I'll likely see that budget grow over time).</p>
<p>In case that is too abstract, here's the real world scenario I'm interested in solving (although I'd like a more general approach, as well):</p>
<ul>
<li>A = 15 dollars</li>
<li>B = 3</li>
<li>C = 5 dollars</li>
<li>D = 2.75%</li>
<li>E = 30 dollars</li>
</ul>
<p>That is, today widgets cost me 15 dollars to build but if I can find a breakthrough I know I can make them at 1/3 the cost (5 dollars). For each 5 dollars I spend on research and development there is a 2.75% chance I'll find the breakthrough. However, I have only 30 dollars to spend each month. If I spend it all on research and development and have no success then I have made no widgets for sale. If I spend it all on widget construction I have no chance of finding a breakthrough.</p>
<p>Is there some statistical distribution or formula that can let me plug in these variables and see some sort of breakdown that gives me an idea of whether it's a good idea to spend any money on research and development each month and, if so, how much?</p>
| Emre | 9,901 | <p>This is a decision-theoretic problem. I suggest you look into <a href="http://en.wikipedia.org/wiki/Multi-armed_bandit" rel="nofollow">multi-armed bandits</a>, which</p>
<blockquote>
<p>have been used to model the problem of managing research projects in a large organization, like a science foundation or a pharmaceutical company.</p>
</blockquote>
<p>If you want a book, there's <a href="http://www.wiley.com/WileyCDA/WileyTitle/productCd-0470670029.html" rel="nofollow">Multi-armed Bandit Allocation Indices, 2nd Edition</a> by Gittins <em>et al</em>.</p>
|
721,449 | <p>I need to determine all the positive divisors of 7!. I got 360 as the total number of positive divisors for 7!. Can someone confirm, or give the real answer?</p>
| Joe K | 64,292 | <p>Just to generalize what others have said, it's a neat little fact that the number of distinct factors of $n!$ is given by:</p>
<p>$$
\prod_{p \in primes}\left( 1 + \sum_{k=1}^{\infty}\left \lfloor \frac{n}{p^k} \right \rfloor \right)
$$</p>
<p>Note that the sum is simply a shortcut to calculating the exponent for an individual prime factor that only works with factorials. The product and "1+" part are adequately explained by Alessandro's answer.</p>
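<p>Applying this to the $7!$ from the question (and cross-checking against brute force) gives $60$ divisors, not $360$: $7! = 2^4\cdot3^2\cdot5\cdot7$, so the count is $(1+4)(1+2)(1+1)(1+1) = 60$. A sketch:</p>

```python
from math import factorial

def divisor_count_of_factorial(n):
    # product over primes p <= n of (1 + sum_k floor(n / p^k))
    primes = [p for p in range(2, n + 1)
              if all(p % q for q in range(2, p))]
    total = 1
    for p in primes:
        e, pk = 0, p
        while pk <= n:
            e += n // pk                 # exponent of p in n! (Legendre)
            pk *= p
        total *= 1 + e
    return total

m = factorial(7)                         # 5040
brute = sum(1 for d in range(1, m + 1) if m % d == 0)
assert brute == divisor_count_of_factorial(7) == 60
```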
|
3,237,476 | <p>I have a trouble with calculating the sum of this series:</p>
<p><span class="math-container">$$2+\sum_{n=1}^{\infty}\frac{1-n}{9n^3-n}$$</span></p>
<p>I tried to split it into three separate series like this:
<span class="math-container">$$2+\sum_{n=1}^{\infty}\frac{1-n}{9n^3-n} =2+\sum_{n=1}^{\infty}\frac{2}{3n+1}+\sum_{n=1}^{\infty}\frac{1}{3n-1}-\sum_{n=1}^{\infty}\frac{1}{n} $$</span> but I'm not able to continue, can you give me some tips?</p>
| Claude Leibovici | 82,404 | <p>For the direct evaluation of the limit, you have received the good solution from J.G.</p>
<p>You could also consider the partial sum using, as you did, partial fraction decomposition
<span class="math-container">$$\frac{1-n}{9n^3-n}=\frac{1}{3 n-1}+\frac{2}{3 n+1}-\frac{1}{n}$$</span> which makes
<span class="math-container">$$S_p=\sum_{n=1}^{p}\frac{1-n}{9n^3-n}=\frac{1}{3} \left(\psi \left(p+\frac{2}{3}\right)-\psi
\left(\frac{2}{3}\right)\right)+\frac{2}{3} \left(\psi
\left(p+\frac{4}{3}\right)-\psi \left(\frac{4}{3}\right)\right)-H_p$$</span> where the digamma function and harmonic numbers appear.</p>
<p>Now, using asymptotics and Taylor series you should arrive at
<span class="math-container">$$S_p=-\left(\gamma +\frac{1}{3}\psi\left(\frac{2}{3}\right)+\frac{2}{3} \psi
\left(\frac{4}{3}\right)\right)+\frac{1}{9 p}-\frac{1}{9
p^2}+O\left(\frac{1}{p^3}\right)$$</span> which shows the limit and how it is approached.</p>
<p>Now, the particular values
<span class="math-container">$$\psi\left(\frac{2}{3}\right)=-\gamma +\frac{\pi }{2 \sqrt{3}}-\frac{3 \log (3)}{2}$$</span>
<span class="math-container">$$\psi\left(\frac{4}{3}\right)=3-\gamma -\frac{\pi }{2 \sqrt{3}}-\frac{3 \log (3)}{2}$$</span> make
<span class="math-container">$$S_p=\left(-2+\frac{\pi }{6 \sqrt{3}}+\frac{3 \log (3)}{2}\right)+\frac{1}{9
p}-\frac{1}{9 p^2}+O\left(\frac{1}{p^3}\right)$$</span></p>
<p>For <span class="math-container">$p=10$</span>, the exact value is <span class="math-container">$-\frac{477820712081}{12033629407800}\approx -0.039707$</span> while the above truncated expansion gives <span class="math-container">$-\frac{199}{100}+\frac{\pi }{6 \sqrt{3}}+\frac{3 \log (3)}{2}\approx -0.039782$</span>.</p>
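<p>Both numbers can be checked quickly (exact rational arithmetic for the partial sum, floats for the closed-form limit; the tolerances are mine):</p>

```python
from fractions import Fraction
import math

S10 = sum(Fraction(1 - n, 9 * n**3 - n) for n in range(1, 11))
assert abs(float(S10) + 0.039707) < 1e-5          # the exact partial sum

limit = -2 + math.pi / (6 * math.sqrt(3)) + 1.5 * math.log(3)
approx = limit + 1 / (9 * 10) - 1 / (9 * 10**2)   # truncated expansion, p = 10
assert abs(approx + 0.039782) < 1e-5
assert abs(float(S10) - approx) < 1e-4            # gap is O(1/p^3)
```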
|
926,168 | <p>In how many ways can 3 teachers and 4 pupils be arranged in a line if the pupils and teachers must alternate?
How do I get the answer? The answer is 144.
| duci9y | 165,386 | <p>Thanks to @labbhattacharjee. Here's a step by step version of their solution:</p>
<p>$$\sin 2a + \sin 2b + \sin 2c - \sin 2(a+b+c) = 4 \sin (a+b) \sin (b+c) \sin (c+a)$$</p>
<p>LHS:</p>
<p>$ = 2 \sin (a+b) \cos (a-b) + \sin 2c - \sin((2a+2b)+2c) $
$ = 2 \sin (a+b) \cos (a-b) + 2 \cos (a+b+2c) \sin (-a-b) $
$ = 2 \sin (a+b) \cos (a-b) - 2 \cos (a+b+2c) \sin (a+b) $
$ = 2 \sin (a+b) (\cos (a-b) - \cos (a+b+2c)) $<br>
$ = 2 \sin (a+b) (-2 \sin (c+a) \sin (-b-c)) $<br>
$ = 2 \sin (a+b) (2 \sin (c+a) \sin (b+c)) $<br>
$ = 4 \sin (a+b) \sin (b+c) \sin (c+a) $</p>
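<p>The identity can be spot-checked numerically at arbitrary angles:</p>

```python
import math

for a, b, c in [(0.3, 0.7, 1.1), (1.2, -0.4, 2.5), (0.0, 0.5, -1.3)]:
    lhs = (math.sin(2 * a) + math.sin(2 * b) + math.sin(2 * c)
           - math.sin(2 * (a + b + c)))
    rhs = 4 * math.sin(a + b) * math.sin(b + c) * math.sin(c + a)
    assert abs(lhs - rhs) < 1e-12
```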
|
2,323,845 | <p>I would like to create a 4 on 4 tournament with 8 players (4 players on a team where two teams play against each other each game), where every player plays with every other player an equal number of times. A simple example of this would be if you had a 2 on 2 tournament with 4 players then:</p>
<p>12 v 34</p>
<p>13 v 24</p>
<p>14 v 23</p>
<p>If it were 6 players doing 3 v 3 then you could have 10 games covering the 20 combinations possible. (i.e. 123 v 456 and so forth).</p>
<p>With 4 v 4 using 8 players it is impractical to cover all combinations, since 8 choose 4 is 70 (that is, 35 distinct matches once each team is paired with its complement). I would like to determine a 'very close' practical solution that would require no more than 10 games, so really an ideal would be 7 games where each player plays with every other player 3 times total on their team. I haven't been able to figure out a good algorithmic way to approach this aside from doing it by hand and adjusting as I go to ensure player 1 plays with all others 3 times, then player 2 plays with all others (3-8) 3 times, then player 3 plays with all others (4-8) 3 times, making changes that preserve the previous counts. Any suggestions or solutions?</p>
<p><strong>Second</strong> Update: </p>
<p>I have solved the problem by hand below, where each player has each other player as a teammate for exactly 3 matches:</p>
<p>1235 4678</p>
<p>1458 2367</p>
<p>1347 2568</p>
<p>1278 3456</p>
<p>1368 2457</p>
<p>1246 3578</p>
<p>1567 2348</p>
<p>I performed this by hand by looking for imbalances and attempting a rebalance that preserved the partner match count for player 1. For instance if there was one match with 46 paired but 5 for 48 paired then I looked to change a 48 pairing into a 46 and then preserve the balance of the matchups for player 1 by changing yet another 48 into a 46. Then, recheck to verify all pairings up to the "4s" were still balanced and continue. I feel like it was dumb luck paired with a generally sound higher probability approach that enabled me to reach this perfect solution. </p>
| AmagicalFishy | 126,921 | <p>I started thinking of it this way: </p>
<p>We have $28$ pairs of players that need to be satisfied three times. Each game can accommodate six pairs. If we can find an optimal configuration of pairings for a certain number of games such that each pair is satisfied, we can just play that configuration three times.</p>
<p>Say we have eight players $P = \{A, B, ..., H\}$. The pairs that need to be satisfied are:</p>
<p>$$\begin{array}{ccccccc}
(AB) & (AF) & (BD) & (BH) & (CG) & (DG) & (EH) \\
(AC) & (AG) & (BE) & (CD) & (CH) & (DH) & (FG) \\
(AD) & (AH) & (BF) & (CE) & (DE) & (EF) & (FH) \\
(AE) & (BC) & (BG) & (CF) & (DF) & (EG) & (GH) \\
\end{array}$$</p>
<p>Where each pair represents one player playing with another player. For the first game, it doesn't matter who we pick—the same number of pairs will be satisfied regardless (12 pairs). For example, say our first game is team $\{A, B, C, D\}$ vs. team $\{E, F, G, H\}$: </p>
<p>$$(AB)(BC) (CD) \times (EF)(FG)(GH)$$</p>
<p>The pairs $(AB), (BC), (CD), (EF), (FG),$ and $(GH)$ are satisfied, but so are $(AC), (AD), (BD)$ and $(EG), (EH), (FH)$. Keep in mind that, if you changed the teams (for the first choice), the particular pairs satisfied would be different, but $12$ of them would <em>always</em> be satisfied. (Think of just changing the labels, for example. What if $P = \{H, G, F, ... , A\}$?)</p>
<p>That leaves $16$ pairs. Now we have to switch teams up. We want four new pairs on each team. Let's say...</p>
<p>$$(AE)(AH)(BE)(BH) \times (CF)(CG)(DF)(DG)$$</p>
<p>This is just team $\{A,B,E,H\}$ vs. $\{C,D,F,G\}$. This satisfies the new pairs $(AE), (AH), (BE), (BH), (CF), (CG), (DF),$ and $(DG)$; the pairs $(AB), (EH), (CD),$ and $(FG)$ are satisfied again, redundantly, since those players were already teammates in the first game. [I don't think] you can get more efficient than this: a team of four contains six pairs, and if it is built from two players of each old team, exactly two of those six pairs repeat an old pairing, leaving four new pairs per team, which is the best possible.</p>
<p>For the third game, we have team $\{A, B, F, G\}$ vs. $\{C, D, E, H\}$:</p>
<p>$$(AF)(AG)(BF)(BG) \times (CE)(CH)(DE)(DH)$$</p>
<p>... which satisfies the remaining $8$ pairs. So three games are enough for everyone to play with everyone else once:</p>
<ol>
<li>$\{A, B, C, D\} \times \{E, F, G, H\}$</li>
<li>$\{A, B, E, H\} \times \{C, D, F, G\}$</li>
<li>$\{A, B, F, G\} \times \{C, D, E, H\}$</li>
</ol>
<hr>
<p>I believe this is the most efficient way to shift teams around. <strong>Disclaimer</strong>: my argument isn't fully rigorous, but the counting backs it up: the first game covers $12$ new pairs, every later game covers at most $8$ new pairs, so $1 + \lceil 16/8 \rceil = 3$ games is a hard lower bound, and the schedule above meets it.</p>
<p>It does <em>not</em> follow that repeating the above configuration three times (for $9$ games) is the most efficient way to pair everyone three times: as the question's update shows, $7$ games suffice, and $7$ is optimal, since $7 \times 12 = 84 = 3 \times 28$ covers every pair exactly three times with no wasted pairing.</p>
|
2,425,127 | <p>I am learning about general solutions to differential equations and would like to ask whether my solution is mathematically correct. </p>
<p>I was asked to find the general solution to the differential equation </p>
<p>$$\frac{dy}{dx} = 2e^{x-y}$$</p>
<p>So I did the following - </p>
<p>$$\int e^y dy = 2\int e^x dx$$
$$e^y = 2 e^x + C $$
$$y = \ln (2 e^x + C) $$</p>
<p>Now, my book says that the solution in the form $y = f(x)$ is $y = \ln (2 e^x + C) $.</p>
<p>However, I progressed further and did the following:</p>
<p>$$y = \ln (2 e^x + C) $$
$$ = \ln (e^{{x}^{2}} + C) $$
$$y = x^2 + C' $$</p>
<p>where $C'$ is a modified constant from the original constant $C$. </p>
<p>Is this an acceptable solution?</p>
| Ross Millikan | 1,827 | <p>You went wrong in two places. First, you replaced $2e^x$ with $e^{x^2}$. It seems you were thinking of $2 \ln a= \ln (a^2)$, but note that here the $2$ is outside the log. Second, you treated $e^{x^2}$ as $(e^x)^2$, but the convention is that $e^{x^2}$ means $e^{(x^2)}$, since $(e^x)^2$ can already be written as $e^{2x}$.</p>
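<p>A quick numerical check (my addition, a sketch) that the book's solution $y = \ln(2e^x + C)$ really does satisfy $dy/dx = 2e^{x-y}$:</p>

```python
import math

def y(x, C):
    return math.log(2 * math.exp(x) + C)

h = 1e-6
for C in (-1.0, 0.0, 3.0):
    for x in (0.0, 0.5, 2.0):
        dydx = (y(x + h, C) - y(x - h, C)) / (2 * h)  # central difference
        rhs = 2 * math.exp(x - y(x, C))               # 2 e^{x-y}
        assert abs(dydx - rhs) < 1e-6
print("dy/dx matches 2*exp(x-y) at all sample points")
```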
|
425,663 | <p>Suppose $ X $ is a Banach space with respect to two different norms, $ \|\cdot\|_1 $ and $ \|\cdot\|_2 $. Suppose there is a constant $ K > 0 $ such that
$$ \forall x \in X, \|x\|_1 \leq K\|x\|_2 .$$
Show that these two norms are equivalent.</p>
| Romeo | 28,746 | <p>Use the open mapping principle: the identity map $F: (X, \Vert \cdot \Vert_2) \to (X, \Vert \cdot \Vert_1)$ is linear and continuous (thanks to the hypothesis). Its range is of the second category, since $X$ is Banach (in particular, a complete metric space under the metric induced by $\Vert \cdot \Vert_1$). </p>
<p>By the open mapping theorem, it follows that this is an open map, i.e. a homeomorphism. In particular, the inverse map $F^{-1}: (X, \Vert \cdot \Vert_1) \to (X, \Vert \cdot \Vert_2)$ is still continuous; from this you easily get the other inequality you need in order to prove that the norms are equivalent. </p>
|
2,283,123 | <p>Let $(\mathbb{R}, +, 0)$ be the additive group of reals. Is this structure $\aleph_0$-saturated? </p>
<p>I don't really see how to go about showing this. To show it is not saturated, it is enough to exhibit a type omitted in $(\mathbb{R}, +, 0)$. The interesting statements we can make about groups are usually to do with torsion or divisibility, but I can't see a way to find a type omitted in $(\mathbb{R}, +, 0)$ from that. </p>
| Alex Kruckman | 7,062 | <p>Note, as in Zarathustra's answer, that $(\mathbb{R},0,+)$ carries the structure of a $\mathbb{Q}$-vector space. </p>
<p>Let $A$ be a finite subset of $\mathbb{R}$. What are the possible types over $A$? </p>
<p>For every $\mathbb{Q}$-linear combination of elements of $A$, there is a type $p(x)$ which is completely determined by the formula asserting that $x$ is equal to that linear combination (this is not strictly a formula in the language $\{0,+\}$, but it is equivalent to one by clearing denominators and interpreting scalar multiplication by integers as repeated addition). All of these types are realized in $\mathbb{R}$, since $\mathbb{R}$ is closed under $\mathbb{Q}$-linear combinations.</p>
<p>I claim there is just one other type over $A$. It suffices to show that if $a$ and $b$ are any two elements of some elementary extension $R$, such that neither is in the $\mathbb{Q}$-linear span of $A$, then $a$ and $b$ have the same type over $A$. This follows from the fact that there is an automorphism of $R$ fixing $A$ and moving $a$ to $b$, obtained by linear algebra: pick a basis for the span of $A$, then extend this basis to a basis for $R$ in two different ways, one containing $a$ and one containing $b$. </p>
<p>This last type is also realized in $\mathbb{R}$, since the span of $A$ is countable, while $\mathbb{R}$ is uncountable. Any element of $\mathbb{R}$ which is not in the span of $A$ satisfies it. </p>
<p>Note that the same argument works whenever $|A|<|\mathbb{R}|$, which shows that this structure is saturated, not just $\aleph_0$-saturated. </p>
|
2,916,685 | <p>What are some common, preferably uncomplicated functions $ f: [a,b] \rightarrow \mathbb{R} $ that are Riemann integrable on $ [c,b] $ for all $ c \in (a,b) $ but not integrable on $ [a,b] $. </p>
<p>I know $ f = 1/x $ is one such function for the interval $ [0,1] $. Are there any other examples? </p>
| zhw. | 228,045 | <p>Any unbounded function $f$ on the closed interval $[a,b]$ such that $f$ is continuous on $(a,b]$ is an example.</p>
|
649,570 | <p>How do we show that there is only one solution to,$$\sqrt{2+\sqrt{2+\sqrt{2+\sqrt{2+x}}}}=\sqrt[3]{6+\sqrt[3]{6+\sqrt[3]{6+\sqrt[3]{6+x}}}}$$</p>
<p>I guess it is only $x=2$.
Please help.</p>
| nbubis | 28,743 | <p>A proof by induction.
Let:
$$f_n(x) = \sqrt[3]{6+\sqrt[3]{6+\ldots+\sqrt[3]{6+x}}},\ g_n(x) = \sqrt{2+\sqrt{2+\ldots+\sqrt{2+x}}}$$
With $n$ terms. Then for $n=1$ you can easily solve the cubic equation to show that $f_1=g_1 $ only at $x=2$ (over the reals).</p>
<p>Now assume our claim is true for $n$, i.e. that $f_n(x)=g_n(x)$ iff for $x=2$. Then for $n+1$, raise to the sixth power to get that:
$$(6+f_n(x))^2=(2+g_n(x))^3$$
Clearly, this equality is true for $x=2$ since $f_n(2)=g_n(2)$. Now, if our claim is false and this equality holds for some $x_0\neq 2$, then:
$$g_n(x_0) = (6+f_n(x_0))^{2/3}-2$$
But since $dg_n/df_n < 1$ for all $x>0$, by the mean value theorem we have a contradiction.
<strong>Thus we have proven that for any number of $n$ the only (real) solution of $f_n(x)=g_n(x)$ is $x=2$.</strong></p>
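<p>A numerical spot-check (my addition, not part of the proof): at $x=2$ both towers evaluate to $2$, and away from $2$ the two sides sit on opposite sides of each other:</p>

```python
def sqrt_tower(x, depth=4):
    # sqrt(2 + sqrt(2 + ... + sqrt(2 + x)))
    v = x
    for _ in range(depth):
        v = (2 + v) ** 0.5
    return v

def cbrt_tower(x, depth=4):
    # cbrt(6 + cbrt(6 + ... + cbrt(6 + x)))
    v = x
    for _ in range(depth):
        v = (6 + v) ** (1 / 3)
    return v

assert abs(sqrt_tower(2) - 2) < 1e-12
assert abs(cbrt_tower(2) - 2) < 1e-12
assert sqrt_tower(0) < cbrt_tower(0)    # left side smaller below x = 2
assert sqrt_tower(10) > cbrt_tower(10)  # left side larger above x = 2
```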
|
2,797,329 | <p>The function $y_1 = x^2$ is a solution of
$x^2y'' − 3xy' + 4y = 0$.
Find the general solution of the nonhomogeneous linear differential equation
$x^2y'' − 3xy' + 4y = x^2$</p>
<p>I know the equation $x^2y'' − 3xy' + 4y = 0$ is a Euler-Cauchy equation but I'm not sure how to proceed with this question; any help is appreciated</p>
| user577215664 | 475,762 | <p>$x^2$ is a known solution to the homogeneous equation</p>
<p>Let's consider $y_p=x^2v$ as a general solution to the original equation. Then</p>
<p>$$
y_p=x^2v \implies
\begin{cases}
y'_p=2xv+x^2v' \\
y_p''=2v+4xv'+x^2v''
\end{cases}
$$
The equation becomes
$$x^2v''+xv'=1 \implies xv''+v'=\frac 1x$$
$$(xv')'=\frac 1x \implies xv'=\ln|x|+K_1$$
$$v'=\frac {\ln|x|}x+\frac {K_1}x$$
Therefore after integration we have that
$$v=K_1\ln|x|+K_2+\frac 12 {\ln^2|x|}$$
And since $y_p=x^2v$
$$\boxed{y_p(x)=x^{2}\frac {\ln^2|x|}2+K_1x^{2}\ln|x|+K_2x^2}$$</p>
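<p>The boxed solution can be sanity-checked numerically with finite differences (a quick sketch, with arbitrarily chosen constants $K_1, K_2$):</p>

```python
import math

def y(x, K1=1.3, K2=-0.7):
    L = math.log(abs(x))
    return x**2 * (0.5 * L**2 + K1 * L + K2)

h = 1e-4
for x in (0.5, 1.0, 2.0, 5.0):
    yp = (y(x + h) - y(x - h)) / (2 * h)            # y'
    ypp = (y(x + h) - 2 * y(x) + y(x - h)) / h**2   # y''
    residual = x**2 * ypp - 3 * x * yp + 4 * y(x)   # should equal x^2
    assert abs(residual - x**2) < 1e-4
print("x^2 y'' - 3x y' + 4y = x^2 holds at all sample points")
```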
|
2,528,306 | <p>The question asks for the number of three-digit positive integers with distinct digits. The answer is 648, but I tried to solve this problem in reverse, so I ended up with 630: there are 10 ways to pick the third digit, 9 ways to pick the second digit, and 7 ways to pick the first digit. So why do these answers differ? Please do not close this question, as I am trying to learn mathematics and I have stumbled upon this question.</p>
| N. F. Taussig | 173,070 | <p>The main restriction that we need to consider is that the hundreds digit cannot be zero. By choosing the hundreds digit first, we handle this restriction immediately, after which our choices depend only on which digits have already been selected.</p>
<p>The restriction that the hundreds digit may not be zero means we have nine choices for the hundreds digit. Since zero may be used in the tens place, we have nine choices for the tens digit (any digit except the hundreds digit). Once the hundreds and tens digits have been selected, we are left with eight choices for the units digits (any digit except the hundreds and tens digits). Therefore, there are $9 \cdot 9 \cdot 8 = 648$ three-digit positive integers with distinct digits.</p>
<p>If we do not start with the hundreds place, the number of choices we have depends on whether or not the number contains a zero. We consider cases:</p>
<ol>
<li>There is a zero in the units place: This leaves us with nine choices for the tens place (any digit except zero) and eight choices for the hundreds place (any digit except zero and the digit in the tens place). Hence, there are $1 \cdot 9 \cdot 8 = 72$ such numbers.</li>
<li>There is a zero in the tens place: This leaves us with nine choices for the units digit (any digit except zero) and eight choices for the hundreds place (any digit except zero and the digit in the units place). Hence, there are $9 \cdot 1 \cdot 8 = 72$ such numbers.</li>
<li>There is not a zero in either the units place or tens place: This leaves with nine choices for the units place, eight choices for the tens place, and seven choices for the hundreds place. Hence, there are $9 \cdot 8 \cdot 7 = 504$ such numbers.</li>
</ol>
<p>Since these cases are mutually exclusive and exhaustive, there are $72 + 72 + 504 = 648$ three-digit positive integers with distinct digits, as we found above without doing tedious casework.</p>
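<p>Both the direct count and the casework are easy to confirm by brute force (a quick sketch, not part of the combinatorial argument):</p>

```python
# three-digit numbers with all digits distinct
nums = [n for n in range(100, 1000) if len(set(str(n))) == 3]

zero_units = [n for n in nums if str(n)[2] == "0"]  # zero in the units place
zero_tens = [n for n in nums if str(n)[1] == "0"]   # zero in the tens place
no_zero = [n for n in nums if "0" not in str(n)]    # no zero at all

assert len(nums) == 648
assert (len(zero_units), len(zero_tens), len(no_zero)) == (72, 72, 504)
print(len(nums))  # 648
```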
|
80,899 | <p>This is related to a previous post of mine (<a href="https://math.stackexchange.com/questions/78669/limit-superior-of-a-sequence-showing-an-alternate-definition">link</a>) regarding how to show that for any sequence $\{x_{n}\}$, the limit superior of the sequence, which is defined as $\text{inf}_{n\geq 1}\text{sup }_{k\geq n} x_{k}$, is equal to the to supremum of limit points of the sequence. Below I think I have found a counter-example to this (although I know I am wrong but just don't know where!).</p>
<p>Define the sequence $x_{n}=\sin(n)$. We have $\text{sup }_{k\geq n} x_{k}=1$ for any $k\geq 1$ (i.e. for any subsequence). Thus $\text{inf}_{n\geq 1}\text{sup }_{k\geq n} x_{k}=1$. Now the sequence has no limit points since it does not converge to anything so the supremum of all the subsequence limits is the supremum of the empty set which presumably is not equal to 1 (I actually do not know what it is). So we have a sequence where the supremum of limit points does not equal $\text{inf}_{n\geq 1}\text{sup }_{k\geq n} x_{k}=1$?</p>
<p>Any help with showing where this is wrong would be much appreciated.</p>
| Brian M. Scott | 12,042 | <p>The problem is that you’ve misunderstood the notion of <em>limit point of a sequence</em>. I prefer the term <em>cluster point</em> in this context, but by either name it’s a point $x$ such for any nbhd $U$ of $x$ and any $n\in\mathbb{N}$ there is an $m\ge n$ such that $x_n \in U$. Thus $1$ and $-1$ are both cluster points of the sequence $\langle (-1)^n:n\in\mathbb{N}\rangle$, though this sequence doesn’t converge to anything.</p>
<p>Equivalently $x$ is a cluster point of $\langle x_n:n\in\mathbb{N}\rangle$ iff for each nbhd $U$ of $x$, $\{n\in\mathbb{N}:x_n \in U\}$ is infinite. If the space is first countable, $x$ is a cluster point of $\langle x_n:n\in\mathbb{N}\rangle$ iff some <em>subsequence</em> of $\langle x_n:n\in\mathbb{N}\rangle$ converges to $x$.</p>
|
18,879 | <p>A first-order sentence is (logically) valid iff it's true in every interpretation. And it's valid iff it can be deduced from the FO axioms alone.</p>
<p>One normal case of showing that a FO sentence is true is deducing it (syntactically).</p>
<p>I guess that indirect proofs have to be interpreted more "semantically": </p>
<p>"Assume that ~F is true in an interpretation. [Deduction of a contradiction] Thus ~F couldn't have been true. Thus F must be true in every interpretation." (But we are left without a deduction of F from the axioms.)</p>
<p>Is this reading of indirect proofs correct?</p>
<blockquote>
<p>Are there also <em>direct</em> proofs that
work "semantically", i.e. that show
directly that a sentence is true in
every interpretation? </p>
</blockquote>
<p>Like truth tables do.</p>
| Aram Kasner | 6,215 | <p>As Moron says, Cardano works. <a href="http://books.google.com/books?id=w47cyQLVQowC&pg=PA228">You merely need to be a bit more careful than usual.</a></p>
|
72,669 | <p>I encountered this site today <a href="https://code.google.com/p/google-styleguide/">https://code.google.com/p/google-styleguide/</a> regarding the programming style in some languages. What would be best programming practices in Mathematica, for small and large projects ?</p>
| Kevin Groenke | 62,350 | <p>Not a general answer, but I haven't seen such a thing in any other language. What I have found, using a 1080p monitor with the Mathematica benchmark, is: formatting every function by indenting every argument and placing the commas at the first level, e.g.</p>
<pre><code>If[ Head@list[[1]] == Symbol,
foo[]
,
If[ Head@list[[1,2]] == Symbol,
bar[]
,
foobar[]
]
]
</code></pre>
<p>improves the readability immensely.
The codebase I am currently working on is heavily over-clever, i.e. full of overly long single lines. Especially since there is no clearly visible "else", I find it very important to at least give the second comma of the "If" its own line.</p>
|
72,669 | <p>I encountered this site today <a href="https://code.google.com/p/google-styleguide/">https://code.google.com/p/google-styleguide/</a> regarding the programming style in some languages. What would be best programming practices in Mathematica, for small and large projects ?</p>
| Jules Manson | 37,721 | <h1>Separate Styles and Options from Functional Code</h1>
<p><em>Almost nothing was said about</em> <code>Styles</code> <em>and</em> <code>Options</code> <em>so I think I will do that now.</em></p>
<h2>Basis</h2>
<p>In all modern languages it is a <em>best practice</em> to separate presentation and behavior of interface elements from content. In web development presentation is controlled with cascading stylesheets (CSS) and behavior with JavaScript (JS) which were early on expressed inline or embedded throughout web pages (HTML) making them appear extremely cluttered and clumsy to work with. This is analogous to how Mathematica currently embeds styles and options throughout functional code. As web pages became more dynamic and grew much larger it became essential to separate presentation and behavior from content by intelligently placing CSS and JS in separate files and linking them to HTML files with a few simple commands at the top. We can't do that in Mathematica (we can but we shouldn't) however we can do the next best thing -- place all styles and options in cells apart from the rest of the code by defining hooks and handlers linking them. This accomplishes three main things...</p>
<ul>
<li>It makes the dependent code more readable and styles and options easier to find when debugging or adjustments are needed.</li>
<li>It helps in keeping style directives and options consistent.</li>
<li>It practices DRY resulting in writing less code and lighter files.</li>
</ul>
<p>This is probably not a great strategy for smaller projects which are about a full page or less as using hooks and handlers may cause some bloat. However it is still great practice for becoming proficient at this and for setting up smaller projects that are over time expected to grow or refactored.</p>
<h2>Implementing a Strategy</h2>
<h3>1. Don't Be Clever</h3>
<p>Before I suggest a solution I wish to stress: just because you can do something doesn't mean that you should. Do not change the default options of native functions (e.g. with <code>SetOptions</code>), especially when other developers with various backgrounds or levels of experience may need to access your code. Too much can go wrong just by forgetting you did that. There is a better way, as shall be discussed shortly. Options can also be borrowed between functions, but such code usually appears entangled (violating KISS), making it very difficult to read or debug. So I don't recommend this either, even though it practices DRY.</p>
<h3>2. Separate Presentation and Behavior from Content</h3>
<p><em>The following uses</em> <code>Grid</code> <em>as an example.</em></p>
<p>My solution is to specify all options using variables then they can be inserted inside a function in place of the actual options.</p>
<pre><code>gridops = Sequence[op1->spec1, op2->spec2, ...]
Grid[{row1, row2,...},gridops]
</code></pre>
<p>Sometimes you will have many output structures that have common option specs but with minor differences. You can do one of two things. Trail unique option specs (op3 and op4) after placement of the options variable. This should be the only exception to the rule of separating presentation from content.</p>
<pre><code>gridops = Sequence[op1->spec1, op2->spec2, ...]
Grid[{row1, row2,...},gridops,op3->spec3,op4->spec4]
</code></pre>
<p>Or you can create a function to handle unique option specs as parameters. Just one caveat. You may need to wrap the options function in an <code>Evaluate</code> when placed in your dependent function as is necessary with <code>Grid</code> and others like <code>Plot</code>.</p>
<pre><code>gridOps[spec1_,spec2_]:=Sequence[op1->spec1, op2->spec2,...common ops here]
Grid[{row1, row2,...},Evaluate[gridOps[spec1,spec2]]]
</code></pre>
<p>Similarly can be done wherever <code>Style</code> is used. For my grid headers I might wrap text in the top row of cells with <code>Style</code> and place common style specs in a variable. If you have unique styles you can create a function to handle them or trail them as was done with <code>Options</code>.</p>
<pre><code>headerstyle = Sequence[12,"Helvetica", Bold]
Style["string",headerstyle,...trailing unique styles]
</code></pre>
<h3>3. Collect All Option and Style Handlers in One Place</h3>
<p>Group all <code>Style</code> directives and <code>Option</code> spec handlers into a single cell at the top of the file, placed so that they load before any dependent code. This is analogous to how CSS and JavaScript are loaded in web pages. If there is an overabundance of handlers, use several cells, keeping related style and option definitions together. Of course there is no need to remind the user that it might be a good idea to mark those cells as <code>Initialization Cells</code>.</p>
<h3>4. Go Further With Theming</h3>
<p>Most projects demand setting only a few options or style directives that are consistently sprinkled in a lot of places. Instead of repeating the same specs define those specs in alias variables (a new feature in CSS does this) and insert aliases everywhere they occur including the aforementioned styles and options handlers made in points 2 and 3. These would then be collected in a cell higher up from any dependent code. Doing this would enable making quick and consistent changes to behavior or theme from just one place in your file. Your hierarchy might look like below. As before it might be a good idea to set cells to <code>Initialization Cell</code>.</p>
<pre><code>(* TOP CELL *)
(* theme aliases goes here *)
bgcolor=Darker[Blue];
fontcolor=White;
fontfam="Helvetica";
fontsize=12;
font=Italic;
spaces={4,4};
align=Center;
frame=All;
output=Traditional;
method=Automatic;
(* NEXT CELL *)
(* handlers from points 2 and 3 *)
(* may be injected with aliases from point 4 *)
(* ALL OTHER CELLS *)
(* all other code *)
(* injected with hooks and aliases *)
(* from points 2 - 4 *)
</code></pre>
<h2>Your Opinion Matters</h2>
<p>What are your thoughts on this? Do you already do something similar? What are your best practices?</p>
|
523,529 | <p>I'm new to inequalities in mathematical induction and don't know how to proceed further. So far I was able to do this:<br>
$V(1): 1≤1 \text{ true}$ <br>
$V(n): n!≤((n+1)/2)^n$ <br>
$V(n+1): (n+1)!≤((n+2)/2)^{(n+1)}$<br><br></p>
<p>and I've got : <br>$(((n+1)/2)^n)\cdot(n+1)≤((n+2)/2)^{(n+1)}$ <br>$((n+1)^n)n(n+1)≤((n+2)^n)((n/2)+1)$</p>
| Smylic | 100,361 | <p>If you really need induction, so be it.
The base case is $n = 0$: $0! = 1 \le 1 = \left(\frac12\right)^0$.
By the induction hypothesis the inequality holds for $n = k$. Let us prove it for $n = k+1$.
$$k! \le \left(\frac{k+1}2\right)^k,\\
k!(k+1) \le \left(\frac{k+1}2\right)^k(k+1),$$
$$(k+1)! \le \frac{(k+1)^{k+1}}{2^k}.\tag{*}$$
Now we need to show that $f(x) = \frac{x^{x}}{(x+1)^x} \le \frac12$. Ok, $f(x) = \left(\frac{x}{x+1}\right)^x = e^{x(\ln x - \ln (x+1))}$, then
$$f'(x) = e^{x(\ln x - \ln (x+1))}\left((\ln x - \ln (x+1)) + x\left(\frac1x - \frac1{x+1}\right)\right) =
e^{x(\ln x - \ln (x+1))}\left(\ln \left(1 - \frac1{x+1}\right) + \frac1{x+1}\right) \le 0$$ for any positive $x$, since $\ln y \le y - 1$ for any positive $y$. So $f(x) \le f(1) = 1/2$ for any $x \ge 1$. Then from (*) we get
$$(k+1)! \le \frac{(k+1)^{k+1}}{2^k} \le \frac{(k+1)^{k+1}}{2^k}\cdot \frac1{2f(k+1)} = \frac{(k+1)^{k+1}}{2^k}\cdot \frac{(k+2)^{k+1}}{2(k+1)^{k+1}} = \frac{(k+2)^{k+1}}{2^{k+1}} = \left(\frac{k+2}2\right)^{k+1}.$$</p>
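<p>An exact-arithmetic spot check of the inequality (my addition, not part of the proof):</p>

```python
from fractions import Fraction
from math import factorial

# n! <= ((n+1)/2)^n, checked exactly with rational arithmetic
for n in range(0, 40):
    assert factorial(n) <= Fraction(n + 1, 2) ** n

# equality holds at n = 0 and n = 1, and is strict afterwards
assert factorial(0) == Fraction(1, 2) ** 0
assert factorial(1) == Fraction(2, 2) ** 1
assert factorial(2) < Fraction(3, 2) ** 2
```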
|
980,941 | <p>How can I calculate $1+(1+2)+(1+2+3)+\cdots+(1+2+3+\cdots+n)$? I know that $1+2+\cdots+n=\dfrac{n+1}{2}\cdot n$. But what should I do next?</p>
| Timbuc | 118,527 | <p>$$\sum_{k=1}^n(1+\ldots+k)=\sum_{k=1}^n\frac{k(k+1)}2=\frac12\left(\sum_{k=1}^nk^2+\sum_{k=1}^nk\right)$$</p>
<p>and now</p>
<p>$$\sum_{k=1}^nk^2=\frac{n(n+1)(2n+1)}6$$</p>
|
2,292,511 | <p>I need some help to solve this problem:</p>
<blockquote>
<p>Evaluate A such that the exponential distribution with parameter $\alpha, P(X = x) = Ae^{−\alpha x}$, is normalized.
Here, $\alpha > 0$ and $\Omega = \mathbb{R}_{+}$.</p>
</blockquote>
<p>I've been trying to evaluate the following Integral </p>
<p>$$
\int^{\infty}_{0}Ae^{-\alpha x}dx=1 \,\, ,
$$</p>
<p>I always get the result that $A$ must be equal to $0$... Am I doing something wrong?</p>
<p>Edit: What I did:</p>
<p>$$
\int^{\infty}_{0}Ae^{-\alpha x}dx=1 \Longrightarrow -\frac{A}{\alpha}\lim_{b\to \infty} \int^{b}_{0}e^{-\alpha x}dx=1 \,\, ,
$$</p>
<p>Since $\lim_{b\to \infty}e^{-\alpha \cdot \infty}=0$, I get the not true equality</p>
<p>$$
0=1 \,\, .
$$</p>
| Jack D'Aurizio | 44,121 | <p>With Mathematica notation we have that $\int_{0}^{\pi/2}\frac{d\theta}{\sqrt{a+\sin^2\theta}}$ is an elliptic integral, namely
$$ \frac{1}{\sqrt{1+a}}\,K\left(\frac{1}{1+a}\right) $$
hence by differentiating with respect to $a$ we get that
$$ \int_{0}^{\pi/2}\frac{d\theta}{(a+\sin^2\theta)^{3/2}} = \frac{1}{a\sqrt{a+1}}\,E\left(\frac{1}{1+a}\right).$$</p>
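<p>The first identity can be checked numerically without any special-function library, using the classical AGM evaluation $K(m)=\frac{\pi}{2\,\mathrm{AGM}(1,\sqrt{1-m})}$ (parameter convention $K(m)$, as in Mathematica). This sketch is my addition:</p>

```python
import math

def agm(x, y, tol=1e-15):
    # arithmetic-geometric mean
    while abs(x - y) > tol:
        x, y = (x + y) / 2, math.sqrt(x * y)
    return x

def K(m):
    # complete elliptic integral of the first kind, parameter convention K(m)
    return math.pi / (2 * agm(1, math.sqrt(1 - m)))

def integral(a, n=200_000):
    # midpoint rule for int_0^{pi/2} dtheta / sqrt(a + sin^2 theta)
    h = (math.pi / 2) / n
    return h * sum(1 / math.sqrt(a + math.sin((i + 0.5) * h) ** 2)
                   for i in range(n))

for a in (0.5, 1.0, 3.0):
    closed_form = K(1 / (1 + a)) / math.sqrt(1 + a)
    assert abs(integral(a) - closed_form) < 1e-8
print("identity checked for a = 0.5, 1, 3")
```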
|
3,256,767 | <p>So I'm trying to understand a solution made by my teacher for a question that asks me to determine whether the following is true. I'm having trouble understanding where some values in the steps are coming from.</p>
<p>Like for the first part, I don't really get where n≥5 came from. My guess is getting 16n^2 + 25 to equal 16n^2 + n^2 by substituting n with 5. But I was wondering why 25 turned into n^2 in the first place?</p>
<p>I also have no idea where k = 5 came from.</p>
<p>For the second part of the solution, I'm also having similar struggles. Why did 16n^2 turn into 15n^2 + n^2? I'm also not sure where n≥41 and k=41 came from. I would really appreciate some clarification because I'm having trouble understanding this unit. </p>
<p><a href="https://i.stack.imgur.com/97ORS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/97ORS.png" alt="enter image description here"></a></p>
| Bernard | 202,857 | <p><strong>Hint</strong>:</p>
<p>Let the matrix of <span class="math-container">$\varphi$</span> be the matrix
<span class="math-container">$$A=\begin{pmatrix}
a_1&b_1&c_1&d_1\\
a_2&b_2&c_2&d_2 \\
a_3&b_3&c_3&d_3
\end{pmatrix}.$$</span></p>
<p>The conditions on <span class="math-container">$\varphi$</span> can be written as
<span class="math-container">$$\begin{cases}
\varphi(e_1+e_2+e_3+e_4)=\scriptsize\begin{pmatrix}
1\\ 2 \\ 1
\end{pmatrix}\\[1ex]
\varphi(e_1+e_3)=\scriptsize\begin{pmatrix}
-1\\ 1 \\ 1
\end{pmatrix}
\end{cases}\iff\begin{cases}
\varphi(e_2+e_4)=\varphi(e_2)+\varphi(e_4)=\scriptsize\begin{pmatrix}
2\\ 1 \\ 0
\end{pmatrix}\\[1ex]
\varphi(e_1+e_3)=\varphi(e_1)+\varphi(e_3)=\scriptsize\begin{pmatrix}
-1\\ 1 \\ 1
\end{pmatrix}
\end{cases}$$
So one obtains the linear systems
$$\begin{cases}
a_1+c_1=-1,\\ a_2+c_2=1,\\a_3+c_3=1
\end{cases},\qquad \begin{cases}
b_1+d_1=2,\\ b_2+d_2=1,\\b_3+d_3=0
\end{cases}$$
Can you proceed?</p>
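<p>For example (one concrete solution among infinitely many, a quick check that is my addition, not part of the hint): take $\varphi(e_1)=(-1,1,1)$, $\varphi(e_2)=(2,1,0)$, $\varphi(e_3)=\varphi(e_4)=0$.</p>

```python
# columns of A are the images of e1, e2, e3, e4
A = [[-1, 2, 0, 0],
     [ 1, 1, 0, 0],
     [ 1, 0, 0, 0]]

def apply(A, v):
    # matrix-vector product
    return [sum(a * x for a, x in zip(row, v)) for row in A]

assert apply(A, [1, 1, 1, 1]) == [1, 2, 1]   # phi(e1+e2+e3+e4)
assert apply(A, [1, 0, 1, 0]) == [-1, 1, 1]  # phi(e1+e3)
```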
|
3,342,094 | <p>I am asked to prove following proposition:</p>
<blockquote>
<p><strong>Proposition 1.</strong> If an invertible matrix <span class="math-container">$\mathbf A$</span> has a left inverse <span class="math-container">$\mathbf{B}$</span> and a right inverse <span class="math-container">$\mathbf{C}$</span>, then <span class="math-container">$\mathbf{B} = \mathbf {C}$</span></p>
</blockquote>
<p>My attempt:</p>
<p>"<span class="math-container">$\mathbf{B}$</span> is the left inverse of <span class="math-container">$\mathbf{A}$</span>" implies:
<span class="math-container">$$\mathbf{BA} = \mathbf I$$</span></p>
<p>And "<span class="math-container">$\mathbf{C}$</span> is the right inverse of <span class="math-container">$\mathbf{A}$</span>" implies:</p>
<p><span class="math-container">$$\mathbf{AC} = \mathbf I$$</span></p>
<p>Hence</p>
<p><span class="math-container">$$\tag 1\mathbf{AC} = \mathbf{BA}$$</span></p>
<p>Premultiply <span class="math-container">$(1)$</span> by <span class="math-container">$\mathbf A^{-1}$</span>:
<span class="math-container">$$ \mathbf A^{-1}\mathbf{AC} = \mathbf A^{-1}\mathbf{BA} \implies $$</span>
<span class="math-container">$$\mathbf{C} = \mathbf A^{-1} $$</span></p>
<p>Postmultiply <span class="math-container">$(1)$</span> by <span class="math-container">$\mathbf A^{-1}$</span>:
<span class="math-container">$$\mathbf{AC}\mathbf A^{-1} = \mathbf{BA}\mathbf A^{-1} \implies$$</span>
<span class="math-container">$$\mathbf A^{-1} = \mathbf{B}$$</span></p>
<p>Hence <span class="math-container">$\mathbf B = \mathbf C.$$\Box$</span></p>
<p>Is it correct?</p>
<hr>
<p>I have one more question: </p>
<p>Assume we already proved following theorem:</p>
<blockquote>
<p>The inverse of an invertible matrix is <strong>unique</strong></p>
</blockquote>
<p>Knowing this, do we really need to prove the proposition <strong>1</strong>? My reasoning is, if we know <span class="math-container">$\mathbf{B}$</span> is the left inverse of <span class="math-container">$\mathbf{A}$</span>, and we also know theorem above, then we can infer that <span class="math-container">$\mathbf{B}$</span> will also be right inverse of <span class="math-container">$\mathbf{A}$</span>. Or am I missing something?</p>
| Henry | 6,460 | <p>Assuming that <span class="math-container">$\mathbf A^{-1}$</span> is meaningful seems premature until you have proved the assertion that <span class="math-container">$\mathbf B = \mathbf C$</span></p>
<p>Instead use something like <span class="math-container">$\mathbf{B(AC)} = \mathbf{(BA)C}$</span> since matrix multiplication is associative </p>
|
2,777,631 | <p>Angle bisectors of traingle $ABC$ meet its circum-circle ( after passing through in-center) at opposite points $P, Q$, and $R$ respectively on the circumcircle. </p>
<p>Find $\angle RQP.$ </p>
<p>Is there any way of getting the answer through its in-center properties?</p>
<p>Ans = $90-\frac{B}{2}$ </p>
| Alex D | 477,539 | <ol>
<li><p>Suppose $1<0$. Since $x^{2}\ge0$ holds for all $x$, we also have $$(1)\cdot(1)=1^{2}=1\ge0,$$
hence a contradiction.$$\\ \\$$</p></li>
<li><p>Your second part is correct, for suppose $a>0$ and $a^{-1}<0$. Multiplying through by $a$ doesn't change the order, so
$$a^{-1}<0\implies a^{-1}\cdot a<0\cdot a\implies1<0,$$ which is a contradiction. Whence $a^{-1}>0$.</p></li>
</ol>
|
270,624 | <p>For a polynomial $f(x) = \sum_{i=0}^dc_ix^i \in \mathbb Z[x]$ of degree $d$, let</p>
<p>$$
H(f):=\max\limits_{i=0,1,\ldots, d}\{|c_i|\}
$$</p>
<p>denote the naive height. Further, define</p>
<p>$$
R(M, r, d) := \#\{f(x) \colon \text{$H(f) \leq M$, $\deg f = d$ and $f(x)$ has exactly $r$ real roots}\}.
$$</p>
<p>I wonder if anything is known about the quantity $R(M,r,d)$. More precisely, I am interested how it changes as $r$ and $d$ remain fixed and $M$ varies. Has this question been explored at all? I would be thankful for any references.</p>
| Liviu Nicolaescu | 20,302 | <p>This is a bit too long to be a comment. Let me phrase this probabilistically by saying that the coefficients $c_i$ are independent (discrete) random variables uniformly distributed on the finite set
$$
\big\{\, -M,-(M-1),\dotsc, -1,0, 1, \dotsc, (M-1), M\,\big\}.
$$
Denote by $N_d(f)$ the number of real zeros of the random degree $d$ polynomial $f$ whose coefficients $c_i$ are random variables as above. Your question asks for the probability distribution of the random variable $N_d(f)$. This is a very difficult question for fixed $d$. However, for <em>fixed</em> $M$ and $d\to\infty$ some nontrivial information is available.</p>
<p>A highly nontrivial result of Ibragimov and Maslova </p>
<blockquote>
<p><strong>The mean number of real zeros of random polynomials. I. Coefficients with zero mean</strong>. (Russian) Teor. Verojatnost. i Primenen. 16 1971 229–248. </p>
</blockquote>
<p>generalizing earlier work of Kac, Erdos-Offord and D.C. Stevens shows that the expectation of $N_d(f)$ satisfies the asymptotic estimate $\newcommand{\bE}{\mathbb{E}}$</p>
<p>$$
\mu_d:=\bE\big[ N_d(f)\big]\sim \frac{2}{\pi}\log d\;\;\mbox{as $d\to\infty$}.
$$</p>
<p>Recently (see <a href="https://arxiv.org/abs/1402.4628" rel="nofollow noreferrer">this paper</a>) this result was improved to
$$
\mu_d:=\bE\big[ N_d(f)\big]= \frac{2}{\pi}\log d+O(1)\;\;\mbox{as $d\to\infty$}.
$$</p>
<p>Remarkably, the variance of the random variable $N_d$ is about the same size. A result of N. B. Maslova</p>
<blockquote>
<p><strong>The variance of the number of real roots of random polynomials</strong>, (Russian) Teor. Verojatnost. i Primenen. 19 (1974), 36–51. </p>
</blockquote>
<p>shows that $\DeclareMathOperator{\var}{var}$ the variance of $N_d(f)$ satisfies the asymptotic estimate
$$
\sigma^2_d:=\var\big[ N_d(f)\big]\sim \frac{1}{\pi}\left(1-\frac{2}{\pi}\right)\log d\;\;\mbox{as $d\to\infty$}.
$$</p>
<p>This suggests that the typical polynomial in your family does not have too many real roots compared to its degree. Finally, another result of N. B. Maslova</p>
<blockquote>
<p><strong>The distribution of the number of real roots of random polynomials</strong>, (Russian. English summary) Teor. Verojatnost. i Primenen. 19 (1974),488–500. </p>
</blockquote>
<p>shows that the normalized random variable
$$
Z_d=\frac{1}{\sigma_d}\Big( N_d(f)-\mu_d\Big)
$$
converges in distribution to a standard normal random variable. You can use this result to estimate the probability that the number of zeros of $f$ lies in an interval of the form
$$
[\mu_d+a\sigma_d, \mu_d+b\sigma_d], \;\; a, b\in\mathbb{R}, \;\; a<b.
$$</p>
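<p>As a quick numerical sanity check on the asymptotics above, one can estimate $\mathbb{E}[N_d(f)]$ by Monte Carlo and compare it with $\frac{2}{\pi}\log d$. The sketch below (an illustration, not part of the cited results) draws coefficients uniformly from $\{-M,\dots,M\}$, discards draws with vanishing leading coefficient so that $\deg f = d$, and counts real roots via <code>numpy.roots</code>; the tolerance <code>tol</code> deciding when a computed root counts as real is an ad hoc numerical choice.</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def count_real_roots(coeffs, tol=1e-6):
    # numpy.roots takes coefficients with the highest degree first;
    # a computed root is treated as real when its imaginary part is tiny.
    roots = np.roots(coeffs)
    return int(np.sum(np.abs(roots.imag) < tol))

def mean_real_roots(d, M, trials=200):
    # Monte Carlo estimate of E[N_d(f)] with coefficients uniform on
    # {-M, ..., M}, conditioned on the leading coefficient being nonzero.
    counts = []
    while len(counts) < trials:
        c = rng.integers(-M, M + 1, size=d + 1)
        if c[0] == 0:
            continue
        counts.append(count_real_roots(c))
    return float(np.mean(counts))

d, M = 50, 10
print(mean_real_roots(d, M), 2 / np.pi * np.log(d))
```

<p>Since the mean grows only like $\log d$ while the variance is of the same order, fairly large $d$ and many trials are needed before the agreement with $\frac{2}{\pi}\log d$ becomes visible.</p>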
|