| qid | question | author | author_id | answer |
|---|---|---|---|---|
1,849,588 | <p>The exercise asks whether there exists a definition of $Nat(x)$ such that $Nat(x) \Rightarrow Nat(S(x))$, and $\exists x $ such that $ Nat(x)$ is false, without using the axiom of infinity. Here $Nat(x) \Leftrightarrow x$ is a natural number, and $S(x)$ is the successor of $x$. </p>
<p>I tried to define $$S(a)=\{a\}.$$ Then I define $$a \in C_a \Leftrightarrow (a\in C_a \Rightarrow S(a) \in C_a).$$
So $$\emptyset\in \bigcap_i C_i. $$
At the end I define $$Nat (a) \Leftrightarrow a\in C_\emptyset.$$
Can you check my idea or suggest another one?</p>
| edgar alonso | 329,621 | <p>No, you cannot define the set of natural numbers without infinity; but you can define the property of being a natural number:</p>
<p>Define
\begin{equation}
Or(x)\equiv\forall y\in x(y\subset x) \wedge(\forall u,v\in x)(u\in v \vee v\in u \vee v=u);
\end{equation}
"$x$ is an ordinal". Then define,
\begin{equation}
Lim(x)\equiv Or(x)\wedge\forall y\in x\exists z\in x(y\in z);
\end{equation}
"$x$ is a limit ordinal". Finally we make,
\begin{equation}
Nat(x)\equiv x=\emptyset \vee (Or(x)\wedge \neg Lim(x) \wedge \forall y\in x(\neg Lim(y)\vee y=\emptyset))
\end{equation}
To check $Nat(x)\to Nat(S(x))$ remember that the succesor of any ordinal $x$ is $S(x)=x\cup\{x\}$, not $\{x\}$.</p>
|
206,780 | <p>Let $f:X\to Y$ be a measurable function. The Banach indicatrix
$$
N(y,f) = \#\{x\in X \mid f(x) = y\}
$$
is the number of pre-images of $y$ under $f$. If there are infinitely many pre-images then $N(y,f) = \infty$. </p>
<p>Let $X\subset\mathbb R^n$, $Y\subset\mathbb R^m$ with Lebesgue measure.</p>
<p><em>I would like to know whether $N(y,f)$ is a measurable function.</em> </p>
<ul>
<li>If $X$ is an interval (say $X=[a,b]$) and $f$ is a continuous function, the answer is positive (<a href="https://math.stackexchange.com/q/68635/23566">https://math.stackexchange.com/q/68635/23566</a>).</li>
<li>In Federer's Geometric measure theory we find the following theorem:</li>
</ul>
<blockquote>
<p>Let $X$ be a separable metric space and let $f(A)$ be $\mu$-measurable for all Borel subsets $A$ of $X$.
Let $\zeta(S) = \mu(f(S))$ for $S\subset X$ and let $\psi$ be the measure on $X$ defined by the Carathéodory construction from $\zeta$. Then
$$
\psi(A) = \int\limits_{A}N(y,f)\, d\mu_{Y}
$$
for every Borel set $A\subset X$.</p>
</blockquote>
<p><em>Does it say anything about measurability of $N(y,f)$ ?</em> </p>
| Gerald Edgar | 454 | <p>A counterexample.</p>
<p>Let $F : \mathbb R \to \mathbb R$ be the Cantor singular function. $F$ is continuous. Let $C \subseteq \mathbb R$ be the middle-thirds Cantor set. $C$ has Lebesgue measure zero. $F$ maps $C$ onto $Y = [0,1]$. Let $M \subseteq [0,1]$ be a non-measurable set. For simplicity, remove from $M$ any points with two different binary expansions. Then $X := F^{-1}(M)$ is a subset of $C$, so it is Lebesgue measurable. Let $f : X \to Y$ be the restriction of $F$. $f$ is continuous, hence measurable. Next:
$$
\#\{x \in X : f(x)=y\} = \begin{cases}
0,\qquad y \not\in M
\\
1,\qquad y \in M
\end{cases}
$$
Note there are no points with $2$ or more pre-images, since those would be points with two different binary expansions, and we removed those. </p>
<p>Since $M$ is a non-measurable set, we have a counterexample.</p>
<p><strong>note</strong><br>
Compare the two bullet-points in the question.<br>
For the first: in this example $X$ is not an interval.<br>
For the second: in this example the property "$A$ Borel (in $X$) implies $f(A)$ measurable" fails.</p>
|
1,477,325 | <p>It is known that an integrable function is a.e. finite. Is an a.e. finite function integrable? What if the measure is finite?</p>
| Joe | 234,473 | <p>No. $\frac{1}{x}$ is an example.</p>
|
3,368,402 | <p>I am using set identities to prove that $(A-B)-C = (A-C)-(B-C)$.</p>
<p><span class="math-container">$\begin{array}{|l}(A−B)− C = \{ x | x \in ((x\in (A \cap \bar{B})) \cap \bar{C}\} \quad \text{Def. of Set Minus}
\\
\quad \quad \quad \quad \quad =\{ x | ((x\in A) \wedge (x\in\bar{B})) \wedge (x\in\bar{C})\} \quad \text{Def. of intersection}
\\ \quad \quad \quad \quad \quad =\{ x | (A\wedge\overline{C}\wedge\overline{B})\vee(\overline{C}\wedge\overline{B}\wedge C)\} \quad \text{Association Law}
\\
\quad \quad \quad \quad \quad =\{ x | ((x\in A) \wedge (x\in\bar{C})) \wedge ((x\in \bar{B}) \wedge (x\in\bar{C}))\} \quad \text{Idempotent Law}
\\
\quad \quad \quad \quad \quad =\{ x | (((x\in (A\cap\bar{C})) \cap (x\in (\bar{B} \cap\bar{C})))\} \quad \text{Def. of union}
\\
\quad \quad \quad \quad \quad =\{ x | (((x\in (A\cap \bar{C})) \cap \overline{(x\in (B\cup C)))} \} \quad \text{DeMorgan's Law}
\\
\quad \quad \quad \quad \quad =\{ x | x \in (A - C) - (B \cup C) \} \quad \text{Def. Set Minus}
\\
=(A-C)-(B-C)
\end{array}$</span></p>
<p>So it looks like I screwed up on the final step. Is there something that I am forgetting to do properly or where am I supposed to go from that final step? </p>
| richard1941 | 133,895 | <p>All of the above looks like hard math requiring actual thought. I did it by means of an Excel spreadsheet. Easy, as there are only 8 possibilities. The two highlighted columns are identical.</p>
<p><a href="https://i.stack.imgur.com/dvI3R.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dvI3R.png" alt="enter image description here"></a></p>
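<p>The spreadsheet check can be reproduced in a few lines of Python (a sketch, assuming the intended identity is $(A-B)-C=(A-C)-(B-C)$), enumerating the same 8 membership possibilities:</p>

```python
from itertools import product

def identity_holds():
    # compare x ∈ (A−B)−C with x ∈ (A−C)−(B−C) over all 8 truth assignments
    for a, b, c in product([False, True], repeat=3):
        lhs = (a and not b) and not c
        rhs = (a and not c) and not (b and not c)
        if lhs != rhs:
            return False
    return True

assert identity_holds()   # the two columns agree in all 8 cases
```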
|
3,492,376 | <p>Can anyone explain to me why the below expression:</p>
<p><span class="math-container">$$\int\frac{2\cos x}{{(4-4\sin^2x})^{3/2}}\:dx$$</span></p>
<p>is equal to this:</p>
<p><span class="math-container">$$\frac{2}{8}\int\frac{\cos x}{{(1-\sin^2x})^{3/2}}\:dx$$</span></p>
<p>a) Why the constant <span class="math-container">$2/8$</span> outside the integral is not <span class="math-container">$2/4$</span>?</p>
<p>b) And how do you arrive at?</p>
<p><span class="math-container">$$\int\frac{1}{{(4\cos^2x})}\:dx$$</span></p>
<p>Thank you.</p>
| A. Goodier | 466,850 | <p>Use that <span class="math-container">$4^{3/2}=8$</span> as pointed out in the comments and also
<span class="math-container">$$(1-\sin^2x)^{3/2}=(\cos^2x)^{3/2}=\cos^3x$$</span></p>
|
3,492,376 | <p>Can anyone explain to me why the below expression:</p>
<p><span class="math-container">$$\int\frac{2\cos x}{{(4-4\sin^2x})^{3/2}}\:dx$$</span></p>
<p>is equal to this:</p>
<p><span class="math-container">$$\frac{2}{8}\int\frac{\cos x}{{(1-\sin^2x})^{3/2}}\:dx$$</span></p>
<p>a) Why the constant <span class="math-container">$2/8$</span> outside the integral is not <span class="math-container">$2/4$</span>?</p>
<p>b) And how do you arrive at?</p>
<p><span class="math-container">$$\int\frac{1}{{(4\cos^2x})}\:dx$$</span></p>
<p>Thank you.</p>
| J. W. Tanner | 615,567 | <p><span class="math-container">$$\int\frac{2\cos x}{{(4-4\sin^2x})^{(3/2)}}\:dx$$</span></p>
<p><span class="math-container">$$= \int\frac{2\cos x}{{4^{3/2}(1-\sin^2x})^{(3/2)}}\:dx$$</span></p>
<p><span class="math-container">$$=\frac{2}{8}\int\frac{\cos x}{{(1-\sin^2x})^{(3/2)}}\:dx$$</span></p>
<p><span class="math-container">$$=\frac{1}{4}\int\frac{\cos x}{{(\cos^2x})^{(3/2)}}\:dx$$</span></p>
<p><span class="math-container">$$=\frac{1}{4}\int\frac{\cos x}{{(\cos^3x})}\:dx$$</span></p>
<p><span class="math-container">$$=\int\frac{1}{{4\cos^2x}}\:dx$$</span></p>
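<p>A quick numeric spot-check of the chain above (my own sketch; valid where $\cos x>0$, so that $(\cos^2 x)^{3/2}=\cos^3 x$):</p>

```python
import math

# check that 2 cos x / (4 - 4 sin^2 x)^(3/2) equals 1 / (4 cos^2 x) where cos x > 0
f = lambda t: 2 * math.cos(t) / (4 - 4 * math.sin(t)**2) ** 1.5
g = lambda t: 1 / (4 * math.cos(t)**2)
for t in (0.1, 0.5, 1.0):
    assert abs(f(t) - g(t)) < 1e-12
```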
|
65,886 | <p>It is clear that the Sylow theorems are an essential tool for the classification of finite groups.
I recently read an article by Marcel Wild, <em>The Groups of Order Sixteen Made Easy</em>, where he gives a complete classification of the groups of order $16$ that is based on
elementary facts; in particular, he does not use the Sylow theorems.</p>
<p>Did anyone encounter a complete classification of the groups of order $12$ that does not use the Sylow theorems?
What about order 24? (I'm less optimistic there, but who knows).</p>
| Community | -1 | <p>Here is a proof that any simple group $G$ of order $360$ is isomorphic to a specific subgroup of $A_{10}$ (and hence there can be only one [insert Highlander pun]).</p>
<p>Let $G$ be a simple group of order $360$, and let's ask how many Sylow 3-subgroups there can be. A quick check shows the possibilities are $1,4,10,40$. $1$ and $4$ can easily be ruled out, and $40$ is ruled out because then the Sylow 3-group is self-normalizing. But a group of order $9$ is abelian, and hence by Burnside's Transfer Theorem, $G$ can't be simple. [You can avoid BTT by showing a subgroup of order 3 has normalizer of size at least 72, and getting a contradiction that way.]</p>
<p>Thus, there are 10 Sylow 3-groups; let's pick one and call it $P$. Then the conjugation action of $G$ on these ten Sylows gives an embedding of $G$ into $A_{10}$. So let's assume for the rest of this post that $G$ actually lives inside $A_{10}$. Note that $N_G(P)$ has order $36$, and is a point stabilizer in $G$ (let's say the point stabilizer of $10$).</p>
<p>Now if $P$ was cyclic, then elements of $N_G(P)$ would basically be elements of $A_9$ normalizing a 9-cycle. 9-cycles in $A_9$ are self-centralizing however (count conjugates), and thus $N_G(P)/P$ would embed in $\operatorname{Aut}(P)$; this is a contradiction because the former group has order $4$ and the latter order $6$.</p>
<p>So $P$ is non-cyclic of order $9$, generated by two elements $a$ and $b$ of order $3$. Each of these is a product of 3 3-cycles in $\{1,2,\ldots,9\}$. We can assume that
$$ a = (1,2,3)(4,5,6)(7,8,9); $$
$$ b = (1,4,7)(2,5,8)(3,6,9). $$</p>
<p>This is because we can renumber the points so $a$ has the required form, and take an appropriate element of the form $a^ib^j$ to give us the form for $b$.</p>
<p>Now consider the point-stabilizer of $1$ in $N_G(P)$. Since $P$ acts transitively on $\{1,2,\ldots,9\}$, the orbit-stabilizer theorem shows this point stabilizer has order $4$. It is thus a Sylow 2-subgroup $Q$ of $N_G(P)$, and $N_G(P)=PQ$. Again, the centralizer of $P$ in $A_9$ is a 3-group, and hence $Q\cong N_G(P)/P$ embeds in $\operatorname{Aut}(P)$; this implies $Q$ is cyclic of order $4$. Let $Q$ be generated by a permutation $c$ of order $4$. We know $c$ fixes both $10$ and $1$, so for $c$ to be an even permutation it must be the product of two 4-cycles. </p>
<p>Note also that $c$ is almost completely determined by where it sends $2$; this is because every element of $P$ is determined by where it sends $1$ [if $x,y\in P$ both sent $1$ to the same point, then $xy^{-1}$ would fix $1$, hence be in $Q$, so we would have $xy^{-1}=1$.]. So for example, if $c$ sends $2$ to $3$, then it must send $a$ to $a^2$, and there are only two ways to do this (basically it sends $4$ to either $6$ or $9$). One can easily check that permutations sending $2$ to one of $\{2,3,5,6,8,9\}$ don't have order $4$, and thus $c$ sends $2$ to either $4$ or $7$. One will simply give the inverse of the other, and so we can assume that
$$ c=(2,4,3,7)(5,6,9,8). $$</p>
<p>It's important to note that no non-trivial power of $c$ fixes any point of $\{2,3,4,5,6,7,8,9\}$; that is, no element of $G$ fixes more than $2$ points.</p>
<p>Now let $S$ be a Sylow 2-subgroup of $G$ containing $Q$; note that this means there's an element $d$ such that $S=\langle c,d\rangle$. It is an easy exercise to show the Sylow 2-subgroup of a simple group can't be cyclic, so that $d$ has order either $4$ or $2$. Also since $d\notin N_G(P)$, it cannot fix the point $10$. Now suppose that $d$ sends the point $1$ to the point $p\notin\{1,10\}$; then $c^d$ would not fix $1$, and yet $c^d\in Q$. Similarly, if $d$ sent $10$ to a point $q\notin\{1,10\}$, $c^d$ would not fix $10$. Thus $d$ must permute $1$ and $10$ amongst each other, and since it can't fix $10$, it contains the 2-cycle $(1,10)$ [it's a 2-cycle because $d^2\in Q$].</p>
<p>But if $d$ had order $4$, then - ignoring that $(1,10)$ cycle - it would be an odd permutation on $8$ points fixing $c$. Since it can fix at most $2$ points, it would be a 4-cycle $m$ multiplied by a 2-cycle $n$. Now if it sent $2$ to one of $\{5,6,8,9\}$, it could fix no points at all; thus $m$ must normalize one of $(2,4,3,7)$ and $(5,6,9,8)$. But 4-cycles are only normalized by their own powers (at least restricting to other 4-cycles on the same 4 points), and thus $m$ centralizes its 4-cycle. However, $n$, a 2-cycle, must then centralize its 4-cycle, which is impossible. Thus $d$ can't have order $4$.</p>
<p>So $d$ must be order $2$, and in fact, every element of $S-Q$ has order $2$ [so $S$ is dihedral]. The same analysis above shows - ignoring once again the $(1,10)$ cycle - $d$ is the product of 3 2-cycles. Thus it is the product of $m$ and $n$, except this time $m$ looks like $(\cdot,\cdot)(\cdot,\cdot)$ and $n$ is a 2-cycle. Again, $m$ must invert one of the two 4-cycles making up $c$, and $n$ inverts the other. So the 8 possibilities for $d$ [ignoring $(1,10)$] are a product of one of $(2,3)$,$(4,7)$, $(2,4)(3,7)$, and $(2,7)(3,4)$, together with one of $(5,9)$, $(6,8)$, $(5,6)(9,8)$, and $(5,8)(6,9)$. [There are not 16 possibilities, because for example (2,3)(5,9) is not of the required $mn$ form.]</p>
<p>Now if $d=(1,10)(2,3)(5,6)(8,9)$, then it's routine to check that $cd$, $c^2d$, and $c^3d$ give three other acceptable products from the above 8. If we set $\hat{d}=(1,10)(4,7)(5,8)(6,9)$, then we can check that the other four are given by $\hat{d}$, $c\hat{d}$, $c^2\hat{d}$, and $c^3\hat{d}$. However, a direct computation shows $ab\hat{d}$ has order $21$. Thus, up to factors of $c$ (which we can safely ignore), we have
$$ d = (1,10)(2,3)(5,6)(8,9).$$</p>
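<p>The order-$21$ computation for $ab\hat{d}$ can be verified mechanically (a Python sketch of my own, composing permutations left to right, i.e. applying $a$, then $b$, then $\hat{d}$):</p>

```python
from math import lcm

def perm(cycles, n=10):
    # build a permutation of {1..n} as a dict from its disjoint cycles
    m = {i: i for i in range(1, n + 1)}
    for cyc in cycles:
        for i, p in enumerate(cyc):
            m[p] = cyc[(i + 1) % len(cyc)]
    return m

a = perm([(1, 2, 3), (4, 5, 6), (7, 8, 9)])
b = perm([(1, 4, 7), (2, 5, 8), (3, 6, 9)])
d_hat = perm([(1, 10), (4, 7), (5, 8), (6, 9)])

# left-to-right product: x -> d_hat(b(a(x)))
prod = {x: d_hat[b[a[x]]] for x in a}

def order(m):
    # order of a permutation = lcm of its cycle lengths
    total, seen = 1, set()
    for start in m:
        if start in seen:
            continue
        length, x = 0, start
        while x not in seen:
            seen.add(x)
            x = m[x]
            length += 1
        total = lcm(total, length)
    return total

assert order(prod) == 21   # a 7-cycle times a 3-cycle
```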
<p>Now we are done: the subgroup $\langle a,b,c,d\rangle\le G$ has order at least $72$; but $G$ is simple, and so we must have $\langle a,b,c,d\rangle=G$.</p>
<p><strong>EDIT</strong> - Here is the argument to avoid Burnside's Transfer Theorem:</p>
<p>Assume $G$ has $40$ Sylow 3-groups. Since $40\not\equiv 1\pmod{9}$, there are two Sylow 3-groups $A$ and $B$ such that $D=A\cap B$ is non-trivial (and hence of order 3). Now the normalizer $N_G(D)$ has more than one Sylow 3-group, and thus has order at least $36$. If $|N_G(D)|>36$, we would have $|N_G(D)|\ge72$, and that gives a subgroup of index $5$ in $G$, which implies (via the right coset action) that $G$ embeds in $A_5$, contradiction. Thus we can assume $N_G(D)$ has order 36, and since it does not have a normal Sylow 3-group (remember they were self-normalizing), it must have a normal Sylow 2-group (for this implication see the proof <a href="http://www.artofproblemsolving.com/Forum/viewtopic.php?f=61&t=371698">here</a>). Thus this subgroup $T$ of order $4$ is normalized by a Sylow 3-group, and since "normalizers grow" in p-groups, its normalizer also has order divisible by $8$. That is, $|N_G(T)|\ge72$, and once again we have a contradiction. Thus there cannot be $40$ Sylow 3-groups.</p>
|
3,299,296 | <p>The question is </p>
<blockquote>
<p>When <span class="math-container">$~2x^3 + x^2 - 2kx + f~$</span> is divided by <span class="math-container">$~x - 1~$</span>, the remainder is
<span class="math-container">$~-4~$</span>, and when it is divided by <span class="math-container">$~x+2~$</span>, the remainder is <span class="math-container">$~11~$</span>. Determine the values of <span class="math-container">$~k~$</span>
and <span class="math-container">$~f~$</span>.</p>
</blockquote>
<p>I know how to solve for <span class="math-container">$~k~$</span>, you would just sub in the root for <span class="math-container">$~x~$</span> and set the equation equal to the remainder but because I also have to solve for <span class="math-container">$~f~$</span> this throws me off and I am confused as of what to do.</p>
| evaristegd | 447,617 | <p>After you replace the value of <span class="math-container">$x$</span> with <span class="math-container">$1$</span> and then with <span class="math-container">$-2$</span>, you will get two equations. The first equation is equal to <span class="math-container">$-4$</span> , by Bézout's theorem. Analogously, the second equation is equal to <span class="math-container">$11$</span>.</p>
<p>That's a linear system of two equations with two unknowns.
<span class="math-container">$$-2k+f=-7 $$</span>
<span class="math-container">$$ 4k+f=23 $$</span></p>
<p>Multiply the first equation by 2, and then add that result to the second equation. You can get the value of <span class="math-container">$f$</span> that way. After that, obtaining the value of <span class="math-container">$k$</span> should be easier.</p>
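<p>The elimination can be sketched in a couple of lines (a Python sketch, with the resulting values checked against the original polynomial):</p>

```python
# -2k + f = -7   (remainder -4 at x = 1)
#  4k + f = 23   (remainder 11 at x = -2)
k = (23 - (-7)) / 6          # subtracting the first equation from the second: 6k = 30
f = -7 + 2 * k               # back-substitution into the first equation
assert (k, f) == (5.0, 3.0)

# verify against the remainder conditions
p = lambda x: 2 * x**3 + x**2 - 2 * k * x + f
assert p(1) == -4 and p(-2) == 11
```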
|
898,495 | <p>A standard pack of 52 cards with 4 suits (each having 13 denominations) is well shuffled and dealt out to 4 players (N, S, E and W).</p>
<p>They each receive 13 cards.</p>
<p>Suppose N and S have exactly 10 cards of a specified suit between them. </p>
<p>What is the probability that the 3 remaining cards of the suit are in one player's hand (either E or W)? Can you please just help me understand how to solve this conditional probability question?</p>
| drhab | 75,923 | <p>It can be thought of as drawing $3$ distinct numbers out of the set $\{1,\dots,26\}$ and to find the probability that the numbers all three belong to set $\{1,\dots,13\}$ or to set $\{14,\dots,26\}$. </p>
<p>This results in $2\times\frac{13}{26}\times\frac{12}{25}\times\frac{11}{24}=\frac{11}{50}$</p>
<p>Actually you can start by 'reserving' number $1$ for the first number and then find the probability that the others will belong to set $\{2,\dots,13\}$. </p>
<p>This results in $\frac{12}{25}\times\frac{11}{24}=\frac{11}{50}$</p>
<hr>
<p>Background thinking:</p>
<p>Think of placing the $26$ cards (that contain $3$ cards of the specified suit) randomly in a row. The first $13$ cards in this row are for East and the last are for West. There is no objection at all against <em>starting</em> this ordering with the $3$ specials.</p>
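<p>Both the product formula and a brute-force count over the $\binom{26}{3}=2600$ placements confirm $\frac{11}{50}$ (a Python sketch):</p>

```python
from fractions import Fraction as F
from itertools import combinations

# product formula: first suit card anywhere, the next two forced into the same hand
p = 2 * F(13, 26) * F(12, 25) * F(11, 24)
assert p == F(11, 50)

# brute force: place the 3 suit cards among the 26 E/W slots; count one-hand placements
hands = [set(range(13)), set(range(13, 26))]
favorable = sum(1 for c in combinations(range(26), 3)
                if any(set(c) <= h for h in hands))
assert F(favorable, 2600) == F(11, 50)   # 2600 = C(26, 3)
```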
|
1,868,440 | <p>In a game, there are <code>N</code> numbers and <code>2</code> players (<code>A</code> and <code>B</code>). If <code>A</code> and <code>B</code> alternately pick a number and replace it with one of its divisors other than itself, how would I conclude who makes the last move? (Notice that once the list has been replaced by all 1's, no more moves can be made.) Any help would be appreciated, thank you :)</p>
| Francesco Alem. | 175,276 | <p>The game concludes when a player faces a list of 1's with a single prime number somewhere: that player is forced to replace the prime with 1, ending the game (since a prime number is divisible only by 1 and itself).</p>
<p>EDIT.</p>
<p>unless the game requires some criterion that the player must respect for choosing numbers it's impossible to determine the end game at the very start.</p>
<pre><code>imagine the list 2,3,25
let's suppose player 1 begins.
first scenario:
p1 takes 2 list reduces to: 1,3,25
p2 takes 3 list reduces to: 1,1,25
p1 takes 5 list reduces to: 1,1,5
p2 takes 5 list reduces to: 1,1,1 ENDGAME with player 1
second scenario:
p1 takes 2 list reduces to: 1,3,25
p2 takes 3 list reduces to: 1,1,25
p1 takes 25 list reduces to: 1,1,1 ENDGAME with player 2
</code></pre>
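<p>The two scenarios above can be confirmed by exhaustively enumerating every play sequence from <code>2,3,25</code> (a Python sketch of my own): move counts of both parities occur, so either player may end up moving last.</p>

```python
def divisors(n):
    # proper divisors: every divisor of n other than n itself
    return [d for d in range(1, n) if n % d == 0]

def game_lengths(state):
    # all possible total move counts from this position down to all 1's
    successors = [state[:i] + (d,) + state[i+1:]
                  for i, v in enumerate(state) if v > 1
                  for d in divisors(v)]
    if not successors:
        return {0}
    return {1 + m for s in successors for m in game_lengths(s)}

lengths = game_lengths((2, 3, 25))
assert lengths == {3, 4}   # 3 moves: player 1 moves last; 4 moves: player 2 does
```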
<p>RE-EDIT: I have just seen in the comments that the author of the post was assuming a best-possible-move criterion, so refer to the other answers given.</p>
|
1,218,238 | <p>Describe explicitly a subgroup $H$ of order 8 of the permutation group $S_5$.</p>
<p>How could I find such a subgroup? I don't know how to start. Should I start with some transposition $(i,j)$ and use it to generate a subgroup?</p>
| Pedro | 37,702 | <p>The statement is true for $n \geq 1$, otherwise the statement is false.</p>
<p>We are in a subspace when adding any two of its vectors keeps us in the subspace, and when multiplying a vector by any scalar (including $0$) keeps us in the subspace. Because we can always multiply by $0$, the null vector always needs to be a member of our subspace.</p>
<p>Any other set is invalid. Imagine you have a set consisting of all vectors on the real line except the vector $[2]$; then this is invalid because the addition of vector $[1]$ with vector $[1]$ gives vector $[2]$, which needs to be in your subspace but is not, and thus the set of all vectors on the real line minus the vector $[2]$ is not a subspace.</p>
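<p>That closure failure is easy to state concretely (a toy Python sketch, sampling integer points only):</p>

```python
# "all reals except [2]" fails closure under addition: [1] + [1] = [2] is excluded
pts = {x for x in range(-5, 6) if x != 2}
assert 1 in pts and (1 + 1) not in pts
```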
<hr>
<p>For $n = 1$ we have only $2$ subspaces. We have the space containing only the zero vector, and we have the space containing the full $1$-dimensional line.</p>
<p>To see why the first one is true, note that the zero vector plus the zero vector gives back the zero vector (which is in your subspace), and the zero vector times any scalar always gives the zero vector (which is in your subspace). </p>
<p>For the second one, every vector lying on your real line added to another vector lying on your $1$-dimensional line just gives a vector back on that line. Multiplying any vector on that line by any scalar just gives a vector back on that $1$-dimensional line you had. </p>
<hr>
<p>For $n = 2$ you have an infinite amount of subspaces. The set of all vectors lying on any line through the origin forms a subspace, as does the full 2D space. </p>
<p>To see why this is true, let us first consider the set of vectors that lie on a line. If you add two vectors that lie on such a line, they still lie on that line. If you multiply a vector lying on that line by any scalar multiple, the resulting vector still lies on that line. When this line where you take all vectors from now goes through the origin, you have a subspace. Why through the origin? Well, one of the rules we had with subspaces was that you need to be able to multiply a vector by any scalar number. So you need to be able to multiply any 2D vector on that line by $0$. Because you need to be able to multiply any 2D vector by $0$, the result of that multiplication, namely the 2D zero vector, needs to be in your subspace.</p>
<p>For the second one, it is obvious that if you add two vectors in the full 2D space, you still get a vector in the 2D space and that if you multiply a vector in the 2D space by any scalar, you still are in the 2D space.</p>
<hr>
<p>For $n = 3$ we have also infinite amount of subspaces. All 3D vectors lying on a single line through the origin form a subspace, all 3D vectors lying on a plane through the origin form a subspace, as well as the full $3$ dimensional space.</p>
<hr>
<p>For $n = 4$ and higher it becomes harder to visualize, but you can still see that we have an infinite amount of subspaces. Even having all lines going through the origin in your $n$-dimensional space is enough to have an infinite amount of subspaces.</p>
|
1,209,934 | <p>So I am given two points $A=(-.5,2.3,-7.3)$ and $B=(-2,17.1,-0.3)$ and then using $AB = OB - OA$ to give me $(-1.5,14.8,7)$. The plane is $$x+28y+13z=500$$ From there I got $r.n$ where $r=(-1.5,14.8,7)$ and $n=(1,28,13)$. From here I do not know how to check if the vector is perpendicular to the plane.</p>
| Harish Chandra Rajpoot | 210,295 | <p>In general, two vectors <span class="math-container">$(a_1, a_2, a_3)$</span> and <span class="math-container">$(b_1, b_2, b_3)$</span> are said to be parallel if
<span class="math-container">$$\frac{a_1}{b_1}=\frac{a_2}{b_2}=\frac{a_3}{b_3}$$</span></p>
<p>Now, the vector <span class="math-container">$\vec{AB}=(-1.5,14.8,7)$</span> is perpendicular to the plane <span class="math-container">$x+28y+13z=500$</span> iff the vector <span class="math-container">$r=(-1.5,14.8,7)$</span> is parallel to the normal vector <span class="math-container">$n=(1,28,13)$</span> of the plane. But
<span class="math-container">$$\frac{-1.5}{1}\ne\frac{14.8}{28}\ne \frac{7}{13},$$</span>
and this inequality shows that the vector <span class="math-container">$r=(-1.5,14.8,7)$</span> is not parallel to the normal vector <span class="math-container">$n=(1,28,13)$</span>, i.e. the vector <span class="math-container">$r$</span> is not perpendicular to the given plane.</p>
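<p>An equivalent check that sidesteps the ratio test (and its division-by-zero edge cases): two vectors are parallel iff their cross product vanishes (a Python sketch):</p>

```python
# r is parallel to n iff r x n = 0
r = (-1.5, 14.8, 7.0)
n = (1.0, 28.0, 13.0)
cross = (r[1] * n[2] - r[2] * n[1],
         r[2] * n[0] - r[0] * n[2],
         r[0] * n[1] - r[1] * n[0])
assert cross != (0.0, 0.0, 0.0)   # not parallel, hence not perpendicular to the plane
```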
|
2,725,455 | <p>Probably this is pretty simple (or even trivial), but I'm stuck.</p>
<p>If $H\leq G$ is a subgroup, does it follow that $hH=Hh$, if $h\in H$ ? I can't prove or find a counter-example. If anyone could help me, I'd be grateful!</p>
| Joaquin Liniado | 237,607 | <p>If $H\leq G$ is a subgroup, then for any $h,h' \in H$ you have that $hh'\in H$. </p>
<p>Therefore $hH=H=Hh$. To see this, let's see the double inclusion:</p>
<p>An element of $hH$ is of the form $hg$ with $g\in H$. Since $H$ is a subgroup, then $hg \in H$. </p>
<p>The other way around, take $h' \in H$; you want to see that $h' \in hH$. Write $h'=h(h^{-1} h') \in hH$.</p>
|
1,196,317 | <p><a href="https://math.stackexchange.com/questions/1196261/let-g-be-a-group-where-ab3-a3b3-and-ab5-a5b5-prove-that-g-is/1196295#1196295">Let $G$ be a group, where $(ab)^3=a^3b^3$ and $(ab)^5=a^5b^5$. How to prove that $G$ is an abelian group?</a></p>
<p>P.S. Why can we not just cancel $ab$ out of the middle of these expressions? Why can we only cancel "on the left" and "on the right"? Could somebody explain that to me (if possible, referring to definitions/theorems)?</p>
<p>We can multiply the expressions by $a^{-1}$ and $b^{-1}$, since we have a group and the existence of inverses is one of the properties of a group. Am I right?</p>
| grand_chat | 215,011 | <p>The middle cancellation property does not hold for every group. See <a href="https://math.stackexchange.com/a/1196399/215011">this example</a>.</p>
<p>In fact a group possesses the middle cancellation property if and only if it is abelian:</p>
<p>$(\Rightarrow)$ If the middle cancellation property holds, then for any $a$, $b$ we have
$aa^{-1}b=b=ba^{-1}a$, so $ab=ba$.</p>
<p>$(\Leftarrow)$ If the group is abelian and $axb=cxd$ then $$ab=x^{-1}xab=x^{-1}axb=x^{-1}cxd=x^{-1}xcd=cd . $$</p>
|
39,597 | <p>There was a recent question on intuitions about sheaf cohomology, and I answered in part by suggesting the "genetic" approach (how did cohomology in general arise?). For historical material specific to sheaf cohomology, what Houzel writes in the Kashiwara-Schapira book <em>Sheaves on Manifolds</em> for sheaf theory 1945-1958 should be adequate.</p>
<p>The question really is about the earlier period 1935-1938. According to nLab, cohomology with local coefficients was proposed by Reidemeister in 1938 (<a href="http://ncatlab.org/nlab/show/history+of+cohomology+with+local+coefficients">http://ncatlab.org/nlab/show/history+of+cohomology+with+local+coefficients</a>). The other bookend comes from Massey's article in <em>History of Topology</em> edited by Ioan James, suggesting that from 1895 and the inception of homology, it took four decades for "dual homology groups" to get onto the serious agenda of topologists. It happens that 1935 was also the date of a big international topology conference in Stalin's Moscow, organised by Alexandrov. This might be taken as the moment at which cohomology was "up in the air".</p>
<p>Now de Rham's theorem is definitely somewhat earlier. Duality on manifolds is quite a bit earlier in a homology formulation. </p>
<p>It is apparently the case that <em>At the Moscow conference of 1935 both Kolmogorov and Alexander announced the definition of cohomology, which they had discovered independently of one another.</em> This is from <a href="http://www.math.purdue.edu/~gottlieb/Bibliography/53.pdf">http://www.math.purdue.edu/~gottlieb/Bibliography/53.pdf</a> at p. 11, which then mentions the roles of Čech and Whitney in the next couple of years. This is fine as a narrative, as far as it goes. I have a few questions, though:</p>
<p>1) Is the axiomatic idea of cocycle as late as Eilenberg in the early 1940s?</p>
<p>2) What was the role of obstruction theory, which produces explicit cocycles?</p>
<p>Further, Weil has his own story. Present at the Moscow conference and in the USSR for a month or so after, his interest in cohomology was directed towards the integration of de Rham's approach into the theory. He comments in the notes to his works that he pretty much rebuffed Eilenberg's ideas. Bourbaki was going to write on "combinatorial topology" but the idea stalled (I suppose this is related). So I'd also like to understand better the following:</p>
<p>3) Should we be accepting the topologists' history of cohomology, if it means restricting attention to the "algebraic" theory, or should there be more differential topology as well as sheaf theory in the picture?</p>
<p>As said, restriction to a short period looks like a good idea to get some better grip on this chunk of history.</p>
| roy smith | 9,449 | <p>As explained to us by Alan Mayer, sheaf cohomology is a generalization of Čech cohomology. I found this very helpful.</p>
<p>As to the question of how ordinary cohomology arose, Hermann Weyl implies in the revised version of his book <em>The Concept of a Riemann Surface</em> that it is a generalization of the Weierstrass, Hensel, and Landsberg approach to Riemann surfaces, focusing first on the behavior of integrals, and passing from that to deductions about the paths of integration.</p>
<p>Bott also used to say that a cocycle was "something that hovers over a space and when it sees a cycle, pounces on it and spits out a number". Such a thing, he then observed, is provided by an integral, and he went on to introduce de Rham cohomology as the most natural type. So he too seemed to suggest that the fundamental example giving rise to cohomology was classical integration over cycles.</p>
|
646,032 | <p>I'm wondering where the notation for the quotient of a ring by an ideal comes from. I.e., why do we write $R/I$ to denote a ring structure on the set $\{r+I: r\in R\}$; wouldn't $R+I$ be more natural?</p>
| Sempliner | 122,727 | <p>We do so because in general what we are doing is arranging the object $R$ into equivalence classes (in such a way that the set of equivalence classes has a structure analogous to that of $R$), in a manner very similar to what happens when one takes one integer modulo another (in fact this can be reconceptualized as the quotient of the ring $\mathbb Z$ by one of its ideals $n\mathbb Z$). This is a very general operation done in many objects in mathematics, and it is almost always referred to as a quotient. Further $R + I$ in most contexts refers to something like the set of all $r + i$, where $r \in R, i \in I$, whereas $R/I$ refers to the set of equivalence classes $\hat{r}$, where $r, s \in \hat{r}$ if $r - s \in I$. This can be understood as simply $r + I$ but it is not the same as the set of ALL $r + I$ as above, because (with notation as before) $r + I$ and $s + I$ can be the same thing. There is more structure there than just taking sums of things.</p>
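<p>The modular-arithmetic prototype makes this concrete (a small Python sketch for $\mathbb Z/5\mathbb Z$):</p>

```python
# Z / 5Z: the cosets r + I and s + I coincide exactly when r - s lies in I = 5Z
n = 5
same_coset = lambda r, s: (r - s) % n == 0
assert same_coset(2, 7) and not same_coset(2, 3)

# the quotient has exactly n distinct equivalence classes
assert len({r % n for r in range(-20, 20)}) == n
```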
|
2,569,267 | <p><a href="https://gowers.wordpress.com/2011/10/16/permutations/" rel="nofollow noreferrer">This</a> article claims:</p>
<blockquote>
<p>we simply replace the number 1 by 2, the number 2 by 4, and the number 4 by 1</p>
<p>....I start with the numbers arranged as follows: 1 2 3 4 5 6. After doing the permutation (124) the numbers are arranged as 2 4 3 1 5 6.</p>
</blockquote>
<p>I always thought <span class="math-container">$(124)$</span> was read left to right as "1 goes to 2, 2 goes to 4, and 4 goes to 1" and therefore the outcome should be 4, 1, 3, 2, 5, 6.</p>
<p>According to my understanding, the article did the permutation reading from right to left. Is the blog following a convention of reading right to left, or do I just have it wrong?</p>
| hmakholm left over Monica | 14,366 | <p>Perhaps a better example would be:</p>
<p>If you apply the permutation $(1\,2\,4)$ to (each element of) the sequence "6, 5, 3, 1, 2, 4" you get "6, 5, 3, 2, 4, 1".</p>
<p>The 6, 5, and 3 are unchanged by the permutation, 1 becomes 2, 2 becomes 4, and 4 becomes 1.</p>
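<p>This "replace each value" reading of the cycle is a one-liner to experiment with (a Python sketch; points not in the cycle are fixed):</p>

```python
# the cycle (1 2 4) read left to right: 1 -> 2, 2 -> 4, 4 -> 1
cycle = {1: 2, 2: 4, 4: 1}

def apply(perm, seq):
    return [perm.get(x, x) for x in seq]

assert apply(cycle, [6, 5, 3, 1, 2, 4]) == [6, 5, 3, 2, 4, 1]
assert apply(cycle, [1, 2, 3, 4, 5, 6]) == [2, 4, 3, 1, 5, 6]   # the blog's example
```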
|
499,652 | <p>I saw this a lot in physics textbooks, but today I am curious about it and want to know if anyone can show me a formal mathematical proof of this statement. Thanks!</p>
| Glen O | 67,842 | <p>The formal proof would involve demonstrating that, for any value $\epsilon<1$, one can find a value $\omega>0$ such that</p>
<p>$$
\forall \alpha\in(-\omega,\omega), |\tan(\alpha)-\alpha|<\epsilon |\alpha|
$$</p>
<p>That is, for all values of $\alpha$ between $-\omega$ and $\omega$, the difference between $\tan(\alpha)$ and $\alpha$ is smaller than $\epsilon |\alpha|$. The formal logical expression actually takes the form</p>
<p>$$
\forall \epsilon<1, \exists \omega>0 \text{ s.t. }\forall \alpha\in(-\omega,\omega), |\tan(\alpha)-\alpha|<\epsilon |\alpha|
$$</p>
<p>It isn't trivial to prove it formally, requiring some proper analysis. The informal proofs provided by Eric Auld and ftfish should be sufficient to demonstrate it in a less rigorous manner.</p>
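<p>As a purely numerical illustration of the statement (not a proof), one can pick an $\epsilon$ and check a candidate $\omega$; since $\tan\alpha - \alpha \sim \alpha^3/3$ near zero, something like $\omega \approx \sqrt{3\epsilon}$ works:</p>

```python
# Numerical sanity check (illustrative, not a proof): for epsilon = 0.01 the
# choice omega = 0.1 works, since |tan(a) - a| ~ |a|^3 / 3 near 0.
import math

def check(epsilon, omega, samples=1000):
    for i in range(1, samples + 1):
        a = omega * i / (samples + 1)        # sample (0, omega); tan is odd
        if abs(math.tan(a) - a) >= epsilon * abs(a):
            return False
    return True

assert check(epsilon=0.01, omega=0.1)        # omega small enough
assert not check(epsilon=0.0001, omega=1.0)  # omega too large for this epsilon
```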
|
298,913 | <p>Suppose you have $n$ triangles whose corners are random points on a sphere $S$
in $\mathbb{R}^3$.
Viewing the triangles as built from rigid bars as edges,
two triangles are <em>linked</em> if they cannot be separated without two
edges passing through one another.
A triangle that is not topologically linked with any other is <em>loose</em>.
In the example below of $n=15$ triangles, $11$ are linked to at least
one other triangle, and $4$ are loose.
<hr />
<a href="https://i.stack.imgur.com/CusHE.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/CusHE.jpg" alt="Tri15_4"></a>
<br />
<sup>
$n=15$. Magenta triangles $\{1,5,10,11\}$ are loose.
</sup>
<hr />
It is easy to surmise that the proportion of linked triangles
approaches $1$ as $n \to \infty$:
<hr />
<a href="https://i.stack.imgur.com/SQpEm.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/SQpEm.jpg" alt="TangledGraph"></a>
<br />
<sup>
Fraction of triangles linked to at least one other triangle.
</sup>
<hr />
But I wonder about the largest <em>linked component</em>,
the collection of all triangles
linked into one "giant" component, in the sense that if you picked up one
triangle all the others would follow.
I wonder that
when $n \to \infty$, what is the probability that this linked
component includes <em>all</em> the triangles.</p>
<blockquote>
<p><strong><em>Q</em></strong>. As $n \to \infty$, what is the probability that every triangle
is linked into one giant component?</p>
</blockquote>
<p>My sense is that this probability is zero:
Even though the probability that each triangle is linked to another
approaches $1$, the probability that all triangles are linked
to one another approaches $0$.
This would contrast with an earlier related question,
<a href="https://mathoverflow.net/q/128940/6094">Random rings linked into one component?</a>,
whose answer was the opposite: The rings form one component
as $n \to \infty$.</p>
| Gerhard Paseman | 3,402 | <p>Here is an idea which suggests why the probability is near zero as the number of triangles gets large.</p>
<p>Among the many random triangles, there is one which has the smallest area as measured when embedded in the plane. Let us call this triangle T, and consider how likely that another triangle links with it.</p>
<p>Pick a random point p on the sphere. From the vantage point of p, look at T. For p belonging to much of the sphere, T looks small and tilted, so that a chord from p passing through the interior of (the plane embedded version of ) T will strike the sphere on the other side in a small area. (Bisecting the sphere by the plane of T, such a chord has to have an endpoint in either piece. If the triangle is contained in a small cap, the strike area for much of the sphere is within this small cap.)</p>
<p>However, not only does this point opposite p (call it q) reside in this small cap, it has to "see" a point away from the triangle: if it just connects to another point by a chord passing through the triangle interior again, there is no link. In particular, the chord from q that does not head to p has to live in a minimal cap that contains q and one of the small edges of T. (I am ignoring the case that q is near the boundary of the small cap.)</p>
<p>To measure how unlikely this is, consider the following. For a given T and many points q in "the small cap" of T, compute the area of the three small regions given by that part of a new cap determined by a plane containing q and an edge of T and the region of interest being on that part of the cap "on the other side of the edge from q". If T has small area (and q lies "sufficiently inside" T), the area of these three regions will be even smaller than T. You now need the probability that a point q lands inside of (the small cap part of ) T, with a point r that lands in the special region that would give a link, which intuitively is of the same order as the area of T, multiplied by the probability that p permits a linking triangle with T (which is much larger than the previous probabilities).</p>
<p>This might result in a small but significantly far from zero probability of occurring. But now, take the number of small triangles to be large, and it becomes easy to believe that one of these small triangles is not linked to any other.</p>
<p>Gerhard "See The Lack Of Link?" Paseman, 2018.04.27.</p>
|
2,943,973 | <p>I'm trying to prove there is some <span class="math-container">$N$</span> such that for all <span class="math-container">$n > N$</span>, it is the case that <span class="math-container">$$2n^{3/4} + 2(n-\sqrt{n})^{3/2} + n - 2n^{3/2} \leq 0$$</span></p>
<p>I know that this is true, since I graphed this function on Wolfram. It has one real root, and beyond that root the function is negative. How would I prove this analytically, if possible? It seems like the expression is too messy to work with, but perhaps there is a simplification or argument I am missing that makes the problem easy.</p>
| mathlove | 78,967 | <p>Let <span class="math-container">$m:=n^{1/4}$</span>.</p>
<p>The inequality is equivalent to
<span class="math-container">$$2m^3(m^2-1)^{3/2} \leq 2m^6-m^4-2m^3$$</span></p>
<p>Dividing the both sides by <span class="math-container">$m^3$</span> gives
<span class="math-container">$$2(m^2-1)^{3/2} \leq m(2m^2-1)-2$$</span></p>
<p>If <span class="math-container">$m\ge 2$</span>, then the both sides are positive. Then, squaring the both sides gives
<span class="math-container">$$4m^6-12m^4+12m^2-4\le 4m^6+m^2+4-4m^4-8m^3+4m,$$</span>
i.e.
<span class="math-container">$$m^2(8m(m-1)-11)+4m+8\ge 0$$</span></p>
<p>This inequality holds if <span class="math-container">$m\ge 2$</span>, i.e. <span class="math-container">$n\ge 16$</span>. </p>
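<p>A quick numerical spot-check of the conclusion (illustrative only, not part of the proof): the original expression is indeed non-positive for $n \ge 16$, and it can be positive for small $n$.</p>

```python
# Numerical spot-check (illustrative): the expression is non-positive for all
# n >= 16 tested, while it can be positive for small n.
def f(n):
    return 2*n**0.75 + 2*(n - n**0.5)**1.5 + n - 2*n**1.5

assert all(f(n) <= 0 for n in range(16, 10_000))
assert f(1) > 0   # the bound genuinely needs n to be large enough
```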
|
2,943,973 | <p>I'm trying to prove there is some <span class="math-container">$N$</span> such that for all <span class="math-container">$n > N$</span>, it is the case that <span class="math-container">$$2n^{3/4} + 2(n-\sqrt{n})^{3/2} + n - 2n^{3/2} \leq 0$$</span></p>
<p>I know that this is true, since I graphed this function on Wolfram. It has one real root, and beyond that root the function is negative. How would I prove this analytically, if possible? It seems like the expression is too messy to work with, but perhaps there is a simplification or argument I am missing that makes the problem easy.</p>
| Claude Leibovici | 82,404 | <p>Consider the Taylor series of the expression for large values of <span class="math-container">$n$</span>. You should get
<span class="math-container">$$2n^{3/4} + 2(n-\sqrt{n})^{3/2} + n - 2n^{3/2}=-2 n+2 n^{3/4}+\frac{3 }{4}n^{1/2}+\frac{1}{8}+O\left(\frac{1}{n^{1/2}}\right)$$</span> that is to say
<span class="math-container">$$-2n\left(1 -n^{-1/4}-\frac 38n^{-1/2}+\cdots\right)$$</span></p>
|
1,759,836 | <p>It is well-known that on a smooth manifold $M$, the Lie derivative commutes with the exterior derivative, i.e.
$${\cal L}_Xd\alpha=d{\cal L}_X\alpha$$
for any vector field $X$ and differential form $\alpha$.</p>
<p>If $M$ is a complex manifold, is there a similar result for the partial derivative
$${\cal L}_X\partial\alpha=\partial{\cal L}_X\alpha?$$</p>
<p>(<strong>Edit:</strong> By "similar" I mean maybe it does not hold in this form but there is nevertheless an analogous statement?)</p>
| Michael Albanese | 39,599 | <p>This is not true as stated.</p>
<p>Suppose $\alpha$ is a $d$-closed $(p, q)$-form, then $\partial\alpha = 0$, so $\mathcal{L}_X\partial\alpha = 0$. On the other hand, </p>
<p>$$\partial\mathcal{L}_X\alpha = \partial(di_X + i_Xd)\alpha = \partial di_X\alpha = \partial(\partial + \bar{\partial})i_X\alpha = \partial\bar{\partial}i_X\alpha.$$</p>
<p>Now let $M = \mathbb{C}$, $\alpha = dz$ and $X = |z|^2\partial_z$. Then we have</p>
<p>$$\partial\mathcal{L}_X\alpha = \partial\bar{\partial}i_X\alpha = \partial\bar{\partial}(i_{|z|^2\partial_z}dz) = \partial\bar{\partial}(dz(|z|^2\partial_z)) = \partial\bar{\partial}|z|^2 = dz\wedge d\bar{z} \neq 0.$$</p>
<p>I don't know if there is a complex analogue of the identity $\mathcal{L}_X d\alpha = d\mathcal{L}_X\alpha$ which would allow one to replace $d$ by $\partial$ or $\bar{\partial}$.</p>
|
1,759,836 | <p>It is well-known that on a smooth manifold $M$, the Lie derivative commutes with the exterior derivative, i.e.
$${\cal L}_Xd\alpha=d{\cal L}_X\alpha$$
for any vector field $X$ and differential form $\alpha$.</p>
<p>If $M$ is a complex manifold, is there a similar result for the partial derivative
$${\cal L}_X\partial\alpha=\partial{\cal L}_X\alpha?$$</p>
<p>(<strong>Edit:</strong> By "similar" I mean maybe it does not hold in this form but there is nevertheless an analogous statement?)</p>
| jws | 71,874 | <p>It's true if the complex structure is invariant under the vector field X. In that case: $\mathcal{L}_X \partial a = \mathcal{L}_X \tfrac{1}{2}(1 - \imath J) da = \tfrac{1}{2}(1 - \imath J)\mathcal{L}_X da = \partial \mathcal{L}_X a$. (Note: This is written with $a$ being a scalar. For a $(p,q)$ form you just need to pick a suitable projector.)</p>
|
1,423,449 | <p>Find all extrema for the function $f(x)=-\frac{x^{3}}{3}+x^{2}-x+4$ on the domain $x \in [-3.3]$.</p>
<p><strong>Solution:</strong> $f'(x)=-x^{2}+2x-1 = 0 \implies (x-1)^{2}=0 \implies x^{*}=1$. </p>
<p>Is that it? </p>
| Emilio Novati | 187,568 | <p>Hint:</p>
<p>note that $f'(x)=-(x-1)^2 \le 0 \quad \forall x \in \mathbb{R}$, so the function is monotonically decreasing and $x=1$ cannot be an extremum.</p>
<p>You have to find the values of the function in $x=-3$ and $x=3$ to find the extrema in the given interval.</p>
|
2,842,217 | <p>I'm looking to understand the tangent Taylor series, but I'm struggling to understand how to use long division to divide one series (sine) by the other (cosine). I also can't find examples of the tangent series much beyond x^5 (Wikipedia and YouTube videos both stop at the second or third term), which is not enough for me to see any pattern. (x^3/3 + 2x^5/15 tells me nothing.)</p>
<p>Wiki says Bernoulli numbers, which I plan on studying next, but seriously, I could really use an example of the tangent series out to the fifth or sixth term just to get a ballpark of what's going on before I start plug and pray. If someone can explain why the long division of the series spits out x^3/3 instead of x^3/3x^2, that would help too,</p>
<p>because I took x^3/6 divided by x^2/2 and got 2x^3/6x^2, following the logic that 4/2 divided by 3/5 = 2/0.6, or 20/6. So I multiplied the two outer terms for the numerator, and the two middle terms for the denominator: (4x5)/(2x3) = correct.</p>
<p>But when I do that with terms in the Taylor series I'm doing something wrong. Does that first x from sine divided by that first 1 from cosine have anything to do with it?</p>
<p>Completely lost. </p>
| Travis Willse | 155,629 | <p>You might find it conceptually easier to set up the identity of power series and compare the first few coefficients, and solve. This is algebraically equivalent to long division, though the order of some of the arithmetic operations is somewhat rearranged.</p>
<p>Write the desired Taylor series at $x = 0$ as
$$\tan x \sim a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \cdots .$$
Since $\tan$ is odd, all of the coefficients of the even terms vanish, i.e., $0 = a_0 = a_2 = a_4 = \cdots$. (This insight isn't necessary---we'd recover this fact soon anyway---but it does make the next computation easier.)</p>
<p>Replacing the functions in
$$\cos x \tan x = \sin x$$
with their Taylor series gives
$$\left(1 - \frac{1}{2!} x^2 + \frac{1}{4!} x^4 - \cdots\right)(a_1 x + a_3 x^3 + a_5 x^5 \cdots) = x - \frac{1}{3!} x^3 + \frac{1}{5!} x^5 - \cdots .$$</p>
<p>Now, comparing the coefficients of the terms $x, x^3, x^5, \ldots$, on both sides respectively gives
$$\begin{align*}
a_1 &= 1 \\
a_3 - \tfrac{1}{2} a_1 &= -\tfrac{1}{6} \\
a_5 - \tfrac{1}{2} a_3 + \tfrac{1}{24} a_1 &= \tfrac{1}{120} \\
& \,\,\vdots
\end{align*}$$
and successively solving and substituting gives
$$\tan x \sim x + \tfrac{1}{3} x^3 + \tfrac{2}{15} x^5 + \cdots .$$
Of course, it's straightforward (if eventually tedious) to compute as many terms as you want this way.</p>
<p>An efficient proof of the formula you mentioned involving the Bernoulli numbers for the general coefficient is given in <a href="https://math.stackexchange.com/a/2099213/155629">this answer</a>.</p>
|
1,392,340 | <p>Suppose there is a function $f:\mathbb R^n \to \mathbb R$. One way to find a stationary value is to solve the ODE $\dot x = - \nabla f(x)$, and look at $\lim_{t\to\infty} x(t)$.</p>
<p>However I want to consider a variation of this method where we solve
$$ dx = - \nabla f(x) dt + C(t,x) \cdot dW_t ,$$
where $C(x,t) \in \mathbb R^{n\times n}$, and $W_t$ is $n$-dimensional Wiener process, with some kind of condition like $C(x,t) \to 0$ as $t\to\infty$. The hope is that it might converge to a stationary value faster, and also that the stationary value it converges to will be a local minimum.</p>
<p>Can anyone give me some resources for where I could read about this sort of thing? Using a google search, and following references, I did find the book
<em>Random perturbations of dynamical systems</em> by Mark I. Freidlin and Alexander D. Wentzell, but I didn't find "gradient descent" in the index.</p>
| Sergio Almada | 172,877 | <p>Freidlin & Wentzell and their community are interested in a set of topics a little different from yours (metastability and exit problems). Your kind of case has been studied; see for example <a href="http://repository.ias.ac.in/1132/1/323.pdf" rel="nofollow">http://repository.ias.ac.in/1132/1/323.pdf</a> and its references for further information. </p>
<p>In a nutshell, the SDE you describe, under general conditions on $c$ and in the case where $c$ does not depend on time, has an invariant measure proportional to $e^{-f(x)/c}$ (I might be missing some constant in the exponent), so as $c$ tends to zero, the diffusion converges to one of the global minima of the function $f$. </p>
<p>Now, for the case you are interested in, where $c$ is time dependent, it is expected that the same behavior holds if $c(t,x) \to 0, \text{ as } t \to \infty$. This was studied for the case $c(t) = 1/( c_0 \log t )$, and the same result follows for $c_0$ large enough, this process is called the annealing process. The last result I have seen in this direction is given here <a href="http://projecteuclid.org/euclid.aoap/1043862427" rel="nofollow">http://projecteuclid.org/euclid.aoap/1043862427</a>. I recommend you to read the review paper <a href="http://onlinelibrary.wiley.com/store/10.1002/wcms.31/asset/31_ftp.pdf;jsessionid=4CD5D404F2C4C556D5C3F4B6331C71CA.f01t02?v=1&t=idf23s2i&s=61e6eec769b556097ae3239bc68798fa9231d8c1" rel="nofollow">http://onlinelibrary.wiley.com/store/10.1002/wcms.31/asset/31_ftp.pdf;jsessionid=4CD5D404F2C4C556D5C3F4B6331C71CA.f01t02?v=1&t=idf23s2i&s=61e6eec769b556097ae3239bc68798fa9231d8c1</a> for a related setting that might interest you.</p>
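<p>As an illustration of the annealing idea, here is an Euler-Maruyama simulation of $dx = -\nabla f(x)\,dt + c(t)\,dW_t$ with a slowly decaying schedule. All concrete choices below (the schedule, step size, horizon, and double-well $f$) are assumptions for demonstration, not taken from the cited papers:</p>

```python
# Euler-Maruyama simulation of dx = -f'(x) dt + c(t) dW_t with a decaying
# noise schedule.  All concrete choices here (schedule, step size, horizon,
# double-well f) are illustrative assumptions, not from the references.
import math
import random

random.seed(0)

def grad_f(x):                    # f(x) = (x^2 - 1)^2, minima at x = +/- 1
    return 4 * x * (x * x - 1)

x, dt = 2.0, 1e-3
for step in range(1, 100_000):
    t = step * dt
    c = 1.0 / (1.0 + math.log(1.0 + t))      # slowly decaying noise amplitude
    x += -grad_f(x) * dt + c * math.sqrt(dt) * random.gauss(0.0, 1.0)

# x should now sit near one of the two minima x = +/- 1
print(abs(abs(x) - 1.0) < 0.5)
```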
|
3,115,347 | <p>Let <span class="math-container">$f:(0,\infty) \to \mathbb R$</span> be a differentiable function and <span class="math-container">$F$</span> on of its primitives. Prove that if <span class="math-container">$f$</span> is bounded and <span class="math-container">$\lim_{x \to \infty}F(x)=0$</span>, then <span class="math-container">$\lim_{x\to\infty}f(x)=0$</span>.</p>
<p>I've seen this problem on a Facebook page yesterday. Can anybody give me some tips to solve it, please? It looks pretty interesting and I have no idea of a proof now.</p>
| Jonas De Schouwer | 581,053 | <p><span class="math-container">$a)$</span>
The probability that one die rolls a number equal to <span class="math-container">$3$</span> or lower is <span class="math-container">$\frac{3}{6}=\frac{1}{2}$</span>.
Hence, the probability that all of them roll <span class="math-container">$3$</span> or lower is <span class="math-container">$(\frac{1}{2})^4=\frac{1}{16}$</span>.</p>
<p>Can you do the same for <span class="math-container">$b)$</span>?</p>
<p><span class="math-container">$c)$</span>
Firstly, there is a probability of <span class="math-container">$(\frac{4}{6})^4=\frac{256}{1296}$</span> that all numbers rolled are <span class="math-container">$4$</span> or lower.</p>
<p>However, at least one of the numbers must <strong>equal</strong> <span class="math-container">$4$</span>, so we subtract the probability that they are all 3 or lower. We already calculated this in <span class="math-container">$a)$</span>.</p>
<p>Hence, the probability that the highest number rolled is <span class="math-container">$4$</span>, equals
<span class="math-container">$$\frac{256}{1296}-\frac{81}{1296}=\frac{175}{1296}$$</span></p>
|
1,064,115 | <p><strong>UPDATE:</strong> Thanks to those who replied saying I have to calculate the probabilities explicitly. Could someone clarify if this is the form I should end up with:</p>
<p>$G_X$($x$) = P(X=0) + P(X=1)($x$) + P(X=2) ($x^2$) + P(X=3)($x^3$)</p>
<p>Then I find the first and second derivative in order to calculate the expected value and variance?</p>
<p>Thanks!</p>
<p><strong>ORIGINAL POST:</strong> We have a probability question which has stumped all of us for a while and we really cannot figure out what to do. The question is:</p>
<p>An urn contains 4 red and 3 green balls. Balls will be drawn from the urn in sequence until the first red ball is drawn (ie. without replacement). Let X denote the number of green balls drawn in this sequence.</p>
<p>(i) Find $G_X$(x), the probability generating function of X.</p>
<p>(ii) Use $G_X$(x) to find E(X), the expected value of X.</p>
<p>(iii) Use $G_X$(x) and E(X) to find $σ^2$(X), the variance of X.</p>
<p>It appears to me from looking in various places online that this would be a hypergeometric distribution, as it is without replacement. However, we have not covered that type of distribution in our course and it seems the lecturer wishes for us to use a different method. We have only covered binomial, geometric and Poisson. I have tried to figure out an alternative way of finding the probability generating function and hence the expected value and variance (just using the derivatives), but I have not been successful. Would anyone be able to assist?</p>
<p>Thanks! :)
Helen</p>
| heropup | 118,193 | <p>You don't need to use the formula for a hypergeometric distribution. Simply observe that the most number of balls you can draw before obtaining the first red ball is $3$, so the support of $X$ is $X \in \{0, 1, 2, 3\}$. This is small enough to very easily compute explicitly $\Pr[X = k]$ for $k = 0, 1, 2, 3$.</p>
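<p>Following this suggestion, here is an explicit computation with exact rationals (an illustrative sketch): the probabilities $P(X=k)$, and then $E(X)$ and $\sigma^2(X)$ from the derivatives of the pgf at $x=1$.</p>

```python
# Explicit computation with exact rationals (illustrative sketch):
# P(X = k), then E(X) = G'(1) and Var(X) = G''(1) + G'(1) - G'(1)^2.
from fractions import Fraction as F

def p(k):
    """P(X = k): draw k greens and then a red, without replacement."""
    prob, greens, reds, total = F(1), 3, 4, 7
    for _ in range(k):
        prob *= F(greens, total)
        greens, total = greens - 1, total - 1
    return prob * F(reds, total)

probs = [p(k) for k in range(4)]        # support of X is {0, 1, 2, 3}
assert sum(probs) == 1

G1 = sum(k * probs[k] for k in range(4))            # G'(1) = E(X)
G2 = sum(k * (k - 1) * probs[k] for k in range(4))  # G''(1)
EX, Var = G1, G2 + G1 - G1 ** 2
print(f"E(X) = {EX}, Var(X) = {Var}")   # E(X) = 3/5, Var(X) = 16/25
```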
|
684,076 | <blockquote>
<p>In the past, practical applications have motivated the development of mathematical theories, which then became the subject of study in pure mathematics, where mathematics is developed primarily for its own sake. Thus, the activity of <a href="http://en.wikipedia.org/wiki/Applied_mathematics" rel="nofollow">applied mathematics</a> is vitally connected with research in pure mathematics.</p>
</blockquote>
<p>I wonder which problems in pure mathematics could be tackled once an efficient (polynomial-time) algorithm solving <a href="http://en.wikipedia.org/wiki/NP-complete" rel="nofollow">NP-complete problems</a> is found. The <a href="http://en.wikipedia.org/wiki/List_of_NP-complete_problems" rel="nofollow">list of instances of NP-complete problems</a> is long, but are there any other, let's say stand-alone, problems in pure mathematics that could be solved?</p>
<p>Sure, we could get more numerical data to tighten some bounds, but that's not what I'm after...</p>
| baffld | 128,321 | <p>I think it is important to note that NP-complete problems have solutions: an algorithm exists to solve any NP-complete problem. All NP-complete problems are in NP. If one were to implement an algorithm solving an NP problem that is not in P, then the implementation would have superpolynomial running time (it would be very slow on large inputs).</p>
<p>With that said, a proof showing that $P=NP$ will likely contain techniques that will change the landscape of pure mathematics forever, and it will likely be a proof showing that an NP-complete problem can be solved in polynomial time (which is what I think you are asking). Such a proof would change the world in ways that don't seem natural. For this reason, it is conjectured that $P\neq NP$.</p>
<p>I hope that helps add some context to DanielV's answer.</p>
|
3,354,566 | <p>I see integrals defined as anti-derivatives but for some reason I haven't come across the reverse. Both seem equally implied by the fundamental theorem of calculus.</p>
<p>This emerged as a sticking point in <a href="https://math.stackexchange.com/questions/3354502/are-integrals-thought-of-as-antiderivatives-to-avoid-using-faulhaber">this question</a>.</p>
| Stella Biderman | 123,230 | <p>From the point of view of analysis (as hinted at in Henning Makholm's answer) the issue is that the mapping <span class="math-container">$I:f'\to f$</span> is <strong>extremely</strong> not one-to-one. When you try to invert it, you find that a great deal of functions are possible "anti-integrals" of a given function. While this does occur for <span class="math-container">$d:f\to f'$</span> as well, there is a robust mathematical theory about how to address this and how to describe the set of anti-derivatives of a given function. For example, if <span class="math-container">$f$</span> is defined on <span class="math-container">$[a,b]$</span> then all antiderivatives of <span class="math-container">$f$</span> are of the form <span class="math-container">$$F_i(x)=c_i + \int_a^x f(t)dt$$</span> for constants <span class="math-container">$c_i$</span>. Although in some contexts the situation becomes more complicated (for example, if we look at <span class="math-container">$1/x$</span> defined on <span class="math-container">$[-1,0)\cup(0,1]$</span> then you have two constants, one for each side) there is a whole field that studies what happens for various domains.</p>
<p>The situation for inverting <span class="math-container">$I$</span> is a lot less rosy. For one thing, if you take any finite subset of the domain you can move the function's values around however you like without changing the integral. More generally, as long as two functions <a href="https://math.stackexchange.com/questions/480403/two-functions-agreeing-except-on-set-of-measure-zero">disagree on a set of measure zero</a> they will have the same integral. As far as I know there are no known ways to fruitfully analyze such a set of functions (a statement that has deep repercussions in machine learning and functional analysis).</p>
<p>A second issue is that integrating doesn't always ensure that you can differentiate. There are a wide variety of functions <span class="math-container">$f$</span> such that the anti-integral doesn't (or doesn't have to) produce a differentiable function! For example, if <span class="math-container">$1_\mathbb{Q}$</span> denotes the function that takes on the value <span class="math-container">$1$</span> on rational inputs and <span class="math-container">$0$</span> on irrational inputs, this function has a Lebesgue integral of <span class="math-container">$0$</span> (a similar example works for Riemann integral but it's more work). If you take the anti-integral of <span class="math-container">$f(x)=0$</span> and get <span class="math-container">$1_\mathbb{Q}$</span>, you can't differentiate and get back <span class="math-container">$f(x)=0$</span> because it's not differentiable.</p>
<p>A commenter mentions vector calculus, and it is true that something like this happens in vector calculus but there are a couple massive caveats.</p>
|
553,845 | <p>Could we assert that if $H$ is a subgroup of $G$, then the factor group $N_G(H)/C_G(H)$ is isomorphic to a subgroup of ${\rm Inn}(H)$ instead of ${\rm Aut}(H)$?</p>
| anon | 11,763 | <p>No, it is not necessary for (the image of) $N_G(H)/C_G(H)$ to be inner in ${\rm Aut}(H)$. </p>
<p>This means that "external" conjugation can defy imitation by "internal" conjugation.</p>
<p>In fact, every element of ${\rm Aut}(H)$ can be realized as conjugation by an element in some overgroup containing $H$. In particular, set $G$ to be the <a href="http://en.wikipedia.org/wiki/Holomorph_%28mathematics%29" rel="nofollow">holomorph</a> ${\rm Hol}(H):=H\rtimes{\rm Aut}(H)$ (for necessary background information see <a href="http://en.wikipedia.org/wiki/Semidirect_product" rel="nofollow">semidirect product</a>). If $\varphi\in{\rm Aut}(H)$ then "$\varphi\in N_G(H)$" and the image of $\varphi$ in ${\rm Aut}(H)$ (under $N_G(H)\to N_G(H)/C_G(H)\to{\rm Aut}(H)$) is just itself, $\varphi$.</p>
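<p>A concrete minimal counterexample (my own illustration, not from the original answer): take $G=S_3$ and $H=A_3$. Since $H$ is abelian, ${\rm Inn}(H)$ is trivial, yet $N_G(H)/C_G(H)$ has order 2, because conjugating by a transposition inverts the 3-cycles. A short computational check:</p>

```python
# Small concrete check (illustrative): G = S_3, H = A_3.  H is abelian, so
# Inn(H) is trivial, yet N_G(H)/C_G(H) has order 2.
from itertools import permutations

def compose(p, q):            # (p o q)(i) = p(q(i)); permutations of {0, 1, 2}
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0, 0, 0]
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

G = list(permutations(range(3)))                 # S_3
H = {(0, 1, 2), (1, 2, 0), (2, 0, 1)}            # A_3

N = [g for g in G if {compose(compose(g, h), inverse(g)) for h in H} == H]
C = [g for g in G if all(compose(g, h) == compose(h, g) for h in H)]

print(len(N), len(C))   # 6 3  (so |N/C| = 2, while |Inn(H)| = 1)
```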
|
2,935,743 | <p>Given two <strong>independent</strong> random variables X, Y, the expectation of their product XY is:</p>
<p><span class="math-container">$\mathrm{E}[XY] = \mathrm{E}[X]\cdot\mathrm{E}[Y]$</span></p>
<p>Similarly, the variance of the product of these variables is:</p>
<p><span class="math-container">$\mathrm{Var}[XY] = \mathrm{Var}[X]\cdot \mathrm{Var}[Y] + \mathrm{Var}[Y]( \mathrm{E}[X])^2 + \mathrm{Var}[X] (\mathrm{E}[Y])^2$</span></p>
<p>While proofs or sketches of proofs can be found online (even within this forum), I have been struggling to find a <strong>citable reference</strong> of the above formulas (i.e., a textbook or a paper). Can you provide a suitable reference?</p>
| StubbornAtom | 321,264 | <p>The first formula for independent random variables <span class="math-container">$X$</span> and <span class="math-container">$Y$</span>, sometimes called the product law of expectation, is not hard to find. It can be found in most undergrad probability and statistics textbooks. For instance, you can find it <a href="https://books.google.com/books?id=Es_VswEACAAJ&lpg=PP1&pg=PA160#v=onepage&q&f=false" rel="nofollow noreferrer">here</a> on page 160 of <em>Introduction to the Theory of Statistics</em> by Mood-Graybill-Boes, 3rd edition.</p>
<p>The second formula, even if not included in the main material of textbooks, could be found as an exercise, like this one <a href="https://books.google.com/books?id=NGPdBwAAQBAJ&lpg=PP1&pg=PA89#v=onepage&q&f=false" rel="nofollow noreferrer">here</a> on page 89, problem I-32 of <em>Exercises in Probability</em> by T. Cacoullos.</p>
|
2,935,743 | <p>Given two <strong>independent</strong> random variables X, Y, the expectation of their product XY is:</p>
<p><span class="math-container">$\mathrm{E}[XY] = \mathrm{E}[X]\cdot\mathrm{E}[Y]$</span></p>
<p>Similarly, the variance of the product of these variables is:</p>
<p><span class="math-container">$\mathrm{Var}[XY] = \mathrm{Var}[X]\cdot \mathrm{Var}[Y] + \mathrm{Var}[Y]( \mathrm{E}[X])^2 + \mathrm{Var}[X] (\mathrm{E}[Y])^2$</span></p>
<p>While proofs or sketches of proofs can be found online (even within this forum), I have been struggling to find a <strong>citable reference</strong> of the above formulas (i.e., a textbook or a paper). Can you provide a suitable reference?</p>
| Teemu Sarapisto | 1,108,398 | <p><a href="https://www.tandfonline.com/doi/abs/10.1080/01621459.1960.10483369" rel="nofollow noreferrer">On the Exact Variance of Products</a> by Leo A. Goodman, published in
Journal of the American Statistical Association, Volume 55, 1960 - Issue 292</p>
<p>is a peer-reviewed article focusing on this problem with the result for two independent variables stated in its Equation 2.</p>
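<p>For a quick sanity check of the variance formula, one can verify it exactly on a small discrete example (illustrative; the distributions chosen here are arbitrary):</p>

```python
# Exact check of the variance formula on a small discrete example
# (illustrative): X uniform on {1, 2}, Y uniform on {0, 3}, independent.
from fractions import Fraction as F
from itertools import product

X = [F(1), F(2)]
Y = [F(0), F(3)]

def E(vals):                       # expectation under the uniform law
    return sum(vals) / len(vals)

def Var(vals):
    return E([v * v for v in vals]) - E(vals) ** 2

direct = Var([x * y for x, y in product(X, Y)])      # all pairs equally likely
formula = Var(X) * Var(Y) + Var(Y) * E(X) ** 2 + Var(X) * E(Y) ** 2
assert direct == formula == F(99, 16)
```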
|
1,708,900 | <p>Does a closed form exist for </p>
<blockquote>
<p>$$\sum \limits_{n=0}^{\infty} \frac{1}{(kn)!}$$</p>
</blockquote>
<p>in terms of $k$ and other functions? The best that I have been able to do is solve the case where $k=1$, since the sum is just the infinite series for $e$. I would guess that any closed form must involve the exponential function, but am at a loss to prove it.</p>
| David C. Ullrich | 248,223 | <p>If $(c_n)$ is any sequence with period $k$ (that is, $c_{n+k}=c_n$) then it's possible to evaluate $\sum c_n/n!$ using tricks involving $k$-th roots of unity.</p>
<p>Let $\omega=e^{2\pi i/k}$. Consider the $k$ sequences</p>
<p>$s_0:1,1,1\dots$</p>
<p>$s_1: 1, \omega,\omega^2,\omega^3,\dots$</p>
<p>$s_2: 1, \omega^2,\omega^4,\omega^6,\dots$</p>
<p>$s_3: 1, \omega^3,\omega^6,\omega^9,\dots$</p>
<p>...</p>
<p>$s_{k-1}: 1, \omega^{k-1},\omega^{2(k-1)},\dots$.</p>
<p>Using tricks analogous to finding Fourier coefficients you can find $a_0,\dots,a_{k-1}$ so that $$(c_n)=a_0(s_0)+\dots+a_{k-1}(s_{k-1}).$$</p>
<p>Hence $$\sum\frac{c_n}{n!}=\sum_{j=0}^{k-1}a_j\sum_n\frac{\omega^{jn}}{n!}
=\sum_{j=0}^{k-1}a_je^{\omega^j}.$$</p>
<p>If you do that for your sequence you get $$\sum_{n=0}^\infty\frac{1}{(kn)!}=\frac1k\sum_{j=0}^{k-1}e^{\omega^j}.$$</p>
<p><strong>Edit:</strong> Thomas Andrews makes a comment that I should have included: Since the original sum is real, it follows that $$\sum_{n=0}^\infty\frac{1}{(kn)!}=\frac{1}{k}\sum_{j=0}^{k-1}e^{\cos 2\pi j/k}\cos(\sin2\pi j/k).$$</p>
<p><strong>Edit:</strong> If one is familiar with "abstract harmonic analysis", here just harmonic analysis on compact abelian groups, one sees that those "tricks analogous to finding Fourier coefficients" are in fact finding Fourier coefficients for a certain function on the group $\Bbb Z/k\Bbb Z$.</p>
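<p>The final identity is easy to check numerically; here is an illustrative snippet for $k=3$ (truncating the series at 10 terms is plenty, since it converges superexponentially):</p>

```python
# Numerical check of the final identity for k = 3 (illustrative).
import cmath
import math
from math import factorial

k = 3
lhs = sum(1 / factorial(k * n) for n in range(10))
omega = cmath.exp(2j * math.pi / k)
rhs = sum(cmath.exp(omega ** j) for j in range(k)) / k

assert abs(rhs.imag) < 1e-12           # imaginary parts cancel in pairs
assert abs(lhs - rhs.real) < 1e-12
```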
|
2,799,439 | <blockquote>
<p>Prove that if $p$ is a prime in $\Bbb Z$ that can be written in the form $a^2+b^2$ then $a+bi$ is irreducible in $\Bbb Z[i]$ .</p>
</blockquote>
<p>Let $a+bi=(c+di)(e+fi)\implies a-bi=(c-di)(e-fi)\implies a^2+b^2=(c^2+d^2)(e^2+f^2)\implies p|(c^2+d^2)(e^2+f^2)\implies p|c^2+d^2 $ or $p|e^2+f^2$
since $p$ is a prime.</p>
<p>How to show $e+fi $ or $c+di$ is a unit from here?</p>
| Tsemo Aristide | 280,301 | <p>Hint: Use the norm $N(a+ib)=a^2+b^2$, $N(zz')=N(z)N(z')$, so if $zz'=a+ib$, $N(z)N(z')=p$ implies that $N(z)=1$ or $N(z')=1$, you deduce that $z$ is a unit or $z'$ is a unit.</p>
|
38,731 | <p>The <a href="http://en.wikipedia.org/wiki/Ramanujan_summation">Ramanujan summation</a> of some infinite sums is consistent with the values of the Riemann zeta function at negative integers. We have, for instance, $$\zeta(-2k)=\sum_{n=1}^{\infty} n^{2k} = 0 \ (\mathfrak{R})$$ (for positive integer $k$) and $$\zeta(-(2k-1))=\sum_{n=1}^{\infty} n^{2k-1}=-\frac{B_{2k}}{2k} \ (\mathfrak{R})$$ (again, $k \in \mathbb{N}$). Here, $B_k$ is the $k$'th <a href="http://en.wikipedia.org/wiki/Bernoulli_number">Bernoulli number</a>. However, the correspondence fails for, e.g., $$\sum_{n=1}^{\infty} \frac{1}{n}=\gamma \ (\mathfrak{R})$$ (here $\gamma$ denotes the Euler–Mascheroni constant), since $\zeta(1)$ diverges. </p>
<p>Question: Are the first two examples I stated the only instances in which the Ramanujan summation of some infinite series coincides with the values of the Riemann zeta function?</p>
| Sumit Kumar Jha | 37,260 | <p>Ramanujan summation arises out of the <a href="http://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula">Euler–Maclaurin summation formula</a>. (Note that it is <em>not</em> the same as <a href="http://en.wikipedia.org/wiki/Ces%C3%A0ro_summation">Cesàro $(C,1)$ summation</a>; for instance, $\sum_{n\ge1} n$ has a Ramanujan constant but is not $(C,1)$ summable.)</p>
<p>You can find out easily from Euler-Maclaurin that</p>
<p>$$\sum_{k=1}^{\infty}\frac{1}{k}$$</p>
<p>is not (C, 1) summable.</p>
<p>Follow the method of Ramanujan below (which you can easily follow):</p>
<p>Using Euler-Summation we have</p>
<p>\begin{align*}
\zeta(s) & = \frac{1}{s-1}+\frac{1}{2}+\sum_{r=2}^{q}\frac{B_r}{r!}(s)(s+1)\cdots(s+r-2) \\
& \phantom{=} -\frac{(s)(s+1)\cdots(s+q-1)}{q!}\int_{1}^{\infty}B_{q}(x-[x])x^{-s-q} ~dx
\end{align*}</p>
<p>$\zeta(s)$ is the Riemann zeta function (Note $s=1$ is pole) . Note that right side has values even for $Re(s)<1$.</p>
<p>For example, putting $s=0$ we get
$$\zeta(0)=-\frac{1}{2}.$$</p>
<p>If we put $s=-n$ (n being a positive integer) and $q=n+1$, we see the remainder vanishes and have</p>
<p>$$(n+1)\zeta(-n)=-1+\frac{n+1}{2}+\sum_{r=2}^{n+1}B_r(-1)^{r-1}\binom{n+1}{r}$$</p>
<p>which, using the identity</p>
<p>$$\sum_{j=0}^{r}\binom{r+1}{j}B_j=0$$</p>
<p>gives</p>
<p>$$\zeta(-n)=-\frac{B_{n+1}}{n+1}.$$</p>
|
3,485,441 | <p>I don't quite understand why Burnside's lemma
<span class="math-container">$$
|X/G|=\frac1{|G|}\sum_{g\in G} |X_g|
$$</span>
should be called a "lemma". By "lemma", we should mean there is something coming after it, presumably a theorem. However, I could not find a theorem which requires Burnside as a lemma. In every book I read, the author jumps into calculations using Burnside rather than further theorems.</p>
<p>Question: What are some important consequences of Burnside Lemma, and why is it called a "lemma"?</p>
| Community | -1 | <p>A closely related fact, obtained by looking at the <em>class equation</em>, is that a nontrivial <span class="math-container">$p$</span>-group has nontrivial center. </p>
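As a small illustration of the lemma in action (my own example, not from the answer above): count the $2$-colourings of the vertices of a square up to rotation, averaging the number of colourings fixed by each group element.

```python
# |X/G| = (1/|G|) * sum over g in G of |fixed points of g|.
# A colouring is fixed by g iff it is constant on each cycle of g,
# so g fixes colours**cycles(g) colourings.

def cycles(perm):
    # number of cycles of a permutation given as a tuple of images
    seen, count = set(), 0
    for start in range(len(perm)):
        if start not in seen:
            count += 1
            i = start
            while i not in seen:
                seen.add(i)
                i = perm[i]
    return count

rotations = [(0, 1, 2, 3), (1, 2, 3, 0), (2, 3, 0, 1), (3, 0, 1, 2)]
colours = 2
orbit_count = sum(colours ** cycles(g) for g in rotations) // len(rotations)
print(orbit_count)  # 6
```

The four rotations fix $16, 2, 4, 2$ colourings respectively, and $(16+2+4+2)/4 = 6$ distinct necklaces.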
|
200,777 | <p>I have a question regarding sums in arrays.</p>
<p>So I have the following array:</p>
<pre><code>list=RandomReal[{0,1},{5,2}]
(*{{0.693551,0.447185},{0.274842,0.637526},{0.745271,0.0288363},{0.894933,0.937219},{0.605447,0.0337067}}*)
</code></pre>
<p>And from that I want to have the splitting for each pair like that</p>
<pre><code>Subsets[Range@Length@list, {2}]
{{1,2},{1,3},{1,4},{1,5},{2,3},{2,4},{2,5},{3,4},{3,5},{4,5}}
</code></pre>
<p>Let's say every array in the list is a pair of x and y coordinates.
Now I compute the distance between the different points using:</p>
<pre><code>dist = EuclideanDistance @@@ Subsets[list, {2}]
</code></pre>
<p>But now I want to have the sum of all the distances in which particle 1 occurs, so these are the first 4 entries in the array; for particle 2 it is the 1st and the 5th, 6th and 7th, etc. </p>
<p>In the end I want to have a list containing 5 arrays with the sum for each particle.</p>
<p>So can anyone help me please?</p>
| Armani42 | 66,191 | <p>Thanks, but now I have another problem. Now I have</p>
<pre><code>ener[r2_] = 4* (1/r2^12 - 1/r2^6);
</code></pre>
<p>And I want to make now</p>
<pre><code>dist = DistanceMatrix[list, DistanceFunction->(#1-#2 &)];
</code></pre>
<p>This produces a matrix whose entries are 2-component difference vectors.
But now I want to do the following:</p>
<pre><code>e = Apply[ener @ {##}&, dist, {2}];
</code></pre>
<p>But here is the problem, that some entries in the vectors are 0 and dividing by zero is not possible. </p>
<p>So how can I avoid that maybe by an if statement or something like that?</p>
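For readers who don't use Mathematica, here is the same computation sketched in Python (an illustrative translation, not an answer from the thread; the random points and helper names are mine). The point is simply to skip the $i=j$ pairs, so the zero self-distance never reaches the energy function:

```python
import math
import random

random.seed(0)
# five random particles in the unit square, as in the original question
pts = [(random.random(), random.random()) for _ in range(5)]

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# per-particle sums of pairwise distances (what the original question asks for)
sums = [sum(dist(p, q) for q in pts if q is not p) for p in pts]

# Lennard-Jones-style pair energy from the follow-up; skipping the i == j
# pairs avoids the division by zero on the diagonal
def ener(r2):
    return 4 * (1 / r2**12 - 1 / r2**6)

energies = [sum(ener(dist(p, q)) for q in pts if q is not p) for p in pts]
print(len(sums), len(energies))  # 5 5
```

In Mathematica the same idea is to exclude the diagonal entries (for example with an `If` test on the index pair) before applying the energy function.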
|
982,021 | <p>$$ f(x) = (1+x)^\frac35, a= (1.2)^\frac35 $$</p>
<p>I got the linear approximation equation of $$1+ \frac35 x$$
What do I do with the value of a? </p>
| David | 119,775 | <p><strong>Hint</strong>: $f(x)=a$ when $x=\cdots\,$.</p>
<p>Find $f(x)$ approximately by using this $x$ value in the linear approximation.</p>
|
982,021 | <p>$$ f(x) = (1+x)^\frac35, a= (1.2)^\frac35 $$</p>
<p>I got the linear approximation equation of $$1+ \frac35 x$$
What do I do with the value of a? </p>
| Paul | 17,980 | <p>Since $$ f(x) = (1+x)^\frac35 \approx 1+\frac35x,$$ we get $a=(1.2)^{3/5}=f(0.2)\approx1+\frac35\times 0.2=1.12$</p>
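A quick numerical check of this estimate (illustrative Python, not part of the answers):

```python
# f(x) = (1+x)**(3/5), its linearization at 0 is L(x) = 1 + (3/5) x,
# and a = 1.2**0.6 = f(0.2)
f = lambda x: (1 + x) ** 0.6
L = lambda x: 1 + 0.6 * x

approx = L(0.2)   # the linear estimate, 1.12
exact = f(0.2)    # the true value, about 1.1156
print(round(approx, 2), round(exact, 4))
```

The linearization overestimates by roughly $0.004$, which is the expected quadratic-order error of a tangent-line approximation.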
|
1,987,507 | <p>I find this question, which comes from section 2.2 of Dummit and Foote's algebra text, to be somewhat confusing:</p>
<blockquote>
<p>Let $G = S_n$, fix $i \in \{1,...,n\}$ and let $G_i = \{\sigma \in G ~|~ \sigma(i) = i\}$ (the stabilizer of $i$ in $G$). Use group actions to prove that $G_i$ is a subgroup of $G$. Find $|G_i|$.</p>
</blockquote>
<p>Here is what I came up with, but it hardly uses group actions. Let $\ker(\cdot)$ denote the kernel of $G$ acting on $\{1,...,n\}$ (?). It is easy to show that $\ker(\cdot)$ is the intersection of all stabilizers of elements in $G$, i.e., $\ker(\cdot) = \bigcap_{i=1}^n G_i$. But since $\ker(\cdot)$ is a subgroup of $G$, and since $\bigcap_{i=1}^n G_i$ is a subgroup if and only if each $G_i$ is, then the stabilizer $G_i$ is a subgroup. </p>
<p>That is the best I could come up with; as I mentioned, it really doesn't use many ideas of group actions. Also, I am not 100% certain $G$ is acting on $\{1,...,n\}$ in this case; perhaps it is acting on $n$-tuples of elements in $\{1,...,n\}$.</p>
<p>PS What is the standard notation for the kernel of a group action? Dummit and Foote offers no convenient notation for it---in fact, they haven't yet offered any notation for it!</p>
| msm | 350,875 | <p>Your mistake (in your own calculation) is that you assume all tickets are sold and the winner is the last person who buys the last ticket. This is not what always happens. <em>Any</em> of the tickets can win (including the first one!) and it is likely that some of them are not sold at all.</p>
|
114,895 | <blockquote>
<p><strong>Possible Duplicate:</strong><br>
<a href="https://math.stackexchange.com/questions/21282/show-that-every-n-can-be-written-uniquely-in-the-form-n-ab-with-a-squa">Show that every $n$ can be written uniquely in the form $n = ab$, with $a$ square-free and $b$ a perfect square</a> </p>
</blockquote>
<p>I am trying to prove that for every $n \ge 1$ there exist uniquely determined integers $a \gt 0$ and $b \gt 0$ such that $n = a^2b$ where $b$ is square-free.</p>
<p>The fact that such $a$ and $b$ exist is easy to prove.</p>
<p>From the fundamental theorem of arithmetic, $n$ can be uniquely represented as $p_1^{a_1} p_2^{a_2} \cdots p_s^{a_s}$ where $s$ is a positive integer. Thus</p>
<p>\begin{align*}
n & = \prod_{i=1}^s p_i^{a_i} \\\\
& = \prod_{i=1}^s p_i^{\left(2 \left\lfloor \frac{a_i}{2} \right\rfloor + a_i \bmod{2}\right)} \\\\
& = \prod_{i=1}^s p_i^{\left(2 \left\lfloor \frac{a_i}{2} \right\rfloor\right)} \cdot \prod_{i=1}^s p_i^{a_i \bmod{2}} \\\\
& = \left(\prod_{i=1}^s p_i^{\left\lfloor \frac{a_i}{2} \right\rfloor}\right)^2 \cdot \prod_{i=1}^s p_i^{a_i \bmod{2}}.
\end{align*}</p>
<p>Clearly, $\left(\prod_{i=1}^s p_i^{\left\lfloor \frac{a_i}{2} \right\rfloor}\right)^2$ is a perfect square and $\prod_{i=1}^s p_i^{a_i \bmod{2}}$ is square free. Hence, we have shown that such $a$ and $b$ exist.</p>
<p>Now, how do we show that such a pair of $a$ and $b$ is unique?</p>
<p>I know how to start proving such a theorem. Let us assume that $n = a^2b = a'^2b'$ such that $a' \ne a$ and $b' \ne b$. Now since this is not possible this should lead us to some contradiction. But, I'm unable to reach a contradiction from this assumption. Could you please help me?</p>
| Bill Dubuque | 242 | <p><strong>Hint</strong> $\ $ The problem is <em>multiplicative</em>, thus it suffices to show that it is true for a prime power $\rm\ P^N\:.\ $ But that's trivial: $\rm\ P^{2N} =\ (P^N)^2,\ \ P^{\:2N+1} =\ P\ (P^N)^2\:,\ $ uniquely. $\ $ <strong>QED</strong></p>
<p>Alternatively, examining the power of each prime in unique prime factorizations, the sought uniqueness reduces to the uniqueness of quotient and remainder (for division by $2$). Namely, suppose we have two squarefree factorizations $\rm\: A^2 B = n = C^2 D$ and suppose the prime $\rm p$ has power $\rm\:a,b,c,d\:$ in $\rm\:A,B,C,D\:$ resp. Since $\rm\:B,D\:$ are squarefree, $\rm\:0\le b,d\le 1.\:$ Now comparing the power of $\rm\:p\:$ in both decompositions, and using unique factorization we deduce</p>
<p>$$\rm 2\:a+b\ =\ 2\:c + d\ \ \Rightarrow\ \ a=c,\: b = d$$</p>
<p>which is clear by rewriting it $\rm\ 2\:(a-c)\: =\: d-b.\:$ Now $\rm\:|d-b| < 2\:$ $\Rightarrow$ $\rm\:d=b\:$ $\Rightarrow$ $\rm\:a=c$.</p>
<p>Note that squarefree decompositions may fail to be unique in domains lacking unique factorization, where one may have factorizations like $\rm\ p\:q = r^2\ $ for nonassociate non-prime irreducibles $\rm\:p,q,r$. Thus any proof must employ unique factorization or some closely related property.</p>
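The existence half of the argument also runs directly as code. An illustrative sketch (mine, not part of the answer): factor $n$ by trial division and send each prime power $p^e$ to $p^{\lfloor e/2\rfloor}$ in $a$ and $p^{e \bmod 2}$ in $b$.

```python
def squarefree_decomposition(n):
    # returns (a, b) with n = a*a*b and b squarefree
    a, b = 1, 1
    d, m = 2, n
    while d * d <= m:
        e = 0
        while m % d == 0:
            m //= d
            e += 1
        a *= d ** (e // 2)
        b *= d ** (e % 2)
        d += 1
    b *= m  # any leftover factor is a prime with exponent 1
    return a, b

for n in range(1, 500):
    a, b = squarefree_decomposition(n)
    assert a * a * b == n
    # b is squarefree: no prime square divides it
    assert all(b % (p * p) != 0 for p in range(2, int(b ** 0.5) + 1))

print(squarefree_decomposition(360))  # (6, 10), since 360 = 6^2 * 10
```

Uniqueness is exactly what the answer proves; the loop above only confirms existence on a range of inputs.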
|
4,581,539 | <p>Consider the task of proving that <span class="math-container">$|z+w|\leq |z|+|w|$</span>, where <span class="math-container">$z$</span> and <span class="math-container">$w$</span> are complex numbers.</p>
<p>We can consider three cases:</p>
<ol>
<li><span class="math-container">$|z|$</span> or <span class="math-container">$|w|$</span> equal to <span class="math-container">$0$</span></li>
<li><span class="math-container">$z=\lambda w$</span>, <span class="math-container">$\lambda \in \mathbb{R}$</span></li>
<li><span class="math-container">$z\neq \lambda w$</span>, <span class="math-container">$\lambda \in \mathbb{R}$</span></li>
</ol>
<p>My question is about case (2) specifically.</p>
<p>There are similar questions <a href="https://math.stackexchange.com/questions/1671369/triangle-inequality-about-complex-numbers-special-case">here</a> and <a href="https://math.stackexchange.com/questions/1968175/if-z-and-w-are-two-complex-numbers-prove-that-zw-zw">here</a>, but those solutions are different from the one below, which I am asking about.</p>
<p><strong>My proof of case (2) is</strong></p>
<p><span class="math-container">$$|z+w| = |\lambda w + w|=|(1+\lambda)w|$$</span></p>
<p><span class="math-container">$$=|((1+\lambda)w_1, (1+\lambda)w_2)|$$</span></p>
<p><span class="math-container">$$=\sqrt{(1+\lambda)^2 (w_1^2+w_2^2)}$$</span></p>
<p><span class="math-container">$$=|1+\lambda||w|$$</span></p>
<p><span class="math-container">$$\leq (1+|\lambda|)|w|$$</span></p>
<p><span class="math-container">$$=|w|+|z|$$</span></p>
<p>where I used <span class="math-container">$|z|=\sqrt{\lambda^2(w_1^2+w_2^2)}=|\lambda||w|$</span>.</p>
<p>But then I noticed that in Spivak's <em>Calculus</em> he says to consider separately the cases <span class="math-container">$\lambda>0$</span> and <span class="math-container">$\lambda<0$</span>, and I am not doing this.</p>
<p><strong>My questions then are:</strong></p>
<ul>
<li>is the proof above incorrect?</li>
<li>why do we need to consider the cases separately?</li>
</ul>
| Ryszard Szwarc | 715,896 | <p>The function can be calculated explicitly. Namely let <span class="math-container">$$f(t)=\begin{cases} \phantom{-} 1 & \phantom{-} 0<t < \pi \\
-1 & -\pi <t<0
\end{cases}$$</span>
Then the Fourier coefficient are <span class="math-container">$a_0=0$</span> and
<span class="math-container">$$a_n={1\over \pi}\int\limits_{-\pi}^\pi f(t)\cos (nt)\,dt =0$$</span>, <span class="math-container">$$ b_n={1\over \pi}\int\limits_{-\pi}^\pi f(t)\sin (nt)\,dt={2\over \pi}\int\limits_{0}^\pi\sin (nt)\,dt=\begin{cases}{4\over \pi(2k-1)} & n=2k-1\\
0 & n=2k\end{cases} $$</span>
Therefore, by the Dirichlet-Jordan condition we get
<span class="math-container">$$f(t)={4\over \pi}\sum_{k=1}^\infty {\sin(2k-1)t\over 2k-1},\qquad 0<|t|<\pi$$</span>
For the OP case we have
<span class="math-container">$$x(t)=f(2\pi t)=\begin{cases}\phantom{-} 1
& \phantom{-} 0<t<{1\over 2} \\
-1 & -{1\over 2} <t<0\end{cases}$$</span>
Therefore the function <span class="math-container">$x(t)$</span> is not continuous at <span class="math-container">$0,$</span> and after extension to <span class="math-container">$1$</span>-periodic function is not continuous at <span class="math-container">${n\over 2}.$</span></p>
<p>The reason for the continuity and discontinuity follows from the fact that the partial sums are uniformly convergent on each interval <span class="math-container">$(-{1\over 2}+\delta, -\delta)$</span> and <span class="math-container">$(\delta, {1\over 2}-\delta)$</span> but not uniformly convergent on <span class="math-container">$(-{1\over 2},0)$</span> and on <span class="math-container">$ (0,{1\over 2})$</span></p>
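A numerical check of the series (an illustrative sketch, not part of the answer): at the interior point $t=\pi/2$ the partial sums should approach $f(\pi/2)=1$.

```python
import math

def partial_sum(t, N):
    # N-term partial sum of (4/pi) * sum sin((2k-1)t)/(2k-1)
    return (4 / math.pi) * sum(math.sin((2 * k - 1) * t) / (2 * k - 1)
                               for k in range(1, N + 1))

s = partial_sum(math.pi / 2, 10000)
print(abs(s - 1) < 1e-3)  # True: pointwise convergence to f(pi/2) = 1
```

At $t=\pi/2$ the series is alternating, so the error of the $N$-term partial sum is below $\frac{4}{\pi(2N+1)}$; near the jump at $t=0$ the convergence is not uniform, which is the point made above.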
|
201,122 | <p>A little bit of <em>motivation</em> (the question starts below the line): I am studying a proper, generically finite map of varieties $X \to Y$, with $X$ and $Y$ smooth. Since the map is proper, we can use the Stein factorization $X \to \hat{X} \to Y$. Since the composition is generically finite, $X \to \hat{X}$ is birational, and therefore a sequence of blowups. I am currently interested in the other map: $\hat{X} \to Y$. I would like to apply Casnati–Ekedahl's techniques from “Covers of algebraic varieties I” (Journal of alg. geom., 1996). For this, I need $\hat{X} \to Y$ to be Gorenstein. (Since $Y$ is Gorenstein (since it is smooth), this is equivalent with $\hat{X}$ being Gorenstein.) When is this true?</p>
<p>Specifically, in my case $X \to Y$ is the albanese morphism of a smooth projective surface: so $Y$ is an abelian surface, and I am in the situation that the albanese morphism is surjective.</p>
<hr>
<p>Let $f \colon X \to Y$ be a proper map between two varieties $X$ and $Y$ over a field $k$. Assume $X$ and $Y$ are smooth (and proper, if you want).</p>
<p>Let $\pi \colon X \to \hat{X}$ and $\hat{f} \colon \hat{X} \to Y$ be the Stein factorization ($f = \hat{f} \circ \pi$). Of course, in general $\hat{X}$ is not smooth. However:</p>
<blockquote>
<p><strong>Q1:</strong> Does $\hat{X}$ have some other nice properties?</p>
</blockquote>
<p>I am thinking in the direction of, e.g., Gorenstein or Cohen–Macaulay. If not, does it help if we assume a bit more on $f$? Or, alternatively:</p>
<blockquote>
<p><strong>Q2:</strong> Under what conditions is $\hat{X}$ Gorenstein?</p>
</blockquote>
| Karl Schwede | 3,521 | <p>For what it's worth, one can say the following sort of thing.</p>
<p>Since $Y$ is log terminal, so is $(\hat{X}, -\mathrm{Ram})$. This doesn't mean much on its own, since the boundary in the pair has negative coefficients (i.e., the singularities of $\hat{X}$ can be arbitrarily bad). But it does say things like:</p>
<p><em>if $\hat{X}$ has really bad singularities at some points, then $\mathrm{Ram}$ also has really bad singularities at those points too. Another way to say this is if the ramification divisor has mild singularities, then $\hat{X}$ does too</em>. </p>
<p>Note that of course, $K_{\hat{X}} + (-\mathrm{Ram}) \sim f^*(K_Y)$. The right side is Cartier, and thus so is the left. So the pair $(\hat{X}, -\mathrm{Ram})$ is log-Gorenstein (again, this doesn't mean much unless you control the ramification divisor in some sense). </p>
|
2,424,508 | <p>One textbook exercise asks to prove $$|a|+|b|+|c|-|a+b|-|a+c|-|b+c|+|a+b+c| \geq 0.$$</p>
<p>The textbook's solution is:</p>
<blockquote>
<p>If $a$, $b$ or $c$ is zero, the equality follows. Then, we can assume
$|a| \geq |b| \geq |c| > 0$. </p>
<p>Dividing by $|a|$, the inequality is equivalent
to</p>
<p>$$ 1 + |\frac{b}{a}| + |\frac{c}{a}| - |1+\frac{b}{a}| - |\frac{b}{a}+\frac{c}{a}| - |1+\frac{c}{a}| + |1+\frac{b}{a}+\frac{c}{a}| \geq 0 $$</p>
<p>Since $|\frac{b}{a}| \leq 1$ and $|\frac{c}{a}| \leq 1$, we can
deduce that $|1+\frac{b}{a}| = 1+\frac{b}{a}$ and $|1+\frac{c}{a}| =
1+\frac{c}{a}$. </p>
<p>Thus, it is sufficient to prove that</p>
<p>$$ |\frac{b}{a}| + |\frac{c}{a}| - |\frac{b}{a}+\frac{c}{a}| - (1+\frac{b}{a}+\frac{c}{a}) + |1+\frac{b}{a}+\frac{c}{a}| \geq 0 .$$</p>
<p>Now, the triangle inequality shows that the sum of the first three
terms is nonnegative, and $|u| \geq u$ shows that the sum of the last two
terms is also nonnegative.</p>
</blockquote>
<p>There may be more intuitive proofs to this, but how can one in 'some semi-logical way' arrive at this exact one? </p>
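Before hunting for a proof strategy, one can at least confirm numerically that the inequality (a special case of Hlawka's inequality) holds; the following brute-force check is illustrative only:

```python
import random

def hlawka_lhs(a, b, c):
    # the left-hand side of the exercise's inequality
    return (abs(a) + abs(b) + abs(c)
            - abs(a + b) - abs(a + c) - abs(b + c)
            + abs(a + b + c))

random.seed(1)
for _ in range(10000):
    a, b, c = (random.uniform(-10, 10) for _ in range(3))
    assert hlawka_lhs(a, b, c) >= -1e-12  # holds up to floating-point noise

print("no counterexample found")
```

Equality occurs, for example, whenever $a$, $b$, $c$ all have the same sign, which matches the textbook's observation that the zero cases are the easy ones.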
| Marios Gretsas | 359,315 | <p>To answer the first question: the set $\{0,2\}$ is compact in the discrete topology, and thus by Tychonoff's theorem the set $X=\{0,2\}^{\mathbb{N}}$ is compact with respect to the product topology.</p>
<p>So $X$ cannot be a discrete topological space, because a discrete topological space $(X,\mathcal{T})$ is compact if and only if $X$ is finite.</p>
<p>For the second question, the Cantor set is a compact space with the subspace topology of the real line.</p>
<p>So it cannot be a discrete space because it is infinite.</p>
|
2,544,864 | <p>I have been trying to prove the continuity of the function:
$f:\mathbb{R}\to \mathbb{R}, f(x) =x \sin(x) $ using the $\epsilon -\delta$ method. </p>
<p>The particular objective of posting this question is to understand <strong>the dependence of $\delta$ on $\epsilon$ and $x$</strong>. I know that $f(x) =x \sin(x) $ is not uniformly continuous, so $\delta$ depends on both. Here is my attempt:</p>
<p>We need to prove that $\forall \epsilon > 0 \: \exists\, \delta(\epsilon,x) >0$ such that $\lvert x - y \rvert < \delta \implies \lvert x \sin(x) - y \sin(y)\rvert < \epsilon$.</p>
<p>Let $x=2n\pi$ and $y=x-\frac{\delta}{2}$ so that $\lvert x - y \rvert < \delta$. </p>
<p>Then,
\begin{align}
\bigl\lvert x \sin(x) - y \sin(y)\bigr\rvert&=\biggl\lvert 2n\pi \sin(2n\pi) - (2n\pi-\frac{\delta}{2})\sin(2n\pi-\frac{\delta}{2})\biggr\rvert\\
&= \biggl\lvert (2n\pi-\frac{\delta}{2}) \: \sin(2n\pi-\frac{\delta}{2})\biggr\rvert
\end{align}
Now,
\begin{align}
\biggl\lvert (2n\pi-\frac{\delta}{2}) \sin(2n\pi-\frac{\delta}{2})\biggr\rvert \leq \biggl\lvert (2n\pi-\frac{\delta}{2}) \biggr\rvert \leq \epsilon
\end{align}
and hence a $\delta$ such as $4n\pi + 2\epsilon$ can be used. Since this choice depends on $4n\pi$, which is $2x$, and on $2\epsilon$, the function is continuous but not uniformly so.</p>
<p>Is my procedure correct? How can I prove it generally so $\forall x$?</p>
| user284331 | 284,331 | <p>So you want to further show that it is not uniform? Assume it were, then there exists some $\delta>0$ such that for every $x,y$ with $|x-y|<\delta$, we have $|x\sin x-y\sin y|<1$. Now take $x_{n}=2n\pi+\eta$, $y_{n}=2n\pi$, $n=1,2,...$ where $\eta>0$ is so small that $\eta<\min\{\delta,\pi/2\}$, then $|x_{n}-y_{n}|<\delta$ but $|x_{n}\sin x_{n}-y_{n}\sin y_{n}|=x_{n}\sin x_{n}\geq x_{n}\left(\dfrac{2}{\pi}(x_{n}-2n\pi)\right)\geq 2n\pi\cdot\dfrac{2}{\pi}\eta=4n\eta$. So we have $4n\eta<1$ for all $n=1,2,...$, this is a contradiction.</p>
<p>Note that we have the inequality $\sin x\geq\dfrac{2}{\pi}x$ for $x\in[0,\pi/2]$.</p>
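The contradiction can be watched numerically (an illustrative sketch, not part of the answer; the gap $\eta=0.01$ and the sampled values of $n$ are my own choices): with the gap fixed, the differences $|x_n\sin x_n - y_n\sin y_n|$ grow without bound.

```python
import math

eta = 0.01  # fixed gap, eta < pi/2
gaps = []
for n in (1, 10, 100, 1000):
    x, y = 2 * n * math.pi + eta, 2 * n * math.pi
    diff = abs(x * math.sin(x) - y * math.sin(y))
    assert diff >= 4 * n * eta  # the lower bound derived in the answer
    gaps.append(diff)

print(gaps[-1] > gaps[0])  # True: the gap in values keeps growing
```

Since the output exceeds any fixed bound as $n$ grows while $|x_n - y_n| = \eta$ stays the same, no single $\delta$ can serve all $x$, exactly as argued above.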
|
2,544,864 | <p>I have been trying to prove the continuity of the function:
$f:\mathbb{R}\to \mathbb{R}, f(x) =x \sin(x) $ using the $\epsilon -\delta$ method. </p>
<p>The particular objective of posting this question is to understand <strong>the dependence of $\delta$ on $\epsilon$ and $x$</strong>. I know that $f(x) =x \sin(x) $ is not uniformly continuous, so $\delta$ depends on both. Here is my attempt:</p>
<p>We need to prove that $\forall \epsilon > 0 \: \exists\, \delta(\epsilon,x) >0$ such that $\lvert x - y \rvert < \delta \implies \lvert x \sin(x) - y \sin(y)\rvert < \epsilon$.</p>
<p>Let $x=2n\pi$ and $y=x-\frac{\delta}{2}$ so that $\lvert x - y \rvert < \delta$. </p>
<p>Then,
\begin{align}
\bigl\lvert x \sin(x) - y \sin(y)\bigr\rvert&=\biggl\lvert 2n\pi \sin(2n\pi) - (2n\pi-\frac{\delta}{2})\sin(2n\pi-\frac{\delta}{2})\biggr\rvert\\
&= \biggl\lvert (2n\pi-\frac{\delta}{2}) \: \sin(2n\pi-\frac{\delta}{2})\biggr\rvert
\end{align}
Now,
\begin{align}
\biggl\lvert (2n\pi-\frac{\delta}{2}) \sin(2n\pi-\frac{\delta}{2})\biggr\rvert \leq \biggl\lvert (2n\pi-\frac{\delta}{2}) \biggr\rvert \leq \epsilon
\end{align}
and hence a $\delta$ such as $4n\pi + 2\epsilon$ can be used. Since this choice depends on $4n\pi$, which is $2x$, and on $2\epsilon$, the function is continuous but not uniformly so.</p>
<p>Is my procedure correct? How can I prove it generally so $\forall x$?</p>
| Community | -1 | <p>Hint: $|x\sin x-y\sin y|=|x\sin x-x \sin y+x\sin y-y\sin y|\le 2|x||\cos\frac {x+y}{2}||\sin\frac {x-y}{2}|+|x-y||\sin y|\le |x||x-y|+|x-y|=(|x|+1)|x-y|$ so use $\delta=\frac {\varepsilon}{|x|+1} $. I have used the facts that, for every $x $, $|\sin x|, |\cos x|\le 1$, $|\sin x|\le|x|$.</p>
|
52,657 | <p>I have a pair of points at my disposal. One of these points represents the parabola's maximum y-value, which always occurs at x=0. I also have a point which represents the parabola's x-intercept(s). Given this information, is there a way to rapidly derive the formula for this parabolic curve? My issue is that I need to generate this equation directly in computer software, but all the standard-formula definitions for a parabolic curve use its Vertex, not its intercepts. Is there some standard form of equation into which these intercepts can be 'plugged in' in order to produce a working relation? If not, what is the most computationally direct way to solve this problem?</p>
| Eric Naslund | 6,075 | <p>I'll assume you meant you know the $x$-intercepts and maximum height.</p>
<p>If you have any parabola with $x$-intercepts $a,b$, $a\neq b$, and maximum height $c$, then you can write it as $$y=k(x-a)(x-b)$$ where $$k=-c\left(\frac{4}{(a-b)^2}\right).$$</p>
<p>(Notice that the value $c$ must be positive)</p>
<p>If $a=b$ we actually can't specify $k$ without more information. </p>
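Checking the formula on a concrete example (illustrative; the numbers $a=-2$, $b=4$, $c=3$ are my own choice):

```python
# intercept form y = k (x - a)(x - b) with k = -c * 4 / (a - b)**2
a, b, c = -2.0, 4.0, 3.0
k = -c * (4 / (a - b) ** 2)

def y(x):
    return k * (x - a) * (x - b)

vertex_x = (a + b) / 2  # the axis of symmetry lies midway between the roots
print(y(a) == 0 and y(b) == 0)        # True: the x-intercepts
print(abs(y(vertex_x) - c) < 1e-9)    # True: the maximum height is c
```

Here $k=-\frac{1}{3}$, and the parabola opens downward as required by a positive maximum height.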
|
4,444,669 | <p>I'm unsure about the problem below</p>
<hr>
Under which conditions is the following linear equation system solvable?
<span class="math-container">$$x_1 + 2x_2 - 3x_3 = a$$</span>
<span class="math-container">$$3x_1 - x_2 + 2x_3 = b$$</span>
<span class="math-container">$$x_1 - 5x_2 + 8x_3 = c$$</span>
<hr>
<p>We set up our matrix</p>
<p><span class="math-container">$$\begin{bmatrix}
1 & 2 & -3 & | a \\
3 & -1 & 2 & | b \\
1 & -5 & 8 & | c \\
\end{bmatrix}$$</span></p>
<p>We add <span class="math-container">$-3$</span> times the first row to the second row and <span class="math-container">$-1$</span> times the first row to the third row; then we add <span class="math-container">$-1$</span> times the second row to the third row. We get</p>
<p><span class="math-container">$$\begin{bmatrix}
1 & 2 & -3 & |a\\
0 & -7 & 11 & |b - 3a\\
0 & 0 & 0 & |2a - b + c\\
\end{bmatrix}$$</span></p>
<p>So <span class="math-container">$2a - b + c = 0$</span> is needed for the system to be solvable. Is this correct? I fear that there are other conditions that I have forgotten.</p>
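A quick numerical check of the condition (an illustrative sketch, not part of the question): the combination $2R_1 - R_2 + R_3$ of the left-hand sides vanishes identically, so solvability forces $2a-b+c=0$; conversely, when the condition holds an explicit solution exists.

```python
import random

random.seed(0)
# 2*(row 1) - (row 2) + (row 3) is identically zero in x1, x2, x3
for _ in range(1000):
    x1, x2, x3 = (random.uniform(-5, 5) for _ in range(3))
    r1 = x1 + 2 * x2 - 3 * x3
    r2 = 3 * x1 - x2 + 2 * x3
    r3 = x1 - 5 * x2 + 8 * x3
    assert abs(2 * r1 - r2 + r3) < 1e-9

# with c = b - 2a, one particular solution (read off the reduced system,
# taking x3 = 0) is x1 = (a + 2b)/7, x2 = (3a - b)/7
a, b = 1.0, 5.0
c = b - 2 * a
x1, x2, x3 = (a + 2 * b) / 7, (3 * a - b) / 7, 0.0
assert abs(x1 + 2 * x2 - 3 * x3 - a) < 1e-9
assert abs(3 * x1 - x2 + 2 * x3 - b) < 1e-9
assert abs(x1 - 5 * x2 + 8 * x3 - c) < 1e-9
print("condition verified")
```

This matches the row reduction: the zero row in the reduced matrix carries the right-hand side $2a-b+c$, which must vanish.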
| Greg Nisbet | 128,599 | <p>If you're allowed to use the ceiling, then you can use the following.</p>
<p>Let <span class="math-container">$x$</span> be a positive real.</p>
<p>Suppose <span class="math-container">$x > 1$</span>, choose <span class="math-container">$n = 1$</span>.</p>
<p>Suppose <span class="math-container">$x = 1$</span>, choose <span class="math-container">$n = 2$</span>.</p>
<p>Suppose <span class="math-container">$x < 1$</span>, choose <span class="math-container">$n = \lceil x^{-1} \rceil + 1$</span>. Then <span class="math-container">$n$</span> is strictly greater than <span class="math-container">$x^{-1}$</span>, and hence <span class="math-container">$\frac{1}{n}$</span> will be strictly less than <span class="math-container">$x$</span>.</p>
|
172,058 | <p>I'm wondering whether there is a relationship between the largest eigenvalue of a positive matrix (every element is positive, not necessarily positive definite) $A$, denoted $\rho(A)$, and that of $A∘A^T$, $\rho(A∘A^T)$, where $∘$ denotes the Hadamard product.</p>
<p>Here's a result I find for many numerical cases. I create a matrix of size $n$ whose elements are uniformly drawn from $[0,M]$, as $n$ gets large (>20), $\rho(A)\rightarrow 2M\rho( A∘A^T)$.</p>
<p>I've read some papers on bounds for the eigenvalues of $A∘B$, yet none of them mentions the special case of $A∘A^T$. I'm wondering whether there's a theory about this and, moreover, whether this result could be extended to general linear operators, such as the integral operators $T(f(x))=\int k(x,y)f(y)dy$ and $T(f(x))=\int k(x,y)k(y,x)f(y)dy$</p>
<p>Any reference is appreciated. Thanks in advance!</p>
| Suvrit | 8,430 | <p>The following information may be useful, though probably you already know it.</p>
<ol>
<li>$\rho(A\circ A^T) \le \rho(A)\rho(A^T)=\rho^2(A)$</li>
<li>The other direction of course fails easily; though it is interesting to note that $\rho(A) \le \rho( (A+A^T)/2)$</li>
<li>Section 5.7 of <em>Topics in matrix analysis</em> by R. A. Horn and C. R. Johnson contains a wealth of material about the Hadamard product, especially for nonnegative matrices.</li>
</ol>
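Item 1 can be checked numerically on random positive matrices; the sketch below (mine, not part of the answer) estimates spectral radii by power iteration, which converges for positive matrices by Perron-Frobenius.

```python
import random

def spectral_radius(A, iters=500):
    # power iteration; for an entrywise-positive matrix the iterates
    # converge to the Perron eigenvector and eigenvalue
    n = len(A)
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(w)  # entries stay positive, so max works as a norm
        v = [x / lam for x in w]
    return lam

random.seed(2)
n = 6
A = [[random.uniform(0.1, 1.0) for _ in range(n)] for _ in range(n)]
H = [[A[i][j] * A[j][i] for j in range(n)] for i in range(n)]  # A ∘ A^T

rho_A, rho_H = spectral_radius(A), spectral_radius(H)
print(rho_H <= rho_A ** 2)  # True, consistent with item 1
```

This only illustrates the bound $\rho(A\circ A^T)\le \rho(A)^2$; it says nothing about the asymptotic behaviour conjectured in the question.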
|
8,052 | <p>I wonder how you teachers walk the line between justifying mathematics because of
its many—and sometimes surprising—applications, and justifying it as the study
of one of the great intellectual and creative achievements of humankind?</p>
<p>I have quoted to my students G.H. Hardy's famous line,</p>
<blockquote>
<p>The Theory of Numbers has always been regarded as one of the most obviously useless branches of Pure Mathematics.</p>
</blockquote>
<p>and then contrasted this with the role number theory plays in contemporary
cryptography.
But I feel slightly guilty in doing so, because I believe that even without
the applications to cryptography that Hardy could not foresee—if in fact
number theory were completely "useless"—it
would nevertheless be well-worth studying for anyone.</p>
<p>One provocation is
Andrew Hacker's influential article in
the <em>NYTimes</em>,
<a href="http://www.nytimes.com/2012/07/29/opinion/sunday/is-algebra-necessary.html?_r=0" rel="noreferrer">Is Algebra Necessary?</a>
I believe your cultural education is not complete unless you understand
something of the achievements of mathematics, even if pure and useless.
But this is a difficult argument to make when you are teaching students how
to factor quadratic polynomials, e.g.,
<a href="https://matheducators.stackexchange.com/q/8020/511">The sum - product problem</a>.</p>
<p>So, to repeat, how do you walk this line?</p>
| Benoît Kloeckner | 187 | <p>My answer is mostly about higher education, and mostly suited for undergrads (which make the largest part of my students).</p>
<h3>The main purpose of their studies is to become more intelligent.</h3>
<p>This is what I tell my undergrad students a lot. It applies to math as well as to other fields; as was mentioned in other answers, many aspects of what we teach, from reproducing algorithmic tasks to handling abstract reasoning, make one's mind more flexible and more powerful. </p>
<p>A couple of years ago, I came up with a comparison I like, but the effectiveness of which I cannot yet evaluate: I tell my students that they are like high level sport competitors, except they work their brain rather than their body. This has many interesting consequences (among which the most important to me is not related to your question: it makes it clear that the student have to work by themselves to learn, and the teacher, like a coach, can only help them, but cannot work for them), and it partially answers the "why learn math" question.</p>
<h3>Knowing how to use mathematical tools is useful.</h3>
<p>Knowing how to use mathematical tools is indeed useful
in the sciences as well as in any quantitative field. Let me digress to partially answer "Lockhart's lament" mentioned in a comment: mathematicians should not conflate our research activity, "doing maths", with what we mainly teach (this is why I wrote about mathematical tools). We should also help students not to conflate the two, and to know a little bit about what "doing math" really is, but for most of them this is not what we are asked to teach. That said, it would be good to make what we teach interesting, of course.</p>
<p>Back to usefulness. Knowing about probabilities is useful in everyday life, just as basic arithmetic is. Let me give an example I heard about recently: a biology professor gave her students yes/no tests where each correct answer was rewarded with 1 point and each incorrect answer was penalized 0.25 points. The professor was not really aware that this grading system gives, on average, a mediocre but far from hopeless grade to a monkey answering randomly. A student knowing one fourth of the answers and guessing the rest would often get a passing grade. This is a classical example of mathematical illiteracy (here, a lack of understanding of expectation and the law of large numbers), and it shows that a basic understanding of a variety of elementary mathematics comes right next to reading and writing in terms of usefulness.</p>
<p>I also try to give examples of the usefulness of mathematical tools as often as possible, in order to give an idea of why we chose to teach them their precise curriculum. Let me name a couple: to justify the differential equations to freshmen, I gave a dozen examples from biology (various population growth models), physics (from mechanics, radioactivity, etc.), economy (I derived the "second law of economics" that I recently read in Thomas Picketty's book from a differential equation, and mentioned the context to show that the example was not made up for them), chemistry (cinetics); to justify the differential geometry course on curves and surfaces, I had a session devoted to students measuring distances on a globe and a map and comparing them, before I went on to prove that no map can be faithful.</p>
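The grading example boils down to a two-line expectation computation; an illustrative sketch (the numbers follow the story above):

```python
# yes/no test: +1 for a correct answer, -0.25 for a wrong one
p_correct_random = 0.5
e_random = p_correct_random * 1 + (1 - p_correct_random) * (-0.25)
print(e_random)  # 0.375 points per question for pure guessing

# a student who *knows* a quarter of the answers and guesses the rest
e_student = 0.25 * 1 + 0.75 * e_random
print(e_student > 0.5)  # True: an expected passing grade, as claimed
```

By the law of large numbers, on a long enough test the actual score will be close to this expectation, which is exactly what the professor overlooked.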
|
8,052 | <p>I wonder how you teachers walk the line between justifying mathematics because of
its many—and sometimes surprising—applications, and justifying it as the study
of one of the great intellectual and creative achievements of humankind?</p>
<p>I have quoted to my students G.H. Hardy's famous line,</p>
<blockquote>
<p>The Theory of Numbers has always been regarded as one of the most obviously useless branches of Pure Mathematics.</p>
</blockquote>
<p>and then contrasted this with the role number theory plays in contemporary
cryptography.
But I feel slightly guilty in doing so, because I believe that even without
the applications to cryptography that Hardy could not foresee—if in fact
number theory were completely "useless"—it
would nevertheless be well-worth studying for anyone.</p>
<p>One provocation is
Andrew Hacker's influential article in
the <em>NYTimes</em>,
<a href="http://www.nytimes.com/2012/07/29/opinion/sunday/is-algebra-necessary.html?_r=0" rel="noreferrer">Is Algebra Necessary?</a>
I believe your cultural education is not complete unless you understand
something of the achievements of mathematics, even if pure and useless.
But this is a difficult argument to make when you are teaching students how
to factor quadratic polynomials, e.g.,
<a href="https://matheducators.stackexchange.com/q/8020/511">The sum - product problem</a>.</p>
<p>So, to repeat, how do you walk this line?</p>
| jhocking | 5,160 | <p>I wish I could find an excerpt to link you to, but the book <a href="http://rads.stackoverflow.com/amzn/click/1400064287" rel="nofollow">Made to Stick</a> had a great reason as one of its examples. You learn math in order to make you better at thinking. It's like athletes lifting weights; they don't lift barbells because they are going to need to lift barbells during a game, but to build strength they will use everywhere else.</p>
|
8,052 | <p>I wonder how you teachers walk the line between justifying mathematics because of
its many—and sometimes surprising—applications, and justifying it as the study
of one of the great intellectual and creative achievements of humankind?</p>
<p>I have quoted to my students G.H. Hardy's famous line,</p>
<blockquote>
<p>The Theory of Numbers has always been regarded as one of the most obviously useless branches of Pure Mathematics.</p>
</blockquote>
<p>and then contrasted this with the role number theory plays in contemporary
cryptography.
But I feel slightly guilty in doing so, because I believe that even without
the applications to cryptography that Hardy could not foresee—if in fact
number theory were completely "useless"—it
would nevertheless be well-worth studying for anyone.</p>
<p>One provocation is
Andrew Hacker's influential article in
the <em>NYTimes</em>,
<a href="http://www.nytimes.com/2012/07/29/opinion/sunday/is-algebra-necessary.html?_r=0" rel="noreferrer">Is Algebra Necessary?</a>
I believe your cultural education is not complete unless you understand
something of the achievements of mathematics, even if pure and useless.
But this is a difficult argument to make when you are teaching students how
to factor quadratic polynomials, e.g.,
<a href="https://matheducators.stackexchange.com/q/8020/511">The sum - product problem</a>.</p>
<p>So, to repeat, how do you walk this line?</p>
| dtldarek | 42 | <p>Some reasons (random order):</p>
<ul>
<li><p><em>Math is fun and beautiful.</em> Not all marvel at the beauty of mathematics and not all enjoy working with math. In fact, daily mathematical work can be tedious, boring, uninspiring even for those in love with it. But every now and then you discover something great, and the feeling you get can be matched by very few other things (it does help if you have someone to share it with).</p>
<p>Moreover, math can help you appreciate the world even more deeply than before. For example, on a clear night, try to imagine the size of the Andromeda galaxy from how far away it is (about <span class="math-container">$2.5\times 10^6$</span> light years) and the fact that its apparent size (angular diameter) is about six times that of the full moon.</p>
</li>
<li><p><em>Math makes you more intelligent.</em> I have no citation for this, but it seems to work that way. There are some papers linking happiness to intelligence, but to the best of my knowledge they are inconclusive, so this may be a double-edged sword.</p>
</li>
<li><p><em>Math rewires your brain.</em> I don't know how it happens, but it is true. To give an example, usually people don't consider <span class="math-container">$-5$</span> (negative <span class="math-container">$5$</span>) a valid number of apples, but that would be perfectly normal to a mathematician (unless assumptions imply otherwise). In most cases such a rewiring gives a superior perspective (of course, it does not help much if it is your only one; insight is not wisdom), although it may also cause friction or trouble with communication.</p>
</li>
<li><p><em>Math can teach you how the world works.</em> There are many non-intuitive real-life phenomena, for example, adding new roads may increase traffic jams (see <a href="http://en.wikipedia.org/wiki/Braess%27s_paradox" rel="nofollow noreferrer">Braess's paradox</a>). Still, there is math that models it in an intuitive way (here it is game theory). Knowledge of such math allows you to better predict the outcome of actions.</p>
</li>
<li><p><em>Math is useful.</em> Almost everybody and everything uses math now, whether it is a small MP3 player or big spaceship. We wouldn't be where we are now without probability, eigenvalues, Fourier transform or differential equations to name a few.</p>
</li>
<li><p><em>Math is considered important.</em> Even if math was not important by itself, people thinking it is important make it important.</p>
</li>
<li><p><em>Math is gatekeeper.</em> As @Benjamin Dickman pointed out, math is already thought of as a gatekeeper, but this might be even more true in the future, see a nice article by Alexandre Borovik: <a href="http://www.borovik.net/selecta/wp-content/uploads/2015/01/Spade_31Dec14.pdf" rel="nofollow noreferrer">Calling a spade a spade: mathematics in the new pattern of division of labour</a>.</p>
</li>
<li><p>Finally: <img src="https://i.stack.imgur.com/Fqgi8.jpg" alt="math homework" /></p>
</li>
</ul>
<p>Also, <a href="https://math.stackexchange.com/a/529525/26306">this post</a> is somewhat related.</p>
<p>I hope this helps <span class="math-container">$\ddot\smile$</span></p>
|
716,122 | <p>Using the following deductive system D1:</p>
<pre><code>(A1) A → (B → A)
(A2) (A → B) → ((A → (B → C)) → (A → C))
(A3) (A → B) → ((A → ¬B) → ¬A)
(A4) ¬¬A → A
(MP) A A → B
---------
B
</code></pre>
<p><strong>Premise: p → q . Conclusion: ¬q → ¬p</strong></p>
<p><strong>My attempt:</strong> </p>
<ol>
<li><p>p → q</p></li>
<li><p>(p → q)→((p→ ¬q)→ ¬p) Using (A3) axiom</p></li>
<li><p>(p→ ¬q)→ ¬p Using Modus ponens- MP(1&2)</p></li>
</ol>
<p>So as you can see, at step 3 I have reached ¬p on the right-hand side, but I couldn't reach ¬q. </p>
| Doug Spoonwood | 11,300 | <p>I use <a href="http://en.wikipedia.org/wiki/Polish_notation" rel="nofollow">Polish notation</a>.</p>
<p>Your axioms are</p>
<p>A1 CaCba</p>
<p>A2 CCabCCaCbcCac</p>
<p>A3 CCabCCaNbNa</p>
<p>A4 CNNaa</p>
<p>Note that {A1, A2} give you a deduction metatheorem. Thus, if we make a hypothesis h and we can then derive the result r, we could find a proof of Chr using the procedure (and other methods to make the work shorter if desired) outlined by a decent meta-proof of the deduction metatheorem.</p>
<p>We want CNqNp, so we'll hypothesize Nq. </p>
<pre><code> 0 Cpq premise
1 Nq hypothesis
2 CNqCpNq instance of A1
3 CpNq 2, 1 MP
4 CCpqCCpNqNp instance of A3
5 CCpNqNp 4, 0 MP
6 Np 5, 3 MP
</code></pre>
<p>Now, we just need to have the ability to find C10, C11, C12, C13, C14, C15, and getting to C16 will follow in 3 more steps. C11 follows from the proof of Cpp. C12, and C14 aren't too hard to find, since they're instances of CpCqCrq. CpCqCrq can get found by substituting CqCrq for "a" in A1, p for "b" in A1, and then taking the consequent of the resulting formula. C13 is an instance of A1. C10 isn't hard to find either. Now we just need to get C15 and C16. 5 comes from 4 and 0, and thus due to how a decent meta-proof of the deduction metatheorem works, C15 can come from C14 and C10. 6 comes from 5 and 3, and so C16 will come from C15 and C13. Thus...</p>
<pre><code> 80 Cpq premise
81 CCpqCNqCpq instance of A1
82 CNqCpq 81, 80 MP (this is C10).
83 CCCpqCCpNqNpCNqCCpqCCpNqNp instance of A1
84 CCpqCCpNqNp instance of A3.
85 CNqCCpqCCpNqNp 84, 83 MP (this is C14).
86 C CNq Cpq C CNq C Cpq CCpNqNp CNqCCpNqNp instance of A1
87 CCNqCCpqCCpNqNpCNqCCpNqNp 86, 82 MP
88 CNqCCpNqNp 87, 85 MP (this is C15).
89 CNqCpNq instance of A1 (this is C13).
90 CCNqCpNqCCNqCCpNqNpCNqNp instance of A2
91 CCNqCCpNqNpCNqNp 90, 89 MP
92 CNqNp 91, 88 MP
</code></pre>
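<p>The individual lines of the derivation above can at least be checked <em>semantically</em>: each should be a consequence of the premise Cpq under the usual truth tables. A small stdlib-only Python sketch with a hand-rolled Polish-notation parser (the parser is my addition, not part of the answer):</p>

```python
from itertools import product

def parse(s):
    # Polish notation: C = conditional (2 args), N = negation (1 arg),
    # lowercase letters are atoms.
    def rec(i):
        if s[i] == 'C':
            left, i = rec(i + 1)
            right, i = rec(i)
            return ('C', left, right), i
        if s[i] == 'N':
            arg, i = rec(i + 1)
            return ('N', arg), i
        return s[i], i + 1
    tree, i = rec(0)
    assert i == len(s)
    return tree

def ev(t, env):
    # evaluate a parsed formula under a truth assignment
    if isinstance(t, str):
        return env[t]
    if t[0] == 'N':
        return not ev(t[1], env)
    return (not ev(t[1], env)) or ev(t[2], env)

def follows(premise, formula):
    # semantic consequence: formula is true in every valuation making premise true
    atoms = sorted({c for c in premise + formula if c.islower()})
    p_tree, f_tree = parse(premise), parse(formula)
    return all(ev(f_tree, dict(zip(atoms, vals)))
               for vals in product([False, True], repeat=len(atoms))
               if ev(p_tree, dict(zip(atoms, vals))))

# The derived lines 82, 85, 88, 89 and the goal 92 all follow from Cpq.
for line in ["CNqCpq", "CNqCCpqCCpNqNp", "CNqCCpNqNp", "CNqCpNq", "CNqNp"]:
    assert follows("Cpq", line)
```

<p>This checks only semantic validity, of course, not the syntactic correctness of each proof step.</p>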
|
716,122 | <p>Using the following deductive system D1:</p>
<pre><code>(A1) A → (B → A)
(A2) (A → B) → ((A → (B → C)) → (A → C))
(A3) (A → B) → ((A → ¬B) → ¬A)
(A4) ¬¬A → A
(MP) A A → B
---------
B
</code></pre>
<p><strong>Premise: p → q . Conclusion: ¬q → ¬p</strong></p>
<p><strong>My attempt:</strong> </p>
<ol>
<li><p>p → q</p></li>
<li><p>(p → q)→((p→ ¬q)→ ¬p) Using (A3) axiom</p></li>
<li><p>(p→ ¬q)→ ¬p Using Modus ponens- MP(1&2)</p></li>
</ol>
<p>So as you can see, at step 3 I have reached ¬p on the right-hand side, but I couldn't reach ¬q. </p>
| Mauro ALLEGRANZA | 108,274 | <p>Your <em>axiom system</em> is that of Elliott Mendelson, <em>Introduction to Mathematical Logic</em> (4th ed - 1997). page 35.</p>
<p>With <em>axioms</em> (A1) and (A2) - as said by Doug - you may prove <em>Deduction Theorem</em> [see Mendelson, page 37 for a proof].</p>
<p>We need an "intermediate result" (we call it <em>Syll</em>) :</p>
<blockquote>
<p>$\mathcal A \rightarrow \mathcal B, \mathcal B \rightarrow \mathcal C \vdash \mathcal A \rightarrow \mathcal C$ --- [Corollary 1.10a, page 38]</p>
</blockquote>
<p>We prove it with <em>DT</em> :</p>
<p>(1) --- $\mathcal A \rightarrow \mathcal B$ --- assumption</p>
<p>(2) --- $\mathcal B \rightarrow \mathcal C$ --- assumption</p>
<p>(3) --- $\mathcal A$ --- assumption</p>
<p>(4) --- $\mathcal B$ --- from (3) and (1) by <em>modus ponens</em></p>
<p>(5) --- $\mathcal C$ --- from (4) and (2) by <em>modus ponens</em></p>
<p>thus : $\mathcal A \rightarrow \mathcal B, \mathcal B \rightarrow \mathcal C, \mathcal A \vdash \mathcal C$; </p>
<p>so : $\mathcal A \rightarrow \mathcal B, \mathcal B \rightarrow \mathcal C, \vdash \mathcal A \rightarrow \mathcal C$ --- by <em>Deduction Theorem</em>.</p>
<hr>
<p>Now with the main proof :</p>
<p>(1) $p \rightarrow q$ --- assumption </p>
<p>(2) $\vdash (p \rightarrow q) \rightarrow ((p \rightarrow \lnot q) \rightarrow \lnot p)$ --- (A3)</p>
<p>(3) $\vdash \lnot q \rightarrow (p \rightarrow \lnot q)$ --- (A1)</p>
<p>(4) $(p \rightarrow \lnot q) \rightarrow \lnot p$ --- from (1) and (2) by <em>modus ponens</em></p>
<p>(5) $\lnot q \rightarrow \lnot p$ --- form (3) and (4) by <em>Syll</em>.</p>
<blockquote>
<blockquote>
<p>Thus : $p \rightarrow q \vdash \lnot q \rightarrow \lnot p$.</p>
</blockquote>
</blockquote>
<hr>
<p><strong>Appendix</strong></p>
<p>If you cannot use the resource of <em>Deduction Theorem</em>, you must prove :</p>
<p>$\vdash (p \rightarrow q) \rightarrow ((q \rightarrow r) \rightarrow (p \rightarrow r))$</p>
<p>and use it in step (5) of the above proof.</p>
<p>How to prove this [see Mendelson, page 37] ?</p>
<p>We may "mimick" the proof of the <em>DT</em> to find the proof of the above formula.</p>
<p><em>(Step 1)</em> </p>
<p>(1) --- $\mathcal A \rightarrow \mathcal B$ --- assumption</p>
<p>(2) --- $\mathcal B \rightarrow \mathcal C$ --- assumption</p>
<p>(3) --- $\vdash (\mathcal B \rightarrow \mathcal C) \rightarrow (\mathcal A \rightarrow (\mathcal B \rightarrow \mathcal C))$ --- (A1)</p>
<p>(4) --- $\mathcal A \rightarrow (\mathcal B \rightarrow \mathcal C)$ --- from (2) and (3) by <em>modus ponens</em></p>
<p>(5) --- $\vdash (\mathcal A \rightarrow \mathcal B) \rightarrow ((\mathcal A \rightarrow (\mathcal B \rightarrow C)) \rightarrow (\mathcal A \rightarrow \mathcal C))$ --- (A2)</p>
<p>(6) --- $(\mathcal A \rightarrow (\mathcal B \rightarrow C)) \rightarrow (\mathcal A \rightarrow \mathcal C)$ --- from (1) and (5) by <em>modus ponens</em></p>
<p>(7) --- $\mathcal A \rightarrow \mathcal C$ --- from (4) and (6) by <em>modus ponens</em>.</p>
<blockquote>
<p>Thus : $\mathcal A \rightarrow \mathcal B, \mathcal B \rightarrow \mathcal C \vdash \mathcal A \rightarrow \mathcal C$.</p>
</blockquote>
<p>What have we obtained so far ? A proof of $\mathcal A \rightarrow \mathcal C$ from $\mathcal A \rightarrow \mathcal B$ and $\mathcal B \rightarrow \mathcal C$ <strong>without</strong> the use of the <em>Deduction Theorem</em> (using only (A1), (A2)).</p>
<p>Now we can repeat the procedure to get :</p>
<p><em>(Step 2)</em></p>
<p>(a) --- $\mathcal A \rightarrow \mathcal B$ --- assumption</p>
<p>(b) --- $\vdash (\mathcal A \rightarrow \mathcal B) \rightarrow [(\mathcal A \rightarrow (\mathcal B \rightarrow \mathcal C)) \rightarrow (\mathcal A \rightarrow \mathcal C)]$ --- (A2)</p>
<p>(c) --- $(\mathcal A \rightarrow (\mathcal B \rightarrow \mathcal C)) \rightarrow (\mathcal A \rightarrow \mathcal C)]$ --- from (a) and (b) by <em>modus ponens</em> [call this formula $\mathsf F$]</p>
<p>(d) --- $\vdash \mathsf F \rightarrow [(\mathcal B \rightarrow \mathcal C) \rightarrow \mathsf F]$ --- (A1)</p>
<p>(e) --- $(\mathcal B \rightarrow \mathcal C) \rightarrow \mathsf F$ --- from (c) and (d) by <em>modus ponens</em></p>
<p>(f) --- $\vdash (\mathcal B \rightarrow \mathcal C) \rightarrow (\mathcal A \rightarrow (\mathcal B \rightarrow \mathcal C))$ --- (A1)</p>
<p>(g) --- $(\mathcal B \rightarrow \mathcal C) \rightarrow (\mathcal A \rightarrow \mathcal C)$ --- from (A2) with (f) and (e).</p>
<blockquote>
<p>Thus : $\mathcal A \rightarrow \mathcal B \vdash (\mathcal B \rightarrow \mathcal C) \rightarrow (\mathcal A \rightarrow \mathcal C)$.</p>
</blockquote>
<p>Finally, we repeat the above "procedure" to get :</p>
<p><em>(Step 3)</em></p>
<p>$\vdash \mathcal A \rightarrow \mathcal B \rightarrow ((\mathcal B \rightarrow \mathcal C) \rightarrow (\mathcal A \rightarrow \mathcal C))$.</p>
|
2,412,959 | <p>In <a href="https://math.stackexchange.com/questions/170362/pointwise-convergence-implies-lp-convergence">this</a> question a user asks if pointwise convergence implies convergence in $L^p$. I would have thought that the answer is yes. I am not experienced with measure theory, which is how that question is framed. The following statement seems to assert that p.w. convergence implies convergence in $L^p$:
$$
\lim_{n\to \infty} ||f_n - f||_{L^p(\Omega)}^p = \lim_{n\to \infty} \int_\Omega |f_n(x)-f(x)|^p dx = \int_\Omega |\lim_{n\to \infty} f_n(x)-f(x)|^p dx = \int_\Omega |0|^p dx = 0.
$$
But the answers to the other post say that p.w. convergence does not imply convergence in $L^p$, so what am I missing?</p>
| Angina Seng | 436,618 | <p>For $\Omega=\Bbb R$ and Lebesgue measure I like this example.
$$f_n(x)=e^{-(x-n)^2}.$$
Then $f_n\to0$ pointwise, but $$\|f_n\|_p=\|f_1\|_p>0.$$</p>
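<p>A quick numerical illustration of this example, in stdlib Python only; the norm integrals are approximated with a midpoint Riemann sum over a window around the bump, and I take $p=2$ for concreteness (both choices are mine, not part of the answer):</p>

```python
import math

def f(n, x):
    # f_n(x) = exp(-(x - n)^2): a bump of fixed shape centred at n
    return math.exp(-(x - n) ** 2)

def lp_norm(n, p=2, half_width=10.0, steps=100_000):
    # midpoint-rule approximation of ||f_n||_p over [n - w, n + w];
    # the Gaussian tail outside this window is negligible
    a = n - half_width
    h = 2 * half_width / steps
    s = sum(f(n, a + (k + 0.5) * h) ** p for k in range(steps))
    return (s * h) ** (1 / p)

norms = [lp_norm(n) for n in (1, 7, 40)]
exact = (math.pi / 2) ** 0.25        # ||f_n||_2 = (pi/2)^(1/4) for every n

assert all(abs(v - exact) < 1e-6 for v in norms)  # same L^2 norm for all n
assert f(40, 0.0) < 1e-300                        # yet f_n(0) -> 0 pointwise
```

<p>The bumps march off to infinity, so at every fixed point the sequence dies out, while the norm never shrinks.</p>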
|
1,014,987 | <p>I need to solve for a bound on $n$ from this inequality: </p>
<p>$$c \leq 1.618^{n+1} -(-0.618)^{n+1},$$</p>
<p>where $c$ is some known constant value. How can I solve this? At first I was going to take the logarithm, but the difference of the two exponentials troubles me...</p>
<p>Any hints? :) Thanks for any help!</p>
| LinAlgMan | 49,785 | <p><strong>Hints:</strong></p>
<p>First note that $$\phi = \frac{1 + \sqrt{5}}{2} \approx 1.618$$ and $$-\frac1\phi = \frac{1 - \sqrt{5}}{2} \approx -0.618$$ both are the roots of $x^2 - x - 1 = 0$.</p>
<p>Note that
$$ \lim_{n \to \infty}(-0.618)^{n+1} = 0 $$
as $|-0.618| < 1$.</p>
<p>Now note that the $n$-th element of the Fibonacci sequence is
$$ F_n = \frac{1}{\sqrt5} \left( \left( \frac{1 + \sqrt{5}}{2} \right)^n - \left( \frac{1 - \sqrt{5}}{2} \right)^n \right) $$</p>
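<p>Following the hints: the $(-0.618)^{n+1}$ term is bounded by $1$ in absolute value, so $c \le \phi^{n+1} - 1$ already guarantees the inequality, i.e. $n \ge \log_\phi(c+1) - 1$ suffices. A small stdlib Python sketch comparing this closed-form estimate with a brute-force search (the variable names are mine, not from the answer):</p>

```python
import math

phi = (1 + math.sqrt(5)) / 2
psi = (1 - math.sqrt(5)) / 2   # = -1/phi, about -0.618

def rhs(n):
    # right-hand side of the inequality: phi^(n+1) - psi^(n+1)
    # (this equals sqrt(5) * F_{n+1}, so it is non-decreasing in n)
    return phi ** (n + 1) - psi ** (n + 1)

def smallest_n(c):
    # brute force over the non-decreasing sequence rhs(0), rhs(1), ...
    n = 0
    while rhs(n) < c:
        n += 1
    return n

for c in (5, 100, 10_000, 1e6):
    n = smallest_n(c)
    # sufficient closed-form bound: n >= log_phi(c + 1) - 1
    n_bound = math.ceil(math.log(c + 1, phi) - 1)
    assert rhs(n_bound) >= c          # the bound is indeed sufficient
    assert n <= n_bound <= n + 1      # and off by at most one step
```

<p>So for practical purposes one can simply drop the small oscillating term and solve the single-exponential inequality.</p>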
|
4,537,489 | <p>Assume that <span class="math-container">$a>0$</span>, Suppose we have :<br />
<span class="math-container">$$X = \{x\in \mathbb{R} \ : \ x^2 < a \}$$</span><br />
We should prove that this set has a supremum, and that's <strong><span class="math-container">$\sqrt{a}$</span></strong> .<br />
I saw <a href="https://math.stackexchange.com/a/2281226/831100">this answer</a> on one of the related posts:</p>
<blockquote>
<p>Suppose that <span class="math-container">$a>0$</span> then <span class="math-container">$\sqrt{a}$</span> is an upper bound . To see this, use the definition of an open ball . Also <span class="math-container">$0 \in (-\sqrt{a},\sqrt{a})$</span> since <span class="math-container">$|0|<\sqrt{a}$</span>. Therefore supremum exists. Now assume for contradiction that <span class="math-container">$\sqrt{a}$</span> is not the least upper bound. Then there exist <span class="math-container">$M \in R$</span> which is the supremum and <span class="math-container">$M<\sqrt{a}$</span>.Consider <span class="math-container">$z:=\frac{\sqrt{a}-M}{\sqrt{a}}+M$</span>.By construction <span class="math-container">$z>M$</span>. it is impossible that <span class="math-container">$z<\sqrt{a}$</span> since M is the supremum,But if <span class="math-container">$\sqrt{a}\leq z$</span>, then <span class="math-container">$\sqrt{a}\leq\frac{\sqrt{a}-M}{\sqrt{a}}+M \to \sqrt{a}\leq M$</span> ,contradiction.</p>
</blockquote>
<p><strong>My first question:</strong><br />
How did the author recognize that she should use <span class="math-container">$\frac{\sqrt{a}-M}{\sqrt{a}}+M$</span>? Is there a logical process for arriving at this expression?</p>
<p><strong>My second question:</strong><br />
I have a problem with this part:<br />
<span class="math-container">$$\sqrt{a}\leq\frac{\sqrt{a}-M}{\sqrt{a}}+M \to \sqrt{a}\leq M$$</span><br />
Can we conclude from <span class="math-container">$z>M$</span> and <span class="math-container">$\sqrt{a}\leq z$</span> that <span class="math-container">$\sqrt{a}\leq M$</span> ? I think that's not possible!<br />
<strong>Last one:</strong><br />
Is there any better way to prove that?</p>
| José Carlos Santos | 446,262 | <p><a href="https://math.stackexchange.com/questions/4080221/a-closed-subset-of-a-lindel%c3%b6f-space-is-lindel%c3%b6f">Every closed subspace of a Lindelöf space is also a Lindelöf space</a>. But an uncountable discrete space is not Lindlöf.</p>
|
3,712,094 | <p><a href="https://i.stack.imgur.com/S3n1g.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/S3n1g.jpg" alt="enter image description here"></a></p>
<p>For part (a) these are clearly two parallel lines so no points of intersection.<br>
For part (b) this has one point of intersection because these two lines cross at exactly one point.<br>
For parts (c) and (e) we have <span class="math-container">$z=0$</span> and <span class="math-container">$x=2y+1$</span> but what does this mean geometrically?<br>
For part (d) there are no points of intersection; does that mean the three planes are parallel, or just that they never cross anywhere?
Thanks for the help.</p>
| hdighfan | 796,243 | <p><span class="math-container">$z=0$</span> and <span class="math-container">$x=2y+1$</span> gives the equation of a line; <span class="math-container">$x=2y+1$</span> is clearly a line in the <span class="math-container">$xy$</span>-plane, and <span class="math-container">$z=0$</span> forces us to stay in this plane.</p>
<p>For d), you'll notice that all three planes are parallel, since the left hand sides are multiples of each other. (Two of the planes are in fact the same; namely those given by the first and second equations.)</p>
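<p>The actual systems are only in the attached image, so the coefficients below are hypothetical stand-ins; the point is the general recipe: compare the rank of the coefficient matrix with the rank of the augmented matrix. Equal ranks with fewer pivots than unknowns give a family of solutions (a line or a plane); unequal ranks give no intersection. A stdlib-only Python sketch using exact fractions:</p>

```python
from fractions import Fraction

def rank(matrix):
    # Gaussian elimination over the rationals
    m = [[Fraction(x) for x in row] for row in matrix]
    r = 0
    for col in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                factor = m[i][col] / m[r][col]
                m[i] = [a - factor * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Hypothetical pair like parts (c)/(e): z = 0 and x - 2y = 1.
A     = [[0, 0, 1], [1, -2, 0]]          # coefficients of x, y, z
A_aug = [[0, 0, 1, 0], [1, -2, 0, 1]]    # augmented with right-hand sides
assert rank(A) == rank(A_aug) == 2       # consistent; 3 - 2 = 1 free parameter: a line

# Hypothetical inconsistent pair like part (a): x + y + z = 0 and x + y + z = 1.
B     = [[1, 1, 1], [1, 1, 1]]
B_aug = [[1, 1, 1, 0], [1, 1, 1, 1]]
assert rank(B) == 1 and rank(B_aug) == 2 # ranks differ: parallel planes, no intersection
```

<p>The same rank comparison distinguishes part (d)'s "no common point" cases: parallel planes give equal coefficient rows with different right-hand sides.</p>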
|
639,449 | <p>I've seen on Wikipedia that for a complex matrix $X$, $\det(e^X)=e^{\operatorname{tr}(X)}$.</p>
<p>It is clearly true for a diagonal matrix. What about other matrices ?</p>
<p>The series-based definition of exp is useless here.</p>
| not all wrong | 37,268 | <p>An alternative to doing this by normal forms, which perhaps assumes more but is much more natural to me, is (as suggested in the comment on <a href="https://math.stackexchange.com/questions/299528/det-exp-x-e-mathrmtr-x-for-2-dimensional-matrices?rq=1">$\det(\exp X)=e^{\mathrm{Tr}\, X}$ for 2 dimensional matrices</a>) to note that it clearly holds for diagonalizable matrices (see the duplicate <a href="https://math.stackexchange.com/questions/322640/how-to-prove-detea-e-operatornametra">How to prove $\det(e^A) = e^{\operatorname{tr}(A)}$?</a> ...), and by</p>
<ol>
<li>the continuity of $\det, \mathrm{tr}$ and $\exp$</li>
<li>the density of diagonalizable matrices in the space of all complex matrices (<a href="https://math.stackexchange.com/questions/107945/diagonalizable-matrices-with-complex-values-are-dense-in-set-of-n-times-n-comp">Diagonalizable matrices with complex values are dense in set of $n\times n$ complex matrices.</a>)</li>
</ol>
<p>we have the result more generally for <em>all</em> matrices.</p>
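<p>The identity is also easy to spot-check numerically. A stdlib-only Python sketch for a $2\times 2$ complex matrix, with the matrix exponential computed from a partial sum of its Taylor series (which is accurate here because the matrix has small norm; the test matrix itself is an arbitrary choice of mine):</p>

```python
import cmath

def mat_mul(A, B):
    # 2x2 complex matrix product
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def expm(A, terms=40):
    # exp(A) = sum_k A^k / k!  (partial sum; fine for small ||A||)
    result = [[1, 0], [0, 1]]
    power = [[1, 0], [0, 1]]
    fact = 1
    for k in range(1, terms):
        power = mat_mul(power, A)
        fact *= k
        result = [[result[i][j] + power[i][j] / fact for j in range(2)]
                  for i in range(2)]
    return result

det = lambda M: M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[0.3 + 0.1j, -0.2j], [0.5, 0.1 - 0.4j]]
trace = A[0][0] + A[1][1]

# det(exp(A)) should equal exp(tr(A)) up to floating-point error
assert abs(det(expm(A)) - cmath.exp(trace)) < 1e-12
```

<p>Changing the entries of <code>A</code> (keeping the norm modest so the series converges quickly) leaves the assertion intact, as the theorem predicts.</p>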
|
258,205 | <p>I want to know if $\displaystyle{\int_{0}^{+\infty}\frac{e^{-x} - e^{-2x}}{x}dx}$ is finite, or in other words, if the function $\displaystyle{\frac{e^{-x} - e^{-2x}}{x}}$ is integrable in a neighborhood of zero.</p>
| copper.hat | 27,978 | <p>Let $f(x) = \frac{e^{-x} - e^{-2x}}{x}$.</p>
<p>L'Hopital gives $\lim_{x \to 0} f(x)= 1$. Hence in some neighborhood $B(0,\epsilon)$ , $|f(x)| <2$. For $x\geq \epsilon$, we have $\frac{1}{x} \leq \frac{1}{\epsilon}$, and the function $x \mapsto e^{-x} - e^{-2x}$ is clearly integrable.</p>
<p>Hence $\int_0^\infty |f(x)| dx \leq 2 \epsilon + \frac{1}{\epsilon}\int_{\epsilon}^\infty |e^{-x} - e^{-2x}| dx $, and it follows that $f$ is integrable.</p>
|
258,205 | <p>I want to know if $\displaystyle{\int_{0}^{+\infty}\frac{e^{-x} - e^{-2x}}{x}dx}$ is finite, or in other words, if the function $\displaystyle{\frac{e^{-x} - e^{-2x}}{x}}$ is integrable in a neighborhood of zero.</p>
| Douglas B. Staple | 65,886 | <p>Claim: $$\int_0^{\infty} \frac{e^{-u}-e^{-2u}}{u} du = \ln(2).$$</p>
<p>Proof: Let
\begin{align}
C &\equiv \int_0^{\infty} \frac{e^{-u}-e^{-2u}}{u} du\\\ \\
&=\lim_{x\to 0}\left[ \operatorname{Ei}(1,x) - \operatorname{Ei}(1,2x)\right],
\end{align}
where
$$
\operatorname{Ei}(1,x) \equiv \int_x^\infty \frac{e^{-u}}{u} du.
$$
Now, let $$f(x) \equiv \int_1^x \frac{e^{-u}}{u} du.$$
Note:$$\frac{d\operatorname{Ei}(1,x)}{dx} = - \frac{df}{dx},$$
so $$f(x) = -\operatorname{Ei}(1,x) + c,$$ where $c\in \mathbb{R}$. Then $f(1) = -\operatorname{Ei}(1,1)+c$. However,
$$
f(1) = \int_1^1 \frac{e^{-u}}{u} du = 0.
$$
$\therefore c=\operatorname{Ei}(1,1)$, i.e.
$$
\operatorname{Ei}(1,x) = \operatorname{Ei}(1,1) - \int_1^x \frac{e^{-u}}{u} du
$$
Considering that
$$
\ln(x) = \int_1^x \frac{1}{u} du,
$$
we have
$$
\operatorname{Ei}(1,x) = -\ln(x) + \operatorname{Ei}(1,1) + \int_1^x\frac{1-e^{-u}}{u} du \tag{$\star$}.
$$
$(\star)$ applied to the definition of $C$ gives:
\begin{align}
\int_0^{\infty} \frac{e^{-u}-e^{-2u}}{u} du &=\lim_{x\to 0}\left[ \operatorname{Ei}(1,x) - \operatorname{Ei}(1,2x)\right]\\
&=\lim_{x\to 0}\left[ \ln(2x)-\ln(x) - \int_x^{2x} \frac{1-e^{-u}}{u}du \right]\\
&=\ln(2).
\end{align}
Q.E.D.</p>
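<p>The value $\ln 2$ is also easy to confirm numerically (this is an instance of the Frullani integral $\int_0^\infty \frac{e^{-au}-e^{-bu}}{u}\,du=\ln\frac{b}{a}$). A stdlib-only Python sketch using composite Simpson's rule; the integrand extends continuously to $1$ at $u=0$, as noted in the other answer:</p>

```python
import math

def g(x):
    # (e^-x - e^-2x)/x, extended by its limit g(0) = 1
    if x == 0.0:
        return 1.0
    return (math.exp(-x) - math.exp(-2 * x)) / x

def simpson(f, a, b, n):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

# Truncate at x = 40: the tail is below e^-40 / 40, far under our tolerance.
value = simpson(g, 0.0, 40.0, 20_000)
assert abs(value - math.log(2)) < 1e-6
```
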
|
2,937,990 | <p>I need to prove or disprove that in any Boolean algebra: if <span class="math-container">$a+ab=b$</span> then <span class="math-container">$a=b=1$</span> or <span class="math-container">$a=b=0$</span>.</p>
<p>I build the following truth table:
<span class="math-container">$$
\begin{array}{|c|c|c|}
\hline
a & b & a+ab \\ \hline
0 & 0 & 0 \\ \hline
0 & 1 & 0 \\ \hline
1 & 0 & 1 \\ \hline
1 & 1 & 1 \\ \hline
\end{array}
$$</span>
So it does look like the theorem is true. Can I prove it with algebra? If not, how should I prove it?</p>
<p><strong>Edit</strong>:
You guys proved it for the binary Boolean algebra. The theorem is for every Boolean algebra (I just gave an example for binary). How can I prove it for <em>every</em> Boolean algebra?</p>
| Carl Schildkraut | 253,966 | <p>Your <em>reasoning</em> is correct, but the claim that <span class="math-container">$d_n$</span> is decreasing and <span class="math-container">$\lim_{n\to\infty} d_n = 0$</span> needs to be proven, not just stated. (In fact, it is not actually true, although it may <em>look</em> true.)</p>
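<p>Regarding the edit: by the absorption law, $a+ab=a$ in every Boolean algebra, so the hypothesis $a+ab=b$ is equivalent to $a=b$; only in the two-element algebra does that force $a,b\in\{0,1\}$. A stdlib Python brute force over the four-element algebra of subsets of a two-element set (encoded as 2-bit masks, with $+$ as union and the product as intersection; the encoding is my choice) illustrates this:</p>

```python
# Elements of the Boolean algebra P({0, 1}) as bitmasks 0..3:
# 0 = empty set (bottom), 3 = whole set (top), 1 and 2 = the singletons.
elements = range(4)

solutions = [(a, b) for a in elements for b in elements
             if (a | (a & b)) == b]           # the condition a + ab = b

# Exactly the diagonal: a + ab = b holds iff a = b ...
assert solutions == [(a, a) for a in elements]

# ... and (1, 1) is a solution where a = b is a singleton, i.e. neither
# the bottom (0) nor the top (3) of the algebra, so the stated conclusion
# "a = b = 0 or a = b = 1" is special to the two-element algebra.
assert (1, 1) in solutions
```
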
|
2,937,990 | <p>I need to prove or disprove that in any Boolean algebra: if <span class="math-container">$a+ab=b$</span> then <span class="math-container">$a=b=1$</span> or <span class="math-container">$a=b=0$</span>.</p>
<p>I build the following truth table:
<span class="math-container">$$
\begin{array}{|c|c|c|}
\hline
a & b & a+ab \\ \hline
0 & 0 & 0 \\ \hline
0 & 1 & 0 \\ \hline
1 & 0 & 1 \\ \hline
1 & 1 & 1 \\ \hline
\end{array}
$$</span>
So it does look like the theorem is true. Can I prove it with algebra? If not, how should I prove it?</p>
<p><strong>Edit</strong>:
You guys proved it for the binary Boolean algebra. The theorem is for every Boolean algebra (I just gave an example for binary). How can I prove it for <em>every</em> Boolean algebra?</p>
| Jack D'Aurizio | 44,121 | <p>If <span class="math-container">$a_n=\frac{(-1)^n}{\sqrt{n+1}}$</span> then <span class="math-container">$f(x)=\sum_{n\geq 0}a_n x^n$</span> has a singularity of the <span class="math-container">$\frac{1}{\sqrt{x+1}}$</span> kind at <span class="math-container">$x=-1$</span> and <span class="math-container">$g(x)=\sum_{n\geq 0}c_n x^n = f(x)^2$</span> has a simple pole at <span class="math-container">$x=-1$</span>. In particular <span class="math-container">$|c_n|$</span> does not converge to zero as <span class="math-container">$n\to +\infty$</span> and <span class="math-container">$\sum_{n\geq 1}c_n$</span> is not convergent. On the other hand, <span class="math-container">$\sum_{n\geq 1}c_n$</span> is convergent "à-la-Cesàro", like <span class="math-container">$\sum_{n\geq 1}(-1)^n$</span>, i.e. the sequence of the averaged partial sums <em>is</em> convergent. </p>
<p>The situation is much clearer if <span class="math-container">$a_n$</span> is replaced by <span class="math-container">$\frac{\sqrt{\pi}(-1)^n}{4^n}\binom{2n}{n}$</span>: in such a case <span class="math-container">$f(x)=\sqrt{\frac{\pi}{1+x}}$</span> and in the Cesàro sense we have <span class="math-container">$\sum_{n\geq 1}c_n=\frac{\pi}{2}$</span>.</p>
|
221,428 | <p>Is there any pair of random variables (X,Y) such that the expected value of X goes to infinity, the expected value of Y goes to minus infinity, but the expected value of X+Y again goes to infinity?</p>
| user642796 | 8,348 | <p>Your approach won't work, since your $x$ might not belong to $G_i$, but then it would be a limit point of $E_i$, and so $x \in \overline{E_i} \setminus G_i$. </p>
<hr>
<p>In order to prove this, note the following facts.</p>
<ul>
<li>If $G$ is a dense open set, and $U$ is any nonempty open set, then $G \cap U$ is a nonempty open set.</li>
<li>The intersection of finitely many dense open sets is itself dense open.</li>
</ul>
<p>Using the above, we can construct a sequence $\langle x_n \rangle_n$ in $X$ such that for each $n$ there is a $\delta_n$ with $0 < \delta_n \leq 2^{-n}$ such that</p>
<ol>
<li>$\overline{B}(x_n;\delta_n) = \{ x \in X : d(x,x_n) \leq \delta_n \} \subseteq G_n$;</li>
<li>if $m > n$, then $B(x_m;\delta_m) \subseteq B(x_n;\delta_n)$.</li>
</ol>
<p>To recursively construct such a sequence:</p>
<blockquote class="spoiler">
<p> Suppose that $x_1 , \ldots , x_n$ have been appropriately chosen (with associated positive reals $\delta_1 , \ldots , \delta_n$). As $B ( x_n , \delta_n ) \cap G_{n+1}$ is a nonempty open set we may pick some $x_{n+1} \in B ( x_n , \delta_n ) \cap G_{n+1}$. Then there is a $\varepsilon > 0$ such that $B ( x_{n+1} , \varepsilon ) \subseteq B ( x_n , \delta_n ) \cap G_{n+1}$, so set $\delta_{n+1} = \min \{ 2^{-(n+1)} , \frac{\varepsilon}{2} \}$.</p>
</blockquote>
<p>To see how such a sequence proves the result:</p>
<blockquote class="spoiler">
<p> Since for $m > n$ we have that $d(x_n,x_m) < \delta_n \leq 2^{-n}$ it follows that such a sequence (if constructed) must be Cauchy, and so has a limit, $x$. Furthermore for each $n$ as the tail $\langle x_k \rangle_{k=n}^\infty$ of the sequence is contained in $B ( x_n , \delta_n )$, then $x \in \overline{B ( x_n , \delta_n )} \subseteq \overline{B} ( x_n , \delta_n ) \subseteq G_n$. Therefore $x \in \bigcap_n G_n$.</p>
</blockquote>
<hr>
<p>As for the nature of dense open subsets of complete metric spaces, note that for the real line $\mathbb{R}$ the following sets are examples of this kind:</p>
<ul>
<li>The complement of any finite set.</li>
<li>The complement of the integers.</li>
<li>The complement of any convergent sequence (including its limit point).</li>
<li>The complement of the Cantor ternary set.</li>
<li>If you enumerate the rational numbers as $\{ q_i : i \in \mathbb{N} \}$ and let $\{ \epsilon_i : i \in \mathbb{N} \}$ be any sequence of positive reals, then the set $\bigcup_i ( q_i - \epsilon_i , q_i + \epsilon_i )$.</li>
</ul>
<p>Basically in $\mathbb{R}$ a dense open set is an open set (so a union of open intervals) whose complement includes no "non-degenerate intervals" (intervals of non-zero length).</p>
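<p>The nested-ball recursion at the heart of the proof can be sketched in stdlib Python on $[0,1]$, with each dense open set simplified to $(0,1)$ minus a single point (my simplification, not the general case); exact rational arithmetic keeps the nesting honest:</p>

```python
from fractions import Fraction

# Dense open sets G_n = (0, 1) \ {x_n}; here the x_n run through some
# rationals.  The recursion builds nested closed intervals I_n contained
# in G_n, so every point of the final interval lies in every G_n.
xs = [Fraction(p, q) for q in range(2, 8) for p in range(1, q)]

a, b = Fraction(0), Fraction(1)
for x in xs:
    # keep the larger side of (a, b) cut off by x, if x lies inside
    if a < x < b:
        if x - a > b - x:
            b = x
        else:
            a = x
    # then shrink to a closed subinterval strictly inside the open interval
    third = (b - a) / 3
    a, b = a + third, b - third

assert a < b                            # a nonempty closed interval survives
assert all(x < a or x > b for x in xs)  # and it avoids every removed point
```

<p>The surviving interval plays the role of the limit point $x$ in the proof: it sits in the intersection of all the $G_n$.</p>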
|
141,655 | <blockquote>
<p>What is the chance that at least two people were born on the same day
of the week if there are 3 people in the room?</p>
</blockquote>
<p>I'm wondering if my solution is accurate, as my answer was different than the solution I found:</p>
<p>Probability that there are at least 2 people in the room born on the same day = 1 - (No one was born on the same day) - (Exactly one person was born on the same day)</p>
<p>There are (3 choose 2) different pairs of couples. Each couple has the same birthday as another couple with the chances of 1/7 and different with chances 6/7. Thus:</p>
<p>$$1 – (6/7)^3 – 3(1/7)(6/7)^2 = 0.0553$$</p>
<p>Thanks for any help!</p>
| Dennis Gulko | 6,948 | <p>Another possible approach: name the guys $A,B,C$. Look at the ordered triple $d_1,d_2,d_3$ of days on which $A,B,C$ were born respectively. You have $7^3$ such triples. Now, count the "good" triples. i.e triples in which at least two guys have birthday on the same day:<br>
1) exactly two people have birthday on the same day: choose two days of the week: $\frac{7!}{5!}=42$ (ordered) and choose the two guys that were born on the first day: $\binom{3}{2}=3$. Total of $3\cdot 42=126$ triples.<br>
2) all the guys were born on the same day: choose the day: 7 options.<br>
So you got $126+7=133$ good triples out of $7^3$. So the probability is $\frac{133}{7^3}\sim0.39$</p>
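<p>This count is easy to confirm by exhaustive enumeration over all $7^3$ ordered triples (stdlib Python):</p>

```python
from itertools import product
from fractions import Fraction

triples = list(product(range(7), repeat=3))      # 7^3 = 343 ordered triples
good = [t for t in triples if len(set(t)) < 3]   # at least two share a weekday

assert len(triples) == 343
assert len(good) == 133                          # 126 (exactly two) + 7 (all three)
assert Fraction(len(good), len(triples)) == Fraction(133, 343)
```
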
|
4,019,119 | <p>I'm struggling to find <span class="math-container">$$\lim _{x\to 0}\left(2-e^{\arcsin^{2}\left(\sqrt{x}\right)}\right)^{\frac{3}{x}}$$</span></p>
<p>I've tried the following:
<span class="math-container">$$\lim_{x \to x_0} (ax)^{bx} = \lim_{x \to x_0} e^{\ln\left((ax)^{bx}\right)} = \lim_{x \to x_0} e^{bx \ln(ax)} = e^{\lim_{x \to x_0} bx \ln(ax)}$$</span>
which leads me to
<span class="math-container">$$ = e^{\lim_{x \to 0} \frac{3\ln(2-e^{\arcsin^2(\sqrt{x})})}{x}}$$</span>
Is this the right way to go? If yes, how do I get rid of the division by x?
Thanks for any help!</p>
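<p>For what it's worth, a series expansion gives $\arcsin^2(\sqrt{x}) = x + \frac{x^2}{3} + O(x^3)$, so the exponent above tends to $-3$ and the limit should be $e^{-3}$. A quick stdlib Python check at small $x$ (a numerical sanity check, not a proof):</p>

```python
import math

def h(x):
    # (2 - exp(arcsin(sqrt(x))^2)) ** (3/x)
    return (2 - math.exp(math.asin(math.sqrt(x)) ** 2)) ** (3 / x)

x = 1e-6
assert abs(h(x) - math.exp(-3)) < 1e-4   # h(x) is already very close to e^-3
```
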
| RavenclawPrefect | 214,490 | <p>Shortly after reading Edward H's answer, I realized that there is a self-contained compactness argument, which I missed the first time around; I thought I would present it here.</p>
<p>Let <span class="math-container">$T$</span> be an infinite set of polyominoes. Say that an infinite "polyomino" <span class="math-container">$P$</span> is an <em>extension</em> of <span class="math-container">$T$</span> if, for every finite region in the plane, its possible intersections with that region could also be an intersection of <span class="math-container">$t$</span>, for infinitely many <span class="math-container">$t\in T$</span>. (Intuitively, we're saying that local patches of <span class="math-container">$P$</span> look like part of a generic large element of <span class="math-container">$T$</span> - if you have a finite field of view, you can't distinguish it from infinitely many possibilities in <span class="math-container">$T$</span>.)</p>
<p>Then, we claim that for every region <span class="math-container">$R$</span> which can be tiled by every element of <span class="math-container">$T$</span>, <span class="math-container">$R$</span> has a tiling using extensions of <span class="math-container">$T$</span>. (As a trivial consequence, at least one such extension must exist.)</p>
<p>In the original example, <span class="math-container">$T$</span> is the set of all <span class="math-container">$1\times n$</span> rectangles, and the extensions are <span class="math-container">$\{\text{infinite ray},\text{infinite line}\}$</span> - but since infinite lines can be tiled by infinite rays, we can just use the rays for everything.</p>
<p><strong>Proof:</strong> For every positive integer <span class="math-container">$N$</span>, let <span class="math-container">$S_N$</span> denote the square <span class="math-container">$[-N,N]\times[-N,N]$</span>. Consider the restrictions of the tilings of <span class="math-container">$R$</span> to <span class="math-container">$R\cap S_n$</span> for each tile <span class="math-container">$t\in T$</span>. Since there are finitely many ways to partition <span class="math-container">$R\cap S_n$</span> into tiles, but infinitely many tilings of <span class="math-container">$R$</span> by some <span class="math-container">$t\in T$</span>, at least one of these partitions of <span class="math-container">$R\cap S_n$</span> must extend to a <span class="math-container">$t$</span>-tiling of <span class="math-container">$R$</span> for infinitely many <span class="math-container">$t\in T$</span>.</p>
<p>Now, fix our partition of <span class="math-container">$S_N\cap R$</span>, and consider ways of extending it to a partition of <span class="math-container">$S_{N+1}\cap R$</span>. Again, there are finitely many ways to do this, so again, of the infinitely many ways to extend to a <span class="math-container">$t$</span>-tiling of <span class="math-container">$R$</span>, infinitely many <span class="math-container">$t$</span> must cluster around one such partition. (The wording of this suggests a canonical choice of tiling for each <span class="math-container">$t\in T$</span>, but we don't need to make this selection.)</p>
<p>Extending this construction inductively, we end up with a tiling of <span class="math-container">$R$</span> with tiles that have the property that every restriction of them to a finite square resembles (a part of) infinitely many tiles in <span class="math-container">$T$</span>. Hence, all our tiles are extensions of <span class="math-container">$T$</span>.</p>
<hr />
<p>To show that this generality is actually good for something, here are some other special cases of the above theorem:</p>
<ul>
<li><p>A region tileable by arbitrarily large squares can be tiled by infinite quadrants. (The possible extensions are quadrants, half-planes, and the whole plane, but all of these can be tiled by quadrants.)</p>
</li>
<li><p>A region tileable by arbitrarily long "zigzag" shapes can be tiled by an infinite staircase ray, i.e. the points <span class="math-container">$\{(x,y)\ |\ x,y\ge 0, x-y\in \{0,1\}\}$</span>.</p>
</li>
<li><p>A region tileable by all "hooks" (<span class="math-container">$1\times n$</span> rectangles with an extra cell on the side of one end, e.g. <code>:.......</code>) can be tiled using an infinite ray and an infinite ray with a cell on the side of one end.</p>
</li>
</ul>
<p>The same proof extends to most reasonable sorts of grids - <span class="math-container">$\mathbb{Z}^n$</span>, polyiamonds on the triangular grid, etc.</p>
|
3,087,570 | <p>The "school identities with derivatives", like
<span class="math-container">$$
(x^2)'=2x
$$</span>
are not identities in the normal sense, since they do not admit substitutions. For example, if we insert <span class="math-container">$1$</span> instead of <span class="math-container">$x$</span> into the identity above, the resulting equality will not be true:
<span class="math-container">$$
(1^2)'=2\cdot 1.
$$</span>
That is why when explaining this to my students I present the derivative in the left side as a formal operation with strings of symbols (and interpret the identity as the equality of strings of symbols). </p>
<p>This, however, takes a lot of supplementary discussions and proofs which look very bulky, and I don't feel that this is a good way to explain the matter. In addition, people's reaction to <a href="https://math.stackexchange.com/questions/1501585/calculus-as-a-structure-in-the-sense-of-model-theory">this question of mine</a> makes me think that there are no texts to which I could refer when I take this point of view.</p>
<p>I want to ask people who teach mathematics how they bypass this difficulty. Are there tricks for introducing rigor into the "elementary identities with derivatives" (and similarly with integrals)?</p>
<p>EDIT. It seems to me I have to explain in more detail my own understanding of how this can be bypassed. I don't follow this idea accurately, in detail, but my "naive explanations" are the following. I describe Calculus as a first-order language with a list of variables (<span class="math-container">$x$</span>, <span class="math-container">$y$</span>,...) and a list of functional symbols (<span class="math-container">$+$</span>, <span class="math-container">$-$</span>, <span class="math-container">$\sin$</span>, <span class="math-container">$\cos$</span>, ...) and the functions which are not defined everywhere, like <span class="math-container">$x^y$</span>, are interpreted as relation symbols (of course this requires a lot of preparation and discussion, which is why I usually omit these details, and why I don't like this way). After that the derivative is introduced as a formal operation on <a href="https://en.wikipedia.org/wiki/First-order_logic#Terms" rel="nofollow noreferrer">terms</a> (expressions) of this language, and finally I prove that this operation coincides with the usual derivative on "elementary functions" (i.e. on the functions which are defined by terms of this language). </p>
<p>Derek Elkins suggests a simpler way, namely, to declare <span class="math-container">$x$</span> a notation of the function <span class="math-container">$t\mapsto t$</span>. Are there texts where this is done consistently? (I mean, with examples, exercises, discussions of corollaries...)</p>
<p>@Rebellos, your identity
<span class="math-container">$$
\frac{d}{dx}(x^2)\Big|_{x=1}=2\cdot 1
$$</span>
becomes true either if you understand the derivative as I describe, i.e. as an operation on expressions (i.e. on terms of the first order language), since in this case it becomes a corollary of the equality
<span class="math-container">$$
\frac{d}{dx}(x^2)=2\cdot x,
$$</span>
or if by substitution you mean something special, not what people usually mean, i.e. not the result of the replacement of <span class="math-container">$x$</span> by <span class="math-container">$1$</span> everywhere in the expression (and in this case you should explain this manipulation, because I don't understand it). Anyway, note that your point is not what Derek Elkins suggests, since for him <span class="math-container">$x$</span> is a notation for the function <span class="math-container">$t\mapsto t$</span>, so it can't be substituted by <span class="math-container">$1$</span>. </p>
| H Huang | 604,218 | <p>One way to do it is to say <span class="math-container">$f'(x)=2x$</span>, and define <span class="math-container">$f(x)=x^2$</span>. This way, it should be relatively clear that <span class="math-container">$x$</span> is not some placeholder variable, but rather the independent variable in an equation. Additionally, if you introduce the Chain Rule, it should become even more clear. If students still try to use function composition without using the Chain Rule, switching to only using Leibniz notation for a while should quickly stifle that urge. You can then introduce prime notation again after the desire to directly compose in functions has been suppressed.</p>
|
3,087,570 | <p>The "school identities with derivatives", like
<span class="math-container">$$
(x^2)'=2x
$$</span>
are not identities in the normal sense, since they do not admit substitutions. For example, if we insert <span class="math-container">$1$</span> instead of <span class="math-container">$x$</span> into the identity above, the resulting equality will not be true:
<span class="math-container">$$
(1^2)'=2\cdot 1.
$$</span>
That is why when explaining this to my students I present the derivative in the left side as a formal operation with strings of symbols (and interpret the identity as the equality of strings of symbols). </p>
<p>This, however, takes a lot of supplementary discussions and proofs which look very bulky, and I don't feel that this is a good way to explain the matter. In addition, people's reaction to <a href="https://math.stackexchange.com/questions/1501585/calculus-as-a-structure-in-the-sense-of-model-theory">this question of mine</a> makes me think that there are no texts to which I could refer when I take this point of view.</p>
<p>I want to ask people who teach mathematics how they bypass this difficulty. Are there tricks for introducing rigor into the "elementary identities with derivatives" (and similarly with integrals)?</p>
<p>EDIT. It seems to me I have to explain in more detail my own understanding of how this can be bypassed. I don't follow this idea accurately, in detail, but my "naive explanations" are the following. I describe Calculus as a first-order language with a list of variables (<span class="math-container">$x$</span>, <span class="math-container">$y$</span>,...) and a list of functional symbols (<span class="math-container">$+$</span>, <span class="math-container">$-$</span>, <span class="math-container">$\sin$</span>, <span class="math-container">$\cos$</span>, ...) and the functions which are not defined everywhere, like <span class="math-container">$x^y$</span>, are interpreted as relation symbols (of course this requires a lot of preparation and discussion, which is why I usually omit these details, and why I don't like this way). After that the derivative is introduced as a formal operation on <a href="https://en.wikipedia.org/wiki/First-order_logic#Terms" rel="nofollow noreferrer">terms</a> (expressions) of this language, and finally I prove that this operation coincides with the usual derivative on "elementary functions" (i.e. on the functions which are defined by terms of this language). </p>
<p>Derek Elkins suggests a simpler way, namely, to declare <span class="math-container">$x$</span> a notation of the function <span class="math-container">$t\mapsto t$</span>. Are there texts where this is done consistently? (I mean, with examples, exercises, discussions of corollaries...)</p>
<p>@Rebellos, your identity
<span class="math-container">$$
\frac{d}{dx}(x^2)\Big|_{x=1}=2\cdot 1
$$</span>
becomes true either if you understand the derivative as I describe, i.e. as an operation on expressions (i.e. on terms of the first order language), since in this case it becomes a corollary of the equality
<span class="math-container">$$
\frac{d}{dx}(x^2)=2\cdot x,
$$</span>
or if by substitution you mean something special, not what people usually mean, i.e. not the result of the replacement of <span class="math-container">$x$</span> by <span class="math-container">$1$</span> everywhere in the expression (and in this case you should explain this manipulation, because I don't understand it). Anyway, note that your point is not what Derek Elkins suggests, since for him <span class="math-container">$x$</span> is a notation for the function <span class="math-container">$t\mapsto t$</span>, so it can't be substituted by <span class="math-container">$1$</span>. </p>
| Somos | 438,089 | <p>There is no one right answer to this question. It depends on what the students are willing to accept. One approach mentioned comes from the Wikipedia article on <a href="https://en.wikipedia.org/wiki/Dual_number" rel="nofollow noreferrer">Dual numbers</a>. The key idea is that
<span class="math-container">$\, f(x+\epsilon) = f(x)+f'(x)\,\epsilon\,$</span> where <span class="math-container">$\,\epsilon^2=0\,$</span> is postulated. Given this, then just use ordinary algebra <span class="math-container">$\,(x\!+\!\epsilon)^2 = (x\!+\!\epsilon)(x\!+\!\epsilon) = x^2\! +\! 2x\,\epsilon \!+\! \epsilon^2 = x^2\! +\! 2x\,\epsilon \, $</span> and therefore <span class="math-container">$\,(x^2)' = 2x.\,$</span> Notice that here you <strong>can</strong> do substitutions. For example,
<span class="math-container">$\, x \to 3+\epsilon,\,$</span> and then <span class="math-container">$\,x^2 \to (3+\epsilon)^2 = 9+6\,\epsilon,\,$</span> and thus <span class="math-container">$\, (9+6\,\epsilon)' = 6\,$</span> where we define <span class="math-container">$\, (a+b\,\epsilon)' := b.$</span></p>
|
39,551 | <p>How can I use <em>Mathematica</em> to equate coefficients in a non-power-series equation?</p>
<p>For example, I would like to take an equation like the following:
$$af_x+\frac{b}{2}f_xf_y+chf_x=f_x+e^af_x+3f_xf_y+2bhf_x$$
and produce the following system:
$$a=1+e^a$$
$$\frac{b}{2}=3$$
$$c=2b$$
<strong>EDIT:</strong> This is a rather small example. If possible, I would prefer a solution that requires minimal human inspection of the original equation. The equations I will be working with will have many, perhaps hundreds of partial derivative terms, and it would be unfeasible to do things like individually pick them out. Ideally, I would like to specify only the unknowns I am interested in (in this case, {a, b, c}) and let <em>Mathematica</em> take it from there.</p>
| Daniel Lichtblau | 51 | <p>Also there is <code>MonomialList</code>.</p>
<pre><code>coefficientRelations[expr_, params_] := Module[
{vars},
vars = DeleteCases[Variables[expr],
vv_ /; Internal`DependsOnQ[vv, params]];
MonomialList[expr, vars] /. Thread[vars -> 1]
]
</code></pre>
<p>Your example is then as follows.</p>
<pre><code>expr = a fx + b fx fy/2 +
c h fx - (fx + Exp[a] fx + 3 fx fy + 2 b h fx);
parameters = {a, b, c};
coefficientRelations[expr, parameters]
(* Out[104]= {-3 + b/2, -2 b + c, -1 + a - E^a} *)
</code></pre>
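<p>For readers without <em>Mathematica</em>, roughly the same coefficient extraction can be sketched with SymPy (this translation is mine, not part of the answer; <code>Poly</code> with the non-parameter symbols as generators plays the role of <code>MonomialList</code>):</p>

```python
import sympy as sp

a, b, c, fx, fy, h = sp.symbols('a b c fx fy h')

expr = (a*fx + sp.Rational(1, 2)*b*fx*fy + c*h*fx
        - (fx + sp.exp(a)*fx + 3*fx*fy + 2*b*h*fx))

# treat fx, fy, h as the variables; a, b, c stay inside the coefficients
poly = sp.Poly(expr, fx, fy, h)
relations = [sp.Eq(coeff, 0) for coeff in poly.coeffs()]
```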
|
61,798 | <p>Are there any generalisations of the identity $\sum\limits_{k=1}^n {k^3} = \bigg(\sum\limits_{k=1}^n k\bigg)^2$ ?</p>
<p>For example can $\sum {k^m} = \left(\sum k\right)^n$ be valid for anything other than $m=3 , n=2$ ?</p>
<p>If not, is there a deeper reason for this identity to be true only for the case $m=3 , n=2$?</p>
| Gerben | 6,004 | <p>Not a completely rigorous answer, but you should be able to turn it into one.</p>
<p>By comparing the sums to their corresponding integrals $\int_0^n \mathrm{d}x x^m$, you can see that $$\sum k^m = \frac{1}{m+1} n^{m+1} + \mathcal{O}(n^m).$$ Also, $$(\sum k)^q = \frac{1}{2^{q}} n^{2q} + \mathcal{O}(n^{2q-1}).$$ By comparing leading order terms, equality can only occur if $m+1 = 2q$ and $m+1 = 2^q$, i.e. $2q = 2^q$, which (apart from the trivial solution $q=1$, $m=1$) implies that $q = 2$ and $m = 3.$</p>
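<p>A brute-force search confirms the leading-order argument (this sketch is my addition, not part of the answer; testing each candidate identity at <span class="math-container">$n=1,\dots,8$</span> is enough to rule out every other small pair):</p>

```python
def holds(m, q, n_max=8):
    """Does sum(k^m, k=1..n) == (sum(k, k=1..n))^q for all tested n?"""
    return all(
        sum(k**m for k in range(1, n + 1)) == sum(range(1, n + 1))**q
        for n in range(1, n_max + 1)
    )

# only the trivial (1, 1) and the classical (3, 2) survive
solutions = {(m, q) for m in range(1, 8) for q in range(1, 8) if holds(m, q)}
```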
|
2,603,799 | <p>Good morning, I need help with this exercise.</p>
<blockquote>
<p>Prove all tangent plane to the cone $x^2+y^2=z^2$ goes through the origin</p>
</blockquote>
<p><strong>My work:</strong></p>
<p>Let $f:\mathbb{R}^3\rightarrow\mathbb{R}$ defined by $f(x,y,z)=x^2+y^2-z^2$</p>
<p>Then,</p>
<p>$\nabla f(x,y,z)=(2x,2y,-2z)$</p>
<p>Let $(a,b,c)\in\mathbb{R}^3$ then
$\nabla f(a,b,c)=(2a,2b,-2c)$</p>
<p>By definition, the equation of the tangent plane is</p>
<p>\begin{eqnarray}
\langle(2a,2b,-2c),(x-a,y-b,z-c)\rangle &=& 2a(x-a)+2b(y-b)+2c(z-c)\\
&=&2ax-2a^2+2by-2b^2+2cz-2c^2 \\
&=&0
\end{eqnarray}</p>
<p>In this step I'm stuck; can someone help me?</p>
| user284331 | 284,331 | <p>The equation for the plane should be $2a(x-a)+2b(y-b)-2c(z-c)=2ax-2a^{2}+2by-2b^{2}-2cz+2c^{2}=0$. Now the point $(a,b,c)$ lies on the cone, so $a^{2}+b^{2}-c^{2}=0$, so simplifying the equation for the plane, we have then $2ax+2by-2cz=0$ and this equation goes through the origin since $2a\cdot 0+2b\cdot 0-2c\cdot 0=0$.</p>
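<p>A quick symbolic check of this computation (my sketch, using SymPy; not part of the answer):</p>

```python
import sympy as sp

a, b, c, x, y, z = sp.symbols('a b c x y z', real=True)

# tangent plane to x^2 + y^2 - z^2 = 0 at a point (a, b, c) of the cone
plane = 2*a*(x - a) + 2*b*(y - b) - 2*c*(z - c)

# evaluate at the origin; on the cone a^2 + b^2 - c^2 = 0, so this vanishes
at_origin = sp.expand(plane.subs({x: 0, y: 0, z: 0}))   # -2a^2 - 2b^2 + 2c^2
```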
|
2,590,068 | <p>$$\epsilon^\epsilon=?$$
Where $\epsilon^2=0$, $\epsilon\notin\mathbb R$.
There is a formula for exponentiation of dual numbers, namely:
$$(a+b\epsilon)^{c+d\epsilon}=a^c+\epsilon(bca^{c-1}+da^c\ln a)$$
However, this formula breaks down in multiple places for $\epsilon^\epsilon$, yielding many undefined expressions like $0^0$ and $\ln 0$. So, here's my question: what is $\epsilon^\epsilon$ equal to for <a href="https://en.wikipedia.org/wiki/Dual_number" rel="noreferrer">dual numbers</a>?</p>
| PC1 | 960,197 | <p>The problem is that you can't isolate <span class="math-container">$\epsilon^\epsilon$</span> as a standalone expression like what you're doing.</p>
<p>You need to consider the whole expression so it keeps its sense. So if we go back to your expression:
<span class="math-container">\begin{align}
(a+b\epsilon)^{c+d\epsilon}&=\left(a+b\epsilon\right)^c\left(a+b\epsilon\right)^{d\epsilon}\\
&=\left(a^c+ca^{c-1}b\epsilon\right)\left(1+d\log(a)\epsilon\right)\\
&=a^c+a^{c-1}\left(ad\log(a)+bc\right)\epsilon
\end{align}</span></p>
<p>We obtain the relations in the second line by looking at the Taylor expansions for small <span class="math-container">$\epsilon$</span> and using the fact that <span class="math-container">$\epsilon^2=0$</span>.</p>
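<p>The final formula can be cross-checked numerically (a sketch of mine; the function name <code>dual_pow</code> is not standard). The <span class="math-container">$\epsilon$</span>-part should agree with the <span class="math-container">$t$</span>-derivative of <span class="math-container">$f(t)=(a+bt)^{c+dt}$</span> at <span class="math-container">$t=0$</span>:</p>

```python
import math

def dual_pow(a, b, c, d):
    """(a + b*eps)^(c + d*eps) for a > 0, per the formula above."""
    return a**c, a**(c - 1) * (a*d*math.log(a) + b*c)

a, b, c, d = 2.0, 3.0, 1.5, 0.5
exact_real, exact_eps = dual_pow(a, b, c, d)

# central difference of f(t) = (a + b t)^(c + d t) at t = 0
f = lambda t: (a + b*t)**(c + d*t)
h = 1e-6
numeric = (f(h) - f(-h)) / (2*h)
```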
|
55,482 | <p>I write code that creates a compiled function, and then call that function over and over to generate a list. I run this code on a remote server via a batch job, and will run several instances of it. Sometimes when I make changes to the code, I make a mistake, and inside the compiled function there is an undefined variable, such that when the function is called I get the following error messages (repeated several times):</p>
<pre><code> CompiledFunction::cfse: Compiled expression w should be a machine-size complex number.
CompiledFunction::cfex: Could not complete external evaluation at instruction 18; proceeding with uncompiled evaluation.
</code></pre>
<p>This causes massive memory usage (which puts me on the system administrator's bad side), and the results are garbage since there was a mistake in the code. Is there any way to force the code to abort and quit the program rather than proceed with uncompiled evaluation?</p>
| Karsten 7. | 18,476 | <p>You can add</p>
<pre><code>RuntimeOptions -> {"EvaluateSymbolically" -> False}
</code></pre>
<p>to your <code>Compile</code> function.<br>
Consult <a href="http://reference.wolfram.com/language/ref/RuntimeOptions.html" rel="nofollow">RuntimeOptions</a> for more details.</p>
|
3,853,351 | <p>Given an n-dimensional ellipsoid in <span class="math-container">$\mathbb{R}^n$</span>, is any orthogonal projection of it to a subspace also an ellipsoid? Here, an ellipsoid is defined as</p>
<p><span class="math-container">$$\Delta_{A, c}=\{x\in \Bbb R^n\,:\, x^TAx\le c\}$$</span></p>
<p>where <span class="math-container">$A$</span> is a symmetric positive definite n by n matrix, and <span class="math-container">$c > 0$</span>.</p>
<p>I'm just thinking about this because it gives a nice visual way to think about least-norm regression.</p>
<p>I note that SVD proves immediately that any linear image (not just an orthogonal projection) of an ellipsoid is also an ellipsoid, however there might be a more geometrically clever proof when the linear map is an orthogonal projection.</p>
| Arnaud | 122,865 | <p>Yes they do. You can prove it by induction on the codimension of the subspace you project to. For <span class="math-container">$x\in \operatorname{Vect}(e_1,\ldots, e_{n-1})$</span> there exists <span class="math-container">$t \in \mathbb{R}$</span> such that <span class="math-container">$x+te_n$</span> belongs to <span class="math-container">$\Delta$</span> iff the discriminant of the degree-<span class="math-container">$2$</span> inequality <span class="math-container">$(x+te_n)^TA(x+te_n)\leq c$</span> w.r.t. the unknown <span class="math-container">$t$</span> is non-negative, which turns out to still be a quadratic inequality in <span class="math-container">$x$</span>.</p>
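<p>The elimination of <span class="math-container">$t$</span> can be made concrete: minimizing the quadratic over <span class="math-container">$t$</span> leaves the quadratic form of the Schur complement <span class="math-container">$S=A_{11}-a_{12}a_{12}^T/a_{22}$</span>, so the shadow is again an ellipsoid. A small numeric sketch of mine, with a fixed positive definite <span class="math-container">$3\times 3$</span> example:</p>

```python
# symmetric positive definite example; project out the last coordinate
A = [[4.0, 1.0, 1.0],
     [1.0, 3.0, 1.0],
     [1.0, 1.0, 2.0]]

a22 = A[2][2]
# Schur complement of the (3,3) entry: the matrix of the projected ellipsoid
S = [[A[i][j] - A[i][2]*A[2][j]/a22 for j in range(2)] for i in range(2)]

def q(x, y, t):          # full quadratic form on (x, y, t)
    v = (x, y, t)
    return sum(v[i]*A[i][j]*v[j] for i in range(3) for j in range(3))

def q_proj(x, y):        # Schur-complement form on the projection
    v = (x, y)
    return sum(v[i]*S[i][j]*v[j] for i in range(2) for j in range(2))

def t_star(x, y):        # the minimizing t for given (x, y)
    return -(A[2][0]*x + A[2][1]*y)/a22
```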
|
2,476,194 | <p>I am trying to prove that $I =(x^2+1,y-1)$ is a maximal ideal in $\mathbb{Q}[x,y]$, but I am having a hard time understanding what this ideal even looks like. I know that I can prove it's a maximal ideal by proving $\mathbb{Q}[x,y]/I$ is a field, but I'm also having a hard time understanding what this quotient looks like, or what it is isomorphic to.</p>
<p>Basically, some intuition into dealing with ideals generated by multiple polynomials, as well as how to deal with quotient rings would be much appreciated.</p>
<p>Edit: My first thoughts are that $\mathbb{Q}[x,y]/(x^2+1,y-1) \simeq \mathbb{Q}[x]/(x^2+1) \simeq \mathbb{Q}[i],$ which is a field, but I am not sure how to make these isomorphisms rigorous.</p>
<p>Also, is there a way to show this ideal is maximal other than showing the quotient is a field? Or is this the best way?</p>
| Zach Teitler | 343,280 | <p>@luthien Your idea is correct. The maps you are thinking of are indeed isomorphisms. One way to show they are isomorphisms is to start with the map $f : \mathbb{Q}[x,y] \to \mathbb{Q}[i]$ given by $x \mapsto i$, $y \mapsto 1$. Then $x^2+1$ and $y-1$ are in the kernel of $f$.</p>
<p><em>Claim:</em> The kernel of $f$ is the ideal generated by $x^2+1$ and $y-1$, namely, the ideal $I$.</p>
<p>I'm sure there is a "nice" way to do this. Here is a crude way.</p>
<p>Certainly $I \subseteq \ker f$. Conversely suppose $p = p(x,y) \in \ker f$. Say the highest total degree of any term of $p$ is $d$, and suppose by induction that every polynomial in $\ker f$ of degree strictly less than $d$ is in $I$ (the cases of polynomials of degree $0,1,2$ being "easy"). Let $c x^a y^{d-a}$ be one of the terms of degree $d$ appearing in $p$. If $a \geq 2$, then $c x^{a-2} y^{d-a} (x^2+1) \in I \subseteq \ker f$, and subtracting it from $p$ amounts to changing this term into $-c x^{a-2} y^{d-a}$, which has smaller total degree. Or if $a < 2$, $d-a \geq 1$, then we can similarly change $y$ into $1$. (If $a < 2$ and $d-a < 1$, then $d < 3$ and we are in the "initial" cases.) After changing enough of these terms we get a polynomial $p'$ that differs from $p$ by an element of $I$, and by induction $p' \in I$, so $p \in I$ too.</p>
<p><em>Claim:</em> The map $f$ is surjective.</p>
<p>Immediate.</p>
<p>By the First Isomorphism Theorem, the quotient ring $\mathbb{Q}[x,y]/I$ is isomorphic to the image $\mathbb{Q}[i]$, which is a field. So $I$ is maximal.</p>
<p>Alternative approaches? Well, you could suppose $I \subseteq J \subseteq \mathbb{Q}[x,y]$, and suppose $p \in J$, $p \notin I$, of smallest possible total degree (among all elements in $J \setminus I$). A similar "reduction" argument (modifying $p$ by replacing $x^2$ with $-1$ and $y$ with $1$) shows that $p$ is at most linear in $x$, and constant in $y$. If $0 \neq p = ax+b \in J$, then $(ax-b)p = a^2x^2-b^2 \in J$ too. And $J$ contains $x^2+1$ since $I \subseteq J$. From $a^2x^2-b^2$ and $x^2+1$ we get $-(a^2+b^2)=(a^2x^2-b^2)-a^2(x^2+1) \in J$. So $1 \in J$. That is a pretty direct proof that $I$ is maximal — it avoids quotient rings. It's beyond me to judge if one way or the other is "best".</p>
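<p>The reduction argument in the first claim can be mirrored in SymPy (my sketch, not part of the answer): <span class="math-container">$\{x^2+1,\,y-1\}$</span> is already a Gröbner basis, elements of the ideal reduce to <span class="math-container">$0$</span>, and every polynomial reduces to a <span class="math-container">$\mathbb{Q}$</span>-linear combination of <span class="math-container">$1$</span> and <span class="math-container">$x$</span>, mirroring <span class="math-container">$\mathbb{Q}[x,y]/I\simeq\mathbb{Q}[i]$</span>:</p>

```python
import sympy as sp

x, y = sp.symbols('x y')
I = [x**2 + 1, y - 1]   # coprime leading terms, hence a Groebner basis

# an element of the ideal (so in ker f) reduces to 0
_, r1 = sp.reduced(y*(x**2 + 1) + (y - 1)*x**3, I, x, y)

# a general polynomial reduces to something of the form p + q*x
_, r2 = sp.reduced(x**3 + y**2, I, x, y)   # x^3 -> -x, y^2 -> 1
```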
|
318,983 | <p>$$\int_{-\infty}^{\infty} \frac{x^2}{x^6+9}dx$$ I'm a bit puzzled as to how to go about solving this integral. I can see that it isn't undefined on $(-\infty,\infty)$. But I just need maybe a hint on how to go about solving the problem.</p>
| amWhy | 9,003 | <p>Hint: put $\;u = x^3$, so $\,du \;=\; 3x^2\, dx \;\implies\; x^2\, dx \;= \;\frac 13\, du$</p>
<p>This gives you $$\frac 13 \int_{-\infty}^\infty \frac {du}{u^2 + (3)^2} $$</p>
<p>Look familiar?:</p>
<p>Using one more substitution, let $\quad u =3\tan(\theta),\quad du =3\sec^2(\theta)\,d\theta, \quad \theta=\arctan\left(\frac{u}{3}\right)$</p>
<p>And determine the corresponding bounds for integrating.</p>
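<p>Carrying the hint through gives <span class="math-container">$\frac 13\cdot\frac 13\arctan(u/3)\big|_{-\infty}^{\infty}=\pi/9$</span>, which a crude numeric check confirms (my sketch; a pure-Python midpoint rule, exploiting that the integrand is even and decays like <span class="math-container">$x^{-4}$</span>):</p>

```python
import math

def integrand(x):
    return x*x / (x**6 + 9)

# even integrand: 2 * integral over [0, cutoff]; the tail beyond the
# cutoff is bounded by the integral of x^-4, about 1/(3*60^3) ~ 1.5e-6
N, cutoff = 200_000, 60.0
h = cutoff / N
total = sum(integrand((i + 0.5) * h) for i in range(N))   # midpoint rule
approx = 2 * h * total        # should be close to pi/9 ~ 0.349066
```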
|
2,929,203 | <p>Suppose we define the relation <span class="math-container">$∼$</span> by <span class="math-container">$v∼w$</span> (where <span class="math-container">$v$</span> and <span class="math-container">$w$</span> are arbitrary elements in <span class="math-container">$R^n$</span>) if there exists a matrix <span class="math-container">$$A∈ GL_n(R)$$</span> such that <span class="math-container">$v=Aw$</span>. What are the equivalence classes for <span class="math-container">$∼$</span> in this case? NOTE:<span class="math-container">$GL_n(R)$</span> is a set that contains all the <span class="math-container">$n×n$</span> matrices with <span class="math-container">$det≠0$</span></p>
| Yanko | 426,577 | <p>The relation works like this: the zero vector is related only to itself, and all the other vectors are related to each other.</p>
<p>Proof: if <span class="math-container">$v=0$</span> then <span class="math-container">$Av=0$</span> for every matrix and therefore <span class="math-container">$0\sim w$</span> if and only if <span class="math-container">$w=0$</span>.</p>
<p>Now take <span class="math-container">$v\not = 0$</span> and <span class="math-container">$w\not = 0$</span>. I claim that <span class="math-container">$v\sim w$</span>, since <span class="math-container">$v\not = 0$</span> you can find <span class="math-container">$v_1,...,v_{n-1}$</span> such that <span class="math-container">$\mathcal{B}_v:=\{v,v_1,...,v_{n-1}\}$</span> a basis, similarly find <span class="math-container">$w_1,...,w_{n-1}$</span> such that <span class="math-container">$\mathcal{B}_w:=\{w,w_1,...,w_{n-1}\}$</span> is another basis.</p>
<p>Now define a matrix by the following linear map: <span class="math-container">$$A(\lambda v + \lambda_1 v_1 +...+\lambda_{n-1} v_{n-1} ) := \lambda w+\lambda_1 w_1 +...+\lambda_{n-1} w_{n-1}$$</span></p>
<p>(i.e. <span class="math-container">$A$</span> is the unique linear transformation that sends one basis to another this is usually denoted as <span class="math-container">$[I]^{\mathcal{B}_v}_{\mathcal{B}_w}$</span>)</p>
<p>In particular you find a matrix <span class="math-container">$A$</span> which is invertible (because it sends a basis to itself), hence <span class="math-container">$\det A \not = 0$</span> such that <span class="math-container">$Av=w$</span>. It follows that <span class="math-container">$v\sim w$</span>.</p>
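<p>The construction in the proof is easy to carry out numerically (my sketch with NumPy; completing each nonzero vector to a basis with random columns works with probability one):</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def matrix_sending(v, w):
    """Invertible A with A v = w, for nonzero v, w: complete each vector
    to a basis, then take the change-of-basis matrix between them."""
    n = len(v)
    while True:
        Bv = np.column_stack([v, rng.standard_normal((n, n - 1))])
        Bw = np.column_stack([w, rng.standard_normal((n, n - 1))])
        if abs(np.linalg.det(Bv)) > 1e-8 and abs(np.linalg.det(Bw)) > 1e-8:
            return Bw @ np.linalg.inv(Bv)   # sends i-th basis vector to i-th

v = np.array([1.0, 2.0, -1.0])
w = np.array([0.0, 0.0, 3.0])
A = matrix_sending(v, w)
```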
|
4,577,266 | <blockquote>
<p>Let <span class="math-container">$X_n$</span> be an infinite arithmetic sequence with positive integers term. The first term is divisible by the common difference of successive members. Suppose, the term <span class="math-container">$x_i$</span> has exactly <span class="math-container">$m>1$</span> distinct prime factors, for some <span class="math-container">$i\in\Bbb N$</span>.</p>
<p><strong>Prove that there are infinitely many terms with exactly <span class="math-container">$m$</span> distinct prime factors.</strong></p>
</blockquote>
<p>Some of my thoughts:</p>
<p>Let <span class="math-container">$x_1$</span> has <span class="math-container">$n$</span> distinct prime factors and <span class="math-container">$x_2=x_1+d$</span> where <span class="math-container">$d\mid x_1\implies x_1=dk$</span>. Hence <span class="math-container">$x_2=x_1+d=d(1+k)$</span>. But, we can not say anything about the prime factors of <span class="math-container">$x_2$</span>. Therefore, we don't know the number of prime factors of <span class="math-container">$x_2$</span>. By induction we don't know the number of prime factors of <span class="math-container">$x_3$</span>.</p>
<p>I tried to construct a proof by induction, but it seems I was not successful.</p>
| kabenyuk | 528,593 | <p>You have drawn the picture incorrectly. The vertex <span class="math-container">$w$</span> lies on the path <span class="math-container">$P$</span> and does not coincide with <span class="math-container">$x_1$</span>, so moving along <span class="math-container">$P$</span> until you meet <span class="math-container">$w$</span> you will not traverse the edge <span class="math-container">$xw$</span>.
This really means that your graph contains a cycle.
Here is the correct picture (in red the path <span class="math-container">$P$</span> is shown):
<a href="https://i.stack.imgur.com/oBIzf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oBIzf.png" alt="enter image description here" /></a></p>
|
1,419,209 | <p>How do I evaluate this (find the sum)? It's been a while since I did this kind of calculus.</p>
<p>$$\sum_{i=0}^\infty \frac{i}{4^i}$$</p>
| Ángel Mario Gallegos | 67,622 | <p>For $-1<x<1$, the series $\sum_{i=0}^{\infty}x^i$ converges absolutely to $\frac{1}{1-x}$
$$\sum_{i=0}^{\infty}x^i=\frac{1}{1-x}$$
Then
\begin{align*}
\sum_{i=0}^{\infty}ix^{i-1} &= \frac{d}{dx}\left(\frac{1}{1-x}\right)\\
&=\frac{1}{(1-x)^2}\\
\sum_{i=0}^{\infty}ix^i&=\frac{x}{(1-x)^2}
\end{align*}
Now, by plugging $x=1/4$ into the last equation, we have
$$\sum_{i=0}^{\infty}\frac{i}{4^i}=\frac{1/4}{(1-1/4)^2}=\frac{4}{9}$$</p>
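<p>A quick numeric sanity check of the closed form (this sketch is my addition):</p>

```python
# partial sums of sum(i / 4^i) converge to x/(1-x)^2 at x = 1/4, i.e. 4/9
s = sum(i / 4**i for i in range(200))
closed = (1/4) / (1 - 1/4)**2
```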
|
3,536,061 | <p>Find the number of ways you can invite <span class="math-container">$3$</span> of your friends on <span class="math-container">$5$</span> consecutive days, exactly one friend a day, such that no friend is invited on more than two days. </p>
<p>My approach: Let <span class="math-container">$d_A,d_B$</span> and <span class="math-container">$d_C$</span> denote the total number of days <span class="math-container">$A, B$</span> and <span class="math-container">$C$</span> were invited respectively. According to the question we must have <span class="math-container">$0\le d_A,d_B,d_C\le 2.$</span> Also, we must have <span class="math-container">$$d_A+d_B+d_C=5.$$</span> </p>
<p>Now let <span class="math-container">$d_A+c_A=2, d_B+c_B=2, d_C+c_C=2,$</span> for some <span class="math-container">$c_A, c_B, c_C\ge 0$</span>. </p>
<p>This implies that <span class="math-container">$c_A+c_B+c_C=1$</span>. </p>
<p>Therefore the problem translates to finding the number of non-negative integer solutions to the equation <span class="math-container">$$c_A+c_B+c_C=1.$$</span> </p>
<p>By the stars and bars method the total number of required solutions is equal to <span class="math-container">$$\dbinom{1+3-1}{3-1}=3.$$</span></p>
<p>But the number of ways to invite the friends will be higher than this, since the friends are distinguishable and we have assumed them to be indistinguishable while applying the stars and bars method. </p>
<p>How to proceed after this?</p>
| David G. Stork | 210,401 | <p>A hint:</p>
<p>The only configurations that obey your constraint are: </p>
<p>person A: 2 days</p>
<p>person B: 2 days</p>
<p>person C: 1 day</p>
<p>(We'll assign names to these different people below.)</p>
<p>Suppose you start with the "1 day" person. Then there are just two legal sequences:</p>
<p>CABAB and CBABA</p>
<p>Suppose you start instead with a "2 day" person (e.g., A). Write out the sequences to see there are just <span class="math-container">$6$</span> legal sequences.</p>
<p>ABABC, ABACB, ... </p>
<p>But you could interchange the names of these people: </p>
<p>Mary = A, Tom = B, Chris = C. </p>
<p>OR </p>
<p>Tom = A, Mary = B, Chris = C </p>
<p>OR </p>
<p>....</p>
<p>Check these combinations and add up!</p>
<p>Hope that helps.</p>
|
3,726,772 | <p>For a finite-dimensional vector space <span class="math-container">$V$</span>, let <span class="math-container">$A$</span> and <span class="math-container">$B$</span> be linear operators on <span class="math-container">$V$</span> such that the commutation relation <span class="math-container">$AB=BA$</span> holds.</p>
<p>If we define the <span class="math-container">$A$</span>'s minimal polynomial degree by <span class="math-container">$\deg(A)$</span>, how can I prove the inequality <span class="math-container">$\deg(A+B)\leq \deg(A)\deg(B)$</span>?</p>
<p>I grasp the idea that in the minimal polynomial of <span class="math-container">$A+B$</span>, I could expand the <span class="math-container">$(A+B)^k$</span> terms by exchanging
multiplication order of <span class="math-container">$A$</span> and <span class="math-container">$B$</span> but I can't proceed further.</p>
| user1551 | 1,551 | <p><span class="math-container">$Om(nom)^3$</span>'s elegant answer really shows the key: <span class="math-container">$\deg(A)$</span> is the dimension of the subspace of all polynomials in <span class="math-container">$A$</span>. I cannot do better. However, if you want to use binomial expansion to solve the problem, here is one way.</p>
<p>We may assume that the underlying field is algebraically closed, because minimal polynomial is invariant under field extension. Factor the minimal polynomial of <span class="math-container">$A$</span> into <span class="math-container">$\prod_{i=1}^k(x-\lambda_i)^{m_i}$</span>, where the <span class="math-container">$\lambda_i$</span>s are distinct. Let <span class="math-container">$V_i=\ker((A-\lambda_iI)^{m_i})$</span>. Then <span class="math-container">$V=V_1\oplus V_2\oplus\cdots\oplus V_k$</span> and <span class="math-container">$(A-\lambda_iI)^{m_i}BV_i=B(A-\lambda_iI)^{m_i}V_i=0$</span>. Hence <span class="math-container">$BV_i\subseteq V_i$</span>. In other words, each <span class="math-container">$V_i$</span> is an invariant subspace of both <span class="math-container">$A$</span> and <span class="math-container">$B$</span>.</p>
<p>We can prove that
<span class="math-container">$$
\deg(A+B)\le \deg(A) \deg(B).\tag{0}
$$</span>
by showing that each the following lines is true:
<span class="math-container">\begin{align}
\deg(A+B)
&\le \sum_i \deg((A+B)|_{V_i})\tag{1}\\
&\le \sum_i \deg(A|_{V_i}) \deg(B|_{V_i})\tag{2}\\
&\le \sum_i \deg(A|_{V_i}) \deg(B)\tag{3}\\
&= \deg(A) \deg(B)\tag{4}\\
\end{align}</span>
<span class="math-container">$(1),(3)$</span> and <span class="math-container">$(4)$</span> are clearly true. To prove <span class="math-container">$(2)$</span>, we may show that <span class="math-container">$\deg((A+B)|_{V_i})\le \deg(A|_{V_i}) \deg(B|_{V_i})$</span> for each <span class="math-container">$i$</span>. In other words, to prove <span class="math-container">$(0)$</span>, it suffices to consider the special case where all eigenvalues of <span class="math-container">$A$</span> are equal to some <span class="math-container">$\lambda$</span>. As <span class="math-container">$\deg(M)=\deg(M-\lambda I)$</span>, we may further assume that <span class="math-container">$A$</span> is nilpotent. But then, if we interchange the roles of <span class="math-container">$A$</span> and <span class="math-container">$B$</span> and go through some similar arguments to the above, we see that we may also assume that <span class="math-container">$B$</span> is nilpotent.</p>
<p>Thus it suffices to prove <span class="math-container">$(0)$</span> for two commuting nilpotent matrices <span class="math-container">$A$</span> and <span class="math-container">$B$</span>. Let <span class="math-container">$r,s\,(\ge1)$</span> be the indices of nilpotence of <span class="math-container">$A$</span> and <span class="math-container">$B$</span> respectively. Then <span class="math-container">$(0)$</span> is equivalent to the statement that <span class="math-container">$(A+B)^{rs}=0$</span>. But this is true because <span class="math-container">$(r-1)+(s-1)< rs$</span>, i.e. if <span class="math-container">$p,q\ge0$</span> and <span class="math-container">$p+q=rs$</span>, we must have <span class="math-container">$p\ge r$</span> or <span class="math-container">$q\ge s$</span>. Hence each term <span class="math-container">$A^pB^q$</span> in the binomial expansion of <span class="math-container">$(A+B)^{rs}$</span> is zero.</p>
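<p>The last step (two commuting nilpotents) is easy to check by machine, since for a nilpotent matrix the degree of the minimal polynomial is just the nilpotency index (a pure-Python sketch of mine):</p>

```python
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k]*Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matadd(X, Y):
    n = len(X)
    return [[X[i][j] + Y[i][j] for j in range(n)] for i in range(n)]

def nil_index(M):
    """Smallest k with M^k = 0; equals deg of the minimal polynomial x^k."""
    n = len(M)
    P = M
    for k in range(1, n + 1):
        if all(e == 0 for row in P for e in row):
            return k
        P = matmul(P, M)
    return None   # not nilpotent

N = [[0, 1, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1],
     [0, 0, 0, 0]]       # 4x4 nilpotent shift, index r = 4
A = N
B = matmul(N, N)         # commutes with A, index s = 2
```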
|
251,182 | <p>Is 13 a quadratic residue of 257? Note that 257 is prime.</p>
<p>I have tried doing it. My study guide says it is true. But I keep getting false. </p>
| André Nicolas | 6,312 | <p>We use somewhat heavy machinery, Quadratic Reciprocity. For typing convenience, we use the notation $(a/p)$ for the Legendre symbol. By Reciprocity,
$$(13/257)=(257/13)=(10/13)=(2/13)(5/13).$$
This is because at least one of $13$ and $257$ (indeed both) is of the shape $4k+1$. </p>
<p>Note that $(2/13)=-1$ because $13$ is of the shape $8k-3$.</p>
<p>By Reciprocity $(5/13)=(13/5)=(8/5)$.</p>
<p>But $(8/5)=(2/5)^3$, and $(2/5)=-1$. </p>
<p>Multiply. We have $4$ $-1$'s, and therefore $(13/257)=1$.</p>
<p>We could alternately use low-tech methods, by explicitly finding an $x$ such that $x^2\equiv 13\pmod{257}$. Not pleasant!</p>
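<p>(A computational footnote of my own, not part of the original answer.) Euler's criterion says that for an odd prime $p$ and $a$ not divisible by $p$, $a$ is a quadratic residue mod $p$ exactly when $a^{(p-1)/2}\equiv 1 \pmod p$. That, and the "low-tech" search for an explicit square root, are both painless by machine:</p>

```python
# Euler's criterion: 13 is a QR mod 257 iff 13^((257-1)/2) == 1 (mod 257).
p = 257
euler = pow(13, (p - 1) // 2, p)  # built-in modular exponentiation
print(euler)  # 1, so 13 is a quadratic residue mod 257

# Low-tech confirmation: search for x with x^2 == 13 (mod 257).
roots = [x for x in range(p) if x * x % p == 13]
print(roots)  # the two square roots of 13 mod 257
```

<p>This confirms the Reciprocity computation above without touching the Legendre symbols by hand.</p>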
|
3,514,547 | <p>The problem is as follows:</p>
<p>The figure from below shows vectors <span class="math-container">$\vec{A}$</span> and <span class="math-container">$\vec{B}$</span>. It is known that <span class="math-container">$A=B=3$</span>. Find <span class="math-container">$\vec{E}=(\vec{A}+\vec{B})\times(\vec{A}-\vec{B})$</span></p>
<p><a href="https://i.stack.imgur.com/kob4R.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kob4R.png" alt="Sketch of the problem"></a></p>
<p>The alternatives are:</p>
<p><span class="math-container">$\begin{array}{ll}
1.&-18\hat{k}\\
2.&-9\hat{k}\\
3.&-\sqrt{3}\hat{k}\\
4.&3\sqrt{3}\hat{k}\\
5.&9\hat{k}\\
\end{array}$</span></p>
<p>What I've attempted here was to try to decompose each vectors</p>
<p><span class="math-container">$\vec{A}=\left \langle 3\cos 53^{\circ}, 3 \sin 53^{\circ} \right \rangle$</span></p>
<p><span class="math-container">$\vec{B}=\left \langle 3\cos (53^{\circ}+30^{\circ}), 3 \sin (53^{\circ}+30^{\circ}) \right \rangle$</span></p>
<p>But attempting to use these relationships seems to extend the algebra too much. Is there another way, or some simplification? Or could it be that I am overlooking something?</p>
<p>Can someone help me with this?</p>
| Robert Z | 299,698 | <p>Hint. By expanding the cross product we find
<span class="math-container">$$(\vec{A}+\vec{B})\times(\vec{A}-\vec{B})=\vec{A}\times\vec{A}+\vec{B}\times\vec{A}-\vec{A}\times\vec{B}-\vec{B}\times\vec{B}.$$</span>
Are you able to find each of the 4 cross-products on the right-hand side?</p>
<p>Recall the <a href="https://en.wikipedia.org/wiki/Cross_product#Algebraic_properties" rel="nofollow noreferrer">algebraic properties</a> of the cross product!</p>
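<p>To make the hint concrete, here is a quick numeric check of my own (reading the angles $53^{\circ}$ and $53^{\circ}+30^{\circ}$ from the figure, as the question does). Since $\vec{A}\times\vec{A}=\vec{B}\times\vec{B}=\vec{0}$ and $\vec{B}\times\vec{A}=-\vec{A}\times\vec{B}$, the product collapses to $-2\,\vec{A}\times\vec{B}$, whose $z$-component is $-2|A||B|\sin 30^{\circ}=-9$:</p>

```python
import math

def cross_z(a, b):
    # z-component of the cross product of two planar vectors
    return a[0] * b[1] - a[1] * b[0]

A = (3 * math.cos(math.radians(53)), 3 * math.sin(math.radians(53)))
B = (3 * math.cos(math.radians(83)), 3 * math.sin(math.radians(83)))

S = (A[0] + B[0], A[1] + B[1])   # A + B
D = (A[0] - B[0], A[1] - B[1])   # A - B
print(cross_z(S, D))             # ≈ -9, matching alternative 2
```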
|
1,006,562 | <p>So I am trying to figure out the limit</p>
<p>$$\lim_{x\to 0} \tan x \csc (2x)$$</p>
<p>I am not sure what action needs to be done to solve this and would appreciate any help to solving this. </p>
| Aaron Maroja | 143,413 | <p>$$\lim_{x \to 0} \tan x \csc (2x) = \lim_{x \to 0} \frac{\sin x}{\cos x} \frac{1}{\sin 2x} = \lim_{x \to 0} \frac{\sin x}{\cos x} \frac{1}{2\sin x\cos x}$$ </p>
<p>Can you take it from here?</p>
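<p>(My own numeric sketch, continuing from the hint.) After cancelling $\sin x$, the expression becomes $\dfrac{1}{2\cos^2 x}$, which tends to $\dfrac12$ as $x\to0$:</p>

```python
import math

def f(x):
    # tan(x) * csc(2x) = tan(x) / sin(2x)
    return math.tan(x) / math.sin(2 * x)

for x in (0.1, 0.01, 0.001):
    print(x, f(x))  # values approach 0.5
```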
|
489,562 | <p>I am teaching a "proof techniques" class for sophomore math majors. We start out defining sets and what you can do with them (intersection, union, cartesian product, etc.). We then move on to predicate logic and simple proofs using the rules of first order logic. After that we prove simple math statements via direct proof, contrapositive, contradiction, induction, etc. Finally, we end with basic, but important concepts, injective/surjective, cardinality, modular arithmetic, and relations.</p>
<p>I am having a hard time keeping the class interested in the beginning set theory and logic part of the course. It is pretty dry material. What types of games or group activities might be both more enjoyable than my lectures and instructive?</p>
| dfeuer | 17,596 | <p>Sometimes it makes sense to teach a little bit backwards. Rather than always teaching the foundations first and then building on top of them, it sometimes pays to build a little higher-level context first, and then build foundations underneath. One way is to use a partially historical approach. That is, start by teaching about the <em>history</em> of an idea, and a little of the mathematics and/or philosophy surrounding that history, before you actually teach the idea.</p>
|
1,711,653 | <p>Let's define:</p>
<p>$f(t) = A_1 \cos(\omega_1t) + A_2 \cos(\omega_2t) $</p>
<p>I am interested in finding an expression for the peak of this function. It is not true in general that this peak will have the value:</p>
<p>$\max f(t) = \sqrt{A_1^2 + A_2^2 + 2A_1A_2}$</p>
<p>To find the value of max(f), I did the following manipulations:</p>
<p>$\omega_2 = \omega_1 + \Delta \omega$</p>
<p>so I can express the second cosine as that of a sum of a single radian frequency:</p>
<p>$f(t) = A_1 \cos(\omega_1t) + A_2 \cos (\omega_1 t + \Delta \omega t)$</p>
<p>and after a little algebra:</p>
<p>$f(t) = [A_1 + A_2 \cos(\Delta \omega t)]\cos(\omega_1t) - A_2 \sin(\Delta \omega t) \sin(\omega_1t )$</p>
<p>I can then transform the sum of two isochronic $\sin$ and $\cos$ into a single $\cos$ with a certain amplitude and phase:</p>
<p>$f(t) = \sqrt{A_1^2 + A_2^2 + 2 A_1 A_2 \cos(\Delta \omega t)} \; \cos \left[\omega_1 t + \tan^{-1} \left( \frac{A_2 \sin (\Delta \omega t)}{A_1 + A_2 \cos(\Delta \omega t)} \right) \right]$</p>
<p>But that's about how far I can drive it: since a trig function of $\Delta \omega$ is present both in the amplitude and phase of the cosine, I am not sure how to proceed. For sure, it is not said that the maximum of the function will be that of the amplitude part.</p>
<p>I find the <em>straightforward</em> approach of taking the derivative of $f(t)$ and finding its zeros would probably be too cumbersome; however, I would ask more experienced people what they think the best way to proceed would be.</p>
| Community | -1 | <p>If $A_1, A_2>0$, the maximum $A_1+A_2$ occurs at $t=0$ !</p>
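<p>(A numeric illustration of my own, with arbitrary sample values $A_1=2$, $A_2=3$ and incommensurate frequencies.) Both cosines equal $1$ at $t=0$, so $f(0)=A_1+A_2$, and no sample of $f$ ever exceeds that value:</p>

```python
import math

A1, A2 = 2.0, 3.0
w1, w2 = 1.0, math.sqrt(2)  # incommensurate frequencies

def f(t):
    return A1 * math.cos(w1 * t) + A2 * math.cos(w2 * t)

print(f(0))  # 5.0 = A1 + A2, attained exactly at t = 0
samples = max(f(t / 100) for t in range(100000))
print(samples <= A1 + A2 + 1e-12)  # True: no sample beats A1 + A2
```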
|
422,233 | <p>I was asked to find a minimal polynomial of $$\alpha = \frac{3\sqrt{5} - 2\sqrt{7} + \sqrt{35}}{1 - \sqrt{5} + \sqrt{7}}$$ over <strong>Q</strong>.</p>
<p>I'm not able to find it without the help of WolframAlpha, which says that the minimal polynomial of $\alpha$ is $$19x^4 - 156x^3 - 280x^2 + 2312x + 3596.$$ (Truly it is - $\alpha$ is a root of the above polynomial and the above polynomial is also irreducible over <strong>Q</strong>.)</p>
<p>Can anyone help me with this?</p>
<p>Thank you!</p>
| DanielWainfleet | 254,665 | <p>(Part answer.) Start by rationalizing the denominator. Let $x=1-\sqrt 5+\sqrt 7.$ Since $x(1-\sqrt 5-\sqrt 7)=(1-\sqrt 5)^2-7=-1-2\sqrt 5,$ we have $$1/x=(1-\sqrt 5-\sqrt 7)/(x(1-\sqrt 5-\sqrt 7))=(1-\sqrt 5-\sqrt 7)/(-1-2\sqrt 5).$$ Let $y=-1-2\sqrt 5.$ Then $$1/y=(-1+2\sqrt 5)/(y(-1+2\sqrt 5))=(-1+2\sqrt 5)/(1-20)=(1-2\sqrt 5)/19.$$ So $$1/x=(1-\sqrt 5-\sqrt 7)(1-2\sqrt 5)/19.$$ Inserting this into $\alpha$ and expanding it, we obtain $$\alpha =A+B\sqrt 5+C\sqrt 7+D\sqrt {35}$$ where $A,B,C, D$ are rational.</p>
<p>If you wish, send me a comment so I can be reminded to finish this. </p>
|
666,217 | <p>If $a^2+b^2 \le 2$ then show that $a+b \le2$</p>
<p>I tried to transform the first inequality to $(a+b)^2\le 2+2ab$ then $\frac{a+b}{2} \le \sqrt{1+ab}$ and I thought about applying $AM-GM$ here but without result</p>
| Felix Marin | 85,343 | <p>By the Cauchy&ndash;Schwarz inequality,
\begin{align}
a + b&=(a,b)\cdot(1,1)\leq\left\vert\,(a,b)\,\right\vert\left\vert\,(1,1)\,\right\vert
=\sqrt{a^{2} + b^{2}}\,\sqrt{1^{2} + 1^{2}}\leq\sqrt{2}\,\sqrt{2} = 2
\end{align}</p>
<blockquote>
<p>$$
\implies\quad \color{#66f}{\large a + b \leq 2}
$$</p>
</blockquote>
|
1,748,751 | <p>By K values, I mean the values described here:</p>
<p><a href="https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods#Explicit_Runge.E2.80.93Kutta_methods" rel="nofollow">https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods#Explicit_Runge.E2.80.93Kutta_methods</a></p>
<p>I know how the K values in the Runge-Kutta method can be proven to be correct, by comparing their taylor expansion with the taylor expansion of the function to be approximated, but how were they originally figured out? </p>
<p>I think I understand the Runge-Kutta method derivation when you have the derivative in terms of one variable f'(t). It seems to be a direct consequence of Simpson's rule and its higher order equivalents. But when it is some form of first order differential equation (i.e. f'(t, y(t))), I am still lost. Is there an equivalent of Simpson's rule for multivariable functions? </p>
| Jan Peter Schäfermeyer | 399,820 | <p>This is how Runge himself taught the method to the students of Columbia University, where he was guest professor in 1909/10: <a href="https://archive.org/stream/graphicalmethod01runggoog#page/n135/mode/2up" rel="nofollow noreferrer">https://archive.org/stream/graphicalmethod01runggoog#page/n135/mode/2up</a></p>
|
1,827,080 | <p>Let $f:\mathbb R \to \mathbb R$ be a differentiable function such that $f(0)=0$ and $|f'(x)|\leq1 \forall x\in\mathbb R$. Then there exists $C$ in $\mathbb R $ such that </p>
<ol>
<li>$|f(x)|\leq C \sqrt |x|$ for all $ x$ with $|x|\geq 1$</li>
<li>$|f(x)|\leq C |x|^2$ for all $ x$ with $|x|\geq 1$</li>
<li>$f(x)=x+C$ for all $x \in \mathbb R $</li>
<li>$f(x)=0$ for all $x \in \mathbb R $</li>
</ol>
<p>If I take $f(x)=\frac{x}{2}$, then (4) is false, but I don't know how to prove or disprove others using the given conditions.Please help.</p>
<p>Thanks for your time.</p>
| Lærne | 252,762 | <p>Compare the derivative of the right-hand side of each inequality with the bound $|f'(x)|\le 1$. If $C$ can be chosen so that the right-hand side starts no lower than $|f|$ and grows at least as fast (derivative $\ge 1$), the inequality holds; otherwise we look for a counterexample $f$ for which no value of $C$ works.</p>
<p>Consider $x \ge 0$ first; negative $x$ works the same way, using $-x$ instead of $x$ in the following.</p>
<ol>
<li>You have $$\frac{\partial}{\partial x} C \sqrt{x} = \frac{C}{2\cdot\sqrt{x}}$$</li>
</ol>
<p>Take $f(x) = x$, which satisfies the hypotheses. Statement (1) then asserts there is a $C$ such that $x \le C\sqrt{x}$ for all $x \ge 1$. Solving that inequality, you must have $C \ge \sqrt{x}$ for all such $x$. Since $\sqrt{x}$ is unbounded, there is no such $C$, so (1) is false.</p>
<ol start="2">
<li><p>You have $$\frac{\partial}{\partial x} C\cdot x^2 = 2 \cdot C \cdot x$$
For $x \ge 1$, $\frac{\partial}{\partial x}C\cdot x^2 \ge 2C \ge 1$ whenever $C \ge 1$, so if $Cx^2$ is at least $|f|$ at $x=1$ and $C\ge1$, the bound $|f'|\le1$ keeps $f$ below $Cx^2$ from there on. For that, simply pick $C = \max(1,|f(1)|)$; hence (2) is true.</p></li>
<li><p>Differentiating the equation, you see that it requires the derivative to be $1$ everywhere. Take $f(x) = 0$; its derivative is $0$ everywhere, so (3) fails. </p></li>
</ol>
<p>If you want to be sure, take $x \ne C$, say $x = 1-C$; then $$f(1-C) = 0 \ne 1 = (1-C)+C.$$</p>
<p>I'll leave it to you to check (2) for $x \le -1$.</p>
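<p>(My own numeric sketch of the two checks.) For $f(x)=x$, no constant $C$ satisfies statement (1) because $f(x)/\sqrt{x}=\sqrt{x}$ is unbounded, while for statement (2) the mean value theorem bound $|f(x)|\le|x|$ makes the simple choice $C=1$ work:</p>

```python
import math

# Statement (1) fails for f(x) = x: f(x)/sqrt(x) = sqrt(x) is unbounded.
ratios = [x / math.sqrt(x) for x in (1, 100, 10**6)]
print(ratios)  # [1.0, 10.0, 1000.0] -- grows without bound

# Statement (2) holds with C = 1, since |f(x)| <= |x| <= |x|**2 for |x| >= 1.
ok = all(abs(x) <= 1 * x**2 for x in (1, -1, 2.5, -17, 1000))
print(ok)  # True
```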
|
2,713,873 | <p>We know that if a real-valued function $f$ is continuous over an interval $[a,b]$, then the integral $$\int_a^bf(x)dx$$ represents the area between the curve of $f$ and the line $y=0$, taken horizontally between the lines $x=a$ and $x=b$. So what do the following represent: $$\int_{[a,b]\times [c,d]}g(x,y)dxdy$$ and $$\int_{[a,b]\times [c,d]\times [e,f]}h(x,y,z)dxdydz,$$ where $g$ and $h$ are two continuous real-valued functions of two and three variables?
Thanks</p>
| Redsbefall | 97,835 | <p>karimath, I may contribute.</p>
<hr>
<p>1D integral can be used to calculate area. For example, if we want to calculate an area bounded by $y=f(x)$ and $y=0$ in $x \in [a, b]$, then we can do approximate this by :</p>
<p>$$ A \approx \sum_{i=0}^{N} |f(x_{i})-0| \triangle x, \:\:\: \triangle x = \frac{b-a}{N} $$</p>
<p>$N$ is the number of rectangles, $\triangle x$ being the width of each rectangle $i$, and $f(x_{i})$ is the height of the rectangle $i$.</p>
<p>If we take as many rectangles, we get
$$ A = \lim_{N \rightarrow \infty} \sum_{i=0}^{N} |f(x_{i})-0| \triangle x, \:\:\: \triangle x = \frac{b-a}{N} $$ which is defined another way by
$$ A = \int_{a}^{b} f(x) dx$$</p>
<hr>
<p>2D integral is similar. You can approximate the Volume bounded by surface $z=f(x,y)$ and $z=0$, in the region $R : a < x < b, \:\: c < y<d$, by :</p>
<p>$$ V \approx \sum_{j=0}^{N} \sum_{i=0}^{M} |f(x_{i},y_{j})-0| \triangle x \triangle y, \:\:\: \triangle x = \frac{b-a}{M}, \: \: \: \triangle y = \frac{d-c}{N} $$
Notice that $$ |f(x_{i}, y_{j})| \triangle x \triangle y$$
is the volume of the small cuboid with center at position $(x_{i}, y_{j})$, $\triangle x \triangle y$ being the area of the small square as the floor of the cuboid.</p>
<p>Taking more and more such small cuboids, we get the Volume </p>
<p>$$ V = \int_{c}^{d} \int_{a}^{b} f(x,y) dx dy $$</p>
<hr>
<p>For 3D, it is a bit different. The function $h(x,y,z)$ is a quantity that may be measured. The $\triangle x \triangle y \triangle z$ is the small-volume in which a value $h(x_{i},y_{j},z_{k})$ holds. Connect this to @TrevorNorton comment.</p>
|
325,186 | <p>If <span class="math-container">$p$</span> is a prime then the zeta function for an algebraic curve <span class="math-container">$V$</span> over <span class="math-container">$\mathbb{F}_p$</span> is defined to be
<span class="math-container">$$\zeta_{V,p}(s) := \exp\left(\sum_{m\geq 1} \frac{N_m}{m}(p^{-s})^m\right). $$</span>
where <span class="math-container">$N_m$</span> is the number of points over <span class="math-container">$\mathbb{F}_{p^m}$</span>.</p>
<p>I was wondering what is the motivation for this definition. The sum in the exponent is vaguely logarithmic. So maybe that explains the exponential?</p>
<p>What sort of information is the zeta function meant to encode and how does it do it? Also, how does this end up being a rational function?</p>
| Vivek Shende | 4,707 | <p>In <a href="https://arxiv.org/abs/math/0001005" rel="nofollow noreferrer">this article of Kapranov</a>, you will find the motiv-ation of the zeta function of an algebraic curve.</p>
|
4,506 | <p>I keep producing bits of code like the following:</p>
<pre><code>stuff = Module[
{curTarget = #},
getRowsForUserAndTarget[u, #, curTarget] & /@ validUsers
] & /@ allTargets;
</code></pre>
<p>Basically, I'm iterating through all the targets and all the users. Using a For loop it would look something like this (in python):</p>
<pre><code>res = []
for user in validUsers:
for curTarget in allTargets:
        res.append(getRowsForUserAndTarget(u, user, curTarget))
</code></pre>
<p>It seems like I should be able to do this succinctly in map notation using something like:</p>
<pre><code>getRowsForUserAndTarget[#userthing, #targetthing]& /@ validUsers /@ validTargets
</code></pre>
<p>but I don't know how to keep the mapped arguments from interfering with each other, or how to reference which one I mean; and I'm a little hazy about the order of iteration (validUsers would go first, then validTargets?)</p>
<p>Can someone set me straight? This would seem like a common pattern that I am abusing with my Module[] solution...</p>
| Rojo | 109 | <p>Is this what you want?</p>
<pre><code>l1 = {a, b, c};
l2 = {aa, bb, cc};
sth[#1, #2] & @@@ Tuples[{l1, l2}]
</code></pre>
<blockquote>
<pre><code>{sth[a, aa], sth[a, bb], sth[a, cc], sth[b, aa], sth[b, bb],
sth[b, cc], sth[c, aa], sth[c, bb], sth[c, cc]}
</code></pre>
</blockquote>
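<p>(My addition, bridging back to the question's Python framing.) <code>Tuples</code> here plays the role of <code>itertools.product</code>, and <code>@@@</code> then applies the function to each pair:</p>

```python
from itertools import product

l1 = ["a", "b", "c"]
l2 = ["aa", "bb", "cc"]

# Mathematica's  sth[#1, #2]& @@@ Tuples[{l1, l2}]  corresponds to:
result = [("sth", x, y) for x, y in product(l1, l2)]
print(result[:3])  # [('sth', 'a', 'aa'), ('sth', 'a', 'bb'), ('sth', 'a', 'cc')]
```

<p>Note the iteration order matches <code>Tuples</code>: the last list varies fastest.</p>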
|
4,506 | <p>I keep producing bits of code like the following:</p>
<pre><code>stuff = Module[
{curTarget = #},
getRowsForUserAndTarget[u, #, curTarget] & /@ validUsers
] & /@ allTargets;
</code></pre>
<p>Basically, I'm iterating through all the targets and all the users. Using a For loop it would look something like this (in python):</p>
<pre><code>res = []
for user in validUsers:
for curTarget in allTargets:
        res.append(getRowsForUserAndTarget(u, user, curTarget))
</code></pre>
<p>It seems like I should be able to do this succinctly in map notation using something like:</p>
<pre><code>getRowsForUserAndTarget[#userthing, #targetthing]& /@ validUsers /@ validTargets
</code></pre>
<p>but I don't know how to keep the mapped arguments from interfering with each other, or how to reference which one I mean; and I'm a little hazy about the order of iteration (validUsers would go first, then validTargets?)</p>
<p>Can someone set me straight? This would seem like a common pattern that I am abusing with my Module[] solution...</p>
| Mr.Wizard | 121 | <p>A couple of other options:</p>
<pre><code>allTargets = {"a", "b", "c", "d"};
validUsers = {1, 2, 3};
Table[getRowsForUserAndTarget[u, j, i], {i, allTargets}, {j, validUsers}]
Outer[getRowsForUserAndTarget[u, #2, #] &, allTargets, validUsers, 1]
</code></pre>
<p>Both of these methods produce nested lists separating output elements for each <code>validUsers</code> element, whereas <code>Tuples</code> produces a flattened list. This can be either advantageous or problematic. You can flatten one level using <code>Join @@</code> e.g. <code>Join @@ Table</code></p>
|
1,517,456 | <blockquote>
<p>Rudin Chp. 5 q. 13:</p>
<p>Suppose <span class="math-container">$a$</span> and <span class="math-container">$c$</span> are real numbers, <span class="math-container">$c > 0$</span>, and <span class="math-container">$f$</span> is defined on <span class="math-container">$[-1, 1]$</span> by</p>
<p><span class="math-container">$$f(x) = x^a \sin(|x|^{-c}), x≠0$$</span>
<span class="math-container">$$f(x) = 0, x=0$$</span></p>
<p>(b) <span class="math-container">$f'(0)$</span> exists iff <span class="math-container">$a > 1$</span></p>
</blockquote>
<p>To me, it seems quite clear that <span class="math-container">$a>1$</span> would work because it is intuitively clear that <span class="math-container">$f(x) → 0$</span> as <span class="math-container">$x → 0$</span>. The function <span class="math-container">$\sin(u)$</span> has a range of <span class="math-container">$[-1, 1]$</span>, so while <span class="math-container">$\sin(|x|^{-c})$</span> will oscillate infinitely as <span class="math-container">$x→0$</span>, <span class="math-container">$x^a → 0$</span> for <span class="math-container">$a > 0$</span>. It is clear that this is continuous for <span class="math-container">$a>0$</span>.</p>
<p>But I need to show that it is differentiable at <span class="math-container">$x=0$</span> iff <span class="math-container">$a>1$</span>. And this is where I have gotten stuck. I am able to show that it is <em>not</em> differentiable for <span class="math-container">$a ≤ 1$</span>. But when I try to show it is differentiable for <span class="math-container">$a>1$</span>, I fail to do so. I tried to differentiate <span class="math-container">$f(x)$</span> in general (<span class="math-container">$f'(x)$</span>) then show it will not work as <span class="math-container">$x→0$</span> for <span class="math-container">$a≤1$</span>, but this method does not work with <span class="math-container">$a>1$</span>, and I end up with a term of the form <span class="math-container">$x^{a}\,|x|^{-c-1}$</span> (plus unimportant constants and the cosine). And that is bad because, for example, if a = 2 and c = 10, that term clearly diverges as <span class="math-container">$x→0$</span>.</p>
<p>A fellow student claimed to used the definition of the derivative to solve this and I tried this:</p>
<p><span class="math-container">$$f'(x) = \lim_{t→x} \frac{f(t) - f(x)}{t-x} = \lim_{t→x} \frac{t^a \sin|t|^{-c} - x^a \sin|x|^{-c}}{t-x}$$</span></p>
<p>And we are interested in only <span class="math-container">$f'(0)$</span>, so we can simply:</p>
<p><span class="math-container">$$f'(0) = \lim_{t→0} \frac{f(t) - f(0)}{t-0} = \lim_{t→0} \frac{t^a \sin|t|^{-c} - 0}{t}= \lim_{t→0} t^{a-1} \sin|t|^{-c}$$</span>
Assume <span class="math-container">$a>1$</span>. Since <span class="math-container">$\sin(u)$</span> has a range of <span class="math-container">$[-1, 1]$</span>,
<span class="math-container">$$\left|t^{a-1}\sin|t|^{-c}\right| \le |t|^{a-1} \to 0 \quad (t\to 0),$$</span>
so the limit is <span class="math-container">$0$</span> by the squeeze theorem. (Note that it would not be valid to split this into a product of limits, since <span class="math-container">$\lim_{t→0}\sin|t|^{-c}$</span> does not exist.)</p>
<p>Clearly in the case that <span class="math-container">$a≤1$</span>, this will diverge.</p>
<p>Is this all that I need to do? I don't understand why my first method did not work but the second did, if that is indeed all I must do.</p>
<p>I am worried about the main concept, not about how my “proof” looks. I can write it out MUCH better on paper, I am struggling to format this well on the computer (and sorry for this!)</p>
| Bernard | 202,857 | <p>$\mathbf{Q}(\sqrt[4] 5)$ is a $\mathbf{Q}$-vector space with dimension $4$, while $\mathbf{Q}(\sqrt[3] 5)$ is a $\mathbf{Q}$-vector space with dimension $3$. As they're both field extensions, if the latter were contained in the former, its dimension should divide $4$.</p>
|
3,673,613 | <p>I have to find out if <span class="math-container">$\displaystyle\sum_{n=2}^{\infty}$$\dfrac{\cos(\frac{\pi n}{2}) }{\sqrt n \log(n) }$</span> is absolute convergent, conditional convergent or divergent. I think it's divergent while the value for <span class="math-container">$\cos\left(\dfrac{\pi n}{2}\right)$</span> swings between <span class="math-container">$0$</span>, <span class="math-container">$1$</span> and <span class="math-container">$-1$</span>. And for <span class="math-container">$\left|\cos\left(\dfrac{\pi n}{2}\right)\right|$</span> it still swings between <span class="math-container">$0$</span> and <span class="math-container">$1$</span>. But how can I show it formally?</p>
| Ben Grossmann | 81,360 | <p>In fact, the dimensions are always equal. Because we generally have <span class="math-container">$\dim \ker(AB) = \dim\ker B + \dim(\operatorname{im}(B) \cap \ker A)$</span>, it suffices to note that the image of <span class="math-container">$(A - \psi I)$</span> contains the kernel of <span class="math-container">$(A - \lambda I)$</span>.</p>
<p>To that effect, suppose that <span class="math-container">$x \in \ker(A - \lambda I)$</span>, so that <span class="math-container">$Ax = \lambda x$</span>. It follows that <span class="math-container">$(A - \psi I)x = (\lambda - \psi)x$</span>, which means that <span class="math-container">$x = (A - \psi I)y$</span> where <span class="math-container">$y = (\lambda - \psi)^{-1}x$</span>. So, <span class="math-container">$x$</span> is indeed an element of the image of <span class="math-container">$A - \psi I$</span>.</p>
|
2,523,112 | <p>Let $f\left(x\right)$ be differentiable on the interval $\left(a,b\right)$ with $f'\left(x\right)>0$ on that interval. If $\underset{x\rightarrow a+}{\lim}f\left(x\right)=0$, is $f\left(x\right)>0$ on that interval?</p>
<p>I think this proposition is true by intuition, but I wonder whether my intuition is mathematically and strictly correct, and what conditions would need to be added to make it true. I can't fully trust my intuition, due to the possibility of a flaw.</p>
| Arkya | 276,417 | <p>By Taylor's theorem (up to first order), for any $x\in(a+\epsilon,b)$, $\exists \zeta\in(a+\epsilon,x)$ such that
$$f(x)=f(a+\epsilon)+(x-a-\epsilon)f'(\zeta). $$</p>
<p>Since $f'(\zeta)>0$ and $x>a+\epsilon$, this shows that $f(x)>f(a+\epsilon)$. </p>
<p>Now let $\epsilon\rightarrow0^+$: since $f(x)>f(a+\epsilon)$ for every sufficiently small $\epsilon>0$, we get $f(x)\ge\lim_{\epsilon\to 0^+}f(a+\epsilon)=\lim_{t\to a^+}f(t)=0$. For the strict inequality, fix one $\epsilon_0$ with $a+\epsilon_0<x$; the same argument applied to the point $a+\epsilon_0$ gives $f(a+\epsilon_0)\ge 0$, hence $f(x)>f(a+\epsilon_0)\ge0$, i.e. $f(x)>0$, $\forall x\in(a,b)$. </p>
|
4,612 | <p>I would like to make a slope field. Here is the code</p>
<pre><code>slopefield =
VectorPlot[{1, .005 * p*(10 - p) }, {t, -1.5, 20}, {p, -10, 16},
Ticks -> None, AxesLabel -> {t, p}, Axes -> True,
VectorScale -> {Tiny, Automatic, None}, VectorPoints -> 15]
</code></pre>
<p>I solved the differential equations and plotted the curves manually. Three questions:</p>
<ol>
<li>Is there an easier way to do it?</li>
<li>Ticks -> None doesn't seem to work. I still get labels for the tick marks.</li>
<li>I'd like to selectively label 2 tick marks.</li>
</ol>
| Dr. belisarius | 193 | <p>For example:</p>
<pre><code>VectorPlot[{1, .005*p*(10 - p)}, {t, -1.5, 20}, {p, -10, 16},
FrameTicks -> {
Join[{{0, "ZERO", {0, .1}, Red}}, Table[{i, ""}, {i, -1.5, 20, 3}]],
Table[{i, ""}, {i, -10, 16, 3}]},
AxesLabel -> {t, p},
Frame -> True,
VectorScale -> {Tiny, Automatic, None},
VectorPoints -> 15]
</code></pre>
<p><img src="https://i.stack.imgur.com/cHKIf.png" alt="enter image description here"></p>
|
1,954,411 | <p>Let $N>0$ be a large integer, and $n<N$. How can one simplify the following sum?
$$\sum\limits_{k=1}^n\frac{N-n+k}{(N-k+1)(N-k+1)(N-k)}.$$
Thank you very much, guys.</p>
<p>Actually for another similar sum $\sum\limits_{k=1}^n\frac{1}{(N-k+1)(N-k)}=\sum\limits_{k=1}^n\left(\frac{1}{N-k}-\frac{1}{N-k+1}\right)=\frac{1}{N-n}-\frac{1}{N}$, I know the trick. But with the additional factor $\frac{N-n+k}{N-k+1}$, it becomes difficult. </p>
<p>So, thanks a million for any clue.</p>
| Felix Marin | 85,343 | <p>\begin{align}
&\color{#f00}{\sum_{k = 1}^{n}{N - n + k \over (N - k + 1)(N - k + 1)(N - k)}}
\\[5mm] = &\
(2N - n)\left(\sum_{k = 1}^{n}{1 \over k - N - 1} -
\sum_{k = 1}^{n}{1 \over k - N}\right) -
(2N - n + 1)\sum_{k = 1}^{n}{1 \over (k - N - 1)^{2}}
\\[5mm] = &\
{(2N - n)n \over N(N - n)} -
(2N - n + 1)\sum_{k = 1}^{n}{1 \over (k - N - 1)^{2}}
\end{align}</p>
<blockquote>
<p>Note that</p>
</blockquote>
<p>$$\left\{\begin{array}{rcl}
\displaystyle\sum_{k = 1}^{n}{1 \over k + a} & = &
\displaystyle\sum_{k = 0}^{n - 1}{1 \over k + 1 + a} =
\sum_{k = 0}^{\infty}\left({1 \over k + 1 + a} - {1 \over k + n + 1 + a}\right) =
H_{n + a} - H_{a}
\\[2mm]
\displaystyle\sum_{k = 1}^{n}{1 \over (k + a)^{2}} & = &
\displaystyle{\partial \over \partial a}\left(H_{a} - H_{n + a}\right) =
\Psi'(1 + a) - \Psi'(1 + n + a)
\end{array}\right.
$$</p>
<blockquote>
<p>$H_{z}$ is the <em>Harmonic Number</em> and $\Psi'$ is the <em>Trigamma Function</em>.</p>
</blockquote>
<hr>
<p>To keep the trigamma arguments positive, reindex $k\mapsto n + 1 - k$ in the remaining sum, so that $\sum_{k = 1}^{n}(k - N - 1)^{-2} = \sum_{k = 1}^{n}(k + N - n)^{-2}$, and apply the second identity with $a = N - n$. Then,
\begin{align}
&\color{#f00}{\sum_{k = 1}^{n}{N - n + k \over (N - k + 1)(N - k + 1)(N - k)}}
\\[5mm] = &\
\color{#f00}{{(2N - n)n \over N(N - n)} -
(2N - n + 1)\left[\Psi'(N - n + 1) - \Psi'(N + 1)\right]}
\end{align}</p>
<blockquote>
<p>Had negative arguments appeared instead, any issue with the <em>trigamma argument signs</em> could be dealt with via the <em>Euler Reflection Formula</em> and/or the <em>Recurrence Formula</em>:</p>
</blockquote>
<p>$$
\left\{\begin{array}{rcl}
\Psi'(z) & = & -\Psi'(1 - z) + \pi^{2}\csc^{2}(\pi z)
\\[2mm]
\Psi'(z + 1) & = & \Psi'(z) - {1 \over z^{2}}
\end{array}\right.
$$</p>
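<p>(My own exact-arithmetic check, not part of the original answer.) For positive integer arguments the trigamma difference $\Psi'(N-n+1)-\Psi'(N+1)$ is just the finite sum $\sum_{j=N-n+1}^{N}1/j^2$ — this is the positive-argument form of $\sum_{k=1}^{n}(k-N-1)^{-2}$, reachable via the recurrence formula — so both sides of the closed form can be compared with rationals:</p>

```python
from fractions import Fraction

def direct_sum(N, n):
    # left-hand side: sum_{k=1}^{n} (N-n+k) / ((N-k+1)^2 (N-k))
    return sum(Fraction(N - n + k, (N - k + 1) ** 2 * (N - k))
               for k in range(1, n + 1))

def closed_form(N, n):
    # (2N-n)n / (N(N-n)) - (2N-n+1) * [psi'(N-n+1) - psi'(N+1)],
    # where the trigamma difference equals sum_{j=N-n+1}^{N} 1/j^2
    tri = sum(Fraction(1, j * j) for j in range(N - n + 1, N + 1))
    return Fraction((2 * N - n) * n, N * (N - n)) - (2 * N - n + 1) * tri

for N, n in [(3, 1), (10, 4), (50, 23)]:
    print(direct_sum(N, n) == closed_form(N, n))  # True for each pair
```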
|