| qid | question | author | author_id | answer |
|---|---|---|---|---|
2,734,338 | <p>I would like to show that
$$\forall n\in\mathbb{N}^*, \quad \sqrt{\frac{n}{n+1}}\notin \mathbb{Q}$$</p>
<p>I'm interested in more ways of proving this.</p>
<p>My method :</p>
<p>suppose that $\sqrt{\frac{n}{n+1}}\in \mathbb{Q}$ then there exist $(p,q)\in\mathbb{Z}\times \mathbb{N}^*$ such that $\sqrt{\frac{n}{n+1}}=\frac{p}{q}$ thus </p>
<p>$$\dfrac{n}{n+1}=\dfrac{p^2}{q^2} \implies nq^2=(n+1)p^2 \implies n(q^2-p^2)=p^2$$
since $p\neq q$ (because $\frac{n}{n+1}\neq 1$), we have $p^2\neq q^2$, so</p>
<p>$$n=\dfrac{p^2}{(q^2-p^2)}$$</p>
<p>since $n\in \mathbb{N}^*$ then $n\in \mathbb{Q}$</p>
<ul>
<li>I'm stuck here and I would like to see different ways to prove $\sqrt{\frac{n}{n+1}}\notin \mathbb{Q}$</li>
</ul>
| Community | -1 | <p>Another approach: If $n/(n+1)$ is the square of a rational, then both $n$ and $n+1$ are perfect squares, since they are relatively prime. Therefore, there would be two squares whose difference is $1$:</p>
<p>$$(p+q)(p-q)=p^2-q^2=1$$</p>
<p>Therefore, $p+q=1$ and $p-q=1$. It follows that $p=1$ and $q=0$. Therefore, $n=0$, contradicting $n\in\mathbb{N}^*$.</p>
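<p>A quick machine check of the key fact (an illustration added here, not part of the original answer): no two consecutive positive integers are both perfect squares, which is exactly what forces $n=0$ above.</p>

```python
import math

def both_squares(n: int) -> bool:
    """True when n and n + 1 are both perfect squares."""
    r, s = math.isqrt(n), math.isqrt(n + 1)
    return r * r == n and s * s == n + 1

# Only n = 0 works (0 = 0^2 and 1 = 1^2), matching the argument above.
assert both_squares(0)
assert not any(both_squares(n) for n in range(1, 10**6))
```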
|
4,038,904 | <p>Your statistics teacher challenges you to write a mathematics paper, which depending on your time spent, will earn you a cash reward. You're given the paper and a candle stands on the table in front of you. The life span <span class="math-container">$X$</span>, in minutes, of the candle is a continuous random variable with a uniform distribution on <span class="math-container">$(0, 60)$</span>. You must leave after <span class="math-container">$\frac{1}{2}$</span> an hour has been spent writing the paper or as soon as the candle burns out, whichever of these happens first. The teacher gives you a cash reward <span class="math-container">$M$</span> equal to <em>half</em> the amount of time the candle was lit. Find the cumulative distribution function (cdf) of <span class="math-container">$M$</span> and determine the expected value <span class="math-container">$E[M]$</span>.</p>
<p>My attempt:</p>
<p>By inspection <span class="math-container">$M = \frac{X}{2}$</span>.</p>
<p>I think cdf of <span class="math-container">$M$</span> is of the form:</p>
<p><span class="math-container">$$F(m)= \left\{
\begin{array}{lr}
0 & \mbox{if } x < 0 \\
2x = m & \mbox{if } 0 \leq x < 60 \\
1 & \mbox{if } 60 \leq x
\end{array}
\right.$$</span></p>
<p>As for <span class="math-container">$E(M)$</span>, we have</p>
<p><span class="math-container">$$E(M) = \int_{0}^{60} m \cdot F'(m) \,dm = \int_{0}^{60} m \,dm = 1800$$</span></p>
<p>Is this correct? My work above seems lacking and I am not sure if I did it correctly. Any assistance is much appreciated.</p>
| Misha Lavrov | 383,078 | <p>About <span class="math-container">$8$</span> years ago, not too long after the linked question was asked, Code Golf StackExchange <a href="https://codegolf.stackexchange.com/questions/10739/find-largest-prime-which-is-still-a-prime-after-digit-deletion?noredirect=1&lq=1">took up the challenge</a>. The winning answer found a <span class="math-container">$274$</span>-digit solution:
<span class="math-container">$$\underbrace{444\dots 44}_{163} \underbrace{000 \dots 00}_{80} \underbrace{111\dots 11}_{31}$$</span>
This outdoes my own searches considerably. The largest solution I've personally found is the following <span class="math-container">$84$</span>-digit number, which is the second-largest solution found there:
<span class="math-container">$$\underbrace{444\dots 44}_{50} \underbrace{111 \dots 11}_9 \underbrace{333\dots 33}_{25}$$</span>
It makes sense to test numbers with only a few blocks of consecutive repeated digits: then there are only a few distinct digit deletions to check, so it's more likely that all digit deletions will remain prime. The search I did included:</p>
<ul>
<li>All <span class="math-container">$2$</span>-block numbers with at most <span class="math-container">$150$</span> digits per block. The largest solution I found of this form was <span class="math-container">$222\,223\,333\,333$</span>, which is almost certainly the largest solution there is of this form.</li>
<li>All <span class="math-container">$3$</span>-block numbers with at most <span class="math-container">$50$</span> digits per block. You can see that the solution above brushes up against this limit, but the second-biggest solution of this form I found only had <span class="math-container">$32$</span> digits total. The heuristic below tells us not to expect anything better than the <span class="math-container">$274$</span>-digit solution.</li>
<li>All <span class="math-container">$4$</span>-block numbers with at most <span class="math-container">$20$</span> digits per block; I did this before I found the <span class="math-container">$3$</span>-block solution, or else I would've known not to bother. This gave me the second-largest solution I've found: an "inflation" of <span class="math-container">$3161$</span> with <span class="math-container">$19$</span> <span class="math-container">$3$</span>'s followed by <span class="math-container">$12$</span> <span class="math-container">$1$</span>'s followed by <span class="math-container">$10$</span> <span class="math-container">$6$</span>'s followed by <span class="math-container">$5$</span> more <span class="math-container">$1$</span>'s, for <span class="math-container">$46$</span> digits total.</li>
</ul>
<p>Here's a rough probabilistic analysis of how likely such numbers are to have the property we want. There are <span class="math-container">$9^k \binom{n-1}{k-1}$</span> distinct numbers with <span class="math-container">$n$</span> digits divided into <span class="math-container">$k$</span> blocks of digits (like the above has <span class="math-container">$46$</span> digits divided into <span class="math-container">$4$</span> blocks). Very loosely, this is <span class="math-container">$9^k \cdot \frac{n^{k-1}}{(k-1)!}$</span>. The probability that a random <span class="math-container">$n$</span>-digit number is prime is roughly <span class="math-container">$\frac1{n \ln 10}$</span>. Initially, I estimated that for an <span class="math-container">$n$</span>-digit number and all <span class="math-container">$k$</span> possible digit-deletions to be prime, you pay a factor of <span class="math-container">$\frac1{(n \ln 10)^{k+1}}$</span>. So we get a probability of <span class="math-container">$O\Big((\frac{9}{\ln 10})^k \cdot \frac1{(k-1)!} \cdot n^{-2}\Big)$</span> that there is any solution with <span class="math-container">$n$</span> digits and <span class="math-container">$k$</span> blocks. The part depending on <span class="math-container">$k$</span> is maximized when <span class="math-container">$k$</span> is around <span class="math-container">$4$</span>.</p>
<p>However, looking at the numbers modulo small primes like <span class="math-container">$2$</span>, <span class="math-container">$3$</span>, or <span class="math-container">$5$</span>, there's some correlation. Some of the correlation helps us (spot-checking primes using the last digit) and some hurts us on net (divisibility by <span class="math-container">$3$</span> enforces that either digits <span class="math-container">$1 \bmod 3$</span> or digits <span class="math-container">$2 \bmod 3$</span> must be missing from the number). It's hard to say what the best value of <span class="math-container">$k$</span> is, just that it should not grow with <span class="math-container">$n$</span>.</p>
<p>I've also limited the number of blocks for practical considerations. Even if it turns out that <span class="math-container">$k=5$</span> is abstractly more promising than <span class="math-container">$k=4$</span>, numbers with <span class="math-container">$5$</span> blocks will be at least an order of magnitude less likely to result in all the primes we want - there will just be more of them to compensate. So it will take longer to find such solutions.</p>
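<p>The block-counting step above can be checked by brute force for small sizes (my own illustration, not from the answer): the first block's digit has $9$ choices, each later block's digit has $9$ choices since it must differ from its neighbor, and there are $\binom{n-1}{k-1}$ ways to cut $n$ digits into $k$ blocks.</p>

```python
from itertools import groupby
from math import comb

def blocks(num: int) -> int:
    """Number of maximal runs of equal digits in num."""
    return sum(1 for _ in groupby(str(num)))

def count_brute(n: int, k: int) -> int:
    """Count n-digit numbers consisting of exactly k blocks."""
    return sum(1 for m in range(10**(n - 1), 10**n) if blocks(m) == k)

for n in range(1, 5):
    for k in range(1, n + 1):
        assert count_brute(n, k) == 9**k * comb(n - 1, k - 1)
```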
|
258,704 | <p>How can I solve a system of linear congruences as such?</p>
<p><span class="math-container">$$\begin{align*}
3x+2y+28z &= 9 \pmod {29} \\
5x+27y+z &= 9 \pmod {29} \\
2x+y+z &= 6 \pmod {29}
\end{align*}$$</span></p>
<p>I tried it this way as a system of equations, but no luck:</p>
<pre><code>eqn1 = FullSimplify[{3*x + 2*y + 28*z == 9 + (29*i) &&
5*x + 27*y + z == 9 + (29*j) && 2*x + y + z == 6 + (29*k)}]
Table[FindInstance[eqn1, {x, y, z, i, j, k}, Integers, 1] ]
</code></pre>
<p>Additionally, how can I solve these linear congruences:</p>
<p><span class="math-container">$$ 3x = 5 \pmod 6 $$</span></p>
<p>Tried this: No luck! <code>Reduce[3*x - 5 == 6, x, Modulus -> 6]</code></p>
<p>and</p>
<p><span class="math-container">$$ x^2 + x = 2 \pmod 8 $$</span></p>
<p>and</p>
<p>Find the multiplicative inverse of [5] in Z_42, which would mean <span class="math-container">$$ 5x + 42y =1 $$</span></p>
<p>and lastly:</p>
<p>Solve this system in Z_11:</p>
<p><span class="math-container">$$ [2][x]+[7][y] = [4] $$</span>
<span class="math-container">$$ [3][x]+[2][y] = [9] $$</span></p>
<p>I'm pretty sure Mathematica can input and solve these.</p>
| yarchik | 9,469 | <pre><code>Solve[{3*x + 2*y + 28*z == 9 && 5*x + 27*y + z == 9 &&
2*x + y + z == 6}, Modulus -> 29]
(* {{x -> 24, y -> 23, z -> 22}} *)
Solve[3*x == 5, Modulus -> 6]
(* {} *)
Solve[x + x^2 == 2, Modulus -> 8]
(* {{x -> 1}, {x -> 6}} *)
Solve[5 x == 1, Modulus -> 42]
(*{{x -> 17}}*)
Solve[{2 x + 7 y == 4, 3 x + 2 y == 9}, Modulus -> 11]
(* {{x -> 0, y -> 10}} *)
</code></pre>
<p>It is actually fun to solve homework problems with pen and paper, so why not try?</p>
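<p>For comparison, the same computations can be done without Mathematica. Below is a sketch in plain Python (my own addition; it assumes a prime modulus and a unique solution, and uses Python 3.8+'s <code>pow(a, -1, m)</code> for modular inverses):</p>

```python
def solve_mod(A, b, p):
    """Solve A x = b (mod p) for prime p, assuming a unique solution exists."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] % p)  # nonzero pivot
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], -1, p)              # modular inverse
        M[col] = [v * inv % p for v in M[col]]
        for r in range(n):                          # Gauss-Jordan elimination
            if r != col and M[r][col] % p:
                M[r] = [(v - M[r][col] * w) % p for v, w in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]

print(solve_mod([[3, 2, 28], [5, 27, 1], [2, 1, 1]], [9, 9, 6], 29))
# matches Mathematica's x -> 24, y -> 23, z -> 22
print(pow(5, -1, 42))   # multiplicative inverse of [5] in Z_42
```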
|
3,017,928 | <p>Doing some self study from the text <em>Basic Mathematics</em> by Serge Lang
I ran into an exercise question which I can't seem to wrap my head around.
The question is:</p>
<p>Express the following expressions in the form <span class="math-container">$2^m3^na^rb^s$</span>, where <span class="math-container">$m,n,r,s$</span> are positive integers.</p>
<p><span class="math-container">$8a^2b^3(27a^4)(2^5ab)$</span></p>
<p>After some research I found that the final answer is expressed as</p>
<p><span class="math-container">$2^83^3a^7b^4$</span></p>
<p>I've attempted to use distribution as a means of solving it but end up stuck and confused.
I'm entirely lost as to how that answer is derived. </p>
| JDMan4444 | 620,745 | <p>The key idea here is that multiplication is both commutative and associative, so we may multiply in any order and in any groupings we wish, as well as "break up" products into any groupings and any order we wish.</p>
<p>The expression <span class="math-container">$8a^2b^3(27a^4)(2^5ab)$</span>, by associativity and commutativity, is just the same expression without the parentheses and with the powers of 2 first, then the powers of 3, then the powers of <span class="math-container">$a$</span>, and finally the powers of <span class="math-container">$b$</span>. So <span class="math-container">$8a^2b^3(27a^4)(2^5ab)=8\cdot2^5\cdot27\cdot a^2a^4ab^3b$</span>. </p>
<p>Using our rules of exponents, we can then condense these products of similar numbers into one big power. So <span class="math-container">$8\cdot2^5\cdot27\cdot a^2a^4ab^3b = 2^83^3a^7b^4$</span>, which is the answer your research uncovered.</p>
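<p>A quick numeric spot-check of the simplification (an added illustration; any sample values of $a$ and $b$ work, since both sides are the same polynomial):</p>

```python
def lhs(a, b):
    """The original expression 8 a^2 b^3 (27 a^4)(2^5 a b)."""
    return 8 * a**2 * b**3 * (27 * a**4) * (2**5 * a * b)

def rhs(a, b):
    """The simplified form 2^8 3^3 a^7 b^4."""
    return 2**8 * 3**3 * a**7 * b**4

for a in range(1, 6):
    for b in range(1, 6):
        assert lhs(a, b) == rhs(a, b)
```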
|
3,017,928 | <p>Doing some self study from the text <em>Basic Mathematics</em> by Serge Lang
I ran into an exercise question which I can't seem to wrap my head around.
The question is:</p>
<p>Express the following expressions in the form <span class="math-container">$2^m3^na^rb^s$</span>, where <span class="math-container">$m,n,r,s$</span> are positive integers.</p>
<p><span class="math-container">$8a^2b^3(27a^4)(2^5ab)$</span></p>
<p>After some research I found that the final answer is expressed as</p>
<p><span class="math-container">$2^83^3a^7b^4$</span></p>
<p>I've attempted to use distribution as a means of solving it but end up stuck and confused.
I'm entirely lost as to how that answer is derived. </p>
| Blg Khalil | 572,713 | <p>You just apply basic power rules:
<br>
<span class="math-container">$a^na^m = a^{n+m}$</span> etc ...
<br>
Let's apply them now to your example:
<span class="math-container">$$8a^2b^3(27)(2^5)a^4ab$$</span>
<span class="math-container">$$8 (27) 2^5 a^2 b^3 a^4 ab$$</span>
<span class="math-container">$$2^3 2^5 (27) a^7 b^4$$</span>
<span class="math-container">$$2^8 3^3 a^7 b^4$$</span></p>
|
2,508,381 | <p>Starting from the idea that $$\sum_{n=1}^\infty n = -\frac{1}{12}$$
It's fairly natural to ask about the series of odd numbers $$\sum_{n=1}^{\infty} (2n - 1)$$
I worked this out in two different ways, and get two different answers. By my first method
$$\sum_{n=1}^{\infty} (2n - 1) + 2\bigg( \sum_{n=1}^\infty n \bigg) = \sum_{n=1}^\infty n$$
$$\therefore ~\sum_{n=1}^{\infty} (2n - 1) = - \sum_{n=1}^\infty n = \frac{1}{12}$$
But then by the second
$$\sum_{n=1}^{\infty} (2n - 1) - \sum_{n=1}^\infty n = \sum_{n=1}^\infty n$$
$$\therefore ~\sum_{n=1}^{\infty} (2n - 1) = 2 \sum_{n=1}^\infty n = -
\frac{1}{6}$$
Is there any reason to prefer one of these answers over the other? Or is the sum over all odd numbers simply undefined? In which case, was there a way to tell that in advance?</p>
<p>I'm also curious if this extends to other series of a similar form
$$\sum_{n=1}^{\infty} (an + b)$$
Are such series undefined whenever $b \neq 0$?</p>
| Misha Lavrov | 383,078 | <p>If we replace the sum $\sum_{n=1}^\infty n$ by $\sum_{n=1}^\infty n^{-s}$ then (where it converges) we have $$\sum_{n=1}^\infty n^{-s} = \zeta(s)$$ and when $s=-1$ (outside the region where the sum converges) we have $\zeta(-1) = -\frac1{12}$. Since setting $s=-1$ turns the terms of the second sum back into the terms of the first, this is kind of like saying that $1+2+3+\dots = -\frac1{12}$.</p>
<p>Similarly, we can replace your sum by $\sum_{n=1}^\infty (2n-1)^{-s}$. Where it converges, we have
$$\sum_{n=1}^\infty (2n-1)^{-s} = (1-2^{-s}) \zeta(s)$$<br>
and if you now substitute $s=-1$ (which is, again, outside the radius of convergence) we get $(1-2) \zeta(-1) = \frac1{12}$. Again, since setting $s=-1$ turns the terms of the second sum back into the terms of the first, this is kind of like saying that $1+3+5+\dots = \frac1{12}$.</p>
<p>But because $(a+b)^{-s}$ is not equal to $a^{-s} + b^{-s}$, once you use this technique to assign values to divergent series, it is no longer valid to say that $$\sum_{n=1}^\infty (a_n + b_n) = \sum_{n=1}^\infty a_n + \sum_{n=1}^\infty b_n.$$ So both of your derivations use technically-invalid operations for this context.</p>
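<p>Inside the region of convergence, the identity $\sum_{n=1}^\infty (2n-1)^{-s} = (1-2^{-s})\zeta(s)$ can be checked numerically, e.g. at $s=2$ where $\zeta(2)=\pi^2/6$ (a sanity check I am adding, not part of the original argument):</p>

```python
import math

# Partial sum of sum_{n>=1} 1/(2n-1)^s at s = 2, where the series converges.
N = 10**5
odd_sum = sum(1.0 / (2 * n - 1) ** 2 for n in range(1, N + 1))
expected = (1 - 2.0**-2) * math.pi**2 / 6   # (1 - 2^{-s}) * zeta(s) at s = 2
assert abs(odd_sum - expected) < 1e-5       # tail of the series is ~ 1/(4N)
```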
|
<p>I have the following to set up in Mathematica. </p>
<p>Assume a stress (sigma) is applied to a specimen which has a number of fibers within it. All fibers that have a strength less than the applied stress should fail. I set a cumulative distribution of the fiber strengths with a mean and standard deviation.</p>
<p>I want to use this cumulative distribution to tell me how many fibers have strength less than the applied stress and have therefore failed, and to store this number somewhere. This procedure should be repeated until all fibers have failed, so there is convergence in the two variables. I am new to Mathematica, so sorry for the vague question. </p>
<p>thanks,</p>
<p>Nick </p>
| phosgene | 11,679 | <p>Here's an implementation showing the load-carrying capacity of the bundle of fibers as the weakest remaining fiber breaks:</p>
<pre><code>nFibers = 1000;
mean = 100;
stdev = 20;
fibers = Sort@Abs@RandomVariate[NormalDistribution[mean, stdev], nFibers];
pull[{load_, fibers_}] := {
First@fibers*Length@fibers,(* load just before weakest fiber breaks *)
Rest@fibers(* remaining fibers after weakest fiber breaks *)
}
ListPlot[First@Transpose@NestList[pull, {0, fibers}, nFibers], Joined -> True]
</code></pre>
<p><img src="https://i.stack.imgur.com/ad1q2.png" alt="enter image description here"></p>
<p>The NestList command iteratively applies the 'pull' function, producing a sequence of the fiber bundle's load-carrying capacity as each successive fiber breaks.</p>
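<p>For readers without Mathematica, here is a rough Python port of the same idea (an addition, not part of the original answer; the names are my own). With equal load sharing, the bundle's capacity just before the weakest remaining fiber breaks is that fiber's strength times the number of surviving fibers.</p>

```python
import random

def bundle_loads(n_fibers=1000, mean=100.0, stdev=20.0, seed=0):
    """Load carried just before each successive weakest fiber breaks."""
    rng = random.Random(seed)
    strengths = sorted(abs(rng.gauss(mean, stdev)) for _ in range(n_fibers))
    # With n_fibers - i fibers left, the bundle holds
    # (weakest remaining strength) * (number of survivors).
    return [s * (n_fibers - i) for i, s in enumerate(strengths)]

loads = bundle_loads()
print(max(loads))   # peak load-carrying capacity of the bundle
```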
|
1,363,213 | <p>I am given a chessboard of size $8\times 8$. In this chessboard there are two holes at positions $(X1,Y1)$ and $(X2,Y2)$. Now I need to find the maximum number of rooks that can be placed on this chessboard such that no rook threatens another. </p>
<p>Also no two rooks can threaten each other if there is hole between them.</p>
<p>How can I tackle this problem? Please help</p>
<p><strong>NOTE : A hole can occupy only a single cell on the chess board.</strong></p>
| Asinomás | 33,907 | <p>Here is the drawing I meant to make for ten rooks (holes are yellow and rooks are black):
<img src="https://i.stack.imgur.com/pc0G5.png" alt="enter image description here"></p>
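<p>The picture is not reproduced in this dump. For completeness, a standard way to compute the maximum for any hole placement (my own addition, not the method of this answer) is to split each row and each column into segments at the holes and find a maximum bipartite matching between row segments and column segments; each matched pair is one rook.</p>

```python
def max_rooks(size, holes):
    """Max non-attacking rooks on a size x size board; holes block attacks."""
    holes = set(holes)
    row_seg, col_seg = {}, {}     # (r, c) -> segment id
    n_rows = n_cols = 0
    for r in range(size):         # label row segments
        prev_hole = True
        for c in range(size):
            if (r, c) in holes:
                prev_hole = True
                continue
            n_rows += 1 if prev_hole else 0
            prev_hole = False
            row_seg[r, c] = n_rows - 1
    for c in range(size):         # label column segments
        prev_hole = True
        for r in range(size):
            if (r, c) in holes:
                prev_hole = True
                continue
            n_cols += 1 if prev_hole else 0
            prev_hole = False
            col_seg[r, c] = n_cols - 1
    adj = [[] for _ in range(n_rows)]
    for cell, rs in row_seg.items():
        adj[rs].append(col_seg[cell])
    match = [-1] * n_cols         # Kuhn's augmenting-path matching
    def augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if match[v] == -1 or augment(match[v], seen):
                    match[v] = u
                    return True
        return False
    return sum(augment(u, set()) for u in range(n_rows))

assert max_rooks(8, []) == 8          # classic board: one rook per row
assert max_rooks(8, [(1, 3)]) == 9    # a hole lets its row and column hold two
```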
|
983,566 | <p>Here is the problem in full:</p>
<blockquote>
<p>A heap has $x$ marbles, where $x$ is a positive integer. The following process is repeated until the heap is broken down into single marbles: choose a heap with more than 1 marble and form two non-empty heaps from it. One will contain $n$ marbles and the other $m$ marbles. Each time this is done, record the product $nm$. Use induction to show the sum of your recorded products is $\binom{x}{2}$ no matter what the sequence of dividing the heap is.</p>
</blockquote>
<p>So, using induction:
We assume truth when there are $k$ marbles, where $k$ is a positive integer. We test the basis case $k = 1$, which is a one marble pile. $\binom{1}{2} = 0$, and splitting into a pile with 1 marble and a pile with 0 marbles also gives a product of 0, so the basis case is true. Next we demonstrate truth when $x=k+1$. We begin by making a pile of size 1 and a pile of size $k$. The product that we get from this split is $k$. The induction hypothesis says $k$ marbles ends up with a result of $\binom{k}{2}$. I can't prove it just yet (I need help here too) but I know $k + \binom{k}{2} = \binom{k+1}{2}$. Therefore the proof is shown.</p>
<p>Now, I need help here. I don't think this is right because of the part, "no matter what the sequence of dividing the heap is." I think I violated this when I make a pile of size 1 specifically during the induction step. Could strong induction somehow help me here? I just can't really see how.</p>
<p>Additionally, if anyone could point me to a proof for $k + \binom{k}{2} = \binom{k+1}{2}$ I would really appreciate being able to understand this one from an analytic standpoint too.</p>
<p>Thanks.</p>
| Brian M. Scott | 12,042 | <p>Let’s dispose of the technical point first. By straight algebra,</p>
<p>$$k+\binom{k}2=k+\frac{k(k-1)}2=\frac{2k+k^2-k}2=\frac{k(k+1)}2=\binom{k+1}2\;.$$</p>
<p>Alternatively,</p>
<p>$$k+\binom{k}2=\binom{k}1+\binom{k}2=\binom{k+1}2$$</p>
<p>by the Pascal’s triangle identity.</p>
<p>You’re quite right that it’s not sufficient to consider splitting your $k+1$ pile into piles of size $1$ and $k$. In fact, you want to use what’s often called strong induction: suppose that the result is true for piles of size less than $k$, and show that it’s then necessarily true for piles of size $k$. Now you split your pile of size $k$ into piles of sizes $m$ and $n$, where $m,n\ge 1$ and $m+n=k$, and your induction hypothesis tells you that the pile of size $m$ will contribute a total of $\binom{m}2$, while the pile of size $n$ will contribute a total of $\binom{n}2=\binom{k-m}2$. In addition, of course, this split will contribute $mn$. Thus, you’ll end up with</p>
<p>$$\binom{m}2+\binom{k-m}2+m(k-m)\;.$$</p>
<p>Now do some algebra, more or less along the lines of what I did at the top.</p>
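<p>The invariance can also be spot-checked by simulation: however a heap is split, the recorded products always total $\binom{x}{2}$ (an added illustration, not part of the proof):</p>

```python
import random
from math import comb

def split_sum(x, rng):
    """Total of the recorded products n*m for a random splitting sequence."""
    if x <= 1:
        return 0
    m = rng.randrange(1, x)       # split x into m and x - m, both non-empty
    return m * (x - m) + split_sum(m, rng) + split_sum(x - m, rng)

rng = random.Random(1)
for x in range(1, 40):
    for _ in range(20):           # many random splitting orders per x
        assert split_sum(x, rng) == comb(x, 2)
```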
|
983,566 | <p>Here is the problem in full:</p>
<blockquote>
<p>A heap has $x$ marbles, where $x$ is a positive integer. The following process is repeated until the heap is broken down into single marbles: choose a heap with more than 1 marble and form two non-empty heaps from it. One will contain $n$ marbles and the other $m$ marbles. Each time this is done, record the product $nm$. Use induction to show the sum of your recorded products is $\binom{x}{2}$ no matter what the sequence of dividing the heap is.</p>
</blockquote>
<p>So, using induction:
We assume truth when there are $k$ marbles, where $k$ is a positive integer. We test the basis case $k = 1$, which is a one marble pile. $\binom{1}{2} = 0$, and splitting into a pile with 1 marble and a pile with 0 marbles also gives a product of 0, so the basis case is true. Next we demonstrate truth when $x=k+1$. We begin by making a pile of size 1 and a pile of size $k$. The product that we get from this split is $k$. The induction hypothesis says $k$ marbles ends up with a result of $\binom{k}{2}$. I can't prove it just yet (I need help here too) but I know $k + \binom{k}{2} = \binom{k+1}{2}$. Therefore the proof is shown.</p>
<p>Now, I need help here. I don't think this is right because of the part, "no matter what the sequence of dividing the heap is." I think I violated this when I make a pile of size 1 specifically during the induction step. Could strong induction somehow help me here? I just can't really see how.</p>
<p>Additionally, if anyone could point me to a proof for $k + \binom{k}{2} = \binom{k+1}{2}$ I would really appreciate being able to understand this one from an analytic standpoint too.</p>
<p>Thanks.</p>
| André Nicolas | 6,312 | <p><strong>Hint:</strong> We give a part-solution, to give you an opportunity to do some of the details.</p>
<p>Yes, it is by strong induction. </p>
<p>Suppose that $x=m+n$, and we break the pile of $x$ into parts of size $m$ and $n$. </p>
<p>Then we write down $mn$, and proceed to break up the $m$-pile and the $n$-pile. By the induction hypothesis, the sum of the numbers we write down when breaking down the $m$-pile is $\binom{m}{2}$, and the sum for the $n$-pile is $\binom{n}{2}$. Thus the total sum is
$$mn+\binom{m}{2}+\binom{n}{2}.$$
We need to check that this is $\binom{x}{2}$, that is, $\binom{m+n}{2}$. So we need to show that
$$mn+\binom{m}{2}+\binom{n}{2}=\binom{m+n}{2}.\tag{1}$$
The left-hand side of (1) is
$$mn+\frac{m(m-1)}{2}+\frac{n(n-1)}{2}.$$
Bring to a common denominator $2$ and simplify. We get
$$\frac{m^2+2mn+n^2-m-n}{2}.\tag{2}$$
Now you can work with the right-hand side of (1) and show that it is equal to (2). </p>
<p><strong>Remark:</strong> One probably should start the induction at $x=2$. For $x=1$ no split is ever made, so the total recorded is an <em>empty sum</em>, which by definition is $0=\binom{1}{2}$; in particular, the split into piles of sizes $1$ and $0$ used in your post is not allowed, since both heaps must be non-empty.</p>
|
3,991,072 | <p>Consider, two planar vectors:</p>
<p><span class="math-container">$$V= a \hat{x} + b \hat{y}$$</span></p>
<p>And
<span class="math-container">$$ U = a' \hat{x} + b' \hat{y}$$</span></p>
<p>These are analogous to the complex numbers:</p>
<p><span class="math-container">$$ v = a + bi$$</span></p>
<p>and,</p>
<p><span class="math-container">$$u= a' + b' i$$</span></p>
<p>Now, there are clear rules for multiply <span class="math-container">$ v \cdot u$</span> and also the expression: <span class="math-container">$ \frac{v}{u}$</span> , also there exists a geometric interpretation (if you view in polar form). Then, why is it that we don't bring up these products when speaking vectors and only discuss dot and cross product?</p>
| Eric M. Schmidt | 48,235 | <p>Your proposed product is geometrical in a sense, but there is a crucial difference between it and the dot and cross products. Imagine you have a table, or some other flat horizontal surface, and two arrows drawn on it that represent vectors. Here is a question: What is the "complex number" product of these two vectors?</p>
<p>It is not possible to answer this question. This is perhaps most easily seen by noting that you cannot find the polar form of the vectors, because the angle depends on a coordinate system ("basis"), and we haven't specified one. In other words, the "complex number" product depends not only on the vectors, but on an (entirely arbitrary) choice of coordinates.</p>
<p>By contrast, the dot product and cross product can be understood without reference to any particular coordinate system. The dot product is the same in any orthonormal basis (that is, a coordinate system made from vectors of unit length that are all perpendicular to each other). This is also true of the cross product, provided the two bases have the same handedness (that is, you can rotate one basis into the other).</p>
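<p>This can be made concrete numerically (my own demo, not from the original answer): rotating the coordinate system leaves the dot product of two vectors unchanged, but changes their "complex-number" product, since $(e^{i\theta}u)(e^{i\theta}v)=e^{2i\theta}uv$.</p>

```python
import cmath
import math

u, v = complex(1, 2), complex(3, -1)
theta = 0.7                       # rotate the basis by an arbitrary angle
rot = cmath.exp(1j * theta)
u2, v2 = u * rot, v * rot         # the same arrows, in rotated coordinates

dot = lambda a, b: a.real * b.real + a.imag * b.imag
assert math.isclose(dot(u, v), dot(u2, v2))   # dot product: basis-independent
assert abs(u * v - u2 * v2) > 1e-6            # complex product: changed
```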
|
2,003 | <p>I use some custom shortcut keys in <code>KeyEventTranslations.tr</code>. One is for the <code>Delete All Output</code> function: </p>
<pre><code>Item[KeyEvent["w", Modifiers -> {Control}],
FrontEnd`FrontEndExecute[FrontEnd`FrontEndToken["DeleteGeneratedCells"]]]
</code></pre>
<p>or simply:</p>
<pre><code>Item[KeyEvent["w", Modifiers -> {Control}], "DeleteGeneratedCells"]
</code></pre>
<p>This works as expected, putting up the dialog: "Do you really want to delete all the output cells in the notebook?". Is there any way to set up <code>KeyEventTranslations.tr</code> that when I hit <kbd>Ctrl</kbd>+<kbd>w</kbd> the dialog is automatically acknowledged and I don't have to hit <kbd>Enter</kbd>? The same goes for the <code>Quit kernel</code> function, that also puts up a dialog.</p>
| bobknight | 8,565 | <p>An alternative is to customize the file MenuSetup.tr that is in the same folder of KeyEventTranslations.tr. The difference is that there you can use KernelEvaluate to execute a NotebookDelete. Here is an example of a new menu item. Insert this code in the list of Menu into the MenuSetup.tr file and restart Mathematica.</p>
<pre><code>Menu["My commands",
{
    MenuItem["Delete All Output", KernelExecute[NotebookDelete[Cells[InputNotebook[], CellStyle -> "Output"]]], MenuEvaluator -> "System", MenuKey["F10", Modifiers->{}]],
MenuItem["Delete All Generated", KernelExecute[NotebookDelete[Cells[InputNotebook[], GeneratedCell -> True]]], MenuEvaluator -> "System", MenuKey["F", Modifiers->{}]]
}]
</code></pre>
<p>The "Delete All Output" item is already in the Cell menu, and in M9 it pops the warning message up (in M10 it no longer does).</p>
|
439,745 | <blockquote>
<p>Prove: $|x-1|+|x-2|+|x-3|+\cdots+|x-n|\geq n-1$</p>
</blockquote>
<p>example1: $|x-1|+|x-2|\geq 1$</p>
<p>my solution (substitution):</p>
<p>$x-1=t,x-2=t-1,|t|+|t-1|\geq 1,|t-1|\geq 1-|t|,$</p>
<p>square,</p>
<p>$t^2-2t+1\geq 1-2|t|+t^2,\text{Since} -t\leq -|t|,$</p>
<p>so proved.</p>
<p><em>question1</em> : Is my proof right? Alternatives?</p>
<p>one reference answer: </p>
<p>$1-|x-1|\leq |1-(x-1)|=|1-x+1|=|x-2|$</p>
<p><em>question2</em> : prove:</p>
<p>$|x-1|+|x-2|+|x-3|\geq 2$</p>
<p>So I guess (I think there is a name for this; what is it? a wiki item?):</p>
<p>$|x-1|+|x-2|+|x-3|+\cdots+|x-n|\geq n-1$</p>
<p>How to prove this? This is <em>question3.</em> I doubt whether the two methods I used above may suit for this general case.</p>
<p>Of course, welcome any interesting answers and good comments.</p>
| marty cohen | 13,079 | <p>You want
$S(x)
=\sum_{i=1}^n |x-i|$.</p>
<p>I will show</p>
<p>$\begin{cases}
\text{if } x \le 1&S(x) \ge \dfrac{n(n-1)}{2}\\
\text{if } x \ge n &S(x) \ge \dfrac{n(n-1)}{2}\\
\text{otherwise} &S(x) \ge \dfrac{n^2-1}{4}
\end{cases}
$</p>
<p>Exact results are</p>
<p>$\begin{cases}
\text{if } x \le 1, & S(x) = n(1-x) + \dfrac{n(n-1)}{2}\\
\text{if } x \ge n, & S(x) = n(x-n)+ \dfrac{n(n-1)}{2}\\
\text{if } j\le x < j+1 \text{ where } 1 \le j < n, &S(x) = \dfrac{n^2+(n-2x+1)^2-(1-2y)^2}{4}\text{ where } y=x-j
\end{cases}
$</p>
<p>As a check, these are correct for
$x=0, 1, n$,
and
$n+1$.</p>
<p>We consider three cases:
$x \le 1$,
$x \ge n$,
and $1 < x < n$.</p>
<p>If $x \le 1$,
then
$|x-i| = -x+i$,
so</p>
<p>$\begin{align}
S(x) &= \sum_{i=1}^n (-x+i)\\
&=-nx + \frac{n(n+1)}{2}\\
&=n(1-x)-n+ \frac{n(n+1)}{2}\\
&=n(1-x) +\frac{n(n-1)}{2}\\
\end{align}
$</p>
<p>If $x \ge n$,
then
$|x-i| = x-i$,
so</p>
<p>$\begin{align}
S(x) = \sum_{i=1}^n (x-i)
&=nx - \frac{n(n+1)}{2}\\
&=n(x-n)+n^2 - \frac{n(n+1)}{2}\\
&=n(x-n)+\frac{2n^2-n(n+1)}{2}\\
&= n(x-n)+\frac{n^2-n}{2}\\
&= n(x-n)+\frac{n(n-1)}{2}\\
\end{align}
$</p>
<p>If $j \le x < j+1$
where $1 \le j < n$,
let $y = x-j$,
so $0 \le y < 1$.
Then</p>
<p>$\begin{align}
S(x)
&=\sum_{i=1}^n |x-i|\\
&=\sum_{i=1}^n |j+y-i|\\
&=\sum_{i=1}^j |j+y-i|+\sum_{i=j+1}^n |j+y-i|\\
&=\sum_{i=1}^j (j+y-i)+\sum_{i=j+1}^n -(j+y-i)\\
&=\sum_{i=1}^j (j+y-i)+\sum_{i=j+1}^n (i-j-y)\\
&=jy+\sum_{i=0}^{j-1} (i)-(n-j)y+\sum_{i=1}^{n-j} (i)\\
&=(2j-n)y+\frac{j(j-1)+(n-j)(n-j+1)}{2}\\
&=(2j-n)y+\frac{j^2-j+(n-j)^2+(n-j)}{2}\\
&=(2j-n)y+\frac{(n^2+(n-2j)^2)/2+(n-2j)}{2}\quad (*)\\
&=(2j-n)y+\frac{n^2+(n-2j)^2+2(n-2j)}{4}\\
&=\frac{n^2+(n-2j)^2+2(n-2j)-4y(n-2j)}{4}\\
&=\frac{n^2+(n-2j)^2+2(1-2y)(n-2j)}{4}\\
&=\frac{n^2+(n-2j+1-2y)^2-(1-2y)^2}{4}\\
&=\frac{n^2+(n-2x+1)^2-(1-2y)^2}{4}\\
&\ge \frac{n^2-1}{4}\\
\end{align}
$</p>
<p>Note: The line marked "(*)" used
$a^2+b^2 = \dfrac{(a+b)^2+(a-b)^2}{2}$.</p>
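<p>These case formulas can be checked mechanically against the direct definition of $S(x)$, using exact rational arithmetic (my own verification sketch, not part of the derivation):</p>

```python
from fractions import Fraction as F

def S(x, n):
    """Direct definition: sum of |x - i| for i = 1..n."""
    return sum(abs(x - i) for i in range(1, n + 1))

def S_formula(x, n):
    """The three closed-form cases derived above."""
    if x <= 1:
        return n * (1 - x) + F(n * (n - 1), 2)
    if x >= n:
        return n * (x - n) + F(n * (n - 1), 2)
    j = int(x)              # j <= x < j + 1 with 1 <= j < n
    y = x - j
    return (n**2 + (n - 2 * x + 1)**2 - (1 - 2 * y)**2) / 4

for n in range(2, 8):
    for num in range(-8, 8 * n + 9):   # sample x on a grid of eighths
        x = F(num, 8)
        assert S(x, n) == S_formula(x, n)
```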
|
3,484,136 | <blockquote>
<p>Show that <span class="math-container">$n^2+n$</span> is even for all <span class="math-container">$n\in\mathbb{N}$</span> by contradiction.</p>
</blockquote>
<p>My attempt: assume that <span class="math-container">$n^2+n$</span> is odd, then <span class="math-container">$n^2+n=2k+1$</span> for all <span class="math-container">$k\in\mathbb{N}$</span>.</p>
<p>We have that
<span class="math-container">$$n^2+n=2k+1 \Leftrightarrow \left(n+\frac{1}{2}\right)^2-\frac{1}{4}=2k+1\Leftrightarrow n=\sqrt{2k+\frac{5}{4}}-\frac{1}{2}.$$</span></p>
<p>Choosing <span class="math-container">$k=1$</span>, we have that <span class="math-container">$n=\sqrt{2+\frac{5}{4}}-\frac{1}{2}\notin\mathbb{N}$</span>, so we have a contradiction because <span class="math-container">$n^2+n=2k+1$</span> should be verified for all <span class="math-container">$n\in\mathbb{N}$</span> and for all <span class="math-container">$k\in\mathbb{N}$</span>.</p>
<p>Is this correct or wrong? If wrong, can you tell me where and why? Thanks.</p>
| James Turbett | 721,003 | <p>Suppose <span class="math-container">$n^2 + n$</span> is odd for some natural number <span class="math-container">$n$</span>. Then <span class="math-container">$n(n+1)$</span>, the product of two consecutive integers, is also odd. However, an odd number cannot have any even factors. Either <span class="math-container">$n$</span> is odd and <span class="math-container">$n+1$</span> is even, or <span class="math-container">$n$</span> is even and <span class="math-container">$n+1$</span> is odd, as they are consecutive integers. The product <span class="math-container">$n(n+1)$</span> therefore has one even and one odd factor, which contradicts the statement that <span class="math-container">$n(n+1)$</span> is odd, since an odd number times an even number is even. </p>
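<p>The parity claim is easy to spot-check by machine (an added illustration only, not part of the argument):</p>

```python
# n^2 + n = n(n+1): one of any two consecutive integers is even,
# so the product is always even.
assert all((n * n + n) % 2 == 0 for n in range(1, 10**4))
print("n^2 + n is even for n = 1..9999")
```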
|
3,484,136 | <blockquote>
<p>Show that <span class="math-container">$n^2+n$</span> is even for all <span class="math-container">$n\in\mathbb{N}$</span> by contradiction.</p>
</blockquote>
<p>My attempt: assume that <span class="math-container">$n^2+n$</span> is odd, then <span class="math-container">$n^2+n=2k+1$</span> for all <span class="math-container">$k\in\mathbb{N}$</span>.</p>
<p>We have that
<span class="math-container">$$n^2+n=2k+1 \Leftrightarrow \left(n+\frac{1}{2}\right)^2-\frac{1}{4}=2k+1\Leftrightarrow n=\sqrt{2k+\frac{5}{4}}-\frac{1}{2}.$$</span></p>
<p>Choosing <span class="math-container">$k=1$</span>, we have that <span class="math-container">$n=\sqrt{2+\frac{5}{4}}-\frac{1}{2}\notin\mathbb{N}$</span>, so we have a contradiction because <span class="math-container">$n^2+n=2k+1$</span> should be verified for all <span class="math-container">$n\in\mathbb{N}$</span> and for all <span class="math-container">$k\in\mathbb{N}$</span>.</p>
<p>Is this correct or wrong? If wrong, can you tell me where and why? Thanks.</p>
| fleablood | 280,126 | <p>To be different. Suppose there some natural number where this is true. But it isn't true for <span class="math-container">$1$</span> or <span class="math-container">$2$</span>. But if there is some value where it <em>is</em> true there must be some <em>first</em> value, <span class="math-container">$n$</span> where it is true.</p>
<p>So suppose that <span class="math-container">$n^2 + n$</span> is odd but <span class="math-container">$(n-1)^2 + (n-1)$</span> is not odd. Then <span class="math-container">$n^2 - 2n + 1 + n - 1 = n^2 - n$</span> is even, and hence so is <span class="math-container">$n^2 - n + 2$</span>. So <span class="math-container">$n^2 - n + 2 = 2k$</span> for some <span class="math-container">$k$</span>.</p>
<p>So <span class="math-container">$n^2 - n + 2 + (2n-2) = 2k + (2n-2) = 2(k+n -1)$</span> is even. But <span class="math-container">$n^2 - n + 2 + (2n-2)=n^2 + n$</span> which we assumed was odd.</p>
<p>The advantage of this proof is that we don't have to argue that if <span class="math-container">$n^2 + n = n(n+1)$</span> is odd then <span class="math-container">$n$</span> and <span class="math-container">$n+1$</span> must both be odd (why is that so?) and that it is impossible for <span class="math-container">$n$</span> and <span class="math-container">$n+1$</span> to both be odd (why is that so?).</p>
<p>The disadvantage of this proof is it's abstract and not straightforward.</p>
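<p>For what it's worth, the statement itself is easy to machine-check over a range; a tiny Python sketch (the function name is mine):</p>

```python
def first_odd_case(limit):
    """Search for the smallest n with n^2 + n odd; the proof says none exists."""
    for n in range(1, limit + 1):
        if (n * n + n) % 2 == 1:
            return n
    return None

# No counterexample below 10000, consistent with the proof.
assert first_odd_case(10_000) is None
```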
|
4,572,336 | <p>This is the
<a href="https://i.stack.imgur.com/BRjsf.jpg" rel="nofollow noreferrer">question</a></p>
<p>I have managed to find the formula for part a), which is n(n+1) for all natural numbers n, but I'm not sure how to prove it directly. Can I just say that the summation of 2i is the same as the summation of i plus the summation of i as the proof?</p>
<p>I have managed to do part b) which uses induction to prove that the summation of even numbers can be found using the formula n(n+1).</p>
| Sri-Amirthan Theivendran | 302,692 | <p>It suffices to prove that
<span class="math-container">$$
1+2+\dotsb+n=\frac{n(n+1)}{2} = \binom{n+1}{2}
$$</span>
The RHS counts the <span class="math-container">$2$</span>-element subsets of <span class="math-container">$A=\{0,1,\dotsc, n\}$</span>. We note that there are <span class="math-container">$i$</span> two-element subsets of <span class="math-container">$A$</span> whose max element is <span class="math-container">$i$</span>, for <span class="math-container">$0\leq i\leq n$</span>. The LHS counts the <span class="math-container">$2$</span>-element subsets of <span class="math-container">$A$</span> in this way.</p>
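<p>The target identity is easy to sanity-check numerically; a small Python sketch (the function name is mine):</p>

```python
from math import comb

def triangular(n):
    """1 + 2 + ... + n computed directly."""
    return sum(range(1, n + 1))

# The identity 1 + 2 + ... + n = n(n+1)/2 = C(n+1, 2), over a range of n.
for n in range(1, 50):
    assert triangular(n) == n * (n + 1) // 2 == comb(n + 1, 2)
```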
|
1,534,981 | <p>I am trying to verify whether my procedure for finding the extrema is correct for the function
$u \left( x,y \right) ={x}^{2}-{y}^{2}$ on the set $D=\left\{\left(x ,y \right)\in \mathbb{R}^{2} \mid x ^{2}+y ^{2}\le 1\right\}$</p>
<p>By the closed interval method, to find the absolute maximum and minimum values of a continuous function u on a closed, bounded set D,</p>
<ol>
<li><p>Find the values of u at the critical points of u in D:
$u_{{x}} \left( x,y \right) =2\,x=0$ and
$u_{{y}} \left( x,y \right) =-2\,y=0$,
and so $(x,y) = (0,0)$ is the only critical point on the unit disk.</p></li>
<li><p>Find the extreme values of u on the boundary of the unit disk:
From ${y}^{2}=-{x}^{2}+1$, $v(x) = u \left( x,y \right) ={x}^{2}-{y}^{2}=2{x}^{2}-1$
and
$v'(x) = 4x$.</p></li>
</ol>
<p>So the only critical point of $v$ on the boundary is $x = 0$.</p>
<p>The boundary values of $x$ satisfy $-1 \le x \le 1$, and so the values of $v$ at the candidate points are:</p>
<p>$v(0) = -1$, $v(-1) = 1$, $v(1) = 1$</p>
| Nizar | 227,505 | <p>I would advise you, as a similar strategy, to use a <a href="https://en.wikipedia.org/wiki/Lagrange_multiplier" rel="nofollow">Lagrange multiplier</a>. </p>
<p>So you first study the max and min of $f$ in the open set $x^2+y^2< 1$. As you see, in your case $f$ has neither a min nor a max in this region, because the only critical point is $(0,0)$, and the Hessian matrix of $f$ at $(0,0)$ is $$\begin{pmatrix} 2 & 0 \\ 0 & -2 \end{pmatrix} $$ which is neither positive nor negative definite.</p>
<p>Now for the boundary, i.e. at $x^2+y^2=1$, you may optimize $f(x,y)$ subject to the constraint $g(x,y)=0$, with $g(x,y)=x^2+y^2-1$. Then use the Lagrange multiplier to find the min and the max of $f$. </p>
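<p>As a numeric cross-check of the boundary analysis (not the Lagrange computation itself), one can sample the circle and confirm the extreme values $\pm 1$; a Python sketch:</p>

```python
import math

def u(x, y):
    return x * x - y * y

# On the boundary x = cos(t), y = sin(t), so u = cos(2t): the max 1 is
# attained at (+-1, 0) and the min -1 at (0, +-1).
ts = [2 * math.pi * i / 10_000 for i in range(10_000)]
vals = [u(math.cos(t), math.sin(t)) for t in ts]
assert max(vals) > 1 - 1e-6 and min(vals) < -1 + 1e-6
```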
|
4,050,336 | <p>I am familiar with the Negative Binomial distribution <span class="math-container">$NB(p, k)$</span>, which gives the number of failures before <span class="math-container">$k$</span> successes occur in a Bernoulli process with parameter <span class="math-container">$p$</span>. I am wondering, however, if the distribution of the number of failures before getting <span class="math-container">$k$</span> <em>distinct</em> successes is a well-known distribution.</p>
<p>More precisely, suppose we have some set of <span class="math-container">$N$</span> elements; without loss of generality, we may assume this set is <span class="math-container">$[N] = \{1, \cdots, N\}$</span>. We have a subset <span class="math-container">$S \subseteq [N]$</span> which contains the successes, while every element in <span class="math-container">$[N] \setminus S$</span> is a failure. Now in our process, we select an element uniformly at random from <span class="math-container">$[N]$</span>, and keep doing this until we have attained some <span class="math-container">$k \leq |S|$</span> distinct successes. Let <span class="math-container">$X$</span> be the total number of trials performed before we stop the process. <em>What is the distribution of <span class="math-container">$X$</span>, in terms of <span class="math-container">$k$</span>, <span class="math-container">$N$</span>, and <span class="math-container">$p = |S| / N$</span>?</em> If we can't concisely describe this distribution, can we at least come up with some (relatively tight) tail bounds? And how valid is the approximation <span class="math-container">$X \approx NB(p, k)$</span>?</p>
<p>I know this problem can simplify to the coupon collector's problem in the case that <span class="math-container">$S = [N]$</span> and <span class="math-container">$k = N$</span>, but I'm more interested in the case where <span class="math-container">$0 < p < 1$</span> and <span class="math-container">$k \ll |S| = pN$</span>.</p>
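<p>A Monte Carlo sketch of the process (the function name and parameters are mine, purely illustrative) makes the comparison with the $NB(p,k)$ mean $k/p$ concrete:</p>

```python
import random

def trials_until_k_distinct(N, S_size, k, rng):
    """Draw uniformly from {0, ..., N-1} until k distinct successes are seen;
    elements 0 .. S_size-1 play the role of the success set S."""
    seen, trials = set(), 0
    while len(seen) < k:
        trials += 1
        draw = rng.randrange(N)
        if draw < S_size:
            seen.add(draw)
    return trials

rng = random.Random(0)
N, S_size, k = 1000, 200, 10            # p = 0.2 and k << |S|
reps = 2000
mean_X = sum(trials_until_k_distinct(N, S_size, k, rng) for _ in range(reps)) / reps
# The NB(p, k) total-trial count has mean k/p = 50; the exact mean here is
# N * (1/200 + 1/199 + ... + 1/191) ~ 51.2, so the approximation is close.
assert abs(mean_X - k / (S_size / N)) < 5
```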
| Richard Jensen | 658,583 | <p>You are on the right track assuming it's irreducible, but you have not reached a contradiction yet, because you still need to use the explicit form of the elements of <span class="math-container">$\mathbb{Q}(i\sqrt{2})$</span>. Notice that any element of <span class="math-container">$\mathbb{Q}(i\sqrt{2})$</span> can be written as <span class="math-container">$a + ib\sqrt{2} $</span> for <span class="math-container">$a,b \in \mathbb{Q}$</span> (why?).</p>
<p>So for an element in <span class="math-container">$a + ib\sqrt{2} \in \mathbb{Q}(i\sqrt{2})$</span> to be a solution to the equation, we must have</p>
<p><span class="math-container">$-1= (a + ib\sqrt{2})^2= a^2-2b^2+iab\sqrt{2}$</span></p>
<p>From this equation, we first get that <span class="math-container">$ab = 0$</span>. Secondly, if <span class="math-container">$b=0$</span>, we have <span class="math-container">$-1=a^2$</span>, which is impossible. So <span class="math-container">$a=0$</span>. Therefore we get</p>
<p><span class="math-container">$-1=-2b^2 \implies b^2=\frac{1}{2}$</span></p>
<p>which is impossible.</p>
|
4,638,393 | <p><a href="https://i.stack.imgur.com/SgVlq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SgVlq.png" alt="enter image description here" /></a></p>
<p>How does the author obtain formula (4)? From formula (2), I only get that <span class="math-container">$u\left(\frac{k}{n}+\frac{1}{n},\cdot\right)=\left(1-\frac{c}{n}\partial_x\right)u\left(\frac{k}{n},\cdot\right)$</span> but I don't see how the exponent <span class="math-container">$k$</span> appears as in (4).</p>
| Alex Ortiz | 305,215 | <p>Suppose that for some <span class="math-container">$k\ge 1$</span>, we have
<span class="math-container">$$
u(\frac kn, \cdot) \approx (1-\frac{c}{n}\partial_x)^kg(\cdot).\tag{$\ast$}
$$</span>
The case <span class="math-container">$k = 1$</span> holds by (2). Again by the approximation (2) and our assumption <span class="math-container">$(\ast)$</span>,
<span class="math-container">\begin{align*}
u(\frac kn + \frac 1n,\cdot) &\approx (1-\frac cn\partial_x)u(\frac kn,\cdot)\\
&\approx (1-\frac{c}{n}\partial_x)^{k+1}g(\cdot).
\end{align*}</span>
By induction (4) holds for each <span class="math-container">$k$</span>.</p>
|
1,159,155 | <p>If we define <strong>open</strong> as: A set $O⊆R$ is open if for all points $a∈O$ there exists an
$\epsilon$-neighborhood $V_\epsilon(a)⊆O$.</p>
<p>Where $V_\epsilon(a) = \{x \in \mathbb{R}: | x - a | < \epsilon\}$
Now consider some open interval: </p>
<p>$(c,d) = \{x \in \mathbb{R} : c<x<d \}$</p>
<p>To see that $(c,d)$ is open, let $x \in (c,d)$ be arbitrary.</p>
<p>let $\epsilon = \text{min}\{x-c,d-x\}$, then it follows that $V_\epsilon(x) \subseteq (c,d)$</p>
<p>I am unable to see why this definition does not hold for a set containing one or more closed end points.</p>
<p>If my understanding is correct, let's take the closed interval $[1,10]$ and lets choose $x = 10$. So clearly $x \in [1,10]$</p>
<p>$\epsilon = \text{min} \{10-1, 10-10\} = 0$</p>
<p>Then isn't it still true that
$V_\epsilon(x) \subseteq [1,10]$ </p>
<p>Also, how would you express $V_\epsilon(x)$ in this instance, like it is expressed in the third line?</p>
| layman | 131,740 | <p>You correctly focused on one of the end points. For every point that is not an end point in the interval $[1,10]$, your argument for finding a ball around that point contained in $[1,10]$ works.</p>
<p>Now, if $x = 10$, we need to show that <em>every</em> ball around $10$ is <em>not</em> contained in $[1, 10]$. Ok, so let $\epsilon > 0$ be arbitrary. $V_{\epsilon}(10) = (10 - \epsilon, 10 + \epsilon)$. This clearly contains $10$ for each positive $\epsilon$. BUT, is this interval contained in $[1,10]$? Of course not. The largest number in $[1,10]$ is $10$, but for each positive $\epsilon$, $(10 - \epsilon, 10 + \epsilon)$ contains a number bigger than $10$, and so this open set is <em>not</em> contained in $[1,10]$.</p>
<p>So, <strong>for every</strong> positive $\epsilon$, the ball around $10$ of radius $\epsilon$, which is equal to the interval $(10 - \epsilon, 10 + \epsilon)$, is <strong>not</strong> contained in $[1,10]$, which means $[1,10]$ is not open.</p>
<p>Instead of $x = 10$, you could have made a similar argument for $x = 1$. I will leave it to you as an exercise to show that $x = 1$ causes the definition of open to fail for the set $[1,10]$.</p>
|
4,544,787 | <p>These sums showed up in a probability problem I was working on. They're not quite the Stirling numbers of the first kind since it's possible to have e.g. <span class="math-container">$i_1 = i_2$</span>. Denoting the sum by <span class="math-container">$(k\mid n)$</span> we have the recurrence relation</p>
<p><span class="math-container">$(k\mid n) = n(k-1\mid n) + (k\mid n-1)$</span></p>
<p>I thought maybe the average term of the sum would be similar to the average product over a random list of <span class="math-container">$k$</span> numbers from <span class="math-container">$1$</span> to <span class="math-container">$n$</span>. But it seems to always be a few times greater.</p>
<p>I'm beginning to despair of there being a closed form expression for <span class="math-container">$(k\mid n)$</span> in terms of factorials and powers. Does anyone know how to analyze the sum further and perhaps find a good approximation?</p>
| Marko Riedel | 44,883 | <p>We get from first principles the generating function: for each <span class="math-container">$q$</span> from <span class="math-container">$1$</span> to <span class="math-container">$n$</span>, a geometric factor records the contribution of <span class="math-container">$q$</span>, with the exponent of <span class="math-container">$z$</span> counting how often <span class="math-container">$q$</span> occurs in the multiset:</p>
<p><span class="math-container">$$[z^k] \prod_{q=1}^n (1+qz+q^2z^2+q^3z^3+\cdots)
= [z^k] \prod_{q=1}^n \frac{1}{1-qz}
\\ = \frac{1}{n!} [z^k] \prod_{q=1}^n \frac{1}{1/q-z}
= (-1)^n \frac{1}{n!} [z^k] \prod_{q=1}^n \frac{1}{z-1/q}.$$</span></p>
<p>Now using partial fractions by residues we have</p>
<p><span class="math-container">$$(-1)^n \frac{1}{n!} [z^k]
\sum_{p=1}^n \frac{1}{z-1/p}
\prod_{q=1}^{p-1} \frac{1}{1/p-1/q}
\prod_{q=p+1}^n \frac{1}{1/p-1/q}
\\ = (-1)^n \frac{1}{n!} [z^k]
\sum_{p=1}^n \frac{1}{z-1/p}
\prod_{q=1}^{p-1} \frac{pq}{q-p}
\prod_{q=p+1}^n \frac{pq}{q-p}
\\ = (-1)^n \frac{1}{n!} [z^k]
\sum_{p=1}^n \frac{p^{n-1}}{z-1/p}
\prod_{q=1}^{p-1} \frac{q}{q-p}
\prod_{q=p+1}^n \frac{q}{q-p}
\\ = (-1)^n [z^k]
\sum_{p=1}^n \frac{p^{n-2}}{z-1/p}
\prod_{q=1}^{p-1} \frac{1}{q-p}
\prod_{q=p+1}^n \frac{1}{q-p}
\\ = (-1)^n [z^k]
\sum_{p=1}^n \frac{p^{n-2}}{z-1/p}
\frac{(-1)^{p-1}}{(p-1)!} \frac{1}{(n-p)!}
\\ = (-1)^n \frac{1}{n!} [z^k]
\sum_{p=1}^n \frac{p^{n-1}}{z-1/p} {n\choose p} (-1)^{p-1}
\\ = (-1)^n \frac{1}{n!} [z^k]
\sum_{p=1}^n \frac{p^{n}}{pz-1} {n\choose p} (-1)^{p-1}
\\ = (-1)^n \frac{1}{n!} [z^k]
\sum_{p=1}^n \frac{p^{n}}{1-pz} {n\choose p} (-1)^p
\\ = (-1)^n \frac{1}{n!}
\sum_{p=1}^n {n\choose p} (-1)^p p^{n+k}.$$</span></p>
<p>We may lower <span class="math-container">$p$</span> to zero and obtain</p>
<p><span class="math-container">$$\frac{1}{n!}
\sum_{p=0}^n {n\choose p} (-1)^{n-p} p^{n+k}
= {n+k \brace n}.$$</span></p>
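<p>The closed form can be checked against the defining sum by brute force over multisets; a Python sketch (function names are mine):</p>

```python
from itertools import combinations_with_replacement
from math import comb, factorial

def multiset_sum(k, n):
    """Sum of products i_1 * ... * i_k over 1 <= i_1 <= ... <= i_k <= n."""
    total = 0
    for combo in combinations_with_replacement(range(1, n + 1), k):
        prod = 1
        for i in combo:
            prod *= i
        total += prod
    return total

def stirling2(m, j):
    """Stirling number of the second kind via the alternating sum above."""
    return sum((-1) ** (j - p) * comb(j, p) * p ** m for p in range(j + 1)) // factorial(j)

# (k | n) = {n+k brace n} for small cases.
for n in range(1, 6):
    for k in range(1, 5):
        assert multiset_sum(k, n) == stirling2(n + k, n)
```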
|
3,511,696 | <p>First of all, I looked at <span class="math-container">$n=1$</span>; there we have <span class="math-container">$\epsilon<x_1<1$</span>. Then I started to think that <span class="math-container">$\sqrt[n]{x_n}$</span> lies in the interval <span class="math-container">$(\epsilon,1)$</span> and is monotonically increasing (and so converges). But this is not so clear to me, because looking at <span class="math-container">$n=2$</span> we have <span class="math-container">$\epsilon<\sqrt[2]{x_2}<2^k$</span>, and nothing guarantees me that this is in <span class="math-container">$(\epsilon,1)$</span>.</p>
| Mercy King | 23,304 | <p>For every positive integer <span class="math-container">$n$</span> the function <span class="math-container">$f_n$</span> defined by
<span class="math-container">$$f_n(x)=\sqrt[n]{x}$$</span>
is increasing over <span class="math-container">$(0,\infty)$</span>, therefore from the inequalities
<span class="math-container">$$
\epsilon <x_n<n^k,
$$</span>
we deduce that
<span class="math-container">$$
f_n(\epsilon)<f_n(x_n)<f_n(n^k)
$$</span>
That is
<span class="math-container">$$
\epsilon^{1/n}<\sqrt[n]{x_n}<n^{k/n}
$$</span>
Notice that
<span class="math-container">\begin{eqnarray}
\lim_{n\to\infty} \epsilon^{1/n}&=&\epsilon^0=1\\
\lim_{n\to\infty}n^{k/n}&=&\lim_{n\to\infty}e^{\frac{k\ln(n)}{n}}=e^0=1
\end{eqnarray}</span>
It follows from the Squeeze Theorem that
<span class="math-container">$$
\lim_{n\to\infty}\sqrt[n]{x_n}=1
$$</span></p>
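<p>The two bounding limits are easy to illustrate numerically (with, say, $\epsilon = 0.1$ and $k = 3$); a Python sketch:</p>

```python
eps, k = 0.1, 3
for n in (10, 100, 10_000, 1_000_000):
    lower = eps ** (1 / n)       # tends to 1 from below
    upper = n ** (k / n)         # tends to 1 from above (for n > 1)
    assert lower < 1 < upper

# Both bounds are within 1e-4 of 1 for n = 10^6, squeezing x_n^(1/n) toward 1.
assert abs(eps ** (1 / 1_000_000) - 1) < 1e-4
assert abs(1_000_000 ** (3 / 1_000_000) - 1) < 1e-4
```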
|
3,511,696 | <p>First of all, I looked at <span class="math-container">$n=1$</span>; there we have <span class="math-container">$\epsilon<x_1<1$</span>. Then I started to think that <span class="math-container">$\sqrt[n]{x_n}$</span> lies in the interval <span class="math-container">$(\epsilon,1)$</span> and is monotonically increasing (and so converges). But this is not so clear to me, because looking at <span class="math-container">$n=2$</span> we have <span class="math-container">$\epsilon<\sqrt[2]{x_2}<2^k$</span>, and nothing guarantees me that this is in <span class="math-container">$(\epsilon,1)$</span>.</p>
| Hagen von Eitzen | 39,174 | <p>Let <span class="math-container">$h>0$</span>. Suppose <span class="math-container">$\sqrt[n]{x_n}>1+h$</span> for infinitely many <span class="math-container">$n$</span>. Then for these <span class="math-container">$n$</span>, <span class="math-container">$$\begin{align}x_n-n^k&>(1+h)^n-n^k\\&\ge 1+nh+{n\choose 2}h^2+\cdots +{n\choose k+1}h^{k+1}-n^k.\end{align}$$</span>
The last expression, viewed as a function of <span class="math-container">$n$</span>, is a polynomial of degree <span class="math-container">$k+1$</span> with leading coefficient <span class="math-container">$\frac{h^{k+1}}{(k+1)!}>0$</span>. Hence for all big enough <span class="math-container">$n$</span> among our infinitely many <span class="math-container">$n$</span>, we have <span class="math-container">$x_n-n^k>0$</span>, contradiction.
We conclude that <span class="math-container">$\limsup\sqrt[n]{x_n}\le 1$</span>.</p>
<p>Can you see why also <span class="math-container">$\liminf\sqrt[n]{x_n}\ge 1$</span>?</p>
|
4,517,429 | <p>Let <span class="math-container">$ {\textstyle \{X_{1},\ldots ,X_{n},\ldots \}}$</span> be a sequence of independent random variables, each of which follows a Gamma distribution.</p>
<p>Consider the average of these random variables:</p>
<p><span class="math-container">$ {\displaystyle {\bar {X}}_{n}\equiv {\frac {X_{1}+\cdots +X_{n}}{n}}}$</span></p>
<p>Question: is the sum of independent Gamma-distributed random variables Gamma-distributed or normally distributed?</p>
<p>Here is the confusion: the sum of independent Gamma random variables should still be Gamma-distributed. On the other hand, the central limit theorem says that such a sum (suitably normalized) should approach a normal distribution. Which of these viewpoints is correct?</p>
| Henry | 6,460 | <p>Assuming the rate (or scale) parameters are all the same, then you are correct:</p>
<ul>
<li>the sum of independent gamma distributed random variables has a gamma distribution, with shape parameter equal to the sum of the shape parameters of the individual distributions</li>
</ul>
<p>But gamma distributions with a large shape parameter are close to normal distributions with the same mean and variance, and (after adjusting for mean and variance) get closer as the shape parameter increases. Here is a case where the rate is <span class="math-container">$1$</span> and the shape, mean and variance are all <span class="math-container">$100$</span>: the gamma density is in black and the normal density in red</p>
<p><a href="https://i.stack.imgur.com/A7CsE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/A7CsE.png" alt="enter image description here" /></a></p>
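<p>Both points are visible in a quick simulation with the standard library (parameters chosen arbitrarily): the empirical mean and variance of the sum match those of the single Gamma distribution with the summed shape parameter:</p>

```python
import random

def gamma_sum_sample(k, alpha, rng):
    """One draw of X_1 + ... + X_k with X_i ~ Gamma(shape=alpha, scale=1)."""
    return sum(rng.gammavariate(alpha, 1.0) for _ in range(k))

rng = random.Random(42)
k, alpha, reps = 20, 5.0, 5000
samples = [gamma_sum_sample(k, alpha, rng) for _ in range(reps)]
mean = sum(samples) / reps
var = sum((s - mean) ** 2 for s in samples) / reps

# Gamma(k * alpha, 1) has mean and variance both equal to k * alpha = 100,
# which is also (by construction) the approximating normal's mean/variance.
assert abs(mean - 100) < 1.0
assert abs(var - 100) < 10.0
```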
|
1,527,891 | <p>Determine (up to a constant multiplier) the polynomial with a maximum at $(-1,1)$, a minimum $(1,-1)$ and no other critical points.</p>
<p>The only thing I can think of is coming up with an equation with roots $1$ and $-1$ and then integrating it but I don't think that will work.</p>
| Balloon | 280,308 | <p><strong>Hint :</strong> try your idea: you know that the derivative of your polynomial is $k(x-1)(x+1)=k(x^2-1).$ Integrate this to get that your polynomial is $k(\frac{1}{3}x^3-x)+c.$ Then use the information about the values of your polynomial at $-1$ and $1$ to find $k$ and $c$, and finally get $P=\frac{1}{2}X^3-\frac{3}{2}X.$</p>
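<p>The final answer is quick to verify numerically; a Python sketch:</p>

```python
def P(x):
    """The polynomial from the hint, with k = 3/2 and c = 0."""
    return 0.5 * x**3 - 1.5 * x

def dP(x):
    return 1.5 * x**2 - 1.5      # = (3/2)(x - 1)(x + 1)

assert P(-1) == 1.0 and P(1) == -1.0       # prescribed values
assert dP(-1) == 0.0 and dP(1) == 0.0      # critical points at x = +-1
# dP has no other real roots, so there are no other critical points.
```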
|
3,566,603 | <p>I need a simple way to show <span class="math-container">$\mathbb R^2$</span> is not isomorphic to <span class="math-container">$\mathbb{R}[x]/(x^2)$</span>. Neither is an integral domain, and neither is a field, so I’m not sure how to go about it.</p>
| Vincent | 101,420 | <p>Everything you want to know about <span class="math-container">$\mathbb{R}^2$</span> can be understood through understanding the two elements <span class="math-container">$e_1 = (1, 0)$</span> and <span class="math-container">$e_2 = (0, 1)$</span>. So let's look at their properties. Both satisfy <span class="math-container">$e_i^2 = e_i$</span> and together they satisfy <span class="math-container">$e_1e_2 = 0$</span>. </p>
<p>One approach would be to show that no elements <span class="math-container">$a, b$</span>, with <span class="math-container">$a^2 =a, b^2 = b, ab = 0$</span> can exist in <span class="math-container">$\mathbb{R}[x]/(x^2)$</span>.</p>
|
3,566,603 | <p>I need a simple way to show <span class="math-container">$\mathbb R^2$</span> is not isomorphic to <span class="math-container">$\mathbb{R}[x]/(x^2)$</span>. Neither is an integral domain, and neither is a field, so I’m not sure how to go about it.</p>
| Andrea Mori | 688 | <p>I think that there are several ways to see this.</p>
<p>One way is to look at ideals: you may recall that given a quotient <span class="math-container">$A/I$</span> there is a one-to-one correspondence between ideals of <span class="math-container">$A/I$</span> and ideals of <span class="math-container">$A$</span> containing <span class="math-container">$I$</span>, and furthermore that prime ideals correspond to prime ideals (this is a very standard fact, so I will not reproduce the proof here).</p>
<p>It follows that the ideal <span class="math-container">$(\bar x)\subset{\Bbb R}[x]/(x^2)$</span> is the only prime ideal in that quotient since <span class="math-container">$(x)$</span> is the only prime ideal containing <span class="math-container">$(x^2)$</span> in <span class="math-container">${\Bbb R}[x]$</span>.</p>
<p>On the other hand <span class="math-container">${\Bbb R}\times{\Bbb R}$</span> has (at least) two prime ideals, namely <span class="math-container">$P_1={\Bbb R}\times(0)$</span> and <span class="math-container">$P_2=(0)\times{\Bbb R}$</span>.</p>
<p>Thus the two rings cannot be isomorphic.</p>
|
13,843 | <p>We have a natural number $n>1$. We want to determine whether there exist
natural numbers $a, k>1$ such that $n = a^k$. </p>
<p>Please suggest a polynomial-time algorithm.</p>
| Pace Nielsen | 3,199 | <p>This can be done in "essentially linear time." Check out Daniel Bernstein's website: <a href="http://cr.yp.to/arith.html">http://cr.yp.to/arith.html</a></p>
<p>Especially note his papers labeled [powers] and [powers2].</p>
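<p>For intuition (this is far simpler and slower than Bernstein's essentially-linear-time algorithms), here is a straightforward polynomial-time Python sketch: for each exponent $k \le \log_2 n$, binary-search for an integer $k$-th root:</p>

```python
def is_perfect_power(n):
    """Return (a, k) with a**k == n and a, k > 1, or None if no such pair exists."""
    if n < 4:
        return None
    k = 2
    while (1 << k) <= n:                       # only k <= log2(n) can work, since a >= 2
        lo, hi = 2, 1 << (n.bit_length() // k + 1)
        while lo <= hi:                        # binary search for an integer k-th root
            mid = (lo + hi) // 2
            m = mid ** k
            if m == n:
                return (mid, k)
            if m < n:
                lo = mid + 1
            else:
                hi = mid - 1
        k += 1
    return None

assert is_perfect_power(1024) == (32, 2)       # smallest exponent found first
assert is_perfect_power(97) is None
```

There are O(log n) exponents to try, each needing O(log n) search steps with big-integer powering, so the total work is polynomial in the bit length of n.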
|
1,212,336 | <p>Let R be the set of all real numbers. Is $\{\mathbb R^+,\mathbb R^−,\{0\}\}$ a partition of $\mathbb R$? Explain your answer.</p>
<p>My answer is no because of $\{0\}$. I am confused by $\{0\}$. Please help.</p>
| N. S. | 9,176 | <p><strong>Hint 1:</strong> $\int_1^n \log x \, dx =\sum_{k=2}^n \int_{k-1}^k \log x \, dx $</p>
<p><strong>Hint 2:</strong> If $x \in [k-1,k]$ then $\log(x) \leq \log(k)$.</p>
|
3,959,178 | <p>Let <span class="math-container">$G$</span> be a group and <span class="math-container">$\mathbb k$</span> a field. Then, let <span class="math-container">$\mathfrak 1$</span> be the trivial <span class="math-container">$\mathbb k G$</span>-module.</p>
<p>According to Lorenz's <em>A Tour of Representation Theory</em>, "it turns out <span class="math-container">$\mathfrak 1$</span> has great significance: if it is projective as a <span class="math-container">$\mathbb kG$</span>-module, then every <span class="math-container">$\mathbb kG$</span>-module is projective", and he goes on to reference Maschke's Theorem.</p>
<p>I can understand the following claim, that a ring <span class="math-container">$\mathbb kG$</span> is semisimple if and only if all its modules are projective, so Maschke's Theorem does indeed show that <span class="math-container">$\mathfrak 1$</span> is projective for a semisimple <span class="math-container">$\mathbb k G$</span>. I can also see that if <span class="math-container">$\mathfrak 1$</span> is projective, then it is direct summand of some free module.</p>
<p>However, I don't understand how the converse implication (i.e. <span class="math-container">$\mathfrak 1$</span> projective implies <span class="math-container">$\mathbb kG$</span> semisimple) suggested by Lorenz's remark is shown. It seems to suggest that we want <span class="math-container">$\mathfrak 1$</span> to be the direct summand of <span class="math-container">$\mathbb kG$</span> as a <span class="math-container">$\mathbb kG$</span>-module, but I don't quite see how this is done and even if it is, how the remark is proven.</p>
| Jendrik Stelzner | 300,783 | <p>One can see in the following way that every <span class="math-container">$G$</span>-representation (over <span class="math-container">$\mathbb{k}$</span>) is projective.</p>
<p><strong>Step 1.</strong>
For every two representations <span class="math-container">$V$</span> and <span class="math-container">$W$</span> of <span class="math-container">$G$</span>, the vector space <span class="math-container">$\operatorname{Hom}_{\mathbb{k}}(V,W)$</span> is again a representation of <span class="math-container">$G$</span> via
<span class="math-container">$$
(g \cdot f)(v) = g \cdot f( g^{-1} \cdot v )
$$</span>
for all <span class="math-container">$g \in G$</span>, <span class="math-container">$f \in \operatorname{Hom}_{\mathbb{k}}(V, W)$</span> and <span class="math-container">$v \in V$</span>.
A linear map <span class="math-container">$f$</span> from <span class="math-container">$V$</span> to <span class="math-container">$W$</span> is <span class="math-container">$G$</span>-invariant if and only if it is a homomorphism of representations, i.e.
<span class="math-container">$$
\operatorname{Hom}_{\mathbb{k}}(V,W)^G
=
\operatorname{Hom}_G(V,W) \,.
$$</span></p>
<p><strong>Step 2.</strong>
We have for every <span class="math-container">$\mathbb{k}$</span>-vector space <span class="math-container">$V$</span> the natural isomorphism of vector spaces
<span class="math-container">$$
\operatorname{Hom}_{\mathbb{k}}(\mathbb{k}, V) \longrightarrow V \,,
\quad
f \longmapsto f(1) \,.
$$</span>
Suppose now that <span class="math-container">$V$</span> is a representation of <span class="math-container">$G$</span>.
The above isomorphism of vector spaces is then an isomorphism of <span class="math-container">$G$</span>-representations.
It therefore restricts to a natural isomorphism of vector spaces
<span class="math-container">$$
\operatorname{Hom}_G(\mathbb{k}, V)
\longrightarrow
V^G \,.
$$</span></p>
<p><strong>Step 3.</strong>
Suppose now that the trivial representation <span class="math-container">$\mathbb{k}$</span> is projective.
This means that the functor
<span class="math-container">$$
\operatorname{Hom}_G(\mathbb{k}, -)
\colon
\mathbf{Rep}(G)
\longrightarrow
\mathbf{Vect}(\mathbb{k})
$$</span>
is exact.
It follows from the (natural) isomorphism <span class="math-container">$\operatorname{Hom}_G(\mathbb{k}, -) \cong (-)^G$</span> that the functor
<span class="math-container">$$
(-)^G
\colon
\mathbf{Rep}(G)
\longrightarrow
\mathbf{Vect}(\mathbb{k})
$$</span>is exact.</p>
<p><strong>Step 4.</strong>
Let now <span class="math-container">$V$</span> be an arbitrary representations of <span class="math-container">$G$</span>.
The functor <span class="math-container">$\operatorname{Hom}_G(V, -)$</span> equals the composition
<span class="math-container">$$
\mathbf{Rep}(G)
\xrightarrow{ \enspace \operatorname{Hom}_{\mathbb{k}}(V, -) \enspace }
\mathbf{Rep}(G)
\xrightarrow{ \enspace (-)^G \enspace }
\mathbf{Vect}(\mathbb{k}) \,.
$$</span>
Both of these functors are exact, whence <span class="math-container">$\operatorname{Hom}_G(V, -)$</span> is exact.
This shows that <span class="math-container">$V$</span> is projective.</p>
<hr />
<p>There is also another way to look at this problem:
The group structure of <span class="math-container">$G$</span> gives its group algebra <span class="math-container">$\mathbb{k}G$</span> the structure of a Hopf algebra.</p>
<blockquote>
<p><strong>Partial Definition.</strong>
A Hopf algebra over <span class="math-container">$\mathbb{k}$</span> is a <span class="math-container">$\mathbb{k}$</span>-algebra <span class="math-container">$H$</span> together with homomorphisms of <span class="math-container">$\mathbb{k}$</span>-algebras
<span class="math-container">$$
\Delta
\colon
H
\to
H \otimes H
\,,
\quad
\varepsilon
\colon
H
\to
\mathbb{k}
$$</span>
such that certain conditions hold (which I won’t mention here).
The map <span class="math-container">$\varepsilon$</span> is the <strong>counit</strong> of <span class="math-container">$H$</span>, and the map <span class="math-container">$\Delta$</span> (which we won’t care about in the following) is the <strong>comultiplication</strong> of <span class="math-container">$H$</span>.</p>
</blockquote>
<p>Given a Hopf algebra <span class="math-container">$H$</span>, the ground field <span class="math-container">$\mathbb{k}$</span> becomes an <span class="math-container">$H$</span>-module via
<span class="math-container">$$
x \cdot y
=
\varepsilon(x) y
$$</span>
for all <span class="math-container">$x \in H$</span> and <span class="math-container">$y \in \mathbb{k}$</span>.
The counit map <span class="math-container">$\varepsilon$</span> becomes in this way a homomorphism of <span class="math-container">$H$</span>-modules from <span class="math-container">$H$</span> to <span class="math-container">$\mathbb{k}$</span>.</p>
<blockquote>
<p><strong>Example.</strong>
In the case of <span class="math-container">$H = \mathbb{k}G$</span> the maps <span class="math-container">$\Delta$</span> and <span class="math-container">$\varepsilon$</span> are given on the basis <span class="math-container">$G$</span> of <span class="math-container">$\mathbb{k}G$</span> by
<span class="math-container">$$
\Delta(g)
=
g \otimes g \,,
\quad
\varepsilon(g)
=
1
$$</span>
for every <span class="math-container">$g \in G$</span>.
The resulting <span class="math-container">$\mathbb{k}G$</span>-module structure on <span class="math-container">$\mathbb{k}$</span> is given by
<span class="math-container">$$
g \cdot y
=
\varepsilon(g) y
=
1 \cdot y
=
y
$$</span>
for all <span class="math-container">$g \in G$</span> and <span class="math-container">$y \in \mathbb{k}$</span>.
This is precisely the <span class="math-container">$\mathbb{k}G$</span>-module structure on <span class="math-container">$\mathbb{k}$</span> that corresponds to the trivial action of <span class="math-container">$G$</span> on <span class="math-container">$\mathbb{k}$</span>.</p>
</blockquote>
<p>There is now a generalization of Maschke’s theorem to Hopf algebras.</p>
<blockquote>
<p><strong>Theorem (Maschke’s theorem for Hopf algebras).</strong>
Let <span class="math-container">$H$</span> be a Hopf algebra.
The following conditions on <span class="math-container">$H$</span> are equivalent.</p>
<ol>
<li><span class="math-container">$H$</span> is semisimple.</li>
<li>There exists an element <span class="math-container">$a$</span> of <span class="math-container">$H$</span> such that
<ul>
<li><span class="math-container">$x \cdot a = \varepsilon(x) a$</span> for all <span class="math-container">$x \in H$</span> (invariance), and</li>
<li><span class="math-container">$\varepsilon(a) = 1$</span> (normalization).</li>
</ul>
</li>
<li>There exists an <span class="math-container">$H$</span>-submodule <span class="math-container">$A$</span> of <span class="math-container">$H$</span> with <span class="math-container">$H = A \oplus \ker(\varepsilon)$</span>.</li>
<li>The kernel of <span class="math-container">$\varepsilon$</span> (which is an <span class="math-container">$H$</span>-submodule of <span class="math-container">$H$</span> because <span class="math-container">$\varepsilon$</span> is a homomorphism of <span class="math-container">$H$</span>-modules) is a direct summand of <span class="math-container">$H$</span>.</li>
<li>The <span class="math-container">$H$</span>-module <span class="math-container">$\mathbb{k}$</span> is projective.</li>
</ol>
</blockquote>
<p>One shows this by proving the implications
<span class="math-container">$$
1 \implies 5 \implies 4 \implies 3 \implies 2 \implies 1 \,.
$$</span></p>
<blockquote>
<p><strong>Example.</strong>
Consider again <span class="math-container">$H = \mathbb{k}G$</span>.
Let <span class="math-container">$a$</span> be an element of <span class="math-container">$H$</span> as in characterization 2 of the above theorem.
This element is given by a linear combination <span class="math-container">$a = \sum_{g \in G} a_g g$</span>.
The condition <span class="math-container">$x \cdot a = \varepsilon(x) a$</span> holds for every <span class="math-container">$x \in \mathbb{k}G$</span> if and only if <span class="math-container">$g \cdot a = \varepsilon(g) a$</span> for every group element <span class="math-container">$g$</span> of <span class="math-container">$G$</span>.
We have <span class="math-container">$\varepsilon(g) = 1$</span>, so this condition is furthermore equivalent to <span class="math-container">$g \cdot a = a$</span> for every <span class="math-container">$g \in G$</span>.
This means that all coefficients <span class="math-container">$a_g$</span> of <span class="math-container">$a$</span> have to be equal.</p>
<p>If <span class="math-container">$G$</span> is infinite, then this means that <span class="math-container">$a = 0$</span> (because <span class="math-container">$a$</span> can’t have infinitely many nonvanishing coefficients).
But then <span class="math-container">$\varepsilon(a) = 0$</span>, contradicting <span class="math-container">$\varepsilon(a) = 1$</span>.
So if <span class="math-container">$G$</span> is infinite, then no such element <span class="math-container">$a$</span> exists.</p>
<p>If <span class="math-container">$G$</span> is finite, then <span class="math-container">$a$</span> can be any element of the form <span class="math-container">$a = C \sum_{g \in G} g$</span> for some scalar <span class="math-container">$C$</span> in <span class="math-container">$\mathbb{k}$</span>.
But we have
<span class="math-container">$$
1
=
\varepsilon(a)
=
C \sum_{g \in G} \varepsilon(g)
=
C \cdot \lvert G \rvert.
$$</span>
We hence find that no such element <span class="math-container">$a$</span> exists if the characteristic of <span class="math-container">$\mathbb{k}$</span> divides <span class="math-container">$\lvert G \rvert$</span>, and otherwise <span class="math-container">$C = 1 / \lvert G \rvert$</span> and thus
<span class="math-container">$$
a
=
\frac{1}{\lvert G \rvert} \sum_{g \in G} g \,.
$$</span>
This is the “averaging element” (hence the letter <span class="math-container">$a$</span>) used in the usual proof of Maschke’s theorem for groups.</p>
</blockquote>
<p>By actually going through the full proof of the above theorem, one sees that when <span class="math-container">$\mathbb{k}$</span> is projective as an <span class="math-container">$H$</span>-module, then the surjective homomorphism of <span class="math-container">$H$</span>-modules <span class="math-container">$\varepsilon$</span> from <span class="math-container">$H$</span> to <span class="math-container">$\mathbb{k}$</span> splits.
Such a split gives us a homomorphism of <span class="math-container">$H$</span>-modules from <span class="math-container">$\mathbb{k}$</span> to <span class="math-container">$H$</span>, which in turn corresponds to an element of <span class="math-container">$H$</span>.
This element is then the “averaging element” <span class="math-container">$a$</span>.</p>
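<p>To make the averaging element concrete, here is a small sketch of my own (not from the text above): it models the group algebra <span class="math-container">$\mathbb{Q}[\mathbb{Z}/3]$</span> with dictionaries mapping group elements to rational coefficients, builds <span class="math-container">$a = \frac{1}{|G|}\sum_{g \in G} g$</span>, and checks the two defining properties, <span class="math-container">$g \cdot a = a$</span> and <span class="math-container">$\varepsilon(a) = 1$</span>.</p>

```python
from fractions import Fraction

n = 3                                  # the cyclic group Z/3 under addition mod 3
G = list(range(n))

def act(g, elem):
    """Left multiplication by the group element g on an element of k[G],
    represented as a dict {group element: coefficient}."""
    return {(g + h) % n: c for h, c in elem.items()}

def counit(elem):
    """The counit sends every group element to 1, so it sums the coefficients."""
    return sum(elem.values(), Fraction(0))

# the averaging element a = (1/|G|) * sum of all group elements
a = {g: Fraction(1, n) for g in G}

assert counit(a) == 1                        # normalization: epsilon(a) = 1
assert all(act(g, a) == a for g in G)        # invariance: g . a = a for all g
```

<p>Over a field whose characteristic divides <span class="math-container">$|G|$</span> (here, characteristic 3) the coefficient <span class="math-container">$1/|G|$</span> would not exist, matching the obstruction discussed above.</p>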
|
656,701 | <p>Suppose we have:</p>
<p>$A = \{(x,v,w):x+v=w\}$</p>
<p>$B = \{(x,v):x=v\}$</p>
<p>$C = \{(w,u):\exists x 2x=w\}$</p>
<p>Can we say that $C = A \cup B$?</p>
| Unwisdom | 124,220 | <p>I don't really understand what you are asking here. </p>
<p>The usual way to define a set with this kind of "set builder notation" is to specify a universe $U$, and then use the following form:
$$
\{x : \phi(x)\}
$$
where $\phi$ is a formula with one free variable that is either true or false at each $x\in U$. </p>
<p>(If you are using higher order logic, or you just want to make the universe explicit, you can also write $\{x\in U : \phi(x)\}$.)</p>
<p>So, the thing about the formula is that it has to be a well formed formula (wff). This means that it is either an atomic formula (such as an equation or inequality), or is built up recursively from atomic formulae using appropriate logical operators ($\wedge,\vee,\neg,\to,\forall,\exists$, for instance.)</p>
<p>In your example, it is not clear what the universe is, and the expressions on the right of the "such that" delimiter are not wffs. </p>
<p>Now, I guess that for the first set $A$, you intend something like:
$$A=\{\langle x,v,w\rangle : x+v-w=0\}.$$
Note that this has a logical formula in it: $\phi(x,v,w)\equiv x+v-w=0$ is either true or false depending on the values of $x$, $v$, and $w$ substituted into it: $\phi(0,0,0)$ is true and $\phi(1,1,1)$ is false. In this case, $A$ is a plane in $\mathbb{R}^{3}$. </p>
<p>So here's where it gets hard to follow. If we assume that $B=\{\langle x,v\rangle:x-v=0\}$, then $B$ is a line in $\mathbb{R}^{2}$. And $C$ is even harder to interpret, since you've left out the free variable to the left of the "such that" delimiter. The logical formula has only one free variable, so I'd guess that it is a subset of some one dimensional universe ($\mathbb{R}$ perhaps?), but that isn't necessarily the case. </p>
<p>So, what is the union of a plane and a line? Usually, before you can consider the union of two sets, you must make sure that they are in the same universe. In this case, the universes are probably $\mathbb{R}^{3}$ and $\mathbb{R}^{2}$. That doesn't mean that we can't take the union (we can always define a bigger universe that contains both $\mathbb{R}^{3}$ and $\mathbb{R}^{2}$), but it does make it hard to see why you would want to.</p>
|
907,893 | <p>I wanted to know about this convention :</p>
<p>By rate of growth of R, we normally mean : (change in R) / (change in Time)</p>
<p>But the rate of growth of a geometric sequence "a(1+r)^n" is r, which I find strange.</p>
<p>I am kind of confused; can anyone clear this up? </p>
| dylan7 | 163,751 | <p>Since geometric series, like their continuous partner the exponential, have varying rates of change, it is nice to find something constant within them that we can call a rate. The actual rate of change is the derivative $ df/dx $, which is discrete in the sense of a series, and it changes constantly. The $r$ in the series is not the rate of change; it is called the rate of growth because $1+r$ is what $ a $ is getting constantly multiplied by. These are two different concepts. And since the derivative is somewhat awkward for a discrete series, the rate of growth tends to be quoted instead, but they are in no way the same thing. It gets confusing with the abuse of terminology.</p>
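<p>A small numeric sketch of my own to illustrate the distinction: for the sequence $a(1+r)^n$, the successive differences (the discrete rate of change) keep changing, while the ratio of consecutive terms is the constant $1+r$; the $r$ inside that constant is what gets called the rate of growth.</p>

```python
a, r = 100.0, 0.10          # hypothetical starting value and growth rate
terms = [a * (1 + r) ** k for k in range(6)]

diffs  = [terms[k + 1] - terms[k] for k in range(5)]   # discrete rate of change
ratios = [terms[k + 1] / terms[k] for k in range(5)]   # growth factor

# the differences vary ...
assert len({round(d, 6) for d in diffs}) > 1
# ... but the ratio is always 1 + r
assert all(abs(q - (1 + r)) < 1e-12 for q in ratios)
```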
|
109,989 | <p>A problem in my book is:</p>
<blockquote>
<p>Let the edges of $K_7$ be colored with the colors red and blue. Show that there are at least four subgraphs $K_3$ with all three edges the same color (monochromatic triangles). Also show that equality can occur.</p>
</blockquote>
<p>By the <a href="http://en.wikipedia.org/wiki/Theorem_on_friends_and_strangers">theorem on friends and strangers</a> it is clear that 1 monochromatic triangle exists. Deleting a vertex of that triangle and applying the theorem again yields another. Why are two more guaranteed?</p>
<p>As an aside, a result in my book states that the number of monochromatic triangles in a 2-colored $K_n$ is at least $\binom{n}{3}-\lfloor \frac{n}{2}\lfloor (\frac{n-1}{2})^2 \rfloor \rfloor $. I want to demonstrate my solution without applying this result though as it appears later on in the book.</p>
<p>Thank you for your time.</p>
| Gerry Myerson | 8,269 | <p>A "biangle" is a triple of vertices $(a,b,c)$ where the edge joining $a$ to $b$ is not the same color as the edge joining $b$ to $c$. We call $b$ the "apex" of the biangle. </p>
<p>A vertex with 3 red and 3 blue edges is the apex of 9 biangles; any other color distribution leads to fewer biangles. Thus, the whole graph has at most 63 biangles. </p>
<p>If a triangle is not monochromatic, it has exactly 2 biangles. So the graph has at most 31 non-monochromatic triangles. </p>
<p>But the graph has 35 triangles, so it has at least 4 monochromatic triangles. </p>
|
109,989 | <p>A problem in my book is:</p>
<blockquote>
<p>Let the edges of $K_7$ be colored with the colors red and blue. Show that there are at least four subgraphs $K_3$ with all three edges the same color (monochromatic triangles). Also show that equality can occur.</p>
</blockquote>
<p>By the <a href="http://en.wikipedia.org/wiki/Theorem_on_friends_and_strangers">theorem on friends and strangers</a> it is clear that 1 monochromatic triangle exists. Deleting a vertex of that triangle and applying the theorem again yields another. Why are two more guaranteed?</p>
<p>As an aside, a result in my book states that the number of monochromatic triangles in a 2-colored $K_n$ is at least $\binom{n}{3}-\lfloor \frac{n}{2}\lfloor (\frac{n-1}{2})^2 \rfloor \rfloor $. I want to demonstrate my solution without applying this result though as it appears later on in the book.</p>
<p>Thank you for your time.</p>
| Josias Oberlin | 829,443 | <p>Gerry Myerson's answer is very good. However, I wanted to offer a different, more simple-minded answer.</p>
<p>To begin with, my answer requires that I show every <span class="math-container">$K_6$</span> has two monochromatic triangles. The proof is as follows:</p>
<p>Suppose there is only one monochromatic triangle. As you noted, we know that every <span class="math-container">$K_6$</span> must have at least one monochromatic triangle. For ease of reference, I call the vertices in this monochromatic triangle <span class="math-container">$A$</span>, <span class="math-container">$B$</span>, and <span class="math-container">$C$</span>; I call the other three vertices <span class="math-container">$D$</span>, <span class="math-container">$E$</span>, and <span class="math-container">$F$</span>. Suppose, without loss of generality, that the triangle formed by <span class="math-container">$A$</span>, <span class="math-container">$B$</span>, and <span class="math-container">$C$</span> is red. At most one of the edges connecting <span class="math-container">$D$</span> to <span class="math-container">$A$</span>, <span class="math-container">$B$</span>, and <span class="math-container">$C$</span> can be red; otherwise we must have at least one more red monochromatic triangle. We can reason similarly for <span class="math-container">$E$</span> and <span class="math-container">$F$</span>. Now take <span class="math-container">$D$</span> and <span class="math-container">$E$</span>, and notice that, by the Pigeonhole Principle, there must be at least one vertex of <span class="math-container">$A$</span>, <span class="math-container">$B$</span>, and <span class="math-container">$C$</span> such that <span class="math-container">$D$</span> and <span class="math-container">$E$</span> are both connected by a blue edge to that vertex. If <span class="math-container">$D$</span> and <span class="math-container">$E$</span> are connected by a blue edge, then we have a monochromatic triangle, so they must be connected by a red edge. 
We can reason similarly for the edge connecting <span class="math-container">$D$</span> and <span class="math-container">$F$</span> and the edge connecting <span class="math-container">$E$</span> and <span class="math-container">$F$</span>. But then <span class="math-container">$D$</span>, <span class="math-container">$E$</span>, and <span class="math-container">$F$</span> must form a red triangle. So we have a contradiction and there must be more than one monochromatic triangle in <span class="math-container">$K_6$</span>.</p>
<p>Now I turn to the claim involving <span class="math-container">$K_7$</span>.</p>
<p>For ease of reference, I refer to the seven vertices in <span class="math-container">$K_7$</span> by the letters <span class="math-container">$A$</span>, <span class="math-container">$B$</span>, <span class="math-container">$C$</span>, <span class="math-container">$D$</span>, <span class="math-container">$E$</span>, <span class="math-container">$F$</span>, and <span class="math-container">$G$</span>. The subgraph of <span class="math-container">$K_7$</span> made up of <span class="math-container">$A$</span>, <span class="math-container">$B$</span>, <span class="math-container">$C$</span>, <span class="math-container">$D$</span>, <span class="math-container">$E$</span>, and <span class="math-container">$F$</span> must have two monochromatic triangles. Without loss of generality, suppose <span class="math-container">$A$</span> is a vertex in one of the monochromatic triangles of the subgraph with vertices <span class="math-container">$A$</span>, <span class="math-container">$B$</span>, <span class="math-container">$C$</span>, <span class="math-container">$D$</span>, <span class="math-container">$E$</span>, and <span class="math-container">$F$</span>. 
Then the subgraph made up of <span class="math-container">$B$</span>, <span class="math-container">$C$</span>, <span class="math-container">$D$</span>, <span class="math-container">$E$</span>, <span class="math-container">$F$</span>, and <span class="math-container">$G$</span> either has a monochromatic triangle that wasn't in the subgraph made up of <span class="math-container">$A$</span>, <span class="math-container">$B$</span>, <span class="math-container">$C$</span>, <span class="math-container">$D$</span>, <span class="math-container">$E$</span>, and <span class="math-container">$F$</span>, <em>or</em> its two monochromatic triangles lie entirely in the subgraph consisting of <span class="math-container">$B$</span>, <span class="math-container">$C$</span>, <span class="math-container">$D$</span>, <span class="math-container">$E$</span>, and <span class="math-container">$F$</span>, in which case the subgraph made up of <span class="math-container">$A$</span>, <span class="math-container">$B$</span>, <span class="math-container">$C$</span>, <span class="math-container">$D$</span>, <span class="math-container">$E$</span>, and <span class="math-container">$F$</span> has at least three monochromatic triangles. Either way, this means that <span class="math-container">$K_7$</span> has at least three monochromatic triangles. If <span class="math-container">$K_7$</span> had only three monochromatic triangles, then, since three triangles have nine vertex slots among seven vertices, by the Pigeonhole Principle at least one vertex must be in two of them; the subgraph of <span class="math-container">$K_7$</span> consisting of the six vertices other than this vertex would then contain at most one of the three, contradicting the fact that every <span class="math-container">$K_6$</span> must have at least two monochromatic triangles. So <span class="math-container">$K_7$</span> must have at least four monochromatic triangles.</p>
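<p>The lemma that every 2-coloring of <span class="math-container">$K_6$</span> has at least two monochromatic triangles can also be confirmed by exhaustive search, since <span class="math-container">$K_6$</span> has only <span class="math-container">$2^{15} = 32768$</span> edge colorings; here is a brute-force sketch of my own.</p>

```python
from itertools import combinations

V = range(6)
edges = list(combinations(V, 2))               # 15 edges
eidx = {e: i for i, e in enumerate(edges)}
triangles = [tuple(eidx[pair] for pair in combinations(t, 2))
             for t in combinations(V, 3)]      # 20 triangles as edge-index triples

def mono_count(mask):                          # bit i of mask = color of edge i
    return sum(1 for i, j, k in triangles
               if (mask >> i & 1) == (mask >> j & 1) == (mask >> k & 1))

least = min(mono_count(m) for m in range(1 << 15))
assert least == 2       # matches the lemma (and Goodman's formula for n = 6)
```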
|
1,290,516 | <p>Find the values of $m$ if the line $y=mx+2$ is a tangent to the curve $x^2-2y^2=1$.</p>
<p>My working:</p>
<p>First we solve $x^2-2y^2=1$ for $y$, so that we can differentiate to get the gradient. We get $y^2=\frac{1}{2}x^2-\frac{1}{2}\implies y=\pm\sqrt{\frac{1}{2}x^2-\frac{1}{2}}$.</p>
<p>We take the positive one for demonstration<br>
$\frac{dy}{dx}=\frac{1}{2}x(\frac{1}{2}x^2-\frac{1}{2})^{-\frac{1}{2}}=\frac{x}{2\sqrt{\frac{1}{2}x^2-\frac{1}{2}}}$</p>
<p>$\implies(1-2m^2)x^2=-2m^2$</p>
<p>Since the tangent touches the curve, we can make $x^2-2(mx+2)^2=1$, we then get $(1-2m^2)x^2=9+8mx$</p>
<p>$\implies(1-2m^2)x^2=-2m^2$ and $(1-2m^2)x^2=9+8mx$ are two equations with two unknowns, then we should be able to find the values of $m$, but I couldn't find any easy way to solve those 2 simultaneous equations. Is there any easier method?</p>
<p>I tried solving $9+8mx=-2m^2$ but we still have two unknowns in one equation?</p>
<p>Also, if we don't use those two simultaneous equations, can we solve this question with a different method?</p>
<p>I am trying to solve WITHOUT implicit differentiation.</p>
<p>Many thanks for the help!</p>
| Ross Millikan | 1,827 | <p>You have $$(1-2m^2)x^2=-2m^2\\(1-2m^2)x^2=9+8mx\\-2m^2=9+8mx\\x=\frac{-2m^2-9}{8m}\\x^2=\frac{-2m^2}{1-2m^2}\\\frac{4m^4+36m^2+81}{64m^2}=\frac{-2m^2}{1-2m^2}$$Now you can clear denominators and solve the resulting polynomial equation in $m^2$ (it is cubic in $m^2$, but it factors).</p>
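<p>As a cross-check of my own (not part of the answer): substituting $y = mx + 2$ into $x^2 - 2y^2 = 1$ gives the quadratic $(1-2m^2)x^2 - 8mx - 9 = 0$, and tangency means this quadratic has a double root, i.e. zero discriminant. That route gives $m^2 = 9/2$ directly, and suggests that the root $m^2 = 3/2$ of the equation above is extraneous (introduced by squaring the slope condition).</p>

```python
import math

def tangency_gap(m):
    # substituting y = m x + 2 into x^2 - 2 y^2 = 1 gives
    # (1 - 2 m^2) x^2 - 8 m x - 9 = 0; tangency <=> zero discriminant
    return (8 * m) ** 2 + 4 * (1 - 2 * m * m) * 9

m = 3 / math.sqrt(2)                    # candidate slope, m^2 = 9/2
assert abs(tangency_gap(m)) < 1e-9
assert abs(tangency_gap(-m)) < 1e-9

x = 8 * m / (2 * (1 - 2 * m * m))       # the double root of the quadratic
y = m * x + 2
assert abs(x * x - 2 * y * y - 1) < 1e-9   # the tangency point lies on the hyperbola

# the root m^2 = 3/2 from the slope-matching route is extraneous:
assert tangency_gap(math.sqrt(1.5)) > 0    # positive discriminant: two crossings
```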
|
521,740 | <p>$x$,$y$ are real numbers satisfying $(x-1)^{2}+4y^{2}=4$<br>
find the maximum of $xy$ and justify it without calculus.<br>
Does there exist a tricky solution using elementary inequalities (AM-GM or Cauchy-Schwarz) ?</p>
<p>I tried and got it's when $x=\dfrac{3+\sqrt{33}}{4}$</p>
| Betty Mock | 89,003 | <p>This is an ellipse centered at (1,0): dividing by 4, the constraint reads $(x-1)^2/4 + y^2 = 1$. If you let $x-1 = 2\cos u$ and $y = \sin u$, then $xy = (x-1)y + y = 2(\cos u)(\sin u) + \sin u = \sin 2u + \sin u$.</p>
<p>Since $\sin 2u < 0$ for $\pi/2 < u <\pi$ and $\sin u \le 0$ for $\pi \le u \le 2\pi$, the value of $\sin 2u + \sin u$ never exceeds $1$ outside the first quadrant, while it already exceeds $1$ at $u = \pi/4$; so the maximum is going to occur in the first quadrant. Since both $\sin u$ and $\sin 2u$ are increasing on $0 < u < \pi/4$, the maximum must occur between $\pi/4$ and $\pi/2$.</p>
<p>To locate the maximum on $(\pi/4, \pi/2)$, note that $\sin 2u + \sin u$ is stationary where $2\cos 2u + \cos u = 0$, i.e. where $4\cos^2 u + \cos u - 2 = 0$, so $\cos u = \frac{-1+\sqrt{33}}{8}$.</p>
<p>Going back to your original curve, $x = 1 + 2\cos u = \frac{3+\sqrt{33}}{4}$, which agrees with the value found in the question. You are left only to compute $y = \sin u$ and do the multiplication.</p>
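<p>A numeric cross-check of my own (assuming the constraint is the ellipse $(x-1)^2 + 4y^2 = 4$, parametrized by $x = 1 + 2\cos u$, $y = \sin u$): a fine grid search recovers the asker's critical value $x = \frac{3+\sqrt{33}}{4}$.</p>

```python
import math

N = 400_000
f = lambda u: (1 + 2 * math.cos(u)) * math.sin(u)  # xy with x = 1 + 2cos u, y = sin u
best_u = max((2 * math.pi * k / N for k in range(N + 1)), key=f)

x_best = 1 + 2 * math.cos(best_u)
assert abs(x_best - (3 + math.sqrt(33)) / 4) < 1e-3     # the asker's value
c = math.cos(best_u)
assert abs(4 * c * c + c - 2) < 1e-3    # stationarity of sin 2u + sin u
```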
|
1,131,622 | <p>The question itself is a very easy one:<br/></p>
<blockquote>
<p>Somebody has got two kids, one of whom is a girl. Then what's the probability that he's got <strong>at least</strong> one boy?</p>
</blockquote>
<p>My answer is that, since he's already got a girl, then "he's got at least one boy" amounts to "the other kid is a boy", whose probability is apparently $\frac{1}{2}$.<br/>
But my friends argue that the probability should be $\frac23$: they say this is a binomial distribution, and since all the possible cases are (girl,girl), (girl,boy), (boy,girl), the probability is two cases out of three and is thus $\frac23$.<br/>
But I think this is totally unacceptable. I don't think it is a binomial distribution at all, at least not in the way my friends explained it to me. However, I just can't dissuade them from their opinion, nor can I prove that I am wrong.<br/>
So what on earth is the probability? and why? Any help is appreciated. Thanks in advance.<hr/>
In particular, can anybody show why <strong>my</strong> explanation is wrong? Isn't whether the other kid is a boy or a girl a 50/50 event?
<hr/>
EDIT:<br/>
Thanks for all the help you provided for me, and special thanks will go to @HammyTheGreek and @KSmarts, who have made it clear to me that there is in fact some ambiguity in my statement in this problem.<br/>
As is pointed in <a href="http://en.wikipedia.org/wiki/Boy_or_Girl_paradox" rel="nofollow">this link</a> ,two distinct interpretation of the statement "one of whom is a girl" that gives rise to ambiguity:<br/></p>
<blockquote>
<p>From all families with two children, at least one of whom is a boy, a family is chosen at random. This would yield the answer of 1/3.<br/>
From all families with two children, one child is selected at random, and the sex of that child is specified to be a boy. This would yield an answer of 1/2. </p>
</blockquote>
| turkeyhundt | 115,823 | <p>Imagine all the people in the world with two kids separated into 4 groups. Those with (B,B), (B,G), (G,B), (G,G). </p>
<p>How many of them could say one of their kids is a girl? (75%) And of those, what is the probability that the other child is a boy? You see that it is $\frac{2}{3}$</p>
<p>ETA: Note your line of thinking would be correct had the speaker specified that the OLDER or YOUNGER child was a girl.</p>
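<p>Both interpretations mentioned in the question's edit can be separated by exact enumeration; this is a sketch of my own, treating the four birth orders as equally likely and, for the second interpretation, also choosing uniformly which child is observed.</p>

```python
from fractions import Fraction
from itertools import product

families = list(product("BG", repeat=2))      # (older, younger), equally likely

# Interpretation 1: condition on "at least one girl"
with_girl = [f for f in families if "G" in f]
p1 = Fraction(sum("B" in f for f in with_girl), len(with_girl))
assert p1 == Fraction(2, 3)

# Interpretation 2: a uniformly chosen child is observed to be a girl
outcomes = [(f, i) for f in families for i in (0, 1)]        # 8 equally likely
observed_girl = [(f, i) for f, i in outcomes if f[i] == "G"]
p2 = Fraction(sum(f[1 - i] == "B" for f, i in observed_girl), len(observed_girl))
assert p2 == Fraction(1, 2)
```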
|
73,238 | <p>How can I calculate the solid angle that a sphere of radius R subtends at a point P? I would expect the result to be a function of the radius and the distance (which I'll call d) between the center of the sphere and P. I would also expect this angle to be 4π when d < R, and 2π when d = R, and less than 2π when d > R.</p>
<p>I think what I really need is some pointers on how to solve the integral (taken from <a href="http://en.wikipedia.org/wiki/Solid_angle" rel="nofollow">wikipedia</a>) $\Omega = \iint_S \frac { \vec{r} \cdot \hat{n} \,dS }{r^3}$ given a parameterization of a sphere. I don't know how to start to set this up so any and all help is appreciated!</p>
<p>Ideally I would like to derive the answer from this surface integral, not geometrically, because there are other parametric surfaces I would like to know the solid angle for, which might be difficult if not impossible to solve without integration.</p>
<p>*I reposted this from mathoverflow because this isn't a research-level question.</p>
| Narasimham | 95,860 | <p>Using standard spherical coordinates for a sphere surface $( a,\theta,\phi)$ we have parametrization for axis on z-axis:</p>
<p>$$ (x,y,z) = (a \cos \phi \cos \theta +h, a \cos \phi \sin \theta+k, a \sin \phi+l) $$</p>
<p>for a sphere whose center is displaced from origin to $(h,k,l).$</p>
<p>A great circle is formed by intersection of an inclined plane through sphere center inclined at angle $\alpha$ to z-axis.</p>
<p>To choose a great circle on the spherical surface we have to find a relation $f(\theta,\phi)=0. $ </p>
<p><em>An implicit function</em> relating $ \theta,\phi$ for great circle is given as:</p>
<p>$$ \cos \phi \tan \alpha = \sin \theta \sin \phi + \frac{\cos \theta \sin \alpha}{\sqrt{\cos^2 \phi - \sin ^2 \alpha}} $$</p>
<p>It is found from the conditions imposed for planarity / zero torsion and the geodesic condition. The proper sign is to be chosen depending on which octant we are in. Sketched in <em>Mathematica</em> ContourPlot for three inclinations $ (45^\circ, 60^\circ, 75^\circ).$ </p>
<p>Euler rigid body rotations are to be applied by matrix multiplication for direction cosines of given axis.</p>
<p><a href="https://i.stack.imgur.com/xrksr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xrksr.png" alt="SphereGrtCircleLat&Long"></a></p>
|
1,601,970 | <p>I want to use proof by contradiction.</p>
<p>Suppose that the set of real numbers is bounded above; then, according to the axiom of continuity (the least upper bound property), there exists a least upper bound $b$.</p>
<p>But if $x\in \Bbb R$, then $x+1\in \Bbb R$ because of the inclusion property of real numbers.</p>
<p>But $x+1\in \Bbb R\Longrightarrow x+1\leq b\Longrightarrow x\leq b-1$, hence $b-1$ is an upper bound for $\Bbb R$.</p>
<p>However since $b$ is a least upper bound we must have: $b\leq b-1\Longrightarrow 1\leq 0$, a contradiction, since $1>0$</p>
<p>Thus $\Bbb R$ is not bounded.</p>
<p>Is that proof correct?</p>
| CommonerG | 74,357 | <p>Suppose there is such an upper bound $r\in\mathbb{R}$. </p>
<p>Then $r+1>r$ and $r+1\in \mathbb{R}$, since $\mathbb{R}$ is closed under addition; this contradicts $r$ being an upper bound. </p>
|
1,601,970 | <p>I want to use proof by contradiction.</p>
<p>Suppose that the set of real numbers is bounded above; then, according to the axiom of continuity (the least upper bound property), there exists a least upper bound $b$.</p>
<p>But if $x\in \Bbb R$, then $x+1\in \Bbb R$ because of the inclusion property of real numbers.</p>
<p>But $x+1\in \Bbb R\Longrightarrow x+1\leq b\Longrightarrow x\leq b-1$, hence $b-1$ is an upper bound for $\Bbb R$.</p>
<p>However since $b$ is a least upper bound we must have: $b\leq b-1\Longrightarrow 1\leq 0$, a contradiction, since $1>0$</p>
<p>Thus $\Bbb R$ is not bounded.</p>
<p>Is that proof correct?</p>
| mvw | 86,776 | <p>The Archimedean axiom states:</p>
<blockquote>
<p>For every $x, y \in \mathbb{R}$ with $0 < x < y$ there exists an $n \in
\mathbb{N}$ such that $y < n x$.</p>
</blockquote>
<p>Of course $n x \in \mathbb{R}$, so there is no largest real number $y$.</p>
|
2,708,891 | <p>Let $R$ be a ring and consider $f = r_nx^n + r_{n-1}x^{n-1} + \cdots + r_1x + r_0\; \in R[X]$ such that $r^n = 0$ for all $r \in R$. Then can I call $f$ a monic polynomial in $R[X]$ (assume $r_n$ is non-invertible)?</p>
| David C. Ullrich | 248,223 | <p>No. A polynomial over $R$ is not the same thing as the function from $R$ to $R$ defined by a polynomial. In particular, if we assume that $r^3=0$ for every $r\in R$ then $$p(x)=2x^3+x^2$$and $$q(x)=x^2$$ define the same function, but (assuming that $2\ne0$ in $R$) they are different polynomials, and in fact $q$ is monic while $p$ is not.</p>
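<p>A classic concrete instance of this distinction, added by me (over a different ring from the hypothetical one above): over $\mathbb{Z}/3$, the polynomials $x^3$ and $x$ are distinct but define the same function, by Fermat's little theorem.</p>

```python
def evaluate(coeffs, x, m):
    """Evaluate a polynomial given by its coefficient list (lowest degree
    first) at x, in the ring Z/m."""
    return sum(c * pow(x, k, m) for k, c in enumerate(coeffs)) % m

p = [0, 0, 0, 1]   # the polynomial x^3
q = [0, 1]         # the polynomial x
assert p != q                                  # different polynomials ...
assert all(evaluate(p, x, 3) == evaluate(q, x, 3) for x in range(3))  # same function
```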
|
858,494 | <p>Where does the definition of the $L_\infty$ norm come from?</p>
<p>$$\|x\|_\infty=\max \{|x_1|,\dots,|x_k|\}$$</p>
| robjohn | 13,854 | <p><strong>General Approach</strong></p>
<p>As shown in <a href="https://math.stackexchange.com/a/833397">this answer</a>,
$$
\lim_{p\to\infty}\left(\int_A|f(x)|^p\,\mathrm{d}x\right)^{1/p}=\sup_{x\in A}|f(x)|\tag{1}
$$
Using a discrete measure, $(1)$ shows that
$$
\lim_{p\to\infty}\left(\sum_{k=1}^n|x_k|^p\right)^{1/p}=\max_{1\le k\le n}|x_k|\tag{2}
$$
Since
$$
\|x_k\|_p=\left(\sum_{k=1}^n|x_k|^p\right)^{1/p}\tag{3}
$$
if we take the limit of $(3)$ and compare with $(2)$, we get
$$
\|x_k\|_\infty=\max_{1\le k\le n}|x_k|\tag{4}
$$</p>
<hr>
<p><strong>Discrete Approach</strong></p>
<p>The discrete $\ell^p$ norm is defined as
$$
\|x_k\|_p=\left(\sum_{k=1}^n|x_k|^p\right)^{1/p}\tag{5}
$$
suppose that $m=\max\limits_{1\le k\le n}|x_k|$, and that $q$ of the $|x_k|$ equal $m$ (that is, $1\le q\le n$). For the $|x_k|\lt m$, we have $\lim\limits_{p\to\infty}\left|\frac{x_k}{m}\right|^p=0$. Therefore,
$$
\begin{align}
\lim_{p\to\infty}\|x_k\|_p
&=\lim_{p\to\infty}\left(\sum_{k=1}^n|x_k|^p\right)^{1/p}\\
&=m\lim_{p\to\infty}\left(\sum_{k=1}^n\left|\frac{x_k}{m}\right|^p\right)^{1/p}\\[4pt]
&=m\lim_{p\to\infty}q^{1/p}\\[12pt]
&=m\tag{6}
\end{align}
$$
In view of $(6)$, we get that
$$
\|x_k\|_\infty=\max_{1\le k\le n}|x_k|\tag{7}
$$</p>
|
858,494 | <p>Where does the definition of the $L_\infty$ norm come from?</p>
<p>$$\|x\|_\infty=\max \{|x_1|,\dots,|x_k|\}$$</p>
| copper.hat | 27,978 | <p>The following adds nothing to the above proofs other than a slightly geometric flavour. It is too long for a comment, so I am including it as an answer.</p>
<p>I am taking the space to be $\mathbb{C}^n$.</p>
<p>It is not hard to show (see <a href="https://math.stackexchange.com/a/424335/27978">https://math.stackexchange.com/a/424335/27978</a> for example) that $p \mapsto \|x\|_p$ is non-increasing,
$\lim_{p \to \infty} \|x\|_p = \|x\|_\infty$, and $\|x\|_\infty \le \|x\|_p \le \|x\|_1 \le n \|x\|_\infty$, hence the concepts of interior and boundedness are independent of $p$.</p>
<p>The above shows that $B_\infty(0,1) = \cup_{ p \ge 1 } B_p(0,1)$.</p>
<p>Since each $\|\cdot \|_p$ is a norm, we see that each $B_p(0,1)$ is bounded (uniformly by comment above), balanced, convex and contains $0$ in its interior. Hence $\cup_{ p \ge 1 } B_p(0,1)$ is bounded, balanced, convex and contains $0$ in its interior, and so is the unit ball of some norm, in this case, $\|\cdot \|_\infty$.</p>
<p>The point here is that there is a natural motivation in terms of the unit balls which 'converge' (in the above sense) to the $\|\cdot \|_\infty$ unit ball.</p>
|
3,066,020 | <p>I’m reading Hans Kurzweil ‘s “The Theory of Finite Groups”, where it says</p>
<blockquote>
<p>1.6.4 Let <span class="math-container">$N_1, . . . , N_n$</span> be normal subgroups of <span class="math-container">$G$</span>. Then the mapping <span class="math-container">$$α: G→G/N_1\times ··· \times G/N_n$$</span> given by <span class="math-container">$$g \mapsto
(gN_1,...,gN_n)$$</span> is a homomorphism with <span class="math-container">$\operatorname{Ker}α = \cap_i N_i$</span>. In
particular, <span class="math-container">$G/\cap_i N_i$</span> is isomorphic to a subgroup of <span class="math-container">$G/N_1
\times ··· \times G/N_n$</span>.</p>
</blockquote>
<p>I’m confused here: can we write <span class="math-container">$$G/N_1\times \cdots \times G/N_n$$</span>
? To write a product of groups like this, it’s required that each <span class="math-container">$G/N_i$</span> has only <span class="math-container">$e$</span> as a common element.</p>
<p>What if <span class="math-container">$$G=C_2 \times C_3 \times C_5 \times C_7$$</span></p>
<p><span class="math-container">$$N_1=C_2 \times C_3 $$</span></p>
<p><span class="math-container">$$N_2=C_2 \times C_5 $$</span></p>
<p><span class="math-container">$$N_3=C_2 \times C_7 $$</span></p>
<p>, shouldn’t <span class="math-container">$$G/N_1 \cong C_5 \times C_7$$</span></p>
<p><span class="math-container">$$G/N_2 \cong C_3 \times C_7$$</span></p>
<p><span class="math-container">$$G/N_3 \cong C_3 \times C_5$$</span></p>
<p>, and they have common elements besides <span class="math-container">$e$</span>?</p>
| Community | -1 | <p>Given two groups <span class="math-container">$G_1$</span> and <span class="math-container">$G_2$</span>, you can form their direct product: <span class="math-container">$G_1×G_2$</span>, to be <span class="math-container">$\{(g_1,g_2)\mid g_1\in G_1,g_2\in G_2\}$</span>, with the group operation defined as <span class="math-container">$(g_1,g_2)+(h_1,h_2)=(g_1+h_1,g_2+h_2)$</span>.</p>
<p>As an example, consider the Klein four group, <span class="math-container">$V_4=C_2×C_2$</span>.</p>
<p>(As in the other answer, you seem to be thinking of the internal direct product of subgroups of the given group.)</p>
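<p>A tiny sketch of my own to make the external direct product concrete: the elements of $C_2 \times C_3$ are pairs with componentwise operation, and no condition about common elements of the factors ever arises, because the factors occupy separate coordinates.</p>

```python
from itertools import product

C2, C3 = range(2), range(3)
G = list(product(C2, C3))                        # 6 pairs

def op(a, b):
    """Componentwise operation on C2 x C3."""
    return ((a[0] + b[0]) % 2, (a[1] + b[1]) % 3)

e = (0, 0)
assert len(G) == 6
assert all(op(e, g) == g for g in G)                   # identity
assert all(any(op(g, h) == e for h in G) for g in G)   # inverses exist

# C2 x C3 is in fact cyclic of order 6: (1, 1) generates everything
powers, cur = {e}, e
for _ in range(6):
    cur = op(cur, (1, 1))
    powers.add(cur)
assert powers == set(G)
```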
|
3,329,363 | <p>Sorry for the long text; this is a nebulous question that has always been in the back of my mind, and I've had trouble putting into a short form.</p>
<hr>
<p><strong>"Natural" Definition</strong></p>
<p>If someone on the street hears the word "permutation," I think they will naturally assume that a permutation:</p>
<ul>
<li>Involves rearranging objects</li>
<li>The order in which the objects are written, before and after the permutation is performed, is of crucial importance (it's really the essence of the permutation)</li>
</ul>
<p>I would naturally expect a permutation to be an instruction, or an action. For example, I would expect a permutation to look something like</p>
<p><span class="math-container">$$\sigma = \text{interchange the first two entries.}$$</span> </p>
<p>Then, if we apply <span class="math-container">$\sigma$</span> to <span class="math-container">$(A, B, C)$</span> we get <span class="math-container">$(B, A, C);$</span> if we apply it to <span class="math-container">$(4, 6, 9)$</span> we get <span class="math-container">$(6, 4, 9).$</span> To me this is a very satisfying (informal) definition of a permutation, because it captures exactly what many people (or at least me) would think a permutation "should be."</p>
<p>Another way to define "permutation" (to me, less satisfactory than the previous, but still more satisfactory than the official definition) could be to just say that "the 3-tuple <span class="math-container">$(B, A, C)$</span> is a permutation of <span class="math-container">$(A, B, C).$</span>" (In fact, I think this is the definition used in elementary Statistics books.)</p>
<p><strong>Percieved Weaknesses of the Official Definition</strong></p>
<ul>
<li>It makes little sense to "permute" your set of objects. If you have a set of objects <span class="math-container">$\{4, 6, 8 \},$</span> and while you are not in the room someone applies a permutation to your set, you will never know; the output of your permutation is still <span class="math-container">$\{4, 6, 8 \}.$</span> Even if they only apply the permutation to a subset, you only <em>might</em> be able to tell.</li>
<li>Permutations seem to have <em>nothing</em> to do with the order that your objects are in, either before or after doing the permutation. This, like I mentioned above, seems to violate <em>the whole point</em> of a permutation</li>
</ul>
<p>I call these weaknesses because they seem to violate the "person on the street" understanding of a permutation, and I know that generally mathematicians try really hard to not distort the meaning of common English words too much.</p>
<p><strong>My Question</strong></p>
<p>Is there really such a big disconnect between the "Natural" and Official definitions of permutations? Even if there is not, and there is a way to tediously link the natural definition with the official definition (which I'm sure there is), why does the Official definition deserve to be called a permutation more than the natural one? Is there a name for the Natural definition? </p>
<p>Thanks.</p>
| Aloizio Macedo | 59,234 | <p>"Rearranging" objects is a procedure that is not inherently related with order. The relation with order is more due to how we usually list things than the essence of permutation itself.</p>
<p>For instance, I have three pockets in my pants, in which I put my cell phone, keys and wallet, one in each pocket. I sometimes rearrange them; driving with the wallet in the right pocket of some pants is somewhat unpleasant. I think it is natural to call this process of changing from one allocation to another a permutation, and I think people would generally agree. This is precisely a function <span class="math-container">$f: Pockets \to Pockets$</span> which tells me that whatever was in pocket <span class="math-container">$x$</span> is now in pocket <span class="math-container">$f(x)$</span>. Note that I never made any ordering of the pockets.</p>
<p>As I mentioned in the introduction, I think you are assuming that an ordering must be given due to the same reason why I said "cell phone, keys and wallet": because this is how we are used to communicate things, by listing them.</p>
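<p>To make the pocket example concrete: a permutation can be modelled as a bijection on an unordered set, with no ordering ever chosen. A small illustrative sketch (the pocket names and items are of course hypothetical):</p>

```python
# A permutation of an unordered set, modelled as a dict (bijection).
# No ordering of the pockets is ever chosen.
pockets = {"left", "right", "back"}

# "whatever was in pocket x is now in pocket f(x)"
f = {"left": "right", "right": "left", "back": "back"}

def apply(perm, contents):
    """Move contents[x] into pocket perm[x]."""
    return {perm[x]: item for x, item in contents.items()}

contents = {"left": "phone", "right": "wallet", "back": "keys"}
moved = apply(f, contents)

# The set of pockets (and the set of items) is unchanged; only the assignment moved.
assert set(moved) == pockets
assert moved == {"right": "phone", "left": "wallet", "back": "keys"}
```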
|
1,239,211 | <p>I have been allowed to attend some preparatory lectures for a seminar on the Goodwillie Calculus of Functors. I found in my notes from one of the lectures two statements which I would like to ask about.</p>
<p>The first one is probably straightforward and I'm guessing is related to Whitehead-type theorems. Still, I would still like a detailed explanation of what it means.</p>
<blockquote>
<ol>
<li>Every homotopy type is a filtered colimit of finite CW complexes.</li>
</ol>
</blockquote>
<p>The second statement is a lot more problematic because I don't understand any of the context. Here it is:</p>
<blockquote>
<ol start="2">
<li>We want to look at (extraordinary) homology theories $h_\ast :\mathsf{Top}\rightarrow \mathsf{grAb}$ which commute with filtered colimits.</li>
</ol>
</blockquote>
<p>My question is why do we want to study homology theories which commute with filtered colimits? So that we may reduce to (finite) CW complexes? Is there anything else?</p>
<p>This statement is preceded in my notes by the following theorem of Whitehead: </p>
<p><strong><em>Theorem.</strong> For any extraordinary homology theory which is finitary ($\overset?=$ determined by values on finite CW complexes) there exists a spectrum $E\in \mathsf{Sp}$ such that $h_\ast (X)=\pi_\ast (E\wedge X)$ where $\pi _\ast$ are stable homotopy groups and $\wedge $ is the smash product.</em></p>
<p>Now I don't yet know anything about either spectra or stable homotopy, so I can't make out much of this theorem myself.</p>
| Qiaochu Yuan | 232 | <p>I think the most basic things you can say here have nothing much to do with homotopy theory in particular. It's a general feature of many familiar categories that every object is a filtered colimit of "small" objects which are easier to understand: for example, every group is a filtered colimit of finitely generated groups, and every commutative ring is a filtered colimit of finitely generated (in particular Noetherian!) rings. If you look at a functor on such a category which commutes with filtered colimits, then to understand what it does in general you only need to understand what it does to "small" objects, and this is a useful thing to be able to do. </p>
<p>The condition that a functor commutes with filtered colimits is particularly interesting when applied to representable functors $\text{Hom}(X, -)$, where it usually corresponds to some interesting "finiteness" or "compactness" condition on $X$. See <a href="http://ncatlab.org/nlab/show/compact+object" rel="noreferrer">compact object</a> for some details and examples. </p>
|
2,263,759 | <p>Let $E$ be a vector space and $\varphi: E \to E$ be a linear map. Let $x, y \in E \setminus \{0\}$ and $\lambda, \mu \in F$ such that
$\varphi(x) = \lambda x$ and $\varphi(y) = \mu y$. Prove that if $\lambda \neq \mu$ then $\{x, y\}$ is linearly independent.</p>
<p>This proof seems like it should be on the simpler side. But perhaps I am over thinking it. This is what I have:</p>
<p><strong>Proof :</strong>
Suppose $\{x,y\}$ is not linearly independent.
Then, there exist scalars $a,b$, not both zero, s.t. $ax+by=0$.
So, $0=ax+by=\varphi(x)+\varphi(y)= \varphi(x+y)$</p>
<p>And this is where I am stuck. I can use the fact that $\varphi$ is linear, and show that $\varphi(0)=0$, but the map is not necessarily injective, so there might be more elements in the null space. I can't use $\varphi(x+y)=c(x+y)$ because this assertion would only hold in dimension $1$. What am I missing? Am I going about this in the right way?</p>
<p>Thank you in advance!</p>
| florence | 343,842 | <p>Suppose that $\{x,y\}$ is linearly dependent. Then either $x$ is a multiple of $y$ or vice versa. Without loss of generality, assume that $x = cy$. Then
$$\lambda x = \phi(x) = \phi(cy) = c\phi(y) = c\mu y = \mu (cy) = \mu x$$
Thus, we have that $\mu x = \lambda x$. Since $x\neq 0$, it follows that $\mu = \lambda$. </p>
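<p>The statement can also be spot-checked numerically; a sketch with a hypothetical example matrix (not from the question):</p>

```python
import numpy as np

# phi represented by a matrix with distinct eigenvalues lambda = 2 and mu = 3
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
x = np.array([1.0, 0.0])   # eigenvector for lambda = 2
y = np.array([1.0, 1.0])   # eigenvector for mu = 3

assert np.allclose(A @ x, 2 * x)
assert np.allclose(A @ y, 3 * y)

# distinct eigenvalues: {x, y} is linearly independent (full column rank)
assert np.linalg.matrix_rank(np.column_stack([x, y])) == 2
```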
|
920,429 | <p>Given that $x(t)=(c_1+c_2 t + c_3 t^2)e^t$ is the general solution to a differential equation, how do you work backwards to find the differential equation? </p>
| AgentS | 168,854 | <p>Say $f(t) = c_1 + c_2t + c_3t^2$<br>
the given general solution is $$x(t) = fe^t$$
Since you have $3$ arbitrary constants, the required DE must be of order $3$. So you need to differentiate exactly 3 times:
$$\begin{align} x' &= (f'+f)e^t \\ x''&=(f''+2f'+f)e^t \\x'''&=(f''' + 3f''+3f'+f)e^t\end{align}$$</p>
<p>It's trivial to eyeball the required DE: $x'''-3x''+3x'-x=f'''e^t = 0$</p>
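<p>The resulting equation can be verified symbolically; a quick sketch, assuming sympy is available:</p>

```python
import sympy as sp

t = sp.symbols('t')
c1, c2, c3 = sp.symbols('c1 c2 c3')
x = (c1 + c2*t + c3*t**2) * sp.exp(t)

# x''' - 3x'' + 3x' - x should vanish identically, since f''' = 0
residual = sp.diff(x, t, 3) - 3*sp.diff(x, t, 2) + 3*sp.diff(x, t) - x
assert sp.simplify(residual) == 0
```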
|
844,355 | <p>Let $T\colon V \rightarrow W$ a linear transformation between the real vector spaces $V$ and $W$ both with finite dimension.</p>
<p>How can I prove that $\dim(V) = \dim T(V) + \dim T^{-1}(0)$?</p>
<p>I don't understand this problem or how to solve it. Any help would be appreciated.</p>
| egreg | 62,967 | <p>Hint: </p>
<ol>
<li><p>consider $\mathcal{A}=\{v_1,v_2,\dots,v_k\}$ such that $\{T(v_1),T(v_2),\dots, T(v_k)\}$ is a basis of $T(V)$;</p></li>
<li><p>consider a basis $\mathcal{B}=\{u_1,u_2,\dots,u_h\}$ of $\ker T=T^{-1}(0)$;</p></li>
<li><p>prove that $\mathcal{A}$ and $\mathcal{B}$ are disjoint;</p></li>
<li><p>prove that $\mathcal{A}\cup\mathcal{B}$ is a basis of $V$.</p></li>
</ol>
|
2,200,034 | <p>Suppose the points A and B are connected by two roads of the length $S_1$ and $S_2$. Cars can drive from A to B on either of the two roads. They start at point A and must decide, <strong>one after the other</strong> which road to take. They know how many cars already chose to drive on each road.</p>
<p>The speed of all cars on a road is equal to $\frac{1}{\sqrt{N}}$, where $N$ is the number of cars driving on that road at any time. </p>
<p>What is the limit of the ratios of cars on road $1$ to road $2$ as the number of cars that arrive per unit of time tends to infinity?</p>
<p>EDIT: The drivers can make decisions instantly and are $100\%$ logical.</p>
| samerivertwice | 334,732 | <p>$$\lim_{N\to\infty}\frac{1}{\sqrt{N}}=0$$</p>
<p>Therefore the cars are stationary</p>
<p>But they are arriving infinitely many per second, so there is a pile up of infinitely many cars every instant.</p>
<p>100% logical drivers cannot make an instant decision in the face of an illogical universe containing an infinite pile-up of stationary vehicles. A contradiction.</p>
|
3,103,991 | <p>Suppose <span class="math-container">$X$</span> is a topological space, <span class="math-container">$C$</span> is a closed subset of <span class="math-container">$X$</span>, <span class="math-container">$U$</span> is an open subset of <span class="math-container">$X$</span> and <span class="math-container">$U$</span> is dense in <span class="math-container">$X$</span>, is <span class="math-container">$U\cap C$</span> dense in <span class="math-container">$C$</span>?</p>
| Henno Brandsma | 4,280 | <p>No, take <span class="math-container">$C=\mathbb{Z}$</span> (closed in <span class="math-container">$X=\mathbb{R}$</span> with the usual topology) and <span class="math-container">$U = \mathbb{R}\setminus\mathbb{Z}$</span>, which is open and dense. Then <span class="math-container">$C \cap U=\emptyset$</span> is not dense in <span class="math-container">$C$</span>. </p>
<p>Any space with a closed set <span class="math-container">$C$</span> with empty interior (and <span class="math-container">$U$</span> its complement) will do.</p>
|
1,919,159 | <p>I don't get this step in the proof of <a href="https://en.wikipedia.org/wiki/Carath%C3%A9odory%27s_theorem_(convex_hull)" rel="nofollow">Carathéodory's theorem (convex hull)</a>:</p>
<blockquote>
<p>Suppose k > d + 1 (otherwise, there is nothing to prove). Then, the points $x_2 − x_1, ..., x_k − x_1$ are linearly dependent</p>
</blockquote>
<p>Why is this true?</p>
<p>How can we say these points are linearly dependent?</p>
| Ben Grossmann | 81,360 | <p>Note that $k > d+1$, and that our points are vectors in $\Bbb R^d$. In $\Bbb R^d$ (or any $d$-dimensional vector space), any set consisting of more than $d$ vectors is linearly dependent.</p>
|
622,090 | <p>We are asked to solve the following linear system</p>
<p>$$x_1-3x_2+x_3=1$$
$$2x_1-x_2-2x_3=2$$
$$x_1+2x_2-3x_3=-1$$</p>
<p>by using gauss-jordan elimination method. The augmented matrix of the linear system is $$\left(\begin{array}{ccc|c}1 & -3 & 1 & 1 \\2 & -1 & -2 & 2 \\1 & 2 & -3 & -1\end{array}\right).$$ By a series of elementary row operations, we have $$\left(\begin{array}{ccc|c}1 & -3 & 1 & 1 \\0 & 5 & -4 & 0 \\0 & 0 & 0 & -2\end{array}\right).$$ My question is, although the question asked us to solve the linear system using gauss-jordan elimination method, can we stop immediately and conclude that the linear system is inconsistent without further apply any elementary row operation to the matrix $$\left(\begin{array}{ccc|c}1 & -3 & 1 & 1 \\0 & 5 & -4 & 0 \\0 & 0 & 0 & -2\end{array}\right)$$ until the matrix $$\left(\begin{array}{ccc|c}1 & -3 & 1 & 1 \\0 & 5 & -4 & 0 \\0 & 0 & 0 & -2\end{array}\right)$$ is transformed into reduced-row echelon form?</p>
| William Ballinger | 79,615 | <p>You can conclude that the system is inconsistent, because the last row of your matrix implies that $0x_1+0x_2+0x_3=-2$, which cannot be satisfied.</p>
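<p>The row reduction can be replayed mechanically; a sketch with sympy, whose reduced row echelon form exposes the pivot in the augmented column:</p>

```python
import sympy as sp

# augmented matrix of the system
M = sp.Matrix([[1, -3,  1,  1],
               [2, -1, -2,  2],
               [1,  2, -3, -1]])

R, pivots = M.rref()
# the last row of the reduced form is (0 0 0 | 1), i.e. 0 = 1: inconsistent
assert R.row(2) == sp.Matrix([[0, 0, 0, 1]])
assert 3 in pivots  # pivot in the augmented column signals inconsistency
```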
|
1,265,531 | <p>I understand the question but I am not sure how to solve it. For example, if we flip HHHTTTTT then the next three must be heads because of the question. This however seems counterintuitive. I believe that there are $2^{10}$ possible strings, but I am unsure of how to count all possible strings that begin with HHH.</p>
| user3856370 | 237,743 | <p>There are $\frac{10!}{5!5!}$ ways of getting an equal number of heads and tails. The first three are fixed at heads, so we need 2 more heads and 5 tails in the remaining seven tosses; this can be done in $\frac{7!}{2!5!}$ ways. The probability is therefore
$$
{\frac{7!}{2!5!}\over\frac{10!}{5!5!}} = \frac{1}{12}
$$</p>
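<p>The count can be confirmed by brute force, conditioning (as the answer does) on five heads and five tails; a sketch:</p>

```python
from fractions import Fraction
from itertools import product

# all length-10 sequences with exactly five heads
balanced = [s for s in product("HT", repeat=10) if s.count("H") == 5]
start_hhh = [s for s in balanced if s[:3] == ("H", "H", "H")]

p = Fraction(len(start_hhh), len(balanced))
assert len(balanced) == 252     # 10! / (5! 5!)
assert len(start_hhh) == 21     # 7! / (2! 5!)
assert p == Fraction(1, 12)
```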
|
1,874,914 | <p>In order to find $e^{At}$, we can't just take the exponential of $A$ entrywise, as we would for a diagonal matrix. Instead we diagonalize, writing $A=S^{-1}DS$ with $D$ diagonal, so that $e^{At}=S^{-1}e^{Dt}S$. Why is this the case? I know we can't take the exponential of the matrix right away; do we always need to take the exponential of the diagonal and multiply by $S$ in order to reach the answer?</p>
<p>If yes, why?</p>
| John Hughes | 114,036 | <p>In general,
$$
e^M = I + M + \frac{M^2}{2!} + \frac{M^3}{3!} + \ldots
$$
but computing this sum involves multiplying $M$ by itself many many times and summing up the results and is generally computationally horrible, even if we only go out 10 terms or so. </p>
<p>But observe that
\begin{align}
Q^{-1}M^2Q
&= Q^{-1}M(QQ^{-1})M Q \\
&= (Q^{-1}MQ)(Q^{-1}M Q)
\end{align}
and a similar trick works for $M^3$, or $M^4$, or any power. So for any invertible matrix $Q$, we can write
\begin{align}
Q^{-1}e^MQ
&= Q^{-1}(I + M + \frac{M^2}{2!} + \frac{M^3}{3!} + \ldots) Q\\
&= Q^{-1}IQ + Q^{-1}MQ + Q^{-1}\frac{M^2}{2!}Q + Q^{-1}\frac{M^3}{3!}Q + \ldots\\
&= I + Q^{-1}MQ + \frac{(Q^{-1}MQ)^2}{2!} + \frac{(Q^{-1}MQ)^3}{3!} + \ldots\\
&= e^{Q^{-1}MQ}.
\end{align}
So suppose you're trying to exponentiate $A$, and it's a pain. But you know a matrix $Q$ with $Q^{-1}AQ$ diagonal, say $D$. Then you can compute
$$
e^{Q^{-1}AQ} = e^D
$$
But you also know that this is the same as $Q^{-1} e^A Q$. So you can write
$$
Q^{-1} e^A Q = e^D\\
e^A = Q e^D Q^{-1}.
$$
Now computing $e^A$ is easy: you exponentiate the diagonal matrix (easy! just exponentiate each diagonal entry!) and then conjugate by $Q$. </p>
<p>So the reason we do this is that exponentiating diagonals is easy (computationally and theoretically), and reducing general exponentiation to this simple case makes the general case easier. </p>
<p>Of course, to have this work, you need to know such a diagonalizing matrix $Q$, which isn't trivial (and in general may not exist...but does if $A$ is symmetric, for instance). But it may well be easier than computing the exponential without this diagonalization trick. </p>
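<p>The identity $e^A = Qe^DQ^{-1}$ can be checked numerically against a direct matrix exponential; a sketch (the symmetric matrix here is a hypothetical example, chosen so that a diagonalization is guaranteed to exist):</p>

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])        # symmetric, hence diagonalizable

evals, Q = np.linalg.eigh(A)      # A = Q D Q^{-1} with D = diag(evals)
eD = np.diag(np.exp(evals))       # exponentiating a diagonal matrix is easy

# conjugating the diagonal exponential recovers the full matrix exponential
assert np.allclose(Q @ eD @ np.linalg.inv(Q), expm(A))
```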
|
299,170 | <p>Given the following integer programming formulation, how can I specify that the variables are unique and none of them has the same value as the other one. basically <code>x1</code>, <code>x2</code>, <code>x3</code> , and <code>x4</code> need to get only one unique value from 1, 2, 3 or 4. and same applies to <code>y1</code>, <code>y2</code>, <code>y3</code>, and <code>y4</code>.</p>
<pre><code>Minimize
obj: +2*x1 -3*y1 +3*x2 -3*y2 +1*x3 -2*y3 +4*x4 -2*y4
Constraints
+2*x1 -3*y1 +3*x2 -3*y2 +1*x3 -2*y3 +4*x4 -2*y4 >= 0
Bounds
1 <= x1 <= 4
1 <= x2 <= 4
1 <= x3 <= 4
1 <= x4 <= 4
1 <= y1 <= 4
1 <= y2 <= 4
1 <= y3 <= 4
1 <= y4 <= 4
</code></pre>
| dexter04 | 48,371 | <p>For every pair of variables $x_i,x_j$, define a new variable $z_{ij} = |x_i-x_j|$.</p>
<p>Note that the conditions $$z_{ij} \ge x_i-x_j \\z_{ij} \ge x_j-x_i$$ are enough to define $z_{ij}$. </p>
<p>Now, impose the extra condition $z_{ij}>0$ to keep $x_i,x_j$ distinct. You have to do this for all possible pairs if you want no duplicates.</p>
|
299,170 | <p>Given the following integer programming formulation, how can I specify that the variables are unique and none of them has the same value as the other one. basically <code>x1</code>, <code>x2</code>, <code>x3</code> , and <code>x4</code> need to get only one unique value from 1, 2, 3 or 4. and same applies to <code>y1</code>, <code>y2</code>, <code>y3</code>, and <code>y4</code>.</p>
<pre><code>Minimize
obj: +2*x1 -3*y1 +3*x2 -3*y2 +1*x3 -2*y3 +4*x4 -2*y4
Constraints
+2*x1 -3*y1 +3*x2 -3*y2 +1*x3 -2*y3 +4*x4 -2*y4 >= 0
Bounds
1 <= x1 <= 4
1 <= x2 <= 4
1 <= x3 <= 4
1 <= x4 <= 4
1 <= y1 <= 4
1 <= y2 <= 4
1 <= y3 <= 4
1 <= y4 <= 4
</code></pre>
| Erick Wong | 30,402 | <p>One way to make this work is to change the underlying representation. Instead of your basic variables being $x_i$ and $y_i$, use binary indicator variables to indicate which of $\{1,2,3,4\}$ is assigned to $x_i$.</p>
<p>For instance, with four variables $b_{i1}, b_{i2}, b_{i3}, b_{i4}$, if we constrain each $0 \le b_{ij} \le 1$ as well as $b_{i1} + b_{i2} + b_{i3} + b_{i4} = 1$, then exactly one of the four variables will be $1$ and the rest must be $0$. We can then set $x_i = b_{i1} + 2b_{i2} + 3b_{i3} + 4b_{i4}$ to give $x_i$ the chosen value.</p>
<p>Now it should be easy to see how to enforce distinctness: to avoid repeating the value $1$, we just need to prevent $b_{i1} = b_{j1} = 1$ for any distinct $i,j$, and this can be done with a single inequality $\displaystyle\sum_i b_{i1}\le1$. Then another to prevent $2$ from being repeated, etc.</p>
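<p>The encoding can be sanity-checked without a solver; a small brute-force sketch (illustrative only) confirming that the row and column constraints admit exactly the permutations of $(1,2,3,4)$:</p>

```python
from itertools import product

def feasible(b):
    """b[i][j] = 1 iff x_i takes value j+1 (i, j in 0..3)."""
    rows_ok = all(sum(b[i]) == 1 for i in range(4))                       # one value per x_i
    cols_ok = all(sum(b[i][j] for i in range(4)) <= 1 for j in range(4))  # no value repeated
    return rows_ok and cols_ok

solutions = []
for flat in product((0, 1), repeat=16):
    b = [flat[4*i:4*i + 4] for i in range(4)]
    if feasible(b):
        x = tuple(sum((j + 1) * b[i][j] for j in range(4)) for i in range(4))
        solutions.append(x)

# exactly the 4! = 24 permutations of (1, 2, 3, 4) survive the constraints
assert len(solutions) == 24
assert all(sorted(x) == [1, 2, 3, 4] for x in solutions)
```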
<p>EDIT: Here's a different encoding inspired by dexter04's idea. For each $i$ and $j$ create a boolean variable $b = b_{ij}$ where $0\le b_{ij}\le 1$, which encodes whether $x_i \ge x_j+1$ or $x_j \ge x_i+1$. Add the inequalities
$$x_i \ge x_j + (1-b) - 100(b) \qquad x_j \ge x_i + (b) - 100(1-b).$$
When $b=0$, the first inequality becomes $x_i \ge x_j+1$, and the second becomes $x_j \ge x_i - 100$, which is harmless. When $b=1$, the same thing happens but with $x_i$ and $x_j$ reversed. Since $b$ must be either $0$ or $1$, exactly one of the two situations ($x_i > x_j$ or $x_j > x_i$) must occur but it doesn't mandate a particular side.</p>
<p>This is a more scalable solution when the range of values taken by $x_i$ is much larger than the number of variables.</p>
|
995,775 | <p>So we have the regular $\delta$-$\epsilon$ definition of continuity as: </p>
<p>(1) For all $\epsilon>0$, there exists a $\delta>0$ such that, if $|x-a|<\delta$, then $|f(x)-f(a)|<\epsilon$.</p>
<p>My question is why is the following definition incorrect?</p>
<p>(2) There exists a $\delta>0$ such that, if $|x-a|<\delta$, then $|f(x)-f(a)|<\epsilon$ for all $\epsilon>0$.</p>
<p>The obvious response is that $\forall x$ $\exists y$ $\neq$ $\exists y$ $\forall x$ (or rather, they are not always equal), but look harder at the grammar: is that necessarily what is going on?</p>
<p>Let us define $p:=$ "There exists a $\delta>0$ such that, if $|x-a|<\delta$, then $|f(x)-f(a)|<\epsilon$"</p>
<p>So we now have the statement:</p>
<p>"$p$ is true for all $\epsilon>0$"</p>
<p><strong>Isn't that sentence identical <em>to the English sentence</em></strong>: "for all $\epsilon>0$, $p$ is true"?</p>
<p>In which case you would have (1):</p>
<p>For all $\epsilon>0$, there exists a $\delta>0$ such that, if $|x-a|<\delta$, then $|f(x)-f(a)|<\epsilon$.</p>
<p>The counterargument that $\forall x$ $\exists y$ $\neq$ $\exists y$ $\forall x$ makes sense <em>if you look at it immediately from a logical view,</em> but <em>because of the way English sentences work</em> (and their vagueness, in a case like this), the "for all $\epsilon>0$" clause can be <em>placed</em> anywhere without changing the meaning of the statement <em>in English</em>.</p>
<p>To illustrate better what I'm talking about, let us imagine that we write our definition as follows:</p>
<p>There exists a $\delta>0$ such that, if $|x-a|<\delta$, then $|f(x)-f(a)|<\epsilon$ (for all $\epsilon>0$).</p>
<p>Or, maybe:</p>
<p>There exists a $\delta>0$ such that, if $|x-a|<\delta$, then $|f(x)-f(a)|<\epsilon$ (N.B. we're talking about all $\epsilon>0$ here!).</p>
<p>Aren't these parentheticals sort of "overlying" the whole statement? Is the fault in my reasoning that $\epsilon$ is being "bound" to an if-then statement <em>before</em> it has been defined? Or am I just blatantly incorrect and this <em>is</em> a case of $\forall x$ $\exists y$ $\neq$ $\exists y$ $\forall x$.</p>
<p><strong>The essential point of all of this</strong> is that, if you have the statement: $$ \forall A \exists B : {B \subseteq A} $$
Then in English, this is equivalent to saying, <strong><em>both</em></strong> "For all A, there exists a B such that if B then A", <strong><em>and</em></strong>, "There exists a B such that if B then A, for all A".</p>
| rocinante | 117,337 | <p>The definition is incorrect because the order in which the values are picked matters. I'm glad you asked this, because this point is not hammered nearly as much as it should be to get people to understand and appreciate what is going on the first time around.</p>
<p>First you pick $\epsilon$. THEN you pick $\delta$. The $\delta$ depends on the $\epsilon$. This is hugely important because if you were to pick another $\epsilon$ you would get a different $\delta$. (Caveat: if you're talking about the case of uniform continuity, then no matter what $\epsilon$ you pick, the same $\delta$ works for any of them). </p>
<p>For me, the easiest way to grasp the intuition behind this is to draw it out. Pick a continuous function and draw it. Then draw 2 lines parallel to the y-axis a certain distance apart (say half an inch, or whatever distance you want). At the points they intersect the curve (i.e. function) mark the corresponding values on the y-axis. So now you have your first ($\epsilon_1$, $\delta_1$) pair. Repeat this process but this time around change the distance between the 2 parallel lines, so that you get another ($\epsilon_2$, $\delta_2$) pair. </p>
<p>Remember that $\epsilon_1$ =/= $\epsilon_2$ by design. Now look at $\delta_1$ and $\delta_2$. If you picked a continuous function (but not uniformly continuous) $\delta_1$ won't be the same as $\delta_2$. If you picked a uniformly continuous function, they will be the same. </p>
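<p>The dependence of $\delta$ on $\epsilon$ can be made concrete with a toy example (not from the text: $f(x)=x^2$ at $a=1$, where the largest workable $\delta$ has a closed form); a sketch:</p>

```python
import math

# f(x) = x^2 at a = 1: delta = sqrt(1 + eps) - 1 works, since
# |x - 1| < delta then guarantees |x^2 - 1| < eps.
def delta(eps):
    return math.sqrt(1 + eps) - 1

for eps in (1.0, 0.1, 0.01):
    d = delta(eps)
    # sample points strictly inside (1 - d, 1 + d) and check the epsilon bound
    for k in range(1, 100):
        x = 1 + d * (k / 100.0) * (-1) ** k
        assert abs(x * x - 1) < eps

# a different epsilon forces a different delta
assert delta(1.0) != delta(0.1)
```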
<p>For these kinds of long-winded explanations that articulate the intuition you're supposed to grasp, I highly recommend the book "Introduction to Real Analysis" by Michael J. Schramm. It is not nearly as popular as the terse Rudin, but I found it invaluable in helping me to understand how to do analysis. (I don't mean to bash Rudin. His book is great for exercises once you've had some hand-holding from Schramm.)</p>
|
2,706,776 | <p>In solving the wave equation
$$u_{tt} - c^2 u_{xx} = 0$$
it is commonly 'factored'</p>
<p>$$u_{tt} - c^2 u_{xx} =
\bigg( \frac{\partial }{\partial t} - c \frac{\partial }{\partial x} \bigg)
\bigg( \frac{\partial }{\partial t} + c \frac{\partial }{\partial x} \bigg)
u = 0$$</p>
<p>to get
$$u(x,t) = f(x+ct) + g(x-ct).$$</p>
<p><strong>My question is: is this legitimate?</strong></p>
<p>The partial differentiation operators are not variables, but here in 'factoring' they are treated as such.</p>
<p>Also it does not seem that both factors can individually be set to zero to obtain the solution--either one or the other, or both might be zero.</p>
| Community | -1 | <p>1) Differential operators are linear, and on a space of sufficiently differentiable functions they satisfy $\frac{\partial}{\partial x}\frac{\partial}{\partial y} = \frac{\partial}{\partial y} \frac{\partial}{\partial x}$. So yes, this is legitimate.</p>
<p>2) To prove that each factor can individually be set to zero, substitute each candidate solution separately into the wave equation you started with. </p>
<p>P.S. If the answer feels like a magical guess, try changing the variables $\alpha = x - ct, \beta = x + ct$; then things become straightforward.</p>
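<p>Both facts (that $u$ solves the wave equation, and that each summand is annihilated by one factor of the operator) can be checked symbolically; a sketch with sympy:</p>

```python
import sympy as sp

x, t, c = sp.symbols('x t c')
f, g = sp.Function('f'), sp.Function('g')
u = f(x + c*t) + g(x - c*t)

# u satisfies u_tt - c^2 u_xx = 0
wave = sp.diff(u, t, 2) - c**2 * sp.diff(u, x, 2)
assert sp.simplify(wave) == 0

# each summand is killed by one factor of the operator
assert sp.simplify(sp.diff(f(x + c*t), t) - c*sp.diff(f(x + c*t), x)) == 0
assert sp.simplify(sp.diff(g(x - c*t), t) + c*sp.diff(g(x - c*t), x)) == 0
```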
|
2,416,446 | <p>I need to prove the following inequality: </p>
<blockquote>
<p>$$\sum_{n=m+1}^\infty \frac{1}{n^2}\leq \frac1m$$</p>
</blockquote>
<p>But I'm stuck with it. I found geometric justifications for this online, but I'd really appreciate seeing an actual proof. Any hints? </p>
| Tsemo Aristide | 280,301 | <p>Hint: Compare with the integral $\int_{m+1}^{\infty} {1\over x^2}dx$.</p>
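<p>Filling in the hint: since $1/x^2$ is decreasing, $\frac{1}{n^2}\le\int_{n-1}^{n}\frac{dx}{x^2}$ for each $n\ge 2$, and summing these bounds produces an integral starting at $m$, which is exactly what gives $1/m$:</p>

```latex
\sum_{n=m+1}^{\infty} \frac{1}{n^2}
  \;\le\; \sum_{n=m+1}^{\infty} \int_{n-1}^{n} \frac{dx}{x^2}
  \;=\; \int_{m}^{\infty} \frac{dx}{x^2}
  \;=\; \frac{1}{m}.
```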
|
2,767,679 | <blockquote>
<p>Let $A$ be an $m \times n$ matrix and let $B, C$ be $n \times p$ matrices. Prove that $A(B + C) = AB + AC$</p>
</blockquote>
<p>I know it's obvious that it is and that every mathematician takes this for granted but I've been asked to prove it and I don't know how to do it without just multiplying out the brackets. Any help would be greatly appreciated.</p>
| José Carlos Santos | 446,262 | <p>Well, what I would do would be to compute $A(B+C)$ and $AB+AC$ and to see that the result is the same.</p>
<p>Otherwise, you can use the fact that $A$, $B$, and $C$ are matrices of linear maps $f_A$, $f_B$, and $f_C$, that$$f_A\circ(f_B+f_C)=f_A\circ f_B+f_A\circ f_C$$and that the matrix of the composition of two linear maps is the product of the matrices.</p>
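<p>The first suggestion (compute both sides and compare) can also be done numerically on random instances; a quick sketch:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))   # m x n
B = rng.standard_normal((4, 2))   # n x p
C = rng.standard_normal((4, 2))   # n x p

# distributivity of matrix multiplication over addition
assert np.allclose(A @ (B + C), A @ B + A @ C)
```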
|
4,032,983 | <p>I would like to know math websites that are useful for students, PhD students and researchers (useful in the sense most of the students or researchers—of a particular area—are using it). Maybe you can share which math websites you sometime use and why you use it.</p>
<p>Let me give my websites and why I use them:</p>
<p>MathOverflow and Stack Exchange: Very good to look up questions about how to do mathematics, what mathematics is, how it evolves, and of course to have an exchange with smart people, who can help you if you are stuck at something.</p>
<p>arXiv: I don't use it now, but I think that's the main website to read papers and to publish (research level)</p>
<p>The Stacks Project: It is very useful for me to look algebraic things ups with more explanation than in some lectures...</p>
<p>MathSciNet: To search for papers that may help you.</p>
<p>Number Theory Web: Since I want to become a number theorist, this site helps me to see which kind of things exist.</p>
<p>Do you have more websites that help you to understand maybe mathematics, or just to connect to other mathematicians?</p>
<p>Everything is welcome. Just post it as an answer and not a comment.</p>
| kimchi lover | 457,779 | <p>I have found these extremely useful, from time to time:</p>
<p><a href="https://oeis.org/" rel="noreferrer">Online Encyclopedia of Integer Sequences</a></p>
<p><a href="https://dlmf.nist.gov/" rel="noreferrer">NIST Digital Library of Mathematical Functions</a></p>
<p>They both provide facts of a specialized nature, rather than understandings or arguments. That is, they are thin on explanations and proofs. The OEIS does have very thorough references, so this is not a serious problem. The DLMF also has many references, which I then skim with MathSciNet.</p>
<p>I have mixed feelings about Wikipedia's mathematics articles. Individual articles are often imprecise, more interested in giving the gist of a topic than in giving precise details; this is especially bad when it comes to precise statements of theorems or (worse) of proofs. There is usually only a loose connection between the cited references and the contents of the article: which (say) of the six cited sources contains details of this or that statement in the article? And there is poor coordination between collections of articles, which often use differing terminology and notation. On the other hand, it is a good source for understanding (say) current usage of mathematical terminology. I typically find 75 percent of MSE questions use some technical term in a way I do not understand, and Wikipedia often gives an indication of what the questions' posers had in mind.</p>
|
4,032,983 | <p>I would like to know math websites that are useful for students, PhD students and researchers (useful in the sense most of the students or researchers—of a particular area—are using it). Maybe you can share which math websites you sometime use and why you use it.</p>
<p>Let me give my websites and why I use them:</p>
<p>MathOverflow and Stack Exchange: Very good to look up questions about how to do mathematics, what mathematics is, how it evolves, and of course to have an exchange with smart people, who can help you if you are stuck at something.</p>
<p>arXiv: I don't use it now, but I think that's the main website to read papers and to publish (research level)</p>
<p>The Stacks Project: It is very useful for me to look algebraic things ups with more explanation than in some lectures...</p>
<p>MathSciNet: To search for papers that may help you.</p>
<p>Number Theory Web: Since I want to become a number theorist, this site helps me to see which kind of things exist.</p>
<p>Do you have more websites that help you to understand maybe mathematics, or just to connect to other mathematicians?</p>
<p>Everything is welcome. Just post it as an answer and not a comment.</p>
| Danilo Gregorin Afonso | 225,911 | <p>There is a website devoted to counterexamples:
<a href="http://www.mathcounterexamples.net/?fbclid=IwAR0dZKurrSUV1ft8AeW-Gh189x58BdjGZ6M2KzeeyiHJQ6grdCuCgLdGUgw" rel="noreferrer">http://www.mathcounterexamples.net/?fbclid=IwAR0dZKurrSUV1ft8AeW-Gh189x58BdjGZ6M2KzeeyiHJQ6grdCuCgLdGUgw</a></p>
<p>The Encyclopedia of Triangle Centers:
<a href="https://faculty.evansville.edu/ck6/encyclopedia/ETC.html" rel="noreferrer">https://faculty.evansville.edu/ck6/encyclopedia/ETC.html</a></p>
<p>and also the Periodic Table of Finite Elements:
<a href="http://femtable.org/" rel="noreferrer">http://femtable.org/</a></p>
<p>For curiosities, there is a website that teaches (with audio files) the pronunciations of many mathematician's names:
<a href="http://pronouncemath.blogspot.com/" rel="noreferrer">http://pronouncemath.blogspot.com/</a></p>
<p>Mathematical pictures:
<a href="https://images.math.cnrs.fr/?lang=fr" rel="noreferrer">https://images.math.cnrs.fr/?lang=fr</a></p>
<p>Finally, there is the nice Halmos' album of pictures of mathematicians:
<a href="https://www.maa.org/press/periodicals/convergence/whos-that-mathematician-images-from-the-paul-r-halmos-photograph-collection" rel="noreferrer">https://www.maa.org/press/periodicals/convergence/whos-that-mathematician-images-from-the-paul-r-halmos-photograph-collection</a></p>
|
4,032,983 | <p>I would like to know math websites that are useful for students, PhD students and researchers (useful in the sense most of the students or researchers—of a particular area—are using it). Maybe you can share which math websites you sometime use and why you use it.</p>
<p>Let me give my websites and why I use them:</p>
<p>MathOverflow and Stack Exchange: Very good to look up questions about how to do mathematics, what mathematics is, how it evolves, and of course to have an exchange with smart people, who can help you if you are stuck at something.</p>
<p>arXiv: I don't use it now, but I think that's the main website to read papers and to publish (research level)</p>
<p>The Stacks Project: It is very useful for me to look algebraic things ups with more explanation than in some lectures...</p>
<p>MathSciNet: To search for papers that may help you.</p>
<p>Number Theory Web: Since I want to become a number theorist, this site helps me to see which kind of things exist.</p>
<p>Do you have more websites that help you to understand maybe mathematics, or just to connect to other mathematicians?</p>
<p>Everything is welcome. Just post it as an answer and not a comment.</p>
| MathIsFun7225 | 424,950 | <p><a href="https://proofwiki.org/wiki/Main_Page" rel="noreferrer">ProofWiki</a> is an online collection of proofs that I have found to be useful at times. Each page contains links to definitions and lemmas that are used in the proof, as well as a list of external sources on the topic. More importantly, there is a list of <a href="https://proofwiki.org/wiki/ProofWiki:Jokes" rel="noreferrer">math jokes</a> ;)</p>
<p>Another useful site is <a href="https://mathworld.wolfram.com/" rel="noreferrer">Wolfram MathWorld</a>. It contains articles on many topics in mathematics, with links to related topics and references. It too has <a href="https://mathworld.wolfram.com/topics/MathematicalHumor.html" rel="noreferrer">a few jokes</a>.</p>
|
1,996,290 | <p>I don't know how to solve the following integral:</p>
<p>$$ \frac{2\pi}R\int_{r_1}^{r_2}r(r+R-|r-R|)dr $$</p>
<p>I have solved it when $R \le r_1$ and $R \ge r_2$ but I need the answer for $ r_1\lt R \lt r_2$. <br/>
R is a constant as well as $r_1$ and $r_2$.</p>
<p>I appreciate your help.</p>
| Dean C Wills | 366,201 | <p>A good thing to notice is $$r+R-|r-R|=2\min(r,R)$$.</p>
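<p>This identity is easy to verify exhaustively on sample values, and it is what lets the integral be split at $r=R$; a quick sketch:</p>

```python
from itertools import product

# r + R - |r - R| equals twice the smaller of r and R
for r, R in product(range(-5, 6), repeat=2):
    assert r + R - abs(r - R) == 2 * min(r, R)
```

<p>In particular, for $r_1 < R < r_2$ the integrand $r(r+R-|r-R|)$ equals $2r^2$ on $[r_1,R]$ and $2Rr$ on $[R,r_2]$, so the integral splits at $r=R$.</p>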
|
2,193,550 | <p>Prove that $G$ acts faithfully on $X$ when no two elements of $G$ act in the same way on every element of $X$.</p>
<p>So I don't have much of a proof, but here's what I'm thinking. I know that for $G$ to act faithfully on $X$, the identity is the only element that fixes every element in $X$, so $\forall x \in X, ex = x$. So this means that, $\forall g \in G, gx \neq x$. So if $\phi$ is the action, $ker\phi = \{e \in G | ex = x\}$. I don't know how to derive the proof of this though.</p>
| Jack D'Aurizio | 44,121 | <p>$-2$ is not a quadratic residue $\!\!\pmod{5}$, hence $x^2+2$ is irreducible over $\mathbb{F}_5$ and $\mathbb{F}_5[x]/(x^2+2)$ is a finite field isomorphic to $\mathbb{F}_{5^2}$. The multiplicative part of a finite field is a cyclic group, hence every element of $\mathbb{F}_5[x]/(x^2+2)$ has an order that is a divisor of $24=25-1$. In particular, $x^2=-2$ implies $x^4=4$ and $x^8=1$, so the order of $x$ is exactly $8$.</p>
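<p>The computation can be replayed concretely in $\mathbb{F}_5[x]/(x^2+2)$, representing $a+bx$ as the pair $(a,b)$; a sketch:</p>

```python
def mul(u, v, p=5):
    """Multiply a + b*x and c + d*x modulo x^2 + 2 over F_p (so x^2 = -2)."""
    a, b = u
    c, d = v
    # (a + bx)(c + dx) = ac + (ad + bc)x + bd x^2 = (ac - 2bd) + (ad + bc)x
    return ((a*c - 2*b*d) % p, (a*d + b*c) % p)

X = (0, 1)                 # the element x
powers = {}
acc = (1, 0)
for k in range(1, 25):
    acc = mul(acc, X)
    powers[k] = acc

assert powers[2] == (3, 0)   # x^2 = -2 = 3 (mod 5)
assert powers[4] == (4, 0)   # x^4 = 4
assert powers[8] == (1, 0)   # x^8 = 1
# the order of x is exactly 8
assert min(k for k, v in powers.items() if v == (1, 0)) == 8
```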
|
3,234,217 | <p>Let <span class="math-container">$a,b,c \in \mathbb{R},$</span> <span class="math-container">$\vec{v_1}=\begin{pmatrix}1\\4\\1\\-2 \end{pmatrix},$</span> <span class="math-container">$\vec{v_2}=\begin{pmatrix}-1\\a\\b\\2 \end{pmatrix},$</span> and <span class="math-container">$\vec{v_1}=\begin{pmatrix}1\\1\\1\\c \end{pmatrix}.$</span> What are the conditions on the numbers <span class="math-container">$a,b,c$</span> so that the three vectors are linearly dependent on <span class="math-container">$\mathbb{R}^4$</span>? I know that the usual method of solving this is to show that there exists scalars <span class="math-container">$x_1,x_2,x_3$</span> not all zero such that
<span class="math-container">\begin{align}
x_1\vec{v_1}+x_2\vec{v_2}+x_3\vec{v_3}=\vec{0}.
\end{align}</span></p>
<p>Doing this would naturally lead us to the augmented matrix
<span class="math-container">\begin{pmatrix}
1 & -1 & 1 &0\\
4 & a & 1 &0\\
1& b & 1 &0\\
-2 & 2 & c &0\\
\end{pmatrix}</span></p>
<p>Doing some row reduction would lead us to the matrix</p>
<p><span class="math-container">\begin{pmatrix}
1 & -1 & 1 &0\\
4 & a & 1 &0\\
0& b+1 & 0 &0\\
0 & 0 & c+2 &0\\
\end{pmatrix}</span>
I'm not quite sure how to proceed after this. Do I take cases on whether <span class="math-container">$b+1$</span> or <span class="math-container">$c+2$</span> are zero or nonzero? </p>
| egreg | 62,967 | <p>The matrix
<span class="math-container">$$
\begin{pmatrix}
1 & -1 & 1 &0\\
4 & a & 1 &0\\
0& b+1 & 0 &0\\
0 & 0 & c+2 &0\\
\end{pmatrix}
$$</span>
is not in row echelon form. Sum to the second row the first multiplied by <span class="math-container">$-4$</span>, getting
<span class="math-container">$$
\begin{pmatrix}
1 & -1 & 1 &0\\
0 & a+4 & -3 &0\\
0& b+1 & 0 &0\\
0 & 0 & c+2 &0\\
\end{pmatrix}
$$</span>
Still not row echelon form, but “almost”. Anyway, computing the rank is easy.</p>
<p>If <span class="math-container">$b+1\ne0$</span>, you can swap the second and third row, then continue the Gaussian elimination getting
<span class="math-container">$$
\begin{pmatrix}
1 & -1 & 1 & 0\\
0 & 1 & 0 & 0\\
0& 0 & 1 & 0\\
0 & 0 & 0 & 0\\
\end{pmatrix}
$$</span>
which has rank <span class="math-container">$3$</span>. Thus a necessary condition so the vectors are linearly dependent is <span class="math-container">$b=-1$</span>. Now the matrix becomes
<span class="math-container">$$
\begin{pmatrix}
1 & -1 & 1 &0\\
0 & a+4 & -3 &0\\
0& 0 & 0 &0\\
0 & 0 & c+2 &0\\
\end{pmatrix}
$$</span>
If <span class="math-container">$a+4\ne0$</span> and <span class="math-container">$c+2\ne0$</span>, a row echelon form is (swapping the third and fourth rows)
<span class="math-container">$$
\begin{pmatrix}
1 & -1 & 1 &0\\
0 & a+4 & -3 &0\\
0 & 0 & c+2 &0\\
0& 0 & 0 &0\\
\end{pmatrix}
$$</span>
so the matrix has rank <span class="math-container">$3$</span>. In order the rank is less than <span class="math-container">$3$</span> we need either <span class="math-container">$a=-4$</span> or <span class="math-container">$c=-2$</span>.</p>
<p>You can so state that the vectors are linearly dependent if and only if <span class="math-container">$b=-1$</span> <em>and</em> either <span class="math-container">$a=-4$</span> or <span class="math-container">$c=-2$</span>.</p>
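<p>The case analysis can also be double-checked numerically. Below is a small Python sketch (exact rank computation over the rationals via Gaussian elimination; the helper names are ad hoc):</p>

```python
from fractions import Fraction

def rank(rows):
    """Rank of a rational matrix, via Gaussian elimination in exact arithmetic."""
    M = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                f = M[i][col] / M[r][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def rank_at(a, b, c):
    # Columns are v1, v2, v3 from the question.
    return rank([[1, -1, 1],
                 [4,  a, 1],
                 [1,  b, 1],
                 [-2, 2, c]])

print(rank_at(0, 0, 0))      # 3: generic a, b, c give independent vectors
print(rank_at(-4, -1, 0))    # 2: b = -1 and a = -4 give dependence
print(rank_at(0, -1, -2))    # 2: b = -1 and c = -2 give dependence
print(rank_at(-4, 0, -2))    # 3: b != -1 keeps them independent
```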
|
122,776 | <p>Let $\Gamma$ be a lattice in a (real or p-adic) Lie group.
Is it true that for a given natural number $n$ there exists a finite index subgroup $\Sigma\subset\Gamma$ such that each $\sigma\in\Sigma$ is an $n$-th power of some element of $\Gamma$?</p>
<p>In other words, is it true that for given $\sigma\in\Sigma$ there exists $\gamma\in\Gamma$ such that $\sigma=\gamma^n$?</p>
<p>If not true for general lattices, are there some restrictions under which this holds (cocompactness, arithmeticity,...)?</p>
| Venkataramana | 23,291 | <p>The answer is no, even in higher rank groups. For example, take $\Gamma = SL_3({\mathbb Z})$. If such a $\Sigma $ existed for any $n$, then its completion in the profinite (same as congruence) completion of $\Gamma$ would have this property that every element in $\Sigma$ would be an $n$-th power. But the completion of $\Sigma$, by strong approximation, is open. So for almost all primes, $SL_3({\mathbb Z}_p)$ would have this property. But you can easily cook up a prime $p$ and elements in $SL_3({\mathbb F}_p)$ which are not $n$-th powers. </p>
<p>To see this, get large primes such that $p^3 \equiv 1 \pmod{n}$ but $p-1$ is not divisible by $n$. Choose a cubic extension of ${\mathbb F}_p$; the group of norm one elements in the cubic extension is cyclic of order $\frac{p^3-1}{p-1}$. Hence the $n$-th power map on this subgroup of $SL_3({\mathbb F}_p)$ is not surjective; but any $n$-th root of a generator for this subgroup already lies in this subgroup. </p>
<p>[Edit] To summarise: for any lattice in a semi-simple group, and for any integer $n\geq 2$, the image of the $n$-th power map never contains a finite index subgroup. The argument using strong approximation works if the lattice has number field entries (this holds for all lattices except those in $SL(2,{\mathbb R})$) . I think that it is true for any Zariski dense finitely generated subgroup of a semi-simple group, using strong approximation due to Nori (and Weisfeiler). </p>
|
3,531,693 | <p>Let <span class="math-container">$A \subset \mathbb{R}^n$</span> be a compact set with positive Lebesgue measure on <span class="math-container">$\mathbb{R}^n$</span>. Can we find an open set <span class="math-container">$B \subset \mathbb{R}^n$</span> such that <span class="math-container">$B \subset A$</span>?</p>
<p>PS: I know that if the compactness assumption is removed, the answer is no, since <span class="math-container">$A$</span> can be a compact set of positive measure with all the rational points removed.</p>
| Thorgott | 422,019 | <p>Check out the <a href="https://en.wikipedia.org/wiki/Smith%E2%80%93Volterra%E2%80%93Cantor_set" rel="noreferrer">Smith-Volterra-Cantor set</a> or variations thereof. Note that <span class="math-container">$B=\emptyset$</span> is, of course, always possible.</p>
|
147,095 | <p>I was wondering if there is a stationary distribution for a bipartite graph. Can we apply random walks to a bipartite graph? We know the stationary distribution can be found from the Markov chain, but a bipartite graph has two different islands, with connections occurring only between nodes from different groups. </p>
| Ben Barber | 25,485 | <p>There is a distribution such that, if you start a random walk in that distribution, then it will remain there for all time: the usual formula, $\pi(v) = d(v)/2e(G)$, still works. What is no longer true is that the random walk will converge to the stationary distribution as time tends to infinity independent of the start vertex, as there is an insurmountable parity problem: if you start in one class, then you will always be in that class after an even number of steps, so the probability of being at a particular vertex is zero at every other time step. In the language of Markov chains, the random walk on a bipartite graph is <em>periodic</em>.</p>
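<p>Both facts are easy to check on a small example, say the path $0-1-2$ (bipartite with classes $\{0,2\}$ and $\{1\}$). The Python sketch below (the graph and names are just for illustration) verifies that $\pi(v) = d(v)/2e(G)$ is preserved by one step, while a walk started at vertex $0$ keeps oscillating between the two classes:</p>

```python
from fractions import Fraction

adj = {0: [1], 1: [0, 2], 2: [1]}         # path 0 - 1 - 2, a bipartite graph
deg = {v: len(nb) for v, nb in adj.items()}
two_e = sum(deg.values())                 # 2 * number of edges

pi = {v: Fraction(deg[v], two_e) for v in adj}   # pi(v) = d(v) / 2e(G)

def step(dist):
    """Push a distribution through one step of the simple random walk."""
    new = {v: Fraction(0) for v in adj}
    for v, p in dist.items():
        for w in adj[v]:
            new[w] += p / deg[v]
    return new

print(step(pi) == pi)                     # True: pi is stationary
d = {0: Fraction(1), 1: Fraction(0), 2: Fraction(0)}
for t in range(1, 5):
    d = step(d)
    print(t, d)                           # mass alternates between {1} and {0, 2}
```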
|
3,072,720 | <p>Let <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> be independent random variables with uniform distribution between <span class="math-container">$0$</span> and <span class="math-container">$1$</span>, that is, with joint density <span class="math-container">$f_{xy}(x, y) = 1$</span> if <span class="math-container">$x \in [0,1]$</span> and <span class="math-container">$y \in [0,1]$</span>, and <span class="math-container">$f_{xy} (x, y) = 0$</span> otherwise. Let <span class="math-container">$W = (X + Y) / 2$</span>:</p>
<p>What is the distribution of <span class="math-container">$X|W=w$</span>?</p>
<p>I started considering something like this: <span class="math-container">$X|W = (x +y)/2$</span></p>
<p>But I got stuck. I don't know how to begin.</p>
<p>Any idea? Hint?</p>
| Rohit Pandey | 155,881 | <p><span class="math-container">$$X = 2W-Y$$</span></p>
<p>So, </p>
<p><span class="math-container">$$X_c = (X|W=w) = (2w-Y_c) $$</span></p>
<p>Hence, <span class="math-container">$X_c$</span> should also be uniform from <span class="math-container">$[\max(2w-1,0),\min(2w,1)]$</span>.</p>
<hr>
<p>EDIT: Here is an argument for why <span class="math-container">$X_c$</span> will also be uniform over its domain.</p>
<p><span class="math-container">$$P(X=x_1|W=w) = \frac{P(X=x_1 \& W=w)}{P(W=w)} = \frac{P(X=x_1 \& Y=2w-x_1)}{P(W=w)}$$</span>
Since <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> are independent, this becomes:</p>
<p><span class="math-container">$$P(X=x_1|W=w) = \frac{P(X=x_1)P(Y=2w-x_1)}{P(W=w)}$$</span></p>
<p>If <span class="math-container">$2w-x_1 <0 $</span> or <span class="math-container">$2w-x_1>1$</span>, the density above becomes <span class="math-container">$0$</span>.</p>
<p>But otherwise, it will stay exactly the same if we replace <span class="math-container">$x_1$</span> by <span class="math-container">$x_2$</span> (such that both satisfy <span class="math-container">$0 \leq 2w-x \leq 1$</span>). Hence, <span class="math-container">$X_c$</span> and <span class="math-container">$Y_c$</span> should remain uniform over their valid domains.</p>
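<p>The claim is also easy to confirm by simulation: conditioning on <span class="math-container">$W$</span> falling in a thin band around <span class="math-container">$w=0.7$</span>, the retained <span class="math-container">$X$</span>-samples should fill <span class="math-container">$[\max(2w-1,0),\min(2w,1)]=[0.4,1]$</span> roughly uniformly (a rough Monte Carlo sketch, with an arbitrary seed and band width):</p>

```python
import random

random.seed(0)
w, tol = 0.7, 0.01                          # condition on W = (X + Y)/2 near w
lo, hi = max(2 * w - 1, 0), min(2 * w, 1)   # predicted support of X | W = w

samples = []
while len(samples) < 5000:
    x, y = random.random(), random.random()
    if abs((x + y) / 2 - w) < tol:
        samples.append(x)

print(round(min(samples), 3), round(max(samples), 3))  # close to 0.4 and 1.0
print(round(sum(samples) / len(samples), 3))           # close to (lo + hi)/2 = 0.7
```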
|
3,550,942 | <p>I am making a symbolic computation library which supports symbolic polynomials (both univariate and multivariate) and among other things I would like to support (possibly truncated) nth root (radical) computation of a (univariate or multivariate) polynomial since this computation is needed for some other things.</p>
<p>So lets say we have a polynomial <span class="math-container">$p(X) = \sum_{i}{a_{i}X^{i}}$</span> and I need an algorithm for <span class="math-container">$\sqrt[n]{p(X)}$</span>.</p>
<p>I have found <a href="https://math.stackexchange.com/questions/324385/algorithm-for-finding-the-square-root-of-a-polynomial">this post</a> on computing the square root of a polynomial, but I don't quite get it, and I would also like an algorithm for the nth root.</p>
<p>Library can compute all primitive operations of polynomials (both univariate and multivariate) on a general ring of coeffcients (ie add/sub/mul/div/mod/pow/derivative). (For reference <a href="https://github.com/foo123/Abacus" rel="nofollow noreferrer">library is on github</a>)</p>
<p>Please provide a step-by-step algorithm in your answer or a link to such algorithm so I can test it.</p>
<p>PS. I tried using a variant of <a href="https://en.wikipedia.org/wiki/Nth_root_algorithm" rel="nofollow noreferrer">Newton's method</a> for computing the nth root and adapted it to polynomial operations, but the result is completely off, not even close. </p>
<p>Note: If the polynomial is not a perfect nth power, then a truncated power series can be computed up to some limit (eg a truncated Taylor series) as an approximation. For example the <a href="https://planetmath.org/SquareRootOfPolynomial" rel="nofollow noreferrer">square root algorithm on PlanetMath</a> computes the Taylor series if the polynomial is not a perfect square. </p>
<p>If the polynomial is not a perfect nth power, or a power series approximation cannot be computed, it is fine for me if the algorithm throws an error.</p>
<p>Thank you.</p>
| Community | -1 | <p><strong>Suggestion:</strong></p>
<p>Write <span class="math-container">$p(x)$</span> as <span class="math-container">$p_0(1+x\,q(x))$</span> and compute</p>
<p><span class="math-container">$$\sqrt[n]{p(x)}=\sqrt[n]{p_0}\sum_{k=0}^d\binom{1/n}kx^k(q(x))^k.$$</span></p>
<p>You don't need to fully compute the powers of <span class="math-container">$q$</span>, you can stop at the highest power that does not exceed <span class="math-container">$d-k$</span>.</p>
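<p>As a sanity check, the formula above can be implemented directly in exact rational arithmetic (a minimal Python sketch; the helper names are ad hoc, and <span class="math-container">$p_0=1$</span> is assumed so that <span class="math-container">$\sqrt[n]{p_0}$</span> stays rational):</p>

```python
from fractions import Fraction

def pmul(a, b, d):
    """Multiply two coefficient lists (lowest degree first), truncated at degree d."""
    out = [Fraction(0)] * (d + 1)
    for i, ai in enumerate(a[:d + 1]):
        for j, bj in enumerate(b):
            if i + j > d:
                break
            out[i + j] += ai * bj
    return out

def nth_root_series(p, n, d):
    """Coefficients of p(x)^(1/n) up to degree d, assuming p[0] == 1,
    via sum_k binom(1/n, k) (x q(x))^k where p = 1 + x q(x)."""
    xq = [Fraction(c) for c in p]
    xq[0] = Fraction(0)                       # the x*q(x) part of p
    out = [Fraction(0)] * (d + 1)
    coef, power = Fraction(1), [Fraction(1)]  # binom(1/n, k) and (x q)^k
    for k in range(d + 1):
        for i, ci in enumerate(power[:d + 1]):
            out[i] += coef * ci
        coef *= (Fraction(1, n) - k) / (k + 1)
        power = pmul(power, xq, d)
    return out

print(nth_root_series([1, 1], 2, 5))     # 1, 1/2, -1/8, 1/16, -5/128, 7/256
print(nth_root_series([1, 2, 1], 2, 3))  # 1, 1, 0, 0 -- recovers the exact root 1 + x
```

Note how the series terminates (trailing zeros) exactly when the input is a perfect <span class="math-container">$n$</span>th power.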
|
3,550,942 | <p>I am making a symbolic computation library which supports symbolic polynomials (both univariate and multivariate) and among other things I would like to support (possibly truncated) nth root (radical) computation of a (univariate or multivariate) polynomial since this computation is needed for some other things.</p>
<p>So lets say we have a polynomial <span class="math-container">$p(X) = \sum_{i}{a_{i}X^{i}}$</span> and I need an algorithm for <span class="math-container">$\sqrt[n]{p(X)}$</span>.</p>
<p>I have found <a href="https://math.stackexchange.com/questions/324385/algorithm-for-finding-the-square-root-of-a-polynomial">this post</a> on computing the square root of a polynomial, but I don't quite get it, and I would also like an algorithm for the nth root.</p>
<p>Library can compute all primitive operations of polynomials (both univariate and multivariate) on a general ring of coeffcients (ie add/sub/mul/div/mod/pow/derivative). (For reference <a href="https://github.com/foo123/Abacus" rel="nofollow noreferrer">library is on github</a>)</p>
<p>Please provide a step-by-step algorithm in your answer or a link to such algorithm so I can test it.</p>
<p>PS. I tried using a variant of <a href="https://en.wikipedia.org/wiki/Nth_root_algorithm" rel="nofollow noreferrer">Newton's method</a> for computing the nth root and adapted it to polynomial operations, but the result is completely off, not even close. </p>
<p>Note: If the polynomial is not a perfect nth power, then a truncated power series can be computed up to some limit (eg a truncated Taylor series) as an approximation. For example the <a href="https://planetmath.org/SquareRootOfPolynomial" rel="nofollow noreferrer">square root algorithm on PlanetMath</a> computes the Taylor series if the polynomial is not a perfect square. </p>
<p>If the polynomial is not a perfect nth power, or a power series approximation cannot be computed, it is fine for me if the algorithm throws an error.</p>
<p>Thank you.</p>
| Nikos M. | 139,391 | <p>I managed to decipher the <strong>square root algorithm</strong> referenced in the question and <strong>generalise it to n-th root algorithm</strong>.</p>
<p><strong>Algorithm: Compute nth root/radical of polynomial <span class="math-container">$p(X) = \sum_{i}a_iX^i$</span></strong></p>
<p><em>Preliminary:</em> Arrange the polynomial from lowest degree term up to highest degree term, or if multivariate do the same according to some <a href="https://en.wikipedia.org/wiki/Monomial_order" rel="nofollow noreferrer">monomial ordering</a> (eg LEX). <span class="math-container">$LT(p(X))$</span> refers to the <strong>leading term</strong> according to the monomial order and <span class="math-container">$TT(p(X))$</span> refers to the <strong>tail / last term</strong> according to the monomial order. The algorithm works if <strong>LT</strong> is used instead of <strong>TT</strong> below, but with <strong>TT</strong> the power series approximation (if <span class="math-container">$p(X)$</span> is not a perfect nth power) is easier to compute up to any desired accuracy. <span class="math-container">$maxdeg(p(X))$</span> refers to the maximum power that occurs in <span class="math-container">$p(X)$</span> in any variable (if multivariate). <span class="math-container">$maxterms$</span> is a user-defined limit on the number of terms to be computed if a power series approximation is the result, eg up to <span class="math-container">$6$</span> terms of the Taylor series expansion (if possible).</p>
<ul>
<li>Check: If <span class="math-container">$p(X)=0$</span> or <span class="math-container">$p(X)=1$</span> return <span class="math-container">$p(X)$</span></li>
<li>Init: <span class="math-container">$i \leftarrow 0, nterms \leftarrow 0, \text{ } r_{i}(X) \leftarrow \sqrt[n]{TT(p(X))}$</span></li>
<li>Step 1: if <span class="math-container">$nterms \geq maxterms$</span> return <span class="math-container">$r_{i}(X)$</span> else go to Step 2</li>
<li>Step 2: <span class="math-container">$d_{i}(X) \leftarrow p(X)-r_{i}(X)^{n}$</span></li>
<li>Step 3: if <span class="math-container">$d_{i}(X)=0$</span> return <span class="math-container">$r_{i}(X)$</span> else go to Step 4</li>
<li>Step 4: <span class="math-container">$q_{i}(X) \leftarrow TT(d_{i}(X)) \text{ }/\text{ } TT(nr_{i}(X)^{n-1})$</span></li>
<li>Step 5: if <span class="math-container">$q_{i}(X)=0$</span> return <span class="math-container">$r_{i}(X)$</span> else go to Step 6</li>
<li>Step 6: <span class="math-container">$r_{i+1}(X) \leftarrow r_{i}(X) + q_{i}(X)$</span></li>
<li>Step 7: if <span class="math-container">$n \cdot maxdeg(r_{i+1}(X)) > maxdeg(p(X))$</span> this means that <span class="math-container">$p(X)$</span> is not a perfect nth power and that a power series approximation is being computed. <span class="math-container">$nterms \leftarrow nterms+1$</span></li>
<li>Step 8: set <span class="math-container">$i \leftarrow i+1$</span> and go to Step 1</li>
</ul>
<p><em>Optionally</em> one can normalise returned <span class="math-container">$r(X)$</span> to have positive leading term coefficient if <span class="math-container">$n$</span> is a multiple of <span class="math-container">$2$</span>, since in that case both <span class="math-container">$r(X)$</span> and <span class="math-container">$-r(X)$</span> are roots.</p>
<p><a href="https://github.com/foo123/Abacus" rel="nofollow noreferrer">My library</a> is updated on github to use the above algorithm for computing nth radicals of polynomials.</p>
<p>Above algorithm <strong>correctly passes the example tests</strong>:</p>
<p><span class="math-container">$p \in Q[x]$</span></p>
<p><span class="math-container">$\sqrt{x^2} = x$</span></p>
<p><span class="math-container">$\sqrt{(x^2)^2} = x^2$</span></p>
<p><span class="math-container">$\sqrt[5]{(x+1)^5} = x+1$</span></p>
<p><span class="math-container">$\sqrt{9x^4+6x^3-11x^2-4x+4} = -3x^2-x+2$</span></p>
<p><span class="math-container">$\sqrt{x+1}=\frac{7}{256}x^5-\frac{5}{128}x^4+\frac{1}{16}x^3-\frac{1}{8}x^2+\frac{1}{2}x+1$</span> (truncated Taylor series up to <code>maxterms</code>)</p>
<p><span class="math-container">$p \in Q[x, y]$</span></p>
<p><span class="math-container">$\sqrt{4x^2-12xy+9y^2} = -2x+3y$</span></p>
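<p>For reference, the steps above can be sketched in Python for the univariate case (coefficient lists are stored lowest degree first; the tail-root helper assumes the tail coefficient is a perfect <span class="math-container">$n$</span>th power of a rational, and all names are ad hoc):</p>

```python
from fractions import Fraction

def tt(p):
    """(degree, coefficient) of the tail term, or None if p = 0."""
    return next(((i, c) for i, c in enumerate(p) if c != 0), None)

def psub(a, b):
    get = lambda p, i: p[i] if i < len(p) else Fraction(0)
    return [get(a, i) - get(b, i) for i in range(max(len(a), len(b)))]

def ppow(p, n):
    out = [Fraction(1)]
    for _ in range(n):
        new = [Fraction(0)] * (len(out) + len(p) - 1)
        for i, ai in enumerate(out):
            for j, bj in enumerate(p):
                new[i + j] += ai * bj
        out = new
    return out

def root_of(c, n):
    """n-th root of a rational that is a perfect n-th power."""
    if c < 0:
        assert n % 2 == 1, "negative tail needs odd n"
        return -root_of(-c, n)
    num, den = round(c.numerator ** (1.0 / n)), round(c.denominator ** (1.0 / n))
    assert num ** n == c.numerator and den ** n == c.denominator
    return Fraction(num, den)

def nth_root(p, n, maxterms=8):
    p = [Fraction(c) for c in p]
    i0, c0 = tt(p)
    assert i0 % n == 0, "tail degree must be divisible by n"
    r = [Fraction(0)] * (i0 // n) + [root_of(c0, n)]     # Init step
    for _ in range(maxterms):
        d = psub(p, ppow(r, n))                          # Step 2
        t = tt(d)
        if t is None:
            return r                                     # Step 3: exact root
        j, cj = t
        k, ck = tt(ppow(r, n - 1))                       # Step 4
        q_deg, q_coef = j - k, cj / (n * ck)
        r += [Fraction(0)] * (q_deg + 1 - len(r))        # Step 6
        r[q_deg] += q_coef
    return r                    # Step 1 exit: truncated series approximation

# sqrt(9x^4 + 6x^3 - 11x^2 - 4x + 4), lowest degree first:
print(nth_root([4, -4, -11, 6, 9], 2))   # [2, -1, -3], i.e. -3x^2 - x + 2
print(nth_root([1, 1], 2, maxterms=5))   # truncated sqrt(1 + x): 1, 1/2, -1/8, ...
```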
|
4,586,527 | <p>Consider the following problem.</p>
<p><a href="https://i.stack.imgur.com/uzgHX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uzgHX.png" alt="enter image description here" /></a></p>
<p>I'm fairly new to probability theory, but have some experience with combinatorics. For that reason, after failing with a probabilistic approach, I approached the problem from a combinatorial perspective.</p>
<p>Firstly, notice there are <span class="math-container">$4!$</span> ways to arrange the four components. We do not care for permutations in the functioning components and therefore we will only consider <span class="math-container">$\frac{4!}{2!}=12$</span> orderings. Out of those <span class="math-container">$12$</span> orderings only two start with defective components. Then <span class="math-container">$P(Y=2)=\frac{2}{12}=\frac{1}{6}$</span>.</p>
<p>Similarly, one arrives at the conclusion that <span class="math-container">$P(Y=3)=\frac{1}{3}, P(Y=4)=\frac{1}{2}$</span>, and the problem seems to be solved.</p>
<p>However, I was still curious as to how one would formulate the problem in terms closer to probability theory. I attempted the following formulation, but I'm still unsure of whether it is formally correct.</p>
<hr />
<p>Let <span class="math-container">$S$</span> be the sample space where each <span class="math-container">$E\in S$</span> is a specific ordering of the components. Then there are <span class="math-container">$|S|=4!=24$</span> possible sample points. Let <span class="math-container">$A_i, B_j$</span> be the events that <span class="math-container">$A$</span> is the <span class="math-container">$i$</span>th component, <span class="math-container">$B$</span> the <span class="math-container">$j$</span>th component, respectively. Then it is clear that</p>
<p><span class="math-container">$$\begin{align} P(Y=2)&=P\Big((A_1 \cap B_2 ) \cup (A_2 \cap B_1)\Big) \\ & =P(A_1 \cap B_2) +P(A_2 \cap B_1) \\ &=\frac{1}{4}\frac{1}{3}+\frac{1}{4}\frac{1}{3} \\ &=\frac{1}{6} \end{align}$$</span></p>
<p>For <span class="math-container">$P(Y=3) $</span> the formulation is equivalent:</p>
<p><span class="math-container">$$\begin{align} P(Y=3)&=P\Big((A_1 \cap B_3) \cup (A_3 \cap B_1) \cup (A_2 \cap B_3) \cup (A_3 \cap B_2)\Big)\end{align}$$</span></p>
<p>which under the exact same logic gives <span class="math-container">$P(Y=3)=\frac{1}{3}$</span>. In the same manner, <span class="math-container">$P(Y=4)=\frac{1}{2}$</span>.</p>
<p>It is clear the results match. What I want to be sure about is whether my formulation in terms of probability theory is formally sound and correct, since I'm barely buildnig a basic understanding on the subject.</p>
| user2661923 | 464,411 | <p>Shortcut.</p>
<p>With a sample space of size <span class="math-container">$~\displaystyle \binom{4}{2}~$</span> there is only one way (out of <span class="math-container">$6$</span>) that exactly <span class="math-container">$2$</span> tests are made.</p>
<p>There is a <span class="math-container">$~\displaystyle \frac{2}{4} = \frac{1}{2}~$</span> probability that the <span class="math-container">$4$</span>th component is bad. Exactly <span class="math-container">$4$</span> tests will be required if and only if the <span class="math-container">$4$</span>th component is bad.</p>
<p>Then the probability that exactly <span class="math-container">$3$</span> tests are needed must be</p>
<p><span class="math-container">$$1 - \left[\frac{1}{6} + \frac{1}{2}\right].$$</span></p>
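<p>The three probabilities are also easy to confirm by brute force, reading <span class="math-container">$Y$</span> off as the position of the second defective (a small Python sketch, assuming that interpretation of the problem):</p>

```python
from itertools import permutations

counts = {}
for perm in permutations("GGBB"):       # orderings of 2 good, 2 defective components
    # Y = test number on which the second defective turns up
    y = max(i for i, c in enumerate(perm, start=1) if c == "B")
    counts[y] = counts.get(y, 0) + 1

total = sum(counts.values())            # 4! = 24 equally likely orderings
for y in sorted(counts):
    print(y, counts[y], "/", total)     # 2: 4/24 = 1/6, 3: 8/24 = 1/3, 4: 12/24 = 1/2
```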
|
15,237 | <p><a href="https://matheducators.stackexchange.com/questions/176/knowing-mathematics-does-not-translate-to-knowing-to-teach-mathematics-why">A question</a> has been asked about why great mathematicians are not necessarily great teachers. On the other hand, I am wondering if knowing more mathematics actually helps with one's teaching of lower level courses in mathematics. For example, I believe that a good student with bachelor's degree in mathematics should have sufficient knowledge to teach calculus. However, how does having a master's or doctorate degree help one's teaching in calculus, if any at all?</p>
<p>I am teaching calculus now and I do not understand commutative algebra; I took a course on commutative algebra a long time ago; I did poorly in the course and now I can hardly recall anything from it. If I invest a substantial amount of time studying this subject well now, will it help me in my calculus course in any sense?</p>
| kcrisman | 1,608 | <p>I will speak purely anecdotally.</p>
<blockquote>
<p>However, how does having a master's or doctorate degree help one's teaching in calculus, if any at all?</p>
</blockquote>
<p>It isn't the degree per se that helps, but rather the process of having to learn, relearn, and reformulate gargantuan amounts of mathematics of all types that helps. Do I use either my PhD research or my current research in teaching non-major freshman calculus? Not at all. (I do get to mention my current research in linear algebra, so it is possible, of course.) But what I do use on a daily basis is the practice that comes from intense advanced study in seeing <em>every side of a mathematical problem</em>. </p>
<p>To give just one example, when teaching Riemann sums (often without using this phrase), one can just do left and right hand sums, or maybe trapezoid or midpoint if you are ambitious. Fine. What happens when a student decides to pick some left and some right hand endpoints? What happens when a student wants to pick only integer points? What about the student who insists that on the interval <span class="math-container">$[0,2]$</span> with <span class="math-container">$n=2$</span> you should use <span class="math-container">$f(0),f(1),f(2)$</span> with equal weight? Answering these questions beyond "the book says that is wrong" requires experience, and the ability to try to see what the many possibilities are from all sides.</p>
<p>Now, naturally you can obtain this experience in a multitude of ways. I have colleagues in other departments without a PhD whose deep understanding of the material they teach is unparalleled. And getting a doctorate doesn't automatically mean you magically look at calculus (or lower-level courses - I find that precalc is actually conceptually a bigger target because it roams so far and wide) better. </p>
<p>But, on the whole, I find that significant advanced study in mathematics makes me a <em>much</em> better calculus teacher than I would have been otherwise. Don't worry about commutative algebra; do worry about finding ways to stretch yourself mathematically early and often - whether through formal means or informal.</p>
<hr>
<p>Can't resist:</p>
<blockquote>
<p>If I invest substantial among of time studying [commutative algebra] well now, will it help me in my calculus course in any sense?</p>
</blockquote>
<p>If you want to talk commutative algebra and calculus, take aside the two students who really want to do deep AI/machine learning/big data/whatever, and tell them about dual numbers, <a href="https://en.wikipedia.org/wiki/Automatic_differentiation#Automatic_differentiation_using_dual_numbers" rel="noreferrer">automatic differentiation</a>, and <a href="https://math.stackexchange.com/questions/61763/dual-numbers-and-tangent-vectors">tangents</a>. Hard to find parts of math that don't influence each other in some way - but of course you don't tell this story to 99% of students! Anyway, the point isn't the specific math content, it's the process.</p>
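<p>To give a flavor of that dual-number remark: the whole trick fits in a few lines (a toy Python sketch, supporting only <code>+</code> and <code>*</code>):</p>

```python
class Dual:
    """Numbers a + b*eps with eps^2 = 0; the b slot carries the derivative along."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # (a + b eps)(c + d eps) = ac + (ad + bc) eps -- the product rule for free
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    __rmul__ = __mul__

def f(x):
    return x * x * x + 2 * x             # f(x) = x^3 + 2x

y = f(Dual(2.0, 1.0))                    # seed derivative 1 at x = 2
print(y.a, y.b)                          # 12.0 14.0, i.e. f(2) and f'(2) = 3*4 + 2
```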
|
285,227 | <p>I am trying to prove $\exp(x+y) = \exp(x) \exp(y)$.</p>
<p>I may use that $$\exp(x) = \sum_{n=0}^\infty \frac {x^n}{n!}$$
I further know how to multiply two power series at one point, i.e. if $f(x) = \sum_{n=0}^\infty c_n(x-a)^n$ and $g(x) = \sum_{n=0}^\infty d_n(x-a)^n$ then
$$
f(x)g(x) = \sum_{n=0}^\infty e_n(x-a)^n
$$
with
$$
e_n = \sum_{m=0}^n c_md_{n-m}
$$</p>
| pre-kidney | 34,662 | <p>\begin{align} \exp(x+y)&=\sum_n\frac{(x+y)^n}{n!} \\\\ &=\sum_{n}\frac{1}{n!}\sum_{a+b=n} {n \choose a} x^ay^b \\\\
&= \sum_{n}\frac{n!}{n!}\sum_{a+b=n}\frac{x^a}{a!}\frac{y^b}{b!} \\\\
&= \sum_{a,b} \frac{x^a}{a!}\frac{y^b}{b!} \\\\
&= \exp(x)\cdot\exp(y)
\end{align}</p>
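<p>The identity is easy to sanity-check numerically with truncated partial sums of the defining series (a quick Python sketch):</p>

```python
from math import factorial, isclose

def exp_series(x, terms=30):
    """Partial sum of sum_n x^n / n!"""
    return sum(x ** n / factorial(n) for n in range(terms))

x, y = 0.7, 1.3
lhs, rhs = exp_series(x + y), exp_series(x) * exp_series(y)
print(lhs, rhs, isclose(lhs, rhs))   # both approximate e^2; isclose gives True
```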
|
285,227 | <p>I am trying to prove $\exp(x+y) = \exp(x) \exp(y)$.</p>
<p>I may use that $$\exp(x) = \sum_{n=0}^\infty \frac {x^n}{n!}$$
I further know how to multiply two power series at one point, i.e. if $f(x) = \sum_{n=0}^\infty c_n(x-a)^n$ and $g(x) = \sum_{n=0}^\infty d_n(x-a)^n$ then
$$
f(x)g(x) = \sum_{n=0}^\infty e_n(x-a)^n
$$
with
$$
e_n = \sum_{m=0}^n c_md_{n-m}
$$</p>
| kalalele | 460,776 | <p>Consider the function $\frac{\exp(x+y)}{\exp(x)}$, where $y$ is some constant. Differentiate with respect to $x$ and find that the derivative is zero. Therefore $\frac{\exp(x+y)}{\exp(x)}=c\in\mathbb{R}$. Setting $x=0$ and using the fact that $\exp(0)=1$, we get $\exp(y)=c$, which proves what was asked.</p>
|
3,395,910 | <p>I understand how to apply the trapezoidal rule to approximate the area under a curve.</p>
<p>But I'm not sure how to apply it when approximating areas between two functions. </p>
<ul>
<li>Do you use the formula like how you normally would, except apply it to the <strong>first function - the second function</strong>?</li>
</ul>
<p>Or is there a completely different approach?</p>
<p>Thanks!</p>
| Fred | 380,717 | <p>Let <span class="math-container">$(x_n)$</span> be a convergent sequence in <span class="math-container">$T$</span> with limit <span class="math-container">$x_0$</span>. You have to show that <span class="math-container">$x_0 \in T$</span>.</p>
<p>To this end use</p>
<ol>
<li><p><span class="math-container">$f(x_n) =g(x_n)$</span> for all <span class="math-container">$n$</span>,</p></li>
<li><p><span class="math-container">$x_0 \in [a,b],$</span></p></li>
<li><p><span class="math-container">$f(x_n) \to f(x_0)$</span> and <span class="math-container">$g(x_n) \to g(x_0).$</span></p></li>
</ol>
<p>Can you proceed ?</p>
|
3,395,910 | <p>I understand how to apply the trapezoidal rule to approximate the area under a curve.</p>
<p>But I'm not sure how to apply it when approximating areas between two functions. </p>
<ul>
<li>Do you use the formula like how you normally would, except apply it to the <strong>first function - the second function</strong>?</li>
</ul>
<p>Or is there a completely different approach?</p>
<p>Thanks!</p>
| Peter Szilas | 408,605 | <p><span class="math-container">$h(x):=f(x)-g(x)$</span>, <span class="math-container">$h$</span> is continuous.</p>
<p><span class="math-container">$T=\{x \mid h(x)=0\}$</span>.</p>
<p>Then <span class="math-container">$T= h^{-1}(\{0\})$</span> is closed, being the inverse image of the closed set <span class="math-container">$\{0\}$</span>.</p>
|
3,756,649 | <p>If <span class="math-container">$f:\mathbb{R}\to \mathbb{R}$</span> such that <span class="math-container">$\lim\limits_{x \to \infty}xf(x)=L$</span>. Then <span class="math-container">$\lim\limits_{x \to \infty}f(x)=0$</span>.</p>
<p>My proof is as follows:</p>
<p>Let <span class="math-container">$g(x)=xf(x)$</span>. Fix an <span class="math-container">$\epsilon>0$</span>, then by definition, there exists a <span class="math-container">$\delta>0$</span> such that <span class="math-container">$$x>\delta \implies |g(x)-L|<\epsilon$$</span> Then <span class="math-container">$$|f(x)|=\frac{|g(x)|}{|x|}\le \frac{L+\epsilon}{\delta}$$</span> claiming the result.</p>
<p>Are there any flaws in my argument?</p>
| Moe Sarah | 787,944 | <p><span class="math-container">$\lim_{x\rightarrow \infty} f(x)=L\in \mathbb{R}$</span> if and only if</p>
<p>for all <span class="math-container">$\epsilon>0$</span> there exists <span class="math-container">$\delta>0$</span> such that for all <span class="math-container">$x>\delta$</span>, <span class="math-container">$|f(x)-L|<\epsilon$</span></p>
<p>Let us prove your problem.</p>
<p>Supposing <span class="math-container">$\lim_{x\rightarrow \infty}xf(x)=L$</span>, we wish to show that <span class="math-container">$\lim_{x\rightarrow \infty}f(x)=0$</span>.</p>
<p>To this end, let <span class="math-container">$\epsilon>0$</span> be arbitrary. As <span class="math-container">$\lim_{x\rightarrow \infty}xf(x)=L$</span>, there exists <span class="math-container">$\delta_1>0$</span> such that for all <span class="math-container">$x>\delta_1$</span>, <span class="math-container">$|xf(x)-L|<\epsilon$</span>.</p>
<p>Hence, for <span class="math-container">$\delta>\max(\delta_1,\frac{\epsilon+|L|}{\epsilon})$</span>, whenever <span class="math-container">$x>\delta$</span>, we get</p>
<p><span class="math-container">$|f(x)|=\left|\frac{xf(x)}{x}\right|=\frac{1}{x}|xf(x)-L+L|\leq \frac{1}{x}|xf(x)-L|+\frac{1}{x}|L|\leq \frac{1}{\delta}(\epsilon+|L|)\leq \epsilon$</span></p>
|
7,268 | <p>I'm a private tutor working with a 7th grader who is struggling with solving equations. Given a simple equation, he is able to solve it using a formulaic procedure, but it is very obvious that he has no idea what the solution really means. Hence, if he gets a problem that's slightly different from ones he's solved before, he's completely lost. </p>
<p>Working with him yesterday, I realized that he doesn't understand what a variable is, or what he's doing when he's solving an equation -- he's just following the steps his teacher told him for that specific problem. </p>
<p>I'm trying to think of a way to visually demonstrate what's happening when he's solving an equation -- something visual that he can see. Kind of an algebraic equivalent to putting four coins on the table and adding one more to show 4 + 1 = 5. </p>
<p>Any ideas? Thanks!
-Ian </p>
| Aeryk | 401 | <p>I don't know how feasible this is or what kind of resources you have, but try finding a balance scale, some standard weights, and a collection of identical objects. You can set up the scale to represent a problem and ask the student to determine the unknown weight. The only rule is that the scale can never become unbalanced in the process (i.e. what they do to one side they must do to the other).</p>
<p>For example, to demonstrate solving $3x+6=5x+2$ start with 3 identical objects and 6 unit weights on one side and 5 of the same identical objects and 2 unit weights on the other. </p>
<p>Then the solution is: Remove two units from each side. Remove 3 objects from each side. Split each side into two equal groups and remove a group from each side. This should leave one object on the right and 2 unit weights on the left.</p>
<p>Of course, to get the scale to balance out from the beginning, you, the teacher, will have to know the weight of the objects. And you'll have to watch out for fractions and negatives.</p>
|
7,268 | <p>I'm a private tutor working with a 7th grader who is struggling with solving equations. Given a simple equation, he is able to solve it using a formulaic procedure, but it is very obvious that he has no idea what the solution really means. Hence, if he gets a problem that's slightly different from ones he's solved before, he's completely lost. </p>
<p>Working with him yesterday, I realized that he doesn't understand what a variable is, or what he's doing when he's solving an equation -- he's just following the steps his teacher told him for that specific problem. </p>
<p>I'm trying to think of a way to visually demonstrate what's happening when he's solving an equation -- something visual that he can see. Kind of an algebraic equivalent to putting four coins on the table and adding one more to show 4 + 1 = 5. </p>
<p>Any ideas? Thanks!
-Ian </p>
| Joseph O'Rourke | 511 | <p>You may find this <a href="http://myplace.frontier.com/~paulgriffith2/mathmodels/math_models.shtml" rel="nofollow noreferrer">Mathematics Models I webpage</a> by Paul Griffith useful, although it targets a lower educational level:
<hr />
<img src="https://i.stack.imgur.com/z5eZi.gif" alt="Factors">
<hr />
Another source is <a href="http://www.soesd.k12.or.us/Page.asp?NavID=1264" rel="nofollow noreferrer">this Oregon web page with many links</a>, including:
<hr />
<img src="https://i.stack.imgur.com/BabtG.png" alt="AlgVis"></p>
<p><hr />
<em>Caveat</em>: I have not used any of these materials in the classroom myself.</p>
|
3,080,005 | <p>So, I was trying to understand the "Group action" theory. I read the definition of <span class="math-container">$Stab_G$</span> and I was trying to solve some basic questions.</p>
<p>I came across with the following question: </p>
<blockquote>
<p>Let <span class="math-container">$S_7$</span> be a group that acts on itself by <span class="math-container">$x\cdot y = xyx^{-1}$</span>. Calculate <span class="math-container">$|Stab_{S_7}((1 \ 2 \ 3)(4 \ 5 \ 6))|$</span>.</p>
</blockquote>
<p>Firstly, I don't understand what does "on itself by <span class="math-container">$x\cdot y = xyx^{-1}$</span>" means. Secondly I would like to see how to calculate it formally so I can calculate other sections of the question.</p>
| Rylee Lyman | 447,318 | <p>One way to proceed is to try to improve our understanding of the action. For instance, if we write <span class="math-container">$y = (1\ 2\ 3)(4\ 5\ 6)$</span>, what is <span class="math-container">$(x\cdot y)(x(1))$</span>, the image of <span class="math-container">$x(1)$</span> under <span class="math-container">$x\cdot y$</span>? how about <span class="math-container">$x(2)$</span>? In general, you should likely have seen some theorem about what conjugation does to a cycle in <span class="math-container">$S_n$</span>.</p>
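<p>As a sanity check (not a substitute for the cycle-type argument), the stabilizer here is the centralizer of <span class="math-container">$(1\ 2\ 3)(4\ 5\ 6)$</span> in <span class="math-container">$S_7$</span>, and <span class="math-container">$7! = 5040$</span> is small enough to count by brute force; the 0-indexed encoding below is a choice made for this sketch:</p>

```python
from itertools import permutations

# (1 2 3)(4 5 6) written 0-indexed as a mapping i -> y[i]: 0->1->2->0, 3->4->5->3, 6 fixed
y = (1, 2, 0, 4, 5, 3, 6)

# x stabilizes y under conjugation iff x y x^{-1} = y, i.e. x and y commute
stab = sum(1 for x in permutations(range(7))
           if all(x[y[i]] == y[x[i]] for i in range(7)))
print(stab)  # 18, matching the centralizer formula 3^2 * 2! * 1 for cycle type (3,3,1)
```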
|
342,064 | <p>We know that if an estimator, say $\widehat{\theta}$, is an unbiased estimator of $\theta$ and if its variance tends to 0 as n tends to infinity then it is a consistent estimator for $\theta$. But this is a sufficient and not a necessary condition. I am looking for an example of an estimator which is consistent but whose variance does not tend to 0 as n tends to infinity. Any suggestions? Thank you in advance for your time.</p>
| KalEl | 1,310 | <p>If the parameter space has only one element, then it's kinda boring. To make it a little more interesting, let's say parameters space is $\Theta=\{0,1\}$. And the random variable $X$ is normally distributed with mean $\theta$, i.e. $$X \sim N(\theta,1)$$</p>
<p>The natural estimate to consider is $t_n=\frac{\sum x_n}{n}$. It is consistent, but unfortunately also has $V(t_n)\rightarrow 0$.</p>
<p>But if we perturb it a lot, albeit with a small chance, say</p>
<p>$$t_n'=\frac{\sum^{n-1} x_i}{n-1}+\mathbb{1}_{x_n\le\Phi^{-1}(1/n^2)}\times n^{100}$$</p>
<p>then it will still be consistent, as the chance of making the error is only $1/n^2$ (for $\theta=0$) or less (for $\theta=1$), but its variance $V(t_n')$ clearly diverges: when we do make the error, it is so large that the $1/n^2$ probability of the error cannot compensate for it, at least when $\theta=0$.</p>
<p><em>That's the basic idea, but it probably won't work for $\theta=1$. We can cover that case by fabricating another example which works for it, and alternating between the two... as below.</em></p>
<p>To tackle $\theta=1$, we can fabricate another $t_n'$ -
$$t_n''=\frac{\sum^{n-1} x_i}{n-1}+\mathbb{1}_{x_n\gt 1+\Phi^{-1}(1-1/n^2)}\times n^{100}$$</p>
<p>To combine, we can create $t_n'''$ by choosing $t_n'$ when $n$ is odd, and $t_n''$ when $n$ is even as the estimate. It will converge in probability to the real $\theta$, but will have infinite variance.</p>
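<p>The two competing magnitudes in this construction can be tabulated exactly (a sketch; exact integers are used because <span class="math-container">$n^{100}$</span> overflows floating point):</p>

```python
# For theta = 0: the perturbation fires with probability 1/n^2 -> 0 (consistency),
# but contributes about (n^100)^2 * (1/n^2) = n^198 to the second moment of t_n'.
for n in (10, 100, 1000):
    second_moment_term = n**198          # exact integer arithmetic, no overflow
    print(f"n={n}: P(error) = 1/{n**2}, "
          f"variance contribution has {len(str(second_moment_term))} digits")
```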
|
4,374,307 | <p>Problem:<br />
Suppose there are <span class="math-container">$7$</span> chairs in a row. There are <span class="math-container">$6$</span> people that are going to randomly
sit in the chairs. There are <span class="math-container">$3$</span> females and <span class="math-container">$3$</span> males. What is the probability that
the first and last chairs have females sitting in them?</p>
<p>Answer:<br />
Let <span class="math-container">$p$</span> be the probability we seek. Out of <span class="math-container">$3$</span> females, only <span class="math-container">$2$</span> can be sitting at the end of the row. I consider the first and last chairs to be at the end of the row.
<span class="math-container">\begin{align*}
p &= \dfrac{ {3 \choose 2 } 3(2) (4)(3)(2) } { 7(6)(5)(4)(3)(2) } \\
p &= \dfrac{ 3(3)(2) (4)(3)(2) } { 7(6)(5)(4)(3)(2) } \\
p &= \dfrac{ 3(4)(3)(2) } { 7(5)(4)(3)(2) } = \dfrac{ 3(3)(2) } { 7(5)(3)(2) } \\
p &= \dfrac{ 18 } { 35(3)(2) } \\
p &= \dfrac{ 3 } { 35 }
\end{align*}</span>
Am I right?
Here is an updated solution.</p>
<p>Let <span class="math-container">$p$</span> be the probability we seek. Out of <span class="math-container">$3$</span> females, only <span class="math-container">$2$</span> can be sitting at the end of the row. I consider the first and last chairs to be at the end of the row.
<span class="math-container">\begin{align*}
p &= \dfrac{ 3(2) (5)(4)(3)(2) } { 7(6)(5)(4)(3)(2) } \\
p &= \dfrac{ (5)(4)(3)(2) } { 7(5)(4)(3)(2) } \\
p &= \dfrac{1}{7}
\end{align*}</span>
Now is my answer right?</p>
| Math Lover | 801,574 | <p>Here is how I would think about it. There are seven of them - three male, three female and an empty chair (say, a ghost). In other words, for any given chair, there are seven equally likely possibilities.</p>
<p>So, <span class="math-container">$ \displaystyle P = {3 \choose 2} / {7 \choose 2} = \frac 17$</span></p>
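<p>A quick Monte Carlo check of the ghost argument (a sketch; the trial count and seat labels are arbitrary):</p>

```python
import random

people = ["F", "F", "F", "M", "M", "M", "ghost"]  # the ghost marks the empty chair
trials, hits = 200_000, 0
for _ in range(trials):
    random.shuffle(people)
    if people[0] == "F" and people[6] == "F":     # females in the first and last chairs
        hits += 1
print(hits / trials)  # close to 1/7 ~ 0.1429
```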
|
1,521,518 | <p>Determine if the given vectors span $\mathbb{R}^4$</p>
<p>${(1, 1, 1, 1), (0, 1, 1, 1), (0, 0, 1, 1), (0, 0, 0, 1)}$.</p>
<p>I'm completely confused on this question. My textbook gives a different problem but in $\mathbb{R}^3$. How would i go about this?</p>
| MrMazgari | 284,607 | <p>You want to find constants $a,b,c,d \in \mathbb{R}$ such that every vector $(x,y,z,w) \in \mathbb{R}^4$ can be written as a linear combination of the given vectors. That is, \begin{align}(x,y,z,w)&=a(1,1,1,1)+b(0,1,1,1)+c(0,0,1,1)+d(0,0,0,1)\\
&=(a,a+b,a+b+c,a+b+c+d).
\end{align}
Equating components, we have: $$x=a, \quad y=a+b, \quad z=a+b+c, \quad w=a+b+c+d.$$</p>
<p>Hence, $$a=x, \quad b=y-x, \quad c=z-y, \quad d=w-z.$$</p>
<p>Therefore, the given list of vectors spans $\mathbb{R}^4$ (since the vector $(x,y,z,w)$ was arbitrary).</p>
<p>Alternatively, you can check to see that the given vectors are linearly independent, and then, since there are $\dim(\mathbb{R}^4)=4$ of them, they must be a basis for $\mathbb{R}^4$. They therefore span $\mathbb{R}^4$.</p>
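<p>Both checks can be carried out numerically (a sketch; the target vector is an arbitrary example):</p>

```python
import numpy as np

V = np.array([[1, 1, 1, 1],   # the four given vectors, one per row
              [0, 1, 1, 1],
              [0, 0, 1, 1],
              [0, 0, 0, 1]], dtype=float)

# They span R^4 iff the matrix has full rank 4 (equivalently, they are independent).
print(np.linalg.matrix_rank(V))              # 4

# Coefficients for an arbitrary (x, y, z, w), here (2, 5, 7, 11):
coeffs = np.linalg.solve(V.T, [2, 5, 7, 11])  # columns of V.T are the given vectors
print(coeffs)                                 # [2. 3. 2. 4.] = (x, y-x, z-y, w-z)
```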
|
1,559,906 | <p>Is there an equivalent in mathematical language to the modulo (or <code>mod</code>) function in computing? </p>
| Archis Welankar | 275,884 | <p>The symbol, in mathematics as well as in many programming languages, is <em>mod</em>; it denotes the remainder after division (modular arithmetic). E.g. $15 \bmod 2 = 1$.</p>
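<p>In most programming languages the operator is <code>%</code>; for instance, in Python:</p>

```python
print(15 % 2)   # 1: the remainder when 15 is divided by 2
print(15 % 4)   # 3
print(-7 % 3)   # 2: Python's result takes the sign of the divisor (C would give -1)
```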
|
3,367,672 | <p>Assume <span class="math-container">$a_n$</span> is a non-negative sequence, that is, <span class="math-container">$a_n \geq 0$</span> for every <span class="math-container">$n \in \mathbb{N}$</span>. Prove that if <span class="math-container">$a_n$</span> converges then the limit is non-negative. Clue: prove it by negation.</p>
<p>I think proof by contradiction is a good method, which is <span class="math-container">$a_n$</span> converges and the limit is negative. Since <span class="math-container">$a_n$</span> converges, <span class="math-container">$\lim_{n\to\infty}a_n = L$</span>. Also, since <span class="math-container">$a_n$</span> is non-negative sequence, <span class="math-container">$L \geq 0$</span>. This is a contradiction. Is my method correct? </p>
| Kavi Rama Murthy | 142,385 | <p>You have assumed what you are asked to prove.</p>
<p>If possible let <span class="math-container">$a_n \to L$</span> with <span class="math-container">$L <0$</span>. Take <span class="math-container">$\epsilon =-L$</span>. Then there exists <span class="math-container">$n_0$</span> such that <span class="math-container">$|a_n-L | <\epsilon =-L$</span> for <span class="math-container">$n >n_0$</span>. But then <span class="math-container">$a_n-L \leq |a_n-L|<(-L)$</span>, so <span class="math-container">$a_n <0$</span> for <span class="math-container">$n>n_0$</span>, which is a contradiction. </p>
|
3,367,672 | <p>Assume <span class="math-container">$a_n$</span> is a non-negative sequence, that is, <span class="math-container">$a_n \geq 0$</span> for every <span class="math-container">$n \in \mathbb{N}$</span>. Prove that if <span class="math-container">$a_n$</span> converges then the limit is non-negative. Clue: prove it by negation.</p>
<p>I think proof by contradiction is a good method, which is <span class="math-container">$a_n$</span> converges and the limit is negative. Since <span class="math-container">$a_n$</span> converges, <span class="math-container">$\lim_{n\to\infty}a_n = L$</span>. Also, since <span class="math-container">$a_n$</span> is non-negative sequence, <span class="math-container">$L \geq 0$</span>. This is a contradiction. Is my method correct? </p>
| Nicholas Roberts | 245,491 | <p>You seemed to have used the result you are trying to prove in your proof! This is not allowed in mathematics. </p>
<p>The clue suggesting contradiction seems to be a good approach. Indeed, suppose <span class="math-container">$a_n \longrightarrow L$</span> where <span class="math-container">$L < 0$</span>. Then, there is an <span class="math-container">$\epsilon > 0$</span> such that <span class="math-container">$-\epsilon = L$</span> (just take <span class="math-container">$\epsilon = -L$</span>). Then, since the sequence converges, we are guaranteed a natural number <span class="math-container">$N$</span> such that <span class="math-container">$n \geq N$</span> implies <span class="math-container">$$|a_n - L| < \epsilon$$</span>
Unpacking the definition of the absolute value gives us:
<span class="math-container">$$-\epsilon < a_n - L < \epsilon $$</span>
Now recall that <span class="math-container">$-L = \epsilon$</span>, which gives:
<span class="math-container">$$-\epsilon < a_n + \epsilon < \epsilon$$</span>
Subtracting <span class="math-container">$\epsilon$</span> from both sides of the above inequality yields:
<span class="math-container">$$-2\epsilon < a_n < 0$$</span>
This contradicts the fact that <span class="math-container">$a_n \geq 0$</span> for all <span class="math-container">$n \in \mathbb{N}$</span>, so we are done.</p>
|
3,995,272 | <p>I am a master student in mathematical physics. I study soliton and traveling wave solutions of the differential equations.</p>
<p>Let's consider the following ODE:
<span class="math-container">$$Q^{\prime}(\xi)=\ln(A)(\alpha+\beta Q(\xi)+\sigma Q^2(\xi))$$</span>
where
<span class="math-container">$A \neq 0,1.$</span></p>
<p>In a book, the solutions of the ODE are given as follows: <strong>(But, I don't understand how to derive it.)</strong></p>
<p>There are twelve solution cases w.r.t coefficients of ODE.
Any help would be appreciated.</p>
<p><strong>CASE I)</strong> When <span class="math-container">$\beta^{2}-4 \alpha \sigma<0$</span> and <span class="math-container">$\sigma \neq 0$</span>, then
<span class="math-container">$$
\begin{array}{l}
Q_{1}(\xi)=-\frac{\beta}{2 \sigma}+\frac{\sqrt{-\left(\beta^{2}-4 \alpha \sigma\right)}}{2 \sigma} \tan _{A}\left(\frac{\sqrt{-\left(\beta^{2}-4 \alpha \sigma\right)}}{2} \xi\right) \\
Q_{2}(\xi)=-\frac{\beta}{2 \sigma}-\frac{\sqrt{-\left(\beta^{2}-4 \alpha \sigma\right)}}{2 \sigma} \cot _{A}\left(\frac{\sqrt{-\left(\beta^{2}-4 \alpha \sigma\right)}}{2} \xi\right) \\
Q_{3}(\xi)=-\frac{\beta}{2 \sigma}+\frac{\sqrt{-\left(\beta^{2}-4 \alpha \sigma\right)}}{2 \sigma}\left(\tan _{A}\left(\sqrt{-\left(\beta^{2}-4 \alpha \sigma\right)} \xi\right) \pm \sqrt{p q} \sec _{A}\left(\sqrt{-\left(\beta^{2}-4 \alpha \sigma\right)} \xi\right)\right) \\
Q_{4}(\xi)=-\frac{\beta}{2 \sigma}-\frac{\sqrt{-\left(\beta^{2}-4 \alpha \sigma\right)}}{2 \sigma}\left(\cot _{A}\left(\sqrt{-\left(\beta^{2}-4 \alpha \sigma\right)} \xi\right) \pm \sqrt{p q} \csc _{A}\left(\sqrt{-\left(\beta^{2}-4 \alpha \sigma\right)} \xi\right)\right) \\
Q_{5}(\xi)=-\frac{\beta}{2 \sigma}+\frac{\sqrt{-\left(\beta^{2}-4 \alpha \sigma\right)}}{4 \sigma}\left(\tan _{A}\left(\frac{\sqrt{-\left(\beta^{2}-4 \alpha \sigma\right)}}{4} \xi\right)-\cot _{A}\left(\frac{\sqrt{-\left(\beta^{2}-4 \alpha \sigma\right)}}{4} \xi\right)\right)
\end{array}
$$</span></p>
<p><strong>CASE II:</strong></p>
<p><span class="math-container">$\vdots$</span></p>
<p><strong>CASE XII:</strong>
When <span class="math-container">$\beta=\lambda, \sigma=m \lambda(m \neq 0)$</span> and <span class="math-container">$\alpha=0,$</span> then
<span class="math-container">$$
Q_{37}(\xi)=\frac{p A^{\lambda \xi}}{q-m p A^{\lambda \xi}}
$$</span>
where triangular functions are defined as
<span class="math-container">\begin{array}{l}
\sin _{A}(\xi)=\frac{p A^{i \xi}-q A^{-i \xi}}{2 i}, \quad \cos _{A}(\xi)=\frac{p A^{i \xi}+q A^{-i \xi}}{2} \\
\tan _{A}(\xi)=-i \frac{p A^{i \xi}-q A^{-i \xi}}{p A^{i \xi}+q A^{-i \xi}}, \quad \cot _{A}(\xi)=i \frac{p A^{i \xi}+q A^{-i \xi}}{p A^{i \xi}-q A^{-i \xi}} \\
\sec _{A}(\xi)=\frac{2}{p A^{i \xi}+q A^{-i \xi}}, \quad \csc _{A}(\xi)=\frac{2 i}{p A^{i \xi}-q A^{-i \xi}}
\end{array}</span>
where <span class="math-container">$\xi$</span> is an independent variable, <span class="math-container">$p$</span> and <span class="math-container">$q$</span> are constants greater than zero and called deformation parameters.</p>
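<p>Although this does not derive the formulas, one can at least verify numerically that <span class="math-container">$Q_1$</span> satisfies the ODE; with <span class="math-container">$p=q=1$</span> the deformed tangent reduces to <span class="math-container">$\tan(\xi \ln A)$</span>. A sketch with arbitrarily chosen coefficients:</p>

```python
import cmath

A, alpha, beta, sigma = 2.0, 1.0, 1.0, 1.0   # beta^2 - 4*alpha*sigma = -3 < 0 (Case I)
p = q = 1.0

def tan_A(xi):
    # the deformed tangent from the definitions above
    e = A ** (1j * xi)
    return -1j * (p * e - q / e) / (p * e + q / e)

D = cmath.sqrt(-(beta**2 - 4 * alpha * sigma))

def Q1(xi):
    return -beta / (2 * sigma) + D / (2 * sigma) * tan_A(D / 2 * xi)

# compare a central-difference derivative with the right-hand side of the ODE
xi, h = 0.3, 1e-6
lhs = (Q1(xi + h) - Q1(xi - h)) / (2 * h)
rhs = cmath.log(A) * (alpha + beta * Q1(xi) + sigma * Q1(xi) ** 2)
print(abs(lhs - rhs))  # tiny, so Q_1 satisfies Q' = ln(A)(alpha + beta Q + sigma Q^2)
```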
| SteelCubes | 859,262 | <p><span class="math-container">$$\lim _ { x \to \frac { \pi } { 2 } } \frac { \ln \sin x } { \cos ^ { 2 } x }\\
=\lim _ { x \to \frac { \pi } { 2 } } \frac { \ln( \sin x -1 +1)} { 1-\sin ^ { 2 } x }\\
=\lim _ { x \to \frac { \pi } { 2 } } \frac { \ln( 1+ \sin x -1)} { -(\sin x-1)(1+\sin x) }\\
=\frac{-1}{2}$$</span>
As <span class="math-container">$x \rightarrow \frac{\pi}{2}, \sin x \rightarrow 1$</span></p>
|
3,995,272 | <p>I am a master student in mathematical physics. I study soliton and traveling wave solutions of the differential equations.</p>
<p>Let's consider the following ODE:
<span class="math-container">$$Q^{\prime}(\xi)=\ln(A)(\alpha+\beta Q(\xi)+\sigma Q^2(\xi))$$</span>
where
<span class="math-container">$A \neq 0,1.$</span></p>
<p>In a book, the solutions of the ODE are given as follows: <strong>(But, I don't understand how to derive it.)</strong></p>
<p>There are twelve solution cases w.r.t coefficients of ODE.
Any help would be appreciated.</p>
<p><strong>CASE I)</strong> When <span class="math-container">$\beta^{2}-4 \alpha \sigma<0$</span> and <span class="math-container">$\sigma \neq 0$</span>, then
<span class="math-container">$$
\begin{array}{l}
Q_{1}(\xi)=-\frac{\beta}{2 \sigma}+\frac{\sqrt{-\left(\beta^{2}-4 \alpha \sigma\right)}}{2 \sigma} \tan _{A}\left(\frac{\sqrt{-\left(\beta^{2}-4 \alpha \sigma\right)}}{2} \xi\right) \\
Q_{2}(\xi)=-\frac{\beta}{2 \sigma}-\frac{\sqrt{-\left(\beta^{2}-4 \alpha \sigma\right)}}{2 \sigma} \cot _{A}\left(\frac{\sqrt{-\left(\beta^{2}-4 \alpha \sigma\right)}}{2} \xi\right) \\
Q_{3}(\xi)=-\frac{\beta}{2 \sigma}+\frac{\sqrt{-\left(\beta^{2}-4 \alpha \sigma\right)}}{2 \sigma}\left(\tan _{A}\left(\sqrt{-\left(\beta^{2}-4 \alpha \sigma\right)} \xi\right) \pm \sqrt{p q} \sec _{A}\left(\sqrt{-\left(\beta^{2}-4 \alpha \sigma\right)} \xi\right)\right) \\
Q_{4}(\xi)=-\frac{\beta}{2 \sigma}-\frac{\sqrt{-\left(\beta^{2}-4 \alpha \sigma\right)}}{2 \sigma}\left(\cot _{A}\left(\sqrt{-\left(\beta^{2}-4 \alpha \sigma\right)} \xi\right) \pm \sqrt{p q} \csc _{A}\left(\sqrt{-\left(\beta^{2}-4 \alpha \sigma\right)} \xi\right)\right) \\
Q_{5}(\xi)=-\frac{\beta}{2 \sigma}+\frac{\sqrt{-\left(\beta^{2}-4 \alpha \sigma\right)}}{4 \sigma}\left(\tan _{A}\left(\frac{\sqrt{-\left(\beta^{2}-4 \alpha \sigma\right)}}{4} \xi\right)-\cot _{A}\left(\frac{\sqrt{-\left(\beta^{2}-4 \alpha \sigma\right)}}{4} \xi\right)\right)
\end{array}
$$</span></p>
<p><strong>CASE II:</strong></p>
<p><span class="math-container">$\vdots$</span></p>
<p><strong>CASE XII:</strong>
When <span class="math-container">$\beta=\lambda, \sigma=m \lambda(m \neq 0)$</span> and <span class="math-container">$\alpha=0,$</span> then
<span class="math-container">$$
Q_{37}(\xi)=\frac{p A^{\lambda \xi}}{q-m p A^{\lambda \xi}}
$$</span>
where triangular functions are defined as
<span class="math-container">\begin{array}{l}
\sin _{A}(\xi)=\frac{p A^{i \xi}-q A^{-i \xi}}{2 i}, \quad \cos _{A}(\xi)=\frac{p A^{i \xi}+q A^{-i \xi}}{2} \\
\tan _{A}(\xi)=-i \frac{p A^{i \xi}-q A^{-i \xi}}{p A^{i \xi}+q A^{-i \xi}}, \quad \cot _{A}(\xi)=i \frac{p A^{i \xi}+q A^{-i \xi}}{p A^{i \xi}-q A^{-i \xi}} \\
\sec _{A}(\xi)=\frac{2}{p A^{i \xi}+q A^{-i \xi}}, \quad \csc _{A}(\xi)=\frac{2 i}{p A^{i \xi}-q A^{-i \xi}}
\end{array}</span>
where <span class="math-container">$\xi$</span> is an independent variable, <span class="math-container">$p$</span> and <span class="math-container">$q$</span> are constants greater than zero and called deformation parameters.</p>
| Fred | 380,717 | <p>Let <span class="math-container">$t= \sin x$</span>. Then we have to compute</p>
<p><span class="math-container">$$\lim_{t \to 1} \frac{ \ln t}{1-t^2}.$$</span></p>
<p>We have</p>
<p><span class="math-container">$$\frac{ \ln t}{1-t^2}=\frac{ 1}{1+t} \cdot \frac{ \ln t}{1-t}.$$</span></p>
<p>It is clear that <span class="math-container">$\frac{ 1}{1+t} \to 1/2$</span> as <span class="math-container">$t \to 1.$</span></p>
<p>Let <span class="math-container">$t=e^s$</span>, then</p>
<p><span class="math-container">$$\lim_{t \to 1} \frac{ \ln t}{1-t}= \lim_{s \to 0} \frac{s}{1-e^s}.$$</span></p>
<p>If <span class="math-container">$f(s):= e^s,$</span> then</p>
<p><span class="math-container">$$ \frac{e^s-1}{s}= \frac{f(s)-f(0)}{s-0}=f'(0)=1.$$</span></p>
<p>Hence <span class="math-container">$\lim_{s \to 0} \frac{s}{1-e^s}=-1$</span></p>
<p>This gives</p>
<p><span class="math-container">$$\lim _ { x \rightarrow \frac { \pi } { 2 } } \frac { \ln \sin x } { \cos ^ { 2 } x }=-1/2.$$</span></p>
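<p>The value can also be confirmed numerically (a sketch):</p>

```python
import math

f = lambda x: math.log(math.sin(x)) / math.cos(x) ** 2
for eps in (1e-1, 1e-2, 1e-3, 1e-4):
    print(f(math.pi / 2 - eps))   # tends to -0.5 as eps -> 0
```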
|
3,752,770 | <p>I tested this in python using:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(0, 10*2*np.pi, 10000)
y = np.sin(x)
plt.plot(y/y)
plt.plot(y)
</code></pre>
<p>Which produces:</p>
<p><a href="https://i.stack.imgur.com/pCwoV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pCwoV.png" alt="" /></a></p>
<p>The blue line representing <code>sin(x)/sin(x)</code> appears to be <code>y=1</code></p>
<p>However, I don't know if the values at the point where <code>sin(x)</code> crosses the x-axis really equals 1, 0, infinity or just undefined.</p>
| Kraigolas | 655,232 | <p>The function</p>
<p><span class="math-container">$$f(x)=\frac{\sin(x)}{\sin(x)}$$</span></p>
<p>has an infinite number of what are called <a href="https://en.wikipedia.org/wiki/Removable_singularity" rel="noreferrer">removable singularities</a>. If we consider the singularity at <span class="math-container">$x=0$</span>, we can see that</p>
<p><span class="math-container">$$\lim_{x\to0}f(x)=1$$</span></p>
<p>and thus we are able to "redefine" <span class="math-container">$f(x)$</span> so that it takes a value at all points. It is <em>very important</em> that you understand that the function is undefined at these singularities, but it is also very convenient and often very important to have a function which is defined for all values. What python is <em>actually</em> plotting is</p>
<p><span class="math-container">$$g(x)=\begin{cases}\frac{\sin(x)}{\sin(x)},\quad\text{if $x\neq n\pi$}\\
1,\qquad \text{ otherwise}\end{cases}$$</span></p>
<p>where <span class="math-container">$n$</span> is an integer. Without the explanation that we are removing removable singularities, you might see many authors write <span class="math-container">$f(x)=1$</span>.</p>
<p>As another example, if you are familiar with integration, you may see someone (or you yourself) write
<span class="math-container">$$\int_{-\infty}^\infty x dx=0$$</span></p>
<p>in any number of contexts, but this is actually the Cauchy principal value of the integral. The actual integral diverges.</p>
<p>I agree with the other answer to this question, but it seems to really discuss how computing and mathematics disagree, while I would say that you should expect these implicit evaluations beyond just computing, as we ourselves do them frequently as well.</p>
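<p>One floating-point caveat worth adding (an observation about the numerics, not about the mathematics above): the quotient evaluates to <code>nan</code> only where <code>sin</code> returns exactly zero, and since <code>np.pi</code> only approximates <span class="math-container">$\pi$</span>, that happens in practice only at <span class="math-container">$x=0$</span>:</p>

```python
import numpy as np

x = np.array([0.0, 1e-8, np.pi])
with np.errstate(invalid="ignore"):        # silence the 0/0 RuntimeWarning
    r = np.sin(x) / np.sin(x)
print(r)  # [nan  1.  1.]: sin(np.pi) is about 1.2e-16, not 0, so only x=0 gives nan
```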
|
137,595 | <p>I was using the <code>StreamPlot</code> function to plot the direction field of a system of two first order differential equations. Is there any way I could add solution curves to my direction field with this function? Or is there another function that could do that for me? I looked around, but I couldn't find anything. </p>
<p>Edit: The system of equations is: </p>
<p>$$x' = -2x+y-11\quad \& \quad y' = -5x+4y-35$$</p>
<p>Any help would be appreciated. Thanks!</p>
| zhk | 8,538 | <p>Here is another way to super impose solution curves on the stream lines. For this I choose a random IVP.</p>
<p><strong>Random example</strong></p>
<pre><code>soln[y0_?NumericQ]:=First@NDSolve[{y'[x] == -1 + Sin[y[x]], y[0] == y0}, {y}, {x, -10,10}];
sp = StreamPlot[{1, (-1 + Sin[y])}, {x, -3, 3}, {y, -3, 3}];
Show[sp, Plot[Evaluate[{y[x]} /. soln[#] & /@ Range[-20, 20, 0.3]], {x, -3, 3},
PlotRange -> All, MaxRecursion -> 8, AxesLabel -> {"x", "y"},PlotStyle -> Red]]
</code></pre>
<p><a href="https://i.stack.imgur.com/kLYzb.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kLYzb.jpg" alt="enter image description here"></a></p>
<p><strong>OP's system</strong></p>
<pre><code>sp = StreamPlot[{-2*x + y - 11, -5*x + 4*y - 35}, {x, -15,15}, {y, -20, 20}];
soln[x0_?NumericQ] :=
First@NDSolve[{x'[t] == -2*x[t] + y[t] - 11, x[0] == x0,
y'[t] == -5*x[t] + 4*y[t] - 35, y[0] == x0}, {x, y}, {t, -20, 5}];
Show[sp, ParametricPlot[Evaluate[{x[t], y[t]} /. soln[#] & /@ Range[-15, 15, 1]], {t, -15,
5}, PlotRange -> All, MaxRecursion -> 8, AxesLabel -> {"x", "y"},PlotStyle -> Red]]
</code></pre>
<p><a href="https://i.stack.imgur.com/AJpKW.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AJpKW.jpg" alt="enter image description here"></a></p>
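<p>For comparison, a rough matplotlib analogue of the same picture (a sketch; the forward-Euler integrator, step size, and the sweep of initial conditions are choices made here, not part of the Mathematica solution):</p>

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")                     # render without a display
import matplotlib.pyplot as plt

# direction field of x' = -2x + y - 11, y' = -5x + 4y - 35
X, Y = np.meshgrid(np.linspace(-15, 15, 30), np.linspace(-20, 20, 30))
plt.streamplot(X, Y, -2 * X + Y - 11, -5 * X + 4 * Y - 35, color="lightgray")

def trajectory(x0, y0, h=1e-3, steps=2000):
    # forward-Euler solution curve starting at (x0, y0)
    xs, ys = [x0], [y0]
    for _ in range(steps):
        x, y = xs[-1], ys[-1]
        xs.append(x + h * (-2 * x + y - 11))
        ys.append(y + h * (-5 * x + 4 * y - 35))
    return xs, ys

for s0 in range(-15, 16, 3):              # initial conditions along the diagonal
    plt.plot(*trajectory(s0, s0), "r")
plt.xlim(-15, 15)
plt.ylim(-20, 20)
plt.savefig("field.png")
```

The system has an equilibrium at <span class="math-container">$(-3, 5)$</span>, which a trajectory started there never leaves.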
|
3,374,248 | <p>I haven't worked out all the details yet, but it seems to be true for the following functions:</p>
<ul>
<li><span class="math-container">$f(k) = 1$</span></li>
<li><span class="math-container">$f(k) = 1/k!$</span></li>
<li><span class="math-container">$f(k) = a^k$</span></li>
<li><span class="math-container">$f(k) = 1/\log(k+1)$</span></li>
</ul>
<p>What are the conditions on <span class="math-container">$f$</span> for this to be true? It sounds like a fairly general result that should be easy to prove. Sums like these are related to the discrete self-convolution operator, so I'm pretty sure the result mentioned here must be well known. </p>
<p><strong>Update</strong>: A weaker result that applies to a broader class of functions is the following:
<span class="math-container">$$\sum_{k=1}^n f(k)f(n-k) = O\Big(n f^2(\frac{n}{2})\Big).$$</span>
Is it possible to find a counter-example, with a function <span class="math-container">$f$</span> that is smooth enough and in wide use?</p>
| Ali Ashja' | 437,913 | <p>If you think of the error term as coming from the approximation of an integral:
<span class="math-container">$$\frac{1}{n} \int_0^n f*f=\frac{1}{f^2(n/2)}$$</span>
That means you should search for functions for which the mean of their self-convolution equals the inverse of their square at the midpoint of the interval, and these in fact meet. There are two natural properties that many functions come close to satisfying:
<p>1) The mean value is attained at the midpoint.</p>
<p>2) Since <span class="math-container">$f$</span> is convolved with itself, <span class="math-container">$f^2$</span> is a good candidate for the mean.</p>
<p>So you should focus on functions that satisfy the inverse of the mean we expect, <span class="math-container">$1/f^2(n/2)$</span>, instead of <span class="math-container">$f^2(n/2)$</span>.</p>
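<p>The bounded-ratio claim in the question's update can be probed numerically (a sketch; the sum is taken over <span class="math-container">$1 \le k \le n-1$</span> to avoid evaluating <span class="math-container">$f(0)$</span>, and the shift in the logarithm example is an arbitrary choice):</p>

```python
import math

def ratio(f, n):
    # sum_{k=1}^{n-1} f(k) f(n-k), divided by the conjectured scale n * f(n/2)^2
    s = sum(f(k) * f(n - k) for k in range(1, n))
    return s / (n * f(n / 2) ** 2)

examples = {
    "f(k) = 1":           lambda k: 1.0,
    "f(k) = (1/2)^k":     lambda k: 0.5 ** k,
    "f(k) = 1/log(k+2)":  lambda k: 1 / math.log(k + 2),
}
for name, f in examples.items():
    print(name, [round(ratio(f, n), 3) for n in (10, 100, 1000)])
```

For the first two examples the ratio is exactly <span class="math-container">$(n-1)/n$</span>, and for the logarithm example it stays bounded, consistent with the <span class="math-container">$O\big(n f^2(n/2)\big)$</span> claim.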
|
101,058 | <p>Let $M$ be a module over a ring $R$.</p>
<p>Let $\operatorname{Ass}(M)$ be the set of annihilator ideals $\operatorname{Ann}(x)$, which are prime, so</p>
<p>$$\operatorname{Ass}(M) = \{\operatorname{Ann}(x) \mid \operatorname{Ann}(x)\text{ is prime}, x \in M\}.$$</p>
<p>Recall that $\operatorname{Ann}(x) = \{r \in R \mid rx=0\}$.</p>
<blockquote>
<p>If $M_1$ and $M_2$ are two modules, I wish to prove that
$$\operatorname{Ass}(M_1 \oplus M_2) = \operatorname{Ass}(M_1) \cup \operatorname{Ass}(M_2),$$
where $\oplus$ is direct sum and $\cup$ is ordinary union of sets.</p>
</blockquote>
<p>I need to do this by considering an element of the left hand side and show it is in the right hand side, so nothing fancy. The direction from right to left is easy, since for any $m_1 \in M_1$ I have $\operatorname{Ann}(m_1) = \operatorname{Ann}(m_1,0)$, but the other direction causes me trouble.</p>
| Community | -1 | <p>You could use the fact that for a submodule <span class="math-container">$N\subseteq M$</span> we have <span class="math-container">$\mathrm{Ass}(M)\subseteq \mathrm{Ass}(N)\cup \mathrm{Ass}(M/N)$</span>.<br />
This can be proved directly: Let <span class="math-container">$\mathrm{Ann}(m)=\mathfrak p\in\mathrm{Ass}(M)$</span>.<br />
If <span class="math-container">$mA\cap N\neq 0$</span> (identify <span class="math-container">$A/\mathfrak p$</span> with the submodule <span class="math-container">$mA$</span> via <span class="math-container">$A/\mathrm{Ann}(m)\cong mA$</span>), then there is <span class="math-container">$0\neq n\in mA\cap N$</span>, and since <span class="math-container">$\mathfrak p$</span> is prime you get <span class="math-container">$\mathrm{Ann}(n)=\mathfrak p$</span> and <span class="math-container">$\mathfrak p\in \mathrm{Ass}(N)$</span>.<br />
If otherwise <span class="math-container">$mA\cap N=0$</span>, you can use that <span class="math-container">$\mathfrak p\in \mathrm{Ass}(M)$</span> iff there is an injection <span class="math-container">$A/\mathfrak p\rightarrow M$</span> to show that <span class="math-container">$\mathfrak p\in \mathrm{Ass}(M/N)$</span>.</p>
|
101,058 | <p>Let $M$ be a module over a ring $R$.</p>
<p>Let $\operatorname{Ass}(M)$ be the set of annihilator ideals $\operatorname{Ann}(x)$, which are prime, so</p>
<p>$$\operatorname{Ass}(M) = \{\operatorname{Ann}(x) \mid \operatorname{Ann}(x)\text{ is prime}, x \in M\}.$$</p>
<p>Recall that $\operatorname{Ann}(x) = \{r \in R \mid rx=0\}$.</p>
<blockquote>
<p>If $M_1$ and $M_2$ are two modules, I wish to prove that
$$\operatorname{Ass}(M_1 \oplus M_2) = \operatorname{Ass}(M_1) \cup \operatorname{Ass}(M_2),$$
where $\oplus$ is direct sum and $\cup$ is ordinary union of sets.</p>
</blockquote>
<p>I need to do this by considering an element of the left hand side and show it is in the right hand side, so nothing fancy. The direction from right to left is easy, since for any $m_1 \in M_1$ I have $\operatorname{Ann}(m_1) = \operatorname{Ann}(m_1,0)$, but the other direction causes me trouble.</p>
| Zhen Lin | 5,191 | <p>Let <span class="math-container">$\mathfrak{p} \in \textrm{Ass}(M_1 \oplus M_2)$</span>. Suppose <span class="math-container">$\mathfrak{p} = \textrm{Ann}(m_1 + m_2)$</span> where <span class="math-container">$m_1 \in M_1$</span> and <span class="math-container">$m_2 \in M_2$</span>. (I am considering <span class="math-container">$M_1$</span> and <span class="math-container">$M_2$</span> as submodules of <span class="math-container">$M_1 \oplus M_2$</span>.) So for all <span class="math-container">$a$</span> in <span class="math-container">$\mathfrak{p}$</span>, <span class="math-container">$a m_1 + a m_2 = 0$</span>, so <span class="math-container">$a m_1 = 0$</span> and <span class="math-container">$a m_2 = 0$</span>. Thus <span class="math-container">$\mathfrak{p} \subseteq \textrm{Ann}(m_1) \cap \textrm{Ann}(m_2)$</span>. </p>
<p>Now, either <span class="math-container">$\mathfrak{p} = \textrm{Ann}(m_1)$</span> or not; if it is we are done, so suppose <span class="math-container">$\mathfrak{p} \ne \textrm{Ann}(m_1)$</span>. Let <span class="math-container">$a \in \textrm{Ann}(m_1)$</span>, <span class="math-container">$a \notin \mathfrak{p}$</span>. Then,
<span class="math-container">$$a m_1 + a m_2 = a m_2 \ne 0$$</span>
since otherwise <span class="math-container">$a \in \textrm{Ann}(m_1 + m_2)$</span>, which would contradict <span class="math-container">$a \notin \mathfrak{p}$</span>. Thus <span class="math-container">$a \notin \textrm{Ann}(m_2)$</span>. So indeed <span class="math-container">$\mathfrak{p} = \textrm{Ann}(m_1) \cap \textrm{Ann}(m_2)$</span>. Let <span class="math-container">$a$</span> be as above, and let <span class="math-container">$b \in \textrm{Ann}(m_2)$</span>; then <span class="math-container">$a b \in \textrm{Ann}(m_1) \cap \textrm{Ann}(m_2) = \mathfrak{p}$</span>, so <span class="math-container">$b \in \mathfrak{p}$</span>. Thus <span class="math-container">$\textrm{Ann}(m_2) = \mathfrak{p}$</span>. Hence, either <span class="math-container">$\textrm{Ann}(m_1) = \mathfrak{p}$</span> or <span class="math-container">$\textrm{Ann}(m_2)=\mathfrak{p}$</span>, so
<span class="math-container">$$\textrm{Ass}(M_1 \oplus M_2) = \textrm{Ass}(M_1) \cup \textrm{Ass}(M_2)$$</span>
as required.</p>
|
101,058 | <p>Let $M$ be a module over a ring $R$.</p>
<p>Let $\operatorname{Ass}(M)$ be the set of annihilator ideals $\operatorname{Ann}(x)$, which are prime, so</p>
<p>$$\operatorname{Ass}(M) = \{\operatorname{Ann}(x) \mid \operatorname{Ann}(x)\text{ is prime}, x \in M\}.$$</p>
<p>Recall that $\operatorname{Ann}(x) = \{r \in R \mid rx=0\}$.</p>
<blockquote>
<p>If $M_1$ and $M_2$ are two modules, I wish to prove that
$$\operatorname{Ass}(M_1 \oplus M_2) = \operatorname{Ass}(M_1) \cup \operatorname{Ass}(M_2),$$
where $\oplus$ is direct sum and $\cup$ is ordinary union of sets.</p>
</blockquote>
<p>I need to do this by considering an element of the left hand side and show it is in the right hand side, so nothing fancy. The direction from right to left is easy, since for any $m_1 \in M_1$ I have $\operatorname{Ann}(m_1) = \operatorname{Ann}(m_1,0)$, but the other direction causes me trouble.</p>
| Community | -1 | <p><em>This is just a restructuring of <a href="https://math.stackexchange.com/a/101099/279515">@ZhenLin's answer</a> in light of the comments below it by <a href="https://math.stackexchange.com/questions/101058/set-of-associated-primes-of-direct-sum#comment237758_101099">@DylanMoreland</a> and <a href="https://math.stackexchange.com/questions/101058/set-of-associated-primes-of-direct-sum#comment1665731_101099">@PavelČoupek</a>.</em></p>
<hr />
<p>Let <span class="math-container">$M$</span> be an <span class="math-container">$A$</span>-module that is a finite direct sum of its submodules <span class="math-container">$M_i$</span>, that is, let <span class="math-container">$M = \bigoplus_{i=1}^n M_i$</span>. If <span class="math-container">$\mathfrak{p}$</span> is an associated prime of <span class="math-container">$M_i$</span>, then there exists <span class="math-container">$m_i \in M_i$</span> such that <span class="math-container">$\mathfrak{p} = \mathrm{ann}_A(m_i)$</span>. But since <span class="math-container">$m_i \in M$</span> as well, this shows that <span class="math-container">$\mathfrak{p}$</span> is an associated prime of <span class="math-container">$M$</span>. Hence, <span class="math-container">$\mathrm{Ass}(M) \supseteq \bigcup_{i=1}^n \mathrm{Ass}(M_i)$</span>.</p>
<p>Conversely, suppose that <span class="math-container">$\mathfrak{p}$</span> is an associated prime of <span class="math-container">$M$</span>.</p>
<p>Let <span class="math-container">$m = \sum_{i=1}^n m_i$</span>, <span class="math-container">$m_i \in M_i$</span>, be an element such that <span class="math-container">$\mathfrak{p} = \mathrm{ann}_A(m)$</span>. In particular, <span class="math-container">$\mathfrak{p}$</span> annihilates <span class="math-container">$m_i$</span> for each <span class="math-container">$i$</span>, so <span class="math-container">$\mathfrak{p} \subseteq \mathrm{ann}_A(m_i)$</span> for each <span class="math-container">$i$</span>. Hence, <span class="math-container">$\mathfrak{p} \subseteq \bigcap_{i=1}^n \mathrm{ann}_A(m_i)$</span>.</p>
<p>For the reverse inclusion, we observe that if <span class="math-container">$a \in \mathrm{ann}_A(m_i)$</span> for each <span class="math-container">$i$</span>, then <span class="math-container">$am = 0$</span>. Thus, <span class="math-container">$\bigcap_{i=1}^n \mathrm{ann}_A(m_i) \subseteq \mathfrak{p}$</span>.</p>
<p>Hence, <span class="math-container">$\mathfrak{p} = \bigcap_{i=1}^n \mathrm{ann}_A(m_i)$</span>. Now, by the observation mentioned by @DylanMoreland, this implies <span class="math-container">$\mathfrak{p} = \mathrm{ann}_A(m_i)$</span> for some <span class="math-container">$i$</span>, and so <span class="math-container">$\mathrm{Ass}(M) \subseteq \bigcup_{i=1}^n \mathrm{Ass}(M_i)$</span>.</p>
<hr />
<p>The same proof works for arbitrary direct sums as well. <span class="math-container">$\mathrm{Ass}(M) \supseteq \bigcup \mathrm{Ass}(M_i)$</span> by the same argument as before. To prove the reverse inclusion, let <span class="math-container">$\mathfrak{p}$</span> be an associated prime of <span class="math-container">$M$</span>. Let <span class="math-container">$m = \sum m_i$</span>, <span class="math-container">$m_i \in M_i$</span>, be an element such that <span class="math-container">$\mathfrak{p} = \mathrm{ann}_A(m)$</span>. In particular, <span class="math-container">$m_i = 0$</span> for all but finitely many indices <span class="math-container">$i$</span>. So, <span class="math-container">$m$</span> is an element of a finite direct sum. Now, we run through the same proof as before for this finite direct sum to prove that <span class="math-container">$\mathfrak{p}$</span> is an associated prime of one of the <span class="math-container">$M_i$</span>.</p>
<p>Hence, the result
<span class="math-container">$$\operatorname{Ass}\left(\bigoplus M_i\right) = \bigcup \operatorname{Ass}(M_i)$$</span> is true for arbitrary direct sums.</p>
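<p>As a concrete sanity check (my addition, not part of the argument above), one can verify the formula computationally for <span class="math-container">$A=\mathbb{Z}$</span> and finite cyclic modules, using that <span class="math-container">$\operatorname{ann}_{\mathbb{Z}}(x) = \left(n/\gcd(n,x)\right)$</span> for the class of <span class="math-container">$x$</span> in <span class="math-container">$\mathbb{Z}/n$</span>, and that the annihilator of a pair is generated by the lcm of the two generators:</p>

```python
from math import gcd

def lcm(a, b):
    # defined by hand so the snippet also runs on Python < 3.9
    return a * b // gcd(a, b)

def ann_gen(n, x):
    """Positive generator of Ann_Z(x) for the class of x in Z/n."""
    return n // gcd(n, x)

def is_prime(d):
    return d > 1 and all(d % k for k in range(2, int(d ** 0.5) + 1))

def ass_cyclic(n):
    """Associated primes of Z/n: the primes p with Ann(x) = (p) for some x."""
    return {ann_gen(n, x) for x in range(n) if is_prime(ann_gen(n, x))}

def ass_direct_sum(n, m):
    """Associated primes of Z/n (+) Z/m, computed element by element."""
    gens = {lcm(ann_gen(n, x), ann_gen(m, y))
            for x in range(n) for y in range(m)}
    return {d for d in gens if is_prime(d)}
```

<p>For instance <code>ass_direct_sum(6, 10)</code> agrees with <code>ass_cyclic(6) | ass_cyclic(10)</code>, i.e. with <span class="math-container">$\{(2),(3),(5)\}$</span>, as the proposition predicts.</p>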
|
3,895,314 | <p>How do I prove <span class="math-container">$x ^ {1 - x}(1 - x) ^ {x} \le \frac{1}{2}$</span>, for every <span class="math-container">$x \in (0, 1)$</span>.</p>
<hr />
<p>For <span class="math-container">$x = \frac {1}{2}$</span> the LHS equals one half. I tried studying what happens when <span class="math-container">$x \lt \frac {1}{2}$</span> and in the symmetric case <span class="math-container">$x \gt \frac {1}{2}$</span>, but without result.</p>
| TheSilverDoe | 594,484 | <p>Because <span class="math-container">$1-x+x=1$</span>, and <span class="math-container">$x \in (0,1)$</span>, by concavity of the <span class="math-container">$\ln$</span>, one has
<span class="math-container">$$(1-x)\ln(x)+x \ln(1-x) \leq \ln((1-x)x+x(1-x))=\ln(2x(1-x))$$</span></p>
<p>But it is easy to see that for every <span class="math-container">$x$</span>,
<span class="math-container">$$x(1-x) \leq \frac{1}{4}$$</span> so <span class="math-container">$$(1-x)\ln(x)+x \ln(1-x) \leq \ln \left( \frac{1}{2}\right)$$</span></p>
<p>Now take the <span class="math-container">$\exp$</span> : you get directly
<span class="math-container">$$x^{1-x}(1-x)^x \leq \frac{1}{2}$$</span></p>
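<p>A quick numerical sanity check of the bound (my addition, not part of the proof): scan a fine grid of <span class="math-container">$(0,1)$</span> and confirm that the left-hand side never exceeds <span class="math-container">$\frac{1}{2}$</span>, with the value <span class="math-container">$\frac{1}{2}$</span> attained at <span class="math-container">$x=\frac{1}{2}$</span>:</p>

```python
def candidate(x):
    """Left-hand side x^(1-x) * (1-x)^x, for x in (0, 1)."""
    return x ** (1 - x) * (1 - x) ** x

# Scan a fine grid of (0, 1); the peak should sit at x = 1/2 with value 1/2.
grid = [k / 10000 for k in range(1, 10000)]
peak = max(candidate(x) for x in grid)
```

<p>This of course checks only finitely many points; the concavity argument above is what proves the inequality for all of <span class="math-container">$(0,1)$</span>.</p>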
|
1,400,394 | <p>Given that $u(x,y)$ can someone please explain to me how the result as asked in the question is achieved? Steps would be really appreciated, thanks.</p>
| HK Lee | 37,116 | <p>Note that $u_{xx}=0$ implies that $$ u_x= f(y)
$$</p>
<p>(Here fix $y_0$. Then $u_x(x,y_0)$ is a function of variable $x$ only. If $u_{xx}=0$ then $u_x$ is constant : If not, that is, for example $u_x(x_1) < u_x(x_2)$, then note that by mean value theorem, $$ u_x(x_1)-u_x(x_2)= u_{xx} (x_3)(x_1-x_2),\ x_3\in [x_1,x_2] $$ for some $x_3$. So we have a contradiction. )</p>
<p>And $$ u=xf(y)+g(y)$$</p>
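<p>One can double-check the conclusion numerically (my addition; the particular <code>f</code> and <code>g</code> below are arbitrary smooth choices, not forced by the problem): any $u = x f(y) + g(y)$ is linear in $x$, so a central second difference in $x$ vanishes up to rounding error:</p>

```python
import math

# Arbitrary smooth choices for f and g, purely for illustration.
def f(y): return math.sin(y)
def g(y): return math.exp(y)

def u(x, y):
    """The general solution u = x*f(y) + g(y) of u_xx = 0."""
    return x * f(y) + g(y)

def u_xx(x, y, h=1e-4):
    """Central second difference approximating the second x-derivative."""
    return (u(x + h, y) - 2.0 * u(x, y) + u(x - h, y)) / h ** 2
```

<p>Evaluating <code>u_xx</code> at a few points returns values on the order of floating-point noise, consistent with $u_{xx}=0$.</p>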
|
4,461,327 | <p>To show that they are equal, I need to show</p>
<p><span class="math-container">$\bigcap_{n=1}^{\infty}[0,1+1/n) \subset [0,1]$</span> and <span class="math-container">$[0,1] \subset \bigcap_{n=1}^{\infty}[0,1+1/n)$</span></p>
<p>My attempt is: let <span class="math-container">$x \in [0,1] \Rightarrow 0 \leq x \leq 1$</span>, since <span class="math-container">$1 < 1+1/n, \ \forall n \geq1 \Rightarrow x \in \bigcap_{n=1}^{\infty}[0,1+1/n) \Rightarrow [0,1] \subset \bigcap_{n=1}^{\infty}[0,1+1/n)$</span></p>
<p>However, I don't know how to show <span class="math-container">$\bigcap_{n=1}^{\infty}[0,1+1/n) \subset [0,1]$</span>. It seems obvious since <span class="math-container">$\lim_{n \to \infty} 1+1/n = 1$</span>, but I am having trouble to proving that. Any help or hint would be appreciated</p>
| Hidda Walid | 736,203 | <p>If <span class="math-container">$x$</span> is in the intersection, that means that <span class="math-container">$0\leq x<1+\frac{1}{n}$</span> for all <span class="math-container">$n\geq1$</span>, using the fact that if <span class="math-container">$w_n<u_n<v_n$</span> for all <span class="math-container">$n$</span>, then <span class="math-container">$\lim_n w_n\leq \lim_n u_n \leq \lim_n v_n$</span>, and since <span class="math-container">$\lim_n 0=0$</span> and <span class="math-container">$\lim_n x=x$</span> (constante sequences) then <span class="math-container">$0\leq x\leq 1$</span>.</p>
|