| qid | question | author | author_id | answer |
|---|---|---|---|---|
678,768 | <p>"Let $A$, $B$ be two infinite sets. Suppose that $f: A \to B$ is injective. Show that there exists a surjective map $g: B \to A$"</p>
<p>I am not sure how to go about this proof, so I am trying to gather information to help me and deduce as much as I can:
Since $f$ is injective, we know that $|A| \leq |B|$. </p>
<p>If $g$ is surjective, then $|A| \geq |B|$.</p>
<p>Thus $|A| = |B|$ (for both of these to hold).</p>
<p>This suggests that we cannot have one countable and one uncountable set. </p>
<p>I would start by assuming, for contradiction, that no such surjective map exists. I am thinking that perhaps, by removing the elements $b$ of $B$ that are not mapped to from $A$, we would have an injective map $g: A \to B\setminus\{b\}$. However, this is as far as I have got. Can someone tell me if I am on the right track and, if so, where to go from here, or if I have the completely wrong train of thought? Thanks</p>
| Martín-Blas Pérez Pinilla | 98,199 | <p>Definition of $g$:</p>
<p>If $y=f(x)\in f(A)$, $g(y)=x$.</p>
<p>If $y\in B\backslash f(A)$, $g(y)=$ arbitrary.</p>
<p>The function is well-defined because $f$ is injective.</p>
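A small Python sketch may make the construction concrete. The theorem concerns infinite sets, but the definition of $g$ is identical; the sets `A`, `B` and the injection `f` below are made-up finite examples:

```python
# Finite illustration of the construction above: given an injective
# f: A -> B, set g(f(x)) = x and send every y outside f(A) to a fixed
# arbitrary element of A.
A = {1, 2, 3}
B = {"p", "q", "r", "s", "t"}
f = {1: "p", 2: "q", 3: "r"}            # injective: distinct keys map to distinct values

a0 = min(A)                              # arbitrary fallback element of A
inverse = {y: x for x, y in f.items()}   # well-defined because f is injective
g = {y: inverse.get(y, a0) for y in B}   # g(f(x)) = x, everything else -> a0

print(set(g.values()) == A)  # True: g is surjective onto A
```

The dictionary inversion is exactly the point of the answer: it is only well-defined because `f` never sends two elements to the same value.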
|
1,843,274 | <p>Good evening to everyone. So I have this inequality: $$\frac{\left(1-x\right)}{x^2+x} <0 $$ It becomes $$ \frac{\left(1-x\right)}{x^2+x} <0 \rightarrow \left(1-x\right)\left(x^2+x\right)<0 \rightarrow x^3-x>0 \rightarrow x\left(x^2-1\right)>0 $$ Therefore from the first $ x>0 $, from the second $ x_1 = 1 $ and $x_2=-1$ therefore $ x $ belongs to $(-\infty,-1)$ and $(1,\infty)$ therefore $x$ belongs to $(1,\infty)$. But on the answer sheet it shows that it's defined on $(-1,0)$ and $(1,\infty)$. Where am I wrong? Thanks for any response.</p>
| Siong Thye Goh | 306,553 | <p>It seems to me that you are only considering two roots of the cubic polynomial. The polynomial has three roots, $-1,0,1$, and they are all simple roots.</p>
<p>Hence the polynomial is positive on $(1,\infty)$ (it is positive there because it goes to positive infinity).</p>
<p>The sign changes at the roots.</p>
<p>It is negative on $(0,1)$, positive on $(-1,0)$, and negative on $(-\infty,-1)$.</p>
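A quick numerical spot-check (sample points chosen away from $-1, 0, 1$) confirms both the cubic's sign pattern and that it agrees with the original inequality:

```python
# sign of the original expression and of the cubic x(x^2 - 1)
# at one sample point per interval determined by -1, 0, 1
def h(x):
    return (1 - x) / (x**2 + x)

def cubic(x):
    return x * (x**2 - 1)

pts = (-2, -0.5, 0.5, 2)
print([cubic(t) > 0 for t in pts])                  # [False, True, False, True]
print(all((h(t) < 0) == (cubic(t) > 0) for t in pts))  # True
```

The cubic is positive exactly on $(-1,0)\cup(1,\infty)$, matching the answer sheet.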
|
777,863 | <p>Does there exists an $f:\mathbb{R}\rightarrow \mathbb{R}$ differentiable everywhere with $f'$ discontinuous at some point?</p>
| David | 119,775 | <p>Yes, there does exist such a function, for example
$$f(x)=\cases{x^2\sin(1/x)&if $x\ne0$\cr 0&if $x=0$.\cr}$$
By normal differentiation rules we have
$$f'(x)=2x\sin(1/x)-\cos(1/x)$$
if $x\ne0$, and for $x=0$ we use the definition:
$$f'(0)=\lim_{h\to0}\frac{f(h)-f(0)}{h-0}=\lim_{h\to0}h\sin(1/h)=0\ .$$
So $f$ is certainly differentiable at $0$, and in fact everywhere else too. However, if $x\to0$ then $2x\sin(1/x)\to0$ and $\cos(1/x)$ oscillates between $1$ and $-1$, so $f'(x)$ has no limit as $x\to0$, and is therefore not continuous at $x=0$.</p>
|
184,699 | <p>First, we make the following observation: let $X: M \rightarrow TM $ be a vector
field on a smooth manifold. Taking the contraction with respect to $X$ twice gives zero, i.e.
$$ i_X \circ i_{X} =0.$$
Is there any "name" for the corresponding "homology" group that one can define
(kernel mod image)? Has this "homology" group been studied by others? (There are plenty of questions that one can ask: is it isomorphic to anything more familiar, etc.) </p>
<p>Similarly, a dual observation is as follows: Let $\alpha$ be a one form; taking
the wedge product with $\alpha$ twice gives us zero. One can again define kernel
mod image. Does that give anything "interesting"? </p>
<p>If people have investigated these questions, I would like to know a few references. </p>
<p>My purpose for asking the "name" of the (co)homology group is so that I can make a google search using the name. I was unable to do that, since I do not know of any key words under this topic (or if at all it is a topic).</p>
| Matthias Ludewig | 16,702 | <p>I believe that the answer is something in between "not really" and "kind of", and was indicated by Qiaochu Yuan.</p>
<p>In Witten's famous paper "Supersymmetry and Morse Theory", he considers operators of the form $\sigma d + \alpha$ for a coefficient $\sigma$ and a one-form $\alpha$ with only non-degenerate critical points, and shows that for $\sigma \rightarrow 0$, one gets the Morse homology of the manifold: the homology of the complex at level $k$ is $n_k$-dimensional, where $n_k$ is the number of critical points of index $k$. It might be a reasonable assumption that you get the same kind of information with your approach.</p>
<p>However, you cannot set $\sigma = 0$ right away, I believe. I made a few example calculations, and I think that if you assume that your form $\alpha$ has only non-degenerate critical points, you always obtain a zero homology (if you drop the assumption of non-degeneracy, you will always end up with something infinite-dimensional, as pointed out above). At least this is easy to see in two dimensions, and in higher dimensions, it should not be difficult to show, only a little annoying probably ;-).</p>
<p>The physical intuition is the following: the particles move along the flow lines of the vector field $\alpha^\sharp$, and if you let the diffusion coefficient $\sigma \rightarrow 0$, they concentrate in the minima (where "minima for $k$-forms" turn out to be critical points of higher index somehow). But if you set the diffusion $\sigma$ to zero right away, the particles will not move; they are just pinned to where they are, so you don't notice anything of your potential term.</p>
|
2,970,787 | <blockquote>
<p>Find <span class="math-container">$\lim_{x\to -\infty} \frac{(x-1)^2}{x+1}$</span></p>
</blockquote>
<p>If I divide the whole expression by the highest power, i.e. <span class="math-container">$x^2$</span>, I get <span class="math-container">$$\lim_{x\to -\infty} \frac{(1-\frac1x)^2}{\frac1x+\frac{1}{x^2}}$$</span>
The numerator tends to <span class="math-container">$1$</span> and the denominator tends to <span class="math-container">$0$</span>.</p>
<p>So I get the answer as <span class="math-container">$+\infty$</span></p>
<p>But when I plot the graph it tends to <span class="math-container">$-\infty$</span></p>
<p>What am I missing here? Can someone give me the precise steps that I should write in such a case? Thank you very much!</p>
<p>NOTE: I cannot use L'hopital for finding this limit.</p>
| Dr. Sonnhard Graubner | 175,066 | <p>Hint: Write <span class="math-container">$$\frac{x^2\left(1-\frac{1}{x}\right)^2}{x\left(1+\frac{1}{x}\right)}$$</span></p>
|
2,970,787 | <blockquote>
<p>Find <span class="math-container">$\lim_{x\to -\infty} \frac{(x-1)^2}{x+1}$</span></p>
</blockquote>
<p>If I divide the whole expression by the highest power, i.e. <span class="math-container">$x^2$</span>, I get <span class="math-container">$$\lim_{x\to -\infty} \frac{(1-\frac1x)^2}{\frac1x+\frac{1}{x^2}}$$</span>
The numerator tends to <span class="math-container">$1$</span> and the denominator tends to <span class="math-container">$0$</span>.</p>
<p>So I get the answer as <span class="math-container">$+\infty$</span></p>
<p>But when I plot the graph it tends to <span class="math-container">$-\infty$</span></p>
<p>What am I missing here? Can someone give me the precise steps that I should write in such a case? Thank you very much!</p>
<p>NOTE: I cannot use L'hopital for finding this limit.</p>
| user | 505,767 | <p><strong>HINT</strong></p>
<p>We have that</p>
<p><span class="math-container">$$\lim_{x\to -\infty} \frac{(x-1)^2}{x+1}=\lim_{x\to -\infty} (x-1)\cdot \frac{x-1}{x+1}$$</span></p>
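Numerically, the first factor $(x-1)\to-\infty$ while the second factor $\frac{x-1}{x+1}\to 1$, so the product decreases without bound; a quick sketch:

```python
# (x-1)^2/(x+1) behaves like x as x -> -infinity: the values keep
# decreasing and pass below any bound
def r(x):
    return (x - 1) ** 2 / (x + 1)

samples = [r(x) for x in (-1e2, -1e4, -1e6)]
print(samples[0] > samples[1] > samples[2], r(-1e6) < -9e5)  # True True
```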
|
3,078,707 | <p>The above question is the equation <span class="math-container">$(2.4)$</span> of the following paper:</p>
<p><a href="http://www.jmlr.org/papers/volume6/tsuda05a/tsuda05a.pdf" rel="nofollow noreferrer">MATRIX EXPONENTIATED GRADIENT UPDATES</a>.</p>
<p>Let <span class="math-container">$M$</span> and <span class="math-container">$N$</span> be two <span class="math-container">$n \times n$</span> positive definite matrices where <span class="math-container">$M=U\Lambda U^{\top}$</span>, <span class="math-container">$N=\tilde{U}\tilde{\Lambda} \tilde{U}^{\top}
$</span> and <span class="math-container">$(\lambda_i,v_i)$</span> are eigenpairs of <span class="math-container">$M$</span>, likewise for <span class="math-container">$N$</span>.</p>
<p>How to show the following
<span class="math-container">$$\text{Tr}(M\log N)=\sum_{i,j}\lambda_i\log(\tilde{\lambda_j})(u_i^{\top}\tilde{u}_j)^2$$</span></p>
<p>First, I do not know what <span class="math-container">$i,j$</span> mean in the summation, and why the identity is written with two summation indices. Second, I do not know how to derive it.</p>
<p>My try:</p>
<p><span class="math-container">\begin{align}
\text{Tr}(M\log N) &=
\text{Tr}(U\Lambda U^{\top}
\tilde{U}\log(\tilde{\Lambda})\tilde{U}^{\top}) \\
& = \text{Tr}(\Lambda U^{\top}
\tilde{U}\log(\tilde{\Lambda})\tilde{U}^{\top}U)
\end{align}</span></p>
<p>How can I proceed using matrix calculus to get the result without expanding? What is the hidden trick?</p>
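For what it's worth, the identity can be checked numerically before proving it. A sketch with NumPy (assuming NumPy is available; `eigh` returns $M=U\Lambda U^\top$ with eigenvectors in the columns of $U$):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_spd(n):
    # A @ A.T + n*I is symmetric positive definite
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)

n = 4
M, N = random_spd(n), random_spd(n)

lam, U = np.linalg.eigh(M)   # M = U diag(lam) U^T
mu, V = np.linalg.eigh(N)    # N = V diag(mu) V^T

# Left side: Tr(M log N), with log N = V diag(log mu) V^T
logN = V @ np.diag(np.log(mu)) @ V.T
lhs = np.trace(M @ logN)

# Right side: sum_{i,j} lam_i log(mu_j) (u_i . v_j)^2 — the double sum
# runs over the eigenpairs of M (index i) and of N (index j)
rhs = sum(lam[i] * np.log(mu[j]) * (U[:, i] @ V[:, j]) ** 2
          for i in range(n) for j in range(n))

print(abs(lhs - rhs) < 1e-8)  # True
```

The double sum is exactly what falls out of expanding $\text{Tr}(\Lambda U^{\top}\tilde U\log(\tilde\Lambda)\tilde U^{\top}U)$ entrywise.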
| David K | 139,123 | <p>Here's a construction that I think is relatively easy to implement.</p>
<p>Find the point <span class="math-container">$P$</span> where the tangent at <span class="math-container">$(-c,d)$</span> intersects the <span class="math-container">$y$</span> axis.
Suppose the coordinates of that point are <span class="math-container">$(0,p).$</span>
Now find the point <span class="math-container">$(-\lambda c,d)$</span> that is at distance <span class="math-container">$p$</span> from <span class="math-container">$(0,p),$</span>
where <span class="math-container">$\lambda > 0.$</span>
There are various ways to do that, but one is to consider the right triangle
with vertices <span class="math-container">$(0,p),$</span> <span class="math-container">$(0,d),$</span> and <span class="math-container">$(-\lambda c,d).$</span>
The length of the hypotenuse should be <span class="math-container">$p$</span>, so we want
<span class="math-container">$(d - p)^2 + (-\lambda c)^2 = p^2,$</span>
that is, <span class="math-container">$\lambda = \lvert\frac1c\rvert\sqrt{p^2 - (d - p)^2}.$</span></p>
<p>Find the point on the <span class="math-container">$x$</span> axis that is the center of a circle through <span class="math-container">$(0,0)$</span>
and <span class="math-container">$(-\lambda c,d).$</span>
One way is to consider the line through <span class="math-container">$(-\lambda c,d)$</span> and <span class="math-container">$(0,p),$</span>
find the line perpendicular to that line through <span class="math-container">$(-\lambda c,d),$</span>
and find the intersection of the perpendicular line with the <span class="math-container">$x$</span> axis.
Let's say the point you find in this way is <span class="math-container">$(-a',0).$</span></p>
<p>Now you just need to "stretch" the circle you've just found in a horizontal direction while keeping it tangent to the <span class="math-container">$y$</span> axis.
Stop when the point <span class="math-container">$(-\lambda c,d)$</span> on the circle has been "stretched" onto the point <span class="math-container">$(-c,d).$</span>
The ellipse you get by "stretching" the circle this way will then be tangent to the desired line through <span class="math-container">$(-c,d).$</span></p>
<p>The factor by which you need to "stretch" is just <span class="math-container">$\frac1\lambda,$</span>
so <span class="math-container">$a = \frac {a'}\lambda.$</span></p>
<p>"Stretching" doesn't alter the "height" of the circle/ellipse, so
<span class="math-container">$b = a'.$</span></p>
<p>It is possible for this construction to fail, in particular if <span class="math-container">$p < \frac d2.$</span>
But in that case there is no ellipse matching the one you want.
The two given points and their tangent angles (robot starting and ending configuration)
and the given orientation of the axes of the ellipse (parallel to the <span class="math-container">$x$</span> and <span class="math-container">$y$</span> axes) can overconstrain the ellipse.</p>
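A numerical sanity check of the construction, with made-up illustrative values $c=2$, $d=1$, $p=1.5$ (any values with $p>\frac d2$ should work; the center formula below follows from requiring $(-a',0)$ to be equidistant from $(0,0)$ and $(-\lambda c,d)$):

```python
import math

# Assumed illustrative data: tangent point (-c, d), tangent meets
# the y-axis at (0, p); the ellipse is tangent to the y-axis at the origin.
c, d, p = 2.0, 1.0, 1.5                               # requires p > d/2

lam = (1 / c) * math.sqrt(p**2 - (d - p)**2)          # horizontal shrink factor
a_prime = (lam**2 * c**2 + d**2) / (2 * lam * c)      # circle radius, center (-a', 0)
a, b = a_prime / lam, a_prime                          # semi-axes after "stretching"

# (-c, d) lies on ((x + a)/a)^2 + (y/b)^2 = 1 ...
on_ellipse = ((-c + a) / a) ** 2 + (d / b) ** 2
# ... and the tangent slope there (implicit differentiation) matches
# the slope of the chord from (-c, d) to (0, p):
slope = -b**2 * (-c + a) / (a**2 * d)

print(round(on_ellipse, 9), round(slope, 9), (p - d) / c)  # 1.0 0.25 0.25
```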
|
192,020 | <p>I suspect this is a duplicate, but I can't seem to find what I'm looking for.</p>
<p>A routine problem I have is the following.</p>
<p>I have a set of data in three (or two, or more) lists:</p>
<pre><code>l1={a1, a2, a3}
l2={b1, b2, b3, b4}
l3={{c1, c2, c3, c4}, {d1, d2, d3, d4}, {e1, e2, e3, e4}}
</code></pre>
<p>where <code>c1</code> is a result under condition <code>{a1, b1}</code>, <code>c2</code> is a result under condition <code>{a1, b2}</code>, etc.</p>
<p>I want to create the list:</p>
<pre><code>{{a1, b1, c1}, {a1, b2, c2}, {a1, b3, c3},{a1, b4, c4}, {a2, b1, d1}, ...}
</code></pre>
<p>in preparation for creating a string to export to a text file. </p>
<p>My current solution:</p>
<pre><code>Map[Transpose[{l2, #}] &, l3]
MapIndexed[Prepend[#1, l1[[#2[[1]]]]] &, %, {2}]
Flatten[%, 1]
</code></pre>
<p>This works, but the solution isn't intuitive to me, which makes me think there's a better way. </p>
<p>Is there a preferred approach for this task?</p>
| Roman | 26,598 | <p>This will generate a nested list, in accordance with <code>l3</code>:</p>
<pre><code>MapThread[Append, {Outer[List, l1, l2], l3}, 2]
</code></pre>
<blockquote>
<p>{{{a1, b1, c1}, {a1, b2, c2}, {a1, b3, c3}, {a1, b4, c4}}, {{a2, b1, d1}, {a2, b2, d2}, {a2, b3, d3}, {a2, b4, d4}}, {{a3, b1, e1}, {a3, b2, e2}, {a3, b3, e3}, {a3, b4, e4}}}</p>
</blockquote>
<p><code>Flatten</code>ing once will give you what you want:</p>
<pre><code>Flatten[MapThread[Append, {Outer[List, l1, l2], l3}, 2], 1]
</code></pre>
<blockquote>
<p>{{a1, b1, c1}, {a1, b2, c2}, {a1, b3, c3}, {a1, b4, c4}, {a2, b1, d1}, {a2, b2, d2}, {a2, b3, d3}, {a2, b4, d4}, {a3, b1, e1}, {a3, b2, e2}, {a3, b3, e3}, {a3, b4, e4}}</p>
</blockquote>
<p>If you're not interested in the nested list above, then you can get straight to the result with</p>
<pre><code>MapThread[Append, {Tuples[{l1, l2}], Flatten[l3]}]
</code></pre>
<blockquote>
<p>{{a1, b1, c1}, {a1, b2, c2}, {a1, b3, c3}, {a1, b4, c4}, {a2, b1, d1}, {a2, b2, d2}, {a2, b3, d3}, {a2, b4, d4}, {a3, b1, e1}, {a3, b2, e2}, {a3, b3, e3}, {a3, b4, e4}}</p>
</blockquote>
<p>(in effect, <code>Flatten</code>ing before <code>MapThread</code>ing).</p>
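For comparison (this is a sketch, not part of the Mathematica answer), the same reshaping in Python: `itertools.product` plays the role of `Tuples[{l1, l2}]`, and a flat generator plays the role of `Flatten[l3]`:

```python
from itertools import product

l1 = ["a1", "a2", "a3"]
l2 = ["b1", "b2", "b3", "b4"]
l3 = [["c1", "c2", "c3", "c4"],
      ["d1", "d2", "d3", "d4"],
      ["e1", "e2", "e3", "e4"]]

# pair every (ai, bj) in row-major order with the corresponding result
flat = [[x, y, r] for (x, y), r in zip(product(l1, l2),
                                       (v for row in l3 for v in row))]
print(flat[0], flat[4])  # ['a1', 'b1', 'c1'] ['a2', 'b1', 'd1']
```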
|
341,202 | <p>The divisibility rule for $3$ is well-known: if you add up the digits of $n$ and the sum is divisible by $3$, then $n$ is divisible by three. This is quite helpful for determining if really large numbers are multiples of three, because we can recursively apply this rule:</p>
<p>$$1212582439 \rightarrow 37 \rightarrow 10\rightarrow 1 \implies 3\not\mid 1212582439$$
$$124524 \rightarrow 18 \rightarrow 9 \implies 3\mid 124524$$</p>
<p>This works for as many numbers as I've tried. However, I'm not sure how this may be <em>proven</em>. Thus, my question is:</p>
<blockquote>
<p>Given a positive integer $n$ and that $3\mid\text{(the sum of the digits of $n$)}$, how may we prove that $3\mid n$?</p>
</blockquote>
| Brian M. Scott | 12,042 | <p>HINT: Suppose that you have a four-digit number $n$ that is written $abcd$. Then</p>
<p>$$\begin{align*}
n&=10^3a+10^2b+10c+d\\
&=(999+1)a+(99+1)b+(9+1)c+d\\
&=(999a+99b+9c)+(a+b+c+d)\\
&=3(333a+33b+3c)+(a+b+c+d)\;,
\end{align*}$$</p>
<p>so when you divide $n$ by $3$, you’ll get </p>
<p>$$333a+33b+3c+\frac{a+b+c+d}3\;.$$</p>
<p>The remainder is clearly going to come from the division $\frac{a+b+c+d}3$, since $333a+33b+3c$ is an integer.</p>
<p>Now generalize: make a similar argument for any number of digits, not just four. (If you know about congruences and modular arithmetic, you can do it very compactly.)</p>
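The generalized statement is easy to sanity-check by brute force before proving it; a small Python sketch that also reproduces the worked digit sums from the question:

```python
def digit_sum(n):
    return sum(int(d) for d in str(n))

# the rule holds for every n in a brute-force range ...
ok = all((n % 3 == 0) == (digit_sum(n) % 3 == 0) for n in range(1, 10_000))
# ... and reproduces the examples 1212582439 -> 37 and 124524 -> 18
print(ok, digit_sum(1212582439), digit_sum(124524))  # True 37 18
```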
|
341,202 | <p>The divisibility rule for $3$ is well-known: if you add up the digits of $n$ and the sum is divisible by $3$, then $n$ is divisible by three. This is quite helpful for determining if really large numbers are multiples of three, because we can recursively apply this rule:</p>
<p>$$1212582439 \rightarrow 37 \rightarrow 10\rightarrow 1 \implies 3\not\mid 1212582439$$
$$124524 \rightarrow 18 \rightarrow 9 \implies 3\mid 124524$$</p>
<p>This works for as many numbers as I've tried. However, I'm not sure how this may be <em>proven</em>. Thus, my question is:</p>
<blockquote>
<p>Given a positive integer $n$ and that $3\mid\text{(the sum of the digits of $n$)}$, how may we prove that $3\mid n$?</p>
</blockquote>
| Arthur | 15,500 | <p>How about induction?</p>
<p>It is obviously true for the one-digit numbers $3, 6$ and $9$, so we have our base case (really, just the case $3$ is all it takes, but I like to be on the safe side when it comes to induction).</p>
<p>Now, let's say that we have a number divisible by $3$, and let's call it $n$. We can also assume that the sum of the digits of $n$ is divisible by $3$. I want to show that the sum of digits of $n+3$ is also divisible by $3$. If that is the case, then we are done, for the induction principle takes care of any case for us from there.</p>
<p>The sum of the digits of $n$ is some number, let's call it $m$, and this number is assumed to be divisible by $3$. Now, if we're lucky, the sum of digits in $n+3$ is just $m+3$, and by lucky I mean there is no carry involved. So, if there is no carry involved in adding $3$ to $n$, then we are done.</p>
<p>If there is a carry, however, then let's pretend for a second that the last digit of $n$ can surpass $9$. Were that the case, the sum of digits of $n+3$ would really <em>be</em> $m+3$. This is sadly not the case, but what really happens when we do the carry? We subtract $10$ from the $1$-digit, and add $1$ to the $10$-digit. This will have the net effect on the sum of digits that we subtract $9$, so in that case the sum of digits in $n+3$ is $m+3-9 = m-6$, which is still divisible by $3$, so there is no problem!</p>
<p>"Hold on there, not so fast", you say. "What if adding $1$ to the $10$-s digit makes a carry happen there?" Well, my enlightened reader, in that case the same argument as in the paragraph above would apply, only moved one space to the left in the digits of $n$. The net effect: the sum of digits of $n+3$ is $m-6-9 = m-15$, still divisible by $3$. If there is a carry from the hundreds-digit, then we will subtract another $9$ for a total of $m-24$. And so on. You will never make a carry like that take $m+3$ out of divisible-by-three-space. And this concludes the proof.</p>
|
341,202 | <p>The divisibility rule for $3$ is well-known: if you add up the digits of $n$ and the sum is divisible by $3$, then $n$ is divisible by three. This is quite helpful for determining if really large numbers are multiples of three, because we can recursively apply this rule:</p>
<p>$$1212582439 \rightarrow 37 \rightarrow 10\rightarrow 1 \implies 3\not\mid 1212582439$$
$$124524 \rightarrow 18 \rightarrow 9 \implies 3\mid 124524$$</p>
<p>This works for as many numbers as I've tried. However, I'm not sure how this may be <em>proven</em>. Thus, my question is:</p>
<blockquote>
<p>Given a positive integer $n$ and that $3\mid\text{(the sum of the digits of $n$)}$, how may we prove that $3\mid n$?</p>
</blockquote>
| lab bhattacharjee | 33,337 | <p>Using Property$\#10$ of <a href="http://mathworld.wolfram.com/Congruence.html" rel="nofollow noreferrer">this</a> (<a href="https://math.stackexchange.com/questions/188657/why-an-bn-is-divisible-by-a-b"> indirect Proof</a>),</p>
<p>as $10\equiv1\pmod9,10^r\equiv1^r\equiv1$ for integer $r\ge0$</p>
<p>$$\implies\sum_{r=0}^n10^ra_r\equiv\sum_{r=0}^na_r\pmod9$$</p>
|
1,687,147 | <blockquote>
<p>A category <span class="math-container">$\mathsf C$</span> consists of the following three mathematical entities:</p>
<ul>
<li><p>A class <span class="math-container">$\operatorname{ob}(\mathsf{C})$</span>, whose elements are called objects;</p>
</li>
<li><p>A class <span class="math-container">$\hom(\mathsf{C})$</span>, whose elements are called morphisms or maps or arrows. Each morphism <span class="math-container">$f$</span> has a source object <span class="math-container">$a$</span> and target object <span class="math-container">$b$</span>.</p>
</li>
<li><p>A binary operation <span class="math-container">$\circ$</span>, called composition of morphisms, such that for any three objects <span class="math-container">$a$</span>, <span class="math-container">$b$</span>, and <span class="math-container">$c$</span>, we have <span class="math-container">$\hom(b, c) \times \hom(a, b) \to \hom(a, c)$</span>. The composition of <span class="math-container">$f : a \to b$</span> and <span class="math-container">$g : b \to c$</span> is written as <span class="math-container">$g \circ f$</span> or <span class="math-container">$gf$</span>, governed by two axioms: [...]</p>
</li>
</ul>
</blockquote>
<p>What the exact meaning of 'consist of' in the first sentence? Of course, I know the usual meaning. However, since it is not a mathematical term, I don't know the mathematical meaning of 'consists of'.</p>
| SixWingedSeraph | 318 | <p>To say an object "consists of" followed by a list of entities means that entities are the data types you have to specify to describe the object.</p>
<p>"Consists of" in the case of the definition of category means that to define a category, you have to specify three mathematical objects: two classes and a binary operation, and they must satisfy the requirements in the definition. </p>
<p>The definition using an ordered triple also works, but it has the minor problem that the three entities are specified in order, which adds a superfluous fact that is not necessary to the definition. No one thinks that the "first part" of a category is its class of objects. It is not wrong to define a category as a triple, but it certainly adds unnecessary structure and is therefore inelegant.</p>
|
2,987,994 | <p>I'm looking for definite integrals that are solvable using the method of differentiation under the integral sign (also called the Feynman Trick) in order to practice using this technique.</p>
<p>Does anyone know of any good ones to tackle?</p>
| Community | -1 | <p>Another example is in evaluating
<span class="math-container">$$\displaystyle \int_0^\infty \dfrac{\cos xdx}{1+x^2}$$</span></p>
<p>by first considering
<span class="math-container">$$I\left(a\right)=\int_{0}^{\infty}\frac{\sin\left(ax\right)}{x\left(1+x^{2}\right)}dx,\,a>0$$</span> we have <span class="math-container">$$I'\left(a\right)=\int_{0}^{\infty}\frac{\cos\left(ax\right)}{1+x^{2}}dx$$</span>
From which it can be shown that
<span class="math-container">$$I\left(a\right)=\frac{\pi}{2}\left(1-e^{-a}\right)$$</span>
hence
<span class="math-container">$$\lim_{a\rightarrow1}I'\left(a\right)=\frac{\pi }{2e}.$$</span></p>
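The closed form is easy to corroborate numerically; a sketch using a plain midpoint rule with NumPy (assuming NumPy is available; the truncation point `X` is an arbitrary choice, and integrating the tail by parts shows it contributes only $O(1/X^2)$):

```python
import numpy as np

# midpoint rule for I'(1) = integral_0^inf cos(x)/(1+x^2) dx, truncated at X
X, steps = 2000.0, 2_000_000
h = X / steps
x = (np.arange(steps) + 0.5) * h
approx = float(np.sum(np.cos(x) / (1 + x**2)) * h)

print(round(approx, 4), round(np.pi / (2 * np.e), 4))  # both ≈ 0.5779
```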
|
2,386,182 | <p>Let $f:[a,b]\to \mathbb{R}$ is an increasing function and for any $y\in[f(a),f(b)],$ there exists a $\xi\in[a,b]$ such that $f(\xi)=y.$ Show that $f(.)$ is continuous on $[a,b]. $</p>
<p>Here is my argument, but I got stuck on the last part.</p>
<p>My goal is to show $$\lim_{n\to \infty}f(x_0+\xi_n)=\lim_{n\to\infty}f(x_0-\xi_n)=f(x_0),$$
where $\xi_n\downarrow0.$</p>
<p>As the sequence $f(x_0+\xi_n)$ is decreasing and bounded below by $f(x_0),$ it converges. Similarly, the sequence $f(x_0-\xi_n)$ converges. I want to show both of them converge to $f(x_0)$. </p>
<p>Suppose not, that is $U:=\lim_{n\to\infty}f(x_0+\xi_n)>f(x_0)$.</p>
<p>By our assumption, there must exist some $y>x_0$ such that $f(x_0)\leq f(y)<U$. As $y>x_0$ and $\xi_n\downarrow 0$, there exists some $k\in\mathbb{N}$ such that $y>x_0+\xi_n$ for all $n\geq k.$ Hence $f(y)\geq f(x_0+\xi_n)\geq U$ for all $n\geq k$, which contradicts $f(y)<U$. I want to check whether my argument is correct, and whether there is a better solution for this question. Thank you.</p>
| Community | -1 | <p>From what I hear, J. J. Sylvester pretty much invented invariant theory. Maybe take a look at some of his work. He was apparently a pretty good mathematician... well, upon looking it up, it appears Cayley gets the credit: the idea was suggested to him by an elegant paper of Boole. Here is the <a href="https://en.m.wikipedia.org/wiki/Invariant_theory" rel="nofollow noreferrer">wiki</a>. Looking up Sylvester, he made fundamental contributions...
Then again, Encyclopædia Britannica gives credit to <a href="https://www.britannica.com/biography/James-Joseph-Sylvester" rel="nofollow noreferrer">both</a>. </p>
|
474,568 | <p>In some books I've seen this symbol $\dagger$, next to some theorem's name, and I don't know what it means. I've googled it with no results which makes me suspect it's not standard.</p>
<p>Does anybody know what it means? One example I'm looking at right now is in a probability book, next to a section about Stirling's approximation to factorials:</p>
<blockquote>
<p><strong>Stirling's formula ($\dagger$)</strong></p>
</blockquote>
<p>FOUND IT: The preamble says they're historical notes; the book actually gives a historical introduction in the section about Stirling's formula, in case anyone's wondering.</p>
| RSh | 56,827 | <p>It depends on the author's choice.
Some books use it to indicate the kind of a specific problem (harder, related to an article, etc.).
Some books use it to mark certain chapters/sections as independent ones (or, in your book, as historical sections).
You should refer to the book's preface to learn what the author means.
(Generally, in older typography, it is a sibling of the asterisk in footnotes, such as
$\dagger$
<img src="https://i.stack.imgur.com/pJs7B.png" alt="double dagger">
instead of $*$, $**$, ...)</p>
|
300,753 | <p>Revision in response to early comments. Users of set theory need an <em>implementation</em> (in case "model" means something different) of the axioms. I would expect something like this:</p>
<blockquote>
<p>An <em>implementation</em> consists of a "collection-of-elements" $X$, and a relation
(logical pairing) $E:X\times X\to \{0,1\}$. A logical function $h:X\to\{0,1\}$
is a <em>set</em> if it is of the form $x\mapsto E[x,a]$ for some $a\in X$. Sets are
required to satisfy the following axioms: ....</p>
</blockquote>
<p>The background "collection-of-elements" needs some properties to even get started. For instance "of the form $x\mapsto E[x,a]$" needs first-order quantification. Mathematical standards of precision seem to require <em>some</em> discussion, but so far I haven't seen anything like this. The first version of this question got answers like $X$ is "the domain of discourse" (philosophy??), "everything" (naive set theory?) "a set" (circular), and "type theory" (a postponement, not a solution). Is this a missing definition? Taking it seriously seems to give a rather fruitful perspective. </p>
| Giorgio Mossa | 14,969 | <p>If by "<em>the domain of $x \in A$</em>" you mean the objects you can put in $x$ and $A$, then the answer is everything.
This is due to the fact that in set theories such as ZFC and NBG all objects are sets/classes (they all have the same type).</p>
<p>I am assuming you are thinking of $x \in A$ as an operation that associates to a pair of sets $(x,A)$ a truth value. This way of thinking is fine as long as you consider the concept of operation as a primitive one and you do not identify it with the set-theoretically defined one. </p>
<p>I hope this helps.</p>
|
300,753 | <p>Revision in response to early comments. Users of set theory need an <em>implementation</em> (in case "model" means something different) of the axioms. I would expect something like this:</p>
<blockquote>
<p>An <em>implementation</em> consists of a "collection-of-elements" $X$, and a relation
(logical pairing) $E:X\times X\to \{0,1\}$. A logical function $h:X\to\{0,1\}$
is a <em>set</em> if it is of the form $x\mapsto E[x,a]$ for some $a\in X$. Sets are
required to satisfy the following axioms: ....</p>
</blockquote>
<p>The background "collection-of-elements" needs some properties to even get started. For instance "of the form $x\mapsto E[x,a]$" needs first-order quantification. Mathematical standards of precision seem to require <em>some</em> discussion, but so far I haven't seen anything like this. The first version of this question got answers like $X$ is "the domain of discourse" (philosophy??), "everything" (naive set theory?) "a set" (circular), and "type theory" (a postponement, not a solution). Is this a missing definition? Taking it seriously seems to give a rather fruitful perspective. </p>
| Burak | 33,039 | <p>Perhaps, your confusion may be resolved by realizing that we do <strong>not</strong> define what a set is, using the axioms of ZFC. Sets are to us like points are to Euclid. Sets are the primitive objects that we are going to work with.</p>
<p>Let me take a Platonist approach to elaborate. When you set up your axiomatic system, which is ZFC in this case, you assume that there is a universe of objects over which your quantification takes place. (Otherwise, you cannot attach semantics to your system.)</p>
<p><em>Sets are simply the objects in the universe. Nothing more, nothing less.</em> When you include a binary relation symbol $\in$ in the language of your axiomatic system, you assume that between any two objects $x$ and $y$ in your universe, the atomic formula $x \in y$ is either true or false. So, the answer to your question "what is the domain of this function?" is the following: The Platonic universe of sets, which is somewhere in the sky!</p>
<p>Whether a sentence such as $\forall x \exists y \neg y \in x$ is true or not depends on whether for every set $x$ there is a set $y$ such that $y \in x$ does not hold. Since we do not have direct access to the Platonic universe of sets via our usual senses, we cannot directly check if this is the case. Consequently, we postulate that some statements about the universe of sets are true, namely, the axioms of ZFC. We then study the logical consequences of these axioms. Notice that the statement $\emptyset \in \omega$ is not true because we have some kind of logical function $\cdot \in \omega$ which checks membership in $\omega$; rather, it is true because it follows from the axioms, which posit various facts about the relation $x \in y$.</p>
<p>I admit that I don't fully understand what your problem is. But as you can see, you may give a meaning to all these without circular reasoning. You may also take a formalist approach and simply think of <em>the game of proving</em> the logical consequences of the axioms of ZFC without worrying about questions such as "what is a set?", "what does $x \in y$ mean?".</p>
|
3,809,699 | <p>When we define a norm on a vector space, we usually consider only real or complex vector spaces. But can we generalize the norm to a vector space over an arbitrary field? I think this can be done, but we have to define a suitable modulus function on the ground field so that the property $\|cx\|=|c|\,\|x\|$ is meaningful, i.e. so that $|c|$ makes sense. Is it possible to define a norm in this general situation? And why do we only bother about real or complex vector spaces?</p>
| Community | -1 | <p>We can (at least sometimes) even define a norm on a module over a ring, a notion which generalizes that of a vector space.</p>
<p>One interesting example is the Gaußian integers, <span class="math-container">$\Bbb Z[i]$</span>. An arbitrary element looks like <span class="math-container">$a+bi, a,b\in\Bbb Z$</span>.</p>
<p>If you define the "norm" by <span class="math-container">$N(a+bi)=a^2+b^2$</span>, it has the nice property of being multiplicative.</p>
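The multiplicativity $N(zw)=N(z)N(w)$ is quick to verify exhaustively on a small range; a Python sketch representing $a+bi$ as the pair $(a,b)$:

```python
def N(z):
    # N(a + bi) = a^2 + b^2
    a, b = z
    return a * a + b * b

def mul(z, w):
    # (a + bi)(c + di) = (ac - bd) + (ad + bc)i
    a, b = z
    c, d = w
    return (a * c - b * d, a * d + b * c)

rng = range(-5, 6)
ok = all(N(mul((a, b), (c, d))) == N((a, b)) * N((c, d))
         for a in rng for b in rng for c in rng for d in rng)
print(ok)  # True
```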
|
3,482,476 | <p>For an arbitrary <span class="math-container">$0\leqslant x \leq\frac{\pi^2}6$</span>, can we write <span class="math-container">$x$</span> in the form
<span class="math-container">$$
x = x_0+\sum_{j\in S\subset\mathbb N\setminus\{0\}} \frac1{j^2}, \tag 1
$$</span></p>
<p>where <span class="math-container">$x_0\in\{0,1\}$</span>.</p>
<p>My motivation is this: Let <span class="math-container">$X_n\stackrel{\mathrm{i.i.d.}}\sim\mathrm{Ber}(p)$</span>, <span class="math-container">$Y_n = \frac{X_n}{n^2}$</span>, and <span class="math-container">$S_n = \sum_{k=0}^n Y_k$</span>. Then <span class="math-container">$S_n$</span> converges weakly to some random variable <span class="math-container">$S$</span>. I would like to know whether <span class="math-container">$S$</span> is continuous, i.e. takes all values over <span class="math-container">$\left[0,\frac{\pi^2}6\right)$</span>, or if <span class="math-container">$S$</span> is discrete (takes values in some countable subset of <span class="math-container">$\left[0,\frac{\pi^2}6\right)$</span>). The former is true if the representation of elements of <span class="math-container">$\mathbb R$</span> described in (1) is correct, and the latter is true if not. Note that <span class="math-container">$\mathbb P(Y\leqslant \frac{\pi^2}6)=1$</span> because <span class="math-container">$Y\leqslant\sum_{j=1}^\infty \frac1{j^2}=\frac{\pi^2}6$</span> a.s.</p>
<p>My gut feeling is that <span class="math-container">$S$</span> is discrete, as there will be values <span class="math-container">$\frac jk$</span> which cannot be obtained by finite sums of elements of <span class="math-container">$\{\frac1{m^2}:m=1,2,\ldots n\}$</span> no matter how large <span class="math-container">$n$</span> is. But I do not know how to show this rigorously. Advice on how to show this, and hints what the distribution of <span class="math-container">$S$</span> looks like would be appreciated.</p>
| Zarrax | 3,035 | <p>The sum <span class="math-container">$\sum_{n=2}^{\infty} {1 \over n^2}$</span> is less than <span class="math-container">$1$</span>, so if <span class="math-container">$r$</span> satisfies <span class="math-container">$\sum_{n=2}^{\infty} {1 \over n^2} < r < 1$</span> you won't be able to express <span class="math-container">$r$</span> in the desired form. If you include <span class="math-container">$1$</span> in the sum, the result would be greater than <span class="math-container">$r$</span>, and if you didn't include <span class="math-container">$1$</span> the result would be at most <span class="math-container">$\sum_{n=2}^{\infty} {1 \over n^2} < r$</span>.</p>
|
1,142,624 | <p>Find a generating function for $\{a_n\}$ where $a_0=1$ and $a_n=a_{n-1} + n$</p>
| kobe | 190,421 | <p>Multiply both sides of the recurrence by $x^n$ and sum over all $n\ge 1$ to get </p>
<p>\begin{align}\sum_{n \ge 1} a_n x^n &= \sum_{n \ge 1} a_{n-1}x^n + \sum_{n\ge 1} nx^n\\
\sum_{n\ge 1} a_nx^n &= x\sum_{n\ge 1} a_{n-1}x^{n-1} + x\sum_{n\ge 1} nx^{n-1}\\
\sum_{n\ge 1} a_n x^n &= x\sum_{n\ge 0} a_n x^n + x\frac{d}{dx}\sum_{n\ge 1} x^n\\
\sum_{n\ge 0} a_n x^n - 1&= x \sum_{n\ge 0} a_n x^n + x\frac{d}{dx}\frac{x}{1 - x}\\
(1 - x)\sum_{n\ge 0} a_n x^n &= 1 + \frac{x}{(1 - x)^2}\\
\sum_{n\ge 0} a_n x^n &= \frac{1}{1 - x} + \frac{x}{(1 - x)^3}.
\end{align} </p>
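<p>A quick numerical cross-check of this closed form (an added Python sketch, not part of the original derivation): the coefficient of $x^n$ in $\frac{1}{1-x}$ is $1$, and in $\frac{x}{(1-x)^3}$ it is $\binom{n+1}{2}$, so the generating function predicts $a_n = 1 + \binom{n+1}{2}$ for $n \ge 1$, which should match the recurrence term by term:</p>

```python
from math import comb

def gf_coeff(n: int) -> int:
    """Coefficient of x^n in 1/(1-x) + x/(1-x)^3,
    using 1/(1-x)^3 = sum_k C(k+2, 2) x^k."""
    return 1 + (comb(n + 1, 2) if n >= 1 else 0)

def recurrence_terms(n: int) -> list:
    """a_0 = 1, a_k = a_{k-1} + k, returned as [a_0, ..., a_n]."""
    a = [1]
    for k in range(1, n + 1):
        a.append(a[-1] + k)
    return a
```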
|
1,814,216 | <p>I was trying to show that $\sin(x)$ is non-zero for integers $x$ other than zero and I thought that this result might emerge as a corollary if I managed to show that the result in question is true. </p>
<p>I think it's possible to demonstrate this by looking at the power series expansion of $\sin(x)$ and assuming that we don't know anything about the existence of $\pi$. </p>
<p>All of the answers below insist that the proposition '$\exists p,q \in \mathbb{Z}, \sin(p)=\sin(q)$' -where $\sin(x)$ is the power series representation-is undecidable without using the properties of $\pi$. If so this is a truly wonderful conjecture and I would like to be provided with a proof. Until then, I insist that methods for analyzing infinite series from analysis should suffice to show that the proposition is false. </p>
<hr>
<p><strong>Note:</strong> In a previous version of this post the question said "bijective" instead of "injective". Some of the answers below have answered the first version of this post.</p>
| egreg | 62,967 | <p>You have $\sin m=\sin n$ if and only if $m=n+2k\pi$ or $m=\pi-n+2k\pi$ (for some integer $k$).</p>
<p>In the first case, if $k\ne0$, you get $\pi=(m-n)/(2k)$ is rational. In the second case, $\pi=(m+n)/(2k+1)$.</p>
<p>Since $\pi$ is irrational, the only possibility is $k=0$ and $m=n$.</p>
<hr>
<p>You can't prove injectivity without the knowledge that $\pi$ is irrational. Indeed, if $\pi=p/q$ with $p$ and $q$ positive integers, then
$\sin(p)=\sin(p+2q\pi)=\sin(3p)$, so the function would not be injective.</p>
<hr>
<p>Proofs of the irrationality of $\pi$ based on the power series expansion can be seen at <a href="https://en.wikipedia.org/wiki/Proof_that_%CF%80_is_irrational" rel="nofollow">Proof that $π$ is irrational</a></p>
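<p>As a finite (and therefore non-conclusive) illustration of the injectivity claim, one can check numerically that the sines of distinct integers in a bounded range are separated by a positive gap — consistent with, though of course not a proof of, the argument above:</p>

```python
from math import sin

def min_sine_gap(bound: int = 50) -> float:
    """Smallest |sin(m) - sin(n)| over distinct integers m, n in [-bound, bound].
    Sorting the values reduces the pairwise minimum to adjacent differences."""
    vals = sorted(sin(m) for m in range(-bound, bound + 1))
    return min(b - a for a, b in zip(vals, vals[1:]))
```

<p>The gap shrinks as the range grows (because rational approximations to $\pi$ improve), but it never reaches zero — precisely because $\pi$ is irrational.</p>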
|
2,702,726 | <p>Find the absolute minimum and maximum values of,</p>
<p>$$f(x) = 2 \sin(x) + \cos^2 (x) \text{ on } [0, 2\pi]$$</p>
<p>What I did so far is</p>
<p>$$f'(x) = 2\cos(x) -2 \cos(x) \sin(x)$$</p>
<p>Could someone please help me get started?</p>
| DeepSea | 101,504 | <p><strong>hint:</strong> $\cos^2x = 1 -\sin^2x \implies f(x) = 2-\left(1-\sin x\right)^2$. Then use the fact that $-1 \le \sin x \le 1$ to obtain the min and max.</p>
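<p>Following the hint, $f$ ranges over $2-(1-s)^2$ with $s=\sin x\in[-1,1]$, so the maximum is $2$ (at $s=1$, i.e. $x=\pi/2$) and the minimum is $-2$ (at $s=-1$, i.e. $x=3\pi/2$). A small numerical check (my own added sketch, not part of the hint):</p>

```python
from math import sin, cos, pi

def f(x: float) -> float:
    return 2 * sin(x) + cos(x) ** 2

# Dense sample of [0, 2*pi]; the hint predicts max 2 at pi/2 and min -2 at 3*pi/2.
xs = [2 * pi * k / 100000 for k in range(100001)]
fmax = max(f(x) for x in xs)
fmin = min(f(x) for x in xs)
```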
|
2,431,027 | <p>I am asking about changing the limits of integration. </p>
<p>I have the following integral to evaluate - </p>
<p>$$\int_2^{3}\frac{1}{(x^2-1)^{\frac{3}{2}}}dx$$ using the substitution $x = \sec \theta$. </p>
<p>The problem states</p>
<p><strong>Use the substitution to change the limits into the form $\int_a^b$ where $a$ and $b$ are multiples of $\pi$.</strong></p>
<p>Now, this is what I did. </p>
<p>$$ x= \sec \theta$$
$$\frac{dx}{d\theta} = \sec\theta \tan\theta$$
$$dx = \sec\theta \tan\theta \, d\theta$$</p>
<p>$$\begin{align}\int_2^{3}\frac{1}{(x^2-1)^{\frac{3}{2}}}\,dx \\
&= \int\frac{1}{(\sec^2\theta-1)^{\frac{3}{2}}}\sec\theta \tan\theta \,d\theta \\
&= \int\frac{\sec\theta \tan\theta}{(\tan^2\theta)^{\frac{3}{2}}} \,d\theta \\
&= \int\frac{\sec\theta \tan\theta}{\tan^3\theta} \, d\theta \\
&= \int\frac{\sec\theta}{\tan^2\theta} \, d\theta \\
&= \int\frac{\cos\theta}{\sin^2\theta} \, d\theta \\
&= \int \csc\theta \cot\theta \, d\theta
\end{align}$$</p>
<p>But here is my problem.
I know that when $x = 2$,
$$2 = \sec \theta$$
$$\frac{1}{2} = \cos \theta$$
$$\frac{\pi}{3} = \theta$$</p>
<p>but when $x = 3$
$$3 = \sec \theta$$
$$\frac{1}{3} = \cos \theta$$
$$\arccos\left(\frac{1}{3}\right) = \theta,$$
but this does not give me a definite result in $\pi$.
The book says the following - </p>
<p><a href="https://i.stack.imgur.com/SPvci.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SPvci.png" alt="enter image description here"></a></p>
<p>where $\arccos\left(\frac{1}{3}\right) = \frac{\pi}{3}$
Am I the only one or is the book wrong in this instance?</p>
| egreg | 62,967 | <p>For simplicity, write $a=\arccos\frac{1}{2}$ and $b=\arccos\frac{1}{3}$. One can take care of them at the end.</p>
<p>It is true that $a=\pi/3$, but it's definitely wrong that $b=\pi/3$. Actually, $b\approx1.230959$, but you won't need that.</p>
<p>The integral becomes
$$
\int_a^b\frac{1}{(\sec^2\theta-1)^{3/2}}\sec\theta\tan\theta\,d\theta
$$
Now it's better to write the integrand in terms of sine and cosine, recalling that the interval of integration is contained in $(0,\pi/2)$. Thus
$$
\sec^2\theta-1=
\frac{1-\cos^2\theta}{\cos^2\theta}=
\frac{\sin^2\theta}{\cos^2\theta}
$$
so the integrand is
$$
\frac{\cos^3\theta}{\sin^3\theta}\cdot\frac{\sin\theta}{\cos^2\theta}=
\frac{\cos\theta}{\sin^2\theta}
$$
Thus we get
$$
\int_2^{3}\frac{1}{(x^2-1)^{3/2}}\,dx=
\int_a^b\frac{\cos\theta}{\sin^2\theta}\,d\theta=
\int_a^b\frac{1}{\sin^2\theta}\,d(\sin\theta)=
\left[-\frac{1}{\sin\theta}\right]_a^b
$$
Since $a=\arccos(1/2)$ and $b=\arccos(1/3)$, we get
$$
\sin a=\sqrt{1-\cos^2a}=\frac{\sqrt{3}}{2}
\qquad
\sin b=\sqrt{1-\cos^2b}=\frac{2\sqrt{2}}{3}
$$
with no “sign uncertainty”, because $a$ and $b$ lie in $(0,\pi/2)$.</p>
<p>Thus the integral is
$$
\frac{1}{\sin a}-\frac{1}{\sin b}=
\frac{2}{\sqrt{3}}-\frac{3}{2\sqrt{2}}=
\frac{2\sqrt{3}}{3}-\frac{3\sqrt{2}}{4}
$$</p>
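<p>The value can be cross-checked without any trigonometry (an added numerical sketch): differentiating $-x/\sqrt{x^2-1}$ gives exactly $(x^2-1)^{-3/2}$, so the definite integral should equal $\frac{2}{\sqrt 3}-\frac{3}{2\sqrt 2}\approx 0.09404$, and a simple Simpson rule agrees:</p>

```python
from math import sqrt

def integrand(x: float) -> float:
    return (x * x - 1.0) ** -1.5

def simpson(f, a: float, b: float, n: int = 1000) -> float:
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

numeric = simpson(integrand, 2.0, 3.0)
closed_form = 2 / sqrt(3) - 3 / (2 * sqrt(2))  # = [-x/sqrt(x^2-1)] from 2 to 3
```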
|
1,694,159 | <p>I am prepping for my mid semester exam, and came across with the following question:</p>
<blockquote>
<p>Find the closed form for the sum $\sum_{j=k}^n (-1)^{j+k}\binom{n}{j}\binom{j}{k}$, using the assumption that $k = 0, 1,...n$ and $n$ can be any natural number.</p>
</blockquote>
<p>So what I have done is to note the fact that $$\binom{n}{j}\binom{j}{k}= \frac{n!}{j!(n-j)!}\frac{j!}{k!(j-k)!}=\frac{n!}{(n-j)!\ k!\ (j-k)!}$$</p>
<p>Then we can write the summation as $$\sum_{j=k}^n {(-1)^{j+k}\binom{n}{j}\binom{j}{k}}= \sum_{j=k}^n (-1)^{j+k} \frac{n!}{(n-j)!\ k!\ (j-k)!} = \frac{n!}{k!} \sum_{j=k}^n (-1)^{j+k} \frac{1}{(n-j)!\ (j-k)!} $$</p>
<p>I tried to let $m=j-k$:
$$\frac{n!}{k!} \sum_{m=0}^{n-k} (-1)^{m+2k} \frac{1}{(n-m-k)!\ m!}=\frac{n!}{k!} \sum_{m=0}^{n-k} (-1)^{m} \frac{1}{(n-m-k)!\ m!}$$</p>
<p>But I am not sure what to proceed next. Any help would be highly appreciated!</p>
| mathlove | 78,967 | <p>Note that </p>
<p>$$\frac{1}{(n-m-k)!\ m!}=\frac{1}{(n-k)!}\cdot \frac{(n-k)!}{(n-m-k)!\ m!}=\frac{1}{(n-k)!}\binom{n-k}{m}$$
Then, you'll have
$$\frac{n!}{k!\ (n-k)!}\sum_{m=0}^{n-k}(-1)^m\cdot 1^{n-k-m}\cdot\binom{n-k}{m}$$</p>
|
1,694,159 | <p>I am prepping for my mid semester exam, and came across with the following question:</p>
<blockquote>
<p>Find the closed form for the sum $\sum_{j=k}^n (-1)^{j+k}\binom{n}{j}\binom{j}{k}$, using the assumption that $k = 0, 1,...n$ and $n$ can be any natural number.</p>
</blockquote>
<p>So what I have done is to note the fact that $$\binom{n}{j}\binom{j}{k}= \frac{n!}{j!(n-j)!}\frac{j!}{k!(j-k)!}=\frac{n!}{(n-j)!\ k!\ (j-k)!}$$</p>
<p>Then we can write the summation as $$\sum_{j=k}^n {(-1)^{j+k}\binom{n}{j}\binom{j}{k}}= \sum_{j=k}^n (-1)^{j+k} \frac{n!}{(n-j)!\ k!\ (j-k)!} = \frac{n!}{k!} \sum_{j=k}^n (-1)^{j+k} \frac{1}{(n-j)!\ (j-k)!} $$</p>
<p>I tried to let $m=j-k$:
$$\frac{n!}{k!} \sum_{m=0}^{n-k} (-1)^{m+2k} \frac{1}{(n-m-k)!\ m!}=\frac{n!}{k!} \sum_{m=0}^{n-k} (-1)^{m} \frac{1}{(n-m-k)!\ m!}$$</p>
<p>But I am not sure what to proceed next. Any help would be highly appreciated!</p>
| Yakov Shklarov | 222,048 | <p>Here is a cleaner way to solve it, without mucking around with factorials.</p>
<p>It's useful to know the "trinomial revision" identity,
$$\binom{x}{n}\binom{n}{m} = \binom{x}{m}\binom{x-m}{n-m},$$
which holds for all reals $x$ and integers $m,n$. The combinatorial interpretation (under a condition that certain values be nonnegative integers) is that these are two different expressions for the number of ways to distribute $x$ items among three boxes of sizes $m,\; n-m,$ and $x-n$. This can be called the trinomial coefficient
$$\binom{a+b+c}{a,\;b,\;c}= \frac{(a+b+c)!}{a!b!c!}.$$</p>
<p>Now we can begin. We assume $k$ and $n$ are nonnegative integers with $k \leqslant n$. Notice that after trinomial revision, the index variable $j$ appears in only one of the binomial coefficients.</p>
<p>\begin{align*}
\sum_{j=k}^n (-1)^{j+k}\binom{n}{j}\binom{j}{k} &= \binom{n}{k} \sum_{k \leqslant j \leqslant n} (-1)^{j+k}\binom{n-k}{j-k} \\
&= \binom{n}{k}\sum_{k-k \leqslant j-k\leqslant n-k}(-1)^{j-k}(-1)^{2k}\binom{n-k}{j-k} \\
&= \binom{n}{k} \sum_{0 \leqslant \ell \leqslant n-k} (-1)^\ell \binom{n-k}{\ell} \\
&= \binom{n}{k} (1+ (-1))^{n-k} \\
&= \delta_{nk}.
\end{align*}</p>
<p>The second to last line is an application of the binomial theorem. Note that $0^0 = 1$ by convention.</p>
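<p>The identity $\sum_{j=k}^n (-1)^{j+k}\binom{n}{j}\binom{j}{k} = \delta_{nk}$ is also easy to confirm by brute force (an illustrative Python check, added here, not part of the argument):</p>

```python
from math import comb

def alt_sum(n: int, k: int) -> int:
    """sum_{j=k}^{n} (-1)^(j+k) C(n, j) C(j, k); should equal delta_{nk}."""
    return sum((-1) ** (j + k) * comb(n, j) * comb(j, k)
               for j in range(k, n + 1))

all_match = all(alt_sum(n, k) == (1 if n == k else 0)
                for n in range(12) for k in range(n + 1))
```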
|
1,514,094 | <p>Given three points on the $xy$ plane on $O(0,0),A(1,0)$ and $B(-1,0)$.Point $P$ is moving on the plane satisfying the condition $(\vec{PA}.\vec{PB})+3(\vec{OA}.\vec{OB})=0$<br>
If the maximum and minimum values of $|\vec{PA}||\vec{PB}|$ are $M$ and $m$ respectively then prove that the value of $M^2+m^2=34$</p>
<hr>
<p>My Attempt:<br>
Let the position vector of $P$ be $x\hat{i}+y\hat{j}$.Then $\vec{PA}=(1-x)\hat{i}-y\hat{j}$ and $\vec{PB}=(-1-x)\hat{i}-y\hat{j}$<br>
$\vec{OA}=\hat{i},\vec{OB}=-\hat{i}$<br>
$(\vec{PA}.\vec{PB})+3(\vec{OA}.\vec{OB})=0$ gives<br>
$x^2-1+y^2-3=0$<br>
$x^2+y^2=4$<br>
$|\vec{PA}||\vec{PB}|=\sqrt{(1-x)^2+y^2}\sqrt{(-1-x)^2+y^2}=\sqrt{(x^2+y^2-2x+1)(x^2+y^2+2x+1)}$<br>
$|\vec{PA}||\vec{PB}|=\sqrt{(5-2x)(5+2x)}=\sqrt{25-4x^2}$<br>
I found the domain of $\sqrt{25-4x^2}$ which is $\frac{-5}{2}\leq x \leq \frac{5}{2}$.Then i found the minimum and maximum values of $\sqrt{25-4x^2}$ which comes out to be $M=5$ and $m=0$.So $M^2+m^2=25$<br><br>
But i need to prove $M^2+m^2=34$.Where have i gone wrong?Please help me.Thanks.</p>
| Macavity | 58,320 | <p><strong>Hint:</strong> Let $P(x) = x^5-20x^4+bx^3+cx^2+dx+e$. If $P$ has all real roots, then $P'''(x)$ must have two real roots..</p>
<p>$\implies 60x^2-480x+6b$ has real roots $\implies b \le 160$</p>
|
373,510 | <p>I was wondering if there is some generalization of the concept metric to take positive and negative and zero values, such that it can induce an order on the metric space? If there already exists such a concept, what is its name?</p>
<p>For example on $\forall x,y \in \mathbb R$, we can use difference $x-y$ as such generalization of metric.</p>
<p>Thanks and regards!</p>
| Abel | 71,157 | <p>If we have a function $\delta$ such that</p>
<ol>
<li>$\delta(x,y)=0$ if and only if $x=y$</li>
<li>$\delta(x,y) = -\delta(y,x)$</li>
<li>If $\delta(x,y)\geq 0$ and $\delta(y,z)\geq 0$, then $0\leq \delta(x,z) \leq \delta(x,y)+\delta(y,z)$.</li>
</ol>
<p>then $d(x,y) = |\delta(x,y)|$ clearly defines a metric. Furthermore $x\geq y$ if and only if $\delta(x,y)\geq 0$ defines a total order.</p>
<p>Conversely, if $d$ is a metric and $\geq$ a total order, $\delta(x,y) =\begin{cases} d(x,y) & x\geq y \\ -d(x,y) & x<y \end{cases}$ satisfies all the axioms above.</p>
|
1,349 | <p>In this question here the OP asks for hints for a problem rather than a full proof.</p>
<p><a href="https://math.stackexchange.com/questions/14477">Proof of subfactorial formula $!n = n!- \sum_{i=1}^{n} {{n} \choose {i}} \quad!(n-i)$</a></p>
<p>Now, while I would like to respect that request, I also feel that questions on this site are not intended just for the OP's benefit. This leads me to the question...</p>
<blockquote>
<p><strong>Question</strong>: Is there any way to use some form of <em>spoiler space</em>, so that it's possible to post the answer for the other readers' benefit, but at the same time hiding it from those who do not want it?</p>
</blockquote>
<p>My attempted "look at the previous version of this post" turned out a disaster. I've seen people use rot13, but that seems like a lot of fuss (and clashes with the mathematics).</p>
<p>On some sites they use white text on white background for spoily material, which, when you select with the mouse, reveals the text. Is that possible?</p>
<hr />
<p>Testing:</p>
<blockquote>
<p>! Spoiler Space</p>
<p>! More spoiler space</p>
</blockquote>
<hr />
<blockquote class="spoiler">
<p> Spoiler Space
More spoiler space</p>
</blockquote>
<hr />
<blockquote class="spoiler">
<p> What happens if I write a really long sentence. Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test</p>
</blockquote>
<hr />
<blockquote class="spoiler">
<p> Maybe it'll involve some maths like <span class="math-container">$E=mc^2$</span> or exclamation marks <span class="math-container">$n!=n \times (n-1)!$</span>.</p>
</blockquote>
| Jonas Meyer | 1,424 | <p>Good idea. I searched meta.stackoverflow.com to see if this had already come up, and I found <a href="https://meta.stackexchange.com/questions/1191/add-markdown-support-for-hidden-until-you-click-text-aka-spoilers/71396#71396">this</a>, showing that it was recently implemented. I'm going to try it:</p>
<blockquote class="spoiler">
<p> $$!n = n!- \sum_{i=1}^{n} {{n} \choose {i}} \quad!(n-i)$$</p>
</blockquote>
|
74,108 | <p>Background: I was trying to convert a MATLAB code (fluid simulation, SPH method) into a <em>Mathematica</em> one, but the speed difference is huge.</p>
<p>MATLAB code:</p>
<pre class="lang-matlab prettyprint-override"><code>function s = initializeDensity2(s)
nTotal = s.params.nTotal; %# particles
h = s.params.h;
h2Sq = (2*h)^2;
for ind1 = 1:nTotal %loop over all receiving particles; one at a time
%particle i is the receiving particle; the host particle
%particle j is the sending particle
xi = s.particles.pos(ind1,1);
yi = s.particles.pos(ind1,2);
xj = s.particles.pos(:,1); %all others
yj = s.particles.pos(:,2); %all others
mj = s.particles.mass; %all others
rSq = (xi-xj).^2+(yi-yj).^2;
%Boolean mask returns values where r^2 < (2h)^2
mask1 = rSq<h2Sq;
rSq = rSq(mask1);
mTemp = mj(mask1);
densityTemp = mTemp.*liuQuartic(sqrt(rSq),h);
s.particles.density(ind1) = sum(densityTemp);
end
</code></pre>
<p>And the corresponding <em>Mathematica</em> code:</p>
<pre><code>Needs["HierarchicalClustering`"]
computeDistance[pos_] :=
DistanceMatrix[pos, DistanceFunction -> EuclideanDistance];
initializeDensity[distance_] :=
uniMass*Total/@(liuQuartic[#,h]&/@Pick[distance,Boole[Map[#<2h&,distance,{2}]],1])
initializeDensity[computeDistance[totalPos]]
</code></pre>
<p>The data are coordinates of 1119 points, in the form of <code>{{x1,y1},{x2,y2}...}</code>, stored in <code>s.particles.pos</code> and <code>totalPos</code> respectively. And <code>liuQuartic</code> is just a polynomial function. The complete MATLAB code is way more than this, but it can run about 160 complete time steps in 60 seconds, whereas the <em>Mathematica</em> code listed above alone takes about 3 seconds to run. I don't know why there is such huge speed difference. Any thoughts is appreciated. Thanks.</p>
<p>Edit:</p>
<p>The <code>liuQuartic</code> is defined as</p>
<pre><code>liuQuartic[r_,h_]:=15/(7Pi*h^2) (2/3-(9r^2)/(8h^2)+(19r^3)/(24h^3)-(5r^4)/(32h^4))
</code></pre>
<p>and example data can be obtained by</p>
<pre><code>h=2*10^-3;conWidth=0.4;conHeight=0.16;totalStep=6000;uniDensity=1000;uniMass=1000*Pi*h^2;refDensity=1400;gamma=7;vf=0.07;eta=0.01;cs=vf/eta;B=refDensity*cs^2/gamma;gravity=-9.8;mu=0.02;beta=0.15;dt=0.00005;epsilon=0.5;
iniFreePts=Block[{},Table[{-conWidth/3+i,1.95h+j},{i,10h,conWidth/3-2h,1.5h},{j,0,0.05,1.5h}]//Flatten[#,1]&];
leftWallIniPts=Block[{x,y},y=Table[i,{i,conHeight/2-0.5h,0.2h,-0.5h}];x=ConstantArray[-conWidth/3,Length[y]];Thread[List[x,y]]];
botWallIniPts=Block[{x,y},x=Table[i,{i,-conWidth/3,-0.4h,h}];y=ConstantArray[0,Length[x]];Thread[List[x,y]]];
incWallIniPts=Block[{x,y},Table[{i,0.2125i},{i,0,(2conWidth)/3,h}]];
rightWallIniPts=Block[{x,y},y=Table[i,{i,Last[incWallIniPts][[2]]+h,conHeight/2,h}];x=ConstantArray[Last[incWallIniPts][[1]],Length[y]];Thread[List[x,y]]];
topWallIniPts=Block[{x,y},x=Table[i,{i,-conWidth/3+0.7h,(2conWidth)/3-0.7h,h}];y=ConstantArray[conHeight/2,Length[x]];Thread[List[x,y]]];
freePos = iniFreePts;
wallPos = leftWallIniPts~Join~botWallIniPts~Join~incWallIniPts~Join~rightWallIniPts~Join~topWallIniPts;
totalPos = freePos~Join~wallPos;
</code></pre>
<p>where <code>conWidth=0.4</code>, <code>conHeight=0.16</code> and <code>h=0.002</code></p>
| ybeltukov | 4,678 | <p>You miss that many Mathematica functions are <a href="http://reference.wolfram.com/language/ref/Listable.html" rel="noreferrer">Listable</a>. It allows you to write a fast and clear code</p>
<pre><code>init2[distance_] := uniMass Total[liuQuartic[distance, h] UnitStep[2 h - distance], {2}]
h = 0.1;
uniMass = 1.0;
liuQuartic[d_, h_] := d^2 - h^2;
totalPos = RandomReal[1, {1119, 2}];
res1 = initializeDensity@computeDistance[totalPos]; // AbsoluteTiming
res2 = init2@computeDistance[totalPos]; // AbsoluteTiming
res1 == res2
(* {3.088372, Null} *)
(* {0.130059, Null} *)
(* True *)
</code></pre>
<p>It seems that this code is faster than MATLAB.</p>
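<p>For readers without MATLAB or Mathematica, the shape of the masked-kernel density sum can be sketched in plain Python (my own translation of the algorithm, using the toy stand-in kernel $d^2-h^2$ from the timing test above rather than the questioner's real <code>liuQuartic</code>):</p>

```python
import random

H = 0.1
UNI_MASS = 1.0

def liu_quartic(d: float, h: float) -> float:
    # Toy stand-in kernel, matching liuQuartic[d_, h_] := d^2 - h^2 above.
    return d * d - h * h

def densities(pos):
    """For each particle i, sum the kernel over all particles j (including
    i itself, as the MATLAB loop does) whose distance from i is < 2h."""
    out = []
    for xi, yi in pos:
        s = 0.0
        for xj, yj in pos:
            d = ((xi - xj) ** 2 + (yi - yj) ** 2) ** 0.5
            if d < 2 * H:
                s += liu_quartic(d, H)
        out.append(UNI_MASS * s)
    return out

random.seed(0)
rho = densities([(random.random(), random.random()) for _ in range(200)])
```

<p>The point of both the Mathematica and the MATLAB versions is to replace the inner loop with a vectorized masked sum; this sketch keeps the explicit loop only to make the logic visible.</p>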
|
185,478 | <blockquote>
<p>How do I solve for $x$ in $$x\left(x^3+\sin(x)\cos(x)\right)-\big(\sin(x)\big)^2=0$$</p>
</blockquote>
<p>I hate when I find something that looks simple, that I should know how to do, but it holds me up. </p>
<p>I could come up with an approximate answer using Taylor's, but how do I solve this? </p>
<p>(btw, WolframAlpha tells me the answer, but I want to know how it's solved.)</p>
| Christian Blatter | 1,303 | <p>Using the identity $\cos x=1-2\sin^2(x/2)$ and introducing the function ${\rm sinc}(x):={\sin x\over x}$ we can rewrite the given function $f$ in the following way:
$$f(x)=x^2\left(x^2\left(1-{1\over2}{\rm sinc}(x){\rm sinc}^2(x/2)\right)+{\rm sinc}(x)\bigl(1-{\rm sinc}(x)\bigr)\right)\ .\qquad(*)$$
Now ${\rm sinc}(x)$ is $\geq0$ on $[0,\pi]$ and of absolute value $\leq1$ throughout. By distinguishing the cases $0<x\leq\pi$ and $x>\pi$ it can be verified by inspection that $f(x)>0$ for $x>0$. Since $f$ is even it follows that $x_0=0$ is the only real zero of $f$.</p>
<p>[One arrives at the representation $(*)$ by expanding the simple functions appearing in the given $f$ into a Taylor series around $0$ and grouping terms of the same order skillfully.]</p>
|
4,134,734 | <p>I'm trying to verify that a certain function of two variables <span class="math-container">$F(x,y)$</span> satisfies the conditions of a joint CDF. Showing that each condition holds has been fairly straightforward except, that is, for the condition that</p>
<p><span class="math-container">$a<b,c<d\implies F(b,d)-F(b,c)-F(a,d)+F(a,c)\geqslant 0$</span>.</p>
<p>I honestly don't know where to start for this one. It's easy enough when the task is to show that this condition is violated but showing that it holds is another matter.</p>
<p>For reference, the particular bivariate function I'm trying to show satisfies this condition is the following (a modification a function Jordan M. Stoyanov examines in Section 5.6 of his book Counterexamples in Probability, 3rd Edition):</p>
<p><a href="https://i.stack.imgur.com/Z0Wwm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Z0Wwm.png" alt="enter image description here" /></a></p>
<p>Any tips about how I might proceed would be appreciated.</p>
| Raskolnikov | 3,567 | <p>This is also called the 2-increasing property and if your function is continuously twice differentiable, you can equivalently check that</p>
<p><span class="math-container">$$\frac{\partial^2 F}{\partial x \partial y} \geq 0 \; .$$</span></p>
<p>However, your function is not twice differentiable on the transitions from one "block" of definition to another. For those, you can just check as @Satana suggests in the comments by considering all possible combinations of elements from each block. For example, take <span class="math-container">$0 \leq a < 1$</span>, <span class="math-container">$1 \leq b < \infty$</span>, <span class="math-container">$0 \leq c < 1/2$</span> and <span class="math-container">$1/2 \leq d < 1$</span>, then</p>
<p><span class="math-container">$$F(b,d)-F(b,c)-F(a,d)+F(a,c) = d - c - a/2 + ac \\ = d - c - a(1/2-c) \geq (1/2 - c)(1-a) \geq 0 \text{, etc...}$$</span></p>
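<p>For intuition, the rectangle inequality can also be checked mechanically on any smooth CDF with nonnegative mixed partial. Below is a generic illustration (my own addition — it uses the joint CDF of two independent $U(0,1)$ variables, not the book's $F$ pictured above):</p>

```python
import random

def rect_increment(F, a, b, c, d):
    """F-volume assigned to the rectangle (a, b] x (c, d]."""
    return F(b, d) - F(b, c) - F(a, d) + F(a, c)

def F_unif(x: float, y: float) -> float:
    """Joint CDF of two independent U(0,1) variables: g(x) * g(y) with g
    clipping to [0, 1]; its mixed partial is 1 >= 0 inside the unit square."""
    g = lambda t: min(max(t, 0.0), 1.0)
    return g(x) * g(y)

random.seed(1)
all_nonneg = True
for _ in range(5000):
    a, b = sorted(random.uniform(-0.5, 1.5) for _ in range(2))
    c, d = sorted(random.uniform(-0.5, 1.5) for _ in range(2))
    all_nonneg = all_nonneg and rect_increment(F_unif, a, b, c, d) >= -1e-12
```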
|
2,986,647 | <p>If I know coordinates of point <span class="math-container">$A$</span>, coordinates of circle center <span class="math-container">$B$</span> and <span class="math-container">$r$</span> is the radius of the circle, is it possible to calculate the angle of the lines that are passing through point A that are also tangent to the circle?</p>
<p><a href="https://i.stack.imgur.com/VZu8a.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VZu8a.png" alt="tangents_to_a_circle_from_an_external_point"></a></p>
<p><span class="math-container">$A$</span> is the green point, <span class="math-container">$B$</span> is the center of the red circle and I am trying to find out the angle of the blue lines.</p>
| saz | 36,150 | <p>I don't see that Itô's formula is of much use for this problem (in particular, since there is the singularity at zero which means that you would need to work with local times). It's much easier to use the fact that we know the transition density of <span class="math-container">$(B_t)$</span>.</p>
<hr>
<p>By the very definition of <span class="math-container">$A(t)$</span>, we need to compute</p>
<p><span class="math-container">$$\mathbb{E}(A^2(t)) = \mathbb{E} \left( \frac{1}{|B_t|^2} \right).$$</span></p>
<p>Plugging in the transition density of <span class="math-container">$B_t$</span> we find that</p>
<p><span class="math-container">$$\mathbb{E} \left( \frac{1}{|B_t|^2} \right) = \frac{1}{\sqrt{2\pi t}^3} \int_{\mathbb{R}^3} \frac{1}{x^2+y^2+z^2} \exp \left( - \frac{x^2+y^2+z^2}{2t} \right) \, d(x,y,z).$$</span></p>
<p>Introducing polar coordinates we get</p>
<p><span class="math-container">$$\begin{align*} \mathbb{E} \left( \frac{1}{|B_t|^2} \right) &= \frac{4\pi}{\sqrt{2\pi t}^3} \int_{(0,\infty)} \frac{1}{r^2} \exp \left(- \frac{r^2}{2t} \right) \, (r^2 \, dr) \\ &= \sqrt{\frac{2}{\pi t^3}} \int_{(0,\infty)} \exp \left(- \frac{r^2}{2t} \right) \, dr. \end{align*}$$</span></p>
<p>Hence,</p>
<p><span class="math-container">$$\begin{align*} \mathbb{E} \left( \frac{1}{|B_t|^2} \right) &=\frac{1}{t} \underbrace{\frac{1}{\sqrt{2\pi t}} \int_{\mathbb{R}} \exp \left(- \frac{r^2}{2t} \right) \, dr}_{=1} = \frac{1}{t}. \end{align*}$$</span></p>
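<p>The radial reduction can be checked numerically as well (an added sketch): the claim is <span class="math-container">$\sqrt{\frac{2}{\pi t^3}}\int_0^\infty e^{-r^2/(2t)}\,dr = \frac1t$</span>, and a simple trapezoid rule reproduces it for several values of <span class="math-container">$t$</span>:</p>

```python
from math import exp, pi, sqrt

def expectation_inverse_square(t: float, n: int = 200000) -> float:
    """Evaluate sqrt(2/(pi t^3)) * int_0^inf exp(-r^2/(2t)) dr by the
    composite trapezoid rule; the Gaussian tail beyond 20*sqrt(t) is negligible."""
    upper = 20.0 * sqrt(t)
    h = upper / n
    total = 0.5 * (1.0 + exp(-upper * upper / (2.0 * t)))
    for i in range(1, n):
        r = i * h
        total += exp(-r * r / (2.0 * t))
    return sqrt(2.0 / (pi * t ** 3)) * total * h

errors = [abs(expectation_inverse_square(t) - 1.0 / t) for t in (0.5, 1.0, 2.0)]
```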
|
4,149,355 | <p>Are there any stochastic processes <span class="math-container">$(X_t)_{t \in \mathbb{R}^d}$</span> such that</p>
<ol>
<li>almost surely paths are continuous but nowhere differentiable and</li>
<li>sampling of <span class="math-container">$n$</span> points <span class="math-container">$X_{t_n}$</span> on a path can be done in <span class="math-container">$O(n)$</span> time?</li>
</ol>
<p>Most sampling techniques I have in mind require at least <span class="math-container">$O(n \log n)$</span> time.</p>
<p>On the the other hand, all linear-time samplers I know, create processes such that 1) fails; like Perlin-noise, White noise, Pink noise, etc.</p>
<p>Is it even theoretically possible? So this might be a very fundamental question.</p>
| user1337 | 62,839 | <p>Cauchy-Schwarz works, as you suspected. Define
<span class="math-container">$$\mathbf{u}=(x,y) \\
\mathbf{v}=(1,1) $$</span>
We have
<span class="math-container">$$\mathbf{u} \cdot \mathbf{v} \leq \|\mathbf{u} \| \| \mathbf{v}\|, $$</span>
or</p>
<p><span class="math-container">$$x+y \leq \sqrt{2}.$$</span></p>
|
4,149,355 | <p>Are there any stochastic processes <span class="math-container">$(X_t)_{t \in \mathbb{R}^d}$</span> such that</p>
<ol>
<li>almost surely paths are continuous but nowhere differentiable and</li>
<li>sampling of <span class="math-container">$n$</span> points <span class="math-container">$X_{t_n}$</span> on a path can be done in <span class="math-container">$O(n)$</span> time?</li>
</ol>
<p>Most sampling techniques I have in mind require at least <span class="math-container">$O(n \log n)$</span> time.</p>
<p>On the the other hand, all linear-time samplers I know, create processes such that 1) fails; like Perlin-noise, White noise, Pink noise, etc.</p>
<p>Is it even theoretically possible? So this might be a very fundamental question.</p>
| Community | -1 | <p>Let us solve</p>
<p><span class="math-container">$$\begin{cases}x^2+y^2=1,\\x+y=\sqrt2.\end{cases}$$</span></p>
<p>Subtracting the second multiplied by <span class="math-container">$\sqrt2$</span>,</p>
<p><span class="math-container">$$\left(x-\frac1{\sqrt2}\right)^2+\left(y-\frac1{\sqrt2}\right)^2-1=1-2$$</span>
gives the only solution <span class="math-container">$\left(\dfrac1{\sqrt2},\dfrac1{\sqrt2}\right)$</span>.</p>
<p>Hence all points are on the same side of <span class="math-container">$x+y=\sqrt2$</span>, because a disk is convex.</p>
|
3,581,707 | <p>I'm struggling with the following proof and hope some of you can help me:</p>
<p>By <span class="math-container">$H$</span> we denote a real Hilbert space and let be <span class="math-container">$T: H \rightarrow H$</span> be a compact, self-adjoint and linear operator. </p>
<p>(i) Show that <span class="math-container">$R(x):= \frac{(Tx,x)_H}{(x,x)_H}$</span> attains its maximum, i.e. there exists <span class="math-container">$x^* \in H$</span> s.t. <span class="math-container">$R(x^*)$</span> is maximal and <span class="math-container">$||x^*||_H=1$</span>.</p>
<p>(ii) Furthermore, I am looking for a counter-example, that this doesn't work for non-compact operators.</p>
<p>So we are asked to show that <span class="math-container">$\exists x^* \in H~ \textit{s.t.}~ \forall x \in H: R(x) \leq R(x^*)$</span>.</p>
<p>I've had several approaches, but all of them ended in a dead-end. The main problem was, that I couldn't draw the conclusion by using the compactness of <span class="math-container">$T$</span>, which should be crucial for that problem (at least in my opinion). But I couldn't find a counter example, with a non-compact operator that doesn't satisfy the statement (that's why I ask question (ii) - maybe this helps me to get a better "feeling" for the problem). Therefore, the possibly most promising try was to define a maximising sequence <span class="math-container">$(x_n)_{n\in\mathbb{N}} \subseteq H$</span>, <span class="math-container">$x_n \rightarrow x^*$</span>. My idea was to somehow use that for every bounded sequence <span class="math-container">$(x_n)_{n\in\mathbb{N}}$</span> there exists a subsequence <span class="math-container">$(x_{n_k})_{k\in\mathbb{N}}$</span> such that <span class="math-container">$Tx_{n_k}$</span> converges. Furthermore, there is a statement that if <span class="math-container">$u_n \rightarrow_w u$</span> then for a compact operator <span class="math-container">$T$</span> holds: <span class="math-container">$Tu_n \rightarrow Tu$</span>, which might be helpful.</p>
<p>Furthermore, since <span class="math-container">$||x^*||_H = 1$</span>, I can assume that <span class="math-container">$\forall n \in \mathbb{N}: ||x_n||_H = 1$</span>, which would give me the required boundedness. </p>
<p>Unfortunately, I wasn't able to prove the statement by using these ideas. Is my approach promising, or did I miss something crucial? How can I prove the statement?
I would be grateful for any help!</p>
| Disintegrating By Parts | 112,478 | <p>Suppose that <span class="math-container">$\lambda=\sup_{\|x\|=1}\langle Tx,x\rangle$</span>. Then <span class="math-container">$[x,y]=\langle (\lambda I-T)x,y\rangle$</span> satisfies the properties of an inner product, except that it may not be strictly positive. It is non-negative. Regardless, the Cauchy-Schwarz inequality still holds:
<span class="math-container">$$
|[x,y]| \le [x,x]^{1/2}[y,y]^{1/2}.
$$</span>
Let <span class="math-container">$y=(\lambda I-T)x$</span>. Then <span class="math-container">$[x,y]=\|(\lambda I-T)x\|^2$</span>, and, writing <span class="math-container">$M=\|\lambda I-T\|$</span> so that <span class="math-container">$[y,y]\le M\|y\|^2$</span>,
<span class="math-container">$$
\|(T-\lambda I)x\|^2 \le \langle (\lambda I-T)x,x\rangle^{1/2}\,M^{1/2}\,\|(\lambda I-T)x\| \\
\|(T-\lambda I)x\| \le M^{1/2}\langle(\lambda I-T)x,x\rangle^{1/2}.
$$</span>
Now, if you choose a sequence <span class="math-container">$\{ x_n \}$</span> of unit vectors with <span class="math-container">$\langle Tx_n,x_n\rangle\to\lambda$</span>, the right side of the last inequality tends to <span class="math-container">$0$</span>, so <span class="math-container">$\{ (T-\lambda I)x_n \}$</span> converges to <span class="math-container">$0$</span>. Because <span class="math-container">$T$</span> is compact, there is a subsequence <span class="math-container">$\{ x_{n_k} \}$</span> such that <span class="math-container">$\{ Tx_{n_k} \}$</span> converges to some <span class="math-container">$y$</span>. Assuming <span class="math-container">$\lambda\neq 0$</span> (if <span class="math-container">$\lambda=0$</span> and <span class="math-container">$T\neq 0$</span>, apply the same argument to <span class="math-container">$-T$</span>), it follows that <span class="math-container">$\{ x_{n_k} \}$</span> converges to the unit vector <span class="math-container">$z=\lambda^{-1}y$</span>. So <span class="math-container">$Tz=\lambda z$</span>, and <span class="math-container">$\langle Tx,x\rangle$</span> achieves its maximum value at <span class="math-container">$x=z$</span>, an eigenvector of <span class="math-container">$T$</span> with eigenvalue <span class="math-container">$\lambda$</span>.</p>
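<p>In finite dimensions every self-adjoint operator is compact, and the statement reduces to the familiar fact that the Rayleigh quotient of a symmetric matrix attains its maximum at a top eigenvector. A small illustration (my own addition) for the matrix <span class="math-container">$\begin{pmatrix}2&1\\1&2\end{pmatrix}$</span>, whose eigenvalues are <span class="math-container">$1$</span> and <span class="math-container">$3$</span>:</p>

```python
from math import cos, sin, pi

A = [[2.0, 1.0], [1.0, 2.0]]  # symmetric; eigenvalues 1 and 3

def rayleigh(theta: float) -> float:
    """<Ax, x> for the unit vector x = (cos theta, sin theta);
    for this A it simplifies to 2 + sin(2*theta)."""
    x0, x1 = cos(theta), sin(theta)
    ax0 = A[0][0] * x0 + A[0][1] * x1
    ax1 = A[1][0] * x0 + A[1][1] * x1
    return ax0 * x0 + ax1 * x1

# Sample unit vectors around the circle: the maximum is the top eigenvalue 3,
# attained at theta = pi/4, i.e. at the eigenvector (1, 1)/sqrt(2).
thetas = [2 * pi * k / 100000 for k in range(100000)]
lam = max(rayleigh(t) for t in thetas)
```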
|
2,924,380 | <p><span class="math-container">$\sum_{k=0}^{n}{k\binom{n}{k}}=n2^{n-1}$</span></p>
<p><span class="math-container">$n2^{n-1} = \frac{n}{2}2^{n} = \frac{n}{2}(1+1)^n = \frac{n}{2}\sum_{k=0}^{n}{\binom{n}{k}}$</span></p>
<p>That's all I got so far, I don't know how to proceed</p>
| Subhasis Biswas | 389,992 | <p>Take the function <span class="math-container">$(x+1)^n = 1^nx^0 {n \choose 0} \ + 1^{n-1}x^1{ n \choose 1} \ + 1^{n-2}x^2{ n \choose 2}+... + 1^{1}x^{n-1}{ n \choose {n-1}}+\ + 1^{0}x^n{ n \choose n} $</span></p>
<p>Differentiating both sides: <span class="math-container">$n(x+1)^{n-1} = \sum_{k=0}^{n}k{n \choose k}x^{k-1}$</span></p>
<p>Plugging <span class="math-container">$x=1$</span> on both sides gives us <span class="math-container">$n(1+1)^{n-1}=\sum_{k=0}^{n}k{n \choose k}=n2^{n-1}$</span></p>
|
4,027,604 | <p>Let <span class="math-container">$f(x)$</span> be continuous on <span class="math-container">$[a,b]$</span> and <span class="math-container">$F(x)=\frac{1}{x-a}\int_a^xf(t)dt$</span></p>
<p>Proof: The functions <span class="math-container">$F(x)$</span> and <span class="math-container">$f(x)$</span> have the same monotonicity on <span class="math-container">$(a, b]$</span>.</p>
<p><a href="https://i.stack.imgur.com/FPn2M.jpg" rel="nofollow noreferrer">The answer is here</a>, but I don't know why <span class="math-container">$\int_a^xf(x)\,dt = (x-a)f(x) $</span>.</p>
| Arthur | 15,500 | <p>The integrand <span class="math-container">$f(x)$</span> is a constant (we're integrating with respect to <span class="math-container">$t$</span>). And the integral of a constant is equal to the product of that constant with the width of the interval you're integrating over, which is <span class="math-container">$x-a$</span>.</p>
|
2,605,546 | <p>Simple question, but I'm not sure why, for $ f = \frac{\lambda}{2}\sum_{j=1}^{D} w_j^2$,
$$\frac{\partial f}{\partial w_j}= \lambda w_j.$$</p>
<p>I would have thought the answer would be $\frac{\partial f}{\partial w_j}= \lambda \sum_{j=1}^{D} w_j^2$,</p>
<p>since we get the derivative of $w_j^2$, which is $2w_j$, pull out the 2, getting rid of $\frac{\lambda}{2}$, and multiply by $w_j$. </p>
| John B | 301,742 | <p>Take for example $D=2$. Then
$$
f=\frac\lambda2(w_1^2+w_2^2)
$$
and indeed
$$
\frac{\partial f}{\partial w_1}=\frac\lambda22w_1=\lambda w_1
$$
and
$$
\frac{\partial f}{\partial w_2}=\frac\lambda22w_2=\lambda w_2.
$$
The case of arbitrary $D$ is analogous.</p>
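<p>A finite-difference sanity check of the same fact (a Python sketch with arbitrary made-up values for $\lambda$ and $w$; a central difference on a quadratic is exact up to rounding):</p>

```python
lam = 0.7
w = [1.5, -2.0, 0.25]
h = 1e-6

def f(v):
    # f(w) = (lam/2) * sum_j w_j^2
    return 0.5 * lam * sum(x * x for x in v)

# Compare the central finite difference in each coordinate
# with the claimed gradient component lam * w_j.
for j in range(len(w)):
    wp = list(w); wp[j] += h
    wm = list(w); wm[j] -= h
    numeric = (f(wp) - f(wm)) / (2 * h)
    assert abs(numeric - lam * w[j]) < 1e-6
```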
|
676,171 | <blockquote>
<p>Prove that if $F$ is a field, every proper prime ideal of $F[X]$ is maximal.</p>
</blockquote>
<p>Should I be using the theorem that says an ideal $M$ of a commutative ring $R$ is maximal iff $R/M$ is a field? Any suggestions on this would be appreciated. </p>
| Bill Dubuque | 242 | <p><strong>Hint</strong> $\ $ Polynomial rings over fields enjoy a (Euclidean) division algorithm, hence every ideal is principal, generated by an element of minimal degree (= gcd of all elements). But for principal ideals: contains $\!\iff\!$ divides, i.e. $\rm\: (a)\supseteq (b)\!\iff\! a\mid b.\:$ Thus, having no proper containing ideal (maximal) is equivalent to having no proper divisor (irreducible), $ $ and $ $ irreducible $\!\iff\!$ prime, again by the Euclidean algorithm (or Euclid's Lemma, or $\,F[x]\,$ a UFD).</p>
|
3,294,402 | <p>Prove that it is impossible for three consecutive squares to sum to another perfect square.
I have tried for the three numbers <span class="math-container">$x-1$</span>, <span class="math-container">$x$</span>, and <span class="math-container">$x+1$</span>.</p>
| Anand | 687,233 | <p>Perhaps the simplest argument you may think of:</p>
<p>Note that <span class="math-container">$(x-1)^2+x^2+(x+1)^2=3x^2+2\equiv 2\pmod{3}$</span>, but <span class="math-container">$$y^2\equiv 0,1\pmod{3}$$</span>for all integers <span class="math-container">$y$</span>. Therefore, <span class="math-container">$(x-1)^2+x^2+(x+1)^2$</span> can never be a perfect square.</p>
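<p>(And a brute-force confirmation for small $x$, a Python sketch:)</p>

```python
from math import isqrt

# 3x^2 + 2 is never a perfect square: check directly for |x| <= 10000.
for x in range(-10_000, 10_001):
    s = 3 * x * x + 2
    assert isqrt(s) ** 2 != s
```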
|
849,038 | <p>I'm once again struggling to see the equivalence of two definitions. In my abstract algebra book (Abstract Algebra by Beachy and Blair) it says that elements $d_1,d_2,\dots,d_n \in D$, where $D$ is a UFD, are said to be relatively prime in $D$ if there is no irreducible element $p \in D$ such that $p \mid d_i$ for $i=1,2,\dots,n$.</p>
<p>However in my other book (A classical introduction to modern number theory by ireland and rosen) it says that two elements $a$ and $b$ are relatively prime if the only common divisors are units.</p>
<p>I am sure both definitions can not be correct. For instance, suppose that $a_1$ and $a_2$ are relatively prime according to the first definition, then we may very well have the case where a nonunit $p_1$ that is not irreducible divides both $a_1$ and $a_2$, so according to definition 2 the elements are not relatively prime. In this case, the elements are both relatively prime and not at the same time which shows that the definitions are not equivalent.</p>
<p>Am I missing something here or are the authors simply being sloppy?</p>
| Claude Leibovici | 82,404 | <p>Let us suppose that you fit a model $Z = a +b X+cY$ based on $N$ data points $(X_i,Y_i,Z_i)$. The so-called normal equations are $$\sum _{i=1}^N Z_i= N a + b\sum _{i=1}^N X_i+ c\sum _{i=1}^N Y_i$$ $$\sum _{i=1}^N X_iZ_i= a\sum _{i=1}^N X_i + b\sum _{i=1}^N X_i^2+ c\sum _{i=1}^N X_iY_i$$ $$\sum _{i=1}^N Y_iZ_i= a\sum _{i=1}^N Y_i + b\sum _{i=1}^N X_iY_i+ c\sum _{i=1}^N Y_i^2$$ and you solve them for $a,b,c$. </p>
<p>If we rewrite these equations in a more symbolic manner, as $$Sz=N a+b Sx + c Sy$$ $$Sxz=a Sx+b Sxx+c Sxy$$ $$Syz=a Sy+b Sxy+c Syy$$ the solutions are given by $$c=\frac{-N \text{Sxx} \text{Syz}+N \text{Sxy} \text{Sxz}+\text{Sx}^2
\text{Syz}-\text{Sx} \text{Sxy} \text{Sz}-\text{Sx} \text{Sxz}
\text{Sy}+\text{Sxx} \text{Sy} \text{Sz}}{-N \text{Sxx} \text{Syy}+N
\text{Sxy}^2+\text{Sx}^2 \text{Syy}-2 \text{Sx} \text{Sxy} \text{Sy}+\text{Sxx}
\text{Sy}^2}$$ $$b=\frac{c N \text{Sxy}-c \text{Sx} \text{Sy}-N \text{Sxz}+\text{Sx}
\text{Sz}}{\text{Sx}^2-N \text{Sxx}}$$ $$a=\frac{-b \text{Sx}-c \text{Sy}+\text{Sz}}{N}$$</p>
<p>Now, you add another data point ($N+1$), so the equations are now $$\sum _{i=1}^{N+1} Z_i= (N+1) a + b\sum _{i=1}^{N+1} X_i+ c\sum _{i=1}^{N+1} Y_i$$ $$\sum _{i=1}^{N+1} X_iZ_i= a\sum _{i=1}^{N+1} X_i + b\sum _{i=1}^{N+1} X_i^2+ c\sum _{i=1}^{N+1} X_iY_i$$ $$\sum _{i=1}^{N+1} Y_iZ_i= a\sum _{i=1}^{N+1} Y_i + b\sum _{i=1}^{N+1} X_iY_i+ c\sum _{i=1}^{N+1} Y_i^2$$ But the new sums can be expressed as the old sums plus an extra term corresponding to the new data point $$\sum _{i=1}^{N+1} Z_i=\sum _{i=1}^{N} Z_i+Z_{N+1}$$ $$\sum _{i=1}^{N+1} X_iZ_i=\sum _{i=1}^{N} X_iZ_i+X_{N+1}Z_{N+1}$$ and so on for all the summations.</p>
<p>So, you only need to keep the values of the sums and update them every time you add a new data point. The only thing left is to solve the new three normal equations for which I gave you the formulas.</p>
<p>So, updating the regression by adding one extra point is really simple.</p>
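<p>A minimal sketch of this update scheme in Python (the names mirror the symbolic sums above; instead of the closed-form expressions, the $3\times3$ normal equations are solved with a small Gaussian elimination, which is equivalent):</p>

```python
class RunningFit:
    """Incrementally fit Z = a + b*X + c*Y by keeping running sums."""

    def __init__(self):
        self.N = 0
        self.Sx = self.Sy = self.Sz = 0.0
        self.Sxx = self.Syy = self.Sxy = 0.0
        self.Sxz = self.Syz = 0.0

    def add(self, x, y, z):
        # Adding a data point only updates the sums.
        self.N += 1
        self.Sx += x; self.Sy += y; self.Sz += z
        self.Sxx += x * x; self.Syy += y * y; self.Sxy += x * y
        self.Sxz += x * z; self.Syz += y * z

    def solve(self):
        # Normal equations as an augmented 3x3 system in (a, b, c).
        M = [[self.N,  self.Sx,  self.Sy,  self.Sz],
             [self.Sx, self.Sxx, self.Sxy, self.Sxz],
             [self.Sy, self.Sxy, self.Syy, self.Syz]]
        for col in range(3):                      # forward elimination
            piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
            M[col], M[piv] = M[piv], M[col]
            for r in range(col + 1, 3):
                f = M[r][col] / M[col][col]
                for c in range(col, 4):
                    M[r][c] -= f * M[col][c]
        sol = [0.0, 0.0, 0.0]
        for r in (2, 1, 0):                       # back substitution
            s = sum(M[r][c] * sol[c] for c in range(r + 1, 3))
            sol[r] = (M[r][3] - s) / M[r][r]
        return tuple(sol)  # (a, b, c)
```

<p>Feeding it exact points of a plane recovers the coefficients, and points can be added one at a time.</p>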
|
673,334 | <p>I used the pseudocode below to generate a discrete normal distribution over 101 points.</p>
<pre><code>mean = 0;
stddev = 1;
lowerLimit = mean - 4*stddev;
upperLimit = mean + 4*stddev;
interval = (upperLimit-lowerLimit)/101;
for ( x = lowerLimit + 0.5*interval ; x < upperLimit; x = x + interval) {
y = exp(-sqr(x)/2)/sqrt(2*PI);
print ("%f %f", x, y);
}
</code></pre>
<p>When I plot y vs. x I get a normal distribution curve as expected. But when I try to calculate the standard deviation I use the following algorithm (according to <a href="http://en.wikipedia.org/wiki/Standard_deviation#Discrete_random_variable" rel="nofollow">http://en.wikipedia.org/wiki/Standard_deviation#Discrete_random_variable</a>)</p>
<pre><code>for i = 1:101
sumsq += y[i]*(x[i]^2)
end
stddev = sqrt(sumsq)
</code></pre>
<p>I get $stddev = 3.55$ instead of $1$. Where is the problem?</p>
| Batman | 127,428 | <p>y isn't a probability distribution.</p>
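<p>Concretely: the y values are (approximate) density values, not probabilities, so each term must be weighted by the interval width before summing. A Python sketch reproducing both numbers:</p>

```python
from math import exp, pi, sqrt

lower, upper, n = -4.0, 4.0, 101
interval = (upper - lower) / n

xs = [lower + (i + 0.5) * interval for i in range(n)]
ys = [exp(-x * x / 2) / sqrt(2 * pi) for x in xs]

# Wrong: treats density values as probabilities (gives ~3.55).
wrong = sqrt(sum(y * x * x for x, y in zip(xs, ys)))

# Right: weight each density by the interval width (gives ~1).
right = sqrt(sum(y * interval * x * x for x, y in zip(xs, ys)))
```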
|
2,907,771 | <p>I tried coming up with a proof of compactness of $[0,1]$ in $\mathbb{R}$ and thought of the following method. Please let me know if it is correct or how it could be improved.</p>
<p>For any open cover of $[0,1]$ there exists an $\epsilon$ such that $[0,\epsilon)$ is contained in one open set of the cover. Using the least upper bound property of the reals, there exists $l_1$ such that $[0,l_1)$ is contained in one open set $U_1$ and for any $l_1<m$, $[0,m)$ is not contained in one open set. </p>
<p>Now there exists some $l_2$ such that $[l_1,l_2)$ is contained in one open set $U_2$ and for any $l_2<m, [l_2,m) $ is not contained in one open set.</p>
<p>this way one forms an increasing sequence of real numbers ${(0,l_1,l_2..)}$ this sequence must converge to some $l$.</p>
<p>If $l\ne1$, pick an $\epsilon$-ball around $l$ such that $(l-\epsilon, l+\epsilon)$ is covered by one open set.
Since $l$ is the limit of the sequence $(0,l_1,l_2,\dots)$, $(l-\epsilon, l+\epsilon)$ contains all but finitely many $l_n$'s. </p>
<p>Let $l_m$ be the first entry in the sequence $(0,l_1,l_2,\dots)$ which belongs to $(l-\epsilon, l+\epsilon)$. </p>
<p>Form a new increasing sequence $(0,l_1,l_2,\dots,l_m,l,\dots)$ similar to the previous method. This way one gets an increasing sequence not bounded by any $l_n<1$. </p>
<p>So there exists an increasing sequence $(0,l_1,l_2,\dots)$ getting arbitrarily close to $1$, with a corresponding sequence of open sets $(U_1,U_2,\dots)$.</p>
<p>Since there exists one open set containing $(1-\epsilon, 1]$, all but finitely many $l_n$'s are contained in that open set. So finitely many open sets $(U_1,U_2,\dots,U_n)$, along with the open set covering $(1-\epsilon, 1]$, cover $[0,1]$.</p>
| hmakholm left over Monica | 14,366 | <p>When you say</p>
<blockquote>
<p>form a new increasing sequence $(0,l_1,l_2,\ldots,l_m,\ldots)$ similar to the previous method.</p>
</blockquote>
<p>you're sweeping quite a deal under the carpet. What if it keeps going wrong and you have to keep backtracking? In general, arguments that simply state "continue this process to infinity" are likely to contain hidden subtleties, so I would require the details to be quite a bit more explicit before I would feel convinced by such a proof.</p>
<p>I <em>think</em> your approach can actually be made to work, but doing so rigorously would require you to speak about transfinite ordinals. Not something one would <em>want</em> to do for an elementary real-analysis exercise.</p>
<p>Instead it would be simpler to cut through all of the piecewise step-by-step fog and jump directly to the end. Define</p>
<p>$$ B = \{ x\in[0,1] : [0,x] \text{ is covered by some finite subcover}\} $$
then set $b=\sup B$ and argue that</p>
<ol>
<li>$b\in B$</li>
<li>$b$ cannot be less than $1$.</li>
</ol>
|
199,695 | <p>I believe the answer is $\frac12(n-1)^2$, but I couldn't confirm by googling, and I'm not confident in my ability to derive the formula myself.</p>
| Belgi | 21,335 | <p>Another way to calculate what Matt said is this: number the vertices from $1$ to $n$, and consider the graph with $n$ vertices but with no edges.</p>
<p>Take the first vertex: it needs edges to the other $n-1$ vertices, so draw those $n-1$ edges.</p>
<p>Take the second vertex: it needs edges to the other $n-1$ vertices - but one of them is already drawn - so draw the other $n-2$.</p>
<p>Do this until the last vertex. You find that you drew $(n-1)+(n-2)+\dots+1$ edges, which is an arithmetic progression whose sum is $\frac{1}{2}(n-1)n$.</p>
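<p>A quick cross-check of the count (a Python sketch; the edges of the complete graph on $n$ vertices are exactly the 2-element subsets of the vertex set):</p>

```python
from itertools import combinations

# Number of edges of K_n is C(n, 2) = n(n-1)/2.
for n in range(1, 12):
    edges = list(combinations(range(n), 2))
    assert len(edges) == n * (n - 1) // 2
```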
|
287,859 | <p>Prove that $\lim\limits_{x\rightarrow+\infty}\frac{x^k}{a^x} = 0\ (a>1,k>0)$.</p>
<p>P.S. This problem comes from my analysis book. You may use the definition of limits or invoke the Heine theorem for help. <em>It means the proof should only use some basic properties and definition of limits rather than more complicated approaches.</em></p>
| L. F. | 56,837 | <p>For $x>0$:</p>
<p>$$\frac{a^x}{x^k}=\sum_{n=0}^{\infty}\frac{(x\ln a)^n}{x^k\,n!}=\frac{1}{x^k}+\frac{\ln a}{x^{k-1}}+\cdots+\frac{(\ln a)^k}{k!}+\frac{x(\ln a)^{k+1}}{(k+1)!}+\cdots>\frac{x(\ln a)^{k+1}}{(k+1)!}$$</p>
<p>$$\Rightarrow0<\frac{x^k}{a^x}<\frac{1}{x}\cdot\frac{(k+1)!}{(\ln a)^{k+1}}$$</p>
<p>$$\text{etc.}$$</p>
|
1,570,193 | <p>Let $X$ be a random variable and $\;f_X(x)=c6^{-x^2}\;\forall x\in\Bbb R$ its pdf. What I'm trying to compute is $\sqrt {Var(X)}$. I've got that $c=\sqrt{\frac{\ln(6)}{\pi}}$ for $f_X(x)$ to be a pdf and also that $\Bbb E(X)=0$. So my problem reduces to compute $\Bbb E(X^2)$ where</p>
<p>$$\Bbb E(X^2)=\sqrt{\frac{\ln(6)}{\pi}}\int_{-\infty}^\infty x^2e^{-x^2\ln(6)}dx$$</p>
<p>but I got stuck since I can't manage to make a variable change such that leaves some constant term multiplied by integral of a $N(0,1)$ random variable.</p>
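<p>(As a numerical sanity check, not a derivation: midpoint quadrature in Python suggests the value is $\sqrt{1/(2\ln 6)}\approx 0.528$, since the pdf is a normal density with $\sigma^2 = 1/(2\ln 6)$:)</p>

```python
from math import log, pi, sqrt

c = sqrt(log(6) / pi)

def pdf(x):
    return c * 6.0 ** (-x * x)

# Midpoint rule for E[X^2] on [-8, 8]; the tails beyond are negligible.
n, lo, hi = 100_000, -8.0, 8.0
h = (hi - lo) / n
ex2 = sum((lo + (i + 0.5) * h) ** 2 * pdf(lo + (i + 0.5) * h)
          for i in range(n)) * h

assert abs(ex2 - 1 / (2 * log(6))) < 1e-6   # Var(X) = 1/(2 ln 6)
```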
| Jonathan Lamar | 204,429 | <p>You are correct about the identity $1=u(1_{\mathbb{K}})$. For the second part, this should follow from the distributivity of tensor product over direct sum and the fact that, as an element of $\overline{C}$, $x$ has no nonzero part in $\mathbb{K}1$.</p>
|
1,902,878 | <p>If $a^x=bc$, $b^y=ca$ and $c^z=ab$, prove that: $xyz=x+y+z+2$.</p>
<p>My approach: here,</p>
<p>$$a^x=bc$$
$$a=(bc)^{\frac {1}{x}}$$</p>
<p>and,</p>
<p>$$b=(ca)^{\frac {1}{y}}$$
$$c=(ab)^{\frac {1}{z}}$$</p>
<p>I got stuck here. Please help me to continue. </p>
| Ng Chung Tak | 299,599 | <p><strong>Hint:</strong></p>
<blockquote class="spoiler">
<p> \begin{align*} x &= \frac{\log b+\log c}{\log a} \\[7pt] (B+C)(C+A)(A+B) &= BC(B+C)+CA(C+A)+AB(A+B)+2ABC \end{align*}</p>
</blockquote>
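<p>(A numeric spot-check of the claimed identity, a Python sketch using the logarithmic form from the hint, with arbitrary bases:)</p>

```python
from math import log

a, b, c = 2.3, 4.1, 7.9          # arbitrary bases, none equal to 1
A, B, C = log(a), log(b), log(c)

# a^x = bc, b^y = ca, c^z = ab  =>  x = (B+C)/A, etc.
x = (B + C) / A
y = (C + A) / B
z = (A + B) / C

assert abs(x * y * z - (x + y + z + 2)) < 1e-9
```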
|
4,486,131 | <p><span class="math-container">$F=Q(\sqrt{2i})$</span>; then
which one of the following is not true? (Duet-2017, Q.26)</p>
<p>1.<span class="math-container">$\sqrt{2}\in F$</span></p>
<p>2.<span class="math-container">$i \in F$</span></p>
<p>3.<span class="math-container">$x^8-16=0$</span> has a solution in <span class="math-container">$F$</span></p>
<p>4.<span class="math-container">$\dim_Q(F)=2$</span></p>
<p>I thought it as <span class="math-container">$F=$</span>{<span class="math-container">$a+b(\sqrt{2i})| a,b \in Q$</span>}</p>
<p>from which it implies <span class="math-container">$\sqrt{2} \notin F$</span> and <span class="math-container">$\sqrt{2i}$</span> is a solution of <span class="math-container">$x^8-16=0$</span></p>
<p>I didn't understand the part about the dimension,
and I thought <span class="math-container">$i \notin F$</span>, but the answer key lists only one option, which is option 1.</p>
<p>Please guide.
Thanks</p>
| Mr.Gandalf Sauron | 683,801 | <p>First take a look at what <span class="math-container">$\sqrt{i}$</span> can be. From elementary complex analysis you have <span class="math-container">$\sqrt{2i}=\sqrt{2}(\frac{1}{\sqrt{2}}+i\frac{1}{\sqrt{2}})=1+i$</span> .</p>
<p>Note that we are only considering <span class="math-container">$\sqrt{i}=(\frac{1}{\sqrt{2}}+i\frac{1}{\sqrt{2}})$</span>, but we can also take it to be <span class="math-container">$-\frac{1}{\sqrt{2}}-i\frac{1}{\sqrt{2}}$</span>. In either case we would get that the field is <span class="math-container">$\Bbb{Q}(1+i)=\Bbb{Q}(i)$</span>.</p>
<p>Now it is clear from the above that <span class="math-container">$\sqrt{2}\notin \Bbb{Q}(i)$</span> and <span class="math-container">$i\in\Bbb{Q}(i)$</span>. Also <span class="math-container">$\dim(F)=2$</span> over <span class="math-container">$\Bbb{Q}$</span>, as it is isomorphic to <span class="math-container">$\Bbb{Q}[x]/(x^{2}+1)$</span> as a field. Also note that you can take <span class="math-container">$\sqrt{2i}$</span> to be a root of either of the irreducible polynomials <span class="math-container">$x^{2}-2x+2$</span> or <span class="math-container">$x^{2}+2x+2$</span>. This just means that <span class="math-container">$\{1,\sqrt{2i}\}$</span> or <span class="math-container">$\{1,i\}$</span> forms a basis for <span class="math-container">$\Bbb{Q}(\sqrt{2i})=\Bbb{Q}(i)$</span> as a vector space over <span class="math-container">$\Bbb{Q}$</span>. Now, to prove these things you would need to say what kind of field theory you have been introduced to. If you just know <span class="math-container">$\Bbb{F}(\alpha)=\{a_{0}+a_{1}\alpha+...+a_{n-1}\alpha^{n-1}\}$</span>, where <span class="math-container">$\alpha$</span> satisfies an irreducible polynomial <span class="math-container">$f(x)\in F[x]$</span> of degree <span class="math-container">$n$</span>, then you can directly see that <span class="math-container">$\{1,\sqrt{2i}\}$</span> or <span class="math-container">$\{1,i\}$</span> is a basis. Otherwise you'll need a more formal and precise argument: define the field to be <span class="math-container">$\Bbb{Q}[x]/(x^{2}-2x+2)$</span> and then use a canonical isomorphism to <span class="math-container">$\Bbb{Q}(\sqrt{2i})$</span>.</p>
<p>The whole thing depends on what definition you use for <span class="math-container">$\sqrt{2i}$</span>, as <span class="math-container">$\sqrt{z}$</span> is a multi-valued function. Usually in such cases the polynomial which the root satisfies is given, so that you can identify it as one root of that polynomial. But in this case you have to make the assumption that either <span class="math-container">$\sqrt{2i}=1+i$</span> or <span class="math-container">$\sqrt{2i}=-(1+i)$</span> and carry on using basic complex analysis.</p>
<p><span class="math-container">$\sqrt{2i}$</span> is a solution of <span class="math-container">$x^{8}-16=0$</span>.</p>
|
246,114 | <p>A Latin Square is a square of size <strong>n × n</strong> containing numbers <strong>1</strong> to <strong>n</strong> inclusive. Each number occurs once in each row and column.</p>
<p>An example of a 3 × 3 Latin Square is:</p>
<p><span class="math-container">$$
\left(
\begin{array}{ccc}
1 & 2 & 3 \\
3 & 1 & 2 \\
2 & 3 & 1 \\
\end{array}
\right)
$$</span>
Another is:
<span class="math-container">$$
\left(
\begin{array}{ccc}
3 & 1 & 2 \\
2 & 3 & 1 \\
1 & 2 & 3 \\
\end{array}
\right)
$$</span></p>
<p>My code can work when the order is less than 5</p>
<pre><code>n=4;
Dimensions[ans=Permutations[Permutations[Range[n]],{n}]//
Select[AllTrue[Join[#,Transpose@#],DuplicateFreeQ]&]]//AbsoluteTiming
</code></pre>
<blockquote>
<p><code>{0.947582, {576, 4, 4}}</code></p>
</blockquote>
<p>When the order is 5, the memory is not enough, I want to know if there is a better way to get all 5×5 Latin squares?</p>
| flinty | 72,682 | <p>If you do a <a href="https://resources.wolframcloud.com/FunctionRepository/resources/BacktrackSearch/" rel="noreferrer"><code>"BacktrackSearch"</code></a> then it will take forever for <span class="math-container">$n=5$</span> but use less memory. There are 161280 <span class="math-container">$5\times5$</span> Latin squares - I recommend you test this with smaller numbers first like <span class="math-container">$n=3$</span>:</p>
<pre><code>latinQ[mtx_] :=
AllTrue[mtx, DuplicateFreeQ] &&
AllTrue[Transpose@mtx, DuplicateFreeQ];
n = 5;
lsquares = ResourceFunction["BacktrackSearch"][
ConstantArray[Permutations[Range[n]], n],
latinQ, latinQ, 161280
];
</code></pre>
|
246,114 | <p>A Latin Square is a square of size <strong>n × n</strong> containing numbers <strong>1</strong> to <strong>n</strong> inclusive. Each number occurs once in each row and column.</p>
<p>An example of a 3 × 3 Latin Square is:</p>
<p><span class="math-container">$$
\left(
\begin{array}{ccc}
1 & 2 & 3 \\
3 & 1 & 2 \\
2 & 3 & 1 \\
\end{array}
\right)
$$</span>
Another is:
<span class="math-container">$$
\left(
\begin{array}{ccc}
3 & 1 & 2 \\
2 & 3 & 1 \\
1 & 2 & 3 \\
\end{array}
\right)
$$</span></p>
<p>My code can work when the order is less than 5</p>
<pre><code>n=4;
Dimensions[ans=Permutations[Permutations[Range[n]],{n}]//
Select[AllTrue[Join[#,Transpose@#],DuplicateFreeQ]&]]//AbsoluteTiming
</code></pre>
<blockquote>
<p><code>{0.947582, {576, 4, 4}}</code></p>
</blockquote>
<p>When the order is 5, the memory is not enough, I want to know if there is a better way to get all 5×5 Latin squares?</p>
| Neil Strickland | 80,069 | <p>We can say that a latin square is standard if the first row is [1,...,n] and the first column is also [1,...,n]. Any latin square arises uniquely as follows: take a standard latin square, apply an arbitrary permutation to the full set of n columns, then apply an arbitrary permutation to the last (n-1) rows. Thus, the number of standard squares is smaller than the full set of squares by a factor n! (n-1)!, which is 2880 in the case n=5.
It is most efficient to find the standard squares by search, and then just apply permutations to get the rest.</p>
<p>If you were using C you could encode everything using bit patterns and then it would only take a few processor cycles worth of bit operations to test the admissibility of each potential new row, which would be extremely fast. I don't know how well you could do in Mathematica (I'm just a tourist on this particular stackexchange site.)</p>
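<p>The factorization above is easy to cross-check outside Mathematica. A Python sketch (hypothetical helper names) that backtracks over <em>standard</em> squares only and then multiplies by $n!\,(n-1)!$:</p>

```python
from itertools import permutations
from math import factorial

def count_standard(n):
    """Count Latin squares whose first row and first column are both 0..n-1."""
    rows = [tuple(range(n))]

    def extend():
        r = len(rows)
        if r == n:
            return 1
        total = 0
        for perm in permutations(range(n)):
            if perm[0] != r:        # first column must read 0, 1, ..., n-1
                continue
            if any(perm[j] == row[j] for row in rows for j in range(n)):
                continue            # column clash with an earlier row
            rows.append(perm)
            total += extend()
            rows.pop()
        return total

    return extend()

def count_latin(n):
    # Every Latin square arises uniquely from a standard one by permuting
    # all n columns and the last n-1 rows.
    return count_standard(n) * factorial(n) * factorial(n - 1)
```

<p>This gives 576 for $n=4$, matching the question's output, and 161280 for $n=5$.</p>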
|
246,114 | <p>A Latin Square is a square of size <strong>n × n</strong> containing numbers <strong>1</strong> to <strong>n</strong> inclusive. Each number occurs once in each row and column.</p>
<p>An example of a 3 × 3 Latin Square is:</p>
<p><span class="math-container">$$
\left(
\begin{array}{ccc}
1 & 2 & 3 \\
3 & 1 & 2 \\
2 & 3 & 1 \\
\end{array}
\right)
$$</span>
Another is:
<span class="math-container">$$
\left(
\begin{array}{ccc}
3 & 1 & 2 \\
2 & 3 & 1 \\
1 & 2 & 3 \\
\end{array}
\right)
$$</span></p>
<p>My code can work when the order is less than 5</p>
<pre><code>n=4;
Dimensions[ans=Permutations[Permutations[Range[n]],{n}]//
Select[AllTrue[Join[#,Transpose@#],DuplicateFreeQ]&]]//AbsoluteTiming
</code></pre>
<blockquote>
<p><code>{0.947582, {576, 4, 4}}</code></p>
</blockquote>
<p>When the order is 5, the memory is not enough, I want to know if there is a better way to get all 5×5 Latin squares?</p>
| chyanog | 2,090 | <pre><code>ClearAll[next];
next =
With[{Part = Compile`GetElement},
Compile[{{perms, _Integer, 2}, {rows, _Integer, 2}},
Table[
Append[rows, p],
{p, Select[perms,
Catch[Do[If[#[[j]] == rows[[i, j]], Throw@False],
{i, Length@rows}, {j, Length@#}]; True] &]
}
], CompilationTarget -> "C", RuntimeAttributes -> {Listable},
RuntimeOptions -> "Speed"
]];
n = 5;
perms = Permutations@Range@n;
Nest[Join @@ next[perms, #] &, List /@ perms, n - 1] // Dimensions // AbsoluteTiming
</code></pre>
<blockquote>
<p>{0.375176, {161280, 5, 5}}</p>
</blockquote>
|
379,669 | <p>So I was exploring some math the other day... and I came across the following neat identity:</p>
<p>Given $y$ is a function of $x$ ($y(x)$) and
$$
y = 1 + \frac{\mathrm{d}}{\mathrm{d}x} \left(1 + \frac{\mathrm{d}}{\mathrm{d}x} \left(1 + \frac{\mathrm{d}}{\mathrm{d}x} \left(1 + \frac{\mathrm{d}}{\mathrm{d}x} \left(1 + \frac{\mathrm{d}}{\mathrm{d}x} \left( \cdots \right) \right) \right) \right) \right) \text{ (repeated differential)}
$$</p>
<p>then we can solve this equation as follows:
$$
y - 1 = \frac{\mathrm{d}}{\mathrm{d}x} \left(1 + \frac{\mathrm{d}}{\mathrm{d}x} \left( \cdots \right) \right) \iff \int y - 1 \, \mathrm{d} x = 1 + \frac{\mathrm{d}}{\mathrm{d}x} \left( 1 + \frac{\mathrm{d}}{\mathrm{d}x} \left( \cdots \right) \right)
$$
$$
\implies \int y - 1 \, \mathrm{d} x = y \iff y - 1 = \frac{\mathrm{d} y }{ \mathrm{d} x}
$$</p>
<p>So</p>
<p>$$
\ln \left( y - 1 \right) = x + C \iff y = Ce^x + 1
$$</p>
<p>This problem reminded me a lot of nested radical expressions such as:
$$
x = 1 + \sqrt{1 + \sqrt{ 1 + \sqrt{ \cdots }}} \iff x - 1 = \sqrt{1 + \sqrt{ 1 + \sqrt{ \cdots }}}
$$
$$
\implies (x - 1)^2 = x \iff x^2 - 3x + 1 = 0
$$</p>
<p>and so</p>
<p>$$
x = \frac{3}{2} + \frac{\sqrt{5}}{2}
$$</p>
<p>This reminded me of the Ramanujan nested radical, which is:</p>
<p>$$
x = 0 + \sqrt{ 1 + 2 \sqrt{ 1 + 3 \sqrt{1 + 4 \sqrt{ \cdots }}}}
$$</p>
<p>whose solution cannot be found by simple series manipulations but requires knowledge of a general formula found by algebraically manipulating the binomial theorem...</p>
<p>This made me curious...</p>
<p>say $y$ is a function of $x$ ($y(x)$) and</p>
<p>$$
y = 0 + \frac{\mathrm{d}}{\mathrm{d}x} \left(1 + 2\frac{\mathrm{d}}{\mathrm{d}x} \left(1 + 3\frac{\mathrm{d}}{\mathrm{d}x} \left(1 + 4\frac{\mathrm{d}}{\mathrm{d}x} \left(1 + 5\frac{\mathrm{d}}{\mathrm{d}x} \left( \cdots \right) \right) \right) \right) \right)
$$</p>
<p>What would the solution come out to be?</p>
| Lucian | 93,448 | <p>If the operator is meant to differentiate what follows, then we have $$y(x) = \lim_{n \to \infty} n! \cdot y^{(n)}(x) \qquad,\qquad \forall\ x \in X$$</p>
<p>since the derivative of a constant is always $0$ , and the derivative of a sum is the sum of derivatives. However, if multiplication is meant, with the “last” term of the nested product presumably being none other than <i>y(x)</i>, then we have $$y(x) = \sum_{n=1}^\infty n! \cdot y^{(n)}(x) \qquad,\qquad \forall\ x \in X$$</p>
<p>where $y : X \to Y$; either way, since $$\lim_{n \to \infty}n! = \infty$$ then, in order for the function to converge $\forall\ x \in X$, we must have $$\lim_{n \to \infty} y^{(n)}(x) = 0 \quad,\quad \forall\ x \in X \quad\implies\quad y(x)\ =\ P_m(x)\ =\ \sum_{k=0}^m a_k \cdot x^k \quad,\quad m \in \mathbb{N}$$</p>
<p>since the N<sup>th</sup> nested integral of $0$ is nothing else than a polynomial function of degree N-$1$ .</p>
|
2,368,813 | <p>I'm in a bit of trouble: I want to calculate the cross ratio of 4 points $ A B C D $ that are on a circle. </p>
<p>Sadly, "officially" it has to be calculated with A B C D as complex numbers, and Geometer's Sketchpad (the geometry program I am used to) doesn't know about complex numbers.</p>
<p>Now I am wondering: the cross ratio of 4 points on a circle is a real number, so is there really much difference between</p>
<p>$$ \frac { (A-C)(B-D) }{ (A-D)(B-C) } \text{ (complex method) }$$
And
$$ \frac { |AC||BD| }{ |AD||BC| } \text{ (distance method). }$$</p>
<p>where X-Y is the complex difference between X and Y</p>
<p>and |XY| is the distance between X and Y,</p>
<p>if it is already given that the four points are on a circle?</p>
<p>I think that the absolute values of the calculations are the same, but am I right (and can we prove it)?</p>
<p>ADDED LATER :</p>
<p>On Cave's suggestion I used a circle inversion to move the four points on the circle to four points on a line, and then the formulas give the same value (if I did it correctly).</p>
<p>What I did:</p>
<p>I took a diameter of the circle.
This diameter intersects the circle in O and U;
u is the line through U perpendicular to OU.</p>
<p>Project the 4 points onto line u with centre O (draw a ray through O and the point, the new point is where this ray intersects line u)</p>
<p>And calculate the cross ratio of the four new points.</p>
<p>This "projection" method and the earlier "distance" method give the same value but does this prove anything?</p>
| MvG | 35,416 | <p>Cross ratios are invariant under projective transformations, and projective transformations of the projective complex line $\mathbb{CP}^1$ are Möbius transformations. You can use a Möbius transformation to map the unit circle to the real line. (A circle inversion would do the same for the circle itself, although it would exchange the upper and lower half plane. Either one is fine for the question at hand.) After that you can compute the cross ratio on the line.</p>
<p>Actually there are still 3 real degrees of freedom mapping the unit circle to the real line. In the absense of reasons to do otherwise, I'd suggest the following map: map $1$ to $0$, $i$ to $1$ and $-1$ to $\infty$.</p>
<p>$$z\mapsto\frac{z-1}{iz+i}$$</p>
<p>That map has a strong connection to the <a href="https://en.wikipedia.org/wiki/Tangent_half-angle_substitution" rel="nofollow noreferrer">tangent half-angle substitution</a>. A point $t$ on the real axis corresponds to $\frac{1-t^2}{1+t^2}+i\frac{2t}{1+t^2}$ on the unit circle. Conversely, if you have $x+iy$ as a point on the unit circle, then $t=\frac{y}{x+1}$ is the corresponding real parameter.</p>
<p>As you see, the point at infinity is a relevant point on the real line, so using real numbers only is a poor choice of framework here. Instead use the real projective line, with homogeneous coordinates. A point $x+iy$ on the circle corresponds to a homogeneous coordinate vector</p>
<p>$$\begin{pmatrix}y\\x+1\end{pmatrix}\;.$$</p>
<p>Instead of differences of real numbers, you use $2\times2$ determinants of homogeneous coordinates in the formula for the cross ratio. So you compute</p>
<p>$$\frac{\begin{vmatrix}y_A&y_C\\x_A+1&x_C+1\end{vmatrix}\cdot
\begin{vmatrix}y_B&y_D\\x_B+1&x_D+1\end{vmatrix}}
{\begin{vmatrix}y_A&y_D\\x_A+1&x_D+1\end{vmatrix}\cdot
\begin{vmatrix}y_B&y_C\\x_B+1&x_C+1\end{vmatrix}}=\\
\frac{\bigl(y_A(x_C+1)-y_C(x_A+1)\bigr)
\bigl(y_B(x_D+1)-y_D(x_B+1)\bigr)}
{\bigl(y_A(x_D+1)-y_D(x_A+1)\bigr)
\bigl(y_B(x_C+1)-y_C(x_B+1)\bigr)}$$</p>
<p>This is obviously a real number, computed using real arithmetic only. And in contrast to the version using distances, you can get away without square roots, too.</p>
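<p>A numeric check that all three computations agree (a Python sketch; points chosen arbitrarily on the unit circle, with angles avoiding $\pi$ so that $x+1\ne0$):</p>

```python
import cmath

# Four arbitrary points on the unit circle.
A, B, C, D = [cmath.exp(1j * t) for t in (0.3, 1.1, 2.5, 4.0)]

# Complex cross ratio: real for four concyclic points.
cr = ((A - C) * (B - D)) / ((A - D) * (B - C))
assert abs(cr.imag) < 1e-12

# Distance version agrees with |cr|.
dist = (abs(A - C) * abs(B - D)) / (abs(A - D) * abs(B - C))
assert abs(dist - abs(cr)) < 1e-12

# Homogeneous-coordinate version (y, x+1) with 2x2 determinants.
def h(p):
    return (p.imag, p.real + 1.0)

def det(p, q):
    return h(p)[0] * h(q)[1] - h(q)[0] * h(p)[1]

cr_det = (det(A, C) * det(B, D)) / (det(A, D) * det(B, C))
assert abs(cr_det - cr.real) < 1e-12
```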
|
3,354,684 | <p>I was trying to prove the following inequality:</p>
<p><span class="math-container">$$|\sin n\theta| \leq n\sin \theta \
\text{for all n=1,2,3... and } \
0<\theta<π $$</span></p>
<p>I succeeded in proving this via induction, but I didn't get a "feel" for the proof. Are there other proofs of this inequality?</p>
| Theo Bendit | 248,286 | <p>You can show that <span class="math-container">$|\sin(x)|$</span> is subadditive, i.e.
<span class="math-container">$$|\sin(x + y)| \le |\sin(x)| + |\sin(y)|.$$</span>
To prove this, simply expand the left side:
<span class="math-container">\begin{align*}
|\sin(x + y)| &= |\sin(x)\cos(y) + \sin(y)\cos(x)| \\
&\le |\sin(x)| \cdot |\cos(y)| + |\sin(y)| \cdot |\cos(x)| \\
&\le |\sin(x)| + |\sin(y)|,
\end{align*}</span>
as <span class="math-container">$|\cos(x)|$</span> and <span class="math-container">$|\cos(y)|$</span> are less than or equal to <span class="math-container">$1$</span>.</p>
<p>How does this help? Note that, when <span class="math-container">$0 < \theta < \pi$</span>, we have <span class="math-container">$\sin(\theta) \ge 0$</span>, hence <span class="math-container">$|\sin(\theta)| = \sin(\theta)$</span>. Using induction, we can use subadditivity to show that
<span class="math-container">$$|\sin(n\theta)| = |\sin(\underbrace{\theta + \theta + \ldots + \theta}_{\text{n times}})| \le \underbrace{|\sin(\theta)| + \ldots + |\sin(\theta)|}_{\text{n times}} = n|\sin(\theta)| = n \sin(\theta).$$</span></p>
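<p>A brute-force check of the inequality on a grid (a Python sketch, independent of the proof above):</p>

```python
from math import sin, pi

# Check |sin(n*t)| <= n*sin(t) for n = 1..50 on a grid of t in (0, pi).
steps = 1000
for i in range(1, steps):
    t = pi * i / steps
    s = sin(t)
    for n in range(1, 51):
        assert abs(sin(n * t)) <= n * s + 1e-12
```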
|
4,027,889 | <p>I'm trying to graph <span class="math-container">$f(x,y)=\ln(x)-y$</span>; however, I am not sure how, as all of my tools are refusing to graph it.</p>
<p>Can you please help me?</p>
<p>Thanks</p>
| DMcMor | 155,622 | <p>Note that there is no way to plot this in 2D because you would need an equation of the form <span class="math-container">$f(x,y) = 0$</span>, and what you have is <span class="math-container">$f(x,y) = \ln(x) - y$</span>, which can be thought of as the surface <span class="math-container">$z = \ln(x) - y$</span>. That means you need a 3D plot to visualize it.</p>
<p>There are several good online options for plotting 3D graphs. Here are two good ones:</p>
<p>1.) <a href="https://c3d.libretexts.org/CalcPlot3D/index.html?type=z;z=ln(x)-y;visible=true;umin=-6;umax=6;vmin=-6;vmax=6;grid=50;format=normal;alpha=-1;constcol=rgb(255,0,0);view=0;contourcolor=red;fixdomain=true&type=window;hsrmode=3;nomidpts=true;anaglyph=-1;center=-6.513098527506768,-7.382907616178462,1.7527757135334454,1;focus=0,0,0,1;up=-0.028634991195109524,0.2547385033929391,0.9665859155648717,1;transparent=false;alpha=140;twoviews=false;unlinkviews=false;axisextension=0.7;xaxislabel=x;yaxislabel=y;zaxislabel=z;edgeson=true;faceson=true;showbox=true;showaxes=true;showticks=true;perspective=true;centerxpercent=0.5;centerypercent=0.5;rotationsteps=30;autospin=true;xygrid=false;yzgrid=false;xzgrid=false;gridsonbox=false;gridplanes=false;gridcolor=rgb(128,128,128);xmin=-6;xmax=6;ymin=-6;ymax=6;zmin=-6;zmax=6;xscale=5;yscale=5;zscale=5;zcmin=-4;zcmax=4;zoom=0.277778;xscalefactor=1;yscalefactor=1;zscalefactor=1" rel="nofollow noreferrer">Calcplot3D</a>, which produces the following plot. This one is my personal favorite, but you might want to see <a href="https://c3d.libretexts.org/CalcPlot3D/CalcPlot3D-Help/section-2.html" rel="nofollow noreferrer">here</a> for key commands for camera controls.<a href="https://i.stack.imgur.com/LO4q5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LO4q5.png" alt="enter image description here" /></a></p>
<p>2.) <a href="https://www.geogebra.org/3d/gu3rkwpa" rel="nofollow noreferrer">Geogebra</a>, which produces the plot <a href="https://i.stack.imgur.com/6NLFM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6NLFM.png" alt="enter image description here" /></a></p>
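<p>If you prefer to stay offline, the same surface can be tabulated with NumPy and handed to any 3D plotter, e.g. matplotlib's <code>plot_surface</code> (a minimal sketch, assuming NumPy is available; note that $\ln(x)$ requires $x>0$, so the grid must start to the right of $x=0$):</p>

```python
import numpy as np

# Tabulate z = ln(x) - y on a grid; ln(x) is only defined for x > 0.
x = np.linspace(0.1, 6.0, 60)
y = np.linspace(-6.0, 6.0, 60)
X, Y = np.meshgrid(x, y)
Z = np.log(X) - Y

# Hand (X, Y, Z) to a surface plotter, for example:
#   import matplotlib.pyplot as plt
#   ax = plt.figure().add_subplot(projection="3d")
#   ax.plot_surface(X, Y, Z)
#   plt.show()
```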
|
3,299,492 | <p>Is there any nice characterization of the class of polynomials that can be written with the following formula for some <span class="math-container">$c_i , d_i \in \mathbb{N}$</span>? Alternatively, where can I read more about these? Do they have a name?
<span class="math-container">$$c_1 + \left( c_2 + \left( \dots (c_k + x^{d_k}) \dots \right)^{d_2} \right)^{d_1}$$</span></p>
<p>For instance, it is not possible to write <span class="math-container">$1 + x + x^2$</span> in this way, but it is possible to write <span class="math-container">$1 + 2x + x^2$</span> or <span class="math-container">$0 + x^3$</span>.</p>
<p><em>For some context:</em> two actions on the set of polynomials <span class="math-container">$A \times \mathbb{N}[x] \to \mathbb{N}[x]$</span>, and <span class="math-container">$B \times \mathbb{N}[x] \to \mathbb{N}[x]$</span> can be combined into a single one <span class="math-container">$\left<A,B\right> \times \mathbb{N}[x] \to \mathbb{N}[x]$</span> that takes a word of elements on <span class="math-container">$A$</span> and <span class="math-container">$B$</span> and applies the multiple actions in order. In the case of multiplication and exponential, we can see that the class of polynomials
<span class="math-container">$$c_1 \left( c_2 \left( \dots (c_k x^{d_k}) \dots \right)^{d_2} \right)^{d_1}$$</span>
can be just described as the polynomials of the form <span class="math-container">$cx^d$</span>. I do not expect such a simple characterization in the case of sums and exponentials, but I would like to know if this class of polynomials has been described or studied somewhere.</p>
| Kavi Rama Murthy | 142,385 | <p>It converges for <span class="math-container">$x=0$</span> but not for any other value of <span class="math-container">$x$</span>. </p>
|
3,426,907 | <p>I'm not a mathematician, but a programmer who loves solving math puzzles, so please forgive me if I don't use the correct terms.</p>
<p>Imagine there are 100 lotteries with 100 tickets each. The lotteries have no connection to each other, and every lottery has exactly 1 prize. I buy 1 ticket from each lottery, so in the end I have 100 tickets from 100 lotteries.</p>
<p>I know how to calculate the probability P(x) of winning x prizes, with x in the range 0..100. All the probabilities added together give 1.0.</p>
<p>My question is: What is the expected number of prizes I'll win? How do I calculate this correctly?</p>
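<p>Not part of the original post, but the setup described is a binomial distribution, and a short Python sketch (illustrative only) shows both facts mentioned: the probabilities $P(x)$ sum to 1.0, and the expectation comes out to $np$ by linearity of expectation:</p>

```python
from math import comb

n, p = 100, 1 / 100  # 100 independent lotteries, win chance 1/100 each

# P(x) = C(n, x) p^x (1-p)^(n-x): the binomial distribution described above
P = [comb(n, x) * p**x * (1 - p) ** (n - x) for x in range(n + 1)]

assert abs(sum(P) - 1.0) < 1e-12       # "all probabilities added give 1.0"
expected = sum(x * P[x] for x in range(n + 1))
assert abs(expected - n * p) < 1e-9    # linearity of expectation: n*p = 1
```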
| Eric Towers | 123,905 | <p>Regarding 2: Are you familiar with the <a href="https://en.wikipedia.org/wiki/Topologist%27s_sine_curve" rel="nofollow noreferrer">topologist's sine curve</a>, <span class="math-container">$T$</span>, and its closure? </p>
<p><span class="math-container">$T \smallsetminus \{(0,0)\}$</span> is path connected, but its closure, <span class="math-container">$\overline{T} = T \cup \{(0,y) : y \in [-1,1]\}$</span>, is not.</p>
|
2,546,497 | <p>987x ≡ 610 (mod 1597)</p>
<p>Is this a correct way of applying Fermat's little theorem to linear congruences? Does it make any sense? If not, could someone advise a bit.</p>
<p>Since gcd(987,1597)=1</p>
<p>-> 987^(1597-1) ≡ 1 (mod 1597)</p>
<p>-> 987^1596 ≡ 1 (mod 1597)</p>
<pre><code>610 ≡ 610 (mod 1597)
987^1596 * 610 ≡ 610 (mod 1597)
987 * 987^1595 * 610 ≡ 610 (mod 1597)
-> x0 = 987^1595 * 610
->[987^1595 * 610]127
</code></pre>
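<p>Whatever the final reduction, the approach can be sanity-checked numerically. A small Python check (mine, not part of the post): since $1597$ is prime and $\gcd(987,1597)=1$, Fermat gives $987^{1595} \equiv 987^{-1} \pmod{1597}$, so $x_0 = 987^{1595}\cdot 610 \bmod 1597$ really solves the congruence:</p>

```python
p = 1597                            # prime, and gcd(987, 1597) = 1
x0 = pow(987, p - 2, p) * 610 % p   # 987^1595 ≡ 987^(-1) (mod 1597) by Fermat

assert 987 * x0 % p == 610          # x0 solves 987 x ≡ 610 (mod 1597)
```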
| vrugtehagel | 304,329 | <p>Yes the order matters, but we still use $c(n,r)$ (more commonly denoted ${n\choose r}$) because we essentially want to find the number of ways to pick $r$ zeroes out of a string with length $n$ and change those to $1$'s, resulting in the amount of strings with exactly $r$ ones.</p>
|
1,285,416 | <p>Is there an easy way to see that the polynomial $x^2 + 3x + 10$ is irreducible modulo 29 without having to go through each element 0,1,..,28 and check for roots?</p>
| André Nicolas | 6,312 | <p>Probably not. But I can suggest a harder way. The congruence $x^2+3x+10\equiv 0$ can be rewritten as $4x^2+12x+40\equiv 0$, and then as $(2x+3)^2+31\equiv 0$ and then as $(2x+3)^2 \equiv -2$. So the issue is whether $-2$ is a quadratic residue of $29$. Note that $-1$ is a QR of $29$. But $2$ is an NR of $29$, since $29$ is of the form $8k+5$. So $-2$ is an NR of $29$, no solutions.</p>
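<p>A brute-force Python check (illustrative, not part of the answer) confirms both claims: $-2$ is a quadratic non-residue mod $29$ by Euler's criterion, and consequently the quadratic has no root mod $29$:</p>

```python
# Euler's criterion: a is a non-residue mod an odd prime p
# iff a^((p-1)/2) ≡ -1 (mod p).
p = 29
assert pow(-2 % p, (p - 1) // 2, p) == p - 1          # -2 is an NR mod 29
assert all((x * x + 3 * x + 10) % p != 0 for x in range(p))  # no roots
```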
|
1,285,416 | <p>Is there an easy way to see that the polynomial $x^2 + 3x + 10$ is irreducible modulo 29 without having to go through each element 0,1,..,28 and check for roots?</p>
| Rolf Hoyer | 228,612 | <p>One way to show it has no root is to complete the square.
$$
x^2+3x+10
\equiv x^2 -26x + 155
= (x-13)^2 -14 \pmod{29}
$$
Now, showing that there is no root is equivalent to showing that $14$ is not a quadratic residue mod 29. Using quadratic reciprocity and the Legendre symbol, we compute
$$
\binom{14}{29} = \binom{2}{29}\binom{7}{29}
= - \binom{29}{7} = -\binom{1}{7} = -1
$$
Thus, there is no solution to $x^2 = 14\pmod{29}$, and therefore no solution to $(x-13)^2-14 = 0 \pmod{29}$.</p>
|
1,285,416 | <p>Is there an easy way to see that the polynomial $x^2 + 3x + 10$ is irreducible modulo 29 without having to go through each element 0,1,..,28 and check for roots?</p>
| Mark Bennet | 2,906 | <p>Note that $$x^2+3x+10\equiv x^2+32x+10 = (x+16)^2-246$$ So the real question is whether $246\equiv 14\equiv 72 \bmod 29$ is a square modulo $29$ i.e. whether $6^2\times 2$ is a square modulo $29$, i.e. whether $2$ is a square. But $2$ is a square modulo a prime $p$ only if $p\equiv \pm 1 \bmod 8$</p>
|
1,285,416 | <p>Is there an easy way to see that the polynomial $x^2 + 3x + 10$ is irreducible modulo 29 without having to go through each element 0,1,..,28 and check for roots?</p>
| Bill Dubuque | 242 | <p>One doesn't need quadratic reciprocity, only the simple idea behind <a href="http://en.wikipedia.org/wiki/Euler%27s_criterion" rel="nofollow">Euler's Criterion</a></p>
<p>$ {\rm mod}\ 29\!:\ 0 \equiv 4f(x) \equiv {(2x\!+\!3)^2}\!+31\,\Rightarrow \color{#0a0}{(2x\!+\!3)^2\equiv -2}.\ $ But $\,\color{#0a0}{-2}\,$ is not a square by</p>
<p>$\qquad \color{#0a0}{a^2 \equiv -2}\!\!\overset{\large\ \ (\,\ )^{14}}\Rightarrow\! a^{28} \equiv 2^{14} \equiv \dfrac{(2^5)^3}{2} \equiv \dfrac{3^3}{2}\equiv \dfrac{-2}{2} \equiv \color{#c00}{-1},\,$ contra little Fermat</p>
|
3,267,550 | <p>Considering the system <span class="math-container">$x_{k+1}=Ax_k+Bu_k$</span> with quadratic cost </p>
<p><span class="math-container">$J^* = \min x_N^T S x_N + \sum_{k=0}^{N-1} x_k^T Qx_k+u^T_kRu_k$</span></p>
<p>where <span class="math-container">$Q,S\succeq 0, R\succ 0$</span>. The optimal state feedback is found as <span class="math-container">$u_k = -(R+B^\top P_{k+1}B)^{-1}B^T P(k+1)A x_k$</span>
where <span class="math-container">$P(k)$</span> is from the discrete-time riccati equation</p>
<p><span class="math-container">$
P_{k} = Q+ A^T P_{k+1}A -A^T P_{k+1}B(R+B^T P_{k+1}B)^{-1}B^T P_{k+1}A
$</span></p>
<p>and terminal penalty <span class="math-container">$P_N = S$</span>. This can be solved like an LMI turning the equality into</p>
<p><span class="math-container">$Q+ A^T P_{k+1}A - P_{k} -A^T P_{k+1}B(R+B^T P_{k+1}B)^{-1}B^T P_{k+1}A \succeq 0$</span></p>
<p>But how can I formally prove that the Riccati equality can be turned into an inequality? is it just Schur complement considering <span class="math-container">$P(k)\succeq 0$</span> and <span class="math-container">$R\succ0$</span>?</p>
<p>thanks for any suggestion</p>
| Kwin van der Veen | 76,466 | <p>The inequality </p>
<p><span class="math-container">$$
Q + A^\top P_{k+1} A - P_{k} - A^\top P_{k+1} B \left(R + B^\top P_{k+1} B\right)^{-1} B^\top P_{k+1} A \succeq 0 \tag{1}
$$</span></p>
<p>can also be seen as an equality with some unknown positive semi-definite slack term <span class="math-container">$X_k=X_k^\top\succeq0$</span> subtracted</p>
<p><span class="math-container">$$
Q + A^\top P_{k+1} A - P_{k} - A^\top P_{k+1} B \left(R + B^\top P_{k+1} B\right)^{-1} B^\top P_{k+1} A - X_k = 0. \tag{2}
$$</span></p>
<p>However this can also be seen as solving the time varying cost</p>
<p><span class="math-container">$$
J^* = \min_u x_N^T S\,x_N + \sum_{k=0}^{N-1} x_k^\top Q_k x_k + u^\top_k R\,u_k, \tag{3}
$$</span></p>
<p>with <span class="math-container">$Q_k \succeq Q$</span>. For example setting <span class="math-container">$Q_k = Q + I$</span> would have a corresponding solution for <span class="math-container">$(1)$</span>, but this minimizes the cost from <span class="math-container">$(3)$</span> and not the cost when <span class="math-container">$Q_k=Q$</span>. So solving <span class="math-container">$(1)$</span> can give different results than solving the discrete-time Riccati equation.</p>
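<p>As an illustration only (a scalar one-state system with made-up coefficients, not from the question), the backward recursion can be iterated directly; the iterates stay nonnegative and converge to the stationary solution of the equality version:</p>

```python
# Scalar sketch of the backward Riccati recursion; a, b, q, r, s are
# illustrative values, not taken from the question.
a, b, q, r, s = 1.2, 1.0, 1.0, 1.0, 1.0

P = s
for _ in range(200):  # P_k = q + a^2 P - (a P b)^2 / (r + b^2 P)
    P = q + a * a * P - (a * P * b) ** 2 / (r + b * b * P)
    assert P >= 0     # the iterates stay nonnegative

# After many steps, P is (numerically) a fixed point of the equality version.
residual = q + a * a * P - (a * P * b) ** 2 / (r + b * b * P) - P
assert abs(residual) < 1e-9
```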
|
4,642,566 | <p>For <span class="math-container">$x, y ∈$</span> <span class="math-container">$\mathbb{R}$</span>, let <span class="math-container">$x△y = 2(x + y)$</span>. Then <span class="math-container">$△$</span> is a binary operation on <span class="math-container">$\mathbb{R}$</span>.</p>
<p>Show that there is no identity element for <span class="math-container">$△$</span> on <span class="math-container">$\mathbb{R}$</span>.</p>
<p>I have tried <span class="math-container">$x△e = e△x=x$</span></p>
<p>I don't know what else to do.</p>
| Parcly Taxel | 357,390 | <p>The operator is easily seen to be commutative, so we can just solve
<span class="math-container">$$x\triangle e=2(x+e)=x\implies e=-x/2$$</span>
But the identity cannot depend on <span class="math-container">$x$</span>, so there is no identity in the first place.</p>
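<p>A tiny Python check (illustrative) makes the dependence on $x$ explicit: the would-be identity $e=-x/2$ works for each fixed $x$ but fails for every other $x$:</p>

```python
def tri(x, y):        # the operation x △ y = 2(x + y)
    return 2 * (x + y)

# For each fixed x, the only e with x △ e = x is e = -x/2 ...
for x in (1.0, 2.0, -3.0):
    assert tri(x, -x / 2) == x

# ... but that candidate changes with x, so no single identity element exists.
assert tri(1.0, -0.5) == 1.0 and tri(2.0, -0.5) != 2.0
```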
|
1,315,450 | <blockquote>
<p>Prove that for all $n\in\mathbb{N}$ and $x>0$,
$$2^{-1+\frac{1}{n}}\left(x+1\right)\leq\left(x^{n}+1\right)^{\frac{1}{n}}$$</p>
</blockquote>
<p>The last class was about Taylor polynomials of functions, so I thought this might give me a solution, but looking at the derivatives the only thing I could think would be useful is looking at the $(n-1)$th degree polynomial of </p>
<p>$$\frac{\left(x+1\right)^n}{2^{n-1}}-\left(x^{n}+1\right)$$</p>
<p>(Which I got by raising to the $n$th power)</p>
<p>Though this gave me the ugly expression (for some $c$ in $[0,x]$):</p>
<p>$$f\left(x\right)=\frac{1}{2^{n-1}}+\frac{n}{2^{n-1}}x+\frac{n\left(n-1\right)}{2^{n-1}2!}x^{2}+\dots+\frac{n\left(n-1\right)\cdots\left(3\right)}{2^{n-1}\left(n-2\right)!}x^{n-2}+\left(\frac{n!\left(c+1\right)}{2^{n-1}}-n!c\right)\frac{1}{\left(n-1\right)!}x^{n-1}$$</p>
<p>Which I have no idea how to "make negative". So I tried falling back to induction, but after the pretty obvious base case of $2$ I had no idea how to continue.</p>
<p>Any tips/hints on how to prove it?</p>
| Ewan Delanoy | 15,381 | <p>If you put $t=x-1$ and $d=x^n+1-2\bigg(\frac{x+1}{2}\bigg)^n$, you have</p>
<p>$$
\begin{array}{lcl}
d &=& 1+(1+t)^n-2\bigg(1+\frac{t}{2}\bigg)^n \\
&=& 1+\sum_{j=0}^n \binom{n}{j} t^j -\sum_{j=0}^n \binom{n}{j} \frac{t^j}{2^{j-1}} \\
&=& 1+\sum_{j=0}^n \binom{n}{j} \frac{2^{j-1}-1}{2^{j-1}}t^j \\
&=& \sum_{j=2}^n \binom{n}{j} \frac{2^{j-1}-1}{2^{j-1}}t^j
\end{array}
$$</p>
<p>and all the coefficients in the last sum above are clearly positive, so every term is nonnegative whenever $t\geq 0$. This shows that $d\geq 0$ whenever $x\geq 1$.</p>
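<p>The expansion can be spot-checked numerically; the following Python sketch (mine, not part of the answer) verifies the identity for a few values of $n$ and confirms $d \geq 0$ for $t \geq 0$:</p>

```python
from math import comb

# Check d = 1 + (1+t)^n - 2(1 + t/2)^n
#         = sum_{j=2}^n C(n,j) * (2^(j-1) - 1)/2^(j-1) * t^j
# numerically, and that d >= 0 once t >= 0 (i.e. x >= 1).
for n in (2, 3, 5, 8):
    for t in (0.0, 0.3, 1.0, 2.5):
        d = 1 + (1 + t) ** n - 2 * (1 + t / 2) ** n
        rhs = sum(comb(n, j) * (2 ** (j - 1) - 1) / 2 ** (j - 1) * t ** j
                  for j in range(2, n + 1))
        assert abs(d - rhs) < 1e-9 and d >= -1e-12
```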
|
2,339,521 | <p>In a game, each trial consists of two possible outcomes, success or failure. Two trials $H$ and $K$ are carried out. The probability of success for trial $H$ is $x$, and the probability of success for trial $K$ is $2/5$ if trial $H$ is a success, and $x/2$ if trial $H$ is a failure. Given that the probability of trials $H$ and $K$ ending with one success is $1/5$, determine the value of $x$. </p>
<p>I approached this question by setting $(1-x)(x/2) = 1/5$, which did not give the correct answer at all. What am I missing?</p>
| Henno Brandsma | 4,280 | <p>So $K$ is done after $H$, so we condition on the result of $H$.</p>
<p>To get 1 success: $$P(H \land \lnot K) + P(\lnot H \land K) = P(\lnot K|H)P(H) + P(K| \lnot H)P(\lnot H) = \frac{3}{5} x + \frac{x}{2}(1-x)$$</p>
<p>and this should equal $\frac{1}{5}$</p>
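<p>One way to finish (a sketch of mine, not from the answer): clearing denominators turns $\frac{3}{5}x + \frac{x}{2}(1-x) = \frac{1}{5}$ into $5x^2-11x+2=0$, whose roots are $x=2$ and $x=1/5$; only the latter is a valid probability. A quick check with Python's exact rational arithmetic:</p>

```python
from fractions import Fraction

def total(x):
    # P(exactly one success) = (3/5) x + (x/2)(1 - x)
    return Fraction(3, 5) * x + x / 2 * (1 - x)

# Both roots of 5x^2 - 11x + 2 = 0 satisfy the equation exactly ...
assert total(Fraction(1, 5)) == Fraction(1, 5)
assert total(Fraction(2)) == Fraction(1, 5)
# ... but only x = 1/5 lies in [0, 1], so it is the valid probability.
assert 0 <= Fraction(1, 5) <= 1 and not (0 <= Fraction(2) <= 1)
```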
|
269,398 | <p>I have Python code for data analysis that uses the <a href="https://matplotlib.org/3.5.0/tutorials/colors/colormaps.html" rel="nofollow noreferrer">"seismic" color scale</a> for 2D density plots. However, I also need to do some other plots with Mathematica (because of packages etc.), for which I would like the same color scale. Unfortunately, the closest-resembling color scale of Mathematica (the temperature map) is still quite different from the one in Python.
Do you have any suggestions on how to "export/import" a color scale between Python and Mathematica? This could then be applied to any variation of color map.</p>
| J. M.'s persistent exhaustion | 50 | <p>There's no need to guess if you look at <a href="https://github.com/matplotlib/matplotlib/blob/main/lib/matplotlib/_cm.py#L190-L193" rel="noreferrer">the Matplotlib source</a>:</p>
<pre><code>seismicColors[x_?NumericQ] /; 0 <= x <= 1 :=
Blend[{RGBColor[0., 0., 0.3], RGBColor[0., 0., 1.], RGBColor[1., 1., 1.],
RGBColor[1., 0., 0.], RGBColor[0.5, 0., 0.]}, x]
</code></pre>
<p>Examples:</p>
<pre><code>LinearGradientImage[seismicColors, {300, 30}]
</code></pre>
<p><img src="https://i.stack.imgur.com/M2nWt.png" alt="seismic colormap" /></p>
<pre><code>ContourPlot[3 (1 - x)^2 Exp[-x^2 - (y + 1)^2] - 10 (x/5 - x^3 - y^5) Exp[-x^2 - y^2] -
Exp[-(x + 1)^2 - y^2]/3, {x, -3, 3}, {y, -3, 3},
ColorFunction -> seismicColors, Contours -> 25, PlotRange -> All]
</code></pre>
<p><img src="https://i.stack.imgur.com/PMuDi.png" alt="contour plot of MATLAB's "peaks"" /></p>
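<p>Going the other way, the same ramp can be reproduced in a few lines of dependency-free Python (a sketch assuming the five anchors are evenly spaced on $[0,1]$, which matches <code>Blend</code>'s default spacing):</p>

```python
# Pure-Python version of the Blend above: piecewise-linear interpolation
# through the five anchor colors from the Matplotlib source, evenly spaced.
ANCHORS = [(0.0, 0.0, 0.3), (0.0, 0.0, 1.0), (1.0, 1.0, 1.0),
           (1.0, 0.0, 0.0), (0.5, 0.0, 0.0)]

def seismic(x):
    pos = x * (len(ANCHORS) - 1)
    i = min(int(pos), len(ANCHORS) - 2)
    t = pos - i
    lo, hi = ANCHORS[i], ANCHORS[i + 1]
    return tuple((1 - t) * a + t * b for a, b in zip(lo, hi))

assert seismic(0.0) == (0.0, 0.0, 0.3)   # dark blue end
assert seismic(0.5) == (1.0, 1.0, 1.0)   # white midpoint
assert seismic(1.0) == (0.5, 0.0, 0.0)   # dark red end
```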
|
3,052,746 | <p>I want to solve <span class="math-container">$2x = \sqrt{x+3}$</span>, which I have tried as below:</p>
<p><span class="math-container">$$\begin{equation}
4x^2 - x -3 = 0 \\
x^2 - \frac14 x - \frac34 = 0 \\
x^2 - \frac14x = \frac34 \\
\left(x - \frac12 \right)^2 = 1 \\
x = \frac32 , -\frac12
\end{equation}$$</span></p>
<p>This, however, is incorrect.</p>
<p>What is wrong with my solution?</p>
| KM101 | 596,598 | <p>You made a mistake when completing the square.</p>
<p><span class="math-container">$$x^2-\frac{1}{4}x = \frac{3}{4} \color{red}{\implies\left(x-\frac{1}{2}\right)^2 = 1}$$</span></p>
<p>This is easy to spot since <span class="math-container">$(a\pm b)^2 = a^2\pm2ab+b^2$</span>, which means the coefficient of the linear term becomes <span class="math-container">$-2\left(\frac{1}{2}\right) = -1 \color{red}{\neq -\frac{1}{4}}$</span>. This means something isn’t correct...</p>
<p>Note that the equation is rewritten such that <span class="math-container">$a = 1$</span>, so you need to add <span class="math-container">$\left(\frac{b}{2}\right)^2$</span> to both sides and factor. (In other words, divide the coefficient of the linear term <span class="math-container">$x$</span> by <span class="math-container">$2$</span> and square the result, which will then be added to both sides.)</p>
<p><span class="math-container">$$b = -\frac{1}{4} \implies \left(\frac{b}{2}\right)^2 \implies \frac{1}{64}$$</span></p>
<p>Which gets</p>
<p><span class="math-container">$$x^2-\frac{1}{4}x+\color{blue}{\frac{1}{64}} = \frac{3}{4}+\color{blue}{\frac{1}{64}}$$</span></p>
<p>Factoring the perfect square trinomial yields</p>
<p><span class="math-container">$$\left(x-\frac{1}{8}\right)^2 = \frac{49}{64}$$</span></p>
<p>And you can probably take it on from here.</p>
<p><strong>Edit</strong>: As it has been noted in the other answers (should have clarified this as well), squaring introduces the possibility of extraneous solutions, so always check your solutions by plugging in the values obtained in the original equation. By squaring, you’re solving</p>
<p><span class="math-container">$$4x^2 = x+3$$</span></p>
<p>which is actually</p>
<p><span class="math-container">$$2x = \color{blue}{\pm}\sqrt{x+3}$$</span></p>
<p>so your negative solution will satisfy this new equation but not the original one, since that one is</p>
<p><span class="math-container">$$2x = \sqrt{x+3}$$</span></p>
<p>with no <span class="math-container">$\pm$</span>.</p>
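<p>A quick numeric check (illustrative) of the extraneous-root point: both candidates solve the squared equation $4x^2 = x+3$, but only $x=1$ satisfies the original one:</p>

```python
from math import sqrt

# Both candidate roots solve the squared equation 4x^2 = x + 3 ...
for x in (1.0, -0.75):
    assert abs((2 * x) ** 2 - (x + 3)) < 1e-12

# ... but only x = 1 satisfies the original 2x = sqrt(x + 3).
assert 2 * 1.0 == sqrt(1.0 + 3)        # genuine solution
assert 2 * (-0.75) != sqrt(-0.75 + 3)  # extraneous: -1.5 != 1.5
```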
|
3,052,746 | <p>I want to solve <span class="math-container">$2x = \sqrt{x+3}$</span>, which I have tried as below:</p>
<p><span class="math-container">$$\begin{equation}
4x^2 - x -3 = 0 \\
x^2 - \frac14 x - \frac34 = 0 \\
x^2 - \frac14x = \frac34 \\
\left(x - \frac12 \right)^2 = 1 \\
x = \frac32 , -\frac12
\end{equation}$$</span></p>
<p>This, however, is incorrect.</p>
<p>What is wrong with my solution?</p>
| Leonhard Euler | 481,442 | <p><span class="math-container">$$\begin{equation}
4x^2 - x -3 = 0 \\
(4x + 3)(x - 1) = 0 \\
x = -\frac34 , 1
\end{equation}$$</span></p>
|
2,343,027 | <p>I am having a problem proving this map is one-to-one. Nothing is said about the relationship between $K$ and $R$. Or is it not necessary that they be related somehow? Please help.</p>
| José Carlos Santos | 446,262 | <p>Suppose that $f(x)=f(y)$ with $x,y\in K$ and $x\neq y$. Then, if $z=x-y$, $f(z)=0$ with $z\neq 0$. But$$1=f(1)=f\left(z\times z^{-1}\right)=f(z)\times f(z^{-1})\text,$$which is impossible, since $f(z)=0$.</p>
|
3,956,828 | <p>I can find the nth integral of <span class="math-container">$\ln(z)$</span> as follows:
<span class="math-container">\begin{aligned}
\left(\frac d{dz}\right)^{-n}\ln(z)&=\frac1{\Gamma(n)}\int\limits_0^z(z-t)^{n-1}\ln(t)dt\\
&=\frac1{n!}\left[\int\limits_0^z\frac1t(z-t)^ndt-z^n\ln(0)\right]\\
&=\frac1{n!}\left[\int\limits_0^1\frac{z^nt^n-z^n+z^n}{1-t}dt-z^n\ln(0)\right]\\
&=\frac{\ln(z)-H_n}{n!}z^n,
\end{aligned}</span>
but I can't get very far with <span class="math-container">$\ln(1+z/k)$</span>. I was able to come up with this but it is only a conjecture:
<span class="math-container">\begin{aligned}
\frac1{\Gamma(n)}\int\limits_0^z(z-t)^{n-1}\ln\left(1+\frac tk\right)dt&=\frac{\ln\left(1+\frac{z}{k}\right)-H_n}{n!}(k+z)^n+\sum ^{n}_{i=0}\frac{H_{i}}{i!(n-i)!}z^{n-i}k^i\\
&=\frac1{n!}\ln\left( 1+\frac{z}{k}\right)( k+z)^{n} -\sum ^{n}_{i=0}\frac{H_{n} -H_{i}}{i!( n-i) !} z^{n-i} k^{i}.
\end{aligned}</span>
I'm only using <span class="math-container">$k,n\in\mathbb{N}$</span> and <span class="math-container">$z\in\mathbb{R}$</span> for now. Any help proving this would be appreciated.</p>
| Varun Vejalla | 595,055 | <p><em>This is not a derivation of the integral, but rather a proof of your conjecture.</em></p>
<p>Your expression is <span class="math-container">$$f_n(z)=\frac1{n!}\ln\left( 1+\frac{z}{k}\right)( k+z)^{n} -\sum ^{n}_{i=1}\frac{H_{n} -H_{n-i}}{i!( n-i) !} k^{n-i} z^{i}$$</span></p>
<p>I reversed the order of summation from yours and then got rid of the first term since <span class="math-container">$H_n - H_n = 0$</span>. Then there are two things that need to be shown for each <span class="math-container">$n \ge 1$</span>: <span class="math-container">$f_{n-1}(z) = \frac{d}{dz}f_n(z)$</span> and <span class="math-container">$f_n(z) = 0$</span>. The second is true since each term is <span class="math-container">$0$</span> when <span class="math-container">$z = 0$</span>. For the first, the derivative of <span class="math-container">$f_n(z)$</span> is <span class="math-container">$$\frac{1}{n!}\left( \ln\left( 1+\frac{z}{k}\right)\frac{d}{dz}(k+z)^n + (k+z)^n\frac{d}{dz}\ln\left( 1+\frac{z}{k}\right) \right) - \sum_{i=1}^n \frac{H_{n} -H_{n-i}}{i!( n-i) !} k^{n-i}\frac{d}{dz}z^{i}$$</span></p>
<p>This simplifies to <span class="math-container">$$\frac{1}{(n-1)!}\ln\left( 1+\frac{z}{k}\right)(k+z)^{n-1} + \frac{1}{n!}(k+z)^{n-1} - \sum_{i=1}^n \frac{H_{n} -H_{n-i}}{(i-1)!( n-i) !} k^{n-i}z^{i-1}$$</span></p>
<p>Then, what remains to be shown is <span class="math-container">$$\frac{1}{n!}(k+z)^{n-1} - \sum_{i=1}^n \frac{H_{n} -H_{n-i}}{(i-1)!( n-i) !} k^{n-i}z^{i-1} = -\sum ^{n-1}_{i=1}\frac{H_{n-1} -H_{n-1-i}}{i!( n-1-i) !} k^{n-1-i} z^{i}$$</span></p>
<p>Expanding <span class="math-container">$(k+z)^{n-1}$</span> by the binomial expansion and setting the coefficients of <span class="math-container">$z^i$</span> equal to each other yields <span class="math-container">$$\frac{1}{n!}\binom{n-1}{i}k^{n-1-i} - \frac{H_{n} -H_{n-i-1}}{i!( n-i-1) !} k^{n-i-1}=-\frac{H_{n-1} -H_{n-1-i}}{i!( n-1-i) !} k^{n-1-i}$$</span></p>
<p>Subtracting <span class="math-container">$\frac{H_{n-1-i}}{i!(n-1-i)!}k^{n-1-i}$</span>, dividing by <span class="math-container">$k^{n-1-i}$</span>, then multiplying by <span class="math-container">$i!(n-1-i)!$</span> on both sides makes it <span class="math-container">$$\frac{(n-1)!}{n!} - H_n = -H_{n-1}$$</span></p>
<p>This simplifies to <span class="math-container">$$\frac{1}{n}-H_n = -H_{n-1} \to H_n = H_{n-1} + \frac{1}{n}$$</span></p>
<p>which is exactly the recursive definition of the harmonic numbers. Therefore, your conjecture is true.</p>
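<p>The closed form can also be spot-checked numerically; the Python sketch below (helper names are mine, not from the post) compares a midpoint-rule approximation of the integral against the conjectured formula for a couple of $(n,k)$ pairs:</p>

```python
from math import log, factorial

def harmonic(n):
    return sum(1 / i for i in range(1, n + 1))

def closed_form(z, n, k):
    # (1/n!) ln(1+z/k) (k+z)^n - sum_i (H_n - H_{n-i})/(i!(n-i)!) k^(n-i) z^i
    first = log(1 + z / k) * (k + z) ** n / factorial(n)
    second = sum((harmonic(n) - harmonic(n - i)) / (factorial(i) * factorial(n - i))
                 * k ** (n - i) * z ** i for i in range(1, n + 1))
    return first - second

def lhs(z, n, k, m=20000):
    # Midpoint-rule approximation of (1/(n-1)!) ∫_0^z (z-t)^(n-1) ln(1+t/k) dt
    h = z / m
    total = sum((z - (j + 0.5) * h) ** (n - 1) * log(1 + (j + 0.5) * h / k)
                for j in range(m))
    return h * total / factorial(n - 1)

for n, k, z in ((2, 1, 1.0), (3, 2, 0.5)):
    assert abs(lhs(z, n, k) - closed_form(z, n, k)) < 1e-6
```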
|
4,422,512 | <p>I do have a matrix of following form</p>
<p><span class="math-container">$$M:=\left(\begin{array}{c|ccc}
A & & * &\\
\hline 0 & & &\\
0 & & B &\\
0 & & &\\
\end{array}\right)$$</span></p>
<p>Here the <span class="math-container">$0$</span>'s represent matrices of which any entry is equal to <span class="math-container">$0$</span>. Moreover, <span class="math-container">$A$</span> is an invertible square matrix. Is it true, that <span class="math-container">$M$</span> is invertible, iff <span class="math-container">$B$</span> is invertible? My guess is that in this case, it holds <span class="math-container">$\det(M)=\det(A)\det(B)$</span>, but I am not completely convinced, if this is true.</p>
| AnCar | 447,933 | <p>If <span class="math-container">$B$</span> is not invertible, then its rows are linearly dependent and this is preserved if you pad the rows with the same number of <span class="math-container">$0$</span>s at the beginning, implying that <span class="math-container">$M$</span> is not invertible either.
Now if <span class="math-container">$B$</span> is invertible, consider the block matrix <span class="math-container">$N$</span> formed by diagonal blocks <span class="math-container">$A^{-1}$</span> and <span class="math-container">$B^{-1}$</span> and everywhere else <span class="math-container">$0$</span>. Multiplying <span class="math-container">$MN$</span> you obtain a block matrix with identities on the diagonal, zeros in the lower left block and something in the top right block. Performing row-echelon reduction on this will land at the identity matrix. Since <span class="math-container">$N$</span> is clearly invertible (and with determinant <span class="math-container">$(det(A)det(B))^{-1}$</span>), I think it follows that <span class="math-container">$M$</span> is invertible and that the determinant satisfies the desired equation.</p>
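<p>A small numeric illustration (an illustrative $3\times3$ example of mine, with a hand-rolled cofactor determinant) of the conclusion $\det(M)=\det(A)\det(B)$:</p>

```python
def det(m):
    # Cofactor expansion along the first row (fine for small matrices).
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

# Block upper-triangular M with A = [2] (1x1), B 2x2, arbitrary top-right block.
A = [[2]]
B = [[1, 4], [3, 5]]
M = [[2, 7, -1],
     [0, 1, 4],
     [0, 3, 5]]

assert det(M) == det(A) * det(B)   # det(M) = det(A) det(B)
assert det(B) == 1 * 5 - 4 * 3     # = -7, so M is invertible iff B is
```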
|
2,615,626 | <p>The problem I have is:</p>
<blockquote>
<p>$\lim \limits_{x \to \infty} \sin{(x)}\ e^{-x}$</p>
</blockquote>
<p>Things I've tried:</p>
<ol>
<li><p>Researching how to do this problem, I've come across kind of similar examples that use either Taylors Rule, L'Hopitals Rule, or the Squeeze Theorem. Not sure which one to use.</p></li>
<li><p>On Wolfram the anser is $0$.</p></li>
<li><p>Broke the problem apart into $\lim \limits_{x \to \infty} \sin{(x)}\lim \limits_{x \to \infty} e^{-x}$</p></li>
</ol>
<p>$\lim \limits_{x \to \infty} \sin{(x)}=[-1,1]$</p>
<p>$\lim \limits_{x \to \infty} e^{-x}=0$</p>
| user284331 | 284,331 | <p>$|(\sin x)e^{-x}|\leq e^{-x}\rightarrow 0$ as $x\rightarrow\infty$.</p>
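<p>Numerically (an illustration, not a proof), the squeeze is visible immediately:</p>

```python
from math import sin, exp

# |sin(x) e^{-x}| <= e^{-x}, and the bounding envelope shrinks to 0.
for x in (1.0, 5.0, 20.0, 50.0):
    assert abs(sin(x) * exp(-x)) <= exp(-x)

assert exp(-50.0) < 1e-21   # so the product is squeezed below 1e-21 by x = 50
```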
|
542,808 | <p>I had a little back and forth with my logic professor earlier today about proving a number is irrational. I proposed that 1 + an irrational number is always irrational, thus if I could prove that 1 + irrational number is irrational, then it stood to reason that was also proving that the number in question was irrational.</p>
<p>Eg. $\sqrt2 + 1$ can be expressed as a continued fraction, and through looking at the fraction, it can be seen that $\sqrt2 + 1$ is irrational. I suggested that because of this, $\sqrt2$ is also irrational.</p>
<p>My professor said this is not always true, but I can't think of an example that suggests this.</p>
<p>If $x+1$ is irrational, is $x$ always irrational? </p>
<p>Actually, a better question is: if $x$ is irrational, is $x+n$ irrational, provided $n$ is a rational number?</p>
| Casteels | 92,730 | <p>Look at the contrapositive: If $x$ is rational, then $x+n$ is rational. Clearly this is a true statement.</p>
|
542,808 | <p>I had a little back and forth with my logic professor earlier today about proving a number is irrational. I proposed that 1 + an irrational number is always irrational, thus if I could prove that 1 + irrational number is irrational, then it stood to reason that was also proving that the number in question was irrational.</p>
<p>Eg. $\sqrt2 + 1$ can be expressed as a continuous fraction, and through looking at the fraction, it can be assumed $\sqrt2 + 1$ is irrational. I suggested that because of this, $\sqrt2$ is also irrational.</p>
<p>My professor said this is not always true, but I can't think of an example that suggests this.</p>
<p>If $x+1$ is irrational, is $x$ always irrational? </p>
<p>Actually, a better question is: if $x$ is irrational, is $x+n$ irrational, provided $n$ is a rational number?</p>
| Community | -1 | <p>Suppose <span class="math-container">$x$</span> is irrational and <span class="math-container">$x+\dfrac pq=\dfrac mn$</span> then, <span class="math-container">$x=\dfrac mn-\dfrac pq=\dfrac{mq-np}{nq}$</span> so, <span class="math-container">$x$</span> would then be rational. :) </p>
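<p>The same computation can be mirrored with exact rational arithmetic in Python (illustrative integers only):</p>

```python
from fractions import Fraction

# Mirroring the algebra: if x + p/q = m/n, then x = (mq - np)/(nq) is rational.
p, q, m, n = 3, 4, 7, 2           # arbitrary illustrative integers
rhs, shift = Fraction(m, n), Fraction(p, q)
x = rhs - shift

assert x == Fraction(m * q - n * p, n * q)   # x = (mq - np)/(nq)
assert x + shift == rhs                      # consistency check
```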
|
542,808 | <p>I had a little back and forth with my logic professor earlier today about proving a number is irrational. I proposed that 1 + an irrational number is always irrational, thus if I could prove that 1 + irrational number is irrational, then it stood to reason that was also proving that the number in question was irrational.</p>
<p>Eg. $\sqrt2 + 1$ can be expressed as a continuous fraction, and through looking at the fraction, it can be assumed $\sqrt2 + 1$ is irrational. I suggested that because of this, $\sqrt2$ is also irrational.</p>
<p>My professor said this is not always true, but I can't think of an example that suggests this.</p>
<p>If $x+1$ is irrational, is $x$ always irrational? </p>
<p>Actually, a better question is: if $x$ is irrational, is $x+n$ irrational, provided $n$ is a rational number?</p>
| Fredrik Meyer | 4,284 | <p>A proof in the style of "mathematics made difficult": Note that a number $r$ is rational if and only if $\mathbb Q(r) = \mathbb Q$. Now it is easy to see that $\mathbb Q(\gamma) = \mathbb Q(r+\gamma)$ for all rational $r$ and arbitrary $\gamma$. So if $r+\gamma$ is irrational, then $\gamma$ is also. </p>
|
542,808 | <p>I had a little back and forth with my logic professor earlier today about proving a number is irrational. I proposed that 1 + an irrational number is always irrational, thus if I could prove that 1 + irrational number is irrational, then it stood to reason that was also proving that the number in question was irrational.</p>
<p>Eg. $\sqrt2 + 1$ can be expressed as a continuous fraction, and through looking at the fraction, it can be assumed $\sqrt2 + 1$ is irrational. I suggested that because of this, $\sqrt2$ is also irrational.</p>
<p>My professor said this is not always true, but I can't think of an example that suggests this.</p>
<p>If $x+1$ is irrational, is $x$ always irrational? </p>
<p>Actually, a better question is: if $x$ is irrational, is $x+n$ irrational, provided $n$ is a rational number?</p>
| user88377 | 88,377 | <p>This follows essentially from the fact that as additive abelian groups $\mathbb Q$ is a normal subgroup of $\mathbb R$, which is true since both groups are abelian and the latter contains the former. Hence we have a quotient homomorphism $\varphi: \mathbb R \rightarrow \mathbb R / \mathbb Q$. $r \in \mathbb R$ is rational iff $\varphi (r) = 0_{\mathbb R / \mathbb Q}$, where $0_{\mathbb R / \mathbb Q}$ is the identity element of $\mathbb R / \mathbb Q$.</p>
<p>Consider then for $x \in \mathbb R - \mathbb Q$, $r \in \mathbb Q$, that
\begin{align*}
\varphi (x+r)&= \varphi(x) + \varphi(r) \\
&= \varphi(x) + 0_{\mathbb R / \mathbb Q} \\
&= \varphi(x) \\
& \ne 0_{\mathbb R / \mathbb Q} \text{ by assumption that }x \not\in \mathbb Q,
\end{align*}</p>
<p>so $x+r \not \in \mathbb Q$.</p>
<p>Of course, the above is just reinterpreting the above elementary proofs in a more general context, but this lets us apply the same line of reasoning to a wide variety of things including modular arithmetic, rotations/reflections of geometric objects, Rubik's cube moves, matrix multiplication, permutations of sets of numbers, exotic differentiable structures on spheres, ...</p>
|
542,808 | <p>I had a little back and forth with my logic professor earlier today about proving a number is irrational. I proposed that 1 + an irrational number is always irrational, thus if I could prove that 1 + irrational number is irrational, then it stood to reason that was also proving that the number in question was irrational.</p>
<p>Eg. $\sqrt2 + 1$ can be expressed as a continuous fraction, and through looking at the fraction, it can be assumed $\sqrt2 + 1$ is irrational. I suggested that because of this, $\sqrt2$ is also irrational.</p>
<p>My professor said this is not always true, but I can't think of an example that suggests this.</p>
<p>If $x+1$ is irrational, is $x$ always irrational? </p>
<p>Actually, a better question is: if $x$ is irrational, is $x+n$ irrational, provided $n$ is a rational number?</p>
| Joe | 623,665 | <p>Lemma:</p>
<blockquote>
<p>If <span class="math-container">$a$</span> is rational, and <span class="math-container">$b$</span> is rational, then <span class="math-container">$a+b$</span> is rational.</p>
</blockquote>
<p>Proof:</p>
<blockquote>
<p>Let <span class="math-container">$a=p/q$</span>, and <span class="math-container">$b=r/s$</span>, where <span class="math-container">$p,q,r,s$</span> are integers. Then, <span class="math-container">$a+b=(ps+qr)/(qs)\blacksquare$</span></p>
</blockquote>
<p>Now, let <span class="math-container">$a$</span> be rational, so that <span class="math-container">$-a$</span> is rational. If <span class="math-container">$a+x$</span> is rational, then <span class="math-container">$x=(a+x)+(-a)$</span> is rational, by the lemma. The desired result is the contrapositive: if <span class="math-container">$x$</span> is <em>irrational</em>, then <span class="math-container">$a+x$</span> cannot be rational, for then <span class="math-container">$x$</span> would be rational, which is a contradiction; hence, if <span class="math-container">$x$</span> is irrational, then <span class="math-container">$a+x$</span> is irrational.</p>
<p>This is a proof by contrapositive: when you prove an implication <span class="math-container">$P\implies Q$</span> by first proving that <span class="math-container">$\neg Q\implies \neg P$</span>. Here, we assume that <span class="math-container">$a$</span> is rational. Then, we prove that
<span class="math-container">$$
\text{$a+x$ is rational} \implies \text{$x$ is rational}
$$</span>
Then, to prove the implication
<span class="math-container">$$
\text{$x$ is irrational} \implies \text{$a+x$ is irrational}
$$</span>
we assume, for the sake of contradiction, that <span class="math-container">$x$</span> is irrational while <span class="math-container">$a+x$</span> is rational. By the first implication, <span class="math-container">$x$</span> would then have to be rational, which is a contradiction. Hence, <span class="math-container">$a+x$</span> is irrational.</p>
<p>Proofs by contrapositive are similar in nature to proofs by contradiction, except that the proof of <span class="math-container">$\neg Q\implies\neg P$</span> is a direct proof. This means that any intermediate results deduced using this direct proof can be used in other proofs. This is why, generally speaking, mathematicians favour contraposition over contradiction. See <a href="https://math.stackexchange.com/questions/4241520/taking-absolute-value-of-x-when-i-have-inequalities-on-both-sides?noredirect=1#comment8814353_4241520">here</a> for further discussion.</p>
|
2,530,788 | <p>$x + y + z = 0$;</p>
<p>$x^2 + y^2 + z^2 = 1$;</p>
<p>$x^3 + y^3 + z^3 = 0$;</p>
<p>I understand that there are multiple solutions which are the permutations of $(\sqrt{ 2 }/2, 0, -\sqrt{2}/2).$</p>
<p>How do I go about solving it? I have tried the usual brute-force Gaussian elimination method and Cramer's rule, and I still can't get the answer.</p>
<p>Would appreciate if someone could provide me with an algorithm and/or the steps.</p>
<p>Thank you very much!!</p>
| Dietrich Burde | 83,966 | <p>This is easy to solve by a direct calculation. Not even Vieta is needed. Substituting $z=-x-y$ the last equation gives
$$
xy(x+y)=0
$$
Obviously $x=y=0$ contradicts the second equation. By symmetry we may take $y=-x$ (the cases $x=0$ and $y=0$ are analogous); then $z=-x-y=0$ and the second equation gives $2x^2=1$, i.e. $x=\pm\sqrt{2}/2$. </p>
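<p>A quick numerical sanity check (a sketch of mine in pure Python; the tuple below is the solution claimed in the question) confirms that every permutation of $(\sqrt2/2, 0, -\sqrt2/2)$ satisfies all three equations:</p>

```python
from itertools import permutations
from math import isclose, sqrt

def residuals(x, y, z):
    # left-hand side minus right-hand side of each of the three equations
    return (x + y + z,
            x**2 + y**2 + z**2 - 1,
            x**3 + y**3 + z**3)

base = (sqrt(2) / 2, 0.0, -sqrt(2) / 2)
for p in permutations(base):
    assert all(isclose(r, 0.0, abs_tol=1e-12) for r in residuals(*p))
```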
|
228,651 | <p>When testing to determine the convergence or divergence of series with positive terms, there's a common way by comparing them with other series which we already know converge or diverge.</p>
<p>My question is: how do we choose the proper comparison series? I hope to get some detailed <strong>methodology</strong> for this. I am a bit confused: do I just have to rely on my intuition?</p>
<p>For instance, how do I choose a comparison series for this given one below:</p>
<p>$$\sum_{n=2}^\infty\frac{1}{n\ln n}$$</p>
| Community | -1 | <p>In your case, the convergence of $\displaystyle \sum_{n=2}^{\infty} \dfrac1{n \log n}$ can be checked by using the following convergence test (Cauchy condensation). If $(a_n)$ is a monotone decreasing sequence of nonnegative terms, then $\displaystyle \sum_{n=2}^{\infty} a_n$ converges iff $\displaystyle \sum_{n=2}^{\infty} 2^na_{2^n}$ converges.</p>
<p>Note that, since $\dfrac1{n\log n}$ is decreasing and $n < 2^{k+1}$ in each block, $$\sum_{n=2^k}^{2^{k+1}-1}\dfrac1{n \log n} > \sum_{n=2^k}^{2^{k+1}-1} \dfrac1{2^{k+1} \log \left(2^{k+1} \right)} = \dfrac{2^k}{2^{k+1} (k+1) \log(2)} = \dfrac1{2\log 2} \dfrac1{k+1}$$</p>
<p>Hence, $$\displaystyle \sum_{n=2}^{\infty} \dfrac1{n \log n} > \dfrac1{2\log 2} \sum_{k=1}^{\infty} \dfrac1{k+1}$$
Hence, it diverges.</p>
<p>In general, if you want to prove $$\sum_{k=1}^{\infty} a_k$$ diverges and you are unable to find $b_k$ such that $a_k > b_k$ and $$\sum_{k=1}^{\infty} b_k$$ diverges, your next bet is to find $b_k$ such that $\displaystyle \sum_{n=f(k)}^{f(k+1)-1} a_n > b_k$, where $f(k)$ is some strictly monotone increasing function, such that $\sum b_k$ diverges.</p>
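<p>To see the slow divergence numerically, here is a small Python illustration of my own (not part of the original answer): the growth of the partial sums of $\sum 1/(n\log n)$ tracks $\log\log N$, which tends to infinity, just very slowly:</p>

```python
from math import log

def partial_sum(a, b):
    # sum of 1/(n log n) for n = a, ..., b
    return sum(1.0 / (n * log(n)) for n in range(a, b + 1))

# The increment of the partial sum from N = 10^3 to N = 10^5
# is close to the integral estimate log(log(1e5)) - log(log(1e3)).
increment = partial_sum(1001, 100_000)
estimate = log(log(100_000)) - log(log(1000))
assert abs(increment - estimate) < 0.01
```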
|
2,932,305 | <p>What are the intercepts of the planes <span class="math-container">$x = 0$</span> and <span class="math-container">$2y + 3z = 12$</span>? The word intercept is confusing me because I don't understand if I should say they intersect at point <span class="math-container">$(0,6,0)$</span> or the intercept is at <span class="math-container">$y=6$</span>. </p>
<p><a href="https://i.stack.imgur.com/b3Gue.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/b3Gue.png" alt="enter image description here"></a></p>
| Winter Soldier | 562,471 | <p>As @DavidPeterson has pointed out, the question alludes to the intersection of the two planes.</p>
<p>Now, you can use the fact that if the two planes intersect, then the intersection will be given by a straight line in space. Let the equation of the line be given by:
<span class="math-container">$$ \mathbf{r}(t) = \mathbf{r}_0+t\mathbf{v}$$</span>
where <span class="math-container">$\mathbf{r}_0$</span> is any point that lies on the line of intersection of the planes and <span class="math-container">$\mathbf{v}$</span> is the direction vector of the line.
Also, let <span class="math-container">$\mathbf{i}$</span>, <span class="math-container">$\mathbf{j}$</span> and <span class="math-container">$\mathbf{k}$</span> be the cartesian unit vectors.</p>
<p>To find <span class="math-container">$\mathbf{v}$</span>, use the fact that the direction of the line will be perpendicular to the normals of both planes.</p>
<p>For the plane <span class="math-container">$x=0$</span>, a normal vector (pointing along the positive <span class="math-container">$x$</span>-axis) is given by <span class="math-container">$\mathbf{n_1}=\mathbf{i}$</span>.</p>
<p>For the plane <span class="math-container">$2y + 3z = 12$</span>, a normal vector is given by <span class="math-container">$\mathbf{n_2}=2\mathbf{j}+3\mathbf{k}$</span>.</p>
<p>Then, the direction vector of the line of intersection is given by:
<span class="math-container">\begin{align}
\mathbf{v} &=
\begin{vmatrix}
\mathbf{i} & \mathbf{j} & \mathbf{k} \\
1 & 0 & 0 \\
0 & 2 & 3 \\
\end{vmatrix}
= -3\mathbf{j}+2\mathbf{k} \\
\end{align}</span> </p>
<p>Therefore, the direction of the line of intersection is given by:
<span class="math-container">\begin{align}
\mathbf{v} &=
\begin{pmatrix}
0 \\
-3 \\
2 \\
\end{pmatrix} \nonumber
\end{align}</span></p>
<p>Finally, a point on the line that satisfies both planes is <span class="math-container">$(x, y, z)=(0, 6, 0)$</span>, or <span class="math-container">$(x, y, z) = (0, 0, 4)$</span>.</p>
<p>Therefore, the equation of the line of intersection of the planes is:
<span class="math-container">$$
\begin{align}
\mathbf{r}(t) &=
\begin{pmatrix}
0 \\
6 \\
0 \\
\end{pmatrix}
+
t\begin{pmatrix}
0 \\
-3 \\
2 \\
\end{pmatrix}
\end{align} \nonumber
$$</span></p>
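<p>For readers who want to check the computation, a short pure-Python sketch (my addition, no libraries assumed) reproduces the cross product and verifies that the resulting line lies in both planes:</p>

```python
def cross(u, v):
    """Cross product of two 3-vectors given as tuples."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

n1 = (1.0, 0.0, 0.0)      # normal of the plane x = 0
n2 = (0.0, 2.0, 3.0)      # normal of the plane 2y + 3z = 12
v = cross(n1, n2)         # direction of the intersection line
assert v == (0.0, -3.0, 2.0)

r0 = (0.0, 6.0, 0.0)      # a point satisfying both plane equations
for t in (-2.0, 0.0, 1.5):
    x, y, z = (r0[i] + t * v[i] for i in range(3))
    assert x == 0.0                  # lies in the plane x = 0
    assert 2 * y + 3 * z == 12.0     # lies in the plane 2y + 3z = 12
```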
|
272,144 | <p>Consider a multi-valued function <span class="math-container">$f(z)=\sqrt{(z-a)(z+\bar a)}, \Im{a}>0,\Re{a}>0$</span>. To make the function single-valued, one needs to make a cut. Suppose <span class="math-container">$a=e^{i\theta}$</span>; my choice of the branch cut is <span class="math-container">$e^{it},t\in (\theta,\pi-\theta)$</span>. This uniquely defines my function <span class="math-container">$f(z)$</span>. Now I want to study the level curves of <span class="math-container">$f$</span>: how can I visualize them in Mathematica?</p>
<p><strong>Note:</strong> How does one choose a branch so that the cut is part of a level curve (say <span class="math-container">$\Im f=0$</span>)?</p>
<p><strong>Update:</strong> In fact, since we know the effect of passing the cut is changing sign. We thus can define the radical by the following Riemann-Hilbert Problem:
<span class="math-container">$$R_+(z)=-R_-(z),\quad z\in \Gamma,$$</span>
where <span class="math-container">$\Gamma$</span> is any branch cuts you want. Then up to some proper normalization, the solution is
<span class="math-container">$$\exp\{h(z)+C_\Gamma(\log(-1))(z)\},$$</span>
where <span class="math-container">$C_\Gamma$</span> is the Cauchy transform. If the branch cut is properly parametrized, the integral can be computed in Mathematica using <code>NIntegrate</code>.</p>
| Michael E2 | 4,999 | <p><em>Update</em></p>
<p>Here's a simplified numerical way, by integrating an ODE without branch points from the origin to <span class="math-container">$z$</span>, determining the sign by whether the path crosses the branch cut (<em>update 2:</em> originally, I used <code>WhenEvent</code> to keep track of the sign, and it might be needed when the branch cut is more complicated but still described by an equation; in this case I realized that the example branch cut is particularly simple to deal with; see edit history for the <code>WhenEvent</code> approach).</p>
<pre><code>With[{w = Exp[I Pi/3]},
ndSqrt[z0_?NumericQ] := Block[{z, t, u, sign},
NDSolveValue[
{D[u[t]^2 == (z - w) (z + Conjugate[w]) /. z -> t*z0, t],
u[0] == Sqrt[(0 - w) (0 + Conjugate[w])]},
If[Pi/3 < Arg[z0] < 2 Pi/3 && Abs@z0 > 1, -1, 1] u[1],
{t, 0, 1}]]
]
</code></pre>
<p>You can just plot <code>Re@ndSqrt[z]</code> and <code>Im@ndSqrt[z]</code>, since the argument to the square root is built into the ODE.</p>
<hr />
<p><em>Original answer</em></p>
<p>This is my interpretation of what is wanted, at least as far as drawing a picture goes. A PIA to construct the branch cut, since the default branch cut of <code>Sqrt[(z - w) (z + Conjugate[w])]</code> is the imaginary axis plus the line segment joining the branch points, perhaps its construction can be improved.</p>
<pre><code>Block[{w = Exp[I Pi/3](*,z=x+I y*)},
branchcut =
Piecewise[{{1, -1/2 < x < 0 && -1/2 < y < 1/2 &&
First@GroebnerBasis[(z - w) (z + Conjugate[w]) /.
z -> Exp[I t] // ReIm // # == {x, y} & //
ComplexExpand //
# /. {Cos[t] -> a, Sin[t] -> b} & // {#,
a^2 + b^2 == 1} &, {x, y}, {a, b}
] > 0}},
-1] /. {x -> Re[z], y -> Im[z]} // FullSimplify
]
Block[{w = Exp[I Pi/3](*,z=x+I y*)},
mySqrt[z_] = -Sqrt[z] (2 UnitStep@Im[z] - 1) branchcut
]
Block[{w = Exp[I Pi/3], z = x + I y},
GraphicsRow[{
Plot3D[
Im[mySqrt[(z - w) (z + Conjugate[w])] Piecewise[{{-1,
Im[z] > Im[w] || Im[z] <= Im[w] && Abs[z - 2 Im[w] I] < 1}},
1]],
{x, -3/2, 3/2}, {y, -1/2, 3/2},
AxesLabel -> {HoldForm@Re[z], HoldForm@Im[z]},
Exclusions -> {{x^2 + y^2 == 1, -1/2 < x < 1/2 && y > 0}},
ViewPoint -> {1.3, 2.4, 2.},
AxesEdge -> {{1, -1}, {-1, 1}, {-1, 1}},
MeshFunctions -> {#3 &},
MeshShading -> ColorData["SolarColors"] /@ Subdivide[0., 1., 15]
],
Plot3D[
Re[mySqrt[(z - w) (z + Conjugate[w])] Piecewise[{{-1,
Im[z] > Im[w] || Im[z] <= Im[w] && Abs[z - 2 Im[w] I] < 1}},
1]],
{x, -3/2, 3/2}, {y, -1/2, 3/2},
AxesLabel -> {HoldForm@Re[z], HoldForm@Im[z]},
Exclusions -> {{x^2 + y^2 == 1, -1/2 < x < 1/2 && y > 0}},
ViewPoint -> {1.3, 2.4, 2.},
AxesEdge -> {{1, -1}, {-1, 1}, {-1, 1}},
MeshFunctions -> {#3 &},
MeshShading -> ColorData["SolarColors"] /@ Subdivide[0., 1., 15]
]
}]
]
</code></pre>
<p><a href="https://i.stack.imgur.com/YFxSW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YFxSW.png" alt="enter image description here" /></a></p>
<p>With <code>ContourPlot</code>:</p>
<p><a href="https://i.stack.imgur.com/fiSae.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fiSae.png" alt="enter image description here" /></a></p>
|
80,056 | <p>I am toying with the idea of using slides (Beamer package) in a third year math course I will teach next semester. As this would be my first attempt at this, I would like to gather ideas about the possible pros and cons of doing this. </p>
<p>Obviously, slides make it possible to produce and show clear graphs/pictures (which will be particularly advantageous in the course I will teach) and doing all sorts of things with them; things that are difficult to carry out on a board. On the other hand, there seems to be something about writing slowly with a chalk on a board that makes it easier to follow and understand (I have to add the disclaimer that here I am just relying on only a few reports I have from students and colleagues).</p>
<p>I would very much like to hear about your experiences with using slides in the classroom, possible pitfalls that you may have noticed, and ways you have found to optimize this.</p>
<p>I am aware that the question may be a bit like the one with Japanese chalks, once discussed here, but I think as using slides in the classroom is becoming more and more common in many other fields, it may be helpful to probe the advantages and disadvantages of using them in math classrooms as well.</p>
| Thierry Zell | 8,212 | <p>I think you already touched on the two main points: pretty pictures are so much better than anything done on a chalkboard is the pro, but you cannot decently unwind any argument on slides. </p>
<p>I've used them intensively, I do it a lot less now. (Here's a con you did forget about: they take a <strong>lot</strong> of time to prepare, even when you're only revising them.) If the room lends itself well to it, the hybrid method is best: use the slides only when they beat the board. Rooms that have a screen in the corner, rather than in front of the board, are best for this.</p>
<p>Also, it seems that it's easier to fall asleep to slides than to a lecture, so be aware of that. Make sure that the room is never too dark (the quality of the screen material can be critical here too: good screens should be readable in full light). And switching your routine, never showing slides for too long, helps keeping the students awake.</p>
|
80,056 | <p>I am toying with the idea of using slides (Beamer package) in a third year math course I will teach next semester. As this would be my first attempt at this, I would like to gather ideas about the possible pros and cons of doing this. </p>
<p>Obviously, slides make it possible to produce and show clear graphs/pictures (which will be particularly advantageous in the course I will teach) and doing all sorts of things with them; things that are difficult to carry out on a board. On the other hand, there seems to be something about writing slowly with a chalk on a board that makes it easier to follow and understand (I have to add the disclaimer that here I am just relying on only a few reports I have from students and colleagues).</p>
<p>I would very much like to hear about your experiences with using slides in the classroom, possible pitfalls that you may have noticed, and ways you have found to optimize this.</p>
<p>I am aware that the question may be a bit like the one with Japanese chalks, once discussed here, but I think as using slides in the classroom is becoming more and more common in many other fields, it may be helpful to probe the advantages and disadvantages of using them in math classrooms as well.</p>
| BSteinhurst | 11,332 | <p>If you intend to post your slides online after class then you run the risk of students not even taking notes/digesting the material on their own (I've had this feeling myself) or feeling that they don't have to attend class. This is obviously a con but the other side is that the students then have a good outline of what you talked about in class with your emphasis included. </p>
<p>I second Thierry's comments. </p>
|
80,056 | <p>I am toying with the idea of using slides (Beamer package) in a third year math course I will teach next semester. As this would be my first attempt at this, I would like to gather ideas about the possible pros and cons of doing this. </p>
<p>Obviously, slides make it possible to produce and show clear graphs/pictures (which will be particularly advantageous in the course I will teach) and doing all sorts of things with them; things that are difficult to carry out on a board. On the other hand, there seems to be something about writing slowly with a chalk on a board that makes it easier to follow and understand (I have to add the disclaimer that here I am just relying on only a few reports I have from students and colleagues).</p>
<p>I would very much like to hear about your experiences with using slides in the classroom, possible pitfalls that you may have noticed, and ways you have found to optimize this.</p>
<p>I am aware that the question may be a bit like the one with Japanese chalks, once discussed here, but I think as using slides in the classroom is becoming more and more common in many other fields, it may be helpful to probe the advantages and disadvantages of using them in math classrooms as well.</p>
| André Henriques | 5,690 | <blockquote>
<p>I would never, never use slides for a course.</p>
</blockquote>
<p><i>That said:</i><br>
I do sometimes show my students pictures taken from the web.<br> For example, I recently showed <a href="http://upload.wikimedia.org/wikipedia/en/0/03/Compound_of_five_cubes.png">this picture</a> to the students in my group theory class in order to illustrate the isomorphism between $A_5$ and the group of symmetries of a dodecahedron.</p>
<p>Also, I sometimes prepare animations with <a href="http://www.geogebra.org/cms/">Geogebra</a> that I then show during class. Here's an <a href="http://www.staff.science.uu.nl/~henri105/Teaching/LogPowSer.html">example</a> (click and drag the blue node).
Of course, it's even better to create the graph in front of the students: Geogebra is good for that. My philosophy is that students should be shown things <b>being created</b>, not ready made. But I'll admit that this is not always possible...</p>
|
80,056 | <p>I am toying with the idea of using slides (Beamer package) in a third year math course I will teach next semester. As this would be my first attempt at this, I would like to gather ideas about the possible pros and cons of doing this. </p>
<p>Obviously, slides make it possible to produce and show clear graphs/pictures (which will be particularly advantageous in the course I will teach) and doing all sorts of things with them; things that are difficult to carry out on a board. On the other hand, there seems to be something about writing slowly with a chalk on a board that makes it easier to follow and understand (I have to add the disclaimer that here I am just relying on only a few reports I have from students and colleagues).</p>
<p>I would very much like to hear about your experiences with using slides in the classroom, possible pitfalls that you may have noticed, and ways you have found to optimize this.</p>
<p>I am aware that the question may be a bit like the one with Japanese chalks, once discussed here, but I think as using slides in the classroom is becoming more and more common in many other fields, it may be helpful to probe the advantages and disadvantages of using them in math classrooms as well.</p>
| matthias beck | 3,193 | <p>I use a hybrid version for some of my classes which take place in a room that allows this: I use computer slides (and animations, computations, etc.) <b>and</b> the board. I learned this from my colleague Serkan Hosten, and it works really well in some classes. E.g., I use slides for definitions and theorems (including the relevant ones from the previous lecture) but then work out examples and proofs on the board. This has the obvious advantage of spending time on exactly the items that need time and just the right pauses to get digested, but it also has nice side effects: e.g., the statement of the theorem will stay on the screen even if I'll have to clean the board.</p>
|
80,056 | <p>I am toying with the idea of using slides (Beamer package) in a third year math course I will teach next semester. As this would be my first attempt at this, I would like to gather ideas about the possible pros and cons of doing this. </p>
<p>Obviously, slides make it possible to produce and show clear graphs/pictures (which will be particularly advantageous in the course I will teach) and doing all sorts of things with them; things that are difficult to carry out on a board. On the other hand, there seems to be something about writing slowly with a chalk on a board that makes it easier to follow and understand (I have to add the disclaimer that here I am just relying on only a few reports I have from students and colleagues).</p>
<p>I would very much like to hear about your experiences with using slides in the classroom, possible pitfalls that you may have noticed, and ways you have found to optimize this.</p>
<p>I am aware that the question may be a bit like the one with Japanese chalks, once discussed here, but I think as using slides in the classroom is becoming more and more common in many other fields, it may be helpful to probe the advantages and disadvantages of using them in math classrooms as well.</p>
| Jim Conant | 9,417 | <p>Full disclosure: I stole the following idea from my wife. </p>
<p>For some courses, like calculus, I will create slides with beamer, leaving blank spots to fill in during class. I then print the slides out on paper and present them with the document camera during lecture. When I get to an example, I will work it out by writing on the paper during class, and have it projected in real time. This approach combines the advantages of blackboard talks where you work things out in realtime, with the advantages of beamer presentations where you can present nice graphics and also have an outline to limit getting distracted and wandering off on tangents. </p>
|
80,056 | <p>I am toying with the idea of using slides (Beamer package) in a third year math course I will teach next semester. As this would be my first attempt at this, I would like to gather ideas about the possible pros and cons of doing this. </p>
<p>Obviously, slides make it possible to produce and show clear graphs/pictures (which will be particularly advantageous in the course I will teach) and doing all sorts of things with them; things that are difficult to carry out on a board. On the other hand, there seems to be something about writing slowly with a chalk on a board that makes it easier to follow and understand (I have to add the disclaimer that here I am just relying on only a few reports I have from students and colleagues).</p>
<p>I would very much like to hear about your experiences with using slides in the classroom, possible pitfalls that you may have noticed, and ways you have found to optimize this.</p>
<p>I am aware that the question may be a bit like the one with Japanese chalks, once discussed here, but I think as using slides in the classroom is becoming more and more common in many other fields, it may be helpful to probe the advantages and disadvantages of using them in math classrooms as well.</p>
| Terry Tao | 766 | <p>Slides can, in principle, enhance a lecture, but there is one important difference between slides and blackboard that definitely needs to be kept in mind, and that is that slides are much more transient than a blackboard. Once one moves on from one slide to the next, the old slide is completely gone from view (unless one deliberately cycles back to it); and so if the student has not fully digested or at least copied down what was on that slide, he or she will have to somehow try to catch up in real time using the subsequent slides. Often, the net result is that the student will become more and more lost for the remainder of the lecture, or else is spending all of his or her time transcribing the slides instead of listening in real time.</p>
<p>In contrast, given enough blackboard space, the material from a previous blackboard tends to persist for several minutes after the point when one has moved onto another blackboard, which allows for a less frantic deployment of attention and concentration by the student. </p>
<p>If one distributes printed versions of the slides beforehand, then this difficulty is mostly eliminated. Though sometimes it takes a few lectures for the students to adapt to this. Once, in the first class in an undergraduate maths course, I said that I wanted my students to try to understand the lecture rather than simply copy it down, and to that end I distributed printed copies of the slides that I would be lecturing from. (The slides were in bullet point form, and I would expand upon them in speech and on the board.) I then found that for the first few lectures, the students, not knowing exactly what to do with their time now that they did not have to take as much notes, started highlighting all the bullet points on the printed notes. It was only after I threatened to distribute pre-highlighted lecture notes that they finally started listening to the lecture (and annotating the notes as necessary).</p>
|
466,576 | <p>Some conventional math notations seem arbitrary to English speakers but were mnemonic to non-English speakers who started them. To give a simple example, Z is the symbol for integers because of the German word <em>Zahl(en)</em> 'number(s)'. What are some more examples?</p>
| MJD | 25,554 | <p>The Klein $V$-group is the four-element group with generators $a$ and $b$ and $a^2 = b^2 = (ab)^2 = 1$. The $V$ is for <em>vierergruppe</em> = "four-group".</p>
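<p>(A small illustration of mine, using the standard realization of $V$ as the double transpositions in $S_4$, with permutations written as index tuples:)</p>

```python
def compose(p, q):
    # (p . q)(i) = p[q[i]] -- apply q first, then p
    return tuple(p[i] for i in q)

e  = (0, 1, 2, 3)        # identity
a  = (1, 0, 3, 2)        # (12)(34)
b  = (2, 3, 0, 1)        # (13)(24)
ab = compose(a, b)
assert ab == (3, 2, 1, 0)              # (14)(23)
for g in (a, b, ab):
    assert compose(g, g) == e          # a^2 = b^2 = (ab)^2 = 1
assert compose(a, b) == compose(b, a)  # the group is abelian
```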
|
466,576 | <p>Some conventional math notations seem arbitrary to English speakers but were mnemonic to non-English speakers who started them. To give a simple example, Z is the symbol for integers because of the German word <em>Zahl(en)</em> 'number(s)'. What are some more examples?</p>
| MJD | 25,554 | <p>The logical-or symbol $\lor$ is a stylized letter ‘V’, the first letter of the Latin word <em>vel</em>.</p>
<p>(The $\land$ symbol arose later, derived by analogy from $\lor$.)</p>
|
466,576 | <p>Some conventional math notations seem arbitrary to English speakers but were mnemonic to non-English speakers who started them. To give a simple example, Z is the symbol for integers because of the German word <em>Zahl(en)</em> 'number(s)'. What are some more examples?</p>
| ratchet freak | 17,442 | <p>Eigen (as in the eigen vectors of a matrix) is Dutch/German for "own".</p>
|
466,576 | <p>Some conventional math notations seem arbitrary to English speakers but were mnemonic to non-English speakers who started them. To give a simple example, Z is the symbol for integers because of the German word <em>Zahl(en)</em> 'number(s)'. What are some more examples?</p>
| Marko Riedel | 44,883 | <p>In Polya's enumeration theorem the letter $Z(G)$ which is used for the cycle index of the permutation group $G$ originates with the German word <em>Zyklenzeiger</em>, I think.</p>
|
751,670 | <p>I've just started to learn about Cohen-Macaulay rings. I want to show that the following rings are Cohen-Macaulay:</p>
<p>$k[X,Y,Z]/(XY-Z)$ and $k[X,Y,Z,W]/(XY-ZW)$.</p>
<p>Also I am looking for a ring which is not Cohen-Macaulay.</p>
<p>Can anyone help me?</p>
| Community | -1 | <p>See the Bruns–Herzog book <em>Cohen–Macaulay Rings</em>:<br>
<img src="https://i.stack.imgur.com/7f7lw.png" alt="enter image description here"></p>
<p><img src="https://i.stack.imgur.com/UE9vc.png" alt="enter image description here"></p>
<p>Note that the polynomial rings $k[X,Y,Z]$ and $k[X,Y,Z,W]$ are Cohen–Macaulay integral domains, so each nonzero nonunit (in particular $XY-Z$ and $XY-ZW$) is a regular element, and the quotients by these regular elements are again Cohen–Macaulay.</p>
|
272,114 | <p>Yesterday, my uncle asked me this question:</p>
<blockquote>
<p>Prove that $x = 2$ is the unique solution to $3^x + 4^x = 5^x$ where $x \in \mathbb{R}$.</p>
</blockquote>
<p>How can we do this? Note that this is not a diophantine equation since $x \in \mathbb{R}$ if you are thinking about Fermat's Last Theorem.</p>
| Community | -1 | <p>For all $x_j>x_i$ and $0<a<1$, $a^{x_i}>a^{x_j}$.</p>
<p>Hence
\begin{align}
\left(\frac{3}{5}\right)^{x} + \left(\frac{4}{5}\right)^{x} - 1 < \left(\frac{3}{5}\right)^{2} + \left(\frac{4}{5}\right)^{2} - 1 = 0
\end{align}</p>
<p>for all $x>2$. Hence, there is no solution for $x>2$.</p>
<p>Similarly
\begin{align}
\left(\frac{3}{5}\right)^{x} + \left(\frac{4}{5}\right)^{x} - 1 > \left(\frac{3}{5}\right)^{2} + \left(\frac{4}{5}\right)^{2} - 1 =0
\end{align}</p>
<p>for all $x<2$. </p>
|
632,043 | <p>tl;dr: why is raising by $(p-1)/2$ not always equal to $1$ in $\mathbb{Z}^*_p$?</p>
<p>I was studying the proof that generators are not quadratic residues, and I stumbled on one step of the proof that I thought might make a good question to help other people in the future when raising powers modulo $p$.</p>
<p>Let $p$ be prime and as usual, $\mathbb{Z}^*_p$ be the integers mod $p$ with inverses.</p>
<p>Consider raising the generator $g$ to the power of $(p-1)/2$:</p>
<p>$$g^{(p-1)/2}$$</p>
<p>then, I was looking for a somewhat rigorous argument (or very good intuition) on why that was <strong>not always</strong> equal to $1$ by fermat's little theorem (when I say always, I mean, even when you do NOT assume the generator has a quadratic residue).</p>
<p>i.e. why is this logic flawed:</p>
<p>$$ g^{(p-1)/2} = (g^{(p-1)})^{\frac{1}{2}} = (1)^{\frac{1}{2}} \ (mod \ p)$$</p>
<p>to solve the last step find an x such that $1 = x \ (mod \ p)$. $x$ is obviously $1$, which completes the wrong proof that raising anything to $(p-1)/2$ is always equal to $1$. This obviously should not be the case, specially for a generator since the only power that should yield $1$ for a generator is $p-1$, otherwise, it can't generate one of the elements in the cyclic set. </p>
<p>The reason I thought this was illegal was that you can only raise to powers of integers mod $p$, and $1/2$ is obviously not valid (since it's not an integer). Also, if I recall correctly, not every number in a set has a k-th root, right? And $1/2$ actually just means square rooting...right? Also, maybe it was a notational confusion where "to the power of $1/2$" actually just means a function/algorithm that "finds" the inverse such that $z = x^2 \ (mod \ p)$. So is the illegal step claiming that you can separate the powers because at that step you would be raising to the power of an element not allowed in the set?</p>
| Bill Dubuque | 242 | <p>You've attempted to apply a method of computing $k$-th roots outside its domain of applicability.</p>
<p>It <em>is</em> true that if $\,(k,p\!-\!1) = 1\,$ then $\,{\rm mod}\ p\!-\!1\!:\ \,k^{-1}\! = \color{#c00}{1/k \equiv i}\,$ exists, so $\ g^{\Large{j/k}} \equiv (g^{\Large j})^{\large{ \color{#c00}{1/k}}} \equiv g^{\Large j\color{#c00}i}.$</p>
<p>But this does not apply in your case $\,k =2\ $ since $\,2\mid p\!-\!1\,$ so $\,(k,p\!-\!1) = (2,p\!-\!1) = 2\ne 1$. </p>
<p>Essentially you are attempting to invert $\,2\,$ in the ring $\,\Bbb Z/(p\!-\!1),\,$ where $\,2\,$ is a <em>zero-divisor</em>, since $\,2\mid p\!-\!1.\, $ This is a sort of ring-theoretic generalization of the sin of dividing by zero, since the only ring with an invertible zero-divisor is the trivial ring $\{0\}.$</p>
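<p>A concrete instance (my own example) with $p=11$ and the primitive root $g=2$: Python's built-in modular <code>pow</code> shows that $g^{(p-1)/2}\equiv -1$, not $1$, while a square such as $g^2$ does land on $1$:</p>

```python
p = 11
g = 2  # a primitive root mod 11

# g really generates all of Z*_11
assert {pow(g, k, p) for k in range(1, p)} == set(range(1, p))

# A generator is a quadratic non-residue,
# so g^((p-1)/2) is -1 mod p, not 1
assert pow(g, (p - 1) // 2, p) == p - 1

# whereas any square maps to 1 (Euler's criterion)
assert pow(g * g % p, (p - 1) // 2, p) == 1
```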
|
179,583 | <p>I have a fairly large array, a billion or so by 500,000 array. I need to calculate the singular value decomposition of this array. The problem is that my computer RAM will not be able to handle the whole matrix at once. I need an incremental approach of calculating the SVD. This would mean that I could take one or a couple or a couple hundred/thousand (not too much though) rows of data at one time, do what I need to do with those numbers, and then throw them away so that I can address memory toward getting the rest of the data.</p>
<p>People have posted a couple of papers on similar issues such as <a href="http://www.bradblock.com/Incremental_singular_value_decomposition_of_uncertain_data_with_missing_values.pdf" rel="nofollow">http://www.bradblock.com/Incremental_singular_value_decomposition_of_uncertain_data_with_missing_values.pdf</a> and <a href="http://www.jofcis.com/publishedpapers/2012_8_8_3207_3214.pdf" rel="nofollow">http://www.jofcis.com/publishedpapers/2012_8_8_3207_3214.pdf</a>. </p>
<p>I am wondering if anyone has done any previous research or has any suggestions on how I should go about approaching this? I really do need the FASTEST approach, without losing too much accuracy in the data.</p>
| Squirtle | 29,507 | <p>So is there any other property that the matrix has? For example, is it sparse or is it symmetric, real/complex, etc... As there are optimized algorithms for various situations, SVD may not be the best option -- it would be helpful to know what you are trying to get from the SVD.</p>
|
17,143 | <p>My next project I'd like to start working on is Domain Coloring. I am aware of the beautiful discussion at:</p>
<p><a href="https://mathematica.stackexchange.com/questions/7275/how-can-i-generate-this-domain-coloring-plot">How can I generate this "domain coloring" plot?</a></p>
<p>And I am studying it. However, a lot of the articles on domain coloring refer back to Hans Lundmark's page at:</p>
<p><a href="http://www.mai.liu.se/~halun/complex/domain_coloring-unicode.html" rel="nofollow noreferrer">http://www.mai.liu.se/~halun/complex/domain_coloring-unicode.html</a></p>
<p>So, I would like to begin my work by using Mathematica to draw these three images based on Hans' notes. I'd appreciate it if anyone could provide some code that produces these images, as I could use it to start my study of the rest of Hans' page.</p>
<p><img src="https://i.stack.imgur.com/FuqMb.jpg" alt="arg"></p>
<p><img src="https://i.stack.imgur.com/9S0I6.jpg" alt="abs"></p>
<p><img src="https://i.stack.imgur.com/8cqhp.png" alt="blend"></p>
<p>A very small adjustment. Still learning.</p>
<pre><code>g[{f_, cf_}] :=
DensityPlot[f, {x, -1, 1}, {y, -1, 1}, PlotPoints -> 51,
ColorFunction -> cf, Frame -> False];
g /@ {{Arg[-(x + I y)], "SolarColors"},
{Mod[Log[2, Abs[x + I y]], 1], GrayLevel}}
ImageMultiply @@ %
</code></pre>
<p><img src="https://i.stack.imgur.com/115CH.png" alt="scheme-blend-1"></p>
<p>Not sure where to put my current question, so I'll update here. Just came back to visit and discovered some wonderful answers at the bottom of this list. I do understand the opening code:</p>
<pre><code>f[z_] := (z + 2)^2*(z - 1 - 2 I)*(z + I)
paint[z_] :=
Module[{x = Re[z], y = Im[z]},
color = Blend[{Black, Red, Orange, Yellow},
Rescale[ArcTan[-x, -y], {-Pi, Pi}]];
shade = Mod[Log[2, Abs[x + I y]], 1];
Darker[color, shade/4]]
</code></pre>
<p>But then I encounter difficulty with the following code:</p>
<pre><code>ParametricPlot[{x, y}, {x, -3, 3}, {y, -3, 3},
ColorFunctionScaling -> False,
ColorFunction -> Function[{x, y}, paint[f[x + y I]]], Frame -> False,
Axes -> False, MaxRecursion -> 1, PlotPoints -> 50, Mesh -> 400,
PlotRangePadding -> 0, MeshStyle -> None, ImageSize -> 300]
</code></pre>
<p>I'm good with the first few lines. Looks like ParametricPlot is plotting points, where x and y both range from -3 to 3 (correct me if I am wrong). I also understand the ColorFunctionScaling and the ColorFunction lines. I understand Axes, PlotRangePadding, MeshStyle, and ImageSize. Where I am having trouble is with what PlotPoints->50 and Mesh->400 are doing. </p>
<p>First of all, my image size is 300. What does PlotPoints->50 mean? Does that mean it will sample an array of 50x50 points out of the 300x300 image and scale the results to fit in the domain [-3,3]x[-3,3]? My next question is: do those points then get colored? And if so, how are the remaining points in the image colored? For example, I tried:</p>
<pre><code>Table[ParametricPlot[{x, y}, {x, -3, 3}, {y, -3, 3},
ColorFunctionScaling -> False,
ColorFunction -> Function[{x, y}, paint[f[x + y I]]],
PlotPoints -> n, MeshStyle -> None], {n, 10, 50, 10}]
</code></pre>
<p>And the images got a little sharper as PlotPoints->n increased.</p>
<p>Here's another question. What does Mesh->400 do in this situation. For example, I tried lowering the mesh number:</p>
<pre><code>ParametricPlot[{x, y}, {x, -3, 3}, {y, -3, 3},
ColorFunctionScaling -> False,
ColorFunction -> Function[{x, y}, paint[f[x + y I]]], Frame -> False,
Axes -> False, MaxRecursion -> 1, PlotPoints -> 50, Mesh -> 100,
PlotRangePadding -> 0, MeshStyle -> None, ImageSize -> 300]
</code></pre>
<p>And was completely surprised that it had an effect on the image, particularly when MeshStyle->None. Here's the image I get:</p>
<p><img src="https://i.stack.imgur.com/4jqEj.png" alt="today"></p>
<p>Why does setting Mesh->100 decrease the sharpness of the image?</p>
<p>One final question I have regards adding the mesh lines. Simon suggested: "For the mesh you could do something like <code>Mesh->{Range[-5,5],Range[-5,5]}, MeshStyle->Opacity[0.5], MeshFunctions->{(Re@f[#1+I #2]&),(Im@f[#1+I #2]&)}</code>", and cormullion added them to produce a beautiful result, but I tried this:</p>
<pre><code>ParametricPlot[{x, y}, {x, -3, 3}, {y, -3, 3},
ColorFunctionScaling -> False,
ColorFunction -> Function[{x, y}, paint[f[x + y I]]], Frame -> False,
Axes -> False, MaxRecursion -> 1, PlotPoints -> 50,
Mesh -> {Range[-5, 5], Range[-5, 5]}, PlotRangePadding -> 0,
MeshStyle -> Opacity[0.5],
MeshFunctions -> {(Re@f[#1 + I #2] &), (Im@f[#1 + I #2] &)},
ImageSize -> 300]
</code></pre>
<p>And got this resulting image.</p>
<p><img src="https://i.stack.imgur.com/zamAO.png" alt="today2"></p>
<p>So I am clearly missing something. Maybe someone could post the code that gives cormullion's last image?</p>
<p>OK, I just purchased and installed the Presentations package. Tried this:</p>
<pre><code>With[{f = Function[z, (z + 2)^2 (z - 1 - 2 I) (z + I)],
zmin = -2 - 2 I, zmax = 2 + 2 I,
colorFunction = Function[arg, HotColor[Rescale[arg, {-Pi, Pi}]]],
imgSize = 400},
Draw2D[{ComplexDensityDraw[Arg[f[z]], {z, zmin, zmax},
ColorFunction -> colorFunction, ColorFunctionScaling -> False,
Mesh -> 50, MeshFunctions -> {Function[{x, y}, Abs[f[x + I y]]]},
PlotPoints -> {50, 50}]}, Frame -> True, FrameLabel -> {Re, Im},
PlotLabel -> Row[{"Arg coloring and Abs mesh of ", f[z]}],
RotateLabel -> False, BaseStyle -> 12, ImageSize -> imgSize]]
</code></pre>
<p>But got this colorless image.</p>
<p><img src="https://i.stack.imgur.com/xSlX8.png" alt="today3"></p>
<p>Any thoughts on how to fix this?</p>
| murray | 148 | <p>There are many approaches to domain coloring. The approach in @cormullion's answer is to use <code>Re</code> and <code>Im</code> as the mesh functions. Another way is to color points just by $\text{Arg}(f(z))$ and then superimpose contours for $\text{Abs}(f(z))$.</p>
<p>While that may readily be done in "pure <em>Mathematica</em>" by pulling out real and imaginary parts of complex numbers, it seems a bit silly to do so given that complex numbers are already built into <em>Mathematica</em> (and, in a sense, <em>Mathematica</em> prefers working with complex numbers to real numbers). Here, David Park's <em>Presentations</em> add-on (<a href="http://home.comcast.net/~djmpark/DrawGraphicsPage.html" rel="nofollow noreferrer">http://home.comcast.net/~djmpark/DrawGraphicsPage.html</a>) may be applied. For example: </p>
<pre><code>With[{f = Function[z, (z + 2)^2 (z - 1 - 2 I) (z + I)],
zmin = -2 - 2 I, zmax = 2 + 2 I,
colorFunction = Function[arg, HotColor[Rescale[arg, {-Pi, Pi}]]],
imgSize = 400},
Draw2D[{ComplexDensityDraw[Arg[f[z]], {z, zmin, zmax},
ColorFunction -> colorFunction, ColorFunctionScaling -> False,
Mesh -> 50, MeshFunctions -> {Function[{x, y}, Abs[f[x + I y]]]},
PlotPoints -> {50, 50}]
},
Frame -> True, FrameLabel -> {Re, Im},
PlotLabel -> Row[{"Arg coloring and Abs mesh of ", f[z]}], RotateLabel -> False,
BaseStyle -> 12, ImageSize -> imgSize
]
]
</code></pre>
<p><img src="https://i.stack.imgur.com/ML9Fx.png" alt="Mathematica graphics"></p>
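<p>For readers without the Presentations package, the same Arg-for-hue / stepped-Abs-shading scheme can be sketched in plain Python using only the standard library (a hypothetical helper mirroring the thread's <code>paint</code> function, not part of any package):</p>

```python
import cmath
import colorsys
import math

def f(z):
    # Same test polynomial used throughout this thread.
    return (z + 2) ** 2 * (z - 1 - 2j) * (z + 1j)

def paint(z):
    """Map a complex value to an (r, g, b) triple with components in [0, 1].

    Hue encodes Arg(z); brightness is stepped by Mod[log2 |z|, 1],
    mirroring the Mathematica `paint` function above.
    """
    if z == 0:
        return (0.0, 0.0, 0.0)
    hue = (cmath.phase(z) + math.pi) / (2 * math.pi)  # rescale Arg to (0, 1]
    shade = math.log2(abs(z)) % 1.0                   # fractional part, in [0, 1)
    value = 1.0 - shade / 4                           # darken, like Darker[color, shade/4]
    return colorsys.hsv_to_rgb(hue, 1.0, value)

# Sample one point; a full image would loop this over a pixel grid.
rgb = paint(f(0.5 + 0.5j))
```

The resulting RGB components are all in [0, 1], so they drop straight into any raster-writing routine.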
|
2,250,733 | <p>I have a general idea to solve the problem, which is to pair up 2s and 5s in the numerator and denominator, cancel those that are common, and the remaining pairs of 2s and 5s are the number of 0s left. Since 130 choose 70 is so large, how do I do this?</p>
| Angina Seng | 436,618 | <p><a href="https://en.wikipedia.org/wiki/Kummer%27s_theorem" rel="nofollow noreferrer">Kummer's theorem</a>
states that the power of a prime $p$ dividing a binomial coefficient
$\binom nk$ is the number of carries needed when adding $k$ to $n-k$ in
base $p$ notation.</p>
<p>Here $k=70$ and $n-k=60$. In base $2$, $k=(1000110)_2$ and $n-k=(111100)_2$. So $\binom{130}{70}$ is exactly divisible by $2^5$.
In base $5$, $k=(240)_5$ and $n-k=(220)_5$.
So $\binom{130}{70}$ is exactly divisible by $5^2$.
Therefore, $\binom{130}{70}$ ends in two zeros.</p>
|
3,287,710 | <p>I want to calculate the length of a clothoid segment from the following available information.</p>
<ol>
<li>initial radius of clothoid segment </li>
<li>final radius of clothoid segment</li>
<li>angle (I am not really sure which angle this is, and it's not
documented anywhere)</li>
</ol>
<p>As a test case: I need to find length of a clothoid(left) that starts at <span class="math-container">$(1000, 0)$</span> and ends at approximately <span class="math-container">$(3911.5, 943.3)$</span>. The arguments are: <span class="math-container">$initialRadius=10000$</span>, <span class="math-container">$endRadius=2500$</span>, <span class="math-container">$angle=45(deg)$</span>.</p>
<p>Previously I have worked on a similar problem where initial radius, final radius, and length are given. So I want to get the length so I can solve it the same way.</p>
<p>I am working on a map conversion problem. The format does not specify the details of this angle parameter.</p>
<p>Please help. I have been stuck at this for 2 days now.</p>
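<p>One plausible interpretation, sketched here under an explicit assumption since the format leaves the parameter undocumented: if the angle is the total change in tangent heading over the segment, then, because a clothoid's curvature varies linearly with arc length, angle = L * (1/R_start + 1/R_end)/2, so L = 2 * angle / (1/R_start + 1/R_end). For the test case this gives L = 1000*pi, and numerically integrating the heading (with the initial heading assumed to be 0, along +x) lands within about a unit of the stated endpoint:</p>

```python
import math

def clothoid_length(r_start: float, r_end: float, angle: float) -> float:
    """Length of a clothoid segment, ASSUMING `angle` is the total change
    in tangent heading (radians). Curvature is linear in arc length, so
    angle = L * (k_start + k_end) / 2."""
    k_start, k_end = 1.0 / r_start, 1.0 / r_end
    return 2.0 * angle / (k_start + k_end)

def clothoid_endpoint(x0, y0, heading0, r_start, r_end, length, steps=200_000):
    """Integrate the clothoid numerically (midpoint rule) to its endpoint."""
    k_start, k_end = 1.0 / r_start, 1.0 / r_end
    x, y = x0, y0
    ds = length / steps
    for i in range(steps):
        s = (i + 0.5) * ds
        # Heading: integral of the linearly varying curvature up to s.
        tau = heading0 + k_start * s + (k_end - k_start) * s * s / (2.0 * length)
        x += math.cos(tau) * ds
        y += math.sin(tau) * ds
    return x, y

# Test case from the question; initial heading is an assumption.
L = clothoid_length(10_000.0, 2_500.0, math.radians(45.0))
end = clothoid_endpoint(1_000.0, 0.0, 0.0, 10_000.0, 2_500.0, L)
```

If the endpoint check holds for your data, the remaining reconstruction can reuse the earlier (initial radius, final radius, length) workflow.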
| Empy2 | 81,790 | <p>I would introduce <span class="math-container">$z=-x-y$</span> so it is a shape on the plane <span class="math-container">$x+y+z=0$</span>. Then there are <span class="math-container">$(r,\theta)$</span> with
<span class="math-container">$$x=r\cos\theta\\y=r\cos(\theta+2\pi/3)\\
z=r\cos(\theta-2\pi/3)$$</span>
Then the equation is
<span class="math-container">$$(x^2+y^2+z^2)^{3/2}-cxyz=d\\
r^3(1-p\cos3\theta)=q$$</span>
You can work out <span class="math-container">$p$</span> and <span class="math-container">$q$</span> in terms of <span class="math-container">$\alpha$</span>.<br>
Then <span class="math-container">$x=r\cos\theta$</span> so
<span class="math-container">$$x^3=\frac{q\cos^3\theta}{1-p\cos3\theta}$$</span>
The shape is convex if <span class="math-container">$x$</span> has a local maximum at <span class="math-container">$\theta=0$</span>. Find the value of <span class="math-container">$p$</span> for which the second derivative <span class="math-container">$d^2(1/x^3)/d\theta^2=0$</span>.</p>
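<p>The step from the Cartesian equation to the polar form rests on two identities for cosines spaced 120 degrees apart: their squares sum to 3/2 and their product is (1/4)cos 3θ, which is exactly what collapses the left side into the r³(1 − p cos 3θ) shape. A quick numeric spot check in Python (verification only, not part of the argument):</p>

```python
import math

def xyz(r, theta):
    """The answer's parametrization: three cosines spaced 120 degrees apart."""
    return (r * math.cos(theta),
            r * math.cos(theta + 2 * math.pi / 3),
            r * math.cos(theta - 2 * math.pi / 3))

r, theta = 1.7, 0.83  # arbitrary sample values
x, y, z = xyz(r, theta)

s = x + y + z              # should vanish: the point lies on x + y + z = 0
q = x * x + y * y + z * z  # should equal (3/2) r^2
p = x * y * z              # should equal (r^3 / 4) cos(3 theta)
```

With these identities, (x²+y²+z²)^{3/2} − cxyz = (3/2)^{3/2} r³ − (c/4) r³ cos 3θ, which is the r³(1 − p cos 3θ) = q form after dividing through.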
|
1,413,150 | <p>So for a periodic function <span class="math-container">$f$</span> (of period <span class="math-container">$1$</span>, say), I know the Riemann-Lebesgue Lemma which states that if <span class="math-container">$f$</span> is <span class="math-container">$L^1$</span> then the Fourier coefficients <span class="math-container">$F(n)$</span> go to zero as <span class="math-container">$n$</span> goes to infinity. And as far as I know, the converse of this is not true. My question, then, is this:</p>
<blockquote>
<p>Under what conditions on the Fourier coefficients <span class="math-container">$F(n)$</span> is the function <span class="math-container">$f$</span>, defined pointwise as the Fourier series with <span class="math-container">$F(n)$</span> as coefficients,</p>
<ol>
<li>integrable,</li>
<li>continuous, and</li>
<li>differentiable?</li>
</ol>
</blockquote>
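<p>For orientation, here are some standard sufficient conditions (textbook facts offered as a sketch — none of 1–3 admits a simple necessary-and-sufficient condition on the coefficients alone):</p>

```latex
% Absolute summability forces uniform convergence of the series,
% hence continuity (and a continuous periodic f is integrable):
\sum_{n\in\mathbb{Z}} \lvert F(n)\rvert < \infty
  \;\Longrightarrow\; f \in C(\mathbb{T}).
% One extra power of n justifies termwise differentiation:
\sum_{n\in\mathbb{Z}} \lvert n\rvert\,\lvert F(n)\rvert < \infty
  \;\Longrightarrow\; f \in C^{1}(\mathbb{T}).
```

By the same pattern, summability of $|n|^k |F(n)|$ gives a $C^k$ function. By contrast, membership in $L^1$ has no clean characterization in terms of the coefficients, which is one way to see why the converse of the Riemann–Lebesgue lemma fails.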
| Arin Chaudhuri | 404 | <p>Let $a_n = (1 + 1/n)^n.$ </p>
<p>We want to show $a_{n+1} - a_{n} \geq \dfrac{1}{n(n+1)}$ for large $n$. </p>
<p>$\dfrac{a_{n+1}}{a_n} = \left(1 + \dfrac{1}{n}\right) \left(1 - \dfrac{1}{(n+1)^2}\right)^{n+1}.$</p>
<p>The RHS can be expanded as</p>
<p>$\left(1 + \dfrac{1}{n}\right) \left(1 - \dfrac{1}{(n+1)^2}\right)^{n+1} = \dfrac{n+1}{n} \times \left( \underbrace{1 - \dfrac{1}{n+1}} + \dfrac{1}{2!(n+1)^2}\underbrace{\left(1 - \dfrac{1}{n+1}\right)} - \dfrac{1}{3!(n+1)^3}\underbrace{\left(1 - \dfrac{1}{n+1}\right)} \left(1 - \dfrac{2}{n+1} \right) + \dots (-1)^{n+1} \dfrac{1}{(n+1)!(n+1)^{n+1}}\underbrace{\left(1 - \dfrac{1}{n+1}\right)} \left(1 - \dfrac{2}{n+1} \right) \dots\left(1 - \dfrac{n}{n+1}\right)\right).$</p>
<p>Since $ \dfrac{n+1}{n} \times (1-\dfrac{1}{n+1}) = 1$, we have
$\dfrac{a_{n+1}}{a_n} = 1 + \dfrac{1}{2!(n+1)^2} - \dfrac{1}{3!(n+1)^3} \left(1 - \dfrac{2}{n+1} \right) + \dots$</p>
<p>So,</p>
<p>$|\dfrac{a_{n+1}}{a_n} - 1 - \dfrac{1}{2(n+1)^2}| \leq \dfrac{1}{3!(n+1)^3} + \dfrac{1}{4!(n+1)^4} + \dots \leq \dfrac{1}{6(n+1)^2n}.$</p>
<p>This implies $$ (n+1)^2 \left( \dfrac{a_{n+1}}{a_n} - 1 \right) \to 1/2$$ so upon multiplying the above by $na_n/(n+1)$ $$ n(n+1)(a_{n+1} - a_n) \to e/2 > 1.$$
Hence, $a_{n+1} - a_n \geq 1/(n(n+1))$ for all large $n$.</p>
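<p>The limit n(n+1)(a_{n+1} - a_n) → e/2 ≈ 1.359 can be sanity-checked with exact rational arithmetic from the standard library (a numerical sketch, separate from the proof):</p>

```python
from fractions import Fraction
import math

def a(n: int) -> Fraction:
    """a_n = (1 + 1/n)^n, computed exactly as a rational number."""
    return Fraction(n + 1, n) ** n

n = 1000
gap = a(n + 1) - a(n)
scaled = float(n * (n + 1) * gap)  # should be close to e/2 ~ 1.359

# The target inequality a_{n+1} - a_n >= 1/(n(n+1)) holds once the
# scaled gap exceeds 1; check a range of moderate n exactly:
ok = all(m * (m + 1) * (a(m + 1) - a(m)) > 1 for m in range(10, 30))
```

Exact <code>Fraction</code> arithmetic avoids the catastrophic cancellation that plagues a floating-point computation of the tiny gap a_{n+1} - a_n.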
|