qid: int64 (1 to 4.65M)
question: large_string (lengths 27 to 36.3k)
author: large_string (lengths 3 to 36)
author_id: int64 (-1 to 1.16M)
answer: large_string (lengths 18 to 63k)
1,255,629
<p>Show that </p> <p>$$\sin\left(\frac\pi3(x-2)\right)$$ </p> <p>is equal to </p> <p>$$\cos\left(\frac\pi3(x-7/2)\right)$$</p> <p>I know that $\cos(x + \frac\pi2) = −\sin(x)$, but I'm not sure how I can apply it to this question.</p>
5xum
112,884
<p><strong>Hint</strong>:</p> <p>Try to use addition formulas. Let me get you started:</p> <p><span class="math-container">$$\sin\left(\frac\pi3(x-2)\right) = \sin\left(\frac{\pi x}{3} - \frac{2\pi}{3}\right) = \sin\frac{\pi x}{3}\cos\frac{2\pi}3 - \cos\frac{\pi x}3\sin\frac{2\pi}{3}=\\=-\frac12 \sin\frac{\pi x}{3} - \frac{\sqrt3}{2}\cos\frac{\pi x}{3}$$</span></p> <p>Now, do a similar thing with the second expression.</p>
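For completeness (this step is not part of the original hint), the second expansion runs the same way, using $\cos\frac{7\pi}{6}=-\frac{\sqrt3}{2}$ and $\sin\frac{7\pi}{6}=-\frac12$:

```latex
\cos\left(\frac\pi3\left(x-\tfrac72\right)\right)
  = \cos\left(\frac{\pi x}{3}-\frac{7\pi}{6}\right)
  = \cos\frac{\pi x}{3}\cos\frac{7\pi}{6}+\sin\frac{\pi x}{3}\sin\frac{7\pi}{6}
  = -\frac12\sin\frac{\pi x}{3}-\frac{\sqrt3}{2}\cos\frac{\pi x}{3},
```

which agrees term by term with the expansion of the sine expression, proving the identity.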
2,927,374
<p>In a right triangle, relative to a <span class="math-container">$45^\circ$</span> angle, if we have <span class="math-container">$$\text{adjacent} = 1$$</span> <span class="math-container">$$\text{opposite} = 1$$</span></p> <p>then <span class="math-container">$$\text{hypotenuse}=\sqrt{o^2+a^2}=\sqrt{1^2+1^2}=\sqrt{2}$$</span> so that <span class="math-container">$$\sin 45^\circ=\frac{1}{\sqrt{2}}$$</span></p> <p>But, when <span class="math-container">$$\text{hypotenuse} = 1$$</span> <span class="math-container">$$\text{opposite} = \text{adjacent}$$</span> then (writing <span class="math-container">$o$</span> for <span class="math-container">$\text{opposite}$</span> and <span class="math-container">$a$</span> for <span class="math-container">$\text{adjacent}$</span>) <span class="math-container">$$\begin{align} o &amp;= \sqrt{h^2-a^2} = \sqrt{1^2-o^2} \\[4pt] \implies \quad o^2 &amp;= h^2-o^2\\ &amp;=1-o^2 \\[4pt] \implies\quad 2o^2&amp;=1 \\ \implies\quad o&amp;=\sqrt{\frac{1}{2}} \end{align}$$</span> so that <span class="math-container">$$\sin 45^\circ =\frac{\sqrt{\frac{1}{2}}}{1}$$</span></p> <p>What went wrong?</p>
NKRsolutions
504,365
<p>Nothing went wrong. You have the square root of $1/2$. Splitting the square root of the fraction into the square root of the numerator over the square root of the denominator gives $\sqrt{\frac12}=\frac{\sqrt{1}}{\sqrt{2}}=\frac{1}{\sqrt{2}}$, so the second result is identical to your first result!</p>
1,030,327
<p>I'm trying to do an exercise from the book Algebraic Curves by Fulton (Exercise $\:6.26^{*}$).</p> <p>It says:</p> <p>Let $f:X\rightarrow Y$ be a morphism of affine varieties. Show that $f(X)$ is dense in $Y$ if and only if the homomorphism $\tilde{f}:\Gamma(Y)\rightarrow\Gamma(X)$ is one-to-one.</p>
Alex Heinis
178,368
<p>a) Suppose that <span class="math-container">$f(X)$</span> is dense in <span class="math-container">$Y$</span> and that <span class="math-container">$\tilde{f}(\phi)=0$</span>. Then <span class="math-container">$\phi \circ f=0\Rightarrow \phi=0\ {\rm on}\ f(X) \Rightarrow \phi=0$</span>, where the last step holds because <span class="math-container">$\phi$</span> is continuous w.r.t. the Zariski topology, so its zero set is closed and contains the dense set <span class="math-container">$f(X)$</span>, hence is all of <span class="math-container">$Y$</span>.</p> <p>b) Suppose <span class="math-container">$X\subset A^m, Y\subset A^n$</span> and <span class="math-container">$f(X)$</span> not dense in <span class="math-container">$Y$</span>. By definition there exists an open set <span class="math-container">$O\subset A^n$</span> with <span class="math-container">$O\cap Y\not=\emptyset$</span> and <span class="math-container">$O\cap f(X)=O\cap Y\cap f(X)=\emptyset$</span>. Since <span class="math-container">$\emptyset \not=O \not= A^n$</span> we can write <span class="math-container">$O=\{\phi_1\not=0\}\cup \cdots \cup \{\phi_s\not=0\}$</span> for a finite collection <span class="math-container">$\phi_i$</span> of polynomials. Hence a polynomial <span class="math-container">$\phi$</span> exists with <span class="math-container">$\{\phi\not=0\}\cap Y\not=\emptyset$</span> and <span class="math-container">$\{\phi\not=0\}\cap f(X)=\emptyset$</span>. Then <span class="math-container">$\phi\in \Gamma(Y), \phi\not=0$</span> and <span class="math-container">$\tilde{f}(\phi)=0$</span>. Hence <span class="math-container">$\tilde{f}$</span> is not injective.</p>
1,313,056
<p>Got this matrix: </p> <p>\begin{bmatrix} 1 &amp; 2 \\ -2 &amp; 5 \end{bmatrix}</p> <p>I should determine whether the matrix is diagonalizable or not. I found only one eigenvalue, $\lambda = 3$. My eigenvector is then \begin{bmatrix} 1 \\ 1 \end{bmatrix} This matrix is not diagonalizable (from my teacher's notes), but I don't know why. Can someone explain this? </p>
Miz
205,000
<p>Basically you need to find the set $E$ of all the eigenvalues of the matrix. (In this case $E =\{3\}$.) Next, for each eigenvalue in $E$ you find the eigenvectors. Let $W$ be the set of all eigenvectors of the matrix. If $W$ spans the whole space the matrix acts on ($\mathbb{R}^n$ for an $n \times n$ real matrix), we say that the matrix is diagonalizable. In this case that space is $\mathbb{R}^2$, but your lone eigenvector does not span $\mathbb{R}^2$; therefore the matrix is not diagonalizable. </p>
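As a sanity check (a sketch of mine, not part of the answer above), the $2\times 2$ case can be decided directly from the characteristic polynomial: with a repeated eigenvalue, the matrix is diagonalizable only if it is already $\lambda I$.

```python
def is_diagonalizable_2x2(a, b, c, d):
    """Decide diagonalizability (over C) of the real 2x2 matrix [[a, b], [c, d]].

    Distinct eigenvalues always give two independent eigenvectors; a repeated
    eigenvalue leaves the eigenvectors spanning only a line unless the matrix
    is already the scalar matrix lambda*I.
    """
    disc = (a + d) ** 2 - 4 * (a * d - b * c)  # discriminant of the char. poly
    if disc != 0:
        return True                             # two distinct eigenvalues
    lam = (a + d) / 2                           # the repeated eigenvalue
    return b == 0 and c == 0 and a == lam and d == lam

print(is_diagonalizable_2x2(1, 2, -2, 5))  # the matrix from the question → False
```

Here the discriminant is $(1+5)^2-4\cdot 9=0$, so $\lambda=3$ is repeated, but the matrix is not $3I$: not diagonalizable, matching the teacher's notes.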
446,959
<p>I am searching for two simple/efficient/generic algorithms to generate a uniform distribution of random points:</p> <ul> <li>in the volume of an n-dimensional hypersphere</li> <li>on the surface of an n-dimensional hypersphere</li> </ul> <p>knowing the dimension $n$, the center of the hypersphere $\vec{x}$ and its radius $r$.</p> <p>How can I do that?</p>
David K
139,123
<p>You have an answer for the surface of the sphere, so I'll just address the question about the interior of the <span class="math-container">$n$</span>-ball.</p> <p>For sufficiently small <span class="math-container">$n,$</span> the rejection method is a reasonably efficient way to get a uniformly distributed random point in the unit ball of <span class="math-container">$n$</span> dimensions.</p> <p>For larger <span class="math-container">$n,$</span> a more efficient way to get a uniform distribution within an <span class="math-container">$n$</span>-dimensional ball is to first find a point uniformly distributed on the surface of an <span class="math-container">$n$</span>-dimensional sphere, and then take a point in the same direction from the origin but at a random distance. Given that the ball has unit radius, a concentric smaller ball of radius <span class="math-container">$x$</span> has volume <span class="math-container">$x^n$</span> times the volume of the unit ball, so if <span class="math-container">$X$</span> is the distance from the center to a randomly chosen point inside the unit ball, the probability distribution function of <span class="math-container">$X$</span> should be <span class="math-container">$$ F_X(x) = \mathbb P(X \leq x) = \begin{cases} 0 &amp; x &lt; 0, \\ x^n &amp; 0 \leq x \leq 1, \\ 1 &amp; x &gt; 1. \end{cases} $$</span></p> <p>The probability density function of <span class="math-container">$X$</span> then is the derivative of the distribution (almost everywhere), <span class="math-container">$$ f_X(x) = \begin{cases} n x^{n - 1} &amp; 0 \leq x \leq 1, \\ 0 &amp; \text{otherwise}. 
\end{cases} $$</span></p> <p>In order to produce a variable <span class="math-container">$X$</span> with such a distribution, given a variable <span class="math-container">$U$</span> that is uniformly distributed on <span class="math-container">$[0,1],$</span> note that for <span class="math-container">$0 \leq x \leq 1,$</span> <span class="math-container">$$ \mathbb P(X \leq x) = x^n = \mathbb P(U \leq x^n) = \mathbb P(U^{1/n} \leq x), $$</span> so you can set <span class="math-container">$X = U^{1/n}.$</span></p> <p>For a ball of radius <span class="math-container">$r$</span> around an arbitrary center, just take the unit ball, scale it to radius <span class="math-container">$r$</span> and translate it so its center is at the desired location.</p>
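Putting both recipes together, here is a minimal Python sketch (my own code; the Gaussian-normalization step for the surface is the standard method alluded to above, and the radius uses the $U^{1/n}$ law just derived; the function names are mine):

```python
import math
import random

def random_on_sphere(n, center=None, radius=1.0, rng=random):
    """Uniform random point on the surface of an n-dimensional sphere.

    Normalizing a vector of n independent standard Gaussians gives a
    uniformly distributed direction (the Gaussian is rotation invariant).
    """
    g = [rng.gauss(0.0, 1.0) for _ in range(n)]
    norm = math.sqrt(sum(v * v for v in g))
    c = center if center is not None else [0.0] * n
    return [ci + radius * gi / norm for ci, gi in zip(c, g)]

def random_in_ball(n, center=None, radius=1.0, rng=random):
    """Uniform random point inside an n-dimensional ball.

    Uniform direction, then distance X = U^(1/n), so that
    P(X <= x) = x^n -- exactly the distribution derived above.
    """
    d = random_on_sphere(n, rng=rng)      # unit direction
    dist = rng.random() ** (1.0 / n)      # radius with CDF x^n
    c = center if center is not None else [0.0] * n
    return [ci + radius * dist * di for ci, di in zip(c, d)]
```

Rejection sampling, by contrast, accepts a fraction equal to the ball-to-cube volume ratio, which decays faster than exponentially in $n$, which is why this method wins for large $n$.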
4,004,157
<p>A firm wants to know how many of its employees have drug problems. Realizing the sensitivity of this issue, the personnel director decides to use a randomized response survey.</p> <p>Each employee is asked to flip a fair coin,</p> <p>If head (H), answer the question “Do you carpool to work?”</p> <p>If tail (T), answer the question “Have you used illegal drugs within the last month?”</p> <p>Out of 8000 responses, 1420 answered “YES” (assuming honesty)</p> <p>The company knows that 35% of its employees carpool to work. What is the probability that an employee (chosen at random) used illegal drugs within the last month?</p> <p>I think the probability that I am trying to figure out is <span class="math-container">$\mathbb{P}(yes|T)$</span>. From the problem, I was able to figure out that <span class="math-container">$\mathbb{P}(yes)=1420/8000$</span>, <span class="math-container">$\mathbb{P}(T)=50\%$</span> (because it's a fair coin) and that <span class="math-container">$\mathbb{P}(yes|H)=35\%$</span>. But for Bayes' theorem, I need to find <span class="math-container">$\mathbb{P}(T|yes)$</span>, and that is where I am stuck.</p> <p>I realized that I did not need Bayes' theorem as that would have made it more difficult.</p>
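For reference, the law of total probability settles it directly (the arithmetic below is my addition): $\mathbb{P}(yes)=\mathbb{P}(H)\,\mathbb{P}(yes|H)+\mathbb{P}(T)\,\mathbb{P}(yes|T)$, solved for $\mathbb{P}(yes|T)$.

```python
# Law of total probability: P(yes) = P(H)*P(yes|H) + P(T)*P(yes|T)
p_yes = 1420 / 8000        # observed fraction of "YES" answers
p_heads = 0.5              # fair coin
p_carpool = 0.35           # known carpool rate, P(yes|H)
p_drugs = (p_yes - p_heads * p_carpool) / (1 - p_heads)
print(p_drugs)             # ≈ 0.005, i.e. 0.5%
```

So $(0.1775 - 0.5\cdot 0.35)/0.5 = 0.005$: about half a percent of employees used illegal drugs in the last month.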
Hagen von Eitzen
39,174
<p>As the multiplicative group of a finite field <span class="math-container">$F$</span> is cyclic, it follows that <span class="math-container">$b=c^2a$</span> for some <span class="math-container">$c\in F^\times$</span> whenever <span class="math-container">$a,b$</span> are both non-squares. Then wlog. <span class="math-container">$\sqrt b=c\sqrt a$</span> and so <span class="math-container">$\sqrt a+\sqrt b$</span> is a root of <span class="math-container">$X^2-(1+ c)^2a$</span> whereas <span class="math-container">$\sqrt a-\sqrt b$</span> is a root of <span class="math-container">$X^2-(1- c)^2a$</span>. These minimal<span class="math-container">$^1$</span> polynomials of <span class="math-container">$\sqrt a+\sqrt b$</span> and <span class="math-container">$\sqrt a-\sqrt b$</span> are different except in characteristic <span class="math-container">$2$</span>.</p> <hr /> <p><span class="math-container">$^1$</span> Okay, one of them is not minimal when <span class="math-container">$c=\pm1$</span>, i.e., when <span class="math-container">$a=b$</span>.</p>
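The first claim, that any two non-squares in $F^\times$ differ by a square factor (the squares form an index-$2$ subgroup of the cyclic group $F^\times$), can be spot-checked for small odd primes; this is a quick sketch of mine, not part of the argument:

```python
from itertools import product

# In F_p (p an odd prime), the nonzero squares form an index-2 subgroup of
# F_p^x, so the ratio of any two non-squares is a square: b = c^2 * a.
for p in (7, 11, 13):
    squares = {x * x % p for x in range(1, p)}
    nonsquares = set(range(1, p)) - squares
    for a, b in product(nonsquares, repeat=2):
        ratio = b * pow(a, -1, p) % p   # b / a computed in F_p
        assert ratio in squares
```

(The three-argument `pow` with exponent `-1` computes the modular inverse; it needs Python 3.8+.)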
2,680,297
<p>I was given the following problem in my complex analysis examination. </p> <p><strong>Evaluate the following integral by using contour integration:</strong></p> <p>$$ \int_0^{\pi}\dfrac{\sin(2\theta)}{5-3\cos(\theta)}d\theta $$</p> <p>The answer to which is: $$ \dfrac{2}{9}(\log_e(1024)-6). $$ </p> <p>I tried to get this answer in different ways but I just could not find a way to do it. </p> <p>A thing to notice is that there is no $\pi$ term in the final answer, which means that whatever contour gives the answer easily must be such that the value obtained by integrating along it is proportional to $\dfrac{1}{i\pi }$. This is because the $2\pi i$ term multiplying the residue should be canceled out, as the actual answer has no $\pi$ dependence. The $\pi$ in the denominator hints to me that this is going to be complicated, since in introductory complex analysis courses $\pi$ never appears in the denominator unless the problem is intentionally set up that way.</p> <p>Another thing to notice is that we have to use a contour that covers the angle from $0$ to $\pi$. Usually the limits given in such problems are $0$ to $2\pi$, which is easy to handle just by replacing the $\sin$ and $\cos$ terms as follows: $$ \sin(\theta) \rightarrow \dfrac{1}{2i}(z-\dfrac{1}{z}) $$ $$ \cos(\theta) \rightarrow \dfrac{1}{2}(z+\dfrac{1}{z}) $$ $$ d\theta \rightarrow \dfrac{dz}{iz} $$ and then carrying out the contour integral along the contour $|z|=1$. The limits of the integral could be converted to $0$ to $2\pi$ by a proper substitution and the contour integral then carried out by the above procedure, but that would not help, as it brings in fractional-order poles (a $\cos\dfrac{\theta}{2}$ term appears) which residue theory can't deal with.</p> <p><strong>So, what I need is a hint on how to do it.</strong> </p>
learningmaths
1,021,431
<p>I just saw this question and found it very interesting! :)</p> <p>You can use a contour integral, but apply the identity <span class="math-container">$\sin 2t = 2\sin t\cos t$</span> first.</p> <p>So let's try to calculate <span class="math-container">$$\int_0^{2\pi}\frac{\sin 2t}{5-3\cos t}dt = \int_0^{2\pi}\frac{2\sin t\cos t}{5-3\cos t}dt .$$</span></p> <p>Put <span class="math-container">$z=e^{it}$</span>. Then <span class="math-container">$dt= \frac{dz}{iz}$</span> and we know that <span class="math-container">$$\sin t = \frac{1}{2i}\left(z- z^{-1}\right),\quad\cos t = \frac{1}{2}\left(z+ z^{-1}\right).$$</span> So for a contour <span class="math-container">$C$</span> given by <span class="math-container">$\left\{ z=e^{it} , 0\le t \le 2\pi \right\}$</span> there holds <span class="math-container">$$I = \int_C \frac{2\cdot\frac{1}{2i}\left(z- z^{-1}\right) \frac{1}{2}\left(z+ z^{-1}\right) \frac{dz}{iz} }{ 5-3\left(\frac{1}{2}\left(z+ z^{-1}\right)\right)} = \int_C \frac{(z^4-1)dz}{(z-3)z^2(3z-1)}.$$</span> To evaluate this integral, note that the integrand is analytic except at the zeros of the denominator, i.e. <span class="math-container">$0, 1/3$</span> and <span class="math-container">$3$</span>. Since only <span class="math-container">$0$</span> and <span class="math-container">$1/3$</span> are inside <span class="math-container">$C$</span>, we just need to apply the <a href="https://en.wikipedia.org/wiki/Residue_theorem" rel="nofollow noreferrer">Residue Theorem</a> to obtain the desired value. 
Put <span class="math-container">$$g(z) = \frac{z^4-1}{(z-3)z^2(3z-1)}.$$</span> Then <span class="math-container">\begin{eqnarray*} I &amp;=&amp; 2\pi i\big[ \text{res}_{z=0}\,g(z) +\text{res}_{z=1/3}\,g(z) \big]\\ &amp;=&amp; 2\pi i\left[ -\frac{10}{9} + \frac{10}{9}\right]\\ &amp;=&amp; 0\\ \end{eqnarray*}</span> This can be confirmed using the antiderivative provided in <a href="https://www.wolframalpha.com/input?i=int_0%5E%7B2pi%7D%28sin%282x%29%29%2F%285-3cos%28x%29%29dx" rel="nofollow noreferrer">Wolfram Alpha</a>.</p> <p>Clearly, you cannot use this result to calculate <span class="math-container">$$\int_0^{\pi}\frac{\sin 2t}{5-3\cos t}dt$$</span> since <span class="math-container">$$\int_0^{\pi}\frac{\sin 2t}{5-3\cos t}dt \neq \frac{1}{2}\int_0^{2\pi}\frac{\sin 2t}{5-3\cos t}dt.$$</span> This is in addition to the answer provided by K V Dave.</p>
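Both values can be cross-checked numerically; this check is my addition (a plain midpoint rule, no contour machinery involved):

```python
import math

def midpoint(f, a, b, n=20000):
    """Composite midpoint rule for the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

f = lambda t: math.sin(2 * t) / (5 - 3 * math.cos(t))

I_full = midpoint(f, 0.0, 2 * math.pi)       # integral over [0, 2*pi]
I_half = midpoint(f, 0.0, math.pi)           # integral over [0, pi]
exact_half = (2 / 9) * (math.log(1024) - 6)  # the book's claimed answer

print(abs(I_full))              # ≈ 0: the contour result above
print(abs(I_half - exact_half)) # ≈ 0: the [0, pi] value matches the book
```

The full-period integral vanishes by the antisymmetry $f(2\pi-t)=-f(t)$, while the half-period value agrees with $\frac29(\log_e 1024 - 6)\approx 0.207$, so the two really do differ.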
1,237,618
<p>I really need your help.</p> <p>I need to prove that the Euclidean norm is strictly convex. I know that a function is strictly convex if $f''(x)&gt;0$. Can I use this for the Euclidean norm, and how? $||x||''=\frac{||x||^2-x^2}{||x||^3}$</p> <p>Thank you!</p>
Michael Grant
52,878
<p>Here's a second-derivative proof for <span class="math-container">$\mathbb{R}^n$</span>. The gradient and Hessian do not exist at the origin, but everywhere else they are given by <span class="math-container">$$\nabla f(x) = \|x\|^{-1} x, \quad \nabla^2 f(x) = \|x\|^{-1} I - \|x\|^{-3} xx^T$$</span> Strict convexity requires that the Hessian be positive definite; that is, <span class="math-container">$v^T(\nabla^2 f(x))v&gt;0$</span> for all <span class="math-container">$v\neq 0$</span>. But suppose <span class="math-container">$v=\alpha x$</span>, where <span class="math-container">$\alpha$</span> is a scalar. Then <span class="math-container">$$\begin{aligned} \nabla^2 f(x) \cdot v &amp;= \alpha \left( \|x\|^{-1} I - \|x\|^{-3} xx^T \right) x \\ &amp;= \alpha \|x\|^{-1} x - \alpha \|x\|^{-3} (xx^T) x = \alpha \|x\|^{-1} x - \alpha \|x\|^{-3} x (x^T x) \\ &amp;= \alpha\|x\|^{-1} x - \alpha\|x\|^{-3} \|x\|^2 x = \alpha\|x\|^{-1} x - \alpha\|x\|^{-1}x = 0. \end{aligned}$$</span> Thus <span class="math-container">$v^T (\nabla^2 f(x) ) v = 0$</span>, and the Hessian is not positive definite as required. This establishes that the norm is not strictly convex anywhere other than the origin. It's not strictly convex at the origin, either, but one must appeal to the fundamental definition for that instead, as Mercy offered. (That answer should be the one accepted, not this one, because it is complete.)</p>
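A finite-difference sketch (my addition, plain Python in $\mathbb{R}^2$) makes the degenerate direction visible: the second directional derivative of the norm vanishes along the ray through $x$ and is positive across it.

```python
import math

def f(x):
    """Euclidean norm on R^2."""
    return math.hypot(x[0], x[1])

def directional_curvature(x, v, h=1e-4):
    """Central finite difference for d^2/dt^2 f(x + t*v) at t = 0,
    i.e. v^T (Hessian f)(x) v wherever the Hessian exists."""
    fp = f((x[0] + h * v[0], x[1] + h * v[1]))
    fm = f((x[0] - h * v[0], x[1] - h * v[1]))
    return (fp - 2.0 * f(x) + fm) / (h * h)

x = (3.0, 4.0)                               # any nonzero point, ||x|| = 5
along = directional_curvature(x, x)          # v parallel to x
perp = directional_curvature(x, (-4.0, 3.0)) # v perpendicular to x

print(along)  # ≈ 0: the norm is linear along rays through the origin
print(perp)   # ≈ 5 = ||v||^2 / ||x||: positive curvature across the ray
```

The zero along the ray is exactly the $\nabla^2 f(x)\cdot v = 0$ computation above with $v=\alpha x$.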
267,121
<h2>UPDATE on FINAL RESULT</h2> <p>Thanks to @SquareOne's effort I generated higher-resolution videos with smoothing transitions that can be seen here:</p> <ul> <li><p><a href="https://www.linkedin.com/feed/update/urn:li:activity:6926902980323512320/" rel="noreferrer">https://www.linkedin.com/feed/update/urn:li:activity:6926902980323512320/</a></p> </li> <li><p><a href="https://twitter.com/superflow/status/1521191832012705792" rel="noreferrer">https://twitter.com/superflow/status/1521191832012705792</a></p> </li> </ul> <p>I might post my version of @SquareOne's code with some bug corrections later. I am grateful to this community and @SquareOne for outstanding support.</p> <h2>INTRO &amp; <em>BOUNTY</em> TARGET</h2> <p>Dear friends, as you know there is currently an ongoing war in Ukraine: <strong><a href="https://war.ukraine.ua" rel="noreferrer">https://war.ukraine.ua</a></strong></p> <blockquote> <p><em><strong>I need your help with some coding in image/video processing, which is very simple to formulate but not obvious in execution. Smooth, exactly-timed transitions between video frames and perfect alignment of video frames are the key challenges here. THE BOUNTY TARGET IS EXPLAINED AT THE END OF THE POST.</strong></em></p> </blockquote> <h2>DATA Description</h2> <p><em>The United States' <a href="https://www.understandingwar.org/" rel="noreferrer">Institute for the Study of War</a> (ISW, &quot;a non-partisan, non-profit, public policy research organization&quot;) performs daily research and publishes daily maps of the battlefield. Their work is public and gives many references to the data sources they use. 
For instance...</em></p> <blockquote> <p><strong>The whole-Ukraine overview map from the 1st day of the invasion on FEB 24, 2022 (<a href="https://www.understandingwar.org/backgrounder/ukraine-conflict-update-7" rel="noreferrer">source</a>):</strong></p> </blockquote> <p><a href="https://i.stack.imgur.com/bmDVR.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/bmDVR.jpg" alt="enter image description here" /></a></p> <blockquote> <p><strong>A recent whole-Ukraine overview map from APR 19, 2022 (<a href="https://www.understandingwar.org/backgrounder/russian-offensive-campaign-assessment-april-19" rel="noreferrer">source</a>):</strong></p> </blockquote> <p><a href="https://i.stack.imgur.com/YYdnd.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/YYdnd.jpg" alt="enter image description here" /></a></p> <h2>DATA Source</h2> <p>These maps are published almost daily, with all publications gathered here: <a href="https://www.understandingwar.org/publications" rel="noreferrer">https://www.understandingwar.org/publications</a></p> <h2><em>BOUNTY</em> TARGET</h2> <p><em><strong>THE BOUNTY WILL BE AWARDED TO THE CODE GENERATING THE BEST .MP4 VIDEO of a SEQUENCE of MAPS.</strong></em></p> <p>Basic part for the <strong>BOUNTY</strong>:</p> <ul> <li><p><strong>Programmatic data access</strong>. While URLs of daily articles and images follow some pattern, it is not always regular. <em>How do we write a piece of code that accesses the whole-map images programmatically?</em> We do not want to do this manually. Approach 1: look at the daily image URLs (<a href="https://www.understandingwar.org/sites/default/files/DraftUkraineCoTApril19%2C2022.png" rel="noreferrer">example</a>), but they are still not regular. Approach 2: look at the daily article URLs and get the 1st image from each article (<a href="https://www.understandingwar.org/backgrounder/russian-offensive-campaign-assessment-april-19" rel="noreferrer">example</a>), but they are still not regular. Maybe there are other approaches. 
<strong>START DATE: FEB 24. END DATE: CURRENT DAY</strong>.</p> </li> <li><p><strong>Each frame must have a date stamp.</strong> For example: FEB 24, 2022, FEB 25, 2022, etc.</p> </li> <li><p><strong>Image alignment of the Ukraine border - the GREATEST challenge</strong>. All these map images are slightly different. The Ukraine country border should NOT jump from frame to frame.</p> </li> <li><p><strong>Duration of a frame and smoothness of transition.</strong> Each map image (key frame) should be held on screen for 1 second. Each of the 3 transitional blended frames should last 0.15 seconds. Here is a toy example of how to achieve it. Imagine you have just 3 key frames.</p> </li> </ul> <p><a href="https://i.stack.imgur.com/3VgCO.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/3VgCO.jpg" alt="enter image description here" /></a></p> <p>Build transitions via interpolated blended frames:</p> <pre><code>frames=Values[TimeSeriesResample[TimeSeries[imglist,{0}],1/4]] </code></pre> <p><a href="https://i.stack.imgur.com/oL4tF.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/oL4tF.jpg" alt="enter image description here" /></a></p> <p>Define non-uniform timings as</p> <pre><code>In[68]:= timings=Flatten[Riffle[Table[1,3],{Table[.15,3]}]] </code></pre> <blockquote> <p>Out[68]= {1, 0.15, 0.15, 0.15, 1, 0.15, 0.15, 0.15, 1}</p> </blockquote> <p>Create the video as</p> <pre><code>SlideShowVideo[frames -&gt; timings] </code></pre> <p><a href="https://i.stack.imgur.com/N3XNu.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/N3XNu.jpg" alt="enter image description here" /></a></p> <p>Export to .MP4 via</p> <p><a href="http://reference.wolfram.com/language/ref/format/MP4.html" rel="noreferrer">http://reference.wolfram.com/language/ref/format/MP4.html</a></p> <p><strong>Thank you very much for considering this!!!</strong> Collecting data from independent sources, and displaying it in a comprehensive animation, can help to inform society in ways 
that numbers and unorganized static images cannot. This is the sort of thing we as a community can do best in these difficult times.</p>
user5601
5,601
<p>Here's a first pass at naively grabbing and animating the main map frames:</p> <pre><code>april = Table[
  dayImgs = Import[
    &quot;https://www.understandingwar.org/backgrounder/russian-offensive-campaign-assessment-april-&quot;&lt;&gt;ToString[n],
    &quot;Images&quot;];
  img = dayImgs[[5]]; (* brittle *)
  ymax = Max@ImageValuePositions[
    Image@ImageData[img][[All, {10, 10}]],
    {0.9686274509803922`, 0.9686274509803922`, 0.9686274509803922`, 1.`}][[All, 2]];
  ymin = Max@ImageValuePositions[
    Image@ImageData[img][[All, {10, 10}]],
    {0., 0., 0., 1.`}][[All, 2]];
  ImageTrim[img, {{2550, ymax}, {0, ymin}}],
  {n, 1, 19}]

ListAnimate[april]
</code></pre> <p><a href="https://i.stack.imgur.com/b74Sa.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/b74Sa.gif" alt="enter image description here" /></a></p> <p>Or you can try:</p> <pre><code>SlideShowVideo[Thread[april -&gt; 1]]
</code></pre> <p><strong>Update:</strong></p> <p>Another approach could be to try registering with <code>GeoGraphics</code> and using color analysis to get &quot;the locations&quot;, here's a sketch:</p> <pre><code>dayImgs = Import[
  &quot;https://www.understandingwar.org/backgrounder/russian-offensive-campaign-assessment-april-18&quot;,
  &quot;Images&quot;];
img = dayImgs[[5]]; (* same as above *)
ymax = Max@ImageValuePositions[
  Image@ImageData[img][[All, {10, 10}]],
  {0.9686274509803922`, 0.9686274509803922`, 0.9686274509803922`, 1.`}][[All, 2]];
ymin = Max@ImageValuePositions[
  Image@ImageData[img][[All, {10, 10}]],
  {0., 0., 0., 1.`}][[All, 2]];
i = ImageTrim[img, {{2550, ymax}, {0, ymin}}]

(* Extract the legend colors and overlay graphics *)
{rc, uc} = {RGBColor[0.9568629185428273, 0.6901959859893049, 0.6862744789494377, 1.],
  RGBColor[0.6666667137194934, 0.9137255348660686, 0.9882353221073724, 1.]};
{ru, ua} = Image[
    ColorReplace[
      Image[DeleteSmallComponents[Binarize[Blur[ColorDetect[i, ColorsNear[#, 0]], 6], .1]]],
      {White -&gt; #, Black -&gt; Transparent}],
    ImageSize -&gt; {613.7, 650.6}] &amp; /@ {rc, uc};
g = GeoGraphics[&quot;World&quot;,
  GeoRange -&gt; {{38.5, 53.89}, {24.69, 39.5}},
  ImageSize -&gt; {613.7, 650.6}, (* guesstimates *)
  GeoProjection -&gt; &quot;Mercator&quot;,
  GeoBackground -&gt; &quot;CountryBorders&quot;];
Magnify[Row[{Overlay[{g, ru, ua}], i}], .5]
</code></pre> <p><a href="https://i.stack.imgur.com/BTb8C.png" rel="noreferrer"><img src="https://i.stack.imgur.com/BTb8C.png" alt="enter image description here" /></a></p>
2,456,961
<p>I thought of this myself and could not begin to think of a solution without becoming confused: There are two people, A and B. A says, "B is telling the truth." B says, "A is lying." Are both lying or telling the truth? Or is one lying and the other telling the truth?</p>
zwim
399,263
<p>If you have $n(x,y)$ continuous and $d(x,y)$ continuous, and $d$ does not vanish, then the fraction $f(x,y)=\dfrac{n(x,y)}{d(x,y)}$ is continuous.</p> <p>I have shown it here for a simplified version $\dfrac{f(x)}{g(y)}$ but the principle is exactly the same.</p> <p><a href="https://math.stackexchange.com/questions/2450838/showing-that-fx-y-dfracxy-is-continuous-when-x0-and-y0/2450879#2450879">Showing that $f(x,y)=\dfrac{x}{y}$ is continuous when $x&gt;0$ and $y&gt;0$.</a></p> <p>So the possible discontinuity points (if any) are those which <strong>make the denominator vanish</strong>.</p> <hr> <p>In your case $d(x,y)=x^2+y^2-1=0$ represents a circle $\mathcal C$ of radius $1$.</p> <p>But this is only a necessary condition; it is possible that not all the points of $\mathcal C$ (or even none of them) are discontinuity points.</p> <p>Note that for $(x_0,y_0)\in\mathcal C$, if $n(x_0,y_0)\neq 0$ then $f(x_0,y_0)$ is an infinite quantity; we say that $(x_0,y_0)$ is a pole of $f$.</p> <p>But what happens if $n(x_0,y_0)=0$ too? 
In this case, we have an indeterminate form $\dfrac 00$ and must study further whether the ratio has a limit.</p> <p>In the case where the limit exists: $\lim\limits_{(x,y)\to(x_0,y_0)}f(x,y)=\lim\limits_{(x,y)\to(x_0,y_0)}\dfrac{n(x,y)}{d(x,y)}=C\neq\infty$, we can extend $f$ by continuity at the point $(x_0,y_0)$.</p> <p>If the limit is infinite or does not exist (i.e. $f$ has multiple limits depending on the path chosen) then $f$ is not continuous at $(x_0,y_0)$.</p> <p>Eventually, the discontinuity points are only a subset of $\mathcal C$.</p> <hr> <p>So let's get back to studying first whether there are points of $\mathcal C$ for which $n(x,y)=0$.</p> <p>A point on $\mathcal C$ can be represented by $(\cos(t),\sin(t))$ and $n(x,y)=0\iff 2x=5y\iff 2\cos(t)=5\sin(t)\iff \tan(t)=\dfrac 25$</p> <p>So if we call $\alpha=\tan^{-1}(\frac 25)$ then there are two points on the circle, $(\pm\cos(\alpha),\pm\sin(\alpha))$, where $f$ might be extendable by continuity.</p> <p>Now let's set $\begin{cases} (x,y)=(r\cos(t),r\sin(t))\to(x_0,y_0)\\r=\pm(1+u)\quad u\to 0\\t=\alpha+v\quad v\to 0\end{cases}$</p> <p>Substituting into $f$ and expanding, we arrive at $f(x,y)\sim k\ \dfrac{\sin(v)}u$ where $k$ is a constant depending on $\alpha$.</p> <p>This has no limit, in general, when $(u,v)\to(0,0)$, thus $f$ is also not extendable by continuity at the two candidate points.</p> <p>Finally, $f$ is not defined and cannot be extended by continuity at any point of $\mathcal C$, and since it is defined and continuous elsewhere, we say that $f$ is discontinuous on the whole of $\mathcal C$.</p>
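A numerical sketch of mine (it assumes, from the $2x=5y$ condition above, that $n(x,y)=2x-5y$ and $d(x,y)=x^2+y^2-1$) shows the two path limits disagreeing at the candidate point:

```python
import math

# Assumed from the answer's notation: n(x, y) = 2x - 5y, d(x, y) = x^2 + y^2 - 1
f = lambda x, y: (2 * x - 5 * y) / (x ** 2 + y ** 2 - 1)

alpha = math.atan2(2, 5)   # candidate point (cos(alpha), sin(alpha)) on C
u = 1e-6

# Path A: radial approach, angle fixed at alpha -> numerator stays 0
r = 1 + u
fA = f(r * math.cos(alpha), r * math.sin(alpha))

# Path B: let r = 1 + u and t = alpha + u shrink together
t = alpha + u
fB = f((1 + u) * math.cos(t), (1 + u) * math.sin(t))

print(fA)  # ≈ 0
print(fB)  # ≈ -sqrt(29)/2 ≈ -2.69: a different limit, so no continuous extension
```

Two different paths, two different limits: exactly the $k\,\sin(v)/u$ behaviour derived above, so no extension by continuity is possible.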
2,338,508
<p>Given this matrix that stretches to infinity to the right and up: $$ \begin{matrix} ...&amp;...&amp;...\\ \frac{1}{4}&amp; \frac{1}{8}&amp; \frac{1}{16}&amp;... \\ \frac{1}{2} &amp; \frac{1}{4}&amp; \frac{1}{8}&amp;... \\ 1 &amp; \frac{1}{2}&amp; \frac{1}{4}&amp;... \\ \end{matrix} $$</p> <p>I was trying to find the total sum of this matrix. I know the answer should be $4$. I came up with a different solution and a different answer. What is wrong with that solution? Here it is:</p> <p>The first row sums to $2$. The second row to $2-1$. The third row to $2-1-\frac{1}{2}$ etc... So we get:</p> <p>$$ \begin{matrix} 2&amp;-1&amp;-\frac{1}{2}&amp;-\frac{1}{4}&amp;-\frac{1}{8}&amp;-\frac{1}{16}\\ 2&amp;-1&amp;-\frac{1}{2}&amp;-\frac{1}{4}&amp;-\frac{1}{8}\\ 2&amp;-1&amp;-\frac{1}{2}&amp;-\frac{1}{4}\\ 2&amp;-1&amp;-\frac{1}{2}\\ 2&amp;-1 \\ 2 \\ \end{matrix} $$</p> <p>Now for each "$2$" there is a diagonal that gives the sequence $2-1-\frac{1}{2}-\frac{1}{4}...=0$ (since the matrix goes on forever) Therefore, the sum of the matrix must be $0$!</p> <p>Apparently that's wrong; but why? Thanks!</p> <p>EDIT: I am looking for an answer to the question what is fundamentally <strong>wrong</strong> with my method plus an explanation for why that is wrong. I am not looking for an explanation of the correct method.</p>
Masacroso
173,262
<blockquote> <p>There is a theorem that says that a double series is summable if</p> <p><span class="math-container">$$\sup_{n\in\Bbb N}\sum_{j,k=0}^n|a_{j,k}|&lt;\infty$$</span></p> </blockquote> <p>Your method was wrong because the triangular matrix that you derived is not summable as a double series, that is</p> <p><span class="math-container">$$\sup_{n\in\Bbb N}\sum_{j,k=0}^n|a_{j,k}|\ge\sup_{n\in\Bbb N}\sum_{j=0}^n|a_{j,0}|=\sup_{n\in\Bbb N}\sum_{j=0}^n 2=\infty$$</span></p> <p>That is: you can see that if you change the order in the sum of the triangular matrix the sum will be different, hence this matrix is not summable.</p> <hr /> <p>Observe that the sum of the original matrix is equivalent to the sum of this double series</p> <p><span class="math-container">$$\sum_{k=0}^\infty\sum_{j=0}^\infty\frac1{2^{j+k}}$$</span></p> <p>which is summable because</p> <p><span class="math-container">$$\begin{align}\sum_{j,k=0}^\infty\left(\frac12\right)^{j+k}&amp;=\lim_{n\to\infty}\sum_{j,k=0}^n\left(\frac12\right)^{j+k}\\&amp;=\lim_{n\to\infty}\sum_{m=0}^n(m+1)\left(\frac12\right)^m,\qquad\text{renaming }m=j+k\\&amp;=\sum_{m=0}^\infty(m+1)\left(\frac12\right)^m\\&amp;=\left[\sum_{m=0}^\infty(m+1)x^m\right]_{x=1/2}\\&amp;=\left[\sum_{m=0}^\infty \partial_x x^{m+1}\right]_{x=1/2}\\&amp;=\left[\partial_x\sum_{m=0}^\infty x^{m+1}\right]_{x=1/2},\qquad\text{because the geometric series is analytic in }|x|&lt;1\\&amp;=\left[\partial_x\frac{x}{1-x}\right]_{x=1/2},\qquad \text{if }|x|&lt;1\\&amp;=\left[\frac{1}{(1-x)^2}\right]_{x=1/2}=4&lt;\infty\end{align}$$</span></p> <p>because there are <span class="math-container">$m+1$</span> ways to sum up to <span class="math-container">$m$</span> using two non-negative integers, that is</p> <p><span class="math-container">$$m+1=\binom{m+2-1}{2-1}$$</span></p> <p>Check it <a href="https://en.wikipedia.org/wiki/Stars_and_bars_(combinatorics)#Theorem_two_2" rel="nofollow noreferrer">here</a> or <a
href="https://en.wikipedia.org/wiki/Composition_(combinatorics)" rel="nofollow noreferrer">here</a> as weak compositions of <span class="math-container">$m$</span> with two non-negative integers.</p>
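A quick numerical check (my addition): the absolute partial sums of the original matrix stay bounded, and the square truncations tend to $4$, which is why that sum is order-independent while the triangular rearrangement is not.

```python
# Partial sums of the original matrix: entries a[j][k] = (1/2)^(j+k), j, k >= 0.
def partial_sum(n):
    """Sum over the n-by-n top-left block of the matrix."""
    return sum(0.5 ** (j + k) for j in range(n) for k in range(n))

for n in (5, 10, 30):
    print(n, partial_sum(n))   # increases toward 4
```

By contrast, any rectangular truncation of the triangular array keeps picking up fresh uncancelled $2$'s in its first column, so its absolute partial sums are unbounded, failing the summability criterion quoted above.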
3,520,538
<blockquote> <p>Let <span class="math-container">$X$</span> be a Hausdorff topological space. Prove that if <span class="math-container">$\{ C_i | i \in I \}$</span> is an infinite collection of compact subsets of <span class="math-container">$X$</span> such that <span class="math-container">$\cap_{i \in I} C_i = \emptyset$</span>, then some finite sub collection of <span class="math-container">$\{ C_i | i \in I \}$</span> also has an empty intersection. </p> </blockquote> <p>Well my idea is that I want to show that <span class="math-container">$X$</span> is compact. Since each <span class="math-container">$C_i$</span> is a compact subset of <span class="math-container">$X$</span>, this means each <span class="math-container">$C_i$</span> is closed, which means <span class="math-container">$(X \setminus C_i)$</span> is an open set. I want to say that <span class="math-container">$\cup_{i \in I} (X \setminus C_i)$</span> is an open cover of X but I don't know if this proves <span class="math-container">$X$</span> is a compact set. Do you think I am on the right track? Thank you very much. </p>
HallaSurvivor
655,547
<p>One famous, and perhaps surprising, reply:</p> <p><span class="math-container">$$\sum_{p \text{ prime}} \frac{1}{p}$$</span></p> <p>Its partial sums grow like <span class="math-container">$\log \log n$</span>, and it shows heuristically that there are "lots of primes", since they form enough of the harmonic series to still diverge. </p> <hr> <p>I hope this helps ^_^</p>
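A short numeric illustration of that $\log\log n$ growth (my addition, with a naive sieve; the Mertens constant $0.2615$ is quoted from memory, so treat it as approximate):

```python
import math

def prime_reciprocal_sum(n):
    """Sum of 1/p over primes p <= n, via a simple Eratosthenes sieve."""
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if is_prime[i]:
            for j in range(i * i, n + 1, i):
                is_prime[j] = False
    return sum(1 / p for p in range(2, n + 1) if is_prime[p])

for n in (10 ** 3, 10 ** 4, 10 ** 5):
    # Mertens: the sum is approximately log(log(n)) + 0.2615
    print(n, prime_reciprocal_sum(n), math.log(math.log(n)) + 0.2615)
```

Multiplying $n$ by ten barely moves the sum, which is the $\log\log n$ crawl in action, yet it still diverges.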
2,744,981
<blockquote> <p>Let $X$ be a Banach space. Let $T\in \mathbb{B}(X)$. If $T$ is an isometry and not invertible, prove that $\sigma(T) = \overline{\mathbb{D}}$.</p> </blockquote> <p>I can show that $\sigma(T) \subset \overline{\mathbb{D}}$. Since $T$ is not invertible, then $0 \in \sigma(T)$. Suppose $\sigma(T) \neq \overline{\mathbb{D}}$, then we can find $|\lambda|&lt;1$ on the boundry of the spectrum, $\partial \sigma(T)$. Then $\lambda \in \sigma_{ap}(T)$. How can I go from here to a contradiction?</p>
bitesizebo
547,202
<p>So you've gotten to the point that there must exists $\lambda \in \sigma_{ap}(T)$ with $\lvert \lambda \rvert &lt; 1$. Then by definition of the approximate point spectrum there exists a sequence $(x_i) \subset X$ such that $\lVert x_i \rVert = 1$ for all $i$, and $Tx_i - \lambda x_i \to 0$ as $i \to \infty$. But $$ \lVert Tx_i - \lambda x_i \rVert \geq \lVert T x_i \rVert - \lvert \lambda \rvert \lVert x_i \rVert = \lVert x_i \rVert - \lvert \lambda \rvert \lVert x_i \rVert$$ since $T$ is an isometry. But then $\lVert x_i \rVert = 1$ and $\lvert \lambda \rvert &lt; 1$ therefore $\lVert Tx_i - \lambda x_i \rVert \geq 1 - \lvert \lambda \rvert &gt; 0$ thus $Tx_i - \lambda x_i \not \to 0$ as $i \to \infty$. Contradiction</p>
4,694
<p>While teaching the concept of vector spaces, my professor mentioned that addition and multiplication aren't necessarily what we <em>normally</em> call addition and multiplication, but any other function that complies with the eight axioms needed by the definition of a vector space (for instance, associativity, commutativity of addition, etc.). Is there any widely used vector space in which alternative functions are used as addition/multiplication?</p>
Jason DeVito
331
<p>This example is certainly not &quot;widely used&quot;, but I think it's worth thinking about anyway. This answer comes from an MO post by John Goodrick: <a href="https://mathoverflow.net/questions/9402/pedagogical-question-about-linear-algebra">https://mathoverflow.net/questions/9402/pedagogical-question-about-linear-algebra</a></p> <p>I'll quote the entirety of his post here in case you don't feel like clicking on the link (and since I've done no work on my own, I'll make this post CW)</p> <blockquote> <p>You could try giving the following example: the set of all positive real numbers, considered as a vector space over the field R, with vector addition given by multiplication and scalar multiplication given by taking exponents.</p> <p>As a first step, you could verify that this satisfies a few of the vector-space axioms, and then let students check the rest of them (say, as homework). Then, you could ask questions like, &quot;what is the dimension of this vector space?&quot; or, &quot;give an example of a (nontrivial) linear transformation from this space into R^3.&quot;</p> </blockquote>
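A quick numeric check of a few of the axioms for this example (my own sketch; `vadd` and `smul` are just names I chose for the exotic "addition" and "scalar multiplication"):

```python
import math
import random

def vadd(x, y):   # "vector addition" is ordinary multiplication
    return x * y

def smul(l, x):   # "scalar multiplication" is exponentiation
    return x ** l

random.seed(0)
ok = True
for _ in range(100):
    x, y = random.uniform(0.1, 10), random.uniform(0.1, 10)
    a, b = random.uniform(-3, 3), random.uniform(-3, 3)
    ok &= math.isclose(vadd(x, y), vadd(y, x))                        # commutativity
    ok &= math.isclose(smul(a + b, x), vadd(smul(a, x), smul(b, x)))  # (a+b).x = a.x "+" b.x
    ok &= math.isclose(smul(a, smul(b, x)), smul(a * b, x))           # a.(b.x) = (ab).x
    ok &= math.isclose(vadd(x, 1.0), x)                               # the zero vector is 1
```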
4,694
<p>While teaching the concept of vector spaces, my professor mentioned that addition and multiplication aren't necessarily what we <em>normally</em> call addition and multiplication, but any other function that complies with the eight axioms needed by the definition of a vector space (for instance, associativity, commutativity of addition, etc.). Is there any widely used vector space in which alternative functions are used as addition/multiplication?</p>
Agustí Roig
664
<p>What do you "normally" call addition and multiplication?</p> <p><em>Just</em> those operations with real numbers, or <em>all</em> kind of addition and multiplication "derived" from the well-known operations with real numbers, or that "look like" these operations?</p> <p>Because, in the first case, you have plenty of elementary and widely used vector spaces with operations which are <em>not</em> those of real numbers:</p> <ol> <li>$\mathbb{R}^2$, the set of ordered pairs of real numbers $(x,y)$, is a real vector space, with addition and multiplication defined as $(x,y) + (u,v) = (x+u, y+v)$ and $\lambda (x,y) = (\lambda x , \lambda y)$. These operations are defined using the "normal" addition and multiplication of real numbers, but are <em>not</em> the "normal" addition and multiplication of real numbers just because $(x,y)$ is <em>not</em> a real number.</li> <li>${\cal C}^0 (\mathbb{R}, \mathbb{R})$, the set of continuous functions $f:\mathbb{R} \longrightarrow \mathbb{R}$, is a real vector space, with addition and multiplication defined point-wise; that is $(f+g)(x) = f(x) + g(x)$ and $(\lambda f)(x) = \lambda f(x)$. Again, these addition and multiplication are defined using the "normal" addition and multiplication of real numbers, but are not the "normal" addition and multiplication of real numbers for the same reason.</li> <li>$\mathbb{Z}/2\mathbb{Z}$, the set of integers mod 2, is a $\mathbb{Z}/2\mathbb{Z}$-vector space, with addition and multiplication $\widetilde{m} + \widetilde{n} = \widetilde{n+m}$ and $\widetilde{\lambda}\widetilde{m} = \widetilde{\lambda m}$, where $\widetilde{m}$ denotes the class of $m$ mod 2. 
Ditto.</li> <li>$\mathbb{R}(x)$, the field of rational functions $\frac{p(x)}{q(x)}$, where $p(x), q(x) \in \mathbb{R}[x]$ are polynomials, $q(x) \neq 0$, is an $\mathbb{R}(x)$-vector space, with addition and multiplication $\frac{p(x)}{q(x)} + \frac{r(x)}{s(x)} = \frac{p(x) s(x) + r(x) q(x)}{q(x)s(x)} $ and $\frac{p(x)}{q(x)} \frac{r(x)}{s(x)} = \frac{p(x)r(x)}{q(x)s(x)}$. Ditto.</li> <li>$\mathbb{C}$, the set of complex numbers, is a $\mathbb{C}$-vector space, with the addition and multiplication of complex numbers. Ditto.</li> <li>$\mathbb{K}^n$, the set of ordered families $(x_1, \dots , x_n)$ of elements of any field $\mathbb{K}$, is a $\mathbb{K}$-vector space, with addition and multiplication defined as in example 1. Examples 3, 4 and 5 are particular cases of this one with $n=1$ and $\mathbb{K} =$ $\mathbb{Z}/2\mathbb{Z}$, $\mathbb{R}(x)$ and $\mathbb{C}$, respectively. Example 1 is also a particular case, with $n=2$ and $\mathbb{K} = \mathbb{R}$. Addition and multiplication in $\mathbb{K}$ may have nothing in common with the operations with real numbers.</li> </ol>
3,112,263
<p>Given that there are three integers <span class="math-container">$a, b,$</span> and <span class="math-container">$c$</span> such that <span class="math-container">$\frac{1}{a}+\frac{1}{b}+\frac{1}{c}=\frac{6}{7}$</span>, what is the value of a+b+c?</p> <hr> <p>Immediately, I see that I should combine the left hand side. Doing such results in the equation <span class="math-container">$$\frac{ab+ac+bc}{abc}=\frac{6x}{7x}.$$</span> This branches into two equations <span class="math-container">$$ab+ac+bc=6x$$</span><span class="math-container">$$abc=7x.$$</span> From this, I can tell that one of a, b, or c must be a multiple of 7, and the other two are factors of <span class="math-container">$x$</span>. Now, I do trial and error, but I find this very tiring and time consuming. Is there a better method?</p> <p>Also, if you are nice, could you also help me on <a href="https://math.stackexchange.com/questions/3098173/ns-base-5-and-base-6-representations-treated-as-base-10-yield-sum-s-for">this</a>(<a href="https://math.stackexchange.com/questions/3098173/ns-base-5-and-base-6-representations-treated-as-base-10-yield-sum-s-for">$N$&#39;s base-5 and base-6 representations, treated as base-10, yield sum $S$. For which $N$ are $S$&#39;s rightmost two digits the same as $2N$&#39;s?</a>) question?</p> <p>Thanks!</p> <p>Max0815</p>
Dan Uznanski
167,895
<p>Looks like the greedy algorithm wins here, and quite quickly.</p> <p>Let's assume wlog that <span class="math-container">$a\le b\le c$</span>.</p> <p>If <span class="math-container">$a=1$</span>, then we're already higher than <span class="math-container">$6/7$</span>, so let's try <span class="math-container">$a=2$</span>. Now we have <span class="math-container">$\frac{1}{b} + \frac{1}{c} = \frac{6}{7} - \frac{1}{2} = \frac{5}{14}$</span>. Now, <span class="math-container">$b$</span> has to be at least <span class="math-container">$3$</span>; let's try it, we get ... <span class="math-container">$\frac{1}{c} = \frac{5}{14} - \frac{1}{3} = \frac{1}{42}$</span>.</p> <p>Oh.</p> <p><span class="math-container">$$\frac{6}{7} = \frac{1}{2} + \frac{1}{3} + \frac{1}{42}$$</span></p>
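The arithmetic can be confirmed exactly with rationals; this small sketch (my addition) replays the greedy choices above:

```python
from fractions import Fraction as F

target = F(6, 7)
rest = target - F(1, 2) - F(1, 3)  # after the greedy picks a=2, b=3
c = 1 / rest                       # the last denominator is forced
total = F(1, 2) + F(1, 3) + F(1, 42)
```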
1,703,169
<p>At a recent maths competition one of the questions was to find for which $x,y,z$ this equation holds true:</p> <p>$$\sqrt{x}-\sqrt{z+y}=\sqrt{y}-\sqrt{z+x}=\sqrt{z}-\sqrt{x+y}$$</p> <p>where $x,y,z \in \mathbb{R}^+ \cup \{0\}$. So how am I supposed to approach this problem?</p> <p>Also sorry for not explaining my personal progress on this problem, but there simply is none.</p>
user322840
322,840
<p>Take square on both sides? An obvious solution would be $x = y = z$</p>
1,703,169
<p>At a recent maths competition one of the questions was to find for which $x,y,z$ this equation holds true:</p> <p>$$\sqrt{x}-\sqrt{z+y}=\sqrt{y}-\sqrt{z+x}=\sqrt{z}-\sqrt{x+y}$$</p> <p>where $x,y,z \in \mathbb{R}^+ \cup \{0\}$. So how am I supposed to approach this problem?</p> <p>Also sorry for not explaining my personal progress on this problem, but there simply is none.</p>
Roman83
309,360
<p>$$\sqrt{x}-\sqrt{z+y}=\sqrt{y}-\sqrt{z+x}=\sqrt{z}-\sqrt{x+y} \Rightarrow$$ $$\sqrt{x}+\sqrt{x+y}=\sqrt{z}+\sqrt{z+y}$$ Let $f(t)=\sqrt t+ \sqrt{t+y},$ where $y$ is fixed. </p> <p>$f(t)$ is an increasing function $\Rightarrow$ if $f(t_1)=f(t_2)$ then $t_1=t_2$. So $x=z$.</p> <p>Similarly, $x=y$. So $$x=y=z$$</p>
3,758
<p>When implicitly finding the derivative of: </p> <blockquote> <p>$xy^3 - xy^3\sin(x) = 1$</p> </blockquote> <p>How do you find the implicit derivative of:</p> <blockquote> <p>$xy^3\sin(x)$</p> </blockquote> <p>Is it using a <em>triple</em> product rule of sorts?</p>
WWright
249
<p>It is possible to derive a 'triple product rule.' In fact, you can generalize it to an arbitrary number of products. You may ask why these rules aren't presented in these general forms. One reason for this is that writing the general formula out is more complicated to memorize and more intimidating towards students just learning calculus. The other reason is that having the product rule is enough to give you the general rule by a simple induction argument. Also, for practical purposes, problems can be solved with only the usual product rule being applied in multiple steps.</p> <p>In your case we can apply the product rule twice: $\frac{d}{dx}\left(x\sin(x)y^{3}\right)=y^{3}\frac{d}{dx}(x\sin(x))+3y^2\frac{dy}{dx}\,x\sin(x)=y^{3}(\sin(x)+x\cos(x))+3y^{2}\frac{dy}{dx}\,x\sin(x)$</p> <p>We can also derive a 'triple product rule'</p> <p>$\frac{d}{dx}(f\cdot g\cdot h)=\frac{df}{dx}\cdot(h\cdot g)+\frac{dg}{dx}\cdot (f\cdot h)+\frac{dh}{dx}\cdot (f\cdot g)$</p> <p>But you see that you still have to apply the chain rule to that to get it to fit how you need to use it in this situation and so it would be a real mess to memorize all the different special cases. This is why we just provide the product rule with 2 functions.</p> <p>Does my rambling make sense?</p>
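A numeric spot check of the triple product rule (my own Python sketch, not part of the answer), using $f=x$, $g=\sin x$, $h=e^x$ as stand-in functions and a central difference for the derivative:

```python
import math

def prod(x):
    # f * g * h with f = x, g = sin, h = exp
    return x * math.sin(x) * math.exp(x)

x0, eps = 0.7, 1e-6
numeric = (prod(x0 + eps) - prod(x0 - eps)) / (2 * eps)
rule = (1.0 * math.sin(x0) * math.exp(x0)    # f' g h
        + x0 * math.cos(x0) * math.exp(x0)   # f g' h
        + x0 * math.sin(x0) * math.exp(x0))  # f g h'
```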
3,758
<p>When implicitly finding the derivative of: </p> <blockquote> <p>$xy^3 - xy^3\sin(x) = 1$</p> </blockquote> <p>How do you find the implicit derivative of:</p> <blockquote> <p>$xy^3\sin(x)$</p> </blockquote> <p>Is it using a <em>triple</em> product rule of sorts?</p>
Arturo Magidin
742
<p>You get a formula for the derivative of a product of $n$ factors from the formula for the product of $2$ factors by doing induction. Intuitively, you do it the same way as you go from $2$ to $3$: $(fgh)' = ((fg)h)' = (fg)'h + (fg)h' = (f'g + fg')h + (fg)h' = f'gh + fg'h + fgh'$. The pattern should now be clear.</p> <p>Assuming you already know that $(f_1\cdots f_k)' = f_1'f_2\cdots f_k + f_1f_2'f_3\cdots f_k + \cdots + f_1\cdots f_{k-1}f_k'$, then inductively you have $$(f_1\cdots f_kf_{k+1})' = (f_1\cdots f_k)'f_{k+1} + (f_1\cdots f_k)f_{k+1}'$$ and the formula follows.</p> <p>So for $xy^3\sin(x)$, the derivative will be $$(x)'y^3\sin(x) + x(y^3)'\sin(x) + xy^3(\sin x)'$$ with the middle summand requiring implicit differentiation.</p>
2,849,303
<p>Given two distinct factorizations of a positive integer with the same number of factors (not necessarily prime or all distinct), must the sums of the respective sets of factors also be distinct? This question arises frequently in puzzles of the KenKen or Killer Sudoku type. I have found no obvious counter examples searching by hand. For the purpose at hand, the numbers being factored may be limited to less than 1000, say.</p>
Xerxes
93,162
<p>I thought it might be fun to write a <em>Mathematica</em> code to find instances of same-sum factorizations. Using the following code, you can find integers such that <code>num</code> different sets of <code>len</code> different same-length factorizations of an integer less than <code>max</code> have the same sum. It omits multiples of lower cases on the grounds that those are boring.</p> <p>The smallest integers with increasingly large sets of different same-sum factorizations are:<br>72, 432, 3456, 5760, 7200, 12096, 17280, 21600, ...</p> <p>The smallest integers with pairs of increasingly large sets of different same-sum factorizations are:<br> 144, 720, 2160, 5040, 8640, 10080, 14400, 25920, 30240, ...</p> <p>The smallest integers with triples of increasingly large sets of different same-sum factorizations are:<br> 144, 1440, 2880, 7200, 8640, 15120, 17280, 30240, 30240, ...</p> <p>30240 is an interesting case with 6 different same-sum factorization 9-sets, 4 10-sets, an 11-set and a 12-set.</p> <pre><code>factorizationList[n_Integer?PrimeQ] := {{n}} factorizationList[n_Integer] := (factorizationList[n] = Union[Append[Join@@ (#/{L__List}:&gt;Sort/@Flatten[Outer[Join,L,1],1]&amp;/@ Map[factorizationList,{#,n/#}&amp;/@ #[[-Ceiling[Length[#]/2];;-2]]&amp;[Divisors[n]],{2}]),{n}]]) Block[{num=2,len=10,max=25000}, DeleteDuplicates[DeleteCases[ Table[{n,Select[GatherBy[factorizationList[n], {Total[#],Length[#]}&amp;], Length[#]&gt;=num&amp;]}, {n,max}], {_, L_ /; Length[L] &lt; num}],IntegerQ[#2[[1]]/#1[[1]]]&amp;]] </code></pre> <p>Output:</p> <pre><code>{{21600,{{{2,3,9,20,20},{2,3,10,15,24},{2,3,12,12,25},{2,5,5,18,24}, {2,5,8,9,30},{2,6,6,10,30},{3,3,8,10,30},{3,4,4,18,25}, {3,4,5,12,30},{3,5,5,9,32}}}}} </code></pre>
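For readers without Mathematica, a minimal Python brute force (my own sketch, independent of the code above) reproduces the smallest case: $72 = 2\cdot6\cdot6 = 3\cdot3\cdot8$, two length-3 factorizations with sum $14$.

```python
from collections import defaultdict

def factorizations(n, smallest=2):
    # all non-decreasing lists of integers >= smallest whose product is n
    result = [[n]] if n >= smallest else []
    for d in range(smallest, int(n ** 0.5) + 1):
        if n % d == 0:
            result += [[d] + rest for rest in factorizations(n // d, d)]
    return result

groups = defaultdict(list)
for f in factorizations(72):
    groups[(len(f), sum(f))].append(f)
collisions = {k: v for k, v in groups.items() if len(v) >= 2}
```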
763,073
<p>I came across the symbol $|v_1 \wedge \dots \wedge v_m|^{-1}$ in a paper - this is the norm of the wedge product of vectors $v_k \in \mathbb{R}^n$. I thought its meaning was self-evident until I tried to compute it. </p> <p>First of all $|v_1 \wedge \dots \wedge v_m|$ is a <em>number</em> since we are taking the reciprocal.</p> <p>If there were $m = n$ vectors then $|v_1 \wedge \dots \wedge v_n| = \sqrt{\det (v_i \cdot v_j)} $ and it is the volume of the <a href="https://en.wikipedia.org/wiki/Parallelepiped" rel="nofollow">parallelepiped</a> generated by the vectors $v_1, \dots, v_n$. </p> <p><img src="https://upload.wikimedia.org/wikipedia/commons/3/3e/Parallelepiped_volume.svg" width="100"></p> <p>If we have $m &lt; n$ vectors, given in coordinates, we have the volume of the $m$-dimensional parallelepiped inside of $n$-dimensional space. This volume is <strong>0</strong> unless we use the $m$-dimensional volume measure.</p> <p>In <a href="https://en.wikipedia.org/wiki/Exterior_algebra#Alternating_multilinear_forms" rel="nofollow">exterior algebra</a>, if I have the coordinates of the vectors, $v_1, \dots, v_m$, how do I compute this volume?</p>
Muphrid
45,296
<p>In Clifford algebra, this could be easily computed. A wedge product of $m$ linearly independent vectors can be rewritten as a geometric product of $m$ orthogonal vectors $u_1 u_2 \ldots u_m$. The product of this $m$-vector with its reverse $u_m u_{m-1} \ldots u_2 u_1$ is guaranteed to be a scalar; call this $|v_1 \wedge v_2 \wedge \ldots \wedge v_m|^2$.</p> <p>If you would rather not use Clifford algebra, then you can think of the metric as having a natural extension to $k$-vectors. For instance, can you identify what $g(e_1 \wedge e_2 , e_1 \wedge e_2)$ should be? How about $g(e_1 \wedge e_2, e_1 \wedge e_3)$?</p> <p>You could then expand $V = v_1 \wedge v_2 \wedge \ldots \wedge v_m$ in terms of the basis $m$-vectors, and since you know how the metric acts on these basis $m$-vectors (it's very simple for any orthonormal basis), you can compute the norm. Realistically, if this is Euclidean space with an orthonormal basis, then the metric will just tell you to take the sum of the squares of the components.</p>
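Concretely, in Euclidean $\mathbb{R}^n$ this boils down to the standard Gram-determinant formula $|v_1\wedge\dots\wedge v_m|=\sqrt{\det(v_i\cdot v_j)}$. A small stdlib-only sketch (my addition), with the determinant computed by Gaussian elimination:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def wedge_norm(vectors):
    # |v1 ^ ... ^ vm| = sqrt(det G), where G_ij = v_i . v_j is the Gram matrix
    m = len(vectors)
    G = [[dot(u, v) for v in vectors] for u in vectors]
    det = 1.0
    for i in range(m):
        p = max(range(i, m), key=lambda r: abs(G[r][i]))  # partial pivoting
        if G[p][i] == 0:
            return 0.0
        if p != i:
            G[i], G[p] = G[p], G[i]
            det = -det
        det *= G[i][i]
        for r in range(i + 1, m):
            factor = G[r][i] / G[i][i]
            for c in range(i, m):
                G[r][c] -= factor * G[i][c]
    return math.sqrt(det)

# area of the parallelogram spanned by (1,0,0) and (1,1,0) inside R^3
area = wedge_norm([[1, 0, 0], [1, 1, 0]])
```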
421,198
<p>An object <span class="math-container">$X$</span> of a <a href="https://ncatlab.org/nlab/show/cartesian+closed+category" rel="nofollow noreferrer">cartesian closed category</a> <span class="math-container">$\mathbf C$</span> is <a href="https://ncatlab.org/nlab/show/amazing+right+adjoint" rel="nofollow noreferrer">atomic</a> if <span class="math-container">$({-})^X \colon \mathbf C \to \mathbf C$</span> has a right adjoint (hence is also <a href="https://ncatlab.org/nlab/show/tiny+object" rel="nofollow noreferrer">internally tiny</a>). Intuitively, atomic objects are &quot;very small&quot; (as the name suggests), and consequently there aren't usually many tiny objects in <span class="math-container">$\mathbf C$</span>.</p> <p>However, is this necessarily the case? More precisely, do there exist any non-posetal cartesian closed categories in which every object is atomic?</p> <p>If there are no non-posetal examples, are there any nontrivial posetal examples? (The terminal category forms a trivial example of a posetal example.)</p>
Maxime Ramzi
102,343
<p>This is a partial answer: if <span class="math-container">$C$</span> has finite coproducts, then <span class="math-container">$C$</span> must be posetal. In fact, I only need coproducts of the form <span class="math-container">$X\coprod X$</span>.</p> <p>Indeed, because <span class="math-container">$C$</span> is cartesian closed, <span class="math-container">$X\times -$</span> commutes with these coproducts and in particular it is easy to check that <span class="math-container">$\underline{\hom}(* \coprod *, Y)\cong Y \times Y$</span>, naturally in <span class="math-container">$Y$</span> (<span class="math-container">$\underline\hom$</span> denotes the internal hom).</p> <p>In particular, if <span class="math-container">$*\coprod *$</span> is atomic, the canonical morphism <span class="math-container">$(Y_0\times Y_0)\coprod (Y_1\times Y_1)\to (Y_0\times Y_0)\coprod (Y_0\times Y_1)\coprod (Y_1\times Y_0)\coprod (Y_1\times Y_1)$</span> is an isomorphism (here I'm denoting by <span class="math-container">$Y_0$</span> or <span class="math-container">$Y_1$</span> the same object <span class="math-container">$Y$</span>, it's simply to indicate the &quot;position&quot;, i.e. what the morphism is).</p> <p>This morphism is of the form <span class="math-container">$X\overset{in_1}\to X\coprod X$</span>, and the claim is that this is an isomorphism. This implies that any two morphisms <span class="math-container">$X\to Z$</span> must be equal: <span class="math-container">$\hom(X,Z)\times \hom(X,Z)\cong \hom(X,Z)$</span>.</p> <p>Here <span class="math-container">$X$</span> is <span class="math-container">$Y\times Y$</span>, for an arbitrary <span class="math-container">$Y$</span>. Any of the two projections <span class="math-container">$Y\times Y\to Y$</span> is split, so it follows that any two morphisms out of <span class="math-container">$Y$</span> must be equal. 
<span class="math-container">$Y$</span> was arbitrary, so <span class="math-container">$C$</span> is posetal.</p>
421,198
<p>An object <span class="math-container">$X$</span> of a <a href="https://ncatlab.org/nlab/show/cartesian+closed+category" rel="nofollow noreferrer">cartesian closed category</a> <span class="math-container">$\mathbf C$</span> is <a href="https://ncatlab.org/nlab/show/amazing+right+adjoint" rel="nofollow noreferrer">atomic</a> if <span class="math-container">$({-})^X \colon \mathbf C \to \mathbf C$</span> has a right adjoint (hence is also <a href="https://ncatlab.org/nlab/show/tiny+object" rel="nofollow noreferrer">internally tiny</a>). Intuitively, atomic objects are &quot;very small&quot; (as the name suggests), and consequently there aren't usually many tiny objects in <span class="math-container">$\mathbf C$</span>.</p> <p>However, is this necessarily the case? More precisely, do there exist any non-posetal cartesian closed categories in which every object is atomic?</p> <p>If there are no non-posetal examples, are there any nontrivial posetal examples? (The terminal category forms a trivial example of a posetal example.)</p>
Tim Campion
2,362
<p>Building on Maxime's answer -- if <span class="math-container">$C$</span> is cartesian closed and has an initial object <span class="math-container">$0$</span>, and if <span class="math-container">$0$</span> is atomic (or even just tiny), then <span class="math-container">$C$</span> is the terminal category. For <span class="math-container">$1 = 0^0 = 0$</span> (the former equation holds in any cartesian closed category with an initial object <span class="math-container">$0$</span>; the latter holds because <span class="math-container">$(-)^0$</span> preserves initial objects). That is, <span class="math-container">$C$</span> is pointed, and the only pointed cartesian closed category is the terminal one.</p>
3,557,398
<p>Here are two second-order differential equations.</p> <p><span class="math-container">$$ y''+9y=\sin(2t) \tag 1 $$</span></p> <p><span class="math-container">$$ y'' +4y =\sin(2t) \tag 2 $$</span></p> <p>I am told to use the method of undetermined coefficients to solve them.</p> <p>For 1), I use <span class="math-container">$y_p=A \cos(2t)+B \sin(2t)$</span> to get <span class="math-container">$A=0$</span> and <span class="math-container">$B=\frac{1}{5}$</span>, and so <span class="math-container">$y_p=\frac{1}{5} \sin(2t)$</span>.</p> <p>For 2), I realize that that method doesn't work and am told to use <span class="math-container">$y_p=t(A \cos(2t)+B \sin(2t))$</span> instead. Why does it work then?</p>
fleablood
280,126
<p>Just do it in cases and note the trivial: <span class="math-container">$\min(a,b) \le a$</span> and <span class="math-container">$\min(a,b) \le b$</span></p> <p>If both <span class="math-container">$x \le z$</span> and <span class="math-container">$y\le z$</span> than <span class="math-container">$\min(x+y, z) \le x+ y = \min(x,z)+ \min(y,z)$</span>.</p> <p>If <span class="math-container">$x &gt; z$</span> then <span class="math-container">$\min(x+y,z) \le z = \min(x,z) &lt; \min(x,z)+\min(y,z)$</span>.</p> <p>If <span class="math-container">$y &gt; z$</span> then <span class="math-container">$\min(x+y,z) \le z=\min(y,z) &lt; \min(x,z) + \min(y,z)$</span>.</p> <p>That's it.</p>
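A quick randomized sanity check (my addition; note the case analysis implicitly assumes $x,y,z\ge 0$, which is the natural setting here, since the inequality can fail for negative values):

```python
import random

random.seed(1)
ok = True
for _ in range(10000):
    x = random.uniform(0, 10)
    y = random.uniform(0, 10)
    z = random.uniform(0, 10)
    if not min(x + y, z) <= min(x, z) + min(y, z):
        ok = False
```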
84,897
<pre><code>Integrate[a/(Sin[t]^2 + a^2), {t, 0, 2 Pi}] </code></pre> <p>$$\int_0^{2 \pi } \frac{a}{a^2+\sin ^2(t)} \, dt$$</p> <p>gives $0$</p> <p>This cannot be true. What is going on?</p> <p>If I insert a number into <code>a</code>, it gives a reasonable result:</p> <pre><code>NIntegrate[2/(Sin[t]^2 + 4), {t, 0, 2 Pi}] </code></pre> <p>gives <code>2.80993</code></p>
Silvia
17
<p>Mathematica 10.1 seems to have fixed this bug:</p> <p><img src="https://i.stack.imgur.com/1Rnvm.png" alt="answer from 10.1"></p>
84,897
<pre><code>Integrate[a/(Sin[t]^2 + a^2), {t, 0, 2 Pi}] </code></pre> <p>$$\int_0^{2 \pi } \frac{a}{a^2+\sin ^2(t)} \, dt$$</p> <p>gives $0$</p> <p>This cannot be true. What is going on?</p> <p>If I insert a number into <code>a</code>, it gives a reasonable result:</p> <pre><code>NIntegrate[2/(Sin[t]^2 + 4), {t, 0, 2 Pi}] </code></pre> <p>gives <code>2.80993</code></p>
J. M.'s persistent exhaustion
50
<p>A bit late, but consider the following function:</p> <p>$$f_c(a,x)=\frac1{\sqrt{1+a^2}}\left(x+\arctan\left(\frac{\sin\,x\cos\,x}{a^2+a\sqrt{1+a^2}+\sin^2 x}\right)\right)$$</p> <p>You can verify the following identity in <em>Mathematica</em>:</p> <pre><code>fc[a_, x_] := (x + ArcTan[(Sin[x] Cos[x])/(a^2 + a Sqrt[1 + a^2] + Sin[x]^2)])/Sqrt[1 + a^2] D[fc[a, x], x] == a/(a^2 + Sin[x]^2) // Simplify True </code></pre> <p>Compare this with the antiderivative produced by <code>Integrate[]</code>:</p> <pre><code>f[a_, x_] = Integrate[a/(a^2 + Sin[x]^2), x] With[{a = 1/2}, Plot[{f[a, x], fc[a, x]}, {x, 0, 2 π}, PlotStyle -&gt; {Orange, Blue}]] </code></pre> <p><img src="https://i.stack.imgur.com/t2v5j.png" alt="two different antiderivatives"></p> <p>and we see that $f_c(a,x)$ is continuous over $[0,2\pi]$, while $f(a,x)$ is not. Thus, $f(a,x)$ is not suitable for computing the definite integral through the FTC. But, if you use $f_c(a,x)$,</p> <pre><code>fc[a, 2 π] - fc[a, 0] 2 π/Sqrt[1 + a^2] </code></pre> <p>Again, see <a href="http://blog.wolfram.com/2008/01/19/mathematica-and-the-fundamental-theorem-of-calculus/" rel="nofollow noreferrer">O. Pavlyk's blog entry</a> for more details.</p>
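As a further cross-check of the closed form $2\pi/\sqrt{1+a^2}$, here is a quick stdlib-only Python numeric integration (my addition; the midpoint rule converges very fast for smooth periodic integrands):

```python
import math

def midpoint_integral(a, n=20000):
    # composite midpoint rule for a/(a^2 + sin^2 t) on [0, 2*pi]
    h = 2 * math.pi / n
    return h * sum(a / (a * a + math.sin((k + 0.5) * h) ** 2)
                   for k in range(n))

val = midpoint_integral(2.0)
closed_form = 2 * math.pi / math.sqrt(1 + 2.0 ** 2)  # = 2*pi/sqrt(5)
```

For `a = 2` this reproduces the `2.80993` that `NIntegrate` reported in the question.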
3,810,856
<p>I'm having trouble understanding how this form of the principle (<a href="https://en.wikipedia.org/wiki/Inclusion%E2%80%93exclusion_principle" rel="nofollow noreferrer">on wiki</a>) results in the form below.</p> <p>Wiki form: <a href="https://i.stack.imgur.com/6IXQZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6IXQZ.png" alt="wiki form" /></a></p> <p>Using three sets: <a href="https://i.stack.imgur.com/6maIp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6maIp.png" alt="example" /></a></p> <p>My confusion is the last <span class="math-container">$(-1)^{n-1} |A_1 \cap \cdots \cap A_n|$</span>. Where does this last form appear in the three set example?</p> <p>Also, if using two sets vs three, is the <span class="math-container">$\sum_{1 \le i \lt j \lt k \le n}$</span> unsatisfiable because there are no three <span class="math-container">$i, j, k$</span> terms to satisfy it? In that case this summation term disappears?</p>
Michael Hardy
11,667
<p><span class="math-container">$$ \frac \pi {\sin^2 \frac\pi x} &gt; \frac x {\tan \frac{\pi}{x}} $$</span> Let <span class="math-container">$u = \pi/x.$</span> Then <span class="math-container">$x&gt;2$</span> means <span class="math-container">$0&lt;u&lt;\pi/2.$</span> Then the inequality becomes <span class="math-container">$$ \frac u {\sin^2 u} &gt; \frac 1 {\tan u} \quad\text{for } 0&lt;u&lt;\frac \pi 2 $$</span> and then <span class="math-container">$$ u \tan u &gt; \sin^2 u \quad \text{for } 0&lt;u&lt;\frac\pi2. $$</span> Since <span class="math-container">$\sin u &lt; u &lt; \tan u$</span> for <span class="math-container">$0&lt;u&lt;\pi/2,$</span> the inequality follows.</p>
4,490,117
<blockquote> <p>(a) Let <span class="math-container">$f : [a, b] \to \mathbb{R}$</span> be continuous and suppose that <span class="math-container">$f(x) \gt 0$</span> for all <span class="math-container">$x$</span>. Show that there is some <span class="math-container">$L\gt 0$</span> such that <span class="math-container">$f(x) \ge L$</span> for all <span class="math-container">$ x \in [a, b]$</span>.</p> </blockquote> <blockquote> <p>(b) Give an example of a continuous function <span class="math-container">$f: \mathbb{R} \to \mathbb{R}$</span> satisfying <span class="math-container">$f(x) \gt 0$</span> for all <span class="math-container">$x$</span>, such that no <span class="math-container">$L\gt 0$</span> satisfies <span class="math-container">$f(x) \ge L$</span> for all <span class="math-container">$x$</span>.</p> </blockquote> <p>I was given these two problems together. For the first one, I could solve it easily by using the property that <span class="math-container">$f$</span> will attain its bounds in the given closed interval, and hence the minimum value will do the trick.</p> <p>But I can't prove (b) analytically.</p> <p>I thought of <span class="math-container">$f(x) = e^x$</span> and I know it will work but I can't prove it using any contradiction.</p> <p>Can I get some help please?</p>
blargoner
129,912
<p>I'd recommend <em>Understanding Numbers in Elementary School Mathematics</em> by <a href="https://math.berkeley.edu/%7Ewu/" rel="nofollow noreferrer">Hung-Hsi Wu</a>. Be sure to reference the errata listing on Wu's website.</p>
305,373
<p>I'm doing some self-studying on discrete mathematics and so far it's going well; however, I came upon a question that, whilst I can make sense of it and I know that set $A$ is a subset of set $B$, I cannot think of how to express, and the book does not feature an answer for this question.</p> <p>The question is, prove that $A$ is a subset of $B$, where</p> <p>$$A = \{ 2n \mid n\in\mathbb{Z}^+\},\quad B = \{ n \mid n\in\mathbb{Z}^+\}.$$</p> <p>I am somewhat at a loss as to the logic of this: both sets look related, as they both contain positive integers and $n$ is half of $2n$. But if we were to use numbers, $n = 1$ gives $2n = 2$, etc., so the two are not equal as they will never contain the same numbers. I believe I am thinking about this wrong but am somewhat at a loss with this simple question!</p>
Dominic Michaelis
62,278
<p>OK, if $A\subset B$ you have to show that $x\in A \implies x \in B$. $$x\in A \implies \exists n \in \mathbb{Z}^+: x=2 n$$ Because $2 \in \mathbb{Z}^+$ and $n \in \mathbb{Z}^+$, and $(\mathbb{Z}^+, \cdot)$ is a semigroup (you just need to say that when $a\in \mathbb{Z}^+$ and $b\in \mathbb{Z}^+$ then $a\cdot b\in \mathbb{Z}^+$), there exists an $m$ such that $2 n = m \in \mathbb{Z}^+$, and that is why $A\subset B$.</p>
2,653,713
<p>Let $\varphi:\mathbb{R}\rightarrow \mathbb{R}$ be a convex function. If $y&lt;\varphi(x)$ why does there exist a line through $(x,y)$ which lies strictly below the graph of $\varphi$? I ask because this is a step in the proof of Jensen's inequality for conditional expectation.</p>
g.kov
122,782
<p>Consider the point $(x_1,g(x_1))$. At this point the distance to the line $y=x$ is $d(x_1)=\exp(x_1)$. </p> <p><a href="https://i.stack.imgur.com/4rFBp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4rFBp.png" alt="enter image description here"></a></p> <p>As we can see, for every $x$ there are two suitable points,</p> <p>\begin{align} P_1&amp;=(x_1,x_1+\sqrt2\,d(x_1)) \\ \text{and }\quad P_2&amp;=(x_1,x_1-\sqrt2\,d(x_1)) , \end{align}<br> so there are at least two suitable continuous functions,</p> <p>\begin{align} g_1(x)&amp;=x+\sqrt2\,d(x) ,\\ g_2(x)&amp;=x-\sqrt2\,d(x) . \end{align} </p>
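The distance claim is easy to verify directly: for a point $(x_1, x_1\pm\sqrt2\,d(x_1))$ the distance to the line $y=x$ is $|\Delta y|/\sqrt2 = d(x_1)$. A small Python check with $d(x)=\exp(x)$ (my addition):

```python
import math

def dist_to_diagonal(x0, y0):
    # distance from the point (x0, y0) to the line y = x
    return abs(y0 - x0) / math.sqrt(2)

x1 = 0.5
d = math.exp(x1)
ok = (math.isclose(dist_to_diagonal(x1, x1 + math.sqrt(2) * d), d)
      and math.isclose(dist_to_diagonal(x1, x1 - math.sqrt(2) * d), d))
```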
1,255,215
<p>How to prove that:</p> <p>$$\cos{n\theta}=\cos^n{\theta}- \binom {n} {2}\cos^{n-2} \theta \cdot \sin^2 \theta+ \binom {n} {4}\cos^{n-4} \theta \cdot \sin^{4} \theta -\cdots$$</p> <p>$$\sin n\theta = \binom {n} {1}\cos^{n-1} \theta \cdot \sin \theta - \binom {n} {3}\cos^{n-3} \theta \cdot \sin^3\theta +\cdots$$</p> <p>I don't see how to start.</p>
L F
221,357
<p>$\textbf{Hint}$: $\cos^2(t)+\sin^2(t)=1$, so compute $x^2+y^2$ and you will get a function $y$ that depends on $x$; then apply the chain rule and don't forget the term $dx/dt$.</p>
1,255,215
<p>How to prove that:</p> <p>$$\cos{n\theta}=\cos^n{\theta}- \binom {n} {2}\cos^{n-2} \theta \cdot \sin^2 \theta+ \binom {n} {4}\cos^{n-4} \theta \cdot \sin^{4} \theta -\cdots$$</p> <p>$$\sin n\theta = \binom {n} {1}\cos^{n-1} \theta \cdot \sin \theta - \binom {n} {3}\cos^{n-3} \theta \cdot \sin^3\theta +\cdots$$</p> <p>I don't see how to start.</p>
Community
-1
<p>Work backwards from the desired endpoint. There's really only one thing to be done with $\tan(t-\frac\pi4)$, and that's applying the addition formula: $$ \tan(t-\tfrac\pi4) = \frac{\tan(t)-\tan(\tfrac\pi4)}{1+\tan(t)\tan(\tfrac\pi4)} = \frac{\tan(t)-1}{1+\tan(t)} $$ What could you do next with this, to make it look more like $\frac{\sin(t)-\cos(t)}{\sin(t)+\cos(t)}$?</p>
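A numeric spot check of both expressions at an arbitrary point (my addition):

```python
import math

t = 1.1
lhs = math.tan(t - math.pi / 4)
addition_formula = (math.tan(t) - 1) / (1 + math.tan(t))
target = (math.sin(t) - math.cos(t)) / (math.sin(t) + math.cos(t))
```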
4,632,869
<p>Suppose <span class="math-container">$\Omega \subset \mathbb{R}$</span> is a bounded domain. Then using Poincaré-Wirtinger, one can prove that for functions <span class="math-container">$u \in W^{1,2}(\Omega)$</span> there exists <span class="math-container">$C&gt;0$</span> such that</p> <p><span class="math-container">$$\|u\|_{L^{2}}^{2} \le C\|u'\|_{L^{2}}^{2} + C\left( \int u(x)~dx \right)^{2}. $$</span></p> <p><strong>Question:</strong> Suppose we have a function <span class="math-container">$u$</span> which satisfies</p> <p><span class="math-container">$$ \|u'\|_{L^{2}}^{2} + \left( \int u(x)~dx \right)^{2} \le C . $$</span></p> <p>Can we conclude that <span class="math-container">$u \in W^{1,2}(\Omega)$</span>? The reason I am not sure is because in order for us to use the first inequality we need that <span class="math-container">$u \in W^{1,2}(\Omega)$</span> which a-priori we don't have. I think the statement could be true and we can prove it by some density argument but I am not sure how to proceed.</p>
notpron
887,931
<p>If <span class="math-container">$u$</span> satisfies this second inequality, we should already have <span class="math-container">$u \in W^{1,2}(\Omega)$</span> or <span class="math-container">$u \in C^1(\Omega) \subseteq W^{1,2}(\Omega) $</span>, or else how do you define <span class="math-container">$u'$</span> inside the <span class="math-container">$L^2$</span> norm?</p>
2,862,166
<p>In a paper I've been studying it says:</p> <blockquote> <p>Let $x$ in the cone $\mathbb R_+^n$ of all vectors in $\mathbb R^n$ with nonnegative components ($n\in\mathbb N$)</p> </blockquote> <p>Can somebody tell me what this means, please? $\mathbb R_+^n$ should be $[0,\infty)\times\dots\times [0,\infty)$ ($n$ times), but I don't understand why <em>the cone $\mathbb R_+^n$</em>. Maybe <em>the cone $\mathbb R_+^n$</em> is different from $[0,\infty)\times\dots\times [0,\infty)$?</p>
nicomezi
316,579
<p>A subset $A$ of a vector space $X$ over a field $K$ with a notion of positivity is said to be a cone if, for all $x \in A$ and $\lambda&gt;0$, we have $\lambda x \in A$.</p> <p>The infinite version of the cone you probably have in mind satisfies this property if the vertex, or apex, is at the origin.</p>
2,862,166
<p>In a paper I've been studying it says:</p> <blockquote> <p>Let $x$ in the cone $\mathbb R_+^n$ of all vectors in $\mathbb R^n$ with nonnegative components ($n\in\mathbb N$)</p> </blockquote> <p>Can somebody tell me what this means, please? $\mathbb R_+^n$ should be $[0,\infty)\times\dots\times [0,\infty)$ ($n$ times), but I don't understand why <em>the cone $\mathbb R_+^n$</em>. Maybe <em>the cone $\mathbb R_+^n$</em> is different from $[0,\infty)\times\dots\times [0,\infty)$?</p>
M. Winter
415,941
<p>A <em>cone</em> is a set $C\subseteq\Bbb R^n$ with $x\in C\implies\alpha x\in C$ for all $\alpha\ge0$. The name is motivated by the usual geometric cones, but $\Bbb R^n_+ := [0,\infty)^n$ satisfies this too.</p> <p>See this picture of different cones in $\Bbb R^3$: two are familiar, and the rightmost one is $\Bbb R^n_+$.</p> <p><a href="https://i.stack.imgur.com/QN88M.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QN88M.png" alt="enter image description here"></a></p>
166,202
<p>Suppose $k(x,t)= \frac {1}{(4t)^{\frac{n}{2}}} \exp\left(\frac{-|x|^2}{4t}\right)$ is the fundamental solution of the heat equation. For $n \ge 3$, I would like to show that $\int_0^\infty k(x,t)\, dt$ is the fundamental solution of the Laplace equation. I would like some hints; I thought of integrating but don't know how to approach it. That is, I need to arrive at a form like $\frac{1}{B} \frac{1}{|x|^{n-2}}$, where $B$ is a constant depending on the measure of the space. Thank you. </p>
Davide Giraudo
9,849
<p>We use the substitution $s=\frac t{|x|^2}$ (then $dt=|x|^2ds$) to get \begin{align} \int_0^{+\infty}k(x,t)dt&amp;=\int_0^{+\infty}\frac 1{(4t)^{n/2}}\exp\left(-\frac{|x|^2}{4t}\right)dt\\ &amp;=\frac 1{4^{n/2}}\int_0^{+\infty}\frac 1{(s|x|^2)^{n/2}}\exp\left(-\frac 1{4s}\right)|x|^2ds\\ &amp;=\frac 1{4^{n/2}}|x|^{2-n}\int_0^{+\infty}\frac 1{s^{n/2}}\exp\left(-\frac 1{4s}\right)ds\\ &amp;=\frac 1{4^{n/2}}|x|^{2-n}\int_0^{+\infty}y^{n/2-2}\exp(-y/4)dy\\ &amp;=c_n|x|^{2-n}. \end{align} We have to show that $f\colon x\mapsto |x|^{2-n}=\left(\sum_{k=1}^nx_k^2\right)^{1-n/2}$ is harmonic. Let $j\in\{1,\dots,n\}$. We have $$\partial_jf(x)=\left(\sum_{k=1}^nx_k^2\right)^{-n/2}2x_j\left(1-\frac n2\right)$$ and $$\partial_{jj}f(x)=2\left(1-\frac n2\right)\left(\sum_{k=1}^nx_k^2\right)^{-n/2}-\left(1-\frac n2\right)nx_j\left(\sum_{k=1}^nx_k^2\right)^{-n/2-1}(2x_j).$$ Summing that, we get \begin{align} \Delta f(x)&amp;=\sum_{j=1}^n\partial_{jj}f(x)\\ &amp;=\left(1-\frac n2\right)\left(\sum_{k=1}^nx_k^2\right)^{-n/2-1}\left(2n|x|^2-n\cdot 2\cdot |x|^2\right)\\ &amp;=0. \end{align}</p>
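<p>As a numerical cross-check of the harmonicity computation (an illustration for $n=3$, so $f(x)=|x|^{2-n}=1/|x|$): a centered finite-difference Laplacian should vanish away from the origin.</p>

```python
import math

def f(x, y, z):
    return 1.0 / math.sqrt(x*x + y*y + z*z)  # |x|^{2-n} for n = 3

def discrete_laplacian(g, p, h=1e-3):
    """Standard 7-point finite-difference Laplacian at point p."""
    x, y, z = p
    return (g(x+h, y, z) + g(x-h, y, z)
          + g(x, y+h, z) + g(x, y-h, z)
          + g(x, y, z+h) + g(x, y, z-h) - 6*g(x, y, z)) / h**2

# Away from the singularity at 0 the Laplacian should be ~0 (up to O(h^2) error).
for p in [(1.0, 0.5, -0.7), (2.0, 1.0, 1.0), (-0.8, 1.3, 0.4)]:
    assert abs(discrete_laplacian(f, p)) < 1e-4
```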
101,256
<p>Can you please help me find an exact description of the set:</p> <p>$$ E_{R}=\{\cos{z} | z \in \mathbb{C}, |z|&gt;R\} $$</p> <p>for any $0&lt;R \in \mathbb{R}$.</p> <p>My feeling is that $E_R = \mathbb{C}$ for any $R$, but I don't know how to show it, if it's true.</p>
Julián Aguirre
4,791
<p>A different approach. Let $w\in\mathbb{C}$. We must find $z\in\mathbb{C}$ such that $|z|&gt;R$ and $$\cos z=\frac{e^{iz}+e^{-iz}}{2}=w\implies e^{2iz}-2\,w\,e^{iz}+1=0\ . $$ Solving for $e^{iz}$ we obtain $$e^{iz}=w\pm\sqrt{w^2-1}\ .$$ At least one of $w+\sqrt{w^2-1}$ or $w-\sqrt{w^2-1}$ is non-zero. Assume $w+\sqrt{w^2-1}\ne0$. Then $$ z=\frac1i\,\log(w+\sqrt{w^2-1})+2\,k\,\pi ,\quad k\in\mathbb{Z}, $$ where $\log$ is the principal branch of the logarithm. We can take $k$ large enough to have $|z|&gt;R$.</p>
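<p>The construction translates directly into code; a quick check with Python's <code>cmath</code>, whose <code>log</code> and <code>sqrt</code> are principal branches:</p>

```python
import cmath

def preimage(w, R):
    """Return z with cos(z) = w and |z| > R, following the construction above."""
    root = cmath.sqrt(w * w - 1)
    base = w + root if w + root != 0 else w - root
    z = cmath.log(base) / 1j          # principal branch of the logarithm
    k = 0
    while abs(z + 2 * cmath.pi * k) <= R:
        k += 1                        # shift by periods until |z| > R
    return z + 2 * cmath.pi * k

for w in (3 + 4j, -7.5, 0.1j, 1000 - 1j):
    z = preimage(w, R=50.0)
    assert abs(z) > 50.0
    assert abs(cmath.cos(z) - w) < 1e-6 * max(1.0, abs(w))
```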
4,023,342
<p>Blue: <span class="math-container">$x \ln x-x$</span></p> <p>Brown: <span class="math-container">$\ln \frac{\Gamma(x+1/2)}{\sqrt{\pi}}$</span></p> <p><a href="https://i.stack.imgur.com/CbVYi.png" rel="noreferrer"><img src="https://i.stack.imgur.com/CbVYi.png" alt="enter image description here" /></a></p>
5201314
869,482
<p>Using Stirling’s approximation for the Gamma function, <span class="math-container">$$\Gamma(z)\sim \sqrt{\frac{2\pi}{z}}\left(\frac{z}{e}\right)^z$$</span> In this case <span class="math-container">\begin{align*} \Gamma\left(x+\frac{1}{2}\right)&amp;\sim \sqrt{\frac{2\pi}{x+\frac{1}{2}}}\left(\frac{x+\frac{1}{2}}{e}\right)^{x+\frac{1}{2}}\\ \ln\Gamma\left(x+\frac{1}{2}\right)&amp;\sim \frac{1}{2}\ln 2+\ln \sqrt \pi -\frac{1}{2}\ln \left(x+\frac{1}{2}\right)+\left(x+\frac{1}{2}\right)\ln\left(x+\frac{1}{2}\right)-x-\frac{1}{2}\\ &amp;=\frac{1}{2}\ln 2+\ln \sqrt \pi +x\ln \left(x+\frac{1}{2}\right)-x-\frac{1}{2}\\ &amp;\sim \ln \sqrt\pi +x\ln x-x\\ \ln \left(\frac{\Gamma\left(x+\frac{1}{2}\right)}{\sqrt\pi}\right)&amp;\sim x\ln x-x \end{align*}</span></p>
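<p>With <code>math.lgamma</code> this can be quantified numerically: the ratio of the two curves tends to $1$ (the "$\sim$" above), while the vertical gap stays bounded, approaching the constant $\tfrac12\ln 2\approx 0.35$:</p>

```python
import math

def lhs(x):
    return math.lgamma(x + 0.5) - 0.5 * math.log(math.pi)  # ln(Gamma(x+1/2)/sqrt(pi))

def rhs(x):
    return x * math.log(x) - x

# Ratio -> 1 ...
for x in (10.0, 100.0, 1000.0):
    assert abs(lhs(x) / rhs(x) - 1) < 0.05
# ... while the additive gap approaches (1/2) ln 2.
assert abs((lhs(1e6) - rhs(1e6)) - 0.5 * math.log(2)) < 1e-3
```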
3,239,689
<p><span class="math-container">$$n\in\mathbb{N}^{*}; S_{n}=\sum_{k=1}^{n}k!(k^2+1)$$</span></p> <p>I need to find <span class="math-container">$S_n$</span></p> <p>I started like this: <span class="math-container">$S_{n}=\sum_{k=1}^{n}(k+2)!-3(k+1)!+2k!$</span></p> <p>How to continue?I tried to give the k values but the terms don't vanish.</p>
J.G.
56,861
<p>Next we use <span class="math-container">$$S_n=\sum_{k=1}^n\left\{[(k+2)!-(k+1)!]-2[(k+1)!-k!]\right\},$$</span>which telescopes to <span class="math-container">$$(n+2)!-2-2((n+1)!-1)=(n+2)!-2\cdot(n+1)!=n\cdot (n+1)!$$</span></p>
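<p>A brute-force check of the closed form $S_n = n\cdot(n+1)!$:</p>

```python
import math

def S(n):
    """Direct evaluation of the sum of k!(k^2 + 1) for k = 1..n."""
    return sum(math.factorial(k) * (k * k + 1) for k in range(1, n + 1))

# Compare against the telescoped closed form n * (n+1)!.
for n in range(1, 12):
    assert S(n) == n * math.factorial(n + 1)
```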
3,239,689
<p><span class="math-container">$$n\in\mathbb{N}^{*}; S_{n}=\sum_{k=1}^{n}k!(k^2+1)$$</span></p> <p>I need to find <span class="math-container">$S_n$</span></p> <p>I started like this: <span class="math-container">$S_{n}=\sum_{k=1}^{n}(k+2)!-3(k+1)!+2k!$</span></p> <p>How to continue?I tried to give the k values but the terms don't vanish.</p>
Hagen von Eitzen
39,174
<p>So far you have arrived at <span class="math-container">$$\begin{array}{ccccccccc} S_n=&amp;&amp;&amp;&amp;\hphantom{+}3!&amp;+4!+\ldots+n!&amp;+(n+1)!&amp;+(n+2)!\\ &amp;-3(&amp;&amp;\hphantom{+}2!&amp;+3!&amp;+4!+\ldots +n!&amp;+(n+1)!&amp;&amp;)\\ &amp;+2(&amp;1!&amp;+2!&amp;+3!&amp;+4!+\ldots +n!&amp;&amp;&amp;)\end{array}$$</span> Can you simplify?</p>
2,996,640
<p>Maybe trivial for number theorists, but not for me: is the title meaningful to ask for (<span class="math-container">$p$</span> is a prime)? If so, what's the answer? Thanks</p>
Keen-ameteur
421,273
<p>You have to see how the set <span class="math-container">$\{ (x,y): x&gt;y \}$</span> intersects with <span class="math-container">$\{(x,y): x&gt;0, y&lt;1 \}$</span>. The intersection can be written as:</p> <p><span class="math-container">$D_1\sqcup D_2$</span>, where <span class="math-container">$D_1:= \{ (x,y): x&gt;y, 0&lt;x&lt;1, y&lt;1\}$</span> and <span class="math-container">$D_2:=\{ (x,y): x\geq 1, y&lt;1 \}$</span>.</p> <p>By additivity you then have to calculate:</p> <p><span class="math-container">$\int_{D_1}f(x,y)dydx+ \int_{D_2}f(x,y)dydx$</span></p> <p>Notice that for a rectangle <span class="math-container">$D=\{ (x,y) \in \mathbb{R}^2: a\leq x\leq b, c\leq y \leq d \}$</span>, we have:</p> <p><span class="math-container">$\int_D f(x,y)dx dy= \int_a^b \Big( \int_c^d f(x,y) dy \Big) dx= \int_c^d \Big( \int_a^b f(x,y) dx \Big) dy$</span></p> <p>In our case:</p> <p><span class="math-container">$\int_{D_2} f(x,y)dx dy= \int_1^\infty \Big( \int_{-\infty}^1 f(x,y) dy \Big) dx$</span></p> <p>And for a set <span class="math-container">$C=\{ (x,y)\in \mathbb{R}^2: a\leq x\leq b, g_1(x) \leq y\leq g_2(x) \}$</span>, where <span class="math-container">$g_1,g_2$</span> are functions of <span class="math-container">$x$</span>, we have:</p> <p><span class="math-container">$\int_Cf(x,y)dxdy= \int_a^b \Big( \int_{g_1(x)}^{g_2(x)} f(x,y)dy \Big)dx$</span></p> <p>In our case:</p> <p><span class="math-container">$\int_{D_1}f(x,y)dydx = \int_0^1 \Big( \int_{-\infty}^x f(x,y) dy \Big)dx$</span></p> <p>I decomposed the intersection into sets on which the upper bound is either a constant or an explicit function of <span class="math-container">$x$</span>. 
Were you to impose more conditions, say for example <span class="math-container">$Y&lt;-X+3$</span> and <span class="math-container">$Y&gt;-100$</span>, then I would have to consider the set:</p> <p><span class="math-container">$\{ (x,y): x&gt;y, x&gt;0 , y&lt;1, y&lt;-x+3, y&gt;-100 \}$</span></p> <p>and decompose it into sets where the upper and lower bounds are explicit functions. Observing this set in <span class="math-container">$\mathbb{R}^2$</span> and slicing it with respect to the '<span class="math-container">$x$</span>' axis gives a natural decomposition:</p> <p><span class="math-container">$E_1:= \{ (x,y): 0&lt;x&lt;1, -100&lt;y&lt;x \}$</span>, <span class="math-container">$E_2:=\{ (x,y): 1 \leq x \leq 2, -100&lt;y&lt;1 \}$</span> and <span class="math-container">$E_3:=\{ (x,y): 2&lt;x&lt;103, -100&lt;y&lt;-x+3 \}$</span></p> <p>Then sum the integrals over each set.</p>
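<p>The decomposition into $D_1\sqcup D_2$ can be sanity-checked numerically with a concrete test integrand; my choice of $f(x,y)=e^{-x-y^2}$ and the truncation box are illustrative assumptions, not from the answer.</p>

```python
import math

def f(x, y):
    return math.exp(-x - y * y)   # an arbitrary integrable test function

h, L = 0.02, 6.0                  # grid step; truncate the unbounded directions at L

# Direct midpoint Riemann sum over the region {x > y, x > 0, y < 1}.
direct = 0.0
for i in range(int(L / h)):
    x = (i + 0.5) * h
    for j in range(int(2 * L / h)):
        y = -L + (j + 0.5) * h
        if y < 1 and x > y:
            direct += f(x, y) * h * h

# Same region written as D1 u D2: for each x the inner integral runs up to min(x, 1).
split = 0.0
for i in range(int(L / h)):
    x = (i + 0.5) * h
    y_hi = min(x, 1.0)
    for j in range(int(2 * L / h)):
        y = -L + (j + 0.5) * h
        if y < y_hi:
            split += f(x, y) * h * h

assert abs(direct - split) < 1e-6
```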
1,483,343
<p>For the equation $$ x_1+x_2+x_3 = 15 $$ find the number of positive integer solutions under the conditions: $$ x_1&lt;6, x_2 &gt; 6 $$ Let $y_1 = x_1, y_2 = x_2 - 6, y_3 = x_3$; then, to solve the problem, the equation $y_1+y_2 +y_3 = 9$ where $y_1 &lt; 6, 0&lt;y_2, 0&lt;y_3 $ has to be solved. <strong>Is this correct?</strong></p> <p>To solve this equation by <strong>inclusion-exclusion</strong>, the number of solutions without the restriction has to be found, $\binom{3+9-1}{9}$, and this value should be reduced by $\binom{3+9-7-1}{2}$ (as the negation of $y_1 &lt; 6$ is $y_1 \geq 7$). Thus: $$ 55-6=49 $$ Is this the correct answer? <br> <strong>The problem must be solved using inclusion-exclusion...</strong></p>
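<p>Before trusting a stars-and-bars count, a brute-force enumeration is cheap; under the strict reading "positive integers" (every $x_i \ge 1$) it gives:</p>

```python
# Brute-force count of positive integer solutions of x1 + x2 + x3 = 15
# with x1 < 6 and x2 > 6 (strict reading: every x_i >= 1).
count = sum(1
            for x1 in range(1, 16)
            for x2 in range(1, 16)
            for x3 in range(1, 16)
            if x1 + x2 + x3 == 15 and x1 < 6 and x2 > 6)
print(count)  # -> 25 under this reading
```

If the $x_i$ were allowed to be $0$, the count would change, so that convention is worth fixing before applying stars and bars.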
Community
-1
<p>There is no infinite descending sequence of natural numbers. That is, there is no sequence of natural numbers $$ n_0 &gt; n_1 &gt; n_2 &gt; n_3 \ldots .$$ Intuitively, that fact is obvious: $n_0$ is some finite number, and you can't start at some finite number and descend infinitely. More formally, you can show the following by induction:</p> <blockquote> <p>For all natural numbers $n_0$, $n_0$ can't start an infinite descending sequence.</p> </blockquote> <p><strong>Base case:</strong> $n_0 = 0$. There is no natural number less than 0, so there is no possible choice of $n_1 &lt; n_0$ to continue the sequence.</p> <p><strong>Inductive case:</strong> assume we know that, for all $n &lt; n_0$, $n$ can't start an infinite descending sequence. Then $n_0$ can't start one either: if there were a sequence $n_0 &gt; n_1 &gt; n_2 &gt; \ldots$, we would have an infinite descending sequence $n_1 &gt; n_2 &gt; n_3 &gt; \ldots$ starting at $n_1 &lt; n_0,$ contradicting the hypothesis.</p> <p>We've now shown the claim. From here, we see that you can't have a function $f : \mathbb{N} \to \mathbb{N}$ such that $f(n) &gt; f(n+1)$, since it would give a sequence $$f(0) &gt; f(1) &gt; f(2) &gt; f(3) &gt; \ldots,$$ which we know can't happen.</p>
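<p>The induction above also bounds how long any strictly decreasing run can be: starting from $n_0$ it has at most $n_0+1$ terms. A greedy random descent illustrates this:</p>

```python
import random

def random_descent(n0, rng):
    """Greedily extend a strictly decreasing sequence of naturals from n0."""
    seq = [n0]
    while seq[-1] > 0:
        seq.append(rng.randrange(seq[-1]))   # any natural strictly below the last term
    return seq

rng = random.Random(0)
for n0 in (0, 1, 5, 100, 10_000):
    for _ in range(20):
        seq = random_descent(n0, rng)
        assert all(a > b for a, b in zip(seq, seq[1:]))  # strictly decreasing
        assert len(seq) <= n0 + 1   # every descent terminates within n0 steps
```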
150,518
<blockquote> <p>Analyze the convergence behavior of the following series: $$\sum_{k=0}^{\infty}\frac{x^{2k}}{2^{2k}}-\frac{x^{2k+1}}{3^{2k+1}}.$$</p> </blockquote> <p>I came across this problem as I was preparing for an exam. It is supposed to be an easy one but I am not sure which test to apply.</p>
ncmathsadist
4,154
<p>Break it in two at the minus sign. Then examine the pieces using the root or ratio test. More simply, realize that both series are geometric, and a geometric series converges when the common ratio is less than 1 in absolute value.</p>
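<p>The two pieces are geometric with ratios $x^2/4$ and $x^2/9$, so the series converges exactly for $|x|&lt;2$, with sum $\frac{1}{1-x^2/4}-\frac{x/3}{1-x^2/9}$; a numerical check of the partial sums:</p>

```python
def partial_sum(x, n):
    # k-th term: x^{2k}/2^{2k} - x^{2k+1}/3^{2k+1}, rewritten to avoid overflow
    return sum((x * x / 4) ** k - (x / 3) * (x * x / 9) ** k for k in range(n))

def closed_form(x):
    return 1 / (1 - x * x / 4) - (x / 3) / (1 - x * x / 9)

# Converges for |x| < 2 ...
for x in (0.0, 1.0, -1.5, 1.9):
    assert abs(partial_sum(x, 2000) - closed_form(x)) < 1e-6
# ... while at x = 2 the k-th term tends to 1, so the series cannot converge.
assert abs((2 * 2 / 4) ** 500 - (2 / 3) * (2 * 2 / 9) ** 500 - 1.0) < 1e-12
```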
2,309,986
<p>This question is from the text <em>Geometry and Complexity Theory</em>, from J.M. Landsberg. Before starting to talk about the question, I think it is good to show the definition of border rank. Consider a tensor $T \in \mathbb{C}^r \otimes \mathbb{C}^r \otimes \mathbb{C}^r$. Then the <em>border rank</em> of $T$ is the minimum $r$ such that $T$ can be arbitrarily approximated by tensors of rank $r$. </p> <p>In this discussion we assume $r &gt; 1$ is an integer. In general, the limit of tensors of rank $r$ may not be a tensor of rank $r$. An example is given in the text (I changed the notation a little):</p> <p>Let $x_i, y_i \in \mathbb{C}^r$ be linearly independent vectors, for $i = 1,2,3$. Then the sequence of rank 2 tensors</p> <p>$$T_n = n\left( x_1 + \frac{1}{n} y_1 \right) \otimes \left( x_2 + \frac{1}{n} y_2 \right) \otimes \left( x_3 + \frac{1}{n} y_3 \right) - nx_1 \otimes x_2 \otimes x_3 $$</p> <p>converges to the rank 3 tensor</p> <p>$$T = x_1 \otimes x_2 \otimes y_3 + x_1 \otimes y_2 \otimes x_3 + y_1 \otimes x_2 \otimes x_3. $$</p> <p>The author says that $T$ is a rank $3$ tensor with border rank $2$. With this example in mind I tried to come up with the following idea: </p> <p>Let $x_i, y_i \in \mathbb{C}^r$ be linearly independent vectors, for $i = 1, \ldots, r$. Consider the sequence</p> <p>$$T_n = n\left( x_1 + \frac{1}{n} y_1 \right) \otimes \left( x_2 + \frac{1}{n} y_2 \right) \otimes \left( x_3 + \frac{1}{n} y_3 \right) - nx_1 \otimes x_2 \otimes x_3 + $$ $$ + n\left( x_2 + \frac{1}{n} y_2 \right) \otimes \left( x_3 + \frac{1}{n} y_3 \right) \otimes \left( x_4 + \frac{1}{n} y_4 \right) - nx_2 \otimes x_3 \otimes x_4 + \ldots$$ $$ \ldots + n\left( x_{\frac{r}{2}} + \frac{1}{n} y_{\frac{r}{2}} \right) \otimes \left( x_{\frac{r}{2}+1} + \frac{1}{n} y_{\frac{r}{2}+1} \right) \otimes \left( x_{\frac{r}{2}+2} + \frac{1}{n} y_{\frac{r}{2}+2} \right) - nx_{\frac{r}{2}} \otimes x_{\frac{r}{2}+1} \otimes x_{\frac{r}{2}+2}$$</p> <p>for $r$ even. 
Each $T_n$ is a sum of $r$ rank 1 tensors, so $T_n$ has rank $r$. Furthermore, we have that $T_n$ converges to </p> <p>$$T = x_1 \otimes x_2 \otimes y_3 + x_1 \otimes y_2 \otimes x_3 + y_1 \otimes x_2 \otimes x_3 + $$ $$ + x_2 \otimes x_3 \otimes y_4 + x_2 \otimes y_3 \otimes x_4 + y_2 \otimes x_3 \otimes x_4 + \ldots $$ $$\ldots + x_{\frac{r}{2}-2} \otimes x_{\frac{r}{2}-1} \otimes y_\frac{r}{2} + x_{\frac{r}{2}-2} \otimes y_{\frac{r}{2}-1} \otimes x_\frac{r}{2} + y_{\frac{r}{2}-2} \otimes x_{\frac{r}{2}-1} \otimes x_\frac{r}{2}$$</p> <p>which is a sum of $3(\frac{r}{2}-2)$ rank 1 tensors, so $T$ has rank $3(\frac{r}{2}-2)$.</p> <p>I have some concerns about this "solution", which I'm going to list here.</p> <p><strong>1)</strong> I'm not sure if the tensors $T_n$ really have rank $r$; maybe there is a way to reduce the number of terms which I'm not aware of. The same goes for $T$. </p> <p><strong>2)</strong> In order to have $3(\frac{r}{2} - 2) &gt; r$ we need $r &gt; 12$. This restriction indicates my solution is wrong in some way. </p> <p><strong>3)</strong> Finally, to work with $r$ odd I need to add one last term in an artificial way. This also makes me think this whole idea is not good.</p> <p>Well, I need some directions here. To be honest I'm new to the study of tensors, so any help with nice explanations is very welcome!</p> <p>Thank you! </p>
Zach Teitler
343,280
<p>Let $V = \mathbb{C}^r$ have basis $\{x_1,\dotsc,x_r\}$. Consider the tensor $$ \begin{split} T &amp;= x_1^2 x_2 + x_3^3 + x_4^3 + \dotsb + x_r^3 \\ &amp;= x_1 \otimes x_1 \otimes x_2 + x_1 \otimes x_2 \otimes x_1 + x_2 \otimes x_1 \otimes x_1 + x_3 \otimes x_3 \otimes x_3 + \dotsb + x_r \otimes x_r \otimes x_r . \end{split} $$ where I have used polynomial (symmetric tensor) notation, I hope it is clear how to make it correspond with usual tensors.</p> <p>As a tensor, this is written as a sum involving $r+1$ terms ($3$ for the first $2$ basis elements, plus $r-2$ more for $x_3,\dotsc,x_r$). Therefore $T$ has rank less than or equal to $r+1$.</p> <p>I claim that $T$ has rank equal to $r+1$ and border rank equal to $r$.</p> <p>First, $$ T = \lim_{t \to 0} \frac{(x_1+t x_2)^3-x_1^3}{3t} + x_3^3 + \dotsb + x_n^3 $$ (again using polynomial notation for convenience), which shows that $T$ has border rank less than or equal to $r$.</p> <p>A lower bound for border rank is not too hard. For example there is a lower bound given by flattening rank, see <a href="https://mathoverflow.net/questions/280554/is-a-flattening-rank-a-lower-bound-for-the-border-rank">https://mathoverflow.net/questions/280554/is-a-flattening-rank-a-lower-bound-for-the-border-rank</a>. To be a little bit explicit, we get a map from, say, $V^* \otimes V^* \to V$. Say $V^*$ has dual basis $\{y_1,\dotsc,y_r\}$. The induced map $V^*\otimes V^* \to V$ (also denoted $T$ by slight abuse of notation) is given by $$ T(y_i \otimes y_j) = x_1(x_1(y_i)\cdot x_2(y_j)) + x_1(x_2(y_i)\cdot x_1(y_j)) + \dotsb + x_r(x_r(y_i) \cdot x_r(y_j)) . $$ So $T(y_1 \otimes y_2) = x_1$, $T(y_1 \otimes y_1) = x_2$, and for $3 \leq i \leq r$, $T(y_i \otimes y_i) = x_i$. This shows that this induced map is onto. So it has rank $r$. Therefore the tensor $T$ has border rank greater than or equal to $r$, see the linked MO answer for an explanation why.</p> <p>Here is one idea for how to get a lower bound for the tensor rank of $T$. 
First observe that the tensor given by the first three terms of $T$ (corresponding to $x_1^2 x_2$) is precisely the first example you started with, so it has rank $3$. (One way to see that it has rank $3$ is here <a href="https://math.stackexchange.com/questions/1905543/find-the-rank-of-the-tensor">Find the rank of the tensor</a>.) And by a special case of Strassen's conjecture, each time we add a term $x_i \otimes x_i \otimes x_i$, with $x_i$ linearly independent from all the previously used vectors, then we in fact add $1$ to the rank. Unfortunately I am having trouble finding a citation for that at the moment. Please let me know if you're still interested, I will be happy to look again.</p>
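<p>The flattening argument can be checked mechanically for a small case; the sketch below (exact rational arithmetic, with $r=4$ as an illustrative choice) builds the $r^2\times r$ flattening of $T$ and row-reduces it:</p>

```python
from fractions import Fraction
from itertools import product

r = 4
e = [[Fraction(int(i == j)) for j in range(r)] for i in range(r)]  # standard basis

def rank1(a, b, c):
    """Rank-one tensor a (x) b (x) c as a nested list."""
    return [[[a[i] * b[j] * c[k] for k in range(r)] for j in range(r)] for i in range(r)]

def add(T, U):
    return [[[T[i][j][k] + U[i][j][k] for k in range(r)] for j in range(r)] for i in range(r)]

# T = x1^2 x2 + x3^3 + x4^3, written out as a (symmetrised) tensor.
T = rank1(e[0], e[0], e[1])
for a, b, c in [(e[0], e[1], e[0]), (e[1], e[0], e[0]), (e[2], e[2], e[2]), (e[3], e[3], e[3])]:
    T = add(T, rank1(a, b, c))

# Flattening V* (x) V* -> V: rows indexed by (i, j), columns by k.
M = [[T[i][j][k] for k in range(r)] for i, j in product(range(r), repeat=2)]

def matrix_rank(M):
    """Rank by Gaussian elimination over the rationals."""
    M = [row[:] for row in M]
    rank = 0
    for col in range(len(M[0])):
        pivot = next((i for i in range(rank, len(M)) if M[i][col] != 0), None)
        if pivot is None:
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        for i in range(len(M)):
            if i != rank and M[i][col] != 0:
                f = M[i][col] / M[rank][col]
                M[i] = [x - f * y for x, y in zip(M[i], M[rank])]
        rank += 1
    return rank

assert matrix_rank(M) == r   # so the border rank of T is >= r
```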
240,620
<p>I would like to practice some recreational probability (puzzles).</p> <p>Do any of you know a good collection? Preferably one with hints or answers.</p> <p>I've been studying quite a bit of probability theory, but I don't expect that will do me much good here.</p> <p>Thanks in advance,</p>
Mehness
226,027
<p>I definitely agree that Mosteller's book is excellent. In case you exhaust that, there are also quite a few problems in the recreational vein in: Challenging Mathematical Problems with Elementary Solutions, Volume I: Combinatorial Analysis and Probability Theory.</p> <p><a href="http://www.amazon.co.uk/Challenging-Mathematical-Problems-Elementary-Solutions/dp/0486655369" rel="nofollow">http://www.amazon.co.uk/Challenging-Mathematical-Problems-Elementary-Solutions/dp/0486655369</a></p> <p>I have just bought this myself, actually, for some recreational amusement.</p> <p>Possibly the ultimate compendium of problems is Grimmett and Stirzaker's volume:</p> <p><a href="http://www.amazon.co.uk/Thousand-Exercises-Probability-Geoffrey-Grimmett/dp/0198572212/ref=sr_1_1?s=books&amp;ie=UTF8&amp;qid=1427198901&amp;sr=1-1&amp;keywords=one+thousand+exercises+in+probability" rel="nofollow">http://www.amazon.co.uk/Thousand-Exercises-Probability-Geoffrey-Grimmett/dp/0198572212/ref=sr_1_1?s=books&amp;ie=UTF8&amp;qid=1427198901&amp;sr=1-1&amp;keywords=one+thousand+exercises+in+probability</a></p> <p>If you've done all of those, you've pretty much done it all!</p> <p>Good luck.</p>
139,818
<p>I'm currently working with large datasets of "discrete data". Each of these is a <em>multiset</em> of real numbers of the form $kS + B$, where $k$ is an integer, and $S$ (the <em>"scale</em>") and $B$ (the <em>"bias"</em>) are real values.</p> <p>These datasets are such that it is often economical to represent them as "tallies":</p> <pre><code>dd = &lt;| "scale" -&gt; 1.234, "bias" -&gt; 5.678, "tally" -&gt; {{-5, 2}, {-4, 251}, {-3, 5941}, {-2, 60383}, {-1, 241185}, { 0, 383613}, { 1, 241644}, { 2, 61035}, { 3, 5686}, { 4, 259}, { 5, 1}} |&gt; </code></pre> <p>The <code>"scale"</code> and <code>"bias"</code> entries are $S$ and $B$, as described already. The <code>"tally"</code> entry is a list of pairs $\{k, n\}$, where $n$ is the number of times that $kS + B$ appears in the data.</p> <p>(The example above is just a toy; in practice, the <code>"tally"</code> elements are much longer, but still far more compact than the simple <code>List</code> representation of the data.)</p> <p>One can recover the fully expanded representation from the tally representation with the function</p> <pre><code>(#[["bias"]] + #[["scale"]] Flatten[ ConstantArray @@@ #[["tally"]] ]) &amp; </code></pre> <hr> <p>Even though the tally representation is not a sparse array, it would be nice to give it an interface similar to that of a <code>SparseArray</code> object.</p> <p>The most important feature of such an interface is <em>the ability to behave correctly in any <strong>context</strong> where normally a list would be expected.</em></p> <p>For example, it would be nice if <code>Length[dd]</code> automatically behaved like <code>Plus @@ (Last /@ dd[["tally"]])</code> when given a discrete data object <code>dd</code> as argument.</p> <p>And more generally, if a list is expected, and no special method is available (like the one for <code>Length</code> above), the discrete data object would be automatically expanded to its full list form.</p> <p>How does one do this in 
<em>Mathematica</em>?</p> <hr> <p>Python, for one, solves this problem by specifying a set of standard methods that will get called on an object when it is in one of a corresponding set of contexts. For example, if for some Python object <code>x</code> the method <code>__len__</code> exists, then the value of the expression <code>len(x)</code> will evaluate to whatever <code>x.__len__()</code> returns. In contexts where <code>x</code> is expected to be iterated over, <code>x.__iter__</code> will be invoked. Etc.</p> <p>Therefore, by implementing a few standard methods (like <code>__len__</code> and <code>__iter__</code>), one can define new classes that can be used like the standard classes (lists, dicts, etc) even if they have completely different internal implementations.</p> <p>Is there something like this in <em>Mathematica</em>?</p>
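<p>For reference, the tally bookkeeping itself is language-agnostic; a rough Python analogue (the names and toy data are illustrative, not from the question) of the expansion and of length-without-expansion:</p>

```python
dd = {
    "scale": 1.234,
    "bias": 5.678,
    "tally": [(-2, 3), (-1, 10), (0, 17), (1, 9), (2, 2)],   # toy data
}

def expand(dd):
    """Recover the full multiset: each value k*scale + bias repeated n times."""
    s, b = dd["scale"], dd["bias"]
    return [k * s + b for k, n in dd["tally"] for _ in range(n)]

def tally_len(dd):
    """Length without expanding: just sum the counts."""
    return sum(n for _, n in dd["tally"])

assert len(expand(dd)) == tally_len(dd) == 41
```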
Leonid Shifrin
81
<h2>General</h2> <p>A late-to-the-party post. To complement other answers, which already show the most important ingredients for constructing data types with overloaded system functions, I'd like to show an approach that follows the same ideas a couple of steps further. </p> <p>While <code>UpValues</code> are the tool to use, one problem with them is the lack of control or introspection: they leave no middle ground between the functions they overload and actual implementations for a specific data type. In some cases, more control over the overloaded behavior might be beneficial.</p> <p>The examples below are based on the code of a tiny framework, which has a more didactic goal and is not meant to be directly production-usable. That code is at the end of this post.</p> <h2>Defining an interface</h2> <h3>Preliminaries</h3> <p>For the purposes of this example, we will define an interface <code>ListObject</code>, which should support versions of <code>Length</code>, <code>Normal</code>, <code>Part</code>, <code>Take</code>, <code>Drop</code>, <code>Map</code> and <code>Select</code>, as well as a type-testing predicate. The actual objects implementing this interface will look like</p> <pre><code>ListObject[objectContainer[inner-data]] </code></pre> <p>One observation here is that the operations one can perform on a list-like object can be broadly divided into non-terminal (resulting in an object of the same type) and terminal (resulting in anything else). 
I have wired this notion into the example framework.</p> <p>So, assuming that code below (at the end of the post) has been executed, we define, as our first step:</p> <pre><code>Scan[SetTerminal, {Length, Normal}]; Scan[SetNonTerminal, {Part, Take, Drop, Map, Select}] </code></pre> <h3><code>ListObject</code> interface</h3> <p>We now define the interface symbol and testing predicate:</p> <pre><code>ClearAll[ListObject]; ListObject::notimpl = "The function `1` has not been implemented for backend `2`"; ListObject::badargs = "Function `1` received bad number and / or types of arguments: `2`"; ClearAll[ListObjectQ]; ListObjectQ[ListObject[backend_Symbol[___]]]:=True; ListObjectQ[_]:=False; </code></pre> <p>The following code sets generic definitions for the interface functions:</p> <pre><code>Scan[ SetGeneralDef[ListObject], {ListLength, ListNormal, ListPart, ListTake, ListDrop, ListSelect} ]; Scan[SetGeneralDef[ListObject][#, 2]&amp;, {ListMap}]; </code></pre> <p>where new symbols starting with <code>List</code> are part of the <code>ListObject</code> interface.</p> <p>Next, we define the dispatch:</p> <pre><code>Def[ListObject] @ Length @ ListObject -&gt; ListLength Def[ListObject] @ Normal @ ListObject -&gt; ListNormal (* Force terminal behavior on Part[lst, _Integer] *) Def[ListObject, False] @ Part[ListObject, _Integer] -&gt; ListPart Def[ListObject] @ Part[ListObject, __] -&gt; ListPart Def[ListObject] @ Take[ListObject, __] -&gt; ListTake Def[ListObject] @ Drop[ListObject, __] -&gt; ListDrop Def[ListObject] @ Map[_, ListObject, ___] -&gt; ListMap Def[ListObject] @ Select[ListObject, __] -&gt; ListSelect </code></pre> <p>At this point, we are done defining an interface.</p> <h3>Notes</h3> <p>The above uses syntactic sugar, defined in the code below. 
What really happens is that definitions have been generated, which you can test by executing</p> <pre><code>?ListObject </code></pre> <p>For example, here is how a definition for <code>Map</code> may look:</p> <blockquote> <pre><code>Map[ff$_,ListObject[backend$_],rr$:PatternSequence[___]]^:= ListObject[With[{type$=ContainerType[backend$]}, Postprocessor[type$,Map,ListObject][ListMap[ Preprocessor[type$,Map,ListObject][ff$,backend$,rr$] ]]]] </code></pre> </blockquote> <p>These definitions connect <code>Map</code> called on <code>ListObject[type[contents]]</code> with the internal symbol <code>ListMap</code>, and also add pre- and post-processors.</p> <h2>Adding a sample implementation</h2> <p>As an example, let us define a list-like object that would implement certain operations in a lazy fashion (very simplistically, for the sake of this example). Basically, the list object will look like </p> <pre><code>MyList[data, functions] </code></pre> <p>where <code>data</code> is the original data (a list), and <code>functions</code> is an (initially empty) list representing a queue of functions to be executed on the original data. 
Most interface functions will add some data transformation to that queue (rather than actually doing the computation), and when one calls <code>Normal</code>, only then all the accumulated functions in the queue get applied to data, in the order they stand in <code>functions</code> list.</p> <h3>Implementation</h3> <p>Here is the code:</p> <pre><code>ClearAll[MyList]; ApplyDelayed[f_, MyList[data_List, funs_List:{}]]:= MyList[data, Prepend[funs, f]]; MyList /: ListNormal[MyList[data_List, funs_List:{}]]:= Apply[Composition, funs] @ data; MyList /: ListLength[lst_MyList]:=Length @ ListNormal @ lst; MyList /: ListPart[MyList[data_List, funs_List:{}], p_Integer]:= Apply[Composition, funs] @ data[[p]]; MyList /: ListPart[lst_MyList, spec__]:= ApplyDelayed[Part[#, spec]&amp;, lst]; MyList /: ListTake[lst_MyList, spec_]:= ApplyDelayed[Take[#, spec]&amp;, lst]; MyList /: ListDrop[lst_MyList, spec_]:= ApplyDelayed[Drop[#, spec]&amp;, lst]; MyList /: ListMap[f_, lst_MyList]:=ApplyDelayed[Map[f], lst]; MyList /: ListSelect[lst_MyList, pred_]:= ApplyDelayed[Select[pred], lst]; </code></pre> <h3>Examples</h3> <p>Here, for example, we square the list, and then select odd elements. 
You can see that function application simply adds operations to the queue, and everything gets evaluated only when we call <code>Normal</code>:</p> <pre><code>Select[Map[#^2 &amp;, ListObject[MyList[Range[10]]]], OddQ] Normal @ Select[Map[#^2 &amp;, ListObject[MyList[Range[10]]]], OddQ] (* ListObject[MyList[{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}, {Select[OddQ], Map[#1^2 &amp;]}]] {1, 9, 25, 49, 81} *) </code></pre> <p>A somewhat larger example:</p> <pre><code>Take[Drop[Select[Map[#^2 &amp;, ListObject[MyList[Range[20]]]], OddQ], 2], 5] (* ListObject[ MyList[ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20}, {Take[#1, 5] &amp;, Drop[#1, 2] &amp;, Select[OddQ], Map[#1^2 &amp;]} ] ] *) </code></pre> <hr> <p>Here is one way to illustrate the work of the post-processor: the code below redefines the default (trivial) post-processor for all implemented functions for type <code>MyList</code>:</p> <pre><code>MyList /: Postprocessor[MyList, f_, _] := Function[Print[HoldForm[f -&gt; #]]; #] </code></pre> <p>We can now execute the same code:</p> <pre><code>Normal @ Select[Map[#^2 &amp;, ListObject[MyList[Range[10]]]], OddQ] </code></pre> <blockquote> <pre><code>During evaluation of In[605]:= Map-&gt;MyList[{1,2,3,4,5,6,7,8,9,10},{Map[#1^2&amp;]}] During evaluation of In[605]:= Select-&gt;MyList[{1,2,3,4,5,6,7,8,9,10},{Select[OddQ],Map[#1^2&amp;]}] During evaluation of In[605]:= Normal-&gt;{1,9,25,49,81} </code></pre> </blockquote> <pre><code>(* {1, 9, 25, 49, 81} *) </code></pre> <p>We now remove the extra definition we added:</p> <pre><code>MyList /: Postprocessor[MyList, f_, _] =. </code></pre> <p>One can (re) define post-processors more granularly, on the level of individual functions, for example:</p> <pre><code>ListMap /: Postprocessor[MyList, ListMap, _]:=... </code></pre> <p>One can obviously do more interesting things than just printing. 
For example, catching some errors / exceptions, or inserting non-trivial data post-processing.</p> <hr> <p>Let's now see what happens if we try using some types, unknown to the system:</p> <pre><code>Normal[ListObject[UnknownList[stuff]]] </code></pre> <blockquote> <pre><code>During evaluation of In[606]:= ListObject::notimpl: The function ListNormal has not been implemented for backend UnknownList During evaluation of In[606]:= Throw::nocatch: Uncaught Throw[$Failed,Error[ListNormal]] returned to top level. </code></pre> </blockquote> <pre><code>(* Hold[Throw[$Failed, Error[ListNormal]]] *) </code></pre> <p>This is in contrast to the usual behavior:</p> <pre><code>Normal[UnknownList[stuff]] (* UnknownList[stuff] *) </code></pre> <p>which is much softer, but such softness isn't always good - often we actually may prefer to get an error instead of just silently returning an expression back.</p> <h2>What's the difference?</h2> <p>One may ask how the above construct fundamentally differs from what has been already suggested. The main difference is that here, we really defined an interface rather than an abstract data type. An interface does not really provide specific implementation, rather it provides a contract that all compatible implementations should meet.</p> <p>The core mechanism here is the same as before - using <code>UpValues</code>. However, the above scheme gives one more control, essentially because we go though an intermediate step: instead of doing, for example, <code>Map[MyList]</code> -> <code>MyList</code> implementation, we do <code>Map[ListObject[MyList]]</code> -> <code>ListMap[MyList]</code> -> <code>MyList</code> implementation. So, the main features of this approach are then:</p> <ul> <li>All code execution for all implemented types now goes through a single intermediate step (one of the <code>List...</code> functions).</li> <li>There are ways to intercept the computations, facilitated by pre- and post-processors. 
</li> <li>We can have a harder (or, generally, more customizable) default behavior in terms of error-handling and reporting.</li> </ul> <p>I would not necessarily bother with such more complex construction for a single throw-away data type, but if I were planning to build an extensible framework where the users may implement new types, then having something like this may provide advantages for both myself as a framework author, and the users. </p> <p>For example, should I want to change the behavior of some of the functions for all implementations of <code>ListObject</code>, all I have to do is to change the definition of the mapping of system symbol to interface symbol (e.g. <code>Map</code> to <code>ListMap</code>) - in the simplest case, by (re)defining pre- and / or post-processor. I would not need to touch any of the user implementations of <code>ListObject</code> interface for that. </p> <h2>The code for the interface generation</h2> <h3>Interface-generation micro-framework</h3> <pre><code>ClearAll[MsgFail]; SetAttributes[MsgFail, HoldRest]; MsgFail[f_, msg_:None, args___]:= (msg; Throw[$Failed, Error[f, args]]); ClearAll[TerminalQ, NonTerminalQ]; TerminalQ[_] = NonTerminalQ[_]=False; (* Wraps non-terminal operations with the container head *) ClearAll[ObjectWrapper]; ObjectWrapper[obj_Symbol, op_, flag_]:= If[TrueQ @ flag, obj, Identity]; ObjectWrapper[obj_Symbol, op_?TerminalQ]:=Identity; ObjectWrapper[obj_Symbol, op_?NonTerminalQ]:= obj; ObjectWrapper[___]:=MsgFail[ObjectWrapper]; ClearAll[SetTerminal, SetNonTerminal]; SetTerminal[op_, flag_:True]:=TerminalQ[op]:=flag; SetNonTerminal[op_, flag_:True]:=NonTerminalQ[op]:=flag; (* Helper function for code generation for functions' definitions *) ClearAll[defWrapped]; SetAttributes[defWrapped, HoldAll]; defWrapped[obj_, {f_, flag_:None}, TagSetDelayed[s_, lhs_, rhs_]]:= With[{wrapper = ObjectWrapper[obj, f, If[flag === None, Sequence @@ {}, flag]]}, TagSetDelayed[s, lhs, wrapper @ rhs]; ]; 
ClearAll[ContainerType]; ContainerType[c_Symbol[___]]:=c; ContainerType[_]:=Null; ClearAll[Preprocessor, Postprocessor]; Preprocessor[type_, function_, interface_]:=##&amp;; Postprocessor[type_, function_, interface_]:=##&amp;; (* Syntactic sugar for interface function dispatch definitions *) ClearAll[Def]; SetAttributes[Def, HoldAll]; Def[obj_, flag:True|False|None:None]:=Function[code, Def[obj, code, flag], HoldAll]; Def /: Def[obj_, f_[obj_], flag_:None] -&gt; impl_:= defWrapped[obj,{f, flag}, obj /: f[obj[backend_]]:= With[{type = ContainerType[backend]}, Postprocessor[type, f, obj] @ impl[Preprocessor[type, f, obj] @ backend] ]]; Def /: Def[obj_, f_[obj_, args__], flag_:None] -&gt; impl_:= defWrapped[obj, {f, flag}, obj /: f[obj[backend_], p:PatternSequence[args]]:= With[{type = ContainerType[backend]}, Postprocessor[type, f, obj] @ impl[Preprocessor[type, f, obj][backend, p]] ] ]; Def /: Def[obj_, f_[fst_, obj_], flag_:None] -&gt; impl_:= defWrapped[obj, {f, flag}, obj /: f[ff:fst, obj[backend_]]:= With[{type = ContainerType[backend]}, Postprocessor[type, f, obj] @ impl[Preprocessor[type, f, obj][ff, backend]] ] ]; Def /: Def[obj_, f_[fst_, obj_, rest__], flag_:None] -&gt; impl_:= defWrapped[obj, {f, flag}, obj /: f[ff:fst, obj[backend_], rr:PatternSequence[rest]]:= With[{type = ContainerType[backend]}, Postprocessor[type, f, obj] @ impl @ Preprocessor[type, f, obj][ff, backend,rr] ] ] (* General catch-all definitions for interface functions *) ClearAll[SetGeneralDef]; SetGeneralDef[objsym_, clear_:True][f_Symbol, pos_Integer:1]:= With[{n = pos - 1}, If[TrueQ[clear], ClearAll[f]]; f[Repeated[_, {n}], backend_Symbol[___],args___]:= MsgFail[f, Message[MessageName[objsym, "notimpl"], f, backend]]; f[args___]:=MsgFail[f, Message[MessageName[objsym,"badargs"], f, {args}]]; ]; </code></pre> <h3>Code for <code>ListObject</code> interface generation in one piece, for convenience</h3> <p><em>Execute the above code for interface generation first</em>.</p>
<pre><code>Scan[SetTerminal, {Length, Normal}]; Scan[SetNonTerminal, {Part, Take, Drop, Map, Select}] ClearAll[ListObject]; ListObject::notimpl = "The function `1` has not been implemented for backend `2`"; ListObject::badargs = "Function `1` received bad number and / or types of arguments: `2`"; ClearAll[ListObjectQ]; ListObjectQ[ListObject[backend_Symbol[___]]]:=True; ListObjectQ[_]:=False; Scan[ SetGeneralDef[ListObject], {ListLength, ListNormal, ListPart, ListTake, ListDrop, ListSelect} ]; Scan[SetGeneralDef[ListObject][#, 2]&amp;, {ListMap}]; Def[ListObject] @ Length @ ListObject -&gt; ListLength Def[ListObject] @ Normal @ ListObject -&gt; ListNormal (* Force terminal behavior on Part[lst, _Integer] *) Def[ListObject, False] @ Part[ListObject, _Integer] -&gt; ListPart Def[ListObject] @ Part[ListObject, __] -&gt; ListPart Def[ListObject] @ Take[ListObject, __] -&gt; ListTake Def[ListObject] @ Drop[ListObject, __] -&gt; ListDrop Def[ListObject] @ Map[_, ListObject, ___] -&gt; ListMap Def[ListObject] @ Select[ListObject, __] -&gt; ListSelect </code></pre>
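<p>As a cross-language aside (my own addition, not part of the original answer): the queue-operations-then-evaluate-on-demand pattern at the heart of <code>ListObject</code> can be sketched in a few lines of Python. All names below are invented for illustration only.</p>

```python
# Illustrative analogy (not Mathematica): non-terminal operations are queued,
# and a terminal call ("normal", playing the role of Normal) applies the queue.
class LazyList:
    def __init__(self, data, ops=None):
        self.data = list(data)
        self.ops = ops or []                      # queued operations, oldest first

    def map(self, f):                             # non-terminal: only extends the queue
        return LazyList(self.data, self.ops + [lambda xs: [f(x) for x in xs]])

    def select(self, pred):                       # non-terminal: only extends the queue
        return LazyList(self.data, self.ops + [lambda xs: [x for x in xs if pred(x)]])

    def normal(self):                             # terminal: runs the queued operations
        xs = self.data
        for op in self.ops:
            xs = op(xs)
        return xs

result = LazyList(range(1, 11)).map(lambda x: x * x).select(lambda x: x % 2 == 1).normal()
```

<p>This mirrors <code>Normal @ Select[Map[#^2 &amp;, ...], OddQ]</code> returning <code>{1, 9, 25, 49, 81}</code>.</p>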
12,091
<p>It looks like the traffic is really high today, probably because of the introduction of hats. </p> <p>I was wondering, is there a way to check statistics about the traffic on math.SE, and on the other SE websites?</p>
Community
-1
<p>There are publicly visible statistics available on <a href="https://www.quantcast.com/math.stackexchange.com" rel="nofollow">Quantcast</a>. As SE uses the Quantcast tracker, they don't have to guess and this is actual traffic data.</p> <p>You'll have to wait a bit though, the graphs are only updated later the next day.</p>
71,670
<pre><code>TableForm[Table[i/j + 4*Boole[j &gt; i] // N, {i, 3}, {j, 4}], TableHeadings -&gt; {{"Row1", "Row2", "Row3"}, {"Col1", "Col2", "Col3", "Col4"}}] </code></pre> <p>Produces the following table:</p> <p><img src="https://i.stack.imgur.com/Dqnmr.png" alt="enter image description here"></p> <p>How do I select the maximum value in each row and make it bold?</p> <p>So that <code>4.5</code>, <code>4.66667</code> and <code>4.75</code> are bold in the 1st, 2nd and 3rd rows.</p> <p>Thanks</p>
george2079
2,079
<pre><code> TableForm[ (m = Max[#]; (# /. m :&gt; Style[m, Red] ) ) &amp; /@ Table[i/j + 4*Boole[j &gt; i] // N, {i, 3}, {j, 4}] , TableHeadings -&gt; {{"Row1", "Row2", "Row3"}, {"Col1", "Col2", "Col3", "Col4"}}] </code></pre> <p><img src="https://i.stack.imgur.com/HwgOl.png" alt="enter image description here"> </p> <p>Red is easier to see, but Bold works as well.</p>
4,132,907
<p>I'm trying to figure out the identity above, though I'm having difficulties, and would kindly appreciate your support!</p> <p><span class="math-container">$n\binom{n-1}{r-1} = r\binom{n}{r}$</span></p> <p>What I have tried: Given that</p> <p><span class="math-container">$$\binom{n}{r}=\binom{n-1}{r-1}+\binom{n-1}{r}$$</span> Then by rearranging for <span class="math-container">$\binom{n-1}{r-1}$</span> I get</p> <p><span class="math-container">$$\binom{n}{r}-\binom{n-1}{r}=\binom{n-1}{r-1}$$</span></p> <p>Which simplifies to:</p> <p><span class="math-container">$$n\left[\frac{n!}{(n-r)!r!}-\frac{(n-1)!}{(n-r)!r!}\right]=n\cdot\frac{n!-(n-1)!}{(n-r)!r!}$$</span></p> <p>I'm stuck here on how to simplify this any further to get the result I'm after.</p>
David C. Ullrich
248,223
<p>Of course this just falls out from the identity <span class="math-container">$\begin{pmatrix}p\\q\end{pmatrix}=\frac{p!}{(p-q)!q!}$</span>. That's no fun (also unenlightening). <strong>Outline</strong> of a &quot;combinatorial&quot; proof:</p> <p>Say <span class="math-container">$S=\{1,2,\dots,n\}$</span>. Let <span class="math-container">$X$</span> be the set of all ordered pairs <span class="math-container">$(s,E)$</span> such that <span class="math-container">$s\in S$</span>, <span class="math-container">$E\subset S$</span>, <span class="math-container">$E$</span> has <span class="math-container">$r-1$</span> elements, and <span class="math-container">$s\notin E$</span>. The identity follows by counting the number of elements of <span class="math-container">$X$</span> in two different ways.</p>
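<p>A quick numerical double-check of the double count (my own illustrative snippet, not part of the answer): count the pairs <span class="math-container">$(s,E)$</span> directly and compare with both sides of the identity.</p>

```python
from itertools import combinations
from math import comb

def count_pairs(n, r):
    # pairs (s, E): E an (r-1)-subset of {1,...,n}, s a point of S not in E
    S = range(1, n + 1)
    return sum(1 for E in combinations(S, r - 1) for s in S if s not in E)

# counting E first gives r*C(n,r); counting s first gives n*C(n-1,r-1)
for n in range(1, 8):
    for r in range(1, n + 1):
        assert count_pairs(n, r) == n * comb(n - 1, r - 1) == r * comb(n, r)
```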
41,940
<p>For example, the square can be described with the equation $|x| + |y| = 1$. So is there a general equation that can describe a regular polygon (in the 2D Cartesian plane?), given the number of sides required?</p> <p>Using the Wolfram Alpha site, this input gave an almost-square: <code>PolarPlot(0.75 + ArcSin(Sin(2x+Pi/2))/(Sin(2x+Pi/2)*(Pi/4))) (x from 0 to 2Pi)</code></p> <p>This input gave an almost-octagon: <code>PolarPlot(0.75 + ArcSin(Sin(4x+Pi/2))/(Sin(4x+Pi/2)*Pi^2)) (x from 0 to 2Pi)</code></p> <p>The idea is that as the number of sides in a regular polygon goes to infinity, the regular polygon approaches a circle. Since a circle can be described by an equation, can a regular polygon be described by one too? For our purposes, this is a regular convex polygon (triangle, square, pentagon, hexagon and so on).</p> <p>It can be assumed that the centre of the regular polygon is at the origin $(0,0)$, and the radius is $1$ unit.</p> <p>If there's no such equation, can the non-existence be proven? If there <em>are</em> equations, but only for certain polygons (for example, only for $n &lt; 7$ or something), can those equations be provided?</p>
E.Sokol
921,142
<p>A more general formula exists, for star polygons too, and for their smoother variations:</p> <p><span class="math-container">$$1=\frac{\sqrt{x^2+y^2} \cos \left(\frac{2 \sin ^{-1}\left(k \cos \left(n \tan ^{-1}(x,y)\right)\right)+\pi m}{2 n}\right)}{\cos \left(\frac{2 \sin ^{-1}(k)+\pi m}{2 n}\right)}$$</span></p> <p>or, with polar coordinates:</p> <p><span class="math-container">$$\rho = \frac{\cos \left(\frac{2 \sin ^{-1}(k)+\pi m}{2 n}\right)}{\cos \left(\frac{2 \sin ^{-1}(k \cos (n \phi ))+\pi m}{2 n}\right)}$$</span></p> <p>where</p> <p><span class="math-container">$\phi$</span> - angle;<br /> <span class="math-container">$\rho$</span> - radius;<br /> <span class="math-container">$n$</span> - number of convex vertices;<br /> <span class="math-container">$m$</span> - determines across how many vertices a side lies in one straight line;<br /> <span class="math-container">$k$</span> - hardness - for <span class="math-container">$k=0$</span> we get a circle regardless of other parameters, for <span class="math-container">$k=1$</span> - a polygon with straight lines, with intermediate values from <span class="math-container">$0$</span> to <span class="math-container">$1$</span> - intermediate figures between the circle and the polygon.</p> <p>For more details see <a href="https://habr.com/ru/post/519954/" rel="nofollow noreferrer">this paper</a> (in Russian)</p> <p><a href="https://i.stack.imgur.com/sfyn1.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sfyn1.gif" alt="sample image" /></a> <a href="https://i.stack.imgur.com/E9ZO7.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/E9ZO7.gif" alt="sample image" /></a></p>
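<p>A numerical sanity check of the polar formula (my own illustrative snippet): <span class="math-container">$k=0$</span> should give a unit circle, and with <span class="math-container">$n=4$</span>, <span class="math-container">$m=1$</span>, <span class="math-container">$k=1$</span> one gets a square with vertex radius <span class="math-container">$1$</span> and side-midpoint radius <span class="math-container">$\cos(\pi/4)$</span>.</p>

```python
from math import asin, cos, pi, isclose

def rho(phi, n, m, k):
    # polar radius from the answer's formula
    num = cos((2 * asin(k) + pi * m) / (2 * n))
    den = cos((2 * asin(k * cos(n * phi)) + pi * m) / (2 * n))
    return num / den

# k = 0: the arcsin terms vanish, so rho == 1 for every angle (a circle)
assert all(isclose(rho(t * 0.1, 5, 1, 0.0), 1.0) for t in range(63))

# n=4, m=1, k=1: a square -- vertex at radius 1, side midpoint at cos(pi/4)
assert isclose(rho(0.0, 4, 1, 1.0), 1.0)
assert isclose(rho(pi / 4, 4, 1, 1.0), cos(pi / 4))
```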
4,237,056
<p>Can any subgroup of a cyclic group <span class="math-container">$\left&lt;a\right&gt;$</span> be presented as <span class="math-container">$\{x\in \left&lt;a\right&gt;|x^m=e\}$</span>?<br /> Note that m is a factor of n, where n is the order of <span class="math-container">$\left&lt;a\right&gt;$</span>.<br /> I found that we can easily construct a subgroup from <span class="math-container">$\left&lt;a\right&gt;$</span> using a factor m of n. Then <span class="math-container">$C_m≡\{x\in \left&lt;a\right&gt;|x^m=e\}$</span> is a subgroup of <span class="math-container">$\left&lt;a\right&gt;$</span> (the proof is simple and I will not write it here).<br /> For example, let <span class="math-container">$\left&lt;a\right&gt;=\mathbb{Z}_{12}$</span>.<br /> If <span class="math-container">$m=1$</span>, then <span class="math-container">$C_m=\{0\}$</span>.<br /> If <span class="math-container">$m=3$</span>, then <span class="math-container">$C_m=\{0,4,8\}$</span>.<br /> If <span class="math-container">$m=4$</span>, then <span class="math-container">$C_m=\{0,3,6,9\}$</span>.<br /> If <span class="math-container">$m=12$</span>, then <span class="math-container">$C_m=\mathbb{Z}_{12}$</span>.<br /> The question is, is the reverse also true? Can every subgroup of <span class="math-container">$\left&lt;a\right&gt;$</span> be written as <span class="math-container">$C_m$</span>? In other words, can we prove that the number of subgroups of <span class="math-container">$\left&lt;a\right&gt;$</span> is exactly the number of factors of n?</p>
subrosar
602,170
<p>The first partial of the Lagrangian simplifies to <span class="math-container">$2(x_1-1)(1+\lambda),$</span> so either <span class="math-container">$x_1=1$</span> or <span class="math-container">$\lambda=-1.$</span> If <span class="math-container">$x_1=1$</span> then the constraint tells us <span class="math-container">$x_2=0.$</span> If <span class="math-container">$\lambda=-1$</span> then the constraint on the second partial tells us <span class="math-container">$x_2=-\frac{1}{2}.$</span></p>
3,910,015
<p>My impression has been that <span class="math-container">$\mathbb{Z_n}$</span> is the set <span class="math-container">$\{0,1,...,n-1\}$</span> under binary operation addition modulo <span class="math-container">$n$</span>. However I'm also coming across this notion that <span class="math-container">$\mathbb{Z_n}$</span> is actually a set of equivalence classes of equivalence relation <span class="math-container">$x\sim y \iff x \equiv y$</span> mod <span class="math-container">$n$</span> and the addition here is actually addition of equivalence classes rather than simply addition of integers modulo <span class="math-container">$n$</span>. Is this correct? So would it be correct to say <span class="math-container">$\mathbb{Z_n} = \{n,n+1,...,2n-1\}$</span>? if we are considering these elements as being equivalence classes?</p>
Duncan Ramage
405,912
<p>To put it shortly: yes.</p> <p>What isn't important is what <span class="math-container">$\mathbb{Z}_n$</span> &quot;is&quot; as a set. Rather, what's important is that all these different formulations of <span class="math-container">$\mathbb{Z}_n$</span> (equivalence classes of integers, select integers with a certain operation, a different set of integers with a certain operation) all give rise to an algebraic structure (groups, rings, fields, depending on what you're working with) which are all <em>isomorphic</em>.</p> <p>Though, if you're saying that <span class="math-container">$\mathbb{Z}_n$</span> is equivalence classes, then the accepted notation would probably look more like <span class="math-container">$\mathbb{Z}_n = \{[n]_n, [n + 1]_n, \dots\}$</span>, or perhaps even <span class="math-container">$\mathbb{Z}_n = \{n + n\mathbb{Z}, (n + 1) + n\mathbb{Z}, \dots\}$</span>, depending on your background and your personal aesthetic concerns. <span class="math-container">$n$</span>, after all, is an integer, not an equivalence class of integers.</p> <p>As Qiaochu Yuan says in his comment, the equivalence class definition is usually preferred, as this makes certain proofs much easier.</p>
3,910,015
<p>My impression has been that <span class="math-container">$\mathbb{Z_n}$</span> is the set <span class="math-container">$\{0,1,...,n-1\}$</span> under binary operation addition modulo <span class="math-container">$n$</span>. However I'm also coming across this notion that <span class="math-container">$\mathbb{Z_n}$</span> is actually a set of equivalence classes of equivalence relation <span class="math-container">$x\sim y \iff x \equiv y$</span> mod <span class="math-container">$n$</span> and the addition here is actually addition of equivalence classes rather than simply addition of integers modulo <span class="math-container">$n$</span>. Is this correct? So would it be correct to say <span class="math-container">$\mathbb{Z_n} = \{n,n+1,...,2n-1\}$</span>? if we are considering these elements as being equivalence classes?</p>
Community
-1
<p>To my knowledge, possibly the main plus of the definition of <span class="math-container">$\Bbb Z_n$</span> as the quotient set (group under addition modulo <span class="math-container">$n$</span>, actually) <span class="math-container">$\Bbb Z/\sim$</span>, where <span class="math-container">$x\sim y \stackrel{(def.)}{\iff} x-y \in n\Bbb Z$</span>, is that it leads to the generalization to any group <span class="math-container">$G$</span> (in place of <span class="math-container">$\Bbb Z$</span>) and any subgroup <span class="math-container">$H$</span> (in place of <span class="math-container">$n\Bbb Z$</span>), and finally to the powerful notion of coset.</p>
2,870,039
<p>Just for the sake of simplicity, assume that the centre of rotation is $(0, 0)$. I know that I can find the destination point $(x', y')$ given angle of rotation $a$ and source point $(x, y)$ using $$\begin{cases} x' = x \cos a - y \sin a \\ y' = x \sin a + y \cos a \\ \end{cases}$$</p> <p>However, I want to ask how I can find the angle of rotation $a$ given the source point $(x, y)$ and only the x-coordinate of the destination point $x'$? The y-coordinate of the destination point $y'$ would then be determined by the angle of rotation $a$ that we get. I am having trouble trying to solve the above equations for the desired value because we don't have the exact value of $y'$. It would be great if someone could help. Thanks.</p>
amd
265,466
<p>Changing notation a bit, all of the possible rotations of $\mathbf p_0=(x_0,y_0)$ lie on a circle with radius $r = \|\mathbf p_0\| = \sqrt{x_0^2+y_0^2}$. An equation of this circle is $x^2+y^2=x_0^2+y_0^2$. Your problem comes down to finding the points on this circle with $x$-coordinate equal to $x_1$. Plugging $x_1$ into the above equation gives you a quadratic equation with solutions $y_1 = \pm\sqrt{r^2-x_1^2}$. (You can also get this directly from the Pythagorean theorem.) For each of the solutions $\mathbf p_1=(x_1,y_1)$ you can recover the cosine of the angle $\alpha$ via the dot product identity: $$\mathbf p_0\cdot\mathbf p_1 = \|\mathbf p_0\|\,\|\mathbf p_1\|\cos\alpha = r^2\cos\alpha$$ therefore $$\cos\alpha = {x_0x_1+y_0y_1 \over x_0^2+y_0^2}.\tag1$$ Because $\cos\alpha = \cos(-\alpha)$, this equation has a sign ambiguity, but you can resolve it by examining $$\det\begin{bmatrix}\mathbf p_0^T&amp;\mathbf p_1^T\end{bmatrix} = x_0y_1-x_1y_0 = \|\mathbf p_0\|\,\|\mathbf p_1\|\sin\alpha.\tag2$$ You can, if you like, combine these two equations to obtain $$\tan\alpha = {x_0y_1-x_1y_0 \over x_0x_1+y_0y_1}.$$ This also has an ambiguity, this time of the quadrant of the angle, but if you’re coding this, there’s likely a two-argument form of the arctangent function available (often called something like <code>ATAN2</code>) that allows you to pass the numerator and denominator in separately in order to resolve this ambiguity.</p>
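<p>The recipe is easy to verify numerically. The sketch below (function names are my own) computes both candidate angles from $(x_0,y_0)$ and $x_1$ via the two-argument arctangent, then confirms each by rotating the source point.</p>

```python
from math import atan2, sqrt, cos, sin, isclose

def angles_from_x1(x0, y0, x1):
    # all rotation angles taking (x0, y0) to a point with x-coordinate x1
    r2 = x0 * x0 + y0 * y0                   # squared radius, assumed x1^2 <= r2
    out = []
    for y1 in (sqrt(r2 - x1 * x1), -sqrt(r2 - x1 * x1)):
        # atan2(determinant (2), dot product (1)) resolves the quadrant of alpha
        out.append(atan2(x0 * y1 - x1 * y0, x0 * x1 + y0 * y1))
    return out

x0, y0, x1 = 3.0, 4.0, 2.0
for a in angles_from_x1(x0, y0, x1):
    # rotate (x0, y0) by a and check the resulting x-coordinate
    assert isclose(x0 * cos(a) - y0 * sin(a), x1)
```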
671,437
<p>A random variable $X \sim N(0,1)$, compute $\Bbb E(X^n)$.</p> <p>I managed to do this via the characteristic function. Now I am trying to compute it via the moment generating function, or directly. So I have 2 questions for 2 ways to compute that expectation.</p> <ol> <li>Moment generating function: I have $$M_X(t)= e^{t^2/2},$$ but how to compute this $n$-th order derivative?</li> <li>If I compute that directly, I have $$\Bbb E(X^n)=\int x^n {1 \over \sqrt {2 \pi}} e^{-x^2 \over 2}dx,$$ but how to work out this integral?</li> </ol>
Gurvan
127,051
<p>If you name $u_n$ a term of the sequence, you can check the sign of $u_{n+1}-u_n$, if the sign is always negative (resp. positive), the sequence is decreasing (resp. increasing).</p>
671,437
<p>A random variable $X \sim N(0,1)$, compute $\Bbb E(X^n)$.</p> <p>I managed to do this via the characteristic function. Now I am trying to compute it via the moment generating function, or directly. So I have 2 questions for 2 ways to compute that expectation.</p> <ol> <li>Moment generating function: I have $$M_X(t)= e^{t^2/2},$$ but how to compute this $n$-th order derivative?</li> <li>If I compute that directly, I have $$\Bbb E(X^n)=\int x^n {1 \over \sqrt {2 \pi}} e^{-x^2 \over 2}dx,$$ but how to work out this integral?</li> </ol>
imranfat
64,546
<p>The numerator can be written as $\frac{n^2+n}{2}$ (that's the closed formula Daniel is hinting at) and so if you combine the fractions into one, you get a numerator of $-n$ and a denominator of $2(n+2)$. Taking the limit as $n\to\infty$ gives $-0.5$.</p>
359,528
<p>Let $P$ be the set of permutations all of whose cycles are of even length. Prove that the exponential generating function for $P$ is $\dfrac{1}{\sqrt{1-x^2}}$.</p>
Community
-1
<p>A bit of notation to make the answer clearer. I shall write $P_n$ to be the set of all partitions of the set $\{1,\ldots,n\}$ and $\sigma=\{S_1,\ldots,S_k\} \in P_n$ to mean that $\sigma$ is a partition of the set $\{1,\ldots,n\}$ into "parts" $S_1,\ldots,S_k$</p> <p>Define $$ a_n= \begin{cases} (n-1)! &amp; \text{if $n$ is even and $n\geq 2$} \\ 0 &amp; \text{otherwise} \end{cases} $$ and $b_n=1$ for all $n$. Let $A(x)=\sum\limits_{n=0}^\infty\frac{a_nx^n}{n!}$ and $B(x)=\sum\limits_{n=0}^\infty\frac{b_nx^n}{n!}$. Then the exponential generating series for $P$ is $B(A(x))=\sum\limits_{n=0}^\infty\frac{c_nx^n}{n!}$, where</p> <p>$$ c_n=\sum\limits_{\sigma=\{S_1,\ldots,S_k\} \in P_n}b_k a_{|S_1|}a_{|S_2|}\cdots a_{|S_k|}=\sum\limits_{\sigma=\{S_1,\ldots,S_k\} \in P_n,\ |S_i|\text{ even}}(|S_1|-1)! \cdots (|S_k|-1)!, $$</p> <p>which is exactly the number of permutations of the set $\{1, \ldots,n\}$ with all cycles even!</p> <p>Note that $B(x)=\exp(x)$ and</p> <p>$$ A(x)=\sum\limits_{n \geq 2,\text{ even}} \frac{(n-1)!x^n}{n!}=\sum\limits_{n \geq 2,\text{ even}}\frac{x^n}{n}=-\frac{1}{2}(\ln(1-x) + \ln(1+x))= \ln \left(\frac{1}{\sqrt{1-x^2}}\right), $$</p> <p>Hence the generating function for $P$ is $B(A(x))=\exp\left(\ln \left(\frac{1}{\sqrt{1-x^2}}\right)\right)=\dfrac{1}{\sqrt{1-x^2}}$</p>
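<p>For small $n$ one can brute-force the count and compare it with the coefficients of $\frac{1}{\sqrt{1-x^2}}$ read as an EGF; the check below is my own illustration, not part of the answer.</p>

```python
from itertools import permutations
from math import comb, factorial

def all_even_cycles(perm):
    # True iff every cycle of the permutation (one-line, 0-indexed) has even length
    seen, n = set(), len(perm)
    for start in range(n):
        if start in seen:
            continue
        length, j = 0, start
        while j not in seen:
            seen.add(j)
            j = perm[j]
            length += 1
        if length % 2 == 1:
            return False
    return True

for n in range(0, 7):
    brute = sum(all_even_cycles(p) for p in permutations(range(n)))
    # [x^n] 1/sqrt(1-x^2) = C(n, n/2)/2^n for even n, 0 for odd n,
    # so the EGF coefficient is c_n = n! * C(n, n/2) / 2^n
    egf = factorial(n) * comb(n, n // 2) // 2 ** n if n % 2 == 0 else 0
    assert brute == egf
```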
391,572
<p>I never really understood what $e$ means and I'm always terrified when I see it in equations. What is it? Can somebody dumb it down for me? I know it's a constant. Is it as simple as that?</p>
JohnWO
73,241
<p>$e$, the constant, is the limit of $(1 + {1\over n})^n$ as $n$ approaches infinity; it can also be expressed by the series: $$e = 1 + {1\over 1} + {1\over 1\cdot2 } + {1\over 1\cdot2\cdot3} + {1\over 1\cdot2\cdot3\cdot4} + \cdots$$</p> <p>It is the base of the <a href="http://en.wikipedia.org/wiki/Natural_logarithm" rel="nofollow">natural logarithm</a>.</p>
391,572
<p>I never really understood what $e$ means and I'm always terrified when I see it in equations. What is it? Can somebody dumb it down for me? I know it's a constant. Is it as simple as that?</p>
Ron Gordon
53,268
<p>The idea of $e$ may be explained through the concept of compound interest. Let's say that you earn an annual interest rate $r$ on a principal of $P$ dollars. Let's say that the money is compounded annually. Then after $n$ compounding periods, you have $P (1+r)^n$ dollars.</p> <p>Now let's say we compound semiannually; then after the same amount of time $n$ years, i.e., $2 n$ compounding periods, you will have </p> <p>$$P \left ( 1+\frac{r}{2} \right )^{2 n}$$</p> <p>dollars. You can now imagine more frequent compounding events; let's say that there are $M$ such events per year. Then after $n$ years you will have</p> <p>$$P \left ( 1+\frac{r}{M} \right )^{M n}$$</p> <p>dollars.</p> <p>Now imagine that you are compounding continuously (like, every microsecond). This corresponds to the limit as $M \to \infty$: after $n$ years, you will have</p> <p>$$P \lim_{M \to \infty} \left ( 1+\frac{r}{M} \right )^{M n} =P\, e^{r n}$$</p> <p>dollars.</p>
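<p>A quick numerical illustration of this limit (the numbers and names here are mine, purely for demonstration):</p>

```python
from math import e

P, r, n = 100.0, 0.05, 10                      # principal, annual rate, years

value = lambda M: P * (1 + r / M) ** (M * n)   # M compounding events per year

# more frequent compounding approaches continuous compounding P * e^(r n)
continuous = P * e ** (r * n)
assert abs(value(1_000_000) - continuous) < 1e-3
assert abs(value(12) - continuous) > abs(value(365) - continuous) > abs(value(10_000) - continuous)
```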
4,547,151
<p>Say I have a set of ordered points, like in a polygon, I can name a point in this set <span class="math-container">$P_i$</span>.</p> <p>Say I have a high dimensional point and I want to denote its x coordinate, I can do <span class="math-container">$P_x$</span> and if I want its ith coordinate I can do <span class="math-container">$P_i$</span></p> <p>Say I want the jth coordinate of the ith point, or the x coordinate of the ith point. How do I write that down?</p> <p><span class="math-container">$P_{i, j}$</span>? or <span class="math-container">$P_{i}[j]$</span> or what other convention is there?</p>
Doug M
317,176
<p>I would do this with cylindrical shells.</p> <p><span class="math-container">$\int_0^1 2\pi x(1-x^4)\ dx$</span></p>
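<p>For what it's worth, the exact value of that shell integral is $2\pi\left(\frac12-\frac16\right)=\frac{2\pi}{3}$, and a quick midpoint-rule check (my own snippet) agrees:</p>

```python
from math import pi, isclose

# midpoint-rule approximation of the shell integral: integral from 0 to 1 of 2*pi*x*(1 - x^4) dx
N = 100_000
approx = sum(2 * pi * x * (1 - x ** 4) * (1 / N)
             for i in range(N) for x in [(i + 0.5) / N])
assert isclose(approx, 2 * pi / 3, rel_tol=1e-6)
```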
679,341
<p>The 4 bytes used in CDROM sectors are just a usual 32-bit CRC. It uses the polynomial </p> <pre><code>P(x) = (x^16 + x^15 + x^2 + 1).(x^16 + x^2 + x + 1) </code></pre> <p>which expands to </p> <pre><code>x^32 + x^31 + 2x^18 + 2x^17 + 3x^16 + x^15 + x^4 + x^3 + 2x^2 + x + 1 </code></pre> <p>The CRC process reverses the bits of the input bytes and the final CRC value. It is stored in big endian format in the sector.</p> <p>So I have to pass this polynomial into the CRC32 algorithm.</p> <p>How do I convert this polynomial expression into binary form? Any explanation will be appreciated.</p>
gammatester
61,216
<p>For a CRC32 algorithm with the polynomial $$x^{32} + x^{31} + 2x^{18} + 2x^{17} + 3x^{16} + x^{15} + x^4 + x^3 + 2x^2 + x + 1$$ you first omit the $x^{32}$ term and reduce the remaining coefficients mod 2. This gives $$x^{31} + x^{16} + x^{15} + x^4 + x^3 + x + 1$$ Now simply substitute $x=2\;$ and evaluate, this gives the 32 bit number for the CRC polynomial:</p> <p>$$10000000000000011000000000011011_2 = 8001801B_{16} = 2147581979_{10} $$</p> <p>But note that for an actual implementation there more design subtleties: Are you using big-endian or little endian? There are different reflections modes, initial values, final xoring etc.</p>
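<p>The reduction described above is easy to script; the snippet below (my own illustration) reduces the expanded coefficients mod 2, substitutes $x=2$, and reproduces the stated constants.</p>

```python
# expanded coefficients of the polynomial from the question: exponent -> coefficient
coeffs = {32: 1, 31: 1, 18: 2, 17: 2, 16: 3, 15: 1, 4: 1, 3: 1, 2: 2, 1: 1, 0: 1}

# omit x^32 (the implicit top bit), reduce the remaining coefficients mod 2,
# then substitute x = 2 to pack the surviving bits into a 32-bit integer
poly = sum(2 ** k for k, c in coeffs.items() if k < 32 and c % 2 == 1)

assert poly == 0x8001801B == 2147581979
assert f"{poly:032b}" == "10000000000000011000000000011011"
```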
262,601
<p>What does the following statement mean, that Nature likes to minimize things (like energy) and this equation describes one particular minimization problem? </p> <p>What is the Variational Principle?</p>
Siminore
29,672
<p>It is, to some extent, just folklore. In particular, Nature likes to find <em>stationary</em> points, which can be maxima or saddle points, not only minima. Usually we first meet variational principles when studying mechanics (maxima/minima of the action functional), and it is proved that most laws of general physics are variational in nature. This means, roughly, that solutions to the equations of motion are stationary points of some functional (like the action or the energy of a system).</p> <p>The mathematical framework is considerably complicated: you need (infinite-dimensional) normed vector spaces, Fréchet and Gateaux derivatives, integration theory, some theory of ODEs and PDEs. The keywords are <em>calculus of variations</em> and <em>critical point theory</em>, but a good background is needed to approach these topics.</p> <p>The Wikipedia page about <a href="http://en.wikipedia.org/wiki/Variational_principle" rel="nofollow">variational principles</a> is an interesting starting point.</p>
2,821,514
<p>So I'm trying to work through this problem: </p> <p>How many elements does a 2-Sylow subgroup of $\left(\mathbb Z^\times_{11} \times \mathbb Z^\times_{13} , \cdot \right)$ have (i.e. what is the order), and how many 2-Sylow subgroups are there in $\left(\mathbb Z^\times_{11} \times \mathbb Z^\times_{13} , \cdot \right)$? Find at least one 2-Sylow subgroup and list all of its elements. (Just in case: $\mathbb Z^\times_n$ denotes the set of all $k \in \mathbb N$ that are relatively prime to $n$.) </p> <p>Now I think I can answer some of it, so here it goes:</p> <p>The order of $\mathbb Z^\times_{11} \times \mathbb Z^\times_{13}$ is (or not?) $\left|\mathbb Z^\times_{11} \times \mathbb Z^\times_{13}\right| =\phi(11)\phi(13) = 10\cdot12=120=2^3\cdot15$. So if that is correct then I know they are of order $2^3=8$, and I also know that if $s$ is the total number of 2-Sylow subgroups then the following must hold: $s\equiv 1\pmod 2$ and $s\mid15$. So from that I can say that $s \in \{1,3,5,15\}$. </p> <p>Now that's as far as I can go. I don't know how to construct such subgroups. I know that the elements of those subgroups must be of order dividing 8, that is, of order 1, 2, 4 or 8, and I know that there must be an element of order 2 (by Cauchy's theorem, I think). But I don't have an efficient way of finding those elements, and I don't know how to specify exactly how many 2-Sylow subgroups there are other than saying it's either 1, 3, 5 or 15. </p> <p>Any help will be greatly appreciated.</p>
lhf
589
<p>We have $\mathbb{Z}_{11}^\times \times \mathbb{Z}_{13}^\times \cong C_{10} \times C_{12}$, whose $2$-Sylow subgroup is $C_{2} \times C_{4}$.</p> <p>Therefore, the $2$-Sylow subgroup of $\mathbb{Z}_{11}^\times \times \mathbb{Z}_{13}^\times$ is $\langle g_{11}^5 \rangle \times \langle g_{13}^3 \rangle$, where $g_{p}$ is a primitive root mod $p$.</p> <p>Note that $g_{11}^5$ has order $\frac{10}{5}=2$ and $g_{13}^3$ has order $\frac{12}{3}=4$, as needed.</p> <p>We can take $g_{11}=2$ and $g_{13}=2$ and then the $2$-Sylow subgroup is $\langle 10 \rangle \times \langle 8 \rangle$.</p>
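<p>A small computational check of this answer (illustrative code, my own): verify the element orders and that $\langle 10 \rangle \times \langle 8 \rangle$ is a subgroup of order $8$.</p>

```python
from itertools import product

def order(g, p):
    # multiplicative order of g modulo p
    k, x = 1, g % p
    while x != 1:
        x = x * g % p
        k += 1
    return k

assert order(10, 11) == 2 and order(8, 13) == 4

# elements of <10> x <8> inside Z11* x Z13*
H = {(pow(10, i, 11), pow(8, j, 13)) for i, j in product(range(2), range(4))}
assert len(H) == 8                       # a 2-Sylow subgroup: order 2^3
# closed under componentwise multiplication
assert all((a * c % 11, b * d % 13) in H for (a, b) in H for (c, d) in H)
```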
1,713,841
<p>I think $\pi$ is algebraic of degree 3 over $\mathbb{Q}(\pi^3)$. To prove it, I need to show that $\pi \notin \mathbb{Q}(\pi^3)$ (which implies that $x^3-\pi^3$ is the minimum polynomial of $\pi$ over $\mathbb{Q}(\pi^3)$). I have no experience solving this kind of problem for transcendental elements like $\pi$.</p>
Eric Wofsey
86,856
<p>Suppose $\pi\in\mathbb{Q}(\pi^3)$. That means that there are some polynomials $f(x)$ and $g(x)$ with rational coefficients (with $g(x)\neq 0$) such that $\pi=f(\pi^3)/g(\pi^3)$, or $\pi g(\pi^3)-f(\pi^3)=0$. But this is then a polynomial in $\pi$ which vanishes, so since $\pi$ is transcendental, it must be identically $0$. This is impossible, since only powers of $\pi$ divisible by $3$ appear in $f(\pi^3)$, and only powers of $\pi$ which are $1$ mod $3$ appear in $\pi g(\pi^3)$, so there can be no cancellation of terms.</p>
4,013,630
<blockquote> <p>The population of bacteria triples each day on a petri dish. If it takes 20 days for the population of bacteria to fill the entire dish, how many days will it take bacteria to fill half of the petri dish?</p> </blockquote> <ul> <li><p>My doubt 1: if we don't have an initial population, can we solve this problem? Like <span class="math-container">$A(d)$</span> = initial population times <span class="math-container">$3^{20}$</span> after 20 days, but as I don't have this I am stuck.</p> </li> <li><p>My doubt 2: Also, can we assume a number for the population at day 20 and work our way backwards?</p> </li> </ul>
Neat Math
843,178
<p><span class="math-container">$$a_n-2=4-a_{n-1}-2=(-1)(a_{n-1}-2)$$</span> <span class="math-container">$\{a_n-2\}$</span> is a geometric sequence so <span class="math-container">$$a_n-2=(-1)^{n-2}(a_2-2)=5(-1)^{n-1} \implies a_n = 2+5(-1)^{n-1}.$$</span></p>
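<p>The closed form can be checked by iterating the recurrence <span class="math-container">$a_n = 4 - a_{n-1}$</span>; the starting value <span class="math-container">$a_2 = -3$</span> below is inferred from the answer's <span class="math-container">$a_2 - 2 = -5$</span> and is otherwise an assumption, since the original sequence isn't shown here.</p>

```python
# recurrence a_n = 4 - a_{n-1}; a_2 = -3 inferred from a_2 - 2 = -5
a = {2: -3}
for n in range(3, 20):
    a[n] = 4 - a[n - 1]

# the answer's closed form a_n = 2 + 5*(-1)^(n-1)
closed = lambda n: 2 + 5 * (-1) ** (n - 1)
assert all(a[n] == closed(n) for n in range(2, 20))
```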
4,623,427
<p><span class="math-container">$f$</span> is a twice differentiable function from <span class="math-container">$R^n $</span> to <span class="math-container">$R$</span>. I want to show that <span class="math-container">$$ \| \nabla^2 f(x) \| \le L \implies \| \nabla f(x) - \nabla f(y) \| \le L \| x-y \| $$</span> for all <span class="math-container">$x,y \in R^n $</span> and <span class="math-container">$L \ge 0 \in R$</span></p> <p>I know, based on the mean value theorem, that <span class="math-container">$$ f(x) - f(y) = (\nabla f (z) )^T (x-y) \implies \| f(x) - f(y) \| \le \| \nabla f (z) \| \| x-y \| $$</span> for some point <span class="math-container">$z$</span> on the line going through <span class="math-container">$x$</span> and <span class="math-container">$y$</span>. But I'm not sure how to utilize this.</p>
copper.hat
27,978
<p>Here is an outline:</p> <p>Pick some <span class="math-container">$w$</span> and let <span class="math-container">$\phi(x) = w^T \nabla f(x)$</span>.</p> <p>Compute <span class="math-container">$\nabla \phi(x)$</span>.</p> <p>Apply the mean value theorem to <span class="math-container">$\phi$</span> to get <span class="math-container">$|w^T (\nabla f(x) - \nabla f(y) ) | \le L \|w\| \|x-y\|$</span>. For fixed <span class="math-container">$x,y$</span> this holds for all <span class="math-container">$w$</span>.</p> <p>Choose <span class="math-container">$w$</span> appropriately to show that this implies the desired result.</p>
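<p>To see the outline in action on a concrete case (my own illustration): for the quadratic <span class="math-container">$f(x)=\frac12 x^T A x$</span> with symmetric <span class="math-container">$A = [[2,1],[1,2]]$</span>, the Hessian is <span class="math-container">$A$</span> everywhere, its spectral norm is <span class="math-container">$L=3$</span> (eigenvalues <span class="math-container">$1$</span> and <span class="math-container">$3$</span>), and <span class="math-container">$\nabla f(x) = Ax$</span>, so the claimed bound can be spot-checked numerically.</p>

```python
from math import hypot
from random import seed, uniform

# gradient of f(x) = x^T A x / 2 with A = [[2, 1], [1, 2]]: grad f(x) = A x
def grad(x):
    return (2 * x[0] + x[1], x[0] + 2 * x[1])

# check ||grad f(x) - grad f(y)|| <= 3 ||x - y|| at many random point pairs
seed(1)
for _ in range(1000):
    x = (uniform(-5, 5), uniform(-5, 5))
    y = (uniform(-5, 5), uniform(-5, 5))
    g = (grad(x)[0] - grad(y)[0], grad(x)[1] - grad(y)[1])
    assert hypot(*g) <= 3 * hypot(x[0] - y[0], x[1] - y[1]) + 1e-12
```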
262,985
<p>Consider $Ax=0$, where $A$ has $m$ rows and $n$ columns, $m \le n$, and all entries of $x$ are required to be non-negative.</p> <p>What conditions on $A$ guarantee that the system has only the zero solution?</p>
Max Alekseyev
7,076
<p>The paper "<a href="https://people.ece.cornell.edu/atang/pub/09/allerton09_uniqueness.pdf" rel="nofollow noreferrer">Conditions for a Unique Non-negative Solution to an Underdetermined System</a>" seems to answer your question.</p>
4,174,481
<p>I'm going through a proof in which at one point, the authors make the following statement:</p> <blockquote> <p>Let <span class="math-container">$(X,d)$</span> be a complete metric space and <span class="math-container">$f:X\to X$</span> and <span class="math-container">$f_m:X\to X$</span> Lipschitz with <span class="math-container">$lip(f)&lt;1$</span> and <span class="math-container">$lip(f_m)&lt;1\; \forall m\in\mathbb{N}$</span>. Let <span class="math-container">$x^*_m$</span> be the fixed point of <span class="math-container">$f_m$</span> and <span class="math-container">$x^*$</span> the fixed point of <span class="math-container">$f$</span>. If for each <span class="math-container">$x\in X$</span> we have that<br /> <span class="math-container">$$\lim_{m\to+\infty} d(f_m(x),f(x)) = 0, $$</span> it follows that <span class="math-container">$$\lim_{m\to+\infty} d(x^*_m,x^*) = 0 .$$</span></p> </blockquote> <p>I tried to figure out how they came up with this by an argument along the lines of <span class="math-container">\begin{align*}d(x^*_m,x^*) &amp;= d(f_m(x^*_m),f(x^*)) \leq d(f_m(x^*_m),f(x_m^*))+ d(f(x_m^*),f(x^*))\\&amp;\leq d(f_m(x^*_m),f(x_m^*))+lip(f)d(x_m^*,x^*). 
\end{align*}</span></p> <p>Then <span class="math-container">$$0\leq (1-lip(f))d(x^*_m,x^*)\leq d(f_m(x^*_m),f(x_m^*)).$$</span> But now we can't apply the first limit because <span class="math-container">$x_m^*$</span> depends on <span class="math-container">$m$</span>.</p> <p>Another thing I tried is using the fact that <span class="math-container">$$\lim_{n \to+\infty}d(f^{\circ n}(x),x^*)=0 \; \forall x\in X$$</span> where <span class="math-container">$f^{\circ n} = \underbrace{f\circ f\circ \dots \circ f}_{\text {n times}}.$</span></p> <p>Then for a fixed <span class="math-container">$x\in X$</span> <span class="math-container">\begin{equation*}d(x^*_m,x^*)\leq d(x^*_m,f^{\circ n}_m(x)) + d(f^{\circ n}_m(x),f^{\circ n}(x))+ d(f^{\circ n}(x),x^*),\; \forall n\in\mathbb{N} \end{equation*}</span></p> <p>Now let <span class="math-container">$\varepsilon &gt; 0$</span> and set <span class="math-container">$N$</span> big enough so that <span class="math-container">$d(f^{\circ N}(x),x^*)&lt;\varepsilon/3$</span> and <span class="math-container">$d(f_m^{\circ N}(x),x_m^*)&lt; \varepsilon/3$</span>.</p> <p>Then <span class="math-container">\begin{align*} d(x^*_m,x^*) &amp;\leq d(x^*_m,f^{\circ N}_m(x)) + d(f^{\circ N}_m(x),f^{\circ N}(x))+ d(f^{\circ N}(x),x^*)\\&amp;&lt;2\varepsilon/3 + d(f^{\circ N}_m(x),f^{\circ N}(x)). \end{align*}</span></p> <p>Now it can be shown that <span class="math-container">$\lim\limits_{m\to+\infty} d(f^{\circ N}_m(x),f^{\circ N}(x)) =0$</span>, so there is some <span class="math-container">$M$</span> such that <span class="math-container">$d(f^{\circ N}_m(x),f^{\circ N}(x)) &lt;\varepsilon/3\; \forall m&gt;M$</span>, from which we get <span class="math-container">$$d(x^*_m,x^*)&lt;\varepsilon.$$</span></p> <p>But I feel like something is not right in my second attempt.</p> <p>Is something missing from the statement?</p>
Danny Pak-Keung Chan
374,270
<p>If there exists <span class="math-container">$c\in(0,1)$</span> such that <span class="math-container">$lip(f_{n})\leq c$</span> for all <span class="math-container">$n$</span>, then it is easy. For, observe that <span class="math-container">\begin{eqnarray*} d(x_{n}^{\ast},x^{\ast}) &amp; = &amp; d(f_{n}(x_{n}^{\ast}),f(x^{\ast}))\\ &amp; \leq &amp; d(f_{n}(x_{n}^{\ast}),f_{n}(x^{\ast}))+d(f_{n}(x^{\ast}),f(x^{\ast}))\\ &amp; \leq &amp; lip(f_{n})d(x_{n}^{\ast},x^{\ast})+d(f_{n}(x^{\ast}),f(x^{\ast}))\\ &amp; \leq &amp; c\cdot d(x_{n}^{\ast},x^{\ast})+d(f_{n}(x^{\ast}),f(x^{\ast})) \end{eqnarray*}</span> Therefore, <span class="math-container">$d(x_{n}^{\ast},x^{\ast})\leq\frac{1}{1-c}\cdot d(f_{n}(x^{\ast}),f(x^{\ast})).$</span> Letting <span class="math-container">$n\rightarrow\infty$</span> and observe that <span class="math-container">$d(f_{n}(x^{\ast}),f(x^{\ast}))\rightarrow0,$</span> the result follows.</p>
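A tiny numeric illustration of this bound, assuming the simple maps <span class="math-container">$f(x)=cx$</span> and <span class="math-container">$f_n(x)=cx+1/n$</span> on <span class="math-container">$\mathbb{R}$</span> (both with Lipschitz constant <span class="math-container">$c&lt;1$</span>):

```python
# f(x) = c*x has fixed point x* = 0; f_n(x) = c*x + 1/n has fixed point
# x_n* = (1/n) / (1 - c).  Check d(x_n*, x*) <= d(f_n(x*), f(x*)) / (1 - c).
c = 0.5
x_star = 0.0
for n in range(1, 100):
    xn_star = (1 / n) / (1 - c)                      # fixed point of f_n
    pert = abs((c * x_star + 1 / n) - c * x_star)    # d(f_n(x*), f(x*)) = 1/n
    assert abs(xn_star - x_star) <= pert / (1 - c) + 1e-12
    assert abs((c * xn_star + 1 / n) - xn_star) < 1e-12  # really a fixed point
print("bound holds, and x_n* -> x* as n grows")
```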
2,071,054
<p>Let $f(x)$ be continuous on $[a,b]$.</p> <p>Let $A$ be the set defined as: $$A = \{ x \in [a,b] \mid f(x) = f(a) \}$$</p> <ol> <li><p>Does $A$ have a maximum? I guess it is the largest value in $[a,b]$ which $f$ sends to $f(a)$, but I don't know how to prove it.</p></li> <li><p>Would it still have a maximum if $[a,b]$ were changed to $[a,b)$?</p></li> </ol>
Teoc
190,244
<p>Since you want all the variables to be distinct, we can sum over all values of $i,j,k$, then subtract the sums that occur when they are not distinct, applying PIE (the Principle of Inclusion/Exclusion). </p> <p>Can you take it from there?</p>
1,379,878
<blockquote> <p>Let $M_n(\mathbb{C})$ denote the vector space over $\mathbb{C}$ of all $n\times n$ complex matrices. Prove that if $M$ is a complex $n\times n$ matrix then $C(M)=\{A\in M_n(\mathbb{C}) \mid AM=MA\}$ is a subspace of dimension at least $n$.</p> </blockquote> <p>My Try:</p> <p>I proved that $C(M)$ is a subspace. But how can I show that it is of dimension at least $n$? I have no idea how to do it. I found similar questions posted in MSE but could not find a clear answer. So, please do not mark this as duplicate.</p> <p>Can somebody please help me find this? </p> <p>EDIT: None of the given answers were clear to me. I would appreciate it if somebody would check my try below:</p> <p>If $J$ is a Jordan Canonical form of $A$, then they are similar. Similar matrices have the same rank. $J$ has dimension at least $n$. So does $A$. Am I correct?</p>
GAVD
255,061
<p>HINT: A square matrix $A$ over a field $F$ commutes with every $F$-linear combination of non-negative powers of $A$.</p> <p>That is, for every $a_0, \dots, a_n \in F$,</p> <p>$$A\left(\sum_{k=0}^n a_kA^k\right) = \sum_{k=0}^n a_k A^{k+1} = \left(\sum_{k=0}^n a_k A^k\right) A.$$ </p>
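Beyond the hint, the claimed bound can be spot-checked numerically (this is a check, not a proof): writing the map <span class="math-container">$A\mapsto AM-MA$</span> in vectorized form via <span class="math-container">$\operatorname{vec}(AM-MA)=(M^T\otimes I - I\otimes M)\operatorname{vec}(A)$</span>, the dimension of <span class="math-container">$C(M)$</span> is the kernel dimension of that <span class="math-container">$n^2\times n^2$</span> matrix.

```python
import numpy as np

# dim C(M) = n^2 - rank(M^T kron I - I kron M), since
# vec(AM) = (M^T kron I) vec(A) and vec(MA) = (I kron M) vec(A).
rng = np.random.default_rng(1)
for n in (2, 3, 4, 5):
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    I = np.eye(n)
    T = np.kron(M.T, I) - np.kron(I, M)
    dim_commutant = n * n - np.linalg.matrix_rank(T)
    assert dim_commutant >= n   # the statement to be proved
print("dim C(M) >= n for random M with n = 2..5")
```

For a generic random $M$ (distinct eigenvalues) the dimension is exactly $n$, which is why the bound is sharp.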
1,685,621
<p>I know that we can define two vectors to be orthogonal only if they are elements of a vector space with an inner product. </p> <p>So, if $\vec x$ and $\vec y$ are elements of $\mathbb{R}^n$ (as a real vector space), we can say that they are orthogonal iff $\langle \vec x,\vec y\rangle=0$, where $\langle \vec x,\vec y\rangle $ is an inner product.</p> <p>Usually the inner product is defined with respect to the standard basis $E=\{\hat e_1,\hat e_2 \}$ (for $n=2$ to simplify notations); the standard definition is: $$ \langle \vec x,\vec y\rangle_E=x_1y_1+x_2y_2 $$ where $$ \begin{bmatrix} x_1\\x_2 \end{bmatrix} =[\vec x]_E \qquad \begin{bmatrix} y_1\\y_2 \end{bmatrix} =[\vec y]_E $$ are the components of the two vectors in the standard basis and, by definition of the inner product, $\hat e_1$ and $\hat e_2$ are ortho-normal. </p> <p>Now, if $\vec v_1$ and $\vec v_2$ are linearly independent, the set $V=\{\vec v_1,\vec v_2\}$ is a basis and we can express any vector in this basis with a couple of components: $$ \begin{bmatrix} x'_1\\x'_2 \end{bmatrix} =[\vec x]_V \qquad \begin{bmatrix} y'_1\\y'_2 \end{bmatrix} =[\vec y]_V $$ from which we can define an inner product: $$ \langle \vec x,\vec y\rangle_V=x'_1y'_1+x'_2y'_2 $$</p> <p>Obviously we have: $$ [\vec v_1]_V= \begin{bmatrix} 1\\0 \end{bmatrix} \qquad [\vec v_2]_V= \begin{bmatrix} 0\\1 \end{bmatrix} $$ and $\{\vec v_1,\vec v_2\}$ are orthogonal (and normal) for the inner product $\langle \cdot,\cdot\rangle_V$.</p> <p>This means that any two linearly independent vectors are orthogonal with respect to a suitable inner product defined by a suitable basis. So orthogonality seems a ''coordinate dependent'' concept. </p> <p>The question is: is my reasoning correct? And, if yes, what makes the usual standard basis so special that we choose it for the usual definition of orthogonality? 
</p> <hr> <p>I add something to better illustrate my question.</p> <p>If my reasoning is correct then, for any basis in a vector space, there is an inner product such that the vectors of the basis are orthogonal. If we think of vectors as oriented segments (in the pure geometrical sense), this seems to contradict our intuition of what ''orthogonal'' means, and also a geometric definition of orthogonality. So why does what we call a ''standard basis'' seem to be in accord with intuition, while other bases are not? </p>
Balloon
280,308
<p>Your reasoning is good, and, as Daniel Fischer said, the assertion "be orthogonal to" depends only on your inner product. </p> <p>What makes the standard basis $$\mathcal{C}=(e_1,\dots,e_n) \hspace{0.5cm}\text{ with }\hspace{0.5cm} e_i=(0,\dots,0,\underset{i\text{-th entry}}{\underbrace{1}},0,\dots,0)$$ special when you are working in $\mathbb{R}^n$ with an inner product $\langle\cdot,\cdot\rangle$ is that you have the two following points:</p> <ul> <li><p>$\forall x\in\mathbb{R}^n,$ $[x]_\mathcal{C}={}^tx,$ </p></li> <li><p>If $\mathcal{F}=(f_1,\dots,f_n)$ is an orthonormal basis of $(\mathbb{R}^n,\langle\cdot,\cdot\rangle)$ (which always exists according to the Gram-Schmidt process), then you have: $$\forall u,v\in\mathbb{R}^n, \langle u,v\rangle={}^t[u]_\mathcal{F}[v]_\mathcal{F}=\langle {}^t[u]_\mathcal{F},{}^t[v]_\mathcal{F}\rangle_\mathcal{C}=\langle U, V\rangle_\mathcal{C}.$$</p></li> </ul> <p>In other words: you can easily compute the coordinates of your vectors in this basis, and, after you have chosen a "good" basis, your inner product can always be written as the inner product given by your method in the standard basis.</p> <p><strong>Edit:</strong> For the geometrical part, I will try to detail my comment: when you represent a vector in the plane, what you really do is choose a basis and draw the (vector of) coordinates in the plane (you see this clearly when you consider vector spaces which are not $\mathbb{R}^n$, the difference being that you cannot immediately see the vectors as $n$-tuples and that you have to consider a basis to see things vectorially). </p> <p>Then, your question is: </p> <blockquote> <p>Which basis should I consider so that I see orthogonality as I am used to seeing it, i.e. so that my vectors form a "right angle"?</p> </blockquote> <p>(The good definition of angle comes from Euclidean geometry; here I suppose that we understand what we want to see on the drawing). 
And the answer is the second point I already noted: we are in the usual case of $\mathbb{R}^n$ and its usual inner product <em>when we consider an orthonormal basis for your inner product</em>. </p> <p>For example, if you consider $(v_1,v_2)=\big((1,0),(1,1)\big)$ and the inner product $\langle x,y\rangle_V={}^t[x]_V[y]_V,$ then, as $v_1$ and $v_2$ are not orthogonal in $\mathbb{R}^2$ for the usual inner product, they won't appear as orthogonal vectors when represented in the standard basis (the "angle" between the two being $45^\circ$), but if you represent them in the basis $(v_1,v_2),$ which is orthonormal for your inner product, they will appear as orthogonal vectors.</p>
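The $(v_1,v_2)=\big((1,0),(1,1)\big)$ example can be checked directly; here is a short sketch computing the coordinate-based inner product $\langle x,y\rangle_V$:

```python
import numpy as np

# Coordinates in the basis V = (v1, v2) are obtained by solving P c = x,
# where P has v1, v2 as columns; then <x, y>_V is the dot product of the
# coordinate vectors.
v1, v2 = np.array([1.0, 0.0]), np.array([1.0, 1.0])
P = np.column_stack([v1, v2])

def inner_V(x, y):
    return np.linalg.solve(P, x) @ np.linalg.solve(P, y)

print(v1 @ v2)          # 1.0  -> not orthogonal for the usual dot product
print(inner_V(v1, v2))  # 0.0  -> orthogonal for <.,.>_V
print(inner_V(v1, v1), inner_V(v2, v2))  # both 1.0 -> in fact orthonormal
```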
1,685,621
amd
265,466
<p>To expand a bit on Daniel Fischer’s comment, coming at this from a different direction might be fruitful. There are, as you’ve seen, many possible inner products. Each one determines a different notion of length and angle—and so orthogonality—via the formulas with which you’re familiar. There’s nothing inherently coordinate-dependent here. Indeed, it’s often possible to define inner products in a coordinate-free way. For example, for vector spaces of functions on the reals, $\int_0^1 f(t)g(t)\,dt$ and $\int_{-1}^1 f(t)g(t)\,dt$ are commonly-used inner products. The fact that there are many different inner products is quite useful. There is, for instance, a method of solving a large class of interesting problems that involves orthogonal projection relative to one of these “non-standard” inner products. </p> <p>Now, when you try to express an inner product in terms of vector coordinates the resulting formula is clearly going to depend on the choice of basis. It turns out that for any inner product one can find a basis for which the formula looks just like the familiar dot product. </p> <p>You might also want to ask yourself what makes the standard basis so “standard?” If your vector space consists of ordered tuples of reals, then there’s a natural choice of basis, but what about other vector spaces? Even in the Euclidean plane, there’s no particular choice of basis that stands out a priori. Indeed, one often chooses an origin and coordinate axes so that a problem takes on a particularly simple form. Once you’ve made that choice, then you can speak of a “standard” basis for that space.</p>
1,183,643
<p>Given an integer $h$</p> <blockquote> <p>What is $N(h)$, the number of full binary trees of height less than $h$?</p> </blockquote> <p><img src="https://i.stack.imgur.com/XcNVi.jpg" alt="enter image description here"></p> <p>For example $N(0)=1,N(1)=2,N(2)=5, N(3)=21$ (as pointed out by <a href="https://math.stackexchange.com/users/212738/travisj">TravisJ</a> in his partial answer). I can't find any expression for $N(h)$, nor a reasonable upper bound.</p> <p><strong>Edit</strong> In a full binary tree (sometimes called a proper binary tree) every node other than the leaves has two children.</p>
Qudit
210,368
<p>Let's consider all possible ways of constructing a full binary tree of height at most $h \geq 1$. The root has either $0$ or $2$ children, so one possibility is that the tree is just a single node. On the other hand, the number of full binary trees such that the root has two children, is equal to the square of the number of full binary trees of height at most $h - 1$. Therefore, we obtain the recurrence</p> <p>\begin{align} N(0) &amp; = 1 \\ N(h) &amp; = N(h - 1)^2 + 1 \text{ if } h \geq 1 \end{align}</p> <p>It follows that $2^{2^{h-1}}$ is a lower bound. On the other hand, $2^{2^{2 h}}$ is an upper bound so the complexity is doubly exponential. I suspect there is a closed form as well.</p> <p>Note that $N(3) = 26$ here does not disagree with the other answers since they consider trees of height exactly $h$.</p>
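The recurrence is easy to tabulate, and the doubly exponential lower bound can be checked for small <span class="math-container">$h$</span> with a quick sketch:

```python
from functools import lru_cache

# N(h): number of full binary trees of height at most h, per the
# recurrence above (single node, or root with two subtrees of height <= h-1).
@lru_cache(maxsize=None)
def N(h):
    return 1 if h == 0 else N(h - 1) ** 2 + 1

print([N(h) for h in range(5)])   # [1, 2, 5, 26, 677]
for h in range(1, 8):             # doubly exponential lower bound
    assert N(h) >= 2 ** (2 ** (h - 1))
```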
5,998
<p>I have restricted the context of my notebooks to each individual notebook. So variables in each notebook are local and are not seen in another notebook.</p> <p>But in two of my notebooks I have two functions that I want to plot simultaneously with the command <code>Show</code>. Please note that I want variables of each notebook to be local except for those two functions. So I need to define two global variables and then plot that two functions in <code>Show</code>. How can I define global variables in Mathematica?</p>
user22889
22,889
<p>Globally define the assumptions as follows:</p> <pre><code>$Assumptions = b &gt;= 0 &amp;&amp; c &gt;= 0 &amp;&amp; {u11, u13, u14} \[Element] Reals</code></pre> <p>then use the globally defined variables like this:</p> <pre><code>Simplify[expression with global variables]</code></pre>
398,535
<p>I understand that "recursive" sets are those that can be completely decided by an algorithm, while "recursively enumerable" sets can be listed by an algorithm (but not necessarily decided). I am curious why the word "recursive" appears in their name. What does the concept of decidability/recognizability have to do with functions that call themselves?</p>
William
13,579
<p>Today, in the context of the branch of mathematics called recursion theory or computability theory, the terms recursive and computable mean the same thing. The term decidable also means the same thing but is more often used in the context of logical theories or in textbooks more focused on computer science. All these terms mean there is an effective procedure to determine membership in some set. Effective formally means accepted by a Turing Machine, $\mu$-recursive function, unlimited register machine, $\lambda$ calculus, etc. </p> <p>The terms computably enumerable, recursively enumerable, and recognizable all mean the same thing. They are the domains of algorithms, the ranges of total algorithms, the $\Sigma_1^0$-definable subsets of $\omega$, etc. </p> <p>As I mentioned above, there are several formal models of computation that define the concept of being computable (decidable, recursive). I suspect the name "recursion" comes from one of the earlier methods of computation. I believe Kleene proved many of the basic theorems of computability theory using primitive recursive functions and the $\mu$-recursive functions. Possibly this is where the term recursion comes from. </p> <p>Soare argued in the 1990s that the field called Recursion Theory should instead be called Computability Theory. The following papers discuss the history of the names "recursion" and "computability" and why he thinks computability is more appropriate. </p> <p><a href="http://www.people.cs.uchicago.edu/~soare/History/compute.pdf">http://www.people.cs.uchicago.edu/~soare/History/compute.pdf</a> <a href="http://www.people.cs.uchicago.edu/~soare/History/siena.pdf">http://www.people.cs.uchicago.edu/~soare/History/siena.pdf</a></p>
398,535
Peter Smith
35,151
<p>There's a history here. In thumbnail version, in the 1930s various attempts were made to formally characterize the intuitively <strong>computable</strong> numerical functions, and relatedly the <strong>effectively decidable</strong> sets of numbers (i.e. those whose membership can be decided by a computable function). Thus we encounter as various formal accounts of computability Church's idea of $\lambda$-computability, Turing computability, Herbrand-Gödel computability, and Gödel and Kleene's $\mu$-recursiveness (and more!). Of these, it is in the latter formal definition of computability where the notion of <strong>recursion</strong> in the sense of a function calling itself centrally features. </p> <p>Now as a matter of technical fact, the $\lambda$-computable functions, the Turing-computable functions, the Herbrand-Gödel computable functions, and the ($\mu$)-recursive functions turn out to be the same class of functions. And for various reasons the preferred term for this class of functions became "recursive". </p> <p>The technical fact that all these attempts (and other later ones) to characterize the intuitively computable functions converge on the recursive functions leads to the Church-Turing thesis that the computable functions in the intuitive sense just <em>are</em> these recursive functions (and the decidable sets of numbers are the recursive sets, i.e. those with a recursive characteristic function).</p> <p>I wouldn't myself say that "computable" <em>means</em> "recursive" (nor would I recommend a linguistic reform to this effect). Rather I'd put it like this: it is a <em>discovery</em> that the intuitive notion of an algorithmically computable function (dressed up a bit) picks out the class of recursive functions. </p>
18,174
<p>I am an undergraduate secondary math education major. In <span class="math-container">$2$</span> weeks I have to give a <a href="https://www.mathmammoth.com/lessons/number_talks.php" rel="nofollow noreferrer">Number Talk</a> in my math ed class on the problem "<span class="math-container">$3.9$</span> times <span class="math-container">$7.5$</span>". I need to come up with as many different solution methods as possible. </p> <p>Here is what I have come up with so far:</p> <ol> <li><p>The most common way: multiply the two numbers "vertically", ignoring the decimal, to get <span class="math-container">$2925$</span>: <span class="math-container">\begin{array} {}\hfill {}^6{}^439\\ \hfill \times\ 75 \\\hline \hfill {}^1 195 \\ \hfill +\ 273\phantom{0} \\\hline \hfill 2925 \end{array}</span> Since there are two numbers that are to the right of the decimal, place the decimal after the <span class="math-container">$9$</span> to get the answer <span class="math-container">$29.25$</span>.</p></li> <li><p>Write both numbers as improper fractions: <span class="math-container">$$3.9= \dfrac{39}{10}$$</span> and <span class="math-container">$$7.5=\dfrac{75}{10}$$</span>Then multiply <span class="math-container">$$\dfrac{39}{10}\cdot\dfrac{75}{10}$$</span> to get <span class="math-container">$\dfrac{2925}{100}$</span> which simplifies to 29.25.</p></li> <li><p>Use lattice multiplication. 
This is a very uncommon method that I doubt the students will use, and I need to review it myself before I consider it.</p></li> <li><p>Since <span class="math-container">$3.9$</span> is very close to <span class="math-container">$4$</span>, we could instead do <span class="math-container">$4\cdot7.5=30$</span> and then subtract <span class="math-container">$0.1\cdot7.5=0.75$</span> to get <span class="math-container">$30 - 0.75=29.25$</span></p></li> <li><p>Similarly, since <span class="math-container">$7.5$</span> rounds up to <span class="math-container">$8$</span>, we can do <span class="math-container">$3.9\cdot 8=31.2$</span> and then subtract <span class="math-container">$0.5\cdot 3.9=1.95$</span> to get <span class="math-container">$31.2-1.95=29.25$</span> </p></li> </ol> <p>Are there any other possible methods the students might use? (<strong>Note:</strong> they are junior college math ed students.) Thanks!</p>
Martin Argerami
475
<p>I would do <span class="math-container">$$3.9\times7.5=\frac{3.9\times30}4=\frac{39\times3}4=\frac{(40-1)\times3}4=\frac{40\times 3}4-\frac34=30-\frac34.$$</span></p>
18,174
Paul_Pedant
13,841
<p>Trachtenberg methods: this depends on calculating the answer one digit at a time, starting at the units. Maybe this is what you call "lattice". This was in vogue around 1960 -- maybe it gets rediscovered every generation.</p> <p>Units can only come from 9 x 5. Write down 5, carry 4.</p> <p>Tens can come from 3 x 5 and 9 x 7, i.e. 78, plus carry. Write down 2, carry 8.</p> <p>Hundreds come from 3 x 7 plus carry, so 29. Write 9, carry 2.</p> <p>Thousands leaves only the carry. Write down 2.</p> <p>On paper, it is customary to put a dot over each digit of the top number as it is used, until you get proficient.</p> <p>There are specific methods for particular multipliers, but they are special cases of this method.</p>
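A sketch of this digit-at-a-time scheme in code (a generic two-number version for illustration, not the full Trachtenberg system): column $k$ of the product collects every digit product $a_ib_j$ with $i+j=k$, plus the carry from the previous column.

```python
# Digit-at-a-time multiplication as described above.  Digits are stored
# units-first, so column k sums da[i] * db[k - i] plus the carry.
def digitwise_mul(a, b):
    da = [int(c) for c in reversed(str(a))]
    db = [int(c) for c in reversed(str(b))]
    out, carry = [], 0
    for k in range(len(da) + len(db) - 1):
        s = carry + sum(da[i] * db[k - i]
                        for i in range(len(da)) if 0 <= k - i < len(db))
        out.append(s % 10)   # write down this digit...
        carry = s // 10      # ...and carry the rest to the next column
    if carry:
        out.append(carry)
    return int("".join(map(str, reversed(out))))

assert digitwise_mul(39, 75) == 2925   # units: 9*5 -> write 5, carry 4, etc.
# Placing the decimal (two digits after the point) gives 29.25.
```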
18,174
Mark
346
<p>Pull out your slide rule. Align the second "1" on the C scale with the "3.9" on the D scale. Move the slide to the "7.5" on the C scale and read "2.92" off the D scale. Observe that there are two decimal places in the problem, and write down the answer as approximately 29.2.</p> <p>(If you want more precision, get a bigger slide rule. A standard 10-inch rule is only good for about three and a half digits.)</p>
18,174
Toby Mak
13,864
<p>This is similar to Xander Henderson's approach, but the calculations have been made slightly easier by using two FOIL multiplications instead of just one:</p> <p><span class="math-container">$$3.9 \times (3.8 + 3.7)$$</span> <span class="math-container">$$=3.9 \times 3.8 + 3.9 \times 3.7$$</span> <span class="math-container">$$=(4 - 0.1)(4-0.2)+(4-0.1)(4-0.3)$$</span> <span class="math-container">$$=16-0.8-0.4+0.02+16-1.2-0.4+0.03$$</span> <span class="math-container">$$=16+16-0.8-1.2-0.4-0.4+0.02+0.03$$</span> <span class="math-container">$$=(16+16)-(0.8+1.2)-(0.4+0.4)+(0.02+0.03)$$</span> <span class="math-container">$$=32-2-0.8+0.05$$</span> <span class="math-container">$$=29.2+0.05$$</span> <span class="math-container">$$=29.25$$</span></p>
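<p>For anyone who wants to check all of these rearrangements at once, here is a short Python sketch (an editorial addition, not part of the answer) using exact rational arithmetic so no floating-point noise creeps in; the function names are mine:</p>

```python
from fractions import Fraction

def direct():
    # 3.9 x 7.5 as exact fractions
    return Fraction(39, 10) * Fraction(75, 10)

def foil_split():
    # 3.9 x (3.8 + 3.7), each product expanded as (4 - small)(4 - small)
    a = (Fraction(4) - Fraction(1, 10)) * (Fraction(4) - Fraction(2, 10))
    b = (Fraction(4) - Fraction(1, 10)) * (Fraction(4) - Fraction(3, 10))
    return a + b

def round_and_correct():
    # method 4 in the question: 4 x 7.5 minus 0.1 x 7.5
    return Fraction(4) * Fraction(75, 10) - Fraction(1, 10) * Fraction(75, 10)

# all three routes land on the same exact value
assert direct() == foil_split() == round_and_correct() == Fraction(2925, 100)
print(float(direct()))   # 29.25
```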
946,738
<p>I ran into a nice question in a book on Discrete Mathematics. I would like someone to show me how to solve this kind of problem, because I am preparing for an entrance exam. </p> <blockquote> <blockquote> <p>If the time is "Wednesday, 4 in the afternoon", what hour and what day will it be after $47^{74}$ hours? </p> </blockquote> </blockquote> <p>Thanks to all. </p>
egreg
62,967
<p>If now it's “Wednesday, 4pm”, then $16$ hours ago it was “Wednesday, 0:00”.</p> <p>Thus the problem is to know what hour it is $47^{74}+16$ hours after “Wed, 0:00”. This is obviously solved by computing the remainder of $47^{74}+16$ divided by $24$; since $47$ is coprime with $24$ and $\varphi(24)=8$, from Fermat-Euler we can say $$ 47^{8}\equiv 1\pmod{24} $$ hence $47^{74}\equiv 47^2=2209\equiv 1\pmod{24}$. Therefore, adding back $16$, we know that we'll be at 17:00, that is, 5pm.</p> <p>In order to know what day it will be, compute the remainder of $47^{74}+16$ modulo $7\cdot 24$; recall that $\varphi(7\cdot24)=6\cdot 8=48$.</p> <p>By Fermat-Euler, you need to compute the remainder of $$ 47^{74-48}=47^{26}=47^2\cdot47^8\cdot47^{16} $$ Now (all congruences are modulo $168$) \begin{align} 47^2&amp;=2209\equiv25\\ 47^3&amp;\equiv25\cdot47=1175\equiv167\equiv-1\\ \end{align} So $47^8=(47^3)^2\cdot47^2\equiv25$ and $47^{16}=(47^3)^5\cdot47\equiv-47\equiv121$.</p> <p>Therefore $$ 47^{74}\equiv25\cdot25\cdot121=75625\equiv25\pmod{168} $$ and it's the same as $25+16=41$ hours passed from “Wed 0:00”: one full day plus $17$ hours, so it will be Thursday at 5pm (the same hour found before).</p>
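<p>The whole computation can be double-checked in a few lines of Python (an editorial addition; three-argument <code>pow</code> does fast modular exponentiation):</p>

```python
# Recompute both remainders from the answer directly.
assert pow(47, 74, 24) == 1      # hour offset: 1 + 16 = 17, i.e. 5 pm
assert pow(47, 74, 168) == 25    # day-and-hour offset modulo a full week

total = pow(47, 74, 168) + 16    # 41 hours after "Wednesday, 0:00"
hour = total % 24                # 17
day_offset = total // 24         # 1 full day later

days = ["Wednesday", "Thursday", "Friday", "Saturday",
        "Sunday", "Monday", "Tuesday"]
print(days[day_offset % 7], f"{hour}:00")   # Thursday 17:00
```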
2,879,915
<p>I am attempting to prove $\lim_{k\to\infty}\dfrac{k^2}{k^2+2k+2}=1$. However, I am getting tripped up on the algebra. I believe I want to show that there exists some $N\in\mathbb{N}$ such that when $k\geq N$, $\left|\dfrac{k^2}{k^2+2k+2}-1\right|&lt;\epsilon$ for arbitrary $\epsilon&gt;0$. I set out to find a choice for $N$ by turning $1$ into $\dfrac{k^2+2k+2}{k^2+2k+2}$, but I am not sure where to go from $\left|\dfrac{-2k-2}{k^2+2k+2}\right|&lt;\epsilon$. How can I rearrange this inequality to be in terms of $k$? Thank you!</p>
Community
-1
<p>You clearly get something that goes to zero. Try dividing by $k^2$ in both numerator and denominator: $$\left|\frac{-2k-2}{k^2+2k+2}\right|=\left|\frac{\frac{-2}k-\frac 2{k^2}}{1+\frac2k+\frac2{k^2}}\right|\lt\left|\frac2k +\frac2{k^2}\right |\lt\left|\frac2k+\frac2k \right |=\left|\frac4k\right |$$, for $k\gt1$.</p> <p>So just choose $N$ such that $N\gt\frac 4{\epsilon}$.</p>
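<p>A quick numerical sanity check of this bound (an editorial addition; the helper name is mine):</p>

```python
# Verify that |k^2/(k^2+2k+2) - 1| = (2k+2)/(k^2+2k+2) < 4/k for k > 1,
# so any N > 4/eps puts every later term within eps of the limit.
def err(k):
    return abs(k**2 / (k**2 + 2*k + 2) - 1)

assert all(err(k) < 4 / k for k in range(2, 10_000))

eps = 1e-3
N = int(4 / eps) + 1
assert all(err(k) < eps for k in range(N, N + 1000))
```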
1,843,369
<p>Consider the congruence </p> <p>$$2x+7y \equiv 5\pmod{12}$$</p> <p>Here $(2,7,12)=1$. Since $(2,12)=2$, we must have $$7y \equiv 5\pmod{2}$$</p> <p>Which clearly gives $y \equiv 1\pmod{2}$, or $y \equiv 1,3,5,7,9,11\pmod{12}$</p> <p>Why does the previous statement follow? This is not a homework problem; it's something I can't understand from the chapter.</p>
3x89g2
90,914
<p>Suppose $[7y]_2 = [5]_2$, then $[7]_2 [y]_2 = [5]_2$, which implies that $[1]_2 [y]_2 = [1]_2$, that is, $y \equiv 1 (\mod 2)$. This implies that $y$ has to be odd. </p>
1,843,369
<p>Consider the congruence </p> <p>$$2x+7y \equiv 5\pmod{12}$$</p> <p>Here $(2,7,12)=1$. Since $(2,12)=2$, we must have $$7y \equiv 5\pmod{2}$$</p> <p>Which clearly gives $y \equiv 1\pmod{2}$, or $y \equiv 1,3,5,7,9,11\pmod{12}$</p> <p>Why does the previous statement follow? This is not a homework problem; it's something I can't understand from the chapter.</p>
Arthur
15,500
<p>$2x + 7y$ must be odd (since $5\mod 12$ is odd). But $2x$ cannot be odd, and therefore $7y$ must be odd. That means $y$ must be odd. What you've written is saying the same thing, only with modulo $2$ instead of even / odd.</p>
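<p>A brute-force check (an editorial addition) over all residue pairs confirms the parity argument:</p>

```python
# Enumerate all (x, y) mod 12 with 2x + 7y = 5 (mod 12):
# every solution has odd y, and every odd residue class occurs.
solutions = [(x, y) for x in range(12) for y in range(12)
             if (2*x + 7*y) % 12 == 5]

assert len(solutions) == 12                       # solutions exist
assert all(y % 2 == 1 for _, y in solutions)      # y is always odd
assert sorted({y for _, y in solutions}) == [1, 3, 5, 7, 9, 11]
```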
2,328,677
<p>I wish to find the area between the curves:</p> <p>$y=\sqrt{x}$</p> <p>and </p> <p>$y=x^{2}-3x-2$</p> <p>and between $x=1, x=9$</p> <p>Now, I think that the first thing to do is to find the intersection between the two curves, and then to find which curve is the upper one before and after the intersection point. At the end, an integral will follow.</p> <p>The problem is, how do you find the intersection point? If you equate the two curves, you get an equation which is hard to solve. Is there a trick I am missing? How do you solve it when you have a square root and a power of 2?</p> <p>Thank you !</p>
Mundron Schmidt
448,151
<p>You ask how to solve $x^2-3x-2=\sqrt{x}$? Square both sides and you get \begin{align} x^2-3x-2=\sqrt{x}&amp;\Leftrightarrow (x^2-3x-2)^2=x\\ &amp;\Leftrightarrow x^4-6x^3+5x^2+12x+4=x\\ &amp;\Leftrightarrow x^4-6x^3+5x^2+11x+4=0. \end{align} If you are lucky, one of the rational root candidates $-4,-2,-1,1,2,4$ is a solution of your equation. Since you consider the positive solution, you can drop $-4$, $-2$ and $-1$. Evaluate the polynomial at $1$, $2$ and $4$ and you get that $x=4$ is one solution. If you factorize you get $$ (x-4)\underbrace{(x^3-2x^2-3x-1)}_{=:p(x)}=0. $$ Now you have to be careful since $p(1)=1-2-3-1=-5&lt;0$ while $p(4)=4^3-2\cdot4^2-3\cdot 4-1=19&gt;0$. The intermediate value theorem yields a zero $\xi\in(1,4)$.</p> <p>What happened? By squaring we flipped the negative part of $x^2-3x-2$ to positive and that produced a fake intersection point at $\xi$. But we know that $x^2-3x-2$ intersects $\sqrt{x}$ just once and since $$ 4^2-3\cdot4-2=2=\sqrt{4} $$ we can conclude that $\xi$ is a fake solution while $4$ is the real intersection point.</p>
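<p>A short standard-library check (an editorial addition) of the factorisation and of the "fake" root:</p>

```python
# x = 4 solves the quartic, while the extra root xi of the cubic cofactor
# solves x^2 - 3x - 2 = -sqrt(x): an artifact of squaring, not a real
# intersection of the original curves.
def quartic(x):
    return x**4 - 6*x**3 + 5*x**2 + 11*x + 4

def p(x):  # cubic cofactor after dividing out (x - 4)
    return x**3 - 2*x**2 - 3*x - 1

assert quartic(4) == 0

# bisect p on (1, 4), where the intermediate value theorem applies
lo, hi = 1.0, 4.0
for _ in range(60):
    mid = (lo + hi) / 2
    if p(mid) < 0:
        lo = mid
    else:
        hi = mid
xi = (lo + hi) / 2

# xi satisfies the squared equation with the *wrong* sign of the root
assert abs((xi**2 - 3*xi - 2) + xi**0.5) < 1e-9
print(xi)   # the "fake" intersection point, roughly 3.08
```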
220,170
<p>My question is rather philosophical: without using advanced tools such as Perelman–Thurston geometrisation, how can we convince ourselves that the class of closed oriented $3$-manifolds is large, and that simple invariants such as Betti numbers are not even close to classifying it?</p> <p>For example I would start with:</p> <ol> <li><p>If $S_g$ is the closed oriented surface of genus $g$, the family $S_g \times S^1$ gives an infinite number of pairwise non-homeomorphic $3$-manifolds.</p></li> <li><p>Mapping tori with fiber $S_g$ give as many non-diffeomorphic $3$-manifolds as there are conjugacy classes in the mapping class group of $S_g$, which can be shown to be large using the symplectic representation, for instance.</p></li> </ol> <p>I would also like to say that Heegaard splittings give rise to a lot of essentially different $3$-manifolds, but I don't know any way to show this. </p> <p>So if you know a nice construction which would help in understanding the combinatorial complexity of three-manifolds, please share it :) </p>
Liviu Nicolaescu
20,302
<p>Here are two examples suggesting the complexity of the world of $3$-manifolds.</p> <p>The first is the classical result that any $3$-manifold can be obtained by integral surgery on a link in $S^3$. If you believe that knots and links form a complex Universe, then this result should suggest that $3$-manifolds cannot be much simpler.</p> <p>The next example comes from the striking work of <a href="http://arxiv.org/pdf/math/0502567.pdf">Dunfield and Thurston</a> on random $3$-manifolds. You can get such things by picking random elements in the mapping class group, where randomness is generated by a random walk on this group. This has led to the discovery of strange $3$-manifolds. For more recent work on this topic see also this paper of <a href="http://arxiv.org/pdf/1405.6410.pdf">Lubotzky, Maher and Wu</a>.</p>
3,900,435
<p>Suppose that <span class="math-container">$\mathbf{Y}\sim N_3(0,\,\sigma^2\mathbf{I}_3)$</span> and that <span class="math-container">$Y_0$</span> is <span class="math-container">$N(0,\,\sigma_0^2)$</span>, independently of the <span class="math-container">$Y_i$</span>'s. My question is: does <span class="math-container">$(\mathbf{Y}, Y_0)$</span> also have a multivariate normal distribution? Using moment generating functions, it suffices to know that <span class="math-container">$\mathbf Y$</span> and <span class="math-container">$Y_0$</span> are independent. But is this the case?</p>
perpetuallyconfused
570,087
<p>So apparently, <a href="https://sbseminar.wordpress.com/2007/10/30/theme-and-variations-schroeder-bernstein/" rel="nofollow noreferrer">it is possible for this to fail</a>, though I don't profess to understand the counterexamples given in the comments. You are asking if fields satisfy the <a href="https://en.wikipedia.org/wiki/Schr%C3%B6der%E2%80%93Bernstein_property" rel="nofollow noreferrer">Cantor–Schröder–Bernstein</a> property, and it appears that they do not.</p> <p>Edit: I take that back, I think I understand one example, if not its construction. There is a natural inclusion <span class="math-container">$\mathbb{C} \hookrightarrow \mathbb{C}(x)$</span> and a (complicated) inclusion <span class="math-container">$\mathbb{C}(x) \hookrightarrow F$</span> where <span class="math-container">$F \cong \mathbb{C}$</span>, where <span class="math-container">$\mathbb{C}(x)$</span> is the field of rational functions over <span class="math-container">$\mathbb{C}$</span>. However, the complex field is algebraically closed whereas <span class="math-container">$\mathbb{C}(x)$</span> is (apparently) not, so they cannot be isomorphic.</p>
4,432,047
<p><span class="math-container">$3\times 2$</span> is <span class="math-container">$3+3$</span> or <span class="math-container">$2+2+2$</span>. We know both are correct as multiplication is commutative for whole numbers. But which one is <strong>mathematically</strong> accurate?</p>
mweiss
124,095
<p>I think this is an interesting and non-trivial question, but essentially impossible to answer. Let me explain why I think this.</p> <p>The usual (I think?) way to define multiplication in the natural numbers is recursively: we define, for each <span class="math-container">$n \in \mathbb N$</span>, a function <span class="math-container">$Mult_n : \mathbb N \to \mathbb N$</span> as follows:</p> <ul> <li>First we define the base of the recursion to be <span class="math-container">$Mult_n(1) = n$</span>.</li> <li>Then for all <span class="math-container">$m$</span> we define <span class="math-container">$Mult_n(m + 1) = Mult_n(m) + n$</span>.</li> </ul> <p>(Alternatively, if one is using the convention that the smallest natural number is <span class="math-container">$0$</span>, we can begin the recursion with <span class="math-container">$Mult_n(0) = 0$</span>. It doesn't really matter.)</p> <p>With this definition, <span class="math-container">$$Mult_3(2) = Mult_3(1) + 3 = 3 + 3$$</span></p> <p>whereas <span class="math-container">$$Mult_2(3) = Mult_2(2) + 2 = (Mult_2(1) + 2) + 2 = (2 + 2) + 2$$</span></p> <p>At this point you do one of the following things:</p> <p><strong>Option 1:</strong></p> <ul> <li>We introduce the notational shorthand <span class="math-container">$m \times n = Mult_m(n)$</span></li> <li>We then prove the important (and not entirely obvious!) theorem that <span class="math-container">$Mult_n(m) = Mult_m(n)$</span> for any natural numbers <span class="math-container">$m, n$</span>.</li> </ul> <p><strong>Option 2:</strong></p> <ul> <li>We prove the important (and not entirely obvious!) 
theorem that <span class="math-container">$Mult_n(m) = Mult_m(n)$</span> for any natural numbers <span class="math-container">$m, n$</span>.</li> <li>We introduce the notational shorthand that <span class="math-container">$m \times n$</span> means the common value of <span class="math-container">$Mult_m(n)$</span> and <span class="math-container">$Mult_n(m)$</span>, which have been proven to be equal.</li> </ul> <p>The crucial point here is that it <em>does not matter which order we do it in</em>. If we introduce the notational shorthand first, then <span class="math-container">$3 \times 2$</span> is <strong>by definition</strong> equal to <span class="math-container">$3 + 3$</span>, and the fact that this is equivalent to <span class="math-container">$2 + 2 + 2$</span> is a consequence of the theorem.</p> <p>On the other hand, if we prove the theorem first, before introducing the notational shorthand, then <span class="math-container">$3 \times 2$</span> is <strong>by definition</strong> equal to <strong>both</strong> <span class="math-container">$3 + 3$</span> and <span class="math-container">$2 + 2 + 2$</span>, which are equal by the theorem.</p> <p>At the end of the day the difference between these two approaches has to do not with the content of the mathematics but with how you choose to narrate the exposition of the mathematics, which is not really a mathematical question but more of a stylistic choice.</p>
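<p>The recursive definition above can be transcribed directly into code; this sketch (an editorial addition) also spot-checks the commutativity theorem rather than proving it:</p>

```python
def mult(n, m):
    """Mult_n(m), defined by recursion on m as in the answer."""
    if m == 1:
        return n                   # base case: Mult_n(1) = n
    return mult(n, m - 1) + n      # Mult_n(m+1) = Mult_n(m) + n

# the two unfoldings discussed above
assert mult(3, 2) == 3 + 3
assert mult(2, 3) == (2 + 2) + 2

# commutativity, checked on a small range (evidence, not a proof!)
assert all(mult(a, b) == mult(b, a)
           for a in range(1, 20) for b in range(1, 20))
```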
80,452
<p>When can the curvature operator of a Riemannian manifold (M,g) be diagonalized by a basis of the following form:</p> <p>$\{E_i\wedge E_j\}$, where $\{E_i\}$ is an orthonormal basis of the tangent space? If the manifold is three-dimensional then it is always possible. But what about the higher-dimensional case?</p>
Anton Petrunin
1,441
<p><em>This is not what you are asking, but maybe you want to know.</em></p> <ol> <li><p>$\mathbb C\mathrm P^2$ does not admit a metric with splitting tensor in your sense. It has a non-zero Pontryagin number and any Pontryagin number can be expressed as an integral of some function of curvature tensor which is zero for splitting tensors. (In fact such a function is zero on any tensor which has orientation reversing symmetry and clearly splitting tensors are among them.) </p></li> <li><p>Any riemannian metric on $\mathbb S^n$ can be $C^0$ approximated by $C^\infty$ metrics with splitting curvature tensor. This follows from the Nash--Kuiper embedding theorem and part (1) in the answer of Thomas Richard.</p></li> </ol>
893,839
<p>I am dealing with dual spaces for the first time.</p> <p>I just wanted to ask: is there any practical application of dual spaces, or are they just some random mathematical construct? If there are applications, please give a few.</p>
Jonas Dahlbæk
161,825
<p>Locally convex spaces have a very rich theory for their dual spaces, and much of functional analysis (as the name suggests) is devoted to the study of such spaces. Consider how important the concept of a basis is in the finite dimensional setting and then note that the continuous linear functionals are a natural generalization to infinite dimensional spaces.</p> <p>Of course, the prime example would be Hilbert spaces, where the inner product allows one to identify the dual space with the space itself in a very direct manner. In a sense this obscures the role of the dual space, but you should take note of how important the inner product (and thus the linear functionals) are for the theory.</p>
13,616
<p>Are there enough interesting results that hold for general locally ringed spaces for a book to have been written? If there are, do you know of a book? If you do, please post it, one per answer, with a short description.</p> <p>I think that the tags are relevant, but feel free to change them.</p> <p>Also, have there been any attempts to classify locally ringed spaces? Certainly, two large classes of locally ringed spaces are schemes and manifolds, but this still doesn't cover all locally ringed spaces.</p>
Vectornaut
1,096
<p><em>An Introduction to Families, Deformations and Moduli</em>, by T.E. Venkata Balaji, has a beautiful little appendix that introduces smooth manifolds, complex manifolds, schemes, and complex analytic spaces in a unified way as locally ringed spaces. Although it doesn't say much in general about arbitrary locally ringed spaces, I enjoyed reading it and seeing how the stuff I knew about particular classes of locally ringed spaces fit into the general framework. One thing that particularly struck me, although it's obvious in hindsight, was the remark (A.5.5) that for all the categories of spaces mentioned above, a morphism as defined classically is the same thing as a morphism of locally ringed spaces.</p>
1,014,614
<p>Theorem 3.2 says: Every finite orthogonal set of nonzero vectors is linearly independent.</p> <p>The proof is simple, but it seems to me that the finiteness is redundant, for the argument in the proof applies to an infinite set. Am I right?</p> <p>The proof runs as follows: If $k &gt; 0$ is an integer, if $a_{1}, \dots, a_{k}$ are reals, if $v_{1}, \dots, v_{k}$ are vectors, and if $$\sum_{1}^{k}a_{j}v_{j} = 0,$$ then, by taking inner product with some $v_{i}$ we have $$a_{i}(v_{i} \cdot v_{i}) = 0,$$ so that $$a_{i} = 0.$$ Since this argument holds for any $1 \leq i \leq k,$ qed.</p> <p>To me, the proof above holds for any $k$.</p>
Alfred Chern
42,820
<p>Suppose that $f(x)$ is defined on $\mathbb{R}$.</p> <p>$f(x)$ uniformly continuous </p> <p>$\Rightarrow\forall\epsilon&gt;0,\exists\delta&gt;0,\forall x_1,x_2\in\mathbb{R}:|x_1-x_2|\leq\delta,|f(x_1)-f(x_2)|&lt;\epsilon$, then fix $\epsilon=1$.</p> <p>$\forall x\in\mathbb{R},\exists k\in\mathbb{N}^+$, such that $\frac{|x|}{\delta}\leq k&lt;\frac{|x|}{\delta}+1$, then $\frac{|x|}{k}\leq\delta$.</p> <p>Note that $f(x)=\sum_{i=1}^k[f(\frac{i}{k}x)-f(\frac{i-1}{k}x)]+f(0)$, so $$|f(x)|\leq\sum_{i=1}^k|f(\frac{i}{k}x)-f(\frac{i-1}{k}x)|+|f(0)|&lt;k+|f(0)|&lt;\frac{1}{\delta}|x|+1+|f(0)|$$ Set $a=\frac{1}{\delta},b=1+|f(0)|$.</p>
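<p>The telescoping bound can be watched numerically for a sample uniformly continuous function; the choice <span class="math-container">$f(x)=\sqrt{|x|}$</span> with <span class="math-container">$\delta=1$</span> below is an editorial illustration, not part of the proof:</p>

```python
import math
import random

def f(x):                  # a uniformly continuous function on R
    return math.sqrt(abs(x))

delta = 1.0                # for eps = 1: |f(x1)-f(x2)| <= sqrt(|x1-x2|) <= 1

# check the uniform-continuity modulus on random nearby pairs
random.seed(0)
for _ in range(10_000):
    x1 = random.uniform(-1e6, 1e6)
    x2 = x1 + random.uniform(-delta, delta)
    assert abs(f(x1) - f(x2)) <= 1 + 1e-9

# the telescoping argument then gives the linear bound a|x| + b
a, b = 1 / delta, 1 + abs(f(0))
for k in range(-1000, 1001):
    x = 37.1 * k
    assert abs(f(x)) <= a * abs(x) + b
```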
515,915
<p><strong>Definition</strong> A $p$-adic integer is a (formal) series $$\alpha=a_0+a_1p+a_2p^2+\ldots$$ with $0\leq a_i&lt;p$.</p> <p>The set of $p$-adic integers is denoted by $\mathbb{Z}_p$. If we cut an element $\alpha\in\mathbb{Z}_p$ at its $k$-th term $$\alpha_k=a_0+a_1p+\ldots+a_{k-1}p^{k-1}$$ $\textbf{we get a well defined element of }$ $\mathbf{\mathbb{Z}/p^k\mathbb{Z}}$.</p> <p>Could someone explain the bold part to me?</p>
Eric Stucky
31,888
<p><strong>Hint:</strong> Any non-identity rotation always fixes a unique point. Which points (if any) are fixed by $RT$ and $TR$?</p>
2,842,481
<p>If the minimum and the maximum values of $y = (x^2-3x+c)/(x^2+3x+c)$ are $7$ and $1/7$, then the value of $c$ is?</p> <p>I cross multiplied the equation and tried to find its discriminant, but I don't think it gets me anywhere. A little hint would be appreciated!</p>
mengdie1982
560,634
<h1>Solution</h1> <p>First, we should constrain the value of <span class="math-container">$c$</span> such that <span class="math-container">$x^2+3x+c \neq 0$</span> for all <span class="math-container">$x \in \mathbb{R}.$</span> Otherwise, there necessarily exists at least one infinite discontinuity for <span class="math-container">$y=f(x)$</span>, and if so, there exists no minimum or maximum value for <span class="math-container">$y$</span>. For this purpose, let <span class="math-container">$\Delta=9-4c&lt;0$</span>, i.e. <span class="math-container">$c&gt;\dfrac{9}{4},$</span> which is enough.</p> <p>Under this constraint, <span class="math-container">$y=f(x)$</span> is continuous over <span class="math-container">$(-\infty,+\infty)$</span>. It's clear that the maximum and minimum values given can only be reached at a local extremum point.</p> <p>Now, notice that <span class="math-container">$$y'=f'(x)=\frac{6(x^2-c)}{(x^2+3x+c)^2}.$$</span> Let <span class="math-container">$y'=0$</span>. Then <span class="math-container">$$x=\pm\sqrt{c}.$$</span> Hence, <span class="math-container">$$f(\sqrt{c})=\frac{2\sqrt{c}-3}{2\sqrt{c}+3}=1-\frac{6}{2\sqrt{c}+3}&lt;1,~~~~~f(-\sqrt{c})=\frac{2\sqrt{c}+3}{2\sqrt{c}-3}=1+\frac{6}{2\sqrt{c}-3}.$$</span></p> <p>Hence, setting <span class="math-container">$f(\sqrt{c})=\dfrac{1}{7}$</span> gives <span class="math-container">$$c=4.$$</span></p> <p>Finally, we may verify that <span class="math-container">$c=4$</span> could satisfy all the conditions. We are done.</p>
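<p>A numerical confirmation (an editorial addition) that <span class="math-container">$c=4$</span> indeed produces the extrema <span class="math-container">$7$</span> and <span class="math-container">$1/7$</span>:</p>

```python
# With c = 4 the denominator never vanishes (discriminant 9 - 4c < 0),
# the critical points are x = +-sqrt(c) = +-2, and the extrema are 1/7 and 7.
c = 4
assert 9 - 4*c < 0

def f(x):
    return (x*x - 3*x + c) / (x*x + 3*x + c)

assert abs(f(2) - 1/7) < 1e-12       # minimum value
assert abs(f(-2) - 7) < 1e-12        # maximum value

# sample widely: every value stays inside [1/7, 7]
xs = [k / 100 for k in range(-10_000, 10_001)]
assert all(1/7 - 1e-9 <= f(x) <= 7 + 1e-9 for x in xs)
```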
40,618
<p>Let $f_{=}$ be the function from $\mathbb{R}^{2}$ to $\mathbb{R}$ defined as follows: (1) if $x = y$ then $f_{=}(x,y) = 1$; (2) $f_{=}(x,y) = 0$ otherwise.</p> <p>I would like to have a proof for / a reference to a textbook proof of the following theorem (if it indeed is a theorem):</p> <p>$f_{=}$ is uncomputable even if one restricts the domain of $f_{=}$ to a proper subset of $\mathbb{R}^{2}$, viz. the set of the computable real numbers</p> <p>Thanks!</p>
Thierry Zell
8,212
<p><a href="http://people.bath.ac.uk/masdr/" rel="nofollow">Dan Richardson</a> in Bath has extensively studied the problem of recognizing zero under various hypotheses. I would be hard-pressed to give you an account of the details, because there is a lot of subtle and surprising results, but his page has all his papers and I'm sure you can find something of interest there.</p>
1,456,262
<p>Show that the equation to a circle with center at $z_0$ and radius $r$ can be written as $$|z-z_0| = r$$ or as $$z\bar{z} - z\bar{z_0} - \bar{z}z_0 + |z_0|^2 = r^2$$</p> <p>I let $z_0 = (x_0 + iy_0) = (x_0,y_0)$. Now I have $$(x-x_0)^2 + (y-y_0)^2 = r^2$$</p> <p>I know $x=\frac{1}{2}(z+\bar{z})$ and $y = \frac{1}{2i}(z-\bar{z})$.</p> <p>I'm not really sure where to go from here. I seem to be having issues with these types of geometric characterization problems.</p>
Julián Aguirre
4,791
<p>Variation of constants is used to find a particular solution of the complete system once you know the general solution of the homogeneous system. Let $U=(1,1)$ be an eigenvector of the eigenvalue $1$ and let $V$ be another vector. Look for a solution of the form $$ t\,e^t\,U+e^t\,V. $$ This leads to the equation $(A-I)V=U$.</p> <p>Another way to solve the system is to reduce it to a second order equation: $$ x''=2\,x'-y'+4=2\,x'-x-e^{-t}+4. $$</p>
2,873,449
<p>For a continuous function $f: \mathbb{R} \to \mathbb{R}$, determine: $$ \lim_{x \to 0} \frac{1}{x^2} \int_{0}^{x} f(t)t \space dt $$ Since the function is continuous I can assume that it's also integrable since continuity implies integrability. I assume furthermore that there exists a function $F$ which is an antiderivative of $f$ for which the following are true:</p> <p>$$\int_{a}^{b} f(x) dx =F(b)-F(a) \space\space\space\space\space a,b\in \mathbb{R} \space $$ $$ \lim_{x \to x_{0}} \frac{F(x)-F(x_{0})}{x-x_{0}}=f(x_{0})$$ And that for $f$ $$ \lim_{x \to x_{0}} f(x)=f(x_{0})$$</p> <p>In order to find the limit, I used partial integration and ended up with: $$ \lim_{x \to 0} \frac{F(x)(x-1) +F(0)}{x^2}$$ At this point, I tried to use L'Hôpital's rule and ended up with the value $\frac{f(0)}{2}$ which seems totally wrong to me. Any advice would be appreciated; I mainly think that my solution idea is wrong, but I am stuck.</p>
Martin Argerami
22,857
<p>The limit is indeed $f(0)/2$. L'Hôpital's Rule applies because the limit of the quotient of the derivatives of numerator and denominator exists. Thus $$ \lim_{x\to0}\frac1{x^2}\int_0^x f(t)\,t\,dt =\lim_{x\to0}\frac{x\,f(x)}{2x}=\frac{f(0)}2 $$</p>
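<p>To see the limit numerically, here is a sketch (an editorial addition) with the concrete choice <span class="math-container">$f(t)=\cos t$</span>, for which the limit should be <span class="math-container">$f(0)/2=1/2$</span>:</p>

```python
import math

def integral(x, steps=10_000):
    """Midpoint rule for the integral of t*cos(t) over [0, x]."""
    h = x / steps
    return sum(h * (i + 0.5) * h * math.cos((i + 0.5) * h)
               for i in range(steps))

# the ratio approaches f(0)/2 = 0.5 as x -> 0
for x in (0.1, 0.01, 0.001):
    print(x, integral(x) / x**2)

assert abs(integral(0.001) / 0.001**2 - 0.5) < 1e-3
```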
1,729,220
<p>Let</p> <p>$$f(x)=\frac{1}{x^2+3x+2}$$</p> <p>I must find $$\lim_{n\to\infty}\sum_{i=0}^n \frac{(-1)^i}{i!}f^{(i)}(1)$$</p> <p>How should I proceed?</p>
oren revenge
306,175
<p>I figured it out, here is the solution:</p> <p>by decomposing f(x) and applying some derivatives you get (note that I have skipped induction) $$f^{(n)}(1)=\frac{(-1)^nn!}{2^{n+1}}-\frac{(-1)^nn!}{3^{n+1}}$$</p> <p>after some simple algebraic manipulation the limit becomes $$\lim_{n\to\infty}{\sum^n_{k=0}{\frac{1}{2^{k+1}}-\frac{1}{3^{k+1}}}}$$</p> <p>those are 2 geometric series with ratios of q = $\frac{1}{2}$ and $\frac{1}{3}$ using the formula for the geometric series $$\sum^n_{k=0}{b_0q^k}=b_0\frac{q^{n+1}-1}{q-1}$$</p> <p>we can rewrite the limit as</p> <p>$$\lim_{n\to\infty}{1-\frac{1}{2^{n+1}}} - \lim_{n\to\infty}{\frac{1}{2}(1-\frac{1}{3^{n+1}})} = \frac{1}{2}$$</p>
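<p>A quick check (an editorial addition) of the closed form for <span class="math-container">$f^{(n)}(1)$</span> and of the limit, with exact rational arithmetic:</p>

```python
from fractions import Fraction
from math import factorial

def deriv_at_1(n):
    # f^(n)(1) = (-1)^n n! (1/2^(n+1) - 1/3^(n+1)), from the partial
    # fractions f(x) = 1/(x+1) - 1/(x+2)
    return (-1)**n * factorial(n) * (Fraction(1, 2**(n+1)) - Fraction(1, 3**(n+1)))

# spot-check against direct values: f(1) = 1/6, f'(1) = -5/36
assert deriv_at_1(0) == Fraction(1, 6)
assert deriv_at_1(1) == Fraction(-5, 36)

# partial sums of sum (-1)^i / i! * f^(i)(1) approach 1/2
partial = sum(Fraction((-1)**i, factorial(i)) * deriv_at_1(i) for i in range(40))
assert abs(partial - Fraction(1, 2)) < Fraction(1, 10**9)
print(float(partial))
```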
221,026
<p>I need to find the value of <span class="math-container">$z$</span> for a particular value of <span class="math-container">$D_c$</span> (eg. <span class="math-container">$500$</span>), but <span class="math-container">$z$</span> is inside an integral, and I'm not able to use <code>Solve</code> since the integral is giving <code>Hypergeometric2F1</code> function as the output.</p> <pre><code>OmegaM = 0.3111; OmegaLambda = 0.6889; Dc = 500; eqn = Integrate[(OmegaM (1 + z1)^3 + OmegaLambda)^(-1/2), {z1, 0, z}, Assumptions -&gt; z &gt; 0] </code></pre> <blockquote> <pre><code>-1.1473+(1.20482+1.20482z)Hypergeometric2F1[0.333333,0.5,1.33333,-0.451589(1.+z)^3] </code></pre> </blockquote> <pre><code>zvalue = Solve[eqn == Dc, z] </code></pre> <blockquote> <pre><code>Solve was unable to solve the system with inexact coefficients or the system obtained by direct rationalization of inexact numbers present in the system. Since many of the methods used by Solve require exact input, providing Solve with an exact version of the system may help. </code></pre> </blockquote> <p>Is there any other way I can solve this equation? </p> <p>Also, Integrate is taking some time and I'd like it to be fast since I need to put it in a loop with lots of <span class="math-container">$z$</span> values to be computed for corresponding <span class="math-container">$D_c$</span> values. </p>
SuperCiocia
35,368
<p>From your integral I get the following equation (same as yours when you plot it):</p> <pre><code>eqn[z_] := 3.2566440560469836` - ( 3.5857498598223954` Hypergeometric2F1[1/6, 1/2, 7/6, -(2.2144005143040824`/(1 + z)^3)])/Sqrt[1 + z] </code></pre> <p>Both <code>Solve</code> and <code>NSolve</code> fail.</p> <p>So I tried <code>FindRoot</code>:</p> <pre><code>Dc = 3.1; FindRoot[ eqn[z] - Dc, {z, 0}] </code></pre> <blockquote> <pre><code>{z -&gt; 523.001} </code></pre> </blockquote> <p>which agrees with a graphical solution: <a href="https://i.stack.imgur.com/e9CfB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/e9CfB.png" alt="enter image description here" /></a></p> <p>I don't think it has a solution for <code>Dc=500</code> as the <code>eqn</code> flattens out to <code>3.25664</code> as <span class="math-container">$z\rightarrow \infty$</span>:</p> <pre><code>Limit[ eqn[z], {z -&gt; ∞}] </code></pre> <blockquote> <pre><code>3.25664 </code></pre> </blockquote>
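<p>As a cross-check outside Mathematica (an editorial addition; all names and the Simpson-rule/bisection choices are mine), the same integral can be computed with plain Python:</p>

```python
import math

OMEGA_M, OMEGA_L = 0.3111, 0.6889

def integrand(z):
    return 1.0 / math.sqrt(OMEGA_M * (1.0 + z)**3 + OMEGA_L)

def eqn(z, steps=40_000):
    """Composite Simpson's rule for the comoving-distance integral on [0, z]."""
    h = z / steps
    s = integrand(0.0) + integrand(z)
    s += 4.0 * sum(integrand((2*i - 1) * h) for i in range(1, steps // 2 + 1))
    s += 2.0 * sum(integrand(2*i * h) for i in range(1, steps // 2))
    return s * h / 3.0

# the tail beyond z is below 2/sqrt(OmegaM*(1+z)), so the integral can
# never reach Dc = 500: it saturates near 3.2566
assert eqn(2000.0) + 2.0 / math.sqrt(OMEGA_M * 2001.0) < 3.26

# bisect eqn(z) = 3.1, the Dc value used with FindRoot above
lo, hi, target = 0.0, 2000.0, 3.1
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if eqn(mid) < target:
        lo = mid
    else:
        hi = mid
z_root = 0.5 * (lo + hi)
print(z_root)   # close to 523, matching the FindRoot result
```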