| qid | question | author | author_id | answer |
|---|---|---|---|---|
2,886,544 | <p>For some set $V \subset [a,b]^d$, define the convex hull of $V$ as the set</p>
<p>$$\{\lambda_1v_1 + ... + \lambda_kv_k: \ \lambda_i \ge 0, \ v_i \in V, \ \sum_{i=1}^k \lambda_i = 1, k = 1, 2, 3, ...\}.$$</p>
<p>I don't understand why exactly these vectors form the convex hull of $V$. Why wouldn't I be able to choose $\lambda_i = 1$ and $\lambda_j = 0$ for $j \neq i$ and thus make every $v_i \in V$ be a part of the convex hull?</p>
| Jack M | 30,481 | <p>In the following picture, the convex hull of the set of black points is the region inside of the red line.</p>
<p><a href="https://i.stack.imgur.com/gQeDw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gQeDw.png" alt="enter image description here"></a></p>
<p>So yes, the convex hull of the $v_i$ contains each $v_i$.</p>
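A quick numerical illustration of the point (Python; the sample points are hypothetical, not taken from the picture): choosing $\lambda_i = 1$ and every other weight $0$ recovers $v_i$ itself, so every point of $V$ is a convex combination and hence belongs to the hull.

```python
import numpy as np

# Hypothetical sample points standing in for V.
V = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])

def convex_combination(points, lambdas):
    """Return sum_i lambda_i * v_i, enforcing the convexity constraints."""
    lambdas = np.asarray(lambdas, dtype=float)
    if np.any(lambdas < 0) or not np.isclose(lambdas.sum(), 1.0):
        raise ValueError("weights must be nonnegative and sum to 1")
    return lambdas @ points

# lambda = (0, 1, 0) recovers the second point of V itself,
# so each v_i is trivially in the convex hull.
recovered = convex_combination(V, [0, 1, 0])
centroid = convex_combination(V, [1/3, 1/3, 1/3])
```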
|
866,847 | <p><strong>Question:</strong><br/>
Show that $$\sum_{\{a_1, a_2, \dots, a_k\}\subseteq\{1, 2, \dots, n\}}\frac{1}{a_1 a_2 \cdots a_k} = n$$
(Here the sum is over all non-empty subsets of the set of the $n$ smallest positive integers.)</p>
<blockquote>
<p>I made an attempt and then encountered an inconsistency with the
answer key which is detailed at the very bottom.</p>
</blockquote>
<p><strong>My Attempt:</strong><br/>
For $n=1$, there's only one non-empty subset, $\{1\}$; hence,</p>
<p>$$\frac{1}{1} = 1$$</p>
<p>Let the proposition in the question be $P(n)$. Suppose $P(n)$ is true; we show that $P(n+1)$ follows. $P(n)$ states that</p>
<p>$$\sum_{\{a_1, a_2, \dots, a_k\}\subseteq\{1, 2, \dots, n\}}\frac{1}{a_1 a_2 \cdots a_k}=n$$</p>
<p>For $P(n+1)$, I add the integer $n+1$ to the set; the possible subsets divide into 3 cases:</p>
<ol>
<li><p>Subset of only $\{n+1\}$.</p></li>
<li><p>Subsets of all positive integers containing $\{n+1\}$.</p></li>
<li><p>Subsets of all positive integers not containing $\{n+1\}$.</p></li>
</ol>
<p>The first case contributes $$\frac{1}{n+1},$$
the second case contributes $$\frac{P(n)}{n+1} = \frac{n}{n+1},$$
and the third case, by the inductive hypothesis, contributes $n$.</p>
<p>Adding them all, we get $$\frac{1}{n+1} + \frac{n}{n+1} + n = n + 1,$$ completing the proof.</p>
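The identity (and the case count in the argument above) can be sanity-checked by brute force for small $n$; a sketch in Python using exact rational arithmetic:

```python
from fractions import Fraction
from itertools import combinations

def subset_sum(n):
    """Sum of 1/(a_1 * ... * a_k) over all non-empty subsets of {1, ..., n}."""
    total = Fraction(0)
    for k in range(1, n + 1):
        for subset in combinations(range(1, n + 1), k):
            prod = 1
            for a in subset:
                prod *= a
            total += Fraction(1, prod)
    return total

# The identity predicts subset_sum(n) == n for every n.
results = {n: subset_sum(n) for n in range(1, 7)}
```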
<p><strong>Problem:</strong><br/>
The following are cases in the answer key:</p>
<ol>
<li>Just $\{n+1\}$</li>
<li>A non-empty subset of the first $n$ positive integers together with $n+1$.</li>
<li>A non-empty subset of the first $n$ positive integers.</li>
</ol>
<p>(I've rearranged the cases so they all correspond to the ordering I gave in my inductive step.) The following are the sums of the cases in the answer key:</p>
<ol>
<li>$\frac{1}{n+1}$</li>
<li>$n$</li>
<li>$\frac{n}{n+1}$</li>
</ol>
<p>As one can tell, case 1 is the same, but cases 2 and 3 are switched with respect to mine. What did I do wrong? It is unlikely that the textbook is wrong.</p>
| Thanos Darkadakis | 105,049 | <p>You write:</p>
<p>$\cos (\beta -\alpha) = \cos (-\alpha) \cos\beta + \sin(-\alpha) \sin\beta$.</p>
<p>This is not correct. You should either use the sum of $\beta$ and $-\alpha$, or use the difference of $\beta$ and $\alpha$.</p>
<p>1st case:
$\cos (\beta +(-\alpha)) = \cos (-\alpha) \cos(\beta) - \sin(-\alpha) \sin(\beta)= \cos (\alpha) \cos(\beta) + \sin(\alpha) \sin(\beta)$.</p>
<p>2nd case:
$\cos (\beta -\alpha) = \cos (\beta) \cos(\alpha) + \sin(\alpha) \sin(\beta)= \cos (\alpha) \cos(\beta) + \sin(\alpha) \sin(\beta)$.</p>
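A numerical spot-check that both routes land on the same expression (Python; the angles chosen here are arbitrary):

```python
import math

# Both sides of the identity cos(beta - alpha) = cos(a)cos(b) + sin(a)sin(b).
def lhs(alpha, beta):
    return math.cos(beta - alpha)

def rhs(alpha, beta):
    return math.cos(alpha) * math.cos(beta) + math.sin(alpha) * math.sin(beta)

checks = [(0.3, 1.1), (-0.7, 2.5), (1.2, -0.4)]
max_err = max(abs(lhs(a, b) - rhs(a, b)) for a, b in checks)
```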
|
3,599,893 | <p>I had this idea to build a model of Earth in Minecraft. In this game, everything is built on a 2D plane of infinite length and width. But, I wanted to make a world such that someone exploring it could think that they could possibly be walking on a very large sphere. (Stretching or shrinking of different places is OK.) </p>
<p>What I first thought about doing was building a finite rectangular model of the world, like a Mercator projection, and tessellating this model infinitely throughout the plane. </p>
<p><a href="https://i.stack.imgur.com/bzdjA.png" rel="noreferrer"><img src="https://i.stack.imgur.com/bzdjA.png" alt="enter image description here"></a></p>
<p>Someone starting in the US could swim eastwards in a straight line across the Atlantic, walk across Africa and Asia, continue through the Pacific and return to the US. This would certainly create a sense of 3D-ness. However, if you travel north from the North Pole, you would wind up immediately at the South Pole. That wouldn't be right.</p>
<p>After thinking about it, I hypothesized that an explorer of this model might conclude that they were walking on a donut-shaped world, since that would be the shape of a map where the left was looped around to the right (making a cylinder), and then the top was looped to the bottom. For some reason, by simply tessellating the map, I was creating a hole in the world.</p>
<p>Anyway, to solve this issue, I thought about where one ends up after travelling north from various parts of the world. Going north from Canada, and continuing to go in that direction, you end up in Russia and you face south. The opposite is true as well: going north from Russia, you end up in Canada pointing south. Thus, I started to modify the tessellation to properly connect opposing parts of Earth at the poles. </p>
<p>When going north of a map of Earth, the next (duplicate) map would have to be rotated 180 degrees to reflect the fact that one faces south after traversing the North Pole. This was OK. However, to properly connect everything, the map also had to be <em>flipped</em> about the vertical axis. On a globe, if Alice starts east of Bob and they together walk north and cross the North Pole, Alice still remains east of Bob. So, going north from a map, the next map must be flipped to preserve the east/west directions that would have been otherwise rotated into the wrong direction.</p>
<p><a href="https://i.stack.imgur.com/U5n9t.png" rel="noreferrer"><img src="https://i.stack.imgur.com/U5n9t.png" alt="enter image description here"></a></p>
<p>Now the situation is hopeless. After an explorer walks across the North Pole in this Minecraft world, he finds himself in a mirrored world. If the world were completely flat, it would feel as if walking north would take you from the outside of a 3D object to its inside.</p>
<p>Although I now think that it is impossible to trick an explorer walking on infinite plane into thinking he is on a sphere-like world, a part of me remains unconvinced. Is it really impossible? Also, how come a naive tessellation introduces a hole? And finally, if an explorer were to roam the world where crossing a pole flips everything, what would he conclude the shape of the world to be?</p>
| Adayah | 149,178 | <h1>A mathematical model</h1>
<p>Assume you managed to trick the player into thinking they are on a sphere while they are really walking on an infinite plane. What would the world have to look like?</p>
<p>First of all, whenever the player is standing at some point <span class="math-container">$x$</span> on the flat world, they are deceived to think they really are at some point <span class="math-container">$i(x)$</span> on the imaginary spherical world. In other words, the player's imagination creates a mapping <span class="math-container">$i : \mathbb{R}^2 \to S^2$</span>.</p>
<h1>The assumption</h1>
<p>As another answer points out, it is impossible for <span class="math-container">$i$</span> to be a local isometry because of the difference in curvatures of the plane and the sphere. Another easy argument is that on the sphere there is a triangle with three right angles, while on the plane clearly there is not. But we can relax our expectations and only demand that <span class="math-container">$i$</span> be a <em>rough</em> local isometry. What do I mean by that?</p>
<p>Our player is just a human and as such, they can't really distinguish between <span class="math-container">$1$</span> meter and <span class="math-container">$99$</span> centimeters, and they can't see very far away. Thus we assume that for all <strong>sufficiently close</strong> points <span class="math-container">$x, y \in \mathbb{R}^2$</span> the following <strong>equality up to a small margin <span class="math-container">$\varepsilon$</span></strong> between distances on the plane and on the sphere holds:
<span class="math-container">$$(1-\varepsilon) \cdot d_{\mathbb{R}^2}(x, y) \leqslant d_{S^2} \big( i(x), i(y) \big) \leqslant (1+\varepsilon) \cdot d_{\mathbb{R}^2}(x, y).$$</span></p>
<h1>A solution</h1>
<p>It can be proven (though it's quite technical) that under this assumption <span class="math-container">$i : \mathbb{R}^2 \to S^2$</span> must be a <a href="https://en.wikipedia.org/wiki/Covering_space" rel="noreferrer">covering map</a>. But <span class="math-container">$S^2$</span> is simply connected, so it follows that <span class="math-container">$\mathbb{R}^2$</span> is homeomorphic to <span class="math-container">$S^2$</span>, which is a contradiction. Hence a function with the properties stated above does not exist.</p>
<p>Which means that what you are trying to do is impossible.</p>
|
1,617,239 | <blockquote>
<p><strong>Problem</strong></p>
<p>Find the area of the cone <span class="math-container">$z=\sqrt{2x^2+2y^2}$</span> inscribed in the sphere <span class="math-container">$x^2+y^2+z^2=12^2$</span>.</p>
</blockquote>
<p>I think I have to solve this via the surface integral</p>
<p><span class="math-container">$$\iint_S dS=\iint_T \|\Sigma_u\times \Sigma_v\| \, du \, dv$$</span></p>
<p>Where <span class="math-container">$\Sigma$</span> is a parametrization of the cone and <span class="math-container">$T=\operatorname {dom}\Sigma$</span>.</p>
<p>Now</p>
<p><span class="math-container">$$\Sigma(u,v):=(u,v,\sqrt{2u^2+2v^2})$$</span></p>
<p>should work, but I have to figure out the domain. As <span class="math-container">$z\geq 0$</span> (and <span class="math-container">$0$</span> is achieved by <span class="math-container">$z$</span>), we get that the domain has the form <span class="math-container">$[0,a]\times [0,b]$</span> (as <span class="math-container">$u=v=0$</span> is the only way to get <span class="math-container">$z=0$</span>).</p>
<p>But I'm having problems getting the <span class="math-container">$a,b$</span>: If I look for the intersection of the cone and the sphere I get a circle: <span class="math-container">$x^2+y^2=4\cdot 12$</span> and I'm not sure what to do next.</p>
<p>Could someone give a thorough explanation of how to solve this problem from the start? I'm pretty lost.</p>
<hr />
<p>Also, on this <a href="http://tutorial.math.lamar.edu/Classes/CalcIII/SurfaceArea.aspx" rel="nofollow noreferrer">site</a></p>
<p>I found the following formula, if <span class="math-container">$z=f(x,y)$</span></p>
<p><span class="math-container">$$S=\iint_D\sqrt{f_x^2+f_y^2+1} \, dA$$</span></p>
<p>Then <span class="math-container">$S$</span> is the area of the surface <span class="math-container">$R=\operatorname{Im}_D(f)$</span>.</p>
<p>Where does this formula come from? (Does it relate to surface integrals?)</p>
| Jack's wasted life | 117,135 | <p>At the intersection
$$
{z^2\over2}+z^2=12^2\implies z=4\sqrt6
$$
So we have to calculate the surface area of the cone from $z=0$ to $z=4\sqrt6$. We can use cylindrical polar coordinates to parametrize the cone:
$$
\vec r={z\over\sqrt2}\cos\phi\hat i+{z\over\sqrt2}\sin\phi\hat j+z\hat k,\qquad (z,\phi)\in(0,4\sqrt6)\times[0,2\pi)\\
\implies{\partial\vec r\over\partial z}=\left[{\cos\phi\over\sqrt2}\;{\sin\phi\over\sqrt2}\;1\right]^\top,\quad{\partial\vec r\over\partial \phi}=\left[-{z\sin\phi\over\sqrt2}\;{z\cos\phi\over\sqrt2}\;0\right]^\top\\
\implies\Big{|}{\partial\vec r\over\partial z}\times{\partial\vec r\over\partial \phi}\Big{|}={\sqrt3z\over2}
$$
So the required area is
$$
\int_0^{4\sqrt6}\int_0^{2\pi}{\sqrt3z\over2}\,d\phi \,dz=48\sqrt3\pi
$$
A way to check that this is indeed the correct answer is to note that the right circular cone inside the sphere has height $4\sqrt6$ units and base-radius $4\sqrt3$ units, and therefore has lateral area
$$
\pi4\sqrt3\sqrt{4^2\times3+4^2\times6}=48\sqrt3\pi\text{ sq units}
$$</p>
<hr>
<p>Suppose your surface is $z=f(x,y)$. So the surface is already parametrized as
$$
\vec r=x\hat i+y\hat j+f\hat k\implies \Big{|}{\partial \vec r\over\partial x}\times{\partial \vec r\over\partial y}\Big{|}=\Big{|}[1\,0\,f_x]^\top\times[0\,1\,f_y]^\top\Big{|}=\sqrt{f_x^2+f_y^2+1}
$$</p>
<p>So using your regular formula the area becomes
$$
\iint_D\sqrt{f_x^2+f_y^2+1}\,dx\,dy=\iint_D\sqrt{f_x^2+f_y^2+1}\,dA
$$</p>
|
18,659 | <p>This is more of a philosophy/foundation question.</p>
<p>I usually come across things like "the set of all men", or for example sets of symbols, i.e. sets of non-mathematical objects.</p>
<p>This confuses me, because as I understand it, the only kind of objects that exists in set theory are sets. It doesn't make sense to speak of other objects unless we have formalized them in terms of sets. So what to do with something like the set of all men? Are we working with a different set theory, a naive one? Or is it that we are omitting the formalization, because it is straightforward (e.g. assign a number to every man)?</p>
| Sergei Ivanov | 4,354 | <p>You don't have to <em>define</em> your objects as sets, in fact, you should avoid such unnatural definitions. I don't think a number theorist would be happy to see a proof referring to elements of a natural number or using the identity $1=\{0\}$. Such proofs are not acceptable because they won't survive even the slightest change in the foundations.</p>
<p>Similarly, if you develop Euclidean geometry, you don't <em>define</em> a point as a two-element set whose first element is a Dedekind cut and the second one is another weird set. You rather begin with axioms (either Euclidean ones or some axiomatization of the real line) and build the geometry on these. The set theory comes in if you want to show that your theory is consistent (as long as ZF is), and you do that by building a model within ZF.</p>
<p>In your example with men, you actually create a mathematical model of whatever you want to study, in the same way as physics does. There are always translation steps from the real world to mathematics and back; they just happen to be trivial in this case. So it's not a problem that men are (modelled by) sets.</p>
<p>Only if you like to believe that mathematical objects exist in some metaphysical sense will you have a problem with the counter-intuitive claim that everything is a set. But you can just remove this axiom and stay agnostic about whether everything is a set or not. You will not lose anything: the only essential use of this axiom is that you are able to <em>define</em> the notion of equality rather than having it built into logic. And this is hardly of any importance outside logic itself.</p>
|
4,084,624 | <p>A cool problem I was trying to solve today but I got stuck on:</p>
<p>Find the maximum possible value of <span class="math-container">$x + y + z$</span> in the following system of equations:</p>
<p><span class="math-container">$$\begin{align}
x^2 - (y - z)x - yz &= 0 \tag1 \\[4pt]
y^2 - \left(\frac8{z^2} - x\right)y - \frac{8x}{z^2}&= 0 \tag2\\[4pt]
z^2 - (x - y)z - xy &= 0 \tag3
\end{align}$$</span></p>
<p>I tried extending the first equation to
<span class="math-container">$$x^2 - xy + xz - yz = 0 \tag4$$</span></p>
<p>I then did the same thing for the second equation and got
<span class="math-container">$$y^2 - \frac{8y}{z^2} + xy - \frac{8x}{z^2} = 0 \tag5$$</span></p>
<p>For the 3rd equation:
<span class="math-container">$$z^2 - xz + yz - xy = 0 \tag6$$</span></p>
<p>I realized that adding the first and third equations got
<span class="math-container">$$x^2 + z^2 - 2xy = 0 \tag7$$</span></p>
<p>I also realized that the 2nd equation could be written as
<span class="math-container">$$y^2 - \frac{8}{z^2}(x + y) + xy = 0 \tag8$$</span>
but I couldn't get much more. I was thinking that graphs could help me here but I don't have much of an idea.</p>
<p>It would be really helpful if someone could explain to me what I could do further to solve the problem.</p>
| Eric Towers | 123,905 | <p>From (1):
<span class="math-container">\begin{align*}
0 &= x^2 -(y-z)x-yz \\
&= (x+y+z)(x-y) -y(x-y) \text{,}
\end{align*}</span>
so either
<span class="math-container">$$ x+y+z = y \qquad \text{ or } \qquad x-y = 0 \text{.} $$</span>
We conclude either <span class="math-container">$x = -z$</span> or <span class="math-container">$x = y$</span>.</p>
<p>From (2), after multiplying through by <span class="math-container">$z^2$</span>:
<span class="math-container">\begin{align*}
0 &= z^2 y^2 - (8-xz^2)y-8x \\
&= (x+y+z)(yz^2 - 8) - z(yz^2-8) \text{,}
\end{align*}</span>
so either
<span class="math-container">$$ x+y+z = z \qquad \text{ or } \qquad yz^2 - 8 = 0 \text{.} $$</span>
We conclude either <span class="math-container">$x = -y$</span> or <span class="math-container">$yz^2 = 8$</span>.</p>
<p>From (3):
<span class="math-container">\begin{align*}
0 &= z^2 -(x-y)z-xy \\
&= (x+y+z)(z-x) - x(z-x) \text{,} \end{align*}</span>
so either
<span class="math-container">$$ x+y+z = x \qquad \text{ or } \qquad z-x = 0 \text{.} $$</span>
We conclude either <span class="math-container">$y = -z$</span> or <span class="math-container">$x = z$</span>.</p>
<p>By cases:</p>
<ul>
<li><span class="math-container">$x = -z$</span>, <span class="math-container">$x = -y$</span>, <span class="math-container">$y = -z$</span>: The first two give <span class="math-container">$y = z$</span>. With the third, this forces <span class="math-container">$y = z = 0$</span> and then <span class="math-container">$x = 0$</span>, giving <span class="math-container">$x+y+z = 0$</span>.</li>
<li><span class="math-container">$x = -z$</span>, <span class="math-container">$x = -y$</span>, <span class="math-container">$x = z$</span>: The first and third require <span class="math-container">$x = z = 0$</span>. With the second, <span class="math-container">$y = 0$</span>, giving <span class="math-container">$x+y+z = 0$</span>.</li>
<li><span class="math-container">$x = -z$</span>, <span class="math-container">$yz^2 = 8$</span>, <span class="math-container">$y = -z$</span>: Using the third in the second, <span class="math-container">$-z^3 = 8$</span>, so <span class="math-container">$z = -2$</span>, hence <span class="math-container">$x = 2$</span> and <span class="math-container">$y = 2$</span>, giving <span class="math-container">$x+y+z = 2$</span>.</li>
<li><span class="math-container">$x = -z$</span>, <span class="math-container">$yz^2 = 8$</span>, <span class="math-container">$x = z$</span>: The first and third require <span class="math-container">$x = z = 0$</span>, making the second unsatisfiable.</li>
<li><span class="math-container">$x = y$</span>, <span class="math-container">$x = -y$</span>, <span class="math-container">$y = -z$</span>: The first and second require <span class="math-container">$x = y = 0$</span>. With the third, <span class="math-container">$z = 0$</span>, giving <span class="math-container">$x+y+z = 0$</span>.</li>
<li><span class="math-container">$x = y$</span>, <span class="math-container">$x = -y$</span>, <span class="math-container">$x = z$</span>: Again, <span class="math-container">$x+y+z = 0$</span> in this case.</li>
<li><span class="math-container">$x = y$</span>, <span class="math-container">$yz^2 = 8$</span>, <span class="math-container">$y = -z$</span>: Using the third in the second, <span class="math-container">$y^3 = 8$</span>, so <span class="math-container">$x = y = 2$</span> and <span class="math-container">$z = -2$</span>, again giving <span class="math-container">$x+y+z = 2$</span>.</li>
<li><span class="math-container">$x = y$</span>, <span class="math-container">$yz^2 = 8$</span>, <span class="math-container">$x = z$</span>: Eliminating <span class="math-container">$y$</span> between the first and second and <span class="math-container">$z$</span> between the result and third, <span class="math-container">$x^3 = 8$</span>, so <span class="math-container">$x = 2$</span>, so <span class="math-container">$x = y = z = 2$</span>, giving <span class="math-container">$x+y+z = 6$</span>.</li>
</ul>
<p>The maximum of <span class="math-container">$x+y+z$</span> is <span class="math-container">$6$</span> occurring when <span class="math-container">$x = y = z = 2$</span>.</p>
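Substituting the maximizing candidate back into the original system is easy to automate (Python sketch; a non-solution is also checked to confirm the residuals are meaningful):

```python
# Residuals of the three original equations at a given point (z must be nonzero).
def residuals(x, y, z):
    e1 = x**2 - (y - z) * x - y * z
    e2 = y**2 - (8 / z**2 - x) * y - 8 * x / z**2
    e3 = z**2 - (x - y) * z - x * y
    return (e1, e2, e3)

sol_max = residuals(2, 2, 2)        # the maximizer, x + y + z = 6
not_sol = residuals(1.0, 1.0, 1.0)  # an arbitrary non-solution
```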
|
3,881,390 | <p>I tried multiplying both sides by $4a$,
which leads to <span class="math-container">$(6x+4)^2\equiv40 \pmod{372}$</span>;
now I'm stuck on how to find a square root modulo $372$.</p>
| David Cheng | 452,655 | <p>Completing the square, we have:
<span class="math-container">$$5x^2\equiv 2(x-1)^2$$</span>
Since <span class="math-container">$36\equiv 31+5\equiv 5$</span>, we have:
<span class="math-container">$$36x^2\equiv2(x-1)^2$$</span>
<span class="math-container">$$18x^2\equiv(x-1)^2$$</span>
Adding <span class="math-container">$31$</span> again:
<span class="math-container">$$49x^2\equiv(x-1)^2$$</span>
Then <span class="math-container">$7x\equiv x-1$</span> or <span class="math-container">$-7x\equiv x-1$</span>. Which is the same as <span class="math-container">$6x\equiv-1\equiv30$</span> or <span class="math-container">$8x\equiv1\equiv 32$</span>.</p>
<p>Giving us the final answer of <span class="math-container">$x\equiv4,5$</span>.</p>
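The final answer is easy to confirm exhaustively (Python sketch; a quadratic modulo a prime has at most two roots, and both appear):

```python
# All roots of 3x^2 + 4x - 2 ≡ 0 (mod 31), by direct search.
def f(x):
    return (3 * x * x + 4 * x - 2) % 31

roots = [x for x in range(31) if f(x) == 0]
```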
|
3,881,390 | <p>I tried multiplying both sides by $4a$,
which leads to <span class="math-container">$(6x+4)^2\equiv40 \pmod{372}$</span>;
now I'm stuck on how to find a square root modulo $372$.</p>
| Michael Rozenberg | 190,319 | <p><span class="math-container">$$3x^2+4x-2\equiv0\pmod{31}$$</span> is equivalent to <span class="math-container">$$21(3x^2+4x-2)\equiv0\pmod{31}$$</span> or <span class="math-container">$$x^2-9x+20\equiv0\pmod{31}$$</span> or <span class="math-container">$$(x-4)(x-5)\equiv0\pmod{31},$$</span>
which gives <span class="math-container">$$x\equiv4\pmod{31}$$</span> or
<span class="math-container">$$x\equiv5\pmod{31}.$$</span></p>
|
1,828,729 | <p>I am trying to solve this summation problem:
$$\sum\limits_{k = 0}^\infty \binom{n + k}{2k} \binom{2k}{k} \frac{(-1)^k}{k + 1}$$
I would be grateful if someone could help me!</p>
| mercio | 17,445 | <p>The line $y=ax+b$ has $2$ tangent points to the curve $y = x^4-2x^2-x$ if and only if $( x^4-2x^2-x)- (ax+b)$ has two (real) double roots $x_1,x_2$, so this polynomial has to be a perfect square $((x-x_1)(x-x_2))^2$</p>
<p>Can you complete the square $(x^4-2x^2+?x+?) = (x^2+?x+?)^2$ and then find its roots $x_1,x_2$ ?</p>
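A hedged numeric check of where this hint leads (the specific coefficients $p = 0$, $q = -1$, hence $a = b = -1$, are worked out here and are not stated in the answer):

```python
# Candidate completion: x^4 - 2x^2 - x - (a x + b) = (x^2 - 1)^2
# with a = -1, b = -1; the roots x = ±1 of x^2 - 1 are the tangency points.
a, b = -1, -1

def quartic_minus_line(x):
    return x**4 - 2 * x**2 - x - (a * x + b)

def perfect_square(x):
    return (x * x - 1) ** 2

max_err = max(abs(quartic_minus_line(x) - perfect_square(x))
              for x in [-2.0, -1.0, -0.3, 0.0, 0.7, 1.0, 2.0])
```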
|
56,082 | <p>Suppose I have a nested list such as,</p>
<pre><code>{{{A, B}, {A, D}}, {{C, D}, {A, A}, {H, A}}, {{A, H}}}
</code></pre>
<p>Where the elements of interest are,</p>
<blockquote>
<pre><code>{{A, B}, {A, D}}
{{C, D}, {A, A}, {H, A}}
{{A, H}}
</code></pre>
</blockquote>
<p>How would I use <code>Select</code> to pick out only the elements that contain two or more <code>A</code>s in the first part of their sub-elements? In this example I would want the following as an output:</p>
<blockquote>
<pre><code>{{A,B},{A,D}}
</code></pre>
</blockquote>
| Mr.Wizard | 121 | <p>I chose a slightly different formulation:</p>
<pre><code>expr = {{{A, B}, {A, D}}, {{C, D}, {A, A}, {H, A}}, {{A, H}}};
Select[expr, Count[#, {A, _}] > 1 &]
</code></pre>
<blockquote>
<pre><code>{{{A, B}, {A, D}}}
</code></pre>
</blockquote>
<p>I will note that this form is faster than all four in the accepted answer:</p>
<pre><code>list = RandomChoice[CharacterRange["A", "H"], {500000, 3, 2}];
Select[list, Count[#, {"A", _}] > 1 &] // Timing // First
Pick[list, Count[#[[All, 1]], "A"] >= 2 & /@ list] // Timing // First
Select[list, Count[#[[All, 1]], "A"] >= 2 &] // Timing // First
Cases[list, _?(Count[#[[All, 1]], "A"] >= 2 &)] // Timing // First
DeleteCases[list, _?(! Count[#[[All, 1]], "A"] >= 2 &)] // Timing // First
</code></pre>
<blockquote>
<p>0.592804</p>
<p>0.795605<br>
0.811205<br>
0.936006<br>
1.060807</p>
</blockquote>
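For readers without Mathematica, the same filter can be sketched in Python (an illustrative translation of the first example, with the symbols rendered as strings; not part of the original answer):

```python
# Keep the sublists that contain two or more pairs whose first element is "A",
# mirroring Select[expr, Count[#, {A, _}] > 1 &].
expr = [[["A", "B"], ["A", "D"]],
        [["C", "D"], ["A", "A"], ["H", "A"]],
        [["A", "H"]]]

selected = [group for group in expr
            if sum(1 for pair in group if pair[0] == "A") > 1]
```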
|
1,509,340 | <p>I'm just wondering, what are the advantages of using either the Newton form of polynomial interpolation or the Lagrange form over the other?
It seems to me that the computational costs of the two are equal, and seeing as the interpolated polynomial is unique, why ever use one over the other?</p>
<p>I get that they give different forms of the polynomial, but when is one form superior to the other?</p>
| Ian | 83,396 | <p>Frankly, Lagrange interpolation is mostly just useful for theory. Actually computing with it requires huge numbers and catastrophic cancellations. In floating point arithmetic this is very bad. It does have some small advantages: for instance, the Lagrange approach amounts to diagonalizing the problem of finding the coefficients, so it takes only linear time to find the coefficients. This is good if you need to use the same set of points repeatedly. But all of these advantages do not make up for the problems associated with trying to actually evaluate a Lagrange interpolating polynomial.</p>
<p>With Newton interpolation, you get the coefficients reasonably fast (quadratic time), the evaluation is much more stable (roughly because there is usually a single dominant term for a given $x$), the evaluation can be done quickly and straightforwardly using Horner's method, and adding an additional node just amounts to adding a single additional term. It is also fairly easy to see how to interpolate derivatives using the Newton framework.</p>
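A minimal sketch of that workflow (Python; illustrative, not tied to any particular library): divided-difference coefficients in quadratic time, then Horner-style nested evaluation.

```python
# Divided-difference coefficients of the Newton form, computed in place.
def newton_coefficients(xs, ys):
    coef = list(ys)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

# Horner-style (nested) evaluation of the Newton form.
def newton_eval(coef, xs, x):
    result = coef[-1]
    for k in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[k]) + coef[k]
    return result

# Four nodes interpolate a cubic exactly, so this reproduces x^3 - x + 1.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [x**3 - x + 1 for x in xs]
value = newton_eval(newton_coefficients(xs, ys), xs, 1.5)
```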
|
1,509,340 | <p>I'm just wondering, what are the advantages of using either the Newton form of polynomial interpolation or the Lagrange form over the other?
It seems to me that the computational costs of the two are equal, and seeing as the interpolated polynomial is unique, why ever use one over the other?</p>
<p>I get that they give different forms of the polynomial, but when is one form superior to the other?</p>
| Gil | 60,928 | <p>The Lagrange method is mostly a theoretical tool used for proving theorems. Not only is it inefficient when a new point is added (the polynomial must be computed again from scratch), it is also numerically unstable.</p>
<p>Therefore, Newton's method is usually used. However, there is a variation of Lagrange interpolation which is numerically stable and computationally efficient! Unfortunately, this method is not very well known...</p>
<p>I am attaching a link to a paper from SIAM Review called "Barycentric Lagrange interpolation", which is not difficult to read. I hope you will find it interesting.</p>
<p><a href="http://epubs.siam.org/doi/abs/10.1137/S0036144502417715" rel="nofollow noreferrer">http://epubs.siam.org/doi/abs/10.1137/S0036144502417715</a></p>
<p>(A typo for that article is noted at <a href="https://people.sc.fsu.edu/%7Ejburkardt/py_src/barycentric_interp_1d/barycentric_interp_1d.html" rel="nofollow noreferrer">https://people.sc.fsu.edu/~jburkardt/py_src/barycentric_interp_1d/barycentric_interp_1d.html</a>)</p>
|
1,509,340 | <p>I'm just wondering, what are the advantages of using either the Newton form of polynomial interpolation or the Lagrange form over the other?
It seems to me that the computational costs of the two are equal, and seeing as the interpolated polynomial is unique, why ever use one over the other?</p>
<p>I get that they give different forms of the polynomial, but when is one form superior to the other?</p>
| Qiaochu Yuan | 232 | <p>Here is an example of a problem that is much easier using Newton interpolation than Lagrange interpolation. Let $p(x)$ be the unique polynomial of degree $n$ such that</p>
<p>$$p(k) = 3^k, 0 \le k \le n.$$</p>
<p>What is $p(n + 1)$? </p>
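Numerically probing the puzzle (Python, exact arithmetic; the forward-difference evaluation below is my own sketch, and the pattern it reveals is left for the reader to spot):

```python
from fractions import Fraction

def p_at(n, x):
    """Evaluate the degree-n interpolant of 3^k (k = 0..n) at x, via
    Newton's forward-difference form: sum_j (Delta^j p)(0) * C(x, j),
    where the j-th forward difference of 3^k at k = 0 is (3 - 1)^j = 2^j."""
    total = Fraction(0)
    for j in range(n + 1):
        binom = Fraction(1)          # C(x, j), built up factor by factor
        for i in range(j):
            binom *= Fraction(x - i, i + 1)
        total += 2**j * binom
    return total

# p(n + 1) for small n; try to spot the pattern.
values = [p_at(n, n + 1) for n in range(1, 8)]
```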
|
3,242,363 | <blockquote>
<p>Why does this function, <span class="math-container">$$\tan\left(x ^ {1/x}\right)$$</span>
have a maximum value at <span class="math-container">$x=e$</span>?</p>
</blockquote>
<p><a href="https://i.stack.imgur.com/pqE0Q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pqE0Q.png" alt="Graph"></a></p>
| pancini | 252,495 | <p>You know <span class="math-container">$x$</span>, right? If so, note that
<span class="math-container">\begin{align}
a&=0.029(x+b)+0.3\\
&=0.029(x+0.015(x+a))+0.3\\
&=(0.029+0.029\cdot 0.015)x+0.029\cdot0.015 a+0.3
\end{align}</span></p>
<p>Thus
<span class="math-container">$$(1-0.029\cdot0.015)a=(0.029+0.029\cdot 0.015)x+0.3$$</span>
which means
<span class="math-container">$$a=\frac{(0.029+0.029\cdot 0.015)x+0.3}{1-0.029\cdot0.015}.$$</span>
Now that you have <span class="math-container">$x$</span> and <span class="math-container">$a$</span>, it's easy to plug them in to find <span class="math-container">$b$</span> and then <span class="math-container">$y$</span>.</p>
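A numeric check of the closed form (Python sketch; assuming, as the substitution implies, that $b = 0.015(x+a)$, and with a purely hypothetical value of $x$):

```python
# Hypothetical input; any x works since the system is linear in a and b.
x = 40.0

# Closed form for a derived above, then b from the assumed relation.
a = ((0.029 + 0.029 * 0.015) * x + 0.3) / (1 - 0.029 * 0.015)
b = 0.015 * (x + a)

# The pair (a, b) should satisfy the original equation a = 0.029(x + b) + 0.3.
residual = abs(a - (0.029 * (x + b) + 0.3))
```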
|
446,197 | <blockquote>
<p>Dudley Do-Right is riding his horse at his top speed of $10m/s$ toward the bank, and is $100m$ away when the bank robber begins to accelerate away from the bank going in the same direction as Dudley Do-Right. The robber's distance, $d$, in metres away from the bank after $t$ seconds can be modelled by the equation $d=0.2t^2$. Write a model for the position of Dudley Do-Right as a function of time.</p>
</blockquote>
<p>The answer is $d=10t-100$. </p>
<p>My question is how do you know that it is $-100$, and not $100$? </p>
<p>Thanks in advance for your help.</p>
| Blue | 409 | <p>You mention converting the $\cot$ to $\csc$ (presumably via the identity $\cot^2\theta + 1 = \csc^2\theta$), but perhaps you got off track.</p>
<p>$$\begin{align}
3\left(\cot^2\theta + 1 \right) - \csc^2\theta - 1 &= 3\csc^2\theta - \csc^2\theta - 1 \\
&=2\csc^2\theta - 1 \\[6pt]
&=\csc^2\theta + \left( \csc^2\theta - 1 \right) \\[6pt]
&=\csc^2\theta + \cot^2\theta
\end{align}$$</p>
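A quick numerical spot-check of the chain of identities (Python; arbitrary angles with $\sin\theta \neq 0$):

```python
import math

# Left end and right end of the chain: 3(cot^2 + 1) - csc^2 - 1 vs csc^2 + cot^2.
def lhs(t):
    cot2 = (math.cos(t) / math.sin(t)) ** 2
    csc2 = 1 / math.sin(t) ** 2
    return 3 * (cot2 + 1) - csc2 - 1

def rhs(t):
    return 1 / math.sin(t) ** 2 + (math.cos(t) / math.sin(t)) ** 2

max_err = max(abs(lhs(t) - rhs(t)) for t in [0.3, 1.0, 2.0, -0.8])
```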
|
2,828,487 | <p>If $\mathcal{R}$ is a von Neumann algebra acting on a Hilbert space $H$, and $v \in H$ is a cyclic and separating vector for $\mathcal{R}$ (hence also for its commutant $\mathcal{R}'$), and $P \in \mathcal{R}, Q \in \mathcal{R}'$ are nonzero projections, can we have $PQv = 0$?</p>
<p>[Note: I had briefly edited this to a reformulated version of the question, but am rolling it back to align with the answer below.]</p>
| Dave L. Renfro | 13,130 | <p>In what follows I’ve restricted myself to those books I actually used at the time (nothing from after about 1991 or 1992), so this does not include anything that appeared afterwards.</p>
<p>I had a one-semester course out of Royden (covered most of the book --- it was a fairly fast-paced course), a one-semester course out of Taylor’s <a href="https://rads.stackoverflow.com/amzn/click/0486649881" rel="noreferrer"><strong>General Theory of Functions and Integration</strong></a> (covered the middle third of the book, this being at a different university), and a two-semester course using Wheeden/Zygmund's <a href="https://rads.stackoverflow.com/amzn/click/0824764994" rel="noreferrer"><strong>Measure and Integral</strong></a> that was taught by Torchinsky (6 years before his 1988 book appeared, but much of what's in his book was included, in particular all the stuff on cardinal and ordinal numbers and order types). But these are introductory texts that Ph.D. qualifying exams are based on, and of course I and pretty much everyone else was also familiar with several other books, in my case probably the most significant were (in order of how much I studied the book) <a href="https://rads.stackoverflow.com/amzn/click/0387900888" rel="noreferrer"><strong>Measure Theory</strong></a> by Halmos, <a href="https://rads.stackoverflow.com/amzn/click/0387901388" rel="noreferrer"><strong>Real and Abstract Analysis</strong></a> by Hewitt/Stromberg, <a href="https://rads.stackoverflow.com/amzn/click/0817630031" rel="noreferrer"><strong>Measure Theory</strong></a> by Cohn, and <a href="https://rads.stackoverflow.com/amzn/click/0691625751" rel="noreferrer"><strong>Integration</strong></a> by McShane. Also, <a href="https://rads.stackoverflow.com/amzn/click/0070542341" rel="noreferrer"><strong>Real and Complex Analysis</strong></a> by Rudin (first third of the book) was often recommended, and many students used it for Ph.D. qualifying exam preparation, but for whatever reason I never looked at it all that much, and in fact it was only 4 or 5 years ago that I finally wound up getting a copy of Rudin’s book (saw it at a used bookstore).</p>
<p>For what it’s worth, a couple of books that I recall being recommended very often were (this being late 1970s to early 1980s) <a href="https://rads.stackoverflow.com/amzn/click/0486662896" rel="noreferrer"><strong>Functional Analysis</strong></a> by Riesz/Sz.-Nagy (first third of the book, the three chapters covering integration) and <a href="https://rads.stackoverflow.com/amzn/click/0471608483" rel="noreferrer"><strong>Linear Operators. Part I. General Theory</strong></a> by Dunford/Schwartz (also first third of the book, which is also the three chapters covering integration), but I never wound up looking at them very much. However, in looking at these two books right now, I believe the first third of Riesz/Sz.-Nagy would make for a good summer project if someone wants a refresher in classical analysis topics. But not Dunford/Schwartz, unless you really want to dig deep into set function machinery stuff. Another book (2 volumes, actually) that was often suggested is <a href="https://rads.stackoverflow.com/amzn/click/048680643X" rel="noreferrer"><strong>Theory of Functions of a Real Variable</strong></a> by Natanson, which incidentally I later wound up using often as a reference, but because everything is restricted to the real line the drawback is that you’re not going to see any general measure theory.</p>
<p>What you should read AFTER the basic first year course material will depend hugely on what area of analysis you intend to work in. In my particular case, and this is a rather outlying area (but which I find fascinating), the books that I wound up making the most use of include <a href="https://rads.stackoverflow.com/amzn/click/1124060154" rel="noreferrer"><strong>Real Functions</strong></a> by Goffman, <a href="https://rads.stackoverflow.com/amzn/click/088385029X" rel="noreferrer"><strong>A Primer of Real Functions</strong></a> by Boas (1981 edition), <a href="https://rads.stackoverflow.com/amzn/click/0521283612" rel="noreferrer"><strong>A Second Course on Real Functions</strong></a> by van Rooij/Schikhof, <a href="https://rads.stackoverflow.com/amzn/click/0387905081" rel="noreferrer"><strong>Measure and Category</strong></a> by Oxtoby, <a href="https://rads.stackoverflow.com/amzn/click/0521337054" rel="noreferrer"><strong>The Geometry of Fractal Sets</strong></a> by Falconer, <a href="https://rads.stackoverflow.com/amzn/click/0471922870" rel="noreferrer"><strong>Fractal Geometry</strong></a> by Falconer, <a href="https://rads.stackoverflow.com/amzn/click/0821869906" rel="noreferrer"><strong>Differentiation of Real Functions</strong></a> by Bruckner (1978 edition), <a href="https://rads.stackoverflow.com/amzn/click/0387160582" rel="noreferrer"><strong>Real Functions</strong></a> by Thomson, and <a href="https://archive.org/details/theoryoftheinteg032192mbp" rel="noreferrer"><strong>Theory of the Integral</strong></a> by Saks.</p>
|
930,611 | <blockquote>
<p>Find the maximal value of the function for $a=24.3$, $b=41.5$:
$$f(x,y)=xy\sqrt{1-\frac{x^2}{a^2}-\frac{y^2}{b^2}}$$</p>
</blockquote>
<p>Using the second derivative test for partial derivatives, I find the critical points in terms of $a$ and $b$ by taking the partial derivatives with respect to $x$ and $y$ and equating them to $0$. </p>
<p>$$f_y=0$$
$$f_x=0$$</p>
<p>Getting</p>
<p>$$y \left ( 1-\frac{x^2}{a^2}-\frac{y^2}{b^2} \right ) -\frac{yx^2}{a^2}=0$$</p>
<p>and</p>
<p>$$x \left ( 1-\frac{x^2}{a^2}-\frac{y^2}{b^2} \right ) - \frac{xy^2}{b^2}=0$$</p>
<p>Then I combine both equations to get</p>
<p>$$\left ( 1-\frac{y^2}{b^2} \right ) =\frac{2x^2}{a^2}$$</p>
<p>and</p>
<p>$$\left ( 1-\frac{x^2}{a^2} \right ) =\frac{2y^2}{b^2}$$</p>
<p>Solving both equations to get the critical points in terms of a and b. I got</p>
<p>$$(0,0)$$<br>
$$\left(\frac{a}{\sqrt{3}},\frac{b}{\sqrt{3}}\right)$$</p>
<p>Hence, to get the maximum value I substitute the second critical point back into the original function, letting $a=24.3$, $b=41.5$. However, I don't seem to get the right answer.</p>
<p>Is my method correct?</p>
| marco trevi | 170,887 | <p>I think you are missing some solutions: for instance, if $y=0$ also the points $(a,0)$ and $(-a,0)$ are solutions. Maybe during the calculations you divided by $y$ and/or $x$ without checking what happens when they vanish.</p>
<p>A thing that often helps me "seeing" the problem before actually calculating the solution is searching for structures. Your function is defined on the elliptic region $D=\{(x,y)\in\mathbb{R}^2|\frac{x^2}{a^2}+\frac{y^2}{b^2}\leq1\}$, it is zero on the border of $D$ (i.e. on the ellipse $\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$) and on the axes. This might help visualize the graph of $f$ and could give an idea of where the maxima/minima might be.</p>
<p>As for the calculations, you correctly get the system
\begin{equation}
y\left(1-\frac{x^2}{(a/\sqrt{2})^2}-\frac{y^2}{b^2}\right)=0\quad\text{and}\quad x\left(1-\frac{x^2}{a^2}-\frac{y^2}{(b/\sqrt{2})^2}\right)=0
\end{equation}</p>
<p>Here again you can spot two ellipses on which your derivatives are identically zero. Let's call $E_1$ the ellipse $\frac{x^2}{(a/\sqrt{2})^2}+\frac{y^2}{b^2}=1$ and $E_2$ the ellipse $\frac{x^2}{a^2}+\frac{y^2}{(b/\sqrt{2})^2}=1$. When intersecting two ellipses you can have up to 4 solutions; in your case you have to intersect the set $S_1=\{y=0\}\cup E_1$ with the set $S_2=\{x=0\}\cup E_2$ (you can see them as "barred" ellipses), so there can be more solutions (up to 9). If you get only two solutions, you may want to check your calculations again (I'm not saying that you <em>must</em> have more than two solutions, only that you <em>could</em>).</p>
<p>For instance, if $y=0$, then for $x$ you get the values $0,a,-a$ so all the points $(0,0),(a,0)$ and $(-a,0)$ are solutions. Similarly, if $x=0$ you have the solutions $(0,0),(0,b)$ and $(0,-b)$. For other solutions you have to intersect the two ellipses $E_1$ and $E_2$. Hope that helps</p>
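For a quick numeric cross-check of the interior critical point and the resulting maximum (a sketch in Python; the grid resolution of 200 is an arbitrary choice):

```python
import math

def f(x, y, a, b):
    # xy * sqrt(1 - x^2/a^2 - y^2/b^2), defined where the radicand is nonnegative
    r = 1 - x**2 / a**2 - y**2 / b**2
    return x * y * math.sqrt(r) if r >= 0 else float("nan")

a, b = 24.3, 41.5

# the interior critical point (a/sqrt(3), b/sqrt(3)) found above
x0, y0 = a / math.sqrt(3), b / math.sqrt(3)
f_max = f(x0, y0, a, b)
print(f_max)                          # equals a*b/(3*sqrt(3)), about 194.08

# crude grid search over the first-quadrant part of the elliptic domain
best = max(
    f(a * i / 200, b * j / 200, a, b)
    for i in range(201)
    for j in range(201)
    if (i / 200) ** 2 + (j / 200) ** 2 <= 1
)
print(best <= f_max + 1e-9)           # True: no grid point beats the critical point
```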
|
4,575,771 | <p>I need to show that <span class="math-container">$\int_0^1 (1+t^2)^{\frac 7 2} dt < \frac 7 2 $</span>. I've checked numerically that this is true, but I haven't been able to prove it.</p>
<p>I've tried trigonometric substitutions. Let <span class="math-container">$\tan u= t:$</span></p>
<p><span class="math-container">$$\int_0^1 (1+t^2)^{\frac 7 2} dt = \int_0^{\frac{\pi}{4}} (1+\tan^2 u )^{\frac 9 2} du = \int_0^{\frac{\pi}{4}} \sec^9 u \ du = \int_0^{\frac{\pi}{4}} \sec^{10} u \cos u\ du = \int_0^{\frac{\pi}{4}} \frac {\cos u}{(1-\sin^2 u)^5} du$$</span>
Now let <span class="math-container">$\sin u = w$</span>. Then, since <span class="math-container">$\sin \frac{\pi}{4} = \frac{\sqrt 2}{2}$</span>:
<span class="math-container">$$\int_0^{\frac{\pi}{4}} \frac {\cos u}{(1-\sin^2 u)^5} du = \int_0^{\frac{\sqrt 2}{2}} \frac {1}{(1-w^2)^5} dw.$$</span>
This last integral is solvable using partial fraction decomposition, but even after going through all the work required I'm not really sure how to compare the resulting messy expression with <span class="math-container">$\frac 7 2$</span>.</p>
| Adam Rubinson | 29,156 | <p>Since <span class="math-container">$t^2 \in [0,1],\ $</span> we may use the Binomial expansion,</p>
<p><span class="math-container">$$ \left( 1+ t^2 \right)^{7/2} = 1 + \frac{7}{2}t^2 + \frac{35}{8} t^4 + \frac{35}{16} t^6 + \frac{35}{128} t^8 + (\text{ alternating sequence of decreasing terms with negative leading term }),$$</span></p>
<p>so,</p>
<p><span class="math-container">$$ \left( 1+ t^2 \right)^{7/2} < 1 + \frac{7}{2}t^2 + \frac{35}{8} t^4 + \frac{35}{16} t^6 + \frac{35}{128} t^8\qquad \forall\ t\in(0,1) $$</span></p>
<p>Unless I've made a calculation error, I get:</p>
<p><span class="math-container">$$\ \int_0^1 1 + \frac{7}{2}t^2 + \frac{35}{8} t^4 + \frac{35}{16} t^6 + \frac{35}{128} t^8\ dt < 3.4.$$</span></p>
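As a numeric sanity check (not a proof), here is the integral next to the integrated polynomial bound, computed with Simpson's rule:

```python
def g(t):
    return (1 + t * t) ** 3.5

# composite Simpson's rule on [0, 1]
n = 1000                              # number of subintervals (even)
h = 1.0 / n
s = g(0.0) + g(1.0)
for k in range(1, n):
    s += (4 if k % 2 else 2) * g(k * h)
integral = s * h / 3

# term-by-term integral of the truncated binomial bound above
poly_bound = 1 + 7 / 6 + 35 / 40 + 35 / 112 + 35 / 1152

print(integral)                       # about 3.383
print(integral < poly_bound < 3.4 < 7 / 2)   # True
```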
|
<p>A good approximation of $(1+x)^n$ is $1+xn$ when $|x|n \ll 1$. Does this approximation have a name? Any leads on estimating the error of the approximation?</p>
| Jean-Sébastien | 31,493 | <p>I would say it comes from the Bernoulli inequality. You can read it up on <a href="http://en.wikipedia.org/wiki/Bernoulli%27s_inequality" rel="nofollow">Wiki</a></p>
|
<p>A good approximation of $(1+x)^n$ is $1+xn$ when $|x|n \ll 1$. Does this approximation have a name? Any leads on estimating the error of the approximation?</p>
| EuYu | 9,246 | <p>I would just call it the first order truncation of the <a href="http://en.wikipedia.org/wiki/Binomial_series">Binomial series</a>. If you want more terms of the series, then it's given by
$$(1+x)^n = 1 + nx + \frac{n(n-1)}{2}x^2 + \frac{n(n-1)(n-2)}{3!}x^3 + \mathcal{O}(x^4)$$
for the full series, you can visit the link I provided.</p>
<p>You may also be interested in <a href="http://en.wikipedia.org/wiki/Bernoulli%27s_inequality">Bernoulli's inequality</a></p>
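For a feel of the error, the next (second-order) term of the series dominates when $|x|n \ll 1$; a quick numeric check with hypothetical values of $n$ and $x$:

```python
# compare the error of (1+x)^n ≈ 1 + n*x with the next binomial term n(n-1)x^2/2
n = 10
for x in (0.001, 0.005, 0.01):
    err = (1 + x) ** n - (1 + n * x)
    second_term = n * (n - 1) / 2 * x * x
    print(x, err, second_term)        # err is close to second_term
```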
|
<p>$f(x) = \sqrt[3]{1+ \sqrt[3]x}$ </p>
<p>I have to find the first- and second-order derivatives.</p>
<p>$f'(x) = \frac{1}{9x^\frac 23(1+x^\frac 13)^\frac 23}$ is what I get after differentiating once.</p>
<p>Now the teacher's assistant is doing <em>some magic</em> by writing</p>
<p>$f(u) = \frac{1}{u^\frac 23}$</p>
<p>$u=g(x)=x(1+{x^\frac 13}) = x + x^\frac 43$ <- where did she get that first $x$ from? It doesn't make sense to me, since she already has the $\frac{1}{9x^\frac 23}$, so I'd expect the $x$ to go there. </p>
<p>$f''(x)= \frac{1}{9} \cdot f'(u) \cdot g'(x)$ <- is how the second differentiation continues. Can someone please explain how you would get the second-order derivative?</p>
<p>Is there some kind of rule I am missing? </p>
| Ivan Neretin | 269,518 | <p>Let's see. Apparently, $n$ is divisible by 2 and 3, otherwise it couldn't enforce divisibility by 12. Now, the product $ab$ is divisible by 3, which means that at least one of the numbers $a$ and $b$ is divisible by 3, and so is their sum, hence so is the other number. By similar reasoning, both are divisible by 2. So $a=6x,\;b=6y$. Now the question changes to: given that $x+y$ is even, what is the condition on $n$, such that ${n\over36}\mid xy$, that would make both $x$ and $y$ even? The answer is obvious: the product must be even as well, so $n=72$ will suffice.</p>
|
1,936,043 | <p>I would like to prove that the sequence $n^{(-1)^{n}}$ is divergent. </p>
<p>My thoughts: I know $(-1)^n$ is divergent, so $n$ to the power of a divergent sequence is still divergent? I am not sure how to give a proper proof, please help!</p>
| Ethan Bolker | 72,858 | <p>There are lots of correct answers. Here's a suggestion for how to attack a problem like this.</p>
<p>Before you try to invoke abstract principles like</p>
<blockquote>
<p>$n$ to the power of a divergent sequence is still divergent</p>
</blockquote>
<p>which you rightly wonder about (hence your "?") try <em>writing out the first few terms</em>:</p>
<p>$$
1, 2, \frac{1}{3}, 4, \frac{1}{5}, 6, \ldots
$$</p>
<p>Then you can easily see that the sequence doesn't converge and can set about proving it.</p>
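To see the two subsequences explicitly (a quick check):

```python
# a_n = n ** ((-1)^n): even-indexed terms grow without bound, odd-indexed terms -> 0
terms = [n ** ((-1) ** n) for n in range(1, 11)]
print(terms)

evens = terms[1::2]    # a_2, a_4, ... = 2, 4, 6, 8, 10  (unbounded)
odds = terms[0::2]     # a_1, a_3, ... = 1, 1/3, 1/5, ... (tends to 0)
print(evens)
print(odds)
```

Since one subsequence is unbounded and the other tends to $0$, no single limit can exist.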
|
506,394 | <p>Let $A=\{g\in C([0,1]):\int_{0}^{1}|g(x)|dx<1\}$. If $p\in [0,\infty]$, is $A$ an open set of $(C([0,1]), \left\|{\cdot}\right\|_p)$?</p>
<p>It is obvious that if $p=1$ then $A$ is open in $(C([0,1]), \left\|{\cdot}\right\|_1)$, because $A=B(0,1)$.</p>
<p>I think $A$ is not open if $p>1$. Any hint to show this?</p>
<p>Thanks.</p>
| André Caldas | 17,092 | <p>Notice that from <a href="https://en.wikipedia.org/wiki/H%C3%B6lder%27s_inequality" rel="nofollow">Hölder's inequality</a>, valid for $p \geq 1$,
$$
\|f\|_1 = \|f \cdot 1\|_1 \leq \|f\|_p \|1\|_q = \|f\|_p,
$$
where $\frac{1}{p} + \frac{1}{q} = 1$,
since $\|1\|_q = 1$.</p>
<p>But this implies that the identity
$$
\begin{array}{rccl}
\mathrm{id}: &(C([0,1]), \|\cdot\|_p) &\rightarrow &(C([0,1]), \|\cdot\|_1)
\\
&x &\mapsto &x
\end{array}
$$
is continuous. And since $A = \mathrm{id}^{-1}(B(0,1))$, and $B(0,1)$ is open in the $1$-norm, it follows that $A$ is open.</p>
<hr>
<p><b>Edit:</b> added observation from Pedro Tamaroff that Hölder is valid for $p \geq 1$.</p>
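A quick numeric illustration of the inequality $\|f\|_1 \le \|f\|_p$ on $[0,1]$ (the sample function and grid size are arbitrary choices):

```python
# midpoint-rule approximation of the p-norm of f on [0, 1]
def norm_p(f, p, n=20000):
    h = 1.0 / n
    return (sum(abs(f((k + 0.5) * h)) ** p for k in range(n)) * h) ** (1.0 / p)

f = lambda x: x * x - 0.3             # an arbitrary continuous sample function
norms = [norm_p(f, p) for p in (1, 2, 3, 7)]
print(norms)                          # nondecreasing in p: the 1-norm is smallest
```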
|
1,441,905 | <blockquote>
<p>Find the range of values of $p$ for which the line $ y=-4-px$ does not intersect the curve $y=x^{2}+2x+2p$</p>
</blockquote>
<p>I think I probably have to find the discriminant of the curve but I don't get how that would help.</p>
| Narasimham | 95,860 | <p>Equate the curve's slope to the line's slope to find the tangency point, which demarcates between intersection and non-intersection: </p>
<p>$ 2 x + 2 = - p$</p>
<p>or</p>
<p>$ x = -(1 + p/2) $</p>
<p>and</p>
<p>$ y = p\, ( p/2 +1) -4 $</p>
<p>EDIT 1:</p>
<p>As $p$ varies over the family of lines </p>
<p>$ y = -p x - 4, \quad -\infty < p < \infty, $</p>
<p>the tangency points $\left(-(1+p/2),\; p(p/2+1)-4\right)$ trace the parabola </p>
<p>$ y = 2 x ( x+1) -4 $</p>
<p>(eliminate $p$ using $p = -2(x+1)$). Note that $y = px + f(p)$ is the general-solution form of a Clairaut equation $y = x\,y' + f(y')$; here $f(p) = -4$ is constant, so every line of the family passes through $(0,-4)$.</p>
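To cross-check the demarcating values, substitute the line into the curve: $x^2+2x+2p = -4-px$ gives $x^2+(2+p)x+(2p+4)=0$, whose discriminant simplifies to $(p-6)(p+2)$; the line misses the curve exactly when this is negative, i.e. for $-2 < p < 6$. A quick check:

```python
# discriminant of x^2 + (2 + p)x + (2p + 4) = 0, the intersection equation
def disc(p):
    return (2 + p) ** 2 - 4 * (2 * p + 4)   # simplifies to (p - 6)(p + 2)

print([p for p in range(-4, 9) if disc(p) < 0])   # integer p with no intersection
print(disc(-2), disc(6))                          # both 0: the tangency values
```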
|
1,915,450 | <p>Can anyone help me to prove this? This is given as a fact, but I don't understand why it is true.</p>
<blockquote>
<p>For an integer $n$ greater than 1, let the prime factorization of $n$ be $$n=p_1^ap_2^bp_3^cp_4^d...p_k^m$$
where $a, b, c, d, \ldots$ and $m$ are nonnegative integers and $p_1, p_2, \ldots, p_k$ are prime numbers.
The number of divisors is $$d(n)=(a+1)(b+1)(c+1)....(m+1)$$</p>
</blockquote>
| GoodDeeds | 307,825 | <p>This is solved using combinatorics. Any divisor <span class="math-container">$x$</span> of <span class="math-container">$n$</span> will be of the form
<span class="math-container">$$x=p_1^{n_1}p_2^{n_2}\cdots p_k^{n_k}$$</span>
where <span class="math-container">$0\le n_1\le a$</span>, <span class="math-container">$0\le n_2\le b$</span>, and so on.</p>
<p>The <span class="math-container">$k$</span>-tuple <span class="math-container">$(n_1,n_2,\cdots,n_k)$</span> uniquely specifies a divisor. Thus, the number of divisors will be the number of ways of choosing <span class="math-container">$n_1,n_2,\cdots,n_k$</span> given the constraints.</p>
<p>The value of <span class="math-container">$n_i$</span> in <span class="math-container">$x$</span> is independent of the value of <span class="math-container">$n_j$</span> for all <span class="math-container">$i\ne j$</span>. So, the number of ways of choosing <span class="math-container">$x$</span> will be the product of the number of ways of choosing <span class="math-container">$n_i$</span> for all <span class="math-container">$1\le i\le k$</span>.</p>
<p><span class="math-container">$$\text{Number of ways}=\Pi_i \text{ (Number of ways of choosing }n_i)$$</span></p>
<p>Now, <span class="math-container">$n_1$</span> can take any value from <span class="math-container">$0$</span> to <span class="math-container">$a$</span>, <span class="math-container">$n_2$</span> from <span class="math-container">$0$</span> to <span class="math-container">$b$</span>, and so on. That is, <span class="math-container">$n_1$</span> has <span class="math-container">$(a+1)$</span> choices, <span class="math-container">$n_2$</span> has <span class="math-container">$(b+1)$</span> choices, and so on.</p>
<p>Thus,
<span class="math-container">$$\text{Number of ways}=(a+1)\times(b+1)\times\cdots\times(m+1)$$</span></p>
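A brute-force check of the formula for small $n$ (a sketch in Python):

```python
from functools import reduce

def d_formula(n):
    # multiply (exponent + 1) over the exponents in the prime factorization of n
    exps, m, p = [], n, 2
    while p * p <= m:
        e = 0
        while m % p == 0:
            m //= p
            e += 1
        if e:
            exps.append(e)
        p += 1
    if m > 1:
        exps.append(1)
    return reduce(lambda acc, e: acc * (e + 1), exps, 1)

def d_bruteforce(n):
    return sum(1 for k in range(1, n + 1) if n % k == 0)

print(all(d_formula(n) == d_bruteforce(n) for n in range(2, 1000)))   # True
print(d_formula(360))   # 360 = 2^3 * 3^2 * 5, so (3+1)(2+1)(1+1) = 24
```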
|
2,130,836 | <p>My question is really simple: </p>
<p>Let $E$ be a vector space and $A_r(E)$ be the vector space of the alternating $r$-linear maps $\varphi:E\times\ldots \times E\to \mathbb R$. If $v_1,\ldots,v_r$ are linearly independent vectors. Can we get $\omega\in A_r(E)$ such that $\omega(v_1\ldots,v_r)\neq 0$? Is the converse true?</p>
| B. S. | 231,386 | <p>When <span class="math-container">$2^n-1$</span> is a Mersenne prime, this can be resolved (although this isn't very helpful, because only 49 Mersenne primes are known and we don't know whether there are infinitely many; still, it is nice to know that <span class="math-container">$ 2^{74,207,281} − 1$</span> does not divide <span class="math-container">$3^{74,207,281} − 1$</span>).</p>
<p>Let <span class="math-container">$q = 2^p-1$</span> be prime, so that <span class="math-container">$F_q$</span> is a field, and recall that a polynomial of degree <span class="math-container">$k$</span> has at most <span class="math-container">$k$</span> roots over a field. Applying this to <span class="math-container">$x^p-1$</span>, which has the root <span class="math-container">$2$</span> mod <span class="math-container">$q$</span> (since <span class="math-container">$2^p \equiv 1 \pmod q$</span>), we see that it has at most <span class="math-container">$p$</span> roots. But the set <span class="math-container">$A=\{1,2,2^2,\dots,2^{p-1}\}$</span> obviously consists of <span class="math-container">$p$</span> distinct roots, therefore it is the complete set of roots. Now if <span class="math-container">$q \mid 3^p-1$</span>, then <span class="math-container">$3$</span> is a root, therefore <span class="math-container">$3 \in A$</span> modulo <span class="math-container">$q$</span>; but each difference <span class="math-container">$2^i-3$</span> (for <span class="math-container">$0 \le i \le p-1$</span>) has absolute value less than <span class="math-container">$q$</span> and is nonzero, so <span class="math-container">$q$</span> divides none of them, a contradiction.</p>
<p>When n is a prime, but <span class="math-container">$2^n-1$</span> is not necessarily a Mersenne prime, we can employ the same reasoning for a prime divisor <span class="math-container">$q$</span> of <span class="math-container">$2^n-1$</span> :3 must be congruent to some power of 2 modulo q. Therefore q divides a number of the form <span class="math-container">$2^i-3$</span>.I don't know what the prime divisors of the sequence <span class="math-container">$2^i-3$</span> are, but a very weak corollary is this : either <span class="math-container">$3$</span> or <span class="math-container">$6$</span> is a quadratic residue mod q, therefore, by toying with quadratic reciprocity a bit, we get this : <span class="math-container">$q \equiv \pm 1, \pm 5, \pm 13\pmod{24}$</span>.So when n is prime, the prime divisors of <span class="math-container">$2^n-1$</span> must be of this specific form (note that this is a very weak corollary).</p>
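A small computational confirmation of the Mersenne-prime case, checking $3^p \not\equiv 1 \pmod{2^p-1}$ for the first Mersenne-prime exponents:

```python
# q = 2^p - 1 divides 3^p - 1 iff 3^p ≡ 1 (mod q); check this never happens
mersenne_exponents = [2, 3, 5, 7, 13, 17, 19, 31, 61]
results = {p: pow(3, p, 2 ** p - 1) != 1 for p in mersenne_exponents}
print(results)   # all True: 2^p - 1 never divides 3^p - 1 here
```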
|
4,074,630 | <p>Let <span class="math-container">$f: [a,b] \to [0,\infty)$</span> and <span class="math-container">$f$</span> is Riemann Integrable on every subinterval <span class="math-container">$[a + \epsilon,b]$</span> for <span class="math-container">$\epsilon > 0$</span>. Suppose that the improper Riemann integral exists. That is
<span class="math-container">$$I = \lim_{\epsilon \to 0} \int_{a + \epsilon}^{b} f(x) dx < \infty$$</span>
exists. Prove that <span class="math-container">$f$</span> is Lebesgue integrable on <span class="math-container">$[a,b]$</span> and that <span class="math-container">$\int_{[a,b]} f(x) dx = I$</span>.</p>
<p>I found another posting of this problem written in the same way, but it used dominated convergence theorem, which I have not covered. The others I saw were written slightly differently and/or weren't making too much sense.</p>
<p>Also, maybe I'm not seeing something clearly, but here's something I thought:</p>
<p>The improper integral I is finite (it exists). If the improper integral exists, isn't it equal to the "proper" integral? That would mean the "normal" proof of showing that every Riemann Integrable function is Lebesgue integrable would apply. However, I have a feeling this is not the way to prove it or else this question wouldn't be asked.</p>
<p>Thanks for the help!</p>
| Eric Towers | 123,905 | <p>Consider the Riemann integral
<span class="math-container">$$ \int_0^1 x^{-1/2} \,\mathrm{d}x $$</span>
This is an improper Riemann integral due to the unbounded behaviour of the integrand at the left endpoint of the interval of integration. Its value is defined to be (if this limit exists)
<span class="math-container">$$ \lim_{\varepsilon \rightarrow 0^+} \int_{\varepsilon}^1 x^{-1/2} \,\mathrm{d}x \text{.} $$</span></p>
<p>Now think of a decreasing sequence <span class="math-container">$(\varepsilon_i)_{i \in \Bbb{Z}_{>0}}$</span> with <span class="math-container">$\varepsilon_i \to 0$</span>, where each <span class="math-container">$\varepsilon_i \in (0, 1]$</span>. This sequence provides a way to get at that limit:
<span class="math-container">$$ I_i = \int_{\varepsilon_i}^1 x^{-1/2} \,\mathrm{d}x \text{.} $$</span></p>
<p>Every <span class="math-container">$I_i$</span> is a proper integral. But just because every integral in the sequence <span class="math-container">$(I_i)_i$</span> is finite does not mean the limit <span class="math-container">$\lim_{i \rightarrow \infty} I_i$</span> exists. Contrast with <span class="math-container">$\displaystyle J_i = \int_{\varepsilon_i}^1 x^{-1} \,\mathrm{d}x$</span>, for which each <span class="math-container">$J_i$</span> is some finite number, but the limit of <span class="math-container">$J_i$</span> as <span class="math-container">$i \rightarrow \infty$</span> does not exist.</p>
<p>So to make progress, you will use some idea like <span class="math-container">$|I - I_i| < \delta$</span>, which idea is based on the fact that <span class="math-container">$I$</span> exists and is finite and that each <span class="math-container">$I_i$</span> exists, is finite, and is approaching the integral over the whole interval. You might use this to show <span class="math-container">$f$</span> is Lebesgue integrable on <span class="math-container">$(0,1]$</span>, then observe that <span class="math-container">$[0,1] \smallsetminus (0,1]$</span> is a set of Lebesgue measure zero, obtaining your goal. (I'm having to guess at what tools are available to you -- you assert that DCT is unavailable, but you don't say what <em>is</em> available.)</p>
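To make the contrast concrete, here is a rough numeric sketch: with the convergent example $x^{-1/2}$ the truncated integrals settle near $2$, while the $x^{-1}$ analogue grows like $\ln(1/\varepsilon)$ (midpoint rule; grid size is an arbitrary choice):

```python
import math

# midpoint-rule approximation of the integral of f over [a, b]
def riemann(f, a, b, n=100000):
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

for eps in (1e-1, 1e-2, 1e-3, 1e-4):
    I = riemann(lambda x: x ** -0.5, eps, 1)   # exact value: 2 - 2*sqrt(eps)
    J = riemann(lambda x: 1 / x, eps, 1)       # exact value: log(1/eps)
    print(eps, I, J)                            # I approaches 2, J keeps growing
```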
|
2,011,181 | <blockquote>
<p><strong>Question:</strong> Find the area of the shaded region given $EB=2,CD=3,BC=10$ and $\angle EBC=\angle BCD=90^{\circ}$.</p>
</blockquote>
<p><a href="https://i.stack.imgur.com/BFf2h.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BFf2h.jpg" alt="Diagram"></a></p>
<p>I first dropped an altitude from $A$ to $BC$, forming two pairs of similar triangles. Let the point where the altitude meets $BC$ be $X$. Thus, we have$$\triangle BAX\sim\triangle BDC\\\triangle CAX\sim\triangle CEB$$
Using the proportions, we get$$\frac {BA}{BD}=\frac {AX}{CD}=\frac {BX}{BC}\\\frac {CA}{CE}=\frac {AX}{EB}=\frac {CX}{CB}$$
But I'm not too sure what to do next from here. I feel like I'm <em>very</em> close, but I just can't figure out $AX$.</p>
| yurnero | 178,464 | <p><strong>Hint</strong>: From the given info, you can compute the sum of the areas of triangles $\triangle EBC$ and $\triangle BDC$:
$$
\frac{1}{2}(2\cdot 10)+\frac{1}{2}(3\cdot 10)=25.
$$
With a quick observation, you can also compute the sum of the areas of triangles $\triangle EBA$ and $\triangle ACD$:
$$
\frac{1}{2}(2\cdot4)+\frac{1}{2}(3\cdot6)=13.
$$
Two times the answer you seek is the difference between these 2 sums.</p>
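If (as the figure suggests) $A$ is the intersection of $EC$ and $BD$ and the shaded region is triangle $ABC$, coordinates confirm the arithmetic; the placement $B=(0,0)$, $C=(10,0)$, $E=(0,2)$, $D=(10,3)$ is an assumption reconstructed from the given lengths:

```python
# B=(0,0), C=(10,0), E=(0,2), D=(10,3); A = intersection of EC and BD
# line EC: y = 2 - x/5 ; line BD: y = 3x/10
Ax = 2 / (1 / 5 + 3 / 10)   # solve 2 - x/5 = 3x/10
Ay = 3 * Ax / 10
print(Ax, Ay)               # 4.0 1.2, i.e. BX = 4, CX = 6, AX = 6/5

area_ABC = 0.5 * 10 * Ay    # base BC = 10, height AX = Ay
print(area_ABC)             # 6.0, matching (25 - 13) / 2
```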
|
19,356 | <p>So I was wondering: are there any general differences in the nature of "what every mathematician should know" over the last 50-60 years? I'm not just talking of small changes where new results are added on to old ones, but fundamental shifts in the nature of the knowledge and skills that people are expected to acquire during or before graduate school.</p>
<p>To give an example (which others may disagree with), one secular (here, secular means "trend over time") change seems to be that mathematicians today are expected to feel a lot more comfortable with picking up a new abstraction, or a new abstract formulation of an existing idea, even if the process of abstraction lies outside that person's domain of expertise. For example, even somebody who knows little of category theory would not be expected to bolt if confronted with an interpretation of a subject in his/her field in terms of some new categories, replete with objects, morphisms, functors, and natural transformations. Similarly, people would not blink much at a new algebraic structure that behaves like groups or rings but is a little different.</p>
<p>My sense would be that the expectations and abilities in this regard have improved over the last 50-60 years, partly because of the development of "abstract nonsense" subjects including category theory, first-order logic, model theory, universal algebra etc., and partly because of the increasing level of abstraction and the need for connecting frameworks and ideas even in the rest of mathematics. I don't really know much about how mathematics was taught thirty years ago, but I surmised the above by comparing highly accomplished professional mathematicians who probably went to graduate school thirty years ago against today's graduate students.</p>
<p>Some other guesses:</p>
<ol>
<li>Today, people are expected to have a lot more of a quick idea of a larger number of subjects, and less of an in-depth understanding of "Big Proofs" in areas outside their subdomain of expertise. Basically, the Great Books or Great Proofs approach to learning may be declining. The rapid increase in availability of books, journals, and information via the Internet (along with the existence of tools such as Math Overflow) may be making it more profitable to know a bit of everything rather than master big theorems outside one's area of specialization.</li>
<li>Also, probably a thorough grasp of multiple languages may be becoming less necessary, particularly for people who are using English as their primary research language. Two reasons: first, a lot of materials earlier available only in non-English languages are now available as English translations, and second, translation tools are much more widely available and easy-to-use, reducing the gains from mastery of multiple languages.</li>
</ol>
<p>These are all just conjectures. Contradictory information and ideas about other possible secular trends would be much appreciated.</p>
<p>NOTE: This might be too soft for Math Overflow! Moderators, please feel free to close it if so.</p>
| Gerry Myerson | 3,684 | <p>I think one way to answer this question would be to get hold of the qualifying exams from University X from 50-60 years ago and compare them to the exams at the same university today. </p>
|
19,356 | <p>So I was wondering: are there any general differences in the nature of "what every mathematician should know" over the last 50-60 years? I'm not just talking of small changes where new results are added on to old ones, but fundamental shifts in the nature of the knowledge and skills that people are expected to acquire during or before graduate school.</p>
<p>To give an example (which others may disagree with), one secular (here, secular means "trend over time") change seems to be that mathematicians today are expected to feel a lot more comfortable with picking up a new abstraction, or a new abstract formulation of an existing idea, even if the process of abstraction lies outside that person's domain of expertise. For example, even somebody who knows little of category theory would not be expected to bolt if confronted with an interpretation of a subject in his/her field in terms of some new categories, replete with objects, morphisms, functors, and natural transformations. Similarly, people would not blink much at a new algebraic structure that behaves like groups or rings but is a little different.</p>
<p>My sense would be that the expectations and abilities in this regard have improved over the last 50-60 years, partly because of the development of "abstract nonsense" subjects including category theory, first-order logic, model theory, universal algebra etc., and partly because of the increasing level of abstraction and the need for connecting frameworks and ideas even in the rest of mathematics. I don't really know much about how mathematics was taught thirty years ago, but I surmised the above by comparing highly accomplished professional mathematicians who probably went to graduate school thirty years ago against today's graduate students.</p>
<p>Some other guesses:</p>
<ol>
<li>Today, people are expected to have a lot more of a quick idea of a larger number of subjects, and less of an in-depth understanding of "Big Proofs" in areas outside their subdomain of expertise. Basically, the Great Books or Great Proofs approach to learning may be declining. The rapid increase in availability of books, journals, and information via the Internet (along with the existence of tools such as Math Overflow) may be making it more profitable to know a bit of everything rather than master big theorems outside one's area of specialization.</li>
<li>Also, probably a thorough grasp of multiple languages may be becoming less necessary, particularly for people who are using English as their primary research language. Two reasons: first, a lot of materials earlier available only in non-English languages are now available as English translations, and second, translation tools are much more widely available and easy-to-use, reducing the gains from mastery of multiple languages.</li>
</ol>
<p>These are all just conjectures. Contradictory information and ideas about other possible secular trends would be much appreciated.</p>
<p>NOTE: This might be too soft for Math Overflow! Moderators, please feel free to close it if so.</p>
| Dieter | 30,457 | <p>I arrived at this question through my frustration that, despite my master's degree, I could not come up with the proof of pi's irrationality just like that. So I studied it and wondered why it was not on the list of things we learnt at university.</p>
<p>The question is different for an active professional mathematician, a high school math teacher or someone who is otherwise orbiting in our society with a mathematical education in the bag.</p>
<p>I would like to be able to answer questions from non-educated but interested people and paint a background picture of the facts for them. The irrationality of Pi is a likely candidate for Christmas Eve questions, as is the infinitude of primes, or even Gödel's theorem. I studied that one too and it made a lasting impression on me.</p>
<p>In terms of relevance for society, an accomplished mathematician these days should be there to point out flaws in logic and bring an enhanced intuition of statistics to the public domain. Newspapers are full of "significant research results" and their interpretations. People are developing certain common knowledge while mostly remaining ignorant about the statistical aspects of that knowledge, as Daniel Kahneman has pointed out.</p>
|
<p>Let $K$ be a finite field and let $a_1,\dots,a_n$ be its $n$ distinct elements (all of $K$).
I want to prove that $\prod_{i=1}^{n} (X - a_i) + 1 \in K[X]$ has no roots in $K$.</p>
<p>I know that if $a_i$ is a root of a polynomial $p \in K[X]$, then there exists $f \in K[X]$ such that $p = (X - a_i)f$.</p>
<p>How can we use this fact in order to prove the statement?</p>
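For what it's worth, here is a brute-force sanity check over small prime fields $\mathbb{F}_p$ (prime fields only; a sketch, not a proof):

```python
# does prod over a in F_p of (x - a), plus 1, have a root x in F_p?
def has_root(p):
    for x in range(p):
        val = 1
        for a in range(p):
            val = val * (x - a) % p
        if (val + 1) % p == 0:
            return True
    return False

print([p for p in (2, 3, 5, 7, 11, 13) if has_root(p)])   # []: no roots found
```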
| Doug M | 317,162 | <p>$\int_0^\infty \frac {\sin x}{x} dx = \int_0^\pi \frac {\sin x}{x} dx + \int_\pi^{2\pi} \frac {\sin x}{x} dx +\cdots$</p>
<p>or $\sum\limits_{i=0}^{\infty} \int_{i\pi}^{(i+1)\pi} \frac {\sin x}{x} dx$</p>
<p>$\frac {\sin x}{x}\le 1$ for all $x$, so</p>
<p>$\int_0^\pi \frac {\sin x}{x} dx < \pi$</p>
<p>$\left|\int_{i\pi}^{(i+1)\pi} \frac {\sin x}{x} dx\right| < \frac {1}{i}$ for $i \ge 1$, since $\left|\frac{\sin x}{x}\right| \le \frac{1}{i\pi}$ on that interval</p>
<p>$\lim\limits_{i\to\infty} \frac {1}{i} = 0$</p>
<p>$\left|\int_{(i+1)\pi}^{(i+2)\pi} \frac {\sin x}{x} dx\right| < \left|\int_{i\pi}^{(i+1)\pi} \frac {\sin x}{x} dx\right|$, so the terms decrease in magnitude.</p>
<p>The sign is alternating.</p>
<p>We have passed the alternating series test.</p>
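A numeric look at the pieces (midpoint rule; the step count is an arbitrary choice) shows the alternating signs and shrinking magnitudes:

```python
import math

def piece(i, n=20000):
    # midpoint-rule value of the integral of sin(x)/x over [i*pi, (i+1)*pi]
    a, h = i * math.pi, math.pi / n
    return sum(math.sin(a + (k + 0.5) * h) / (a + (k + 0.5) * h) for k in range(n)) * h

vals = [piece(i) for i in range(6)]
print(vals)                                           # signs: +, -, +, -, +, -
mags = [abs(v) for v in vals]
print(all(mags[i + 1] < mags[i] for i in range(5)))   # True: magnitudes decrease
```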
|
79,041 | <p>Let <span class="math-container">$\mathfrak{g}$</span> be the Lie algebra of a Lie group <span class="math-container">$G$</span> which acts on a manifold <span class="math-container">$M$</span>.
It is quite standard that the basic forms in <span class="math-container">$\Omega^*(M) \otimes W(\mathfrak{g}^*)$</span> form a model for the singular equivariant cohomology of <span class="math-container">$M$</span>. However, I have never seen a proof and it is not straightforward to me. Could someone give a sketch or a reference of the proof of this fact? It is probably in one of Cartan's papers but I haven't been able to find it.</p>
<hr>
<p>Here goes some background:</p>
<p>We define its Weil algebra by <span class="math-container">$W^*(\mathfrak{g}^*)=S^*(\mathfrak{g}^*) \otimes \wedge^*(\mathfrak{g}^*)$</span>. There is also a natural differential operator <span class="math-container">$d_W$</span> which makes <span class="math-container">$W^*(\mathfrak{g}^*)$</span> into a complex. We define <span class="math-container">$d_W$</span> as follows:</p>
<p>Choose a basis <span class="math-container">$e_1,...,e_n$</span> for <span class="math-container">$\mathfrak{g}$</span> and let <span class="math-container">$e^*_1,...e^*_n$</span> its dual basis in <span class="math-container">$\mathfrak{g}^*$</span>. Let <span class="math-container">$\theta_1,...,\theta_n$</span> be the image of <span class="math-container">$e^*_1,...e^*_n$</span> in <span class="math-container">$\wedge(\mathfrak{g}^*)$</span> and let <span class="math-container">$\Omega_1,...,\Omega_n$</span> be the image of <span class="math-container">$e^*_1,...e^*_n$</span> in <span class="math-container">$S(\mathfrak{g}^*)$</span>. Let <span class="math-container">$c_{jk}^i$</span> be the structure constants of <span class="math-container">$\mathfrak{g}$</span>, that is <span class="math-container">$[e_j,e_k]=\sum_{i=1}^nc_{jk}^ie_i$</span>. Define <span class="math-container">$d_W$</span> by
<span class="math-container">\begin{eqnarray}
d_W\theta_i=\Omega_i- \frac{1}{2}\sum_{j,k} c_{jk}^i \theta_j \wedge \theta_k
\end{eqnarray}</span>
and
<span class="math-container">\begin{eqnarray}
d_W\Omega_i=\sum_{j,k}c_{jk}^i\theta_j \Omega_k
\end{eqnarray}</span>
and extending <span class="math-container">$d_W$</span> to <span class="math-container">$W(\mathfrak{g}^*)$</span> as a derivation.</p>
<p>We can also define interior multiplication <span class="math-container">$i_X$</span> on <span class="math-container">$W(\mathfrak{g}^*)$</span> for any <span class="math-container">$X \in \mathfrak{g}$</span> by
<span class="math-container">\begin{eqnarray}
i_{e_r}(\theta_s)=\delta^r_s, i_{e_r}(\Omega_s)=0
\end{eqnarray}</span>
for all <span class="math-container">$r,s=1,...,n$</span> and extending by linearity and as a derivation. </p>
<p>Now consider <span class="math-container">$\Omega^*(M) \otimes W(\mathfrak{g}^*)$</span> as a complex. Using this definition of interior multiplication, together with the usual definition of interior multiplication on forms, we define the basic complex of <span class="math-container">$\Omega^*(M) \otimes W(\mathfrak g^*)$</span>:</p>
<p>We call <span class="math-container">$\alpha \in \Omega^*(M) \otimes W(\mathfrak{g}^*)$</span> a basic element if <span class="math-container">$i_X(\alpha)=0$</span> and <span class="math-container">$i_X(d \alpha)=0$</span> for every <span class="math-container">$X \in \mathfrak{g}$</span>. Basic elements in <span class="math-container">$\Omega^*(M) \otimes W(\mathfrak{g}^*)$</span> form a subcomplex which we denote by <span class="math-container">$\Omega^*_G (M)$</span>.</p>
<p>The claim is that <span class="math-container">$H^*(\Omega^*_G (M))=H^*(M \times_G EG)$</span> where the right hand side denotes the singular equivariant cohomology of <span class="math-container">$M$</span>.</p>
| SGP | 11,786 | <p>see the very nice book of <a href="http://books.google.com/books?id=zYMp0GWLFiAC&lpg=PA248&ots=Bx2FxpUDmI&dq=guillemin%2520sternberg%2520supersymmetry&pg=PA182#v=onepage&q&f=false" rel="noreferrer">Guillemin-Sternberg (Supersymmetry and ...)</a>; it also has a reprint of Cartan's paper.</p>
|
1,210,018 | <p>$$ \begin{bmatrix}
1 & 1 \\
1 & 1 \\
\end{bmatrix}
\begin{Bmatrix}
v_1 \\
v_2 \\
\end{Bmatrix}=
\begin{Bmatrix}
0 \\
0 \\
\end{Bmatrix}$$</p>
<p>How can i solve this ?</p>
<p>I found it $$v_1+v_2=0$$ $$v_1+v_2=0$$ .</p>
<p>So i can't solve it for $v_1$ and $v_2$ .</p>
| abel | 9,252 | <p>because you have only two variables $v_1, v_2$, you can think of the equation $v_1 + v_2 = 0$ as a line through the origin with slope $-1.$ you have one line in the plane because the second equation does not add anything more to this line. any point on this line is a solution: all solutions are given by $v_1 = t, v_2 = -t$ where you can take $t$ to be anything.</p>
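<p>a quick numeric check of that one-parameter family (a sketch, not part of the original answer):</p>

```python
def residual(t):
    """Apply A = [[1, 1], [1, 1]] to the candidate solution v = (t, -t)."""
    A = [[1, 1], [1, 1]]
    v = (t, -t)
    return [A[i][0] * v[0] + A[i][1] * v[1] for i in range(2)]

# every point on the line v1 + v2 = 0 solves the system
for t in (-2, 0, 1, 3.5):
    assert residual(t) == [0, 0]
```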
|
188,158 | <p>I am interested in a function such that <code>f[m, i] = n</code> where <code>m, i, n</code> are positive integers and <code>n</code> is the <code>i</code>-th number relatively prime to <code>m</code>.</p>
<p>Getting a sample of the possible outputs of <code>f</code> is straightforward. For example, let <code>m = 30</code>. Now we can use</p>
<pre><code>list = 2 Range[0,29] + 1;
list = Pick[list, GCD[30, list], 1]
(*{1, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 49, 53, 59}*)
</code></pre>
<p>where I'm picking from odd numbers since <code>m</code> happens to be even. There should be a pattern in these numbers given by <code>EulerPhi[30]</code> (this is <code>8</code>) and indeed, <code>list[[;;8]] + 30</code> coincides with <code>list[[9;;16]]</code>. How to continue from here?</p>
| Eric Towers | 16,237 | <pre><code>f[m_, i_] := (
m*#[[1]] +
Select[Range[m], GCD[#, m] == 1 &][[ #[[2]] ]]
)& [ QuotientRemainder[i, EulerPhi[m]] ]
RepeatedTiming[f[223 227, 4021987]]
(* {0.058, 4057980} *)
</code></pre>
<p>As long as <code>m</code> is not too big and you repeat <code>m</code>s, you can trade some memory for time.</p>
<pre><code>fTable[m_] := fTable[m] =
Select[Range[m], GCD[#, m] == 1 &];
f[m_, i_] := (m*#[[1]] + fTable[m][[#[[2]] ]]
)& [ QuotientRemainder[i, EulerPhi[m]] ]
RepeatedTiming[f[223 227, 4021987]]
(* {0.0000110, 4057980} *)
</code></pre>
<p>If you are repeating <code>m</code>s, but you still have too many different <code>m</code>s, <a href="https://mathematica.stackexchange.com/questions/19536/how-to-clear-parts-of-a-memoized-function">discarding</a> "old" <code>fTable</code>s is a good idea.</p>
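<p>For comparison, here is the same idea sketched in Python (a hypothetical translation, not from the original answer): precompute the residues coprime to <code>m</code> within one period, then shift by whole periods of <code>m</code>. This version uses 1-based <code>i</code> throughout, so exact multiples of <code>EulerPhi[m]</code> are handled as well.</p>

```python
from math import gcd

def nth_coprime(m, i):
    """Return the i-th positive integer coprime to m (1-indexed)."""
    residues = [r for r in range(1, m + 1) if gcd(r, m) == 1]
    phi = len(residues)        # EulerPhi[m]
    q, r = divmod(i - 1, phi)  # whole periods of m, plus offset into one period
    return q * m + residues[r]

assert [nth_coprime(30, i) for i in (1, 9, 16)] == [1, 31, 59]
assert nth_coprime(223 * 227, 4021987) == 4057980  # matches the Mathematica output
```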
|
2,937,671 | <p>Definition: Let <span class="math-container">$\{A_i\}_{i\in I}$</span> be an indexed family of classes, and let
<span class="math-container">$$A=\bigcup_{i\in I} A_i.$$</span></p>
<p>The <span class="math-container">$product$</span> of the classes <span class="math-container">$A_i$</span> is defined to be the class
<span class="math-container">$$\prod_{i\in I}A_i=\{f:f:I\rightarrow A\ is\ a\ function,\ and\ f(i)\in A_i,\forall i\in I \}$$</span></p>
<p>And I want to prove </p>
<p>Let <span class="math-container">$\{A_i\}_{i\in I}$</span> and <span class="math-container">$\{B_j\}_{j\in J}$</span> be families of classes. Prove the following:
<span class="math-container">$$(\prod_{i\in I}A_i)\cap(\prod_{j\in J}B_j)=\prod_{(i,j)\in{I\times J}}(A_i\cap B_j)$$</span></p>
<p>if <span class="math-container">$$\bigcup_{i\in I} A_i=\bigcup_{j\in J} B_j=X$$</span>
is satisfied.
But I don't know the meaning of <span class="math-container">$\prod_{(i,j)\in{I\times J}}(A_i\cap B_j)$</span>.</p>
<p>If anyone knows the meaning, could you give me an explanation using the example
<span class="math-container">$I=\{a,b\},J=\{x,y,z\},A_a=\{1,2\},A_b=\{2,3,4\},B_x=\{1,4\},B_y=\{1,3\},B_z=\{1,2\}.$</span>
<p>and if you can please explain </p>
<p>how <span class="math-container">$$(\prod_{i\in I}A_i)\cap(\prod_{j\in J}B_j)=\prod_{(i,j)\in{I\times J}}(A_i\cap B_j)$$</span> is satisfied?</p>
| Porky | 807,523 | <p>This question appears in Pinter, Exercises 2.5, 6 a). I've been mulling it over for a while. I believe it's not correct as you state it above, but as it appears in a textbook I've been trying to work out a way to prove it (perhaps by taking some liberty in notation or definition!).</p>
<p>If <span class="math-container">$f\in\prod\limits_{i\in I}A_i\cap\prod\limits_{j\in J}B_j$</span> then dom <span class="math-container">$f = I$</span> and dom <span class="math-container">$f = J$</span> and so either</p>
<ul>
<li><span class="math-container">$I=J$</span> and <span class="math-container">$\ f:I \to X$</span> or</li>
<li><span class="math-container">$I\ne J$</span> and <span class="math-container">$\prod\limits_{i\in I}A_i\cap\prod\limits_{j\in J}B_j=\varnothing$</span></li>
</ul>
<p>And if <span class="math-container">$f\in\prod\limits_{\langle i,j\rangle\in I\times J}(A_i\cap B_j)$</span> then <span class="math-container">$f:I\times J \to X$</span>. For the case <span class="math-container">$I=J$</span> I can't see how these can be reconciled, as the functions will have different domains. Still, it bothers me!</p>
<p>By the way, <span class="math-container">$\prod\limits_{i\in I}A_i\cap\prod\limits_{i\in I}B_i=\prod\limits_{i\in I}(A_i\cap B_i)$</span> appears in 2.5 exercise 5 and yes, I have proved this as well!</p>
|
2,534,999 | <p>I tried to solve $z^3=(iz+1)^3$. I noticed that $(iz+1)^3=-i(z-i)^3$, so $\left(\frac{z-i}{z}\right)^3=i$. How do I finish it?</p>
| José Carlos Santos | 446,262 | <p>Let $\omega=\cos\left(\frac{2\pi}3\right)+\sin\left(\frac{2\pi}3\right)i=-\frac12+\frac{\sqrt3}2i$. Then $\omega^3=1$ and$$z^3=(iz+1)^3\iff z=iz+1\vee\omega z=iz+1\vee\omega^2z=iz+1.$$</p>
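<p>A quick numerical check of the three resulting roots (a sketch, not part of the original answer): solving each linear equation gives $z=\frac1{1-i}$, $z=\frac1{\omega-i}$ and $z=\frac1{\omega^2-i}$.</p>

```python
import cmath

omega = cmath.exp(2j * cmath.pi / 3)  # primitive cube root of unity
# z^3 = (iz+1)^3  <=>  z = iz+1, or omega z = iz+1, or omega^2 z = iz+1
roots = [1 / (1 - 1j), 1 / (omega - 1j), 1 / (omega**2 - 1j)]
for z in roots:
    assert abs(z**3 - (1j * z + 1)**3) < 1e-12
```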
|
3,779,785 | <p>So I have this problem: <span class="math-container">$W=3^n -n -1$</span>. How do I find all <span class="math-container">$n$</span> such that <span class="math-container">$W$</span> is divisible by <span class="math-container">$5$</span>?</p>
<p><em>what I tried:</em>
I found all the remainders of <span class="math-container">$3^n$</span> divided by <span class="math-container">$5$</span>; they are <span class="math-container">$1,3,4,2$</span>.</p>
<p>without the <span class="math-container">$-n$</span> term it's easy, but with it I can't continue.</p>
<p>these are the solutions of the problem: <span class="math-container">$n=20k+11$</span>, <span class="math-container">$n=20k+18$</span>, <span class="math-container">$n=20k+17$</span>, <span class="math-container">$n=20k$</span>, but I don't know how to find them.</p>
<p>thanks</p>
| Àlex Rodríguez | 813,535 | <p>First of all, notice that <span class="math-container">$3^4$</span> is congruent to 1 mod 5 (by Fermat's little theorem). Then, as you said, the residues are 1, 3, 4, 2, so consider the following 4 cases:</p>
<ol>
<li>n is congruent to 0 mod 4. now, let's see it modulo 5: <span class="math-container">$1-n-1\equiv 0 \pmod 5$</span>. That implies <span class="math-container">$n\equiv 0 \pmod 5$</span>. Using the Chinese remainder theorem, we obtain that <span class="math-container">$n\equiv 0 \pmod{20}$</span></li>
</ol>
<p>Now we have found the first solution: <span class="math-container">$n=20k$</span>.</p>
<p>Use the same argument in the other congruences of <span class="math-container">$n$</span> mod 4 to find all the other solutions.</p>
<p>Hope it was useful :)</p>
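<p>The claimed residue classes can be confirmed by brute force (a quick sketch, not part of the original answer); the joint pattern has period <span class="math-container">$\operatorname{lcm}(4,5)=20$</span>:</p>

```python
# collect n mod 20 over many periods of the joint pattern
sols = sorted({n % 20 for n in range(1, 401) if (3**n - n - 1) % 5 == 0})
assert sols == [0, 11, 17, 18]
```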
|
3,773,856 | <p>I'm having trouble with part of a question on Cardano's method for solving cubic polynomial equations. This is a multi-part question, and I have been able to answer most of it. But I am having trouble with the last part. I think I'll just post here the part of the question that I'm having trouble with.</p>
<p>We have the depressed cubic equation :
<span class="math-container">\begin{equation}
f(t) = t^{3} + pt + q = 0
\end{equation}</span>
We also have what I believe is the negative of the discriminant :
<span class="math-container">\begin{equation}
D = 27 q^{2} + 4p^{3}
\end{equation}</span>
We assume <span class="math-container">$p$</span> and <span class="math-container">$q$</span> are both real and <span class="math-container">$D < 0$</span>. We also have the following polynomial in two variables (<span class="math-container">$u$</span> and <span class="math-container">$v$</span>) that results from a variable transformation <span class="math-container">$t = u+v$</span> :
<span class="math-container">\begin{equation}
u^{3} + v^{3} + (3uv + p)(u+v) + q = 0
\end{equation}</span>
You also have the quadratic polynomial equation :
<span class="math-container">\begin{equation}
x^{2} + qx - \frac{p^{3}}{27} = 0
\end{equation}</span>
The solutions to the 2-variable polynomial equation satisfy the following constraints :
<span class="math-container">\begin{equation}
u^{3} + v^{3} = -q
\end{equation}</span>
<span class="math-container">\begin{equation}
uv = -\frac{p}{3}
\end{equation}</span>
The first section of this part of the larger question asks to prove that the solutions of the quadratic equation are non-real complex conjugates. Here the solutions to the quadratic are equal to <span class="math-container">$u^{3}$</span> and <span class="math-container">$v^{3}$</span> (this relationship between the quadratic polynomial and the polynomial in two variables was proven in an earlier part of the question). I was able to do this part. The second part of this sub-question is what I'm having trouble with.</p>
<p>The question says, let :
<span class="math-container">\begin{equation}
u = r\cos(\theta) + ir\sin(\theta)
\end{equation}</span>
<span class="math-container">\begin{equation}
v = r\cos(\theta) - ir\sin(\theta)
\end{equation}</span>
The question then asks the reader to prove that the depressed cubic equation has three real roots :
<span class="math-container">\begin{equation}
2r\cos(\theta) \text{ , } 2r\cos\left( \theta + \frac{2\pi}{3} \right) \text{ , } 2r\cos\left( \theta + \frac{4\pi}{3} \right)
\end{equation}</span>
In an earlier part of the question they had the reader prove that given :
<span class="math-container">\begin{equation}
\omega = \frac{-1 + i\sqrt{3}}{2}
\end{equation}</span>
s.t. :
<span class="math-container">\begin{equation}
\omega^{2} = \frac{-1 - i\sqrt{3}}{2}
\end{equation}</span>
and :
<span class="math-container">\begin{equation}
\omega^{3} = 1
\end{equation}</span>
that if <span class="math-container">$(u,v)$</span> is a root of the polynomial in two variables then so are :
<span class="math-container">$(u\omega,v\omega^{2})$</span> and <span class="math-container">$(u\omega^{2},v\omega)$</span>. I think that the part of the question I'm having trouble with is similar. I suspect that :
<span class="math-container">\begin{equation}
2r \cos\left( \theta + \frac{2\pi}{3} \right) = u\omega + v\omega^{2} \text{ or } u\omega^{2} + v\omega \tag{1}
\end{equation}</span>
and :
<span class="math-container">\begin{equation}
2r \cos\left( \theta + \frac{4\pi}{3} \right) = u\omega + v\omega^{2} \text{ or } u\omega^{2} + v\omega \tag{2}
\end{equation}</span>
I have derived that :
<span class="math-container">\begin{equation}
\omega = \cos(\phi) + i\sin(\phi)
\end{equation}</span>
where <span class="math-container">$\phi = \frac{2\pi}{3}$</span>. Also :
<span class="math-container">\begin{equation}
\omega^{2} = \cos(2\phi) + i \sin(2\phi)
\end{equation}</span>
So that the goal of the question may be to prove equations <span class="math-container">$(1)$</span> and <span class="math-container">$(2)$</span>. I have tried to do this but haven't been able to.</p>
<p>Am I approaching this question in the correct way ? If I am approaching it the right way can someone show me how to use trigonometric identities to prove equations #1 and #2 ?</p>
| José Carlos Santos | 446,262 | <p>Suppose that <span class="math-container">$u$</span> and <span class="math-container">$v$</span> are such that <span class="math-container">$u^3+v^3=-q$</span> and that <span class="math-container">$3uv=-p$</span>. You already know that then <span class="math-container">$u+v$</span> is a root of the depressed equation. On the other hand, <span class="math-container">$u^3$</span> and <span class="math-container">$v^3$</span> are the roots of a quadratic equation with real coefficients and without real roots; it follows that <span class="math-container">$v^3=\overline{u^3}=\overline u^3$</span> and that therefore, <span class="math-container">$v=\overline u$</span>, <span class="math-container">$v=\omega\overline u$</span> or <span class="math-container">$v=\omega^2\overline u$</span>. But, since <span class="math-container">$3uv=-p\in\Bbb R$</span>, then in fact, you can't have <span class="math-container">$v=\omega\overline u$</span> and neither can you have <span class="math-container">$v=\omega^2\overline u$</span>. Conclusion: <span class="math-container">$v=\overline u$</span>.</p>
<p>If <span class="math-container">$u=r(\cos\theta+i\sin\theta)$</span>, then <span class="math-container">$v=\overline u=r(\cos\theta-i\sin\theta)$</span>, and so <span class="math-container">$u+v=2r\cos\theta$</span>.</p>
<p>Now, let <span class="math-container">$u'=\omega u$</span> and let <span class="math-container">$v'=\omega^2v$</span>. Then <span class="math-container">$u'^3+v'^3=-q$</span> and <span class="math-container">$3u'v'=-p$</span>. So, <span class="math-container">$u'+v'$</span> is also a root of the cubic. But<span class="math-container">\begin{align}u'+v'&=(r\cos\theta+ri\sin\theta)\left(\cos\left(\frac{2\pi}3\right)+\sin\left(\frac{2\pi}3\right)i\right)+\\&\ +(r\cos(-\theta)+ri\sin(-\theta))\left(\cos\left(\frac{-2\pi}3\right)+\sin\left(-\frac{2\pi}3\right)i\right)\\&=2r\cos\left(\theta+\frac{2\pi}3\right).\end{align}</span></p>
<p>Finally, if you take <span class="math-container">$u''=\omega^2u$</span> and <span class="math-container">$v''=\omega v$</span>, you can deduce that <span class="math-container">$2r\cos\left(\theta+\frac{4\pi}3\right)$</span> is still another root of your cubic.</p>
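<p>The whole construction can be checked numerically (a sketch with hypothetical helper names, not part of the original answer; it assumes <span class="math-container">$D&lt;0$</span>, e.g. <span class="math-container">$p=-3$</span>, <span class="math-container">$q=1$</span>):</p>

```python
import cmath
import math

def depressed_cubic_roots(p, q):
    """Three real roots of t^3 + p t + q = 0 when D = 27 q^2 + 4 p^3 < 0."""
    assert 27 * q**2 + 4 * p**3 < 0
    # u^3 is a non-real root of x^2 + q x - p^3/27 = 0
    u3 = (-q + cmath.sqrt(q**2 + 4 * p**3 / 27)) / 2
    r = abs(u3) ** (1 / 3)       # |u|, which equals sqrt(-p/3)
    theta = cmath.phase(u3) / 3  # one choice of argument for u
    return [2 * r * math.cos(theta + 2 * math.pi * k / 3) for k in range(3)]

roots = depressed_cubic_roots(-3, 1)  # t^3 - 3t + 1
for t in roots:
    assert abs(t**3 - 3 * t + 1) < 1e-9
```

Any of the three cube roots of <code>u3</code> works here, since <code>u * conj(u) = r**2 = -p/3</code> holds automatically; choosing a different cube root only permutes the three real roots.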
|
2,972,355 | <p>How to convert sentence that contains “no more than 3” into predicate logic sentence?</p>
<p>For example: "No more than three <span class="math-container">$x$</span> satisfy <span class="math-container">$R(x)$</span>"
using predicate logic. </p>
<p>This is what I have for "exactly one <span class="math-container">$x$</span> satisfies <span class="math-container">$R(x)$</span>":
<span class="math-container">$\exists x(R(x) \land \forall y(R(y) \rightarrow (x = y)))$</span></p>
| Bram28 | 256,001 | <p>Interpreting 'no more than three' as 'at most three' (i.e. it could be three, two, one, or maybe just none at all), you can do:</p>
<p><span class="math-container">$$\exists x \exists y \exists z \forall u (R(u) \rightarrow (u = x \lor u = y \lor u = z))$$</span></p>
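<p>This formula can be sanity-checked by brute force over a small finite universe (an illustrative sketch; <code>at_most_three</code> is a hypothetical helper, not part of the original answer):</p>

```python
from itertools import product

def at_most_three(universe, R):
    """Evaluate  exists x,y,z  forall u (R(u) -> (u = x or u = y or u = z))."""
    return any(
        all(u in (x, y, z) for u in universe if u in R)
        for x, y, z in product(universe, repeat=3)
    )

universe = range(5)
for k in range(6):
    R = set(range(k))  # a predicate holding for exactly k elements
    assert at_most_three(universe, R) == (k <= 3)
```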
|
4,252,431 | <p>Let <span class="math-container">$\alpha > 0$</span> be a constant and consider the problem
<span class="math-container">$$\left\{\begin{array}{cll}
u_t + \alpha u_x & = & f(x,t); \ \ 0 < x < L; \ t > 0\\
u(0,t) & = & 0; \ \ t > 0;\\
u(x,0) & = & 0; \ \ 0 < x < L.
\end{array}\right.$$</span>
Prove that, for every <span class="math-container">$t > 0$</span>, the following applies:
<span class="math-container">$$\int_{0}^L |u(x,t)|^2dx \leq \int_{0}^L \int_{0}^t |f(x,s)|^2 dsdx.$$</span></p>
<p><strong>TIP:</strong> Use Gronwall's inequality.</p>
<p><strong>Outline:</strong> I tried to use the fact that
<span class="math-container">$$u(x,t) = \int_{0}^t f(x + (s-t)\alpha,s)ds$$</span>
is the solution to the above problem when <span class="math-container">$u(x,0) = 0$</span>. Then I used Hölder's inequality to get to
<span class="math-container">$$|u(x,t)|^2 \leq t\int_{0}^t |f(x + (s-t)\alpha,s)|^2ds.$$</span>
Then I got stuck because I couldn't apply the Gronwall inequality satisfactorily...</p>
| herb steinberg | 501,262 | <p>Proof by contradiction:</p>
<p>Take the difference of the two equations and divide out common factors to get <span class="math-container">$y^3-y=x^3-x$</span>. This is a cubic in either variable in terms of the other, giving three solutions in each case, possible duplicates (x=y will appear in both sets). Use synthetic division by <span class="math-container">$x-y$</span> to get quadratics in both cases to get remaining solutions.</p>
<p>Remaining solutions: <span class="math-container">$x=\frac{-y\pm \sqrt{4-3y^2}}{2}$</span> and <span class="math-container">$y=\frac{-x\pm \sqrt{4-3x^2}}{2}$</span></p>
<p>However, these possible solutions do not in general satisfy the original equations, leaving <span class="math-container">$x=y$</span> as the only possibility. An example: <span class="math-container">$y=1$</span> leads to <span class="math-container">$x=0$</span> and <span class="math-container">$x=-1$</span>, which fail.</p>
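<p>Both steps can be verified directly (a sketch, not part of the original answer): the factorization identity behind the synthetic division, checked on integer samples, and the quoted <span class="math-container">$y=1$</span> example.</p>

```python
import math

# identity behind the synthetic division by x - y:
# x^3 - x - (y^3 - y) == (x - y)(x^2 + x y + y^2 - 1)
for x in range(-10, 11):
    for y in range(-10, 11):
        assert x**3 - x - (y**3 - y) == (x - y) * (x**2 + x * y + y**2 - 1)

# y = 1 gives x = (-1 +/- sqrt(4 - 3))/2, i.e. x = 0 or x = -1,
# and both make x^3 - x equal y^3 - y = 0
for x in ((-1 + math.sqrt(1)) / 2, (-1 - math.sqrt(1)) / 2):
    assert abs(x**3 - x) < 1e-12
```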
|
4,527,429 | <p>I am confused as to how we open the absolute value: do we get <span class="math-container">$e=0$</span> or <span class="math-container">$e=2x$</span>, or does the identity not exist?</p>
<p>Thanks.</p>
| Anne Bauval | 386,889 | <p>The identity <span class="math-container">$e$</span> does not exist because it should satisfy e.g. <span class="math-container">$-3=-3*e=|-3-e|\ge0.$</span></p>
|
2,412,783 | <p>I'm very new to linear algebra, and I have a homework problem that hasn't been covered in the book or by the professor. It seems like I have a fundamental misunderstanding of what matrices represent, but I can't find a good article or answer.</p>
<blockquote>
<p>Do the three lines $x_1 - 4x_2 = 1$, $2x_1 - x_2 = -3$, and $-x_1 - 3x_2 = 4$ have a common point of intersection? Explain.</p>
</blockquote>
<p>I assumed that the solution set of the matrix would represent how many intersections there were. I solved the echelon form and got:
$$\begin{bmatrix}1 & -4 & & 1\\2& -1 & & -3\\-1 & -3 & & 4\end{bmatrix} \rightarrow \begin{bmatrix}1 & -4 & & 1\\0& 1 & & -\frac{5}{7}\\0 & 0 & & 0\end{bmatrix}$$</p>
<p>Since this has infinite solutions, I would have thought it meant there were infinite intersections, or rather two equivalent lines, but that obviously isn't true. Is there any relationship between the solution set of a matrix and its original equations/lines? What is the matrix actually representing?</p>
| Sam Mills | 399,086 | <p>Your solution is correct (I assume you have row-reduced properly), but the bottom row gives you no information: does $0x_1 + 0x_2 = 0$ tell you anything about $x_1$ or $x_2$? You can safely scrub out a row of zeroes from a matrix and solve the system from there.</p>
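<p>Concretely (a quick check, not part of the original answer): back-substituting the remaining rows gives $x_2=-\frac57$ and $x_1=1+4x_2=-\frac{13}7$, and this single point lies on all three lines.</p>

```python
from fractions import Fraction

x1, x2 = Fraction(-13, 7), Fraction(-5, 7)
# the solution satisfies all three original line equations exactly
assert x1 - 4 * x2 == 1
assert 2 * x1 - x2 == -3
assert -x1 - 3 * x2 == 4
```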
|
3,012,090 | <p>Let <span class="math-container">$0&lt;p&lt;1$</span>. I have to prove that</p>
<p><span class="math-container">$$
\int_{0}^{\infty}\frac{\cos x}{x^p}dx=\frac{\pi}{2\Gamma(p)\cos(p\frac{\pi}{2})}\tag{1}
$$</span></p>
<p>by converting the integral on the left side to a double integral using the expression below:</p>
<p><span class="math-container">$$
\frac{1}{x^p}=\frac{1}{\Gamma(p)}\int_{0}^{\infty}e^{-xt}t^{p-1}dt\tag{2}
$$</span></p>
<p>By plugging <span class="math-container">$(2)$</span> into <span class="math-container">$(1)$</span> I get the following double integral:</p>
<p><span class="math-container">$$
\frac{1}{\Gamma(p)}\int_{0}^{\infty}\int_{0}^{\infty}e^{-xt}t^{p-1}\cos xdtdx\tag{3}
$$</span></p>
<p>However, I am unable to proceed any further, as I am unclear as to what method I should use in order to compute this integral. I thought that an appropriate change of variables could transform it into a product of two gamma functions, but I cannot see how that would work. Any help would be greatly appreciated.</p>
| Jack D'Aurizio | 44,121 | <p>The Laplace transform of <span class="math-container">$\cos x$</span> is <span class="math-container">$\frac{s}{1+s^2}$</span> and the inverse Laplace transform of <span class="math-container">$\frac{1}{x^p}$</span> is <span class="math-container">$\frac{s^{p-1}}{\Gamma(p)}$</span>, hence
<span class="math-container">$$ \int_{0}^{+\infty}\frac{\cos x}{x^p}\,dx = \frac{1}{\Gamma(p)}\int_{0}^{+\infty}\frac{s^p}{s^2+1}\,ds=\frac{1}{\Gamma(p)}\int_{0}^{\pi/2}\left(\tan u\right)^p\,du $$</span>
equals
<span class="math-container">$$ \begin{eqnarray*}\frac{1}{\Gamma(p)}\int_{0}^{1} v^p (1-v^2)^{-(p+1)/2}\,dv&=&\frac{1}{2\,\Gamma(p)}\int_{0}^{1}w^{(p-1)/2}(1-w)^{-(p+1)/2}\,dw\\& =& \frac{B\left(\tfrac{1+p}{2},\tfrac{1-p}{2}\right)}{2\,\Gamma(p)}\end{eqnarray*} $$</span>
or
<span class="math-container">$$ \frac{\Gamma\left(\frac{1+p}{2}\right)\Gamma\left(\frac{1-p}{2}\right)}{2\,\Gamma(p)}= \frac{\pi}{2\,\Gamma(p)\sin\left(\frac{\pi}{2}(p+1)\right)}=\frac{\pi}{2\,\Gamma(p)\cos\left(\frac{\pi p}{2}\right)}$$</span>
as wanted. We have exploited the Beta function and the reflection formula for the <span class="math-container">$\Gamma$</span> function.</p>
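<p>The final step (the reflection formula) can be checked numerically with the standard library (a sketch, not part of the original answer):</p>

```python
import math

# Gamma((1+p)/2) Gamma((1-p)/2) / (2 Gamma(p))  ==  pi / (2 Gamma(p) cos(pi p/2))
for p in (0.1, 0.25, 0.5, 0.75, 0.9):
    lhs = math.gamma((1 + p) / 2) * math.gamma((1 - p) / 2) / (2 * math.gamma(p))
    rhs = math.pi / (2 * math.gamma(p) * math.cos(math.pi * p / 2))
    assert abs(lhs - rhs) < 1e-10
```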
|
2,248,550 | <p>Will the value be of the form $\frac{0}{0}$? Do I have to use L'Hôpital's rule? Or can I say that the limit doesn't exist?</p>
| Olivier Oloa | 118,798 | <p><strong>Hint</strong>. One may observe that, as $x \to 0^+$, $y\to 1^-$,
$$
x+y-1=x-(1-y)=\left(\sqrt{x}-\sqrt{1-y}\right)\left(\sqrt{x}+\sqrt{1-y}\right).
$$</p>
|
1,778,098 | <p>Let $f,g: M^{k} \to N$ ($M$ and $N$ without boundary) be homotopic. Then, for $\omega$ a $k$-form on $N$, do we have that </p>
<p>$$ \int_M f^{\ast} \omega = \int_M g^{\ast} \omega$$ </p>
<p>as a conclusion? I can't figure out a proof, so I am starting to think that it is not true. I can't use homology or cohomology and all that fancy stuff. I think it is just a matter of rearranging singular cubes, but I can't see how. </p>
<p>The way I was trying to approach this is the following: Consider first
the case where support of $\omega$ is in the interior of the image of a singular cube $f \circ c$, where $c$ is a singular cube into $M$. </p>
<p>Then we look at the new singular cube $f \circ c : I \to N$. But I don't know what to do from here and how to approach the general case. </p>
<blockquote>
<p><strong>Attempt</strong></p>
</blockquote>
<p>Using @TedShifrin's answer I've managed the following </p>
<p>$$\int_{M \times [0,1]}d(H^{*}\omega)=\int_{\partial(M \times [0,1])}H^{*}\omega = \int_{M}g^{\ast}\omega-\int_{M}f^{\ast}\omega$$</p>
<p>but I am not sure.</p>
<p>Thanks a lot in advance.</p>
| user134824 | 134,824 | <p>Take $M=N=\mathbb R$ and take $\omega$ to be some nonvanishing $1$-form. Any two functions $\mathbb R\to\mathbb R$ are homotopic to each other, hence all are homotopic to zero. If your claim were true it would imply that the integral of any $1$-form is zero, which of course is not the case!</p>
<p>What you might be thinking of is that if $f$ and $g$ are homotopic then they induce the same map on de Rham complexes. In the previous example, I showed that <em>every</em> map $\mathbb R\to\mathbb R$ induces zero as a function from the de Rham complex of $\mathbb R$ to itself. But because $\mathbb R$ is convex (or better, contractible) its de Rham cohomology vanishes! So we don't learn anything.</p>
|
165,582 | <p>The three lines intersect in the point $(1, 1, 1)$: $(1 - t, 1 + 2t, 1 + t)$, $(u, 2u - 1, 3u - 2)$, and $(v - 1, 2v - 3, 3 - v)$. How can I find three planes which also intersect in the point $(1, 1, 1)$ such that each plane contains one and only one of the three lines?</p>
<p>Using the equation for a plane
$$a_i x + b_i y + c_i z = d_i,$$
I get $9$ equations.</p>
<p>Sharing equations with the lines:</p>
<p>$$a_1(1 - t) + b_1(1 + 2t) + c_1(1 + t) = d_1,$$
$$a_2(u) + b_2(2u - 1) + c_2(3u - 2) = d_2,$$
$$a_3(v - 1) + b_3(2v - 3) + c_3(3 - v) = d_3.$$</p>
<p>Intersection at $(1,1,1)$:
$$a_1 + b_1 + c_1 = d_1,$$
$$a_2 + b_2 + c_2 = d_2,$$
$$a_3 + b_3 + c_3 = d_3.$$</p>
<p>Dot product of plane normals and line vectors is $0$ since perpendicular:
$$\langle a_1, b_1, c_1 \rangle \cdot \langle -1, 2, 1 \rangle = -a_1 + 2b_1 + c_1 = 0,$$
$$\langle a_2, b_2, c_2\rangle \cdot \langle 1, 2, 3\rangle = a_2 + 2b_2 + 3c_2 = 0,$$
$$\langle a_3, b_3, c_3 \rangle \cdot \langle 1, 2, -1 \rangle = a_3 + 2b_3 - c_3 = 0.$$</p>
<p>I know how to find the intersection of $3$ planes using matrices/row reduction, and I know some relationships between lines and planes. However, I seem to come up with $12$ unknowns and $9$ equations for this problem. I know the vectors for the lines must be perpendicular to the normals of the planes, thus the dot product between the two should be $0$. I also know that the planes pass through the point $(1,1,1)$ and the $x,y,z$ coordinates for the parameters given in the line equations. What information am I missing? Maybe there are multiple solutions. If so, how can these planes be described with only a line and one point? Another thought was to convert the planes to parametric form, but to describe a plane with parameters normally I would have $2$ vectors and one point, but here I only have one vector and one point.</p>
| Cameron Buie | 28,900 | <p>Why does $k+3$ need to be even for the gcd of $k(k+5)/2$ and $k+3$ to divide $2$? For instance, consider the $k=2$ case.</p>
<p>Now, if $2$ had to divide the gcd of those numbers, then we could conclude that $k+3$ must be even.</p>
|
2,375,298 | <p>My question is as follows:</p>
<p>I have four different dice and I'm trying to figure out how many possible combinations there are of (6,6,6,3).</p>
<p>My intuition tells me that there are 24 combinations. I'm imagining we have 4 spots:</p>
<hr>
<p>For the first spot there are 4 options (6,6,6,3);
for the second spot there are 3 options, etc.</p>
<p>I believe this is wrong but I can't figure out the flaw in my reasoning.</p>
<p>Any help would be appreciated. Thanks!</p>
| Aweygan | 234,668 | <p>Here's a proof that the space of weakly Cauchy sequences is closed in $\ell_\infty(X)$:</p>
<p>Let $WC(X)$ denote the subspace of $\ell_\infty(X)$ composed of weakly Cauchy sequences. Let $(x_n)$ be a sequence in $WC(X)$, with $x_n=(x_{nm})$, convergent to some $y=(y_n)\in\ell_\infty(X)$. Fix $f\in X^*$ and $\varepsilon>0$. Since $x_n\to y$, there is some $n\in\mathbb N$ such that $\|x_n-y\|_\infty<\varepsilon$. For this $n$, there is some $N\in\mathbb N$ such that $|f(x_{nm_1}-x_{nm_2})|<\varepsilon$ for all $m_1,m_2\geq N$. Thus we have
\begin{align*}
|f(y_{m_1}-y_{m_2})|&\leq|f(y_{m_1}-x_{nm_1})|+|f(x_{nm_1}-x_{nm_2})|+|f(x_{nm_2}-y_{m_2})|\\
&\leq\|f\|\|y_{m_1}-x_{nm_1}\|+|f(x_{nm_1}-x_{nm_2})|+\|f\|\|x_{nm_2}-y_{m_2}\|\\
&\leq2\|f\|\|y-x_n\|_\infty+|f(x_{nm_1}-x_{nm_2})|\\
&<(2\|f\|+1)\varepsilon.
\end{align*}
Thus $(y_n)$ is weakly Cauchy, so $y\in WC(X)$ and therefore $WC(X)$ is closed in $\ell_\infty(X)$.</p>
<p>As far as resources are concerned about the space of weakly convergent sequences, I'm afraid I'm not aware of any.</p>
|
3,865,607 | <p>Given <span class="math-container">$B\subseteq X$</span> with both <span class="math-container">$B$</span> and <span class="math-container">$X$</span> contractible, how would you prove that the inclusion map <span class="math-container">$i:B \to X$</span> is a homotopy equivalence?</p>
<p>Thank you</p>
| Tsemo Aristide | 280,301 | <p>Let <span class="math-container">$H_t$</span> be the homotopy between <span class="math-container">$Id_B$</span> and the constant map <span class="math-container">$f_B(x)=b$</span> for some <span class="math-container">$b\in B$</span>, and let <span class="math-container">$G_t$</span> be a homotopy between the identity of <span class="math-container">$X$</span> and the constant map <span class="math-container">$f_X(x)=b$</span> (this exists since <span class="math-container">$X$</span> is contractible, hence path-connected, so a contraction of <span class="math-container">$X$</span> can be adjusted to end at <span class="math-container">$b$</span>).
Consider <span class="math-container">$g:X\rightarrow B$</span> defined by <span class="math-container">$g(x)=b$</span>.</p>
<p>Then <span class="math-container">$g\circ i=f_B\simeq Id_B$</span> via <span class="math-container">$H_t$</span>, and <span class="math-container">$i\circ g=f_X\simeq Id_X$</span> via <span class="math-container">$G_t$</span>, so <span class="math-container">$i$</span> is a homotopy equivalence with homotopy inverse <span class="math-container">$g$</span>.</p>
|
23,268 | <p>I'm the sort of mathematician who works really well with elements. I really enjoy point-set topology, and category theory tends to drive me crazy. When I was given a bunch of exercises on subjects like limits, colimits, and adjoint functors, I was able to do them, although I am sure my proofs were far longer and more laborious than they should have been. However, I felt like most of the understanding I gained from these exercises was gone within a week. I have a copy of MacLane's "Categories for the Working Mathematician," but whenever I pick it up, I can never seem to get through more than two or three pages (except in the introduction on foundations).</p>
<p>A couple months ago, I was trying to use the statements found in Hartshorne about glueing schemes and morphisms and realized that these statements were inadequate for my purposes. Looking more closely, I realized that Hartshorne's hypotheses are "wrong," in roughly the same way that it is "wrong" to require, in the definition of a basis for a topology that it be closed under finite intersections. (This would, for instance, exclude the set of open balls from being a basis for $\mathbb{R}^n$.) Working through it a bit more, I realized that the "right" statement was most easily expressed by saying that a certain kind of diagram in the category of schemes has a colimit. At this point, the notion of "colimit" began to seem much more manageable: a colimit is a way of gluing objects (and morphisms).</p>
<p>However, I cannot think of any similar intuition for the notion of "limit." Even in the case of a fibre product, a limit can be anything from an intersection to a product, and I find it intimidating to try to think of these two very different things as a special cases of the same construction. I understand how to show that they are; it just does not make intuitive sense, somehow.</p>
<p>For another example, I think (and correct me if I am wrong) that <strike>the sheaf condition on a presheaf can be expressed as stating that the contravariant functor takes colimits to limits</strike>. [This is not correct as stated. See Martin Brandenburg's answer below for an explanation of why not, as well as what the correct statement is.] It seems like a statement this simple should make everything clearer, but I find it much easier to understand the definition in terms of compatible local sections gluing together. I can (I think) prove that they are the same, but by the time I get to one end of the proof, I've lost track of the other end intuitively.</p>
<p>Thus, my question is this: Is there a nice, preferably geometric intuition for the notion of limit? If anyone can recommend a book on category theory that they think would appeal to someone like me, that would also be appreciated.</p>
| Martin Brandenburg | 2,841 | <p>I pick up your remarks about sheaves. Indeed, the sheaf condition is a very good example to get a geometric idea of a limit.</p>
<p>Assume that $X$ is a set and $X_i$ are subsets of $X$ whose union is $X$. Then it is clear how to characterize functions on $X$: These are simply functions on the $X_i$ which agree on the overlaps $X_i \cap X_j$. This can be formulated in a fancy way: Let $J$ be the category whose objects are the indices $i$ and pairs of such indices $(i,j)$. It should be a preorder and we have the morphisms $(i,j) \to i, (i,j) \to j$. Consider the diagram $J \to Set$, which is given by $i \mapsto X_i, (i,j) \mapsto X_i \cap X_j$. What we have remarked above says exactly that $X$ is the colimit of this diagram! In a similar fashion, open coverings can be understood as colimits in the category of topological spaces, ringed spaces or schemes. It's all about gluing morphisms.</p>
<p>Now what about limits? I think it is important first to understand limits in the category of sets. If $F : J \to Set$ is a small diagram, then we can consider simply the set of "compatible elements in the image" of $F$, namely</p>
<p>$X = \{x \in \prod_j F(j) : \forall i \to j : x_j = F(i \to j)(x_i)\}$.</p>
<p>A short definition would be $X = Cone(*,F)$. Observe that we have projections $X \to F(j), x \mapsto x_j$ and with these $X$ is the limit of $F$. Now the Yoneda-Lemma or just the definition of a limit tells you how you can think of a limit in an arbitrary category: That $X$ is a limit of a diagram $F : J \to C$ amounts to say that elements of $X$ .. erm we don't have any elements, so let's say morphisms $Y \to X$, naturally correspond to compatible elem... erm morphisms $Y \to F(i)$. In other words, for every $Y$, $X(Y)$ is the set-theoretic limit of the diagramm $F(Y)$. I hope that this makes clear that the concept of limits in arbitrary categories is already visible in the category of sets.</p>
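<p>As an illustration of this definition (not part of the original answer), the set-theoretic limit can be computed directly for a finite diagram. The Python sketch below builds the limit of a cospan <span class="math-container">$F(0)\to F(2)\leftarrow F(1)$</span>, i.e. a pullback, as the set of compatible tuples; the particular sets and maps are invented for the example.</p>

```python
from itertools import product

# A finite diagram F : J -> Set, here the cospan 0 -> 2 <- 1.
# (These sets and functions are made up purely for illustration.)
objects = {0: [0, 1, 2], 1: [0, 1, 2, 3], 2: [0, 1]}
arrows = {
    (0, 2): lambda x: x % 2,               # F(0) -> F(2)
    (1, 2): lambda x: 0 if x < 2 else 1,   # F(1) -> F(2)
}

def set_limit(objects, arrows):
    """Limit in Set: tuples (x_j) with F(i->j)(x_i) = x_j for every arrow i->j."""
    keys = sorted(objects)
    cone = []
    for tup in product(*(objects[k] for k in keys)):
        x = dict(zip(keys, tup))
        if all(f(x[i]) == x[j] for (i, j), f in arrows.items()):
            cone.append(tuple(x[k] for k in keys))
    return cone

pullback = set_limit(objects, arrows)
```

<p>The projections <span class="math-container">$x \mapsto x_j$</span> then make this set the limit: an element records one "section" over each object, compatible along every arrow.</p>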
<p>Now let $X$ be a topological space and $O(X)$ the category of open subsets of $X$; it's an preorder with respect to the inclusion. Thus a presheaf is just a functor $F$ from $O(X)^{op}$ to the category of sets (or which suitable category you like). Now open coverings can be described as certain limits in $O(X)^{op}$, i.e. colimits in $O(X)$, as above. Observe that $F$ is a sheaf if and only if $F$ preserves these limits: If $U$ is covered by $U_i$, then $F(U)$ should be the limit of the $F(U_i), F(U_i \cap U_j)$ with transition maps $F(U_i) \to F(U_i \cap U_j), F(U_j) \to F(U_i \cap U_j)$, i.e. $F(U)$ consists of compatible elements of the $F(U_i)$, meaning that the elements of $F(U_i)$ and $F(U_j)$ restrict to the same element in $F(U_i \cap U_j)$. Thus we have a perfect geometric example of a limit: the set of sections on an open set is the limit of the set of sections on the open subsets of a covering.</p>
<p>Somehow this view takes over to the general case: Let $F : J \to Set$ be a functor. Regard it as a presheaf on $J^{op}$, and the map induced by $i \to j$ in $J^{op}$ as a restriction $F(j) \to F(i)$. Also call the elements of $F(i)$ sections on $i$. Then the limit of $F$ consists of compatible sections. Since I've been learning algebraic geometry, I almost always think of limits in this way.</p>
<p>Finally it is important to remember that limit is just the dual concept of colimit. And often algebra and geometry appear dually at once, for example sections and open subsets in sheaves. If $(X_i,\mathcal{O}_{X_i})$ are ringed spaces and you want to find the colimit, well you can guess that you <em>have</em> to do: Take the colimit of the $X_i$ and the limit of the $\mathcal{O}_{X_i}$ (pullbacked to the colimit).</p>
<blockquote>
<p>"...the sheaf condition on a presheaf can be expressed as stating that the contravariant functor takes colimits to limits"</p>
</blockquote>
<p>This is not correct. The reason is that the index category can be rather wild and colimits in preorders don't care about that. In detail: Let $U : J \to O(X)^{op}$ be a small diagram. Then the limit is just the union $V$ of $U_j$. Thus $F$ preserves this limit iff sections on $V$ are sections on the $U_j$ which are compatible with respect to the restriction morphisms given by $U$. If $J$ is discrete and $U$ maps everything to the same open subset $V$ of $X$, then the compatible sections are $F(V)^J$, which is bigger than $F(V)$.</p>
<blockquote>
<p>"... I have a copy of MacLane's "Categories for the Working Mathematician," but whenever I pick it up, I can never seem to get through more than two or three pages (except in the introduction on foundations"</p>
</blockquote>
<p>I think this book is still one of the best introductions into category theory. It can be hard to grasp all these abstract concepts and examples, but it gets easier as soon as you get input from other areas where category theoretic ideas are omnipresent. Your example about gluing morphisms illustrates this very well.</p>
|
23,268 | <p>I'm the sort of mathematician who works really well with elements. I really enjoy point-set topology, and category theory tends to drive me crazy. When I was given a bunch of exercises on subjects like limits, colimits, and adjoint functors, I was able to do them, although I am sure my proofs were far longer and more laborious than they should have been. However, I felt like most of the understanding I gained from these exercises was gone within a week. I have a copy of MacLane's "Categories for the Working Mathematician," but whenever I pick it up, I can never seem to get through more than two or three pages (except in the introduction on foundations).</p>
<p>A couple months ago, I was trying to use the statements found in Hartshorne about glueing schemes and morphisms and realized that these statements were inadequate for my purposes. Looking more closely, I realized that Hartshorne's hypotheses are "wrong," in roughly the same way that it is "wrong" to require, in the definition of a basis for a topology that it be closed under finite intersections. (This would, for instance, exclude the set of open balls from being a basis for $\mathbb{R}^n$.) Working through it a bit more, I realized that the "right" statement was most easily expressed by saying that a certain kind of diagram in the category of schemes has a colimit. At this point, the notion of "colimit" began to seem much more manageable: a colimit is a way of gluing objects (and morphisms).</p>
<p>However, I cannot think of any similar intuition for the notion of "limit." Even in the case of a fibre product, a limit can be anything from an intersection to a product, and I find it intimidating to try to think of these two very different things as a special cases of the same construction. I understand how to show that they are; it just does not make intuitive sense, somehow.</p>
<p>For another example, I think (and correct me if I am wrong) that <strike>the sheaf condition on a presheaf can be expressed as stating that the contravariant functor takes colimits to limits</strike>. [This is not correct as stated. See Martin Brandenburg's answer below for an explanation of why not, as well as what the correct statement is.] It seems like a statement this simple should make everything clearer, but I find it much easier to understand the definition in terms of compatible local sections gluing together. I can (I think) prove that they are the same, but by the time I get to one end of the proof, I've lost track of the other end intuitively.</p>
<p>Thus, my question is this: Is there a nice, preferably geometric intuition for the notion of limit? If anyone can recommend a book on category theory that they think would appeal to someone like me, that would also be appreciated.</p>
| Spice the Bird | 14,167 | <p>This answer is sort of an analogy, I am not quite sure how to make it precise. Further, It addresses that part of the question about a fiber product being anything from an intersection to a product (so this is perhaps a narrow answer). I am also not quite sure if this is "geometric". All of this said, a fiber product is a collection of "events" along with some dependency give by the maps that define the fiber product. In the case of the targets of the two defining arrows is the terminal object, their is no dependency whatsoever. Of course, I am speaking about this as if we were doing probability theory, but these ideas should work in any category. </p>
|
67,460 | <p>Denote the system in $GF(2)$ as $Ax=b$, where:
$$
\begin{align}
A=&(A_{ij})_{m\times m}\\
A_{ij}=&
\begin{cases}
(1)_{n\times n}&\text{if }i=j\quad\text{(a matrix where entries are all 1's)}\\
I_n&\text{if }i\ne j\quad\text{(the identity matrix)}
\end{cases}
\end{align}
$$
that is, $A$ is a square matrix of order $mn$ (an $m\times m$ array of $n\times n$ blocks), and $b$ is a 0-1 vector of length $mn$. Now what is the solution of this system, if any, for a general pair of $m$ and $n$?</p>
<p>Example: For $m=2,n=3$ and $b=(0, 1, 0, 0, 1, 0)^T$, we have
$$
A=
\begin{pmatrix}
1 & 1 & 1 & 1 & 0 & 0 \\
1 & 1 & 1 & 0 & 1 & 0 \\
1 & 1 & 1 & 0 & 0 & 1 \\
1 & 0 & 0 & 1 & 1 & 1 \\
0 & 1 & 0 & 1 & 1 & 1 \\
0 & 0 & 1 & 1 & 1 & 1
\end{pmatrix}
$$
then one solution is $x=(1, 0, 1, 0, 1, 0)^T$</p>
<p>I know Gaussian elimination. I am trying but find it not very easy when dealing with a general case.</p>
| hmakholm left over Monica | 14,366 | <p>Some observations, too long for a comment:</p>
<p>If $n$ is odd, your matrix is not invertible, and so there is no solution for arbitrary $b$ (and a solution will not be unique if it exists). First, do some row operations to rewrite the constituent blocks to $$\pmatrix{1&1&1&1\\0&0&0&0\\0&0&0&0\\0&0&0&0}, \pmatrix{1&0&0&0\\1&1&0&0\\1&0&1&0\\1&0&0&1}$$
Then do some column operations to rewrite the blocks to
$$\pmatrix{n\bmod 2&1&1&1\\0&0&0&0\\0&0&0&0\\0&0&0&0}, \pmatrix{1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1}$$
But if $n$ is odd, then columns $1$, $n+1$, $2n+1$, .. $(m-1)n+1$ are now all identical.</p>
<p>On the other hand, if $m$ is odd, then permuting the indices will turn the $m\times n$ problem into an $n\times m$ problem from the same family, and that will not be invertible either.</p>
<p>Some further progress in the even case (before I noticed Robert's elegant solution): We have the blocks
$$\pmatrix{0&1&1&1\\0&0&0&0\\0&0&0&0\\0&0&0&0}, \pmatrix{1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1}$$
Further column operations give
$$\pmatrix{0&1&0&0\\0&0&0&0\\0&0&0&0\\0&0&0&0}, \pmatrix{1&0&0&0\\0&1&1&1\\0&0&1&0\\0&0&0&1}$$
and by row operations we can blank out the new 1's to the right:
$$\pmatrix{0&1&0&0\\0&0&0&0\\0&0&0&0\\0&0&0&0}, \pmatrix{1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1}$$
Now everything decouples into one $2m\times 2m$ problem (containing the first two rows and columns of each block) and $n-2$ separate $m\times m$ problems.</p>
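<p>Since the question mentions Gaussian elimination, here is a small Python sketch (my own, not from the answer above) that builds the block matrix and row-reduces the augmented system over GF(2), finding one particular solution when the system is consistent. It reproduces the <span class="math-container">$m=2,n=3$</span> example from the question.</p>

```python
def block_matrix(m, n):
    """The matrix A from the question: all-ones n x n blocks on the block
    diagonal, n x n identity blocks everywhere else; A is (mn) x (mn)."""
    size = m * n
    A = [[0] * size for _ in range(size)]
    for bi in range(m):
        for bj in range(m):
            for r in range(n):
                for c in range(n):
                    if bi == bj or r == c:
                        A[bi * n + r][bj * n + c] = 1
    return A

def solve_gf2(A, b):
    """Gauss-Jordan elimination mod 2; returns one solution of A x = b, or None."""
    rows, cols = len(A), len(A[0])
    M = [A[i][:] + [b[i]] for i in range(rows)]
    pivots, r = [], 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(rows):
            if i != r and M[i][c]:
                M[i] = [a ^ p for a, p in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    if any(M[i][cols] for i in range(r, rows)):
        return None                       # inconsistent system
    x = [0] * cols                        # free variables set to 0
    for i, c in enumerate(pivots):
        x[c] = M[i][cols]
    return x

A = block_matrix(2, 3)
b = [0, 1, 0, 0, 1, 0]
x = solve_gf2(A, b)
```

<p>Note that elimination may return a different particular solution than the $x=(1,0,1,0,1,0)^T$ quoted in the question, since for odd $n$ the matrix is singular and solutions are not unique; the check is simply that $Ax=b$ holds mod 2.</p>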
|
487,171 | <p>Now I tried tackling this question from different sizes and perspectives (and already asked a couple of questions here and there), but perhaps only now can I formulate it well and ask you (since I have no good ideas).</p>
<p>Let there be $k, n \in\mathbb{Z_+}$. These are fixed.</p>
<p>Consider a set of $k$ integers $S=\{0, 1, 2, ... k-1\}$.</p>
<p>We form a sequence $a_1, a_2, ..., a_n$ by picking numbers from $S$ at random with equal probability $1/k$.</p>
<p>The question is - what is the probability of that sequence to be sorted ascending, i.e. $a_1 \leq a_2 \leq ... \leq a_n$? </p>
<p>Case $k \to \infty$:</p>
<p>This allows us to assume (with probability tending to $1$) that all elements $a_1, ..., a_n$ are different. It means that only one ordering out of $n!$ possible is sorted ascending. </p>
<p>And since all orderings are equally likely (not sure why though), the probability of the sequence to be sorted is</p>
<p>$$\frac{1}{n!}.$$</p>
<p>Case k = 2:</p>
<p>Now we have zeroes and ones which come to the resulting sequence with probability $0.5$ each. So the probability of any particular n-sequence is $\frac{1}{2^n}$. </p>
<p>Let us count the number of possible sorted sequences:</p>
<p>$$0, 0, 0, \ldots, 0, 0$$
$$0, 0, 0, \ldots, 0, 1$$
$$0, 0, 0, \ldots, 1, 1$$
$$\ldots$$
$$0, 0, 1, \ldots, 1, 1$$
$$0, 1, 1, \ldots, 1, 1$$
$$1, 1, 1, \ldots, 1, 1$$</p>
<p>These total to $(n+1)$ possible sequences. Now again, any sequence is equally likely, so the probability of the sequence to be sorted is </p>
<p>$$ \frac{n+1}{2^n}. $$</p>
<p>Question:</p>
<p>I have no idea how to generalize it well for arbitrary $k, n$. Maybe we can tackle it together since my mathematical skills aren't really that high. </p>
| coffeemath | 30,316 | <p>Each sorted string consists of $n_0$ copies of $0$, followed by $n_1$ copies of $1$, etc., ending with $n_{k-1}$ copies of $k-1$. The restriction on the $n_j$ are that they be nonnegative and sum to $n$. The number of solutions to that has a known expression via binomial coefficients, in this case it is $\binom{n+k-1}{k-1}.$ So if this is placed over the number $k^n$ of possible strings, that is the probability the string is nondecreasing.</p>
<p>Here's a reference for the binomial coefficient use above. The thing being counted is the number of "weak compositions" of $n$, in the terminology used at that site:
<a href="http://en.wikipedia.org/wiki/Composition_(combinatorics)" rel="nofollow">http://en.wikipedia.org/wiki/Composition_(combinatorics)</a></p>
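<p>A quick Python sketch (an illustration, not part of the original answer) comparing the closed form <span class="math-container">$\binom{n+k-1}{k-1}/k^n$</span> against a brute-force count over all <span class="math-container">$k^n$</span> strings:</p>

```python
from itertools import product
from math import comb

def closed_form(n, k):
    """P(nondecreasing) = C(n + k - 1, k - 1) / k^n (weak compositions of n)."""
    return comb(n + k - 1, k - 1) / k ** n

def brute_force(n, k):
    """Direct count over all k^n strings (feasible for small n, k only)."""
    good = sum(1 for s in product(range(k), repeat=n)
               if all(s[i] <= s[i + 1] for i in range(n - 1)))
    return good / k ** n
```

<p>The $k=2$ case recovers the $(n+1)/2^n$ formula worked out in the question.</p>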
|
3,243,406 | <p>I know that the function <span class="math-container">$f(x)=|x(x-1)^3|$</span> is not differentiable at <span class="math-container">$x=0$</span>, but why is it differentiable at <span class="math-container">$x=1$</span>?</p>
| XYSquared | 648,514 | <p>The function <span class="math-container">$f(x) = |x|\;|(x-1)^3|$</span>. <span class="math-container">$|x|$</span> certainly has a derivative 1 at <span class="math-container">$x=1$</span>, and <span class="math-container">$h(x)=|(x-1)^3|$</span> indeed has a derivative at <span class="math-container">$x=1$</span> too. <span class="math-container">$$
h'(1)=\lim_{x \rightarrow 1}\frac{h(x)-h(1)}{x-1}=\lim_{x \rightarrow 1}\frac{|(x-1)^3|}{x-1}$$</span>
But <span class="math-container">$$\lim_{x \rightarrow 1^+}\frac{|(x-1)^3|}{x-1}=\lim_{x \rightarrow 1^+}(x-1)^2=0$$</span>
and <span class="math-container">$$\lim_{x \rightarrow 1^-}\frac{|(x-1)^3|}{x-1}=\lim_{x \rightarrow 1^-}-(x-1)^2=0$$</span>
Hence <span class="math-container">$h'(1)=0$</span>. Note that this method cannot be applied to <span class="math-container">$|x-1|$</span> at <span class="math-container">$x=1$</span>. For intuitions, look at the pictures of <span class="math-container">$|x-1|$</span> and <span class="math-container">$|(x-1)^3|$</span>.</p>
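<p>A numerical sanity check of the one-sided difference quotients (a sketch, not from the original answer; the step size <code>h</code> is chosen arbitrarily) shows both one-sided derivatives vanish at <span class="math-container">$x=1$</span>, while at <span class="math-container">$x=0$</span> they approach <span class="math-container">$1$</span> and <span class="math-container">$-1$</span>:</p>

```python
def f(x):
    return abs(x * (x - 1) ** 3)

def one_sided(g, x, h):
    """Right and left difference quotients of g at x with step h > 0."""
    right = (g(x + h) - g(x)) / h
    left = (g(x - h) - g(x)) / (-h)
    return right, left

h = 1e-6  # arbitrary small step for the demo
right1, left1 = one_sided(f, 1.0, h)  # both near 0: differentiable at x = 1
right0, left0 = one_sided(f, 0.0, h)  # near 1 and -1: not differentiable at x = 0
```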
|
295,597 | <p>I'm trying to solve this simple integral:</p>
<p>$$\frac12 \int \frac{x^2}{\sqrt{x + 1}} dx$$</p>
<p>Here's what I have done so far:</p>
<ol>
<li><p>$\displaystyle t = \sqrt{x + 1} \Leftrightarrow x = t^2 - 1 \Rightarrow dx = 2t dt$</p></li>
<li><p>$\displaystyle \frac12 \int \frac{x^2}{\sqrt{x + 1}} dx = \int \frac{t (t^2 - 1)^2}t dt$</p></li>
<li><p>$\displaystyle \int (t^2 - 1)^2 dt = \frac15 t^5 - \frac23 t^3 + t + C$</p></li>
<li><p>$\displaystyle \frac15 t^5 - \frac23 t^3 + t + C = \frac15 \sqrt{(x + 1)^5} - \frac23 \sqrt{(x + 1)^3} + \sqrt{x + 1} + C$</p></li>
</ol>
<p>WolframAlpha tells me steps 1 and 3 are right so the mistake must be somewhere in steps 2 and 4, but I really can't see it.</p>
| Alex | 38,873 | <p>There is a (slightly) more obvious way of solving it: rewrite the numerator as $x^2-1+1$ and then the whole integral as a sum of two integrals:
$$
\int \frac{(x^2-1)dx}{\sqrt{x+1}} + \int \frac{dx}{\sqrt{x+1}}
$$
The second integral is easy, the first one is
$$
\int \frac{(x^2-1)dx}{\sqrt{x+1}} =\int \frac{(x+1)(x-1)dx}{\sqrt{x+1}}=\int x \sqrt{x+1}dx-\int \sqrt{x+1}dx
$$
The second integral is also easy, and the integrand in the first one should be rewritten as $(x+1-1)\sqrt{x+1}$; the rest is easy. This is a bit too straightforward, I admit. </p>
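<p>Either route can be checked numerically. The following Python sketch (not part of the original answer; the interval <span class="math-container">$[0,3]$</span> is chosen arbitrarily) compares the antiderivative from step 4 of the question against a composite Simpson approximation of the definite integral:</p>

```python
import math

def F(x):
    """Candidate antiderivative from step 4 of the question, via t = sqrt(x + 1)."""
    t = math.sqrt(x + 1)
    return t ** 5 / 5 - 2 * t ** 3 / 3 + t

def integrand(x):
    return 0.5 * x ** 2 / math.sqrt(x + 1)

def simpson(g, a, b, n=2000):
    """Composite Simpson rule with n subintervals (n even)."""
    h = (b - a) / n
    s = g(a) + g(b)
    s += 4 * sum(g(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(g(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

numeric = simpson(integrand, 0.0, 3.0)
exact = F(3.0) - F(0.0)   # 38/15, so the antiderivative in the question is correct
```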
|
3,909,005 | <p>I would like to ask what are the derivative values (first and second) of a function "log star": <span class="math-container">$f(n) = \log^*(n)$</span>?</p>
<p>I want to calculate some limit and use l'Hôpital's rule, which is why I need the derivative of "log star":
<span class="math-container">$$\lim_{n \to \infty}\frac{\log_{2}^*(n)}{\log_{2}(n)}$$</span></p>
<p>More about this function: <a href="https://en.wikipedia.org/wiki/Iterated_logarithm" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Iterated_logarithm</a></p>
| Shaun | 732,537 | <p>You might try to use the definition of derivative to find your solutions
<span class="math-container">$$
\lim_{\Delta x \rightarrow 0}\frac{f(x+\Delta x)-f(x)}{\Delta x}
$$</span>
and evaluate at the different intervals that are valid for the function.</p>
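<p>Note that <span class="math-container">$\log^*$</span> is an integer-valued step function, so a classical derivative (and hence l'Hôpital) does not apply directly; still, one can observe numerically that the ratio in the question tends to <span class="math-container">$0$</span>. A Python sketch (not from the original answer; base-2 logarithms assumed throughout):</p>

```python
import math

def log_star(n):
    """Iterated base-2 logarithm: times log2 is applied until the value is <= 1."""
    count = 0
    while n > 1:
        n = math.log2(n)
        count += 1
    return count

# Since log2(2**k) = k, the ratio log*(n)/log2(n) at n = 2**k is log_star(2**k)/k.
ratios = [log_star(2 ** k) / k for k in (8, 64, 1024, 32768)]
```

<p>The ratios decrease rapidly toward zero while <span class="math-container">$\log^*$</span> itself barely grows.</p>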
|
3,998,098 | <p>I was asked to determine the locus of the equation
<span class="math-container">$$b^2-2x^2=2xy+y^2$$</span></p>
<p>This is my work:</p>
<blockquote>
<p>Add <span class="math-container">$x^2$</span> to both sides:
<span class="math-container">$$\begin{align}
b^2-x^2 &=2xy+y^2+x^2\\
b^2-x^2 &=\left(x+y\right)^2
\end{align}$$</span></p>
</blockquote>
<p>I see that this is similar to the equation of a circle. How can I find the locus of this expression?</p>
| Amanuel Getachew | 669,545 | <p>The equation is clearly of a conic section. However,since the coefficient of <span class="math-container">$xy$</span> is non-zero, the conic section is tilted by some angle <span class="math-container">$\theta$</span>. The value of <span class="math-container">$\theta$</span> can be determined as:
<span class="math-container">$$\theta = \dfrac{1}{2} \arctan \left(\dfrac{B}{A-C}\right)$$</span>
, where <span class="math-container">$A,\ B$</span> and <span class="math-container">$C$</span> are the coefficients of <span class="math-container">$x^2$</span>, <span class="math-container">$xy$</span> and <span class="math-container">$y^2$</span> respectively in the equation of the conic section. Substitution yields <span class="math-container">$\theta = 0.5535\dots$</span> (in radians).
Now rotate the conic section by <span class="math-container">$-\theta$</span> to get its standard equation with vertical or horizontal orientation.
<span class="math-container">$$ x = x^\prime \cos \theta - y^\prime \sin \theta $$</span>
<span class="math-container">$$y = x^\prime \sin \theta + y^\prime \cos \theta$$</span></p>
<p>After substitution and (ugly) computation, you will get the following equation:</p>
<p><span class="math-container">$$2.62 (x')^2 + 0.38(y')^2 = b^2$$</span>
This means the original equation describes an ellipse with semi-major axis of length <span class="math-container">$a^\prime= b\sqrt{1/0.38}$</span> tilted by <span class="math-container">$0.5535\dots$</span> radians (about <span class="math-container">$31$</span> degrees) and semi-minor axis of length <span class="math-container">$b^\prime = b\sqrt{1/2.62}$</span>.</p>
<p>But, of course, there are easier tests to check which conic section the equation describes.</p>
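<p>One such check, in Python (a sketch, not part of the original answer): substitute the rotation into the quadratic form <span class="math-container">$2x^2+2xy+y^2$</span> and confirm that the cross term vanishes at <span class="math-container">$\theta=\frac12\arctan 2$</span>, with the remaining coefficients equal to the eigenvalues <span class="math-container">$(3\pm\sqrt5)/2\approx 2.62,\ 0.38$</span>:</p>

```python
import math

A, B, C = 2.0, 2.0, 1.0             # x^2, xy, y^2 coefficients of 2x^2 + 2xy + y^2
theta = 0.5 * math.atan2(B, A - C)  # tilt angle, about 0.5536 rad

def rotated_coeffs(A, B, C, t):
    """Coefficients of x'^2, x'y', y'^2 after the substitution
    x = x' cos t - y' sin t, y = x' sin t + y' cos t."""
    c, s = math.cos(t), math.sin(t)
    A2 = A * c * c + B * c * s + C * s * s
    B2 = 2 * (C - A) * c * s + B * (c * c - s * s)
    C2 = A * s * s - B * c * s + C * c * c
    return A2, B2, C2

A2, B2, C2 = rotated_coeffs(A, B, C, theta)   # B2, the cross term, vanishes
```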
|
56,134 | <p>As mentioned,
I wish to read the first line of a file, and if needed, overwrite it with a new string.
The aim is to have a CSV with a list of possible elements.
Example:</p>
<pre><code>Adding the elements: 5 A, 6 B, and 7 C to a blank CSV:
A B C
5 6 7
Adding 4 A, 9 D:
A B C D
5 6 7
4 0 0 9
Adding 2 B, 7 E
A B C D E
5 6 7
4 0 0 9
0 2 0 0 7
</code></pre>
<p>Rewriting the file is okay for small files, but the files are expected to grow to sizes of hundreds of MB, so repeatedly overwriting the entire file is not practicable, since entries should preferably be written quickly. </p>
<p>How would one go about overwriting only the first line of the file? Reading the first line is possible from <a href="https://mathematica.stackexchange.com/questions/5179/how-to-read-data-file-quickly#_=_">https://mathematica.stackexchange.com/questions/5179/how-to-read-data-file-quickly#<em>=</em></a>, but what about writing?</p>
| george2079 | 2,079 | <p>Being fairly annoyed that mathematica can't do this straightforward thing..here is a solution using an external python script:</p>
<pre><code> Export["test.txt",
Join[{StringJoin[Join[{"*"}, ConstantArray[" ", {80}], {"*"}]]},
RandomInteger[100, {3, 20}], {"end of file\n"}], "Table"]
FilePrint["test.txt"]
overwrite[file_, off_, string_] :=
Run[ "python", "replacebytes.py", file, off, "'" <> string <> "'"];
firstline = "some header 1 2";
overwrite["test.txt", 2, firstline];
FilePrint["test.txt"]
firstline = firstline <> " 3";
overwrite["test.txt", 2, firstline];
FilePrint["test.txt"]
</code></pre>
<p>Where the python script "replacebytes.py" is just this:</p>
<pre><code> # usage python replacebytes.py file offset string
# ovewrites file with string beginning at offset
import sys
if len(sys.argv) != 4 : sys.exit()
f=open(sys.argv[1],'r+')
f.seek(int(sys.argv[2]))
f.write(sys.argv[3])
f.close()
</code></pre>
<p><img src="https://i.stack.imgur.com/e6EuV.png" alt="enter image description here"></p>
<p>Note this will happily overwrite your data if <code>firstline</code> exceeds the length of the blank padded first line. (note also the asterisks are in there just so we can see whats going on, drop those in practice )</p>
|
963,503 | <p>Vectors $a$, $b$ and $c$ all have length one. $a + b + c = 0$. Show that
$$
|a-c| = |a-b| = |b-c|
$$
I am not sure how to get started, as writing out the norms didn't help and there is no way to manipulate
$$
|a-c| \le |a-b| + |b-c|
$$
to get an equality. I just need an idea of where to start.</p>
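<p>As a starting point it can help to see the statement numerically. The Python sketch below (a sanity check, not a proof, and not part of the question) uses the fact that three unit vectors summing to zero can be realized at mutual <span class="math-container">$120^\circ$</span> angles in the plane, and confirms all three pairwise distances equal <span class="math-container">$\sqrt3$</span>:</p>

```python
import math
import random

def unit_triple(phi):
    """Three unit vectors at mutual 120 degree angles, starting at angle phi."""
    return [(math.cos(phi + k * 2 * math.pi / 3),
             math.sin(phi + k * 2 * math.pi / 3)) for k in range(3)]

def dist(u, v):
    return math.hypot(u[0] - v[0], u[1] - v[1])

random.seed(0)                     # arbitrary; any phi works
phi = random.uniform(0, 2 * math.pi)
a, b, c = unit_triple(phi)
total = (a[0] + b[0] + c[0], a[1] + b[1] + c[1])        # should be (0, 0)
d_ab, d_bc, d_ac = dist(a, b), dist(b, c), dist(a, c)   # all equal sqrt(3)
```

<p>This suggests the algebraic route: from <span class="math-container">$a+b+c=0$</span> and unit lengths, the pairwise dot products are forced, which pins down the pairwise distances.</p>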
| please delete me | 168,166 | <p>You can try to prove the contrapositive of that implication. Start by assuming that $Z$ is not contained in $X$.</p>
|
4,280,424 | <p>The PDE:
<span class="math-container">$$\frac1D C_t-Q=\frac2rC_r+C_{rr}$$</span></p>
<p>on the domain <span class="math-container">$r \in [0,\bar{R}]$</span> and <span class="math-container">$t \in [0,+\infty)$</span>, where <span class="math-container">$D$</span> and <span class="math-container">$Q$</span> are real constants. We're looking for a function <span class="math-container">$C(r,t)$</span>.</p>
<p>The BC:
<span class="math-container">$$C(0,t)=f(t)$$</span>
<span class="math-container">$$C_r(\bar{R},t)=0$$</span>
The IC:
<span class="math-container">$$C(r,0)=C_0$$</span>
If <span class="math-container">$f(t)=0$</span> then I know the solution. Assume:
<span class="math-container">$$C(r,t)=C_E(r)+v(r,t)$$</span>
<span class="math-container">$$-Q=\frac2rC_r+C_{rr}$$</span>
<span class="math-container">$$-Q=\frac2rC_E'(r)+C_E''(r)$$</span>
<span class="math-container">$$rC_E''+2C_E'+Qr=0$$</span>
where <span class="math-container">$C_E(r)$</span> is the steady-state solution (<span class="math-container">$t \to \infty$</span>).
<span class="math-container">$$C_E(r)=-\frac{Qr^2}{6}+\frac{c_1}{r}+c_2$$</span>
<span class="math-container">$$\text{because } C_E(r) \text{ must stay bounded as } r\to 0, \text{ we need } c_1=0$$</span>
But since as <span class="math-container">$f(t) \neq 0$</span>, <span class="math-container">$c_2$</span> cannot be determined.</p>
<p>All help will be appreciated.</p>
<hr>
<p><strong>Edit.</strong> In the case of <span class="math-container">$f(t)=0$</span> the solution, summarised, becomes:</p>
<p><span class="math-container">$$c_2=0$$</span>
<span class="math-container">$$C_E(r)=-\frac{Qr^2}{6}$$</span>
<span class="math-container">$$C(r,t)=-\frac{Qr^2}{6}+v(r,t)$$</span>
Compute partial derivatives:
<span class="math-container">$$C_t=v_t$$</span>
<span class="math-container">$$C_r=-\frac{Qr}{3}+v_r$$</span>
<span class="math-container">$$C_{rr}=-\frac{Q}{3}+v_{rr}$$</span>
Inserting in the PDE then gives the homogeneous PDE in <span class="math-container">$v(r,t)$</span>:
<span class="math-container">$$\frac1D v_t=\frac2r v_r+v_{rr}$$</span>
Ansatz: <span class="math-container">$v(r,t)=R(r)T(t)$</span>, then separation of variables yields the ODE solutions, with <span class="math-container">$-m^2$</span> a separation constant:
<span class="math-container">$$T(t)=c_3\exp(-m^2 D t)$$</span>
<span class="math-container">$$R(r)=c_4\frac{\sin mr}{r}$$</span>
BCs:
<span class="math-container">$$R(0)=0$$</span>
<span class="math-container">$$R'(\bar{R})=0$$</span>
<span class="math-container">$$R'=c_4\frac{mr\cos mr-\sin mr}{r^2}$$</span>
<span class="math-container">$$R'(\bar{R})=c_4\frac{m\bar{R}\cos m\bar{R}-\sin m\bar{R}}{\bar{R}^2}=0$$</span>
The eigenvalues <span class="math-container">$m_i$</span> are the solutions to the transcendental equation:
<span class="math-container">$$m_i\bar{R}=\tan m_i\bar{R}$$</span>
So we have:
<span class="math-container">$$v(r,t)=\sum_{i=1}^\infty A_i\exp(-m_i^2 D t)\frac{\sin m_ir}{r}$$</span>
Determine the <span class="math-container">$A_i$</span> the usual way with the IC and the Fourier series.</p>
<p>So we have:</p>
<p><span class="math-container">$$C(r,t)=-\frac{Qr^2}{6}+\sum_{i=1}^\infty A_i\exp(-m_i^2 D t)\frac{\sin m_ir}{r}$$</span></p>
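<p>The eigenvalues <span class="math-container">$m_i$</span> above must be found numerically. A Python sketch (not part of the original question; it assumes <span class="math-container">$\bar R=1$</span> for the demo and uses plain bisection on the equivalent continuous equation <span class="math-container">$\sin x - x\cos x=0$</span>, which has the same roots as <span class="math-container">$\tan x = x$</span> but avoids the poles of <span class="math-container">$\tan$</span>):</p>

```python
import math

def g(x):
    """sin(x) - x cos(x): continuous, same positive roots as tan(x) = x."""
    return math.sin(x) - x * math.cos(x)

def eigenvalues(R_bar, count):
    """First `count` roots m of m*R_bar = tan(m*R_bar); g changes sign exactly
    once on each interval (k pi, (k+1) pi), k >= 1, since g'(x) = x sin(x)."""
    roots = []
    for k in range(1, count + 1):
        lo, hi = k * math.pi, (k + 1) * math.pi
        for _ in range(100):          # plain bisection
            mid = 0.5 * (lo + hi)
            if g(lo) * g(mid) <= 0:
                hi = mid
            else:
                lo = mid
        roots.append(0.5 * (lo + hi) / R_bar)   # x = m * R_bar
    return roots

ms = eigenvalues(1.0, 3)   # R_bar = 1 assumed for this demo
```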
| José Carlos Santos | 446,262 | <p>Yes, <span class="math-container">$\overline A=\Bbb R$</span>, since <span class="math-container">$A$</span> is not closed, from which it follows that the only closed subset of <span class="math-container">$\Bbb R$</span> which contains <span class="math-container">$A$</span> is <span class="math-container">$\Bbb R$</span> itself.</p>
|
4,393,193 | <p>I am looking at the function
<span class="math-container">$$f(x) = \begin{cases}
\dfrac{x^2-1}{x^2-x} & x \ne 0,1\\
0 & x=0\\
2 &x=1
\end{cases}$$</span>
and am trying to show that <span class="math-container">$\lim_{x \to 0} f(x)$</span> DNE. This makes sense to me because <span class="math-container">$f$</span> goes towards <span class="math-container">$+\infty$</span> from the right and <span class="math-container">$-\infty$</span> from the left. However, in my analysis class we do not have a definition for when a limit does not exist. My first instinct was to negate the definition of when a limit exists, but that only shows that one particular number is not the limit, and I want to prove that no number is the limit.</p>
<p>I don't understand why there needs to be cases for this type of problem as detailed <a href="https://math.libretexts.org/Courses/Monroe_Community_College/MTH_210_Calculus_I_(Professor_Dean)/Chapter_2_Limits/2.7%3A_The_Precise_Definition_of_a_Limit" rel="nofollow noreferrer">here</a>.</p>
<p>How can I go about proving this rigorously?</p>
| David G. Stork | 210,401 | <p>The general equation of an ellipse (centered on the origin) with principal axes of length <span class="math-container">$a$</span> and <span class="math-container">$b$</span> rotated by angle <span class="math-container">$\theta$</span> is:</p>
<p><span class="math-container">$$\frac{(x \cos \theta + y \sin \theta)^2}{a^2} + \frac{(x \sin \theta - y \cos \theta)^2}{b^2} = 1$$</span></p>
<p>If you want to displace the center, replace <span class="math-container">$x$</span> and <span class="math-container">$y$</span> by <span class="math-container">$(x - x_0)$</span> and <span class="math-container">$(y - y_0)$</span>, respectively.</p>
<p>Note here: "axes" refers to the <em>axes of the ellipse</em>---<em>NOT</em> (necessarily) the <span class="math-container">$x$</span> and <span class="math-container">$y$</span> axes of a coordinate system. In fact, the two foci (which can be anywhere) <em>define</em> the major axis of the ellipse, so of <em>course</em> the foci <em>must</em> be on the same axis!</p>
|
2,329,730 | <p>Let $G$, an algebraic group, act morphically on the affine variety $X$.</p>
<p>Then we can also have $G$ act on the affine algebra $K[X]$ as follows:
$$\tau_x(f(y))=f(x^{-1}\cdot y),\qquad (x\in G, y\in X)$$</p>
<p>Then $\tau:G\to GL(K[X]),\quad \tau:x\mapsto \tau_x$.</p>
<p>Humphreys says that the reason that the inverse appears, is so that $\tau$ is a group homomorphism. But to me it seems that without the inverse it would be a group homomorphism, and with it, it isn't even a group homomorphism.</p>
<p>$\tau_{xy}(f(z))=f((xy)^{-1}\cdot z)=f(y^{-1}\cdot x^{-1}\cdot z)$ and $\tau_x\tau_yf(z)=\tau_xf(y^{-1}\cdot z)=f(x^{-1}\cdot y^{-1}\cdot z),$
so these seem to fail to be a homomorphism, where it is clear to see that without the inverse, it would be a group homomorphism.</p>
<p>What's the deal?</p>
| Dr. Sonnhard Graubner | 175,066 | <p>If I understand correctly, we get
$$x=\frac{b}{27}$$ and $$\frac{b^2}{27}=1024.$$ Is that what you meant?</p>
|
2,329,730 | <p>Let $G$, an algebraic group, act morphically on the affine variety $X$.</p>
<p>Then we can also have $G$ act on the affine algebra $K[X]$ as follows:
$$\tau_x(f(y))=f(x^{-1}\cdot y),\qquad (x\in G, y\in X)$$</p>
<p>Then $\tau:G\to GL(K[X]),\quad \tau:x\mapsto \tau_x$.</p>
<p>Humphreys says that the reason that the inverse appears, is so that $\tau$ is a group homomorphism. But to me it seems that without the inverse it would be a group homomorphism, and with it, it isn't even a group homomorphism.</p>
<p>$\tau_{xy}(f(z))=f((xy)^{-1}\cdot z)=f(y^{-1}\cdot x^{-1}\cdot z)$ and $\tau_x\tau_yf(z)=\tau_xf(y^{-1}\cdot z)=f(x^{-1}\cdot y^{-1}\cdot z),$
so these seem to fail to be a homomorphism, where it is clear to see that without the inverse, it would be a group homomorphism.</p>
<p>What's the deal?</p>
| TStancek | 109,322 | <p>Since $ax=b$, we have $bx=(ax)\cdot x$, so you get the quadratic equation $ax^2=c$, or $ax^2-c=0$. Solve it and you will find your solutions.</p>
|
3,965,834 | <p>Does this sum converge or diverge?</p>
<p><span class="math-container">$$ \sum_{n=0}^{\infty}\frac{\sin(n)\cdot(n^2+3)}{2^n} $$</span></p>
<p>To solve this I would use <span class="math-container">$$ \sin(z) = \sum \limits_{n=0}^{\infty}(-1)^n\frac{z^{2n+1}}{(2n+1)!} $$</span></p>
<p>and make it to <span class="math-container">$$\sum \limits_{n=0}^{\infty}\sin(n)\cdot\frac{(n^2+3)}{2^n} = \sum \limits_{n=0}^{\infty}(-1)^n\frac{n^{2n+1}}{(2n+1)!} \cdot \sum \limits_{n=0}^{\infty}\frac{(n^2+3)}{2^n} $$</span></p>
<p>and since <span class="math-container">$$\sum \limits_{n=0}^{\infty}\frac{(n^2+3)}{2^n} \text{ and } \sum \limits_{n=0}^{\infty}(-1)^n\frac{n^{2n+1}}{(2n+1)!} $$</span></p>
<p>converges <span class="math-container">$$ \sum \limits_{n=0}^{\infty}\frac{\sin(n)\cdot(n^2+3)}{2^n} $$</span>
would also converge.</p>
<p>Is my assumption true? I'm also a bit scared to use it since I've got the sin(z) equation from a source outside the stuff that my professor gave us</p>
| Community | -1 | <p>Clearly,</p>
<p><span class="math-container">$$\left|\sum_{n=0}^\infty\frac{\sin(n)(n^2+3)}{2^n}\right|<\sum_{n=0}^\infty\frac{(n^2+3)}{2^n}.$$</span></p>
<p>Then by the ratio test, for <span class="math-container">$n\ge2$</span>,</p>
<p><span class="math-container">$$\frac12\frac{(n+1)^2+3}{n^2+3}\le\frac67$$</span> and the series converges.</p>
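<p>A numerical check (mine, not part of the answer) of both the bound and the convergence, comparing partial sums in Python:</p>

```python
import math

def partial_sum(N):
    """Partial sum of sin(n)(n^2 + 3)/2^n for n = 0..N."""
    return sum(math.sin(n) * (n * n + 3) / 2.0 ** n for n in range(N + 1))

s30, s60, s90 = partial_sum(30), partial_sum(60), partial_sum(90)

# Majorant: sum n^2/2^n = 6 and sum 3/2^n = 6, so sum (n^2+3)/2^n = 12.
majorant = sum((n * n + 3) / 2.0 ** n for n in range(200))
```

<p>The partial sums stabilise quickly, as the geometric decay of the majorant suggests.</p>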
|
31,158 | <p>To generate 3D mesh <a href="http://reference.wolfram.com/mathematica/TetGenLink/tutorial/UsingTetGenLink.html#167310445" rel="nofollow noreferrer">TetGen</a> can be easily used. Are there similar functions (or a way to use TetGen) to generate 2d mesh? I know that such functionality can be <a href="https://mathematica.stackexchange.com/questions/22244/creating-a-2d-meshing-algorithm-in-mathematica">easily implemented</a> but I would like to use a Mathematica provided function, as I need to experiment with number of nodes in elements and so on. I just want to solve PDE using FEM not really to play around with mesh generation.</p>
| Misery | 742 | <p>Since Mathematica 10.3, the ToElementMesh[] function can be used, along with the FEM solver. For details see <a href="https://reference.wolfram.com/language/FEMDocumentation/tutorial/ElementMeshCreation.html" rel="noreferrer">this link</a>.</p>
|
1,130,487 | <p>Jessica is playing a game where there are 4 blue markers and 6 red markers in a box. She is going to pick 3 markers without replacement.
If she picks all 3 red markers, she will win a total of 500 dollars. If the first marker she picks is red but not all 3 markers are red, she will win a total of 100 dollars. Under any other outcome, she will win 0 dollars. </p>
<p><strong>Solution</strong>
The probability of Jessica picking 3 consecutive red markers is: $\left(\frac16\right)$</p>
<p>The probability of Jessica's first marker being red, but not picking 3 consecutive red markers is:<br/>$\left(\frac35\right)-\left(\frac16\right)=\left(\frac{13}{30}\right)$
<br/>
So I am a bit stuck here.<br/></p>
<p><strong>What I think:</strong> it shouldn't be that complex; it should be as simple as
the chance of Jessica's first marker being red = the chance of getting red one time,
i.e. P(first marker being red) = $\left(\frac{6}{10}\right)$.
Can anyone explain to me why the solution calls $\left(\frac{13}{30}\right)$ the probability of Jessica's first marker being red?</p>
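<p>The exact values can be confirmed by brute-force enumeration of all ordered draws (a quick sketch): <span class="math-container">$3/5$</span> is indeed the chance the first marker is red, while <span class="math-container">$13/30$</span> is the chance the first is red but not all three are.</p>

```python
from itertools import permutations
from fractions import Fraction

box = ['R'] * 6 + ['B'] * 4               # 6 red, 4 blue markers
draws = list(permutations(range(10), 3))  # all ordered draws of 3 distinct markers
total = len(draws)                        # 10*9*8 = 720

all_red = sum(1 for d in draws if all(box[i] == 'R' for i in d))
first_red = sum(1 for d in draws if box[d[0]] == 'R')

p_all_red = Fraction(all_red, total)        # 1/6
p_first_red = Fraction(first_red, total)    # 3/5
p_first_not_all = p_first_red - p_all_red   # 13/30
print(p_all_red, p_first_red, p_first_not_all)
```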
| mrf | 19,440 | <p>$\ln(2x+2) = \ln 2 + \ln(x+1)$ (assuming $x > -1$). Antiderivatives are only determined up to an additive constant.</p>
|
405,783 | <p>I saw the following in my lecture notes, and I am having difficulties
verifying the steps taken.</p>
<p>The question is:</p>
<blockquote>
<p>Assuming $0<\epsilon\ll1$ find all the roots of the polynomial
$$\epsilon^{2}x^{3}+x+1$$ which are $O(1)$ up to a precision of
$O(\epsilon^{2})$</p>
</blockquote>
<p>and the solution given was </p>
<blockquote>
<p>Assume that $x=O(1)$ and that $$x(\epsilon)=x_{0}+\epsilon
x_{1}+O(\epsilon^{2})$$ Then by setting it in the equation and letting
$\epsilon\to0$ we get $$x_{0}=-1,x_{1}=0$$</p>
<p>Hence $x(\epsilon)=-1+O(\epsilon^{2})$</p>
</blockquote>
<p>I have two questions: </p>
<ol>
<li><p>Where did we use the assumption that $x=O(1)$</p></li>
<li><p>How did they get $$x_{0}=-1,x_{1}=0 ?$$ </p></li>
</ol>
<p>When I carried out the step of substituting
it into the equation and letting $\epsilon\to0$ I got $$x_{0}+1+O(\epsilon^{2})=0$$
and so I don't know anything about $x_{1}$. </p>
<p>Should I ignore the $O(\epsilon^{2})$ term,
and from that conclude $x_{0}=-1$?</p>
| Hagen von Eitzen | 39,174 | <p>Much simpler: As $x(\epsilon)\in O(1)$, we have immediately from rewriting the cubic that
$$x(\epsilon)=-1-\epsilon^2x(\epsilon)^3\in -1+O(\epsilon^2).$$</p>
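<p>A numerical sanity check of this expansion (a sketch; Newton's method is just one convenient root-finder here): for small <span class="math-container">$\epsilon$</span> the <span class="math-container">$O(1)$</span> root sits at <span class="math-container">$-1+\epsilon^2$</span> up to <span class="math-container">$O(\epsilon^4)$</span>.</p>

```python
eps = 0.01

def f(x):
    return eps ** 2 * x ** 3 + x + 1

def fp(x):
    return 3 * eps ** 2 * x ** 2 + 1

# Newton's method starting from the leading-order root x0 = -1
x = -1.0
for _ in range(50):
    x -= f(x) / fp(x)

print(x)  # close to -1 + eps^2
```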
|
269,655 | <p>I am trying to find a nonlinear model from the data.</p>
<p><a href="https://i.stack.imgur.com/W6JEI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/W6JEI.png" alt="enter image description here" /></a></p>
<p>My code is below:</p>
<pre><code>data = {{0.0, 0.0}, {0.05, 0.87}, {0.1, 0.99}, {0.15, 0.98}, {0.2,
0.91}, {0.25, 0.81}, {0.3, 0.71}, {0.35, 0.62}, {0.4, 0.51}, {0.45,
0.31}, {0.5, 0.31}, {0.55, 0.23}, {0.6, 0.18}, {0.65, 0.14}, {0.7,
0.08}, {0.75, 0.05}, {0.8, 0.03}, {0.85, 0.02}, {0.9,
0.01}, {0.95, 0.002}, {1, 0}};
model = ((1 - x)/(1 - a))^((0.5 (1 - a))/a) (x/a)^0.5;
(* fit model*)
NonlinearModelFit[data, model, a, x]
</code></pre>
<p><strong>NonlinearModelFit</strong> doesn't work for this model, i.e.</p>
<p><a href="https://i.stack.imgur.com/lNsMy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lNsMy.png" alt="enter image description here" /></a></p>
<p>Are there any other ways to solve this problem?</p>
<p>Thanks in advance!</p>
<p>Update:</p>
<p>If I try:</p>
<pre><code>NonlinearModelFit[data, {model, {a > 0.000001}}, a, x]
</code></pre>
<p>Errors:</p>
<p><a href="https://i.stack.imgur.com/SJLzN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SJLzN.png" alt="enter image description here" /></a></p>
| Michael Seifert | 27,813 | <p>The problem seems to stem from the first and last data points, with <span class="math-container">$x = 0$</span> and <span class="math-container">$x = 1$</span>. My guess is that it has to do with <span class="math-container">$\partial f/\partial x$</span> being singular at these points. In addition, <span class="math-container">$\partial f/\partial a$</span> is singular when <span class="math-container">$a = 0$</span> or <span class="math-container">$a = 1$</span>.</p>
<p>If you remove the offending data points, and give Mathematica an initial guess for <span class="math-container">$a$</span> that is away from the trouble spots, <code>NonlinearModelFit</code> runs without complaints & yields parameter <code>a -> 0.127671</code>.</p>
<pre><code>newdata = Most[Rest[data]]
fit = NonlinearModelFit[newdata, model, {{a, 0.5}}, x]
Show[ListPlot[newdata, PlotStyle -> Orange], Plot[fit[x], {x, 0, 1}]]
</code></pre>
<p><a href="https://i.stack.imgur.com/XIQPa.png" rel="noreferrer"><img src="https://i.stack.imgur.com/XIQPa.png" alt="enter image description here" /></a></p>
<p>Note that since the model automatically goes through the omitted data points for <span class="math-container">$0 < a < 1$</span>, omitting them shouldn't affect the quality of the fit.</p>
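<p>For readers without Mathematica: since the model has a single parameter, essentially the same least-squares fit can be sketched in plain Python with a grid search over <span class="math-container">$a$</span> (a rough stand-in, not a substitute for <strong>NonlinearModelFit</strong>); it lands on the same <span class="math-container">$a\approx 0.1277$</span>.</p>

```python
# data with the endpoints x = 0 and x = 1 removed, as in the answer
data = [(0.05, 0.87), (0.1, 0.99), (0.15, 0.98), (0.2, 0.91), (0.25, 0.81),
        (0.3, 0.71), (0.35, 0.62), (0.4, 0.51), (0.45, 0.31), (0.5, 0.31),
        (0.55, 0.23), (0.6, 0.18), (0.65, 0.14), (0.7, 0.08), (0.75, 0.05),
        (0.8, 0.03), (0.85, 0.02), (0.9, 0.01), (0.95, 0.002)]

def model(x, a):
    return ((1 - x) / (1 - a)) ** (0.5 * (1 - a) / a) * (x / a) ** 0.5

def sse(a):
    # sum of squared residuals for a given parameter value
    return sum((model(x, a) - y) ** 2 for x, y in data)

# one-parameter least squares: grid search over a in (0.01, 0.99)
best_a = min((i / 10000 for i in range(100, 9900)), key=sse)
print(best_a, sse(best_a))
```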
|
386,172 | <p>The expression was simplified in the answer to <a href="https://math.stackexchange.com/questions/384592/finding-markov-chain-transition-matrix-using-mathematical-induction">this question</a>. I'm trying to simplify it but I got stuck. Multiplying all the factors and regrouping didn't work, but maybe I'm doing the wrong regrouping.</p>
<p>Also, I don't understand why the $(2p-1)^n$ dropped the exponent in the second line of the same solution.</p>
| xisk | 73,012 | <p>Start with: $(p)(\frac{1}{2}(2p-1)^n) + (1-p)(-\frac{1}{2}(2p-1)^n) = x$<br>
(I'm setting it equal to $x$ so it's easier to follow.) </p>
<p>$\therefore (p)(2p-1)^{n} + (1-p)(-1)(2p-1)^{n} = 2x$<br>
Factor the $(2p-1)^{n}$:<br>
$\therefore (2p-1)^{n} \cdot (p + (1-p)(-1)) = 2x$<br>
$\therefore (2p-1)^{n} \cdot (p + (p-1)) = 2x$<br>
$\therefore (2p-1)^{n} \cdot (2p-1) = 2x$<br>
By law of exponent addition:<br>
$(2p-1)^{n} \cdot (2p-1)^{1} = 2x$<br>
$(2p-1)^{n+1} = 2x$<br>
$\therefore \frac{1}{2} \cdot (2p-1)^{n+1} = x$</p>
<p>And there we are at the solution!<br>
$ = \frac{1}{2} \cdot (2p-1)^{n+1}$</p>
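<p>The algebra above is easy to spot-check numerically (a small sketch comparing both sides on a grid of values):</p>

```python
def lhs(p, n):
    # p*(1/2)(2p-1)^n + (1-p)*(-1/2)(2p-1)^n
    return p * (0.5 * (2 * p - 1) ** n) + (1 - p) * (-0.5 * (2 * p - 1) ** n)

def rhs(p, n):
    # (1/2)(2p-1)^(n+1)
    return 0.5 * (2 * p - 1) ** (n + 1)

diffs = [abs(lhs(p / 10, n) - rhs(p / 10, n)) for p in range(11) for n in range(6)]
print(max(diffs))  # essentially zero
```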
|
3,652,102 | <p>Let <span class="math-container">$(P,\le)$</span> be the poset.<br>
I have begun to solve this in the following way-
Note that, <span class="math-container">$rs-r\le rs-s\iff r\ge s$</span><br>
So, without loss of generality assume that <span class="math-container">$r\ge s$</span>, then <span class="math-container">$\operatorname{min}(rs-r,rs-s)=r(s-1)$</span><br>
As per the question <span class="math-container">$P$</span> has elements <span class="math-container">$\ge r(s-1)$</span>.<br>
So, let number of elements of <span class="math-container">$P$</span> is <span class="math-container">$r(s-1)+n$</span> where <span class="math-container">$n\in\Bbb{N}$</span>.<br>
Let us assume on contrary, <span class="math-container">$P$</span> neither has an anti-chain of size <span class="math-container">$r$</span> nor a chain of size <span class="math-container">$s$</span><br>
i.e. for any <span class="math-container">$A$</span> of <span class="math-container">$P$</span> with <span class="math-container">$r$</span> elements, <span class="math-container">$\exists a,b\in A$</span> such that either <span class="math-container">$a\le b$</span> or <span class="math-container">$b\le a$</span>.<br>
And for any <span class="math-container">$C$</span> of <span class="math-container">$P$</span> with <span class="math-container">$s$</span> elements, <span class="math-container">$\exists x,y\in A$</span> such that neither <span class="math-container">$x\le y$</span> nor <span class="math-container">$y\le x$</span>.<br>
Now, I cannot use the number of elements of <span class="math-container">$P$</span> to get a contradiction from the above assumption.<br>
Can anybody help me with this? Thanks for assistance in advance.</p>
| Brian M. Scott | 12,042 | <p>HINT: Use <a href="https://en.wikipedia.org/wiki/Dilworth%27s_theorem" rel="nofollow noreferrer">Dilworth’s decomposition theorem</a>. I’ve finished the argument in the spoiler-protected block below.</p>
<blockquote class="spoiler">
<p> Let <span class="math-container">$A$</span> be an antichain in <span class="math-container">$P$</span> of maximum size; say <span class="math-container">$|A|=m<r$</span>. By Dilworth’s theorem <span class="math-container">$P$</span> has a decomposition into <span class="math-container">$m$</span> disjoint chains, each of which has at most <span class="math-container">$s-1$</span> elements, so <span class="math-container">$|P|\le m(s-1)\le(r-1)(s-1)<r(s-1)$</span>, contradicting the assumption that <span class="math-container">$|P|>r(s-1)$</span>.</p>
</blockquote>
|
802,848 | <p>I am reading this book, <em>Gödel's Proof</em>, by James R. Newman, at location 117 (Kindle), it says,</p>
<blockquote>
<p>For <strong>various reasons</strong>, this axiom, (through a point outside a given line only one parallel to the line can be drawn), did not appear "self-evident" to the ancients.</p>
</blockquote>
<p>Any idea what the <strong>various reasons</strong> might be? It's self-evident enough to me.</p>
<p><strong>Edit</strong></p>
<p>Sorry, my bad, right after the sentence, (the above quote), there is a footnote, says that:</p>
<blockquote>
<p>The chief reason for this alleged lack of self-evidence seems to have been the fact that the parallel axiom makes an assertion about infinitely remote regions of space. Euclid defines parallel lines as straight lines in a plane that, "being produced indefinitely in both directions," do not meet. Accordingly, to say that two lines are parallel is to make the claim that the two lines will not meet even "at infinity." But the ancients were familiar with lines that, though they do not intersect each other in any finite region of the plane, do meet "at infinity." Such lines are said to be "asymptotic." Thus, a hyperbola is asymptotic to its axes. It was therefore not intuitively evident to the ancient geometers that from a point outside a given straight line only one straight line can be drawn that will not meet the given line even at infinity.</p>
</blockquote>
| mau | 89 | <p>Actually the standard formulation of the Fifth Postulate does not (of course) involve infinity: Euclid says that two lines which cross a third one will eventually meet on the side where the angles made with the third one add to less than two right angles.</p>
<p>Ancient Greeks were uneasy with it because they thought that it could be derived from the other postulates. Euclid's formulation, by the way, is really awkward, since an equivalent statement is "the angles of a triangle sum to two right angles"; I often wonder if he chose the other one so as to warn future mathematicians.</p>
|
4,400,261 | <p>If <span class="math-container">$f$</span> and <span class="math-container">$g$</span> are <a href="https://en.wikipedia.org/wiki/Multivariate_normal_distribution#Standard_normal_random_vector" rel="nofollow noreferrer">bivariate normal PDFs</a> having correlation coefficients <span class="math-container">$ρ_f$</span> and <span class="math-container">$ρ_g$</span> respectively, what is the correlation coefficient of the bivariate normal distribution <span class="math-container">$h=f*g$</span>, where <span class="math-container">$*$</span> denotes the convolution operator? I've tried searching for the answer but come up dry.</p>
| Leander Tilsted Kristensen | 631,468 | <p>I don't think that you can determine the correlation coefficient without also knowing the variances, but you should be able to determine the new covariance matrix given the covariance matrices for <span class="math-container">$f$</span> and <span class="math-container">$g$</span>. It can be shown that the distribution with pdf <span class="math-container">$h=f*g$</span> corresponds to the distribution of <span class="math-container">$Z=X+Y$</span>, where <span class="math-container">$X,Y$</span> are independent and have the respective densities <span class="math-container">$f$</span> and <span class="math-container">$g$</span>. In particular if <span class="math-container">$X,Y$</span> are independent with <span class="math-container">$X\sim N(\mu_X,\Sigma_X)$</span> and <span class="math-container">$Y\sim N(\mu_Y,\Sigma_Y)$</span>, then
<span class="math-container">$$X+Y \sim N_2(\mu_X+\mu_Y,\Sigma_X + \Sigma_Y),$$</span>
and the correlation coefficient can then be determined from the matrix <span class="math-container">$\Sigma_X + \Sigma_Y$</span> as
<span class="math-container">$$\rho_Z = \frac{\rho_X \sigma_{X_1} \sigma_{X_2}+\rho_Y \sigma_{Y_1}\sigma_{Y_2}}
{\sqrt{\sigma_{X_1}^2+\sigma_{Y_1}^2} \sqrt{\sigma_{X_2}^2 + \sigma_{Y_2}^2 }},$$</span>
where
<span class="math-container">$$\Sigma_X = \begin{pmatrix} \sigma_{X_1}^2 & \rho_X \sigma_{X_1} \sigma_{X_2} \\
\rho_X \sigma_{X_1} \sigma_{X_2} & \sigma_{X_2}^2\end{pmatrix}
\quad \text{ and } \quad
\Sigma_Y = \begin{pmatrix} \sigma_{Y_1}^2 & \rho_Y \sigma_{Y_1} \sigma_{Y_2} \\
\rho_Y \sigma_{Y_1} \sigma_{Y_2} & \sigma_{Y_2}^2\end{pmatrix}
$$</span></p>
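<p>A Monte Carlo sketch of this (the variances and correlations below are made up for illustration; only the final formula comes from the derivation above):</p>

```python
import math
import random

random.seed(0)

def bvn(s1, s2, rho):
    # one sample from a centered bivariate normal with given sigmas and correlation
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    return s1 * z1, s2 * (rho * z1 + math.sqrt(1 - rho ** 2) * z2)

sx1, sx2, rx = 1.0, 2.0, 0.5   # parameters of f (made up)
sy1, sy2, ry = 1.5, 1.0, -0.3  # parameters of g (made up)

N = 100_000
zs = []
for _ in range(N):
    x1, x2 = bvn(sx1, sx2, rx)
    y1, y2 = bvn(sy1, sy2, ry)
    zs.append((x1 + y1, x2 + y2))

m1 = sum(a for a, _ in zs) / N
m2 = sum(b for _, b in zs) / N
v1 = sum((a - m1) ** 2 for a, _ in zs) / N
v2 = sum((b - m2) ** 2 for _, b in zs) / N
cov = sum((a - m1) * (b - m2) for a, b in zs) / N
rho_emp = cov / math.sqrt(v1 * v2)

# closed-form correlation of the convolution, from the answer
rho_thy = (rx * sx1 * sx2 + ry * sy1 * sy2) / math.sqrt(
    (sx1 ** 2 + sy1 ** 2) * (sx2 ** 2 + sy2 ** 2))
print(rho_emp, rho_thy)
```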
|
591,765 | <blockquote>
<p>What is the way to convince myself that $\left\langle(1,2),\ (1,2,3,4)\right\rangle=S_4$ but $\left\langle(1,3),\ (1,2,3,4)\right\rangle\ne S_4$?</p>
</blockquote>
<p>Let $\sigma$ be any transposition and $\tau$ be any $p-$cycle, where $p$ is a prime.
Then show that $S_p=\langle\sigma,\tau\rangle$.</p>
| Betty Mock | 89,003 | <p>Not sure how to "convince" you, but I suspect you are worried because some of the 2-cycles do the job and others do not. The trick is in picking the right 4-cycle to go with your 2-cycle or vice versa. (1,2) works with (1,2,3,4) because 1 and 2 are adjacent in (1,2,3,4) whereas 1 and 3 are not. But (1,3) will work with (1,3,2,4).</p>
<p>For more info: </p>
<p><a href="http://en.wikipedia.org/wiki/Symmetric_group#Generators_and_relations" rel="nofollow">http://en.wikipedia.org/wiki/Symmetric_group#Generators_and_relations</a> In particular it says the following is a generating set:
"a set containing any n-cycle and a 2-cycle of adjacent elements in the n-cycle."</p>
|
1,749,128 | <p>How do I compute the Taylor series of $f(x)=\frac{1}{1-x}$ about $a=3$? It should be connected with the geometric series. After setting $t=x-3,\ x=t+3$, I don't know how to continue; could someone clarify the procedure?</p>
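<p>One way to finish the substitution (a sketch following the geometric-series hint): with $t=x-3$, $$\frac{1}{1-x}=\frac{1}{-2-t}=-\frac12\cdot\frac{1}{1+t/2}=\sum_{n=0}^\infty\frac{(-1)^{n+1}(x-3)^n}{2^{n+1}},\qquad |x-3|<2.$$ A quick numerical check of this expansion:</p>

```python
def partial_sum(x, N):
    # truncated series sum_{n<N} (-1)^(n+1) (x-3)^n / 2^(n+1)
    t = x - 3
    return sum((-1) ** (n + 1) * t ** n / 2 ** (n + 1) for n in range(N))

x = 3.1
exact = 1 / (1 - x)
approx = partial_sum(x, 20)
print(exact, approx)
```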
| BrianO | 277,043 | <p>Ultimately your question, as expressed in your <strong>Edit</strong>, is a philosophical one, not a mathematical one. If you're a <em>formalist</em>, then no, the objects of consideration have no independent existence, and, visualize what we may, only the formal systems, marks on paper and screens, really exist. If you're a <em>Platonist</em>, however, mathematical objects actually exist in the same sense that tables and chairs do, and axiom systems merely characterize classes of such abstract entities.</p>
<p>It's a question of which comes first, the theory or the (alleged) objects: do the axioms of, say, group theory "bring groups into existence" (existence at least in the minds of mathematicians)? or do they and did they exist independently, and the group axioms merely capture group-ness? Were groups created or discovered? </p>
<p>There's no <em>theorem</em> that answers these questions, and there's no "right" answer; it's more a matter of disposition and belief.</p>
|
3,423,225 | <p>I know that the angle <span class="math-container">$\theta$</span> of a right-angled triangle, centered at the origin, is defined as the radian measure of its intersection point with the unit circle, and that <span class="math-container">$\cos(\theta)$</span> and <span class="math-container">$\sin(\theta)$</span> are defined to be the x- and y-coordinates of that intersection point:</p>
<p><a href="https://i.stack.imgur.com/axHKG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/axHKG.png" alt="enter image description here"></a></p>
<p>However, for larger right-angled triangles, we compute <span class="math-container">$\cos(\theta)=A/C$</span> and <span class="math-container">$\sin(\theta)=B/C$</span> by dividing the respective side length with the hypotenuse, respectively. But, for this to work, right-angled triangles with a common angle <span class="math-container">$\theta$</span> must have proportional side lengths.</p>
<p>How do I see that right-angled triangles with a common angle <span class="math-container">$\theta$</span> have proportional side lengths? </p>
| suhbell | 592,879 | <p>Triangles that have the same angle measurements are similar. Since both of these triangles are right triangles, and both share a common angle, they have two angles that are the same. Because the angles of a triangle sum to two right angles, if two angles agree then the third must agree as well. So right triangles that share a common angle (not counting the right angle) are similar, meaning that they have proportional side lengths.</p>
|
2,882,696 | <p>$a,b,x$ are elements of a group .</p>
<p>$x$ is the inverse of $a$.</p>
<p>Here is my attempt to prove it :-</p>
<p>$a\cdot b = e$</p>
<p>$x\cdot (a\cdot b) = x\cdot e$</p>
<p>$(x\cdot a)\cdot b = x$</p>
<p>$e\cdot b = x$</p>
<p>$b = x$</p>
<p>Are my steps correct?
What I wanted to prove is that if $ab = e$, then $ba = e$</p>
| Cornman | 439,383 | <p>No, it does not imply that $x=a$. It implies that $x=b$. Maybe a typo?</p>
<p>We have $ab=e$ since $x=a^{-1}$ we get after multiplying both sides with $a^{-1}$:</p>
<p>$a^{-1}ab=a^{-1}e\Leftrightarrow eb=a^{-1}\Leftrightarrow b=a^{-1}=x$</p>
|
2,882,696 | <p>$a,b,x$ are elements of a group .</p>
<p>$x$ is the inverse of $a$.</p>
<p>Here is my attempt to prove it :-</p>
<p>$a\cdot b = e$</p>
<p>$x\cdot (a\cdot b) = x\cdot e$</p>
<p>$(x\cdot a)\cdot b = x$</p>
<p>$e\cdot b = x$</p>
<p>$b = x$</p>
<p>Are my steps correct?
What I wanted to prove is that if $ab = e$, then $ba = e$</p>
| Robert Lewis | 67,071 | <p>This is pretty standard, basic and elementary stuff; the kind of stuff one usually sees in the first few pages of a textbook on group theory; but essential stuff nevertheless.</p>
<p>Our OP neraj's proof that $x = b$ is, of course, unarguably flawless. Lauds.</p>
<p>If we wish to see that</p>
<p>$ab = e \Longrightarrow ba = e \tag 1$</p>
<p>under the given hypotheses of the question, the simplest thing we can do is exploit the given that $x$ is the inverse of $a$, which by definition means that</p>
<p>$ax = xa = e; \tag 2$</p>
<p>of course we often write</p>
<p>$x = a^{-1} \tag 3$</p>
<p>under such circumstances; in any event, if</p>
<p>$ab = e, \tag 4$</p>
<p>then</p>
<p>$(ab)a = ea = a; \tag 5$</p>
<p>thus</p>
<p>$a(ba) = a, \tag 6$</p>
<p>whence, by (2),</p>
<p>$ba = e(ba) = (xa)(ba) = x((ab)a) = x(ea) = xa = e. \tag 7$</p>
|
3,056,121 | <p>I'm trying to find a function with infinitely many local minimum points where x <span class="math-container">$\in$</span> [0,1] and f has only 1 root. No interval should exist where the function is constant.</p>
| Jack D'Aurizio | 44,121 | <p><span class="math-container">$$\int_{0}^{+\infty}\frac{x^{2+\alpha}}{(1+x^2)^3}\,dx \stackrel{(*)}{=}\frac{\pi(1-\alpha^2)}{16\cos\frac{\pi \alpha}{2}} $$</span>
<span class="math-container">$(*)$</span>: we use the substitution <span class="math-container">$\frac{1}{1+x^2}=u$</span>, the Beta function and the reflection formula for the <span class="math-container">$\Gamma$</span> function.
This holds for any <span class="math-container">$\alpha$</span> such that <span class="math-container">$-3<\text{Re}(\alpha)<3$</span>, and since the RHS is an even function, the origin is a stationary point, i.e.
<span class="math-container">$$\color{red}{0}=\frac{d}{d\alpha}\left.\int_{0}^{+\infty}\frac{x^{2+\alpha}}{(1+x^2)^3}\,dx\right|_{\alpha=0}\stackrel{\text{DCT}}{=}\int_{0}^{+\infty}\frac{x^{2}\log x}{(1+x^2)^3}\,dx.$$</span></p>
|
3,056,121 | <p>I'm trying to find a function with infinitely many local minimum points where x <span class="math-container">$\in$</span> [0,1] and f has only 1 root. No interval should exist where the function is constant.</p>
| Aleksas Domarkas | 562,074 | <p><span class="math-container">$$\int_0^\infty \frac{x^2 \ln(x)}{(1+x^2)^3} {\rm d}x=
\int_0^1 \frac{x^2 \ln(x)}{(1+x^2)^3} {\rm d}x+
\int_1^\infty \frac{x^2 \ln(x)}{(1+x^2)^3} {\rm d}x
$$</span>
Then substitute <span class="math-container">$x=\frac{1}{t}$</span> in the first integral.</p>
|
1,476,946 | <p>So, I'm just starting to peruse "Categories for the Working Mathematician", and there's one thing I'm uncertain on. Lets say I have three objects, $X,Y,Z$ and two arrows $f,g$ such that $X\overset {f} {\to}Y\overset {g} {\to}Z$. Does this necessitate the composition arrow exist so the diagram commutes, i.e must I have an $X\overset {h} {\to} Z$ such that $h=g\circ f$, or is it just that IF such an arrow $h$ exists, then it commutes? </p>
<p>The question came up when the book defined preorders, saying that they were transitive since we could associate arrows...I just wanted to make sure association of arrows actually mandates the creation of the direct arrow.</p>
| Ben Sheller | 250,221 | <p>If you have arrows $f:X\to Y$ and $g:Y\to Z$, it means that $h=g\circ f:X\to Z$ exists.</p>
<p>It is definitely NOT "If an arrow $h:X\to Z$ exists...", since this would imply that it would have to be true for every arrow $h:X\to Z$.</p>
|
251,466 | <p>Let $A$, $B$ and $C$ be three points in a disk;
does $f\left(A,B,C\right)=\mbox{Area}\left(\mbox{triangle}\,ABC\right)/\mbox{Perimeter}\left(\mbox{triangle}\,ABC\right)$
attain its maximum when the points lie on
the boundary? </p>
| coffeemath | 30,316 | <p>First note that if a triangle is subjected to a homothety by factor $r>1$ then the area multiplies by $r^2$ and the perimeter by $r$, so that area/perimeter gets multiplied by $r$. This means for the triangle $ABC$ with longest side say $AB$, that we may expand and move the triangle until vertices $A,B$ are on the boundary of $D$, while increasing the ratio area/perimeter.</p>
<p>If at this point the vertex $C$ happens to lie in the smaller part of $D$ cut by $AB$, reset $C$ to its reflection through $AB$, so that $C$ now lies in the larger part of $D$ cut by $AB$.</p>
<p>Now suppose the vertex $C$ is moved so that the perimeter remains constant. This means $C$ moves on an ellipse with foci at $A,B$; this ellipse will not entirely lie in $D$,
however it is clear that $C$ may be moved until triangle $ABC$ becomes isosceles, and that during this movement the area of $ABC$ increases, since the altitude from $C$ increases. Thus the ratio area/perimeter increases at this step also.</p>
<p>Now move $C$ in the direction perpendicular to $AB$ and away from that line, until $C$ lies on the boundary of $D$. This will increase area more than perimeter: as a map it is an expansion in the direction perpendicular to $AB$ and thus multiplies area by some $k>1$, while since the sides $AC$ and $BC$ are on a slant to the perpendicular, they will each expand by a factor less than $k$. So again the ratio area/perimeter has increased.</p>
<p>We now have what is required, since we have the triangle $ABC$ with its vertices on the boundary of $D$, and during the process its ratio of area/perimeter has only increased.</p>
<p>With a little more work one can show that in fact the actual max ratio occurs when the triangle $ABC$ is equilateral, with vertices on the boundary of $D$.</p>
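<p>A numerical illustration (a sketch for the unit disk): the inscribed equilateral triangle gives the ratio exactly $1/4$, and randomly sampled triangles in the disk stay below it.</p>

```python
import math
import random

random.seed(1)

def ratio(A, B, C):
    # area/perimeter of triangle ABC (shoelace formula for the area)
    per = math.dist(A, B) + math.dist(B, C) + math.dist(C, A)
    area = abs((B[0] - A[0]) * (C[1] - A[1]) - (C[0] - A[0]) * (B[1] - A[1])) / 2
    return area / per

def point_in_disk():
    # rejection sampling of a uniform point in the unit disk
    while True:
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1:
            return (x, y)

# inscribed equilateral triangle in the unit disk
eq = [(math.cos(t), math.sin(t)) for t in (0, 2 * math.pi / 3, 4 * math.pi / 3)]
best_random = max(ratio(point_in_disk(), point_in_disk(), point_in_disk())
                  for _ in range(50_000))
print(ratio(*eq), best_random)
```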
|
68,386 | <p>I'm looking for a theorem of the form </p>
<blockquote>
<p>If $R$ is a nice ring and $v$ is a reasonable element in $R$ then Kr.Dim$(R[\frac{1}{v}])$ must be either Kr.Dim$(R)$ or Kr.Dim$(R)-1$.</p>
</blockquote>
<p>My attempts to do this purely algebraically are not working, so I started looking into methods from algebraic geometry. I thought that Grothendieck's Vanishing Theorem might help (i.e. if dim$(X)=n$ then $H^i(X,\mathcal{F})=0$ for any sheaf of abelian groups $\mathcal{F}$ and any $i>n$) but the problem is that the converse for this theorem fails, so I can't conclude anything about dimension. Perhaps this theorem could give some sort of test for when dimension drops, but I'm hoping for a better answer.</p>
<p>We'll definitely need some hypotheses. For the application I have in mind we can assume $R$ is commutative and is finitely generated over some base ring (e.g. $\mathbb{Z}_{(2)}$), but we should not assume it's an integral domain. If necessary we can assume it's Noetherian and local, but I'd rather avoid this. As for $v$, it's not in the base ring and it has only a few relations with other elements in $R$, none of which are in the base ring. If we can't get the theorem above, perhaps we can figure out something to help me get closer:</p>
<blockquote>
<p>Are there any conditions on $v$ such that the dimension would drop by more than 1 after inverting $v$?</p>
</blockquote>
<p>One thing I know: to have any hope of dimension dropping by $1$ I need to be inverting a maximal irreducible component. I'm curious as to the algebraic condition this puts on $v$. </p>
| Fernando Muro | 12,166 | <p>Thinking geometrically, take the disjoint union of a plane and a point in the 3-dimensional affine space. This has dimension 2. If you remove the plane by inverting its equation, you obtain the the point, which is 0-dimensional. </p>
<p>Algebraically, let $k$ be a field, $R = k[x,y,z]/(x^2-x,xy,xz)$, $v=x$. Then $R$ has Krull dimension $2$ and $R[1/v]=k$ is 0-dimensional. </p>
|
2,083,347 | <p>Let's consider a linear operator
$$
Lu = -\frac{1}{w(x)}\Big(\frac{d}{dx}\Big[p(x)\frac{du}{dx}\Big] + q(x)u\Big)
$$
So the Sturm-Liouville equation can be written as
$$
Lu = \lambda u
$$
Why the proper setting for this problem is the weighted Hilbert space $L^2([a,b], w(x)dx)$?</p>
| Varun Iyer | 118,690 | <p>So good job on finding your inverses:</p>
<p>$$
-\frac{x-2}{4}
$$
$$
-\frac{x-6}{5}
$$</p>
<p>Recall that the definition of a function maps $x$ values to $y$ values, or $k(x)$. Therefore, the inverse of a function $k(x)$ maps the $y$ values to the $x$ values.</p>
<p>So we must find our range of $x$ values:</p>
<p>Since $k(4) = -14$, $k^{-1}(-14) = 4$</p>
<p>So we have:</p>
<p>$$k^{-1}(x)=\begin{cases}-\dfrac{x-2}{4}, & x\le -14 \\ -\dfrac{x-6}{5}, & x>-14\end{cases}$$</p>
<p>Therefore, if we want the range of these inverse functions, we simply look at the domain of our original function $k(x)$.</p>
<p>Therefore, the range of the inverse functions are $(-\infty, 4)$ & $(4, \infty)$, respectively.</p>
|
2,785,698 | <p>Please help me go over this problem; I am a bit confused.</p>
<p>Find ${\displaystyle \frac{\mathrm d}{\mathrm dt} \int_2^{x^2}e^{x^3}\mathrm dx}$.</p>
| ThatEvilChickenNextDoor | 562,857 | <p>First off, I'm assuming that you want to evaluate this:
$$
\frac{d}{dx} \int_2^{x^2}e^{t^3}dt
$$
You want to find the derivative of the integral, so we replace the $x$ inside the integral with a dummy variable to avoid confusion. Now, let's set $f(x)=e^{x^3}$ and say that $F(x)$ is the antiderivative, so $F'(x)=f(x)=e^{x^3}$. We don't know what $F(x)$ is yet, but as it turns out it doesn't actually matter. By the second part of the Fundamental Theorem of Calculus,
$$
\int_a^bf(x)dx = F(b)-F(a)
$$
and in our case $a=2$, $b=x^2$, and $f(x)=e^{x^3}$. Now we want to find the derivative of this, so we differentiate the right hand side:
$$
\frac{d}{dx}[F(b)-F(a)]
$$
Applying the chain rule, this becomes:
$$
F'(b)\cdot b'-F'(a)\cdot a'
$$
But wait! We already defined $F'(x)=f(x)$! So this simplifies down to:
$$
f(b)\cdot b' -f(a)\cdot a'
$$
Plugging in our values, and noting that the derivative of 2 is zero, we get:
$$
\frac{d}{dx} \int_2^{x^2}e^{t^3}dt=e^{(x^2)^3}\cdot 2x=2xe^{x^6}
$$</p>
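<p>A numerical cross-check of the result (a sketch using Simpson's rule for the integral and a central difference for the derivative):</p>

```python
import math

def integrand(t):
    return math.exp(t ** 3)

def F(x, n=4000):
    # Simpson's rule for int_2^{x^2} e^{t^3} dt (the signed step h handles x^2 < 2)
    a, b = 2.0, x * x
    h = (b - a) / n
    s = integrand(a) + integrand(b) + sum(
        (4 if k % 2 else 2) * integrand(a + k * h) for k in range(1, n))
    return s * h / 3

x0, h = 1.3, 1e-5
numeric = (F(x0 + h) - F(x0 - h)) / (2 * h)  # central difference
formula = 2 * x0 * math.exp(x0 ** 6)         # the answer 2x e^{x^6} at x0
print(numeric, formula)
```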
|
4,328,117 | <p>I have this lambda expression</p>
<p><span class="math-container">$$(\lambda xyz.xy(zx)) \;1\; 2\; 3$$</span></p>
<p>or</p>
<p><span class="math-container">$$(\lambda x. (\lambda y. (\lambda z.xy(zx))))\;1\;2\;3$$</span>
<span class="math-container">$$(\lambda y. (\lambda z.1y(z1))))\;2\;3$$</span>
<span class="math-container">$$(\lambda z.12(z1))))\;3$$</span>
<span class="math-container">$$1\;2\;(3\;1)$$</span></p>
<p>What exactly is the point of the inner parentheses in the original expression -- which I dutifully carried along to the end? What's the difference between <span class="math-container">$1\;2\;(3\;1)$</span> and <span class="math-container">$1\;2\;3\;1$</span>? The outer is to distinguish between the expression and the input, I'm assuming, but the inner <span class="math-container">$\ldots(zx)\ldots$</span> isn't clear to me. Parentheses in math indicate "belonging together," but I don't understand this "belonging together." Even if I wanted to have two <span class="math-container">$x$</span> variables in the body, I'd just have them without needing parentheses. I checked out <a href="https://math.stackexchange.com/questions/50401/use-of-parenthesis-in-lambda-calculus">this</a>, but I don't think it's the same parentheses issue.</p>
| Couchy | 87,768 | <p>First of all, I should point out that there are two ways to understand the lambda calculus, it can be typed or untyped. Not all lambda terms can be typed.</p>
<hr />
<p>First observe that the term <span class="math-container">$λxyz.xy(zx)$</span> can be typed. Since it takes three arguments, let us say the type is
<span class="math-container">$$\alpha\to\beta\to\gamma\to\delta$$</span>
so that given <span class="math-container">$x:\alpha, y:\beta,z :\gamma$</span> we obtain <span class="math-container">$xy(zx):\delta$</span>.</p>
<p>Now in the body, <span class="math-container">$z$</span> is applied to <span class="math-container">$x$</span>, so this means <span class="math-container">$\gamma = \alpha\to\eta$</span>, and <span class="math-container">$zx : \eta$</span>. Similarly, <span class="math-container">$x$</span> is applied to <span class="math-container">$y$</span> and <span class="math-container">$zx$</span>, so it must be that <span class="math-container">$\alpha = \beta\to\eta\to\delta$</span>. Putting this together we get that</p>
<p><span class="math-container">$$λxyz.xy(zx) : (\beta\to\eta\to\delta)\to\beta\to((\beta\to\eta\to\delta)\to\eta)\to\delta.$$</span>
We can now check that if
<span class="math-container">$$x : \beta\to\eta\to\delta\\y:\beta\\z : (\beta\to\eta\to\delta)\to\eta$$</span>
then
<span class="math-container">$$zx: \eta\\xy:\eta\to\delta\\xy(zx):\delta.$$</span></p>
<p>What this means is that for the application <span class="math-container">$(λxyz.xy(zx))\ u\ v\ w$</span> to make sense in the typed setting, it must be that for some given types <span class="math-container">$\beta,\eta,\delta$</span> we have <span class="math-container">$u:\beta\to\eta\to\delta$</span>, <span class="math-container">$v :\beta$</span>, and <span class="math-container">$z:(\beta\to\eta\to\delta)\to\eta$</span>. Since <span class="math-container">$\mathbb N$</span> is not a function type, the first and third arguments cannot be of type <span class="math-container">$\mathbb N$</span>.</p>
<hr />
<p>Now if you still wish to give some sense in an <em>untyped setting</em> to the term <span class="math-container">$(λxyz.xy(zx))\ 1\ 2\ 3$</span>, you can only do this if you interpret <span class="math-container">$1, 2,3$</span> as lambda terms. You can do this using a <a href="https://en.wikipedia.org/wiki/Church_encoding" rel="nofollow noreferrer">Church encoding</a>, that is if you define
<span class="math-container">$$0 := \lambda s.\lambda z. z\\
1:=\lambda s.\lambda z. sz\\
2:=\lambda s.\lambda z. s(sz)\\
3:=\lambda s.\lambda z. s(s(sz)).$$</span></p>
<p>Then, your expression <span class="math-container">$1\ 2\ (3\ 1)$</span> corresponds to the term
<span class="math-container">$$(\lambda s.\lambda z.sz)(\lambda s.\lambda z.s(sz))((\lambda s.\lambda z.s(s(sz)))(\lambda s.\lambda z. sz))$$</span>
which you can reduce further.</p>
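<p>Because Church numerals are ordinary higher-order functions, this reduction can be spot-checked in any language with closures. A sketch in Python (the decoder <code>to_int</code> is my own helper, not part of the calculus):</p>

```python
# Church numerals: n = λs.λz. s(s(...(z)...))
one = lambda s: lambda z: s(z)
two = lambda s: lambda z: s(s(z))
three = lambda s: lambda z: s(s(s(z)))

# the untyped term λx.λy.λz. x y (z x), applied to 1, 2, 3
term = (lambda x: lambda y: lambda z: x(y)(z(x)))(one)(two)(three)

# decode a Church numeral by applying it to successor and 0
to_int = lambda n: n(lambda k: k + 1)(0)

print(to_int(term))  # -> 1
```

<p>So the term <code>1 2 (3 1)</code> is (eta-)equivalent to the Church numeral <code>1</code>.</p>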
<hr />
<p>You could also check out <a href="https://math.stackexchange.com/questions/4283320/how-to-read-verbalize-lambda-expressions-what-is-%CE%BBx-%CE%BBy-m">this</a> question for an explanation of the structure of a lambda term.</p>
|
463,239 | <p>Integrate $$\int{x^2(8x^3+27)^{2/3}}dx$$</p>
<p>I'm just wondering, what should I make $u$ equal to?</p>
<p>I tried to make $u=8x^3$, but it's not working. </p>
<p>Can I see a detailed answer?</p>
| user71352 | 71,352 | <p>Let $u=8x^{3}+27$ then $du=24x^{2}dx$. So</p>
<p>$\displaystyle\int x^{2}(8x^{3}+27)^{\frac{2}{3}}dx=\frac{1}{24}\int(8x^{3}+27)^{\frac{2}{3}}(24x^{2})dx=\frac 1{24}\int u^{\frac{2}{3}}du$</p>
|
877,646 | <p>Friends, I have a set of matrices of dimension $3\times3$ called $A_i$.</p>
<p>Following are the given conditions</p>
<p>a) each $A_i$ is non-invertible <strong>except $A_0$</strong>, because its determinant is zero.</p>
<p>b) $\sum_{n=0}^\infty A_n$ is invertible, i.e. its determinant is not zero</p>
<p>c) </p>
<ol>
<li><p>This is the recursion available for $A_i$,
$ A_{n}=\frac{1}{n} \{C_1* A_{n-1} +C_2 * A_{n-2}\} \tag 1$, where $A_0$ is a constant matrix and $A_1$ is a constant matrix </p></li>
<li><p>$C_1,C_2 $ are constant matrices. $A_1$ and $A_0$ are initial values.
$A_0,A_1,C_1,C_2,A_n $ have dimension $3\times 3$</p></li>
<li><p>$C_1,C_2,C_1+C_2 $ etc. are skew-symmetric matrices, not commutative, and with zero diagonals </p></li>
<li><p>The series $\sum A_n$ converges, i.e. the terms $A_n$ approach zero (or very small values)</p></li>
<li><p>The determinants of $C_1A_{n-1}$ and $C_2A_{n-2}$ are both zero (logic: $\det(C_1A_{n-1})=\det(C_1)\det(A_{n-1})=0\cdot\det(A_{n-1})=0$)</p></li>
<li><p>Given that SUM $= \sum_{n=0}^{\infty} A_n \ne 0 $.</p></li>
<li><p>Let $S(x) = \sum_{n=0}^\infty A_nx^n$, so $SUM=S(1)$. <strong><em>Given that $S(1)$ is invertible</em></strong>. Note that we have not yet proved $S(x)$ is invertible; all we know from the given conditions is that $S(1)$ is invertible. </p></li>
</ol>
<p><strong>Question</strong>
From the given conditions, can we say that $S(x)=\sum_{n=0}^\infty A_nx^n$ is invertible? If so, how do we prove it? ($x$ is not a matrix, it is just a scalar variable.)</p>
| David Zhang | 80,762 | <p>The mistake is simple--$\mathrm{Ci}$ has a branch cut across the negative real axis, so $\mathrm{Ci}(-\infty - i)$ should indeed evaluate to $-i \pi$ rather than $i \pi$.</p>
|
1,380,697 | <p>I am currently an undergraduate and thinking about applying to graduate school for math. The problem is that I don't know which field I want to go into. Taking graduate classes confuses me even more, because the more I learn, the less I know what specifically I want to do. My question is: where can I find information about the different fields of mathematics? Maybe you can recommend some good journals with overviews of the top or popular areas of math. I have already spoken with my professors and asked graduate students about their histories, but I think my knowledge of math in a broad sense grows more slowly than I would like. </p>
<p>Maybe there is some good website where people chat about different fields of research? What about conferences: is there a conference for undergraduates about the top trends in mathematics? </p>
<p>All sources and all answers are welcome. </p>
<p>I am mostly interested in pure math, but I also like applied math. </p>
| David Wheeler | 23,285 | <p>As you finish your undergraduate studies, you should have had at least some introduction or passing acquaintance to the following subjects:</p>
<p>Algebra (the abstract kind)</p>
<p>Discrete mathematics (maybe some number theory)</p>
<p>Linear algebra</p>
<p>Real/complex analysis (post-calculus)</p>
<p>Topology</p>
<p>Differential equations</p>
<p>Statistics/probability</p>
<p>Of these, which did you <em>enjoy</em> the most? All of these areas will have "niches" in which investigations are still being made. Pick an area that speaks to you, and "go deep". Finding the right journals, or the right texts, is a lot easier when your search is <em>focused</em>. Savor this moment, you have a brief window of true freedom to <em>choose</em>. Two or three years in, and your choices will be a lot more constrained, because there is <em>too much math</em> out there to master it all.</p>
<p>I understand, I do, because with the freedom to choose, there is also bewilderment. What would be best? For my money, I say trust your gut, and I believe it's far better to be <em>happy</em> than <em>trendy</em>. If you are too driven on trying to find the most "fruitful" area, there's a real risk of it turning first to drudgery, then to feeling enslaved. You'll do far better, in the long run, having a <em>passion</em> for what you study.</p>
<p>As the old saying goes: "if you do what you love for a living, you never have to work a day in your life."</p>
|
3,931,831 | <p>For the scenario given below, I am confused about if the samples are dependent or independent since the scenario does not mention anything about the samples being paired/related or vice versa.</p>
<p>I am aware if terms such as paired, repeated measurements, within-subject effects, matched pairs, and pretest/posttest are instructed in scenarios then it indicates that the samples are dependent and the opposite applies to independent samples, but I am clueless for the given scenario. Any help would be appreciated.</p>
<p><em>Alice and Bob work evening shifts in a supermarket. Alice has complained to
the manager that she works, on average, much more than Bob. The manager claims that on
average they both work the same amount of time, i.e. the competing claim is that the average
working hours are different. After a short discussion between the manager and Alice, the manager
randomly selected 50 evenings when Alice and Bob both worked.</em></p>
| Math Lover | 801,574 | <p>This is from your working -</p>
<p><span class="math-container">$(3x^2 -3, 3y^2 -3) = \lambda (1,2)$</span></p>
<p><span class="math-container">$3x^2 - 3 = \lambda, 3y^2-3 = 2\lambda$</span></p>
<p>Equating <span class="math-container">$\lambda$</span> from both equations,</p>
<p><span class="math-container">$6x^2-6 = 3y^2-3 \implies 2x^2 - y^2 = 1$</span></p>
<p>Substitute <span class="math-container">$x$</span> from <span class="math-container">$x+2y = 3$</span></p>
<p><span class="math-container">$2(3-2y)^2 - y^2 = 1$</span></p>
<p><span class="math-container">$\implies 7y^2 - 24y + 17 = 0 \, $</span> or <span class="math-container">$(7y-17)(y-1) = 0$</span></p>
<p>Can you take it from here and find possible points for extrema?</p>
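<p>Both roots can be verified against the equations above: with $x=3-2y$ from the constraint, each root of $7y^2-24y+17$ must satisfy $2x^2-y^2=1$ and the two multiplier equations. A quick check (my own verification sketch):</p>

```python
# roots of 7y^2 - 24y + 17 = (7y - 17)(y - 1)
for y in (17/7, 1.0):
    x = 3 - 2*y                                  # constraint x + 2y = 3
    assert abs(2*x**2 - y**2 - 1) < 1e-12        # eliminated condition
    lam = 3*x**2 - 3                             # from 3x^2 - 3 = lambda
    assert abs((3*y**2 - 3) - 2*lam) < 1e-12     # from 3y^2 - 3 = 2*lambda
print("candidate points:", (3 - 2*(17/7), 17/7), (1.0, 1.0))
```

<p>So the candidate points are $(x,y)=(1,1)$ and $(x,y)=(-13/7,\,17/7)$.</p>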
|
2,362,477 | <p>Solve the equation $f(x) = 2$.
I reached the stage $\sin(x) = {2\over 3}$, but then, using $x = \sin^{-1}(2/3)$ (inverse sine) as I remember it being solved, I get the answer $x = 41.81$, while the correct answer is $x = 0.730$ or $2.41$. Why is this so? Sorry, it might be a silly question, but it has been long since I studied mathematics, so I have forgotten most of it. Thanks in advance!</p>
| Riccardo.Alestra | 24,089 | <p>The correct answer is $$x=\arcsin(2/3)=0.7297276563,$$ in radians. Your calculator was in degree mode: $41.81^\circ$ is the same angle as $0.730$ radians. The second solution comes from $\sin(\pi-x)=\sin x$, giving $x=\pi-\arcsin(2/3)\approx 2.41$.</p>
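<p>The two values in radians, and the degree measure the asker computed, can be reproduced directly (a small check of my own):</p>

```python
import math

x1 = math.asin(2/3)          # principal solution, in radians
x2 = math.pi - x1            # supplementary solution, since sin(pi - x) = sin(x)

print(round(x1, 3), round(x2, 3))    # -> 0.73 2.412
print(round(math.degrees(x1), 2))    # -> 41.81, the same angle in degrees
```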
|
4,622,956 | <p>I think <span class="math-container">$\,9\!\cdot\!10^n+4\,$</span> can be a perfect square, since it is <span class="math-container">$0 \pmod 4$</span> (a quadratic residue modulo <span class="math-container">$4$</span>), and <span class="math-container">$1 \pmod 3$</span> (also a quadratic residue modulo <span class="math-container">$3$</span>).<br />
But when I tried to find if <span class="math-container">$\;9\!\cdot\!10^n+4\,$</span> is a perfect square, I didn’t succeed. Can someone help me see if <span class="math-container">$\;9\!\cdot\!10^n+4\,$</span> can be a perfect square ?</p>
| Umesh Shankar | 816,291 | <p>Note that if <span class="math-container">$$9\!\cdot\!10^n+4=m^2\implies (m+2)(m-2)=9\!\cdot\!10^n$$</span></p>
<p>Note that since <span class="math-container">$\gcd(m+2,m-2)$</span> divides <span class="math-container">$4$</span>, <span class="math-container">$5^n$</span> must divide exactly one of <span class="math-container">$m+2$</span> and <span class="math-container">$m-2$</span>.
But then the remaining factors are not big enough to maintain the difference of <span class="math-container">$4$</span>, since <span class="math-container">$\left|5^n-9\!\cdot\!2^n\right|>4$</span> for <span class="math-container">$n\geqslant3$</span>; the cases <span class="math-container">$n\leqslant 2$</span> can be checked directly.</p>
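<p>A brute-force check is consistent with this argument: no small $n$ makes $9\cdot10^n+4$ a square, and the size gap used above holds from $n=3$ on (verification sketch of mine):</p>

```python
import math

# no perfect squares among 9*10^n + 4 for small n
for n in range(0, 200):
    v = 9 * 10**n + 4
    r = math.isqrt(v)
    assert r * r != v

# the size gap used in the answer, for n >= 3
for n in range(3, 200):
    assert abs(5**n - 9 * 2**n) > 4
print("no squares found; gap bound holds")
```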
|
3,545,548 | <p><span class="math-container">$\def\LIM{\operatorname{LIM}}$</span>
Let <span class="math-container">$(X,d)$</span> be a metric space and given any cauchy sequence <span class="math-container">$(x_n)_{n=1}^{\infty}$</span> in <span class="math-container">$X$</span> we introduce the formal limit <span class="math-container">$\LIM_{n\to \infty}x_n$</span>. We say that two formal limits <span class="math-container">$\LIM_{n\to \infty}x_n$</span> and <span class="math-container">$\LIM_{n\to \infty}y_n$</span> are equal iff <span class="math-container">$\lim_{n \to \infty}d(x_n,y_n)=0$</span>. We then define <span class="math-container">$\bar{X}$</span> to be set of all the formal limits of Cauchy sequences in <span class="math-container">$X$</span>. We define the metric <span class="math-container">$d_{\bar{X}}$</span> as follows: <span class="math-container">$$d_{\bar{X}}(\LIM_{n\to \infty}x_n,\LIM_{n\to \infty}y_n)= \lim_{n \to \infty} d(x_n,y_n)$$</span>
I have proved that <span class="math-container">$(\bar{X},d_{\bar{X}})$</span> is indeed a metric space and that the definition of the metric is well defined. But I am stuck on proving that <span class="math-container">$(\bar{X},d_{\bar{X}})$</span> is a complete metric space. This problem should be resolved without taking topological spaces into account, as that concept comes later in the book. Any suggestion on how to go about this problem without using the machinery of topology would be invaluable. Thanks in advance.</p>
| Paweł Czyż | 551,592 | <p>Let's prove a slightly stronger statement.</p>
<p><strong>Proposition</strong> Let <span class="math-container">$x_1, \dots, x_n\in \mathbb R$</span> and let <span class="math-container">$0 < k \neq 1$</span> be another real number. Then the numbers
<span class="math-container">$$ k^{x_1}, \dots, k^{x_n}$$</span>
are successive terms of a geometric progression if and only if <span class="math-container">$x_1, \dots, x_n$</span> are successive terms of an arithmetic progression.</p>
<p><em>Proof:</em> Assume <span class="math-container">$k^{x_1}, \dots, k^{x_n}$</span> are successive terms of a geometric progression, which means that there is a number <span class="math-container">$c\in \mathbb R$</span> such that <span class="math-container">$k^{x_{i+1}}=ck^{x_i}$</span> for <span class="math-container">$i=1, 2,\dots, n-1$</span>. Observe that both sides are positive, so <span class="math-container">$c>0$</span>, and as <span class="math-container">$k\neq 1$</span> we can take the logarithm to get
<span class="math-container">$$ x_{i+1} = \log_k c + x_i$$</span>
which are successive terms of arithmetic progression with difference <span class="math-container">$\log_kc$</span>.</p>
<p>Now assume that <span class="math-container">$x_1, \dots, x_n$</span> are successive terms of an arithmetic progression with difference <span class="math-container">$a$</span>, i.e. <span class="math-container">$x_{i+1}=x_i+a$</span> for <span class="math-container">$i=1, 2,\dots,n-1 $</span>. Exponentiating with base <span class="math-container">$k$</span> we have
<span class="math-container">$$k^{x_{i+1}} = k^{x_i+a}=k^{x_i}\cdot k^a$$</span>
which are successive terms of a geometric progression with ratio <span class="math-container">$k^a$</span>.</p>
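<p>The proposition is easy to spot-check numerically; here is a small sketch of my own (the particular $k$, starting value, and common difference are arbitrary choices):</p>

```python
k, x0, a = 2.5, 0.7, 0.3           # base k != 1, arithmetic start and difference
xs = [x0 + i*a for i in range(6)]  # arithmetic progression
gs = [k**x for x in xs]            # should be geometric with ratio k**a

ratios = [gs[i+1] / gs[i] for i in range(5)]
for r in ratios:
    assert abs(r - k**a) < 1e-12   # constant ratio, equal to k^a
print("common ratio:", round(k**a, 6))
```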
|
3,989,878 | <p>I can't solve this problem. I tried to find <span class="math-container">$\tan x$</span> directly by solving cubic equations but I failed.</p>
<p>The problem is to find <span class="math-container">$\tan x\cot 2x$</span> given that
<span class="math-container">$$\tan x+ \tan 2x=\frac{2}{\sqrt{3}}, \>\>\>\>\>0<x<\pi/4$$</span></p>
<p>How am I supposed to solve this problem?</p>
| Quanto | 686,284 | <p>Denote <span class="math-container">$y = \tan x \cot 2x = \frac{1-\tan^2x}2$</span> and express <span class="math-container">$\tan x+ \tan 2x=\frac{2}{\sqrt{3}}$</span> as a system of equations in <span class="math-container">$x,y$</span></p>
<p><span class="math-container">$$\tan x+\frac{2\tan x}{1-\tan^2x}=\left(1+\frac1y \right)\tan x=\frac{2}{\sqrt{3}}
$$</span>
Then, eliminate <span class="math-container">$\tan x$</span> to get
<span class="math-container">$$\frac3{y^3} -\frac{13}{y}-6=0$$</span>
which is a depressed cubic equation in <span class="math-container">$\frac1y$</span>, yielding</p>
<p><span class="math-container">$$\tan x \cot 2x=y= \left( \frac{2\sqrt{13}}3 \cos \left( \frac13\cos^{-1} \frac{27}{13\sqrt{13}}\right) \right)^{-1}
$$</span></p>
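<p>The closed form can be confirmed numerically: bisect $\tan x+\tan 2x=2/\sqrt3$ on $(0,\pi/4)$, where the left side is strictly increasing, and compare $\tan x\cot 2x$ with the cosine expression (my own check):</p>

```python
import math

f = lambda x: math.tan(x) + math.tan(2*x) - 2/math.sqrt(3)

# bisection on (0, pi/4): f is negative near 0 and blows up to +inf near pi/4
lo, hi = 1e-9, math.pi/4 - 1e-9
for _ in range(200):
    mid = (lo + hi) / 2
    if f(mid) > 0:
        hi = mid
    else:
        lo = mid
x = (lo + hi) / 2

y_direct = math.tan(x) / math.tan(2*x)   # tan(x) * cot(2x)
y_closed = 1 / (2*math.sqrt(13)/3 * math.cos(math.acos(27/(13*math.sqrt(13)))/3))
assert abs(y_direct - y_closed) < 1e-9
print(round(y_direct, 6))
```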
|
2,400,654 | <p>I am told that the statement "any closed set has a point on its boundary" is false, yet I don't know how to disprove it. In fact, I think it is true. </p>
<p>Suppose we have $[a,b]$, a closed set. Then the boundary would be $\{a,b\}$, both of which are elements of the set. So there we have a closed set that has points on its boundary.</p>
<p>I think one reason why I was told that the statement is false can be related to the empty set since</p>
<ul>
<li>Empty set is also closed (clopen, but still)</li>
<li>It does not have a point on its boundary</li>
<li>The statement goes "any closed set..", hence empty set is not
excluded.</li>
</ul>
<p>Supposing that the statement is false due to the above-mentioned reasoning (fact that empty set was not excluded), would "any nonempty closed set has a point on its boundary" be correct?</p>
<p>Thanks.</p>
| kimchi lover | 457,779 | <p>Of course it's just a convention, so open to change <em>in principle</em>.</p>
<p>One reason for the current convention is that when working with such functions one often makes changes of variables of the form $x = cy$, with attendant $dx = c\,dy$ substitutions. When working this way it is often handy to know that the form $dx/x$ turns into $dy/y$. So think of the obnoxious $x^{\alpha-1}\dots dx$ as the user-friendly $x^\alpha\dots dx/x$.</p>
<p>Another reason is that such a change would confuse everyone, a lot. </p>
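<p>The convenience of $dx/x$ comes from its scale invariance: under $x=cy$ the form $dx/x$ becomes $dy/y$, so such integrals transform cleanly. A small numerical illustration (the test function and constants are arbitrary choices of mine, using a simple midpoint rule):</p>

```python
import math

def integral_dx_over_x(g, a, b, n=200000):
    # midpoint rule for the integral of g(x) dx/x over [a, b]
    h = (b - a) / n
    return sum(g(a + (i + 0.5)*h) / (a + (i + 0.5)*h) for i in range(n)) * h

g = lambda x: math.exp(-x)
c = 3.7
I1 = integral_dx_over_x(g, 1.0, 5.0)
# substitute x = c*y: the same form dy/y over [1/c, 5/c], applied to g(c*y)
I2 = integral_dx_over_x(lambda y: g(c*y), 1.0/c, 5.0/c)
assert abs(I1 - I2) < 1e-6
print(round(I1, 6))
```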
|
2,791,068 | <p>The Laplace transform of a measure $\mu$ on the real line is defined by
$$f_{\mu}(s)= \int_{\mathbb{R}}e^{-st}d\mu(t), \hspace{1cm} \forall s \geqslant 0.$$
My questions are:</p>
<p>1) Does the Laplace transform of a measure (finite or infinite) always exist?</p>
<p>2) If not, can it be said that the Laplace transform of a probability measure always exists?</p>
<p>If the support of the measure is changed from the real line to the non-negative part of the real line, what happens to questions (1) and (2)?</p>
| Arrhenius Impostor | 563,282 | <p>Your definition of the Laplace transform will not converge in general, because if <span class="math-container">$\mu$</span> has support on the whole real line then the exponential term can blow up, depending on the exact form of <span class="math-container">$\mu$</span>.</p>
<p>The common definition of the Laplace transform is for measures supported on the non-negative real line.
(Alternatively you may want to look into the two-sided Laplace transform, which keeps the integrand <span class="math-container">$e^{-st}$</span> but integrates over the whole real line and converges only for <span class="math-container">$s$</span> in a vertical strip.)</p>
<p>Now, consider a <span class="math-container">$\mu$</span> that is supported on the non-negative real line, and that is w.l.o.g. positive.
Since <span class="math-container">$|e^{-st}| \le 1$</span> for <span class="math-container">$\Re[s]\geq 0$</span> and <span class="math-container">$t\geq 0$</span>, it is easy to see that</p>
<p><span class="math-container">$|f_\mu(s)| \le f_\mu(0) = \int\limits_0^\infty d\mu(t) = \| \mu \|$</span>.</p>
<p>Hence, if the measure is finite, then its Laplace transform exists.
In conclusion, the Laplace transform of a probability measure always exists.
(It is in fact (related to) the measure's moment generating function.)</p>
<p>For infinite measures, one needs some regularity conditions on <span class="math-container">$\mu$</span> that ensure that the integral does not blow up.
One such condition would be, for example, that its distribution function <span class="math-container">$F(t) := \mu([0,t))$</span> is exponentially bounded.
That means:</p>
<p><span class="math-container">$ \exists K,C > 0: \;\; |F(t)| \le C e^{Kt}$</span></p>
<p>Then one can show that the Laplace transform exists for <span class="math-container">$\Re[s]>K$</span>.</p>
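<p>For a concrete instance of the bound $|f_\mu(s)|\le\|\mu\|$: take $d\mu(t)=e^{-t}\,dt$, a probability measure on $[0,\infty)$, whose Laplace transform is $1/(1+s)\le 1$ for real $s\ge0$. A numerical sketch of mine:</p>

```python
import math

def laplace_exp_measure(s, T=40.0, n=200000):
    # midpoint rule for the integral of e^{-st} e^{-t} dt over [0, T];
    # the tail beyond T is negligible for s >= 0
    h = T / n
    return sum(math.exp(-(s + 1.0) * (i + 0.5) * h) for i in range(n)) * h

for s in (0.0, 0.5, 2.0):
    approx = laplace_exp_measure(s)
    exact = 1.0 / (1.0 + s)            # known closed form
    assert abs(approx - exact) < 1e-6
    assert approx <= 1.0 + 1e-12       # |f_mu(s)| <= total mass = 1
print("bound |f_mu(s)| <= ||mu|| verified for the exponential law")
```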
|
66,068 | <p>I have a list like this. </p>
<pre><code>cdatalist = {{1., 0.898785, Failed, Failed, 50., 25., "serial"}, {1., 1.31175,1., Failed, 50., 25., "serial"}, {1., 18.8025, Failed, 0.490235, 50., 25., "serial"}, {1., 19.6628, 0.990079, Failed, 50., 25., "serial"}, {1., 39.547, Failed, Failed, 50., 25., "serial"}, {1., 39.7503, Failed, 0.482749, 50., 25., "serial"}, {1., 40.2078, Failed, Failed, 50., 25., "serial"}, {1., 40.6208, 0.980588, Failed, 50., 25., "serial"}, {1., 102.588, Failed, Failed, 50., 25., "serial"}, {1., 102.781, Failed, 0.466214, 50., 25., "serial"}, {1., 102.826, Failed, Failed, 50., 25., "serial"}, {1., 102.833, Failed, Failed, 50., 25., "serial"}, {15., 0.89985, Failed, Failed, 50., 25., "serial"}, {15., 1.31344, 1., $Failed, 50., 25., "serial"}}
</code></pre>
<p>At the end, I want to compile a new list by dropping any rows whose third column is $Failed, keeping only the first three columns. </p>
<pre><code>datalistfunc[input_] :=
Module[{cell, cell2, celltable, celllist},
i = 1;
celllist = {};
While[i < Length@cdatalist + 1,
cell =
Select[cdatalist[[i]][[1 ;; 3]],
Head[cdatalist[[i]][[3]]] == Real &];
i = If[i < Length@cdatalist + 1, i + 1, Length@cdatalist + 1];
celllist = AppendTo[celllist, cell2];
Print[cell2]
]
]
datalist = datalistfunc[cdata];
</code></pre>
<p>My list looks like this after filtering. </p>
<pre><code>{{},{}}
{{1.,1.31175,1.},{}}
{{},{}}
{{1.,19.6628,0.990079},{}}
{{},{}}
{{},{}}
{{},{}}
{{1.,40.6208,0.980588},{}}
{{},{}}
{{},{}}
{{},{}}
{{},{}}
{{},{}}
{{15.,1.31344,1.},{}}
</code></pre>
<p>Instead, I want my list to look like this. </p>
<pre><code>{{1.,1.31175,1.},
{1.,19.6628,0.990079},
{1.,40.6208,0.980588},
{15.,1.31344,1.}}
</code></pre>
| user2895279 | 11,600 | <pre><code>cdatalist2 =
Cases[cdatalist[[All, 1 ;; 3]], {_?NumericQ, _?NumericQ, _?NumericQ}]
</code></pre>
|
998,769 | <p>A random variable $X$ is uniformly distributed over the interval $[0, 2\pi]$. Find:</p>
<p>a) the pdf of $X$</p>
<p>b) the cdf of $X$</p>
<p>c) $P(\frac{\pi}{6} \leq X \leq \frac{\pi}{2})$</p>
<p>d) $P(-\frac{\pi}{6} \leq X \leq \frac{\pi}{2})$</p>
<p>my answers:</p>
<p>a) pdf of $X$ is $f(x) = \begin{cases}\frac{1}{2\pi},& 0 \leq x \leq 2\pi, \\
0, & \text{otherwise.}\end{cases}$</p>
<p>b) cdf of $X$ is $F(x) = \begin{cases}0,& x < 0, \\ \frac{x}{2\pi}, &0 \leq x \leq 2\pi, \\ 1,& x > 2\pi.\end{cases}$</p>
<p>c) For this one, do I just do $F(\frac{\pi}{2}) - F(\frac{\pi}{6})$?</p>
<p>d) For this question, it looks like I have to do something else, because it involves
$F(-\frac{\pi}{6})$; but when $x < 0$, shouldn't the probability contribution be $0$ as well? Not sure if I'm looking at this the wrong way. </p>
<p>Could someone also kindly explain to me why, in a CDF, the probability is always $0$ when $x$ is less than the start of the interval, but always $1$ when $x$ is greater than the end of the interval? I don't know if it's a coincidence or if I didn't properly understand a CDF, but every CDF I've seen so far has the probability equal to $1$ when $x$ is greater than the end of the interval. </p>
| Rey | 73,712 | <p>For proving $h$ is injective, you want to show:
$$h(x)=h(y) \implies x=y $$
which can be proved with something like this: </p>
<p>From $h(x)=h(y)$ we write:
$$\frac{x^3}{|x|} = \frac{y^3}{|y|} $$
$$ \Rightarrow x^2\frac{x}{|x|} = y^2\frac{y}{|y|} $$
$$ \Rightarrow x^2 \operatorname{sign}(x) = y^2 \operatorname{sign}(y) $$
$$ \Rightarrow \frac{x^2}{y^2} = \frac{\operatorname{sign}(y)}{\operatorname{sign}(x)} $$
$$ \Rightarrow \frac{\operatorname{sign}(y)}{\operatorname{sign}(x)} \geq 0 $$
$$ \Rightarrow \operatorname{sign}(y) = \operatorname{sign}(x) $$
$$ \Rightarrow x^2 = y^2 $$
$$ \Rightarrow x = y $$ </p>
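<p>Another way to see the injectivity: $h(x)=x^3/|x|=x\,|x|$ is strictly increasing on its domain. A quick numerical check (my own sketch):</p>

```python
h = lambda x: x**3 / abs(x)      # equals x*|x| for x != 0

xs = [x / 10 for x in range(-50, 51) if x != 0]
values = [h(x) for x in xs]
# strictly increasing values on an increasing grid => injective on the grid
assert all(a < b for a, b in zip(values, values[1:]))
print("h is strictly increasing on the sample grid")
```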
|
3,166,999 | <p>I'm reading Kechris' book "Classical Descriptive Set Theory" and the author gives the following definition (pp. <span class="math-container">$49$</span>, row <span class="math-container">$3$</span>):</p>
<blockquote>
<p>A <strong>weak basis</strong> of a topological space <span class="math-container">$X$</span> is a collection of nonempty open sets s.t. every nonempty open set contains one of them.</p>
</blockquote>
<p>My question is: is this definition equivalent to that of a basis for a topology?</p>
<p>The fact that the author gives a specific name to such a family suggests that it is not, but for every <span class="math-container">$x\in X$</span> and for every open nhbd <span class="math-container">$U(x)$</span> there exists <span class="math-container">$V(x)$</span> in the weak basis contained in <span class="math-container">$U$</span>. This means that a weak basis is also a covering and hence satisfies the conditions for being a basis.</p>
<p>Any comment is appreciated.
Thank you in advance for your help.</p>
| Cameron Buie | 28,900 | <p>While it is true that for every <span class="math-container">$x,$</span> any neighborhood <span class="math-container">$U(x)$</span> contains an element of the weak basis, say <span class="math-container">$V(x),$</span> we <em>don't</em> know that <span class="math-container">$V(x)$</span> is a neighborhood of <span class="math-container">$x$</span>! All we know is that it is a subset of <span class="math-container">$U(x)$</span> and that it is open and nonempty. Thus, a weak basis need not cover the space, so need not be a basis.</p>
<p>For example, consider the topology of the empty set together with the cofinite sets (sets whose complement is finite) on the set of non-negative integers. A weak basis would be the set of cofinite sets of positive integers, but this cannot be a basis, having no neighborhood of <span class="math-container">$0.$</span></p>
<p>In general--among <span class="math-container">$T_1$</span> spaces, anyway--I suspect that if a space has the property that every weak basis is a basis, then the space is discrete. (The converse trivially holds.)</p>
|
4,325,373 | <blockquote>
<p>Using Algebraic approach, test the convexity of the set <span class="math-container">$$S=\{(x_1,x_2):x_2^2\geq8x_1\}$$</span></p>
</blockquote>
<p>Definition of convexity: <span class="math-container">$S \subseteq \mathbb R^2$</span> is a convex set if <span class="math-container">$\forall \alpha \in \mathbb R, 0 \leq\alpha \leq 1$</span> and <span class="math-container">$\forall \vec x,\vec y \in S$</span>, it holds that <span class="math-container">$\alpha \vec x + (1 - \alpha)\vec y \in S$</span>.</p>
<p>Let <span class="math-container">$\vec x=(a,b)$</span> and <span class="math-container">$\vec y=(c,d)$</span> then the convex combination is <span class="math-container">$\alpha(a,b)+(1-\alpha)(c,d)=(\alpha(a-c)+c,\alpha(b-d)+d)$</span> and
<span class="math-container">$$
\begin{align}
b^2-8a&\geq0\\
d^2-8c&\geq0\qquad(A)
\end{align}
$$</span>
Now I need to show that,
<span class="math-container">$$
\begin{align}
&(\alpha(b-d)+d)^2-8(\alpha(a-c)+c)\\
={}&\alpha^2b^2-2\alpha^2bd+\alpha^2d^2 +d^2+2\alpha(b-d)d-\alpha(8a)+\alpha(8c)-8c\\
={}&\alpha^2b^2-2\alpha^2bd+\alpha^2d^2+(d^2-8c)+2\alpha(b-d)d-\alpha(8a)+\alpha(8c)
\end{align}
$$</span></p>
<p>I couldn't conclude anything from the above expression except for <span class="math-container">$d^2-8c\geq0$</span>. Any help will be appreciated.</p>
| TravorLZH | 748,964 | <p>To verify this, I recommend using Dirichlet series as it is a more powerful device to study Dirichlet convolution. Suppose we define the symbol</p>
<p><span class="math-container">$$
D(s;f)=\sum_{n\ge1}{f(n)\over n^s}
$$</span></p>
<p>It is not difficult to show that <span class="math-container">$D(s;f)D(s;g)=D(s;f*g)$</span>. Using the properties of multiplication, we easily see that this indicates that Dirichlet convolutions are commutative and associative. If <span class="math-container">$f(1)\ne0$</span>, then we see that</p>
<p><span class="math-container">$$
D(s;f^{-1})=[D(s;f)]^{-1}
$$</span></p>
<p>This indicates that if <span class="math-container">$f(1)g(1)\ne0$</span> then</p>
<p><span class="math-container">$$
D(s;f^{-1}*g^{-1})=[D(s;f)D(s;g)]^{-1}=D(s;(f*g)^{-1})
$$</span></p>
<p>Hope this can address your concern!</p>
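<p>The inverse claim is easy to test computationally: build $f^{-1}$ by the standard recursion $f^{-1}(1)=1/f(1)$, $f^{-1}(n)=-\frac1{f(1)}\sum_{d\mid n,\,d<n}f^{-1}(d)\,f(n/d)$, and check that $f*f^{-1}$ equals the identity $\varepsilon$ of Dirichlet convolution ($\varepsilon(1)=1$, $\varepsilon(n)=0$ otherwise). The particular $f$ below is an arbitrary choice of mine:</p>

```python
from fractions import Fraction

N = 60
f = {n: Fraction(n + 2) for n in range(1, N + 1)}   # any f with f(1) != 0

def dirichlet_mul(f, g, N):
    # (f*g)(n) = sum over factorizations n = a*b of f(a) g(b), up to N
    h = {n: Fraction(0) for n in range(1, N + 1)}
    for a in range(1, N + 1):
        for b in range(1, N // a + 1):
            h[a * b] += f[a] * g[b]
    return h

# recursion for the Dirichlet inverse
finv = {1: 1 / f[1]}
for n in range(2, N + 1):
    s = sum(finv[d] * f[n // d] for d in range(1, n) if n % d == 0)
    finv[n] = -s / f[1]

eps = dirichlet_mul(f, finv, N)
assert eps[1] == 1 and all(eps[n] == 0 for n in range(2, N + 1))
print("f * f^{-1} = epsilon up to N =", N)
```

<p>Exact rational arithmetic (<code>Fraction</code>) avoids any floating-point ambiguity in the check.</p>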
|
140,819 | <p>Everybody loves the good old quadratic Mandelbrot set. As you probably know, both it and the corresponding quadratic Julia sets are defined by the iteration $f(z) = z^2 + c$.</p>
<p>You might expect, however, that $f(z) = az^2 + bz + c$ would give you more possibilities. However, all the books on the subject assert that this is not the case. I'm trying to get to the bottom of why this is so.</p>
<hr/>
<p>For a start, you can see that if you divide the entire thing through $a$, then the specific <em>values</em> taken by $f$ would change, but their <em>relationship</em> would not. Hence, multiplying by $a$ is only scaling and/or rotating the system. It doesn't actually <em>change</em> its behaviour.</p>
<p>But what of the linear term, $bz$? Why is <em>that</em> redundant?</p>
<hr/>
<p>We can ask a similar question about the <em>cubic</em> Mandelbrot set. A lot of people define this as $g(z) = z^3 + c$, but the definition I like is $h(z) = z^3 - 3a^2z + b$. This has two critical points (whatever that means), which yields strange, shadowy images. More interestingly, with <em>two</em> complex-valued parameters, the corresponding Mandelbrot set is <em>four-dimensional!</em></p>
<p>Again, we are told that $h(z)$ is the most general formulation. (In particular, you don't need a quadratic term.) The strange formulation $-3a^2z$ rather than just $az$ seems necessary to make the parameter plane plot correctly. (It also means that the critical points are $+a$ and $-a$ exactly.)</p>
<p>Does anybody know <em>why</em> this formulation is correct? What would the general 4th order case look like?</p>
| lhf | 589 | <p>For the first question, the key word is <em>conjugation</em>: there is an affine change of coordinates $\phi$ such that $f(z) = az^2 + bz + c$ becomes $g(z)=z^2+c'$ (for a new constant $c'$) in the sense that $f\circ\phi=\phi\circ g$. Since $g=\phi^{-1} \circ f\circ\phi$, the iterates of $f$ are conjugated to the iterates of $g$ by the same change of coordinates.</p>
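<p>The conjugation is concrete enough to verify by machine: taking $\phi(z)=az+b/2$ turns $f(z)=az^2+bz+c$ into $w^2+c'$ with $c'=ac+b/2-b^2/4$. This particular $\phi$ and $c'$ are my own worked-out instance of the claim:</p>

```python
# hypothetical coefficients; any a != 0 works
a, b, c = 2 - 1j, 0.5 + 0.3j, -0.7 + 0.2j

f = lambda z: a*z*z + b*z + c          # general quadratic
phi = lambda z: a*z + b/2              # affine conjugating map
phi_inv = lambda w: (w - b/2) / a

c_prime = a*c + b/2 - b*b/4            # the new parameter c'
g = lambda w: w*w + c_prime            # monic centered quadratic

# conjugacy phi∘f = g∘phi means phi carries f-orbits to g-orbits
z = 0.1 + 0.2j
w = phi(z)
for _ in range(8):
    z, w = f(z), g(w)
    assert abs(phi(z) - w) <= 1e-9 * max(1.0, abs(w))
print("f is affinely conjugate to z^2 + c'")
```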
<p>For the second question, the formulation of cubic maps as $h(z) = z^3 - 3a^2z + b$ is probably just to highlight the critical point $a$, as you have noticed.</p>
|
3,788,298 | <p>Let <span class="math-container">$f(x)$</span> be an integrable function on <span class="math-container">$[0,1]$</span> that obeys the property <span class="math-container">$f(x)=x, x=\frac{n}{2^m}$</span> where <span class="math-container">$n$</span> is an odd positive integer and m is a positive integer. Calculate <span class="math-container">$\int_0^1f(t)dt$</span></p>
<p><strong>My attempt:-</strong></p>
<p>Any dyadic rational $n/2^m$ with $n$ even can be reduced to one with an odd numerator, so $f(x)=x, \forall x\in \{n/2^m:n,m\in \mathbb Z^+\}.$ I know the set $\{n/2^m:n,m\in \mathbb Z^+\}$ is dense in $[0,1]$.</p>
<p>Define <span class="math-container">$g(x)=f(x)-x$</span>, if <span class="math-container">$f$</span> is continuous, I could say that <span class="math-container">$f(x)=x$</span> using the sequential criterion of limit. Hence,<span class="math-container">$\int_0^1f(t)dt=\frac{1}{2}$</span>
How do I proceed for non-continuous function?</p>
| Kavi Rama Murthy | 142,385 | <p>If <span class="math-container">$f$</span> is Riemann integrable it is continuous almost everywhere. This shows that <span class="math-container">$f(x)=x$</span> almost everywhere (since the equation holds on a dense set). Hence the integral is <span class="math-container">$\int_0^{1} xdx =\frac 1 2 $</span>.</p>
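<p>The conclusion can be illustrated with a midpoint Riemann sum that samples only at odd-numerator dyadic rationals $(2k+1)/2^{m+1}$, exactly the points where $f$ is pinned to $f(x)=x$; these sums are identically $1/2$ regardless of what $f$ does elsewhere (sketch of mine):</p>

```python
f_pinned = lambda x: x   # the value of f at odd-numerator dyadics, by hypothesis

for m in range(1, 16):
    n = 2**m
    # midpoints (2k+1)/2^(m+1) of a uniform partition are all odd dyadics
    s = sum(f_pinned((2*k + 1) / (2*n)) for k in range(n)) / n
    assert abs(s - 0.5) < 1e-12
print("dyadic midpoint sums equal 1/2")
```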
|