34,215
<p>How do professional mathematicians learn new things? How do they expand their comfort zone? By talking to colleagues? </p>
algori
2,349
<p>While we are all waiting for sensible replies let me say this: one can't learn a new area of mathematics without asking at least one hundred silly questions (which is why Mathoverflow is such a great website by the way).</p> <p>On a more serious note: learning really new stuff involves rethinking the basics. Or, as some people would say, learning a new language. Almost everyone speaks some language, but learning a new one once one has learned one already can be tricky, and there are few people who can learn it as well as their first language, although learning to just communicate in a foreign language is not that hard. A possible analogy would be that almost any mathematician knows something about physics, but there are few mathematicians who have a really good command of it.</p> <p>On the other hand, most people who are bilingual have learned two languages simultaneously. Moreover, they did it not necessarily because they are exceptionally bright at learning languages, but because e.g. one parent spoke one language and the other parent spoke the other one, or because the parents had to move from one country to another (a not at all uncommon thing among mathematicians).</p> <p>So here is an obvious conclusion: one should try to learn as many conceptually different things (geometry, algebra, analysis, physics) as one can while one is still an undergraduate or a beginning graduate student, or in high school if possible. When one is an undergraduate, one can learn anything, no questions asked (except for when the exam is); it can be harder later, when one is constantly trying to put things into perspective.</p>
Gordon Royle
1,492
<p>Teaching a course in something is the only way that I can really learn something new.</p>
3,493,387
<p>Let <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> be two topological spaces, and let <span class="math-container">$X^*$</span> and <span class="math-container">$Y^*$</span> denote their one-point compactifications (<span class="math-container">$X^* := X \cup \{\infty\},\,\mathcal{T}^* := \{U \subseteq X^*\mid U \cap X \in \mathcal{T} \land (\infty \in U \implies U= (X \setminus K)\cup \{\infty\}$</span> for some <span class="math-container">$ K $</span> closed and compact<span class="math-container">$)\}$</span>).</p> <p>Let <span class="math-container">$f:X\rightarrow Y$</span> be a continuous function. How can <span class="math-container">$f$</span> induce a continuous function between <span class="math-container">$X^*$</span> and <span class="math-container">$Y^*$</span>?</p> <p>My idea was to define <span class="math-container">$f^*(x)=f(x)$</span> for <span class="math-container">$x\neq \infty$</span> and <span class="math-container">$f^*(\infty)=\infty$</span>. Then, let <span class="math-container">$U\subset Y^*$</span> be an open subset.</p> <ul> <li><p>If <span class="math-container">$\infty\notin U$</span> then <span class="math-container">$f^*{}^{-1}(U)=f^{-1}(U)$</span> is open since <span class="math-container">$f$</span> is continuous.</p> </li> <li><p>If <span class="math-container">$\infty \in U$</span>, then <span class="math-container">$U=(Y\setminus K)\cup \{\infty\}$</span> and <span class="math-container">$f^*{}^{-1}((Y\setminus K)\cup \{\infty\})=f^*{}^{-1}(Y\setminus K)\cup f^*{}^{-1}(\{\infty\})=f^{-1}(Y\setminus K)\cup\{\infty\}$</span>.</p> </li> </ul> <p>But <span class="math-container">$f^{-1}(Y\setminus K)=X\setminus f^{-1}(K)$</span> and since <span class="math-container">$K$</span> is closed and <span class="math-container">$f$</span> is continuous, <span class="math-container">$f^{-1}(K)$</span> is also closed. 
Since we know that <span class="math-container">$X$</span> topological space implies <span class="math-container">$X^*$</span> compact, and closed subsets of a compact are compact, we conclude <span class="math-container">$f^{-1}(K)$</span> is compact and therefore <span class="math-container">$f^{-1}((Y\setminus K)\cup\{\infty\})$</span> is open.</p> <p>So apparently no extra conditions are required; however, this question (<a href="https://math.stackexchange.com/questions/88475/continuity-of-the-extension-of-a-function-between-two-locally-compact-hausdorff">Continuity of the extension of a function between two locally compact Hausdorff spaces to their one point compactifications</a>) suggests otherwise: it requires <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> to be locally compact Hausdorff and <span class="math-container">$f$</span> to be proper. Could you suggest where the mistake is?</p> <p>Thank you very much for your attention.</p>
Matsmir
685,805
<p>Robert gave you an example. I'll try to explain some &quot;logic&quot;.</p> <p>You said that <span class="math-container">$f^{-1}(K)$</span> is closed since <span class="math-container">$K \subset Y$</span> is closed and <span class="math-container">$f$</span> is continuous. That is true, but only as a subset of <span class="math-container">$X$</span>. As a subset of <span class="math-container">$X^*$</span> it is closed if and only if <span class="math-container">$f^{-1}(K)$</span> is compact in <span class="math-container">$X$</span> (a closed but non-compact subset of <span class="math-container">$X$</span> is not closed in <span class="math-container">$X^*$</span>). A function such that <span class="math-container">$f^{-1}(K)$</span> is compact whenever <span class="math-container">$K$</span> is compact is called a proper function. This property is often referred to as &quot;continuity at infinity&quot; (I saw such a &quot;definition&quot; in a book on differential topology, though I don't remember which one exactly).</p> <p>In Robert's example you have a function <span class="math-container">$f$</span> such that <span class="math-container">$f^{-1}(K) = X$</span>, where <span class="math-container">$K = \{z: |z| \le \frac{1}{2}\}$</span>. Here <span class="math-container">$K$</span> is compact in <span class="math-container">$Y$</span> but <span class="math-container">$f^{-1}(K)$</span> is not compact in <span class="math-container">$X$</span>. Therefore <span class="math-container">$f$</span> is not proper and <span class="math-container">$f^*$</span> is not continuous.</p>
Stinking Bishop
700,480
<p>The mistake is in the step "Since we know that <span class="math-container">$X$</span> topological space implies <span class="math-container">$X^∗$</span> compact, and closed subsets of a compact are compact, we conclude <span class="math-container">$f^{−1}(K)$</span> is compact..." We know that <span class="math-container">$f^{−1}(K)$</span> is closed <strong>in <span class="math-container">$X$</span></strong>, but for compactness it would need to be closed <strong>in <span class="math-container">$X^*$</span></strong>.</p> <p>Maybe it is easier to see with an example: take the complex map <span class="math-container">$f:\mathbb C\to\mathbb C, f(z)=e^z$</span> and then look at the closed disk <span class="math-container">$K=D[0,1]=\{z\in\mathbb C:|z|\le 1\}$</span> and its complement <span class="math-container">$U=\{z\in\mathbb C:|z|&gt;1\}\cup\{\infty\}$</span>, which is open (as <span class="math-container">$D[0,1]$</span> is compact). The inverse image of <span class="math-container">$U$</span> is <span class="math-container">$f^{-1}(U)=\{z\in\mathbb C: \Re(z)&gt;0\}\cup\{\infty\}$</span>, and this is not open in <span class="math-container">$\mathbb C^*$</span>. Note that, in that case, your set <span class="math-container">$f^{-1}(K)=\{z\in\mathbb C:\Re(z)\le 0\}$</span> is closed but not compact.</p>
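<p>The key feature of this example, that <span class="math-container">$f^{-1}(K)=\{z\in\mathbb C:\Re(z)\le 0\}$</span> is closed but unbounded (hence not compact), is easy to confirm numerically. A minimal Python sketch; the sample points are arbitrary, chosen only for illustration:</p>

```python
import cmath

# f(z) = e^z maps the closed left half-plane into the closed unit disk K,
# since |e^z| = e^(Re z) <= 1 exactly when Re z <= 0.
def f(z):
    return cmath.exp(z)

# Points arbitrarily far out in the left half-plane still land inside K,
# so f^{-1}(K) is unbounded (hence not compact) and f is not proper.
far_left = [complex(-10**k, 5 * k) for k in range(1, 6)]
print(all(abs(f(z)) <= 1 for z in far_left))  # True
```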
240,461
<p>What's the Mathematica command to get the <strong>numerical value</strong> of:</p> <p><span class="math-container">$$PV\int_0^\infty \frac{\tan x}{x}\text{d}x?$$</span></p> <p>where <span class="math-container">$PV$</span> is the principal value.</p>
Michael E2
4,999
<p>The integral is <code>Pi/2</code>, a proof of which may be found on <a href="https://math.stackexchange.com/questions/2141437/the-principal-value-of-an-integral">math.SE</a>. Here's a numerical check, integrating <span class="math-container">$(\tan z)/z$</span> over the parallel paths <span class="math-container">$z = x \pm a i$</span> and subtracting <span class="math-container">$(2 a)/(a^2 + x^2)$</span> to get <code>NIntegrate</code> to converge. The OP's integral is equal to the sum of the two results below.</p> <pre><code>a = 50;
1/2 NIntegrate[
  Tan[x + a I]/(x + a I) + Tan[x - a I]/(x - a I) - (2 a)/(a^2 + x^2)
    // ComplexExpand // Simplify,
  {x, 0, Infinity}, AccuracyGoal -&gt; 16]
1/2 Integrate[(2 a)/(a^2 + x^2), {x, 0, Infinity}, Assumptions -&gt; a &gt; 0]

(* -7.83867*10^-47 *)
(* Pi/2 *)
</code></pre> <p><em>Update</em></p> <p>It was late and I was tired. A shortcut (due to the symmetry of <code>Tan[x]/x</code>) occurred to me this morning:</p> <pre><code>a = 50;
NIntegrate[Re[Tan[x]/x], {x, 0 + a I, Infinity}, WorkingPrecision -&gt; 50]

(* 1.5707963267948966192313216916397514399782555923723 *)
</code></pre>
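<p>The same contour-shift idea can be reproduced outside Mathematica. A hedged Python sketch using <code>scipy.integrate.quad</code> (a convenience choice, not part of the original answer): along the horizontal line <span class="math-container">$\operatorname{Im} z = 50$</span> one has <span class="math-container">$\tan z \approx i$</span> up to roughly <span class="math-container">$e^{-100}$</span>, so the integrand is smooth and the principal-value difficulties at the poles of <span class="math-container">$\tan$</span> disappear.</p>

```python
import cmath
import math
from scipy.integrate import quad

A = 50.0  # contour height, mirroring a = 50 above; tan(t + A*i) ~ i up to ~e^(-2A)

def integrand(t):
    # real part of tan(z)/z along the shifted contour z = t + A*i
    z = complex(t, A)
    return (cmath.tan(z) / z).real

val, _ = quad(integrand, 0.0, math.inf)
print(val)  # ~1.5707963..., i.e. Pi/2
```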
1,319,476
<p>This is a question related to another posted question:</p> <p>The answer to the following question "Find all solutions to: $e^{ix}=i$" is as follows:</p> <p>"Euler's formula: $e^{ix}=\cos(x)+i\sin(x)$,</p> <p>so: $\cos x+i\sin x=0+1\cdot i$.</p> <p>Comparing real and imaginary parts: $\sin(x)=1$ and $\cos(x)=0$,</p> <p>so $x=\frac{(4n+1)π}2$, $n∈W$ (W stands for the set of whole numbers $W=\{0,1,2,3,\dots\}$)."</p> <p>My question: Where does $x=\frac{(4n+1)π}2$, $n∈W$ come from?</p> <p>My steps:</p> <ol> <li><p>$\cos(x) + i\sin(x) = 0 + i(1)$</p></li> <li><p>$\cos(x) = i(1 - \sin(x))$</p></li> <li><p>...</p></li> <li><p>how does $x=\frac{(4n+1)π}2$ follow?</p></li> </ol>
Emanuele Paolini
59,304
<p>The $\sin$ and $\cos$ functions are $2\pi$-periodic, that is: $\sin(x+2n\pi)=\sin x$, $\cos(x+2n\pi)=\cos(x)$. So when you find that $x=\pi/2$ is a solution, then $x_n = \pi/2 + 2n\pi$ is also a solution for every $n\in \mathbb Z$ (where $\mathbb Z$ denotes the integers: $0, 1, -1, 2, -2,\dots$).</p> <p>Notice that $$ \frac \pi 2 + 2n\pi = \frac{4n+1}{2}\pi. $$</p>
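<p>A quick numerical sanity check of the whole solution family, as a Python sketch:</p>

```python
import cmath
import math

# x = (4n+1)*pi/2 should satisfy e^(ix) = i for every integer n
def solution(n):
    return (4 * n + 1) * math.pi / 2

ok = all(abs(cmath.exp(1j * solution(n)) - 1j) < 1e-9 for n in range(-5, 6))
print(ok)  # True
```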
3,524,550
<p>Lines in <span class="math-container">$\mathbb{R}^3$</span> are all congruent to one another, but circles in <span class="math-container">$\mathbb{R}^3$</span> are not all congruent to one another (because two different circles may have different radii). Visually, this is completely obvious. However, I would like a <strong>group-theoretic</strong> explanation for this.</p> <blockquote> <p>I am thinking of <span class="math-container">$\mathbb{R}^3$</span> as the homogeneous space <span class="math-container">$\mathbb{R}^3 = \frac{G}{G_0} = \frac{\text{SE}(3)}{\text{SO}(3)}$</span>, where <span class="math-container">$G = \text{SE}(3)$</span> is the group of (orientation-preserving) rigid motions and <span class="math-container">$G_0 = \text{SO}(3)$</span> is the stabilizer of the origin.</p> <p>A <strong>line in <span class="math-container">$\mathbb{R}^3$</span></strong> is an orbit of a point in <span class="math-container">$\mathbb{R}^3$</span> by a subgroup <span class="math-container">$H \leq G$</span> that is conjugate to the subgroup <span class="math-container">$\{ (x_1, x_2, x_3) \mapsto (x_1 + t, x_2, x_3) \colon t \in \mathbb{R}\}$</span> of translations by the vector <span class="math-container">$(1,0,0)$</span>.</p> <p>A <strong>circle in <span class="math-container">$\mathbb{R}^3$</span></strong> is an orbit of a point in <span class="math-container">$\mathbb{R}^3$</span> by a subgroup <span class="math-container">$K \leq G$</span> that is conjugate to the subgroup <span class="math-container">$\{ (x_1 + ix_2, x_3) \mapsto (e^{i\theta}(x_1 + ix_2), x_3) \colon e^{i\theta} \in \mathbb{S}^1\}$</span> of rotations around the <span class="math-container">$x_3$</span>-axis.</p> <p>Two subsets <span class="math-container">$S_1, S_2$</span> of <span class="math-container">$\mathbb{R}^3$</span> are <strong>congruent</strong> if there exists <span class="math-container">$g \in \text{SE}(3)$</span> such that <span class="math-container">$S_2 = g \cdot 
S_1$</span>.</p> </blockquote> <p>Given these definitions of &quot;line&quot; and &quot;circle&quot; --- as orbits of subgroups --- how could we have known that all lines in <span class="math-container">$\text{SE}(3)/\text{SO}(3)$</span> are congruent, but not all circles in <span class="math-container">$\text{SE}(3)/\text{SO}(3)$</span> have this property?</p> <p>In other words: What are the relevant aspects of the subgroups <span class="math-container">$H$</span>, <span class="math-container">$K$</span>, and <span class="math-container">$G_0$</span> that explain the <span class="math-container">$G$</span>-equivalence of <span class="math-container">$H$</span>-orbits in <span class="math-container">$G/G_0$</span>, as opposed to the non-<span class="math-container">$G$</span>-equivalence of all <span class="math-container">$K$</span>-orbits in <span class="math-container">$G/G_0$</span>?</p>
Eric Wofsey
86,856
<p>Here's the general group-theoretic setup. Let <span class="math-container">$G$</span> be a group and <span class="math-container">$G_0,H\subset G$</span> be subgroups. An orbit of <span class="math-container">$H$</span> in <span class="math-container">$G/G_0$</span> can be considered as a double coset <span class="math-container">$HxG_0\subseteq G$</span>. Let <span class="math-container">$S$</span> be the set of all orbits of conjugates of <span class="math-container">$H$</span> in <span class="math-container">$G/G_0$</span>. Then <span class="math-container">$G$</span> acts on <span class="math-container">$S$</span> by left translation, since <span class="math-container">$g\cdot HxG_0=(gHg^{-1})gxG_0$</span> is a double coset for the conjugate <span class="math-container">$gHg^{-1}$</span>.</p> <p>I don't think there's any nice necessary and sufficient characterization of when <span class="math-container">$G$</span> acts transitively on <span class="math-container">$S$</span>, but there are a couple simple special cases that are enough to answer your question about lines and circles.</p> <p>First, suppose <span class="math-container">$H\subseteq G_0$</span> but some conjugate <span class="math-container">$x^{-1}Hx$</span> of <span class="math-container">$H$</span> is not contained in <span class="math-container">$G_0$</span>. (This is true when <span class="math-container">$H$</span> is your <span class="math-container">$K$</span>.) Then one element of <span class="math-container">$S$</span> is <span class="math-container">$HG_0=G_0$</span> and another is <span class="math-container">$HxG_0$</span>. If <span class="math-container">$G$</span> acted transitively on <span class="math-container">$S$</span> there would be some <span class="math-container">$g\in G$</span> such that <span class="math-container">$gG_0=HxG_0$</span>; that is, <span class="math-container">$HxG_0$</span> would be a left coset of <span class="math-container">$G_0$</span>. 
Since <span class="math-container">$x\in HxG_0$</span>, it would be the left coset of <span class="math-container">$x$</span> so <span class="math-container">$xG_0=HxG_0$</span>. This implies <span class="math-container">$G_0=x^{-1}HxG_0$</span>, but that is not true by assumption since <span class="math-container">$x^{-1}Hx\not\subseteq G_0$</span>. Thus <span class="math-container">$G$</span> cannot act transitively on <span class="math-container">$S$</span>.</p> <p>(Interestingly, in the context of circles, this argument makes crucial use of a degenerate circle of radius <span class="math-container">$0$</span>, which is what the double coset <span class="math-container">$HG_0=G_0$</span> represents. In geometric terms, it is saying that since your group <span class="math-container">$K$</span> fixes one point but does not fix all points, there is a circle with just one point and a circle with more than one point, and they cannot be congruent.)</p> <p>Now suppose that <span class="math-container">$N(H)G_0=G$</span>. (This is true for your line group <span class="math-container">$H$</span>, since every translation normalizes <span class="math-container">$H$</span> and every rigid transformation is a composition of a rotation around the origin and a translation.) Consider any double coset <span class="math-container">$H'xG_0\in S$</span> for some <span class="math-container">$H'$</span> conjugate to <span class="math-container">$H$</span>; we wish to show <span class="math-container">$H'xG_0$</span> is in the orbit of <span class="math-container">$HG_0$</span>, so <span class="math-container">$G$</span> acts transitively on <span class="math-container">$S$</span>. If <span class="math-container">$H'=gHg^{-1}$</span> we can first multiply <span class="math-container">$H'xG_0$</span> by <span class="math-container">$g^{-1}$</span> to assume that <span class="math-container">$H'=H$</span>. 
Now by hypothesis, we can write <span class="math-container">$x=ng$</span> for some <span class="math-container">$n\in N(H)$</span> and <span class="math-container">$g\in G_0$</span>. We then have <span class="math-container">$$HxG_0=HngG_0=HnG_0=nHG_0$$</span> so <span class="math-container">$HxG_0$</span> is indeed in the orbit of <span class="math-container">$HG_0$</span>.</p> <p>(Note that you might try and reverse this argument to prove that <span class="math-container">$N(H)G_0=G$</span> is actually necessary and sufficient for <span class="math-container">$G$</span> to act transitively on <span class="math-container">$S$</span>. Indeed, there exists <span class="math-container">$n\in N(H)$</span> such that <span class="math-container">$HxG_0=nHG_0$</span> iff <span class="math-container">$x\in N(H)G_0$</span>. However, this isn't quite enough to prove necessity, since you could have <span class="math-container">$HxG_0=yHG_0$</span> for some <span class="math-container">$y\in G$</span> that is not in <span class="math-container">$N(H)$</span>, and I don't know any particularly nice way of describing when that happens. Note here that given an element of <span class="math-container">$S$</span>, the conjugate of <span class="math-container">$H$</span> for which it is a double coset is not necessarily unique. See <a href="https://math.stackexchange.com/questions/3259196/normalizer-of-group-action">Normalizer of group action</a> for some related discussion, and in particular the example at the end of Morgan Rodgers's answer which is one where <span class="math-container">$G$</span> acts transitively on <span class="math-container">$S$</span> but <span class="math-container">$N(H)G_0\neq G$</span>.)</p>
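<p>Both special cases can be played out concretely in a tiny finite model. The following Python sketch uses <span class="math-container">$G = S_3$</span> (purely illustrative, chosen for brevity rather than any connection to the Euclidean groups of the question): with <span class="math-container">$H$</span> the rotation subgroup <span class="math-container">$A_3$</span>, which is normal, <span class="math-container">$N(H)G_0 = G$</span> and the action on <span class="math-container">$S$</span> is transitive; with <span class="math-container">$H = G_0$</span>, which is contained in <span class="math-container">$G_0$</span> but has conjugates outside it, the first argument applies and the action is not transitive.</p>

```python
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p[q[i]]; permutations are tuples mapping i -> p[i]
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def conjugate(g, H):
    # g H g^-1
    return frozenset(compose(compose(g, h), inverse(g)) for h in H)

def double_coset(H, x, G0):
    # the double coset H x G0 as a set of group elements
    return frozenset(compose(compose(h, x), g) for h in H for g in G0)

def acts_transitively(G, H, G0):
    # S = all double cosets H'xG0 with H' a conjugate of H
    S = {double_coset(conjugate(g, H), x, G0) for g in G for x in G}
    # left translation g . HxG0 = (g H g^-1) g x G0 permutes S;
    # the action is transitive iff the orbit of any element is all of S
    some = next(iter(S))
    orbit = {frozenset(compose(g, y) for y in some) for g in G}
    return orbit == S

G = set(permutations(range(3)))                 # S_3
G0 = {(0, 1, 2), (1, 0, 2)}                     # <(0 1)>, the "stabilizer"
rotations = {(0, 1, 2), (1, 2, 0), (2, 0, 1)}   # A_3, normal, so N(H)G0 = G

print(acts_transitively(G, rotations, G0))  # True  ("lines")
print(acts_transitively(G, G0, G0))         # False ("circles": H in G0)
```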
23,020
<p>I am in the process of learning about mapping class groups. At this point it seems like most of what I've read involves very low-dimensional (surfaces and 3-manifolds) applications.</p> <p>I was wondering if they are studied (or arise naturally) in higher-dimensional settings?</p> <p>In particular, any references to their uses in homotopy theory would be appreciated.</p>
Ryan Budney
642
<p>In high dimensions there are several variants that are all distinct (whereas for surfaces they all agree). There are mapping class groups in the &quot;homotopy category&quot;, meaning the homotopy classes of homotopy equivalences of a topological space, with composition giving the group structure. This is a &quot;core&quot; object of study of classical algebraic topology. In the topological/PL/smooth categories there are isotopy classes of homeomorphisms/PL automorphisms/diffeomorphisms of a manifold. The smooth category gets quite a bit of attention; for example, the smooth mapping class group of <span class="math-container">$S^n$</span> (if you restrict to orientation-preserving diffeomorphisms) is the group of homotopy <span class="math-container">$(n+1)$</span>-spheres, provided <span class="math-container">$n \geq 5$</span>. There has been some work on stable high-dimensional mapping class groups by people like Giansiracusa (Swansea).</p> <p>I have to head out but I can add more later.</p> <p>The Giansiracusa reference is this: <a href="http://www.arxiv.org/abs/math.gt/0510599" rel="nofollow noreferrer">http://www.arxiv.org/abs/math.gt/0510599</a></p> <p>Modulo some qualifiers, the statement is that the stable mapping class group of a 4-manifold is the group of automorphisms of its homology that preserve the intersection form.</p> <p>Mapping class groups of products of circles <span class="math-container">$(S^1)^n$</span> in the topological, PL and smooth categories were computed by Hatcher in his &quot;Higher simple homotopy theory&quot; paper.</p> <p>Is there anything in particular you're interested in?</p> <p>edit: In a little self-plug, David Gabai and I recently proved a result that contrasts heavily with the stability results referenced in the Giansiracusa paper. 
Specifically, we show that the mapping class groups (smooth diffeomorphisms) of <span class="math-container">$S^1 \times D^3$</span> and <span class="math-container">$S^1 \times S^3$</span> are <a href="https://arxiv.org/abs/1912.09029" rel="nofollow noreferrer">not finitely generated.</a> <a href="https://arxiv.org/abs/1912.09029" rel="nofollow noreferrer">Tadayuki Watanabe</a> has also been able to prove this result for <span class="math-container">$S^1 \times D^3$</span> using fairly different techniques.</p>
4,135,472
<p><strong>What is the clearest and simplest way of proving that <span class="math-container">$[x]+[x+1/2]=[2x]$</span>? (Where <span class="math-container">$[x]$</span> is the greatest integer function)</strong></p> <p>According to Bartleby, if <span class="math-container">$x=m$</span> for <span class="math-container">$m\in \mathbb{Z}$</span>, then <span class="math-container">$[x]=m$</span> and <span class="math-container">$[x+1/2]=m$</span>. Hence, <span class="math-container">$[x]+[x+1/2]=2m=[2x]$</span>. Hence when <span class="math-container">$m\in\mathbb{Z}$</span>, <span class="math-container">$[x]+[x+1/2]=[2x]$</span>.</p> <p>When <span class="math-container">$m&lt;x&lt;m+1$</span>, <span class="math-container">$[x]=[m+\{x\}]=m+[\{x\}]$</span> and <span class="math-container">$[x+1/2]=[m+\{x\}+1/2]=m+[\{x\}+1/2]$</span>. Hence <span class="math-container">$[x]+[x+1/2]=2m+[\{x\}]+[\{x\}+1/2]$</span>. Since <span class="math-container">$0 \le \{x\} &lt; 1$</span> and <span class="math-container">$1/2 \le \{x\}+1/2 &lt; 3/2$</span>; we have <span class="math-container">$0 \le [\{x\}] &lt; 1$</span> and <span class="math-container">$0\le[\{x\}+1/2]\le 1$</span>. Hence <span class="math-container">$2m \le 2m+[\{x\}]+[\{x\}+1/2]\le 2m+1$</span>, which implies <span class="math-container">$[x]+[x+1/2]=[2x]$</span>.</p> <p>Hence <span class="math-container">$[x]+[x+1/2]=[2x]$</span> in all cases.</p> <p>Is my solution clear or is there a better one?</p>
N. S.
9,176
<p>My favourite, and the cleanest way in my opinion, is the following classical solution:</p> <p>Let <span class="math-container">$f(x)= \lfloor x\rfloor+\lfloor x+\frac{1}{2}\rfloor-\lfloor 2x \rfloor$</span>.</p> <p>Then, it is trivial to see that</p> <ul> <li><span class="math-container">$f(x+\frac{1}{2})=f(x)$</span>,</li> <li><span class="math-container">$f(x)=0$</span> for all <span class="math-container">$x \in [0, \frac{1}{2})$</span>.</li> </ul> <p>From here it follows immediately that <span class="math-container">$f \equiv 0$</span>.</p> <p><strong>P.S.</strong> By using <span class="math-container">$f(x)= \lfloor x\rfloor+\lfloor x+\frac{1}{n}\rfloor+ \ldots +\lfloor x+\frac{n-1}{n}\rfloor-\lfloor nx \rfloor$</span> you can deduce in the same way that <span class="math-container">$$\lfloor x\rfloor+\lfloor x+\frac{1}{n}\rfloor+ \ldots +\lfloor x+\frac{n-1}{n}\rfloor=\lfloor nx \rfloor$$</span></p>
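<p>The identity, and the generalization in the P.S., can be spot-checked exactly in Python; <code>fractions.Fraction</code> keeps the arithmetic exact, avoiding floating-point trouble near integers (a sketch with randomly chosen rational test points):</p>

```python
import math
import random
from fractions import Fraction

def hermite_gap(x, n):
    # floor(x) + floor(x + 1/n) + ... + floor(x + (n-1)/n) - floor(n*x)
    return sum(math.floor(x + Fraction(k, n)) for k in range(n)) - math.floor(n * x)

random.seed(0)
samples = [Fraction(random.randint(-10**6, 10**6), random.randint(1, 10**3))
           for _ in range(500)]
print(all(hermite_gap(x, n) == 0 for x in samples for n in range(1, 8)))  # True
```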
1,863,943
<p>I have the following problem:</p> <blockquote> <p>Let $M$ be a connected closed $7$-manifold such that $H_1(M,\mathbb{Z}) = 0$, $H_2(M,\mathbb{Z}) = \mathbb{Z}$, $H_3(M,\mathbb{Z}) = \mathbb{Z}/2$. Compute $H_i(M,\mathbb{Z})$ and $H^i(M,\mathbb{Z})$ for all $i$.</p> </blockquote> <p>I know that if $M$ is orientable, using Poincaré duality, the fact that $\chi(M)=0$ and the exact sequence for $H^i(M,\mathbb{Z})$ I can get the result.</p> <p>But, I don't know how to prove that $M$ is orientable. I know that is the case if $M$ does not have $2$-torsion on $\pi_1(M)$, but I don't see why this $2$-torsion should descend to $H_1(M)$.</p>
Eric Wofsey
86,856
<p>For any connected manifold $M$, there is a homomorphism $\pi_1(M)\to\mathbb{Z}/2$ which sends a loop to $0$ if going around the loop preserves orientation and sends the loop to $1$ if going around the loop reverses orientation. This homomorphism is trivial iff $M$ is orientable. Since $\mathbb{Z}/2$ is abelian, this homomorphism factors through the Hurewicz map $\pi_1(M)\to H_1(M)$. In particular, this means that if $H_1(M)=0$, the homomorphism is trivial, so $M$ is orientable.</p> <p>(By the way, the statement that $\chi(M)=0$ does not require $M$ to be orientable; you can prove it using mod $2$ Poincaré duality, for instance.)</p>
Najib Idrissi
10,014
<p>A quick proof using <a href="https://en.wikipedia.org/wiki/Stiefel%E2%80%93Whitney_class" rel="noreferrer">Stiefel–Whitney classes</a>: a manifold $M$ is orientable iff the first SW class $w_1(M) \in H^1(M;\mathbb{Z}/2\mathbb{Z})$ is zero. But by the universal coefficient theorem, $$H^1(M;\mathbb{Z}/2\mathbb{Z}) = \operatorname{Hom}_\mathbb{Z}(H_1(M;\mathbb{Z}), \mathbb{Z}/2\mathbb{Z}) = 0.$$</p> <p>Of course under the hood I don't think there's anything more than what you can find in Eric Wofsey's answer, but this argument is quite simple and shows the power of characteristic classes.</p>
493,104
<p>I'm finding the area of an ellipse given by $\frac{x^2}{a^2}+\frac{y^2}{b^2} = 1$. I know the answer should be $\pi ab$ (e.g. by Green's theorem). Since we can parameterize the ellipse as $\vec{r}(\theta) = (a\cos{\theta}, b\sin{\theta})$, we can write the polar equation of the ellipse as $r = \sqrt{a^2 \cos^2{\theta}+ b^2\sin^2{\theta}}$. And we can find the area enclosed by a curve $r(\theta)$ by integrating </p> <p>$$\int_{\theta_1}^{\theta_2} \frac12 r^2 \ \mathrm d\theta.$$</p> <p>So we should be able to find the area of the ellipse by </p> <p>$$\frac12 \int_0^{2\pi} a^2 \cos^2{\theta} + b^2 \sin^2{\theta} \ \mathrm d\theta$$</p> <p>$$= \frac{a^2}{2} \int_0^{2\pi} \cos^2{\theta}\ \mathrm d\theta + \frac{b^2}{2} \int_0^{2\pi} \sin^2{\theta} \ \mathrm d\theta$$</p> <p>$$= \frac{a^2}{4} \int_0^{2\pi} 1 + \cos{2\theta}\ \mathrm d\theta + \frac{b^2}{4} \int_0^{2\pi} 1- \cos{2\theta}\ \mathrm d\theta$$</p> <p>$$= \frac{a^2 + b^2}{4} (2\pi) + \frac{a^2-b^2}{4} \underbrace{\int_0^{2\pi} \cos{2\theta} \ \mathrm d\theta}_{\text{This is $0$}}$$</p> <p>$$=\pi\frac{a^2+b^2}{2}.$$</p> <p>First of all, this is not the area of an ellipse. Second of all, when I plug in $a=1$, $b=2$, this is not even the right value of the integral, as <a href="http://www.wolframalpha.com/input/?i=1%2F2+*+integral+from+0+to+2*pi+of+%28cos%28x%29%29%5E2+%2B+2*+%28sin%28x%29%29%5E2" rel="nofollow">Wolfram Alpha tells me</a>.</p> <p>What am I doing wrong?</p>
Bennett Gardiner
78,722
<p>Here you go - this person even made your mistake, then someone else corrected it.</p> <p><a href="https://web.archive.org/web/20180312075252/http://mathforum.org/library/drmath/view/53635.html" rel="nofollow noreferrer">Link</a></p>
Community
-1
<p>There are already a lot of good answers here, so I'm adding this one primarily to dazzle people w/ my Mathematica diagram-creating skills. </p> <p>As noted previously, </p> <p>$x(t)=a \cos (t)$ </p> <p>$y(t)=b \sin (t)$ </p> <p><strong>does</strong> parametrize an ellipse, but t is <strong>not</strong> the central angle. What is the relation between t and the central angle?:</p> <p><img src="https://i.stack.imgur.com/TanBJ.jpg" alt="cool, huh?"></p> <p>Since y is b*Sin[t] and x is a*Cos[t], we have: </p> <p>$\tan (\theta )=\frac{b \sin (t)}{a \cos (t)}$ </p> <p>or </p> <p>$\tan (\theta )=\frac{b \tan (t)}{a}$ </p> <p>Solving for t, we have: </p> <p>$t(\theta )=\tan ^{-1}\left(\frac{a \tan (\theta )}{b}\right)$ </p> <p>We now reparametrize using theta: </p> <p>$x(\theta )=a \cos (t(\theta ))$ </p> <p>$y(\theta )=b \sin (t(\theta ))$ </p> <p>which ultimately simplifies to: </p> <p>$x(\theta)=\frac{a}{\sqrt{\frac{a^2 \tan ^2(\theta )}{b^2}+1}}$ </p> <p>$y(\theta)=\frac{a \tan (\theta )}{\sqrt{\frac{a^2 \tan ^2(\theta )}{b^2}+1}}$ </p> <p>Note that, under the new parametrization, $y(\theta)/x(\theta) = \tan(\theta)$ as desired. 
</p> <p>To compute area, we need $r^2$ which is $x^2+y^2$, or: </p> <p>$r(\theta )^2 = (\frac{a}{\sqrt{\frac{a^2 \tan ^2(\theta )}{b^2}+1}})^2+ (\frac{a \tan (\theta )}{\sqrt{\frac{a^2 \tan ^2(\theta )}{b^2}+1}})^2$ </p> <p>(note that we could take the square root to get r, but we don't really need it) </p> <p>The above ultimately simplifies to: </p> <p>$r(\theta)^2 = \frac{1}{\frac{\cos ^2(\theta )}{a^2}+\frac{\sin ^2(\theta )}{b^2}}$ </p> <p>Now, we can integrate $r^2/2$ to find the area: </p> <p>$A(\theta) = (\int_0^\theta \frac{1}{\frac{\cos ^2(x )}{a^2}+\frac{\sin ^2(x )}{b^2}} \, dx)/2$ </p> <p>which yields: </p> <p>$A(\theta) = \frac{1}{2} a b \tan ^{-1}\left(\frac{a \tan (\theta )}{b}\right)$ </p> <p>good for $0\leq \theta &lt;\frac{\pi }{2}$ </p> <p>Interestingly, it doesn't work for $\theta =\frac{\pi }{2}$ so we can't test the obvious case without using a limit: </p> <p>$\lim_{\theta \to \frac{\pi }{2}} \, \frac{1}{2} a b \tan ^{-1}\left(\frac{a \tan (\theta )}{b}\right)$ </p> <p>which gives us $a*b*Pi/4$ as expected. </p>
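<p>As a numeric cross-check (a Python sketch, not part of the original derivation), one can integrate $r(\theta)^2/2$ directly and compare with the closed form $A(\theta)=\frac{1}{2} a b \tan ^{-1}\left(\frac{a \tan (\theta )}{b}\right)$:</p>

```python
import math

def r_squared(theta, a, b):
    # r(theta)^2 = 1 / (cos^2/a^2 + sin^2/b^2), derived above
    return 1.0 / (math.cos(theta)**2 / a**2 + math.sin(theta)**2 / b**2)

def area_numeric(theta, a, b, n=100000):
    # integrate r^2/2 from 0 to theta by the midpoint rule
    h = theta / n
    return sum(0.5 * r_squared((i + 0.5) * h, a, b) for i in range(n)) * h

def area_closed(theta, a, b):
    # A(theta) = (1/2) a b arctan(a tan(theta) / b), valid for 0 <= theta < pi/2
    return 0.5 * a * b * math.atan(a * math.tan(theta) / b)

a, b, theta = 3.0, 2.0, 1.2
```

<p>The two agree, and as $\theta\to\pi/2$ the closed form approaches $ab\pi/4$, the quarter-ellipse area, as computed in the limit above.</p>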
2,868,595
<p>A Vitali set is a subset $V$ of $[0,1]$ such that for every $r\in \mathbb R$ there exists one and only one $v\in V$ for which $v-r \in \mathbb Q$. Equivalently, $V$ contains a single representative of every element of $\mathbb R / \mathbb Q$.</p> <p>The proof I read is in this short article on Wikipedia: <a href="https://en.wikipedia.org/wiki/Vitali_set" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Vitali_set</a></p> <p>Under "proof", the second to last inequality $1 \leq \sum \lambda (V_k) \leq 3$ is claimed to result from the previous inequality $[0,1] \subset \bigcup V_k \subset [-1,2]$ simply using sigma-additivity. There must be some missing argument to claim that the sum of the measures, although greater than the measure of the union, is still less than the measure of $[-1,2]$.</p> <p>What is the missing argument ?</p>
Dennis
513,093
<p>We assume for the sake of contradiction $V$ is measurable, so $\lambda(V)$ is a real number. By construction, each $V_k$ is disjoint, and is merely a shifted copy of $V$, so each of these sets has the same measure. The sequence $(V_k)_{k=1}^\infty$ is countable.</p> <p>Sigma additivity (the assumption in the definition of Lebesgue measure) guarantees that the measure of a countable union of disjoint sets will be the sum of the measures of these sets. Applying this principle along with the equal measures of all $V_k$, we find:</p> <p>$$ \lambda\left(\bigcup_k V_k\right)=\sum_k\lambda\left(V_k\right)=\sum_k\lambda(V). $$</p> <p>What can the infinite sum of a constant term be? If the value of the constant term $\lambda(V)$ is $0$, then the sum is zero. On the other hand, if $\lambda(V)&gt;0$, then the sum is unbounded. It follows that $\lambda\left(\bigcup_k V_k\right)$ is either $0$ or $\infty$.</p> <p>However, by construction, this union is a superset of the interval $[0,1]$, which has measure $1$, and a subset of the interval $[-1, 2]$, which has a measure $3$. The measure must therefore fall between $1$ and $3$.</p> <p>We cannot have a real number be simultaneously in $\{0, \infty\}$ and $[1,3]$, so we have our contradiction.</p>
3,154,032
<p>Suppose we have a 4 dimension positive signature clifford algebra. In <a href="https://math.stackexchange.com/questions/443555/calculating-the-inverse-of-a-multivector">Calculating the inverse of a multivector</a> and <a href="https://math.stackexchange.com/questions/556247/inverse-of-a-general-nonfactorizable-multivector">Inverse of a general nonfactorizable multivector</a>, the inverse of a multivector is presented as a solution when vectors/bivectors are present </p> <p><span class="math-container">$B^{-1} = \frac{B^\dagger}{B B^\dagger}$</span></p> <p>but the above is not true for any multivector. For example, how to know if </p> <p><span class="math-container">$(1+e_{1234})^{-1}$</span> </p> <p>exists and how to compute it?</p>
amnesiac
419,567
<p>Naively speaking, the existence of inverses will depend on the signature <span class="math-container">$(p,q)$</span> of the quadratic space <span class="math-container">$\mathbb{R}^{p,q}=(\mathbb{R}^{p+q},g)$</span>, in which for an orthonormal basis <span class="math-container">$\{e_i\}_{i=1}^{n=p+q}$</span> and <span class="math-container">$v=\sum v^ie_i$</span> we have <span class="math-container">$$g(v,v)=(v^1)^2+(v^2)^2+\cdots+(v^p)^2-(v^{p+1})^2-\cdots-(v^{p+q})^2.$$</span></p> <p>For your example, notice that if <span class="math-container">$(e_{1234})^2=-1$</span>, then <span class="math-container">$$(1+e_{1234})\frac{1}{2}(1-e_{1234})=1,$$</span> which means that <span class="math-container">$(1+e_{1234})^{-1}=\frac{1}{2}(1-e_{1234})$</span>. </p> <p>Now, if <span class="math-container">$(e_{1234})^2=1$</span>, then there is no inverse for <span class="math-container">$(1+e_{1234})$</span>, which is due to the fact that <span class="math-container">$x\overline{x}=0$</span>. More specifically, one can <a href="https://core.ac.uk/download/pdf/74374477.pdf" rel="noreferrer">derive</a> conditions for which there are inverses for the cases where <span class="math-container">$p+q=n\leq 5$</span>. The discovery of new faster methods for higher dimensions, which do not depend on the signature is a problem still under development as of today. As an example, we could cite <a href="https://arxiv.org/pdf/1712.05204.pdf" rel="noreferrer">this</a>. </p>
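<p>Here is a small Python sketch of that dichotomy (my own toy model, not a full Clifford-algebra implementation): represent elements $s + t\,e$ of the subalgebra generated by $e = e_{1234}$ as pairs $(s, t)$, with $e^2 = \varepsilon = \pm 1$:</p>

```python
def mul(x, y, eps):
    """Multiply x = (s1, t1) and y = (s2, t2) representing s + t*e,
    where e stands for the pseudoscalar e_1234 and e^2 = eps (+1 or -1):
    (s1 + t1 e)(s2 + t2 e) = (s1 s2 + eps t1 t2) + (s1 t2 + t1 s2) e."""
    s1, t1 = x
    s2, t2 = y
    return (s1 * s2 + eps * t1 * t2, s1 * t2 + t1 * s2)

one_plus_e = (1.0, 1.0)

# Case e^2 = -1: (1+e)^{-1} = (1-e)/2, as in the answer above.
inv_candidate = (0.5, -0.5)
prod_neg = mul(one_plus_e, inv_candidate, eps=-1)   # should be (1, 0), i.e. 1

# Case e^2 = +1: (1+e)(1-e) = 1 - e^2 = 0, so 1+e is a zero divisor
# and cannot have an inverse.
prod_pos = mul(one_plus_e, (1.0, -1.0), eps=+1)     # should be (0, 0)
```

<p>With $\varepsilon=-1$ the product $(1+e)\cdot\frac{1}{2}(1-e)$ is $1$; with $\varepsilon=+1$, $(1+e)(1-e)=0$ exhibits $1+e$ as a zero divisor.</p>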
184,719
<p>Let $\Sigma_g$ be a Riemann surface of genus $g\geq 2$ and $G=\pi_1(\Sigma_g)$. Let $\pi\colon \mathbb{H}\to \Sigma_g$ be the universal covering map. What kind of surface is $\mathbb{H}/[G,G]$? </p> <p>Moreover, what is $[G,G]$; e.g. if $g=2$?</p>
Holonomia
43,122
<p>Consider the Abel-Jacobi map $\mu : \Sigma_g \to J(\Sigma_g )=\mathbb{C}^g/\Lambda$. Then take the lift $\widetilde{\mu}: \mathbb{H} \to \mathbb{C}^g$ from the universal covering $\mathbb{H}$ of $\Sigma_g$. It seems to me that the image $X := \widetilde{\mu}(\mathbb{H})$ is the surface $\mathbb{H}/[G,G]$.</p>
184,719
<p>Let $\Sigma_g$ be a Riemann surface of genus $g\geq 2$ and $G=\pi_1(\Sigma_g)$. Let $\pi\colon \mathbb{H}\to \Sigma_g$ be the universal covering map. What kind of surface is $\mathbb{H}/[G,G]$? </p> <p>Moreover, what is $[G,G]$; e.g. if $g=2$?</p>
Sam Nead
1,650
<p>$\newcommand{\ZZ}{\mathbb{Z}}\newcommand{\RR}{\mathbb{R}}$Let $S = \Sigma_2$ be the genus two surface. In this case, $\ZZ^4$ is the deck group of the desired covering. Consider $\ZZ^4$ inside of $\RR^4$ and add to these points the usual edges labelled $a, b, c, d$ parallel to the four coordinate axes. This gives a Cayley graph for $\ZZ^4$. </p> <p>Next, starting at every vertex of the graph we attach a two-cell via the attaching map $abcdABCD$ (capital letters denote inverses). This is possible because the boundary word describes a closed loop in the graph. Let $S'$ be the resulting two-complex. Every edge of $S'$ meets a pair of two-cells while every vertex meets eight two-cells. The eight corners give the vertex a disk neighborhood in $S'$.</p> <p>Thus $S'$ is a surface. Taking the quotient by the action of $\ZZ^4$ gives the original surface $S$. By the Galois correspondence, $S'$ is the desired covering space. Note that $S'$ is quasi-isometric to $\ZZ^4$ so it is one-ended. The loops $abAABa$ and $cdCCDc$, based at the origin, meet in exactly one point. Thus $S'$ has genus, and so has infinite genus. </p> <p>This construction works in any genus. When $g = 1$ the construction produces the universal cover. </p>
2,197,790
<h3>Question</h3> <blockquote> <p>A sequence $\{a_n\}$ of real numbers is said to be a Cauchy sequence if for each $\epsilon &gt; 0$ there exists a number $N &gt; 0$ such that $m, n &gt; N$ implies that $|a_n − a_m| &lt;\epsilon$.</p> <p>Prove that every convergent sequence is a Cauchy sequence</p> </blockquote> <hr> <h3>Attempt</h3> <p>This is my first time hearing what a Cauchy sequence is. I have no idea how to even start this. I googled Cauchy sequence and I think it's when $a_n$ converges to $a_{n+1}$? </p> <p>Attempt:</p> <p>WTS: $\exists a_m \in \mathbb R, \forall \epsilon &gt; 0, \exists N &gt; 0$, such that for all $n \in \mathbb N$, if $n &gt; N$, then $|a_n - a_m| &lt; \epsilon$</p> <p>Let $\epsilon &gt; 0$ be arbitrary</p> <p>Choose N such that for $n &gt; N$ we have $|a_n - a_m| &lt; \epsilon$</p> <p>Suppose $n &gt; N$, then </p> <p>??</p> <p>Could someone point me to the right direction? Thx.</p>
Bernard W
278,779
<p>The idea behind the standard proof that every convergent sequence is Cauchy is the triangle inequality.</p> <p>If a sequence $a_n$ converges to a limit $L$ then for every $\epsilon&gt;0$ there is some cutoff point $N$ such that every term past $N$ is within $\epsilon$ of $L$.</p> <p>This means that any two points in the sequence $a_n,a_m$ with $n,m&gt;N$ are within $\epsilon$ of $L$ so by the triangle inequality they must be within $2\epsilon$ of each other.</p>
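<p>A small numerical illustration (a Python sketch of my own) of the $2\epsilon$ argument with $a_n = 1/n$ and limit $L = 0$:</p>

```python
# Given eps, choose N so that every term past N is within eps of L = 0;
# then by the triangle inequality any two such terms are within 2*eps
# of each other.
eps = 1e-3
L = 0.0
N = int(1 / eps) + 1          # |1/n - 0| < eps for all n > N

worst = 0.0
for n in range(N + 1, N + 200):
    for m in range(N + 1, N + 200):
        a_n, a_m = 1.0 / n, 1.0 / m
        # triangle inequality: |a_n - a_m| <= |a_n - L| + |a_m - L|
        assert abs(a_n - a_m) <= abs(a_n - L) + abs(a_m - L) + 1e-15
        worst = max(worst, abs(a_n - a_m))
```

<p>Every pair of terms past the cutoff is indeed within $2\epsilon$ of each other.</p>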
2,479,918
<p>Every vector space $V$ could be embedded into $V^{\ast}$ (see <a href="https://en.wikipedia.org/wiki/Dual_basis" rel="noreferrer">here</a>) after choosing a basis, for a given vector $v \in V$ denote this embedding by $v^{\ast}\in V^{\ast}$. Now for given vector spaces $V_1, \ldots, V_k$ over some field $F$, let $V = \{ \varphi : V_1 \times \ldots \times V_k \to F \mbox{ multilinear } \}$. Why not define the tensor product of $V_1, \ldots, V_k$ simply as $T = \{ \varphi^{\ast} \mid \varphi\in V\}$. Then the universal property is obviously fulfilled, for if we define $\pi : V_1 \times \ldots \times V_k \to T$ by $\pi(v_1, \ldots, v_k) = \Phi \in V^{\ast}$ with $$ \Phi(\varphi) = \varphi(v_1, \ldots, v_k). $$ Then if we have some multilinear $\varphi : V_1 \times \ldots \times V_k \to F$ define the linear map $h_{\varphi} : T \to F$ by $$ h_{\varphi}(\Phi) = \Phi(\varphi) $$ and we have $$ h_{\varphi}(\pi(v_1, \ldots, v_k)) = \varphi(v_1, \ldots, v_k) $$ i.e. it factors through $T$ by $\pi$ and $h_{\varphi}$. Then everything works out quite easily, no nasty "quotient constructions", it even appears too simple for me...</p> <p>I have nowhere seen this definition? So why not define it that way? Have I overlooked something? Note that we do not rely on reflexivity here, as $T$ does not has to be all of $V^{\ast}$, but just those elements that arise from elements of $V$ (the image of the embedding). Maybe the universal property breaks down because the linear map is not unique, but I do not see other choices for it?</p>
Arnaud D.
245,577
<p>You claim that you only need $T$ to be a subspace of $V^*$; but in your construction, you define $\pi$ as a multilinear map $V_1\times \dots \times V_k\to V^*$, and it is not obvious that $\pi$ can be restricted to have domain $T$. In other words, given $v_i\in V_i$ for $i=1,\dots,k$, it is not obvious $\pi(v_1,\dots,v_k)=\Phi$ can be written as $\mu^*$ for some $\mu\in V$.</p> <p>This is particularly difficult because the embedding $V\to V^*$ depends on the choice of a basis for $V$.</p> <p>Note also that your proof of the universal property is incomplete : you only consider it for maps to $F$, but the universal property should hold for multilinear map to any vector space. The extension can be done for vector spaces if you choose a basis, but that would fail for modules over $\Bbb Z$, for example; and moreover, you would have to prove that it is independent on the chosen basis (otherwise you wouldn't have uniqueness).</p>
4,520,485
<p>Suppose there are 78 heroes. Only one of them is considered to be 'Tier 1'. At the beginning of some game you are given a choice between either 2 heroes or 4 heroes. The question is: how advantageous is it to choose out of 4 heroes compared to choosing out of 2, if by advantageous we mean to have a higher probability of getting a 'Tier 1' hero? My logic is to first compute the total number of 2-hero combinations and 4-hero combinations: <span class="math-container">$$C^2_{78}=\frac{78!}{(78-2)!2!}=3003; C^4_{78}=\frac{78!}{(78-4)!4!}=1426425$$</span> The probability of getting a 'Tier 1' hero if we choose between 2 is <span class="math-container">$\frac{77}{3003}\approx0.0256$</span>, while the probability of getting a 'Tier 1' hero if we choose between 4 is <span class="math-container">$\frac{77\cdot76\cdot75}{1426425}\approx0.3077$</span>. So, the advantage seems to be <span class="math-container">$\frac{77\cdot76\cdot75}{1426425}\cdot\frac{3003}{77}=12$</span>. So, you are 12 times more likely to get a 'Tier 1' hero if you choose out of 4 heroes than out of 2. Is this logic correct? The advantage seems to be too big.</p>
José Carlos Santos
446,262
<p>The expression <span class="math-container">$(k+1)(k+2)\ldots n$</span> is the product of the numbers <span class="math-container">$k+1$</span>, <span class="math-container">$k+2$</span>, …, <span class="math-container">$n$</span>. When <span class="math-container">$n=3$</span> and <span class="math-container">$k=2$</span>, there is only one such number: <span class="math-container">$2+1(=3)$</span>. So, the product is <span class="math-container">$3$</span>.</p> <p>If <span class="math-container">$n=4$</span> and <span class="math-container">$k=2$</span>, there are two such numbers: <span class="math-container">$2+1(=3)$</span> and <span class="math-container">$2+2(=4)$</span>. And, indeed,<span class="math-container">$$\frac{4!}{2!}=12=3\times4.$$</span></p>
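<p>A one-line check in Python (the helper name <code>rising_product</code> is just for illustration) that $n!/k!$ equals the product $(k+1)(k+2)\ldots n$:</p>

```python
from math import factorial

def rising_product(k, n):
    # the product (k+1)(k+2)...n
    result = 1
    for i in range(k + 1, n + 1):
        result *= i
    return result

# n = 3, k = 2: a single factor, 3
check1 = (factorial(3) // factorial(2), rising_product(2, 3))
# n = 4, k = 2: two factors, 3 * 4 = 12
check2 = (factorial(4) // factorial(2), rising_product(2, 4))
```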
670,522
<p>In my very young mathematical career, I have worked a lot with modular forms. Recently, I worked as a teaching assistant in a course about geometry. At the end of the course, we dealt with hyperbolic geometry. It seems as if there is some relation between hyperbolic geometry and modular forms, for example, why is it precisely the set $\mathbb{H}$ (from which modular forms map into $\mathbb{C}$) that is also a model for a "weird" geometry in which the sum over the angles in a triangle is not $\pi$ or in which some axiom about parallel lines does not hold? It seems at first sight, as if these two mathematical areas are quite distant from each other.</p> <p>If there is such a relation, can someone solve the following equation:</p> <p>$$ \frac{\text{modular forms}}{\text{hyperbolic geometry}} = \frac{???}{\text{euclidean geometry}}$$</p> <p>Of course, one can reinterpret modular forms as certain sections of line bundles over ... blah blah blah, but this is not the way you would ever describe what a modular form is to someone who has never heard about them.</p> <p>cheers,</p> <p>FW</p>
DIEGO R.
297,483
<p>I think that introducing the projection map is more intuitive. Given a vector space <span class="math-container">$V$</span>, a linear map <span class="math-container">$P: V \to V$</span> is said to be a linear projection if <span class="math-container">$P^{2} = P$</span>. As a consequence of this definition, given a subspace <span class="math-container">$U$</span> of <span class="math-container">$V$</span>, there exists a linear projection <span class="math-container">$P:V \to V$</span> such that <span class="math-container">$P(V) = U$</span>. To show this is not difficult, just consider <span class="math-container">$U'$</span> a linear complement of <span class="math-container">$U$</span>, so <span class="math-container">$V = U\oplus U'$</span>, and given <span class="math-container">$v \in V$</span> we can express in a unique way <span class="math-container">$v = x + y$</span> where <span class="math-container">$x\in U$</span>, <span class="math-container">$y\in U'$</span>; finally define <span class="math-container">$P(v) = x$</span> and show that <span class="math-container">$P$</span> is linear and <span class="math-container">$P^{2} = P$</span>.</p> <p><a href="https://i.stack.imgur.com/Vjjiw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Vjjiw.png" alt="enter image description here" /></a></p> <p>To answer the question, just define <span class="math-container">$T:V \to W$</span> as <span class="math-container">$T = S\circ P$</span>. Note that <span class="math-container">$T(u) = S(u)$</span> for all <span class="math-container">$u \in U$</span>.</p>
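<p>A concrete Python sketch of this construction (the specific spaces, complement, and map $S$ below are my own choices for illustration): take $V=\mathbb{R}^2$, $U=\operatorname{span}\{(1,1)\}$, and the complement $U'=\operatorname{span}\{(0,1)\}$, which is deliberately <em>not</em> orthogonal to $U$:</p>

```python
# Writing v = (v1, v2) = v1*(1,1) + (v2 - v1)*(0,1), the U-component is
# x = (v1, v1), so the projection determined by this splitting is:
def P(v):
    v1, v2 = v
    return (v1, v1)

# S is some linear map defined only on U, say S((t, t)) = 3*t, with W = R.
def S(u):
    t1, t2 = u
    assert t1 == t2, "S is only defined on U"
    return 3 * t1

# Extend S to all of V by T = S o P; T agrees with S on U.
def T(v):
    return S(P(v))

v = (2.0, 5.0)      # a generic vector
u = (4.0, 4.0)      # an element of U
```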
3,440,093
<p>The problem is minimize over all <span class="math-container">$\theta \in \mathbb{R}^n$</span></p> <p><span class="math-container">$$\frac{1}{2} ||Y - \theta||^2$$</span> subject to <span class="math-container">$A \theta = 0$</span> where <span class="math-container">$A$</span> is <span class="math-container">$m \times n$</span>. </p> <p>Let <span class="math-container">$\theta^*$</span> be the solution. My notes seem to suggest <span class="math-container">$$\theta^* = (I - A'(AA')^{-1} A)Y = (I - P)Y$$</span></p> <p>where <span class="math-container">$P = A'(AA')^{-1} A $</span> is being referred to as the projection matrix. </p> <p>If <span class="math-container">$P$</span> is projection into, maybe <span class="math-container">$\ker$</span> <span class="math-container">$A$</span>, then <span class="math-container">$\theta^*$</span> should be <span class="math-container">$PY$</span>. But since <span class="math-container">$\theta^* = Y - YP$</span>, this suggests <span class="math-container">$P$</span> is projection into <span class="math-container">$(\ker A)^\perp$</span>. </p> <p>Can someone prove this? Or in general prove why <span class="math-container">$\theta^* = (I - P)Y$</span>?</p>
ironX
534,898
<p>I was able to solve this using the Lagrangian <span class="math-container">$L(\theta) = \frac{1}{2} ||Y - \theta||^2 + \lambda^T A \theta$</span></p> <p><span class="math-container">$\nabla_\theta L(\theta) = 0$</span> <span class="math-container">$\implies $</span> <span class="math-container">$\theta^* = Y - A^T \lambda$</span></p> <p>Using <span class="math-container">$A \theta^* = 0$</span> <span class="math-container">$\implies $</span> <span class="math-container">$\lambda = (A A')^{-1} A Y$</span></p> <p>giving me <span class="math-container">$\theta^* = Y - A'(AA')^{-1} AY$</span>.</p> <p>But I still want to confirm/refute my intuition that <span class="math-container">$P = A'(AA')^{-1} A $</span> is the projection onto <span class="math-container">$(\ker A)^\perp$</span>. Anyone is welcome to explain. </p>
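<p>This intuition can be checked concretely in Python (a sketch with a specific $A$ of my own choosing). Take $A = [1\ 1\ 1]$, so $\ker A = \{\theta : \theta_1+\theta_2+\theta_3 = 0\}$, $(\ker A)^\perp = \operatorname{span}\{(1,1,1)\} = \operatorname{row}(A)$, and $P = A'(AA')^{-1}A = \frac{1}{3}\mathbf{1}\mathbf{1}^T$:</p>

```python
# P v = mean(v) * (1, 1, 1), the orthogonal projection onto span{(1,1,1)}
def P_apply(v):
    s = sum(v) / 3.0
    return [s, s, s]

Y = [2.0, -1.0, 5.0]
theta_star = [y - p for y, p in zip(Y, P_apply(Y))]   # theta* = (I - P) Y

residual = sum(theta_star)            # feasibility: A theta* = 0
fixed = P_apply([1.0, 1.0, 1.0])      # P fixes row(A) = (ker A)^perp
pp = P_apply(P_apply(Y))              # idempotence: P^2 = P on a sample
```

<p>So $\theta^*$ lies in $\ker A$ and the remainder $PY$ is the component of $Y$ in $(\ker A)^\perp$, confirming that $P$ projects onto $(\ker A)^\perp$.</p>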
3,242,844
<p><span class="math-container">$$\int_0^{\pi/6} \frac{x\cos x}{1+2\cos x}dx$$</span></p> <p>Does it have a closed solution? <a href="https://www.wolframalpha.com/input/?i=int%20%5Cfrac%7Bxcos%20x%7D%7B1%2B2cos%20x%7Ddx%20from%200%20to%20%5Cpi%2F6" rel="nofollow noreferrer">WA</a> outputs this result.</p>
R. Burton
614,269
<p>Because...</p> <p><span class="math-container">$$\int \frac{x\cos x}{1+2\cos x}dx=\int_0^x \frac{t\cos t}{1+2\cos t}dt$$</span></p> <p>...the definite integral...</p> <p><span class="math-container">$$\int_0^{\pi/6} \frac{x\cos x}{1+2\cos x}dx$$</span></p> <p>...may be regarded as the <em>value</em> of the function...</p> <p><span class="math-container">$$f(x)=\int_0^x \frac{t\cos t}{1+2\cos t}dt$$</span></p> <p>...at <span class="math-container">$x=\pi/6$</span>.</p> <p>The function <span class="math-container">$f$</span> thus described is clearly <em>Liouvillian</em>, but this does <strong>not</strong> guarantee that it is <em>elementary</em>. As far as I'm aware, the only guaranteed way to show that a function defined via integral is elementary is to evaluate the integral. In this case <span class="math-container">$f(x)$</span> (according to WA) is expressed in terms of the <a href="http://mathworld.wolfram.com/Polylogarithm.html" rel="nofollow noreferrer">polylogarithm</a>, which is <strong>not</strong> an elementary function. So, if you wish to express the definite integral as <span class="math-container">$f(\pi/6)$</span> then no, there is no 'closed' form.</p>
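<p>For reference, the numerical value of the definite integral is easy to pin down (a Python sketch; the bracketing bounds below follow from $\cos x/(1+2\cos x)$ being decreasing on $[0,\pi/6]$):</p>

```python
import math

def integrand(x):
    return x * math.cos(x) / (1 + 2 * math.cos(x))

# midpoint rule on [0, pi/6]
n = 200000
h = (math.pi / 6) / n
I = sum(integrand((i + 0.5) * h) for i in range(n)) * h

# The factor cos(x)/(1+2cos(x)) decreases from 1/3 at x = 0 down to
# (sqrt(3)/2)/(1+sqrt(3)) ~ 0.317 at x = pi/6, so I must lie between
# 0.317*(pi/6)^2/2 and (1/3)*(pi/6)^2/2, roughly 0.0435 .. 0.0457.
```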
2,210,893
<p>A lot of times when proving for example inequalities like $$x \leq y$$ for real numbers $x,y$ the argument looks like $$x \leq y + \varepsilon$$ for all $\varepsilon &gt; 0$, hence $x \leq y$. </p> <p>Now this is obviously very intuitive, but is there a "proof" that this conclusion is correct? And is it always sufficient in order to prove $x \leq y$ to show $x \leq y + \varepsilon$ for all $\varepsilon &gt; 0$? </p> <p>I'd appreciate any explanations!</p> <p><strong>NOTE:</strong> I know that these kinds of arguments are correct when dealing with sequences. But here we have no sequences so I wanted to understand this too. </p>
Community
-1
<p>Consider </p> <p>$$t\le\epsilon$$ for all $\epsilon&gt;0$.</p> <p>Clearly,</p> <p>$$t\le0$$ is compatible, while </p> <p>$$t&gt;0$$ is not because</p> <p>$$0&lt;t\le\epsilon$$ cannot hold for all $\epsilon&gt;0$.</p> <p>Rewrite with $t:=x-y$.</p>
4,380,475
<p>I'm trying to differentiate <span class="math-container">$x\sqrt{4-x^2}$</span> using the definition of derivative.</p> <p>So it would be something like</p> <p><span class="math-container">$$\underset{h\to 0}{\text{lim}}\frac{(h+x) \sqrt{4-\left(h^2+2 h x+x^2\right)}-x \sqrt{4-x^2}}{h}$$</span></p> <p>I was trying to solve it and I can only end up with something like</p> <p><span class="math-container">$$\underset{h\to 0}{\text{lim}}\frac{(x+h)\sqrt{4-x^2-2xh-h^2}-x\sqrt{4-x^2}}h \cdot \frac{\sqrt{4-x^2-2xh-h^2}+\sqrt{4-x^2}}{\sqrt{4-x^2-2xh-h^2}+\sqrt{4-x^2}}$$</span></p> <p><span class="math-container">$$\underset{h\to 0}{\text{lim}}\frac{-3x^2h-3xh^2+4h-h^3+\sqrt{4-x^2}-\sqrt{4-x^2-2xh+h^2}}{h\sqrt{4-x^2-2xh-h^2}+\sqrt{4-x^2}}$$</span></p> <p>Now if I group on h, I will have some tricky 3 instead of 2. The idea is I should have something like <span class="math-container">$h(2x^2+4)$</span> that would cancel up.</p> <p>I'm quite stuck; can I ask for a little help? I know wolframalpha exists but it refuses to create the step by step solution with the error &quot;Ops we don't have a step by step solution for this query&quot;.</p> <p>The final result should be <span class="math-container">$$-\frac{2 \left(x^2-2\right)}{\sqrt{4-x^2}}$$</span></p>
Matteo
686,644
<p>Clearly, when <span class="math-container">$h\to 0$</span>: <span class="math-container">$$\frac{h\cdot \sqrt{4-\left(h^2+2 h x+x^2\right)}}{h}\to \sqrt{4-x^2}$$</span> So, the limit simplifies to: <span class="math-container">$$\lim_{h\to 0}\frac{(x+h)\sqrt{4-\left(h^2+2 h x+x^2\right)}-x \sqrt{4-x^2}}{h}=\sqrt{4-x^2}+\lim_{h\to 0}\frac{x\left[\sqrt{4-\left(h^2+2 h x+x^2\right)}-\sqrt{4-x^2}\right]}{h}=\sqrt{4-x^2}+\lim_{h\to 0}\frac{x\cdot \sqrt{4-x^2}\left[\sqrt{1-\frac{h^2+2hx}{4-x^2}}-1\right]}{h}$$</span></p> <p>Using the fact that: <span class="math-container">$$\sqrt{1+x}-1\,\, \sim\,\, \frac{1}{2}x\quad \text{as } x \to 0$$</span> We obtain: <span class="math-container">$$\lim_{h\to 0}\frac{x\cdot \sqrt{4-x^2}\left[\sqrt{1-\frac{h^2+2hx}{4-x^2}}-1\right]}{h}\,\, \sim\,\, \lim_{h\to 0}\frac{x\cdot \sqrt{4-x^2}\cdot\left(-\frac{1}{2}\cdot\frac{2hx}{4-x^2}\right)}{h}=\lim_{h\to 0}\frac{-\frac{hx^2}{\sqrt{4-x^2}}}{h}=-\frac{x^2}{\sqrt{4-x^2}}$$</span></p> <p>So, the limit is: <span class="math-container">$$\sqrt{4-x^2}-\frac{x^2}{\sqrt{4-x^2}}=\frac{4-x^2-x^2}{\sqrt{4-x^2}}=2\cdot\frac{2-x^2}{\sqrt{4-x^2}}$$</span></p>
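<p>A quick finite-difference check of the result (a Python sketch):</p>

```python
import math

def f(x):
    return x * math.sqrt(4 - x**2)

def derivative_formula(x):
    # the limit computed above: 2(2 - x^2)/sqrt(4 - x^2)
    return 2 * (2 - x**2) / math.sqrt(4 - x**2)

x0 = 1.0
h = 1e-6
central_diff = (f(x0 + h) - f(x0 - h)) / (2 * h)   # numerical derivative at x0
```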
18,686
<p>Suppose you have an arbitrary triangle with vertices $A$, $B$, and $C$. <a href="http://www.cs.princeton.edu/~funk/tog02.pdf">This paper (section 4.2)</a> says that you can generate a random point, $P$, uniformly from within triangle $ABC$ by the following convex combination of the vertices:</p> <p>$P = (1 - \sqrt{r_1}) A + (\sqrt{r_1} (1 - r_2)) B + (r_2 \sqrt{r_1}) C$</p> <p>where $r_1, r_2 \sim U[0, 1]$.</p> <p>How do you prove that the sampled points are uniformly distributed within triangle $ABC$?</p>
Gilbert Colgate
644,609
<p>Note that these points, when random, will be uniformly distributed in a nicely random way, but if you loop through r1 and r2 with an increment (say .01) your resulting points will have unusual artifacts and will not look randomly distributed. One end of the triangle may have few points.</p> <p>I determined this with code (note that similar code, using math.Random(), looks fine).</p> <blockquote> <pre><code>/**
 * FillTriangleWithPointsBarycentric
 *
 * If r1 and r2 are uniform random numbers between 0 and 1, then this
 * math produces a uniform distribution (note √ means sqrt):
 *   d = (1.0−√r1)*vector1 + √r1*(1.0−r2)*vector2 + √r1*r2*vector3
 * But rather than using uniform random numbers, just loop through them
 * and the result does not look good.
 *
 * @param {THREE.Vector3} vector1
 * @param {THREE.Vector3} vector2
 * @param {THREE.Vector3} vector3
 * @param {Array&lt;number&gt;} output input/output points
 *        [x0,y0,z0,x1,y1,z1,...xn,yn,zn] displayable in a point cloud
 * @returns {void}
 */
FillTriangleWithPointsBarycentric(vector1, vector2, vector3, output) {
  let triangle = new THREE.Triangle(vector1, vector2, vector3);
  let area = triangle.getArea();
  console.log('Area is ' + area);
  area = Math.sqrt(area);
  console.log('sqrt Area is ' + area);
  let increment = 0.1 / area;
  for (let r1 = 0; r1 &lt;= 1; r1 += increment) {
    for (let r2 = 0; r2 &lt;= 1; r2 += increment) {
      // of course this is javascript, so we have to write this out
      // instead of using only one line
      let sqrtR = Math.sqrt(r1);
      let A = (1 - sqrtR);
      let B = (sqrtR * (1 - r2));
      let C = (sqrtR * r2);
      let x = A * vector1.x + B * vector2.x + C * vector3.x;
      let y = A * vector1.y + B * vector2.y + C * vector3.y;
      let z = A * vector1.z + B * vector2.z + C * vector3.z;
      output.push(x, y, z);
    }
  }
}</code></pre> </blockquote>
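<p>For comparison, here is the same barycentric formula with genuinely uniform random numbers, sketched in Python (the function name and test triangle are my own); the three weights are nonnegative and sum to 1, so every sample necessarily lands inside the triangle:</p>

```python
import random

def sample_point(A, B, C, rng=random):
    """Sample a point from triangle ABC using
    P = (1 - sqrt(r1)) A + sqrt(r1)(1 - r2) B + sqrt(r1) r2 C."""
    r1, r2 = rng.random(), rng.random()
    s = r1 ** 0.5
    w = (1 - s, s * (1 - r2), s * r2)   # barycentric weights
    x = w[0] * A[0] + w[1] * B[0] + w[2] * C[0]
    y = w[0] * A[1] + w[1] * B[1] + w[2] * C[1]
    return (x, y), w

A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)
random.seed(1)
points = [sample_point(A, B, C) for _ in range(1000)]

# every weight triple is nonnegative and sums to 1 (up to rounding),
# so each sampled point lies inside the triangle
all_inside = all(min(w) >= 0 and abs(sum(w) - 1) < 1e-12 for _, w in points)
```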
1,942,578
<p>Consider the following wedge</p> <p><a href="https://i.stack.imgur.com/xiaPX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xiaPX.png" alt=""></a> cut from a cylinder of radius r. The plane that cuts the wedge goes through the very bottom of the cylinder leading to an ellipse as the cross section of the wedge.</p> <p>The long axis of the ellipse is $2R$ in length. </p> <p>How can I prove that the minor axis of the ellipse is $2r$ in length where $r$ is the radius of the cylinder?</p>
axioman
369,033
<p>The closer $x$ gets to zero, the bigger $1/x$ gets, and therefore you need to choose a bigger $n$ to get $\frac{1}{nx}&lt;\epsilon$. Therefore the convergence can't be uniform. On a compact interval like $[0.5,1]$ there is no such problem, because $1/x$ attains a maximum value on the interval, so if you take $n$ with $\frac{\max\frac{1}{x}}{n}&lt;\epsilon$, this $n$ will work for every $x$ in the interval.</p>
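<p>The same contrast can be seen numerically (a Python sketch):</p>

```python
# f_n(x) = 1/(n x) on (0, 1]: pointwise -> 0, but not uniformly, because
# sup over (0, 1] of 1/(n x) is infinite for every n.
# On [0.5, 1] the sup is (1/n) * max(1/x) = 2/n, which does tend to 0.
n = 100
xs_small = [10.0 ** (-k) for k in range(1, 8)]          # x values near 0
sup_near_zero = max(1.0 / (n * x) for x in xs_small)    # blows up

xs_compact = [0.5 + i * 0.005 for i in range(101)]      # grid on [0.5, 1]
sup_compact = max(1.0 / (n * x) for x in xs_compact)    # = 2/n = 0.02
```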
1,835,158
<p>I want to choose three random integer points $(a_{1},b_{1}),(a_{2},b_{2}),(a_{3},b_{3})$ in the region $|x|\leq r, |y|\leq r$ of the plane. What is the probability that these three points form a right triangle (does it depend on $r$)? What about an isosceles triangle? I think that it is zero, but I can't prove it. Thank you.</p>
Empy2
81,790
<p>The following makes me think the number of right-angled triangles within the square $$[0,N]\times[0,N]\subset\mathbb{Z}^2$$ is some multiple of $N^4\log N$. Since there are $O(N^6)$ triangles, this is a small proportion of them, as expected. </p> <ol> <li><p>Take triangles whose short sides are aligned North-South and East-West. There are $(N+1)^2$ choices for the position of the right-angle; $N$ choices for the end of the East-West side, and $N$ choices for the end of the North-South side, for $(N+1)^2N^2$, or roughly $N^4$ in all. </p></li> <li><p>There are also triangles with short sides aligned NW/SE and NE/SW - along the diagonals. I don't have a formula for them yet. It should be about half the number from case 3, with $a=b=1$, because $(a,b)$ and $(b,a)$ give the same triangles.</p></li> <li><p>All other triangles have short sides aligned in directions $(a,b)$ and $(b,-a)$, where $a$ and $b$ have no common factor, and $a&gt;b&gt;0$.<br> A typical triangle has one side $m(a,b)$ for an integer $m$, and the other side $n(b,-a)$ for some integer $n$.<br> For now, I take $m&gt;0,n&gt;0$. Later, I consider symmetries of the square, which is equivalent to considering $m&lt;0,n&lt;0$ and $(b,a)$ instead of $(a,b)$.<br> The triangle has vertices $$(p,q),(p+ma,q+mb),(p+nb,q-na)$$ This fits within a rectangle of sides $(1+\min(ma,nb),1+mb+na)$, so the number of choices for $(p,q)$ is $(N-\min(nb,ma))(N-mb-na)$.<br> Sum for all values of $m$ and $n$ that give a positive number of triangles. I turned the sum into an integral, and to leading order, I found $$\frac{\frac{(1-b/N)^4}{a^2b^2}+\frac{(a-b)^4}{a^4}+5}{24(a^2+b^2)}N^4$$ The numerator is between $5$ and $7$ because $a&gt;b$. There are eight symmetries of the square, so the total for this combination of $a$ and $b$ is eight times this value. 
</p></li> <li>The total number of right-angled triangles is then of the order $$2N^4\sum_{(a,b)}\frac1{a^2+b^2}$$ If it were over all $0&lt;b&lt;a&lt;N$, the sum by itself would be $O(\log N)$. Since it is only for coprime $b$ and $a$, it may be less than that. But the proportion of $(a,b)$ that are coprime is $6/\pi^2$, so that the sum may end up $$\frac{12}{\pi^2}N^4\log N$$</li> <li>When $a$ is prime, all $1\leq b\leq a-1$ are coprime to $a$.<br> $$\frac1{2a^2}&lt;\frac1{a^2+b^2}&lt;\frac1{a^2}\\ \frac1{3a}&lt;\frac{a-1}{2a^2}&lt;\sum_{b=1}^{a-1}\frac1{a^2+b^2}&lt;\frac1a$$ So the sum, for just the prime values of $a$, is more than $\frac13\sum_{p&lt;N}\frac1p$, which is known to be $O(\log\log N)$. </li> <li>This gives a grand total $$\frac23N^4\log\log N&lt;\text{Number of Right-angled Triangles}&lt;2N^4\log N$$</li> </ol>
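<p>The axis-aligned count from case 1 can be sanity-checked by brute force for tiny $N$ (a Python sketch I am adding for illustration; this is only feasible for very small grids, since it examines all $\binom{(N+1)^2}{3}$ triples):</p>

```python
from itertools import combinations

def count_right_triangles(N):
    """Count right triangles with vertices on the (N+1) x (N+1) grid
    of lattice points in [0, N] x [0, N]."""
    pts = [(x, y) for x in range(N + 1) for y in range(N + 1)]
    count = 0
    for p, q, r in combinations(pts, 3):
        # a Euclidean triangle has at most one right angle, so `break`
        # after the first match counts each triangle once
        for a, b, c in ((p, q, r), (q, r, p), (r, p, q)):
            u = (b[0] - a[0], b[1] - a[1])
            v = (c[0] - a[0], c[1] - a[1])
            if u[0] * v[0] + u[1] * v[1] == 0:   # right angle at vertex a
                count += 1
                break
    return count

# For N = 2, case 1 alone gives (N+1)^2 * N^2 = 36 axis-aligned right
# triangles, tilted ones (e.g. (0,0),(1,1),(2,0)) add more, and there
# are at most C(9,3) - 8 = 76 non-degenerate triangles in total.
c2 = count_right_triangles(2)
```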
4,174
<p>I'm developing a course that focuses on the transistion from arithmetic to algebraic thinking, particularly in grades 5-8. We will do this through focus on the common core. I'm also putting together a collection of suggested readings from the math education literature. I would be interested to hear your suggestions for suggested readings.</p>
Burt Furuta
3,578
<p>In case you haven't already seen these, I'll suggest articles by Luis Radford and Jean Schmittau. Both are influenced by Vygotsky, but take quite different approaches to algebra. Most of Radford's articles are available as pdfs on <a href="http://luisradford.ca/luisradford/?page_id=13" rel="nofollow">his website</a>.</p> <p>Radford, L. (2010). Algebraic thinking from a cultural semiotic perspective. <em>Research in Mathematics Education</em>, 12(1), 1-19.</p> <p>pdf at <a href="http://www.luisradford.ca/pub/22_RME2010Algebraicthinkingfromaculturalsemioticperspective.pdf" rel="nofollow">http://www.luisradford.ca/pub/22_RME2010Algebraicthinkingfromaculturalsemioticperspective.pdf</a></p> <blockquote> <p>Abstract: In this article, I introduce a typology of forms of algebraic thinking. In the first part, I argue that the form and generality of algebraic thinking are characterised by the mathematical problem at hand and the embodied and other semiotic resources that are mobilised to tackle the problem in analytic ways. My claim is based not only on semiotic considerations but also on new theories of cognition that stress the fundamental role of the context, the body and the senses in the way in which we come to know. In the second part, I present some concrete examples from a longitudinal classroom research study through which the typology of forms of algebraic thinking is illustrated.</p> </blockquote> <p>Schmittau, J. &amp; Morris, A. (2004). 
The development of algebra in Davydov’s elementary curriculum, <em>The Mathematics Educator</em>.</p> <p>Also available as a pdf at <a href="http://math.nie.edu.sg/ame/matheduc/tme/tmeV8_1/Schmittau.pdf" rel="nofollow">http://math.nie.edu.sg/ame/matheduc/tme/tmeV8_1/Schmittau.pdf</a></p> <blockquote> <p>Abstract: A comparison of the development of algebra in Davydov’s elementary mathematics curriculum with the approach to algebra advocated by the National Council of Teachers of Mathematics in the US reveals striking differences. Rather than developing algebra as a generalization of number, Davydov’s curriculum develops algebraic structure from the relationships between quantities such as length, area, volume, and weight. The arithmetic of the real numbers follows as a concrete application of these algebraic generalizations. The instructional approach, while similar to constructivist teaching methodology, emanates from a very different theoretical perspective, namely, the findings of Vygotsky and Luria that cognitive development is enabled by overcoming obstacles for which previous methods of solution prove inadequate. In a study in which the entire three-year elementary curriculum of Davydov was implemented in a US school setting, children using the curriculum developed the ability to solve algebraic problems normally not encountered until the secondary level in the US.</p> </blockquote> <p>UPDATE: I’d like to add a comment on the takeaway from the Schmittau and Morris article. The Davydov curriculum is very unconventional, and unlikely to be widely adopted in the United States. However, there are important insights that we could gain from it. I’ll highlight two of them. First, Schmittau points out in this and other articles that traditional instruction does not make as much use of what Vygotsky calls <em>psychological tools</em> as Davydov’s curriculum. Psychological tools are graphics, tables, language and other ways of making ideas concrete. 
Psychological tools turn thoughts into objects that we can reflect upon, manipulate and consciously apply when appropriate.</p> <p>Second, Davydov’s curriculum develops a thorough understanding of concepts. When I first read the article, I thought, “Wow, they spend an excessive amount of time on one topic.” The students work on different aspects of a concept in a concrete way. Now I ask, ”Is this a good example of what we call rigorous instruction? Is this what the opposite of a ‘mile wide and inch deep’ curriculum looks like?” Concepts are developed deeply, in a logical progression. The zone of proximal development is moved forward so other higher level concepts can be taught. And it works. At first, students in the Davydov curriculum seem to be behind students in our traditional curriculum, as the Davydov curriculum builds a solid foundation. But they overtake students in traditional classes, seemingly performing at levels that we may consider to be developmentally impossible. Vygostsky says, “Learning leads development.” The secret may be a deep understanding of concepts that prepares students to move to the next level.</p>
3,430,305
<blockquote> <p>Let <span class="math-container">$P$</span> be a partition of <span class="math-container">$[0, b]$</span> defined as <span class="math-container">$P = \{ 0 = x_0 &lt; x_1 &lt; \ldots &lt; x_n = b\}$</span>, and let <span class="math-container">$c_i \in [x_{i-1}, x_i]$</span> for every <span class="math-container">$1 \leq i \leq n$</span>.</p> <p>Prove: <span class="math-container">$$\left|\Sigma^n_{i=1}c_i\Delta x_i - \frac{b^2}{2}\right|\leq \frac{1}{2}\Sigma^n_{i=1}(\Delta x_i)^2$$</span></p> </blockquote> <p>Here's my attempt, trying to go from left to right.</p> <p>Using the triangle inequality: <span class="math-container">$$\left|\Sigma^n_{i=1}c_i\Delta x_i +\left(- \frac{b^2}{2}\right)\right| \leq \left|\Sigma^n_{i=1}c_i\Delta x_i\right| + \left|-\frac{b^2}{2}\right|\\ = \Sigma^n_{i=1}c_i\Delta x_i + \frac{b^2}{2}=\Sigma^n_{i=1}c_i\Delta x_i + \frac{\left(\Sigma^n_{i=1}\Delta x_i\right)^2}{2} \\\leq \Sigma^n_{i=1}x_i \cdot \Delta x_i + \frac{\left(\Sigma^n_{i=1}\Delta x_i\right)^2}{2}$$</span></p> <p>This is as far as I could develop it. Any way to proceed?</p>
Pebeto
605,486
<p>Here is a suggestion based on your idea. I will consider the case where <span class="math-container">$0 \notin P.$</span></p> <p>A hyperplane is a set of the form <span class="math-container">$\{ x \in \mathbb{R}^n\ s.t. \ c^t x = c_0 \},$</span> where <span class="math-container">$c \in \mathbb{R}^n, \ c \neq 0$</span> and <span class="math-container">$c_0 \in \mathbb{R}$</span>. </p> <p>As <span class="math-container">$0 \notin P$</span>, there exists a hyperplane that satisfies: <span class="math-container">$$ c^t 0 &lt; c_0$$</span> and <span class="math-container">$$c^t x \geq c_0, \ \forall x \in P.$$</span> Hence, we get: <span class="math-container">$$c_0 &gt; 0,$$</span> and we can set <span class="math-container">$c_0 = 1$</span> wlog.</p> <p>Using the resolution theorem you mentioned, we must also have:</p> <p><span class="math-container">$$c^t x^i \geq 1,\ \forall i,$$</span></p> <p>and</p> <p><span class="math-container">$$c^t w^j \geq 0, \forall j.$$</span></p> <p>Therefore, to find the separating hyperplane, we can solve the following LP: <span class="math-container">$$ \max_{c} 0 \ s.t. $$</span> <span class="math-container">$$c^t x^i \geq 1,\ \forall i,$$</span> <span class="math-container">$$c^t w^j \geq 0, \forall j.$$</span></p>
469,485
<p>Here's the simple question:</p> <p>Devon has a piece of poster board 45 cm by 20 cm. His teacher challenges him to cut the board into parts, then rearrange the parts to form a square. a) What is the side length of the square? b) What are the fewest cuts Devon could have made? Explain.</p> <p>I understand part a (the answer is 30, the square root of 900), but how many cuts would he have made to make it a square? 45+20 = 65, and 65 doesn't divide into 30. If I try taking 15 off the 45 and moving it onto the 20, I get 35*30, which is not 900 (it's 1050). How come 45*20 = 900, but when you displace the 15 you get a greater area? I'm sure the error is somewhere in my conversion between the rectangle and the square.</p>
qaphla
85,568
<p>The issue with your calculations is that when you cut the 45 off at the 30/15 mark, you get a piece that is 20*30 and a piece that is 20*15. Putting the 20*15 piece along the 20*30 piece will give you an L-shape, that is 35*30 with a 10*15 rectangle cut out of the corner.</p> <p>The best way that I can see to do this is in two cuts. First, make the cut that you describe. Now, you have a 20*30 piece and a 20*15 piece. Cut the 20*15 piece into two 10*15 pieces. Now, put those two pieces together along their 10 edges, giving a 10*30 piece. Now, put the 10*30 next to the 20*30 piece along 30 edges, giving a 30*30 square in two cuts.</p>
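<p>As an arithmetic sanity check of this dissection (a Python sketch):</p>

```python
# Areas in the two-cut dissection: the 45x20 board becomes one 20x30
# piece and two 10x15 pieces, which reassemble into a 30x30 square.
board = 45 * 20
pieces = [20 * 30, 10 * 15, 10 * 15]
square = 30 * 30

# the mistaken 35x30 figure is an L-shape: its bounding box minus a 10x15 corner
l_shape = 35 * 30 - 10 * 15

print(board, sum(pieces), square, l_shape)  # all equal 900
```

<p>All four areas agree at 900 cm&sup2;; treating the L-shape as a full 35*30 rectangle is what overcounts by the missing 10*15 corner.</p>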
3,333,928
<p>I am reading an example of the root test for a series: <span class="math-container">$$\sum_{n=1}^\infty\frac{3n}{2^n}.$$</span> So applying the root test, we get <span class="math-container">$$\lim_{n \to \infty}\sqrt[n]{\frac{3n}{2^n}}=\lim_{n \to \infty}\frac{\sqrt[n]{3n}}{2}=\frac{1}{2}\lim_{n\to\infty}\exp(\frac{1}{n}\ln (3n))\\=\frac{1}{2}\exp(\lim_{n\to\infty}\frac{\ln (3n)}{n})=\frac{1}{2}\exp(\lim_{n\to\infty}\frac{\frac{3}{3n}}{1})=\frac{1}{2}e^0=\frac{1}{2}.$$</span></p> <p>I don't understand the expression from the point where the <span class="math-container">$\exp$</span> is used. Can someone explain which formula is used there? Then why <span class="math-container">$$\lim_{n\to\infty}\frac{\ln (3n)}{n}=\lim_{n\to\infty}\frac{\frac{3}{3n}}{1}?$$</span></p>
Community
-1
<p>A polynomial is an expression obtained by combining constants and variables by means of a <em>finite number</em> of additions and multiplications.</p> <p>E.g. <span class="math-container">$3xy^3+2x-1$</span>.</p> <p>An <em>algebraic function</em> of a single variable <span class="math-container">$x$</span> is such that the relation to the dependent variable <span class="math-container">$y(x)$</span> can be expressed by a bivariate polynomial equation with integer coefficients.</p> <p>E.g. <span class="math-container">$3x(y(x))^3+2x-1=0$</span>, which can also be written <span class="math-container">$y(x)=\sqrt[3]{\dfrac{1-2x}{3x}}$</span>.</p> <p>In particular, the ratio of two polynomials in <span class="math-container">$x$</span> is an algebraic function, as is any expression involving only the four operations and radicals.</p> <p>A <em>transcendental</em> function is one that is not algebraic.</p> <p>E.g. <span class="math-container">$\sin(x)$</span> is transcendental because there is no polynomial <span class="math-container">$P$</span> such that <span class="math-container">$P(x,\sin(x))=0$</span>.</p>
2,764,818
<blockquote> <p>Let $f(x)=ax^3+bx^2+cx+d$ be a polynomial function; find a relation between $a,b,c,d$ such that its roots are in an arithmetic/geometric progression. (separate relations)</p> </blockquote> <p>So for the arithmetic progression I let $\alpha = x_2$ and $2r$ be the common difference of the arithmetic progression.</p> <p>We have:</p> <p>$$x_1=\alpha-2r, \quad x_2=\alpha, \quad x_3=\alpha +2r$$</p> <p>Therefore:</p> <p>$$x_1+x_2+x_3=-\frac ba=3\alpha$$ $$x_1^2+x_2^2+x_3^2 = 9\alpha^2-2\frac ca \to 4r^2=\frac {b^2-3ac}{3a^2}$$ $$x_1x_2x_3=\alpha(\alpha^2-4r^2)=-\frac da$$</p> <p>and we get the final result $2b^3+27a^2d-9abc=0$.</p> <p>How should I choose the ratio for the geometric progression of roots?</p> <p>I tried something like </p> <p>$$x_1=\frac {\alpha}q, \quad x_2=\alpha, \quad x_3=\alpha q$$</p> <p>To get $x_1x_2x_3=\alpha^3$, but it doesn't really work out...</p> <p>Note:</p> <p>I have to choose from this set of answers:</p> <p>$$\text{(a)} \ a^2b=c^2d \quad\text{(b)}\ a^2b^2=c^2d \quad\text{(c)}\ ab^3=c^3d$$</p> <p>$$\text{(d)}\ ac^3=b^3d \quad\text{(e)}\ ac=bd \quad\text{(f)}\ a^3c=b^3d$$</p>
Claude Leibovici
82,404
<p>Using your notations $$ax^3+bx^2+c x+d=a(x-\frac \alpha q)(x-\alpha)(x-\alpha q)$$ Expand the rhs to get after simplifications $$a x^3 -\frac{a \alpha \left(q^2+q+1\right)}{q}x^2+\frac{a \alpha ^2 \left(q^2+q+1\right)}{q}x-a \alpha ^3$$ Compare the coefficients to get $$b=-\frac{a \alpha \left(q^2+q+1\right)}{q}$$ $$c=\frac{a \alpha ^2 \left(q^2+q+1\right)}{q}$$ $$d=-a \alpha ^3$$ Notice that $c=-\alpha b$ and $d=-a\alpha^3$, so $$ac^3=a(-\alpha b)^3=-a\alpha^3 b^3=b^3d,$$ which is option (d).</p>
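<p>The relation $ac^3=b^3d$ can also be sanity-checked numerically (a Python sketch; the roots $3/5$, $3$, $15$, i.e. ratio $q=5$ with $a=2$, are arbitrary choices):</p>

```python
# Cubic a(x - r1)(x - r2)(x - r3) with roots in geometric progression.
a = 2.0
r1, r2, r3 = 0.6, 3.0, 15.0          # common ratio q = 5

# Vieta's formulas give the remaining coefficients of a x^3 + b x^2 + c x + d.
b = -a * (r1 + r2 + r3)
c = a * (r1 * r2 + r1 * r3 + r2 * r3)
d = -a * r1 * r2 * r3

print(a * c**3, b**3 * d)            # the two sides of a c^3 = b^3 d agree
```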
1,772,650
<p><strong>Note: in relation to the answer of the duplicate question, I see that the second picture below refers to the triangulation when we consider <em>simplicial complexes</em>. I do not understand why the triangles are used as they are, however, so would like some help trying to understand this</strong></p> <p>I am having a lot of trouble understanding triangulations. I know that a triangulation involves decomposing a 2-manifold into triangular regions.</p> <p>A common example is the torus, which can be constructed from the square. I understand this representation:</p> <p><a href="https://i.stack.imgur.com/muE2n.png" rel="noreferrer"><img src="https://i.stack.imgur.com/muE2n.png" alt="Torus"></a></p> <p>Since the torus is homeomorphic to the space obtained by identifying those edges together.</p> <p>What I do not understand is the triangulation given:</p> <p><a href="https://i.stack.imgur.com/XVDtq.png" rel="noreferrer"><img src="https://i.stack.imgur.com/XVDtq.png" alt="Torus"></a></p> <p>Why is this triangulation given in all the books and resources etc?</p> <p>I do not understand what all the triangles mean. Why could we not just split the square into 2 triangles?</p> <p>Many thanks for your help on this one</p>
zibadawa timmy
92,067
<p>Hint: What are the images of the corners of the big square in the quotient?</p> <p>See also <a href="https://math.stackexchange.com/questions/953586/triangulation-of-torus">this Q&amp;A</a> (and the comments), where the poster makes much the same mistake.</p>
3,419,550
<p>Let <span class="math-container">$D ⊂ \mathbb{R}$</span> </p> <p>Let <span class="math-container">$D_A$</span> be the set of all accumulation points of <span class="math-container">$D$</span>. The set <span class="math-container">$\bar{D} := D \cup D_A$</span> is called the closure of <span class="math-container">$D$</span>.</p> <p>Show that if <span class="math-container">$D$</span> is bounded, then <span class="math-container">$\bar{D}$</span> is bounded.</p> <hr> <p>My professor suggested that I could prove this by contrapositive. </p> <p>Let <span class="math-container">$x$</span> not be in <span class="math-container">$\bar{D}$</span>, thus <span class="math-container">$x$</span> is not in D.</p> <hr> <p>I can most certainly prove that if x is not in <span class="math-container">$\bar{D}$</span>, then x is not in <span class="math-container">$D$</span>; however, I'm somewhat lost in how this shows that <span class="math-container">$\bar{D}$</span> is bounded.</p>
Math1000
38,584
<p>If <span class="math-container">$\overline D$</span> were unbounded above, there would exist a sequence of elements <span class="math-container">$(x_n)\subset\overline D$</span> with <span class="math-container">$x_n&gt;n$</span>. Since <span class="math-container">$D$</span> is bounded, there is some <span class="math-container">$M$</span> such that <span class="math-container">$|x|&lt;M$</span> for all <span class="math-container">$x\in D$</span>. Hence for <span class="math-container">$n&gt;M$</span> we have <span class="math-container">$x_n\notin D$</span>, so <span class="math-container">$x_n$</span> must be a limit point of <span class="math-container">$D$</span>. But these <span class="math-container">$x_n$</span> cannot be limit points of <span class="math-container">$D$</span>, since every element of <span class="math-container">$D$</span> is strictly less than <span class="math-container">$M$</span>. Hence, contradiction. The case of <span class="math-container">$\overline D$</span> unbounded below is symmetric.</p>
2,066,716
<p>I know how to prove that every infinite subset $A$ of a compact set in a metric space satisfies $A'\ne\emptyset$, but my book also claims the opposite implication, without proof, stating "it falls outside the scope of this book". I've been searching online but I could only find the "standard" result. </p>
Amin Idelhaj
333,883
<p>Assume every infinite subset of $E$ has a limit point. Then fix some $\delta &gt; 0$, choose some $x_1 \in E$, and then given $x_1,\ldots,x_n$, choose $x_{n+1}$ such that $d(x_{n+1},x_i) \ge \delta$ for $i = 1,\ldots,n$. This terminates in finitely many steps: otherwise the set $\{x_n\}$ would be infinite with pairwise distances at least $\delta$, hence without a limit point, contradicting the hypothesis. So $E$ can be covered by finitely many open balls of radius $\delta$, and since $\delta$ was arbitrary, $E$ is totally bounded. Now, a sequence in $E$ must have a convergent subsequence (its range is either finite, giving a constant subsequence, or infinite, hence has a limit point), so in particular, a Cauchy sequence must converge. $E$ is complete and totally bounded, so it is compact. </p>
2,066,716
<p>I know how to prove that every infinite subset $A$ of a compact set in a metric space satisfies $A'\ne\emptyset$, but my book also claims the opposite implication, without proof, stating "it falls outside the scope of this book". I've been searching online but I could only find the "standard" result. </p>
David Bowman
366,588
<p>We prove this by contradiction.</p> <p>Suppose $\{F_n\}_{n \in \mathbb{N}}$ is a countable open cover of $A$ without a finite subcover (i.e. $A$ is not compact). (A metric space in which every infinite subset has a limit point is separable, so every open cover has a countable subcover; considering countable covers is therefore no loss of generality.) Then define $G_n = (\bigcup_{i=1}^n F_i)^c$. For each $n$, $A \cap G_n$ must be nonempty, else the finite subcollection $F_1 \cup ... \cup F_n$ would be a finite subcover of $A$.</p> <p>Now define $E = \{x_n\}_{n \in \mathbb{N}}$ such that $x_i \in A \cap G_i$. Clearly this set is infinite by what we have shown above, and by hypothesis it has a limit point $x \in A$. Then $x \in F_m$ for some $m$. Since $F_m$ is open, there exists an $\epsilon \gt 0$ such that $B_\epsilon(x) \subset F_m$. By how we have constructed $E$, there can only be finitely many points of $E$ in $B_\epsilon(x)$, since no point $x_n$ of $E$ with $n \gt m$ is in $F_m$. But then $x$ is not a limit point of $E$, since every neighborhood of a limit point of a set must contain infinitely many elements of the set. Contradiction.</p> <p>Thus there must be a finite subcover, and since $\{F_n\}_{n \in \mathbb{N}}$ was arbitrary, we can conclude $A$ is compact. </p>
2,066,716
<p>I know how to prove that every infinite subset $A$ of a compact set in a metric space satisfies $A'\ne\emptyset$, but my book also claims the opposite implication, without proof, stating "it falls outside the scope of this book". I've been searching online but I could only find the "standard" result. </p>
DanielWainfleet
254,665
<p>Let $(X,d)$ be a metric space.</p> <p>(1). If $D$ is a dense subset of $X$ then $\mathbb B=\{B_d(x,q): x\in D\land q\in \mathbb Q^+\}$ is a base for $X.$ Note that if $D$ is countable then $\mathbb B$ is countable. </p> <p>(2). If $X$ has a countable base then every open cover of $X$ has a countable sub-cover. </p> <p>Let $(X,d)$ be a metric space such that $A'\ne \emptyset$ whenever $A$ is an infinite subset of $X.$</p> <p>(i). For $q\in \mathbb Q^+$ let $S_q\subset X$ such that $\{B_d(s,q):s\in S_q\}$ is a maximal family of pair-wise disjoint open balls, where $B_d(s,q)\ne B_d(t,q)$ for distinct $s,t\in S_q$. Then $S_q$ is finite (because $S_q'$ is empty, since $d(s,t)\geq q$ for distinct $s,t\in S_q$).</p> <p>(ii). Let $D=\cup_{q\in \mathbb Q^+}S_q$ . Then $D$ is countable.</p> <p>$D$ is dense in $X$. Because otherwise $B_d(x,r)\cap D=\emptyset$ for some $x\in X$ and some $r&gt;0.$ But take $q\in \mathbb Q\cap (0,r/2).$ Then $B_d(x,q)\cap B_d(s,q)=\emptyset$ for all $s\in S_q,$ contradicting the maximality of $\{B_d(s,q):s\in S_q\}.$</p> <p>By (1), let $\mathbb B$ be a countable base for $X.$ </p> <p>(iii). Let $C$ be an open cover of $X.$ By (2), let $C'=\{c_n: n\in \mathbb N\}$ be a countable subcover. Suppose, by contradiction, that $C'$ has no finite subcover.</p> <p>Then for $n\in \mathbb N$ let $x_n\in X$ \ $\cup_{j=1}^nc_j.$ For each $n$ there are only finitely many $n'$ for which $x_n=x_{n'}$ because $x_n\in c_m$ for some $m,$ and $x_{n'}\not \in c_m$ for all $n'\geq m.$ </p> <p>So $A=\{x_n: n\in \mathbb N\}$ is an infinite set. By hypothesis, $A$ has an accumulation point $p.$ Now $p\in c_m$ for some $m,$ so $A\cap c_m$ is infinite, so $\{n: x_n\in c_m\}$ is infinite. But this implies there exists $n&gt;m$ with $x_n\in c_m\subset \cup_{j=1}^mc_j,$ contrary to the def'n of $x_n .$</p> <p>Therefore $C'$ has a finite sub-cover. </p> <p>Remark: The Q is equivalent to: A non-compact metric space has an infinite closed discrete subspace. This is useful in other q's. 
For example we can use it to prove that a non-compact metrizable space has an unbounded metric. Another example: The $\epsilon$-order topology on $\omega_1$ is not metrizable, because it is non-compact but has no infinite closed discrete subspace. </p>
24,305
<p>I have several functions, let's assume they are:</p> <pre><code>func1[x_]=x; func2[x_]=3*x-5; func3[x_]=0.1*x^2; </code></pre> <p>and a lot more like these.</p> <p>For each and every one of these I want to do the following</p> <pre><code>xvalues = Range[0, 500, 2.5]; points1 = Map[func1, xvalues]; Do[If[points1[[i]] &lt; 0, points1[[i]] = 0], {i, 1, Length[points1]}]; table1 = Transpose[{xvalues, points1}]; </code></pre> <p>Now seeing as I have a lot of these functions, is there any way to automate this in some kind of routine? </p> <p>While answering, please be aware that I don't really have any extensive Mathematica knowledge.</p>
bill s
1,783
<p>How about:</p> <pre><code>apply[func_] := Module[{}, xvalues = Range[0, 500, 2.5]; points1 = Map[func, xvalues]; Do[If[points1[[i]] &lt; 0, points1[[i]] = 0], {i, 1, Length[points1], 1}]; table1 = Transpose[{xvalues, points1}]]; </code></pre> <p>Now you call the function apply with your desired funcX as an argument</p> <pre><code>apply[func1] </code></pre> <p>Or you can automate this by defining</p> <pre><code>allFuncs = {func1, func2, func3, func4}; </code></pre> <p>and then </p> <pre><code>apply/@allFuncs </code></pre> <p>will run them all.</p>
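<p>As a side note, the explicit <code>Do</code> loop can be avoided: <code>Clip</code> clamps values into a range. A sketch of an alternative (assuming the same <code>xvalues</code> setup; <code>apply2</code> is just a hypothetical name):</p>

```mathematica
apply2[func_] := Module[{xvalues = Range[0, 500, 2.5]},
  Transpose[{xvalues, Clip[func /@ xvalues, {0, Infinity}]}]]
```

<p>Here <code>Clip[values, {0, Infinity}]</code> replaces every negative entry by 0 and leaves the rest unchanged, and <code>Module</code> keeps <code>xvalues</code> local instead of writing global variables.</p>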
1,021,599
<p>Let $X$ be a metric space and $q \in X$. I want to show that the distance function $d(q,p)$ is a uniformly continuous function of $p$. </p> <p>I know how to show that $d$ is continuous, but I am stuck on how to show UC. </p> <p>Given $\epsilon &gt;0$ let $\delta =?$. Then if $d(x,y) &lt;\delta$, then $|d(q,x)-d(q,y)|&lt;\epsilon$. </p> <p>I cannot figure out how to choose $\delta$. </p> <p>Please help :). Thank you. </p>
Suzu Hirose
190,784
<p>$d(q,x) \leq d(q,y) + d(y,x)$ and $d(q,y) \leq d(q,x)+d(x,y)$, so $|d(q,x)-d(q,y)| \leq d(x,y) &lt;\epsilon$ whenever $d(x,y)&lt;\epsilon$. In other words, you can take $\delta=\epsilon$, independently of the points — which is exactly uniform continuity.</p>
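<p>A small numeric illustration of why $\delta=\epsilon$ works (a Python sketch, taking the Euclidean metric on $\mathbb{R}^2$ as an arbitrary example of a metric space):</p>

```python
import math
import random

def d(p, q):
    # Euclidean metric on R^2
    return math.hypot(p[0] - q[0], p[1] - q[1])

random.seed(0)
q = (0.0, 0.0)
for _ in range(1000):
    x = (random.uniform(-10, 10), random.uniform(-10, 10))
    y = (random.uniform(-10, 10), random.uniform(-10, 10))
    # reverse triangle inequality: p -> d(q, p) is 1-Lipschitz
    assert abs(d(q, x) - d(q, y)) <= d(x, y) + 1e-12
print("no counterexample found")
```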
4,613,982
<p>Calculate the integral <span class="math-container">$$\int_{-\infty }^{\infty } \frac{\sin(\Omega x)}{x\,(x^2+1)} dx$$</span> given <span class="math-container">$$\Omega \gg 1$$</span></p> <p><a href="https://i.stack.imgur.com/mPVqq.jpg" rel="nofollow noreferrer">I tried but couldn't find C1</a></p>
Abezhiko
1,133,926
<p>As you solved a differential equation with respect to <span class="math-container">$\Omega$</span>, <span class="math-container">$C_1$</span> is a constant of integration for this variable, which is why you need an initial/boundary condition such as <span class="math-container">$I(\Omega=0) = -\pi + C_1 = 0$</span>.</p> <hr /> <p><strong>Addendum</strong> I don't know how you solved the integral <span class="math-container">$\int_\mathbb{R}\frac{\cos(\Omega x)}{x^2+1}\mathrm{d}x$</span> in the middle of your derivation; all integrals of this kind can be tackled pretty easily thanks to complex integration and residues.</p> <p>In your case, you would start by noticing that <span class="math-container">$\int_\mathbb{R}\frac{\sin(\Omega x)}{x(x^2+1)}\mathrm{d}x = \mathcal{Im}\left(\int_\mathbb{R}\frac{e^{i\Omega z}}{z(z^2+1)}\mathrm{d}z\right)$</span>, whose integrand has three simple poles at <span class="math-container">$z = 0,\pm i$</span>. The residues of the poles with a non-negative imaginary part are given by <span class="math-container">$$ \begin{array}{l} \mathrm{Res}_{z=0}\left(\frac{e^{i\Omega z}}{z(z^2+1)}\right) = \lim_{z\rightarrow0} \frac{e^{i\Omega z}}{(z^2+1)} = 1 \\ \mathrm{Res}_{z=i}\left(\frac{e^{i\Omega z}}{z(z^2+1)}\right) \,= \lim_{z\rightarrow i} \frac{e^{i\Omega z}}{z(z+i)} \;= -\frac{1}{2}e^{-\Omega} \end{array} $$</span> hence finally (the pole at <span class="math-container">$0$</span> lies on the contour, so it contributes half a residue, <span class="math-container">$\pi i$</span>, in the principal-value sense) <span class="math-container">$$ \int_\mathbb{R}\frac{\sin(\Omega x)}{x(x^2+1)}\mathrm{d}x = \mathcal{Im}\left(\pi i\cdot1 + 2\pi i\left(-\frac{1}{2}e^{-\Omega}\right)\right) = \pi\left(1-e^{-\Omega}\right) $$</span></p>
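<p>The closed form can be checked numerically with a plain trapezoidal rule (a Python sketch; the cutoff and step count are ad-hoc choices, the tail beyond the cutoff being $O(1/X^2)$, and evenness of the integrand doubles the half-line value):</p>

```python
import math

def half_line_integral(omega, cutoff=100.0, n=400000):
    """Trapezoidal rule for int_0^cutoff sin(omega x)/(x (x^2+1)) dx.

    The integrand extends continuously to x = 0 with value omega.
    """
    h = cutoff / n
    end = math.sin(omega * cutoff) / (cutoff * (cutoff**2 + 1))
    total = 0.5 * (omega + end)
    for i in range(1, n):
        x = i * h
        total += math.sin(omega * x) / (x * (x * x + 1))
    return total * h

omega = 5.0
numeric = 2 * half_line_integral(omega)       # the integrand is even
closed = math.pi * (1 - math.exp(-omega))
print(numeric, closed)
```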
3,867,834
<p>I gotta find the value of <span class="math-container">$x+y$</span> in the following image</p> <p><a href="https://i.stack.imgur.com/j9RPH.png" rel="noreferrer"><img src="https://i.stack.imgur.com/j9RPH.png" alt="enter image description here" /></a></p> <p>I have no info about whether any point is the midpoint of a segment, or even whether the figure is a square. I can prove that <span class="math-container">$$x-y=30°$$</span> and I tried to use similarity between triangles (and hence parallelism), but I got no clue at all.</p>
WA Don
542,712
<p>The link <a href="https://en.wikipedia.org/wiki/Trapezoidal_rule#Error_analysis" rel="nofollow noreferrer">Trapezoid rule</a> shows the error term is bounded but is a little unsatisfactory since it does not prove equality in this particular case.</p> <p>Simplifying to the interval [0,1], if we assume <span class="math-container">$f$</span> is twice continuously differentiable on <span class="math-container">$[0,1]$</span> then, using integration by parts, we can proceed thus, <span class="math-container">\begin{aligned} \int_0^1 f(x)~dx &amp;= \Big[ \big(x-\tfrac{1}{2}\big) f(x) \Big]_0^1 - \int_0^1\big(x-\tfrac{1}{2}\big) f'(x) ~ dx \\ &amp;=\frac{f(1)+f(0)}{2}-\Big[\big(\tfrac{1}{2}x^2-\tfrac{1}{2}x\big)f'(x)\Big]_0^1+\int_0^1 \big(\tfrac{1}{2}x^2-\tfrac{1}{2}x\big)f''(x)~dx \\ &amp;= \frac{f(1)+f(0)}{2}+\int_0^1 \big(\tfrac{1}{2}x^2-\tfrac{1}{2}x\big)f''(x)~dx \end{aligned}</span> Now, <span class="math-container">$f''(x)$</span> is continuous and therefore bounded, say <span class="math-container">$m \leqslant f''(x) \leqslant M$</span>. The expression <span class="math-container">$\tfrac{1}{2}x^2-\tfrac{1}{2}x \leqslant 0 $</span> throughout <span class="math-container">$[0,1]$</span>, and therefore <span class="math-container">\begin{aligned} M\int_0^1 \big(\tfrac{1}{2}x^2-\tfrac{1}{2}x \big)~dx \leqslant \int_0^1 \big(\tfrac{1}{2}x^2-\tfrac{1}{2}x \big)f''(x) ~dx \leqslant m \int_0^1 \big(\tfrac{1}{2}x^2-\tfrac{1}{2}x \big)~dx, \end{aligned}</span> where the inequalities can be taken as strict unless <span class="math-container">$f''$</span> is constant. 
This becomes, <span class="math-container">\begin{aligned} -\frac{M}{12} \leqslant \int_0^1 \big(\tfrac{1}{2}x^2-\tfrac{1}{2}x\big)f''(x)~dx \leqslant -\frac{m}{12}. \end{aligned}</span> Moreover, being continuous, <span class="math-container">$f''$</span> takes every value between <span class="math-container">$m$</span> and <span class="math-container">$M$</span>, so there exists <span class="math-container">$\xi\in(0,1)$</span><sup>1</sup> such that <span class="math-container">\begin{aligned} \int_0^1 \big(\tfrac{1}{2}x^2-\tfrac{1}{2}x\big)f''(x)~dx = -\frac{1}{12}f''(\xi), \end{aligned}</span> which completes the proof: there exists <span class="math-container">$\xi\in(0,1)$</span> such that <span class="math-container">\begin{aligned} \int_0^1 f(x) ~dx = \frac{f(0)+f(1)}{2} - \frac{1}{12} f''(\xi). \end{aligned}</span> The result for the case <span class="math-container">$[a,b]$</span> can be derived through a change of variable.</p> <p>Corrected to refer to <span class="math-container">$f''$</span> consistently</p> <hr /> <p>Footnote: if <span class="math-container">$f''$</span> is not constant we can choose <span class="math-container">$\xi$</span> in the interior of <span class="math-container">$[0,1]$</span> because the inequalities are then strict; if <span class="math-container">$f''$</span> is constant any point <span class="math-container">$\xi$</span> will serve.</p>
7,715
<p>I am starting a PhD program and am supposed to read Colliot-Thélène and Sansuc's article on R-equivalence for tori. I find it very difficult, and although I have some knowledge of schemes, I am completely baffled by this scalar restriction business of having a field extension $K/k$, a torus over $K$, and "restricting" it to $k$. I would be very grateful for a reference or, even better, some explanation. I found nothing in my standard books (Hartshorne, Qing Liu, Mumford, etc.), so I hope this question is appropriate for the site. Thank you.</p>
Evgeny Shinder
2,260
<p>It's not that hard at all. Here is an example. Let $k = \mathbf R$ and $K = \mathbf C$. Consider a 1-dimensional torus $G_m$ over $\mathbf C$. It is basically the group $\mathbf C^*$ over $\mathbf C$. </p> <p>Now $G = Res_{\mathbf C/\mathbf R} G_m$ is the same group $\mathbf C^*$ considered as a group over $\mathbf R$.</p>
7,715
<p>I am starting a PhD program and am supposed to read Colliot-Thélène and Sansuc's article on R-equivalence for tori. I find it very difficult, and although I have some knowledge of schemes, I am completely baffled by this scalar restriction business of having a field extension $K/k$, a torus over $K$, and "restricting" it to $k$. I would be very grateful for a reference or, even better, some explanation. I found nothing in my standard books (Hartshorne, Qing Liu, Mumford, etc.), so I hope this question is appropriate for the site. Thank you.</p>
Mikhail Bondarko
2,191
<p>There is another description for tori. The category of tori over a field F is equivalent to the category of finite-dimensional $G_F$-lattices. Now, there is an operation of induction for group representations that converts a $G_K$-lattice into a $G_k$-lattice; this is the lattice you need.</p> <p>See <a href="http://en.wikipedia.org/wiki/Algebraic_torus">http://en.wikipedia.org/wiki/Algebraic_torus</a></p>
1,862,108
<p>This question is related to maths, so I post here. Actually it's a computer science question I am facing while learning Design and Analysis of Algorithms, but computer science is, of course, closely related to maths. </p> <p>Arrange the following functions in increasing order of growth rate (with $g(n)$ following $f(n)$ in your list if and only if $f(n)=O(g(n))$).</p> <p>a) $2^{log(n)}$<br> b) $2^{2log(n)}$<br> c) $n^5/2$<br> d) $2^{n^2}$<br> e) $n^2 log(n)$ </p> <p>So I think the answer, in increasing order, is CEDAB.</p> <p>Is it correct? I am confused about options A and B. I think option A should be first... the one with the lower growth rate, I mean. Please help me solve this. I faced this question in an algorithms course assignment (part 1, Coursera).</p>
J Tim
354,581
<p>Okay, so $2^{log(n)} &lt; n$ because the logarithm base is greater than 2. Now you might want to see that $2^{2 log(n)} = (2^{log(n)})^2 $ to realise that B is faster growing than A. </p> <p>Exponential growth is always faster than polynomial, so D has to be the fastest growing one. Furthermore, C > E because $n &gt; log(n)$ (the factor $\frac{1}{2}$ has absolutely no effect here). Finally, E > B, because $n^2 &gt; x^2$, where $x&lt;n$ (see the equation on my first line).</p> <p>So it is ABECD </p>
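<p>The claimed order ABECD can be sanity-checked by comparing logarithms at a large $n$ (a Python sketch; it assumes natural logarithms and reads (c) as $n^{5/2}$ — the ordering comes out the same under either reading of (c)):</p>

```python
import math

def log_values(n):
    """Natural log of each candidate function, in the problem's order a..e."""
    ln = math.log(n)
    a = ln * math.log(2)          # log of 2^{log n}
    b = 2 * ln * math.log(2)      # log of 2^{2 log n}
    c = 2.5 * ln                  # log of n^{5/2}
    d = n * n * math.log(2)       # log of 2^{n^2}
    e = 2 * ln + math.log(ln)     # log of n^2 log n
    return a, b, c, d, e

a, b, c, d, e = log_values(10**6)
print(a < b < e < c < d)          # the claimed order ABECD
```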
2,791,914
<ul> <li>$\displaystyle \int_0^\infty \frac{\arctan\frac{x^3}{1+x^2}}{x^2} \, dx$ </li> </ul> <hr> <p>So I know that this one converges from 1 to infinity (by the Dirichlet rule), but I'm not sure about the 0 to 1 segment. I think it converges as well, but can't prove it myself. Any suggestions?</p>
user
505,767
<p><strong>HINT</strong></p> <p>Note that for $x\to 0$</p> <p>$$\frac{\arctan\frac{x^3}{1+x^2}}{x^2}=\frac{\arctan\frac{x^3}{1+x^2}}{\frac{x^3}{1+x^2}}\frac{x}{1+x^2}\to 1\cdot 0=0$$</p>
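<p>A quick numeric check of this hint (a Python sketch): near $0$ the integrand behaves like $x$, so it stays bounded on $(0,1]$ and the integral converges there.</p>

```python
import math

def integrand(x):
    return math.atan(x**3 / (1 + x**2)) / x**2

# near 0 the integrand behaves like x, hence is bounded on (0, 1]
for x in (1e-1, 1e-2, 1e-3, 1e-4):
    print(x, integrand(x))
```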
4,133,782
<p>I am having trouble finding a formula that connects the two and can produce an answer. Anyone know how this is done? I tried y=mx+b, with m=3 and b=5-a, but I don't know what to do next, or whether I even started right.</p>
hm2020
858,083
<p><strong>Question:</strong> &quot;I understand why there can be at most one solution for a full column rank but how does that lead to A having a left inverse? I'd be grateful if someone could help or hint at the answer.&quot;</p> <p><strong>Answer:</strong> Let <span class="math-container">$k$</span> be the field of real numbers and <span class="math-container">$V:=k^n, W:=k^m$</span>. Since the equation <span class="math-container">$Ax=b$</span> has at most one solution for every <span class="math-container">$b\in W$</span>, the equation <span class="math-container">$Ax=0$</span> has a unique solution, namely <span class="math-container">$x=0$</span>. Hence the map <span class="math-container">$A: V \rightarrow W$</span> is an injection and it follows that <span class="math-container">$n \leq m$</span>. Identifying <span class="math-container">$V$</span> with its image <span class="math-container">$A(V) \subseteq W$</span>, we get an exact sequence of <span class="math-container">$k$</span>-vector spaces</p> <p><span class="math-container">$$0 \rightarrow V \rightarrow W \rightarrow^p W/V \rightarrow 0$$</span></p> <p>and we may choose a section <span class="math-container">$s$</span> of <span class="math-container">$p$</span>. This is a <span class="math-container">$k$</span>-linear map <span class="math-container">$s: W/V \rightarrow W$</span> with <span class="math-container">$ p \circ s = Id$</span>. This gives an idempotent endomorphism of <span class="math-container">$W$</span>: <span class="math-container">$u:=s \circ p$</span> with <span class="math-container">$u^2=u$</span>. From this it follows that we may write</p> <p><span class="math-container">$$W \cong V \oplus Im(u)$$</span></p> <p>and the projection map <span class="math-container">$p_V: W \cong V \oplus Im(u) \rightarrow V$</span> is a left inverse to the inclusion map defined by the matrix <span class="math-container">$A$</span>. If you choose a basis for <span class="math-container">$W$</span> you get a matrix <span class="math-container">$B$</span> with <span class="math-container">$BA=Id_n$</span></p>
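<p>A concrete finite-dimensional illustration (a Python sketch with an arbitrary full-column-rank $3\times 2$ matrix): one explicit left inverse is $B=(A^TA)^{-1}A^T$, which exists precisely because full column rank makes $A^TA$ invertible.</p>

```python
# Left inverse of a full-column-rank 3x2 matrix A via B = (A^T A)^{-1} A^T.
A = [[1.0, 0.0],
     [0.0, 1.0],
     [1.0, 1.0]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

def inv2(M):
    # inverse of a 2x2 matrix; the determinant is nonzero exactly
    # because A has full column rank (so A^T A is invertible)
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

At = transpose(A)
B = matmul(inv2(matmul(At, A)), At)   # a 2x3 left inverse
I = matmul(B, A)                      # should be the 2x2 identity
print(I)
```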
301,662
<p>This is a challenging puzzle I heard from my little brother.</p> <p>For some $n$ and $x$, $\sum_{k=1}^n \sin^{2k}(x) = 2013$.</p> <p>Is it possible to deduce $$\sum_{k=1}^n \cos^{2k}(x) \text{ ?}$$</p> <p>Edit: I've just noticed something which now seems obvious to me.<br> Choose $n = 2013$ and $x = \pi/2$, which satisfies the condition. It follows that the cosine terms would sum to zero. I'm not sure this is a unique solution.</p>
Community
-1
<p>Let $r=\sin^2(x)$; we have</p> <p>$$\sum_{k=1}^n r^k=\frac{r(1-r^n)}{1-r}=2013$$ Now we want: $$\sum_{k=1}^n (1-r)^k=\frac{(1-r)(1-(1-r)^n)}{r}$$</p> <p>We can deduce: $$2013\sum_{k=1}^n (1-r)^k=(1-r^n)(1-(1-r)^n)$$</p>
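As a quick sanity check (not part of the original answer), the underlying relation $\left(\sum_{k=1}^n r^k\right)\left(\sum_{k=1}^n (1-r)^k\right)=(1-r^n)(1-(1-r)^n)$, of which the displayed equation is the special case $\sum r^k = 2013$, can be verified with exact rational arithmetic:

```python
from fractions import Fraction

def check_relation(r, n):
    """Verify sum(r^k) * sum((1-r)^k) == (1 - r^n) * (1 - (1-r)^n) exactly."""
    s1 = sum(r**k for k in range(1, n + 1))        # plays the role of 2013
    s2 = sum((1 - r)**k for k in range(1, n + 1))
    return s1 * s2 == (1 - r**n) * (1 - (1 - r)**n)

# a few arbitrary rational values of r = sin^2(x) strictly between 0 and 1
results = [check_relation(Fraction(p, q), n)
           for p, q in [(1, 3), (2, 5), (7, 9)]
           for n in range(1, 8)]
```

The check is exact because `Fraction` avoids any floating-point rounding.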
919,572
<p>Do you know any nice way of expressing </p> <p>$$\sum_{k=0}^{n} \frac{H_{k+1}}{n-k+1}$$ ?</p> <p>Some simple manipulations involving the integrals lead to an expression that also uses<br> the hypergeometric series. Is there any way of getting a form that doesn't use the HG function?</p>
Alex
38,873
<p>This is a really rough approximation: write $H_{k+1} \sim \log (k+1) \sim \log k + \frac{1}{k}$ then you get two sums: \begin{align} &amp;\sum_{k=1}^{n}\frac{\log k }{n-k+1} + \sum_{k=1}^{n} \frac{1}{k(n-k+1)}\\ &amp;\leq \log n \sum_{k=1}^{n}\frac{1}{n-k+1} + \sum_{k=1}^{n} \frac{1}{k(n-k+1)} \\ &amp;\sim \log^2n + \frac{H_n}{n+1} \end{align}</p>
919,572
<p>Do you know any nice way of expressing </p> <p>$$\sum_{k=0}^{n} \frac{H_{k+1}}{n-k+1}$$ ?</p> <p>Some simple manipulations involving the integrals lead to an expression that also uses<br> the hypergeometric series. Is there any way of getting a form that doesn't use the HG function?</p>
robjohn
13,854
<p>Using small steps: $$ \begin{align} \sum_{k=1}^{n+1}\frac{H_k}{n-k+2} &amp;=\sum_{k=1}^{n+1}\sum_{j=1}^k\frac1j\frac1{n-k+2}\tag{1}\\ &amp;=\sum_{j=1}^{n+1}\sum_{k=j}^{n+1}\frac1j\frac1{n-k+2}\tag{2}\\ &amp;=\sum_{j=1}^{n+1}\sum_{k=j}^{n+1}\frac1j\frac1{k-j+1}\tag{3}\\ &amp;=\sum_{k=1}^{n+1}\sum_{j=1}^k\frac1j\frac1{k-j+1}\tag{4}\\ &amp;=\sum_{k=1}^{n+1}\frac1{k+1}\sum_{j=1}^k\left(\frac1j+\frac1{k-j+1}\right)\tag{5}\\ &amp;=\sum_{k=1}^{n+1}\frac1{k+1}\sum_{j=1}^k\frac2j\tag{6}\\ &amp;=\sum_{k=1}^{n+1}\frac1{k+1}\sum_{j=1}^k\frac1j +\sum_{j=1}^{n+1}\frac1{j+1}\sum_{k=1}^j\frac1k\tag{7}\\ &amp;=\color{#C00000}{\sum_{k=1}^{n+1}\frac1{k+1}\sum_{j=1}^k\frac1j} +\color{#00A000}{\sum_{k=1}^{n+1}\frac1k\sum_{j=k}^{n+1}\frac1{j+1}}\\ &amp;+\color{#0000FF}{\sum_{k=1}^{n+2}\frac1{k^2}}-\sum_{k=1}^{n+2}\frac1{k^2}\tag{8}\\ &amp;=\left(\sum_{k=1}^{n+2}\frac1k\right)^2-\sum_{k=1}^{n+2}\frac1{k^2}\tag{9}\\[6pt] &amp;=H_{n+2}^2-H_{n+2}^{(2)} \end{align} $$ Explanation:<br> $(1)$: $H_k=\sum\limits_{j=1}^k\frac1j$<br> $(2)$: change order of summation<br> $(3)$: substitute $k\mapsto n+j+1-k$<br> $(4)$: change order of summation<br> $(5)$: $\frac1j\frac1{k-j+1}=\frac1{k+1}\left(\frac1j+\frac1{k-j+1}\right)$<br> $(6)$: $\sum\limits_{j=1}^k\frac1j=\sum\limits_{j=1}^k\frac1{k-j+1}$<br> $(7)$: substitute $j\leftrightarrow k$ in the right sum<br> $(8)$: change order of summation in the green sum; add $0$<br> $(9)$: red, green, blue sum covers first term bigger, smaller, equal </p>
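The closed form $H_{n+2}^2-H_{n+2}^{(2)}$ can be verified with exact rational arithmetic (a quick check, not part of the original answer):

```python
from fractions import Fraction

def H(m):
    """Harmonic number H_m as an exact rational."""
    return sum(Fraction(1, j) for j in range(1, m + 1))

def H2(m):
    """Generalized harmonic number H_m^(2)."""
    return sum(Fraction(1, j * j) for j in range(1, m + 1))

def lhs(n):
    # sum_{k=0}^{n} H_{k+1} / (n - k + 1), exactly as in the question
    return sum(H(k + 1) / (n - k + 1) for k in range(0, n + 1))

def rhs(n):
    return H(n + 2) ** 2 - H2(n + 2)

ok = all(lhs(n) == rhs(n) for n in range(0, 12))
```

For instance, $n=1$ gives $H_1/2+H_2/1=2$ on the left and $H_3^2-H_3^{(2)}=\frac{121}{36}-\frac{49}{36}=2$ on the right.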
2,756,332
<p>It is <a href="https://math.stackexchange.com/questions/985879/relation-between-trace-and-rank-for-projection-matrices">not difficult</a> to show that if $A \in M_n(k)$ for some field $k$, and $A^2=A$ then $\operatorname{tr}(A) = \dim(\operatorname{Im}(A))$</p> <p>In <a href="https://mathoverflow.net/questions/13526/geometric-interpretation-of-trace/13530#comment447113_13527">this comment</a>, it is written:</p> <blockquote> <p>This property, together with linearity, determines the trace uniquely, and so one can view the trace as the linearised version of the dimension-counting operator.</p> </blockquote> <p>What does it mean precisely? If $f : M_n(k) \to k$ is $k$-linear (say with $k = \Bbb C$), and $f(A) = \dim(\operatorname{Im}(A))$ for any $A^2= A \in M_n(k)$, then we have that $f$ is the trace function?</p>
user1551
1,551
<p>Yes. More specifically, over any field $k$, regardless of its characteristic or algebraic closedness (or the lack of it), if $f : M_n(k) \to k$ is $k$-linear and $f(A)=\operatorname{rank}(A)$ (modulo $\operatorname{char}(k)$ if $k$ has finite characteristic) for every projection matrix $A$, then $f$ is necessarily the trace function.</p> <p>Denote by $E_{ij}$ the matrix whose only nonzero entry is a $1$ at the $(i,j)$-th position. The assumption on $f$ implies that $f(E_{ii})=1$ for each $i$. Since $$ E_{12}=\left(\pmatrix{0&amp;1\\ 0&amp;1}\oplus I_{n-2}\right) - \operatorname{diag}(0,1,\ldots,1) $$ is a difference of two projections of equal ranks, we also have $f(E_{12})=0$ and similarly $f(E_{ij})=0$ whenever $i\ne j$. The linearity of $f$ thus implies that $f$ is the trace function.</p>
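For $n=3$, the decomposition used in the proof can be verified directly (plain-Python matrices; a sketch, not part of the original answer):

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

# P1 = [[0,1],[0,1]] (+) I_1 and P2 = diag(0,1,1): both projections of rank 2
P1 = [[0, 1, 0], [0, 1, 0], [0, 0, 1]]
P2 = [[0, 0, 0], [0, 1, 0], [0, 0, 1]]

idem1 = matmul(P1, P1) == P1
idem2 = matmul(P2, P2) == P2

# their difference is E_12, so any linear f with f(projection) = rank
# must satisfy f(E_12) = 2 - 2 = 0
E12 = [[P1[i][j] - P2[i][j] for j in range(3)] for i in range(3)]
```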
1,307,069
<p>Let's look at $f(x)=\cos(x)$ defined on the interval $[0,\pi]$.</p> <p>We know that for any function $g$ defined on $[0,\pi]$ we have:</p> <p>$g(x)=\sum_{k=1}^{\infty}B_k\sin(kx)$ where $B_k=\frac{2}{\pi}\int_{0}^{\pi}g(x)\sin(kx)dx$. And $f$ is no different. So in our case:</p> <p>$B_k=\frac{2}{\pi}\int_{0}^{\pi}f(x)\sin(kx)dx=\frac{2}{\pi}\int_{0}^{\pi}\cos(x)\sin(kx)dx$</p> <p>It can be shown that $\int\cos(x)\sin(kx)dx=\frac{\sin(x)\sin(kx)+k\cos(x)\cos(kx)}{1-k^2}$, so:</p> <p>$B_k=\frac{2}{\pi}\int_{0}^{\pi}\cos(x)\sin(kx)dx=\frac{2}{\pi}\frac{-k\cos(k\pi)-k}{1-k^2}=\frac{2}{\pi}\frac{(-1)^{k+1}k-k}{1-k^2}$</p> <p>So overall we should have $f(x)=\cos(x)=\sum_{k=1}^{\infty}\frac{2}{\pi}\frac{(-1)^{k+1}k-k}{1-k^2}\sin(kx)$. But that clearly can't be true, because $\cos(0)=1$ but that sum is equal to $0$ at $x=0$ since $\sin(0)=0$.</p> <p>Where is the mistake? And not only that, we seem to have a big problem when $k=1$.</p>
Rory Daulton
161,807
<p><em>Any</em> sum of sines such as $\sin kx$ will give zero for $x=0$, since $\sin (k\cdot 0)=0$ for any $k$.</p> <p>Therefore, you cannot represent the cosine function as a sum of sines of the form $\sin kx$. That's why the usual Fourier series uses both sines and cosines, or the equivalent of $e^{ikx}$. Making you realize that may be the point of this exercise.</p> <hr> <p>Other answers and comments point out that you can get an <em>approximation</em> of the cosine function, namely the restriction of cosine to the domain $(0,\pi)$. Outside that domain, the function is discontinuous at zero and $\pi$ and is an odd function so it does not approximate cosine at all for negative values of the variable. Again, using cosine functions as part of the Fourier series greatly lessens these limitations, and realizing that may be the point of the exercise.</p>
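The coefficient formula from the question can be confirmed numerically for $k\ge 2$ (a quick check, not part of the original answer; at $k=1$ the closed form is invalid, and the coefficient is in fact $0$):

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

def B_closed(k):
    # the closed form derived in the question, valid for k >= 2
    return (2 / math.pi) * ((-1) ** (k + 1) * k - k) / (1 - k * k)

def B_numeric(k):
    return (2 / math.pi) * simpson(lambda x: math.cos(x) * math.sin(k * x),
                                   0.0, math.pi)

max_err = max(abs(B_closed(k) - B_numeric(k)) for k in range(2, 8))

# k = 1 separately: the integral of cos(x) sin(x) over [0, pi] is 0
B1 = simpson(lambda x: math.cos(x) * math.sin(x), 0.0, math.pi)
```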
1,307,069
<p>Let's look at $f(x)=\cos(x)$ defined on the interval $[0,\pi]$.</p> <p>We know that for any function $g$ defined on $[0,\pi]$ we have:</p> <p>$g(x)=\sum_{k=1}^{\infty}B_k\sin(kx)$ where $B_k=\frac{2}{\pi}\int_{0}^{\pi}g(x)\sin(kx)dx$. And $f$ is no different. So in our case:</p> <p>$B_k=\frac{2}{\pi}\int_{0}^{\pi}f(x)\sin(kx)dx=\frac{2}{\pi}\int_{0}^{\pi}\cos(x)\sin(kx)dx$</p> <p>It can be shown that $\int\cos(x)\sin(kx)dx=\frac{\sin(x)\sin(kx)+k\cos(x)\cos(kx)}{1-k^2}$, so:</p> <p>$B_k=\frac{2}{\pi}\int_{0}^{\pi}\cos(x)\sin(kx)dx=\frac{2}{\pi}\frac{-k\cos(k\pi)-k}{1-k^2}=\frac{2}{\pi}\frac{(-1)^{k+1}k-k}{1-k^2}$</p> <p>So overall we should have $f(x)=\cos(x)=\sum_{k=1}^{\infty}\frac{2}{\pi}\frac{(-1)^{k+1}k-k}{1-k^2}\sin(kx)$. But that clearly can't be true, because $\cos(0)=1$ but that sum is equal to $0$ at $x=0$ since $\sin(0)=0$.</p> <p>Where is the mistake? And not only that, we seem to have a big problem when $k=1$.</p>
orion
137,195
<p>Developing a sine series makes an <em>odd</em> periodic extension of the function. So a better question would be, which zero are you talking about? You have a discontinuity at $0$, so it depends on which side you approach from: $$f(0^+)=1$$ and $$f(0^-)=-1$$</p> <p>The Fourier series, when faced with a discontinuity, gives you an average value at the point of discontinuity (Dirichlet theorem) plus Gibbs oscillations around it.</p> <p>So, your series is OK, it's just that your function is ugly.</p>
1,746,748
<p>My calculus teacher gave us this problem in class:</p> <p>Which is easier to integrate?</p> <p>$$\int \sin^{100}x\cos x dx$$</p> <p>or</p> <p>$$\int \sin^{50}xdx$$</p> <p>By easier, I assume the teacher means which integral would take less work. I'm unsure of how to approach this problem because of the relatively large exponents. I would guess the second because it has smaller exponents but I'm not sure.</p>
SchrodingersCat
278,967
<p>Obviously, the first one. $$\int \sin^{100}x \cos x \,\, dx=\int \sin^{100}x \, \,d(\sin x)=\frac{\sin^{101}x}{101}+c$$</p> <p>And the other integral requires more than <em>just</em> $2$ steps; you may try any method you like. (Of course, "number of steps" is informal, so a claim like this cannot be rigorously proved.)</p>
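As a numerical sanity check (not part of the original answer), the antiderivative $\sin^{101}x/101$ can be compared against a Simpson's-rule approximation of the integral on an arbitrary interval (the endpoints $0.2$ and $1.4$ are arbitrary choices):

```python
import math

def simpson(f, a, b, n=20000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

def F(x):
    # the claimed antiderivative sin(x)^101 / 101
    return math.sin(x) ** 101 / 101

a, b = 0.2, 1.4
numeric = simpson(lambda x: math.sin(x) ** 100 * math.cos(x), a, b)
exact = F(b) - F(a)
```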
4,338,190
<p>It's required to prove that <span class="math-container">$|x^{1/n} -1| \lt \epsilon$</span> for <span class="math-container">$\epsilon \gt 0$</span> and <span class="math-container">$n \ge N$</span> where <span class="math-container">$N \in \mathbb N$</span>.<br> Let <span class="math-container">$x^{1/n} -1 = h$</span> for <span class="math-container">$h \gt 0$</span>. So <span class="math-container">$x = (1+h)^{n}$</span>, and we now need to prove that <span class="math-container">$h \lt \epsilon$</span>. The binomial theorem gives <span class="math-container">$x \gt nh $</span> so <span class="math-container">$h \lt \frac{x}{n}$</span>. Now it's sufficient to choose <span class="math-container">$n $</span> such that <span class="math-container">$\frac{x}{n} \lt \epsilon$</span>, i.e. <span class="math-container">$n \gt\frac{x}{\epsilon}$</span>. So the given condition is satisfied for all <span class="math-container">$ N \ge [\frac{x}{\epsilon}] + 1.$</span> (The square brackets stand for the floor function.)<br> Is this in form and content correct?</p>
Jochen
950,888
<p>It's almost correct :) As already mentioned, your proof only works for <span class="math-container">$x\geq 1$</span> because for <span class="math-container">$0\leq x&lt;1$</span> we get some negative terms in the expansion of <span class="math-container">$(1+h)^n$</span> since <span class="math-container">$h&lt;0$</span>.</p> <p>But it's easy to fix it. You can show by induction that <span class="math-container">$(1+h) ^n\geq 1+hn$</span> even for <span class="math-container">$h\geq -1$</span>. See <a href="https://en.m.wikipedia.org/wiki/Bernoulli%27s_inequality" rel="nofollow noreferrer">https://en.m.wikipedia.org/wiki/Bernoulli%27s_inequality</a>. Hence, we get <span class="math-container">$$x-1=(1+h) ^n-1\geq hn. $$</span> After that your argument works.</p>
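Bernoulli's inequality $(1+h)^n\ge 1+nh$ for $h\ge -1$, invoked above, is easy to spot-check numerically (a quick check, not part of the original answer; the grid of test values is arbitrary):

```python
def bernoulli_holds(h, n, tol=1e-12):
    """Check (1+h)^n >= 1 + n*h, with a tiny tolerance for float rounding."""
    return (1 + h) ** n >= 1 + n * h - tol

hs = [-1.0, -0.9, -0.5, -0.1, 0.0, 0.1, 0.5, 1.0, 2.5]
ok = all(bernoulli_holds(h, n) for h in hs for n in range(0, 15))
```

Note the inequality also holds trivially at the boundary $h=-1$, where the left side is $0$ and the right side is $1-n\le 1$.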
333,807
<p><strong>Notations</strong>: For a scalar $a\in\mathbb{R}$, denote $$\mathrm{sgn}(a)=\left\{ \begin{array}{l l} 1 &amp; \mbox{if } a&gt;0\\ 0 &amp; \mbox{if } a=0\\ -1 &amp; \mbox{if } a&lt;0 \end{array}.\right.$$ For a vector $r\in\mathbb{R}^n$, $\mathrm{sgn}(r)$ is element-wise.</p> <p>Now we have a set of vectors $r_1, \dots, r_p\in\mathbb{R}^n$, $p&lt;n$. The corresponding sign vectors are $v_1,\dots,v_p\in\mathbb{R}^n$ satisfying $$\mathrm{sgn}(r_i)=v_i$$ which implies that the elements of $v_i$ can only be $1$, $-1$ or $0$.</p> <p><strong>Question</strong>: Given a nonzero vector $x\in\mathbb{R}^n$ satisfying $x\perp \mathrm{span}\{r_1,\dots,r_p\}$, can we prove $x\notin\mathrm{span}\{v_1,\dots,v_p\}$? If not, can you given any counterexamples?</p>
user1551
1,551
<p>Counterexample: we have $x=v_1+v_2+v_3$ when $$ (r_1,r_2,r_3,x,v_1,v_2,v_3)=\begin{pmatrix} 1&amp;1&amp;-3&amp;1&amp;1&amp;1&amp;-1\\ 1&amp;-2&amp;2&amp;1&amp;1&amp;-1&amp;1\\ -2&amp;1&amp;1&amp;1&amp;-1&amp;1&amp;1\\ 0&amp;0&amp;0&amp;0&amp;0&amp;0&amp;0 \end{pmatrix}. $$ <strong>Edit.</strong> Here is a better example: $$ (r_1,r_2,r_3,x,v_1,v_2,v_3)=\begin{pmatrix} 1&amp;1&amp;-5&amp;1&amp;1&amp;1&amp;-1\\ 1&amp;-5&amp;1&amp;1&amp;1&amp;-1&amp;1\\ -5&amp;1&amp;1&amp;1&amp;-1&amp;1&amp;1\\ 1&amp;1&amp;1&amp;3&amp;1&amp;1&amp;1 \end{pmatrix}. $$ The key is, each $v_i$ contains a negative entry, but the entries of $x=v_1+v_2+v_3$ are all positive. So we have a great liberty to pick some $r_1,r_2,r_3$ that are orthogonal to $x$.</p>
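The second counterexample can be verified mechanically (plain Python, not part of the original answer; the vectors are read off column by column from the displayed matrix):

```python
def sgn(t):
    return (t > 0) - (t < 0)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# the 4-dimensional example from the answer
r = [(1, 1, -5, 1), (1, -5, 1, 1), (-5, 1, 1, 1)]
x = (1, 1, 1, 3)
v = [tuple(sgn(t) for t in ri) for ri in r]

orthogonal = all(dot(x, ri) == 0 for ri in r)   # x is orthogonal to span{r_i}
in_span = tuple(sum(col) for col in zip(*v)) == x   # x = v_1 + v_2 + v_3
```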
513,779
<p>If $a,b\in\mathbb{N}$ are odd,</p> <p>then demonstrate: $$ {\sqrt{a^2 + b^2}} \not\in \mathbb{Q}$$ </p> <p>I assume, for contradiction, that $$ {\sqrt{a^2 + b^2}} \in\mathbb{Q}.$$ Then I write $$ {\sqrt{a^2 + b^2}= m/n},$$ so that $$ {n\sqrt{a^2 + b^2}= m}.$$ I squared both sides and got $$ n^2(a^2+ b^2)=m^2, $$ and since $$ (a^2 +b^2)$$ is even, $m^2$ is even. After this I write $$m^2= 4k^2.$$ In the end I have this equation: $$a^2 + b^2 = 4k^2/ n^2.$$ This fraction is irreducible... I think.</p>
njguliyev
90,209
<p>Hint: $(2m+1)^2+(2n+1)^2=2(2k+1)$. Show that this number cannot be written as $\dfrac{p^2}{q^2}$ with $(p,q)=1$.</p>
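The hint rests on two facts that are easy to spot-check (a quick check, not part of the original hint): the sum of two odd squares is $\equiv 2 \pmod 4$, while no perfect square is $\equiv 2 \pmod 4$:

```python
odd = range(1, 60, 2)

# (2m+1)^2 + (2n+1)^2 = 2(2k+1), i.e. always 2 mod 4
sum_is_2_mod_4 = all((a * a + b * b) % 4 == 2 for a in odd for b in odd)

# squares are 0 or 1 mod 4, never 2
no_square_is_2_mod_4 = all(n * n % 4 != 2 for n in range(200))
```

So $a^2+b^2$ is twice an odd number, which has $2$ to the first power in its factorization and hence cannot be the square $p^2/q^2$ of a reduced fraction.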
4,559,503
<blockquote> <p>Let <span class="math-container">$\displaystyle f(x)=\frac{1+\cos(2\pi x)}2$</span> for <span class="math-container">$x\in\mathbb R$</span>, and <span class="math-container">$f^n=\underbrace{ f \circ \cdots \circ f}_{n}$</span>. Is it true that for Lebesgue almost every <span class="math-container">$x$</span>, <span class="math-container">$\displaystyle\lim_{n \to \infty} f^n(x)=1$</span>?</p> </blockquote> <p>I'm more inclined to believe that the answer is &quot;yes&quot;.</p> <p>This is Problem <span class="math-container">$5$</span> of <a href="https://artofproblemsolving.com/community/c2557361_2021_mikloacutes_schweitzer" rel="noreferrer"><span class="math-container">$2021$</span> Miklós Schweitzer</a>. Recently, a <a href="https://math.stackexchange.com/questions/4558540/spivak-ch-22-problem-20-f-continuous-sequence-x-fx-ffx-fffx?noredirect=1&amp;lq=1">related question</a> reminded me of this problem. After spending some time on it, I found that it is a hard problem, as Miklós Schweitzer problems usually are. Almost every problem from that competition is very difficult for me.</p> <p>First of all, for a fixed <span class="math-container">$x_0\in\mathbb R$</span>, if <span class="math-container">$f^n(x_0)$</span> is convergent, then its limit <span class="math-container">$\ell$</span> must be a fixed point of <span class="math-container">$f$</span>. Since <span class="math-container">$f(x)=\cos^2(\pi x)\in[0,1]$</span>, we must have <span class="math-container">$\ell\in[0,1]$</span>. Let's find the fixed points of <span class="math-container">$f$</span>. Let <span class="math-container">$g(x)=f(x)-x$</span> for <span class="math-container">$x\in[0,1]$</span>, then we need to find the zeroes of <span class="math-container">$g$</span>. 
Since <span class="math-container">$g'(x)=-\pi\sin(2\pi x)-1$</span>, <span class="math-container">$g'$</span> has two zeroes <span class="math-container">$\eta_1,\eta_2\in[0,1]$</span> with <span class="math-container">$1/2&lt;\eta_1&lt;3/4$</span>, <span class="math-container">$3/4&lt;\eta_2&lt;1$</span>, and <span class="math-container">$\sin(2\pi\eta_1)=\sin(2\pi\eta_2)=-1/\pi$</span>. Hence, <span class="math-container">$g$</span> is decreasing in <span class="math-container">$[0, \eta_1)$</span>, then increasing in <span class="math-container">$(\eta_1, \eta_2)$</span>, and then decreasing in <span class="math-container">$(\eta_2,1]$</span>. Note that <span class="math-container">$g(1/2)=-1/2&lt;0, g(1)=0$</span>, we know that <span class="math-container">$g(\eta_1)&lt;0$</span> and <span class="math-container">$g(\eta_2)&gt;0$</span>. Therefore, we can find three zeroes of <span class="math-container">$g$</span>, named by <span class="math-container">$\ell_1$</span>, <span class="math-container">$\ell_2$</span> and <span class="math-container">$\ell$</span> with <span class="math-container">$\ell_1\in(0,1/2)$</span>, <span class="math-container">$\ell_2\in(\eta_1, \eta_2)$</span> and <span class="math-container">$\ell=1$</span>.</p> <p><a href="https://i.stack.imgur.com/Gl1UO.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/Gl1UO.jpg" alt="The graphs" /></a></p> <p>We can find the locations the fixed points <span class="math-container">$\ell_1, \ell_2$</span> more accurately. 
Indeed, since <span class="math-container">$$g\left(\frac14\right)=\cos^2\left(\frac\pi4\right)-\frac14=\frac12-\frac14&gt;0,\qquad g\left(\frac13\right)=\cos^2\left(\frac\pi3\right)-\frac13=\frac14-\frac13&lt;0,$$</span> we have <span class="math-container">$\ell_1\in(1/4,1/3)$</span>, hence <span class="math-container">$$f'(\ell_1)=-\pi\sin(2\pi\ell_1)&lt;-\pi\sin\left(\frac{2\pi}3\right)=-\frac{\sqrt 3}2\pi&lt;-1.$$</span> Also, since <span class="math-container">$$g\left(\frac56\right)=\cos^2\left(\frac56\pi\right)-\frac56=\frac34-\frac56&lt;0,\qquad g\left(\frac{11}{12}\right)=\frac{1+\cos\left(\frac{11}6\pi\right)}2-\frac{11}{12}=\frac{3\sqrt3-5}{12}&gt;0,$$</span> we have <span class="math-container">$\ell_2\in(5/6,11/12)$</span>, hence <span class="math-container">$$f'(\ell_2)=-\pi\sin(2\pi\ell_2)&gt;-\pi\sin\left(\frac{11\pi}6\right)=\frac{1}2\pi&gt;1.$$</span></p> <p><strong>The following are not rigorous.</strong></p> <p>Therefore, locally, near <span class="math-container">$\ell_1$</span>, <span class="math-container">$f(x)-\ell_1$</span> behaves like <span class="math-container">$-A(x-\ell_1)$</span> with <span class="math-container">$A&gt;1$</span>. Consider the map <span class="math-container">$f_1: x\mapsto -Ax$</span>, then <span class="math-container">$f_1^n(x)$</span> converges if and only if <span class="math-container">$x=0$</span>. 
This indicates that, for fixed <span class="math-container">$x_0$</span>, if the sequence <span class="math-container">$\{f^n(x_0)\}$</span> doesn't reach <span class="math-container">$\ell_1$</span>, it will not converge to <span class="math-container">$\ell_1$</span>; a similar analysis on <span class="math-container">$\ell_2$</span> indicates that <span class="math-container">$\{f^n(x_0)\}$</span> will not converge to <span class="math-container">$\ell_2$</span> if it doesn't touch <span class="math-container">$\ell_2$</span>; hence, if <span class="math-container">$\{f^n(x_0)\}$</span> converges without touching <span class="math-container">$\ell_1, \ell_2$</span>, then the limit must be <span class="math-container">$\ell=1$</span>. I think the ideas in this paragraph can be written down rigorously, although I don't know how to write a clean proof.</p> <p>Another question I've had no ideas about: What if <span class="math-container">$\{f^n(x_0)\}$</span> diverges? To finish the problem, even if we write down a proof of the above paragraph, we also need to prove that for a.e. <span class="math-container">$x$</span>, the sequence <span class="math-container">$\{f^n(x)\}$</span> is convergent.</p> <p>Any help would be appreciated!</p>
Oliver Díaz
121,671
<p>Let <span class="math-container">$\ell_1&lt;\ell_2&lt;1$</span> be the three fixed points of <span class="math-container">$f$</span>. The idea is to follow where certain subintervals of <span class="math-container">$[0,1]$</span> get mapped and to show that in a finite number of iterations, such subintervals fall into <span class="math-container">$[\ell_2,1]$</span> where the dynamics of the system is obvious.</p> <ol> <li>Notice that <span class="math-container">$[\ell_2,1]\mapsto [\ell_2,1]$</span> and since <span class="math-container">$f(x)&gt;x$</span> and <span class="math-container">$f$</span> increases in <span class="math-container">$[\ell_2,1]$</span>, for <span class="math-container">$x\in (\ell_2,1]$</span>, <span class="math-container">$f^n(x)\xrightarrow{n\rightarrow\infty}1$</span>.</li> </ol> <p>The key now is to show that with the exception of preimages (of preimages) of the points <span class="math-container">$\ell_1$</span> and <span class="math-container">$\ell_2$</span> and of the root <span class="math-container">$z$</span>, all other points get mapped into <span class="math-container">$(\ell_2,1]$</span>.</p> <ol start="2"> <li><p>Since <span class="math-container">$f$</span> decreases in <span class="math-container">$[0,\ell_1]$</span> and <span class="math-container">$f(x)&gt;x$</span> on <span class="math-container">$[0,\ell_1)$</span>, there is a point <span class="math-container">$0&lt;\ell^{(1)}_2&lt;\ell_1$</span> such that <span class="math-container">$f(\ell^{(1)}_2)=\ell_2$</span>. Notice that <span class="math-container">$[0,\ell^{(1)}_2]\mapsto[\ell_2,1]$</span>. Then by (1), iterations of <span class="math-container">$f$</span> at points in <span class="math-container">$[0,\ell^{(1)}_2)$</span> converge to <span class="math-container">$1$</span>.</p> </li> <li><p><span class="math-container">$f$</span> has a unique zero <span class="math-container">$\ell_1&lt;z&lt;\ell_2$</span>. 
Notice that <span class="math-container">$[\ell_1,z]\mapsto[0,\ell_1]$</span> and <span class="math-container">$[z,\ell_2]\mapsto[0,\ell_2]$</span>. Thus, the interesting dynamics happens in <span class="math-container">$[\ell^{(1)}_2,\ell_2]$</span>.</p> </li> <li><p>There is <span class="math-container">$\ell_1&lt;\ell^{(2)}_2&lt;z$</span> such that <span class="math-container">$f(\ell^{(2)}_2)=\ell^{(1)}_2$</span>. Notice that <span class="math-container">$[\ell^{(2)}_2,z]\mapsto[0,\ell^{(1)}_2]$</span> and from there we get to <span class="math-container">$[\ell_2,1]$</span>.</p> </li> <li><p>There are points <span class="math-container">$z&lt;\ell^{(3)}_2&lt;\ell^{(1)}_1&lt;\ell_2$</span> such that <span class="math-container">$f(\ell^{(3)}_2)=\ell^{(1)}_2$</span> and <span class="math-container">$f(\ell^{(1)}_1)=\ell_1$</span>. Notice that <span class="math-container">$[z,\ell^{(3)}_2]\mapsto[0,\ell^{(1)}_2]$</span> and from there we get to <span class="math-container">$[\ell_2,1]$</span>.</p> </li> </ol> <p>We may proceed this way by partitioning the remaining subintervals in <span class="math-container">$[\ell^{(1)}_2,\ell^{(2)}_2]$</span> and <span class="math-container">$[\ell^{(3)}_2,\ell_2]$</span> using higher order preimages of <span class="math-container">$z$</span> and <span class="math-container">$\ell_1$</span> and <span class="math-container">$\ell_2$</span>. To make this much clearer, we may have to use a much better labeling of preimages to denote the order of a preimage, and possibly use symbolic dynamics.</p>
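A small numerical experiment (not part of the original answer) confirms the fixed-point bracketing and the monotone convergence on $(\ell_2,1]$; in particular, a direct evaluation gives $g(11/12)=\frac{3\sqrt3-5}{12}\approx 0.0164>0$:

```python
import math

def f(x):
    return (1 + math.cos(2 * math.pi * x)) / 2

def g(x):
    return f(x) - x

# sign pattern bracketing l1 in (1/4, 1/3) and l2 in (5/6, 11/12)
signs_ok = g(0.25) > 0 and g(1 / 3) < 0 and g(5 / 6) < 0 and g(11 / 12) > 0

# exact value of g(11/12): (3*sqrt(3) - 5)/12
g_err = abs(g(11 / 12) - (3 * math.sqrt(3) - 5) / 12)

# on (l2, 1] the orbit increases monotonically to 1; writing y = 1 - eps,
# one step of f sends eps to sin(pi*eps)^2 <= eps, so no overshoot occurs.
y = 0.9   # g(0.9) > 0, so 0.9 already lies in (l2, 1)
for _ in range(300):
    y = f(y)
```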
69,272
<p>By the way, does anyone know how to prove in an elementary way (i.e. expanding) that $\prod_1^n (1+a_i r)$ tends to $e^r=\sum \frac{r^k}{k!}$ as you let $\max|a_i|\to 0$ with $0\leq a_i \leq 1$ and $\sum a_i = 1$? An easy solution goes by writing the product with the exponential function so that you get the exponential of $\sum \log(1+a_i r) = \sum \int_0^1 \frac{a_i r}{(1+s a_i r)} ds$.</p> <p>You can then integrate by parts (i.e. Taylor expand) to obtain $\sum a_i r - \sum \int_0^1 (1-s)\frac{(a_i r)^2}{(1+s a_i r)^2}ds$. Now, $\sum a_i r = r$ is the main term. After you take $\max|a_i|$ to be less than $.5/|r|$, the error term is bounded in absolute value by $C \sum |a_i r|^2 \leq C\max|a_i|\cdot \sum |a_i| |r|^2 \leq C |r|^2 \max |a_i|$.</p> <p>I was hoping to find an elementary proof of this convergence by expanding the product $\prod_1^n (1+a_i r)$ and gathering terms with a common power of $r$. In particular, it would be nice to prove the convergence of this limit without the exponential function, since then the limit could be considered a definition of $e^r$. The case when all of the $a_i$ are equal is done in Rudin's "Principles of Mathematical Analysis".</p> <p>The motivation for this problem comes from compound interest, which I described in a different thread here: <a href="https://mathoverflow.net/questions/40005/generalizing-a-problem-to-make-it-easier/69224#69224">Generalizing a problem to make it easier</a>.</p>
Roland Bacher
4,556
<p>Knowing $\lim_{N\rightarrow\infty} \left(1+\frac{r}{N}\right)^N=e^r$ (the case where all $a_i$ are equal) we can use continuity and $\left(1+\frac{r}{N}\right)^{Na_i}\sim 1+a_ir$ if $N$ is very huge and $a_i\rightarrow 0$.</p>
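The convergence can be observed numerically (a sketch, not part of the original answer; the weights $a_i=2i/(N(N+1))$ are one arbitrary choice with $\sum a_i=1$, $0\le a_i\le 1$, and $\max a_i=2/(N+1)\to 0$):

```python
import math

def product_limit(r, N):
    """prod (1 + a_i r) with a_i = 2i/(N(N+1)), which sum to 1."""
    p = 1.0
    for i in range(1, N + 1):
        p *= 1 + (2 * i / (N * (N + 1))) * r
    return p

r = 0.7
# all a_i equal (the case done in Rudin): (1 + r/N)^N
err_equal = abs((1 + r / 10**6) ** (10**6) - math.exp(r))
# unequal weights
err_varied = abs(product_limit(r, 4000) - math.exp(r))
```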
4,083,410
<p>I have to check convergence of <span class="math-container">$\int_0^\infty \sin{(e^{ax})}dx$</span>?</p> <p>I tried with substitution <span class="math-container">$e^{ax}=t$</span> for <span class="math-container">$a&gt;0$</span> and got <span class="math-container">$\int_1^\infty \frac{\sin{t}}{at} dt$</span>. This integral converges.</p> <p>For <span class="math-container">$a=0$</span> I got divergent integral. I'm having doubts about convergence for <span class="math-container">$a&lt;0$</span>, is integral also convergent in this case?</p> <p>Am I right so far?</p> <p>Any help is welcome. Thanks in advance.</p>
hamam_Abdallah
369,188
<p><span class="math-container">$$(\forall n\in \Bbb N)\;\;\big|I_n\big|=\left|\int_0^1x^n\sin(\pi x)dx\right|$$</span> <span class="math-container">$$\le \int_0^1\big|x^n\sin(\pi x)\big|dx$$</span> <span class="math-container">$$\le \int_0^1x^ndx=\frac{1}{n+1}$$</span></p> <p><span class="math-container">$$\implies \lim_{n\to+\infty}I_n=0$$</span></p> <p>The recursive relation you got by double partial integration, can also be written as</p> <p><span class="math-container">$$\boxed{I_{n+2}\pi^2=\color{red}{\pi}-(n+2)(n+1)I_n.}$$</span> <span class="math-container">$$\lim_{n\to+\infty}I_{n+2}=0\implies$$</span> <span class="math-container">$$\implies \lim_{n\to+\infty}(n+2)(n+1)I_n=\color{red}{\pi}$$</span></p> <p><span class="math-container">$$\implies I_n\sim \frac{\pi}{(n+2)(n+1)}\sim \frac{\pi}{n^2}$$</span> <span class="math-container">$$\implies \sum I_n \text{ converges}$$</span></p>
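The recursion and the asymptotics for $I_n=\int_0^1 x^n\sin(\pi x)\,dx$ can be checked numerically with Simpson's rule (a quick check, not part of the original answer; note also that $I_1=1/\pi$ by a direct integration by parts):

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

def I(n):
    return simpson(lambda x: x ** n * math.sin(math.pi * x), 0.0, 1.0)

pi = math.pi
# the boxed recursion with n = 1: pi^2 I_3 = pi - 3*2*I_1
recursion_err = abs(pi * pi * I(3) - (pi - 3 * 2 * I(1)))
i1_err = abs(I(1) - 1 / pi)
# (n+2)(n+1) I_n -> pi; at n = 21 it is already close
tail = (21 + 2) * (21 + 1) * I(21)
```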
4,083,410
<p>I have to check convergence of <span class="math-container">$\int_0^\infty \sin{(e^{ax})}dx$</span>?</p> <p>I tried with substitution <span class="math-container">$e^{ax}=t$</span> for <span class="math-container">$a&gt;0$</span> and got <span class="math-container">$\int_1^\infty \frac{\sin{t}}{at} dt$</span>. This integral converges.</p> <p>For <span class="math-container">$a=0$</span> I got divergent integral. I'm having doubts about convergence for <span class="math-container">$a&lt;0$</span>, is integral also convergent in this case?</p> <p>Am I right so far?</p> <p>Any help is welcome. Thanks in advance.</p>
Angelo
771,461
<p><span class="math-container">$\;\sum\limits_{n=1}^\infty I_n=\text{Si}(\pi)-\dfrac2\pi\;$</span></p> <p>and as far as I know there is no simpler way to write the sum of your series.</p>
2,018,239
<p>I have to show, using induction, that $2^{4^n}+5$ is divisible by $21$. It is supposed to be a standard exercise, but no matter what I try, I get to a point where I have to use two more inductions.</p> <p>For example, here is one of the things I tried:</p> <p>Assuming that $21 |2^{4^k}+5$, we have to show that $21 |2^{4^{k+1}}+5$.</p> <p>Now, $2^{4^{k+1}}+5=2^{4\cdot 4^k}+5=2^{4^k+3\cdot 4^k}+5=2^{4^k}2^{3\cdot 4^k}+5=2^{4^k}2^{3\cdot 4^k}+5+5\cdot 2^{3\cdot 4^k}-5\cdot 2^{3\cdot 4^k}=2^{3\cdot 4^k}(2^{4^k}+5)+5(1-2^{3\cdot 4^k})$.</p> <p>At this point, the only way out (as I see it) is to prove (using another induction) that $21|5(1-2^{3\cdot 4^k})$. But when I do that, I get another term of this sort, and another induction.</p> <p>I also tried proving separately that $3 |2^{4^k}+5$ and $7 |2^{4^k}+5$. The former is OK, but the latter is again a double induction.</p> <p>Is there an easier way of doing this?</p> <p>Thank you!</p> <p><strong>EDIT</strong></p> <p>By an "easier way" I still mean a way using induction, but only once (or at most twice). Maybe add and subtract something different than what I did?...</p> <p>Just to put it all in a context: a daughter of a friend got this exercise in her very first HW assignment, after a lecture about induction which included only the most basic examples. I tried helping her, but I can't think of a solution suitable for this stage of the course. That's why I thought that there should be a trick I am missing... </p>
mathlove
78,967
<blockquote> <p>Is there an easier way of doing this?</p> </blockquote> <p>Note that $2^{4^{k+1}}=(2^{4^k})^4$.</p> <p>Inductive step : </p> <p>Supposing that $2^{4^k}+5=21m$ gives $$\begin{align}2^{4^{k+1}}+5&amp;=(2^{4^k})^4+5\\&amp;=(21m-5)^4+5\\&amp;=\sum_{i=0}^{4}\binom{4}{i}(21m)^i(-5)^{4-i}+5\\&amp;\equiv (-5)^{4}+5\quad\pmod{21}\\&amp;\equiv 0\pmod{21}\end{align}$$</p> <p>Added :</p> <p>If you want to use neither the binomial theorem nor mod, then supposing that $2^{4^k}+5=21m$ gives $$\begin{align}2^{4^{k+1}}+5&amp;=(2^{4^k})^4+5\\&amp;=(21m-5)^4+5\\&amp;=21(9261m^4-8820m^3+3150m^2-500m+30)\end{align}$$ So, $2^{4^{k+1}}+5$ is divisible by $21$.</p>
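A quick computational check (not part of the original answer) confirms both the statement and the modular mechanism behind it: $2^6\equiv 1\pmod{21}$ and $4^n\equiv 4\pmod 6$ for $n\ge 1$, so $2^{4^n}\equiv 2^4=16\pmod{21}$ and $16+5=21\equiv 0$:

```python
# divisibility of 2^(4^n) + 5 by 21 for the first several n, using
# three-argument pow for fast modular exponentiation
divisible = all((pow(2, 4 ** n, 21) + 5) % 21 == 0 for n in range(1, 8))

# the underlying structure: 2 has order dividing 6 mod 21,
# and the exponent 4^n is always 4 mod 6 for n >= 1
period = pow(2, 6, 21)
residues = {4 ** n % 6 for n in range(1, 8)}
```

This is a check, not a proof; the induction (or the modular argument just sketched) is still needed for all $n$.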
3,063,577
<p>Please suggest a book on applications of Diophantine equations in physics, chemistry, and biology. This book should be suitable to introduce this subject to students who are not mathematics specialists. </p>
Sam
632,330
<p>The OP can look up the book by Stephen Wolfram called <em>A New Kind of Science</em>. It has material on Diophantine equations related to different branches of science in the notes section. The book can be borrowed from a local library or purchased as a hardcover or e-book. A summary is also available online. The links are given below:</p> <p><a href="https://www.wolframscience.com" rel="nofollow noreferrer">https://www.wolframscience.com</a></p> <p><a href="http://store.wolfram.com/view/book/ISBN1579550088.str?Qualifier=COMM" rel="nofollow noreferrer">http://store.wolfram.com/view/book/ISBN1579550088.str?Qualifier=COMM</a></p> <p><a href="https://itunes.apple.com/us/app/stephen-wolfram-a-new-kind-of-science/id390711826" rel="nofollow noreferrer">https://itunes.apple.com/us/app/stephen-wolfram-a-new-kind-of-science/id390711826</a></p>
601,296
<p>Let's say I'm a guy from ancient Greece and I only have a string and a pencil, and I want to draw a line whose length is the square root of 6. I only know how to draw lines whose lengths are whole numbers. I've checked out <a href="https://en.wikipedia.org/wiki/Spiral_of_Theodorus" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Spiral_of_Theodorus</a>, but I can't use the Pythagorean theorem.</p> <p>Sorry I couldn't be more specific; if you still don't understand, I'll do my best to explain it again.</p>
abiessu
86,846
<p>With a string and a pencil, you can measure &quot;one unit&quot;, that being the length of the string, or a portion of it. Then you can make a triangle &quot;one unit by one unit.&quot; Further, this triangle can be a right triangle since you can perpendicularly bisect a line segment to make the second side. Then you have a hypotenuse side that is <span class="math-container">$\sqrt 2$</span> units in length. Because it is isosceles, you can cut it in half along the right angle and get two pieces that have leg lengths of <span class="math-container">$\frac 1{\sqrt 2}=\frac {\sqrt 2}2$</span>, and so you can get legs as small as desired that are a multiple of <span class="math-container">$\sqrt 2$</span>.</p> <p>Next, you can make a second right triangle whose short leg is <span class="math-container">$\sqrt 2$</span>, and whose hypotenuse is <span class="math-container">$2\sqrt 2$</span>. Then the long leg has length <span class="math-container">$\sqrt 6$</span>. Now you have your line of the given length.</p> <p>The construction below attempts to demonstrate this process. Begin with one line (black), draw a line perpendicular to it (anywhere), draw a &quot;unit&quot; circle centered at the point of intersection, then the chord (green) between two consecutive intersections of line and circle has length <span class="math-container">$\sqrt 2$</span>. Draw a circle having this radius (purple), then the diameter will have length <span class="math-container">$2\sqrt 2$</span>. Draw a perpendicular to the chord (blue) that intersects the (purple) circle's center. Draw a circle (light blue) having radius equal to the diameter of the previous circle (purple), or <span class="math-container">$2\sqrt 2$</span>. 
The chord where the (blue) line intersects the (light blue) circle has length <span class="math-container">$2\sqrt 6$</span>, since the one side associated with the (green, light green, blue) triangle has length <span class="math-container">$\sqrt 6$</span>.</p> <p><img src="https://i.stack.imgur.com/YgZ4m.png" alt="Construction of sqrt 6" /></p>
601,296
<p>Let's say I'm a guy from ancient Greece and I only have a string and a pencil, and I want to draw a line segment whose length is the square root of 6. I only know how to draw segments of rational length. I've checked out the <a href="https://en.wikipedia.org/wiki/Spiral_of_Theodorus" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Spiral_of_Theodorus</a> but I can't use the Pythagorean theorem.</p> <p>Sorry I couldn't be more specific; if you still don't understand, I'll do my best to explain it again.</p>
lhf
589
<p>Draw two adjacent segments of size <span class="math-container">$2$</span> and <span class="math-container">$3$</span>. Using the combined segment as diameter, draw a semicircle. Now draw a perpendicular at the point where the two segments meet. That perpendicular defines a segment of length <span class="math-container">$\sqrt 6$</span> where it meets the semicircle.</p> <p>This is the geometric equivalent of the <a href="https://en.wikipedia.org/wiki/Right_triangle_altitude_theorem" rel="nofollow noreferrer">right triangle altitude theorem</a> that says that <span class="math-container">$h^2=mn$</span>, where <span class="math-container">$h$</span> is the altitude of a right triangle with respect to the hypothenuse and <span class="math-container">$m$</span> and <span class="math-container">$n$</span> are the projections of the sides onto the hypothenuse.</p>
385,537
<p>How would you go about proving the following?</p> <p>$${1- \cos A \over \sin A } + { \sin A \over 1- \cos A} = 2 \operatorname{cosec} A $$</p> <p>This is what I've done so far:</p> <p>$$LHS = {1+\cos^2 A -2\cos A + 1 - \cos^2A \over \sin A(1-\cos A)}$$</p> <p>....no idea how to proceed .... X_X</p>
Parth Thakkar
70,311
<p>$$ LHS =\frac {1 - \cos A} {\sin A} + \frac {\sin A} {1 - \cos A} $$ $$ = \frac {2 \sin^2 \frac A2} {2\sin \frac A2 \cos \frac A2} + \frac {2\sin \frac A2 \cos \frac A2}{2 \sin^2 \frac A2}$$</p> <p>$$ = \frac {\sin \frac A2} {\cos \frac A2} + \frac {\cos \frac A2} {\sin \frac A2} $$</p> <p>Now just cross multiply and you get the answer.</p>
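<p>For completeness, the final cross multiplication (my addition, just spelling out the last step):</p> <p>$$ \frac {\sin \frac A2} {\cos \frac A2} + \frac {\cos \frac A2} {\sin \frac A2} = \frac {\sin^2 \frac A2 + \cos^2 \frac A2} {\sin \frac A2 \cos \frac A2} = \frac {1} {\frac 12 \sin A} = 2\operatorname{cosec} A = RHS. $$</p>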
2,276,907
<p>If $\cos{x}=\frac{3}{5}$ and angle $x$ terminates in the fourth quadrant, find the exact value of each of the following:</p> <p>A. $\sin{2x}$ B. $\cos{2x}$ C. $\tan {\frac{x}{2}}$</p> <p>Okay, so I am going through my old exam reviews for the final exam I have this evening, and choosing problems I have trouble with. Problems like these are a struggle. Could someone give me some sort of step by step? I don't need to know all of A, B, and C, but maybe one of them would help. Also, if $\cos{x}=\frac{3}{5}$ and angle $x$ terminates in the fourth quadrant, wouldn't that fraction be negative?</p> <p>EDIT: Thank you for all the feedback! I understand now and finally realized what I have been messing up on was so small! Will make a mental note so I don't mess up on tonight's final. :)</p>
Siong Thye Goh
306,553
<p>Guide:</p> <p>Find out what is $\sin x$ and what is $\cos x$.</p> <p>The following formula might be helpful.</p> <p>$$\sin 2x = 2 \sin x \cos x$$</p> <p>$$\cos 2x = 2 \cos^2 x -1 $$</p> <p>$$ \tan 2x = \frac{2 \tan x}{1- \tan^2 x}$$</p> <p>For the tangent problem, note that </p> <p>$$\tan x = \frac{2 \tan \frac{x}2}{1- \tan^2 \frac{x}2}$$</p>
2,276,907
<p>If $\cos{x}=\frac{3}{5}$ and angle $x$ terminates in the fourth quadrant, find the exact value of each of the following:</p> <p>A. $\sin{2x}$ B. $\cos{2x}$ C. $\tan {\frac{x}{2}}$</p> <p>Okay, so I am going through my old exam reviews for the final exam I have this evening, and choosing problems I have trouble with. Problems like these are a struggle. Could someone give me some sort of step by step? I don't need to know all of A, B, and C, but maybe one of them would help. Also, if $\cos{x}=\frac{3}{5}$ and angle $x$ terminates in the fourth quadrant, wouldn't that fraction be negative?</p> <p>EDIT: Thank you for all the feedback! I understand now and finally realized what I have been messing up on was so small! Will make a mental note so I don't mess up on tonight's final. :)</p>
helloworld112358
300,021
<p>Notice that $\sin^2(x)=1-\cos^2(x)=\frac{16}{25}$, so $\sin(x)=\pm\frac{4}{5}$. Since we are in the fourth quadrant, we must have $\sin(x)=-\frac 45$. Thus, $\sin(2x)=2\sin(x)\cos(x)=2\left(-\frac{4}{5}\right)\left(\frac{3}{5}\right)=-\frac{24}{25}$. </p> <p>From this, we can calculate $\cos^2(2x)=1-\sin^2(2x)=\frac{7^2}{25^2}$. Since $\sin(x)&lt;-\cos(x)$, we know $x&lt;-\frac{\pi}{4}$, so $2x$ is in the third quadrant. Thus, $\cos(2x)=-\frac{7}{25}$. </p> <p>Finally, we can use $\tan\left(\frac{x}{2}\right)=\frac{\sin(x)}{1+\cos(x)}=-\frac{4/5}{8/5}=-\frac{1}{2}$</p>
2,276,907
<p>If $\cos{x}=\frac{3}{5}$ and angle $x$ terminates in the fourth quadrant, find the exact value of each of the following:</p> <p>A. $\sin{2x}$ B. $\cos{2x}$ C. $\tan {\frac{x}{2}}$</p> <p>Okay, so I am going through my old exam reviews for the final exam I have this evening, and choosing problems I have trouble with. Problems like these are a struggle. Could someone give me some sort of step by step? I don't need to know all of A, B, and C, but maybe one of them would help. Also, if $\cos{x}=\frac{3}{5}$ and angle $x$ terminates in the fourth quadrant, wouldn't that fraction be negative?</p> <p>EDIT: Thank you for all the feedback! I understand now and finally realized what I have been messing up on was so small! Will make a mental note so I don't mess up on tonight's final. :)</p>
Community
-1
<p>Cosine is positive in quadrants 1 and 4. Think of $\cos \theta$ as the $x$-coordinate of the point where the terminal side of $\theta$ intersects the unit circle, because that's one way of defining $\cos\theta$ for any angle $\theta$. (Similarly, $\sin\theta$ is the $y$-coordinate of that point.)</p> <p>Step 1: Draw yourself a reference triangle in the correct quadrant. In this case it would be this triangle:</p> <p><a href="https://i.stack.imgur.com/ut5V4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ut5V4.png" alt="enter image description here"></a></p> <p>Note that I labeled the angle $\overline x$ and not just $x$. This is because the angle I drew is actually the reference angle for $x$, and not the angle $x$ itself. This is just a subtle detail worth mentioning.</p> <p>Step 2: Find the length of the missing side so that you can get the values of the other trig functions you need. In this case we can use the Pythagorean theorem to see that the missing side has length $4$. <strong>But, let's call it $-4$ because we need to move down (in the negative direction) off of the $x$-axis to "walk" across that side.</strong> Also, calling it $-4$ will help us keep the signs of our other trig functions straight.</p> <p>Step 3: Now that we know all three sides of this triangle, we can immediately gather the following info: $$ \sin x = -\frac45 \qquad\qquad \tan x = -\frac43 \qquad\qquad \csc x = -\frac54 \qquad\qquad \cot x = -\frac34$$ Also, we can see that $\sec x = \dfrac53$, but we can determine that just from the given fact that $\cos x = \dfrac35$.</p> <p>Step 4: Use trig identities to express $\sin 2x$, $\cos 2x$, and $\tan \dfrac x2$ in terms of some of the trig functions you found in step 3.</p>
128,221
<p>Let $v_1=[-3;-1]$ and $v_2= [-2;-1]$</p> <p>Let $T:\mathbb{R}^2 \rightarrow \mathbb{R}^2$ be the linear transformation satisfying:</p> <p>$T(v_1)=[15;-6]$ and $T(v_2)=[11;-3]$</p> <p>Find the image of an arbitrary vector $[x;y]$</p>
David Mitra
18,986
<p>One method would be to find the image of the standard unit vectors first. Then using linearity, you can find the image of an arbitrary vector.</p> <p>In a bit more detail:</p> <p>To find $T(1,0)$, first write $(1,0)$ as a linear combination of $v_1$ and $v_2$. Here you have to solve the equation $$ (1,0)=\alpha v_1+\beta v_2. $$ The solution is $$ (1,0) =v_2-v_1. $$ Now using the fact that $T$ is linear $$ T(1,0)=T( v_2-v_1 )=T(v_2)-T(v_1)=(11,-3)-(15,-6)= (-4,3). $$</p> <p>Now, do the same procedure to figure out what $T(0,1)$ is.</p> <p>Then you can say $$ T(x,y)=T\bigl( (x,0)+(0,y)\bigr) =x\,T(1,0)+y\,T(0,1). $$</p>
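<p>As a sanity check of this procedure (my addition, not part of the original answer; exact arithmetic via Python's <code>fractions</code>):</p>

```python
from fractions import Fraction as F

# Data from the problem: v1, v2 and their images under T.
v1, v2 = (F(-3), F(-1)), (F(-2), F(-1))
Tv1, Tv2 = (F(15), F(-6)), (F(11), F(-3))

def solve2x2(a, b, c, d, p, q):
    """Solve [[a, b], [c, d]] (alpha, beta)^T = (p, q)^T by Cramer's rule."""
    det = a * d - b * c
    return (p * d - b * q) / det, (a * q - p * c) / det

def image_of(e):
    # Write e = alpha*v1 + beta*v2, then use linearity of T.
    alpha, beta = solve2x2(v1[0], v2[0], v1[1], v2[1], e[0], e[1])
    return tuple(alpha * Tv1[i] + beta * Tv2[i] for i in range(2))

Te1, Te2 = image_of((1, 0)), image_of((0, 1))
# Te1 and Te2 are the columns of the matrix of T in the standard basis,
# so T(x, y) = x*Te1 + y*Te2.
```

<p>This reproduces $T(1,0)=(-4,3)$ and, for the remaining exercise, gives $T(0,1)=(-3,-3)$, hence $T(x,y)=(-4x-3y,\;3x-3y)$.</p>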
128,221
<p>Let $v_1=[-3;-1]$ and $v_2= [-2;-1]$</p> <p>Let $T:\mathbb{R}^2 \rightarrow \mathbb{R}^2$ be the linear transformation satisfying:</p> <p>$T(v_1)=[15;-6]$ and $T(v_2)=[11;-3]$</p> <p>Find the image of an arbitrary vector $[x;y]$</p>
alpha.Debi
20,863
<p>The matrix whose columns are $T(v_1)$ and $T(v_2)$ is the representation matrix of $T$ with respect to the basis $B=(v_1, v_2)$ of $\mathbb{R}^2$ (as domain) and $E=((1,0), (0,1))$ of $\mathbb{R}^2$ (as codomain). If you multiply this matrix by the coordinate vector of $(x,y)$ in the basis $B$, you get $T(x,y)$ (since $E$ is the canonical basis).</p>
2,870,729
<blockquote> <p>Why does $|e^{ix}|^2 = 1$?</p> </blockquote> <p>The book said $e^{ix} = \cos x + i\sin x$, and square it, then $|e^{ix}|^2 = \cos^2x + \sin^2x = 1$.</p> <p>But, when I calculated it, $ |e^{ix}|^2 = \left|\cos x + i\sin x\right|^2 = \cos^2x - \sin^2x + 2i\sin x\cos x$.</p> <p>I can't make it to be equal $1.$ How can I do it?</p>
Michael Hardy
11,667
<p>If $a,b$ are <b>real</b> then $\displaystyle \left| a+bi \right| = \sqrt{a^2+b^2\,\,} = \sqrt{(a+bi)(a-bi)\,}.$</p>
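<p>A quick numerical illustration (my addition), using the fact that $|z|^2=z\bar z$:</p>

```python
import cmath

# For several sample angles, |e^{ix}|^2 equals 1 up to floating point error.
for x in (0.0, 1.0, 2.5, -3.7):
    z = cmath.exp(1j * x)
    assert abs((z * z.conjugate()).real - 1.0) < 1e-12
    assert abs(abs(z) ** 2 - 1.0) < 1e-12
```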
3,794,158
<p>I am trying to prove the following: Let <span class="math-container">$(M,d)$</span> be a metric space and <span class="math-container">$(x_n)$</span>, <span class="math-container">$(y_n)$</span> sequences in <span class="math-container">$M$</span> such that <span class="math-container">$d(x_n,y_n) \leq \frac{1}{n}$</span> <span class="math-container">$\forall n \in \mathbb{N}$</span>. If <span class="math-container">$(x_n)$</span> converges to <span class="math-container">$L$</span> then <span class="math-container">$(y_n)$</span> converges and <span class="math-container">$\lim{y_n} = L$</span>.</p> <p>My attempt: If <span class="math-container">$\lim{y_n} = L_y$</span> different from <span class="math-container">$L$</span> then <span class="math-container">$\forall \epsilon &gt; 0, y_n \in B_{\epsilon}(L_y)$</span>. But <span class="math-container">$d(x_n,y_n) \leq \frac{1}{n}$</span> says that <span class="math-container">$y_n \in B_{\epsilon}(x_n)$</span> <span class="math-container">$\forall n \in \mathbb{N}$</span>. If we take <span class="math-container">$\epsilon = \frac{1}{n}$</span> at the beginning then <span class="math-container">$y_n \in B_{\frac{1}{n}}(L_y)$</span> [...]</p> <p>I know the statement is very intuitive, but from here I don't know how to conclude <span class="math-container">$L_y = L$</span>. Could someone help me?</p>
hJulian
647,814
<p>Since <span class="math-container">$x_{n}\to L$</span>: let <span class="math-container">$\varepsilon &gt;0$</span>; there exists <span class="math-container">$N_{1}\in \mathbb{Z}_{\geq0}$</span> such that for any <span class="math-container">$n&gt;N_{1}$</span>, <span class="math-container">$d(x_{n},L)&lt;\varepsilon$</span>. Also pick <span class="math-container">$N_{2}$</span> with <span class="math-container">$\frac{1}{N_{2}}&lt;\varepsilon$</span>. Then for any <span class="math-container">$n&gt;N_{3}=\max\lbrace N_{1},N_{2}\rbrace$</span> we get <span class="math-container">$d(y_{n},L)\leq d(y_{n},x_{n})+d(x_{n},L)&lt;\frac{1}{n}+\varepsilon&lt;2\varepsilon$</span>, so <span class="math-container">$y_{n}\to L$</span>.</p>
933,604
<p>Hi, can anyone solve these two equations using logs and indices?</p> <p>a. $$4^{2x}-2^{x+1}=48$$</p> <p>b. $$6^{2x+1}-17\cdot 6^x+12=0$$</p> <p>Thanks.</p>
Claude Leibovici
82,404
<p>I am afraid that there is no explicit solution for $$4^{2x}-2^{x+1}=48$$ To visualize it better, since the terms grow very fast, it is better to look at the function $$f(x)=\log(4^{2x})-\log(2^{x+1}+48)=2x\log(4)-\log(2^{x+1}+48)$$ which is basically a straight line.</p> <p>To solve this equation, let us use Newton's method which, starting from a reasonable guess $x_0$, will update it according to $$x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)}$$ Here, we shall have $$f'(x)=2 \log (4)-\frac{2^{x+1} \log (2)}{2^{x+1}+48}$$ Since the function seems to be very close to linearity, let me be lazy and start iterations at $x_0=0$. Then the successive Newton iterates are $1.42522$, $1.43474$, which is the solution to six significant figures.</p> <p>What is amazing is that, if you start the calculations at $x_0=1$, the result of the first iteration is $$x_1=\frac{1}{51} \left(25+\frac{13 \log (13)}{\log (2)}\right)\approx 1.43345$$</p> <p>I also suspect that there are some typos in the equation, but I give you this just to illustrate a simple numerical solution of a quite difficult problem.</p>
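<p>A short Python sketch of the Newton iteration described above (my addition; the function names are mine):</p>

```python
import math

def f(x):
    # f(x) = log(4^(2x)) - log(2^(x+1) + 48)
    return 2 * x * math.log(4) - math.log(2 ** (x + 1) + 48)

def fprime(x):
    return 2 * math.log(4) - 2 ** (x + 1) * math.log(2) / (2 ** (x + 1) + 48)

x = 0.0  # the lazy starting guess used in the answer
for _ in range(50):
    step = f(x) / fprime(x)
    x -= step
    if abs(step) < 1e-12:
        break
# x is now approximately 1.43474, matching the answer
```

<p>The converged value satisfies $4^{2x}=2^{x+1}+48$ to machine precision.</p>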
181,702
<p>I am working on getting the hang of proofs by induction, and I was hoping the community could give me feedback on how to format a proof of this nature:</p> <p>Let $x &gt; -1$ and $n$ be a positive integer. Prove Bernoulli's inequality: $$ (1+x)^n \ge 1+nx$$</p> <p><strong>Proof</strong>: </p> <p>Base Case: For $n=1$, $1+x = 1+x$ so the inequality holds.</p> <p>Induction Assumption: Assume that for some integer $k\ge1$, $(1+x)^k \ge 1+kx$. </p> <p>Inductive Step: We must show that $(1+x)^{k+1} \ge 1+(k+1)x$</p> <p><em>Proof of Inductive Step</em>: $$\begin{align*} (1+x)^k &amp;\ge 1+kx \\ (1+x)(1+x)^k &amp;\ge (1+x)(1+kx)\\ (1+x)^{k+1} &amp;\ge 1 + (k+1)x + kx^2 \\ 1 + (k+1)x + kx^2 &amp;&gt; 1+(k+1)x \quad (kx^2 &gt;0) \\ \Rightarrow (1+x)^{k+1} &amp;\ge 1 + (k+1)x \qquad \qquad \qquad \square \end{align*}$$ </p>
Brian M. Scott
12,042
<p>What you have is perfectly acceptable. The calculations could be organized a little more neatly:</p> <p>$$\begin{align*} (1+x)^{k+1}&amp;=(1+x)(1+x)^k\\ &amp;\ge(1+x)(1+kx)\\ &amp;=1+(k+1)x+kx^2\\ &amp;\ge1+(k+1)x\;, \end{align*}$$</p> <p>since $kx^2\ge 0$. This completes the induction step.</p>
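<p>A brute-force spot check of the inequality (my addition; of course no substitute for the induction proof):</p>

```python
import itertools

# Check (1+x)^n >= 1 + n*x on a grid of x > -1 and positive integers n.
xs = [-0.99, -0.5, -0.1, 0.0, 0.3, 1.0, 2.5]
for x, n in itertools.product(xs, range(1, 20)):
    assert (1 + x) ** n >= 1 + n * x - 1e-12
```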
116,394
<p>After importing a sound file, how can I add an echo to it?</p> <pre><code> sound = Import["test.wav", "SampleRate"] </code></pre> <p>The echo needs to be applied after a delay specified by the user. This is as far as I have got:</p> <pre><code> addEcho[sound_, time_] := Module[{tmp = sound, channels, samples, duration}, {channels, samples} = Dimensions[Import["test.wav", "Data"]]; duration = samples/tmp // N; result] </code></pre>
Szabolcs
12
<p>Since version 11.0, <a href="http://reference.wolfram.com/language/ref/AudioReverb.html" rel="nofollow"><code>AudioReverb</code></a> does this in a single go. There are several impulse responses that come with Mathematica as example data. Try <code>ExampleData["Audio"]</code> and look for names starting with <code>IR</code>.</p> <hr> <p>An echo can be added by convolving the waveform with an appropriate <a href="https://en.wikipedia.org/wiki/Impulse_response" rel="nofollow">impulse response</a>.</p> <p>You can find many real recorded impulse responses at <a href="http://www.echothief.com/" rel="nofollow">http://www.echothief.com/</a> Download the package and extract it.</p> <p>Then we can do something like this:</p> <pre><code>echo = Import["~/Downloads/EchoThiefImpulseResponseLibrary/Miscellaneous/ConvocationMall.wav"] (* import from appropriate location *) sound = ExampleData[{"Sound", "BlackcapWarbler"}] sounddata = sound[[1, 1, 1]]; (* I'm lazy so I'll throw away the second channel *) echodata = echo[[1, 1, 1]]; ListPlay[ ListConvolve[echodata, sounddata], SampleRate -&gt; 44100, PlayRange -&gt; All ] </code></pre> <p>Make sure that the impulse response has the same sample rate as the sound you are convolving it with.</p> <hr> <p>Note that simply adding a delay is equivalent to convolving with a list which has all zero elements except for the first and last one.</p> <p>For example, to add a delay of <code>t</code> seconds, with a sample rate of 44100, use</p> <pre><code>t = 0.1; echodata = Developer`ToPackedArray@Join[{1.}, ConstantArray[0., Round[44100 t]], {0.5}]; </code></pre>
4,045,238
<p>I was working on the problems in Mathematical Methods for Physics and Engineering by Riley,Hobson &amp; Bence. In Problem 2.34 (d) I'm supposed to find this integral: <span class="math-container">$$J=\int\frac{dx}{x(x^n+a^n)}.$$</span> I used partial fractions and arrived at the form <span class="math-container">$$J=\frac{1}{a^n}\left[\log x-\int \frac{dx}{x^n+a^n}\right]$$</span> and now I'm stuck, I don't know how to integrate <span class="math-container">$1/(x^n+a^n)$</span>.</p>
heropup
118,193
<p>We seek a decomposition of the form <span class="math-container">$$\frac{1}{x(x^n + a^n)} = \frac{A}{x} + \frac{Bx^{n-1}}{x^n + a^n} = \frac{(A+B)x^n + Aa^n}{x(x^n+a^n)}.$$</span> Hence the choice <span class="math-container">$A = a^{-n}$</span>, <span class="math-container">$B = -A = -a^{-n}$</span>, yields <span class="math-container">$$\frac{1}{x(x^n+a^n)} = \frac{1}{a^n} \left(\frac{1}{x} - \frac{x^{n-1}}{x^n + a^n}\right).$$</span> The rest is straightforward:</p> <p><span class="math-container">$$\int \frac{dx}{x(x^n+a^n)} = \frac{1}{a^n} \left(\log |x| - \frac{1}{n} \int \frac{nx^{n-1}}{x^n + a^n} \, dx \right) = \frac{1}{a^n} \left( \log |x| -\frac{1}{n} \log |x^n + a^n| \right) + C.$$</span></p>
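<p>A numerical sanity check of the antiderivative for sample values <span class="math-container">$n=3$</span>, <span class="math-container">$a=2$</span> (my addition; the values are arbitrary):</p>

```python
import math

n, a = 3, 2.0  # arbitrary sample values

def F(x):
    # Antiderivative: (1/a^n) * (log|x| - (1/n) * log|x^n + a^n|)
    return (math.log(abs(x)) - math.log(abs(x ** n + a ** n)) / n) / a ** n

def integrand(x):
    return 1.0 / (x * (x ** n + a ** n))

# A central difference of F should reproduce the integrand.
x, h = 1.7, 1e-6
approx = (F(x + h) - F(x - h)) / (2 * h)
assert abs(approx - integrand(x)) < 1e-8
```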
3,362,000
<p>From listing the first few terms of the sequence defined by <span class="math-container">$a_1=1$</span> and <span class="math-container">$a_{n+1}=a_n+\frac{1}{a_n}$</span>, I suspect that the sequence is increasing, so I wanted to use mathematical induction to verify my suspicion.</p> <p>I have assumed that <span class="math-container">$a_k&lt;a_{k+1}$</span>, but I don't see how I can obtain <span class="math-container">$a_{k+1}&lt;a_{k+2}$</span> because <span class="math-container">$\frac{1}{a_k}&gt;\frac{1}{a_{k+1}}$</span>.</p>
Community
-1
<p>Base case:</p> <p><span class="math-container">$$1+\dfrac11&gt;1\implies a_2&gt;a_1.$$</span></p> <p>Inductive step:</p> <p><span class="math-container">$$a_n=a_{n-1}+\frac1{a_{n-1}}&gt;a_{n-1} \\\implies a_{n-1}&gt;0 \\\implies a_n=a_{n-1}+\dfrac1{a_{n-1}}&gt;0 \\\implies a_{n+1}=a_n+\frac1{a_n}&gt;a_n.$$</span></p> <hr> <p>Anyway, it is much simpler to establish <span class="math-container">$a_n&gt;0$</span> (<span class="math-container">$1&gt;0$</span> and <span class="math-container">$a_n&gt;0\implies a_{n+1}=a_n+\dfrac1{a_n}&gt;a_n&gt;0)$</span> which is enough to justify the growth.</p>
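<p>A quick numerical illustration of the growth (my addition):</p>

```python
# First 1000 terms of a_1 = 1, a_{n+1} = a_n + 1/a_n: each term is
# positive and strictly larger than the previous one.
a = 1.0
for _ in range(1000):
    nxt = a + 1.0 / a
    assert nxt > a > 0
    a = nxt
```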
166,013
<p>Ordinary (connective) complex $K$-theory is the algebraic $K$ theory of the topological ring $\mathbb{C}$ with analytic topology. One can also study the $K$ theory of $\mathbb{C}$ with discrete topology. Weibel, in his $K$-theory book, computes the torsion in its coefficient ring. I would like to know the torsion-free part in the homotopy groups, but can't find this anywhere. The best language for this might be in terms of motives (without factoring out $\mathbb{A}^1$), but I don't know where to find its homotopy groups computed in this language either. Anyone know this? </p>
Matthias Wendt
50,846
<p>I think the answer to this question is not known. All we can say about the K-theory of $\mathbb{C}$ concerns the torsion. The trouble starts with $K_1(\mathbb{C})\cong\mathbb{C}^\times$, which is pretty difficult to understand as an abelian group. There is a formula for $K_2$ of a field due to Matsumoto which is $K_2(F)\cong (F^\times\otimes F^\times)/(x\otimes(1-x)|x\in F^\times\setminus\{1\})$ which you probably found in Weibel's book already. For $K_3$ we still have a computation due to Suslin, which expresses $K_3(\mathbb{C})$ in terms of a Bloch group, this can be found in the book "Homology of linear groups" by K.Knudson but probably also in Weibel's K-book. This description relates $K_3(\mathbb{C})$ to scissors congruences in hyperbolic space and the three-sphere. However, although there is such a conceptual interpretation of $K_3$, it is still far from understood: it is (I believe) still an open question if the natural inclusion $K_3(\overline{\mathbb{Q}})\to K_3(\mathbb{C})$ is surjective, this is a rigidity question attributed to Sah in Dupont's book on scissors congruences. Note that $K_3(\overline{\mathbb{Q}})$ is "understood" in terms of Borel regulators. Finally, there is a (still conjectural) generalization of the Bloch-group description of $K_3$ to higher K-groups, due to Goncharov. You can find this for instance in Goncharov's article in the second part of Motives (Proceedings of Symposia in Pure Mathematics, Volume 55, eds Jannsen, Kleiman, Serre, 1994), or other articles of Goncharov.</p>
3,873,882
<p>(Follow-up question to <a href="https://math.stackexchange.com/questions/3873041/how-to-do-proofs-by-induction-with-2-variables">How to do proofs by induction with 2 variables?</a>)</p> <p>Suppose you want to prove that <span class="math-container">$P(x,y,z)$</span> is true for all <span class="math-container">$x,y,z \in N$</span>. Will it suffice to prove each of the following?</p> <ol> <li><p><span class="math-container">$P(0,0,0)$</span></p> </li> <li><p>For all <span class="math-container">$k \in N:[P(0,0,k) \implies P(0,0, k+1)]$</span></p> </li> <li><p>For all <span class="math-container">$j, k \in N:[P(0,j, k) \implies P(0,j+1, k)]$</span></p> </li> <li><p>For all <span class="math-container">$i,j,k \in N:[P(i,j,k) \implies P(i+1,j, k)]$</span></p> </li> </ol> <p><strong>EDIT: This theorem may NOT be all that useful in writing proofs.</strong></p>
paulinho
474,578
<p>Your first insight is exactly the idea, though the algebra would be easier without the expansion you do:</p> <p><span class="math-container">$$f(-2) = 0 \implies -1 + (k - 10)^3 = 0 \implies (k - 10)^3 = 1$$</span></p> <p>If your problem only asks for real solutions of <span class="math-container">$k$</span>, then it is clear that <span class="math-container">$k = 11$</span> is the only solution. However, if complex numbers are allowed, you will need to consider the case where <span class="math-container">$k - 10$</span> is a complex third root of unity, i.e. <span class="math-container">$k - 10 = \frac{-1}{ 2} + \frac{\sqrt 3}{ 2} i$</span> or <span class="math-container">$k - 10 = \frac{-1}{ 2} - \frac{\sqrt 3}{ 2} i$</span>.</p>
181,499
<p>In many of the classes that I teach, I require students to learn the basics of Mathematica which we use throughout the semester to do computations and to submit homeworks (in notebook form). Some students really like this and some... not so much. </p> <p>Since I teach in an engineering department, almost everyone already knows some programming language: <em>Matlab</em>, <em>python</em>, <em>java</em>, or <em>C</em> are the most common, though there is quite a variety. One thing that I have found pretty effective is to try and relate Mathematica formalisms, structures, and ideas to those that students already know. For example:</p> <p><span class="math-container">$-$</span> When talking about using the <a href="https://reference.wolfram.com/language/ref/Listable.html" rel="noreferrer"><code>Listable</code></a> Attribute of functions, I compare this to Matlab's <a href="https://www.mathworks.com/help/matlab/matlab_prog/vectorization.html" rel="noreferrer">vectorization</a></p> <p><span class="math-container">$-$</span> When talking about alternatives for loops, Mathematica's <a href="https://reference.wolfram.com/language/ref/Table.html" rel="noreferrer"><code>Table</code></a> function is analogous to python's <a href="https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions" rel="noreferrer">List Comprehensions</a>, for example, observe the similarity between</p> <pre><code>squares = [x**2 for x in range(10)] </code></pre> <p>and</p> <pre><code>squares = Table[x^2, {x, Range[10]}] </code></pre> <p><span class="math-container">$-$</span> Mathematica's Notebook format is analogous to <a href="https://jupyter.org/index.html" rel="noreferrer">Jupyter notebooks</a> which merge word processing, computation, and interactive presentations.</p> <p>My question is this: What are some other analogies between Mathematica functions, expressions, and structures that might be helpful to new users in understanding "what Mathematica is thinking" or "why it works 
that way"?</p> <p>Update: It seems that we have some very good answers for Matlab and for python. How about other languages? Any nice analogies for/with other popular languages?</p>
Henrik Schumacher
38,178
<p>Imho some important things to translate between <em>Matlab</em> and <em>Mathematica</em>:</p> <ul> <li><p>"everything is a matrix (or inefficient)" vs. "everything is an expression"</p></li> <li><p>indexing into arrays: <code>:</code> vs. <code>All</code> or <code>;;</code></p></li> <li><p>indexing into arrays: <code>j:i:k</code> vs. <code>j;;k;;i</code></p></li> <li><p>constructing ranges: <code>j:i:k</code> vs. <code>Range[j,k,i]</code></p></li> <li><p>column-major vs. row-major: <code>mat(:)</code> vs. <code>Flatten[Transpose[mat]]</code> or <code>(mat')(:)</code> vs. <code>Flatten</code></p></li> <li><p>combining tensors: <code>cat</code> vs. <code>Join</code> and <code>ArrayFlatten</code></p></li> <li><p>anonymous functions: <code>@(x) x^2</code> vs. <code>#^2&amp;</code> or <code>x \[Function] x^2</code></p></li> <li><p>building simple tensors: <code>zeroes</code>, <code>ones</code> vs. <code>ConstantArray</code></p></li> <li><p>more tensors: <code>eye</code> and <code>speye</code> vs. <code>IdentityMatrix</code> and <code>IdentityMatrix[#,SparseArray]&amp;</code></p></li> <li><p><code>diag</code> and <code>spdiags</code> vs. <code>DiagonalMatrix</code> and <code>DiagonalMatrix[SparseArray[#]] &amp;</code> / <code>SparseArray</code> together with <code>Band</code> (but also <code>diag</code> vs. <code>Diagonal</code> btw.)</p></li> <li><p>even more tensors: <code>rand</code> vs. <code>RandomReal</code></p></li> <li><p>loops: <code>for</code> vs. <code>Do</code>, <code>Table</code>, <code>Array</code>, and <code>Map</code> (and not <code>For</code>!!11eleven)</p></li> <li><p><code>arrayfun</code> and <code>cellfun</code> vs. <code>Map</code> (with level spec <code>{-1}</code>) (special thanks to mikado for pointing out this one)</p></li> <li><p><code>while</code> and <code>repeat</code> vs. <code>While</code>, but also <code>NestWhile</code>, <code>NestWhileList</code>, <code>FixedPoint</code>, and <code>FixedPointList</code></p></li> <li><p><code>if ... 
else ... end</code> vs. <code>If</code></p></li> <li><p><code>if ... elseif... elseif... end</code> vs. <code>Which</code></p></li> <li><p><code>piecewise</code> vs., well, <code>Piecewise</code></p></li> <li><p>solving linear systems: <code>\</code> and <code>/</code> vs. <code>LinearSolve</code> and <code>LinearSolve[#1][#2, "T"] &amp;</code> (for details, see also <a href="https://mathematica.stackexchange.com/a/161818">this post</a>)</p></li> <li><p>more linear systems: <code>pinv</code> vs. <code>LeastSquares</code> and <code>PseudoInverse</code></p></li> <li><p><code>struct</code> vs. <code>Association</code></p></li> <li><p><code>cell</code> vs. <code>List</code></p></li> </ul> <p>Certainly less important</p> <ul> <li><p><code>kron</code> vs. <code>KroneckerProduct</code></p></li> <li><p><code>meshgrid</code> vs. <code>Tuples</code> (Due to the intuitive plotting in <em>Mathematica</em>, <code>Tuples</code> within <em>Mathematica</em> has not nearly the same importance as <code>meshgrid</code> has within <em>Matlab</em>.)</p></li> <li><p>classes vs. tags (<code>TagSet</code> and <code>TagSetDelayed</code>) (though each Matlab programmer I've ever met refused to use classes...)</p></li> <li><p><code>isa</code> vs. <code>Head</code> and patterns</p></li> <li><p><code>mex</code> vs. <code>Compile</code> (and LibraryLink for the pro users) </p></li> </ul>
2,249,707
<blockquote> <p>$$\int f(x)\sin x \cos x dx = \log(f(x)){1\over 2( b^2 - a^2)}+C$$</p> </blockquote> <hr> <p>On differentiating, I get,</p> <p>$$f(x)\sin x\cos x = {f^\prime(x)\over f(x)}{1\over 2( b^2 - a^2)}$$ </p> <p>$$\sin 2x (b^2 - a^2) = {f^\prime( x)\over (f(x))^2} $$</p> <p>On integrating, </p> <p>$${-1\over f(x)} = {-(b^2 - a^2)\cos 2x\over 2} \implies f(x) = { 2\over(b^2 - a^2)\cos 2x}$$</p> <p>The answer given is $\displaystyle f(x) = {1\over a^2 \sin^2 x + b^2 \cos^2 x}$.</p> <p>I am unable to get the given result, the closest I got is, $$f(x) = {2\over b^2 \cos^2x -b^2\sin^2 x- a^2\cos^2x+ a^2\sin^2 x}$$.</p> <p>How to simplify further to get the given answer ?</p> <p><a href="https://math.stackexchange.com/questions/451131/if-int-fx-sinx-cosx-mathrm-dx-frac-12b2-a2-log-fx-c">Related</a> but not duplicate.</p>
Kanwaljit Singh
401,635
<p>$f(x) = {2\over b^2 \cos^2x -b^2\sin^2 x- a^2\cos^2x+ a^2\sin^2 x}$</p> <p>$= {2\over b^2 \cos^2x -b^2(1-\cos^2 x)- a^2(1-\sin^2x)+ a^2\sin^2 x}$</p> <p>$= {2\over b^2 \cos^2x -b^2 + b^2\cos^2 x- a^2 +a^2\sin^2x+ a^2\sin^2 x}$</p> <p>$= {2\over -(a^2 + b^2 ) + 2b^2\cos^2x +2a^2\sin^2x}$</p> <p>As you can see, there is one extra term $-(a^2+b^2)$ in the denominator, and it cannot be eliminated by algebra alone.</p> <p><strong>Edit -</strong></p> <p>From @mickep's comment: the integration produces an arbitrary constant. If we choose that constant so that it contributes $+(a^2+b^2)$ to the denominator, the extra term cancels and we get the expected result $f(x) = {1\over a^2 \sin^2 x + b^2 \cos^2 x}$.</p>
2,249,707
<blockquote> <p>$$\int f(x)\sin x \cos x dx = \log(f(x)){1\over 2( b^2 - a^2)}+C$$</p> </blockquote> <hr> <p>On differentiating, I get,</p> <p>$$f(x)\sin x\cos x = {f^\prime(x)\over f(x)}{1\over 2( b^2 - a^2)}$$ </p> <p>$$\sin 2x (b^2 - a^2) = {f^\prime( x)\over (f(x))^2} $$</p> <p>On integrating, </p> <p>$${-1\over f(x)} = {-(b^2 - a^2)\cos 2x\over 2} \implies f(x) = { 2\over(b^2 - a^2)\cos 2x}$$</p> <p>The answer given is $\displaystyle f(x) = {1\over a^2 \sin^2 x + b^2 \cos^2 x}$.</p> <p>I am unable to get the given result, the closest I got is, $$f(x) = {2\over b^2 \cos^2x -b^2\sin^2 x- a^2\cos^2x+ a^2\sin^2 x}$$.</p> <p>How to simplify further to get the given answer ?</p> <p><a href="https://math.stackexchange.com/questions/451131/if-int-fx-sinx-cosx-mathrm-dx-frac-12b2-a2-log-fx-c">Related</a> but not duplicate.</p>
egreg
62,967
<p>On differentiating you get indeed $$ f(x)\sin x\cos x=\frac{f'(x)}{f(x)}\frac{1}{2(b^2-a^2)} $$ so the differential equation $$ \frac{f'(x)}{f(x)^2}=(b^2-a^2)\sin2x $$ Integrating it you get $$ -\frac{1}{f(x)}=-\frac{1}{2}(b^2-a^2)\cos2x+c $$ hence $$ f(x)=\frac{2}{(b^2-a^2)\cos2x-2c} $$</p> <p>You can expand $\cos2x=\cos^2x-\sin^2x$ and $2c=2c\cos^2x+2c\sin^2x$, so $$ (b^2-a^2)\cos2x-2c= (b^2-a^2-2c)\cos^2x+(a^2-b^2-2c)\sin^2x $$</p> <p>We can try for $$ \begin{cases} a^2-b^2-2c=2a^2 \\[4px] b^2-a^2-2c=2b^2 \end{cases} $$ which is valid for $-2c=a^2+b^2$, but I see no reason for choosing this particular solution. The only limitation is that $$ (b^2-a^2)\cos2x-2c&gt;0 $$ as far as I can see.</p> <p>Is there any other condition in your problem?</p>
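<p>A numerical sanity check that the candidate $f(x)=\frac{1}{a^2\sin^2x+b^2\cos^2x}$ (the choice $-2c=a^2+b^2$ above) really satisfies the original relation (my addition; the sample values of $a,b$ are arbitrary):</p>

```python
import math

a, b = 1.0, 2.0  # arbitrary sample values

def f(x):
    return 1.0 / (a ** 2 * math.sin(x) ** 2 + b ** 2 * math.cos(x) ** 2)

def g(x):
    # The claimed antiderivative: log(f(x)) / (2(b^2 - a^2))
    return math.log(f(x)) / (2 * (b ** 2 - a ** 2))

# d/dx g(x) should equal f(x) sin(x) cos(x) at every sample point.
h = 1e-6
for x in (0.3, 0.9, 1.4, 2.2):
    approx = (g(x + h) - g(x - h)) / (2 * h)
    assert abs(approx - f(x) * math.sin(x) * math.cos(x)) < 1e-7
```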
213,872
<p>I'm learning probability theory and I see the half-open intervals $(a,b]$ appear many times. One of the theorems about the Borel $\sigma$-algebra is that</p> <blockquote> <p>The Borel $\sigma$-algebra of ${\mathbb R}$ is generated by intervals of the form $(-\infty,a]$, where $a\in{\mathbb Q}$. </p> </blockquote> <p>Also, the distribution function induced by a probability $P$ on $({\mathbb R},{\mathcal B})$ is defined as $$ F(x)=P((-\infty,x]) $$</p> <p>Is it because of some theoretical convenience that the half-open intervals are used so often in probability theory, or are they of special interest?</p>
BallzofFury
11,969
<p>The half-open intervals are not necessarily special in a particular way; they are one of many possible generators of the Borel $\sigma$-algebra.</p> <p>As I understand it, most of the things you do with half-open intervals you could also do with other generators, but in practice they are easy to work with: for example, $(a,b]=(-\infty,b]\setminus(-\infty,a]$, so $P((a,b])=F(b)-F(a)$ comes directly from the distribution function.</p>
3,041,656
<p>I need some help with a proof: prove that any integer <span class="math-container">$n&gt;6$</span> can be written as a sum of two co-prime integers <span class="math-container">$a,b&gt;1$</span>, i.e. with <span class="math-container">$\gcd(a,b)=1$</span>.</p> <p>I tried to use "Dirichlet's theorem on arithmetic progressions" but didn't have any luck arriving at an actual proof. I mainly used arithmetic progressions modulo <span class="math-container">$4$</span>, <span class="math-container">$(4n,4n+1,4n+2,4n+3)$</span>, but got not much: only specific examples, and even then <span class="math-container">$a,b$</span> weren't always co-prime (and <span class="math-container">$n$</span> was also playing a role, so it wasn't <span class="math-container">$a+b$</span>, it was <span class="math-container">$an+b$</span>).</p> <p>I would appreciate it a lot if someone could give a hand here.</p>
Will Jagy
10,400
<p>Later: the numbers between <span class="math-container">$1$</span> and <span class="math-container">$n-1$</span> that are relatively prime to <span class="math-container">$n$</span> itself come in pairs that add up to <span class="math-container">$n$</span> and are relatively prime to each other as well. If <span class="math-container">$n=5$</span> or <span class="math-container">$n \geq 7$</span> both such numbers can be chosen strictly larger than <span class="math-container">$1.$</span></p> <p>Original:</p> <p>A different emphasis: if Euler's totient <span class="math-container">$\phi(n) \geq 3,$</span> then there is some integer <span class="math-container">$a$</span> with <span class="math-container">$\gcd(a,n) = 1$</span> and <span class="math-container">$1 &lt; a &lt; n-1.$</span> If we then name <span class="math-container">$b = n-a,$</span> we find that <span class="math-container">$\gcd(a,b) = 1$</span> as well, since a prime <span class="math-container">$p$</span> that divides both <span class="math-container">$a,n-a$</span> also divides <span class="math-container">$n,$</span> and this contradicts <span class="math-container">$\gcd(a,n) = 1.$</span></p> <p>So, when is <span class="math-container">$\phi(n) \geq 3 \; ? \; \;$</span> If <span class="math-container">$n$</span> is divisible by any prime <span class="math-container">$q \geq 5,$</span> then <span class="math-container">$\phi(n)$</span> is a multiple of <span class="math-container">$\phi(q) = q-1,$</span> and that is at least <span class="math-container">$4.$</span></p> <p>Next, if <span class="math-container">$n = 2^c \; 3^d \; . \;$</span> When <span class="math-container">$d=0$</span> we find <span class="math-container">$\phi(n) = 2^{c-1}$</span> is at least <span class="math-container">$3$</span> when <span class="math-container">$c \geq 3,$</span> leaving <span class="math-container">$2,4$</span> out. 
When <span class="math-container">$c=0$</span> we find <span class="math-container">$\phi(n) = 2 \cdot 3^{d-1}$</span> is at least <span class="math-container">$3$</span> when <span class="math-container">$d \geq 2,$</span> leaving <span class="math-container">$3$</span> out. When <span class="math-container">$c,d \geq 1,$</span> we find <span class="math-container">$\phi(n) = 2^c \cdot 3^{d-1}$</span> is at least <span class="math-container">$3$</span> when either <span class="math-container">$c \geq 2$</span> or <span class="math-container">$d \geq 2,$</span> so this leaves out <span class="math-container">$6.$</span></p> <p>Put it together, for <span class="math-container">$n=5$</span> or <span class="math-container">$n \geq 7,$</span> there is some <span class="math-container">$a$</span> with <span class="math-container">$1 &lt; a &lt; n-1$</span> and <span class="math-container">$\gcd(a,n) = 1.$</span> </p>
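<p>This pairing argument is easy to confirm by brute force; here is a short Python sketch (the helper name <code>coprime_pair</code> is my own, not anything from the question):</p>

```python
from math import gcd

def coprime_pair(n):
    """Return (a, b) with a + b = n, gcd(a, b) = 1 and 1 < a <= b, or None."""
    for a in range(2, n // 2 + 1):
        if gcd(a, n - a) == 1:
            return a, n - a
    return None

# As the answer predicts: n = 5 and every n >= 7 admit such a pair,
# while n = 2, 3, 4, 6 do not.
for n in [5] + list(range(7, 1000)):
    a, b = coprime_pair(n)
    assert a + b == n and gcd(a, b) == 1 and a > 1 and b > 1
for n in (2, 3, 4, 6):
    assert coprime_pair(n) is None
```

<p>Note how the exceptional cases <code>2, 4</code> (powers of two), <code>3</code> and <code>6</code> are exactly the ones singled out by the totient computation.</p>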
3,415,378
<p>I am looking for an estimation or an approximation of </p> <p><span class="math-container">$\sum _{k=1}^{n}{\log(k)\binom {n}{k}}$</span></p> <p>Any hints will be appreciated. Thank you.</p>
reuns
276,986
<p>For <span class="math-container">$\log n \ge 2$</span> <span class="math-container">$$\sum_{k=1}^n {n \choose k} \log(k) \ge \sum_{k=n/\log n}^n {n \choose k} \log(n/\log n)$$</span> <span class="math-container">$$=\sum_{k=1}^n {n \choose k} \log(n/\log n)-\sum_{k=1}^{n/\log n} {n \choose k} \log(n/\log n)$$</span> <span class="math-container">$$ \ge \sum_{k=1}^n {n \choose k} \log(n/\log n)-\frac{n/\log n}{n}\sum_{k=1}^n {n \choose k} \log(n/\log n)$$</span> <span class="math-container">$$ = \log(n/\log n)2^n- \log(n/\log n) 2^n/\log n$$</span> <span class="math-container">$$ = \log(n) 2^n(1-\frac1{\log n})(1-\frac{\log \log n}{\log n})$$</span> Together with the obvious bound <span class="math-container">$$\sum_{k=1}^n {n \choose k }\log(k) \le \sum_{k=1}^n {n \choose k }\log(n)=\log(n) 2^n$$</span> we get <span class="math-container">$$\frac{\sum_{k=1}^n {n \choose k }\log(k)}{\log(n) 2^n} \in [(1-\frac1{\log n})(1-\frac{\log \log n}{\log n}),1]$$</span></p>
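<p>As a numerical sanity check of the final bracket (a sketch only; since the derivation assumed <code>log n &gt;= 2</code>, I start at <code>n = 8</code>):</p>

```python
from math import comb, log

def ratio(n):
    """sum_{k=1}^n C(n,k) log k, normalized by log(n) * 2^n."""
    s = sum(comb(n, k) * log(k) for k in range(2, n + 1))  # log(1) = 0
    return s / (log(n) * 2.0 ** n)

# The lower bound of the bracket should hold for every tested n.
for n in (8, 20, 50, 100):
    lower = (1 - 1 / log(n)) * (1 - log(log(n)) / log(n))
    assert lower <= ratio(n) <= 1, (n, lower, ratio(n))
```

<p>The ratio creeps toward <code>1</code> very slowly, consistent with the <code>1 - O(log log n / log n)</code> shape of the lower bound.</p>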
428,841
<p>Let $x_{n} = \sqrt{1 + \sqrt{2 + \sqrt{3 + ...+\sqrt{n}}}}$</p> <p>a) Show that $x_{n} &lt; x_{n+1}$</p> <p>b) Show that $x_{n+1}^{2} \leq 1+ \sqrt{2} x_{n}$</p> <p>Hint : Square $x_{n+1}$ and factor a 2 out of the square root</p> <p>c) Hence Show that $x_{n}$ is bounded above by 2. Deduce that $\lim\limits_{n\to \infty} x_{n}$ exists.</p> <p>Any help? I don't know where to start.</p>
Kevin Pardede
82,064
<p>A 10-day-old question, but here is an approach.</p> <p>a) It is already clear that $ \sqrt{1 + \sqrt{2 + \sqrt{3 + ...+\sqrt{n}}}} &lt; \sqrt{1 + \sqrt{2 + \sqrt{3 + ...+\sqrt{n+1}}}}$, because $\sqrt{n} &lt;\sqrt{n + \sqrt{n+1}}$, which is trivial.<br> My point here is to give some ideas about b) and c); for me it is better to do c) first. We know that $$ \sqrt{1 + \sqrt{2 + \sqrt{3 + ...+\sqrt{n}}}} &lt; \sqrt{p+\sqrt{p+\sqrt{p+ ... }}} $$ But this is only true for $q\leq p&lt;\infty $ with $q \in \mathbb{Z}^{+}$, because it is trivial that $$ \sqrt{1 + \sqrt{2 + \sqrt{3 + ...+\sqrt{n}}}} &gt; \sqrt{1+\sqrt{1+\sqrt{1+ ... }}} $$ Let $x=\sqrt{2+\sqrt{2+\sqrt{2+ ... }}}$; then $x^2=2+ \sqrt{2+\sqrt{2+\sqrt{2+ ... }}} \rightarrow x^2-x-2=0 $, thus $x=2$, because $x&gt;0$. Now let us prove this inequality: $$\sqrt{1 + \sqrt{2 + \sqrt{3 + ...+\sqrt{n}}}} \leq \sqrt{2+\sqrt{2+\sqrt{2+ ... }}}=2 \tag{1}$$ The outer $2$ exceeds the outer $1$ by exactly $1$, so for $x_{n}$ to be bigger than $2$ it would be required that $\sqrt{2+\sqrt{3+\sqrt{4+ ... \sqrt{n}}}} \geq 3$; but squaring both sides of (1) and subtracting, we get that $\sqrt{2+\sqrt{3+\sqrt{4+ ... \sqrt{n}}}} \leq 3$. <br> For (b), first square both sides; the '1' is gone. Square again until the '2' is gone, and we arrive at this inequality: $$\sqrt{3+\sqrt{4+...\sqrt{n}}} \leq 2\sqrt{2+\sqrt{3+...\sqrt{n}}}$$ which is true, because from (1) we know that <br>$\sqrt{3 +\sqrt{4 ...+\sqrt{n}}} \leq 2$ and $ \sqrt{2+\sqrt{3+...\sqrt{n}}} &gt;0 $ <br> In fact, if you can prove (b) then (c) is trivial, and vice versa.</p>
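<p>All three parts can be checked numerically. A small Python sketch, evaluating the radical from the inside out (I stop at $n=15$ because beyond that the increments fall below double precision):</p>

```python
from math import sqrt

def x(n):
    """x_n = sqrt(1 + sqrt(2 + ... + sqrt(n))), built from the innermost root."""
    v = 0.0
    for k in range(n, 0, -1):
        v = sqrt(k + v)
    return v

vals = [x(n) for n in range(1, 16)]
assert all(a < b for a, b in zip(vals, vals[1:]))            # (a) increasing
assert all(b * b <= 1 + sqrt(2) * a + 1e-12
           for a, b in zip(vals, vals[1:]))                  # (b)
assert all(v < 2 for v in vals)                              # (c) bounded by 2
```

<p>The values converge extremely fast to roughly $1.7579$, comfortably below the bound $2$.</p>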
2,032,387
<p>I know this is somewhat of an odd question, but I am having trouble with my TI-84 calculator and I don't know why.</p> <p>I'm trying to find the RREF of the transpose of a <span class="math-container">$4\times6$</span> matrix; for some reason my graphing calculator gives me an error. Something to do with the dimensions? Here is a photo of matrix <span class="math-container">$A$</span>. <img src="https://i.stack.imgur.com/JHf9A.png" alt="" /></p> <p>I want to find RREF<span class="math-container">$(A$</span> transposed<span class="math-container">$)$</span>.</p>
Donn Liddle
871,234
<p>To compute a unique solution for a system of equations you need the same number of equations as unknowns. In other words if you have 4 variables (unknowns) you need only 4 equations.</p> <p>Adding more equations than variables creates what is called an &quot;over determined&quot; system of equations. In most &quot;math class&quot; type problems, all you need to do is drop 2 equations and solve using the remaining 4 equations. It does not matter which two equations you drop (although common sense would say to drop the more complex equations).</p> <p>There is a field of &quot;applied&quot; math in which we intentionally write more equations than variables. This is usually done when the numerical data contains small measurement errors. Using an &quot;over determined&quot; system of equations allows you to compute the most statistically likely answer for each variable.</p>
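<p>A minimal NumPy sketch of that last paragraph: six noisy equations in four unknowns, solved by least squares (the data below is made up for illustration):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))          # 6 equations, 4 unknowns
x_true = np.array([1.0, -2.0, 0.5, 3.0])
b = A @ x_true + 1e-3 * rng.standard_normal(6)   # small measurement errors

# Least-squares solution: minimizes ||A x - b||, the statistically
# natural answer when the extra equations carry noisy measurements.
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)

# It agrees with the normal-equations solution (A^T A) x = A^T b.
assert np.allclose(x_ls, np.linalg.solve(A.T @ A, A.T @ b))
```

<p>With small noise, <code>x_ls</code> recovers something very close to <code>x_true</code>, which is the point of keeping all six equations rather than dropping two.</p>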
4,198,263
<p><a href="https://www.cliffsnotes.com/study-guides/algebra/linear-algebra/real-euclidean-vector-spaces/projection-onto-a-subspace" rel="nofollow noreferrer">https://www.cliffsnotes.com/study-guides/algebra/linear-algebra/real-euclidean-vector-spaces/projection-onto-a-subspace</a></p> <p>I am following this example here. It is written in a way that clarifies some things I didn't quite grasp before; however, this part I don't quite understand:</p> <p>&quot;The vector <span class="math-container">$v_{\parallel S}$</span>, which actually lies in <span class="math-container">$S$</span>, is called the projection of <span class="math-container">$v$</span> onto <span class="math-container">$S$</span>, also denoted <span class="math-container">$\text{proj}_S v$</span>. If <span class="math-container">$v_1, v_2, \dots, v_r$</span> form an orthogonal basis for S, <strong>then the projection of <span class="math-container">$v$</span> onto <span class="math-container">$S$</span> is the sum of the projections of <span class="math-container">$v$</span> onto the individual basis vectors, a fact that depends critically on the basis vectors being orthogonal:&quot;</strong></p> <p>a. That bolded part is especially unclear. First, can you project onto subspaces, like S, that DO NOT have an orthogonal basis?</p> <p>b. I'm not understanding what they mean by the projection of v onto S being the SUM of the projections of v onto the INDIVIDUAL basis vectors.</p> <p>Regards</p>
Glorious Nathalie
948,761
<p>Take, for example, the set <span class="math-container">$S$</span> to be the plane spanned by <span class="math-container">$v_1 =(1, 2, 1) $</span> and <span class="math-container">$v_2 = (-1, 0, 1)$</span>, which are two orthogonal vectors. And let the vector to be projected onto <span class="math-container">$S$</span> be <span class="math-container">$ v = (2, 3, 4) $</span>. Let <span class="math-container">$w$</span> be the projection of <span class="math-container">$v$</span> onto <span class="math-container">$S$</span>; then in matrix-vector form, <span class="math-container">$ w = A x = v + p $</span></p> <p>where <span class="math-container">$p$</span> is orthogonal to both <span class="math-container">$v_1 $</span> and <span class="math-container">$v_2$</span></p> <p>and where <span class="math-container">$A = [v_1, v_2]$</span></p> <p>Premultiplying by <span class="math-container">$A^T $</span> (the transpose of matrix <span class="math-container">$A$</span> ) we obtain</p> <p><span class="math-container">$A^T A x = A^T v + A^T p = A^T v $</span></p> <p>Since the two columns of <span class="math-container">$A$</span> are linearly independent, <span class="math-container">$A$</span> has full rank and thus <span class="math-container">$A^T A$</span> is invertible, and we have</p> <p><span class="math-container">$x = (A^T A)^{-1} A^T v $</span></p> <p>And finally the vector <span class="math-container">$w = A x = A (A^T A)^{-1} A^T v$</span></p> <p>If the two columns of <span class="math-container">$A$</span> are orthogonal, then <span class="math-container">$A^T A$</span> is diagonal, and we have</p> <p><span class="math-container">$x_1 = \dfrac{v_1^T v}{v_1^T v_1} $</span> and <span class="math-container">$ x_2 = \dfrac{v_2^T v }{v_2^T v_2} $</span></p> <p>from which,</p> <p><span class="math-container">$w = A x = x_1 v_1 + x_2 v_2 $</span></p> <p>Note that <span class="math-container">$x_1 v_1 $</span> and <span class="math-container">$x_2 v_2 $</span> are just the projections of <span
class="math-container">$v$</span> onto the subspaces: Span{<span class="math-container">$v_1$</span>} and Span{<span class="math-container">$v_2$</span>}.</p> <p>The above verifies the statement made in the quoted paragraph.</p> <p>So for our <span class="math-container">$v_1 $</span>, <span class="math-container">$v_2$</span>, and <span class="math-container">$v$</span>, the projection is simply</p> <p><span class="math-container">$w = \dfrac{12}{6} (1, 2, 1) + \dfrac{2}{2} (-1, 0, 1) = (1, 4, 3) $</span></p> <p>If <span class="math-container">$v_2 $</span> is replaced by another basis vector like <span class="math-container">$v_3 = (0, 2, 2)$</span> then</p> <p><span class="math-container">$v_1$</span> and <span class="math-container">$v_3$</span> are no longer orthogonal, thus we cannot say that</p> <p><span class="math-container">$ w = \dfrac{12}{6} (1, 2, 1) + \dfrac{14}{8} (0, 2, 2) =$</span> (wrong answer)</p> <p>However, what we can do is follow the above procedure, by defining</p> <p><span class="math-container">$A = \begin{bmatrix} 1 &amp;&amp; 0 \\ 2 &amp;&amp; 2 \\ 1 &amp;&amp; 2 \end{bmatrix}$</span></p> <p>Then it would follow that <span class="math-container">$A^T A = \begin{bmatrix} 6 &amp;&amp; 6 \\ 6 &amp;&amp; 8 \end{bmatrix} $</span></p> <p>and <span class="math-container">$(A^T A)^{-1} = \dfrac{1}{12} \begin{bmatrix} 8 &amp;&amp; -6 \\ -6 &amp;&amp; 6 \end{bmatrix}$</span></p> <p>and the projection is given by</p> <p><span class="math-container">$w = A (A^T A)^{-1} A^T v $</span></p> <p>we have <span class="math-container">$A^T v = \begin{bmatrix} 1 &amp;&amp; 2 &amp;&amp; 1 \\ 0 &amp;&amp; 2 &amp;&amp; 2 \end{bmatrix} \begin{bmatrix} 2 \\ 3 \\ 4 \end{bmatrix} = \begin{bmatrix} 12 \\ 14 \end{bmatrix}$</span></p> <p>And therefore,</p> <p><span class="math-container">$ (A^T A)^{-1} A^T v = \dfrac{1}{12} \begin{bmatrix} 8 &amp;&amp; -6 \\ -6 &amp;&amp; 6 \end{bmatrix} \begin{bmatrix} 12 \\ 14 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix} 
$</span></p> <p>Hence, finally,</p> <p><span class="math-container">$ w = A x = v_1 + v_3 = (1, 4, 3) $</span> (correct answer)</p>
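<p>The computation above is easy to replay in NumPy; a short sketch using the same vectors as in this answer:</p>

```python
import numpy as np

v  = np.array([2.0, 3.0, 4.0])
v1 = np.array([1.0, 2.0, 1.0])
v2 = np.array([-1.0, 0.0, 1.0])   # orthogonal to v1
v3 = np.array([0.0, 2.0, 2.0])    # NOT orthogonal to v1 (same plane, though)

def proj(A, v):
    """Orthogonal projection of v onto the column space of A."""
    return A @ np.linalg.solve(A.T @ A, A.T @ v)

# Orthogonal basis: the sum of one-dimensional projections works.
w_sum = (v1 @ v / (v1 @ v1)) * v1 + (v2 @ v / (v2 @ v2)) * v2
assert np.allclose(w_sum, proj(np.column_stack([v1, v2]), v))  # both (1, 4, 3)

# Non-orthogonal basis of the same plane: the naive sum is wrong...
w_naive = (v1 @ v / (v1 @ v1)) * v1 + (v3 @ v / (v3 @ v3)) * v3
# ...but the matrix formula still gives the right projection.
w = proj(np.column_stack([v1, v3]), v)
assert np.allclose(w, [1.0, 4.0, 3.0])
assert not np.allclose(w_naive, w)
```

<p>This makes the quoted claim concrete: the term-by-term sum equals the projection exactly when the basis is orthogonal; otherwise you must fall back to <code>A (AᵀA)⁻¹ Aᵀ v</code>.</p>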
627,444
<p>I guess that I'm quite familiar with the basic "everyday algebraic structures" such as groups, rings, modules and algebras and Lie algebras. Of course, I also heard of magmas, semi-groups and monoids, but they seem to be way too general notions to admit a really interesting theory.</p> <p>Thus, I'm wondering whether there are also other interesting algebraic structures (here, this means mainly some set $S$ together with a bunch of functions $f_i:S^n\to S$ satisfying some laws) which behave somewhat differently, i.e. satisfy some unusual relations like $(ab)c=(ca)(cb)$ or $ba=(aa)(bb)$, but in such a way that there is a decent amount of theory about them (some kind of nontrivial classification or representation theorem would be truly fascinating).</p> <p>Bonus points if these structures arise naturally in some areas of mathematics.</p>
Shaun
104,041
<p>You might be interested in <a href="http://en.m.wikipedia.org/wiki/Universal_algebra" rel="nofollow">Universal Algebra</a>. You could build your own algebra that way. Have a look through <a href="http://www.math.uwaterloo.ca/~snburris/htdocs/UALG/univ-algebra.pdf" rel="nofollow"><em>A Course in Universal Algebra</em></a>, by S. Burris and H. P. Sankappanavar; it builds up a wonderful theory for them. Examples from this book include "squags" and "sloops".</p>
3,576,008
<p><strong>Question:</strong></p> <p>In acute <span class="math-container">$\Delta ABC$</span>, let <span class="math-container">$D$</span> be the foot of the altitude from <span class="math-container">$A$</span> to <span class="math-container">$BC$</span>, and let <span class="math-container">$\overline{AD}$</span> intersect the circumcircle of <span class="math-container">$\Delta ABC$</span> at <span class="math-container">$E$</span>. </p> <p>Let the circle with diameter <span class="math-container">$AE$</span> intersect lines <span class="math-container">$AB$</span> and <span class="math-container">$AC$</span> at <span class="math-container">$N$</span> and <span class="math-container">$M$</span>, respectively. Given that <span class="math-container">$DB=3NB$</span> and <span class="math-container">$MA=5NA$</span>, </p> <p>then the value of <span class="math-container">$\displaystyle \frac{DC}{MC} $</span> can be written in simplest form as <span class="math-container">$\displaystyle \frac{a}{b}$</span>. What is the value of <span class="math-container">$a-b$</span>?</p> <p><a href="https://brilliant.org/problems/interesting-circles/" rel="nofollow noreferrer">SOURCE</a></p> <p><strong>My attempt to draw:</strong> Please guide me.</p> <p><a href="https://i.stack.imgur.com/cRZK0.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cRZK0.jpg" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/iV7vu.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iV7vu.jpg" alt="enter image description here"></a></p>
David K
139,123
<p>The problem statement says, "Let the circle with diameter <span class="math-container">$AE$</span> intersect <strong>lines</strong> <span class="math-container">$AB$</span> and <span class="math-container">$AC$</span>" (emphasis added by me).</p> <p>If the statement had said "<strong>sides</strong> <span class="math-container">$AB$</span> and <span class="math-container">$AC$</span>" then your constructions would be counterexamples. But the <strong>line</strong> <span class="math-container">$AC$</span> is generally considered to be the line through <span class="math-container">$A$</span> and <span class="math-container">$C$</span> extended indefinitely in any direction. Since the circle with diameter <span class="math-container">$AE$</span> intersects that circle at <span class="math-container">$A$</span> and is not tangent to the line at <span class="math-container">$A$</span>, it will certainly intersect the line at one other point. That point may not be between <span class="math-container">$A$</span> and <span class="math-container">$C$</span> but it will still exist.</p>
9,629
<p>Are other people facing the problem of LaTeX symbols not loading on MSE? I have a high-speed internet connection, but I have been facing this problem since yesterday; any suggestions? It says "math processing error" when my connection is slow, but that is not the case here: I just see the raw LaTeX symbols instead of the compiled output.</p>
Davide Cervone
7,798
<p>Try clearing your cache and restarting your browser (restarting is an important step). It may be that you have a mixture of v2.1 and v2.2 files in your cache. The CDN edge nodes should have been updated by now, so it is probably a caching problem on your end.</p>
96,191
<p>I am trying to calculate the following integral which contains a parameter. <a href="https://i.stack.imgur.com/qUJ9f.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qUJ9f.jpg" alt="enter image description here"></a></p> <p>I have used the Integrate and FullSimplify using assumptions but Mathematica fails to produce an analytical solution.</p> <pre><code>Integrate[((Sin[u]^1.82 + (parameter^(-1))^0.63*Sin[u]^2.45)* Sin[parameter + u]^1.82)/(parameter + Sin[u])^2, {u, 0, Pi}, Assumptions -&gt; Inequality[0, Less, parameter, Less, 1]] </code></pre> <p>Is there another function I can use? If not, what function would you recommend in order to estimate the integral? My end goal is to replace π (integral upper limit) with a second parameter.</p>
Alexei Boulbitch
788
<p>Well, you might think about a semi-analytical approach:</p> <pre><code>lst = Table[{p, NIntegrate[((Sin[u]^1.82 + (p^(-1))^0.63*Sin[u]^2.45)* Sin[p + u]^1.82)/(p + Sin[u])^2, {u, 0, Pi}]}, {p, 0.1, 1, 0.05}]; </code></pre> <p>giving a list of complex values (right? do you expect that?). Then you can plot the real and imaginary parts separately and fit them to some reasonable functions. For the real part this gives:</p> <pre><code>model1 = a/x^0.5 + b; ff1 = FindFit[ Re[lst], model1, {a, b}, x] Show[{ ListPlot[Re[lst]], Plot[model1 /. ff1, {x, 0, 1}, PlotStyle -&gt; Red] }] (* {a -&gt; 2.76556, b -&gt; -2.47875} *) </code></pre> <p>and for the imaginary one the following:</p> <pre><code>model2 = a - b*x^2.5; ff2 = FindFit[ Transpose[{Transpose[lst][[1]], Im[Transpose[lst][[2]]]}], model2, {a, b}, x] Show[{ ListPlot[Transpose[{Transpose[lst][[1]], Im[Transpose[lst][[2]]]}]], Plot[model2 /. ff2, {x, 0, 1}, PlotStyle -&gt; Red] }] (* {a -&gt; -0.0000222685, b -&gt; 0.0152534} *) </code></pre> <p>The images which appear enable you to visually check the fitting quality and to vary it, if necessary. I do not publish them, since I have a poor internet connection today.</p> <p>Have fun!</p>
1,587,498
<p>I need some help with this (seemingly) simple problem. As before, it comes from Apostol "Calculus", Volume 1, Section 8.28, Question 23 and it states:</p> <p>Solve the differential equation $(1+y^2e^{2x})y^{'} + y = 0$ by introducing a change of variable of the form $y = ue^{mx}$, where $m$ is constant and $u$ is a new unknown function.</p> <p>This seems to be straightforward but I seem to be having problems. My working so far has been as follows:</p> <p>$$ y = ue^{mx} \Rightarrow y^{'} = u(me^{mx}) + u^{'}e^{mx} = e^{mx}(u^{'}+mu) $$ so we obtain: $$ (1+(ue^{mx})^{2}e^{2x})e^{mx}(u^{'}+mu) + ue^{mx}= 0 $$ $\Rightarrow$ $$ (1+u^{2}e^{2(m+1)x})e^{mx}(u^{'}+mu) + ue^{mx}= 0 $$ $\Rightarrow$ $$ e^{mx}((1+u^{2}e^{2(m+1)x})(u^{'}+mu) + u) = 0 $$ $\Rightarrow$ $$ (1+u^{2}e^{2(m+1)x})(u^{'}+mu) + u = 0 $$ $\Rightarrow$ $$ u^{'}+mu = \frac{-u}{1+u^{2}e^{2(m+1)x}} $$ $\Rightarrow$ $$ u^{'} = \frac{-u}{1+u^{2}e^{2(m+1)x}} - mu $$ $\Rightarrow$ $$ u^{'} = \frac{-u - mu(1+u^{2}e^{2(m+1)x})}{1+u^{2}e^{2(m+1)x}} $$ $\Rightarrow$ $$ u^{'} = \frac{-u - mu - mu^{3}e^{2(m+1)x}}{1+u^{2}e^{2(m+1)x}} $$ And I seem to be somewhat stuck as to how to proceed with this. I'm trying to convert this (somehow) into a differential equation with separable variables to allow for extracting a solution, but the trick seems to be eluding me. If anyone has any suggestions, that would be much appreciated.</p>
JJacquelin
108,514
<p>OK, up to $(1+u^2e^{2(m+1)x})(u'+mu)+u=0$</p> <p>Then, why not choose a value of $m$ in order to simplify the equation? Obviously $m=-1$ is a good choice: $$(1+u^2)(u'-u)+u=0$$ $$\frac{1+u^2}{u^3}u'=1$$ $$\frac{-1}{2u^2}+\ln|u|=x+c$$ The result, expressed in the form of the inverse function $x(u)$, can be inverted thanks to the Lambert W function: $$u=\pm \frac{1}{\sqrt{W(e^{-2(x+c)})}}$$ $$y=ue^{-x}=\pm \frac{e^{-x}}{\sqrt{W(e^{-2(x+c)})}}$$</p>
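<p>The implicit relation and the Lambert W inversion can be verified numerically. A Python sketch, taking the <code>+</code> branch (I roll a small Newton iteration for <code>W</code> instead of assuming any particular library):</p>

```python
from math import exp, log, sqrt

def lambert_w(z):
    """Principal branch of W(z) e^{W(z)} = z, for z > 0, via Newton's method."""
    w = log(1.0 + z)                     # reasonable start for z > 0
    for _ in range(100):
        e = exp(w)
        step = (w * e - z) / (e * (w + 1.0))
        w -= step
        if abs(step) < 1e-15:
            break
    return w

# Check: u = 1 / sqrt(W(e^{-2(x+c)})) satisfies -1/(2u^2) + ln u = x + c.
c = 0.3
for x in (-1.0, 0.0, 0.7, 2.0):
    u = 1.0 / sqrt(lambert_w(exp(-2.0 * (x + c))))
    assert abs(-1.0 / (2.0 * u * u) + log(u) - (x + c)) < 1e-9
```

<p>This works because $w = W(e^{-2(x+c)})$ means $w e^w = e^{-2(x+c)}$, i.e. $w + \ln w = -2(x+c)$, which is exactly the integrated equation with $u = w^{-1/2}$.</p>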
535,080
<p>For the following example: </p> <blockquote> <p>Let the topological space $X$ be the real line $\mathbb{R}$. An open set is any set whose complement is finite. Let $S=[0,1]$. Find the closure, the interior, and the boundary of $S$. </p> </blockquote> <p>What is meant by let the topological space $X$ be the real line $\mathbb{R}$?</p>
Elias Costa
19,266
<p><strong>Hint.</strong> Let $\tau$ be the topology of your question. You can describe the open sets of this topology more concretely. If $O\in \tau$ and $O\neq \emptyset$ then $O^c$ is finite. That is, there are $n$ numbers $x_1&lt;x_2&lt;\ldots&lt;x_{n-1}&lt;x_n$ such that $O^c=\{x_1, \ldots, x_n\}$. Then $$ O=(-\infty,x_1)\cup( x_1,x_2)\cup\ldots\cup( x_{n-1},x_n)\cup(x_n,\infty) $$ This means that each nonempty open set $O\in\tau$ contains at least two unbounded intervals $$ (-\infty,a) \mbox{ and } (b,+\infty) \mbox{ with } a\leq b. $$ Therefore the only open set that can be contained in the set $S$ is the empty one. Since, by definition, <strong>the interior of a set $S$ is the union of all open sets contained in $S$</strong>, the interior of $S$ is the empty set ($S$ contains only the empty set).</p>
3,428,995
<p>I found this inequality on twitter and I can't seem to prove the statement.</p> <p>Prove that for <span class="math-container">$a,b,c &gt; 0$</span> that </p> <p><span class="math-container">$$ \frac{a+b+c}{2} \geq \frac{ab}{a+b} + \frac{ac}{a+c} + \frac{bc}{b+c} $$</span></p> <p>After an hour (and a crick in my neck) I've only been able to turn it into </p> <p><span class="math-container">$$ a^3(b+c)+b^3(a+c)+c^3(a+b)-2abc(a+b+c) \geq 0 $$</span></p> <p>and I'm not even sure if that's much better. </p>
Michael Rozenberg
190,319
<p>Since <span class="math-container">$(3,1,0)\succ(2,1,1),$</span> your inequality is true by Muirhead. </p>
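<p>For readers who want an empirical check of the original inequality before unwinding the Muirhead comparison, a brute-force Python sketch:</p>

```python
import random

random.seed(42)
for _ in range(100_000):
    a, b, c = (random.uniform(1e-3, 100.0) for _ in range(3))
    lhs = (a + b + c) / 2
    rhs = a*b/(a + b) + a*c/(a + c) + b*c/(b + c)
    # Equality holds at a = b = c, hence the tiny rounding allowance.
    assert lhs >= rhs - 1e-9 * lhs
```
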
1,645,130
<p>Is there any known explicit bijection between these two sets? </p> <p>I know it can be proved that such bijection exists using two injections and Schröder–Bernstein theorem, but I wanted to know whether some explicit bijection is known. I failed to find any except ones constructed awkwardly from the Schröder–Bernstein theorem.</p>
Patrick Stevens
259,262
<p>This answer is incomplete, but it at least makes the Schröder-Bernstein a bit nicer.</p> <p>Firstly, $[0,1)$ bijects with $\mathbb{R}$, by the following bijections:</p> <p>$h: (0, 1) \to \mathbb{R}$ by $x \mapsto \tan(\frac{\pi}{2} (2x-1))$</p> <p>$i: [0,1) \to (0,1)$ by $\frac{1}{n} \mapsto \frac{1}{n+1}$, $0 \mapsto \frac{1}{2}$, and $x \mapsto x$ otherwise.</p> <hr> <p>Now, we show bijections with the set $S$ of (possibly countably infinite) sequences of $0$s and $1$s, and with $S'$ which is the subset of $S$ such that no sequence ever ends in infinitely many $1$s.</p> <p>$f: [0,1) \to S'$ is defined by taking the binary expansion of the input number, where we insist that the expansion never ends in infinitely many $1$s if there is a choice. For example, $$0.011\bar{1} = 0.10\bar{0}$$ so we choose the latter.</p> <p>$g: \mathcal{P}(\mathbb{N}) \to S$ is defined by $\{ a_1, a_2, \dots \}$ being sent to the sequence which has $1$s in exactly positions $a_1, a_2, \dots$.</p> <hr> <p>Finally, a bijection between $S$ and $S'$ is provided by Schröder-Bernstein, which is improved by the fact that $S'$ is a subset of $S$ so the injection one way is simply inclusion. The injection the other way could be "prepend a 0 if you don't end with infinitely many $1$s; otherwise prepend a 1 and replace all but one of the final infinitely-many 1s with 0s".</p>
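<p>The two explicit ingredients, $i$ and $h$, are easy to code up. A float-based Python sketch (real-number equality is only approximated, so the test for points of the form $\frac1n$ uses a tolerance):</p>

```python
from math import tan, atan, pi

def i(x):
    """Bijection [0, 1) -> (0, 1): 0 -> 1/2, 1/n -> 1/(n+1), else identity."""
    if x == 0.0:
        return 0.5
    n = round(1.0 / x)
    if n >= 2 and abs(x - 1.0 / n) < 1e-12:   # x is (numerically) some 1/n
        return 1.0 / (n + 1)
    return x

def h(x):
    """Bijection (0, 1) -> R."""
    return tan(pi / 2 * (2 * x - 1))

def h_inv(y):
    return (atan(y) * 2 / pi + 1) / 2

samples = [0.0, 0.5, 1/3, 0.25, 0.37, 0.9]
images = [i(x) for x in samples]
assert all(0.0 < y < 1.0 for y in images)      # i lands in the open interval
assert len(set(images)) == len(images)         # injective on the sample
for x in images:
    assert abs(h_inv(h(x)) - x) < 1e-12        # h round-trips
```

<p>Composing <code>h(i(x))</code> gives the claimed bijection $[0,1)\to\mathbb{R}$, at least up to the floating-point caveats noted above.</p>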
3,153,306
<p>In other words, say I am looking for multiple X.</p> <p>Let: </p> <p>X &lt; 1000005</p> <p>Let the first 18 divisors of X be: 1 | 2 | 4 | 5 | 8 | 10 | 16 | 20 | 25 | 32 | 40 | 50 | 64 | 80 | 100 | 125 | 160 | 200 </p> <p>Finally, I also know: X has exactly 49 divisors. </p> <p>I will tell you what the answer is... frankly, if you google it, it will probably show up... but again: is this possible knowing only the count/number of divisors and that X cannot be bigger than some number? Thanks</p>
user655522
655,522
<p>The result is true for <span class="math-container">$p = 3$</span>, <span class="math-container">$5$</span>, and <span class="math-container">$7$</span>, so assume that <span class="math-container">$p = 2n+1$</span> for <span class="math-container">$n \ge 4$</span>. Note that all the primes <span class="math-container">$q$</span> occurring in the sum are odd. Thus</p> <p><span class="math-container">$$ \begin{aligned} f(p) = f(2n+1) = &amp; \ \frac{1}{4n^2 + 4n} \sum_{q=3}^{p, \text{ with $q$ prime}} \frac{q^2 - 3}{q}\\ &lt; &amp; \ \frac{1}{4(n^2 + n)} \sum_{q=3}^{p, \text{ with $q$ prime}} q \\ &lt; &amp; \ \frac{1}{4(n^2 + n)} \sum_{q = 3}^{p, \text{with $q$ odd}} q \\ = &amp; \ \frac{1}{4(n^2 + n)} \sum_{j=1}^{n} (2j+1)\\ = &amp; \ \frac{n^2 +2n}{4(n^2 + n)} \\ = &amp; \ \frac{3}{10} - \frac{(n-4)}{20(n+1)} \\ \le &amp; \ \frac{3}{10}. \end{aligned}$$</span></p> <p>In reality, <span class="math-container">$f(p) \rightarrow 0$</span>, since </p> <p><span class="math-container">$$f(p) \le \frac{1}{p^2} \sum_{q \le p} q \le \frac{1}{p^2} \sum_{q \le p} p = \frac{1}{p^2} \cdot p \pi(p) \sim 1/\log p.$$</span></p>
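<p>Both the $3/10$ bound and the slow decay toward $0$ show up in a direct computation; a Python sketch (the sieve and the name <code>f</code> are my own):</p>

```python
def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def f(p):
    """f(p) = (sum over odd primes q <= p of (q^2 - 3)/q) / (p^2 - 1)."""
    return sum((q*q - 3) / q for q in primes_up_to(p) if q > 2) / (p*p - 1)

odd_primes = [p for p in primes_up_to(2000) if p > 2]
assert all(f(p) <= 3 / 10 for p in odd_primes)   # the claimed bound
assert f(odd_primes[-1]) < 0.1                   # f(p) is heading toward 0
```

<p>Consistent with the closing remark, the decay looks like $1/\log p$: by $p \approx 2000$ the value has dropped to roughly $0.07$.</p>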
796,564
<p><img src="https://i.stack.imgur.com/rLmA6.jpg" alt="enter image description here"></p> <p>I don't get the answer to this problem, can somebody please tell me what the answer is. </p>
user7000
149,900
<p>It might help to re-draw the rectangle so that it is slanted. Do you mean the equation of PU is y=2/3x+4? That means the slope is 2/3. Since SP is at a right angle to PU (it's a rectangle, so it has to be), its slope is the negative reciprocal of that. That would be -3/2 (flip the fraction and multiply by -1), so the answer is C. Hope this helps.</p>
796,564
<p><img src="https://i.stack.imgur.com/rLmA6.jpg" alt="enter image description here"></p> <p>I don't get the answer to this problem, can somebody please tell me what the answer is. </p>
user150369
150,369
<p>The answer is C, assuming that the equation of PU is $y = \frac{2}{3}x+4$. </p>