2,217,928
<p>I know that in infinite-dimensional Hilbert spaces sometimes the best we can do is to find an orthonormal basis, in the sense that any element of $H$ can be approximated arbitrarily closely in the <em>norm</em> by a finite linear combination of these basis elements.</p> <p>So then does that mean we can't expect that every $x$ in $H$ could be written as a finite linear combination of basis elements, correct? So then we can't have things like $x= \sum_{k=1}^{\infty} a_k e_k$, where the $a_k$ are constants in the underlying field, usually $\mathbb C$ (usually the projections of $x$ on each $e_k$). So then how do we deal with linear transformations? For example, how do we even define what a linear transformation does without explicitly saying what $T(x)$ is for each $x$ in $H$, i.e. how is saying what $T(e_k)$ is for each $k$ enough to describe the whole linear transformation?</p> <p>Thanks for answers to either question. I see in a lot of proofs people writing $x$ as this kind of infinite sum, which confuses me since the infinite sum might not be in $H$.</p> <p>Finally, if we take some kind of infinite sum of elements in $H$, and the norm of that is finite, can we conclude the infinite sum is in $H$, or is that still not enough?</p>
Ben Grossmann
81,360
<p>There are two notions of a basis. One is the idea of a Hamel basis, and the other is the idea of a Schauder basis. For more on that, see <a href="https://math.stackexchange.com/questions/630142/what-is-the-difference-between-a-hamel-basis-and-a-schauder-basis">this post</a>. Every vector space has a Hamel basis. However, in a Hilbert space, one primarily talks in terms of Schauder bases. </p> <p>So, every element in $H$ can be written as an "infinite linear combination" of elements from our orthonormal basis. Moreover, because our basis is orthonormal, the sum $\sum_{k=1}^\infty a_ke_k$ is convergent in $H$ if and only if $\sum_{k=1}^\infty |a_k|^2$ converges, and we have $\left\| \sum_{k=1}^\infty a_ke_k \right\|^2 = \sum_{k=1}^\infty |a_k|^2$.</p> <p>Remember that any infinite sum is really a limit. In particular, $\sum_{k=1}^\infty a_ke_k = \lim_{N \to \infty}\sum_{k=1}^N a_k e_k$. This limit is defined with respect to the norm on a Hilbert space. The limit exists whenever $\sum_{k=1}^\infty |a_k|^2$ is finite because Hilbert spaces are <strong>complete</strong>.</p> <p>Whenever $T$ is a continuous (equivalently, bounded) linear operator, it suffices to define $T$ over our orthonormal basis. Because $T$ is continuous, we can say $$ T(\sum_{k=1}^\infty a_ke_k) = T(\lim_{N \to \infty}\sum_{k=1}^N a_ke_k) = \lim_{N \to \infty}T(\sum_{k=1}^N a_ke_k) = \lim_{N \to \infty}\sum_{k=1}^N a_kT(e_k) $$ In a sense, we can still use this trick when $T$ is a <em>closed operator</em>, but we need to remember that $T$ is only defined over its domain. The <em>closedness</em> tells us that whenever the sum on the right converges, it coincides with the actual output.</p>
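A concrete finite-dimensional sketch of the two facts above (Parseval's identity and the coefficientwise action of a bounded operator), truncating $\ell^2$ to its first $10000$ coordinates with NumPy; the sequence $a_k = 1/k$ and the diagonal operator $T e_k = e_k/k$ are illustrative choices, not from the original answer:

```python
import numpy as np

# Model the first 10000 coordinates of l^2 with the standard basis e_k.
# a_k = 1/k is square-summable, so sum a_k e_k converges in norm.
a = np.array([1.0 / k for k in range(1, 10001)])

partial_norms = np.sqrt(np.cumsum(a ** 2))  # || sum_{k<=N} a_k e_k || for each N
limit_norm = np.sqrt(np.sum(a ** 2))        # Parseval: ||x||^2 = sum |a_k|^2

# A bounded diagonal operator T e_k = e_k / k acts coefficientwise;
# by continuity, T(sum a_k e_k) = sum a_k T(e_k).
Ta = a / np.arange(1, 10001)
print(limit_norm, np.sqrt(np.sum(Ta ** 2)))
```

Since $\sum_{k\ge1} 1/k^2 = \pi^2/6$, the printed `limit_norm` is close to $\pi/\sqrt6 \approx 1.2825$, and the norm of $Tx$ is smaller, as expected for an operator of norm at most $1$.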
110,709
<p>By a closed geodesic, I mean a smooth periodic geodesic $\mathbb{R} \rightarrow (M,g)$. I will consider them up to geometric distinction. This means that any two closed geodesics are equivalent if they have the same image in $(M,g)$. </p> <p>Manifolds with constant curvature $\leq 0$, by Cartan's theorem, cannot have any closed contractible geodesics, and every riemannian metric on $S^2$ has infinitely many closed geodesics (for $n\geq 3$, the analogous theorem for $S^n$ is not known). Moreover, if the sequence of Betti numbers of the loop space $\Omega(M)$ is unbounded and $M$ is simply-connected, then $(M,g)$ contains infinitely many (contractible) closed geodesics.</p> <p><strong>Are there any known examples of riemannian manifolds with finitely and positively many closed contractible geodesics (or even just closed geodesics)?</strong></p> <p>There is a theorem associated with Gromov asserting that the word problem of $\pi_1 M$ is solvable if there is a metric $g$ on $M$ with only finitely many contractible closed geodesics. I was wondering if there are any non-trivial examples for this theorem.</p>
Igor Rivin
11,142
<p>Ellipsoids with almost but not quite equal axes have exactly three simple closed geodesics.</p>
3,642,479
<p>What properties do we lose as we go from real numbers to quaternions, then to octonions? Do any new properties arise, or do calculations just become more "path dependent"?</p>
Physical Mathematics
592,278
<p>The most important property (IMHO) that <span class="math-container">$\mathbb{R}$</span> has that its extensions don't have is that it is a totally ordered field with least upper bound property. You don't have an order (that works nicely with the field) in these extensions.</p>
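Beyond the loss of order, the next rung of the ladder (commutativity, lost when passing to the quaternions $\mathbb{H}$) can be seen in a few lines; a minimal sketch with the Hamilton product implemented by hand on $(w,x,y,z)$ tuples:

```python
# Hamilton product on quaternions represented as (w, x, y, z) tuples.
def qmul(p, q):
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    )

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)

print(qmul(i, j))  # (0, 0, 0, 1)  = k
print(qmul(j, i))  # (0, 0, 0, -1) = -k, so ij != ji
```

Quaternion multiplication is still associative; it is only at the octonions that associativity fails as well, which is what makes calculations there "path dependent".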
2,271,204
<p>How do I solve these two equations: $\frac{x}{x+y} = \frac{1}{2}$ and $\frac{y}{x+y} = \frac{1}{6}$ ?</p> <p>I tried reducing the first one and ended up getting $x = y$; clearly my maths is pretty dusty. </p>
Community
-1
<p>Turn it into linear form:</p> <p>$$2x=x+y,\\6y=x+y.$$</p> <p>But then,</p> <p>$$x=3y$$ and substituting,</p> <p>$$6y=3y+y$$ or $y=x=0$, which is not allowed.</p>
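The inconsistency can also be seen without any substitution: the two left-hand sides must sum to $1$, but the right-hand sides do not:

```latex
\frac{x}{x+y}+\frac{y}{x+y}=\frac{x+y}{x+y}=1,
\qquad\text{but}\qquad
\frac{1}{2}+\frac{1}{6}=\frac{2}{3}\neq 1,
```

so the system has no solution (assuming $x+y \neq 0$, which the original equations require).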
2,153,128
<p>I have been given an exercise to convert switch coordinated from cylindrical to rectangular ones. This task is easy but one of them is a strange looking. The point in cylindrical coordinates is $(0,45,10)$. This corresponds to $r=0$. What is this point? Is it not the origin? But why then the angle and z-coordinate. I have one more question that is how to describe the geometric meaning of the following transformation in cylindrical coordinates: $(r,\theta,z)$ to $(-r,\theta-\pi/4,z)$</p> <p>$-r$ makes the whole problem here.</p>
Shashaank
333,392
<p>If you just have to choose, then it will be 11C2 and 11C3 respectively. Had you asked for arrangements of these, it would have been 11P2 and 11P3, because this time the 3 chosen objects can also be arranged among themselves in 3! = 6 ways. So you would have to multiply 11C3 by 3!, which gives 11P3. </p> <p>You can think of it this way:</p> <p>If you just need to choose, then your first pick has 11 choices. For each choice you have another 10 options, and for each of those another 9. But since arrangements don't matter, you have to divide by 3!, since you would be over-counting the different arrangements of the 3 persons.</p>
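The identity $11C3 \times 3! = 11P3$ can be checked directly; a minimal sketch using Python's standard library (the specific numbers are those from the answer above):

```python
import math

# Choosing 3 of 11 people ignores order; arranging them does not.
choose = math.comb(11, 3)    # 11C3 = 165
arrange = math.perm(11, 3)   # 11P3 = 990

# Each unordered choice of 3 can be ordered in 3! = 6 ways.
assert choose * math.factorial(3) == arrange

# The over-counting view: 11*10*9 ordered picks, divided by 3!.
assert 11 * 10 * 9 // math.factorial(3) == choose
print(choose, arrange)  # 165 990
```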
552,384
<blockquote> <p>Prove that</p> <p>$$ \int_{0}^{\infty}\frac{e^{-bx}-e^{-ax}}{x}\,dx = \ln\left(\frac{a}{b}\right) $$</p> </blockquote> <p><strong>My Attempt:</strong></p> <p>Define the function $I(a,b)$ as</p> <p>$$ I(a,b) = \int_{0}^{\infty}\frac{e^{-bx}-e^{-ax}}{x}\,dx $$</p> <p>Differentiate both sides with respect to $a$ to get</p> <p>$$ \begin{align} \frac{dI(a,b)}{da} &amp;= \int_{0}^{\infty}\frac{0-e^{-ax}(-x)}{x}\,dx\\ &amp;= \int_{0}^{\infty}e^{-ax}\,dx\\ &amp;= -\frac{1}{a}(0-1)\\ &amp;= \frac{1}{a} \end{align} $$</p> <p>How can I complete the proof from here?</p>
Count Iblis
155,436
<p>$$ \begin{split} \int_{0}^{\infty}\frac{\exp(-ax) - \exp(-bx)}{x}dx &amp;= \lim_{\epsilon\to 0}\int_{\epsilon}^{\infty}\frac{\exp(-ax) - \exp(-bx)}{x}dx\\ &amp;=\lim_{\epsilon\to 0}\left[\int_{\epsilon}^{\infty}\frac{\exp(-ax)}{x}dx - \int_{\epsilon}^{\infty}\frac{\exp(-bx)}{x}dx\right]\\ &amp;=\lim_{\epsilon\to 0}\left[\int_{a\epsilon}^{\infty}\frac{\exp(-t)}{t}dt - \int_{b\epsilon}^{\infty}\frac{\exp(-t)}{t}dt\right]\\ &amp;=\lim_{\epsilon\to 0}\int_{a\epsilon}^{b\epsilon}\frac{\exp(-t)}{t}dt=\lim_{\epsilon\to 0}\int_{a}^{b}\frac{\exp(-\epsilon u)}{u}du \end{split} $$</p> <p>The integrand converges uniformly to $\frac{1}{u}$ within the finite integration limits, therefore we're allowed to move the limit inside the integral.</p>
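A numeric sanity check of the claimed value $\ln(a/b)$ for $\int_0^\infty \frac{e^{-bx}-e^{-ax}}{x}\,dx$, with the illustrative choice $a=3$, $b=2$ and a plain trapezoidal rule (the grid and cutoffs are arbitrary):

```python
import numpy as np

# The integrand extends continuously to x = 0 with value a - b,
# so a tiny lower cutoff is harmless; the tail beyond x = 60 is negligible.
a, b = 3.0, 2.0
x = np.linspace(1e-9, 60.0, 400_000)
f = (np.exp(-b * x) - np.exp(-a * x)) / x

# Trapezoidal rule by hand.
integral = np.sum((f[1:] + f[:-1]) / 2 * np.diff(x))
print(integral, np.log(a / b))  # both approximately 0.405465
```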
1,940,249
<p>According to Wikipedia, <a href="https://en.wikipedia.org/wiki/First-order_logic#Completeness_and_undecidability" rel="nofollow">first order logic is complete</a>. What is the proof of this?</p> <p>(Also, in the same paragraph, it says that it's undecidable. Couldn't you just enumerate all possible proofs and disproofs to decide it, though?)</p>
Noah Schweber
28,111
<p>I think there's a bit of confusion about the meaning of the theorem. The phrase "First-order logic is complete" means <strong>exactly</strong> "If a sentence $\varphi$ is true in every model of $\Gamma$, then $\Gamma\vdash\varphi$" <em>(so it's saying something about how the</em> semantics <em>and a</em> <a href="https://math.stackexchange.com/questions/1463478/what-is-the-role-of-the-proof-system-in-a-henkin-proof-of-completeness?noredirect=1&amp;lq=1">specific deduction system</a> <em>interact; note that this means that the phrase isn't totally appropriate, and should really be along the lines of e.g. "Natural deduction is complete," or better yet "Natural deduction is complete for the usual semantics of first-order logic")</em>.</p> <p>As to the proof, it's given in many easily-findable sources so I won't give it in detail, but the key observation is the following. The completeness theorem is equivalent to the statement "If $\Gamma$ is consistent (that is, $\Gamma\not\vdash\perp$) then $\Gamma$ is satisfiable (that is, there is some $\mathcal{M}\models\Gamma$)." So the proof proceeds by <em>constructing a model</em>: we give a way to associate a structure $\mathcal{M}_\Gamma$ to <em>any</em> theory $\Gamma$, and then want to argue that $\mathcal{M}_\Gamma\models\Gamma$ if $\Gamma$ is consistent.</p> <p>Since our only hypothesis on $\Gamma$ is syntactic (namely, its consistency), it's reasonable that $\mathcal{M}_\Gamma$ should itself be related to syntactic ideas; the most obvious first guess is that $\mathcal{M}_\Gamma$ should consist of <em>terms in the language</em> modulo <em>$\Gamma$-provable equality</em> (with the obvious structure imposed). The problem is that depending on $\Gamma$ this might not actually work, either because the language doesn't have enough terms or because <a href="https://math.stackexchange.com/questions/1449041/why-are-maximal-consistent-sets-essential-to-henkin-proofs-of-completeness/1449045">$\Gamma$ itself is too weak</a>. 
Instead, we first have to construct a possibly-larger theory in a possibly-larger language, $\hat{\Gamma}\supseteq\Gamma$, such that $(i)$ $\hat{\Gamma}$ is consistent whenever $\Gamma$ is, and $(ii)$ if $\Gamma$ is consistent then $\mathcal{M}_{\hat{\Gamma}}\models \hat{\Gamma}$ <em>(note that $(ii)$ implies $(i)$, so this is a bit redundant)</em>.</p> <p>It's worth noting that this <strong>is not Gödel's original proof</strong>, but rather Henkin's. Gödel's (quite neat) original proof was more proof-theoretic; it's not as easily findable, but <a href="http://www.andrew.cmu.edu/user/avigad/Papers/goedel.pdf" rel="nofollow noreferrer">this article by Avigad</a> has an excellent outline of it (section 4).</p> <p>Especially in light of the undecidability of first-order logic and the <em>syntactic</em> incompleteness of many first-order theories like PA <em>(note that this notion of incompleteness is different from that of the completeness theorem; saying that a</em> theory, <em>rather than a</em> logic, <em>is incomplete is to say that there is some sentence the theory neither proves nor disproves)</em>, the completeness theorem may appear quite surprising; see <a href="https://math.stackexchange.com/questions/2099531/how-to-verify-satisfialibility-in-a-model-confusions-with-g%C3%B6dels-completeness">this old question</a> for some discussion of this.</p>
30,586
<p>There are many ways to type a pipe. You could use \$|\$ (<span class="math-container">$|$</span>), \$\vert\$ (<span class="math-container">$\vert$</span>), \$\mid\$ (<span class="math-container">$\mid$</span>), or just a plain | (not surrounded by dollar signs). You could also use vmatrix to indicate matrix determinants.</p> <p>I wanted to know when is it appropriate to use each type of pipe on Mathematics Stack Exchange. For example, pipes can be used in the following cases:</p> <ul> <li>To indicate that one integer is a factor (or divisor) of another (e.g. <span class="math-container">$2|4$</span>)</li> <li>To indicate conditions in set notation (e.g. <span class="math-container">$Dom(\sqrt{x}) = \{x \in \mathbb{R} \mid x \ge 0\}$</span>)</li> <li>To indicate absolute value (e.g. <span class="math-container">$|-2019| = |2019| = 2019$</span>)</li> <li>To indicate the cardinality of a set (e.g. <span class="math-container">$|\emptyset|=0$</span>)</li> <li>To indicate the order of an element of a group (e.g. <span class="math-container">$\forall x \in K_4 ((x=e) \lor (|x|=2))$</span>, where <span class="math-container">$K_4$</span> is the Klein four-group)</li> <li>To indicate the determinant of a square matrix (e.g. <span class="math-container">$\begin{vmatrix} 2 &amp; 3\\5 &amp; 7 \end{vmatrix}=-1$</span>)</li> </ul> <p>There is also of course the double pipe symbol (<span class="math-container">$||$</span>), which is used for logical or in programming, concatenation, and parallel lines; and should not be confused with the number eleven.</p>
Community
-1
<p>I believe the word "<a href="https://en.wikipedia.org/wiki/Vertical_bar#Pipe" rel="nofollow noreferrer">pipe</a>" is used more in computer science than in mathematics. </p> <p>Unlike the strict syntax requirement in programming languages, <em>how</em> to type out this vertical bar in mathematical writing is mostly about typographical consideration. There are discussions regarding different commands in <a href="https://tex.stackexchange.com/q/498/5590">this question</a> at <a href="https://tex.stackexchange.com">https://tex.stackexchange.com</a>. Mathematically, it does not really matter.</p>
3,831,310
<p>I am trying to integrate</p> <p><span class="math-container">$$\int \frac {dv}{\frac {-c}{m}v^2 - g \sin \theta}$$</span></p> <p>I substituted <span class="math-container">$u = \frac{c}{m}$</span> and <span class="math-container">$w = g \sin \theta$</span> to get</p> <p><span class="math-container">$$-\int \frac {dv}{uv^2 + w}$$</span></p> <p>I'm wondering if I have to do a second substitution. To be honest, I don't know if I can do that or how to do it. Furthermore, maybe I have to rearrange to get something of the form <span class="math-container">$\frac1{1+x^2}$</span>.</p>
Soumyadwip Chanda
823,370
<p>Factor <span class="math-container">$u$</span> out of the denominator to get <span class="math-container">$$-\frac{1}{u}\int_{ }^{ }\frac{dv}{v^{2}+\left(\frac{w}{u}\right)}$$</span></p> <p>Now you may apply a standard integral; a list of these is given <a href="http://integral-table.com/downloads/integral-table.pdf" rel="nofollow noreferrer">here</a> and may provide some additional help.</p> <p>Also, I have posted some pictures below of the standard integrals and the common integration techniques. Might help someone. <em>(I know that the question didn't demand this and I will remove this part if this attracts too many downvotes or opposing comments)</em></p> <blockquote class="spoiler"> <p> <a href="https://i.stack.imgur.com/11ovy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/11ovy.png" alt="enter image description here" /></a> <a href="https://i.stack.imgur.com/9OVMG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9OVMG.png" alt="enter image description here" /></a> <a href="https://i.stack.imgur.com/cmCZD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cmCZD.png" alt="enter image description here" /></a> <a href="https://i.stack.imgur.com/cBDmo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cBDmo.png" alt="enter image description here" /></a> <a href="https://i.stack.imgur.com/2nlRc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2nlRc.png" alt="enter image description here" /></a></p> </blockquote>
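Assuming $u, w > 0$, the standard form $\int \frac{dv}{v^2+k^2} = \frac{1}{k}\arctan\frac{v}{k} + C$ with $k=\sqrt{w/u}$ then finishes the computation:

```latex
-\frac{1}{u}\int \frac{dv}{v^{2}+\left(\sqrt{w/u}\,\right)^{2}}
= -\frac{1}{u}\sqrt{\frac{u}{w}}\arctan\!\left(v\sqrt{\frac{u}{w}}\right)+C
= -\frac{1}{\sqrt{uw}}\arctan\!\left(v\sqrt{\frac{u}{w}}\right)+C.
```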
1,016,302
<p>I'm having a bit of trouble with a proof in Ross' Elementary Analysis. The theorem is the $\epsilon$-$\delta$ one. Theorem: $f$ is continuous at $x_0$ in $dom(f)$ if and only if for each $\epsilon&gt;0$ there exists $\delta&gt;0$ such that $x \in dom(f)$ and $|x-x_0|&lt;\delta$ implies $|f(x)-f(x_0)|&lt;\epsilon$ $(1)$.</p> <p>My issue with the proof is in the forward direction. He writes:</p> <p>"Now assume $f$ is continuous at $x_0$, but $(1)$ fails. Then there exists $\epsilon&gt;0$ so that the implication $x\in dom(f)$ and $|x-x_0|&lt;\delta$ implies $|f(x)-f(x_0)|&lt; \epsilon$ failed for each $\delta&gt;0$. In particular, the implication $x\in dom(f)$ and $|x-x_0|&lt;\frac{1}{n}$ implies $|f(x)-f(x_0)|&lt; \epsilon$ fails for each $n\in\mathbb{N}$. So for each $n\in\mathbb{N}$ there exists $x_n$ in $dom(f)$ such that $|x_n-x_0|&lt; \frac{1}{n}$ and yet $|f(x_0)-f(x_n)|\ge \epsilon$."</p> <p>In his example, where he chose $\delta=\frac{1}{n}$, I am failing to see how that particular choice of delta implies $|f(x_0)-f(x_n)|\ge\epsilon$. Is it just by assumption? Or is there another reason involved?</p>
2good4this
178,838
<p>His choice of delta is not special in any interesting way, and it's not that particular choice of delta that makes the condition for continuity fail. That delta is just one of the many positive deltas that make the condition fail for a particular epsilon. The assumption that $(1)$ fails says that there is some $\epsilon&gt;0$ such that <strong>for each</strong> $\delta&gt;0$ there is an $x\in dom(f)$ with $|x-x_0|&lt;\delta$ and yet $|f(x)-f(x_0)|\ge\epsilon$. Choosing $\delta=\frac{1}{n}$ simply applies this to a convenient sequence of deltas shrinking to zero: for each $n$ it produces a point $x_n$ with $|x_n-x_0|&lt;\frac{1}{n}$ and $|f(x_n)-f(x_0)|\ge\epsilon$. So that choice adds no extra assumption to his argument; he is just choosing deltas that allow us to get as close to $x_0$ as we like while staying greater than zero.</p>
2,368,771
<blockquote> <p>How to prove the function $$ f(z)=\exp\Big(\frac{z}{1-\cos z}\Big)$$ has an essential singularity at $z=0$ ?</p> </blockquote> <p>It's actually hard to express the Laurent series of $f(z)$ around $0$, because the exponent $\frac{z}{1-\cos z}$ is itself already in series form (since $\cos z$ appears there and has a series expansion), and $e^{z/(1-\cos z)}$ in turn has its own series form.</p> <p>Edit 1: I have already seen <a href="https://math.stackexchange.com/questions/407318/at-z-0-the-function-fz-expz-over-1-cos-z-has">this</a>, but it does not give information about the Laurent expansion of $f(z)$.</p> <p>Edit 2: How should I proceed? Or can anyone explain why the limit of $e^{z/(1-\cos z)}$ at $0$ does not exist?</p>
mathstackuser12
361,383
<p>$\exp \left( \frac{z}{1-\cos \left( z \right)} \right)=\exp \left( \frac{2}{z}+\frac{z}{6}+\frac{{{z}^{3}}}{120}+... \right)=\exp \left( \frac{2}{z}+O\left( z \right) \right)=\exp \left( O\left( z \right) \right)\sum\limits_{n=0}^{\infty }{\frac{{{2}^{n}}}{n!{{z}^{n}}}}$</p> <p>...or am I missing something?</p> <p><strong>Edit:</strong> if that's not enough, note: $$\begin{align} &amp; \frac{z}{1-\cos \left( z \right)}=\frac{z}{2}{{\csc }^{2}}\left( \frac{1}{2}z \right)=\frac{z}{2}{{\left( \sum\limits_{n=0}^{\infty }{\frac{{{\left( -1 \right)}^{n+1}}2\left( {{2}^{2n-1}}-1 \right){{B}_{2n}}}{\left( 2n \right)!}}{{\left( \frac{z}{2} \right)}^{2n-1}} \right)}^{2}} \\ &amp; =\frac{z}{2}{{\left( \frac{2}{z}+\frac{z}{12}+... \right)}^{2}}=\frac{2}{z}{{\left( 1+\frac{{{z}^{2}}}{24}+... \right)}^{2}}=\frac{2}{z}+O\left( z \right) \\ \end{align}$$</p>
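The leading Laurent coefficients of $\frac{z}{1-\cos z}$ can be checked numerically: after removing $2/z$ and $z/6$, the remainder divided by $z^3$ should approach $1/120$ as $z \to 0$. A minimal sketch in Python (the sample point $z=0.05$ is an arbitrary choice):

```python
import math

# Check z/(1 - cos z) = 2/z + z/6 + z^3/120 + ... numerically.
def g(z):
    return z / (1.0 - math.cos(z))

z = 0.05
remainder = g(z) - 2.0 / z - z / 6.0
print(remainder / z**3)  # approximately 1/120 = 0.008333...
```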
77,642
<p>$$\frac{1}{\sin(z)} = \cot (z) + \tan (\tfrac{z}{2})$$</p> <p>I did this: </p> <p><strong>First attempt</strong>: $$\displaystyle{\frac{1}{\sin (z)} = \frac{\cos (z)}{\sin (z)} + \frac{\sin (\frac{z}{2})}{ \cos (\frac{z}{2})} = \frac{\cos (z) }{\sin (z)} + \frac{2\sin(\frac{z}{4})\cos(\frac{z}{4})}{\cos^{2}(\frac{z}{4})-\sin^{2}(\frac{z}{4})}} = $$ $$\frac{\cos (z)(\cos^{2}(\frac{z}{4})-\sin^{2}(\frac{z}{4}))+2\sin z \sin(\frac{z}{4})\cos(\frac{z}{4})}{\sin (z)(\cos^{2}(\frac{z}{4})-\sin^{2}(\frac{z}{4}))}$$</p> <p>Stuck.</p> <p><strong>Second attempt</strong>: </p> <p>$$\displaystyle{\frac{1}{\sin z} = \left(\frac{1}{2i}(e^{iz}-e^{-iz})\right)^{-1} = 2i\left(\frac{1}{e^{iz}-e^{-iz}}\right)}$$</p> <p>Stuck.</p> <p>Does anybody see a way to continue?</p>
robjohn
13,854
<p>Start out with $$ \frac{1-\cos(z)}{\sin(z)}=\frac{2\sin^2(\tfrac{z}{2})}{2\sin(\tfrac{z}{2})\cos(\tfrac{z}{2})}=\tan(\tfrac{z}{2})\tag{1} $$ and add $\cot(z)$ to both sides: $$ \frac{1}{\sin(z)}=\cot(z)+\tan(\tfrac{z}{2})\tag{2} $$</p>
1,782,423
<p>Gödel's completeness theorem says that for any first order theory $F$, the statements derivable from $F$ are precisely those that hold in all models of $F$. Thus, it is not possible to have a theorem that is "true" (in the sense that it holds in the intersection of all models of $F$) but unprovable in $F$.</p> <p>However, Gödel's completeness theorem is not constructive. <a href="https://en.wikipedia.org/wiki/G%C3%B6del%27s_completeness_theorem#Relationship_to_the_compactness_theorem" rel="nofollow">Wikipedia</a> claims that (at least in the context of reverse mathematics) it is equivalent to the weak König's lemma, which in a constructive context is not valid, as it can be interpreted to give an effective procedure for the halting problem.</p> <p>My question is, is it still possible for there to be "unprovable truths" in the sense that I describe above in a first order axiomatic system, given that Gödel's completeness theorem is non-constructive, and hence, given a property that holds in the intersection of all models of $F$, we may not actually be able to <em>effectively</em> prove that proposition in $F$?</p>
Roy Simpson
40,335
<p>Although it is the case that the Completeness Theorem of First Order Logic implies that:</p> <p><em>Every valid sentence $\varphi$ has a proof</em>,</p> <p>and that such proofs can be found semi-decidably, there <em>is</em> a sense in which the necessity of using weak König's Lemma ($WKL_{0}$) in the meta-proof of the Completeness Theorem (and also of the Compactness Theorem) allows one to construct a logic of first-order expressivity that is incomplete and non-compact. This will happen when the semantics is constructive, unlike in First Order Logic.</p> <p>An interesting example is <a href="https://www.cs.uic.edu/~hinrichs/herbrand/html/herbrandlogic.html" rel="nofollow noreferrer">Herbrand Logic</a> from the Stanford Logic Group.</p> <p>Here the semantics is replaced, as schematically described:</p> <p>First Order Logic == First Order Syntax + Tarski Semantics</p> <p>Herbrand Logic == First Order Syntax + Herbrand Semantics</p> <p>Herbrand Semantics is constructively determined from the syntax itself, and does not need $WKL_{0}$ to construct arbitrary models. In Herbrand Logic entailment is not semi-decidable, and the logic is inherently incomplete and non-compact.</p> <p>Some valid statements would require infinite proofs.</p> <p>There are some simple examples, although the formal meta-proof uses Diophantine sets.</p>
1,740,032
<p>In layman's terms, why would anyone ever want to change basis? Do eigenvalues have to do with changing basis? </p>
Owen
125,714
<p>Singular Value Decomposition, which is one of the most widely-used techniques in numerical computing, is a basis change. It takes a linear transformation and does a basis change on both the input space and the output space so that the transformation becomes just multiplication with no addition (i.e. a diagonal matrix).</p> <p>This uncovers a lot of useful information about the transformation, such as which input variables play the biggest role, how correlated they are to each other, whether there are any irrelevant variables, etc. It is pretty much the sledge-hammer for doing numerical computing with matrices.</p>
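A minimal NumPy sketch of this basis-change reading of the SVD (the random matrix and vector are illustrative):

```python
import numpy as np

# SVD as a change of basis: A = U @ diag(s) @ Vt means that in the bases
# given by the columns of V (input) and U (output), the map is diagonal.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 3))

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# In the new bases, applying A is just coordinate-wise scaling by s.
x = rng.normal(size=3)
coords_in = Vt @ x           # express x in the input basis V
coords_out = s * coords_in   # "multiplication with no addition"
assert np.allclose(U @ coords_out, A @ x)

# The singular values rank input directions by how much A stretches them.
print(s)  # nonincreasing and nonnegative
```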
219,782
<p>Hi I'm wondering if there's some workaround to get Mathematica to use time-shifting identities for Laplace and Inverse Laplace transforms. My examples are below.</p> <pre><code>$Assumptions = Element[{t, τ}, PositiveReals]; LaplaceTransform[f[t -τ]HeavisideTheta[t-τ], t, s] (* Output *) (* LaplaceTransform[f[t -τ]HeavisideTheta[t-τ], t, s] *) (* Desired Output *) (* E^(-s τ) LaplaceTransform[f[t], t, s] *) (* And for inverse *) InverseLaplaceTransform[E^(-s τ) LaplaceTransform[f[t], t, s], s,t] (* Output is again unchanged *) (* Desired Output *) (* f[t -τ]HeavisideTheta[t-τ] *) </code></pre> <p>Thanks for the help</p>
Nasser
70
<p><a href="https://i.stack.imgur.com/saN4W.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/saN4W.gif" alt="enter image description here"></a></p> <pre><code>fApprox[max_, t_] := (1/2) + Sum[ Sin[2 n Pi]/(Pi n) Cos[ n Pi t] + ((-1)^n - Cos[2 Pi n])/(Pi n) Sin[n Pi t], {n, 1, max}] f[t_] := Piecewise[{{0, 0 &lt; t &lt; 1}, {1, 1 &lt; t &lt; 2}}]; Manipulate[ Plot[{f[t], fApprox[nTerms, t]}, {t, 0, 2}, PlotRange -&gt; {Automatic, {-0.3, 1.3}}, PlotStyle -&gt; {{Thick, Blue}, Red}, Exclusions -&gt; None ], {{nTerms, 5, "How many terms?"}, 1, 30, 1, Appearance -&gt; "Labeled"}, TrackedSymbols :&gt; {nTerms} ] </code></pre> <p>Notice the Gibbs effect where <span class="math-container">$f(t)$</span> is discontinuous. There is about 9% overshoot on each side, which can't be reduced no matter how many terms are used.</p> <hr> <p>To plot the periodic extended version:</p> <p><a href="https://i.stack.imgur.com/NcV6b.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NcV6b.gif" alt="enter image description here"></a></p> <pre><code>fApprox[max_, t_] := (1/2) + Sum[ Sin[2 n Pi]/(Pi n) Cos[ n Pi t] + ((-1)^n - Cos[2 Pi n])/(Pi n) Sin[n Pi t], {n, 1, max}] f[t_] := Piecewise[{{0, 0 &lt; t &lt; 1}, {1, 1 &lt; t &lt; 2}}]; fExtended[t_] := If[t &lt; 0 || t &gt; 2, f[Mod[t, 2]], f[t]] Manipulate[ Plot[{fExtended[t], fApprox[nTerms, t]}, {t, -4, 4}, PlotRange -&gt; {Automatic, {-0.3, 1.3}}, PlotStyle -&gt; {{Thick, Blue}, Red}, Exclusions -&gt; None ], {{nTerms, 5, "How many terms?"}, 1, 30, 1, Appearance -&gt; "Labeled"}, TrackedSymbols :&gt; {nTerms} ] </code></pre> <p>For more cool Fourier series animations, all done using Mathematica, I found this <a href="https://www.12000.org/my_notes/fourier_series_animations/index.htm" rel="nofollow noreferrer">web page</a> (for some reason the Mathematica source code used for those is not shown at this time).</p> <p>Mathematica is probably the best software for making such animations.</p>
2,625,719
<p>I'm getting an equation $$(\ln y - x)\frac{dy}{dx} - y\ln y = 0$$</p> <p>Which I try to factorize and bring over to get: $$\frac{dy}{dx} = \frac{y\ln y}{(\ln y - x)}$$</p> <p>But this cannot be factorized further. I am intending to use either the substitution $u=y/x$ or finding an integrating factor, but this form makes it hard for me to try either method. How should I proceed?</p>
user577215664
475,762
<p><strong><em>Hint</em></strong></p> <p>Substitute $y=e^z$ $$(\ln y - x)\frac{dy}{dx} - y\ln y = 0$$ $$(z - x)\frac{dy}{dz} \frac{dz}{dx}- e^zz = 0$$ $$(z - x)e^z z'- e^zz = 0$$ $$(z - x) z'-z = 0$$ $$z'=\frac z {z-x}$$ Then consider $\frac {dx}{dz}$ instead: $$x'+\frac x z=1$$ Multiply by z $(z \ne 0)$ $$ x'z+x=z$$ Note the derivative of $xz$ $$(xz)'=z$$ Now integrate $$ x=\frac 1 z \int zdz$$ Evaluate the integral then substitute $z=\ln(y)$ </p>
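Carrying the hint to the end (a sketch; $C$ denotes the constant of integration):

```latex
xz=\int z\,dz=\frac{z^{2}}{2}+C
\quad\Longrightarrow\quad
x=\frac{z}{2}+\frac{C}{z}
\quad\Longrightarrow\quad
x=\frac{\ln y}{2}+\frac{C}{\ln y}.
```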
1,910,030
<p>Let $V$ be a vector space and $U,W$ be its subspaces. Prove that if the union $U\cup W$ is a subspace of $V$, then $W \subseteq U$ or $U \subseteq W$.</p> <p>I'm not sure where to begin at all really.</p>
Kitter Catter
166,001
<p>Broadly speaking, the reason we are multiplying is that each of those choices is in some respect independent: our choice of ranks doesn't affect the number of suit choices available.</p> <p>As to the 44 issue, I think others have covered that neatly: since we want only two pair and not a full house, the fifth card cannot be either of the leftover cards from the ranks chosen in the 4 choose 2.</p> <p>If you want a better challenge, you might think of a worn-out deck where there are only two 2s, or three 3s.</p>
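The "44" can be seen in the standard two-pair count; a minimal Python sketch (the counting itself is the classical one, not taken verbatim from the answer above):

```python
from math import comb

# Standard two-pair count in 5-card poker, illustrating the 44: the fifth
# card must avoid both chosen ranks, leaving 52 - 8 = 44 cards.
ranks = comb(13, 2)        # choose the two paired ranks
suits = comb(4, 2) ** 2    # choose suits within each pair
kicker = 44                # any card of a third rank

total = ranks * suits * kicker
print(total)  # 123552
```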
2,614,375
<p>I have tried using prime factorization, which is $2^2 \times 5^2 \times 11$, and found that I'll need $37$ slices. But the question paper, which is multiple choice, has no such answer; the choices are $30,31,32,61,110$.</p>
Community
-1
<p>You are asked to count the slices, not the cuts.</p> <p>Since $1100=10\times10\times11$, there are $10$, $10$, and $11$ slices along the three directions, hence $$10+10+11=31.$$</p>
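Assuming the block is to be cut into $1100$ unit pieces, a brute-force search over factorizations $1100=a\times b\times c$ confirms that $31$ is the minimum total number of slices; a minimal Python sketch:

```python
# Among all ways to write 1100 as a product of three slice counts a*b*c,
# find the minimum total number of slices a + b + c.
best = None
for a in range(1, 1101):
    if 1100 % a:
        continue
    for b in range(1, 1100 // a + 1):
        if (1100 // a) % b:
            continue
        c = 1100 // (a * b)
        total = a + b + c
        if best is None or total < best[0]:
            best = (total, (a, b, c))

print(best)  # (31, (10, 10, 11))
```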
424,853
<p>As an interested outsider, I have been intrigued by the number of times that homotopy theory seems to have revamped its foundations over the past fifty years or so. Sometimes there seems to have been a narrowing of focus, via a choice to ignore certain &quot;pathological&quot;—or at least intractably complicated—phenomena; instead of considering all topological spaces, one focuses only on compactly generated spaces or CW complexes or something. Or maybe one chooses to focus only on <i>stable</i> homotopy groups. Other times, there seems to have been a broadening of perspective, as new objects of study are introduced to fill perceived gaps in the landscape. Spectra are one notable example. I was fascinated when I discovered the 1991 paper by Lewis, <a href="https://doi.org/10.1016/0022-4049(91)90030-6" rel="noreferrer">Is there a convenient category of spectra?</a>, showing that a certain list of seemingly desirable properties cannot be simultaneously satisfied. More recent concepts include model categories, <span class="math-container">$\infty$</span>-categories, and homotopy type theory.</p> <p>I was wondering if someone could sketch a timeline of the most important such &quot;foundational shifts&quot; in homotopy theory over the past 50 years, together with a couple of brief sentences about what motivated the shifts. Such a bird's-eye sketch would, I think, help mathematicians from neighboring fields get some sense of the purpose of all the seemingly high-falutin' modern abstractions, and reduce the impenetrability of the current literature.</p>
Dmitri Pavlov
402
<p>Such a timeline is necessarily highly subjective.</p> <p>With this disclaimer in mind, we can identify some important turns in the development of foundations of homotopy theory. The list below concentrates on developments that in some way affect the foundations of homotopy theory, as opposed to general advances in homotopy theory. Given the length of the list, I probably omitted many important developments, feel free to point them out in the comments! I also excluded from consideration the last decade or so, restricting to older developments.</p> <p><strong>Poincaré</strong> defined <em>homology</em> (via Betti numbers) and the <em>fundamental group</em> in a series of papers starting from 1895. The initial approach was nonrigorous, but in response to the resulting criticism, Poincaré reformulated his work in terms of simplicial complexes.</p> <p><strong>Fréchet</strong> defined <em>metric spaces</em> in 1906 and <strong>Hausdorff</strong> defined <em>topological spaces</em> in 1914. This made it possible to study the topological properties of spaces without first triangulating them.</p> <p>Around 1925, Emmy <strong>Noether</strong> proposed to upgrade Betti numbers to <em>homology groups</em>. In connection with this, sometime in the 1930s, the terminology shifted from “combinatorial topology” to “algebraic topology”.</p> <p>Around 1931, <strong>Veblen</strong> and J. H. C. <strong>Whitehead</strong> introduced the modern definition of a <em>smooth manifold</em>.</p> <p><strong>Eilenberg</strong> defined <em>singular homology</em> in 1943, which resulted in a systematic study of homology and cohomology (defined by <strong>Kolmogoroff</strong> and <strong>Alexander</strong> in 1936) of arbitrary topological spaces.</p> <p>Around 1945, <strong>Leray</strong> introduced <em>sheaves</em> and <em>spectral sequences</em>.
The relevant theory was further developed by <strong>Cartan</strong>, <strong>Serre</strong>, and others.</p> <p><strong>Eilenberg</strong> and <strong>MacLane</strong> introduced <em>categories</em>, <em>functors</em>, and <em>natural transformations</em> in 1945. Ever since then, category theory played an increasingly important role in homotopy theory, to the point where we are now often unable to cleanly separate them.</p> <p><strong>Eilenberg</strong> and <strong>Zilber</strong> developed the theory of <em>simplicial sets</em> (known at the time as “complete semi-simplicial complexes”) in 1949.</p> <p>J. H. C. <strong>Whitehead</strong> proved what is now known as the <em>Whitehead theorem</em> in 1948.</p> <p><strong>Eilenberg</strong> and <strong>Steenrod</strong> published their <em>Foundations of Algebraic Topology</em> in 1952, formulating what is now known as the <em>Eilenberg–Steenrod axioms</em>.</p> <p>Around 1953, <strong>Cartan</strong> and <strong>Eilenberg</strong> completed their book on homological algebra (published in 1956).</p> <p><strong>Kan</strong> (advised by <strong>Eilenberg</strong>) systematically developed simplicial homotopy theory (and briefly also cubical homotopy theory) starting from around 1955. He introduced combinatorial homotopy groups, the <strong>Dold</strong>–Kan correspondence, adjoint functors, limits and colimits, Kan extensions, etc.</p> <p><strong>Lima</strong> defined <em>spectra</em> in 1958.</p> <p><strong>Quillen</strong> published his <em>Homotopical Algebra</em> in 1967, introducing <em>model categories</em> and using them in his <em>Rational homotopy theory</em> around 1968.
Around 1972, he introduced <em>higher algebraic K-theory</em>.</p> <p>In 1971, <strong>Gabriel</strong> and <strong>Ulmer</strong> published their systematic account of <em>locally presentable categories</em>.</p> <p><strong>Segal</strong> introduced <a href="https://doi.org/10.1016/0040-9383(74)90022-6" rel="noreferrer"><em>Γ-spaces</em></a> around 1972. At the same time, <strong>May</strong> introduced operads, also in connection with infinite loop spaces.</p> <p><strong>Brown</strong> studied the homotopy theory of <a href="https://doi.org/10.2307/1996573" rel="noreferrer"><em>sheaves of spaces and spectra</em></a> in 1972.</p> <p><strong>Boardman</strong> and <strong>Vogt</strong> introduced <a href="https://doi.org/10.1007/BFb0068547" rel="noreferrer"><em>quasicategories</em></a> in 1973.</p> <p>In 1977, <strong>Sullivan</strong> published his work on <em>rational homotopy theory</em> in the language of commutative differential graded algebras, complementing the previous work by Quillen.</p> <p><strong>Dwyer</strong> and <strong>Kan</strong> introduced and developed the theory of <a href="https://doi.org/10.1016/0022-4049(80)90049-3" rel="noreferrer"><em>simplicial localizations</em></a> starting from around 1979.</p> <p>Around 1979, <strong>Bousfield</strong> introduced what is now known as <a href="https://doi.org/10.1016/0040-9383(79)90018-1" rel="noreferrer"><em>Bousfield localizations</em></a>.</p> <p>In 1983, <strong>Grothendieck</strong> introduced what is now known as <em>Grothendieck homotopy theory</em>, as well as <em>derivators</em>.</p> <p>In the 1980s, <strong>Joyal</strong> established what is now known as the <a href="https://ncatlab.org/nlab/show/model+structure+on+simplicial+sets#joyal_model_structure_on_simplicial_sets" rel="noreferrer"><em>Joyal model structure</em></a> on simplicial sets.</p> <p>In the mid-1980s, <strong>Segal</strong> (following Witten) introduced what is now known as <a href="https://doi.org/10.1007/978-94-015-7809-7_9" 
rel="noreferrer"><em>functorial field theory</em></a>, later studied by Atiyah, Kontsevich, Freed, Lawrence, and many others.</p> <p>In 1985, <strong>Jardine</strong> gave an account of <a href="http://math.uchicago.edu/~amathew/simplicialpresheaves.pdf" rel="noreferrer"><em>simplicial presheaves</em></a>.</p> <p>Around 1986, <strong>Lewis</strong>, <strong>May</strong>, <strong>Steinberger</strong>, <strong>McClure</strong> introduced genuine <a href="https://doi.org/10.1007/BFb0075778" rel="noreferrer">equivariant spectra</a>.</p> <p>In 1989, <strong>Makkai</strong> and <strong>Paré</strong> published a systematic account of <em>accessible categories</em>.</p> <p>In 1995, <strong>Baez</strong> and <strong>Dolan</strong> formulated the <a href="https://arxiv.org/abs/q-alg/9503002" rel="noreferrer"><em>cobordism and tangle hypotheses</em></a>, which perhaps qualifies as the first noticeable conjecture about (∞,n)-categories for arbitrary n.</p> <p>In 1997, <strong>Elmendorf</strong>, <strong>Kriz</strong>, <strong>Mandell</strong>, <strong>May</strong> published the first ever account of a <a href="https://web.archive.org/web/20161019233205id_/http://www.math.uchicago.edu:80/~may/BOOKS/EKMM.pdf" rel="noreferrer"><em>symmetric monoidal category of spectra</em></a>.</p> <p>In 1998, <strong>Hovey</strong>, <strong>Shipley</strong>, <strong>Smith</strong> published an account of <a href="https://arxiv.org/abs/math/9801077" rel="noreferrer"><em>symmetric spectra</em></a>.</p> <p>In 1998, <strong>Rezk</strong> introduced <a href="https://arxiv.org/abs/math/9811037" rel="noreferrer"><em>complete Segal spaces</em></a>.</p> <p>In the late 1990s, <strong>Voevodsky</strong> introduced and developed <em>motivic homotopy theory</em> (including some joint work with <strong>Morel</strong>).</p> <p>Around the late 1990s, <strong>Smith</strong> introduced <em>combinatorial model categories</em> and proved what is now known as the <a 
href="https://ncatlab.org/nlab/show/combinatorial+model+category#SmithTheorem" rel="noreferrer"><em>Smith recognition theorem</em></a> and established the existence of left Bousfield localizations of left proper combinatorial model categories.</p> <p><a href="https://arxiv.org/abs/math/0209342" rel="noreferrer"><em>Monoidal model categories</em></a> were systematically studied by <strong>Schwede</strong> and <strong>Shipley</strong> starting from 1997.</p> <p>In 2006 (based on a 2003 preprint), <strong>Lurie</strong>'s <a href="https://arxiv.org/abs/math/0608040" rel="noreferrer"><em>Higher Topos Theory</em></a> came out, first as an online draft, which was later published.</p>
613,836
<blockquote> <p>Let $x,y,z$ be integers and $11$ divides $7x+2y-5z$. Show that $11$ divides $3x-7y+12z$.</p> </blockquote> <p>I know a method to solve this problem, which is to write $A(7x+2y-5z)+11B=C(3x-7y+12z)$, where $A$ is any integer, $B$ is any integer expression, and $C$ is any integer coprime with $11$.</p> <p>I have tried a few trials, for example $(7x+2y-5z)+ 11(x...)=6(3x-7y+12z)$, but it doesn't seem to work. My question is: are there any tricks or algorithms for a quicker way besides trial and error, such as observing some hidden hints?</p> <p>I am always weak at this type of problem, where we need to make a smart guess or gain some insight from a pool of possibilities. Any help will be greatly appreciated, and maybe some tips to solve these types of problems.</p> <p>Thanks very much!</p>
mathlove
78,967
<p>Let $n=3x-7y+12z$. Since there exists an $m\in\mathbb Z$ such that $$7x+2y-5z=11m,$$ we have the following two identities:</p> <p>$$7n=21x-49y+84z$$ $$33m=21x+6y-15z$$ Then, we have $$7n-33m=-55y+99z\Rightarrow 7n=33m-55y+99z=11(3m-5y+9z).$$ Since $7$ and $11$ are coprime, it follows that $n$ is a multiple of $11$.</p>
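<p>As a sanity check of the argument above (not part of the proof), the implication can be brute-forced over a box of small integers; a quick Python sketch:</p>

```python
# brute-force check: whenever 11 | 7x + 2y - 5z over a box of small
# integers, 11 also divides 3x - 7y + 12z
R = range(-10, 11)
ok = all(
    (3*x - 7*y + 12*z) % 11 == 0
    for x in R for y in R for z in R
    if (7*x + 2*y - 5*z) % 11 == 0
)
```

<p>Of course this only inspects finitely many triples; the identity $7n=33m-55y+99z$ is what makes the statement true for all integers.</p>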
4,045,943
<p>I was requested to find the values of the parameter p for which the following series converges:</p> <p><span class="math-container">$$\sum_{n=1}^\infty \sqrt{n} \ln^{p} \left(1+ \frac{1}{\sqrt{n}}\right)$$</span></p> <p>I tried using <a href="https://en.wikipedia.org/wiki/Root_test" rel="nofollow noreferrer">Cauchy's test</a> and the <a href="https://en.wikipedia.org/wiki/Term_test" rel="nofollow noreferrer">Term test</a>, but reached a dead-end. I also tried to use the ratio test, but it didn't seem to be helpful in this situation.</p> <p>At this point we aren't allowed to use the integral test.</p> <p>I would appreciate any suggestions on how to approach this problem.</p>
VIVID
752,069
<p><span class="math-container">$$\begin{align} \sqrt{n} \ln^{p} (1+ \frac{1}{\sqrt{n}}) &amp;\overset1= \sqrt n \left( \frac{1}{\sqrt n} + O\left(\frac 1n \right)\right)^p \\ &amp;\overset2= \sqrt n\frac{1}{(\sqrt n)^p}\left(1 + O\left(\frac{1}{\sqrt n}\right)\right) \\ &amp;\overset3= \frac{1}{n^{(p-1)/2}} + O\left(\frac{1}{n^{p/2}}\right) \end{align}$$</span> <strong>Explanation:</strong></p> <ol> <li>Using <span class="math-container">$\ln(1+x) = x - \frac{x^2}{2} + \frac{x^3}{3}-...$</span> for <span class="math-container">$|x|&lt;1$</span>.</li> <li>Factoring out <span class="math-container">$\frac{1}{\sqrt n}$</span> and using <span class="math-container">$(1+x)^p = 1 + O(x)$</span>.</li> <li>Expanding the bracket.</li> </ol> <p>Hence the terms behave like <span class="math-container">$\frac{1}{n^{(p-1)/2}}$</span>, so the series converges if and only if <span class="math-container">$(p-1)/2&gt;1$</span>, i.e. <span class="math-container">$p&gt;3$</span>.</p>
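<p>Numerically, the terms are of size $n^{-(p-1)/2}$ (this matches the comparison $\ln(1+x)\sim x$); a quick Python check, with $p=4$ chosen just for illustration:</p>

```python
import math

def term(n, p):
    # the n-th summand sqrt(n) * ln(1 + 1/sqrt(n))**p
    return math.sqrt(n) * math.log(1 + 1/math.sqrt(n))**p

p = 4  # illustration only; any p behaves the same way
# the ratio of the term to n**(-(p-1)/2) should approach 1
ratios = [term(n, p) * n**((p - 1)/2) for n in (10**2, 10**4, 10**6)]
```

<p>The ratios creep up toward $1$ as $n$ grows, consistent with the leading order $1/n^{(p-1)/2}$.</p>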
4,045,943
<p>I was requested to find the values of the parameter p for which the following series converges:</p> <p><span class="math-container">$$\sum_{n=1}^\infty \sqrt{n} \ln^{p} \left(1+ \frac{1}{\sqrt{n}}\right)$$</span></p> <p>I tried using <a href="https://en.wikipedia.org/wiki/Root_test" rel="nofollow noreferrer">Cauchy's test</a> and the <a href="https://en.wikipedia.org/wiki/Term_test" rel="nofollow noreferrer">Term test</a>, but reached a dead-end. I also tried to use the ratio test, but it didn't seem to be helpful in this situation.</p> <p>At this point we aren't allowed to use the integral test.</p> <p>I would appreciate any suggestions on how to approach this problem.</p>
user
505,767
<p>We have that</p> <p><span class="math-container">$$\frac{\sqrt{n} \ln^{p} \left(1+ \frac{1}{\sqrt{n}}\right)}{\left(\frac1{\sqrt{n}}\right)^{p-1}}=\left(\frac{\ln \left(1+ \frac{1}{\sqrt{n}}\right)}{\frac1{\sqrt{n}} }\right)^p \to 1$$</span></p> <p>therefore by limit comparison test the given series converges if and only if the series <span class="math-container">$\sum \left(\frac1{\sqrt{n}}\right)^{p-1}$</span> converges that is for <span class="math-container">$$(p-1)/2&gt;1 \iff p&gt;3$$</span></p>
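<p>The $p&gt;3$ threshold can also be seen numerically: a late block of the series is tiny for $p=4$, while for $p=3$ the terms behave like $1/n$ and the same block stays near $\log 2$; a Python sketch:</p>

```python
import math

def term(n, p):
    return math.sqrt(n) * math.log(1 + 1/math.sqrt(n))**p

def block_sum(p, lo, hi):
    # sum of the terms for lo <= n < hi
    return sum(term(n, p) for n in range(lo, hi))

# For p = 4 > 3 the terms behave like n**(-3/2): late blocks are tiny.
# For p = 3 the terms behave like 1/n: each doubling adds about log 2.
tail4 = block_sum(4, 10_000, 20_000)
tail3 = block_sum(3, 10_000, 20_000)
```

<p>This is only evidence, not a proof; the limit comparison argument above is what settles it.</p>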
3,274,061
<p>If <span class="math-container">$T:H \to H$</span> is linear, bounded, and compact and <span class="math-container">$\{h_n\}$</span> is bounded, is <span class="math-container">$\{T(h_n)\}$</span> a compact subset of <span class="math-container">$H$</span>? Here <span class="math-container">$H$</span> is a Hilbert space.</p> <p>I am getting a problem with the definition of a compact operator, since it maps a bounded set only into a relatively compact set.</p>
Nate Eldredge
822
<p>Not necessarily. Take <span class="math-container">$H = \ell^2$</span> and let <span class="math-container">$(Th)(i) = \frac{1}{i} h(i)$</span>. You can verify that <span class="math-container">$T$</span> is compact. Let <span class="math-container">$\{h_n\}$</span> be the usual orthonormal basis for <span class="math-container">$\ell^2$</span>, i.e. <span class="math-container">$h_n(i) = \delta_{ni}$</span>, so that <span class="math-container">$T h_n = \frac{1}{n} h_n$</span>. Then <span class="math-container">$T h_n \to 0$</span>, and so <span class="math-container">$\{T(h_n)\}$</span> is not closed.</p>
3,274,061
<p>If <span class="math-container">$T:H \to H$</span> is linear, bounded, and compact and <span class="math-container">$\{h_n\}$</span> is bounded, is <span class="math-container">$\{T(h_n)\}$</span> a compact subset of <span class="math-container">$H$</span>? Here <span class="math-container">$H$</span> is a Hilbert space.</p> <p>I am getting a problem with the definition of a compact operator, since it maps a bounded set only into a relatively compact set.</p>
Andreas Blass
48,510
<p>Remember that a finite-dimensional vector space with an inner product is a perfectly good Hilbert space, and all its linear operators are compact. So you can get a counterexample by taking <span class="math-container">$H$</span> to be <span class="math-container">$\mathbb R$</span> or <span class="math-container">$\mathbb C$</span> (depending on whether you prefer real or complex Hilbert spaces), taking <span class="math-container">$T$</span> to be the identity map, and taking <span class="math-container">$h_n=\frac1n$</span>.</p> <p>If, for some reason, you insist on an infinite-dimensional Hilbert space, then let <span class="math-container">$H$</span> be one (for example <span class="math-container">$l^2$</span>), and let <span class="math-container">$T:H\to H$</span> be the projection to a one-dimensional subspace. That's compact, and then you can use my previous example of <span class="math-container">$h_n$</span> in that one-dimensional subspace.</p>
269,696
<p>I am given that $\sum\limits_{n=1}^\infty a_n$ is convergent. </p> <p>I need to determine whether $\sum\limits_{n=1}^\infty (a_n)^\frac{1}{3}\;$ and $\;\sum\limits_{n=1}^\infty (a_n)^2\;$ are also convergent.</p> <p>Imagine that $a_n = \dfrac{1}{n^4}.\;$ I believe that this is convergent because it's converging to $0$.</p> <p>Following the same thought, if $\displaystyle a_n = \left(\frac{1}{n^4}\right)^2,\;$ it's convergent because it's converging to $0$.</p> <p>Am I doing this correctly or there is some other way to prove this?</p>
Sugata Adhya
36,242
<p>$\sum\limits_{n=1}^\infty\frac{1}{n^3}$ &amp; $\sum\limits_{n=1}^\infty(-1)^n\frac{1}{\sqrt n}$ are convergent but $\sum\limits_{n=1}^\infty\frac{1}{n}$ is not. Indeed, with $a_n=\frac{1}{n^3}$ we get $\sum (a_n)^{1/3}=\sum\frac1n$, and with $a_n=(-1)^n\frac{1}{\sqrt n}$ we get $\sum (a_n)^2=\sum\frac1n$, so neither $\sum (a_n)^{1/3}$ nor $\sum (a_n)^2$ need converge.</p>
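<p>The second counterexample is easy to probe numerically: the alternating partial sums stabilize, while the partial sums of the squares keep growing like $\log N$; a Python sketch:</p>

```python
import math

# a_n = (-1)**n / sqrt(n): the alternating series converges,
# but a_n**2 = 1/n gives the divergent harmonic series
def alt_partial(N):
    return sum((-1)**n / math.sqrt(n) for n in range(1, N + 1))

def harmonic_partial(N):
    return sum(1/n for n in range(1, N + 1))

# the alternating partial sums settle down ...
gap_alt = abs(alt_partial(20_000) - alt_partial(10_000))
# ... while the squared series grows by about log 2 per doubling of N
gap_sq = harmonic_partial(20_000) - harmonic_partial(10_000)
```

<p>By the alternating series bound, `gap_alt` is at most the first omitted term $1/\sqrt{10001}$, while `gap_sq` is close to $\log 2$.</p>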
1,080,746
<p>I want to calculate the limit of this sum :</p> <p>$$\lim\limits_{x \to 1} {\left(x - x^2 + x^4 - x^8 + {x^{16}}-\dotsb\right)}$$</p> <p>My efforts to solve the problem are described in the <a href="https://math.stackexchange.com/a/1308281/202081">self-answer below</a>.</p>
achille hui
59,379
<p>Based on a paper "Summability of alternating gap series" by J.P. Keating and J.B.Reade in year $2000$, (an online copy can be found <a href="http://journals.cambridge.org/article_S001309150002071X">here</a>) , one can use <a href="http://en.wikipedia.org/wiki/Poisson_summation_formula">Poisson summation formula</a> to show $$ S(x) =\sum_{n=0}^\infty (-1)^n x^{2^n} = \frac12 x + \frac{2}{\log 2} \Re\sum_{n=0}^\infty\left( \frac{\Gamma(\alpha_n i)}{\lambda^{\alpha_n i}} - \sum_{k=0}^\infty \frac{(-1)^k}{k!}\frac{\lambda^k}{\alpha_n i + k} \right)$$ where $\alpha_n = \frac{(2n+1)\pi}{\log 2}$ and $x = e^{-\lambda}$.</p> <p>As $x \to 1^{-}$, $S(x) - \frac12 x$ will be dominated by the first term ( the term for $\alpha_0$ ) which oscillate with amplitude </p> <p>$$\frac{2}{\log 2}\left|\Gamma\left(\frac{\pi i}{\log 2}\right)\right| = \frac{2}{\sqrt{\log 2\sinh(\pi^2/\log 2)}} \sim 0.00275 $$ and periodic in $\log_2 \lambda = \frac{\log\log\frac1x}{\log 2}$ with period $2$.</p> <p>Please look at the paper mentioned above for more details.</p>
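<p>The non-convergence is also visible in a direct computation: sampling $\lambda=\log\frac1x$ across one period of $\log_2\lambda$ shows that $S(x)-\frac x2$ keeps a spread of about $2\times 0.00275$ however close $x$ gets to $1$; a Python sketch (the specific sampling grid is just an illustration):</p>

```python
import math

def S(lam, nmax=80):
    # sum_{n>=0} (-1)**n * x**(2**n) with x = exp(-lam);
    # once lam * 2**n is large, exp underflows to 0, so nmax=80 is plenty
    return sum((-1)**n * math.exp(-lam * 2.0**n) for n in range(nmax))

# sample lam across one full period of the oscillation
# (the period is 2 in log2(lam), i.e. a factor of 4 in lam)
vals = []
for j in range(17):
    lam = 1e-3 * 2.0 ** (-j / 8.0)
    vals.append(S(lam) - math.exp(-lam) / 2.0)

spread = max(vals) - min(vals)
# spread stays near 2 * 0.00275 no matter how small lam is
```

<p>Shrinking the base value of $\lambda$ reproduces the same spread, matching the amplitude $\frac{2}{\log 2}\left|\Gamma\left(\frac{\pi i}{\log 2}\right)\right|\approx 0.00275$ quoted above.</p>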
1,080,746
<p>I want to calculate the limit of this sum :</p> <p>$$\lim\limits_{x \to 1} {\left(x - x^2 + x^4 - x^8 + {x^{16}}-\dotsb\right)}$$</p> <p>My efforts to solve the problem are described in the <a href="https://math.stackexchange.com/a/1308281/202081">self-answer below</a>.</p>
Abdou Abdou
202,081
<ul> <li>After consulting @achille hui's answer and looking at the <a href="http://journals.cambridge.org/download.php?file=%2FPEM%2FPEM2_43_01%2FS001309150002071Xa.pdf&amp;code=59a6cb2e1c51d86990aecae20ceafeb1" rel="nofollow noreferrer">paper</a> they brought, I found a confirmation of the answer I posted a while ago, at the bottom of page 97, proven by Hardy in this <a href="https://www.uam.es/personal_pdi/ciencias/dragan/respub/Duren_Tauberian_Talk_2013-10_UAM.pdf" rel="nofollow noreferrer">book</a>. Well, I came a little bit late :p.</li> </ul> <blockquote> <p>-"for $k$ even. Therefore, the limit as $x->\infty$ does not exist: $y_n$ has maxima whose heights tend to <strong>$\frac{2}{3}$</strong>, and minima whose heights tend to <strong>$\frac{1}{3}$</strong>. Hence the series is not Cesaro summable. It then follows from the second of the results quoted above that neither can it be Abel summable (because the partial sums $s_m$ are bounded). Hardy gave a direct proof of this in 2. The question we address here is: what is the asymptotic form of the gap series (1.1) in this case as $i -> 1$"</p> </blockquote> <hr> <p>As <a href="https://i.stack.imgur.com/bQF5b.png" rel="nofollow noreferrer">this graph</a> illustrates, $f(x)$ oscillates infinitely between $0.5+\delta$ and $0.5-\delta$. Now we need to prove it analytically, since an algebraic way would be too long.</p> <p>That is, we need to see where $f(x)$ changes direction as $x$ tends to $1$.</p> <p>I tried to zoom in on the behavior of the graph just to the right of $x=1$ using the variable substitution:</p> <p>$x \to 1$</p> <p>$\sin\left(\left(\frac{x-1}{a}+ \frac{1}{2}\right) \pi\right) \to 1$ with <a href="https://math.stackexchange.com/questions/1101974/dilating-sinusoidal-period-at-certain-abscissa-x">closer values</a></p> <p>The graph I received looks like this:</p> <p><img src="https://i.stack.imgur.com/dTkqC.gif" alt="enter image description here"></p> <p>Matlab code:</p> <pre><code>syms x; syms k;
for i=1 : 50
  f=@(x) sin((1/(i)*(x-1)+1/2)*pi);
  y= f(x)+symsum(((-1)^k)*(f(x)^(2^k)),k,1,10+i);
  a=ezplot(y,-1,1);
  grid; pause(0.1); delete(a);
end
</code></pre> <hr> <p>Now let's find the upper and lower limits.</p> <p>Let's first prove that the curve changes direction endlessly as $x$ gets closer to $1$.</p> <p>$f'(x)= 1-2x+4x^3-8x^7+\dots+(-1)^k 2^k x^{2^k-1}+\dots$</p> <p>$\lim_{x\rightarrow 1} f'(x)$ does not exist: the partial sums sway between $+\infty$ and $-\infty$, which means $f'$ crosses the $x$-axis infinitely many times. This can be proved by induction, but let's do it the easy way, since it is not the aim of this question.</p> <p>To prove that $f'(x)$ changes direction infinitely often, it suffices to prove that $\lim_{x\rightarrow 1} f'(x) \neq f'(1)$. From $f(x)=x-f(x^2)$ we get $f'(x)=1-2x\,f'(x^2)$, so formally at $x=1$:</p> <p>$f'(1)=1-2f'(1)$</p> <p>$f'(1)=\frac{1}{3}$</p> <p>So $f'$ has the formal value $\frac{1}{3}$, which is neither $+\infty$ nor $-\infty$; this is further evidence that $0.5$ is not the real limit of $f(x)$.</p> <hr> <p>Now let's find the mysterious limit.</p> <p>$$xf'(x) = x-2x^2+4x^4 - \dots$$</p> <p>$$x^2f'(x^2) = x^2-2x^4 +4x^8 - \dots$$</p> <p>$$\dots$$</p> <p>$$x^n f'(x^n) = x^n-2x^{2n}+4x^{4n} - \dots$$</p> <p>So,</p> <p>$$f(x) = xf'(x) + x^2f'(x^2)-x^4f'(x^4) \dots$$</p> <p>$$f(x) = xf'(x) + \frac{x(f'(x)-1)}{(-2)} - \frac{x(f'(x)-1+2x)}{2^2}+\frac{x(f'(x)-1+2x-4x^3)}{-2^3} \dots$$</p> <p>The derivative $f'(x)$ vanishes when $f(x)$ is at one of its lower or upper peaks; let $x_0$ denote the abscissas where the curve of $f(x)$ changes direction from $l+\delta$ to $l-\delta$.</p> <p>$f(x_0)=l \pm \delta$ means $f'(x_0)=0$.</p> <p>Since $f'(x_0)=0$ at any $x_0$ where $f(x)$ reaches its upper or lower peak $l+\delta$ or $l-\delta$,</p> <p>$$f(x_0) = \frac{x_0(-1)}{(-2)} - \frac{x_0(-1+2x_0)}{2^2}+\frac{x_0(-1+2x_0-4x_0^3)}{-2^3} \dots$$</p> <p>and since $x_0\rightarrow 1$,</p> <p>$$\lim_{x_0\rightarrow 1} f(x_0)=\frac{1}{2}+\frac{1-2}{2^2}+\frac{1-2+4}{2^3}+\dots=\frac{1}{2}-\frac{1}{4}+\frac{3}{8}-\frac{5}{16}+\frac{11}{32}-\dots$$</p> <p>whose partial sums oscillate toward $0.333\ldots$ and $0.666\ldots$</p> <p>The mysterious limits are:</p> <blockquote class="spoiler"> <p> $l=\frac{1}{2} \pm \frac{1}{6}$</p> </blockquote> <hr> <p>If we take a deeper look (using the last trigonometric magnifier) at the functions $f(x)$ and $f(x_0)$, we should notice a remarkable match:</p> <p>$f(x)=x+\sum_k {(-1)^k x^{2^k}}$</p> <p>$f(x_0)=x_0\cdot\sum_k \frac{1}{2^k} \sum_l (-1)^l 2^l x_0^{2^l-1}$</p> <pre><code>syms k; syms l; x=-1:0.1:1;
for i=1 : 1000
  f=@(x) sin((1/i.*(x-1)+1./2).*pi);
  y=@(x,n) f(x)+symsum(((-1).^k).*(f(x).^(2.^k)),k,1,n);
  yy=@(x,n) f(x).* symsum((1./2.^k).*symsum((-1).^l.*2.^l.*f(x).^(2.^l-1),l,0,k-1),k,1,n);
  a=plot(x,y(x,i),'Color', [0.5, 1.0, 0.0], 'LineStyle', '--');
  hold on;
  b=plot(x,yy(x,i+1),'Color', [1.0, 0.0, 0.5]);
  set(gca, 'XLim',[-1 1], 'YLim',[-1 2]);
  grid; pause(0.1); delete(a); delete(b); grid;
end
</code></pre> <p><strong>Similitude of the two curves:</strong></p> <pre><code>syms x; syms k; syms l;
for i=1 : 1000
  y=@(x,n) (x)+symsum(((-1)^k)*((x)^(2^k)),k,1,n);
  yy=@(x,n) (x)* symsum((1/2^k)*symsum((-1)^l*2^l*(x)^(2^l-1),l,0,k-1),k,1,n);
  a=ezplot(y(x,i));
  set(gca, 'colororder', [1, 0.5, 0.753;0.5, 0.5, 1;1, 1, 0.753]);
  hold on;
  b=ezplot(yy(x,i+1));
  set(gca, 'colororder', [0.1, 0.3, 0.53;0.35, 0.85, 1;1, 0.1, 0.73]);
  legend({'y' 'yy'}, 'Location','NorthWest')
  grid; pause(0.1); delete(a); delete(b); grid;
end
</code></pre> <p>Snapshots: ($f_n(x)$, $f_{n+1}(x_0)$)</p> <p>n=1</p> <p><img src="https://i.stack.imgur.com/BvsHt.png" alt="enter image description here"></p> <p>n=3</p> <p><img src="https://i.stack.imgur.com/arJAM.png" alt="enter image description here"></p> <p>n=6</p> <p><img src="https://i.stack.imgur.com/zEDGQ.png" alt="enter image description here"></p> <p>n=9</p> <p><img src="https://i.stack.imgur.com/PUcQb.png" alt="enter image description here"></p> <hr> <ul> <li><strong>Note</strong>: I know this has a high probability of being wrong, but it is well argued, and any criticism should also be well argued. Thanks.</li> </ul>
4,586,314
<p><em>A three-sided fence is to be built next to a straight section of river, which forms the fourth side of a rectangular region, as shown in the diagram below. The enclosed area is to equal <span class="math-container">$1800 m^2$</span> and the fence running parallel to the river must be set back at least <span class="math-container">$20 m$</span> from the river. Determine the minimum perimeter of such an enclosure and the dimensions of the corresponding enclosure.</em></p> <p>I am using this for the first part: <span class="math-container">$1800m^2 = l(w- 20)$</span></p> <p>Every time I use this I can't really figure out how to differentiate, which leads me to believe I'm doing something wrong. I know that one of the sides needs to be subtracted by <span class="math-container">$20$</span>, but I can't understand when it should be subtracted.</p> <p>Also, if I do find the new width, would that change the entire area?</p>
nasekatnasushi
1,123,549
<p>The way I understand it is that you want one matrix <span class="math-container">$ T $</span> that does the same kind of thing (as you described) to any matrix <span class="math-container">$ A $</span>.</p> <p>There is no such matrix <span class="math-container">$ T $</span>. One way you can view multiplying a matrix <span class="math-container">$ A $</span> by another matrix <span class="math-container">$ T $</span> on the right is as performing a linear transformation on each row of <span class="math-container">$ A $</span>. The important thing is that you perform <em>the same</em> transformation on each row of <span class="math-container">$ A $</span>. Therefore, for example, if all rows of <span class="math-container">$ A $</span> were identical, then all rows of <span class="math-container">$ AT $</span> would have to be identical as well.</p> <p>For example, let <span class="math-container">$$ A = \begin{pmatrix} 1 &amp; 2 \\ 1 &amp; 2 \end{pmatrix} $$</span> and suppose we want to switch the bottom two elements of <span class="math-container">$ A $</span>. There is no matrix <span class="math-container">$ T $</span> satisfying <span class="math-container">$$ AT = \begin{pmatrix} 1 &amp; 2 \\ 2 &amp; 1 \end{pmatrix}, $$</span> because if <span class="math-container">$ AT $</span> is defined, both rows of <span class="math-container">$ AT $</span> are equal to (the row times matrix product) <span class="math-container">$ \begin{pmatrix} 1 &amp; 2 \end{pmatrix} T $</span> and therefore equal to each other.</p>
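<p>This row picture can be demonstrated with a small computation (plain Python; the particular $T$ below is just an arbitrary illustration):</p>

```python
# Right-multiplication applies the same linear map to every row:
# if the rows of A are identical, the rows of A @ T are identical too,
# so no T can produce [[1, 2], [2, 1]] from this A.
A = [[1, 2],
     [1, 2]]

def matmul(A, T):
    # plain-Python matrix product: row of A times each column of T
    return [[sum(a * t for a, t in zip(row, col)) for col in zip(*T)]
            for row in A]

T = [[0, 1],
     [1, 0]]  # arbitrary choice; any 2x2 T yields equal rows
P = matmul(A, T)
```

<p>Whatever `T` is chosen, the two rows of `P` coincide, so `P` can never be $\begin{pmatrix}1&amp;2\\2&amp;1\end{pmatrix}$.</p>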
1,649,997
<p>I'm having a hard time coming up with two unbounded sequences where their difference yields $0$ when $n\rightarrow\infty$. Any ideas?</p>
P Vanchinathan
28,915
<p>Take any monotonic sequence diverging to $+\infty$ as the first sequence. Define the second sequence by adding $\frac1n$ to the $n$th term of the first. What can you say about the difference between these two sequences?</p>
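<p>A concrete instance of the hint, checked numerically in Python:</p>

```python
# a_n = n and b_n = n + 1/n both diverge to +infinity,
# yet b_n - a_n = 1/n tends to 0
N = 10_000
a = [n for n in range(1, N + 1)]
b = [n + 1/n for n in range(1, N + 1)]
diffs = [bn - an for an, bn in zip(a, b)]
```

<p>The differences shrink like $1/n$ while both sequences grow without bound.</p>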
1,191,176
<p>I know how to prove the result for $n=2$ by contradiction, but does anyone know a proof for general integers $n$ ?</p> <p>Thank you for your answers.</p> <p>Marcus</p>
Viktor Vaughn
22,912
<p>Apply <a href="http://en.wikipedia.org/wiki/Eisenstein%27s_criterion">Eisenstein's criterion</a> with $p = 2$ to the polynomial $f(x) = x^n - 2$. This shows that $f$ is irreducible over $\mathbb{Q}$, so in particular it has no roots in $\mathbb{Q}$.</p>
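<p>Eisenstein gives full irreducibility; for the weaker fact that $\sqrt[n]{2}\notin\mathbb{Q}$, the rational root theorem already suffices and can be checked mechanically; a Python sketch:</p>

```python
# By the rational root theorem, a rational root p/q (in lowest terms)
# of x**n - 2 must satisfy p | 2 and q | 1, so the only candidates
# are the integers +-1 and +-2.  None of them is an n-th root of 2.
def has_rational_root(n):
    return any(c**n == 2 for c in (1, -1, 2, -2))

results = {n: has_rational_root(n) for n in range(2, 8)}
```

<p>Every candidate fails for each $n$, so $x^n-2$ has no rational root, i.e. $\sqrt[n]{2}$ is irrational.</p>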
4,383,098
<p>This was a question I have had ever since I started studying formal mathematics. Take ZFC for example: in it, the axioms give us 'tests' to check whether something is a set, and tell us how the objects, if they are sets, behave under some other operations defined on sets.</p> <p>My question is: how exactly do we find objects that fulfill these axioms? Is there some formal procedure for it, or is it just guesswork?</p>
Graham Kemp
135,106
<p>To be demonstrated: <span class="math-container">$(A∪B)∩(B∪C)∩(C∪A)=(A∩B)∪(A∩C)∪(B∩C)$</span></p> <p>Observation: Elements of the LHS that are in <span class="math-container">$A$</span> must also be in <span class="math-container">$B$</span> or in <span class="math-container">$C$</span>; however, those that are not in <span class="math-container">$A$</span> must be in <span class="math-container">$B$</span> and <span class="math-container">$C$</span>. This concurs with the RHS.</p> <p>Strategy: Associate unions with <span class="math-container">$A$</span>, distribute to obtain intersections with <span class="math-container">$A$</span> and not with <span class="math-container">$A$</span>, and the latter should simplify to <span class="math-container">$B\cap C$</span> ....</p> <p><span class="math-container">$$(A\cup B)\cap (B\cup C)\cap (C\cup A)\\=\\((A\cup B)\cap(A\cup C))\cap(B\cup C)\\=\\(A\cup (B\cap C))\cap(B\cup C)\\=\\(A\cap(B\cup C))\cup((B\cap C)\cap(B\cup C))\\=\\\vdots\\=\\(A\cap B)\cup(A\cap C)\cup(B\cap C)$$</span></p>
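<p>The identity can also be verified exhaustively: whether a point belongs to either side depends only on which of $A$, $B$, $C$ contain it, so checking all subsets of a 3-element universe covers every membership pattern. A Python sketch:</p>

```python
from itertools import combinations

# exhaustive check of (A∪B)∩(B∪C)∩(C∪A) = (A∩B)∪(A∩C)∪(B∩C)
# over every triple of subsets of a 3-element universe
U = [0, 1, 2]
subsets = [set(s) for r in range(4) for s in combinations(U, r)]

ok = all(
    (A | B) & (B | C) & (C | A) == (A & B) | (A & C) | (B & C)
    for A in subsets for B in subsets for C in subsets
)
```

<p>All $8^3$ triples of subsets satisfy the identity.</p>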
4,435,032
<p>I have equations of two conic sections in general form. Is it possible to find the minimal distance between them (if they do not intersect)?</p> <p>I need it to determine whether two spacecraft on two orbits (2D case) can collide. If the minimal distance is bigger than the sum of the radii of the bounding circles, I don't need to worry about collision.</p>
Overflowian
211,748
<p><strong>Remark1.</strong> Recall that the holonomy of a metric is the holonomy of its Levi-Civita connection. Thus a manifold may admit an affine connection with holonomy in <span class="math-container">$G&lt;GL(n)$</span> and still not admit any metric with holonomy in <span class="math-container">$G$</span>.</p> <p><strong>Remark2.</strong> If there is a reduction of the structure group to <span class="math-container">$G&lt;GL(n)$</span>, then there is a connection with holonomy in <span class="math-container">$G$</span>. This is essentially the fact that any vector bundle admits a connection, applied to the vector bundle induced by the reduction.</p> <p><strong>Prop1.</strong> <em>There exist an <span class="math-container">$n$</span>-manifold <span class="math-container">$X$</span> and a subgroup <span class="math-container">$G&lt;GL(n), G = Stab(T)$</span> (<span class="math-container">$T$</span> in the tensor algebra of <span class="math-container">$\mathbb R^n$</span>) such that the frame bundle of <span class="math-container">$X$</span> admits a reduction of the structure group to <span class="math-container">$G$</span> but <span class="math-container">$X$</span> does not admit a metric with holonomy in <span class="math-container">$G$</span>. In particular there is no Riemannian connection on <span class="math-container">$TX$</span> which is torsion free and for which the tensor induced by <span class="math-container">$G$</span> is parallel.</em></p> <p><em>proof:</em> For example, an almost Hermitian manifold is a manifold with an almost complex structure <span class="math-container">$J$</span> and a compatible Riemannian metric <span class="math-container">$g$</span>. The pair <span class="math-container">$(J,g)$</span> exists iff there is a reduction of the structure group to <span class="math-container">$U(2n)$</span>. This condition is homotopy-theoretical in nature. 
Indeed by a theorem of Wu, we know that in dimension 4 such a structure exists iff the third integral Stiefel-Whitney class <span class="math-container">$W_3\in H^3(X,\mathbb Z)$</span> and <span class="math-container">$c^2-2\chi(X)-3\sigma(X)\in H^4(X,\mathbb Z)$</span> vanish (<span class="math-container">$c$</span> is the first Chern-class of <span class="math-container">$J$</span>). However the existence of a torsion free connection for which <span class="math-container">$(J,g)$</span> is parallel would imply that the Nijenhuis tensor vanishes and hence <span class="math-container">$X$</span> would be a complex manifold with <span class="math-container">$J$</span> as induced ACS. This is not true in general. For example <span class="math-container">$\mathbb{CP}^2\#\mathbb{CP}^2\#\mathbb{CP}^2$</span> admits an ACS but not a complex structure (a proof that uses the Bogomolov–Miyaoka–Yau inequality may be found in <a href="https://guests.mpim-bonn.mpg.de/milivojevic/acsbutnotcx.pdf" rel="nofollow noreferrer">these notes by Aleksandar Milivojević</a>).</p> <p><strong>Prop2.</strong> <em>Let <span class="math-container">$G = \bigcap_i Stab(T_i^0)&lt; GL(n)$</span>, <span class="math-container">$T_i^0$</span> in the tensor algebra of <span class="math-container">$\mathbb R^n$</span> and suppose that <span class="math-container">$X$</span> has a reduction of the structure group to <span class="math-container">$G$</span> with associated <span class="math-container">$G$</span>-principal bundle <span class="math-container">$P\to X$</span>. Let <span class="math-container">$\nabla$</span> be a connection on <span class="math-container">$P$</span>. 
Then the induced tensor fields <span class="math-container">$T_i$</span> obtained from <span class="math-container">$T^0_i$</span> are <span class="math-container">$\nabla$</span>-parallel.</em></p> <p><em>Proof:</em> Since <span class="math-container">$\nabla$</span> is a connection on <span class="math-container">$P$</span>, it has holonomy contained in <span class="math-container">$G$</span>. The tensors <span class="math-container">$T_i$</span> associated to the reduction are obtained by parallel transporting the model <span class="math-container">$T^0_1, \dots, T^0_k$</span> written in a G-frame given by the reduction. The holonomy being contained in <span class="math-container">$G = \bigcap_k Stab(T^0_k)$</span> ensures that this procedure yields well defined tensors <span class="math-container">$T_i$</span>.</p> <p><em>EDIT after comment.</em> Consider a frame <span class="math-container">$F_{x_0}$</span> at <span class="math-container">$x_0 \in X$</span> in the reduction <span class="math-container">$G_{x_0}$</span> and consider a tensor <span class="math-container">$T_{i,x_0}$</span> at <span class="math-container">$x_0$</span> that has components equal to <span class="math-container">$T^0_i$</span> in this frame. Now define a tensor field <span class="math-container">$T_i$</span> by parallel transporting <span class="math-container">$T_{i,x_0}$</span>. Consider a point <span class="math-container">$x\in X$</span>. Then, since <span class="math-container">$T_i$</span> is defined by parallel transport, <span class="math-container">$T_i(x)$</span> will have components equal to <span class="math-container">$T^0_i$</span> in any frame obtained by parallel transporting <span class="math-container">$F_{x_0}$</span> to <span class="math-container">$x$</span> along any curve <span class="math-container">$\gamma$</span> joining <span class="math-container">$x_0$</span> to <span class="math-container">$x$</span>. 
But since <span class="math-container">$G$</span> stabilizes <span class="math-container">$T^0_i$</span>, if we represent <span class="math-container">$T_i(x)$</span> in any other frame in <span class="math-container">$G_x$</span> we will obtain the same components <span class="math-container">$T^0_i$</span>.</p> <p>Therefore the two definitions coincide.</p>
2,634,282
<p><strong>Question:</strong></p> <p>Write all the points where $\cfrac{n^2-9n+20}{n^4-n^3-3n^2+n+2}$ is discontinuous. Provide the answer in simplest form.</p> <p><strong>My Approach:</strong></p> <p>Should I convert the numerator and denominator into their respective factored forms and try to cancel common factors? I am not getting any ideas. Any help or guidance in solving this problem would be appreciated.</p>
David C. Ullrich
248,223
<p>Answer to the question below. First we should note that the theorem as stated is false, if we're talking about Riemann integrals.</p> <p>Let $X=Y=[0,1]$. Let $f(x,y)=0$ if $y\ne0$; let $f(x,0)=1$ if $x$ is rational, $0$ otherwise. It's easy to see from the definition that $f$ is Riemann integrable on $X\times Y$ (or note that $f$ is certainly continuous almost everywhere.) But $\int_0^1 f(x,0)\,dx$ does not exist, hence at least one of the iterated integrals fails to exist.</p> <p>Fubini's theorem would be one reason they invented the Lebesgue integral... It turns out that that little detail, a function has to be <em>defined</em> on $X$ before it can be Riemann integrable on $X$, is the only problem:</p> <blockquote> <blockquote> <p><strong>Theorem:</strong> Suppose $f$ is Riemann integrable on $X\times Y$ as above. If $g(x)=\int_Yf(x,y)\,dy$ <em>exists</em> for every $x\in X$ then $g$ is Riemann integrable and $\int_X g(x)\,dx=\int_{X\times Y}f(x,y)\,dxdy$.</p> </blockquote> </blockquote> <p><strong>Proof, in questionable taste:</strong> For a given $x$, if $f$ is continuous at $(x,y)$ for almost every $y$ then DCT shows that $g$ is continuous at $x$. So the measure-theory Fubini's theorem shows that $g$ is continuous almost everywhere, hence Riemann integrable. The measure-theory Fubini theorem shows that $\int_X g(x)\,dx=\int_{X\times Y}f(x,y)\,dxdy$.</p> <p><strong>Digression</strong></p> <p>The OP has been insisting that changing a function on a set of measure zero does not change the Riemann integral. This is well known to be nonsense. For the benefit of anyone who doesn't see why it's nonsense:</p> <p>Define $z:[0,1]\to\Bbb R$ by $z(t)=0$. Then $\int_0^1 z(t)\,dt=0$. Now modify $z$ on a set of measure zero: Define $r(t)=0$ if $t$ is irrational, $1$ if $t$ is rational. 
Then $r$ is not Riemann integrable.</p> <p>It's obvious for example that every "upper sum" for $r$ equals $1$ while every lower sum equals $0$.</p> <p>An explanation using just Riemann sums, showing that modifying a function on a set of measure zero <em>does</em> change the limiting behavior of the Riemann sums: If $n$ is even let $$s_n=\frac1n\sum_{j=1}^nr(j/n).$$ If $n$ is odd choose an irrational number $\alpha_n$ with $0&lt;\alpha_n&lt;1/n$, and let $$s_n=\frac1n\sum_{j=1}^nr(j/n-\alpha_n).$$</p> <p>Then $(s_n)$ is a sequence of Riemann sums for $r$, corresponding to a sequence of partitions with mesh tending to $0$. But $s_n=1$ if $n$ is even and $s_n=0$ if $n$ is odd. So $\lim s_n$ does not exist. So by definition $r$ is not Riemann integrable.</p> <p>(It's true, and not hard to show, that modifying a function on a <em>compact</em> set of measure zero does not change the Riemann integral. That doesn't help rehabilitate the theorem, because if $f$ is Riemann integrable on $[0,1]\times[0,1]$ the null set of $x$ where $\int_0^1f(x,y)\,dy$ does not exist need not be compact.)</p> <p><strong>Example</strong> showing that the set of $x$ such that $\int_0^1f(x,y)\,dy$ does not exist need not be compact: Say $(q_j)$ is a countable dense subset of $[0,1]$. Define $f(x,y)=0$ if $x\notin(q_j)$, and set $f(q_j,y)=0$ if $y$ is irrational, $1/j$ if $y$ is rational. Then $f$ is Riemann integrable on $[0,1]\times[0,1]$, but for every $j$ the integral $\int_0^1f(q_j,y)\,dy$ fails to exist.</p> <p><strong>End digression</strong></p> <p><strong>Below</strong> Assuming you mean $I=[0,1]\times[0,1]$: Let $S=(p_j)$ be a countable dense subset of $I$ such that $S$ intersects each vertical line and each horizontal line in at most one point. (Construction below.) Let $f(p_j)=1$, $f(x,y)=0$ for $(x,y)\notin S$. 
Then both iterated (Riemann) integrals exist, but $f$ is not Riemann integrable on $I$; for example $f$ is not continuous at <em>any</em> point.</p> <p>Construction: Say $(q_j)$ is a countable dense set. Let $p_1=q_1$. Choose $p_2$ so $|p_2-q_2|&lt;1/2$ and $p_2$ does not have either coordinate in common with $p_1$. Etc: One by one choose $p_n$ so $|q_n-p_n|&lt;1/n$ and the $x$ and $y$ coordinates of $p_n$ are different from the coordinates of $p_j$, $1\le j&lt;n$.</p>
2,634,282
<p><strong>Question:</strong></p> <p>Write all the points where $\cfrac{n^2-9n+20}{n^4-n^3-3n^2+n+2}$ is discontinuous. Provide the answer in simplest form.</p> <p><strong>My Approach:</strong></p> <p>Should I convert the numerator and denominator into their respective product forms and try to cancel them? I am not getting any idea. Any help or guidance to solve this problem would be appreciated.</p>
RRL
148,510
<p>The statement of the Theorem given as a reference is false. As such, it is just a distraction with respect to the Question, which David C. Ullrich has answered by providing a nice example. </p> <p>I will just focus on the theorem to help clarify beyond what has already been discussed in the comments.</p> <p>The hypothesis is that the bounded function $f:X \times Y \to \mathbb{R}$ is Riemann integrable on the bounded interval (rectangle) $X \times Y \subset \mathbb{R}^{n+m}.$ What is a true statement is that </p> <p>$$\tag{1}\int_{X \times Y} f = \int_X \left(\underline{\int}_Y f(x,y) \, dy\right) dx = \int_X \underline{J}(x) \, dx \\ = \int_X \left(\overline{\int}_Y f(x,y) \, dy\right) \, dx = \int_X \overline{J}(x)\, dx, $$</p> <p>where for fixed $x\in X$ the lower and upper Darboux integrals appearing above must exist (since $f$ is bounded) and as a conclusion are themselves Riemann integrable over $X$ and satisfy (1).</p> <p>We also have a similar statement as (1) with the order of the integration reversed, but we don't need to discuss that to proceed.</p> <p><strong>Proof of (1)</strong></p> <p>Let $P = P_X \times P_Y$ be a partition of $X \times Y$ where $P_X$ and $P_Y$ are partitions of $X$ and $Y$ into subintervals in $\mathbb{R}^m$ and $\mathbb{R}^n$, respectively. 
On any subinterval $R_X \times R_Y$ of $P$ we have $m_{R_X \times R_Y}(f) = \inf_{R_X \times R_Y} f(x,y) \leqslant f(x,y)$ and $m_{R_X \times R_Y}(f) \leqslant \inf_{R_Y} f(x,y) = m_{R_Y}(f)$, where we take $x$ as fixed in the second inequality.</p> <p>Hence,</p> <p>$$\sum_{R_Y} m_{R_X \times R_Y}(f) \text{vol }(R_Y) \\ \leqslant \sum_{R_Y} m_{R_Y}(f) \text{vol }(R_Y) = L(P_Y, f(x,\cdot)) \leqslant \underline{\int}_Y f(x,y) \, dy = \underline{J}(x)$$</p> <p>Taking the infimum over $x \in R_X$, multiplying by $\text{vol }(R_X)$, and summing we get for lower Darboux sums</p> <p>$$L(P,f) = \sum_{R_X, R_Y} m_{R_X \times R_Y}(f) \text{vol }(R_Y)\text{vol }(R_X) \leqslant \sum_{R_X} \inf_{R_X} \underline{J}(x) \text{vol }(R_X) = L(P_X, \underline{J}) $$</p> <p>Similarly we can show for upper Darboux sums that $U(P,f) \geqslant U(P_X, \overline{J}),$ and it follows that</p> <p>$$L(P,f) \leqslant L(P_X, \underline{J}) \leqslant U(P_X,\underline{J}) \leqslant U(P_X,\overline{J}) \leqslant U(P,f), \\ L(P,f) \leqslant L(P_X, \underline{J}) \leqslant L(P_X,\overline{J}) \leqslant U(P_X,\overline{J}) \leqslant U(P,f). $$</p> <p>Since $f$ is Riemann integrable, for any $\epsilon &gt; 0$ there is a partition $P$ such that</p> <p>$$U(P,f) - L(P,f) &lt; \epsilon, \,\, U(P_X,\underline{J}) - L(P_X,\underline{J}) &lt; \epsilon, \, \, U(P_X,\overline{J}) - L(P_X,\overline{J}) &lt; \epsilon,$$</p> <p>and it follows that $\underline{J}$ and $\overline{J}$ are integrable over $X$ and (1) holds.</p> <p><strong>Correction of the Theorem (Zorich)</strong></p> <p>Since $\int_X \overline{J}(x) \, dx = \int_X \underline{J}(x) \, dx $ and $\overline{J}(x) \geqslant \underline{J}(x)$, it follows that $\overline{J}(x) = \underline{J}(x)$ almost everywhere, and the Riemann integral</p> <p>$$\int_Y f(x,y) \, dy$$</p> <p>exists except perhaps for $x$ in a set of measure zero where $\underline{J}(x) &lt; \overline{J}(x)$ with strict inequality.</p> <p>As pointed out by David C. 
Ullrich, that does not mean that a value may be assigned arbitrarily to the "symbol" $\int_Y f(x,y) \, dy$ and the Theorem holds. What Zorich should have stated is let the function $F:X \to \mathbb{R}$ be defined as </p> <p>$$F(x) = \int_Y f(x,y) \, dy$$</p> <p>when that integral exists, and let it be defined as any value in the interval $[\underline{J}(x), \overline{J}(x)]$ when $\underline{J}(x) &lt; \overline{J}(x)$ and the integral does not exist. Then instead of (1) the correct statement is </p> <p>$$\tag{2} \int_{X \times Y} f = \int_X F(x) \, dx,$$</p> <p>with something similar when the order of integration is reversed.</p>
3,059,617
<p>Note: I apologize in advance for not using proper notation on some of these values, but this is literally my first post on this site and I do not know how to display these values correctly.</p> <p>I recently was looking up facts about different cardinalities of infinity for a book idea, when I found a post made several years ago about <span class="math-container">$ℵ_{ℵ_0}$</span></p> <p><a href="https://math.stackexchange.com/questions/715321/if-the-infinite-cardinals-aleph-null-aleph-two-etc-continue-indefinitely-is">If the infinite cardinals aleph-null, aleph-two, etc. continue indefinitely, is there any meaning in the idea of aleph-aleph-null?</a></p> <p>In this post people talk about the difference between cardinal numbers and how <span class="math-container">$ℵ_{ℵ_0}$</span> should instead be <span class="math-container">$ℵ_ω$</span>. The responses to the post then go on to talk about <span class="math-container">$ℵ_{ω+1}$</span>, <span class="math-container">$ℵ_{ω+2}$</span>, and so on.</p> <p>Anyways, my understanding of the different values of ℵ was that they corresponded to the cardinalities of infinite sets, with <span class="math-container">$ℵ_0$</span> being the cardinality of the set of all natural numbers, and that if set X has cardinality of <span class="math-container">$ℵ_a$</span>, then the cardinality of the powerset of X would be <span class="math-container">$ℵ_{a+1}$</span>.</p> <p>With this in mind, I always imagined that if a set Y had cardinality <span class="math-container">$ℵ_0$</span>, and you found its powerset, and then you found the powerset of that set, and then you found the powerset of THAT set, and repeated the process infinitely you would get a set with cardinality <span class="math-container">$ℵ_{ℵ_0}$</span>.</p> <p>So, I guess my question is, in the discussion linked above, when people are talking about <span class="math-container">$ℵ_{ω+1}$</span>, how is that possible? 
Because if you take a powerset an infinite number of times, taking one more powerset is still just an infinite number of times, isn't it?</p> <p>I hope I worded this question in a way that people will understand, and thanks in advance into any insight you can give me about all this.</p>
David C. Ullrich
248,223
<p>The reason it should really be <span class="math-container">$\aleph_\omega$</span> instead of <span class="math-container">$\aleph_{\aleph_0}$</span> is that we think of <span class="math-container">$\aleph_\alpha$</span> for <em>ordinals</em> <span class="math-container">$\alpha$</span>. Yes, <span class="math-container">$\aleph_0=\omega$</span>, but we write <span class="math-container">$\omega$</span> when we think of it as an ordinal instead of a cardinal.</p> <p>The rest of your question is based on a fundamental misunderstanding: <span class="math-container">$\aleph_{\alpha+1}$</span> is <em>not</em> the cardinality of the power set of <span class="math-container">$\aleph_\alpha$</span>; what it is is the smallest cardinal larger than <span class="math-container">$\aleph_\alpha$</span>.</p>
3,059,617
<p>Note: I apologize in advance for not using proper notation on some of these values, but this is literally my first post on this site and I do not know how to display these values correctly.</p> <p>I recently was looking up facts about different cardinalities of infinity for a book idea, when I found a post made several years ago about <span class="math-container">$ℵ_{ℵ_0}$</span></p> <p><a href="https://math.stackexchange.com/questions/715321/if-the-infinite-cardinals-aleph-null-aleph-two-etc-continue-indefinitely-is">If the infinite cardinals aleph-null, aleph-two, etc. continue indefinitely, is there any meaning in the idea of aleph-aleph-null?</a></p> <p>In this post people talk about the difference between cardinal numbers and how <span class="math-container">$ℵ_{ℵ_0}$</span> should instead be <span class="math-container">$ℵ_ω$</span>. The responses to the post then go on to talk about <span class="math-container">$ℵ_{ω+1}$</span>, <span class="math-container">$ℵ_{ω+2}$</span>, and so on.</p> <p>Anyways, my understanding of the different values of ℵ was that they corresponded to the cardinalities of infinite sets, with <span class="math-container">$ℵ_0$</span> being the cardinality of the set of all natural numbers, and that if set X has cardinality of <span class="math-container">$ℵ_a$</span>, then the cardinality of the powerset of X would be <span class="math-container">$ℵ_{a+1}$</span>.</p> <p>With this in mind, I always imagined that if a set Y had cardinality <span class="math-container">$ℵ_0$</span>, and you found its powerset, and then you found the powerset of that set, and then you found the powerset of THAT set, and repeated the process infinitely you would get a set with cardinality <span class="math-container">$ℵ_{ℵ_0}$</span>.</p> <p>So, I guess my question is, in the discussion linked above, when people are talking about <span class="math-container">$ℵ_{ω+1}$</span>, how is that possible? 
Because if you take a powerset an infinite number of times, taking one more powerset is still just an infinite number of times, isn't it?</p> <p>I hope I worded this question in a way that people will understand, and thanks in advance into any insight you can give me about all this.</p>
Asaf Karagila
622
<p>Ordinals are not cardinals.</p> <p>Recall Hilbert's hotel. Where you have infinitely many rooms, one for each natural number, and they are all full. And there's a party, with all the guests invited. At some point, after so many drinks, people need to use the restroom.</p> <p>So someone goes in, and immediately after another person comes and stands in line. They only have to wait for the person inside to come out, so they have <span class="math-container">$0$</span> people in front of them, and then another person comes and they only have to wait for <span class="math-container">$1$</span> person in front of them, and then another and another and so on. That's fine. But the person in the bathroom had passed out, unfortunately, and everyone is so polite, so they just wait quietly. And the queue gets longer.</p> <p>Let's for concreteness sake, point out that only people who stay in rooms with an even room number go to the toilet. The others are just fine holding it in. Now for every given <span class="math-container">$n$</span>, there is someone in the queue which needs to wait for at least <span class="math-container">$n$</span> people. The queue is infinite. But it's fine, since each person has only to wait a finite amount of time for their turn.</p> <p>But what's this now? The person in room <span class="math-container">$3$</span> has to use the toilet as well. But they cannot cut in the line, that would be impolite. So they stand at the back. Well. There were <span class="math-container">$\aleph_0$</span> people in the queue, that's <em>how many</em>, and we added just one more, so there are still <span class="math-container">$\aleph_0$</span> people waiting in line. But now we have one person who has to wait for infinitely many people to go before them. So the queue <em>is ordered</em> in a brand new way. 
If they were lucky and someone decided to let them cut in line, then the queue would have looked the same, just from some point on people would have to wait for just one more person to go first.</p> <p>This is not what happened, though. So the queue <em>looks</em> different. Well, now we continue, all the people in room numbers which are powers of <span class="math-container">$3$</span> start to follow. And at some point we get to a queue which looks like two copies of the natural numbers stitched up. And then the bloke from room <span class="math-container">$5$</span> joins the line, and he has to wait for <em>two</em> infinite queues to go before their turn. And so on and so forth.</p> <hr> <p>Okay, what's the point of all that?</p> <p>The point is that for finite queues the question of "how many people" and "how is the queue ordered" are the same question. So adding one person does not matter where this person was added to the queue. But when the queue was infinite, adding one person at the end or adding it to the middle would very much change the queue's order. So "how many" is no longer the same as "how long is the queue".</p> <p>When iterating an operation transfinitely many times, e.g. by taking power sets or cardinal successors, we work successively. This creates a queue-like structure of cardinals. The <em>first</em>, the <em>second</em>, etc., which are <em>ordinal</em> numbers, they talk about order.</p> <p>So once you go through the finite ones, you have to move to infinite <em>ordinals</em>, not to infinite <em>cardinals</em>. As such <span class="math-container">$\omega$</span> is the appropriate notation, since it denotes an ordinal, rather than <span class="math-container">$\aleph_0$</span> which denotes a cardinal.</p> <p>Between <span class="math-container">$\aleph_\omega$</span> and <span class="math-container">$\aleph_{\omega+1}$</span> there are similarities: both have infinitely many [infinite] cardinals smaller than themselves. 
But it is not the same, exactly because we are dealing with the question "how are these ordered" rather than "how many are there".</p> <p><sub>Note that <span class="math-container">$\aleph$</span> numbers are not defined by power sets, these are <span class="math-container">$\beth$</span> numbers (Beth is the second letter of the Hebrew alphabet, whereas Aleph is the first one). But this is irrelevant to your actual question.</sub></p>
2,044,451
<p>I'm trying to prove the following inequality: $$- \ln (x) \leq (x)^{-\frac{1}{e}} $$</p> <p>over $[0, 1]$. I'm not sure how to move forward. I know that the equality is at $x = e^{-e}$. Any help is greatly appreciated.</p>
Jack D'Aurizio
44,121
<p>If we set $x=e^{-t}$, we just have to show that $$ \forall t\geq 0,\qquad t\leq \exp\frac{t}{e} \tag{1}$$ but that is trivial by convexity: $f(t)=\exp\frac{t}{e}$ is a convex function on $\mathbb{R}$, and the equation of the tangent line at $x=e$ is exactly $g(t)=t$.</p>
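<p>A quick numerical sanity check of the claim (not a substitute for the convexity argument): sample the substituted inequality $t\le e^{t/e}$ and the original one, and confirm equality at $x=e^{-e}$. A rough sketch; the grid and tolerances below are arbitrary choices.</p>

```python
import math

# Substituted form: t <= exp(t/e) for t >= 0 (equality at t = e).
for i in range(2001):
    t = i * 0.01                      # t in [0, 20]
    assert t <= math.exp(t / math.e) + 1e-12

# Original form: -ln(x) <= x**(-1/e) on (0, 1].
for i in range(1, 1001):
    x = i * 0.001
    assert -math.log(x) <= x ** (-1 / math.e) + 1e-12

# Equality point x = e**(-e): both sides equal e.
x0 = math.exp(-math.e)
assert abs(-math.log(x0) - x0 ** (-1 / math.e)) < 1e-9
```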
1,670,816
<p>Given <span class="math-container">$a, b \in \Bbb R$</span>, consider the following large tridiagonal matrix</p> <p><span class="math-container">$$M := \begin{pmatrix} a^2 &amp; b &amp; 0 &amp; 0 &amp; \cdots \\ b &amp; (a+1)^2 &amp; b &amp; 0 &amp; \cdots &amp; \\ 0 &amp; b &amp; (a+2)^2 &amp; b &amp; \cdots \\ \vdots &amp; \vdots &amp; \vdots &amp; \vdots &amp; \ddots \end{pmatrix}$$</span></p> <p>What can be said about its eigenvalues? Are analytic expressions known? Or, at least, properties of the eigenvalues?</p>
copper.hat
27,978
<p>If you want an approximation, the implicit function theorem is a useful tool.</p> <p>Let $\phi(x,\epsilon) = (x-1)(x-2)(x-3)(x-4)-\epsilon x^6$. It is not too difficult to compute ${\partial \phi(4,0) \over \partial x} = 6$, ${\partial \phi(4,0) \over \partial \epsilon} = -4^6$. Hence there is a function $\xi$ defined in a neighbourhood of $\epsilon=0$ such that $\phi(\xi(\epsilon), \epsilon) = 0$, and ${\partial \xi(0) \over \partial \epsilon} = - ({\partial \phi(4,0) \over \partial x})^{-1} {\partial \phi(4,0) \over \partial \epsilon} = - {-4^6 \over 6} = {4^6 \over 6}$.</p> <p>Hence we expect $\xi({1 \over 10^6}) \approx \xi(0) + {\partial \xi(0) \over \partial \epsilon} {1 \over 10^6}=4 +{4^6 \over 6} {1 \over 10^6} \approx 4.000683$.</p>
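<p>The first-order estimate can be checked numerically: run Newton's method on $\phi(\cdot,\epsilon)$ starting from the unperturbed root $x=4$ and compare with $4+\frac{4^6}{6}\epsilon$. A rough sketch; the iteration count and tolerances are ad hoc.</p>

```python
eps = 1e-6

def phi(x):
    return (x - 1) * (x - 2) * (x - 3) * (x - 4) - eps * x ** 6

def dphi(x):
    # product rule on the quartic, minus the derivative of eps*x^6
    p = ((x - 2) * (x - 3) * (x - 4) + (x - 1) * (x - 3) * (x - 4)
         + (x - 1) * (x - 2) * (x - 4) + (x - 1) * (x - 2) * (x - 3))
    return p - 6 * eps * x ** 5

x = 4.0                          # start at the unperturbed root
for _ in range(50):
    x -= phi(x) / dphi(x)

approx = 4 + 4 ** 6 / 6 * eps    # implicit-function-theorem estimate
assert abs(phi(x)) < 1e-10       # Newton converged
assert abs(x - approx) < 1e-5    # agreement to about first order in eps
```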
4,092,182
<p>Let <span class="math-container">$I$</span> be an interval and <span class="math-container">$f:I \rightarrow \mathbb{R}$</span> monotone and surjective; prove that <span class="math-container">$f$</span> is continuous.</p> <p>I tried using the definition of <span class="math-container">$\epsilon$</span>-<span class="math-container">$\delta$</span> and supposing that <span class="math-container">$f$</span> is not continuous, but I don't see where to use that <span class="math-container">$f$</span> is surjective.</p>
user126154
126,154
<p>Given a tensor <span class="math-container">$T$</span> of type <span class="math-container">$(0,q)$</span> and given a vector field <span class="math-container">$X$</span>, the covariant derivative <span class="math-container">$\nabla_XT$</span> is still a tensor of type <span class="math-container">$(0,q)$</span>, which can be made explicit as follows:</p> <p><span class="math-container">$$(\nabla_XT)(Y_1,...,Y_q)= X(T(Y_1,...,Y_q))-\sum_i T(Y_1,\dots,\nabla_XY_i,\dots,Y_q)$$</span></p> <p>(note that <span class="math-container">$T(Y_1,...,Y_q)$</span> is a function <span class="math-container">$\varphi$</span> on <span class="math-container">$M$</span>, so <span class="math-container">$X(\varphi)=d\varphi(X)$</span> is well-defined and equals <span class="math-container">$\nabla_X\varphi$</span>).</p> <p>This expression is tensorial in <span class="math-container">$X$</span>. Thus, given a tensor <span class="math-container">$T$</span> of type <span class="math-container">$(0,q)$</span>, we can define <span class="math-container">$\nabla T$</span>, a tensor <span class="math-container">$(0,q+1)$</span>, as follows:</p> <p><span class="math-container">$$(\nabla T)(X,V_1,...,V_q)=(\nabla_XT)(V_1,...,V_q)$$</span></p> <p>In this way <span class="math-container">$\nabla$</span> is seen as an operator from tensors <span class="math-container">$(0,q)$</span> to tensors <span class="math-container">$(0,q+1)$</span>. We can therefore consider <span class="math-container">$\nabla^2=\nabla\circ\nabla$</span> from <span class="math-container">$(0,q)$</span>-tensors to <span class="math-container">$(0,q+2)$</span> tensors. 
(And all this remains true also for tensors <span class="math-container">$(p,q)$</span>).</p> <p>Now, if <span class="math-container">$f$</span> is a function (a <span class="math-container">$(0,0)$</span> tensor), then its <span class="math-container">$(0,2)$</span> Hessian is just <span class="math-container">$$Hess(f):=\nabla\nabla f.$$</span></p> <p>Let's compute it explicitly:</p> <p><span class="math-container">$Hess(f)(X,Y)=(\nabla\nabla f)(X,Y)$</span>. Now we set <span class="math-container">$T=\nabla f$</span>, that is to say <span class="math-container">$T(Y)=\nabla_Yf=Y(f)$</span>, and we apply the above formula. We get: <span class="math-container">$(\nabla\nabla f)(X,Y)=(\nabla T)(X,Y)=(\nabla_X T)(Y)=X(T(Y))-T(\nabla_XY)=\nabla_X(\nabla_Yf)-\nabla_{\nabla_XY}f$</span></p> <p>So <span class="math-container">$$\nabla^2f(X,Y)=X(Y(f))-\nabla_{\nabla_XY}f$$</span></p> <p>So, coming back to your questions:</p> <ol> <li><p><span class="math-container">$\nabla_X(\nabla_Yf))=X(Y(f))$</span>, denotes indeed usual derivatives: first along <span class="math-container">$Y$</span> and then along <span class="math-container">$X$</span>; and</p> </li> <li><p><span class="math-container">$\nabla^2f(X,Y)$</span> is different from <span class="math-container">$\nabla_X\nabla_Yf$</span>, the difference being the term <span class="math-container">$\nabla_{\nabla_XY}f$</span>.</p> </li> </ol> <p>All these calculations generalise when <span class="math-container">$f$</span> is a tensor and not just a function.</p>
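<p>In the flat case <span class="math-container">$M=\mathbb{R}^2$</span> with the Euclidean connection <span class="math-container">$(\nabla_XY)^j=\sum_i X^i\partial_iY^j$</span>, the identity <span class="math-container">$\nabla^2f(X,Y)=X(Y(f))-\nabla_{\nabla_XY}f$</span> can be checked numerically with central differences. The particular <span class="math-container">$f$</span>, <span class="math-container">$X$</span>, <span class="math-container">$Y$</span>, base point, and step size below are arbitrary choices for illustration.</p>

```python
import math

h = 1e-4  # central-difference step

def f(p):
    x, y = p
    return math.sin(x) * y ** 2 + x * y

def X(p):                       # sample (non-constant) vector fields
    x, y = p
    return (y, x * x)

def Y(p):
    x, y = p
    return (x + y, math.cos(x))

def d(g, p, i):
    # central-difference partial derivative of the scalar function g at p
    q1, q2 = list(p), list(p)
    q1[i] += h
    q2[i] -= h
    return (g(tuple(q1)) - g(tuple(q2))) / (2 * h)

p = (0.7, -1.3)

Yf = lambda q: sum(Y(q)[j] * d(f, q, j) for j in range(2))   # the scalar Y(f)

# X(Y(f)): plain iterated derivative (not tensorial in Y)
XYf = sum(X(p)[i] * d(Yf, p, i) for i in range(2))

# correction term (nabla_X Y)(f), Euclidean connection
nablaXY = [sum(X(p)[i] * d(lambda q, j=j: Y(q)[j], p, i) for i in range(2))
           for j in range(2)]
corr = sum(nablaXY[j] * d(f, p, j) for j in range(2))

# tensorial Hessian: sum_{i,j} X^i Y^j d_i d_j f
hess = sum(X(p)[i] * Y(p)[j] * d(lambda q, i=i: d(f, q, i), p, j)
           for i in range(2) for j in range(2))

assert abs((XYf - corr) - hess) < 1e-4
```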
385,404
<p>So, I have this question which is still troubling me:</p> <blockquote> <p>Find the value of $k$ such that the equation $2x^3 + 3x^2 + kx - 48 = 0$ has two solutions equal in value but opposite in sign.</p> </blockquote> <p>I've had numerous attempts at this, such as using simultaneous equations and the factor theorem, but there always seems to be a problem. I'm sure I'm missing an important step here. Any clearing up would be great, thanks!</p>
lab bhattacharjee
33,337
<p>HINT:</p> <p>So, we can assume the roots to be of the form $a,-a,b$</p> <p>So using <a href="http://mathworld.wolfram.com/VietasFormulas.html">Vieta's formula</a> </p> <p>$a+(-a)+b=-\frac32\implies b=-\frac32$ and $a\cdot(-a)\cdot b=\frac{48}2\implies a^2=16\implies a=\pm4$</p> <p>So, the roots are $\pm4,-\frac32$</p> <p>Again using Vieta's formula (note the leading coefficient $2$) $\frac k2=a\cdot b+(-a)\cdot b+a\cdot(-a)$</p>
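<p>The hint can be confirmed with exact arithmetic. Since the leading coefficient is $2$, Vieta gives the sum of pairwise products of the roots as $k/2$, so $k=2\left(a\cdot b+(-a)\cdot b+a\cdot(-a)\right)=-32$. A quick check:</p>

```python
from fractions import Fraction as F

a, b = F(4), F(-3, 2)                  # roots 4, -4, -3/2 from the hint
k = 2 * (a * b + (-a) * b + a * (-a))  # pairwise-product sum equals k/2
assert k == -32

def p(x):
    return 2 * x ** 3 + 3 * x ** 2 + k * x - 48

for r in (a, -a, b):
    assert p(r) == 0                   # all three are exact roots
```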
1,915,257
<p>$$\sum_{n=2}^\infty\frac{\cos\ln\ln n}{\ln n}$$ My idea is $$-\frac1{\ln n}\le\frac{\cos\ln\ln n}{\ln n}\le\frac1{\ln n}$$ But I don't know if $\sum\frac1{\ln n}$ converges.</p>
marty cohen
13,079
<p>Off the top of my head and from my phone:</p> <p>$\cos(\log \log n)$ is essentially constant for longer and longer stretches, and $\sum 1/\log n$ diverges, so the sum diverges.</p> <p>I'm sure this could be made rigorous for any function that grows slowly like $\log \log$.</p>
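<p>The heuristic can be illustrated numerically: on the single block $10^3\le n&lt;10^6$, $\ln\ln n$ only moves from about $1.93$ to about $2.63$, so $\cos(\ln\ln n)$ keeps one sign with $|\cos(\ln\ln n)|\gtrsim 0.35$, and that one block alone already shifts the partial sums by tens of thousands. A rough sketch (the block endpoints are arbitrary):</p>

```python
import math

lo, hi = 1000, 10 ** 6

# ln ln n stays inside (pi/2, pi) on this block, where cos is negative
# and monotone, so |cos| attains its minimum at an endpoint.
for n in (lo, hi):
    assert math.pi / 2 < math.log(math.log(n)) < math.pi
min_abs_cos = min(abs(math.cos(math.log(math.log(n)))) for n in (lo, hi))
assert min_abs_cos > 0.35

# The single block contributes a huge (negative) amount to the partial sums.
block = sum(math.cos(math.log(math.log(n))) / math.log(n)
            for n in range(lo, hi))
assert block < -10_000
```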
4,136,532
<p>I know it's a weird question.<br /> But this thing is confusing me.<br /> (x̅) : average μ<br />     </p> <p>①<br /> ∵ <span class="math-container">$\frac{1}{n}\sum\limits_{i=1}^n(x_i) = \bar{x}$</span><br /> ∴ <span class="math-container">$\sum\limits_{i=1}^n(x_i) = {n}\bar{x}$</span><br />     </p> <p>②<br /> ∵ <span class="math-container">$\sum\limits_{i=1}^n(C) = {n}{C}$</span>    | C = constant<br /> ∴ <span class="math-container">$\sum\limits_{i=1}^n(\bar{x}) = {n}{\bar{x}}$</span>     </p> <hr /> <p>From ①, ②<br /> <span class="math-container">$(x_i) = (\bar{x})$</span> ???</p> <p>How could this be true?? Am I missing something?</p>
user170231
170,231
<p><span class="math-container">$\bar x$</span> is the average, so it's just some number that you can factor out of the summand:</p> <p><span class="math-container">$$\sum_{i=1}^n\bar x=\bar x\sum_{i=1}^n1=\bar x\left(\underbrace{1+1+\cdots+1}_{n\text{ times}}\right)=\bar x n$$</span></p>
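<p>A tiny numerical illustration of the point: the two sums are equal, but that says nothing about the individual terms. (The data below are arbitrary.)</p>

```python
data = [1.0, 2.0, 6.0]
n = len(data)
xbar = sum(data) / n                   # xbar = 3.0

# the totals agree: sum x_i = n*xbar = sum of n copies of xbar ...
assert sum(data) == n * xbar == sum(xbar for _ in data)

# ... but the individual terms need not equal xbar:
assert any(x != xbar for x in data)
```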
1,621,275
<blockquote> <p>Show that $a_n=\frac{n+1}{2n}a_{n-1}+1$ given that:</p> <p>$a_n=1/{{n}\choose{0}}+1/{{n}\choose{1}}+...+1/{{n}\choose{n}}$</p> </blockquote> <p>The hint says to consider when $n$ is even and odd. When $n=2k$ I get:</p> <p>$$a_{n}=1/{{2k}\choose{0}}+1/{{2k}\choose{1}}+...+1/{{2k}\choose{2k}}$$ $$=1+1/{{2k}\choose{1}}+...+1/{{2k}\choose{2k}}$$ $$=1+1/(2k{{2k-1}\choose{0}})+1/(\frac{2k}{2}{{2k-1}\choose{0}})...+1/(\frac{2k}{2k}{{2k-1}\choose{2k-1}})$$ $$=1+\frac{1}{2k}(1/{{2k-1}\choose{0}}+1/{{2k-1}\choose{1}}+1/{{2k-1}\choose{2k-1}})$$</p> <p>which should be $\frac{2k+1}{4k}a_{n-1}+1$ in the end.</p> <p><strong>I used:</strong></p> <p>${{n}\choose{k}}=\frac{n}{k}{{n-1}\choose{k-1}}$ and tried ${{n-1}\choose{k-1}}={{n-1}\choose{n-k}}$</p>
Community
-1
<p>Let $f(x)=\sum_{n=0}^{\infty} a_n x^n$ (with $a_0=1$). Then
$$f(x) =1+\sum_{n=1}^{\infty} a_n x^n =1+\sum_{n=1}^{\infty} \frac{n+1}{2n} a_{n-1} x^n +\sum_{n=1}^{\infty} x^n =1+\sum_{n=0}^{\infty}\frac{n+2}{2(n+1)} a_n x^{n+1} +\frac{x}{1-x}$$
$$=\frac{1}{1-x} +\frac{1}{2} \int\sum_{n=0}^{\infty}(n+2) a_n x^n \,dx =\frac{1}{1-x} +\frac{1}{2}\int \frac{1}{x} \left(\sum_{n=0}^{\infty} a_n x^{n+2}\right)' dx$$
$$=\frac{1}{1-x}+\frac{1}{2}\int \frac{\left(x^2 f(x)\right)'}{x}\, dx =\frac{1}{1-x} +\frac{1}{2} \int \left(2f(x) +xf'(x)\right)dx.$$
Hence
$$f'(x) =\frac{1}{(1-x)^2} +f(x) +\frac{xf'(x)}{2}$$ </p>
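<p>Independently of the generating-function manipulation, the recurrence $a_n=\frac{n+1}{2n}a_{n-1}+1$ for $a_n=\sum_{k=0}^n 1/\binom nk$ can be confirmed with exact rational arithmetic (a quick sketch):</p>

```python
from fractions import Fraction
from math import comb

def a(n):
    # a_n = sum_{k=0}^{n} 1 / C(n, k), computed exactly
    return sum(Fraction(1, comb(n, k)) for k in range(n + 1))

for n in range(1, 20):
    assert a(n) == Fraction(n + 1, 2 * n) * a(n - 1) + 1
```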
142,734
<p>What are some books that discuss elementary mathematical topics ('school mathematics'), like arithmetic, basic non-abstract algebra, plane &amp; solid geometry, trigonometry, etc, in an insightful way? I'm thinking of books like Klein's <em>Elementary Mathematics from an Advanced Standpoint</em> and the books of the Gelfand Correspondence School - school-level books with a university ethos.</p>
Piyush Grover
30,684
<p><a href="http://rads.stackoverflow.com/amzn/click/0821819984"><em>Geometry and the Imagination</em></a> by Hilbert and Cohn-Vossen.</p>
142,734
<p>What are some books that discuss elementary mathematical topics ('school mathematics'), like arithmetic, basic non-abstract algebra, plane &amp; solid geometry, trigonometry, etc, in an insightful way? I'm thinking of books like Klein's <em>Elementary Mathematics from an Advanced Standpoint</em> and the books of the Gelfand Correspondence School - school-level books with a university ethos.</p>
Zurab Silagadze
32,389
<p><a href="http://rads.stackoverflow.com/amzn/click/0195105192">What Is Mathematics? An Elementary Approach to Ideas and Methods </a> by Richard Courant and Herbert Robbins</p> <p><a href="http://rads.stackoverflow.com/amzn/click/0821843672">Lessons in Geometry </a> by Jacques Hadamard, and its companion books: <a href="http://rads.stackoverflow.com/amzn/click/0821843680">Hadamard's Plane Geometry </a> and <a href="http://www.ams.org/publications/authors/books/postpub/mbk-70-Hadamard-supp-problems.pdf">Hadamard: elementary geometry. solutions and notes to supplementary problems</a> by Mark Saul.</p>
142,734
<p>What are some books that discuss elementary mathematical topics ('school mathematics'), like arithmetic, basic non-abstract algebra, plane &amp; solid geometry, trigonometry, etc, in an insightful way? I'm thinking of books like Klein's <em>Elementary Mathematics from an Advanced Standpoint</em> and the books of the Gelfand Correspondence School - school-level books with a university ethos.</p>
Arash
39,212
<p>I personally enjoyed these books:</p> <p><a href="http://en.wikipedia.org/wiki/How_to_Solve_It" rel="nofollow">How To Solve It</a> by George Polya</p> <p><a href="http://rads.stackoverflow.com/amzn/click/0883856190" rel="nofollow">Geometry Revisited</a> by H. S. M. Coxeter , Samuel L. Greitzer </p>
2,378,042
<p>I have been learning that the sum of the squares of the first <span class="math-container">$n $</span> natural numbers is given by <span class="math-container">$$\Sigma= \frac {1}{6} n (n+1)(2n+1) $$</span> while the sum of the cubes of the first <span class="math-container">$n $</span> natural numbers is given by: <span class="math-container">$$\left[ \dfrac {n (n+1)}{2} \right] ^2$$</span> I know that these can be <strong>verified</strong> by induction process, but can anyone explain how these were <strong>derived</strong> for the first time, and how similar formulae are derived? In derivation, one cannot use mathematical induction, so which process is used to derive such formulae?</p> <p>Simply said, can anyone show how these formulae are <strong>derived</strong>, and not verified?</p>
Thiago Nascimento
226,141
<p>You can use $\sum_{k=0}^{n} k^{3} = \sum_{k=0}^{n} (k+1)^{3}- (n+1)^{3}$: expand $(k+1)^3$ and see that the cubic terms cancel, leaving an equation you can solve for $\sum k^2$ in terms of $\sum k$. You can use this method for the sum of $k^{4}$, etc.</p>
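<p>Both closed forms, and the telescoping identity itself, are easy to check by brute force (an illustration, not a derivation):</p>

```python
def sum_squares(n):
    return n * (n + 1) * (2 * n + 1) // 6

def sum_cubes(n):
    return (n * (n + 1) // 2) ** 2

for n in range(200):
    assert sum_squares(n) == sum(k * k for k in range(n + 1))
    assert sum_cubes(n) == sum(k ** 3 for k in range(n + 1))

# the telescoping identity: sum k^3 = sum (k+1)^3 - (n+1)^3
for n in range(50):
    assert (sum((k + 1) ** 3 for k in range(n + 1)) - (n + 1) ** 3
            == sum(k ** 3 for k in range(n + 1)))
```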
28,395
<p>I want to generate a table of Euclidean distances from the points in a list to a given point, e.g.:</p> <pre><code>Table[EuclideanDistance[MyList[[i]], c], {i, 1, Length[MyList]}] </code></pre> <p>Moreover, when <code>EuclideanDistance</code> returns a value greater than $T$, I'd like to replace that value with a given real number $R$. Is there a simple way to do this?</p>
Jens
245
<p>Why not use <code>Clip</code>?</p> <pre><code>MyList = RandomReal[{-1, 1}, {10, 2}]; c = {0, 0}; t = 1; r = -1; Clip[ Table[EuclideanDistance[MyList[[i]], c], {i, 1, Length[MyList]}], {0, t}, {0, r}] (* ==&gt; {0.9957995322104205`,0.3452581732209688`,0.016464628727136405`, -1,-1,0.5914902487316964`,0.8531216593853862`,-1,0.9996775567985703`, -1} *) </code></pre> <p>Here, <code>r = -1</code> is the value that replaces any number above the threshold <code>t = 1</code>. (Note that <code>MyList</code> must be a list of points of the same dimension as <code>c</code>, hence <code>{10, 2}</code>.)</p>
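<p>For comparison, the same "distances with a cutoff" idea in plain Python (a sketch; the names <code>t</code> and <code>r</code> mirror the Mathematica code, and the sample points are arbitrary):</p>

```python
import math

def clipped_distances(points, c, t, r):
    """Euclidean distance from each point to c; values above t become r."""
    out = []
    for p in points:
        dist = math.dist(p, c)
        out.append(dist if dist <= t else r)
    return out

pts = [(0.3, 0.4), (1.0, 1.0), (-0.2, 0.1)]
vals = clipped_distances(pts, (0.0, 0.0), t=1.0, r=-1.0)
assert abs(vals[0] - 0.5) < 1e-12   # 3-4-5 triangle, within threshold
assert vals[1] == -1.0              # sqrt(2) > 1, replaced by r
```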
429,380
<p>I am not familiar with this three-line equals sign and reading about it didn't really help with the original problem, which is: </p> <blockquote> <p>From the options below choose up to two that show correct solutions for each of the three calculations:</p> </blockquote> <p>The photo of the options:</p> <blockquote> <p><img src="https://i.stack.imgur.com/byqGi.jpg" alt="enter image description here"></p> </blockquote> <p>Little help?</p>
Higgs88
33,088
<p>I think that I found a proof:</p> <p>As I have shown, by Jensen's inequality we prove that $\{X_n^+\}_n$ is a backward submartingale. </p> <p>For any $K&gt;0$, we have</p> <p>$$KP(|X_n|\geq K)\leq E[|X_n|]=2E[X_n^+]-E[X_n]\leq 2E[X_1^+]-l&lt;\infty$$</p> <p>It follows that </p> <p>$$\sup_nP(|X_n|\geq K)\rightarrow 0,\ \ K\rightarrow\infty$$</p> <p>Firstly,</p> <p>$$E[1_{\{X_n^+\geq K\}}X_n^+]\leq E[1_{\{X_n^+\geq K\}}E[X_1^+|\mathcal{F}_n]]=E[1_{\{X_n^+\geq K\}}X_1^+]\leq E[1_{\{|X_n|\geq K\}}X_1^+]\rightarrow 0,\ \ K\rightarrow\infty$$</p> <p>by the integrability of $X_1^+$, which implies the uniform integrability of $\{X_n^+\}_n$.</p> <p>Secondly,</p> <p>$$0\geq E[1_{\{X_n^+\leq -K\}}X_n]=E[X_n]-E[1_{\{X_n^+&gt;-K\}}X_n]\geq E[X_n]-E[1_{\{X_n^+&gt;-K\}}X_N]=E[X_n]-E[X_N]+E[1_{\{X_n^+\leq -K\}}X_N],\ \ \forall n\geq N$$</p> <p>So for any given $\epsilon&gt;0$, we can take $N$ large enough that $-\epsilon/2\leq E[X_n]-E[X_N]\leq 0$ and for this fixed $N$ we can choose $K$ such that </p> <p>$$\sup_{n\geq N}E[1_{\{X_n^+\leq -K\}}|X_N|]\leq\epsilon/2$$</p> <p>which implies the uniform integrability of $\{X_n^-\}_n$, and we show $\{X_n\}_n$ is uniformly integrable.</p>
429,380
<p>I am not familiar with this three-line equals sign and reading about it didn't really help with the original problem, which is: </p> <blockquote> <p>From the options below choose up to two that show correct solutions for each of the three calculations:</p> </blockquote> <p>The photo of the options:</p> <blockquote> <p><img src="https://i.stack.imgur.com/byqGi.jpg" alt="enter image description here"></p> </blockquote> <p>Little help?</p>
Conrado Costa
226,425
<p>in the last step one can't choose $K$ large so that $\sup_{n \geq N} \mathbb{E}[1_{\{X_n^+ \leq -K\}}|X_N|] \leq \frac{\epsilon}{2}$</p> <p>and that is not just because of a typo (most likely you meant $\sup_{n \geq N} \mathbb{E}[1_{\{X_n \leq -K\}}|X_N|] \leq \frac{\epsilon}{2}$ ) </p> <p>It is because the negative part of the submartingale can't be bounded that way</p> <p>maybe you should take a look at</p> <p><a href="http://www.ma.utexas.edu/users/gordanz/notes/uniform_integrability.pdf" rel="nofollow">http://www.ma.utexas.edu/users/gordanz/notes/uniform_integrability.pdf</a></p>
2,958,566
<p>Expressing <span class="math-container">$\frac{t+2}{t^3+3}$</span> in the form <span class="math-container">$a_0+a_1t+...+a_4t^4$</span>, where <span class="math-container">$t$</span> is a root of <span class="math-container">$x^5+2x+2$</span>.</p> <p>So I can deal with the numerator, but how do I get rid of the denominator to get it into the correct form? Thanks in advance!</p>
rtybase
22,583
<p><strong>Without induction</strong>. Let's do some tagging first: <span class="math-container">$$m \mid a+d \tag{1}$$</span> <span class="math-container">$$m \mid (b-1)c \tag{2}$$</span> <span class="math-container">$$m \mid ab-a+c \tag{3}$$</span> then <span class="math-container">$$m \mid ab^n+cn+d \iff m \mid a\left(b^n-1\right)+cn+\color{red}{a+d} \overset{(1)}{\iff}\\ m \mid a\left(b^n-1\right)+cn \iff \\ m \mid a(b-1)\left(b^{n-1}+b^{n-2}+...+b^2+b+1\right)+cn \iff \\ m \mid (ab-a+c-c)\left(b^{n-1}+b^{n-2}+...+b^2+b+1\right)+cn \iff \\ m \mid \color{red}{(ab-a+c)\left(b^{n-1}+b^{n-2}+...+b^2+b+1\right)}-c\left(b^{n-1}+b^{n-2}+...+b^2+b+1\right)+cn \overset{(3)}{\iff}\\ m \mid c\left(b^{n-1}+b^{n-2}+...+b^2+b+1\right)-cn \iff \\ m \mid c\left(b^{n-1}-1\right)+c\left(b^{n-2}-1\right)+...+c\left(b^2-1\right)+\color{red}{c\left(b-1\right)} \overset{(2)}{\iff}\\ m \mid c\left(b^{n-1}-1\right)+c\left(b^{n-2}-1\right)+...+c\left(b^2-1\right)$$</span> which is true because for all <span class="math-container">$k\geq 2$</span> we have <span class="math-container">$$c\left(b^k-1\right)=\color{red}{c\left(b-1\right)}\left(b^{k-1}+b^{k-2}+...+b^{2}+b+1\right)$$</span> and from <span class="math-container">$(2)$</span> <span class="math-container">$$m \mid c\left(b^k-1\right)$$</span></p>
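<p>A quick brute-force sanity check of the statement, in illustrative Python that is not part of the original argument: enumerate small tuples satisfying the hypotheses (1)-(3) and verify the conclusion for a range of <span class="math-container">$n$</span>.</p>

```python
def satisfies_hypotheses(m, a, b, c, d):
    # (1) m | a + d,  (2) m | (b - 1)c,  (3) m | ab - a + c
    return ((a + d) % m == 0
            and ((b - 1) * c) % m == 0
            and (a * b - a + c) % m == 0)

checked = 0
for m in range(2, 9):
    for a in range(m):
        for b in range(m):
            for c in range(m):
                for d in range(m):
                    if satisfies_hypotheses(m, a, b, c, d):
                        # conclusion: m | a*b^n + c*n + d for every n >= 0
                        assert all((a * b**n + c * n + d) % m == 0
                                   for n in range(15))
                        checked += 1
assert checked > 0
```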
2,958,566
Bill Dubuque
242
<p><span class="math-container">$\bmod m\!:\ \color{#0a0}{f_{\large n+1}\!-b f_{\large n}} =\, \overbrace{(1\!-\!b)c}^{\large \equiv\ 0}n \,\overbrace{-\color{#c00}d\,b+\color{#c00}d+c}^{\large \equiv\ 0\ \ {\rm by}\ \ \color{#c00}{d}\ \equiv\ -a}\!\equiv\color{#0a0} 0,\ $</span> so <span class="math-container">$\ f_n\equiv 0\,\Rightarrow\,\color{#0a0}{f_{n+1}\equiv bf_n\equiv} 0$</span></p>
696,145
<blockquote> <p>For each positive integer <span class="math-container">$n$</span>, define <span class="math-container">$f(n)$</span> such that <span class="math-container">$f(n+1) &gt; f(n)$</span> and <span class="math-container">$f(f(n))=3n$</span>. What is the value of <span class="math-container">$f(10)?$</span></p> </blockquote> <p>This question was really hard for me. Since <span class="math-container">$n$</span> is a positive integer and <span class="math-container">$f(f(n)) = 3n$</span>, I deduced that <span class="math-container">$1&lt;f(1)&lt;f(f(1))$</span> so <span class="math-container">$f(1) = 2$</span> and I couldn't manage to carry on because if i used <span class="math-container">$2&lt;f(2)&lt;f(f(2))$</span>, I would get <span class="math-container">$f(f(2)) = 6$</span> but I wouldn't know how to work out <span class="math-container">$f(2)$</span>.</p> <p>If someone could please show me step by step how to get the answer, I would appreciate it as I would like to know how to get the answer, thanks.</p>
Anas A. Ibrahim
650,028
<p>Well, note that <span class="math-container">$$f(f(n))=3n \implies f(f(f(n)))=f(3n)=3f(n)$$</span> So <span class="math-container">$f(1)=2 \implies f(3)=6=f(f(2)) \implies f(2)=3$</span> (because <span class="math-container">$f$</span> is injective, easily proved from <span class="math-container">$f(f(n))=3n$</span>). Now, working our way through: <span class="math-container">$$f(6)=3f(2)=9$$</span> Thus <span class="math-container">$f(3)&lt;f(4)&lt;f(5)&lt;f(6) \implies f(4)=7, f(5)=8$</span> <span class="math-container">$$f(9)=3f(3)=18$$</span> <span class="math-container">$$f(12)=3f(4)=21$$</span> <span class="math-container">$$\implies 18&lt;f(10)&lt;f(11)&lt;21 \implies f(10)=19$$</span></p>
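<p>As a consistency check (illustrative Python, not part of the argument), the values derived above, together with <span class="math-container">$f(7)=12$</span> and <span class="math-container">$f(8)=15$</span> obtained the same way from <span class="math-container">$f(f(4))=12$</span> and <span class="math-container">$f(f(5))=15$</span>, satisfy both defining conditions:</p>

```python
# values of f derived in the answer (plus two more obtained the same way)
f = {1: 2, 2: 3, 3: 6, 4: 7, 5: 8, 6: 9, 7: 12, 8: 15,
     9: 18, 10: 19, 12: 21}

# f(f(n)) = 3n wherever both applications are known
for n in f:
    if f[n] in f:
        assert f[f[n]] == 3 * n, n

# strictly increasing on the arguments we know
keys = sorted(f)
assert all(f[a] < f[b] for a, b in zip(keys, keys[1:]))
```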
2,395,611
<p>This is probably an easy question, hinging on some minor detail, but I cannot find a proof of it. I am trying to work through T.A. Springer's book "Linear algebraic groups" and I got stuck at remark 7.1.4. (I know there is already a question asked about the same remark, but concerning a different statement made in it). Specifically I am interested in proving the following: </p> <p>Let $G$ be a connected linear algebraic group, $T\subset G$ a maximal subtorus and $S \subset C(G)$ another torus, lying in the center of the group $G$. Then it holds that: $$ W(G,T) \cong W(G/S,T/S). $$</p> <p>So what I do is:<br> For $n\in N_G(T)$, one already has $nTn^{-1}=T$, hence also modulo $S$, which gives a function $$ \phi : N_G(T) \to \frac{N_{G/S}(T/S)}{Z_{G/S}(T/S)} \\ n \mapsto n ~\mathrm{mod}~S~, $$ which is surjective, because $S\subset T$ (I could do that). </p> <p>What I need is a hint on how to prove that $\ker(\phi) = Z_G(T)$ holds. Thanks in advance!</p>
Simon Weinzierl
176,964
<p>$\require{AMScd}$ So I found an answer to this question in Humphreys' "Linear algebraic groups" (around paragraph §25). There a more general proposition is shown, namely:<br> For an epimorphism $\varphi : G\to G'$, such that $\varphi(T) =: T'$ is the maximal torus of $G'$, and that $\ker(\varphi)$ is contained in every Borel group of $G$, $\mathfrak{B} \to \mathfrak{B}'$ (the induced map on Borel groups), $\mathfrak{B}^T \to \mathfrak{B}'^{T'}$ (induced map on Borel groups over $T$) and $W(G,T) \to W(G',T')$ (induced map on Weyl groups (like $\phi$ in the question)) are bijective.<br> The point is that $W(G,T) \cong \mathfrak{B}^T$ and that there is a commutative diagram $$ \begin{CD} W(G,T) @&gt;&gt;&gt; W(G',T') \\ @VV{\cong}V @VV{\cong}V\\ \mathfrak{B}^T @&gt;&gt;&gt; \mathfrak{B}'^{T'}, \end{CD} $$ where the horizontal arrows are induced by the epimorphism $\varphi$.<br> I will show bijectivity of the lower horizontal arrow:<br> It suffices to show injectivity for $\mathfrak{B} \to \mathfrak{B}'$. So take two Borel groups of $G$, call them $B_1$ and $B_2$, s.t. $\varphi(B_1)=\varphi(B_2)$. Then by the conjugacy theorem $B_1= gB_2g^{-1}$, for some $g\in G$. Then it holds that $\varphi(B_1) =\varphi(g)\varphi(B_1)\varphi(g)^{-1}$, i.e. $\varphi(g)\in N_{G'}(\varphi(B_1)) =\varphi(B_1)$. Since $\ker(\varphi) \subseteq B_1$, $g\in B_1$ and hence $B_1=B_2$ follows.<br> For surjectivity take $B'$ Borel group of $G'$, over $T'$. Then $\varphi^{-1}(B')$ is some subgroup $H$, s.t. $G/H \to G'/B'$ is an epimorphism. Since $G'/B'$ is complete (Borel group is parabolic) $G/H$ is complete, i.e. $H$ is a parabolic and contains at least one Borel group $B$. Now $\varphi(B) \subseteq B'$ is a Borel group inside a bigger one, hence by the conjugacy theorem, they agree.<br> This shows that $W(G,T)\to W(G',T')$ is bijective. 
</p> <p>Now to show that the conditions of Humphreys are actually met by those given in the question: </p> <ul> <li>$S\subseteq C(G)^0 \subseteq C(B) \subseteq B$, i.e. the identity component of the center of $G$ lies in the center of every Borel group $B$.</li> <li>$S$ lies in $T$, because $S$ lies in $Z_G(T)^0$ (it commutes with every element of $T$) and thus there is a maximal torus in the Cartan subgroup of $T$ over $S$. Since there is only one maximal torus in $Z_G(T)^0$ and it is $T$, $S\subseteq T$. </li> </ul> <p>I will omit the rest, as it is again contained in Springer's book.</p>
1,025,079
<p>Consider two related problems:</p> <ol> <li>You have $n$ cannisters that must go into $m$ trucks that can each carry $k$ cannisters. You require that no truck becomes overloaded, and for each cannister, there is a specified subset of trucks in which it may be safely carried. Is there a way to load all $n$ cannisters into the $m$ trucks such that no truck is overloaded, and each cannisters goes into a truck that is allowed to carry it?</li> <li>Now, any cannisters can be placed in any truck, but there are certain pairs of cannisters that cannot be placed together in the same truck. Is there a way to load all $n$ cannisters into the $m$ trucks such that no truck is overloaded, and no two cannisters are placed in the same truck when they're not supposed to be?</li> </ol> <p>The question I have is whether either of these has a polynomial-time algorithm to solve it. When I think in terms of greedy algorithms, I can't really come up with anything, so is there a clever trick (or algorithm paradigm) that can be used to solve these in polynomial time?</p>
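<p>To make problem 1 concrete, here is a tiny brute-force feasibility checker (hypothetical Python, exponential in the number of cannisters, so only for toy instances; the question is precisely whether this can be decided in polynomial time):</p>

```python
from itertools import product

def feasible(n_trucks, capacity, allowed):
    """allowed[i] = set of trucks that may safely carry cannister i."""
    for assignment in product(range(n_trucks), repeat=len(allowed)):
        if (all(t in allowed[i] for i, t in enumerate(assignment))
                and all(assignment.count(t) <= capacity
                        for t in range(n_trucks))):
            return True
    return False

# cannister 2 must take truck 1; cannisters 0 and 1 fill truck 0
assert feasible(2, 2, [{0}, {0}, {0, 1}])
# three cannisters all restricted to one truck of capacity 2
assert not feasible(2, 2, [{0}, {0}, {0}])
```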
Jamie Lannister
234,745
<p>Yes, there is a clever trick for approximating solutions to the knapsack problem. Since the knapsack problem is solvable in pseudo-polynomial time, it admits a fully polynomial-time approximation scheme (FPTAS), so the approximation can be made arbitrarily close to optimal. The details are here:</p> <p><a href="http://math.mit.edu/~goemans/18434S06/knapsack-katherine.pdf" rel="nofollow">http://math.mit.edu/~goemans/18434S06/knapsack-katherine.pdf</a></p> <p>Basically, the trick is rounding your numbers.</p>
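<p>A sketch of that rounding trick for the 0/1 knapsack, in hypothetical Python (the function names are mine, not from the linked notes; it assumes positive values and that every single item fits within the capacity):</p>

```python
from itertools import product

def knapsack_fptas(values, weights, capacity, eps):
    """Return a value >= (1 - eps) * OPT for the 0/1 knapsack."""
    n = len(values)
    K = eps * max(values) / n              # rounding granularity
    scaled = [int(v // K) for v in values]
    # dp maps a scaled value to (min weight, true value of one achieving set)
    dp = {0: (0, 0)}
    for sv, v, w in zip(scaled, values, weights):
        for s, (wt, tv) in list(dp.items()):  # snapshot: each item used once
            ns, nw = s + sv, wt + w
            if nw <= capacity and (ns not in dp or nw < dp[ns][0]):
                dp[ns] = (nw, tv + v)
    return max(tv for _, tv in dp.values())

def knapsack_exact(values, weights, capacity):
    # exponential reference solution, only for tiny instances
    best = 0
    for mask in product([0, 1], repeat=len(values)):
        if sum(m * w for m, w in zip(mask, weights)) <= capacity:
            best = max(best, sum(m * v for m, v in zip(mask, values)))
    return best

vals, wts, cap = [60, 100, 120], [10, 20, 30], 50
opt = knapsack_exact(vals, wts, cap)
approx = knapsack_fptas(vals, wts, cap, 0.5)
assert (1 - 0.5) * opt <= approx <= opt
```

<p>The number of distinct scaled values is polynomial in the number of items and in <span class="math-container">$1/\epsilon$</span>, which is what makes the scheme fully polynomial.</p>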
176,386
<p>I stumbled across this "dictionary for noncommutative topology" <a href="http://planetmath.org/noncommutativetopology" rel="nofollow">http://planetmath.org/noncommutativetopology</a> and I would be very interested in learning more on the subject, particularly I'd like to see why these results are true. </p> <p>What is the reference to the results in section 3 of the above linked page?</p>
Nik Weaver
23,141
<p>Dixmier's book is a fine, if dated, introduction to C*-algebras, but really not the right place to learn about noncommutative topology. The place you want to go is Alain Connes' <a href="http://www.alainconnes.org/docs/book94bigpdf.pdf" rel="nofollow">Noncommutative Geometry</a>.</p> <p>Edit: in response to Yemon's comment I might refer to my own book <em>Mathematical Quantization</em>. Most of the correspondences mentioned are covered in Section 5.1, and many others can be found throughout the book.</p>
4,335,896
<blockquote> <p><span class="math-container">$$\frac{1}{2\cdot4}+\frac{1\cdot3}{2\cdot4\cdot6}+\frac{1\cdot3\cdot5}{2\cdot4\cdot6\cdot8}+\frac{1\cdot3\cdot5\cdot7}{2\cdot4\cdot6\cdot8\cdot10}+\cdots$$</span> is equal to?</p> </blockquote> <p><strong>My approach:</strong></p> <p>We can see that the <span class="math-container">$n^{th}$</span> term is <span class="math-container">\begin{align}a_n&amp;=\frac{1\cdot3\cdot5\cdot\space\dots\space\cdot(2n-3)\cdot(2n-1)}{2\cdot4\cdot6\cdot\space\dots\space\cdot(2n)\cdot(2n+2)}\\&amp;=\frac{1\cdot3\cdot5\cdot\space\dots\space\cdot(2n-3)\cdot(2n-1)}{2\cdot4\cdot6\cdot\space\dots\space\cdot(2n)\cdot(2n+2)}\color{red}{[(2n+2)-(2n+1)}]\\&amp;=\frac{1\cdot3\cdot5\cdot\space\dots\space\cdot(2n-3)\cdot(2n-1)}{2\cdot4\cdot6\cdot\space\dots\space\cdot(2n)}-\frac{1\cdot3\cdot5\cdot\space\dots\space\cdot(2n-3)\cdot(2n-1)\cdot(2n+1)}{2\cdot4\cdot6\cdot\space\dots\space\cdot(2n)\cdot(2n+2)}\\ \end{align}</span></p> <p>From here I just have a telescopic series to solve, which gave me <span class="math-container">$$\sum_{n=1}^{\infty}a_n=0.5$$</span></p> <p><strong>Another approach :</strong> <em>note :</em> <span class="math-container">$$\frac{(2n)!}{2^nn!}=(2n-1)!!$$</span></p> <p>Which gives <span class="math-container">$$a_n=\frac{1}{2}\left(\frac{(2n)!\left(\frac{1}{4}\right)^n}{n!(n+1)!}\right)$$</span></p> <p>So basically I need to compute <span class="math-container">$$\frac{1}{2}\sum_{n=1}^{\infty}\left(\frac{(2n)!\left(\frac{1}{4}\right)^n}{n!(n+1)!}\right) \tag{*}$$</span></p> <p>I'm not able to determine the binomial expression of <span class="math-container">$(*)$</span> (if it exists) or else you can just provide me the value of the sum</p> <p>Any hints will be appreciated, and you can provide different approaches to the problem too</p>
Eldar Sultanow
993,738
<p>I tried to rewrite the sum:</p> <p><span class="math-container">$$\sum _{n=1}^k \frac{\left(\frac{1}{4}\right)^n (2 n)!}{n! (n+1)!}$$</span></p> <p>computationally using the Mathematica code:</p> <pre><code>Sum[Factorial[2*n]*(1/4)^n/(Factorial[n]*Factorial[n + 1]), {n, 1, k}] </code></pre> <p>As a result I got:</p> <p><span class="math-container">$$\frac{2^{-2 k-1} \left(-k (2 (k+1))!-2 (2 (k+1))!+2^{2 k+1} (k+1)! (k+2)!\right)}{(k+1)! (k+2)!}\\=1-\frac{2^{-2 k-1} (k+2) (2 (k+1))!}{(k+1)! (k+2)!}=1-\frac{2 \Gamma \left(k+\frac{3}{2}\right)}{\sqrt{\pi } \Gamma (k+2)}$$</span></p> <p>where <span class="math-container">$\Gamma$</span> is the <a href="https://mathworld.wolfram.com/GammaFunction.html" rel="nofollow noreferrer">Euler Gamma Function</a>. Letting <span class="math-container">$k\to\infty$</span>, the complete sum becomes 1.</p>
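<p>A quick numeric cross-check of that closed form (illustrative Python rather than Mathematica; <code>math.gamma</code> overflows for large <span class="math-container">$k$</span>, so <span class="math-container">$k$</span> is kept moderate):</p>

```python
import math

def partial_sum(k):
    # (2n)!/(n!(n+1)!) = C(2n, n)/(n+1), the n-th Catalan number
    return sum(math.comb(2 * n, n) / (n + 1) / 4**n for n in range(1, k + 1))

def closed_form(k):
    return 1 - 2 * math.gamma(k + 1.5) / (math.sqrt(math.pi) * math.gamma(k + 2))

for k in (1, 5, 20, 80):
    assert abs(partial_sum(k) - closed_form(k)) < 1e-9

# the partial sums increase toward 1
assert partial_sum(1) == 0.25
assert 0.8 < partial_sum(80) < 1
```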
3,064,943
<p>Consider the sequence of i.i.d. r.v. <span class="math-container">$(Y_k)_{k\geq1}$</span> such that <span class="math-container">$\mathbb{P}(Y_k=1)=\mathbb{P}(Y_k=-1)=\frac{1}{2}$</span> and then consider the process <span class="math-container">$X=(X_n)_{n\geq1}$</span> such that <span class="math-container">$X_n=\sum_{k=1}^n\frac{Y_k}{k}$</span>. It's very easy to see that <span class="math-container">$X$</span> is a martingale w.r.t. the filtration that it generates itself. However I tried proving that it is almost surely convergent using the convergence theorem for martingales but I failed. Is this really almost surely convergent? If yes, is it possible to prove this using martingale theory or is it better to use other techniques like the Borel-Cantelli lemmas?</p>
Davide Giraudo
9,849
<p>One can see that <span class="math-container">$$ \mathbb E\left[X_n^2\right]=\sum_{k=1}^n\frac 1{k^2}\mathbb E\left[Y_k^2\right] $$</span> hence that the martingale <span class="math-container">$\left(X_n\right)_{n\geqslant 1}$</span> is bounded in <span class="math-container">$\mathbb L^2$</span>.</p>
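<p>Concretely, since <span class="math-container">$\mathbb E[Y_k^2]=1$</span>, the second moments are the partial sums of <span class="math-container">$\sum k^{-2}$</span>, which stay below <span class="math-container">$\pi^2/6$</span>; a throwaway Python check (illustrative only):</p>

```python
import math

# E[X_n^2] = sum_{k=1}^n 1/k^2, since E[Y_k^2] = 1
second_moments = []
s = 0.0
for k in range(1, 10001):
    s += 1.0 / k**2
    second_moments.append(s)

# increasing and uniformly bounded by pi^2/6  =>  bounded in L^2
assert all(a < b for a, b in zip(second_moments, second_moments[1:]))
assert all(m < math.pi**2 / 6 for m in second_moments)
```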
2,648,273
<p>I am reading "Linear Algebra" by Takeshi SAITO.</p> <p>Why $n \geq 0$ instead of $n \geq 1$?<br> Why $K^0 = \{0\}$?<br> Is $K^0 = \{0\}$ a definition or not?</p> <p>He wrote as follows in his book:</p> <p>Let $K$ be a field, and $n \geq 0$ be a natural number. </p> <p>$$K^n = \left\{\begin{pmatrix} a_{1} \\ a_{2} \\ \vdots \\ a_{n} \end{pmatrix} \middle| a_1, \cdots, a_n \in K \right\}$$</p> <p>is a $K$ vector space with addition of vectors and scalar multiplication.</p> <p>$$\begin{pmatrix} a_{1} \\ a_{2} \\ \vdots \\ a_{n} \end{pmatrix} + \begin{pmatrix} b_{1} \\ b_{2} \\ \vdots \\ b_{n} \end{pmatrix} = \begin{pmatrix} a_{1}+b_{1} \\ a_{2}+b_{2} \\ \vdots \\ a_{n}+b_{n} \end{pmatrix}\text{,}$$ $$c \begin{pmatrix} a_{1} \\ a_{2} \\ \vdots \\ a_{n} \end{pmatrix} = \begin{pmatrix} c a_{1} \\ c a_{2} \\ \vdots \\ c a_{n} \end{pmatrix}\text{.}$$ When $n = 0$, $K^0 = 0 = \{0\}$.</p>
Aloizio Macedo
59,234
<p>Another way to see why this definition is natural is that $K^n$ can be seen as the set of functions from a set with $n$ elements to $K$ (and the obvious "pointwise" addition, multiplication by scalar etc).</p> <p>This way, $K^0$ must be the set of functions from a set with zero elements (the empty set) to $K$. This is a set with only one element, which must be the zero element.</p>
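<p>This picture is easy to play with in Python, where length-<span class="math-container">$n$</span> tuples over a finite set stand in for <span class="math-container">$K^n$</span> (illustrative only):</p>

```python
from itertools import product

K = [0, 1, 2]  # stand-in for the elements of a (finite) field

# |K^n| = |K|^n, including the case n = 0
for n in range(4):
    assert len(list(product(K, repeat=n))) == len(K) ** n

# K^0 has exactly one element: the empty tuple, i.e. the unique
# function from the empty set to K -- the zero vector of K^0
assert list(product(K, repeat=0)) == [()]
```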
3,186,192
<p>Is the function below always positive for <span class="math-container">$0&lt; x &lt;1$</span>? (I am determining if the function requires the modulus sign or not.)</p> <p><span class="math-container">$$\frac{1}{2}\log\left|\frac{1+\log(x)}{1-\log(x)}\right|$$</span></p> <p>My first instinct is that it cannot be, but I would just like some third-party feedback. </p> <p>Thank you!</p>
Dasherman
177,453
<p>This function is actually negative in your interval (plot it to verify).</p> <p>Note that excluding <span class="math-container">$x=1/e$</span> (for which your expression is undefined), we have in the interval <span class="math-container">$0&lt;x&lt;1$</span> that <span class="math-container">$\log(x)&lt;0$</span>, so the fraction will evaluate to something with absolute value smaller than 1, and the complete expression will be negative.</p>
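<p>A quick numeric confirmation (illustrative Python; the grid stays inside <span class="math-container">$(0,1)$</span> and does not hit <span class="math-container">$x=1/e$</span> exactly):</p>

```python
import math

def g(x):
    return 0.5 * math.log(abs((1 + math.log(x)) / (1 - math.log(x))))

# sample the open interval (0, 1); 1/e ~ 0.36788 is not on this grid
xs = [i / 1000 for i in range(1, 1000)]
assert all(g(x) < 0 for x in xs)
```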
3,855,521
<p>Suppose <span class="math-container">$f:A\oplus M \to A\oplus N$</span> and <span class="math-container">$g: A \to A$</span>, where <span class="math-container">$g = \pi_A\circ f\circ l_A$</span>, are isomorphisms of <span class="math-container">$R$</span>-modules. Prove that <span class="math-container">$M\cong N$</span>.</p>
Dave L. Renfro
13,130
<p>First, let's determine the possible values for <span class="math-container">$x.$</span> Thanks to @Intelligenti pauca for pointing out an oversight in my original answer, which caused significant <em>qualitative</em> errors.</p> <p>Since <span class="math-container">$y^2$</span> is non-negative, we have:</p> <p><span class="math-container">$$ 1 \; - \; \frac{4x^{{10}^{12}}}{{\pi}^2} \; \geq \; 0 $$</span></p> <p><span class="math-container">$$ x^{{10}^{12}} \; \leq \; \frac{{\pi}^2}{4} $$</span></p> <p><span class="math-container">$$ -\left(\frac{{\pi}^2}{4}\right)^{{10}^{-12}} \; \leq \; x \; \leq \; \left(\frac{{\pi}^2}{4}\right)^{{10}^{-12}} $$</span></p> <p><span class="math-container">$$ -1.0000000000009031654105793 \ldots \; \leq \; x \; \leq \; 1.0000000000009031654105793 \ldots $$</span></p> <p>For the decimal approximation used above, see <a href="https://www.wolframalpha.com/input/?i=%28%28%28pi%29%5E2%29%2F4%29%5E%2810%5E%28-12%29%29" rel="nofollow noreferrer">this WolframAlpha computation</a>.</p> <p>Note that for <span class="math-container">$x = \pm \left(\frac{{\pi}^2}{4}\right)^{{10}^{-12}} \stackrel{\text{def}}{=} \; \pm \beta,$</span> we have <span class="math-container">$y^2 = 0,$</span> and hence <span class="math-container">$y = 0.$</span></p> <p>When <span class="math-container">$x = \pm \, 0.999999,$</span> we find that <a href="https://www.wolframalpha.com/input/?i=%284*%280.999999%29%5E%28%2810%29%5E%2812%29%29%29%2F%28%28pi%29%5E2%29" rel="nofollow noreferrer"><span class="math-container">$\;y^2 \approx 1 \; – \; {10}^{-434,000}\;$</span></a> and <a href="https://www.wolframalpha.com/input/?i=sqrt%28%284*%280.999999%29%5E%28%2810%29%5E%2812%29%29%29%2F%28%28pi%29%5E2%29%29" rel="nofollow noreferrer"><span class="math-container">$\;y \approx \pm \left(1 \; – \; {10}^{-217,000}\right)$</span></a>. 
The table below shows the result of several similar calculations.</p> <p><span class="math-container">$$\begin{array}{|c|c|c|} \hline x &amp; y^2 &amp; y \\ \hline &amp; &amp; \\ \hline 0 &amp; 1 &amp; \pm \, 1 \\ \hline \pm \, 0.9 &amp; 1 - {10}^{-45,700,000,000} &amp; \pm \left(1 - {10}^{-22,900,000,000}\right) \\ \hline \pm \left(1 - {10}^{-6}\right) \; = \;\pm \, 0.999999 &amp; 1 - {10}^{-434,000} &amp; \pm \left(1 - {10}^{-217,000}\right) \\ \hline \pm \left(1 - {10}^{-10}\right) \; = \;\pm \, 0.9999999999 &amp; 1 \; - \; 2.5\times{10}^{-44} &amp; \pm \left(1 \; - \; 1.2\times{10}^{-22}\right) \\ \hline \pm\left(1 - {10}^{-12}\right) &amp; 0.8509 \ldots &amp; \pm \, 0.9224\ldots \\ \hline \pm \left(1 - {10}^{-15}\right) &amp; 0.5951 \ldots &amp; \pm \, 0.7714\ldots \\ \hline \pm \, 1 &amp; 0.5947 \ldots &amp; \pm \, 0.7711\ldots \\ \hline \pm \, 1.000000000000903 &amp; 0.000165 \ldots &amp; \pm \, 0.012860 \ldots \\ \hline \pm \, \beta &amp; 0 &amp; 0 \\ \hline \end{array}$$</span></p> <p>Thus, using the fact that <span class="math-container">$y^2$</span> is a <strong>decreasing function</strong> of <span class="math-container">$|x|$</span> for <span class="math-container">$-\beta &lt; x &lt; \beta,$</span> it follows that the points <span class="math-container">$(x,y)$</span> on the graph form two nearly horizontal arcs and two nearly vertical arcs. 
The upper arc is concave down, has endpoints <span class="math-container">$(- \beta, 0)$</span> and <span class="math-container">$(\beta, 0),$</span> reaches a maximum height above the <span class="math-container">$x$</span>-axis at the point <span class="math-container">$(0,1),$</span> and visually it will look like a horizontal segment for <span class="math-container">$-\beta \approx -1 &lt; x &lt; 1 \approx \beta$</span> along with a pair of vertical segments, one at <span class="math-container">$x = 1 \approx \beta$</span> and the other at <span class="math-container">$x = -1 \approx -\beta.$</span> The lower arc is the reflection of the upper arc about the <span class="math-container">$x$</span>-axis.</p> <p>Visually, the upper arc will look like the upper horizontal and two vertical sides of a rectangle whose vertices are <span class="math-container">$(-1,0)$</span> and <span class="math-container">$(-1,1)$</span> and <span class="math-container">$(1,1)$</span> and <span class="math-container">$(1,0).$</span> Visually, the lower arc will look like the lower horizontal and two vertical sides of a rectangle whose vertices are <span class="math-container">$(-1,-1)$</span> and <span class="math-container">$(-1,0)$</span> and <span class="math-container">$(1,0)$</span> and <span class="math-container">$(1,-1).$</span> Together, these two arcs will visually look like the four sides of a square whose vertices are <span class="math-container">$(-1,-1)$</span> and <span class="math-container">$(-1,1)$</span> and <span class="math-container">$(1,1)$</span> and <span class="math-container">$(1,-1).$</span></p>
606,380
<blockquote> <p>If <span class="math-container">$abc=1$</span> and <span class="math-container">$a,b,c$</span> are positive real numbers, prove that <span class="math-container">$${1 \over a+b+1} + {1 \over b+c+1} + {1 \over c+a+1} \le 1\,.$$</span></p> </blockquote> <p>The whole problem is in the title. If you wanna hear what I've tried, well, I've tried multiplying both sides by 3 and then using the AM-GM inequality. <span class="math-container">$${3 \over a+b+1} \le \sqrt[3]{{1\over ab}} = \sqrt[3]{c}$$</span> By adding the inequalities I get <span class="math-container">$$ {3 \over a+b+1} + {3 \over b+c+1} + {3 \over c+a+1} \le \sqrt[3]a + \sqrt[3]b + \sqrt[3]c$$</span> And then if I prove that that is less than or equal to 3, then I've solved the problem. But the thing is, it's not less than or equal to 3 (obviously, because you can think of a situation like <span class="math-container">$a=354$</span>, <span class="math-container">$b={1\over 354}$</span> and <span class="math-container">$c=1$</span>. Then the sum is a lot bigger than 3).</p> <p>So everything that I try doesn't work. I'd like to get some ideas. Thanks.</p>
math110
58,742
<p>Here is another nice solution using the Cauchy-Schwarz inequality.</p> <p>Since $$\dfrac{1}{1+a+b}=1-\dfrac{a+b}{1+a+b},$$ the original inequality can be written as $$\sum_{cyc}\dfrac{a+b}{a+b+1}\ge2.$$ Using the Cauchy-Schwarz inequality and the AM-GM inequality, we have $$\sum_{cyc}\dfrac{a+b}{a+b+1}\ge\dfrac{(\sum\sqrt{a+b})^2}{\sum(a+b+1)}=\dfrac{2p+2\sum\sqrt{(a+b)(a+c)}}{2p+3}\ge\dfrac{2p+2\sum(a+\sqrt{bc})}{2p+3}=\dfrac{4p+2\sum\sqrt{bc}}{2p+3}\ge 2,$$ because by the AM-GM inequality $$\sqrt{bc}+\sqrt{ac}+\sqrt{ab}\ge 3\sqrt[3]{abc}=3,$$ where $p=a+b+c$.</p>
606,380
SB1729
466,737
<p>This answer only assumes that <span class="math-container">$abc\geq 1$</span>. Make the following substitution <span class="math-container">$$\sqrt[3]{a}=x,\sqrt[3]{b}=y,\sqrt[3]{c}=z$$</span> then we have <span class="math-container">$xyz\geq1$</span> and we have to prove the following inequality now</p> <p><span class="math-container">$$\frac{1}{1+x^3+y^3}+\frac{1}{1+y^3+z^3}+\frac{1}{1+z^3+x^3} \leq 1 $$</span></p> <p>Clearly <span class="math-container">$$(x^3+y^3)=(x+y)(x^2-xy+y^2)\overset{\text{AM-GM}}{\geq}(x+y)xy$$</span></p> <p>We have the following chain of inequalities</p> <p><span class="math-container">$$\frac{1}{1+x^3+y^3}+\frac{1}{1+y^3+z^3}+\frac{1}{1+z^3+x^3} \leq \frac{1}{1+xy(x+y)}+\frac{1}{1+xz(x+z)}+\frac{1}{1+yz(z+y)} \\ \leq \frac{1}{1+\frac{1}{z}(x+y)}+\frac{1}{1+\frac{1}{y}(x+z)}+\frac{1}{1+\frac{1}{x}(z+y)}=1$$</span></p>
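<p>As a numerical sanity check of the inequality (illustrative Python), sweep a grid of <span class="math-container">$(a,b)$</span> with <span class="math-container">$c=1/(ab)$</span>, including the asker's example <span class="math-container">$a=354$</span>:</p>

```python
def lhs(a, b):
    c = 1 / (a * b)  # enforce abc = 1
    return 1 / (a + b + 1) + 1 / (b + c + 1) + 1 / (c + a + 1)

grid = [0.1 * k for k in range(1, 80)] + [354, 1 / 354]
for a in grid:
    for b in grid:
        assert lhs(a, b) <= 1 + 1e-12, (a, b)

# equality at a = b = c = 1
assert abs(lhs(1, 1) - 1) < 1e-9
```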
52,361
<p>I have observed the following; please give a proof of the observation. We know that a number is divisible by 11 if and only if the difference between the sum of the odd numbered digits (1st, 3rd, 5th...) and the sum of the even numbered digits (2nd, 4th...) is divisible by 11. I have checked the analogous rule in other base systems. For example, suppose we want to know whether 27 is divisible by 3. To check divisibility by 3, take 1 less than 3 (i.e., 2) as the base and proceed as shown below: 27 = 2 X 13 + 1, then 13 = 2 X 6 + 1, then 6 = 2 X 3 + 0, then 3 = 2 X 1 + 1, and then 1 = 2 X 0 + 1. Collecting the remainders, 27 = 11011 in base 2. The difference of the sums of alternate digits is (1 + 0 + 1) - (1 + 1) = 0, so 27 is divisible by 3. What I want to say is that, to check the divisibility of a number by K, we write the number in the base-(K-1) system and then apply the divisibility rule for 11. Why does this method work? Please give me a proof. Thanks in advance. </p>
Ilmari Karonen
9,602
<p>Your question is rather confusingly written, but I assume you're asking for a proof that the base-10 divisibility rule for 11 &mdash; sum the even-numbered digits, subtract the odd-numbered digits, check (recursively) if result is divisible by 11 &mdash; can be generalized to a divisibility rule for $n+1$ in base $n$.</p> <p>For a simple proof, let $x = \sum_{k=0}^\infty x_k n^k$, where $n$ is the base and $x_0$, $x_1$, $x_2$, etc. are the base-$n$ digits of $x$. Since $n \equiv -1 \mod n+1$, we have</p> <p>$$ n^k \equiv (-1)^k = \begin{cases} \phantom{+}1 &amp; \text{if }k\text{ is even,} \\ -1 &amp; \text{if }k\text{ is odd,} \end{cases}\mod n+1 $$</p> <p>and therefore</p> <p>$$ x = \sum_{k=0}^\infty x_k n^k \equiv \sum_{k=0}^\infty x_k (-1)^k = \sum_{k\text{ even}} x_k - \sum_{k\text{ odd}} x_k \mod n+1.$$</p> <p>Thus, $x$ is divisible by $n+1$ if and only if the sum of its even base-$n$ digits minus the sum of its odd base-$n$ digits is divisible by $n+1$.</p>
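<p>The rule is easy to test exhaustively for small bases (illustrative Python, not part of the proof):</p>

```python
def digits(x, base):
    """Base-`base` digits of x, least significant first."""
    d = []
    while x:
        x, r = divmod(x, base)
        d.append(r)
    return d or [0]

def alternating_sum(x, base):
    # even-position digits count +, odd-position digits count -
    return sum(d if k % 2 == 0 else -d
               for k, d in enumerate(digits(x, base)))

for base in range(2, 12):
    for x in range(2000):
        rule = alternating_sum(x, base) % (base + 1) == 0
        assert rule == (x % (base + 1) == 0), (x, base)

# the asker's example: 27 = 11011 in base 2, and 3 | 27
assert digits(27, 2) == [1, 1, 0, 1, 1]
assert alternating_sum(27, 2) == 0
```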
239,045
<p>I'm trying to write a short program that does the following over and over again.</p> <ol> <li>Import all files in a folder</li> <li>Automatically generate names for the imported files based on source file name.</li> <li>Perform a simple operation on the file.</li> <li>Re-Export the modified files with a new name.</li> </ol> <p>For concreteness, here's an example of how I would do this manually. It is important that I be able to have a descriptive name for the files (e.g. file1 etc).</p> <pre><code>root = NotebookDirectory[]; file1 = Import[root &lt;&gt; &quot;file1.csv&quot;]; file2 = Import[root &lt;&gt; &quot;file2.csv&quot;]; </code></pre> <p>These files are just .csv files.<br /> file1 is <code>{{1, 1}, {2, 9}, {3, 3}, {4, 2}, {5, 9}, {6, 1}, {7, 6}}</code></p> <p>file2 is <code>{{1, 6}, {2, 0}, {3, 4}, {4, 7}, {5, 10}, {6, 9}, {7, 3}}</code></p> <p>Then I would modify the files. I've defined a simple function that adds a shift to each column of the table with name 'FILEname':</p> <pre><code>MODdata[FILEname_] := Table[{FILEname[[i]][[1]] + 2, FILEname[[i]][[2]] + 10}, {i, 1, Length[FILEname], 1}] file1mod = MODdata[file1]; file2mod = MODdata[file2]; </code></pre> <p>Then finally I want to export the files with a new name that indicates that they've been modified.</p> <pre><code>Export[root &lt;&gt; &quot;file1_mod.csv&quot;, file1mod] Export[root &lt;&gt; &quot;file2_mod.csv&quot;, file2mod] </code></pre> <p>Any help with this would be greatly appreciated!</p> <p>......... EDIT ............</p> <p>I tried to adapt @TumbiSapichu suggestion to my case but am still having issues. 
In the code below I replaced <code>myAnalysedData=readCurrentCSV+1</code> with my &quot;MODdata&quot; function above:</p> <pre><code>root = NotebookDirectory[]; myCurrentCSVs = FileNames[root &lt;&gt; &quot;*.csv&quot;]; i = 1; While[i &lt;= Length[myCurrentCSVs], readCurrentCSV = Import[myCurrentCSVs[[i]]]; (*Make some operation on the data,say,add 1*) myAnalysedData = Table[{readCurrentCSV[[k]][[1]] + 2, readCurrentCSV[[k]][[2]] + 10}, {k, 1, Length[readCurrentCSV], 1}]; (*Now,export the analysed data to new file*) Export[StringJoin[&quot;Analyzed&quot;, myCurrentCSVs[[i]]], myAnalysedData]; i++] </code></pre> <p>I'm getting both an 'Export: Directory xxxx does not exist' and an 'OpenWrite' Cannot open xxxx/file1.csv' error for both files. Maybe I'm not specifying the file paths correctly with my <code>root = NotebookDirectory[]</code>?</p>
Albert Retey
169
<p>This is a very common task for me and most probably many other Mathematica users as well. It typically looks somewhat like the following when I do this:</p> <pre><code>Scan[ Function[filename, Module[{data,newdata,newfilename}, Print[&quot;working on &quot;&lt;&gt;FileBaseName[filename]&lt;&gt;&quot;...&quot;]; data=Import[filename]; newdata=manipulateData[data]; newfilename=FileNameJoin[{ DirectoryName[filename], StringJoin[FileBaseName[filename],&quot;_mod.csv&quot;] }]; Export[newfilename,newdata] ] ], FileNames[&quot;*.csv&quot;,NotebookDirectory[]] ] </code></pre> <p><code>manipulateData</code> would be a function you can write, test and debug before running it on the files with ad hoc data which makes it much easier to get it right.</p> <p>The <code>Print</code> is of course optional, but for most realistic cases data manipulation will take a while and it is a simple way to see the progress and relate potential errors to the correct file. For more details about the rest of the code, I think the documentation of the functions used should be sufficient...</p> <p>An implementation of <code>manipulateData</code> (based on Rohit's suggestion) for your specific case could look like:</p> <pre><code>manipulateData[data_]:=({#1+2,#2+10}&amp;@@@data); </code></pre>
2,048,962
<blockquote> <p><strong>Question:</strong> How would you solve this sinusoidal equation:</p> <blockquote> <p>Solve $5\cos(6x)+6=9$. Assume $n$ is an integer and the answers are in degrees.</p> <ul> <li><p>$-8.86+n\cdot 60$</p></li> <li><p>$-3.54+n\cdot 60$</p></li> <li><p>$3.54+n\cdot 60$</p></li> <li><p>$8.86+n\cdot 60$</p></li> <li><p>$15.13+ n\cdot 360$</p></li> <li><p>$126.87+n\cdot 360$</p></li> </ul> </blockquote> </blockquote> <p>I'm sort of new to this. But I have tried to isolate the trigonometric parts, and I get$$\cos(6x)=\frac 35\tag{1}$$ But after this, I'm not sure what to do. Do I take the $\arccos$ of both sides? If so, what will $\arccos\frac 35$ evaluate to? I don't think it's going to be a "perfect" number such as $\dfrac \pi 3$.</p>
DanielWainfleet
254,665
<p><span class="math-container">$\cos 6x=3/5\iff |\cos 3x|=\sqrt {(1+\cos 6x)/2}=\sqrt {(8/5)/2}=\sqrt {4/5}.$</span></p> <p><span class="math-container">$|\cos 3x|=\sqrt {4/5}\iff |4\cos^3x-3\cos x|=\sqrt {4/5}\iff (4\cos^3x-3\cos x)^2=4/5.$</span></p> <p>Let <span class="math-container">$y=\cos^2 x.$</span> Then <span class="math-container">$\cos 6x=3/5\iff 16y^3-24y^2+9y-4/5=0.$</span> The cubic formula is on <a href="https://sciencing.com/how-to-use-the-quadratic-formula-13712185.html" rel="nofollow noreferrer">this website (remember to scroll down to the bottom of the page)</a>.</p>
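<p>A numeric cross-check of the half-angle reduction and the constant term (illustrative Python, not part of the original answer):</p>

```python
import math

x = math.acos(3 / 5) / 6           # one solution, so that cos(6x) = 3/5
y = math.cos(x) ** 2

# (4cos^3 x - 3cos x)^2 = cos^2(3x) = (1 + cos 6x)/2 = 4/5
assert abs((4 * math.cos(x)**3 - 3 * math.cos(x))**2 - 4 / 5) < 1e-12
assert abs(16 * y**3 - 24 * y**2 + 9 * y - 4 / 5) < 1e-12
```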
3,823,048
<p>My question needs more context than can fit into the title, so let me elaborate. Pretty much all the art textbooks I am reading on linear perspective state that the correct way of placing an ellipse is to imagine the minor axis of the ellipse as the axle of a wheel running to the opposite vanishing point. Here are examples for the various types of perspective the textbooks mention.</p> <p>One point perspective:</p> <p>In one point perspective only one set of parallel lines of a cuboid is concurrent; the other two sets are parallel (one set parallel to the horizon, the other perpendicular to it). If we plot ellipses inside a one-point square, the minor axes should all run vertically (to the non-concurrent vanishing point). We find that only ellipses whose major axes are perpendicular to the vanishing point (A`) have this property.</p> <p><a href="https://i.stack.imgur.com/L94aq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/L94aq.png" alt="One Point perspective" /></a></p> <p>Two point perspective:</p> <p>In two point perspective two of the sets of parallel lines of a cuboid are concurrent (meeting at R3 &amp; S3 in the image) and the other set is not concurrent (parallel). I tried placing an ellipse such that the perspective center points of the sides of the quadrilateral are the tangent points of the ellipse, but the minor axis does not seem concurrent with the lines running to the opposite concurrence (vanishing point). It should be stated that the two quadrilaterals and their concurrences are a mirror reflection of each other, but this is not always the case in two point perspective.</p> <p>If I adjust the size of the bounding quadrilateral I can make the minor axis concurrent with the lines running to the opposite vanishing point... 
but what if I want an ellipse placed inside a different sized quadrilateral?</p> <p><a href="https://i.stack.imgur.com/UtGuI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UtGuI.png" alt="Two point additional" /></a></p> <p>Am I missing something in my plotting of ellipses or are the authors of these textbooks mistaken?</p>
Jean Marie
305,862
<p>Minor/major axes, foci, etc. are &quot;metric objects&quot; that have a meaning only in the framework of metric geometry, and are therefore outside the scope of projective geometry, which (in particular) preserves neither lengths nor ratios of lengths (only the cross-ratio). As a consequence, it is not surprising that you find such discrepancies...</p> <p>Besides, are you aware that a figure like your first one can easily be generated by a short program in a suitable language? For example, this figure:</p> <p><a href="https://i.stack.imgur.com/Rn5DT.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Rn5DT.jpg" alt="enter image description here" /></a></p> <p><em>Fig. 1. Figure generated by a program written in Matlab (see the bottom of this answer). The vanishing points are marked by red dots. This figure shows that we cannot infer anything about the orientations of the major/minor axes of the ellipses...</em></p> <p>Here is another figure, similar to the first one you have given, with trapezoids instead of general quadrilaterals:</p> <p><a href="https://i.stack.imgur.com/dlccA.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dlccA.jpg" alt="enter image description here" /></a></p> <p><em>Fig. 2: Figure obtained by changing <span class="math-container">$A$</span> into <span class="math-container">$[5+0i,1+2i,0+0i]$</span> and changing denominator <span class="math-container">$(x+y+t)$</span> into <span class="math-container">$(y+t)$</span> in function <span class="math-container">$Z$</span>. 
Only a single vanishing point at finite distance (the other one being at infinity).</em></p> <p>Explanation: everything is based on the common (and simple) expression of all projective transformations (in particular those giving a perspective):</p> <p><span class="math-container">$$X=\dfrac{ax+by+c}{gx+hy+k}, \ \ Y=\dfrac{dx+ey+f}{gx+hy+k}, \tag{1}$$</span></p> <p>(please note the common denominator) or equivalently:</p> <p><span class="math-container">$$Z=\dfrac{Ax+By+Ct}{gx+hy+kt}\tag{2}$$</span></p> <p><span class="math-container">$$\text{with} \ Z=X+iY, A=a+id, B=b+ie, C=c+if,$$</span></p> <p>with <span class="math-container">$t=1$</span> for ordinary points and <span class="math-container">$t=0$</span> for points at infinity.</p> <p>(using complex numbers for plane geometry is very handy).</p> <p>Remark:</p> <p>Here, in the plane with coordinates <span class="math-container">$(x,y)$</span>, I have first described a chain of 6 tangent <strong>circles</strong> (+ circumscribed squares) to which I have applied a certain projective transform of the type given by (2).</p> <p>Program:</p> <blockquote> <pre><code>clear all;close all;hold on A=[2+i,1+1i,0+0i]; % could be (almost) any set of comp. numbers Z=@(x,y,t)((A(1)*x+A(2)*y+A(3)*t)./(x+y+t)); % see formula (2) s=5; % shift a=0:0.01:2*pi; % parameter for k=1:4; % ellipses as images of circles by &quot;Z transform&quot;: plot(Z(cos(a)+2*k,sin(a)+s,1)); % quadrilaterals as images of squares: plot(Z([-1,1,1,-1,-1]+2*k,[-1,-1,1,1,-1]+s,1)); end; % vanishing points in the 2 directions, with <span class="math-container">$t=0$</span>: plot(Z(1,0,0),'or','markerfacecolor','r'); plot(Z(0,1,0),'or','markerfacecolor','r'); </code></pre> </blockquote>
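The key formula (2) is easy to poke at numerically as well. This small Python snippet (my addition, reusing the same <span class="math-container">$A$</span> as the Matlab code) checks that ordinary points far out in the <span class="math-container">$x$</span>-direction are mapped close to <span class="math-container">$Z(1,0,0)$</span> — the vanishing point the Matlab code plots with <span class="math-container">$t=0$</span>:

```python
def Z(x, y, t, A=(2 + 1j, 1 + 1j, 0 + 0j)):
    # Projective map of formula (2), with t = 1 for ordinary points
    # and t = 0 for points at infinity; complex output encodes (X, Y).
    return (A[0] * x + A[1] * y + A[2] * t) / (x + y + t)

vanish_x = Z(1, 0, 0)        # image of the point at infinity in the x-direction
far = Z(1e9, 3.0, 1.0)       # a very distant ordinary point in that direction
```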
2,737,830
<p>For some fixed b, r > 0, place b blue balls and r red balls in a box. Select balls from the box, one after the other, without replacement. Prove that the probability that the kth ball taken from the box is blue is the same for 1 ≤ k ≤ b.</p> <p>Does this just come from the fact that if you do not know what you are pulling out, the probability remains the same? Like if I had a deck of cards face down in a line, the probability that any kth card is an Ace is 4/52. How do you create a proof on this?</p>
Christian Blatter
1,303
<p>For all $k\in [b+r]$ the probability that the $k^{\rm th}$ ball is blue is the same, namely ${b\over b+r}$.</p> <p>Imagine that the blue balls are secretly numbered from $1$ to $b$ and the red balls from $1$ to $r$. The devil arranges the $b+r$ now distinguishable balls in a random linear order in one of $(b+r)!$ ways. Since each of the balls is equally likely to land on place $k$ of this order, the probability that the ball at place $k$ is blue amounts to ${b\over b+r}$.</p>
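The "secretly numbered balls" argument can be confirmed by brute force for small $b$ and $r$ (Python, my addition): enumerate all orderings of the distinguishable balls and count how often position $k$ holds a blue one.

```python
from fractions import Fraction
from itertools import permutations

def prob_kth_blue(b, r, k):
    # Exact probability that the k-th draw is blue, by enumerating
    # all (b+r)! orderings of distinguishable balls.
    balls = [True] * b + [False] * r   # blue = True, red = False
    favorable = 0
    total = 0
    for order in permutations(range(b + r)):
        total += 1
        if balls[order[k - 1]]:
            favorable += 1
    return Fraction(favorable, total)

# Every position k should give the same answer b/(b+r):
probs = [prob_kth_blue(3, 2, k) for k in range(1, 6)]
```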
2,381,496
<p>I am trying to write the explicit formula of all solutions of a linear system in the form :</p> <p>$Ax=b$ where $A$ is an $m \times n$ matrix ($n$ different from $m$), $x$ is $n$-dimensional vector and $b$ is $m$-dimensional vector.</p>
mathfan27543
279,205
<p>Hint: isolate for c in both equations: </p> <p>$c=\frac{ab}4 - a - b$ and $c = \sqrt{a^2+b^2} $. Now isolate for a or b and then substitute back into your equations.</p>
949,853
<p>If $\alpha: [a,b] \rightarrow \mathbb{R}$ is an increasing function, we can define the Riemann-Stieltjes integral $$\int_a^b f d \alpha$$ Does the function $\langle f,g\rangle = \int_a^b fg d\alpha$ define an inner product on the $\mathbb{R}$-space of all Riemann-Stieltjes integrable functions on $[a,b]$ (with respect to $\alpha$)? It seems to me that the condition $\langle f,f\rangle = 0$ implies $f= 0$ is false. For example if we take the Riemann integral and take $f(x)$ to be $0$ everywhere except $1$ at $(b-a)/2$, then $\langle f,f\rangle = \int_a^b f(x)^2dx = 0$, but $f \neq 0$. </p> <p>If it is not an inner product, can we still get something like the Cauchy-Schwarz inequality, i.e. $$\Biggl[\int_a^b |fg| d \alpha\Biggr]^2 \leq \Biggl(\int_a^b |f|^2 d \alpha\Biggr) \Biggl(\int_a^b |g|^2 d \alpha\Biggr)\,?$$</p>
DanZimm
37,872
<p>I know this has been answered in the comments but you might be interested in reading up about exactly how this is done over <a href="http://en.wikipedia.org/wiki/Kolmogorov_space#The_Kolmogorov_quotient" rel="nofollow">here</a>. Note that in that article they talk about functions being equal <em>almost everywhere</em>, which, from the viewpoint of the Riemann-Stieltjes integral, is exactly the type of issue you brought up: the functions are nearly identical (in fact they are identical <em>almost everywhere</em>).</p> <p>Also, here is a quick overview of the process outlined by that article:</p> <ul> <li>Define a function $\lVert \cdot \rVert$ on the set of $\alpha$ square integrable functions by $$ \lVert f \rVert = \sqrt{\int \lvert f \rvert^2 \, \mathrm{d} \alpha} $$</li> <li>Note that the above function is in fact a seminorm, that the space is complete under it and that the seminorm abides by the parallelogram equality: $$ \lVert f \rVert^2 + \lVert g \rVert^2 = \lVert f + g \rVert^2 + \lVert f - g \rVert^2 $$</li> <li>We can now "mod" out by all $0$-functions by modding out by the kernel of $\lVert \cdot \rVert$ (i.e. identify $f$ and $g$ whenever $\lVert f - g \rVert = 0$).</li> <li>Our modded out space can be checked to be a complete normed vector space that satisfies the parallelogram equality, so we can define $$ \langle f, g \rangle = \frac{\lVert f + g \rVert^2 - \lVert f - g \rVert^2}{4} $$ in order to make our space a Hilbert space (and thus we have our inner product).</li> </ul>
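The construction can be mimicked with a discrete stand-in for the Riemann-Stieltjes integral — a weighted sum $\sum_i w_i f_i g_i$ with nonnegative weights $w_i$ playing the role of $d\alpha$ (my own illustrative choice, not from the article). The polarization formula in the last bullet then recovers the inner product from the norm, and a zero weight produces exactly the kind of nonzero function with zero seminorm that forces the quotient:

```python
def norm_sq(f, w):
    # ||f||^2 = sum_i w_i f_i^2, a discrete stand-in for \int |f|^2 d(alpha)
    return sum(wi * fi * fi for wi, fi in zip(w, f))

def inner_direct(f, g, w):
    return sum(wi * fi * gi for wi, fi, gi in zip(w, f, g))

def inner_polarized(f, g, w):
    # <f, g> = (||f+g||^2 - ||f-g||^2) / 4
    plus = [a + b for a, b in zip(f, g)]
    minus = [a - b for a, b in zip(f, g)]
    return (norm_sq(plus, w) - norm_sq(minus, w)) / 4

w = [0.5, 0.0, 2.0, 1.0]       # the zero weight mimics a point alpha ignores
f = [1.0, -2.0, 3.0, 0.5]
g = [4.0, 1.0, -1.0, 2.0]
spike = [0.0, 7.0, 0.0, 0.0]   # nonzero "function" with zero seminorm
```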
1,839,458
<blockquote> <p>For any $n\ge2, n \in \mathbb N$ prove that $$\sqrt{\frac{1}n}-\sqrt{\frac{2}n}+\sqrt{\frac{3}n}-\cdots+\sqrt{\frac{4n-3}n}-\sqrt{\frac{4n-2}n}+\sqrt{\frac{4n-1}n}&gt;1$$</p> </blockquote> <h3>My work so far:</h3> <p>1) $$\sqrt{n+1}-\sqrt{n}&gt;\frac1{2\sqrt{n+0.5}}$$</p> <p>2) $$\sqrt{n+1}-\sqrt{n}&lt;\frac1{2\sqrt{n+0.375}}$$</p>
Nate River
279,404
<p>Another proof is this:</p> <p>Note that $$ 2 = \sqrt{\frac{4n}{n}} = \frac{1}{\sqrt{n}}\sum_{j=0}^{4n-1}\left(\sqrt{j+1}-\sqrt{j}\right) $$ where the RHS can be expressed as $$ \frac{1}{\sqrt{n}}\left(\sum_{j=1}^{2n}(\sqrt{2j}-\sqrt{2j-1})+\sum_{j=1}^{2n}(\sqrt{2j-1}-\sqrt{2j-2})\right) $$ Using that the function $f:[0,+\infty)\to \mathbb{R}$ given by $f(x)=\sqrt{x+1}-\sqrt{x}$ is strictly decreasing, we deduce, for all $j\in \{1,...,2n\}$, $$ \sqrt{2j-1}-\sqrt{2j-2}&gt;\sqrt{2j}-\sqrt{2j-1} $$ hence $$ 2\sum_{j=1}^{2n}\left(\sqrt{\frac{2j-1}{n}}-\sqrt{\frac{2j-2}{n}}\right)&gt;2, $$ which is the desired inequality after dividing by $2$.</p>
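For reassurance, the alternating sum can also be evaluated directly; this small check (my addition, in Python) confirms it exceeds $1$ for a range of $n$:

```python
import math

def alternating_sum(n):
    # sqrt(1/n) - sqrt(2/n) + sqrt(3/n) - ... + sqrt((4n-1)/n)
    return sum((-1) ** (k + 1) * math.sqrt(k / n) for k in range(1, 4 * n))

values = {n: alternating_sum(n) for n in range(2, 60)}
```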
3,188,966
<p>Find the number of real solutions of <span class="math-container">$x^7 + 2x^5 + 3x^3 + 4x = 2018$</span>?</p> <p>What is the general approach to solving this kind of questions? I am interested in the thought process. </p> <hr> <p>Few of my thoughts after seeing this question: since <span class="math-container">$x$</span> has all odd powers so, it can not have any negative solution. 2018 is semiprime; not much progress here. We can sketch the curve but graphing a seven order polynomial is difficult. </p>
Oscar Lanzi
248,217
<p>Render</p> <p><span class="math-container">$x^7+0x^6+2x^5+0x^4+3x^3+0x^2+4x-2018=0$</span></p> <p><a href="https://www.mathplanet.com/education/algebra-2/polynomial-functions/descartes-rule-of-sign" rel="nofollow noreferrer">Descartes' Rule of Signs</a> forces exactly one positive root and no negative roots. That's it!</p>
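Both halves of the argument — exactly one sign change in the coefficients of <span class="math-container">$p(x)$</span>, none in <span class="math-container">$p(-x)$</span> — can be checked mechanically, and the single real root located by bisection (Python sketch, my addition):

```python
def p(x):
    return x**7 + 2 * x**5 + 3 * x**3 + 4 * x - 2018

def sign_changes(coeffs):
    # Descartes' rule of signs: count sign changes among nonzero coefficients
    signs = [c for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a * b < 0)

changes_pos = sign_changes([1, 2, 3, 4, -2018])      # coefficients of p(x)
changes_neg = sign_changes([-1, -2, -3, -4, -2018])  # coefficients of p(-x)

# Locate the unique real root by bisection on [0, 10]:
lo, hi = 0.0, 10.0                                   # p(0) < 0 < p(10)
for _ in range(100):
    mid = (lo + hi) / 2
    if p(mid) > 0:
        hi = mid
    else:
        lo = mid
root = (lo + hi) / 2
```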
3,161,584
<p>Consider the following matrix</p> <p><span class="math-container">$A=\begin{bmatrix}0&amp;0&amp;0&amp;0\\a&amp;0&amp;0&amp;0\\0&amp;b&amp;0&amp;0\\0&amp;0&amp;c&amp;0\end{bmatrix}$</span></p> <p>Which <span class="math-container">$a,b,c$</span> are real numbers</p> <p>What conditions are required for <span class="math-container">$a,b,c$</span> such that <span class="math-container">$\mathbb{R}^4$</span> has a basis made of A eigenvectors?</p>
jmerry
619,637
<p>All right, let's clear up some things about the configuration.</p> <p>First, any planar graph with all faces quadrilaterals (or even polygons) must be bipartite. How do those parts split? If there are six or more vertices in one of the parts, then, since each of those vertices has degree at least <span class="math-container">$3$</span> and each edge has only one of its vertices in the part, that's at least <span class="math-container">$18$</span> edges. This contradicts our calculation of <span class="math-container">$16$</span> edges, so the two parts must have five vertices each.</p> <p>Now, count degrees at the vertices in each part. With <span class="math-container">$16$</span> total edges over five vertices, that must be four vertices of degree <span class="math-container">$3$</span> and one of degree <span class="math-container">$4$</span>. The same is true of the other part.</p> <p>Now, this splits into two cases for the configuration. Either the two vertices of degree <span class="math-container">$4$</span> are directly connected by an edge, or they're not. Let's see if we can build a planar graph in the former case:</p> <p><a href="https://i.stack.imgur.com/Ochdo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ochdo.png" alt="Grid 1"></a></p> <p>We have several edges left to fill in, but the six faces around the two degree-<span class="math-container">$4$</span> vertices give us this much. Number the vertices and label as shown; the vertices of one part are shown in red, and the other part in blue.</p> <p>Next, what can vertex blue #5 (<span class="math-container">$U_5$</span>) connect to? It has three edges going to red vertices #2,3,4,5 (<span class="math-container">$R_2$</span>,<span class="math-container">$R_3$</span>,<span class="math-container">$R_4$</span>,<span class="math-container">$R_5$</span>). 
In particular, that means it must connect to at least one of <span class="math-container">$R_2$</span> or <span class="math-container">$R_3$</span>. WLOG, that's <span class="math-container">$R_2$</span>.</p> <p><a href="https://i.stack.imgur.com/CrkMV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CrkMV.png" alt="Grid 2"></a></p> <p>Then, there's a quadrilateral with consecutive vertices <span class="math-container">$U_5$</span>, <span class="math-container">$R_2$</span>, <span class="math-container">$U_1$</span>. This must be face <span class="math-container">$C$</span>, and the fourth vertex is <span class="math-container">$R_4$</span>. That means it closes with an edge between <span class="math-container">$U_5$</span> and <span class="math-container">$R_4$</span>, our second edge drawn in.</p> <p>Now, we have some edges in the picture that only have one of their connected faces indicated. First, the triad U2 to R2 to U5 has to be part of a face. It can't be <span class="math-container">$A$</span>, <span class="math-container">$B$</span>, or <span class="math-container">$C$</span> since those faces have all of their vertices known. It can't be <span class="math-container">$D$</span>, <span class="math-container">$E$</span>, or <span class="math-container">$F$</span> since those faces have three known vertices each that agree with the list of three we have in at most one place. Therefore, it must be a new face; call it <span class="math-container">$G$</span>.</p> <p>Second, the edge from <span class="math-container">$R_3$</span> to <span class="math-container">$U_3$</span> is part of face <span class="math-container">$B$</span>. What's its second face? It can't be <span class="math-container">$A$</span> or <span class="math-container">$C$</span> because those don't have either <span class="math-container">$R_3$</span> or <span class="math-container">$U_3$</span> as a vertex. 
It can't be <span class="math-container">$E$</span> or <span class="math-container">$G$</span> since those have three known vertices and can't include both <span class="math-container">$R_3$</span> and <span class="math-container">$U_3$</span>. It can't be <span class="math-container">$D$</span> or <span class="math-container">$F$</span> since two faces of the polyhedron can't share adjacent edges. All of the old faces are ruled out, so it has to be a new face <span class="math-container">$H$</span>.</p> <p>Third, the edge from <span class="math-container">$R_4$</span> to <span class="math-container">$U_5$</span> is part of face <span class="math-container">$C$</span>. What's its second face? Not <span class="math-container">$A$</span>, <span class="math-container">$B$</span>, <span class="math-container">$E$</span>, or <span class="math-container">$F$</span> because there aren't enough free vertices. Not <span class="math-container">$D$</span> or <span class="math-container">$G$</span> because it can't share two adjacent edges with <span class="math-container">$C$</span>. There are only eight faces, so that leaves us with <span class="math-container">$H$</span> as the only option.</p> <p><a href="https://i.stack.imgur.com/djpYQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/djpYQ.png" alt="Grid 3"></a></p> <p>How can <span class="math-container">$R_3$</span>, <span class="math-container">$U_3$</span>, <span class="math-container">$R_4$</span>, and <span class="math-container">$U_5$</span> be part of the same face <span class="math-container">$H$</span>? We must have edges from <span class="math-container">$R_3$</span> to <span class="math-container">$U_5$</span> and <span class="math-container">$R_4$</span> to <span class="math-container">$U_3$</span>.</p> <p>But that doesn't work. There's no way to connect those edges while remaining planar. 
Merging vertices <span class="math-container">$R_1$</span> and <span class="math-container">$R_2$</span> through their connection at <span class="math-container">$U_2$</span>, that's the equivalent of a <span class="math-container">$K_{3,3}$</span> between <span class="math-container">$(U_1,U_3,U_5)$</span> and <span class="math-container">$(R_3,R_4,R_{1,2})$</span>.</p> <p>This configuration has failed, leaving only one possibility - the two degree-4 vertices aren't connected by an edge, and indeed don't share a face. That configuration works as a polyhedron. A flattened version:</p> <p><a href="https://i.stack.imgur.com/xUJ36.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xUJ36.png" alt="Octagonal caps"></a></p> <p>I'm maintaining the color choices; the two caps will meet at corresponding colors when we stack them and open it up.</p> <p>And that's the configuration you'll need to work with in figuring out how many different edge lengths there can be.</p>
402,310
<p>Vector math is something I find very interesting. However, we have never been told the link between vectors in physics (usually represented as arrows, e.g. a force vector) and in algebra (e.g. represented like a column matrix). It was really never explained well in classes.</p> <p>Here are the things I can't wrap my head around:</p> <ul> <li>How can a vector (starting from the algebraic definition) be represented as an arrow? Is it correct to assume that a vector (in a 2-dimensional space) $v = [1,1]$ could be represented as an arrow from the origin $[0,0]$ to the point $[1,1]$?</li> <li>If the above assumption is correct, what does it mean in the physics representation to normalize a vector?</li> <li>If I have a vector $[1,1]$, would the vector $[-1,1]$ be orthogonal to that first vector? (Because if you draw the arrows they are perpendicular).</li> <li>How can one translate an object along a vector? Is that simply scalar addition?</li> </ul> <p>These questions probably sound really odd, but they come from a lack of decent explanation in both physics and algebra.</p>
Christopher A. Wong
22,059
<p>Actually, I think your intuition is very good here. An "arrow" has two pieces of information: its length and direction. When you have a vector $v = [a, b]$, it's describing an arrow that moves $a$ in the horizontal direction and $b$ in the vertical direction. Since this defines a right triangle, this can be converted into length and direction information.</p> <p>Normalizing a vector simply means adjusting the length of a vector to be $1$ without changing its direction.</p> <p>To address your orthogonality question, yes, as you mentioned, $[-1, 1]$ and $[1,1]$ are indeed orthogonal, because of the geometric reasoning you correctly described. However, they are only orthogonal with respect to the <em>standard inner product</em>. This is simply the dot product, which has a lot of geometric connections. However, in mathematics, we can define many other types of inner products that don't have such nice geometric interpretations.</p>
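These points — length, normalization, and orthogonality via the dot product — take only a few lines to demonstrate, along with the translation asked about in the last bullet, which is componentwise vector addition, not scalar addition (Python sketch, my addition):

```python
import math

def norm(v):
    return math.sqrt(sum(c * c for c in v))

def normalize(v):
    # Rescale to length 1; the direction (ratio of components) is unchanged
    n = norm(v)
    return [c / n for c in v]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def translate(point, v):
    # Moving a point along a vector is componentwise addition
    return [p + c for p, c in zip(point, v)]

u = [1.0, 1.0]
w = [-1.0, 1.0]
u_hat = normalize(u)
moved = translate([2.0, 3.0], u)
```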
2,577,634
<p>My goal is to compute $$I=\int_{0}^{+∞}\frac{\cos{ax}}{1+x^2}dx$$ where $a&gt;0$. </p> <p>$$I=\frac{1}{2}\int_{-∞}^{+∞}\frac{\cos{ax}}{1+x^2}dx=\frac{1}{2}\operatorname{Re}\bigg(\int_{-∞}^{+∞}\frac{e^{iax}}{1+x^2}dx\bigg)$$ </p> <p>Let $f(z)=\frac{e^{iaz}}{1+z^2}$.</p> <p>By the Residue Theorem, $\int_{-R}^{R}\frac{e^{iax}}{1+x^2}dx+\int_{\gamma_R}\frac{e^{iaz}}{1+z^2}dz=2\pi i\operatorname{Res}(f,i)=2\pi i\cdot\frac{e^{-a}}{2i}=\pi e^{-a}$, where $\gamma_R$ denotes the upper semi-circle centered at $O$ with radius $R$.</p> <p>As $R\to+∞$,</p> <p>$\int_{-R}^{R}\frac{e^{iax}}{1+x^2}dx \to \int_{-∞}^{+∞}\frac{e^{iax}}{1+x^2}dx.$</p> <p>Now, I am stuck on how to prove $\int_{\gamma_R}\frac{e^{iaz}}{1+z^2}dz$ goes to $0$ as $R$ goes to infinity.</p> <p>Anyone know how to do it? Many thanks.</p>
Rebellos
335,894
<p>Recall Jordan's Lemma and your solution is done:</p> <blockquote> <p>If the only singularities of $F(z)$ are poles, then $$\lim_{R\to\infty}\int_{\gamma_R} e^{inz}F(z)\,dz=0,$$ provided that $n&gt;0$ and $|F(z)|\to 0$ as $R\to \infty$.</p> </blockquote> <p>Now, you have :</p> <p>$$\int_{\gamma_R}\frac{e^{iaz}}{1+z^2}dz$$</p> <p>where : </p> <p>$$F(z) = \frac{1}{1+z^2}$$</p> <p>Indeed, the only singularities of $F(z)$ are the poles $z=\pm i$, and this function satisfies the hypotheses of <strong><em>Jordan's Lemma</em></strong> (easy to check!), so it will be : </p> <p>$$\lim_{R\to\infty}\int_{\gamma_R}\frac{e^{iaz}}{1+z^2}dz=0$$</p> <p>I have answered a similar example in detail, which you can check <a href="https://math.stackexchange.com/questions/2559997/how-to-finish-calculating-of-int-0-infty-fracdzz61/2560021#2560021">here !</a></p>
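With the semicircle contribution gone, the contour argument gives $\int_{-\infty}^{\infty}\frac{e^{iax}}{1+x^2}dx=\pi e^{-a}$ and hence $I=\frac{\pi}{2}e^{-a}$. A brute-force numerical check of this closed form for $a=1$ (my addition; plain trapezoid rule, with the discarded tail beyond the cutoff bounded via integration by parts):

```python
import math

def I_numeric(a, upper=200.0, steps=400_000):
    # Composite trapezoid rule for the truncated integral
    # int_0^upper cos(a x)/(1 + x^2) dx.  Integration by parts
    # bounds the discarded tail by roughly 2/(1 + upper^2).
    h = upper / steps
    total = 0.5 * (1.0 + math.cos(a * upper) / (1.0 + upper * upper))
    for k in range(1, steps):
        x = k * h
        total += math.cos(a * x) / (1.0 + x * x)
    return h * total

approx = I_numeric(1.0)
exact = (math.pi / 2) * math.exp(-1.0)   # claimed closed form at a = 1
```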
3,779,736
<p>Let <span class="math-container">$(X_1, \ldots, X_n) \sim \operatorname{Unif}(0,b), b&gt;0$</span>. Find <span class="math-container">$E\left[\sum \frac{X_i }{X_{(n)}}\right]$</span> where <span class="math-container">$X_{(n)} = \max_i X_i$</span>.</p> <p>It was suggested to use Basu's Theorem which I am unfamiliar with.</p> <p>There are finitely many terms so we can rearrange using order statistics and write it as:</p> <p><span class="math-container">\begin{align} E\left[\sum_{i = 1}^n \frac{X_i }{X_{(n)}}\right] &amp; = E\left[\frac{X_{(1)}}{X_{(n)}}\right] + E\left[\frac{X_{(2)}}{X_{(n)}}\right]+ \cdots +E[1] \\[8pt] &amp; = (n-1) E\left[\frac{X_i}{X_{(n)}}\right] + 1 \end{align}</span></p> <p>If this is correct then I will need to calculate a conditional expectation to calculate this so I wanted to see if this is even correct before moving forward. Or if someone familiar with Basu's theorem can explain how I apply that here.</p>
Michael Hardy
11,667
<p>Basu's theorem says that a complete sufficient statistic is independent of an ancillary statistic.</p> <p>To say that <span class="math-container">$X_{(n)} = \max\{X_1,\ldots,X_n\}$</span> is sufficient for this family of distributions (parametrized by <span class="math-container">$b$</span>) means that the conditional distribution of <span class="math-container">$X_1,\ldots,X_n$</span> given <span class="math-container">$X_{(n)}$</span> does not depend on the value of <span class="math-container">$b.$</span></p> <p>To say that <span class="math-container">$X_{(n)}$</span> is complete means there is no nonzero function <span class="math-container">$g$</span> (not depending on <span class="math-container">$b$</span>) for which <span class="math-container">$\operatorname E(g(X_{(n)}))$</span> remains equal to <span class="math-container">$0$</span> as <span class="math-container">$b$</span> changes.</p> <p>To say that <span class="math-container">$\dfrac{X_i}{X_{(n)}}$</span> is an ancillary statistic means that its probability distribution remains the same as <span class="math-container">$b$</span> changes, even though <span class="math-container">$\dfrac{X_i}{X_{(n)}}$</span> depends on <span class="math-container">$(X_1,\ldots,X_n,b)$</span> only through <span class="math-container">$(X_1,\ldots,X_n).$</span></p> <p>If all of that is shown, then what Basu's theorem tells you is that <span class="math-container">$\dfrac{X_i}{X_{(n)}}$</span> and <span class="math-container">$X_{(n)}$</span> are independent.</p> <p>That is what makes it possible to conclude that the expected value of their product is the product of their expected values, and then the rest is routine.</p>
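Carrying the independence argument through (my own computation, not stated in the answer above): since <span class="math-container">$X_i = \frac{X_i}{X_{(n)}}\cdot X_{(n)}$</span> with the two factors independent, <span class="math-container">$E\big[\frac{X_i}{X_{(n)}}\big] = \frac{E[X_i]}{E[X_{(n)}]} = \frac{b/2}{nb/(n+1)} = \frac{n+1}{2n}$</span>, so the full sum has expectation <span class="math-container">$n\cdot\frac{n+1}{2n} = \frac{n+1}{2}$</span>, independent of <span class="math-container">$b$</span>. A Monte Carlo sanity check in Python:

```python
import random

def mc_ratio_sum(n, b=7.0, trials=100_000, seed=1):
    # Estimate E[sum_i X_i / max_i X_i] for X_i ~ Unif(0, b) i.i.d.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        xs = [rng.uniform(0.0, b) for _ in range(n)]
        m = max(xs)
        total += sum(x / m for x in xs)
    return total / trials

est2 = mc_ratio_sum(2)           # prediction: (2+1)/2 = 1.5
est5 = mc_ratio_sum(5)           # prediction: (5+1)/2 = 3.0
est3 = mc_ratio_sum(3, b=1.0)    # prediction: 2.0, regardless of b
```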
23,264
<p>Let's say that I have a variable $j$ defined by the following formula: $$j=\frac{n(n+2) + m}{2}$$ where $n$ and $m$ are two parameters, both integers, that satisfy the following conditions:</p> <ol> <li>$n\in \left[0,1,2,...\right]$</li> <li>$m\in\left[-n,n\right]$</li> <li>$\left(n-m\right)$ even</li> </ol> <p>We know that, subject to the above conditions, there exists a bijection between $j$ and the tuples $\left(m,n\right)$. </p> <ul> <li>Is there a way of expressing $m$ and $n$ as (hopefully compact) explicit functions of (only) $j$? </li> <li>How would one proceed to find such a formula?</li> </ul>
joriki
6,622
<p>Since $2j$ depends quadratically on $n$, you basically need the square root of $2j$ to get $n$. To get the limits exactly right, I'd try $n=\lfloor \sqrt{2j+a}+b\rfloor$ and then determine $a$ and $b$ so that everything comes out right. Once you have $n$, it's easy to get $m$.</p> <p>Update: You can systematically determine $a$ and $b$ by looking at what happens at the boundaries. You need</p> <p>$$\lfloor \sqrt{n(n+2)-n+a}+b\rfloor=n$$</p> <p>but</p> <p>$$\lfloor \sqrt{(n-1)((n-1)+2)+(n-1)+a}+b\rfloor=n-1\;.$$</p> <p>The difference between the two arguments of the square root is $2$. To avoid rounding problems you can choose $a$ and $b$ such that the argument of the floor function is the integer $n$ when the argument of the square root is in the middle between the two:</p> <p>$$\sqrt{n^2+n-1 + a} + b = n\;,$$</p> <p>$$n^2+n-1+a=(n-b)^2\;,$$</p> <p>$$n-1+a=-2bn+b^2\;.$$</p> <p>To make this true for all $n$, you can equate the constant terms and the coefficients of $n$:</p> <p>$$1=-2b\;,$$ $$-1+a=b^2\;.$$</p> <p>That yields $a=\frac{5}{4}$ and $b=-\frac{1}{2}$, so the formula for $n$ is</p> <p>$$n=\lfloor \sqrt{2j+\frac{5}{4}}-\frac{1}{2}\rfloor\;.$$</p>
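The derived formula can be verified exhaustively in a few lines of Python (my addition):

```python
import math

def j_of(n, m):
    # Forward map j = (n(n+2) + m)/2; the parity condition makes this an integer
    return (n * (n + 2) + m) // 2

def nm_of(j):
    # Inverse map using n = floor(sqrt(2j + 5/4) - 1/2), then m follows
    n = math.floor(math.sqrt(2 * j + 1.25) - 0.5)
    m = 2 * j - n * (n + 2)
    return n, m

pairs = [(n, m) for n in range(40) for m in range(-n, n + 1, 2)]
all_j = sorted(j_of(n, m) for n, m in pairs)
roundtrip_ok = all(nm_of(j_of(n, m)) == (n, m) for n, m in pairs)
```

Sorting the forward images also confirms the bijection onto an initial segment of the nonnegative integers.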
23,264
<p>Let's say that I have a variable $j$ defined by the following formula: $$j=\frac{n(n+2) + m}{2}$$ where $n$ and $m$ are two parameters, both integers, that satisfy the following conditions:</p> <ol> <li>$n\in \left[0,1,2,...\right]$</li> <li>$m\in\left[-n,n\right]$</li> <li>$\left(n-m\right)$ even</li> </ol> <p>We know that, subject to the above conditions, there exists a bijection between $j$ and the tuples $\left(m,n\right)$. </p> <ul> <li>Is there a way of expressing $m$ and $n$ as (hopefully compact) explicit functions of (only) $j$? </li> <li>How would one proceed to find such a formula?</li> </ul>
Arturo Magidin
742
<p>You can take $2j = n(n+2)+m = (n+1)^2 + (m-1)$.</p> <p>Consider $k^2$ and $(k+1)^2$. The difference between them is $2k+1$; if you allow yourself to add at most $k$ to $k^2$, you "get" all the way to $k^2+k$. If you allow yourself to subtract at most $k$ from $(k+1)^2$, you can "get down" all the way to $k^2+k+1$. So that is why the map $(m,n)\mapsto n(n+2)+m$ is bijective under the given conditions (the parity condition is just to ensure that this number is even). </p> <p>So look at $k = \lfloor \sqrt{2j}\rfloor$. This is either $n+1$ or $n$, depending on whether $m$ is positive or negative, and once you determine if $k=n$ or if $k=n+1$, you can compute $m$ from $m = 2j - n(n+2)$. </p> <p>Well, since $k=n$ or $k=n+1$, either $2j-k(k+2)$ will lie between $-k$ and $k$ or not. If it does, then $k=n$ and $m=2j-k(k+2)$. If it does not lie between $-k$ and $k$, then $k=n+1$, so $m = 2j-(k-1)(k+1)$.</p>
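This branching inversion is also easy to verify mechanically (Python sketch, my addition; `math.isqrt` gives the exact integer floor of the square root):

```python
import math

def nm_of(j):
    # k = floor(sqrt(2j)) is either n or n+1; test the candidate m
    k = math.isqrt(2 * j)
    m = 2 * j - k * (k + 2)
    if -k <= m <= k:
        return k, m
    n = k - 1
    return n, 2 * j - n * (n + 2)

ok = all(
    nm_of((n * (n + 2) + m) // 2) == (n, m)
    for n in range(60)
    for m in range(-n, n + 1, 2)
)
```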
4,218,944
<p>A very simple question. Saw this formula many places earlier, but how do we prove it? <span class="math-container">$$ax^2+bx+c=a(x-r_1)(x-r_2)$$</span> Where <span class="math-container">$r_1$</span> and <span class="math-container">$r_2$</span> are the roots of the quadratic.</p> <p>P.S.- I have seen a &quot;proof&quot; using Vieta's formulas, but Vieta's formula itself requires this fact in its proof.</p>
julio vasquez
956,404
<p>I don't know if that's what you want, but try factoring <span class="math-container">$ax^2+bx+c$</span> and you will have expressions for the roots.</p>
1,589,577
<p>Try to find the solution of the functional equation $$f(x+y)=\frac{f(x)+f(y)}{1-4f(x)f(y)}$$ with $f'(1)=1/2$.</p>
Yiorgos S. Smyrlis
57,021
<p>I am solving it with the assumption that $f'(0)=1/2$ and $f$ is continuously differentiable in an open neighbourhood containing $0$.</p> <p>First, we observe that $f(0)=0$. </p> <p>Then, $$ \frac{f(x+h)-f(x)}{h}=\frac{\frac{f(x)+f(h)}{1-4f(x)f(h)}-f(x)}{h} =\frac{f(x)+f(h)-f(x)+4f(h)f^2(x)}{h\big(1-4f(h)f(x)\big)} =\frac{f(h)\big(1+4f^2(x)\big)}{h\big(1-4f(h)f(x)\big)}\to f'(0)\big(1+4f^2(x)\big)=\frac{1}{2}\big(1+4f^2(x)\big). $$ Hence, $f$ satisfies the IVP $$ f'(x)=\frac{1}{2}\big(1+4f^2(x)\big), \quad f(0)=0. $$ Hence $$ \frac{d}{dx}\arctan\big(2f(x)\big)=\frac{2f'(x)}{1+4f^2(x)}=1. $$ Thus $\arctan\big(2f(x)\big)=x+c$, for some $c\in\mathbb R$, and thus $f(x)=\frac{1}{2}\tan (x+c)$. The initial data $f(0)=0$ forces $c=0$. Thus $f(x)=\frac{1}{2}\tan x$.</p> <p>Note. If $f'(1)=1/2$ is not a typo in the OP, then we obtain instead that $$ f(x)=\frac{1}{2}\tan\big(2f'(0)x\big), $$ and $$ \frac{1}{2}=f'(1)=\frac{1}{2}\cdot 2f'(0)\cdot\frac{1}{\cos^2\big(2f'(0)\big)}, $$ and thus $2f'(0)= \cos^2\big(2f'(0)\big)$. Now the equation $x=\cos^2(x)$ has a unique solution $s_0\approx 0.6417$ and thus $$ f(x)=\frac{1}{2}\tan\big(s_0x\big). $$</p>
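A quick numerical sanity check (my addition) that $f(x)=\frac{1}{2}\tan x$ really satisfies the functional equation — it is just the tangent addition formula in disguise — and has $f'(0)=1/2$:

```python
import math

def f(x):
    return 0.5 * math.tan(x)

def rhs(x, y):
    # Right-hand side of the functional equation
    return (f(x) + f(y)) / (1 - 4 * f(x) * f(y))

# Sample pairs chosen away from the poles of tan:
pairs = [(0.3, 0.4), (-0.5, 0.2), (0.1, -0.7), (0.25, 0.25)]
errors = [abs(f(x + y) - rhs(x, y)) for x, y in pairs]

h = 1e-6
deriv0 = (f(h) - f(-h)) / (2 * h)   # central difference for f'(0)
```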
3,623,856
<p>Use the definition of a Cauchy sequence to prove that the sequence defined by <span class="math-container">$x_n = \left (\frac{3}{2}\right )^n$</span> is a Cauchy sequence in <span class="math-container">$\mathbb{R}$</span>. </p>
Mnifldz
210,719
<p>This sequence is not Cauchy. Since <span class="math-container">$\mathbb{R}$</span> is complete, then every Cauchy sequence in <span class="math-container">$\mathbb{R}$</span> converges, but you can clearly see that <span class="math-container">$\left (\frac{3}{2}\right )^n \to \infty$</span>.</p>
4,172,955
<p>Find all <span class="math-container">$|z|=1$</span> such that <span class="math-container">$|z^4+4| = \sqrt{5}.$</span></p> <hr /> <p>I've tried doing <span class="math-container">$$|z^4+4|^2 = 5 \implies (z^4+4)(\overline{z^4}+4) = 5 \implies |z|^8 + 4(z^4+\overline{z^4}) +11=0,$$</span> but i'm not sure how to solve that.</p>
Cameron Williams
22,551
<p>Hint: try defining a new variable <span class="math-container">$w = z^4$</span> so that the equation in terms of <span class="math-container">$w$</span> reads (correcting the <span class="math-container">$z^8$</span> term to <span class="math-container">$|z|^8$</span>)</p> <p><span class="math-container">$$ |w|^2 + 4(w+\bar{w}) + 11 = 0.$$</span></p> <p>There are probably multiple ways to solve this, but I would break <span class="math-container">$w$</span> into real and imaginary parts and solve, noting that since <span class="math-container">$|z|=1$</span>, <span class="math-container">$|w|=1$</span>. I imagine there's a more elegant approach. Then after solving for <span class="math-container">$w$</span>, you can solve for <span class="math-container">$z$</span>.</p>
2,903,110
<p>How do I prove that the function $$f:(0,1)\rightarrow \mathbb R$$ defined by:</p> <p>$$f(x) = \frac{-2x+1}{(2x-1)^2-1}$$</p> <p>is onto?</p>
DeepSea
101,504
<p>Alternatively, set $t = 2x-1$, so that $f(x)=\dfrac{t}{1-t^2}$, and for each $r \in \mathbb{R}$ we show we can find a number $t$, and then an $x \in (0,1)$, such that $f(x) = r$. The equation $\dfrac{t}{1-t^2}=r$ gives $t = r - rt^2\implies rt^2+t -r = 0$.</p> <p>If $r = 0 \implies t = 0\implies x = \dfrac{1}{2}$. If $r \neq 0$, $t = \dfrac{-1\pm \sqrt{1+4r^2}}{2r}$.</p> <p>If $r &gt; 0$, take $t = \dfrac{-1+\sqrt{1+4r^2}}{2r}&gt;0$; then $t = \dfrac{2r}{1+\sqrt{1+4r^2}}&lt; \dfrac{2r}{\sqrt{1+4r^2}}&lt;1\implies t \in (0,1)\implies x = \dfrac{t+1}{2} &gt; \dfrac{1}{2} &gt; 0$, and $x &lt; \dfrac{1+1}{2} = 1\implies x \in (0,1)$.</p> <p>If $r &lt; 0$, choose $t = \dfrac{-1+\sqrt{1+4r^2}}{2r}\implies x = \dfrac{-1+2r+\sqrt{1+4r^2}}{4r}$. Observe that with $r &lt; 0$ we have $\sqrt{1+4r^2} &lt; 1-2r$ (clear), and this yields $-1+2r+\sqrt{1+4r^2} &lt; 0\implies x &gt; 0$. To complete the proof, we show $x &lt; 1$. But $x = \dfrac{1-2r-\sqrt{1+4r^2}}{-4r}&lt; 1 \iff 1-2r-\sqrt{1+4r^2} &lt; -4r\iff 1+2r &lt; \sqrt{1+4r^2}\iff 4r &lt; 0$, and this is true since $r &lt; 0$ (by assumption).</p>
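The case analysis above is easy to check numerically (a Python sketch; the helper name `preimage`, which implements the formulas for $t$ and $x$ derived above, is mine):

```python
import math

def f(x):
    return (-2 * x + 1) / ((2 * x - 1) ** 2 - 1)

def preimage(r):
    # formulas from the argument above: t solves r t^2 + t - r = 0, then x = (t+1)/2
    if r == 0:
        return 0.5
    t = (-1 + math.sqrt(1 + 4 * r * r)) / (2 * r)
    return (t + 1) / 2

for r in [-10, -1, -0.01, 0, 0.01, 1, 10]:
    x = preimage(r)
    assert 0 < x < 1          # the preimage really lies in (0, 1)
    assert abs(f(x) - r) < 1e-9
print("every sampled r is hit")
```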
18,421
<p>I am teaching 4th-grade kids. The topic is fractions. Basic understanding of a fraction as part of a whole and as part of a collection is clear to the kids. Several concrete ways exist to teach this basic concept. But when it comes to fraction addition/subtraction, I could not find a way that teaches it concretely.<br> Of course, teaching fraction addition &amp; subtraction of the form 3/2 + 1/2 is easy. But what about 3/2 + 4/3?<br> That is where we start talking about the algorithm (using the LCM), which makes the matter less intuitive and more abstract, which I am trying to avoid in the beginning. I believe all abstract concepts should come after the concrete experience. </p> <p>So, teachers, do you have any suggestions? </p>
Roar Stovner
8,357
<p>Another way to represent addition and subtraction of two fractions is by subdividing rectangles both horizontally and vertically.</p> <p>Say you want to show how to add <span class="math-container">$\frac{1}{3}$</span> and <span class="math-container">$\frac{2}{5}$</span>. Draw three equal sized rectangles. Subdivide the first in threes with horizontal lines. Subdivide the second rectangle in fifths with vertical lines. The students probably understand these representations of the fractions already.</p> <p>The third rectangle is used to represent the sum. Subdivide that rectangle both horizontally and vertically to get fifteen pieces. Now you must ensure that the students understand that these are actually fifteenths, and that they can see how many fifteenths are covered by <span class="math-container">$\frac{1}{3}$</span> and <span class="math-container">$\frac{2}{5}$</span>. The first two rectangles should make this obvious to most students.</p> <p>This representation of the sum of fractions naturally shows how adding thirds and fifths leads to fifteenths. That is, it clearly shows how a common denominator solves the problem. It also shows why the common denominator is found by multiplication.</p> <p>The representation is not perfect, though. It does not naturally explain the <em>least</em> common denominator, it does not extend to three or more summands without adding dimensions, and it is awkward with improper fractions.</p>
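The rectangle picture encodes exactly the arithmetic below; a tiny check with Python's exact `fractions` module (just for the teacher, not the class) confirms the fifteenths count:

```python
from fractions import Fraction

a, b = Fraction(1, 3), Fraction(2, 5)
# the 3-by-5 grid turns thirds and fifths into fifteenths
a15 = Fraction(1 * 5, 15)   # 1/3 covers 5 of the 15 pieces
b15 = Fraction(2 * 3, 15)   # 2/5 covers 6 of the 15 pieces
assert a == a15 and b == b15
print(a + b)  # 11/15
```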
31,451
<p>I have created a notebook with functions. The only thing that I want to change in the notebook later is the input file. I therefore want to have a button that runs every input step by step so that I do not have to go into every input and press Enter for it to update. How can this be done? </p>
m0nhawk
2,342
<p>There is a <a href="http://reference.wolfram.com/mathematica/ref/menuitem/EvaluateNotebook.html" rel="noreferrer"><code>Evaluation -&gt; Evaluate Notebook</code></a> menu for that.</p> <p>Or, you can set some cells to have <a href="http://reference.wolfram.com/mathematica/tutorial/OptionsForCells.html" rel="noreferrer"><code>Initialization</code></a> property (<a href="http://reference.wolfram.com/mathematica/howto/WorkWithInitializationCells.html" rel="noreferrer">documentation</a> on usage).</p>
1,074,809
<p>What is a non-decreasing sequence of sets and how come it can have a limit?</p> <p>It appears in a probability theory book.</p>
Workaholic
201,168
<p>Note the following:</p> <ul> <li>$x\mapsto 2x+1$ is always defined.</li> <li>$x\mapsto \sin x$ is defined for all $x$.</li> <li>$x\mapsto \sqrt{x}$ is defined if and only if $x\geqslant0$.</li> </ul> <p>Your function is a combination of these, thus you should see when the expression inside the radical is non-negative, i.e. for which $x$ it is true that $2x-1\geqslant0$. The domain of your function is then the set of solutions to that inequality.</p>
3,369,129
<p>I have the following formula: </p> <p><span class="math-container">$$S = 4 \times \tan\left(\frac{180^\circ-a}{2}\right) - \frac{π}{90^\circ} \times (180^\circ-a)$$</span></p> <p>where:</p> <p><span class="math-container">$$S \gt 0$$</span></p> <p><span class="math-container">$$0^\circ \lt a \lt 180^\circ$$</span></p> <p>and I want to solve it for <strong>a</strong>, but I don't know how to take <strong>a</strong> out of the <strong>tan()</strong>.<br> I tried some approaches with arctan(), but with no luck.</p> <p>Could someone help me out with it?</p>
Claude Leibovici
82,404
<p>Starting from LutzL's answer, we need to find "good" approximation of <span class="math-container">$x$</span> such that <span class="math-container">$$k=\tan(x)-x \qquad \text{where} \qquad k=\frac S 4$$</span> For the range <span class="math-container">$ 0 \leq x \leq \frac \pi 4$</span>, a quite good approximation of the rhs is <span class="math-container">$$\tan(x)-x\sim\frac{5 x^3}{15-6 x^2}$$</span> corresponding to the <span class="math-container">$[3,2]$</span> Padé approximant built at <span class="math-container">$x=0$</span>. As a result, this gives a cubic equation in <span class="math-container">$x$</span> <span class="math-container">$$x^3+\frac{6 k }{5}x^2-3 k=0$$</span> the only real solution of which being given by <span class="math-container">$$\color{blue}{x=\frac{2}{5} k \left(2 \cosh \left(\frac{1}{3} \cosh ^{-1}\left(\frac{375}{16 k^2}-1\right)\right)-1\right)}$$</span> Below are given some results for comparison <span class="math-container">$$\left( \begin{array}{ccc} k &amp; \text{approximation} &amp; \text{exact} \\ 0.00 &amp; 0.000000 &amp; 0.000000 \\ 0.01 &amp; 0.306774 &amp; 0.306773 \\ 0.02 &amp; 0.383648 &amp; 0.383643 \\ 0.03 &amp; 0.436456 &amp; 0.436446 \\ 0.04 &amp; 0.477750 &amp; 0.477734 \\ 0.05 &amp; 0.512063 &amp; 0.512040 \\ 0.06 &amp; 0.541613 &amp; 0.541582 \\ 0.07 &amp; 0.567670 &amp; 0.567630 \\ 0.08 &amp; 0.591038 &amp; 0.590989 \\ 0.09 &amp; 0.612261 &amp; 0.612203 \\ 0.10 &amp; 0.631728 &amp; 0.631659 \\ 0.11 &amp; 0.649725 &amp; 0.649646 \\ 0.12 &amp; 0.666472 &amp; 0.666382 \\ 0.13 &amp; 0.682141 &amp; 0.682039 \\ 0.14 &amp; 0.696867 &amp; 0.696753 \\ 0.15 &amp; 0.710763 &amp; 0.710637 \\ 0.16 &amp; 0.723922 &amp; 0.723783 \\ 0.17 &amp; 0.736418 &amp; 0.736266 \\ 0.18 &amp; 0.748319 &amp; 0.748154 \\ 0.19 &amp; 0.759678 &amp; 0.759500 \\ 0.20 &amp; 0.770545 &amp; 0.770352 \\ 0.21 &amp; 0.780960 &amp; 0.780753 \end{array} \right)$$</span></p> <p>For the case where <span class="math-container">$ \frac \pi 4 \leq x 
\leq \frac \pi 2$</span>, let <span class="math-container">$t= \frac \pi 2-x$</span> which makes the equation to be <span class="math-container">$$k'=k+\frac \pi 2=t+\cot(t)$$</span> and using again Padé approximant built at <span class="math-container">$t=0$</span> <span class="math-container">$$t+\cot(t)\sim \frac{1+\frac{7 }{10}t^2}{t+\frac{1}{30}t^3}$$</span> This again gives a cubic equation in <span class="math-container">$t$</span> <span class="math-container">$$\frac{k' }{30}t^3-\frac{7}{10} t^2+k' t-1=0$$</span></p>
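The closed form for the first range can be compared against the true root of $\tan(x)-x=k$ (found here by bisection, since the left side is increasing on $(0,\pi/2)$); a Python sketch reproducing the table's agreement — the helper names `pade_root` and `exact_root` are mine:

```python
import math

def pade_root(k):
    # closed-form real root of x^3 + (6k/5) x^2 - 3k = 0 given above
    return 0.4 * k * (2 * math.cosh(math.acosh(375 / (16 * k * k) - 1) / 3) - 1)

def exact_root(k, lo=1e-9, hi=math.pi / 2 - 1e-9):
    # bisection on tan(x) - x - k, which is increasing on (0, pi/2)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if math.tan(mid) - mid < k:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for k in [0.01, 0.1, 0.21]:
    assert abs(pade_root(k) - exact_root(k)) < 5e-4  # matches the table's accuracy
print(round(pade_root(0.1), 6), round(exact_root(0.1), 6))
```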
342,491
<p>How to prove the following:</p> <p>$a_n = \left\{\left(1+\frac{1}{n}\right)^n\right\}$ is a bounded sequence, $ n\in\mathbb{N}$</p>
Stahl
62,500
<p>As Weson Jiang notes, $a_n = \left(n + \frac{1}{n}\right)^n &gt; n^n$, and $n^n\to\infty$ as $n\to\infty$, so $a_n\to\infty$ as well.</p> <p>You might have wanted to show that $\left\{b_n\right\}_{n\in\Bbb{N}} = \left\{\left(1 + \frac{1}{n}\right)^n\right\}_{n\in\Bbb{N}}$ is a bounded sequence. The binomial theorem implies $\left(1 + \frac{1}{n}\right)^n = \sum_{k = 0}^n\begin{pmatrix}n\\k\end{pmatrix}\frac{1}{n^k}$, so \begin{align*} \left(1 + \frac{1}{n}\right)^n &amp;= \sum_{k = 0}^n\begin{pmatrix}n\\k\end{pmatrix}\frac{1}{n^k}\\ &amp;=\frac{1}{1} + n\frac{1}{n} + \begin{pmatrix}n\\2\end{pmatrix}\frac{1}{n^2} + \ldots + \frac{1}{n^n}\\ &amp;\leq 1 + 1 + \frac{n^2}{2!}\frac{1}{n^2} + \frac{n^3}{3!}\frac{1}{n^3} + \ldots + \frac{n^n}{n!}\frac{1}{n^n}\\ &amp;&lt; 1 + 1 + \frac{1}{2!} + \frac{1}{3!} + \ldots + \frac{1}{n!} + \ldots\\ &amp;&lt; 1 + 1 + \frac{1}{2} + \frac{1}{2^2} + \ldots + \frac{1}{2^n} + \ldots\\ &amp;= 1 + \sum_{k = 0}^{\infty}\frac{1}{2^k}\\ &amp;= 1 + 2 = 3, \end{align*} so the sequence is bounded above by $3$.</p>
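The bound derived above is easy to see numerically (a Python sketch; illustration, not proof): the terms $\left(1+\frac{1}{n}\right)^n$ increase, stay below $3$, and approach $e$:

```python
import math

# b_n = (1 + 1/n)^n: increasing and bounded above by 3, converging to e
vals = [(1 + 1 / n) ** n for n in range(1, 1001)]
assert all(x < y for x, y in zip(vals, vals[1:]))  # monotone increasing
assert all(v < 3 for v in vals)                    # the bound shown above
print(vals[-1], math.e)  # the terms approach e = 2.71828...
```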
2,321,850
<p>The base is the semicircle $$y=\sqrt{16−x^2},$$ where -4 $\le$ $x$ $\le$ 4. The cross-sections perpendicular to the $x$-axis are squares. $$\\$$ So far this is what I have:</p> <p>$\int (Area)\,dx$</p> <p>$\implies$ $\int \frac{π}{2} ($$r^2$$)\ dx$.</p> <p>$r$ = $\frac{\sqrt{16−x^2}}{2}$</p> <p>$\implies$ $ \frac {π}{8}$ $\int 16- $$x^2$$\ dx$. $$\\$$ I'm confused with what I have to do with the information given about the squares.</p>
Saketh Malyala
250,220
<p>Each slice of the solid is a square.</p> <p>The side length of the square is equal to the height of the function at that point.</p> <p>So each square has area $(\sqrt{16-x^2})^2=16-x^2$.</p> <p>You just need to integrate $\displaystyle \int_{-4}^4 (16-x^2)\,dx=\boxed{\frac{256}{3}}$. </p> <p>Note, the base is the region bounded by $y=\sqrt{16-x^2}$ AND $y=0$. </p> <p>Therefore, the side of the square at each point would be the distance between the two curves.</p>
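A midpoint Riemann sum confirms the exact value $\frac{256}{3}\approx 85.333$ (a quick Python sketch):

```python
# midpoint Riemann sum for the volume integral of (16 - x^2) over [-4, 4]
n = 100_000
dx = 8 / n
total = sum((16 - (-4 + (i + 0.5) * dx) ** 2) * dx for i in range(n))
print(total, 256 / 3)  # both are approximately 85.3333
assert abs(total - 256 / 3) < 1e-6
```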
2,559,110
<p>Prove if $g:\mathbb{R}^2\rightarrow \mathbb{R}$ is a differentiable function, then:</p> <p>$\frac{d}{dx}g(x,x)=\frac{\partial}{\partial x}g(x,x)+\frac{\partial}{\partial y}g(x,x)$</p> <p>My work:</p> <p>$\frac{\partial}{\partial x}g(x,x)=g'(x,x)g(1,1)$</p> <p><br>$\frac{\partial}{\partial y}g(x,x)=0$</p> <p><br> I try to calculate the partial derivatives of the $g$ function but after that step i'm stuck. Can someone help me?</p>
Dr. Sonnhard Graubner
175,066
<p>Write $$\frac{\left(\sqrt{4x^2+x}-x\right)\left(\sqrt{4x^2+x}+x\right)}{\sqrt{4x^2+x}+x}$$ and then $$\frac{x^2\left(3+\frac{1}{x}\right)}{|x|\left(\sqrt{4+\frac{1}{x}}+1\right)};$$ in both forms we get $\infty$ as $x\to\infty$.</p>
2,559,110
<p>Prove if $g:\mathbb{R}^2\rightarrow \mathbb{R}$ is a differentiable function, then:</p> <p>$\frac{d}{dx}g(x,x)=\frac{\partial}{\partial x}g(x,x)+\frac{\partial}{\partial y}g(x,x)$</p> <p>My work:</p> <p>$\frac{\partial}{\partial x}g(x,x)=g'(x,x)g(1,1)$</p> <p><br>$\frac{\partial}{\partial y}g(x,x)=0$</p> <p><br> I try to calculate the partial derivatives of the $g$ function but after that step i'm stuck. Can someone help me?</p>
Mark Viola
218,419
<blockquote> <p><strong>I thought it might be instructive to present a very simple way forward that relies on the definition of the limit and an elementary inequality. To that end we proceed.</strong></p> </blockquote> <p>Note that for any number $B&gt;0$, however large, we have for $x&gt;0$</p> <p>$$\begin{align} \sqrt{4x^2+x}-x&amp;&gt;2x-x\\\\ &amp;=x\\\\ &amp;&gt;B \end{align}$$</p> <p>whenever $x&gt;B$. Hence, by the definition of the limit, we find </p> <p>$$\lim_{x\to \infty}\sqrt{4x^2+x}-x=\infty$$</p>
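The inequality above is easy to test numerically (a Python sketch, taking $x=B+1$ for a few bounds $B$; the helper name `g` is mine):

```python
import math

def g(x):
    return math.sqrt(4 * x * x + x) - x

# for x > 0, sqrt(4x^2 + x) > sqrt(4x^2) = 2x, so g(x) > x exceeds any bound B once x > B
for B in [10.0, 1e3, 1e6]:
    assert g(B + 1) > B
print("g(x) exceeds every bound B once x > B")
```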
743,292
<p>Three fair six-sided dice are thrown and the dice show three different numbers. Find the probability that at least one six is obtained. </p> <p>I'm unsure of what type of question this is; I have tried combinations such as 6C1 over 18C3. </p> <p>But I'm not sure if this is correct. Any guidance is much appreciated. </p> <p>Thank you</p>
Cm7F7Bb
23,249
<p>The number of possible outcomes of this experiment is given by $$ \frac{6\cdot 5\cdot 4}{3!}. $$ The number of outcomes in which no $6$ is obtained is given by $$ \frac{5\cdot 4\cdot 3}{3!}. $$ The probability that at least one $6$ is obtained is $$ 1-\frac{\frac{5\cdot 4\cdot 3}{3!}}{\frac{6\cdot 5\cdot 4}{3!}}=\frac12. $$</p>
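Since the sample space is small, the answer can be confirmed by brute-force enumeration (a Python sketch over all ordered throws):

```python
from itertools import product

# enumerate all ordered throws of three dice, keep those with three distinct faces
distinct = [t for t in product(range(1, 7), repeat=3) if len(set(t)) == 3]
with_six = [t for t in distinct if 6 in t]
assert len(distinct) == 6 * 5 * 4        # 120 ordered outcomes
print(len(with_six) / len(distinct))     # 0.5
```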
3,274,461
<p>Suppose I have a recurrence relation like : </p> <p><span class="math-container">\begin{equation} f(n)=\begin{cases} 2*f(n-1) , &amp; \text{n%2 = 1}.\\ f(n-1) + 1, &amp; \text{n%2 = 0}. \end{cases} \end{equation}</span></p> <p><span class="math-container">\begin{equation} f(0) = f(1) = 1 \end{equation}</span></p> <p>How would I go about solving this? What do you call such a recurrence relation?</p> <p>Is there a way I can get <span class="math-container">$f(n)$</span> that will solve it for both cases?</p> <p>I tried using the WolframAlpha <a href="https://reference.wolfram.com/language/ref/RSolve.html?q=RSolve" rel="nofollow noreferrer">RSolve</a> function and tried to use <a href="https://reference.wolfram.com/language/ref/Mod.html" rel="nofollow noreferrer">Mod</a> to write a single equation hoping it could solve it. But it kept throwing a syntax error.</p> <p>I am fine even if you provide a solution using WolframAlpha.</p>
John Omielan
602,049
<p>I'm not sure if this type of problem has a special name. It's just basically a function definition split between <span class="math-container">$2$</span> different types of values, in this case depending on whether <span class="math-container">$n$</span> is odd or even.</p> <p>The way I would approach trying to solve these types of problems is to determine the first few values to see if I can find any pattern and, if so, then I will try to prove it, usually by using some form of induction.</p> <p>You're given <span class="math-container">\begin{equation} f(n)=\begin{cases} 2*f(n-1) , &amp; \text{n%2 = 1}.\\ f(n-1) + 1, &amp; \text{n%2 = 0}. \end{cases} \tag{1}\label{eq1} \end{equation}</span></p> <p><span class="math-container">\begin{equation} f(0) = f(1) = 1 \end{equation}</span></p> <p>The next few values are: <span class="math-container">\begin{align} f(2) &amp; = f(1) + 1 = 2 \\ f(3) &amp; = 2 \times f(2) = 4 \\ f(4) &amp; = f(3) + 1 = 5 \\ f(5) &amp; = 2 \times f(4) = 10 \\ f(6) &amp; = f(5) + 1 = 11 \\ f(7) &amp; = 2 \times f(6) = 22 \\ f(8) &amp; = f(7) + 1 = 23 \end{align}</span></p> <p>Since <span class="math-container">$f(n)$</span> is defined separately for odd &amp; even <span class="math-container">$n$</span>, it's likely the values for <span class="math-container">$f(n)$</span> would have somewhat separate patterns for odd &amp; even values of <span class="math-container">$n$</span>. In this case, the pattern for even <span class="math-container">$n$</span> seems a bit simpler, with the odd <span class="math-container">$n$</span> being just one less than the even <span class="math-container">$n + 1$</span> value. The even <span class="math-container">$n$</span> have <span class="math-container">$f(0) = 1$</span>, <span class="math-container">$f(2) = 2$</span>, <span class="math-container">$f(4) = 5$</span>, <span class="math-container">$f(6) = 11$</span> and <span class="math-container">$f(8) = 23$</span>. 
It's not immediately obvious, at least it wasn't for me, but for <span class="math-container">$n \ge 2$</span>, the value is the sum of a power of <span class="math-container">$2$</span> and the next smaller power of <span class="math-container">$2$</span>, less 1, with the powers of <span class="math-container">$2$</span> increasing by <span class="math-container">$1$</span> each time. In particular,</p> <p><span class="math-container">$$f(n) = 2^{n/2} + 2^{(n-2)/2} - 1 \; \text{ for } \; n \ge 2, n \, \% \, 2 = 0 \tag{2}\label{eq2}$$</span></p> <p>Note this equation doesn't apply to <span class="math-container">$n = 0$</span>, as it gives a result of <span class="math-container">$f(0) = \frac{1}{2}$</span>. This is because the initial value of <span class="math-container">$f(1) = 1$</span> doesn't behave according to \eqref{eq1} since it gives <span class="math-container">$f(1) = 2 \times f(0) = 2$</span> instead.</p> <p>From the recursion formula for odd <span class="math-container">$n$</span> being one less than the function for <span class="math-container">$n + 1$</span> (which is even), you get,</p> <p><span class="math-container">$$f(n) = f(n+1) - 1 = 2^{(n+1)/2} + 2^{(n-1)/2} - 2 \; \text{ for } \; n \ge 1, n \, \% \, 2 = 1 \tag{3}\label{eq3}$$</span></p> <p>You can prove \eqref{eq2} and \eqref{eq3} by using strong induction. For <span class="math-container">$n = 1$</span>, \eqref{eq3} gives <span class="math-container">$f(1) = 2^1 + 2^0 - 2 = 2 + 1 - 2 = 1$</span>. For <span class="math-container">$n = 2$</span>, \eqref{eq2} gives <span class="math-container">$f(2) = 2^1 + 2^0 - 1 = 2 + 1 - 1 = 2$</span>. This proves the base cases. Assume \eqref{eq2} and \eqref{eq3} hold for all <span class="math-container">$n \le k$</span> for some <span class="math-container">$k \ge 1$</span>. 
For <span class="math-container">$n = k + 1$</span>, if <span class="math-container">$n$</span> is even, then \eqref{eq1}, along with \eqref{eq3} for <span class="math-container">$f(n-1)$</span>, gives <span class="math-container">$f(n) = f(n - 1) + 1 = \left(2^{n/2} + 2^{(n - 2)/2} - 2\right) + 1 = 2^{n/2} + 2^{(n - 2)/2} - 1$</span>, which matches \eqref{eq2}. If <span class="math-container">$n$</span> is odd instead, then \eqref{eq1}, along with \eqref{eq2} for <span class="math-container">$f(n-1)$</span>, gives <span class="math-container">$f(n) = 2 \times f(n - 1) = 2 \times \left(2^{(n-1)/2} + 2^{(n-3)/2} - 1\right) = 2^{(n+1)/2} + 2^{(n-1)/2} - 2$</span>, which matches \eqref{eq3}. Thus, this proves the inductive step, confirming that \eqref{eq2} and \eqref{eq3} are valid for all <span class="math-container">$n \ge 1$</span>.</p> <p>Update: I read some of the Wolfram-Alpha documentation to figure out how to use it. However, I didn't see any way to use a function definition split as done here. Nonetheless, I was able to use the Mod function to express it in one equation, so the expression I used was</p> <blockquote> <p>RSolve[{f[n]==(2^Mod[n,2])f[n-1] + Mod[n-1,2],f[1]==1},f[n],n]</p> </blockquote> <p>Note I only gave it <span class="math-container">$f(1)$</span> as <span class="math-container">$f(0)$</span> doesn't follow from the formula (when I also gave it <span class="math-container">$f(0)$</span>, it failed to do the calculations). The results are <a href="https://www.wolframalpha.com/input/?i=RSolve%5B%7Bf%5Bn%5D%3D%3D(2%5EMod%5Bn,2%5D)f%5Bn-1%5D+%2B+Mod%5Bn-1,2%5D,f%5B1%5D%3D%3D1%7D,f%5Bn%5D,n%5D" rel="nofollow noreferrer">here</a>. 
It produced just one equation of</p> <p><span class="math-container">$$f(n) = 2^{\lfloor(n - 1)/2\rfloor - \lfloor(n - 2)/2\rfloor}\left(3 \times 2^{\lfloor(n - 2)/2\rfloor} - 1\right) \tag{4}\label{eq4}$$</span></p> <p>For even <span class="math-container">$n$</span>, <span class="math-container">$2^{\lfloor(n - 1)/2\rfloor - \lfloor(n - 2)/2\rfloor} = 2^{(n-2)/2 - (n-2)/2} = 2^0 = 1$</span> and <span class="math-container">$3 \times 2^{\lfloor(n - 2)/2\rfloor} - 1 = (2 + 1) \times 2^{(n - 2)/2} - 1 = 2^{n/2} + 2^{(n-2)/2} - 1$</span>, so it matches \eqref{eq2}. For odd <span class="math-container">$n$</span>, <span class="math-container">$2^{\lfloor(n - 1)/2\rfloor - \lfloor(n - 2)/2\rfloor} = 2^{(n-1)/2 - (n-3)/2} = 2^1 = 2$</span> and <span class="math-container">$3 \times 2^{\lfloor(n - 2)/2\rfloor} - 1 = (2 + 1) \times 2^{(n-3)/2} - 1 = 2^{(n-1)/2} + 2^{(n-3)/2} - 1$</span>, so the product of the <span class="math-container">$2$</span> parts gives \eqref{eq3}. In my opinion, it's mainly a matter of choice as to whether it's better to have <span class="math-container">$2$</span> simpler equations like I do or just one somewhat more complicated equation, like Wolfram-Alpha produces.</p>
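The recurrence and the closed forms (2)/(3) are easy to check against each other programmatically (a Python sketch; the helper names `f_rec` and `f_closed` are mine):

```python
def f_rec(n):
    # the recurrence as given, with f(0) = f(1) = 1
    if n <= 1:
        return 1
    return 2 * f_rec(n - 1) if n % 2 == 1 else f_rec(n - 1) + 1

def f_closed(n):
    # closed forms (2) and (3) derived above, valid for n >= 1
    if n % 2 == 0:
        return 2 ** (n // 2) + 2 ** ((n - 2) // 2) - 1
    return 2 ** ((n + 1) // 2) + 2 ** ((n - 1) // 2) - 2

assert all(f_rec(n) == f_closed(n) for n in range(1, 40))
print([f_rec(n) for n in range(9)])  # [1, 1, 2, 4, 5, 10, 11, 22, 23]
```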
3,382,119
<p><span class="math-container">$x\cdot\begin{bmatrix}0\\-3\\1\end{bmatrix}$</span> + <span class="math-container">$y\cdot\begin{bmatrix}-5\\2\\-4\end{bmatrix}$</span> + <span class="math-container">$z\cdot\begin{bmatrix}-20\\-1\\-13\end{bmatrix}$</span> =<span class="math-container">$\begin{bmatrix}0\\0\\0\end{bmatrix}$</span></p> <p>x = ? y = ? z = ?</p> <p>RREF = <span class="math-container">$\begin{bmatrix}1&amp;0&amp;3\\0&amp;1&amp;4\\0&amp;0&amp;0\end{bmatrix}$</span></p> <p>I started with looking for the Reduced Row Echelon Form, but don't know what to do next.</p>
Bernard
202,857
<p>Supposing your RREF is exact (which it is), it remains to interpret it as an equivalent linear system of equations: <span class="math-container">$$\begin{cases} x+3z=0\\y+4z=0 \end{cases}\implies \begin{bmatrix}x\\y\\z\end{bmatrix}=\begin{bmatrix}-3z\\-4z\\\quad z\end{bmatrix}=z\begin{bmatrix}-3\\-4\\\quad1\end{bmatrix}.$$</span></p>
3,382,119
<p><span class="math-container">$x\cdot\begin{bmatrix}0\\-3\\1\end{bmatrix}$</span> + <span class="math-container">$y\cdot\begin{bmatrix}-5\\2\\-4\end{bmatrix}$</span> + <span class="math-container">$z\cdot\begin{bmatrix}-20\\-1\\-13\end{bmatrix}$</span> =<span class="math-container">$\begin{bmatrix}0\\0\\0\end{bmatrix}$</span></p> <p>x = ? y = ? z = ?</p> <p>RREF = <span class="math-container">$\begin{bmatrix}1&amp;0&amp;3\\0&amp;1&amp;4\\0&amp;0&amp;0\end{bmatrix}$</span></p> <p>I started with looking for the Reduced Row Echelon Form, but don't know what to do next.</p>
J. W. Tanner
615,567
<p><span class="math-container">$x\cdot\begin{bmatrix}0\\-3\\1\end{bmatrix}$</span> + <span class="math-container">$y\cdot\begin{bmatrix}-5\\2\\-4\end{bmatrix}$</span> + <span class="math-container">$z\cdot\begin{bmatrix}-20\\-1\\-13\end{bmatrix}$</span> =<span class="math-container">$\begin{bmatrix}0\\0\\0\end{bmatrix}$</span></p> <p><span class="math-container">$\implies \begin{bmatrix}-5y-20z\\-3x+2y-z\\x-4y-13z\end{bmatrix}$</span>=<span class="math-container">$\begin{bmatrix}0\\0\\0\end{bmatrix}$</span></p> <p><span class="math-container">$\implies y=-4z, -3x-9z=0, x+3z=0$</span></p> <p><span class="math-container">$\implies y=-4z, x=-3z$</span></p> <p><span class="math-container">$\implies\begin{bmatrix}x\\y\\z\end{bmatrix}=c\begin{bmatrix}-3\\-4\\1\end{bmatrix}$</span></p>
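Either way, the solution can be verified by substituting back into the original vector equation (a Python sketch, no linear-algebra library needed; the helper name `residual` is mine):

```python
def residual(c):
    # plug (x, y, z) = c * (-3, -4, 1) into the original linear combination
    v1, v2, v3 = (0, -3, 1), (-5, 2, -4), (-20, -1, -13)
    x, y, z = -3 * c, -4 * c, c
    return [x * a + y * b + z * d for a, b, d in zip(v1, v2, v3)]

for c in (1, -2, 5):
    assert residual(c) == [0, 0, 0]
print("every multiple of (-3, -4, 1) is a solution")
```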
13,274
<p>I have searched stackoverflow (and comparable pages) for quite a while now (got redirected from there to this specialized stack), and I surrender. I am trying to evaluate an expression that is small in the end numerically. <br></p> <p>Example:</p> <pre>Log[Log[Log[6^5^4^3^2^1]]]=12.9525...<br></pre> <p>WolframAlpha has no problem evaluating these values for (any/a very high) number of exponents (I tried it up to 20). I guess it is possible to achieve this in Mathematica as well?</p> <p>I tried Hold, Defer etc, as described in <br> <a href="https://stackoverflow.com/questions/1616592/mathematica-unevaluated-vs-defer-vs-hold-vs-holdform-vs-holdallcomplete-vs-etc">https://stackoverflow.com/questions/1616592/mathematica-unevaluated-vs-defer-vs-hold-vs-holdform-vs-holdallcomplete-vs-etc</a> <br> However, none of these did what I hoped for. Is it a matter of explaining to Mathematica the rules of logarithms?</p> <pre>FullSimplify[Log[x^b], x>0 && b>0]</pre> <p>expands it nicely, however that is not what I want (I have explicit numbers). Is there any way to perform the calculations WolframAlpha performs with Mathematica (obviously avoiding the WolframAlpha Output Operator ;)) ?</p> <p>Is there some Option/Assumption etc I have overlooked?</p> <hr> <p>For this specific question there is a recursive algebraic solution: $$ n^{(n-1)^{...^1}}=e^{\log(n)*(n-1)^{...^1}} $$ and so on, remove a bunch of e-s at the end. I guess Wolfram|Alpha uses this. I would still like to know if there's a true Mathematica solution to this.</p>
Rojo
109
<p>An alternative could be</p> <pre><code>Block[{Power, Log}, Log[Log[Log[6^5^4^3^2^1]]] // PowerExpand] </code></pre> <blockquote> <p>Log[262144 Log[5] + Log[Log[2] + Log[3]]]</p> </blockquote> <pre><code>% // N </code></pre> <blockquote> <p>12.9525</p> </blockquote> <p>Still gives overflows, it's equivalent to @belisarius's</p> <p>Also, </p> <pre><code>Log[Log[Log[6^5^4^3^2^1]]] // Hold // PowerExpand // ReleaseHold </code></pre>
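The symbolic result can also be checked numerically outside Mathematica (a Python sketch with natural logs; note the tower associates to the right, so $4^{3^{2^1}} = 4^9 = 262144$):

```python
import math

# log log log(6^5^4^3^2^1): by PowerExpand this equals
# log(262144*log(5) + log(log(2) + log(3))), all logs natural
inner = 262144 * math.log(5) + math.log(math.log(2) + math.log(3))
value = math.log(inner)
print(round(value, 4))  # 12.9525
```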
1,096,453
<p>Let $X$ be an inner product space. If $x_n\to x$, $y_n\to y$ (in norm), then $(x_n,y_n)\to(x,y)$ (in modulus).</p>
Pp..
203,995
<p>Add and subtract $(x,y_n)$ and use the Cauchy-Schwarz and triangle inequalities a few times.</p> <p>$$\begin{align}|(x,y)-(x_n,y_n)|&amp;=|(x,y)-(x,y_n)+(x,y_n)-(x_n,y_n)|\\&amp;\leq |(x,y-y_n)|+|(x-x_n,y_n)|\\&amp;\leq ||x||\cdot||y-y_n||+||x-x_n||\cdot||y_n||\\&amp;\leq||x||\cdot||y-y_n||+||x-x_n||\cdot(||y||+\epsilon)\end{align}$$</p> <p>for $n$ large enough that $||y_n||-||y||\leq ||y_n-y||&lt;\epsilon$.</p>
1,096,453
<p>Let $X$ be an inner product space. If $x_n\to x$, $y_n\to y$ (in norm), then $(x_n,y_n)\to(x,y)$ (in modulus).</p>
Surb
154,545
<p>I'm not sure what you mean by "modulus" but other definitions of the norm on $X \times Y$ could probably be handled with very similar proofs.</p> <p>Let $\|\cdot\|_X,\|\cdot\|_Y$ be norms on $X$ and $Y$ and $\|\cdot\|_{X\times Y}:X \times Y \to \Bbb R: (u,v)\mapsto \sqrt{\|u\|_X^2+\|v\|^2_Y}$. Let us suppose that $(x_n)_{n\in\Bbb N} \subset X$ and $(y_n)_{n\in\Bbb N} \subset Y$ are sequences so that </p> <p>$$\lim_{n\to\infty}\|x_n-x\|_X =0 \quad \text{ and } \quad \lim_{n\to\infty}\|y_n-y\|_Y =0.$$</p> <p>Let $\epsilon &gt; 0$; there exist $N_1,N_2&gt;0$ such that $$\|x_n-x\|_X &lt; \frac{\epsilon}{\sqrt 2}\qquad \forall n &gt; N_1$$ and $$\|y_n-y\|_Y &lt; \frac{\epsilon}{\sqrt 2}\qquad \forall n &gt; N_2.$$ Then for every $n &gt; \max\{N_1,N_2\}$ we get $$\|(x_n,y_n)-(x,y)\|_{X\times Y}= \sqrt{\|x_n-x\|_X^2+\|y_n-y\|_Y^2}\leq \sqrt{\left(\frac{\epsilon}{\sqrt 2}\right)^2+\left(\frac{\epsilon}{\sqrt 2}\right)^2} = \epsilon.$$ That is, $(x_n,y_n)\to (x,y)$ in the norm $\|\cdot\|_{X \times Y}$.</p>
4,527,455
<p>My goal is to find all values of &quot;a&quot; so that the circle <span class="math-container">$x^2 - ax + y^2 + 2y = a$</span> has the radius 2</p> <p>The correct answer is: <span class="math-container">$a = -6$</span> and <span class="math-container">$a = 2$</span></p> <p>I tried solving it by doing this:<br/> <span class="math-container">$x^2 - ax + y^2 +2y=a$</span><br/> <span class="math-container">$x^2 - ax + (y+1)^2-1=a$</span><br/> <span class="math-container">$(x - \frac a2)^2 - (\frac a2)^2 + (y+1)^2-1=a$</span><br/> <span class="math-container">$(x - \frac a2)^2 - {a^2\over 4} + (y+1)^2-1=a$</span><br/> <span class="math-container">$(x - \frac a2)^2 + (y+1)^2=a + {a^2\over 4} + 1$</span><br/> <span class="math-container">$(x - \frac a2)^2 + (y+1)^2={a^2+4a + 4\over 4}$</span><br/></p> <p>We want the radius to be 2 so set this <span class="math-container">${a^2+4a + 4\over 4}$</span> equal to 2<br/> <span class="math-container">${a^2+4a + 4\over 4}=2$</span><br/> <span class="math-container">$a^2+4a + 4=8$</span><br/> <span class="math-container">$a^2+4a -4=0$</span><br/><br/> Solve for a:<br/> <span class="math-container">$a=-2 \pm \sqrt{4+4}$</span><br/> <span class="math-container">$a=-2 \pm \sqrt{8}$</span><br/><br/> This is not correct as you can see. I don't understand what I do wrong, I'm not sure if there is one of those tiny mistakes somewhere in my solving process or if I'm completely wrong from the beginning. Thanks in advance.</p>
binbni
1,011,566
<p><span class="math-container">$\frac{a^2+4a+4}{4}$</span> is not the radius. Actually, it is the square of the radius.</p> <p>So, you should solve <span class="math-container">$\frac{a^2+4a+4}{4}=2^2$</span></p> <p>And its solutions are <span class="math-container">$a=-6$</span> and <span class="math-container">$a=2$</span>.</p>
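A quick check (Python sketch) that both values give radius $2$, using the completed-square form $r^2 = a + \frac{a^2}{4} + 1$ worked out in the question; the helper name `radius` is mine:

```python
import math

def radius(a):
    # completing the square gives center (a/2, -1) and r^2 = a + a^2/4 + 1
    r_squared = a + a * a / 4 + 1
    return math.sqrt(r_squared)

print(radius(2), radius(-6))  # 2.0 2.0
```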
56,804
<p>I know the Galois group is $S_3$. And obviously we can swap the imaginary cube roots. I just can't figure out a convincing, "constructive" argument to show that I can swap the "real" cube root with one of the imaginary cube roots. </p> <p>I know that if you have a 3-cycle and a 2-cycle operating on three elements, you get $S_3$. I have a general idea that based on the order of the group there's supposed to be at least a 3-cycle. But this doesn't feel very "constructive" to me. </p> <p>I wonder if I've made myself understood in terms of what kind of argument I'd like to see?</p>
zyx
14,120
<p>The question implicitly assumes the base field is $\mathbb Q$. The Galois group of a polynomial only exists with reference to a specified field containing the coefficients. It is not enough to assume that $X^3 - 2$ is irreducible, because the Galois group is cyclic of order 3 (that is, a proper subgroup of $S_3$ rather than the whole thing) if the base field contains primitive cube roots of 1, and $S_3$ otherwise.</p> <p>If the base field contains no primitive cube roots of 1 and no cube roots of 2, then a splitting field of $X^3 -2$ is obtained by adjoining all the cube roots of 1 and 2. If the adjoined roots are taken from a larger "pre-existing" field such as the complex numbers, an abstract choice of splitting field is equivalent to selecting one nontrivial cube root of 1 (two choices) and one cube root of 2 (three choices). There are six possibilities and they differ by elements of $S_3$.</p> <p>The group structure is not the direct product of the cyclic subgroups of order 2 and 3, because the operation of changing one's choice of $\sqrt[3]{1}$ will also re-arrange the cube roots of 2, exchanging the two unselected $\sqrt[3]{2}$'s.</p>