<p>I want to use the standard definition $x_n \rightarrow x$ if for all $\epsilon&gt;0$ there exists $N$ such that if $n&gt;N$ then $|x_n-x|&lt;\epsilon$. </p> <p>So my claim is $x_n\rightarrow 0$. If I set $N=\epsilon^2$, then the expression $|\sqrt{n^2+1}-n-0|&lt;\epsilon$ will hold. I solved for $N$ by squaring both sides: $n^2+1-n^2&lt;\epsilon^2$. Does this work? </p> <p>Edit: Made a dumb algebraic mistake, thanks everyone.</p>
André Nicolas
<p>Note that for positive $n$ we have $$\left(n+\frac{1}{2n}\right)^2\gt n^2+1.$$ It follows that $$n\lt \sqrt{n^2+1}\lt n+\frac{1}{2n},$$ and therefore $$0\lt \sqrt{n^2+1}-n\lt \frac{1}{2n}.$$ We conclude by Squeezing that our limit is $0$.</p>
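The two-sided bound is easy to test numerically. A quick Python sanity check (my own illustration, not part of the proof):

```python
import math

# Sanity check of the squeeze bounds n < sqrt(n^2 + 1) < n + 1/(2n)
# and of the gap sqrt(n^2 + 1) - n shrinking toward 0.
def gap(n):
    return math.sqrt(n * n + 1) - n

for n in [1, 10, 100, 1000]:
    assert 0 < gap(n) < 1 / (2 * n)

print(gap(1000))  # ≈ 0.0005, consistent with the 1/(2n) bound
```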
<p>I recently realized that I don't know any diffeomorphisms of the plane (or $\mathbb{R}^n$ in general) except for linear ones, so I want to ask rather broad questions, hoping to be pointed to the appropriate literature.</p> <p><strike>1) Are there simple ways of constructing autodiffeomorphisms of $\mathbb{R}^n$ that can be expressed in closed form?</strike> UPD: Ok, there's e.g. $(x, y) \mapsto (x, y + f(x))$, where $f: \mathbb{R} \to \mathbb{R}$ is smooth, so I call this one back - kind of; if you know some exciting and unusual family, feel free to share :)<br> 2) Is every autodiffeomorphism of $\mathbb{R}^n$ isotopic to a linear one? Obviously, every one is <em>homotopic</em> to any other due to $\mathbb{R}^n$ being contractible, but since $\mathrm{GL}(\mathbb{R}^n)$ is not connected, and homotopies behave nicely under differentials at a point, even some linear autodiffeomorphisms are not isotopic, if I'm not mistaken.</p>
Ryan Budney
<p>The answer to your question (2) is yes. </p> <p>The proof goes like this. Let $f$ be a diffeomorphism. We find an isotopy from $f$ to a diffeomorphism $g$ with $g(0)=0$. The isotopy is</p> <p>$$(x,t) \longmapsto f(x)-tf(0) $$</p> <p>When $t=0$ this is $f$; when $t=1$ you get $f(x)-f(0)$. </p> <p>Next, given a diffeo $g$ with $g(0)=0$, we isotope $g$ to a linear diffeomorphism. The map is this:</p> <p>$$(x,t) \longmapsto \frac{g((1-t)x)}{1-t}$$</p> <p>for $t \in [0,1)$, and at $t=1$ we have</p> <p>$$(x,1) \longmapsto Dg_0(x)$$</p> <p>You can check this map is continuous provided $g$ is $C^1$. </p> <p>Regarding your first question, one of the most common techniques is to integrate vector fields. </p>
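For a concrete feel, here is a one-dimensional illustration (the sample diffeomorphism $g(x)=x+x^3$ of $\mathbb{R}$ is my own choice) showing the second isotopy converging to the differential at the origin as $t \to 1$:

```python
# g(x) = x + x^3 is a diffeomorphism of R with g(0) = 0 and Dg_0 = id.
# As t -> 1, g((1-t)x)/(1-t) should converge to Dg_0(x) = x.
def g(x):
    return x + x**3

def isotopy(x, t):
    s = 1 - t
    return g(s * x) / s

x = 2.0
for t in [0.0, 0.9, 0.99, 0.999]:
    print(t, isotopy(x, t))   # values approach x = 2.0
```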
<p>the curves are $x^2 = 4y$ and $x^2=4y-4$ these are just the same parabolas but the other one is shifted up by one unit.</p> <p>I have been thinking of 3 possibilities that might be the answer.</p> <ol> <li><p>The area is equal to infinite sq. units</p></li> <li><p>The area is equal zero</p></li> <li><p>The area is undefined</p></li> </ol> <p>The answer I have concluded that is probably the most correct is that the area is undefined because:</p> <ol> <li><p>The area is not enclosed by the two curves</p></li> <li><p>Infinity is not a number</p></li> <li><p>The area is definitely not zero since the curves are not overlapping</p></li> </ol> <p>So my question is if I answered this correctly.</p>
Mundron Schmidt
<p>While the other answers tell you why the first option is the correct answer, let me explain where your arguments and conclusions are correct or wrong.</p> <p>Your first argument is a bit fishy; your second is true, but the conclusion you draw from it is wrong. Your third argument correctly rules out the second option. So you should have concluded the first option, not the third.</p> <p>Here are a few more details on your arguments:</p> <blockquote> <ol> <li>The area is not enclosed by the two curves</li> </ol> </blockquote> <p>The area <em>is</em> enclosed by the two curves. You might feel that part of the boundary is missing, because the curves don't intersect, but a boundary doesn't need to go <em>around the area in one stroke</em>. So infinitely large areas are possible too.</p> <blockquote> <ol start="2"> <li>Infinity is not a number</li> </ol> </blockquote> <p>That's true and very important to know. But in fact, the measure of an area maps it to a real number <strong>or</strong> infinity. Hence, infinity is a valid measure of an area.</p> <blockquote> <ol start="3"> <li>The area is definitely not zero since the curves are not overlapping</li> </ol> </blockquote> <p>That is correct. That's why the measure of the described area has to be positive.</p>
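The infinite answer can also be seen numerically: the vertical strip between $y=x^2/4$ (from $x^2=4y$) and $y=x^2/4+1$ (from $x^2=4y-4$) has height exactly $1$ over every $x$, so partial areas grow without bound. A small Python sketch (illustration only):

```python
# Partial area of the region between the two parabolas over |x| <= X.
# The integrand (upper - lower) is identically 1, so the area is 2X -> infinity.
def upper(x): return x * x / 4 + 1   # x^2 = 4y - 4
def lower(x): return x * x / 4       # x^2 = 4y

def partial_area(X, steps=100_000):
    # midpoint rule for the integral of (upper - lower) over [-X, X]
    h = 2 * X / steps
    return sum((upper(-X + (i + 0.5) * h) - lower(-X + (i + 0.5) * h)) * h
               for i in range(steps))

for X in (1, 10, 100):
    print(X, partial_area(X))   # grows like 2X: ≈ 2, 20, 200
```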
<blockquote> <p>Let <span class="math-container">$G$</span> be any group, and let <span class="math-container">$Z$</span> be its center.</p> <p>(a) Show that <span class="math-container">$G/Z\cong \text{Inn}(G)$</span>.</p> <p>(b) Conclude that <span class="math-container">$\text{Inn}(G)$</span> cannot be a nontrivial cyclic group.</p> </blockquote> <p>I've already gotten part (a) by considering the mapping <span class="math-container">$\pi:G\rightarrow\text{Inn}(G)$</span> such that <span class="math-container">$\pi(g)$</span> is the automorphism that takes <span class="math-container">$x$</span> to <span class="math-container">$g^{-1}xg$</span> for all <span class="math-container">$x\in G$</span>. The mapping <span class="math-container">$\pi$</span> is clearly a surjective homomorphism with kernel <span class="math-container">$Z$</span>, and part (a) follows from the isomorphism theorem.</p> <p>For part (b), I must prove that <span class="math-container">$G/Z$</span> cannot be a nontrivial cyclic group. If it were, the group would equal <span class="math-container">$\{Z,Zg,Zg^2,\ldots,Zg^{n-1}\}$</span> for some <span class="math-container">$g\in G$</span>. Also, <span class="math-container">$G/Z$</span> would be an abelian group, and it follows that the commutator subgroup <span class="math-container">$G'$</span> belongs to <span class="math-container">$Z$</span>. I don't see how to derive a contradiction from there.</p>
DonAntonio
<p>Hints:</p> <p>$$\forall x,y\in G\;\exists z_1,z_2\in Z(G)\;,\;n_1,n_2\in\Bbb N\;\;s.t.\;\; x=g^{n_1}z_1\;,\;y=g^{n_2}z_2\implies$$</p> <p>$$xy=g^{n_1}z_1g^{n_2}z_2=g^{n_1}g^{n_2}z_1z_2=g^{n_2}g^{n_1}z_2z_1=\ldots$$</p> <p>The above thus proves $\,G\,$ is abelian, but then $\,Z(G)=G\,$ , so$\;\ldots\;$</p>
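The conclusion can be checked mechanically on a concrete nonabelian group. A pure-Python sketch (the permutation encoding of $D_4$, the symmetries of a square, is my own choice of example): it computes $Z(G)$ and the conjugation maps, and confirms that $\mathrm{Inn}(G)\cong G/Z$ has order $4$ with every element of order at most $2$, i.e. it is the Klein four-group, not cyclic.

```python
from itertools import product

def compose(p, q):                     # (p ∘ q)(i) = p(q(i)); perms as tuples
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

e = (0, 1, 2, 3)
r = (1, 2, 3, 0)                       # rotation of the square
s = (0, 3, 2, 1)                       # a reflection
G = {e}
while True:                            # close {r, s} under products
    new = {compose(a, b) for a, b in product(G | {r, s}, repeat=2)} - G
    if not new:
        break
    G |= new
assert len(G) == 8                     # D4 has order 8

Z = {z for z in G if all(compose(z, g) == compose(g, z) for g in G)}

def conj(g):                           # inner automorphism x -> g^-1 x g
    gi = inverse(g)
    return tuple(sorted((x, compose(gi, compose(x, g))) for x in G))

Inn = {conj(g) for g in G}
identity_map = conj(e)
assert len(Inn) == len(G) // len(Z) == 4          # Inn(G) ≅ G/Z
# conj(g) squared equals conj(g∘g), and every g^2 lands in Z, so every
# inner automorphism has order <= 2: Inn(G) is Klein four, not cyclic.
assert all(conj(compose(g, g)) == identity_map for g in G)
```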
<p>There is a connection between type theory and logic, where types are propositions, and type checking performs the role of checking whether a proof of a proposition is correct (Curry-Howard isomorphism).</p> <p>But I can imagine a different connection: There seems to be a similarity between type checking and checking whether a particular mathematical structure satisfies a set of axioms. </p> <p>We might say that propositions (axioms) are formalized as types (just as in the CH-isomorphism), but that now, an instance of a proposition (i.e. an instance of that type) is not a proof of the proposition, but a model of it. <strong>Type checking then takes the role of checking whether a particular mathematical structure is indeed a model of that axiom.</strong></p> <p><strong>Is there a formalization of "checking whether a structure is a model of a proposition" as type checking? Could you explain such a formalization, or point to an explanation of it?</strong></p>
Nikolaj-K
<p>Yes, there are two notions of "fit into" here:</p> <ul> <li>Terms (what you call instances here) "fit into" types or don't, and that is checked when you do type checking</li> <li>Structures (e.g. set-theoretical constructions) "fit into" propositions or don't, and that's checked when you verify whether the structure is a model.</li> </ul> <p>I think there's nothing wrong with drawing an analogy between how things "fit into" one another here, as both processes can be considered under the guise of a computational task. Although there are many stratifications involved. </p> <ul> <li>For one, mathematical logicians (and e.g. set theorists) will do everything formalistically with symbols on paper, but nonetheless they make a conceptual distinction between syntax and semantics. As a result, you have philosophical positions like mathematical Platonism which consider mathematical structures as farther away from syntax and closer to something "actual" - something that's realized among possible semantics. Computer science type theorists (who are also logicians of course, just sometimes closer to another school, say the proof-theoretic one) will work in large part purely on the syntactic side (and semantics are sought in more general setups than e.g. in set theory departments).</li> <li>Secondly, as mentioned, both "checkings" here are forms of computation, and it's worth pointing out that what's doable here will depend on the underlying logic. The <a href="https://en.wikipedia.org/wiki/Hindley%E2%80%93Milner_type_system" rel="nofollow noreferrer">Hindley–Milner type system</a> is the framework that motivated the programming languages ML and then Haskell, and grants a nice type checking routine. The frameworks where good checking is possible made cuts. Modern efforts at ramping up a dependently typed language gear towards a constructive/<a href="https://en.wikipedia.org/wiki/Intuitionistic_type_theory" rel="nofollow noreferrer">intuitionistic</a> logic that now includes strong forms of quantifiers, while logicians and model theorists for the most part do a deeper and thus less varied (relatively speaking) study of the long-established strong first- or higher-order logics. The answers in <a href="https://philosophy.stackexchange.com/questions/2617/how-did-first-order-logic-come-to-be-the-dominant-formal-logic">this logic SE question</a> are among the most interesting on the platform. As is always the case with such broad questions on logic, one could now go down many rabbit holes. I might give a shoutout to <a href="https://en.wikipedia.org/wiki/Tarski%27s_undefinability_theorem" rel="nofollow noreferrer">Tarski's undefinability theorem</a>, which restricts strong logics. Decidability is also a question for type systems, if you just choose any.</li> <li>Along with the theme of restrictions, type checking of a term checks all it can, while with propositions there are always many different aspects of a structure you can validate. Again, here, if you make cuts and your type system admits subtyping and whatnot, I suppose you can take coarser views on terms too.</li> <li>In the latter notion of "fit in", there's the concept of <a href="https://en.wikipedia.org/wiki/Categorical_theory" rel="nofollow noreferrer">categoricity</a>, and one may try to reflect that in uniqueness types.</li> <li>If you have narrowed down your logics of interest and decide you want to approach your question from a raw computation and "fit into" perspective, then the subject of <a href="https://en.wikipedia.org/wiki/Confluence_(abstract_rewriting)" rel="nofollow noreferrer">abstract rewriting</a> and <a href="https://en.wikipedia.org/wiki/Rewriting" rel="nofollow noreferrer">rewriting systems</a>, which may strike one as even more formal than subfields of mathematical logic, may be up your alley.</li> </ul> <p>Side note 1: The verification process in the latter case need not even be restricted to axioms; we can do that for any propositions.</p> <p>Side note 2: I'm not a fan of your initial wording <em>"checking whether a proof of a proposition is correct"</em>. I'd instead say <em>"checking whether an expression is a proof of a proposition"</em>. I say that because if you use your language here, then we grant every term the status of being a "wrong proof" for almost all propositions. That's just a question of our language here, though.</p>
Alex Kruckman
<p>The key observation of the Curry-Howard correspondence is that the inductive structure of terms in type theory mirrors the inductive structure of proofs in logic.</p> <p>For example, given two terms <span class="math-container">$t_1$</span> and <span class="math-container">$t_2$</span> of types <span class="math-container">$A_1$</span> and <span class="math-container">$A_2$</span>, I can construct a term <span class="math-container">$(t_1,t_2)$</span> of the product type <span class="math-container">$A_1\times A_2$</span>. Similarly, given two proofs <span class="math-container">$p_1$</span> and <span class="math-container">$p_2$</span> of the propositions <span class="math-container">$Q_1$</span> and <span class="math-container">$Q_2$</span>, I can put them together to form a proof <span class="math-container">$(p_1,p_2)$</span> of the conjunction <span class="math-container">$Q_1\land Q_2$</span>. </p> <p>On the other hand, models of a proposition (at least in ordinary semantics for first-order / predicate logic) just don't have this same kind of inductive structure. Given models <span class="math-container">$M_1$</span> and <span class="math-container">$M_2$</span> of sentences <span class="math-container">$\varphi_1$</span> and <span class="math-container">$\varphi_2$</span>, there's no canonical way of constructing a model of the conjunction <span class="math-container">$\varphi_1\land \varphi_2$</span>. So I'm not optimistic that the kind of extension of Curry-Howard that you're looking for is possible. </p> <p>It's conceivable to me that you could change the notion of "model" to something sufficiently syntactic to make this work - but that would likely involve making a "model" of <span class="math-container">$\varphi$</span> essentially an encoding of a proof of <span class="math-container">$\varphi$</span>, and then relying on the usual Curry-Howard correspondence. </p>
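The inductive pairing can be mimicked in any typed language. A hypothetical Python sketch (the encodings are my own, with type hints standing in for the type theory): pairing two terms is exactly conjunction introduction, and the projections are the elimination rules.

```python
from typing import Tuple, TypeVar

A = TypeVar("A")
B = TypeVar("B")

def conj_intro(p1: A, p2: B) -> Tuple[A, B]:
    """From a term of type A and a term of type B, build a term of A x B."""
    return (p1, p2)

def conj_elim_left(p: Tuple[A, B]) -> A:
    """Projection: from a term of A x B, recover a term of A."""
    return p[0]

proof = conj_intro("proof of Q1", "proof of Q2")
assert conj_elim_left(proof) == "proof of Q1"
```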
<p>Prove: $k^3 - k( b c + c a + a b ) + 2 a b c = 0$ always has a negative root with all positive parameters $a, b, c$</p> <p>I tried: Write $f(x)=x^3-x(ab+ac+bc)+2abc$ then $f(-\infty)=-\infty,f(0)&gt;0$. Now use the Intermediate Value Theorem. I can' t continue. Help me! Thanks!</p>
Sonal_sqrt
<p>Since $a,b,c$ are positive, the value $f(0)=2abc&gt;0$, and $f(x)\to-\infty$ as $x\to-\infty$. So by the intermediate value property there exists $r\in(-\infty,0)$ such that $f(r)=0$.</p> <p><strong>Intermediate Value Theorem</strong> Let $f:[a,b]\rightarrow \mathbb{R}$ be continuous. If $L$ is a value with $f(a)&lt;L&lt;f(b)$ or $f(a)&gt;L&gt;f(b)$, then there exists a point $c \in [a,b]$ such that $f(c)=L$.</p> <p>Look at the theorem and see if you can match what $a,b,L,c$ are in our case. </p>
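To see the argument land on an actual root, here is a bisection sketch in Python (parameter values and helper names are my own illustration): walk left until the polynomial goes negative, then bisect against $f(0)=2abc>0$.

```python
# f(x) = x^3 - x(ab + bc + ca) + 2abc for positive parameters a, b, c.
def make_f(a, b, c):
    return lambda x: x**3 - x * (a*b + b*c + c*a) + 2*a*b*c

def negative_root(a, b, c):
    f = make_f(a, b, c)
    lo = -1.0
    while f(lo) >= 0:          # walk left until f(lo) < 0
        lo *= 2
    hi = 0.0                   # f(0) = 2abc > 0
    for _ in range(100):       # bisection; IVT guarantees a root in (lo, 0)
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

root = negative_root(1, 2, 3)
assert root < 0 and abs(make_f(1, 2, 3)(root)) < 1e-9
```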
dxiv
<p>Alt. hint (without IVT): &nbsp; by <a href="https://en.wikipedia.org/wiki/Vieta%27s_formulas" rel="nofollow noreferrer">Vieta's relations</a> the product of the three roots is $\,-2abc \lt 0\,$. Since the polynomial has real coefficients:</p> <ul> <li><p>either all three roots are real, in which case at least one must be negative;</p></li> <li><p>or one root is real and the other two are complex conjugates, in which case the product of the two complex roots equals the square of their modulus, so the real root must be negative.</p></li> </ul>
<p>I want to find an elegant method to rearrange these two sublists:</p> <pre><code>SeedRandom[1] list = {RandomInteger[10, {4, 2}], RandomInteger[{10, 30}, {4, 2}]} </code></pre> <blockquote> <p>{{{1,4},{0,7},{0,0},{8,6}},{{11,20},{11,11},{25,17},{27,16}}}</p> </blockquote> <p>Make these two sublists’ elements have shortest distance from inside to outside. This is my current method:</p> <pre><code>MapAt[Reverse, Transpose[ Reap[Nest[ MapThread[ DeleteCases, {#, Sow[First[ MinimalBy[Transpose[{First /@ Nearest @@ #, Last[#]}], N[EuclideanDistance @@ #] &amp;]]]}] &amp;, list, Length[First[list]]]][[2, 1]]], {1}] </code></pre> <blockquote> <p>{{{0,0},{1,4},{0,7},{8,6}},{{11,11},{11,20},{25,17},{27,16}}}</p> </blockquote> <p><img src="https://i.stack.imgur.com/77NCM.png" alt=""> </p> <p>Show it in graphic:</p> <pre><code>ListPlot[Map[Labeled[#, ToString[#]] &amp;, #] &amp; /@ rearrangPoint, PlotStyle -&gt; PointSize[.03]]~Show~ ListLinePlot[ Reap[Nest[ MapThread[ DeleteCases, {#, Sow[First[ MinimalBy[Transpose[{First /@ Nearest @@ #, Last[#]}], N[EuclideanDistance @@ #] &amp;]]]}] &amp;, list, Length[First[list]]]][[2, 1]], PlotStyle -&gt; ColorData[3], PlotLegends -&gt; Automatic, AspectRatio -&gt; Automatic] </code></pre> <p><img src="https://i.stack.imgur.com/r6jJP.png" alt=""> </p> <p>But I have to say this method is too ugly.</p>
Coolwater
<p>Borrowing WReach's idea of using <code>DeleteDuplicates</code>:</p> <pre><code>Module[{L = Tuples[list]}, L = L[[Ordering[EuclideanDistance @@@ L, All, Less]]]; L = DeleteDuplicates[L, Or @@ MapThread[Equal, {##}] &amp;]; {Reverse[L[[All, 1]]], L[[All, 2]]}] </code></pre> <hr> <p>Edit by yode:</p> <pre><code>Transpose[DeleteDuplicates[SortBy[Tuples[list], N[EuclideanDistance @@ #] &amp;], ContainsAny]] // {Reverse[#], #2} &amp; @@ # &amp; </code></pre> <hr> <p>Edit by coolwater</p> <p>Note that both of the above code fragments are wrong because of repetitions when choosing an unlucky seed:</p> <pre><code>SeedRandom[63112] list = {RandomInteger[10, {4, 2}], RandomInteger[{10, 30}, {4, 2}]} </code></pre> <p>The following adds the distinguishability needed within <code>DeleteDuplicates</code>:</p> <pre><code>Module[{L = Tuples[Range[Length[Last[list]]], 2]}, L = L[[Ordering[EuclideanDistance @@@ Tuples[list], All, Less]]]; L = DeleteDuplicates[L, MemberQ[# - #2, 0] &amp;]; {list[[1, Reverse[L[[All, 1]]]]], list[[2, L[[All, 2]]]]}] </code></pre>
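For comparison, the same greedy idea can be sketched outside Mathematica. The following Python version is my own (its tie-breaking may differ from the code above): sort all cross pairs by Euclidean distance, then keep each pair whose indices are still unused.

```python
from itertools import product
from math import dist

# Sample data matching the question's SeedRandom[1] lists.
inner = [(1, 4), (0, 7), (0, 0), (8, 6)]
outer = [(11, 20), (11, 11), (25, 17), (27, 16)]

# All cross pairs, closest first.
pairs = sorted(product(range(len(inner)), range(len(outer))),
               key=lambda ij: dist(inner[ij[0]], outer[ij[1]]))

used_i, used_j, matching = set(), set(), []
for i, j in pairs:
    if i not in used_i and j not in used_j:   # greedy: indices used once
        used_i.add(i)
        used_j.add(j)
        matching.append((inner[i], outer[j]))

print(matching)   # closest available pairs first
```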
<p>Map the common part of the disks $|z|&lt;1$ and $|z-1|&lt;1$ onto the inside of the unit circle. Choose the mapping so that the two symmetries are preserved.</p> <p>I don't really know how to approach this.</p> <p>Any suggestions on how to start constructing such a linear transformation?</p> <p>Thanks in advance!</p>
Cameron Buie
<p>For a real-world example, suppose $A$ is the set of all humans who have ever lived and that $a\:R\:b$ means "$a$ is an ancestor of $b$." Well, if $S:=R\circ R$, then $a\:S\:b$ means that "$a$ is an ancestor of one of $b$'s parents." In particular, then, if $a$ is a parent of $b$, then $a\:R\:b$, but it is <strong>not</strong> the case that $a\:S\:b$. It <em>is</em> true that $a\:R\:b$ whenever $a\:S\:b$, of course (by definition, since $R$ is transitive), so $R\circ R=S\subsetneq R.$</p>
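The ancestor example can be modeled concretely. A small Python sketch (the four-person chain is my own toy data): $R$ is the transitive "ancestor" relation, $S = R\circ R$ is "ancestor of a parent", and $S$ comes out a proper subset of $R$.

```python
# A four-generation chain: g -> p -> c -> k ("is a parent of").
parent = {("g", "p"), ("p", "c"), ("c", "k")}

def transitive_closure(rel):
    rel = set(rel)
    while True:
        extra = {(a, d) for a, b in rel for c, d in rel if b == c} - rel
        if not extra:
            return rel
        rel |= extra

R = transitive_closure(parent)                        # ancestor relation
S = {(a, c) for a, b in R for b2, c in R if b == b2}  # R ∘ R

assert S < R                           # proper subset: S loses parent pairs
assert ("g", "p") in R and ("g", "p") not in S
```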
<p>What are all 4-regular graphs such that every edge in the graph lies in a unique 4-cycle?</p> <p>Among all such graphs, if we impose the further restriction that any two 4-cycles in the graph have at most one vertex in common, can we characterize them in some way?</p> <p>When is it possible to draw such a graph in the plane so that every 4-cycle is of the form (a,c)-(b,c)-(b,d)-(a,d)-(a,c) for some a,b,c,d?</p>
The Masked Avenger
<p>Consider two labeled squares, the vertices of one labeled from the alphabet abcd, the other from 1234. We are going to identify one or more pairs of vertices while maintaining two constraints: induced edges are not identified, and vertices labeled from the same alphabet are not identified.</p> <p>It is clear that there are 16 ways to make one pair of vertices. Once one such pair is made, there are for each pair precisely 4 ways to make a second disjoint pair. Making a third pair violates the edge identification condition. So there are 48 distinct ways (after removing duplicated efforts) to identify two labeled squares.</p> <p>The idea now is to make a brute force enumeration extending this to larger sets of squares. Even if one considers the labels as distinct, it will be a challenge to list those identifications that do not induce additional four-cycles. Further, even with software to figure out isomorphism types, I think the number of such types will be exponential in the number of squares, if not doubly exponential, as it seems to me to be enumerating certain 4-designs.</p>
<p>If a finite-dimensional vector space $V$ is a direct sum of two subspaces $W_1$ and $W_2$, prove that $V^* = W_1^0 \oplus W_2^0$.</p> <p>Where $V^*$ is the dual space of $V$ and $W^0$ is the annihilator of $W$.</p>
guest196883
<p>We have $W_1^0\cap W_2^0 = \{0\}$: a form annihilating both $W_1$ and $W_2$ annihilates their sum $W_1+W_2=V$, so it is the zero form. Therefore the sum $W_1^0+W_2^0$ is direct. Now, for a subspace $U \subseteq V$ in the decomposition, let $\rho_U = \operatorname{id}$ on $U$ and $0$ on the complementary summand. Then $$\rho_{W_1}+\rho_{W_2}= \operatorname{id}$$ and so $$f = f \circ(\rho_{W_1}+\rho_{W_2}) = f \circ\rho_{W_1} + f \circ\rho_{W_2 } \in W_2^0 \oplus W_1^0$$</p> <p>for any $f \in V^*$. </p>
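A numeric illustration of the decomposition (my own example in $V=\mathbb{R}^4$ with the coordinate splitting $W_1=\operatorname{span}(e_1,e_2)$, $W_2=\operatorname{span}(e_3,e_4)$): $f\circ\rho_{W_1}$ kills $W_2$, $f\circ\rho_{W_2}$ kills $W_1$, and the two add back up to $f$.

```python
# Projections onto W1 along W2 and onto W2 along W1.
rho1 = lambda v: [v[0], v[1], 0, 0]
rho2 = lambda v: [0, 0, v[2], v[3]]

f = lambda v: 3*v[0] - v[1] + 2*v[2] + 5*v[3]   # sample functional in V*

f1 = lambda v: f(rho1(v))    # vanishes on W2, so f1 lies in W2^0
f2 = lambda v: f(rho2(v))    # vanishes on W1, so f2 lies in W1^0

v = [1.0, -2.0, 4.0, 0.5]
assert f1(v) + f2(v) == f(v)                     # f = f∘rho1 + f∘rho2
assert f1([0, 0, 7, 9]) == 0 and f2([7, 9, 0, 0]) == 0
```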
<p>This person has been on all seven continents. But this same person has never been to Brazil.</p> <p>Contrary/Consistent: I would say it's consistent because Brazil is not a continent.</p> <p>am i right?</p>
rurouniwallace
<p>I'll take a bit more of a rigorous approach to this:</p> <p>Brazil is in South America. Therefore, someone having been to Brazil $\to$ he has been to South America. However, this is only a one-way implication. That is to say, someone having been to South America $\not\to$ he has been to Brazil.</p> <p>So even though having been to every continent $\to$ having been to South America, having been to South America $\not\to$ having been to Brazil.</p>
<p>When is it possible to make a change of variables in the limit?</p> <p>For example <span class="math-container">$\lim_{x \to \infty}(\ln x/x)$</span>, can I change <span class="math-container">$x=e^{y}$</span>?</p> <p>Then <span class="math-container">$\lim_{x \to \infty}(\ln x/x)= \lim_{y \to \infty}(y/e^{y})$</span>?</p> <p>How can I prove that the change of variables is valid?</p>
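A numeric sanity check that the two expressions agree along the substitution $x=e^y$ (this only illustrates the equality of values, it does not prove the change of variables is valid):

```python
import math

# ln(x)/x at x = e^y equals y/e^y by construction; both head to 0.
for y in [5.0, 10.0, 20.0]:
    x = math.exp(y)
    assert math.isclose(math.log(x) / x, y / math.exp(y))

print(math.log(1e9) / 1e9)   # small, and shrinking as x grows
```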
Community
<p>A solution without calculation.</p> <p>from <span class="math-container">$tr(A)=a-b=1$</span>, we deduce that</p> <p><span class="math-container">$A^2=I$</span> IFF <span class="math-container">$A$</span> is diagonalizable and <span class="math-container">$\{1,1\}\subset spectrum(A)$</span> </p> <p>IFF <span class="math-container">$rank(A-I)=1$</span>.</p> <p>Clearly, we see that <span class="math-container">$rank(A-I)=rank\begin{pmatrix}-1&amp;b+1&amp;b\\1&amp;-b-1&amp;-b\\-1&amp;b+1&amp;b\end{pmatrix}=1$</span>.</p>
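The rank computation can be confirmed directly: every row of the displayed matrix is $\pm(-1,\,b+1,\,b)$, so the rank is $1$ for any $b$. A small check (the sample value of $b$ is my own):

```python
# Rows of A - I are proportional to (-1, b+1, b), hence rank(A - I) = 1.
b = 2
M = [[-1, b + 1, b],
     [1, -b - 1, -b],
     [-1, b + 1, b]]

assert M[1] == [-x for x in M[0]]      # row 2 = -(row 1)
assert M[2] == M[0]                    # row 3 = row 1
assert any(x != 0 for x in M[0])       # a nonzero row, so rank exactly 1
```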
<p>I know which step is wrong in the following argument, but would like to have contributors' explanations of <em>why</em> it is wrong.</p> <p>We assume below that weather forecasts always predict whether or not it is going to rain, so <em>not forecast to rain</em> means the same as <em>forecast not to rain</em>. We shall also assume that forecasts are not always right.</p> <p>It is not generally true that the probability of rain when forecast is equal to that of its having been forecast to rain when it does rain. Indeed let us assume that <span class="math-container">$$P(\text{R}|\text{F}_{\text R}) \neq P(\text{F}_{\text R}|\text{R}).$$</span> But, having been forecast to rain, it will either rain or not rain (<span class="math-container">$\bar{\text{R}}$</span>), so <span class="math-container">$$P(\text{R}|\text{F}_{\text R})+P(\overline {\text{R}}|\text{F}_{\text R})=1\ \ \ \ \ \ \mathbf{eq. 1}$$</span> Likewise, if it rains, it will either have been forecast to rain or (we are assuming) not forecast to rain (<span class="math-container">$\overline{\text{F}_{\text R}}$</span>), so <span class="math-container">$$P(\text{F}_{\text R}|\text{R})+P(\overline{\text{F}_{\text R}}|\text{R})=1 \ \ \ \ \ \ \mathbf{eq. 2}$$</span> But we know that &quot;If rain then not forecast to rain&quot; is logically equivalent to &quot;If forecast to rain then no rain&quot;. So the corresponding conditional probabilities must be equal, that is <span class="math-container">$$P(\overline{\text{F}_{\text R}}|\text{R})=P(\overline {\text{R}}|\text{F}_{\text R})\ \ \ \ \ \ \ \ \ \ \ \ \mathbf{eq. 3}$$</span> It follows immediately from <span class="math-container">$\mathbf {eqs 1,\ 2\ and\ 3}$</span> that <span class="math-container">$$P(\text{R}|\text{F}_{\text R}) = P(\text{F}_{\text R}|\text{R}).$$</span> which is contrary to our hypothesis.</p>
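To see concretely that eq. 3 is the false step, one can write down a joint distribution where the two conditional probabilities differ (the numbers below are my own toy example):

```python
# Toy joint distribution over (rain, forecast): eq. 3 compares quantities
# that condition on different events, and here they disagree.
P = {("R", "F"): 0.30, ("R", "noF"): 0.10,
     ("noR", "F"): 0.20, ("noR", "noF"): 0.40}

P_F = P[("R", "F")] + P[("noR", "F")]      # P(forecast rain) = 0.5
P_R = P[("R", "F")] + P[("R", "noF")]      # P(rain)          = 0.4

P_noR_given_F = P[("noR", "F")] / P_F      # P(no rain | forecast) = 0.4
P_noF_given_R = P[("R", "noF")] / P_R      # P(no forecast | rain) = 0.25

assert P_noR_given_F != P_noF_given_R      # eq. 3 fails here
# while eqs. 1 and 2 hold, as they must:
assert abs(P[("R", "F")] / P_F + P_noR_given_F - 1) < 1e-12
assert abs(P[("R", "F")] / P_R + P_noF_given_R - 1) < 1e-12
```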
J. W. Tanner
<p>Any set can have a metric, because the discrete metric can be applied to all sets.</p> <p>See <a href="https://en.wikipedia.org/wiki/Discrete_space" rel="nofollow noreferrer">here</a> and <a href="https://en.wikipedia.org/wiki/Metric_space#Examples_of_metric_spaces" rel="nofollow noreferrer">here</a> for further details about it.</p>
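A minimal sketch of the discrete metric, with the three metric axioms checked on a sample set (the sample elements are arbitrary):

```python
from itertools import product

def d(x, y):
    """Discrete metric: 0 if the points coincide, 1 otherwise."""
    return 0 if x == y else 1

S = ["apple", 42, (1, 2), None]        # any set works
for x, y, z in product(S, repeat=3):
    assert d(x, y) == 0 if x == y else d(x, y) > 0   # identity of indiscernibles
    assert d(x, y) == d(y, x)                        # symmetry
    assert d(x, z) <= d(x, y) + d(y, z)              # triangle inequality
```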
<p>Let $B,C,D \geq 1$ be positive integers and $(b_n)_{n\geq 0}$ be a sequence with $b_0 = 1, b_n = B b_{n-1} + C B^n + D$ for $n \geq 1$.</p> <p>Prove that </p> <p>(a) $\sum_{n\geq 0}^\infty b_n t^n$ is a rational function</p> <p>(b) identify a formula for $b_n$</p> <hr> <p>Hi!</p> <p>(a)</p> <p>As I understand it, I need to show that $\sum_{n\geq 0}^\infty b_n t^n$ can be rewritten as a fraction of two polynomials $\frac{P(t)}{Q(t)}$ where $Q(t)$ is not the zero polynomial, right?</p> <p>There is no fraction in the recurrence formula given above, so how do I show that? Can't I just take $Q(t) = 1$?</p> <p>(b)</p> <p>I do already have the formula $b_n = B b_{n-1} + C B^n + D$, so I need one without any $b_{n-1}$ on the right side. But how do I eliminate it? If I divide by $b_n$ I still don't know how to calculate $\frac{b_{n-1}}{b_n}$ (if this would actually help). </p> <p>Any ideas how this might be done?</p> <p>Thanks in advance!</p>
Ross Millikan
<p>For part b), you can just find the first few terms by hand: $b_0=1$</p> <p>$b_1=B+CB+D=B(C+1)+D$</p> <p>$b_2=B^2+CB^2+DB+CB^2+D=B^2(2C+1)+D(B+1)$</p> <p>$b_3=B^3(3C+1)+D(B^2+B+1)$</p> <p>Maybe you can see a pattern and prove it by induction.</p>
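The pattern these terms suggest, $b_n = B^n(nC+1) + D(B^{n-1}+\cdots+B+1)$, can be tested against the recurrence before proving it by induction (a Python sketch; the parameter triples are chosen arbitrarily):

```python
# Compare the recurrence b_n = B*b_{n-1} + C*B^n + D, b_0 = 1,
# with the conjectured closed form b_n = B^n (nC + 1) + D (B^{n-1} + ... + 1).
def recurrence(B, C, D, n):
    b = 1
    for k in range(1, n + 1):
        b = B * b + C * B**k + D
    return b

def closed_form(B, C, D, n):
    geom = sum(B**j for j in range(n))        # B^{n-1} + ... + B + 1
    return B**n * (n * C + 1) + D * geom

for B, C, D in [(2, 3, 5), (1, 1, 1), (7, 2, 9)]:
    for n in range(8):
        assert recurrence(B, C, D, n) == closed_form(B, C, D, n)
```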
<p>I have a few challenges setting up the bounds of integration for the region <span class="math-container">$$U = \{(x,y) | -1 \leq x-y \leq 1 , \quad 1 \leq xy \leq 2 \}$$</span> My ultimate goal is to solve <span class="math-container">$$\iint_U x^2y + xy^2 \,dxdy = \iint_U f(x,y) \,dxdy$$</span></p> <p>Here is a plot of the domain I have to consider:</p> <p><span class="math-container">$\hspace{4.5cm}$</span><img src="https://i.stack.imgur.com/l5qdjm.png" alt="plot of the region U"></p> <p>Since the region is symmetric, I decided to first focus on the first quadrant. The domain is not simple, therefore I divided it into simple domains by introducing two new bounds <span class="math-container">$x=1$</span> and <span class="math-container">$y=1$</span>, yielding three simple domains: (<a href="https://www.desmos.com/calculator/8cshs9eejc" rel="nofollow noreferrer">Link to the interactive graph on Desmos</a>)</p> <p><span class="math-container">$\hspace{4.5cm}$</span><img src="https://i.stack.imgur.com/2IGJvm.png" alt="first-quadrant piece split into three simple domains"></p> <p>Starting with the region below <span class="math-container">$y=1$</span>, then the middle region, then the upper one, I came up with the following limits:</p> <p><span class="math-container">$$\int_{ y= \frac{\sqrt5 - 1}{2} }^1 \int_{\frac1y}^{y+1} \space f(x,y) \space dxdy + \int_{ x=1 }^2 \int_{1}^{\frac2x} \space f(x,y) \space dydx + \int_{ y=1 }^2 \int_{\frac1y}^{1} \space f(x,y) \space dxdy $$</span></p> <p>Now, the order of integration turned out different for each of the summands, which is making me doubt that my setup is correct. Also, it seems to me that the points on the lines <span class="math-container">$x=1$</span> and <span class="math-container">$y=1$</span> will be counted twice, since they border two different integrals. Thus, could anyone kindly clarify this?</p>
epi163sqrt
<p>We obtain <span class="math-container">\begin{align*} \sum_{k=0}^\infty\frac{k!}{(2k+3)!!}&amp;=\sum_{k=0}^\infty\frac{k!(2k+2)!!}{(2k+3)!}\\ &amp;=\sum_{k=0}^\infty\frac{k!2^{k+1}(k+1)!}{(2k+3)!}\\ &amp;=\sum_{k=0}^\infty\binom{2k}{k}^{-1}\frac{2^k}{(2k+1)(2k+3)}\\ &amp;=\sum_{k=0}^\infty\binom{2k}{k}^{-1}\frac{2^{k-1}}{2k+1}-\sum_{k=0}^\infty\binom{2k}{k}^{-1}\frac{2^{k-1}}{2k+3}\tag{1} \end{align*}</span></p> <p>We use a representation of reciprocal binomial coefficients via the <em><a href="https://en.wikipedia.org/wiki/Beta_function" rel="nofollow noreferrer">Beta function</a></em>:</p> <p><span class="math-container">\begin{align*} \binom{n}{k}^{-1}=(n+1)\int_0^1z^k(1-z)^{n-k}\,dz\tag{2} \end{align*}</span></p> <blockquote> <p>and the left-hand series of (1) can be calculated as</p> <p><span class="math-container">\begin{align*} \color{blue}{\sum_{k=0}^\infty\binom{2k}{k}^{-1}\frac{2^{k-1}}{2k+1}} &amp;=\sum_{k=0}^\infty 2^{k-1}\int_0^1z^k(1-z)^k\,dz\tag{3}\\ &amp;=\frac{1}{2}\int_{0}^{1}\sum_{k=0}^\infty \left(2z(1-z)\right)^k\,dz\\ &amp;=\frac{1}{2}\int_{0}^1\frac{dz}{1-2z(1-z)}\tag{4}\\ &amp;=\frac{1}{2}\int_{0}^{1}\frac{dz}{z^2+(1-z)^2}\\ &amp;=\frac{1}{2}\int_{0}^{\infty}\frac{du}{1+u^2}\tag{5}\\ &amp;\,\,\color{blue}{=\frac{\pi}{4}}\tag{6} \end{align*}</span></p> </blockquote> <p><em>Comment:</em></p> <ul> <li><p>In (3) we use the identity (2).</p></li> <li><p>In (4) we apply the <em><a href="https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series" rel="nofollow noreferrer">geometric series expansion</a></em>.</p></li> <li><p>In (5) we use the substitution <span class="math-container">$u=\frac{1-z}{z}, du=-\frac{1}{z^2}dz$</span>.</p></li> </ul> <p>We also want to apply (2) to the right-hand series of (1). 
To do this conveniently we need some preparatory work: <span class="math-container">\begin{align*} \sum_{k=0}^\infty&amp;\binom{2k}{k}^{-1}\frac{2^{k-1}}{2k+3}\\ &amp;=\sum_{k=0}^\infty\frac{k!k!}{(2k)!}\cdot\frac{2^{k-1}}{2k+3}\\ &amp;=\sum_{k=0}^\infty\frac{k!(k+1)!(2k+1)}{(2k+1)!(k+1)}\cdot\frac{2^{k-1}}{2k+3}\\ &amp;=\sum_{k=0}^\infty\frac{(k+1)!(k+1)!}{(2k+2)!}\cdot\frac{2^{k+1}}{2k+3}-\sum_{k=0}^\infty\frac{k!(k+1)!}{(2k+1)!(k+1)}\cdot\frac{2^{k-1}}{2k+3}\\ &amp;=\sum_{k=0}^\infty\binom{2k+2}{k+1}^{-1}\frac{2^{k+1}}{2k+3}-\sum_{k=0}^\infty\binom{2k}{k}^{-1}\frac{2^{k-1}}{(2k+1)(2k+3)}\\ &amp;=\sum_{k=0}^\infty\binom{2k+2}{k+1}^{-1}\frac{2^{k+1}}{2k+3} -\sum_{k=0}^\infty\binom{2k}{k}^{-1}\frac{2^{k-2}}{2k+1} +\sum_{k=0}^\infty\binom{2k}{k}^{-1}\frac{2^{k-2}}{2k+3}\tag{7}\\ \end{align*}</span> In the last line (7) we use a partial fraction decomposition as we did in (1).</p> <blockquote> <p>We are now well prepared to do the calculation. We obtain together with (6):</p> <p><span class="math-container">\begin{align*} \color{blue}{\sum_{k=0}^\infty \binom{2k}{k}^{-1}\frac{2^{k-2}}{2k+3}} &amp;=\sum_{k=0}^\infty\binom{2k+2}{k+1}^{-1}\frac{2^{k+1}}{2k+3}-\frac{\pi}{8}\\ &amp;=\sum_{k=0}^\infty2^{k+1}\int_{0}^1z^{k+1}(1-z)^{k+1}\,dz-\frac{\pi}{8}\\ &amp;=\sum_{k=1}^\infty2^k\int_{0}^1z^k(1-z)^k\,dz-\frac{\pi}{8}\\ &amp;=\frac{\pi}{2}-2^0\int_{0}^1\,dz-\frac{\pi}{8}\\ &amp;\,\,\color{blue}{=\frac{3}{8}\pi-1}\tag{8} \end{align*}</span></p> <p>We finally conclude from (1) together with (6) and (8) <span class="math-container">\begin{align*} \color{blue}{\sum_{k=0}^\infty\frac{k!}{(2k+3)!!}} &amp;=\sum_{k=0}^\infty\binom{2k}{k}^{-1}\frac{2^{k-1}}{2k+1}-\sum_{k=0}^\infty\binom{2k}{k}^{-1}\frac{2^{k-1}}{2k+3}\\ &amp;=\frac{\pi}{4}-2\left(\frac{3}{8}\pi-1\right)\\ &amp;\,\,\color{blue}{=2-\frac{\pi}{2}} \end{align*}</span></p> <p>and the claim follows.</p> </blockquote>
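The closed form can be checked numerically (a Python sketch; the terms decay roughly like $2^{-k}$, so a few dozen terms suffice):

```python
import math

# Check sum_{k>=0} k!/(2k+3)!! = 2 - pi/2 numerically.
def double_factorial_odd(m):
    """m!! for odd m >= 1."""
    result = 1
    while m > 1:
        result *= m
        m -= 2
    return result

total = sum(math.factorial(k) / double_factorial_odd(2 * k + 3)
            for k in range(60))
assert abs(total - (2 - math.pi / 2)) < 1e-12
print(total)   # ≈ 0.4292, matching 2 - pi/2
```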
3,520,722
<p>I have a few challenges setting up the bounds of integration for the region <span class="math-container">$$U = \{(x,y) | -1 \leq x-y \leq 1 , \quad 1 \leq xy \leq 2 \}$$</span> My ultimate goal is to solve <span class="math-container">$$\iint_U x^2y + xy^2 dxdy = \iint_U f(x,y) dxdy$$</span></p> <p>Here is a plot of the domain I have to consider:</p> <p><span class="math-container">$\hspace{4.5cm}$</span><img src="https://i.stack.imgur.com/l5qdjm.png" alt="enter image description here"></p> <p>Since the region is symmetric, I decided to first focus on the first quadrant. The domain is not simple, therefore I divided it into simple domains by introducing two new bounds <span class="math-container">$x=1$</span> and <span class="math-container">$y=1$</span>, yielding three simple domains: (<a href="https://www.desmos.com/calculator/8cshs9eejc" rel="nofollow noreferrer">Link to the interactive graph on Desmos</a>)</p> <p><span class="math-container">$\hspace{4.5cm}$</span><img src="https://i.stack.imgur.com/2IGJvm.png" alt="enter image description here"></p> <p>Starting with the region below <span class="math-container">$y=1$</span>, then the middle region, then the upper one, I came up with the following limits:</p> <p><span class="math-container">$$\int_{ y= \frac{\sqrt5 - 1}{2} }^1 \int_{\frac1y}^{y+1} \space f(x,y) \space dxdy + \int_{ x=1 }^2 \int_{1}^{\frac2x} \space f(x,y) \space dydx + \int_{ y=1 }^2 \int_{\frac1y}^{1} \space f(x,y) \space dxdy $$</span></p> <p>Now, the order of integration turned out different for each of the summands, which is making me doubt that my setup is correct. Also, it seems to me that the points on the lines <span class="math-container">$x=1$</span> and <span class="math-container">$y=1$</span> will be counted twice, since they border two different integrals. Could anyone kindly clarify this?</p>
Felix Marin
85,343
<p><span class="math-container">$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</span> <span class="math-container">\begin{align} \sum_{k = 0}^{\infty}{k! \over \prod_{j = 0}^{k}\pars{2j + 3}} &amp; = \sum_{k = 0}^{\infty}{k! \over 2^{k + 1}\prod_{j = 0}^{k}\pars{j + 3/2}} = \sum_{k = 0}^{\infty}{k! \over 2^{k + 1}\pars{3/2}^{\overline{k + 1}}} \\[5mm] &amp; = \sum_{k = 0}^{\infty}{1 \over 2^{k + 1}}\,{k! \over \Gamma\pars{3/2 + k + 1}/\Gamma\pars{3/2}} \\[5mm] &amp; = \sum_{k = 0}^{\infty}\,{1 \over 2^{k + 1}}\, {\Gamma\pars{k + 1}\Gamma\pars{3/2} \over \Gamma\pars{k + 5/2}} \\[5mm] &amp; = \sum_{k = 0}^{\infty}{1 \over 2^{k + 1}}\, \int_{0}^{1}t^{k}\pars{1 - t}^{1/2}\,\dd t \\[5mm] &amp; = {1 \over 2}\int_{0}^{1}\root{1 - t} \sum_{k = 0}^{\infty}\pars{t \over 2}^{k}\,\dd t \\[5mm] &amp; = \int_{0}^{1}{\root{1 - t} \over 2 - t}\,\dd t \,\,\,\stackrel{t\ =\ 1 - x^{2}}{=}\,\,\, 2\int_{0}^{1}\pars{1 - {1 \over 1 + x^{2}}}\,\dd x \\[5mm] &amp; = \bbx{2 - {\pi \over 2}}\ \approx\ 0.4292 \end{align}</span></p>
1,450,176
<p>I would like to evaluate this limit: $$\displaystyle \lim_{x \to \infty} \left({x\sin \frac{1}{x} }\right)^{1-x}.$$</p> <p>I used a Taylor expansion at $y=0$, where $y=1/x$ and $x \to \infty$, and I ran into the indeterminate form ${1}^{-\infty }$, so I can't conclude that the limit equals $1$. Is there a rigorous way to evaluate this limit?</p> <p>Thank you for any help.</p>
DirkGently
88,378
<p>Let $y=1/x$ and $y\to 0^+$. We have \begin{align*} \left(\frac{1}{y}\sin y\right)^{1-1/y} &amp; =\left(\frac{1}{y}\left(y+O\left(y^{3}\right)\right)\right)^{1-1/y}\\ &amp; =\left(1+O\left(y^{2}\right)\right)^{1-1/y}\\ &amp; =\exp\left(\ln\left(1+O\left(y^{2}\right)\right)\frac{y-1}{y}\right)\\ &amp; =\exp\left(O\left(y^{2}\right)\frac{y-1}{y}\right)\\ &amp; =\exp\left(O\left(y\right)\right). \end{align*} So the limit as $y\to0^+$ is 1.</p>
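Numerically the convergence to $1$ is easy to see; the sketch below (names are just for illustration) evaluates the expression for growing $x$:

```python
import math

def f(x):
    # the expression (x * sin(1/x)) ** (1 - x)
    return (x * math.sin(1.0 / x)) ** (1.0 - x)

for x in [10.0, 100.0, 1000.0, 10000.0]:
    print(x, f(x))
# the values approach 1 as x grows
```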
4,652,260
<p>I am self-studying Functional Analysis from Kreyszig's <em>Introductory Functional Analysis with Applications</em>. In section-1.3, he proves that the sequence space <span class="math-container">$\ell^\infty$</span> with the metric <span class="math-container">$d_\infty(x,y)=\sup_i\{|x_i-y_i|\}$</span> is not separable.</p> <p>While going through the author's proof and some other ones online, I realized that there are two main ways to show that <span class="math-container">$\ell^\infty$</span> is not separable: <span class="math-container">$(1)$</span> Showing that every dense subset of <span class="math-container">$\ell^\infty$</span> is uncountable, OR <span class="math-container">$(2)$</span> Showing that there is no countable subset of <span class="math-container">$\ell^\infty$</span> which is dense in <span class="math-container">$\ell^\infty$</span>. However, I have thought about the following way of showing that <span class="math-container">$\ell^\infty$</span> is not separable that does not use either of these strategies:</p> <blockquote> <p>Let <span class="math-container">$Y\subset\ell^\infty$</span> be the set consisting of all sequences with <span class="math-container">$0$</span>'s or <span class="math-container">$1$</span>'s. Then <span class="math-container">$Y$</span> is uncountable. We also notice that the restriction of the metric <span class="math-container">$d_\infty$</span> to <span class="math-container">$Y$</span> yields the discreet metric. 
But then by Result-<span class="math-container">$1.3.8$</span>, <span class="math-container">$\ell^\infty$</span> cannot be separable since it is uncountable.</p> </blockquote> <p>I am using the following Result from Kreyszig:</p> <blockquote> <p><strong>Result-<span class="math-container">$1.3.8$</span>:</strong> A discreet metric space <span class="math-container">$X$</span> is separable iff <span class="math-container">$X$</span> is countable.</p> </blockquote> <p>I feel like I am in the right direction but need more details to justify my claims. Could someone tell me if this way of showing <span class="math-container">$\ell^\infty$</span> is not separable is valid or what details to add? TIA.</p>
Robert Israel
8,508
<p>Other than misspelling &quot;discrete&quot; as &quot;discreet&quot; and writing the ambiguous &quot;it is uncountable&quot; rather than &quot;<span class="math-container">$Y$</span> is uncountable&quot;, your proof is essentially correct. You might note that in a separable metric space, any family of disjoint open sets must be countable.</p>
3,270,245
<p>I was provided this graph and asked if it passed the Extreme Value Theorem. I thought yes. I can see that this function is discontinuous...however, I was informed that this graph actually fails the Extreme Value Theorem due to the hole at x = 2. This caught me off guard, because I thought for certain there was a maximum. </p> <p><a href="https://i.stack.imgur.com/FKBcJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FKBcJ.png" alt="Picture of Graph that Fails EVT"></a></p> <p>Is the reason that there is no maximum for this graph because if someone were to say, "Well, the max is clearly 7.999"...another person could say, "Actually it's 7.9999"...and then another person could say...etc?</p>
Mohammad Riazi-Kermani
514,496
<p>The maximum and minimum of a set are elements of the set. In your case the maximum is not attained and, as you have explained, any number close to <span class="math-container">$8$</span> is not a maximum.</p>

<p>That is why we have the notion of the supremum of a set, which is the least upper bound of the set and does not have to belong to the set.</p>

<p>We say that <span class="math-container">$8$</span> is the supremum of the values of this function, not a maximum.</p>
3,270,245
<p>I was provided this graph and asked if it passed the Extreme Value Theorem. I thought yes. I can see that this function is discontinuous...however, I was informed that this graph actually fails the Extreme Value Theorem due to the hole at x = 2. This caught me off guard, because I thought for certain there was a maximum. </p> <p><a href="https://i.stack.imgur.com/FKBcJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FKBcJ.png" alt="Picture of Graph that Fails EVT"></a></p> <p>Is the reason that there is no maximum for this graph because if someone were to say, "Well, the max is clearly 7.999"...another person could say, "Actually it's 7.9999"...and then another person could say...etc?</p>
mlchristians
681,917
<p>Not all the hypotheses of the EVT are satisfied; in this case, your function is not continuous on the closed interval <span class="math-container">$[1,3]$</span>.</p>
1,704,555
<blockquote> <p>If $I\subseteq J$ are ideals in a polynomial ring of $n$ variables, how do I prove that $I = J$ if $\operatorname{in}_{\lt}(I)=\operatorname{in}_{\lt}(J)$, where $\lt$ is any monomial ordering?</p> </blockquote> <p>Obviously it suffices to prove that $J \subseteq I$. I'm stuck with how to go forward once I pick an arbitrary element $f \in J$ and have that $\operatorname{in}_{\lt}(f) \in \operatorname{in}_{\lt} (J)=\operatorname{in}_{\lt}(I)$.</p>
Matematleta
138,929
<p>An arbitrary vector in the plane is $(x,y,-x-5y)-(0,0,0)$, so we get the subspace </p>

<p>$\left \{ x(1,0,-1) +y(0,1,-5)\right \}$.</p>

<p>A vector normal to this subspace is $(1,5,1)$, so that if $\vec v\in \mathbb R^3$, then </p>

<p>$\vec v=x(1,0,-1) +y(0,1,-5)+t(1,5,1)$ and so </p>

<p>$T(\vec v)=x(1,0,-1) +y(0,1,-5)$. </p>
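As a quick machine check (illustrative only), one can recompute the normal as the cross product of the two spanning vectors and verify that every point of the plane is orthogonal to it:

```python
def cross(u, v):
    # cross product of two 3-vectors
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

u, w = (1, 0, -1), (0, 1, -5)   # the two spanning vectors of the plane
n = cross(u, w)
print(n)  # (1, 5, 1)

# every point (x, y, -x-5y) of the plane is orthogonal to n
ok = all(dot((x, y, -x - 5 * y), n) == 0
         for x in range(-3, 4) for y in range(-3, 4))
print(ok)  # True
```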
1,790,612
<p>Let $G$ be a compact connected semisimple Lie group and $\frak g$ its Lie algebra. It is known that the Killing form of $\frak g$ is negative definite. What about the Killing form $B$ of the complex semisimple Lie algebra ${\frak g}_{\Bbb C}={\frak g}\otimes\Bbb C$?</p> <p>In particular:</p> <blockquote> <p>If $X\in{\frak g}_{\Bbb C}$ lies in the Cartan subalgebra ${\frak h}_{\Bbb C}\subseteq{\frak g}_{\Bbb C}$ and $B(X,X)=0$, does that imply that $X=0$?</p> </blockquote>
Andreas Cap
202,204
<p>The Killing form of $\mathfrak g_{\mathbb C}$ is complex bilinear by construction, so in any complex subspace of $\mathfrak g_{\mathbb C}$ of complex dimension at least $2$ you find non-zero vectors which are isotropic. </p>
1,904,354
<p>Yeah, the title says everything, but I will explain quickly. If someone is smart and nice enough to answer, they have my admiration! Here it is: if we take an irrational number like π or e or whatever, and we write π+π-π… (etc.), what could possibly come out at infinity of this series? I hope somebody can explain this. Thanks in advance to everybody!!</p>
Jack D'Aurizio
44,121
<p>If we set $$ I_n = \int_{0}^{1}\frac{x^n}{x+5}\,dx \tag{1}$$ we clearly have $I_0=\log\frac{6}{5}$ and $$ I_n+ 5I_{n-1} = \int_{0}^{1}\frac{x^n+5 x^{n-1}}{x+5}\,dx = \int_{0}^{1}x^{n-1}\,dx = \frac{1}{n}.\tag{2} $$</p>
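The recurrence in (2), together with $I_0=\log\frac{6}{5}$, determines every $I_n$; a small numerical comparison against direct quadrature (midpoint rule, purely illustrative) confirms it:

```python
import math

def I_recursive(n):
    # I_0 = log(6/5); the recurrence (2) gives I_n = 1/n - 5 * I_{n-1}.
    # (Forward iteration amplifies rounding error by a factor 5 per step,
    # so keep n modest.)
    val = math.log(6.0 / 5.0)
    for k in range(1, n + 1):
        val = 1.0 / k - 5.0 * val
    return val

def I_quadrature(n, steps=100_000):
    # midpoint rule for the defining integral of I_n on [0, 1]
    h = 1.0 / steps
    return sum((((i + 0.5) * h) ** n / ((i + 0.5) * h + 5.0)) * h
               for i in range(steps))

print(I_recursive(5), I_quadrature(5))  # both ≈ 0.0282
```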
587,275
<p>I was trying to understand why $e^{x}$ is special by finding the derivatives of other exponential functions and comparing the results. So I tried ${\rm f}\left(x\right) = 2^{x}$, but now I'm stuck.</p> <p>Here's my final step: <strong>$\displaystyle{{\rm f}'\left(x\right) = \lim_{h \to 0}{2^{x}\left(2^{h} - 1\right) \over h}}$.</strong> </p>
xavierm02
10,385
<p>$a&gt;0$</p> <p>$a^x := e^{x\ln a}$</p> <p>$f:x\mapsto e^{x\ln a}$</p> <p>$g:x\mapsto x\ln a$</p> <p>$f=\exp \circ g$</p> <p>$f'=g' \times(\exp '\circ g)=(x\mapsto \ln a)\times(\exp\circ g)=(\ln a) \times f$</p>
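So for $a=2$ this gives $f'(x)=(\ln 2)\,2^x$. A finite-difference check (illustrative only; the helper name is made up):

```python
import math

def numeric_derivative(f, x, h=1e-6):
    # symmetric difference quotient
    return (f(x + h) - f(x - h)) / (2.0 * h)

a = 2.0
x0 = 1.5
approx = numeric_derivative(lambda t: a ** t, x0)
exact = math.log(a) * a ** x0
print(approx, exact)  # both ≈ 1.9605
```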
2,050,698
<p>So I'm studying a few special families of square matrices, the diagonal matrices, upper triangular matrices, lower triangular matrices and symmetric matrices and I just had a few questions. </p> <p>I know...</p> <p>a diagonal matrix is if every nondiagonal entry is zero, $a_{ij}$=0 whenever $i$ doesn't equal $j $.</p> <p>an upper triangular matrix is if all entries below the diagonal are zero, $a_{ij}=0$ whenever $i &gt;j$.</p> <p>a lower triangular matrix is if all entries above the diagonal are zero, $a_{ij}=0$ whenever $i &lt; j$.</p> <p>symmetric if $a_{ij}=a_{ji}$ for all $i$ and $j$.</p> <blockquote> <p>But I was just wondering, can the diagonal matrices, upper triangular matrices, lower triangular matrices and symmetric matrices have the $0_{n\times n}$? Also I know the $I_{n\times n}$ matrix is in the diagonal matrices, but can it be in the other three types?</p> </blockquote>
user361424
361,424
<p>If you know Java, you can use <a href="https://docs.oracle.com/javase/7/docs/api/java/util/Random.html#nextGaussian()" rel="nofollow noreferrer">Random.nextGaussian()</a> to get samples with mean 0 and standard deviation 1. Multiplying each number in the dataset by the desired standard deviation will get you to the standard deviation you want, and <em>then</em> (note the order) adding the desired mean to each will then get you the mean you want.</p>
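The same recipe can be sketched in Python (using `random.gauss(0, 1)`, which, like `nextGaussian()`, draws from a standard normal; the function name and seed are just for illustration):

```python
import random

def gaussian_dataset(n, mean, stdev, seed=0):
    # random.gauss(0, 1) has mean 0 and standard deviation 1;
    # scale first, then shift -- the order matters.
    rng = random.Random(seed)
    return [mean + stdev * rng.gauss(0.0, 1.0) for _ in range(n)]

data = gaussian_dataset(100_000, mean=10.0, stdev=3.0)
m = sum(data) / len(data)
v = sum((x - m) ** 2 for x in data) / len(data)
print(m, v ** 0.5)  # close to 10 and 3
```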
375,094
<p>A metric space <span class="math-container">$(M,d)$</span> is <em>doubling</em> if there exists <span class="math-container">$n$</span> such that every ball of radius <span class="math-container">$r$</span> can be covered by <span class="math-container">$n$</span> balls of radius <span class="math-container">$r/2$</span>, for all <span class="math-container">$r$</span>. For which f.g. groups <span class="math-container">$G$</span> and finite symmetric generating sets <span class="math-container">$S$</span>, is <span class="math-container">$\mathrm{Cay}(G, S)$</span> doubling under the path metric? Groups like this have polynomial growth, so they are virtually nilpotent by Gromov's theorem.</p> <p>So which virtually nilpotent groups are doubling, and for which generating sets? All, I suppose, but I got cold feet trying to do it, it seemed quite difficult straight from the definitions and I don't really know the Lie group stuff well enough.</p> <blockquote> <p>If <span class="math-container">$S$</span> is a finite symmetric generating set for a group <span class="math-container">$G$</span>, is <span class="math-container">$\mathrm{Cay}(G, S)$</span> doubling precisely when <span class="math-container">$G$</span> is virtually nilpotent?</p> </blockquote> <p>I'll note that in general (undirected) graphs, doubling implies polynomial growth, but not the other way around, consider for example the comb graph with vertices <span class="math-container">$\mathbb{Z} \times \mathbb{N}$</span> and edges <span class="math-container">$\{\{(m,n), (m,n+1)\}, \{(m,0), (m+1,0)\} \;|\; m \in \mathbb{Z}, n \in \mathbb{N}\}$</span>. But could be true for vertex-transitive graphs.</p>
Ian Agol
1,345
<p>I think this follows from a standard ball-packing argument.</p> <p>Suppose that <span class="math-container">$G$</span> with the metric <span class="math-container">$\rho$</span> induced from the Cayley graph has growth <span class="math-container">$V(R)=|B_R(1)| \sim R^d$</span>, i.e. <span class="math-container">$\exists\ 0&lt;c&lt; C$</span> such that <span class="math-container">$cR^d\leq V(R)\leq CR^d$</span>, where <span class="math-container">$B_R(1)$</span> is the open ball of radius <span class="math-container">$R$</span> about the identity (indeed, this argument works for any metric-measure space with polynomial growth of balls about every point in this sense). This holds for finitely generated nilpotent groups by a <a href="https://mathscinet.ams.org/mathscinet-getitem?mr=379672" rel="nofollow noreferrer">result of Bass</a>.</p> <p>Then take a maximal <a href="https://en.wikipedia.org/wiki/Delone_set" rel="nofollow noreferrer"><span class="math-container">$R/2$</span>-packing</a> <span class="math-container">$N_R$</span> of <span class="math-container">$B_R(1)$</span>, i.e. <span class="math-container">$N_R\subset B_R(1)$</span> where <span class="math-container">$\rho(g_1,g_2)\geq R/2$</span> for <span class="math-container">$g_1,g_2\in N_R, g_1\neq g_2$</span>. Then <span class="math-container">$B_{R/4}(g_1)\cap B_{R/4}(g_2) =\emptyset$</span>.</p> <p>By maximality, <span class="math-container">$B_R(1)\subseteq \cup_{g\in N_R} B_{R/2}(g)$</span>: if not, then we could find another point <span class="math-container">$h\in B_R(1)$</span> whose distance <span class="math-container">$\rho(h,N_R)$</span> is at least <span class="math-container">$R/2$</span>, so <span class="math-container">$N_R\cup \{h\}$</span> is an <span class="math-container">$R/2$</span>-packing, a contradiction to maximality of <span class="math-container">$N_R$</span>. 
Thus <span class="math-container">$N_R$</span> is an <span class="math-container">$R/2$</span>-net of <span class="math-container">$B_R(1)$</span>.</p> <p>Moreover, the union of the balls of radius <span class="math-container">$R/4$</span> about points of <span class="math-container">$N_R$</span> lies in <span class="math-container">$B_{5R/4}(1)$</span>. Hence <span class="math-container">$|N_R|V(R/4) \leq V(5 R/4)$</span>. So we have <span class="math-container">$|N_R| \leq \frac{V(5R/4)}{V(R/4)} \leq \frac{C (5R/4)^d}{c(R/4)^d} =C/c5^d $</span>. Hence the space is doubling.</p>
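For intuition, the polynomial growth of balls (and hence a bounded doubling ratio) is easy to observe in the simplest example, $\mathbb{Z}^2$ with its standard generators; the sketch below counts word-metric balls directly:

```python
def ball_size(R):
    # number of elements of Z^2 within word-metric distance R of the identity;
    # for the standard generators this is the L1 ball, with 2R^2 + 2R + 1 points
    return sum(1 for i in range(-R, R + 1)
                 for j in range(-R, R + 1)
                 if abs(i) + abs(j) <= R)

for R in [1, 2, 4, 8, 16, 32]:
    print(R, ball_size(2 * R) / ball_size(R))
# the doubling ratios stay bounded (they approach 2^d = 4 for d = 2)
```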
85,351
<p>It has been proven that:</p> <p>1) if $s$ is a non trivial zero $\rho$ of $\zeta(s)$ then so is $1−s$.</p> <p>2) $\zeta(s) = 2^s \pi^{s-1} \sin(\frac{\pi s}{2}) \Gamma(1-s) \zeta(1-s)$</p> <p>3) $ 0 &lt; \Re(\rho) &lt;1$</p> <p>From this it follows that when $s \to \rho$:</p> <p>$\displaystyle \lim_{s \to \rho} |\dfrac{\zeta(s)}{\zeta(1-s)}| = |2^s \pi^{s-1} \sin(\frac{\pi s}{2}) \Gamma(1-s)|=1$</p> <p>It is easy to see that the outcome will be $1$ for all $y$ in $s=\frac12 + y i$.</p> <p>But if a $\rho$ would lie off this critical line, it also must reside in 'spots' where $\displaystyle \lim_{s \to \rho} |\dfrac{\zeta(s)}{\zeta(1-s)}|=1$.</p> <p>On which points off the critical line could this occur? I found a surprisingly small domain (no proof).</p> <p>The blue line shows the only values where:</p> <p>$\displaystyle |2^s \pi^{s-1} \sin(\frac{\pi s}{2}) \Gamma(1-s)|=1$, $s=x + y i$, $ 0 \le x \le 1$.</p> <p>Note that $y \to 2\pi$ for both $x=0$ and $x=1$. The $y$ rises only a little in the middle.</p> <p>This doesn't say anything about whether or not off-line $\rho$'s are actually hiding on this curve. There still is an infinite number to check. However, I wondered if anything more is known about this curve? </p> <p><img src="https://i.stack.imgur.com/95HcV.jpg" alt=""> <a href="https://web.archive.org/web/20131030092719if_/http://img822.imageshack.us/img822/3065/riemanntest.jpg" rel="nofollow noreferrer"><sup>(source: Wayback Machine)</sup></a></p>
Gerhard Paseman
3,568
<p>Here is an approach you might take. Consider the class of, oh, let's call them left-sections of $f(x,y)$: fix $x$ and define $g_x(y) = f(x,y)$. Consider the equivalence classes induced by level sets of $g_x$. Associativity combined with the shape of the level sets of $g_x$, $g_z$, and $g_w$ will determine whether it is feasible to have $w = x.z$. At the very least you should have some interesting conditions on $f$ which will be necessary for the representation. It may be that Green's relations will prove useful here.</p> <p>Gerhard "Not an Expert in Semigroups" Paseman, 2012.01.11</p>
2,526,695
<p>I've got the following recurrence: $ a_{n}=2a_{n-1}-a_{n-2}+2^{n}+4$</p> <p>where $ a_{0}=a_{1}=0$.</p> <p>I know what to do when I deal with a recurrence of the form</p> <p>$ a_{n}=2a_{n-1}-a_{n-2}$ - that is, when there are no terms other than previous terms of the sequence. Can you tell me how to deal with this type of problem? What's the general method for solving these?</p>
John Doe
399,334
<p>Write: $$\begin{align}a_n &amp;=2a_{n-1}-a_{n-2}+2^n+4\\ &amp;=2(2a_{n-2}-a_{n-3}+2^{n-1}+4)-a_{n-2}+2^n+4\\ &amp;=3a_{n-2}-2a_{n-3}+4\cdot2^{n-1}+4(1+2)\\ &amp;=3(2a_{n-3}-a_{n-4}+2^{n-2}+4)-2a_{n-3}+8\cdot2^{n-2}+4(1+2)\\ &amp;=4a_{n-3}-3a_{n-4}+11\cdot2^{n-2}+4(1+2+3)\end{align}$$</p> <p>You can see a few patterns emerging.<br> </p> <p><ul> <li> The coefficient of $a_{n-k}$ is $k+1$</li> <li> The coefficient of $a_{n-k-1}$ is $-k$. </li> <li> You end up with an additional term of $4(1+2+...+k)$. </li> </ul> What about the coefficient of $2^{n-k+1}$? This is $1,4,11,26,57,...$, where at each point, we double and add $k$ to the previous coefficient. This can be written as a separate relation: $$b_n=2b_{n-1}+n,\,\,\,\,\, b_1=1$$ which can be solved just as this one to give $$b_n=2^{n+1}-n-2$$</p> <p>Substituting this in, we get $$a_n=(n+2)a_1-(n+1)a_0+[2^n-n-1]2^2+4\cdot\frac12 n(n-1)\\\implies a_n=2^{2+n}+2n^2-6n-4$$ after incorporating the initial conditions $a_0=a_1=0$.</p>
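One can machine-check the closed form against the recurrence for as many terms as desired (a quick illustrative sketch):

```python
def a_recursive(n):
    # a_n = 2 a_{n-1} - a_{n-2} + 2^n + 4, with a_0 = a_1 = 0
    a = [0, 0]
    for k in range(2, n + 1):
        a.append(2 * a[k - 1] - a[k - 2] + 2 ** k + 4)
    return a[n]

def a_closed(n):
    # the closed form derived above: 2^(n+2) + 2n^2 - 6n - 4
    return 2 ** (n + 2) + 2 * n * n - 6 * n - 4

print(all(a_recursive(n) == a_closed(n) for n in range(50)))  # True
```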
1,127,551
<p>I have an equation where I need to find <em>n</em>, and I need help solving it.</p> <p>I already cheated a little bit by using a CAS (<em>Maple</em>) to solve the equation, so I know what the result should be, but I need to know how to get to the result without using a CAS.</p> <p>The equation:</p> <p>$$\frac{2400}{(n-5)}= \frac{2400}{n} + 40$$</p> <p>Here is what I have done so far, but I'm not sure how to proceed:</p> <p><img src="https://i.stack.imgur.com/pA8aW.png" alt="Maple calculations"></p>
Henrik supports the community
193,386
<p>You have already rewritten it as a quadratic; you just need to move the last term over to put it in standard form, allowing you to use the formula for the roots of a quadratic. </p>
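Concretely, multiplying through by $n(n-5)$ gives $40n^2-200n-12000=0$, i.e. $n^2-5n-300=0$, and the quadratic formula finishes the job. A short check (illustrative only):

```python
import math

# 2400/(n-5) = 2400/n + 40  -->  multiply by n(n-5):
# 2400 n = 2400 (n - 5) + 40 n (n - 5)  -->  40 n^2 - 200 n - 12000 = 0,
# i.e. n^2 - 5 n - 300 = 0
a, b, c = 1.0, -5.0, -300.0
disc = math.sqrt(b * b - 4 * a * c)   # sqrt(1225) = 35
roots = ((-b + disc) / (2 * a), (-b - disc) / (2 * a))
print(roots)  # (20.0, -15.0)

# check the positive root in the original equation
n = roots[0]
print(2400 / (n - 5), 2400 / n + 40)  # both 160.0
```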
2,681,621
<p>I'm trying to calculate the following limit:</p> <p>$$\lim_{x\to\pi} \dfrac{1}{x-\pi}\left(\sqrt{\dfrac{4\cos²x}{2+\cos x}}-2\right)$$</p> <p>I thought of calculating this:</p> <p>$$\lim_{t\to0} \dfrac{1}{t}\left(\sqrt{\dfrac{4\cos²(t+\pi)}{2+\cos(t+\pi)}}-2\right)$$</p> <p>Which is the same as:</p> <p>$$\lim_{t\to0} \dfrac{1}{t}\left(\sqrt{\dfrac{4\cos²t}{2-\cos t}}-2\right)$$</p> <p>I don't have an idea about where to go from here.</p>
xyzzyz
23,439
<p>Note that $a-b = \frac{a^2 -b^2}{a+b}$. Then:</p> <p>$$ \frac{1}{t}\left(\sqrt{\frac{4\cos²t}{2-\cos t}}-2\right) = \frac{1}{t}\left(\frac{\frac{4\cos²t}{2-\cos t}-4}{\sqrt{\frac{4\cos²t}{2-\cos t}}+2}\right) = \frac{1}{t}\left(\frac{\frac{4\cos²t-8 + 4 \cos t}{2-\cos t}}{\sqrt{\frac{4\cos²t}{2-\cos t}}+2}\right) = \frac{\frac{4\cos²t-8 + 4 \cos t}{t(2-\cos t)}}{\sqrt{\frac{4\cos²t}{2-\cos t}}+2} $$</p> <p>The limit of the denominator is easy, so we just need to calculate </p> <p>$$ \lim_{t \to 0} \frac{4\cos²t-8 + 4 \cos t}{t(2-\cos t)} = \lim_{t \to 0} \frac{4 (\cos t + 2)(\cos t - 1)}{t(2-\cos t)} = 4\cdot 3 \cdot \lim_{t \to 0}\frac{\cos t - 1}{t} = 0 $$</p>
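A numerical check is consistent with the conclusion that the limit is $0$ (near $0$ the expression behaves like $-\frac{3}{2}t$):

```python
import math

def g(t):
    # (1/t) * ( sqrt(4 cos^2 t / (2 - cos t)) - 2 ), the shifted expression
    return (math.sqrt(4.0 * math.cos(t) ** 2 / (2.0 - math.cos(t))) - 2.0) / t

for t in [0.1, 0.01, 0.001]:
    print(t, g(t))
# the values tend to 0, roughly like -1.5 * t
```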
1,918,071
<p>Sometimes you will see theorems of the form "Let $H_1, \dots, H_n$. If $A$, then $B$". Sometimes "suppose" or "if" is used instead of "let". Here's an example:</p> <ol> <li><p>Let $x\in\mathbb{R}$. If $x\geq 0$, then $|x|=x$.</p></li> <li><p>Suppose $x\in\mathbb{R}$. If $x\geq 0$, then $|x|=x$.</p></li> <li><p>If $x\in\mathbb{R}$ and $x\geq 0$, then $|x|=x$.</p></li> </ol> <p>I'm under the impression that these are all equivalent ways of saying the same thing. In this example, I would call "$x\in\mathbb{R}$" a hypothesis and "$x\geq 0$" the antecedent. But in the third statement, is there an unambiguous contrapositive? In certain contexts, I think it is understood that we're not really considering the case when $x\notin\mathbb{R}$. But "$x\in\mathbb{R}$" is nevertheless part of the antecedent in statement (3). So if we agree (1-3) are equivalent, then I see two contrapositives:</p> <p>a) If $x\in\mathbb{R}$ and $|x|\neq x$, then $x&lt;0$.</p> <p>b) If $|x|\neq x$, then $x&lt;0$ or $x\notin\mathbb{R}$. </p> <p>I think Halmos' Naive Set Theory is an example where form (3) is preferred to (1,2). </p> <p>The questions are:</p> <ol> <li><p>Are those statements equivalent?</p></li> <li><p>In the third statement, what is The contrapositive? EDIT: Generally, if you see a theorem of the form "Let $H_1, \dots, H_n$. If $A$, then $B$", what is its contrapositive? How do you know?</p></li> <li><p>Do mathematicians make any effort to separate the hypotheses ($H_1,\dots,H_n$) from the antecedent ($A$) of the claim? If so, how? Or is this one of those things everybody understands and no one is explicit about?</p></li> </ol>
haqnatural
247,767
<p>It should be $${ e }^{ 2x }+{ e }^{ x }-2=0\\ \left( { e }^{ x }+2 \right) \left( { e }^{ x }-1 \right) =0\\ { e }^{ x }+2=0\quad\text{or}\quad{ e }^{ x }-1=0\\ { e }^{ x }=-2\quad\text{or}\quad{ e }^{ x }=1$$ Since ${ e }^{ x }>0$, the case ${ e }^{ x }=-2$ is impossible, so the only solution is</p> <blockquote> <p>$$x=0$$</p> </blockquote>
2,878,206
<p>Let $a_n = \frac{9^n}{n + 5^n}$.</p> <p>For large $n$, $a_n$ is expected to behave like $\frac{9^n}{5^n}$, so it diverges.</p> <p>Using the direct comparison test, how can I find a $b_n$ (which has to be smaller than $a_n$) to prove that $a_n$ diverges?</p>
Kavi Rama Murthy
142,385
<p>By Binomial Theorem $5^{n}=(1+4)^{n}=1+4n+...+4^{n}&gt;1+4n &gt;n$ so $\frac {9^{n}} {n+5^{n}} &gt; \frac {9^{n}} {2(5^{n})}$. Take $b_n=\frac {9^{n}} {2(5^{n})}$.</p>
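Here is the comparison in action, with the divergent lower bound $b_n=\frac{9^n}{2\cdot 5^n}=\frac12\left(\frac95\right)^n$ (a quick illustrative check):

```python
def a(n):
    return 9 ** n / (n + 5 ** n)

def b(n):
    # the lower bound from the binomial estimate 5^n > n (valid for n >= 1)
    return 9 ** n / (2 * 5 ** n)

for n in [1, 5, 10, 20]:
    print(n, b(n), a(n))
# b(n) <= a(n), and b(n) = (9/5)^n / 2 grows without bound
```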
104,195
<p>Background: This year I'll do another Group Theory course ( Open University M336 ). In the past I have used Mathematica's AbstractAlgebra package but (although visually appealing ) this is no longer sufficient (i.e. listing subgroups of <span class="math-container">$S_4$</span> takes ages). So, I want to learn more about GAP. I worked through beginner tutorials that I found via the <a href="https://www.gap-system.org" rel="nofollow noreferrer">GAP website</a>. Currently, I am not making much progress with GAP. The <a href="https://www.gap-system.org/Manuals/doc/ref/chap0.html" rel="nofollow noreferrer">reference manual</a> does not help me much at this stage.</p> <p>Question: <strong>Which resources are available to self-study GAP?</strong> How does one become proficient in GAP? What ( books, tutorials ) should you study?</p>
Angel Blasco
163,996
<p>A book called <a href="http://www.springer.com/us/book/9783540654667" rel="noreferrer">"Computer Algebra Handbook" by Grabmeier, Kaltofen and Weispfenning (eds.)</a> (2003) includes some advanced topics in group theory and examples of code that you can use with GAP. </p> <p>In particular, this book has chapters:</p> <ul> <li>"Computational Group Theory" by Charles Sims</li> <li>"Algorithms of Representation Theory" by Gerhard Hiss</li> <li>"Computer Algebra in Group Theory" by Gerhard Hiss</li> </ul> <p>and also a 6 pages long section on GAP by Thomas Breuer and Alexander Hulpke, referring to GAP 4.2 (March 2000) which was the current version at the time of writing. I did not find any examples of the GAP code in the book, but I have no accompanying CD which might contain some. Anyhow, for code examples I'd suggest to use more modern sources.</p> <p>Another interesting book is "<a href="https://www.crcpress.com/Handbook-of-Computational-Group-Theory/Holt-Eick-OBrien/9781584883722" rel="noreferrer">Handbook of Computational Group Theory</a>" by Derek F. Holt, Bettina Eick and Eamonn A. O'Brien.</p>
2,057,857
<p>I am trying to complete my homework on equivalence relations, and I don't seem to understand them properly, so I need help!</p>

<p>Must all the elements of my set satisfy all three conditions before I can say there is an equivalence relation, or can I say there is an equivalence relation if just some of the elements satisfy the three conditions? Also, does the order matter: do I have to compare the elements from left to right, or can I pick them randomly? E.g. take my set $=\{0,2,4,6\}$ with the relation defined as $a\sim b$ if $a+b>2$. If I go in order I can say that $0$ is not related to $2$ because $0+2=2$, but if I pick randomly I can state that $6\sim 0$ because $6+0>2$. You can see that some of the elements satisfy the conditions but not all of them, so do I still say there is an equivalence relation? I am really confused.</p>

<p>Thank you</p>
chelivery
395,717
<p>We can show that <span class="math-container">$\#S \ge {c}$</span> via the injection <span class="math-container">$f: \mathbb{R} \rightarrow S$</span>, <span class="math-container">$f(x) = \{x\}$</span>.</p>

<p>If you show <span class="math-container">$\#S \le c$</span> then you can conclude <span class="math-container">$\#S = c$</span> invoking <a href="https://en.wikipedia.org/wiki/Schr%C3%B6der%E2%80%93Bernstein_theorem" rel="noreferrer">Bernstein's theorem</a>.</p>

<hr />

<p>Let's prove the following: let <span class="math-container">$X = \{X_k,\ k \in \mathbb{N}\}$</span> be a family of subsets such that <span class="math-container">$\#X_i \le c$</span>, <span class="math-container">$\ \forall i \in \mathbb{N}$</span>, and let <span class="math-container">$V = \bigcup_{k\in \mathbb{N}}X_k$</span>. Then <span class="math-container">$\#V \le c$</span>.</p>

<p>Because <span class="math-container">$\#X_i \le c$</span>, we have for each <span class="math-container">$i$</span> a surjective <span class="math-container">$f_i: \mathbb{R} \rightarrow X_i$</span>. We define <span class="math-container">$g: \mathbb{N} \times \mathbb{R} \rightarrow V$</span>, <span class="math-container">$g(n,x) = f_n(x)$</span>.</p>

<p><span class="math-container">$g$</span> is a surjective function: let <span class="math-container">$x \in V$</span>, then <span class="math-container">$x \in X_i$</span> for some <span class="math-container">$i$</span>, then we have <span class="math-container">$a \in \mathbb{R}$</span> such that <span class="math-container">$f_i(a) = x$</span> because <span class="math-container">$f_i$</span> is surjective. 
Then <span class="math-container">$g(i,a) = f_i(a) = x$</span>, with <span class="math-container">$(i,a) \in \mathbb{N}\times\mathbb{R}$</span>.</p> <p>We conclude that <span class="math-container">$g$</span> is surjective.</p> <p>Thus <span class="math-container">$\#V \le \#(\mathbb{N}\times\mathbb{R})$</span>, that is, <span class="math-container">$\#V \le c$</span>.</p> <hr /> <p>Your set <span class="math-container">$S$</span> satisfies the hypotheses, so we have <span class="math-container">$\#S \le c$</span>, and thus <span class="math-container">$\#S = c$</span>.</p>
4,037,295
<p>The Cauchy Schwarz inequality says <span class="math-container">$$ (ax+by+cz)^2 \leq (a^2+b^2+c^2)(x^2+y^2+z^2). $$</span></p>

<p>I found that there is a kind of analogous inequality for <span class="math-container">$(x+y+z)^n$</span> <span class="math-container">$$ (x+y+z)^n \leq 3^n(x^n+y^n +z^n). $$</span> if I remember it correctly.</p>

<p>How to prove this inequality?</p>
Falcon
766,785
<p>We have, for <span class="math-container">$x, y, z \ge 0$</span>, <span class="math-container">$$(x + y + z)^n \le 3^n \max\{x, y, z\}^n \le 3^n(x^n + y^n + z^n).$$</span></p>
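Both steps are easy to spot-check numerically on random nonnegative triples (illustrative only; the seed is arbitrary):

```python
import random

rng = random.Random(42)

def holds(x, y, z, n):
    # (x + y + z)^n <= 3^n (x^n + y^n + z^n) for x, y, z >= 0
    return (x + y + z) ** n <= 3 ** n * (x ** n + y ** n + z ** n)

ok = all(
    holds(rng.uniform(0, 10), rng.uniform(0, 10), rng.uniform(0, 10), n)
    for _ in range(1000)
    for n in range(1, 8)
)
print(ok)  # True
```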
635,195
<p>I'm trying to calculate the following limit: </p>

<p>$$\mathop {\lim }\limits_{x \to {0^ + }} {\left( {\frac{{\sin x}}{x}} \right)^{\frac{1}{x}}}$$</p>

<p>What I did was rewrite it as: </p>

<p>$${e^{\frac{1}{x}\ln \left( {\frac{{\sin x}}{x}} \right)}}$$</p>

<p>Therefore, we need to calculate: </p>

<p>$$\mathop {\lim }\limits_{x \to {0^ + }} \frac{{\ln \left( {\frac{{\sin x}}{x}} \right)}}{x}$$</p>

<p>Now, we can apply L'Hôpital's rule, which I did:<br>
$$\Rightarrow \cot(x) - {1 \over x}$$</p>

<p>But in order to reach the final limit two more applications of L'Hôpital's rule are needed. Is there a better way?</p>
robjohn
13,854
<p>According to <a href="http://en.wikipedia.org/wiki/Bernoulli%27s_inequality" rel="nofollow noreferrer">Bernoulli's Inequality</a>, for $0\le x\le1$ $$ \left(\frac{\sin(x)}{x}\right)^{1/x}=\left(1-\frac{x-\sin(x)}{x}\right)^{1/x}\ge1-\frac1x\frac{x-\sin(x)}{x}\tag{1} $$ <a href="https://math.stackexchange.com/a/327189">This answer</a> provides an elementary proof of Bernoulli's Inequality for rational exponents.</p> <p>In <a href="https://math.stackexchange.com/a/75151">this answer</a>, it is shown geometrically that for $0\lt x\lt\frac\pi2$, we have $\sin(x)\le x\le\tan(x)$. Therefore, $$ \begin{align} 0\le\frac{x-\sin(x)}{x^2}&amp;\le\frac{\tan(x)-\sin(x)}{\sin^2(x)}\\[4pt] &amp;=\frac{\tan(x)-\sin(x)}{\sin(x)\tan(x)}\sec(x)\\[4pt] &amp;=\frac{1-\cos(x)}{\sin(x)}\sec(x)\\[4pt] &amp;=\frac{\sin(x)}{1+\cos(x)}\sec(x)\\[4pt] &amp;\to\frac02\cdot1\\[9pt] &amp;=0\tag{2} \end{align} $$ Thus, by the <a href="http://en.wikipedia.org/wiki/Squeeze_theorem" rel="nofollow noreferrer">Squeeze Theorem</a>, $$ \lim_{x\to0}\frac{x-\sin(x)}{x^2}=0\tag{3} $$ and therefore, $$ 1\ge\lim_{x\to0}\left(\frac{\sin(x)}{x}\right)^{1/x}\ge1-0\tag{4} $$</p>
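As a numerical sanity check, since $\ln\frac{\sin x}{x}\approx -\frac{x^2}{6}$ near $0$, the expression behaves like $e^{-x/6}$ and indeed creeps up to $1$ (illustrative sketch):

```python
import math

def h(x):
    # the expression (sin x / x) ** (1/x) for x > 0
    return (math.sin(x) / x) ** (1.0 / x)

for x in [0.5, 0.1, 0.01, 0.001]:
    print(x, h(x))
# the values increase toward 1, roughly like exp(-x/6)
```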
2,016,995
<p>Is is possible to find a metric $d$ on $\mathbb{R}$ so that $(\mathbb{R}, d)$ would have a <strong>finite</strong> number of open sets?</p>
miracle173
11,206
<p>No.</p> <p>If $(X,d)$ is a metric space and $X$ is not finite then choose $n$ arbitrary elements $x_1,\ldots,x_n$ of $X$ and let $$d_0:=\frac{1}{3}\min\{d(x_i,x_j)|i \ne j\}$$ We have $d_0&gt;0$ and the sets $$B(x_i,d_0)=\{x\in X|d(x_i,x)&lt;d_0\}$$ are open and disjoint and therefore different.</p>
3,320,830
<p>I was wondering if the inequality <span class="math-container">$$\left|\int_0^T f(t,\omega )dW_t\right|\leq \int_0^T|f(t,\omega )|dW_t$$</span> holds for stochastic integral. In fact, I don't see such a property in any book, neither on Google, so I have some doubt. What do you think ?</p>
Jyrki Lahtonen
11,619
<p>This is an old trick question.</p> <p>Without loss of generality we can imagine that we are sitting on the train leaving from Nagpur at 7 a.m., before the train that left Raipur at 12.30 a.m. arrives, but after the arrival of 11.30 p.m. departure. That train will arrive at Raipur at noon, after the 11.30 a.m. departure but before the 12.30 p.m. departure.</p> <p>So during the journey we will encounter the trains that departed from Raipur any half hour between midnight and noon. That is 12 trains. </p> <hr> <p>No need to calculate the times of the encounters.</p>
356,574
<blockquote> <p>$$\aleph_2^{\aleph_0}=\aleph_2$$</p> </blockquote> <p>Appreciate your help</p>
Brian M. Scott
12,042
<p>Yes, you can. Any function $f:\omega\to\omega_2$ is bounded, so ${}^\omega\omega_2=\bigcup_{\alpha&lt;\omega_2}{}^\omega\alpha$, and therefore</p> <p>$$\aleph_2\le\aleph_2^{\aleph_0}=\left|{}^\omega\omega_2\right|\le\sum_{\alpha&lt;\omega_2}|\alpha|^\omega=\aleph_2\cdot\aleph_1^{\aleph_0}=\aleph_2\cdot2^{\aleph_0\cdot\aleph_0}=\aleph_2\cdot2^{\aleph_0}=\aleph_2\cdot\aleph_1=\aleph_2\;.$$</p>
1,692,346
<p>I have heard of a statement like this:</p> <blockquote> <p>A car can technically never run out of gas (when still moving) if the driver uses half of the gas left each time.</p> </blockquote> <p>Is this possible (mathematics wise)?</p>
Henricus V.
239,207
<p>Assume that the car engine is perfect and proportionally converts gas into distance. Then the answer is yes and no. Using the series $$ \sum_{j=1}^\infty \frac{1}{2^j} = 1 $$ The car never runs out of gas since infinitely many terms are non-zero, but only travels a finite distance since the sum is finite.</p> <p>This only works if you view gasoline as continuous matter instead of discrete particles.</p>
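The partial sums of the geometric series in the answer can be computed directly; this small check is my addition, not part of the original:

```python
# Partial sums of sum_{j>=1} 1/2^j stay strictly below 1 while approaching it,
# mirroring the "use half of the remaining gas each time" argument.
partial = [sum(1 / 2 ** j for j in range(1, n + 1)) for n in (1, 2, 5, 10, 50)]
```

Every partial sum is below $1$, yet the gap after $n$ steps is exactly $2^{-n}$.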
1,039,474
<p>Solve the equation $x^4 - 14x^3 + 50x^2 -14x + 1 = 0$. <br/> I am not sure about how to best proceed, and would like a solution that does not involve the generalised quartic formula.</p>
Community
-1
<p>Not knowing the substitution trick, you can anyway infer that if $x$ is a solution, then $1/x$ is as well, so that the polynomial can be factored into two polynomials of the second degree, and these will be palindromic too:</p> <p>$$x^4 - 14x^3 + 50x^2 -14x + 1 =(x^2+Ax+1)(x^2+Bx+1).$$ Expanding and identifying coefficients, $$A+B=-14,\\1+AB+1=50.$$ Hence $A$ and $B$ are the roots of $t^2+14t+48=0$, namely $$\frac{-14\pm\sqrt{14^2-4\cdot48}}2=-8,-6.$$</p> <p>Now solve $$x^2-8x+1=0,\\x^2-6x+1=0.$$</p>
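The roots of the two quadratic factors can be checked against the original quartic; this numerical sketch is my own addition:

```python
import math

def p(x):
    """The palindromic quartic from the question."""
    return x ** 4 - 14 * x ** 3 + 50 * x ** 2 - 14 * x + 1

# Roots of x^2 - 8x + 1 = 0 and x^2 - 6x + 1 = 0.
roots = [4 + math.sqrt(15), 4 - math.sqrt(15), 3 + 2 * math.sqrt(2), 3 - 2 * math.sqrt(2)]
```

Note that the roots come in reciprocal pairs, as the palindromic structure predicts.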
3,370,076
<p>The total mechanical energy is conserved when a ball is dropped from a height of 4.00 <span class="math-container">$\mathit{m}$</span>, and it makes a elastic collision with the ground. Assuming no non-conservative forces are acting find the period of the ball. g of course is 9.81.</p> <p><span class="math-container">\begin{align} PE_g &amp;= U_s \\ mgh &amp;= \frac{1}2 kA^2 \\ mgh &amp;= \frac{1}2 kh^2 \\ 2mgh &amp;= kh^2 \\ 2\frac{g}{h} &amp;= \frac{k}{m} \\ \omega &amp;= \sqrt{\frac{k}{m}} = \sqrt{\frac{2g}{h}} \\ T &amp;= \frac{2 \pi}{\omega}=2\pi \sqrt{\frac{h}{2g}} = \sqrt{2} \pi\sqrt{\frac{h} {g}}=2.837 s \end{align}</span></p> <p>Is my approach correct?</p> <h2>Fixed Approach</h2> <p><span class="math-container">\begin{align} mgh &amp;= \frac{1}2 m v^2_f \\ v_f &amp;= \sqrt{2gh} \\ \frac{v_f - v_0}{g} &amp;= t = \frac{T}{2} \\ 2t &amp;= T = 1.80 s \end{align}</span></p>
Marios Gretsas
359,315
<p>Suppose <span class="math-container">$a(x_1)=a(x_2)$</span>; then <span class="math-container">$ax_1=ax_2$</span>.</p> <p>So <span class="math-container">$x_1=a^{-1}ax_1=a^{-1}ax_2=x_2$</span>.</p> <p>Now, given <span class="math-container">$y \in G$</span>,</p> <p>take <span class="math-container">$x=a^{-1}y$</span>.</p> <p>Then <span class="math-container">$a(x)=y$</span>.</p> <p>Thus <span class="math-container">$a(x)$</span> is one-to-one and onto.</p>
114,733
<p>Say you have the half-plane $\{z\in\mathbb{C}:\Re(z)&gt;0\}$. Is there a rigorous explanation why the transformation $w=\dfrac{z-1}{z+1}$ maps the half plane onto $|w|&lt;1$?</p>
davidlowryduda
9,754
<p>There is, but I'm not sure what you do or don't know. But we'll see what we can do.</p> <p>So you know that it's a fractional linear transformation. It's continuous, bijective, open, and ultimately beautiful. It can be shown that it preserves the family of lines and circles - i.e. it takes lines and circles to lines and circles (not respectively - a line in the extended complex plane is a circle through infinity). You can then check that this transformation does in fact send the imaginary line to the boundary of the unit disk. (In fact, the image of any purely imaginary number has modulus 1 - that's all you need to check).</p> <p>So we have a continuous fractional linear map that sends the boundary where we want to send it. So take some point in the half plane and make sure it gets sent to the interior and not the exterior. $1$, for instance, is sent to $0$, and so the right half plane is on the interior of the disk (and in particular not the boundary, which is the image of the imaginary line).</p> <p>That's why, in a nutshell.</p>
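Both facts used in the answer are easy to illustrate numerically; the following sketch is my addition, not part of the original:

```python
import random

def w(z):
    """The fractional linear map z -> (z - 1) / (z + 1)."""
    return (z - 1) / (z + 1)

# Sampled points with Re z > 0 land strictly inside the unit disk:
# |z - 1|^2 - |z + 1|^2 = -4 Re z < 0.
random.seed(1)
for _ in range(1000):
    z = complex(random.uniform(0.01, 10.0), random.uniform(-10.0, 10.0))
    assert abs(w(z)) < 1

# The imaginary axis lands on the unit circle: |w(iy)| = 1.
assert abs(abs(w(2j)) - 1) < 1e-12
```

In particular $w(1)=0$, the interior point used in the answer.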
3,406,106
<p>Want to prove rigorously (if possible, since I was not able to think of any counter-example) that <span class="math-container">$\lim_{x\to a} f(x)$</span> exists <span class="math-container">$\implies \lim_{x\to a} f(x^2)$</span> exists. (I also have a feeling that the limits equate.)</p> <p>I started with the <span class="math-container">$\epsilon-\delta$</span> definition of <span class="math-container">$\lim_{x\to a} f(x) = l$</span> which states that <span class="math-container">$\forall \epsilon &gt;0 \exists \delta&gt;0 \forall x:0&lt;|x-a|&lt;\delta \implies |f(x)-l| &lt; \epsilon$</span>.</p> <p>Now I want to show that <span class="math-container">$\forall \epsilon &gt;0 \exists \delta&gt;0 \forall x:0&lt;|x-a|&lt;\delta \implies |f(x^2)-l_1| &lt; \epsilon$</span> where <span class="math-container">$l_1 \in \mathbb{R}$</span>.</p> <p>I've got no clue how to continue. All help is appreciated.</p> <p>As an additional note: I believe the converse (<span class="math-container">$\lim_{x\to a} f(x^2)$</span> exists <span class="math-container">$\implies \lim_{x\to a} f(x)$</span> exists) is not true as <span class="math-container">$f(x)=\frac{|x|}{x}$</span> is a counter-example.</p> <p>Edit: A counter-example was found for the statement but I was wondering whether it is true for <span class="math-container">$a = 0$</span>? So in other words, I would like to prove or disprove the statement <span class="math-container">$\lim_{x\to 0} f(x)$</span> exists <span class="math-container">$\implies \lim_{x\to 0} f(x^2)$</span> exists</p>
Randall
464,495
<p>This is false, as evidenced by the choices <span class="math-container">$a=2$</span> and <span class="math-container">$f(x) = \begin{cases} x, &amp; x \leq 4 \\ x+1, &amp; x&gt;4.\end{cases}$</span></p>
764,947
<p>I want to solve the following exercise:<br/> <br/> Show that the two elliptic curves $E/ \mathbb{Q}$ and $E'/ \mathbb{Q}$ are isomorphic.<br/> $E: y^2 = x^3+x-2$ and $E': y'^2 = x'^3-\frac{1}{3}x' - \frac{52}{27}$. <br/> <br/> I am trying to find a change of variables $(x,y)\mapsto(x',y')$ transforming the Weierstraß equation defining $E$ to the Weierstraß equation defining $E'$.<br/> <br/> I tried this by guesswork because I couldn't think of a clever way.<br/> A first idea was to put $y = (\sqrt{27 y'}-\sqrt{2})$ because then I already get $27y'^2 = 27x'^3-27x'-52$ which is $y'^2 = x'^3-x'-\frac{52}{27}$ which looks a bit more like $E'$. But I don't know what to do about the $x$. <br/> <br/> Is there a more strategic way to do this? Does anyone have a hint how to solve this exercise?<br/> <br/> All the best!</p>
Noam D. Elkies
93,983
<p>Typo: the equation of the first curve should be $y^2 = x^3 + x^2 - 2$. Hint: $y=y'$, so you need only find a transformation from $x$ to $x'$ that kills the quadratic term of the cubic on the right-hand side.</p>
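The substitution in the hint can be verified with exact rational arithmetic. The sketch below (my addition, not from the answer) checks that $x \mapsto x' - \frac{1}{3}$ turns the corrected cubic $x^3 + x^2 - 2$ into the right-hand side of $E'$; since two cubics agreeing at four points are identical, comparing values at four rationals suffices:

```python
from fractions import Fraction as F

def lhs(t):
    # x = t - 1/3 substituted into x^3 + x^2 - 2 (the corrected curve E)
    x = t - F(1, 3)
    return x ** 3 + x ** 2 - 2

def rhs(t):
    # The right-hand side of E': x'^3 - x'/3 - 52/27
    return t ** 3 - F(1, 3) * t - F(52, 27)

points = [F(0), F(1), F(-1), F(2)]
assert all(lhs(t) == rhs(t) for t in points)
```

Because `Fraction` arithmetic is exact, there is no floating-point tolerance to worry about.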
1,748,547
<p>Show that if the closed interval $[a,b]$ is covered by finitely many open intervals $(a_1,b_1), ...,(a_n,b_n)$, then $$b-a \le \sum^n_{i=1}(b_i-a_i)$$. </p> <p>I know that $(a_1,b_1), ...,(a_n,b_n)$ form an open covering of $[a,b]$, and my thought is to show the inequality by mathematical induction, but not sure how to prove this. Could someone provide a complete proof please? Thanks a lot. </p>
Lionel Ricci
242,892
<p>The base case is clear. Let $\{(a_i,b_i)\}_{i=1}^n$ be an open cover of $[a,b]$. Suppose the sets are not nested. Then there are two sets $(a_i,b_i)$ and $(a_j,b_j)$ with $a_i \leq a_j \leq b_i \leq b_j$. Without loss of generality we may assume $i=1, j=2$. Taking their union, i.e. forming $(a_1,b_2)$, we get a smaller open cover, and the induction hypothesis tells us that $$ b - a \leq b_2 - a_1 + \sum_{i = 3}^n b_i - a_i. $$ However, since $a_2\leq b_1$, we have that $b_2 - a_1 \leq b_2 - a_2 + b_1 - a_1$. This proves the inequality. If the sets are all nested, a similar method will do it.</p>
497,422
<p>Which is bigger: $a$ or $a^2$ and what is the proof of that?</p> <p>I'm kinda stuck and because there are cases where $a$ is bigger and other cases where $a^2$ is bigger.</p>
Community
-1
<p>$a&gt;a^2:\ 0&lt;a&lt;1$</p> <p>$a&lt;a^2:\ \text{for all other }a\text{, i.e. }a&lt;0\text{ or }a&gt;1$</p> <p>$a=a^2:\ a=0\text{ or }a=1$</p>
3,120,187
<p>Suppose tall matrix <span class="math-container">$A$</span> is <span class="math-container">$n \times k$</span> and that its columns are orthonormal, i.e., <span class="math-container">$A' A = I_k$</span>. Suppose further that diagonal <span class="math-container">$M$</span> is <span class="math-container">$n \times n$</span> and has either <span class="math-container">$1$</span> or <span class="math-container">$0$</span> on its main diagonal. Suppose also that <span class="math-container">$n \gg k$</span> and <span class="math-container">$\mbox{rank} (M) &gt; k$</span>. Is it true that <span class="math-container">$A'MA = I_k$</span>? If so, why?</p> <p>I have gotten as far as:</p> <p><span class="math-container">$$A'MA = A'M'MA = (MA)'(MA)$$</span></p> <p>but am unsure if <span class="math-container">$M A$</span> should also have orthonormal columns and, if so, why. If this is not true, are there any similar statements I could make about <span class="math-container">$A' M A$</span>?</p>
Mathiaspilot123
489,487
<p>If I understood you correctly, the diagonal entries of <span class="math-container">$M$</span> are either <span class="math-container">$0$</span> or <span class="math-container">$1$</span>. Even with <span class="math-container">$\mbox{rank}(M) &gt; k$</span>, the claim fails. For example, take <span class="math-container">$n=3$</span>, <span class="math-container">$k=1$</span>, <span class="math-container">$A=\frac{1}{\sqrt 3}(1,1,1)'$</span> and <span class="math-container">$M=diag(1,1,0)$</span>, which has rank <span class="math-container">$2 &gt; k$</span>; then <span class="math-container">$A'MA=\frac{2}{3}\neq I_1$</span>.</p>
292,835
<p>Does there exist a bijective function $f:{\mathbb R}\rightarrow{\mathbb R}$ that is nowhere-continuous, assuming that both domain and range have the "standard topology"?&nbsp;<sup>1</sup></p> <p><sub><sup>1</sup> By this I mean the one generated by the open intervals $(a, b) \subset {\mathrm R}$. BTW, if this topology has a name more readily recognized than <em>the standard topology</em> (<em>on</em> ${\mathbb R}$), please toss me a comment!</sub></p> <p>EDIT: the original version of this question allowed for the possibility that $f$ be only injective, but shortly after I posted the following injective function came to mind: let $n:{\mathbb Q}\rightarrow {\mathbb N}$ be an ordering of the rationals, and define</p> <p>$$f(x)=\begin{cases} n(x) &amp; x\in\mathbb Q\\ x&amp; x\notin\mathbb Q\end{cases}$$</p> <p>It is clear that this $f$ is injective, and it seems to me that the proof of the nowhere-continuity of the <a href="http://en.wikipedia.org/wiki/Dirichlet_function" rel="nofollow">Dirichlet function</a> applies to this case as well.</p> <p>EDIT2: OK, I was next going to try modifying the candidate above to make the function bijective, but Asaf Karagila got there first, with a much neater solution than what I was heading for...</p>
Asaf Karagila
622
<p>How about: $$f(x)=\begin{cases} x+1 &amp; x\in\mathbb Q\\ x&amp; x\notin\mathbb Q\end{cases}$$</p>
102,383
<p>I have a specific Generalized Eigenvalue Problem (GEVP) where I am not primarily interested in solving this problem but in concluding the spectrum of the GEVP from a standard EVP. </p> <p><strong>The Problem</strong><br> Let $A$ be an $n\times n$ possibly complex matrix and $B$ a diagonal, real $n\times n$ matrix with maximal rank of $n-1$ (i.e. the matrix $B$ has at minimum 1 zero column and row).<br> Solving </p> <p>$(B\lambda-A)\cdot v=0$ </p> <p>with $|v|=1$, so that we have $n+1$ equations for $n+1$ unknowns, is the GEVP. The GEVP can not be reformulated as an EVP because $det(B)=0$ and therefore $B$ is not invertible.</p> <p>As I said, the goal is not just solving this problem (this could be done by solving $det(\lambda B-A)=0$ to obtain the eigenvalues) but to conclude eigenvalues for the stated GEVP from the following, already solved, EVP (the $n$ eigenvalues $\mu_1\leq\mu_2\leq\dots\leq \mu_n$ of $A$ are known): </p> <p>$(I\mu-A)\cdot w=0$.</p> <p><strong>What I have already learned</strong><br> *As $A$ and $B$ in general do not commute, it is not possible to diagonalize $A$ and $B$ simultaneously. Therefore the spectra will be different.<br> *If the EVP results in eigenvalues $\mu=0$, then there will be the same number of eigenvalues $\lambda=0$ in the GEVP. (Because in both cases $det(A)=0$ must be fulfilled, and the geometric multiplicity comes from the dimension of $kern(A)$.)<br> *For every zero-row in $B$ the number of eigenvalues $\lambda$ is one less than in the EVP. This is because the order of the characteristic polynomial (CP) goes down by one for every zero-row in $B$ compared to the order of the CP in the EVP.</p> <p><strong>Questions</strong><br> *Can it be said which eigenvalues (in addition to the zeros) of the EVP are also eigenvalues of the GEVP (the eigenvectors may not be the same in both cases, but the eigenvalues)?<br> *Is there a perturbation theory? 
Can I somehow make a Taylor series of the CP in the GEVP where the zeroth term is the CP of the EVP?<br> *The number of eigenvalues in the GEVP is less than in the EVP; can it be concluded which eigenvalues vanish?</p> <hr> <p>In case anybody wants to know where my question emerges from (this is not essential for my questions but possibly of general interest):</p> <p>If one wants to conclude the stability of a fixed point $x^*$ of ODEs, one needs to solve the variational ODE $\dot{\delta x}(t)=D_xf(x^*)\delta x(t)$, where $\delta x$ is a small perturbation away from the fixed point: $\delta x(t)=x(t)-x^*$. Solving this with $\delta x(t)=\delta x_0 e^{\mu t}$ results in the EVP<br> $\mu\delta x_0 = D_x f(x^*) \delta x_0$.<br> Using $D_xf(x^*)=A$ and $w=\delta x_0$ results in the stated EVP.</p> <p>If one has additional constraints in an implicit way,<br> $g(x(t))=0$,<br> the stability of a fixed point in the ODE may change (e.g. the constraint acts in an unstable direction. The eigenvalue of $A$ in this direction is still greater than zero (obviously the matrix $A$ does not change if constraints are imposed), but it is a "forbidden" direction as the corresponding eigenvector is in a direction which is not allowed due to the constraint).<br> Taking the time derivative of $g$ results in $D_x g(x)\cdot \dot{x}(t)=0$. Inserting the perturbation away from the fixed point results in<br> $(D_xg(x^*)+D_x(D_xg(x^*))\cdot \delta x)\cdot \dot{\delta x}(t)+\dots=0$<br> $D_xg(x^*)\cdot \dot{\delta x}(t)+O(\delta x^2)=0$<br> Inserting the exponential ansatz results in<br> $D_xg(x^*)\cdot \delta x_0\mu\approx0$.<br> This means that the small perturbations need to be orthogonal to the gradient on the invariant manifold near the fixed point (i.e. they are inside the invariant manifold). 
</p> <p>One possible way to study the change of stability of the fixed point when constraints are imposed is to solve the EVP and then to check consistency with the last equation.<br> I want to include the constraint directly in the EVP, which leads to the GEVP by simply adding the last equation to the EVP (with $D_x g(x^*)=\hat{B}$ and $w=\delta x_0$):<br> $(\hat{B}\mu+I\mu-A)\cdot w=0$<br> and with $\hat{B}+I=B$,<br> $(B\mu-A)\cdot w=0$.<br> The criterion of "low rank $B$" comes from generic constraints like $g(x_1,\dots,x_n)=x_0^1-x_1\rightarrow D_xg=(-1,0,\dots,0)\rightarrow B=\mbox{diag}(0,1,\dots,1)$. </p>
Sven E
25,166
<p>Thanks for this nice answer. For the stability problem with only 1 constraint the formula is then really simple: $B_{11}=\mbox{diag}(1,\dots,1)=I_{n-1}$ and $A_{12},A_{21},A_{22}$ are only scalars, so the GEVP is the solution of $(A_{11}-I_{n-1}(ac/b-\lambda))\cdot v=0$ with $a=A_{12},b=A_{22},c=A_{21}$ and $b\neq0$. Rescaling $\lambda$ leads to<br> $(A_{11}-I_{n-1}\hat{\lambda})\cdot v=0$. </p> <p>Essentially the eigenvalues of the GEVP are the same as in the EVP but shifted by $ac/b$, and one eigenvalue vanishes, right? Of course this is only a guess, but it should be right for large $n$ and a low number of zero-rows of $B$ ($m$).</p> <p>Is there a theory for what happens to the spectrum of a matrix if one strikes certain rows and columns? I guess if all eigenvalues are different, i.e. there are as many eigenspaces as dimensions of the matrix, striking one row and its respective column should only lead to the vanishing of one eigenvalue, and all other eigenvalues should stay constant, right?</p>
70,146
<p>I'm trying to use an image as a <code>ChartLabel</code> and I'm getting strange results.</p> <p>Here is a bar chart, with labels, that looks ok:</p> <p><img src="https://i.stack.imgur.com/YmRQp.png" alt="chart ok"></p> <p>But when I try to replace the "A" label with an image, the output is confusing:</p> <p><img src="https://i.stack.imgur.com/7XT0u.png" alt="chart busted"></p> <p>Specifically, the image overlaps the plot and is scaled weirdly.</p> <p>I'd like it to be small, and centered, as the "A" label is in the image above.</p> <p>What's the right way to use an image as a <code>ChartLabel</code>?</p>
MinHsuan Peng
1,376
<p>How about using <code>ChartElements</code> instead of <code>ChartLabels</code>.</p> <pre><code>images = ExampleData[{"TestImage", #}] &amp; /@ {"Lena", "Mandrill"}; BarChart[{{1, 2, 3}, {4, 5, 6}}, ChartElements -&gt; {images, None}] </code></pre> <p><img src="https://i.stack.imgur.com/iZick.png" alt="enter image description here"></p>
2,564,217
<p>For a project I'm doing, I'm wrapping an led strip light around a tube. The tube is 19mm in diameter and 915mm tall. I'm going to coil the led strip around the tube from top to bottom and the strip is 8mm wide, so the coils will be 8mm apart. How long does the led strip need to be to fully cover the tube?</p> <p>This reminds me of a popular question on Math SE about a toilet paper roll, but slightly different. I estimated this by measuring how many 8mm wide circles could fit around the tube, then multiplied by the circumference. However, I don't know how to calculate the exact length of the coil. Out of curiosity, how would you find the exact length of the coil wrapping around the tube with each coil being 8mm apart?</p>
karakfa
14,900
<p>Let's work on the $X=3$ case; the other die will take values $\{1..6\}$, where the corresponding $Y$ values will be $\{2,1,0,1,2,3\}$. Grouping the same values together gives (value, count) pairs $\{(0,1), (1,2), (2,2), (3,1)\}$. Converting these to probabilities, you'll need to divide each count by $6*6$.</p> <p>Note that due to symmetry $P(X,Y)=P(6-X+1,Y)$, so you just need to do this for $1$ and $2$, but $1$ is already the trivial case.</p>
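Assuming, as the worked values $\{2,1,0,1,2,3\}$ suggest, that $Y$ is the absolute difference between $X$ and the second die, the enumeration can be sketched as follows (my illustrative addition, not part of the original answer):

```python
from collections import Counter

# Hypothetical reading: Y = |X - second die|. For X = 3 the second die runs 1..6.
counts = Counter(abs(3 - d) for d in range(1, 7))
# Joint probabilities P(X=3, Y=y) divide each count by 6*6.
probs = {y: c / 36 for y, c in counts.items()}
```

This reproduces the (value, count) pairs listed in the answer.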
2,266,634
<p>If $G$ is a finite abelian group then $G$ has a decomposition of two types:</p> <p>(1) one is a direct product of cyclic groups of prime power order (possibly with repetition);</p> <p>(2) the other is a direct product of cyclic groups where the order of one component divides the order of the next (invariant factors).</p> <p>Comparing these two factorisations, I came across the following natural question, which is usually not raised or discussed in classrooms or in books. </p> <p><strong>Q.</strong> What information about $G$ can be immediately given from one factorisation which is not immediate from the other? </p> <hr> <p>For example, if we know the factorisation $\mathbb{Z}_{d_1}\times \cdots\times\mathbb{Z}_{d_r}$ with $d_i|d_{i+1}$, we can say what the exponent is and whether the group is cyclic. I don't know beyond this for what purpose one factorisation is more useful than the other.</p>
Rajat
177,357
<p>Say $T=\begin{bmatrix} 1 &amp; 0\\ 1 &amp; 0\end{bmatrix}$, $u=[0 \quad 1]^{\top}$ and $v=[1 \quad 0]^{\top}$. Then $Tu$ and $Tv$ are not linearly independent.</p> <p>Only if $T$ is a one-to-one mapping can we say that $Tu, Tv$ must be linearly independent.</p>
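The counterexample is easy to check by hand or by machine; this small sketch is my addition:

```python
def matvec(T, v):
    """Multiply a matrix (nested lists) by a vector (list)."""
    return [sum(T[i][j] * v[j] for j in range(len(v))) for i in range(len(T))]

T = [[1, 0], [1, 0]]
Tu = matvec(T, [0, 1])  # image of u
Tv = matvec(T, [1, 0])  # image of v

# Zero determinant of the 2x2 matrix with columns Tu, Tv means linear dependence.
det = Tu[0] * Tv[1] - Tu[1] * Tv[0]
```

Here $Tu$ is the zero vector, so dependence is immediate.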
2,266,634
<p>If $G$ is a finite abelian group then $G$ has a decomposition of two types:</p> <p>(1) one is a direct product of cyclic groups of prime power order (possibly with repetition);</p> <p>(2) the other is a direct product of cyclic groups where the order of one component divides the order of the next (invariant factors).</p> <p>Comparing these two factorisations, I came across the following natural question, which is usually not raised or discussed in classrooms or in books. </p> <p><strong>Q.</strong> What information about $G$ can be immediately given from one factorisation which is not immediate from the other? </p> <hr> <p>For example, if we know the factorisation $\mathbb{Z}_{d_1}\times \cdots\times\mathbb{Z}_{d_r}$ with $d_i|d_{i+1}$, we can say what the exponent is and whether the group is cyclic. I don't know beyond this for what purpose one factorisation is more useful than the other.</p>
Itay4
385,242
<p>Consider the zero transformation:</p> <p>$$Tv=0, \ \forall v \in V$$</p> <p>It is a linear transformation (why?)</p> <p>No matter what two linearly independent vectors $u$ and $v$ are, obviously $Tv$ and $Tu$ aren't. </p>
14,385
<p>I have always taught my students that the <span class="math-container">$y$</span>-intercept of a line is the <span class="math-container">$y$</span>-coordinate of the point of intersection of a line with the <span class="math-container">$y$</span>-axis, that is, for the line given by the equation <span class="math-container">$y=mx+y_0$</span>, the <span class="math-container">$y$</span>-intercept is <span class="math-container">$y_0$</span>. I emphasize that the <span class="math-container">$y$</span>-intercept is the <em>number</em> <span class="math-container">$y_0$</span> and not the <em>point</em> <span class="math-container">$(0,y_0)$</span>.</p> <p>But I was quite surprised when I recently looked at the <a href="https://en.wikipedia.org/wiki/Intercept" rel="nofollow noreferrer">Wikipedia</a> and <a href="http://mathworld.wolfram.com/x-Intercept.html" rel="nofollow noreferrer">Wolfram</a> <a href="http://mathworld.wolfram.com/y-Intercept.html" rel="nofollow noreferrer">MathWorld</a> entries for <span class="math-container">$y$</span>-intercept because these define the intercept as a point and not as a number (&quot;the point where a line crosses the y-axis&quot; and &quot;The point at which a curve or function crosses the y-axis&quot;).</p> <p>Further investigation yielded inconsistencies: the Wikipedia entry for &quot;<a href="https://en.wikipedia.org/wiki/Line_(geometry)#On_the_Cartesian_plane" rel="nofollow noreferrer">Line (geometry)</a>&quot; states that in the equation <span class="math-container">$y=mx+b$</span>, &quot;<span class="math-container">$b$</span> is the y-intercept of the line&quot;; the Wolfram MathWorld entry for &quot;<a href="http://mathworld.wolfram.com/Line.html" rel="nofollow noreferrer">Line</a>&quot; states that &quot;The line with <span class="math-container">$y$</span>-intercept <span class="math-container">$b$</span> and slope <span class="math-container">$m$</span> is given by the slope-intercept form <span 
class="math-container">$y=mx+b$</span>.</p> <hr /> <p><sup>Edit made on February 21, 2021</sup></p> <p>According to the <em>Dictionary of Analysis, Calculus, and Differential Equations</em> (edited by Douglas N. Clark, published by CRC Press in 2000),</p> <blockquote> <p><strong>intercept</strong> The point(s) where a curve or graph of a function in <span class="math-container">$\mathbf R^n$</span> crosses one of the axes. For the graph of <span class="math-container">$y=f(x)$</span> in <span class="math-container">$\mathbf R^2$</span>, the <span class="math-container">$y$</span>-<em>intercept</em> is the point <span class="math-container">$(0,f(0))$</span> and the <span class="math-container">$x$</span>-<em>intercepts</em> are the points <span class="math-container">$(p,f(p))$</span> such that <span class="math-container">$f(p)=0$</span>.</p> </blockquote> <p>Unfortunately, the book does not consistently use that definition.</p> <blockquote> <p><strong>slope-intercept equation of line</strong> An equation of the form <span class="math-container">$y=mx+b$</span>, for a straight line in <span class="math-container">$\mathbf R^2$</span>. Here <span class="math-container">$m$</span> is the slope of the line and <span class="math-container">$b$</span> is the <span class="math-container">$y$</span>-intercept; that is, <span class="math-container">$y=b$</span>, when <span class="math-container">$x=0$</span>.</p> </blockquote> <p>Thus, even though the book defines an intercept as a point, it uses the term to denote a number.</p> <hr /> <p>Is there a trusted source targeted at mathematics educators (from, say, a government agency, an educational institution, or an organization) that defines &quot;intercept&quot; and consistently uses that definition?</p>
Kyle Miller
8,981
<p>It's worth noting that a real number is a point on the real number line (according to, for instance, Stewart's <em>Calculus</em>, Appendix A). The $y$-axis is a copy of the real number line. So, one could take the defensible position that a $y$-intercept is just as much a point $(0,b)$ as it is a point $b$ along the $y$-axis.</p> <p>For a little pedantic gloss: linear equations and lines in the plane are different concepts, yet they both have a $y$-intercept. Authors disagree on what the $y$-intercept of a linear equation $y=mx+b$ should be: is it $b$ itself? or is it the point where the graph of the equation intersects the $y$-axis? In any case, an equation is not literally a line, so we have to recognize that it's the same terminology for two different things. In the end, the whole $y$-intercept concept is about capturing a connection between geometry (lines) and algebra (linear equations), and since the connection is so strong it might not be worth forcing a distinction, unless it's to make a pedagogical point.</p> <p>(I did some digging into geometry, and there are the concepts of intercepted line segments and intercepted arcs. The $y$-intercept is related to the intercepted line segment on the $y$-axis, bounded by the $x$-axis and the given line. Strangely, I couldn't find any mention of calling the actual point where two intersecting lines meet the intercept, except for the $x$- and $y$-intercepts.)</p>
2,974,747
<p><strong>Q:</strong> Solve the equation <span class="math-container">$x^4+x^3-9x^2+11x-4=0$</span>, which has multiple roots.<br><strong>My approach:</strong> Let <span class="math-container">$f(x)=x^4+x^3-9x^2+11x-4=0$</span>. I know that if the equation has multiple roots then there must exist a nontrivial H.C.F. (highest common factor) of <span class="math-container">$f'(x)$</span> and <span class="math-container">$f(x)$</span>, or an H.C.F. of <span class="math-container">$f''(x)$</span> and <span class="math-container">$f(x)$</span>. But I don't know how to find the H.C.F. of two polynomials by synthetic division (my book titled it so and wrote down the H.C.F.). My book provided that the H.C.F. of <span class="math-container">$f(x),f'(x)$</span> and <span class="math-container">$f''(x)$</span> is <span class="math-container">$(x-1)$</span>.<br> So <span class="math-container">$(x-1)^3$</span> is a factor of <span class="math-container">$f(x)$</span>. Hence <span class="math-container">$f(x)=(x-1)^3(x+4)=0$</span>.<br>Now my <strong>question</strong> is how they found the H.C.F. without the division method. Is there any general process to get the H.C.F. of polynomials without the division method, or some easier process? Any solution will be appreciated.<br>Thanks in advance.</p>
Mohammad Riazi-Kermani
514,496
<p><span class="math-container">$$x^4+x^3-9x^2+11x-4=0$$</span></p> <p>By checking the divisors of <span class="math-container">$-4$</span> we see that <span class="math-container">$x=1$</span> satisfies our equation.</p> <p>Upon synthetic division, we get <span class="math-container">$$x^4+x^3-9x^2+11x-4=(x-1)^3(x+4)$$</span> with solutions of <span class="math-container">$$ x=1,1,1,-4$$</span> If you want to find the HCF of f and f', then you have to find the derivative <span class="math-container">$$ 4x^3+3x^2-18x+11$$</span> and find the common factors of <span class="math-container">$f$</span> and <span class="math-container">$f'$</span></p> <p>Sounds like extra work for no reason.</p>
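The triple-root structure claimed above can be confirmed by evaluating $f$, $f'$ and $f''$ at the candidate roots; this small check is my own addition to the answer:

```python
def f(x):
    """The quartic from the question."""
    return x ** 4 + x ** 3 - 9 * x ** 2 + 11 * x - 4

def df(x):
    """f'(x)."""
    return 4 * x ** 3 + 3 * x ** 2 - 18 * x + 11

def ddf(x):
    """f''(x)."""
    return 12 * x ** 2 + 6 * x - 18

# x = 1 is a triple root: f, f', f'' all vanish there.
triple_root_checks = (f(1), df(1), ddf(1))
# x = -4 is the remaining simple root: f vanishes but f' does not.
simple_root_check = f(-4)
```

Exact integer arithmetic makes the vanishing unambiguous.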
428,408
<p>Consider a norm on <span class="math-container">$\mathbb C^2$</span> as <span class="math-container">$\|(z_1,z_2)\|:=\max\{|z_1|,|z_2|,\frac{1}{\sqrt{2}}|z_1+iz_2|\}.$</span></p> <p><em>Question.</em> Is <span class="math-container">$(\mathbb C^2,\|.\|)$</span> linearly isometric to <span class="math-container">$(\mathbb C^2,\|.\|_{\infty})$</span> where <span class="math-container">$\|(z_1,z_2)\|_\infty:=\max\{|z_1|,|z_2|\}?$</span></p>
Gerald Edgar
454
<p><strong>comment</strong><br /> I think they are not isometric, having different structure for the set of extreme points.</p> <p>The set of extreme points for the unit ball of <span class="math-container">$\|\cdot\|_\infty$</span> is a torus: <span class="math-container">$$T = \{(z_1,z_2) : |z_1| = |z_2| = 1\}.$$</span></p> <p>The set of extreme points of the unit ball of <span class="math-container">$\|\cdot\|$</span> perhaps consist of the union of three tori: <span class="math-container">$$T_1 = \{(z_1,z_2) : |z_1|=|z_2| = 1\},\\T_2 = \textstyle\{(z_1,z_2) : |z_1|=\frac{1}{\sqrt{2}}|z_1+iz_2| = 1\},\\T_3 = \{(z_1,z_2) : |z_2|=\textstyle\frac{1}{\sqrt{2}}|z_1+iz_2| = 1\}.$$</span></p> <p>Here is an example of a point in <span class="math-container">$T_2$</span> but not <span class="math-container">$T_1$</span>: <span class="math-container">$$ \left(1,\; \frac{\sqrt{-r^4+6r^2-1}}{2} + i\,\frac{r^2-1}{2}\right),\quad \sqrt{2}-1 &lt; r &lt; 1 . $$</span></p> <hr /> <p>Now all that remains is to prove this...</p>
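A numerical spot-check of the example point in $T_2\setminus T_1$, taking e.g. $r=0.7$ (this sketch is my addition, not part of the comment):

```python
import math

def norm(z1, z2):
    """The norm max(|z1|, |z2|, |z1 + i*z2| / sqrt(2)) on C^2."""
    return max(abs(z1), abs(z2), abs(z1 + 1j * z2) / math.sqrt(2))

r = 0.7  # any r with sqrt(2) - 1 < r < 1 works
z1 = 1.0
z2 = complex(math.sqrt(-r ** 4 + 6 * r ** 2 - 1) / 2, (r ** 2 - 1) / 2)
# Expect |z1| = 1 and |z1 + i*z2| / sqrt(2) = 1, while |z2| = r < 1,
# so (z1, z2) lies in T_2 but not in T_1.
```

The identity $|z_2| = r$ falls out of expanding $a^2 + b^2$ for the given real and imaginary parts.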
1,290,363
<p>So I already proved Closure and Associativity, now I'm trying to find the identity element of this operation defined as: $$ a * b = a + b - ab $$</p> <p>But my identity element gets cancelled...</p> <p>(The set defined in this exercise is the real numbers.)</p> <p><img src="https://i.stack.imgur.com/ZchjC.jpg" alt="enter image description here"></p>
3x89g2
90,914
<p><strong>Claim:</strong> The identity element in $(\mathbb{R},*)$ is the real number zero.</p> <p><strong>Proof:</strong> For any $x\in \mathbb{R}$, $x*0=x+0-x\times 0=x$. Since the identity element in a group is unique, zero is the identity element.</p> <p>Following your way, suppose the identity is $e$, it has to satisfy that $a*e=a+e-a\times e=a$. This implies that $e=a\times e$. Suppose $e\ne 0$, then we would get $a=1$, which is impossible since we know that there are lots of real numbers that not equal to zero! So $e=0$.</p>
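The claim and the uniqueness argument can be illustrated with a one-line definition of the operation (my addition):

```python
def star(a, b):
    """The operation a * b = a + b - a*b on the reals."""
    return a + b - a * b

# 0 is a two-sided identity; note that 1 absorbs rather than fixes:
# star(1, b) = 1 for every b, matching the e != 0 contradiction above.
```

Checking a few values confirms that $0$ acts as identity on both sides.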
672,736
<p>Let $A = \begin{bmatrix}1&amp;2&amp;1\\0&amp;1&amp;0\\1&amp;3&amp;1\end{bmatrix}$. Find the eigenvalues of $A$.</p> <p>I think I got a pretty steady ground on how I approached this, I just have some difficulty getting the right answer.</p> <p>What I have done so far:</p> <p>$P(\lambda) = det(A - \lambda I)$</p> <p>$det\begin{bmatrix}1-\lambda&amp;2&amp;1\\0&amp;1-\lambda&amp;0\\1&amp;3&amp;1-\lambda\end{bmatrix} = 0$</p> <p>$=(1-\lambda)(1-\lambda)^2 - 2(0) + 1(1-\lambda) = 0$</p> <p>$= (1- \lambda) ^3 +(1-\lambda) = 0$</p> <p>But I'm not getting the right eigenvalues. The above answer gives me the eigenvalue: 1 only.</p> <p>but the right answer is: 2, 1, 0.</p>
hjpotter92
27,741
<p>You've made a little error in calculating your $ \det $ value.</p> <p>$$ det\begin{bmatrix}1-\lambda&amp;2&amp;1\\0&amp;1-\lambda&amp;0\\1&amp;3&amp;1-\lambda\end{bmatrix} = 0 $$</p> <p>will expand to give you:</p> <p>$$ =(1-\lambda)(1-\lambda)^2 - 2(0) + 1 (\color{red}{-1 \times (1 - \lambda)}) = 0 $$</p> <p>which on further calculations give:</p> <p>$$ (1 - \lambda)^3 - (1 - \lambda) = (1 - \lambda)((1 - \lambda)^2 - 1) = 0 $$</p>
672,736
<p>Let $A = \begin{bmatrix}1&amp;2&amp;1\\0&amp;1&amp;0\\1&amp;3&amp;1\end{bmatrix}$. Find the eigenvalues of $A$.</p> <p>I think I got a pretty steady ground on how I approached this, I just have some difficulty getting the right answer.</p> <p>What I have done so far:</p> <p>$P(\lambda) = det(A - \lambda I)$</p> <p>$det\begin{bmatrix}1-\lambda&amp;2&amp;1\\0&amp;1-\lambda&amp;0\\1&amp;3&amp;1-\lambda\end{bmatrix} = 0$</p> <p>$=(1-\lambda)(1-\lambda)^2 - 2(0) + 1(1-\lambda) = 0$</p> <p>$= (1- \lambda) ^3 +(1-\lambda) = 0$</p> <p>But I'm not getting the right eigenvalues. The above answer gives me the eigenvalue: 1 only.</p> <p>but the right answer is: 2, 1, 0.</p>
Brian Fitzpatrick
56,960
<p>You could expand your equation for the determinant about the second row. This gives \begin{align*} \det(A-\lambda\cdot I) &amp;= \det\begin{bmatrix}1-\lambda&amp;2&amp;1\\0&amp;1-\lambda&amp;0\\1&amp;3&amp;1-\lambda\end{bmatrix} \\ &amp;= (1-\lambda)\cdot\det \begin{bmatrix} 1-\lambda &amp; 1 \\ 1 &amp; 1-\lambda \end{bmatrix} \\ &amp;=(1-\lambda)\cdot\left((1-\lambda)^2-1\right) \\ &amp;= (1-\lambda)\cdot((1-\lambda)+1)\cdot((1-\lambda)-1) \\ &amp;= (1-\lambda)\cdot(2-\lambda)\cdot(-\lambda) \end{align*} where we utilize the equation $a^2-b^2=(a+b)\cdot(a-b)$ for $a=(1-\lambda)$ and $b=1$. Hence the eigenvalues are $\lambda=0,1,2$.</p>
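<p>As a sanity check, the determinant can be evaluated directly; here is a small Python sketch (the helper name <code>char_poly</code> is an arbitrary choice) confirming the roots $\lambda = 0, 1, 2$ and the factored form:</p>

```python
def char_poly(lam):
    """det(A - lam*I) for A = [[1,2,1],[0,1,0],[1,3,1]]."""
    a = [[1 - lam, 2, 1],
         [0, 1 - lam, 0],
         [1, 3, 1 - lam]]
    # cofactor expansion along the first row of a generic 3x3 determinant
    return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
            - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
            + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))

# The three eigenvalues annihilate the characteristic polynomial ...
assert [char_poly(lam) for lam in (0, 1, 2)] == [0, 0, 0]
# ... and the determinant agrees with the factored form (1-t)(2-t)(-t).
assert all(char_poly(t) == (1 - t) * (2 - t) * (-t) for t in range(-5, 6))
```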
272,846
<p>Suppose I have a List of numbers:</p> <pre><code>num = Range[5] </code></pre> <p>I want to combine the second and the third element into a sublist to get the result as {1,{2,3},4,5}.<br /> I tried using this:</p> <pre><code>MapAt[List, num, {{2}, {3}}] </code></pre> <p>which is not giving me the desired result. What changes are needed to be made?<br /> Can the same changes be applied to this code:</p> <pre><code>music = SoundNote[&quot;CSharp&quot;, 0.1, 0.2, &quot;Violin&quot;] </code></pre> <p>to get the result as SoundNote[CSharp,{0.1,0.2},Violin]?</p>
lericr
84,894
<p>If you want to do this at a position (i.e. you're not looking at individual elements or patterns to determine the nested list), then you'll probably need to break apart the list and reassemble it.</p> <p>One way:</p> <pre><code>FlattenAt[TakeList[num, {1, 2, All}], {{1}, {3}}] (* you'd need to set the list argument in TakeList based on the positions you want grouped together *) </code></pre> <p>Another way:</p> <pre><code>Insert[Drop[num, {2, 3}], Take[num, {2, 3}], 2] (* this way might be more straightforward because the relevant positions are obvious *) </code></pre> <p>You could also force this into a pattern-matching situation. This could simplify the process if you needed to do this in multiple locations in your list. The way I'll approach this is to create a bespoke &quot;tag&quot; which I'll apply to the target elements and then later use to aggregate them.</p> <pre><code>SetAttributes[MyTag, Flat];(* This will make it easier to lump elements together *) SequenceReplace[ MapAt[MyTag, num, {{2}, {3}}], {tagged__MyTag} -&gt; MyTag[tagged]] /. MyTag -&gt; List </code></pre> <p>Explanation: First, use MapAt to &quot;tag&quot; each target element--really just wrapping a dummy head around each. Next, use SequenceReplace to wrap another dummy head around any contiguous sequence of &quot;tagged&quot; elements. Since our dummy head has the Flat attribute, this flattens out all of the nested tags. Then we finally just replace the MyTag head with List.</p> <p>There are many other ways to do this--it'll just depend on your exact circumstance and stylistic preference.</p>
3,014,453
<p>Suppose there is a number somewhere between 0 and 100 and you have to find it with the fewest attempts possible. Every attempt consists of you checking if the number is smaller (or bigger) than a number in the said interval (0 to 100). My guess would be you start with the halfway point.</p> <p>Is it smaller than 50? yes ---> is it smaller than 25? no ---> is it smaller than 37.5? yes ... etc </p> <p>If this is indeed the fastest method, what would be the formula that expresses it? If this isn't the fastest method, what is it and how is it expressed mathematically and verbally? Thanks.</p>
gandalf61
424,513
<p>Yes, this is the most efficient search method if you can only get a "yes" or a "no" answer on each attempt. Because there are two possible responses on each attempt, it is known as a binary search - see <a href="https://en.wikipedia.org/wiki/Binary_search_algorithm" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Binary_search_algorithm</a>.</p>
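<p>The halving strategy is easy to simulate; below is a Python sketch (one possible formalisation of the yes/no questions — the function name and interval endpoints are my own choices), confirming that at most $\lceil \log_2 101 \rceil = 7$ questions suffice on $\{0,\dots,100\}$:</p>

```python
from math import ceil, log2

def guess(target, lo=0, hi=100):
    """Binary search driven by yes/no questions of the form
    'is the number smaller than m?'. Returns (found, question_count)."""
    questions = 0
    while lo < hi:
        mid = (lo + hi) // 2
        questions += 1
        if target <= mid:        # "is it smaller than mid + 1?" -> yes
            hi = mid
        else:                    # -> no
            lo = mid + 1
    return lo, questions

# Halving 101 candidates needs at most ceil(log2(101)) = 7 questions.
for t in range(0, 101):
    found, q = guess(t)
    assert found == t and q <= ceil(log2(101))
```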
475,005
<p>I want to check how many integers in $\big[1,10^6\big]$ contain the digits $1,2,3,4,5$ and how many contain only them.<br> How should I check it? Is this a problem of inclusion-exclusion? <br> I would like to get some advice!<br> Thanks!</p>
Henry
6,460
<p>Hints:</p> <ul> <li><p>You do not need use inclusion-exclusion, though you can if you want to. Another approach to the first question could be to look at those which do not include any of $1,2,3,4,5$</p></li> <li><p>You might find it easier to treat $1000000$ and $0$ as special cases and instead look at the first question as if it was asking you to look at $[0,999999]$ with an adjustment.</p></li> <li><p>For the second question you might split the range by numbers of digits, i.e. into $[1,9]$, $[10,99]$, etc. </p></li> </ul>
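<p>These hints can be checked by brute force; here is a Python sketch, reading the first question as "contains at least one of the digits 1–5" (which matches the complement-counting hint) and the second as "contains only those digits" (this reading of the question is my own assumption):</p>

```python
# Brute-force check of both counts on [1, 10^6].
digits = "12345"

at_least_one = only_them = 0
for n in range(1, 10**6 + 1):
    s = str(n)
    if any(c in digits for c in s):
        at_least_one += 1
    if all(c in digits for c in s):
        only_them += 1

# Complement count (second hint): among the 10^6 strings 000000..999999,
# exactly 5^6 avoid the digits 1-5; the string 000000 is one of them and
# 1000000 contains a 1, so on [1, 10^6] the count is 10^6 - 5^6 + 1.
assert at_least_one == 10**6 - 5**6 + 1
# "Only them" (third hint, split by digit count): 5^d numbers with d digits,
# all digits from {1,...,5}, for d = 1..6.
assert only_them == sum(5**d for d in range(1, 7))
```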
2,698,960
<p>Why is this statement false for $a \in \mathbb{C}$?</p> <p>$(\sqrt[n]{a} \cdot \sqrt[k]{a} ) - (a^{\frac{n+k}{nk}})= 0$</p> <p>How can you prove it with high school maths?</p>
Clive Newstead
19,542
<p><strong>Hint:</strong> Left adjoints preserve colimits, so it suffices to prove that $U$ does not preserve colimits.</p> <p>For an even bigger hint, hover over the box below.</p> <blockquote class="spoiler"> <p> $U$ doesn't even preserve binary coproducts. The coproduct of two groups in $\mathbf{Ab}$ is their direct sum, but their coproduct in $\mathbf{Grp}$ is their free product.</p> </blockquote>
2,698,960
<p>Why is this statement false for $a \in \mathbb{C}$?</p> <p>$(\sqrt[n]{a} \cdot \sqrt[k]{a} ) - (a^{\frac{n+k}{nk}})= 0$</p> <p>How can you prove it with high school maths?</p>
Pece
73,610
<p>Clive gave a neat short way to answer your question. But let me try to explain what is really going on under the hood here.</p> <p>Call a <em>Lawvere theory</em> any bijective-on-objects finite-coproducts-preserving functor $j : \aleph_0 \to \mathcal L$ where $\aleph_0$ is a skeleton of the category of finite sets. Objects of $\aleph_0$ (and so also those of $\mathcal L$) are just the integers $0,1,2,\dots$ and the morphisms $n \to m$ in $\aleph_0$ are the set-maps from $\{0,1,\dots, n-1\}$ to $\{0,1,\dots,m-1\}$. A <em>model</em> of the theory $j$ is a presheaf $M : \mathcal L^{\rm op} \to \mathbf{Set}$ that preserves finite products. The category of models of $j$ is the full subcategory of $\widehat{\mathcal L}$ spanned by the models of $j$; let's denote it $\mathbf{Mod}_j$. You might want to think of the morphisms in $\mathcal L(1,n)$ as $n$-ary operations and commutative diagrams in $\mathcal L$ as axioms; then the models are to be thought of as sets equipped with these operations and satisfying the axioms.</p> <p>This might seem a strange notion, but you actually know at least two (and probably many more) Lawvere theories: the theory of groups and the theory of abelian groups. The first one is described by the category $\mathcal L_{\rm gr}$ given as the full subcategory of $\mathbf{Grp}$ spanned by the free groups on finite sets; the functor $j_{\rm grp}$ is the obvious one, mapping $n$ to the free group on $\{0,1,\dots,n-1\}$. The second one is described by the category $\mathcal L_{\rm ab}$ given as the full subcategory of $\mathbf{Ab}$ spanned by the free abelian groups on finite sets; the functor $j_{\rm ab}$ is the obvious one, mapping $n$ to the free abelian group on $\{0,1,\dots,n-1\}$. If you are careful enough, you can check that $\mathbf{Grp}\simeq \mathbf{Mod}_{j_{\rm grp}}$ and $\mathbf{Ab}\simeq \mathbf{Mod}_{j_{\rm ab}}$.</p> <p>Of course, we want a way to compare those theories. 
There is a very natural notion of morphisms between theories $j_1$ and $j_2$: these are the functors $\varphi$ such that $\varphi j_1 = j_2$. In the case $j_1 = j_{\rm grp}$ and $j_2 = j_{\rm ab}$ there is such a morphism $\varphi$: it maps the free group on $n$ elements to the free abelian group on $n$ elements; the action on maps takes advantage of the fact that an element of the free group on $n$ elements can equally be seen as an element of the free abelian group on $n$ elements (just not normalized). More hand-wavingly, this morphism of theories reflects the fact that the axioms of groups are (some of) the axioms of abelian groups. Going back to a general $\varphi$, it gives a restriction functor: $$ \varphi^\ast : \widehat{\mathcal L_2} \to \widehat{\mathcal L_1} $$ By the theory of Kan extensions, this functor admits a left and a right adjoint, usually denoted $\varphi_!$ and $\varphi_\ast$ respectively. Now remark that $\varphi^\ast$ maps finite-products-preserving presheaves to finite-products-preserving presheaves, so that it induces a functor $$ U_\varphi : \mathbf{Mod}_{j_2} \to \mathbf{Mod}_{j_1} $$ This functor will have a left adjoint precisely if $\varphi_!$ maps finite-products-preserving presheaves to finite-products-preserving presheaves, and it will have a right adjoint precisely if $\varphi_\ast$ maps finite-products-preserving presheaves to finite-products-preserving presheaves. Here is the catch: $\varphi_!$ always does, while $\varphi_\ast$ has no reason to! 
(It has to do with the representation of presheaves as canonical colimits of representables, but I don't want to bury you under details here.)</p> <p>In the case $j_1 = j_{\rm grp}$ and $j_2=j_{\rm ab}$ and the morphism $\varphi$ sketched earlier, $U_\varphi$ is exactly the forgetful functor from abelian groups to groups, which admits a left adjoint (the abelianization functor) but has no reason to admit a right adjoint.</p> <hr> <p>As you probably guessed, this framework applies to most algebraic theories you have encountered (more precisely, the unisorted first-order equational theories). I like the stratospheric view that this abstract point of view gives you: the real reason behind the non-existence of a right adjoint in your case is that "abelianity" is an additional <strong>axiom</strong> on the usual structure of group.</p>
309,380
<p>Let me sum up my - hopefully correct - understanding of the <a href="https://en.wikipedia.org/wiki/Travelling_salesman_problem" rel="nofollow noreferrer">travelling salesman problem</a> and <a href="https://en.wikipedia.org/wiki/Complexity_class" rel="nofollow noreferrer">complexity classes</a>. It's about <a href="https://en.wikipedia.org/wiki/Decision_problem" rel="nofollow noreferrer">decision problems</a>:</p> <blockquote> <p>"[...] a decision problem is a problem that can be posed as a yes-no question of the input values. Decision problems typically appear in mathematical questions of decidability, that is, the question of the existence of an <strong>effective method</strong> to determine the existence of some object."</p> </blockquote> <p>The travelling salesman problem (<a href="https://en.wikipedia.org/wiki/Travelling_salesman_problem" rel="nofollow noreferrer"><strong>TSP</strong></a>) - as a decision problem - is to find an answer to the question:</p> <blockquote> <p>Given an $n \times n$ matrix $W = (w_{ij})$ with $w_{ij} \in \mathbb{Q}$ and a number $L\in \mathbb{Q}$.</p> <p>Is there a permutation $\pi$ of $\{1,\dots, n\}$ such that</p> <p>$$L(\pi) = \sum_{i=1}^{n} w_{\pi(i)\pi(i+1)} &lt; L?$$</p> </blockquote> <p><sub>with <a href="https://en.wikipedia.org/wiki/Modular_arithmetic" rel="nofollow noreferrer">modular</a> addition, i.e. $n+1 = 1$</sub></p> <p>The answer can be given as a specific example (the output of a constructive "problem solver") which then can be checked for correctness. For <strong>TSP</strong> we know that a specific example given by a constructive problem solver (e.g. a specific permutation $\pi$) can be checked in polynomial time for $L(\pi) &lt; L$, that means <strong>TSP</strong> <a href="https://en.wikipedia.org/wiki/NP_(complexity)" rel="nofollow noreferrer">$\in\mathcal{NP}$</a>. </p> <p>But the answer may also be given by just a boolean value <strong>YES</strong> or <strong>NO</strong> , which cannot be checked at all. 
(What would we try to check?)</p> <p>The first kind of answer is given by algorithms that are programmed to read arbitrary matrices $W$ and numbers $L$ and give an example $\pi$. These are equivalent to constructive proofs which somehow construct a $\pi$ from given $W$ and $L$, and which may be correct or not.</p> <p>The second kind of answer is given by non-constructive proofs - which nevertheless give an answer. Such a proof also "reads" some general $W$ and $L$ and makes some general considerations about them, e.g. like this: If numbers $x_1, \dots x_n$ can be calculated from $W$ and they relate to $L$ such that $f(x_1,\dots, x_n, L) = 0$ then the answer is <strong>YES</strong> otherwise <strong>NO</strong>.</p> <p>My question is: </p> <blockquote> <p>If some day it is proved that <strong>TSP</strong> <a href="https://en.wikipedia.org/wiki/P_(complexity)" rel="nofollow noreferrer">$\not\in \mathcal{P}$</a> (because <a href="https://en.wikipedia.org/wiki/P_versus_NP_problem" rel="nofollow noreferrer">$\mathcal{P} \neq \mathcal{NP}$</a> and <strong>TSP</strong> is <a href="https://en.wikipedia.org/wiki/NP-hardness" rel="nofollow noreferrer">$\mathcal{NP}$-hard</a>), what do we learn about hypothetical non-constructive proofs that for given $W$ and $L$ there exist solutions $\pi$ with $L(\pi) &lt; L$ (<strong>YES</strong> or <strong>NO</strong>)?</p> </blockquote> <p>Or is the talk about such proofs only a chimera - because they are ill-defined or cannot exist for obvious reasons?</p> <p><strong>Remark 1:</strong> Since proofs have no run-time, the things we can learn about them may concern only their length and/or complexity (in general: structure).</p> <p><strong>Remark 2:</strong> Very short and simple algorithms may have exponential run-times.</p> <p>To think more specifically about this: Assume there is a proof that proves:</p> <blockquote> <p>If you calculate numbers $x_1(W),\dots, x_m(W)$ of a square matrix $W$ and you find that $f(x_1,\dots,x_m,L) = 0$, 
then there is a permutation $\pi$ with $L(\pi) &lt; L$.</p> </blockquote> <p>What could be said about this (hypothetical!) proof, assuming that <a href="https://en.wikipedia.org/wiki/P_versus_NP_problem" rel="nofollow noreferrer">$\mathcal{P} \neq \mathcal{NP}$</a>?</p>
mhum
9,840
<p>I think that such a non-constructive method for solving an NP-complete problem can always be transformed into a polynomial-time constructive method by re-running the non-constructive method on a sequence of modified versions of the original problem. It's a little more straightforward in the case of <a href="https://en.wikipedia.org/wiki/Boolean_satisfiability_problem" rel="noreferrer">Satisfiability</a> (i.e.: given a boolean formula, does there exist a satisfying assignment). Suppose you have a boolean formula such that your non-constructive method returns TRUE (i.e.: there exists a satisfying assignment). Fix the first variable, $x_1$, to TRUE in the original formula and re-run the non-constructive method on this new formula. If the non-constructive method still returns TRUE, then we know we have a satisfying assignment in the original formula with $x_1 = $TRUE, otherwise we set $x_1 = $ FALSE. Repeat on the remaining variables and you'll have your answer.</p> <p>A similar trick can be applied to TSP by setting the distances of all but one edge out of each node to some value $&gt;L$. </p> <p>Edited to add:</p> <p>I think I should also be explicit here. If we are able to treat this non-constructive proof as a black box that can be executed in polynomial time, then we can use it to construct explicit solutions to NP-complete problems in polynomial time. So, if we assume that P $\neq$ NP, this implies that such a hypothetical non-constructive proof cannot be "run" (for whatever that might mean) in polynomial time. In the example given at the bottom of the question, this would imply that the calculation of the numbers $x_i(W)$ and/or the function $f(x_1, \ldots, x_m, L)$ should not be calculable in polynomial time. Otherwise, we would just use the above trick and explicitly construct a satisfying TSP tour in polynomial time.</p>
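<p>The variable-fixing trick in the first paragraph is easy to demonstrate in code; below is a hedged Python sketch in which the "non-constructive method" is simulated by a brute-force decision oracle (the function names and the clause encoding — lists of signed variable indices — are my own choices; the point is that the construction only ever uses the oracle's yes/no answer):</p>

```python
from itertools import product

def decide_sat(clauses, n_vars):
    """Decision oracle: does a satisfying assignment exist?
    (Simulated here by brute force over all assignments.)"""
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def construct_assignment(clauses, n_vars):
    """Turn the yes/no oracle into an explicit assignment by fixing one
    variable at a time, exactly as described above."""
    if not decide_sat(clauses, n_vars):
        return None
    fixed = []
    for v in range(1, n_vars + 1):
        # Try v = True by adding the unit clause [v]; keep it if still SAT.
        if decide_sat(clauses + [[v]], n_vars):
            clauses = clauses + [[v]]
            fixed.append(True)
        else:
            clauses = clauses + [[-v]]
            fixed.append(False)
    return fixed

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
cnf = [[1, 2], [-1, 3], [-2, -3]]
a = construct_assignment(cnf, 3)
assert a is not None
# The constructed assignment really satisfies every clause:
assert all(any(a[abs(l) - 1] == (l > 0) for l in c) for c in cnf)
```

With $n$ variables this makes $n+1$ oracle calls, so a polynomial-time decision oracle yields a polynomial-time constructive solver.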
309,380
<p>Let me sum up my - hopefully correct - understanding of the <a href="https://en.wikipedia.org/wiki/Travelling_salesman_problem" rel="nofollow noreferrer">travelling salesman problem</a> and <a href="https://en.wikipedia.org/wiki/Complexity_class" rel="nofollow noreferrer">complexity classes</a>. It's about <a href="https://en.wikipedia.org/wiki/Decision_problem" rel="nofollow noreferrer">decision problems</a>:</p> <blockquote> <p>"[...] a decision problem is a problem that can be posed as a yes-no question of the input values. Decision problems typically appear in mathematical questions of decidability, that is, the question of the existence of an <strong>effective method</strong> to determine the existence of some object."</p> </blockquote> <p>The travelling salesman problem (<a href="https://en.wikipedia.org/wiki/Travelling_salesman_problem" rel="nofollow noreferrer"><strong>TSP</strong></a>) - as a decision problem - is to find an answer to the question:</p> <blockquote> <p>Given an $n \times n$ matrix $W = (w_{ij})$ with $w_{ij} \in \mathbb{Q}$ and a number $L\in \mathbb{Q}$.</p> <p>Is there a permutation $\pi$ of $\{1,\dots, n\}$ such that</p> <p>$$L(\pi) = \sum_{i=1}^{n} w_{\pi(i)\pi(i+1)} &lt; L?$$</p> </blockquote> <p><sub>with <a href="https://en.wikipedia.org/wiki/Modular_arithmetic" rel="nofollow noreferrer">modular</a> addition, i.e. $n+1 = 1$</sub></p> <p>The answer can be given as a specific example (the output of a constructive "problem solver") which then can be checked for correctness. For <strong>TSP</strong> we know that a specific example given by a constructive problem solver (e.g. a specific permutation $\pi$) can be checked in polynomial time for $L(\pi) &lt; L$, that means <strong>TSP</strong> <a href="https://en.wikipedia.org/wiki/NP_(complexity)" rel="nofollow noreferrer">$\in\mathcal{NP}$</a>. </p> <p>But the answer may also be given by just a boolean value <strong>YES</strong> or <strong>NO</strong> , which cannot be checked at all. 
(What would we try to check?)</p> <p>The first kind of answer is given by algorithms that are programmed to read arbitrary matrices $W$ and numbers $L$ and give an example $\pi$. These are equivalent to constructive proofs which somehow construct a $\pi$ from given $W$ and $L$, and which may be correct or not.</p> <p>The second kind of answer is given by non-constructive proofs - which nevertheless give an answer. Such a proof also "reads" some general $W$ and $L$ and makes some general considerations about them, e.g. like this: If numbers $x_1, \dots x_n$ can be calculated from $W$ and they relate to $L$ such that $f(x_1,\dots, x_n, L) = 0$ then the answer is <strong>YES</strong> otherwise <strong>NO</strong>.</p> <p>My question is: </p> <blockquote> <p>If some day it is proved that <strong>TSP</strong> <a href="https://en.wikipedia.org/wiki/P_(complexity)" rel="nofollow noreferrer">$\not\in \mathcal{P}$</a> (because <a href="https://en.wikipedia.org/wiki/P_versus_NP_problem" rel="nofollow noreferrer">$\mathcal{P} \neq \mathcal{NP}$</a> and <strong>TSP</strong> is <a href="https://en.wikipedia.org/wiki/NP-hardness" rel="nofollow noreferrer">$\mathcal{NP}$-hard</a>), what do we learn about hypothetical non-constructive proofs that for given $W$ and $L$ there exist solutions $\pi$ with $L(\pi) &lt; L$ (<strong>YES</strong> or <strong>NO</strong>)?</p> </blockquote> <p>Or is the talk about such proofs only a chimera - because they are ill-defined or cannot exist for obvious reasons?</p> <p><strong>Remark 1:</strong> Since proofs have no run-time, the things we can learn about them may concern only their length and/or complexity (in general: structure).</p> <p><strong>Remark 2:</strong> Very short and simple algorithms may have exponential run-times.</p> <p>To think more specifically about this: Assume there is a proof that proves:</p> <blockquote> <p>If you calculate numbers $x_1(W),\dots, x_m(W)$ of a square matrix $W$ and you find that $f(x_1,\dots,x_m,L) = 0$, 
then there is a permutation $\pi$ with $L(\pi) &lt; L$.</p> </blockquote> <p>What could be said about this (hypothetical!) proof, assuming that <a href="https://en.wikipedia.org/wiki/P_versus_NP_problem" rel="nofollow noreferrer">$\mathcal{P} \neq \mathcal{NP}$</a>?</p>
Gerhard Paseman
3,402
<p>I suspect that one learns nothing about such proofs, even in the case that P equals NP. (If a specific polynomial time algorithm were given for 3-SAT, we could build one for any NP problem, of course, and we would then learn something. But you are not asking about that situation.) Let me illustrate with a particular problem (which turns out to be in P, and is easily but not trivially so demonstrated), and at the same time show that the problem representation is important.</p> <p>The input is a finite tuple of integers $k_i$, each greater than 1, which represents a hypercubic graph with number of vertices the product of the $k_i$, and is a "brick-shaped" subset with the edge lengths given by the $k_i$ of the infinite hypercubic lattice graph given by the product of the integers with itself. The output is Yes if there is a Hamiltonian cycle (a TSP) in this brick and No if not.</p> <p>Take a moment if you wish to find a poly-time algorithm for this problem. It turns out the base used to encode the input affects the time needed to solve the problem.</p> <p>Suppose the input is represented in ternary (base 3). Here is a sketch of an algorithm to solve the problem. For each input integer given in base 3, take its residue modulo two. If for one of the integers the result is 0, output Yes and stop. Otherwise process all the integers, find that their residues all equal 1, and then output No and stop.</p> <p>As an exercise, show that a base two encoding of the input can permit a quicker algorithm, especially if the hardware supports it.</p> <p>It turns out the problem specification on the specified input domain is equivalent to the following: return Yes if the input tuple contains an even integer, otherwise return No. 
Note that you do not always have to examine all of the input, and with a base two encoding, you just have to examine the least significant bit of the $k_i$'s until you find a zero.</p> <p>Until you analyze the problem, this seems almost like magic: you examine a (often proper) subset of the input quickly and come up with an answer. This is because you understand the pattern of Yes instances so well that you can distinguish them from No instances quickly, but may not be able to explain why unless you educate the reader about the technique of modular arithmetic (and the reason why this matters; I also leave that as an exercise).</p> <p>If I tell you that there is such a pattern for every input instance of (a polynomial time encoded version of) 3-SAT, and then you test me and I give you correct Yes/ No answers on your battery of examples in time much less than it took you to verify them yourself, you might believe me. If you ask for the pattern, and I say that in order to describe it we have to start with simplicial homology and apply some recent results of Peter Scholze and Caucher Birkar, you might be determined and attempt to understand the complicated description, but if you are like me and a lot of other people, you might give up and still declare it a mystery.</p> <p>In short the mere fact of whether P and NP differ will not tell us anything (besides existence) about constructive or non constructive proofs. There are many complicated and obscure patterns out there, and P = NP would just tell us about asymptotics, but not whether it was brief to describe and capable of being understood after (say) a year of concentrated study.</p> <p>Gerhard "Work On Simplifying CFSG Instead" Paseman, 2018.08.29.</p>
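<p>The parity criterion ("Yes iff some $k_i$ is even") can be cross-checked against brute force on small bricks; a Python sketch (the function names are my own, and the exhaustive search is only feasible for tiny bricks):</p>

```python
from itertools import product

def has_hamiltonian_cycle(dims):
    """Exhaustive search for a Hamiltonian cycle in the grid ('brick')
    graph with the given side lengths."""
    verts = list(product(*[range(k) for k in dims]))
    n = len(verts)
    # neighbours differ by exactly 1 in exactly one coordinate
    adj = {v: [w for w in verts
               if sum(abs(a - b) for a, b in zip(v, w)) == 1]
           for v in verts}
    start = verts[0]

    def dfs(v, seen):
        if len(seen) == n:
            return start in adj[v]          # can we close the cycle?
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                if dfs(w, seen):
                    return True
                seen.remove(w)
        return False

    return dfs(start, {start})

# The quick answer: Yes iff some side length is even.
for dims in [(2, 2), (2, 3), (3, 3), (3, 5), (2, 2, 3)]:
    assert has_hamiltonian_cycle(dims) == any(k % 2 == 0 for k in dims)
```

(The "No" direction is the usual bipartiteness argument: colour the lattice like a checkerboard; with all sides odd the two colour classes have different sizes, so no Hamiltonian cycle can alternate between them.)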
3,430,136
<p>I would like to prove that the map <span class="math-container">$f: S^n \times S^m \to 2S^{m+n+1}: ((x_1,..,x_{n+1}), (y_1,...,y_{m+1})) \to (x_1,...,x_{n+1},y_1,...,y_{m+1})$</span> is an immersion. Here <span class="math-container">$2S^{m+n+1}$</span> is the <span class="math-container">$(m+n+1)$</span>-dimensional sphere with radius <span class="math-container">$\sqrt2$</span>. </p> <p>I know that I have to prove that the map <span class="math-container">$(f_\star)_p : T_p(S^n \times S^m) \to T_{f(p)}(2S^{m+n+1}) : [\gamma] \to [f \circ \gamma]$</span> is injective but I fail to do this. My initial idea was the following: </p> <p>Assume that <span class="math-container">$(f_\star)_p([\gamma_1]) = (f_\star)_p([\gamma_2])$</span>. It holds that <span class="math-container">$[f \circ \gamma_1] = [f \circ \gamma_2]$</span> and thus <span class="math-container">$f \circ \gamma_1 \sim f \circ \gamma_2$</span>. So there is a chart <span class="math-container">$(U,\phi: U \to \mathbb{R}^p)$</span> with <span class="math-container">$ U \subset \mathbb{R}^{n+m+2}$</span> open such that <span class="math-container">$(\phi \circ (f \circ \gamma_1))'(0) = (\phi \circ (f \circ \gamma_2))'(0)$</span>. It is easy to see that <span class="math-container">$f:S^n \times S^m \to f(S^n \times S^m)$</span> is a homeomorphism and thus <span class="math-container">$\phi \circ f$</span> is a chart for <span class="math-container">$S^n \times S^m$</span>. Thereby <span class="math-container">$\gamma_1 \sim \gamma_2$</span> and thus <span class="math-container">$[\gamma_1] = [\gamma_2]$</span>. </p> <p>I don't think that this is true because we do not know if <span class="math-container">$\phi \circ f$</span> is globally well defined. Can someone help me? </p> <p>Thanks! </p>
Tsemo Aristide
280,301
<p>The map <span class="math-container">$h:\mathbb{R}^{n+1}\times\mathbb{R}^{n+1}\rightarrow\mathbb{R}^{n+m+2}$</span> defined by <span class="math-container">$h((x_1,..,x_{n+1}),(y_1,..,y_{m+1}))=(x_1,..,x_{n+1},y_1,..,y_{m+1})$</span> is an immersion. Its restriction to the submanifold <span class="math-container">$S^n\times S^m$</span> of <span class="math-container">$\mathbb{R}^{n+1}\times\mathbb{R}^{m+1}$</span> is also an immersion.</p>
2,470,958
<p>Let's say that I've got the following IVP:</p> <p>$\frac{dy}{dx} = f(x,y)$</p> <p>$y(x_0) = y_0$</p> <p>And I want conditions that guarantee existence and uniqueness of its solution.</p> <p>On the one hand I've got the Picard–Lindelöf theorem. It asks that there exists a rectangle $R = [a,b] \times [c,d]$, containing $(x_0, y_0)$ as an interior point, where $f$ is continuous in $x$ and Lipschitz continuous in $y$.</p> <p>On the other hand I've got a theorem, which I've encountered in many undergraduate text books, that requires $f$ and $\frac{\partial f}{\partial y}$ to be continuous in the aforementioned rectangle. </p> <p>Are these two different theorems? It seems to me that the hypotheses of the first one are implied by those of the second one. But in that case, why would some authors prefer this more restrictive form of the theorem? Could it be just so that students don't need to learn the concept of Lipschitz continuity? </p>
user247327
247,327
<p>Yes, the second version is just a more restrictive form of the first. If $\frac{\partial f}{\partial y}$ is continuous on the closed rectangle $R$, then it is bounded there, and the mean value theorem shows that $f$ is Lipschitz continuous in $y$ on $R$. The converse is not true: for example, $f(x,y) = |y|$ is Lipschitz continuous in $y$ but not differentiable at $y = 0$. So the Picard–Lindelöf formulation applies to strictly more functions; textbooks often state the more restrictive version precisely so that students do not need the notion of Lipschitz continuity.</p>
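<p>The role of the Lipschitz condition is easy to illustrate with the standard counterexample $f(x,y)=2\sqrt{|y|}$, which is continuous but not Lipschitz in $y$ near $0$: the IVP $y' = f(x,y)$, $y(0)=0$ then has more than one solution. A hedged Python sketch (step size and sample points are arbitrary choices) checking numerically that two different functions solve the same IVP:</p>

```python
import math

def f(x, y):
    # Continuous in y, but not Lipschitz in y near y = 0:
    # |f(x, y) - f(x, 0)| / |y| = 2 / sqrt(|y|) is unbounded.
    return 2 * math.sqrt(abs(y))

# Two candidate solutions of y' = f(x, y), y(0) = 0, on x >= 0:
sols = [lambda x: 0.0, lambda x: x * x]

h = 1e-6
for y in sols:
    assert y(0) == 0
    for x in (0.1, 0.5, 1.0, 2.0):
        dydx = (y(x + h) - y(x - h)) / (2 * h)   # central difference
        assert abs(dydx - f(x, y(x))) < 1e-4
```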
2,144,520
<p>I don't really understand Cantor's diagonal argument, so this proof is pretty hard for me. I know this question has been asked multiple times on here and I've gone through several of them; some don't use Cantor's diagonal argument, and I don't really understand the ones that do. I know I'm supposed to assume that A is countable and then find the contradiction to state that it is uncountable. I just don't know how to get there. Also, there's a part B.</p> <p>Here's part B if you can help:</p> <p>Prove that P(N) = {X : X ⊆ N}, the power set of the natural numbers, is uncountable by establishing a bijection between P(N) and the set A from part (a).</p> <p>(HINT: Given X ⊆ N, we can ask whether 1 ∈ X, 2 ∈ X, etc. Based on the true/false results, can you think of a way to define a unique binary sequence to go with each subset of N?)</p>
Noah Schweber
28,111
<p>In the comments to your question, you indicate that your professor began by showing that $(0, 1)$ is uncountable. I actually think this is a bad way to start; it will be easier to understand the proof of the uncountability of set of <strong>infinite sequences of natural numbers</strong>, $\mathbb{N}^\mathbb{N}$. </p> <p><em>Why? Well,there is a slight issue in the uncountability of the interval $(0, 1)$ - namely the fact that some reals have <strong>two decimal expansions</strong> (like $0.1000000...=0.0999999....$) - which forces us to be a little weird (this is the whole "if $x_i(i)&lt;8$ . . ." business).</em></p> <p>So let me explain why $\mathbb{N}^\mathbb{N}$ is uncountable. Suppose I have a "counting" of some infinite sequences - that is, a map $f: \mathbb{N}\rightarrow \mathbb{N}^\mathbb{N}$. (I think of $f$ as a list: the first element on the list is $f(1)$, the second is $f(2)$, etc.)</p> <p>Now I want to build a "missing sequence" - that is, an infinite sequence of natural numbers which is "not on the list" (that is, not in the range of $f$). To do this, it will be enough to build a sequence $s\in \mathbb{N}^\mathbb{N}$ satisfying the following property:</p> <blockquote> <p>For each $n$, there is some place where $s$ differs from $f(n)$: that is, some $i$ such that the $i$th term of $s$ is different from the $i$th term of $f(n)$. <em>(Remember that $f(n)$ is a sequence.)</em></p> </blockquote> <p>Why? Well, if $s=f(n)$ for some $n$, then each of the terms of $s$ and $f(n)$ had better be the same: if the $57$th term of $s$ is $0$, but the $57$th term of $f(n)$ is $5$, they're clearly different sequences!</p> <p>So how can I do this? Well, I'll define $s$ so that the $n$th term of $s$ is different from the $n$th term of $f(n)$, for each $n$. This will be enough to make $s$ different from each $f(n)$, that is, not on the list.</p> <p>And this isn't hard to do - just add $1$! 
For instance, if the first four sequences on my list look like</p> <ul> <li><p>$f(1) = (4, 2, 5, 1, . . .)$,</p></li> <li><p>$f(2) = (1, 52, 2, 8, . . .)$,</p></li> <li><p>$f(3) = (0, 0, 0, 0, . . . )$, </p></li> <li><p>$f(4) = (5, 10, 15, 20, . . .)$, </p></li> </ul> <p>then my $s$ will begin as follows: $$s = (5, 53, 1, 21, . . .)$$</p> <p>This $s$ isn't $f(1)$, since the first term of $s$ is different from the first term of $f(1)$. It's not $f(2)$, since the second term is different from the second term of $f(2)$. And so on.</p> <p>Formally, here's how we define $s$:</p> <p>$$s(n)=f(n)(n)+1$$</p> <p>(here "$f(i)(j)$" means "the $j$th term of $f(i)$," so e.g. in my example above $f(1)(3)=5$).</p> <p>It's clear that for each $n$, the $n$th term of $s$ is different from the $n$th term of $f(n)$; so $s\not=f(n)$ for any $n$. In particular, $s$ is not on the list.</p> <p>This shows that <em>any listing of infinite sequences of naturals is incomplete</em> - that is, there is no bijection from $\mathbb{N}$ to $\mathbb{N}^\mathbb{N}$.</p> <hr> <p>Now, do you see how to adapt this idea to infinite <em>binary</em> sequences? Note that adding $1$ to each term doesn't work anymore, since if you do that you don't get a binary sequence ($2\not\in \{0, 1\}$). But there's something else you can do . . .</p>
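<p>The diagonal construction can even be written as a two-line program; a Python sketch, representing an infinite sequence as a function from indices to naturals (indices shifted to start at 0, and only finitely many terms are ever inspected):</p>

```python
def diagonalize(f):
    """Given a 'list' f, where f(n) is the n-th sequence (itself a function
    from indices to naturals), return the sequence s with s(n) = f(n)(n) + 1."""
    return lambda n: f(n)(n) + 1

# The four example sequences from the answer (extended arbitrarily by 0s):
rows = [
    lambda i: [4, 2, 5, 1][i] if i < 4 else 0,
    lambda i: [1, 52, 2, 8][i] if i < 4 else 0,
    lambda i: 0,
    lambda i: [5, 10, 15, 20][i] if i < 4 else 0,
]
f = lambda n: rows[n]

s = diagonalize(f)
assert [s(n) for n in range(4)] == [5, 53, 1, 21]
# s differs from the n-th listed sequence at position n:
assert all(s(n) != f(n)(n) for n in range(4))
```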
2,548,942
<p>What would be the best approach to calculate the following limits </p> <p>$$ \lim_{x \rightarrow 0} \left (1+\frac {1} {\arctan x} \right)^{\sin x}, \qquad \lim_{x \rightarrow 0} \frac {\tan ^7 x} {\ln (7x+1)} $$ in a basic way, using some special limits, without L'Hospital's rule? </p>
Sebastiano
705,338
<p>For the first limit, as <span class="math-container">$x\to 0^+$</span> we have <span class="math-container">$\arctan x\sim x$</span>, hence:</p> <p><span class="math-container">$$\lim_{x \rightarrow 0^+} \left (1+\frac {1} {\arctan x} \right)^{\sin x}= \lim_{x \rightarrow 0^+} \Biggl[\left (1+\frac {1} {x} \right)^x\Biggr]^{\displaystyle \frac{\sin x}{x}}$$</span></p> <p>Note that <span class="math-container">$\left(1+\frac1x\right)^x\to e$</span> only as <span class="math-container">$x\to\infty$</span>; as <span class="math-container">$x\to 0^+$</span> we instead have <span class="math-container">$x\ln\left(1+\frac1x\right)\sim -x\ln x\to 0$</span>, so the bracketed factor tends to <span class="math-container">$e^0=1$</span>, and since <span class="math-container">$\frac{\sin x}{x}\to 1$</span>, the limit equals <span class="math-container">$1$</span>.</p> <p>For the second limit, as <span class="math-container">$x\to 0$</span> we have <span class="math-container">$\tan \psi(x)\sim \psi(x)$</span> and <span class="math-container">$\ln (1+\gamma(x)) \sim \gamma(x)$</span>, hence: <span class="math-container">$$\lim_{x \rightarrow 0} \frac {\tan ^7 x} {\ln (7x+1)}= \lim_{x \rightarrow 0} \frac{x^7}{7x}=\lim_{x \rightarrow 0} \frac{x^6}{7}=0$$</span></p>
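A quick floating-point sanity check of both limits near $0^+$ (my own addition; plain `math`, no symbolic library assumed):

```python
import math

def f1(x):
    # (1 + 1/arctan x)^(sin x), defined for x > 0
    return (1 + 1 / math.atan(x)) ** math.sin(x)

def f2(x):
    # tan^7 x / ln(7x + 1)
    return math.tan(x) ** 7 / math.log(7 * x + 1)

xs = [10.0 ** -k for k in range(3, 8)]
vals1 = [f1(x) for x in xs]   # tends to 1 as x -> 0+
vals2 = [f2(x) for x in xs]   # tends to 0 as x -> 0+
```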
599,139
<p>Let $A = B = N$, where N is the set of natural numbers. Define $f:A \to B$ by $f(a)=2a$ and define $g:B\to A$ by $g(b)=3b$</p> <p>Find $g^{-1}(\{2,4,6\})$.</p> <p>Find $g^{-1} (\{2,4\})$</p> <p>My trouble here is would $g^{-1}$ just be f?</p> <p>Also an explanation of what the difference between $f(g(3))$ and $f(g(\{3\}))$ is.</p>
Mauro ALLEGRANZA
108,274
<p>See <strong>Herbert Enderton</strong>, <em>Computability Theory: An Introduction to Recursion Theory</em> (2011) [pag. 4]:</p> <blockquote> <p>The concept of decidability can [...] be described in terms of functions: For a subset $S$ of $\mathbb{N}$, we can say that $S$ is <em>decidable</em> iff its characteristic function</p> </blockquote> <p>$C^S(x)$ = Yes if $x \in S$</p> <p>$C^S(x)$ = No if $x \notin S$</p> <blockquote> <p>is effectively calculable. Here “Yes” and “No” are some fixed members of $\mathbb{N}$, such as $1$ and $0$.</p> </blockquote> <p>So, a predicate $P$ is <em>decidable</em> when you have an effective way (an algorithm) to check, for each $x$ in the domain of $P$, if $x$ is in $P$ (i.e. if $P(x)$ is true). </p> <p>The class of effectively calculable functions (and predicates) is closed under standard logical operators (like $\land$, $\lnot$, $\lor$); so, the conjunction of two decidable predicates $P$ and $Q$ (i.e. $P \land Q$) is again decidable.</p> <p>From algorithms checking for membership in the sets $P$ and $Q$ you can find an algorithm checking for membership in the intersection of $P$ and $Q$ (i.e. $P \cap Q$).</p> <p>Take as an example the set of natural numbers $\mathbb{N}$ and let $P$ be the predicate "_ is even", so that $P(x)$ is true for $x \in \mathbb{N}$ iff "$x$ is even".</p> <p>Let $Q$ be the predicate "_ is a multiple of $5$"; now $P \land Q$ is the predicate "_ is even and a multiple of $5$".</p> <p>By usual arithmetical methods, you can check, for each number $x \in \mathbb{N}$, if $(P \land Q)(x)$ is true (e.g. it is true for $10$, $20$, ..., i.e. for all numbers divisible by both $2$ and $5$).</p>
599,139
<p>Let $A = B = N$, where N is the set of natural numbers. Define $f:A \to B$ by $f(a)=2a$ and define $g:B\to A$ by $g(b)=3b$</p> <p>Find $g^{-1}(\{2,4,6\})$.</p> <p>Find $g^{-1} (\{2,4\})$</p> <p>My trouble here is would $g^{-1}$ just be f?</p> <p>Also an explanation of what the difference between $f(g(3))$ and $f(g(\{3\}))$ is.</p>
Mauro ALLEGRANZA
108,274
<p>About <em>semi-decidability</em>, we have that (again from Enderton's book, pag. 5):</p> <blockquote> <p>It is very natural to extend these concepts to the situation where we have half of decidability: Say that S is semidecidable if its “semicharacteristic function”</p> </blockquote> <p>$c^S(x)$ = Yes if $x \in S$</p> <p>$c^S(x)$ = undefined if $x \notin S$</p> <blockquote> <p>is an effectively calculable partial function. Thus, a set S of numbers is semidecidable if there is an effective procedure for recognizing members of S. We can think of S as the set that the procedure accepts. And the effective procedure, while it may not be a decision procedure, is at least an acceptance procedure. Any decidable set is also semidecidable. If we have an effective procedure that calculates the characteristic function $C^S$, then we can convert it to an effective procedure that calculates the semicharacteristic function $c^S$. We simply replace each “output No” command by some endless loop. </p> </blockquote> <p>The fundamental theorem regarding these two concepts is (pag. 9):</p> <blockquote> <p><strong>Kleene’s theorem</strong>: A set (or a relation) is decidable if and only if both it and its complement are semidecidable.</p> </blockquote>
1,986,402
<blockquote> <p>How can I simplify $\prod \limits_{l=1}^{a} \frac{1}{4^a} \cdot 16^l$?</p> </blockquote> <p>I've tried looking at the terms and finding something in there to conclude what it might be, and also took the $n^{th}$ term of $16^l$ into one fraction, but that does rather the opposite of simplification.</p>
Leucippus
148,155
<p>Following Jack's answer and using $\sum_{k=1}^{n} k = n(n+1)/2$ it can be seen that: \begin{align} \prod_{k=1}^{n} \left\{ \frac{x^{2k}}{y^{n}} \right\} &amp;= \left( y^{-n} \right)^{n} \, \left( \prod_{k=1}^{n} x^{2k} \right) \\ &amp;= y^{-n^{2}} \, x^{2 \, \sum_{k=1}^{n} k } = y^{-n^{2}} \, x^{n^{2} + n } \\ &amp;= x^{n} \, \left( \frac{x}{y}\right)^{n^{2}}. \end{align} Examples: </p> <ul> <li>$y=x$ $$\prod_{k=1}^{n} \left\{ \frac{x^{2k}}{x^{n}} \right\} = x^{n}$$</li> <li>$y = x^{2}$ $$\prod_{k=1}^{n} \left\{ \frac{x^{2k}}{x^{2n}} \right\} = \frac{1}{x^{n(n-1)}}$$</li> </ul>
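A quick check of the closed form with exact rational arithmetic (my own addition; the sample values of `x`, `y`, `n` are arbitrary, not from the question):

```python
from fractions import Fraction

def lhs(x, y, n):
    # the product  prod_{k=1}^{n} x^(2k) / y^n,  term by term
    p = Fraction(1)
    for k in range(1, n + 1):
        p *= Fraction(x) ** (2 * k) / Fraction(y) ** n
    return p

def rhs(x, y, n):
    # the claimed closed form  x^n * (x/y)^(n^2)
    return Fraction(x) ** n * Fraction(x, y) ** (n ** 2)

checks = [lhs(x, y, n) == rhs(x, y, n)
          for x in (2, 3) for y in (2, 5) for n in (1, 2, 4)]
```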
2,292,015
<p>The question is: the first three terms of an arithmetic series $c_{n}$ are $$a(1+b), a(1+3b),a(1+5b)$$ I needed to find the common difference in terms of $a$ and $b$ and then find the expression for $c_{n}$.</p> <p>The final part I struggled with where I have to find $a$ and $b$ and the information given is $$c_{5} = 25,c_{10} = 55$$</p> <p>The answers for the first two parts are $difference = 2ab$ and $c_{n}=a(1+(2n-1)b)$</p>
Ahmed S. Attaalla
229,023
<p>If we write it as follows,</p> <p>$$\vec r(t)=\langle t\cos t,t \sin t, t \rangle$$</p> <p>Then we are able to see that $x^2+y^2=t^2$. As $t$ increases from $0$ we get higher and higher up the path, but the points stay circular in nature. The circle that a single point $(t\cos t,t\sin t,t)$ lies on gets bigger as $t$ gets bigger or smaller from $0$.</p> <p>The path looks like a spiral as,</p> <p><a href="https://i.stack.imgur.com/4Phae.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4Phae.gif" alt="enter image description here"></a></p> <p>It is easy to see that $t$ behaves just like the polar angle $\theta$ when $t&gt;0$ because $\vec r(t)=t \langle \cos t,\sin t,1 \rangle$. So if we just plotted, in a way, level points for the path with $t&gt;0$, we would get $x^2+y^2=t^2=\theta^2$, which tells us that the level points together look like $r=\theta$, which indeed is well known to be a "spiral".</p>
4,247,637
<p>There are several tea cups in the kitchen, some with handle and the others without handles. The number of ways of selecting two cups without a handle and three with a handle is exactly <span class="math-container">$1200$</span>. What is the maximum possible number of cups in the kitchen?<br> Here's what I did:<br> I assumed cups with handle are <span class="math-container">$x$</span> and without handle are <span class="math-container">$y$</span>. Now ways of selecting three cups with a handle are <span class="math-container">$^xC_3$</span> and ways of selecting <span class="math-container">$2$</span> cups without a handle are <span class="math-container">$^yC_2$</span>. So, <span class="math-container">$x(x-1)y(y-1)(y-2)=14400$</span> .Now I am stuck here. How do I proceed from here? Is the only way hit and trial? Please help me out.</p>
Jack D'Aurizio
44,121
<p>I will prove that <span class="math-container">$(1)$</span> and <span class="math-container">$(2)$</span> are equivalent definitions for <span class="math-container">$K_0$</span>: <span class="math-container">$$ K_0(\xi)=\frac{1}{2}\int_{-\infty}^{+\infty}\frac{e^{i\xi x}}{\sqrt{x^2+1}}\,dx=\int_{0}^{+\infty}\frac{\cos(\xi x)}{\sqrt{x^2+1}}\,dx \tag{1} $$</span> <span class="math-container">$$ K_0(\xi) = \int_{0}^{+\infty} \exp\left(-\xi \cosh x\right)\,dx. \tag{2} $$</span> It is enough to show that <span class="math-container">$(1)$</span> and <span class="math-container">$(2)$</span> have the same Laplace transform. <br> The Laplace transform of the function defined by <span class="math-container">$(1)$</span> is given by <span class="math-container">$$ \int_{0}^{+\infty}\frac{s}{(s^2+x^2)\sqrt{x^2+1}}\,dx \stackrel{x\mapsto\sinh(x)}{=} \int_{0}^{+\infty}\frac{s}{s^2+\sinh^2(x)}\,dx \tag{L1}$$</span> while the Laplace transform of the function defined by <span class="math-container">$(2)$</span> is <span class="math-container">$$ \int_{0}^{+\infty}\frac{1}{s+\cosh(x)}\,dx.\tag{L2} $$</span> Through the substitution <span class="math-container">$x=\log u$</span> and integration by parts it is very simple to check that both <span class="math-container">$(L1)$</span> and <span class="math-container">$(L2)$</span> boil down to</p> <p><span class="math-container">$$ \int_{1}^{+\infty}\frac{2\,du}{1+2su+u^2}=\int_{0}^{1}\frac{2\,du}{1+2su+u^2} $$</span> and this finishes the proof. 
In other terms, both <span class="math-container">$(1)$</span> and <span class="math-container">$(2)$</span> show that <span class="math-container">$K_0(\xi)$</span> is the inverse Laplace transform of the analytic continuation of <span class="math-container">$$\frac{2}{\sqrt{1-s^2}}\,\arctan\sqrt{\frac{1-s}{1+s}}, $$</span> which equals <span class="math-container">$\frac{1}{\sqrt{s^2-1}}\,\operatorname{arctanh}\sqrt{1-\frac{1}{s^2}}$</span> for <span class="math-container">$s&gt;1$</span>.</p>
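As a numerical cross-check (my own addition, outside the proof): representation $(2)$ can be integrated directly, since its integrand decays double-exponentially, and compared against the tabulated value $K_0(1)\approx 0.4210244382$.

```python
import math

def k0_via_cosh(xi, upper=20.0, n=200_000):
    # trapezoid rule for K0(xi) = \int_0^\infty exp(-xi*cosh t) dt;
    # exp(-xi*cosh(20)) underflows to 0, so [0, 20] is ample
    h = upper / n
    total = 0.5 * (math.exp(-xi) + math.exp(-xi * math.cosh(upper)))
    for i in range(1, n):
        total += math.exp(-xi * math.cosh(i * h))
    return total * h

k0_1 = k0_via_cosh(1.0)   # ~ 0.4210244382, the standard value of K0(1)
```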
170,967
<p>Cog $A$ is at position: $Ax$, $Ay$, rotation: $Ar$ and number of teeth: $At$</p> <p>Cog $B$ is at position: $Bx$, $By$ and number of teeth $Bt$. What is Cog $B$'s rotation such that teeth between Cog $A$ and Cog $B$ line up. There will be the same number of answers as there are teeth, but a 'base angle' is desired.</p> <p><img src="https://i.imgur.com/ICrhs.jpg" alt="cog diagram"></p>
joriki
6,622
<p>You can calculate the angle $\alpha$ of the line from $A$ to $B$ as $\alpha=\arctan\frac{By-Ay}{Bx-Ax}$. You want the phases to be opposite at this angle, so $(\alpha-Ar)At=(\alpha+\pi-Br)Bt+\pi+ 2\pi n$, with $n$ an integer; you can solve this for $Br$ to determine $Br$ up to integer multiples of $2\pi/Bt$.</p>
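A sketch of this computation in code (my own addition; `atan2` replaces the plain arctangent so that all quadrants are handled, $n=0$ is chosen, and the result is reduced to a base angle in $[0, 2\pi/Bt)$):

```python
import math

def base_rotation(Ax, Ay, Ar, At, Bx, By, Bt):
    # angle of the line from A to B
    alpha = math.atan2(By - Ay, Bx - Ax)
    # solve (alpha - Ar)*At = (alpha + pi - Br)*Bt + pi for Br with n = 0,
    # then reduce modulo the tooth pitch 2*pi/Bt to get a "base angle"
    Br = alpha + math.pi - ((alpha - Ar) * At - math.pi) / Bt
    return Br % (2 * math.pi / Bt)

Br = base_rotation(0.0, 0.0, 0.3, 12, 5.0, 5.0, 8)
```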
2,746,222
<p>Problem:</p> <p>A boat's speed is <strong>1.70 m/s</strong> in still water.<br /> It must cross a river with a width of <strong>260 m</strong>.<br /> The boat's starting point is the <strong>origin of the xy-axis</strong> (on the shore).<br /> It has to dock <strong>110 m</strong> to the right (in the positive x-direction) of the point opposite the starting point on the other shore (i.e. the point parallel to the starting point on the other side + 110 m).<br /> The boat must sail at a <strong>45°</strong> angle relative to the shore (x-axis) to arrive at that point.</p> <p>What is the speed of the water current (water flows in the negative x-direction)?</p> <p><img src="https://i.stack.imgur.com/EVFzH.jpg" alt="Picture of the problem" /></p> <p>What I have done:</p> <p>It seems to be a pretty simple vector problem.<br /> Just subtract the vector of the boat in moving water from the vector of the boat in still water (direct route) to get the vector of water flow.</p> <p>I did this and got a nonzero y component of the water flow, which can't be true. How can it even be zero, given that only $\sin(0°+180°\cdot n)=0$ and the y components of the vectors aren't equal?</p> <p>Thank you for your help</p>
Vasili
469,083
<p>Let $v_b$ be the speed of boat in still water, $v_r$ is the speed of river. The speed of boat in $x$ direction (in still water at 45 degree angle): $v_x=v_b/\sqrt{2}$, the speed of boat in $y$ direction is $v_y=v_b/\sqrt{2}$. The river speed has negative $x$ direction as you correctly concluded. Thus, we need to subtract it from $v_x$. We have: $(v_b/\sqrt{2}-v_r)\cdot t=110$, $v_b/\sqrt{2} \cdot t=260$ (where $t$ is the time to cross the river). This will give us the equation to find $v_r$: $$\frac{v_b/\sqrt{2}}{v_b/\sqrt{2}-v_r}=\frac{260}{110}$$ Can you complete from here?</p>
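Completing the arithmetic numerically (my own addition; variable names follow the answer's notation):

```python
import math

v_b = 1.70                 # boat speed in still water, m/s
width, offset = 260.0, 110.0

# From (v_b/sqrt(2) - v_r)*t = 110 and (v_b/sqrt(2))*t = 260:
#   (v_b/sqrt(2)) / (v_b/sqrt(2) - v_r) = 260/110
# solving for the river speed v_r:
vx = v_b / math.sqrt(2)
v_r = vx * (1 - offset / width)   # ~ 0.694 m/s
```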
2,704,394
<p>Here is the formal statement:</p> <blockquote> <p>Let $\lambda_1, \lambda_2, \lambda_3$ be distinct eigenvalues of $n\times n$ matrix $A$. Let $S=\{v_1, v_2, v_3\}$, where $Av_i = \lambda_i v_i$ for $1\leq i\leq 3$. Prove $S$ is linearly independent. </p> </blockquote> <p>Many resources online state the general proof or the proof for two eigenvectors. What is the proof for specifically 3? I tried to derive the 3 eigenvector proof from the 2 eigenvector proofs, but failed. </p>
RCT
424,406
<p>Here's one idea that comes to mind, although I don't promise there isn't a slicker way to do it. Suppose $c_1v_1 + c_2v_2 + c_3v_3 = 0.$ Applying $A$ gives $$\lambda_1c_1v_1 + \lambda_2c_2v_2 + \lambda_3c_3v_3 = 0.$$ On the other hand, multiplying the original equation by $\lambda_1$ gives $$\lambda_1c_1v_1 + \lambda_1c_2v_2 + \lambda_1c_3v_3 = 0.$$ Comparing the two displayed equations, we get $$(\lambda_2-\lambda_1)c_2v_2 + (\lambda_3-\lambda_1)c_3v_3 = 0.$$ Since you say you can prove any two eigenvectors corresponding to distinct eigenvalues are linearly independent, you now know that $(\lambda_2-\lambda_1)c_2 = (\lambda_3-\lambda_1)c_3 = 0.$ But since all the $\lambda_i$ were distinct, this means $c_2 = c_3 = 0.$ Thus the original equation says $c_1v_1 = 0,$ but since eigenvectors are by definition non-zero, we see that $c_1 = 0,$ completing the proof. </p> <p>Notice the inductive nature of the proof -- the same idea will work for $n$ eigenvectors.</p>
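As a concrete sanity check (my own example, not from the proof): a matrix with distinct eigenvalues $1, 2, 3$ whose eigenvectors are verified directly and shown independent via a determinant.

```python
# A is upper triangular with distinct eigenvalues 1, 2, 3
A = [[1, 1, 1],
     [0, 2, 1],
     [0, 0, 3]]
vs = [(1, 0, 0), (1, 1, 0), (1, 1, 1)]   # corresponding eigenvectors
lams = [1, 2, 3]

def matvec(M, v):
    return tuple(sum(M[i][j] * v[j] for j in range(3)) for i in range(3))

# confirm A v_i = lambda_i v_i for each pair
eigen_ok = all(matvec(A, v) == tuple(l * c for c in v)
               for v, l in zip(vs, lams))

# independence <=> the matrix with columns v_i has nonzero determinant
V = [[vs[j][i] for j in range(3)] for i in range(3)]
det = (V[0][0] * (V[1][1] * V[2][2] - V[1][2] * V[2][1])
     - V[0][1] * (V[1][0] * V[2][2] - V[1][2] * V[2][0])
     + V[0][2] * (V[1][0] * V[2][1] - V[1][1] * V[2][0]))
```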
2,798,206
<p>How do you prove this using the epsilon-delta definition? I'm unsure of using the min = { } function.</p> <p>$\lim \limits_{x \to \infty}\frac{2x+1}{1-x}$</p> <p>These are my steps: </p> <p>$ |f(x) - L| &lt; \epsilon =&gt; |\frac{2x+1}{1-x} +2|&lt; \epsilon $</p> <p>$ \qquad \qquad \; \; \; \; \; =&gt;|\frac{3}{1-x} | &lt; \epsilon $</p> <p>$ \qquad \qquad \; \; \; \; \; =&gt;|\frac{-3}{x-1} | &lt; \epsilon$</p> <p>$ \qquad \qquad \; \; \; \; \; =&gt;|-3||\frac{1}{x-1} | &lt; \epsilon$</p> <p>$ \qquad \qquad \; \; \; \; \; =&gt;|\frac{1}{x-1} | &lt; \frac{\epsilon}{3}$</p> <p>$ \qquad \qquad \; \; \; \; \; =&gt;|{x-1} | &lt; \frac{\epsilon |x-1|}{3}$</p> <p>How do I continue from here?</p>
G.L.
565,369
<p>Thanks, Theo Bendit. I re-did my answer, not sure if it's correct.</p> <p>$ |f(x) - L| = |\frac{2x+1}{1-x} +2|&lt; \delta $</p> <p>$ \qquad \quad \; \; \; =|\frac{3}{1-x} | &lt; \delta $</p> <p>$ \qquad \quad \; \; \; =|\frac{-3}{x-1} | &lt; \delta $</p> <p>$ \qquad \quad \; \; \; =|-3||\frac{1}{x-1} | &lt; \delta $</p> <p>$ \qquad \quad \; \; \; =|\frac{1}{x-1} | &lt; \frac{\delta}{3}$</p> <p>$ \qquad \quad \; \; \; =|{x-1} | &gt; \frac{3}{\delta}$</p> <p>$ \frac{3}{\delta} = \epsilon &lt;=&gt; \delta = \frac{3}{\epsilon}$</p> <p>$ |x-1| &gt; \frac {3}{\epsilon} &lt;=&gt; 0 &lt; |\frac{1}{x-1}|&lt;\frac{\epsilon}{3}$</p> <p>$\qquad \qquad \; \; &lt;=&gt; 0 &lt; |\frac{3}{x-1}|&lt;\epsilon$</p> <p>$\qquad \qquad \; \; &lt;=&gt; 0 &lt; |-3||\frac{1}{x-1}|&lt;\epsilon$</p> <p>$\qquad \qquad \; \; &lt;=&gt; 0 &lt; |\frac{3}{1-x}|&lt;\epsilon$</p> <p>$\qquad \qquad \; \; &lt;=&gt; 0 &lt; |\frac{2x+1}{1-x} + 2|&lt;\epsilon$</p> <p>$\qquad \qquad \; \; =&gt; 0 &lt; |f(x) - L|&lt;\epsilon$</p>
1,083,277
<p>$a,b,c \in \mathbb{R}$ and $a+b+c=0$. Prove that: $8^{a}+8^{b}+8^{c}\geqslant 2^{a}+2^{b}+2^{c}$</p> <p>I noticed that $2^{a}\cdot 2^{b}\cdot 2^{c}=1$, but I don't know what to do next</p>
Blind
207,277
<p>We have $$ 8^a+1+1\geq3\sqrt[3]{8^a}=3\times 2^a, $$ $$ 8^b+1+1\geq3\sqrt[3]{8^b}=3\times 2^b, $$ $$ 8^c+1+1\geq3\sqrt[3]{8^c}=3\times 2^c, $$ $$ 2^a+2^b+2^c\geq 3\sqrt[3]{2^{a+b+c}}=3. $$ It follows that \begin{eqnarray} 8^a+8^b+8^c&amp;\geq&amp;3(2^a+2^b+2^c)-6\\ &amp;=&amp;(2^a+2^b+2^c)+2(2^a+2^b+2^c-3)\\ &amp;\geq&amp;2^a+2^b+2^c. \end{eqnarray}</p>
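A quick numerical spot check of the inequality on random triples with $a+b+c=0$ (my own addition):

```python
import random

random.seed(0)
violations = 0
for _ in range(1000):
    a = random.uniform(-5, 5)
    b = random.uniform(-5, 5)
    c = -(a + b)                      # enforce a + b + c = 0
    if 8**a + 8**b + 8**c < 2**a + 2**b + 2**c:
        violations += 1

# equality holds at a = b = c = 0, where both sides are 3
equal_at_zero = (8**0 + 8**0 + 8**0) == (2**0 + 2**0 + 2**0)
```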
73,559
<p>I have the following problem. I'd like to add a legend to <code>MatrixPlot</code>. Each colour should have a legend entry. I used <code>PlotLegends</code>, which in principle works. However, if I use more than five colours, this doesn't work anymore.</p> <pre><code>a = RandomInteger[{1, 6}, {50}]; MatrixPlot[{a}, ColorRules -&gt; {1 -&gt; Red, 2 -&gt; Blue, 3 -&gt; Green, 4 -&gt; Gray,5-&gt;Yellow,6-&gt;Orange}] </code></pre>
Bob Hanlon
9,362
<pre><code>a = RandomInteger[{1, 6}, {50}]; colors = {1 -&gt; Red, 2 -&gt; Blue, 3 -&gt; Green, 4 -&gt; Gray, 5 -&gt; Yellow, 6 -&gt; Orange}; Column[{ MatrixPlot[{a}, ColorRules -&gt; colors, ImageSize -&gt; 400], SwatchLegend[ colors[[All, 2]], colors[[All, 1]], LegendLayout -&gt; "Row"]}, Alignment -&gt; Center] </code></pre> <p><img src="https://i.stack.imgur.com/kin4v.png" alt="enter image description here"></p>
1,317,143
<blockquote> <p><em>Notation</em>: $\log:=\log_{10}$</p> </blockquote> <p>$\log x+\log_x 10$</p> <p>$=\log x+ \frac{1}{\log x}$ </p> <p>$=\log(x \cdot \frac{1}{x})$ </p> <p>$=\log 1$ </p> <p>$=0$ </p> <p>Is the process correct? I suspect this is wrong. Please help. Thanks.</p>
Hagen von Eitzen
39,174
<p>Note that $\log_a b=\frac{\ln b}{\ln a}$. Hence you want to minimize $y+\frac1y$ with $y=\frac{\ln x}{\ln 10}$. "Clearly", this minimum is $2$ and achieved when $y=1$, i.e., when $\ln x=\ln 10$ and finally $x=10$.</p> <p>Why $y+\frac1y\ge 2$ for $y&gt;0$? Well, we have $y-2+\frac1y=\left(\sqrt y-\frac1{\sqrt y}\right)^2\ge 0$ with equality iff the expression in parentheses is zero.</p> <hr> <p>Your derivation is wrong when you go from $\log x+\frac1{\log x}$ to $\log(x\cdot \frac1x)$. Note that you'd need $\log x+\log \frac1{ x}$ for this instead of $\log x+\frac1{\log x}$.</p>
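Numerically (my own addition; restricted to $x>1$ so that $y=\log x>0$, which is where the bound $y+\frac1y\ge 2$ applies):

```python
import math

def g(x):
    # log x + log_x 10, with log = log base 10
    y = math.log10(x)
    return y + 1 / y

samples = [1.5, 2, 5, 10, 50, 100, 1000]
vals = [g(x) for x in samples]
min_at_10 = g(10)          # the minimum value, exactly 2
```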
481,952
<p>Why is a union of infinitely many bounded sets not necessarily bounded, please? In addition, what condition can we add to make this union bounded, please?</p>
Community
-1
<p>Since you're asking why it's not <em>necessarily</em> true that such a union is bounded, it suffices to consider a counterexample. Define $A_n = [n, n + 1]$; then </p> <p>$$\bigcup_{n \in \Bbb{N}} A_n = [0, \infty)$$</p> <p>is unbounded.</p> <p>It is necessary and sufficient that there is a common bound on all the $A_n$ for the union of the $A_n$ to be bounded.</p>
3,403,255
<p>I am trying to follow wikipedia's page about matrix rotation and having a hard time understanding where the formula comes from.</p> <p><a href="https://en.wikipedia.org/wiki/Rotation_matrix" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Rotation_matrix</a> Wiki page about it.</p> <p>What I have so far:</p> <p>y<sub>2</sub>=sin(<em>a<sub>1</sub>+a<sub>2</sub></em>)R -> where R is the hypotenuse, a1 is the current angle and a2 is the angle by which something must rotate.</p> <p>This is how I used to calculate my rotation, but it takes a long time to compute and uses up a lot of cpu time for square roots and other heavy stuff due to the need of finding the initial angle.</p> <p>So I decided to reduce computation time and found that sin(a1+a2) could be written as <code>sin(a1)cos(a2)+cos(a1)sin(a2)</code> and from there I got to the point where it is:</p> <p>y<sub>2</sub>=y<sub>1</sub>cos(<em>a<sub>2</sub></em>)+xsin(<em>a<sub>2</sub></em>)sin(<em>a<sub>1</sub></em>)</p> <p>But the wiki page says that it must be:</p> <p>y<sub>2</sub>=y<sub>1</sub>cos(<em>a<sub>2</sub></em>)+xsin(<em>a<sub>2</sub></em>)</p> <p><a href="https://i.stack.imgur.com/vRzo7.png" rel="nofollow noreferrer">My work book</a></p>
J.G.
56,861
<p>You've not shown how you got a different result, so I can't comment on your mistake. But it looks to me like you're examining the composition of rotations about the origin in <span class="math-container">$2$</span> dimensions. (@bounceback gave an answer that understood your aims differently, so I hope between us we give you the help you need.) Let's denote one rotation as <span class="math-container">$R(\theta):=\left(\begin{array}{cc} \cos\theta &amp; -\sin\theta\\ \sin\theta &amp; \cos\theta\end{array}\right)$</span>. Then<span class="math-container">$$\left(\begin{array}{cc} \cos a_2 &amp; -\sin a_2\\ \sin a_2 &amp; \cos a_2 \end{array}\right)\left(\begin{array}{cc} \cos a_1 &amp; -\sin a_1\\ \sin a_1 &amp; \cos a_1 \end{array}\right)=\left(\begin{array}{cc} \cos a_{1}\cos a_2-\sin a_1\sin a_2 &amp; -\sin a_1\cos a_2-\sin a_2\cos a_1\\ \sin a_1\cos a_2+\sin a_2\cos a_1 &amp; \cos a_1\cos a_2-\sin a_1\sin a_2 \end{array}\right)$$</span>is an exercise in matrix multiplication, and the right-hand side reduces by trigonometric identities to<span class="math-container">$$\left(\begin{array}{cc} \cos(a_1+a_2) &amp; -\sin (a_1+a_2)\\\sin(a_1+a_2) &amp; \cos(a_1+a_2)\end{array}\right).$$</span>In other words, <span class="math-container">$R(a_2)R(a_1)=R(a_1+a_2)$</span>, and if we rotate a vector <span class="math-container">$v$</span> to <span class="math-container">$R(a_1)v$</span> and then to <span class="math-container">$R(a_2)R(a_1)v$</span> the final result is <span class="math-container">$R(a_1+a_2)v$</span>. (If you write all this in terms of <span class="math-container">$x$</span>- and <span class="math-container">$y$</span>-coordinates, you can remove a discussion of vectors and matrices altogether.)</p>
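The identity $R(a_2)R(a_1)=R(a_1+a_2)$ is easy to confirm numerically (my own addition; plain Python, no libraries):

```python
import math

def R(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a1, a2 = 0.7, -1.3
lhs = matmul(R(a2), R(a1))   # compose the two rotations
rhs = R(a1 + a2)             # single rotation by the sum of the angles
max_err = max(abs(lhs[i][j] - rhs[i][j]) for i in range(2) for j in range(2))
```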
1,134,215
<p>How can I determine whether $\{\frac{z}{1+z^2} : z \in \mathbb{C} \setminus \{-i, i\}\}$ is bounded? My textbook is very poor at describing boundedness for complex functions. Thanks for the help!</p>
ahulpke
159,739
<p>I don't think this is true without some further finiteness condition that limits the number of images of $K$:</p> <p>Take $G$ the (two-sided) infinite sequences with entries in 0,1 and as operation component-wise addition (so its an infinite direct product of $C_2$ with itself), and as $K$ the kernel of the projection on one particular component.</p> <p>Clearly right/left shift are automorphisms of $G$, and under these $K$ has infinitely many images, so the stabilizer of $K$ in the automorphism group must have infinite index.</p>
1,302,932
<p>Given $$A=\begin{pmatrix} 2 &amp; 0 &amp; 0\\ a &amp; 2&amp; 0\\ a+3 &amp; a &amp;-1 \end{pmatrix}$$<br> For which values of $a$ is $A$ diagonalizable?<br> I found that $p_A(x)=(x-2)^2(x+1)$ and tried to find the eigenspace of $2$, to see if the geometric multiplicity of the eigenvalue $2$ is $2$.<br> I got a set of equations:$$2x=2x ; ax+2y=2y ; (a+3)x+ay-z=2z$$<br> But I could not understand how to extract the relevant information from it.</p>
user84413
84,413
<p>Since $A-2I=\begin{pmatrix}0&amp;0&amp;0\\a&amp;0&amp;0\\a+3&amp;a&amp;-3\end{pmatrix}$, $\;\;\text{nullity}(A-2I)=2\iff \text{rank}(A-2I)=1 \iff a=0$,</p> <p>so A is diagonalizable $\iff a=0$.</p>
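As a check (my own addition), one can compute $\operatorname{nullity}(A-2I)$ over the rationals for a few values of $a$; diagonalizability requires nullity $2$, matching the algebraic multiplicity of the eigenvalue $2$:

```python
from fractions import Fraction as F

def rank3(M):
    # Gaussian elimination over the rationals, 3x3
    M = [[F(x) for x in row] for row in M]
    r = 0
    for col in range(3):
        piv = next((i for i in range(r, 3) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, 3):
            f = M[i][col] / M[r][col]
            M[i] = [M[i][j] - f * M[r][j] for j in range(3)]
        r += 1
    return r

def nullity_A_minus_2I(a):
    # A - 2I for A = [[2,0,0],[a,2,0],[a+3,a,-1]]
    return 3 - rank3([[0, 0, 0], [a, 0, 0], [a + 3, a, -3]])

results = {a: nullity_A_minus_2I(a) for a in (-2, -1, 0, 1, 2)}
```

Only $a=0$ gives nullity $2$.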
3,350,021
<blockquote> <p>We have the following quadratic equation:</p> <p><span class="math-container">$2x^2-\sqrt{3}x-1=0$</span> with roots <span class="math-container">$x_1$</span> and <span class="math-container">$x_2$</span>.</p> <p>I have to find <span class="math-container">$x_1^2+x_2^2$</span> and <span class="math-container">$|x_1-x_2|$</span>.</p> </blockquote> <p>First we have: <span class="math-container">$x_1+x_2=\dfrac{\sqrt{3}}{2}$</span> and <span class="math-container">$x_1x_2=-\dfrac{1}{2}$</span></p> <p>So <span class="math-container">$x_1^2+x_2^2=(x_1+x_2)^2-2x_1x_2=\dfrac{7}{4}$</span></p> <p>Can someone help me with the second one?</p> <p>I forgot to tell that solving the equation is not an option in my case.</p>
Hussain-Alqatari
609,371
<p>Note that: if <span class="math-container">$a,b,c \in \mathbb{R}$</span> and <span class="math-container">$a\ne0$</span>, if <span class="math-container">$ax^2+bx+c=0$</span>, then <span class="math-container">$x_{1,2}=\frac{-b\pm\sqrt{b^2-4ac}}{2a}$</span></p> <p><span class="math-container">$2x^2-\sqrt{3}x-1=0$</span>,</p> <p>solving, we get: <span class="math-container">$x_{1,2}=\frac{\sqrt{3}\pm\sqrt{3-4\cdot2\cdot(-1)}}{2\cdot2}=\frac{\sqrt{3}\pm\sqrt{11}}{4}$</span></p> <p>The required expression, <span class="math-container">$x_1^2+x_2^2=(\frac{\sqrt{3}+\sqrt{11}}{4})^2+(\frac{\sqrt{3}-\sqrt{11}}{4})^2=\frac{3+2\sqrt{33}+11+3-2\sqrt{33}+11}{16}=\frac{7}{4}$</span>.</p> <p>The second required expression, <span class="math-container">$|x_1-x_2|=|\frac{\sqrt{3}+\sqrt{11}}{4}-\frac{\sqrt{3}-\sqrt{11}}{4}|=|\frac{\sqrt{3}+\sqrt{11}-\sqrt{3}+\sqrt{11}}{4}|=|\frac{\sqrt{11}}{2}|=\frac{\sqrt{11}}{2}$</span>.</p>
3,413,364
<blockquote> <p>Consider the set of points <span class="math-container">$$O = \{ x \in P \mid \alpha^* = C^T x \}$$</span> where <span class="math-container">$P \subseteq \mathbb R^n$</span> is a closed convex set, <span class="math-container">$C \in \mathbb R^n$</span> and <span class="math-container">$\alpha^* = \min \{ C^Tx \mid x \in P \}$</span>. Then, <span class="math-container">$O$</span> is a closed convex set.</p> </blockquote> <p>This seems like a pretty simple statement in my linear programming class, but I am unsure how to show it formally. I can easily show it is a convex set, but I am not sure how to show it is a closed set.</p>
Kavi Rama Murthy
142,385
<p>If <span class="math-container">$x_k \in O$</span> and <span class="math-container">$x_k \to x$</span> then <span class="math-container">$C^{T}x_k \to C^{T}x$</span> and <span class="math-container">$\alpha^{*}=C^{T}x_k$</span> for each <span class="math-container">$k$</span>. Hence <span class="math-container">$\alpha^{*}=C^{T}x$</span> and <span class="math-container">$x \in O$</span>. </p>
2,959,686
<p>I'm trying to see if I can find a bijection between two infinite groups, one of which is a subgroup of the other. If I take the inverse $\phi^{-1}(x)=\frac{1}{5}x$, it doesn't work for $x \in \mathbb{Z}$ (because I will get values in $\mathbb{Q}$), so does that mean there isn't an isomorphism?</p> <p>Or have I approached the problem incorrectly?</p> <p>Thanks for your help.</p>
Dietrich Burde
83,966
<p>The subgroups of <span class="math-container">$\Bbb{Z}$</span> are given by <span class="math-container">$n\Bbb{Z}$</span>. For <span class="math-container">$n\neq 0$</span>, <span class="math-container">$n\Bbb{Z}$</span> is an infinite cyclic group with generator <span class="math-container">$n$</span>, and hence isomorphic to <span class="math-container">$\Bbb{Z}$</span>. You do not necessarily need to find an explicit isomorphism (assuming you had cyclic groups in class already). So in fact <span class="math-container">$5\Bbb{Z}\cong \Bbb{Z}$</span>, in contrast to your title.</p>
892,114
<p>I have three numbers 1 2 3 which will always be in this order {123}, and I want to find out how many cases can be made, like {1},{2},{23},{13},{12},{123},{3},{}. But each number has two states, "a" and "b", i.e. each one becomes a different entity, like 2a,2b,3a,3b,1a, with the only exception that 1 has only one state, 1a.</p> <p>Please tell me step-wise using formulas, so that I can understand; also, any link will be helpful. Yours sincerely</p>
Mufasa
49,003
<p>$$(x-2)^2=(x-2)(x-2)=x(x-2)-2(x-2)=x^2-2x-2x+4=x^2-4x+4$$</p>
187,975
<p>Let $\mu$ be a finite nonatomic measure on a measurable space $(X,\Sigma)$, and for simplicity assume that $\mu(X) = 1$. There is a well-known "intermediate value theorem" of Sierpiński that states that for every $t \in [0,1]$, there exists a set $S \in \Sigma$ with $\mu(S) = t$.</p> <p>I would like to use the following stronger conclusion for such a measure: </p> <blockquote> <p>There exists a chain of sets $\{S_t \mid t \in [0,1]\}$ in $\Sigma$, with $S_s \subseteq S_r$ whenever $0 \leq s \leq r \leq 1$, such that $\mu(S_t) = t$ for all $t \in [0,1]$.</p> </blockquote> <p>(One can view this as the existence of a right inverse to the map $\mu \colon \Sigma \to [0,1]$ in the category of partially ordered sets.)</p> <p>This statement appears (albeit hidden within a proof) on the Wikipedia page for "<a href="http://en.wikipedia.org/wiki/Atom_%28measure_theory%29#Non-atomic_measures" rel="noreferrer">Atom (measure theory)</a>," and even includes a sketch for the proof! However, I would like to see some mention of this in the literature. I've checked the Wiki references and they both seem to prove the weaker statement. I looked in Fremlin's <em>Measure Theory</em>, vol. 2, and again found the weaker version but not the stronger. </p> <p><strong>Question:</strong> Can anyone provide me with such a reference?</p> <hr> <p><strong>A proof.</strong> In case anyone stumbles to this page and wants to see a proof, I'll sketch one that is more constructive than the one that I linked to above. Set $S_0 = \varnothing$ and $S_1 = X$. By Sierpiński, there exists $S_{1/2} \in \Sigma$ of measure $1/2$. For each dyadic rational $q = m/2^n \in [0,1]$ ($1 \leq m \leq 2^n$), we may proceed by induction on $n$ to construct each $S_q$. Now given $r \in [0,1]$, set $S_r = \bigcup_{q \leq r} S_q$. (This is essentially the same method of proof as the one in the reference provided in Ramiro de la Vega's answer.)</p>
user42397
42,397
<p>It seems to be a special case of Lemma 2.5 (Chapter 2) of "Interpolation of Operators" (Bennett, Sharpley).</p>
43,956
<p>There is this example at the Wikipedia article on Quotient spaces (QS):</p> <blockquote> <p>Consider the set $X = \mathbb{R}$ of all real numbers with the ordinary topology, and write $x \sim y$ if and only if $x−y$ is an integer. Then the quotient space $X/\sim$ is homeomorphic to the unit circle $S^1$ via the homeomorphism which sends the equivalence class of $x$ to $\exp(2πix)$.</p> </blockquote> <p>I understand relations, equivalence relation and equivalence class but quotient space is still too abstract for me. This seems like a simple enough example to begin with.</p> <p>I understand (sort of) the definition but I can't visualize. And by this example and others, there is a lot of visualizing going on here! torus, circles etc.</p>
Henno Brandsma
4,280
<p>The quotient space <span class="math-container">$Y = X / \sim$</span> as a set is just the set of equivalence classes of <span class="math-container">$X$</span> under <span class="math-container">$\sim$</span>, so the set <span class="math-container">$\{ [x]: x \in \mathbb{R} \} $</span> in your case. </p> <p>The equivalence class of a number <span class="math-container">$x$</span> is just (in your case) the set <span class="math-container">$\{ x+n : n \in \mathbb{Z} \}$</span>. Now we need a topology. The standard topology that we take on <span class="math-container">$Y$</span> is <strong>all</strong> subsets <span class="math-container">$O$</span> of <span class="math-container">$Y$</span> (where points in <span class="math-container">$Y$</span> are "really" subsets of <span class="math-container">$X$</span>, the equivalence classes) such that <span class="math-container">$q^{-1}[O]$</span> is open in the topology of <span class="math-container">$X = \mathbb{R}$</span>. Here <span class="math-container">$q$</span> is the map that sends <span class="math-container">$x$</span> to its class <span class="math-container">$[x]$</span> in <span class="math-container">$Y$</span>, the so-called quotient map. This is called the <em>quotient topology</em> on <span class="math-container">$Y$</span>, and as you see it assumes you have a topology on <span class="math-container">$X$</span> already, and we give <span class="math-container">$Y$</span> the largest topology possible to still have <span class="math-container">$q$</span> continuous. 
(The smallest one would always be the indiscrete topology, which is not very interesting, hence the other "natural" choice.)</p> <p>Now, if we have a function <span class="math-container">$f$</span> from <span class="math-container">$Y$</span>, the quotient space in the quotient topology, to any space <span class="math-container">$Z$</span>, then <span class="math-container">$f$</span> is continuous iff <span class="math-container">$f \circ q$</span> is continuous as a function from <span class="math-container">$X$</span> to <span class="math-container">$Z$</span>: one way is clear, as the composition of continuous maps is continuous, and for the other side, if <span class="math-container">$O$</span> is open in <span class="math-container">$Z$</span>, by definition <span class="math-container">$f^{-1}[O]$</span> is open in <span class="math-container">$Y$</span> iff <span class="math-container">$q^{-1}[f^{-1}[O]]$</span> is open in <span class="math-container">$X$</span>, and this set equals <span class="math-container">$(f \circ q)^{-1}[O]$</span> which is open, as by assumption <span class="math-container">$f \circ q$</span> is continuous.</p> <p>Now, consider the map <span class="math-container">$f$</span> that sends the class <span class="math-container">$[x]$</span> to the point <span class="math-container">$e^{2\pi ix}$</span> in <span class="math-container">$\mathbb{S}^1$</span>, the unit circle. This is well-defined: if <span class="math-container">$x'$</span> were another representative of <span class="math-container">$[x]$</span>, then <span class="math-container">$x \sim x'$</span> and thus <span class="math-container">$x - x'$</span> is an integer and so <span class="math-container">$f(x') = f(x)$</span>. It is continuous, as <span class="math-container">$f \circ q$</span> is just the regular map sending <span class="math-container">$x$</span> to <span class="math-container">$e^{2\pi ix}$</span>, and this is even differentiable etc. 
It is clearly surjective, and it is injective because the only way <span class="math-container">$[x]$</span> and <span class="math-container">$[y]$</span> will have the same value is when <span class="math-container">$2 \pi ix - 2 \pi i y$</span> is an integer multiple of <span class="math-container">$2 \pi i$</span>, which happens iff <span class="math-container">$x - y$</span> is an integer. One can also check that <span class="math-container">$q[X] = q[[0,1]]$</span>, and since <span class="math-container">$[0,1]$</span> is compact and <span class="math-container">$q$</span> is continuous, <span class="math-container">$Y$</span> is compact. This makes (with <span class="math-container">$\mathbb{S}^1$</span> Hausdorff) the map <span class="math-container">$f$</span> a homeomorphism, by standard theorems.</p> <p>We could also have achieved this as the quotient of <span class="math-container">$[0,1]$</span> under the equivalence relation that has exactly one non-trivial class, namely <span class="math-container">$\{0,1\}$</span>. This is more intuitive, as we then glue together (consider as one point) just the points <span class="math-container">$0$</span> and <span class="math-container">$1$</span>, and this geometrically gives a circle. In your example (which is a nice so-called covering map, and a group homomorphism as well) we glue a lot more points together, but all classes are now similar: just shifted versions of a point by an integer. We sort of wrap the real line around the circle infinitely many times.</p>
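<p>As a throwaway numeric sanity check (illustrative only, not part of the proof), the well-definedness and injectivity of <span class="math-container">$f$</span> can be confirmed in a few lines of Python: shifting a representative by an integer does not change <span class="math-container">$e^{2\pi ix}$</span>, while representatives of distinct classes land on distinct points of the circle.</p>

```python
import cmath

# f sends a class representative x to exp(2*pi*i*x) on the unit circle.
def f(x):
    return cmath.exp(2j * cmath.pi * x)

# Well defined: any two representatives x and x + n give the same value.
for n in (-2, 1, 5):
    assert abs(f(0.3 + n) - f(0.3)) < 1e-12

# Injective on classes: representatives in [0, 1) of distinct classes
# map to distinct points on the circle.
assert abs(f(0.3) - f(0.7)) > 1e-6
print("f is constant on classes and separates distinct classes")
```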
4,015,741
<p>I want to find the solutions of <span class="math-container">$(x+1)^{63}+(x+1)^{62}(x-1)+\cdots+(x-1)^{63}=0$</span>.</p> <p>It is not hard to see <span class="math-container">$x=0$</span> is a root of the equation, but I don't know how to solve this equation in general. I can see the terms of the equation look very similar to the binomial expansion of <span class="math-container">$[(x+1)+(x-1)]^{63}$</span> except the coefficient of each term is <span class="math-container">$1$</span> rather than <span class="math-container">$63\choose k $</span> (for <span class="math-container">$k=0,1,\cdots,63$</span>). Is it possible to use the binomial theorem to solve the equation? (Or other approaches?)</p>
player3236
435,724
<p><span class="math-container">$$\frac {a^n - b^n}{a-b} = a^{n-1} + a^{n-2} b + a^{n-3}b^2 + \dots + ab^{n-2} + b^{n-1}$$</span></p> <p>Hence:</p> <p><span class="math-container">$$(x+1)^{63} + (x+1)^{62}(x-1) + \dots+ (x-1)^{63} = \frac {(x+1)^{64}-(x-1)^{64}}{(x+1)-(x-1)} = \frac12((x+1)^{64} - (x-1)^{64})$$</span></p> <p>If the above expression is zero, we must have <span class="math-container">$(x+1)^{64} = (x-1)^{64}$</span>. Hence <span class="math-container">$\dfrac {x+1}{x-1}$</span> must be one of the 64-th roots of unity. Note that this ratio can never equal <span class="math-container">$1$</span>, so each of the other <span class="math-container">$63$</span> roots of unity <span class="math-container">$\zeta$</span> gives exactly one solution <span class="math-container">$x = \frac{\zeta+1}{\zeta-1}$</span>, matching the degree of the equation. If you are only considering real roots, the only possibility is <span class="math-container">$\frac{x+1}{x-1}=-1$</span>, i.e. <span class="math-container">$x=0$</span>.</p>
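<p>A quick sanity check of the telescoping identity in Python, using exact integer arithmetic (just to confirm the algebra, not part of the argument):</p>

```python
# sum_{k=0}^{63} (x+1)^(63-k) (x-1)^k  ==  ((x+1)^64 - (x-1)^64) / 2
def lhs(x):
    return sum((x + 1) ** (63 - k) * (x - 1) ** k for k in range(64))

def rhs(x):
    # (x+1)^64 and (x-1)^64 have the same parity, so the difference is even
    return ((x + 1) ** 64 - (x - 1) ** 64) // 2

for x in (0, 2, 5, -3):
    assert lhs(x) == rhs(x)

assert lhs(0) == 0  # x = 0 is a root, as expected
print("identity verified")
```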
1,621,363
<p>Integrate: $$\int \frac{\sin(x)}{9+16\sin(2x)}\,\text{d}x.$$</p> <p>I tried the substitution method ($\sin(x) = t$) and ended up getting $\int \frac{t}{9+32t-32t^3}\,\text{d}t$. Don't know how to proceed further. </p> <p>Also tried adding and substracting $\cos(x)$ in the numerator which led me to get $$\sin(2x) = t^2-1$$ by taking $\sin(x)+\cos(x) = t$. </p> <p>Can't figure out any other method now. Any suggestions or tips? </p>
Jan Eerland
226,665
<p>HINT:</p> <p>$$\int\frac{\sin(x)}{9+16\sin(2x)}\space\text{d}x=$$</p> <hr> <p>Use the double angle formula $\sin(2x)=2\sin(x)\cos(x)$:</p> <hr> <p>$$\int\frac{\sin(x)}{32\sin(x)\cos(x)+9}\space\text{d}x=$$</p> <hr> <p>Substitute $u=\tan\left(\frac{x}{2}\right)$ and $\text{d}u=\frac{\sec^2\left(\frac{x}{2}\right)}{2}\space\text{d}x$.</p> <p>Then transform the integrand using the substitutions:</p> <p>$\sin(x)=\frac{2u}{u^2+1},\cos(x)=\frac{1-u^2}{u^2+1}$ and $\text{d}x=\frac{2}{u^2+1}\space\text{d}u$:</p> <hr> <p>$$\int\frac{4u}{\left(u^2+1\right)^2\left(\frac{64u(1-u^2)}{(u^2+1)^2}+9\right)}\space\text{d}u=$$ $$\int\frac{4u}{9 u^4-64 u^3+18 u^2+64 u+9}\space\text{d}u$$</p>
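<p>If it helps, the Weierstrass substitution can be sanity-checked numerically in Python: the $x$-integrand must agree with the $u$-integrand multiplied by $\text{d}u/\text{d}x=(1+u^2)/2$ (a throwaway check, not part of the hint):</p>

```python
import math

def integrand_x(x):
    return math.sin(x) / (9 + 16 * math.sin(2 * x))

def integrand_u(u):
    return 4 * u / (9 * u**4 - 64 * u**3 + 18 * u**2 + 64 * u + 9)

# The two integrands must agree after the change of variables u = tan(x/2),
# du/dx = (1 + u^2)/2.
for x in (0.2, 0.5, 1.1):
    u = math.tan(x / 2)
    assert abs(integrand_x(x) - integrand_u(u) * (1 + u**2) / 2) < 1e-12

print("substitution checked")
```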
2,887,440
<p>We were asked in our Calculus class to prove that,</p> <blockquote> <p>$f(x+y) - f(x) = \frac {\sec^2(x) \tan(y)} {1 - \tan(x) \tan(y)}$ given that $f(x) = \tan(x)$</p> </blockquote> <p>I have gotten so far as:</p> <p>$$f(x+y) - f(x)$$</p> <p>$$\tan(x+y) - \tan(x)$$</p> <p>$$\frac{\tan(x)+\tan(y)}{1-\tan(x)\tan(y)} - \tan(x)$$</p> <p>$$\frac{\tan(x)+\tan(y)}{1-\tan(x)\tan(y)} + \frac{-\tan(x)+\tan^2(x)\tan(y)}{1-\tan(x)\tan(y)}$$</p> <p>$$\frac{\tan(y) + \tan^2(x)\tan(y)}{1-\tan(x)\tan(y)}$$</p> <p>$$\frac{\tan(y) [1+\tan^2(x)]}{1-\tan(x)\tan(y)}$$</p> <blockquote> <p>Substituting the Pythagorean identity, $$1+\tan^2(x) = \sec^2(x)$$</p> </blockquote> <p>$$\frac{\tan(y) \sec^2(x)}{1-\tan(x)\tan(y)} = \boxed{\frac{\sec^2(x)\tan(y)}{1-\tan(x)\tan(y)}}$$ </p> <p>I don't quite understand how $f(x+y)$ became $\tan(x+y)$. I've had a few search results stating that $f(x+y) = f(x)+f(y)$ but it does not quite fit the bill. </p> <p>I got the idea for my solution above because of a textbook example I've read, where:</p> <blockquote> <p>Given $f(x)=x^2-4x+7$, find $\frac {f(x+h)-f(x)}{h}$</p> <blockquote> <p>$\frac{[(x+h)^2 - 4(x+h) + 7] - (x^2 - 4x + 7)}{h} = \frac{h(2x+h-4)}{h} = 2x+h-4$</p> </blockquote> </blockquote> <p>...but the book did not describe what property was used in order to 'insert' the value of $f(x)$ into $f(x+h)$, and by extension the $f(x)$ into the $f(x+y)$ of my problem. They feel... similar.</p> <p>Is there a name for this mathematical property? Thank you very much.</p>
zahbaz
176,922
<p>You might be unnecessarily hung up on the $x$ in $f(x)=\tan(x)$. Remember that your function $f$ is simply a mapping from some input to some output. The definition of $f$ just so happens to use $x$ to stand for any real number in this context. </p> <p>You can consider what happens to another real number $x'=x+y$. That results in $f(x')=\tan(x')$, which is $f(x+y)=\tan(x+y)$ by the substitution property of equality.</p>
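<p>A quick numeric spot-check of the boxed identity (illustrative only; it just evaluates both sides in Python at a few points where everything is defined):</p>

```python
import math

# tan(x+y) - tan(x)  ==  sec(x)^2 tan(y) / (1 - tan(x) tan(y))
def lhs(x, y):
    return math.tan(x + y) - math.tan(x)

def rhs(x, y):
    sec2 = 1 / math.cos(x) ** 2
    return sec2 * math.tan(y) / (1 - math.tan(x) * math.tan(y))

for x, y in [(0.3, 0.4), (1.0, -0.2), (-0.5, 0.1)]:
    assert abs(lhs(x, y) - rhs(x, y)) < 1e-9

print("identity holds at sample points")
```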
233,618
<p>I want to be able to take a polynomial and take the first 5 derivatives, then add at least one root of each derivative to a list using a loop. However, each attempt I try only ends up outputting the roots of the 5th derivative, not the rest. So far I have:</p> <pre><code>rootderivs[n_]:=( p[x_]:= x^8-3x^5+x-1; rootlist={}; Do[ AppendTo[rootlist, NSolve[D[p[x],{x,n}]==0]],1]; Print[rootlist]) </code></pre> <p>Which gives an output of:</p> <pre><code>rootderivs[5] {{{x-&gt;-0.188487-0.326469 I},{x-&gt;-0.188487+0.326469 I},{x-&gt;0.376974}}} </code></pre> <p>Any help would be appreciated!!</p>
cvgmt
72,111
<pre><code>Table[D[x^8 - 3 x^5 + x - 1, {x, n}], {n, 1, 8}] (* {1 - 15 x^4 + 8 x^7, -60 x^3 + 56 x^6, -180 x^2 + 336 x^5, -360 x + 1680 x^4, -360 + 6720 x^3, 20160 x^2, 40320 x, 40320} *) </code></pre> <pre><code>Table[NSolve[D[x^8 - 3 x^5 + x - 1, {x, n}] == 0, x], {n, 1, 8}] (* {{{x -&gt; -0.628102 - 1.06836 I}, {x -&gt; -0.628102 + 1.06836 I}, {x -&gt; -0.5}, {x -&gt; 0.00877753 - 0.507295 I}, {x -&gt; 0.00877753 + 0.507295 I}, {x -&gt; 0.518012}, {x -&gt; 1.22064}}, {{x -&gt; -0.511632 - 0.886173 I}, {x -&gt; -0.511632 + 0.886173 I}, {x -&gt; 0.}, {x -&gt; 0.}, {x -&gt; 0.}, {x -&gt; 1.02326}}, {{x -&gt; -0.406083 - 0.703356 I}, {x -&gt; -0.406083 + 0.703356 I}, {x -&gt; 0.}, {x -&gt; 0.}, {x -&gt; 0.812165}}, {{x -&gt; -0.299204 - 0.518237 I}, {x -&gt; -0.299204 + 0.518237 I}, {x -&gt; 0.}, {x -&gt; 0.598408}}, {{x -&gt; -0.188487 - 0.326469 I}, {x -&gt; -0.188487 + 0.326469 I}, {x -&gt; 0.376974}}, {{x -&gt; 0.}, {x -&gt; 0.}}, {{x -&gt; 0.}}, {}} *) </code></pre>
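<p>In case a comparison outside Mathematica is useful (an assumption on my part), here is a rough Python/NumPy equivalent of the same loop: <code>Polynomial.deriv</code> and <code>.roots()</code> play the roles of <code>D</code> and <code>NSolve</code>.</p>

```python
from numpy.polynomial import Polynomial

# x^8 - 3x^5 + x - 1, coefficients listed from the constant term upward
p = Polynomial([-1, 1, 0, 0, 0, -3, 0, 0, 1])

rootlist = []
for n in range(1, 6):            # first five derivatives
    rootlist.append(p.deriv(n).roots())

for n, roots in enumerate(rootlist, start=1):
    print(n, roots)
```

The fifth derivative is $-360 + 6720x^3$, whose real root $\left(\tfrac{3}{56}\right)^{1/3} \approx 0.376974$ matches the Mathematica output above.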
3,850,422
<p>For a few days now I've been trying to find a closed form expression for the determinant of the following <span class="math-container">$n\times n$</span> tridiagonal matrix</p> <p><span class="math-container">$$\begin{pmatrix}c_1+b_1+a_1 &amp; b_1 &amp; 0 &amp; \ddots &amp; 0 \\ c_2 &amp; c_2+b_2+a_2 &amp; b_2 &amp; \ddots &amp; 0 \\ 0 &amp; c_3 &amp; c_3+b_3+a_3 &amp; \ddots &amp; \vdots \\ \vdots &amp; \ddots &amp; \ddots &amp; \ddots &amp; b_{n-1}\\ 0 &amp; ... &amp; ... &amp; c_{n} &amp; c_{n}+b_n +a_n\end{pmatrix}$$</span></p> <p>For the sequences <span class="math-container">$c_n$</span>, <span class="math-container">$b_n$</span>, and <span class="math-container">$a_n$</span>. I've figured out closed form expression for special cases. Namely, when <span class="math-container">$a_n=0$</span>, the determinant is <span class="math-container">$$\Big(\prod_{i=1}^nb_i\Big)\sum_{l=0}^n\prod_{k=1}^l\frac{c_{k}}{b_k}$$</span> When <span class="math-container">$l=0$</span> in the product series, that returns a <span class="math-container">$1$</span>. Additionally, if <span class="math-container">$c_1=0$</span>, then the determinant is simply <span class="math-container">$$\prod_{i=1}^nb_i.$$</span></p> <p>I would really like to find an analogous formula in the case where <span class="math-container">$a_n \neq 0$</span>. 
For your benefit I will list the first few determinants for small <span class="math-container">$n$</span>: <span class="math-container">$$n=1:\quad\quad c_1+b_1+a_1$$</span> <span class="math-container">$$n=2:\quad\quad a_1a_2+b_1a_2+a_1b_2+b_1b_2+c_1a_2+c_1b_2+a_1c_2+c_1c_2$$</span> <span class="math-container">$$n=3:\quad\quad a_1a_2a_3+b_1a_2a_3+a_1b_2a_3+b_1b_2a_3+a_1a_2b_3+b_1a_2b_3+a_1b_2b_3+b_1b_2b_3+c_1a_2a_3+c_1b_2a_3+c_1a_2b_3+c_1b_2b_3+a_1c_2a_3+a_1c_2b_3+c_1c_2a_3+c_1c_2b_3+a_1a_2c_3+b_1a_2c_3+c_1a_2c_3+a_1c_2c_3+c_1c_2c_3$$</span></p> <p>When you look at this, you may suspect that it is just the sum of every <span class="math-container">$n$</span>th order product of <span class="math-container">$a$</span>'s, <span class="math-container">$b$</span>'s, and <span class="math-container">$c$</span>'s with no subscript repeated; however, this is not the case. For instance, <span class="math-container">$b_1c_2$</span> does not appear in the <span class="math-container">$n=2$</span> formula. Similarly there are <span class="math-container">$6$</span> terms which do not appear in the <span class="math-container">$n=3$</span> formula.</p> <p>I would really appreciate anyone's input on this!</p>
fewfew4
617,212
<p>I believe I have an explicit solution!</p> <p>Using the case that I had already figured out (when <span class="math-container">$a_k=0$</span>), we can Taylor expand around this solution. For finite <span class="math-container">$n$</span>, this will be a finite expansion.</p> <p>First I define the quantity <span class="math-container">$\theta_{km}$</span>, with <span class="math-container">$1\leq k,m\leq n$</span>, which satisfies the following recursive relations</p> <p><span class="math-container">$$\theta_{km}=(c_m+b_m+a_m)\theta_{k,m-1}-b_{m-1}c_m\theta_{k,m-2},\quad \theta_{kk}=c_k+b_k+a_k,\quad \theta_{k,k-1}=1$$</span> <span class="math-container">$$\theta_{km}=(c_k+b_k+a_k)\theta_{k+1,m}-b_{k}c_{k+1}\theta_{k+2,m},\quad \theta_{mm}=c_m+b_m+a_m,\quad \theta_{m+1,m}=1$$</span> and <span class="math-container">$\theta_{km}=0$</span> when <span class="math-container">$k&gt; m+1$</span> (equivalently, <span class="math-container">$m&lt; k-1$</span>).</p> <p>Note that this quantity combines the <span class="math-container">$\theta_n$</span> and <span class="math-container">$\phi_n$</span> which are defined in this <a href="https://en.wikipedia.org/wiki/Tridiagonal_matrix#Inversion" rel="nofollow noreferrer">Wikipedia article</a>. 
And <span class="math-container">$\theta_{1n}$</span> is the determinant of the matrix.</p> <p>When <span class="math-container">$a_k=0$</span>, this quantity has an explicit solution:</p> <p><span class="math-container">$$\theta_{km}=\Big(\prod_{i=k}^mb_i\Big)\sum_{l=k-1}^m\prod_{j=k}^l\frac{c_{j}}{b_j}$$</span></p> <p>Using the recursive relations, it can be shown that this quantity satisfies</p> <p><span class="math-container">$$\frac{d\theta_{km}}{da_j}=\theta_{k,j-1}\theta_{j+1,m}$$</span></p> <p>Thus the general solution for nonzero <span class="math-container">$a_k$</span> is</p> <p><span class="math-container">$$\theta_{1n}+\sum_{k=1}^n\theta_{1,k-1}a_k\theta_{k+1,n}+\cdots+\sum_{k_1\cdots k_p=1}^n\theta_{1,k_1-1}a_{k_1}\theta_{k_1+1,k_2-1}\cdots a_{k_p}\theta_{k_p+1,n}+\cdots+a_1\cdots a_n$$</span></p> <p>where all of the <span class="math-container">$\theta$</span>'s in the above expression are for the case where <span class="math-container">$a_k=0$</span>.</p> <p>To tidy up the formula a bit more, one can note that <span class="math-container">$(a\theta)_{nm}=a_n\theta_{n+1,m-1}$</span> is a nilpotent upper triangular matrix. So this formula can actually be cast as</p> <p><span class="math-container">$$\Big(\theta(1-a\theta)^{-1}\Big)_{0n}$$</span></p> <p>That's about as explicit as I can do for now.</p>
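<p>The determinant recursion is easy to sanity-check numerically. Here is a short Python/NumPy sketch (illustrative only; the variable names are mine) comparing the three-term recursion for <span class="math-container">$\theta_{1m}$</span> against a direct determinant of the full tridiagonal matrix.</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
a, b, c = rng.uniform(1.0, 2.0, size=(3, n))

# Tridiagonal matrix from the question: diagonal c_k + b_k + a_k,
# superdiagonal b_1..b_{n-1}, subdiagonal c_2..c_n.
M = np.diag(a + b + c) + np.diag(b[:-1], 1) + np.diag(c[1:], -1)

# Leading principal minors via the three-term recursion
# theta_{1,m} = (c_m + b_m + a_m) theta_{1,m-1} - b_{m-1} c_m theta_{1,m-2},
# with theta_{1,0} = 1 and theta_{1,1} = c_1 + b_1 + a_1.
theta = [1.0, a[0] + b[0] + c[0]]
for m in range(1, n):
    theta.append((a[m] + b[m] + c[m]) * theta[-1] - b[m - 1] * c[m] * theta[-2])

print(theta[-1], np.linalg.det(M))  # the two values agree
```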