3,397,548
<p>For a sequence <span class="math-container">$\{x_n\}_{n=1}^{\infty}$</span>, define <span class="math-container">$$\Delta x_n:=x_{n+1}-x_n,~\Delta^2 x_n:=\Delta x_{n+1}-\Delta x_n,~(n=1,2,\ldots)$$</span> which are called the <strong>first-order</strong> and <strong>second-order difference</strong>, respectively. </p> <p>The problem is stated as follows:</p> <blockquote> <p>Let <span class="math-container">$\{x_n\}_{n=1}^{\infty}$</span> be <strong>bounded</strong>, and satisfy <span class="math-container">$\lim\limits_{n \to \infty}\Delta^2 x_n=0$</span>. Prove or disprove <span class="math-container">$\lim\limits_{n \to \infty}\Delta x_n=0.$</span></p> </blockquote> <p>Intuitively, the conclusion is likely to be true. Since <span class="math-container">$\lim\limits_{n \to \infty}\Delta^2 x_n=0,$</span> the differences <span class="math-container">$\Delta x_n$</span> are nearly constant for large <span class="math-container">$n$</span>. Thus, <span class="math-container">$\{x_n\}$</span> eventually looks like an <strong>arithmetic sequence</strong>. If <span class="math-container">$\lim\limits_{n \to \infty}\Delta x_n \neq 0$</span>, then <span class="math-container">$\{x_n\}$</span> cannot be bounded.</p> <p>But how can this be proved rigorously?</p>
Arthur
15,500
<p>Let <span class="math-container">$\{x_n\}$</span> be bounded by <span class="math-container">$X$</span> (i.e. <span class="math-container">$|x_n|&lt; X$</span> for all <span class="math-container">$n$</span>) and <span class="math-container">$\lim_{n\to\infty} \Delta^2x_n = 0$</span>. Take an arbitrary <span class="math-container">$\varepsilon&gt;0$</span>. I will show that there is an <span class="math-container">$N\in \Bbb N$</span> such that <span class="math-container">$|\Delta x_n|&lt;\varepsilon$</span> for all <span class="math-container">$n&gt;N$</span>, thus showing that <span class="math-container">$\lim_{n\to\infty}\Delta x_n = 0$</span>.</p> <p>First fix some natural number <span class="math-container">$m\geq \frac{4X}{\varepsilon} + 1$</span> (but also make sure that <span class="math-container">$m\geq 3$</span>). Next, fix some <span class="math-container">$N$</span> such that <span class="math-container">$|\Delta^2x_n|&lt;\frac{\varepsilon}{m-2} = Y$</span> for all <span class="math-container">$n&gt;N$</span>. Now assume, for contradiction, that there is some <span class="math-container">$n&gt;N$</span> such that <span class="math-container">$\Delta x_n\geq\varepsilon$</span>. Then we have <span class="math-container">$$ \begin{align} x_{n+m-1} - x_n &amp;= \Delta x_n + \Delta x_{n+1} + \Delta x_{n+2} + \cdots + \Delta x_{n+m-2}\\ &amp;\geq \Delta x_n + (\Delta x_{n} - Y) + (\Delta x_n - 2Y) + \cdots + (\Delta x_{n} - (m-2)Y)\\ &amp;= (m-1)\Delta x_n - \frac{(m-1)(m-2)}{2}Y\\ &amp;\geq (m-1)\varepsilon - \frac{(m-1)(m-2)}{2}Y\\ &amp;= (m-1)\left(\varepsilon - \frac{m-2}2 \cdot \frac{\varepsilon}{m-2}\right)\\ &amp;\geq \left(\frac{4X}\varepsilon + 1 - 1\right)\cdot \frac\varepsilon2\\ &amp;= 2X \end{align} $$</span> But <span class="math-container">$|x_n|&lt;X$</span> and <span class="math-container">$|x_{n+m-1}|&lt;X$</span>, so we can't have <span class="math-container">$x_{n+m-1} - x_n\geq 2X$</span>. Thus we have a contradiction. 
So we must have <span class="math-container">$\Delta x_n &lt; \varepsilon$</span> for all <span class="math-container">$n&gt;N$</span>. A very similar contradiction argument shows that <span class="math-container">$\Delta x_n &gt; -\varepsilon$</span>. It follows that <span class="math-container">$\Delta x_n\to 0$</span>.</p> <hr> <p>It may look like <span class="math-container">$\frac{4X}\varepsilon + 1$</span> and <span class="math-container">$\frac{\varepsilon}{m-2}$</span> are pulled out of thin air, and that it just magically works out in the end. This is not the case. They are derived in the following way:</p> <p>We want some natural number <span class="math-container">$m$</span> to indicate how many <span class="math-container">$\Delta x_n$</span> terms we are adding together, and we want some <span class="math-container">$Y$</span> to bound <span class="math-container">$\Delta^2x_n$</span>. With those bounds named, but without knowing what they are, we can actually do most of the working-out above. We get <span class="math-container">$$ \begin{align} x_{n+m-1} - x_n &amp;\geq\Delta x_n + (\Delta x_{n} - Y) + (\Delta x_n - 2Y) + \cdots + (\Delta x_{n} - (m-2)Y)\\ &amp;\geq \varepsilon + (\varepsilon - Y) + (\varepsilon - 2Y) + \cdots + (\varepsilon - (m-2)Y)\\ &amp; \geq (m-1)\left(\varepsilon - \frac{m-2}{2}Y\right) \end{align} $$</span> We want <span class="math-container">$\varepsilon - (m-2)Y\geq 0$</span>. (We don't want to add so many terms that we allow the last difference <span class="math-container">$\Delta x_{n+m-2}$</span> to become negative again. That's wasting terms.) And we want the final <span class="math-container">$(m-1)\left(\varepsilon - \frac{m-2}{2}Y\right)$</span> to be at least <span class="math-container">$2X$</span>. 
Solving these two inequalities gives <span class="math-container">$m\geq \frac{4X}{\varepsilon} + 1$</span> and <span class="math-container">$Y\leq \frac{\varepsilon}{m-2}$</span>, which is what I used in the proof above.</p> <p>Note that we don't really need <span class="math-container">$m\geq 3$</span>. If <span class="math-container">$\frac{4X}\varepsilon \leq 1$</span>, and we happen to pick <span class="math-container">$m = 2$</span>, then we can choose whichever value we want for <span class="math-container">$Y$</span>, and the argument works out in the end. However, because the general expressions require division by <span class="math-container">$m-2$</span>, I added the <span class="math-container">$m\geq 3$</span> requirement for simplicity.</p>
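<p>As a numeric illustration of the result (my own example, not part of the proof): the bounded sequence <span class="math-container">$x_n=\sin\sqrt n$</span> has <span class="math-container">$\Delta^2 x_n\to 0$</span>, and its first differences indeed die out as the theorem predicts.</p>

```python
import math

# Numeric illustration (hypothetical example, not part of the proof):
# x_n = sin(sqrt(n)) is bounded by 1 and its second difference tends to 0,
# so by the result above its first difference must tend to 0 as well.
def x(n):
    return math.sin(math.sqrt(n))

def delta(n):            # first difference
    return x(n + 1) - x(n)

def delta2(n):           # second difference
    return delta(n + 1) - delta(n)

# Both differences shrink as n grows.
for n in (10**2, 10**4, 10**6):
    print(n, abs(delta(n)), abs(delta2(n)))
```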
3,811,753
<p>Show that the equation</p> <p><span class="math-container">$$ y' = \frac{2-xy^3}{3x^2y^2} $$</span></p> <p>has an integration factor that depends on <span class="math-container">$x$</span>, and solve it that way.</p> <hr /> <p>So far we have got to:</p> <p><span class="math-container">$$ y' + \frac{xy^3}{3x^2y^2} = \frac{2}{3x^2y^2} $$</span></p> <p>Therefore:</p> <p><span class="math-container">$$ y' + \frac{1}{3x}y = \frac{2}{3x^2}y^{-2} $$</span></p> <p>But, in order to get an integration factor, shouldn't we have a linear equation? Of the form:</p> <p><span class="math-container">$$ y' + p(x)y = g(x) $$</span></p> <p>That way getting the integration factor:</p> <p><span class="math-container">$$ \mu = ke^{\int p(x)\,dx},\quad k \in \mathbb R $$</span></p> <p>But what we have is a non-linear equation, so how could an integration factor exist?</p> <p>Thanks.</p>
robjohn
13,854
<p><span class="math-container">$$ y'=\frac{2-xy^3}{3x^2y^2}\tag1 $$</span> Multiply <span class="math-container">$(1)$</span> by <span class="math-container">$3y^2g$</span> and shuffle some terms: <span class="math-container">$$ \left(y^3\right)'g+y^3\frac{g}x=\frac{2g}{x^2}\tag2 $$</span> We want the integrating factor to satisfy <span class="math-container">$$ g'=\frac{g}x\tag3 $$</span> So we can use <span class="math-container">$g(x)=x$</span>. Then <span class="math-container">$(2)$</span> becomes <span class="math-container">$$ \left(xy^3\right)'=\frac2x\tag4 $$</span> which is much easier to solve.</p>
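<p>For completeness (an addition, not in the original answer): integrating <span class="math-container">$(4)$</span> gives <span class="math-container">$xy^3 = 2\ln|x| + C$</span>. A quick numeric sanity check that this implicit solution satisfies <span class="math-container">$(1)$</span>, with an arbitrary choice of <span class="math-container">$C$</span>:</p>

```python
import math

# Solution from integrating (4): x*y^3 = 2*ln|x| + C  (C chosen arbitrarily)
C = 3.0

def y(x):
    return ((2 * math.log(x) + C) / x) ** (1 / 3)

def rhs(x):
    # Right-hand side of (1): (2 - x*y^3) / (3*x^2*y^2)
    return (2 - x * y(x) ** 3) / (3 * x ** 2 * y(x) ** 2)

x0, h = 2.0, 1e-6
dy = (y(x0 + h) - y(x0 - h)) / (2 * h)  # central-difference derivative
print(abs(dy - rhs(x0)) < 1e-6)         # True: the residual vanishes
```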
230,504
<p>Again, this question is related (**) to a <a href="https://mathoverflow.net/questions/101700/large-cardinals-without-the-ambient-set-theory?rq=1">previous one</a>:</p> <p>in standard books on basic set theory, after stating the axioms of ZFC, ordinal numbers are introduced early on. Afterwards cardinals appear: they are special ordinals which are minimal with respect to equinumerosity. </p> <p>Ordinals and cardinals live happily within standard set theory. Nothing wrong with that, but <strong>what about axiomatizing them <em>directly</em>, i.e. without the underlying set theory</strong>? </p> <p>What I mean is: develop a first-order theory of some number system (the intended ordinals), such that they are totally ordered, there is an initial limit ordinal, and satisfy induction with respect to formulas in the language of ordinal arithmetic (here, of course, one will have to adjust the induction schema to accommodate the limit case).</p> <p>The cardinals could be introduced by adding an order-preserving operator $K$ on the ordinal numbers mimicking their definition in ZFC: cardinals would then be the fixed points of $K$.</p> <ol> <li>has some direct axiomatization along these lines been fully developed? I would suspect that the answer is in the affirmative, but I have no refs. </li> <li>the induction schema would be limited to first-order formulae, so, assuming that the answer to 1 is yes, is there a theory of non-standard models of Ordinal Arithmetic? </li> <li>Assuming 1 AND 2, what about weaker induction schemas for ordinals (*)? </li> </ol> <p>(*) I am thinking again of formal arithmetics and the various sub-systems of Peano</p> <p>(**) it is not the same, though: here I am asking for a direct axiomatization of ordinals, and indirectly of cardinals, via the operator $K$</p>
Joel David Hamkins
1,946
<p>This is not exactly what you asked for, but there is an interesting axiomatization of Ordinals + Sets of Ordinals, which turns out to be precisely equiconsistent with ZFC. </p> <ul> <li>Peter Koepke, Martin Koerwien, <a href="http://arxiv.org/abs/math/0502265" rel="nofollow">The Theory of Sets of Ordinals</a>.</li> </ul> <p>Basically, if one is committed to the ordinals and having certain kinds of sets of ordinals, then you can build G&ouml;del's constructible universe $L$ and simulate a model of ZFC that way.</p>
Andreas Blass
6,794
<p>Are these papers of Takeuti the sort of thing you want?</p> <p>MR0086751 (19,237e) 02.0X Takeuti, Gaisi, On the theory of ordinal numbers. J. Math. Soc. Japan 9 (1957), 93–113.</p> <p>MR0099918 (20 #6354) 02.00 Takeuti, Gaisi, On the theory of ordinal numbers. II. J. Math. Soc. Japan 10 (1958), 106–120.</p> <p>MR0197302 (33 #5467) 02.18 Takeuti, Gaisi, A formalization of the theory of ordinal numbers. J. Symbolic Logic 30 (1965), 295–317.</p>
231,187
<p>I'm wondering if there's an efficient way of checking to see if two context free grammars are equivalent, besides working out "test cases" by hand (ie, just trying to see if both grammars can generate the same things, and only the same things, by trial and error).</p> <p>Thanks!</p>
Jurgen Vinju
111,672
<p>This paper provides an answer: "Comparison of Context-Free Grammars Based on Parsing Generated Test Data", <a href="http://link.springer.com/chapter/10.1007%2F978-3-642-28830-2_18" rel="nofollow">http://link.springer.com/chapter/10.1007%2F978-3-642-28830-2_18</a>. The authors propose to generate sub-sentences from one grammar and use a parser generated from the other to parse them. A statistical analysis and visualization help to interpret the results.</p> <p>Quoting the abstract:</p> <blockquote> <p>There exist a number of software engineering scenarios that essentially involve equivalence or correspondence assertions for some of the context-free grammars in the scenarios. For instance, when applying grammar transformations during parser development—be it for the sake of disambiguation or grammar-class compliance—one would like to preserve the generated language. Even though equivalence is generally undecidable for context-free grammars, we have developed an automated approach that is practically useful in revealing evidence of nonequivalence of grammars and discovering correspondence mappings for grammar nonterminals. Our approach is based on systematic test data generation and parsing. We discuss two studies that show how the approach is used in comparing grammars of open source Java parsers as well as grammars from the course work for a compiler construction class.</p> </blockquote>
Anderson Green
32,826
<p>In some cases, it is relatively easy to prove that two formal grammars are equivalent. If two grammars generate a <em>finite language</em>, then you only need to compare the sets of strings generated by both grammars to prove that they are <a href="https://en.wikipedia.org/wiki/Equivalence_(formal_languages)" rel="nofollow noreferrer">weakly equivalent</a>.</p> <p>In some other cases, it is possible to simplify an equivalence proof by replacing non-terminal symbols with terminal symbols. In this example, <code>A1</code> is equivalent to <code>A</code> because <code>D</code> can be replaced with <code>B C</code>: </p> <pre><code>A --&gt; A B C A1 --&gt; A D D --&gt; B C </code></pre> <p>Equivalence proofs can sometimes be simplified by rewriting grammars into normal forms, such as <a href="https://en.wikipedia.org/wiki/Chomsky_normal_form" rel="nofollow noreferrer">Chomsky normal form</a>.</p> <p>Similarly, there are some repeating patterns that can be proven to be weakly equivalent. All three of the following grammars are equivalent to <code>(B | A)*</code>, where <code>*</code> is the <a href="https://en.wikipedia.org/wiki/Kleene_star" rel="nofollow noreferrer">Kleene star</a>:</p> <pre><code>A1 --&gt; (B*) ((B A)*) A2 --&gt; ((B | A)*) (B*) A3 --&gt; (B | A | (A B))* </code></pre>
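<p>The finite-language case above can be checked mechanically. A minimal sketch (the grammar encoding here is an illustrative assumption, not a standard API): for non-recursive grammars, enumerate every derivable string and compare the two sets directly.</p>

```python
from itertools import product

# Sketch of the finite-language comparison described above: when both
# grammars are non-recursive, every generated string can be enumerated
# and the resulting sets compared directly.
def language(grammar, symbol):
    """All strings derivable from `symbol` (terminals are plain strings)."""
    if symbol not in grammar:          # terminal symbol
        return {symbol}
    result = set()
    for production in grammar[symbol]:
        # Cartesian product of the sub-languages of each symbol in the rule
        parts = [language(grammar, s) for s in production]
        for combo in product(*parts):
            result.add("".join(combo))
    return result

g1 = {"S": [("a", "B")], "B": [("b",), ("c",)]}
g2 = {"S": [("a", "b"), ("a", "c")]}
print(language(g1, "S") == language(g2, "S"))  # True: both generate {ab, ac}
```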
1,560,411
<p>If $B_1$ and $B_2$ are the bases of two integer lattices $L_1$ and $L_2$, i.e.</p> <p>$L_1=\{B_1n:n\in\mathbb Z^d\}$ and $L_2=\{B_2n:n\in\mathbb Z^d\}$,</p> <p>is there an easy way to determine a basis for $L_1\cap L_2$? Answers of the form "Plug the matrices into a computer and ask for Hermite Normal Form, etc" are perfectly acceptable as this is a practical problem and the matrices of integers $B_1$ and $B_2$ are known, but I need some algorithmic way because the procedure will be repeated many times.</p>
Andrew Young
491,550
<p>Let L1 and L2 be the basis matrices of the two column-vector lattices,<br/> let A be a 2x2 block matrix,<br/> and let 0 be an all-zeros matrix: <span class="math-container">$$ \begin{align} A= \begin{bmatrix} L1 &amp; L2\\ L1 &amp; 0 \end{bmatrix} \end{align} $$</span></p> <p>Take the Hermite normal form:<br/> <span class="math-container">$$ \begin{align} B=hnf(A) \end{align} $$</span></p> <p><span class="math-container">$$ \begin{align} B= \begin{bmatrix} C &amp; 0\\ D &amp; E \end{bmatrix} \end{align} $$</span> B is a 2x2 block matrix in which E is a basis of the lattice intersection. You can get the number of columns in E by counting the all-zero columns in the block above it.</p> <p>The reason this works is that the HNF produces zeros in the upper-right block. Those zero columns come from linear combinations of vectors from L1 and L2; since the top halves add up to zero, the linear combination of L1 vectors equals the negative of the combination of L2 vectors. This shows that these vectors lie in both the L1 and L2 lattices. You can recover them by looking at just the L1 components, i.e. the block E.</p> <p>Unfortunately, HNF tends to produce integer overflow errors even for small lattices. However, you don't need the full HNF form: any algorithm that produces all-zero columns in the top-right block will work.</p>
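<p>A sketch of this construction (my own code, assuming square nonsingular integer basis matrices; a hand-rolled integer column reduction is used instead of a library HNF, since, as noted above, any reduction that zeroes the top-right block works):</p>

```python
# Sketch of the block-matrix trick above. Integer column operations are
# unimodular, so they preserve the column lattice; we only need to clear
# the top d rows right of their pivots, after which the columns with zero
# top halves have bottom halves spanning the intersection L1 ∩ L2.
def lattice_intersection(B1, B2):
    d = len(B1)
    # Stack A = [[B1, B2], [B1, 0]] as a list of rows.
    A = [B1[i] + B2[i] for i in range(d)] + [B1[i] + [0] * d for i in range(d)]
    rows, cols = 2 * d, 2 * d
    p = 0                                   # next pivot column
    for i in range(d):                      # clear the top d rows only
        while True:
            nz = [j for j in range(p, cols) if A[i][j] != 0]
            if len(nz) <= 1:
                break
            j0 = min(nz, key=lambda j: abs(A[i][j]))  # smallest pivot
            for j in nz:
                if j != j0:                 # reduce the other entries mod pivot
                    q = A[i][j] // A[i][j0]
                    for k in range(rows):
                        A[k][j] -= q * A[k][j0]
        nz = [j for j in range(p, cols) if A[i][j] != 0]
        if nz:                              # move the surviving column to p
            for k in range(rows):
                A[k][p], A[k][nz[0]] = A[k][nz[0]], A[k][p]
            p += 1
    # Columns p..2d-1 now have zero top halves; their bottom halves span L1∩L2.
    return [[A[d + i][j] for j in range(p, cols)] for i in range(d)]

# Example: (2Z)^2 ∩ (3Z)^2 = (6Z)^2
E = lattice_intersection([[2, 0], [0, 2]], [[3, 0], [0, 3]])
print(E)  # [[6, 0], [0, 6]]
```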
43,956
<p>There is this example at the Wikipedia article on Quotient spaces (QS):</p> <blockquote> <p>Consider the set $X = \mathbb{R}$ of all real numbers with the ordinary topology, and write $x \sim y$ if and only if $x−y$ is an integer. Then the quotient space $X/\sim$ is homeomorphic to the unit circle $S^1$ via the homeomorphism which sends the equivalence class of $x$ to $\exp(2πix)$.</p> </blockquote> <p>I understand relations, equivalence relation and equivalence class but quotient space is still too abstract for me. This seems like a simple enough example to begin with.</p> <p>I understand (sort of) the definition but I can't visualize. And by this example and others, there is a lot of visualizing going on here! torus, circles etc.</p>
Aaron
9,863
<p>Given a set $S$ and an equivalence relation $\sim$ on $S$, then we can quotient out the set by the equivalence relation to obtain the quotient $S/\sim$, which is the set of all equivalence classes of $S$ under $\sim$.</p> <p>If $S$ isn't just a set but is actually a topological space, we wish to give a topology to $S/\sim$. There are many ways to do this, but we want one that has as much to do with the topology of $S$ as possible.</p> <p>The main condition that we want is that the natural quotient map $q:S\to S/\sim$ should be continuous. Any topology on $S/\sim$ that didn't satisfy this property would be too far from $S$. However, even with this condition, there are still lots of choices for the topology. For example, if we gave $S/\sim$ the trivial topology (only the empty set and entire set are open), the map would be continuous, but the topology would be very far from being what we want.</p> <p>Instead, we take the other extreme approach. If $U\subset S/\sim$, then $U$ is open if and only if $q^{-1}(U)$ is open. Alternatively, the open sets of $S/\sim$ are exactly those of the form $q(V)$ for $V\subset S$ open such that $V$ is the union of equivalence classes.</p> <p>This gives us the following universal property: If $f:S\to X$ is a continuous map such that $f(x)=f(y)$ whenever if $x\sim y$, then we can uniquely factor the map through $S/\sim$, that is, we can find a unique continuous map $\widetilde{f}:S/\sim \to X$ such that $f=\widetilde{f}\circ q$.</p> <p>For the example you are looking at, the equivalence relation defines a quotient map to the circle by the map $x\to e^{2\pi i x}$, where we are viewing $S^1$ as the unit circle in the complex plane. What we need to check is that the quotient topology is the normal topology.</p> <p>The wrapping map is continuous. We just need to make sure that every open set in the quotient topology is open in the usual (subspace) topology. 
However, we have some extra structure in our situation which makes this easier to check. Namely, $\mathbb{R}$ is a group, and our quotient is the (group theoretic) quotient by the subgroup $\mathbb{Z}$. Given an open set $U\subset \mathbb{R}$, the set $\widetilde{U}=\bigcup_{i\in \mathbb Z} U+i$ is the smallest set that contains $U$ and is the union of equivalence classes. Moreover, because it is the union of open sets, it is open. The image of such sets will be the open sets in the quotient topology.</p> <p>Because the topology of $\mathbb{R}$ has a basis of intervals, and because passing from an interval $U=(a,b)$ to $\widetilde{U}$ to $q\widetilde{U}$ is the same as wrapping the interval around the circle, and because subsets of this form give a basis for the topology of the circle, we must have that the usual topology is the same as the quotient topology.</p> <p>Note that this implies that if you quotient a topological group by a subgroup, giving the quotient the quotient topology, the quotient map will actually be an open map (the image of an open set is open). This is not true for general quotient maps.</p>
4,015,741
<p>I want to find the solutions of <span class="math-container">$(x+1)^{63}+(x+1)^{62}(x-1)+\cdots+(x-1)^{63}=0$</span>.</p> <p>It is not hard to see that <span class="math-container">$x=0$</span> is a root of the equation, but I don't know how to solve this equation in general. I can see that the terms of the equation look very similar to the binomial expansion of <span class="math-container">$[(x+1)+(x-1)]^{63}$</span>, except that the coefficient of each term is <span class="math-container">$1$</span> rather than <span class="math-container">$63\choose k $</span> (for <span class="math-container">$k=0,1,\cdots,63$</span>). Is it possible to use the binomial theorem to solve the equation? (Or other approaches?)</p>
Shubham Johri
551,962
<p>Note that <span class="math-container">$x=-1$</span> is not a solution. This is a GP with common ratio <span class="math-container">$r=(x-1)/(x+1)\ne1$</span> and the sum is<span class="math-container">$$(x+1)^{63}\left[\frac{r^{64}-1}{r-1}\right]=0\implies r^{64}=1\implies r=\pm1$$</span>Since <span class="math-container">$r\ne1$</span>, it must equal <span class="math-container">$-1$</span>, giving the only real solution <span class="math-container">$x=0$</span>.</p>
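<p>A quick symbolic check of the telescoping step (a supplement to the answer, using sympy): multiplying the sum by <span class="math-container">$(x+1)-(x-1)=2$</span> collapses it to <span class="math-container">$(x+1)^{64}-(x-1)^{64}$</span>.</p>

```python
from sympy import symbols, expand

x = symbols('x')
# The LHS is geometric with ratio r = (x-1)/(x+1); multiplying by
# (x+1) - (x-1) = 2 telescopes it, so 2*p == (x+1)**64 - (x-1)**64.
p = sum((x + 1) ** (63 - k) * (x - 1) ** k for k in range(64))
print(expand(2 * p - ((x + 1) ** 64 - (x - 1) ** 64)))  # 0

# The equation p = 0 therefore reads (x+1)^64 = (x-1)^64, i.e.
# |x+1| = |x-1| over the reals, whose only solution is x = 0:
print(p.subs(x, 0))  # 0
```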
Shrivardhan
884,246
<p>You can solve it using the formula for the sum of a GP.<br /> Here <span class="math-container">$a = (x+1)^{63}$</span> and <span class="math-container">$r = \dfrac{x-1}{x+1}$</span>.</p>
596,374
<p>I solved this , but I am not sure if I did in the right way.</p> <p>$$2^{2x + 1} - 2^{x + 2} + 8 = 0$$</p> <p>$$2^{x + 2} - 2^{2x + 2} = 8$$</p> <p>$$\log_22^{x + 2} - \log_22^{2x + 2} = \log_28$$</p> <p>$$x + 2- 2x - 2 = 3$$</p> <p>solving for $x$:</p> <p>$$x = -2$$</p> <p>any feedback would be appreciated.</p>
Ahaan S. Rungta
85,039
<p>You can easily check the solution you get. Checking $x=-2$, you get $$ 2^{2x+1}-2^{x+2}+8=2^{-3}-2^{0}+8=\dfrac{1}{8}-1+8\ne0, $$ which means you have an error. Specifically, your transition from the first line to the second line is incorrect. </p>
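<p>A side check going slightly beyond the answer above (my addition): substituting <span class="math-container">$t=2^x$</span> turns the equation into a quadratic with negative discriminant, so there is no real solution at all.</p>

```python
# With t = 2^x, the equation 2^(2x+1) - 2^(x+2) + 8 = 0 becomes
# 2t^2 - 4t + 8 = 0, i.e. t^2 - 2t + 4 = 0, whose discriminant is
# negative, so there is no real t > 0 and hence no real solution x.
def f(x):
    return 2 ** (2 * x + 1) - 2 ** (x + 2) + 8

print(f(-2))                  # 7.125, confirming x = -2 is not a solution
print((-2) ** 2 - 4 * 1 * 4)  # -12: no real t, hence no real x
```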
1,539,350
<p>Orthogonal matrices preserve the length of every vector and the scalar products between vectors. I think only rotation transformations satisfy these properties, but I don't know how to prove it, or whether it's even true.</p>
Gautam Shenoy
35,983
<p>Ok. I got it. The Lemma is true. Given a sequence of probability measures $P_n$, we consider $P_n(x)$ for every $x \in \mathcal{X}$. Using the Bolzano–Weierstrass theorem, noting that $P_n(x)$ is a bounded sequence of real numbers, there exists a convergent subsequence. We do this iteratively: WLOG, assume $\mathcal{X} = \{1,2,...,N\}$.</p> <p>Step $1$: Find a convergent subsequence in $P_n(1)$. Let the limit be $\hat{P}(1)$. Note that the measures are applied to $1$ here.</p> <p>Step $2$: For $i \geq 2$, repeat step $1$ but for the subsequence obtained in step $i-1$ and the measure applied to $i$. Stop after step $N$.</p> <p>Thus we get a final subsequence, $P_{n_{k}}$, which converges in total variation to $\hat{P}$. To see that $\hat{P}$ is a prob measure, firstly $\hat{P}(x) \in[0,1]$ for every $x$. Also, since the sum is finite, we may interchange limit and sum: $$\sum_{x \in \mathcal{X}} \hat{P}(x) = \sum_{x \in \mathcal{X}} \lim_{k \to \infty} P_{n_k}(x) = \lim_{k \to \infty} \sum_{x \in \mathcal{X}} P_{n_k}(x)=1$$</p> <p>Thus $\hat{P}$ is a valid prob. measure and hence $\hat{P} \in \mathcal{P}$.</p> <p>Thus $\mathcal{P}$ is sequentially compact and hence is compact as it is a metric space. $\blacksquare$</p>
Ilya
5,887
<p>When $X$ is finite, you can treat $\mathcal P$ as a subset of $\Bbb R^X$. Total variation is the norm there, and all norms over finite-dimensional spaces are equivalent, so the induced metric is equivalent to the Euclidean one. Embedded $\mathcal P$ is bounded and closed in the Euclidean metric, hence compact.</p>
55,404
<p>I have been searching for a version of the isoperimetric inequality which is something like:</p> <p>$P(\Omega) - 2\sqrt{\pi} Vol(\Omega)^{1/2} \geq \pi (r_{out}^2 - r_{in}^2)$ where $r_{out}$ and $r_{in}$ are the outer and inner radii of a given set. There are of course details which I am missing, such as what kind of sets this applies to (clearly connected and possibly simply connected). I was hoping somebody may recognize this inequality and be able to direct me to a source for it along with a proof.</p> <p><strong>Update:</strong> I'm curious if anyone can direct me to some papers which relate the isoperimetric deficit to Hausdorff distance. Such as: $P(\Omega)^2 - 4\pi Vol(\Omega) \geq C d_H(\Omega,B)^2$ where $B$ is a sphere in $\mathbb{R}^2$ which may be the inner or outer circle.</p> <p><strong>Update April 12:</strong> I would like to know if the first Bonnesen inequality written below is strictly stronger than the one in higher dimensions? In particular, if one considers the Fraenkel asymmetry $\alpha(\Omega) = \min_B |\Omega \Delta B|$ where $|B|=|\Omega|$, does it hold on a bounded domain that</p> <p>$ r_{out}^2 - r_{in}^2 \leq C \alpha(\Omega)$,</p> <p>for some constant $C&gt;0$? This seems like it should be true but I can't seem to find a concise proof of it.</p>
Andrey Rekalo
5,371
<p>There is a sharpened version of the plane isoperimetric inequality due to Benson which involves the inner and outer radii. Let $$\Gamma=\{(r,\theta):\ r=r(s),\theta=\theta(s)\}$$ be a simple closed rectifiable curve on the plane, parametrized by the arc length $s$, and let $$r_1=\sup\{r:\ (r,\theta)\in\Gamma\},\qquad r_2= \inf\{r:\ (r,\theta)\in\Gamma\}.$$ Assume that $\Gamma$ winds once around the inner circle. Then </p> <blockquote> <p>$$L^2-4\pi A\geq\frac{(2FA-2\pi E-\pi/(2F))^2}{1+4EF},$$</p> </blockquote> <p>where $L$ is the perimeter of $\Gamma$, $A$ is the area of the enclosed region, and $$F=\frac{1}{r_1-r_2},\qquad E=\frac{r_1r_2(r_1+r_2)}{(r_1-r_2)^2}.$$ </p> <p>The reference is: D. Benson, <a href="http://www.jstor.org/pss/2316850" rel="nofollow">"Sharpened Forms of the Plane Isoperimetric Inequality"</a>, <em>The American Mathematical Monthly</em>, Vol. 77 (1970), pp. 29-34.</p>
3,176,629
<p>In the evening, pizza was ordered: nine people sat around a round table, and 50 slices of pizza were served to them. Prove that there were two people sitting next to each other who together ate at least 12 pizza slices.</p> <p>I used the pigeonhole principle: 50/9 ≈ 5.6, which rounds up to 6.</p> <p>Therefore, at least one person ate at least 6 slices of pizza. </p> <p>I just don't know how to prove that two adjacent people ate at least 12 slices..</p> <p>Help would be greatly appreciated!</p>
Mike Earnest
177,399
<p>Since on average people ate <span class="math-container">$50/9&lt;6$</span> slices, there exists a person who ate <em>at most</em> <span class="math-container">$5$</span> slices. The other eight people together ate at least <span class="math-container">$45$</span> slices. Can you conclude?</p>
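<p>For an experimental sanity check of the claim (sampling, of course, proves nothing; this only illustrates the hint above):</p>

```python
import random

# Illustration only: distribute 50 slices among 9 people seated in a circle
# and check that some adjacent pair always totals at least 12 slices,
# as the pigeonhole argument above guarantees.
random.seed(0)
for _ in range(1000):
    counts = [0] * 9
    for _ in range(50):                  # hand out the slices at random
        counts[random.randrange(9)] += 1
    assert max(counts[i] + counts[(i + 1) % 9] for i in range(9)) >= 12
print("no counterexample in 1000 random trials")
```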
2,637,983
<p>I was working on a program to carry out some computations, and ran into an issue of needing to compare some algebraic numbers, but not having enough precision to do it without exact arithmetic, and not knowing how to do it with exact arithmetic.</p> <p>A little algebra shows that the statement $$a+b\sqrt{n}&gt;0$$ is equivalent to asking that either $a^2&gt;nb^2$ and $a&gt;0$ or $nb^2&gt;a^2$ and $b&gt;0$. In particular, this means that we can easily compute the order on $\mathbb Q(\sqrt{5})$ using only rational arithmetic on the coefficients of polynomials in $\sqrt{5}$.</p> <p>However, it seems not so clear how to generalize this reasoning even to an example like deciding whether $a+b\sqrt[3]{n}+c\sqrt[3]{n}^2$ is positive.</p> <p>In general, suppose that $f$ is an irreducible polynomial in $\mathbb Q[x]$ and has some real root $\alpha$. Let $F=\mathbb Q[x]/(f)\cong \mathbb Q(\alpha)$ be the corresponding field extension. This field clearly can be ordered, as it is identified with a subfield of $\mathbb R$.</p> <p>Is it possible to compute an explicit order* on $F$ using only rational arithmetic? I feel that this must be possible, but can't figure out how.</p> <p>I'm most interested in whether, for each fixed field extension $F$, there exists an algorithm taking as input a polynomial in $\alpha$ of degree less than $\deg f$ and deciding whether it is positive or not, using a bounded number of operations. I want this primarily for field extensions of low degree, so I'm less interested in how the complexity grows as $F$ becomes more complex than in how algorithms tailored to a single $F$ fare.</p> <p>(*Obviously, I'm most interested in being able to compute the order on $\mathbb Q(\alpha)$ inherited from $\mathbb R$, but given that this field is isomorphic to $\mathbb Q(\alpha')$ for any other root of $f$, there are probably multiple orders - any of which would be interesting to compute)</p>
orangeskid
168,051
<p>Just an observation, you may already know it:</p> <p>If all the conjugates of the real algebraic number <span class="math-container">$\alpha$</span> are complex (that is, <span class="math-container">$\alpha$</span> is the only real root of its minimal polynomial), then there exists a unique ordering on <span class="math-container">$\mathbb{Q}(\alpha)$</span>. An element <span class="math-container">$\beta \in \mathbb{Q}(\alpha)$</span> is positive if and only if its norm <span class="math-container">$N(\beta)$</span> is positive. So for instance, to check whether <span class="math-container">$$-136 + 8 \sqrt[3]{7} + 33 \sqrt[3]{49}$$</span> is positive, calculate its norm, which equals <span class="math-container">$3025$</span>. Therefore, the number is positive. To see that it is smaller than <span class="math-container">$1$</span>, calculate <span class="math-container">$$N(1-(-136 + 8 \sqrt[3]{7} + 33 \sqrt[3]{49}) ) = 47328$$</span></p> <p>There is a theorem of Artin that says that an element of a number field is positive in all real embeddings of the field if and only if the number is a sum of squares. In this case, there is only one real embedding, so the positives are exactly the sums of squares, something that also follows from the norm being positive.</p>
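<p>A sketch of the norm test above in code (my addition): for <span class="math-container">$\beta = a + b\alpha + c\alpha^2$</span> with <span class="math-container">$\alpha^3 = n$</span>, the field norm works out to <span class="math-container">$N(\beta) = a^3 + nb^3 + n^2c^3 - 3nabc$</span>, so the sign test needs integer arithmetic only.</p>

```python
# Norm on Q(alpha), alpha = n^(1/3): for beta = a + b*alpha + c*alpha^2,
#   N(beta) = a^3 + n*b^3 + n^2*c^3 - 3*n*a*b*c.
# Since alpha is the only real cube root, beta > 0  iff  N(beta) > 0.
def norm(a, b, c, n):
    return a**3 + n * b**3 + n**2 * c**3 - 3 * n * a * b * c

# The two numbers from the answer (n = 7):
print(norm(-136, 8, 33, 7))   # 3025 > 0: the number is positive
print(norm(137, -8, -33, 7))  # 47328 > 0: so the number is also < 1
```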
95,819
<p>I think I have solved a problem in <em>Topology</em> by Munkres, but there is a small detail that is bugging me. The problem is stated in this question's title. I will write down the proof and will highlight what is troubling me.</p> <p>We prove by contradiction: Assume $X$ is not Hausdorff. Then there exist points $x,y$ where $x$ is different from $y$ such that no neighbourhoods $U$, $V$ about $x$ and $y$ respectively have trivial intersection. Now consider the point $(x,y)$ that is in the complement of $\Delta$. Now let $U \times V$ be any basis element that contains $(x,y)$ (such an element exists by definition of the product topology being generated by the basis $\mathcal{B}$ consisting of elements of the form $W \times Z$, where $W$ is open in $X$ and $Z$ is open in $Y$). Consider $(U \times V ) \cap \Delta$, <strong>which I claim to be $(U \cap X) \times (V \cap X)$</strong>.</p> <p>By our choice of $x$ and $y$ there is $z \in U \cap V$, implying that the intersection $(U \times V ) \cap \Delta$ is not trivial.</p> <p>Since $U \times V$ was any basis element containing $(x,y)$, this means that $(x,y) \in \overline{\Delta}$, which means that there exists a limit point of $\Delta$ that is not in it, contradicting $\Delta$ being closed.</p> <p>The problem comes is in the way I have decomposed $\Delta$; the way I have put it seems I am saying that $\Delta$ <em>is equal to $X \times X$</em>, which is not the case. How can I get round this?</p> <p>Thanks.</p> <p><strong>Edit:</strong> Martin Sleziak has pointed out some mistakes, $(U \times V ) \cap \Delta$ should be $\{ (x,x) : x \in U \cap V\}$ and not as claimed.</p>
David Mitra
18,986
<p>A straightforward proof is a bit simpler:</p> <p>If the diagonal is closed, then for $x\ne y$ in $X$, the point $(x,y)$ is not on the diagonal. So, there is a nhood of $(x,y)$ in $X\times X$ disjoint from the diagonal. But then this nhood contains a basic open set of the form $U\times V$, where $U$ and $V$ are open disjoint sets containing $x$ and $y$ respectively.</p> <p><hr> Actually, contraposition would be best as you started, in my opinion. If $X$ is not Hausdorff then there are $x$ and $y$ with $x\ne y$ such that for every open set $U$ containing $x$ and for every open set $V$ containing $y$ we have $U\cap V\ne\emptyset$. Then, as you show, $(x,y)$ is in the closure of the diagonal. So, as $(x,y)$ is not on the diagonal but is in its closure, the diagonal is not closed.</p> <p><hr> The answer to your actual question is nicely explained in the other posts here. But I wanted to emphasise that a nhood $U\times V$ in $X\times X$ (which is as you describe) intersects the diagonal if and only if $U\cap V\ne\emptyset$. This follows from the relevant definitions and is the key to the whole problem...</p>
7,575
<p>How could I display text that flashed red for a half second or so and then reverted to black? (Or was put in bold and reverted to normal, etc.)</p>
István Zachar
89
<p>By specifying different values for <code>time</code> and <code>frequencyInverse</code>, the behavior of flashing can be finetuned.</p> <pre><code>time = 100; frequencyInverse = 4; i = 0; Dynamic@Style["TESTESTEST", Bold, RGBColor[color, 0, 0]] RunScheduledTask[(i = i + 1; color = Rescale[Sin[i/frequencyInverse], {-1, 1}]; If[i == time, RemoveScheduledTask[ScheduledTasks[]]; color = 0]), {.01, \[Infinity]}]; </code></pre> <p><img src="https://i.stack.imgur.com/DtYK8.gif" alt="enter image description here"></p> <p>(Note that the actual flashing is faster in <em>Mathematica</em>; the slowdown is because of exporting it to GIF.) </p> <p>With <code>Clock</code>:</p> <pre><code>time = 1; (* period length *) iterations = 5; (* number of successive flashes *) img1 = DensityPlot[Sin[x] Sin[y], {x, -4, 4}, {y, -3, 3}, ImageSize -&gt; 50, Frame -&gt; False]; img2 = DensityPlot[Sin[x] Sin[y], {x, -4, 4}, {y, -3, 3}, ColorFunction -&gt; "SunsetColors", ImageSize -&gt; 50, Frame -&gt; False]; Dynamic@Overlay[{img1, SetAlphaChannel[ img2, (1 - Abs@Clock[{-1, 1, .1}, time, iterations])]}] </code></pre> <p><img src="https://i.stack.imgur.com/SEiR4.gif" alt="enter image description here"></p>
3,573,334
<blockquote> <p>Given positives <span class="math-container">$a, b, c$</span> such that <span class="math-container">$a + b + c = 3$</span>, prove that <span class="math-container">$$\frac{1}{c^2 + 4a^2 + b^2} + \frac{1}{a^2 + 4b^2 + c^2} + \frac{1}{b^2 + 4c^2 + a^2} \le \frac{1}{2}$$</span></p> </blockquote> <p>We have that <span class="math-container">$$a^2 + 4b^2 + c^2 = a^2 + (a + b + c + 1)b^2 + c^2 = (b^2 + a)a + (b + 1)b^2 + (b^2 + c)c$$</span></p> <p><span class="math-container">$$\implies \sum_{cyc}\frac{1}{a^2 + 4b^2 + c^2} = \frac{1}{(a + b + c)^2} \cdot \sum_{cyc}\frac{(a + b + c)^2}{(b^2 + a)a + (b + 1)b^2 + (b^2 + c)c}$$</span></p> <p><span class="math-container">$$ \le \frac{1}{(a + b + c)^2} \cdot \sum_{cyc}\left(\frac{a}{b^2 + a} + \frac{1}{b + 1} + \frac{c}{b^2 + c}\right)$$</span></p> <p>Furthermore, <span class="math-container">$\dfrac{a}{c^2 + a} + \dfrac{c}{a^2 + c} = 1 - \dfrac{(c + a - 2)ca}{(c^2 + a)(a^2 + c)}$</span>, <span class="math-container">$$\sum_{cyc}\frac{1}{a^2 + 4b^2 + c^2} \le \frac{1}{(a + b + c)^2} \cdot \sum_{cyc}\left[\frac{1}{b + 1} - \frac{(c + a - 2)ca}{(c^2 + a)(a^2 + c)}\right] + \frac{1}{3}$$</span></p> <p>Then I don't know what to do next.</p>
LHF
744,207
<p>The idea is indeed that of using Cauchy-Schwarz. The problem is you went from a homogeneous expression (<span class="math-container">$a^2+4b^2+c^2$</span>) to non-homogeneous terms and that may be harder to prove than the original inequality.</p> <p>Instead, I would apply Cauchy-Schwarz like this:</p> <p><span class="math-container">$$\frac{(a+b+c)^2}{a^2+4b^2+c^2} \leq \frac{a^2}{a^2+b^2}+\frac{b^2}{2b^2}+\frac{c^2}{b^2+c^2}=\frac{a^2}{a^2+b^2}+\frac{c^2}{b^2+c^2}+\frac{1}{2}$$</span></p> <p>and summing cyclically:</p> <p><span class="math-container">$$ \begin{aligned} \sum \frac{(a+b+c)^2}{a^2+4b^2+c^2} &amp;\leq \sum\left(\frac{a^2}{a^2+b^2}+\frac{c^2}{b^2+c^2}\right)+\frac{3}{2}\\ &amp;=\sum\left(\frac{a^2}{a^2+b^2}+\frac{b^2}{a^2+b^2}\right)+\frac{3}{2}\\ &amp;=3+\frac{3}{2}=\frac{9}{2} \end{aligned} $$</span></p>
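<p>As a numerical sanity check (not a proof), one can sample the constraint set $a+b+c=3$ and confirm that the left-hand side never exceeds $\tfrac12$, the value attained at $a=b=c=1$. A quick sketch in Python:</p>

```python
import random

def lhs(a, b, c):
    """Left-hand side of the inequality."""
    return (1 / (c*c + 4*a*a + b*b)
            + 1 / (a*a + 4*b*b + c*c)
            + 1 / (b*b + 4*c*c + a*a))

random.seed(0)
worst = lhs(1, 1, 1)  # the equality case a = b = c = 1 gives 1/2
for _ in range(100_000):
    u, v = sorted(random.uniform(0, 3) for _ in range(2))
    a, b, c = u, v - u, 3 - v  # uniform random point of {a + b + c = 3}
    if min(a, b, c) > 0:
        worst = max(worst, lhs(a, b, c))
print(worst)
```

<p>The sampled maximum stays at $1/2$, consistent with equality holding exactly at $a=b=c=1$.</p>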
3,573,334
<blockquote> <p>Given positives <span class="math-container">$a, b, c$</span> such that <span class="math-container">$a + b + c = 3$</span>, prove that <span class="math-container">$$\frac{1}{c^2 + 4a^2 + b^2} + \frac{1}{a^2 + 4b^2 + c^2} + \frac{1}{b^2 + 4c^2 + a^2} \le \frac{1}{2}$$</span></p> </blockquote> <p>We have that <span class="math-container">$$a^2 + 4b^2 + c^2 = a^2 + (a + b + c + 1)b^2 + c^2 = (b^2 + a)a + (b + 1)b^2 + (b^2 + c)c$$</span></p> <p><span class="math-container">$$\implies \sum_{cyc}\frac{1}{a^2 + 4b^2 + c^2} = \frac{1}{(a + b + c)^2} \cdot \sum_{cyc}\frac{(a + b + c)^2}{(b^2 + a)a + (b + 1)b^2 + (b^2 + c)c}$$</span></p> <p><span class="math-container">$$ \le \frac{1}{(a + b + c)^2} \cdot \sum_{cyc}\left(\frac{a}{b^2 + a} + \frac{1}{b + 1} + \frac{c}{b^2 + c}\right)$$</span></p> <p>Furthermore, <span class="math-container">$\dfrac{a}{c^2 + a} + \dfrac{c}{a^2 + c} = 1 - \dfrac{(c + a - 2)ca}{(c^2 + a)(a^2 + c)}$</span>, <span class="math-container">$$\sum_{cyc}\frac{1}{a^2 + 4b^2 + c^2} \le \frac{1}{(a + b + c)^2} \cdot \sum_{cyc}\left[\frac{1}{b + 1} - \frac{(c + a - 2)ca}{(c^2 + a)(a^2 + c)}\right] + \frac{1}{3}$$</span></p> <p>Then I don't know what to do next.</p>
Michael Rozenberg
190,319
<p>We need to prove that <span class="math-container">$$\sum_{cyc}\frac{1}{b^2+c^2+4a^2}\leq\frac{9}{2(a+b+c)^2}$$</span> or <span class="math-container">$$\sum_{cyc}(2a^6-4a^5b-4a^5c+13a^4b^2+13a^4c^2-4a^4bc-12a^3b^3-12a^3b^2c-12a^3c^2b+20a^2b^2c^2)\geq0$$</span> or <span class="math-container">$$\sum_{cyc}(a-b)^2(2c^4+2(a^2-4ab+b^2)c^2+a^4-2a^3b+4a^2b^2-2ab^3+b^4)\geq0$$</span> for which it's enough to prove that: <span class="math-container">$$(a^2-4ab+b^2)^2-2(a^4-2a^3b+4a^2b^2-2ab^3+b^4)\leq0$$</span> or <span class="math-container">$$(a-b)^2(a^2+6ab+b^2)\geq0$$</span> and we are done!</p>
4,008,152
<p>Question itself: Throw a coin one million times. What is the expected number of sequences of six tails, if we <strong>do not allow overlap</strong>?</p> <p>I know when overlap is allowed, the answer is (1,000,000-5)/(2^6). Not sure if we can just do (1,000,000-5)/(2^6) divided by 6 if overlap is not allowed?</p> <p>Some clarifications:</p> <p>For example, if part of the sequence is &quot;one H, nine T, then one H&quot;, we would count 1 sequence of six tails. (When overlap is allowed, we can count three times because each of the first 3 T can start a sequence of six tails; However, this question does not allow overlap, so 9T can only be counted as containing <strong>one</strong> sequence of six tails)</p> <p>If part of the sequence is &quot;one H, thirteen T, then one H&quot;, we would count 2 sequences of six tails.</p>
BruceET
221,800
<p><strong>Comment.</strong> I'm not sure I've got the rules exactly right, but I did some checking with simulation in R. The <code>rle</code> procedure in R (for Run Length Encoding) gives run values (<code>0</code>s for Tails, <code>1</code>s for Heads) and lengths.</p> <p>For example:</p> <pre><code>set.seed(2021) x = rbinom(10, 1, .5); x [1] 0 1 1 0 1 1 1 0 1 1 rle(x) Run Length Encoding lengths: int [1:6] 1 2 1 3 1 2 values : int [1:6] 0 1 0 1 0 1 rle(x)$len [1] 1 2 1 3 1 2 </code></pre> <p>So it is easy to see how many runs of length 6 or greater we get in a particular session of a million tosses of a fair coin. Replicating that 1000 times gives a rough idea of the average number of such runs in a sequence of a million.</p> <pre><code>set.seed(202) run.6 = replicate(1000, sum(rle(rbinom(10^6,1,.5))$len &gt;=6)) mean(run.6) [1] 15629.12 </code></pre> <p>The answer seems agreeably close to your <span class="math-container">$10^6/2^6 \approx 15\,625$</span> runs of length 6 or more. But if we count runs of length 12 or more as two runs of 6 or more, the desired number is a little larger, by about an additional 244 runs. There may not be enough runs of length 18 or more to explore fruitfully by simulation (maybe about 4 more). [See @lulu's Comment.]</p> <p>So roughly, you're going to get something close to <span class="math-container">$15625 + 244 + 4 = 15873$</span> runs of the designated type in a sequence of a million tosses.</p> <pre><code>run.12 = replicate(1000, sum(rle(rbinom(10^6,1,.5))$len &gt;=12)) mean(run.12) [1] 243.97 10^6/2^12 [1] 244.1406 </code></pre> <p>Iterations of 10,000 or 100,000 sequences of a million tosses would give slightly more accuracy, but my purpose here is to give rough estimates for you to compare with your combinatorial results.</p>
18
<p>Some teachers make memorizing formulas, definitions and others things obligatory, and forbid "aids" in any form during tests and exams. Other allow for writing down more complicated expressions, sometimes anything on paper (books, tables, solutions to previously solved problems) and in yet another setting students are expected to take questions home, study the problems in any way they want and then submit solutions a few days later.</p> <p>Naturally, the memory-oriented problem sets are relatively easier (modulo time limit), encourage less understanding and more proficiency (in the sense that the student has to be efficient in his approach). As the mathematics is in big part thinking, I think that it is beneficial to students to let them focus on problem solving rather than recalling and calculating (i.e. designing a solution rather than modifying a known one). There is a huge difference between work in time-constrained environment (e.g. medical teams, lawyers during trials, etc.) where the cost of "external knowledge" is much higher and good memory is essential. However, math is, in general (things like high-frequency trading are only a small part math-related professions), slow.</p> <p>On the other hand, memory-oriented teaching is far from being a relic of the past. Why is this so? As this is a broad topic, I will make it more specific:</p> <p><strong>What are the advantages of memory-oriented teaching?</strong></p> <p><strong>What are the disadvantages of allowing aids during tests/exams?</strong></p>
guest
11,700
<p>For most courses, I feel like you learn the material better if no notes are allowed on the tests. You've really internalized the material. It sticks with you better years down the road (so even if you need to look at a book then, it comes back fast since you really learned it down pat before). And in the near term, it gives you more mastery so the tools can be used very conveniently and quickly in physics, chemistry, engineering, etc. </p> <p>Furthermore, I don't think most math courses really require crib sheets (it's not such a mass of tedium). What math courses require is doing lots of practice problems. When you do lots of practice problems, you're doing much more than memorizing the procedure...getting practice in using it and learning what mistakes to watch out for. So remembering the formula itself is the least of the worries. If you don't remember the formulas, then you have a weak level of knowledge indeed.</p>
743,227
<p>I've this question:</p> <blockquote> <p>Find the area of the intersection between the sphere $x^2 + y^2 + z^2 = 1$ and the cylinder $x^2 + y^2 - y = 0$.</p> </blockquote> <p>Is this second equation even a closed shape? If one were to plot points satisfying that equation, one gets things like $(2, \sqrt{-2})$, $(3, \sqrt{-6})$ and all that.</p> <p><strong>Edit:</strong> I understand the equation for a circle and such, and have (with the help of everyone who answered) found my issue.</p> <p>I was plugging in (whole) numbers that weren't in the codomain of the cylinder, similar to having, say, the equation of a circle $x^2 + y^2 = 16$ and plugging in $25$ for $x$—you will get a complex number for $y$. If one plugs in only numbers not in the domain/codomain, then the equation will not <em>seem</em> like the shape it should be. </p> <p>Sorry for my shortsightedness, and thanks everyone for replying so promptly. :)</p>
MarkisaB
139,345
<p>$$x^2+y^2-y=0$$</p> <p>$$x^2+\left(y^2-2\cdot\frac{1}{2}\,y+\frac{1}{4}\right)-\frac{1}{4}=0$$</p> <p>$$x^2+\left(y-\frac{1}{2}\right)^2=\left(\frac{1}{2}\right)^2$$</p> <p>This is the equation of a cylinder: in the $xy$-plane it is a circle of radius $R=1/2$ centered at $(0,\tfrac{1}{2})$ rather than at the origin, and since $z$ is unconstrained the surface extends in both the $+z$ and $-z$ directions, so this is a cylinder. </p>
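<p>As a quick check of the completed square, the circle of radius $1/2$ centered at $(0,\tfrac12)$ can be parametrized and substituted back into the original equation; a small Python sketch:</p>

```python
import math

# Parametrize x = (1/2)cos t, y = 1/2 + (1/2)sin t, i.e. the circle
# x^2 + (y - 1/2)^2 = (1/2)^2, and verify that x^2 + y^2 - y = 0 holds.
# Since z is unconstrained, the same circle at every height gives a cylinder.
for k in range(360):
    t = math.radians(k)
    x = 0.5 * math.cos(t)
    y = 0.5 + 0.5 * math.sin(t)
    assert abs(x * x + y * y - y) < 1e-12
print("all sample points satisfy x^2 + y^2 - y = 0")
```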
98,700
<blockquote> <p>Suppose you wanted to write the number 100000. If you type it in ASCII, this would take 6 characters (which is 6 bytes). However, if you represent it as unsigned binary, you can write it out using 4 bytes.</p> </blockquote> <p>(from <a href="http://www.cs.umd.edu/class/sum2003/cmsc311/Notes/BitOp/asciiBin.html" rel="nofollow">http://www.cs.umd.edu/class/sum2003/cmsc311/Notes/BitOp/asciiBin.html</a>)</p> <p>My question: $\log_2 100,000 \approx 17$. So that means I need <code>17</code> bits to represent <code>100,000</code> in binary, which requires 3 bytes. So why does it say 4 bytes?</p>
TonyK
1,508
<p>You can, in fact, write it out using three bytes. My current project uses 3-byte integers extensively, to save memory in an embedded system.</p>
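<p>A quick check of the bit arithmetic in Python: 100000 needs 17 bits, which rounds up to three whole bytes.</p>

```python
n = 100_000
bits = n.bit_length()     # number of bits needed: 17
nbytes = (bits + 7) // 8  # round up to whole bytes: 3
print(bits, nbytes, n.to_bytes(nbytes, "big").hex())  # 17 3 0186a0
```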
3,313,697
<p>To calculate the <span class="math-container">$n$</span>-period payment <span class="math-container">$A$</span> on a loan of size <span class="math-container">$P$</span> at an interest rate of <span class="math-container">$r$</span>, the formula is:</p> <p><span class="math-container">$A=\dfrac{Pr(1+r)^n}{(1+r)^n-1}$</span> </p> <p>Source: <a href="https://en.wikipedia.org/wiki/Amortization_calculator#The_formula" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Amortization_calculator#The_formula</a></p> <p>And so the total amount paid over those n-periods is simply:</p> <p><span class="math-container">$n*A=\dfrac{nPr(1+r)^n}{(1+r)^n-1}$</span></p> <p>For example, to fully amortize a 10-year loan of $10,000 with 5.00% annual interest would require annual payments (principal + interest) of:</p> <p><span class="math-container">$A=\dfrac{10000*0.05(1.05)^{10}}{(1.05)^{10}-1}\approx1295$</span> per year</p> <p>And over those 10 years then, the person would have paid a total of: <span class="math-container">$n*A=10*1295=12950$</span>.</p> <p>This is the underlying formula for most "amortizing" loans with <span class="math-container">$n$</span> equal installment payments (e.g. car loans, mortgages, student loans). As principal balance is being paid off over time, the interest payments that are based on that decreasing principal balance are decreasing too -- allowing more of the fixed <span class="math-container">$n$</span>-period payment <span class="math-container">$A$</span> to go toward paying off principal. In the end it all balances out (i.e. the increasing portion of <span class="math-container">$A$</span> going toward paying principal offsets the decreasing portion of <span class="math-container">$A$</span> going toward interest payments).</p> <p>Investing on the other hand works differently with the idea of "compound interest" being earned. 
The total amount <span class="math-container">$B$</span> you will have after investing <span class="math-container">$P$</span> at rate <span class="math-container">$r$</span> over <span class="math-container">$n$</span> periods is simply:</p> <p><span class="math-container">$B=P(1+i)^n$</span></p> <p>For instance, if one invests $10,000 at 5.00%/year for 10 years, the compound interest results in:</p> <p><span class="math-container">$B=P(1+i)^n=10000*1.05^{10}=16289$</span>.</p> <p>Comparing investing rate (<span class="math-container">$i$</span>) to borrowing rate (<span class="math-container">$r$</span>), the break-even analysis for <span class="math-container">$B=nA$</span> should result in <span class="math-container">$0&lt;i&lt;r$</span>.</p> <p>Computing this explicitly, assume <span class="math-container">$B=nA$</span>:</p> <p><span class="math-container">$B=nA$</span> </p> <p><span class="math-container">$P(1+i)^n=\dfrac{nPr(1+r)^n}{(1+r)^n-1}$</span> </p> <p><span class="math-container">$(1+i)^n=\dfrac{nr(1+r)^n}{(1+r)^n-1}$</span> </p> <p><span class="math-container">$i=\bigg(\dfrac{nr(1+r)^n}{(1+r)^n-1}\bigg)^{(\frac{1}{n})}-1$</span> </p> <p>Thus <span class="math-container">$0&lt;i&lt;r$</span> (I couldn't come up with a more simplified formula above, sorry, but the graph plot checks out). </p> <p>Using the example above borrowing at <span class="math-container">$r=5\%$</span>, if we invest at <span class="math-container">$i\approx2.619\%$</span> then <span class="math-container">$nA=B$</span>. Notice how much smaller <span class="math-container">$i$</span> is than <span class="math-container">$r$</span> to simply break even... 
amazing!</p> <p>In fact, for typical <span class="math-container">$r$</span> like what we would see for common long-term loans, say <span class="math-container">$2\%&lt;r&lt;8\%$</span>, the formula is approximately:</p> <p><span class="math-container">$i\approx\dfrac{r}{2}+0.1\%$</span> (where <span class="math-container">$2\%&lt;i&lt;r&lt;8\%$</span>) (based on regression approximation)</p> <blockquote> <blockquote> <p>Question: Is this true or not? So many people have told me "Only say yes to an X% loan if you think you can beat that same X% investing in the market!" This math makes it seem like actually, you should say "Yes" to loans at X% rates if you can simply beat at least <em>half</em> of that rate investing in the market over the same period.</p> </blockquote> </blockquote>
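<p>A quick numerical check of the break-even computation above (a sketch; the function name is my own choice):</p>

```python
def breakeven_rate(r, n):
    """Investment rate i with P*(1+i)**n equal to n*A, the total paid
    on an n-period amortizing loan at rate r (the principal P cancels)."""
    g = (1 + r) ** n
    total_over_principal = n * r * g / (g - 1)  # n*A / P
    return total_over_principal ** (1.0 / n) - 1

i = breakeven_rate(0.05, 10)
print(i)                 # ~0.02619, as in the example
print(0.05 / 2 + 0.001)  # rule-of-thumb approximation r/2 + 0.1%
```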
Claude Leibovici
82,404
<p>Your approximation is not bad at all.</p> <p>We can make it a bit better considering (as you wrote) <span class="math-container">$$i=\bigg(\dfrac{nr(1+r)^n}{(1+r)^n-1}\bigg)^{\frac{1}{n}}-1$$</span> Expand the rhs as a Taylor series around <span class="math-container">$r=0$</span>. This would give <span class="math-container">$$i=\frac{(n+1) r}{2 n}\left(1-\frac{(n-1) (n+3) r}{12 n}+O\left(r^2\right) \right)$$</span></p> <p>For the example <span class="math-container">$n=10$</span>, <span class="math-container">$r=\frac 5 {100}$</span>, this would give <span class="math-container">$i=\frac{8371}{320000}\approx 0.0261594$</span> while the exact value should be <span class="math-container">$0.0261917$</span>. </p> <p>If you want more accuracy, use <span class="math-container">$$i=\frac{(n+1) r}{2 n}\left(1-\frac{(n-1) (n+3) }{12 n}r+\frac{(n-1)(n^2+2n-1)}{24n^2}r^2+O\left(r^3\right) \right)$$</span> For the example, this would give <span class="math-container">$\frac{3352327}{128000000}\approx 0.0261901$</span></p> <p><strong>Edit</strong></p> <p>Computing exactly <span class="math-container">$i$</span> for given <span class="math-container">$r$</span> and <span class="math-container">$n$</span> does not present any problem using a pocket calculator. Computing <span class="math-container">$r$</span> for given <span class="math-container">$i$</span> and <span class="math-container">$n$</span> is a very different story and it would require numerical methods.</p> <p>However, we can get a good approximation using the above Taylor series and series reversion. 
This process would give <span class="math-container">$$r=x+\frac{(n-1) (n+3) x^2}{12 n}+\frac{(n-1) (n+2) \left(n^2-3\right) x^3}{72 n^2}+O\left(x^4\right)$$</span> where <span class="math-container">$x=\frac{2 n}{n+1}i$</span>.</p> <p>Using <span class="math-container">$n=10$</span> and <span class="math-container">$i=\frac{25}{1000}$</span> would lead to <span class="math-container">$r=\frac{101381}{2129600}\approx 0.0476057$</span> while the exact calculation would give <span class="math-container">$r=0.0476143$</span></p>
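<p>A short numerical check of the expansion: the exact value against the two-term approximation from above.</p>

```python
def i_exact(r, n):
    """Exact break-even investment rate, as in the question."""
    g = (1 + r) ** n
    return (n * r * g / (g - 1)) ** (1.0 / n) - 1

def i_taylor(r, n):
    # Two-term Taylor expansion in r from the answer above.
    return (n + 1) * r / (2 * n) * (1 - (n - 1) * (n + 3) * r / (12 * n))

print(i_exact(0.05, 10))   # ~0.0261917
print(i_taylor(0.05, 10))  # ~0.0261594 (= 8371/320000)
```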
37,252
<p>Let $V$ be a vector space with a finite Dimension above $\mathbb{C}$ or $\mathbb{R}$.</p> <p>How does one prove that if $\langle\cdot,\cdot\rangle_{1}$ and $\langle \cdot, \cdot \rangle_{2}$ are two Inner products</p> <p>and for every $v\in V$ $\langle v,v\rangle_{1}$ = $\langle v,v\rangle_{2}$ so $\langle\cdot,\cdot \rangle_{1} = \langle\cdot,\cdot \rangle_{2}$</p> <p>The idea is clear to me, I just can't understand how to formalize it.</p> <p>Thank you.</p>
t.b.
5,363
<p><strong>Hint:</strong> Note that the associated norms satisfy $\|v\|_1 = \sqrt{\langle v,v\rangle_1} = \sqrt{\langle v,v\rangle_2} = \|v\|_2$ and then use the <a href="http://en.wikipedia.org/wiki/Polarization_identity" rel="nofollow">polarization identity</a> to recover the scalar products and see that they are equal.</p>
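<p>In the real case, the identity in question is $\langle v,w\rangle = \tfrac14\left(\|v+w\|^2-\|v-w\|^2\right)$; a quick numerical illustration:</p>

```python
import random

def dot(u, v):
    """Standard real inner product on R^4 (any real inner product behaves the same)."""
    return sum(x * y for x, y in zip(u, v))

random.seed(1)
v = [random.uniform(-1, 1) for _ in range(4)]
w = [random.uniform(-1, 1) for _ in range(4)]
plus = [x + y for x, y in zip(v, w)]
minus = [x - y for x, y in zip(v, w)]
# Real polarization: <v,w> = (||v+w||^2 - ||v-w||^2) / 4,
# so the inner product is completely determined by its norm.
assert abs(dot(v, w) - (dot(plus, plus) - dot(minus, minus)) / 4) < 1e-12
print("polarization identity verified on a random pair")
```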
37,252
<p>Let $V$ be a vector space with a finite Dimension above $\mathbb{C}$ or $\mathbb{R}$.</p> <p>How does one prove that if $\langle\cdot,\cdot\rangle_{1}$ and $\langle \cdot, \cdot \rangle_{2}$ are two Inner products</p> <p>and for every $v\in V$ $\langle v,v\rangle_{1}$ = $\langle v,v\rangle_{2}$ so $\langle\cdot,\cdot \rangle_{1} = \langle\cdot,\cdot \rangle_{2}$</p> <p>The idea is clear to me, I just can't understand how to formalize it.</p> <p>Thank you.</p>
yoyo
6,925
<p>(symmetric, bilinear, char $\neq2$) $$\langle v+w,v+w\rangle_1=\langle v+w,v+w\rangle_2$$ $$\langle v,v\rangle_1+\langle w,w\rangle_1+2\langle v,w\rangle_1=\langle v,v\rangle_2+\langle w,w\rangle_2+2\langle v,w\rangle_2$$ $$\langle v,w\rangle_1=\langle v,w\rangle_2$$ if the product is sesquilinear over $\mathbb{C}$ (linear in first coord, conjugate linear in the second) the above gives $$ \langle v,w\rangle_1+\overline{\langle v,w\rangle}_1=\langle v,w\rangle_2+\overline{\langle v,w\rangle}_2 $$ ie $$ {\rm Re}\langle v,w\rangle_1={\rm Re}\langle v,w\rangle_2. $$ we can equate the imaginary parts by noting that $${\rm Im}\langle v,w\rangle={\rm Re}\langle -iv,w\rangle$$</p>
334,701
<p>The question is really simple, it's just terminology.</p> <p>For simplicity we work on smooth algebraic surfaces and we consider the intersection form on curves on the surface.</p> <p>So let $S$ be a surface and $D \in \operatorname{Pic}(S)$ a divisor. Then $D$ is said to be nef if $$ D.C \geq 0 $$ for all curves $C$ on $S$ (wasn't it Miles Reid who introduced this term?).</p> <p>See <a href="http://en.wikipedia.org/wiki/Nef_line_bundle">http://en.wikipedia.org/wiki/Nef_line_bundle</a>.</p> <p>My question is that with this definition, an effective curve need not be nef! This happens exactly when it has negative self-intersection, as is very well possible. But doesn't nef stand for numerically effective? That suggests that it is weaker than effective.</p> <p>Please enlighten me, I guess a single line answer will do. Thanks!</p>
Rhys
47,565
<p>Everything you have written is correct; the terminology <em>is</em> slightly confusing at first, because as you said, not all effective divisors are numerically effective.</p>
2,426,361
<p>What would be the best mathematical tool/concept to measure how far a matrix is from being singular? Could it be the condition number?</p>
Roger
581,343
<p>A matrix gets the rank it deserves. Technically only a square matrix can be nonsingular, but any m by n matrix, m>0 and n>0, can have a rank greater than zero if at least one entry is nonzero.</p> <p>I interpret the question as, "How far is a matrix from losing rank?"</p> <p>Gaussian elimination with complete pivoting (GECP) is used to reliably factor an m by n matrix that may lack full row and/or column rank. Matrix factorization must stop when the residual matrix is full of zeros, because there is nothing left to factor. But with limited precision, what constitutes "zero"? It is unlikely that the residual entries will all be exactly zero.</p> <p>Thus, some metric must be used to decide when the remaining residual matrix is or is not "zero". If the decision is that the remaining residual matrix is not zero, the matrix will have a rank at least one unit higher, and the next pivotal element will be the absolutely largest entry of the residual matrix. Obviously that pivotal element will be small if all entries in the residual matrix were small.</p> <p>Thus, a good measure of how close or far a matrix is from being singular would be to compare the size of the smallest pivotal element to the largest pivotal element, and to compare the size of the smallest pivotal element to zero. For an n by n matrix, if p_1 and p_n represent the 1st and nth pivots, comparing p_n/p_1 to zero may give a good estimate of how close the matrix is to being singular. If p_n/p_1 is substantially away from the roundoff level, the matrix may be considered safely away from being singular, but when the ratio is near the roundoff level, the matrix may be considered nearly singular.</p>
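<p>A minimal sketch of this pivot-ratio idea in Python (the function name, tolerance rule, and return convention are my own, not a standard API):</p>

```python
def pivot_ratio(A, tol=1e-12):
    """Gaussian elimination with complete pivoting (sketch).

    Returns (rank, |p_last| / |p_first|); a tiny ratio flags near-singularity."""
    R = [row[:] for row in A]
    pivots = []
    while R and R[0]:
        # Locate the absolutely largest entry of the residual matrix.
        i, j = max(((k, l) for k in range(len(R)) for l in range(len(R[0]))),
                   key=lambda kl: abs(R[kl[0]][kl[1]]))
        p = R[i][j]
        if abs(p) <= tol * (abs(pivots[0]) if pivots else 1.0):
            break  # residual matrix is numerically zero: rank found
        pivots.append(p)
        # One elimination (Schur complement) step, then drop row i and column j.
        R = [[R[k][l] - R[k][j] * R[i][l] / p
              for l in range(len(R[0])) if l != j]
             for k in range(len(R)) if k != i]
    return len(pivots), (abs(pivots[-1]) / abs(pivots[0]) if pivots else 0.0)

print(pivot_ratio([[1.0, 0.0], [0.0, 1.0]]))     # (2, 1.0): far from singular
print(pivot_ratio([[1.0, 2.0], [2.0, 4.0001]]))  # (2, ~6e-06): nearly singular
```

<p>For a well-conditioned matrix the ratio stays near 1; for a nearly rank-deficient matrix the last pivot, and hence the ratio, collapses toward the roundoff level.</p>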
3,443,137
<p>Find the radius of the circle tangent to <span class="math-container">$3$</span> given circles <span class="math-container">$O_1$</span>, <span class="math-container">$O_2$</span> and <span class="math-container">$O_3$</span>, which have radii <span class="math-container">$a$</span>, <span class="math-container">$b$</span> and <span class="math-container">$c$</span>.</p> <p>The Wikipedia page about the Problem of Apollonius can be found <a href="https://en.wikipedia.org/wiki/Problem_of_Apollonius" rel="nofollow noreferrer">here</a>.</p> <p>I can't provide you an exact image because the construction is too complex. What I have done is draw tons of lines in an attempt to use Pythagoras' theorem and trigonometry to solve it.</p>
MvG
35,416
<p>If the given circles are tangent to one another, use <a href="https://en.wikipedia.org/wiki/Descartes%27_theorem" rel="nofollow noreferrer">Descartes' theorem</a>.</p> <p>Otherwise you will need more information, namely not just the radii but also the distances between the given circles, and following one of the methods from the article on Apollonian circles is likely your best bet. You should expand your question to indicate what parameters you use to describe the configuration.</p> <p>Personally I'd follow the representation using Lie geometry, then extract an expression for the radius from that. But I'd expect it to be a fairly huge expression.</p>
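<p>For the mutually tangent case mentioned above, Descartes' theorem is short enough to sketch directly (sign convention: the $+$ root below gives the small circle nestled between the three; the $-$ root would give the enclosing circle):</p>

```python
import math

def inner_tangent_radius(a, b, c):
    """Radius of the circle tangent to three mutually tangent circles of radii
    a, b, c. Descartes: k4 = k1 + k2 + k3 + 2*sqrt(k1*k2 + k2*k3 + k3*k1),
    where k_i = 1/r_i are the curvatures."""
    k1, k2, k3 = 1 / a, 1 / b, 1 / c
    k4 = k1 + k2 + k3 + 2 * math.sqrt(k1 * k2 + k2 * k3 + k3 * k1)
    return 1 / k4

print(inner_tangent_radius(1, 1, 1))  # 1/(3 + 2*sqrt(3)) = 2/sqrt(3) - 1 ≈ 0.1547
```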
3,501,879
<p>I have been stuck at this problem for some time now. I'd really appreciate your help. Thanks.</p> <p><span class="math-container">$$2\sin^2(x)+6\cos^2(\frac x4)=5-2k$$</span></p>
gt6989b
16,192
<p><strong>HINT</strong></p> <p>If you are trying to solve for <span class="math-container">$k$</span> in terms of <span class="math-container">$x$</span>, why not use the fact that <span class="math-container">$\sin^2 a + \cos^2 a = 1$</span> for all real <span class="math-container">$a$</span>, and eliminate <span class="math-container">$\sin^2$</span> on the LHS, and then use <span class="math-container">$\arccos$</span> to do what you need?</p>
4,595,208
<p>I'm having trouble calculating this limit directly:</p> <p><span class="math-container">$$\lim_{n\to\infty}\frac{(2n+1)(2n+3)\cdots(4n+1)}{(2n)(2n+2)\cdots(4n)}$$</span></p> <p>It can be calculated using the inventory method and the result is:<br /> <span class="math-container">$$\lim_{n\to\infty}\frac{(2n+1)(2n+3)\cdots(4n+1)}{(2n)(2n+2)\cdots(4n)} =\sqrt {2}$$</span></p> <p>But that method is not elegant; I need help doing this in a direct way.</p> <p>This is the first time I am asking a question on this site. I apologize if it is not a good question.</p>
Veliko
172,553
<p>You have to split the problem into three scenarios:</p> <ol> <li><span class="math-container">$\alpha&gt;\pi/2$</span></li> <li><span class="math-container">$\alpha=\pi/2$</span> (we have parallel lines, it does not work)</li> <li><span class="math-container">$\alpha&lt;\pi/2$</span></li> </ol> <ul> <li><p>You covered case 1 correctly with one exception. In this case there are values of <span class="math-container">$\beta$</span> for which the light never touches the second mirror: this happens if <span class="math-container">$\alpha+\beta &lt; \pi$</span>. So in this case the answer is conditional: <span class="math-container">$\alpha=3\pi/4$</span> but only if <span class="math-container">$\beta &lt; \pi/4$</span>.</p> </li> <li><p>Case 3 is more straightforward: if you draw the picture you get a triangle above the mirror which needs to be a right triangle, which quickly leads to the equation <span class="math-container">$$ (\pi-2\beta)+(2\beta+2\alpha-\pi) + \pi/2 = \pi $$</span> yielding <span class="math-container">$\alpha=\pi/4$</span>, regardless of <span class="math-container">$\beta$</span>.</p> </li> </ul>
19,815
<p>Problem:</p> <blockquote> <p>Prove that if gcd( a, b ) = 1, then gcd( a - b, a + b ) is either 1 or 2.</p> </blockquote> <p>From Bezout's Theorem, I see that am + bn = 1, and a, b are relative primes. However, I could not find a way to link this idea to a - b and a + b. I realized that in order to have gcd( a, b ) = 1, they must not be both even. I played around with some examples (13, 17), ...and I saw it's actually true :( ! Any idea?</p>
NebulousReveal
2,548
<p>Note that $d|(a-b)$ and $d|(a+b)$ where $d = \gcd(a-b, a+b)$. So $d$ divides their sum and difference (i.e. $2a$ and $2b$). Hence $d \mid \gcd(2a, 2b) = 2\gcd(a,b) = 2$, so $d$ is either $1$ or $2$.</p>
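<p>The conclusion can also be checked by brute force over small coprime pairs (a quick sketch):</p>

```python
from math import gcd

# If gcd(a, b) = 1, then gcd(a - b, a + b) divides 2*gcd(a, b) = 2,
# so it must be 1 or 2.
assert all(gcd(abs(a - b), a + b) in (1, 2)
           for a in range(1, 60)
           for b in range(1, 60)
           if gcd(a, b) == 1)
print("verified for all coprime pairs with 1 <= a, b < 60")
```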
1,375,085
<p>It is the first time I met such a question:</p> <blockquote> <p>Which is greater as $n$ gets larger, $f(n)=2^{2^{2^n}}$ or $g(n)=100^{100^n}$?</p> </blockquote> <p>Intuitively I think $f(n)$ would gradually become larger as $n$ gets larger, but I find it hard to produce an argument. Is there any trick to use for this type of question?</p>
Eoin
163,691
<p>Take logs: three of them tame $f$, two tame $g$.</p> <p>$$\log\log\log(f)=\log\log\log(2^{2^{2^n}})=\log\!\left(2^n\log 2+\log\log 2\right)=n\log 2+\log\log 2+o(1)$$ $$\log\log(g)=\log\log(100^{100^n})=n\log (100)+\log\log (100)$$</p> <p>Both expressions are linear in $n$, but note that $f$ needed <em>three</em> logarithms to become linear while $g$ needed only <em>two</em>. Exponentiating once, $\log\log f$ grows exponentially in $n$ while $\log\log g$ grows only linearly; exponentiating twice more, $f$ eventually dwarfs $g$.</p>
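<p>These iterated logarithms can be computed directly without ever forming the astronomically large numbers themselves (natural logs throughout; a sketch):</p>

```python
import math

def lll_f(n):
    """log log log of f(n) = 2^(2^(2^n)):
    log f = 2^(2^n) * log 2, so log log f = 2^n * log 2 + log log 2."""
    return math.log(2.0 ** n * math.log(2) + math.log(math.log(2)))

def ll_g(n):
    """log log of g(n) = 100^(100^n): log g = 100^n * log 100."""
    return n * math.log(100) + math.log(math.log(100))

for n in (5, 10, 20, 40):
    print(n, round(lll_f(n), 3), round(ll_g(n), 3))
```

<p>Both columns grow linearly in $n$, but it took three logarithms to flatten $f$ and only two to flatten $g$, so $f$ eventually dominates.</p>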
2,699,621
<p>To show $1 + \frac12 x - \frac18 x^2 &lt; \sqrt{1+x}$, is it enough to say that the Taylor series expansion of $\sqrt{1+x}$ around $0$ has additional positive terms?</p>
Jack D'Aurizio
44,121
<p>The Maclaurin series of $\sqrt{1+x}$ is only convergent in a neighbourhood of the origin (due to the singularity at $x=-1$) hence it is not a good idea to use it for proving such inequality. Basic algebra performs much better: it is obvious that for $x&gt;0$ we have $1+\frac{x}{2}&gt;\sqrt{1+x}$ (it is enough to square both sides), and</p> <p>$$0&lt;1+\frac{x}{2}-\sqrt{1+x}=\frac{\left(1+\frac{x}{2}\right)^2-(1+x)}{1+\frac{x}{2}+\sqrt{1+x}}=\frac{x^2}{4\left(1+\frac{x}{2}+\sqrt{1+x}\right)} &lt;\frac{x^2}{4(1+1)}=\frac{x^2}{8}.$$</p>
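<p>A quick numerical confirmation of the final inequality $1+\frac x2-\frac{x^2}8&lt;\sqrt{1+x}$ for $x&gt;0$ (a sketch, not a substitute for the algebra above):</p>

```python
import math

def gap(x):
    """sqrt(1+x) minus the quadratic lower bound; positive for x > 0."""
    return math.sqrt(1 + x) - (1 + x / 2 - x * x / 8)

# Check on a grid of x in (0, 100); the theoretical gap near 0 is ~ x^3/16.
assert all(gap(k / 100) > 0 for k in range(1, 10_000))
print(gap(0.1), gap(1.0), gap(10.0))
```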
79,337
<p>Let $V$ be a vector space over $\mathbb C$ and $W$ a $\operatorname{End}(V)$-module. I'm having difficulty seeing why the map $$ \operatorname{Hom}_{\operatorname{End}(V)}(V,W) \otimes V \to W $$ $$ \phi \otimes v \mapsto \phi(v) $$ is surjective. I'm having trouble because I can't construct elements of $\operatorname{Hom}_{\operatorname{End}(V)} (V,W)$. It seems hard because I know that the above map is injective which means showing surjectiveness is not as simple as showing that for any $w \in W$ there exists a module map and $v \in V$ such that $\phi(v) = w$.</p> <p>Thanks!</p>
bradhd
5,116
<p>In the case that $V$ and $W$ are finite-dimensional, here is a "non-intrinsic" way to see this isomorphism. If $V$ is finite-dimensional, then every $\operatorname{End}(V)$-module is semisimple, and moreover every simple $\operatorname{End}(V)$-module is isomorphic to $V$:</p> <p>Choose a basis $\{e_i\}$ for $V$ and define $P_i(e_j)=\delta_{ij}e_i$, so $P_1,\ldots,P_n\in\operatorname{End}(V)$ with $P_iP_j=\delta_{ij}P_i$ and $\sum_iP_i=1$. Each of the left modules $\operatorname{End}(V)\cdot P_i$ is then simple and isomorphic to $V$, via the map $A\cdot P_i\mapsto Ae_i$. This is easier to see if you think of this choice of basis as giving an isomorphism between $\operatorname{End}(V)$ and the ring of $n\times n$ matrices over your field, in which case $\operatorname{End}(V)\cdot P_i$ is the submodule of matrices whose only nonzero column is the $i$th one.</p> <p>Let $M$ be a simple $\operatorname{End}(V)$-module and let $x\in M\setminus\{0\}$. Since $\sum_iP_i=1$, there must be some $i$ for which $P_ix\neq 0$, hence $M=\operatorname{End}(V)\cdot P_ix$. The map of modules $\operatorname{End}(V)\cdot P_i\to M$ sending $aP_i$ to $aP_ix$ is then an isomorphism, by Schur's lemma.</p> <p>Combining this with semisimplicity, if $W$ is finite-dimensional then it is isomorphic as an $\operatorname{End}(V)$-module to some direct sum $V^{\oplus n}$.</p> <p>If we unravel the chain of isomorphisms $\operatorname{Hom}_{\operatorname{End}(V)}(V,V^{\oplus n})\otimes V\cong \operatorname{Hom}_{\operatorname{End}(V)}(V,V)^{\oplus n}\otimes V\cong (\operatorname{Hom}_{\operatorname{End}(V)}(V,V)\otimes V)^{\oplus n}\cong (\mathbb{C}\otimes V)^{\oplus n}\cong V^{\oplus n}$, we see that it is the same as the evaluation map you define.</p>
79,337
<p>Let $V$ be a vector space over $\mathbb C$ and $W$ a $\operatorname{End}(V)$-module. I'm having difficulty seeing why the map $$ \operatorname{Hom}_{\operatorname{End}(V)}(V,W) \otimes V \to W $$ $$ \phi \otimes v \mapsto \phi(v) $$ is surjective. I'm having trouble because I can't construct elements of $\operatorname{Hom}_{\operatorname{End}(V)} (V,W)$. It seems hard because I know that the above map is injective which means showing surjectiveness is not as simple as showing that for any $w \in W$ there exists a module map and $v \in V$ such that $\phi(v) = w$.</p> <p>Thanks!</p>
Eric O. Korman
9
<p>I just thought of the following argument. Suppose first that $W$ is irreducible. Then the map I gave is a map of $\operatorname{End}(V)$ mods (with the trivial action on $Hom_{\operatorname{End}(V)}(V,W)$) and so must be an isomorphism by Schur's lemma. In the general case we'll have $W = \bigoplus_i W_i$ with $W_i$ irreducible. Then $$ \operatorname{Hom_{\operatorname{End}(V)}} (V, W) \otimes V \simeq \bigoplus \left(\operatorname{Hom_{\operatorname{End}(V)}} (V, W_i) \otimes V\right) \simeq \bigoplus W_i \simeq W. $$</p>
697,668
<p>How many combinations are there to arrange the letters in MISSISSIPPI requiring that no two S's are adjacent? </p> <p>I found there are 34650 combinations to arrange without restriction. </p> <p>How to approach this question?</p>
Daniel Li
294,291
<p>Find all possible ways we can arrange MIIIPPI: 7!/(4!*2!) = 105. Then "insert" the four S's into the 8 gaps around each possible arrangement of MIIIPPI, at most one S per gap so that no two S's end up adjacent, e.g. [1]M[2]I[3]I[4]I[5]P[6]P[7]I[8]. This is really <span class="math-container">${8}\choose{4}$</span>, i.e. <span class="math-container">$70$</span>. Multiplying the two gives <span class="math-container">$105 \times 70 = 7350$</span>.</p>
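<p>As a quick sanity check (my addition, not part of the original answer), the unrestricted count, both factors, and the final product can be confirmed by brute force in Python; the gap choices correspond exactly to the 4-element sets of the 11 positions with no two adjacent:</p>

```python
from itertools import combinations
from math import comb, factorial

# Unrestricted count, matching the 34650 quoted in the question:
assert factorial(11) // (factorial(4) ** 2 * factorial(2)) == 34650

# Brute-force the gap argument: 4-element sets of the 11 positions
# with no two adjacent correspond exactly to the C(8,4) gap choices.
no_adjacent = sum(
    1
    for pos in combinations(range(11), 4)
    if all(b - a >= 2 for a, b in zip(pos, pos[1:]))
)
assert no_adjacent == comb(8, 4) == 70

# Arrangements of the remaining letters M, I, I, I, I, P, P:
rest = factorial(7) // (factorial(4) * factorial(2))
assert rest == 105

print(no_adjacent * rest)  # 7350
```

<p>So there are 7350 such arrangements out of the 34650 unrestricted ones.</p>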
232,930
<p>Let $f(n)$ denote the number of integer solutions of the equation $$3x^2+2xy+3y^2=n $$</p> <p>How can one evaluate the limit $$\lim_{n\rightarrow\infty}\frac{f(1)+...f(n)}{n}$$</p> <p>Thanks</p>
Sangchul Lee
9,340
<p>Let us elaborate the solution by Thomas Andrews. Though the idea is simple and repeatedly used in other answers, here we want to give a meticulously detailed solution.</p> <p>As we have observed, the number $f(1) + \cdots + f(n)$ counts the nonzero integer points $(x, y) \in \Bbb{Z}^2$ satisfying the inequality</p> <p>$$ 3x^2 + 2xy + 3y^2 \leq n. \tag{1}$$</p> <p>Now let </p> <p>$$S_n = \{ (x, y) \in \Bbb{Z}^2 : (x, y) \text{ satisfies } (1) \}$$</p> <p>be the set of integer solutions $(x, y)$ of $(1)$, and let</p> <p>$$E_n = \{ (x, y) \in \Bbb{R}^2 : (x, y) \text{ satisfies } (1) \}$$</p> <p>be the set of points $(x, y) \in \Bbb{R}^2$ satisfying $(1)$. Plainly, $S_n$ is the set of integer points contained in $E_n$, and in this notation we have</p> <p>$$ \#(S_n) = f(1) + \cdots + f(n) + 1, $$</p> <p>where $\#$ denotes the cardinality of a set and the extra $1$ accounts for the origin; this additive constant does not affect the limit. Now for each integer point $p = (x, y) \in \Bbb{Z}^2$, we associate a unit square</p> <p>$$C(p) = C(x, y) = \left[x-\tfrac{1}{2}, x+\tfrac{1}{2} \right] \times \left[y-\tfrac{1}{2}, y+\tfrac{1}{2} \right]$$</p> <p>centered at $p = (x, y)$, and we define</p> <p>$$ A_n = \{ (x, y) \in \Bbb{Z}^2 : C(x, y) \subset E_n \}$$</p> <p>and</p> <p>$$ B_n = \{ (x, y) \in \Bbb{Z}^2 : C(x, y) \cap E_n \neq \varnothing \}.$$</p> <p>Then it is plain to observe that</p> <p>$$ A_n \subset S_n \subset B_n.$$</p> <p>Also, by definition it is clear that we have</p> <p>$$ \# (A_n) = \mathrm{Area}\Bigg( \bigcup_{p \in A_n} C(p) \Bigg) \leq \mathrm{Area}(E_n) \leq \mathrm{Area}\Bigg( \bigcup_{p \in B_n} C(p) \Bigg) = \# (B_n). \tag{2}$$</p> <p>Now let us digress to a rudimentary real analysis and prove some basic facts. Let $(M, d)$ be a metric space. 
For $p \in M$ and $\varnothing \neq A \subset M$, consider the distance from $p$ to $A$</p> <p>$$ \mathrm{dist}(p, A) = \inf \{ d(p, q) : q \in A \} $$</p> <p>and correspondingly the $\epsilon$-neighborhood $N_{\epsilon}(A)$ of $A$ defined by</p> <p>$$ N_{\epsilon}(A) = \{ q \in M : \mathrm{dist}(q, A) &lt; \epsilon \}.$$</p> <p>Then we have the following proposition.</p> <blockquote> <p><strong>Proposition.</strong> If $F$ is closed, $ \bigcap_{\epsilon &gt; 0} N_{\epsilon}(F) = F$.</p> <p><em>Proof.</em> Let $F' = \bigcap_{\epsilon &gt; 0} N_{\epsilon}(F)$. Since $F \subset N_{\epsilon}(F)$ for each $\epsilon &gt; 0$, we have $F \subset F'$. To show the converse, let $p \in F'$. Then for each $n$ we have $p \in N_{1/n}(F)$ and hence there exists $q_n \in F$ such that $d(p, q_n) &lt; \frac{2}{n}$. This shows that $q_n \to p$. Since $F$ is closed, the limit must lie in $F$. Therefore $p \in F$ and hence $F' = F$. ////</p> </blockquote> <p>Now let us return to the original problem. For each $p \in B_n \setminus A_n$, the square $C(p)$ meets both $E_n$ and its complement, hence $C(p) \cap \partial E_n \neq \varnothing$. Since $C(p)$ has diameter $\sqrt{2} &lt; 2$, this implies that</p> <p>$$ C(p) \subset N_{2}(\partial E_n). 
$$</p> <p>But by the scaling and the homogeneity of the polynomial $3x^2 + 2xy + 3y^2$, clearly we have</p> <p>$$E_n = \sqrt{n} E_1 = \{ (\sqrt{n}x, \sqrt{n}y) : (x, y) \in E_1 \} $$</p> <p>and accordingly </p> <p>$$ N_{2}(\partial E_n) = \sqrt{n} N_{2/\sqrt{n}}(\partial E_1) $$</p> <p>This shows that</p> <p>$$ \begin{align*} \# (B_n \setminus A_n) &amp; = \mathrm{Area} \Bigg( \bigcup_{p \in B_n \setminus A_n} C(p) \Bigg) \leq \mathrm{Area} \big(N_{2}(\partial E_n)\big) \\ &amp; = \mathrm{Area} \big(\sqrt{n} N_{2/\sqrt{n}}(\partial E_1)\big) = n \mathrm{Area} \big(N_{2/\sqrt{n}}(\partial E_1)\big) \tag{3} \end{align*}$$</p> <p>Now we are ready.</p> <ul> <li><p>From $(3)$, we have $$ 0 \leq \frac{\#(B_n) - \#(A_n)}{n} \leq \mathrm{Area} \big(N_{2/\sqrt{n}}(\partial E_1)\big), $$ which goes to zero as $n \to \infty$ by the continuity from above of the Lebesgue measure together with the fact that $\mathrm{Area}(\partial E_1) = 0$. Here we used the proposition above.</p></li> <li><p>Thus from $(2)$, we have $$ \begin{align*} \mathrm{Area}(E_1) = \frac{\mathrm{Area}(E_n)}{n} &amp;\leq \frac{\#(B_n)}{n} \leq \frac{\#(A_n)}{n} + \frac{\#(B_n) - \#(A_n)}{n} \\ &amp;\leq \frac{\mathrm{Area}(E_n)}{n} + o(1) = \mathrm{Area}(E_1) + o(1) \end{align*} $$ and therefore $$ \lim_{n\to\infty} \frac{\#(A_n)}{n} = \lim_{n\to\infty} \frac{\#(S_n)}{n} = \lim_{n\to\infty} \frac{\#(B_n)}{n} = \mathrm{Area}(E_1). $$</p></li> </ul> <p>Now it remains to evaluate the area of the ellipse $E_1$. Let</p> <p>$$ A = \begin{pmatrix} 3 &amp; 1 \\ 1 &amp; 3 \end{pmatrix}$$</p> <p>so that $ \left&lt; \mathrm{x}, A\mathrm{x} \right&gt; = 3x^2 + 2xy + 3y^2$ for $\mathrm{x} = (x, y)$. Then by the spectral theorem for symmetric matrices, there exists a rotation matrix $R$ such that $\left&lt; \mathrm{x}, A\mathrm{x} \right&gt; = \left&lt; R\mathrm{x}, DR\mathrm{x} \right&gt;$ for $D = \mathrm{diag}(\lambda_1, \lambda_2)$, where $\lambda_1, \lambda_2$ are the eigenvalues of $A$. Then in the new coordinates $(X, Y) = R\mathrm{x}$, the region $E_1$ is described by $\lambda_1 X^2 + \lambda_2 Y^2 \leq 1$. 
Therefore the area of the ellipse $E_1$ is given by </p> <p>$$ \mathrm{Area}(E_1) = \frac{\pi}{\sqrt{\lambda_1 \lambda_2}} = \frac{\pi}{\sqrt{\det A}} = \frac{\pi}{2\sqrt{2}}. $$</p>
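<p>For what it's worth, the limit can also be observed numerically (my addition, a rough check rather than part of the proof): counting the lattice points of $E_n$ directly for a moderate $n$ already gives a ratio close to $\pi/(2\sqrt2)\approx 1.1107$.</p>

```python
from math import isqrt, pi, sqrt

def lattice_count(n: int) -> int:
    """Count integer points (x, y) with 3x^2 + 2xy + 3y^2 <= n."""
    # Since 3x^2 + 2xy + 3y^2 - 2(x^2 + y^2) = (x + y)^2 >= 0, any solution
    # satisfies |x|, |y| <= sqrt(n/2), so a finite scan suffices.
    r = isqrt(n // 2) + 1
    return sum(
        1
        for x in range(-r, r + 1)
        for y in range(-r, r + 1)
        if 3 * x * x + 2 * x * y + 3 * y * y <= n
    )

n = 100_000
ratio = lattice_count(n) / n
print(ratio, pi / (2 * sqrt(2)))  # both are about 1.1107
```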
2,174,061
<p>in $\Delta ABC$ if the $AD\perp BC$,$D\in BC$,and such $$|BC|=2|AD|$$ show that $$\dfrac{|AB|}{|AC|}\le\sqrt{2}+1$$ <a href="https://i.stack.imgur.com/SXDvI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SXDvI.png" alt="enter image description here"></a></p> <p>since $$\cot{B}+\cot{C}=\dfrac{BD}{AD}+\dfrac{CD}{AD}=2$$ so $$\dfrac{AB}{AC}=\dfrac{\sin{C}}{\sin{B}}$$</p>
Michael Rozenberg
190,319
<p>In the standard notation <span class="math-container">$AD=\frac{a}{2}$</span> and let <span class="math-container">$\frac{c}{b}=x$</span>.</p> <p>Thus, <span class="math-container">$$\frac{a^2}{4}=S_{\Delta ABC}$$</span> or <span class="math-container">$$\frac{a^2}{4}=\frac{1}{4}\sqrt{2(a^2b^2+a^2c^2+b^2c^2)-a^4-b^4-c^4}$$</span> or <span class="math-container">$$2a^4-2(b^2+c^2)a^2+b^4+c^4-2b^2c^2=0,$$</span> which gives (the discriminant of this quadratic in <span class="math-container">$a^2$</span> must be nonnegative) <span class="math-container">$$(b^2+c^2)^2-2(b^4+c^4-2b^2c^2)\geq0$$</span> or <span class="math-container">$$c^4-6b^2c^2+b^4\leq0$$</span> or <span class="math-container">$$x^4-6x^2+1\leq0$$</span> or <span class="math-container">$$3-2\sqrt2\leq x^2\leq3+2\sqrt2$$</span> or <span class="math-container">$$\sqrt2-1\leq x\leq1+\sqrt2,$$</span> which gives <span class="math-container">$$\frac{AB}{AC}\leq1+\sqrt2.$$</span></p>
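<p>A numerical cross-check (my addition, with my own choice of coordinates): place $B=(0,0)$, $C=(2,0)$ and $A=(t,1)$, so the altitude from $A$ to line $BC$ has length $1 = BC/2$; scanning over $t$ (the foot of the altitude may fall outside the segment, which is where the extremum occurs) confirms that $AB/AC$ never exceeds $1+\sqrt2$ and that the bound is attained near $t=1+\sqrt2$.</p>

```python
from math import sqrt

a = 2.0                 # BC, so the altitude from A must be a/2 = 1
bound = 1 + sqrt(2)

best, best_t = 0.0, None
for i in range(-100_000, 100_001):
    t = i / 10_000      # scan t in [-10, 10] with step 1e-4
    ab = sqrt(t * t + 1)            # |AB| for A = (t, 1), B = (0, 0)
    ac = sqrt((t - a) ** 2 + 1)     # |AC| for C = (2, 0)
    r = ab / ac
    if r > best:
        best, best_t = r, t

print(best, best_t)     # best is about 2.41421 = 1 + sqrt(2), near t = 2.4142
assert best <= bound + 1e-9
assert abs(best - bound) < 1e-4
```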
2,157,914
<p>I am struggling with the next exercise of my HW:</p> <p>How many conjugacy classes are in $GL_3(\mathbb{F}_p)$? And how many in $SL_2(\mathbb{F}_p)$?</p> <p>It's on the topic of Frobenius normal form of finitely generated modules over $\mathbb{F}_p$.</p> <p>I'd appreciate any idea.</p>
Community
-1
<p>Taking a cue from <a href="http://oeis.org/A000194" rel="noreferrer">sequence A000194 in the OEIS</a> (so $f(k)$ is the integer nearest to $\sqrt{k}$), we have $$S = \sum_{k=1 }^{2016} \frac{1}{f (k)}$$ $$ = \frac {2}{1} + \frac {4}{2} + \frac {6}{3} + \cdots + \frac {2 \lfloor \sqrt {2016} \rfloor}{\lfloor \sqrt {2016} \rfloor} + \frac {36}{\lceil \sqrt {2016} \rceil}\,\,(\text{ why? })$$ $$=2+2+\cdots +2 +\frac {36}{45} $$ where $2$ is added $44$ times giving an answer of $\boxed {88.8} $.</p> <p>Hope it helps. </p>
2,157,914
<p>I am struggling with the next exercise of my HW:</p> <p>How many conjugacy classes are in $GL_3(\mathbb{F}_p)$? And how many in $SL_2(\mathbb{F}_p)$?</p> <p>It's on the topic of Frobenius normal form of finitely generated modules over $\mathbb{F}_p$.</p> <p>I'd appreciate any idea.</p>
Tig la Pomme
419,035
<p>If $j$ is a positive integer, the integers $n$ for which the closest integer to $\sqrt{n}$ is $j$ are those who verify $j-1/2&lt;\sqrt{n}&lt;j+1/2$, i.e. $j^2-j&lt;n \leqslant j^2+j$. Then you can group terms: $$\sum_{k=1}^{m^{2}+m}\frac{1}{f(k)}=\sum_{j=1}^{m}\sum_{k=j^2-j+1}^{j^2+j}\frac{1}{f(k)}=\sum_{j=1}^{m}\sum_{k=j^2-j+1}^{j^2+j}\frac{1}{j}=\sum_{j=1}^{m}2=2m.$$ In particular, noticing $44 \times 45=1980$ and $45 \times 46=2070$, we get $\sum_{k=1}^{1980}\frac{1}{f(k)}=88$. Moreover, the next $36$ terms verify $f(k)=45$, so we get $$\sum_{k=1}^{2016}\frac{1}{f(k)}=88+\frac{36}{45}=\frac{444}{5}.$$</p> <p>Best regards,</p> <p>Tig la Pomme</p>
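<p>The value $\frac{444}{5}$ can be confirmed exactly with a few lines of integer arithmetic (my addition, not part of the answer); the nearest integer to $\sqrt k$ is $\lfloor\sqrt k + \tfrac12\rfloor = \lfloor(\lfloor\sqrt{4k}\rfloor+1)/2\rfloor$:</p>

```python
from fractions import Fraction
from math import isqrt

def f(k: int) -> int:
    # Nearest integer to sqrt(k): floor(sqrt(k) + 1/2) = (isqrt(4k) + 1) // 2.
    return (isqrt(4 * k) + 1) // 2

total = sum(Fraction(1, f(k)) for k in range(1, 2017))
print(total)  # 444/5
assert total == Fraction(444, 5)
```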
353,947
<p>Problem. Let $f:[a,b]\to\mathbb{R}$ be a function such that $ f\in C^3([a,b])$ and $f(a)=f(b)$. Prove that $$ \left|\int\limits_{a}^{\frac{a+b}{2}}f(x)dx-\int\limits_{\frac{a+b}{2}}^{b}f(x)dx\right|\leq\frac{(b-a)^4}{192}\max_{x\in [a,b]}|f'''(x)|.$$ Any idea are welcome.</p>
robjohn
13,854
<p>Let $f(x)=g(t)+h(t)$, where $g$ is odd, $h$ is even, and $x=\frac{b-a}{2}t+\frac{b+a}{2}$; that is $t=\frac{2x-b-a}{b-a}$.</p> <p>Then, since $g$ is odd and $h$ is even, $$ \max_{t\in[-1,1]}|g'''(t)|\le\left|\frac{b-a}{2}\right|^3\max_{x\in[a,b]}|f'''(x)|\tag{1} $$ and $$ \begin{align} \int_{\frac{b+a}{2}}^bf(x)\,\mathrm{d}x-\int_a^{\frac{b+a}{2}}f(x)\,\mathrm{d}x &amp;=\frac{b-a}{2}\left(\int_0^1g(t)\,\mathrm{d}t-\int_{-1}^0g(t)\,\mathrm{d}t\right)\\ &amp;=(b-a)\int_0^1g(t)\,\mathrm{d}t\tag{2} \end{align} $$ Since $f(a)=f(b)$, we must have $g(-1)+h(-1)=g(1)+h(1)$.<br> Since $h(-1)=h(1)$, we must have $g(-1)=g(0)=g(1)=0$.<br> Since $g$ is odd, $g''(0)=0$.</p> <p>Suppose that $\max\limits_{t\in[0,1]}|g'''(t)|=M$, and on $[0,1]$, define $$ p(t)=g(t)-\frac M6(t-t^3)\tag{3} $$ then $p(0)=p(1)=p''(0)=0$ and $p'''(t)=g'''(t)+M\ge0$. Thus, $p''(t)\ge0$, and therefore, $p'$ is non-decreasing.</p> <p>Suppose $p(t_0)\gt0$. The Mean Value Theorem says that for some $t_1\in(0,t_0)$, we have that $p'(t_1)=\frac{p(t_0)-p(0)}{t_0-0}\gt0$. Since $p'$ is non-decreasing, $p'(t)\gt0$ for all $t\ge t_1$. This would imply that $p(1)\ge p(t_0)\gt0$.</p> <p>Thus, $p(t)\le0$; that is, $$ g(t)\le\frac M6(t-t^3)\tag{4} $$ By a similar argument, using $q(t)=g(t)+\frac M6(t-t^3)$, we get $g(t)\ge-\frac M6(t-t^3)$. Therefore, since $g$ is odd, for all $t\in[-1,1]$, $$ |g(t)|\le\frac M6\left|\,t-t^3\right|\tag{5} $$ Integrating yields $$ \left|\int_0^1g(t)\,\mathrm{d}t\right|\le\frac M{24}\tag{6} $$ Thus, combining $(1)$, $(2)$, and $(6)$ tells us that $$ \left|\int_{\frac{b+a}{2}}^bf(x)\,\mathrm{d}x-\int_a^{\frac{b+a}{2}}f(x)\,\mathrm{d}x\right|\le\frac{(b-a)^4}{192}\max_{x\in[a,b]}|f'''(x)|\tag{7} $$</p>
4,170,940
<blockquote> <p><a href="https://www.isical.ac.in/%7Eadmission/IsiAdmission2017/PreviousQuestion/BStat-BMath-UGA-2016.pdf" rel="nofollow noreferrer">Question 36</a>: Finding graph corresponding to <span class="math-container">$\int_0^{\sqrt{x} } e^{ -\frac{u^2}{x} } du$</span> <a href="https://i.stack.imgur.com/KIVRA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KIVRA.png" alt="" /></a> <span class="math-container">$x&gt;0$</span> and <span class="math-container">$f(0)=0$</span></p> </blockquote> <p>Clearly we can't say whether the function is increasing or decreasing just by inspection, because both the bound and the integrand are variable. To make statements we have to consider its derivative, which is computed by the <a href="https://en.wikipedia.org/wiki/Leibniz_integral_rule" rel="nofollow noreferrer">Leibniz integral rule</a>:</p> <p><span class="math-container">$$ F(x) = \frac{d}{dx} \int_0^{\sqrt{x} }e^{ - \frac{u^2}{x} } \, du = \frac{e^{-1}}{2 \sqrt{x}} + \int_0^{\sqrt{x} } \frac{u^2}{x^2}\, e^{ - \frac{u^2}{x} } \, du$$</span></p> <p>Now... ummm... we still have the integral again, so it's still not easy to say whether the function is increasing or decreasing.</p> <p>I thought that I could rule out option D by checking whether the function has an extremum, i.e. by finding roots of <span class="math-container">$F(x)=0$</span>, but that too seems too difficult.</p> <p>Is there any trick which I am not seeing? This question was meant for high schoolers who are entering undergraduate study, so methods at that level would be best (other methods are still fine).</p>
Jean Marie
305,862
<p>I don't know if this can be called a trick, but I have been guided by the &quot;normal law&quot; in probability :</p> <p>Setting <span class="math-container">$\sigma=\sqrt{x}$</span> and <span class="math-container">$u=\tfrac{1}{\sqrt{2}}U$</span> (so that the upper limit <span class="math-container">$u=\sqrt{x}$</span> corresponds to <span class="math-container">$U=\sqrt{2}\sigma$</span>), we get :</p> <p><span class="math-container">$$ I=\frac{1}{\sqrt{2}}\int_0^{\sqrt{2}\sigma}e^{-\frac{U^2}{2\sigma^2}}dU$$</span></p> <p>By comparison with a well known integral in statistics:</p> <p><span class="math-container">$$K=\frac{1}{\sigma \sqrt{2 \pi}}\int_0^{\sqrt{2}\sigma}e^{-\frac{U^2}{2\sigma^2}}dU\overset{C.O.V. \ U=\sigma x}{=}\frac{1}{\sqrt{2 \pi}}\int_0^{\sqrt{2}}e^{-\frac{x^2}{2}}dx \ \text{ a constant}, $$</span></p> <p>Remark: <span class="math-container">$K$</span> is well known to statisticians, it is the probability that the value of a normal <span class="math-container">$N(0,\sigma)$</span> random variable &quot;falls&quot; into interval <span class="math-container">$[0,\sqrt{2}\sigma]$</span> with <span class="math-container">$K \approx 42 \% .$</span></p> <p>which is equivalent to say that:</p> <p><span class="math-container">$$I = \sqrt{\pi}\,\sigma K=\sqrt{x}\sqrt{\pi}\, K$$</span></p> <p>a <strong>square root trend</strong> in <span class="math-container">$x$</span>: it means the (C) answer.</p>
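<p>A quick numerical confirmation of the square-root trend (my addition, not part of the answer): evaluating the integral with a simple trapezoidal rule shows that $f(x)/\sqrt{x}$ is the same constant for every $x&gt;0$, namely $\int_0^1 e^{-t^2}\,dt = \frac{\sqrt\pi}{2}\operatorname{erf}(1) \approx 0.7468$ (substitute $u=\sqrt{x}\,t$):</p>

```python
from math import exp, sqrt, erf, pi

def f(x: float, steps: int = 20_000) -> float:
    """Trapezoidal approximation of the integral of exp(-u^2/x) on [0, sqrt(x)]."""
    b = sqrt(x)
    h = b / steps
    s = 0.5 * (1.0 + exp(-b * b / x))               # endpoints u = 0 and u = sqrt(x)
    s += sum(exp(-((i * h) ** 2) / x) for i in range(1, steps))
    return s * h

const = sqrt(pi) / 2 * erf(1)                       # ≈ 0.746824
for x in (0.5, 1.0, 4.0, 25.0):
    assert abs(f(x) / sqrt(x) - const) < 1e-6       # same constant for every x
print(const)
```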
2,261,927
<p>How to get alternative form from equation 1)</p> <p>$$ 1) -a^2 + a + b^2 -b $$</p> <p>to equation 2)</p> <p>$$ 2) (a-b)(a+b-1)$$</p>
tomi
215,986
<p>First rearrange the equation slightly:</p> <p>$$b^2-a^2+a-b$$</p> <p>Note that we have a 'difference of two squares'</p> <p>$$(b-a)(b+a)+a-b$$</p> <p>Recognise that $a-b\equiv -(b-a)$</p> <p>$$(b-a)(b+a)-(b-a)$$</p> <p>Now we have $(b-a)$ as a common factor so factorise:</p> <p>$$(b-a)\left ((b+a)-1 \right)$$</p> <p>becomes </p> <p>$$(b-a)(b+a-1)$$</p> <p>which is the same thing as</p> <p>$$-(a-b)(a+b-1)$$</p> <p>(note the leading minus sign: expanding $(a-b)(a+b-1)$ gives $a^2-a-b^2+b$, so expression $1)$ is actually the <em>negative</em> of the form $2)$ quoted in the question).</p>
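<p>The factorisation can be machine-checked over a grid of integers (a throwaway sanity check of my own; since both sides are polynomials of total degree 2, agreement on even a small grid already forces the identity):</p>

```python
# Verify (b - a)(b + a - 1) == -a^2 + a + b^2 - b for many integer pairs.
for a in range(-50, 51):
    for b in range(-50, 51):
        assert (b - a) * (b + a - 1) == -a * a + a + b * b - b
print("identity holds on the grid")
```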
3,467,523
<p><a href="https://i.stack.imgur.com/G47bX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/G47bX.png" alt="Attached is the picture of the problem."></a></p> <p>I was doing some trig problems for leisure. This one particularly seems not trivial. So I thought someone may be interested to take a look.</p>
Eric Towers
123,905
<p>If we assume the claim is true and substitute <span class="math-container">$x = 0$</span>, we find <span class="math-container">$4-m=0$</span>, which is straightforward to solve for <span class="math-container">$m$</span>.</p> <p>If we'd like to check another angle, try <span class="math-container">$x = \pi/4$</span> and discover <span class="math-container">$2 = \frac{m}{2}$</span>.</p> <p>Finding <span class="math-container">$m$</span> is fairly easy: pick a value of <span class="math-container">$x$</span> and solve the resulting linear equation for <span class="math-container">$m$</span>. If we want a little work, take <span class="math-container">$x = \pi/3$</span> and obtain <span class="math-container">$\frac{5}{2} + \frac{m}{2} = \frac{9m}{8}$</span>.</p>
2,250,638
<p>I feel I am doing the problem correctly however my answers are not following the solution.</p> <p>My attempt: </p> <p>$y^{2}+2y+12x-23=0$</p> <p>$(y^{2}+2y+1) +12x = 23-1$</p> <p>$(y+1)^{2}+12x=22$ </p> <p>$\dfrac{(y+1)^{2}}{22}+\dfrac{6x}{11}=1$</p> <p>Note $a &gt; b$</p> <p>$a^{2}=22$</p> <p>$b^{2}=11$</p> <p>$c^{2}= 22-11$ </p> <p>$c= ± \sqrt{11}$</p> <p>I'm not sure what I'm doing incorrectly, but the answers are supposed to be:</p> <p>vertex: $(2,-1)$</p> <p>focus: $(-1,-1)$</p> <p>directrix: $x=5$</p>
Blue
409
<p><a href="https://i.stack.imgur.com/C09ng.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/C09ng.png" alt="enter image description here"></a></p> <p>Let $A$ and $B$ be points on $x=1$ such that $|\overline{AB}| = 2$. With $R$ the point where $x=1$ meets the circle, define $\alpha := \angle ROA$ and $\beta := \angle ROB$. Let the "other" tangent from $A$ meet the circle at $S$; we see that $\angle ROS = 2\alpha$. Likewise, the "other" tangent from $B$ meets the circle at $T$ such that $\angle ROT = 2\beta$. Let these "other" tangents intersect at $P$, and note that $\overline{OP}$ bisects $\angle SOT$. Consequently, we can write $$\begin{align} P &amp;= \sec(\alpha-\beta)\;\left( \cos(\alpha+\beta), \sin(\alpha+\beta) \right) \\ &amp;= \left( \frac{\cos\alpha \cos\beta - \sin\alpha\sin\beta}{\cos\alpha\cos\beta + \sin\alpha \sin\beta}, \frac{\sin\alpha \cos\beta+\cos\alpha\sin\beta}{\cos\alpha\cos\beta + \sin\alpha \sin\beta} \right) \\ &amp;=\left( \frac{1 - \tan\alpha\tan\beta}{1 + \tan\alpha \tan\beta}, \frac{\tan\alpha+\tan\beta}{1 + \tan\alpha \tan\beta} \right) \end{align}$$ </p> <p>We can get the equation for the locus by eliminating $a := \tan\alpha$ and $b := \tan\beta$ from the system $$x = \frac{1-a b}{1+ab} \qquad y = \frac{a+b}{1+ab}$$ subject to the "intercept condition" $$a - b = 2$$</p> <p>Without too much trouble, we arrive at the relation</p> <blockquote> <p>$$ 2( 1 + x ) = y^2$$</p> </blockquote> <p>which describes a parabola. $\square$</p>
987,895
<blockquote> <p>Let $(G,\cdot)$ be a group, $g \in G$.</p> <p>For $a,b \in G$ define $a * b := a \cdot g^{-1} b$. Show that $(G,*)$ is a group with the neutral element $g$ and $f : (G,*) \rightarrow (G,\cdot), a \mapsto a \cdot g^{-1}$ is a group isomorphism.</p> </blockquote> <p>In order to show that $(G,*)$ is a group, I need to show that $*$ is closed and associative, g is the neutral element and an inverse element exists for each $x \in G$.</p> <p>(* is closed, associativity)</p> <p>$*$ is closed because $\cdot$ is, $\forall x,y \in G: x * y = x \cdot g^{-1} \cdot y$, which is closed because $(G,\cdot)$ is a group. The same applies for associativity: $(x * y) * z = (x \cdot g^{-1} \cdot y) \cdot g^{-1} \cdot z = x \cdot (g^{-1} \cdot y) \cdot g^{-1} \cdot z) = x * (y * z)$. </p> <p>(neutral element)</p> <p>$x * g = x \cdot g^{-1} g = x \cdot e = x, g * x = g \cdot g^{-1} x = e \cdot x = x$</p> <p>Is this right?</p> <p>How do I show that there is an inverse element, so that $\forall x \in G \; \exists x': x * x' = g$?</p> <p>In order to show that f is a group isomorphism, I need to show that it is a group homomorphism, and f is a bijection, right?</p> <p>I already figured out how to show that f is surjective, but how do I show that f is injective? </p>
mfl
148,513
<p>To get the inverse of an element $a:$</p> <p>$$a*b=ag^{-1}b=g\implies g^{-1}b=a^{-1}g\implies b=ga^{-1}g,$$</p> <p>that is, the inverse of $a$ with respect to $*$ is $ga^{-1}g.$</p> <p>To show that $f$ is injective:</p> <p>$$f(a)=f(b)\implies ag^{-1}=bg^{-1}\implies a=b$$ (just multiply by the right by $g$ in the last step).</p>
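<p>To see the construction in action, here is a small brute-force check (my addition) in the cyclic group $\Bbb Z_{12}$ written additively: $a\cdot b$ becomes $a+b \bmod 12$ and $g^{-1}$ becomes $-g$, so $a*b = a - g + b$, the neutral element is $g$, and the $*$-inverse of $a$ is $g - a + g$, matching the formula $ga^{-1}g$ above:</p>

```python
n, g = 12, 5  # Z_12 under addition; any g works

def star(a: int, b: int) -> int:
    # a * b := a . g^{-1} . b, which additively is a + (-g) + b
    return (a - g + b) % n

G = range(n)
# g is the neutral element for *:
assert all(star(a, g) == a == star(g, a) for a in G)
# the *-inverse of a is g + (-a) + g, i.e. g a^{-1} g:
assert all(star(a, (g - a + g) % n) == g == star((g - a + g) % n, a) for a in G)
# f(a) = a . g^{-1} is a homomorphism (G, *) -> (G, +):
f = lambda a: (a - g) % n
assert all(f(star(a, b)) == (f(a) + f(b)) % n for a in G for b in G)
print("all group axioms verified")
```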
4,317,945
<p>A function <span class="math-container">$h : A → \mathbb{R}$</span> is Lipschitz continuous if <span class="math-container">$\exists K$</span> s.t.</p> <p><span class="math-container">$$|h(x) - h(y)| \leq K \cdot |x - y|, \forall x, y \in A$$</span></p> <p>Suppose that <span class="math-container">$I = [a, b]$</span> is a closed, bounded interval; and <span class="math-container">$g : I → \mathbb{R}$</span> is differentiable on <span class="math-container">$I$</span> and the function <span class="math-container">$G = Dg = g' : I → \mathbb{R}$</span> is continuous. Prove that <span class="math-container">$g$</span> is Lipschitz continuous on <span class="math-container">$I$</span>.</p>
MachineLearner
647,466
<p>You can apply the <a href="https://en.wikipedia.org/wiki/Binomial_theorem" rel="nofollow noreferrer">binomial formula</a> <span class="math-container">$(1+x)^n=\sum_{k=0}^n\dfrac{n!}{(n-k)!k!}x^k1^{n-k}=\sum_{k=0}^n\dfrac{n!}{(n-k)!k!}x^k$</span></p> <p><span class="math-container">$$g(x)=x^2(x + 1)^n = x^2\sum_{k=0}^n\dfrac{n!}{(n-k)!k!}x^k$$</span> <span class="math-container">$$= \sum_{k=0}^n\dfrac{n!}{(n-k)!k!}x^{k + 2}$$</span></p> <p>We have</p> <p><span class="math-container">$$g^{(1)}(x)= \sum_{k=0}^n\dfrac{n!}{(n-k)!k!}(k+2)x^{k + 1}$$</span> <span class="math-container">$$g^{(2)}(x)=\sum_{k=0}^n\dfrac{n!}{(n-k)!k!}(k+2)(k+1)x^k$$</span> <span class="math-container">$$g^{(3)}(x)=\sum_{k=1}^n\dfrac{n!}{(n-k)!k!}(k+2)(k+1)kx^{k-1}$$</span> <span class="math-container">$$g^{(4)}(x)=\sum_{k=2}^n\dfrac{n!}{(n-k)!k!}(k+2)(k+1)k(k-1)x^{k-2}$$</span> If we continue with this pattern we obtain</p> <p><span class="math-container">$$g^{(n)}(x) = \sum_{k=n-2}^n\dfrac{n!}{(n-k)!k!}(k+2)(k+1)k(k-1)\ldots (k-n+3)x^{k-n+2}$$</span></p> <p>Now, evaluate these three terms of the sum, and you are done.</p>
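<p>The closed form for $g^{(n)}$ can be cross-checked (my addition) against direct repeated differentiation of the coefficient list, entirely in integer arithmetic; the falling product $(k+2)(k+1)\cdots(k-n+3)$ equals $(k+2)!/(k-n+2)!$:</p>

```python
from math import comb, factorial

def poly_coeffs(n: int) -> list[int]:
    """Coefficients of g(x) = x^2 (1+x)^n, index = power of x."""
    return [0, 0] + [comb(n, k) for k in range(n + 1)]

def diff(coeffs: list[int]) -> list[int]:
    """Differentiate a polynomial given as a coefficient list."""
    return [i * c for i, c in enumerate(coeffs)][1:]

def nth_derivative_formula(n: int, x: int) -> int:
    # g^{(n)}(x) = sum over k = n-2 .. n of C(n,k) * (k+2)!/(k-n+2)! * x^(k-n+2)
    total = 0
    for k in range(n - 2, n + 1):
        falling = factorial(k + 2) // factorial(k - n + 2)
        total += comb(n, k) * falling * x ** (k - n + 2)
    return total

for n in range(2, 8):
    c = poly_coeffs(n)
    for _ in range(n):
        c = diff(c)                 # differentiate n times
    for x in (-2, -1, 0, 1, 2, 3):
        direct = sum(coef * x ** i for i, coef in enumerate(c))
        assert direct == nth_derivative_formula(n, x)
print("formula agrees with direct differentiation")
```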
163,465
<p>I am interested in the quantity $$f(X,t) = \int_t^\infty\negthinspace x\ p(x)\ dx,$$ where $p$ is a probability distribution for a positive variable $X$.</p> <p>1) <strong>Does this quantity $f(X,t)$ have a name</strong>? As the title question suggests, it is a tail of the integral that is cut out when computing the trimmed mean.</p> <p>2) <strong>Are there any bounds</strong> on $f(X,t)\ / \ f(X,0)$ in terms of $t$ and properties only of $X$ (e.g. moments or quantiles of $X$)? Note $f(X,0)=E[X]$.</p> <p>Chebyshev's inequality has the right form for a bound on a different quantity, $$\int_t^\infty\negthinspace p(x)\ dx \le \frac{\text{Var}(X)}{(t - E[X])^2}.$$ I am looking for both upper and lower bounds on $f(X,t)$ and Chebyshev's inequality doesn't seem to provide either.</p> <p>I am especially interested in the case where $X=|Y_1-Y_2|$ where the $Y$'s are i.i.d. variables. Results for the general case might be better-known, and I would appreciate either.</p>
Santiago
10,203
<p>The paper "Bounds on Conditional Moments of Weibull and Other Monotone Failure Rate Families" by Patel and Read (1975) gives some useful bounds for distributions with monotone failure rates (either increasing or decreasing), where the failure rate is given by $h(t) = p(t) / (1 - P(t) )$. </p> <p>Let $H(t) = \int_0^t h(x) dx$, and assume that $h(t)$ is increasing. Among many other results, they show that the conditional expectation is bounded by $$ E[X | X &gt; t] = \frac{f(X,t)} {1-P(t)} \le H^{-1}(1 + H(t)) \le t + \frac 1 {h(t)}\,. $$ I hope that you find this helpful.</p>
189,689
<pre><code>CountryData[ "UnitedStates", {"Population", 2014} ] </code></pre> <blockquote> <p>322 422 965 people</p> </blockquote> <pre><code>CountryData[ "UnitedStates", {"Population", 2015} ] </code></pre> <blockquote> <p>Missing[ "NotAvailable" ]</p> </blockquote> <p>How can I update it? I am new to Mathematica.</p>
Jason B.
9,490
<p><code>CountryData</code> does appear to be out of date, and that is a shame. Many of the XXXData functions now act as wrappers to call <code>EntityValue</code>. Run <code>TracePrint[PlanetData["Venus", "AngularDiameterFromEarth"],_EntityValue]</code> to see that this is true. </p> <p>But <code>CountryData</code> has not been updated to work this way. So we can call <code>EntityValue</code> directly for the data. To find out what the right syntax would be I will use <kbd>Ctrl</kbd><kbd>=</kbd>:</p> <p><a href="https://i.stack.imgur.com/4B9hl.png" rel="noreferrer"><img src="https://i.stack.imgur.com/4B9hl.png" alt="enter image description here"></a></p> <p>becomes </p> <p><a href="https://i.stack.imgur.com/6QaOf.png" rel="noreferrer"><img src="https://i.stack.imgur.com/6QaOf.png" alt="enter image description here"></a></p> <p>which is a formatted form of</p> <pre><code>Entity["Country", "UnitedStates"][ EntityProperty["Country", "Population", {"Date" -&gt; DateObject[{1978}]}] ] </code></pre> <p>I can query for a date range as well. Using <kbd>Ctrl</kbd><kbd>=</kbd> on <code>"US population 2000 through 2018"</code> returns this input expression:</p> <pre><code>Entity["Country", "UnitedStates"][ EntityProperty["Country", "Population", {"Date" -&gt; Interval[{DateObject[{2000}], DateObject @ {2018}}]} ] ] </code></pre> <p>which returns a <code>TimeSeries</code> object suitable for further computation or visualization.</p>
600,404
<p>I'm trying to study line bundles over $S^2$. <a href="https://mathoverflow.net/questions/113924/line-bundle-on-s2">In this post</a> the method based on clutching functions was outlined. But now I'm interested in another approach. </p> <p>For the sphere there are two charts: the upper hemisphere and the lower hemisphere, with intersection $[-\epsilon,\epsilon]\times S^1$. For the upper and the lower hemisphere it is well known that bundles over these spaces are trivial. (Any bundle over a contractible base is trivial.) So to prove that a line bundle over $S^2$ is trivial we must construct a continuation of the trivialization from the upper hemisphere (for example) to the lower hemisphere through the "border" $[-\epsilon,\epsilon]\times S^1$. </p> <p>As I understand it, it is sufficient to continue the trivialization from the "border" to the center of the "disk". (I think it is possible to use a partition of unity here, but I'm not sure.)</p> <p>I can't formalize this reasoning.</p>
Jean-Claude Arbaut
43,608
<p>You can write</p> <p>$$\frac{1^p+2^p+...+n^p}{n^{p+1}}=\frac{1}{n} \left( \left( \frac{1}{n}\right)^p + \left( \frac{2}{n}\right)^p + ... + \left( \frac{n}{n}\right)^p \right)= \frac{1}{n}\sum_{k=1}^n\left( \frac k n\right)^p$$</p> <p>This is a <a href="https://en.wikipedia.org/wiki/Riemann_sum" rel="nofollow">Riemann sum</a> and, when $n \to \infty$, it converges to $\int_0^1 x^p \, \mathrm{d}x=\frac{1}{p+1}$.</p>
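<p>Numerically (a quick check of my own, not part of the answer), the convergence of these Riemann sums is easy to watch; for $p=3$ the limit should be $\tfrac14$:</p>

```python
p = 3
values = []
for n in (10, 100, 1_000, 10_000):
    s = sum(k ** p for k in range(1, n + 1)) / n ** (p + 1)
    values.append(s)
    print(n, s)

# For p = 3 the sum is exactly (n(n+1)/2)^2, so s = 1/4 + 1/(2n) + 1/(4n^2).
assert abs(values[-1] - 0.25) < 1e-3
# The error shrinks monotonically as n grows:
assert all(abs(v - 0.25) >= abs(w - 0.25) for v, w in zip(values, values[1:]))
```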
194,472
<p>It is well-known that there are continuous curves $f:I \to \mathbb R^2$ (where $I \subset \mathbb R$ is an interval) whose image has positive measure (e.g. the Peano curve). I have read somewhere that if we require the curve to be differentiable everywhere then this cannot happen; but if we require it to be almost everywhere differentiable, then it can happen! How could one proceed to prove the first statement and give a counterexample for the second?</p>
Sangchul Lee
9,340
<p>Let $\phi : [0, 1] \to [0, 1]$ be the <em><a href="http://en.wikipedia.org/wiki/Cantor_function" rel="noreferrer">Cantor-Lebesgue function</a></em>, and $\alpha : [0, 1] \to \Bbb{R}^n$ be a space-filling curve.</p> <p>Since $\phi$ is stationary outside the Cantor set, it is locally constant almost everywhere. That is, $\beta := \alpha \circ \phi$ is also locally constant almost everywhere, allowing it to be differentiable a.e.. (In fact, $\beta' = 0$ a.e.!) On the other hand, $\phi$ is continuous and surjective. Thus $\beta$ is also a continuous path and the image of $\beta$ coincides with that of $\alpha$. That is, $\beta$ is also a space-filling curve. Therefore this serves as a counter-example.</p> <hr> <p>Let $f : [0, 1] \to \Bbb{R}^n$ be a differentiable curve in $\Bbb{R}^n$ ($n \geq 2$). Let $\Gamma = f([0, 1])$ be the image of $f$ in $\Bbb{R}^n$.</p> <p>Assuming that $|f'|$ is Lebesgue integrable, we can prove the first statement.</p> <blockquote> <p><strong>Theorem.</strong> [7.21, Rudin] If $f : [a, b] \to \Bbb{R}$ is everywhere differentiable and its derivative $f'$ is Lebesgue integrable, then $$ f(b) - f(a) = \int_{a}^{b} f'(t) \; dt. $$</p> </blockquote> <p>This theorem immediately implies the following corollary:</p> <blockquote> <p><strong>Corollary.</strong> Let $f : [0, 1] \to \Bbb{R}^n$ be everywhere differentiable, with derivative $f'$ Lebesgue integrable. Then for any $\epsilon &gt; 0$ there exists a partition $\Pi = \{ 0 = x_0 &lt; \cdots &lt; x_N = 1 \}$ such that for $$\epsilon_k = \sup \{|f(x) - f(y)| : x, y \in [x_{k-1}, x_k] \}, \quad (1 \leq k \leq N)$$ we have $$\epsilon_k \leq \epsilon \quad \text{and} \quad \sum_{k=1}^{N} \epsilon_k \leq \| f' \|_1.$$</p> </blockquote> <p><em>Proof.</em> Since $f'$ is Lebesgue integrable, its integral is absolutely continuous. Thus there exists $\delta &gt; 0$ such that whenever a measurable subset $E \subset [0, 1]$ satisfies $|E| &lt; \delta$, we have $\int_E |f'| &lt; \epsilon$. 
Now let $\Pi = \{x_k\}$ be a partition of $[0, 1]$ into subintervals with length less than $\delta$. Then for each $x_{k-1} \leq x &lt; y \leq x_k$, $$ |f(y) - f(x)| = \left| \int_{x}^{y} f' \right| \leq \int_{x}^{y} |f'| \leq \int_{x_{k-1}}^{x_k} |f'| &lt; \epsilon. $$ Thus $$ \epsilon_k \leq \int_{x_{k-1}}^{x_k} |f'| &lt; \epsilon $$ and the conclusion follows. ////</p> <blockquote> <p><strong>Remark.</strong> If $f'$ is bounded, then it is Lebesgue integrable. Also, in this case, the conclusion of the Corollary follows directly by mean value theorem.</p> </blockquote> <p>Now let $\epsilon &gt; 0$ and $\Pi$ be a corresponding partition of $[0, 1]$ by Corollary. Then we can cover $\Gamma$ by balls $B_{2\epsilon_k}(f(x_k))$ for $k = 1, \cdots, N$. Thus</p> <p>$$ |\Gamma| \leq \sum_{k=1}^{N} \left|B_{2\epsilon_k}(f(x_k))\right| \leq \sum_{k=1}^{N} C_n \epsilon_k^n \leq C_n \epsilon^{n-1} \sum_{k=1}^{N} \epsilon_k \leq C_n \epsilon^{n-1} \| f' \|_1, $$</p> <p>where $C_n$ is a constant depending only on the dimension $n$. (In fact, we can choose $C_n = |B_2|$.) Thus taking $\epsilon \to 0$, we have the desired result.</p>
194,472
<p>It is well-known that there are continuous curves $f:I \to \mathbb R^2$ (where $I \subset \mathbb R$ is an interval) whose image has positive measure (e.g. the Peano curve). I have read somewhere that if we require the curve to be differentiable everywhere then this cannot happen; but if we require it to be almost everywhere differentiable, then it can happen! How could one proceed to prove the first statement and give a counterexample for the second?</p>
Will Jagy
10,400
<p>The other direction is called (mini)-Sard's Theorem. Page 205 in Appendix 1 of Guillemin and Pollack, <em>Differential Topology</em>. The mini version is just this: Let $U$ be an open set of $\mathbb R^n,$ and let $f:U \rightarrow \mathbb R^m$ be a smooth map. Then, if $m &gt; n,$ we can conclude that $f(U)$ has measure zero in $\mathbb R^m.$</p> <p>The full Sard's Theorem is about critical points, while we make no demands about the relative dimensions. Let $f:X \rightarrow Y$ be a smooth map of manifolds, and let $C$ be the set of critical points of $f$ in $X.$ Then $f(C)$ has measure zero in $Y.$ </p> <p>They say their proof is almost verbatim from John W. Milnor, <em>Topology from the Differentiable Viewpoint.</em> It appears Guillemin and Pollack, also Milnor, assume $C^\infty.$ However, Milnor refers back to Pontryagin(1955), English translation (1959). Evidently Pontryagin worked with weaker conditions, but we are not told what. <a href="http://en.wikipedia.org/wiki/Sard%27s_theorem" rel="nofollow">http://en.wikipedia.org/wiki/Sard%27s_theorem</a></p> <p>Alright, I've been looking stuff up elsewhere. The full Sard's theorem does depend on relative dimension. I currently think that mini-Sard may be true both for $C^1$ and for differentiable everywhere, which is a pretty weak hypothesis. </p>
3,810,733
<p><span class="math-container">$$y''' + y' = 2^2 + 4\sin(x)$$</span></p> <p>Find the general solution of the differential equation using the method of undetermined coefficients.</p>
Physor
772,645
<p>That <span class="math-container">$u$</span> is a unit vector means <span class="math-container">$||u|| = \sqrt{u_1^2 +u_2^2 + u_3^2 + u_4^2} = 1$</span>. Consider it as a constraint on the vector you seek, <span class="math-container">$u$</span>, in the form <span class="math-container">$$ g(u_1,u_2,u_3,u_4) = 0\qquad {\rm i.e. } \qquad \sqrt{u_1^2 +u_2^2 + u_3^2 + u_4^2} - 1 =0 $$</span> The function you have, <span class="math-container">$f(u_1,u_2,u_3,u_4) = 3u_1 - 2u_2 + 4u_3-u_4$</span>, must be maximized subject to the constraint above.</p> <p>Using the Lagrange multipliers method you get the system <span class="math-container">$$ \nabla f(u_1,u_2,u_3,u_4)\ - \lambda \nabla g(u_1,u_2,u_3,u_4) = \vec{0} \\ g(u_1,u_2,u_3,u_4) = 0 $$</span> Solve for <span class="math-container">$u_1,u_2,u_3,u_4$</span> and <span class="math-container">$\lambda$</span>; you get two solutions.</p>
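As a supplementary numeric check (not part of the original answer): for a linear objective the Lagrange condition forces $u$ to be parallel to the coefficient vector $c=(3,-2,4,-1)$, so the two solutions are $u=\pm c/\|c\|$ and the maximum value is $\|c\|=\sqrt{30}$. A short Python sketch:

```python
import math

# Maximize f(u) = 3*u1 - 2*u2 + 4*u3 - u4 subject to ||u|| = 1.
# grad f = c is constant, so the Lagrange system gives u = +-c/||c||.
c = (3.0, -2.0, 4.0, -1.0)
norm_c = math.sqrt(sum(x * x for x in c))        # ||c|| = sqrt(30)
u_max = tuple(x / norm_c for x in c)             # the maximizing unit vector
f_max = sum(ci * ui for ci, ui in zip(c, u_max)) # maximum value = sqrt(30)
```

The second critical point is the opposite unit vector $-c/\|c\|$, where $f$ attains its minimum $-\sqrt{30}$.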
108,253
<p>I would like to assign 'x' individuals to 'y' groups, randomly. For example, I would like to divide 50 individuals into 100 groups randomly. Of course, with more groups than individuals many of the groups will have zero individuals, while some groups will have multiple individuals. That is fine. With random assignment, the distribution of the number of individuals per group should fit a Poisson distribution.</p> <p>I feel like there should be a simple function in Mathematica for partitioning X things into Y groups randomly. I have searched and haven't found anything to do this. Please help!</p>
David G. Stork
9,735
<pre><code>x = 50; myList = Range[x]; maxgroupSize = 5; numberGroups = 100; Table[DeleteDuplicates@RandomChoice[myList, maxgroupSize], numberGroups] </code></pre> <p>or</p> <pre><code>Table[RandomSample[myList, maxgroupSize], numberGroups] </code></pre>
108,253
<p>I would like to assign 'x' individuals to 'y' groups, randomly. For example, I would like to divide 50 individuals into 100 groups randomly. Of course, with more groups than individuals many of the groups will have zero individuals, while some groups will have multiple individuals. That is fine. With random assignment, the distribution of the number of individuals per group should fit a Poisson distribution.</p> <p>I feel like there should be a simple function in Mathematica for partitioning X things into Y groups randomly. I have searched and haven't found anything to do this. Please help!</p>
LLlAMnYP
26,956
<p>Randomly partitioning a list into <code>y</code> groups is as easy as splitting a random permutation of it at <code>y-1</code> positions.</p> <pre><code>set = RandomSample[Range[50]] (* works with any list, though *) split = Partition[{1}~Join~RandomChoice[Range[Length[set] + 1], 99] ~Join~{Length[set] + 1} // Sort, 2, 1] set[[# ;; #2 - 1]] &amp; @@@ split; </code></pre>
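For readers without Mathematica, the random-assignment idea from the question can be sketched in Python (the helper name `assign` is mine): each individual independently joins a uniformly random group, which is exactly what produces the Poisson-like distribution of group sizes the question mentions.

```python
import random
from collections import Counter

def assign(n_individuals, n_groups, rng=random):
    # each individual independently joins a uniformly random group
    return [rng.randrange(n_groups) for _ in range(n_individuals)]

rng = random.Random(0)          # seeded for reproducibility
groups = assign(50, 100, rng)   # groups[i] = group label of individual i
sizes = Counter(groups)         # group label -> number of individuals in it
```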
3,306,571
<p>I know that the function <span class="math-container">$f(x) = \frac{x}{x}$</span> is not differentiable at <span class="math-container">$x = 0$</span>, but according to the definition of differentiable functions:</p> <blockquote> <p>A differentiable function of one real variable is a function whose derivative exists at each point in its domain</p> </blockquote> <p>since <span class="math-container">$x = 0$</span> is not in the domain of <span class="math-container">$f$</span>, it doesn't have to be differentiable at that point for the function to be differentiable. This suggests that <span class="math-container">$f$</span> is differentiable, as every other point in the domain has a derivative of <span class="math-container">$0$</span>.</p> <p>However, some say that a function must be continuous if it's differentiable. This would imply that <span class="math-container">$f$</span> is not differentiable, since it's not a continuous function.</p> <p>Then is it really a differentiable function?</p>
Theo Bendit
248,286
<p>The function <span class="math-container">$f(x) = \frac{x}{x}$</span> has a natural domain of <span class="math-container">$\Bbb{R} \setminus \{0\}$</span>; you can sensibly substitute in any value except <span class="math-container">$0$</span>. In order to be continuous or differentiable at a point, it is <strong>required</strong> that the point be part of the domain of the function. The function <span class="math-container">$f$</span> is therefore <strong>neither continuous nor differentiable at <span class="math-container">$0$</span></strong>.</p> <p>Does this mean the function is continuous? It depends on who's asking. Some people define the term "continuous" as being defined and continuous at every point on <span class="math-container">$\Bbb{R}$</span>, and say that all points where the function is undefined are points of discontinuity by default. Others say that a function is "continuous" if it is continuous at every point on its domain. In the latter sense, <span class="math-container">$f$</span> is continuous and differentiable. In the former sense, <span class="math-container">$f$</span> is neither.</p>
802,960
<p>$$\sum\limits_{k=1}^n\arctan\frac{ 1 }{ k }=\frac{\pi}{ 2 }$$ Find value of $n$ for which equation is satisfied. </p>
enigne
143,559
<p>$n=3$. By drawing this figure, you can easily see why: <img src="https://i.stack.imgur.com/iA9Tw.png" alt="enter image description here"></p>
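A quick numeric confirmation (my addition, in Python) that $n=3$ works, i.e. that $\arctan 1+\arctan\frac12+\arctan\frac13=\frac\pi2$:

```python
import math

# partial sum of arctan(1/k) for k = 1..3
total = sum(math.atan(1 / k) for k in range(1, 4))
```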
373,958
<p>Is $\sum_{n=1}^\infty(2^{\frac1{n}}-1)$ convergent or divergent? $$\lim_{n\to\infty}(2^{\frac1{n}}-1) = 0$$ I can't think of anything to compare it against. The integral looks too hard: $$\int_1^\infty(2^{\frac1{n}}-1)dn = ?$$ Root test seems useless as $\left(2^{\frac1{n}}\right)^{\frac1{n}}$ is probably even harder to find a limit for. Ratio test also seems useless because $2^{\frac1{n+1}}$ can't cancel out with ${2^{\frac1{n}}}$. It seems like the best bet is comparison/limit comparison, but what can it be compared against?</p>
Zarrax
3,035
<p>No need for L'hopital: Just use the fact that by the definition of derivative, $$\lim_{x \rightarrow 0} {2^x - 1 \over x} = {d \over dx} 2^x\bigg|_{x = 0}$$ $$ = \ln 2$$ So as in bjatr's answer, this means that $$\lim_{n \rightarrow \infty} {2^{1 \over n} - 1 \over {1 \over n}} = \ln 2$$ So by the limit comparison test, the series diverges.</p>
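A small numeric check (my addition) that $\frac{2^{1/n}-1}{1/n}\to\ln 2$, the constant used in the limit comparison with the harmonic series:

```python
import math

# n * (2^(1/n) - 1) for increasing n; should approach ln 2
ratios = [(2 ** (1 / n) - 1) * n for n in (10, 1000, 100000)]
```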
1,672,131
<p>A card game is played with a deck whose cards can be one of 6 suits, one of the suits being hearts, and one of 11 ranks. A hand is a subset of 3 cards. What is the probability that a hand has exactly two hearts given that it has the 2 of hearts? Please explain.</p>
Whiz_Geek
230,983
<p>There are exactly three solutions: $1$, $8$, $9$. Note that $10^n$ has exactly $n+1$ digits.</p>
804,882
<p>If both $L:V\rightarrow W$ and $M:W\rightarrow U$ are linear transformations that are invertible, how can you prove that the composition $(M\circ L):V\rightarrow U$ is also invertible.</p>
Galactus
152,561
<p>You can prove that a transformation is invertible $\Leftrightarrow $ its associated matrix is also invertible. Then use the matrix associated to the composition to prove that it is invertible.</p>
1,560,209
<p>Prove that $f(x):\mathbb{R}\to\mathbb{R}$ , $x \mapsto x^3$ is injective.</p> <hr> <p>I want to prove this claim is true. </p> <p>Here is my outline so far:</p> <hr> <p>We want to show that $f(a)=f(b)$ implies that $a=b$, for all $a,b \in \mathbb{R}$</p> <p>We have $f(a)=a^3$, and $f(b)=b^3$</p> <p>So, if $f(a)=f(b)$, we have $a^3=b^3$, or $a^3-b^3=0$, or $(a-b)(a^2+ab+b^2)=0$</p> <p>Consider two cases:</p> <p>Case 1: $(a-b)=0$, then $a=b$, and we are done.</p> <p>Case 2: $(a^2+ab+b^2)=0$ We want to show that $a=b$.</p> <hr> <p>I am having trouble with Case 2. I need to figure out how to show that $a=b$. I think I can use this inequality some how: $a^2+2ab+b^2&gt;0$ </p> <hr> <p>Any help would be appreciated.</p>
Martin Argerami
22,857
<p>Think of it as a quadratic on $a$: if we complete the square, $$ a^2+ab+b^2=\left(a+\frac b2\right)^2+b^2-\frac {b^2}4=\left(a+\frac b2\right)^2+\frac{3b^2}4. $$ For this to be zero we need both summands to be zero (because both are nonnegative). Then we get first that $b=0$, and then that $a^2=0$, so $a=0$. </p> <p>So, for $a,b$ not both zero, $$ a^2+ab+b^2&gt;0. $$</p>
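A brute-force numeric check (my addition) of the completed-square identity and of the resulting positivity for $(a,b)\neq(0,0)$:

```python
import random

rng = random.Random(1)  # fixed seed for reproducibility
ok = True
for _ in range(1000):
    a, b = rng.uniform(-10, 10), rng.uniform(-10, 10)
    lhs = a * a + a * b + b * b
    rhs = (a + b / 2) ** 2 + 3 * b * b / 4   # the completed square
    ok = ok and abs(lhs - rhs) < 1e-9 and lhs > 0
```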
1,560,209
<p>Prove that $f(x):\mathbb{R}\to\mathbb{R}$ , $x \mapsto x^3$ is injective.</p> <hr> <p>I want to prove this claim is true. </p> <p>Here is my outline so far:</p> <hr> <p>We want to show that $f(a)=f(b)$ implies that $a=b$, for all $a,b \in \mathbb{R}$</p> <p>We have $f(a)=a^3$, and $f(b)=b^3$</p> <p>So, if $f(a)=f(b)$, we have $a^3=b^3$, or $a^3-b^3=0$, or $(a-b)(a^2+ab+b^2)=0$</p> <p>Consider two cases:</p> <p>Case 1: $(a-b)=0$, then $a=b$, and we are done.</p> <p>Case 2: $(a^2+ab+b^2)=0$ We want to show that $a=b$.</p> <hr> <p>I am having trouble with Case 2. I need to figure out how to show that $a=b$. I think I can use this inequality some how: $a^2+2ab+b^2&gt;0$ </p> <hr> <p>Any help would be appreciated.</p>
bartgol
33,868
<p>Your strategy works fine. You do not need to show that the function is increasing (although, that would be one way to do it).</p> <p>You just need to show that $a^2+ab+b^2=0$ does not hold for any $a\neq b$. To this end, fix $b$ and try to use the quadratic formula to find whether there is a value of $a$ that satisfies the equation. Spoiler alert: $\Delta&lt;0$.</p>
2,699,170
<p>How to evaluate $$ \int \frac{1}{ \ln x} \ \mathrm{d} x, $$ where $\ln x$ denotes the natural logarithm of $x$? </p> <p>My effort: </p> <blockquote> <p>We note that $$ \int \frac{1}{ \ln x} \ \mathrm{d} x = \int \frac{x}{x \ln x} \ \mathrm{d} x = \int x \frac{ \mathrm{d} }{ \mathrm{d} x } \left( \ln \ln x \right) \ \mathrm{d} x = x \ln \ln x - \int \ln \ln x \ \mathrm{d} x. $$ </p> </blockquote> <p>What next? </p> <p>Another approach: </p> <blockquote> <p>We can also write $$ \int \frac{1}{ \ln x } \ \mathrm{d} x = \frac{x}{\ln x } + \int \frac{1}{ \left( \ln x \right)^2 } \ \mathrm{d} x = \frac{x}{\ln x } + \frac{x}{ \left( \ln x \right)^2 } + \int \frac{ 2 }{ \left( \ln x \right)^3 } \ \mathrm{d} x = \ldots = x \sum_{k=1}^n \left( \ln x \right)^{-k} + n \int \left( \ln x \right)^{-n-1} \ \mathrm{d} x. $$</p> </blockquote> <p>What next? </p> <p>Which one of the above two approaches, if any, is going to lead to a function consisting of finitely many terms comprised of elementary functions, that is, the kinds of solutions that we are used to in calculus courses? </p> <p>Or, is there any other way that can lead us to a suitable enough answer? </p>
The Integrator
538,397
<p>I = $\large\int\frac{1}{\ln(x)}dx$</p> <p>let $\ln(x) = u$</p> <p>$\,e^u = x$</p> <p>$\,dx = e^udu$</p> <p>I = $\,\int \frac{e^u}{u}du$</p> <p>expanding $e^u$,</p> <p>I=$\,\int\frac{1+u+\frac{u^2}{2!}+\frac{u^3}{3!}+\frac{u^4}{4!}+\frac{u^5}{5!}+\cdots}{u}du$</p> <p>I = $\,\int\frac{1}{u}+1+\frac{u}{2!}+\frac{u^2}{3!}+\frac{u^3}{4!}+\frac{u^4}{5!}+\cdots \,du$</p> <p>I = $\,\ln(u)+ u + \frac{u^2}{2\cdot2!}+\frac{u^3}{3\cdot3!}+\frac{u^4}{4\cdot4!}+\frac{u^5}{5\cdot5!}+\cdots+ C$</p> <p>I = $\,\ln(\ln(x)) + \ln(x) + \frac{(\ln(x))^2}{2\cdot2!} + \frac{(\ln(x))^3}{3\cdot3!} + \frac{(\ln(x))^4}{4\cdot4!} + \frac{(\ln(x))^5}{5\cdot5!} + \cdots + C $</p> <p>I = $\,\ln(\ln(x)) + \ln(x) + \sum_{k=2}^\infty \frac{(\ln(x))^k}{k\cdot k!} + C $</p>
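As a sanity check (my addition), one can differentiate a truncation of this series numerically and compare with $1/\ln x$; the helper name `F` is mine:

```python
import math

def F(x, terms=60):
    # truncation of ln(ln x) + ln x + sum_{k>=2} (ln x)^k / (k * k!)
    u = math.log(x)
    s = math.log(u) + u
    fact = 1.0
    for k in range(2, terms + 1):
        fact *= k                  # fact == k!
        s += u ** k / (k * fact)
    return s

x, h = 3.0, 1e-5
deriv = (F(x + h) - F(x - h)) / (2 * h)   # central difference
target = 1 / math.log(x)                  # the integrand at x
```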
181,367
<p>It is well known that compactness implies pseudocompactness; this follows from <a href="https://secure.wikimedia.org/wikipedia/en/wiki/Heine%E2%80%93Borel_theorem">the Heine–Borel theorem</a>. I know that the converse does not hold, but what is a counterexample?</p> <p>(A <a href="https://secure.wikimedia.org/wikipedia/en/wiki/Pseudocompact_space"><strong>pseudocompact space</strong></a> is a topological space $S = \langle X,{\mathfrak I}\rangle$ such that every continuous function $f:S\to\Bbb R$ has bounded range.)</p>
GEdgar
442
<p>A favorite example (and counterexample) to many things is the first uncountable ordinal $\omega_1$ in its order topology: $[0,\omega_1)$. It is pseudo-compact but not compact.</p>
1,692,757
<p>I was required to find the derivative of $2\sqrt{\cot(x^2)}$.</p> <p><strong>My solution</strong></p> <p><a href="https://i.stack.imgur.com/N98SM.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/N98SM.jpg" alt="enter image description here"></a></p> <p>I can't find any mistake in my solution but in my book following solution is given:</p> <p><a href="https://i.stack.imgur.com/KBmH6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KBmH6.png" alt="enter image description here"></a></p> <p>Of course my answer and the answer in book are not same(I have plotted the graph of both and they don't overlap). </p> <p>I understand the solution given in my book.</p> <p>I'm asking for help to figure out where I have made mistake in my solution. </p>
heropup
118,193
<p>Note that $$f(x) = 2 \sqrt{\cot x^2}$$ is real-valued if and only if $\cot x^2 \ge 0$, so for $f : \mathbb R \to \mathbb R$, we must have $$x^2 \in \bigcup_{k=-\infty}^\infty (\pi k, \pi(k+1/2)].$$ On this domain, we typically take the nonnegative square root. Thus $f \ge 0$ for all such $x$. We also note that because $\cot x^2$ is an even function, we can restrict our attention to the behavior of $f$ for $x &gt; 0$. We would note that on this interval $f'(x) &lt; 0$ since $(\cot x)' = - \csc^2 x &lt; 0$. </p> <p>The book's solution does not satisfy this condition. When $x$ lies in an interval for which $k$ is odd, the answer given by the book is positive (I leave the proof of this claim as an exercise to the reader). Consequently, I would regard the book's answer as incorrect, as it chooses a branch of the square root that is not consistent with the same choice for $f$ itself.</p> <p>I would go further to say that there is also a flaw in the algebra. If we write $$\frac{1}{\sqrt{\cot x^2}} \csc^2 x^2 = \frac{\sqrt{\sin x^2}}{\sqrt{\cos x^2}} \cdot \frac{1}{\sin^2 x^2},$$ then we immediately have a problem for the same reason that we cannot write $$\sqrt{-1} \cdot \sqrt{-1} = \sqrt{(-1)(-1)} = \sqrt{1} = 1.$$ That is to say, the cancellation followed then the use of the double-angle identity is improper when $\sin x^2 &lt; 0$. This becomes evident if you compare the plots of $\sqrt{\tan \theta} \cdot \csc \theta$ versus $\sqrt{2 \csc 2\theta}$: the former function admits negative values because the second factor can be negative; however, the latter function is never negative for the same choice of branch. It is better to simply calculate the derivative as you have done, and leave it at that.</p>
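To see the sign issue numerically (my addition): compare the derivative with the negative sign, $f'(x) = -2x\csc^2(x^2)/\sqrt{\cot x^2}$ (the one consistent with $f$ decreasing for $x>0$, as argued above), against a central difference at $x=0.5$, where $\cot x^2>0$:

```python
import math

def f(x):
    # f(x) = 2 * sqrt(cot(x^2)), on a branch where cot(x^2) > 0
    return 2 * math.sqrt(1 / math.tan(x * x))

def fprime(x):
    # candidate derivative: -2x * csc^2(x^2) / sqrt(cot(x^2))
    t = x * x
    return -2 * x / (math.sin(t) ** 2 * math.sqrt(1 / math.tan(t)))

x, h = 0.5, 1e-6
numeric = (f(x + h) - f(x - h)) / (2 * h)  # finite-difference estimate
```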
3,086,024
<p>I am trying to solve an exercise from the book "Theory of Numbers" by B.M.Stewart. The exercise is the following one:</p> <blockquote> <p>Let <span class="math-container">$T=2^ap_1^{a_1}p_2^{a_2} \dots p_n^{a_n}$</span>, where <span class="math-container">$a \ge0, n\ge0, 2&lt;p_1&lt;p_2&lt;\dots p_n, p_j$</span> odd prime numbers <span class="math-container">$ \forall j=1 \dots n,a_j \ge1$</span> and let <span class="math-container">$S(T)$</span> indicate the number of primitive Pythagorean triplets of side <span class="math-container">$T$</span>. Show that <span class="math-container">$S(T) = 2^{n-1}$</span> if <span class="math-container">$a=0$</span>. </p> </blockquote> <p>The primitive Pythagorean triplets are the solutions of <span class="math-container">$x^2+y^2=z^2$</span> where <span class="math-container">$\gcd(x,y,z)=1$</span> and they are given by <span class="math-container">$$\cases {x=2uv \\y=u^2-v^2\\z=u^2+v^2\\u&gt;v&gt;0\\ \gcd(u,v)=1\\ u \not\equiv v \pmod2} $$</span></p> <p>If <span class="math-container">$a=0$</span> then <span class="math-container">$T=p_1^{a_1}p_2^{a_2} \dots p_n^{a_n}$</span>, so <span class="math-container">$T \ne 2uv$</span>. Then <span class="math-container">$T=y$</span> or <span class="math-container">$T=z$</span>. </p> <p>If <span class="math-container">$T=y$</span>, then <span class="math-container">$$T=u^2-v^2=(u+v)(u-v)=p_1^{a_1}p_2^{a_2} \dots p_n^{a_n}.$$</span> So for <span class="math-container">$(u+v)$</span> I have <span class="math-container">$2^n$</span> possibilities because I can count the number of prime factors which is <span class="math-container">$\tau(r)$</span> with <span class="math-container">$r=p_1p_2 \dots p_n$</span>.</p> <p>From there I don't know how to proceed. Have you any idea?</p>
poetasis
546,655
<p>I'm not sure this helps because I don't understand the function completely. The formula I've always seen for generating triples is: <span class="math-container">$$A=m^2-n^2\qquad B=2mn\qquad C=m^2+n^2$$</span> and it is useful but it generates extraneous and trivial triples if <span class="math-container">$m\le n$</span>; it also generates doubles and odd-or-even square multiples of primitive triples. We can generate triples for all <span class="math-container">$m,n\in\mathbb{N}$</span> where <span class="math-container">$GCD(A,B,C)$</span> is an odd square (which include all primitives) and get distinct <span class="math-container">$sets$</span> of triples by replacing <span class="math-container">$(m,n)\text{ with }((2m-1+n),n).$</span> Expanded, this becomes:</p> <p><span class="math-container">$$A=(2m-1)^2+2(2m-1)n\quad B=2(2m-1)n+2n^2\quad C=(2m-1)^2+2(2m-1)n+2n^2$$</span> and the result is as shown in the sample below.</p> <p><span class="math-container">$$\begin{array}{c|c|c|c|c|} \text{$Set_n$}&amp; \text{$Triple_1$} &amp; \text{$Triple_2$} &amp; \text{$Triple_3$} &amp; \text{$Triple_4$}\\ \hline \text{$Set_1$} &amp; 3,4,5 &amp; 5,12,13&amp; 7,24,25&amp; 9,40,41\\ \hline \text{$Set_2$} &amp; 15,8,17 &amp; 21,20,29 &amp;27,36,45 &amp;33,56,65\\ \hline \text{$Set_3$} &amp; 35,12,37 &amp; 45,28,53 &amp;55,48,73 &amp;65,72,97 \\ \hline \text{$Set_{25}$} &amp;2499,100,2501 &amp;2597,204,2605 &amp;2695,312,2713 &amp;2793,424,2825\\ \hline \end{array}$$</span></p> <p>In all cases, <span class="math-container">$m$</span> is the set number and <span class="math-container">$n$</span> is the <span class="math-container">$n^{th}$</span> member of the set.</p> <p>We can see that <span class="math-container">$f(2,3)=(27,36,45)$</span> is <span class="math-container">$9\text{ times }(3,4,5)$</span> and this is because <span class="math-container">$(2m-1)=(2*2-1)=3$</span>, and <span class="math-container">$GCD((2m-1),n)=3.$</span> Whenever <span class="math-container">$GCD((2m-1),n)=1$</span>, a primitive will be generated; otherwise, an odd square multiple will be generated. Only in <span class="math-container">$Set_1 (m=1)$</span> or (rotated <span class="math-container">$90^\circ$</span>), in the first member of each set <span class="math-container">$(k=1)$</span>, are all triples primitive and these can be generated by <span class="math-container">$$\text{For }m=1:\quad A=2k+1\qquad B=2k^2+2k\qquad C=2k^2+2k+1$$</span></p> <p><span class="math-container">$$\text{For }k=1:\quad A=4m^2-1\qquad B=4m\qquad C=4m^2+1$$</span></p> <p>Note that prime values of <span class="math-container">$C$</span> can appear in all sets but prime values of <span class="math-container">$A$</span> can only appear in <span class="math-container">$Set_1(m=1)$</span> because <span class="math-container">$$A=(2m-1)^2+2(2m-1)n=(2m-1)(2m-1+2n)$$</span> so <span class="math-container">$m&gt;1$</span> results in a composite number <span class="math-container">$A$</span>.</p> <p>If you need help finding triples for given sides see <a href="https://math.stackexchange.com/questions/1590629/how-to-find-all-pythagorean-triples-containing-a-given-number/3272945#3272945">this</a>.</p> <p>I hope this helps in your search for primitives and primes.</p>
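A minimal Python sketch (my addition; the helper name `triple` is mine) of the classic parametrization quoted at the start of the answer:

```python
from math import gcd

def triple(m, n):
    # classic parametrization A = m^2 - n^2, B = 2mn, C = m^2 + n^2,
    # intended for m > n > 0
    return m * m - n * n, 2 * m * n, m * m + n * n

a, b, c = triple(2, 1)                  # the (3, 4, 5) triple
is_primitive = gcd(gcd(a, b), c) == 1   # gcd 1 means a primitive triple
```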
1,384,735
<p>What is the ODE satisfied by $y=y(x)$ </p> <p>given that $$\frac{dy}{dx} = \frac{-x-2y}{y-2x}$$</p> <p>I understand that I need to get it in some form of $\int \cdots \;dy = \int \cdots \; dx$, but am not sure how to go about it.</p>
Dmytro Chasovskyi
215,170
<p>If you have this equation:</p> <p>$$\frac{dy}{dx} = \frac{-x-2y}{y-2x}$$</p> <p>you may do: $$ ( y - 2 \cdot x) \cdot dy = - ( x + 2 \cdot y ) dx $$ After that $$ y dy + x dx = -2 y^2 \cdot \frac{ydx-xdy}{y^2}$$ Small tricks: $$\int y~dy = \frac{y^2}{2} + C $$ $$ d\bigg(\frac{x}{y}\bigg) = \frac{ydx-xdy}{y^2} $$</p> <p>With these small tips you can solve the problem with elementary integration theory.</p>
64,544
<blockquote> <p>Please let me know what is the standard notation for group action.</p> </blockquote> <p>I saw the following three notations for group action. (All the images obtained as <code>G\acts X</code> for different deinitions of <code>\acts</code>.) </p> <p>(1) <img src="https://lh5.googleusercontent.com/_7jyZyirE1is/TcM6Q736oVI/AAAAAAAAACU/7li7VA1-FTc/s144/B.png" alt="alt text"> </p> <p>I saw this one most, but only in handwriting and I like it. But I did not find a better way to write it in LaTeX.</p> <pre><code>\usepackage{mathabx,epsfig} \def\acts{\mathrel{\reflectbox{$\righttoleftarrow$}}} </code></pre> <p>(2) <img src="https://lh5.googleusercontent.com/_7jyZyirE1is/TcM6XymimxI/AAAAAAAAACY/byg0FDFv7wM/s144/C.png" alt="alt text"> </p> <p>It is almost as good as 1, but in handwriting this arrow can be taken as $G$.</p> <pre><code>\usepackage{mathabx} \def\acts{\lefttorightarrow} </code></pre> <p>(3) <img src="https://lh6.googleusercontent.com/_7jyZyirE1is/TcM6JeflvjI/AAAAAAAAACQ/n_dfYCfeQEc/s800/A.png" alt="alt text"> </p> <p>I saw this one in print, I guess it is used since there is no better symbol in "amssymb".</p> <pre><code>\usepackage{amssymb} \def\acts{\curvearrowright} </code></pre>
Dick Palais
7,311
<p>I guess I am somewhat of a minimalist when it comes to notation. I have spent a lot of my career writing about group actions, and what I usually have done is start out defining what a group action is, and then say something like ``If we have in mind a fixed action of G on X then we will say that X is a G-space and we will denote by gx the result of the action of g in G on an element x of X.''</p> <p>To be more pedantic, the point is that for a fixed group G, G-spaces form a category and if you ever had two different actions of G on the same topological space, you really should use two different symbols to denote that space with the different actions And it is convenient to have a morphism (equivariant map) $\phi$ between G-spaces just mean that $\phi(gx) = g \phi(x)$.</p>
316,865
<p>How do you find this limit?</p> <p>$$\lim_{x \rightarrow \infty} \sqrt[5]{x^5-x^4} -x$$</p> <p>I was given a clue to use L'Hospital's rule.</p> <p>I did it this way:</p> <p><strong>UPDATE 1:</strong> $$ \begin{align*} \lim_{x \rightarrow \infty} \sqrt[5]{x^5-x^4} -x &amp;= \lim_{x \rightarrow \infty} x\begin{pmatrix}\sqrt[5]{1-\frac 1 x} -1\end{pmatrix}\\ &amp;= \lim_{x \rightarrow \infty} \frac{\sqrt[5]{1-\frac 1 x} -1}{\frac1x} \end{align*} $$</p> <p>Applying L' Hospital's, $$ \begin{align*} \lim_{x \rightarrow \infty} \frac{\sqrt[5]{1-\frac 1 x} -1}{\frac1x}&amp;= \lim_{x \rightarrow \infty} \frac{0.2\begin{pmatrix}1-\frac 1 x\end{pmatrix}^{-0.8}\begin{pmatrix}-x^{-2}\end{pmatrix}(-1)} {\begin{pmatrix}-x^{-2}\end{pmatrix}}\\ &amp;= -0.2 \end{align*} $$</p> <p>However the answer is $0.2$, so I would like to clarify the correct use of L'Hospital's</p>
muzzlator
60,855
<p>Your working out is fine and you've shown all the steps now. $$\sqrt[5]{x^5 - x^4} &lt; x$$ and so a negative limit is more likely than a positive limit :)</p>
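A numeric check (my addition) confirming the limit is $-0.2$ rather than $0.2$:

```python
# x * ((1 - 1/x)^(1/5) - 1) should tend to -1/5 as x grows
values = [x * ((1 - 1 / x) ** 0.2 - 1) for x in (1e2, 1e4, 1e6)]
```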
2,083,127
<p>How to show that $\lim_{n \rightarrow \infty} \frac{[a^{n+1}]}{[a^n]}=a$, where $[a]$ = integer part of a?<br> Here $a&gt;1$. But I suspect it is true for all $a \ne 0$. </p>
Community
-1
<p>For $|a|&gt;1$,</p> <p>$$\frac{[a^{n+1}]}{[a^n]}=\frac{a^{n+1}-\{a^{n+1}\}}{a^n-\{a^n\}}=a\frac{1-\dfrac{\{a^{n+1}\}}{a^{n+1}}}{1-\dfrac{\{a^{n}\}}{a^n}}\to a.$$</p> <p>As the fractional parts are bounded, the numerator and denominator both tend to $1$.</p> <hr> <p>This can be extended to $|a|\ge1$ as with $|a|=1$, the fractional parts are all $0$.</p> <hr> <p>For $0\le a&lt;1$, the limit is undefined (none of the ratios are defined).</p> <hr> <p>For $-1&lt;a&lt;0$, the limit is $-1$ (all ratios are $-1$).</p>
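A quick numeric illustration (my addition) with $a = 1.5$; floating point is accurate enough here since the fractional parts are bounded:

```python
import math

a = 1.5
# [a^(n+1)] / [a^n] for growing n; should approach a
ratios = [math.floor(a ** (n + 1)) / math.floor(a ** n) for n in (5, 20, 60)]
```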
1,594,722
<p>The ODE is</p> <p>$(xy^{3} + x^{2}y^{7}) \frac{dy}{dx} = 1$</p> <p>I have tried everything, like an integrating factor; it is not homogeneous and it is not a linear differential equation. What should be done now?</p>
Ekaveera Gouribhatla
31,458
<p>HINT: The equation can be written as:</p> <p>$$y^3\frac{dy}{dx}=\frac{1}{x(1+xy^4)}$$</p> <p>Put $y^4=t$</p>
2,037,704
<p>What symmetry property in complex space is related to the fact that the absolute values of the numbers $a+ib$ and $b+ia$ are equal, i.e. $|a+ib| = |b+ia|$?</p>
israel.sincro
394,550
<p>Reading the comments of fleablood and Widawensen about the numbers laid on a circle, I think that the correct answer is:</p> <p>The wanted property is that the absolute value of a number must be invariant under any axis rotation. In this case all the numbers $z_1 = a + ib$, $z_2 = -a + ib$, $z_3 = a - ib$, $z_4 = -a - ib$, $z_5 = b + ia$, $z_6 = -b + ia$, $z_7 = b - ia$ and $z_8 = -b - ia$, and infinitely many others, have the same absolute value.</p>
517,282
<p>Suppose $a,n \in \mathbb{Z}$, and $n&gt;a&gt;0$. How do I prove that $\nexists x \in \mathbb{Z}$ s.t. $nx = a$ ? I'm really not sure where to start on this one. I'd be happy if someone could give me a hint.</p> <p>Edit: I've solved this by contradiction, but I will not be 'accepting' an answer from below because I did not use any one of them in a significant way to solve the problem.</p>
Henry Swanson
55,540
<p>Assume there is such an $x$. Since $nx = a$, then $0 &lt; nx$ and $nx &lt; n$. Can you now prove that $0 &lt; x$ and $x &lt; 1$? And can you prove that that is a contradiction?</p> <p>Edit: changed $p &lt; q &lt; r$ statements to $p &lt; q$ and $q &lt; r$ statements. Because hypothetically, I forgot what $&lt;$ means.</p> <p>Edit 2: Electric Boogaloo</p> <p>So one way of defining $\mathbb{Z}$ is that it is a commutative ring with a subset $\mathbb{N}$ such that:</p> <ol> <li>Non-triviality: $\mathbb{N} \ne \emptyset$.</li> <li>Closure: for all $a,b \in \mathbb{N}$, $ab \in \mathbb{N}$.</li> <li>Trichotomy: For all $x \in \mathbb{Z}$, exactly one of the following is true: $x \in \mathbb{N}$, $x = 0$, $-x \in \mathbb{N}$.</li> <li>Well-ordering principle: If $S \subseteq \mathbb{N}$ and is non-empty, there is an $x \in S$ such that for all $y \in S$, $x \le y$.</li> </ol> <p>Now we can define $&lt;$. We say $a &lt; b$ if $b - a \in \mathbb{N}$.</p> <p>So if you want to prove your statement from the ground up, you should prove:</p> <ol> <li>$a &lt; b$ and $b &lt; c \implies a &lt; c$ (after this point you can write $a &lt; b &lt; c$)</li> <li>$ab &gt; 0$ and $b &gt; 0 \implies a &gt; 0$ (useful for part 3)</li> <li>$ac &lt; bc$ and $c &gt; 0 \implies a &lt; b$ (division not allowed)</li> <li>$0 &lt; 1$, or equivalently $1 \in \mathbb{N}$ (master troll)</li> <li>$\not\exists x \in \mathbb{Z} \ 0 &lt; x &lt; 1$ (this one uses well-ordering)</li> </ol>
3,981,809
<p>Imagine that we have two pairs of integers <span class="math-container">$(a_1,b_1)$</span> and <span class="math-container">$(a_2, b_2)$</span> where</p> <p><span class="math-container">$$ a_1b_1\equiv 0,\,\ a_2b_2\equiv 0,\,\ a_1b_2+a_2b_1\equiv 0\pmod n$$</span></p> <p>Does this imply that <span class="math-container">$$ a_1 b_2 \equiv 0\pmod n $$</span></p> <p>I assume that <span class="math-container">$a_1 b_1 \equiv 0\pmod n$</span> is only true if <span class="math-container">$a_1$</span> and <span class="math-container">$b_1$</span> are divisors of <span class="math-container">$n$</span>. Using that, I've checked <span class="math-container">$a_1b_2 = 0$</span> mod <span class="math-container">$n$</span> numerically for a large number of values of <span class="math-container">$n$</span> and it seems to hold, but I am not sure how to prove it.</p>
Community
-1
<p>The result is true.</p> <p>Let <span class="math-container">$x=a_1b_2$</span> and <span class="math-container">$y=a_2b_1$</span> and let <span class="math-container">$p^m$</span> divide <span class="math-container">$n$</span> for some prime <span class="math-container">$p$</span>.</p> <p>Then <span class="math-container">$xy = 0 \mod p^{2m}$</span> and so <span class="math-container">$p^m$</span> divides at least one of <span class="math-container">$x$</span> and <span class="math-container">$y$</span>.</p> <p>However <span class="math-container">$x+y = 0 \mod p^m$</span> and so <span class="math-container">$p^m$</span> divides both <span class="math-container">$x$</span> and <span class="math-container">$y$</span>.</p> <p>Since this is true for all prime power divisors of <span class="math-container">$n$</span> it is true for <span class="math-container">$n$</span>. Thus <span class="math-container">$n$</span> is a factor of <span class="math-container">$x=a_1b_2$</span>, as required.</p>
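The result can also be verified by brute force for small moduli (my addition; the helper name `holds` is mine):

```python
def holds(n):
    # check: a1*b1 = a2*b2 = a1*b2 + a2*b1 = 0 (mod n)  implies  a1*b2 = 0 (mod n)
    for a1 in range(n):
        for b1 in range(n):
            if (a1 * b1) % n:
                continue
            for a2 in range(n):
                for b2 in range(n):
                    if (a2 * b2) % n or (a1 * b2 + a2 * b1) % n:
                        continue
                    if (a1 * b2) % n:
                        return False
    return True

results = [holds(n) for n in range(1, 13)]
```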
149,558
<p>I always use <code>InputForm</code> to check a result object, such as a <code>Dataset</code>, a <code>Graphics</code>, or other objects. But inside the result of <code>InputForm</code> you cannot use the front-end's bracket-balancing feature. Note this gif:</p> <p><a href="https://i.stack.imgur.com/51OYd.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/51OYd.gif" alt="enter image description here" /></a></p> <p>When I double-click in the input line, I select exactly the content inside the bracket. But when I am in the result of <code>InputForm</code>, I select the whole line. Of course I can copy the output of <code>InputForm</code> into a new input cell, but that makes the notebook messier.</p> <p>Is there any method to make the output of <code>InputForm</code> support bracket balancing?</p>
kglr
125
<p>If you have Version 11 you can use the function <code>PrettyForm</code> instead of processing <code>InputForm</code> output to get the desired result: </p> <pre><code>Needs["GeneralUtilities`"] Interpolation[{1, 2, 3, 4}] // PrettyForm </code></pre> <p><a href="https://i.stack.imgur.com/VE2qd.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/VE2qd.jpg" alt="enter image description here"></a></p> <pre><code>ListLinePlot[Range[5]^2] // PrettyForm </code></pre> <p><img src="https://i.stack.imgur.com/t93Yn.png" alt="Mathematica graphics"></p> <p>For earlier versions you can also wrap <code>InputForm</code> with <code>DisplayForm</code>:</p> <pre><code>ListLinePlot[Range[5]^2] // InputForm // DisplayForm </code></pre> <p><img src="https://i.stack.imgur.com/jGk7L.png" alt="Mathematica graphics"></p>
6,931
<p>One of the key steps in <a href="http://en.wikipedia.org/wiki/Merge_sort">merge sort</a> is the merging step. Given two sorted lists</p> <pre><code>sorted1={2,6,10,13,16,17,19}; sorted2={1,3,4,5,7,8,9,11,12,14,15,18,20}; </code></pre> <p>of integers, we want to produce a new list as follows:</p> <ol> <li>Start with an empty list <code>acc</code>.</li> <li>Compare the first elements of <code>sorted1</code> and <code>sorted2</code>. Append the smaller one to <code>acc</code>.</li> <li>Remove the element used in step 2 from either <code>sorted1</code> or <code>sorted2</code>.</li> <li>If neither <code>sorted1</code> nor <code>sorted2</code> is empty, go to step 2. Otherwise append the remaining list to <code>acc</code> and output the value of <code>acc</code>.</li> </ol> <p>Applying this process to <code>sorted1</code> and <code>sorted2</code>, we get</p> <pre><code>acc={1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20} </code></pre> <p><em>Added in response to Rojo's question: We can carry out this procedure even if the two lists are not pre-sorted. So <code>list1</code> and <code>list2</code> below are not assumed to be sorted.</em></p> <p>If there were a built-in function <code>MergeList</code> which carries out this process, it would probably take three arguments <code>list1</code>, <code>list2</code>, and <code>f</code>. Here <code>f</code> is a Boolean function of two arguments used to decide which element to pick. In the case of merge sort, <code>f = LessEqual</code>. 
I feel that <code>MergeList</code> is a fundamental list operation, so</p> <p><strong>Question 1: Is there such a built-in function or one very close to that?</strong></p> <p>If I were to write such a function in Scheme, I would use a recursive definition equivalent to the following:</p> <pre><code>MergeList[list1_,{},f_,acc_:{}]:=Join[acc,list1]; MergeList[{},list2_,f_,acc_:{}]:=Join[acc,list2]; MergeList[list1_,list2_,f_,acc_:{}]:= If[ f@@First/@{list1,list2}, MergeList[Rest[list1],list2,f,Append[acc,First[list1]]], MergeList[list1,Rest[list2],f,Append[acc,First[list2]]] ] </code></pre> <p><em>Sample output with unsorted lists:</em></p> <pre><code>In[2]:= MergeList[{2,5,1},{3,6,4},LessEqual] Out[2]= {2,3,5,1,6,4} </code></pre> <p>My impression is that recursive solutions tend to be inefficient in Mathematica, so</p> <p><strong>Question 2: What would be a better way to implement <code>MergeList</code>?</strong></p> <p>If you have tips about converting loops into their functional equivalents, feel free to mention them as well.</p>
Heike
46
<p>Here's another approach. </p> <pre><code>mergeLists[lista_, listb_, crit_: LessEqual] := Module[{merge}, merge[list1_, list2_] /; crit[First[list1], First[list2]] := With[{part = TakeWhile[list1, crit[#, First[list2]] &amp;]}, Sow[part]; If[Length[part] == Length[list1], Sow[list2], merge[list1[[Length[part] + 1 ;;]], list2]]]; merge[list2_, list1_] /; crit[First[list1], First[list2]] := merge[list1, list2]; merge[list1_, list2_] := With[ {part = TakeWhile[list1, Not[crit[First[list2], #]] &amp;]}, Sow[part]; If[Length[part] == Length[list1], Sow[list2], merge[list1[[Length[part] + 1 ;;]], list2]]]; Flatten[Reap[merge[lista, listb];][[2]]]] </code></pre> <p>It does give slightly different results from Leonid's code though. For example for </p> <pre><code>list1 = {1, 4, 3}; list2 = {2, 3, 4}; </code></pre> <p>I get with my code</p> <pre><code>mergeLists[{1, 4, 3}, {2, 3, 4}, LessEqual] (* out: {1, 2, 3, 4, 4, 3} *) </code></pre> <p>whereas with Leonid's code I get</p> <pre><code>Block[{$IterationLimit = Infinity}, merge[{1, 4, 3}, {2, 3, 4}, LessEqual]] (* out: {1, 2, 3, 4, 3, 4} *) </code></pre> <p>If I take <code>Less</code> instead of <code>LessEqual</code> I get the same result for both codes, so I expect that it has to do with a different treatment of border cases where the two sublists start with the same element. </p> <p>Taking this issue aside, my code does seem to be faster than Leonid's solution. 
Consider for example (I'm choosing <code>large1</code> and <code>large2</code> such that their intersection is empty to avoid the issue above)</p> <pre><code>{large1, large2} = Partition[RandomSample[Range[20000]], 10000]; </code></pre> <p>then with Leonid's code I get</p> <pre><code>Block[{$IterationLimit = Infinity}, merge[large1, large2, LessEqual]] // Short // AbsoluteTiming (* {0.070483,{9941,7246,4261,11184,10148,1867,12324, &lt;&lt;19986&gt;&gt;,6927,17973,10762,9165,19379,11449,7735}} *) </code></pre> <p>and with my code</p> <pre><code>mergeLists[large1, large2, LessEqual] // Short // AbsoluteTiming (* {0.039470,{9941,7246,4261,11184,10148,1867,12324, &lt;&lt;19986&gt;&gt;,6927,17973,10762,9165,19379,11449,7735}} *) </code></pre>
6,931
Daniel Lichtblau
51
<p>Forget Leonid and Heike's recursive stuff (okay, actually I upvoted both as they are both good responses). But here is a simple, direct version. Note that it will not sort, so if the inputs are unsorted the result will be as well.</p> <pre><code>mergeSortedLists[lista_, listb_, crit_: LessEqual] := Module[ {result, len1 = Length[lista], len2 = Length[listb], i = 1, j = 1}, result = Reap[While[i &lt;= len1 &amp;&amp; j &lt;= len2, If[TrueQ[crit[lista[[i]], listb[[j]]]], Sow[lista[[i]]]; i++, Sow[listb[[j]]]; j++]; ]][[2, 1]]; If[i &lt;= len1, result = Join[result, lista[[i ;; -1]]], If[j &lt;= len2, result = Join[result, listb[[j ;; -1]]]]]; result] </code></pre> <p>Here is Heike's example modified slightly:</p> <pre><code>{large1, large2} = Partition[Sort[RandomSample[Range[100000]]], 50000]; In[2236]:= Timing[ml = mergeSortedLists[large1, large2, LessEqual];] ml === Range[100000] Out[2236]= {0.21, Null} Out[2237]= True </code></pre>
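<p>For readers who want the same two-pointer loop outside Mathematica, here is a minimal sketch in Python (function and argument names are mine, not from any of the answers):</p>

```python
def merge_sorted(a, b, crit=lambda x, y: x <= y):
    """Merge two lists with a two-pointer scan, mirroring the While loop above:
    take from `a` while crit holds, otherwise from `b`, then append the tail."""
    i, j, out = 0, 0, []
    while i < len(a) and j < len(b):
        if crit(a[i], b[j]):
            out.append(a[i])
            i += 1
        else:
            out.append(b[j])
            j += 1
    out.extend(a[i:])  # at most one of these tails is non-empty
    out.extend(b[j:])
    return out

merged = merge_sorted([2, 6, 10, 13, 16, 17, 19],
                      [1, 3, 4, 5, 7, 8, 9, 11, 12, 14, 15, 18, 20])
```

<p>On the question's sample inputs this reproduces <code>Range[20]</code>, and on the unsorted pair <code>{2,5,1}</code>, <code>{3,6,4}</code> it reproduces the output <code>{2,3,5,1,6,4}</code> shown in the question.</p>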
248,710
<p>The organizers of a cycling competition know that about 8% of the racers use steroids. They decided to employ a test that will help them identify steroid-users. The following is known about the test: When a person uses steroids, the person will test positive 96% of the time; on the other hand, when a person does not use steroids, the person will test positive only 9% of the time. The test seems reasonable enough to the organizers. The one last thing they want to find out is this: Suppose a cyclist does test positive, what is the probability that the cyclist is really a steroid-user?</p> <p>Let S be the event that a randomly selected cyclist is a steroid-user and P be the event that a randomly selected cyclist tests positive.</p> <p>My question is: can someone please translate and explain P(P|S) and P(S|P)?</p>
André Nicolas
6,312
<p>$\Pr(P|S)$ is the probability that the person tests positive, <strong>given</strong> that she uses steroids. We are told explicitly that this is $0.96$. </p> <p>$\Pr(S|P)$ is the probability she is a steroid user, <strong>given</strong> that she tests positive. That is what we are asked to find. Informally, if we confine attention to the people who test positive, $\Pr(S|P)$ measures the proportion of them that <em>really</em> are steroid users. </p> <p>Since the problem says that the proportion of steroid users is not high (sure!), many of the positives will be <strong>false</strong> positives. Thus I would expect that $\Pr(S|P)$ will not be very high: the test is not as good as it looks on first sight.</p> <p>For computing, there are two ways I would suggest, the first very informal and probably not acceptable to your grader, and the second more formal.</p> <p>$(1)$: Imagine $1000$ cyclists. About how many of them will test positive? About how many of <strong>these</strong> will be steroid users? Divide the second number by the first, since $\Pr(S|P)$ asks us to confine attention to the subpopulation of people who tested positive. </p> <p>$(2)$: Use the defining formula $$\Pr(S|P)=\frac{\Pr(S\cap P)}{\Pr(P)}.$$ The two numbers on the right are not hard to compute. I can give further help if they pose difficulty. </p>
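<p>A quick numerical sketch of approach $(2)$, using the numbers from the question (variable names are mine):</p>

```python
# Numbers from the question: 8% users, 96% true-positive rate, 9% false-positive rate.
p_s = 0.08                 # Pr(S)
p_pos_given_s = 0.96       # Pr(P|S)
p_pos_given_not_s = 0.09   # Pr(P|not S)

# Law of total probability: Pr(P) = Pr(P|S)Pr(S) + Pr(P|not S)Pr(not S).
p_pos = p_s * p_pos_given_s + (1 - p_s) * p_pos_given_not_s

# Defining formula: Pr(S|P) = Pr(S and P) / Pr(P).
p_s_given_pos = (p_s * p_pos_given_s) / p_pos
```

<p>This gives $\Pr(S|P)\approx 0.48$: as predicted above, less than half of the cyclists who test positive are real steroid users.</p>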
439,918
<p>I'm trying to find an example of a space that is Hausdorff and locally compact but not second countable, and I'm stuck. I searched for an example in the book Counterexamples in Topology, but I couldn't find anything.<br> Thank you for any help.</p>
Brian M. Scott
12,042
<p>Let $Y$ be an uncountable set, let $p$ be a point not in $Y$, and let $X=\{p\}\cup Y$. Let</p> <p>$$\mathscr{B}=\{\{x\}:x\in Y\}\cup\{X\setminus F:F\text{ is a finite subset of }Y\}\;;$$</p> <p>then $\mathscr{B}$ is a base for a Hausdorff, locally compact, compact topology $\tau$ on $X$ that is not second countable. It isn’t even first countable: $p$ has no countable local base. $\langle X,\tau\rangle$ is the <a href="http://en.wikipedia.org/wiki/One-point_compactification#The_one-point_compactification" rel="noreferrer">one-point compactification</a> of the discrete space $Y$. (Of course $Y$ itself is also an example, but it’s merely locally compact; $X$ is also compact.)</p> <p>You can make examples with even nicer properties. For instance, $Y\times[0,1]$ is Hausdorff, locally compact, and locally connected but not second countable. The <a href="http://en.wikipedia.org/wiki/Long_line_%28topology%29" rel="noreferrer">closed long ray</a> has all of those properties and is in addition connected, path connected, and locally path connected. Its one-point compactification loses path connectedness but gains compactness.</p>
237,031
<p>The question is: if I assert in ZF that there exists a Reinhardt cardinal, do I really get a theory of higher consistency strength than when I assert in ZFC that there exists an I0 cardinal (the strongest large cardinal not known to be inconsistent with choice, as I understand)? This is implicit in the ordering of things on <a href="http://cantorsattic.info/Upper_attic" rel="noreferrer">Cantor's Attic</a>, for example, but I've been unable to find a proof (granted, I don't necessarily have the best nose for where to look!).</p> <p>One thing that worries me is that when there <em>is</em> a ZFC analog of a ZF statement, many equivalent formulations of the ZFC statement may become inequivalent in ZF. So we don't have much assurance that the usual definition of a Reinhardt cardinal is "correct" in the absence of choice.</p> <p>I think it should be clear that Con(ZF + Reinhardt) implies Con(ZF + I0). But again, it's not clear that ZF+I0 is equiconsistent with ZFC+I0.</p> <p>It's apparently not possible to formulate Reinhardt cardinals in a first-order way, so I should really talk about NBG + Reinhardt, or maybe ZF($j$) + Reinhardt, where ZF($j$) has separation and replacement for formulas involving the function symbol $j$.</p> <p><strong>EDIT</strong></p> <p>Since this question has attracted a bounty from Joseph Van Name, maybe it's appropriate to update it a bit. Now, I'm not actually a set theorist, but it's not even clear to me that Con(ZF + Reinhardt) implies Con(ZFC + an inaccessible). So perhaps the question should really be: what large cardinal strength, if any, can we extract from the theory ZF + Reinhardt?</p>
Master
141,402
<p>Mohammed Golshani's link doesn't work, so I have reconstructed a sketch of a proof. The key fact is this (You can find a proof in most Set Theory textbooks):</p> <p><strong>Theorem:</strong> If <span class="math-container">$DC_\omega$</span> holds and <span class="math-container">$D$</span> is an <span class="math-container">$\omega_1$</span>-complete ultrafilter, then the ultraproduct of <span class="math-container">$N$</span> by <span class="math-container">$D$</span> is well-founded, for any inner model <span class="math-container">$N$</span>.</p> <p><strong>Theorem:</strong> If <span class="math-container">$DC_\lambda$</span> holds, then <span class="math-container">$\kappa$</span> is <span class="math-container">$I0$</span> (with target <span class="math-container">$\lambda$</span>) if and only if there is a non-principal <span class="math-container">$\kappa$</span>-complete <span class="math-container">$L(V_{\lambda+1})$</span>-ultrafilter on <span class="math-container">$V_{\lambda+1}$</span>.</p> <p><em>Proof.</em> Let <span class="math-container">$D$</span> be such an ultrafilter. To verify the existence of such an ultraproduct, we can code elements of <span class="math-container">$D$</span> as functions <span class="math-container">$F: X^{\lt\lambda}\rightarrow X'$</span>: for every set <span class="math-container">$X'=\{\chi_x|x\in V_{\lambda+1}\}$</span>, there is a function <span class="math-container">$f: \lambda\rightarrow X$</span> such that <span class="math-container">$f(\chi_x)\in\chi_x$</span>, and so some <span class="math-container">$A\in D$</span> such that <span class="math-container">$\{\chi_x|x\in A\}$</span> admits a choice function. Then if <span class="math-container">$M\cong Ult_D(L(V_{\lambda+1}))$</span> is the ultrapower, <span class="math-container">$M\ni V_{\lambda+1}$</span>. 
<span class="math-container">$L(V_{\lambda+1})$</span> inherits a well-order for each element (a well-order not necessarily in <span class="math-container">$L(V_{\lambda+1})$</span>) from <span class="math-container">$L(V_{\lambda+1})$</span>. Then by condensation <span class="math-container">$M= L(V_{\lambda+1})$</span> and <span class="math-container">$\kappa$</span> is the critical point of the ultrapower embedding <span class="math-container">$k_D: L(V_{\lambda+1})\prec L(V_{\lambda+1})$</span>. For the other direction, define an ultrafilter <span class="math-container">$D=\{X\subseteq V_{\lambda+1}|j\restriction V_\lambda\in j(X)\}$</span>. This satisfies all the necessary properties.◼</p> <p><strong>Theorem:</strong> If <span class="math-container">$\kappa$</span> is Reinhardt, indeed even the critical point of <span class="math-container">$j: V_{\lambda+2}\prec V_{\lambda+2}$</span>, then <span class="math-container">$\kappa$</span> is <span class="math-container">$I0$</span>.</p> <p><em>Proof.</em> Let <span class="math-container">$\kappa$</span> be Reinhardt as witnessed by <span class="math-container">$j: V\prec V$</span>, and let <span class="math-container">$\lambda$</span> be the least fixed point above <span class="math-container">$\kappa$</span>. Let <span class="math-container">$D=\{X\subseteq V_{\lambda+1}|j\restriction V_\lambda\in j(X)\}\cap L(V_{\lambda+1})$</span>. Then <span class="math-container">$D$</span> satisfies all the necessary properties. The second case is not much trickier, as it uses the same argument.◼</p>
1,135,005
<p>Assume that the order of $a$ modulo $n$ is $h$ and the order of $b$ modulo $n$ is $k$. Show that the order of $ab$ modulo $n$ divides $hk$; in particular, if $\gcd(h, k) = 1$, then $ab$ has order $hk$. </p> <p>My attempt : </p> <p>$$(ab)^{hk}\equiv 1\pmod{n} \implies \text{ord}_n(ab) | hk$$</p> <p>But I feel stuck at showing that if $\gcd(h, k) = 1$, then $ab$ has order $hk$. Do I get any help ? Thanks!</p>
lab bhattacharjee
33,337
<p>If ord$_na=h,$ord$_nb=k$ with $(h,k)=1$</p> <p>If ord$_n(ab)=d\implies (ab)^d\equiv1$</p> <p>$\implies a^{hd}b^{hd}=\left[(ab)^d\right]^h\equiv1\implies b^{hd}\equiv1\implies k|hd$</p> <p>As $(h,k)=1,k|d$</p> <p>Similarly, $h|d\implies$lcm$[h,k]|d\implies hk|d\ \ \ \ (1)$ as $(h,k)=1$</p> <p>Now $(ab)^{hk}=(a^h)^k(b^k)^h\equiv1\pmod n\implies d|hk\ \ \ \ (2)$</p> <p>Can you combine $(1),(2)?$</p>
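<p>A small numerical illustration of the argument (the helper function is mine), working modulo $7$, where ord$(2)=3$ and ord$(6)=2$ are coprime:</p>

```python
from math import gcd

def order_mod(a, n):
    """Multiplicative order of a modulo n; assumes gcd(a, n) == 1."""
    d, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        d += 1
    return d

h, k = order_mod(2, 7), order_mod(6, 7)   # h = 3, k = 2, gcd(h, k) = 1
# ord(2*6 mod 7) = ord(5) should then be exactly h*k = 6, as the argument shows.
```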
1,135,005
Bill Dubuque
242
<p><strong>Hint</strong> $\,\ \color{#c00}{a^{\large h}}\equiv 1\equiv\color{#0a0}{b^{\large k}}\,\Rightarrow\, (ab)^{\large hk} \equiv (\color{#c00}{a^{\large h}})^{\large k}(\color{#0a0}{b^{\large k}})^{\large h}\equiv \color{#c00}1^{\large k}\color{#0a0}1^{\large h}\equiv 1\,\Rightarrow\, {\rm ord}(ab)\mid hk$</p>
3,663,054
<p>In my introductory abstract algebra course, the quotient group <span class="math-container">$G/H$</span> was defined as <span class="math-container">$$G/H=\{gH:g\in G\}$$</span> which is a <strong>set of sets</strong>. In an exercise, I should show that for the group of invertible matrices <span class="math-container">$GL_n(K)$</span> over a field <span class="math-container">$K$</span> and the normal subgroup <span class="math-container">$SL_n(K)$</span> the quotient group is abelian.</p> <p>I'm horribly confused. What is the operation that combines two sets of matrices? What does it mean for two sets of matrices to commute with respect to this operation?</p> <p>I apologize if this is a silly question, but our lecture only ever mentioned modular arithmetic&hellip;</p>
Kavi Rama Murthy
142,385
<p><span class="math-container">$E|X_t|^{2}=\int_0^{t} e^{-2s} ds=\frac 1 2(1-e^{-2t}) &lt;1$</span> for all <span class="math-container">$t$</span> and this implies <span class="math-container">$E|X_t|$</span> is bounded. </p> <p>The limiting distribution is <span class="math-container">$N(0,\int_0^{\infty} e^{-2s} ds)$</span> i.e. <span class="math-container">$N(0, \frac 1 2) $</span>. </p>
470,739
<p>Assume $S$ and $T$ are diagonalizable maps on $\mathbb{R}^n$ such that $S\circ T$=$T \circ S$. Then $S$ and $T$ have a common eigenvector.</p> <p>I already have a proof, but I just need validation in one part. My proof: Let $v$ be an eigenvector of $T$. This means $\exists \; \lambda \in \mathbb{R}$ such that $T(v)=\lambda v$. Then, using the fact that $S\circ T$=$T \circ S$, we have</p> <p>$$ S(T(v)) = (S\circ T)(v)=(T \circ S)(v)=T(S(v)) \Longrightarrow T(S(v))=\lambda S(v)$$</p> <p>Thus, $S(v)$ is also an eigenvector of $T$. So, $S$ maps eigenvectors of $T$ to eigenvectors of $T$. Thus, $S$ must have an eigenvector in common with $T$.</p> <p>How would one rigorously prove that if $S$ maps eigenvectors of $T$ to eigenvectors of $T$, then $S$ also has an eigenvector in common with $T$?</p> <p>Thanks.</p>
Zavosh
28,494
<p>You've shown that the eigenspaces of $T$ are <em>invariant</em> under $S$. If $E_\lambda$ is the $\lambda$-eigenspace of $T$ inside $\mathbb{R}^n$, then it makes sense to speak of $S'=S|_{E_\lambda}: E_\lambda \rightarrow E_\lambda$. Then the key fact is that the characteristic polynomial $p'(T)$ of $S'$ is a factor of the characteristic polynomial $p(T)$ of $S$. Since $p(T)$ splits completely, so does $p'(T)$. In particular, $p'(T)$ has a root, corresponding to an eigenvalue of $S'$. That means $S'$ has an eigenvector in $E_\lambda$, but an eigenvector of $S'$ is simultaneously an eigenvector of $S$ and $T$. </p> <p>To see why the characteristic polynomial of $S'$ divides the characteristic polynomial of $S$, note that $\mathbb{R}^n = E_\lambda \oplus \hat{E_\lambda}$, where $\hat{E_\lambda}$ is the direct sum of all other eigenspaces $E_\mu$ of $T$. Taking a basis of eigenvectors for $T$, and writing the matrix of $S$ with respect to this basis, we see that the characteristic polynomial of $S$ is equal to the characteristic polynomial of $S|_{E_\lambda}$ times the characteristic polynomial of $S|_{\hat{E_\lambda}}$.</p>
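<p>A tiny numerical illustration of the statement being proved (the $2\times 2$ example matrices are mine, not from the post): $S$ and $T$ below commute and are diagonalizable (both symmetric), and $v=(1,1)$ is an eigenvector of both.</p>

```python
def matvec(M, v):
    # 2x2 matrix times vector
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def matmul(M, N):
    # 2x2 matrix product
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

S = [[2, 1], [1, 2]]   # symmetric, hence diagonalizable
T = [[3, 1], [1, 3]]   # symmetric, commutes with S

v = [1, 1]             # common eigenvector: S v = 3 v and T v = 4 v
```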
2,412,454
<p>I was obviously not clear enough in my first question, so I will reformulate. I have the following equation $$ A=\frac{B\sin 2\theta}{C+D\cos 2\theta} $$ where $A,B,C,D$ are variables. I need to solve or rewrite the equation to easily obtain $\theta$ (or $2\theta$), given known values for $A, B, C, D$. Thanks for any help.</p>
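<p>One way to make $\theta$ explicit (this rearrangement is mine, not from the original post): clearing the denominator gives $B\sin 2\theta - AD\cos 2\theta = AC$, which can be written as $R\sin(2\theta-\varphi)=AC$ with $R=\sqrt{B^2+A^2D^2}$ and $\varphi=\operatorname{atan2}(AD,\,B)$, so one solution branch is $2\theta = \varphi + \arcsin(AC/R)$. A round-trip check in Python:</p>

```python
import math

def solve_2theta(A, B, C, D):
    """One solution branch of A = B*sin(2t) / (C + D*cos(2t)), solved for 2t.
    Rearranged as B*sin(2t) - A*D*cos(2t) = A*C, i.e. R*sin(2t - phi) = A*C."""
    R = math.hypot(B, A * D)
    phi = math.atan2(A * D, B)
    return phi + math.asin(A * C / R)

# Round trip: pick a theta, compute A from the original equation, recover 2*theta.
theta = 0.3
B, C, D = 2.0, 3.0, 1.0
A = B * math.sin(2 * theta) / (C + D * math.cos(2 * theta))
```

<p>(Other branches come from the periodicity of $\sin$, e.g. $2\theta = \varphi + \pi - \arcsin(AC/R) + 2k\pi$.)</p>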
Acccumulation
476,070
<p>The precise definition of a relation is that it is a set of ordered pairs. We then say that x~y if (x,y) is in that set. However, that's a rather cumbersome definition; if one wishes to discuss the relation of one integer being larger than another, it would not be possible to list every ordered pair in which the first is larger than the second. So, in practice we often dispense with the set-theoretic definition and instead define relations by giving a proposition that has two free variables. For instance, "x is the square root of y" would be a proposition with two free variables: x and y. The relation is then taken to be the set of ordered pairs for which the proposition is true. So in this case (0,0), (1,1), and (2,4) would be in the relation, but (3,6) would not.</p> <p>What's confusing about this list (besides the use of "in" rather than "on") is that the first four relations are given in this propositional form, while the last two are in the set form. The fifth one has no free variables, and the sixth has only one, so asking whether they are equivalence relations is incoherent in the context of using the propositional definition of relations.</p> <p>P.S. it's good practice when presenting an image to also give a transcript of what's in the image.</p>
368,114
<blockquote> <p>Prove that this set is closed:</p> <p><span class="math-container">$$ \left\{ \left( (x, y) \right) : \Re^2 : \sin(x^2 + 4xy) = x + \cos y \right\} \in (\Re^2, d_{\Re^2}) $$</span></p> </blockquote> <p>I've missed a few days in class, and have apparently missed some very important definitions if they existed. I know that a closed set is a set which contains its limit points (or, equivalently, contains its boundary), but I have <em>no idea</em> how to calculate the limit points of an arbitrary set like this. The only intuition I have to that end is to fix either <span class="math-container">$x$</span> or <span class="math-container">$y$</span> and do calculations from there, since doing it in parallel often ends up in disaster. Besides this, I don't know how to approach this problem.</p> <p>If there is any other extant definition of closed-ness in a metric space, I welcome them wholeheartedly.</p>
Philippe Malot
39,781
<p>Your set is the preimage of the closed set $\{0\}$ by the continuous function $\Bbb R^2\ni(x,y)\mapsto\sin (x^2+4xy)- x\cos y$, hence it's closed. </p> <p>Let $f:X\to Y$ a continuous function (between two metric spaces for example) and $F\subset Y$ a closed set, $G=f^{-1}(F)$. Here are two ways of proving that $G$ is a closed set of $X$.</p> <ol> <li><p>The easiest, evident way : $f$ is continuous, so preimage of open sets are open. Since $F$ is a closed subset of $Y$, $Y\setminus F$ is an open subset of $Y$, and $X\setminus G=f^{-1}(Y\setminus F)$ is then an open subset of $X$, so $G=X\setminus(X\setminus G)$ is a closed subset of $X$.</p></li> <li><p>Using limit points : Let $(x_n)_{n\in\Bbb N}\in G^{\Bbb N}$ a convergent sequence with limit point $x$. You want to prove that $x$ is in $G$. By the definition of $G$, you know that $f(x_n)$ is in $F$ for all $n\in\Bbb N$. By continuity, you (should) know that $\lim f(x_n)=f(\lim x_n)=f(x)$. Since $F$ is closed, it implies that $f(x)$ is in $F$. Again, by the definition of $G$, it means that $x$ is in $G$, so $G$ is closed.</p></li> </ol>
2,684,805
<p>This question is asked by my 12 yr old cousin and I seem to be failing to give him a convincing explanation. Here is the summary of our discussion so far - </p> <p>Case1 : $a&gt;0, b&gt;0$<br> <a href="https://i.stack.imgur.com/fuoZS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fuoZS.png" alt="enter image description here"></a></p> <p>I asked him to put another block of length $a$ adjacent to $b$ and stare at the symmetry. He quickly told me that the mid point of $a, b$ equals half the length of $a+b$ : <a href="https://i.stack.imgur.com/ShBQb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ShBQb.png" alt="enter image description here"></a></p> <p>So far we're good. But when either one of $a,b$ is negative, I feel stuck. I fail to give him a similar explanation using symmetry. Greatly appreciate any help. Thanks! </p>
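<p>For what it's worth, the algebra behind the picture works unchanged for negative numbers: the midpoint $m=\frac{a+b}{2}$ is exactly the point at equal distance from $a$ and $b$, whatever the signs. A quick check (code is mine):</p>

```python
def midpoint(a, b):
    # "walk" from a halfway toward b; algebraically this is (a + b) / 2
    return a + (b - a) / 2

# distance to a equals distance to b, regardless of signs
for a, b in [(-3, 5), (-7, -1), (2, 9)]:
    m = midpoint(a, b)
    assert abs(m - a) == abs(b - m)
    assert m == (a + b) / 2
```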
Parcly Taxel
357,390
<p>Without loss of generality, let $A$ be red. Then $B$ and $D$ can be independently coloured blue or green. If they are different (two ways), $C$ must be red. If they are the same (two ways), $C$ can be either red or the colour that $B$ and $D$ are not.</p> <p>Similar reasoning applies if $A$ is blue or green. Thus there are $3(2\cdot1+2\cdot2)=18$ ways, not 24 as you calculated.</p> <p>A generalised problem, counting the colourings of a square with $n$ colours, is given by <a href="https://oeis.org/A091940" rel="nofollow noreferrer">OEIS A091940</a>.</p>
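<p>The count of $18$ can be confirmed by brute force, assuming the four squares form a cycle $A\!-\!B\!-\!C\!-\!D\!-\!A$ in which adjacent squares must receive different colours (a short sketch, not part of the original answer):</p>

```python
from itertools import product

# Enumerate all 3^4 colourings of A, B, C, D and keep the proper ones.
count = sum(
    1
    for a, b, c, d in product(range(3), repeat=4)
    if a != b and b != c and c != d and d != a
)
# count == 18, matching the chromatic polynomial of C4 at k=3: (k-1)^4 + (k-1).
```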
3,775,749
<p>So far I know that it’s possible to draw angles which are multiples of <strong>15°</strong> (ex. <em><strong>15°</strong></em>, <em><strong>30°</strong></em>, <em><strong>45°</strong></em> etc.).</p> <p>Could anybody please tell me if it's possible to draw other angles, which are not multiples of 15°, using only a compass and a ruler?</p>
loved.by.Jesus
272,774
<p>Concerning the nice answer of Ethan, I must say that not every angle constructed with ruler and compass must form a regular n-gon. This is only a <em>sufficient</em> condition.</p> <h2>Angles with rational trigonometric values</h2> <p>I can give you another sufficient condition to create angles with compass and ruler: You can draw <strong>any angle whose cosine, sine, or tangent is a rational number</strong>, i.e., <span class="math-container">$\dfrac{m}{n}\; |\; m,n\in\mathbb{N}$</span>.</p> <p>The proof is easy. I will give it only for the tangent; the same can be done similarly for cosine and sine. Just take a certain length as unit length and draw a right triangle with the horizontal cathetus having length <span class="math-container">$n$</span> unit lengths and the vertical one having <span class="math-container">$m$</span> unit lengths.</p> <p>For example, with the lengths <span class="math-container">$m=2$</span> for the vertical cathetus and <span class="math-container">$n=3$</span> for the horizontal you have an angle <span class="math-container">$\alpha\approx33.69^{\circ}$</span></p> <p>An example for the cosine: If you take <span class="math-container">$m=11$</span> for the hypotenuse and <span class="math-container">$n=4$</span> for the horizontal cathetus, then you have an angle with cosine <span class="math-container">$\dfrac{4}{11}$</span>, i.e., <span class="math-container">$\alpha\approx68.676^{\circ}$</span></p> <p>The power of this method is that you can compute the trigonometric function (cosine, sine, or tangent) of any angle <span class="math-container">$\alpha$</span> and then approximate it by a rational number. Once you have the rational number (if the numerator and denominator are not too big) you might construct with ruler and compass a quite good approximation of the angle.</p> <h2>About the regular n-gons</h2> <p>Just for the sake of completeness. 
The integer sequence of the numbers of edges of regular polygons constructible with ruler and compass is <a href="https://oeis.org/A003401" rel="nofollow noreferrer">A003401</a></p> <p>The first values of the sequence, i.e., the possible n-gons, are the following:</p> <p><span class="math-container">$1, 2, 3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20, 24, 30, 32, 34, 40, 48, 51, 60, 64, 68, 80, 85, 96, 102, 120, \ldots$</span></p> <p>Now, something very interesting happens with the possible n-gons (constructed with ruler and compass) with an <strong>odd</strong> number of edges. The sequence is finite and quite short (<a href="https://oeis.org/A045544" rel="nofollow noreferrer">A045544</a>)</p> <p><span class="math-container">$3, 5, 15, 17, 51, 85, 255, 257, 771, 1285, 3855, 4369, 13107, 21845, 65535, 65537, 196611, 327685, 983055, 1114129, 3342387, 5570645, 16711935, 16843009, 50529027, 84215045, 252645135, 286331153, 858993459, 1431655765, 4294967295$</span></p> <p>So, if there are no more Fermat primes (<a href="https://www.quora.com/Do-you-believe-there-are-Fermat-primes-bigger-than-65-637?share=1" rel="nofollow noreferrer">as seems to be the case</a>), the <strong>greatest number</strong> of sides of a ruler+compass odd n-gon is <span class="math-container">$4294967295$</span>.</p>
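<p>The "approximate by a rational number" recipe at the end of the first section is easy to try numerically (a Python sketch of my own, not from the answer): take the tangent of the target angle, replace it by a nearby fraction with a small denominator, and see how close the constructible angle lands.</p>

```python
import math
from fractions import Fraction

alpha = math.radians(40)   # target angle: 40 degrees (not constructible exactly)

# Best rational approximation m/n of tan(alpha) with denominator at most 50.
frac = Fraction(math.tan(alpha)).limit_denominator(50)

# Angle of the right triangle with vertical cathetus m and horizontal cathetus n.
approx = math.degrees(math.atan2(frac.numerator, frac.denominator))
# approx differs from 40 degrees by only a small fraction of a degree.
```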
110,162
<p>One can use <code>$Epilog</code> to do something when the Kernel is quit or put an <code>end.m</code> file next to the <code>init.m</code>.</p> <blockquote> <p>For Wolfram System sessions, <code>$Epilog</code> is conventionally defined to read in a file named end.m.</p> </blockquote> <p>But if <code>$Epilog</code> is set by the user, then <code>end.m</code> is skipped.</p> <p><strong>Question:</strong> So what to do if I want something to be done each time the Kernel is quit but I also want to be able to play with <code>$Epilog</code>? In the sense that if I set <code>$Epilog</code>, I would like my default action to be taken anyway.</p> <p>I need to stress that I want to establish an action that will not be accidentally overwritten with daily (but advanced) mma usage.</p>
Szabolcs
12
<p>In principle this can be done using LibraryLink. Just run an action on library unload. The library will be unloaded on kernel exit, if you don't unload it manually before.</p> <p><strong>Warning:</strong> This is a heavyweight solution that just won't be practical in most cases. But it does work and it does not conflict with other packages. If you need to do the cleanup privately, only on your own computer, then it can be useful. If you need to do the cleanup from a package that already uses LibraryLink, then it is useful. Otherwise I wouldn't use it.</p> <p>For reasons of laziness, here it is with LTemplate:</p> <pre><code>&lt;&lt; LTemplate` tem = LClass["Cleanup", {}]; SetDirectory[$TemporaryDirectory]; code = " struct Cleanup { ~Cleanup() { mma::print(\"C++: Cleaning up\"); // this below calls myCleanupFunction[] MLINK link = mma::libData-&gt;getMathLink(mma::libData); MLPutFunction(link, \"EvaluatePacket\", 1); MLPutFunction(link, \"myCleanupFunction\", 0); mma::libData-&gt;processMathLink(link); MLNextPacket(link); MLNewPacket(link); } }; "; Export["Cleanup.h", code, "String"]; CompileTemplate[tem, "ShellOutputFunction" -&gt; Print] LoadTemplate[tem] </code></pre> <p>This will be called on exit:</p> <pre><code>myCleanupFunction[] := Print["Mathematica: Cleaning up"] </code></pre> <p>We create a special object. When this object is no longer referenced, or when the library is unloaded, the cleanup code will be run.</p> <pre><code>globalCleaner = Make["Cleanup"] (* Cleanup[1] *) </code></pre> <p>Now we quit:</p> <pre><code>Quit </code></pre> <p>Quitting triggers unloading the library and this gets printed:</p> <pre><code>C++: Cleaning up Mathematica: Cleaning up </code></pre> <p>Standard LibraryLink is a bit different from LTemplate. You'll need to put the code from the destructor above into <code>WolframLibrary_uninitialize()</code> and there's no need to create a special object like <code>Cleanup[1]</code>. 
I have not tested it (again: laziness), but I expect it will work the same way.</p>
158,896
<p>Being interested in the very foundations of mathematics, I'm trying to build a rigorous proof on my own that $a + b = b + a$ for all $\left[a, b\in\mathbb{R}\right] $. Inspired by interesting properties of the complex plane and some research, I realized that defining multiplication as repeated addition will lead me nowhere (at least, I could not work with it). So, my ideas:</p> <ul> <li><p>Defining addition $a+b$ as a kind of "walking" to the right $\left(b&gt;0\right)$ or to the left $\left(b&lt;0\right)$ a space $b$ from $a$. Adding a number $b$ to a number $a$ (denoted by $a+b$) involves doing the following operation:</p> <ol> <li>Consider the real line $\lambda$ and its origin at $0$. Mark a point $a$, draw another real line $\omega$ above $\lambda$ such that $\omega \parallel \lambda$ and mark a point $b$ on $\omega$. Now, draw a line $\sigma$ such that $\sigma \perp \omega$ and the only point in common between $\sigma$ and $\omega$ is $b$. Consider the point that $\lambda$ and $\sigma$ have in common; this point is nicely denoted as $a + b$.</li> </ol></li> </ul> <p>(Note that all my work is based here. Any problems, and my proof goes to trash)</p> <ul> <li><p>This definition can be used to see the properties of adding two numbers $a$ and $b$, for all $a, b \in\mathbb{R}$.</p></li> <li><p>Using geometric properties may lead us to a rigorous proof (if not, I would like to know the problems of using it).</p></li> </ul> <p>So, I started:</p> <ul> <li>$a, b \in\mathbb{N}$:</li> </ul> <p>$a+b = \overbrace{\left(1+1+1+\cdots+1\right)}^a + \overbrace{\left(1+1+1+\cdots+1\right)}^b = \overbrace{1+1+1+1+\cdots+1}^{a+b} = \overbrace{\left(1+1+1+\cdots+1\right)}^b + \overbrace{\left(1+1+1+\cdots+1\right)}^a = b + a$</p> <p>(Implicitly, I'm using the fact that $\left(1+1\right)+1 = 1+\left(1+1\right)$, which I do not know how to prove, and which I interpret as cutting a segment $c$ in two parts -- $a$ and $b$. 
However, this result can be extended to $\mathbb{Z}$ in the sense that $-a$ $(a &gt; 0)$ is a change of direction, from right to left).</p> <ul> <li>$a, b \in\mathbb{R}$:</li> </ul> <p>Here, we have basically two cases:</p> <ul> <li>$a$ and $b$ are either both positive or both negative;</li> <li>$a$ and $b$, where one of them is negative.</li> </ul> <p>Since in my definition $-b, b&gt;0$ means drawing a point $b$ to the left of the real line, there's no big deal in interpreting it; <em>subtracting</em> can be interpreted now. So, it starts:</p> <p>$a + b = c$. However, $c$ can be cut in two parts: $b$ and $a$. Naturally, if $a&gt;c$, then $b&lt;0$ -- many cases can be listed. So, $c = b + a$. But $c = a + b$; it follows that $a + b = b + a$. My questions:</p> <p><em>Is there any problem in using my definition of adding two numbers $a$ and $b$, which uses many geometric properties? Is there any way to free it from informality? Is there anything right here?</em></p> <p>Thanks in advance.</p>
Community
-1
<p>First you need to define $\mathbb{R}$ in your construction!</p> <p>To define $\mathbb{R}$, one way is to go about defining $\mathbb{N}$, then defining $\mathbb{Z}$, then defining $\mathbb{Q}$ and then finally defining $\mathbb{R}$. Once you have these things set up, proving associativity, commutativity of addition over reals essentially boils down to proving associativity, commutativity of addition over natural numbers.</p> <p>As said earlier, one goes about first defining natural numbers. For instance, $2$ as a natural number is defined as $2_{\mathbb{N}} = \{\emptyset,\{\emptyset\} \}$. We will use the notation that $e$ is $1_{\mathbb{N}}$ and $S(a)$ denotes the successor function applied to $a \in \mathbb{N}$. Then we define addition on natural numbers using the successor function. Addition on natural numbers is defined inductively as $$a +_{\mathbb{N}} e = S(a)$$ $$a +_{\mathbb{N}} S(k) = S(a+k)$$ You can also define $\times_{\mathbb{N}},&lt;_{\mathbb{N}}$ on natural numbers similarly.</p> <p>Then one defines integers as an equivalence class (using $+_{\mathbb{N}}$) of ordered pairs of naturals, i.e., for instance, $2_{\mathbb{Z}} = \{(n+_{\mathbb{N}}2_{\mathbb{N}},n):n \in \mathbb{N}\}$. You can similarly extend the notion of addition and multiplication of two integers, i.e., you can define $a+_{\mathbb{Z}} b$, $a \times_{\mathbb{Z}}b$, $a &lt;_{\mathbb{Z}} b$. Addition, multiplication and ordering of integers are defined as appropriate operations on these sets.</p> <p>Then one moves on to defining rationals as an equivalence class (using $\times_{\mathbb{Z}}$) of ordered pairs of integers. So $2$ as a rational number, $2_{\mathbb{Q}}$, is an equivalence class of ordered pairs $$2_{\mathbb{Q}} = \{(a \times_{\mathbb{Z}} 2_{\mathbb{Z}},a):a \in \mathbb{Z}\backslash\{0\}\}$$ Again define $+_{\mathbb{Q}}, \times_{\mathbb{Q}}$, $a &lt;_{\mathbb{Q}} b$. 
Addition, multiplication and ordering of rationals are defined as appropriate operations on these sets.</p> <p>Finally, a real number is defined as the left Dedekind cut of rationals; i.e., for instance, $2$ as a real number is defined as $$2_{\mathbb{R}} = \{q \in \mathbb{Q}: q &lt;_{\mathbb{Q}} 2_{\mathbb{Q}}\}$$ Addition, multiplication and ordering of reals are defined as appropriate operations on these sets.</p> <p>Once you have these things set up, proving associativity, commutativity of addition over reals essentially boils down to proving associativity, commutativity of addition over natural numbers.</p> <p>Here are proofs of associativity and commutativity in natural numbers using Peano's axiom.</p> <p><strong>Associativity of addition</strong>: $(a+b) + c = a + (b + c )$</p> <p><strong>Proof</strong>:</p> <p>Let $\mathbb{S}$ be the set of all numbers $c$, such that $ (a+b) + c = a + (b + c )$, $ \forall a,b \in \mathbb{N}$.</p> <p>We will prove that $ e$ is in the set and whenever $k \in \mathbb{S}$, we have $S(k) \in \mathbb{S}$. Then by invoking Peano’s axiom (viz, the principle of mathematical induction), we get that $\mathbb{S} = \mathbb{N}$ and hence $ (a+b) + c = a + (b + c )$, $ \forall a,b,c \in \mathbb{N}$.</p> <p>First Step:</p> <p>Clearly, $ e \in \mathbb{S}$. 
This is because of the definition of addition.</p> <p>$ (a+b)+e = S(a+b)$ and $ a + S(b) = S(a+b)$</p> <p>Hence $ (a+b)+e = a + S(b) = a+ (b+e)$</p> <p>Second Step:</p> <p>Assume that the statement is true for some $ k \in \mathbb{S}$.</p> <p>Therefore, we have $ (a+b)+k = a+(b+k)$.</p> <p>Now we need to prove $ (a+b) + S(k) = a+(b+S(k))$.</p> <p>By definition of addition, we have $ (a+b)+S(k) = S((a+b) + k)$</p> <p>By induction hypothesis, we have $ (a+b)+k = a+ (b+k)$</p> <p>By definition of addition, we have $ b + S(k) = S(b+k)$</p> <p>By definition of addition, we have $ a+S(b+k) = S(a+(b+k))$</p> <p>Hence, we get,</p> <p>$ (a + b) + S(k) = S((a+b) + k) = S(a+ (b+k)) = a + S(b+k) = a+ (b + S(k))$</p> <p>Hence, we get,</p> <p>$ (a+b) + S(k) = a + (b+S(k))$</p> <p>Final Step:</p> <p>So, we have $ e \in \mathbb{S}$. And whenever $k \in \mathbb{S}$, we have $S(k) \in \mathbb{S}$.</p> <p>Hence, by the principle of mathematical induction, we have that $\mathbb{S} = \mathbb{N}$, i.e. the associativity of addition, viz, $$(a+b) + c = a + (b+c)$$</p> <p><strong>Commutativity of addition:</strong> $ m + n = n + m$, $ \forall m,n \in \mathbb{N}$.</p> <p><strong>Proof</strong>:</p> <p>Let $ \mathbb{S}$ be the set of all numbers $ n$, such that $ m + n = n + m$, $ \forall m \in \mathbb{N}$.</p> <p>We will prove that $ e$ is in the set $ \mathbb{S}$ and whenever $ k \in \mathbb{S}$, we have $ S(k) \in \mathbb{S}$. Then by invoking Peano's axiom (viz, the Principle of Mathematical Induction), we state that $ \mathbb{S}=\mathbb{N}$ and hence $ m + n = n + m$, $ \forall m,n \in \mathbb{N}$.</p> <p>First Step:</p> <p>We will prove that $ m + e = e + m$ and hence $ e \in \mathbb{S}$.</p> <p>The line of thought for the proof is as follows:</p> <p>Let $ \mathbb{S}_1$ be the set of all numbers $ m$, such that $ m + e = e + m$.</p> <p>We will prove that $ e$ is in the set $ \mathbb{S}_1$ and whenever $ k \in \mathbb{S}_1$, we have $ S(k) \in \mathbb{S}_1$. 
Then by invoking Peano's axiom (viz, the Principle of Mathematical Induction), we state that $ \mathbb{S}_1=\mathbb{N}$ and hence $ m + e = e + m$, $ \forall m \in \mathbb{N}$.</p> <p>To prove: $ e \in \mathbb{S}_1$</p> <p>Clearly, $ e + e = e + e$ (We are adding the same elements on both sides)</p> <p>Assume that $ k \in \mathbb{S}_1$. So we have $ k + e = e + k$.</p> <p>Now to prove $ S(k)+ e = e + S(k)$.</p> <p>By the definition of addition, we have $ e + S(k) = S(e + k)$</p> <p>By our induction step, we have $ e + k = k + e$.</p> <p>So we have $ S(e+k) = S(k+e)$</p> <p>Again by definition of addition, we have $ k + e = S(k)$.</p> <p>Hence, we get $ e + S(k) = S(S(k))$.</p> <p>Again by definition of addition, $ p + e = S(p)$, which gives us $ S(k) + e = S(S(k))$.</p> <p>Hence, we get that $ S(k+e) = S(k) + e$.</p> <p>So we get,</p> <p>$ e + S(k) = S(e+k) = S(k+e) = S(S(k)) = S(k) + e$.</p> <p>Hence, assuming that $ k \in \mathbb{S}_1$, we have $ S(k) \in \mathbb{S}_1$.</p> <p>Hence, by Principle of Mathematical Induction, we have $ m + e = e + m$, $ \forall m \in \mathbb{N}$.</p> <p>Second Step:</p> <p>Assume that $ k \in \mathbb{S}$. We need to prove now that $ S(k) \in \mathbb{S}$.</p> <p>Since $ k \in \mathbb{S}$, we have $ m + k = k + m$.</p> <p>To prove: $ m + S(k) = S(k) + m$.</p> <p>Proof:</p> <p>By definition of addition, we have $ m + S(k) = S(m+k)$.</p> <p>By induction hypothesis, we have $ m + k = k + m$. Hence, we get $ S(m+k) = S(k+m)$.</p> <p>By definition of addition, we have $ k + S(m) = S(k+m)$.</p> <p>Hence, we get $ m + S(k) = S(m+k) = S(k+m) = k + S(m)$.</p> <p>We are not done yet, since we want to prove $ m + S(k) = S(k) + m$.</p> <p>So we are left to prove $ k + S(m) = S(k) + m$.</p> <p>$S(k) +m = (k+e) + m = k + (e+m) = k + (m+e) = k + S(m)$.</p> <p>Hence, we get $ m + S(k) = S(k) + m$.</p> <p>Final Step:</p> <p>So, we have $ e \in \mathbb{S}$. 
And whenever $ n \in \mathbb{S}$, we have $ S(n) \in \mathbb{S}$.</p> <p>Hence, by Principle of Mathematical Induction, we have the commutativity of addition, viz,</p> <p>$ m + n = n + m$, $ \forall m,n \in \mathbb{N}$.</p> <hr> <p>We might think that associativity is harder/lengthier to prove than commutativity, since associativity is on three elements while commutativity is on two elements.</p> <p>On the contrary, if you look at the proof, proving associativity turns out to be easier than commutativity.</p> <p>Note that the definition of addition, viz $m + S(n) = S(m+n)$, incorporates the associativity $m+(n+e) = (m+n)+e$.</p> <p>For commutativity however, we are changing the roles of $m$ and $n$, (we are changing the "order") and no wonder it is "harder/lengthier" to prove it.</p>
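<p>The inductive definitions above can also be sketched in code. The following Python model of Peano naturals is illustrative only (and it uses $0$ rather than $e = 1$ as the base case, so the defining equations read $a + 0 = a$ and $a + S(k) = S(a+k)$); it spot-checks associativity and commutativity on small numerals rather than proving them.</p>

```python
# Minimal Peano-style model of the naturals, with 0 as the base case.
# (The answer uses e = 1 as the base; with 0 the defining equations read
#  a + 0 = a  and  a + S(k) = S(a + k).)
# This spot-checks associativity and commutativity; it does not prove them.

class Zero:
    def __eq__(self, other):
        return isinstance(other, Zero)

class S:
    def __init__(self, pred):
        self.pred = pred
    def __eq__(self, other):
        return isinstance(other, S) and self.pred == other.pred

def add(a, b):
    # a + 0 = a ;  a + S(k) = S(a + k)
    if isinstance(b, Zero):
        return a
    return S(add(a, b.pred))

def nat(n):
    """Embed a Python int as a Peano numeral."""
    return Zero() if n == 0 else S(nat(n - 1))

# Spot-check commutativity and associativity on small numerals.
for i in range(5):
    for j in range(5):
        assert add(nat(i), nat(j)) == add(nat(j), nat(i))
        for k in range(5):
            assert add(add(nat(i), nat(j)), nat(k)) == add(nat(i), add(nat(j), nat(k)))
```

<p>The structural inductions in the proof correspond to the recursion on the second argument of <code>add</code>; the loops here merely sample small cases, whereas the proof covers all of $\mathbb{N}$.</p>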
2,948,045
<p>In Eric Gourgoulhon's "Special Relativity in General Frames", it is claimed that the two-dimensional sphere is not an affine space. Here an affine space of dimension <em>n</em> over <span class="math-container">$\mathbb R$</span> is defined to be a non-empty set E such that there exists a vector space V of dimension <em>n</em> over <span class="math-container">$\mathbb R$</span> and a mapping </p> <p><span class="math-container">$\phi:E \times E \rightarrow V,\space\space\space (A,B) \mapsto \phi(A,B)=:\vec {AB}$</span></p> <p>that obeys the following properties:</p> <p>(i) For any point O <span class="math-container">$\in E$</span>, the function </p> <p><span class="math-container">$\phi_O: E \rightarrow V,\space\space\space M \mapsto \vec {OM}$</span></p> <p>is bijective. </p> <p>(ii) For any triplet (A,B,C) of elements of E, the following relation holds:</p> <p><span class="math-container">$\vec {AB} + \vec {BC} = \vec {AC}.$</span></p> <p>I would like to show that the sphere is not an affine space using this definition. My approach has been to assume that such a <span class="math-container">$\phi$</span> exists and then seek a contradiction. I can construct specific <span class="math-container">$\phi_O$</span>'s that are bijective and I can show that a contradiction arises if I use the same construction centered at a new point A, with <span class="math-container">$\phi_A$</span>, but this only invalidates the specific construction I made. I am having trouble generalizing this to any <span class="math-container">$\phi$</span>. </p>
Squid with Black Bean Sauce
593,937
<p>We could start from an easier problem---show that the <span class="math-container">$1$</span>-dimensional sphere, i.e. the circle <span class="math-container">$S^1$</span>, is not an affine space. </p> <p>This one seems pretty easy---we can pick a point <span class="math-container">$A$</span> in <span class="math-container">$S^1$</span>, a nonzero vector <span class="math-container">$v$</span> and a sequence of <span class="math-container">$n+1$</span> points of <span class="math-container">$S^1$</span>, <span class="math-container">$(A_i)_{0\leq i \leq n}$</span> say, such that <span class="math-container">$$A_0 = A, \quad \overrightarrow{A_i A_{i+1}} = v \text{ for }0 \leq i \leq n-1, \quad A_n = A.$$</span> Then (ii) implies <span class="math-container">$$\sum_{i=0}^{n-1} \overrightarrow{A_i A_{i+1}} = \overrightarrow{AA}.$$</span> From (ii) it also follows that <span class="math-container">$$\overrightarrow{AA} = 0,$$</span> since <span class="math-container">$$\text{(ii)} \implies \overrightarrow{AA}+\overrightarrow{AA} = \overrightarrow{AA} \implies \overrightarrow{AA}=0.$$</span> And to conclude, we thus have <span class="math-container">$$n v = 0,$$</span> which contradicts <span class="math-container">$v$</span> being nonzero. </p> <p><strong>Update:</strong> I think the issue is simple in principle. An affine space is just a space where we may subtract <span class="math-container">$2$</span> points---effectively a weakened linear space without an origin. So some coordinates---not unique but coordinates---must go to infinity. Compact spaces such as spheres cannot be affine spaces.</p> <p>In my proof for the <span class="math-container">$1$</span>-sphere, i.e. the circle, I placed the points regularly, and then saw that the coordinate should have returned to the original one but it did not.</p> <p>We can just do the same for the <span class="math-container">$2$</span>-sphere and others, too, no? 
There are <span class="math-container">$1$</span>-spheres inside the <span class="math-container">$2$</span>-sphere, etc. If the difference were defined and smooth, then we could search for the <span class="math-container">$N$</span> points along the <span class="math-container">$2$</span>-sphere such that the difference between the <span class="math-container">$N$</span>th and <span class="math-container">$(N-1)$</span>st point would be the same for all of them, just as for the <span class="math-container">$1$</span>-sphere. This must be possible if we pick a sufficiently small difference---many points on the circle around the sphere.</p> <p>I would organize the thing by proving a stronger assertion, at least for all <span class="math-container">$D$</span>-spheres but maybe for all compact manifolds. I am positive that at least for the complex affine spaces, the statement about non-compactness of affine spaces is true.</p> <p>On the <span class="math-container">$2$</span>-sphere, if we pick <span class="math-container">$2$</span> nearby points <span class="math-container">$A_1$</span> and <span class="math-container">$A_2$</span>, they have some difference which is "small". So there must exist a point near <span class="math-container">$A_2$</span> such that <span class="math-container">$$A_3 - A_2 = A_2 - A_1.$$</span> It must lie somewhere on a topological circle of directions around <span class="math-container">$A_2$</span>, and one of them must have the same direction, too. By taking a short-distance limit, we may be able to reconstruct a whole circle inside the sphere on which all the differences have the same direction, and then we adapt the length to get back after <span class="math-container">$N$</span> steps, and then it reduces to my circle proof above.</p> <p>But there may be more elegant and much stronger proofs for all compact manifolds. There just are not any regions inside compact spaces where the coordinates could go to infinity, so it cannot be an affine space, I think.</p>
3,267,883
<p>I apologize in advance as my literacy in this subject is not too great and this question may either be trivial or impossible as of yet. </p> <p>I have seen many questions on Stack Exchange utilizing the Chinese Remainder Theorem to find solutions of <span class="math-container">$a^2\equiv 1\mod (p*q)$</span>, where <span class="math-container">$p$</span> and <span class="math-container">$q$</span> are distinct primes. </p> <p>My question is whether we can find nontrivial (besides <span class="math-container">$a\equiv \pm1\mod 2^k$</span>) solutions of <span class="math-container">$a^2\equiv 1\mod 2^k$</span> by any method other than brute force, or whether solutions exist at all. </p> <p>(Off the top of my head, <span class="math-container">$3^2\equiv 1\mod 2^2$</span>, and furthermore, <span class="math-container">$3^2=2^2+1$</span>, but I do not think that <span class="math-container">$a^2=2^k+1$</span> for any other <span class="math-container">$k$</span>.) </p>
hmakholm left over Monica
14,366
<p>Obviously <span class="math-container">$a$</span> needs to be odd, and by going to <span class="math-container">$-a$</span> if necessary we can assume <span class="math-container">$a\equiv 1\pmod 4$</span>.</p> <p>Therefore assume <span class="math-container">$a=2^nm+1$</span> with <span class="math-container">$n\ge 2$</span> and <span class="math-container">$m$</span> odd. We then have <span class="math-container">$$ a^2 = 2^{2n}m^2 + 2^{n+1}m + 1 $$</span> In binary, the rightmost set bits are <span class="math-container">$1$</span> itself and <span class="math-container">$2^{n+1}$</span>. The latter of these does not coincide with any bit of <span class="math-container">$2^{2n}m^2$</span> because <span class="math-container">$2n&gt;n+1$</span>. That bit vanishes modulo <span class="math-container">$2^k$</span> iff <span class="math-container">$n+1\ge k$</span>.</p> <p>So the only solutions are <span class="math-container">$\pm 1$</span> and <span class="math-container">$2^{k-1}\pm 1$</span>.</p>
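<p>A quick brute-force check confirms this characterization for small <span class="math-container">$k$</span>. The sketch below is illustrative (the function name is chosen here, not taken from the answer):</p>

```python
# Brute-force confirmation of the characterization: for k >= 3, the solutions
# of a^2 = 1 (mod 2^k) are exactly 1, -1, 2^(k-1) - 1 and 2^(k-1) + 1
# modulo 2^k.  (Function name chosen for this sketch.)

def square_roots_of_one(k):
    m = 2 ** k
    return sorted(a for a in range(m) if a * a % m == 1)

for k in range(3, 12):
    m = 2 ** k
    expected = sorted({1, m - 1, 2 ** (k - 1) - 1, 2 ** (k - 1) + 1})
    assert square_roots_of_one(k) == expected

# For k = 1 and k = 2 the four values collapse into fewer residues:
assert square_roots_of_one(1) == [1]
assert square_roots_of_one(2) == [1, 3]
```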
3,267,883
<p>I apologize in advance as my literacy in this subject is not too great and this question may either be trivial or impossible as of yet. </p> <p>I have seen many questions on Stack Exchange utilizing the Chinese Remainder Theorem to find solutions of <span class="math-container">$a^2\equiv 1\mod (p*q)$</span>, where <span class="math-container">$p$</span> and <span class="math-container">$q$</span> are distinct primes. </p> <p>My question is whether we can find nontrivial (besides <span class="math-container">$a\equiv \pm1\mod 2^k$</span>) solutions of <span class="math-container">$a^2\equiv 1\mod 2^k$</span> by any method other than brute force, or whether solutions exist at all. </p> <p>(Off the top of my head, <span class="math-container">$3^2\equiv 1\mod 2^2$</span>, and furthermore, <span class="math-container">$3^2=2^2+1$</span>, but I do not think that <span class="math-container">$a^2=2^k+1$</span> for any other <span class="math-container">$k$</span>.) </p>
lab bhattacharjee
33,337
<p>As <span class="math-container">$a$</span> is odd, <span class="math-container">$a\pm1$</span> are even</p> <p>If <span class="math-container">$2^k$</span> divides <span class="math-container">$(a+1)(a-1)$</span> for <span class="math-container">$k-2\ge1$</span></p> <p><span class="math-container">$\implies2^{k-2}$</span> divides <span class="math-container">$\dfrac{a+1}2\cdot\dfrac{a-1}2$</span></p> <p>Now <span class="math-container">$\dfrac{a+1}2-\dfrac{a-1}2=1$</span></p> <p>So, they are relatively prime, hence both cannot be even</p> <p>If <span class="math-container">$\dfrac{a+1}2$</span> is even, it must be divisible by <span class="math-container">$2^{k-2}$</span></p> <p>What if <span class="math-container">$\dfrac{a-1}2$</span> is even?</p>
1,061,311
<p>Suppose $\sum_{n=0}^\infty a_n$ and $\sum_{m=0}^\infty b_m$ converge absolutely. I have to show that $$\left(\sum_{n=0}^\infty a_n\right) \cdot \left(\sum_{m = 0}^\infty b_m\right) = \sum_{m, n}^\infty a_nb_m.$$ But I do not understand what the sum on the right-hand side means (i.e. what limit this represents). Could anyone help explain it?</p>
user149792
149,792
<p>The notation $$\sum_{m, n = 0}^\infty a_nb_m$$ should be interpreted as the sum $$\sum_{k=1}^\infty a_{n_k}b_{m_k}$$ where for each ordered pair $(m, n)$ of nonnegative integers, there exists exactly one $k$ such that $(m_k, n_k) = (m, n)$. The sum is <strong>not well-defined</strong> in general. Since we are not given a way to order the ordered pairs $(m_k, n_k)$, the sum can only be well defined if it does not depend on the order of summation. One is supposed to prove this by showing the sum converges absolutely. In general, <strong>if one sees a sum of terms indexed by an infinite set, one must interpret it this way unless one is given an order of summation</strong>.</p> <p>It is an instructive exercise to show one can rearrange the sum on the right-hand side of the original equation as follows: $$\sum_{\ell = 0}^\infty \left(\sum_{m + n = \ell} a_nb_m\right).$$</p>
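<p>To make the rearrangement concrete, here is a numerical sketch. The geometric series $a_n = (1/2)^n$ and $b_m = (1/3)^m$ are chosen purely for illustration; both converge absolutely, with sums $2$ and $3/2$, so the product of the sums is $3$.</p>

```python
# Illustration with absolutely convergent geometric series (chosen for this
# sketch): a_n = (1/2)^n sums to 2, b_m = (1/3)^m sums to 3/2, product = 3.
# Summing a_n * b_m over diagonals m + n = l (the rearrangement at the end
# of the answer) converges to the same value.

def diagonal_sum(num_diagonals):
    total = 0.0
    for l in range(num_diagonals):
        for n in range(l + 1):
            m = l - n
            total += (0.5 ** n) * (3.0 ** -m)
    return total

product_of_sums = 2.0 * 1.5
assert abs(diagonal_sum(80) - product_of_sums) < 1e-9

# Absolute convergence makes the order irrelevant: summing row by row
# (all n for each fixed m) gives the same limit.
row_sum = sum((3.0 ** -m) * sum(0.5 ** n for n in range(80)) for m in range(80))
assert abs(row_sum - product_of_sums) < 1e-9
```

<p>For conditionally convergent series such a rearrangement could change the value (Riemann's rearrangement theorem), which is exactly why absolute convergence is the hypothesis here.</p>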
2,213,807
<p>I was solving a problem to find $n$, and after transforming the problem I arrived at this equation:</p> <p>\begin{equation*} \left\lfloor{\frac{2}{3}\sqrt{10^{2n}-1}}\right\rfloor = \frac{2}{3}(10^{n}-1) \end{equation*}</p> <p>So I tried to simplify it by defining: \begin{equation*} k = 10^{n}-1 \end{equation*}</p> <p>and was left with: \begin{equation*} \left\lfloor{\frac{2}{3}\sqrt{k(k+2)}}\right\rfloor = \frac{2}{3}k \end{equation*}</p> <p>But I can't get past that. Can anyone help me?</p>
George Law
141,584
<p>Note that $$-8(10^n-1)\ \le\ 0\ &lt;\ 4\cdot10^n+5$$ is true for all $n\in\mathbb Z^+$. In other words $$-8\cdot10^n+4\ \le\ -4\ &lt;\ 4\cdot10^n+1$$ $$\iff\ 4(10^n-1)^2\ \le\ 4(10^{2n}-1)\ &lt;\ (2\cdot10^n+1)^2$$ $$\iff\ 2(10^n-1)\ \le\ 2\sqrt{10^{2n}-1}\ &lt;\ 2\cdot10^n+1$$ $$\iff\ \frac23(10^n-1)\ \le\ \frac23\sqrt{10^{2n}-1}\ &lt;\ \frac{2\cdot10^n+1}3=\frac23(10^n-1)+1$$ $$\iff\ \left\lfloor\frac23\sqrt{10^{2n}-1}\right\rfloor\ =\ \frac23(10^n-1)$$ since $10^n-1$ is a multiple of 9 and so is divisible by 3. Hence your equation is true for all non-negative integers $n$.</p>
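<p>The identity can also be verified directly for many $n$ with exact integer arithmetic; the sketch below relies on the facts that $3 \mid 10^n - 1$ and $9 \mid 10^{2n} - 1$, so both sides can be computed without floating point:</p>

```python
# Exact integer verification of the identity for many n.  math.isqrt gives
# floor(sqrt(.)) exactly, and (2/3)*sqrt(X) = sqrt(4X/9); since 9 divides
# 10^(2n) - 1, the quantity 4*(10^(2n) - 1)/9 is an integer.
from math import isqrt

def lhs(n):
    # floor((2/3) * sqrt(10^(2n) - 1))
    return isqrt(4 * (10 ** (2 * n) - 1) // 9)

def rhs(n):
    # (2/3) * (10^n - 1), exact because 3 divides 10^n - 1
    return 2 * (10 ** n - 1) // 3

for n in range(50):
    assert lhs(n) == rhs(n)
```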
1,987,230
<p>On Socratica, I saw a video demonstrating how to construct groups by writing a Cayley table of the desired order satisfying three conditions. (1) The neutral element's row and column are copies of the row and column headers. (2) Every row and column contains the neutral element exactly once. (3) All the elements of the set are present in each row and column. </p> <p>My question is: does this always lead to a group? I think the conditions are necessary but insufficient, because they do not test associativity. </p>
dxiv
291,201
<p>Any integer $a \equiv 0,\pm1,\pm2 \pmod 5$. Since $2^2=4 \equiv -1 \pmod 5$ it follows that $a^2 \equiv 0, \pm 1 \pmod 5$. This proves both that $a^2 \not \equiv 3 \pmod 5$ which was the original question, and also that $a^2 \not \equiv 2 \pmod 5$ for a bonus conclusion.</p>
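<p>The residue computation is easy to confirm exhaustively; the short check below is illustrative only:</p>

```python
# Exhaustive check of the residue argument: squares modulo 5 land only in
# {0, 1, 4}, i.e. 0 or +-1 (mod 5), so no square is congruent to 2 or 3 mod 5.

squares_mod_5 = {a * a % 5 for a in range(5)}
assert squares_mod_5 == {0, 1, 4}
assert 2 not in squares_mod_5 and 3 not in squares_mod_5

# The same holds for any integer, since every a is 0, +-1 or +-2 (mod 5)
# (Python's % returns a nonnegative residue even for negative a):
for a in range(-100, 101):
    assert a * a % 5 in {0, 1, 4}
```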
98,088
<p>I can't understand a sentence in a textbook: if $x$ is a transitive set, then $\bigcup x^+=x$? Could someone help me to understand?</p> <p><strong>added:</strong> $x^+=x\cup\{x\}$</p>
Asaf Karagila
622
<p>Let us deconstruct this:</p> <ol> <li>$\bigcup A = \{z\mid\exists y\in A: z\in y\}$.</li> <li>$x^+=x\cup\{x\}$</li> </ol> <p>From this we have: $\bigcup x^+ = \{z\mid\exists y\in x\cup\{x\}: z\in y\}$. We then follow the definition of a transitive set:</p> <blockquote> <p>The set $x$ is transitive if for every $y\in x$, $y\subseteq x$</p> </blockquote> <p>Now we want to show that $x\subseteq\bigcup x^+$ and that $\bigcup x^+\subseteq x$; the axiom of extensionality will then ensure equality.</p> <p>For the first one, it is nearly trivial. Since $x\in x^+$, if $y\in x$ then $y\in\bigcup x^+$ immediately (recall the definition of this union).</p> <p>For the second one, we recall that if $y\in x^+$ then $y\in x$ or $y=x$. Therefore $y\subseteq x$ in either case (by transitivity if $y\in x$, trivially if $y=x$). So for every $z\in\bigcup x^+$ we have some $y\subseteq x$ such that $z\in y$, therefore $z\in x$ as wanted.</p>
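<p>The argument can be sanity-checked on hereditarily finite sets. The following Python sketch (frozensets standing in for sets; the helper names are chosen here) verifies $\bigcup x^+ = x$ for the transitive von Neumann naturals and exhibits a non-transitive counterexample:</p>

```python
# Finite sanity check of the statement, modeling hereditarily finite sets
# as frozensets: for a transitive set x, union(x^+) == x.

def big_union(a):
    # U a = {z : there is y in a with z in y}
    return frozenset(z for y in a for z in y)

def successor(x):
    return x | frozenset({x})          # x^+ = x union {x}

def is_transitive(x):
    return all(y <= x for y in x)      # every element is a subset

# The von Neumann naturals 0 = {}, n+1 = n union {n} are all transitive.
n = frozenset()
for _ in range(6):
    assert is_transitive(n)
    assert big_union(successor(n)) == n
    n = successor(n)

# Transitivity is needed: x = {{0}} contains {0} but not 0, and the
# conclusion fails for it.
zero = frozenset()
x = frozenset({frozenset({zero})})
assert not is_transitive(x)
assert big_union(successor(x)) != x
```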
3,584,113
<p>When we prove things like continuity in real analysis, why do we always aim for the result <span class="math-container">$&lt;\epsilon$</span> when any positive multiple of <span class="math-container">$\epsilon$</span> proves the same result?</p>
Ben W
227,789
<p>Well, first of all, that's not always what we do, because as you say, <span class="math-container">$&lt;c\epsilon$</span> is enough, where <span class="math-container">$c\in(0,\infty)$</span>.</p> <p>However when you're taking a real analysis class (especially for the first time!) we want to make sure that what's obvious to the professor is also obvious (or at least understandable) to the student. So we sometimes insist on <span class="math-container">$&lt;\epsilon$</span> for that reason.</p> <p>P.S. Some people do it for what they believe is simplicity or ease of reading.</p>