778,605
<blockquote> <p>Let $f,g,h : D \to \mathbb{R}$ be functions, $D \subset \mathbb{R}$. Let $c$ be an accumulation point of $D$. Suppose that $$f(x) \le g(x) \le h(x)$$ for all $x \in D$ with $x \neq c$ and suppose $$\lim_{x \to c} f(x) = \lim_{x \to c} h(x) = L \in \mathbb{R}$$ Prove that $\lim_{x \to c}g(x) = L$.</p> </blockquote> <p>I'm not really sure how to start this exercise. If someone could help me start this or give ideas, that would help a lot.</p>
André Nicolas
6,312
<p><strong>Outline:</strong> This can be viewed as a geometry problem. Draw the $2\times 2$ square with corners $(0,0)$, $(2,0)$, $(2,2)$, and $(0,2)$. Draw the lines $x-y=\frac{1}{4}$ and $x-y=-\frac{1}{4}$. Draw the line $x=\frac{1}{4}$.</p> <p>We want the probability that the pair $(X,Y)$ lands in the part of the square which is <strong>not</strong> between the two lines, and is to the right of the line $x=\frac{1}{4}$.</p> <p>To find that probability, (i) find the area $k$ of the part of the square which is not between the two lines and is to the right of $x=\frac{1}{4}$ and (ii) divide $k$ by $4$.</p> <p><strong>Remark:</strong> You can calculate $\Pr(A\cap B)$ by a strategy of the kind you were attempting. The calculation of $\Pr(A)$ in the post is not right. The combined area of the two triangles that represent the event $A$ is $\left(2-\frac{1}{4}\right)^2$. Divide by $4$. We get $\Pr(A)=\frac{49}{64}$.</p>
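The outline above is easy to check numerically. Below is a minimal Monte Carlo sketch; it assumes, as the outline and remark suggest, that $X$ and $Y$ are independent and uniform on $[0,2]$, that $A$ is the event $|X-Y|\geq\frac14$ (the pair lies outside the strip between the two lines) and $B$ is the event $X\geq\frac14$ (these event names and the function name are our own, inferred from the post):

```python
import random

def estimate(n=200_000, seed=1):
    """Monte Carlo estimates of Pr(A) and Pr(A and B) for X, Y ~ Uniform[0, 2]."""
    rng = random.Random(seed)
    hits_a = hits_ab = 0
    for _ in range(n):
        x, y = rng.uniform(0, 2), rng.uniform(0, 2)
        a = abs(x - y) >= 0.25   # outside the strip between x - y = 1/4 and x - y = -1/4
        b = x >= 0.25            # to the right of the line x = 1/4
        hits_a += a
        hits_ab += a and b
    return hits_a / n, hits_ab / n

p_a, p_ab = estimate()
print(p_a, p_ab)
```

Under these assumptions the estimate of $\Pr(A)$ lands near $49/64 \approx 0.766$, matching the remark, and carrying out the area computation of the outline gives $k = \frac{49}{32}+\frac{9}{8} = \frac{85}{32}$, i.e. $\Pr(A\cap B) = \frac{85}{128} \approx 0.664$.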
55,404
<p>I have been searching for a version of the isoperimetric inequality which is something like:</p> <p>$P(\Omega) - 2\sqrt{\pi} Vol(\Omega)^{1/2} \geq \pi (r_{out}^2 - r_{in}^2)$ where $r_{out}$ and $r_{in}$ are the inner and outer radius of a given set. There are of course details which I am missing such as what kind of sets this applies to (clearly connected and possibly simply connected). I was hoping somebody may recognize this inequality and be able to direct me to a source for it along with a proof.</p> <p><strong>Update:</strong> I'm curious if anyone can direct me to a some papers which relate the isoperimetric deficit to Hausdorff distance. Such as: $P(\Omega)^2 - 4\pi Vol(\Omega) \geq C d_H(\Omega,B)^2$ whre $B$ is a sphere in $\mathbb{R}^2$ which may be the inner or outer circle.</p> <p><strong>Update April 12:</strong> I would like to know if the first Bonnesen inequality written below is strictly stronger than the one in higher dimensions? In particular, if one considers the Fraenkel assymetry $\alpha(\Omega) = \min_B |\Omega \Delta B|$ where $|B|=|\Omega|$, does it hold on a bounded domain that</p> <p>$ r_{out}^2 - r_{in}^2 \leq C \alpha(\Omega)$,</p> <p>for some constant $C&gt;0$? This seems like it should be true but I can't seem to find a concise proof of it.</p>
Gil Kalai
1,532
<p>A very good source of Bonnesen type inequalities is the paper by Robert Osserman entitled <a href="http://www.jstor.org/stable/2320297" rel="nofollow">Bonnesen style isoperimetric inequalities</a>, American Math Monthly 86 (1979) 1-29. <a href="http://mathdl.maa.org/images/upload_library/22/Ford/RobertOsserman.pdf" rel="nofollow">Here is another link</a> for the same paper through <a href="http://mathdl.maa.org/mathDL/?pa=content&amp;sa=viewDocument&amp;nodeId=2956&amp;pf=1" rel="nofollow">this page</a>. Osserman's 1978 Bulletin AMS <a href="http://www.ams.org/journals/bull/1978-84-06/S0002-9904-1978-14553-4/S0002-9904-1978-14553-4.pdf" rel="nofollow">paper on the isoperimetric inequality</a> is also a good related source. </p>
3,176,629
<p>In the evening, pizza was ordered: nine people sat around a round table, and 50 slices of pizza were served to them. Prove that there were two people sitting next to each other who together ate at least 12 pizza slices.</p> <p>I used the pigeonhole principle: 50/9 ≈ 5.6, so rounding up gives 6.</p> <p>Therefore, at least one person ate at least 6 slices of pizza. </p> <p>I just don't know how to prove that two people ate at least 12 slices..</p> <p>Help would be greatly appreciated!</p>
MarianD
393,259
<p>Let <span class="math-container">$K_1, K_2, \dots, K_9$</span> be the numbers of pizza slices eaten by the individual persons. Then</p> <p><span class="math-container">$$K_1+ K_2+ K_3+K_4+K_5+K_6+K_7+K_8+ K_9 = 50$$</span></p> <p>There are <span class="math-container">$9$</span> pairs of people sitting next to each other (neighbors). Together they ate <span class="math-container">$P_1, P_2, \dots, P_9$</span> slices, where</p> <p><span class="math-container">$$P_1 = K_1 + K_2\\ P_2 = K_2 + K_3\\ \vdots\\ P_9 = K_9 + K_1\\ $$</span></p> <p><a href="https://i.stack.imgur.com/dWaqu.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dWaqu.jpg" alt="enter image description here"></a></p> <p>Now</p> <p><span class="math-container">\begin{array}{lllllllllll} &amp;P_1 + &amp;P_2 + &amp;P_3 + &amp;P_4 + &amp;P_5 + &amp;P_6 + &amp;P_7+ &amp;P_8 + &amp;P_9 \\[1em] = &amp;K_1 + &amp;K_2 \\ + &amp;&amp;K_2 + &amp;K_3 \\ + &amp;&amp;&amp;K_3 + &amp;K_4 \\ + &amp;&amp;&amp;&amp;K_4 + &amp;K_5 \\ + &amp;&amp;&amp;&amp;&amp;K_5 + &amp;K_6 \\ + &amp;&amp;&amp;&amp;&amp;&amp;K_6 + &amp;K_7 \\ + &amp;&amp;&amp;&amp;&amp;&amp;&amp;K_7 + &amp;K_8 \\ + &amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;K_8 + &amp;K_9 \\ + &amp;K_1 + &amp;&amp;&amp;&amp;&amp;&amp;&amp;&amp;K_9 \\ \hline = &amp;2K_1 + &amp;2K_2 + &amp;2K_3 + &amp;2K_4 + &amp;2K_5 + &amp;2K_6 + &amp;2K_7 + &amp;2K_8 + &amp;2K_9 \\[1ex] = 2(&amp;K_1 + &amp;K_2 + &amp;K_3 + &amp;K_4 + &amp;K_5 + &amp;K_6 + &amp;K_7 + &amp;K_8 + &amp;K_9 )\\ = 2\cdot 50 = 100, \end{array}</span></p> <p>i. e.</p> <p><span class="math-container">$$P_1+ P_2+ P_3+P_4+P_5+P_6+P_7+P_8+ P_9 = 100$$</span></p> <p>This means that <span class="math-container">$100$</span> slices are divided among the <span class="math-container">$9$</span> pairs, and as <span class="math-container">$100/9 &gt; 11,\ $</span>at least one pair must have eaten at least 12 slices.</p>
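The pairing argument above can also be sanity-checked by brute force. A small sketch (the helper functions are ours, written for illustration), assuming each person eats a nonnegative integer number of slices:

```python
import random

def max_neighbor_sum(slices):
    """Largest total eaten by two people sitting next to each other (circular table)."""
    n = len(slices)
    return max(slices[i] + slices[(i + 1) % n] for i in range(n))

def random_split(rng, total=50, people=9):
    """A random split of `total` slices among `people` eaters (nonnegative integers)."""
    cuts = sorted(rng.randint(0, total) for _ in range(people - 1))
    return [b - a for a, b in zip([0] + cuts, cuts + [total])]

rng = random.Random(0)
for _ in range(10_000):
    k = random_split(rng)
    assert sum(k) == 50
    # the nine neighbor sums add up to 2 * 50 = 100, so the largest is >= ceil(100/9) = 12
    assert max_neighbor_sum(k) >= 12
print("every random seating had an adjacent pair with at least 12 slices")
```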
1,754,931
<p>If a sequence has a pattern where +2 is the pattern at the start, but 1 is added each time, like the sequence below, is there a formula to find the 125th number in this sequence? It would also need to work with patterns similar to this. For example if the pattern started as +4, and 5 was added each time.</p> <blockquote> <p>2, 4, 7, 11, 16, 22 ...</p> </blockquote>
Ross Millikan
1,827
<p>When you have a polynomial expressed as a sequence, you can find it by taking differences $$ \begin {array} {l l l l l l} 2&amp;4&amp;7&amp;11&amp;16&amp;22\\2&amp;3&amp;4&amp;5&amp;6\\1&amp;1&amp;1&amp;1 \end {array}$$ where the first line is your sequence and subsequent entries are the difference between the one up and to the right and the one above. The fact that the second differences are constant says you have a second degree polynomial. To get the leading term, you take the constant in the line of second differences and divide it by $2!$, so your polynomial starts with $\frac 12n^2$. You can subtract that off and get a linear polynomial, whose first difference row will be constant. Another approach is (once you have concluded that it is quadratic) to write $A(n)=an^2+bn+c$, take the first three terms as data, and solve the equations for $a,b,c$. You can combine these by noting that $a=\frac 12$ and proceeding.</p>
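The difference-table method above mechanizes nicely. A short sketch (the function name is ours): build the difference table, then recover the $n$-th term from the leading entry $D_k$ of each row via Newton's forward-difference formula $a_n=\sum_k D_k\binom{n-1}{k}$:

```python
from math import comb

def extrapolate(seq, n):
    """n-th term (1-indexed) of a polynomial sequence, from its difference table."""
    rows = [list(seq)]
    while rows[-1] and any(rows[-1]):           # stop once a difference row is all zeros
        prev = rows[-1]
        rows.append([b - a for a, b in zip(prev, prev[1:])])
    # Newton's forward differences: a_n = sum_k D_k * C(n-1, k),
    # where D_k is the leading entry of the k-th difference row.
    return sum(row[0] * comb(n - 1, k) for k, row in enumerate(rows) if row)

print(extrapolate([2, 4, 7, 11, 16, 22], 125))   # -> 7876
print(extrapolate([2, 6, 15, 29, 48], 6))        # the "+4, then add 5" pattern -> 72
```

For the sequence in the question this gives $a_n = 2 + 2(n-1) + \binom{n-1}{2} = \frac{n(n+1)}{2}+1$, so the 125th term is 7876.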
1,197,654
<p>Prove that if $G$ is an $r$-regular, $(r-2)$-edge-connected graph $(r&gt;3)$ of even order containing at most $r-1$ distinct edge cuts of cardinality $r-2$ then $G$ has a $1$-factor</p> <blockquote> <blockquote> <p><strong>Tutte's theorem</strong>: A non trivial graph $G$ has a $1$-factor if and only if $k_0 (G-S) \leq |S|$ for every proper subset $S$ of $V(G)$</p> </blockquote> </blockquote> <p>I'm not sure if I understand this correctly, but here is what I thought. I want to use the Tutte's theorem to prove this (maybe there is a better way) </p> <p>Let $G_i$ and $H_i$ be the odd and even components of $G$ respectively and $S$ be a subset of $V(G)$ then </p> <p>$\sum |V(G_i)| + \sum |V(H_i)| +|S|=n=2k$</p> <p>$G$ is $(r-2)$-edge-connected, so can't I conclude that $|S|=r-2$ or $|S|=r-1$? I'm not sure I understand this phrase "containing at most $r-1$ distinct edge cuts of cardinality $r-2$"</p> <p>I want to prove that $G$ has a $1$-factor so I can't say that in every odd component, there must be one vertex that is adjacent to a vertex in $S$, can I?</p> <p>I know that, if a graph has a $1$-factor, then it has a perfect matching. Is it safe to say if a graph has a perfect matching then it has a $1$-factor?</p>
Tyler Seacrest
172,823
<p>Here are some hints and answers for you: first off, <em>perfect matching</em> and <em>$1$-factor</em> are synonyms, and can be used interchangeably.</p> <p>I don't think we can conclude $|S| = r-2$ or $r-1$, unless I'm missing some suppositions in your thinking. $S$ can be any size.</p> <p>But you're right, I think Tutte's theorem is the key to proving this. Tutte's theorem can be tricky to think about -- I'd recommend working through some specific examples. For example:<img src="https://i.stack.imgur.com/wtfcZ.png" alt="Tutte Example"></p> <p>This graph does not have a perfect matching, because $|S| = 3$ is smaller than the number of odd components of $G - S$, which is $5$. However, this fails to be a counterexample since there are so many edge cuts of size $2$ (in particular, there are $5$ distinct edge cuts of size two, the edges between $S$ and each of the odd components). </p>
1,197,654
<p>Prove that if $G$ is an $r$-regular, $(r-2)$-edge-connected graph $(r&gt;3)$ of even order containing at most $r-1$ distinct edge cuts of cardinality $r-2$ then $G$ has a $1$-factor</p> <blockquote> <blockquote> <p><strong>Tutte's theorem</strong>: A non trivial graph $G$ has a $1$-factor if and only if $k_0 (G-S) \leq |S|$ for every proper subset $S$ of $V(G)$</p> </blockquote> </blockquote> <p>I'm not sure if I understand this correctly, but here is what I thought. I want to use the Tutte's theorem to prove this (maybe there is a better way) </p> <p>Let $G_i$ and $H_i$ be the odd and even components of $G$ respectively and $S$ be a subset of $V(G)$ then </p> <p>$\sum |V(G_i)| + \sum |V(H_i)| +|S|=n=2k$</p> <p>$G$ is $(r-2)$-edge-connected, so can't I conclude that $|S|=r-2$ or $|S|=r-1$? I'm not sure I understand this phrase "containing at most $r-1$ distinct edge cuts of cardinality $r-2$"</p> <p>I want to prove that $G$ has a $1$-factor so I can't say that in every odd component, there must be one vertex that is adjacent to a vertex in $S$, can I?</p> <p>I know that, if a graph has a $1$-factor, then it has a perfect matching. Is it safe to say if a graph has a perfect matching then it has a $1$-factor?</p>
Tyler Seacrest
172,823
<p>Assume for sake of contradiction that $G$ does not have a perfect matching. By Tutte's theorem, there exists a set $S$ such that $G - S$ has more odd components than there are vertices in $S$. Let $A_1, \ldots, A_t$ be the odd components of $G - S$. Notice that $t + |S|$ is even, since $G$ has an even number of vertices, and therefore $|S| &lt; t$ implies $|S| + 2 \leq t$. </p> <p>Notice that it is impossible that there are exactly $r-1$ edges between $S$ and one of the $A_i$. This is because the degree sum of the subgraph induced by $A_i$ would then be $r|A_i| - (r - 1)$ which is an odd number, and by the famous handshake lemma we can't have an odd degree sum. </p> <p>The number of edges between $S$ and $A_i$ is at least $r-2$, since $G$ is $r-2$ edge connected. At most $r-1$ of the $A_i$ have $r-2$ edges to $S$. As discussed in the previous paragraph, the other $(t - (r-1))$ components must have at least $r$ edges to $S$.</p> <p>So how many edges are there leaving $S$? Well on the one hand this is at most $r |S| \leq r(t - 2) = rt - 2r$, since the vertices each have degree $r$. On the other hand, this is at least (counting by how many edges are between $S$ and each $A_i$) the following: $(r-2)(r-1) + r(t - (r-1)) = rt- 2r + 2$. Thus we have a contradiction which proves the result.</p>
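The two counts in the final paragraph can be checked mechanically. This small sketch (the ranges for $r$ and $t$ are picked just for illustration) verifies the algebra and the strict gap between the bounds:

```python
# upper: each of the at most t-2 vertices of S has degree r, so at most r(t-2) edges leave S.
# lower: r-1 odd components send r-2 edges each, the remaining t-(r-1) send at least r each.
for r in range(4, 40):                     # the statement assumes r > 3
    for t in range(r + 1, r + 40):         # enough odd components for the argument
        upper = r * (t - 2)
        lower = (r - 2) * (r - 1) + r * (t - (r - 1))
        assert lower == r * t - 2 * r + 2  # the simplification used in the answer
        assert lower > upper               # the contradiction: lower bound beats upper bound
print("contradiction confirmed for all sampled r and t")
```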
20,567
<p>Excuse me if the language is a bit off; I'm not a native English speaker. I've been studying (self-study) computers and programming for a little over two years, but do not have much formal education. I've been taking a math class for the last 11 weeks and start a new one next week. I've been using Unix/Linux and Stack Overflow on this network during this time and am pretty terrified at the level of the math here. I've been browsing questions and can't find any at my level. I'm pretty serious about learning math and have books on Basic Maths, Algebra I, Algebra II, Linear Algebra, Pre-Calculus and Calculus, and plan on getting more.</p> <p>I have sometimes gotten stuck in my learning, and Unix/Linux has helped me a lot with the computer side. </p> <p>Can I ask questions here about things where I get stuck or don't know how to move past a point in my math, at my level? I haven't found anything which says otherwise, and the help also says all levels.</p> <p>The topics of the course I just finished were: Addition/Subtraction, Multiplication, Fractions, Algebra, Equations, Powers and Square roots. The course I start next week is the next step, so I have pretty much just learned the basics and I'm starting my math career.</p>
Lord_Farin
43,351
<p>Taking a look at your StackOverflow questions, I predict your questions here will be well-received.</p> <p>Generally, any question that expresses an earnest desire to learn and is reasonably scoped can expect a positive response.</p> <p>For some further site-specific pointers, please see <a href="https://math.meta.stackexchange.com/q/9959/43351">this thread</a>.</p> <p>Basically, if you clearly state what topic you're currently studying, what your own attempts/ideas are, and what you hope to learn from the thread, your question is bound to be warmly welcomed :).</p>
3,573,334
<blockquote> <p>Given positives <span class="math-container">$a, b, c$</span> such that <span class="math-container">$a + b + c = 3$</span>, prove that <span class="math-container">$$\frac{1}{c^2 + 4a^2 + b^2} + \frac{1}{a^2 + 4b^2 + c^2} + \frac{1}{b^2 + 4c^2 + a^2} \le \frac{1}{2}$$</span></p> </blockquote> <p>We have that <span class="math-container">$$a^2 + 4b^2 + c^2 = a^2 + (a + b + c + 1)b^2 + c^2 = (b^2 + a)a + (b + 1)b^2 + (b^2 + c)c$$</span></p> <p><span class="math-container">$$\implies \sum_{cyc}\frac{1}{a^2 + 4b^2 + c^2} = \frac{1}{(a + b + c)^2} \cdot \sum_{cyc}\frac{(a + b + c)^2}{(b^2 + a)a + (b + 1)b^2 + (b^2 + c)c}$$</span></p> <p><span class="math-container">$$ \le \frac{1}{(a + b + c)^2} \cdot \sum_{cyc}\left(\frac{a}{b^2 + a} + \frac{1}{b + 1} + \frac{c}{b^2 + c}\right)$$</span></p> <p>Furthermore, <span class="math-container">$\dfrac{a}{c^2 + a} + \dfrac{c}{a^2 + c} = 1 - \dfrac{(c + a - 2)ca}{(c^2 + a)(a^2 + c)}$</span>, <span class="math-container">$$\sum_{cyc}\frac{1}{a^2 + 4b^2 + c^2} \le \frac{1}{(a + b + c)^2} \cdot \sum_{cyc}\left[\frac{1}{b + 1} - \frac{(c + a - 2)ca}{(c^2 + a)(a^2 + c)}\right] + \frac{1}{3}$$</span></p> <p>Then I don't know what to do next.</p>
Michael Rozenberg
190,319
<p>Another way.</p> <p>We'll prove that our inequality is true for any real <span class="math-container">$a$</span>, <span class="math-container">$b$</span> and <span class="math-container">$c$</span> such that <span class="math-container">$a+b+c=3.$</span></p> <p>Indeed, let <span class="math-container">$a+b+c=3u$</span>, <span class="math-container">$ab+ac+bc=3v^2,$</span> where <span class="math-container">$v^2$</span> can be negative, and <span class="math-container">$abc=w^3$</span>.</p> <p>Thus, by my first proof we need to prove that: <span class="math-container">$$\sum_{sym}(a^6-4a^5b+13a^4b^2-2a^4bc-6a^3b^3-12a^3b^2c+10a^2b^2c^2)\geq0$$</span> or <span class="math-container">$$27w^6+A(u,v^2)w^3+B(u,v^2)\geq0,$$</span> where <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are polynomials of <span class="math-container">$u$</span> and <span class="math-container">$v^2$</span>.</p> <p>We'll prove that even the following inequality is true. <span class="math-container">$$\sum_{sym}(a^6-4a^5b+13a^4b^2-2a^4bc-6a^3b^3-12a^3b^2c+10a^2b^2c^2)\geq$$</span> <span class="math-container">$$\geq\frac{1}{3}\left(\sum_{cyc}(a^3-a^2b-a^2c+abc)\right)^2.$$</span> Since <span class="math-container">$$\sum_{cyc}(a^3-a^2b-a^2c+abc)=27u^3-27uv^2+3w^3-9uv^2+3w^3+3w^3$$</span> and <span class="math-container">$$27-\frac{1}{3}\cdot9^2=0,$$</span> we see that the last inequality is a linear inequality of <span class="math-container">$w^3$</span>.</p> <p>Thus, it's enough to prove the last inequality for extreme value of <span class="math-container">$w^3$</span>, </p> <p>which happens for equality case of two variables.</p> <p>Since the last inequality is homogeneous, it's enough to assume <span class="math-container">$b=c=1$</span>, which gives <span class="math-container">$$2(a-1)^4(a^2+5)\geq\frac{1}{3}(a-1)^4(a+2)^2$$</span> or <span class="math-container">$$(a-1)^4(5a^2-4a+26)\geq0,$$</span> which is obvious.</p>
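Independently of the $uvw$ machinery, the original inequality can be sanity-checked numerically on random positive triples with $a+b+c=3$ (a spot check under that constraint, not a proof):

```python
import random

def lhs(a, b, c):
    """Left-hand side of the cyclic inequality for a + b + c = 3."""
    return (1 / (c*c + 4*a*a + b*b)
            + 1 / (a*a + 4*b*b + c*c)
            + 1 / (b*b + 4*c*c + a*a))

rng = random.Random(42)
worst = 0.0
for _ in range(100_000):
    x, y = sorted(rng.random() for _ in range(2))
    a, b, c = 3 * x, 3 * (y - x), 3 * (1 - y)   # a random point of the simplex a+b+c=3
    if min(a, b, c) > 0:
        worst = max(worst, lhs(a, b, c))
print(worst)   # never exceeds 1/2; the value 1/2 is attained at a = b = c = 1
```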
4,008,152
<p>Question itself: Throw a coin one million times. What is the expected number of sequences of six tails, if we <strong>do not allow overlap</strong>?</p> <p>I know when overlap is allowed, the answer is (1,000,000-5)/(2^6). Not sure if we can just do (1,000,000-5)/(2^6) divided by 6 if overlap is not allowed?</p> <p>Some clarifications:</p> <p>For example, if part of the sequence is &quot;one H, nine T, then one H&quot;, we would count 1 sequence of six tails. (When overlap is allowed, we can count three times because each of the first 3 T can start a sequence of six tails; However, this question does not allow overlap, so 9T can only be counted as containing <strong>one</strong> sequence of six tails)</p> <p>If part of the sequence is &quot;one H, thirteen T, then one H&quot;, we would count 2 sequences of six tails.</p>
bof
111,012
<p>The expected number <span class="math-container">$a_n$</span> of nonoverlapping runs of <span class="math-container">$6$</span> consecutive tails in a sequence of <span class="math-container">$n$</span> independent fair coin tosses satisfies the nonhomogeneous linear recurrence <span class="math-container">$$a_n=\frac12a_{n-1}+\frac14a_{n-2}+\frac18a_{n-3}+\frac1{16}a_{n-4}+\frac1{32}a_{n-5}+\frac1{64}a_{n-6}+\frac1{64}(1+a_{n-6}),$$</span> that is, <span class="math-container">$$a_n-\frac12a_{n-1}-\frac14a_{n-2}-\frac18a_{n-3}-\frac1{16}a_{n-4}-\frac1{32}a_{n-5}-\frac1{32}a_{n-6}=\frac1{64}$$</span> with initial values <span class="math-container">$a_0=a_1=a_2=a_3=a_4=a_5=0$</span>.</p> <p><strong>Alternatively,</strong></p> <p><span class="math-container">$$a_n=\frac{n+2}2(x+x^2+x^3+\cdots+x^{\lfloor n/6\rfloor})-3(x+2x^2+3x^3+\cdots+\lfloor n/6\rfloor x^{\lfloor n/6\rfloor})$$</span> where <span class="math-container">$x=(1/2)^6=1/64$</span>. In closed form that's <span class="math-container">$$a_n=\frac{(n+2)[1-(\frac1{64})^{\lfloor n/6\rfloor}]}{126}-\frac{64-(63\lfloor n/6\rfloor+64)(\frac1{64})^{\lfloor n/6\rfloor}}{1323}.$$</span></p>
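The recurrence can be verified in exact arithmetic against brute-force enumeration of all coin sequences for small $n$; a sketch (the function names are ours, tails encoded as 1):

```python
from fractions import Fraction
from itertools import product

def count_runs(seq):
    """Greedy count of nonoverlapping runs of 6 consecutive tails (tail = 1)."""
    count = run = 0
    for toss in seq:
        run = run + 1 if toss else 0
        if run == 6:          # a full block of six tails: count it, start over
            count += 1
            run = 0
    return count

def a(n):
    """Expected number of nonoverlapping runs, via the recurrence in the answer."""
    vals = [Fraction(0)] * 6                       # a_0, ..., a_5 are all 0
    for _ in range(6, n + 1):
        new = sum(Fraction(1, 2 ** k) * vals[6 - k] for k in range(1, 6))
        new += Fraction(1, 32) * vals[0] + Fraction(1, 64)
        vals = vals[1:] + [new]
    return vals[-1]

def brute(n):
    """Exact expectation by enumerating all 2^n coin sequences."""
    return Fraction(sum(count_runs(s) for s in product((0, 1), repeat=n)), 2 ** n)

assert all(a(n) == brute(n) for n in range(13))
print(a(6), a(12))
```

For $n = 10^6$ the $(\frac1{64})^{\lfloor n/6\rfloor}$ terms in the closed form are negligible, so $a_n \approx \frac{n+2}{126}-\frac{64}{1323} \approx 7936.48$.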
18
<p>Some teachers make memorizing formulas, definitions and other things obligatory, and forbid "aids" in any form during tests and exams. Others allow writing down more complicated expressions, sometimes anything on paper (books, tables, solutions to previously solved problems), and in yet another setting students are expected to take questions home, study the problems in any way they want and then submit solutions a few days later.</p> <p>Naturally, the memory-oriented problem sets are relatively easier (modulo the time limit), and encourage less understanding and more proficiency (in the sense that the student has to be efficient in their approach). As mathematics is in large part thinking, I think that it is beneficial to students to let them focus on problem solving rather than recalling and calculating (i.e. designing a solution rather than modifying a known one). There is a huge difference in time-constrained environments (e.g. medical teams, lawyers during trials, etc.), where the cost of "external knowledge" is much higher and good memory is essential. However, math is, in general (things like high-frequency trading are only a small part of math-related professions), slow.</p> <p>On the other hand, memory-oriented teaching is far from being a relic of the past. Why is this so? As this is a broad topic, I will make it more specific:</p> <p><strong>What are the advantages of memory-oriented teaching?</strong></p> <p><strong>What are the disadvantages of allowing aids during tests/exams?</strong></p>
paul garrett
63
<p>It is easy to explain the most immediate disadvantage of allowing "aids" during exams: many students misjudge the situation, thinking that having books and/or papers means they can study less. In particular, they often misjudge information access time.</p> <p>But many students benefit from some form or degree of open-access exams, because they can relax a little about memorization and data.</p> <p>And, indeed, in real life, although there are obvious benefits to having things in one's head, it's not <em>possible</em> to have <em>everything</em> there, and it's simply not the way things work. So it is a bit exaggerated to conduct exams in such artificial environments.</p> <p>Similarly, although allowing "aids" reduces stress (and misleads some students), the necessarily-repetitive aspects of memory-oriented teaching/learning have their own benefits, that seem not achieved by less "primordial" teaching/learning. The point is not only memorization per se, but the repetitive aspect, which is often misguidedly downplayed in contexts where it is pretended that we all have nearly-perfect memories.</p> <p>In brief: "drill" is pretty inhuman(e), or can be, but has benefits not achievable otherwise. Simultaneously, having books and papers available during exams is much more like real life, and changes the emphasis of the exam from memory to function (ideally), but will mislead a substantial number of students who've grown up trying to "game" the system.</p> <p>But perhaps the real point is the exam-design itself, not the circumstances.</p> <p>My own choice has been to always do open-book-open-notes exams, but/and designed to strongly favor people who have the information in their heads. This design criterion can be taxing to fulfill if one allows essentially unlimited internet access, for example, and the course is a cliched one like Calculus I. 
One device which I have found useful, both because it promotes useful skills and because it complicates "gaming the system" is the requirement of coherent writing, full sentences, good grammar. Not just skeletal formulaic junk splashed on a page... working toward a formula to be circled at the end. <em>Explanations</em> are not (yet?) so easily found by Googling. Of course I tell people in advance that they will need to be able to do this, and the homework would give feedback.</p>
18
<p>Some teachers make memorizing formulas, definitions and other things obligatory, and forbid "aids" in any form during tests and exams. Others allow writing down more complicated expressions, sometimes anything on paper (books, tables, solutions to previously solved problems), and in yet another setting students are expected to take questions home, study the problems in any way they want and then submit solutions a few days later.</p> <p>Naturally, the memory-oriented problem sets are relatively easier (modulo the time limit), and encourage less understanding and more proficiency (in the sense that the student has to be efficient in their approach). As mathematics is in large part thinking, I think that it is beneficial to students to let them focus on problem solving rather than recalling and calculating (i.e. designing a solution rather than modifying a known one). There is a huge difference in time-constrained environments (e.g. medical teams, lawyers during trials, etc.), where the cost of "external knowledge" is much higher and good memory is essential. However, math is, in general (things like high-frequency trading are only a small part of math-related professions), slow.</p> <p>On the other hand, memory-oriented teaching is far from being a relic of the past. Why is this so? As this is a broad topic, I will make it more specific:</p> <p><strong>What are the advantages of memory-oriented teaching?</strong></p> <p><strong>What are the disadvantages of allowing aids during tests/exams?</strong></p>
Thomas
53
<p>I disagree with one of the other answers when it says that "math is not about memory". Doing math is not only about memory, but remembering your definitions and theorems can be crucial to doing problems. The argument that a mathematician can just look up these things in books disregards the fact that when doing the problem, you need to collect all the aspects of the solution in your mind (in some way). If you want to solve a problem in group theory, you need a good understanding of what a group is. That is, you need to have memorized the definition and have internalized it.</p> <p>So I would say that this is the answer to the question: "<em>What are the advantages of memory-oriented teaching?</em>". The advantage (or an advantage) is that to "see" a solution to a problem one needs to be able to mentally play with the ingredients.</p> <p><em>What are the disadvantages of allowing aids during tests/exams?</em></p> <p>The disadvantage is, then, that students might not have internalized concepts. </p> <p>In addition to this, I also believe that certain elementary principles and rules need to be memorized. You are not going to do well on a calculus exam if you don't remember basic algebra rules. If you want to succeed as a mathematician (assuming that part of the goal of undergraduate mathematics education is to prepare future mathematicians), you will have to memorize things. It would be very difficult to have a conversation about mathematics if you didn't remember things.</p> <p>To sum up, my answer is: I don't think that open-book exams are bad. I think they are great. But there is much value in memorizing/remembering. </p>
18
<p>Some teachers make memorizing formulas, definitions and other things obligatory, and forbid "aids" in any form during tests and exams. Others allow writing down more complicated expressions, sometimes anything on paper (books, tables, solutions to previously solved problems), and in yet another setting students are expected to take questions home, study the problems in any way they want and then submit solutions a few days later.</p> <p>Naturally, the memory-oriented problem sets are relatively easier (modulo the time limit), and encourage less understanding and more proficiency (in the sense that the student has to be efficient in their approach). As mathematics is in large part thinking, I think that it is beneficial to students to let them focus on problem solving rather than recalling and calculating (i.e. designing a solution rather than modifying a known one). There is a huge difference in time-constrained environments (e.g. medical teams, lawyers during trials, etc.), where the cost of "external knowledge" is much higher and good memory is essential. However, math is, in general (things like high-frequency trading are only a small part of math-related professions), slow.</p> <p>On the other hand, memory-oriented teaching is far from being a relic of the past. Why is this so? As this is a broad topic, I will make it more specific:</p> <p><strong>What are the advantages of memory-oriented teaching?</strong></p> <p><strong>What are the disadvantages of allowing aids during tests/exams?</strong></p>
DavidButlerUofA
1,853
<p>I'm only going to answer one of the questions:<br /> <strong>What are the disadvantages of allowing aids during tests/exams?</strong></p> <p>First a point of terminology: Here in Australia, a sheet of notes you are allowed to take into your exam is called a &quot;cheat sheet&quot;, and in Asia it's known as a &quot;help sheet&quot;. In the USA it seems to be most often called a &quot;crib sheet&quot; or &quot;crib notes&quot;.</p> <p>Anyway, I read quite a bit about this a couple of years ago, and you can see a short literature review in my conference paper <a href="https://www.merga.net.au/Public/Public/Publications/Annual_Conference_Proceedings/2011_MERGA_CP.aspx" rel="nofollow noreferrer"><em>Student experiences of making and using cheat sheets in mathematical exams</em>, David Butler &amp; Nicholas Crouch</a>. To summarise: some people say it improves grades, while others find no significant difference; some think it encourages a deep approach, while others think it encourages dependency; most agree it helps students feel less stressed, whereas my research indicates it actually increases stress for some students.</p> <p>The strongest conclusion is that different students will react differently to the opportunity to make or use a cheat sheet. From the surveys in my own research, we learned that students have widely varied expectations of what a cheat sheet is capable of, and indeed of what an exam is for, and this coloured their view of how useful it was for study and stress reduction. If you choose to allow them, then you need to give students guidance in how to use them to study for understanding, how to study generally, and how to manage their expectations of the exam. For example, do emphasise that a lot of the exam is <em>not</em> about memory and there will often be several questions that are not the same as anything they've seen before (if that's what your exam is like, of course). 
I have a video of a seminar for students on this topic, <a href="https://www.youtube.com/watch?v=-6GqeqabP8k" rel="nofollow noreferrer"><em>SEMINAR: Writing and Using a &quot;Cheat Sheet&quot;</em></a>: <a href="https://www.youtube.com/channel/UC01Nr5bWVlQeZdxlx_BPN_Q" rel="nofollow noreferrer">Maths Learning Centre UofA</a>. Please excuse the strange clicking in the sound, I'll redo it with better sound quality when I get time.</p> <p>I would recommend if you allow cheat sheets, to make sure that students are aware that they don't <em>have</em> to make one. Indeed, I would actually <em>also</em> provide a formula sheet with the exam and let them know what is on the formula sheet. They can use the formula sheet as a basis for making their cheat sheet, or only include extra information on top of it, or not make a cheat sheet at all. Giving them this choice helps them to have more power over their own study process.</p>
2,205,950
<p>If $f:\mathbb{R} \rightarrow \mathbb{R}$ satisfies $f'(a) \neq 0$ for all $a \in \mathbb{R}$, show that $f$ is one-to-one on $\mathbb{R}$.</p> <h2>My attempt</h2> <p>We know that $f(a)$ is not a constant because $f'(a)\neq 0$. Define $f$ by $f(a)=bx$. $f'(a)=x\neq 0$</p> <p>If $f(x)=f(v)$ then $$bx=bv$$</p> <p>$$x=v$$</p> <p>Thus, $f$ is one-to-one.</p>
Bernard
202,857
<p>A function which is the derivative of another function has the intermediate value property (Darboux's theorem). Therefore, if $f'(a)\ne 0$ for all $a\in\mathbf R$, $f'$ has a constant sign, and $f$ is increasing or decreasing on $\mathbf R$. In either case, it is one-to-one.</p>
121,924
<p>Why is <span class="math-container">$\mathrm{Hom}_{\mathbb{Z}}\left(\prod_{n \geq 2}\mathbb{Z}_{n},\mathbb{Q}\right)$</span> nonzero?</p> <p>Context: This is problem <span class="math-container">$2.25 (iii)$</span> of page <span class="math-container">$69$</span> Rotman's Introduction to Homological Algebra:</p> <blockquote> <p>Prove that <span class="math-container">$$\mathrm{Hom}_{\mathbb{Z}}\left(\prod_{n \geq 2}\mathbb{Z}_{n},\mathbb{Q}\right) \ncong \prod_{n \geq 2}\mathrm{Hom}_{\mathbb{Z}}(\mathbb{Z}_{n},\mathbb{Q}).$$</span></p> </blockquote> <p>The right hand side is <span class="math-container">$0$</span> because <span class="math-container">$\mathbb{Z}_{n}$</span> is torsion and <span class="math-container">$\mathbb{Q}$</span> is not.</p>
S.Hamid Hassanzadeh
636,075
<p><span class="math-container">$\mathbb{Q}$</span> is an injective <span class="math-container">$\mathbb{Z}$</span>-module. The exact sequence <span class="math-container">$$0\rightarrow \mathbb{Z} \rightarrow \prod \mathbb{Z}_n\rightarrow C\rightarrow 0$$</span> yields the exact sequence <span class="math-container">$$0\rightarrow \mathrm{Hom}_{\mathbb{Z}}(C,\mathbb{Q}) \rightarrow \mathrm{Hom}_{\mathbb{Z}}(\prod \mathbb{Z}_n, \mathbb{Q})\rightarrow \mathrm{Hom}_{\mathbb{Z}}(\mathbb{Z},\mathbb{Q})\rightarrow 0.$$</span> The last term in the latter exact sequence is just <span class="math-container">$\mathbb{Q}$</span>, hence <span class="math-container">$\mathrm{Hom}_{\mathbb{Z}}(\prod \mathbb{Z}_n,\mathbb{Q})\neq 0.$</span></p>
629,275
<p>A function $f$ is defined on an open set $D$ of $\mathbb R^{2}$ is called a differentiable at a point $x\in D$ if there is a vector $m \in \mathbb R^{2} $ such that $$\lim_{h\to 0} \frac{f(x+h)-f(x)-m\cdot h}{|h|}=0.$$</p> <p><strong>My questions are</strong>: (1) What is a geometric interpretation of $f:\mathbb R^{2} \to \mathbb R$ is a differentiable at a point in $D$ ? </p> <p>( Let $f:\mathbb R^{2} \to \mathbb R $such that $f(x, y)= \frac{x^{3}y}{x^{4}+y^{2}}$ for $(x,y)\not = (0,0)$ and $f(0,0)= 0$. Notice that all the directional derivatives of $f$ exists at $(0, 0)$ and they are equal at $(0, 0)$ but although $f$ is fails to be differentiable at $(0,0)$. )</p> <p>(2) What is a geometric interpretation of $f:D\subset \mathbb R^{n}\to \mathbb R^{m}$ is differentiable at point in $D$ ?</p>
Andrej Bauer
30,711
<p>The geometric interpretation of $f : \mathbb{R}^2 \to \mathbb{R}$ is a surface in $\mathbb{R}^3$. The geometric interpretation of the differentiability of $f$ at a point $x$ is that the surface has a tangent plane at $x$. The components of the vector $m$ are the partial derivatives of $f$.</p> <p>In the multidimensional case $f : \mathbb{R}^n \to \mathbb{R}$ the map is differentiable at a given point when, geometrically speaking, it has a tangent <em>hyper</em>-plane at that point.</p>
629,275
<p>A function $f$ is defined on an open set $D$ of $\mathbb R^{2}$ is called a differentiable at a point $x\in D$ if there is a vector $m \in \mathbb R^{2} $ such that $$\lim_{h\to 0} \frac{f(x+h)-f(x)-m\cdot h}{|h|}=0.$$</p> <p><strong>My questions are</strong>: (1) What is a geometric interpretation of $f:\mathbb R^{2} \to \mathbb R$ is a differentiable at a point in $D$ ? </p> <p>( Let $f:\mathbb R^{2} \to \mathbb R $such that $f(x, y)= \frac{x^{3}y}{x^{4}+y^{2}}$ for $(x,y)\not = (0,0)$ and $f(0,0)= 0$. Notice that all the directional derivatives of $f$ exists at $(0, 0)$ and they are equal at $(0, 0)$ but although $f$ is fails to be differentiable at $(0,0)$. )</p> <p>(2) What is a geometric interpretation of $f:D\subset \mathbb R^{n}\to \mathbb R^{m}$ is differentiable at point in $D$ ?</p>
Christian Blatter
1,303
<p>Given a function ${\bf f}:\&gt;{\mathbb R}^n\to{\mathbb R}^m$ and a "working point" ${\bf p}$ in the domain of ${\bf f}$ one may ask how the value of ${\bf f}$ changes when one moves from ${\bf p}$ to a nearby point ${\bf p}+{\bf X}$, $\&gt;|{\bf X}|\ll1$. This means that we are interested in the auxiliary function $$\Delta_{\bf p}{\bf f}({\bf X}):={\bf f}({\bf p}+{\bf X})-{\bf f}({\bf p})$$ as a function of the increment vector ${\bf X}$. When this function is "in first approximation" <em>linear</em> in the increment variable ${\bf X}$ the given function ${\bf f}$ is called <em>differentiable</em> at ${\bf p}$. </p> <p>What does that mean precisely? It means that there is a certain linear map $A:\&gt;{\mathbb R}^n\to{\mathbb R}^m$ such that $${\bf f}({\bf p}+{\bf X})-{\bf f}({\bf p})=A\&gt;{\bf X}\ +r({\bf X})\ ,\tag{1}$$ where the remainder term $r({\bf X})$ is <em>essentially smaller than linear</em> in ${\bf X}$, i.e., one has $${r({\bf X})\over |{\bf X}|}\to{\bf 0}\qquad({\bf X}\to{\bf 0})\ .\tag{2}$$ The relations $(1)$ and $(2)$ can then be condensed in the intuitive formula $${\bf f}({\bf p}+{\bf X})-{\bf f}({\bf p})=A\&gt;{\bf X}\ +o(|{\bf X}|)\qquad({\bf X}\to{\bf 0})\ .\tag{3}$$ The linear map $A$ appearing in $(1)$ and $(3)$ is called the <em>derivative</em> (or <em>differential</em>) of ${\bf f}$ at ${\bf p}$ and is denoted by ${\bf f}'({\bf p})$ or $d{\bf f}({\bf p})$. The $m\, n$ matrix elements of $A$ are the partial derivatives of ${\bf f}$ at ${\bf p}$.</p> <p>One of the difficult to explain miracles of analysis is that most functions ${\bf f}$ occurring in practice are differentiable at most points in their domain.</p>
4,590,677
<p>This is perhaps a silly question related to calculating with surds. I was working out the area of a regular pentagon ABCDE of side length 1 today and I ended up with the following expression :</p> <p><span class="math-container">$$\frac{\sqrt{5+2\sqrt5}+\sqrt{10+2\sqrt{5}}}{4}$$</span></p> <p>obtained by summing the areas of the triangles ABC, ACD and ADE.</p> <p>I checked my solution with Wolfram Alpha which gave me the following equivalent expression :</p> <p><span class="math-container">$$\frac{\sqrt{25+10\sqrt{5}}}{4}$$</span></p> <p>I was able to show that these two expressions are equivalent by squaring the numerator in my expression, which gave me</p> <p><span class="math-container">$$15+4\sqrt5+2\sqrt{70+30\sqrt5},$$</span></p> <p>and then &quot;noticing&quot; that</p> <p><span class="math-container">$$\sqrt{70+30\sqrt5}=\sqrt{25+30\sqrt5+45}=5+3\sqrt5.$$</span> My question is the following : how could I have known beforehand that my sum of surds could be expressed as a single surd, and is there a way to systematize this type of calculation ? I would have liked to find the final, simplest expression on my own without the help of a computer.</p> <p>Thanks in advance !</p>
Jean-Claude Arbaut
43,608
<p>One way to (try to) simplify is squaring the sum and see where it leads. Then you get a product of radicals instead of a sum, and it's easier to simplify.</p> <p><span class="math-container">$$4A=\sqrt{5+2\sqrt5}+\sqrt{10+2\sqrt5}$$</span> <span class="math-container">$$16A^2=15+4\sqrt5+2\sqrt{(5+2\sqrt5)(10+2\sqrt5)}$$</span> <span class="math-container">$$16A^2=15+4\sqrt{5}+2\sqrt{70+30\sqrt5}$$</span></p> <p>When you are here, there is a nested radical that you don't know how to deal with. It would be nice to write the radicand as a square. Let's try:</p> <p><span class="math-container">$$70+30\sqrt5=(a+b\sqrt5)^2=a^2+5b^2+2ab\sqrt5$$</span></p> <p>Therefore, you must have</p> <p><span class="math-container">$$a^2+5b^2=70$$</span> <span class="math-container">$$2ab=30$$</span></p> <p>Or equivalently</p> <p><span class="math-container">$$a^2+5b^2=70$$</span> <span class="math-container">$$5a^2b^2=5\times15^2=1125$$</span></p> <p>Therefore, <span class="math-container">$a^2$</span> and <span class="math-container">$5b^2$</span> are the roots of the trinomial <span class="math-container">$t^2-70t+1125$</span>.</p> <p>The discriminant is <span class="math-container">$70^2-4\times1125=400$</span>, a perfect square so it's going to be rather simple:</p> <p><span class="math-container">$$t=\frac{70\pm\sqrt{400}}{2}=\frac{70\pm20}{2}$$</span></p> <p>That is, <span class="math-container">$t=25$</span> or <span class="math-container">$t=45$</span>.</p> <p>Therefore, either <span class="math-container">$a^2=25$</span> and <span class="math-container">$b^2=9$</span>, or <span class="math-container">$a^2=45$</span> and <span class="math-container">$b^2=5$</span>. 
Both are correct, but only the first solution leads to integer <span class="math-container">$a$</span> and <span class="math-container">$b$</span> (however, the other solution leads after simplification to the same result): <span class="math-container">$a=\pm5$</span>, <span class="math-container">$b=\pm3$</span>. Now, the sign matters: from <span class="math-container">$2ab=30$</span> we know <span class="math-container">$a$</span> and <span class="math-container">$b$</span> must have the same sign. Since we are squaring it doesn't matter if they are both positive or both negative, but since we are going to take the square root of the square (i.e. absolute value), we may as well choose the positive values of <span class="math-container">$a$</span> and <span class="math-container">$b$</span>.</p> <p>Therefore, <span class="math-container">$70+30\sqrt5=(5+3\sqrt5)^2$</span>, and</p> <p><span class="math-container">$$16A^2=15+4\sqrt5+2\times(5+3\sqrt5)=25+10\sqrt5$$</span></p> <p>And</p> <p><span class="math-container">$$A=\frac14\sqrt{25+10\sqrt5}$$</span></p> <hr /> <p>In this specific case, you may get the expression for the area by summing the areas of the <span class="math-container">$10$</span> right triangles with legs <span class="math-container">$a$</span> and <span class="math-container">$\frac c2$</span>:</p> <p>If the apothem is <span class="math-container">$a$</span> and the side <span class="math-container">$c=1$</span>, then <span class="math-container">$\tan\frac\pi5=\frac{c}{2a}$</span>.
Hence <span class="math-container">$a=\frac{c}{2\tan\frac\pi5}$</span> and the area <span class="math-container">$A=10\cdot\frac12\cdot a\cdot\frac c2=\frac{5ac}{2}=\frac{5}{4\tan\frac\pi5}$</span>.</p> <p><span class="math-container">$$A=\frac{5}{4\tan\frac\pi5}=\frac{5\cos\frac\pi5}{4\sin\frac\pi5}=\frac{5(1+\sqrt5)}{4\sqrt{10-2\sqrt5}}=\frac{5(1+\sqrt5)\sqrt{10+2\sqrt5}}{4\sqrt{80}}\\=\frac{5\sqrt{6+2\sqrt5}\sqrt{10+2\sqrt5}}{16\sqrt5}=\frac{1}{16}\sqrt{30+10\sqrt5}\sqrt{10+2\sqrt5}\\=\frac{1}{16}\sqrt{400+160\sqrt5}=\frac{1}{4}\sqrt{25+10\sqrt5}$$</span></p> <p>Again, we got a product (and quotient) of radicals, and there are ways to simplify this.</p>
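As a quick numerical sanity check of the denesting above (my own Python sketch, not part of the original answer):

```python
import math

# LHS: the sum of surds from adding the three triangle areas
lhs = math.sqrt(5 + 2*math.sqrt(5)) + math.sqrt(10 + 2*math.sqrt(5))
# RHS: the single surd form reported by Wolfram Alpha (times 4)
rhs = math.sqrt(25 + 10*math.sqrt(5))
assert math.isclose(lhs, rhs)

# The denesting step used above: 70 + 30*sqrt(5) = (5 + 3*sqrt(5))^2
assert math.isclose(70 + 30*math.sqrt(5), (5 + 3*math.sqrt(5))**2)

print(lhs / 4)  # area of the regular pentagon with unit side, about 1.72048
```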
2,426,361
<p>What would be the best mathematical tool/concept to measure how far a matrix is from being singular? Could it be the condition number?</p>
José Carlos Santos
446,262
<p>I am assuming that your matrix is a $n\times n$ matrix. You could take the rank of the matrix. Its possible values are $0,1,\ldots,n$. The matrix is singular if and only if its rank is smaller than $n$. The rank is $0$ if and only if the matrix is the null matrix, which is the most singular of all matrices.</p>
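To complement the rank, a quantitative measure of "distance from singular" is the smallest singular value: by the Eckart–Young theorem it equals the spectral-norm distance to the nearest singular matrix. A minimal pure-Python sketch for the $2\times2$ case (for larger matrices one would normally use a library SVD routine):

```python
import math

def smallest_singular_value(M):
    """Smallest singular value of a 2x2 matrix M, computed as the square
    root of the smaller eigenvalue of M^T M (closed form for 2x2).
    It equals the spectral-norm distance from M to the nearest singular matrix."""
    (a, b), (c, d) = M
    # M^T M = [[a^2 + c^2, ab + cd], [ab + cd, b^2 + d^2]]
    p, q, r = a*a + c*c, a*b + c*d, b*b + d*d
    lo = (p + r - math.sqrt((p - r)**2 + 4*q*q)) / 2
    return math.sqrt(max(lo, 0.0))

print(smallest_singular_value([[1, 0], [0, 1e-6]]))  # tiny: nearly singular
print(smallest_singular_value([[1, 0], [0, 1]]))     # 1.0: far from singular
```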
728,495
<p>I'm taking Discrete Math this semester. While I understand the mechanics of proofs, I find that I must refine my understanding of how to work them. To that end, I'm working through some extra problems on spring break. Please read over this proof I did from an exercise from the book. I apologize in advance for poor formatting. I just couldn't figure out how to make this one big block of LaTeX commands. I'm still learning.</p> <p>Let A, B, C be subsets of a Universal set U.</p> <p>Given $A \cap B \subseteq C \wedge A^c \cap B \subseteq C \Rightarrow B \subseteq C$</p> <p>Proof: Part 1:</p> <p>$$ \begin{array}{rcl} B &amp; = &amp; (A \cap B) \cup (A^c \cap B) \\ &amp; = &amp; (A \cap B) \cup B , \text{definition of intersection} \\ &amp; = &amp; B \, \square \end{array} $$</p> <p>Part 2: Part 1 states $B = (A \cap B) \cup (A^c \cap B)$.<br> By assumption, $(A \cap B) \subseteq C \wedge (A^c \cap B) \subseteq C$<br> $\therefore B \subseteq C$</p> <p>Thanks<br> Andy</p>
user138320
138,320
<p>Why is $ A^c \cap B = B $? I think there is a typo/mistake in that line. What I believe you want to say is that $ (A \cap B) \cup ( A^c \cap B ) = ( A \cup A^c) \cup B $, by the distributive law, and that $ = ( A \cup A^c) \cup B = U \cup B = B $.</p>
728,495
<p>I'm taking Discrete Math this semester. While I understand the mechanics of proofs, I find that I must refine my understanding of how to work them. To that end, I'm working through some extra problems on spring break. Please read over this proof I did from an exercise from the book. I apologize in advance for poor formatting. I just couldn't figure out how to make this one big block of LaTeX commands. I'm still learning.</p> <p>Let A, B, C be subsets of a Universal set U.</p> <p>Given $A \cap B \subseteq C \wedge A^c \cap B \subseteq C \Rightarrow B \subseteq C$</p> <p>Proof: Part 1:</p> <p>$$ \begin{array}{rcl} B &amp; = &amp; (A \cap B) \cup (A^c \cap B) \\ &amp; = &amp; (A \cap B) \cup B , \text{definition of intersection} \\ &amp; = &amp; B \, \square \end{array} $$</p> <p>Part 2: Part 1 states $B = (A \cap B) \cup (A^c \cap B)$.<br> By assumption, $(A \cap B) \subseteq C \wedge (A^c \cap B) \subseteq C$<br> $\therefore B \subseteq C$</p> <p>Thanks<br> Andy</p>
dsm
129,367
<p>Formatting looks good to me. There is a typo in part 1, as pointed out by user138320, but there is also a typo in what he mentioned: The distributive law yields $$(A \cap B) \cup ( A^c \cap B ) = ( A \cup A^c) \cap B = U \cap B = B$$ Part 2 is sufficient, but it lacks necessary formalism. Additionally, you made a typo by having $\cup$ instead of $\cap$ in the "By assumption" part. Now, when you want to show that a set is a subset of another, it is best to start with an arbitrary element in the subset and show that this element is also in the "larger" set. In your example, consider some $x \in B$. From the decomposition $$B = (A \cap B) \cup ( A^c \cap B )$$ you know that either $x \in A^c \cap B$ or $x \in A \cap B$, but by assumption $A \cap B \subseteq C$ and $A^c \cap B \subseteq C$, so $x \in C$. This shows that for every element $x \in B$ you also have $x \in C$, thus $B \subseteq C$. </p> <p>Moreover, say that you are dealing with a different problem where you want to show that two sets are <em>equal</em>. In order to do this you would follow the same procedure given above, but both ways. In other words, suppose you want to show two sets $X$ and $Y$ are the same. You would first start out by showing that $$x \in X \Rightarrow x \in Y$$ and $$y \in Y \Rightarrow y \in X$$ Showing this guarantees that $X = Y$, as this is the definition of two sets being equal. Overall, appealing to an arbitrary element in a set is usually the most direct way of carrying out a formal proof involving sets. Hope this helps!</p>
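The decomposition and the subset argument can also be sanity-checked with Python's built-in sets; the particular sets below are my own illustrative choices:

```python
# Concrete check of the set identities with small finite sets
U = set(range(10))          # universal set
A = {0, 1, 2, 3}
B = {2, 3, 4, 5}
C = {2, 3, 4, 5, 6}         # chosen so both hypotheses hold
Ac = U - A                  # complement of A in U

# the decomposition used in the proof
assert (A & B) | (Ac & B) == B
# the hypotheses: A∩B ⊆ C and Aᶜ∩B ⊆ C
assert (A & B) <= C and (Ac & B) <= C
# the conclusion
assert B <= C
```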
3,862,408
<p>This is the second example of 1. in <a href="http://www-personal.umich.edu/%7Ebhattb/teaching/mat679w17/lectures.pdf" rel="nofollow noreferrer">Ex. 2.0.3 </a> of Bhatt's notes in perfectoid space.</p> <p>We define <span class="math-container">$R^{perf}:= \varprojlim ( \cdots R \xrightarrow{\phi} R)$</span> where <span class="math-container">$\phi$</span> is the Frobenius map.</p> <p>He claims that <span class="math-container">$R=\Bbb F_p[t]$</span> we have <span class="math-container">$R^{perf}=\Bbb F_p$</span>.</p> <p><strong>I don't see why.</strong></p> <hr /> <p>My thoughts: From the map <span class="math-container">$\Bbb F_p[t] \rightarrow \Bbb F_p$</span>, <span class="math-container">$t \mapsto 0$</span>. We get an induced map on limits. Since <span class="math-container">$\Bbb F_p$</span> is perfect, <span class="math-container">$$g:\varprojlim \Bbb F_p[t] \rightarrow \Bbb F_p$$</span></p> <p>There is also a canonical <span class="math-container">$\Bbb F_p \rightarrow \Bbb F_p[t]$</span>, inducing <span class="math-container">$$h:\Bbb F_p \rightarrow \varprojlim \Bbb F_p[t]$$</span></p> <p>I'm guess that these two maps are inverses. Its clear that <span class="math-container">$hg=id$</span>. However, it is less clear to me why <span class="math-container">$gh=id$</span>.</p>
Qiaochu Yuan
232
<p>Since two people have already provided concrete answers, an abstract way to see this is to observe that the functor <span class="math-container">$X \mapsto X^H$</span> is represented by <span class="math-container">$G/H$</span>, meaning that <span class="math-container">$\text{Hom}_G(G/H, X) \cong X^H$</span>; in other words, <span class="math-container">$G/H$</span> is the free <span class="math-container">$G$</span>-set on an <span class="math-container">$H$</span>-fixed point. This means <span class="math-container">$X^H$</span> canonically acquires an action of the automorphism group <span class="math-container">$\text{Aut}_G(G/H)$</span> (in fact this group is precisely the group of automorphisms of the <span class="math-container">$H$</span>-fixed points functor, by the Yoneda lemma).</p> <p>An automorphism <span class="math-container">$\varphi : G/H \to G/H$</span> is in particular a map satisfying</p> <p><span class="math-container">$$\forall g \in G : \varphi(gxH) = g \varphi(xH).$$</span></p> <p>Since <span class="math-container">$G$</span> acts transitively on <span class="math-container">$G/H$</span> such a map is determined by where it sends the coset <span class="math-container">$eH$</span> of the identity, which (by the universal property of <span class="math-container">$G/H$</span>) must be another coset <span class="math-container">$yH$</span> fixed by the action of <span class="math-container">$H$</span>, meaning that</p> <p><span class="math-container">$$\forall h \in H : hyH = yH \Leftrightarrow \forall h \in H : y^{-1} hy \in H$$</span></p> <p>which says exactly that <span class="math-container">$y \in N_G(H)$</span>. Moreover <span class="math-container">$y$</span> is only well-defined up to right multiplication by elements of <span class="math-container">$H$</span> which says exactly that <span class="math-container">$yH \in N_G(H)/H$</span>. 
And conversely every element of <span class="math-container">$N_G(H)/H$</span> gives an automorphism of <span class="math-container">$G/H$</span>, so this is exactly the automorphism group of <span class="math-container">$G/H$</span>.</p> <p>The same argument in the linear setting gives that if <span class="math-container">$V$</span> is a <span class="math-container">$G$</span>-representation over a field <span class="math-container">$k$</span> then the functor <span class="math-container">$V \mapsto V^H$</span> canonically acquires an action of the endomorphism algebra <span class="math-container">$\text{End}_G(k[G/H])$</span>, where <span class="math-container">$k[G/H]$</span> is the free <span class="math-container">$G$</span>-representation on an <span class="math-container">$H$</span>-fixed point, and again by the Yoneda lemma this is precisely the endomorphism algebra of the fixed point functor. This algebra is the <a href="https://en.wikipedia.org/wiki/Hecke_algebra" rel="nofollow noreferrer">Hecke algebra</a> associated to the pair <span class="math-container">$H \hookrightarrow G$</span>, and it has a distinguished basis given by double cosets; see, for example, <a href="https://mathoverflow.net/questions/283716/what-is-the-matter-with-hecke-operators/283727#283727">this MO answer</a>.</p>
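For readers who want to experiment, the count $|\mathrm{Aut}_G(G/H)| = |N_G(H)/H|$ can be brute-forced for a small example. The sketch below (my own illustration, not from the answer) takes $G = S_3$ and $H$ generated by a transposition, and counts the $G$-equivariant bijections of $G/H$:

```python
from itertools import permutations

def compose(p, q):
    # (p∘q)(i) = p[q[i]], permutations as tuples of images
    return tuple(p[i] for i in q)

def inverse(p):
    r = [0] * len(p)
    for i, v in enumerate(p):
        r[v] = i
    return tuple(r)

G = list(permutations(range(3)))   # S_3
H = [(0, 1, 2), (1, 0, 2)]         # subgroup generated by the transposition (0 1)

cosets = list({frozenset(compose(g, h) for h in H) for g in G})

def act(g, coset):                 # g · (xH) = (gx)H
    return frozenset(compose(g, x) for x in coset)

# count G-equivariant bijections φ : G/H → G/H
autos = sum(
    1
    for images in permutations(cosets)
    for phi in [dict(zip(cosets, images))]
    if all(phi[act(g, c)] == act(g, phi[c]) for g in G for c in cosets)
)

# normalizer N_G(H) = {g : gHg⁻¹ = H}
N = [g for g in G
     if {compose(compose(g, h), inverse(g)) for h in H} == set(H)]

assert autos == len(N) // len(H)   # |Aut_G(G/H)| = |N_G(H)/H| (here both are 1)
```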
19,815
<p>Problem:</p> <blockquote> <p>Prove that if gcd( a, b ) = 1, then gcd( a - b, a + b ) is either 1 or 2.</p> </blockquote> <p>From Bezout's Theorem, I see that am + bn = 1, and a, b are relative primes. However, I could not find a way to link this idea to a - b and a + b. I realized that in order to have gcd( a, b ) = 1, they must not be both even. I played around with some examples (13, 17), ...and I saw it's actually true :( ! Any idea?</p>
Bill Dubuque
242
<p><strong>Hint</strong> <span class="math-container">$\rm\,\ a\!-\!b + (a\!+\!b)\ {\it i}\ =\ (1\!+\!{\it i})\ (a\!+\!b\!\ {\it i})\ \ $</span> yields a slick proof using Gaussian integers. This reveals the arithmetical essence of the matter and, hence, suggests <a href="https://math.stackexchange.com/a/33104/242">obvious generalizations</a>.</p>
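A quick empirical check of both the identity in the hint and the statement itself (my own sketch, not part of the hint):

```python
import math

# The Gaussian-integer identity behind the hint, checked with Python complex numbers
for a, b in [(13, 17), (3, 8), (5, 9)]:
    assert (a - b) + (a + b)*1j == (1 + 1j) * (a + b*1j)

# Empirical check of the statement: gcd(a-b, a+b) is 1 or 2 whenever gcd(a,b) = 1
for a in range(1, 60):
    for b in range(1, 60):
        if math.gcd(a, b) == 1:
            assert math.gcd(abs(a - b), a + b) in (1, 2)
```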
1,757,260
<p>A little box contains $40$ smarties: $16$ yellow, $14$ red and $10$ orange.</p> <p>You draw $3$ smarties at random (without replacement) from the box.</p> <p>What is the probability (in percentage) that you get $2$ smarties of one color and another smarties of a different color?</p> <p>Round your answer to the nearest integer.</p> <p>Answer given is $67$. I don't get it. Is it not: $$\left(\frac{16}{40} \times \frac{15}{39} \times\frac{24}{38}\right) + \left(\frac{14}{40} \times\frac{13}{39} \times\frac{26}{38}\right) +\left(\frac{10}{40} \times\frac{9}{39} \times\frac{30}{38}\right)= 22?$$</p>
André Nicolas
6,312
<p>We imagine taking out the candies one at a time.</p> <p>Your $\frac{16}{40}\cdot\frac{15}{39}\cdot \frac{24}{38}$ calculates the probability of getting Yellow, Yellow, Other <em>in that order</em>. However, two Yellow and one Other can happen in two additional orders, Yellow, Other, Yellow or Other, Yellow, Yellow. Each of these turns out to have the same probability as Yellow, Yellow, Other.</p> <p>So the probability of $2$ Yellow and $1$ Other is $3\cdot \frac{16}{40}\cdot\frac{15}{39}\cdot \frac{24}{38}$.</p> <p>Similar adjustments need to be made in your other two terms.</p> <p><em>Another way</em>: There are $\binom{40}{3}$ equally likely ways to choose $3$ candies. We now count the <em>favourables</em>, where we have $2$ candies of one colour and $1$ of another. For example the number of ways to have $2$ Yellow and $1$ Other is $\binom{16}{2}\binom{24}{1}$, and we have similar expressions for the other favourables. Add up, and divide by $\binom{40}{3}$.</p>
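Both computations can be verified with exact rational arithmetic (an illustrative check, not part of the original answer):

```python
from fractions import Fraction
from math import comb

# Ordered-draws computation: 3 orders for each "two of one colour" pattern
p_ordered = 3 * (Fraction(16, 40) * Fraction(15, 39) * Fraction(24, 38)
                 + Fraction(14, 40) * Fraction(13, 39) * Fraction(26, 38)
                 + Fraction(10, 40) * Fraction(9, 39) * Fraction(30, 38))

# Counting computation: choose 2 of one colour and 1 of the remaining colours
p_counted = Fraction(comb(16, 2)*24 + comb(14, 2)*26 + comb(10, 2)*30, comb(40, 3))

assert p_ordered == p_counted
print(round(float(p_counted) * 100))   # 67
```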
2,699,621
<p>To show $1 + \frac12 x - \frac18 x^2 &lt; \sqrt{1+x}$ is it enough to tell that the taylor series expansion of $\sqrt{1+x}$ around $0$ has more positive terms?</p>
John Falvey
311,586
<p>Multiplying the inequality $1+\frac12x-\frac18x^2&lt;\sqrt{1+x}$ by $8$ gives</p> <p>$$8+4x-x^2&lt;8\sqrt{1+x},\qquad\text{i.e.}\qquad 8+x(4-x)&lt;8(1+x)^{0.5}.$$</p> <p>Squaring both sides (valid where the left side is nonnegative; where it is negative the inequality is immediate, since the right side is positive):</p> <p>$$64+16x(4-x)+x^2(4-x)^2&lt;64(1+x)$$</p> <p>$$64+64x-16x^2+x^2(x^2-8x+16)&lt;64+64x$$</p> <p>Cancel $64+64x$, then $-16x^2+16x^2$:</p> <p>$$x^4-8x^3&lt;0,\qquad\text{i.e.}\qquad x^3(x-8)&lt;0.$$</p> <p>This holds when either $x^3&lt;0$ and $x-8&gt;0$ (that is, $x&lt;0$ and $x&gt;8$, which is not feasible), or $x^3&gt;0$ and $x-8&lt;0$. Hence $0&lt;x&lt;8$ satisfies the inequality.</p>
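A numerical spot-check of the conclusion on a grid (my own sketch):

```python
import math

# Spot-check 1 + x/2 - x^2/8 < sqrt(1+x) on the interval 0 < x < 8
for i in range(1, 800):
    x = i / 100
    assert 1 + x/2 - x**2/8 < math.sqrt(1 + x)

# The polynomial condition from the final step holds on the same interval
for i in range(1, 800):
    x = i / 100
    assert x**3 * (x - 8) < 0
```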
2,174,061
<p>in $\Delta ABC$ if the $AD\perp BC$,$D\in BC$,and such $$|BC|=2|AD|$$ show that $$\dfrac{|AB|}{|AC|}\le\sqrt{2}+1$$ <a href="https://i.stack.imgur.com/SXDvI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SXDvI.png" alt="enter image description here"></a></p> <p>since $$\cot{B}+\cot{C}=\dfrac{BD}{AD}+\dfrac{CD}{AD}=2$$ so $$\dfrac{AB}{AC}=\dfrac{\sin{C}}{\sin{B}}$$</p>
Rafa Budría
362,604
<p>I think you only have to check the worst case, that is, to prove that $\sqrt{5}\le\sqrt 2 +1$.</p>
2,174,061
<p>in $\Delta ABC$ if the $AD\perp BC$,$D\in BC$,and such $$|BC|=2|AD|$$ show that $$\dfrac{|AB|}{|AC|}\le\sqrt{2}+1$$ <a href="https://i.stack.imgur.com/SXDvI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SXDvI.png" alt="enter image description here"></a></p> <p>since $$\cot{B}+\cot{C}=\dfrac{BD}{AD}+\dfrac{CD}{AD}=2$$ so $$\dfrac{AB}{AC}=\dfrac{\sin{C}}{\sin{B}}$$</p>
Blue
409
<p>Taking <span class="math-container">$|AC| = 1$</span>, we have <span class="math-container">$$|AD| = \sin C = \frac12|BC| \qquad |CD|=\cos C \quad\text{(as a signed length)}$$</span> so that <span class="math-container">$$\begin{align} |AB|^2 &amp;= |AD|^2+|BD|^2 = |AD|^2+\left(|BC|-|CD|\right)^2 \\[4pt] &amp;= \sin^2C + (2\sin C-\cos C)^2 \\[4pt] &amp;= 1-4\sin C\cos C+4\sin^2 C \\[4pt] &amp;= 1-2\sin 2C+2(1-\cos2C) \\[4pt] &amp;= 3-2(\sin 2C+\cos2C) \\[4pt] &amp;= 3-2\sqrt{2}\cos(2C-\pi/4) \end{align}$$</span> This value is clearly maximized when the cosine is <span class="math-container">$-1$</span> (that is, when <span class="math-container">$C= 5\pi/8$</span>), where we have <span class="math-container">$$|AB|^2 = 3 + 2 \sqrt{2} = \left(\;1 + \sqrt{2}\;\right)^2$$</span> proving the result. <span class="math-container">$\square$</span></p> <hr> <p>Because we can write (spontaneously changing <span class="math-container">$C$</span> to <span class="math-container">$\theta$</span>) <span class="math-container">$$|AB|^2 = 1^2 + \sqrt{2}^2 - 2 \cdot 1 \cdot \sqrt{2}\cdot\cos(2\theta-\pi/4)$$</span> à la the Law of Cosines, the general scenario can be <em>illustrated</em> (though not (yet?) properly <em>explained</em>) thusly:</p> <p><a href="https://i.stack.imgur.com/dq4vs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dq4vs.png" alt="enter image description here"></a></p> <p>with this optimal state:</p> <p><a href="https://i.stack.imgur.com/nVvi6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nVvi6.png" alt="enter image description here"></a></p>
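The maximization step can be checked numerically by scanning $C$ over a fine grid (an illustrative sketch, not part of the original answer):

```python
import math

# |AB|^2 = 3 - 2*sqrt(2)*cos(2C - pi/4) with |AC| = 1; scan C over (0, pi)
best = max(
    math.sqrt(3 - 2*math.sqrt(2)*math.cos(2*k*math.pi/200000 - math.pi/4))
    for k in range(1, 200000)
)
# the maximum ratio |AB|/|AC| should be 1 + sqrt(2)
assert abs(best - (1 + math.sqrt(2))) < 1e-6
```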
3,363,944
<p>A group consisting of <span class="math-container">$3$</span> men and <span class="math-container">$6$</span> women attends a prizegiving ceremony. If <span class="math-container">$ 5$</span> prizes are awarded at random to members of the group, find the probability that exactly <span class="math-container">$3 $</span> of the prizes are awarded to women if<br> a) There is a restriction of at most one prize per person<br> b) There is no restriction on the number of prizes per person</p> <p>I did part a) and got the same result as the solution but I failed at getting the same answer for part b). When I looked at the working outs of both parts, I noticed a significant difference in the ways two parts are solved. </p> <p>This is the working out for part a) (which is also similar to my working out) a) <span class="math-container">$\frac{6C3\times 3C2}{9C5} = \frac{10}{21}\ $</span></p> <p>And this is the working out of part b) b) <span class="math-container">$\ 5C3 \times (\frac{3}{9})^{2} \times (\frac{6}{9})^{3}\ = \frac{80}{243}\ $</span></p> <p>I'm so confused why part b) is done in such a different way than part a) and as a student, how can I know when to consider the numerator and denominator separately like part a) and when to find the probability of each component and times all of them together like part b)? Also, can we solve part b) in a similar way like part a)? Does anyone have any tips on how to distinguish these sorts of methods? </p> <p>Thank you very much for helping.</p>
Oliver Kayende
704,766
<p>If the prizes are not identical and each prize is distinct, then the total # of outcomes in a) becomes <span class="math-container">$${9\choose 5}*5!$$</span> whereas in b) it becomes <span class="math-container">$9^5$</span>, which counts the # of functions from a 5-set to a 9-set.</p> <p>The # of desirable outcomes in a) is then <span class="math-container">$${6\choose 3}*{5\choose 3}*3!*{3\choose 2}*2$$</span> yielding for a) the probability <span class="math-container">$\frac {{6\choose 3}*{3\choose 2}}{9\choose 5}=\frac {10}{21}$</span>.</p> <p>The number of desirable outcomes in b) becomes <span class="math-container">$${5\choose 3}*6^3*3^2$$</span> because we choose which 3 prizes are awarded to women, then multiply by the # of functions from a 3-set to a 6-set, and finally multiply by the # of functions from a 2-set to a 3-set. This yields for b) the probability <span class="math-container">$$\frac {{5\choose 3}*6^3*3^2}{9^5}=\frac {80}{243}$$</span></p>
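Both probabilities can be confirmed by exhaustively enumerating award lists — all $9^5$ functions for b), and the ordered selections without repetition for a) (my own illustrative check):

```python
from fractions import Fraction
from itertools import product, permutations

people = ['W']*6 + ['M']*3      # 6 women, 3 men

# (b) no restriction: an award list is any function from the 5 prizes to the 9 people
fav_b = sum(1 for assign in product(range(9), repeat=5)
            if sum(people[i] == 'W' for i in assign) == 3)
assert Fraction(fav_b, 9**5) == Fraction(80, 243)

# (a) at most one prize per person: ordered selections without repetition
fav_a = sum(1 for assign in permutations(range(9), 5)
            if sum(people[i] == 'W' for i in assign) == 3)
assert Fraction(fav_a, 9*8*7*6*5) == Fraction(10, 21)
```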
4,170,940
<blockquote> <p><a href="https://www.isical.ac.in/%7Eadmission/IsiAdmission2017/PreviousQuestion/BStat-BMath-UGA-2016.pdf" rel="nofollow noreferrer">Question 36</a>: Finding graph corresponding to <span class="math-container">$\int_0^{\sqrt{x} } e^{ -\frac{u^2}{x} } du$</span> <a href="https://i.stack.imgur.com/KIVRA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KIVRA.png" alt="" /></a> <span class="math-container">$x&gt;0$</span> and <span class="math-container">$f(0)=0$</span></p> </blockquote> <p>Clearly we can't say the function is increasing or decreasing just by inspection because the bound and both integrand is variable. To make statements there we have to consider it's derivative, which is done by the <a href="https://en.wikipedia.org/wiki/Leibniz_integral_rule" rel="nofollow noreferrer">Leibniz integral rule</a>:</p> <p><span class="math-container">$$ \frac{d}{dx}( \int_0^{\sqrt{x} }e^{ - \frac{u^2}{x} } du) = F(x)= \frac{1}{2 \sqrt{x}} e^{ - \frac{u^2}{\sqrt{x}} } du+ \frac{u^2}{x^2} \int_0^{\sqrt{x} }e^{ - \frac{u^2}{x} } du$$</span></p> <p>Now... ummm... we still have the integral again, so it's still not easily possible to make statement if it's increasing or decreasing again.</p> <p>I thought that I could rule out option D by checking if there is a extrema point on the function by checking roots for <span class="math-container">$F(x)=0$</span> but that too seems too difficult.</p> <p>Is there any trick which I am not seeing? This question was meant for higher schooler's who are entering undergraduate, so methods in that level would be best (other methods are still fine)</p>
Tuvasbien
702,179
<p>Substitute <span class="math-container">$v=\frac{u^2}{x}$</span>, <span class="math-container">$$ \int_0^{\sqrt{x}}e^{-\frac{u^2}{x}}du=\frac{\sqrt{x}}{2}\int_0^1\frac{e^{-v}}{\sqrt{v}}dv $$</span> Now the derivative is easier to calculate, it is <span class="math-container">$&gt;0$</span> and diverges to <span class="math-container">$+\infty$</span> when <span class="math-container">$x\rightarrow 0^+$</span>, thus the corresponding graph for the integral is <span class="math-container">$(C)$</span>.</p>
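The substitution can be checked with a crude midpoint quadrature (my own sketch; the $v=0$ endpoint is an integrable singularity, so only modest accuracy is expected there):

```python
import math

def midpoint(f, a, b, n=20000):
    # simple midpoint-rule quadrature
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

x = 2.7   # arbitrary positive test point
lhs = midpoint(lambda u: math.exp(-u*u/x), 0.0, math.sqrt(x))
rhs = (math.sqrt(x) / 2) * midpoint(lambda v: math.exp(-v) / math.sqrt(v), 0.0, 1.0)
assert abs(lhs - rhs) < 5e-3   # limited accuracy near the v = 0 singularity

# the substituted form makes the sqrt(x) shape explicit: f(4)/f(1) should be 2
f1 = midpoint(lambda u: math.exp(-u*u/1.0), 0.0, 1.0)
f4 = midpoint(lambda u: math.exp(-u*u/4.0), 0.0, 2.0)
assert abs(f4 / f1 - 2.0) < 1e-9
```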
3,467,523
<p><a href="https://i.stack.imgur.com/G47bX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/G47bX.png" alt="Attached is the picture of the problem."></a></p> <p>I was doing some trig problems for leisure. This one particularly seems not trivial. So I thought someone may be interested to take a look.</p>
Sameer Nilkhan
710,265
<p>By using the following identities, <span class="math-container">$$\cos(2x)=2\cos^2(x) - 1$$</span> and <span class="math-container">$$\sin^2(x)=\dfrac{1-\cos(2x)}{2},$$</span> we can simplify the given expression to <span class="math-container">$$2\cos^2 x-1-m \cos(2x)+3=2 m \left({\dfrac{1-\cos(2x)}{2}}\right)^2$$</span> <span class="math-container">$$\Rightarrow [4-m][\cos^2(2x)+1]=0$$</span> <span class="math-container">$$\Rightarrow m=4,$$</span> with <span class="math-container">$x$</span> any real number.</p>
320,937
<p>It is easy to think of $\mathbb{C}^2$ as an ordered pair. I just wonder if it is possible to put $\mathbb{C}^2$ into illustration, since $\mathbb{C}$ has taken the role of two dimensional Euclidean Space.</p>
Zev Chonoles
264
<p>Let $$P=\{n\in\mathbb{N}\mid n\text{ is a counterexample to the claim}\}.$$ The well-ordering principle implies that, if $P$ is non-empty, then there is a minimum element of $P$. Assuming that $P$ is non-empty, let's call the minimum element $m$. Usually one then proceeds in the same way as one would by induction: since $m$ is the smallest element of $P$, we know that $m-1$ is not in $P$, i.e. that the claim is true for $m-1$, and there is some way of using this to demonstrate that $m$ would have to be an element of $P$ as well, which is a contradiction. Thus, our initial assumption that $P$ was non-empty must have been false.</p>
1,859,719
<blockquote> <p>Let be $U (x,y) = x^\alpha y^\beta$. Find the maximum of the function $U(x,y)$ subject to the equality constraint $I = px + qy$.</p> </blockquote> <p>I have tried to use the Lagrangian function to find the solution for the problem, with the equation</p> <p>$$\nabla\mathscr{L}=\vec{0}$$</p> <p>where $\mathscr{L}$ is the Lagrangian function and $\vec{0}=\pmatrix{0,0}$. Using this method I have a system of $3$ equations with $3$ variables, but I can't simplify this system:</p> <p>$$ax^{\alpha-1}y^\beta-p\lambda=0$$ $$\beta y^{\beta-1}x^\alpha-q\lambda=0$$ $$I=px+qx$$</p>
smcc
354,034
<h2>The solution</h2> <p>The answer can be found on the internet in any number of places. The function $U$ is a Cobb-Douglas utility function. The Cobb-Douglas function is one of the most commonly used utility functions in economics. </p> <p>The demand functions you should get are:</p> <p>$$x(p,I)=\frac{\alpha I}{(\alpha+\beta)p}\qquad y(q,I)=\frac{\beta I}{(\alpha+\beta)q}$$</p> <p>The solution has a nice interpretation: the consumer spends a fraction $\frac{\alpha}{\alpha+\beta}$ of their income on good $x$ and fraction $\frac{\beta}{\alpha+\beta}$ on good $y$.</p> <p>If you want to find the full working, spend a minute or two searching the internet.</p> <h2>A simplification</h2> <p>Note here that you can simplify things by instead maximizing the function $V$ where $$V(x,y)=\ln U(x,y)=\alpha \ln x+\beta\ln y$$ Since $V$ is an increasing transformation of $U$ it will have the same maximizer. </p> <p>In fact you could simplify the working further by maximizing $W$ where $$W(x,y)=\frac{V(x,y)}{\alpha+\beta}=\bar{\alpha}\ln x+(1-\bar{\alpha})\ln y$$ where $\bar{\alpha}=\frac{\alpha}{\alpha+\beta}$.</p>
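The closed-form demands can be sanity-checked against a brute-force search along the budget line, using exact rational arithmetic; the parameter values below are illustrative choices of my own, not from the answer:

```python
from fractions import Fraction

alpha, beta = 2, 3          # illustrative exponents
p, q, I = 5, 4, 100         # illustrative prices and income

# closed-form demands derived above
x_star = Fraction(alpha * I, (alpha + beta) * p)
y_star = Fraction(beta * I, (alpha + beta) * q)
assert p * x_star + q * y_star == I      # the budget is exhausted

def U(x, y):
    return x**alpha * y**beta

# grid search along the budget line: no grid point beats the closed-form solution
candidates = (Fraction(k, 100) for k in range(1, 100 * I // p))
assert all(U(x_star, y_star) >= U(x, (I - p*x) / q) for x in candidates)
```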
1,369,482
<p>I'm having trouble proving the law for scalar multiplication:</p> <p>Vector spaces possess a collection of specific characteristics and properties. Use the definitions in the attached “Definitions” to complete this task.</p> <p>Define the elements belonging to $\mathbb{R}^2$ as $\{(a, b) | a, b \in \mathbb{R}\}$. Combining elements within this set under the operations of vector addition and scalar multiplication should use the following notation:</p> <p>Vector Addition Example: $(–2, 10) + (–5, 0) = (–2 – 5, 10 + 0) = (–7, 10)$</p> <p>Scalar Multiplication Example: $–10 × (1, –7) = (–10 × 1, –10 × –7) = (–10, 70)$, where –10 is a scalar.</p> <p>Under these definitions for the operations, it can be rigorously proven that $\mathbb{R}^2$ is a vector space.</p> <p>Prove closure under scalar multiplication (this is the law I need help with).</p> <p>Can someone put it in a proof form?</p>
P Vanchinathan
28,915
<p>Take the example of the set of all directed arrows (every possible length and every possible angle) originating from a single point.</p> <p>Now in this set vector addition is like addition of forces in physics: the parallelogram law. So internally the set has an addition. There is also an external operation: any vector can be "scaled up/down" by any real number. This real number is not part of the set of arrows, but it makes sense to talk of 3.75 times a force. Simply a force directed in the same way but with strength 3.75 times the original, depicted as an arrow of that length in the same direction.</p> <p>In general, any set whose elements can be added among themselves and multiplied by an external scalar (usually a real number), subject to some expected conditions, is called a vector space. </p>
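The question's componentwise operations on $\mathbb{R}^2$ can be encoded directly; closure under scalar multiplication is just the observation that the result is again a pair of reals. A small Python sketch (function names are my own):

```python
# Componentwise operations on R^2, exactly as defined in the question.
def vec_add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def scalar_mul(c, u):
    # c is a real scalar and u a pair of reals; the result is again a
    # pair of reals, which is precisely what closure asserts.
    return (c * u[0], c * u[1])

# The question's own worked examples:
assert vec_add((-2, 10), (-5, 0)) == (-7, 10)
assert scalar_mul(-10, (1, -7)) == (-10, 70)
```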
1,878,884
<p>I recently figured out my own algorithm to factorize a number given we know it has $2$ distinct prime factors. Let:</p> <p>$$ ab = c$$</p> <p>Where, $a&lt;b$</p> <p>Then it isn't difficult to show that:</p> <p>$$ \frac{c!}{c^a}= \text{integer}$$</p> <p>In fact, </p> <p>$$ \frac{c!}{c^{a+1}} \neq \text{integer}$$</p> <p>So the idea is to first asymptotically calculate $c!$ and then keep dividing by $c$ until one does not get an integer anymore. </p> <h2>Edit</h2> <p>I just realized a better algorithm would be to first divide $c^{\lfloor \sqrt {c} /2\rfloor }$. If it is not an integer then divide by $c^{\lfloor \sqrt {c} /4\rfloor }$. However is it is an integer then divide by: $c^{3\lfloor \sqrt {c} /4 \rfloor }$ . And so on ... </p> <h2>Question</h2> <p>I was wondering if this already existed in the literature? And what is the running time of this algorithm? Can this algorithm be improved upon?</p>
TonyK
1,508
<p>The best factoring algorithm currently available is the <a href="https://en.wikipedia.org/wiki/General_number_field_sieve" rel="nofollow">General Number Field Sieve</a>. Numbers of more than 200 decimal digits have been factored using this method.</p> <p>The factorial of such a number would have more than $10^{200}$ digits $-$ where on earth are you going to put them all? And that's even before you start your trial divisions. I'm afraid your method is completely impractical as a factoring algorithm.</p>
1,878,884
<p>I recently figured out my own algorithm to factorize a number given we know it has $2$ distinct prime factors. Let:</p> <p>$$ ab = c$$</p> <p>Where, $a&lt;b$</p> <p>Then it isn't difficult to show that:</p> <p>$$ \frac{c!}{c^a}= \text{integer}$$</p> <p>In fact, </p> <p>$$ \frac{c!}{c^{a+1}} \neq \text{integer}$$</p> <p>So the idea is to first asymptotically calculate $c!$ and then keep dividing by $c$ until one does not get an integer anymore. </p> <h2>Edit</h2> <p>I just realized a better algorithm would be to first divide $c^{\lfloor \sqrt {c} /2\rfloor }$. If it is not an integer then divide by $c^{\lfloor \sqrt {c} /4\rfloor }$. However is it is an integer then divide by: $c^{3\lfloor \sqrt {c} /4 \rfloor }$ . And so on ... </p> <h2>Question</h2> <p>I was wondering if this already existed in the literature? And what is the running time of this algorithm? Can this algorithm be improved upon?</p>
Charles
1,778
<p>The basic algorithm takes about $p$ divisions to find the smallest prime factor $p$ of your number, which in the worst case is around $\sqrt{c}$. Each step requires dividing a huge number* by $c$, which takes about $c\log^2 c$ time, for a total runtime of about $cp\log^2c$. This is much worse than trial division!</p> <p>Here is a straightforward implementation of your algorithm:</p> <pre><code>fac(c)=my(N=c!); for(a=0,sqrtint(c), N/=c; if(denominator(N)&gt;1, return(a))); c </code></pre> <p>This uses a fast algorithm to compute the factorial and then simple division to find the factor. Finding a factor of a random number I generated, 924233, with this algorithm took about 1.5 seconds, an eternity for such a small number. I then tried to do the same with the larger number 107231893 which nearly crashed my machine -- the 814,536,627-digit factorial caused my memory to thrash.</p> <p>Your variant algorithm won't help with the memory issue, but there is a fix. If you factor (!) $c$ first, then you can work with the exponents on $a$ and $b$ only. So instead of storing that huge number we can work with the much more manageable $$ a^{10980}b^{9767} $$ on which you can do binary splitting as you propose. But you can improve on this by merely choosing the prime with the largest exponent, which will of course be the smallest prime factor. So really all your algorithm needs to become efficient is to do a little bit of preprocessing beforehand with an efficient factorization algorithm.</p> <p>* This can be done with Jebelean's bidirectional algorithm, which will save a factor of about 4 from the runtime.</p>
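The exponent bookkeeping above can be illustrated without ever forming $c!$: Legendre's formula gives the exponent of each prime in $c!$ directly. A Python sketch (my own, assuming $c=ab$ for distinct primes $a<b$ as in the question):

```python
# Sketch: the largest k with c^k | c! via Legendre's formula,
# v_p(c!) = sum_{i>=1} floor(c / p^i), without computing c!.

def legendre(c, p):
    """Exponent of the prime p in c!."""
    e, pk = 0, p
    while pk <= c:
        e += c // pk
        pk *= p
    return e

def smaller_factor(c, a, b):
    # c = a*b contains each prime once, so the exponent of c in c!
    # is min(v_a(c!), v_b(c!)); the question's claim says this is a.
    return min(legendre(c, a), legendre(c, b))

for a, b in [(2, 3), (3, 5), (5, 7), (7, 11), (101, 103)]:
    assert smaller_factor(a * b, a, b) == a
```

This confirms the question's identity $v_c(c!) = a$ on small examples, while sidestepping the memory blow-up entirely.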
3,854,446
<p>I am reading a textbook on representation theory which says the following.</p> <p><span class="math-container">$G$</span> is a finite group with irreducible representation <span class="math-container">$\rho:G\to GL(V)$</span> over field <span class="math-container">$k$</span> (possibly algebraically closed, there's an assumption that all fields are algebraically closed which I'm not certain extends to this page in the book). <span class="math-container">$\phi$</span> is a class function from <span class="math-container">$G$</span> to <span class="math-container">$k$</span> satisfying <span class="math-container">$(\phi,\chi_\rho)=0$</span>. Define <span class="math-container">$$T=\frac{1}{\#G}\sum\limits_{g\in G}\phi(g^{-1})\rho_g.$$</span> The text claims that <span class="math-container">$T=0$</span>. I am not sure how to see this. I see why <span class="math-container">$T\in End_GV$</span>, so if <span class="math-container">$k$</span> is algebraically closed then we can identify it with some element of <span class="math-container">$k$</span> (and regardless <span class="math-container">$End_GV$</span> is a division ring, by Schur's lemma), but I don't see why <span class="math-container">$T$</span> must be <span class="math-container">$0$</span>.</p> <p>Is there something that I'm missing here? Thanks in advance.</p>
Angina Seng
436,618
<p>I presume you are in characteristic zero.</p> <p>As you say, <span class="math-container">$T$</span> is a scalar matrix (Schur's lemma).</p> <p>The trace of <span class="math-container">$T$</span> is <span class="math-container">$$\frac1{|G|}\sum_{g\in G}\phi(g^{-1})\textrm{Tr}(\rho_g)=\frac1{|G|}\sum_{g\in G}\phi(g^{-1})\chi_\rho(g)=(\phi,\chi_\rho)=0.$$</span> As <span class="math-container">$T=\lambda I$</span> and has trace zero, then <span class="math-container">$\lambda=0$</span>.</p>
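The vanishing of $T$ can be seen concretely in a toy example of my choosing (not from the book): $G=\mathbb{Z}/3$ over $\mathbb{C}$, with a 1-dimensional irreducible representation and $\phi$ another irreducible character, hence orthogonal to $\chi_\rho$.

```python
import cmath

# G = Z/3, w a primitive cube root of unity.
n = 3
w = cmath.exp(2j * cmath.pi / n)

def rho(g): return w ** g          # irreducible 1-dim rep, chi_rho(g) = w^g
def phi(g): return w ** (2 * g)    # another irreducible character

# (phi, chi_rho) = (1/|G|) sum_g phi(g) * conj(chi_rho(g)) = 0
inner = sum(phi(g) * rho(g).conjugate() for g in range(n)) / n
assert abs(inner) < 1e-12

# T = (1/|G|) sum_g phi(g^{-1}) rho_g vanishes, matching the claim
T = sum(phi((-g) % n) * rho(g) for g in range(n)) / n
assert abs(T) < 1e-12
```

In dimension 1 the trace argument is the whole story: $T$ equals its own trace, which is exactly the inner product $(\phi,\chi_\rho)$.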
1,103,624
<p>Ashamed to admit that I cannot aid my friend's niece with her second grade homework problem. So much for that college education, eh? Here's the problem.</p> <p>Using only the natural numbers 1 through 9 without repeating any of them (natural because there cannot occur any rationals anywhere in this process, i.e. the first number cannot be prime, since otherwise there would be a fraction after the first operation), place the numbers in the boxes so that the following statement is true.</p> <p>$\square \div \square \times \square + \square \times \square \times \square \div \square + \square \times \square = 100$</p> <p>I am aware that I can use brute force to work this problem out, but this is going to be tedious at best, and might require a computer at worst. Furthermore, the problem doesn't specify anything about using the order of operations. On one hand, that would make sense, but on the other, this is a second grade problem in an American school, and I know that order of operations wasn't on my second grade curriculum.</p> <p>In either case, does anyone know how to approach this in a somewhat elegant way? (Also apologies over tagging, someone please retag this appropriately if they think it isn't).</p>
David Zhang
80,762
<p>Assuming that the usual order of operations is to be ignored, there are precisely 32 solutions to your problem in which fractions do not appear in any intermediate step. Unfortunately, I have no idea how one might arrive at such a solution by hand; I produced these with an exhaustive computer search.</p> <p>$$4\div2\times7+8\times1\times3\div6+9\times5=100$$ $$4\div2\times7+8\times3\times1\div6+9\times5=100$$ $$4\div2\times9+8\times1\times3\div6+7\times5=100$$ $$4\div2\times9+8\times3\times1\div6+7\times5=100$$ $$6\div2\times3+5\times1\times8\div7+9\times4=100$$ $$6\div2\times3+5\times8\times1\div7+9\times4=100$$ $$6\div2\times9+7\times1\times4\div8+3\times5=100$$ $$6\div2\times9+7\times4\times1\div8+3\times5=100$$ $$6\div3\times1+5\times2\times8\div7+9\times4=100$$ $$6\div3\times1+5\times8\times2\div7+9\times4=100$$ $$6\div3\times1+7\times2\times8\div9+4\times5=100$$ $$6\div3\times1+7\times8\times2\div9+4\times5=100$$ $$6\div3\times7+8\times1\times2\div4+9\times5=100$$ $$6\div3\times7+8\times2\times1\div4+9\times5=100$$ $$6\div3\times9+8\times1\times2\div4+7\times5=100$$ $$6\div3\times9+8\times2\times1\div4+7\times5=100$$ $$8\div2\times1+5\times3\times6\div9+7\times4=100$$ $$8\div2\times1+5\times6\times3\div9+7\times4=100$$ $$8\div2\times7+5\times1\times6\div9+3\times4=100$$ $$8\div2\times7+5\times6\times1\div9+3\times4=100$$ $$8\div4\times1+7\times3\times6\div9+2\times5=100$$ $$8\div4\times1+7\times6\times3\div9+2\times5=100$$ $$8\div4\times9+3\times1\times6\div7+2\times5=100$$ $$8\div4\times9+3\times6\times1\div7+2\times5=100$$ $$9\div3\times1+4\times2\times6\div7+8\times5=100$$ $$9\div3\times1+4\times6\times2\div7+8\times5=100$$ $$9\div3\times4+2\times1\times6\div7+8\times5=100$$ $$9\div3\times4+2\times6\times1\div7+8\times5=100$$ $$9\div3\times5+7\times1\times8\div4+6\times2=100$$ $$9\div3\times5+7\times8\times1\div4+6\times2=100$$ $$9\div3\times6+8\times1\times2\div4+7\times5=100$$ $$9\div3\times6+8\times2\times1\div4+7\times5=100$$</p>
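The exhaustive search can be replicated in a few lines: evaluate strictly left to right (no operator precedence) and reject any permutation where a division step would produce a fraction, which is the stated criterion. A Python sketch of my own:

```python
from itertools import permutations

# Fixed operator pattern of the puzzle, applied left to right.
OPS = ('/', '*', '+', '*', '*', '/', '+', '*')

def evaluate(nums):
    """Left-to-right value, or None if any intermediate is non-integer."""
    acc = nums[0]
    for op, n in zip(OPS, nums[1:]):
        if op == '/':
            if acc % n:          # a fraction would appear here
                return None
            acc //= n
        elif op == '*':
            acc *= n
        else:
            acc += n
    return acc

solutions = [p for p in permutations(range(1, 10)) if evaluate(p) == 100]
print(len(solutions))  # the answer reports precisely 32
```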
108,253
<p>I would like to assign 'x' individuals to 'y' groups, randomly. For example, I would like to divide 50 individuals into 100 groups randomly. Of course, with more groups than individuals many of the groups will have zero individuals, while some groups will have multiple individuals. That is fine. With random assignment, the distribution of the number of individuals per group should fit a Poisson distribution.</p> <p>I feel like there should be a simple function in Mathematica for partitioning X things into Y groups randomly. I have searched and haven't found anything to do this. Please help!</p>
Coolwater
9,754
<p>This does it:</p> <pre><code>x = 10000; y = 50; groups = PositionIndex[RandomChoice[Range[y], x]] </code></pre> <p>And you get the Poisson distribution:</p> <pre><code>QuantilePlot[PadLeft[Values[Map[Length, groups]], y], PoissonDistribution[x/y], Method -&gt; {"ReferenceLineMethod" -&gt; "Diagonal"}] </code></pre> <p><a href="https://i.stack.imgur.com/s4kaN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/s4kaN.png" alt="enter image description here"></a></p> <pre><code>x = 1000000; y = 5000; groups = PositionIndex[RandomChoice[Range[y], x]]; With[{vs = PadRight[Values[Map[Length, groups]], y]}, Show[ListPlot[Tally[vs].{{1, 0}, {0, 1/y}}, Filling -&gt; Axis], ListLinePlot[Thread[{#, PDF[PoissonDistribution[m = x/y], #]}] &amp;[Range[Min[vs], Max[vs]]]]]] </code></pre> <p><a href="https://i.stack.imgur.com/UaRX1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UaRX1.png" alt="enter image description here"></a></p>
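For readers outside Mathematica, the same idea can be sketched in Python: `random.choices` plays the role of `RandomChoice`, and a dict of lists plays the role of `PositionIndex`. The seed and parameter values are my own choices.

```python
import random
from collections import defaultdict

random.seed(0)                     # reproducibility of this sketch
x, y = 10_000, 50                  # individuals, groups

# like RandomChoice[Range[y], x]
assignment = random.choices(range(y), k=x)

# like PositionIndex: group label -> list of individuals
groups = defaultdict(list)
for individual, g in enumerate(assignment):
    groups[g].append(individual)

sizes = [len(groups[g]) for g in range(y)]
assert sum(sizes) == x             # everyone is assigned exactly once
# sizes scatter around the Poisson mean x/y = 200
assert all(100 < s < 300 for s in sizes)
```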
3,699,105
<p>If <span class="math-container">$T$</span> is a normal operator and <span class="math-container">$T^3=T^2$</span>, then show that <span class="math-container">$T$</span> is idempotent.</p> <ol> <li><span class="math-container">$TT^*=T^*T$</span> </li> <li><span class="math-container">$T^3=T^2$</span></li> <li>We are to prove that <span class="math-container">$T^2=T$</span></li> </ol> <p>I have tried it many times by applying <span class="math-container">$T$</span> to both sides of <span class="math-container">$1$</span> and <span class="math-container">$2$</span>; please tell me what the proper way would be.</p>
paul blart math cop
571,438
<p>This is not true. Here's an explicit example. Let <span class="math-container">$A = \{0, 1\}$</span>, <span class="math-container">$R = \{(0, 0)\}$</span>, <span class="math-container">$S = \{(1, 1)\}$</span>. Then <span class="math-container">$R \cup S = \{(0, 0), (1, 1)\}$</span>, which is an equivalence relation (just equality). Neither <span class="math-container">$R$</span> nor <span class="math-container">$S$</span> are equivalence relations as neither are reflexive.</p>
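The counterexample can be checked mechanically; a small Python sketch:

```python
# The explicit counterexample: A = {0, 1}, R = {(0,0)}, S = {(1,1)}.
A = {0, 1}
R = {(0, 0)}
S = {(1, 1)}
U = R | S                       # R ∪ S, i.e. equality on A

def reflexive(rel):
    return all((a, a) in rel for a in A)

def symmetric(rel):
    return all((b, a) in rel for (a, b) in rel)

def transitive(rel):
    return all((a, d) in rel for (a, b) in rel for (c, d) in rel if b == c)

# Neither R nor S is an equivalence relation (both fail reflexivity) ...
assert not reflexive(R) and not reflexive(S)
# ... yet their union is one.
assert reflexive(U) and symmetric(U) and transitive(U)
```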
510,814
<p>I've seen in several places without further comment that if an equalizer is epic, it's an isomorphism. I've only proved one half of this:</p> <p>Suppose $e:X \rightarrow A$ is an epimorphism and an equalizer for $f$ and $g$. Then $f \circ e = g \circ e \implies f = g$. Then any function $e': X' \rightarrow A$ trivially equalizes $f$ and $g$, so take $id_A: A \rightarrow A$. $e$ is an equalizer, so there exists a unique $k: A \rightarrow X$ such that $e \circ k = id_A$.</p> <p>That gets me one side of the inverse, but how do I prove that $k \circ e = id_X$?</p>
Community
-1
<p>$id_A$ is an equalizer of $f$ and $g$.</p>
1,102,885
<p>I have exams in Machine Learning coming up and I need help answering this question.</p> <blockquote> <p>There are a million identical fish in a lake, one of which has swallowed the One True Ring. You must get it back! After months of effort, you catch another random fish and pass your metal detector over it, and the detector beeps! It is the best metal detector money can buy, and has a very low error rate: it fails to beep when near the ring only one in a billion times, and it beeps incorrectly only one in ten thousand times. What is the probability that, at long last, you’ve found your precious ring?</p> </blockquote> <p>This is my answer I worked out using Bayes rule:</p> <p><img src="https://i.stack.imgur.com/76WjZ.gif" alt="enter image description here"></p> <p>Is this the right way to work out this type of question and is that somewhat the correct answer?</p>
KSmarts
192,747
<p>I think you have mixed up a few of your conditional probabilities. If the detector <em>fails</em> to beep over the ring only one in a billion times, then it <em>does</em> beep the rest of the billion times. Similarly, if it beeps over the wrong fish one in ten thousand times, then it doesn't beep on the rest of those ten thousand wrong fish. So you should have \begin{align} P(B|\tilde{A}) &amp;= \frac{1}{10000}\\ P(\tilde{B}|\tilde{A}) &amp;= \frac{9999}{10000}\\ P(\tilde{B}|A) &amp;= \frac{1}{1000000000}\\ P(B|A) &amp;= \frac{999999999}{1000000000} \end{align} which is different from what you do have. Using these values gives \begin{align} P(B) &amp; =\frac{999999999}{1000000000}\frac{1}{1000000} +\frac{1}{10000}\frac{999999}{1000000} \\ &amp; =\frac{999999999}{1,\!000,\!000,\!000,\!000,\!000} +\frac{999999}{10,\!000,\!000,\!000} \\ &amp; = \frac{100,\!999,\!899,\!999}{1,\!000,\!000,\!000,\!000,\!000} \end{align} and \begin{align} P(A|B) &amp; = \frac{\frac{999999999}{1000000000}\frac{1}{1000000}} {\frac{100999899999}{1000000000000000}} \\ &amp; = \frac{999,\!999,\!999}{1,\!000,\!000,\!000,\!000,\!000} \frac{1,\!000,\!000,\!000,\!000,\!000}{100,\!999,\!899,\!999} \\ &amp; = \frac{999,\!999,\!999}{100,\!999,\!899,\!999}\approx 0.0099 \end{align} Or slightly less than $1$%. To check, we can make a quick estimate of the answer. There are almost one million fish that <em>don't</em> have the ring. The detector gives a false positive one in ten thousand times. So, if you were to test every fish in the lake, you would get approximately $100$ beeps. Of all the fish that trigger beeps, only one has the ring, which means that the detector would only be right about once in every hundred beeps, or $1$% of the time.</p> <p>In regards to the other part of your question, yes, your methods were correct. It was just your starting values that were wrong.</p>
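The arithmetic above can be reproduced exactly with rational arithmetic; a Python sketch using the standard `fractions` module:

```python
from fractions import Fraction

p_ring      = Fraction(1, 10**6)           # P(A): one fish in a million
p_beep_ring = Fraction(999_999_999, 10**9) # P(B|A)
p_beep_not  = Fraction(1, 10**4)           # P(B|~A)

# Total probability of a beep, then Bayes' rule
p_beep = p_beep_ring * p_ring + p_beep_not * (1 - p_ring)
posterior = p_beep_ring * p_ring / p_beep

assert p_beep == Fraction(100_999_899_999, 10**15)
assert posterior == Fraction(999_999_999, 100_999_899_999)
assert 0.0098 < float(posterior) < 0.0100   # slightly less than 1%
```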
2,413,891
<blockquote> <p><strong>Question :</strong> Evaluate - $$\int_{0}^{1}2^{x^2+x}\mathrm dx$$</p> </blockquote> <p><strong>My Attempt :</strong> First I tried to evaluate the indefinite integral of $2^{x^2+x}$ in order to put the limits $0$ and $1$ later on, but couldn't integrate it. Then I checked on WA and came to know that it's elementary integral doesn't exist. Now I moved one to using properties of definite integration such as $$\int_a^b f(x) \mathrm dx=\int_a^b f(a+b-x) \mathrm dx$$</p> <p>But it couldn't help either. Can you please give me hint to proceed on this question?</p> <p>P.S. - This is a high school level problem and therefore its solution shouldn't involve any special functions, such as Gaussian Integral etc.</p> <p><strong>Edit</strong> : I asked my teacher this question and basically this was an approximation based question. This was a MCQ type question which has an option "None of the above" and it was the correct answer, since the other options were made in such a way that can be rejected by bounding this integral between 2 functions. For example we can use $$2^{x^2+x}&lt;2^{2x} ~; ~x\in (0,1)$$ and thus can be sure that this integral is less than $3/\ln(4)$.</p> <p>Thanks all for devoting your time in my question!</p>
Bill Donald
476,771
<p>Hint:</p> <p>Using $u=\frac{2x+1}{2}$ yields an <a href="http://mathworld.wolfram.com/Erfi.html" rel="nofollow noreferrer">imaginary error function</a>.</p>
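The bound mentioned in the question's edit (the integral is less than $3/\ln 4$, by comparing with $2^{2x}$) is easy to confirm numerically. This Simpson's-rule sketch is mine, not part of the hint:

```python
import math

# Composite Simpson's rule (n must be even)
def simpson(f, a, b, n=1000):
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

approx = simpson(lambda t: 2 ** (t * t + t), 0, 1)

assert 1.9 < approx < 2.0            # the integral is roughly 1.94
assert approx < 3 / math.log(4)      # consistent with 2^(x^2+x) < 2^(2x) on (0,1)
```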
1,347
<p>Sometimes I check how many users of <code>mathematica.stackexchange.com</code> there are.<br> I remember that a few weeks ago there were about 15 thousand and recently I've been surprised seeing that the <a href="https://mathematica.stackexchange.com/users?tab=NewUsers&amp;sort=creationdate">new users</a> are signed with numbers over 18000.<br> Let's check <a href="https://mathematica.stackexchange.com/users?page=21&amp;tab=newusers&amp;sort=creationdate">this site</a>, the new users therein have numbers slightly over 14000. The registry number of new users included <code>21</code> pages with <code>4 X 9 = 36</code> new users which is less than <code>800 &lt;&lt; 18000 - 15000</code>.</p> <p>What is going on?</p> <p>Let's look at <a href="https://mathematica.stackexchange.com/users?page=18&amp;tab=newusers&amp;sort=creationdate">this page</a>, the number of <a href="https://mathematica.stackexchange.com/users/14837/user49115">user49115</a> is <code>14837</code> while the next one in the registry is <a href="https://mathematica.stackexchange.com/users/15838/jazz">Jazz</a> having the number <code>15838</code> i.e. <code>15838 = 14837 + 1001</code>.</p> <p>I know I can find e.g. how many <a href="https://mathematica.stackexchange.com/help/badges/1/teacher">teachers</a> or <a href="https://mathematica.stackexchange.com/help/badges/2/student">students</a> there are, but the number of all users is also interesting.</p> <p>So what is the reliable number of all users (including unregistered) and of those who are registered? </p>
Mr.Wizard
121
<p>Now that the technical side of this has been answered I'll attempt to address:</p> <blockquote> <p>So what is the reliable number of all users (including unregistered) and of those who are registered? </p> </blockquote> <p>The <a href="http://stackexchange.com/leagues/177/alltime/mathematica">Stack Exchange User Reputation Leagues</a> page provides some apparently useful data:</p> <blockquote> <p>Q&amp;A for users of Mathematica (12,957 total users)</p> </blockquote> <p>It also gives a break-down by reputation which provides a useful way to measure participating members:</p> <pre><code>Total Reputation Total Rep* Users 100,000+ 1 50,000+ 5 25,000+ 17 10,000+ 44 5,000+ 69 3,000+ 95 2,000+ 137 1,000+ 214 500+ 362 200+ 728 1+ 11,285 </code></pre> <p>Why this page says that we have 1,672 more members than those with 1 reputation point I don't know. </p> <p>From this data I estimate that we have about one thousand members with a reputation of 100 points, indicating either significant participation or activity on other Stack Exchange sites.</p> <p>On a personal note I am pleased to see that we have 44 major hitters (10k+) now.</p> <p>(Update by belisarius follows. User Rep. Leagues as of Feb 10, 2015)</p> <pre><code>Total Rep* Users 100,000+ 1 50,000+ 7 25,000+ 20 10,000+ 51 5,000+ 80 3,000+ 116 2,000+ 171 1,000+ 254 500+ 443 200+ 898 1+ 14,641 </code></pre>
373,958
<p>Is $\sum_{n=1}^\infty(2^{\frac1{n}}-1)$ convergent or divergent? $$\lim_{n\to\infty}(2^{\frac1{n}}-1) = 0$$ I can't think of anything to compare it against. The integral looks too hard: $$\int_1^\infty(2^{\frac1{n}}-1)dn = ?$$ Root test seems useless as $\left(2^{\frac1{n}}\right)^{\frac1{n}}$ is probably even harder to find a limit for. Ratio test also seems useless because $2^{\frac1{n+1}}$ can't cancel out with ${2^{\frac1{n}}}$. It seems like the best bet is comparison/limit comparison, but what can it be compared against?</p>
chanp
46,291
<p>Yet another elementary method</p> <p>$1=2-1=(2^\frac{1}{n})^n-1=(2^\frac{1}{n}-1)(2^\frac{n-1}{n}+2^\frac{n-2}{n}+\cdots+2^\frac{1}{n}+1) &lt; (2^\frac{1}{n}-1)\cdot\underbrace{(2+2+\cdots+2)}_{n\text{ terms}}=(2^\frac{1}{n}-1)\cdot 2n.$</p> <p>Therefore, $(2^\frac{1}{n}-1) &gt; \frac{1}{2n}$, and the series diverges by comparison with the harmonic series.</p>
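A quick numerical sanity check of the comparison (a sketch of my own, not part of the proof):

```python
# The bound used above: 2^(1/n) - 1 > 1/(2n) for every n >= 1
for n in range(1, 10_001):
    assert 2 ** (1 / n) - 1 > 1 / (2 * n)

# Hence the partial sums dominate half the harmonic series
s = sum(2 ** (1 / n) - 1 for n in range(1, 10_001))
h = sum(1 / n for n in range(1, 10_001))
assert s > h / 2
```

In fact $2^{1/n}-1 \approx \frac{\ln 2}{n}$ for large $n$, so the partial sums grow like $\ln 2$ times the harmonic numbers.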
2,619,185
<p>Let $$P=(X+2)^m+(X+3)^{2m+3}$$ and $$Q=X^2+5X+7.$$ I need to show that $Q$ divides $P$ for any $m$ natural. </p> <p>I said like this: let $a$ be a root of $X^2+5X+7=0$. Then $a^2+5a+7=0$. </p> <p>Now, I know I need to show that $P(a)=0$, but I do not know if it is the right path since I have not found any way to do it.</p>
Hagen von Eitzen
39,174
<p>Note that $$(X+3)^3=X^3+9X^2+27X+27=(X^2+5X+7)(X+4)-1 $$ and $$(X+3)^2 = X^2+6X+9=(X^2+5X+7)+(X+2).$$</p>
2,619,185
<p>Let $$P=(X+2)^m+(X+3)^{2m+3}$$ and $$Q=X^2+5X+7.$$ I need to show that $Q$ divides $P$ for any $m$ natural. </p> <p>I said like this: let $a$ be a root of $X^2+5X+7=0$. Then $a^2+5a+7=0$. </p> <p>Now, I know I need to show that $P(a)=0$, but I do not know if it is the right path since I have not found any way to do it.</p>
nonuser
463,553
<p>Write $t=x+3$. Now we have to prove that $Q=t^2-t+1$ divides $P=(t-1)^m+t^{2m+3}$. Note that if $a$ is root for $Q$ then we have $a^3=-1$. Note that $a-1 = a^2$. Now plug $a$ in to $P$ and we get:</p> <p>\begin{eqnarray} (a-1)^m +a^{2m}\cdot a^3 &amp;=&amp; (a-1)^m -a^{2m} \\ &amp;=&amp; (a^2)^m-a^{2m} \\ &amp;=&amp; a^{2m}-a^{2m} \\ &amp;=&amp;0 \end{eqnarray} So $a$ is also a root for $P$, and we can in the same manner deduce that the second root $b$ of $Q$ is also a root for $P$. Since $a\ne b$, it follows that $Q$ divides $P$. </p>
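Both answers can be sanity-checked numerically: each complex root of $Q=X^2+5X+7$ must annihilate $P$ for every natural $m$. A floating-point sketch (my own; it works cleanly because the relevant quantities have modulus 1):

```python
import math

# The complex roots of Q = X^2 + 5X + 7 are (-5 ± i*sqrt(3))/2.
r = math.sqrt(3)
roots = [complex(-5, r) / 2, complex(-5, -r) / 2]

for a in roots:
    assert abs(a * a + 5 * a + 7) < 1e-12     # really a root of Q

for m in range(0, 21):
    for a in roots:
        P = (a + 2) ** m + (a + 3) ** (2 * m + 3)
        # |a+2| = |a+3| = 1, so the powers stay bounded and floats behave
        assert abs(P) < 1e-9
```

Since $P$ vanishes at both (distinct) roots of $Q$, the quadratic $Q$ divides $P$, exactly as the answers argue.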
3,760,594
<p>Is there a proper notation to <em>compose</em> sets and produce a set of sets? (<em>I am referring to this as compose due to ignorance of a proper manner to call it</em>)</p> <p>To illustrate what I want, let me <em>suppose</em> that <span class="math-container">$\otimes$</span> does the job, so that</p> <p><span class="math-container">\begin{align} \{1\} \otimes \{2\} &amp;\rightarrow \{ \{1\} , \{2\} \}\\ \{1\} \otimes \{1,2\} &amp;\rightarrow \{ \{1\} , \{1,2\} \} \end{align}</span></p> <p>Also, how can we write a <em>composition</em> for a finite number of sets? Say that <span class="math-container">$U_i = \{i\}$</span> (trivial example) is there something that can make (again using <span class="math-container">$\otimes$</span>): <span class="math-container">\begin{equation} \bigotimes_{i=1}^N U_i = \{ \{1\} , \{2\}, \ldots , \{N\} \} \end{equation}</span></p>
Sameer Baheti
567,070
<p><strong>HINT:</strong> Integrate both sides of</p> <p><span class="math-container">$df=\frac{\partial f}{\partial x}dx+\frac{\partial f}{\partial y}dy+\cdots= \vec{\nabla f}\cdot \vec{dl}=\vec{\nabla g}\cdot \vec{dl}=dg$</span></p> <p><strong>EDIT:</strong> As pointed out by @peek-a-boo, this only works if the functions in question have an integrable derivative (so, for example, a sufficient condition is for <span class="math-container">$f$</span> to be <span class="math-container">$C^1$</span>, i.e. continuously differentiable).</p>
3,760,594
<p>Is there a proper notation to <em>compose</em> sets and produce a set of sets? (<em>I am referring to this as compose due to ignorance of a proper manner to call it</em>)</p> <p>To illustrate what I want, let me <em>suppose</em> that <span class="math-container">$\otimes$</span> does the job, so that</p> <p><span class="math-container">\begin{align} \{1\} \otimes \{2\} &amp;\rightarrow \{ \{1\} , \{2\} \}\\ \{1\} \otimes \{1,2\} &amp;\rightarrow \{ \{1\} , \{1,2\} \} \end{align}</span></p> <p>Also, how can we write a <em>composition</em> for a finite number of sets? Say that <span class="math-container">$U_i = \{i\}$</span> (trivial example) is there something that can make (again using <span class="math-container">$\otimes$</span>): <span class="math-container">\begin{equation} \bigotimes_{i=1}^N U_i = \{ \{1\} , \{2\}, \ldots , \{N\} \} \end{equation}</span></p>
alphaomega
775,794
<p>Yes that's correct. Solve each (<span class="math-container">$1$</span>-dimensional) equation <span class="math-container">$\partial{f}/\partial{x_i} = \partial{g}/\partial{x_i}$</span></p>
3,760,594
<p>Is there a proper notation to <em>compose</em> sets and produce a set of sets? (<em>I am referring to this as compose due to ignorance of a proper manner to call it</em>)</p> <p>To illustrate what I want, let me <em>suppose</em> that <span class="math-container">$\otimes$</span> does the job, so that</p> <p><span class="math-container">\begin{align} \{1\} \otimes \{2\} &amp;\rightarrow \{ \{1\} , \{2\} \}\\ \{1\} \otimes \{1,2\} &amp;\rightarrow \{ \{1\} , \{1,2\} \} \end{align}</span></p> <p>Also, how can we write a <em>composition</em> for a finite number of sets? Say that <span class="math-container">$U_i = \{i\}$</span> (trivial example) is there something that can make (again using <span class="math-container">$\otimes$</span>): <span class="math-container">\begin{equation} \bigotimes_{i=1}^N U_i = \{ \{1\} , \{2\}, \ldots , \{N\} \} \end{equation}</span></p>
littleO
40,119
<p>[Spoiler warning, this is more than a hint. I wanted to show this method because it avoids working with components.]</p> <hr /> <p>First suppose that <span class="math-container">$h:\mathbb R^n \to \mathbb R$</span> is differentiable and that <span class="math-container">$\nabla h(x) = 0$</span> for all <span class="math-container">$x \in \mathbb R^n$</span>. I'll prove that <span class="math-container">$h$</span> is constant. Suppose (for a contradiction) that there exist points <span class="math-container">$a$</span> and <span class="math-container">$b$</span> in <span class="math-container">$\mathbb R^n$</span> such that <span class="math-container">$h(a) \neq h(b)$</span>. Let <span class="math-container">$z:[0,1] \to \mathbb R$</span> be the function defined by <span class="math-container">$$ z(t) = h(a + t(b - a)). $$</span> Note that <span class="math-container">$z$</span> is continuous on <span class="math-container">$[0,1]$</span> and differentiable on <span class="math-container">$(0,1)$</span> and that <span class="math-container">$z(0) \neq z(1)$</span>. By the mean value theorem, there exists a number <span class="math-container">$c$</span> such that <span class="math-container">$0 &lt; c &lt; 1$</span> and <span class="math-container">$$ z'(c) = z(1) - z(0) \neq 0. $$</span> But, by the chain rule, <span class="math-container">$$ z'(c) = \langle \nabla h(a + c(b -a)), b - a \rangle $$</span> which is <span class="math-container">$0$</span> because we are assuming that <span class="math-container">$\nabla h(x) = 0$</span> for all <span class="math-container">$x$</span> in <span class="math-container">$\mathbb R^n$</span>. This is a contradiction. Therefore <span class="math-container">$h$</span> is constant.</p> <hr /> <p>Next, to solve the original problem, let <span class="math-container">$h = f - g$</span> and apply the above result.</p>
2,699,170
<p>How to evaluate $$ \int \frac{1}{ \ln x} \ \mathrm{d} x, $$ where $\ln x$ denotes the natural logarithm of $x$? </p> <p>My effort: </p> <blockquote> <p>We note that $$ \int \frac{1}{ \ln x} \ \mathrm{d} x = \int \frac{x}{x \ln x} \ \mathrm{d} x = \int x \frac{ \mathrm{d} }{ \mathrm{d} x } \left( \ln \ln x \right) \ \mathrm{d} x = x \ln \ln x - \int \ln \ln x \ \mathrm{d} x. $$ </p> </blockquote> <p>What next? </p> <p>Another approach: </p> <blockquote> <p>We can also write $$ \int \frac{1}{ \ln x } \ \mathrm{d} x = \frac{x}{\ln x } + \int \frac{1}{ \left( \ln x \right)^2 } \ \mathrm{d} x = \frac{x}{\ln x } + \frac{x}{ \left( \ln x \right)^2 } + \int \frac{ 2 }{ \left( \ln x \right)^3 } \ \mathrm{d} x = \ldots = x \sum_{k=1}^n \left( \ln x \right)^{-k} + n \int \left( \ln x \right)^{-n-1} \ \mathrm{d} x. $$</p> </blockquote> <p>What next? </p> <p>Which one of the above two approaches, if any, is going to lead to a function consisting of finitely many terms comprised of elementary functions, that is, the kinds of solutions that we are used to in calculus courses? </p> <p>Or, is there any other way that can lead us to a suitable enough answer? </p>
Bernard
202,857
<p>It cannot be expressed in terms of <em>elementary functions</em>. Actually, it is a <em>special function</em>, the <em>logarithmic integral</em>: $$\operatorname{li}(x)=\int_0^x\frac{\mathrm dt}{\ln t}\biggl(=\lim_{\varepsilon\to 0}\int_0^{1-\varepsilon}\frac{\mathrm dt}{\ln t}+\lim_{\varepsilon\to 0}\int_{1+\varepsilon}^x\frac{\mathrm dt}{\ln t}\quad\text{for }x&gt;1\biggr)$$ and it is asymptotically equivalent to $\;\dfrac x{\ln x}$.</p> <p>Closely linked is the function $$\operatorname{Li}(x)=\int_2^x\frac{\mathrm dt}{\ln t}=\operatorname{li}(x)-\operatorname{li}(2), $$ used in number theory: it is an asymptotic equivalent of $\pi(x)$, the number of primes $\le x$.</p>
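The link between $\operatorname{Li}(x)$ and $\pi(x)$ can be illustrated numerically; this sketch (quadrature and sieve, with $x=1000$ as my example) is not from the answer:

```python
import math

def Li(x, n=10_000):
    """Li(x) = integral from 2 to x of dt/ln(t), composite Simpson's rule."""
    f = lambda t: 1 / math.log(t)
    h = (x - 2) / n
    s = f(2) + f(x)
    s += 4 * sum(f(2 + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(2 + i * h) for i in range(2, n, 2))
    return s * h / 3

def prime_pi(x):
    """pi(x) by a simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (x + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(x ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, x + 1, p)))
    return sum(sieve)

li_1000 = Li(1000)
assert abs(li_1000 - 176.56) < 0.1        # Li(1000) ≈ 176.56
assert prime_pi(1000) == 168              # pi(1000) = 168
assert abs(li_1000 - prime_pi(1000)) < 10 # already a decent approximation
```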
181,367
<p>It is well known that compactness implies pseudocompactness; this follows from <a href="https://secure.wikimedia.org/wikipedia/en/wiki/Heine%E2%80%93Borel_theorem">the Heine–Borel theorem</a>. I know that the converse does not hold, but what is a counterexample?</p> <p>(A <a href="https://secure.wikimedia.org/wikipedia/en/wiki/Pseudocompact_space"><strong>pseudocompact space</strong></a> is a topological space $S = \langle X,{\mathfrak I}\rangle$ such that every continuous function $f:S\to\Bbb R$ has bounded range.)</p>
Alex Ravsky
71,850
<p>Some exotic examples of pseudocompact and non-compact spaces are constructed in my paper “<a href="http://arxiv.org/abs/1003.5343" rel="nofollow">Pseudocompact paratopological groups that are topological</a>”: </p> <p>Example 1 (p.6). A $T_1$ space having each power countably pracompact (and, hence pseudocompact). (See p.1 for the definition of a countably pracompact space).</p> <p>Example 2 (p.6). [Under $MA_{countable}$] A functionally Hausdorff countably compact space $X$. (See p.6 for axiomatic assumptions which I use for the example construction).</p> <p>Example 3 (p.8) A functionally Hausdorff second countable space having each power countably pracompact (and, hence pseudocompact).</p> <p>Example 5 (p. 14) A $T_0$ sequentially compact, not totally countably compact space and a $T_1$ pseudocompact, not countably pracompact space. (See p.1 for the definition of a totally countably compact space).</p>
1,692,757
<p>I was required to find the derivative of $2\sqrt{\cot(x^2)}$.</p> <p><strong>My solution</strong></p> <p><a href="https://i.stack.imgur.com/N98SM.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/N98SM.jpg" alt="enter image description here"></a></p> <p>I can't find any mistake in my solution, but in my book the following solution is given:</p> <p><a href="https://i.stack.imgur.com/KBmH6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KBmH6.png" alt="enter image description here"></a></p> <p>Of course my answer and the answer in the book are not the same (I have plotted the graphs of both and they don't overlap). </p> <p>I understand the solution given in my book.</p> <p>I'm asking for help to figure out where I have made a mistake in my solution. </p>
TSU
320,683
<p>I still think the two answers are identical. Maybe there is something wrong with your plot ;) (If I haven't misread your writing, that is...)</p> <p>$$ \frac{-2x \csc^2(x^2)}{\sqrt{\cot(x^2)}} = \frac{-2x \left(\frac{1}{\sin^2(x^2)} \right)}{\sqrt{\frac{\cos(x^2)}{\sin(x^2)}}} = \frac{-2x}{\sin(x^2) \sqrt{\sin(x^2)\cos(x^2)}} = \ldots $$ Then proceed as in the given answer.</p>
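The book's image cannot be reproduced here, but the poster's closed-form expression can at least be checked against a numerical derivative (a minimal Python sketch; the sample points are arbitrary, restricted to where $\cot(x^2)&gt;0$):

```python
import math

def f(x):
    # f(x) = 2*sqrt(cot(x^2))
    return 2.0 * math.sqrt(math.cos(x * x) / math.sin(x * x))

def fprime_closed(x):
    # the poster's expression: -2x csc^2(x^2) / sqrt(cot(x^2))
    s, c = math.sin(x * x), math.cos(x * x)
    return -2.0 * x / (s * s) / math.sqrt(c / s)

def fprime_numeric(x, h=1e-6):
    # central finite difference
    return (f(x + h) - f(x - h)) / (2.0 * h)

for x in (0.4, 0.7, 1.0):
    print(x, fprime_closed(x), fprime_numeric(x))  # the two columns agree
```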
63,015
<p>Why does assigning a DownValue using <code>Apply</code>, e.g.,</p> <pre><code>Remove[a] index={3,4}; (a @@ index) = 5; a @@ index (*Set::write: Tag Apply in a @@ {3, 4} is Protected. &gt;&gt;*) (*a[3,4]*) </code></pre> <p>not work, while an assignment such as</p> <pre><code>Remove[a] a[Sequence @@ index] = 5 a @@ index (*5*) (*5*) </code></pre> <p>does?</p>
WReach
142
<p>The behaviour we see here is due to <code>Set</code> performing limited evaluation of its left-hand-side prior to the assignment.</p> <p><strong><code>Set</code> Evaluates the Left-Hand-Side</strong></p> <p>Despite the fact that <code>Set</code> has the attribute <code>HoldFirst</code>, it performs a special kind of evaluation upon its left-hand-side prior to performing the assignment. Specifically, if the left-hand-side has a head and parts, then that head and those parts are evaluated -- but no further rules are applied to the result.</p> <p>Here is a generic example illustrating this behaviour:</p> <pre><code>ClearAll[f1, f2, a1, a2] f1 = f2; f2[___] := Print@"f2 evaluated!" a1 = 1; a2 = 2; f1[a1 + a2] = 999; ??f2 </code></pre> <p><img src="https://i.stack.imgur.com/hXgYw.png" alt="INFORMATION screenshot"></p> <p>Observe how even though the <code>Set</code> expression was assigning to <code>f1[a1 + a2]</code>, the resulting definition is actually attached to <code>f2[3]</code>. <code>f1</code> was evaluated to <code>f2</code> and <code>a1 + a2</code> was evaluated to <code>3</code>. But also note that the pre-existing definition that matches <code>f2[3]</code> was <em>not</em> invoked, and nothing was printed.</p> <p>In contrast, if the left-hand-side has no parts, it is <em>not</em> evaluated:</p> <pre><code>ClearAll[a, b] a = b; a = 3; ??a </code></pre> <p><img src="https://i.stack.imgur.com/7B45J.png" alt="INFORMATION screenshot"></p> <pre><code>??b </code></pre> <p><img src="https://i.stack.imgur.com/3YC7c.png" alt="INFORMATION screenshot"></p> <p><strong>The Case At Hand</strong></p> <p>Let us now apply this knowledge to the case at hand.</p> <p>For <code>(a @@ index) = 5</code>, <code>Set</code> performs the limited evaluation upon the left-hand-side. <code>index</code> is replaced with <code>{3, 4}</code> to produce <code>a @@ {3, 4}</code> which is to say <code>Apply[a, {3, 4}]</code>. 
As noted above, the normal definition of <code>Apply</code> is not expanded so this is the final left-hand-side. Next, <code>Set</code> attempts to assign <code>5</code> to this expression. Since <code>Apply</code> is protected, the attempt fails with an error message.</p> <p>Now, <code>a[Sequence @@ index] = 5</code>. By construction, the head <code>a</code> is inert so it evaluates to itself. <code>Sequence @@ index</code> is evaluated to produce <code>Sequence[3, 4]</code>. Since <code>a</code> does not have the <code>SequenceHold</code> attribute, the <code>Sequence</code> head is stripped yielding a final left-hand-side of <code>a[3, 4]</code>. This is a valid and unprotected assignment target, so the assignment succeeds.</p> <p><strong>Why Does <code>Set</code> Behave Like This?</strong></p> <p>It is necessary for <code>Set</code> to perform at least some evaluation of the left-hand-side for otherwise expressions like <code>x[i, j-1] = y</code> would be far less useful. We want to have the indices <code>i</code> and <code>j-1</code> evaluated. On the other hand, full evaluation of the left-hand-side would mean that we could never reassign a variable after its initial assignment because it would disappear from the left-hand-side by evaluation. Thus, we have these arcane rules of partial evaluation. Most of the time, these rules just "do the right thing". But occasionally, like for the present question, the subtleties of the rules are observable.</p> <p><strong>Workaround</strong></p> <p>One way to work-around the present problem is as follows:</p> <pre><code>(a[##] = 5) &amp;@@ index a @@ index (* 5 *) (a[##] = 6) &amp;@@ index a @@ index (* 6 *) </code></pre> <p>The technique works even if a prior assignment has been performed.</p>
3,414,197
<p>I have to model/simulate a moving iron meter with Simulink; more specifically, I need to build a Simulink model for the equation of motion, which is given as: <span class="math-container">$$ \theta\ddot{\alpha} = T_\phi - T_S $$</span> where <span class="math-container">$\theta$</span> denotes the pointer's moment of inertia, <span class="math-container">$\alpha$</span> is the pointer's angle, <span class="math-container">$T_S = c_S\alpha$</span> is the spring's torque pushing the pointer back to its initial position, with <span class="math-container">$c_S$</span> as the spring constant, and <span class="math-container">$T_\phi = c_\phi i$</span> is the torque generated by the current $i$, where $i$ satisfies the following equation: <span class="math-container">$Ri = v - c_i\dot{\alpha}$</span>, where <span class="math-container">$R$</span> denotes the resistance in <span class="math-container">$\Omega$</span>, <span class="math-container">$v$</span> the DC voltage that's supposed to be measured, and <span class="math-container">$c_i$</span> the coil's conductance.</p> <p><span class="math-container">$\theta=6.4*10^{-6}\frac{kgm^2}{rad}$</span>; <span class="math-container">$c_S=6*10^{-4}\frac{Nm}{rad}$</span>; <span class="math-container">$c_\phi = 8*10^{-2} \frac{Nm}{A}$</span>; <span class="math-container">$c_i=1.2\frac{Vs}{A}$</span>; <span class="math-container">$R=2*10^3 \Omega$</span></p> <p>The reason I'm posting here asking you for help is that I don't know if I did this correctly, since I don't have any reference values to verify my result. The meter is supposed to measure the DC voltage <span class="math-container">$v$</span>, and to get a proper result I think I need to multiply the resulting angle <span class="math-container">$\alpha$</span> by a certain factor. 
</p> <p>To build my Simulink model I put in all the variables and get this <span class="math-container">$$ \theta \ddot{\alpha} = c_\phi i-c_S\alpha \Leftrightarrow \theta \ddot{\alpha} = c_\phi \frac {v-c_i\dot{\alpha}}{R}-c_S\alpha $$</span></p> <p>after a Laplace Transform and some math I get: <span class="math-container">$$ \theta s^2X(s) = \frac {c_\phi}{R}v-\frac{c_\phi c_i}{R}sX(s)-c_SX(s) $$</span> then I rearranged the equation so I can build the model using integrators: <span class="math-container">$$ X(s)=\frac{1}{s}\left(\frac{1}{s}\frac{\frac{c_\phi}{R}v-c_SX(s)}{\theta} - \frac{c_\phi c_i}{\theta R}X(s)\right) $$</span></p> <p>So in the end, it seems pretty similar to a damped harmonic oscillator...</p> <p>Attached below you find my Simulink model and the workspace I'm using. </p> <p><a href="https://i.stack.imgur.com/ENAug.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ENAug.png" alt="Simulink model"></a> <a href="https://i.stack.imgur.com/KkGxM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KkGxM.png" alt="Workspace"></a></p>
Pilotf4
272,311
<p>As far as I know, I can't post images in comments, so here we go.</p> <p>This is the Simulink model and a screenshot of the scope. I renamed <span class="math-container">$_0$</span> as <span class="math-container">$f$</span> and <span class="math-container">$$</span> as <span class="math-container">$D$</span>. Their numerical values are the same as in the answer before. The input's amplitude is 20.</p> <p>EDIT: I just took a quick screenshot and didn't adjust the solver, so the graph is somewhat rough.</p> <p><a href="https://i.stack.imgur.com/pE2Bl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pE2Bl.png" alt="Transfer function modell"></a> <a href="https://i.stack.imgur.com/RIW3c.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RIW3c.png" alt="Scope"></a></p>
1,384,735
<p>What is the ODE satisfied by $y=y(x)$ </p> <p>given that $$\frac{dy}{dx} = \frac{-x-2y}{y-2x}$$</p> <p>I understand that I need to get it in some form of $\int \cdots \;dy = \int \cdots \; dx$, but am not sure how to go about it.</p>
haqnatural
247,767
<p>\begin{align} \frac{dy}{dx} = \frac{ -x-2y }{ y-2x } \\ \frac{ dy }{ dx } =\frac{ -1-2\frac{ y }{ x } }{ \frac{ y }{ x } -2 } \\ \frac{ y }{ x } = t \to \quad y=xt \quad \Rightarrow \quad \frac{ dy }{ dx } =t+x \frac{ dt }{ dx } \\ t+x\frac{ dt }{ dx } =\frac{ -1-2t }{ t-2 } \\ x\frac{ dt }{ dx } =\frac{ -1-2t }{ t-2 } -t=\frac{ -1-{ t }^{ 2 } }{ t-2 } \\ \int { \frac{ 2-t }{ t^{ 2 }+1 } dt } =\int { \frac{ dx }{ x } } \\ \int { \left( \frac{ 2 }{ { t }^{ 2 }+1 } -\frac{ t }{ { t }^{ 2 }+1 } \right) dt= } \int { \frac { dx }{ x } } \\ 2\int { \frac { 1 }{ { t }^{ 2 }+1 } dt-\frac { 1 }{ 2 } \int { \frac { d\left( { t }^{ 2 }+1 \right) }{ { t }^{ 2 }+1 } } =\int { \frac { dx }{ x } } } \\ 2\arctan { \left( t \right) -\frac { 1 }{ 2 } } \ln { \left( 1+{ t }^{ 2 } \right) =\ln { \left| x \right| +C } } \\ \arctan { \left( t \right) =\frac { 1 }{ 2 } \ln { \left| x\sqrt { 1+{ t }^{ 2 } } \right| +C } } \\ \arctan { \left( \frac { y }{ x } \right) =\frac { 1 }{ 2 } \ln { \left| x\sqrt { 1+{ \left( \frac { y }{ x } \right) }^{ 2 } } \right| +C } } \end{align}</p>
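The implicit solution can be sanity-checked numerically: along any trajectory of the ODE the quantity $G(x,y)=\arctan(y/x)-\tfrac12\ln\bigl(x\sqrt{1+(y/x)^2}\bigr)$ should stay constant. A minimal Python sketch (classical RK4; starting point and step size are arbitrary choices keeping $y-2x\neq0$):

```python
import math

def yprime(x, y):
    # the ODE dy/dx = (-x - 2y) / (y - 2x)
    return (-x - 2.0 * y) / (y - 2.0 * x)

def G(x, y):
    # implicit solution: arctan(y/x) - (1/2) ln(x*sqrt(1+(y/x)^2)) = C
    return math.atan(y / x) - 0.5 * math.log(x * math.sqrt(1.0 + (y / x) ** 2))

x, y, h = 1.0, 0.0, 1e-3
c0 = G(x, y)
for _ in range(500):           # integrate from x = 1 to x = 1.5
    k1 = yprime(x, y)
    k2 = yprime(x + h / 2, y + h * k1 / 2)
    k3 = yprime(x + h / 2, y + h * k2 / 2)
    k4 = yprime(x + h, y + h * k3)
    y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    x += h
print(abs(G(x, y) - c0))       # remains essentially zero
```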
3,956,292
<p>Consider the Euclidean ball <span class="math-container">$B^n(x,r)$</span> in <span class="math-container">$\mathbb{R}^n$</span> given by: <span class="math-container">$$B^n(x,r) = \{z\in\mathbb{R}^n : ||z-x||_2 \leq r\}$$</span> with centre <span class="math-container">$x\in\mathbb{R}^n$</span> and radius <span class="math-container">$r\geq 0$</span>. Now we <strong>break</strong> this ball by <em><strong>removing an arbitrary subset from the boundary</strong></em> of <span class="math-container">$B^n(x,r)$</span>. Is the resulting set still convex?</p> <p>My intuition says <em>Yes</em>, and it's easy to visualise this in <span class="math-container">$\mathbb{R}^2$</span> or <span class="math-container">$\mathbb{R}^3$</span>. I'm wondering how I can prove it generally, i.e. for <span class="math-container">$\mathbb{R}^n$</span> by considering an arbitrary subset <span class="math-container">$S\subseteq B^n(x,r)$</span>? If <span class="math-container">$S = B^n(x,r)$</span> then the resulting set is empty, which is convex - so we can ignore that situation. We assume <span class="math-container">$S\subset B^n(x,r)$</span>. What's next?</p> <p>I consider two points <span class="math-container">$y_1,y_2 \in B^n(x,r)\backslash S$</span>, and want to show that for <span class="math-container">$t\in [0,1]$</span> we have <span class="math-container">$ty_1 + (1-t)y_2 \in B^n(x,r)\backslash S$</span>. How do I take it from here? I'm stuck.</p> <p>Thanks!</p> <p><strong>Addendum</strong>:<br> What if we consider the <span class="math-container">$p$</span>-norm in general, instead of the Euclidean norm? How do things change with <span class="math-container">$p$</span> if the ball is defined as follows? <span class="math-container">$$B^n(x,r) = \{z\in\mathbb{R}^n : ||z-x||_p \leq r\}$$</span> <em>It sounds fun breaking balls in higher dimensions under different norms!</em></p>
Hagen von Eitzen
39,174
<p>Along every (parametrised by <span class="math-container">$t$</span>) line, the squared Euclidean norm is a quadratic function in <span class="math-container">$t$</span> and does not attain a value <span class="math-container">$\ge1$</span> between distinct points where it is <span class="math-container">$\le1$</span>.</p>
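A slightly expanded version of this one-line argument (my wording, same idea), which also addresses the $p$-norm addendum:

```latex
Fix $y_1 \neq y_2$ with $\lVert y_i - x\rVert_2 \le r$ and set, for $t \in [0,1]$,
\[
  q(t) := \lVert t\,y_1 + (1-t)\,y_2 - x \rVert_2^2
        = \lVert y_1 - y_2 \rVert_2^2\, t^2 + \beta t + \gamma .
\]
The leading coefficient $\lVert y_1 - y_2\rVert_2^2$ is positive, so $q$ is a
strictly convex quadratic, and for $0 < t < 1$,
\[
  q(t) < \max\{q(0), q(1)\} \le r^2 .
\]
Hence every interior point of the segment lies in the \emph{open} ball and
cannot belong to the removed boundary subset, so the set remains convex.
The same argument works for any $p$-norm with $1 < p < \infty$ (these balls are
strictly convex). It fails for $p = 1$ and $p = \infty$: e.g.\ for $p=\infty$
in $\mathbb{R}^2$, removing the boundary point $(1,0)$ leaves $(1,\pm\tfrac12)$
in the set while their midpoint $(1,0)$ is gone.
```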
1,924,568
<p>This is a question that a friend asked me (has the final answer too).</p> <p>The pdf of a random variable $X$ is</p> <p>$$ f(x) = 0.5,\quad -1 &lt; x &lt; 1 $$</p> <p>The random variable Y is defined as </p> <p>$$ Y = \begin{cases} -2X, &amp; -1 &lt; X &lt; 0 \\ X+1, &amp; 0 &lt; X &lt;1 \end{cases}$$</p> <p>I tried using the inverse transform method but I'm unsure of how to go about this since $Y$ takes values in $[1, 2)$ in both intervals provided above. I get the fact that there should be some sort of an overlapping in this case but can someone provide me with a rigorous way to solve this problem.</p> <p>The answer was given to be</p> <p>$$ f(y) = \begin{cases} 0.25, &amp; 0 &lt; y &lt; 1 \\ 0.75, &amp; 1 &lt; y &lt;2 \end{cases}$$</p> <p>Here is what I did:</p> <p>$P(Y \leq y) = P(-2X \leq y) = \frac{1}{2} + \frac{y}{4}$.</p> <p>Taking the derivative of the above CDF, I get $f_Y(y) = 0.25$ when $0 &lt; y &lt; 2$. </p> <p>I carried out the same procedure for the other interval and obtained $f_Y(y) = 0.5$ when $1 &lt; y &lt; 2$.</p> <p>Is it alright to conclude that the pdf is as provided in the solution because there is an overlap between the two intervals? Is there a more rigorous way of showing this?</p>
Rizky Reza Fujisaki
310,950
<p>You are right intuitively, but I think the argument as stated is not rigorous.</p> <p>How about this:</p> <p>For $y \in [0,1)$, the only inverse is $x=-\frac{y}{2}$, whose absolute Jacobian is $\frac{1}{2}$, hence</p> <p>\begin{eqnarray*} f_Y(y)=f_X\left(-\frac{y}{2}\right)\frac{1}{2}=\frac{1}{4} \end{eqnarray*}</p> <p>and for $y \in [1,2)$, we get two inverses, namely $x=-\frac{y}{2}$ and $x=y-1$, whose respective absolute Jacobians are $\frac{1}{2}$ and $1$, hence</p> <p>\begin{eqnarray*} f_Y(y)=f_X\left(-\frac{y}{2}\right)\frac{1}{2}+f_X(y-1)\cdot 1=\frac{3}{4} \end{eqnarray*}</p> <p>Why did I use $+$? Because when $y\in[1,2)$, $y$ can arise as $-2x$ OR as $x+1$.</p>
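The stated densities are easy to confirm with a Monte Carlo sketch (seed and sample size are arbitrary; tolerances are loose):

```python
import random

random.seed(0)
n = 200000
low = mid = 0
for _ in range(n):
    x = random.uniform(-1.0, 1.0)          # X ~ Uniform(-1, 1), pdf 0.5
    y = -2.0 * x if x < 0 else x + 1.0     # the piecewise transform Y
    if 0 < y < 1:
        low += 1
    elif 1 < y < 2:
        mid += 1
print(low / n, mid / n)  # close to 0.25 and 0.75
```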
4,640,732
<p>Find roots of: <span class="math-container">$$x^{6}\ -\ \left(x-1\right)^{6}=0 \tag {1}$$</span></p> <p>I know this equation has <span class="math-container">$4$</span> complex roots and exactly one real roots of value <span class="math-container">$0.5$</span>.</p> <p>However, my first instinct was to do this: <span class="math-container">$$x^{6}\ =\ \left(x-1\right)^{6} \tag{2}$$</span> &quot;raise both sides to 6-th power&quot; to get: <span class="math-container">$$x=x-1\tag{3}$$</span></p> <p>Which has no real solution. I see that this wrong. How to avoid this error? Thanks.</p> <p>Inspired by watching <a href="https://www.youtube.com/watch?v=2jENiRqdk3s" rel="nofollow noreferrer">this youtube video</a></p> <p><strong>Edit:</strong></p> <p>I am not asking about how to solve the problem. I want to know what I did wrong from an Algebraic stand-point. Maybe raising to the power? What is wrong with that?</p>
NoChance
15,180
<p>Thanks for all the posted comments above. At the time of writing, no one had posted an answer, but I understood the following points, which combined may provide an answer.</p> <p><span class="math-container">$$|x|=|x-1|$$</span></p> <p>can't simply be rewritten as <span class="math-container">$x=x-1$</span>. I need to learn how to solve such an equation.</p> <p>Also, <span class="math-container">$$a^n=b^n$$</span></p> <p>does not always imply that <span class="math-container">$a=b$</span>. The result is affected by the domains of <span class="math-container">$a$</span> and <span class="math-container">$b$</span> and by whether the power is even or odd, integer or not, maybe among other factors.</p>
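Both points can be illustrated with a tiny numeric check (the chosen values are exact in binary floating point):

```python
def lhs(x):
    # the original equation: x^6 - (x - 1)^6
    return x**6 - (x - 1)**6

print(lhs(0.5))                           # 0.0 : x = 1/2, the solution of |x| = |x-1|, is a root
print((-1.0)**2 == 1.0**2, -1.0 == 1.0)   # True False : a^n = b^n (n even) does not force a = b
```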
4,115,417
<p>I'm stuck on this problem for quite some time:</p> <blockquote> <p>Call a triangle a <em>Special Rational triangle</em> if its area is rational and its side lengths are consecutive positive integers. Can we find a closed form which generates all <em>Special Rational triangles</em>?</p> </blockquote> <p>I have tried this one for quite some time, I was able to find a nice closed form in terms of a Diophantine equation, but I'm totally not satisfied with it. Your insight would be very helpful.</p> <p>Thanks in advance.</p>
Brian M. Scott
12,042
<p>You’re trying to apply a method for solving recurrences of fixed order to one that is not of fixed order. You can, however, rewrite it: <span class="math-container">$u_{n-1}=\sum_{k=1}^{n-2}u_k$</span>, so</p> <p><span class="math-container">$$u_n=u_{n-1}+\sum_{k=1}^{n-2}u_k=2u_{n-1}\,,$$</span></p> <p>and you now have a <em>very</em> simple homogeneous first-order recurrence.</p>
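The rewriting in this answer (each term equal to the sum of all previous terms) can be checked directly; assuming $u_1=1$ for concreteness, every term from $u_3$ on doubles the previous one:

```python
u = [1]                 # assume u_1 = 1
for n in range(2, 10):
    u.append(sum(u))    # u_n = u_1 + ... + u_{n-1}
print(u)                # [1, 1, 2, 4, 8, 16, 32, 64, 128]
```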
4,115,417
<p>I'm stuck on this problem for quite some time:</p> <blockquote> <p>Call a triangle a <em>Special Rational triangle</em> if its area is rational and its side lengths are consecutive positive integers. Can we find a closed form which generates all <em>Special Rational triangles</em>?</p> </blockquote> <p>I have tried this one for quite some time, I was able to find a nice closed form in terms of a Diophantine equation, but I'm totally not satisfied with it. Your insight would be very helpful.</p> <p>Thanks in advance.</p>
OttR - A. Yu
321,096
<p>You dropped the 1. And let's also write the trial solution more precisely as <span class="math-container">$u_n = kx^{n-1}$</span>, since your power-of-two solution also has exponent <span class="math-container">$n-1$</span>; the <span class="math-container">$u_1$</span> term, having exponent 0, then contributes another 1.</p> <p><span class="math-container">$x^n = x^{n-1} + ... + x + x^0 + 1$</span></p> <p>We can confirm for <span class="math-container">$n = 2$</span> that <span class="math-container">$x^2 = x + 2$</span> admits the solution <span class="math-container">$x = 2 = 2^{2-1}$</span>.</p> <p>I haven't really confirmed this should work all the way up, but it's promising.</p>
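Assuming the claim is that $x=2$ satisfies the displayed equation for every $n\ge2$ (which follows from the geometric sum $\sum_{k=0}^{n-1}2^k = 2^n-1$), it does "work all the way up"; a quick check:

```python
def rhs(x, n):
    # x^{n-1} + ... + x + x^0, plus the extra 1 from the u_1 term
    return sum(x**k for k in range(n)) + 1

for n in range(2, 12):
    print(n, 2**n == rhs(2, n))  # True for every n
```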
777,186
<p>My equation is the following, and I would like to find which $k$ can make it a circle.</p> <p>$$x^2+y^2+4x-6y+k=0$$</p> <p>My naive approach is to have $k$ to be $-4x+6y+c$ where c is any number, so that I can have any circle that is in 0. However k is a parameter and I can't really figure that out if I am missing something. Any advice?</p>
lab bhattacharjee
33,337
<p>Completing the square: $$(x+2)^2+(y-3)^2=2^2+3^2-k$$</p> <p>For a non-degenerate circle, $9+4-k&gt;0$, i.e. $k&lt;13$.</p>
777,186
<p>My equation is the following, and I would like to find which $k$ can make it a circle.</p> <p>$$x^2+y^2+4x-6y+k=0$$</p> <p>My naive approach is to have $k$ to be $-4x+6y+c$ where c is any number, so that I can have any circle that is in 0. However k is a parameter and I can't really figure that out if I am missing something. Any advice?</p>
Anastasiya-Romanova 秀
133,248
<p>\begin{align} x^2+y^2+4x-6y+k&amp;=0\\ x^2+4x+y^2-6y&amp;=-k\\ x^2+4x+4+y^2-6y+9&amp;=-k+4+9\\ (x+2)^2+(y-3)^2&amp;=13-k \end{align} Compare with the equation of the circle with center $(a,b)$ and radius $r$: \begin{align} (x-a)^2+(y-b)^2=r^2 \end{align} We get $r^2=13-k$. In order to have a circle, we need $r&gt;0$: \begin{align} r&amp;&gt;0\\ \sqrt{13-k}&amp;&gt;0\\ 13-k&amp;&gt;0\\ k&amp;&lt;13. \end{align}</p>
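The completed square can be spot-checked numerically: points of the circle with center $(-2,3)$ and radius $\sqrt{13-k}$ should satisfy the original equation (the value $k=4$ and the sample angles below are arbitrary):

```python
import math

def on_curve(x, y, k):
    return abs(x * x + y * y + 4 * x - 6 * y + k) < 1e-9

k = 4.0                      # any k < 13
r = math.sqrt(13.0 - k)      # predicted radius; predicted centre (-2, 3)
ok = all(on_curve(-2 + r * math.cos(t), 3 + r * math.sin(t), k)
         for t in [0.0, 1.0, 2.5, 4.0])
print(ok)  # True
```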
1,969,169
<p>We have to do the following integral. $$\int_1^{\frac{1+\sqrt{5}}{2}}\frac{x^2+1}{x^4-x^2+1}\ln\left(1+x-\frac{1}{x}\right)dx$$ I tried it a lot. I substituted $t=1+x-(1/x)$, so $dt=\left(1+\frac{1}{x^2}\right)dx$</p> <p>But then I got stuck at $$\int\limits_{1}^{2} \frac{\ln(t)}{(t-1)^{2} + 1} \mathrm{d}t$$</p> <p>How should I proceed from here?</p>
Robert Z
299,698
<p>Hint. We have that $$ \int_1^2\frac{\log(t)}{(t-1)^2+1}dt =\int_0^1\frac{\log(1+v)}{v^2+1}dv =\int_0^{\pi/4}\log(1+\tan(u))\,du\\ =\int_0^{\pi/4}\log(\cos(u)+\sin(u))\,du-\int_0^{\pi/4}\log(\cos(u))\,du\\ =\int_0^{\pi/4}\log(\sqrt{2}\cos(\pi/4-u)))\,du-\int_0^{\pi/4}\log(\cos(u))\,du.$$ Now work on the first integral and you will get the answer.</p>
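A numerical check of the chain of substitutions in the hint; the classical closed form of the last integral is $\frac{\pi}{8}\ln 2$, which goes slightly beyond the hint but makes a convenient target (Simpson's rule, arbitrary step count):

```python
import math

def simpson(f, a, b, n=2000):
    # composite Simpson's rule (n even)
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

I1 = simpson(lambda t: math.log(t) / ((t - 1) ** 2 + 1), 1.0, 2.0)
I2 = simpson(lambda u: math.log(1 + math.tan(u)), 0.0, math.pi / 4)
print(I1, I2, math.pi * math.log(2) / 8)  # all three agree
```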
1,981,360
<blockquote> <p>Given function $f:\mathbb{R}_0^+ \to \mathbb{R},~f(x) = x^2 + 4x + 4$ prove that it is injective.</p> </blockquote> <p>Using the definition of injectivity $(\forall x_1, x_2 \in \mathbb{R}_0^+)(x_1 \neq x_2 \implies f(x_1) \neq f(x_2))$ I'm doing the following:</p> <p>$$x_1^2 + 4x_1 + 4 = x_2^2 + 4x_2 + 4$$ $$x_1^2 - x_2^2 = -4(x_1 - x_2)$$ $$x_1 + x_2 = -4$$ $$x_1 = -4 - x_2.$$ Since the domain is $\mathbb{R}_0^+$ it is apparent that $x_1 \neq -4 - x_2$ and hence function is not injective.</p> <hr> <p>Is my final argument correct? In cases like that, shall I use the definition instead of the contrapositive?</p>
Piquito
219,998
<p>You have $$f(x)=(x+2)^2$$ so if the function had domain $\Bbb R$, then the points $x=t-2$ and $x=-t-2$ would have the same image for all $t$, so $f$ could not be injective. However, with domain $\mathbb{R}_0^+ $ it is in fact injective and strictly increasing, because $f(x)=(x+2)^2$ with $x+2&gt;0$, and $u\mapsto u^2$ is strictly increasing on $(0,\infty)$.</p>
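Strict monotonicity (and hence injectivity) on the non-negative reals is easy to illustrate on a grid (the grid itself is an arbitrary choice, of course, not a proof):

```python
def f(x):
    return x * x + 4 * x + 4        # = (x + 2)^2

xs = [i / 10.0 for i in range(0, 101)]   # grid in [0, 10]
values = [f(x) for x in xs]
print(values == sorted(values))          # True: increasing along the grid
print(len(set(values)) == len(values))   # True: no two grid points share a value
```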
191,548
<p>Say I have a list:</p> <pre><code>{{Line[{{-Sqrt[5/8 - Sqrt[5]/8], 1/4 (-1 - Sqrt[5])}, {0, 1}}], Line[{{Sqrt[5/8 - Sqrt[5]/8], 1/4 (-1 - Sqrt[5])}, {0,1}}]}, {Line[{{-Sqrt[5/8 + Sqrt[5]/8],1/4 (-1 + Sqrt[5])}, {Sqrt[5/8 - Sqrt[5]/8], 1/4 (-1 - Sqrt[5])}}], Line[{{Sqrt[5/8 - Sqrt[5]/8], 1/4 (-1 - Sqrt[5])}, {0, 1}}]}} </code></pre> <p>So each sublist of that list consists of, in this case, two lines. All the points that describe the position of each line appear in another list; call this list <code>points</code>. Now, I want to extract the positions of all the points from the above list in that <code>points</code> list. I'm aware of the <code>Position</code> function, but I'm not sure how to apply it effectively to my big list above in order to get the list of positions. I'd very much appreciate some help. </p>
Ulrich Neumann
53,677
<p>What about <code>/.Line-&gt;List</code></p> <pre><code>lines /. Line -&gt; (Flatten[ List[#], 1] &amp;) </code></pre>
393,580
<p>Show that $-Z$ is also a standard normal random variable; that is, show that $P[-Z &lt; x] = P[Z &lt; x] \,\forall x.$</p>
Argha
35,821
<p>Since the standard normal is a symmetric distribution about $0$ we have $P[Z&lt;0+x]=P[Z&gt;0-x]\forall x\\ \implies P[Z&lt;x]=P[Z&gt;-x]\forall x$</p> <p>Again note that $$P[-Z&lt;x] =P[Z&gt;-x]\forall x$$Hence it is proved.</p>
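The symmetry identity used here, $P[-Z&lt;x]=P[Z&gt;-x]=1-\Phi(-x)=\Phi(x)$, can be checked numerically via the error-function form of the standard normal CDF (sample points below are arbitrary):

```python
import math

def Phi(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# P[-Z < x] = P[Z > -x] = 1 - Phi(-x); symmetry says this equals Phi(x) = P[Z < x]
for x in (-2.0, -0.5, 0.0, 0.7, 1.3):
    print(x, Phi(x), 1.0 - Phi(-x))  # the two columns agree
```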
533,855
<p>I need to show that $\{x_{n}\}$ is Cauchy given that there exists $0&lt;C&lt;1$ s.t. $|x_{n+1}-x_{n}|\leq C|x_{n}-x_{n-1}|$. Intuitively, that statement clearly implies $\{x_{n}\}$ is Cauchy, since it implies the sequence terms become arbitrarily close. But how to make it precise? </p> <p>Couldn't it also be said from the given information that the sequence is either monotone increasing or monotone decreasing and bounded? Then it would converge, which means it is Cauchy. </p> <p>Thanks for any assistance! </p>
robjohn
13,854
<p>If I understand the question, as soon as one T appears, the next flip is equally likely to be a T or an H, so neither is more likely to appear before the other.</p>
2,316,159
<p>I'm interested in the differences in the groups but also in the associated Lie algebras. I know that two groups can have the same Lie algebra if they differ by discrete elements; for instance, $SO(n)$ and $O(n)$ should have the same algebra. But then if I have a group $O(2,2)$, what is the associated Lie algebra? Does $O(2)\times O(2)$ have the same associated Lie algebra as $SO(2)\times SO(2)$ does?</p>
Tsemo Aristide
280,301
<p>Consider $J=\pmatrix{1&amp;0&amp;0&amp;0\cr 0&amp;1&amp;0&amp;0\cr 0&amp;0&amp;-1&amp;0\cr 0&amp;0&amp;0&amp;-1}$. A matrix $M$ is in $SO(2,2)$ if and only if $M^t J M=J$ (and $\det M=1$). The Lie algebra of $SO(2,2)$ is the set of $4\times 4$ matrices such that $A^t J+JA=0$.</p> <p>The Lie algebra $so(2)$ of $SO(2)$ is the set of $2\times 2$ matrices such that $A^t +A=0$, and the Lie algebra of $SO(2)\times SO(2)$ is the product $so(2)\times so(2)$.</p>
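A pure-Python spot check of the two displayed conditions; the "boost" generator and its closed-form exponential below are my own illustrative choices, not taken from the answer:

```python
import math

J = [[1,0,0,0],[0,1,0,0],[0,0,-1,0],[0,0,0,-1]]
A = [[0,0,1,0],[0,0,0,0],[1,0,0,0],[0,0,0,0]]   # a generator mixing coordinates 1 and 3

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(4)) for j in range(4)] for i in range(4)]

def transpose(X):
    return [list(r) for r in zip(*X)]

# Lie-algebra condition: A^t J + J A = 0
S1, S2 = matmul(transpose(A), J), matmul(J, A)
lie_ok = all(S1[i][j] + S2[i][j] == 0 for i in range(4) for j in range(4))

# group condition for M = exp(A), written in closed form with cosh/sinh
c, s = math.cosh(1.0), math.sinh(1.0)
M = [[c,0,s,0],[0,1,0,0],[s,0,c,0],[0,0,0,1]]
MtJM = matmul(transpose(M), matmul(J, M))
grp_ok = all(abs(MtJM[i][j] - J[i][j]) < 1e-12 for i in range(4) for j in range(4))
print(lie_ok, grp_ok)  # True True: M^t J M = J
```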
940,352
<p>If there is a mapping of $B$ onto $A$, then $2^{|A|} \leq 2^{|B|}$. [Hint: Given $g$ mapping $B$ onto $A$, let $f(X)=g^{-1}(X)$ for all $X \subseteq A$]</p> <p>I follow the hint and obtain the function $f$. If $f$ is injective, then the statement is proven.</p> <p>Question: Why does $g^{-1}$ exist in the first place? How do we know $g$ is injective? The hint given seems a bit weird.</p> <p>Can anyone explain to me?</p>
Daniel Fischer
83,702
<blockquote> <p>Question: Why does $g^{−1}$ exist in the first place?</p> </blockquote> <p>It exists for all maps. Here, it does not denote the inverse, but the pre-image map,</p> <p>$$g^{-1}(X) = \{b\in B : g(b)\in X\}.$$</p> <p>Now use the surjectivity of $g$ to deduce the injectivity of $g^{-1}$.</p>
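A small finite illustration of this (the sets and the surjection are my own choice): when $g$ is onto, distinct subsets of $A$ always have distinct pre-images.

```python
from itertools import chain, combinations

B = {1, 2, 3, 4, 5}
A = {'a', 'b', 'c'}
g = {1: 'a', 2: 'a', 3: 'b', 4: 'c', 5: 'c'}   # an arbitrary surjection B -> A

def preimage(X):
    return frozenset(b for b in B if g[b] in X)

subsets_of_A = [frozenset(c) for c in
                chain.from_iterable(combinations(sorted(A), r) for r in range(len(A) + 1))]
images = [preimage(X) for X in subsets_of_A]
print(len(subsets_of_A), len(set(images)))  # 8 8 -> the pre-image map is injective
```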
940,352
<p>If there is a mapping of $B$ onto $A$, then $2^{|A|} \leq 2^{|B|}$. [Hint: Given $g$ mapping $B$ onto $A$, let $f(X)=g^{-1}(X)$ for all $X \subseteq A$]</p> <p>I follow the hint and obtain the function $f$. If $f$ is injective, then the statement is proven.</p> <p>Question: Why does $g^{-1}$ exist in the first place? How do we know $g$ is injective? The hint given seems a bit weird.</p> <p>Can anyone explain to me?</p>
Diego Robayo
177,025
<p>If there exists an onto function $f:B \longrightarrow A$, then $\lvert B \rvert \geq \lvert A \rvert$, which means that there exists an injective function $g: A \longrightarrow B$. Now, since we want to see that $2^{\lvert A \rvert} \leq 2^{\lvert B \rvert}$, it's enough to define an injective function $h: \mathcal P \left( {A} \right) \longrightarrow \mathcal P \left( {B} \right)$: for $C \subseteq A$, set $h(C) := \{\, g(x) : x \in C \,\}$. Now it's not too hard to prove that this function is well defined and injective.</p>
1,774,670
<p>Among many fascinating sides of mathematics, there is one that I praise, especially for didactic purposes : the parallels that can be drawn between some &quot;Continuous&quot; and &quot;Discrete&quot; concepts.</p> <p>I am looking for examples bringing a help to a global understanding...</p> <p>Disclaimer : Being driven, as said above, mainly by didactic purposes, I am not in need for full rigor here although I do not deny at all the interest of having a rigorous approach in other contexts where it can be essential to show in particular in which sense the continuous &quot;object&quot; is the limit of its discrete counterparts.</p> <p>I should appreciate if some colleagues can give examples of their own, in the style &quot;my favorite one is...&quot;, or references to works about this theme.</p> <p>Let me provide, on my side, five <strong>examples</strong>:</p> <hr /> <p><strong>1st example:</strong> How to obtain the equations of certain epicycloids, here a nephroid :</p> <p>Consider a <span class="math-container">$N$</span>-sided regular polygon <span class="math-container">$A_1,A_2,\cdots A_N$</span> with any integer <span class="math-container">$N$</span> large enough, say around <span class="math-container">$50$</span>. Let us connect every point <span class="math-container">$A_k$</span> to point <span class="math-container">$A_{3k}$</span> by a line segment (we assume a cyclic numbering). As can be seen on Fig. 1, a certain envelope curve is &quot;suggested&quot;.</p> <p>Question : which (smooth) curve is behind this construction ?</p> <p>Answer : Let us consider two consecutive line segments like those represented on Fig. 
1 with a larger width : the evolution speed of <span class="math-container">$A_{3k} \to A_{3k'}$</span> where <span class="math-container">$k'=k+1$</span> is three times the evolution speed of <span class="math-container">$A_{k} \to A_{k'}$</span>, the pivoting of the line segment takes place at the point (of the line segment) which is 3 times closer to <span class="math-container">$A_k$</span> than to <span class="math-container">$A_{3k}$</span> (the weights' ratio 3:1 comes from the size ratio of ''homothetic'' triangles <span class="math-container">$P_kA_kA_k'$</span> and <span class="math-container">$P_kA_{3k}A_{3k'}$</span>.) Said in an algebraic way :</p> <p><span class="math-container">$$P_k=\tfrac{3}{4}e^{ika}+\tfrac{1}{4}e^{3ika}$$</span></p> <p>(<span class="math-container">$A_k$</span> is identified with <span class="math-container">$e^{ika}$</span> with <span class="math-container">$a:=\tfrac{2 \pi}{N}$</span>).</p> <p>Replacing now discrete values <span class="math-container">$ka$</span> by a continuous parameter <span class="math-container">$t$</span>, we get</p> <p><span class="math-container">$$z=\tfrac{3}{4}e^{it}+\tfrac{1}{4}e^{3it}$$</span></p> <p>i.e., a parametric representation of the nephroid, or the equivalent real equations :</p> <p><span class="math-container">$$\begin{cases}x=\tfrac{3}{4}\cos(t)+\tfrac{1}{4}\cos(3t)\\ y=\tfrac{3}{4}\sin(t)+\tfrac{1}{4}\sin(3t)\end{cases}$$</span></p> <p><a href="https://i.stack.imgur.com/XZPzg.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XZPzg.jpg" alt="enter image description here" /></a></p> <p>Fig. 1 : <em>The nephroid as an envelope. 
It can be viewed as the trajectory of a point of a small circle with radius <span class="math-container">$\dfrac14$</span> rolling inside a circle with radius <span class="math-container">$1$</span>.</em></p> <p>Remark: if, instead of connecting <span class="math-container">$A_k$</span> to <span class="math-container">$A_{3k}$</span>, we had connected it to <span class="math-container">$A_{2k}$</span>, we would have obtained a cardioid, with <span class="math-container">$A_{4k}$</span> an astroid, etc.</p> <hr /> <p><strong>2nd example:</strong> Coupling ''second derivative <span class="math-container">$ \ \leftrightarrow \ \min \ $</span> kernel'' :</p> <p>All functions considered here are at least <span class="math-container">$C^2$</span>, except function <span class="math-container">$K$</span>.</p> <p>Let <span class="math-container">$f:[0,1] \rightarrow \mathbb{R}$</span> and <span class="math-container">$K:[0,1]\times[0,1]\rightarrow \mathbb{R}$</span> (a so-called &quot;kernel&quot;) defined by <span class="math-container">$K(x,y):=\min(x,y)$</span>.</p> <p>Let us associate <span class="math-container">$f$</span> with function <span class="math-container">$\varphi(f)=g$</span> defined by <span class="math-container">$$\tag{1}g(y)=\int_{t=0}^{t=1} K(t,y)f(t)dt=\int_{t=0}^{t=1} \min(t,y)f(t)dt$$</span></p> <p>We can get rid of the &quot;<span class="math-container">$\min$</span>&quot; function by decomposing the integral into :</p> <p><span class="math-container">$$\tag{2}g(y)=\int_{t=0}^{t=y} t f(t)dt+\int_{t=y}^{t=1} y f(t)dt$$</span></p> <p><span class="math-container">$$\tag{3}g(y)=\int_{t=0}^{t=y} t f(t)dt - y F(y)$$</span></p> <p>where we have set</p> <p><span class="math-container">$$\tag{4}F(y):=\int_{t=1}^{t=y}f(t)dt \ \ \ \ \ \ \ \ \text{Remark:} \ \ \ F'(y)=f(y)$$</span></p> <p>Let us differentiate (3) twice :</p> <p><span class="math-container">$$\tag{5}g'(y)=y f(y) - 1 F(y) - y f(y) = -F(y)$$</span></p> <p><span class="math-container">$$\tag{6}g''(y)=
-f(y) \ \ \Longleftrightarrow \ \ f(y)=-g''(y)$$</span></p> <p>Said otherwise, the inverse of transform <span class="math-container">$f \rightarrow \varphi(f)=g$</span> is:</p> <p><span class="math-container">$$\tag{7}\varphi^{-1} = \text{opposite of the second derivative.}$$</span></p> <p>This connection with the second derivative is rather unexpected...</p> <p>Had we taken a discrete approach, what would have been found ?</p> <p>The discrete equivalents of <span class="math-container">$\varphi$</span> and <span class="math-container">$\varphi^{-1}$</span> are matrices :</p> <p><span class="math-container">$$\bf{M}=\begin{pmatrix}1&amp;1&amp;1&amp;\cdots&amp;\cdots&amp;1\\1&amp;2&amp;2&amp;\cdots&amp;\cdots&amp;2\\1&amp;2&amp;3&amp;\cdots&amp;\cdots&amp;3\\\cdots&amp;\cdots&amp;\cdots&amp;\cdots&amp;\cdots&amp;\cdots\\\cdots&amp;\cdots&amp;\cdots&amp;\cdots&amp;\cdots&amp;\cdots\\1&amp;2&amp;3&amp;\cdots&amp;\cdots&amp;n \end{pmatrix} \ \ \textbf{and}$$</span> <span class="math-container">$$\bf{D}=\begin{pmatrix}2&amp;-1&amp;&amp;&amp;&amp;\\-1&amp;2&amp;-1&amp;&amp;&amp;\\&amp;-1&amp;2&amp;-1&amp;&amp;\\&amp;&amp;\ddots&amp;\ddots&amp;\ddots&amp;\\&amp;&amp;&amp;-1&amp;2&amp;-1\\&amp;&amp;&amp;&amp;-1&amp;1 \end{pmatrix}$$</span></p> <p>that verify the matrix identity <span class="math-container">$\bf{M}^{-1}=\bf{D}$</span>, in analogy with (7).</p> <p>Indeed,</p> <ul> <li><p>Nothing to say about the connection of matrix <span class="math-container">$\bf{M}$</span> with coefficients <span class="math-container">$\bf{M}_{i,j}=\min(i,j)$</span> with operator <span class="math-container">$K$</span>.</p> </li> <li><p>the tridiagonal matrix <span class="math-container">$\bf{D}$</span> is well known (in particular by all people doing discretization) to be &quot;the&quot; discrete analog of the second derivative due to the classical approximation:</p> </li> </ul> <p><span class="math-container">$$f''(x)\approx\dfrac{1}{h^2}(f(x-h)-2f(x)+f(x+h))$$</span></p> <p>that can easily be
obtained using Taylor expansions. The exceptional value <span class="math-container">$1$</span> at the bottom right of <span class="math-container">$\bf{D}$</span> is explained by the discrete boundary conditions.</p> <p>Remark: this correspondence between the &quot;min&quot; operator and the second derivative is not mine; I have known it for a long time but I am unable to trace back where I first saw it (probably in a signal processing book). Does somebody have a reference?</p> <p>Connected: the eigenvalues of <span class="math-container">$D$</span> are remarkable (<a href="http://www.math.nthu.edu.tw/%7Eamen/2005/040903-7.pdf" rel="nofollow noreferrer">http://www.math.nthu.edu.tw/~amen/2005/040903-7.pdf</a>)</p> <p>In the same vein: <a href="https://math.stackexchange.com/q/3655530">computation of adjoint operators</a>.</p> <hr /> <p><strong>3rd example</strong>: the Itô integral.</p> <p>One could think that the Lebesgue integral (1902) is the ultimate theory of integration, correcting the imperfections of the theory elaborated by Riemann some 50 years before. This is not the case. In particular, Itô defined (1942) a new kind of integral which is now essential in probability and finance. Its principle, roughly said, is that infinitesimal &quot;deterministic&quot; increments &quot;dt&quot; are replaced by random increments of Brownian motion type &quot;dW&quot;, as formalized by Einstein (1905), then by Wiener (1923).
Let us give an image of it.</p> <p>Let us first recall two definitions of Brownian motion <span class="math-container">$W(t)$</span> or <span class="math-container">$W_t$</span> (<span class="math-container">$W$</span> for Wiener), an informal one and a formal one:</p> <p>Informal: A &quot;particle&quot; starting at <span class="math-container">$x=0$</span> at time <span class="math-container">$t$</span> jumps, &quot;at the next instant&quot; <span class="math-container">$t+dt$</span>, to a nearby position, either on the left or on the right, the amplitude and sign of the jump being governed by a normal distribution <span class="math-container">$N(x,\sigma^2)$</span> with an infinitesimal fixed standard deviation <span class="math-container">$\sigma.$</span></p> <p><span class="math-container">$\text{Formal}: \ \ W_t:=G_0 t+\sqrt{2}\sum_{n=1}^{\infty}G_n\dfrac{\sin(\pi n t)}{\pi n}$</span>, with <span class="math-container">$G_n$</span> iid <span class="math-container">$N(0,1)$</span> random variables.</p> <p>(Other definitions exist. This one, in the form of a &quot;random Fourier series&quot;, is handy for many computations.)</p> <p>Let us now consider one of the fundamental formulas of Itô's integral, for a continuously differentiable function <span class="math-container">$f$</span>:</p> <p><span class="math-container">$$\tag{8}\begin{equation} \displaystyle\int_0^t f(W(s))dW(s) = \displaystyle\int_0^{W(t)} f(\lambda)d \lambda - \frac{1}{2}\displaystyle\int_0^t f'(W(s))ds. \end{equation}$$</span></p> <p><strong>Remark:</strong> The integral sign on the LHS of (8) defines Itô's integral, whereas the integrals on the RHS have to be understood in the sense of Riemann/Lebesgue.
The presence of the second term on the RHS is rather puzzling, isn't it?</p> <p>Question: how can this second integral be understood/justified?</p> <p>Szabados has proposed (1990) (see (<a href="https://mathoverflow.net/questions/16163/discrete-version-of-itos-lemma">https://mathoverflow.net/questions/16163/discrete-version-of-itos-lemma</a>)) a discrete analog of formula (8). Here is how it runs:</p> <p><strong>Theorem:</strong> Let <span class="math-container">$f:\mathbb{Z} \longrightarrow \mathbb{R}$</span>. Let us define:</p> <p><span class="math-container">$$ \tag{9}\begin{equation} F(k)=\left\{ \begin{matrix} \dfrac{1}{2}f(0)+\displaystyle\sum_{j=1}^{k-1} f(j)+\dfrac{1}{2}f(k) &amp; if &amp; k \geq 1 &amp; \ \ (a)\\ 0 &amp; if &amp; k = 0 &amp; \ \ (b)\\ -\dfrac{1}{2}f(k)-\displaystyle\sum_{j=k+1}^{-1} f(j)-\dfrac{1}{2}f(0) &amp; if &amp; k \leq -1 &amp; \ \ (c) \end{matrix} \right. \end{equation} $$</span></p> <p><strong>Remarks:</strong></p> <ol> <li><p>We will work only on (a) and its particular case (b).</p> </li> <li><p>(a) is nothing else than the &quot;trapezoid formula&quot;, which in particular explains the factors <span class="math-container">$\dfrac{1}{2}$</span> in front of <span class="math-container">$f(0)$</span> and <span class="math-container">$f(k)$</span>.</p> </li> </ol> <p>Let us now define a family of random variables <span class="math-container">$X_k$</span>, <span class="math-container">$k=1, 2, \cdots $</span>, iid on <span class="math-container">$\{-1,1\}$</span> with <span class="math-container">$P(X_k=-1)=P(X_k=1)=\frac{1}{2}$</span>, and let</p> <p><span class="math-container">$$ \begin{equation} S_n= \displaystyle\sum_{k=1}^n X_k.
\end{equation} $$</span></p> <p>Then</p> <p><span class="math-container">$$ \tag{10}\begin{equation} \forall n, \ \ \displaystyle\sum_{i=0}^{n}f(S_i)X_{i+1} = F(S_{n+1})-\dfrac{1}{2}\displaystyle\sum_{i=0}^{n}\dfrac{f(S_{i+1})-f(S_{i})}{X_{i+1}} \end{equation} $$</span></p> <p><strong>Remark</strong>: Please note the analogies:</p> <ul> <li><p>between <span class="math-container">$\frac{f(S_{i+1})-f(S_{i})}{X_{i+1}}$</span> and <span class="math-container">$f'(S_i)$</span>.</p> </li> <li><p>between <span class="math-container">$F(k)$</span> and <span class="math-container">$\displaystyle\int_{\lambda=0}^{\lambda=k}f(\lambda)d\lambda$</span>.</p> </li> </ul> <p>For example,</p> <p>a) If <span class="math-container">$f$</span> is the identity function (<span class="math-container">$\forall k \ f(k)=k$</span>), definition (9)(a) gives: <span class="math-container">$$ \begin{equation} F(k)=\frac{1}{2}(k-1)k+\frac{1}{2}k=\dfrac{1}{2}k^2. \tag{11} \end{equation} $$</span></p> <p>which doesn't come as a surprise: the 'discrete antiderivative' of <span class="math-container">$k$</span> is <span class="math-container">$\frac{1}{2}k^2$</span>... (the formula in (11) remains in fact the same for <span class="math-container">$k&lt;0$</span>).</p> <p>b) If <span class="math-container">$f$</span> is the &quot;squaring function&quot; (<span class="math-container">$\forall k, \ f(k)=k^2$</span>), (9)(a) becomes:</p> <p><span class="math-container">$$ \begin{equation} \text{If} \ k&gt;0, \ \ \ F(k)=\frac{1}{6}(k-1)k(2k-1)+\frac{1}{2}k^2=\dfrac{1}{3}k^3+\dfrac{1}{6}k.
\tag{12} \end{equation} $$</span></p> <p>This time, a new term <span class="math-container">$\dfrac{1}{6}k$</span> has entered into play.</p> <p><strong>Proof of the Theorem:</strong> The definition allows us to write:</p> <p><span class="math-container">\begin{equation}F(S_{i+1})-F(S_i)=f(S_i)X_{i+1}+\frac{1}{2}\dfrac{f(S_{i+1})-f(S_i)}{X_{i+1}} \tag{13} \end{equation}</span></p> <p>In fact, proving (13) can be split into two cases: either <span class="math-container">$X_{i+1}=1$</span>, or <span class="math-container">$X_{i+1}=-1$</span>. Let us consider the first case (the second case is similar): the RHS of (13) becomes</p> <p><span class="math-container">$f(S_i)+\frac{1}{2}(f(S_{i+1})-f(S_i))=\frac{1}{2}(f(S_{i+1})+f(S_i))$</span>, which is the area variation in the trapezoid formula;</p> <p>Summing equations (13) for <span class="math-container">$i=0,\dots,n$</span> gives the desired identity (10).</p> <p><strong>An example of application</strong>: Let <span class="math-container">$f(t)=t$</span>; we get</p> <p><span class="math-container">$$\displaystyle\sum_{i=0}^{n}S_iX_{i+1} = F(S_{n+1})-\frac{n+1}{2}=\dfrac{1}{2}S_{n+1}^2-\frac{n+1}{2}.$$</span></p> <p>which appears as the discrete equivalent of the celebrated formula: <span class="math-container">$$ \begin{equation} \displaystyle\int_0^t W(s)dW(s) = \frac{1}{2}W(t)^2-\frac{1}{2}t.
\end{equation} $$</span></p> <p>One can establish that the autocorrelation of the <span class="math-container">$W_t$</span> process is</p> <p><span class="math-container">$$cov(W_s,W_t)=E(W_sW_t)-E(W_s)E(W_t)=\min(s,t),$$</span></p> <p>(see (<a href="https://math.stackexchange.com/q/884299">Autocorrelation of a Wiener Process proof</a>)) providing an unexpected connection with the second example...</p> <p><strong>Last remark</strong>: Another kind of integral based on a discrete definition: the gauge integral (<a href="https://math.vanderbilt.edu/schectex/ccc/gauge/" rel="nofollow noreferrer">https://math.vanderbilt.edu/schectex/ccc/gauge/</a>).</p> <hr /> <p><strong>4th example</strong> (Darboux sums):</p> <p>Here is a discrete formula:</p> <p><span class="math-container">$$\prod_{k=1}^{n-1}\sin\frac{k \pi}{n} = \frac{n}{2^{n-1}}$$</span></p> <p>(see a proof in <a href="https://math.stackexchange.com/q/8385">Prove that $\prod_{k=1}^{n-1}\sin\frac{k \pi}{n} = \frac{n}{2^{n-1}}$</a>)</p> <p>Has this formula a continuous &quot;counterpart&quot;?</p> <p>Taking the logarithm on both sides and dividing by <span class="math-container">$n$</span>, we get:</p> <p><span class="math-container">$$\tfrac1n \sum_{k=1}^{n-1} \ln \sin \tfrac{k \pi}{n}=\tfrac{\ln(n)}{n}-\ln(2)\tfrac{n-1}{n}$$</span></p> <p>Letting now <span class="math-container">$n \to \infty$</span>, we obtain the rather classical integral:</p> <p><span class="math-container">$$\int_0^1 \ln(\sin(\pi x))dx=-\ln(2)$$</span></p> <hr /> <p><strong>5th example</strong>: bivariate cdfs (cumulative distribution functions).</p> <p>Let <span class="math-container">$(X,Y)$</span> be a pair of random variables with pdf <span class="math-container">$f_{X,Y}$</span> and cdf:</p> <p><span class="math-container">$$F_{X,Y}(x,y):=P(X \leq x \ \&amp; \ Y \leq y).$$</span></p> <p>Take a look at this formula:</p> <p><span class="math-container">$$P(x_1&lt;X \leq x_2, \ \ y_1&lt;Y \leq
y_2)=F_{XY}(x_2,y_2)-F_{XY}(x_1,y_2)-F_{XY}(x_2,y_1)+F_{XY}(x_1,y_1)\tag{14}$$</span></p> <p>(<a href="https://www.probabilitycourse.com/chapter5/5_2_2_joint_cdf.php" rel="nofollow noreferrer">https://www.probabilitycourse.com/chapter5/5_2_2_joint_cdf.php</a>)</p> <p>It is the discrete equivalent of the continuous definition of <span class="math-container">$f_{XY}$</span> as the mixed second-order partial derivative of <span class="math-container">$F_{X,Y}$</span>, under the assumption that <span class="math-container">$F$</span> is a <span class="math-container">$C^2$</span> function:</p> <p><span class="math-container">$$f_{XY}(x,y)=\dfrac{\partial^2 F_{X,Y}}{\partial x \partial y}(x,y).\tag{15}$$</span></p> <p>Do you see why? Hint: let <span class="math-container">$x_2 \to x_1$</span> and <span class="math-container">$y_2 \to y_1$</span>, and identify the LHS of (14) with <span class="math-container">$f_{XY}(x_1,y_1)dxdy$</span>.</p> <p><strong>Final remarks</strong>:</p> <ol> <li><p>A remarkable text about this analogy in Physics: <a href="https://www.lptmc.jussieu.fr/user/lesne/MSCS-Lesne.pdf" rel="nofollow noreferrer">https://www.lptmc.jussieu.fr/user/lesne/MSCS-Lesne.pdf</a></p> </li> <li><p>In linear algebra, continuous analogs of some fundamental factorizations (<a href="https://royalsocietypublishing.org/doi/pdf/10.1098/rspa.2014.0585" rel="nofollow noreferrer">https://royalsocietypublishing.org/doi/pdf/10.1098/rspa.2014.0585</a>).</p> </li> <li><p><a href="https://mathoverflow.net/questions/270930/when-has-discrete-understanding-preceded-continuous">A similar question on MathOverflow</a> mentioning in particular the following well-written book <a href="http://math.sfsu.edu/beck/papers/noprint.pdf" rel="nofollow noreferrer">&quot;Computing the continuous discretely&quot;</a> by Beck and Robins.</p> </li> <li><p>There are many other tracks, e.g., connections with graphs <a href="https://web.cs.elte.hu/%7Elovasz/telaviv.pdf" rel="nofollow
noreferrer">&quot;Discrete and continuous: two sides of the same&quot; by L. Lovász</a> or this one (<a href="http://jimhuang.org/CDNDSP.pdf" rel="nofollow noreferrer">http://jimhuang.org/CDNDSP.pdf</a>), discrete vs. continuous versions of the logistic equation (<a href="https://math.stackexchange.com/q/3328867">https://math.stackexchange.com/q/3328867</a>), etc.</p> </li> <li><p>In the epidemiology domain: &quot;Discrete versus continuous-time models of malaria infections&quot; <a href="https://ethz.ch/content/dam/ethz/special-interest/usys/ibz/theoreticalbiology/education/learningmaterials/701-1424-00L/malaria.pdf" rel="nofollow noreferrer">lecture notes by Lucy Crooks, ETH Zürich</a>.</p> </li> <li><p>Another example in probability: the connection between a discrete and a continuous distribution, i.e., the Poisson(<span class="math-container">$\lambda$</span>) distribution and the <span class="math-container">$\Gamma(n)$</span> distribution, which is well treated in <a href="https://math.stackexchange.com/q/2228023">this answer</a>.</p> </li> </ol>
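Postscript: the matrix identity $\bf{M}^{-1}=\bf{D}$ from the second example is easy to sanity-check numerically. Here is a minimal sketch in plain Python (the size `n = 6` is arbitrary; the helper names are mine, not standard):

```python
def min_matrix(n):
    # M[i][j] = min(i+1, j+1): the kernel min(s, t) sampled on a grid
    return [[min(i + 1, j + 1) for j in range(n)] for i in range(n)]

def second_difference(n):
    # tridiagonal D: discrete analog of minus the second derivative,
    # with the boundary value 1 in the bottom-right corner
    D = [[0] * n for _ in range(n)]
    for i in range(n):
        D[i][i] = 2
        if i > 0:
            D[i][i - 1] = -1
        if i < n - 1:
            D[i][i + 1] = -1
    D[n - 1][n - 1] = 1
    return D

def matmul(A, B):
    # plain square-matrix product
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

n = 6
identity = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
assert matmul(min_matrix(n), second_difference(n)) == identity
```

The product comes out as the identity matrix, exactly as the analogy with (7) predicts.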
Benjamin Dickman
37,122
<p>I like the following problem:</p> <blockquote> <p>Decompose a positive integer <span class="math-container">$N$</span> into positive integer addends summing to <span class="math-container">$N$</span> with maximal product.</p> </blockquote> <p>In the &quot;discrete&quot; version of this problem, you find that the answer is almost to use all <span class="math-container">$3$</span>s; but, not every positive integer is a sum of <span class="math-container">$3$</span>s, so you might need some <span class="math-container">$2$</span>s to finish it off. Unsurprisingly, then, the &quot;continuous&quot; analog of a problem solved discretely by mostly-<span class="math-container">$3$</span>s-with-a-few-<span class="math-container">$2$</span>s involves <span class="math-container">$e$</span>.</p> <p>The benefit of <span class="math-container">$3$</span> over <span class="math-container">$2$</span> is easy to see by decomposing <span class="math-container">$6$</span>, for which <span class="math-container">$3 \cdot 3 = 9 &gt; 8 = 2 \cdot 2 \cdot 2$</span>.</p> <p>For a couple of references to this particular problem, see:</p> <blockquote> <p>Krause, E. F. (1996). Maximizing the product of summands; minimizing the sum of factors. Mathematics Magazine, 69(4), 270-278.</p> <p>Wasserman, N. H. (2014). A rationale for irrationals: An unintended exploration of e. Mathematics Teacher, 107(7), 500-507.</p> </blockquote> <p>For a student-friendly version, see the link on <a href="https://playwithyourmath.com/2017/07/27/1-split-25/" rel="nofollow noreferrer"><strong>PlayWithYourMath</strong></a>:</p> <p><a href="https://i.stack.imgur.com/bSRV0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bSRV0.png" alt="enter image description here" /></a></p>
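The mostly-$3$s rule is short enough to sketch in Python, together with a brute-force check (an illustration of the rule described above, not taken from the cited papers; the convention here allows the trivial one-part decomposition, so small $N$ return themselves):

```python
def max_product(n):
    # greedy: use 3s; a remainder of 1 means trading a 3 + 1 for 2 + 2
    q, r = divmod(n, 3)
    if r == 0:
        return 3 ** q
    if r == 1:
        return 4 * 3 ** (q - 1) if q >= 1 else 1
    return 2 * 3 ** q

def max_product_dp(n):
    # g[k] = best product over all decompositions of k (one part allowed)
    g = [0, 1]
    for k in range(2, n + 1):
        g.append(max([k] + [a * g[k - a] for a in range(1, k)]))
    return g[n]

assert max_product(6) == 9           # 3 + 3 beats 2 + 2 + 2
assert max_product(10) == 36         # 3 + 3 + 2 + 2
assert all(max_product(n) == max_product_dp(n) for n in range(1, 40))
```

The decomposition of $6$ shows the benefit of $3$ over $2$ exactly as above, and the greedy rule agrees with the dynamic program on every $N$ checked.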
368,114
<blockquote> <p>Prove that this set is closed in <span class="math-container">$(\mathbb{R}^2, d_{\mathbb{R}^2})$</span>:</p> <p><span class="math-container">$$ \left\{ (x, y) \in \mathbb{R}^2 : \sin(x^2 + 4xy) = x + \cos y \right\} $$</span></p> </blockquote> <p>I've missed a few days in class, and have apparently missed some very important definitions if they existed. I know that a closed set is a set which contains its limit points (or, equivalently, contains its boundary), but I have <em>no idea</em> how to calculate the limit points of an arbitrary set like this. The only intuition I have to that end is to fix either <span class="math-container">$x$</span> or <span class="math-container">$y$</span> and do calculations from there, since doing it in parallel often ends up in disaster. Besides this, I don't know how to approach this problem.</p> <p>If there is any other extant definition of closed-ness in a metric space, I welcome them wholeheartedly.</p>
Community
-1
<ol> <li><p>Prove that the function $f(x,y) = \sin(x^2 + 4xy) - x - \cos y $ is continuous on all of $\Bbb{R}^2$.</p></li> <li><p>The set $\{0\} \subset \Bbb{R}$ is closed. (Why?)</p></li> <li><p>Inverse images of open sets under continuous functions are open, and so inverse images of closed sets are closed, because taking the complement of a set commutes with taking the inverse image.</p></li> <li><p>Conclude: your set is exactly $f^{-1}(\{0\})$.</p></li> </ol>
110,162
<p>One can use <code>$Epilog</code> to do something when the kernel is quit, or put an <code>end.m</code> file next to the <code>init.m</code>.</p> <blockquote> <p>For Wolfram System sessions, <code>$Epilog</code> is conventionally defined to read in a file named end.m.</p> </blockquote> <p>But if <code>$Epilog</code> is set by the user, then <code>end.m</code> is skipped.</p> <p><strong>Question:</strong> So what can I do if I want something to be done each time the kernel is quit, while also being able to play with <code>$Epilog</code>? In the sense that if I set <code>$Epilog</code>, I would like my default action to be taken anyway.</p> <p>I need to stress that I want to establish an action that will not be accidentally overwritten with daily (but advanced) Mathematica usage.</p>
Mr.Wizard
121
<p>If you are willing to rely on undocumented behavior you can move <code>$Epilog</code> out of the System context, and give it a definition that evaluates both an internal (default) expression as well as the public expression assigned to <code>System`$Epilog</code>.</p> <p>Your specialized setup:</p> <pre><code>Context[System`$Epilog] = "hidden`"; hidden`$Epilog := (Print["internal"]; System`$Epilog) </code></pre> <p>An arbitrary public definition:</p> <pre><code>$Epilog := Print["external"] </code></pre> <p>Now when the kernel is terminated:</p> <pre><code>Exit[] </code></pre> <blockquote> <pre><code>internal external </code></pre> </blockquote>
2,781,017
<p>I know that $\sum a_i b_i \leq \sum a_i \sum b_i$ for $a_i$, $b_i &gt; 0$. It seems this inequality also holds when $a_i$, $b_i \in (0,1)$. However, I am unable to find out whether</p> <p>$\sum \frac{a_i}{b_i} \leq \frac{\sum a_i}{\sum b_i}$ </p> <p>holds for $a_i$, $b_i \in (0,1)$.</p>
zhw.
228,045
<p>Let $(a_n)$ be the sequence</p> <p>$$1/2,1/4, 1/8,1/16,\dots,$$</p> <p>and let $(b_n)$ be the same as $(a_n)$ except for $n=1,2,$ where we let $b_1=1/4, b_2= 1/2.$ Then $\sum a_n,\sum b_n$ both equal $1,$ hence so does their quotient, but</p> <p>$$\sum \frac{a_n}{b_n} = \frac{1/2}{1/4} + \cdots &gt; 2.$$</p>
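A finite truncation makes the failure concrete. A small sketch using exact rational arithmetic (the cutoff `N = 40` is arbitrary):

```python
from fractions import Fraction

N = 40
a = [Fraction(1, 2 ** n) for n in range(1, N + 1)]
b = list(a)
b[0], b[1] = b[1], b[0]           # swap the first two terms, as above

termwise = sum(x / y for x, y in zip(a, b))   # sum of a_n / b_n
quotient = sum(a) / sum(b)                    # (sum a_n) / (sum b_n)

assert quotient == 1              # b is a rearrangement of a, so the sums agree
assert termwise > 2               # already the first term contributes 2
```

The term-wise sum keeps growing with the cutoff (each later term contributes $1$), while the quotient of the sums stays at $1$.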
158,896
<p>Being interested in the very foundations of mathematics, I'm trying to build a rigorous proof on my own that $a + b = b + a$ for all $\left[a, b\in\mathbb{R}\right] $. Inspired by interesting properties of the complex plane and some researches, I realized that defining multiplication as repeated addition will lead me nowhere (at least, I could not work with it). So, my ideas:</p> <ul> <li><p>Defining addition $a+b$ as a kind of "walking" to the right $\left(b&gt;0\right)$ or to the left $\left(b&lt;0\right)$ a space $b$ from $a$. Adding a number $b$ to a number $a$ (denoted by $a+b$) involves doing the following operation:</p> <ol> <li>Consider the real line $\lambda$ and its origin at $0$. Mark a point $a$, draw another real line $\omega$ above $\lambda$ such that $\omega \parallel \lambda$ and mark a point $b$ on $\omega$. Now, draw a line $\sigma$ such that $\sigma \perp \omega$ and the only point in common between $\sigma$ and $\omega$ is $b$. Consider the point that $\lambda$ and $\sigma$ have in common; this point is nicely denoted as $a + b$.</li> </ol></li> </ul> <p>(Note that all my work is based here. Any problems, and my proof goes to trash)</p> <ul> <li><p>This definition can be used to see the properties of adding two numbers $a$ and $b$, for all $a, b \in\mathbb{R}$.</p></li> <li><p>Using geometric properties may lead us to a rigorous proof (if not, I would like to know the problems of using it).</p></li> </ul> <p>So, I started:</p> <ul> <li>$a, b \in\mathbb{N}$:</li> </ul> <p>$a+b = \overbrace{\left(1+1+1+\cdots+1\right)}^a + \overbrace{\left(1+1+1+\cdots+1\right)}^b = \overbrace{1+1+1+1+\cdots+1}^{a+b} = \overbrace{\left(1+1+1+\cdots+1\right)}^b + \overbrace{\left(1+1+1+\cdots+1\right)}^a = b + a$</p> <p>(Implicitly, I'm using the fact that $\left(1+1\right)+1 = 1+\left(1+1\right)$, which I do not know how to prove and interpret it as cutting a segment $c$ in two parts -- $a$ and $b$.
However, this result can be extended to $\mathbb{Z}$ in the sense that $-a$ $(a &gt; 0)$ is a change of direction; from right to left).</p> <ul> <li>$a, b \in\mathbb{R}$:</li> </ul> <p>Here, we have basically two cases:</p> <ul> <li>$a$ and $b$ are either both positive or both negative;</li> <li>$a$ and $b$, where one of them is negative.</li> </ul> <p>Since in my definition $-b, b&gt;0$ means drawing a point $b$ to the left of the real line, there's no big deal interpreting it; <em>subtracting</em> can be interpreted now. So, it starts:</p> <p>$a + b = c$. However, $c$ can be cut in two parts: $b$ and $a$. Naturally, if $a&gt;c$, then $b&lt;0$ -- many cases can be listed. So, $c = b + a$. But $c = a + b$; it follows that $a + b = b + a$. My questions:</p> <p><em>Is there any problem in using my definition of adding two numbers $a$ and $b$, which uses many geometric properties? Is there any way to save it from informality? Is there anything right here?</em></p> <p>Thanks in advance.</p>
Potato
18,240
<p>Unfortunately, your method is not rigorous. There are too many undefined notions running around. For example, you never define what $\mathbb{R}$ is in your construction, or what properties it has. To answer your question, in the usual construction of the real numbers, the fact that $a+b=b+a$ is taken as axiomatic (along with several other axioms, called field axioms).</p> <p>You mention that you are a precalculus student. At this stage in your mathematical development, you should just take it on faith that you can add numbers. It's intuitively obvious, and there are much more interesting things you can be doing with your time than worrying about foundational issues. Learn some elementary number theory! Learn some combinatorics! Find a book of challenging elementary problems in mathematics and work through it! There is a time for rigor and proof, but it comes after developing fundamental intuitions about the subject. The whole point of the formalizations of the real numbers is to make our intuitions as precise and error-free as possible, so putting the formalization before the intuition is the wrong way to go about things here, in my opinion. </p> <p>If you are still interested in foundational issues, there are many books available that cover the foundations of mathematics. You would probably want an introductory book on set theory. For an exposition of the construction of the reals and their properties, I recommend <em>Foundations of Analysis</em> by Edmund Landau. But without any experience with proofs, such books are quite difficult to understand.</p>
158,896
<p>Being interested in the very foundations of mathematics, I'm trying to build a rigorous proof on my own that $a + b = b + a$ for all $\left[a, b\in\mathbb{R}\right] $. Inspired by interesting properties of the complex plane and some researches, I realized that defining multiplication as repeated addition will lead me nowhere (at least, I could not work with it). So, my ideas:</p> <ul> <li><p>Defining addition $a+b$ as a kind of "walking" to the right $\left(b&gt;0\right)$ or to the left $\left(b&lt;0\right)$ a space $b$ from $a$. Adding a number $b$ to a number $a$ (denoted by $a+b$) involves doing the following operation:</p> <ol> <li>Consider the real line $\lambda$ and its origin at $0$. Mark a point $a$, draw another real line $\omega$ above $\lambda$ such that $\omega \parallel \lambda$ and mark a point $b$ on $\omega$. Now, draw a line $\sigma$ such that $\sigma \perp \omega$ and the only point in common between $\sigma$ and $\omega$ is $b$. Consider the point that $\lambda$ and $\sigma$ have in common; this point is nicely denoted as $a + b$.</li> </ol></li> </ul> <p>(Note that all my work is based here. Any problems, and my proof goes to trash)</p> <ul> <li><p>This definition can be used to see the properties of adding two numbers $a$ and $b$, for all $a, b \in\mathbb{R}$.</p></li> <li><p>Using geometric properties may lead us to a rigorous proof (if not, I would like to know the problems of using it).</p></li> </ul> <p>So, I started:</p> <ul> <li>$a, b \in\mathbb{N}$:</li> </ul> <p>$a+b = \overbrace{\left(1+1+1+\cdots+1\right)}^a + \overbrace{\left(1+1+1+\cdots+1\right)}^b = \overbrace{1+1+1+1+\cdots+1}^{a+b} = \overbrace{\left(1+1+1+\cdots+1\right)}^b + \overbrace{\left(1+1+1+\cdots+1\right)}^a = b + a$</p> <p>(Implicitly, I'm using the fact that $\left(1+1\right)+1 = 1+\left(1+1\right)$, which I do not know how to prove and interpret it as cutting a segment $c$ in two parts -- $a$ and $b$.
However, this result can be extended to $\mathbb{Z}$ in the sense that $-a$ $(a &gt; 0)$ is a change of direction; from right to left).</p> <ul> <li>$a, b \in\mathbb{R}$:</li> </ul> <p>Here, we have basically two cases:</p> <ul> <li>$a$ and $b$ are either both positive or both negative;</li> <li>$a$ and $b$, where one of them is negative.</li> </ul> <p>Since in my definition $-b, b&gt;0$ means drawing a point $b$ to the left of the real line, there's no big deal interpreting it; <em>subtracting</em> can be interpreted now. So, it starts:</p> <p>$a + b = c$. However, $c$ can be cut in two parts: $b$ and $a$. Naturally, if $a&gt;c$, then $b&lt;0$ -- many cases can be listed. So, $c = b + a$. But $c = a + b$; it follows that $a + b = b + a$. My questions:</p> <p><em>Is there any problem in using my definition of adding two numbers $a$ and $b$, which uses many geometric properties? Is there any way to save it from informality? Is there anything right here?</em></p> <p>Thanks in advance.</p>
Pedro
23,350
<p>You should first think about how you define $\alpha$, a real number. One of the classical constructions is defining it as a set. This is a construction based on Dedekind cuts, defined as follows:</p> <blockquote> <p><strong>DEFINITION</strong> (Spivak)</p> <p>A <strong>real number</strong> is a set of rational numbers $\mathrm {\mathbf A}$ such that</p> <p>$(1)$ If $x \in \mathrm {\mathbf A}$ and $y &lt; x$ then $y \in \mathrm {\mathbf A}$.</p> <p>$(2)$ If $x \in \mathrm {\mathbf A}$, there exists another $y$ such that $y \in \mathrm {\mathbf A}$ and $y &gt;x$ - viz, $\mathrm {\mathbf A}$ has no maximal element.</p> <p>$(3)$ $\mathrm {\mathbf A}$ is not empty - viz $\mathrm {\mathbf A} \neq \emptyset$</p> <p>$(4)$ $\mathrm {\mathbf A} \neq \mathbf Q$.</p> <p>The set of all real numbers is denoted $\mathbf R$</p> </blockquote> <p>These sets are also called Dedekind cuts, honoring Dedekind, who first considered them. The classical example of a Dedekind cut is $\sqrt{ \mathbf {2}} = \{ x : x^2 &lt; 2 \text{ or } x &lt;0\}$. This set <em>defines</em> the real number $\sqrt 2 $.
</p> <p>Then we define the sum $ \large {\bf +}$ of two real numbers (different from $+$, the sum of rationals) as follows</p> <blockquote> <p><strong>DEFINITION</strong> (Spivak)</p> <p>If $\mathrm {\mathbf A} $ and $\mathrm {\mathbf B} $ are real numbers, then</p> <p>$$\mathrm {\mathbf A} {\large {\bf +}}\mathrm {\mathbf B}= \{x:x=y+z \text{ ; for some } y \in \mathrm {\mathbf A} \text{ and some } z \in \mathrm {\mathbf B} \} $$</p> </blockquote> <p>Note that with this definition, the proofs that</p> <p>$${\bf{A}} {\large {\bf +}} {\bf{B}} = {\bf{B}} {\large {\bf +}} {\bf{A}}$$</p> <p>and</p> <p>$$({\bf{A}} {\large {\bf +}} {\bf{B}}){\large {\bf +}}{\bf{C}} = {\bf{B}} {\large {\bf +}} ({\bf{A}}{\large {\bf +}}{\bf{C}})$$</p> <p>directly follow from the fact that for any $x,y,z$ rational numbers</p> <p>$$\eqalign{ &amp; x + y = y + x \cr &amp; x + \left( {y + z} \right) = \left( {x + y} \right) + z \cr} $$</p> <p>We can then define $\mathbf &lt; $ for two real numbers, as:</p> <blockquote> <p><strong>DEFINITION</strong> (Spivak)</p> <p>If ${\bf{A}}$ and ${\bf{B}}$ are real numbers, then ${\bf{A}}\mathbf{&lt;} {\bf{B}}$ means that ${\bf{A}}$ is contained in ${\bf{B}}$, but ${\bf{A}} \neq {\bf{B}}$.</p> </blockquote> <p>Note that $\mathbf &gt; $, $\mathbf \leq $ and $\mathbf \geq $ can all be defined analogously. </p> <p>Then one can prove the following:</p> <blockquote> <p><strong>THEOREM</strong> (Spivak)</p> <p>If $A$ is a set of real numbers, $A \neq \emptyset$ and $A$ is bounded above, then $A$ has a least upper bound, or supremum.
</p> </blockquote> <p>Note then that if we consider the set $\mathbf R$ along with $+$, $\leq$, $\cdot$ then</p> <p>$(\mathbf 1)$ $(\mathbf R,+,\cdot)$ is a <a href="http://en.wikipedia.org/wiki/Field_%28mathematics%29" rel="nofollow">field</a>, meaning the usual properties of addition and multiplication hold (along with the relations between them), along with the existence of an identity for multiplication ($1$), an identity for the sum ($0$), and the existence of inverses for both operations (excluding $0$ in multiplication).</p> <p>$(\mathbf 2)$ The field is ordered, in the sense that the relation $\leq$ is a <a href="http://en.wikipedia.org/wiki/Total_order" rel="nofollow">total order</a>.</p> <p>$(\mathbf 3)$ It is complete, in the sense that every set of real numbers which is not the empty set and has an upper bound, has a least upper bound. </p>
41,718
<p>Currently I am working on creating a package in Mathematica version 9 on Windows 7. Here is my code: </p> <pre><code>BeginPackage["mypackage`"] Begin["`Private`"] getColumn[data_,branch_List]:= Module[{pos}, pos = Position[data,#][[1,2]]&amp;/@branch; data[[All,pos]] ]; RemoveMissing[data_]:=DeleteCases[data,{date_,val_}/;!NumberQ[val]]; End[] EndPackage[] </code></pre> <p>As you can see from the above, it contains two defined functions. I saved it as mypackage.m in my working directory.</p> <p>To make sure my working directory is the one that contains mypackage.m, I put <code>SetDirectory[NotebookDirectory[]]</code> in my notebook. Then, I call the package by using </p> <pre><code>Needs["mypackage`"] </code></pre> <p>It evaluates, but it seems not to load, because when I call the function <code>getColumn</code>, the letters should turn black if it is loaded (they are still blue). Also, I tried other ways: </p> <pre><code>Needs["mypackage`","mypackage.m"] </code></pre> <p>and</p> <pre><code>&lt;&lt;mypackage` </code></pre> <p>Both are still not working.</p> <p>I have two questions:</p> <ol> <li><p>When we save the .m file, is it necessary for it to have the same name as we defined in BeginPackage[""]?</p></li> <li><p>Is there anything missing that makes my package not work? Any suggestions?</p></li> </ol>
Nasser
70
<p>You had a few issues. Try this; it works now on my system. You needed the <code>usage</code> messages, and it is better to also add the <code>Unprotect</code> and <code>ClearAll</code> calls.</p> <pre><code>BeginPackage["mypackage`"] Unprotect @@ Names["mypackage`*"]; ClearAll @@ Names["mypackage`*"]; getColumn::usage = "getColumn[data,branch]" RemoveMissing::usage = "RemoveMissing[data]" Begin["`Private`"] getColumn[data_,branch_List]:= Module[{pos}, pos = Position[data,#][[1,2]]&amp;/@branch; data[[All,pos]] ]; RemoveMissing[data_]:=DeleteCases[data,{date_,val_}/;!NumberQ[val]]; End[] Protect @@ Names["mypackage`*"]; EndPackage[] </code></pre> <p>Now save this to mypackage.m, then </p> <pre><code>SetDirectory[NotebookDirectory[]] Needs["mypackage`"] ?mypackage`* </code></pre> <p><img src="https://i.stack.imgur.com/IWf8l.png" alt="Mathematica graphics"></p> <pre><code>getColumn[{{1, 2}, {3, 4}}, {1, 2, 3}] </code></pre> <p><img src="https://i.stack.imgur.com/qoq6y.png" alt="Mathematica graphics"></p>
1,821,437
<p>I'm solving past exam questions in preparation for an Applied Mathematics course. I came to the following exercise, which poses some difficulty. <em>If it's any indication of difficulty, the exercise is only Part 3-A of the sheet, graded for 10%.</em></p> <blockquote> <p>Solve the equation $z^5=-32$ and draw its solutions in the complex plane, then describe their characteristic geometrical property.</p> </blockquote> <p>Is it asking me to convert $z$ to polar form, then use de Moivre's theorem as I've seen in solutions around the net? If that is the case, how can I work out the $\theta$ angle to be used?</p> <p>Additionally, what does it refer to as the characteristic geometrical property?</p> <p>Any answers would be extremely appreciated as they'd help to get me out of the ditch. I'm completely stalled.</p>
Emilio Novati
187,568
<p>Let $z=\rho e^{i\theta}$. Representing $-32$ in the Argand plane you see that, in polar form, it is $-32=32e^{i\pi}$, so your equation becomes: $$ \left(\rho e^{i\theta}\right)^5=32e^{i\pi} $$ which gives: $$ \rho e^{i\theta}=\left( 32e^{i\pi}\right)^{1/5}=2\left(e^{i\pi}\right)^{1/5} $$ Now, since $e^{i\pi}=e^{i(\pi+2k\pi)}$ we have five distinct values for $\left(e^{i\pi}\right)^{1/5}$:</p> <p>$$ \left(e^{i(\pi+2k\pi)}\right)^{1/5}=e^{i\pi/5+2ik\pi/5} $$ for $k=0,1,2,3,4$ and the five complex roots of $-32$ are: $$ z_1=2e^{i\pi/5} \quad z_2=2e^{3i\pi/5} \quad z_3=2e^{5i\pi/5}=2e^{i\pi}=-2\quad z_4=2e^{7i\pi/5} \quad z_5=2e^{9i\pi/5} $$ </p> <p>In the Argand plane these points are the vertices of a regular pentagon inscribed in a circle of radius $2$, with one vertex at $-2$.</p>
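A quick numeric check of this answer (a Python sketch I added, not part of the original): the five values $2e^{i(\pi+2k\pi)/5}$ should all satisfy $z^5=-32$ and lie on the circle of radius $2$, with $-2$ among them.

```python
import cmath
import math

# The five claimed roots z_k = 2 e^{i(pi + 2k*pi)/5}, k = 0..4
roots = [2 * cmath.exp(1j * (math.pi + 2 * k * math.pi) / 5) for k in range(5)]

max_eq_err = max(abs(z**5 + 32) for z in roots)    # each should solve z^5 = -32
max_rad_err = max(abs(abs(z) - 2) for z in roots)  # all on the circle |z| = 2
has_minus_two = any(abs(z + 2) < 1e-12 for z in roots)

assert max_eq_err < 1e-9 and max_rad_err < 1e-12 and has_minus_two
```

The equal radii and equally spaced arguments are exactly the "characteristic geometrical property": a regular pentagon on the circle $|z|=2$.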
1,987,230
<p>On Socratica, I saw a video demonstrating how to construct groups by writing a Cayley table of the desired order satisfying three conditions: (1) the neutral element's row and column are copies of the row and column headers; (2) every row and column contains the neutral element exactly once; (3) all the elements of the set are present in each row and column. </p> <p>My question is: does this always lead to a group? I think the conditions are necessary but insufficient, because they do not test associativity. </p>
Siong Thye Goh
306,553
<p>if $a \equiv 0 \mod 5$, then $a^2 \equiv 0 \mod 5$.</p> <p>if $a \equiv 1 \mod 5$, then $a^2 \equiv 1 \mod 5$.</p> <p>if $a \equiv 2 \mod 5$, then $a^2 \equiv 4 \mod 5$.</p> <p>if $a \equiv -2 \mod 5$, then $a^2 \equiv 4 \mod 5$.</p> <p>if $a \equiv -1 \mod 5$, then $a^2 \equiv 1 \mod 5$.</p>
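The case analysis above can be summarized by a one-line check (a Python sketch I added): the squares of all residue classes mod 5 are exactly $\{0, 1, 4\}$.

```python
# Squares of every residue class mod 5
residues = {a * a % 5 for a in range(5)}
assert residues == {0, 1, 4}

# The signed representatives used in the case analysis give the same values
assert (-1) ** 2 % 5 == 1 and (-2) ** 2 % 5 == 4
```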
3,584,113
<p>When we prove things like continuity in real analysis, why do we always aim for the result <span class="math-container">$&lt;\epsilon$</span> when any positive multiple of <span class="math-container">$\epsilon$</span> proves the same result?</p>
just_floating
399,291
<p>We could use an alternative definition. Consider the following definition of convergence, in which it suffices to prove the bound up to a positive multiple <span class="math-container">$K$</span> of epsilon:</p> <p>We say <span class="math-container">$x_n \rightarrow l$</span> when <span class="math-container">$(\exists K &gt; 0)(\forall \epsilon &gt; 0)(\exists N)(\forall n &gt; N) |x_n - l| &lt; K\epsilon$</span>.</p> <p>Then consider the proposition that if <span class="math-container">$x_n \rightarrow l, y_n \rightarrow l_2$</span> then <span class="math-container">$x_n + y_n \rightarrow l + l_2$</span>.</p> <p>Then, for some <span class="math-container">$K$</span> and <span class="math-container">$K_2$</span> and any <span class="math-container">$\epsilon &gt; 0$</span>, we have <span class="math-container">$$(\exists N)(\forall n &gt; N) |x_n - l| &lt; K\epsilon$$</span> <span class="math-container">$$(\exists N_2)(\forall n &gt; N_2) |y_n - l_2| &lt; K_2\epsilon $$</span></p> <p>Then consider <span class="math-container">$M = N + N_2$</span>:</p> <p><span class="math-container">$$(\exists M)(\forall n &gt; M) |x_n + y_n - (l + l_2)| &lt; (K+K_2)\epsilon $$</span></p> <p>This means that the positive multiple is <span class="math-container">$K + K_2$</span>, so the proposition holds.</p> <p>The issue is that this is really ugly compared to just choosing half epsilon.</p>
1,371,649
<p>The question is:</p> <blockquote> <p>What does the following iteration formula do?: <span class="math-container">$$x_{k+1}=2x_k-cx_{k}^2.$$</span></p> </blockquote> <p>I tried to identify this with Newton's method. I.e. I tried to bring that into the form <span class="math-container">$x_{k+1}=x_k-\frac{f(x_k)}{f'(x_k)}$</span>, which leads to: <span class="math-container">$$(cx_k^2-x_k)f'(x_k)=f(x_k).$$</span> But then <span class="math-container">$f(x)$</span> is something like <span class="math-container">$e^a$</span>, but such functions don't have any roots... Is this still correct, meaning this iteration formula does not converge, or are there other functions satisfying this equality?</p>
user137794
137,794
<p>You were on the right track, but stopped a bit early. Instead of</p> <p>$$f(x) = f'(x) \cdot (cx^2-x)$$</p> <p>Write that as</p> <p>$$f'(x) = f(x) \cdot \frac{1}{cx^2-x}$$</p> <p>So a function that works would be</p> <p>$$ \begin{align} f(x) &amp;= e^{\int 1/(cx^2-x)\ dx} \\ &amp;= e^{\ln((1-cx)/x)} \\ &amp;= \frac{1-cx}{x} \end{align}$$</p>
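A numeric check of the conclusion (a Python sketch I added): with $f(x)=\frac{1-cx}{x}=\frac1x-c$, whose root is $1/c$, the iteration is Newton's method for computing the reciprocal of $c$ without division. Starting inside the convergence interval $(0, 2/c)$:

```python
c = 7.0
x = 0.1                      # starting value; must lie in (0, 2/c)
for _ in range(60):
    x = 2 * x - c * x * x    # the given iteration x_{k+1} = 2 x_k - c x_k^2
assert abs(x - 1 / c) < 1e-12   # converges to the root 1/c of f(x) = 1/x - c
```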
2,369,717
<p>From Jaynes' <em>Probability Theory: The Logic of Science</em>, I found this:</p> <blockquote> <p><a href="https://i.stack.imgur.com/bnogp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bnogp.png" alt="enter image description here"></a></p> </blockquote> <p>$p$ here is the joint probability distribution of $x$ and $y$. I'm assuming the $\times$ denotes the Cartesian product, but I don't really understand what this equation means, nor why it captures the assumption that $x$ and $y$ are independent. </p> <p><strong>Why does this equation capture the assumption that $x$ and $y$ are independent?</strong> </p>
Adayah
149,178
<p>I think it means that the probability density function $\rho(x, y)$ is given by $$\rho(x, y) = f(x) \cdot g(y).$$</p> <p>This implies independence, because if $A \subseteq X$ and $B \subseteq Y$, then $$\begin{align*} \Pr(x \in A \wedge y \in B) &amp; = \int \limits_{A \times B} \rho(x, y) \, \mbox{d} x \mbox{d} y = \int \limits_{A \times B} f(x) \cdot g(y) \, \mbox{d} x \mbox{d} y \\ &amp; = \int \limits_{A} f(x) \mbox{d} x \cdot \int \limits_{B} g(y) \mbox{d} y \\ &amp; = \Pr(x \in A) \cdot \Pr(y \in B) \end{align*}$$</p>
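A numeric illustration of the factorization argument (a Python sketch I added, with densities made up for the test: $f(x)=2x$ and $g(y)=3y^2$ on $[0,1]$): a midpoint-rule approximation of the probability of the product event agrees with the product of the marginal probabilities.

```python
N = 400
h = 1.0 / N
f = lambda x: 2 * x        # marginal density of x on [0, 1]
g = lambda y: 3 * y * y    # marginal density of y on [0, 1]
A = (0.0, 0.5)
B = (0.3, 0.9)

xs = [(i + 0.5) * h for i in range(N)]
pA = sum(f(x) * h for x in xs if A[0] <= x <= A[1])
pB = sum(g(y) * h for y in xs if B[0] <= y <= B[1])
pAB = sum(f(x) * g(y) * h * h
          for x in xs for y in xs
          if A[0] <= x <= A[1] and B[0] <= y <= B[1])

# Because rho(x, y) = f(x) g(y), the double sum factors exactly
assert abs(pAB - pA * pB) < 1e-9
```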
321,916
<p>In order to define Lebesgue integral, we have to develop some measure theory. This takes some effort in the classroom, after which we need additional effort of defining Lebesgue integral (which also adds a layer of complexity). Why do we do it this way? </p> <p>The first question is to what extent are the notions different. I believe that a bounded measurable function can have a non-measurable "area under graph" (it should be doable by transfinite induction), but I am not completely sure, so treat it as a part of my question. (EDIT: I was very wrong. The two notions coincide and the argument is very straightforward, see Nik Weaver's answer and one of the comments).</p> <p>What are the advantages of the Lebesgue integration over area-under-graph integration? I believe that behaviour under limits may be indeed worse. Is it indeed the main reason? Or maybe we could develop integration with this alternative approach?</p> <p>Note that if a non-negative function has a measurable area under graph, then the area under the graph is the same as the Lebesgue integral by <a href="https://en.wikipedia.org/wiki/Fubini%27s_theorem" rel="noreferrer">Fubini's theorem</a>, so the two integrals shouldn't behave very differently.</p> <p>EDIT: I see that my question might be poorly worded. By "area under the graph", I mean the measure of the set of points <span class="math-container">$(x,y) \in E \times \mathbb{R}$</span> where <span class="math-container">$E$</span> is a space with measure and <span class="math-container">$y \leq f(x)$</span>. I assume that <span class="math-container">$f$</span> is non-negative, but this is also assumed in the standard definition of the Lebesuge integral. We extend this to arbitrary function by looking at the positive and the negative part separately.</p> <p>The motivation for my question concerns mostly teaching. It seems that the struggle to define measurable functions, understand their behaviour, etc. 
might be really alleviated if directly after defining measure, we define integral without introducing any additional notions.</p>
Nik Weaver
23,141
<p>If <span class="math-container">$f: \mathbb{R} \to [0,\infty)$</span> is Borel (or Lebesgue) measurable, then for each rational <span class="math-container">$a &gt; 0$</span> define <span class="math-container">$X_a = f^{-1}([a,\infty)) \times [0,a)$</span>. Then each <span class="math-container">$X_a$</span> is measurable and their union is exactly the region under the graph. So the region under the graph is measurable.</p> <p>I think the reason why we develop the Lebesgue integral in the usual way is because it provides a powerful technique (characteristic functions --> simple functions --> arbitrary measurable functions) for deriving the basic theory of the integral. Even simple things like <span class="math-container">$\int f + \int g = \int (f + g)$</span> aren't obvious if you take "measure of the region under the graph" as the definition.</p>
829,390
<p>In a tennis tournament, there are $10$ players. In the first round, $5$ groups(of 2 players) will be formed among them and elimination matches will be held among the two players in each group. In how many ways can pairings be done?</p> <p>Answer is given as : $\frac{10!}{2^5\times5!}$</p> <p>My solution :</p> <p>From $10$ players, we can select $2$ players in $\binom{10}{2}$ ways and form a group. From remaining $8$ players we can select $2$ players in $\binom{8}{2}$ ways and so on.</p> <p>So, total number of pairings=$\binom{10}{2}\times\binom{8}{2}\times\binom{6}{2}\times\binom{4}{2}\times\binom{2}{2}=\frac{10!}{2^5}$</p> <p>I want to know why the 5! in the answer should come. Any alternative solution will also be helpful.</p>
Asimov
137,446
<p>The $5!$ comes from the order. You have picked $5$ groups, but you picked them in a certain order, two players at a time; the $5!$ accounts for all possible orders of picking the groups.</p>
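A brute-force cross-check (a Python sketch I added, not from the original answer): recursively pairing off the lowest-numbered remaining player counts $9\cdot7\cdot5\cdot3\cdot1=945$ pairings, matching $\frac{10!}{2^5\cdot 5!}$.

```python
from math import factorial

def count_pairings(players):
    """Count the ways to split `players` into unordered pairs."""
    if not players:
        return 1
    rest = players[1:]  # pair players[0] with each candidate rest[i]
    return sum(count_pairings(rest[:i] + rest[i + 1:]) for i in range(len(rest)))

total = count_pairings(list(range(10)))
assert total == 945 == factorial(10) // (2**5 * factorial(5))
```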
3,248,863
<p>I want to calculate the operator norm of the operator <span class="math-container">$A: L^2[0,1] \to L^2[0,1]$</span> which is defined by <span class="math-container">$$(Af)(x):=i\int\limits_0^x f(t)\,dt-\frac{i}{2} \int\limits_0^1 f(t)\, dt$$</span></p> <p>I've already shown that this operator is compact and selfadjoint. I think maybe this helps me calculating the operator norm. Maybe through spectral theorem for compact self adjoint operators.</p> <p>I also know that for integral operators of the form <span class="math-container">$$(Kf)(x)=\int\limits_0^1 k(x,t) f(t)\,dt$$</span> the inequality <span class="math-container">$\Vert K \Vert \leq \Vert k \Vert{}_{L^2}$</span> holds.</p> <p>For <span class="math-container">$$(Af)(x)=i\int\limits_0^x f(t)\,dt-\frac{i}{2} \int\limits_0^1 f(t) \,dt = \int\limits_0^1 i\,\left(1_{[0,x]}(t)-\frac{1}{2}\right)f(t)\,dt$$</span> this gives me an upper bound:</p> <p><span class="math-container">$$\Vert A \Vert \leq \left\Vert i~1_{[0,x]}-\frac{i}{2} \right\Vert{}_{L^2}=\frac{1}{2}$$</span></p> <p>Can someone help me?</p>
Oliver Díaz
121,671
<p>Let <span class="math-container">$k(t,x)=i\big(\mathbb{1}(0&lt;t\leq x)-\frac12\big)$</span> and define the operator <span class="math-container">$A_k:f\mapsto\int^1_0k(t,x) f(t)\,dt$</span> in <span class="math-container">$L_2([0,1])$</span>. As pointed out in the statement of the problem, <span class="math-container">$A_k$</span> is compact and self-adjoint. Here is a short proof for completeness.</p> <ul> <li><span class="math-container">$A_k$</span> is compact in <span class="math-container">$L_2(0,1)$</span> because <span class="math-container">$\int_{[0,1]^2}|k(t,x)|^2\,dt\,dx =\frac14&lt;\infty$</span>.</li> <li>To check self-adjointness of <span class="math-container">$A_K$</span>, notice that <span class="math-container">$$A_Kf(x)=i\Big(\int^x_0f(t)\,dt-\frac12\int^1_0f(t)\,dt\Big)=\frac{i}{2}\Big(\int^x_0 f(t)\,dt-\int^1_x f(t)\,dt\Big)$$</span> while <span class="math-container">\begin{aligned} A^*_Kf(x)&amp;=\int^1_0\overline{k(x,t)}f(t)\,dt=-i\Big(\int^1_xf(t)\,dt-\frac12\int^1_0f(t)\,dt\Big)\\ &amp;=\frac{i}{2}\Big(\int^x_0f(t)\,dt-\int^1_xf(t)\,dt\Big)=A_kf(x) \end{aligned}</span></li> </ul> <p>With all this, the spectrum of <span class="math-container">$A_k$</span> consists of countably many eigenvalues converging to <span class="math-container">$0$</span> (and possibly <span class="math-container">$0$</span> itself). The largest eigenvalue (in magnitude) is also the norm of <span class="math-container">$A_k$</span>.</p> <p>For each <span class="math-container">$n\in\mathbb{Z}$</span>, the function <span class="math-container">$\phi_n(t)=e^{i(2 n + 1)\pi t}$</span> is an eigenvector of <span class="math-container">$A_K$</span> corresponding to the eigenvalue <span class="math-container">$\frac{1}{(2n+1)\pi }$</span>. At least this gives <span class="math-container">$\frac{1}{\pi}\leq\|A_K\|\leq \frac12$</span>. <strong>One needs to check that <span class="math-container">$\{\frac{1}{(2 n+1)\pi}:n\in\mathbb{Z}\}$</span> are the only eigenvalues</strong>. 
Once this is verified, it turns out that <span class="math-container">$\|A_k\|=\frac{1}{\pi}$</span>.</p> <hr> <p>A side note: The functions <span class="math-container">$f_\alpha(t)=\sqrt{2\alpha+1}\,t^\alpha$</span> with <span class="math-container">$\alpha&gt;-\frac12$</span>, although they are not eigenfunctions, give an interesting bound: <span class="math-container">$\|A_K f_\alpha\|^2_2=\frac{2\alpha+1}{(\alpha+1)^2}\Big(\frac{1}{2\alpha+3}-\frac{1}{\alpha+2}+\frac14\Big)$</span>. This attains a maximum at <span class="math-container">$\alpha=0.56807...$</span>, which gives a lower bound of <span class="math-container">$0.298225...$</span> for <span class="math-container">$\|A_k\|$</span>, quite close to the actual value <span class="math-container">$\frac{1}{\pi}=0.3183099...$</span>.</p>
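A numeric sanity check of the claimed norm (my own Python sketch, not part of the answer): since $A = iB$ with the real kernel $B(t,x)=\mathbb{1}(t\leq x)-\frac12$, $\|A\|$ equals the largest singular value of $B$, and a midpoint-rule discretization plus power iteration on $B^{\mathsf T}B$ lands near $1/\pi \approx 0.318$.

```python
import math

n = 100
h = 1.0 / n
xs = [(i + 0.5) * h for i in range(n)]

# Midpoint discretization of the real kernel B(t, x) = 1{t <= x} - 1/2
B = [[(1.0 if xs[j] <= xs[i] else 0.0) - 0.5 for j in range(n)] for i in range(n)]

def apply(M, v):    # (M v)_i = sum_j M[i][j] v_j h
    return [h * sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

def apply_t(M, v):  # transpose of the above
    return [h * sum(M[i][j] * v[i] for i in range(n)) for j in range(n)]

# Power iteration on S = B^T B extracts the top singular value of B
v = [math.cos(math.pi * x) for x in xs]   # good overlap with the top singular pair
for _ in range(60):
    u = apply_t(B, apply(B, v))
    nrm = math.sqrt(sum(c * c for c in u) * h)
    v = [c / nrm for c in u]

lam = sum(vi * ui * h for vi, ui in zip(v, apply_t(B, apply(B, v))))
sigma = math.sqrt(lam)
assert abs(sigma - 1 / math.pi) < 0.03    # consistent with ||A|| = 1/pi
```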
230,416
<p>I'm looking for numerical graph invariants that are bounded by a constant either for a graph $G$ or its complement $\bar{G}$. (The complement graph $\bar{G}$ has the same set of vertices as $G$ but the edges are complemented.) More specifically I’m looking for what numerical “Invariant $X$” is out there for which the following statement is true:</p> <p>“Invariant $X$ is bounded by a constant $c$ either for $G$ or $\bar{G}$”.</p> <p>One example is diameter $d(G)$ of a graph $G$ where it is known that:</p> <p>"Either $d(G) \leq 3$ or $d(\bar{G}) \leq 3$" </p> <p>(Reference: F. Harary, R. W. Robinson: The diameter of a graph and its complement , The American Mathematical Monthly, Vol. 92, No. 3. (Mar., 1985), pp. 211-212”)</p> <p>My question is what other such numerical graph invariants are out there? I looked at the list of <a href="https://en.wikipedia.org/wiki/Category:Graph_invariants">invariants</a> on wikipedia but there was no mention of such bound on their respective pages. Again, all I’m interested is that the condition holds for either $G$ or its complement $\bar{G}$ and not necessarily for both.</p>
Tony Huynh
2,233
<p>Let $c(G)$ be the number of connected components of $G$. Then for all graphs $G$, $$ c(G)=1 \text{ or } c(\overline{G})=1. $$ </p> <p>Here is a slightly more interesting family of examples. For a fixed integer $k$, define $d_k(G)$ to be the number of degree-$k$ vertices of $G$. </p> <p><strong>Claim.</strong> For all graphs $G$, $$d_k(G) \leq 2k \text{ or } d_k(\overline{G}) \leq 2k. $$</p> <p><strong>Proof.</strong> Suppose that $G$ contains more than $2k$ degree-$k$ vertices. Let $X$ be a set of degree-$k$ vertices of $G$ with $|X|=2k+1$. Let $N_G(X)$ be the neighbours of $X$ in $G$. Let $Y$ be the set of degree-$k$ vertices in $\overline{G}$. Since the vertices in $V(G) \setminus N_G(X)$ are not adjacent to any vertex in $X$, it follows that $Y \subseteq N_G(X)$. Towards a contradiction, suppose $|Y| \geq 2k+1$. Since $|X|=2k+1$, each vertex in $Y$ must send at least $k+1$ edges (in $G$) to $X$. Thus, there are at least $(2k+1)(k+1)$ edges between $N_G(X)$ and $X$. On the other hand, the number of edges between $X$ and $N_G(X)$ is at most $k|X|=k(2k+1)$, so we have a contradiction. </p>
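The claim can be brute-forced for small cases (a Python sketch I added): with $k=1$, over all $2^{10}=1024$ graphs on $5$ vertices, no graph has more than $2k$ degree-$k$ vertices in both $G$ and its complement.

```python
from itertools import combinations, product

k = 1
n = 5
pairs = list(combinations(range(n), 2))
violations = 0
for bits in product((0, 1), repeat=len(pairs)):
    deg = [0] * n    # degrees in G
    cdeg = [0] * n   # degrees in the complement
    for b, (u, v) in zip(bits, pairs):
        if b:
            deg[u] += 1; deg[v] += 1
        else:
            cdeg[u] += 1; cdeg[v] += 1
    dk = sum(1 for d in deg if d == k)     # d_k(G)
    cdk = sum(1 for d in cdeg if d == k)   # d_k of the complement
    if dk > 2 * k and cdk > 2 * k:
        violations += 1
assert violations == 0
```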
230,416
<p>I'm looking for numerical graph invariants that are bounded by a constant either for a graph $G$ or its complement $\bar{G}$. (The complement graph $\bar{G}$ has the same set of vertices as $G$ but the edges are complemented.) More specifically I’m looking for what numerical “Invariant $X$” is out there for which the following statement is true:</p> <p>“Invariant $X$ is bounded by a constant $c$ either for $G$ or $\bar{G}$”.</p> <p>One example is diameter $d(G)$ of a graph $G$ where it is known that:</p> <p>"Either $d(G) \leq 3$ or $d(\bar{G}) \leq 3$" </p> <p>(Reference: F. Harary, R. W. Robinson: The diameter of a graph and its complement , The American Mathematical Monthly, Vol. 92, No. 3. (Mar., 1985), pp. 211-212”)</p> <p>My question is what other such numerical graph invariants are out there? I looked at the list of <a href="https://en.wikipedia.org/wiki/Category:Graph_invariants">invariants</a> on wikipedia but there was no mention of such bound on their respective pages. Again, all I’m interested is that the condition holds for either $G$ or its complement $\bar{G}$ and not necessarily for both.</p>
Shahrooz
19,885
<p>There are a lot of properties which $G$ and $\overline{G}$ both share: the number of vertices (and edges), the automorphism group, etc. So, naturally this question is interesting for me. I want to mention a spectral property. Let $G$ be a graph with $n$ vertices, and let $\rho(G)$ denote the largest eigenvalue of the adjacency matrix of $G$. Then we have: $$\rho(G)\geq \frac{n-1}{2}\; \; \text{or}\;\; \rho(\overline{G})\geq \frac{n-1}{2}.$$</p> <p>I will give a simplified proof of the above fact. </p> <p>For any $n\times n$ Hermitian matrices $A$ and $B$, we have: $$\lambda_i(A+B)\leq \lambda_j(A)+\lambda_{i-j+1}(B),\;\; n\geq i\geq j\geq 1,$$ where $\lambda_i(A)$ denotes the $i$th greatest eigenvalue of the matrix $A$. Now, for the graph $G$, let $A$ be the adjacency matrix of $G$ and $B$ be the adjacency matrix of $\overline{G}$. Since we have $A+B=J-I$, which is the adjacency matrix of the complete graph, the result is clear. </p>
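A concrete check of the spectral bound (my own Python sketch): the path $P_4$ is self-complementary, and power iteration gives $\rho(P_4)=(1+\sqrt5)/2\approx 1.618$ for both $G$ and $\overline{G}$, so $\rho(G)+\rho(\overline{G})\approx 3.236 \geq n-1 = 3$.

```python
import math

def adjacency(n, edges):
    A = [[0.0] * n for _ in range(n)]
    for u, v in edges:
        A[u][v] = A[v][u] = 1.0
    return A

def spectral_radius(A, iters=500):
    """Power iteration; for a symmetric nonnegative A this converges to rho(A)."""
    n = len(A)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = math.sqrt(sum(c * c for c in w))
        v = [c / lam for c in w]
    return lam

G = adjacency(4, [(0, 1), (1, 2), (2, 3)])    # the path P4
Gc = adjacency(4, [(0, 2), (0, 3), (1, 3)])   # its complement (again a P4)

rho_g, rho_gc = spectral_radius(G), spectral_radius(Gc)
assert abs(rho_g - (1 + math.sqrt(5)) / 2) < 1e-9
assert rho_g + rho_gc >= 3 - 1e-9             # rho(G) + rho(complement) >= n - 1
```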
2,910,301
<p>Most textbooks state that given a field $F$, a nonzero polynomial $f\in F[x]$ of degree $n$ has at most $n$ <em>distinct</em> roots. I am wondering whether the word "distinct" can be removed? I guess the answer is yes, but I cannot come up with a nice proof. </p> <p>Sorry for the confusion. The question above can be rephrased as "if $f$ has roots $\alpha_i$ of multiplicity $n_i$, then is it possible that $\sum n_i&gt;n$?"</p> <p>Here, $\alpha$ being of multiplicity $k$ means $(x-\alpha)^k\mid f$ but $(x-\alpha)^{k+1}\nmid f$.</p> <p>Over a general commutative ring this is possible, e.g., $x^2-x\in\mathbb{Z}_6[x]$ has roots $\overline{0},\overline{1},\overline{3},\overline{4}$.</p>
Community
-1
<p>Suppose that $\alpha$ were not a root of g.</p> <p>Then $\alpha$ is a simple root of f, which is a contradiction.</p>
229,703
<p>Given the function <span class="math-container">\begin{align*} f \colon \mathbb{R}^n &amp;\to \mathbb{R}^n\\ v&amp;\mapsto \dfrac{v}{\|v\|}, \end{align*}</span> I would like to compute the derivative of <span class="math-container">$f$</span>, that is <span class="math-container">$df(v)$</span>. It is possible to derive it by hand, which leads to</p> <p><span class="math-container">$$df(v)=\dfrac{1}{\| v\|}\Big(I_n - \dfrac{v}{\|v\|}\otimes \dfrac{v}{\|v\|}\Big)$$</span></p> <p>where <span class="math-container">$I_n$</span> is the second-order identity matrix.</p> <p>I believe <em>Mathematica</em> cannot find that using simple built-in functions (without explicitly defining <code>v = {v1, v2, v3}</code> if <span class="math-container">$n=3$</span> for instance). Some packages are dedicated to differential geometry (see <a href="https://mathematica.stackexchange.com/questions/60243/coordinate-free-differential-forms-package">Coordinate free differential forms package</a> or <a href="https://mathematica.stackexchange.com/questions/2620/differential-geometry-add-ons-for-mathematica">Differential geometry add-ons for Mathematica</a>) but I failed to achieve the above calculation. Any hint would be appreciated.</p> <hr /> <p><strong>Edit</strong> For those of you who are interested in how to find the above formula, you can define <span class="math-container">$g(t)=f(v(t))=\big(v(t)\cdot v(t)\big)^{-1/2}v(t)$</span> and compute <span class="math-container">$g'(t)$</span> with the chain rule. 
<span class="math-container">$g'(t)$</span> is a linear function of <span class="math-container">$v'(t)$</span> because:</p> <p><span class="math-container">$$g'(t)=\dfrac{df}{dv}(v(t)) v'(t)$$</span></p> <p>Taking the coefficient in front of <span class="math-container">$v'(t)$</span> gives the above expression.</p> <p>Now, the naive implementation of this approach as follows fails because it does not capture the multi-dimensionality of <code>f</code>:</p> <pre><code>f[v_] = v/Norm[v] h[t_] = D[f[v[t]], t]/v'[t] // Simplify h[t] /. Norm'[v[t]] -&gt; v[t]/Norm[v[t]] // Simplify (* (Norm[v[t]]^2 - v[t]^2)/Norm[v[t]]^3 *) </code></pre>
Michael Seifert
27,813
<p>You can abuse the variational derivative functionality in <code>xTensor</code> to do this:</p> <pre><code>&lt;&lt; xAct`xTensor` DefManifold[M, dim, IndexRange[a, m]]; DefMetric[1, metric[-a, -b], PD, PrintAs -&gt; &quot;\[Delta]&quot;, FlatMetric -&gt; True, SymbolOfCovD -&gt; {&quot;,&quot;, &quot;\[PartialD]&quot;}]; DefTensor[v[a], M] DefScalarFunction[ff] </code></pre> <p>The function <code>ff</code> here is a stand-in for &quot;the inverse of the norm&quot;. It has to be typed as a scalar function of some scalar argument(s), so <code>xTensor</code> knows how to take its derivative. (Trying to do this directly using <code>Sqrt</code> throws errors; I'm not sure why.)</p> <pre><code>VarD[v[c], PD][v[a] ff[v[b] v[-b]]] // ScreenDollarIndices // ContractMetric % /. ff -&gt; (#^(-1/2) &amp;) </code></pre> <p><a href="https://i.stack.imgur.com/A49Sf.png" rel="noreferrer"><img src="https://i.stack.imgur.com/A49Sf.png" alt="enter image description here" /></a></p> <p>Once the variational derivative is taken, you can then set <code>ff</code> to any function you please, including (in this case) the -1/2 power function.</p> <p>Note that the dimensionality of the manifold <code>dim</code> remains unspecified in this code. It might be necessary to specify it in some related cases (for simplification or for calculating traces of quantities), but it doesn't appear to be needed here.</p>
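The closed-form Jacobian quoted in the question can also be verified numerically without any tensor machinery (a Python sketch I added, independent of the xTensor route): central finite differences of $v\mapsto v/\|v\|$ match $\frac{1}{\|v\|}(I-\hat v\otimes\hat v)$.

```python
import math

def unit(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def jacobian_formula(v):
    """df(v)_ij = (delta_ij - u_i u_j) / ||v||, with u = v / ||v||."""
    n = math.sqrt(sum(c * c for c in v))
    u = [c / n for c in v]
    return [[((1.0 if i == j else 0.0) - u[i] * u[j]) / n for j in range(3)]
            for i in range(3)]

v = [0.3, -1.2, 2.0]
h = 1e-6
num = [[0.0] * 3 for _ in range(3)]
for j in range(3):
    vp = list(v); vp[j] += h
    vm = list(v); vm[j] -= h
    fp, fm = unit(vp), unit(vm)
    for i in range(3):
        num[i][j] = (fp[i] - fm[i]) / (2 * h)   # central difference column j

J = jacobian_formula(v)
max_dev = max(abs(num[i][j] - J[i][j]) for i in range(3) for j in range(3))
assert max_dev < 1e-6
```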
1,251,537
<p>$f:[a,b] \to R$ is continuous and $\int_a^b{f(x)g(x)dx}=0$ for every continuous function $g:[a,b]\to R$ with $g(a)=g(b)=0$. Must $f$ vanish identically?</p> <hr> <p>Using integration by parts I got the form: $\int_a^bg(x)f(x)-g'(x)F(x)=0$, where $F'(x)=f(x)$.</p>
egreg
62,967
<p>Suppose $f(x_0)&gt;0$ for some $x_0\in(a,b)$. Then there exists $\delta&gt;0$ such that, for $|x-x_0|&lt;\delta$, $f(x)&gt;k&gt;0$ (take $k=f(x_0)/2$, for instance), with $(x_0-\delta,x_0+\delta)\subseteq(a,b)$.</p> <p>Now build a function $g$ by decreeing that $$ g(x)=\begin{cases} 0 &amp; \text{if $a\le x&lt; x_0-\delta$}\\[3px] ? &amp; \text{if $x_0-\delta\le x&lt;x_0-\delta/2$}\\[3px] 1 &amp; \text{if $x_0-\delta/2\le x\le x_0+\delta/2$}\\[3px] ? &amp; \text{if $x_0+\delta/2&lt;x\le x_0+\delta$}\\[3px] 0 &amp; \text{if $x_0+\delta&lt;x\le b$} \end{cases} $$ where $?$ stands for the linear segments that make the function continuous on the two ramp intervals (you can easily compute the formulas).</p> <p>Then $f(x)g(x)=0$ outside of $(x_0-\delta,x_0+\delta)$, nonnegative in this interval and strictly positive on $[x_0-\delta/2,x_0+\delta/2]$, so $$ \int_{a}^b f(x)g(x)\,dx&gt;0 $$</p>
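A numeric illustration of the construction (my own Python sketch, with made-up choices $f(x)=x$, $x_0=0.5$, $\delta=0.25$ on $[0,1]$): the trapezoidal bump $g$ vanishes at the endpoints, yet $\int fg &gt; 0$, so a continuous $f$ that is positive somewhere cannot satisfy the hypothesis.

```python
x0, d = 0.5, 0.25

def g(x):
    """Trapezoidal bump: 0 outside (x0-d, x0+d), 1 on [x0-d/2, x0+d/2], linear ramps between."""
    if x <= x0 - d or x >= x0 + d:
        return 0.0
    if x < x0 - d / 2:
        return (x - (x0 - d)) / (d / 2)
    if x > x0 + d / 2:
        return ((x0 + d) - x) / (d / 2)
    return 1.0

f = lambda x: x   # continuous and positive near x0

N = 10000
h = 1.0 / N
integral = sum(f((i + 0.5) * h) * g((i + 0.5) * h) * h for i in range(N))

assert g(0.0) == 0.0 and g(1.0) == 0.0   # g vanishes at the endpoints
assert integral > 0.1                    # yet the integral is strictly positive
```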
4,316,876
<p>I want to prove that given <span class="math-container">$a,b\in\mathbb{R}$</span> we have <span class="math-container">$|a+b|\leq|a|+|b|$</span> by arguing by contradiction (reductio ad absurdum).</p> <p>So I assume, for contradiction, that <span class="math-container">$|a+b|&gt;|a|+|b|$</span>, but I can't reach the contradiction. It looks simple, but I'm afraid it isn't. In my research I didn't find this proof. Thanks for any contribution!</p>
Paul Sinclair
258,282
<p>It depends on what you already know about absolute values and inequalities.</p> <p>For instance, have you already shown that if <span class="math-container">$a &gt; b &gt; 0$</span> and <span class="math-container">$c &gt; 0$</span>, then <span class="math-container">$ac &gt; bc$</span>? From that we get <span class="math-container">$a^2 &gt; ab$</span> and <span class="math-container">$ab &gt; b^2$</span>, so <span class="math-container">$a^2 &gt; b^2$</span>. Also for any <span class="math-container">$a,b, |a|^2 = a^2$</span>, <span class="math-container">$|a||b| = |ab|$</span>, and <span class="math-container">$a \le |a|$</span>.</p> <p>Thus if <span class="math-container">$$|a + b| &gt; |a| + |b|\\|a + b|^2 &gt;(|a| + |b|)^2\\(a + b)^2 &gt; (|a| + |b|)^2\\a^2 + 2ab + b^2 &gt; a^2 + 2|ab| + b^2\\ab &gt; |ab|$$</span> which is absurd.</p>
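The key step of the contradiction, namely that $ab &gt; |ab|$ is impossible, and the inequality itself can both be spot-checked (a small Python sketch I added):

```python
import random

random.seed(0)
violations = 0
for _ in range(1000):
    a = random.uniform(-10, 10)
    b = random.uniform(-10, 10)
    if abs(a + b) > abs(a) + abs(b) + 1e-12:   # the triangle inequality holds
        violations += 1
    if a * b > abs(a * b):                     # the absurd conclusion never holds
        violations += 1
assert violations == 0
```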
976,910
<p>I'm having a small issue with a certain question. </p> <p>Given a parametric equation of a plane $x=5-2a-3b$, $y=3-4a+2b$, $z=7-6a-2b$, find a point $P$ on the plane so that the position vector of $P$ is perpendicular to the plane.</p> <p>How would you go about this for a parametric equation? I think I could convert this to a Cartesian equation and dissect an answer that way, but how can I do this without having to convert it?</p> <p>The hint it gives on the page is that $P$ has the vector $\overrightarrow{OP}$, so I'd imagine the first thing I would do is use the dot product with dummy variables for the $i$, $j$ and $k$ values of $P$. Am I on the right track?</p> <p>Thanks in advance.</p>
WikiSandhu
184,910
<p>You can show that every point of $X-K$ is an interior point: for $x \in X-K$, there exists an open set $O \ni x$ such that $O \cap K= \emptyset$ (otherwise $x$ would be a limit point of $K$). Thus for an arbitrary $x \in X-K$ we have $x \in O \subset X-K$. Hence $X-K$ is open. </p>
148,420
<p>A simple concept but I've not been able to solve it. I'm trying to create a stack of 2D plots in 3D space using Mathematica 9. <strong>This is not a parametric plot</strong>, but I'm creating it from an array of vectors (imported .csv file). The ListPlot3D function creates a filled mesh but what I want is this type of plot (created by HYRY in StackExchange: <em>'Matplotlib plot pulse propagation in 3d'</em>):</p> <p><a href="https://i.stack.imgur.com/iv2Ty.png" rel="noreferrer"><img src="https://i.stack.imgur.com/iv2Ty.png" alt="example taken from StackExchange question: &#39;Matplotlib plot pulse propagation in 3d&#39;"></a></p> <p>I have tried changing the function options for ListPlot3D and was going to create an array of plot images (.jpg) to stack in 3D, each one having an alpha value - but that would not be good. Any help is appreciated.</p> <p>Thank you,</p> <p>Marc </p>
Quantum_Oli
6,588
<p>Here's one way, using <code>Graphics3D</code> and starting from a list of $x,y,z$ values for datapoints.</p> <p>Mock data:</p> <pre><code>data = Flatten[Table[ {x, y, PDF[MultinormalDistribution[{0, 0}, {{0.2, 0.1}, {0.1, 0.4}}],{x, y}]}, {x, -2, 2, 0.01}, {y, -2, 2, 0.25}],1]; </code></pre> <p>Group by y-value:</p> <pre><code>gb = GatherBy[data, #[[2]] &amp;]; </code></pre> <p>Some fancy styling (not necessary):</p> <pre><code>cLines = Transpose[{ ColorData["BlueGreenYellow"] /@ Rescale[Range[Length[gb]]], Line /@ gb }]; </code></pre> <p>And plot (with some options to help the plot styling):</p> <pre><code>Graphics3D[ {Thick, cLines}, Axes -&gt; True, Boxed -&gt; False, BoxRatios -&gt; {1, 1, 1}, FaceGrids -&gt; {{-1, 0, 0}, {0, 1, 0}, {0, 0, -1}}, AxesLabel -&gt; {"x", "y", "z"}, PlotRangePadding -&gt; None] </code></pre> <p><a href="https://i.stack.imgur.com/2KhDh.png" rel="noreferrer"><img src="https://i.stack.imgur.com/2KhDh.png" alt="enter image description here"></a></p>
4,004,978
<blockquote> <p>For all <span class="math-container">$a, b, c, d &gt; 0$</span>, prove that <span class="math-container">$$2\sqrt{a+b+c+d} ≥ \sqrt{a} + \sqrt{b} + \sqrt{c} + \sqrt{d}$$</span></p> </blockquote> <p>The idea would be to use AM-GM, but <span class="math-container">$\sqrt{a} + \sqrt{b} + \sqrt{c} + \sqrt{d}$</span> is hard to expand. I also tried squaring both sides, but that hasn't worked either. Using two terms at a time doesn't really work as well. How can I solve this question? Any help is appreciated.</p>
fleablood
280,126
<p>If you square both sides:</p> <p><span class="math-container">$(\sqrt a + \sqrt b + \sqrt c +\sqrt d)^2 =a + b+c+d + 2(\sqrt{ab} + \sqrt{ac}+\sqrt{ad} + \sqrt{bc}+\sqrt{bd} + \sqrt{cd})$</span></p> <p>while <span class="math-container">$(2\sqrt {a+b+c+d})^2= 4(a+b+c+d)$</span> so it suffices to prove</p> <p><span class="math-container">$2(\sqrt{ab} + \sqrt{ac}+\sqrt{ad} + \sqrt{bc}+\sqrt{bd} + \sqrt{cd})\le 3(a+b+c+d)$</span>.</p> <p>If we apply AM/GM though, we get <span class="math-container">$\sqrt{ab} \le \frac {a+b}2$</span> or in other words <span class="math-container">$2\sqrt{ab} \le a+b$</span>.</p> <p>And that does it:</p> <p><span class="math-container">$2(\sqrt{ab} + \sqrt{ac}+\sqrt{ad} + \sqrt{bc}+\sqrt{bd} + \sqrt{cd})\le$</span></p> <p><span class="math-container">$(a+b) + (a+c) + (a+d) + (b+c) + (b+d) + (c+d) = 3(a+b+c+d)$</span>.</p>
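A random spot-check of both the target inequality and the AM-GM step used in the proof (a Python sketch I added):

```python
import math
import random

random.seed(1)
violations = 0
for _ in range(1000):
    a, b, c, d = (random.uniform(1e-6, 100.0) for _ in range(4))
    lhs = 2 * math.sqrt(a + b + c + d)
    rhs = math.sqrt(a) + math.sqrt(b) + math.sqrt(c) + math.sqrt(d)
    if lhs + 1e-9 < rhs:                     # 2 sqrt(a+b+c+d) >= sum of the roots
        violations += 1
    if 2 * math.sqrt(a * b) > a + b + 1e-9:  # AM-GM step: 2 sqrt(ab) <= a + b
        violations += 1
assert violations == 0
```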
1,929,445
<blockquote> <p>Is there a solution to the problem $$\left\{\begin{matrix} y'=y+y^4\\ y(x_0)=y_0 \end{matrix}\right.$$ which is defined on $\mathbb{R}$? ($x_0,y_0$ might be any real numbers)</p> </blockquote> <p>It's easy to prove that for all $(x,y)\in\mathbb{R}^2$ there exists an open interval $I$ (with $x_0\in I$) where the problem has a unique solution. However is the maximal interval always $(-\infty,\infty)$? I know that the answer is no, but that's just because I found a solution for particular values of $x_0$ and $y_0$ and checked its domain. But is there a way to prove that $I$ need not be $I=(-\infty,\infty)$ without actually solving the problem for a certain initial condition? In other words, how can I prove that the unique solution need not be defined on $\mathbb{R}$?</p>
Will Jagy
10,400
<p>actual solutions, three regions: $y &gt; 0,$ then $0 &gt; y &gt; -1,$ then $y &lt; -1.$ I should emphasize that your ODE is autonomous, no explicit dependence on $x,$ which means that every solution curve is a sideways translate of one of those in the picture. It is sort of accidental that a solution for $y &gt; 0$ also provides the exact correct formula for a solution with $y &lt; -1.$ That is why there are two curves in light green, with matching vertical asymptote. </p> <p><a href="https://i.stack.imgur.com/x7Qh7.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/x7Qh7.jpg" alt="enter image description here"></a></p> <p>it is first order, so solutions cannot intersect or cross each other. There are two constant solutions, $y=0$ and $y=-1.$ Between those, solution curves descend from horizontal asymptote at $y=0$ to that at $y=-1.$</p> <p>Above $0,$ $y$ increases. Once, say, $y &gt; 2,$ we have $y' &gt; 1 + y^2$ and $y &gt; \tan (x - x_0).$ Blowup in finite time; there is a vertical asymptote. </p> <p>Similar for $y &lt; -1$ if you run time backwards. Very roughly, the plot of a large number of solution curves can be rotated $180^\circ$ to give something similar to the original. </p> <p>Oh, well. You can also change variables to get good estimates on when $y$ goes to $\infty.$ We are free to move right or left, all solution curves are translates of each other. In considering $y &gt; 0,$ we are free to demand $y(0) = 1.$ Next, take $$ y = \tan w, $$ so $w(0) = \pi/4$ and $$ y' = \sec^2 w \; w' $$ From $y' = y + y^4, $ I got $$ w' = \cos w \sin w + \frac{\sin^4 w}{\cos^2 w}. 
$$ For $w$ between $\pi/4$ and $\pi/2,$ we have $y = \tan w &gt; 1,$ so $y + y^4 &gt; 1 + y^2 = \sec^2 w;$ since $w' = y' \cos^2 w,$ this gives $w' &gt; 1,$ so we know that $w = \pi/2$ occurs with $x &lt; \pi/2 - \pi / 4 = \pi / 4.$ Much greater precision is available: just solve the ODE for $w$ numerically with $w(0) = \pi/4$ and find when $w = \pi/2.$ In fact, since this thing is solvable in closed form, we find that $y(0) = 1$ means $y$ goes to infinity, meaning a vertical asymptote, at $$ x = \frac{1}{3} \log 2 \approx 0.231049 $$</p> <p><a href="https://i.stack.imgur.com/Tls1C.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Tls1C.jpg" alt="enter image description here"></a></p>
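The blow-up time can be confirmed from the separable closed form (a Python sketch I added): with $y(0)=1$, integrating $dy/(y+y^4)$ gives $x(y)=\frac13\ln\frac{2y^3}{1+y^3}$, which vanishes at $y=1$, satisfies $dx/dy = 1/(y+y^4)$, and tends to $\frac13\ln 2 \approx 0.231049$ as $y\to\infty$.

```python
import math

def x_of_y(y):
    # inverse of the solution with y(0) = 1: x = (1/3) ln( 2 y^3 / (1 + y^3) )
    return math.log(2 * y**3 / (1 + y**3)) / 3

assert abs(x_of_y(1.0)) < 1e-12                    # initial condition y(0) = 1

asym_err = abs(x_of_y(1e6) - math.log(2) / 3)      # vertical asymptote at (1/3) ln 2
assert asym_err < 1e-6

y0, h = 2.0, 1e-6                                  # check dx/dy = 1/(y + y^4)
dxdy = (x_of_y(y0 + h) - x_of_y(y0 - h)) / (2 * h)
assert abs(dxdy - 1 / (y0 + y0**4)) < 1e-8
```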
858,952
<p>Related to <a href="https://math.stackexchange.com/questions/830599/one-sided-limit-lim-x-rightarrow-0-fx-where-wolfram-alpha-does-not-hel">this question</a>:</p> <p>Is there an easy closed-form expression for</p> <p>$$\sum_{j=k}^{\infty} \frac{x^j}{j!}e^{-x},$$</p> <p>that is, when the sum starts at a constant $k$ instead of $1$?</p> <p>EDIT: Thanks for your help. Is there a chance to simplify this sum further? Because this is not really what I expect when I talk about a closed-form expression. </p> <p>A little more context might help:</p> <p>I have $$f(n,p)=\sum_{j=k}^{\infty} \frac{(np)^j}{j!} e^{-np}$$ and the claimed partial derivative is $$\frac{\partial f(n,p)}{\partial n}=\frac{p (np)^{k-1}}{(k-1)!}e^{-np},$$ but I have no idea how to get there. </p> <p>Because to me: </p> <p>$$\frac{\partial f(n,p)}{\partial n}=\sum_{j=k}^{\infty} \left( \frac{p (np)^{j-1}}{(j-1)!} e^{-np} -\frac{p (np)^j}{j!} e^{-np} \right),$$ but then I am stuck.</p>
JJacquelin
108,514
<p>This is an incomplete Gamma function; see Eq. 2 in: <a href="http://mathworld.wolfram.com/IncompleteGammaFunction.html" rel="nofollow noreferrer">http://mathworld.wolfram.com/IncompleteGammaFunction.html</a></p> <p><img src="https://i.stack.imgur.com/d6SsD.jpg" alt="enter image description here"></p>
3,063,053
<p>I'm a Calculus I student and my teacher has given me a set of problems to solve with L'Hôpital's rule. Most of them have been pretty easy, but this one has me stumped. <br /></p> <p><span class="math-container">$$\lim\limits_{x\to \infty} \frac{x}{\sqrt{x^2 + 1}}$$</span> </p> <p>You'll notice that using L'Hôpital's rule flips the fraction. For example, applying it once returns: </p> <p><span class="math-container">$$\lim\limits_{x\to \infty} \frac{\sqrt{x^2 + 1}}{x}$$</span> </p> <p>And applying it again returns you to the beginning: </p> <p><span class="math-container">$$\lim\limits_{x\to \infty} \frac{x}{\sqrt{x^2 + 1}}$$</span> </p> <p>I plugged it into my calculator and found the limit evaluates to 1, but I was wondering if there is a better way to do this algebraically?</p>
cqfd
588,038
<p><strong>Hint</strong>: Divide the numerator and denominator by <span class="math-container">$x$</span> (valid for <span class="math-container">$x &gt; 0$</span>) and apply the limit:</p> <p><span class="math-container">$$\frac{x}{\sqrt{x^2 + 1}}=\frac{1}{\sqrt{1 + \frac{1}{x^2}}}\to 1.$$</span></p>
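<p>A quick numeric sanity check (my own sketch, not part of the hint): the ratio $x/\sqrt{x^2+1}$ climbs toward $1$ as $x$ grows, as the rewritten form predicts.</p>

```python
# Numeric sanity check (my own sketch, not part of the hint): the ratio
# x / sqrt(x^2 + 1) = 1 / sqrt(1 + 1/x^2) approaches 1 as x grows.
import math

for x in (10.0, 100.0, 1000.0, 1e6):
    print(x, x / math.sqrt(x * x + 1))  # approaches 1
```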
3,063,053
<p>I'm a Calculus I student and my teacher has given me a set of problems to solve with L'Hôpital's rule. Most of them have been pretty easy, but this one has me stumped. <br /></p> <p><span class="math-container">$$\lim\limits_{x\to \infty} \frac{x}{\sqrt{x^2 + 1}}$$</span> </p> <p>You'll notice that using L'Hôpital's rule flips the fraction. For example, applying it once returns: </p> <p><span class="math-container">$$\lim\limits_{x\to \infty} \frac{\sqrt{x^2 + 1}}{x}$$</span> </p> <p>And applying it again returns you to the beginning: </p> <p><span class="math-container">$$\lim\limits_{x\to \infty} \frac{x}{\sqrt{x^2 + 1}}$$</span> </p> <p>I plugged it into my calculator and found the limit evaluates to 1, but I was wondering if there is a better way to do this algebraically?</p>
Mostafa Ayaz
518,023
<p><strong>Hint</strong></p> <p>Simply use <span class="math-container">$${x\over x+1}={x\over \sqrt{x^2+2x+1}}&lt;{x\over \sqrt{x^2+1}}&lt;1$$</span> for all <span class="math-container">$x&gt;0$</span>; since <span class="math-container">${x\over x+1}\to 1$</span>, the squeeze theorem gives the limit <span class="math-container">$1$</span>.</p>
621,409
<p>I need some help with the following question:</p> <p>We have $H$ acting by automorphisms on $N$, and let $\rho:H\to Aut(N)$ be the associated representation by automorphisms.</p> <p>Suppose that $G=H[N]_{\rho}$ is a semidirect product, and $K=\ker(\rho)$.</p> <p>Prove that $K\unlhd G$ and that $G/K$ is also a semidirect product. </p> <p>Thanks a lot in advance for any help!</p> <hr> <p>Edit: I deleted the part that was unclear (in fact badly formulated). The answers of DonAntonio and user finally solved the question.</p>
Ulrik
53,012
<p>Do you know how to find multivariable limits using polar coordinates? Substitute $x^2 + y^2 = r^2$, where $r$ is the distance from the origin to the point $(x,y)$. Then $(x,y)$ approaching the origin is equivalent to $r$ approaching $0$. In this way you can reduce the problem to a single-variable limit, and use L'Hôpital's rule, for instance.</p>
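<p>To illustrate the polar-coordinate idea with a concrete example of my own (hypothetical, not taken from the question): for $f(x,y)=\frac{x^2y}{x^2+y^2}$, the substitution gives $f=r\cos^2\theta\sin\theta$, so $|f|\leq r$ and the limit at the origin is $0$ regardless of direction.</p>

```python
# Hypothetical example of my own (not from the question): f(x, y) =
# x^2*y/(x^2 + y^2) becomes r*cos(t)^2*sin(t) in polar coordinates, so
# its largest value on the circle of radius r shrinks like r.
import math

def f(x, y):
    return x * x * y / (x * x + y * y)

for r in (1.0, 0.1, 0.01):
    worst = max(abs(f(r * math.cos(t * math.pi / 180),
                      r * math.sin(t * math.pi / 180)))
                for t in range(360))
    print(r, worst)  # worst-case magnitude shrinks like r
```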
1,043,094
<p>I have to find the limit of following</p> <p><span class="math-container">$$\lim_{x \to 0}\left(\frac{1}{x} - \frac{1}{x^2}\right)$$</span></p> <p>I have no idea how to start this one off. How would I do it?</p> <p>Do I just substitute the <span class="math-container">$0$</span>? It doesn't look that easy and simple. The answer says it's negative infinity.</p> <p>Please show me a solution without graphing(unless for better explanation).</p>
Paul
17,980
<p>Combine the fractions: $$\frac1x-\frac1{x^2}=\frac{x-1}{x^2}.$$ As $x\to 0$, the numerator tends to $-1$ while the denominator tends to $0$ through positive values, so $$\lim_{x\to 0}\frac{x-1}{x^2} = -\infty$$</p>
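<p>A numeric glance (my own sketch, not part of the answer) confirms the plunge to $-\infty$ from both sides of $0$:</p>

```python
# My own sketch: (1/x - 1/x^2) = (x - 1)/x^2 becomes hugely negative
# on both sides of 0, since x^2 > 0 while x - 1 stays near -1.
for x in (0.1, 0.01, 0.001, -0.001):
    print(x, 1 / x - 1 / x**2)
```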
1,676,505
<p>Let $f:[0,1]\times[0,1]\to \mathbb R$, $$f(x,y)= \begin{cases} \frac1q+\frac1n, &amp; \text{if $(x,y)=(\frac mn,\frac pq) \in \Bbb Q\times\Bbb Q,$ $ (m,n)=1=(p,q)$ } \\ 0, &amp; \text{if $x$ or $y$ irrational$ $ or $0,1$} \end{cases} $$</p> <p>Prove that $f$ is integrable over $R=[0,1]\times[0,1]$ and find the value of the integral (I know its value is zero, because every lower sum is zero).</p> <p>I'm trying to find the set of discontinuities of $f$ over $R$ and prove that it has measure zero, so that $f$ is integrable.</p> <p>I remember doing this for the one-dimensional case (Thomae's function), proving that $f$ was continuous at the irrationals and discontinuous at the rationals, but I can't prove it this time, so I need some help; it will be really appreciated.</p>
Christian Blatter
1,303
<p>Let $u\mapsto T(u)$ $\&gt;(0\leq u\leq1)$ be <a href="https://en.wikipedia.org/wiki/Thomae%27s_function" rel="nofollow">Thomae's function</a>. Then $$0\leq f(x,y)\leq T(x)+T(y)\qquad\bigl((x,y)\in Q:=[0,1]^2\bigr)\ .$$ By "Fubini's theorem" for Riemann integrals one obtains $$\int_Q T(y)\&gt;{\rm d}(x,y)=\int_0^1 \int_0^1 T(y) dy\ dx=0\ ,$$ and similarly for $(x,y)\mapsto T(x)$. It follows that $\int_Q f(x,y)\&gt;{\rm d}(x,y)=0$.</p>
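<p>A computational sketch of my own (not part of the answer) of why $\int_0^1 T = 0$: the exact upper Darboux sums of Thomae's function on a uniform mesh shrink as the mesh is refined. On each cell the supremum of $T$ is $1/q$, where $q$ is the smallest denominator of a rational in that cell (such a fraction is automatically in lowest terms).</p>

```python
# Sketch (my own, not from the answer): exact upper Darboux sums of
# Thomae's function T on [0,1] with uniform mesh 1/N. On each cell the
# sup of T is 1/q, where q is the smallest denominator of a rational
# in the cell; q = N always works, so the inner loop terminates.
import math
from fractions import Fraction

def thomae_upper_sum(N):
    """Exact upper Darboux sum of Thomae's function on [0,1], mesh 1/N."""
    total = Fraction(0)
    for i in range(N):
        a, b = Fraction(i, N), Fraction(i + 1, N)
        q = 1
        while math.ceil(a * q) > b * q:  # no integer p with a <= p/q <= b
            q += 1
        total += Fraction(1, q) * Fraction(1, N)
    return total

for N in (10, 100, 400):
    print(N, float(thomae_upper_sum(N)))  # decreases toward 0
```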