1,419,756
<p>Let $A$ be a nonempty set in the metric space $(X,d)$ and, for $\epsilon&gt;0$, define</p> <p>$$A_\epsilon = \{x\in X: d(x,A) &lt; \epsilon\}.$$</p> <p>Then I want to prove that $A_\epsilon$ is open in $X$.</p> <p>So what I have tried so far: I want to prove that the set is open, so I take $x \in A_\epsilon$; then we have to show that $B_{\epsilon_{1}}(x) \subset A_\epsilon$ for some $\epsilon_{1}&gt;0$. Since it is clear that $A_\frac{\epsilon}{2}\subset A_\epsilon$, we take $\epsilon_{1}=\epsilon-d(x,A)$, pick $z \in B_{\frac{\epsilon_{1}}{2}}(x)$, and observe the following:</p> <blockquote> <p>I think I got it: $$d(z,A)&lt;d(z,x)+d(x,A)&lt;\epsilon_{1}+d(x,A)=\epsilon-d(x,A)+d(x,A)=\epsilon$$ <strong>Am I right? And is the triangle inequality true for a point and a set?</strong></p> </blockquote> <p>The thing is that I am not sure of the <strong>above step</strong>. Can someone tell me if I am right, and if I am not, help me fix it?</p> <p>Thanks a lot in advance.</p> <p><strong>NOTE</strong>: $$d(x,A)=\inf\{d(x,y):y \in A\}$$</p>
Plutoro
108,709
<p>The issue is that you do not know that $d(x,A)&lt;\varepsilon/2$. Try this: pick $x\in A_\varepsilon$, so that $d(x,A)=\varepsilon_2&lt;\varepsilon$. Let $0&lt;\varepsilon_1&lt;\varepsilon-\varepsilon_2$. Now try the ball centered at $x$ of radius $\varepsilon_1$. </p>
David Hill
145,687
<p><strong>Yet Another Approach:</strong> We show that $\displaystyle A_\epsilon=\bigcup_{a\in A}B_\epsilon(a)$.</p> <p>Let $a\in A$. If $x\in B_{\epsilon}(a)$, then $$d(x,A)\leq d(x,a)&lt;\epsilon.$$ Therefore, $B_\epsilon(a)\subset A_\epsilon$, so $\displaystyle \bigcup_{a\in A}B_\epsilon(a)\subset A_\epsilon$.</p> <p>Conversely, suppose $\displaystyle x\in\left(\bigcup_{a\in A}B_\epsilon(a)\right)^c=\bigcap_{a\in A}B_\epsilon(a)^c$. Then, for every $a\in A$, $d(x,a)\geq\epsilon$. Hence, $$d(x,A)=\inf\{d(x,a)\mid a\in A\}\geq \epsilon.$$ Therefore, $\displaystyle \left(\bigcup_{a\in A}B_\epsilon(a)\right)^c\subset A_\epsilon^c$, so we're done.</p>
3,642,479
<p>What properties do we lose as we go from real numbers to quaternions, then to octonions? Do any new properties arise, or do calculations just become more "path dependent"?</p>
dcromley
157,712
<p>From reals to complex you lose order. From complex to quaternions you lose commutativity. From quaternions to octonions you lose associativity. From octonions to ..? </p> <p>I had written this as a comment, then I followed Noah Schweber's link, which essentially says this plus more. And pregunton's answer is meaty.<br> <a href="https://math.stackexchange.com/questions/641809/">What specific algebraic properties are broken at each Cayley-Dickson stage beyond octonions?</a> </p> <p>EDIT: Bcpicao's comment led me to<br> <a href="https://en.wikipedia.org/wiki/Cayley%E2%80%93Dickson_construction" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Cayley%E2%80%93Dickson_construction</a><br> a fabulous link.</p>
3,996,790
<p>I just realized that there may be a case where L'Hopital's rule fails, specifically</p> <p><span class="math-container">$$ \lim_{x \to \infty} \frac{e^x}{e^x}$$</span></p> <p>which evaluates to an indeterminate form, specifically <span class="math-container">$\frac{\infty}{\infty}$</span>. Sure, we can cancel the <span class="math-container">$e^x$</span>s, but when we use L'Hopital's, we get</p> <p><span class="math-container">$$ \lim_{x \to \infty} \frac{(e^x)^\prime}{(e^x)^\prime}$$</span></p> <p>Since the derivative of <span class="math-container">$e^x$</span> is <span class="math-container">$e^x$</span>, we have</p> <p><span class="math-container">$$ \lim_{x \to \infty} \frac{e^x}{e^x}$$</span></p> <p>which is our original limit. Therefore, L'Hopital's fails to work in this example.</p> <p>Question: Does L'Hopital's rule actually fail in this example, or am I understanding it wrong?</p> <p>Edit: I mean &quot;fails&quot; in which it does not make progress toward a determinate result.</p>
José Carlos Santos
446,262
<p>In the first place, L'Hopital's Rule does not fail, in the usual sense of “fail”, only in the sense that you've added at the end of your question: you can apply it as many times as you wish without ever getting an answer.</p> <p>And there are other cases:<span class="math-container">$$\lim_{x\to\infty}\frac{x^{-1}}{x^{-1}}=\lim_{x\to\infty}\frac{-x^{-2}}{-x^{-2}}=\lim_{x\to\infty}\frac{2x^{-3}}{2x^{-3}}=\cdots$$</span></p>
2,271,204
<p>How do I solve these two equations: $\frac{x}{x+y} = \frac{1}{2}$ and $\frac{y}{x+y} = \frac{1}{6}$ ?</p> <p>I tried reducing the first one and end up getting $x = y$, clearly my maths is pretty dusty </p>
kingW3
130,953
<p>Notice that $$\frac{x}{x+y}+\frac{y}{x+y}=\frac{x+y}{x+y}=1$$ but $$\frac{1}{2}+\frac{1}{6}\neq 1,$$ so no pair $(x,y)$ can satisfy both equations.</p>
2,781,265
<p>I am learning category theory as a hobby. In the book I am studying from, I was looking at an example of a functor. I want to understand what this functor looks like.</p> <p>If $G$ is a group, a functor $F: G \to \mathbf{Set}$ picks out a set $A = F(\star)$, together with a homomorphism from $G$ to the group of permutations of $A$. This is a permutation representation of $G$. </p> <p>I am confused about some things here:</p> <ol> <li><p>What is $\star$, is it the single object in the group considered as a category?</p></li> <li><p>Why is this homomorphism from $G$ to the group of permutations of $A$ coming into the picture, doesn't a functor just need to map objects to some objects and morphisms to some morphisms? </p></li> <li><p>The book says if that a group can be considered as a category consisting of the single object $G$ where all the morphisms are isomorphisms. Why is this the case? What does this category look like?</p></li> </ol>
Kyle Miller
172,988
<p>Think about transformation groups. You have a space $X$ with symmetries $\operatorname{Aut} X$, and a transformation group is a subgroup $G\subset\operatorname{Aut} X$. Elements of $G$ are automorphisms $X\to X$.</p> <p>You can think of an abstract group as being a transformation group for an abstract object $\star$. Elements of $G$ are automorphisms $\star \to \star$. You are not meant to think of $\star$ as being anything other than a thing that exists. You can think of the one-object category as being the group, or you can think of $G$ as being $\operatorname{Aut}(\star)$. You could say that <em>both</em> points of view are the group, with the kind of metonymy that mathematicians tend to use.</p> <p>A homomorphism $G\to \operatorname{Perm}(A)$ (with $A$ a set and $\operatorname{Perm}(A)$ the set of all bijections $A\to A$) can be thought of as a functor $F$ that</p> <ol> <li>sends $\star$ to $A$, and</li> <li>sends an element $g\in G$, thought of as an automorphism $g:\star\to\star$, to the permutation $F(g):A\to A$.</li> </ol> <p>$F$ being a functor means that $F(gh)=F(g)F(h)$ and $F(1_G)=\operatorname{id}_A$, so $F$ restricted to the morphisms of $G$'s category is the homomorphism.</p> <p>A one-object category is called a <em>monoid</em>. Call the object $\star$. Recall that there is an identity map $1:\star\to\star$ in the category. If every $f:\star\to\star$ is an isomorphism (i.e., there is an inverse $g:\star\to\star$ such that $gf=1$ and $fg=1$), then $\operatorname{End}(\star)$ satisfies all the axioms of a group.</p>
2,339,479
<p>I am an avid board game player, especially of ones with miniatures that duke it out (Risk, Axis and Allies, etc.), and I'm trying to analyze the efficiency of different units in one of my more recent pickups. </p> <p>In the game (Battlelore Second Edition for those who are curious), you get a hit on either a 5 or a 6. This would be simple if you're just rolling one die. However, figures in this game roll from 1 die all the way up to 5 dice (maybe more in certain situations), and I'm having a bugger of a time figuring out the probability. I set up a table in Excel that takes the number of dice being rolled in the column, and checks the probability of getting 1, 2, 3, 4 or 5 hits in each respective row. I'd rather not list out all of the possibilities and count them, so I'm looking for an equation, of course.</p> <p>To sum up: <strong>I need an equation that gives Z, the probability of getting X hits with Y dice, where hits are on 5's and 6's.</strong></p> <p>EDIT: I have gotten as far as all of the chances of getting 1 hit with Y dice, as well as the chance of 2 hits with up to 3 dice, but 4 dice with 2 hits and onwards is giving me trouble.</p>
David K
139,123
<p>This is a binomial distribution with $p = \frac13,$ since that is the probability of rolling a $5$ or $6$ on one die. The probability of exactly $k$ hits on $n$ dice is</p> <p>$$ P_n(k) = \binom nk \left(\frac13\right)^k \left(\frac23\right)^{n-k} $$</p> <p>where $\binom nk$ is a binomial coefficient; $$ \binom nk = \frac{n!}{(n-k)!k!} = \frac{n(n-1)(n-2)\cdots(n-k+2)(n-k+1)}{k(k-1)(k-2)\cdots(2)(1)}. $$</p>
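The distribution above can be tabulated directly, reproducing the spreadsheet described in the question; a sketch in Python (not part of the original answer), using exact rational arithmetic so no rounding creeps in:

```python
from fractions import Fraction
from math import comb

def hit_prob(n, k, p=Fraction(1, 3)):
    """Probability of exactly k hits on n dice, where a hit is a 5 or 6 (p = 1/3)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# One row per number of dice (1..5), one column per hit count.
for n in range(1, 6):
    print(n, [str(hit_prob(n, k)) for k in range(n + 1)])
```

Each row sums to 1, which is a quick sanity check on the formula.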
2,153,128
<p>I have been given an exercise to convert coordinates from cylindrical to rectangular ones. This task is easy, but one of the points is strange looking. The point in cylindrical coordinates is $(0,45,10)$. This corresponds to $r=0$. What is this point? Is it not the origin? But why then the angle and the z-coordinate? I have one more question: how do I describe the geometric meaning of the following transformation in cylindrical coordinates: $(r,\theta,z)$ to $(-r,\theta-\pi/4,z)$?</p> <p>The $-r$ makes the whole problem here.</p>
Mitchell
413,342
<p>You can use the formula to do this. But here is the concept behind it. </p> <p>Suppose you want to choose 3 people from 11 people.</p> <p>In order to do that, imagine that there are 3 chairs. There are 11 ways to fill chair 1. After filling chair 1 there are 10 ways to fill chair 2, and similarly, there are 9 ways to fill chair 3.</p> <p>Say we chose 3 people A, B and C. Choosing any 3 people doesn't depend on what order we choose them in.</p> <p>ABC, ACB, BAC, BCA, CAB, and CBA are all the possible permutations of ABC, but they all have the same people. <strong><em>A permutation is an ordered combination.</em></strong> Therefore, we can say that 6 permutations are actually 1 combination.</p> <p>Thus, $\text{Total combinations} =\frac{11\cdot 10\cdot 9}{3\cdot 2\cdot 1} = 165$.</p>
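The chair-filling argument above translates directly into a few lines of Python (a sketch, not part of the original answer; `math.comb` and `math.perm` are the standard-library counting helpers):

```python
from math import comb, perm

ways_ordered = 11 * 10 * 9        # fill three chairs in order
overcount = 3 * 2 * 1             # orderings of the same three people
print(ways_ordered // overcount)  # 165
print(comb(11, 3))                # 165, the same count in one call
print(perm(11, 3))                # 990 ordered selections
```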
1,013,776
<p>I am looking at the following <strong>Theorem</strong>:</p> <p>Let $\phi$ be a formula. We suppose that there is a set $Y$ such that $\forall x(\phi(x) \to x \in Y)$. Then the set $\{ x: \phi(x) \}$ exists.</p> <p>and I am trying to understand its proof.</p> <p>From the axiom schema of specification, there is the set $V=\{ x \in Y: \phi(x) \}$</p> <p>$$x \in V \leftrightarrow (x \in Y \wedge \phi(x))$$</p> <p>How can we continue, in order to show that $\{ x: \phi(x) \}$ is a set?</p>
Rene Schipperus
149,912
<p>You want that </p> <p>$$\{x|\phi(x)\}=\{x\in Y|\phi(x)\},$$ and this follows directly from your assumption: whenever $\phi(x)$ holds, we have $x\in Y$, so the two sets have exactly the same elements.</p>
1,407,169
<p>Let $A$ and $B$ be two matrices in $M_n$. Is the following true:</p> <p>$A$ and $B$ are similar $\iff$ $A$ and $B$ have the same Jordan canonical form.</p> <p>Could someone explain?</p>
Surb
154,545
<p>If $A=PJP^{-1}$ is the <a href="https://en.wikipedia.org/wiki/Jordan_normal_form" rel="noreferrer">Jordan decomposition</a> of $A$ and $M$ is an invertible matrix such that $A=M^{-1}BM$ then we have $$ B = (MM^{-1})B(MM^{-1})=M(M^{-1}BM)M^{-1} =MAM^{-1}=M(PJP^{-1})M^{-1}=(MP)J(P^{-1}M^{-1})$$</p> <p>and thus the Jordan normal form of $A$ (namely $J$) is the same as that of $B$.</p>
2,338,199
<p>Let $G$ and $H$ be two groups. Suppose that $M$ and $N$ be two normal subgroups of $G$ and $H$ respectively such that we have the following,</p> <ul> <li><p>$G$ and $H$ are isomorphic.</p></li> <li><p>$M$ and $N$ are isomorphic.</p></li> </ul> <p>Then we know that $G/M$ and $H/N$ are isomorphic if for an isomorphism $\varphi:G\to H$ we have $\varphi(M)=N$.</p> <p>My Question is, </p> <blockquote> <p>If $G$ and $H$ be two groups and $M$ and $N$ are two normal subgroups of $G$ and $H$ respectively such that $G/M$ and $H/N$ are isomorphic then is it true that both $G$ and $H$ are isomorphic <strong>or</strong> $M$ and $N$ are isomorphic?</p> </blockquote> <p>I don't know how to approach this problem. Can anyone help me?</p>
Hagen von Eitzen
39,174
<p>Not at all.</p> <p>Just let $G$ and $H$ be arbitrary groups and $M=G$, $N=H$.</p> <hr> <p>Apart from this, $G\cong H$ and $M\cong N$ does <strong>not</strong> imply $G/M\cong H/N$. Just pick $G=H=\Bbb Z$, $M=2\Bbb Z$, $N=H$.</p>
Dietrich Burde
83,966
<p>No, this is not true. Consider $G=GL_n(K)$ and $M=SL_n(K)$, and $H=K^{\times}$, $N=1$. Then $G/M\cong H/N\cong K^{\times}$, but neither $G\cong H$ nor $M\cong N$.</p>
3,853,896
<p>Do I consider the probability before drawing both cards or after?</p> <p><strong>Question more clearly</strong>: A single card is removed at random from a deck of <span class="math-container">$52$</span> cards. From the remainder we draw <span class="math-container">$2$</span> cards at random and find that they are both spades. What is the probability that the first card removed was also a spade?</p>
Brice Cawley
833,121
<p>Careful: the two spades you observed do matter, because the question asks for a <em>conditional</em> probability. By symmetry, the two cards drawn from the remainder are just two uniformly random cards from the deck, so $P(\text{cards 2 and 3 are spades})=\frac{13\cdot 12}{52\cdot 51}$, while $P(\text{all three cards are spades})=\frac{13\cdot 12\cdot 11}{52\cdot 51\cdot 50}$. Hence $$P(\text{first card is a spade}\mid\text{next two are spades})=\frac{13\cdot 12\cdot 11/(52\cdot 51\cdot 50)}{13\cdot 12/(52\cdot 51)}=\frac{11}{50}.$$ Equivalently: once you have seen two spades, the removed card is equally likely to be any of the other $50$ cards, of which $11$ are spades.</p> <p>I hope this helps.</p>
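The conditioning can also be carried out mechanically from the definition $P(A\mid B)=P(A\cap B)/P(B)$; a sketch in Python (not part of the original answer), using exact rational arithmetic:

```python
from fractions import Fraction

def removed_card_spade_prob():
    """P(first removed card is a spade | the next two drawn are spades)."""
    # P(first three cards dealt are all spades)
    p_all_three = Fraction(13, 52) * Fraction(12, 51) * Fraction(11, 50)
    # P(cards 2 and 3 are spades) -- by exchangeability, same as for any two cards
    p_two = Fraction(13, 52) * Fraction(12, 51)
    return p_all_three / p_two

print(removed_card_spade_prob())  # 11/50
```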
1,915,455
<p>First of all they are all even... So it is at least $2$. But is it bigger than that? How do I find out?</p>
GoodDeeds
307,825
<p>This is solved using combinatorics. Any divisor <span class="math-container">$x$</span> of <span class="math-container">$n$</span> will be of the form <span class="math-container">$$x=p_1^{n_1}p_2^{n_2}\cdots p_k^{n_k}$$</span> where <span class="math-container">$0\le n_1\le a$</span>, <span class="math-container">$0\le n_2\le b$</span>, and so on.</p> <p>The <span class="math-container">$k$</span>-tuple <span class="math-container">$(n_1,n_2,\cdots,n_k)$</span> uniquely specifies a divisor. Thus, the number of divisors will be the number of ways of choosing <span class="math-container">$n_1,n_2,\cdots,n_k$</span> given the constraints.</p> <p>The value of <span class="math-container">$n_i$</span> in <span class="math-container">$x$</span> is independent of the value of <span class="math-container">$n_j$</span> for all <span class="math-container">$i\ne j$</span>. So, the number of ways of choosing <span class="math-container">$x$</span> will be the product of the number of ways of choosing <span class="math-container">$n_i$</span> for all <span class="math-container">$1\le i\le k$</span>.</p> <p><span class="math-container">$$\text{Number of ways}=\Pi_i \text{ (Number of ways of choosing }n_i)$$</span></p> <p>Now, <span class="math-container">$n_1$</span> can take any value from <span class="math-container">$0$</span> to <span class="math-container">$a$</span>, <span class="math-container">$n_2$</span> from <span class="math-container">$0$</span> to <span class="math-container">$b$</span>, and so on. That is, <span class="math-container">$n_1$</span> has <span class="math-container">$(a+1)$</span> choices, <span class="math-container">$n_2$</span> has <span class="math-container">$(b+1)$</span> choices, and so on.</p> <p>Thus, <span class="math-container">$$\text{Number of ways}=(a+1)\times(b+1)\times\cdots\times(m+1)$$</span></p>
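The product formula above turns into a short divisor-counting routine; a sketch (not part of the original answer), with trial division assumed adequate for small $n$:

```python
from math import prod

def num_divisors(n):
    """Count divisors of n as the product of (exponent + 1) over its prime factorization."""
    exponents = []
    d = 2
    while d * d <= n:
        e = 0
        while n % d == 0:
            n //= d
            e += 1
        if e:
            exponents.append(e)
        d += 1
    if n > 1:               # leftover prime factor with exponent 1
        exponents.append(1)
    return prod(e + 1 for e in exponents)

print(num_divisors(12))     # 12 = 2^2 * 3, so (2+1)(1+1) = 6 divisors
```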
3,696,265
<p>This is from Vakil's FOAG: exercise 2.5 C, part b. I understand how objects in the extension sheaf from a sheaf on a base <span class="math-container">$\mathcal B$</span> of a topology are created, but I am having trouble understanding how to produce a morphism of sheaves given a morphism of sheaves on a base. </p> <p>Assume we have topological space <span class="math-container">$X$</span>. Supposing we have two sheaves on our base <span class="math-container">$\mathcal B$</span>, say <span class="math-container">$F$</span> and <span class="math-container">$G$</span>, and maps <span class="math-container">$F(B_i) \to G(B_i)$</span> for all <span class="math-container">$B_i \in \mathcal B$</span>, these induce maps between any stalks <span class="math-container">$F_x \to G_x$</span> we like, and we know also for any <span class="math-container">$x \in X$</span>, that <span class="math-container">$F_x \simeq F^{ext}_x$</span>, where <span class="math-container">$F^{ext}$</span> is our extended sheaf (likewise for <span class="math-container">$G^{ext}$</span>). After this, I do not know how to proceed, nor do I know if I needed all of that information.</p>
Brian M. Scott
12,042
<p>Suppose that you want to count the <span class="math-container">$i$</span>-element subsets of <span class="math-container">$[i]=\{1,2,\ldots,i\}$</span>. Of course there’s only one of them, but we can also count them by the following roundabout procedure. We first expand the set from which we’re drawing the <span class="math-container">$i$</span>-element subset to <span class="math-container">$[i+j]=\{1,\ldots,i+j\}$</span>. Now for each <span class="math-container">$\ell\in[i]$</span> let <span class="math-container">$A_\ell$</span> be the family of <span class="math-container">$i$</span>-element subsets of <span class="math-container">$[i+j]$</span> that do not contain <span class="math-container">$\ell$</span>; <span class="math-container">$\bigcup_{\ell=1}^iA_\ell$</span> is the family of <span class="math-container">$i$</span>-elements subsets of <span class="math-container">$[i+j]$</span> that are not subsets of <span class="math-container">$[i]$</span>. By the <a href="https://en.wikipedia.org/wiki/Inclusion%E2%80%93exclusion_principle" rel="nofollow noreferrer">inclusion-exclusion principle</a> we have</p> <p><span class="math-container">$$\begin{align*} \left|\bigcup_{\ell=1}^iA_\ell\right|&amp;=\sum_{\varnothing\ne I\subseteq[i]}(-1)^{|I|+1}\left|\bigcap_{\ell\in I}A_\ell\right|\\ &amp;=\sum_{k=1}^i\binom{i}k(-1)^{k+1}\binom{i+j-k}i\;, \end{align*}$$</span></p> <p>since each non-empty <span class="math-container">$I\subseteq[i]$</span> has cardinality in <span class="math-container">$[i]$</span>, for each <span class="math-container">$k\in[i]$</span> there are <span class="math-container">$\binom{i}k$</span> subsets of <span class="math-container">$[i]$</span> of cardinality <span class="math-container">$k$</span>, and if <span class="math-container">$|I|=k$</span>, </p> <p><span class="math-container">$$\left|\bigcap_{\ell\in I}A_\ell\right|=\binom{i+j-k}i\;.$$</span></p> <p>There are <span class="math-container">$\binom{i+j}i$</span> <span 
class="math-container">$i$</span>-element subsets of <span class="math-container">$[i+j]$</span> altogether, so after we throw out the ones not contained in <span class="math-container">$[i]$</span>, we have left</p> <p><span class="math-container">$$\begin{align*} \binom{i+j}i&amp;-\sum_{k=1}^i\binom{i}k(-1)^{k+1}\binom{i+j-k}i\\ &amp;=\binom{i+j}i+\sum_{k=1}^i\binom{i}k(-1)^k\binom{i+j-k}i\\ &amp;=\sum_{k\ge 0}\binom{i}k(-1)^k\binom{i+j-k}i\;, \end{align*}$$</span></p> <p>and we already know that this is <span class="math-container">$1$</span>. </p> <p>Note that there is no need to specify an upper limit on the summation: <span class="math-container">$\binom{i}k=0$</span> when <span class="math-container">$k&gt;i$</span>, and <span class="math-container">$\binom{i+j-k}i=0$</span> when <span class="math-container">$k&gt;j$</span>, so all terms with <span class="math-container">$k&gt;i\land j$</span> are <span class="math-container">$0$</span> anyway.</p>
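The final identity $\sum_{k\ge 0}\binom ik(-1)^k\binom{i+j-k}i=1$ is easy to spot-check numerically; a sketch (not part of the original answer) using Python's `math.comb`, which returns $0$ when the lower index exceeds the upper, matching the convention in the last paragraph:

```python
from math import comb

def alternating_sum(i, j):
    """sum over k of (-1)^k C(i,k) C(i+j-k, i); the answer argues this is always 1."""
    return sum((-1)**k * comb(i, k) * comb(i + j - k, i) for k in range(i + 1))

print(all(alternating_sum(i, j) == 1 for i in range(8) for j in range(8)))  # True
```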
77,642
<p>$$\frac{1}{\sin(z)} = \cot (z) + \tan (\tfrac{z}{2})$$</p> <p>I did this: </p> <p><strong>First attempt</strong>: $$\displaystyle{\frac{1}{\sin (z)} = \frac{\cos (z)}{\sin (z)} + \frac{\sin (\frac{z}{2})}{ \cos (\frac{z}{2})} = \frac{\cos (z) }{\sin (z)} + \frac{2\sin(\frac{z}{4})\cos(\frac{z}{4})}{\cos^{2}(\frac{z}{4})-\sin^{2}(\frac{z}{4})}} = $$ $$\frac{\cos (z)(\cos^{2}(\frac{z}{4})-\sin^{2}(\frac{z}{4}))+2\sin z \sin(\frac{z}{4})\cos(\frac{z}{4})}{\sin (z)(\cos^{2}(\frac{z}{4})-\sin^{2}(\frac{z}{4}))}$$</p> <p>Stuck.</p> <p><strong>Second attempt</strong>: </p> <p>$$\displaystyle{\frac{1}{\sin z} = \left(\frac{1}{2i}(e^{iz}-e^{-iz})\right)^{-1} = 2i\left(\frac{1}{e^{iz}-e^{-iz}}\right)}$$</p> <p>Stuck.</p> <p>Does anybody see a way to continue?</p>
J. M. ain't a mathematician
498
<p>I'll go backwards; I hope you don't mind.</p> <p>$$\begin{align*}\cot\,z+\tan\frac{z}{2}&amp;=\frac{\cos\,z}{\sin\,z}+\frac{\sin\,z}{1+\cos\,z}\\&amp;=\frac{\sin^2 z+(1+\cos\,z)\cos\,z}{(1+\cos\,z)\sin\,z}\\&amp;=\frac{\cos^2 z+\sin^2 z+\cos\,z}{(1+\cos\,z)\sin\,z}\\&amp;=\frac{1+\cos\,z}{(1+\cos\,z)\sin\,z}\\&amp;=\csc\,z\end{align*}$$</p>
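Identities like this are easy to sanity-check numerically before (or after) hunting for an algebraic proof; a sketch with `cmath` (not part of the original answer; the sample points are arbitrary):

```python
import cmath

def lhs(z):
    """1/sin(z), i.e. csc(z)."""
    return 1 / cmath.sin(z)

def rhs(z):
    """cot(z) + tan(z/2)."""
    return cmath.cos(z) / cmath.sin(z) + cmath.tan(z / 2)

for z in (1.2, 0.7 + 0.3j):
    print(z, abs(lhs(z) - rhs(z)))   # differences at machine-precision scale
```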
17,772
<p>I have an algorithm that segments depth images using surface fitting. At the moment the algorithm uses least squares polynomial fitting, but polynomials are not powerful enough to fit the shapes that are in these images. I replaced the explicit polynomial $z = f(x,y)$ with the implicit fitting problem $f(x,y,z) = 0$, where $x$ and $y$ are the pixel's location, $z$ is the value of the pixel, and $f$ is a polynomial. A classic problem with many nice linear solutions. This did make the fitting much more powerful, but it left me with a problem I have been unable to solve for a while now. Once I had the least squares solution to the implicit polynomial, I had to solve for $z$: not easy at all, but I could search for the minimum (there are only 256 possible pixel values to search through, after all). The problem is that there was more than one solution for $z$, whereas a pixel can only have one value. That is, once the implicit polynomial was solved for $z$, there were several roots.</p> <p>So my question is:</p> <p>How do I formulate a least squares minimization problem for a single root? What constraints must I add?</p> <p>This might not be completely clear, so here is an example: You have a set of noisy data points $\{x,y\}$ that form a semi-circle about the origin, in the positive $y$ half only. I want to fit $f(x,y) = 0$ to the data points, or to be precise, a circle: $f = x^2 + y^2 + c = 0$.</p> <p>The least squares minimization problem is a nice simple linear $\min_c \sum_{i=0}^n (x_i^2 + y_i^2 + c)^2$, but I am only interested in minimizing the data points' distance to the positive half of the circle. Solving for $y$ and taking the positive root gives us $y_i = \sqrt{-c-x_i^2}$, giving the actual minimization problem as $\min_c \sum_{i=0}^n (y_i - \sqrt{-c-x_i^2})^2$: NOT a nice linear problem at all. These two minimization problems will give different results, since the negative half of the circle should not try to fit itself to any data points. 
How can I constrain the linear problem so that it is equivalent to minimizing to a single root? </p> <p>Or, in general, how do I constrain $\min_{a_{ijk}} (\sum_{i,j,k}a_{ijk}x^iy^jz^k)^2$ to a minimization to a single root in $z$?</p> <p>I hope this makes sense, and sorry for the length, but this has been annoying me for weeks.</p>
fedja
1,131
<p>Assuming that your surfaces are not too wild, it seems logical to try to combine the implicit approximation with the explicit one, i.e., to solve two least squares problems simultaneously: one for the implicit function and one for the explicit one. The explicit solution will give you a crude approximation for the root, and then you just choose the root of the implicit function that is closest to this crude approximation as the "true" position of your pixel. It certainly works with your half-circle example, but whether it'll work in your real case depends on the actual surfaces you are dealing with. </p> <p>If you could give us some idea of what they look like, what precision you are getting with the explicit approximation, what precision you are getting with the implicit one, and how close the "parasitic" roots can come to the "true" ones, we might be able to say more.</p> <p>OK, let's try one more idea. Minimize the sum of $f(x,y,z)^2+A(\partial_z f(x,y,z)-1)^2$ with some positive $A$ (you'll have to play with its choice to see what works best). The advantage is two-fold: first, now you have the true root separated (the $z$ derivative is not small) and second, you have <em>two</em> approximate equations for it ($f=0$ and $\partial_z f=1$), which should give you an extra advantage, the hope being that the parasitic roots of $f$ won't be able to match the derivative too. Also, keep the power in $z$ well below that in $x$ and $y$. Note that you are still better off than with $z-f(x,y)$ because this explicit formula satisfies all the extra conditions we are trying to impose automatically.</p> <p>And yes, it'll help to see the data, though, if possible, I'd prefer the one-dimensional case (the $x$ slice of your real data should have all the same problems already). Just post something reasonable (like $S(1),\dots, S(50)$) that we want to approximate by $f(x,z)=0$ with $f$ of some reasonable degree with which you currently have a problem. 
I'll try to play with it a bit when I have free time. </p>
David Bar Moshe
1,059
<p>I think that it is better to use a rational approximation for the fitting, $z = P(x,y)/Q(x,y)$, where $P$ and $Q$ are polynomials. This type of fitting is more flexible in general than polynomial fitting and doesn't have the multiple-solution problem. There are many algorithms to establish this type of fitting; one possibility is the Nelder-Mead simplex algorithm.</p>
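One common way to fit such a rational model without a general-purpose optimizer (an assumption on my part, not something this answer spells out) is to linearize: multiply through by $Q$ so that $z\,Q(x) = P(x)$, which is linear in the coefficients of $P$ and $Q$ and solvable by ordinary least squares. A one-dimensional sketch with NumPy, where the function names and degrees are purely illustrative:

```python
import numpy as np

def fit_rational(x, z, deg_p=2, deg_q=1):
    """Fit z ~ P(x)/Q(x), with Q's constant coefficient fixed to 1, via the
    linearized system z*Q(x) = P(x), i.e. z = P(x) - z*(Q(x) - 1)."""
    cols = [x**i for i in range(deg_p + 1)]            # columns for P's coefficients
    cols += [-z * x**i for i in range(1, deg_q + 1)]   # columns for Q's (q0 = 1)
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    p = coeffs[:deg_p + 1]
    q = np.concatenate([[1.0], coeffs[deg_p + 1:]])
    return p, q

# Recover a known rational function from noiseless samples.
x = np.linspace(0.0, 1.0, 50)
z = (1.0 + 2.0 * x + 0.5 * x**2) / (1.0 + 0.3 * x)
p, q = fit_rational(x, z)
print(np.round(p, 6), np.round(q, 6))
```

Note the caveat that goes with this linearization: it minimizes the residual $zQ - P$, which weights the data differently from the true residual $z - P/Q$, so for noisy data a refinement step (e.g. Nelder-Mead, as the answer suggests) may still be warranted.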
255,230
<p>A <em>complete linear hypergraph</em> is a <a href="https://en.wikipedia.org/wiki/Hypergraph" rel="nofollow noreferrer">hypergraph</a> $H=(V,E)$ such that </p> <ol> <li>$|e|\geq 2$ for all $e\in E$,</li> <li>$|e_1\cap e_2|=1$ for all $e_1, e_2\in E$ with $e_1\neq e_2$, and</li> <li>for all $v\in V$ we have $|\{e\in E:v\in e\}| \geq 2.$</li> </ol> <p>For $n&gt;2$ set $\mathbb{N}_n =\{1,\ldots,n\}$ and $$\ell(n)=\min\{|E|: E\subseteq{\cal P}(\mathbb{N}_n) \text{ and }(\mathbb{N}_n, E) \text{ is complete linear}\},$$ so $\ell(n)$ is the <em>greatest lower bound</em> for the number of edges on a complete linear hypergraph on $n$ points.</p> <p><strong>Question</strong>: What is the value of $\ell(n)$, depending on $n$?</p> <hr> <p>(Note concerning least upper bounds: If $H=(V,E)$ is a complete linear hypergraph with $|V|=n&gt;2$, then $|E| \leq n$ by the <a href="https://en.wikipedia.org/wiki/De_Bruijn%E2%80%93Erd%C5%91s_theorem_(incidence_geometry)" rel="nofollow noreferrer">theorem of DeBruijn-Erdos</a>, and we can reach $|E| = n$ with the so-called "near pencil", see the same link.)</p>
Aaron Meyerowitz
8,008
<p>I'll conjecture that for $n \gt 20$, $\ell(n) \le \sqrt{2n}+3.$ </p> <p>It seems optimal to have most points on only two lines. So consider first the case that there is at most one line with points on it having more than two lines. </p> <p>Write $[a_1,a_2,\cdots,a_t]$ for the configuration of one line with $t$ points with point $i$ having $a_i$ <em>other</em> lines on it. Assuming that all other points have only $2$ lines on them, the number of other points is $$\sum_{i \lt j}a_ia_j= \frac{(\sum a_i)^2-\sum a_i^2}{2}$$ giving a total of $n=\frac{(\sum a_i)^2-\sum a_i^2}{2}+t$ points and $e=1+\sum a_i$ lines.</p> <p>In brief $[n,e,[a_1,a_2,\cdots,a_t]$</p> <p>If all the $a_i=1$ then $n=\binom{t}2$ and $e=t+1$ so $e=\lceil \sqrt{2n}\rceil$ which, as noted by several people, is minimal. So for $\binom{t}2 \lt n \le \binom{t+1}2$ the best we can possibly have is $e=t+2.$ </p> <p>Here are the best results of this special type with up to $11$ lines.</p> <p>$[6, 4, [1, 1, 1]], [7, 7, [1, 5]], [8, 5, [1, 1, 2]], [9, 9, [1, 7]], $</p> <hr> <p>$[10, 5, [1, 1, 1, 1]], [11, 6, [1, 2, 2]], [12, 7, [1, 1, 4]], [13, 6, [1, 1, 1, 2]], [14, 7, [1, 2, 3]],$</p> <p>$ [15, 6, [1, 1, 1, 1, 1]], [16, 7, [1, 1, 1, 3]], [17, 7, [1, 1, 2, 2]], [18, 8, [1, 3, 3]], [19, 7, [1, 1, 1, 1, 2]], [20, 9, [1, 2, 5]],$</p> <hr> <p>$ [21, 7, [1, 1, 1, 1, 1, 1]], [22, 8, [1, 2, 2, 2]], [23, 8, [1, 1, 1, 1, 3]], [24, 8, [1, 1, 1, 2, 2]], [25, 9, [1, 1, 2, 4]], [26, 8, [1, 1, 1, 1, 1, 2]], [27, 9, [1, 1, 1, 1, 4]],$</p> <hr> <p>$[28, 8, [1, 1, 1, 1, 1, 1, 1]], [29, 9, [1, 1, 1, 2, 3]], [30, 9, [1, 1, 2, 2, 2]], [31, 9, [1, 1, 1, 1, 1, 3]], [32, 9, [1, 1, 1, 1, 2, 2]], [33, 10, [1, 2, 3, 3]], [34, 9, [1, 1, 1, 1, 1, 1, 2]], [35, 10, [1, 1, 1, 3, 3]], $</p> <hr> <p>$[36, 9, [1, 1, 1, 1, 1, 1, 1, 1]] [37, 10, [1, 2, 2, 2, 2]], [38, 10, [1, 1, 1, 1, 2, 3]], [39, 10, [1, 1, 1, 2, 2, 2]], [40, 10, [1, 1, 1, 1, 1, 1, 3]], [41, 10, [1, 1, 1, 1, 1, 2, 2]], [42, 11, [1, 1, 2, 2, 4]], [43, 10, [1, 1, 1, 1, 1, 
1, 1, 2]], [44, 11, [1, 1, 1, 1, 2, 4]],$</p> <hr> <p>$[45, 10, [1, 1, 1, 1, 1, 1, 1, 1, 1]], [46, 11, [1, 1, 1, 1, 1, 1, 4]], [47, 11, [1, 1, 2, 2, 2, 2]], [48, 11, [1, 1, 1, 1, 1, 2, 3]], [49, 11, [1, 1, 1, 1, 2, 2, 2]], [50, 11, [1, 1, 1, 1, 1, 1, 1, 3]], [51, 11, [1, 1, 1, 1, 1, 1, 2, 2]], [53, 11, [1, 1, 1, 1, 1, 1, 1, 1, 2]], $</p> <hr> <p>$[55, 11, [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]$</p> <p>A slight variation is to take a configuration (of this type or another), pick $s$ lines no $3$ sharing a point and consider the $\binom{s}{2}$ intersection points. These can be fused into one point preserving $e$ and decreasing $n$ to $n+1 -\binom{s}{2}.$</p> <p>For example the configuration $[22, 8, [1, 2, 2, 2]]$ can have $3$ points determined by $3$ lines fused into $1$ to get a solution for $(20,8).$ This is better than the solution given with $(20,9).$</p>
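As a quick sanity check on the counting above, here is a short Python sketch (mine, not part of the original answer) that recomputes $n = \frac{(\sum a_i)^2-\sum a_i^2}{2}+t$ and $e = 1+\sum a_i$ from a configuration $[a_1,\dots,a_t]$ and verifies a few of the listed entries.

```python
def n_and_e(a):
    # n = ((sum a_i)^2 - sum a_i^2)/2 intersection points,
    # plus the t points on the common line; e = that line plus sum a_i others
    s = sum(a)
    n = (s * s - sum(ai * ai for ai in a)) // 2 + len(a)
    e = 1 + s
    return n, e

# spot-check entries of the tables above, written as [n, e, [a_1, ..., a_t]]
assert n_and_e([1, 1, 1]) == (6, 4)
assert n_and_e([1, 5]) == (7, 7)
assert n_and_e([1, 2, 5]) == (20, 9)
assert n_and_e([1] * 10) == (55, 11)
```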
1,782,423
<p>Gödel's completeness theorem says that for any first order theory $F$, the statements derivable from $F$ are precisely those that hold in all models of $F$. Thus, it is not possible to have a theorem that is "true" (in the sense that it holds in the intersection of all models of $F$) but unprovable in $F$.</p> <p>However, Gödel's completeness theorem is not constructive. <a href="https://en.wikipedia.org/wiki/G%C3%B6del%27s_completeness_theorem#Relationship_to_the_compactness_theorem" rel="nofollow">Wikipedia</a> claims that (at least in the context of reverse mathematics) it is equivalent to the weak König's lemma, which in a constructive context is not valid, as it can be interpreted to give an effective procedure for the halting problem.</p> <p>My question is, is it still possible for there to be "unprovable truths" in the sense that I describe above in a first order axiomatic system, given that Gödel's completeness theorem is non-constructive, and hence, given a property that holds in the intersection of all models of $F$, we may not actually be able to <em>effectively</em> prove that proposition in $F$?</p>
Carl Mummert
630
<p>The problem is not building the model, it's making a completion of a theory. This is related closely to the fact that the set of first-order valid formulas is semi-computable but not computable. </p> <p>Reverse Mathematics <em>reveals</em> this particular quirk of the completeness theorem. The following are in Simpson's book:</p> <blockquote> <p>Theorem II.8.4, II.8.5 The following are provable in $\mathsf{RCA}_0$:</p> <ul> <li><p>Let $X$ be a consistent set of first-order sentences in a countable language so that $X$ is closed under logical consequence. Then there exists a countable model of $X$.</p></li> <li><p>Let $X$ be a consistent set of first-order sentences in a countable language so that $X$ is closed under logical consequence. Then there exists a theory $X^* \supseteq X$ so that $X^*$ is complete and consistent and closed under logical consequence. </p></li> </ul> </blockquote> <p>and</p> <blockquote> <p>Theorem IV.3.3. The following are equivalent to $\mathsf{WKL}_0$ over $\mathsf{RCA}_0$. </p> <ul> <li><p>Completeness theorem: If $X$ is a consistent set of first-order sentences in a countable language then there is a countable model of $X$. </p></li> <li><p>Lindenbaum's lemma: If $X$ is a consistent set of first-order sentences in a countable language then $X$ has a completion, that is, a consistent, complete theory containing $X$ closed under logical consequence. </p></li> </ul> </blockquote> <p>This means there is a hidden side effect of the definitions and of the completeness theorem. The theory of a model is always complete, consistent, and closed under logical consequence. But a given set of axioms for a theory may be consistent but not complete and not closed under logical consequence. 
</p> <p>So, when our definitions say that a model comes equipped with a satisfaction relation, this means that the completeness theorem implies "every consistent countable theory has a completion to a complete, consistent theory closed under logical consequence". We may not be able to find any such completion effectively, but this is the only non-effective step required in the proof. </p> <p>This is why $\mathsf{WKL}_0$ is required for the completeness theorem. If we only applied the completeness theorem to theories that were already closed under logical consequence, we would not need $\mathsf{WKL}_0$ at all, and in fact each such theory has a model computable from the theory. </p>
1,740,032
<p>In layman's terms, why would anyone ever want to change basis? Do eigenvalues have to do with changing basis? </p>
littleO
40,119
<p>Changing basis can make it easier to understand a given linear transformation.</p> <p>Suppose $T:V \to V$ is a linear transformation. It may seem difficult to understand or to visualize what effect $T$ has when it is applied to a vector $x$. However, suppose we are lucky enough to find a vector $v$ with the special property that $T(v) = \lambda v$ for some scalar $\lambda$. Then, it's easy enough to understand what $T$ does to $v$, at least.</p> <p>Suppose we are lucky enough to find an entire basis $\{v_1,\ldots,v_n\}$ of these special vectors. So $T(v_i) = \lambda_i v_i$, for some scalars $\lambda_i, i =1,\ldots,n$. Given any vector $x$, we can write $x$ as a linear combination of the vectors $v_i$: \begin{equation} x = c_1 v_1 + \cdots + c_n v_n. \end{equation} And now it seems easier to think about $T(x)$: \begin{align} T(x) &amp;= c_1 T(v_1) + \cdots + c_n T(v_n) \\ &amp;= c_1 \lambda_1 v_1 + \cdots + c_n \lambda_n v_n. \end{align} That is fairly simple. Each component of $x$ (with respect to our special basis) simply got scaled by a factor $\lambda_i$.</p> <p>So if we can find a basis of eigenvectors for $T$ (and often we can), then it helps us to understand $T$.</p> <hr> <p>By the way, one great practical example of a change of basis is computing a convolution efficiently using the fast Fourier transform (FFT) algorithm. Any discrete convolution operator is <em>diagonalized</em> by a special basis, the discrete Fourier basis. So, to perform a convolution on an image (in image processing), you take the FFT of the image (you <em>change basis</em> to the Fourier basis), then you multiply pointwise by the eigenvalues of the convolution operator, then you take the inverse FFT (change back to the standard basis). This approach is much, much faster than performing convolution in the space domain.</p>
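The FFT route described in the last paragraph can be sketched in a few lines of NumPy (this example is mine, not part of the original answer): circular convolution computed directly and via "change to the Fourier basis, multiply pointwise, change back" give the same result.

```python
import numpy as np

def circ_conv_direct(a, b):
    # circular convolution the slow way, straight from the definition (O(n^2))
    n = len(a)
    return np.array([sum(a[j] * b[(i - j) % n] for j in range(n))
                     for i in range(n)])

def circ_conv_fft(a, b):
    # the discrete Fourier basis diagonalizes circular convolution:
    # pointwise product in the Fourier basis = convolution in space
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

rng = np.random.default_rng(0)
a, b = rng.standard_normal(64), rng.standard_normal(64)
assert np.allclose(circ_conv_direct(a, b), circ_conv_fft(a, b))
```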
1,910,030
<p>Let $V$ be a vector space and $U,W$ be its subspaces. Prove that if the union $U\cup W$ is a subspace of $V$, then $W \subseteq U$ or $U \subseteq W$.</p> <p>I'm not sure where to begin at all really.</p>
Michael Curtis
365,275
<p>I believe the answer you are looking for lies in the multiplication principle for counting: for two independent choices A and B, the number of possibilities is N(A, B) = N(A) * N(B), where N(A) is the number of possibilities for choice A. In this case there are two independent attributes: suit and rank. This means once you calculate the number of ways to pull two ranks from thirteen possible, you still have to deal with the independent event of choosing the suits of those two ranks. Then, for the fifth card, you know that you cannot choose either rank you currently have (that would make the hand a full house), nor can you choose a card you already have (unless you are cheating), which means there are 8 cards you cannot choose to be the last card. This leaves 44 cards to choose from.</p> <p>See <a href="http://mathforum.org/library/drmath/view/56158.html" rel="nofollow">http://mathforum.org/library/drmath/view/56158.html</a></p>
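The count described above can be cross-checked by brute force; the following Python sketch (mine, not from the answer) compares the two-pair formula $\binom{13}{2}\binom{4}{2}^2\cdot 44$ against an exhaustive enumeration over a reduced deck, where enumeration is fast.

```python
from itertools import combinations
from collections import Counter
from math import comb

def two_pair_count(num_ranks, num_suits=4):
    # choose the two paired ranks, suits for each pair, then one odd
    # card from any of the remaining ranks (any suit)
    return (comb(num_ranks, 2) * comb(num_suits, 2) ** 2
            * (num_ranks - 2) * num_suits)

def two_pair_brute(num_ranks, num_suits=4):
    # enumerate every 5-card hand and keep those whose rank
    # multiplicities are exactly {1, 2, 2}
    deck = [(r, s) for r in range(num_ranks) for s in range(num_suits)]
    count = 0
    for hand in combinations(deck, 5):
        if sorted(Counter(r for r, _ in hand).values()) == [1, 2, 2]:
            count += 1
    return count

# standard 52-card deck: the well-known 123,552 two-pair hands
assert two_pair_count(13) == comb(13, 2) * 6 * 6 * 44 == 123552
# cross-check the formula against brute force on a reduced 7-rank deck
assert two_pair_count(7) == two_pair_brute(7)
```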
424,853
<p>As an interested outsider, I have been intrigued by the number of times that homotopy theory seems to have revamped its foundations over the past fifty years or so. Sometimes there seems to have been a narrowing of focus, via a choice to ignore certain &quot;pathological&quot;—or at least intractably complicated—phenomena; instead of considering all topological spaces, one focuses only on compactly generated spaces or CW complexes or something. Or maybe one chooses to focus only on <i>stable</i> homotopy groups. Other times, there seems to have been a broadening of perspective, as new objects of study are introduced to fill perceived gaps in the landscape. Spectra are one notable example. I was fascinated when I discovered the 1991 paper by Lewis, <a href="https://doi.org/10.1016/0022-4049(91)90030-6" rel="noreferrer">Is there a convenient category of spectra?</a>, showing that a certain list of seemingly desirable properties cannot be simultaneously satisfied. More recent concepts include model categories, <span class="math-container">$\infty$</span>-categories, and homotopy type theory.</p> <p>I was wondering if someone could sketch a timeline of the most important such &quot;foundational shifts&quot; in homotopy theory over the past 50 years, together with a couple of brief sentences about what motivated the shifts. Such a bird's-eye sketch would, I think, help mathematicians from neighboring fields get some sense of the purpose of all the seemingly high-falutin' modern abstractions, and reduce the impenetrability of the current literature.</p>
Mirco A. Mannucci
15,293
<p>Thanks to both Pavlov and White there is now an almost complete list of &quot;critical points&quot; in the history of homotopy theory.</p> <p>There are a few items that perhaps should make it into the list, for instance the Archaic Period: Betti introduced Betti's numbers <a href="https://link.springer.com/article/10.1007/BF02420029" rel="nofollow noreferrer">here</a> in 1870; incidentally, as simple as they are, Betti's numbers continue to play a dramatic role in applied math, for instance in Persistent Homology.</p> <p>I would suggest one tiny emendation: extend your 50 years slightly, by five years, to include a true <strong>Annus Mirabilis</strong> (1967), namely the work of Quillen on <strong>Model Categories</strong>.</p> <p>Let me say why I feel this is indeed a breakthrough and the beginning of the modern era (some of the work after 2005 is properly speaking not modern, but post-modern, see below).</p> <p>Before Quillen, in 1952 Steenrod and Eilenberg managed to get the <strong>Grand Unification of Cohomology</strong>, certainly a major breakthrough in the series of &quot;foundational efforts&quot; in Algebraic Topology.</p> <p>In a way Quillen tried to do the same for homotopical algebra, by introducing a set of axioms for &quot;doing homotopy&quot; in a category. 
The key notion here is <strong>weak equivalence</strong>, i.e. a set of maps in the ambient category which contains all the isomorphisms.</p> <p>This simple step is a foundational paradigm change, because it tells us WHAT homotopy is all about:</p> <p><strong>we move from equality (set theory) to isomorphism (category theory) to equivalence (homotopy theory)</strong>.</p> <p>Quillen adds some axioms on formal fibrations and cofibrations, in order to compute the so-called homotopy limits and colimits, i.e. limits and colimits &quot;up to homotopy&quot;.</p> <p>NOTE, on where homotopy limits and colimits come from: start with a category carrying a model structure, and &quot;localize&quot; it, i.e. formally invert all the weak equivalences. The new category, called the homotopy category, is in general not well behaved as far as standard limits and colimits go: one thus has to introduce a new kind of universal object appropriate for the homotopic context.</p> <p>So, with the Annus Mirabilis a new chapter begins, but it does not end there. As a result of Quillen's shift, many &quot;things&quot; that were not under the rubric of homotopy theory, structures that are not even topological, acquire a homotopic flavor.</p> <p>Funny enough, one of these is category theory itself: Cat, the category of small categories, has a default model structure, where the weak equivalences are simply the categorical equivalences.</p> <p>There were several attempts to generalize and expand on the view of homotopy given by model structures, but if we focus on truly radical new insights, here it goes:</p> <p>back in the golden era, a standard homotopy was simply a continuous deformation, so essentially an invertible path between maps. 
In Quillen you start with the weak equivalences, but let us go back to the continuous deformation: if the category where I want to introduce my weak equivalences happens to be a 2-category, and I look at the groupoid of 2-maps therein, I have my continuous deformations.</p> <p>So, here is the key insight that migrates from the modern approach to the post-modern: do not look at a single category alone, but see it as only the ground floor of a higher and higher groupoid (paths, paths of paths, etc.). Rather than being the bedrock of homotopy, model structures become models, or presentations, of the REAL OBJECT of homotopy, the <strong>invariant infinity groupoid</strong> in all its splendor. That basic insight is already in Grothendieck, around 1983, maybe earlier, but has blossomed into an entire field thanks to Voevodsky, Lurie, Rezk, etc.</p> <p>What is fascinating is that the post-modern era is not simply foundational, but also admits a foundationalist approach: a large swath of mathematics can in principle be seen from this angle, of &quot;getting rid of equality&quot; and replacing it with equivalences and their higher versions.</p>
3,611,845
<p>For the Stable Matching algorithm by Gale-Shapley, how do I prove that at most one man will get his worst choice? </p> <p>My intuition is that I have to use contradiction. Assume that there are two men who will get their worst preferences: <span class="math-container">$M_1$</span> with <span class="math-container">$W_1$</span> and <span class="math-container">$M_2$</span> with <span class="math-container">$W_2$</span>. I have to prove <span class="math-container">$M_2$</span> and <span class="math-container">$W_2$</span> are unstable. However, I can't think of anything. Can anyone help me with the proof? </p> <p>Thanks! </p>
stewbasic
197,161
<p>I assume the basic setup as described on <a href="https://en.wikipedia.org/wiki/Gale%E2%80%93Shapley_algorithm" rel="nofollow noreferrer">wikipedia</a> (ie equal numbers of men and women, strict preferences). Assume <span class="math-container">$M_1$</span> and <span class="math-container">$M_2$</span> are paired with their worst choices, <span class="math-container">$W_1$</span> and <span class="math-container">$W_2$</span>, respectively. Observe the following:</p> <ol> <li><span class="math-container">$M_1$</span> must propose to every woman over the course of the algorithm, proposing to <span class="math-container">$W_1$</span> last.</li> <li><span class="math-container">$M_1$</span> makes at most one proposal per round.</li> <li>Once a woman is proposed to, they remain engaged for the rest of the algorithm (though maybe not to the same person).</li> </ol> <p>Consider the state just before the final round. By 1 and 2, <span class="math-container">$M_1$</span> has proposed to each woman other than <span class="math-container">$W_1$</span> (and possibly <span class="math-container">$W_1$</span> also), and similarly for <span class="math-container">$M_2$</span>. Thus every woman has been proposed to, so by 3 they are all engaged. We have assumed equal numbers of men and women, so the men are also all engaged. But this means the algorithm should terminate, a contradiction.</p>
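For readers who want to experiment, here is a minimal Python sketch of man-proposing Gale–Shapley (my own implementation, with invented helper names, not part of the answer), together with a randomized check of the claim that at most one man ends up matched with his worst choice.

```python
import random

def gale_shapley(men_prefs, women_prefs):
    """Man-proposing Gale-Shapley.  Each prefs argument is a dict:
    person -> list of potential partners, most preferred first."""
    next_choice = {m: 0 for m in men_prefs}          # next woman to propose to
    engaged_to = {}                                  # woman -> current fiance
    rank = {w: {m: i for i, m in enumerate(p)}       # woman's ranking of men
            for w, p in women_prefs.items()}
    free = list(men_prefs)
    while free:
        m = free.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged_to:
            engaged_to[w] = m
        elif rank[w][m] < rank[w][engaged_to[w]]:    # w prefers the newcomer
            free.append(engaged_to[w])
            engaged_to[w] = m
        else:                                        # w rejects m
            free.append(m)
    return {m: w for w, m in engaged_to.items()}

# empirical check of the claim: at most one man gets his worst choice
random.seed(1)
for _ in range(200):
    n = 5
    men = [f"m{i}" for i in range(n)]
    women = [f"w{i}" for i in range(n)]
    mp = {m: random.sample(women, n) for m in men}
    wp = {w: random.sample(men, n) for w in women}
    match = gale_shapley(mp, wp)
    assert sum(match[m] == mp[m][-1] for m in men) <= 1
```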
613,836
<blockquote> <p>Let $x,y,z$ be integers and $11$ divides $7x+2y-5z$. Show that $11$ divides $3x-7y+12z$.</p> </blockquote> <p>I know a method to solve this problem which is to write into $A(7x+2y-5z)+11(B)=C(3x-7y+12z)$, where A is any integer, B is any integer expression, and C is any integer coprime with $11$.</p> <p>I have tried a few trials for example $(7x+2y-5z)+ 11(x...)=6(3x-7y+12z)$, but it doesn't seem to work. My question is are there any tricks or algorithms for quicker way besides trials and errors? Such as by observing some hidden hints or etc?</p> <p>I am always weak at this type of problems where we need to make smart guess or gain some insight from a pool of possibilities? Any help will be greatly appreciated. And maybe some tips to solve these types of problems.</p> <p>Thanks very much!</p>
lab bhattacharjee
33,337
<p>The easiest way can be : eliminate one of the three variables like below</p> <p>$$3(7x+2y-5z)-7(3x-7y+12z)=55y-99z=11(5y-9z)$$</p> <p>$$\iff 7(3x-7y+12z)=3(7x+2y-5z)-11(5y-9z)$$</p> <p>If $\displaystyle 11$ divides $\displaystyle 7x+2y-5z,11$ will divide $\displaystyle7(3x-7y+12z)$</p> <p>But $(7,11)=1$</p>
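Both the algebraic identity and the divisibility claim can be verified mechanically; here is a short Python sketch (not part of the original answer) that checks them on random integers.

```python
import random

random.seed(0)
for _ in range(1000):
    x, y, z = (random.randint(-100, 100) for _ in range(3))
    # the algebraic identity behind the argument
    assert 3*(7*x + 2*y - 5*z) - 7*(3*x - 7*y + 12*z) == 11*(5*y - 9*z)
    # the divisibility claim itself
    if (7*x + 2*y - 5*z) % 11 == 0:
        assert (3*x - 7*y + 12*z) % 11 == 0
```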
613,836
<blockquote> <p>Let $x,y,z$ be integers and $11$ divides $7x+2y-5z$. Show that $11$ divides $3x-7y+12z$.</p> </blockquote> <p>I know a method to solve this problem which is to write into $A(7x+2y-5z)+11(B)=C(3x-7y+12z)$, where A is any integer, B is any integer expression, and C is any integer coprime with $11$.</p> <p>I have tried a few trials for example $(7x+2y-5z)+ 11(x...)=6(3x-7y+12z)$, but it doesn't seem to work. My question is are there any tricks or algorithms for quicker way besides trials and errors? Such as by observing some hidden hints or etc?</p> <p>I am always weak at this type of problems where we need to make smart guess or gain some insight from a pool of possibilities? Any help will be greatly appreciated. And maybe some tips to solve these types of problems.</p> <p>Thanks very much!</p>
AHH
31,500
<p>Since $\gcd(2,11)=1$, we have $11 \mid 7x+2y-5z$ if and only if $11 \mid 2(7x+2y-5z)=14x+4y-10z$, and $$14x+4y-10z \equiv (14-11)x+(4-11)y+(-10+11)z = 3x-7y+z \equiv 3x-7y+12z \pmod{11}.$$</p>
2,100,726
<p><strong>Problem:</strong> In a binomial experiment with 45 trials, the probability of more than 25 successes can be approximated by $P\left(z &gt; \frac{25-27}{3.29}\right)$</p> <p>What is the probability of success of a single trial of this experiment? </p> <p><strong>Choices:</strong> </p> <ul> <li>0.07</li> <li>0.56</li> <li>0.79</li> <li>0.61</li> <li>0.6</li> </ul> <p><strong>My Solution:</strong> Since the experiment can be approximated by a normal distribution, I evaluated P(z) using NormalCdf on my calculator. $$P(z &gt; \frac{25-27}{3.29}) = NormCdf(\frac{25-27}{3.29},\infty, 0, 1) = 0.72834 $$</p> <p>Then, I set up the binomialCdf expression as $binomCdf(45, p, 26, 45)$ using 26 as the minimum number of successes as the question specifies it to be more than 25. </p> <p><strong>Through inspection, I found p to be 0.61</strong>. However, the answer key says otherwise. </p> <p>Can someone check the error in my work? Thank you!</p>
Feynmaniac
407,050
<p>As mentioned, $\frac{1}{1-e^{\frac{1}{x}}} \xrightarrow[x \to \infty]{} -\infty$. As for the limit of $f(x) + x$, </p> <p>$\frac{1}{1-e^{\frac{1}{x}}} + x = \frac{1}{1-(1+\frac{1}{x} +\frac{1}{2x^2} +\mathcal{O}(\frac{1}{x^3}))} + x$</p> <p>$= \frac{-x}{1+\frac{1}{2x} + \mathcal{O}(\frac{1}{x^2})} + x = -x(1-\frac{1}{2x} + \mathcal{O}(\frac{1}{x^2})) + x $</p> <p>$= \frac{1}{2} + \mathcal{O}(\frac{1}{x}) \xrightarrow[x \to \infty]{} \frac{1}{2}$</p>
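A quick numerical check of this expansion (a Python sketch of mine, not part of the answer): evaluating $\frac{1}{1-e^{1/x}}+x$ at large $x$ approaches $\frac12$, with an error of order $\frac1x$. The use of `expm1` to avoid cancellation is my own implementation choice.

```python
import math

def f_plus_x(x):
    # 1/(1 - e^(1/x)) + x, using 1 - e^t = -expm1(t) to avoid
    # catastrophic cancellation for small t = 1/x
    return -1.0 / math.expm1(1.0 / x) + x

# the expansion gives f(x) + x = 1/2 - 1/(12x) + O(1/x^3)
for x in (1e2, 1e4, 1e6):
    assert abs(f_plus_x(x) - 0.5) < 0.1 / x
```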
4,045,943
<p>I was requested to find the values of the parameter p for which the following series converges:</p> <p><span class="math-container">$$\sum_{n=1}^\infty \sqrt{n} \ln^{p} \left(1+ \frac{1}{\sqrt{n}}\right)$$</span></p> <p>I tried using <a href="https://en.wikipedia.org/wiki/Root_test" rel="nofollow noreferrer">Cauchy's test</a> and the <a href="https://en.wikipedia.org/wiki/Term_test" rel="nofollow noreferrer">Term test</a>, but reached a dead-end. I also tried to use the ratio test with but it didn't seem to be helpful in this situation.</p> <p>At this point we aren't allowed to use the integral test.</p> <p>I would appreciate any suggestions on how to approach this problem.</p>
user
293,846
<p>Hint: use the fact that <span class="math-container">$$ x-\frac12x^2\le\log(1+x)\le x. $$</span></p> <p>If required it can be proved integrating the inequality <span class="math-container">$$ 1-x\le\frac1{1+x}\le1, $$</span> obviously valid for <span class="math-container">$x\ge0$</span>.</p>
4,045,943
<p>I was requested to find the values of the parameter p for which the following series converges:</p> <p><span class="math-container">$$\sum_{n=1}^\infty \sqrt{n} \ln^{p} \left(1+ \frac{1}{\sqrt{n}}\right)$$</span></p> <p>I tried using <a href="https://en.wikipedia.org/wiki/Root_test" rel="nofollow noreferrer">Cauchy's test</a> and the <a href="https://en.wikipedia.org/wiki/Term_test" rel="nofollow noreferrer">Term test</a>, but reached a dead-end. I also tried to use the ratio test with but it didn't seem to be helpful in this situation.</p> <p>At this point we aren't allowed to use the integral test.</p> <p>I would appreciate any suggestions on how to approach this problem.</p>
Raffaele
83,382
<p><span class="math-container">$$\ln(1+x)\sim x\quad\text{ as }x\to 0,$$</span> so <span class="math-container">$$\ln \left(1+\frac{1}{\sqrt{n}}\right)\sim \frac{1}{\sqrt{n}}\quad\text{ as }n\to\infty,$$</span> and hence <span class="math-container">$$\left(\ln \left(1+\frac{1}{\sqrt{n}}\right)\right)^p\sim \left(\frac{1}{\sqrt{n}}\right)^p=\frac{1}{n^{p/2}}.$$</span> The general term therefore satisfies <span class="math-container">$$\sqrt n \left(\ln \left(1+\frac{1}{\sqrt{n}}\right)\right)^p\sim\sqrt n\cdot\frac{1}{n^{p/2}}=\frac{1}{n^{p/2-1/2}},$$</span> so by limit comparison and the <a href="https://en.wikipedia.org/wiki/Convergence_tests#p-series_test" rel="nofollow noreferrer">p-series test</a>, the series <span class="math-container">$$\sum_{n=1}^{\infty} \sqrt n \left(\ln \left(1+\frac{1}{\sqrt{n}}\right)\right)^p$$</span> converges if and only if <span class="math-container">$$\frac{p}{2}-\frac{1}{2}&gt;1,\quad\text{i.e. }p&gt;3.$$</span></p>
2,091,251
<p>I want to be able to check if a function f is even, odd, or neither using Maple's symbolic math. Unfortunately I don't get a boolean return of 'true' on a function I know is even.</p> <pre><code>g:=abs(x)/x^2 evalb(g(x)=g(-x)) false </code></pre> <p>Since my function is even, that is a problem. It turns out that my expression is multiplying g by x or -x instead of inputing/composing them.</p> <p>How can I get Maple to check the parity of my function?</p>
Galen
230,586
<p>I think I figured out what I was looking for. I needed a function that would evaluate g at a value, which I found in the <a href="http://www.maplesoft.com/support/help/Maple/view.aspx?path=eval" rel="nofollow noreferrer">documentation</a>.</p> <pre><code>evalb(eval(g, x = x) = eval(g, x = -x)) </code></pre>
2,091,251
<p>I want to be able to check if a function f is even, odd, or neither using Maple's symbolic math. Unfortunately I don't get a boolean return of 'true' on a function I know is even.</p> <pre><code>g:=abs(x)/x^2 evalb(g(x)=g(-x)) false </code></pre> <p>Since my function is even, that is a problem. It turns out that my expression is multiplying g by x or -x instead of inputing/composing them.</p> <p>How can I get Maple to check the parity of my function?</p>
T.V.
580,653
<p>More human readable method (especially in the part of declaring a function):</p> <pre><code>h(x) := abs(x)/x^2; evalb( h(x) = h(-x) ); </code></pre>
1,410,889
<p>Consider the function with domain $A = \{ (x,y) \in \, \mathbb{R}^2: (x,y) \neq (0,0)\}$ given by</p> <p>$$\frac{2x^2y}{x^4+y^2}$$</p> <blockquote> <p>Letting $(x,y)$ approach $(0,0)$ along the straight line $y=ax$, where $a$ is a real constant, we find that the limit is zero. This is not enough to conclude that the limit exists. Explain why. </p> </blockquote> <p>I'm incredibly confused so...</p> <p>$$\lim_{x=y \to 0} \frac{2x^2y}{x^4+y^2} = \frac{0}{0} =0$$</p> <p>Amongst two different paths...</p> <p>$$\lim_{x \to 0} \frac{2x^2y}{x^4+y^2} = \frac{0\times y}{0+y^2} =0$$ $$\lim_{y \to 0} \frac{2x^2y}{x^4+y^2} = \frac{x\times 0}{x^4+0} =0$$ Which works better as a proof in my books, as it approaches zero along the two paths. Hence the limit is continuous at $(0,0)$</p> <p>Now I know the definition of continuity is formally: <strong>A function f is continuous at a point $a$ if for every $\epsilon&gt;0$, there exists $\delta&gt;0$ such that $|x-a|&lt;\delta$ implies that $|f(x)-f(a)|&lt;\epsilon$</strong></p> <p>But I get confused at finding $\delta$ and $\epsilon$</p> <p>So what I usually use is $x=a$ $$\lim_{x \to a} f(x) = f(a)$$</p> <p>I'm guessing this is not enough proof as we have not proven that $f(x)$ is continuous at every point of the domain.</p>
Cameron Williams
22,551
<p>Here's a hint: what if you approach $(0,0)$ along the path $y = x^2$? Note that $f$ is only continuous at $(0,0)$ if <strong>every</strong> path to $(0,0)$ yields the same limit.</p>
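Numerically, the difference between the two kinds of paths is easy to see; here is a small Python sketch (mine, not part of the hint) evaluating $f$ along lines $y=ax$ and along the parabola $y=x^2$ near the origin.

```python
def f(x, y):
    return 2 * x**2 * y / (x**4 + y**2)

# along any straight line y = a x the values tend to 0 ...
for a in (1.0, -2.0, 5.0):
    for x in (1e-3, 1e-6):
        assert abs(f(x, a * x)) < 1e-2

# ... but along the parabola y = x^2 the value is identically 1,
# so the two-variable limit at (0, 0) cannot exist
for x in (1e-3, 1e-6):
    assert abs(f(x, x**2) - 1.0) < 1e-9
```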
1,787,744
<p>There are two measurable spaces with Borel sigma-algebras on them $(X, \mathcal B (X)) $ and $(Y, \mathcal B (Y)) $. There is also a continuous function $f:(X, \mathcal B (X)) \to (Y, \mathcal B (Y)) $. Prove that this function is measurable with respect to Borel sigma-algebra. </p>
Community
-1
<p>If $(Z, \tau ) $ is a topological space then the Borel $\sigma$-algebra is by definition the smallest $\sigma$-algebra that contains $\tau.$ So:</p> <ol> <li>If a set $A\in B(Y) $ is open then by the continuity of $f$ the set $f^{-1} (A) $ is open, hence $f^{-1} (A)\in B(X).$</li> <li>Let $\mathcal{M} =\{ A\in B(Y) : f^{-1} (A) \in B(X) \}$; then $\mathcal{M} $ is a $\sigma$-algebra that contains every open subset of $Y$, hence $\mathcal{M} = B(Y) .$</li> </ol> <p>Thus $f$ is measurable.</p>
2,344,931
<p>It is given that $a,b$ are roots of $3x^2+2x+1$ then find the value of: $$\left(\dfrac{1-a}{1+a}\right)^3+\left(\dfrac{1-b}{1+b}\right)^3$$</p> <p>I thought to proceed in this manner:</p> <p>We know $a+b=\frac{-2}{3}$ and $ab=\frac{1}{3}$. Using this I tried to convert everything to <em>sum and product of roots</em> form, but this way is too complicated! </p> <p>Please suggest a simpler process.</p>
farruhota
425,072
<p>The answer is $-10$. Find the common denominator: $$\left(\dfrac{1-a}{1+a}\right)^3+\left(\dfrac{1-b}{1+b}\right)^3=\frac{(1-ab-(a-b))^3+(1-ab+(a-b))^3}{(1+ab+(a+b))^3}=$$ $$\frac{2\cdot\left(\frac23\right)^3+2\cdot 3 \cdot\left(\frac23\right)\cdot (a-b)^2}{(\frac{2}{3})^3}=\frac{2\cdot\left(\frac23\right)^3+4\cdot ((a+b)^2-4ab)}{(\frac{2}{3})^3}=\frac{-10(\frac{2}{3})^3}{(\frac23)^3}=-10.$$</p>
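The value $-10$ can be confirmed directly from the complex roots; here is a short Python sketch using the quadratic formula (mine, not part of the answer).

```python
import cmath

# roots of 3x^2 + 2x + 1 from the quadratic formula
disc = cmath.sqrt(2**2 - 4 * 3 * 1)      # sqrt(-8), purely imaginary
a = (-2 + disc) / 6
b = (-2 - disc) / 6

g = lambda t: ((1 - t) / (1 + t)) ** 3
value = g(a) + g(b)

# the imaginary parts cancel and the real part is -10
assert abs(value - (-10)) < 1e-9
```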
2,344,931
<p>It is given that $a,b$ are roots of $3x^2+2x+1$ then find the value of: $$\left(\dfrac{1-a}{1+a}\right)^3+\left(\dfrac{1-b}{1+b}\right)^3$$</p> <p>I thought to proceed in this manner:</p> <p>We know $a+b=\frac{-2}{3}$ and $ab=\frac{1}{3}$. Using this I tried to convert everything to <em>sum and product of roots</em> form, but this way is too complicated! </p> <p>Please suggest a simpler process.</p>
serg_1
450,580
<p>From sum and product, we have:$$a+b+ab+ab=0$$ $$a(b+1)=-b(a+1)$$$$\left(\dfrac{1-a}{1+a}\right)^3+\left(\dfrac{1-b}{1+b}\right)^3=\left(1-\dfrac{2a}{1+a}\right)^3+\left(1-\dfrac{2b}{1+b}\right)^3=\left(1-\dfrac{2ab}{b(1+a)}\right)^3+\left(1-\dfrac{2b}{1+b}\right)^3=\left(1+\dfrac{2b}{1+b}\right)^3+\left(1-\dfrac{2b}{1+b}\right)^3$$ Letting $x=\tfrac{2b}{b+1}$, we have $(1+x)^3+(1-x)^3=2+6x^2=2+6\left(\dfrac{2b}{1+b}\right)^2$.</p> <p>As, $3b^2+2b+1=0$, so $(b+1)^2=-2b^2$, and finally $2+6\left(\dfrac{2b}{1+b}\right)^2=2+6\cdot(-2)=-10.$</p>
303,283
<p>A basic PDE I would like to understand much better is the viscous Hamilton-Jacobi equation, such as: \begin{equation*} u - \epsilon \Delta u + H(Du) = f(x) \end{equation*} or \begin{equation*} u_{t} - \epsilon \Delta u + H(Du) = f(x) \end{equation*} with or without boundary conditions in the stationary case, or the Cauchy problem in the time-dependent case. I'm interested in the case when $\epsilon &gt; 0$.</p> <p>Very general viscosity solutions theory implies these equations have continuous solutions under mild assumptions on $H$ and $f$. However, my understanding is the Laplacian term should give us much better regularity than just continuity. </p> <p>This is a relatively basic example and quite well-motivated from the point of view of stochastic control theory, but nonetheless I'm having trouble finding a down-to-earth reference that shows how to establish regularity for these equations without throwing in the kitchen sink. (In other words, I'm looking for a reference at the level of lecture notes so that I can avoid a little longer wading through Gilbarg-Trudinger or the parabolic equivalent.). It would be particularly nice if the reference in question used a fixed point theorem argument to get existence and regularity simultaneously, but I'm open to an alternative approach.</p> <p>Is anyone aware of lectures notes that explain how to establish regularity for these equations? Alternatively, are there papers where this is explained in a compact way? My complaint as a student here is this is touched on only very briefly in Evans (in the discussion of fixed point theorems) and the more advanced textbooks on this strike me as extremely dense and somewhat old-fashioned. I may as well start working my way into those books, but if I can get a head start with something more concrete it would be nice.</p>
Luc Guyot
84,349
<p>Yes, this is <strong>true</strong> and it follows immediately from</p> <blockquote> <blockquote> <p><strong>[1, Corollary 4]</strong> Two numerical semigroups with multiplicity three are equal if and only if they have the same Frobenius number and the same gender.</p> </blockquote> </blockquote> <p>together with</p> <blockquote> <blockquote> <p><strong>[1, Lemma 6]</strong> Let $S$ be a numerical semigroup with multiplicity three, Frobenius number $F(S)$ and gender $G(S)$. Then $\frac{F(S) + 1}{2} \le G(S) &lt; \frac{2F(S) + 3}{3}$.</p> </blockquote> </blockquote> <p>and</p> <blockquote> <blockquote> <p><strong>[1, Theorem 7]</strong> Let $F$ be a positive integer greater than or equal to four that is not a multiple of three. Let $G$ be a positive integer such that $\frac{F + 1}{2} \le G &lt; \frac{2F + 3}{3}$. Then $S = \langle 3, 3G - F, F + 3 \rangle$ is a numerical semigroup with multiplicity three, Frobenius number $F$ and gender $G$.</p> </blockquote> </blockquote> <hr> <p>[1] J. C. Rosales, "Numerical semigroups with multiplicity three and four", 2005. </p>
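The cited results can be checked on small examples; here is a naive Python sketch (mine, not from the answer or from [1]) that computes the Frobenius number and gender of a numerical semigroup from its generators and verifies an instance of Theorem 7.

```python
def numerical_semigroup_data(gens, bound=200):
    """Return (Frobenius number, gender) of the numerical semigroup
    generated by gens, computed naively up to `bound` (assumes the
    generators are coprime so that both quantities are finite)."""
    reachable = {0}
    changed = True
    while changed:
        changed = False
        for s in list(reachable):
            for g in gens:
                t = s + g
                if t <= bound and t not in reachable:
                    reachable.add(t)
                    changed = True
    gaps = [n for n in range(1, bound + 1) if n not in reachable]
    return max(gaps), len(gaps)

# Theorem 7 with F = 7, G = 4: S = <3, 3G - F, F + 3> = <3, 5, 10>
# should have Frobenius number 7 and gender 4
assert numerical_semigroup_data([3, 5, 10]) == (7, 4)
```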
807,025
<p>What is the exact definition of a reflection through the plane $a.r=0$ for a given vector a and $r=(x,y,z)$. Of course I know what it is but I don't know what's part of its definition and what's part of its properties anymore.</p> <p>My aim is to prove that this type of reflection is linear and that $R(u).R(u)=u.u$, can you help me? It seems so obvious that I can't actually prove it...</p> <p>Thank you</p>
Moishe Kohan
84,907
<p>It is the unique isometry of $R^3$ whose fixed point set is exactly the plane $a\cdot r=0$.</p>
807,025
<p>What is the exact definition of a reflection through the plane $a.r=0$ for a given vector a and $r=(x,y,z)$. Of course I know what it is but I don't know what's part of its definition and what's part of its properties anymore.</p> <p>My aim is to prove that this type of reflection is linear and that $R(u).R(u)=u.u$, can you help me? It seems so obvious that I can't actually prove it...</p> <p>Thank you</p>
Will Jagy
10,400
<p>Given a dot product and a nonzero vector $a,$ the reflection $\tau_a$ is given by $$ \tau_a (x) = x - \left( \frac{2 a \cdot x}{a \cdot a}\right) a $$ Straightforward to calculate $\tau_a (x) \cdot \tau_a(x).$ </p> <p>Note that a reflection, along with being an isometry, is also self adjoint, meaning its matrix will be symmetric as well as orthogonal; we write this as $$ \tau_a (x) \cdot y = x \cdot \tau_a(y) $$If this seems peculiar (it does to me), consider that, given the reflection matrix $R,$ there is some orthogonal matrix $P,$ meaning $P^T P = I,$ such that $P^T R P$ is a diagonal matrix with $n-1$ diagonal entries equal to $1$ and the final diagonal entry equal to $-1.$ </p>
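The formula for $\tau_a$ is easy to test numerically; the following NumPy sketch (mine, not part of the answer) checks the isometry property $\tau_a(x)\cdot\tau_a(x)=x\cdot x$, the self-adjointness $\tau_a(x)\cdot y = x\cdot\tau_a(y)$, and that vectors in the plane $a\cdot r=0$ are fixed.

```python
import numpy as np

def reflect(a, x):
    # tau_a(x) = x - 2 (a.x)/(a.a) a
    return x - 2 * np.dot(a, x) / np.dot(a, a) * a

rng = np.random.default_rng(0)
a = rng.standard_normal(3)
x, y = rng.standard_normal(3), rng.standard_normal(3)

# isometry: tau_a(x).tau_a(x) = x.x
assert np.isclose(np.dot(reflect(a, x), reflect(a, x)), np.dot(x, x))
# self-adjoint: tau_a(x).y = x.tau_a(y)
assert np.isclose(np.dot(reflect(a, x), y), np.dot(x, reflect(a, y)))
# vectors in the plane a.r = 0 are fixed
v = np.cross(a, x)          # any vector orthogonal to a
assert np.allclose(reflect(a, v), v)
```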
1,673,277
<p>In what scenarios is Goldbach's conjecture, that all even numbers greater than 4 are the sum of two prime numbers; a natural conjecture to research?</p>
hmakholm left over Monica
14,366
<p>In Mallory's immortal words: <strong>Because it's there.</strong></p> <p>After the proof of Fermat's last theorem, Goldbach's conjecture is probably the leading example of a conjecture which is <em>extremely simple to explain</em> -- even a grade schooler will be able to understand what the <em>question</em> is -- but where no answer has been produced even after centuries of concerted effort by a host of very smart people.</p> <p>For some people, that very property makes it impossible <strong>not</strong> to think about the problem and try to figure out ways to attack it. There is something infuriating about such a simple statement resisting all the human ingenuity we can throw at it.</p> <p>A different question is why universities would <em>pay</em> people to think about this particular problem. Here one can point either to the fact that (as Patrick notes) the mathematics that's created <em>while</em> attacking hard problems such as this often turns out to have broader applications -- or to the fact that giving academics the freedom (at least collectively) to work on whatever problem catches their fancy has <em>historically</em> been very successful in getting useful and interesting breakthroughs that nobody could have anticipated.</p> <p>Even if Goldbach's conjecture <em>in particular</em> could be reliably known to lead nowhere useful, we don't have any workable <em>general</em> way to force basic research in a useful direction that works better than "follow your curiosity". Trying to stomp out individual lines of inquiry because they're deemed fruitless would generally create too many secondary problems (general discontent, researchers who don't really care about Goldbach wondering whether they will be next, and so forth) to be worth it.</p>
4,549,781
<p>Suppose we have a continuous convex function from <span class="math-container">$\mathbb{R}$</span> to <span class="math-container">$\mathbb{R^+}$</span>, <span class="math-container">$f:\mathbb{R}\to\mathbb{R^+}$</span>, such that<br /> <span class="math-container">$f(-x)=f(x),\lim_{x\to 0}\frac{f(x)}{x}=0 \text{ and } \lim_{x\to \infty}\frac{f(x)}{x}=\infty$</span> and <br /> define <span class="math-container">$f_{1}(x)$</span>=<span class="math-container">$e^{(\frac{-1}{x^2})}f(x)$</span><br /> My question is: can we define <span class="math-container">$f(x)$</span> such that<br /> <span class="math-container">$f_1(x) =$</span> <span class="math-container">$\begin{cases} 0 &amp; :x\in[-k,k],\text{where}\hspace{0.2cm} k\in \mathbb{R}\\ \text{it will increase } &amp; : x\in(-\infty,-k)\cup(k,\infty) \end{cases} $</span> <br /> I hope I have written my question so that people can understand what I want to say; if there is any mistake please let me know. All I want is a function which is zero on a closed interval and then increases, so that the final plot looks like a convex function. If I keep changing the value of <span class="math-container">$k$</span>, it should be zero on all those <span class="math-container">$[-k,k]$</span>, for every <span class="math-container">$k\in \mathbb{R}$</span></p>
G Frazao
544,105
<p>I think I found an answer:</p> <p>First, use the <a href="https://en.wikipedia.org/wiki/Gershgorin_circle_theorem" rel="nofollow noreferrer">Gershgorin circle theorem</a> as in <a href="https://math.stackexchange.com/a/4549760/544105">Gregory's answer</a> to show that the eigenvalues of <span class="math-container">$A$</span> have non-positive real part.</p> <p>Second, note that <span class="math-container">$A$</span> has at least one null eigenvalue since the rows of the matrix add up to <span class="math-container">$0$</span>.</p> <p>Third, show that <span class="math-container">$A$</span> has only one null eigenvalue:</p> <p>Construct an auxiliary <span class="math-container">$N\times N$</span> matrix <span class="math-container">$B$</span>, equal to <span class="math-container">$A$</span>, but with the last row zeroed. Since row operations do not change the nullity of a matrix, <span class="math-container">$B$</span> has the same number of null eigenvalues as <span class="math-container">$A$</span>. <span class="math-container">$$ B = \begin{bmatrix} a_{11} &amp; 1+n_{12} &amp; \cdots &amp; 1+n_{1(N-1)} &amp; 1+n_{1N} \\ n_{12} &amp; a_{22} &amp; \cdots &amp; 1+n_{2(N-1)} &amp; 1+n_{2N} \\ \vdots &amp; \vdots &amp; \ddots &amp; \vdots &amp; \vdots \\ n_{1(N-1)} &amp; n_{2(N-1)} &amp; \cdots &amp; a_{(N-1)(N-1)} &amp; 1+n_{(N-1)N} \\ 0 &amp; 0 &amp; \cdots &amp; 0 &amp; 0 \end{bmatrix} $$</span></p> <p>Construct a second auxiliary <span class="math-container">$(N-1) \times (N-1)$</span> matrix <span class="math-container">$C$</span>, equal to <span class="math-container">$B$</span> without the last row and column. 
Note that the eigenvalues of <span class="math-container">$C$</span> are also eigenvalues of <span class="math-container">$B$</span> since you can construct the eigenvectors of <span class="math-container">$B$</span> by padding the eigenvectors of <span class="math-container">$C$</span> with a <span class="math-container">$0$</span>.</p> <p><span class="math-container">$$ \text{Let } v=[v_1 \cdots v_{N-1}] : Cv = \lambda v $$</span> <span class="math-container">$$ \text{Note } w=[v_1 \cdots v_{N-1} \ 0] \Rightarrow Bw = \lambda w $$</span></p> <p>Finally, use once again the Gershgorin circle theorem to show that all the eigenvalues of <span class="math-container">$C$</span> are strictly negative. <span class="math-container">$$ |\lambda - a_{ii}| \leq \sum_{k\neq i} c_{ki} = |a_{ii}| - n_{iN} &lt; |a_{ii}| $$</span></p> <p><span class="math-container">$C$</span> has no null eigenvalues <span class="math-container">$\Rightarrow$</span> <span class="math-container">$B$</span> has only one null eigenvalue <span class="math-container">$\Rightarrow$</span> <span class="math-container">$A$</span> has only one null eigenvalue <span class="math-container">$\blacksquare$</span></p>
1,209,233
<p>How can I find all of the transitive groups of degree $4$ (i.e. the subgroups $H$ of $S_4$, such that for every $1 \leq i, j \leq 4$ there is $\sigma \in H$, such that $\sigma(i) = j$)? I know that one way of doing this is by brute force, but is there a more clever approach? Thanks in advance!</p>
Nishant
100,692
<p>Let <span class="math-container">$G$</span> be a transitive subgroup of <span class="math-container">$S_4$</span>. Since the orbit of <span class="math-container">$1$</span> under the action of <span class="math-container">$G$</span> is <span class="math-container">$\{1, 2, 3, 4\}$</span>, the order of <span class="math-container">$G$</span> must be divisible by <span class="math-container">$4$</span>, and so must be equal to one of <span class="math-container">$4, 8, 12, 24$</span>. </p> <p>An order <span class="math-container">$4$</span> <span class="math-container">$G$</span> would be either cyclic (generated by a <span class="math-container">$4$</span>-cycle, giving 3 subgroups) or Klein-Four. There are two <span class="math-container">$V_4$</span> subgroups of <span class="math-container">$D_8$</span>, but only one of them is transitive in each <span class="math-container">$D_8$</span>, and they're all equal to <span class="math-container">$\{1, (12)(34), (13)(24), (14)(23)\}$</span></p> <p>The order <span class="math-container">$8$</span> subgroups are Sylow-<span class="math-container">$2$</span>'s, so they're all conjugate to each other and isomorphic to <span class="math-container">$D_8$</span> (it's easy to find a subgroup isomorphic to <span class="math-container">$D_8$</span> by just looking at the symmetries of the vertices of a square). The number of them is either <span class="math-container">$1$</span> or <span class="math-container">$3$</span> by a Sylow Theorem, and it's <span class="math-container">$3$</span> because <span class="math-container">$D_8$</span> is not normal in <span class="math-container">$S_4$</span>. </p> <p>The order <span class="math-container">$12$</span> subgroup must be <span class="math-container">$A_4$</span>, and the order <span class="math-container">$24$</span> subgroup is then <span class="math-container">$S_4$</span> itself. </p>
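The classification above can be double-checked by brute force (the approach the question hoped to avoid, but useful as a sanity check). This sketch relies on the fact that every subgroup of $S_4$ is generated by at most two elements, so closing every pair of permutations under composition enumerates all subgroups:

```python
# Enumerate the subgroups of S_4 as closures of generator pairs, then keep
# the transitive ones and tally their orders.
from itertools import product

def compose(p, q):                  # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(4))

perms = [p for p in product(range(4), repeat=4) if len(set(p)) == 4]

def closure(gens):
    H = {(0, 1, 2, 3)} | set(gens)  # identity plus the generators
    while True:
        new = {compose(a, b) for a in H for b in H} - H
        if not new:
            return frozenset(H)
        H |= new

subgroups = {closure((a, b)) for a in perms for b in perms}
# For a subgroup, transitivity is equivalent to the orbit of 0 being all of
# {0, 1, 2, 3}.
transitive = [H for H in subgroups if {s[0] for s in H} == {0, 1, 2, 3}]

print(sorted(len(H) for H in transitive))
# four of order 4 (three C_4 and one V_4), three D_8, one A_4, and S_4 itself
```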
1,080,746
<p>I want to calculate the limit of this sum :</p> <p>$$\lim\limits_{x \to 1} {\left(x - x^2 + x^4 - x^8 + {x^{16}}-\dotsb\right)}$$</p> <p>My efforts to solve the problem are described in the <a href="https://math.stackexchange.com/a/1308281/202081">self-answer below</a>.</p>
RE60K
67,609
<p>From <a href="http://www.math.harvard.edu/~elkies/Misc/sol8.html" rel="noreferrer">here</a> is the amazing solution:</p> <blockquote> <p>Since $S$ satisfies the functional equation $$S(x) = x − S(x^2)$$ it is clear that if $S(x)$ has a limit as $x$ approaches 1 then that limit must be $1/2$. One might guess that $S(x)$ in fact approaches $1/2$, and numerical computation supports this guess — at first. But once $x$ increases past $0.9$ or so, the approach to $1/2$ gets more and more erratic, and eventually we find that $S(0.995) = 0.50088\ldots &gt; 1/2$. Iterating the functional equation, we find $S(x) = x − x^2 + S(x^4) &gt; S(x^4)$. Therefore the fourth root, 16th root, 64th root, … of $0.995$ are all values of x for which $S(x) &gt; S(0.995) &gt; 1/2$. Since these roots approach $1$, we conclude that in fact $S(x)$ cannot tend to $1/2$ as $x$ approaches $1$, and thus has no limit at all! So what does $S(x)$ do as $x$ approaches $1$? It oscillates infinitely many times, each oscillation about 4 times quicker than the previous one; If we change variables from x to $\log_4(\log(1/x))$, we get in the limit an odd periodic oscillation of period 1 that's almost but not quite sinusoidal, with an amplitude of approximately $0.00275$. Remarkably, the Fourier coefficients can be obtained exactly, but only in terms of the Gamma function evaluated at the pure imaginary number $\pi i / \ln(2)$ and its odd multiples!</p> </blockquote>
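The claims in the quoted solution are easy to probe numerically by summing the series $S(x) = x - x^2 + x^4 - x^8 + \cdots$ directly (a sketch, not a proof; the truncation tolerance is an arbitrary choice):

```python
# Partial sums of S(x); the terms x^(2^k) shrink rapidly for 0 < x < 1.

def S(x, tol=1e-18):
    total, sign, power = 0.0, 1, x      # power runs through x^(2^k)
    while power > tol:
        total += sign * power
        sign, power = -sign, power * power
    return total

x = 0.995
print(S(x) > 0.5)                    # S(0.995) exceeds 1/2, as claimed
print(abs(S(x) - (x - S(x * x))))    # functional equation S(x) = x - S(x^2)
print(S(x) > S(x ** 4))              # S(x) = x - x^2 + S(x^4) > S(x^4)
```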
4,586,314
<p><em>A three-sided fence is to be built next to a straight section of river, which forms the fourth side of a rectangular region, as shown in the diagram below. The enclosed area is to equal <span class="math-container">$1800 m^2$</span> and the fence running parallel to the river must be set back at least <span class="math-container">$20 m$</span> from the river. Determine the minimum perimeter of such an enclosure and the dimensions of the corresponding enclosure.</em></p> <p>I am using this for the first part: <span class="math-container">$1800m^2 = l(w- 20)$</span></p> <p>Every time I use this I can't really figure out how to differentiate, which leads me to believe I'm doing something wrong. I know that one of the sides needs to be subtracted by <span class="math-container">$20$</span>, but I can't understand when it should be subtracted.</p> <p>Also, if I do find the new width, would that change the entire area?</p>
Carl Christian
307,944
<p>There is a two-sided linear transformation that accomplishes your goal. Specifically, let <span class="math-container">$$E = \begin{bmatrix} 1 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 0 \end{bmatrix}$$</span> be the matrix such that <span class="math-container">$EA$</span> isolates the first row of <span class="math-container">$A$</span> and nullifies the rest and let <span class="math-container">$$S = \begin{bmatrix} 1 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 1 \\ 0 &amp; 1 &amp; 0 \end{bmatrix}$$</span> be the matrix such that the second and third columns of <span class="math-container">$AS$</span> have been swapped compared with <span class="math-container">$A$</span>. Then the matrix you seek can be computed as <span class="math-container">$$ B = EA + (I-E)AS $$</span> where <span class="math-container">$I$</span> is the 3-by-3 identity matrix. It is clear that <span class="math-container">$A \rightarrow B(A)$</span> is a linear transformation.</p>
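A quick check of the construction $B = EA + (I-E)AS$ on a sample matrix (pure Python with a hand-rolled 3-by-3 matrix product; the entries of $A$ are arbitrary): row 1 of $A$ survives unchanged, and columns 2 and 3 are swapped in the remaining rows.

```python
# Verify B = E A + (I - E) A S on a concrete 3x3 example.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matadd(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(3)] for i in range(3)]

E   = [[1, 0, 0], [0, 0, 0], [0, 0, 0]]   # isolates row 1
S   = [[1, 0, 0], [0, 0, 1], [0, 1, 0]]   # swaps columns 2 and 3
ImE = [[0, 0, 0], [0, 1, 0], [0, 0, 1]]   # I - E

A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
B = matadd(matmul(E, A), matmul(matmul(ImE, A), S))
print(B)   # [[1, 2, 3], [4, 6, 5], [7, 9, 8]]
```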
1,649,997
<p>I'm having a hard time coming up with two unbounded sequences where their difference yields $0$ when $n\rightarrow\infty$. Any ideas?</p>
Michael Hardy
11,667
<p>$n$ and $n+\dfrac 1 n$. $\qquad\qquad\qquad$</p>
1,649,997
<p>I'm having a hard time coming up with two unbounded sequences where their difference yields $0$ when $n\rightarrow\infty$. Any ideas?</p>
Jakub Konieczny
10,674
<p>Perhaps the simplest example: $$ a_n = n, \qquad b_n = \begin{cases} n, &amp; n &gt; 1\\ 0, &amp; n = 1 \end{cases} $$ for $n = 1,2, \dots$ These are (obviously) different, and their difference is (obviously) convergent to $0$: $$ a_n - b_n = \begin{cases} 0, &amp; n &gt; 1\\ 1, &amp; n = 1. \end{cases} $$ Arguably, these are not essentially different, but I think the whole point of this exercise is to realise that two sequences may be different, yet have very similar asymptotic behaviour.</p>
1,191,176
<p>I know how to prove the result for $n=2$ by contradiction, but does anyone know a proof for general integers $n$ ?</p> <p>Thank you for your answers.</p> <p>Marcus</p>
Nicky Hekster
9,605
<p>FLT is quite a cannon to settle the proof. One can also follow the "classical" approach: assuming WLOG $\gcd(p,q)=1$, if $p^n=2q^n$, then $2 \mid p^n$, implying $2 \mid p$, say $p=2r$. But then $2^{n-1}r^n=q^n$, and since $n \gt 2$, we see that $2 \mid q^n$. It follows that $2 \mid q$, contradicting the fact that $p$ and $q$ are relatively prime.</p>
126,897
<p>It is an exercise in a book on discrete mathematics. How can one prove that in the decimal expansion of the quotient of two integers, eventually some block of digits repeats? For example: $\frac { 1 }{ 6 } =0.166\dot { 6 } \ldots$ and $\frac { 217 }{ 660 } =0.328787\dot { 8 } \dot { 7 } \ldots$</p> <p>How should I think about this? I just can't see where to use the Pigeonhole Principle. Thanks for your help!</p>
JeremyFR
23,961
<p>Assume the decimal expansion is infinite (otherwise the digit 0 repeats). Imagine evaluating the quotient by long division. Once the digits of the numerator are exhausted, each step produces a remainder that is less than the denominator. Since a remainder is produced at every step and there are only finitely many possible remainders, eventually some remainder you have seen before must recur; from that point on the digits repeat.</p>
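The pigeonhole argument can be run mechanically: track the remainders during the long division and stop at the first repeat. This sketch returns the non-repeating prefix and the repeating block of digits.

```python
# Long division of p/q, recording where each remainder first occurred; the
# first repeated remainder marks the start of the repetend.

def decimal_expansion(p, q):
    """Digits of p/q after the point (0 < p < q): returns (prefix, repetend)."""
    digits, seen, r = [], {}, p
    while r != 0 and r not in seen:
        seen[r] = len(digits)          # remember where this remainder occurred
        r *= 10
        digits.append(str(r // q))
        r %= q
    if r == 0:                         # terminating expansion
        return "".join(digits), ""
    i = seen[r]
    return "".join(digits[:i]), "".join(digits[i:])

print(decimal_expansion(1, 6))      # ('1', '6')    i.e. 0.1666...
print(decimal_expansion(217, 660))  # ('32', '87')  i.e. 0.3287878...
```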
4,383,098
<p>This was a question I had ever since I started studying formal mathematics. Take ZFC for example: its axioms give us 'tests' to check whether something is a set or not, and describe how the objects, if they are sets, behave under the other operations defined on sets.</p> <p>My question is: how exactly do we find objects that fulfill these axioms? Is there some formal procedure for it, or is it just guesswork?</p>
boaz
83,796
<p>Hint: try using the identities <span class="math-container">$$ X\cup(Y\cap Z)=(X\cup Y)\cap(X\cup Z) $$</span> and <span class="math-container">$$ X\cap(Y\cup Z)=(X\cap Y)\cup(X\cap Z) $$</span></p>
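For a concrete sanity check of the two distributive identities in the hint (with arbitrary sample finite sets):

```python
# Python sets model the identities directly: | is union, & is intersection.
X, Y, Z = {1, 2, 3}, {2, 4}, {3, 4, 5}

assert X | (Y & Z) == (X | Y) & (X | Z)
assert X & (Y | Z) == (X & Y) | (X & Z)
print("both identities hold for this sample")
```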
2,391,505
<p>Most ODE textbooks provide the following steps to the solution of a separable differential equation (here the exponential equation is used as an example):</p> <p>$$\frac{dN}{dt}=-\lambda N(t) \Rightarrow \frac{dN}{N(t)}=-\lambda dt\Rightarrow \int\frac{1}{N}dN=-\lambda\int dt \Rightarrow ln\mid N\mid = -\lambda t+C\Rightarrow \mid N(t) \mid=e^{-\lambda t +C}=e^Ce^{-\lambda t}\Rightarrow N(t)=e^Ce^{-\lambda t} \text{ if N $&gt;0$ and }N(t)=-e^Ce^{-\lambda t} \text{ if N &lt; 0}.$$</p> <p>Ultimately this can be simplified to $N(t)=Ae^{-\lambda t}$ where $A=e^C$ is positive or negative accordingly. </p> <p>I find this demonstration unintuitive. Doesn't the author know that math students have just spent 3 semesters of Calculus having instructors insist that the Leibniz derivative operator is not a fraction, that these infinitesimals are objects that do not really exist on the real number line and which require great mathematical maturity to comprehend? Now, can we try to make this demonstration in a manner that respects our understanding of the Leibniz derivative operator as a symbol that cannot be broken apart?</p> <p>EDIT: Questions similar to this have been asked all over this forum, few have satisfactory answers, however I have ran into this one with some great posts: <a href="https://math.stackexchange.com/questions/2142783/separable-differential-equations-detaching-dy-dx">Separable differential equations: detaching dy/dx</a> </p>
Eric
309,041
<p>To be more clear, write $N(t)$ instead of $N$, and write $\frac{dN(t)}{dt}$ instead of $\frac{dN}{dt}$. Also, it is preferable to write $N'(t)$ rather than $\frac{dN(t)}{dt}$ in the answer, since we are discussing Leibniz's misleading notation, right?</p> <blockquote> <p>$\begin{alignedat}{3}\frac{dN(t)}{dt}=-\lambda N(t) &amp;\Longrightarrow \forall t,~N'(t)=-\lambda N(t)&amp;\text{rewrite to a rigorous notation}\\ &amp;\Longrightarrow \forall t,~\frac{1}{N(t)}N'(t)=-\lambda&amp;\text{move the term $N(t)$ to LHS}\end{alignedat}$</p> </blockquote> <p>I added a quantifier $\forall t$ in front. The real point here is that the equations $N'(t)=-\lambda N(t)$ and $\dfrac{1}{N(t)}N'(t)=-\lambda$ hold not just for one particular $t$, but rather for <strong>all</strong> $t\in\mathbb{R}$. (We here ignore the little issue that the division is problematic wherever $N(t)=0$; that is another, smaller question, off topic.)</p> <p>To repeat, the equations above are something like: if we first defined $f:\Bbb R\to \Bbb R;x\mapsto x^2+1$, then we can say:</p> <ul> <li>$\forall x,~f(x)=x^2+1$</li> <li>$\forall y,~f(y)=y^2+1$</li> <li>$\forall t,~f(t)=t^2+1$ </li> <li>$\forall t,~f(t)-1=t^2$</li> <li>$\forall t,~(f(t))^2=t^4+2t^2+1$</li> <li>$\forall t,~\dfrac{f(t)}{f(t)}=1$</li> <li>$\forall t,~f'(t)=2t$, ... etc.</li> </ul> <p>Each of these expressions has a quantifier in it, specifying that the equation (say $\dfrac{f(t)}{f(t)}=1$) holds not only for one particular $t$ (say $t=1.467$), but for <strong>all</strong> $t$ in the real numbers.</p> <p>Why do I stress this point? Because of it, we can integrate both sides. </p> <blockquote> <p>$\begin{alignedat}{3} \left(\forall t,~\frac{1}{N(t)}N'(t)=-\lambda\right)\Longrightarrow \int\left(\frac{1}{N(t)}N'(t)\right) dt=\int (-\lambda)dt&amp;\quad\text{integrate both sides w.r.t. 
$t$}\end{alignedat}$.</p> </blockquote> <p>$(\star)$ It should be noticed that if we were dealing with an equation from precalculus, like $2x^2+x+1=0$, then integrating both sides with respect to the variable, getting $\int (2x^2+x+1) dx = \int 0 dx$, would be meaningless and totally wrong. Here the reason that I can integrate both sides with respect to $t$ is that we know the equation holds for all $t\in\Bbb R$ (or at least on some interval). And since the two expressions are <strong>identical</strong> (at least on some interval), there is nothing more or less to integrating $\dfrac{1}{N(t)}N'(t)$ than to integrating $-\lambda$. For example, the result of $\int (x^3+2x-5x+7)dx$ is the same as that of $\int (x^3+2x-5x+7)dx$, of course!</p> <p>Now keep going.</p> <blockquote> <p>$\begin{alignedat}{2} &amp;\int\left(\frac{1}{N(t)}N'(t)\right) dt=\int (-\lambda)dt\\ &amp;\Longrightarrow \underbrace{\ln |N(t)|+c_1}_{\dagger}=-\lambda t+c_2\\ &amp;\Longrightarrow \ln |N(t)|=-\lambda t+c\\ &amp;\Longrightarrow e^{-\lambda t+c}=|N(t)|\\ &amp;\Longrightarrow N(t)=Ce^{-\lambda t}~~\text{(I forgot the reason why we can throw away the abs-sign now :P)}\\ \end{alignedat}$</p> </blockquote> <p>$(\dagger)$ <em>The integration by substitution used on the LHS is very classic, and rigorous; it doesn't require any annoying differential operations, such as canceling the $dt$'s or $dx$'s.</em></p> <p>And we get the answer. You may wonder why different constants $c_1$ and $c_2$ arise; this is because $\int d(\cdot)$ is not truly a function (same input, same output); in fact, it produces a family of functions, each differing by a constant, as stressed in the calculus books.</p>
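As a numerical spot check (not a proof) that the final answer $N(t)=Ce^{-\lambda t}$ solves the original equation, one can compare a central-difference approximation of $N'(t)$ with $-\lambda N(t)$; the values of $C$ and $\lambda$ below are arbitrary samples.

```python
# Finite-difference check that N(t) = C exp(-lam t) satisfies N' = -lam N.
import math

C, lam = 3.0, 0.7

def N(t):
    return C * math.exp(-lam * t)

h = 1e-6
for t in [0.0, 0.5, 2.0]:
    lhs = (N(t + h) - N(t - h)) / (2 * h)   # approximates N'(t)
    rhs = -lam * N(t)
    print(abs(lhs - rhs) < 1e-6)
```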
3,059,617
<p>Note: I apologize in advance for not using proper notation on some of these values, but this is literally my first post on this site and I do not know how to display these values correctly.</p> <p>I recently was looking up facts about different cardinalities of infinity for a book idea, when I found a post made several years ago about <span class="math-container">$ℵ_{ℵ_0}$</span></p> <p><a href="https://math.stackexchange.com/questions/715321/if-the-infinite-cardinals-aleph-null-aleph-two-etc-continue-indefinitely-is">If the infinite cardinals aleph-null, aleph-two, etc. continue indefinitely, is there any meaning in the idea of aleph-aleph-null?</a></p> <p>In this post people talk about the difference between cardinal numbers and how <span class="math-container">$ℵ_{ℵ_0}$</span> should instead be <span class="math-container">$ℵ_ω$</span>. The responses to the post then go on to talk about <span class="math-container">$ℵ_{ω+1}$</span>, <span class="math-container">$ℵ_{ω+2}$</span>, and so on.</p> <p>Anyways, my understanding of the different values of ℵ was that they corresponded to the cardinalities of infinite sets, with <span class="math-container">$ℵ_0$</span> being the cardinality of the set of all natural numbers, and that if set X has cardinality <span class="math-container">$ℵ_a$</span>, then the cardinality of the powerset of X would be <span class="math-container">$ℵ_{a+1}$</span>.</p> <p>With this in mind, I always imagined that if a set Y had cardinality <span class="math-container">$ℵ_0$</span>, and you found its powerset, and then you found the powerset of that set, and then you found the powerset of THAT set, and repeated the process infinitely you would get a set with cardinality <span class="math-container">$ℵ_{ℵ_0}$</span>.</p> <p>So, I guess my question is, in the discussion linked above, when people are talking about <span class="math-container">$ℵ_{ω+1}$</span>, how is that possible? 
Because if you take a powerset an infinite number of times, taking one more powerset is still just an infinite number of times, isn't it?</p> <p>I hope I worded this question in a way that people will understand, and thanks in advance into any insight you can give me about all this.</p>
J.G.
56,861
<p>Further to existing answers, let's clarify how transfinite recursion defines <span class="math-container">$\aleph_\alpha$</span> for arbitrary ordinals <span class="math-container">$\alpha$</span>. For simplicity I'll identify each aleph with the least ordinal of the intended cardinality. We have <span class="math-container">$\aleph_0:=\omega$</span>, and for any limit ordinal <span class="math-container">$\gamma\ne 0$</span> we have <span class="math-container">$\aleph_\gamma:=\bigcup_{\beta&lt;\gamma}\aleph_\beta$</span>. Finally, <span class="math-container">$\aleph_{\alpha+1}$</span> is the <a href="https://en.wikipedia.org/wiki/Hartogs_number" rel="nofollow noreferrer">Hartogs number</a> of <span class="math-container">$\aleph_\alpha$</span>, i.e. the least ordinal that cannot be injected into <span class="math-container">$\aleph_\alpha$</span>. I recommend convincing yourself <span class="math-container">$\aleph_\alpha$</span> can be injected into <span class="math-container">$\aleph_\beta$</span> only if <span class="math-container">$\alpha=\beta\lor\alpha\in\beta$</span> (for ordinals we usually write this as <span class="math-container">$\alpha\le\beta$</span>), and in particular understanding the proof that a Hartogs number exists. Hence <span class="math-container">$\aleph_\omega&lt;\aleph_{\omega+1}&lt;\aleph_{\omega+2}&lt;\cdots$</span>.</p>
4,219,689
<p>I was trying to derive the perimeter of a circle using vector algebra and came across this.....</p> <p>Taking a circle of radius <span class="math-container">$r$</span> with center at the origin, the position vector of a particular point on the circle making an angle <span class="math-container">$t$</span> with the positive x-axis is given by, <span class="math-container">$$\vec r_o = r\cos t \hat i + r\sin t \hat j $$</span> If we move this point by an infinitesimally small amount such that the new position vector of the point subtends an angle of <span class="math-container">$t+dt$</span> with the positive x-axis, then the new position vector is given by <span class="math-container">$$\vec r = r\cos (t+dt)\hat i + r\sin (t +dt)\hat j$$</span> The small displacement vector <span class="math-container">$dl$</span> will be given by, <span class="math-container">$$\vec {dl} = \vec r - \vec r_o$$</span> After substitution you get, <span class="math-container">$$\vec {dl} = r [\cos (t+dt)-\cos t ]\hat i + r[\sin(t+dt)-\sin t]\hat j$$</span> The magnitude of this infinitesimal displacement would be, <span class="math-container">$$||\vec {dl}|| = dl = \sqrt {r^2 [\cos (t+dt)-\cos t ]^2+r^2[\sin(t+dt)-\sin t]^2}$$</span> After further simplification, <span class="math-container">$$dl = 2r\sin {\frac{dt}{2}}$$</span> To get the arclength (say, <span class="math-container">$da$</span>), <span class="math-container">$$da = dl = 2r\sin{\frac{dt}{2}}$$</span> Usually we approximate <span class="math-container">$\sin{\frac{dt}{2}}=\frac{dt}{2}$</span> as <span class="math-container">$dt\to0$</span> and we end up with <span class="math-container">$$dl = 2r \frac{dt}{2} = rdt$$</span> and integrate this equation to get the perimeter of the circle; instead I tried applying the Taylor expansion of <span class="math-container">$\sin$</span> <span class="math-container">$$\sin x = x - \frac{x^3}{3!}+\frac{x^5}{5!}-\frac{x^7}{7!}+ ....$$</span> <span class="math-container">$$dl = 
2r[\frac{dt}{2}-\frac{(\frac{dt}{2})^3}{3!}+\frac{(\frac{dt}{2})^5}{5!}-\frac{(\frac{dt}{2})^7}{7!}+....]$$</span> Integrating the above, <span class="math-container">$$\int{dl}=2r[\int{\frac{dt}{2}}-\int{\frac{(\frac{dt}{2})^3}{3!}}+\int{\frac{(\frac{dt}{2})^5}{5!}}-\int{\frac{(\frac{dt}{2})^7}{7!}}+....]$$</span>I'm stuck at this point and don't know how to carry forward, so I need some help....</p>
peta arantes
419,513
<p><span class="math-container">$\frac{AA_1+DD_2}{2}=d \Rightarrow AA_1+DD_2=2d \quad (I)\\ \frac{E_1E_2+B_1B}{2}=d \Rightarrow E_1E_2+B_1B=2d \quad (II)\\ \frac{FF_1+CC_1}{2}=d \Rightarrow FF_1+CC_1=2d \quad (III)\\ \text{Sum } (I+II+III) = 18 \rightarrow 6d = 18 \therefore \boxed{\color{red}{d = 3}} $</span></p>
1,670,816
<p>Given <span class="math-container">$a, b \in \Bbb R$</span>, consider the following large tridiagonal matrix</p> <p><span class="math-container">$$M := \begin{pmatrix} a^2 &amp; b &amp; 0 &amp; 0 &amp; \cdots \\ b &amp; (a+1)^2 &amp; b &amp; 0 &amp; \cdots &amp; \\ 0 &amp; b &amp; (a+2)^2 &amp; b &amp; \cdots \\ \vdots &amp; \vdots &amp; \vdots &amp; \vdots &amp; \ddots \end{pmatrix}$$</span></p> <p>What can be said about its eigenvalues? Are analytic expressions known? Or, at least, properties of the eigenvalues?</p>
Moo
295,756
<p>We are given the function:</p> <p>$$f(x) = (x-1) (x-2) (x-3) (x-4)-\dfrac{x^6}{1000000}$$</p> <p>A local plot of the function shows:</p> <p><a href="https://i.stack.imgur.com/xqqhn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xqqhn.png" alt="enter image description here"></a></p> <p>We clearly see there is a root $r \approx 4$ as well as other roots.</p> <p>One method we can use is called the <a href="https://en.wikipedia.org/wiki/Newton%27s_method" rel="nofollow noreferrer"><em>Newton-Raphson</em></a> Method. The iteration is given by:</p> <p>$$\tag 1 x_{n+1} = x_n - \dfrac{f(x_n)}{f'(x_n)} = x_n - \dfrac{(x_n-1) (x_n-2) (x_n-3) (x_n-4)-\dfrac{x_n^6}{1000000}}{-\dfrac{3 x_n^5}{500000}+4 x_n^3-30 x_n^2+70 x_n-50}$$</p> <p>We will start the iteration with an initial guess of $x_0 = 5$ and iterate using $(1)$.</p> <ul> <li>$x_0 = 5.0000000000000000000000000000000000000000000000000$</li> <li>$x_1 = 4.520132549706139802425909716143553832687257721646$</li> <li>$x_2 = 4.213728406139413811621689461664147945319554657302$</li> <li>Continuing this with $50-$ digits of accuracy, we converge after $9$ iterations to:</li> <li>$x_9 = 4.0006825115317206360517121593821239383700711643283$</li> </ul> <p>Note that you mentioned the <a href="https://en.wikipedia.org/wiki/Secant_method" rel="nofollow noreferrer"><em>Secant Method</em></a> and it also converged to the root, but the <a href="https://en.wikipedia.org/wiki/Bisection_method" rel="nofollow noreferrer"><em>Bisection Method</em></a> did not.</p> <p>It is worth noting that this is a sixth order function and we can find all of the roots using this process (even imaginary ones) as:</p> <ul> <li>$x = -1004.9801730979548412226307120082711312441645974604$</li> <li>$x = 0.99999983333355092551054620226892671527779215577814$</li> <li>$x = 2.0000320035846145257264041034190706369224905276042$ </li> <li>$x = 2.9996356990888364529650498845326637064948826629689$ </li> <li>$x = 
4.0006825115317206360517121593821239383700711643283$</li> <li>$x = 994.97982305041611868237699965866834624709936094976$</li> </ul> <p>Lastly, there are methods that converge even faster like the fourth or seventh order method, for example, see <a href="http://www.emis.de/journals/NSJOM/Papers/40_2/NSJOM_40_2_061_067.pdf" rel="nofollow noreferrer"><em>How to develop fourth and seventh order iterative methods?</em></a>, but many other methods are available. See, for example <a href="https://en.wikipedia.org/wiki/Root-finding_algorithm" rel="nofollow noreferrer"><em>Root Finding Algorithms</em></a></p>
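For reference, the iteration $(1)$ above can be sketched in a few lines of Python (the tolerance and iteration cap are arbitrary choices); starting from $x_0 = 5$ it converges to the root near $4$.

```python
# Newton-Raphson for f(x) = (x-1)(x-2)(x-3)(x-4) - x^6/10^6.

def f(x):
    return (x - 1) * (x - 2) * (x - 3) * (x - 4) - x ** 6 / 1_000_000

def fprime(x):
    # d/dx of the quartic is 4x^3 - 30x^2 + 70x - 50
    return 4 * x ** 3 - 30 * x ** 2 + 70 * x - 50 - 3 * x ** 5 / 500_000

def newton(x, tol=1e-14, max_iter=60):
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

root = newton(5.0)
print(root)   # ~ 4.000682511531721, matching the iterates listed above
```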
176,054
<p>I have a friend who wants to study something applied to neurosciences. He is going to begin his grad studies in mathematics. He asked me which areas of mathematics could be applied to neurosciences. Since I don't know the answer, I thought mathoverflow would be the right place to ask. I mean, there are many areas of mathematics that could be applied to neurosciences. But the question is the following: which are the fields that have already been applied to neurosciences? Are there areas related to dynamical systems, stochastic process, probability, topology, analysis, PDE or algebra applied to neurosciences? Articles are welcome. </p> <p>Thank you in advance</p>
Piyush Grover
30,684
<p>The site scholarpedia.org is a good start. <a href="http://www.scholarpedia.org/article/Encyclopedia:Computational_neuroscience" rel="nofollow">http://www.scholarpedia.org/article/Encyclopedia:Computational_neuroscience</a></p> <p>It was started by a computational/mathematical neuroscientist, and has good articles on mathematics relevant to neuroscience.</p>
6,661
<p>Is there a simple explanation of what the Laplace transformations do exactly and how they work? Reading my math book has left me in a foggy haze of proofs that I don't completely understand. I'm looking for an explanation in layman's terms so that I understand what it is doing as I make these seemingly magical transformations.</p> <p>I searched the site and closest to an answer was <a href="https://math.stackexchange.com/questions/954/inverse-of-laplace-transform">this</a>. However, it is too complicated for me.</p>
Ronny
101,741
<p>Think of the Laplace transform as a kind of machine: the machine eats a function of t, f(t), and out comes F(s). You perform a transformation from time to frequency. Inside the machine you have the integral expression that you already know. It is similar to when you transform from one vector space to another; for instance, you go from R to R^2.</p>
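To make the machine picture concrete, here is a rough numerical sketch: feed f(t) = e^{-t} into the Laplace integral and compare with the known transform F(s) = 1/(s+1). The truncation point T and the step count are arbitrary choices.

```python
# Approximate F(s) = integral_0^inf f(t) exp(-s t) dt by the trapezoidal
# rule on [0, T]; the tail beyond T is negligible for this integrand.
import math

def laplace(f, s, T=50.0, n=200_000):
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return total * h

s = 2.0
approx = laplace(lambda t: math.exp(-t), s)
print(abs(approx - 1 / (s + 1)) < 1e-4)   # known transform: 1/(s+1)
```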
384,991
<p>If $f$ is defined as a function from the reals to the reals, and $c \in cl(Domain)$ is a point at which $\lim_{x \to c} {f(x)} = 0 $, how does one prove that this implies $\lim_{x \to c} {1 \over f(x)} = \infty$?</p> <p>It seems logical that the values will always get bigger, but when I tried to construct a contradiction using the $\epsilon$-criterion I got stuck at: $\exists \epsilon &gt; 0: f(x)&gt;0 ~\forall x \in [c-\epsilon,c+\epsilon]$.</p>
Thomas
26,188
<p>Let me just assume that $c = 0$. And as pointed out in the comment above, you have to be careful. Take the following just as an outline of a general approach. As pointed out in the excellent answer by Cameron, this doesn't always work. You have to make some assumptions. But maybe it can be helpful?</p> <p>I assume that what you want to prove is that given $N$ there is a $\delta&gt;0$ such that $$ \lvert x\rvert &lt; \delta \Rightarrow \lvert f(x)^{-1}\rvert &gt; N. $$ So let $N$ be given. Let $\epsilon = \frac{1}{N}$. Then there is a $\delta &gt;0$ such that $\lvert x\rvert &lt; \delta$ implies $\lvert f(x)\rvert &lt; \epsilon = N^{-1}$. That is, you get exactly what you want because $$\begin{align} \lvert f(x)\rvert &amp;&lt; \epsilon = N^{-1} \quad \Rightarrow \\ \lvert f(x)\rvert^{-1} &amp;&gt; N. \end{align} $$</p>
2,292,711
<p>Show that the vector field $F(x,y)=(2x-x^5-xy^4,y-y^3-x^2y)$ defined in $R^2$ does not have periodic orbits; the Bendixson criterion is not useful.</p>
dantopa
206,581
<p>@Robert Israel provides the best answer. This post is an elaboration of his remarks.</p> <h1>Dynamical System</h1> <p>Show that the following dynamical system has no periodic orbits: <span class="math-container">$$ % \begin{align} % \dot{x} &amp;= -x \left(x^4+y-2\right) \\ % \dot{y} &amp;= -y \left(x^2+y^2-1\right) \\ % \end{align} \tag{1} % $$</span></p> <h2>Poincaré–Bendixson Theorem</h2> <p>Use the theorem of Poincaré and Bendixson to identify a <em>trapping region</em>, a region that trajectories can enter but cannot leave; the sign of the radial time derivative is the natural thing to examine.</p> <p>The trapping region must</p> <ol> <li>Be closed and bounded,</li> <li>Not contain any critical points.</li> </ol> <h2>Fixed points</h2> <p>Proceed by locating fixed points.</p> <h3>Nullclines</h3> <p>Find the nullclines. <span class="math-container">$$ \begin{array}{ccl} \dot{x} = 0 &amp;\implies &amp;x = 0 \ \text{ or } \ y = 2-x^4 \\ \dot{y} = 0 &amp;\implies &amp;y = 0 \ \text{ or } \ y = \pm\sqrt{1-x^2} \end{array} $$</span></p> <h3>Identify fixed points</h3> <p>Intersecting the nullclines gives the fixed points <span class="math-container">$$ (0,0), \qquad (0,\pm 1), \qquad \left( \pm\sqrt[4]{2},\, 0 \right). $$</span></p> <p>At this juncture, invoke the arguments of @Robert Israel and you are done.</p> <p>To reinforce the lesson, the answer continues with the canonical approach. 
The plot below shows the flow and the <span class="math-container">$\dot{x}$</span> nullcline in black and the <span class="math-container">$\dot{y}$</span> nullcline in red.</p> <p><a href="https://i.stack.imgur.com/dSVjU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dSVjU.png" alt="flow"></a></p> <h2>Compute <span class="math-container">$\dot{r}$</span></h2> <p>The polar coordinate transform <span class="math-container">$$ % \begin{align} % x &amp;= r \cos \theta \\ % y &amp;= r \sin \theta \\ % \tag{2} \end{align} % $$</span></p> <p>implies <span class="math-container">$$ r^{2} = x^{2} + y^{2} \tag{3} $$</span></p> <p>Differentiate (3) with respect to time: <span class="math-container">$$ 2r\dot{r} = 2x \dot{x} + 2y \dot{y} $$</span> Therefore <span class="math-container">$$ \dot{r} = \frac{x \dot{x} + y \dot{y}} {r} \tag{4} $$</span> Transform <span class="math-container">$\dot{x}$</span> and <span class="math-container">$\dot{y}$</span> to <span class="math-container">$r$</span> and <span class="math-container">$\theta$</span> using (2): <span class="math-container">$$ % \begin{align} % \dot{x} &amp;= -x \left(x^4+y-2\right) = -r \cos \theta \left(r^4 \cos ^4\theta+r \sin \theta-2\right) \\ % \dot{y} &amp;= -y \left(x^2+y^2-1\right) = -r \left(r^2-1\right) \sin \theta \\ % \end{align} % $$</span> Inserting these identities in <span class="math-container">$(4)$</span> produces the final differential equation: <span class="math-container">$$ \dot{r} = -r \left(-\sin ^2\theta+r^4 \cos ^6\theta+r^2 \sin ^4\theta+\cos ^2\theta (r \sin \theta-1) (r \sin \theta+2)\right) \tag{5} $$</span></p> <p>The fact that this differential equation is so difficult is a sign that you have either made a mistake, or missed the reasoning on stationary points. </p> <p>To close the discussion, examine the plot below which shows <span class="math-container">$\dot{r}\left(r, \theta\right)$</span>. 
The purple line is the nullcline <span class="math-container">$\dot{r}=0$</span>. Below the line <span class="math-container">$\dot{r}&gt;0$</span>, above the line <span class="math-container">$\dot{r}&lt;0$</span>.</p> <p><a href="https://i.stack.imgur.com/eVGDm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eVGDm.png" alt="rdot"></a></p>
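A quick cross-check of the fixed-point bookkeeping is possible by direct substitution. The sketch below uses the vector field exactly as stated in the question (which has $y^4$ where display (1) above has $y$); the five points listed are assumptions recovered from the nullcline analysis and verified numerically:

```python
def F(x, y):
    # vector field exactly as in the question: F(x, y) = (2x - x^5 - x*y^4, y - y^3 - x^2*y)
    return (2*x - x**5 - x*y**4, y - y**3 - x**2 * y)

r = 2 ** 0.25  # fourth root of 2
fixed_points = [(0.0, 0.0), (0.0, 1.0), (0.0, -1.0), (r, 0.0), (-r, 0.0)]
for pt in fixed_points:
    fx, fy = F(*pt)
    assert abs(fx) < 1e-12 and abs(fy) < 1e-12

# No fixed point has x*y != 0: that would require x^4 + y^4 = 2 and
# x^2 + y^2 = 1 simultaneously, but x^4 + y^4 <= (x^2 + y^2)^2 = 1 < 2.
print("all five fixed points verified")
```

Since all five fixed points are real, any periodic orbit would have to avoid them, which is where the stationary-point reasoning comes in.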
1,621,275
<blockquote> <p>Show that $a_n=\frac{n+1}{2n}a_{n-1}+1$ given that:</p> <p>$a_n=1/{{n}\choose{0}}+1/{{n}\choose{1}}+...+1/{{n}\choose{n}}$</p> </blockquote> <p>The hint says to consider when $n$ is even and odd. When $n=2k$ I get:</p> <p>$$a_{n}=1/{{2k}\choose{0}}+1/{{2k}\choose{1}}+...+1/{{2k}\choose{2k}}$$ $$=1+1/{{2k}\choose{1}}+...+1/{{2k}\choose{2k}}$$ $$=1+1/(2k{{2k-1}\choose{0}})+1/(\frac{2k}{2}{{2k-1}\choose{0}})...+1/(\frac{2k}{2k}{{2k-1}\choose{2k-1}})$$ $$=1+\frac{1}{2k}(1/{{2k-1}\choose{0}}+1/{{2k-1}\choose{1}}+1/{{2k-1}\choose{2k-1}})$$</p> <p>which should be $\frac{2k+1}{4k}a_{n-1}+1$ in the end.</p> <p><strong>I used:</strong></p> <p>${{n}\choose{k}}=\frac{n}{k}{{n-1}\choose{k-1}}$ and tried ${{n-1}\choose{k-1}}={{n-1}\choose{n-k}}$</p>
heropup
118,193
<p>First, observe that the given recursion can be written as $$2n(a_n - 1) - (n+1)a_{n-1} = 0.$$ Second, observe that $$a_n - 1 = \sum_{k=0}^{n-1} \binom{n}{k}^{\!-1},$$ since the final term of $a_n$ is always $1$. Therefore, the relationship to be proven is equivalent to showing $$0 = S_{n-1} = \sum_{k=0}^{n-1} \frac{2n}{\binom{n}{k}} - \frac{n+1}{\binom{n-1}{k}}.$$ Simplify the summand by factoring out $\binom{n-1}{k}^{\!-1}$: $$\frac{2n}{\binom{n}{k}} - \frac{n+1}{\binom{n-1}{k}} = \frac{n-1-2k}{\binom{n-1}{k}}.$$ Now, with the substitution $m = n-1-k$, we observe that the RHS sum $S_{n-1}$ is now $$S_{n-1} = \sum_{m=n-1}^0 \frac{n-1 - 2(n-1-m)}{\binom{n-1}{n-1-m}} = \sum_{m=0}^{n-1} \frac{-(n-1 - 2m)}{\binom{n-1}{m}},$$ where in the last step we use the reflection identity $$\binom{n}{m} = \binom{n}{n-m}.$$ Therefore, $S_{n-1} = -S_{n-1}$, from which it follows that $S_{n-1} = 0$, proving the required recursion. No need to use odd/even cases.</p>
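The recursion (and the vanishing of $S_{n-1}$) is easy to sanity-check with exact rational arithmetic; a quick sketch:

```python
from fractions import Fraction
from math import comb

def a(n):
    """a_n = sum of reciprocals of the binomial coefficients C(n, k), k = 0..n."""
    return sum(Fraction(1, comb(n, k)) for k in range(n + 1))

for n in range(1, 12):
    # the claimed recursion a_n = (n + 1)/(2n) * a_{n-1} + 1
    assert a(n) == Fraction(n + 1, 2 * n) * a(n - 1) + 1
    # the reflected sum S_{n-1} from the argument above vanishes
    S = sum(Fraction(n - 1 - 2 * k, comb(n - 1, k)) for k in range(n))
    assert S == 0

print(a(5))  # 13/5
```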
142,734
<p>What are some books that discuss elementary mathematical topics ('school mathematics'), like arithmetic, basic non-abstract algebra, plane &amp; solid geometry, trigonometry, etc, in an insightful way? I'm thinking of books like Klein's <em>Elementary Mathematics from an Advanced Standpoint</em> and the books of the Gelfand Correspondence School - school-level books with a university ethos.</p>
Michaël Le Barbier
38,203
<p>I really like <em>Concrete Mathematics</em> by Knuth, Graham and Patashnik, and the introductions to number theory by Rose and by Hardy&amp;Wright: you will find there many interesting school-like problems (but the whole books may not be suitable).</p> <p>In geometry, I can suggest Hartshorne's <em>Geometry: Euclid and beyond.</em></p> <p>Books like <em>Géométrie projective</em> by Pierre Samuel or Artin's <em>Geometric algebra</em> contain a lot of algebra, but it is geometric instead of abstract, so you may judge they are on the safe side.</p>
142,734
<p>What are some books that discuss elementary mathematical topics ('school mathematics'), like arithmetic, basic non-abstract algebra, plane &amp; solid geometry, trigonometry, etc, in an insightful way? I'm thinking of books like Klein's <em>Elementary Mathematics from an Advanced Standpoint</em> and the books of the Gelfand Correspondence School - school-level books with a university ethos.</p>
Viktor
40,294
<p>I always enjoyed "How to Solve It: A New Aspect of Mathematical Method" by G. Pólya.</p> <p>It doesn't really cover all that much mathematics, it just helps you structure your thoughts in a mathematical sense. </p> <p>But it depends a lot on your actual needs. </p>
269,769
<p>Is the following true always for a matrix norm </p> <p>$$\lVert AB\rVert \leqslant \lVert A\rVert \cdot \lVert B\rVert \text{ ?}$$</p> <p>Related to this given $r$ is positive constant, $H$ is symmetric positive definite is the following true :</p> <p>$$\lVert (rI - H)(rI + H)^{-1}\rVert &lt; 1 $$</p> <p>or</p> <p>$(rI - H)(rI + H)^{-1}$ has the spectral radius less than $1$ certainly?</p> <p>Thank you.</p>
dineshdileep
41,541
<p>Matrix norms which satisfy the condition you mentioned (in addition to others) are referred to as sub-multiplicative norms. A well-known example of a matrix norm which doesn't satisfy this condition is the max-norm defined as \begin{align} ||A||_\max=\max_{i,j}|a_{ij}| \end{align} For the second question, work with the spectral norm. Let $H=UDU^T$ where $U$ is the matrix of eigenvectors and $D$ is the diagonal matrix of positive eigenvalues. Let $[D]_{ii}=d_{i}$ be the $i^{th}$ diagonal entry. Then, using the fact that $UU^T=I$ and that the spectral norm is invariant under unitary transformations, we have \begin{align} || (rI - H)(rI + H)^{-1} || &amp;=|| (rI - D)(rI + D)^{-1} || \\ &amp;=\max_i \left|\frac{r-d_i}{r+d_i}\right| \end{align} Then use that, since $r&gt;0$ and each $d_i&gt;0$, \begin{align} \left|\frac{r-d_i}{r+d_i} \right| &lt;\frac{r+d_i}{r+d_i}=1 \end{align} for every $i$, so the maximum is strictly less than $1$. Since the spectral radius never exceeds the spectral norm, both inequalities you ask for hold. </p>
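The scalar fact driving the second claim is that $|r-d|/(r+d) &lt; 1$ whenever $r, d &gt; 0$. A small exact-arithmetic sketch, working in the eigenbasis of $H$ (the function name and the sample spectra below are just for illustration):

```python
from fractions import Fraction

def factor_norm(r, eigs):
    """Spectral norm of (rI - H)(rI + H)^{-1} for symmetric positive definite H,
    computed in H's eigenbasis: max_i |r - d_i| / (r + d_i)."""
    return max(abs(r - d) / (r + d) for d in eigs)

# a few positive spectra and shift parameters r: the norm stays strictly below 1
for r in (Fraction(1, 10), Fraction(1), Fraction(7, 2), Fraction(100)):
    for eigs in [(Fraction(1, 2), Fraction(3), Fraction(5)),
                 (Fraction(1),),
                 (Fraction(99), Fraction(101))]:
        assert factor_norm(r, eigs) < 1

print(factor_norm(Fraction(2), (Fraction(1), Fraction(4))))  # 1/3
```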
3,247,982
<p>If <span class="math-container">$f:X \to Y$</span> is a function and <span class="math-container">$U$</span> and <span class="math-container">$V$</span> are subsets of <span class="math-container">$X$</span>, then <span class="math-container">$f(U \cap V) = f(U) \cap f(V)$</span>.</p> <p>I am a little lost on this proof. I believe it to be true, but I am uncertain as to where to start. Any solutions would be appreciated. I have many similar proofs to prove and I would love a complete one to base my further proofs on.</p>
Daniel Sehn Colao
649,328
<p>You can't prove the equality unless <span class="math-container">$f$</span> is injective.</p> <p>One proves such an equality by double inclusion. In this case, however, only one inclusion always holds; you get the equality only if <span class="math-container">$f$</span> is injective.</p> <p>I will do the inclusion that always holds: <span class="math-container">$$f(U \cap V) \subset f(U) \cap f(V)$$</span></p> <ul> <li><p>Let <span class="math-container">$y \in f(U \cap V)$</span>.</p> </li> <li><p>Since <span class="math-container">$y \in f(U \cap V)$</span>, there is <span class="math-container">$x\in U \cap V$</span> with <span class="math-container">$y=f(x)$</span>.</p> </li> <li><p>Since <span class="math-container">$x\in U \cap V$</span>, we have <span class="math-container">$x\in U$</span> <strong>and</strong> <span class="math-container">$x\in V$</span> [by the definition of intersection].</p> <ul> <li>From <span class="math-container">$x\in U$</span> and <span class="math-container">$y=f(x)$</span>, you get <span class="math-container">$y\in f(U)$</span>.</li> <li>From <span class="math-container">$x\in V$</span> and <span class="math-container">$y=f(x)$</span>, you get <span class="math-container">$y\in f(V)$</span>.</li> <li>Recall that a function assigns to every element of the domain exactly one element of the codomain; otherwise it would not be a function. 
(Not talking about injective or surjective functions at this point).</li> </ul> </li> <li><p>Therefore, since <span class="math-container">$y\in f(U)$</span> and <span class="math-container">$y\in f(V)$</span>, then <span class="math-container">$y \in f(U) \cap f(V)$</span>, by the Intersection Definition again.</p> </li> <li><p>Since y is an arbitrary element, then <span class="math-container">$f(U \cap V) \subset f(U) \cap f(V)$</span></p> </li> </ul> <p>The other inclusion you'll only get if <span class="math-container">$f$</span> is injective (the reason of this has already been answered by someone).</p>
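A minimal concrete counterexample for the reverse inclusion when $f$ fails to be injective (the sets and the squaring map below are chosen purely for illustration):

```python
def image(f, s):
    """Image f(S) of a set under a function, per the usual definition."""
    return {f(x) for x in s}

def f(x):
    return x * x  # not injective on the integers: f(-1) == f(1)

U, V = {-1}, {1}

assert U & V == set()                    # U ∩ V = ∅, so f(U ∩ V) = ∅
assert image(f, U) & image(f, V) == {1}  # yet f(U) ∩ f(V) = {1}

# the inclusion proved above, f(U ∩ V) ⊆ f(U) ∩ f(V), always holds:
assert image(f, U & V) <= image(f, U) & image(f, V)
print("inclusion holds; equality fails without injectivity")
```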
1,273,086
<p>The sum of three (six-faced) dice throws is $15$. What is the probability that the first throw was $4$?</p> <p>The way I thought of solving this was:</p> <ul> <li>Given: the sum of the second and third throws is $11$.</li> <li>The probability of the first throw being $4$ is $1$ out of $6$, that is $\frac{1}{6}$.</li> </ul> <p>Is this correct?</p>
Masclins
238,119
<p>Joffan pretty much pointed out how to find the answer.</p> <p>$P(D_{1}=4\mid Sum=15) = \frac {\text{ways to get }11\text{ with }2\text{ dice}}{\text{ways to get }15\text{ with }3\text{ dice}}$ (note that, given the total is $15$, having the last two dice sum to $11$ is the same as having the first die show $4$).</p> <p>Ways to get $11$ with $2$ dice: $2$ (both $(5,6)$ and $(6,5)$).</p> <p>Ways to get $15$ with $3$ dice: $10$.</p> <p>You can get $15$ with $(3,6,6), (4,5,6), (5,5,5)$ and their permutations: $(6,3,6), (6,6,3), (4,6,5), (5,4,6), (5,6,4), (6,4,5)$ and $(6,5,4)$.</p> <p>Altogether this gives $P(D_{1}=4\mid Sum=15) =\frac {2}{10}=0.2$.</p> <p>$0.2$ is the probability you were looking for.</p>
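The count is small enough to confirm by brute-force enumeration of all $6^3$ ordered outcomes; a quick sketch:

```python
from itertools import product
from fractions import Fraction

# all ordered triples of dice values that sum to 15
outcomes = [t for t in product(range(1, 7), repeat=3) if sum(t) == 15]
favourable = [t for t in outcomes if t[0] == 4]

assert len(outcomes) == 10   # the ten ordered triples listed above
assert len(favourable) == 2  # (4, 5, 6) and (4, 6, 5)
print(Fraction(len(favourable), len(outcomes)))  # 1/5
```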
11,368
<p>I am interested in improving this plot:</p> <p><img src="https://i.stack.imgur.com/qvPAJ.png" alt="contour plot"></p> <p>which I produced with the following command:</p> <pre><code>dat // ListContourPlot[#, ContourShading -&gt; False, ContourStyle -&gt; ColorData[10] /@ Range[10], Contours -&gt; Range[21]/8, ContourLabels -&gt; None, FrameLabel -&gt; {x, y}, DataRange -&gt; {{-7, 2}, {-15, 15}}, PlotLabel -&gt; "Contour plot of ϕ(x,y)"] &amp; </code></pre> <p>The actual <code>dat</code> is available <a href="http://justpaste.it/1d9v" rel="noreferrer">here</a>; the edge corresponds to zero values (though, it could be set to something else). </p> <p><strong>Question</strong></p> <blockquote> <p>I would like <em>Mathematica</em> to identify the edge of the contours and do a region plot to avoid the ragged line, or alternatively, to draw a thick line over it to make it publication-friendly.</p> </blockquote> <p><strong>Attempt</strong></p> <p>I guess I know how to find the edge:</p> <pre><code>dat2 = dat // Image // EdgeDetect // ImageData // Position[#, 1] &amp; // Sort; </code></pre> <p>On the other hand, these points are not sorted correctly:</p> <p><img src="https://i.stack.imgur.com/oODeH.png" alt="badly-sorted points"></p> <p>I supposed I could use 2D neighbours as a criterion for sorting, but I feel there is a smarter way to achieve my overall goal(?)</p>
Guillochon
805
<p>How about this:</p> <pre><code>{d1, d2} = Dimensions[dat]; xvals = Range[-15, 15, (15 + 15)/(d1 - 1)]; yvals = Range[-7, 2, (2 + 7)/(d2 - 1)]; dat2 = Flatten[Join[Table[{yvals[[j]], xvals[[i]]}, {i, d1}, {j, d2}], Table[Partition[dat[[i]], 1], {i, d1}], 3], 1]; dat2 = Extract[dat2, Position[dat2[[All, 3]], x_ /; x != 0]]; lcp = ListContourPlot[dat2, ContourShading -&gt; False, ContourStyle -&gt; ColorData[10] /@ Range[10], Contours -&gt; Range[21]/8, ContourLabels -&gt; None, FrameLabel -&gt; {x, y}, DataRange -&gt; Automatic, PlotLabel -&gt; "Contour plot of \[Phi](x,y)", BoundaryStyle -&gt; Directive[Black, Thick]] </code></pre> <p><img src="https://i.stack.imgur.com/WjCTQ.png" alt="enter image description here"></p> <p>The "tricks" are to pass ListContourPlot a list of tuples where the zero valued tuples are removed, and to set DataRange -> Automatic so that it interprets the list as a list of {x, y, f} tuples instead of as many datasets.</p>
2,958,566
<p>Expressing <span class="math-container">$\frac{t+2}{t^3+3}$</span> in the form <span class="math-container">$a_0+a_1t+\cdots+a_4t^4$</span>, where <span class="math-container">$t$</span> is a root of <span class="math-container">$x^5+2x+2$</span>.</p> <p>So I can deal with the numerator, but how do I get rid of the denominator to get it into the correct form? Thanks in advance!</p>
user
505,767
<p>We need to proceed by</p> <ul> <li>base case: <span class="math-container">$n=0 \implies ab^0+c\cdot 0+d=a+d$</span></li> <li>induction step: assume <span class="math-container">$m|ab^n+cn+d$</span>; then</li> </ul> <p><span class="math-container">$$ab^{n+1}+c(n+1)+d=ab^{n+1}+bcn+bd-bcn-bd+c(n+1)+d=$$</span></p> <p><span class="math-container">$$=b(ab^{n}+cn+d)-(b-1)cn-bd+c+d$$</span></p> <p>then note </p> <ul> <li><span class="math-container">$m|a+d \implies m|ab+bd \implies m|bd+a-c\implies m|bd-c-d$</span></li> </ul>
1,787,072
<p>Boolean algebras aren't algebras (to the best of my understanding).</p> <p>So why are they called algebras?</p> <p>Wouldn't it make more sense to call them a "Boolean system" or a "Boology" or something else like that?</p>
zyx
14,120
<p>Because Boole himself introduced the word "algebra" into the subject.</p> <p>The term "algebra of logic" appears in Boole's 1854 book on Laws of Thought:</p> <blockquote> <p>Let us conceive, then, of an Algebra in which the symbols x, y, z, etc. admit indifferently of the values 0 and 1, and of these values alone. The laws, the axioms, and the processes, of such an Algebra will be identical in their whole extent with the laws, the axioms, and the processes of an Algebra of Logic. Difference of interpretation will alone divide them. Upon this principle the method of the following work is established.</p> </blockquote> <p>Boole strongly emphasized the relation between logic and algebra. References to algebra and its correspondence with logic permeate the book.</p> <p>Other writers continued to use "algebra of logic" for Boole's system and its later simplification to what is now called Boolean algebra. For example, MacFarlane <em>Principles of the Algebra of Logic</em> (1874), C.S. Peirce "On the Algebra of Logic" (1880), and E. Schroeder <em>Algebra der Logik</em> (1890).</p> <p>In addition to the analogy that Boole had observed with ordinary algebra, there is an equivalence of Boolean algebras with rings satisfying $x^2=x$ for all $x$, which are equivalent to some algebras (in the modern sense) over the 2-element field. </p>
167,049
<p>$p(z),t(z)$ are defined by two mutually recursive functional equations, while $\widehat{G}(z)$ is the exponential generating function of <a href="https://oeis.org/A182173" rel="nofollow noreferrer">A182173</a> (maybe, I am not sure...lol):</p> <p>$$\begin{cases} p(z)=e^{t(z)}-t(z)+2 z-1\\ t(z)=2\left(e^{p(z)}-e^{p(z)/2}+z\right)-p(z)\\ \widehat{G}(z)=p(z)+t(z)-2 z \end{cases}$$</p> <p><strong>Is there any way to get the series coefficients effectively?</strong></p> <pre><code>p[z]-&gt;2z+E^t[z]-1-t[z] t[z]-&gt;2*( z+ E^p[z] -E^(p[z]/2) ) - p[z] Ge=p[z]+t[z]-2z A182173[n_]:=n!SeriesCoefficient[Ge,{z,0,n}] </code></pre>
J. M.'s persistent exhaustion
50
<p>As Akku14 notes in his answer, one can determine from the conditions that both functions $p,t$ have a zero constant term. With that additional piece of information, we can use <code>SolveAlways[]</code> to determine the rest of the required coefficients:</p> <pre><code>n = 10; (* desired expansion order *) p = Sum[pc[k] z^k, {k, 0, n}] + O[z]^(n + 1); t = Sum[tc[k] z^k, {k, 0, n}] + O[z]^(n + 1); vals = SolveAlways[{p == Exp[t] - t + 2 z - 1, t == 2 (Exp[p] - Exp[p/2] + z) - p} /. {pc[0] -&gt; 0, tc[0] -&gt; 0}, z]; Rest[CoefficientList[Sum[(pc[k] + tc[k]) z^k, {k, 0, n}] - 2 z + O[z]^(n + 1) /. Join[{pc[0] -&gt; 0, tc[0] -&gt; 0}, First[vals]], z]] Range[n]! {2, 10, 94, 1466, 31814, 887650, 30259198, 1218864842, 56644903958, 2983300619410} </code></pre>
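The same coefficients can be reproduced without Mathematica by fixed-point iteration on truncated power series: since $p$ and $t$ both have zero constant term, each sweep of the two defining equations is accurate to at least one more order in $z$. A self-contained sketch with exact rational coefficients:

```python
from fractions import Fraction
from math import factorial

N = 6  # truncate every series modulo z^N

def mul(a, b):
    """Product of two truncated series (coefficient lists of length N)."""
    c = [Fraction(0)] * N
    for i, ai in enumerate(a):
        if ai:
            for j in range(N - i):
                c[i + j] += ai * b[j]
    return c

def exp_series(a):
    """exp of a truncated series with zero constant term: sum_{k<N} a^k / k!."""
    assert a[0] == 0
    res = [Fraction(0)] * N
    term = [Fraction(1)] + [Fraction(0)] * (N - 1)  # holds a^k / k!, starting at k = 0
    for k in range(N):
        res = [r + t for r, t in zip(res, term)]
        term = [c / (k + 1) for c in mul(term, a)]
    return res

Z = [Fraction(0)] * N
Z[1] = Fraction(1)              # the series "z"
p = [Fraction(0)] * N
t = [Fraction(0)] * N
for _ in range(N + 2):          # each sweep gains at least one order of accuracy
    e = exp_series(t)
    p = [e[i] - t[i] + 2 * Z[i] - (i == 0) for i in range(N)]
    ep = exp_series(p)
    eh = exp_series([c / 2 for c in p])
    t = [2 * (ep[i] - eh[i] + Z[i]) - p[i] for i in range(N)]

G = [p[i] + t[i] - 2 * Z[i] for i in range(N)]
coeffs = [int(factorial(n) * G[n]) for n in range(1, N)]
print(coeffs)  # [2, 10, 94, 1466, 31814], matching the SolveAlways output above
```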
3,996,665
<p>I am really confused about the questions below. I am new both to this platform and to these topics. I hope I have explained the questions clearly.</p> <p>For all n ∈ <span class="math-container">$\mathbb{Z}$</span><sup>+</sup>, let <span class="math-container">$B_n$</span> = {n, n+1, n+2, ...}, and let <strong>B</strong> = {<span class="math-container">$B_n$</span> | n ∈ <span class="math-container">$\mathbb{Z}$</span><sup>+</sup>} be the given family of sets. According to this,</p> <p>(i) Show that the family <strong>B</strong> is a base for a topology on <span class="math-container">$\mathbb{Z}$</span><sup>+</sup>. (ii) By writing out the topology on <span class="math-container">$\mathbb{Z}$</span><sup>+</sup> generated by <strong>B</strong>, show that it is not Hausdorff. (iii) Show that the sequence (2,4,6,8,...) in <span class="math-container">$\mathbb{Z}$</span><sup>+</sup> converges to every point with respect to the topology generated by the family <strong>B</strong>.</p> <p>Thank you...</p>
Henno Brandsma
4,280
<p><span class="math-container">$(B_n), n \in \Bbb Z^+$</span> is a base for a topology, because the union of its members is <span class="math-container">$\Bbb Z^+$</span> and the collection is closed under finite intersections (indeed <span class="math-container">$B_n \cap B_m = B_{\max(n,m)}$</span>).</p> <p>The space is not Hausdorff, because there is no basic element that contains <span class="math-container">$1$</span> but does not contain <span class="math-container">$2$</span>. So <span class="math-container">$1$</span> and <span class="math-container">$2$</span> cannot be separated by open sets. Writing out the whole topology is not needed (though not hard).</p> <p>Let <span class="math-container">$n \in \Bbb Z^+$</span>. If <span class="math-container">$B_k$</span> is any basic element that contains <span class="math-container">$n$</span>, then <span class="math-container">$k \le n$</span>, and then all members of the sequence that are <span class="math-container">$&gt;n$</span> (all but finitely many of them, thus) are in <span class="math-container">$B_k$</span>. So the sequence converges to <span class="math-container">$n$</span>.</p>
3,387,049
<p>Say a deck of cards is dealt out equally to four players (each player receives 13 cards).</p> <p>A friend of mine said he believed that if one player is dealt four-of-a-kind (for instance), then the likelihood of another player having four-of-a-kind is increased - compared to if no other players had received a four-of-a-kind. </p> <p>Statistics isn't my strong point but this sort of makes sense given the pigeonhole principle - if one player gets <code>AAAAKKKKQQQQJ</code>, then I would think other players would have a higher likelihood of having four-of-a-kinds in their hand compared to if the one player was dealt <code>AQKJ1098765432</code>.</p> <p>I wrote a Python program that performs a Monte Carlo evaluation to validate this theory, which found:</p> <ul> <li>The odds of <strong>exactly one</strong> player having a four-of-a-kind are ~12%.</li> <li>The odds of <strong>exactly two</strong> players having a four-of-a-kind are ~0.77%.</li> <li>The odds of <strong>exactly three</strong> players having a four-of-a-kind are ~0.03%.</li> <li>The odds of <strong>all four</strong> players having a four-of-a-kind are ~0.001%.</li> </ul> <p>But counter-intuitively, four-of-a-kind frequencies appear to decrease as more players are dealt those hands:</p> <ul> <li>The odds of <strong>two or more</strong> players having a four-of-a-kind when <strong>at least one</strong> player has four-of-a-kind are ~6.24%.</li> <li>The odds of <strong>three or more</strong> players having a four-of-a-kind when <strong>at least two</strong> players have four-of-a-kind are ~3.9%.</li> <li>The odds of <strong>all four</strong> players having a four-of-a-kind when <strong>at least three</strong> players have four-of-a-kind are ~1.39%.</li> </ul> <p>The result is non-intuitive and I'm all sorts of confused - not sure if my friend's hypothesis was incorrect, or if I'm asking my program the wrong questions.</p>
Ben
526,995
<p>Without even doing the calculations you can see intuitively that this must be true. If one player has four-of-a-kind then they have taken all four cards with the same number/face out of the deck, which means that the remaining deck now has heavier concentrations of each of the remaining numbers/faces. Conditioning on this event clearly increases the probability that another player will have four-of-a-kind. The increase in probability will not be huge, and it will still be a rare event, but your friend is correct.</p>
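A minimal Monte Carlo sketch of the effect (ranks $0$–$12$ with four suits each; the trial count is kept small here, so the estimates are rougher than the ones quoted in the question):

```python
import random
from collections import Counter

def deal(rng):
    deck = [rank for rank in range(13) for _ in range(4)]  # 13 ranks x 4 suits
    rng.shuffle(deck)
    return [deck[13 * i: 13 * (i + 1)] for i in range(4)]

def has_quad(hand):
    return 4 in Counter(hand).values()

rng = random.Random(1)  # fixed seed for reproducibility
trials = 20000
base = other = 0
for _ in range(trials):
    quads = [has_quad(h) for h in deal(rng)]
    if quads[0]:
        base += 1
        other += any(quads[1:])

print(f"P(player 1 holds a quad)             ~ {base / trials:.3f}")
print(f"P(someone else also does | player 1) ~ {other / base:.3f}")
```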
4,048,158
<p>Given data points <span class="math-container">$\{(x_i,f(x_i))\}_{i=0}^{m}$</span>, if we define the divided differences recursively as:</p> <p><span class="math-container">$$f[x_0,\cdots,x_{k+1}] = \frac{f[x_1,\cdots,x_{k+1}]-f[x_0,\cdots,x_k]}{x_{k+1}-x_0} \text{ with the definition that } f[x_0] = f(x_0)$$</span></p> <p>then it is true that the unique interpolating polynomial <span class="math-container">$L_m(x)$</span> of degree <span class="math-container">$m$</span> such that <span class="math-container">$L_m(x_i) = f(x_i)$</span> for all <span class="math-container">$i = 0\cdots m$</span> is given by: <span class="math-container">$$ L_m(x) = \sum_{k=0}^mf[x_0,\cdots,x_k]\omega_k(x), \text{ where } \omega_k(x) = \prod_{i=0}^{k-1}(x-x_i). $$</span> The question asks to show this. That is, show that the coefficients of the interpolating polynomial of degree <span class="math-container">$m$</span> in the basis <span class="math-container">$\{\omega_k\}_{k=0}^{m}$</span> are exactly <span class="math-container">$a_k = f[x_0,\cdots,x_k]$</span>, the divided differences.</p> <p>To tackle this problem, I've attempted to use induction. The base step is trivial as <span class="math-container">$a_0 = f(x_0)$</span>, but I am stuck on the inductive step. 
I supposed the degree <span class="math-container">$m$</span> interpolating polynomial of some data <span class="math-container">$\{(x_i,f(x_i)\}_{i=0}^{m}$</span> is indeed given by: <span class="math-container">$$ L_m(x) = \sum_{k=0}^mf[x_0,\cdots,x_k]\omega_k(x) $$</span> and I want to argue that <span class="math-container">$L_{m+1}(x) = \sum_{k=0}^{m+1}f[x_0,\cdots,x_k]\omega_k(x)$</span> is the interpolating polynomial of degree <span class="math-container">$m+1$</span> interpolating <span class="math-container">$\{(x_i,f(x_i)\}_{i=0}^{m+1}$</span>, with any additional point <span class="math-container">$(x_{m+1},f(x_{m+1}))$</span> added.</p> <p>Clearly, <span class="math-container">$L_{m+1}(x)$</span> interpolates <span class="math-container">$\{(x_i,f(x_i)\}_{i=0}^{m}$</span> by definition of the <span class="math-container">$\omega_k$</span>'s, but I am stuck at showing <span class="math-container">$L_{m+1}(x_{m+1}) = f(x_{m+1})$</span>. Hints will be extremely appreciated!</p>
Yuval Peres
360,408
<p>This proof was more delicate than I expected. First observe that by an easy induction, for all <span class="math-container">$x_0,\ldots, x_m$</span> we have <span class="math-container">$$f[x_0,\ldots, x_m]=f[x_m,\ldots,x_0]\,. \quad \quad (1)$$</span> Recall <span class="math-container">$L_m$</span> is defined to be the degree <span class="math-container">$m$</span> polynomial interpolating <span class="math-container">$f$</span> at <span class="math-container">$x_0,\ldots,x_m$</span>. Our goal is to verify by induction that <span class="math-container">$$L_m(x)+f[x_0,\ldots,x_{m+1}] \, \omega_{m+1}(x)=L_{m+1}(x) \, \quad \quad (2) $$</span> for all choices of <span class="math-container">$x,x_0,\ldots, x_m.$</span></p> <p>Since both sides of (2) are degree <span class="math-container">$m+1$</span> polynomials and (2) is clear for <span class="math-container">$x=x_0,\ldots,x_m$</span>, it suffices to verify it also holds for <span class="math-container">$x=x_{m+1}$</span>. To this end, define <span class="math-container">$Q_k$</span> as the degree <span class="math-container">$k$</span> polynomial interpolating <span class="math-container">$f$</span> at <span class="math-container">$x_1,\ldots,x_{k+1}$</span> and denote <span class="math-container">$$\psi_m(x):=\prod_{j=1}^m (x-x_j)=\frac{\omega_{m+1}(x)}{x-x_0} \,. 
\quad \quad (3)$$</span></p> <p>By the induction hypothesis, equation (2) holds for polynomials of lower degree, so <span class="math-container">$$Q_m(x)=Q_{m-1}(x)+f[x_1,\ldots,x_{m+1}] \psi_{m}(x) \, \quad \quad (4) $$</span> and by considering the points in the reverse order <span class="math-container">$x_m,\ldots,x_1,x_0$</span> <span class="math-container">$$L_m(x)=Q_{m-1}(x)+f[x_m,\ldots,x_0] \psi_{m}(x) \, \quad \quad (5) $$</span> Subtracting the last two equations and setting <span class="math-container">$x=x_{m+1}$</span> gives <span class="math-container">$$Q_m(x_{m+1})-L_m(x_{m+1})=\Bigl(f[x_1,\ldots, x_{m+1}]- f[x_m,\ldots,x_0] \Bigr)\psi_{m}(x_{m+1})\quad \quad \quad \quad \quad \; \; \; \;$$</span> <span class="math-container">$$=f[x_0,\ldots, x_{m+1}]\, \omega_{m+1}(x_{m+1}) \,, \quad \quad (6) $$</span> using equations (1) and (3) in the last step. Since <span class="math-container">$Q_{m}(x_{m+1})=f(x_{m+1})=L_{m+1}(x_{m+1})$</span>, equation (6) gives (2) for <span class="math-container">$x=x_{m+1}$</span>. This completes the induction step and the proof of (2).</p>
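The recursion and the Newton form are easy to sanity-check numerically. A sketch with exact rational arithmetic, using the standard in-place triangular scheme for the divided differences (the sample points are chosen just for illustration):

```python
from fractions import Fraction

def divided_differences(xs, ys):
    """Return [f[x0], f[x0,x1], ..., f[x0,...,xm]] via the triangular scheme."""
    coef = list(ys)
    n = len(xs)
    for k in range(1, n):
        for i in range(n - 1, k - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - k])
    return coef

def newton_eval(xs, coef, x):
    """Evaluate sum_k coef[k] * prod_{i<k} (x - xs[i]) by nested multiplication."""
    acc = coef[-1]
    for k in range(len(coef) - 2, -1, -1):
        acc = acc * (x - xs[k]) + coef[k]
    return acc

xs = [Fraction(v) for v in (0, 1, 2, 4)]
ys = [x**3 - 2*x + 1 for x in xs]      # sample a cubic, so L_3 must reproduce it
coef = divided_differences(xs, ys)

assert all(newton_eval(xs, coef, x) == y for x, y in zip(xs, ys))  # interpolation
assert newton_eval(xs, coef, Fraction(3)) == 3**3 - 2*3 + 1        # exact off-node
print(coef)  # [Fraction(1, 1), Fraction(-1, 1), Fraction(3, 1), Fraction(1, 1)]
```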
4,468,883
<p>In the book &quot;<em>Primes of the form <span class="math-container">$x^2+ny^2$</span></em>&quot;, David Cox shows that: <span class="math-container">$$p=x^2+14y^2 \Longleftrightarrow (-14/p)=1 \;\text{and}\; (x^2+1)^2=8 \mod p \; \text{has an integer solution.} $$</span> Does this imply that there are infinitely many primes of the form <span class="math-container">$x^2+14y^2$</span>? It is easy to see that there are infinitely many primes <span class="math-container">$p$</span> for which the equation <span class="math-container">$(x^2+1)^2=8 \mod p$</span> has an integer solution, but I have no clue how to check whether <span class="math-container">$-14$</span> is a quadratic residue for some of them.</p>
Piquito
219,998
<p>COMMENT.- I bet there is an infinity of such primes, and I rely for this on the following:</p> <p>For instance, the four identities (the first line gives two, one for each sign) <span class="math-container">$$(6a+3)^2+14(3b\pm2)^2=6(6a^2+6a+21b^2\pm28b+11)-1\\(6a+1)^2+14(3b)^2=6(6a^2+2a+21b^2)+1\\(6a+5)^2+14(3b)^2=6(6a^2+10a+21b^2+4)+1$$</span> There are infinitely many values of the factors of <span class="math-container">$6$</span> in the three <span class="math-container">$RHS$</span> above which give solutions of the equation <span class="math-container">$$x^2+14y^2=6z\pm1$$</span> and I find it very plausible to suppose that many of them are prime (every prime greater than <span class="math-container">$3$</span> is of the form <span class="math-container">$6z\pm1$</span>).</p>
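The identities are routine to verify by expansion; note that the first one balances with $(6a+3)^2$ on the left and $\pm 28b$ inside the parentheses. A quick exhaustive check over a small grid:

```python
def lhs1(a, b, s):  # s = +1 or -1 selects the sign in the first identity
    return (6*a + 3)**2 + 14*(3*b + 2*s)**2

def rhs1(a, b, s):
    return 6*(6*a**2 + 6*a + 21*b**2 + 28*b*s + 11) - 1

for a in range(-20, 21):
    for b in range(-20, 21):
        for s in (1, -1):
            assert lhs1(a, b, s) == rhs1(a, b, s)
        assert (6*a + 1)**2 + 14*(3*b)**2 == 6*(6*a**2 + 2*a + 21*b**2) + 1
        assert (6*a + 5)**2 + 14*(3*b)**2 == 6*(6*a**2 + 10*a + 21*b**2 + 4) + 1
print("all identities hold on the grid")
```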
2,727,598
<p>Given that $p$ is a prime number greater than $2$, and</p> <p>$ 1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{p-1} = \frac{N}{p-1}$ </p> <p>how do I show that $ p \mid N $?</p> <p>The previous part of this question had me factor $ x^{p-1} -1$ mod $p$, which I think is just $(x-1) \cdots (x-(p-1))$. </p>
Malcolm
12,559
<p>Working $\bmod{p}$: $$ 1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{p-1}\\ \equiv 1 + 2^{-1} + 3^{-1} + \cdots + (p-1)^{-1}\\ \equiv 1 + 2 + \cdots + (p-1) \ \text{ reordered} $$ which you have shown is $0\pmod{p}$ when you factored $x^{p-1} -1\pmod{p}$.</p> <p>So if $\displaystyle 1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{p-1} =\frac{N}{p-1}$ then necessarily $N(p-1)^{-1} = 0 \pmod{p}$ so $p$ divides $N$.</p>
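This is quick to confirm with exact arithmetic. (For the sum to be an integer $N$ over the stated denominator, that denominator is presumably meant to be $(p-1)!$; the denominator-independent statement, checked below, is that $p$ divides the numerator of the sum in lowest terms.)

```python
from fractions import Fraction

def harmonic(n):
    """H_n = 1 + 1/2 + ... + 1/n as an exact fraction."""
    return sum(Fraction(1, k) for k in range(1, n + 1))

for p in (3, 5, 7, 11, 13, 17, 19, 23):    # odd primes
    H = harmonic(p - 1)
    assert H.numerator % p == 0            # p divides the numerator in lowest terms
    print(p, H)  # e.g. 3 3/2, 5 25/12, 7 49/20, ...
```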
1,356,524
<p>EDIT: the OP has since edited the question fixing all the issues mentioned here. Yay!</p> <p>There was a <a href="https://puzzling.stackexchange.com/questions/17627/knights-knaves-and-normals-smullyans-error/17628">question asked on Puzzling recently</a>, titled <code>Knights, Knaves and Normals &quot;(Smullyan's Error)&quot;</code> where OP <s>claims</s> claimed (this has since been fixed), quote</p> <blockquote> <p>If you think the challenge [of proving you're a Knight by saying statements] is impossible, you are not alone. In What is the Name of This Book?, the esteemed Raymond M. Smullyan himself asserted (p. 97) that since Normals can both tell the truth and lie, they can say anything a Knight could. There was, however, a subtle error in his logic.</p> </blockquote> <p>The OP selected an answer with a statement</p> <blockquote> <p>&quot;If I am not a knight, this is a lie&quot;</p> </blockquote> <p>, obviously hinting that that is what he had in mind as &quot;the error&quot;.</p> <p>Frankly, after going through <a href="http://www.pdfarchive.info/pdf/S/Sm/Smullyan_Raymond_-_What_is_the_Name_of_This_Book.pdf" rel="nofollow noreferrer">the book</a> numerous times yesterday just for the sake of it, and reading Smullyan's writings for the last 20 years, I'm convinced by neither OP's claim nor the accepted answer. My rationale follows: (I'll use the convention of calling well-grounded sentences [i.e. 
statements] &quot;grounded&quot; and not well-grounded sentences &quot;ungrounded&quot;; most of the things I state here are also directly or indirectly mentioned in the book)</p> <ol> <li>a) ungrounded sentences have no truth values, b) somebody using ungrounded sentences is neither lying nor telling the truth; Smullyan has, numerous times, voiced that - perhaps the most relevant are the problems 70, and, later, entire Chapter 15, with IMO key related problem 255,</li> <li>for a statement to be true, it has to be <em>provable in certain system</em> to be true; for a statement to be false, it has to be <em>provable</em> to be false - i.e. a statement's logical value <em>has to be provable somehow</em>,</li> <li>for a statement to be provable in a system, there must be a clear and objective way to assert its sense and logical value,</li> <li>a logical statement can be either simple (we can assert its value directly) or complex (consisting of other statements bound by operators) - the operators are also a clear way of asserting if the compound expression is true, since they (like shown by e.g. 
Boole) can be translated to pure arithmetics,</li> <li>as such it follows, that for a complex statement to be grounded, it has to consist only of other grounded sentences (statements), and having at least one ungrounded statement in the statement's composition graph automatically makes it and any statement using it ungrounded,</li> <li>any statement that is neither provable true nor false in the given system is ungrounded within it, for example <span class="math-container">$p$</span> such as <span class="math-container">$p = "x * 2"$</span> is ungrounded, since it's unprovable by mathematics,</li> <li>there are two interpretations of what &quot;Being a knight/knave/normal&quot; means - one (possibly used by OP) that &quot;knight can only say truth, that is he can only use grounded sentences (statements) which are true, that is he can use those statements which are true - he can't use ungrounded sentences, since they aren't true&quot; (and so on for knave/normal), basically requiring all participants to only use grounded statements , and the second one being that &quot;knight can only say truth, that is he can use those grounded statements which are true - he can still use ungrounded statements, since they make no logical assertion about their truth&quot; (and so on for knave/normal)- however Smullyan shows <em>multiple time throughout the book</em> (see aforementioned problem 70, also problems 256 &amp; 259), that he asserted the second case for the sake of the book,</li> <li>AFAIR from both formal logic &amp; linguistic courses: imperatives, interrogatives, exclamations and statements (note: it's about linguistic &quot;statement&quot; here, not logical) with expressed modal doubt aren't grounded <em>per se</em>, only purely declarative statements without expressed modal doubt <em>can</em> be grounded. 
As an example, &quot;I'm not sure if <span class="math-container">$x=2$</span>&quot; is ungrounded, while <span class="math-container">$p = "maybe("x = 2")"$</span> is grounded when interpreted as <em>there exists a provable possibility that x = 2, although x doesn't have to be equal to 2 in all cases</em>, formally making (please excuse the abuse of notation) <span class="math-container">$maybe(x) \equiv \exists (x)\wedge \not\forall(x)$</span> (which can be obviously stated as <span class="math-container">$maybe(x) \equiv \exists (x)\wedge \exists \neg(x)$</span>) and making the exemplary statement into <span class="math-container">$\exists x=2 \wedge \not\forall x=2$</span> (which can be obviously also stated as <span class="math-container">$\exists x=2 \wedge \exists x \not =2$</span>),</li> <li>assuming the first interpretation from my pt. 7, a Knight couldn't say e.g. &quot;Could you pass the salt?&quot;, since that is a logically ungrounded sentence. Yet, throughout the book, there are <em>numerous</em> examples of truth-value-bound people saying logically ungrounded sentences - e.g. Dracula asks questions, the King expresses his doubt and asks rhetorical questions when talking to his daughter etc. - making only the second interpretation (everyone that's truth-value-bound can still use ungrounded statements) valid,</li> <li>self-referencing statements aren't necessarily ungrounded, but recursive sentences (self-referencing statements of the form <span class="math-container">$p = "f(p)"$</span>) are ungrounded (they don't constitute sentences in the logical sense), <em>because they can't be evaluated</em>. The exceptional case Smullyan makes is by showing that <em>if we can avoid recursion by making an objective statement that has a real, independent meaning</em>, then we possibly can &quot;ground&quot; the statement.</li> </ol> <p>As such, (assuming a knight/knave-only island, no normals)</p> <blockquote> <p>I'm lying now.</p> </blockquote> <p>is ungrounded. 
<em>Would be a paradox if it were grounded!</em></p> <blockquote> <p>I'm a knave.</p> </blockquote> <p>is grounded, but, as Smullyan says (and I completely agree) it means that either a) knaves can sometimes tell the truth, and/or b) knights can sometimes lie. <em>No paradox here!</em></p> <p>NB.</p> <blockquote> <p>I was lying yesterday.</p> </blockquote> <p>is grounded. The fact can be verified.</p> <blockquote> <p>I will be lying tomorrow.</p> </blockquote> <p>is grounded <em>if and only if</em> it's about <em>habit</em> (possible to verify), not the <em>actual action</em> (impossible to verify, since it hasn't happened yet).</p> <ol start="11"> <li>the accepted answer would thus be ungrounded regardless of whether the person saying it is a knight, knave or normal, and would be an invalid solution to the puzzle, because literally <em>everyone</em> can make ungrounded sentences (see my pt. 9); there are similar problems for other answers, which try to succeed in the &quot;challenge&quot; by either using ungrounded sentences or changing the setting of the question completely (e.g. by calling on a 3rd party that can verify the truthfulness of the person - which is completely against the setting of that specific problem, since you're an outsider there, and also against the original intent of the author, because it would trivialize each and every possible logical problem, since saying &quot;the way to prove you're not a normal is to have a knight say you're not a normal&quot; is equal to saying &quot;every statement can be proved by truthfully stating with utmost certainty whether it is true or false&quot; - well, <em>duh</em>... 
except it's actually <em>impossible</em> to do that with any nontrivial thesis - that's why we have to <em>prove things</em>),</li> <li>mathematically/logically speaking, IMO the formulation of problem 106/107 goes along these lines: (this mirrors the subjects discussed by Smullyan in the last couple of chapters of the book; excuse me for any possible errors in transcription)</li> </ol> <blockquote> <p>let <span class="math-container">$A$</span> be an infinite set of possible grounded statements verifiable in system <span class="math-container">$S$</span> an inhabitant can make, all with definite truth value <span class="math-container">$1$</span>, <span class="math-container">$B$</span> a set of such statements with definite truth value <span class="math-container">$0$</span>, set <span class="math-container">$V$</span> be a set of all possible verifiable grounded statements and <span class="math-container">$C$</span> a set of all possible grounded statements, regardless of their verifiability in system <span class="math-container">$S$</span>. Obviously, <span class="math-container">$A$</span> &amp; <span class="math-container">$B$</span> are disjoint, <span class="math-container">$V$</span> is a strict sum of <span class="math-container">$A$</span> and <span class="math-container">$B$</span> and <span class="math-container">$C$</span> is a strict sum of set <span class="math-container">$V$</span> and set <span class="math-container">$U$</span>, describing all the possible grounded statements that are unverifiable in <span class="math-container">$S$</span>. Select one set at random out of <span class="math-container">$\{A,B,C\}$</span> and call it <span class="math-container">$X$</span>, not knowing which one you selected. You can now select any number of statements <span class="math-container">$a_1, a_2, ... a_n$</span> from <span class="math-container">$C$</span> (i.e. 
you can pick any grounded statement possible) and check the value of <span class="math-container">$p(a_n) = b(a_n,Z)$</span>. <span class="math-container">$b$</span> is a Boolean function here corresponding to the set <span class="math-container">$X$</span> you selected, being just a known <span class="math-container">$b = x$</span> if you selected <span class="math-container">$A$</span>, a known <span class="math-container">$b = \neg x$</span> if you selected <span class="math-container">$B$</span>, <em>and some completely unknown <span class="math-container">$b(x,Z)$</span> function if <span class="math-container">$X = C$</span></em>. <span class="math-container">$Z$</span> is an unknown function of unknown parameters. Also, we define <span class="math-container">$q$</span> to be verifiable in <span class="math-container">$S$</span> if and only if we either know <em>a priori</em> the logical value of <span class="math-container">$q$</span>, or <span class="math-container">$p(q)$</span> is grounded and allows us to clearly say if <span class="math-container">$q$</span> is true or not, regardless of whether <span class="math-container">$X$</span> is <span class="math-container">$A$</span>, <span class="math-container">$B$</span> or <span class="math-container">$C$</span>.</p> </blockquote> <p>For example, &quot;I'm a knight.&quot; is unverifiable (it could've been true &amp; said by a knight, or false and said by a knave/normal), while &quot;I'm a knave and 2+2=4&quot; is obviously verifiable, since it's false and the speaker is a normal.</p> <p>It follows that verifiable statements can be only built on verifiable statements, and any composite statement that has an unverifiable statement as a part is unverifiable itself when its logical value is dependent on the unverifiable statement - see the discussion about being grounded above, replacing &quot;grounded&quot; with &quot;verifiable&quot;. 
Thus, for <span class="math-container">$u \in U, v \in V$</span>, we have <span class="math-container">$"\neg u" \in U, "\neg v" \in V, "1 \vee u" \in V, "0 \vee u" \in U, "0 \wedge u" \in V, "1 \wedge u" \in U$</span> etc.</p> <p>The question is:</p> <blockquote> <p>a) is there any number <span class="math-container">$n$</span> of <span class="math-container">$a_n$</span> statements from <span class="math-container">$X$</span>, that can definitely prove that <span class="math-container">$X$</span> is <span class="math-container">$C$</span>?</p> <p>b) is there any number <span class="math-container">$n$</span> of <span class="math-container">$a_n$</span> statements from <span class="math-container">$X$</span>, that can definitely prove that <span class="math-container">$X$</span> is <em>not</em> <span class="math-container">$C$</span>?</p> </blockquote> <p>The obvious solution for a) is to take any <span class="math-container">$a_n \in U$</span> - since neither <span class="math-container">$A$</span> nor <span class="math-container">$B$</span> has any element from <span class="math-container">$U$</span>, it proves that <span class="math-container">$X = C$</span>.</p> <p>As to b), <em>there is no way</em> to check if <span class="math-container">$X$</span> strictly <em>ain't</em> <span class="math-container">$C$</span> - if <span class="math-container">$X \not =C$</span>, then because you can only select a statement from <span class="math-container">$X$</span>, you have to select a verifiable statement. On the other hand, each and every verifiable statement may be present both in <span class="math-container">$X$</span> (be it <span class="math-container">$A$</span> or <span class="math-container">$B$</span>) and in <span class="math-container">$C$</span>.</p> <p>Of course, there's a difference between e.g. 
a statement being in set <span class="math-container">$A$</span>, and being possible to be said by a knight - knights can say everything from <span class="math-container">$A$</span> and all true things from <span class="math-container">$U$</span> - but since nothing from <span class="math-container">$U$</span> can be used to prove anything, the analogy stands - a knight would have to use statements only from <span class="math-container">$A$</span> to prove anything, a knave would have to prove things using only statements from <span class="math-container">$B$</span>, and a normal could've used both statements from <span class="math-container">$A$</span> and <span class="math-container">$B$</span> to prove things. To prove you're a normal, it'd thus be enough to use two statements, one from <span class="math-container">$A$</span> and one from <span class="math-container">$B$</span> - since only a normal can do that, that would be proof enough. You could also do that by saying e.g. &quot;I can only use false statements to prove things.&quot;.</p> <ol start="13"> <li><p>Also, AFAIK, in general <em>any</em> proof requires that all the statements used in it are grounded - for me it seems obvious (from <a href="https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems" rel="nofollow noreferrer">Godel</a> &amp; <a href="https://en.wikipedia.org/wiki/Hilbert%27s_second_problem" rel="nofollow noreferrer">Hilbert</a> for example - but it also follows common logic) that <em>we can't prove a statement (thus making it grounded) using any combination of ungrounded sentences</em>, since to prove a statement you can only use <em>statements grounded within the system in which you're proving</em>. 
Using ungroundability in a proof of groundability seems like trying to reinvent <a href="https://en.wikipedia.org/wiki/Russell%27s_paradox" rel="nofollow noreferrer">Russell's paradox</a> to me.</p> </li> <li><p>thus it would follow that OP hasn't found &quot;Smullyan's Error&quot;; he just made an error himself by not completely understanding a) the context of the book, and/or b) the formal logic behind it.</p> </li> </ol> <p>The question is:</p> <p>Am I <a href="http://knowyourmeme.com/photos/144-youre-doing-it-wrong" rel="nofollow noreferrer">doing it wrong</a>, or is <a href="http://knowyourmeme.com/memes/op-is-a-faggot" rel="nofollow noreferrer">OP wrong on this one</a>?</p>
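For what it's worth, the verifiable-statement example above (&quot;I'm a knave and 2+2=4&quot;) can be brute-force checked with a few lines of code. This is my own sketch, not from the book; the modelling of the three types (knights only say true things, knaves only false things, normals either) follows the setting described above.

```python
# Brute-force check: on an island of knights, knaves and normals,
# only a normal can utter "I'm a knave and 2+2=4".

def statement(speaker_type):
    # Truth value of "I'm a knave and 2+2=4" when spoken by speaker_type.
    return (speaker_type == "knave") and (2 + 2 == 4)

def can_say(speaker_type, value):
    # Knights may only say true things, knaves only false things,
    # normals may say either.
    if speaker_type == "knight":
        return value is True
    if speaker_type == "knave":
        return value is False
    return True  # normal

possible = {t for t in ("knight", "knave", "normal") if can_say(t, statement(t))}
print(possible)  # {'normal'}: the statement pins the speaker down as a normal
```

A knight can't say it (it would be false for him), a knave can't say it (it would be true for him), so the enumeration leaves only the normal, exactly as argued above.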
Community
-1
<p>Since <a href="https://puzzling.stackexchange.com/posts/17627/revisions">OP fixed the question along the lines from my question here</a> , I think I can assume the reasoning proposed in the question was correct. Prof. Smullyan is now officially "no longer in error" <em>chuckle</em>.</p>
3,855,521
<p>Suppose <span class="math-container">$f:A\bigoplus M \to A\bigoplus N$</span> and <span class="math-container">$g: A \to A$</span>, where <span class="math-container">$g = \pi_A\circ f\circ l_A$</span>, are isomorphisms of <span class="math-container">$R$</span>-modules. Prove that <span class="math-container">$M\cong N$</span>.</p>
Intelligenti pauca
255,730
<p>This is a rectangle, because for <span class="math-container">$x=0$</span> we get <span class="math-container">$|y|=1$</span>, but for <span class="math-container">$y=0$</span> we obtain <span class="math-container">$$ x=\root{10^{12}}\of{\pi^2\over4}\approx 1.0000000000009031654. $$</span> For a square, you'd better replace <span class="math-container">${4\over\pi^2}$</span> with <span class="math-container">$1$</span>.</p>
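The quoted value of the <span class="math-container">$10^{12}$</span>-th root can be confirmed numerically (a quick sketch of my own; `expm1` keeps precision so close to <span class="math-container">$1$</span>):

```python
import math

# (pi^2/4)**(1/10^12) = exp(ln(pi^2/4)/10^12) ~= 1 + ln(pi^2/4) * 1e-12.
# expm1 returns exp(t) - 1 without losing the tiny excess over 1.
excess = math.expm1(math.log(math.pi ** 2 / 4) / 10**12)
print(1 + excess)  # ~1.0000000000009031654, matching the value above
```

The excess over <span class="math-container">$1$</span> is just <span class="math-container">$\ln(\pi^2/4)\cdot10^{-12}\approx 9.03\cdot10^{-13}$</span>, which is why the &quot;square&quot; is so nearly, but not exactly, a square.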
3,855,521
<p>Suppose <span class="math-container">$f:A\bigoplus M \to A\bigoplus N$</span> and <span class="math-container">$g: A \to A$</span>, where <span class="math-container">$g = \pi_A\circ f\circ l_A$</span>, are isomorphisms of <span class="math-container">$R$</span>-module. Prove that <span class="math-container">$M\cong N$</span>.</p>
David K
139,123
<p>This is related to what happens with the graphs of very high powers of <span class="math-container">$x,$</span> which in turn is related to exponential growth and decay.</p> <p>Graph <span class="math-container">$y = x^2.$</span> Notice that the curve goes through <span class="math-container">$(0,0)$</span> at its low point, and goes through <span class="math-container">$(-1,1)$</span> on the left and <span class="math-container">$(1,1)$</span> on the right. And the graph has a tiny nearly level section very near the bottom.</p> <p>Try <span class="math-container">$y = x^4.$</span> It's somewhat like <span class="math-container">$y=x^2$</span>, but the sides are steeper at <span class="math-container">$(-1,1)$</span> and <span class="math-container">$(1,1)$</span> and the bottom is much flatter.</p> <p>Try <span class="math-container">$y = x^{10}$</span>. Steeper sides, flatter bottom than <span class="math-container">$x^4.$</span></p> <p>As you try higher and higher powers of <span class="math-container">$x,$</span> you get a larger and larger &quot;flat&quot; part at the bottom of the curve. This part isn't really flat, it's just that for any number <span class="math-container">$x$</span> with <span class="math-container">$|x|&lt;1,$</span> if you look at <span class="math-container">$x^n$</span> and increase the exponent <span class="math-container">$n$</span> you have a process of exponential decay where <span class="math-container">$x^n$</span> approaches zero. 
At some exponent the value of <span class="math-container">$x^n$</span> will be so small that you cannot see the difference between <span class="math-container">$x^n$</span> and zero on the graph.</p> <p>For values of <span class="math-container">$x$</span> closer to <span class="math-container">$\pm 1$</span>, <span class="math-container">$x^n$</span> decays slower and it takes a higher value of <span class="math-container">$n$</span> before <span class="math-container">$x^n$</span> gets close enough to zero to be indistinguishable from zero by your eye. But if you take really large values of <span class="math-container">$n$</span>, such as <span class="math-container">$10^{12},$</span> the numbers near <span class="math-container">$\pm1$</span> for which <span class="math-container">$x^n$</span> is <strong>not</strong> visually indistinguishable from zero are so close to <span class="math-container">$\pm1$</span> that they are visually indistinguishable from <span class="math-container">$1$</span> and the graph looks like it has straight vertical sides there. 
In fact even at <span class="math-container">$n = 1000$</span> the graph looks pretty square at the bottom to me.</p> <p>Now flip the graph over by taking <span class="math-container">$y = 1 - x^n$</span> for a very large value of <span class="math-container">$n.$</span> It still has that rectangular shape, but the flat level part is at <span class="math-container">$y = 1$</span> and the rest is below that, passing through <span class="math-container">$(-1,0)$</span> and <span class="math-container">$(1,0)$</span>.</p> <p>Now take <span class="math-container">$y = \sqrt{1 - x^n}.$</span> If <span class="math-container">$n$</span> is large enough this still looks rectangular, but the parts of the graph below the <span class="math-container">$x$</span> axis have disappeared because negative numbers do not have real square roots.</p> <p>If you now square both sides, <span class="math-container">$y^2 = {1 - x^n},$</span> you get the same result above the <span class="math-container">$x$</span> axis, but since <span class="math-container">$(-y)^2 = y^2$</span> you get two symmetric values of <span class="math-container">$y$</span> for each value of <span class="math-container">$x,$</span> that is, the graph above the <span class="math-container">$x$</span> axis is mirrored below the <span class="math-container">$x$</span> axis, forming what looks like a square.</p> <p>Multiplying <span class="math-container">$x^n$</span> by some positive constant <span class="math-container">$a$</span>, as in <span class="math-container">$y^2 = {1 - ax^n},$</span> makes the graph wider or narrower in the <span class="math-container">$x$</span> direction. That is, you are graphing <span class="math-container">$y^2 = {1 - (a^{1/n}x)^n},$</span> so the graph is scaled by a factor of <span class="math-container">$a^{-1/n}$</span> in width. 
If <span class="math-container">$a$</span> is not too large (for example, <span class="math-container">$a = 4/\pi^2$</span>) and <span class="math-container">$n$</span> is very large, <span class="math-container">$a^{-1/n}$</span> is extremely near <span class="math-container">$1$</span> (as other answers have pointed out).</p> <blockquote> <p>For the exponent of <span class="math-container">$x$</span> being some power of <span class="math-container">$10$</span> greater than <span class="math-container">$10^{12}$</span>, a part of the curve began to disappear.</p> </blockquote> <p>I had a similar experience with extremely high powers of <span class="math-container">$x$</span>, using the graphing calculator at Desmos.com. I suspect this is a limitation of the size of number that the calculator can deal with, or perhaps the horizontal step size (graph so steep that the software cannot increment <span class="math-container">$x$</span> slowly enough to plot a continuous curve).</p>
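The decay claim can be quantified with a short script. This is my own sketch; the `1e-3` &quot;invisibility&quot; threshold is an arbitrary stand-in for plot resolution.

```python
# For |x| < 1, x**n decays geometrically in n, but the closer |x| is to 1,
# the larger n must get before x**n becomes invisibly small on a plot.
# This is why enormous exponents are needed to flatten the curve near x = 1.
threshold = {}
for x in (0.5, 0.9, 0.99, 0.999):
    n = 1
    while x ** n >= 1e-3:   # 1e-3: roughly "invisible" at plot resolution
        n += 1
    threshold[x] = n
print(threshold)  # {0.5: 10, 0.9: 66, 0.99: 688, 0.999: 6905}
```

At <span class="math-container">$x=0.999$</span> it already takes thousands of doublings of the exponent's scale, which matches the observation that the sides of <span class="math-container">$y=x^n$</span> only look truly vertical for huge <span class="math-container">$n$</span>.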
373,397
<p>The question is just like in the title:</p> <p>How to solve $x^3-bx^2+c=0$ for $x$ analytically, when $b$ and $c$ are constants, and all (coefficients and variables) are real.</p>
John Gowers
26,267
<p>The first step when solving an arbitrary cubic is to get rid of the quadratic term (the term in $x^2$). We do this by 'completing the cube' - a process analogous to completing the square for quadratics. </p> <p>Specifically, we make the substitution $y=x-\frac{1}{3}b$. Then: </p> <p>$$ y^3 = \left(x-\frac{1}{3}b\right)^3 = x^3 - bx^2 +\frac{1}{3}b^2x-\frac{1}{27}b^3 $$</p> <p>Combining this with our equation $x^3-bx^2+c=0$ (and writing $x=y+\frac{1}{3}b$) gives us: </p> <p>$$ y^3 - \frac{1}{3}b^2y-\frac{2}{27}b^3+c=0 $$</p> <p>And we now have a cubic of the form $y^3 - \alpha y + \beta=0$ (A much nicer way to get this form, as Mark Bennet points out, is to use the substitution $y=\frac{1}{x}$. Then you end up with the cubic equation $y^3 - \frac{b}{c}y+\frac{1}{c}=0$). </p> <p>Anyway, suppose you have a cubic equation of the form $y^3-\alpha y+\beta=0$, where $\alpha$ and $\beta$ are real and $\alpha$ is positive. It is a theorem of Galois theory that there is no solution by radicals (i.e., a solution entirely in terms of the coefficients $\alpha$ and $\beta$, where you're allowed to use $+$, $-$, $\times$, $/$ and take roots) of an arbitrary cubic equation that lives entirely in the real numbers (the so-called <em>casus irreducibilis</em>). That is to say, even if the coefficients are all real and the solutions are all real, you might still have to get your hands dirty using complex numbers. I wrote an essay a long time ago detailing how that's done, which you can find <a href="https://docs.google.com/file/d/0B3-eWwU682JaTEFja0FSTXFldWM/edit?usp=sharing" rel="nofollow">here</a>, or you can look up 'Cardano's method' to see how to do it with complex numbers. </p> <p>There is, however, a way to get real solutions to a cubic with real coefficients that does not involve complex numbers. It's not a solution by radicals, since it uses trigonometric functions, but it's still quite nice. Note that it is important that $\alpha$ be positive for this to work. 
We first make the substitution $y=u\cos\theta$ and note the standard trigonometric identity: </p> <p>$$ 4\cos^3\theta-3\cos\theta=\cos(3\theta) $$</p> <p>It turns out that we can choose $u$ in such a way that our cubic equation $y^3-\alpha y + \beta$ turns into this identity. The correct choice turns out to be $u=2\sqrt{\frac{\alpha}{3}}$ (see if you can derive this for yourself). Then our cubic becomes: </p> <p>\begin{align} y^3-\alpha y + \beta &amp;= u^3\cos^3\theta-\alpha u\cos\theta + \beta \\ &amp;=8\frac{\alpha}{3}\sqrt{\frac{\alpha}{3}}\cos^3\theta-2\alpha\sqrt{\frac{\alpha}{3}}\cos\theta+\beta \\ &amp;=\frac{2\alpha}{3}\sqrt{\frac{\alpha}{3}}\left(4\cos^3\theta-3\cos\theta\right)+\beta \\ &amp;=\frac{2\alpha}{3}\sqrt{\frac{\alpha}{3}}\cos(3\theta)+\beta \end{align}</p> <p>From here, it's pretty easy to find $\cos(3\theta)$ in terms of $\alpha$ and $\beta$ and hence to compute $\theta$. Then, setting $y=u\cos\theta$, we are done. </p>
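Here is a small numeric sketch of the method just described (the code, helper name and example cubic are mine, not part of the original answer). It assumes the three-real-roots case, so that $|\cos(3\theta)|\le 1$; the other two roots come from the $2\pi$-periodicity of $\cos(3\theta)$, which the identity gives for free.

```python
import math

def depressed_roots_trig(alpha, beta):
    """Real roots of y**3 - alpha*y + beta = 0 via y = u*cos(theta),
    u = 2*sqrt(alpha/3).  Assumes alpha > 0 and three real roots,
    i.e. |cos(3*theta)| <= 1 (the casus irreducibilis case)."""
    u = 2.0 * math.sqrt(alpha / 3.0)
    # From (2*alpha/3)*sqrt(alpha/3)*cos(3*theta) + beta = 0:
    c = -beta / ((2.0 * alpha / 3.0) * math.sqrt(alpha / 3.0))
    theta = math.acos(c) / 3.0
    # cos(3*theta) is unchanged by theta -> theta - 2*pi*k/3, so there
    # are three choices of theta, giving the three real roots.
    return [u * math.cos(theta - 2.0 * math.pi * k / 3.0) for k in range(3)]

# Example: y^3 - 3y + 1 = 0  (alpha = 3, beta = 1)
roots = depressed_roots_trig(3.0, 1.0)
for y in roots:
    assert abs(y**3 - 3 * y + 1) < 1e-9   # each candidate really is a root
print(sorted(roots))
```

To solve the original equation $x^3-bx^2+c=0$ one would first depress it as above and then shift each root back by $x=y+\frac{1}{3}b$.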
2,177,619
<p>Find a solution to the boundary value problem \begin{align}y''+ 4y &amp;= 0 \\ y\left(\frac{\pi}{8}\right) &amp;=0\\ y\left(\frac{\pi}{6}\right) &amp;= 1\end{align} if the general solution to the differential equation is $y(x) = C_1 \sin(2x) + C_2 \cos (2x)$.</p> <p>I was able to compute the following equations: \begin{align}C_1 \left(\frac 12\right)\sqrt2 + C_2 \left(\frac 12\right)\sqrt2 &amp;= 0\\ C_1 \left(\frac 12\right)\sqrt3 + C_2 \left(\frac 12\right) &amp;= 1\end{align}</p> <p>However, I am unable to solve the system of equations. The book says the answer is $C_1 = \frac{2}{\sqrt3 -1}$ and $C_2 = -C_1$. I am not sure how to go about manipulating the equations to get one variable. </p>
Hmath
262,582
<p>You have a system of 2 equations in 2 unknowns!! The first equation gives $C_1 + C_2 = 0$ and thus $C_1 = -C_2$.</p> <p>Substitute into the second equation and solve!!</p>
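To make the hint concrete (a quick numeric sketch of my own, with the algebra spelled out in the comments):

```python
import math

# The two boundary conditions as a linear system in C1, C2:
#   (sqrt(2)/2) C1 + (sqrt(2)/2) C2 = 0   ->  C1 = -C2
#   (sqrt(3)/2) C1 + (1/2)       C2 = 1
# Substituting C2 = -C1 into the second equation:
#   C1 * (sqrt(3) - 1) / 2 = 1  ->  C1 = 2 / (sqrt(3) - 1)
C1 = 2.0 / (math.sqrt(3.0) - 1.0)
C2 = -C1

# Check against the original boundary conditions y(pi/8) = 0, y(pi/6) = 1:
def y(x):
    return C1 * math.sin(2 * x) + C2 * math.cos(2 * x)

print(y(math.pi / 8))  # ~0
print(y(math.pi / 6))  # ~1
```

This reproduces the book's answer $C_1 = \frac{2}{\sqrt3-1}$, $C_2 = -C_1$.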
1,678,687
<p>Find the point $(x_0, y_0)$ on the line $ax + by = c$ that is closest to the origin. </p> <p>According to this <a href="http://www.intmath.com/plane-analytic-geometry/perpendicular-distance-point-line.php" rel="nofollow noreferrer">source</a>, I thought that $\left( -\frac{ac}{a^{2}+b^{2}}, -\frac{bc}{a^{2}+b^{2}} \right)$ was the point but it doesn't seem to be correct. Thanks for any help.</p>
Ng Chung Tak
299,599
<p>\begin{array}{cccc} ax+by &amp;=&amp; c &amp; \cdots \cdots (1) \\ bx-ay &amp;=&amp; 0 &amp; \cdots \cdots (2) \end{array}</p> <p>$(1)$ is equation of the given line,</p> <p>$(2)$ is the equation of the normal passing through the origin</p> <p>$(1)\cap(2)$ is the required point.</p> <p>$(1)\times a+(2)\times b$,</p> <p>$$(a^{2}+b^{2})x=ac$$</p> <p>$(1)\times b-(2)\times a$,</p> <p>$$(a^{2}+b^{2})y=bc$$</p> <p>$$\therefore \quad (x,y)=\left( \frac{ac}{a^{2}+b^{2}}, \frac{bc}{a^{2}+b^{2}} \right)$$</p>
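The intersection above can be packaged as a tiny function and checked numerically (my own sketch; the helper name is mine):

```python
import math

def closest_point_to_origin(a, b, c):
    """Foot of the perpendicular from the origin to the line a*x + b*y = c
    (assumes a, b are not both zero)."""
    d = a * a + b * b
    return (a * c / d, b * c / d)

# Example: line 3x + 4y = 10
x0, y0 = closest_point_to_origin(3, 4, 10)
assert abs(3 * x0 + 4 * y0 - 10) < 1e-12   # the point lies on the line (1)
assert abs(4 * x0 - 3 * y0) < 1e-12        # and on the normal b*x - a*y = 0 (2)

# Any other point on the line (direction (b, -a) stays on the line)
# is at least as far from the origin:
t = 2.7
px, py = x0 + t * 4, y0 - t * 3
assert math.hypot(px, py) >= math.hypot(x0, y0)
print((x0, y0))  # (1.2, 1.6)
```

Note the signs: for the form $ax+by=c$ the answer is $\left(\frac{ac}{a^2+b^2},\frac{bc}{a^2+b^2}\right)$; the minus signs in the formula quoted in the question come from writing the line as $ax+by+c=0$ instead.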
1,586,607
<p>This is something I've been thinking about lately;</p> <p>$$\int_0^1 \int_0^1 \frac{1}{1-(xy)^2} dydx$$ Solutions I've read involve making the substitutions: $x= \frac{sin(u)}{cos(v)}$ and $y= \frac{sin(v)}{cos(u)}$. This reduces the integral to the area of a right triangle with both legs of length $\frac{\pi}{2}$. My problem is that coming up with this substitution is not at all obvious to me, and realizing how the substitution distorts the unit square into a right triangle seems to require a lot of reflection. My approach without fancy tricks involves letting $u = xy$ and then the integral "simplifies" accordingly:</p> <p>$\begin{align*} \int_0^1 \int_0^1 \frac{1}{1-(xy)^2} dydx &amp;= \int_0^1\frac{1}{x}\int_0^x \frac{1}{1-u^2}dudx\\ &amp;= \int_0^1\frac{1}{2x}\int_0^x \frac{1}{1-u}+\frac{1}{1+u}dudx\\ &amp;= \int_0^1\frac{1}{2x}ln\left(\frac{1+x}{1-x}\right)dx \end{align*}$ </p> <p>If I've done everything right this should be $\frac{\pi^2}{8}$ but I haven't figured out how to solve it.</p>
Vivek Kaushik
169,367
<p>You can also do this, and this one does not involve any fancy tricks: </p> <p>Consider the double integral: $$I=\int_{0}^{\infty}\int_{0}^{\infty}\frac{1}{(1+x)(x+y^2)}dydx.$$ </p> <p>Integrate with respect to $y$ first and recognize this is :</p> <p>$$I=\int_{0}^{\infty} \frac{1}{1+x}\lim_{y \rightarrow \infty} \frac{\arctan{\frac{y}{\sqrt{x}}}}{\sqrt {x}} dx=\int_{0}^{\infty}\frac{\frac{\pi}{2}}{\sqrt{x}{(1+x)}}dx.$$ Now if you apply the transformation $u=\sqrt{x}, du=\frac{1}{2\sqrt{x}}dx,$ you get $$I=\int_{0}^{\infty} \frac{\pi}{(1+u^2)}du=\frac{\pi^2}{2}.$$</p> <p>On the other hand, reverse the order of integration in $I$ as such: $$I=\int_{0}^{\infty}\int_{0}^{\infty}\frac{1}{(1+x)(x+y^2)}dxdy.$$ Now integrate with respect to $x$ using partial fractions as such:</p> <p>$$\frac{1}{(1+x)(x+y^2)}=\frac{1}{y^2-1} \left(\frac{1}{1+x}-\frac{1}{x+y^2}\right).$$</p> <p>$$I=\int_{0}^{\infty}\int_{0}^{\infty}\frac{1}{y^2-1} \left(\frac{1}{1+x}-\frac{1}{x+y^2}\right)dxdy=\int_{0}^{\infty}\lim_{x\rightarrow \infty}\frac{\ln(1+x)-\ln(x+y^2)}{y^2-1}dy=\int_{0}^{\infty}\frac{\ln(y^2)}{y^2-1}dy=\int_{0}^{\infty}\frac{2\ln(y)}{y^2-1}dy.$$</p> <p>Now consider $$J=\int_{0}^{1}\frac{\ln(y)}{y^2-1}dy.$$ A u-substitution $y=\frac{1}{z}, dy=\frac{-1}{z^2} dz$ tells us: $$J=\int_{1}^{\infty}\frac{\ln(z)}{z^2-1}dz.$$ This tells us: $$J=\frac{I}{4}=\frac{\pi^2}{8}.$$ Now let us make the substitution $z=\frac{1-t}{1+t}, dz=\frac{-2}{(t+1)^2}dt.$ We have:</p> <p>$$J=\frac{\pi^2}{8}=\int_{0}^{1} \frac{\ln(1+t)-\ln(1-t)}{2t} dt,$$ the integral you arrived at. </p>
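As a numeric sanity check of the final equality (my own addition, not part of the derivation): a midpoint rule handles the integrable log singularity at $t=1$ fine, since it never samples the endpoints.

```python
import math

# Midpoint-rule check that int_0^1 (ln(1+t) - ln(1-t))/(2t) dt = pi^2/8.
def f(t):
    return (math.log(1 + t) - math.log(1 - t)) / (2 * t)

N = 200_000
approx = sum(f((i + 0.5) / N) for i in range(N)) / N
print(approx, math.pi ** 2 / 8)  # both ~1.2337
```

The agreement to several digits is a reassuring cross-check of the chain of substitutions above.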
3,772,193
<p>If I put the two vectors 1,0,0 and 0,1,0 next to each other</p> <p><span class="math-container">$$\begin{pmatrix} 1 &amp; 0 \\ 0 &amp; 1 \\ 0 &amp; 0 \end{pmatrix}$$</span></p> <p>I can see that they are independent, since I cannot write <span class="math-container">$(1,0,0)^t$</span> as a multiple of <span class="math-container">$(0,1,0)^t$</span>.</p> <p>But I've heard that &quot;If you have a row of <span class="math-container">$0$</span>'s in your matrix, it's linearly dependent&quot;</p> <p>But the matrix above has a row of zeroes while its columns are still linearly independent. Which means I'm understanding something wrong. What am I missing?</p>
gt6989b
16,192
<p>I think you may be confusing two notions.</p> <p>The vectors <span class="math-container">$(1,0,0)^T$</span> and <span class="math-container">$(0,1,0)^T$</span> are linearly independent, since the <em>columns</em> in the matrix are independent.</p> <p>The <em>rows</em> in the matrix, on the other hand, are linearly dependent, since they are the vectors <span class="math-container">$(1,0),(0,1)$</span> and <span class="math-container">$(0,0)$</span>, which are clearly dependent...</p>
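One way to see the row/column distinction concretely is to compute ranks. This is my own sketch in exact arithmetic; the small `rank` helper is mine, not a standard library function.

```python
from fractions import Fraction

def rank(rows):
    """Rank of a matrix (list of rows) via Gaussian elimination
    over the rationals, so there are no floating-point issues."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0]) if m else 0):
        pivot = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

A = [[1, 0], [0, 1], [0, 0]]        # the matrix from the question
cols = list(map(list, zip(*A)))     # its columns, as vectors in R^3

assert rank(cols) == 2 == len(cols)  # 2 columns, rank 2: columns independent
assert rank(A) == 2 < len(A)         # 3 rows, rank 2: rows dependent
print("rank:", rank(A))
```

The rank (2) equals the number of columns but is smaller than the number of rows, which is exactly the asymmetry described above.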
3,772,193
<p>If I put the two vectors 1,0,0 and 0,1,0 next to each other</p> <p><span class="math-container">$$\begin{pmatrix} 1 &amp; 0 \\ 0 &amp; 1 \\ 0 &amp; 0 \end{pmatrix}$$</span></p> <p>I can see that they are independent since I cannot write <span class="math-container">$(1,0,0)^t$</span> as <span class="math-container">$(0,1,0)^t$</span>.</p> <p>But I've heard that &quot;If you have a row of <span class="math-container">$0$</span>'s in your matrix, it's linearly dependent&quot;</p> <p>But the matrix above has a row of zeroes but is still linearly independent. Which means I'm understanding something wrong. What am I missing?</p>
mathreadler
213,607
<p>If you look at the row vectors: [0,0] must be a linear combination of [1,0] and [0,1],</p> <p>because the number of vectors is larger than the number of entries in each vector.</p> <p>But if you look at the columns, it's the opposite relationship:</p> <p>there are fewer columns than entries in each column, so independence is possible.</p>
57,508
<p>While learning commutative algebra and basic algebraic geometry and trying to understand the structure of results (i.e. what should be proven first and what next) I came to the following question: </p> <p>Is it possible to prove that $\mathbb A^2-point$ is not an affine variety, if you don't know that the polynomial ring is a unique factorisation domain?</p> <p>It seems to me that this question has some meaning, since when we define an affine variety, we don't need to use the fact that the polynomial ring is a UFD. Do we?</p>
Hailong Dao
2,083
<p>Yes, you can do it over any field.</p> <p>First, it is enough to show $\mathcal O(Y) = k[x,y]$ ($Y=A^2-0$). If that is true and $Y$ is affine, then the embedding $Y \to A^2$ must correspond to some $k$-algebra map $k[x,y] \to k[x,y]$, which is absurd.</p> <p>The key point now, as in Guillermo's post, is to show that $R_{(x)} \cap R_{(y)}= R$ ($R=k[x,y]$). It will follow from the </p> <blockquote> <p>Fact: $(x^m, y^n)$ form a regular sequence on $R$ for all $m,n&gt;0$ </p> </blockquote> <p>Indeed, if $f/x^m =g/y^n$, then $fy^n=0$ modulo $x^m$, so $f=hx^m$ and we are done.</p> <p>The above Fact is elementary. For example you can induct on $m$. Clearly $m=1$ is OK. Now if $m&gt;1$, use the short exact sequence:</p> <p>$$0 \to R/{(x^{m-1})} \to R/(x^m) \to R/(x) \to 0$$ </p> <p>and Snake Lemma to conclude that $y^n$ is regular on the middle term as well. </p> <p>Note that I used $x,y$ abstractly and all you need is that the elements $(x,y)$ form a regular sequence on a commutative Noetherian ring $R$ to start with. Then the proof shows that $\mathcal O(Y) =R$ if $Y=\text{Spec}(R) - V(x,y)$. In fact, more general results are true for ideals of depth at least $2$. If you are interested, it will be a good motivation to learn about depth and regular sequences. </p>
3,823,048
<p>My question needs more context than what can fit into the title so let me elaborate. In pretty much all the art textbooks I am reading on linear perspective they state that the correct way of placing an ellipse is to imagine the minor axis of an ellipse as an axle of a wheel running to the opposite vanishing point. Here are examples for the various types of perspective the textbooks mention.</p> <p>One point perspective:</p> <p>In one point perspective only one set of parallel lines of a cuboid is concurrent; the other two sets are parallel (one set parallel to the horizon, the other perpendicular to it). If we plot ellipses inside a one-point square, the minor axes should all run vertically (toward the non-concurrent vanishing point). We find that only ellipses whose major axes are perpendicular to the vanishing point (A`) have this property.</p> <p><a href="https://i.stack.imgur.com/L94aq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/L94aq.png" alt="One Point perspective" /></a></p> <p>Two point perspective:</p> <p>In two point perspective two of the sets of parallel lines of a cuboid are concurrent (meeting at R3 &amp; S3 in the image) and the other set is not concurrent (parallel). I tried placing an ellipse such that the perspective center points of the sides of the quadrilateral are the tangent points of the ellipse, but the minor axis does not seem concurrent with the lines running to the opposite concurrence (vanishing point). It should be stated that the two quadrilaterals and their concurrences are a mirror reflection of each other, but this is not always the case in two point perspective.</p> <p><a href="https://i.stack.imgur.com/tJ1j5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tJ1j5.png" alt="Two point perspective" /></a></p> <p>If I adjust the size of the bounding quadrilateral I can make the minor axis concurrent with the lines running to the opposite vanishing point... 
but what if I want an ellipse placed inside a different sized quadrilateral?</p> <p><a href="https://i.stack.imgur.com/UtGuI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UtGuI.png" alt="Two point additional" /></a></p> <p>Am I missing something in my plotting of ellipses or are the authors of these textbooks mistaken?</p>
Théophile
26,091
<p>You are absolutely right; the authors of these textbooks are mistaken, and your diagrams show why.</p> <p>I happen to have watched a fair number of instruction videos on perspective drawing, and to my frustration, I haven't yet found a single one that addresses this mistake, even from the most meticulous and formal instructors.</p> <p>This is somewhat understandable. Broadly speaking, artists establish rules based on aesthetics, intuition, and approximation; whereas mathematicians remain skeptical without proof. These are two valid mindsets for two different fields, and your question belongs to the overlap of the two. The danger of an artist learning the technical craft of perspective is that they might use rules of thumb indiscriminately without fully understanding when they apply. The danger of a mathematician learning to draw is that they might break out a straightedge and compass without considering whether that will actually contribute to a successful composition. It would be nice to find a happy middle ground between these two camps, but ellipses turn out to be surprisingly complex. A perfect geometric construction exists, but just isn't worth it in the context of a larger drawing, so it's natural to look for a simpler way.</p> <p>Note that the rule actually does work for isometric drawings. In other words, if all parallel lines remain parallel (as opposed to converging to a vanishing point), then it holds. There are other cases where it holds, as you allude to in the comments. Most importantly, the rule will be reasonably close when the drawing is within the cone of vision. In contrast, in your first example, the leftmost circle is outside the cone of vision and therefore quite distorted.</p> <p>In summary, the &quot;rule&quot; should be considered a good approximation most of the time, but the more distortion there is, the more likely you should abandon it and use your instincts (or a straightedge and compass).</p>
1,968,267
<p>When doing induction should you always try to put your final answer as the "<em>desired</em> " form? For example if: $$\sum^{n}_{k=1}(k+2)(k+4) = \frac{2n^{3} + 21n^{2} + 67n}{6}$$ we ought to give the final answer as $$\frac{2(k+1)^{3} + 21(k+1)^{2} + 67(k+1)}{6}?$$</p> <p>I just expanded both the $\text{LHS}_{k+1}$ and the $\text{RHS}_{k+1}$ to show they were equal after the induction. Like this: </p> <hr> <p>Show that $$\sum^{n}_{k=1}(k+2)(k+4) = \frac{2n^{3} + 21n^{2} + 67n}{6}$$ for all integers $n \geq 1$.</p> <p>For $n = 1$,</p> <p>$$\sum^{1}_{k=1}(k+2)(k+4) = 15$$</p> <p>and</p> <p>$$\frac{2(1)^{3} + 21(1)^{2} + 67(1)}{6} = 15$$</p> <p>Assume that it is true for some integer $n = k$, thus $$\sum^{k}_{k=1}(k+2)(k+4) = \frac{2k^{3} + 21k^{2} + 67k}{6}$$ so the $\text{LHS}_{k+1}$ $$\sum^{k+1}_{k=1}(k+2)(k+4) = \sum^{k}_{k=1}(k+2)(k+4) + (k+3)(k+5)$$ $$= \frac{2k^{3} + 21k^{2} + 67k}{6} + \frac{6(k+3)(k+5)}{6}$$ $$=\frac{2k^{3} + 27k^{2} + 115k + 90}{6}$$ Now the $\text{RHS}_{k+1}$ $$\frac{2(k+1)^{3} + 21(k+1)^{2}+ 67(k+1)}{6} = \frac{2k^{3} + 27k^{2} + 115k + 90}{6}$$ Thus $\text{LHS}_{k+1} = \text{RHS}_{k+1}$ Q.E.D.</p>
Iuli
33,954
<p>$(k+2)(k+4)=k^2+6k+8$, so</p> <p>$$\sum^{n}_{k=1}{(k+2)(k+4)}=\sum^{n}_{k=1}{k^2}+6\sum^{n}_{k=1}{k}+\sum^{n}_{k=1}{8}=\frac{n(n+1)(2n+1)}{6}+6 \cdot \frac{n(n+1)}{2}+8n$$</p> <p>Putting everything over the common denominator $6$ gives $$\frac{n(n+1)(2n+1)+18n(n+1)+48n}{6}=\frac{n(2n^2+21n+67)}{6}=\frac{2n^3+21n^2+67n}{6},$$ which is the claimed closed form, with no induction needed.</p>
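As a quick sanity check of the closed form (not part of the original argument), one can compare it against a brute-force sum; a small Python sketch:

```python
# Brute-force check that sum_{k=1}^{n} (k+2)(k+4) = (2n^3 + 21n^2 + 67n) / 6.
def lhs(n):
    return sum((k + 2) * (k + 4) for k in range(1, n + 1))

def rhs(n):
    # The numerator is always divisible by 6, so integer division is exact.
    return (2 * n**3 + 21 * n**2 + 67 * n) // 6

for n in range(1, 100):
    assert lhs(n) == rhs(n)
```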
3,184,887
<p>I want to divide a prize, say 1000, among an unknown number of contestants, as determined by their rank in a marathon race. I want the smallest portion to be above zero (so it is worthwhile to compete). So the higher the rank: the higher the prize.</p> <p>PS, feel free to explain in detail as I am a math noob. Thanks.</p> <p>PPS, I just made the marathon up to make the problem more familiar, but I guess real marathons actually are not even like that. My application is for a drag and drop stack of user interface elements, for which I want to have a gradient of values according to their sequence, above being higher value, below being lower value.</p> <p>Update: this was simpler than I thought, and based on the comments, I came to a formula: </p> <p>n = number of contestants<br> P = prize : 1000<br> r = current contestant rank<br> p = unknown portion </p> <p>p = P(r/(n(n+1)/2)) </p> <p>But I would like to simplify this formula please if possible, cause I'll use it in programming</p> <p>edit 2: I got this far: P(r/(n(n+1)/2)) = 2Pr/(n(n+1))</p>
celnaFR
625,259
<p>I think it might be interesting to formalize your thought: in fact, you want to distribute a certain amount of money, let's say 1000 (€!). </p> <p>As you noticed in your previous message, it is strictly equivalent to create numbers between 0 and 1 and then multiply them by the total amount of money you want to distribute. The number - let's say P(n) - between 0 and 1 corresponds to the proportion of the total amount of money to attribute to the n-th finisher.</p> <p>Another constraint is that the total amount of money distributed is equal to 1000, which is equivalent to distributing 100% of the money you have: the sum of the proportions must be equal to 1.</p> <p>Finally, you want the first person to receive more than the second, and so on.</p> <p>In a mathematical way, your question is equivalent to asking which (discrete) probability distribution you can take to fit your constraints: you need to restrict yourself to <em>discrete monotone probability distributions</em>. </p> <p>It is a really interesting subject; you might start to study probability to get the intuition behind the object (further study might include measure theory, a fascinating discipline).</p>
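A minimal numeric sketch of the OP's own scheme (share proportional to the rank weight $r$, so the weights $1+2+\cdots+n$ exhaust the prize); Python's `fractions` keeps the proportions exact. The function name is only for illustration:

```python
from fractions import Fraction

def shares(P, n):
    """Split prize P over n ranks with share p = P * r / (1 + 2 + ... + n),
    i.e. the OP's p = 2*P*r / (n*(n+1)); weight r = n gets the biggest share."""
    total_weight = n * (n + 1) // 2
    return [Fraction(P * r, total_weight) for r in range(1, n + 1)]

s = shares(1000, 4)          # weights 1, 2, 3, 4 out of 10
assert s == [100, 200, 300, 400]
assert sum(s) == 1000        # the whole prize is distributed
assert min(s) > 0            # smallest portion stays above zero
```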
1,252,326
<p>"A student sits 6 examination papers, each worth 100 marks. In how many possible ways can he score 40% of the total possible marks?" I could not think of a way to attack this from first principles and thought there must be some theoretical generality that would make it quite easy. Haven't done any serious mathematics for almost 50 years.</p>
Tryss
216,059
<p>If $e_i$ is the score in exam $i$, we want to know the number of combinations $(e_1,\cdots,e_6)$ such that:</p> <p>$$\forall i,\ 0\leq e_i \leq 100$$</p> <p>$$\sum_{i=1}^6 e_i = 240$$</p> <p>And here, I guess, the best way is to consider the general problem with $n$ exams and $p$ points.</p> <p>You then have the recurrence relation</p> <p>$$S(n,p) = \sum_{i=0}^{100} S(n-1,p-i)$$</p> <p>The only subtlety is at the boundaries: $S(n,p)=0$ whenever $p&lt;0$ or $p&gt;100n$, and the recursion starts from $S(0,0)=1$.</p>
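The recurrence is easy to run directly; a hedged Python sketch (memoised, with the boundary cases $S(0,0)=1$ and $S(n,p)=0$ for $p<0$ or $p>100n$ made explicit), cross-checked against inclusion-exclusion:

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def S(n, p):
    """Number of ordered n-tuples of integers in [0, 100] summing to p."""
    if p < 0 or p > 100 * n:           # outside the feasible range
        return 0
    if n == 0:
        return 1                        # here p == 0 by the check above
    return sum(S(n - 1, p - i) for i in range(101))

ways = S(6, 240)                        # six papers, 40% of 600 marks

# Independent cross-check by inclusion-exclusion on the bounds e_i <= 100.
check = sum((-1)**j * comb(6, j) * comb(245 - 101 * j, 5) for j in range(3))
assert ways == check
```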
2,598,912
<p>Let $f$ be some function in $L_{loc}^1(\mathbb{R})$ such that, for some $a \in \mathbb{R}$,</p> <p>$$\int_{|x| \leq r} |f(x)|dx \leq (r+1)^a$$</p> <p>for all $r \geq 0$. Show that $f(x)e^{-|tx|} \in L^1(\mathbb{R})$ for all $t \in \mathbb{R} \setminus \{0\}$. </p> <p>I'm having a hard time finding use of the bound described above. Any help would be appreciated. </p>
Julián Aguirre
4,791
<p>A different proof. Assume $t&gt;0$ (for $t&lt;0$ replace $t$ by $|t|$). \begin{align} \int_{\Bbb R}|f(x)|\,e^{-t|x|}\,dx&amp;=\sum_{n=0}^\infty\int_{n\le|x|&lt;n+1}|f(x)|\,e^{-t|x|}\,dx\\ &amp;\le\sum_{n=0}^\infty e^{-tn}\int_{|x|&lt;n+1}|f(x)|\,dx\\ &amp;\le\sum_{n=0}^\infty e^{-tn}(2+n)^a\\ &amp;&lt;\infty, \end{align} where the second inequality uses the hypothesis with $r=n+1$, and the final series converges by the ratio test.</p>
1,809,500
<p>Prove that $k!&gt;(\frac{k}{e})^{k}$. </p> <p>It is known that $e^{k}&gt;(1+k)$. So if we multiply $k!$ on both sides, we get $k!e^{k}&gt;(k+1)!$. Also $k^k&gt;k!$. Now how to proceed ?</p>
Dark
208,508
<p>Work by induction.</p> <ul> <li>For $k=1$ it is true as $e &gt;1$</li> <li>Assume it is true for $k$. Then $(k+1)! = (k+1) k! &gt; (k+1) \frac{k^k}{e^k} = \frac{(k+1)^{k+1}}{e^{k+1}}\cdot\frac{e}{\left(1+\frac{1}{k}\right)^{k}} &gt; \frac{(k+1)^{k+1}}{e^{k+1}}$, because $\left(1+\frac{1}{k}\right)^{k} &lt; e$.</li> </ul>
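A numerical spot-check of the inequality (a supplement to the induction, not a replacement), comparing logarithms to avoid overflow:

```python
import math

# ln(k!) = lgamma(k + 1) and ln((k/e)^k) = k * (ln k - 1).
for k in range(1, 500):
    assert math.lgamma(k + 1) > k * (math.log(k) - 1)
```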
2,577,634
<p>My goal is to compute $$I=\int_{0}^{+∞}\frac{\cos{ax}}{1+x^2}dx$$ where $a&gt;0$. </p> <p>$$I=\frac{1}{2}\int_{-∞}^{+∞}\frac{\cos{ax}}{1+x^2}dx=\frac{1}{2}Re\bigg(\int_{-∞}^{+∞}\frac{e^{iax}}{1+x^2}dx\bigg)$$</p> <p>Let $f(z)=\frac{e^{iaz}}{1+z^2}$.</p> <p>By the Residue Theorem, $\int_{-R}^{R}\frac{e^{iax}}{1+x^2}dx+\int_{\gamma_R}\frac{e^{iaz}}{1+z^2}dz=2\pi i\operatorname{Res}(f,i)=\pi e^{-a}$, since $\operatorname{Res}(f,i)=\frac{e^{-a}}{2i}$, where $\gamma_R$ denotes the upper semi-circle centered at $O$ with radius $R$.</p> <p>As $R\to+∞$,</p> <p>$\int_{-R}^{R}\frac{e^{iax}}{1+x^2}dx \to \int_{-∞}^{+∞}\frac{e^{iax}}{1+x^2}dx$</p> <p>Now, I am stuck on how to prove $\int_{\gamma_R}\frac{e^{iaz}}{1+z^2}dz$ goes to $0$ as $R$ goes to infinity.</p> <p>Anyone know how to do it? Many thanks.</p>
Robert Z
299,698
<p>Note that for $z=R(\cos(t)+i\sin(t))$ with $R&gt;1$ and $t\in [0,\pi]$, $$\left|\frac{e^{iaz}}{1+z^2}\right|=\frac{e^{-aR\sin(t)}}{|1+z^2|}\leq\frac{e^{-aR\sin(t)}}{R^2-1} \leq \frac{1}{R^2-1},$$ since $|1+z^2|\geq |z|^2-1=R^2-1$ and $e^{-aR\sin(t)}\leq 1$. Hence, as $R\to +\infty$, $$\left|\int_{\gamma_R}\frac{e^{iaz}}{1+z^2}dz\right|\leq \frac{|\gamma_R|}{R^2-1}=\frac{\pi R}{R^2-1}\to 0.$$ P.S. This is a particular case of the <a href="https://en.wikipedia.org/wiki/Jordan%27s_lemma" rel="nofollow noreferrer">Jordan Lemma</a>.</p>
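With the arc contribution vanishing, the residue computation gives $\int_0^{+\infty}\frac{\cos(ax)}{1+x^2}\,dx=\frac{\pi}{2}e^{-a}$, which can also be confirmed numerically. A rough Python check with a composite Simpson rule (the truncation point and step size are ad hoc choices):

```python
import math

def simpson(f, lo, hi, steps):
    """Composite Simpson rule; steps must be even."""
    h = (hi - lo) / steps
    total = f(lo) + f(hi)
    for i in range(1, steps):
        total += (4 if i % 2 else 2) * f(lo + i * h)
    return total * h / 3

a = 1.0
approx = simpson(lambda x: math.cos(a * x) / (1 + x * x), 0.0, 200.0, 200_000)
exact = (math.pi / 2) * math.exp(-a)
assert abs(approx - exact) < 1e-3   # tail beyond 200 is tiny due to oscillation
```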
3,779,736
<p>Let <span class="math-container">$(X_1, \ldots, X_n) \sim \operatorname{Unif}(0,b), b&gt;0$</span>. Find <span class="math-container">$E\left[\sum \frac{X_i }{X_{(n)}}\right]$</span> where <span class="math-container">$X_{(n)} = \max_i X_i$</span>.</p> <p>It was suggested to use Basu's Theorem which I am unfamiliar with.</p> <p>There are finitely many terms so we can rearrange using order statistics and write it as:</p> <p><span class="math-container">\begin{align} E\left[\sum_{i = 1}^n \frac{X_i }{X_{(n)}}\right] &amp; = E\left[\frac{X_{(1)}}{X_{(n)}}\right] + E\left[\frac{X_{(2)}}{X_{(n)}}\right]+ \cdots +E[1] \\[8pt] &amp; = (n-1) E\left[\frac{X_i}{X_{(n)}}\right] + 1 \end{align}</span></p> <p>If this is correct then I will need to calculate a conditional expectation to calculate this so I wanted to see if this is even correct before moving forward. Or if someone familiar with Basu's theorem can explain how I apply that here.</p>
StubbornAtom
321,264
<p>Assuming <span class="math-container">$X_1,\ldots,X_n$</span> are independent and identically distributed,</p> <p><span class="math-container">$$E\left[\sum_{i=1}^n \frac{X_i}{X_{(n)}}\right]=\sum_{i=1}^n E\left[\frac{X_{i}}{X_{(n)}}\right]=nE\left[\frac{X_1}{X_{(n)}}\right]$$</span></p> <p>This can be evaluated by showing that <span class="math-container">$X_1/X_{(n)}$</span> is independent of <span class="math-container">$X_{(n)}$</span>, so that</p> <p><span class="math-container">$$E\left[X_1\right]=E\left[\frac{X_1}{X_{(n)}}\cdot X_{(n)}\right]=E\left[\frac{X_1}{X_{(n)}}\right]\cdot E\left[X_{(n)}\right]$$</span></p> <p>Thus giving <span class="math-container">$$E\left[\frac{X_1}{X_{(n)}}\right]=\frac{E\left[X_1\right]}{E\left[X_{(n)}\right]}$$</span></p> <p>The independence can be argued using Basu's theorem since the distribution of <span class="math-container">$\frac{X_1}{X_{(n)}}=\frac{X_1/b}{X_{(n)}/b}$</span> is free of <span class="math-container">$b$</span> (making <span class="math-container">$\frac{X_1}{X_{(n)}}$</span> an ancillary statistic) and <span class="math-container">$X_{(n)}$</span> is a complete sufficient statistic for <span class="math-container">$b$</span>.</p>
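A Monte Carlo spot-check of the conclusion: for $n$ i.i.d. $\mathrm{Unif}(0,b)$ variables, $E[X_1]=b/2$ and $E[X_{(n)}]=nb/(n+1)$, so the expectation in question is $n\cdot\frac{b/2}{nb/(n+1)}=\frac{n+1}{2}$. The sample sizes below are arbitrary:

```python
import random

random.seed(0)
n, b, trials = 5, 2.0, 200_000
acc = 0.0
for _ in range(trials):
    xs = [random.uniform(0, b) for _ in range(n)]
    acc += sum(xs) / max(xs)
mc = acc / trials
exact = (n + 1) / 2          # = 3 for n = 5
assert abs(mc - exact) < 0.05
```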
1,550,736
<p>Consider Z a Normal (Gaussian) random variable with mean 0 and variance 1.<br> It has density $$f_Z(z)=\frac{1}{\sqrt{2\pi}}e^{\frac{-z^2}{2}} \text{for all x real numbers}$$ We consider $X=2Z+1$. Write the CDF of X in terms of the one of Z and take the derivative to get that the density of X is $$f_X(x)=\frac{1}{2\sqrt{2\pi}}e^{\frac{-(x-1)^2}{8}} \text{for all x real numbers}$$<br> I know that I have to take the integral of $f_Z(z)$ in terms of X to get the cdf, I just do not know how to get it in terms of X. The bounds of the integral should be $-\infty$ to $\infty$ right?</p>
Milen Ivanov
294,209
<p>Let $$A = U \Sigma V^*$$ be the SVD of $A$. Then, because $U$ is orthogonal, $$ ||Ax||= ||\Sigma V^* x|| $$</p> <p>If $V^*x = y$, then $||y||=||x||=1$ and we are left with $$||Ax||^2=||\Sigma y||^2 = \sum _i |\sigma_iy_i|^2 \geq \sum _i |\sigma_qy_i|^2 = \sigma_q^2, $$ so the inequality holds.</p> <p>If $x$ is a right singular vector of $A$ then it is one of the columns of $V$. But $V$ is orthogonal, so there is an $i$ such that $V^*x = e_i$, hence $||Ax|| = ||\Sigma V^* x|| = ||\Sigma e_i|| = \sigma_i$. So, any singular vector of $\sigma_q$ minimises $||Ax||$ and the result follows. Note that it is inaccurate to talk of <em>the</em> minimiser of $||Ax||$, as the singular vectors are not unique (we can multiply $x$ by $-1$).</p> <p>Finally, note that every minimiser of $||Ax||$ must be a singular vector. To see this, adapt the proof of the inequality!</p>
159,495
<p>I have a text file where each line is a data point in the form:</p> <pre><code>[ -495.01172, -158.35966, 2705.0 ] [ -489.15576, -127.229675, 2673.0 ] [ -487.6918, -97.679855, 2665.0 ] [ -487.32578, -68.4594, 2663.0 ] [ -485.86182, -39.19415, 2655.0 ] [ -485.3128, -10.12311, 2652.0 ] [ -484.03183, 18.853745, 2645.0 ] [ -482.75082, 47.677364, 2638.0 ] [ -481.6528, 76.37677, 2632.0 ] [ -481.6528, 105.184616, 2632.0 ] ... </code></pre> <p>Each line represents <code>[x,y,z]</code>. I need to 3D plot these, but I am getting errors. Below is what I've tried along with the resulting error.</p> <pre><code>data = Import[ "C:\\Users\\user\\Desktop\\out.txt", "text"] data2 = List[ StringReplace[ data, {"[" -&gt; "{", "]" -&gt; "}", "\n" -&gt; ",", " " -&gt; ""}]] ListPointPlot3D[data2] </code></pre> <p>The first two lines run successfully. The last line returns:</p> <pre><code>...{1161.5232,-887.44867,1677.0}} must be a valid array or a list of valid arrays &gt;&gt; </code></pre>
Nasser
70
<p>This works, but maybe there is an easier way.</p> <pre><code>SetDirectory[NotebookDirectory[]] data = Flatten@Import["input.txt","CSV"]; data2 = If[StringQ[#],ToExpression[StringDelete[#,{"[","]"}]],#]&amp;/@data; data2 = ArrayReshape[data2,{Length@data2/3,3}] </code></pre> <p>now</p> <pre><code>MatrixForm[data2] </code></pre> <p><img src="https://i.stack.imgur.com/hebEg.png" alt="Mathematica graphics"></p> <p>Plot it</p> <pre><code>ListPointPlot3D[data2] </code></pre> <p><img src="https://i.stack.imgur.com/oTTx5.png" alt="Mathematica graphics"></p> <p>The input file is what was shown in the OP.</p> <p><img src="https://i.stack.imgur.com/SyHPP.png" alt="Mathematica graphics"></p>
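For readers outside Mathematica: each line of the file happens to be a valid Python list literal, so the same cleanup can be done with `ast.literal_eval`. A hypothetical sketch (the plotting step is omitted):

```python
import ast

def parse_points(text):
    """Turn lines like '[ -495.01172, -158.35966, 2705.0 ]' into (x, y, z) tuples."""
    points = []
    for line in text.splitlines():
        line = line.strip()
        if line:
            x, y, z = ast.literal_eval(line)   # each line is a Python list literal
            points.append((x, y, z))
    return points

sample = "[ -495.01172, -158.35966, 2705.0 ]\n[ -489.15576, -127.229675, 2673.0 ]\n"
pts = parse_points(sample)
assert pts[0] == (-495.01172, -158.35966, 2705.0)
```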
572,429
<p>If $A$ is invertible and $A\vec{v} = \lambda\vec{v}$ with $\vec{v}\neq\vec{0}$, prove that $\lambda \neq 0$ and that $\vec{v}$ is also an eigenvector for $A^{-1}$. What is the corresponding eigenvalue?</p> <p>I don't really know where to start with this one. I know that $p(0)=\det(0\cdot I_{n}-A)=\det(-A)=(-1)^{n}\det(A)$, so if $p(0) = 0$ (equivalently $\det(A)=0$) then $0$ is an eigenvalue of $A$ and $A$ is not invertible, while if $p(0)\neq 0$ then $0$ is not an eigenvalue of $A$ and thus $A$ is invertible. I'm unsure of how to use this information to prove $\vec{v}$ is also an eigenvector for $A^{-1}$ and how to find a corresponding eigenvalue.</p>
copper.hat
27,978
<p>If you have $v\neq 0$ such that $A v = \lambda v$, and $A$ is invertible, then $v = \lambda A^{-1} v$.</p> <p>What does that tell you about $\lambda$?</p> <blockquote class="spoiler"> <p> $\lambda$ must be non-zero, otherwise this would mean $v=0$, a contradiction.</p> </blockquote> <p>How do you use the above to find an eigenvalue corresponding to $v$?</p> <blockquote class="spoiler"> <p> Since $\lambda \neq 0$, we have $A^{-1} v = \frac{1}{\lambda} v$, hence the eigenvalue is $\frac{1}{\lambda}$.</p> </blockquote>
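A concrete $2\times 2$ illustration in Python (matrices as nested lists; the example matrix is arbitrary):

```python
def mat_vec(A, v):
    return [A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1]]

def inv2(A):
    (a, b), (c, d) = A
    det = a * d - b * c            # nonzero iff A is invertible
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[4.0, 1.0], [0.0, 2.0]]       # triangular, so eigenvalues are 4 and 2
v = [1.0, 0.0]                     # eigenvector for the eigenvalue 4
assert mat_vec(A, v) == [4.0, 0.0]
assert mat_vec(inv2(A), v) == [0.25, 0.0]   # same v, eigenvalue 1/4
```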
4,218,944
<p>A very simple question. Saw this formula many places earlier, but how do we prove it? <span class="math-container">$$ax^2+bx+c=a(x-r_1)(x-r_2)$$</span> Where <span class="math-container">$r_1$</span> and <span class="math-container">$r_2$</span> are the roots of the quadratic.</p> <p>P.S.- I have seen a &quot;proof&quot; using Vieta's formulas, but Vieta's formula itself requires this fact in its proof.</p>
Bernard Pan
800,888
<p>For the quadratic equation <span class="math-container">$$0=ax^2+bx+c=a\left(x+\frac{b}{2a}\right)^2-\frac{b^2-4ac}{4a},$$</span> its roots are <span class="math-container">$$r_{1,2}=\frac{-b\pm\sqrt{b^2-4ac}}{2a}.$$</span> Depending on the sign of <span class="math-container">$b^2-4ac$</span>, the roots could be distinct reals, a multiple real, or distinct complexes. We actually do not need the fundamental theorem of algebra here, which is usually used for equations of higher orders (e.g., order <span class="math-container">$5$</span> or more).</p> <p>Then we can evaluate it directly <span class="math-container">\begin{align*} a(x-r_1)(x-r_2)&amp;=a[x^2-(r_1+r_2)x+r_1r_2] \\ &amp;=a\left[x^2-\left(-\frac{b}{a}\right)x+\frac{b^2-(b^2-4ac)}{4a^2}\right] \\ &amp;=a\left(x^2+\frac{b}{a}x+\frac{c}{a}\right)=ax^2+bx+c. \end{align*}</span> I guess people are overthinking this question.</p>
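A quick numeric confirmation of the identity, using `cmath` so that complex roots are handled too (the sample coefficients are arbitrary):

```python
import cmath

def factorisation_matches(a, b, c, x):
    """Check a*x^2 + b*x + c == a*(x - r1)*(x - r2) at the point x."""
    d = cmath.sqrt(b * b - 4 * a * c)
    r1, r2 = (-b + d) / (2 * a), (-b - d) / (2 * a)
    return abs((a * x * x + b * x + c) - a * (x - r1) * (x - r2)) < 1e-9

assert factorisation_matches(1, -5, 6, 0.5)   # real roots 2 and 3
assert factorisation_matches(2, 3, 5, -1.3)   # complex-conjugate roots
```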
85,651
<p>Let $\Gamma$ be one of the classical congruence subgroups $\Gamma_0(N)$, $\Gamma_1(N)$ and $\Gamma(N)$ of $SL(2, \mathbb{Z})$.</p> <p>How does the lower bound for the length of primitive geodesics on $\Gamma \backslash \mathbb{H}$ depend on $N$ as $N \rightarrow \infty$?</p> <p>Any suggestions?</p>
kassabov
13,992
<p>Let $\Gamma$ be the congruence group $\ker\big(SL_2(\mathbb{Z}) \to SL_2(\mathbb{Z}/N\mathbb{Z})\big)$.<br> The length of the shortest geodesic is of the order of $\log N$. The upper bound follows from a counting argument, and the lower bound comes from the observation that a product of $c\log N$ generators of $SL_2(\mathbb{Z})$ can not be a nontrivial element of $\Gamma$ because all entries are "small".</p> <p>I think that the same argument also works for $\Gamma_0$ and $\Gamma_1$</p>
4,172,955
<p>Find all <span class="math-container">$z$</span> with <span class="math-container">$|z|=1$</span> such that <span class="math-container">$|z^4+4| = \sqrt{5}.$</span></p> <hr /> <p>I've tried doing <span class="math-container">$$|z^4+4|^2 = 5 \implies (z^4+4)(\overline{z^4}+4) = 5 \implies |z|^8 + 4(z^4+\overline{z^4}) +11=0,$$</span> but I'm not sure how to solve that.</p>
Harsh joshi
940,974
<p>This problem can be solved by simple geometry.</p> <p>Let <span class="math-container">$z^4=w$</span>.</p> <p>Using <span class="math-container">$|z^4|=|z|^4$</span> and <span class="math-container">$|z|=1$</span>, we get <span class="math-container">$|w|=1$</span>.</p> <p>Now <span class="math-container">$w$</span> satisfies two properties:</p> <ul> <li><p><span class="math-container">$|w+4|=\sqrt{5}$</span>: plotted in the Argand plane, this is a circle with centre <span class="math-container">$(-4,0)$</span> and radius <span class="math-container">$\sqrt{5}$</span>;</p> </li> <li><p><span class="math-container">$|w|=1$</span>: a circle with centre <span class="math-container">$(0,0)$</span> and radius <span class="math-container">$1$</span>.</p> </li> </ul> <p>Solutions correspond to points of the Argand plane where these two circles intersect, and they do not intersect at all: the distance between the centres is <span class="math-container">$4$</span>, while the sum of the radii is <span class="math-container">$1+\sqrt{5}\approx 3.24&lt;4$</span>. Hence no solution exists for the equation.</p>
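The geometric conclusion can be double-checked numerically: on $|z|=1$, the quantity $|z^4+4|$ only takes values in $[3,5]$, which excludes $\sqrt5\approx2.236$. A small Python scan:

```python
import cmath, math

# Sample z = e^{it} on the unit circle and record |z^4 + 4|.
vals = [abs(cmath.exp(1j * 2 * math.pi * k / 1000) ** 4 + 4) for k in range(1000)]

# By the triangle inequality, 3 = 4 - 1 <= |w + 4| <= 4 + 1 = 5 for |w| = 1.
assert min(vals) > 3 - 1e-9
assert max(vals) < 5 + 1e-9
assert all(v > math.sqrt(5) for v in vals)   # sqrt(5) is never attained
```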
2,531,523
<p>Knowing that $S_3=\{\text{id},\sigma,\sigma^2,\tau,\tau\circ\sigma,\tau\circ\sigma^2\}$ ($\tau=(1 2), \sigma=(123)$), why does a group homomorphism $f:S_3\to\mathbb{Z}/2\mathbb{Z}$ satisfy $f(a)=0$ for all $a\in S_3$ such that $a^2\neq e$?</p>
Barry Cipra
86,747
<p>For $a\gt0$, we have $u=x^a\to0^+$ as $x\to0^+$, and therefore</p> <p>$$\lim_{x\to0^+}x^a\ln x={1\over a}\lim_{x\to0^+}x^a\ln x^a={1\over a}\lim_{u\to0^+}u\ln u$$</p> <p>so it remains to show that $\lim_{x\to0^+}x\ln x=0$ without resorting to L'Hopital. My favorite way of doing this is to use the integral definition of the natural logarithm, $\ln x=\int_1^x{dt\over t}$, and some crude inequalities based on the decreasing nature of $1/t$: </p> <p>For $0\lt x\lt1$, we have</p> <p>$$0\le x|\ln x|=x\int_x^1{dt\over t}=x\int_x^\sqrt x{dt\over t}+x\int_\sqrt x^1{dt\over t}\le x\left(\sqrt x-x\over x\right)+x\left(1-\sqrt x\over \sqrt x \right)\\=2(\sqrt x-x)\to0$$</p>
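The limit is easy to see numerically as well; a quick Python check (the exponent $a=1/2$ is an arbitrary choice):

```python
import math

a = 0.5
vals = [(10.0 ** -k) ** a * math.log(10.0 ** -k) for k in range(1, 15)]

# |x^a ln x| shrinks monotonically along x = 10^{-k} and tends to 0.
assert all(abs(u) > abs(w) for u, w in zip(vals, vals[1:]))
assert abs(vals[-1]) < 1e-5
```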
2,075
<p>I've come across so many posts here and on the "main" math-SE site that voice complaints, frustrations, pet-peeves, grievances, or else are critical of another post/question, user, OP, etc. It is really an energy sapper! Certainly not a boost for morale.</p> <p>Since I'm pretty new here, and feeling a bit ambivalent about the community here, or lack thereof, I'd really like to know what keeps others here? Given all the frustrations and pet peeves, what keeps you coming back, logging in, participating, contributing?</p> <p>I really am serious: I'd really like to know, plus I think shifting gears for a moment might help balance the (recent?) discord/tension. I'm not in a position to know whether what I perceive to be as tension and impatience, bordering on intolerance, is a "fact of life" here/ "the norm"...or if it cycles, like all growing communities do, between "better times" and "worse times"...slanting toward unity, then tilting towards discord... and individually, between feeling exhilarated and feeling near-burn-out.</p> <p>Just thought I'd ask. It is very likely that people here are happier than they may appear. After all, I think humans are wired to notice what's amiss and what's gone wrong than we are to noting what's going well!</p> <p>Edit: (Addendum) I am reluctant to accept a single answer; the answers and comments have been overwhelmingly supportive and informative. With respect to the "post a question"/"accept an answer" norm for math.SE, is that also the norm here on meta.SE? I sought out input from all interested users regarding the subject line of this thread; everyone is unique, and so I wouldn't even think of establishing criteria with which to evaluate one user's input/answer/comment against another. I did make a point of "upvoting" a good number of contributions, however. Thanks to all who have "chimed in," and any additional answers and comments are most certainly welcome.</p>
amWhy
9,003
<p>I wanted to revisit a very old question of mine, asked when I was relatively new to math.se., if nothing else, to remind myself why I participate here (and why I returned after a very long sabbatical from math.se). Two points:</p> <p>$(1)$ I participate and returned because I learn a lot here (or rather, math.se): about math from other users, about common "stumbling blocks" encountered by students, and about what helps people learn and come to understand concepts with which they are struggling.</p> <p>$(2)$ I participate and returned because I savor those precious, albeit sometimes rare, moments when those posting questions experience an "Aha!!!" moment.</p> <p>Just thought I'd share.</p> <p>Cheers!</p>
2,075
<p>I've come across so many posts here and on the "main" math-SE site that voice complaints, frustrations, pet-peeves, grievances, or else are critical of another post/question, user, OP, etc. It is really an energy sapper! Certainly not a boost for morale.</p> <p>Since I'm pretty new here, and feeling a bit ambivalent about the community here, or lack thereof, I'd really like to know what keeps others here? Given all the frustrations and pet peeves, what keeps you coming back, logging in, participating, contributing?</p> <p>I really am serious: I'd really like to know, plus I think shifting gears for a moment might help balance the (recent?) discord/tension. I'm not in a position to know whether what I perceive to be as tension and impatience, bordering on intolerance, is a "fact of life" here/ "the norm"...or if it cycles, like all growing communities do, between "better times" and "worse times"...slanting toward unity, then tilting towards discord... and individually, between feeling exhilarated and feeling near-burn-out.</p> <p>Just thought I'd ask. It is very likely that people here are happier than they may appear. After all, I think humans are wired to notice what's amiss and what's gone wrong than we are to noting what's going well!</p> <p>Edit: (Addendum) I am reluctant to accept a single answer; the answers and comments have been overwhelmingly supportive and informative. With respect to the "post a question"/"accept an answer" norm for math.SE, is that also the norm here on meta.SE? I sought out input from all interested users regarding the subject line of this thread; everyone is unique, and so I wouldn't even think of establishing criteria with which to evaluate one user's input/answer/comment against another. I did make a point of "upvoting" a good number of contributions, however. Thanks to all who have "chimed in," and any additional answers and comments are most certainly welcome.</p>
Ben Blum-Smith
13,120
<p>I would like to add:</p> <p>I participate in math.SE because</p> <p>(a) I have learned an incredible amount here by reading answers to others' questions and especially by asking questions. From <a href="https://math.stackexchange.com/questions/50362/homology-why-is-a-cycle-a-boundary">my very first question</a>, it's amazed me that people are willing to take the time to engage thoughtfully with technical stuff.</p> <p>(b) Because of (a), I have the feeling I should "pay it forward." Sometimes I answer questions just for fun, or to distract myself from something else, but the main reason I answer questions, which is also the principled reason, is to "earn my keep" for the questions I ask. I don't know if any other user thinks this way but I try to keep my question-to-answer ratio approximately 1:1.</p> <p>I joined math.SE shortly before entering a math PhD program that I have now completed. Over time, more of my questions have become appropriate for MO, and my questions on math.SE have tended to get more technical. Nonetheless, it still often happens that I have a question that I expect will be regarded as basic by experts: either it is not in my field, or else it is a fine point that just seems textbook-level not research-level. It has now happened at least a couple times that I ask such a question here and it doesn't get any traction, and then I cross-post to MO and it gets a great answer. But it still seems respectful to me to ask it here first. So I envision continuing to use math.SE as a question-asking place even as a working mathematician.</p> <p>As an aside, you wrote this question a few months before I joined math.SE, and it was over six years ago, so maybe I'm speaking to something that's personally ancient history, but, I share your feelings about the way that the site's culture treats new users. 
Obviously different users have different priorities, but I myself am much more distressed by snark toward people who are not "mathematically enculturated" than I am by poorly-formed questions. I don't find the culture of this site uniformly pleasant. (To me there is a striking difference with MO: over there, the atmosphere is more collegial and less policed. Reading in, I experience this as a consequence of MO operating from the presumption that we are all already "in the club.") Nonetheless, the cost-benefit analysis for my use of the site is clear: what I can learn here makes it worth it.</p> <p>(As a last aside, I should say that although I don't find the culture of the site uniformly pleasant, there are many many individuals whom I really appreciate everything they do here and I like being around the way they treat others.)</p>
18,421
<p>I am teaching 4th-grade kids. The topic is Fraction. Basic understanding of a fraction as a part of the whole and as part of the collection is clear to the kids. Several concrete ways exist to teach this basic concept. But when it comes to fraction addition/subtraction I could not find a way that teaches it concretely.<br> Of course, teaching fraction addition &amp; subtraction of the form 3/2 + 1/2 is easy. But what about 3/2+ 4/3?<br> It is where we start talking about the algorithm (using LCM), which makes the matter less intuitive and more abstract which I am trying to avoid in the beginning. I believe all abstract concepts should come after the concrete experience. </p> <p>So teachers do you have any suggestions? </p>
WeCanLearnAnything
7,151
<p>Suggestion: Do not bother having students do calculations with <em>two</em> fractions until they can do a comparable operation with a fraction and a whole number.</p> <p>Example: Before teaching, say, <span class="math-container">$\frac{3}{2}-\frac{4}{3}=\_\_$</span>, ensure they can answer something like <span class="math-container">$4 - \frac{1}{5}=\_\_$</span>. Make sure all students can draw that on a number line and with area diagrams. This is how you ensure that their knowledge of fractions will integrate with their knowledge of whole numbers.</p> <p>Once students are ready to add and subtract fractions that already have common denominators, ensure they can do so on a number line and with area diagrams and with <a href="https://duckduckgo.com/?t=ffab&amp;q=fraction%20tiles&amp;iax=images&amp;ia=images" rel="nofollow noreferrer">fraction tiles</a>.</p> <p>Then, when it comes time to do <span class="math-container">$\frac{3}{2}+\frac{4}{3}=\_\_$</span>, you can:</p> <ol> <li>Put the halves tiles and the thirds tiles together</li> <li>Pretend to be horribly confused by how much that is</li> <li>Estimate it to be a little under 3 wholes</li> <li>Demonstrate how sixths tiles come to the rescue. :)</li> </ol>
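Once students see the sixths appear, the arithmetic itself can be checked with exact rational numbers, e.g. in Python:

```python
from fractions import Fraction

# 3/2 + 4/3: halves and thirds both convert to sixths (9/6 and 8/6).
total = Fraction(3, 2) + Fraction(4, 3)
assert total == Fraction(17, 6)
assert 2 < total < 3          # "a little under 3 wholes", matching the estimate
```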
1,703,140
<p>Let $n$ be composite. If $a$ is coprime to $n$ such that $a^{n-1} \equiv 1 \pmod n$ then $a$ is called a Fermat liar. If $a^{n-1} \not\equiv 1 \pmod n$, then $a$ is a Fermat witness to the compositeness of $n$.</p> <p>Assume the witnesses or liars are themselves prime. If two Fermat liars are multiplied together their product will also be a Fermat liar. If a Fermat witness and a Fermat liar are multiplied together, the product is a Fermat witness. </p> <p>Is it possible for the product of two Fermat witnesses to be a Fermat liar if the witnesses are themselves prime?</p> <p>My motivation for the question is to consider what might happen if I use a product of primes as the base in a Fermat primality test rather than a single prime. My hope is that a product of primes would improve the odds of finding the correct answer with one test.</p>
mvw
86,776
<p>I assume you mean a number with $N$ digits in decimal base and having the digit sum $S$.</p> <p>It comes down to partitioning the number $S$ into $N$ parts (if possible at all!), where the parts are in the range $\{ 0, \dotsc, 9 \}$, and sorting them so that either the largest parts come first for a maximal number, or the smallest parts come first for a minimal number. Of all feasible partitions one picks the one which comes first under the considered order.</p> <p>One could pose it as an <a href="https://en.wikipedia.org/wiki/Integer_programming" rel="nofollow">integer linear programming</a> (ILP) problem, e.g. \begin{matrix} \max &amp; c^\top x \\ \text{w.r.t.} &amp; Ax = b \\ &amp; 0 \le x \\ &amp; x \le 9 \end{matrix} with \begin{align} c^\top &amp;= (10^{N-1}, 10^{N-2}, \dotsc, 1) \in \mathbb{R}^{1 \times N} \\ A &amp;= (1, \dotsc, 1) \in \mathbb{R}^{1 \times N} \\ x &amp;= (x_1, \dotsc, x_N)^\top \in \mathbb{Z}^N \\ b &amp;= (S) \in \mathbb{N} \end{align} and use a solver.</p> <p>The minimal number is a bit trickier, because of the convention that leading $0$ digits are dropped (except for single digit numbers).</p>
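For this particular problem, the greedy sorting described above (largest parts first for the maximum) is easy to code directly, without an ILP solver. A hypothetical sketch that ignores the leading-zero subtlety for the minimal number:

```python
def extreme_digits(N, S, largest=True):
    """N digits in {0,...,9} with digit sum S, greedily front-loaded.
    Reverse the list for the minimal arrangement (leading zeros not handled).
    Returns None when no feasible partition exists."""
    if not 0 <= S <= 9 * N:
        return None
    digits, remaining = [], S
    for _ in range(N):
        d = min(9, remaining)
        digits.append(d)
        remaining -= d
    return digits if largest else digits[::-1]

assert extreme_digits(4, 20) == [9, 9, 2, 0]          # 9920 is maximal
assert extreme_digits(2, 19) is None                  # digit sum too large
```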