qid: int64 (1 to 4.65M)
question: large_string (lengths 27 to 36.3k)
author: large_string (lengths 3 to 36)
author_id: int64 (-1 to 1.16M)
answer: large_string (lengths 18 to 63k)
359,212
<p>I mean, $\Bbb Z_p$ is an instance of $\Bbb F_p$, I wonder if there are other ways to construct a field with characteristic $p$? Thanks a lot!</p>
Vicfred
85,162
<p>There are two nice constructions I know; the first one is already mentioned. For every prime $p$ you can 'extend' it and construct fields with $p^n$ elements. (This depends on the fact that there are irreducible polynomials of every degree in $\mathbb{F}_p$; exercise: give some examples to show why this might be true.) Then you can also take the algebraic closure of these fields and get a genuinely new field, since an algebraically closed field can't be finite (you can prove this in the exact same way Euclid proved there are an infinite number of prime numbers); let's call this field $\overline{\mathbb{F}_p}$.</p> <p>The second construction I know is to first take polynomials with coefficients in $\mathbb{F}_p$, i.e. $\mathbb{F}_p[x]$, and take its quotient field (as you do to construct $\mathbb{Q}$ from $\mathbb{Z}$) to get quotients of polynomials; this forms a new field which contains $\mathbb{F}_p$ as a subfield (identified with the 'constant polynomials'). Let's denote this new field $\mathbb{F}_p(x)$.</p> <p>One might think these two fields are isomorphic, but <a href="https://math.stackexchange.com/a/58425/1938">this</a> post argues that this isn't the case.</p>
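The first construction above can be sanity-checked computationally. Here is a minimal Python sketch (the representation and helper names are mine) that builds $\mathbb{F}_4 = \mathbb{F}_2[x]/(x^2+x+1)$ and verifies that every nonzero element is invertible, so the quotient really is a field with $p^n = 4$ elements:

```python
from itertools import product

# Sketch: build F_4 = F_2[x]/(x^2 + x + 1) and check every nonzero
# element has a multiplicative inverse. Elements are pairs (a, b)
# representing a + b*x with a, b in {0, 1}.

MOD = 2  # coefficients live in F_2

def mul(u, v):
    # (a + bx)(c + dx) = ac + (ad + bc)x + bd*x^2, with x^2 = x + 1
    a, b = u
    c, d = v
    const = a * c + b * d          # bd contributes 1 from x^2 -> x + 1
    lin = a * d + b * c + b * d    # bd contributes x from x^2 -> x + 1
    return (const % MOD, lin % MOD)

elements = list(product(range(MOD), repeat=2))  # the 4 elements of F_4
one = (1, 0)

def has_inverse(u):
    return any(mul(u, v) == one for v in elements)

assert len(elements) == 4
assert all(has_inverse(u) for u in elements if u != (0, 0))
print("F_4 is a field with", len(elements), "elements")
```

The same pattern works for any irreducible polynomial over $\mathbb{F}_p$, which is why irreducibility in every degree matters for the construction.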
2,663,537
<p>Suppose G is a group with x and y as elements. Show that $(xy)^2 = x^2 y^2$ if and only if x and y commute.</p> <p>My very basic thought is that we expand such that $xxyy = xxyy$, then multiply each side by $x^{-1}$ and $y^{-1}$, such that $x^{-1} y^{-1} xxyy = xxyy x^{-1}$ , and therefore $xy=xy$.</p> <p>I realize that this looks like a disproportionate amount of work for such a simple step, but that is what past instruction has looked like and that is perhaps why I am confused. Moreover, "if and only if" clauses have always been tricky for me since I took Foundations of Math years ago, but if I remember correctly, the goal here should be to basically do the proof from right to left and then left to right, so to speak. Anyhow, I think that I am overthinking this problem.</p>
Xander Henderson
468,350
<p>The proof can be written fairly concisely as follows:</p> <p>\begin{align} (xy)^2 = x^2 y^2 &amp;\iff (xy)(xy) = (xx)(yy) &amp;&amp; (\text{expand both sides}) \\ &amp;\iff xyxy = xxyy &amp;&amp; (\text{associativity}) \\ &amp;\iff x^{-1} (xyxy) y^{-1} = x^{-1} (xxyy) y^{-1} &amp;&amp;(\text{multiply by } x^{-1}, y^{-1}) \\ &amp;\iff (x^{-1} x) yx (y y^{-1}) = (x^{-1} x)xy(y y^{-1}) &amp;&amp;(\text{associativity}) \\ &amp;\iff yx = xy. &amp;&amp;(\text{def'n of inverses}) \end{align}</p>
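The chain of equivalences can also be checked by brute force in a concrete non-abelian group, which is a useful habit when an "if and only if" feels slippery. This sketch (my own, not part of the answer) runs over all pairs in $S_3$, with permutations composed as tuples:

```python
from itertools import permutations

# Check: (xy)^2 = x^2 y^2  <=>  xy = yx, for every pair in S_3.

def compose(p, q):
    # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

S3 = list(permutations(range(3)))

for x in S3:
    for y in S3:
        xy = compose(x, y)
        lhs = compose(xy, xy)                        # (xy)^2
        rhs = compose(compose(x, x), compose(y, y))  # x^2 y^2
        assert (lhs == rhs) == (xy == compose(y, x))
print("equivalence holds for all pairs in S_3")
```

Since $S_3$ is non-abelian, both sides of the equivalence genuinely fail for some pairs, so the check is not vacuous.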
610,672
<p>Could anyone help me with homework or give me a hint? Any help would be highly appreciated.</p> <p>Given a set of N distinct objects:</p> <p>How many ways are there to pick any number of them to be in a pile while the rest are in another pile? If your answer is written in terms of binomial coefficients, use the Binomial Theorem to write it as a single (N-dependent) number.</p> <p>Thanks, Daniel</p>
gt6989b
16,192
<p><strong>Hint</strong></p> <ol> <li>There are the following choices for the number of objects in the first pile: 0, 1, ..., N</li> <li>If the first pile has $k$, how many does the second pile have?</li> <li>To reach such an arrangement, pick $k$ elements for the first pile from the entire group. Is order important? How many ways are there to do that (this involves a binomial coefficient)?</li> <li>Add all the ways from step (3) -- for different values of $k$ -- and use the Binomial Theorem.</li> </ol>
610,672
<p>Could anyone help me with homework or give me a hint? Any help would be highly appreciated.</p> <p>Given a set of N distinct objects:</p> <p>How many ways are there to pick any number of them to be in a pile while the rest are in another pile? If your answer is written in terms of binomial coefficients, use the Binomial Theorem to write it as a single (N-dependent) number.</p> <p>Thanks, Daniel</p>
robjohn
13,854
<p><strong>Hint 1:</strong> There are $\binom{n}{k}$ ways to put $k$ items into the first pile and $n-k$ items into the second pile. Consider what this means for all $k$.</p> <p><strong>Hint 2:</strong> Each item can either be in the first pile or the second pile ($2$ options for each item).</p>
87,963
<p>Assume that $L/K$ is an extension of fields and $[L:K]=n$, with $n$ composite. Assume that $p\mid n$, can we always produce a subextension of degree $p$ and if not under what conditions can it be done? I would guess this is very false, but I couldn't come up with any trivial counterexamples.</p>
Soka
20,290
<p>Suppose $L/K$ is Galois with group $G$. Then subextensions of degree $m$ over $K$ correspond to normal subgroups of $G$ of index $m$. Given an abelian group you can always find subgroups of index $p$, where $p$ is a prime divisor of $\vert G \vert =n$ (and in an abelian group every subgroup is normal). So to find a counterexample, you should (with $L/K$ Galois) start with non-abelian Galois groups. I'm pretty sure there exist groups of order $n$ which don't have a normal subgroup of index $m$ for some $m\mid n$. (Note that if $m$ is prime, a (not necessarily normal) subgroup of order $m$ exists, by Cauchy's theorem.)</p>
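A standard candidate for the group hinted at above (not named in the answer) is $A_4$: it has order $12$ but no subgroup of order $6$ at all, hence none of index $2$. Any subgroup of order $6$ is $C_6$ or $S_3$, both generated by at most two elements, so it suffices to close every pair under composition. A brute-force Python sketch:

```python
from itertools import permutations, product

# Verify: A_4 (order 12) has no subgroup of order 6.

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

def sign(p):
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

A4 = [p for p in permutations(range(4)) if sign(p) == 1]

def generated(g, h):
    # closure of {g, h} under composition; in a finite group a nonempty
    # subset closed under multiplication is a subgroup
    sub = {g, h}
    while True:
        new = {compose(a, b) for a, b in product(sub, sub)} - sub
        if not new:
            return sub
        sub |= new

orders = {len(generated(g, h)) for g in A4 for h in A4}
assert 6 not in orders
print(sorted(orders))
```

So for $L/K$ Galois with group $A_4$ and $[L:K]=12$, there is no subextension of degree $2$ even though $2 \mid 12$.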
4,558,460
<p>According to the implicit function theorem (on <span class="math-container">$\mathbb R^2$</span> for simplicity), if <span class="math-container">$\displaystyle\frac{\partial f}{\partial y}\ne 0$</span> at <span class="math-container">$(x_0, y_0)$</span>, then on a neighborhood of <span class="math-container">$(x_0, y_0)$</span>, there is a function <span class="math-container">$g\in C^1$</span> such that <span class="math-container">$$y = g(x)$$</span></p> <p>There was no information about the converse in the textbook, but one day I wondered: is there a function <span class="math-container">$g\in C^1$</span> such that <span class="math-container">$f(x, g(x)) = 0$</span> even though <span class="math-container">$\displaystyle\frac{\partial f}{\partial y}= 0$</span>? Because <span class="math-container">$\displaystyle\frac{\partial f}{\partial y}$</span> appears in the denominator, <span class="math-container">$g$</span> won't have a derivative (or maybe another closed form), and I have found a <span class="math-container">$g$</span> for which only continuity works. Can we find an example where the implicit function <span class="math-container">$g$</span> exists even though the partial derivative at that point is zero? If not, how can I prove this, and which condition can be added to make the implicit function theorem an equivalence?</p> <p>I really appreciate your help.</p>
Hans Lundmark
1,242
<p>Take <span class="math-container">$f(x,y)=(y-x^2)^2$</span> and any point <span class="math-container">$(x_0,y_0)$</span> on the curve <span class="math-container">$y=x^2$</span>. Then <span class="math-container">$\nabla f(x_0,y_0) = (0,0)$</span>, so the implicit function theorem doesn't apply, but still the equation <span class="math-container">$f=0$</span> defines a nice function, namely <span class="math-container">$y=g(x)=x^2$</span>.</p> <p>But if <span class="math-container">$f \in C^1$</span> and <span class="math-container">$f_y(x_0,y_0)=0$</span> and <span class="math-container">$f_x(x_0,y_0)\neq 0$</span>, and the equation <span class="math-container">$f=0$</span> happens to define a function <span class="math-container">$y=g(x)$</span> implicitly near <span class="math-container">$(x_0,y_0)$</span>, then that function <span class="math-container">$g$</span> can't be differentiable at <span class="math-container">$x_0$</span>, for precisely the reason that you give; namely, in that case implicit differentiation would give <span class="math-container">$0=f'_x(x_0,y_0) + f'_y(x_0,y_0) g'(x_0)$</span>, which is incompatible with the assumptions.</p> <p>Another example which might interest you is <span class="math-container">$$ f(x,y) = \begin{cases} \dfrac{y}{\sqrt{x^2+y^2}}, &amp; (x,y) \neq (0,0), \\ 0, &amp; (x,y) = (0,0), \end{cases} $$</span> which isn't even continuous at the origin, and hence not <span class="math-container">$C^1$</span> either. But still the equation <span class="math-container">$f=0$</span> defines a <span class="math-container">$C^\infty$</span> function <span class="math-container">$y=g(x)=0$</span>.</p>
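The first example can be checked numerically. This Python sketch (helper names mine) confirms that at a point on $y=x^2$ both partials of $f(x,y)=(y-x^2)^2$ vanish, so the theorem's hypothesis fails, while $f(x, x^2)=0$ identically:

```python
# f vanishes along y = x^2 even though grad f = (0, 0) there.

def f(x, y):
    return (y - x * x) ** 2

def partial(fun, x, y, which, h=1e-6):
    # central finite difference in x or y
    if which == "x":
        return (fun(x + h, y) - fun(x - h, y)) / (2 * h)
    return (fun(x, y + h) - fun(x, y - h)) / (2 * h)

x0 = 1.5
y0 = x0 * x0  # a point on the curve y = x^2

assert f(x0, y0) == 0.0
assert abs(partial(f, x0, y0, "x")) < 1e-6
assert abs(partial(f, x0, y0, "y")) < 1e-6
# yet f(x, g(x)) = 0 for g(x) = x^2 at many sample points:
assert all(f(t / 10, (t / 10) * (t / 10)) == 0.0 for t in range(-20, 21))
print("gradient vanishes at (x0, y0) but y = x^2 still solves f = 0")
```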
839,124
<p>This is similar to an exercise I just posted. The necessary part is easy, but the sufficient condition I'm having trouble seeing.</p> <p>$\Rightarrow$. Since $(x,y)=g,$ there exist integers $x_1, y_1$ such that $x=gx_1, y=gy_1$. Since $[x,y]=l$, there exist integers $x_2, y_2$ such that $l=xx_2=yy_2$. Then $$l=gx_1x_2=gy_1y_2$$ In both cases, $g|l$</p> <p>$\Leftarrow$. Since $g|l$, there exists an integer $k$ such that $l=gk$. Since $k$ is an integer, there exist integers $k_1, k_2$ such that $k=k_1+k_2$. Then $$l=gk_1+gk_2 $$ $$gk=gk_1+gk_2$$ I feel like I'm not even close.... any hints?</p>
André Nicolas
6,312
<p>Let $x=g$ and let $y=l$. (We do need to assume that $l\gt 0$ and $g\ge 0$.) </p>
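The point of the answer is that given any $l\gt 0$ and $g\ge 0$ with $g\mid l$, the pair $x=g$, $y=l$ already realises $\gcd(x,y)=g$ and $\operatorname{lcm}(x,y)=l$, since $g\mid l$ forces $\gcd(g,l)=g$ and $\operatorname{lcm}(g,l)=l$. A quick exhaustive Python check (the `lcm` helper is mine, for older Pythons without `math.lcm`):

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

# For every g and every multiple l of g, the pair (g, l) works.
for g in range(1, 30):
    for mult in range(1, 30):
        l = g * mult
        assert gcd(g, l) == g
        assert lcm(g, l) == l
print("x = g, y = l works whenever g divides l")
```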
27,089
<p>Off hand, does anyone know of some useful conditions for checking if a ring (or more generally a semiring) has non-trivial derivations? (By non-trivial, I mean they do not squish everything down to the additive identity.) Part of the motivation for this is that I was thinking about it the other day, and had trouble finding any good example of a semiring with an interesting derivation.</p> <p>For example, the multiplicative Banach algebra of positive functions is an algebra over the semifield of nonnegative reals. However, the usual definition of the derivative breaks down due to the fact that you can have positive functions with negative slope. So, this leads me to wonder whether there are any semirings with derivations at all?</p> <p>As a related question, is there a known classification of all the derivations for an algebra? It feels like this should be a pretty standard thing, but I don't think I've ever encountered it in one of my courses and my initial googling around was not too successful at finding references.</p>
Robin Chapman
4,213
<p>There is a notion of a universal derivation for an algebra. I'll assume everything is commutative for simplicity. If $A$ is a $k$-algebra ($k$ a commutative ring) then there is an $A$-module $\Omega_{A/k}$, the module of <em>Kähler differentials</em> of $A$ over $k$, and a $k$-derivation $d:A\to\Omega_{A/k}$ which is universal for derivations of $A$. That is, if $\delta:A\to M$ is a $k$-derivation from $A$ to an $A$-module, then $\delta=f\circ d$ for a unique $A$-module homomorphism $f$.</p> <p>There is an explicit description of $\Omega_{A/k}$ as $I/I^2$ where $I$ is the ideal in the ring $B=A\otimes_k A$ which is the kernel of the map $\mu:B\to A$ with $\mu(x\otimes y)=xy$. Then $d$ maps $x\in A$ to $1\otimes x-x\otimes 1$. So to find a derivation from $A$ with values in your favourite $A$-module $M$, all one has to do is to find an $A$-homomorphism from $\Omega_{A/k}$ to $M$. Of course $\Omega_{A/k}$ may be hard to determine concretely, and even if that is possible, perhaps it may not be easy to find a homomorphism from it into $M$. Indeed, using this method may be no easier than finding a derivation directly :-)</p> <p>For details see the commutative algebra texts by Eisenbud or Matsumura.</p>
27,089
<p>Off hand, does anyone know of some useful conditions for checking if a ring (or more generally a semiring) has non-trivial derivations? (By non-trivial, I mean they do not squish everything down to the additive identity.) Part of the motivation for this is that I was thinking about it the other day, and had trouble finding any good example of a semiring with an interesting derivation.</p> <p>For example, the multiplicative Banach algebra of positive functions is an algebra over the semifield of nonnegative reals. However, the usual definition of the derivative breaks down due to the fact that you can have positive functions with negative slope. So, this leads me to wonder whether there are any semirings with derivations at all?</p> <p>As a related question, is there a known classification of all the derivations for an algebra? It feels like this should be a pretty standard thing, but I don't think I've ever encountered it in one of my courses and my initial googling around was not too successful at finding references.</p>
Steve Huntsman
1,847
<p>The Banach algebra of bounded functions on a finite set turns out to be semisimple, and therefore carries no nonzero derivations by the results of <a href="http://www.jstor.org/pss/2373262" rel="nofollow">Johnson, B. E. "Continuity of derivations on commutative algebras". <em>Amer. J. Math.</em> <strong>91</strong>, 1 (1969)</a>. </p>
44,868
<p><strong>Bug introduced in version 8 or earlier and fixed in 10.0</strong></p> <hr> <p>I have created a notebook with two cells. This is the content of the first:</p> <pre><code>g = Graph[{1 \[UndirectedEdge] 2, 2 \[UndirectedEdge] 3, 1 \[UndirectedEdge] 3, 1 \[UndirectedEdge] 4, 4 \[UndirectedEdge] 5, 4 \[UndirectedEdge] 6}] </code></pre> <p>And this is the content of the second:</p> <pre><code>g listDegree = VertexDegree[g] vl = VertexList[g] nodeMaxDegree = Pick[vl, listDegree, Max[VertexDegree[g]]][[1]] aM = AdjacencyMatrix[g]; vLM = aM[[VertexIndex[g, nodeMaxDegree]]]; nN = Pick[vl, vLM, 0] </code></pre> <p>If I evaluate the second cell (after processing the first) for a <strong>second</strong> time:</p> <ol> <li>the first time no problem, the results are correct; </li> <li><strong>the second time the vertex list of <code>g</code> is inexplicably wrong but the graph remain correct!!</strong></li> </ol> <p>I don't understand the cause because the graph <code>g</code> is never touched.</p> <p>Thanks in advance</p>
Jacob Akkerboom
4,330
<p>This looks like a bug to me. Here is a slightly more minimal example.</p> <pre><code>ue = UndirectedEdge; g = Graph[ue @@@ {{1, 2}, {2, 3}, {1, 3}, {1, 4}, {4, 5}, {4, 6}}]; vl = VertexList[g] aM = AdjacencyMatrix[g]; vLM = aM[[VertexIndex[g, 1]]]; Pick[vl, vLM, 0]; VertexList[g] </code></pre> <p>Output</p> <blockquote> <pre><code>{1, 2, 3, 4, 5, 6} {1, 5, 6} </code></pre> </blockquote> <p>You can solve the error by making a copy of the vertexlist yourself. This can be done by using <code>Append</code> and <code>Delete</code> as follows</p> <pre><code>ue = UndirectedEdge; g = Graph[ue @@@ {{1, 2}, {2, 3}, {1, 3}, {1, 4}, {4, 5}, {4, 6}}] vl = Delete[Append[VertexList[g], 0], -1] aM = AdjacencyMatrix[g]; vLM = aM[[VertexIndex[g, 1]]]; Pick[vl, vLM, 0]; v1 = VertexList[g] </code></pre> <p>Output</p> <blockquote> <pre><code>{1, 2, 3, 4, 5, 6} {1, 2, 3, 4, 5, 6} </code></pre> </blockquote> <p>So in your case you could do</p> <pre><code>copy[list_] := Delete[Append[list, 0], -1]; g listDegree = VertexDegree[g] Block[ {punchingBag}, punchingBag = copy[VertexList[g]]; nodeMaxDegree = Pick[punchingBag, listDegree, Max[VertexDegree[g]]][[1]]; aM = AdjacencyMatrix[g]; vLM = aM[[VertexIndex[g, nodeMaxDegree]]]; nN = Pick[punchingBag, vLM, 0]; ] vl = VertexList[g] </code></pre> <p>Another (probably better) solution would be to use <code>Developer`ToPackedArray</code> on <code>VertexList[g]</code>, this avoids the strange behaviour from occurring altogether.</p>
897,633
<p><strong>First question:</strong></p> <p>Let's say we have a hypothesis test:</p> <p>${ H }_{ 0 }:u=100$ and ${ H }_{ 1 }:u\neq 100$.</p> <p>The sample has a size of 10 and gives an average $\bar{x}=103$ and a p-value = 0.08. The level of significance is 0.05.</p> <p>I'm asked the following question (exam):</p> <p>A) We can conclude that $u=100$</p> <p>B) We cannot conclude that $u\neq100$</p> <p>The 2 answers are rather similar, but not the same. I would say B) but I'm not so sure given what I've read.</p> <p>The p-value here indicates that we cannot reject the null hypothesis, so we cannot accept $H_1$?</p> <p><strong>Second question:</strong></p> <p>What does it mean exactly for a test to be significant?</p> <p>Does it mean that we can reject the null hypothesis?</p> <p>Thanks in advance.</p> <p>Regards,</p>
Satish Ramanathan
99,745
<p>A P-value can be reported more formally in terms of a fixed-level α test. Here α is a number selected independently of the data, usually 0.05 or 0.01, more rarely 0.10. We reject the null hypothesis at level α if the P-value is smaller than α; otherwise we fail to reject the null hypothesis at level α.</p> <p>Now figure out what you have to do.</p>
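The decision rule just described can be written as a one-line function; applying it to the exam's numbers (p = 0.08, α = 0.05) gives answer B. A minimal sketch (the function name is mine):

```python
# Fixed-level test: reject H0 exactly when the p-value is below alpha.

def decision(p_value, alpha):
    return "reject H0" if p_value < alpha else "fail to reject H0"

# p = 0.08 is not smaller than alpha = 0.05, so we cannot reject H0,
# i.e. we cannot conclude mu != 100 (answer B) -- but we do not thereby
# "accept" H0 and conclude mu = 100.
assert decision(0.08, 0.05) == "fail to reject H0"
assert decision(0.01, 0.05) == "reject H0"
print(decision(0.08, 0.05))
```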
2,290,395
<p>What if in Graham’s Number every “3” was replaced by “TREE(3)” instead? How big is this number? Greater than Rayo’s number? Greater than every current named number?</p>
mgiroux
586,861
<p>The TREE function grows much, much faster than any construction with Knuth up-arrows. Because of this, inserting the TREE function into Graham's number would yield a number still very close to TREE(3). It would be like trying to create a number larger than a googolplex by adding a 1 on the end. You would be better off inserting Graham's number into TREE instead of the other way around, creating a "TREE's Graham" instead of a "Graham's TREE".</p>
3,073,361
<p>I think I understood 1-forms fairly well with the help of these two sources. They are dual to vectors, so they measure them, which can be visualized with planes the vectors pierce.</p> <ul> <li><a href="https://the-eye.eu/public/WorldTracker.org/Physics/Misner%20-%20Gravitation%20%28Freeman%2C%201973%29.pdf" rel="nofollow noreferrer">Gravitation 1973</a><a href="https://i.stack.imgur.com/oU5Cd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oU5Cd.png" alt="1-forms as dual vectorspace to vectors" /></a></li> <li><a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.126.1099&amp;rep=rep1&amp;type=pdf" rel="nofollow noreferrer">On the Visualisation of Differential Forms - Dan Piponi</a><a href="https://i.stack.imgur.com/zREF5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zREF5.png" alt="enter image description here" /></a></li> </ul> <p>But I struggle with the explanations for higher order forms.</p> <p>The goal is to answer and understand these questions with drawings:</p> <ol> <li>How can I visualize the wedge between two 1-forms <span class="math-container">$\alpha\wedge\beta$</span>? I think I understood the wedge between two vectors, as the parallelogram created by the two in an &quot;area sense&quot;. The determinant comes in to make it only about the area, which is why <span class="math-container">$v\wedge w = \frac{1}{2}v\wedge 2w$</span> since stretching the parallelogram by two in the w direction is compensated by squishing it in the v direction, so the area stays constant. So the wedge between two vectors is the area it spans with its vectors. But how does that translate to the dual space, where 1-forms measure the length of the component of their dual vector and can be visualized as planes the vectors pierce through? 
What is the visualization between two of these 1-forms as a wedge?</li> </ol> <p>Gravitation has this picture:</p> <p><a href="https://i.stack.imgur.com/62SfS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/62SfS.png" alt="2-forms" /></a> Dan-Piponi drew it like this:</p> <p><a href="https://i.stack.imgur.com/MsczU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MsczU.png" alt="2-forms dan" /></a></p> <p>Now these pictures make some sense as they are generated by intersecting the 1-forms. But I am not quite getting how the result is evaluated. The result (2-form) should map two vectors as input to a number. And I don't see how these intersections do that. While for 1-forms you count the numbers of planes a vector pierced.</p> <ol start="2"> <li>Why does it make sense that <span class="math-container">$d(d\alpha)=0$</span> for every differential form <span class="math-container">$\alpha$</span></li> <li>What does Dan Piponi mean by saying: &quot;exterior derivative is none other than finding the boundary of the picture&quot; (4 Exterior Derivatives)</li> <li>Understand part 5 about Stokes' theorem from Dan Piponi's paper</li> </ol> <p>Note: I should maybe add that I have no background in physics, so I didn't understand a lot of the stuff in Gravitation. I just tried to read it after I couldn't quite understand other source since it was cited there.</p> <p>Similar questions:</p> <ul> <li><p><a href="https://math.stackexchange.com/q/548131/445105">What's the geometrical intuition behind differential forms?</a></p> <p>Edit (since someone voted &quot;close&quot;, based on this question as a duplicate): This question indicates not grasping how 1-forms work (&quot;families of surfaces [...] 
Why do this interpretation makes any sense?&quot;) not only does that invite explanations for 1-forms and hand-waving away the rest with &quot;it works similarly in higher dimensions&quot; but it in particular does not mention specific visualizations for 2-forms which kind of show that there <em>should</em> be an intuition for 2-forms (and maybe higher). And while this question accepted an answer already, this answer does not help to answer the (enumerated) questions above. So this is absolutely not a duplicate.</p> </li> <li><p><a href="https://math.stackexchange.com/q/440816/445105">Geometric understanding of differential forms.</a></p> </li> <li><p><a href="https://math.stackexchange.com/q/206074/445105">Visualizing Exterior Derivative</a></p> </li> </ul>
Michael Paris
458,204
<blockquote> <ol> <li>How can I visualize the wedge between two 1-forms <span class="math-container">$α∧β$</span>?</li> </ol> </blockquote> <p>First we need to understand what the wedge actually does. In your case it creates a new fully anti-symmetric tensor (think determinant) of order 2. A 2-form is a thing that takes two vectors and returns a scalar. If, for example, we plug a vector into a 1-form we get a number. We also know that forms are multi-linear, therefore we can pull out all the factors and apply the form on each basis-vector individually. So in practice a 1-form just projects a vector and measures the length of that projection. Following this, a 2-form can be visualized as a thing that first takes a vector and becomes a 1-form.</p> <blockquote> <p>But how does that translate to the dual space? What is the visualization between two of these 1-forms as a wedge?</p> </blockquote> <p>Let's say we are in <span class="math-container">$\Lambda(\mathbb{R}^3)$</span>, your 2-vector <span class="math-container">$a \wedge b$</span> can be thought of as something with the magnitude of the enclosed parallelogram of <span class="math-container">$a,b$</span> and an additional other property - path orientation. These two properties are taken by the 2-form to return a number. You might get the impression that this looks very similar to integration, and you would be right. The way they scale and how they compute vectors is exactly how integration works.</p> <blockquote> <ol start="2"> <li>Why does it make sense that <span class="math-container">$d(d\alpha)=0$</span> for every differential form <span class="math-container">$\alpha$</span></li> </ol> </blockquote> <p>We know that the exterior derivative <span class="math-container">$d$</span> takes a (n-1)-form to an n-form. 
Since a form is also fully anti-symmetric, we combine that with the Schwarz integrability condition (that the second derivatives are symmetric) and arrive at statements like <span class="math-container">$ \operatorname{div}(\operatorname{rot}a)= 0$</span> or <span class="math-container">$\operatorname{rot}(\operatorname{grad}\phi)= 0$</span>; <span class="math-container">$d(d\alpha)=0$</span> is just the general statement behind these.</p> <blockquote> <ol start="3"> <li>What does Dan Piponi mean by saying: &quot;exterior derivative is none other than finding the boundary of the picture&quot; (4 Exterior Derivatives)</li> </ol> </blockquote> <p>Let us operate in <span class="math-container">$\Omega (\mathbb{R^3})$</span> and look at the object <span class="math-container">\begin{eqnarray} \phi &amp;=&amp; x_1+x_2+x_3\\ d\phi &amp;=&amp; \frac{\partial \phi}{\partial x_1}dx^1+\frac{\partial \phi}{\partial x_2}dx^2+\frac{\partial \phi}{\partial x_3}dx^3\\ d\phi &amp;=&amp; dx^1+dx^2+dx^3\\ \end{eqnarray}</span> Over what boundary do you need to integrate <span class="math-container">$d\phi$</span> to retrieve the information of <span class="math-container">$\phi$</span>?</p>
1,154,592
<p>I was doing some basic Number Theory problems and came across this problem:</p> <blockquote> <p>Show that if $a$ and $n$ are positive integers with $n\gt 1$ and $a^{n} - 1$ is prime, then $a = 2$ and $n$ is prime.</p> </blockquote> <p><strong>My Solution (Sloppy):</strong></p> <blockquote> <ul> <li>$a^{n}-1 = (a-1)\cdot(a^{n-1} + a^{n-2} + \dots + a + 1)$</li> <li>This means that $(a-1) \mid (a^{n}-1)$</li> <li>But $(a^{n}-1)$ is prime</li> <li>So, $(a-1) = 1 \Rightarrow a = 2$</li> <li>Now, let $n$ be composite</li> <li>$n = kl$, where $1 \lt k \lt n$ and $1 \lt l \lt n$</li> <li>Now, $a^{kl}-1 = (a^{k}-1)\cdot(a^{k(l-1)} + a^{k(l-2)} + \dots + a^k + 1)$</li> <li>This means that $2^{n} -1$ is composite</li> <li>Hence, we have achieved a contradiction</li> </ul> </blockquote> <p><strong>My Question:</strong> Am I correct?</p>
quid
85,306
<p>The proof is alright; there are two or three details though (the same issue twice, actually), one of which was already pointed out in the comments:</p> <ul> <li><p>Likely you should exclude the case $a=1$ right away, just by saying $1^n -1 = 0$ is not prime, so assume $a&gt;1$.</p></li> <li><p>You cannot derive from $a^n -1$ being a prime and $(a-1) \mid (a^n-1)$ directly that $(a-1)= 1$. What you can do is say $a-1=1$ or $a-1 = a^n-1$. The latter is impossible as $n &gt; 1$ (and $a &gt;1$); note that here you use $n&gt;1$.</p></li> <li><p>When you assert that $a^{kl}-1$ is composite, you should state explicitly that both factors you exhibit are not $1$.</p></li> </ul> <p>These are not major problems, but if one is picky one could insist on them.</p>
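The statement being proved can also be checked empirically for small values: among $a^n-1$ with $2\le a\le 12$ and $2\le n\le 12$, every prime value should occur only for $a=2$ with $n$ prime (the Mersenne primes in range). A short Python sketch:

```python
# Check: if a^n - 1 is prime (small a, n with n > 1), then a = 2 and n is prime.

def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

for a in range(2, 13):
    for n in range(2, 13):
        if is_prime(a ** n - 1):
            assert a == 2 and is_prime(n)
print("all primes of the form a^n - 1 in range have a = 2 and n prime")
```

Note the converse fails, matching the proof's one-directional logic: $2^{11}-1 = 2047 = 23\cdot 89$ is composite even though $11$ is prime.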
1,652,165
<p>On an empty shelf you have to arrange $3$ cans of soup, $4$ cans of beans, and $5$ cans of tomato sauce. What is the probability that none of the cans of soup are next to each other?</p> <p>I tried working this out but get very stuck because I'm not sure that I'm including all the possible outcomes.</p>
André Nicolas
6,312
<p>We have $12$ cans, soup (S) and other (O). An arrangement is a $12$-letter word in the alphabet S, O. We assume all arrangements are equally likely. There are $\binom{12}{3}$ of them.</p> <p>Now we count the arrangements in which the soup cans are separated, the "favourables". Here we use a little trick. Line up the $9$ O cans, with a generous space between any two of them. There are $10$ "gaps" (the $8$ real gaps and the $2$ end gaps) that we can slip an S into. There are $\binom{10}{3}$ ways to choose $3$ of these gaps.</p> <p>Finally, divide. The expression simplifies nicely.</p>
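The gap argument can be verified by brute force: enumerate all $\binom{12}{3}$ placements of the soup cans among the 12 shelf positions and count those with no two adjacent. A Python sketch:

```python
from itertools import combinations
from math import comb

# Count placements of 3 soup cans among 12 positions with no two adjacent,
# and compare against the gap-method count binom(10, 3).

positions = range(12)
total = 0
separated = 0
for soup in combinations(positions, 3):
    total += 1
    if all(b - a >= 2 for a, b in zip(soup, soup[1:])):
        separated += 1

assert total == comb(12, 3)        # 220 placements in all
assert separated == comb(10, 3)    # 120 with the soup cans separated
print(separated, "/", total)       # probability 120/220 = 6/11
```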
177,209
<p>I found the following problem while working through Richard Stanley's <a href="http://www-math.mit.edu/~rstan/bij.pdf">Bijective Proof Problems</a> (Page 5, Problem 16). It asks for a combinatorial proof of the following: $$ \sum_{i+j+k=n} \binom{i+j}{i}\binom{j+k}{j}\binom{k+i}{k} = \sum_{r=0}^{n} \binom{2r}{r}$$ where $n \ge 0$, and $i,j,k \in \mathbb{N}$, though any proof would work for me.</p> <p>I also found a similar identity in Concrete Mathematics, which was equivalent to this one, but I could not see how the identity follows from the hint provided in the exercises.</p> <p>My initial observation was to note that the ordinary generating function of the right hand side is $\displaystyle \frac {1}{1-x} \frac{1}{\sqrt{1-4x}}$, but couldn't think of any way to establish the same generating function for the left hand side.</p>
Will Orrick
3,736
<p><strong>Short summary:</strong> On the right we are summing the number of words of $r$ $a$s and $r$ $b$s over $0\le r\le n.$ Denote the set of words with $r$ $a$s and $r$ $b$s by $U_r.$ On the left we are computing the number of triples of words, the first with $i$ $a$s and $j$ $b$s, the second with $j$ $a$s and $k$ $b$s, the third with $k$ $a$s and $i$ $b$s, for all $i+j+k=n.$ Denote this set of triples $V_n.$</p> <p>Let $(x,y,z)\in V_n.$ Define $(x,y,z)$ to be <em>$0$-padded</em> if $x$ does not end in $b$ or $y$ does not end in $a.$ Define $(x,y,z)$ to be <em>$r$-padded</em> if $(x,y,z)=(x'b^r,y'a^r,z)$ where $(x',y',z)\in V_{n-r}$ is $0$-padded.</p> <p>Then any word $w\in U_r$ can be written in a unique way as $w=x'y'z$ where $(x',y',z)\in V_r$ is $0$-padded. (This is proved below.) Then $(x'b^{n-r},y'a^{n-r},z)\in V_n$ is $(n-r)$-padded. This establishes a bijection between $V_n$ on the left, and $U_0\cup U_1\cup\ldots\cup U_n$ on the right, thereby proving the identity. The elements of $U_n$ correspond to $0$-padded elements of $V_n,$ the elements of $U_{n-1}$ correspond to $1$-padded elements of $V_n,$ and so on.</p> <p><strong>Detailed answer:</strong> The expression $\binom{2r}{r}$ counts words constructed from $r$ $a$s and $r$ $b$s. The sum on the right counts all such words of lengths ranging from $0$ to $2n.$</p> <p>On the left, we have the product $\binom{i+j}{i}\binom{j+k}{j}\binom{k+i}{k},$ which counts all words constructed by concatenating a word with $i$ $a$s and $j$ $b$s to a word with $j$ $a$s and $k$ $b$s to a word with $k$ $a$s and $i$ $b$s. Such words contain $i+j+k$ $a$s and $i+j+k$ $b$s. 
The sum is over all $i+j+k=n,$ so these words all have $n$ $a$s and $n$ $b$s.</p> <p>These observations hint at the possibility of a bijection, but a few questions arise:</p> <ol> <li>Can any word of $n$ $a$s and $n$ $b$s be constructed by concatenating a word with $i$ $a$s and $j$ $b$s to a word with $j$ $a$s and $k$ $b$s to a word with $k$ $a$s and $i$ $b$s, for suitable $i,$ $j,$ $k$?</li> <li>Is it possible for a word to be constructed in more than one way by such a procedure?</li> <li>What about words of length less than $2n$?</li> </ol> <p>In answering the first question, we will actually answer all three.</p> <p>Given a word $w$ of $n$ $a$s and $n$ $b$s, let's see if we can find $i+j+k=n,$ a word $x$ of $i$ $a$s and $j$ $b$s, a word $y$ of $j$ $a$s and $k$ $b$s, and a word $z$ of $k$ $a$s and $i$ $b$s such that $xyz=w.$ Observe that if such words can be found then $\lvert x\rvert=i+j\le i+j+2k=(j+k)+(k+i)=\lvert y\rvert+\lvert z\rvert.$ So $\lvert x\rvert\le n.$</p> <p>We proceed by splitting $w$ into three, possibly empty, parts as follows: let $X$ consist of the first $P$ letters of $w,$ with $0\le P\le n,$ and let $I$ equal the number of $a$s in $X$ and $J$ the number of $b$s in $X$ (so that $I+J=P$). Let $K=n-I-J.$ Let $Z$ equal the string of letters from position $Q+1$ to $2n$ of $w,$ with $Q$ chosen so that $Z$ has $K$ $a$s. Note that $Q\ge P$ automatically holds: since the number of $a$s in $X$ is $I,$ the number of $a$s in the remainder of $w$ is $n-I\ge n-P=K.$ If $n-I&gt;K,$ then there are $a$s to the right of $X$ and to the left of $Z$ and so $Q&gt;P.$ If $n-I=K,$ then $J=0,$ and $X$ is simply a string of $I$ $a$s. So $Z$ contains all of the remaining $a$s and possibly some $b$s as well. 
Since $X$ contains no $b$s, we have $Q\ge P.$ So we may let $Y$ equal the string of letters from position $P+1$ to $Q$ of $w,$ giving $w=XYZ.$</p> <p>So far we have that $I+J+K=n,$ that $X$ contains $I$ $a$s and $J$ $b$s, that $Z$ contains $K$ $a$s and $B:=2n-Q-K$ $b$s, and that $Y$ contains $n-I-K=J$ $a$s and $n-J-B$ $b$s. In order to obtain our desired partition of $w,$ we need $B=I.$ We claim that with suitable choices of $P$ and $Q,$ this can always be achieved.</p> <p>To verify that this is so, we need to understand what happens as $P$ increases in steps of $1$ from $0$ to $n.$ To emphasize that $I,$ $J,$ and $K$ are functions of $P,$ we write $I(P),$ etc. We have already seen that there is always at least one possible value of $Q$ (and hence $B$) compatible with a given value of $P.$ There may, however, be more than one such compatible value, so we write $Q_l(P)$ and $Q_u(P)$ for the lower and upper bounds on $Q$ for a given $P.$ We have corresponding lower and upper bounds on $B,$ $$B_l(P)=2n-Q_u(P)-K(P)=n+P-Q_u(P)$$ and $$B_u(P)=2n-Q_l(P)-K(P)=n+P-Q_l(P).$$</p> <p>When $P$ increases by $1,$ $K$ decreases by $1$: $K(P+1)=K(P)-1.$ If the letter of $w$ at position $P+1$ is $a$ then $I(P+1)=I(P)+1$ and $J(P+1)=J(P)$; if that letter is $b$ then $I(P+1)=I(P)$ and $J(P+1)=J(P)+1.$ Since $K(P)=n-P,$ the $K(P)^\text{th}$ $a$ from the right in $w$ is the $(P+1)^\text{st}$ $a$ from the left. Let $u$ be the position of the $P^\text{th}$ $a$ from the left in $w$ ($u=0$ if $P=0$) and let $v$ be the position of the $(P+1)^\text{st}$ $a$ from the left in $w$ ($v=2n+1$ if $P=n$). 
Then $Q_l(P)=u$ and $Q_u(P)=v-1.$ In other words, if $\lvert X\rvert=P$ then $Z$ may start anywhere between the position immediately following the $P^\text{th}$ $a$ and the position of the $(P+1)^\text{st}$ $a.$ Notice that $Q_l(P+1)=v$ and therefore that $Q_l(P+1)=Q_u(P)+1.$ This implies that $$B_u(P+1)=n+P+1-Q_l(P+1)=n+P+1-1-Q_u(P)=n+P-Q_u(P)=B_l(P).$$ Therefore, if $I(P)&lt;B_l(P),$ then, since $I(P+1)$ is at most $I(P)+1,$ we have $I(P+1)\le B_u(P+1).$ On the other hand, $B_l(P)$ is non-increasing as $P$ increases from $0$ to $n,$ and ultimately equals $0.$ Therefore there must eventually be a $P$ such that $B_l(P)\le I(P)\le B_u(P).$ Once we find such a $P,$ we set $i=I(P),$ $j=J(P),$ $k=n-P,$ $x=X,$ $y=Y,$ $z=Z.$</p> <p>Will this be the only solution? Not in general. Let there be a solution $(i,j,k,x,y,z)$ corresponding to a particular $P$ and $Q.$ If $y$ begins with $b$ and $z$ begins with $a$ then we get an additional solution by increasing both $P$ and $Q$ by $1.$ This results in the first $b$ of $y$ becoming the last letter of $x$ and the first $a$ of $z$ becoming the last letter of $y,$ which increases $j$ by $1$ and decreases $k$ by $1$ while leaving $i$ unchanged. The process obviously also works in reverse. There are no other ways to obtain additional solutions, for if the first letter of $y$ is $a,$ adding it to $x$ would force us to add a $b$ to $z,$ but $z$ cannot increase in length when $x$ increases in length.</p> <p>In summary, all solutions for a particular word $w$ have the same $i.$ Equivalently, $\lvert y\rvert=j+k=n-i$ is the same for all solutions. The solution with minimum $j$ must be such that $x$ ends in $a$ or $y$ ends in $b.$ The solution with maximum $j$ must be such that $y$ begins with $a$ or $z$ begins with $b.$</p> <p>Consider an example. 
Let $w=baabbbaaabab.$ Here $n=6.$ Then we have $$ \begin{array}{ccc|ccc} P &amp; I(P) &amp; J(P) &amp; K(P) &amp; [B_l(P),B_u(P)] &amp; [Q_l(P),Q_u(P)]\\ \hline 0 &amp; 0 &amp; 0 &amp; 6 &amp; [5,6] &amp; [0,1]\\ 1 &amp; 0 &amp; 1 &amp; 5 &amp; [5,5] &amp; [2,2]\\ 2 &amp; 1 &amp; 1 &amp; 4 &amp; [2,5] &amp; [3,6]\\ 3 &amp; 2 &amp; 1 &amp; 3 &amp; [2,2] &amp; [7,7]\\ 4 &amp; 2 &amp; 2 &amp; 2 &amp; [2,2] &amp; [8,8]\\ 5 &amp; 2 &amp; 3 &amp; 1 &amp; [1,2] &amp; [9,10]\\ 6 &amp; 2 &amp; 4 &amp; 0 &amp; [0,1] &amp; [11,12] \end{array} $$ We get three solutions, corresponding to $(P,Q)=(3,7),$ $(4,8),$ $(5,9).$ These are $$ \begin{aligned} &amp;(i,j,k,x,y,z)=(2,1,3,baa,bbba,aabab),\\ &amp;(i,j,k,x,y,z)=(2,2,2,baab,bbaa,abab),\\ &amp;(i,j,k,x,y,z)=(2,3,1,baabb,baaa,bab). \end{aligned} $$</p> <p>We have answered questions $1$ and $2$ in the affirmative. This suggests a way of producing a bijective proof of the identity. Let $W$ be the set of words of length at most $2n$ containing equal numbers of $a$s and $b$s. Let $S$ be the set of sextuples $(i,j,k,x,y,z)$ where $i,$ $j,$ $k$ are nonnegative integers satisfying $i+j+k=n,$ $x$ is a word consisting of $i$ $a$s and $j$ $b$s, $y$ is a word consisting of $j$ $a$s and $k$ $b$s, and $z$ is a word consisting of $k$ $a$s and $i$ $b$s. The map from $S$ to $W$ that simply concatenates $x,$ $y,$ and $z$ is not onto, since it only produces words of length $2n,$ and not one-to-one, as we have seen above. By modifying this simple concatenation map, we obtain a map that is both onto and one-to-one.</p> <p><strong>The map:</strong> Let $s=(i,j,k,x,y,z)\in S.$ If the last letter of $x$ is $b$ and the last letter of $y$ is $a,$ delete the last letter of both words. 
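<p>These decompositions can also be found by brute force over the split points $(P,Q)$; the following Python sketch (my own illustration, not part of the original argument) recovers exactly the three solutions in the table:</p>

```python
def decompositions(w):
    """Brute-force all splits w = xyz where, for some i + j + k = n,
    x has i a's and j b's, y has j a's and k b's, z has k a's and i b's."""
    n = len(w) // 2
    sols = []
    for P in range(len(w) + 1):          # x = w[:P]
        for Q in range(P, len(w) + 1):   # z = w[Q:]
            x, y, z = w[:P], w[P:Q], w[Q:]
            i, j, k = x.count("a"), x.count("b"), y.count("b")
            if (y.count("a") == j and z.count("a") == k
                    and z.count("b") == i and i + j + k == n):
                sols.append((i, j, k, x, y, z))
    return sols

for s in decompositions("baabbbaaabab"):
    print(s)   # the three solutions listed above
```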
Repeat until $x$ or $y$ is empty or the last letter of $x$ is not $b$ or the last letter of $y$ is not $a.$ Denote the resulting words $x',$ $y'.$ We define $f:S\to W$ by $f(s)=x'y'z.$</p> <p>The inverse map, applied to a word $w\in W$ with $N$ $a$s and $N$ $b$s, $0\le N\le n,$ is computed</p> <ol> <li>by applying the algorithm described above to write $w=x'y'z,$ where $x'$ is a word with $i$ $a$s and $j'$ $b$s, $y'$ is a word with $j'$ $a$s and $k$ $b$s, $z$ is a word with $k$ $a$s and $i$ $b$s, $i+j'+k=N,$ and $j'$ is as small as possible, </li> <li>by appending $n-N$ $b$s to $x'$ to form $x,$ appending $n-N$ $a$s to $y'$ to form $y,$ and </li> <li>by forming the sextuple $(i,j'+n-N,k,x,y,z).$</li> </ol> <p><strong>Example from original answer:</strong> I've replaced my original answer with what I hope is a more convincing presentation, but this example from that answer may be helpful, so I leave it.</p> <p>Suppose that $n=3.$ The right hand side enumerates the union of the following sets $$ \begin{aligned} &amp;\{e\},\\ &amp;\{ab,ba\},\\ &amp;\{aabb,abab,abba,baab,baba,bbaa\},\\ &amp;\{aaabbb,aababb,aabbab,aabbba,abaabb,ababab,ababba,abbaab,abbaba,abbbaa,\\ &amp;\ \ baaabb,baabab,baabba,babaab,bababa,babbaa,bbaaab,bbaaba,bbabaa,bbbaaa\}. \end{aligned} $$ Here $e$ denotes the empty word. The left hand side enumerates words constructed as follows.</p> <ul> <li>Let $A$ be the set of words containing $i$ $a$s and $j$ $b$s. ($\lvert A\rvert=\binom{i+j}{i}$)</li> <li>Let $B$ be the set of words containing $j$ $a$s and $k$ $b$s. ($\lvert B\rvert=\binom{j+k}{j}$)</li> <li>Let $C$ be the set of words containing $k$ $a$s and $i$ $b$s. 
($\lvert C\rvert=\binom{k+i}{k}$)</li> </ul> <p>$$ \begin{aligned} &amp;(i,j,k)=(0,0,3),\ A=\{e\},B=\{bbb\},C=\{aaa\}:\\ &amp;\ \longrightarrow\{bbbaaa\}\\ &amp;(i,j,k)=(0,1,2),\ A=\{b\},B=\{abb,bab,bba\},C=\{aa\}:\\ &amp;\ \longrightarrow\{babbaa,bbabaa,\dot{b}bb\dot{a}aa=bbaa\}\\ &amp;(i,j,k)=(0,2,1),\ A=\{bb\},B=\{aab,aba,baa\},C=\{a\}:\\ &amp;\ \longrightarrow\{bbaaba,b\dot{b}ab\dot{a}a=baba,\dot{b}\dot{b}b\dot{a}\dot{a}a=ba\}\\ &amp;(i,j,k)=(0,3,0),\ A=\{bbb\},B=\{aaa\},C=\{e\}:\\ &amp;\ \longrightarrow\{\dot{b}\dot{b}\dot{b}\dot{a}\dot{a}\dot{a}=e\}\\ &amp;(i,j,k)=(1,0,2),\ A=\{a\},B=\{bb\},C=\{aab,aba,baa\}:\\ &amp;\ \longrightarrow\{abbaab,abbaba,abbbaa\}\\ &amp;(i,j,k)=(1,1,1),\ A=\{ab,ba\},B=\{ab,ba\},C=\{ab,ba\}:\\ &amp;\ \longrightarrow\{ababab,ababba,a\dot{b}b\dot{a}ab=abab,a\dot{b}b\dot{a}ba=abba,baabab,baabba,babaab,bababa\}\\ &amp;(i,j,k)=(1,2,0),\ A=\{abb,bab,bba\},B=\{aa\},C=\{b\}:\\ &amp;\ \longrightarrow\{a\dot{b}\dot{b}\dot{a}\dot{a}b=ab,ba\dot{b}a\dot{a}b=baab,bbaaab\}\\ &amp;(i,j,k)=(2,0,1),\ A=\{aa\},B=\{b\},C=\{abb,bab,bba\}:\\ &amp;\ \longrightarrow\{aababb,aabbab,aabbba\}\\ &amp;(i,j,k)=(2,1,0),\ A=\{aab,aba,baa\},B=\{a\},C=\{bb\}:\\ &amp;\ \longrightarrow\{aa\dot{b}\dot{a}bb=aabb,abaabb,baaabb\}\\ &amp;(i,j,k)=(3,0,0),\ A=\{aaa\},B=\{e\},C=\{bbb\}:\\ &amp;\ \longrightarrow\{aaabbb\} \end{aligned} $$ A dot over a letter indicates that the letter is to be deleted.</p>
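<p>The map $f$ and the claim that it is a bijection onto $W$ can be verified mechanically for small $n$; here is a Python sketch (the helper names are mine):</p>

```python
from itertools import product

def words(p, q):
    """All words consisting of p a's and q b's."""
    return ["".join(t) for t in product("ab", repeat=p + q)
            if t.count("a") == p]

def f(x, y, z):
    """The map: delete trailing (b of x, a of y) pairs, then concatenate."""
    while x and y and x[-1] == "b" and y[-1] == "a":
        x, y = x[:-1], y[:-1]
    return x + y + z

def image(n):
    """Apply f to every sextuple (i, j, k, x, y, z) in S."""
    out = []
    for i in range(n + 1):
        for j in range(n + 1 - i):
            k = n - i - j
            out += [f(x, y, z) for x in words(i, j)
                    for y in words(j, k) for z in words(k, i)]
    return out

for n in [2, 3]:
    img = image(n)
    W = [w for m in range(n + 1) for w in words(m, m)]
    assert sorted(img) == sorted(W)   # f : S -> W is a bijection
print(len(image(3)))   # 29 = 1 + 2 + 6 + 20
```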
4,542,985
<p>I want to fully understand the probabilistic interpretation. As in, I know that once we have a probabilistic model, we differentiate for maximum likelihood and find the weights/regressors, but what I really find difficult to grasp is how exactly we are developing a probabilistic model for linear regression. I have seen that initially we write: <span class="math-container">$y_i=\epsilon_i +w^Tx_i$</span>. (1)<br /> Here I want to know: what is <span class="math-container">$y_i$</span>? Is it the observed value? Then how come we model it as random? Where is the randomness coming from? What is <span class="math-container">$\epsilon_i$</span>? Is it error or noise?</p> <p>Please correct me if I am wrong:</p> <p>What I understand is that our measured data is noisy, i.e., for the same <span class="math-container">$x_i$</span>, <span class="math-container">$y_i$</span> can vary on a different draw of samples, which is due to some inherent randomness in <span class="math-container">$y_i$</span>. And this randomness is what we are quantifying using <span class="math-container">$\epsilon_i \sim N(0,\sigma^2)$</span>. Hence <span class="math-container">$y_i$</span> is a normal random variable given <span class="math-container">$x_i$</span>, and it has mean <span class="math-container">$w^Tx_i$</span>, so we want to maximize the likelihood, meaning maximize the probability that <span class="math-container">$y_i$</span> takes the value we have in our current experimental data given <span class="math-container">$x_i$</span>; this probability happens to be parameterized by <span class="math-container">$w$</span> due to (1).</p>
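<p>For concreteness, here is a small NumPy simulation of this model (entirely my own illustration: the design matrix, weights, and noise level are arbitrary choices). It shows that maximizing the Gaussian likelihood in $w$ amounts to ordinary least squares:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 3
X = rng.normal(size=(n, d))                 # fixed design points x_i
w_true = np.array([1.5, -2.0, 0.5])
sigma = 0.3

# y_i is random only through the noise term eps_i ~ N(0, sigma^2);
# its conditional mean given x_i is w^T x_i:
y = X @ w_true + rng.normal(scale=sigma, size=n)

# Maximizing the Gaussian likelihood over w is the same as minimizing
# sum_i (y_i - w^T x_i)^2, i.e. ordinary least squares:
w_mle, *_ = np.linalg.lstsq(X, y, rcond=None)
print(w_mle)    # close to w_true
```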
Matija
1,096,797
<p>The statement is true if a <em>random variable</em> is a measurable function <span class="math-container">$V:\Omega\rightarrow E$</span> to <span class="math-container">$E\subseteq\mathbb R$</span> equipped with the canonical <span class="math-container">$\sigma$</span>-algebra (the Borel algebra induced by one of the equivalent norms).</p> <p>In this case (or any case where <span class="math-container">$\{n\}\in\epsilon$</span> for all <span class="math-container">$n\in\mathbb N$</span>) and for any <span class="math-container">$B\in\epsilon$</span> we have <span class="math-container">$V^{-1}(B)=V^{-1}(B\cap\mathbb N)=\bigsqcup_{n\in B\cap\mathbb N}B_n$</span> by the very definition of <span class="math-container">$V$</span>, so <span class="math-container">$\sigma(V)\subseteq G$</span>. Clearly, for any <span class="math-container">$B\subseteq\mathbb N$</span> and <span class="math-container">$\bigsqcup_{n\in B}B_n\in G$</span> we have <span class="math-container">$B\in\epsilon$</span> and <span class="math-container">$V^{-1}(B)=\bigsqcup_{n\in B}B_n$</span>, so <span class="math-container">$G\subseteq\sigma(V)$</span>.</p> <p>For the general case, recall that <span class="math-container">$\epsilon_\circ=\{\emptyset,E\}$</span> is a <span class="math-container">$\sigma$</span>-algebra. For <span class="math-container">$\epsilon=\epsilon_\circ$</span> we have <span class="math-container">$\sigma(V)=\{\emptyset,\Omega\}\subseteq G$</span>, so <span class="math-container">$V$</span> is a random variable. But usually we don't have <span class="math-container">$\sigma(V)=G$</span> (say for <span class="math-container">$\Omega=\mathbb N$</span> with <span class="math-container">$B_n=\{n\}$</span>).</p>
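<p>A finite toy version of the equality <span class="math-container">$\sigma(V)=G$</span> can be checked directly; the partition below is my own arbitrary choice:</p>

```python
from itertools import combinations

# A finite toy version: Omega = {0,...,5}, a partition B_1, B_2, B_3,
# and V constant equal to n on B_n (all choices here are my own).
parts = {1: frozenset({0, 1}), 2: frozenset({2}), 3: frozenset({3, 4, 5})}
Omega = frozenset(range(6))
V = {w: n for n, block in parts.items() for w in block}

def subsets(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

# sigma(V) consists of the preimages V^{-1}(B); since V takes finitely
# many values it suffices to range B over subsets of {1, 2, 3}:
sigma_V = {frozenset(w for w in Omega if V[w] in B) for B in subsets({1, 2, 3})}

# G consists of all (disjoint) unions of blocks of the partition:
G = {frozenset(w for n in B for w in parts[n]) for B in subsets({1, 2, 3})}

assert sigma_V == G
print(len(G))   # 2^3 = 8 measurable sets
```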
8
<p>Contexts have backticks, which conflict with the normal way to enter inline code. How do I enter an inline context, since the initial approach:</p> <pre><code>`System`` </code></pre> <p>doesn't work ( `System`` ).</p>
Verbeia
8
<p>You can also use the HTML markup <code>&lt;code&gt;...&lt;/code&gt;</code>. This has the advantage that you can bold and italicise inside it, like so:</p> <pre><code>&lt;code&gt;f[x_*Pattern*]:= 50.`**watch out**&lt;/code&gt; </code></pre> <p>Results in</p> <p><code>f[x_<em>Pattern</em>]:= 50.` <strong>watch out</strong></code></p> <p>And as you can see, you don't need to count backticks.</p>
2,972,950
<p>Everything on this question is in complex plane.</p> <p>As the book describes a property of a winding number, it says that:</p> <blockquote> <p>Outside of the [line segment from <span class="math-container">$a$</span> to <span class="math-container">$b$</span>] the function <span class="math-container">$(z-a) / (z-b)$</span> is never real and <span class="math-container">$\leq 0$</span>.</p> </blockquote> <p>Here, the above statement should be interpreted as "never (real and <span class="math-container">$\leq 0$</span>)".</p> <p>If anyone could explain why this is true that would be great. I do get why any point on the line segment (other than <span class="math-container">$b$</span>, in which case the denominator is <span class="math-container">$0$</span>) has to satisfy the condition that <span class="math-container">$(z-a) / (z-b)$</span> is real and <span class="math-container">$\leq 0$</span>, but I am not sure how to prove why any point not on the line has to satisfy the condition also.</p> <p>Here, <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are arbitrary complex number in a region determined by a closed curve in the complex plane; both points lie on the same region. </p>
Eric Wofsey
86,856
<p>Note that <span class="math-container">$\frac{z-a}{z-b}$</span> is unchanged if we add the same number to each of <span class="math-container">$z$</span>, <span class="math-container">$a$</span>, and <span class="math-container">$b$</span>. So, we may translate all of our points by <span class="math-container">$-a$</span> to assume that <span class="math-container">$a=0$</span>. Now let <span class="math-container">$$t=\frac{z}{z-b}.$$</span> Solving for <span class="math-container">$z$</span>, we have <span class="math-container">$$z=\frac{t}{t-1}b.$$</span> If <span class="math-container">$t$</span> is real, then we see that <span class="math-container">$z$</span> is a real multiple of <span class="math-container">$b$</span>, so it is on the line between <span class="math-container">$a=0$</span> and <span class="math-container">$b$</span>. More specifically, if <span class="math-container">$t\leq 0$</span>, then <span class="math-container">$\frac{t}{t-1}\in [0,1)$</span>, so <span class="math-container">$z$</span> is in fact on the line segment between <span class="math-container">$a=0$</span> and <span class="math-container">$b$</span>.</p> <p>From a geometric perspective, <span class="math-container">$\frac{z-a}{z-b}$</span> being negative means that the vector from <span class="math-container">$a$</span> to <span class="math-container">$z$</span> and the vector from <span class="math-container">$b$</span> to <span class="math-container">$z$</span> point in opposite directions. It should not be hard to convince yourself with a picture that this only happens when <span class="math-container">$z$</span> is on the line segment between <span class="math-container">$a$</span> and <span class="math-container">$b$</span>.</p>
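<p>A numerical spot check of both directions (the endpoints <span class="math-container">$a$</span> and <span class="math-container">$b$</span> here are an arbitrary choice of mine):</p>

```python
import numpy as np

a, b = 0.0, 1.0 + 1.0j          # an arbitrary choice of endpoints

def ratio(z):
    return (z - a) / (z - b)

# On the segment from a towards b (excluding b) the ratio is real and <= 0:
for t in np.linspace(0.0, 0.99, 50):
    r = ratio(a + t * (b - a))
    assert abs(r.imag) < 1e-9 and r.real <= 0

# Off the segment, the ratio is never a nonpositive real number:
rng = np.random.default_rng(1)
for _ in range(1000):
    z = complex(rng.uniform(-3, 3), rng.uniform(-3, 3))
    s = (z - a) / (b - a)        # z = a + s (b - a)
    on_segment = abs(s.imag) < 1e-6 and 0.0 <= s.real <= 1.0
    if not on_segment:
        r = ratio(z)
        assert not (abs(r.imag) < 1e-9 and r.real <= 0)

print("checks passed")
```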
25,488
<p>I have noticed a common pattern followed by many students in crisis:</p> <ul> <li>They experience a crisis or setback (injury, illness, tragedy, etc)</li> <li>This causes them to miss a lot of class.</li> <li>They may stay away from class longer than they &quot;need to&quot; because of shame: they feel that since they have been absent, coming back to class will cause their teacher to be critical of them.</li> <li>Once they do come back, they are overwhelmed. They might need to do twice the amount of work in all of their courses just to catch up. In classes where early material becomes a needed prerequisite for later material, it can become impossible to follow the course content being presented when they do return.</li> <li>This experience of failure can create long term patterns of giving up under pressure.</li> </ul> <p>I have known many <strong>very</strong> capable students who fall into this trap and end up flunking out of University. I could have fallen into this trap myself. As a high school student and college student I got close on several occasions. I have a lot of empathy for this kind of situation.</p> <p>Question: What kinds of policies and practices can institutions and individual faculty members adopt which help students out of this trap?</p>
TomKern
15,671
<p>Set realistic intermediate goals for students while they're trying to catch up. By default their goal might be along the lines of completely catching up within a week. That's unrealistic, and if they fail at it, they'll be demotivated and can easily get stuck in a failure/demotivation feedback spiral.</p> <p>Setting expectations you know the student can be successful in can get students on a success/motivation feedback spiral instead.</p>
3,860,330
<p>I am interested in proving what family of functions have the property <span class="math-container">$$f'(x)=f^{-1}(x)$$</span> I've never dealt with a differential equation of this form, hence I could only go as far as to gather a little data:</p> <p><span class="math-container">$$f'(x)=f^{-1}(x)\implies f(f'(x))=x$$</span> <span class="math-container">$$\implies f''(x)f'(f'(x))=1$$</span> <span class="math-container">$$\implies f'(f'(x))=\frac{1}{f''(x)}$$</span></p> <p>Let <span class="math-container">$f'=g$</span></p> <p><span class="math-container">$$\implies g(g(x))=\frac{1}{g'(x)}$$</span></p> <blockquote> <p>Is this generally solvable?</p> </blockquote> <p>Any and all information would be much useful.</p>
Jan Eerland
226,665
<p>Well, we know that:</p> <p><span class="math-container">$$\text{y}\left(\text{y}^{-1}\left(x\right)\right)=x\tag1$$</span></p> <p>Where <span class="math-container">$\text{y}^{-1}\left(x\right)$</span> is the inverse of the function <span class="math-container">$\text{y}\left(x\right)$</span>.</p> <p>So, using your problem we can write:</p> <p><span class="math-container">$$\text{y}'\left(x\right)=\text{y}^{-1}\left(x\right)\tag2$$</span></p> <p>Note that a power of <span class="math-container">$x$</span> fits the bill for the differential equation, given in <span class="math-container">$(2)$</span>. So, let's set:</p> <p><span class="math-container">$$\text{y}\left(x\right)=\text{A}x^\text{r}\tag3$$</span></p> <p>We see that:</p> <p><span class="math-container">$$\text{y}'\left(x\right)=\text{r}\text{A}x^{\text{r}-1}\tag4$$</span></p> <p>Now the inverse of the function is given by:</p> <p><span class="math-container">$$\text{y}^{-1}\left(x\right)=\left(\frac{x}{\text{A}}\right)^\frac{1}{\text{r}}=\left(\frac{1}{\text{A}}\right)^\frac{1}{\text{r}}\cdot x^\frac{1}{\text{r}}\tag5$$</span></p> <p>So, we need to look at:</p> <p><span class="math-container">$$\text{r}\text{A}x^{\text{r}-1}=\left(\frac{1}{\text{A}}\right)^\frac{1}{\text{r}}\cdot x^\frac{1}{\text{r}}\space\Longleftrightarrow\space x^{\text{r}-1-\frac{1}{\text{r}}}=\frac{1}{\text{r}}\left(\frac{1}{\text{A}}\right)^{1+\frac{1}{\text{r}}}\tag6$$</span></p> <blockquote> <p>Now, to finish note that the RHS is a constant, so the LHS is a constant which means that <span class="math-container">$\text{r}-1-\frac{1}{\text{r}}=0$</span>, which means that <span class="math-container">$\text{r}=\frac{1\pm\sqrt{5}}{2}$</span>. 
So the LHS gives <span class="math-container">$x^0=1$</span>, which means that <span class="math-container">$\frac{1}{\text{r}}\left(\frac{1}{\text{A}}\right)^{1+\frac{1}{\text{r}}}=1$</span>, and that gives <span class="math-container">$\text{A}=\left(\frac{2}{1+\sqrt{5}}\right)^\frac{2}{1+\sqrt{5}}$</span>.</p> </blockquote> <p>Concluding, we see that this is indeed true for <span class="math-container">$\text{r}=\frac{1+\sqrt{5}}{2}$</span> and <span class="math-container">$\text{A}=\left(\frac{2}{1+\sqrt{5}}\right)^\frac{2}{1+\sqrt{5}}$</span>:</p> <p><span class="math-container">$$\text{y}\left(\text{y}'\left(x\right)\right)=\text{A}\left(\text{r}\text{A}x^{\text{r}-1}\right)^\text{r}=x\tag7$$</span></p> <p>Note that when <span class="math-container">$\text{r}=\frac{1-\sqrt{5}}{2}$</span> there is no solution.</p>
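<p>A quick numerical confirmation of this solution (a sanity check, not part of the derivation):</p>

```python
import math

r = (1 + math.sqrt(5)) / 2                      # the golden ratio
A = (2 / (1 + math.sqrt(5))) ** (2 / (1 + math.sqrt(5)))

f      = lambda x: A * x ** r
fprime = lambda x: A * r * x ** (r - 1)
finv   = lambda x: (x / A) ** (1 / r)

for x in [0.5, 1.0, 2.0, 10.0]:
    assert math.isclose(fprime(x), finv(x), rel_tol=1e-12)   # f' = f^{-1}
    assert math.isclose(f(finv(x)), x, rel_tol=1e-12)        # inverse sanity check
print(A)   # approximately 0.7427
```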
10,977
<p>When I taught calculus, I posted my notes after the lecture. Then I had the students fill out a mid-quarter evaluation, and a lot of them wanted me to post my notes before class.</p> <p>What I started doing was printing and handing out the notes to them, leaving the examples blank so they can fill those in. Many of them expressed to me that they liked it, so that they can concentrate on the problem instead of trying to write everything down. </p> <p>I just read my course evaluations and I got mixed reviews about the notes. Some students said they liked them and found them helpful, and an equal amount said that they didn't like it and preferred to take their own notes, and learned better that way. I did tell them that they don't have to use my notes if they don't want them, and to let me know if they don't want to use them so I can save paper.</p> <p><strong>In your experience, what has worked well: to provide notes, let the students make their own notes, or give them the option by posting it before class so they can print out notes if they want to use it?</strong> I usually print out the notes, a copy for each student, and pass it out to all of them individually, but I am wondering if I should change this.</p>
Jessica B
4,746
<p>I think what works depends on the local culture to a fair extent, ie what the expectations of the students are. </p> <p>As a student myself, I almost always had to copy down everything in lectures, and no-one seemed to have a problem with that. One lecturer gave us gappy notes, which we hated, because it made the easy bits (which were printed) way too slow, and the hard bits (examples to fill in) way too fast. I think that has more to do with the pacing of the lecture though, so could be corrected by the lecturer.</p> <p>On the other hand, last year I released 'outline notes', so students had some structure but still needed to pay attention in lectures. That was not at all popular. I got complaints that they wanted 'proper lecture notes', because all they had was 'what we write down in lectures'(!). I guess the difference was a combination of how much the students were willing to work and what they were accustomed to from my colleagues.</p> <p>This year I decided to change the emphasis of the 'lectures' to contain less writing and more thinking. I've given the students textbooks (essentially) that I follow generally but not to the letter. In lectures I use slides that alternate (roughly) between bits of theory and practice questions, using the board to discuss ideas and solutions. </p> <p>I've used this with two groups so far. The first group didn't like certain details, but the overall style seemed to be ok. One student would just make notes to copy onto the slides (I haven't yet convinced myself to post slides in advance, although I can see it would benefit some students, because I'm afraid of it being unhelpful to others, but I release them at the time of the class). I haven't yet had the comments from the second group. I think some have chosen to read the full text to avoid lectures, and some have come to lectures to avoid the text, and a few have made use of both. 
This is in line with a paper I read at some point - where multiple resources are available, students don't tend to make use of more than one (although presumably at least some choose the one that is best for them).</p>
10,977
<p>When I taught calculus, I posted my notes after the lecture. Then I had the students fill out a mid-quarter evaluation, and a lot of them wanted me to post my notes before class.</p> <p>What I started doing was printing and handing out the notes to them, leaving the examples blank so they can fill those in. Many of them expressed to me that they liked it, so that they can concentrate on the problem instead of trying to write everything down. </p> <p>I just read my course evaluations and I got mixed reviews about the notes. Some students said they liked them and found them helpful, and an equal amount said that they didn't like it and preferred to take their own notes, and learned better that way. I did tell them that they don't have to use my notes if they don't want them, and to let me know if they don't want to use them so I can save paper.</p> <p><strong>In your experience, what has worked well: to provide notes, let the students make their own notes, or give them the option by posting it before class so they can print out notes if they want to use it?</strong> I usually print out the notes, a copy for each student, and pass it out to all of them individually, but I am wondering if I should change this.</p>
user14622
14,622
<p>In my experience, guided notes seem to be correlated to poor learning. While preference for guided notes tends to split about 50/50, invariably the bottom half in terms of academic performance is dominated by people who use guided notes. Now is this causal? I'm not sure; it seems reasonable that &quot;poor&quot; students might prefer the easy-to-use notes over &quot;good&quot; students who want to do things their own way, so it could be that they are self-selecting. Either way, I have seen no benefit to using them at all.</p>
3,509,441
<p>Given a complex matrix <span class="math-container">$A$</span> which is <span class="math-container">$n \times n$</span>, how would I go about showing that the trace of <span class="math-container">$A^*A$</span> is <span class="math-container">$$\sum_{i=1}^n \sum_{j = 1}^n | a_{ij} |^2?$$</span></p> <p>Here <span class="math-container">$A^*$</span> refers to the conjugate transpose of <span class="math-container">$A$</span>. I know that the trace of any <span class="math-container">$n \times n$</span> matrix is defined to be <span class="math-container">$$\sum _{i = 1}^n a_{ii}.$$</span> </p>
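<p>A quick NumPy check of the identity being asked about, on a random complex matrix (my own illustration):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

lhs = np.trace(A.conj().T @ A)          # tr(A* A)
rhs = np.sum(np.abs(A) ** 2)            # sum over i, j of |a_ij|^2
assert np.isclose(lhs, rhs)
print(rhs)
```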
lulu
252,071
<p>Think of there being $3$ sets: Set <span class="math-container">$A$</span>, set <span class="math-container">$B$</span>, and set <span class="math-container">$C$</span>. For each element <span class="math-container">$s\in \{1, \cdots, n\}$</span> you have three choices as to where to put it. Here the subset <span class="math-container">$C$</span> contains all the elements that are in neither set <span class="math-container">$A$</span> nor set <span class="math-container">$B$</span>. As there are <span class="math-container">$n$</span> elements, that makes for <span class="math-container">$3^n$</span> total choices.</p> <p>For <span class="math-container">$n=2$</span>, the <span class="math-container">$9$</span> choices are <span class="math-container">$$(A,A),\, (A,B), \,(A,C),\,(B,A), \,(B,B), \,(B, C),\,(C,A), \,(C,B), \,(C,C)$$</span>.</p> <p>Just to clarify things, the choice <span class="math-container">$(B,C)$</span>, to pick a random one, means that element <span class="math-container">$1$</span> is in the second set and element <span class="math-container">$2$</span> is in neither the first nor the second. Thus it corresponds to the two disjoint sets <span class="math-container">$\emptyset, \{1\}$</span>.</p>
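<p>The three-way assignment is easy to enumerate programmatically; a small Python sketch (the function name is my own):</p>

```python
from itertools import product

def disjoint_pairs(n):
    """Ordered pairs (A, B) of disjoint subsets of {1,...,n}, built by
    assigning each element to A, to B, or to neither (C)."""
    pairs = []
    for assign in product("ABC", repeat=n):
        A = frozenset(s + 1 for s, c in enumerate(assign) if c == "A")
        B = frozenset(s + 1 for s, c in enumerate(assign) if c == "B")
        pairs.append((A, B))
    return pairs

for n in range(5):
    assert len(disjoint_pairs(n)) == 3 ** n
print(len(disjoint_pairs(2)))   # 9, matching the list above
```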
1,315,805
<blockquote> <p>Consider the series $$\sum_{n=1}^\infty \frac{2^n \sin^n x}{n^2}.$$ For $x\in (-\pi/2, \pi/2)$, when does the series converge?</p> </blockquote> <p>By the root test:</p> <p>$$\sqrt[n]{a_n} = \sqrt[n]{\frac{2^n\sin^n x}{n^2}} = \frac{2\sin x}{n^{2/n}} \to 2\sin x$$</p> <p>Thus, the series converges $\iff 2\sin x &lt; 1 \iff \sin x &lt; \frac{1}{2}$</p> <p>Is that right?</p>
Barry
90,638
<p>I like the ratio test here:</p> <p>$$\begin{split} L &amp;= \lim_{n\to\infty} \left|\frac{2^{n+1} \sin^{n+1} x}{(n+1)^2} \cdot \frac{n^2}{2^n\sin^n{x}} \right| \\ &amp;=\lim_{n\to\infty} |2\sin{x}|\cdot\frac{n^2}{(n+1)^2} \\ &amp;= |2\sin{x}| \end{split}$$</p> <p>$L &lt; 1 \iff |2\sin x| &lt; 1 \iff |\sin x| &lt; \frac12 \iff x \in (-\frac{\pi}6,\frac{\pi}6)$</p> <p>We can test the boundary separately. If $\sin x = \pm\frac12$, then the series becomes $\sum_{n=1}^{\infty}\frac{(\pm1)^n}{n^2}$, which clearly converges (absolutely, in fact). Thus, the solution is the closed interval, $x \in [-\frac{\pi}6,\frac{\pi}6]$.</p>
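<p>A numerical illustration of the conclusion (my own sketch, with arbitrary sample points): at the boundary the series reduces to $\sum 1/n^2$, while just outside the interval the terms do not even tend to $0$:</p>

```python
import math

def partial_sum(x, N):
    return sum((2 * math.sin(x)) ** n / n ** 2 for n in range(1, N + 1))

# At the boundary x = pi/6, 2 sin x = 1 and the series is sum 1/n^2 = pi^2/6:
assert math.isclose(partial_sum(math.pi / 6, 100_000), math.pi ** 2 / 6,
                    rel_tol=1e-4)

# Just outside the interval (x = 0.6, so 2 sin x > 1) the terms grow,
# so the series diverges:
term = lambda n: (2 * math.sin(0.6)) ** n / n ** 2
assert term(200) > term(100) > 1
print("ratio-test prediction confirmed numerically")
```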
754,583
<p>Write <span class="math-container">$$\phi_n\stackrel{(1)}{=}n+\cfrac{n}{n+\cfrac{n}{\ddots}}$$</span> so that <span class="math-container">$\phi_n=n+\frac{n}{\phi_n},$</span> which gives <span class="math-container">$\phi_n=\frac{n\pm\sqrt{n^2+4n}}{2}.$</span> We know <span class="math-container">$\phi_1=\phi$</span>, the <a href="http://en.wikipedia.org/wiki/Golden_ratio" rel="nofollow noreferrer">Golden Ratio</a>, so let's take <span class="math-container">$\phi_n\stackrel{(2)}{=}\frac{n+\sqrt{n(n+4)}}{2}$</span>. (Is that justified?)</p> <p><a href="http://m.wolframalpha.com/input/?i=%28n%2B%E2%88%9A%28n%28n%2B4%29%29%29%2F2&amp;x=0&amp;y=0" rel="nofollow noreferrer">Wolfram Alpha</a> states that, with <span class="math-container">$(2)$</span>, <span class="math-container">$$\lim\limits_{n\to -\infty}\phi_n=-1.$$</span> Why? Can I infer that this is true for <span class="math-container">$(1)$</span> and, if so, <em>why</em>?</p> <p><strong>I wonder what happens in <span class="math-container">$(1)$</span> for <span class="math-container">$n\in\mathbb{C}\backslash\mathbb{Z}$</span> too</strong>. I got something horrendous looking in <span class="math-container">$(2)$</span> for <span class="math-container">$n=i$</span>.</p> <hr> <p><em>Clarification:</em> I'm trying to <strong>find <span class="math-container">$\phi_n$</span> in terms of <span class="math-container">$n$</span></strong>. See the comments below.</p>
achille hui
59,379
<p>For any given $n \ne 0, -4$, let $( a_{n,m} )_{m\in\mathbb{Z}_{+}}$ be the sequence defined by $$a_{n,m} = \begin{cases}n,&amp; m = 1\\\displaystyle n + \frac{n}{a_{n,m-1}},&amp;m &gt; 1\end{cases}$$ Let $\displaystyle\;\mu_n = \frac{n+\sqrt{n(n+4)}}{2}\;$ and $\displaystyle\;\nu_n = \frac{n-\sqrt{n(n+4)}}{2}\;$. It is easy to verify the following expression,</p> <p>$$a_{n,m} = \frac{\mu_n^{m+1} - \nu_n^{m+1}}{\mu_n^{m}-\nu_n^{m}}$$</p> <p>which provides a closed-form solution for $a_{n,m}$. On the portion of the complex plane where $|\mu_n|$ differs from $|\nu_n|$, it is clear that one of $\mu_n$ or $\nu_n$ will completely dominate the other for large $m$. As a result, we can make the following partial summary about $\phi_n$.</p> <p>$$\phi_n = \lim_{m\to\infty} a_{n,m} = \begin{cases} \mu_n, &amp; |\mu_n| &gt; |\nu_n|\\ \\ ???&amp; |\mu_n| = |\nu_n|\quad\leftarrow \begin{array}{c} \small\verb/This includes the/\\ \small\verb/special case when /n = 0, -4. \end{array} \\ \nu_n, &amp; |\mu_n| &lt; |\nu_n| \end{cases} $$ In particular, $$ \begin{array}{lcl} n \in (0,\infty) &amp; \implies &amp; |\mu_n| &gt; |\nu_n| \implies \phi_n = \mu_n\\ n \in (-\infty,-4) &amp; \implies &amp; |\mu_n| &lt; |\nu_n| \implies \phi_n = \nu_n \end{array} $$ and hence $$\lim\limits_{n\to-\infty}\phi_n = \lim\limits_{n\to-\infty}\nu_n = \lim\limits_{k\to\infty}\frac{-k-\sqrt{k(k-4)}}{2} = -\infty. $$ The limit $-1$ reported by Wolfram Alpha is instead the limit of the other root, $$\lim\limits_{k\to\infty}\mu_{-k} = \lim\limits_{k\to\infty}\frac{-k+\sqrt{k(k-4)}}{2} = \lim\limits_{k\to\infty}\frac{-2k}{k+\sqrt{k(k-4)}} = -1, $$ which is not the value of the continued fraction for large negative $n$.</p> <p><strong>Update</strong></p> <p>Let us switch to the case $|\mu_n| = |\nu_n|$ but $n \ne 0, -4$. In particular, this covers the range where $n \in (-4,0)$. Since $\mu_n\nu_n = -n$ and $\mu_n \ne \nu_n$, we can find a $\theta_n \in (0,\pi)$ such that</p> <p>$$\big\{\; \mu_n, \nu_n \;\big\} = \big\{\; \sqrt{-n}e^{i\theta_n}, \sqrt{-n}e^{-i\theta_n}\;\big\}$$ In terms of $\theta_n$, we have</p> <p>$$a_{n,m} = \sqrt{-n}\frac{\sin((m+1)\theta_n)}{\sin(m\theta_n)}$$</p> <p>There are two sub-cases. 
</p> <ul> <li>If $\displaystyle\;\frac{\theta_n}{\pi} \in \mathbb{Q}$, then $a_{n,m}$ is periodic in $m$. Notice that when $\theta \in (0,\pi)$, $\displaystyle\;\frac{\sin((m+1)\theta)}{\sin(m\theta)}$ is never a constant sequence. So $\phi_n$ diverges.</li> <li>If $\displaystyle\;\frac{\theta_n}{\pi} \notin \mathbb{Q}$, then the $a_{n,m}$ form a dense subset of the line $\big\{\; \sqrt{-n} t : t \in \mathbb{R} \;\big\} \subset \mathbb{C}$.<br> Once again, $\phi_n$ diverges.</li> </ul> <p>Finally, this leaves us with the cases $n = 0$ and $n = -4$. </p> <ul> <li><p>For $n = 0$, it is easy because, starting at $m = 2$, we encounter an undefined expression like $a_{0,2} = 0 + \frac{0}{0}$. So all $a_{0,m}, m \ge 2$ and hence $\phi_0$ are undefined.</p></li> <li><p>For $n = -4$, we have $\mu_{-4} = \nu_{-4} = -2$. Notice $$\lim_{\mu\to\nu}\frac{\mu^{m+1}-\nu^{m+1}}{\mu^{m}-\nu^{m}} = \mu\frac{m+1}{m}$$ We thus suspect $\displaystyle\;a_{-4,m} = -2\frac{m+1}{m}$. By direct substitution, one can check that this is indeed the case. As a result, $$\phi_{-4} = \lim\limits_{m\to\infty} a_{-4,m} = -2\lim_{m\to\infty}\frac{m+1}{m} = -2 = \nu_{-4}$$</p></li> </ul> <p>Combining all these, we obtain:</p> <p>$$\phi_n = \lim_{m\to\infty} a_{n,m} = \begin{cases} \mu_n, &amp; |\mu_n| &gt; |\nu_n|\\ \nu_n, &amp; |\mu_n| &lt; |\nu_n|\quad\text{ or }\quad n = -4\\ \text{undefined},&amp; n = 0\\ \text{diverges},&amp; |\mu_n| = |\nu_n|,\; n \ne 0, -4 \end{cases} $$ <strong>Update2</strong></p> <p>The final question is: what is the set on which $|\mu_n| = |\nu_n|$? 
It turns out when $n \ne 0$,</p> <p>$$\begin{align} |\mu_n| = |\nu_n| &amp;\iff \left|1 + \sqrt{1+\frac{4}{n}}\right| = \left|1 - \sqrt{1+\frac{4}{n}}\right| \iff \Re\left(\sqrt{1+\frac{4}{n}}\right) = 0\\ &amp;\iff n \in [-4,0) \end{align}$$ In fact, if we define $\lambda(z)$ by</p> <p>$$\mathbb{C}\setminus (-4,0] \ni z\quad\mapsto\quad \lambda(z) = \frac{z}{2}\left(1 + \sqrt{1 + \frac{4}{z}}\right) \in \mathbb{C},$$</p> <p>$\lambda(z)$ will be a single-valued function over $\mathbb{C} \setminus (-4,0]$ and analytic over its interior $\mathbb{C} \setminus [-4,0]$.<br> Furthermore, $\lambda(n)$ coincides with $\mu_n$ and $\nu_n$ on $(0,\infty)$ and $(-\infty,-4]$ respectively! What this means is that the apparent switching of the value of $\phi_n$ between $\mu_n$ and $\nu_n$ is really an artifact of how we label them.</p> <p>In the end, we have a much simpler description for $\phi_n$.</p> <p>$$\phi_n = \begin{cases} \frac{n}{2}\left(1 + \sqrt{1 + \frac{4}{n}}\right),&amp; n \notin (-4,0]\\ \\ \text{ undefined/diverges },&amp; n \in (-4,0] \end{cases}$$</p>
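<p>The convergence claims can be checked numerically; in the Python sketch below (function names are mine), the iteration is compared against the closed form $\lambda(n)$:</p>

```python
import cmath
import math

def a(n, m):
    """The iteration a_{n,1} = n,  a_{n,m} = n + n / a_{n,m-1}."""
    v = n
    for _ in range(m - 1):
        v = n + n / v
    return v

def lam(n):
    """The closed form lambda(n) = (n/2)(1 + sqrt(1 + 4/n))."""
    return (n / 2) * (1 + cmath.sqrt(1 + 4 / n))

# Outside (-4, 0] the iteration converges to lambda(n):
for n in [1, 2, 7, -5, -100]:
    assert abs(a(n, 500) - lam(n)) < 1e-9

# At n = -4 convergence is only O(1/m), via a_{-4,m} = -2(m+1)/m:
for m in [1, 5, 50]:
    assert math.isclose(a(-4, m), -2 * (m + 1) / m, rel_tol=1e-12)

print(lam(1).real)   # 1.618..., the golden ratio phi_1
```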
39,762
<p>Happy new year mathematica gurus of stack exchange!</p> <p>As I see it one of the major obstacles in getting decent at programming mathematica is that, not only do you need to learn how certain commands work, but rather that you mainly need to understand how to write your syntax. This is a typical such situation, I was hoping that someone might shine some light on how to do it.</p> <p>I run this piece of code:</p> <pre><code>For[i = 1, i &lt;= samples, i++, AppendTo[fList, f1[randomSeeds[[1 + (n - 1) (i - 1)]][[1]], randomSeeds[[1 + (n - 1) (i - 1)]][[2]], 0, 0, 0, 0, 0, 0]]; ] For[i = 1, i &lt;= samples, i++, AppendTo[fList, f1[randomSeeds[[1 + (n - 1) (i - 1)]][[1]], randomSeeds[[1 + (n - 1) (i - 1)]][[2]], randomSeeds[[2 + (n - 1) (i - 1)]][[1]], randomSeeds[[2 + (n - 1) (i - 1)]][[2]], 0, 0, 0, 0]]; ] For[i = 1, i &lt;= samples, i++, AppendTo[fList, f1[randomSeeds[[1 + (n - 1) (i - 1)]][[1]], randomSeeds[[1 + (n - 1) (i - 1)]][[2]], randomSeeds[[2 + (n - 1) (i - 1)]][[1]], randomSeeds[[2 + (n - 1) (i - 1)]][[2]], randomSeeds[[3 + (n - 1) (i - 1)]][[1]], randomSeeds[[3 + (n - 1) (i - 1)]][[2]], 0, 0]]; ] For[i = 1, i &lt;= samples, i++, AppendTo[fList, f1[randomSeeds[[1 + (n - 1) (i - 1)]][[1]], randomSeeds[[1 + (n - 1) (i - 1)]][[2]], randomSeeds[[2 + (n - 1) (i - 1)]][[1]], randomSeeds[[2 + (n - 1) (i - 1)]][[2]], randomSeeds[[3 + (n - 1) (i - 1)]][[1]], randomSeeds[[3 + (n - 1) (i - 1)]][[2]], randomSeeds[[4 + (n - 1) (i - 1)]][[1]], randomSeeds[[4 + (n - 1) (i - 1)]][[2]]]]; ] </code></pre> <p>As you can see most of the stuff is identical in the for loops, it's just the number of zeroes that varies in the end. In fact, I only wish to run one of these for loops at a time. Just above these for loops I specify the number of dimensions I'm working in (n in the code), for n=2 I want to run the first loop, for n=3 I want to run the second etc. At the moment I comment and uncomment the undesired parts of the code, but that seems like a very ugly solution to me. 
</p> <p>So my question is this: is there a simple way of reducing these four copies of code into one copy? It seems to me as though this is a quite ineffective way of doing things.</p> <p>Edit: Small clarification: Ideally I want something like this: I give the program "2" as input and it chooses the first for loop above etc. </p> <p>Cheers, David</p>
Mr.Wizard
121
<p>To begin with you should try to avoid explicit <code>For</code> loops in most cases. See <a href="https://mathematica.stackexchange.com/q/7924/121">Alternatives to procedural loops and iterating over lists in Mathematica</a>. You will often also benefit from a functional rather than mutable style. Consider for example this function which does not rely on any global variables:</p> <pre><code>loop[fn_, seeds_, n_, samples_, jmax_] := fn @@ PadRight[#, 8] &amp; /@ Join @@@ Table[seeds[[j + (n - 1) (i - 1), k]], {i, samples}, {j, jmax}, {k, 2}] </code></pre> <p>An example of use:</p> <pre><code>SeedRandom[1]; randomSeeds = RandomInteger[{-50, 50}, {99, 2}]; loop[f1, randomSeeds, 5, 8, 3] </code></pre> <blockquote> <pre><code>{f1[30, -36, -50, 17, -47, 15, 0, 0], f1[47, 18, 24, -35, -26, -46, 0, 0], f1[33, 20, -49, -20, -2, -25, 0, 0], f1[19, 6, -3, -22, 18, -24, 0, 0], f1[36, 26, -7, -17, -6, 36, 0, 0], f1[-12, -21, 25, -20, -33, 4, 0, 0], f1[-44, -7, -48, 29, -33, -16, 0, 0], f1[1, -9, -35, -32, -5, -44, 0, 0]} </code></pre> </blockquote> <p>This matches the output of your third <code>For</code> loop (three seed pairs per sample). For the second and fourth, just change the last argument:</p> <pre><code>loop[f1, randomSeeds, 5, 8, 2] loop[f1, randomSeeds, 5, 8, 4] </code></pre> <p>I intentionally put all data upon which <code>loop</code> depends as arguments to avoid relying on global values. If you do not want to pass these explicitly you might rely upon <a href="https://mathematica.stackexchange.com/questions/353/functions-with-options/358#358">Options</a>. For example:</p> <pre><code>ClearAll[loop] Options[loop] = {Function -&gt; f1, "Seeds" :&gt; randomSeeds}; loop[n_, samples_, jmax_, OptionsPattern[]] := OptionValue[Function] @@ PadRight[#, 8] &amp; /@ Join @@@ Table[OptionValue["Seeds"][[j + (n - 1) (i - 1), k]], {i, samples}, {j, jmax}, {k, 2}] </code></pre> <p>This relies on the global value of <code>randomSeeds</code> but it is no longer hard-coded. 
Now the call is simpler:</p> <pre><code>loop[5, 6, 3] </code></pre> <blockquote> <pre><code>{f1[30, -36, -50, 17, -47, 15, 0, 0], f1[47, 18, 24, -35, -26, -46, 0, 0], f1[33, 20, -49, -20, -2, -25, 0, 0], f1[19, 6, -3, -22, 18, -24, 0, 0], f1[36, 26, -7, -17, -6, 36, 0, 0], f1[-12, -21, 25, -20, -33, 4, 0, 0]} </code></pre> </blockquote> <p>You can change the Options at any time with <code>SetOptions</code>:</p> <pre><code>SetOptions[loop, Function -&gt; foo]; loop[5, 3, 4] </code></pre> <blockquote> <pre><code>{foo[30, -36, -50, 17, -47, 15, 50, -27], foo[47, 18, 24, -35, -26, -46, 50, 40], foo[33, 20, -49, -20, -2, -25, -6, 23]} </code></pre> </blockquote>
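<p>For readers following along outside Mathematica, the indexing pattern is easy to state in ordinary procedural terms. A rough zero-based Python equivalent (hypothetical names, added for illustration, not part of the original answer):</p>

```python
def build_flist(seeds, n, samples, jmax, width=8):
    """Replicate the 1-based Mathematica index seeds[[j + (n-1)(i-1)]]:
    collect jmax seed pairs per sample and pad with zeros up to `width`."""
    out = []
    for i in range(samples):             # plays the role of i - 1 above
        args = []
        for j in range(jmax):            # plays the role of j - 1 above
            args.extend(seeds[j + (n - 1) * i])
        args += [0] * (width - len(args))
        out.append(tuple(args))
    return out

# hypothetical stand-in for randomSeeds: pair k is (10k+1, 10k+2)
seeds = [(10 * k + 1, 10 * k + 2) for k in range(20)]
rows = build_flist(seeds, n=5, samples=3, jmax=2)
```

The single <code>jmax</code> parameter replaces the four near-identical loops, which is exactly the idea behind <code>loop</code> above.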
2,977,645
<p>I'm trying to prove that <span class="math-container">$\sqrt[n]{\frac{s}{t}}$</span> is irrational unless both s and t are perfect nth powers. I have found plenty of proofs for nth root of an integer but cannot find anything for rationals. Also trying to work up from the proofs I have found is rather difficult.</p> <p>My lecturer wants me to prove this using uniqueness of prime factorisation but I also have no idea where to start with that. </p> <p>Any help is greatly appreciated.</p>
Arthur Hertz
610,350
<p>To build off Vasya and Calum's helpful comments:</p> <p>Let <span class="math-container">$\frac{s}{t}$</span> be an irreducible fraction (to address Vasya's comment).</p> <p>Continuing with the main proof, recall Vasya's comment:</p> <blockquote> <p><span class="math-container">$\sqrt[n]{\frac{s}{t}} = \frac{\sqrt[n]{s}}{\sqrt[n]{t}}$</span></p> </blockquote> <p>We will quickly prove this.</p> <p>Obviously <span class="math-container">$(\sqrt[n]{\frac{s}{t}})^n = \frac{s}{t}.$</span></p> <p>Notice <span class="math-container">$\frac{s}{t} = \frac{(\sqrt[n]{s})^n}{(\sqrt[n]{t})^n} = (\frac{\sqrt[n]{s}}{\sqrt[n]{t}})^n.$</span></p> <p>It follows that <span class="math-container">$\sqrt[n]{\frac{s}{t}} = \frac{\sqrt[n]{s}}{\sqrt[n]{t}}$</span> since the <span class="math-container">$n$</span>th root is either a bijection or always positive.</p> <p>Without loss of generality, let <span class="math-container">$s$</span> have a prime factorization <span class="math-container">$p_1^{a_1}p_2^{a_2}...p_i^{a_i}.$</span> Let <span class="math-container">$\sqrt[n]{s}$</span> be an integer with prime factorization <span class="math-container">$q_1^{b_1}q_2^{b_2}...q_j^{b_j}.$</span></p> <p>Clearly <span class="math-container">$p_1^{a_1}p_2^{a_2}...p_i^{a_i} = q_1^{nb_1}q_2^{nb_2}...q_j^{nb_j}.$</span> We know the prime factorization of <span class="math-container">$s$</span> is unique, so we know that each power of <span class="math-container">$p$</span> corresponds to <em>some</em> power of <span class="math-container">$q.$</span> It's not clear which <span class="math-container">$p$</span> corresponds to which <span class="math-container">$q,$</span> but it is clear that all the prime factors are <span class="math-container">$n$</span>th powers.</p>
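<p>A quick computational check of the statement (a Python sketch for small inputs, not part of the original proof): reduce $s/t$ first, then test whether both parts are perfect $n$th powers.</p>

```python
from math import gcd

def is_perfect_nth_power(m, n):
    """Does m = r**n for some integer r >= 0? (small m; float root + guard)"""
    if m < 0:
        return False
    r = round(m ** (1 / n))
    # guard against floating-point rounding near the true root
    return any((r + d) ** n == m for d in (-1, 0, 1) if r + d >= 0)

def nth_root_is_rational(s, t, n):
    """After reducing s/t, the n-th root is rational iff both the
    numerator and the denominator are perfect n-th powers."""
    g = gcd(s, t)
    s, t = s // g, t // g
    return is_perfect_nth_power(s, n) and is_perfect_nth_power(t, n)
```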
997,463
<p>For example, a complex number like $z=1$ can be written as $z=1+0i=|z|e^{i Arg z}=1e^{0i} = e^{i(0+2\pi k)}$.</p> <p>$f(z) = \cos z$ has period $2\pi$ and $\cosh z$ has period $2\pi i$.</p> <p>Given a complex function, how can we tell if it is periodic or not, and further, how would we calculate the period? For example, how do we find the period of $f(z)=\tan z$?</p>
Aaron Maroja
143,413
<p><strong>Hint:</strong> Make the substitution $x = \frac{3}{2}\tan\theta \Rightarrow dx = \frac{3}{2} \sec^2 \theta \ \ d\theta$.</p>
1,523,392
<p>This is question 2.4 in Hartshorne. Let $A$ be a ring and $(X,\mathcal{O}_X)$ a scheme. We have the associated map of sheaves $f^\#: \mathcal{O}_{\text{Spec } A} \rightarrow f_* \mathcal{O}_X$. Taking global sections we obtain a homomorphism $A \rightarrow \Gamma(X,\mathcal{O}_X)$. Thus there is a natural map $\alpha : \text{Hom}(X,\text{Spec} A) \rightarrow \text{Hom}(A,\Gamma(X,\mathcal{O}_X))$. Show $\alpha$ is bijective.</p> <p>I figure we need to start off with the fact that we can cover $X$ with affine open $U_i$, and that a homomorphism $A \rightarrow \Gamma(X,\mathcal{O}_X)$ induces a morphism of schemes from each $U_i$ to $\text{Spec} A$ and some how glue them together. But I have no idea how to show that the induced morphisms agree on intersections. How does this work? </p>
Babai
36,789
<p>Let <span class="math-container">$g\in\hom_{ring}(A,\Gamma(X,\mathcal{O}_X))$</span></p> <p>Cover <span class="math-container">$X$</span> by affine open subsets <span class="math-container">$\{U_i=Spec(A_i)\}_{i\in I}$</span>.</p> <p>Now, the inclusion <span class="math-container">$U_i\hookrightarrow X$</span> gives us a restriction map from the global sections of <span class="math-container">$X$</span> to the global sections of <span class="math-container">$U_i$</span> (i.e., <span class="math-container">$\rho^{X}_{Spec(A_i)}:\Gamma(X,\mathcal{O}_X)\rightarrow A_i$</span>)</p> <p>We take the composite map <span class="math-container">$A\rightarrow\Gamma(X,\mathcal{O}_X)\rightarrow A_i$</span></p> <p>This gives rise to a map <span class="math-container">$f_i:U_i=Spec(A_i)\rightarrow Spec(A)$</span> for each <span class="math-container">$i\in I$</span> (Note, <span class="math-container">$f_i$</span> is nothing but the Spec map of the composition of <span class="math-container">$g$</span> with the restriction map <span class="math-container">$\rho^{X}_{U_i}$</span>, i.e., <span class="math-container">$f_i=Spec(\rho^{X}_{U_i}\circ g)$</span>)</p> <p>Notation: If <span class="math-container">$h:A\rightarrow B$</span> is a ring homomorphism, then the corresponding scheme morphism is denoted by <span class="math-container">$Spec(h):Spec(B)\rightarrow Spec(A)$</span></p> <p>Now, we use the fact: If <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> are two schemes, then giving a morphism from <span class="math-container">$X$</span> to <span class="math-container">$Y$</span> is equivalent to giving an open cover <span class="math-container">$\{U_i\}_{i\in I}$</span> of X, together with morphisms <span class="math-container">$f_i:U_i\rightarrow Y$</span>, where <span class="math-container">$U_i$</span> has the induced open subscheme structure, such that the restrictions of <span class="math-container">$f_i$</span> and <span
class="math-container">$f_j$</span> to <span class="math-container">$U_i\cap U_j$</span> are the same, for each <span class="math-container">$i,j\in I$</span></p> <p>Therefore, we need to check: <span class="math-container">$$ f_i|_{U_i\cap U_j}=f_j|_{U_i\cap U_j} $$</span> We need to cover <span class="math-container">$U_i\cap U_j$</span>, again by affine open subsets (Otherwise, we cannot use the functoriality of <span class="math-container">$Spec$</span>) Cover <span class="math-container">$U_i\cap U_j$</span> by <span class="math-container">$\{V_{ijk}=Spec(B_{ijk})\}_{k\in I}$</span></p> <p>Enough to show,</p> <h1><span class="math-container">$f_i|_{V_{ijk}}=f_j|_{V_{ijk}}$</span></h1> <p>We have inclusion of open sets, <span class="math-container">$V_{ijk}\hookrightarrow U_i\cap U_j\hookrightarrow U_i \hookrightarrow X$</span> and <span class="math-container">$V_{ijk}\hookrightarrow U_i\cap U_j\hookrightarrow U_j\hookrightarrow X$</span></p> <p>Observe that,</p> <h1><span class="math-container">$f_i|_{V_{ijk}}=Spec(\rho^{U_i}_{V_{ijk}}\circ\rho^{X}_{U_i}\circ g)$</span></h1> <p>and</p> <h1><span class="math-container">$f_j|_{V_{ijk}}=Spec(\rho^{U_j}_{V_{ijk}}\circ\rho^{X}_{U_j}\circ g)$</span></h1> <h1>and both are equal to <span class="math-container">$Spec(\rho^{X}_{V_{ijk}}\circ g)=f_i|_{V_{ijk}}=f_j|_{V_{ijk}}$</span></h1> <p>Therefore, we conclude that <span class="math-container">$f_i$</span> and <span class="math-container">$f_j$</span> agrees on the intersection and glues in order to give rise to a morphism from <span class="math-container">$X\rightarrow Spec(A).$</span></p>
2,416,071
<p>I have this integral: $\displaystyle \int^{\infty}_0 kx e^{-kx} dx$.</p> <p>I tried integrating it by parts:</p> <p>$\dfrac{1}{k}\displaystyle \int^{\infty}_0 kx e^{-kx} dx = ... $. But I'm stuck </p> <p>now. Can you help me please?</p>
Tony Delgado
477,171
<p>Just take $kx = y$, so that $dx = \frac{dy}{k}$ and you will have $$\frac{1}{k}\int_{0}^\infty\frac{y}{e^y}dy$$</p> <p>Take $u = y$ and $dv = e^{-y}dy$; after integrating by parts, the $y$-integral evaluates to $1$, so the original integral equals $\frac{1}{k}$. </p>
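<p>A numerical check (Python, added for illustration): $kxe^{-kx}$ is $x$ times the density of an $\mathrm{Exponential}(k)$ variable, so the integral is the mean $\frac{1}{k}$ — note the substitution $y = kx$ brings in a factor $\frac{1}{k}$ from $dx = \frac{dy}{k}$.</p>

```python
from math import exp

def integral_kx_exp(k, upper=50.0, steps=200_000):
    """Midpoint-rule approximation of the integral of k*x*exp(-k*x)
    over [0, upper]; the tail beyond `upper` is negligible here."""
    h = upper / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        total += k * x * exp(-k * x)
    return total * h
```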
1,558,256
<p>The standard Normal distribution probability density function is $$p(x) = \frac{1}{\sqrt{2\pi}}e^{-x^2/2},\int_{-\infty}^{\infty}p(t)\,dt = 1$$ i.e., mean 0 and variance 1. The cumulative distribution function is given by the improper integral $$P(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x}e^{-t^2/2}\,dt$$ Describe a numerical method for approximating $P(x)$ given a value of $x$ to a prescribed absolute error $\tau$. Your solution should be as efficient as possible. Justify your answer.</p> <p>I believe we need to consider the case where $x &gt; 0$ and $x &lt; 0$ and then use one a numerical approximating method that is the most accurate to use. That is the crux of the problem, there are many methods, does one need to try each and everyone to see which one is best. Is there way of determining which one is the best given this particular problem before actually performing any computations. Any suggestions is greatly appreciated.</p>
njuffa
114,200
<p>Before embarking on crafting a custom implementation, it seems advisable to check whether the CDF of the standard normal distribution is supported as a built-in function in the programming environment of your choice. For example, <a href="http://www.mathworks.com/help/stats/normcdf.html" rel="nofollow">MATLAB</a> offers a function <code>normcdf</code>, as does <a href="https://docs.nvidia.com/cuda/cuda-math-api/group__CUDA__MATH__DOUBLE.html#group__CUDA__MATH__DOUBLE_1geb117bdbf6a5395d948d516cf633a997" rel="nofollow">CUDA</a>.</p> <p>If no implementation of <code>normcdf</code> is available, the next thing to check is whether the programming environment offers an implementation of the complementary error function, i.e. $\mathrm{erfc}$. This exists as <code>ERFC</code> in Fortran 2008, and as <code>erfc()</code> in C99 and C++11, for example. The CDF of the standard normal distribution is related to $\mathrm{erfc}$ as follows:</p> <p>$$P(x) = \frac{1}{2} \mathrm{erfc} (-x \sqrt{\frac{1}{2}})$$</p> <p>The reason for using $\mathrm{erfc}$ instead of $\mathrm{erf}$ is to avoid subtractive cancellation that leads to inaccuracy in the tails. Note that for $x &lt; 0$, there is error magnification, that is, the relative error incurred in scaling the argument to $\mathrm{erfc}$ is magnified. Computing $x\sqrt{\frac{1}{2}}$ should be performed as accurately as possible, and if the final result $P(x)$ is subject to tight <a href="https://en.wikipedia.org/wiki/Unit_in_the_last_place" rel="nofollow">ulp</a> error bounds, additional compensation may be needed. A technique for computing such a product with maximum accuracy with the help of a fused-multiply add (FMA) operation is presented in this paper:</p> <p>Nicolas Brisebarre and Jean-Michel Muller, "Correctly Rounded Multiplication by Arbitrary Precision Constants." <em>IEEE Transactions on Computers</em>, Vol. 57, No. 2, February 2008, pp. 
165-174 <a href="http://perso.ens-lyon.fr/jean-michel.muller/MultFmacArith.pdf" rel="nofollow">(draft online)</a></p> <p>The FMA operation, which computes $ab+c$ with a single rounding at the end is available as the standard math function <code>fma()</code> in C99 and C++11.</p> <p>If a given computational environment does not provide a built-in way to compute $\mathrm{erfc}$, you might want to look into using or porting robust code for the computation of error functions by W. J. Cody which can be found on <a href="http://www.netlib.org/specfun/erf" rel="nofollow">Netlib</a>. That Fortran code is based on the following paper:</p> <p>W. J. Cody, "Rational Chebyshev approximations for the error function." <em>Mathematics of Computation</em>, Vol 23., No. 107 (1969), pp. 631-637. <a href="http://www.ams.org/journals/mcom/1969-23-107/S0025-5718-1969-0247736-4/S0025-5718-1969-0247736-4.pdf" rel="nofollow">(online)</a></p> <p>If Cody's code is not suitable for your work (e.g. due to licensing issues), a custom implementation of $\mathrm{erfc}$ could be created based on the following paper:</p> <p>M. M. Shepherd and J. G. Laframboise, "Chebyshev approximation of $(1 + 2x)\exp(x^2)\operatorname{erfc} x$ in $0 \leqslant x &lt; \infty$." <em>Mathematics of Computation</em>, Vol. 36, No. 153 (Jan., 1981), pp. 249-253 <a href="http://www.ams.org/journals/mcom/1981-36-153/S0025-5718-1981-0595058-X/S0025-5718-1981-0595058-X.pdf" rel="nofollow">(online)</a></p> <p>I used the algorithm from this paper as the basis for the implementation of <code>erfc()</code> for a shipping parallel computation platform, but used a freshly generated polynomial minimax approximation rather than the Chebyshev approximation from the paper. Tools like Maple or Mathematica offer functionality for generating such minimax approximations, or you could generate your own using the Remez algorithm.</p>
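<p>In Python, for example, the formula above is a one-liner using the standard library's <code>math.erfc</code> (a sketch for illustration):</p>

```python
from math import erfc, isclose, sqrt

def norm_cdf(x):
    """P(x) = 0.5 * erfc(-x / sqrt(2)); using erfc rather than 1 - erf
    avoids the subtractive cancellation described above in the tails."""
    return 0.5 * erfc(-x * sqrt(0.5))
```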
5,528
<p>Let H be a subgroup of G. (We can assume G finite if it helps.) A complement of H in G is a subgroup K of G such that HK = G and |H&cap;K|=1. Equivalently, a complement is a transversal of H (a set containing one representative from each coset of H) that happens to be a group.</p> <p>Contrary to my initial naive expectation, it is neither necessary nor sufficient that one of H and K be normal. I ran across both of the following counterexamples in Dummit and Foote:</p> <ul> <li><p>It is not necessary that H or K be normal. An example is S<sub>4</sub> which can be written as the product of H=&lang;(1234), (12)(34)&rang;&cong;D<sub>8</sub> and K=&lang;(123)&rang;&cong;&#8484;<sub>3</sub>, neither of which is normal in S<sub>4</sub>.</p></li> <li><p>It is not sufficient that one of H or K be normal. An example is Q<sub>8</sub> which has a normal subgroup isomorphic to Z<sub>4</sub> (generated by i, say), but which cannot be written as the product of that subgroup and a subgroup of order 2.</p></li> </ul> <p>Are there any general statements about when a subgroup has a complement? The <A href="http://en.wikipedia.org/wiki/Complement_%28group_theory%29">Wikipedia page</A> doesn't have much to say. In practice, there are many situations where one wants to work with a transversal of a subgroup, and it's nice when one can find a transversal that is also a group. Failing that, one can ask for the smallest subgroup of G containing a transversal of H.</p>
Greg Kuperberg
1,450
<p>If $H$ is normal, then there is a well-known textbook answer. $H$ has a complement if and only if $G$ is a semidirect product of $H$ and its complement. The complement is isomorphic to the quotient $G/H$, and if you don't know whether there is one, you can do an extension calculation to see if it exists. For instance, if the extension is central, then the question is whether the 2-cocycle of the extension is null-homologous in $H^2(G/H,H)$.</p> <p>When $H$ is not normal, I once noticed that the isomorphism type of a complement is not unique. $S_3 \subset S_4$ is complemented both by the cyclic group $C_4$ and by the Klein group $C_2 \times C_2$, because they both act freely transitively on four points.</p> <p>It seems reasonable to generally think of a complement of a non-normal subgroup as a freely transitive subgroup of a group action. This yields a converse answer for "when" it happens. In particular, every finite group $K$ arises exactly once as a complement of $S_{n-1} \subset S_n$, namely when $n = |K|$.</p> <p>You can sometimes use topology to know that there isn't a freely transitive action. For example, $\mathrm{SO}(n-1) \subset \mathrm{SO}(n)$ does not have one when $n=3$ or $n \ge 5$, because otherwise there is no Lie group with the topology of its quotient. I don't know any very good obstructions in the finite group case, but I bet that there are some.</p> <p>Thurston (and I'm sure others) has remarked that $\mathrm{SO(3)} \subset \mathrm{Isom}(\mathbb{H}^3)$ has a complement. It is the homothety group of the plane, realized as the stabilizer of concentric horospheres.</p>
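<p>The $S_3 \subset S_4$ example with two non-isomorphic complements is small enough to verify by brute force. A Python sketch (illustration only, not part of the original answer), taking $H \cong S_3$ to be the stabilizer of the point $3$:</p>

```python
from itertools import permutations

def compose(p, q):
    # (p . q)(i) = p[q[i]]: apply q first, then p
    return tuple(p[q[i]] for i in range(len(q)))

S4 = set(permutations(range(4)))
H = {p for p in S4 if p[3] == 3}                       # copy of S_3 fixing 3

c4 = (1, 2, 3, 0)                                      # the 4-cycle (0 1 2 3)
K_cyclic = {(0, 1, 2, 3), c4, compose(c4, c4), compose(c4, compose(c4, c4))}
K_klein = {(0, 1, 2, 3), (1, 0, 3, 2), (2, 3, 0, 1), (3, 2, 1, 0)}

def is_complement(H, K, G):
    """HK = G and trivial intersection."""
    return len(H & K) == 1 and {compose(h, k) for h in H for k in K} == G
```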
5,528
<p>Let H be a subgroup of G. (We can assume G finite if it helps.) A complement of H in G is a subgroup K of G such that HK = G and |H&cap;K|=1. Equivalently, a complement is a transversal of H (a set containing one representative from each coset of H) that happens to be a group.</p> <p>Contrary to my initial naive expectation, it is neither necessary nor sufficient that one of H and K be normal. I ran across both of the following counterexamples in Dummit and Foote:</p> <ul> <li><p>It is not necessary that H or K be normal. An example is S<sub>4</sub> which can be written as the product of H=&lang;(1234), (12)(34)&rang;&cong;D<sub>8</sub> and K=&lang;(123)&rang;&cong;&#8484;<sub>3</sub>, neither of which is normal in S<sub>4</sub>.</p></li> <li><p>It is not sufficient that one of H or K be normal. An example is Q<sub>8</sub> which has a normal subgroup isomorphic to Z<sub>4</sub> (generated by i, say), but which cannot be written as the product of that subgroup and a subgroup of order 2.</p></li> </ul> <p>Are there any general statements about when a subgroup has a complement? The <A href="http://en.wikipedia.org/wiki/Complement_%28group_theory%29">Wikipedia page</A> doesn't have much to say. In practice, there are many situations where one wants to work with a transversal of a subgroup, and it's nice when one can find a transversal that is also a group. Failing that, one can ask for the smallest subgroup of G containing a transversal of H.</p>
Theo Johnson-Freyd
78
<p>This doesn't quite answer your question, but rather answers the question in the opposite direction. Thus, while I can't say precisely what subgroups have complements, I can give conditions that the complements must satisfy.</p> <p>To present a group as $G = KH$ with $H \cap K = 1$ is equivalent to the following:</p> <ul> <li>Maps $\lambda_K : H \times K \to K$ and $\rho_H: H \times K \to H$.</li> <li>such that $\lambda_K$ is a left group action of $H$ on the <em>set</em> $K$, and $\rho_H$ is a right group action of $K$ on the <em>set</em> $H$. Moreover, each action fixes the identity element of the other group.</li> <li>and these maps are required to satisfy two coherency conditions: $$ \lambda_K ( h, k_1 k_2 ) = \lambda_K( h, k_1 ) \cdot \lambda_K( \rho_H( h, k_1), k_2 ) $$ $$ \rho_H ( h_1 h_2, k ) = \rho_H (h_1, \lambda_K( h_2, k) ) \cdot \rho_H( h_2, k) $$ where $\cdot$ is the group operation in $K$ in the first line and in $H$ in the second line.</li> </ul> <p>To prove that this data is equivalent, in one direction observe that the factorization $G = KH$ defines for each $g$ a pair, so let $\lambda(h,k), \rho(h,k)$ be the unique elements in $K,H$ such that $\lambda(h,k)\cdot \rho(h,k) = hk$ as elements of $G$, and check that these maps satisfy the above axioms. In the other direction, the group structure on $K\times H$ is $$ (k_1,h_1) \cdot (k_2, h_2) = (k_1 \cdot \lambda_K(h_1,k_2), \rho_H(h_1,k_2) \cdot h_2) $$</p> <p>(Hm, I seem to have switched the order of $K,H$ from your question. Oh, well.)</p> <p>In any case, if one of the maps $\lambda, \rho$ is trivial, then the coherency conditions assert that the other action is by group automorphisms, and $G$ is the semidirect product. More generally, the construction is due to Zappa-Szep, and is sometimes called the <strong>knit product</strong> or <strong>double cross product</strong>, usually written $G = K \bowtie H$. Groups of this form are called <strong>factorized</strong>.</p>
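<p>For a concrete instance, $S_3 = KH$ with $K = A_3$ and $H$ generated by a transposition factorizes uniquely, and the maps $\lambda, \rho$ read off from the factorization satisfy both coherency conditions. A brute-force Python check (illustration only; here $K$ happens to be normal, so in fact $\rho(h,k) = h$ for all $k$):</p>

```python
def comp(p, q):
    # product in S_3 as composition: apply q first, then p
    return tuple(p[q[i]] for i in range(3))

K = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]      # A_3
H = [(0, 1, 2), (1, 0, 2)]                 # generated by the transposition (0 1)

def factor(g):
    """Unique k in K, h in H with g = k*h (uniqueness: K and H intersect
    trivially and |K||H| = |S_3|)."""
    for k in K:
        for h in H:
            if comp(k, h) == g:
                return k, h
    raise ValueError("not factorizable")

# read lambda and rho off the factorization h*k = lambda(h,k) * rho(h,k)
lam = {(h, k): factor(comp(h, k))[0] for h in H for k in K}
rho = {(h, k): factor(comp(h, k))[1] for h in H for k in K}
```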
5,528
<p>Let H be a subgroup of G. (We can assume G finite if it helps.) A complement of H in G is a subgroup K of G such that HK = G and |H&cap;K|=1. Equivalently, a complement is a transversal of H (a set containing one representative from each coset of H) that happens to be a group.</p> <p>Contrary to my initial naive expectation, it is neither necessary nor sufficient that one of H and K be normal. I ran across both of the following counterexamples in Dummit and Foote:</p> <ul> <li><p>It is not necessary that H or K be normal. An example is S<sub>4</sub> which can be written as the product of H=&lang;(1234), (12)(34)&rang;&cong;D<sub>8</sub> and K=&lang;(123)&rang;&cong;&#8484;<sub>3</sub>, neither of which is normal in S<sub>4</sub>.</p></li> <li><p>It is not sufficient that one of H or K be normal. An example is Q<sub>8</sub> which has a normal subgroup isomorphic to Z<sub>4</sub> (generated by i, say), but which cannot be written as the product of that subgroup and a subgroup of order 2.</p></li> </ul> <p>Are there any general statements about when a subgroup has a complement? The <A href="http://en.wikipedia.org/wiki/Complement_%28group_theory%29">Wikipedia page</A> doesn't have much to say. In practice, there are many situations where one wants to work with a transversal of a subgroup, and it's nice when one can find a transversal that is also a group. Failing that, one can ask for the smallest subgroup of G containing a transversal of H.</p>
Derek Holt
35,840
<p>Let me mention one other condition. Suppose the finite group $G$ has an abelian Sylow $p$-subgroup $P$ for some prime $p$. Then Burnside's Transfer Theorem says that $P$ has a normal complement in $G$ if and only if $P$ is in the centre of its normalizer in $G$; i.e. $P \le Z(N_G(P))$.</p> <p>This often provides the quickest way of showing that groups of a specific order cannot be simple. For example, if $|G|=56$, then $G$ has 1 or 8 Sylow $7$-subgroups. In the first case, the unique Sylow $7$-subgroup is normal in $G$, whereas in the second case, a Sylow $7$-subgroup $P$ is abelian and self-normalizing in $G$, and hence has a normal complement of order 8. In either case $G$ is not simple. There are alternative ways to prove this without using BTT, but it gets more and more indispensable as the group order increases.</p> <p>Of course $P$ might have a non-normal complement in $G$ even when the condition fails. For example, in $A_5$ a Sylow 5-subgroup has the complement $A_4$.</p> <p>In a finite solvable group $G$, by the theory of Hall subgroups, any subgroup $H$ (not necessarily normal) such that $|H|$ is coprime to $|G:H|$ has a unique conjugacy class of complements in $G$.</p>
3,165,460
<p>I am reading a survey on Frankl's Conjecture. It is stated without commentary that the set of complements of a union-closed family is intersection-closed. I need some clearer indication of why this is true, though I guess it is supposed to be obvious. </p>
Henno Brandsma
4,280
<p>The argument in a comment is worth restating: if <span class="math-container">$X$</span> is a set without isolated points (a crowded space) that is <span class="math-container">$T_1$</span> then no scattered subset (countable or not) can be dense in <span class="math-container">$X$</span>:</p> <p>Let <span class="math-container">$C$</span> be scattered. So it has an isolated point <span class="math-container">$p \in C$</span>, so there is an open set <span class="math-container">$U$</span> of <span class="math-container">$X$</span> such that <span class="math-container">$U \cap C =\{p\}$</span>. But then <span class="math-container">$U\setminus \{p\}$</span> is non-empty (as <span class="math-container">$X$</span> is crowded) and open (as <span class="math-container">$X$</span> is <span class="math-container">$T_1$</span>, <span class="math-container">$\{p\}$</span> is closed) and misses <span class="math-container">$C$</span>. So <span class="math-container">$C$</span> is not dense.</p> <p>This certainly applies to <span class="math-container">$X=[0,1]$</span>.</p>
2,118,931
<p>If $A$ is a $4\times2$ matrix and $B$ is a $2\times 3$ matrix, what are the possible values of $\operatorname*{rank}(AB)$?</p> <p>Construct examples of $A$ and $B$ exhibiting each possible value of $\operatorname*{rank}(AB)$ and explain your reasoning.</p>
dz_killer
410,800
<p>rank = number of pivots in reduced form</p> <p>Okay, so in terms of pivots per column, matrix A can have at most 2 pivots and matrix B also 2 pivots at most. AB, which is 4*3, could in principle have 3 pivots, but it never does: every column of AB is a linear combination of the 2 columns of A, so $\operatorname{rank}(AB) \le \min(\operatorname{rank} A, \operatorname{rank} B) \le 2$. Hence the possible values of $\operatorname{rank}(AB)$ are 0, 1 and 2.</p>
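<p>A small Python sketch exhibiting all three achievable values (added for illustration; it uses a naive row-reduction to count pivots, so $\operatorname{rank}(AB)\in\{0,1,2\}$ can be checked directly):</p>

```python
def rank(mat, eps=1e-9):
    """Row-reduce a small matrix of numbers and count pivot rows."""
    m = [list(row) for row in mat]
    rows, cols = len(m), len(m[0])
    r = 0
    for c in range(cols):
        pivot = max(range(r, rows), key=lambda i: abs(m[i][c]), default=None)
        if pivot is None or abs(m[pivot][c]) < eps:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(rows):
            if i != r:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 0], [0, 1], [0, 0], [0, 0]]       # 4x2, rank 2
B = [[1, 0, 0], [0, 1, 0]]                 # 2x3, rank 2
```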
636,089
<p>Let the function $$ f(x) = \begin{cases} ax^2 &amp; \text{for } x\in [ 0, 1], \\0 &amp; \text{for } x\notin [0,1].\end{cases} $$ Find $a$, such that the function can describe a probability density function. Calculate the expected value, standard deviation and CDF of a random variable X of such distribution.</p> <p>So thanks to the community, I now can solve the former part of such exercises, in this case by $\int_0^1 ax^2 \, dx = \left.\frac{ax^3}{3}\right|_0^1 = \frac a 3$ so that the function I'm looking for is $f(x)=3x^2$. Still, I'm struggling with finding the descriptive properties of this. Again, it's not specifically about this problem - I'm trying to learn to solve such class of problems so the whole former part may be different and we may end up with different function to work with.</p> <p>So as for the standard deviation, I believe I should find a mean value $m$ and then a definite integral (at least that's what the notes suggest?) so that I end up with $$\int_{- \infty}^\infty (x-m) \cdot 3x^2 \,\mathrm{d}x$$</p> <p>As for the CDF and expected value, I'm clueless though. In the example I have in the notes, the function was $\frac{3}{2}x^2$ and for the expected value there is simply some $E(X)=n\cdot m = 0$ written while the CDF here is put as $D^2 X = n \cdot 0.6 = 6 \leftrightarrow D(X) = \sqrt{6}$ and I can't make head or tail of this. Could you please help? </p>
Sergio Parreiras
33,890
<p>First notice the PDF is $3x^2$ <strong>only in the interval [0,1]</strong>; it is zero outside. To get the CDF you just have to use $F(x)=\int_{-\infty}^{x}f(z)dz$ and for the expected value: $E[X]=\int_{-\infty}^{+\infty}z\cdot f(z)dz$.</p>
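<p>A numerical cross-check of all the quantities (a Python sketch, not part of the original answer): with $f(x)=3x^2$ on $[0,1]$ one should find $E[X]=\frac34$, variance $\frac35-\frac{9}{16}=\frac{3}{80}$ (so the standard deviation is $\sqrt{3/80}\approx 0.194$), and CDF $F(x)=x^3$ on $[0,1]$.</p>

```python
def integrate(f, a, b, steps=100_000):
    """Simple midpoint rule; plenty accurate for these smooth integrands."""
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

pdf = lambda x: 3 * x ** 2          # the density found above, on [0, 1]

total = integrate(pdf, 0.0, 1.0)                     # should be 1
mean = integrate(lambda x: x * pdf(x), 0.0, 1.0)     # E[X] = 3/4
ex2 = integrate(lambda x: x * x * pdf(x), 0.0, 1.0)  # E[X^2] = 3/5
var = ex2 - mean ** 2                                # 3/5 - 9/16 = 3/80
```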
13,835
<p>Given that $$X = \{(x,y,z) \in \mathbb{R}^3 |\, x^2 + y^2 + z^2 - 2(xy + xz + yz) = k\}\,,$$ where $k$ is a constant. Also given that a group $G$ is represented by $$\langle g_1,g_2,g_3|\, g_1^2 = g_2^2 = g_3^2 = 1_G\rangle\,.$$ $G$ acts on $X$ such that $$g_1 \cdot (x,y,z) = (2(y+z) - x,y,z)\,,$$ $$g_2 \cdot (x,y,z) = (x,2(x+z)-y,z)$$ and $$g_3 \cdot (x,y,z) = (x,y,2(x+y)-z)\,.$$ So given $(x_0,y_0,z_0) \in X$, how do I go about plotting the orbit of the point $(x_0,y_0,z_0)$, ie $$\{g \cdot (x_0,y_0,z_0) \,|\, g \in G\}\,,$$ on a 3D graph? Would <code>ListPointPlot3D</code> be useful to plot all the points? How about other plotting functions?</p>
Markeur
4,448
<p>This answer is actually to reply to whuber, who offered the solution. I'd like to reply in the comment section, however, as this reply is going to be very long and it might take more than 5 comments, I think it'd be better to reply here instead. I'd like to apologise for violating the rules.</p> <p>Anyway, I've tried to plot the points as well as the surface on the software, but to no avail. This is the error when n = 1 (same goes for any other value of n):</p> <p>ContourPlot3D::plln: Limiting value -Abs[orbit[{-(1/6),-(1/3),1/4},{{{1,0,0},{0,1,0},{0,0,1}},{{-1,2,2},{0,1,0},{0,0,1}},{{1,0,0},{2,-1,2},{0,0,1}},{{1,0,0},{0,1,0},{2,2,-1}}}]] in {x,-Abs[orbit[{-(1/6),-(1/3),1/4},{{{1,0,0},{0,1,0},{0,0,1}},{{-1,2,2},{0,1,0},{0,0,1}},{{1,0,0},{2,-1,2},{0,0,1}},{{1,0,0},{0,1,0},{2,2,-1}}}]],Abs[orbit[{-(1/6),-(1/3),1/4},{{{1,0,0},{0,1,0},{0,0,1}},{{-1,2,2},{0,1,0},{0,0,1}},{{1,0,0},{2,-1,2},{0,0,1}},{{1,0,0},{0,1,0},{2,2,-1}}}]]} is not a machine-sized real number. >></p> <p>ListPointPlot3D::arrayerr: orbit[{-(1/6),-(1/3),1/4},{{{1,0,0},{0,1,0},{0,0,1}},{{-1,2,2},{0,1,0},{0,0,1}},{{1,0,0},{2,-1,2},{0,0,1}},{{1,0,0},{0,1,0},{2,2,-1}}}] must be a valid array or a list of valid arrays. 
>></p> <p>Show::gcomb: Could not combine the graphics objects in Show[ContourPlot3D[f[x,y,z],{x,-Abs[orbit[{-(1/6),-(1/3),1/4},{{&lt;&lt;3>>},{&lt;&lt;3>>},{&lt;&lt;3>>},{&lt;&lt;3>>}}]],Abs[orbit[{-(1/6),-(1/3),1/4},{{{&lt;&lt;3>>},{&lt;&lt;3>>},{&lt;&lt;3>>}},{{&lt;&lt;3>>},{&lt;&lt;3>>},{&lt;&lt;3>>}},{{&lt;&lt;3>>},{&lt;&lt;3>>},{&lt;&lt;3>>}},{{&lt;&lt;3>>},{&lt;&lt;3>>},{&lt;&lt;3>>}}}]]},{y,-Abs[orbit[{-(1/6),-(1/3),1/4},{{&lt;&lt;3>>},{&lt;&lt;3>>},{&lt;&lt;3>>},{&lt;&lt;3>>}}]],Abs[orbit[{-(1/6),-(1/3),1/4},{{{&lt;&lt;3>>},{&lt;&lt;3>>},{&lt;&lt;3>>}},{{&lt;&lt;3>>},{&lt;&lt;3>>},{&lt;&lt;3>>}},{{&lt;&lt;3>>},{&lt;&lt;3>>},{&lt;&lt;3>>}},{{&lt;&lt;3>>},{&lt;&lt;3>>},{&lt;&lt;3>>}}}]]},{z,-Abs[orbit[{-(1/6),-(1/3),1/4},{{&lt;&lt;3>>},{&lt;&lt;3>>},{&lt;&lt;3>>},{&lt;&lt;3>>}}]],Abs[orbit[{-(1/6),-(1/3),1/4},{{{&lt;&lt;3>>},{&lt;&lt;3>>},{&lt;&lt;3>>}},{{&lt;&lt;3>>},{&lt;&lt;3>>},{&lt;&lt;3>>}},{{&lt;&lt;3>>},{&lt;&lt;3>>},{&lt;&lt;3>>}},{{&lt;&lt;3>>},{&lt;&lt;3>>},{&lt;&lt;3>>}}}]]},Contours->{f[-(1/6),-(1/3),1/4]},ContourStyle->Opacity[0.5],Mesh->None],ListPointPlot3D[orbit[&lt;&lt;1>>],&lt;&lt;1>>]]. 
>></p> <p>And this is the output:</p> <p>Show[ContourPlot3D[ f[x, y, z], {x, -Abs[ orbit[{-(1/6), -(1/3), 1/ 4}, {{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}, {{-1, 2, 2}, {0, 1, 0}, {0, 0, 1}}, {{1, 0, 0}, {2, -1, 2}, {0, 0, 1}}, {{1, 0, 0}, {0, 1, 0}, {2, 2, -1}}}]], Abs[orbit[{-(1/6), -(1/3), 1/ 4}, {{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}, {{-1, 2, 2}, {0, 1, 0}, {0, 0, 1}}, {{1, 0, 0}, {2, -1, 2}, {0, 0, 1}}, {{1, 0, 0}, {0, 1, 0}, {2, 2, -1}}}]]}, {y, -Abs[ orbit[{-(1/6), -(1/3), 1/ 4}, {{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}, {{-1, 2, 2}, {0, 1, 0}, {0, 0, 1}}, {{1, 0, 0}, {2, -1, 2}, {0, 0, 1}}, {{1, 0, 0}, {0, 1, 0}, {2, 2, -1}}}]], Abs[orbit[{-(1/6), -(1/3), 1/ 4}, {{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}, {{-1, 2, 2}, {0, 1, 0}, {0, 0, 1}}, {{1, 0, 0}, {2, -1, 2}, {0, 0, 1}}, {{1, 0, 0}, {0, 1, 0}, {2, 2, -1}}}]]}, {z, -Abs[ orbit[{-(1/6), -(1/3), 1/ 4}, {{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}, {{-1, 2, 2}, {0, 1, 0}, {0, 0, 1}}, {{1, 0, 0}, {2, -1, 2}, {0, 0, 1}}, {{1, 0, 0}, {0, 1, 0}, {2, 2, -1}}}]], Abs[orbit[{-(1/6), -(1/3), 1/ 4}, {{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}, {{-1, 2, 2}, {0, 1, 0}, {0, 0, 1}}, {{1, 0, 0}, {2, -1, 2}, {0, 0, 1}}, {{1, 0, 0}, {0, 1, 0}, {2, 2, -1}}}]]}, Contours -> {f[-(1/6), -(1/3), 1/4]}, ContourStyle -> Opacity[0.5], Mesh -> None], ListPointPlot3D[ orbit[{-(1/6), -(1/3), 1/ 4}, {{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}, {{-1, 2, 2}, {0, 1, 0}, {0, 0, 1}}, {{1, 0, 0}, {2, -1, 2}, {0, 0, 1}}, {{1, 0, 0}, {0, 1, 0}, {2, 2, -1}}}], PlotStyle -> Directive[GrayLevel[0], PointSize[0.01]]]]</p> <p>I surmise there should be some problem with the function "orbit[x, twoGroup[word /@ {Subscript[g,1], Subscript[g,2], Subscript[g,3]}, n] /. rep]" in the "Module" function. I've tried to search for "orbit" function in the internet, but can't get anything. Is it because of the compatibility issue? I'm using Mathematica v8.0.4. And was wondering whether the "orbit" function is predefined, or how do we go about defining such a function?</p> <p>Thanks in advance.</p>
3,285,036
<p>Obviously this cannot happen in a right triangle, but otherwise - as the sine of 0°, 180° or 360° all equal 0 - I guess there is no way to find out what the original angle was?</p>
Paras Khosla
478,779
<p>The general solution of the equation is as follows. Note that <span class="math-container">$\pi \text{ rad}=180^{\circ}$</span>. <span class="math-container">$$\sin \theta=0\implies \theta=n\pi,\ n\in \mathbb{Z},\ \text{i.e. }\theta\in\{\ldots, -\pi, 0,\pi,2\pi,\ldots\},\ \text{or equivalently }\theta=n\cdot180^{\circ},\ n\in\mathbb{Z}$$</span></p>
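<p>A quick floating-point illustration of this solution set (mine, not part of the original answer): <code>sin(n*pi)</code> is zero in principle for every integer <code>n</code>, and in double precision it evaluates to values on the order of machine epsilon.</p>

```python
import math

# sin(n*pi) for integer n: exactly zero in exact arithmetic, and on the
# order of machine epsilon in floating point.
residuals = [abs(math.sin(n * math.pi)) for n in range(-5, 6)]
max_residual = max(residuals)
print(max_residual)  # tiny, e.g. ~1e-15
```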
3,285,036
<p>Obviously this cannot happen in a right triangle, but otherwise - as the sine of 0°, 180° or 360° all equal 0 - I guess there is no way to find out what the original angle was?</p>
Community
-1
<p><strong>Caution:</strong></p> <p>The equation </p> <p><span class="math-container">$$\sin(x)=0$$</span></p> <p>has infinitely many solutions, each an integer multiple of a half turn.</p> <p>But</p> <p><span class="math-container">$$\arcsin(0)=0$$</span></p> <p>and nothing else, because the arc sine is defined to be a function.</p>
2,593,627
<p>I struggle to find the language to express what I am trying to do. So I made a diagram.</p> <p><a href="https://i.stack.imgur.com/faHgE.png" rel="noreferrer"><img src="https://i.stack.imgur.com/faHgE.png" alt="Graph3parallelLines"></a></p> <p>So my original line is the red line. From (2.5,2.5) to (7.5,7.5).</p> <p>I want to shift the line away from itself a certain distance, but maintaining the original angle of the line (so move it a certain distance away at a 90 degree angle). So after shifting the line by the distance of +1 it would become the blue line, or by -1 it would become the yellow line.</p> <p>I don't know a lot of the maths terminology, so if anybody could manage an explanation in layman's terms it would be appreciated.</p> <p>Thanks, C</p>
user
505,767
<p>The line equation between $(2.5,2.5)$ and $(7.5,7.5)$ is:</p> <p>$$y=x$$</p> <p>For a <strong>vertical shifting</strong> of $\pm1$, the equations for the blue line and the yellow line are</p> <p>$$y=x+1$$</p> <p>$$y=x-1$$</p> <p>For a <strong>parallel shifting</strong> of $\pm1$ (i.e. a perpendicular distance of $1$), the equations for the blue line and the yellow line are</p> <p>$$y=x+\sqrt 2$$</p> <p>$$y=x-\sqrt 2$$</p> <p>since the distance between the parallel lines $y=x$ and $y=x+c$ is $|c|/\sqrt 2$. Thus if you want a perpendicular distance $\pm d$ between the lines you have to shift vertically by $\pm d\sqrt 2$.</p>
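<p>A programmatic version of this shift that works for any slope (the helper and variable names are my own, not from the answer) moves both endpoints of the segment along its unit normal and then checks the perpendicular distance to the shifted line.</p>

```python
import math

# Shift the segment p1-p2 sideways by perpendicular distance d while keeping
# its direction. (Illustrative helper, not from the original post.)
def shift_segment(p1, p2, d):
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    length = math.hypot(dx, dy)
    nx, ny = -dy / length, dx / length          # unit normal to the segment
    return ((p1[0] + d * nx, p1[1] + d * ny),
            (p2[0] + d * nx, p2[1] + d * ny))

p1, p2 = (2.5, 2.5), (7.5, 7.5)
q1, q2 = shift_segment(p1, p2, 1.0)

# The perpendicular distance from p1 to the shifted line equals d.
ux, uy = q2[0] - q1[0], q2[1] - q1[1]
dist = abs(ux * (p1[1] - q1[1]) - uy * (p1[0] - q1[0])) / math.hypot(ux, uy)
print(dist)  # ~1.0
```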
257,567
<p>Assuming <a href="http://www.springer.com/gp/book/9783642649059" rel="nofollow noreferrer">Bishop's</a> constructive mathematics, is it true that any real-valued square matrix with <strong>distinct</strong> roots of the characteristic polynomial can be diagonalized? By distinct, I mean <strong>apart</strong>: $x \neq y \triangleq \exists q \in \mathbb{Q}.|x - y| &gt; q$. (There may be similar definitions)</p> <p>For the case $n=2$, it seems true since we can deduce $a_{11} - \lambda_j \neq 0 \lor a_{22} - \lambda_j \neq 0$ for any $j = 1, 2$ where $a_{ii}, i=1,2$ are the diagonal elements and $\lambda_j$s are the roots of the characteristic polynomial. It then allows multiplying by a nonzero number and, using the property that $(a_{11} - \lambda_j)(a_{22} - \lambda_j) = a_{12}a_{21}$, solving the respective system of linear equations and consequently finding an eigenvector: $A v = \lambda_j v$. </p> <p>Already for the case $n=3$, the argument seems not to work and one cannot proceed without some case distinction on reals. <a href="http://ac.els-cdn.com/0168007286900060/1-s2.0-0168007286900060-main.pdf?_tid=f7d5faaa-c5ca-11e6-9674-00000aab0f27&amp;acdnat=1482138801_34d65d1543ed9d625fe261e2606b9cd1" rel="nofollow noreferrer">This work</a> addresses the problem in Lemma 1.5, but they seem to assume $xy = 0 \implies x =0 \lor y=0$ which is not valid constructively.</p> <p><a href="http://www.cse.chalmers.se/~coquand/FISCHBACHAU/AlgebraLogicCoqLom.pdf" rel="nofollow noreferrer">Coquand and Lombardi</a> in Theorem 2.3 constructed an effective procedure of finding eigenvectors of a projection matrix. I suspected that something like this could be done for general matrices where it is known beforehand that the roots of the characteristic polynomial are distinct. </p> <p><a href="https://arxiv.org/pdf/1605.04832.pdf" rel="nofollow noreferrer">Lombardi and Quitte</a> addressed the problem on page 100 (top). 
Also in Proposition 5.3, they claim that </p> <p>$$\prod_{i=1}^n ( (A - \lambda_i I)_{1 \dots n-1, 1 \dots n-1} ) \neq 0$$</p> <p>by "exhibiting" the companion matrix of $\lambda^n - 1$. Their language is a bit obscure. However, it seems they imply the following lemma:</p> <p><em>Let $A \in M(\mathbb{R},n)$, and suppose that the roots $\lambda_1, \dots, \lambda_n$ of the characteristic polynomial for $A$ are mutually apart. Then, for every root $\lambda_j$ of the characteristic polynomial, there is a principal submatrix of $A - \lambda_j I$ of size $(n-1) \times (n-1)$ which is invertible.</em></p>
Denis Serre
8,799
<p><strong>Edit</strong>. I think now that your question concerns Gårding's theory of <em>hyperbolic polynomials</em>. </p> <p>A homogeneous polynomial of degree $d$ in $N$ real variables is hyperbolic in the direction $\bf e$ if for every vector $X$, the roots of the polynomial $t\mapsto p(X+t{\bf e})$ are real. We may suppose that $p({\bf e})&gt;0$. The connected component of $\bf e$ in $\{p&gt;0\}$ is the forward cone; it is convex. Actually, $p$ is convex in the direction of any vector of the future cone. Let us denote by $\Gamma$ the closure of the forward cone. Gårding proved a reverse Hölder inequality, in terms of the polar form associated with $p$: $$p(x_1)^{1/d}\cdots p(x_d)^{1/d}\le\phi(x_1,\ldots,x_d)$$ for every $x_1,\ldots,x_d\in\Gamma$. He found also that $p^{1/d}$ is concave over $\Gamma$. Finally, the derivative of $p$ in a forward direction provides a hyperbolic polynomial, whose forward cone contains (strictly, in general) that of $p$.</p> <p>How does this apply here? The map $\sigma_d:S\mapsto \det S$ is a hyperbolic polynomial over the symmetric matrices, in the direction of $I_d$; we have $N=\frac{d(d+1)}2$. This is just saying that every symmetric matrix has real eigenvalues. Its forward cone is that of positive definite matrices. When differentiating in the direction of $I_d$, one obtains (up to a constant factor) $\sigma_{d-1}$, next $\sigma_{d-2}$, etc. Their closed forward cones $\Gamma_d$, $\Gamma_{d-1}$, ... are larger and larger; in particular, they all contain ${\bf SPD}_d$.</p> <p>As mentioned below, your inequality amounts to $$(A+D=B+C,\, D\le B,C\le A)\Longrightarrow(\sigma_k(A)\sigma_k(D)\le\sigma_k(B)\sigma_k(C)).$$ I suspect that something stronger holds true, that is, if $p$ is hyperbolic, with closed forward cone $\Gamma$, then for all vectors $A,X,Y\in\Gamma$, the vectors $A,B=A+X,C=A+Y$ and $D=A+X+Y$ satisfy $p(A)p(D)\le p(B)p(C)$. 
I point out that if $X,Y$ are collinear (that is, $A,B,C,D$ are collinear), then this is true because of the concavity of $p^{1/d}$.</p> <p>I was able to prove the claim when $k=2$, in which case $p$ can be written in the Lorentz form $p(s,x)=s^2-|x|^2$, in some appropriate coordinates $X=(s,x)$. The proof is somewhat cumbersome; here are the main arguments. The quantity $$F(s,t,a,x,y)=p(B)p(C)-p(A)p(D),\qquad A=(1,a),\,X=(s,x)\,,Y=(t,y)$$ is a concave function of $x$ and $y$. Let us fix $s,t&gt;0$. When minimizing over $|x|=s$ and $|y|=t$, the constraints must be equalities. There remains to minimize with respect to $a$ in the unit ball. If $a$ is on the unit sphere, $F$ is trivially $\ge0$. Otherwise, a minimum should be reached when $\nabla_aF=0$. An interesting calculation shows that this minimum is precisely zero. I can write out the details if you wish.</p> <hr> <p>Let me begin with two observations. On the one hand, the quantity $$\sum_{Q\in S(n,k)}\det(A_Q)=:\sigma_k(A)$$ is nothing but the $k$-th symmetric polynomial in the eigenvalues of $A$, whence my notation.</p> <p>On the other hand, if the required inequality is true, then a recursive use of it gives as well the inequality $$({\bf I}_k)\qquad \sigma_k(A)\sigma_k(D)\le\sigma_k(B)\sigma_k(C)$$ whenever $A,B,C,D$, symmetric positive definite, obey the constraints $$({\bf C})\qquad A+D=B+C,\qquad D\le B,C\le A.$$</p> <p>I claim that this inequality is true at least for $k=1$ and $k=n$ (I guess that it remains true for every $k$). When $k=1$, this is because $\sigma_1$ is the trace, and the constraints imply $${\rm Tr}\,A+{\rm Tr}\,D={\rm Tr}\,B+{\rm Tr}\,C,\qquad0&lt;{\rm Tr}\,D\le{\rm Tr}\,B,{\rm Tr}\,C\le{\rm Tr}\,A.$$ And we know that $a+d=b+c$ and $0&lt;d\le b,c\le a$ imply $ad\le bc$.</p> <p>For $k=n$, we must prove $\det A\det D\le \det B\det C$. To proceed, let us define $$X=\frac12(A+D)=\frac12(B+C),\qquad T=X-B,\qquad S=X-D.$$ The constraints are that $X&gt;0$ and $\pm T\le S\le X$. 
We want to prove $$\det(X+S)\det(X-S)\le\det(X-T)\det(X+T).$$ Multiplying every matrix at left and right by $X^{-1/2}$, and using the multiplicativity of the determinant, we may restrict to the case where $X=I_n$. There remains to prove $$(|T|\le S\le I_n)\Longrightarrow(\det(I_n-S^2)\le\det(I_n-T^2)),$$ where $|T|$, the absolute value, is given by functional calculus. Remark that because the right-hand side involves only $T^2$, which equals $|T|^2$, we may also assume that $0_n\le T$. Therefore, there remains to check the monotonicity of $F:T\mapsto\det(I_n-T^2)$ over $0_n\le T\le I_n$. To this end, we differentiate $$DF(T)\cdot H={\rm Tr}(\widehat{I_n-T^2}(HT+TH)),$$ where $\hat M$ is the adjugate of $M$. Up to a density argument, we may assume that $T&lt;I_n$ and therefore $I_n-T^2$ is invertible. Then $DF(T)\cdot H={\rm Tr}(HQ)$ where $$Q=\det(I_n-T^2)\,T^{1/2}(I_n-T^2)^{-1}T^{-1/2}.$$ Because $Q\ge0_n$, the monotonicity holds true and the proof is complete.</p> <p><strong>Edit</strong>. I find it embarrassing that the constraints (<strong>C</strong>) are invariant under congruence $M\mapsto P^TMP$, whereas the inequalities (<strong>I</strong>$_k$) to prove are not, except for $k=n$.</p>
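<p>The proven case $k=n$ lends itself to a numerical spot-check. The sketch below (entirely mine, not from the answer) samples random positive semidefinite $X,Y$ and positive definite $A$, sets $B=A+X$, $C=A+Y$, $D=A+X+Y$ (this satisfies the constraints (<strong>C</strong>) after exchanging the roles of $A$ and $D$), and tests $\det A\,\det D\le\det B\,\det C$.</p>

```python
import random

# Spot-check det(A) det(A+X+Y) <= det(A+X) det(A+Y) for A positive definite
# and X, Y positive semidefinite, in dimension 3. (Sketch of mine; the
# inequality itself is the k = n case proved in the answer.)
random.seed(0)

def rand_matrix(n):
    return [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]

def gram(R):
    # R^T R is symmetric positive semidefinite.
    n = len(R)
    return [[sum(R[k][i] * R[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def add(*Ms):
    n = len(Ms[0])
    return [[sum(M[i][j] for M in Ms) for j in range(n)] for i in range(n)]

def det3(M):
    (a, b, c), (d, e, f), (g, h, i) = M
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

eye = [[float(i == j) for j in range(3)] for i in range(3)]

ok = True
for _ in range(200):
    A = add(gram(rand_matrix(3)), eye)   # positive definite
    X = gram(rand_matrix(3))             # positive semidefinite
    Y = gram(rand_matrix(3))
    lhs = det3(A) * det3(add(A, X, Y))
    rhs = det3(add(A, X)) * det3(add(A, Y))
    ok = ok and lhs <= rhs + 1e-6        # small slack for rounding
print(ok)
```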
1,873,180
<p>The final result should be $C(n) = \frac{1}{n+1}\binom{2n}{n}$, for reference.</p> <p>I've worked my way down to this expression in my derivation:</p> <p>$$C(n) = \frac{(1)(3)(5)(7)...(2n-1)}{(n+1)!} 2^n$$</p> <p>And I can see that if I multiply the numerator by $2n!$ I can convert that product chain into $(2n)!$ like so, also taking care to multiply the denominator all the same:</p> <p>$$C(n) = \frac{(2n)!}{(n+1)!(2n!)} 2^n =\frac{1}{n+1} \cdot \frac{(2n)!}{n!n!} 2^{n-1} = \frac{1}{n+1} \binom{2n}{n} 2^{n-1}$$</p> <p>Where did I go wrong? Why can't I get rid of that $2^{n-1}$?</p>
Michael Hardy
11,667
<p>Neither is strictly correct: the expression $\displaystyle\int f(x)\,dx$ is not really unambiguously defined as identifying some particular mathematical object except when the context makes it clear. When it has a precisely defined meaning, it's because of the context that it's precisely defined.</p> <p>If you mean "What do you get when you subtract an antiderivative of $f$ from another antiderivative of $f$?", then the answer is that you get a constant, or in some cases (where the domain is not connected) a piecewise constant.</p> <p>This problem comes up sometimes when integrating by parts. One has $$ \int \text{whatever} \,dx = \int u\,dv = uv - \int v\,du = (\text{some expression}) - 4\cdot\int\text{same thing}\,dx $$ where "same thing" mean the very same integral that is expressed on the extreme left above as $\displaystyle\int\text{whatever}\,dx$. So we add $\displaystyle 4\cdot\int\text{same thing}\,dx$ to both sides, getting $$ 5\int\text{whatever}\,dx = (\text{some expression}) + \text{constant}. $$ Here the $\text{“}\cdots+\text{constant''}$ cannot be dispensed with.</p> <p>This occurs in one commonplace method of integrating the cube of the secant function, and in things like $\displaystyle\int e^{ax}\cos(bx)\,dx$.</p>
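<p>For the last example mentioned, the self-referential trick produces the antiderivative $\int e^{ax}\cos(bx)\,dx = \frac{e^{ax}(a\cos bx + b\sin bx)}{a^2+b^2} + C$, which can be checked by numerical differentiation (a verification sketch of mine, not part of the answer).</p>

```python
import math

# Check numerically that F(x) = e^(ax) (a cos(bx) + b sin(bx)) / (a^2 + b^2)
# differentiates back to e^(ax) cos(bx), the closed form obtained by the
# self-referential integration-by-parts trick.
a, b = 2.0, 3.0

def F(x):
    return math.exp(a * x) * (a * math.cos(b * x) + b * math.sin(b * x)) / (a * a + b * b)

def f(x):
    return math.exp(a * x) * math.cos(b * x)

h = 1e-6
max_err = max(abs((F(x + h) - F(x - h)) / (2 * h) - f(x))
              for x in (-1.0, -0.3, 0.0, 0.7, 1.5))
print(max_err)  # small; limited by the finite-difference step
```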
630,838
<p>I was wondering if anyone knows any good references about Kähler and complex manifolds? I'm studying supergravity theories, and for the simplest N=1 supergravity we'll get these. Now in the course notes they're quite short about these complex manifolds. I was hoping someone of you guys might know a good (quite complete) book about the subject?</p> <p><strong>edit</strong></p> <blockquote> <p>I've also posted this on the <a href="https://physics.stackexchange.com/questions/92776/kahler-and-complex-manifolds">physics stackexchange</a>-site, hoping that maybe a string-theorist or supergravity-specialist might be able to provide some information. But from what I'm seeing in the posts here the answers are already very nice! A big thanks in advance, I'll be going through the sources somewhere in the end of this week or the beginning of next week!</p> </blockquote>
Wintermute
67,388
<p>Geometry, Topology, and Physics by Nakahara is a great reference with some good examples, but it does not have a ton of Kähler material in it; still, I have found it very helpful in my research. Something very complete is Lectures on Kähler Geometry by Moroianu. You can also find Ballmann's lectures on Kähler geometry for free on the internet.</p>
1,970,458
<p>Consider a stock that will pay out dividends over the next 3 years of $1.15, $1.80, and $2.35, respectively. The price of the stock will be $48.42 at time 3. The interest rate is 9%. What is the current price of the stock?</p>
Wolfy
217,910
<p>We have $D_1 = 1.15, D_2 = 1.8, D_3 = 2.35, P_3 = 48.42$ and $r = .09$. We know from the time value of money that $$P_0 = \frac{D_1 + P_1}{1+r} = \frac{D_1}{1+r} + \frac{D_2 + P_2}{(1+r)^2} = \frac{D_1}{1+r} + \frac{D_2}{(1+r)^2} + \frac{D_3 + P_3}{(1+r)^3}$$ Now just plug in what we have and you are done :)</p>
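<p>Plugging the numbers into this formula (a small Python sketch of my own, not part of the original answer) gives a current price of about $41.77.</p>

```python
# Discount each dividend and the terminal price back to time 0.
# (Sketch of mine; the numbers come from the question.)
dividends = [1.15, 1.80, 2.35]
terminal_price = 48.42
r = 0.09

price = sum(d / (1 + r) ** t for t, d in enumerate(dividends, start=1))
price += terminal_price / (1 + r) ** len(dividends)
print(round(price, 2))  # 41.77
```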
14,847
<p>I thought a simple Mathematica kerning machine (for adjusting the space between characters) would be interesting, but I'm having trouble with the locators. (There are a number of other questions related to this, and I've read the answers, but as yet without finding a solution, or understanding them that well.)</p> <pre><code> Manipulate[ LocatorPane[Dynamic[points], text = "Wolfram"; fonts = FE`Evaluate[FEPrivate`GetPopupList["MenuListFonts"]]; Column[{ Button["Export", Export["/tmp/t.png", g]], g = Graphics[{ MapIndexed[ Text[Style[#1, FontSize -&gt; fontsize, FontFamily -&gt; font], points[[First[#2]]]] &amp;, Characters[text]]}, ImageSize -&gt; 500] }]], {points, {Table[{ x, 0}, {x, 1, Length[Characters[text]]}]}, ControlType -&gt; None}, {fontsize , Table[x, {x, 96, 256, 12}]} , {font, fonts} ] </code></pre> <p>(The font menu gets populated once you use it.)</p> <p><img src="https://i.stack.imgur.com/T1aNl.png" alt="kerning machine"></p> <p>I want to be able to slide the letters right or left (but not up or down), but at the moment they don't move like they're supposed to.</p> <h2>The solution</h2> <p>Thanks to the fine answers, I've got something useful working. There are some kludges and hacks too, and some problems still to be ironed out (string length needs to be dynamic, 'canvas' needs resizing, and so on), but this is excellent for now. 
</p> <pre><code>With[{fonts = FE`Evaluate[FEPrivate`GetPopupList["MenuListFonts"]]}, DynamicModule[{kernedText, points}, points = Table[{x, 0}, {x, 1, Length[Characters[text]] + 10}]; Column[{ (* input *) InputField[Dynamic[text], String], (* main panel *) Panel@LocatorPane[ Dynamic[ points, (points = ReplacePart[#, {{_, 2} -&gt; 0}]) &amp;], Dynamic[kernedText = Graphics[{MapIndexed[ Text[Style[#1, FontSize -&gt; fontsize, FontFamily -&gt; font], points[[First[#2]]]] &amp;, Characters[text]]}, PlotRangePadding -&gt; 0, PlotRange -&gt; {{0, 1 + Length[Characters[text]]}, Automatic}, Background -&gt; None, ImageSize -&gt; 800]], Appearance -&gt; None], (* controls *) Row[ { PopupMenu[Dynamic[fontsize], Table[x, {x, 72, 416, 12}]], PopupMenu[Dynamic[font], fonts], Button["Export", Export[FileNameJoin[{$HomeDirectory, "Desktop", "KernedText.png"}], kernedText], ImageSize -&gt; {Full, Automatic}] }] }, Left]]] </code></pre> <p><img src="https://i.stack.imgur.com/5DKID.png" alt="final"></p> <p>I realised that the locator dots didn't need to be showing - just click on the characters. The result can be compared to the unkerned version:</p> <pre><code>Text[Style[text, 192, FontFamily -&gt; "Palatino"]] </code></pre> <p><img src="https://i.stack.imgur.com/lKUuh.png" alt="unkerned"></p> <p>For me, this area of Mathematica (Manipulate/Dynamic) is gradually becoming less confusing (but only gradually).</p>
jVincent
1,194
<p>First off, <code>Dynamic[expr]</code> redraws whenever anything that appears in <code>expr</code> changes. Think of <code>Manipulate</code> as being just a <code>Dynamic[code]</code> with some nice shortcuts to build controllers that can change things that appear in <code>code</code>. In your case, you have a <code>Dynamic[LocatorPane[(* some expression depending on points *)]]</code>. So whenever someone changes <code>points</code>, a new <code>LocatorPane</code> will be created and displayed right where the old one was, so you won't notice... except that the active controller loses focus, so you stop dragging the controller. Here is a very short demonstration of the problem:</p> <pre><code>points = {{0, 0}}; Dynamic@LocatorPane[Dynamic[points], Graphics[Point /@ points, PlotRange -&gt; 2]] </code></pre> <p>While just moving the <code>Dynamic</code> will cause only the graphic to be updated and solve the issue:</p> <pre><code>LocatorPane[Dynamic[points], Dynamic@Graphics[Point /@ points, PlotRange -&gt; 2]] </code></pre> <p>So how do we get around this problem in your case? I would suggest not using <code>LocatorPane</code> inside <code>Manipulate</code>, and either fully using only <code>Manipulate</code>, or going completely without it. Here is a possible solution without using <code>Manipulate</code>:</p> <pre><code>text = "Wolfram"; fontsize = 96; font = "BankGothic"; DynamicModule[{points = Table[{x, 0}, {x, 1, Length[Characters[text]]}]}, LocatorPane[Dynamic[points], Column[{ Button["Export", Export["/tmp/t.png", g]], Dynamic[g = Graphics[{MapIndexed[ Text[Style[#1, FontSize -&gt; fontsize], points[[First[#2]]]] &amp;, Characters[text]]}, ImageSize -&gt; 500]]}]] ] </code></pre>
14,847
<p>I thought a simple Mathematica kerning machine (for adjusting the space between characters) would be interesting, but I'm having trouble with the locators. (There are a number of other questions related to this, and I've read the answers, but as yet without finding a solution, or understanding them that well.)</p> <pre><code> Manipulate[ LocatorPane[Dynamic[points], text = "Wolfram"; fonts = FE`Evaluate[FEPrivate`GetPopupList["MenuListFonts"]]; Column[{ Button["Export", Export["/tmp/t.png", g]], g = Graphics[{ MapIndexed[ Text[Style[#1, FontSize -&gt; fontsize, FontFamily -&gt; font], points[[First[#2]]]] &amp;, Characters[text]]}, ImageSize -&gt; 500] }]], {points, {Table[{ x, 0}, {x, 1, Length[Characters[text]]}]}, ControlType -&gt; None}, {fontsize , Table[x, {x, 96, 256, 12}]} , {font, fonts} ] </code></pre> <p>(The font menu gets populated once you use it.)</p> <p><img src="https://i.stack.imgur.com/T1aNl.png" alt="kerning machine"></p> <p>I want to be able to slide the letters right or left (but not up or down), but at the moment they don't move like they're supposed to.</p> <h2>The solution</h2> <p>Thanks to the fine answers, I've got something useful working. There are some kludges and hacks too, and some problems still to be ironed out (string length needs to be dynamic, 'canvas' needs resizing, and so on), but this is excellent for now. 
</p> <pre><code>With[{fonts = FE`Evaluate[FEPrivate`GetPopupList["MenuListFonts"]]}, DynamicModule[{kernedText, points}, points = Table[{x, 0}, {x, 1, Length[Characters[text]] + 10}]; Column[{ (* input *) InputField[Dynamic[text], String], (* main panel *) Panel@LocatorPane[ Dynamic[ points, (points = ReplacePart[#, {{_, 2} -&gt; 0}]) &amp;], Dynamic[kernedText = Graphics[{MapIndexed[ Text[Style[#1, FontSize -&gt; fontsize, FontFamily -&gt; font], points[[First[#2]]]] &amp;, Characters[text]]}, PlotRangePadding -&gt; 0, PlotRange -&gt; {{0, 1 + Length[Characters[text]]}, Automatic}, Background -&gt; None, ImageSize -&gt; 800]], Appearance -&gt; None], (* controls *) Row[ { PopupMenu[Dynamic[fontsize], Table[x, {x, 72, 416, 12}]], PopupMenu[Dynamic[font], fonts], Button["Export", Export[FileNameJoin[{$HomeDirectory, "Desktop", "KernedText.png"}], kernedText], ImageSize -&gt; {Full, Automatic}] }] }, Left]]] </code></pre> <p><img src="https://i.stack.imgur.com/5DKID.png" alt="final"></p> <p>I realised that the locator dots didn't need to be showing - just click on the characters. The result can be compared to the unkerned version:</p> <pre><code>Text[Style[text, 192, FontFamily -&gt; "Palatino"]] </code></pre> <p><img src="https://i.stack.imgur.com/lKUuh.png" alt="unkerned"></p> <p>For me, this area of Mathematica (Manipulate/Dynamic) is gradually becoming less confusing (but only gradually).</p>
Michael E2
4,999
<p>I'm sure the OP has figured the problem out already, so I don't know why I'm bothering except that I think future users might appreciate a good <code>Manipulate</code>-based solution as an alternative, especially as an example of how to work with <code>Locators</code>.</p> <p>A few changes: The most important thing is that the <code>Locators</code> are constrained by making the minimum and maximum <code>y</code> coordinate the same (<code>0</code>). I used <code>BaseStyle</code> instead of styling each character. For a smoother start-up, I put the font list into the specification for <code>fonts</code>; wrapping it in <code>Dynamic</code> means that the list of fonts will not be stored in the <code>Manipulate</code> output cell, only the code for loading it; since <code>Manipulate</code> can't interpret code, I needed to specify the control type. I changed some of the sizes to scale with <code>fontsize</code>. I also made <code>text</code> and <code>g</code> local variables. The important thing about making <code>text</code> local is that <code>points</code> needs to be initialized after <code>text</code> is initialized by the front end, unless it already has been set. 
The way <code>points</code> might be already initialized is if the cell is copied and pasted, or the notebook reopened.</p> <pre><code>text0 = "Wolfram"; Manipulate[ g = Graphics[{MapIndexed[Text[#1, points[[First[#2]]]] &amp;, Characters[text]]}, ImageSize -&gt; 0.75 fontsize * (StringLength @ text + 1), PlotRange -&gt; {{0, StringLength @ text + 1}, {-0.7, 1.}}, BaseStyle -&gt; {FontSize -&gt; fontsize, FontFamily -&gt; font}], {{text, text0}, ControlType -&gt; None}, {g, ControlType -&gt; None}, {{points, Table[{x, 0}, {x, 1, StringLength @ text0}]}, {0, 0}, {StringLength @ text + 1, 0}, Locator}, {fontsize, Table[x, {x, 96, 256, 12}]}, {{font, "Times"}, Dynamic@FE`Evaluate[FEPrivate`GetPopupList["MenuListFonts"]], PopupMenu}, Button["Export", Export["/tmp/t.png", g]] ] </code></pre> <p>Here's a variation in which the characters themselves are the locators. Wrapping the characters in <code>Pane</code> aligns the text.</p> <pre><code>text0 = "Wolfram"; Manipulate[ g = LocatorPane[Dynamic@points, Dynamic[ Graphics[{}, ImageSize -&gt; 0.75 fontsize * (StringLength @ text + 1), PlotRange -&gt; {{0, StringLength@text + 1}, {-0.7, 1.}}] ], {{0, 0}, {StringLength @ text + 1, 0}}, Appearance -&gt; Pane /@ Characters[text], BaseStyle -&gt; {FontSize -&gt; fontsize, FontFamily -&gt; font} ], {{text, text0}, ControlType -&gt; None}, {g, ControlType -&gt; None}, {{points, Table[{x, 0}, {x, 1, StringLength @ text0}]}, None}, {fontsize, Table[x, {x, 96, 256, 12}]}, {{font, "Times"}, Dynamic@FE`Evaluate[FEPrivate`GetPopupList["MenuListFonts"]], PopupMenu}, Button["Export", Export["/tmp/t.png", g]] ] </code></pre> <p><img src="https://i.stack.imgur.com/bE4ob.png" alt="Mathematica graphics"></p> <hr> <p><em>Edit summary:</em> Apparently I had a definition hanging around that made it seem that the initialization of <code>points</code> was working properly in the previous, simpler version. I've fixed it. 
An alternative fix is to initialize <code>points</code> in the <code>Initialization</code> option, after <code>text</code> has been initialized in the front end as follows. Set <code>points</code> to be an empty list in its variable specification, and conditionally initialize it later in <code>Initialization</code>. If it's not conditional, the points will be reset to the beginning whenever the notebook is opened or a copy of the cell is pasted into a notebook (when the <code>Initialization</code> code will be executed).</p> <pre><code>{{points, {}}, ...}, ... Initialization :&gt; (If[Length[points] == 0, points = Table[{x, 0}, {x, 1, StringLength @ text}]]) </code></pre> <p>(For the ellipsis after <code>points</code>, fill in the locator code or <code>None</code> in each of the two <code>Manipulate</code>s above, as appropriate.)</p> <p>One could also dispense with the local variable <code>text</code> altogether, but I was playing with it as an <code>InputField</code>, in which case it is convenient for it to be local.</p>
424,209
<p>I am a Computer Science student. While going through some random maths topics I came across Chaos Theory. I wanted to know if there are any applications of it in CS. I tried searching on the internet about this but ended up only with <a href="https://security.stackexchange.com/questions/31000/does-chaos-theory-have-any-practical-application-in-computer-security">this</a> and <a href="http://www.slideshare.net/konakid/computer-science-student-uses-chaos-theory-to-create-new-routes" rel="nofollow noreferrer">this</a>. But are there any other domains of computer science where this theory could be applied ?</p> <p>Please help!</p>
nonlinearism
59,567
<p>I think the most fascinating account of chaos in computer science I have read comes from analyzing attempted solutions to NP problems. </p> <p>Here's a link that explains how the "hardness" of a Sudoku is related to the chaotic behavior of solution trajectories: <a href="http://www.nature.com/srep/2012/121011/srep00725/full/srep00725.html" rel="nofollow">http://www.nature.com/srep/2012/121011/srep00725/full/srep00725.html</a></p>
119
<p>One thing I could never wrap my head around is how <code>Flatten</code> works when provided with a matrix as the second argument, and the <em>Mathematica</em> help isn't particularly good on this one.</p> <p>Taken from the <a href="http://reference.wolfram.com/mathematica/ref/Flatten.html"><code>Flatten</code></a> <em>Mathematica</em> documentation:</p> <pre><code>Flatten[list, {{s11, s12, ...}, {s21, s22, ...}, ...}] </code></pre> <blockquote> <p>Flattens <code>list</code> by combining all levels $s_{ij}$ to make each level $i$ in the result.</p> </blockquote> <p>Could someone elaborate on what this actually means/does?</p>
Leonid Shifrin
81
<p>One convenient way to think of <code>Flatten</code> with the second argument is that it performs something like <code>Transpose</code> for ragged (irregular) lists. Here is a simple example:</p> <pre><code>In[63]:= Flatten[{{1,2,3},{4,5},{6,7},{8,9,10}},{{2},{1}}] Out[63]= {{1,4,6,8},{2,5,7,9},{3,10}} </code></pre> <p>What happens is that elements which constituted level <code>1</code> in the original list are now constituents at level <code>2</code> in the result, and vice versa. This is exactly what <code>Transpose</code> does, but done for irregular lists. Note, however, that some information about positions is lost here, so we cannot directly invert the operation:</p> <pre><code>In[65]:= Flatten[{{1,4,6,8},{2,5,7,9},{3,10}},{{2},{1}}] Out[65]= {{1,2,3},{4,5,10},{6,7},{8,9}} </code></pre> <p>To have it reversed correctly, we'd have to do something like this:</p> <pre><code>In[67]:= Flatten/@Flatten[{{1,4,6,8},{2,5,7,9},{3,{},{},10}},{{2},{1}}] Out[67]= {{1,2,3},{4,5},{6,7},{8,9,10}} </code></pre> <p>A more interesting example is when we have deeper nesting:</p> <pre><code>In[68]:= Flatten[{{{1,2,3},{4,5}},{{6,7},{8,9,10}}},{{2},{1},{3}}] Out[68]= {{{1,2,3},{6,7}},{{4,5},{8,9,10}}} </code></pre> <p>Here again, we can see that <code>Flatten</code> effectively worked like a (generalized) <code>Transpose</code>, interchanging pieces at the first 2 levels. 
The following will be harder to understand:</p> <pre><code>In[69]:= Flatten[{{{1, 2, 3}, {4, 5}}, {{6, 7}, {8, 9, 10}}}, {{3}, {1}, {2}}] Out[69]= {{{1, 4}, {6, 8}}, {{2, 5}, {7, 9}}, {{3}, {10}}} </code></pre> <p>The following image illustrates this generalized transpose:</p> <p><img src="https://i.stack.imgur.com/9ORcH.png" alt="Illustration of cyclic generalized transpose"></p> <p>We may do it in two consecutive steps:</p> <pre><code>In[72]:= step1 = Flatten[{{{1,2,3},{4,5}},{{6,7},{8,9,10}}},{{1},{3},{2}}] Out[72]= {{{1,4},{2,5},{3}},{{6,8},{7,9},{10}}} In[73]:= step2 = Flatten[step1,{{2},{1},{3}}] Out[73]= {{{1,4},{6,8}},{{2,5},{7,9}},{{3},{10}}} </code></pre> <p>Since the permutation <code>{3,1,2}</code> can be obtained as <code>{1,3,2}</code> followed by <code>{2,1,3}</code>. Another way to see how it works is to use numbers which indicate the position in the list structure:</p> <pre><code>Flatten[{{{111, 112, 113}, {121, 122}}, {{211, 212}, {221, 222, 223}}}, {{3}, {1}, {2}}] (* ==&gt; {{{111, 121}, {211, 221}}, {{112, 122}, {212, 222}}, {{113}, {223}}} *) </code></pre> <p>From this, one can see that in the outermost list (first level), the third index (corresponding the third level of the original list) grows, in each member list (second level) the first element grows per element (corresponding to the first level of the original list), and finally in the innermost (third level) lists, the second index grows, corresponding to the second level in the original list. Generally, if the k-th element of the list passed as second element is <code>{n}</code>, growing the k-th index in the resulting list structure corresponds to increasing the n-th index in the original structure.</p> <p>Finally, one can combine several levels to effectively flatten the sub-levels, like so:</p> <pre><code>In[74]:= Flatten[{{{1,2,3},{4,5}},{{6,7},{8,9,10}}},{{2},{1,3}}] Out[74]= {{1,2,3,6,7},{4,5,8,9,10}} </code></pre>
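<p>For readers without Mathematica, the first example's ragged transpose can be imitated in Python with <code>itertools.zip_longest</code> (an analogy of mine, not anything from the answer): pad the short rows with a sentinel, transpose, then drop the sentinels.</p>

```python
from itertools import zip_longest

# Transpose a ragged list, dropping the holes left by missing elements,
# mimicking Flatten[list, {{2}, {1}}]. (Python analogy of mine.)
_MISSING = object()

def ragged_transpose(lists):
    return [[x for x in column if x is not _MISSING]
            for column in zip_longest(*lists, fillvalue=_MISSING)]

result = ragged_transpose([[1, 2, 3], [4, 5], [6, 7], [8, 9, 10]])
print(result)  # [[1, 4, 6, 8], [2, 5, 7, 9], [3, 10]]
```

<p>As in the answer, the positional information is lost, so the operation is not directly invertible.</p>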
119
<p>One thing I could never wrap my head around is how <code>Flatten</code> works when provided with a matrix as the second argument, and the <em>Mathematica</em> help isn't particularly good on this one.</p> <p>Taken from the <a href="http://reference.wolfram.com/mathematica/ref/Flatten.html"><code>Flatten</code></a> <em>Mathematica</em> documentation:</p> <pre><code>Flatten[list, {{s11, s12, ...}, {s21, s22, ...}, ...}] </code></pre> <blockquote> <p>Flattens <code>list</code> by combining all levels $s_{ij}$ to make each level $i$ in the result.</p> </blockquote> <p>Could someone elaborate on what this actually means/does?</p>
ConvexMartian
10,492
<p>I learned a lot from WReach's and Leonid's answers and I'd like to make a small contribution: </p> <p>It seems worth emphasizing that the primary intention of the list-valued second argument of <code>Flatten</code> is merely to flatten certain levels of lists (as WReach mentions in his <strong>List Flattening</strong> section). Using <code>Flatten</code> as a ragged <code>Transpose</code> seems like a side-effect of this primary design, in my opinion.</p> <p>For example, yesterday I needed to transform this list</p> <pre><code>lists = { {{{1, 0}, {1, 1}}, {{2, 0}, {2, 4}}, {{3, 0}}}, {{{1, 2}, {1, 3}}, {{2, Sqrt[2]}}, {{3, 4}}} (*, more lists... *) }; </code></pre> <p><a href="https://i.stack.imgur.com/pEpAe.png" rel="noreferrer"><img src="https://i.stack.imgur.com/pEpAe.png" alt="treeform1"></a></p> <p>into this one:</p> <pre><code>list2 = { {{1, 0}, {1, 1}, {2, 0}, {2, 4}, {3, 0}}, {{1, 2}, {1, 3}, {2, Sqrt[2]}, {3, 4}} (*, more lists... *) } </code></pre> <p><a href="https://i.stack.imgur.com/djTxI.png" rel="noreferrer"><img src="https://i.stack.imgur.com/djTxI.png" alt="treeform2"></a></p> <p>That is, I needed to crush the 2nd and 3rd list-levels together.</p> <p>I did it with</p> <pre><code>list2 = Flatten[lists, {{1}, {2, 3}}]; </code></pre>
119
<p>One thing I could never wrap my head around is how <code>Flatten</code> works when provided with a matrix as the second argument, and the <em>Mathematica</em> help isn't particularly good on this one.</p> <p>Taken from the <a href="http://reference.wolfram.com/mathematica/ref/Flatten.html"><code>Flatten</code></a> <em>Mathematica</em> documentation:</p> <pre><code>Flatten[list, {{s11, s12, ...}, {s21, s22, ...}, ...}] </code></pre> <blockquote> <p>Flattens <code>list</code> by combining all levels $s_{ij}$ to make each level $i$ in the result.</p> </blockquote> <p>Could someone elaborate on what this actually means/does?</p>
Wjx
6,084
<p>This is an old question, but one asked by a <strong>lot</strong> of people. Today, when I was trying to explain how this works, I came across a quite clear explanation, so I think sharing it here would be helpful for future audiences.</p> <h2>What do indices mean?</h2> <p>First let's make clear what an <strong>index</strong> is: in Mathematica every expression is a tree. For example, let's look at a list:</p> <pre><code>TreeForm@{{1,2},{3,4}} </code></pre> <p><a href="https://i.stack.imgur.com/SOOY3m.png" rel="noreferrer"><img src="https://i.stack.imgur.com/SOOY3m.png" alt="illus1"></a></p> <p>How do you navigate in a tree?</p> <p>Simple! You start from the root and at each crossing choose which way to go. For example, here if you want to reach <code>2</code>, you begin by choosing the <strong>first</strong> path, then choose the <strong>second</strong> path. Let's write it out as <code>{1,2}</code>, which is just the index of element <code>2</code> in this expression.</p> <h2>How to understand <code>Flatten</code>?</h2> <p>Here consider a simple question: if I don't provide you with a complete expression, but instead give you all the elements and their indices, how do you reconstruct the original expression? For example, here I give you:</p> <pre><code>{&lt;|"index" -&gt; {1, 1}, "value" -&gt; 1|&gt;, &lt;|"index" -&gt; {1, 2}, "value" -&gt; 2|&gt;, &lt;|"index" -&gt; {2, 1}, "value" -&gt; 3|&gt;, &lt;|"index" -&gt; {2, 2}, "value" -&gt; 4|&gt;} </code></pre> <p>and tell you all heads are <code>List</code>, so what's the original expression?</p> <p>Well, surely you can reconstruct the original expression as <code>{{1,2},{3,4}}</code>, but how? You could list the following steps:</p> <ol> <li>First we look at the first element of each index and sort and gather by it. 
Then we know that the <strong>first</strong> element of the whole expression should contain the <strong>first two</strong> elements in the original list...</li> <li>Then we continue to look at the second element of each index, doing the same...</li> <li>Finally we get the original list as <code>{{1,2},{3,4}}</code>.</li> </ol> <p>Well, that's reasonable! So what if I tell you, no, you should first sort and gather by the second element of the index and then gather by the first element of the index? Or I say we don't gather them twice, we just sort by both elements but give the first element higher priority?</p> <p>Well, you would probably get the following two lists respectively, right?</p> <ol> <li><code>{{1,3},{2,4}}</code></li> <li><code>{1,2,3,4}</code></li> </ol> <p>Well, check by yourself: <code>Flatten[{{1,2},{3,4}},{{2},{1}}]</code> and <code>Flatten[{{1,2},{3,4}},{{1,2}}]</code> do the same!</p> <p>So how do you understand the second argument of <code>Flatten</code>?</p> <ol> <li>Each list element inside the main list, for example <code>{1,2}</code>, means you should <strong>GATHER</strong> all the lists according to these elements of the index, in other words these <strong>levels</strong>.</li> <li>The order inside a list element represents how you <strong>SORT</strong> the elements gathered inside a list in the previous step. For example, <code>{2,1}</code> means the position at the second level has higher priority than the position at the first level.</li> </ol> <h2>Examples</h2> <p>Now let's have some practice to become familiar with the previous rules.</p> <h3>1. <code>Transpose</code></h3> <p>The goal of <code>Transpose</code> on a simple m*n matrix is to make $A_{i,j} \rightarrow A^T_{j,i}$. But we can consider it another way: originally we sort the elements by their <code>i</code> index first, then by their <code>j</code> index; now all we need to do is sort them by the <code>j</code> index first, then by <code>i</code>! 
So the code becomes:</p> <pre><code>Flatten[mat,{{2},{1}}] </code></pre> <p>Simple, right?</p> <h3>2. Traditional <code>Flatten</code></h3> <p>The goal of traditional flatten on a simple m*n matrix is to create a 1D array instead of a 2D matrix, for example: <code>Flatten[{{1,2},{3,4}}]</code> returns <code>{1,2,3,4}</code>. This means that we don't <strong>gather</strong> elements this time, we only <strong>sort</strong> them, first by their first index and then by the second:</p> <pre><code>Flatten[mat,{{1,2}}] </code></pre> <h3>3. <code>ArrayFlatten</code></h3> <p>Let's discuss the simplest case of <code>ArrayFlatten</code>. Here we have a 4D list:</p> <pre><code>{{{{1,2},{5,6}},{{3,4},{7,8}}},{{{9,10},{13,14}},{{11,12},{15,16}}}} </code></pre> <p>so how can we convert it into a 2D list?</p> <p>$\left( \begin{array}{cc} \left( \begin{array}{cc} 1 &amp; 2 \\ 5 &amp; 6 \\ \end{array} \right) &amp; \left( \begin{array}{cc} 3 &amp; 4 \\ 7 &amp; 8 \\ \end{array} \right) \\ \left( \begin{array}{cc} 9 &amp; 10 \\ 13 &amp; 14 \\ \end{array} \right) &amp; \left( \begin{array}{cc} 11 &amp; 12 \\ 15 &amp; 16 \\ \end{array} \right) \\ \end{array} \right) \rightarrow \left( \begin{array}{cccc} 1 &amp; 2 &amp; 3 &amp; 4 \\ 5 &amp; 6 &amp; 7 &amp; 8 \\ 9 &amp; 10 &amp; 11 &amp; 12 \\ 13 &amp; 14 &amp; 15 &amp; 16 \\ \end{array} \right)$</p> <p>Well, this is simple too: we need to gather by the original first and third level indices, giving the first index higher priority in sorting. The same goes for the second and fourth levels:</p> <pre><code>Flatten[mat,{{1,3},{2,4}}] </code></pre> <h3>4. 
"Resize" an image</h3> <p>Now we have an image, for example:</p> <pre><code>img=Image@RandomReal[1,{10,10}] </code></pre> <p>But it's definitely too small for us to view, so we want to make it bigger by expanding each pixel into a 10*10 block of pixels.</p> <p>First we might try:</p> <pre><code>ConstantArray[ImageData@img,{10,10}] </code></pre> <p>But it returns a 4D matrix with dimensions {10,10,10,10}. So we should <code>Flatten</code> it. This time we want the third level to take higher priority than the first one, so a minor adjustment will work:</p> <pre><code>Image@Flatten[ConstantArray[ImageData@img,{10,10}],{{3,1},{4,2}}] </code></pre> <p>A comparison:</p> <p><a href="https://i.stack.imgur.com/4x5qw.png" rel="noreferrer"><img src="https://i.stack.imgur.com/4x5qw.png" alt="illus2"></a></p> <p>Hope this helps!</p>
864,212
<p>While trying to look up examples of PIDs that are not Euclidean domains, I found a statement (without reference) on the <a href="http://en.wikipedia.org/wiki/Euclidean_domain">Euclidean domain</a> page of Wikipedia that</p> <p>$$\mathbb{R}[X,Y]/(X^2+Y^2+1)$$</p> <p>is such a ring. After a good deal of searching, I have not been able to find any other (online) reference to this ring.</p> <p>Can anyone confirm this result? Is there a reference for it (paper, textbook or website)?</p>
user26857
121,097
<blockquote> <p>In P. Samuel, <em>Anneaux factoriels</em>, pages 36-37, it's proved that $A=\mathbb R[X,Y]/(X^2+Y^2+1)$ is a UFD. </p> </blockquote> <p>In the following we denote by $x,y$ the residue classes of $X,Y$ modulo $(X^2+Y^2+1)$. Thus $A=\mathbb R[x,y]$ with $x^2+y^2+1=0$.</p> <blockquote> <p><em>Lemma.</em> The prime ideals of $A$ are of the form $(ax+by+c)$ with $(a,b)\neq (0,0)$. </p> </blockquote> <p><em>Proof.</em> It's not difficult to see that these elements are prime: if $p=ax+by+c$ with $(a,b)\neq (0,0)$, then $A/pA\simeq\mathbb C$.<br> On the other hand, let $\mathfrak p$ be a non-zero prime ideal of $A$. Since $\dim A=1$, necessarily $\mathfrak p$ is maximal, so $A/\mathfrak p$ is a field. It's enough to show that $\mathfrak p$ contains an element of the form $ax+by+c$ with $(a,b)\neq (0,0)$. Note that $A/\mathfrak p=\mathbb R[\hat x,\hat y]$ and $\hat x,\hat y$ are algebraic over $\mathbb R$. Thus we have an algebraic field extension $\mathbb R\subset A/\mathfrak p$, and therefore $[A/\mathfrak p:\mathbb R]\le2$. In particular, the elements $\hat 1,\hat x,\hat y$ of $A/\mathfrak p$ are linearly dependent over $\mathbb R$. </p> <p>Let $S\subset A$ be the multiplicative set generated by all prime elements $ax+by+c$ with $(a,b)\neq (0,0)$. The ring of fractions $S^{-1}A$ has no non-zero prime ideals, and therefore $S^{-1}A$ is a field, hence a UFD. Now we can apply <a href="http://stacks.math.columbia.edu/tag/0AFU" rel="nofollow noreferrer">Nagata's criterion for factoriality</a> to conclude that $A$ is a UFD. </p> <blockquote> <p>It's easily seen that $A$ is also a PID (use <a href="https://math.stackexchange.com/questions/78006/prove-that-a-ufd-r-is-a-pid-if-and-only-if-every-nonzero-prime-ideal-in-r-is">this result</a>.) </p> </blockquote> <p>Let's prove that $A$ is not Euclidean.</p> <blockquote> <p><em>Lemma.</em> Let $A$ be a Euclidean domain. 
Then there is $p\in A-\{0\}$ prime such that $\pi(A^{\times})=(A/pA)^{\times}$, where $\pi:A\to A/pA$ is the canonical surjection.</p> </blockquote> <p><em>Proof.</em> If one considers $p$ a non-zero, non-invertible element with $\delta(p)$ minimal (here $\delta$ is a Euclidean function), then $p$ is prime, and for $\hat a\in A/pA$ invertible there is $u\in A$ invertible such that $u-a\in pA$ (write $a=px+u$ with $\delta(u)&lt;\delta(p)$ and notice that $u$ is not $0$, since $\hat a$ is invertible, and that $u$ is necessarily invertible, by the minimality of $\delta(p)$), that is, $\pi(A^{\times})=(A/pA)^{\times}$.</p> <p>In our case $A^{\times}=\mathbb R^{\times}$. Since $A/pA\simeq\mathbb C$ for any prime $p\in A$, we have $(A/pA)^{\times}\simeq\mathbb C^{\times}$. If we assume that $A$ is Euclidean, then we get a surjective group homomorphism $\mathbb R^{\times}\to\mathbb C^{\times}$ which is also injective (see @zcn's comment below), a <a href="https://math.stackexchange.com/questions/620495/prove-that-the-multiplicative-groups-mathbbr-0-and-mathbbc-0">contradiction</a>. </p>
3,520,354
<p>In the problem <span class="math-container">$\frac{8.01-7.50}{3.002}$</span></p> <p>Why would the answer be <span class="math-container">$0.17$</span> and not <span class="math-container">$0.170$</span>? My least amount of <em>sig figs</em> is <span class="math-container">$3$</span> in the original equation. The only thing I can come up with is in the intermediate step.<span class="math-container">$8.01-7.50= 0.51$</span> exactly, which only has <span class="math-container">$2$</span> <em>significant figures</em>. Does the intermediate step really count in determining significant figures? Thank you. :)</p>
Community
-1
<p>It is often regarded as good practice to give <span class="math-container">$1$</span> fewer sig. fig. than in the given numbers. So you are right: the least number of sig figs in the original equation is <span class="math-container">$3$</span>, therefore give <span class="math-container">$2$</span> in the final answer.</p> <p>(This has nothing to do with the intermediate step you mention.) </p> <p>To see why this is regarded as best practice, you could look at what the original numbers might mean. For example they might have been rounded from <span class="math-container">$8.014,7.495,3.0015$</span>.</p> <p>Your calculation with these numbers gives <span class="math-container">$0.173$</span> to <span class="math-container">$3$</span> sig figs. So <span class="math-container">$0.17$</span> to 2 sig figs gives a more honest degree of accuracy.</p>
3,197,046
<p>I'm interested in <span class="math-container">${\bf integer}$</span> solutions of </p> <p><span class="math-container">$$abcd+1=(ecd-c-d)(fab-a-b)$$</span></p> <p>subject to <span class="math-container">${\bf a,b,c,d \geq 2}$</span>, and <span class="math-container">${\bf e,f \geq 1}$</span>. </p> <p><span class="math-container">${\bf Questions:}$</span> Are there finitely many solutions? If no, is there a nice infinite family of solutions?</p>
Community
-1
<p><span class="math-container">$1=(f-1)abcd-fabc-fabd-acd+ac+ad-bcd+bc+bd$</span></p> <p>This can work only if an odd number of odd terms exists. That takes at least one even variable (not just term). </p> <p>Mod 3: when this is turned into an addition of possibly negative equivalents, either the 2 mod 3 terms need to win by a number of terms that is 2 mod 3, or the 1 mod 3 terms need to win by a number of terms that is 1 mod 3. Terms that are 0 mod 3 have no effect, except forcing constraints on the distribution of the remaining terms. Since 9 is not 1 or 2 mod 3, at least one term needs to be 0 mod 3. The remaining terms can split 3-5 for a 2 mod 3 win, or 7 terms can be non-zero mod 3 with a 4-3 split for a 1 mod 3 win. 6 non-zero terms have a 2-4, 2 mod 3 win. 5 has a 3-2, 1 mod 3 win. 4 has a 2 mod 3 win of 1-3. 3 has a 1 mod 3 win of 2-1. 2 has a 0-2 win for 2 mod 3. 1 has a 1-0 win for 1 mod 3.</p> <p>You can do the same analysis mod 4, etc., then use CRT to combine them.</p> <p><strong>Edit:</strong> 9 terms actually has a 2-7 split.</p>
226,097
<p>I am having a problem with the following exercise. Can someone help me please.</p> <p>Find all functions $f$ for which $f'(x)=f(x)+\int_{0}^1 f(t)dt$</p> <p>Thank you in advance</p>
Michael Hardy
11,667
<p>$$ \frac{df}{dx} = f + c $$ $$ \frac{df}{f+c} = dx $$ (provided $f+c\ne 0$). $$ \log_e|f+c| = x + \text{another constant} $$ $$ |f+c| = e^{x+\text{other constant}} = (e^x\cdot\text{positive constant}) $$ $$ f+c = (e^x\cdot\text{some nonzero constant}) $$ But since this was "provided $f+c\ne0$", we need to check whether $f+c=0$ is also a solution, and substitution shows that it is.</p> <p>So the general solution of the differential equation is $$ f(x) = ke^x - c. $$ We want $c$ to be $$\int_0^1 f(t)\,dt = \int_0^1 (ke^t - c)\,dt = ke-k-c,$$ so $2c=k(e-1)$.</p> <p>Finally, we've got $$ f(x) = ke^x -\frac{k(e-1)}{2}. $$</p>
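<p>As a quick sanity check (mine, not part of the original answer), substitute $f(x)=ke^x-\frac{k(e-1)}{2}$ back into the equation. First, $$\int_0^1 f(t)\,dt = ke-k-\frac{k(e-1)}{2}=\frac{k(e-1)}{2},$$ and therefore $$f(x)+\int_0^1 f(t)\,dt = ke^x-\frac{k(e-1)}{2}+\frac{k(e-1)}{2}=ke^x=f'(x),$$ as required.</p>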
396,085
<p>The length of three medians of a triangle are $9$,$12$ and $15$cm.The area (in sq. cm) of the triangle is</p> <p>a) $48$</p> <p>b) $144$</p> <p>c) $24$</p> <p>d) $72$</p> <p>I don't want whole solution just give me the hint how can I solve it.Thanks.</p>
newzad
76,526
<p>You know that the medians divide a triangle into 6 pieces of equal area. If you find one of them, multiplying by 6 gives you the area of the whole triangle. Let's denote one such area by $S$; now see the figure: <img src="https://i.stack.imgur.com/SqjxT.png" alt="enter image description here"></p> <p>I guess you saw the right triangle.</p>
396,085
<p>The length of three medians of a triangle are $9$,$12$ and $15$cm.The area (in sq. cm) of the triangle is</p> <p>a) $48$</p> <p>b) $144$</p> <p>c) $24$</p> <p>d) $72$</p> <p>I don't want whole solution just give me the hint how can I solve it.Thanks.</p>
Miikash Rainag
470,654
<p>In this type of question, the given medians always form a Pythagorean triple (a right triangle). From this triple the area of the triangle can be found easily: A = 4/3 × {area of the right triangle formed by the triple}.</p> <p>As applied to your question: A = 4/3 × {0.5×(9×12)} = 72.</p>
396,085
<p>The length of three medians of a triangle are $9$,$12$ and $15$cm.The area (in sq. cm) of the triangle is</p> <p>a) $48$</p> <p>b) $144$</p> <p>c) $24$</p> <p>d) $72$</p> <p>I don't want whole solution just give me the hint how can I solve it.Thanks.</p>
Piquito
219,998
<p>There is a formula giving the area of a triangle as a function of its medians. It is <span class="math-container">$$A=\frac13\sqrt{2\alpha^2\beta^2+2\beta^2\gamma^2+2\gamma^2\alpha^2-\alpha^4-\beta^4-\gamma^4}$$</span> where <span class="math-container">$\alpha,\beta$</span> and <span class="math-container">$\gamma$</span> are the medians.</p>
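<p>Plugging in the medians from the question, <span class="math-container">$\alpha=9$</span>, <span class="math-container">$\beta=12$</span>, <span class="math-container">$\gamma=15$</span> (a worked check of my own): <span class="math-container">$$2\cdot81\cdot144+2\cdot144\cdot225+2\cdot225\cdot81-81^2-144^2-225^2 = 23328+64800+36450-77922 = 46656,$$</span> and <span class="math-container">$A=\frac13\sqrt{46656}=\frac{216}{3}=72$</span>, matching option d).</p>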
2,275,785
<p>I asked a similar question last night asking for an explanation of the statement, however I was unable to find how to prove such a statement, so I have a proof, however I think it is wrong, so I'm just asking for it to be checked and if it is, for it to be corrected, thanks! </p> <p><strong>Question</strong></p> <p>Describe the following set, and prove your answer correct. (Here brackets denote intervals on $\mathbb{R}$.)</p> <p>$\bigcup_{i=0}^{\infty}[i, i+2]$</p> <p><strong>Working</strong></p> <p>$[i,i+2]=${$x\in\mathbb{R}|i\leq x\leq i+2$}</p> <p>Thus, with the union we have {$x\in\mathbb{R}|i\leq x\leq i+2$}$=[0, \infty)$</p> <p>Now, note that $\bigcup_{i=0}^{\infty}[i, i+2]=[0, 2]\cup [1, 3]\cup [2,4]\cup...\cup[k, k+2]\cup...$ for some $k\in\mathbb{R}$</p> <p>Consider $x \notin [0, \infty)$ then $|x|&gt;[0, \infty)$, so $\exists$ some $i\in\mathbb{N}$ such that $i&lt;|x|$. Thus, $x\notin (0, \infty]$</p> <p>$\therefore x$ is not the union</p> <p>Thus, we have shown the union of $[i, i+2]$ is on the interval $[0, \infty)$ so;</p> <p>$\bigcup_{i=0}^{\infty}[i, i+2]=[0, \infty)$</p>
helloworld112358
300,021
<p>It looks like you have all the right ideas here. Your notation is somewhat confusing in some spots, and you perhaps mention more than is necessary. Here is a short summary:</p> <p>Suppose $x\ge 0$. Then for some $i\in\mathbb{Z}_{\ge0}$, we have $x\in [i,i+2]$, so $x\in\bigcup_{i=0}^\infty[i,i+2]$. Suppose $x&lt;0$. Then for all $i\ge 0$, we have $x&lt;i$, so $x\not\in[i,i+2]$, and thus $x\not\in\bigcup_{i=0}^\infty[i,i+2]$. Thus, we have shown $$\bigcup_{i=0}^\infty[i,i+2]=[0,\infty).$$</p>
660,899
<p>find the unit normal $\bf \hat{N}$ of</p> <p>$${\bf r}=6 \mathrm{e}^{-14 t}\cos(t){\bf i}+6 \mathrm{e}^{-14 t}\sin(t){\bf j}$$</p> <p>The answer should be in vector form. Use t as parameter. Write $e^x$ for exponentials.</p> <p>Have been working with this a long time now but cant get the right answer. My answer is $$(-((e^{-14t})(\cos(t)-14\sin(t)))/((\sqrt{12})\sqrt{e^{-28t}}), (-((e^{-14t})(\sin(t)+14\cos(t)))/((\sqrt{12})\sqrt{(e^{-28t}})),0)$$ but it aint right. Thx for help!</p>
leonbloy
312
<p>You can apply the <a href="http://en.wikipedia.org/wiki/Law_of_total_expectation" rel="nofollow">law of total expectation</a> (or "tower rule"), which says $E(Y) = E(E(Y\mid X))$</p> <p>In your case, you know that $E(Y\mid X)= a X +b$. Hence $E(Y)=E(a X+b) = a \mu_x +b$</p>
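<p>A small numerical illustration of the tower rule (my own example, with made-up values <span class="math-container">$a=2$</span>, <span class="math-container">$b=3$</span> and <span class="math-container">$X$</span> uniform on <span class="math-container">$\{0,\dots,9\}$</span>):</p>

```python
# X uniform on {0,...,9}; the conditional mean is E(Y|X=x) = a*x + b
xs = range(10)
a, b = 2.0, 3.0

mu_x = sum(xs) / len(xs)                     # E(X) = 4.5
e_y  = sum(a * x + b for x in xs) / len(xs)  # E(Y) = E(E(Y|X))

print(mu_x, e_y, a * mu_x + b)  # 4.5 12.0 12.0
```

<p>The averaged conditional mean agrees with <span class="math-container">$a\mu_x+b$</span>, exactly as the tower rule predicts.</p>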
2,745,884
<p>If random variables $X$ and $Y$ are independent and $X$ and $Z$ are independent, are $X$ and $Y \cup Z$ independent?</p>
Sally G
156,064
<p>No. The events $A_{i,j}$ that persons $i$ and $j$ have the same birthday are pairwise independent, so $A_{1,2}$ and $A_{2,3}$ are independent, as are $A_{1,2}$ and $A_{1,3}$, but $A_{1,2}$, $A_{2,3}$ and $A_{3,1}$ are clearly not mutually independent, as person 1 having the same birthday as person 2 and person 3 having the same birthday as person 1 implies that persons 2 and 3 have the same birthday.</p>
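<p>The pairwise-but-not-mutual independence can be checked with a short exact computation (a sketch of my own, assuming 365 equally likely birthdays):</p>

```python
from fractions import Fraction

d = 365
p_match = Fraction(1, d)          # P(A_{i,j}) for any single pair

# Two overlapping pairs, e.g. A_{1,2} and A_{1,3}: persons 2 and 3
# independently hit person 1's birthday, so the joint probability
# is 1/d^2, which equals the product of the marginals.
p_two = Fraction(1, d * d)

# All three events together: A_{1,2} and A_{1,3} already force A_{2,3},
# so the triple intersection also has probability 1/d^2, not 1/d^3.
p_three = Fraction(1, d * d)

print(p_two == p_match * p_match)   # True  (pairwise independent)
print(p_three == p_match ** 3)      # False (not mutually independent)
```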
34,049
<p>A person has a sheets of metal of a fixed size.</p> <p>They are required to cut parts from the sheets of metal. </p> <p>It's desireable to waste as little metal as possible. </p> <p>Assume they have sufficient requirements before making the first cut to more than use one sheet of metal</p> <p>What is the name of the branch of math which is involved in optimizing the decision on how to do the cuts ?</p>
Henry
6,460
<p>Depending on the precise details of the question, this looks like a 2-dimensional <a href="http://en.wikipedia.org/wiki/Cutting_stock_problem" rel="nofollow">cutting stock problem</a> or <a href="http://en.wikipedia.org/wiki/Packing_problem" rel="nofollow">packing problem</a> </p>
4,058,600
<p>Please pardon the elementary question, for some reason I'm not grocking why all possible poker hand combinations are equally probable, as all textbooks and websites say. Just intuitively I would think getting 4 of a number is much more improbably than getting 1 of each number, if I were to draw 4 cards. For example, ignoring order, to get 4 of a single number there are only <span class="math-container">$4 \choose 4$</span> distinct possibilities, whereas for 1 of each number I would have <span class="math-container">${4 \choose 1}^4$</span> distinct possibilities.</p>
saulspatz
235,128
<p>Yes, that's true, but they mean that any particular hand of 5 cards has the same probability as any other hand of 5 cards. Once you start talking about the probability of a pair or four of a kind, you're talking about the probability of getting one of a number of hands. To put it another way, the probability of drawing a royal flush in spades is exactly the same as the probability of drawing the 2,3 of diamonds, the 6,8 of clubs, and the Jack of hearts.</p>
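<p>To make the "equally probable" statement concrete (a quick combinatorial check of my own):</p>

```python
from math import comb

total_hands = comb(52, 5)   # number of distinct 5-card hands
print(total_hands)          # 2598960

# Any *fixed* set of 5 cards -- the royal flush in spades, or the
# 2,3 of diamonds, 6,8 of clubs and jack of hearts -- is exactly one
# of these equally likely hands:
p_specific = 1 / total_hands
print(p_specific)           # the same for every particular hand
```

<p>Categories like "four of a kind" differ in probability only because they contain different <em>numbers</em> of these equally likely hands.</p>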
4,058,600
<p>Please pardon the elementary question, for some reason I'm not grocking why all possible poker hand combinations are equally probable, as all textbooks and websites say. Just intuitively I would think getting 4 of a number is much more improbably than getting 1 of each number, if I were to draw 4 cards. For example, ignoring order, to get 4 of a single number there are only <span class="math-container">$4 \choose 4$</span> distinct possibilities, whereas for 1 of each number I would have <span class="math-container">${4 \choose 1}^4$</span> distinct possibilities.</p>
D. G.
581,400
<p>This is just like flipping a coin. You are just as likely to get exactly <span class="math-container">$$HHTHTTH$$</span> as you are to get <span class="math-container">$$HHHHHHH$$</span> You are intuitively grouping poker hands into their categories, and you are right that for example four of a kind is less likely than high card. Notice for example, that the <a href="https://en.wikipedia.org/wiki/List_of_poker_hands" rel="nofollow noreferrer">wikipedia page</a> is careful to distinguish poker hands from &quot;hand-ranking categories&quot;.</p>
698,743
<blockquote> <p>Let the real coefficient polynomials $$f(x)=a_{n}x^n+a_{n-1}x^{n-1}+\cdots+a_{1}x+a_{0}$$ $$g(x)=b_{m}x^m+b_{m-1}x^{m-1}+\cdots+b_{1}x+b_{0}$$ where $a_{n}b_{m}\neq 0,n\ge 1,m\ge 1$, and let $$g_{t}(x)=b_{m}x^m+(b_{m-1}+t)x^{m-1}+\cdots+(b_{1}+t^{m-1})x+(b_{0}+t^m).$$ Show that</p> <p><strong>there exist positive $\delta$, such for any $t$ such that $0&lt;|t|&lt;\delta$, and such $f(x)$ and $g_{t}(x)$ be relatively prime.</strong></p> </blockquote> <p>I fell this result is very well, because although two polynomial are not relatively prime, we can do it to one of the polynomial tiny perturbation makes relatively prime.</p> <p>But I can't prove my problem.</p> <p>Thank you very much</p>
Sylvain Biehler
132,773
<p>Let $x_i$ be the (complex) roots of $f$.</p>

<p>$g_t$ and $f$ are relatively prime iff $g_t(x_i) \neq 0$ for every $x_i$.</p>

<p>For each fixed $x_i$, $$b_{m}x_i^m+(b_{m-1}+t)x_i^{m-1}+\cdots+(b_{1}+t^{m-1})x_i+(b_{0}+t^m)$$ is a polynomial in $t$ whose $t^m$-coefficient is $1$, hence it is not identically zero and has only finitely many roots. So $g_t$ and $f$ are relatively prime iff $t$ is not in the finite set of roots of these polynomials; any $\delta&gt;0$ smaller than the smallest nonzero absolute value occurring in this set works.</p>
48,629
<p>Recently I began to consider algebraic surfaces, that is, the zero set of a polynomial in 3 (or more variables). My algebraic geometry background is poor, and I'm more used to differential and Riemannian geometry. Therefore, I'm looking for the relations between the two areas. I should also mention, that I'm interested in the realm of real surfaces, i.e. subsets of $\mathbb{R}^n$.</p> <p>On my desk you could find the following books: <strong>Algebraic Geometry</strong> by <em>Hartshorne</em>, <strong>Ideals, Varieties, and Algorithms</strong> by <em>Cox &amp; Little &amp; O'Shea</em>, <strong>Algorithms in Real Algebraic Geometry</strong> by <em>Basu &amp; Pollack &amp; Roy</em> and <strong>A SINGULAR Introduction to Commutative Algebra</strong> by <em>Greuel &amp; Pfister</em>. Unfortunately, neither of them introduced notions and ideas I'm looking for.</p> <p>If I get it right, please correct me if I'm wrong, locally, around non-singular points, an algebraic surface behaves very nicely, for example, it is smooth. Here's the first question: <em>is it locally (about non-singular point) a smooth manifold? Is it a Riemannian manifold, having, for instance, the metric induced from the Euclidean space?</em></p> <p>Further questions I have are, for example:</p> <ol> <li>Can I define <em>geodesics</em> (either in the sense of length minimizer or straight curves) in the non-singular areas of the surface? Can they pass singularities?</li> <li>How about <em>curvature</em>? Is it defined for these objects?</li> <li>Can we talk about <em>convexity</em> of subsets of the algebraic surface?</li> <li>What other tools and term can be imported from differential/Riemannian geometry?</li> </ol> <p>I will be grateful for any hint, tip and lead in the form of either answers to my questions, or references to books/papers which can be helpful, or any other sort of help.</p>
Sándor Kovács
10,076
<p>It seems to me that your interest is not in algebraic geometry, but in the differential geometry of spaces defined by algebraic equations. An algebraic variety defined over $\mathbb R$ or $\mathbb C$ is a manifold away from the singularities. The singular set is a proper closed subset, where closed means defined by some algebraic equations, so this set is actually lower dimensional than the original object, and you have a very nice <em>big</em> open subset where you have a manifold. The question of extending differential geometric constructions to singularities is in general a difficult one and is the focus of a lot of research. You might be able to get a better idea by looking at complex analytic geometry. In any case, you can obviously do any of those you ask on the smooth part, but it will not be algebraic geometry. But that's OK.</p> <p>One possibility you could try is indeed looking at the resolution of singularities: do your Riemannian magic there and try to bring the results back to the original space. I suspect that you don't know what a resolution of singularities is, since it is actually a very specifically algebraic-geometric notion. It is the following: Let $X$ be your starting object. A resolution of singularities is a proper morphism $\pi:\widetilde X\to X$, with $\widetilde X$ smooth, such that it is an isomorphism outside a lower-dimensional subspace of $X$. You can read more about these in <a href="http://books.google.com/books/about/Lectures_on_resolution_of_singularities.html?id=Oygejj1QFhgC" rel="noreferrer">Lectures on resolution of singularities</a> by János Kollár. The difficulty will be in taking whatever you do on the resolution back to the original, but the good news is that it is differential geometry, so my suggestion would be the following: For now assume that such a $\pi$ exists and see whether you can do what you want on $\widetilde X$. If so, try to see if you can "push-forward" some of those results to $X$. 
Perhaps you will realize that "if only $\pi$ satisfied property $P$, then I could do this", and it is possible that $\pi$ does. So, if you get to that point, then look at Kollár's book or come back to MO and ask more specific questions. </p>
88,199
<p>Is there a function that would satisfy the following conditions?:</p> <p>$\forall x \in X, x = f(f(x))$ and $x \not= f(x)$,</p> <p>where the set $X$ is the set of all triplets $(x_1,x_2,x_3)$ with $x_i \in \{0,1,\ldots,255\}$.</p> <p>I would like to find a function that will have as an input RGB color values (triplets) and return the original color after two applications of the function.</p>
dls
1,761
<p>If the color components are rescaled to $[0,1]$ and $f(x)=1-x$, then $f(f(x))=1-(1-x)=x$ (applied to each component); for integer components in $\{0,1,\ldots,255\}$ the same idea gives $f(x)=255-x$.</p>
88,199
<p>Is there a function that would satisfy the following conditions?:</p> <p>$\forall x \in X, x = f(f(x))$ and $x \not= f(x)$,</p> <p>where the set $X$ is the set of all triplets $(x_1,x_2,x_3)$ with $x_i \in \{0,1,\ldots,255\}$.</p> <p>I would like to find a function that will have as an input RGB color values (triplets) and return the original color after two applications of the function.</p>
Paul
16,158
<p>What about this: Define $$g(x)=x-1\mbox{ if }x\mbox{ is odd; and }x+1\mbox{ if } x \mbox{ is even.}$$ Then $g$ maps from $\{0,1,\ldots,255\}$ to $\{0,1,\ldots,255\}$. Note also that $g(x)\neq x$ for all $x\in\{0,1,\ldots,255\}$, and $g(g(x))=x$. Now we can define $f$ as the following: $$f(x_1,x_2,x_3)=(g(x_1),x_2,x_3).$$</p>
88,199
<p>Is there a function that would satisfy the following conditions?:</p> <p>$\forall x \in X, x = f(f(x))$ and $x \not= f(x)$,</p> <p>where the set $X$ is the set of all triplets $(x_1,x_2,x_3)$ with $x_i \in \{0,1,\ldots,255\}$.</p> <p>I would like to find a function that will have as an input RGB color values (triplets) and return the original color after two applications of the function.</p>
ofer
17,209
<p>$f(x_1,x_2,x_3)=(255−x_1,255−x_2,255−x_3)$ should do the job.</p>
88,199
<p>Is there a function that would satisfy the following conditions?:</p> <p>$\forall x \in X, x = f(f(x))$ and $x \not= f(x)$,</p> <p>where the set $X$ is the set of all triplets $(x_1,x_2,x_3)$ with $x_i \in \{0,1,\ldots,255\}$.</p> <p>I would like to find a function that will have as an input RGB color values (triplets) and return the original color after two applications of the function.</p>
Phira
9,325
<p>If you prefer a larger color change, you should rather take $$f(x,y,z)= (x+128 \bmod 256,y+128 \bmod 256,z+128 \bmod 256).$$</p>
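<p>This shift-by-128 map can be verified mechanically (a small check of my own; the function name is mine):</p>

```python
def f(c):
    """Shift each RGB component by 128, wrapping mod 256."""
    return tuple((x + 128) % 256 for x in c)

# On each component: applying the shift twice returns the original
# value, and a single shift never fixes a value.
for x in range(256):
    y = (x + 128) % 256
    assert y != x and (y + 128) % 256 == x

print(f((12, 200, 255)))     # (140, 72, 127)
print(f(f((12, 200, 255))))  # (12, 200, 255)
```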
1,677,035
<p>I'm new to this website so I apologize in advance if what I'm going to ask isn't meant to be posted here.</p> <p>A bit of background though: I haven't been to school in 6 years and the last level I've graduated was Grade 7 due to financial problems, as well as my mom frequently being in and out of the hospital. I am now 18 and I wish to go to college as soon as I can, but I need to be caught up on all the math I've missed (I have been studying these past few years with what's available on the internet, but I don't think it's enough).</p> <p>So my question is, are there any good, easy to understand, high school math books suited for my situation? I learn better with a teacher who can explain the lesson, but since I don't have one I'd prefer books that aren't too difficult, but at the same time provide everything necessary for high school level math and more. I used to be a bright student so I'm sure I can do this on my own with the right material.</p> <p>Again, if this question isn't meant to be on this site I'd be more than willing to delete it asap! That's all. Thank you for reading. :)</p>
Community
-1
<p>Books by Barnard and Child, and by S. L. Loney, are awesome. Do try Problems in Calculus of One Variable by I. A. Maron. Thomas's Calculus is also good for in-depth knowledge. Problems in Mathematics is good for its problems (there are too many of them).</p> <p>Best of luck for your studies.</p>
2,540,992
<blockquote> <p>An infinite sequence of increasing positive integers is given with bounded first differences.</p> <p>Prove that there are elements <span class="math-container">$a$</span> and <span class="math-container">$b$</span> in the sequence such that <span class="math-container">$\dfrac{a}{b}$</span> is a positive integer.</p> </blockquote> <p>I think maybe computing the <a href="https://en.wikipedia.org/wiki/Natural_density" rel="noreferrer">Natural Density</a> of the sequence would lead to some contradiction. But don't know if it exists.</p> <p>Any help will be appreciated.<br /> Thanks.</p>
Erick Wong
30,402
<p><a href="https://www.renyi.hu/%7Ep_erdos/1935-04.pdf" rel="nofollow noreferrer">Here is a 1935 paper</a> of a relatively young Erdős proving in a few lines that a sequence of positive integers which don't divide one another must have lower density zero, as a consequence of the fact that <span class="math-container">$\sum_n 1/(a_n \log a_n)$</span> is bounded by an absolute constant. In particular, a sequence with gaps bounded by <span class="math-container">$d$</span> has lower density <span class="math-container">$\ge 1/d$</span>, so this proves the claim, but perhaps there is a more direct argument. The bounded gap requirement does handily eliminate any possibility of a Besicovitch-type construction (which yields a positive upper density, but introduces gaps which grow very rapidly in length).</p>
4,521,661
<p>Calls arrive according to a Poisson process with parameter <span class="math-container">$\lambda$</span>. Lengths of the calls are iid with cdf <span class="math-container">$F_X(x)$</span>. What is the probability distribution of the number of calls in progress at any given time?</p> <p>I am confused, is the answer then just the pmf of the Poisson distribution, that is, <span class="math-container">$P(X=x) = e^{-\lambda}\frac{\lambda^x}{x!}$</span>?</p> <p>I feel like I am missing something. I assumed <span class="math-container">$\lambda$</span> is the number of the calls and I am not sure how to use this with the cdf of the lengths of the calls.</p>
Matthew H.
801,306
<p>Here is my approach, without queuing theory.</p> <p>Let <span class="math-container">$T&gt;0$</span> be any given time, and take <span class="math-container">$\Big\{[t_{i-1},t_i),s_i\Big\}_{i=1}^n$</span> as any uniform tagged partition of <span class="math-container">$[0,T)$</span> with <span class="math-container">$\Delta t=t_i-t_{i-1}=\frac{T}{n}$</span>.</p> <p>When <span class="math-container">$n$</span> is very large, the number of customers that call the call center on <span class="math-container">$[t_{i-1},t_i)$</span> may be regarded as a splitting process: those who remain on the line at time <span class="math-container">$T$</span> and those who hang up by time <span class="math-container">$T$</span>. The former count is approximately <span class="math-container">$\text{Poisson}\left(\lambda \Delta t\left(1-F_X(T-s_i)\right)\right)$</span>.</p> <p>So, the <em>total</em> number of calls in progress at time <span class="math-container">$T$</span> is approximately <span class="math-container">$\text{Poisson}\left(\sum_{i=1}^n\lambda \Delta t\left(1-F_X(T-s_i)\right)\right)$</span></p> <p>Taking <span class="math-container">$n$</span> to <span class="math-container">$\infty$</span> yields a desired distribution of <span class="math-container">$$\text{Poisson}\left(\int_0^T\lambda \left(1-F_X(T-t)\right)\mathrm{d}t\right)$$</span></p>
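<p>As a sketch (the parameter values and the exponential choice of the call-length cdf are my own assumptions, not part of the answer), the rate integral can be confirmed numerically: for exponential call lengths, 1 - F(s) = exp(-mu*s), the closed form of the Poisson mean is lam*(1 - exp(-mu*T))/mu.</p>

```python
import math

# Compare trapezoidal quadrature of lam * ∫_0^T (1 - F(T - t)) dt
# with the closed form for exponential call lengths.
lam, mu, T = 3.0, 0.5, 10.0

def survival(s):
    return math.exp(-mu * s)  # 1 - F(s) for the exponential cdf

n = 100_000
h = T / n
m_numeric = lam * h * (sum(survival(T - i * h) for i in range(1, n))
                       + 0.5 * (survival(T) + survival(0.0)))
m_closed = lam * (1 - math.exp(-mu * T)) / mu

assert abs(m_numeric - m_closed) < 1e-6
```

As T grows, the mean approaches lam/mu, the familiar steady-state M/G/infinity load.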
631,163
<p>As a student in high school, I never bothered to memorize equations or methods of solving, rather I would try to identify the logic behind the operations and apply them. However, now that I've begun to teach Algebra in high school, I find it rather frustrating when students either a) memorize methods of solving the textbook problems or b) look for a general formula/method to "just plug in to"</p> <p>I've tried to throw them curveballs as my old Algebra teacher did, but usually they just dismiss it as "a weird problem" and continue using whatever method they have been.</p> <p>My objection to A is that it often impedes actual learning. Upon seeing a chunk of 6 similar problems in the textbook, many students just apply the same steps to every problem in the section (and usually get quite a couple wrong).</p> <p>My objection to B is that from my experience, students who flat out memorize equations (like $x=\frac{-b \pm \sqrt{b^2-4ac}}{2a}$ for quadratic equations) often fail to extend the same logic (completing the square/simplification) when faced with different but similar problems. They also frequently misapply the "magic formulas" they were taught before (i.e. solving simple quartics $ax^4+bx^2+c$ with the quadratic formula) and needing plenty of prompting after the suggestion of substituting $x^2$.</p> <p>This is the problem identified in <a href="https://math.stackexchange.com/questions/85681/is-memorization-a-good-skill-to-learn-or-master-mathematics#comment202224_85689">this question</a> and in particular the issue raised in <a href="https://math.stackexchange.com/questions/85681/is-memorization-a-good-skill-to-learn-or-master-mathematics#comment202224_85689">this comment</a>.</p>
String
94,971
<p>From my perspective, it is not just a simple "pro et contra memorization". Actually it is of great value to memorize formulas in general so that they are readily available. What may instead be at issue is</p> <ol> <li>How students memorize (reflected/un-reflected)</li> <li>What they do with the memorized stuff (reflect or not)</li> </ol> <p>To go to the extreme, forbidding memorization would make you have to write up multiplication tables from scratch over and over again.</p> <h2>What I do as a teacher</h2> <p>Oftentimes, I tell my students to first simply copy a proof from the textbook without thinking too much. This is a very first step of activating (even un-reflectedly) what the book says. This gets even the weakest of the students started. But copying was never the goal itself.</p> <p>The next step is to copy the proof again while trying to figure out the steps, in particular identifying the first step that the student is unable to comprehend or uncertain about. If the students have good memories they might start to reflect upon which previous methods could come into play.</p> <p>The ultimate goal is to break things down by reflecting until the memorizing can be condensed to a simple core of references to previous methods and an idea of the overall scheme of the proof.</p> <h2>My thoughts on the learning process</h2> <p>I think that memorizing and reflecting sometimes belong to different situations when learning mathematics, as they do in other subjects.</p> <p>Imagine you were learning to play the piano and went on forever analyzing and reflecting upon the way your fingers were acting. That would make the whole process slow and tedious. On the other hand, returning to the reflection of what your fingers do when playing at certain recurring occasions will be quite beneficial.</p> <p>I think about learning mathematics in the same way. 
Some of the time you should simply just do calculations with methods you have memorized, also to reinforce that you are capable of using those methods. At other times you should engage more deeply in reflection.</p> <h2>About curveballs</h2> <p>I used to throw those all the time too. I found out that it muddled the students' distinction of when I was teaching them a method and when I was just throwing extra challenges for the especially gifted.</p> <p>Also be careful how curved the ball is. If someone throws a ball at you, you will automatically at least consider catching it (or move). But being forced to move all the time instead of catching will reinforce the idea that "this is something that others more talented than me would be doing". In other words "I am not good with maths".</p> <p>Recently I have begun to throw the curveballs in a not so curved way and not aiming at anyone specific, but actually telling them "this is something you might try, but it is quite difficult". Then if someone catches my not-even-so-much-of-a-curveball they will feel "king of the world of mathematics".</p> <p>I hope others will answer your question as well. I do not consider myself a very good or trained teacher. But the thoughts above correspond to the experience that I have gained so far.</p>
2,317,625
<p>How do you compare $6-2\sqrt{3}$ and $3\sqrt{2}-2$? (no calculator)</p> <p>Looks simple, but I have tried many ways and failed miserably. Both are positive, so we cannot argue that one is bigger than $0$ and the other smaller than $0$. Taking the first minus the second in order to see whether the result is positive or negative gets me nowhere (perhaps I am too dense to see through it).</p>
PM 2Ring
207,316
<p>Here's yet another way, for those who aren't comfortable with the $\gtrless$ or $\sim$ notation.</p> <p>We can use crude rational approximations to $\sqrt 2$ and $\sqrt 3$.</p> <p>$$\begin{align} \left(\frac{3}{2}\right)^2 = \frac{9}{4} &amp; \gt 2\\ \frac{3}{2} &amp; \gt \sqrt 2\\ \frac{9}{2} &amp; \gt 3\sqrt 2 \end{align}$$</p> <p>And $$\begin{align} \left(\frac{7}{4}\right)^2 = \frac{49}{16} &amp; \gt 3\\ \frac{7}{4} &amp; \gt \sqrt 3\\ \frac{7}{2} &amp; \gt 2\sqrt 3 \end{align}$$</p> <p>Adding those two approximations, we get $$\begin{align} \frac{9}{2} + \frac{7}{2} = 8 &amp; \gt 3\sqrt 2 + 2\sqrt 3\\ 6 + 2 &amp; \gt 3\sqrt 2 + 2\sqrt 3\\ 6 - 2\sqrt 3 &amp; \gt 3\sqrt 2 - 2 \end{align}$$</p>
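<p>For readers who do have a calculator handy, a tiny script (mine, not part of the answer) confirms both the rational bounds and the final comparison:</p>

```python
import math

# Numerical confirmation of the comparison; the algebra above is exact,
# this just sanity-checks it with floating point.
assert (3 / 2) ** 2 > 2          # so 3/2 > sqrt(2)
assert (7 / 4) ** 2 > 3          # so 7/4 > sqrt(3)
assert 6 - 2 * math.sqrt(3) > 3 * math.sqrt(2) - 2

print(6 - 2 * math.sqrt(3), 3 * math.sqrt(2) - 2)  # about 2.536 vs 2.243
```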
1,114,007
<p>How to simplify $$\arctan \left(\frac{1}{2}\tan (2A)\right) + \arctan (\cot (A)) + \arctan (\cot ^{3}(A)) $$ for $0&lt; A&lt; \pi /4$?</p> <p>This is one of the problems in a book I'm using. It is actually an objective question, with 4 options given, so I just put $A=\pi /4$ (even though technically it's disallowed as $0&lt; A&lt; \pi /4$) and got the answer as $\pi $, which was one of the options, so that must be the answer (and it is weirdly written in the options as $4 \arctan (1) $).</p> <p>Still, I'm not able to actually solve this problem. I know the formula for the sum of three arctans, but it gets just too messy and looks hard to simplify, and it is not obvious that the answer will be constant for all $0&lt; A&lt; \pi /4$. And I don't know of any other way to approach such problems.</p>
lab bhattacharjee
33,337
<p>As $0&lt;A&lt;\dfrac\pi4\implies\cot A&gt;1\implies\cot^3A&gt;1$</p> <p>Like <a href="https://math.stackexchange.com/questions/523625/showing-arctan-frac23-frac12-arctan-frac125/523626#523626">showing $\arctan(\frac{2}{3}) = \frac{1}{2} \arctan(\frac{12}{5})$</a>,</p> <p>$\arctan(\cot A)+\arctan(\cot^3A)=\pi+\arctan\left(\dfrac{\cot A+\cot^3A}{1-\cot A\cdot\cot^3A}\right)$</p> <p>Now $\dfrac{\cot A+\cot^3A}{1-\cot A\cdot\cot^3A}=\dfrac{\tan^3A+\tan A}{\tan^4A-1}=\dfrac{\tan A}{\tan^2A-1}=-\dfrac{\tan2A}2$</p> <p>and $\arctan(-x)=-\arctan(x)$</p>
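<p>A numerical spot check (my script, not part of the derivation) that the sum equals pi for several values of A in (0, pi/4):</p>

```python
import math

# Evaluate arctan(tan(2A)/2) + arctan(cot A) + arctan(cot^3 A);
# per the derivation above it should equal pi on 0 < A < pi/4.
def s(A):
    cot = 1 / math.tan(A)
    return (math.atan(math.tan(2 * A) / 2)
            + math.atan(cot) + math.atan(cot ** 3))

for A in (0.1, 0.3, math.pi / 6, 0.7):
    assert abs(s(A) - math.pi) < 1e-9
```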
583,030
<p>I have to show that the following series converges:</p> <p>$$\sum_{n=0}^{\infty}(-1)^n \frac{2+(-1)^n}{n+1}$$</p> <p>I have tried the following:</p> <ul> <li>The alternating series test cannot be applied, since $\frac{2+(-1)^n}{n+1}$ is not monotonically decreasing.</li> <li>I tried splitting up the series into the two series $\sum_{n=0}^{\infty}a_n = \sum_{n=0}^{\infty}(-1)^n \frac{2}{n+1}$ and $\sum_{n=0}^{\infty}b_n=\sum_{n=0}^{\infty}(-1)^n \frac{(-1)^n}{n+1}$. I proved the convergence of the first series using the alternating series test, but then I realized that the second series is divergent.</li> <li>I also tried using the ratio test: for even $n$ the ratio converges to $\frac{1}{3}$, but for odd $n$ it converges to $3$. Therefore the ratio test is also not successful.</li> </ul> <p>I ran out of ideas to show the convergence of the series.</p> <p>Thanks in advance for any help!</p>
Beni Bogosel
7,327
<p>You could look at the partial sums:</p> <p>$$\sum_{n=0}^{N}(-1)^n \frac{2+(-1)^n}{n+1}=\frac{3}{1}-\frac{1}{2}+\frac{3}{3}-\frac{1}{4}+\cdots=\sum_{\substack{k \leq N+1 \\ k \text{ odd}}} \frac{3}{k}-\sum_{\substack{k \leq N+1 \\ k \text{ even}}} \frac{1}{k}\ \geq\ 2\sum_{\substack{k \leq N+1 \\ k \text{ odd}}} \frac{1}{k},$$ since each even term $\frac{1}{2j}$ is dominated by the preceding odd term $\frac{1}{2j-1}$. The right-hand side diverges like $\ln N$, so the series diverges.</p>
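<p>A small script (mine, not part of the answer) illustrating the divergence numerically: the partial sums keep growing, roughly like log N:</p>

```python
# Partial sums of sum (-1)^n (2 + (-1)^n)/(n + 1); divergence shows up
# as unbounded (logarithmic) growth rather than settling to a limit.
def partial(N):
    return sum((-1) ** n * (2 + (-1) ** n) / (n + 1) for n in range(N))

s3, s5 = partial(10 ** 3), partial(10 ** 5)
assert s5 > s3 > 5     # still growing, not converging
print(s3, s5)          # roughly 8.9 and 13.5
```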
4,285,143
<p>Let's suppose I've got a function <span class="math-container">$f(x)$</span> which I'd like to differentiate with respect to <span class="math-container">$t$</span>, but <span class="math-container">$t$</span> depends on <span class="math-container">$x$</span>: <span class="math-container">$t(x)$</span>. Thus the whole linked derivative thing: <span class="math-container">$\dfrac{\mathrm{d}f(x)}{\mathrm{d}t(x)}$</span>. Is this possible at all? Alternatively, I would have to find <span class="math-container">$t^{-1}(x)$</span>: <span class="math-container">$x(t)$</span> and then calculate the derivative fairly easily: <span class="math-container">$\mathrm{d} f(x(t))/\mathrm{d}t$</span>. But finding the inverse is not always possible. Is there another way? At all?</p>
amsmath
487,169
<p><span class="math-container">$$ \frac{df}{dt} = \frac{df}{dx}\frac{dx}{dt} = \frac{df}{dx}\left(\frac{dt}{dx}\right)^{-1} = \frac{f'(x)}{t'(x)}. $$</span></p>
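<p>A concrete check (the example functions f(x) = x^2 and t(x) = x^3 are my choice, not from the answer) that the ratio formula agrees with differentiating f directly as a function of t:</p>

```python
# With f(x) = x**2 and t(x) = x**3 we have x = t**(1/3), so as a
# function of t, f(t) = t**(2/3) and df/dt = (2/3) t**(-1/3).
# This should equal f'(x)/t'(x) = 2x / (3 x**2).
x = 2.0
t = x ** 3
direct = (2.0 / 3.0) * t ** (-1.0 / 3.0)   # differentiate t**(2/3)
via_ratio = (2 * x) / (3 * x ** 2)          # f'(x) / t'(x)
assert abs(direct - via_ratio) < 1e-12      # both equal 1/3 here
```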
1,203,179
<p>The problem I have is this:</p> <p>Use suitable linear approximation to find the approximate values for given functions at the points indicated:</p> <p>$f(x, y) = xe^{y+x^2}$ at $(2.05, -3.92)$</p> <p>I know how to do linear approximation with just one variable (take the derivative and such), but with two variables (and later on in the assignment, three variables) I'm a bit lost. Do I take partial derivatives and combine then somehow? Can someone guide me through a problem of this type? </p> <p>Thank you in advance.</p>
Sloan
217,391
<p>Let $L(x,y)$=$f(x_0,y_0)+f_x(x_0,y_0)(x-x_0)+f_y(x_0,y_0)(y-y_0)$. Then $L(x,y) \approx f(x,y)$. Consider $(x_0,y_0)=(2,-4)$. Then, \begin{equation} L(x,y)=2+9(x-2)+2(y+4) \implies f(2.05,-3.92) \approx L(2.05, -3.92)=2.61 \end{equation} Notice, from a calculator, $f(2.05,-3.92)=2.7192$</p>
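<p>The arithmetic above is easy to verify with a short script (mine, not part of the answer):</p>

```python
import math

# f(x, y) = x * exp(y + x^2) and its tangent-plane approximation L
# at the base point (2, -4), as computed in the answer.
def f(x, y):
    return x * math.exp(y + x ** 2)

def L(x, y):
    return 2 + 9 * (x - 2) + 2 * (y + 4)

assert abs(L(2.05, -3.92) - 2.61) < 1e-12   # the linear estimate
assert abs(f(2.05, -3.92) - 2.7192) < 1e-3  # the calculator value
```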
56,481
<p>I don't know if the question is trivial or not (if so I will delete it), but here is what I have:</p> <p>I know that </p> <pre><code>x x (* x^2 *) </code></pre> <p>Looking at the full form gives</p> <pre><code>FullForm[Times[x, x]] (* Power[x, 2] *) </code></pre> <p>It is clear that the <code>Times</code> head has been changed to the <code>Power</code> head. I just wonder how this happens. Can I control this behavior to stop this conversion between heads?</p>
m_goldberg
3,066
<p><em>Mathematica</em> embraces the notion of canonical form. Internally it wants all expressions to be reduced to their canonical forms.</p> <p><code>Times[x, x]</code> is not a canonical form, so it gets reduced to <code>Power[x, 2]</code>, which is.</p> <p>I share Mr.Wizard's occasional frustration over the choices of canon made by Wolfram Research, but I think it is far too late to expect any change. So grin and bear it, as we say here in the US.</p>
3,038,965
<p>Here's the question I'm puzzling over:</p> <p><span class="math-container">$\textbf{Find the perpendicular distance of the point } (p, q, r) \textbf{ from the plane } \\ax + by + cz = d.$</span></p> <p>I tried bringing in the idea of a dot product and attempted to get going with solving the problem, but I'm heading nowhere. That is:</p> <p><span class="math-container">$\text{The direction vector of the normal of the plane } = (a\textbf{i}+b\textbf{j}+c\textbf{k}) \text{, where } \\\textbf{i}, \textbf{ j},\textbf{ k } \text{ are unit vectors.}$</span> </p> <p>This dotted with the direction vector of the point (position vector, precisely) should equal 0. Am I right? And will this method even work?</p>
Chris Culter
87,023
<p>Try redoing the derivative of <span class="math-container">$1/(1-x)$</span> more carefully.</p>
670,781
<p>Given $y=x\sqrt{a+bx^2}$. The tangent to $y$ at the point $x=\sqrt5$ also passes through the point </p> <p>$(3\sqrt5,\sqrt5).$ The area between $y=x\sqrt{a+bx^2}$ and the $x$-axis is equal to $18$.</p> <p>I need to find $a,b$.</p> <p>I have tried to differentiate and eliminate but failed...</p>
DonAntonio
31,254
<p>$$f(x):=x\sqrt{a+bx^2}\implies f(\sqrt5)=\sqrt5\sqrt{a+5b}=\sqrt{5a+25b}\implies$$</p> <p>since</p> <p>$$f'(x)=\sqrt{a+bx^2}+\frac{bx^2}{\sqrt{a+bx^2}}=\frac{a+2bx^2}{\sqrt{a+bx^2}}\implies f'(\sqrt5)=\frac{a+10b}{\sqrt{a+5b}}$$</p> <p>Thus, the tangent line at the tangency point $\;(\sqrt5\,,\,\sqrt{5a+25b})\;$ is</p> <p>$$y-\sqrt{5a+25b}=\frac{a+10b}{\sqrt{a+5b}}(x-\sqrt5)$$</p> <p>and as this line passes through $\;(3\sqrt5\,,\,\sqrt5)\;$ , we have that</p> <p>$$\sqrt5-\sqrt{5a+25b}=\frac{a+10b}{\sqrt{a+5b}}(2\sqrt5)\iff$$</p> <p>$$5+5a+25b-2\sqrt{25a+125b}=\frac{20(a^2+20ab+100b^2)}{a+5b}$$</p> <p>As you can see, we get a first horrendous, awful equation connecting $\;a,b\;$. </p> <p>The second condition implies that either the function intersects the $\;x$-axis twice, at $\;x=0\;,\;\;x=\sqrt{-\frac ab}\;$ , or else they're talking of the improper integral from zero to $\;+\infty\;$. Anyway, you can use that</p> <p>$$\int x\sqrt{a+bx^2}dx=\frac1{2b}\int(2bx\,dx)\sqrt{a+bx^2}=\frac1{2b}\frac23(a+bx^2)^{3/2}+C$$</p> <p>Not the nicest thing (in fact, pretty ugly), but try to take it from here.</p>
2,674,217
<p>Let $\{ a_{n}\}_{n}$ be a sequence and let $a\in \mathbb{R}$. Define $\{ c_{n}\}_{n}$ as:</p> <p>$$c_{n}=\frac{a_{1}+...+a_{n}}{n}.$$</p> <p>I want to prove the following claim: if $\lim\limits_{n\to +\infty}a_{n}=+\infty$ then $\lim\limits_{n\to +\infty}c_{n}=+\infty$</p> <p>Approach: Suppose $\lim\limits_{n\to +\infty}a_{n}=+\infty$</p> <p>Let $M&gt;0$, $\exists\space n_{0}\in\mathbb{N}$ such that $\forall\space n\ge n_{0}, a_{n}&gt;M$</p> <p>\begin{align} c_{n} &amp;=\frac{a_{1}+...+a_{n_{0}-1}+a_{n_{0}}+...+a_{n}}{n} \\ &amp;&gt;\frac{a_{1}+...+a_{n_{0}-1}+(n-n_{0})M}{n}\\ &amp;=\frac{a_{1}+...+a_{n_{0}-1}}{n}+\frac{(n-n_{0})M}{n} \end{align}</p> <p>So $\lim\limits_{n\to +\infty}c_{n}\ge\lim\limits_{n\to +\infty}\left[\frac{a_{1}+...+a_{n_{0}-1}}{n}+(1-\frac{n_{0}}{n})M\right]=M$</p> <p>Since this is true $\forall\space M&gt;0$, then $\lim\limits_{n\to +\infty}c_{n}=+\infty$</p> <p>Is this approach correct?</p>
zhw.
228,045
<p>You were off to a good start. You showed</p> <p>$$c_{n}&gt;\frac{a_{1}+...+a_{n_{0}-1}}{n}+\frac{(n-(n_{0}-1))M}{n} $$</p> <p>for $n\ge n_0.$ Thus $\liminf c_n$ is at least the $\liminf$ of the expression on the right. But the expression on the right has a limit, namely $0+M=M,$ and thus its $\liminf$ is the same. So we have shown $\liminf c_n \ge M.$ Since $M&gt;0$ was arbitrary, $\liminf c_n =\infty,$ proving $\lim c_n =\infty.$</p>
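<p>A numerical illustration (the example sequence a_n = sqrt(n) is my own, not from the question) of the statement being proved, namely that the Cesaro means inherit the divergence to infinity:</p>

```python
import math

# Cesaro mean c(n) = (a_1 + ... + a_n)/n for a_k = sqrt(k);
# since a_n -> infinity, c(n) grows too (roughly (2/3) sqrt(n)).
def c(n):
    return sum(math.sqrt(k) for k in range(1, n + 1)) / n

assert c(100) < c(10_000) < c(1_000_000)  # strictly growing samples
assert c(1_000_000) > 600                 # near (2/3) * 1000
```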
2,336,535
<p>I have a limit:</p> <p>$$\lim_{(x,y)\rightarrow(0,0)} \frac{x^3+y^3}{x^4+y^2}$$</p> <p>I need to show that it doesn't equal 0.</p> <p>Since the power of $x$ is 3 and 4 down it seems like that part could go to $0$ but the power of $y$ is 3 and 2 down so that seems like it's going to $\infty$.</p> <p>I wonder if that even makes sense.</p> <p>So how can I solve this limit, with substitution or changing it into a polar representation?</p>
Fred
380,717
<p>Let $f(x,y)=\frac{x^3+y^3}{x^4+y^2}$. Then</p> <p>$|f(x,x^2)| \to \infty$ for $x \to 0$.</p> <p>Hence $\lim_{(x,y)\rightarrow(0,0)} \frac{x^3+y^3}{x^4+y^2}$ does not exist.</p>
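<p>Checking the path argument numerically (script is mine): along y = x^2 the function grows like 1/(2x) as x tends to 0 from the right:</p>

```python
# Along the path y = x**2, f(x, x**2) = (1 + x**3) / (2x), which
# blows up as x -> 0+, so the limit at the origin cannot exist.
def f(x, y):
    return (x ** 3 + y ** 3) / (x ** 4 + y ** 2)

assert abs(f(0.1, 0.01)) < abs(f(0.01, 0.0001)) < abs(f(0.001, 0.000001))
assert f(0.001, 0.000001) > 400  # roughly 1/(2 * 0.001) = 500
```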
1,387,184
<p>Can someone show how to compute the residue of this function: $$\frac{z}{e^z - 1}$$</p> <p>I think can represent the Taylor series of $e^z$ as $$e^z = 1 + z + \frac{z^2}{2!} + \frac{z^3}{3!} + \cdots$$ Then, we have $$\frac{z}{e^z - 1} = \frac{z}{(1 +z + \frac{z^2}{2!} + \frac{z^3}{3!} + \cdots) -1}$$ $$ = 1 + 1 +\frac{2}{z} + \frac{3!}{z^2} + \cdots - 1 = 1 +\frac{2}{z} + \frac{3!}{z^2} + \cdots$$ and, this function has a pole at $z_0 = 2n \pi i$ Hence, $$\operatorname{Res}_{z_0}f(z) = 0$$</p> <p>EDIT: </p> <p>The way I used to solve this problem is $$\textrm{Res}_{z_0}(\frac{f}{g}) = \frac{f(z_0)}{g'(z_0)} = \frac{2n \pi i}{e^{2n \pi i}}$$</p> <p>Am I correct? If not, can someone please show me how to do it please?</p>
Community
-1
<p>Let $K$ be an algebraically closed field of characteristic $0$, $L$ be the Lie algebra $M_n(K),[.,.]$, $ad(L)=\{ad_x:y\rightarrow xy-yx=[x,y]|x\in L\}$, $der(L)$ be the set of derivations over $L$. </p> <p>Prop 0. $L$ has no proper ideal. (well-known)</p> <p>Prop 1. $ad(L)$ is an ideal of $der(L)$. </p> <p>Proof. $[\delta,ad_x]=ad_{\delta x}$.</p> <p>Prop 2. $x\in L\rightarrow ad_x\in ad(L)$ is an isomorphism of Lie algebra.</p> <p>Proof. $ad_{[x,y]}=[ad_x,ad_y]$.</p> <p>Prop 3. The Killing form over $L$: $K_L(x,y)=Tr(ad_x.ad_y)$ is non-degenerate and $K([x,y],z)=K(x,[y,z])$. </p> <p>Proof. Note that $S=\{x\in L|$ for every $y\in L,K_L(x,y)=0\}$ is an ideal of $L$. It is easy to find $x,y$ s.t. $K(x,y)\not= 0$; according to 0., $S=\{0\}$.</p> <p>Remark. $K_{ad(L)}$ is the restriction of $K_{der(L)}$ and, according to 2.,3., is non-degenerate. Let $I$ be the orthogonal of $ad(L)$ in $der(L)$ with respect to $K_{der(L)}$; then $I\cap ad(L)=\{0\}$; according to 1., they are ideals of $der(L)$ s.t. $[I,ad(L)]\subset ad(L)\cap I=\{0\}$.</p> <p>Let $\delta\in I$; according to 1., for every $x$, $ad_{\delta x}=0$; according to 2., for every $x$, $\delta_x=0$ and $\delta= 0$. Conclusion $ad(L)=der(L)$.</p>
1,386,683
<p>I posted earlier but got a very tough response.</p> <p>Point $A = 2 + 0i$ and point $B = 2 + i2\sqrt{3}$; find the point $C$ ($\pm 60$ degrees) such that triangle $ABC$ is equilateral. </p> <p>Okay, so I'll begin by converting into polar form:</p> <p>$A = 2e^{2\pi i}$ and $B = 4e^{\frac{\pi}{3}i}$</p> <p>$\overline{AB} = \sqrt{13}$</p> <p>How should I find a point with length $\overline{BC} = \overline{AC} = \sqrt{13}$ and the sufficient angle?</p>
Eclipse Sun
119,490
<p>Since the modulus of $\dfrac{\bar z^2}{z}$ is $|z|$, the limit is $0$.</p>
2,558,267
<p>Let $M$ be a finite dimensional von Neumann algebra. We know this algebra is generated by its projections. My question may be simple. Can one compute these projections? What about its minimal or central projections? </p> <p>If possible please give me a reference for this. </p> <p>Thanks </p>
Ivan Burbano
232,542
<p>By the structure theorem of finite-dimensional <span class="math-container">$C^*$</span>-algebras (see Takesaki, Theory of Operator Algebras, Theorem 11.2), <span class="math-container">$M\cong\bigoplus_{r=1}^N M_{n_r}(\mathbb{C})$</span> for some unique <span class="math-container">$n_1,\dots,n_N\in\mathbb{N}^+$</span>. Let <span class="math-container">$e_{ij}^{(r)}\in M_{n_r}(\mathbb{C})$</span> be the matrix with the <span class="math-container">$ij$</span>-th entry equal to 1 and the rest of the entries vanishing. Then a maximal family of minimal projections is given by <span class="math-container">$e_{ii}^{(r)}$</span> for all <span class="math-container">$r\in\{1,\dots,N\}$</span> and <span class="math-container">$i\in\{1,\dots,n_r\}$</span>. The central projections are of the form <span class="math-container">$\sum_{i=1}^{n_r}e_{ii}^{(r)}$</span> for all <span class="math-container">$r\in\{1,\dots,N\}$</span>.</p>
4,495,044
<p>Edit: There is an answer at the bottom by me explaining what is going on in this post.</p> <p>Define a function <span class="math-container">$f : R \to R$</span> by <span class="math-container">$f(x) = 1$</span> if <span class="math-container">$x = 0$</span> and <span class="math-container">$f(x) = 0$</span> if <span class="math-container">$x \ne 0$</span>. I was attempting to prove that <span class="math-container">$\lim_{x \to 0; x\in R}f(x)$</span> is undefined. The following is my proof.</p> <p>Proof: Suppose that <span class="math-container">$\lim_{x\to 0; x \in R}f(x)=L$</span>. Then for every <span class="math-container">$\varepsilon &gt; 0$</span> there exists a <span class="math-container">$\delta &gt; 0$</span> such that for all those <span class="math-container">$x \in R$</span> for which <span class="math-container">$|x-0|&lt;\delta$</span> we have that <span class="math-container">$|f(x)-L|&lt;\varepsilon$</span>. But <span class="math-container">$|0| &lt; \delta$</span> and by the Archimedean property we know that for <span class="math-container">$\delta &gt; 0$</span> there exists an integer <span class="math-container">$n&gt;0$</span> such that <span class="math-container">$0&lt;|\frac 1 n| &lt; \delta$</span>. This is a contradiction, as <span class="math-container">$f(0) = 1$</span> and <span class="math-container">$f(\frac 1 n)= 0$</span> and both are less than <span class="math-container">$\delta$</span>.</p> <p>Is the proof correct?</p> <p>Edit: Here a limit is defined using adherent points and not limit points. If we were to use limit points, then <span class="math-container">$\lim_{x \to 0; x\in R\setminus \{0\}}f(x)=0$</span>. I have updated the question with correct notation.</p> <p>Edit 2: Most textbooks define limits using limit points. In that case we would have that <span class="math-container">$\lim_{x \to 0}f(x)=\lim_{x \to 0; x\in R\setminus \{0\}}f(x)$</span>. 
We are considering the definition of the limit where limits are defined using adherent points, where it really matters whether we are considering <span class="math-container">$\lim_{x\to 0; x\in R\setminus \{0\}}f(x)$</span> or <span class="math-container">$\lim_{x\to 0; x\in R}f(x)$</span>.</p> <p>Edit 3: This is the definition of convergence of a function at a point in the book Analysis 1 by Terence Tao. Let <span class="math-container">$X$</span> be a subset of <span class="math-container">$R$</span>, let <span class="math-container">$f : X \to R$</span> be a function, let <span class="math-container">$E$</span> be a subset of <span class="math-container">$X$</span>, <span class="math-container">$x_0$</span> be an adherent point of <span class="math-container">$E$</span>, and let L be a real number. We say that f converges to <span class="math-container">$L$</span> at <span class="math-container">$x_0$</span> in <span class="math-container">$E$</span>, and write <span class="math-container">$\lim_{x \to x_0;x\in E} f(x) = L$</span>, iff <span class="math-container">$f$</span>, after restricting to <span class="math-container">$E$</span>, is ε-close to <span class="math-container">$L$</span> near <span class="math-container">$x_0$</span> for every <span class="math-container">$\varepsilon &gt; 0$</span>. If <span class="math-container">$f$</span> does not converge to any number <span class="math-container">$L$</span> at <span class="math-container">$x_0$</span>, we say that <span class="math-container">$f$</span> diverges at <span class="math-container">$x_0$</span>, and leave <span class="math-container">$\lim_{x\to x_0;x\in E} f(x)$</span> undefined. 
In other words, we have <span class="math-container">$\lim_{x\to x_0;x\in E} f(x) = L$</span> iff for every <span class="math-container">$\varepsilon &gt; 0$</span>, there exists a <span class="math-container">$\delta &gt; 0$</span> such that <span class="math-container">$|f(x) - L| \le \varepsilon$</span> for all <span class="math-container">$x \in E$</span> such that <span class="math-container">$|x - x_0| &lt; \delta$</span>.</p> <p>There are two other things he defines that are used in this definition. That of <span class="math-container">$\varepsilon$</span> closeness and local <span class="math-container">$\varepsilon$</span> closeness. For those wanting to read those, <a href="https://lms.umb.sk/pluginfile.php/111477/mod_page/content/5/TerenceTao_Analysis.I.Third.Edition.pdf" rel="nofollow noreferrer">Here is the link</a>. It is on page 221.</p> <p>Edit 4: It might be useless defining limits without limit points. But that is the definition for which I am trying to prove this.</p>
Sourav Ghosh
977,780
<p>Taking the absolute value of a positive number is nothing but wasting time!</p> <p>How do you get a contradiction from <span class="math-container">$0&lt;\frac {1} {n} &lt; \delta$</span> and <span class="math-container">$f(0) =1$</span>?</p> <p><span class="math-container">$1=f(0) &lt;f(\frac{1}{n})=0$</span> is not true unless <span class="math-container">$f$</span> is increasing. In fact <span class="math-container">$f$</span> is decreasing on <span class="math-container">$[0, \infty) $</span>.</p> <p>But by choosing <span class="math-container">$0&lt;\epsilon&lt;1$</span>, you can produce a contradiction. You have to mention that in your proof, as this is the most crucial step.</p> <p>(The way you defined limit)</p> <p>Let <span class="math-container">$\epsilon=\frac{1}{2}$</span>; then <span class="math-container">$\exists \delta&gt;0$</span> such that <span class="math-container">$|x|&lt;\delta \implies |f(x) -L|&lt;\frac{1}{2}$</span></p> <p>For <span class="math-container">$x=0, |1 -L|&lt;\frac{1}{2}\implies L\in (\frac{1}{2}, \frac{3}{2}) $</span></p> <p>For <span class="math-container">$x=\frac{1}{N}$</span> (obtained by the Archimedean property), <span class="math-container">$|L|&lt;\frac{1}{2}$</span>, i.e. <span class="math-container">$L\in (-\frac{1}{2}, \frac{1}{2}) $</span></p> <p>This implies <span class="math-container">$L\in (\frac{1}{2}, \frac{3}{2}) \cap (-\frac{1}{2}, \frac{1}{2}) =\emptyset$</span> (impossible).</p> <p>Such an <span class="math-container">$L\in\Bbb{R}$</span> doesn't exist.</p> <p>But this is not the correct definition of the limit.</p> <p>Def:</p> <p><span class="math-container">$\lim_{x\to x_0}f(x) =L$</span> if <span class="math-container">$\forall\epsilon&gt;0, \exists \delta&gt;0$</span> such that <span class="math-container">$ \forall x\in \Bbb{R} , 0&lt;|x-x_0|&lt;\delta$</span> implies <span class="math-container">$|f(x) -L|&lt;\epsilon$</span></p>
3,703,981
<p>If we consider an equation <span class="math-container">$x=2x^2,$</span> we find that the values of <span class="math-container">$x$</span> that solve this equation are <span class="math-container">$0$</span> and <span class="math-container">$1/2$</span>. Now, if we differentiate this equation on both sides with respect to <span class="math-container">$x,$</span> we get <span class="math-container">$1=4x.$</span> Now, I know that it is wrong to say that the value of <span class="math-container">$x=1/4,$</span> but then, what does <span class="math-container">$x=1/4$</span> signify? Is it related to maxima or minima? Please help me with this.</p>
Misha Lavrov
383,078
<p>Let <span class="math-container">$d = x - y$</span>. Then we want <span class="math-container">$x^3 - (x-d)^3 = 2020$</span>, which is a quadratic equation in <span class="math-container">$x$</span>. The discriminant is <span class="math-container">$24240d - 3d^4 = 3d (8080 - d^3)$</span>, which is nonnegative only for <span class="math-container">$d \in [0, 8080^{1/3}]$</span>, and since <span class="math-container">$d$</span> is an integer it must be between <span class="math-container">$0$</span> and <span class="math-container">$20$</span>.</p> <p>Moreover, since <span class="math-container">$x^3 - y^3 = d(x^2 + xy + y^2) = 2020$</span>, <span class="math-container">$d$</span> must be a divisor of <span class="math-container">$2020$</span>. This leaves only <span class="math-container">$6$</span> possibilities: <span class="math-container">$d = 1, 2, 4, 5, 10, 20$</span>. We already had a finite problem to solve before, but this observation reduces the number of cases.</p> <p>For each value of <span class="math-container">$d$</span>, the quadratic equation has only irrational solutions, so there are no integer solutions <span class="math-container">$(x,y)$</span>.</p>
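<p>The case analysis can be machine-checked (script is mine, not part of the answer): for integer solutions the discriminant would have to be a perfect square, and it never is for any admissible d:</p>

```python
import math

# For each divisor d of 2020 with 1 <= d <= 20, the discriminant
# 24240*d - 3*d**4 of the quadratic in x must be a perfect square
# for integer solutions to exist; it never is.
def is_square(n):
    r = math.isqrt(n)
    return r * r == n

divisors = [d for d in range(1, 21) if 2020 % d == 0]
assert divisors == [1, 2, 4, 5, 10, 20]
assert not any(is_square(24240 * d - 3 * d ** 4) for d in divisors)
```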
2,668,616
<p>I am fairly new at MAPLE and I'm having some trouble solving this ODE. </p> <p>$$(t+1)\frac{dy}{dt}-2(t^2+t)y=\frac{e^{t^2}}{t+1}$$</p> <p>My initial value problem is $$t&gt;-1, y(0)=5$$</p> <p>I put the equation in standard form and typed into maple</p> <p><a href="https://i.stack.imgur.com/rctFl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rctFl.png" alt="What I entered in MAPLE"></a></p> <p>However I am aware that when my initial value is $y(0)=5$ my equation becomes $4=0$ which is not possible. So I am confused on whether or not it is possible to find a general solution. </p>
user577215664
475,762
<p><strong><em>Hint</em></strong> $$(t+1)\frac{dy}{dt}-2(t^2+t)y=\frac{e^{t^2}}{t+1}$$ Since $t &gt;-1$, we have $t \neq -1$, so we may divide by $t+1$: $$y'-2ty=\frac{e^{t^2}}{(t+1)^2}$$ Using $e^{-t^2}$ as integrating factor, $$(ye^{-t^2})'=\frac{1}{(t+1)^2}$$ $$ye^{-t^2}=\int \frac{dt}{(t+1)^2}$$ $$y=e^{t^2}\int \frac{dt}{(t+1)^2}$$ Substituting $u=t+1$, $$y=e^{t^2}\int \frac{du}{u^2}$$ The general solution is $$y=e^{t^2}\left(K-\frac{1}{t+1}\right)$$ $$y(0)=5 \implies K-1=5 \implies K=6$$ The solution with the initial condition is $$\boxed {y(t)=e^{t^2}\left(\frac{6t+5}{t+1}\right)}$$</p>
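The OP asked about Maple, but the closed-form solution can also be sanity-checked numerically. Here is a small Python sketch (my addition, not Maple code) that verifies the initial condition and compares a central-difference derivative of the proposed solution against the right-hand side of the ODE:

```python
import math

def y(t):
    # proposed solution y(t) = e^{t^2} (6t + 5) / (t + 1), valid for t > -1
    return math.exp(t * t) * (6 * t + 5) / (t + 1)

# initial condition y(0) = 5
assert abs(y(0) - 5) < 1e-12

# check the standard form y' - 2 t y = e^{t^2} / (t+1)^2 at a few points
h = 1e-5
for t in (0.5, 1.0, 2.0):
    dy = (y(t + h) - y(t - h)) / (2 * h)        # central difference
    residual = dy - 2 * t * y(t) - math.exp(t * t) / (t + 1) ** 2
    assert abs(residual) < 1e-4, (t, residual)

print("initial condition and ODE verified numerically")
```

The residuals are tiny at every test point, consistent with the boxed solution.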
194,547
<p>I know the definition of a linear transformation, but I am not sure how to turn this word problem into a matrix to solve:</p> <p>$T(x_1, x_2) = (x_1-4x_2, 2x_1+x_2, x_1+2x_2)$</p> <p><strong>Find the image of the line that passes through the origin and point $(1, -1)$.</strong></p>
Hagen von Eitzen
39,174
<p>The line passing through the origin and $(1, -1)$ is the set of points of the form $(t, -t)$ with $t\in\mathbb R$ (I suppose you are implicitly working over the reals). We compute $$T(t,-t)=(t-4(-t),2t+(-t),t+2(-t))=(5t,t,-t).$$ That describes the line in 3D space through the origin and $(5, 1, -1)$ (or equivalently one can use any nonzero scalar multiple of $(5, 1, -1)$ as second point)</p>
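As a sanity check (a Python sketch of my own, using only the map given in the question), we can apply $T$ to a few points $(t, -t)$ and confirm that every image is a scalar multiple of one fixed direction vector:

```python
def T(x1, x2):
    # the map from the question: T(x1, x2) = (x1 - 4*x2, 2*x1 + x2, x1 + 2*x2)
    return (x1 - 4 * x2, 2 * x1 + x2, x1 + 2 * x2)

# images of points (t, -t) on the line through the origin and (1, -1)
ts = (1, 2, -3)
images = [T(t, -t) for t in ts]
print(images)

# each image equals t times the direction vector T(1, -1)
direction = T(1, -1)
assert all(img == tuple(t * d for d in direction) for t, img in zip(ts, images))
print("direction:", direction)
```

So the image is the line through the origin spanned by `T(1, -1)`.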
3,274,172
<p>Let <span class="math-container">$X$</span> be a compact set. Prove that if every connected component is open, then the number of components is finite.</p> <p>OK, <span class="math-container">$X = \bigcup C(x)$</span> where <span class="math-container">$C(x)$</span> is the connected component of <span class="math-container">$x \in X.$</span> I know that if <span class="math-container">$X \subset \bigcup A_\lambda$</span> for a family of open sets <span class="math-container">$A_\lambda$</span>, then compactness lets me extract a finite subfamily that still covers, but how can I conclude that the number of components is finite?</p>
J.-E. Pin
89,374
<p><strong>Hint</strong>. Since <span class="math-container">$X$</span> is compact and since, as you observed, the connected components form an open cover of <span class="math-container">$X$</span>, one can extract from them a finite subcover. On the other hand, the connected components form a partition of <span class="math-container">$X$</span>. Thus...</p>
2,879,883
<p>Suppose that $f$ and $g$ are differentiable functions on $(a,b)$ and suppose that $g'(x)=f'(x)$ for all $x \in (a,b)$. Prove that there is some $c \in \mathbb{R}$ such that $g(x) = f(x)+c$.</p> <p>So far, I started with this:</p> <p>Let $h(x)=f(x)-g(x)$, so $h'(x)=f'(x)-g'(x)=0$. Then the MVT implies $\exists$ $c \in \mathbb{R}$ such that $h'(c) = \frac{h(b)-h(a)}{b-a} =0$. Then $h'(c)=0 \implies h(c)=c$</p> <p>After this I'm not sure where to go, or if this is correct at all; any hints? This is also my first post in LaTeX, so sorry if there are any mistakes!</p>
Titus Moody
583,516
<p>Assume $f-g$ is non-constant. Then there are points $x_1 &lt; x_2$ in $(a,b)$ with $(f-g)(x_1) \neq (f-g)(x_2)$, so by the Mean Value Theorem $f'-g'$ is nonzero at some point between them. But $f'=g'$ by hypothesis. Contradiction. Hence $f-g$ is constant, i.e. $g(x)=f(x)+c$ for some $c \in \mathbb{R}$.</p>
2,736,426
<p>Let's imagine a point in 3D coordinate such that its distance to the origin is <span class="math-container">$1 \text{ unit}$</span>.</p> <p>The coordinates of that point have been given as <span class="math-container">$x = a$</span>, <span class="math-container">$y = b$</span>, and <span class="math-container">$z = c$</span>.</p> <p>How can we calculate the angles made by the vector with each of the axes?</p>
user
505,767
<p>The point's position vector is $OP=(a,b,c)$, which is a unit vector since the distance to the origin is $1$. The angles with the $x,y,z$ axes, whose unit vectors are $e_1=(1,0,0)$, $e_2=(0,1,0)$, $e_3=(0,0,1)$, are given by the dot product:</p> <ul> <li>$\cos \alpha = \frac{OP\cdot e_1}{|OP||e_1|}=OP\cdot e_1=a$</li> <li>$\cos \beta = \frac{OP\cdot e_2}{|OP||e_2|}=OP\cdot e_2=b$</li> <li>$\cos \gamma = \frac{OP\cdot e_3}{|OP||e_3|}=OP\cdot e_3=c$</li> </ul>
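A small Python check of the direction-cosine formulas (my sketch, using only the standard library): for a unit vector the angle with each axis is just the arccosine of the corresponding coordinate, and the three cosines satisfy $\cos^2\alpha + \cos^2\beta + \cos^2\gamma = 1$.

```python
import math

def direction_angles(a, b, c):
    # assumes (a, b, c) is a unit vector, so cos(angle with each axis) = coordinate
    assert abs(a * a + b * b + c * c - 1) < 1e-12
    return math.acos(a), math.acos(b), math.acos(c)

s = 1 / math.sqrt(3)                   # vector equally inclined to all three axes
alpha, beta, gamma = direction_angles(s, s, s)
print(math.degrees(alpha))             # about 54.7356 degrees for each axis

# the direction cosines always satisfy cos^2 a + cos^2 b + cos^2 g = 1
assert abs(math.cos(alpha)**2 + math.cos(beta)**2 + math.cos(gamma)**2 - 1) < 1e-12
```

For a non-unit vector one would divide each coordinate by the vector's length before taking `acos`.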
283,183
<p>I am looking for references that discuss Hecke operators $T_n$ acting on modular forms for the principal congruence subgroup $\Gamma(N)$ of the modular group $SL(2,Z)$ and am happy to restrict to the case that $(n,N)=1$. Most textbooks (Diamond and Shurman, Koblitz etc.) that discuss Hecke operators for congruence subgroups specialize to $\Gamma_0(N)$ or $\Gamma_1(N)$ which contain $T: \tau \rightarrow \tau+1$, but I am specifically interested in the action of Hecke operators on modular forms which are invariant under $T^N$ but not under $T$ and are thus forms for $\Gamma(N)$ but not for $\Gamma_0(N)$ or $\Gamma_1(N)$. It seems reasonably clear how to obtain the answer using the double coset formulation of Hecke operators, but I have not found a reference which writes this out explicitly in terms of the relation between the coefficients of the Fourier expansion of the form at the cusp at infinity and the coefficients of its image under $T_n$. </p>
David Loeffler
2,481
<p>The reason why Hecke theory for $\Gamma(N)$ doesn't get much treatment in the literature is that you can easily reduce it to the $\Gamma_1(N)$ case. More precisely, you can conjugate $\Gamma(N)$ by $\begin{pmatrix} N &amp; 0 \\ 0 &amp; 1\end{pmatrix}$ to get a group intermediate between $\Gamma_0(N^2)$ and $\Gamma_1(N^2)$.</p> <p>This has come up before (in the context of explicit calculations): see <a href="https://mathoverflow.net/questions/185601/cusps-forms-for-gamma-n">this question</a>.</p>
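One direction of this conjugation can be sanity-checked with a toy matrix computation (a Python sketch of my own, with $N=3$ and one sample $\gamma \in \Gamma(3)$): conjugating sends $\gamma$ to an integer matrix whose lower-left entry is divisible by $N^2$ and whose diagonal is $\equiv 1 \bmod N$.

```python
from fractions import Fraction

def matmul(A, B):
    # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

N = 3
# a sample element of Gamma(3): det = 1, diagonal = 1 mod 3, off-diagonal = 0 mod 3
gamma = [[1, 3], [3, 10]]
assert gamma[0][0] * gamma[1][1] - gamma[0][1] * gamma[1][0] == 1

M     = [[Fraction(N), Fraction(0)], [Fraction(0), Fraction(1)]]
M_inv = [[Fraction(1, N), Fraction(0)], [Fraction(0), Fraction(1)]]

# conjugate: M^{-1} gamma M
conj = matmul(M_inv, matmul(gamma, M))
assert all(x.denominator == 1 for row in conj for x in row)   # integral
conj_int = [[int(x) for x in row] for row in conj]
print(conj_int)

# lower-left entry divisible by N^2, diagonal entries = 1 mod N
assert conj_int[1][0] % (N * N) == 0
assert conj_int[0][0] % N == 1 and conj_int[1][1] % N == 1
```

This illustrates why the conjugated group sits between $\Gamma_0(N^2)$ and $\Gamma_1(N^2)$; of course one matrix is only a spot check, not a proof.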
283,183
<p>I am looking for references that discuss Hecke operators $T_n$ acting on modular forms for the principal congruence subgroup $\Gamma(N)$ of the modular group $SL(2,Z)$ and am happy to restrict to the case that $(n,N)=1$. Most textbooks (Diamond and Shurman, Koblitz etc.) that discuss Hecke operators for congruence subgroups specialize to $\Gamma_0(N)$ or $\Gamma_1(N)$ which contain $T: \tau \rightarrow \tau+1$, but I am specifically interested in the action of Hecke operators on modular forms which are invariant under $T^N$ but not under $T$ and are thus forms for $\Gamma(N)$ but not for $\Gamma_0(N)$ or $\Gamma_1(N)$. It seems reasonably clear how to obtain the answer using the double coset formulation of Hecke operators, but I have not found a reference which writes this out explicitly in terms of the relation between the coefficients of the Fourier expansion of the form at the cusp at infinity and the coefficients of its image under $T_n$. </p>
François Brunault
6,506
<p>The Hecke operators $T(n)$ and the dual Hecke operators $T'(n)$ acting as correspondences on the modular curve $Y(N)$ are defined by Kato in <em>$p$-adic Hodge theory and values of zeta functions of modular forms</em>, section 2.9 (in Kato's notation $Y(N)=Y(N,N)$). The action of $T(p)$ on Fourier expansions is given in section 4.9, where he also describes the relation between his definition and other definitions in the literature.</p> <p>Actually, when you conjugate the double coset $\Gamma(N) \begin{pmatrix} n &amp; 0 \\ 0 &amp; 1 \end{pmatrix} \Gamma(N)$ in $\mathrm{GL}_2(\mathbf{Q})$ by the matrix $\begin{pmatrix} N &amp; 0 \\ 0 &amp; 1 \end{pmatrix}$, you get the double coset $\Gamma \begin{pmatrix} n &amp; 0 \\ 0 &amp; 1 \end{pmatrix} \Gamma$, where $\Gamma$ is this subgroup intermediate between $\Gamma_1(N^2)$ and $\Gamma_0(N^2)$. So it should be a simple exercise to check that Kato's definition agrees with David's answer.</p>
1,146,824
<p>Russell's Paradox, showing that $X=\{x\mid x\notin x\}$ can't exist, is not very hard: if $X \in X$, then $X \notin X$ by definition; in the other case, $X \notin X$, then $X \in X$ by definition. Both cases are impossible.</p> <p>But what about the class of everything, $X=\{x\mid x=x\}$? Presumably $X \in X$ causes the problem, but I don't see why the violation of the axiom of foundation by a proper class is a problem.</p>
Mauro ALLEGRANZA
108,274
<p>Because in the presence of the <a href="http://en.wikipedia.org/wiki/Axiom_schema_of_specification">Axiom of Separation</a> (or <em>Axiom of Specification</em>), if the "universal set" $V = \{ x \mid x=x \}$ exists, we can form:</p> <blockquote> <p>$R = \{ x \mid x \in V \land x \notin x \}$</p> </blockquote> <p>and $R$ is the "illegal" Russell's set.</p>
1,015,498
<p>I am merely looking for the result of the convolution of a function and a delta function. I know there is some sort of identity but I can't seem to find it. </p> <p>$\int_{-\infty}^{\infty} f(u-x)\delta(u-a)du=?$</p>
Kevin Arlin
31,228
<p>The delta "function" is the multiplicative identity of the convolution algebra. That is, $$\int f(\tau)\delta(t-\tau)d\tau=\int f(t-\tau)\delta(\tau)d\tau=f(t)$$ This is essentially the definition of $\delta$: the distribution with integral $1$ supported only at $0$.</p>
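A numerical illustration of this identity (my own Python sketch, not part of the answer): approximating $\delta$ by a narrow Gaussian of width $\varepsilon$ and convolving with $f(t)=\sin t$ reproduces $f(t)$ up to an $O(\varepsilon^2)$ error.

```python
import math

def delta_eps(x, eps):
    # narrow Gaussian approximating the Dirac delta (unit integral, width eps)
    return math.exp(-x * x / (2 * eps * eps)) / (eps * math.sqrt(2 * math.pi))

def convolve_at(f, t, eps=0.02, half_width=0.5, n=2000):
    # trapezoidal rule for the integral of f(tau) * delta_eps(t - tau) over [t-hw, t+hw]
    h = 2 * half_width / n
    total = 0.0
    for k in range(n + 1):
        tau = t - half_width + k * h
        w = 0.5 if k in (0, n) else 1.0
        total += w * f(tau) * delta_eps(t - tau, eps)
    return total * h

t = 0.7
approx = convolve_at(math.sin, t)
print(approx, math.sin(t))   # the two values agree to about 1e-4
```

Shrinking `eps` (while keeping the grid fine enough) drives the convolution toward $f(t)$ exactly, which is the sense in which $\delta$ is the identity of the convolution algebra.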
3,013,355
<p>I have been asked to prove that </p> <p>(<span class="math-container">$a \to $</span>b) <span class="math-container">$\vee$</span> (<span class="math-container">$a \to $</span>c) = <span class="math-container">$a \to ($</span>b <span class="math-container">$\vee$</span> c).</p> <p>I believe it is just the simple case of using the distributive law:</p> <p><span class="math-container">$a \wedge ($</span>b <span class="math-container">$\vee$</span> c)= (a <span class="math-container">$\wedge c) \vee ($</span>a <span class="math-container">$\wedge$</span> b).</p> <p>But I am not sure.</p>
Arrow
223,002
<p>Suppose <span class="math-container">$a,b,c,d\in R$</span> are powers of prime elements satisfying <span class="math-container">$ab=cd$</span>. Let us write <span class="math-container">$a\sim_R c$</span> when <span class="math-container">$a,c$</span> are associates in <span class="math-container">$R$</span>. Failure of <span class="math-container">$R$</span> to be a UFD means <span class="math-container">$$\neg (a\sim c\;\vee\;a\sim d).$$</span></p> <p>A possible cause for this is that the disjunction holds <em>locally</em> but <em>not globally</em>. In other words, there may be some open cover <span class="math-container">$(U_i)_{i\in I}$</span> of <span class="math-container">$\operatorname{Spec}R$</span> such that <span class="math-container">$$\forall i\in I\;(a\sim_{U_i} c\;\vee\;a\sim_{U_i} d),$$</span> but such that exist <span class="math-container">$i\neq j$</span> such that only <span class="math-container">$a\sim_{U_i} c$</span> while only <span class="math-container">$a\sim_{U_j} d$</span>.</p> <p>(This already hints at some involvement of (co)homology as an obstruction to globalizing.)</p> <p>Let us suppose indeed that only <span class="math-container">$a\sim_{U_i} c$</span> and only <span class="math-container">$a\sim_{U_j} d$</span>. By further localizing each case, say at <span class="math-container">$c,d$</span> respectively, the associate condition becomes <span class="math-container">$$a|_{D_c}\in R_c^\times, \; a|_{D_d}\in R_d^\times. $$</span></p> <p>Thus <span class="math-container">$a\in R$</span> is invertible on each of the principal opens <span class="math-container">$D_c,D_d$</span>, and yet it is not invertible on their union. This means the shape of the scheme <span class="math-container">$(\operatorname{Spec}R,R)$</span> is in some sense complicated.</p> <p>This seems related to homology, although the <span class="math-container">$a\in R$</span> <em>is</em> a global function. What doesn't globalize is local invertibility. 
I don't see a naive connection to line bundles yet either.</p> <p>In the example of the circle <span class="math-container">$R=\frac{\mathbb R[x,y]}{ \left\langle x ^2+y^2-1 \right\rangle }$</span>, we have the open cover <span class="math-container">$D_{1-y},D_{1+y}$</span> by two arcs-minus-a-point. On each of them, the regular function <span class="math-container">$x\in R$</span> is invertible: on <span class="math-container">$D_{1-y}$</span> we have that <span class="math-container">$\frac{x}{1-y}$</span> is a unit and on <span class="math-container">$D_{1+y}$</span> we have that <span class="math-container">$\frac{x}{1+y}$</span> is a unit. However, <span class="math-container">$x\notin R^\times$</span>. Indeed every coset in <span class="math-container">$R$</span> has a unique representative of the form <span class="math-container">$f_1+yf_2$</span> for <span class="math-container">$f_1,f_2\in \mathbb R[x]$</span> and such representatives can be used to calculate <span class="math-container">$R^\times$</span> and infer <span class="math-container">$x\notin R^\times $</span> (I think).</p>